Looking at Explanations of AI in Context


Text: Bielefeld University

How can explanations of artificial intelligence systems be made comprehensible, and what role does context play? These questions are the focus of the 3rd TRR 318 conference, “Contextualizing Explanations,” which will take place in Bielefeld on June 17 and 18. The conference is organized by the Transregional Collaborative Research Center (TRR) 318 of Bielefeld and Paderborn Universities. International scientists will present their current research and approaches in the field of explainable artificial intelligence, and renowned keynote speakers will provide further scientific input.

Artificial intelligence (AI) systems are increasingly used in sensitive areas where wrong decisions can have serious consequences – for example, in medicine or finance. Making AI systems transparent enables effective oversight, allowing users to question AI-based decisions. “For users to be able to act autonomously, explanations of AI decisions must be relevant and provide sufficient information,” says Professor Philipp Cimiano, deputy speaker of TRR 318. “Since no single explanation meets all requirements, TRR 318 takes the approach of giving users the opportunity to actively shape AI explanations and control them according to their needs.”

The team behind the conference: Professor Philipp Cimiano, Professor Anna-Lisa Vollmer, and Professor Benjamin Paaßen (from left).

At the 3rd TRR 318 conference, researchers bring context into the explanation process. “Context is dynamic,” explains co-organizer Professor Anna-Lisa Vollmer. “Initially, it may be relevant to the explanation that a person has a different linguistic and cultural background. However, this can quickly become less important if, for example, the person has little time and the explanation has to be shorter. The space in which the explanation is given may also sometimes be relevant.”

Professor Benjamin Paaßen adds: “By contextualizing AI explanations, users gain more understanding of and control over the processes. The presentations at the TRR 318 conference illustrate how diverse context can be and what impact it has on users.”

Invited speakers

The conference will begin on Tuesday with a keynote talk by Dr. Kacper Sokol, a researcher in the Medical Data Science group at ETH Zurich. In his talk, he will draw on a wide range of relevant interdisciplinary findings and propose data-driven ways to support human decision-making. The second keynote will be given by Professor Virginia Dignum from Umeå University via video connection. She will present the current EU debates on AI regulation and discuss how to ensure that technological progress goes hand in hand with ethical and legal responsibility.

On the second day of the conference, attendees can look forward to the keynote by Professor Angelo Cangelosi from the University of Manchester. His talk is titled “The Importance of Starting Small with Baby Robots.” Cangelosi will present examples of how robots can learn language and discuss essential principles such as “starting small,” i.e., beginning with simple concepts and tasks. He will also discuss the pros and cons of foundation models in robotics, as well as issues related to explainable AI (XAI) and trust.

The conference will conclude with a panel discussion in which members of other research networks will discuss different explanatory contexts.

How can AI explanations be made understandable while taking context into account? This question will be discussed at the TRR 318 conference.

Further Information and Registration

Website of TRR 318

Website of the conference “Contextualizing Explanations”

Registration for the conference (until June 1)
