Causality and explainability
Explainable AI (XAI) is crucial to the continued deployment of AI solutions in critical societal infrastructure such as healthcare, finance, and political debate. Explainability is particularly important for monitoring how AI systems function, and for ensuring and justifying society's trust in AI solutions. Many relevant systems are subject to distributional changes between training and deployment; here, causal methods may help to better model such changes and to quantify uncertainty. Core technical challenges within Causality and Explainability include interpretability, fairness, uncertainty quantification, model communication, and distributional shift.
Through mathematical modeling of causal representations, explainability, and fairness, and through extensive interdisciplinary work spanning law and philosophy, this collaboratory will make foundational contributions to the centre's basic research areas:
Explainability: Analyze causal models and explore the fundamental limits of counterfactual reasoning with machine learning models. Understand the role of agency and intervention in deep learning systems. Progress in explainability will enable entirely new forms of interactive AI.
Fair AI: AI will never have sufficient training data to have seen all possible examples, so generalization is key, yet it can be achieved only by introducing inductive biases. Address the interplay between inductive biases and biases in data. A fundamental yet unsolved question is: how can we achieve fair generalization in AI?
Acclaimed researchers appointed as co-leads at P1
On 1 March 2023, Isabelle Augenstein and Sebastian Weichwald were appointed as co-leads of the Pioneer Centre for Artificial Intelligence (P1). As co-leads, they will help set the direction for the next ground-breaking research to be conducted within the centre.