Collaboratory

Causality and explainability

  • 03 Team members
  • 01 News items
  • 02 Upcoming events

Explainable AI (XAI) is crucial to the continued deployment of AI solutions in critical societal infrastructure such as healthcare, finance and political debate: it lets us monitor how AI systems function, and it helps ensure and justify society's trust in those systems. Many relevant systems change between training and deployment; causal methods can help model such changes and quantify the resulting uncertainty. Core technical challenges within causality and explainability include interpretability, fairness, uncertainty quantification, model communication, and distributional shift.

Through mathematical modelling of causal representations, explainability and fairness, and through extensive interdisciplinary work spanning law and philosophy, this collaboratory will make foundational contributions to the centre's basic research areas:

Explainability: Analyze causal models and explore the fundamental limits of counterfactual reasoning with machine learning models; understand the role of agency and intervention in deep learning systems. Progress in explainability will enable a genuinely novel form of interactive AI.
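The counterfactual reasoning mentioned above typically follows the abduction–action–prediction recipe over a structural causal model. A minimal sketch on a toy two-variable model (all variables and coefficients here are hypothetical, purely for illustration, not the collaboratory's own model):

```python
# Toy structural causal model (hypothetical, illustration only):
#   X := U_x
#   Y := 2*X + U_y
u_x, u_y = 1.0, 0.5    # exogenous noise terms
x = u_x                # observed cause
y = 2 * x + u_y        # observed effect: 2.5

# Counterfactual query: what would Y have been had X been 0?
# 1) Abduction: recover the noise consistent with the observation.
u_y_hat = y - 2 * x    # 0.5
# 2) Action: intervene, do(X = 0), overriding X's own mechanism.
x_cf = 0.0
# 3) Prediction: recompute Y under the intervention, keeping the noise.
y_cf = 2 * x_cf + u_y_hat
print(y_cf)  # 0.5
```

The "fundamental limits" in the text arise because step 1 requires knowing the structural equations; with a black-box machine learning model, the noise terms are generally not identifiable from data alone.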

Fair AI: AI systems will never have sufficient training data to have seen all possible examples, so generalization is key, and it can be achieved only by introducing inductive biases. Address the interplay between inductive biases and biases in the data. A fundamental, yet unsolved, question is: how can we achieve fair generalization in AI?
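The interplay between inductive bias and data bias can be sketched with synthetic data: a model whose inductive bias is "use only the score feature" still inherits the bias baked into that feature. The group labels, threshold, and demographic-parity metric below are illustrative assumptions, not the collaboratory's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, deliberately biased data: the score feature is shifted
# upward for group 1, so the training signal itself is skewed.
n = 10_000
group = rng.integers(0, 2, n)                 # sensitive attribute (0 or 1)
score = rng.normal(loc=group * 1.0, size=n)   # group 1 scores higher on average

# Simple threshold classifier; its inductive bias is to rely on score alone,
# which looks "fair" because the sensitive attribute is never used directly.
pred = score > 0.5

# Demographic parity gap: difference in positive-prediction rates.
rate0 = pred[group == 0].mean()
rate1 = pred[group == 1].mean()
gap = rate1 - rate0
print(round(gap, 2))  # a substantial positive gap: the data bias passes through
```

Excluding the sensitive attribute is itself an inductive bias, and here it fails to deliver fair generalization, which is exactly the interplay the question above targets.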