Learning theory and optimisation
Machine learning and AI have evolved from a mainly academic discipline to widespread practical use in just a few years. Driven by access to large data sets, greater computing power, flexible statistical deep learning methodology, and the open sharing of ideas and software, industrial and academic research efforts have grown by orders of magnitude and become multidisciplinary.
Based on statistical and computational modeling of learning systems and large-scale experimental methods, this collaboratory will make foundational contributions to all four of the Centre's basic research lines:
Explainability: Develop large-scale inference methods with human-interpretable explanations. Design new deep causal models that allow counterfactual explanations for informed intervention planning.
Self-supervised learning: Develop mathematical models of self-supervised learning and generalization. Provide general understanding of the limitations of contrastive learning.
Novelty detection: Propose new and universal schemes for teaching computers to discover unknown patterns and anomalies with well-calibrated quantification of uncertainty.
Fair AI: Develop mathematical models of fairness. Explore the fundamental question: How may we introduce inductive biases for optimal generalization in human data without discrimination?
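The contrastive learning mentioned under the self-supervised learning line can be illustrated with a minimal sketch of the widely used InfoNCE objective, which pulls two views of the same sample together and pushes all other pairs apart. This is a generic NumPy illustration of the standard loss, not the Centre's specific methodology; the function name and toy data are for exposition only.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE contrastive loss between two batches of embeddings.

    z1, z2: (n, d) arrays of L2-normalised embeddings, where row i of z1
    and row i of z2 are two views of the same sample (a positive pair).
    """
    n = z1.shape[0]
    logits = z1 @ z2.T / temperature             # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positive pairs sit on the diagonal of the similarity matrix
    return -log_probs[np.arange(n), np.arange(n)].mean()

# Toy check: correctly matched views yield a lower loss than mismatched ones.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)
aligned = info_nce_loss(z, z)
shuffled = info_nce_loss(z, z[::-1])
print(aligned, shuffled)
```

The "limitations of contrastive learning" the research line targets show up even in this sketch: the loss depends heavily on the temperature and on which pairs are treated as negatives, which is one reason a general mathematical understanding of such objectives is sought.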