Data Privacy in Machine Learning
Primary Point of Contact
Program Co-Directors
Program Description
This program aims to develop algorithms that ensure individual privacy without unduly reducing model utility. As machine learning comes to rely on increasingly large, sensitive datasets, and as regulatory scrutiny of data handling and public concern over data rights grow across the EU, robust privacy guarantees are essential. The program addresses this need and seeks to position Denmark as a hub for privacy-preserving ML in Europe.
The program is organized around two complementary themes. The first develops privacy-preserving learning algorithms based on differential privacy (DP) [Dwork et al., 2006] and secure multi-party computation (MPC) [Yao, 1982], with the goal of providing formal privacy guarantees while maintaining model utility. The second focuses on data control and removal methods, such as machine unlearning [Cao and Yang, 2015], which let contributors withdraw or modify their data without the prohibitive cost of retraining models from scratch.
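As a concrete reference point for the first theme: a randomized mechanism M is ε-differentially private if, for any two datasets D and D′ differing in a single record and any set of outputs S, Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S] [Dwork et al., 2006]. The minimal Python sketch below illustrates the classic Laplace mechanism on a counting query; the function name, parameters, and toy data are illustrative choices, not drawn from any cited work.

import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon):
    # Release true_value with epsilon-DP by adding noise drawn from
    # Lap(sensitivity / epsilon) [Dwork et al., 2006].
    return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Toy counting query: how many people in the dataset are 40 or older?
# Adding or removing one person changes the count by at most 1, so sensitivity = 1.
ages = np.array([34, 29, 51, 42, 38, 47])
true_count = int(np.sum(ages >= 40))
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true count: {true_count}, private release: {private_count:.2f}")

Smaller ε means stronger privacy but noisier answers; private training methods such as DP-SGD build on the same principle of adding calibrated noise.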
The vision is to create a cohesive research community bridging theoretical foundations, system implementation, and empirical evaluation in privacy-preserving machine learning.
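To make the second theme's data-removal goal concrete, here is a toy Python sketch in the spirit of the summation-based formulation of Cao and Yang [2015]; the class and method names are our own, and this is an illustration of the idea rather than the cited system. A model whose state is a set of per-class sums and counts can unlearn a record by subtracting its contribution exactly, with no retraining.

import numpy as np

class CentroidClassifier:
    # Toy nearest-centroid model kept in summation form: the learned state is
    # per-class feature sums and counts, so each record's contribution can be
    # added or removed exactly.
    def __init__(self, n_classes, dim):
        self.sums = np.zeros((n_classes, dim))
        self.counts = np.zeros(n_classes)

    def learn(self, x, y):
        self.sums[y] += x
        self.counts[y] += 1

    def unlearn(self, x, y):
        # Exact removal of one record in O(dim) time, no retraining needed.
        self.sums[y] -= x
        self.counts[y] -= 1

    def predict(self, x):
        centroids = self.sums / np.maximum(self.counts, 1)[:, None]
        return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

Models trained by gradient descent do not decompose this cleanly, which is exactly why efficient unlearning for modern ML remains an open research problem.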
People
Amartya Sanyal
University of Copenhagen, Max Planck Institute for Intelligent Systems, European Laboratory for Learning and Intelligent Systems (ELLIS)
Boel Nelson
University of Copenhagen
Carolin Christin Heinzler
University of Copenhagen
Chris Schwiegelshohn
Aarhus University
Claudio Orlandi
Aarhus University
Daniele Dell'Aglio
Aalborg University
Hannah Keller
Aarhus University
Johanna Düngler
University of Copenhagen
Lukas Retschmeier
University of Copenhagen
Martin Aumüller
IT University of Copenhagen
Nirupam Gupta
University of Copenhagen
Quentin Emmanuel Hillebrand
University of Tokyo, University of Copenhagen
Rasmus Pagh
University of Copenhagen
Sia Susanne Sejer
University of Copenhagen
Teresa Anna Steiner
University of Southern Denmark