P1 Programs

Trustworthy Artificial Intelligence (TrAI)

Primary Point of Contact

Program Co-Directors

Program Description

Trustworthiness is fundamental to AI as automated systems increasingly shape critical decisions in healthcare, finance, and public services, and it is now the overarching guiding principle for AI policy and regulation. However, ‘trustworthiness’ encompasses multiple dimensions, including explainability, fairness, robustness, privacy, and accountability, with little agreement within the research and innovation community on the technical definition of these dimensions or on how to operationalize them as metrics.

Our Trustworthy AI program creates a collaborative platform bringing together ML researchers, ethicists, and clinical and industry practitioners to explore diverse perspectives on trustworthy AI and identify standard definitions and practices.

TrAI serves three interconnected purposes:

  1. Knowledge sharing: Participants will present research across the facets of trustworthy AI, creating a living repository of Danish and Nordic expertise. The aim is to develop a shared definition of ‘trustworthy AI’ across domains, to be published as a report or white paper.
  2. Networking: The program aims to establish a platform connecting Danish researchers, Nordic partners, and industry practitioners, fostering collaborations that extend beyond the program’s duration.
  3. Project and funding development: In a second stage, TrAI aims to build interdisciplinary Nordic consortia targeting EU Horizon Europe and Innovation Fund Denmark calls, leveraging the program’s diverse expertise.

The TrAI program will address fundamental challenges across the trustworthiness dimensions, working across disciplines to connect academia, clinics, industry, and Nordic partners. It will explore questions such as: How do trustworthiness requirements vary across domains? Can we develop unified trustworthiness metrics? What are the trade-offs between trust dimensions?