P1 Projects

YODA (Yearning to Operationalize Democratic AI)

About YODA

Democracy builds on trust, transparency, and representation: power is vested in the people. As AI tools are increasingly used to simplify or enhance our lives as citizens, we must ensure that they do not undermine our democracies. Conversely, democratizing AI could help sustain them. One part of democratizing AI is ensuring the factors of trust in AI systems; the EU's AI Act identifies the following factors of a trustworthy AI system: human agency and oversight, transparency, non-discrimination, fairness, privacy, and robustness. Other parts of democratizing AI are decentralization and representation: a democratized process seeks to ensure that power, access, and the right to participate are shared broadly among the people and the major stakeholders of society.

Our collaborative initiative YODA – Yearning to Operationalize Democratic AI – aims to achieve more democratized AI use, development, and governance. Our goal is to establish a consortium dedicated to developing, through fundamental research in AI and its evaluation, tools and methods that can be used effectively in practice. We will collaborate with researchers from other relevant fields, such as law, participatory design, and healthcare, as well as with industrial partners, to ensure that our work addresses real needs. The idea is to create examples of transparent and trustworthy transfers of AI knowledge and technology.


Objectives and Research Directions 

We see ourselves doing research within AI alignment, trustworthy AI, low-resource AI, and democratic AI, always with a focus on the actionable (operationalizable) fundamentals of AI.

  • Trustworthy AI - As automated systems increasingly influence critical decisions in healthcare, finance, and public services, the EU's AI Act has responded by establishing trustworthiness as the fundamental principle guiding AI policy and regulation. This concept spans several dimensions: explainability, fairness, robustness, privacy, and accountability. However, the technical definitions and metrics used by the research and innovation community do not always align with the operational needs of implementing these principles. 
  • Transparent AI - Defined by dimensions such as explainability, interpretability, and accountability, which involve research in explainable AI as well as in the design and metrics of AI evaluation. Achieving transparency also involves how we document, communicate, and share technology and technology assessments: moving from regulative control to self-control, where the 'deployer' of a technology (including industry) demonstrates its accountability and adherence to regulations, values, etc. Towards reproducibility, we will open-source code and data (where possible) and openly share documentation, evaluations, etc. Examples of frameworks supporting this are datasheets for datasets and model cards for AI models. 
  • Alignment of AI - A value-based approach to creating more trustworthy AI, stemming from the alignment problem described by Brian Christian: the issue of aligning AI with human values. However, such values may differ greatly across cultural contexts, so there is probably no one-AI-fits-all. Through this work we will develop frameworks to enable AI value alignment for applications such as healthcare and education in specific cultural contexts. We are further interested in the commonalities and discrepancies between this value-based design approach and other design approaches. 
  • Democratic AI - At the core of democratic AI is the objective of distributing the power of AI more broadly. This may be achieved through dimensions such as transparency, decentralization of power (in this context, e.g., the development of technology, data, and information/education), participatory governance (and design), accessibility and affordability, and data sovereignty and privacy. Democratic AI thus involves a collaborative and more decentralized approach, which can support more diverse AI solutions that address the specific needs and contexts of diverse communities and cultures. We also refer to this aspect as Representative AI. We therefore believe in this as a collaborative consortium, engaging with many stakeholders and contributors. 
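To make the transparency frameworks mentioned above concrete, the following is a minimal sketch of what a machine-readable model card record might look like. All field names and example values here are hypothetical illustrations, not a standard schema or an artifact of the YODA project itself.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """Minimal, hypothetical model-card record, inspired by the
    'model cards for AI models' framework referenced above."""
    model_name: str
    intended_use: str        # what the model should (and should not) be used for
    training_data: str       # provenance of the data, supporting accountability
    evaluation_metrics: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)

    def to_dict(self) -> dict:
        # Serializable form, suitable for publishing alongside open-sourced code
        return asdict(self)

# Example: documenting a fictional research prototype
card = ModelCard(
    model_name="triage-classifier-v0",
    intended_use="Research prototype; not for clinical deployment",
    training_data="Synthetic patient records (no real patient data)",
    evaluation_metrics={"accuracy": 0.87, "f1_macro": 0.82},
    known_limitations=["Evaluated only on Danish-language records"],
)
print(card.to_dict())
```

A structured record like this can be versioned and released together with code and evaluations, which is one simple way the self-documentation and reproducibility goals above could be operationalized.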

Sneha Das, Ahcène Boubekki & Line Clemmensen