YODA Workshop: Operationalizing the EU AI Act

From Policy Debate to Technical Practice
The workshop is organised by the YODA (Yearning to Operationalize Democratic AI) collaborative consortium, a P1 initiative dedicated to bridging the gap between academic research and corporate implementation. Our work spans trustworthy AI, AI alignment, and democratic AI - always with a focus on the actionable: developing research-backed tools, metrics, and frameworks that practitioners can actually use. We bring together researchers from machine learning, law, and participatory design, alongside industrial partners, to ensure our work addresses real-world needs.
This workshop is part of that mission - creating a space for transparent, peer-driven knowledge transfer between academia, industry, and the people responsible for overseeing AI in practice.
In 2026, the most pressing questions are no longer about policy interpretation, but about the specific engineering and governance methods required for verifiable compliance. We yearn for research-backed tools and methods to ensure AI remains transparent, democratic, and trustworthy across the sectors that matter most.
Program
A focused, half-day intensive session facilitating direct exchange of actionable case studies, implementation progress reports, and real-world compliance challenges.
11.00-12.00 Welcome & Introduction: A framing talk on the current status of the EU AI Act at European and Danish levels, featuring insights from national oversight authorities.
13.00-15.30 Thematic Breakout Sessions: Join one of three dedicated tracks for deep-dive peer exchange on specific compliance challenges.
16.00-16.30 Synthesis & Next Steps: Wrap-up of breakout findings, discussion on the launch of the long-term practitioner network, and final networking.
Three Themes of Discussion
- Design & Governance: What frameworks and documentation standards have you implemented for the AI Act's ethical pillars? How do you navigate regulatory uncertainty with national bodies or AI sandboxes? What process checks ensure Trustworthiness, Fairness, Robustness, and Explainability?
- Tools & Validation: What tools, models, and metrics are you using in practice? How do you quantitatively measure and validate modeling choices? What key technical or validation roadblocks have you encountered? What support from research institutions would help overcome challenges?
- Risk & Safety: How is risk management integrated into your development lifecycle? How do you ensure training data is high-quality and representative? How do you protect systems from data poisoning and adversarial threats?
Who Should Join?
We are looking for experts at the implementation frontier and professionals whose work centres on the practical validation and rigorous oversight of high-risk or medium-risk AI systems.
- ML Engineers & Data Scientists
- AI Architects & Data Governance Leads
- Heads of AI Strategy
- Compliance Experts
- Technical Risk Managers & AI Audit Specialists
- NGO & Civil Society Representatives
Target Sectors
High- and medium-risk industries within the focus areas:
- Health & Biotech
- Finance
- Public Administration
- Legal & Justice
- Critical Infrastructure
- Education
The expected outcomes of this workshop are an anonymized report of validated technical and regulatory insights, the launch of a long-term practitioner network, and ongoing collaboration between research, industry, and regulatory bodies.
Find more information about the workshop and the organisers here or via the sign-up button below.