Event

Last Fridays Talks: Speech & Language

Location: Classroom at the Pioneer Centre for AI, Øster Voldgade 3, 1350 København K

Date: 25 Apr 2025, 14:00–15:00

Organizer: The Collaboratory of Speech & Language

On the last Friday of each month, P1 hosts the Last Fridays Talks, where one Collaboratory presents insights from its current work. Join us for a discussion of the results and for socializing afterwards!

 

Talk 1 

Revealing Political Opinions in Large Language Models

 

Abstract

Large language models are biased, but how do we know in what ways? For political bias, the dominant approach is to prompt models to generate stances towards different political propositions and aggregate these stances into a final score. However, the stances generated by LLMs can vary greatly depending on how they are prompted, and this approach ignores the plain-text arguments, which reveal more fine-grained values and opinions. In this talk, I will present how we have addressed this by analyzing 156k LLM responses to 62 political propositions using 420 prompt variations. Based on this analysis, we propose to identify tropes: phrases that are repeated across many prompts, revealing patterns in the text that a given LLM is prone to produce. Finally, I will discuss ongoing and future work towards quantifying LLM political bias through generated plain-text arguments, contrasting this with the existing approach, which relies on closed-form survey responses.
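The trope-finding idea in the abstract can be sketched as a simple n-gram count across prompt variants. The function name, the toy responses, and the thresholds below are illustrative assumptions for this sketch, not the talk's actual method or data:

```python
from collections import defaultdict

def find_tropes(responses_by_prompt, n=5, min_prompts=3):
    """Identify 'tropes': word n-grams that recur in responses generated
    under many *different* prompt variations of the same proposition.

    responses_by_prompt: dict mapping a prompt-variant id to the LLM's
    plain-text response (hypothetical toy input).
    Returns {ngram: number of distinct prompts it appeared under}.
    """
    ngram_prompts = defaultdict(set)
    for prompt, response in responses_by_prompt.items():
        tokens = response.lower().split()
        for i in range(len(tokens) - n + 1):
            ngram = " ".join(tokens[i:i + n])
            ngram_prompts[ngram].add(prompt)
    # Keep only phrases repeated across at least `min_prompts` prompts.
    return {g: len(p) for g, p in ngram_prompts.items() if len(p) >= min_prompts}

responses = {
    "prompt_a": "I believe universal healthcare is a fundamental right for all",
    "prompt_b": "In my view universal healthcare is a fundamental right indeed",
    "prompt_c": "Many argue universal healthcare is a fundamental right today",
}
tropes = find_tropes(responses, n=5, min_prompts=3)
# "universal healthcare is a fundamental" recurs under all three prompts,
# so it surfaces as a trope; prompt-specific phrasing does not.
```

Counting distinct prompts (rather than raw occurrences) is what makes a trope a prompt-robust pattern of the model, not an artifact of one prompt wording.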

 

Speaker

Dustin Wright 

 

Bio

Dustin is a Danish Data Science Academy postdoctoral fellow at the University of Copenhagen, focused on natural language processing, including the factuality, transparency, and efficiency of NLP systems. He is deeply interested in applying AI to information integrity and reliability, both in society (e.g., misinformation) and in NLP systems themselves (e.g., alignment). Previously, he was a visitor to the BlaBlaBlab at the University of Michigan, and he received his PhD from the University of Copenhagen for work on automatic scientific fact checking.

 

Talk 2

Leveraging Consistencies across the Language Model Training Pipeline

 

Abstract

With new language models being released at an accelerating pace, identifying consistent patterns in their learning dynamics presents an opportunity to improve training efficiency. We will examine a few of these consistencies throughout the LM training pipeline: first, how training data composition affects abstract downstream behaviors such as gender bias and cultural adaptation; second, how stable the learning dynamics of fundamental linguistic information are across different LM training runs; and finally, practical applications of these consistencies in scenarios where collecting sufficient training data is physically impossible.

 

Speaker

Max Müller-Eberstein

 

Bio

Max is a postdoc at the IT University of Copenhagen’s NLPnorth Lab and the Pioneer Centre for AI, studying consistencies in the learning dynamics across ML models. Towards the goal of enabling AI training in extremely low-resource scenarios, he has applied these findings to LM adaptation (e.g., to Danish culture) and to improved accessibility (e.g., atypical speech recognition, mapping visual art to music). Previously, he completed a PhD on “Quantifying Linguistic Variation” within the ELLIS network, conducting joint research at ITU Copenhagen, LMU Munich, and the University of Edinburgh.

 

Collaboratories

Speech and Language (SL), led by Christian Hardmeier and Isabelle Augenstein

© 2022 Pioneer Centre for Artificial Intelligence