Event

Pioneer Centre for AI & Danish Data Science Community Meetup

About the Event

Welcome to a fascinating meeting about artificial intelligence (AI), hosted by the Pioneer Centre for Artificial Intelligence (P1) together with the Danish Data Science Community (DDSC). First, we will be greeted by Camilla Nørgaard Jensen, who will introduce us to P1 and its work with AI. After that, Thor Steen Larsen, a board member of the DDSC, will give us a brief introduction to the organization. Next, Anna Rogers, assistant professor at the IT University of Copenhagen (ITU), will take us on a deep dive into her research on emergent properties in LLMs. Her presentation will be followed by opportunities to ask questions. After a short break, Daniel Hershcovich, assistant professor at the Department of Computer Science at the University of Copenhagen (DIKU), will take center stage. He will talk about his research on cultural bias in LLMs and open the floor for further discussion.

We will conclude our time together with networking and light refreshments provided by P1.

Registration is mandatory, so please make sure you have your ticket beforehand.

Please note that directions to the venue are tricky; see https://www.aicentre.dk/find-us


Talks

Bio

Daniel Hershcovich is an assistant professor at the Department of Computer Science, University of Copenhagen. His research interests include cross-cultural adaptation of (and with) language technology.


Title

Cultural bias in LLMs 


Abstract

Are current LLMs culturally biased? That’s, at least, one of the main arguments for building local LLMs (e.g., in Denmark). But is there more to it than fluffy claims and anecdotal evidence? I will describe some methods to measure this bias, including experimental results confirming it in popular closed and open-source LLMs. 

Related paper: https://aclanthology.org/2023.c3nlp-1.7/


Bio

Anna Rogers is an assistant professor at IT University of Copenhagen. She works on interpretability and evaluation of NLP models, their societal impact, and NLP research methodology.


Title

A sanity check on emergent properties


Abstract

One of the frequent points in the mainstream narrative about large language models is that they have “emergent properties” (sometimes even dangerous enough to be considered an existential risk to mankind). However, there is much disagreement about even the very definition of such properties. If they are understood as a kind of generalization beyond training data – as something that a model does without being explicitly trained for it – I argue that we have not in fact established the existence of any such properties, and at the moment we do not even have the methodology for doing so.

Related paper: https://arxiv.org/abs/2308.07120


Agenda

  • Arrival 16:20-16:30
  • Welcome & intro to DDSC 16:30-16:40
  • Intro to P1 16:40-16:50
  • Anna Rogers “A sanity check on emergent properties” (incl. 10 min Q&A) 16:50-17:20
  • Break 17:20-17:30
  • Daniel Hershcovich “Cultural bias in LLMs” (incl. 10 min Q&A) 17:30-18:00
  • Networking and light refreshments in the east wing lunch area 18:00-19:00