Talk: On hallucination, memorization, interpretability and trust for fact-intensive applications of language models


Summary

Recent developments in NLP-based systems such as ChatGPT, Bing Chat and Bard have illustrated the potential of language models as natural and simple interfaces to factual information. However, the usefulness of these systems is limited by their tendency to generate false or erroneous answers that appear correct and confident, i.e. hallucinations. As a consequence, we cannot currently trust these systems and are therefore unable to realize their full potential.
 
In this talk, Lovisa will cover recent work, both her own and that of others, with the potential to aid the development of trustworthy NLP systems for fact-intensive applications, spanning interpretability, retrieval-augmented methods and studies of hallucinated content. She will also discuss potential future research directions related to these topics and properly introduce herself to P1.
 
 

Bio

Lovisa is a fourth-year PhD student (PhD positions in Sweden usually last five years) at Chalmers University of Technology, Sweden, under the supervision of Richard Johansson. She has previously published work on visual grounding, retrieval augmentation and factual consistency. Her current research interests include language model controllability, interpretability and retrieval-augmented methods for fact-intensive applications.
 
Lovisa will join Isabelle Augenstein's CopeNLU group as a visiting PhD student starting April 3. She will stay with the group in Copenhagen for approximately six months.
 
 
Please sign up to attend.