Speech and language


Language is central to human beings and to AI, yet automatic natural language understanding differs from human understanding in fundamental ways, notably in its inferior efficiency and generalizability. While human language communication transfers and recovers meaning efficiently through sequences of word forms, recovering that meaning automatically requires a deep grasp of semantics and communicative intent, and the recovery of rich hierarchical structures. Successful communication is grounded in a shared experience of the world, which calls for interdisciplinary development of language and speech AI technology.

Based on mathematical and cognitive modeling of natural language representations and large-scale experience-based approaches, this collaboratory will make foundational contributions to the centre's basic research areas:

Self-supervised learning: Mathematical models of self-supervised learning in NLP. New schemes for contrastive learning in semi-structured text and multi-modal domains. Low-resource NLP using active self-supervised learning. Reducing the footprint of deep learning through transfer learning and self-supervision.
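To make the contrastive-learning theme concrete, the sketch below shows a minimal InfoNCE-style contrastive objective, a standard formulation in self-supervised representation learning (not a method specific to this collaboratory). The random embeddings stand in for encoder outputs of two "views" of the same input; all names here are illustrative.

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.1):
    """InfoNCE loss: row i of z_a should match row i of z_b (the positive pair),
    while all other rows of z_b act as in-batch negatives."""
    # L2-normalise embeddings so dot products are cosine similarities.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature           # (N, N) similarity matrix
    # Softmax cross-entropy with the diagonal (matching pairs) as targets.
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
n, d = 8, 16
z_a = rng.normal(size=(n, d))
# Unrelated "views": positives are no better than negatives.
loss_random = info_nce_loss(z_a, rng.normal(size=(n, d)))
# Near-identical "views": positives dominate, loss drops.
loss_aligned = info_nce_loss(z_a, z_a + 0.01 * rng.normal(size=(n, d)))
print(loss_aligned < loss_random)
```

Training an encoder to minimise this loss pulls representations of matching views together and pushes non-matching pairs apart, which is the core mechanism behind the contrastive schemes mentioned above.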

Fair AI: Understanding the mechanisms behind the diffusion of misinformation. Development of dialogue systems for online interactive exploration of AI fairness.