Signals and decoding
One of the deepest problems in cognitive science is how we make sense of the vast amount of raw data that constantly bombards us from the environment. The key is selective sorting of the input: attention is a basic perceptual mechanism for selectively decoding complex signals. AI can support attention and the decoding of perceptual signals, and can in turn be used to make sense of signals recorded from the processing brain.
Based on statistical modeling of signal-processing pipelines and large-scale experimental approaches, this collaboratory will make foundational contributions to three of the centre’s basic research areas:
Explainability: We will develop explainability methods for interactive systems that predict responses to real-time interventions in bio-medical systems.
Self-supervised learning: New tools for deep learning in highly non-stationary domains based on self-supervised ensembles. Quantification of epistemic uncertainty after self-supervised learning.
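One common way to quantify epistemic uncertainty with ensembles is to read disagreement between ensemble members as uncertainty, which grows outside the training distribution. As a hedged illustration only (the collaboratory's actual self-supervised ensembles are not described here), the sketch below uses simple bootstrap-resampled polynomial regressors as stand-in ensemble members:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_ensemble(x, y, n_members=5, degree=3):
    """Fit an ensemble of small polynomial regressors on bootstrap resamples."""
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(x), size=len(x))  # bootstrap resample
        members.append(np.polyfit(x[idx], y[idx], deg=degree))
    return members

def predict_with_uncertainty(members, x):
    """Mean prediction, plus ensemble disagreement as an epistemic-uncertainty proxy."""
    preds = np.stack([np.polyval(c, x) for c in members])
    return preds.mean(axis=0), preds.std(axis=0)

# Toy data on [-1, 1]; querying far outside this range should make
# the ensemble members disagree, i.e. epistemic uncertainty should grow.
x_train = np.linspace(-1.0, 1.0, 50)
y_train = np.sin(2 * x_train) + 0.1 * rng.standard_normal(50)
ensemble = train_ensemble(x_train, y_train)

_, std_in = predict_with_uncertainty(ensemble, np.array([0.0]))   # inside training support
_, std_out = predict_with_uncertainty(ensemble, np.array([3.0]))  # far outside it
print(std_in[0] < std_out[0])
```

The same disagreement-based score is what makes such ensembles attractive in non-stationary domains: when the input distribution drifts away from the training data, member predictions diverge and the uncertainty estimate rises.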
Novelty detection: Analysis of multi-level novelty detection in large-scale deployment of bio-medical deep learning systems. Design, modeling and evaluation of robust dynamical systems in domains with strong anomalies. Explainability methods for deep outlier detection.
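The novelty-detection theme above can be made concrete with a minimal density-based outlier score. This is a generic sketch, not the collaboratory's method: it scores queries by Mahalanobis distance to a training sample, so points far from the training distribution receive high novelty scores:

```python
import numpy as np

rng = np.random.default_rng(1)

def mahalanobis_scores(train, queries):
    """Novelty score = Mahalanobis distance to the training distribution."""
    mu = train.mean(axis=0)
    cov = np.cov(train, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(train.shape[1]))  # regularized inverse
    diff = queries - mu
    # Quadratic form diff @ cov_inv @ diff^T, evaluated row-wise.
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

# In-distribution "signals" versus a clear anomaly far from the training cloud.
train = rng.standard_normal((500, 2))
queries = np.array([[0.1, -0.2],   # typical point
                    [8.0, 8.0]])   # strong anomaly
scores = mahalanobis_scores(train, queries)
print(scores[0] < scores[1])
```

Deep outlier detectors replace the fixed Gaussian assumption with learned representations, but the deployment question is the same: thresholding such a score to flag inputs the system should not be trusted on.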
29 Sept 2023
Machine Learning in Space
The IT University of Copenhagen and the Pioneer Centre for Artificial Intelligence invite you to an event exploring Machine Learning in Space - educational and research activities in the context of the Danish Partnership for Space Education.
04 Oct 2023
Workshop: AI Research Moonshots
This one-day workshop will focus on pushing the boundaries of research: how to envision and implement grand ideas. We will start the day together, then break out into seven sessions organized around the Pioneer Centre for AI’s 7 collaboratories, and finally reconvene in plenum to discuss what happened in the break-out sessions.