Collaboratory

Signals and decoding

One of the deepest problems in cognitive science is how we make sense of the vast amount of raw data constantly bombarding us from the environment. The key is sorting the input: attention is a basic perceptual mechanism for selectively decoding complex signals. AI can support both the attention to and the decoding of perceptual signals, and can be used to make sense of the signals produced by a processing brain.
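
As an illustration only, here is a minimal sketch of attention as selective decoding, using scaled dot-product attention in NumPy. The shapes, the softmax helper, and the toy data are assumptions for the example, not the collaboratory's model of perception.

# A minimal sketch of attention as selective decoding: scaled dot-product
# attention weights each input signal by its relevance to a query.
# All array shapes and the toy data are illustrative assumptions.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(query, keys, values):
    """Return a relevance-weighted summary of `values`.

    query:  (d,)      what the system is looking for
    keys:   (n, d)    one descriptor per input signal
    values: (n, d_v)  the signals themselves
    """
    scores = keys @ query / np.sqrt(keys.shape[-1])  # relevance of each signal
    weights = softmax(scores)                        # normalized attention
    return weights @ values, weights

rng = np.random.default_rng(0)
keys = rng.normal(size=(5, 8))
values = rng.normal(size=(5, 3))
summary, w = attend(keys[2], keys, values)  # query matches signal 2
print(w.round(3))  # signal 2 receives the most weight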

Based on statistical modeling of signal-processing pipelines and large-scale experimental approaches, this collaboratory will make foundational contributions to three of the centre's basic research themes:

Explainability: We propose explainability methods for interactive systems that predict the response to real-time interventions in biomedical systems.
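
As a hedged illustration of one standard explainability technique, the sketch below computes gradient saliency: the absolute input gradient of a model output highlights which parts of a signal drive a prediction. The placeholder model and the 64-sample window are assumptions; the collaboratory's actual methods are not specified here.

# A sketch of gradient saliency: the input gradient of a model's output
# highlights which parts of a signal drive a prediction. The model and
# the signal are placeholder assumptions, not the actual pipeline.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

signal = torch.randn(1, 64, requires_grad=True)  # e.g. one biomedical time window
score = model(signal).sum()
score.backward()

saliency = signal.grad.abs().squeeze()  # per-sample contribution to the output
print(saliency.topk(5).indices)         # the 5 most influential time points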

Self-supervised learning: New tools for deep learning in highly non-stationary domains, based on self-supervised ensembles, and quantification of epistemic uncertainty after self-supervised learning.
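
A minimal sketch of one standard way to quantify epistemic uncertainty with an ensemble: disagreement (variance) across independently initialized members serves as a proxy for model uncertainty. The tiny untrained networks stand in for self-supervised models; make_member, the sizes, and the data are illustrative assumptions.

# Epistemic uncertainty from an ensemble: member disagreement is a
# standard proxy for model uncertainty. The untrained networks stand
# in for self-supervised models; names and sizes are assumptions.
import torch
import torch.nn as nn

def make_member():
    return nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

ensemble = [make_member() for _ in range(5)]

x = torch.randn(8, 16)  # a batch of (possibly non-stationary) inputs
with torch.no_grad():
    preds = torch.stack([m(x) for m in ensemble])  # (members, batch, 1)

mean = preds.mean(dim=0)      # ensemble prediction
epistemic = preds.var(dim=0)  # member disagreement = epistemic proxy
print(epistemic.squeeze())    # high values flag inputs the members disagree on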

Novelty detection: Analysis of multi-level novelty detection in large-scale deployments of biomedical deep learning systems; design, modeling, and evaluation of robust dynamical systems in domains with strong anomalies; and explainability methods for deep outlier detection.
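
As one hedged baseline for novelty detection, the sketch below scores each signal by its Mahalanobis distance to a reference set of in-distribution features and flags large distances as outliers. The data, feature dimension, and the novelty_score helper are assumptions for illustration, not the deployed method.

# A simple novelty-detection baseline: score each new signal by its
# Mahalanobis distance to an in-distribution reference set. The data
# and the helper below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
reference = rng.normal(size=(500, 8))  # in-distribution feature vectors
mu = reference.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))

def novelty_score(x):
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))  # Mahalanobis distance

normal = rng.normal(size=8)
anomaly = rng.normal(size=8) + 6.0  # shifted far from the reference

print(novelty_score(normal), novelty_score(anomaly))  # anomaly scores higher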