Artifacts
Course: SSL4EO-2024 – Self-Supervised Learning for Earth Observation
In order to support further progress at the intersection of self-supervised learning (SSL) and Earth observation (EO), we organized the first SSL4EO summer school in July 2024 in Copenhagen. This full-week program brought together leading experts working on SSL and EO to teach recent advances and discuss open research questions at this intersection.
We were very happy that the first cohort of PhD students joining this format sold out. The 40 participants from 17 international institutions heard from 8 invited speakers and worked on mini-projects to gain hands-on experience with the methods discussed. Thanks to the generous support from DeiC, which provided access to its GPU cluster during the course, the participants studied the role of augmentations, learning objectives, architectural design, and sampling strategies.
Many thanks to all the invited speakers for contributing to the program: Randall Balestriero, Marc Rußwurm, Konstantin Klemmer, Bruno Sanchez-Andrade Nuño, Jan Dirk Wegner, Zhitong Xiong, Xiaoxiang Zhu, Puzhao Zhang.
This course was supported by the University of Copenhagen, the Pioneer Centre for AI, and the Danish e-infrastructure Consortium (DeiC).
The organizers: Ankit Kariryaa, Nico Lang, Stefan Oehmcke, and Christian Igel.
Additional Links:
Danoliterate GLLM Benchmark
Generative Large Language Models (GLLMs), such as GPT-4, have shown immense potential to disrupt business and society. These general-purpose models are also used intensively in lesser-resourced languages like Danish. However, if practitioners only have tools for measuring model capabilities in English, these language communities might miss out on important developments.
We present an open benchmark for GLLM performance in Danish, evaluating more than 50 models across eight diverse Danish Natural Language Processing scenarios, such as solving citizenship tests or writing helpful social media post replies. The results are displayed on a live leaderboard on the benchmark website, where a combined Danoliteracy Index suggests which models understand and generate Danish best. An interactive human-feedback arena survey is also hosted on the site, displaying Danish speakers' judgements of model capability.
Course: Real-Time Visual and Machine Learning Systems
This course explores the principles and applications of real-time visual and machine learning systems. Students will learn how to design, implement, and optimize systems that process visual data and make intelligent decisions in real time. The curriculum covers key topics such as programming in Rust, memory hierarchies, concurrency, and data types. The course focuses on a mix of hands-on exercises, with a larger code project at the end. All parts of the material are open source to explore at your own pace.
Course: Machine Learning Operations
This course explores a number of coding practices that help machine learning practitioners organize, scale, monitor, and deploy machine learning models in either a research or a production setting. The course focuses on hands-on experience with a number of frameworks, both local and in the cloud, for building large-scale machine learning models. All parts of the material are open source to explore at your own pace.
MMEarth: Exploring Multi-Modal Pretext Tasks For Geospatial Representation Learning
The volume of unlabelled Earth observation (EO) data is huge, but many important applications lack labelled training data. However, EO data offers the unique opportunity to pair data from different modalities and sensors automatically based on geographic location and time, at virtually no human labor cost. We seize this opportunity to create MMEarth, a diverse multi-modal pretraining dataset at global scale. Using this new corpus of 1.2 million locations, we propose a Multi-Pretext Masked Autoencoder (MP-MAE) approach to learn general-purpose representations for optical satellite images. Our approach builds on the ConvNeXt V2 architecture, a fully convolutional masked autoencoder (MAE). Drawing upon a suite of multi-modal pretext tasks, we demonstrate that our MP-MAE approach outperforms both MAEs pretrained on ImageNet and MAEs pretrained on domain-specific satellite images. This is shown on several downstream tasks including image classification and semantic segmentation. We find that pretraining with multi-modal pretext tasks notably improves the linear probing performance compared to pretraining on optical satellite images only. This also leads to better label efficiency and parameter efficiency which are crucial aspects in global scale applications.
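To make the multi-pretext idea concrete, here is a minimal numpy sketch of the core training signal: mask a random subset of input patches and average per-modality reconstruction losses over the masked patches only. All names, shapes, and the masking ratio are illustrative assumptions, not taken from the MMEarth or ConvNeXt V2 codebases.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_patches(n_patches, mask_ratio=0.75, rng=rng):
    """Randomly mask a fraction of patches.

    Returns the indices of visible patches and a boolean mask
    where True marks a masked (hidden) patch.
    """
    n_keep = int(n_patches * (1 - mask_ratio))
    perm = rng.permutation(n_patches)
    keep = np.sort(perm[:n_keep])
    mask = np.ones(n_patches, dtype=bool)
    mask[keep] = False  # False = visible to the encoder, True = masked
    return keep, mask

def multi_pretext_loss(preds, targets, mask):
    """Average per-modality MSE over masked patches only.

    preds / targets: dicts mapping modality name -> (n_patches, dim) arrays.
    In MP-MAE-style training, each modality contributes one pretext task.
    """
    losses = [np.mean((preds[m][mask] - targets[m][mask]) ** 2) for m in targets]
    return float(np.mean(losses))

# Toy setup: 16 patches with 8-dim features per modality (illustrative sizes).
optical = rng.normal(size=(16, 8))
elevation = rng.normal(size=(16, 8))
keep, mask = mask_patches(optical.shape[0])

targets = {"optical": optical, "elevation": elevation}
# A real model would decode these from the visible patches; here we fake
# perfect predictions to show the loss bottoms out at zero.
preds = {"optical": optical.copy(), "elevation": elevation.copy()}

print(multi_pretext_loss(preds, targets, mask))  # 0.0 for perfect predictions
```

The key design point mirrored here is that the loss is computed only on masked patches, so the encoder cannot solve the task by copying visible inputs, and each additional modality simply adds another reconstruction target.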