Each module is worth 3 ECTS credits. You choose a total of 10 modules/30 ECTS from the following module categories:
- 12-15 ECTS in technical scientific modules (TSM)
  TSM modules teach profile-specific specialist skills and supplement the decentralised specialisation modules.
- 9-12 ECTS in fundamental theoretical principles modules (FTP)
  FTP modules deal with theoretical fundamentals such as higher mathematics, physics, information theory, chemistry, etc. They teach more detailed, abstract scientific knowledge and help you to bridge the gap between abstraction and application that is so important for innovation.
- 6-9 ECTS in context modules (CM)
  CM modules impart additional skills in areas such as technology management, business administration, communication, project management, patent law, contract law, etc.
In the module description (PDF download) you will find the language information for each module, divided into the following categories:
- instruction
- documentation
- examination
The purpose of this module is to enhance students' understanding of deep learning techniques.
We will explore significant and current developments in deep learning, including generative models, attention networks, transformers, graph neural networks and other related techniques.
Furthermore, we will examine case studies that pertain to language, speech, or visual processing domains.
Prerequisites
Machine Learning Basics, Deep Learning, Programming (Python), Statistics, Linear Algebra
Learning Objectives
- Know the Attention and Transformer architectures and how to use them in different applications.
- Know which architectures are suitable for which kind of machine learning problem.
- Know how to apply self-supervised learning, and understand the advantages and disadvantages of this method.
- Know and apply the different families of generative models.
- Know how to interpret and explain results of predictions.
Contents of Module
- Refresher of basic deep learning methods and architectures, such as MLPs, CNNs, RNNs, backpropagation, etc.
- Attention and transformers: concepts and applications in language and vision domains
- Self-supervised learning: contrastive and non-contrastive methods
- Graph Neural Networks and applications
- Generative models: VAEs, GANs, Diffusion models, Energy models, etc.
- Foundation models, few and zero-shot learning
- Interpretability, Explainability
- Special topics of the year, such as NNs for physical systems, 3D reconstruction, theoretical approaches, and others
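To give a flavour of the topics above, the attention mechanism at the core of transformers can be sketched as scaled dot-product attention. This is a minimal illustrative example, not course material; the function and variable names are our own:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # weighted average of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))  # 2 queries of dimension 4
K = rng.normal(size=(3, 4))  # 3 keys
V = rng.normal(size=(3, 4))  # 3 values
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 4): one output vector per query
```

Each output row is a mixture of the value vectors, weighted by how well the corresponding query matches each key; the 1/sqrt(d_k) scaling keeps the softmax from saturating for large dimensions.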
Teaching and Learning Methods
- Lectures (in person)
- Tutorials (in person)
- Exercises (in person)
- Self-study