Lecture: Deep Learning

Within the last decade, deep neural networks have emerged as an indispensable tool in many areas of artificial intelligence including computer vision, computer graphics, natural language processing, speech recognition and robotics. This course will introduce the practical and theoretical principles of deep neural networks. Amongst other topics, we will cover computation graphs, activation functions, loss functions, training, regularization and data augmentation as well as various basic and state-of-the-art deep neural network architectures including convolutional networks and graph neural networks. The course will also address deep generative models such as auto-encoders, variational auto-encoders and generative adversarial networks. In addition, applications from various fields will be presented throughout the course. The tutorials will deepen the understanding of deep neural networks by implementing and applying them in Python and PyTorch.

Qualification Goals

Students gain an understanding of the practical and theoretical concepts of deep neural networks, including optimization, inference, architectures, and applications. After this course, students should be able to develop and train deep neural network architectures for various tasks and understand the potential and pitfalls of applying deep neural networks in practice.


  • Credits: 6 ECTS (2h lecture + 2h exercise)
  • Course number: ML-4103 (also credited for INFO-4182 and INF-4182)
  • Lectures and exercises will be held asynchronously through YouTube (see sidebar for link). We will provide all lectures and exercise introductions several days before the Q&A sessions for self-study. You must watch these videos before participating in the live Q&A sessions.
  • Each Wednesday, we will host a live lecture and exercise Q&A session from 12:00 to 14:00 via Zoom (see sidebar for link) where questions regarding the lecture and exercises are answered. Make sure that you have the latest Zoom client installed.
  • All exercise assignments will be handed out and graded through ILIAS (see sidebar for link).
  • Students shall watch both the lecture and exercise videos before the Q&A session and take note of questions that they would like to have answered during the Q&A sessions.



  • To participate in this lecture or to enroll for our exam, you must register via ILIAS (see sidebar for link) by October 28, 2020. Note that the registration deadline has passed and we are not accepting new students to the course. Please do not email us asking about this.
  • Information about exam registration and make-up exams can be found here.


The exercises play an essential role in understanding the content of the course. There will be 6 assignments in total (see content table below), which can be completed in groups of up to 2 students (groups can be formed in the first weeks via the booking pool in ILIAS). The assignments contain pen-and-paper questions as well as programming problems. In the first half of the course, the students will develop and use the Educational Deep Learning Framework (EDF), a small Python-only deep learning framework. This will allow us to understand every aspect of deep learning (computation graphs, backpropagation, optimization, regularization) in detail on small problems (MNIST). In the second half of the course, the students will use PyTorch, a state-of-the-art deep learning framework which features GPU support and auto-differentiation, to address more challenging problems. If you have questions regarding the exercises or the lecture, please ask them during the Zoom Q&A sessions or in our ILIAS forum.
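To give a feel for what building such a framework involves, here is a minimal sketch of a scalar computation graph with reverse-mode backpropagation. The class and function names are illustrative only, not EDF's actual API:

```python
# Minimal scalar computation graph with reverse-mode backpropagation.
# Names (Node, add, mul, backprop) are illustrative, not EDF's actual API.

class Node:
    def __init__(self, value, parents=()):
        self.value = value            # result of the forward pass
        self.parents = parents        # nodes this value was computed from
        self.backward_fn = None       # propagates grad to the parents
        self.grad = 0.0               # accumulated d(output)/d(self)

def add(a, b):
    out = Node(a.value + b.value, parents=(a, b))
    def backward():
        a.grad += out.grad            # d(a+b)/da = 1
        b.grad += out.grad            # d(a+b)/db = 1
    out.backward_fn = backward
    return out

def mul(a, b):
    out = Node(a.value * b.value, parents=(a, b))
    def backward():
        a.grad += b.value * out.grad  # d(a*b)/da = b
        b.grad += a.value * out.grad  # d(a*b)/db = a
    out.backward_fn = backward
    return out

def backprop(output):
    # Topologically order the graph, then apply the chain rule in reverse.
    order, visited = [], set()
    def visit(node):
        if node not in visited:
            visited.add(node)
            for p in node.parents:
                visit(p)
            order.append(node)
    visit(output)
    output.grad = 1.0
    for node in reversed(order):
        if node.backward_fn is not None:
            node.backward_fn()

# Example: y = (x1 + x2) * x1 at x1=2, x2=3, so y = 10 and dy/dx1 = 2*x1 + x2 = 7
x1, x2 = Node(2.0), Node(3.0)
y = mul(add(x1, x2), x1)
backprop(y)
```

After `backprop(y)`, the leaves hold `x1.grad = 7.0` and `x2.grad = 2.0`, exactly the chain-rule result; PyTorch's autograd automates the same bookkeeping at the tensor level.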

Lecture Notes

The students will collectively write LaTeX lecture notes to complement the slides, summarizing the content discussed in the lecture videos. At the beginning of the course, every registered student will be assigned one lecture. The lecture notes must be submitted via ILIAS no later than 7 days after the respective official lecture date (see content table below). Lecture notes must be written individually (not in groups). We will continuously merge and consolidate the lecture notes into a single document. You can edit the lecture notes in Overleaf or a local LaTeX editor. To get started, copy the Deep Learning Lecture Notes LaTeX Template.


To qualify for the final exam, students must have:

  • registered for the lecture on ILIAS by 28.10.2020
  • successfully solved 50% of the exercises, and
  • submitted lecture notes for one lecture
    (slots will be assigned in the first week, notification via email)

To obtain a 0.3 bonus in the final exam, students must:

  • successfully solve 75% of the exercises

All students must participate in the main exam, which will take place during the official examination period. Only students who failed the main exam and students who were ill during the main exam (doctor's certificate required) are allowed to enroll for a make-up exam.





Course Schedule

Lecture 1: Introduction | Slides
  1.1 Introduction | Video
  1.2 History of Deep Learning | Video
  1.3 Machine Learning Basics | Video
Exercise (EDF): 01 - Intro | Slides | Assignment (Introduction to EDF and Computation Graphs)
TA Support: Christian Reiser

Lecture 2: Computation Graphs | Slides
  2.1 Logistic Regression | Video
  2.2 Computation Graphs | Video
  2.3 Backpropagation | Video
  2.4 Educational Framework | Video
Exercise (EDF): 01 - Q&A
TA Support: Christian Reiser

No Lecture

Lecture 3: Deep Networks | Slides
  3.1 XOR Problem | Video
  3.2 Multi-Layer Perceptron | Video
  3.3 Backpropagation in MLPs | Video
  3.4 Universal Approximation | Video
Exercise (EDF): 01 - Discussion; 02 - Intro | Slides | Assignment (Image Classification)
TA Support: Songyou Peng

Lecture 4: Deep Networks II | Slides
  4.1 Loss Functions | Video
  4.2 Activation Functions | Video
  4.3 Initialization | Video
  4.4 Image Classification | Video
Exercise (EDF): 02 - Q&A
TA Support: Songyou Peng

Lecture 5: Regularization | Slides
  5.1 Parameter Penalties | Video
  5.2 Early Stopping | Video
  5.3 Ensemble Methods | Video
  5.4 Dropout | Video
  5.5 Data Augmentation | Video
Exercise (EDF): 02 - Discussion; 03 - Intro | Slides | Assignment (Regularization and Optimization)
TA Support: Aditya Prakash

Lecture 6: Optimization | Slides
  6.1 Learning vs. Optimization | Video
  6.2 Challenges of Neural Network Optimization | Video
  6.3 Optimization Algorithms | Video
  6.4 Optimization and Debugging Strategies | Video
Exercise (EDF): 03 - Q&A
TA Support: Aditya Prakash

Lecture 7: Convolutional Neural Networks | Slides
  7.1 Convolution | Video
  7.2 Pooling and Unpooling | Video
  7.3 Padding, Striding, Dilation | Video
  7.4 Architectures | Video
  7.5 Visualization | Video
Exercise (PyTorch): 03 - Discussion; 04 - Intro | Slides | Assignment (Introduction to PyTorch and Convolutional Networks)
TA Support: Axel Sauer

Lecture 8: Recurrent Neural Networks | Slides
  8.1 Introduction | Video
  8.2 Long Short-Term Memory | Video
  8.3 Gated Recurrent Unit | Video
  8.4 Temporal Convolution | Video
Exercise (PyTorch): 04 - Q&A
TA Support: Axel Sauer

Lecture 9: Natural Language Processing | Slides
  9.1 Introduction | Video
  9.2 High-dimensional Outputs | Video
  9.3 Neural Machine Translation | Video
  9.4 Attention
Exercise (PyTorch): 04 - Discussion; 05 - Intro | Slides | Assignment (Natural Language Processing)
TA Support: Joo Ho Lee

Lecture 10: Generative Models | Slides
  10.1 Introduction | Video
  10.2 Variational Autoencoders | Video
  10.3 Generative Adversarial Networks | Video
  10.4 Evaluating Generative Models | Video
Exercise (PyTorch): 05 - Q&A
TA Support: Joo Ho Lee

Lecture 11: Graph Neural Networks | Slides
  11.1 Graph Convolutions | Video
  11.2 Scene Graphs | Video
Exercise (PyTorch): 05 - Discussion; 06 - Intro | Slides | Assignment (Generative Models)
TA Support: Christian Reiser

No Lecture
Exercise (PyTorch): 06 - Q&A

Lecture 12: Self-Supervised Learning | Slides
  12.1 Contrastive Learning for NLP and CV | Video
  12.2 Pretext Tasks | Video
  12.3 Self-supervision for Low-level Vision | Video
Exercise (PyTorch): 06 - Discussion
TA Support: Christian Reiser