Lecture: Deep Learning
Within the last decade, deep neural networks have emerged as an indispensable tool in many areas of artificial intelligence including computer vision, computer graphics, natural language processing, speech recognition and robotics. This course will introduce the practical and theoretical principles of deep neural networks. Amongst other topics, we will cover computation graphs, activation functions, loss functions, training, regularization and data augmentation as well as various basic and state-of-the-art deep neural network architectures including convolutional networks and graph neural networks. The course will also address deep generative models such as auto-encoders, variational auto-encoders and generative adversarial networks. In addition, applications from various fields will be presented throughout the course. The tutorials will deepen the understanding of deep neural networks by implementing and applying them in Python and PyTorch.
Qualification Goals
Students gain an understanding of the theoretical and practical concepts of deep neural networks, including optimization, inference, architectures and applications. After this course, students should be able to develop and train deep neural networks, reproduce research results and conduct original research in this area.
Overview
- Course number: ML-4103 (also credited for INFO-4182 and INF-4182)
- Credits: 6 ECTS
- Recommended for: Master, 1st semester
- Total Workload: 180h
- This lecture is taught as a flipped classroom. Lectures will be held asynchronously via YouTube (see sidebar for link). We will provide all lectures before the respective interactive live sessions for self-study. Please watch the relevant videos before participating in the interactive live sessions.
- Each week, we host an interactive live session where questions regarding the lecture and exercises are discussed together (see sidebar for details).
- We also offer a weekly zoom helpdesk where students may ask questions or share their screen to obtain individual feedback and support for solving the exercises (see sidebar for details).
- Exercises will not be graded. Instead, we will discuss the solutions together.
- Students may obtain bonus points for the exam by answering questions about the lectures and exercises in weekly quizzes. The questions also serve as a tool for self-assessment and self-motivation. All quizzes are provided via our Lecture Quiz Server (see sidebar for details).
Prerequisites
- Basic Computer Science skills: Variables, functions, loops, classes, algorithms
- Basic Python and PyTorch coding skills
- Basic Math skills: Linear algebra, probability and information theory (e.g., the Math for ML lecture:
https://www.tml.cs.uni-tuebingen.de/teaching/2020_maths_for_ml/index.php)
As a refresher, we recommend reading Chapters 1-4 of http://www.deeplearningbook.org or watching our new micro tutorials on Mathematics for Deep Learning.
Registration
- To participate in this lecture, you must enroll via ILIAS (see sidebar for link)
- Registration via ILIAS will open on 30.09. at 12:00
- Information about exam registration can be found here
Exercises
The exercises play an essential role in understanding the content of the course. There will be 6 assignments in total. The assignments contain pen-and-paper questions as well as programming problems. In the first half of the course, the students will use the Educational Deep Learning Framework (EDF), a small Python-only deep learning framework. This will allow them to understand every aspect of deep learning (computation graphs, backpropagation, optimization, regularization) in detail on small problems. In the second half of the course, the students will use PyTorch, a state-of-the-art deep learning framework that features GPU support and automatic differentiation, to address more challenging problems. If you have questions regarding the exercises or the lecture, please ask them during the live sessions, at the zoom helpdesk or in our ILIAS forum.
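To give a flavor of the kind of code used in the second half of the course, here is a minimal sketch (not part of the official exercises) showing how PyTorch's automatic differentiation drives a basic training loop; the toy data and network sizes are placeholders.

```python
# Minimal PyTorch training sketch (illustrative only, not from the assignments).
import torch
import torch.nn as nn

# Toy data: 64 samples with 10 features each and binary labels.
x = torch.randn(64, 10)
y = torch.randint(0, 2, (64,))

# A small multi-layer perceptron built from standard PyTorch modules.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    optimizer.zero_grad()        # reset accumulated gradients
    loss = loss_fn(model(x), y)  # forward pass through the computation graph
    loss.backward()              # auto-differentiation: backpropagate gradients
    optimizer.step()             # gradient descent update of all parameters
```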
Further Readings
- Student's Deep Learning Lecture Notes
- Goodfellow, Bengio and Courville: Deep Learning
- Zhang, Lipton, Li, Smola: Dive into Deep Learning
- Bishop: Pattern Recognition and Machine Learning
- Deisenroth, Faisal and Ong: Mathematics for Machine Learning
- Articles and papers mentioned in the lecture slides
- Micro Tutorials Mathematics for Deep Learning
Schedule