Lecture: Self-Driving Cars
In recent years, driverless cars have emerged as one of the major workhorses in the field of artificial intelligence. Given the large number of traffic fatalities, the limited mobility of elderly and handicapped people, and the increasing problem of traffic jams and congestion, self-driving cars promise a solution to one of our society's most important problems: the future of mobility. However, making a car drive on its own in largely unconstrained environments requires a set of algorithmic skills that rival human cognition, thus rendering the task very hard.

This course covers the two dominant paradigms of self-driving cars: modular pipeline-based approaches as well as deep-learning-based end-to-end driving techniques. Topics include camera-, lidar- and radar-based perception, localization, navigation, path planning, vehicle modeling/control, imitation learning and reinforcement learning. The tutorials will deepen the acquired knowledge through the implementation of several deep-learning-based approaches to perception and sensori-motor control in the context of autonomous driving. Towards this goal, we will build upon existing simulation environments and established deep learning frameworks.
Qualification Goals
Students develop an understanding of the capabilities and limitations of state-of-the-art autonomous driving solutions. They gain a basic understanding of the entire system comprising perception, learning and vehicle control. In addition, they are able to implement and train simple models for sensori-motor control.
Overview
- Course number: ML-4340
- Credits: 6 ECTS (2h lecture + 2h exercise)
- Recommended for: Master, 3rd semester
- Total Workload: 180h
- This lecture is taught as a flipped classroom. Lectures will be held asynchronously via YouTube (see sidebar for link). We will upload all lectures at least one week before the respective interactive live sessions. Students should watch these videos before the corresponding interactive live session takes place and take note of questions which they would like to ask during the live session.
- Each Friday, we will host a physical live session from 10:15-12:00 in the MvL6 lecture hall. The event will be hybrid and streamed via Zoom (see sidebar for link). Students must bring proof of 3G status (vaccinated, recovered, or tested), a mobile phone with a QR scanning app, and their university credentials (username/password) for registration and contact tracing. Our TAs will start onboarding at 10:00. During the event, we will conduct quizzes to deepen the understanding of the materials, introduce the exercises and provide the opportunity for students to ask questions regarding both lectures and exercises.
- Make sure that you have the latest Zoom client installed.
- Exercises will not be graded. We will provide solutions before the final plenary Q&A session.
Prerequisites
- Basic Computer Science skills: Variables, functions, loops, classes, algorithms
- Basic Python and PyTorch coding skills
- Basic Math skills: Linear algebra, probability and information theory (e.g., the Math for ML lecture: https://www.tml.cs.uni-tuebingen.de/teaching/2020_maths_for_ml/index.php). As a refresher, we recommend reading Chapters 1-4 of http://www.deeplearningbook.org
- Experience with Deep Learning (e.g., through participation in the Deep Learning lecture: https://uni-tuebingen.de/fakultaeten/mathematisch-naturwissenschaftliche-fakultaet/fachbereiche/informatik/lehrstuehle/autonomous-vision/lectures/deep-learning/)
Registration
- To participate in this lecture, you must enroll via ILIAS (see sidebar for link)
- To participate in the live sessions, you must register via ILIAS (per session)
- Information about exam registration can be found here
Exam
- To qualify for the final exam, students must have registered for the lecture on ILIAS
- To participate in the exam, students must register through ILIAS towards the end of the semester
- By submitting lecture notes for one lecture, students obtain a 0.3 bonus
- Students who rank in the top 50% of the exercise challenge obtain a 0.3 bonus
- The two bonuses can be combined. The bonus is only effective if the written exam is passed.
- All topics discussed in lectures, Q&A sessions and exercises are relevant for the final exam.
Exercises and Challenges
The exercises play an essential role in understanding the content of the course. There will be 3 assignments in total (see content table below). The assignments contain pen-and-paper questions as well as programming problems. Each programming problem involves programming an agent to participate in a challenge against agents developed by other students. At the end of the course, the points obtained across the 3 challenges will be accumulated, and students ranking in the top 50% of the leaderboard will obtain a 0.3 bonus. The winners of each challenge will have the opportunity to present their work to the class during the final live session.
For programming, students will use Python and PyTorch, a deep learning framework featuring GPU support and auto-differentiation. If you have questions regarding the exercises or the lecture, please ask them during the interactive Zoom sessions or in our ILIAS forum.
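As a small illustration of the auto-differentiation mentioned above (not part of the course materials; the toy steering model and target values are made up for this sketch), PyTorch computes gradients of a loss with respect to any tensor created with `requires_grad=True`:

```python
import torch

# Toy example: fit a scalar "steering command" y = w * x + b to one target.
x = torch.tensor([0.5])        # input feature (hypothetical)
target = torch.tensor([0.2])   # desired output (hypothetical)
w = torch.tensor([1.0], requires_grad=True)
b = torch.tensor([0.0], requires_grad=True)

pred = w * x + b               # forward pass: pred = 0.5
loss = ((pred - target) ** 2).mean()
loss.backward()                # autograd fills in w.grad and b.grad

# dloss/dw = 2 * (pred - target) * x = 2 * 0.3 * 0.5 = 0.3
print(w.grad)                  # tensor([0.3000])
```

In the actual exercises, `pred` would be the output of a deep network and an optimizer such as `torch.optim.Adam` would use these gradients to update the parameters; the mechanism is the same.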
Lecture Notes
Interested students collectively write LaTeX lecture notes to complement the slides, summarizing the content discussed in the lecture videos. Submission of lecture notes is voluntary (a bonus of 0.3 is obtained) and is not a requirement for participating in the exam. At the beginning of the course, every registered student will be assigned one lecture. Students can find their assignments in ILIAS. The lecture notes must be submitted via ILIAS at the latest 7 days after the respective official lecture date (see content table below). Lecture notes must be written individually (not in groups). We will continuously merge and consolidate the lecture notes into a single document. You can edit the lecture notes in Overleaf or a local LaTeX editor. To get started, copy the Self-Driving Cars Lecture Notes LaTeX Template.
Further Readings
- Janai, Güney, Behl and Geiger: Computer Vision for Autonomous Vehicles
- Goodfellow, Bengio and Courville: Deep Learning
- Richard Szeliski: Computer Vision: Algorithms and Applications
- Deisenroth, Faisal and Ong: Mathematics for Machine Learning
- Current version of the Self-Driving Cars Lecture Notes
- Current version of the Self-Driving Cars Lecture Frequently Asked Questions
- Articles and papers mentioned in the lecture slides (footer)
Schedule
Congratulations to the Winners of the Self-Driving Challenges 2019/20!
After the course, Micha Schilling implemented a conditional imitation learning controller on a Raspberry Pi 3, which was added to the basic Arduino self-driving car kit. Here are links to the project page and video:
- Github: https://github.com/Lucbus/SelfDrivingElegooCar
- YouTube: https://www.youtube.com/watch?v=1-7RTr_nGgs