Lecture: Self-Driving Cars

In recent years, driverless cars have emerged as one of the major workhorses in the field of artificial intelligence. Given the large number of traffic fatalities, the limited mobility of elderly and handicapped people, and the growing problem of traffic jams and congestion, self-driving cars promise a solution to one of our society's most important problems: the future of mobility. However, making a car drive on its own in largely unconstrained environments requires a set of algorithmic skills that rival human cognition, rendering the task very hard. This course covers the dominant paradigms of self-driving cars: modular pipeline-based approaches as well as deep-learning-based end-to-end driving techniques. Topics include camera-, lidar- and radar-based perception, localization, navigation, path planning, vehicle modeling/control, imitation learning and reinforcement learning. The tutorials deepen the acquired knowledge through the implementation of several deep-learning-based approaches to perception and sensorimotor control in the context of autonomous driving. Towards this goal, we build upon existing simulation environments and established deep learning frameworks.

Qualification Goals

Students develop an understanding of the capabilities and limitations of state-of-the-art autonomous driving solutions. They gain a basic understanding of the entire system, comprising perception, learning and vehicle control. In addition, they are able to implement and train simple models for sensorimotor control.

Overview

  • Course number: ML-4340
  • Credits: 6 ECTS (2h lecture + 2h exercise)
  • Recommended for: Master, 3rd semester
  • Total Workload: 180h
  • This lecture is taught as a flipped classroom. Lectures will be held asynchronously via YouTube (see sidebar for link). We will upload all lectures at least one week before the respective interactive live sessions. Students should watch these videos before the corresponding interactive live session takes place and note down questions they would like to ask during the live session.
  • Each Friday, we will host a physical live session from 10:15-12:00 in the MvL6 lecture hall. The event will be hybrid and streamed via Zoom (see sidebar for link). Students must bring proof of 3G status (vaccinated, recovered, or tested), a mobile phone with a QR-scanning app, and their university credentials (username/password) for registration and contact tracing. Our TAs will start onboarding at 10:00. During the event, we will conduct quizzes to deepen the understanding of the materials, introduce the exercises, and provide the opportunity for students to ask questions regarding both lectures and exercises.
  • Make sure that you have the latest Zoom client installed.
  • Exercises will not be graded. We will provide solutions before the final plenary Q&A session.

Prerequisites

Registration

  • To participate in this lecture, you must enroll via ILIAS (see sidebar for link)
  • To participate in the live sessions, you must register via ILIAS (per session)
  • Information about exam registration can be found here

Exam

  • To qualify for the final exam, students must have registered for the lecture on ILIAS
  • To participate in the exam, students must register through ILIAS towards the end of the semester
  • By submitting lecture notes for one lecture, students obtain a 0.3 bonus
  • Students who rank in the top 50% of the exercise challenge obtain a 0.3 bonus
  • The two bonuses can be combined. The bonus is only effective if the written exam is passed.
  • All topics discussed in lectures, Q&A sessions and exercises are relevant for the final exam.

Exercises and Challenges

The exercises play an essential role in understanding the content of the course. There will be 3 assignments in total (see the schedule below). The assignments contain pen-and-paper questions as well as programming problems. Each programming problem involves programming an agent that competes in a challenge against the agents developed by the other students. At the end of the course, the points obtained across the 3 challenges will be accumulated, and students ranking in the top 50% of the leaderboard will obtain a 0.3 bonus. The winners of each challenge will have the opportunity to present their work to the class during the final live session.

For programming, students will use Python and PyTorch, a deep learning framework that features GPU support and auto-differentiation. If you have questions regarding the exercises or the lecture, please ask them during the interactive Zoom sessions or in our ILIAS forum.
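As a quick illustration of what auto-differentiation and GPU support give you, here is a minimal sketch (not part of the exercises; all variable names are illustrative) that fits a toy steering regressor and lets PyTorch compute the gradients:

    import torch

    # Toy steering regressor: maps 4 input features to 1 steering command.
    model = torch.nn.Linear(4, 1)
    device = "cuda" if torch.cuda.is_available() else "cpu"  # use GPU if present
    model.to(device)

    features = torch.randn(8, 4, device=device)  # batch of 8 feature vectors
    targets = torch.randn(8, 1, device=device)   # ground-truth commands

    loss = torch.nn.functional.mse_loss(model(features), targets)
    loss.backward()                  # autograd fills model.weight.grad
    print(model.weight.grad.shape)   # -> torch.Size([1, 4])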

Lecture Notes

Interested students collectively write LaTeX lecture notes to complement the slides, summarizing the content discussed in the lecture videos. Submission of lecture notes is voluntary (a bonus of 0.3 is obtained) and is not a requirement for participating in the exam. At the beginning of the course, every registered student will be assigned one lecture. Students can find their assignments in ILIAS. The lecture notes must be submitted via ILIAS at the latest 7 days after the respective official lecture date (see the schedule below). Lecture notes must be written individually (not in groups). We will continuously merge and consolidate the lecture notes into a single document. You can edit the lecture notes in Overleaf or a local LaTeX editor. To get started, copy the Self-Driving Cars Lecture Notes LaTeX Template; a minimal skeleton is sketched below.
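For orientation, a stand-alone skeleton in the spirit of such notes might look as follows; the actual template is authoritative, and the section titles here are purely illustrative:

    \documentclass{article}
    \usepackage{amsmath}   % equations from the lecture
    \usepackage{graphicx}  % figures taken from the slides

    \begin{document}

    \section{Lecture 2: Imitation Learning}

    % One subsection per video segment keeps the notes aligned with the lecture.
    \subsection{Approaches to Self-Driving}
    Summary of the segment goes here.

    \end{document}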

Further Readings

Schedule

Each entry lists the date, the lecture (with video segments and Q&A), the exercises (coding | pen & paper), and the supporting TA.

22.10.
  • Lectures: L01 - Introduction | Slides
      1.1 - Organization | Video
      1.2 - Introduction | Video
      1.3 - History of Self-Driving | Video
      L01 - Q&A
  • Exercises: E00 - Introduction | Problems
  • TA Support: Katrin Renz

29.10.
  • Lectures: L02 - Imitation Learning | Slides
      2.1 - Approaches to Self-Driving | Video
      2.2 - Deep Learning Recap | Video
      2.3 - Imitation Learning | Video
      2.4 - Conditional Imitation Learning | Video
      L02 - Q&A
  • Exercises: Exercise - Q&A
      E01 - Introduction | Problems
  • TA Support: Katrin Renz

05.11.
  • Lectures: L03 - Direct Perception | Slides
      3.1 - Direct Perception | Video
      3.2 - Conditional Affordance Learning | Video
      3.3 - Visual Abstractions | Video
      3.4 - Driving Policy Transfer | Video
      3.5 - Online vs. Offline Evaluation | Video
      L03 - Q&A
  • Exercises: Exercise - Q&A
      E02 - Introduction (L2+L3) | Problems
  • TA Support: Katrin Renz

12.11.
  • No Lecture
  • No Exercise

19.11.
  • Lectures: L04 - Reinforcement Learning | Slides
      4.1 - Markov Decision Processes | Video
      4.2 - Bellman Optimality and Q-Learning | Video
      4.3 - Deep Q-Learning | Video
      L04 - Q&A
  • Exercises: E01+E02 - Discussion
      E03 - Introduction | Problems
  • TA Support: Katrin Renz, Joo Ho Lee

26.11.
  • Lectures: L05 - Vehicle Dynamics | Slides
      5.1 - Introduction | Video
      5.2 - Kinematic Bicycle Model | Video
      5.3 - Tire Models | Video
      5.4 - Dynamic Bicycle Model | Video
      L05 - Q&A
  • Exercises: Exercise - Q&A
  • TA Support: Joo Ho Lee

03.12.
  • Lectures: L06 - Vehicle Control | Slides
      6.1 - Introduction | Video
      6.2 - Black Box Control | Video
      6.3 - Geometric Control | Video
      6.4 - Optimal Control | Video
      L06 - Q&A
  • Exercises: Exercise - Q&A
      E04 - Introduction (L4-L6) | Problems
  • TA Support: Joo Ho Lee

10.12.
  • Lectures: L07 - Odometry, SLAM and Localization | Slides
      7.1 - Visual Odometry | Video
      7.2 - Simultaneous Localization and Mapping | Video
      7.3 - Localization | Video
      L07 - Q&A
  • Exercises: Exercise - Q&A
  • TA Support: Joo Ho Lee

17.12.
  • Lectures: L08 - Road and Lane Detection | Slides
      8.1 - Introduction | Video
      8.2 - Road Segmentation | Video
      8.3 - Lane Marking Detection | Video
      8.4 - Lane Detection | Video
      8.5 - Lane Tracking | Video
      L08 - Q&A
  • Exercises: E03+E04 - Discussion
      E05 - Introduction | Problems
  • TA Support: Joo Ho Lee, Apratim Bhattacharyya

Christmas Break
  • No Lecture
  • No Exercise

14.01.
  • Lectures: L09 - Reconstruction and Motion | Slides
      9.1 - Stereo Matching | Video
      9.2 - Freespace and Stixels | Video
      9.3 - Optical Flow | Video
      9.4 - Scene Flow | Video
      L09 - Q&A
  • Exercises: Exercise - Q&A
  • TA Support: Apratim Bhattacharyya

21.01.
  • Lectures: L10 - Object Detection | Slides
      10.1 - Introduction | Video
      10.2 - Performance Evaluation | Video
      10.3 - Sliding Window Object Detection | Video
      10.4 - Region Based CNNs | Video
      10.5 - 3D Object Detection | Video
      L10 - Q&A
  • Exercises: Exercise - Q&A
      E06 - Introduction (L7-L12) | Problems
  • TA Support: Apratim Bhattacharyya

28.01.
  • Lectures: L11 - Object Tracking | Slides
      11.1 - Introduction | Video
      11.2 - Filtering | Video
      11.3 - Association | Video
      11.4 - Holistic Scene Understanding | Video
      L11 - Q&A
  • Exercises: Exercise - Q&A
  • TA Support: Apratim Bhattacharyya

04.02.
  • Lectures: L12 - Decision Making and Planning | Slides
      12.1 - Introduction | Video
      12.2 - Route Planning | Video
      12.3 - Behavior Planning | Video
      12.4 - Motion Planning | Video
      L12 - Q&A
  • Exercises: E05+E06 - Discussion
      Presentation by Challenge Winners
  • TA Support: Apratim Bhattacharyya

Congratulations to the Winners of the Self-Driving Challenges 2019/20!

After the course, Micha Schilling implemented a conditional imitation learning controller on a Raspberry Pi 3, which was added to a basic Arduino self-driving car kit. Here are links to the project page and video: