Deep learning is a subfield of machine learning that has achieved state-of-the-art results in many areas of artificial intelligence, including computer vision and robotics, and has advanced rapidly in recent years. This seminar covers current topics in the field of deep learning. It is organized as a paper reading and discussion seminar. A collection of papers from selected journals and conferences is provided for the students to choose from. In each meeting, two topics are presented by the students.
Students are graded based on: a) their presentation, b) a short (10-15 page) report they write on their topic, and c) their participation in the post-presentation discussions. Attendance is therefore required to pass the course.
The date of the first meeting is given in the table above. In this session, all available topics are presented. The presentations start two to four weeks after the preliminary meeting, with two presentations per meeting. If you are unable to attend the preliminary meeting, please write an email to valentin.bolz. @uni-tuebingen.de
Important note: If more than 12 participants attend the preliminary meeting, students who have registered for the seminar on ILIAS have priority. If you are on the waiting list, you will get access to the ILIAS page shortly before the meeting.
This is a BSc Seminar (after 5th semester). Interested MSc students are welcome as well.
There are no formal requirements. However, it is helpful to have a good background in mathematics (linear algebra, statistics).
The following is a preliminary list of topics you can choose from. You can access most resources via an online search from the university network (computer science pools, ZDV pools, VPN client, etc.). For the literature search, it is recommended to use Google Scholar, CiteSeer, or arXiv. Very recent submissions can be found on arXiv. Papers published at CVPR or ICCV are available on CVF Open Access, and NIPS proceedings are available online. You can also download the PDFs from the authors' webpages.
Major Network Architectures and their Datasets
Framework Comparison (TensorFlow, PyTorch, ...)
ImageNet and MS COCO Datasets (tasks, evaluation metrics, winners of competitions, etc.)
Training Methods (Gradient descent optimization algorithms: SGD, Momentum, Adagrad, RMSprop, Adam, etc.)
Training Strategies (dropout, batch normalization, group normalization, etc.)
Architecture Search (ENAS, DARTS, ProxylessNAS, etc.)
Graph Neural Networks
Recurrent Neural Networks (RNNs)
Generative Adversarial Nets (GANs)
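To give a flavor of the "Training Methods" topic above, here is a minimal sketch of two of the listed update rules, plain SGD and SGD with momentum, minimizing the toy objective f(w) = (w - 3)^2. All function names and hyperparameter values here are illustrative choices, not tied to any particular framework's API.

```python
def grad(w):
    # Gradient of the toy objective f(w) = (w - 3)^2.
    return 2.0 * (w - 3.0)

def sgd(w, lr=0.1, steps=100):
    # Plain gradient descent: step against the gradient.
    for _ in range(steps):
        w -= lr * grad(w)
    return w

def sgd_momentum(w, lr=0.1, beta=0.9, steps=300):
    # Momentum: accumulate a velocity term that smooths successive gradients.
    v = 0.0
    for _ in range(steps):
        v = beta * v + grad(w)
        w -= lr * v
    return w

# Both variants drive w toward the minimizer w = 3 on this objective.
w_sgd = sgd(0.0)
w_mom = sgd_momentum(0.0)
```

On this one-dimensional quadratic both methods converge to the same point; the seminar papers discuss when and why momentum-style and adaptive methods (Adagrad, RMSprop, Adam) differ in practice.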