Seminar: Current Topics in Deep Neural Networks

 

Instructors: Faranak Shamsafar, Valentin Bolz
Preliminary Meeting: 26.04.2019, 10:00 AM, Sand 1, A301
Credits: 3 LP (new PO), 4 LP (old PO)
Weekly Meetings: Fridays, 10:00 AM-12:00 PM, Room C424 (Sand 14, third floor)
Language: English
Max. Participants: 12

Description

Deep learning is a subfield of machine learning that has achieved state-of-the-art results in many areas of artificial intelligence, including computer vision and robotics, and has been advancing rapidly in recent years. This seminar covers current topics in the field of deep learning. It is organized as a paper-reading and discussion course: a collection of papers from selected journals and conferences is provided for the students to choose from, and in each meeting one topic is presented by a student.

Students are graded on: a) their presentation, b) a short report (12-15 pages) on the subject, and c) their participation in the post-presentation discussions. Attendance is therefore required to pass the course.

The first meeting will be held on 26 April. In this session, each student chooses one topic; presentations start two weeks later, with one presentation per meeting. Participation in the preliminary meeting is required. If you are unable to attend this session, please write an email to faranak.shamsafar@uni-tuebingen.de.

Important note: If more than 12 participants attend the preliminary meeting, students who have chosen the seminar on ILIAS have priority.

Requirements

This is a BSc seminar (intended for students after the 5th semester). Interested MSc students are welcome as well.
There are no formal requirements. However, it is helpful to have a good background in mathematics (linear algebra, statistics).

Topics

The following is a preliminary list of topics you can choose from. You can access most of the resources with an online search from within the university network (computer science pools, ZDV pools, VPN client, etc.). For the literature search, it is recommended to use Google Scholar, CiteSeer, and arXiv; very recent submissions appear on arXiv first. Papers published at CVPR or ICCV are available on CVF Open Access, and NIPS papers can be found in the NIPS proceedings. You can also download the PDFs from the authors' webpages.

     Major Network Architectures I (AlexNet, ZFNet, NIN) 

     Major Network Architectures II (GoogLeNet/Inception, VGGNet, ResNet)

     Major Network Architectures III (DenseNet, Xception, CapsNet)

     ImageNet and MS COCO Datasets (tasks, evaluation metrics, winners of competitions, etc.)

     Training Methods (Gradient descent optimization algorithms: SGD, Momentum, Adagrad, RMSprop, Adam, etc.)

     Training Strategies (dropout, batch normalization, group normalization, etc.; a short illustrative code sketch for this and the previous topic follows the list)

     Object Detection

     Image Segmentation

     Recurrent Neural Networks (RNNs)

     Generative Adversarial Nets (GANs)

     Architecture Search (ENAS, DARTS, ProxylessNAS, etc.)
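
For the two training-related topics above, the following minimal sketch shows, purely as orientation, where the listed techniques typically appear in code. It assumes PyTorch as the framework (any deep learning library would do); the layer sizes, learning rate, and random data are arbitrary placeholders and not part of the seminar material.

    # Minimal illustrative sketch (assumes PyTorch is installed): shows where dropout,
    # batch normalization, and a gradient-descent optimizer (here Adam) appear in a
    # typical training loop. The data is random and serves only to make the code run.
    import torch
    import torch.nn as nn

    # Small fully connected classifier with batch norm and dropout layers.
    model = nn.Sequential(
        nn.Linear(32, 64),
        nn.BatchNorm1d(64),   # normalizes activations per mini-batch (Ioffe & Szegedy, 2015)
        nn.ReLU(),
        nn.Dropout(p=0.5),    # randomly zeroes units during training (Srivastava et al., 2014)
        nn.Linear(64, 10),
    )

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # adaptive variant of SGD
    criterion = nn.CrossEntropyLoss()

    x = torch.randn(128, 32)            # dummy inputs: batch of 128 samples, 32 features
    y = torch.randint(0, 10, (128,))    # dummy class labels in [0, 10)

    model.train()                       # enables dropout and batch-norm statistics updates
    for step in range(100):
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()                 # backpropagation
        optimizer.step()                # one gradient-descent update

Swapping torch.optim.Adam for torch.optim.SGD (with or without momentum), Adagrad, or RMSprop changes only the optimizer line, which is the comparison at the heart of the "Training Methods" topic.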

Useful Documents

Recommended Literature List

Here is an initial list of useful literature.

[1] ImageNet classification with deep convolutional neural networks. Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton, NIPS 2012.
[2] Visualizing and understanding convolutional networks. Zeiler, Matthew D., and Rob Fergus, ECCV 2014.
[3] Network in network. Lin, Min, Qiang Chen, and Shuicheng Yan, ICLR 2014.
[4] Going deeper with convolutions. Szegedy, Christian, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich, CVPR 2015.
[5] Very deep convolutional networks for large-scale image recognition. Simonyan, Karen, and Andrew Zisserman, ICLR 2015.
[6] Deep residual learning for image recognition. He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, CVPR 2016.
[7] Xception: Deep learning with depthwise separable convolutions. Chollet, François, CVPR 2017.
[8] Dynamic routing between capsules. Sabour, Sara, Nicholas Frosst, and Geoffrey E. Hinton, NIPS 2017.
[9] DARTS: Differentiable architecture search. Liu, Hanxiao, Karen Simonyan, and Yiming Yang, ICLR 2019.
[10] Dropout: a simple way to prevent neural networks from overfitting. Srivastava, Nitish, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov, The Journal of Machine Learning Research, 2014.
[11] Batch normalization: Accelerating deep network training by reducing internal covariate shift. Ioffe, Sergey, and Christian Szegedy, ICML 2015.
[12] Group normalization. Wu, Yuxin, and Kaiming He, ECCV 2018.
[13] Generative adversarial nets. Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, NIPS 2014.
[14] You only look once: Unified, real-time object detection. Redmon, Joseph, Santosh Divvala, Ross Girshick, and Ali Farhadi, CVPR 2016.
[15] Mask R-CNN. He, Kaiming, Georgia Gkioxari, Piotr Dollár, and Ross Girshick, ICCV 2017.
[16] Deep Learning. Goodfellow, Ian, Yoshua Bengio, and Aaron Courville, MIT Press, 2016.
[17] ImageNet: A large-scale hierarchical image database. Deng, Jia, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei, CVPR 2009.
[18] Microsoft COCO: Common objects in context. Lin, Tsung-Yi, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick, ECCV 2014.
[19] MorphNet: Fast & simple resource-constrained structure learning of deep networks. Gordon, Ariel, Elad Eban, Ofir Nachum, Bo Chen, Hao Wu, Tien-Ju Yang, and Edward Choi, CVPR 2018.