Research Seminar


In this weekly seminar we present and discuss cutting-edge research from the field of computer vision, as well as more general machine learning papers. Students (Bachelor or Master), PhD students, and postdocs are invited to join us. Each week one paper is presented and discussed. All participants read the paper beforehand and bring remarks to be discussed during the seminar.

Place and time

The reading group takes place virtually via Zoom every Friday from 11 am to 12 pm. If you would like to participate, please write an e-mail to Christian Reiser, who will provide you with further details.

Date Title Speaker
13.01.2022 Deep3DLayout: 3D Reconstruction of an Indoor Layout from a Spherical Panoramic Image Joo Ho Lee
05.01.2022 EG3D: Efficient Geometry-aware 3D Generative Adversarial Networks Bozidar Antic
16.12.2021 Learning to See by Looking at Noise Niklas Hanselmann
09.12.2021 AdaViT: Adaptive Vision Transformers for Efficient Image Recognition Zehao Yu
02.12.2021 Florence: A New Foundation Model for Computer Vision Katrin Renz
25.11.2021 Tracking emerges by colorizing videos Stefan Baur
18.11.2021 Masked Autoencoders Are Scalable Vision Learners Axel Sauer
Inconsistency in Conference Peer Review: Revisiting the 2014 NeurIPS Experiment Kashyap Chitta
28.10.2021 FreeStyleGAN: Free-view Editable Portrait Rendering with the Camera Manifold Katja Schwarz
21.10.2021 ADOP: Approximate Differentiable One-Pixel Point Rendering Christian Reiser
01.10.2021 Pathdreamer: A World Model for Indoor Navigation Apratim Bhattacharyya
24.09.2021 Neural Complex Luminaires: Representation and Rendering Joo Ho Lee
17.09.2021 Past research Zehao Yu
10.09.2021 Exploring Data-Efficient 3D Scene Understanding with Contrastive Scene Contexts Stefan Baur
03.09.2021 BundleFusion: Real-time Globally Consistent 3D Reconstruction using On-the-fly Surface Re-integration Carolin Schmitt
20.08.2021 Perceiver IO: A General Architecture for Structured Inputs & Outputs Katrin Renz
06.08.2021 Rethinking and Improving the Robustness of Image Style Transfer Niklas Hanselmann
30.07.2021 Understanding self-supervised Learning Dynamics without Contrastive Pairs Kashyap Chitta
23.07.2021 Pedestrian Prediction under Uncertainty and Multi-modality Apratim Bhattacharyya
02.07.2021 Denoising Diffusion Probabilistic Models Axel Sauer
25.06.2021 On the Spectral Bias of Neural Networks Katja Schwarz
18.06.2021 Partial success in closing the gap between human and machine vision Christian Reiser
11.06.2021 On the Convergence of Adam and Beyond Xu Chen
30.04.2021 The Lumigraph Joo Ho Lee
16.04.2021 Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations Yiyi Liao
09.04.2021 Object-Centric Learning with Slot Attention Michael Niemeyer
02.04.2021 Learning to Simulate Stefan Baur
26.03.2021 Building Rome in a Day Carolin Schmitt
26.02.2021 The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks Axel Sauer
19.02.2021 Group Normalization Niklas Hanselmann
12.02.2021 LambdaNetworks: Modeling long-range Interactions without Attention
05.02.2021 Neural Tangent Kernel: Convergence and Generalization in Neural Networks Katja Schwarz
29.01.2021 Invited Talk: Efficient Transformers Angelos Katharopoulos
22.01.2021 Spatial Transformer Networks Songyou Peng
15.01.2021 What Do Single-view 3D Reconstruction Networks Learn?
08.01.2021 An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale Aditya Prakash
18.12.2020 SinGAN: Learning a Generative Model from a Single Natural Image Christian Reiser
11.12.2020 Deep Equilibrium Models Michael Oechsle
27.11.2020 CVPR Submissions
30.10.2020 Solving Rubik’s Cube with a Robot Hand Joo Ho Lee
09.10.2020 Secrets of Optical Flow Estimation and Their Principles Stefan Baur
02.10.2020 Accurate, Dense, and Robust Multi-View Stereopsis Carolin Schmitt
25.09.2020 Unsupervised Learning of Probably Symmetric Deformable 3D Objects from Images in the Wild Michael Oechsle
18.09.2020 Neural Processes Yiyi Liao
11.09.2020 Relational Inductive Biases, Deep Learning and Graph Networks, DeepMind Aditya Prakash
Your classifier is secretly an energy-based model and you should treat it like one Niklas Hanselmann