Master Theses at the Chair of Cognitive Systems (Prof. Dr. Andreas Zell)

Students who want to write a master thesis should have attended at least one lecture given by Prof. Zell and passed it with a good or at least satisfactory grade. Alternatively, they may have obtained the background knowledge relevant for the thesis from other, similar lectures.

Comparison of Hyper-Parameter Optimization Methods for Neural Architecture Search

Mentor: Kevin Laube

Email: kevin.laube@uni-tuebingen.de

Description: The topology of a neural network can be described as a series of integers that activates certain paths in an over-complete super-network. Finding an optimal architecture in Neural Architecture Search (NAS) can thus be stated as a hyper-parameter optimization (HPO) problem: find the integer series that maximizes e.g. accuracy while minimizing latency/FLOPs. The goal of this thesis is to implement state-of-the-art HPO methods such as BOHB and Hyperband for NAS in our currently developed framework, and to compare them against the widely used NSGA-II method.
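
As a rough illustration of the search-space formulation (not of BOHB itself), the following Python sketch encodes an architecture as an integer series and filters randomly sampled candidates to a Pareto front over accuracy and FLOPs. The layer choices, sample_architecture and evaluate are placeholders, not part of our framework.

    import random

    # Hypothetical search space: each position picks one of several candidate
    # operations in an over-complete super-network (sizes are placeholders).
    NUM_CHOICES_PER_LAYER = [4, 4, 4, 4, 4, 4]

    def sample_architecture():
        # An architecture is simply a series of integers.
        return [random.randrange(n) for n in NUM_CHOICES_PER_LAYER]

    def evaluate(architecture):
        # Placeholder: in the thesis this would evaluate the corresponding
        # super-network path and measure accuracy and latency/FLOPs.
        accuracy = random.random()
        flops = sum(architecture) + 1
        return accuracy, flops

    def pareto_front(candidates):
        # Keep candidates that are not dominated in (accuracy up, FLOPs down).
        front = []
        for arch, (acc, flops) in candidates:
            dominated = any(a >= acc and f <= flops and (a > acc or f < flops)
                            for _, (a, f) in candidates)
            if not dominated:
                front.append((arch, (acc, flops)))
        return front

    candidates = [(a, evaluate(a)) for a in (sample_architecture() for _ in range(50))]
    for arch, (acc, flops) in pareto_front(candidates):
        print(arch, acc, flops)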

Requirements: good grade in a Neural Network course, experience with PyTorch

Robust Stroke Learning from Demonstration for Table Tennis Robots

Mentor: Yapeng Gao

Email: yapeng.gao@uni-tuebingen.de

Description: Learning from demonstration (LfD) refers to the process of transferring new skills to a robot by relying on demonstrations from a human. The goal of this topic is to mimic the demonstration (racket pose + velocity) at the hitting position on our table tennis robot. The student should implement a VR environment by integrating a Virtual Reality (VR) device into our existing physical simulation, which is a good proxy for capturing human demonstrations in the real world. Then, a general imitation learning method such as Generative Adversarial Imitation Learning (GAIL) can be used to learn a robust stroke policy for table tennis robots.
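
For orientation, a minimal PyTorch sketch of the GAIL ingredients is given below: a discriminator over (state, action) pairs and the surrogate reward it provides to a policy optimizer. The state/action dimensions and network sizes are placeholders, and the actual policy update (e.g. with PPO) is omitted.

    import torch
    import torch.nn as nn

    # The state could contain the ball state and racket pose, the action the
    # racket velocity command; the dimensions below are placeholders.
    STATE_DIM, ACTION_DIM = 12, 6

    discriminator = nn.Sequential(
        nn.Linear(STATE_DIM + ACTION_DIM, 128), nn.ReLU(),
        nn.Linear(128, 1))
    optimizer = torch.optim.Adam(discriminator.parameters(), lr=3e-4)
    bce = nn.BCEWithLogitsLoss()

    def discriminator_step(expert_sa, policy_sa):
        # Expert (state, action) pairs are labeled 1, policy pairs 0.
        logits_e = discriminator(expert_sa)
        logits_p = discriminator(policy_sa)
        loss = bce(logits_e, torch.ones_like(logits_e)) + \
               bce(logits_p, torch.zeros_like(logits_p))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    def gail_reward(policy_sa):
        # Surrogate reward handed to the RL algorithm that trains the policy.
        with torch.no_grad():
            return -torch.log(1 - torch.sigmoid(discriminator(policy_sa)) + 1e-8)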

Requirements: Python, English speaking, skilled at table tennis

Data Augmentation for Hyperspectral Imaging

Mentor: Leon Varga

Email: leon.varga.hdh@gmail.com

Description: Data augmentation techniques extend the training set and improve the performance of trained models. For color images, there are many ways to augment the data without changing their meaning (for example rotation, noise, or color space transformations).

Hyperspectral imaging uses cameras that can additionally record wavelengths outside the visible range. For hyperspectral recordings, the data augmentation techniques used for color images do not improve model performance. In this thesis, new techniques for augmenting hyperspectral recordings should be developed and tested.
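
A minimal NumPy sketch of spectrum-aware augmentations is shown below; the specific transformations (per-band gain, spectral shift, noise) and their parameters are only plausible starting points, and whether they actually help is exactly the question of this thesis.

    import numpy as np

    def augment_hyperspectral(cube, rng=None):
        """Sketch of spectrum-aware augmentations for a (H, W, bands) cube."""
        if rng is None:
            rng = np.random.default_rng()
        cube = cube.astype(np.float32)
        # 1) Random per-band gain, simulating illumination/sensor variation.
        gains = rng.uniform(0.9, 1.1, size=cube.shape[-1])
        cube = cube * gains
        # 2) Small spectral shift: roll the band axis by one position.
        if rng.random() < 0.5:
            cube = np.roll(cube, shift=int(rng.choice([-1, 1])), axis=-1)
        # 3) Additive Gaussian noise on the whole cube.
        cube = cube + rng.normal(0.0, 0.01, size=cube.shape)
        # 4) A geometric flip, which leaves the spectra themselves untouched.
        if rng.random() < 0.5:
            cube = cube[:, ::-1, :]
        return cube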

Requirements: Understanding of image data and neural networks, a statistical background is helpful, Python, C/C++, Linux

Accelerate movement of a table tennis robot

Mentor: Jonas Tebbe

Email: jonas.tebbe@uni-tuebingen.de

Description: The Reflexxes library calculates the trajectories of our table tennis robot. The trajectories can be controlled either in Cartesian space or in the joint space of the robot. Cartesian-controlled trajectories tend to exceed the force limits of the robot, while joint-space-controlled trajectories are longer and more sensitive to changes of the target position. The student has to combine the two approaches in a way that guarantees that the target point and target velocity are still reached with high accuracy. The approach shall be developed in simulation and finally evaluated on our KUKA KR6 R900 Agilus robot arm.

Requirements: Lecture on robotics.

A Comparison of Robot Arm Motion Planners

Mentor: Mario Laux

Email: mario.laux@uni-tuebingen.de

Description: The aim of this thesis is to review, evaluate and compare different motion planners for robot arms. Suitable metrics have to be developed. The corresponding simulations and real-world experiments have to be analyzed statistically.
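
As an example of the kind of metrics that could be computed from a planned joint trajectory, consider the NumPy sketch below (path length, a smoothness proxy, success rate); the concrete metric set is part of the thesis, and planning time would additionally be measured, e.g. with time.perf_counter().

    import numpy as np

    def path_length(joint_trajectory):
        # Sum of joint-space distances between consecutive waypoints, shape (T, dof).
        diffs = np.diff(joint_trajectory, axis=0)
        return float(np.linalg.norm(diffs, axis=1).sum())

    def smoothness(joint_trajectory, dt):
        # Mean squared joint acceleration as a simple smoothness proxy.
        acc = np.diff(joint_trajectory, n=2, axis=0) / dt**2
        return float((acc ** 2).mean())

    def success_rate(results):
        # results: list of booleans, one per planning query.
        return sum(results) / len(results)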

Requirements: C++, calculus, statistics, ROS, DNN, MoveIt

Pointcloud Clustering to Segment Unknown Objects

Mentor: Daniel Weber

Email: daniel.weber@uni-tuebingen.de

Description: The recognition and segmentation of objects is often solved by means of neural networks. However, for unknown objects a machine learning approach is not always appropriate. The goal of this thesis is to cluster point clouds in order to segment unknown objects. The point clouds are to be recorded with a Kinect v2, and the code should be implemented in Python or C++ and run on Linux. Different methods (edge-based, region-based, graph-based) can be considered. The developed segmentation should be tested with different objects and environments.
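
A possible starting point is Euclidean clustering with Open3D (a recent Open3D version is assumed), sketched below; the file name and all thresholds are placeholders that would have to be adapted to the Kinect v2 recordings.

    import numpy as np
    import open3d as o3d

    # Load a point cloud (e.g. converted from a Kinect v2 depth frame).
    pcd = o3d.io.read_point_cloud("scene.pcd")  # placeholder file name
    pcd = pcd.voxel_down_sample(voxel_size=0.005)

    # Optionally remove the dominant plane (table/floor) before clustering.
    plane_model, inliers = pcd.segment_plane(distance_threshold=0.01,
                                             ransac_n=3, num_iterations=1000)
    objects = pcd.select_by_index(inliers, invert=True)

    # Euclidean clustering via DBSCAN; each label is one object candidate,
    # label -1 marks noise points.
    labels = np.array(objects.cluster_dbscan(eps=0.02, min_points=50))
    print("found", labels.max() + 1, "clusters")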

Requirements: Good programming skills

Weakly Supervised Instance Segmentation using FourierNet

Mentor: Hamd ul Moqeet Riaz

Email: hamd.riaz@uni-tuebingen.de

Description: It is tedious to generate pixel-level annotated data for training deep neural networks. However, weakly supervised networks can be utilized to generate semantic labels for datasets having only image-level annotations. FourierNet is a fully supervised network which utilizes a Fourier series to decode the shape (mask) of an object from a compressed feature map. We want to train FourierNet in a weakly supervised manner by generating pseudo labels from Class Attention Maps (CAMs). FourierNet should be trained and tested on instance segmentation benchmarks such as MS COCO and PASCAL VOC.
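
To illustrate the contour decoding, the NumPy sketch below reconstructs a closed contour from complex Fourier coefficients; FourierNet's exact parameterization may differ, so this is only meant to show the idea of a truncated Fourier series describing a mask boundary.

    import numpy as np

    def contour_from_fourier(coeffs, num_points=360):
        """Reconstruct a closed 2D contour from complex Fourier coefficients.

        coeffs: array of shape (K,) with complex coefficients c_k of the series
        z(t) = sum_k c_k * exp(2j * pi * k * t), t in [0, 1).
        Returns a (num_points, 2) array of contour points (x, y).
        """
        t = np.linspace(0.0, 1.0, num_points, endpoint=False)
        k = np.arange(len(coeffs))
        z = (coeffs[None, :] * np.exp(2j * np.pi * t[:, None] * k[None, :])).sum(axis=1)
        return np.stack([z.real, z.imag], axis=1)

    # Example: a circle of radius 10 centered at (50, 50) needs only two coefficients.
    circle = contour_from_fourier(np.array([50 + 50j, 10 + 0j]))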

Requirements: Knowledge in Deep learning and computer vision, Programming in Python (PyTorch)

6D Pose refinement for 3D objects

Mentor: Timon Höfer

Email: timon.hoefer@uni-tuebingen.de

Description: One of the most important components of modern computer vision systems for applications such as mobile robotic manipulation and augmented reality is a reliable 6D object detection module. 6D pose estimation is the task of detecting the 3D location and 3D orientation of an object. State-of-the-art methods reach accuracies of around 70-80% on the publicly available LineMOD dataset without any kind of refinement. With pose refinement, this can be increased to 80-95%. In this project, you will analyze different pose refinement methods and their effect on the performance of state-of-the-art methods for 6D object pose estimation. Since pose refinement is slow, you should also think about methods to increase its speed so that it can be used in real-time tracking.
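
One classical refinement baseline is ICP between the object model and the observed point cloud, sketched below with Open3D's registration pipeline (a recent Open3D version is assumed); learned refiners are an alternative, and the threshold is a placeholder.

    import open3d as o3d

    def refine_pose(model_pcd, scene_pcd, initial_pose, threshold=0.01):
        """Refine an initial 6D pose estimate (4x4 matrix) with point-to-plane ICP."""
        scene_pcd.estimate_normals()  # point-to-plane ICP needs target normals
        result = o3d.pipelines.registration.registration_icp(
            model_pcd, scene_pcd, threshold, initial_pose,
            o3d.pipelines.registration.TransformationEstimationPointToPlane())
        return result.transformation  # refined 4x4 pose of the model in the scene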

Requirements: Python, Basic knowledge in Computer Vision

Deep Learning-based Visual Localization System

Mentor: Chenhao Yang

Email: chenhao.yang@uni-tuebingen.de

Description: We have collected a UAV image database covering a variety of typical outdoor environments in urban areas, which can be used for deep learning-based UAV outdoor localization. The database can examine a deep learning model's ability to localize images under large viewing-angle changes, changing lighting conditions, and even across scenes. Based on the real images and 3D modeling methods, the same number (about 10k) of synthetic images were rendered with accurate poses. Cross-domain generalization is assessed by training deep learning models on the synthetic images and testing them on the real images. We are planning to train a camera pose estimation model with the synthetic images in the dataset and then test its localization ability on real flight videos in real time.
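
A PoseNet-style regression model is one possible baseline, sketched below in PyTorch (a recent torchvision is assumed); the backbone, the output parameterization (translation + quaternion) and the loss weighting are placeholders.

    import torch
    import torch.nn as nn
    import torchvision

    class PoseRegressor(nn.Module):
        """Absolute pose regression: image -> (translation, unit quaternion)."""

        def __init__(self):
            super().__init__()
            backbone = torchvision.models.resnet18(weights=None)
            backbone.fc = nn.Identity()      # keep the 512-d feature vector
            self.backbone = backbone
            self.fc_t = nn.Linear(512, 3)    # x, y, z
            self.fc_q = nn.Linear(512, 4)    # quaternion

        def forward(self, images):
            features = self.backbone(images)
            t = self.fc_t(features)
            q = nn.functional.normalize(self.fc_q(features), dim=-1)
            return t, q

    def pose_loss(t_pred, q_pred, t_gt, q_gt, beta=50.0):
        # Weighted sum of translation and rotation errors; beta is a tunable weight.
        return nn.functional.mse_loss(t_pred, t_gt) + \
               beta * nn.functional.mse_loss(q_pred, q_gt)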

Requirements: ROS experience, Python or C++ experience

Exploiting Drone Metadata for Multi Object Tracking (MOT)

Mentor: Martin Meßmer

Email: martin.messmer@uni-tuebingen.de

Description: Although some deep learning methods like correlation filters and Siamese networks show great promise for tackling the problem of multi-object tracking, those approaches are far from working perfectly. Therefore, in specific use cases it is necessary to impose additional priors or leverage additional data. Luckily, when working with drones, there is free metadata to work with, such as the height or velocity of the drone. In this thesis, the student should develop useful ideas on how to exploit this data to increase the performance of a MOT model, and also implement those ideas and compare them with other approaches.
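
As one simple example of such a prior, the sketch below uses the drone's altitude and a pinhole-camera approximation to gate detections by their expected pixel size; the nadir-view assumption, the assumed object height and the tolerance are all placeholders.

    import numpy as np

    def expected_object_height_px(real_height_m, altitude_m, focal_length_px):
        """Pinhole approximation of an object's image height seen from above.

        Assumes a roughly nadir-looking camera; altitude_m comes from the
        drone's barometer metadata.
        """
        return focal_length_px * real_height_m / max(altitude_m, 1e-3)

    def plausible_detections(boxes, altitude_m, focal_length_px,
                             real_height_m=1.8, tolerance=0.5):
        """Keep detections whose pixel height roughly matches the expected size."""
        boxes = np.asarray(boxes, dtype=np.float32)  # (N, 4) as x1, y1, x2, y2
        heights = boxes[:, 3] - boxes[:, 1]
        expected = expected_object_height_px(real_height_m, altitude_m, focal_length_px)
        keep = np.abs(heights - expected) <= tolerance * expected
        return boxes[keep]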

Requirements: deep learning knowledge, Python, good English or German

Developing a Meta-Data-Aware Object Detector Running in Real-Time

Mentor: Benjamin Kiefer

Email: benjamin.kiefer@uni-tuebingen.de

Description: UAV imagery differs from other scenarios in that the images/videos are taken at dynamic altitudes and angles and therefore from varying viewpoints. At the same time, an object detector running in real time on a UAV requires a lightweight neural network. Off-the-shelf object detectors are either too slow or do not consider the environmental factors inherent in object detection from UAVs. In this thesis, the student should study and develop a lightweight object detector that can take its environment into account, given e.g. barometer (altitude) and camera angle measurements, and that runs in real time on a small GPU such as the Nvidia Jetson Xavier.

Requirements: Good knowledge of deep neural networks, PyTorch and TensorFlow.

Object Detection and Following with Quadcopters

Mentor: Ya Wang

Email: francis.wang@uni-tuebingen.de

Description: Nowadays, flying robots are widely applied and quite popular in many fields of research. Quadcopters are cheap, lightweight, and can be programmed for various tasks. The goal of this master thesis is to detect a moving object (a person or a wheeled robot) and follow it at a given height in an outdoor environment. The robot should detect obstacles and avoid them using efficient maps e.g. OctoMap. Using existing methods (e.g. DroNet with traditional object detection or YOLOv4 object detection with assisted OptiTrack/GPS) is acceptable, and using a novel algorithm is encouraged.

Requirements: Python or C++, ROS

Robot Navigation in Uneven Terrain Using Deep Reinforcement Learning

Mentor: Nuri Benbarka

Email: nuri.benbarka@uni-tuebingen.de

Description: Outdoor robot navigation usually takes place on uneven surfaces, and navigating on these surfaces requires 3D or 2.5D maps. Our group has previously solved this problem using model-based path planning, with excellent results. In this thesis, we want to use reinforcement learning for the same task. However, instead of training from scratch, we will use our existing algorithm as a demonstrator to train the network. This method can later be expanded to handle complex dynamic objects such as pedestrians.
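
A minimal sketch of the demonstrator idea is behaviour cloning as a warm start, shown below in PyTorch; the observation/action dimensions and the network are placeholders, and the subsequent reinforcement learning fine-tuning is omitted.

    import torch
    import torch.nn as nn

    # The existing model-based planner acts as the demonstrator and produces
    # (observation, action) pairs; the policy is pre-trained on them.
    OBS_DIM, ACTION_DIM = 64, 2   # e.g. a local height-map patch and a velocity command

    policy = nn.Sequential(nn.Linear(OBS_DIM, 256), nn.ReLU(),
                           nn.Linear(256, 256), nn.ReLU(),
                           nn.Linear(256, ACTION_DIM))
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

    def behaviour_cloning_step(observations, demonstrator_actions):
        loss = nn.functional.mse_loss(policy(observations), demonstrator_actions)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()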

Image fusion with multi-view 3D object detectors

Mentor: Nuri Benbarka

Email: nuri.benbarka@uni-tuebingen.de

Description: For many years, there were mainly two ways to approach 3D object detection in point clouds: processing them either in the perspective view or in the bird's-eye view. Each approach has its advantages and disadvantages. Recent work showed that combining features from the two views increases the performance dramatically. An uninvestigated addition to this work is to combine image features with the perspective-view features of the point cloud. In this thesis, we will implement this idea and see how it affects the performance.

Requirements: Experience with PyTorch.

Self Supervised Learning for Deep Stereo Vision

Mentor: Rafia Rahim

Email: rafia.rahim@uni-tuebingen.de

Description: Self-supervised learning is showing great promise for solving various computer vision tasks. Its advantage is being able to leverage unlabelled data to learn a proxy task and then tune the network to perform the target task, in our case deep stereo vision. The goal here is to explore different self-supervised methods, along with geometric constraints, to train deep stereo networks in a self-supervised manner. The advantage of this approach is that we can make use of a huge quantity of unlabeled data to learn better stereo vision models.
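
A common self-supervised signal for stereo is a photometric reconstruction loss, sketched below in PyTorch: the right image is warped to the left view with the predicted disparity and compared to the left image. SSIM terms and occlusion handling, which are usually added, are omitted here.

    import torch
    import torch.nn.functional as F

    def warp_right_to_left(right, disparity):
        """Warp the right image to the left view using a left disparity map.

        right: (B, C, H, W), disparity: (B, 1, H, W) in pixels (positive values).
        """
        b, _, h, w = right.shape
        xs = torch.linspace(-1, 1, w, device=right.device).view(1, 1, w).expand(b, h, w)
        ys = torch.linspace(-1, 1, h, device=right.device).view(1, h, 1).expand(b, h, w)
        # Shift the sampling grid to the left by the (normalized) disparity.
        xs = xs - 2.0 * disparity.squeeze(1) / (w - 1)
        grid = torch.stack([xs, ys], dim=-1)
        return F.grid_sample(right, grid, align_corners=True)

    def photometric_loss(left, right, disparity):
        # Self-supervised signal: the warped right image should match the left image.
        reconstructed_left = warp_right_to_left(right, disparity)
        return (left - reconstructed_left).abs().mean()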

Requirements: Good knowledge of deep neural networks, experience with PyTorch or TensorFlow.

Improving object instance segmentation in a cluttered scene

Mentor: Dr. Faranak Shamsafar

Email: faranak.shamsafar@uni-tuebingen.de

Description: Object instance segmentation is a highly demanding computer vision task. Deep neural networks have obtained impressive results in recent years, but their performance is still in doubt when confronted with highly cluttered, dense scenes. The goal of this thesis is to investigate the performance of recent state-of-the-art object instance segmentation methods, such as Mask R-CNN, YOLACT and EmbedMask, on images with a high number of objects in very close vicinity to each other and with occlusions. The performance should be evaluated in terms of accuracy and speed. Finally, the best method should be improved to perform more accurately and robustly in a cluttered scene.

Requirements: Experience in DNN, Python and PyTorch/TensorFlow

Sim2real transfer of depth images

Mentor: Dr. Faranak Shamsafar

Email: faranak.shamsafar@uni-tuebingen.de

Description: One of the main issues in deep learning is providing thousands of data samples for the networks during the training phase. While data labeling for tasks like classification can be handled with reasonable effort, it can be highly time-consuming for problems like object instance segmentation. Therefore, many researchers aim to create datasets in a simulation environment. The simulated scene, however, exhibits a large gap to the real data distribution. This thesis aims to bridge the gap between the synthetic world and reality for depth images by using techniques like domain randomization. The applicability of the method should be demonstrated on an object instance segmentation task.
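
The NumPy sketch below illustrates what randomization of synthetic depth images could look like (depth-dependent noise, missing measurements, global scale/offset); the noise model and all parameters are assumptions to be tuned or randomized per image, not a validated sensor model.

    import numpy as np

    def randomize_synthetic_depth(depth, rng=None):
        """Make a clean synthetic depth image (in meters) look more sensor-like."""
        if rng is None:
            rng = np.random.default_rng()
        depth = depth.astype(np.float32).copy()
        # Depth-dependent Gaussian noise (farther pixels are noisier).
        depth += rng.normal(0.0, 0.005, size=depth.shape) * (1.0 + depth)
        # Random missing measurements ("holes"), as produced by real depth sensors.
        holes = rng.random(depth.shape) < 0.02
        depth[holes] = 0.0
        # Random global scale/offset to vary the camera/scene configuration.
        depth = depth * rng.uniform(0.98, 1.02) + rng.uniform(-0.01, 0.01)
        return depth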

Requirements: Experience in DNN, Python and PyTorch/TensorFlow