Department of Computer Science

Safe Deep Learning

In parallel with the rapid development and deployment of deep learning (DL) models, concerns about system safety are also rising, as there is often no guarantee that DL models trained on a limited number of samples will always behave as expected. At the Bosch IoC lab, we target safety-related problems arising from data distribution shifts. When moving from the closed-set training environment in the lab to an open-set operating environment in the real world, it is often difficult to maintain the assumption that the data distribution is the same at run time as at training time. Ignoring potential data distribution shifts and making uninformed model predictions in novel scenarios can result in potentially catastrophic consequences in safety-critical applications. We aim to understand the failure modes of machine learning models, improve their robustness against data distribution shifts, and detect novel concepts that are beyond their cognitive capabilities.

We have open positions for Ph.D. students and Postdocs.

If you are interested in the following topics, please reach out to me by email with your CV attached.

Research Topics:

  • Out-of-distribution detection and generalization under data shifts
  • Long-tail recognition in complex scenes
  • Robust open-world learning
  • Uncertainty estimation and applications 
  • Knowledge distillation from large-scale multi-modal pre-trained models

Requirements:

  • Master's or Doctoral degree in Computer Science, Artificial Intelligence, Mathematics, or related field;
  • Strong background in machine learning and/or computer vision;
  • Excellent programming skills, preferably in Python;
  • Prior experience working with deep learning libraries, such as PyTorch;
  • Solid mathematics background, especially in probability theory, statistics, calculus, and linear algebra;
  • High motivation and creativity;
  • Strong communication, presentation, and writing skills, and an excellent command of English.

Prior publications in relevant machine learning and computer vision venues, as well as prior research experience on related topics, will strengthen your application.

What we offer:

You will have a unique opportunity to work jointly with academic talent in Tübingen and the industrial Bosch research team on challenging topics to create real-world impact.


TEAM

Dr. Dan Zhang

Research Scientist
Bosch Center for Artificial Intelligence

Research Focus:
safe deep learning, with a particular focus on generative models, density estimation, Bayesian methods, unsupervised and self-supervised learning

dan.zhang2@de.bosch.com

Haiwen Huang

PhD Student

Research Focus:
Uncertainty estimation, Out-of-distribution detection and generalization

Co-supervised with Prof. Andreas Geiger

haiwen.huang@uni-tuebingen.de



PROJECTS

Open-set Recognition in Complex Scenes / Haiwen Huang

Computer vision tasks such as object detection and semantic segmentation have made great progress in the last decade. However, in real-world deployment, models inevitably encounter new environments, new objects, and, in general, new data distributions. It is therefore of great importance to build machine learning models that can cope with these out-of-distribution (OOD) scenarios.

As a first step, we aim to learn a general sense of objectness so that the model can detect the presence of novel objects at run time (e.g., wild animals crossing streets). However, the training data often covers only a limited number of object categories, or leaves many objects in the background unannotated. As a result, the model often misclassifies or even overlooks novel objects. We are investigating novelty-aware training methods, e.g., mitigating the suppression of unannotated objects in the background, and exploiting extra sources of knowledge that are easy to access.
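To make the run-time detection idea concrete, here is a minimal sketch of one standard OOD scoring baseline, the maximum softmax probability (Hendrycks & Gimpel): inputs whose top class probability falls below a threshold are flagged as potentially novel. This is an illustrative example only, not the group's method; the threshold and logits are made up for demonstration, and in practice the scores would come from a trained detector or classifier.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    """Maximum softmax probability: higher means more in-distribution-like."""
    return softmax(logits).max(axis=-1)

def flag_novel(logits, threshold=0.7):
    """Flag inputs whose confidence falls below the threshold as potentially novel."""
    return msp_score(logits) < threshold

# A confident prediction vs. a near-uniform (ambiguous) one.
logits = np.array([[8.0, 0.5, 0.2],    # confidently class 0 -> high MSP
                   [0.4, 0.5, 0.45]])  # ambiguous logits -> low MSP
print(flag_novel(logits))  # -> [False  True]
```

In practice the threshold is calibrated on held-out in-distribution data (e.g., to a fixed false-positive rate), and more elaborate scores (energy, Mahalanobis distance, etc.) can replace the plain softmax maximum.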