Excellence Strategy

Robert Bosch GmbH

Robert Bosch GmbH has sent two outstanding, internationally renowned scientists to the University of Tübingen to work on the following topics as part of its Industry-on-Campus projects:

Data-Efficient Deep Learning

Dr. Anna Khoreva

Data is essential to achieving good performance and generalization of deep learning (DL) models. However, restrictions in the data collection process (e.g., high costs, lack of resources, privacy constraints) often mean there is not enough training data to reach satisfactory production performance. At the Bosch IoC Lab, we investigate unsupervised and semi-supervised methods that extract patterns from the available data and use them to synthesize data points that are almost indistinguishable from real data. This can vastly reduce the need to collect new data and improve the performance of DL methods trained on a mix of real and synthetic data. Data-efficient DL methods have huge cost-saving potential and can also make algorithms safer and more robust in safety-critical situations, e.g., by synthesizing rare data points or dangerous scenarios for which almost no real-world data is available.
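The idea of training on a mix of real and synthetic data can be illustrated with a toy sketch. The research described here uses deep generative models; the example below substitutes a trivially simple generative model (a fitted Gaussian) purely to show the augmentation workflow, and all data and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scarce "real" training data: a small sample from an
# unknown data-generating process.
real_data = rng.normal(loc=5.0, scale=2.0, size=50)

# Fit a generative model to the available data. Here this is just a
# Gaussian; in practice it would be a deep generative model (e.g., a GAN).
mu, sigma = real_data.mean(), real_data.std()

# Synthesize additional points that follow the learned distribution.
synthetic_data = rng.normal(loc=mu, scale=sigma, size=500)

# Augment the scarce real data with synthetic samples for downstream training.
augmented = np.concatenate([real_data, synthetic_data])

print(len(augmented))
```

The same pattern applies when the generative model is conditioned on rare classes or scenarios: the synthesizer is trained once, then sampled to fill gaps in the real dataset.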

Contact

Dr. Anna Khoreva

Research Group Leader at Bosch Center for Artificial Intelligence (BCAI)

anna.khoreva@de.bosch.com

Research Focus: data-efficient deep learning, with a particular focus on generative models, image and video synthesis, few-shot learning, unsupervised and weakly supervised learning

Safe Deep Learning

Dr. Dan Zhang

In parallel with the rapid development and deployment of deep learning (DL) models, concerns about system safety are also rising, as there is often no guarantee that DL models trained on a limited number of samples will always behave as expected. At the Bosch IoC Lab, we target safety-related problems arising from data distribution shifts. When moving from the closed-set training environment in the lab to an open-set operating environment in the real world, the assumption that the data distribution is the same at run time as at training time is often hard to maintain. Ignoring potential distribution shifts and making uninformed model predictions in novel scenarios can have catastrophic consequences in safety-critical applications. We aim to understand the failure modes of machine learning models, improve their robustness against data distribution shifts, and detect novel concepts that lie beyond their cognitive capabilities.
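Detecting inputs that fall outside the training distribution can be sketched with a simple density-based novelty check. This is an illustrative stand-in, not the group's actual method: it fits a Gaussian to hypothetical in-distribution features and flags run-time inputs whose Mahalanobis distance exceeds a threshold derived from the training data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical in-distribution features from the closed-set lab environment.
train = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))

# Fit a Gaussian density model to the training features.
mean = train.mean(axis=0)
cov = np.cov(train, rowvar=False)
cov_inv = np.linalg.inv(cov)

def mahalanobis(x):
    """Distance of a feature vector x from the training distribution."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

# Calibrate a novelty threshold on the training data itself
# (here, the 99th percentile of in-distribution distances).
train_dists = np.array([mahalanobis(x) for x in train])
threshold = np.quantile(train_dists, 0.99)

# At run time, a point consistent with training data passes, while a
# shifted point (open-set operating environment) is flagged as novel.
in_dist_point = np.array([0.1, -0.2])
shifted_point = np.array([8.0, 8.0])

print(mahalanobis(in_dist_point) < threshold)
print(mahalanobis(shifted_point) > threshold)
```

The design choice worth noting is that the threshold is calibrated only on training data, so no examples of the shift need to be seen in advance; the model simply abstains on inputs it cannot explain.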

Contact

Dr. Dan Zhang

Research Scientist at Bosch Center for Artificial Intelligence (BCAI)

dan.zhang2@de.bosch.com

Research Focus: safe deep learning, with a particular focus on generative models, density estimation, Bayesian methods, unsupervised and self-supervised learning