Robert Bosch GmbH has seconded two outstanding, internationally renowned scientists to the University of Tübingen, who, as part of their Industry-on-Campus projects, work on the following topics:
During the past few years, deep neural networks have become the de facto technique for machine learning and computer vision tasks, in many cases achieving human- or super-human-level performance by leveraging large collections of training data. However, this success comes at a distinct cost: creating these large datasets typically requires a great deal of human effort (collecting and manually labeling data samples), pain or risk (e.g. for medical datasets involving invasive tests), and financial expense (building the infrastructure needed for domain-specific data collection and hiring labelers). For many real-world applications, the lack of training data becomes a restrictive factor that limits the use of deep learning in practice. In our research we focus on relaxing this constraint and developing solutions for data-efficient deep learning.
One promising direction is to employ synthetic data. Synthetic data generation with generative models, such as Generative Adversarial Networks (GANs) or Variational Autoencoders (VAEs), is a relatively recent development with huge potential for data-efficient deep learning. At the Bosch IoC Lab we aim to investigate unsupervised and semi-supervised methods that extract patterns from available data and allow synthesizing data points that are almost indistinguishable from real data. This can vastly reduce the need to collect new data and improve the performance of deep learning methods trained on a mix of real and synthetic data. Furthermore, we aim to investigate domain-transfer methods, which transform real data to a new setting, e.g. translating images between different camera sensor models, changing the lighting and weather conditions of collected images and videos, or adapting image and video data to new locations. Data-efficient methods for deep learning have huge cost-saving potential, and can also help make algorithms safer and more robust in safety-critical situations, e.g. by synthesizing rare data points or dangerous situations for which almost no real-world data is available.
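The core workflow behind synthetic data generation (estimate a distribution from real data, then sample new points from it) can be illustrated with a deliberately simple toy model. The sketch below fits a multivariate Gaussian in place of a GAN or VAE; this is purely an illustrative stand-in, and the dataset is hypothetical:

```python
import numpy as np

# Toy stand-in for a generative model: fit a multivariate Gaussian to
# "real" data and sample synthetic points from it. A GAN or VAE would
# learn a far richer distribution, but the workflow is the same:
# estimate p(x) from data, then sample new x.
rng = np.random.default_rng(0)

# Hypothetical "real" dataset: 500 points in 2-D.
real = rng.normal(loc=[2.0, -1.0], scale=[1.0, 0.5], size=(500, 2))

# "Training": estimate distribution parameters from the real data.
mu = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# "Generation": draw synthetic samples from the learned distribution.
synthetic = rng.multivariate_normal(mu, cov, size=500)

# The synthetic data should closely match the real data's statistics.
print(synthetic.mean(axis=0), mu)
```

In a real pipeline, the Gaussian would be replaced by a learned generator network, and the synthetic samples would be mixed into the training set of a downstream model.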
Deep neural networks (DNNs) have found a wide spectrum of applications, such as autonomous driving, automated manufacturing and medical diagnosis. In many tasks they have outperformed humans, owing to large amounts of labeled training data, large network models and high-performance computing hardware. In parallel with the rapid development of DNN-based systems, concerns about system safety are also rising, as there is often no guarantee that DNNs trained on a limited number of samples will always behave as expected, particularly in unseen situations. Erroneous yet overly confident outputs can lead to potentially catastrophic consequences in safety-critical applications. For the sake of safety, it is therefore crucial to know the conditions under which a DNN-based system is accurate. If the fulfillment of these conditions can be detected by a monitor with sufficiently high accuracy, the DNN-based system together with the monitor forms a safe system. Under this domain-agnostic safety definition, several research topics will be investigated in this Bosch-IoC project, e.g.:
- Anomaly detection to detect and reject inputs that do not follow the same distribution as the training samples, reducing the risk of mistakes in unseen situations;
- Denoising to restore corrupted inputs, improving system robustness to the corruptions that are inevitable in practice and reducing data uncertainty;
- Uncertainty estimation and calibration to quantify the confidence of predictions made by the system, enabling uncertainty-aware decision-making and fusion.
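The first topic, a monitor that rejects out-of-distribution inputs, can be sketched with a simple distribution-based score. The method below (a Gaussian fit plus a Mahalanobis-distance threshold) is one common baseline, not a method prescribed by the project, and all data is synthetic for illustration:

```python
import numpy as np

# Sketch of a simple anomaly monitor: fit a Gaussian to in-distribution
# training features and flag inputs whose squared Mahalanobis distance
# exceeds a threshold. (A baseline choice; the project does not
# prescribe this particular method.)
rng = np.random.default_rng(1)

train = rng.normal(0.0, 1.0, size=(1000, 3))       # in-distribution features
mu = train.mean(axis=0)
prec = np.linalg.inv(np.cov(train, rowvar=False))  # precision matrix

def anomaly_score(x):
    """Squared Mahalanobis distance of x to the training distribution."""
    d = x - mu
    return d @ prec @ d

threshold = 16.0  # roughly a 4-sigma boundary; tune on validation data

in_dist = rng.normal(0.0, 1.0, size=3)   # looks like the training data
out_dist = np.array([8.0, 8.0, 8.0])     # far from the training data

print(anomaly_score(out_dist) > threshold)  # True: rejected as anomalous
print(anomaly_score(in_dist) > threshold)   # usually False: accepted
```

A safe system would pass accepted inputs to the DNN and route rejected ones to a fallback (e.g. a human operator or a conservative default action).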
Learning the distribution of, and the discriminative features behind, the training samples is at the core of anomaly detection and denoising. By treating the system's predictions and hidden states as random variables, modeling their distributions and inferring latent variables become necessary steps for estimating and calibrating predictive confidence. In this Bosch-IoC project, we will exploit and develop generative models, representation-learning algorithms and scalable statistical inference techniques for anomaly detection, denoising, uncertainty estimation and calibration, ultimately aiming to make DNN-based systems safe.