Our research mission is to create truly trustworthy machine learning techniques: models that are robust (in particular adversarially robust), that correctly specify their domain and their uncertainty (out-of-distribution detection and uncertainty quantification), and that are explainable and interpretable.
For a recent blog post on our work on Diffusion Visual Counterfactuals, click here.
Our group is part of the team maintaining the RobustBench benchmark for (adversarial) robustness. Our paper "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks", which introduced AutoAttack, is the 5th most influential paper of ICML 2020.
Our group is part of the following initiatives:
- Tübingen AI Center
- DFG Cluster of Excellence "Machine Learning: New Perspectives for Science"
- "Certification and Foundations of Safe Machine Learning Systems in Healthcare" funded by the Carl Zeiss Foundation
- DFG Priority Programme "Theoretical Foundations of Deep Learning"
Our research is also supported by Open Philanthropy.