Our research mission is to create truly trustworthy machine learning techniques: models that are robust (in particular adversarially robust), that correctly specify their domain and their uncertainty (out-of-distribution detection and uncertainty quantification), and that are explainable and interpretable.

For a recent blog post about our work on Diffusion Visual Counterfactuals, click here.

Our group is part of the team maintaining the RobustBench benchmark for (adversarial) robustness. Our paper "Reliable evaluation of adversarial robustness with an ensemble of diverse and parameter-free attacks", which introduced AutoAttack, is the 5th most influential paper of ICML 2020.

Our group is part of the following initiatives:

Our research is also supported by Open Philanthropy.