Our research mission is to create truly trustworthy machine learning techniques: methods that are robust (in particular adversarially robust), correctly specify their domain and their uncertainty (out-of-distribution detection and uncertainty quantification), and are explainable/interpretable.

For a recent blog post about our work on Diffusion Visual Counterfactuals, click here.

Our group is part of a team maintaining the RobustBench benchmark for (adversarial) robustness. Our paper "Reliable evaluation of adversarial robustness with an ensemble of diverse and parameter-free attacks", which introduced AutoAttack, is the 5th most influential paper of ICML 2020.
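As a minimal sketch of how the two fit together, a model from the RobustBench model zoo can be evaluated with the standard AutoAttack ensemble in a few lines; the model name, perturbation budget, and sample count below are illustrative assumptions rather than a recommended configuration.

```python
# Sketch: evaluate a RobustBench model with the standard AutoAttack ensemble.
# The model entry, eps=8/255, and n_examples=64 are illustrative choices.
from robustbench.data import load_cifar10
from robustbench.utils import load_model
from autoattack import AutoAttack

# Load a small batch of CIFAR-10 test images and an Linf-robust model
# from the RobustBench model zoo.
x_test, y_test = load_cifar10(n_examples=64)
model = load_model(model_name='Carmon2019Unlabeled',
                   dataset='cifar10', threat_model='Linf')
model.eval()

# Run the standard AutoAttack ensemble (APGD-CE, targeted APGD-DLR,
# targeted FAB, Square Attack) at the usual Linf budget of 8/255;
# robust accuracy is reported during the evaluation.
adversary = AutoAttack(model, norm='Linf', eps=8 / 255, version='standard')
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=64)
```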

Our group is part of the following initiatives:

Our research is also supported by Open Philanthropy.
