Welcome to the website of the Machine Learning Group!

Latest News

  • Francesco Croce receives the DAGM MVTec Dissertation Award 2024 of the German Association for Pattern Recognition (DAGM, Deutsche Arbeitsgemeinschaft für Mustererkennung) - Big Congratulations!
  • ECCV 2024: Towards Reliable Evaluation and Fast Training of Robust Semantic Segmentation Models by Francesco Croce, Naman Deep Singh, Matthias Hein - arXiv and GitHub
  • ICML 2024:
    • Unsupervised adversarial fine-tuning of the CLIP embedding allows plug-and-play replacement of CLIP in downstream tasks like VLMs (LLaVA 1.5) or zero-shot learning so that they become robust as well. GitHub
    • Disentangling the effects of overparameterization of neural networks in terms of the implicit bias of SGD and the simplicity bias of the architecture - arXiv
  • CVPR 2024: DiG-IN (Diffusion Guidance for Investigating Networks), a framework for debugging classifiers, e.g. uncovering classifier differences, neuron visualisations and visual counterfactual explanations - arXiv and GitHub
  • Francesco Croce receives the prize for the best PhD thesis in computer science at the University of Tübingen - Congratulations!
  • Two papers accepted at NeurIPS 2023:
    • "Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threat Models" by Naman Deep Singh, Francesco Croce, Matthias Hein
    • "Normalization Layers are All that Sharpness-Aware Minimization needs" by Maximilian Müller, Tiffany Joyce Vlaar, David Rolnick, Matthias Hein
  • Our paper "Spurious Features Everywhere - Large-Scale Detection of Harmful Spurious Features in ImageNet" by Yannic Neuhaus, Maximilian Augustin, Valentyn Boreiko, Matthias Hein has been accepted at ICCV 2023 - Check your own ImageNet model if it relies on spurious features by using the "Spurious ImageNet Dataset" - github page
  • Three papers accepted at ICML 2023 - great job by our team:
    • "Improving l1-certified robustness by randomized smoothing by leveraging box constraints" by Vaclav Voracek and Matthias Hein
    • "A modern look at the relationship between sharpness and generalization" by Maksym Andriushchenko, Francesco Croce, Maximilian Müller, Matthias Hein, Nicolas Flammarion
    • "In our Out? Fixing ImageNet Out-of-Distribution Detection Evaluation" by Julian Bitterwolf, Maximilian Müller, Matthias Hein
  • Novel architectural changes and training schemes lead to significant improvements in adversarial robustness on ImageNet; see the GitHub page for paper/code/models.
  • We reveal that state-of-the-art deep neural networks for ImageNet heavily rely on spurious features and introduce Spurious ImageNet, a new dataset to measure the dependence on spurious features.
  • Our work on explaining and debugging classifier decisions using Visual Counterfactual Explanations is featured on the Machine Learning for Science Blog.
  • Two papers accepted at ICLR 2023:
    • "Certified Defences Against Adversarial Patch Attacks on Semantic Segmentation" by Maksym Yatsura, Kaspar Sakmann, N. Grace Hua, Matthias Hein, Jan-Hendrik Metzen
    • "Sound randomized smoothing in floating-point arithmetics" by Vaclav Voráček and Matthias Hein
  • Two papers accepted at NeurIPS 2022:
    • "Diffusion Visual Counterfactual Explanations" by Maximilian Augustin*, Valentyn Boreiko*, Francesco Croce and Matthias Hein (* joint first author)
    • "Provably Adversarially Robust Detection of Out-of-Distribution Data (Almost) for Free" by Alexander Meinke, Julian Bitterwolf and Matthias Hein
  • Our paper "Sparse Visual Counterfactual Explanations in Image Space" has been accepted at GCPR 2022.
  • The paper "Adversarial Robustness of MR Image Reconstruction under Realistic Perturbations" by Nikolas Morshuis, Sergios Gatidis, Christian Baumgartner and Matthias Hein has been accepted at the MICCAI 2022 Workshop "Machine Learning in Medical Image Reconstruction"
  • Our workshop papers at ICML 2022:
    • F. Croce and M. Hein. On the interplay of adversarial robustness and architecture components: patches, convolution and attention. ICML 2022 Workshop on Adversarial Machine Learning Frontiers.
    • A. Meinke, J. Bitterwolf, M. Hein. Provably Robust Detection of Out-of-distribution Data (almost) for free. ICML 2022 Workshop on Adversarial Machine Learning Frontiers.
    • V. Voráček, M. Hein. Sound randomized smoothing in floating-point arithmetics. ICML 2022 Workshop on Formal Verification of Machine Learning and ICML 2022 Workshop on Adversarial Machine Learning Frontiers.
    • F. Croce, S. Gowal, T. Brunner, E. Shelhamer, M. Hein, T. Cemgil. How Adaptive are Adaptive Test-Time Defenses. ICML 2022 Workshop on Updatable Machine Learning.
  • Our journal paper "Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators" by D. Stutz, N. Chandramoorthy, M. Hein, B. Schiele has been accepted at PAMI.
  • Congratulations! "Visual counterfactual explanations for robust disease detection in ophthalmology" accepted at MICCAI 2022
  • Great news - four papers accepted at ICML 2022:
    • V. Voráček, M. Hein. Provably Adversarially Robust Nearest Prototype Classifiers
    • F. Croce, M. Hein. Adversarial robustness against multiple and single l_p-threat models via quick fine-tuning of robust classifiers
    • F. Croce, S. Gowal, T. Brunner, E. Shelhamer, M. Hein, T. Cemgil. Evaluating the Adversarial Robustness of Adaptive Test-Time Defenses
    • J. Bitterwolf, A. Meinke, M. Augustin, M. Hein. Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD Training Data Estimate a Combination of the Same Core Quantities
  • The paper "Neural Network Heuristic Functions: Taking Confidence into Account, Symposium on Combinatorial Search" by David Heller, Patrick Ferber, Julian Bitterwolf, Matthias Hein and Joerg Hoffmann has been accepted at the Symposium on Combinatorial Search (SOCS 2022)
  • Congratulations to Agustinus for the paper "Being a Bit Frequentist Improves Bayesian Neural Networks" accepted at AISTATS 2022
  • Sparse-RS has been accepted at AAAI 2022 - black-box attacks for (universal) patches, (universal) frames and l_0-bounded perturbations
  • RobustBench - our benchmark on (adversarial) robustness got accepted to the NeurIPS 2021 Datasets and Benchmarks Track. Check the leaderboard and the model zoo, and see the GitHub page for more details; a minimal model-zoo loading sketch is included at the end of this page.
  • Two papers (spotlight and poster) accepted at NeurIPS 2021:
    • A. Kristiadi, M. Hein, P. Hennig (2021). An Infinite-Feature Extension for Bayesian ReLU Nets That Fixes Their Asymptotic Overconfidence
    • M. Yatsura, J. Metzen, M. Hein (2021). Meta-Learning the Search Distribution of Black Box Random Search Based Adversarial Attacks.
  • One paper accepted as oral at ICCV 2021:
    • D. Stutz, M. Hein, B. Schiele (2021). Relating Adversarially Robust Generalization to Flat Minima
  • Outstanding Paper Award at the CVPR 2021 Workshop on Adversarial Machine Learning in Real-World Computer Vision Systems for the paper "Bit Error Robustness for Energy-Efficient DNN Accelerators"
  • One paper accepted at ICML 2021:
    • F. Croce, M. Hein: Mind the box: l_1-APGD for sparse adversarial attacks on image classifiers
  • Two papers accepted at the ICLR 2021 Workshop on Robust and Reliable Machine Learning in the Real World:
    • Bit Error Robustness for Energy-Efficient DNN Accelerators
    • An Infinite-Feature Extension for Bayesian ReLU nets that fixes their asymptotic overconfidence
  • Two papers accepted at the ICLR 2021 Workshop on Security and Safety in Machine Learning:
    • RobustBench: a standardized adversarial robustness benchmark (Best Paper Honorable Mention Prize)
    • Mind the box: l_1-APGD for sparse adversarial attacks on image classifiers
  • Our new paper on the l1-APGD attack provides reasons why previous PGD attack algorithms for l1 show suboptimal performance.
  • Paper accepted at MLSys 2021: Training quantized networks to be robust against bit errors saves energy when using them in DNN accelerators.
  • Our new open-world SSL paper lets you exploit large unlabeled datasets even if the number of task-relevant samples is very small compared to the total number of unlabeled examples.
  • One paper accepted at NeurIPS 2020.
  • We present our new paper Sparse-RS, a versatile framework for query-efficient sparse black-box adversarial attacks (e.g. universal targeted patch/frame attacks), at the ECCV 2020 Workshop on Adversarial Robustness in the Real World
  • Two papers accepted at ECCV 2020:
    • M. Andriushchenko, F. Croce, N. Flammarion, M. Hein: Square Attack: a query-efficient black-box adversarial attack via random search
    • M. Augustin, A. Meinke, M. Hein: Adversarial Robustness on In- and Out-Distribution Improves Explainability
  • Matthias Hein gives keynote at the ICML 2020 Workshop on Uncertainty and Robustness in Deep Learning.
  • Three papers (one oral, one spotlight, one poster) accepted at the ICML 2020 Workshop on Uncertainty and Robustness in Deep Learning
  • Strong presence of our group at ICML 2020 - four papers accepted!
    • F. Croce, M. Hein: Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks
    • F. Croce, M. Hein: Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack
    • D. Stutz, M. Hein, B. Schiele: Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks
    • A. Kristiadi, M. Hein, P. Hennig: Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks
  • We have come up with AutoAttack, a new hyperparameter-free ensemble of our FAB attack, Square Attack and two automatic versions of PGD using the cross-entropy and our new DLR loss. We have tested it on over 40 models from adversarial defenses and always improve upon the original robustness evaluation. At the same time, our evaluation provides the currently only available benchmark ranking a large number of adversarial defenses on CIFAR-10 and MNIST; a minimal usage sketch is given at the end of this page.
  • Our black-box adversarial "Square Attack" beats all white-box attacks on MNIST on the benchmark TRADES model and is very competitive on the Madry model.
  • Two papers accepted at ICLR 2020:
    • A. Meinke, M. Hein: Towards neural networks that provably know when they don't know
    • F. Croce, M. Hein: Provable robustness against all adversarial l_p-perturbations for p>=1
  • Our new "Square Attack" improves the state of the art in terms of query efficiency and success rate for score-based black-box adversarial attacks.
  • Maksym Andriushchenko presented our NeurIPS 2019 paper "Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks" at the Swiss Machine Learning Day at EPFL and it got the best paper award - Congratulations, Maksym!
  • We present three papers at the NeurIPS Workshop on Machine Learning with Guarantees:
    • M. Andriushchenko, M. Hein: Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks - oral presentation
    • A. Meinke, M. Hein: Towards neural networks that provably know when they don't know
    • F. Croce, M. Hein: Provable robustness against all adversarial l_p-perturbations for p>=1
  • Two papers have been accepted at NeurIPS 2019:
    • M. Andriushchenko, M. Hein: Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks.
    • P. Mercado, F. Tudisco, M. Hein: Generalized Matrix Means for Semi-Supervised Learning and Multilayer Graphs.
  • Our work on Sparse and Imperceivable Adversarial Attacks has been accepted at ICCV 2019.
  • Our paper Error estimates for spectral convergence of the graph Laplacian on random geometric graphs towards the Laplace–Beltrami operator has been accepted at FoCM (Foundations of Computational Mathematics).
  • Our paper Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks has been accepted at IJCV.
  • Our new sparse and imperceivable white-box (a PGD variant) and black-box (CornerSearch) attacks have been accepted at ICCV 2019.
  • Our new Fast Adaptive Boundary (FAB) attack improves upon the best reported results on the Madry robust CIFAR-10 network and on the robust TRADES model for MNIST and CIFAR-10.
  • Both our papers on the Perron-Frobenius theory of multi-homogeneous mappings and its application to tensor spectral problems got accepted at SIMAX. Congratulations, Antoine and Francesco!
  • Our CVPR paper Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem has been featured in the CVPR daily magazine.
  • Our paper Spectral Clustering of Signed Graphs via Matrix Power Means has been accepted at ICML 2019.
  • Two of our papers have been accepted at CVPR 2019:
    • M. Hein, M. Andriushchenko, J. Bitterwolf (2019): Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem.
    • D. Stutz, M. Hein, B. Schiele (2019): Disentangling Adversarial Robustness and Generalization.
  • Our paper Provable Robustness of ReLU networks via Maximization of Linear Regions has been accepted at AISTATS 2019.
  • Our paper On the loss landscape of a class of deep neural networks with no bad local valleys has been accepted at ICLR 2019.

Find these and more publications in the publications section!
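For readers who want to try the RobustBench model zoo mentioned above, here is a minimal sketch, assuming the public robustbench package (https://github.com/RobustBench/robustbench); the model name 'Carmon2019Unlabeled' is just one example entry from the zoo, not a recommendation.

```python
# Minimal sketch: loading a pretrained robust model from the RobustBench
# model zoo. Assumes `pip install git+https://github.com/RobustBench/robustbench`;
# the chosen model name is one example zoo entry and may change over time.
import torch
from robustbench.data import load_cifar10
from robustbench.utils import load_model

# Download an Linf-robust CIFAR-10 model from the model zoo (cached locally).
model = load_model(model_name='Carmon2019Unlabeled',
                   dataset='cifar10', threat_model='Linf')

# A small slice of the CIFAR-10 test set, images scaled to [0, 1].
x_test, y_test = load_cifar10(n_examples=16)

# Clean accuracy on these examples.
with torch.no_grad():
    acc = (model(x_test).argmax(1) == y_test).float().mean().item()
print(f'clean accuracy on {len(y_test)} examples: {acc:.2%}')
```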
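Similarly, here is a minimal sketch for running AutoAttack against your own classifier, assuming the public auto-attack package (https://github.com/fra31/auto-attack); the tiny linear model and the random batch below are placeholders for illustration only.

```python
# Minimal sketch: evaluating a PyTorch classifier with AutoAttack.
# Assumes `pip install git+https://github.com/fra31/auto-attack`; replace
# the toy model and random data with your own classifier and test set.
import torch
import torch.nn as nn
from autoattack import AutoAttack

# Placeholder classifier returning logits; use your (robust) model here.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()

# Placeholder CIFAR-10-shaped batch with inputs in [0, 1].
x_test = torch.rand(64, 3, 32, 32)
y_test = torch.randint(0, 10, (64,))

# version='standard' runs APGD-CE, targeted APGD-DLR, targeted FAB and
# Square Attack, and reports the resulting robust accuracy.
adversary = AutoAttack(model, norm='Linf', eps=8 / 255,
                       version='standard', device='cpu')  # 'cuda' if available
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=64)
```

For a quick first check, the package also lets you run only a subset of the ensemble by passing version='custom' together with attacks_to_run.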