Welcome...

...to the new website of the Machine Learning Group!

Latest News

  • Two papers accepted at ECCV 2020
    • M. Andriushchenko, F. Croce, N. Flammarion, M. Hein: Square Attack: a query-efficient black-box adversarial attack via random search
    • M. Augustin, A. Meinke, M. Hein: Adversarial Robustness on In- and Out-Distribution Improves Explainability
  • Three papers (one oral, one spotlight, one poster) accepted at the ICML 2020 Workshop on Uncertainty and Robustness in Deep Learning
  • Strong presence of our group at ICML 2020 - four papers accepted!
    • F. Croce, M. Hein: Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks
    • F. Croce, M. Hein: Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack
    • D. Stutz, M. Hein, B. Schiele: Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks
    • A. Kristiadi, M. Hein, P. Hennig: Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks
  • We have come up with AutoAttack, a new hyperparameter-free ensemble of our FAB attack, the Square Attack, and two automatic versions of PGD using the cross-entropy and our new DLR loss. We have tested it on over 40 models of adversarial defenses and always improve the original robustness evaluation. At the same time, our evaluation provides the currently only available benchmark ranking a large number of adversarial defenses on CIFAR-10 and MNIST. A minimal usage sketch is given below the news list.
  • Our black-box adversarial "Square Attack" beats all white-box attacks on MNIST on the TRADES benchmark model and is very competitive on the Madry model.
  • Two papers accepted at ICLR 2020
    • A. Meinke, M. Hein: Towards neural networks that provably know when they don't know
    • F. Croce, M. Hein: Provable robustness against all adversarial l_p-perturbations for p>=1
  • Our new  "Square attack" improves the state-of-the-art in terms of query efficiency and sucess rate for score-based black box adversarial attacks.
  • Maksym Andriushchenko presented our NeurIPS 2019 paper "Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks" at the Swiss Machine Learning Day at EPFL, where it received the best paper award - Congratulations, Maksym!
  • We presented three papers at the NeurIPS 2019 Workshop on Machine Learning with Guarantees:
    • M. Andriushchenko, M. Hein: Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks - oral presentation
    • A. Meinke, M. Hein: Towards neural networks that provably know when they don't know
    • F. Croce, M. Hein: Provable robustness against all adversarial l_p-perturbations for p>=1
  • Two papers have been accepted at NeurIPS 2019:
    • M. Andriushchenko, M. Hein: Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks. 
    • P. Mercado, F. Tudisco, M. Hein: Generalized Matrix Means for Semi-Supervised Learning and Multilayer Graphs.
  • Our work on Sparse and Imperceivable Adversarial Attacks has been accepted at ICCV 2019.
  • Our paper Error estimates for spectral convergence of the graph Laplacian on random geometric graphs towards the Laplace-Beltrami operator has been accepted at FOCM (Foundations of Computational Mathematics).
  • Our paper Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks has been accepted at IJCV.
  • Our paper introducing new sparse and imperceivable white-box (a PGD variant) and black-box (CornerSearch) attacks has been accepted at ICCV 2019.
  • Our new Fast Adaptive Boundary (FAB) attack improves upon the best reported results on the Madry robust CIFAR-10 network and on the robust TRADES models for MNIST and CIFAR-10.
  • Both our papers on the Perron-Frobenius theory of multi-homogeneous mappings and its application to tensor spectral problems got accepted at SIMAX. Congratulations, Antoine and Francesco!
  • Our CVPR paper Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem has been featured in the CVPR Daily magazine.
  • Our paper Spectral Clustering of Signed Graphs via Matrix Power Means has been accepted at ICML 2019.
  • Two of our papers have been accepted at CVPR 2019:
    • M. Hein, M. Andriushchenko, J. Bitterwolf (2019): Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem.
    • D. Stutz, M. Hein, B. Schiele (2019): Disentangling Adversarial Robustness and Generalization.
  • Our paper Provable Robustness of ReLU networks via Maximization of Linear Regions has been accepted at AISTATS 2019.
  • Our paper On the loss landscape of a class of deep neural networks with no bad local valleys has been accepted at ICLR 2019.

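For readers who want to try AutoAttack themselves, here is a minimal sketch. It assumes the autoattack Python package (pip install autoattack) and a PyTorch classifier that returns logits; the tiny linear model, the random data and the perturbation budget below are illustrative placeholders, not our benchmark setup.

import torch
from autoattack import AutoAttack  # pip install autoattack

# Placeholder classifier and data; substitute a trained model and a real test set.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10)).eval()
x_test = torch.rand(16, 3, 32, 32)    # inputs scaled to [0, 1]
y_test = torch.randint(0, 10, (16,))

# The 'standard' version chains APGD with the cross-entropy loss, targeted APGD with
# the DLR loss, targeted FAB and the Square Attack, and reports the robust accuracy.
adversary = AutoAttack(model, norm='Linf', eps=8 / 255, version='standard', device='cpu')
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=16)
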
Find these and more publications in the publications section!