Julian Bitterwolf

I am working on out-of-distribution detection for deep learning models. If a machine learning model receives an input that is categorically different from everything it has seen during training, it cannot make any reasonable inference about that input. In many cases, however, it still outputs a prediction with very high confidence, which is not desirable. Using methods from adversarial machine learning and generative modeling, I am trying to make a model's behaviour in domains that lie outside of its expertise more sensible. A minimal sketch of the underlying problem is given below.
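To make the problem concrete, here is a minimal PyTorch sketch of how one would read off a classifier's softmax confidence on an out-of-distribution input. The architecture, input shape, and noise input are illustrative placeholders and are not taken from any of the publications below; the point is only to show what "confidence on an input far from the training data" refers to.

```python
import torch
import torch.nn as nn

# Placeholder ReLU classifier for, e.g., 28x28 grayscale images with 10 classes.
# In practice this would be a trained network; the specifics here are assumptions.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
model.eval()

# An input that is categorically different from the assumed training data:
# uniform noise standing in for an out-of-distribution sample.
ood_input = torch.rand(1, 1, 28, 28)

with torch.no_grad():
    logits = model(ood_input)
    # The maximal softmax probability is the model's reported "confidence".
    confidence = torch.softmax(logits, dim=1).max().item()

# For trained ReLU networks this value tends to be very high far away from
# the training data, even though no meaningful prediction is possible there.
print(f"Softmax confidence on noise input: {confidence:.3f}")
```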

Please find more information here.


List of Publications

  • M. Hein, M. Andriushchenko, J. Bitterwolf (2019): Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem. CVPR 2019 (oral presentation). PDF GitHub