Department of Computer Science - News

December 7, 2018

Doctoral Defense of Carl-Johann Simon-Gabriel

on Monday, December 17, 2018, at 5:00 p.m. in Room N0.002, Max Planck Institute for Intelligent Systems (Max-Planck-Ring 4)


Distribution-Dissimilarities in Machine Learning: from Maximum Mean Discrepancies to Adversarial Examples


Reviewer 1: Prof. Dr. Bernhard Schölkopf
Reviewer 2: Prof. Dr. Ulrike von Luxburg


Point-dissimilarities can be used to define a classifier: the classifier then essentially relates dissimilarities between points to dissimilarities between labels. Conversely, a binary classifier (or its score function) can be used to define a dissimilarity between the distribution of positively labeled points and that of negatively labeled points. Many well-known distribution-dissimilarities are such classifier-based dissimilarities: the total variation, the KL and JS divergences, the Hellinger distance, and others. Moreover, many popular recent generative modelling algorithms, such as GANs and their variants, compute or approximate these distribution-dissimilarities by explicitly training a classifier.
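To make the classifier connection concrete, one standard route (our notation, not necessarily the thesis') is the variational, Fenchel-dual form of an f-divergence:

    D_f(P \| Q) = \sup_{g} \; \mathbb{E}_{X \sim P}[g(X)] - \mathbb{E}_{Y \sim Q}[f^{*}(g(Y))]

where f^{*} denotes the convex conjugate of f and the supremum runs over critic functions g. Restricting g to a parametric classifier family and estimating both expectations from samples yields exactly the classifier-based approximations used, for instance, in f-GANs.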
After briefly introducing these classifier-based dissimilarities, we analyze how the classifier's capacity influences the strength of its associated distribution-dissimilarity. To do so, we first study maximum mean discrepancies (MMDs), a weak form of total variation that has grown popular in machine learning. We then turn to deep neural networks, and in particular to their startling vulnerability to adversarial examples, i.e., imperceptible but targeted input perturbations that suffice to change a classifier's decisions.
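As a minimal illustration of the maximum mean discrepancy mentioned above (our own sketch, not material from the thesis): the MMD compares two distributions through the largest mean difference achievable by functions in the unit ball of a reproducing kernel Hilbert space, \mathrm{MMD}(P, Q) = \sup_{\|f\|_{\mathcal{H}} \le 1} \mathbb{E}_P[f(X)] - \mathbb{E}_Q[f(Y)]. The sketch below implements the standard unbiased estimator of the squared MMD with a Gaussian kernel (Gretton et al., 2012); all function names and parameter choices are illustrative.

    import numpy as np

    def gaussian_kernel(X, Y, sigma=1.0):
        # k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)), computed pairwise.
        diff = X[:, None, :] - Y[None, :, :]
        return np.exp(-np.sum(diff**2, axis=-1) / (2 * sigma**2))

    def mmd2_unbiased(X, Y, sigma=1.0):
        # Unbiased estimate of MMD^2 between samples X ~ P and Y ~ Q.
        m, n = len(X), len(Y)
        Kxx = gaussian_kernel(X, X, sigma)
        Kyy = gaussian_kernel(Y, Y, sigma)
        Kxy = gaussian_kernel(X, Y, sigma)
        # Drop diagonal terms so the within-sample averages stay unbiased.
        term_xx = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
        term_yy = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
        return term_xx + term_yy - 2 * Kxy.mean()

    # Example: samples from a standard Gaussian vs. a shifted one.
    X = np.random.randn(200, 2)
    Y = np.random.randn(200, 2) + 0.5
    print(mmd2_unbiased(X, Y))  # clearly positive; near zero when P = Q

With a characteristic kernel such as the Gaussian, this statistic concentrates around zero exactly when P = Q, which is what makes the MMD usable as a two-sample test statistic even though it is a weaker dissimilarity than total variation.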
 
