In his Master's thesis, Felix Sieghörtner addressed the issue of black-box model explainability. The thesis was supervised by Gjergji Kasneci and supported by Vadim Borisov in an advisory capacity.
The abstract:
Machine learning algorithms and deep neural networks are seeing a significant rise in popularity for all kinds of problem-solving. Whether it is auto-correction on our cell phones or route-finding in our navigation systems, machine learning plays a key role in our daily lives. Some of these applications can have a heavy impact on human life, such as assisting doctors in selecting the appropriate treatment for patients. For this reason, Explainable Artificial Intelligence (XAI) is becoming an increasingly important field of study. Being able to explain decisions made by an AI helps humans understand its choices and is crucial for ensuring wide acceptance among the public.
This thesis presents a Python framework for explainable AI. Its key elements are two novel algorithms for generating and evaluating feature attributions for computer vision tasks. The developed perturbation- and permutation-based feature attribution method is model-agnostic and does not require training any surrogate machine learning model. The evaluation method replaces superpixels with other superpixels and re-evaluates the image afterward. In addition to a visual and empirical evaluation, these methods are compared to current state-of-the-art feature attribution approaches, with the novel methods performing on par with or, in some cases, better than existing ones.
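The abstract does not spell out the algorithms themselves, so the following is only a minimal sketch of the general idea in Python under stated assumptions: `predict` is a hypothetical callable returning class scores for an image, `grid_superpixels` is a dependency-free stand-in for a real superpixel segmentation (e.g. SLIC), and the donor-superpixel selection in the evaluation step is an illustrative placeholder, not the thesis's actual implementation.

```python
import numpy as np

def grid_superpixels(image, grid=8):
    """Assign each pixel to a cell of a coarse grid.
    A stand-in for a proper superpixel segmentation such as SLIC."""
    h, w = image.shape[:2]
    rows = np.minimum(np.arange(h) * grid // h, grid - 1)
    cols = np.minimum(np.arange(w) * grid // w, grid - 1)
    return rows[:, None] * grid + cols[None, :]

def perturbation_attribution(image, segments, predict, target, baseline=0.0):
    """Model-agnostic, surrogate-free attribution sketch:
    mask out one superpixel at a time and record the drop in the
    model's score for the target class."""
    base_score = predict(image)[target]
    attributions = np.zeros(segments.max() + 1)
    for s in range(segments.max() + 1):
        perturbed = image.copy()
        perturbed[segments == s] = baseline      # replace the superpixel with a baseline value
        attributions[s] = base_score - predict(perturbed)[target]
    return attributions

def replacement_evaluation(image, segments, attributions, predict, target, k=3):
    """Evaluation in the spirit described in the abstract: replace the k
    highest-attributed superpixels with content from other (here: the
    least-attributed) superpixels and re-evaluate the image."""
    ranked = np.argsort(attributions)[::-1]      # most important first
    donors = ranked[::-1]                        # least important first (hypothetical choice)
    modified = image.astype(float).copy()
    for seg, donor in zip(ranked[:k], donors[:k]):
        donor_mean = image[segments == donor].mean(axis=0)
        modified[segments == seg] = donor_mean   # overwrite with donor superpixel content
    return predict(image)[target] - predict(modified)[target]
```

The grid segmentation is used here only to keep the sketch self-contained; a real framework would plug in an actual superpixel algorithm, and the abstract's permutation-based variant as well as the exact replacement strategy are not described in enough detail to reproduce here.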