In search of explainable and interpretable machine learning with philosophy and physics

Two DFG Clusters of Excellence, from Stuttgart and Tübingen, are combining philosophy and physics in a joint project on machine learning. The Heidelberg Academy of Sciences and Humanities is funding three new projects in the ninth sub-program of its “WIN-Kolleg” for young scientists. One of these is the project “Complexity Reduction, Explainability and Interpretability (KEI)”, which Eric Raidl has been awarded together with Miriam Klopotek. Eric Raidl is a fellow at our Ethics & Philosophy Lab, and Miriam Klopotek is a junior research group leader at the Cluster of Excellence SimTech in Stuttgart.

With the support of the state of Baden-Württemberg, the Heidelberg Academy of Sciences and Humanities has initiated a unique grant programme that offers funding opportunities to the best young researchers throughout the state: the WIN-Kolleg. Since 2002, the academy has been funding interdisciplinary projects by early career researchers, thereby opening up special scope for research and enabling exchange with academy members. Now in its ninth sub-program, the WIN-Kolleg examines whether, how and why the reduction of complex issues is necessary for scientific knowledge, and what consequences this reduction in complexity has for the relationship between science and society.

The funded project by Raidl and Klopotek addresses these questions with regard to machine learning, going "In search of explainable and interpretable machine learning with philosophy and physics". The project started in April 2024 and runs for three years.

Machine learning (ML) algorithms are permeating our everyday and public lives with increasing intensity. They make predictions, but why they ‘decide’ one way and not another often remains unintelligible to us: in a sense, they are “opaque”. In our project, we want to understand how this opacity arises, and whether or how it could be retroactively reversed. To do this, we want to interpret the nature of the (implicit) abstractions that ML inherently generates, using insights from physics and other theories of complexity. Our working hypothesis is that the complexity of ML and the difficulty of understanding certain components of the learning process together give rise to this opacity problem. In this sense, a solution calls not simply for “more understanding” or “less complexity”, but for a sensible reduction of complexity. By this we mean abstractions that are adequate and simplifications that are non-trivial, so as to ensure access to well-grounded understanding. In our project, we will develop tools to analyze the complexity of ML algorithms in new ways and to find reductions that make sense from the perspectives of many-body physics and philosophy.

Project description: www.hadw-bw.de/en/junge-akademie/win-kolleg/komplexitaetsreduktion/KEI

HAdW WIN-Kolleg: www.hadw-bw.de/en/young-academy/win-kolleg