Carl Friedrich von Weizsäcker-Zentrum

Seminar Tübingen-Nancy

Organizers: Maël Pégny, Reinhard Kahle, Thomas Piecha, Anna Zielinska, Cyrille Imbert

Archives Henri Poincaré - Philosophie et Recherches sur les Sciences et les Technologies / Université de Lorraine / Universität Tübingen

Want to join us? Please register here: https://forms.gle/papVbAjPoyoGEqTH9


Upcoming Talk

May 16, 2022, 5 pm (CEST)

The Notions of Information and Conviviality in Deep Neural Networks: Or What about "Explainable AI" or "Trusted AI"?

Dr. Christophe Denis (Associate Professor at Sorbonne University, Computer Science Laboratory LIP6, and PhD Student in Philosophy at the University of Rouen-Normandie)


Zoom: https://us02web.zoom.us/j/88308943415?pwd=aU5rLzFiOEpOMkNjZGtCa0hYQWdzQT09

Meeting ID: 883 0894 3415 | Passcode: 931555

The thunderous return of neural networks took place in 2012, in the sublime Florentine setting of a renowned international computer vision conference. As in previous years, participants were invited to test their image recognition techniques. Geoffrey Hinton's team from the University of Toronto was the only one using deep neural networks, and it outperformed the other competitors in two of the competition's three categories. The audience was stunned by the reduction in prediction error, roughly a factor of three, whereas the algorithms based on the researchers' hand-crafted expertise differed from one another by only a few percent. Other computational scientific disciplines, such as computational fluid dynamics, geophysics, and climatology, have since started to use deep learning methods to predict phenomena that are difficult to treat with a classical hypothetico-deductive approach.

Impressed by these results, an American master's student at the University of Maryland set up an ambitious deep learning project. Its objective was to automatically detect a husky or a wolf in images showing only one of these two animals in its natural setting. The task seemed difficult, as the two animals look very similar, unlike, say, a cat and a bird. The student and his teacher were amazed by the very good results achieved by the model … until a husky in the snow was classified as a wolf by the deep neural network. Further analysis yielded a disappointing explanation for the very good prediction results: the neural network had not "learned" to distinguish a wolf from a husky, but only to detect snowy backgrounds. Did the machine learning model cheat? How, then, do we build trust between users and AI? To ensure trust, many AI ethics committees recommend that explanations of predictive machine learning outcomes be provided to users. In France, for example, the bioethics law recently passed by the French National Assembly requires that the designers of a medical device based on machine learning explain how it works.
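
To make this "Clever Hans" phenomenon concrete, here is a minimal, hypothetical sketch, not the Maryland project itself; all features, weights, and numbers are invented. It shows how a classifier can latch onto a spurious background feature such as snow when that feature is almost perfectly correlated with the label in the training data:

```python
# Hypothetical sketch of the husky/wolf "Clever Hans" effect.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# In the training set, the "snow" feature is almost a proxy for the label,
# while the "animal shape" feature is only weakly informative.
y_train = rng.integers(0, 2, n)                      # 0 = husky, 1 = wolf
animal_signal = y_train + rng.normal(0, 2.0, n)      # noisy, weak signal
snow = y_train + rng.normal(0, 0.1, n)               # nearly perfect proxy
X_train = np.column_stack([animal_signal, snow])

model = LogisticRegression().fit(X_train, y_train)
print("learned weights:", model.coef_)               # the snow weight dominates

# A husky (true label 0) photographed in the snow: the model says "wolf".
husky_in_snow = np.array([[0.0, 1.0]])
print("prediction:", model.predict(husky_in_snow))   # -> [1], i.e. "wolf"
```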

We argue that systematically explaining deep learning to all its users is not always justified, can be counterproductive, and even raises ethical issues of its own. For example, how can we assess the correctness of an explanation, which might be unintentionally misleading, or even manipulative in a fraudulent context? There is therefore a need to revisit the theory of information (Fisher, Shannon) and the philosophy of information (e.g. Floridi) in the light of deep learning. Such information would allow certain users to produce their own reasoning (most likely abductive) rather than receive a ready-made explanation.
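
For reference, the two classical quantities at stake here are Shannon entropy and Fisher information; these are textbook definitions only, and how to adapt them to deep learning is precisely the open question the talk raises:

```latex
% Textbook definitions; their adaptation to deep learning is the open question.
\[
H(X) = -\sum_{x} p(x)\,\log p(x),
\qquad
I(\theta) = \mathbb{E}\!\left[\left(\frac{\partial}{\partial\theta}\,\log f(X;\theta)\right)^{2}\right].
\]
```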

Furthermore, should we trust a machine learning model at all? Trusting means handing over something valuable to someone and relying on them. The corollary is that "the person who trusts is immediately in a state of vulnerability and dependence", all the more so when the trust rests on an explanation whose correctness is difficult to assess.

Last but not least, we strongly believe that using terms from human relationships, like trust or fairness, in the context of machine learning necessarily induces anthropomorphism, whose harmful effects can include addiction (the Eliza effect) and persuasion rather than information. In contrast, our philosophical and mathematical research direction tries to define conviviality criteria for machine learning based on Ivan Illich's thought. According to Illich, a convivial tool must have the following properties:

• it must generate efficiency without degrading personal autonomy;

• it must create neither slave nor master;

• it must widen the personal radius of action.

As presented in the last part of the talk, neural differential equations, by providing trajectories rather than point predictions, seem to be an efficient mathematical formalism for implementing convivial deep learning tools.
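
As a rough illustration of the formalism (a toy vector field integrated with an explicit Euler scheme; the network, weights, and step size are invented for illustration and are not the speaker's implementation), the point is that the model exposes the whole trajectory, which a user can inspect, rather than a single opaque prediction:

```python
# Toy neural ODE: dx/dt = f_theta(x), integrated with explicit Euler.
# All weights and sizes here are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(0, 0.5, (8, 2)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (2, 8)), np.zeros(2)

def f_theta(x):
    """A small neural network defining the vector field dx/dt."""
    return W2 @ np.tanh(W1 @ x + b1) + b2

def trajectory(x0, t_end=1.0, dt=0.01):
    """Euler integration; returns every intermediate state, not just x(T)."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(int(t_end / dt)):
        xs.append(xs[-1] + dt * f_theta(xs[-1]))
    return np.stack(xs)

traj = trajectory([1.0, -0.5])
print(traj.shape)   # (101, 2): the full path, inspectable at every time step
```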

Past Talks

April 11, 2022 [Online]

Dimensions of Trust in AI Ethics

Karoline Reinhardt (University of Tübingen)

Due to the extensive progress of research in Artificial Intelligence (AI) as well as its deployment and application, the public debate on AI systems has gained momentum in recent years. With the publication of the Ethics Guidelines for Trustworthy AI (2019), notions of trust and trustworthiness gained particular attention within AI ethics debates: despite an apparent consensus that AI should be trustworthy, it is less clear what trust and trustworthiness entail in the field of AI. In this paper, I give a detailed overview of the notion of trust employed in AI ethics guidelines so far. On that basis, I assess their overlaps and their omissions from the perspective of practical philosophy. I argue that AI ethics currently tends to overload the notion of trustworthiness, which thus runs the risk of becoming a buzzword that cannot be operationalized into a working concept for AI research. What is needed instead is an approach that is also informed by findings of research on trust in other fields, for instance in the social sciences and humanities, and especially in practical philosophy. I sketch out which insights from political and social philosophy might be particularly helpful here; the concept of "institutionalised mistrust" will play a special role.

March 21, 2022 [Online]

Digital Voodoo Dolls

Marija Slavkovik (University of Bergen)

An institution, be it a body of government, a commercial enterprise, or a service, cannot interact directly with a person. Instead, a model is created to represent that person. We argue for the existence of a new, high-fidelity type of person model, which we call a digital voodoo doll. We conceptualize it and compare its features with existing models of persons. Digital voodoo dolls are distinguished by existing entirely beyond the influence and control of the person they represent. We discuss the ethical issues that this lack of accountability creates and argue how these concerns can be mitigated.

February 21, 2022 [Online]

Mismatching Concerns and Definitions in Current Trends in Machine Learning 

Carmela Troncoso (EPF Lausanne)

In this talk we will revisit current approaches to fairness and privacy in machine learning and take a critical look at the concerns they address. We will show that these concerns are modeled in a narrow way, and that the proposed solutions therefore fall short of providing the protections promised in the literature. We will look at three examples and discuss the implications of this mismatch for how these systems may affect society if deployed.

November 15, 2021 [Online]

Mathematizing Fairness? On Statistical Metrics of Algorithmic Fairness

Dr. Maël Pégny (University of Tübingen)

One of the great topics of the AI ethics literature has been the discussion of possible metrics of algorithmic fairness. These are statistical metrics designed to determine whether the input-output behavior of a given model exhibits bias against a given population. The topic has grown in relevance as several early mathematical results, called "incompatibility results", demonstrated the impossibility of simultaneously satisfying several current metrics, even when they seem both natural and desirable. In this talk, we will tackle two philosophical issues. The first is the exact status of those metrics, and hence of the incompatibility results: are we dealing with definitions or with simple indicators? Should we consider that we face several competing definitions, or should we defend a form of pluralism? The second issue, structurally tied to the first, bears on the risk of bureaucratizing fairness through the use of those metrics: what are the risks of abusively reducing the difficult issues raised by (algorithmic) discrimination to the simple satisfaction of a metric?
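
To fix ideas, here is a minimal sketch of two such metrics, demographic parity and equal opportunity, computed on invented data; the simulated model, group proportions, and numbers are illustrative only, not taken from the talk:

```python
# Two standard fairness metrics on invented data: demographic parity
# (equal positive-prediction rates across groups) and equal opportunity
# (equal true-positive rates). Incompatibility results show that such
# metrics cannot, in general, all be satisfied at once.
import numpy as np

rng = np.random.default_rng(2)
group = rng.integers(0, 2, 10_000)           # protected attribute (0 or 1)
y_true = rng.integers(0, 2, 10_000)          # actual outcome
# A biased model: more likely to predict positive for group 1.
y_pred = (rng.random(10_000) < 0.3 + 0.2 * group).astype(int)

def positive_rate(pred, mask):
    return pred[mask].mean()

def true_positive_rate(pred, truth, mask):
    m = mask & (truth == 1)
    return pred[m].mean()

g0, g1 = group == 0, group == 1
dp_gap = abs(positive_rate(y_pred, g0) - positive_rate(y_pred, g1))
eo_gap = abs(true_positive_rate(y_pred, y_true, g0)
             - true_positive_rate(y_pred, y_true, g1))
print(f"demographic parity gap: {dp_gap:.3f}")   # ~0.2 by construction
print(f"equal opportunity gap:  {eo_gap:.3f}")
```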

Watch the talk on YouTube