Philosophical aspects of computer science – Ethics, Norms & Responsibility

Seminar Tübingen-Nancy

Organisation: Maël Pégny, Reinhard Kahle, Thomas Piecha, Anna Zielinska, Cyrille Imbert

Archives Henri Poincaré - Philosophie et Recherches sur les Sciences et les Technologies / Université de Lorraine / Universität Tübingen

Please register here: 
https://forms.gle/papVbAjPoyoGEqTH9


Upcoming Talk

TBA


List of Lectures

15 November 2021: Maël Pégny (Universität Tübingen): Mathematizing fairness? On statistical metrics of algorithmic fairness

One of the great topics in the AI ethics literature has been the discussion of possible metrics of algorithmic fairness. These are statistical metrics designed to determine whether the input-output behavior of a given model exhibits bias against a given population. The topic has grown in relevance as several early mathematical results, called "incompatibility results", demonstrated the impossibility of simultaneously satisfying several current metrics, even when they seem both natural and desirable. In this talk, we will tackle two philosophical issues. The first is the exact status of those metrics, and hence of the incompatibility results: are we dealing with definitions or mere indicators? Should we consider that we face several competing definitions, or should we defend a form of pluralism? The second issue, structurally tied to the first, bears on the risk of bureaucratizing fairness through the use of those metrics: what is the risk of improperly reducing the difficult issues raised by (algorithmic) discrimination to the mere satisfaction of a metric?

Watch the talk on our YouTube channel
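By way of illustration only (this sketch is not part of the talk), the following minimal Python example shows what such statistical metrics of input-output behavior can look like in practice: it computes the demographic parity gap and the equal opportunity gap for two groups on hypothetical binary predictions. All data, group labels and function names are illustrative assumptions, not taken from the talk.

# Illustrative sketch: two common statistical fairness metrics computed
# on toy binary predictions for two hypothetical groups "A" and "B".

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between groups A and B."""
    def rate(g):
        preds = [p for p, grp in zip(y_pred, group) if grp == g]
        return sum(preds) / len(preds)
    return abs(rate("A") - rate("B"))

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates between groups A and B."""
    def tpr(g):
        positives = [p for t, p, grp in zip(y_true, y_pred, group) if grp == g and t == 1]
        return sum(positives) / len(positives)
    return abs(tpr("A") - tpr("B"))

# Hypothetical toy data: true outcomes, model predictions, group membership.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

print("Demographic parity gap:", demographic_parity_gap(y_pred, group))
print("Equal opportunity gap:", equal_opportunity_gap(y_true, y_pred, group))

On this toy data the two metrics already pull in different directions (the parity gap is zero while the equal opportunity gap is not), which gives a small-scale sense of why the incompatibility results discussed in the talk can arise.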

13 December 2021: TBA

21 February 2022: Carmela Troncoso (EPF Lausanne): TBA

21 March 2022: Marija Slavkovik (University of Bergen): TBA

11 April 2022: Karoline Reinhardt (Universität Tübingen): Dimensions of trust in AI Ethics

Due to the extensive progress of research in Artificial Intelligence (AI), as well as its deployment and application, the public debate on AI systems has also gained momentum in recent years. With the publication of the Ethics Guidelines for Trustworthy AI (2019), notions of trust and trustworthiness have received particular attention in AI ethics debates: despite an apparent consensus that AI should be trustworthy, it is less clear what trust and trustworthiness entail in the field of AI. In this paper, I give a detailed overview of the notion of trust employed in AI Ethics Guidelines thus far. On that basis, I assess their overlaps and omissions from the perspective of practical philosophy. I argue that AI ethics currently tends to overload the notion of trustworthiness, which thus runs the risk of becoming a buzzword that cannot be operationalized into a working concept for AI research. What is needed instead is an approach that is also informed by findings from research on trust in other fields, for instance the social sciences and humanities, and especially practical philosophy. I sketch out which insights from political and social philosophy might be particularly helpful here; the concept of "institutionalised mistrust" will play a special role.