Seminar Tübingen-Nancy
Organizers: Maël Pégny, Reinhard Kahle, Thomas Piecha, Anna Zielinska, Cyrille Imbert
Archives Henri Poincaré - Philosophie et Recherches sur les Sciences et les Technologies / Université de Lorraine / Universität Tübingen
Want to join us? Please register here: https://forms.gle/papVbAjPoyoGEqTH9
May 16, 2022, 5 pm (CEST) [Online]
Dr. Christophe Denis (Associate Professor at Sorbonne University, LIP6 computer science laboratory, and PhD student in Philosophy at the University of Rouen-Normandie)
Zoom: https://us02web.zoom.us/j/88308943415?pwd=aU5rLzFiOEpOMkNjZGtCa0hYQWdzQT09
Meeting ID: 883 0894 3415 | Passcode: 931555
The thunderous return of neural networks occurred in 2012, in the sublime setting of Florence, during a renowned international computer vision conference. As in previous years, participants were invited to test their image recognition techniques against one another. Geoffrey Hinton's team from the University of Toronto was the only one using deep neural networks, and it outperformed the other competitors in two of the three categories of the competition. The audience was stunned by the reduction in prediction error, roughly a factor of three, whereas the algorithms based on the researchers' hand-crafted expertise differed from one another by only a few percent. Other computational scientific disciplines, such as computational fluid dynamics, geophysics, and climatology, have since started to use deep learning methods to predict phenomena that are difficult to handle with a classical hypothetico-deductive approach.
Impressed by these deep learning results, an American master's student at the University of Maryland set up an ambitious deep learning project. Its objective was to automatically detect a husky or a wolf in images, each showing only one of the two animals in its natural setting. The project seemed difficult because the two animals look very similar, unlike, say, a cat and a bird. The student and his teacher were amazed by the very good results achieved by the model … until a husky in the snow was classified as a wolf by the deep neural network. Further analysis yielded a disappointing explanation for the very good prediction results: the neural network had not "learned" to distinguish a wolf from a husky, but only to detect snowy settings. Did the machine learning model cheat? How, then, do we build trust between users and AI? To ensure trust, many AI ethics committees recommend that explanations of the predictive machine learning outcomes be provided to the users. For example, in France, the bioethics law recently passed by the French National Assembly requires that the designers of a medical device based on machine learning explain how it works.
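The failure the story describes is a spurious correlation: in the training images, the background (snow) predicts the label better than the animal itself does. A minimal sketch of the effect, on hypothetical toy features rather than the original project's data or code, might look like this:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000

# Label: 1 = wolf, 0 = husky (toy features, purely illustrative).
y = rng.integers(0, 2, size=n)
animal_shape = y + rng.normal(0, 2.0, size=n)  # weak signal from the animal itself
snow = y + rng.normal(0, 0.2, size=n)          # strong signal: wolves photographed in snow

X = np.column_stack([animal_shape, snow])
clf = LogisticRegression().fit(X, y)
print("coefficients [animal, snow]:", clf.coef_[0])  # the snow feature dominates

# A husky (weak animal signal) photographed in the snow:
husky_in_snow = np.array([[0.0, 1.0]])
print("predicted class:", clf.predict(husky_in_snow))  # 1, i.e. "wolf"

The fitted coefficients show the model leaning almost entirely on the snow feature, so any snowy image is pushed towards "wolf" regardless of the animal in it, just as in the anecdote.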
We argue that systematically explaining deep learning to all its users is not always justified, could be counterproductive, and even raises ethical issues of its own. For example, how can one assess the correctness of an explanation, which could be unintentionally permissive, or even manipulative in a fraudulent context? There is therefore a need to revisit information theory (Fisher, Shannon) and the philosophy of information (e.g. Floridi) in the light of deep learning. Such information would allow certain users to produce their own reasoning (most likely an abductive one) rather than receiving a ready-made explanation.
Furthermore, should we trust a machine learning model at all? To trust means handing over something valuable to someone and relying on them. The corollary is that "the person who trusts is immediately in a state of vulnerability and dependence", all the more so when that trust rests on an explanation whose correctness is difficult to assess.
Last but not least, we strongly believe that using human relationship terms such as trust or fairness in the context of machine learning necessarily induces anthropomorphism, whose adverse effects could include addiction (the Eliza effect) and persuasion rather than information. In contrast, our philosophical and mathematical research direction tries to define conviviality criteria for machine learning based on Ivan Illich's thought. According to Illich, a convivial tool must have the following properties:
• it must generate efficiency without degrading personal autonomy;
• it must create neither slave nor master;
• it must widen the personal radius of action.
As presented in the last part of the talk, neural differential equations, by providing trajectories rather than point predictions, appear to be an efficient mathematical formalism for implementing convivial deep learning tools.
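To make "trajectories rather than predictions" concrete, here is a minimal, hypothetical sketch of a neural differential equation: a small learned vector field integrated with an explicit Euler scheme, so that every intermediate state of the computation is available for inspection. The names VectorField and euler_trajectory are illustrative, not the speaker's implementation.

import torch
import torch.nn as nn

class VectorField(nn.Module):
    """Learned vector field f_theta(t, h) defining the dynamics dh/dt."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 32), nn.Tanh(), nn.Linear(32, dim))

    def forward(self, t, h):
        # Append the time t so the dynamics may depend on it.
        t_col = torch.full((h.shape[0], 1), t)
        return self.net(torch.cat([h, t_col], dim=1))

def euler_trajectory(f, h0, t0=0.0, t1=1.0, steps=20):
    """Integrate dh/dt = f(t, h) with explicit Euler, keeping every state."""
    h, dt = h0, (t1 - t0) / steps
    trajectory = [h]
    for k in range(steps):
        h = h + dt * f(t0 + k * dt, h)
        trajectory.append(h)
    return torch.stack(trajectory)  # shape (steps + 1, batch, dim): the full path

f = VectorField(dim=2)
h0 = torch.randn(4, 2)           # a batch of four initial states
traj = euler_trajectory(f, h0)   # the whole trajectory, not just traj[-1]
print(traj.shape)                # torch.Size([21, 4, 2])

The user can examine how the state evolves over the whole interval rather than being handed only the endpoint, which is the inspectability the conviviality argument appeals to; a production implementation would typically replace fixed-step Euler with an adaptive solver such as torchdiffeq's odeint.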
April 11, 2022 [Online]
Karoline Reinhardt (University of Tübingen)
Due to the extensive progress of research in Artificial Intelligence (AI) as well as its deployment and application, the public debate on AI systems has also gained momentum in recent years. With the publication of the Ethics Guidelines for Trustworthy AI (2019), notions of trust and trustworthiness gained particular attention within AI ethics debates: despite an apparent consensus that AI should be trustworthy, it is less clear what trust and trustworthiness entail in the field of AI. In this paper, I give a detailed overview of the notion of trust employed in AI ethics guidelines thus far. On that basis, I assess their overlaps and their omissions from the perspective of practical philosophy. I argue that AI ethics currently tends to overload the notion of trustworthiness, which thus runs the risk of becoming a buzzword that cannot be operationalized into a working concept for AI research. What is needed instead is an approach that is also informed by research on trust in other fields, for instance in the social sciences and humanities, especially in practical philosophy. I sketch out which insights from political and social philosophy might be particularly helpful here; the concept of "institutionalised mistrust" will play a special role.
March 21, 2022 [Online]
Marija Slavkovik (University of Bergen)
An institution, be it a body of government, a commercial enterprise, or a service, cannot interact directly with a person. Instead, a model is created to represent us. We argue for the existence of a new, high-fidelity type of person model, which we call a digital voodoo doll. We conceptualize it and compare its features with existing models of persons. Digital voodoo dolls are distinguished by existing entirely beyond the influence and control of the person they represent. We discuss the ethical issues that such a lack of accountability creates and suggest how these concerns can be mitigated.
February 21, 2022 [Online]
Carmela Troncoso (EPF Lausanne)
In this talk we will revisit current approaches to fairness and privacy in machine learning and take a critical look at the concerns they address. We will show that these concerns are modeled in a narrow way, so that the proposed solutions fall short of providing the protections promised in the literature. We will look at three examples and discuss the implications of this mismatch for how these systems may affect society if deployed.
November 15, 2021 [Online]
Dr. Maël Pégny (University of Tübingen)
One of the great topics of the AI ethics literature has been the discussion of possible metrics of algorithmic fairness. These are statistical metrics designed to determine whether the input-output behavior of a given model exhibits bias against a given population. The topic has grown in relevance as several early mathematical results, called "incompatibility results", demonstrated the impossibility of simultaneously satisfying several current metrics, even when they seem both natural and desirable. In this talk, we will tackle two philosophical issues. The first is the exact status of these metrics, and hence of the incompatibility results: are we dealing with definitions or mere indicators? Should we consider that we face several competing definitions, or should we defend a form of pluralism? The second issue, structurally tied to the first, bears on the risk of bureaucratizing fairness through the use of these metrics: what are the risks of abusively reducing the difficult issues raised by (algorithmic) discrimination to the mere satisfaction of a metric?
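To make concrete what such input-output metrics measure, here is a minimal sketch computing two common ones on synthetic data: the demographic parity gap and the true-positive-rate gap (one component of equalized odds). The data and function names are illustrative, not the speaker's.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic predictions for two groups (hypothetical data, for illustration only).
group = rng.integers(0, 2, size=10_000)                      # protected attribute: 0 or 1
y_true = rng.binomial(1, np.where(group == 0, 0.30, 0.50))   # groups differ in base rate
y_pred = rng.binomial(1, np.where(y_true == 1, 0.80, 0.20))  # an imperfect classifier

def demographic_parity_gap(y_pred, group):
    """Difference in positive prediction rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def tpr_gap(y_true, y_pred, group):
    """Difference in true-positive rates (a component of equalized odds)."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

print(f"demographic parity gap: {demographic_parity_gap(y_pred, group):.3f}")
print(f"TPR gap:                {tpr_gap(y_true, y_pred, group):.3f}")

In this toy setup the classifier treats both groups identically given the true label, so the TPR gap is near zero, yet the differing base rates produce a demographic parity gap of about 0.12. That one metric can be satisfied while another fails on the same predictions is the flavor of tension that the incompatibility results formalize.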
Watch the talk on YouTube