May 16, 2022 5pm (CEST) [Online]
The Notions of Information and Conviviality in Deep Neural Networks: Or What about "Explainable AI" or "Trusted AI"?
Dr. Christophe Denis (Associate Professor at Sorbonne University - Computer Science Laboratory LIP6 and PhD Student in Philosophy at the University of Rouen-Normandie)
Zoom: https://us02web.zoom.us/j/88308943415?pwd=aU5rLzFiOEpOMkNjZGtCa0hYQWdzQT09
ID of the meeting: 883 0894 3415 | Code: 931555
The thunderous return of neural networks took place in a sublime Florentine setting in 2012, at a renowned international computer vision conference. As in previous years, the participants were invited to test their image recognition techniques. Geoffrey Hinton's team from the University of Toronto was the only one using deep neural networks, and it outperformed the other competitors in two of the three categories of the competition. The audience was stunned by the size of the reduction in prediction error, roughly a factor of three, whereas the algorithms based on the researchers' expertise differed from one another by only a few percent. Other computational scientific disciplines, such as computational fluid dynamics, geophysics, and climatology, have since started to use deep learning methods to predict phenomena that are difficult to model with a classical hypothetico-deductive approach.
Impressed by these results, an American master's student at the University of Maryland set up an ambitious deep learning project. Its objective was to automatically detect a husky or a wolf in images showing only one of these two animals in its natural setting. The task seemed difficult, as the two animals look very similar, unlike, say, a cat and a bird. The student and his teacher were amazed by the model's very good results … until a husky in the snow was classified as a wolf by the deep neural network. Further analysis yielded a disappointing explanation for the excellent prediction scores: the neural network had not "learned" to distinguish a wolf from a husky, but only to detect snowy backgrounds. Did the machine learning model cheat? How, then, do we build trust between users and AI? To ensure trust, many AI ethics committees recommend that explanations of a machine learning model's predictions be provided to its users. In France, for example, the bioethics law recently passed by the French National Assembly requires that the designers of a medical device based on machine learning explain how it works.
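The snow shortcut can be illustrated with a toy occlusion test. The model, data, and threshold below are entirely hypothetical (not the student's actual project): a "classifier" whose weights sit on the background pixels stands in for a network that has latched onto snow, and greying out the background reveals the spurious dependence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 8x8 "images": the top half is background, the bottom half is the animal.
def make_image(snowy):
    img = rng.normal(0.5, 0.05, (8, 8))
    if snowy:
        img[:4, :] += 0.4  # bright snowy background
    return img

# A hypothetical "shortcut" classifier: its weights live entirely on the
# background pixels, so it effectively detects snow, not the animal.
weights = np.zeros((8, 8))
weights[:4, :] = 1.0

def predict_wolf(img):
    # 1.0 = "wolf", 0.0 = "husky"
    return float((img * weights).sum() > 18.0)

husky_in_snow = make_image(snowy=True)
print(predict_wolf(husky_in_snow))  # 1.0: labelled "wolf" because of the snow

# Occlusion test: grey out the background and see whether the prediction flips.
occluded = husky_in_snow.copy()
occluded[:4, :] = 0.5
print(predict_wolf(occluded))  # 0.0: the flip exposes the reliance on background
```

Occlusion sensitivity is one of the simplest post-hoc probes: it requires no access to the model's internals, only the ability to query it on perturbed inputs.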
We argue that systematically explaining deep learning to all of its users is not always justified, can be counterproductive, and even raises ethical issues of its own. For example, how can one assess the correctness of an explanation that may be unintentionally misleading, or even manipulative in a fraudulent context? There is therefore a need to revisit information theory (Fisher, Shannon) and the philosophy of information (e.g. Floridi) in the light of deep learning. Such information would allow certain users to produce their own reasoning (most likely an abductive one) rather than passively receiving an explanation.
Furthermore, should we trust a machine learning model at all? Trust means handing over something valuable to someone and relying on them. The corollary is that "the person who trusts is immediately in a state of vulnerability and dependence", all the more so when that trust rests on an explanation whose correctness is difficult to assess.
Last but not least, we strongly believe that using human relationship terms such as trust or fairness in the context of machine learning necessarily induces anthropomorphism, whose harmful effects can include addiction (the Eliza effect) and persuasion rather than information. In contrast, our philosophical and mathematical research direction tries to define conviviality criteria for machine learning based on Ivan Illich's thought. According to Illich, a convivial tool must have the following properties:
• it must generate efficiency without degrading personal autonomy;
• it must create neither slaves nor masters;
• it must widen the personal radius of action.
As presented in the last part of the talk, neural differential equations, by providing trajectories rather than point predictions, appear to be an efficient mathematical formalism for implementing convivial deep learning tools.
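The "trajectories rather than predictions" claim can be given a rough illustration. The vector field and the hand-rolled Euler solver below are toy assumptions of ours, not Denis's actual formalism: the point is only that integrating a neural differential equation exposes every intermediate state, where a standard feed-forward network returns a single output.

```python
import numpy as np

# A toy "neural" vector field: one tanh layer with hand-picked weights
# (illustrative only; a real neural ODE would learn W from data).
W = np.array([[-0.5, 1.0],
              [-1.0, -0.5]])

def f(h):
    return np.tanh(W @ h)

def integrate(h0, t1=5.0, dt=0.01):
    """Explicit Euler integration of dh/dt = f(h); returns the whole trajectory."""
    traj = [np.asarray(h0, dtype=float)]
    for _ in range(int(t1 / dt)):
        traj.append(traj[-1] + dt * f(traj[-1]))
    return np.stack(traj)

trajectory = integrate([1.0, 0.0])
print(trajectory.shape)  # (501, 2): every intermediate state is available
print(trajectory[-1])    # the final state is just one point on the path
```

A user handed the full trajectory can inspect how the state evolved and form their own (abductive) reading of the dynamics, rather than being given only an endpoint to trust.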