Faculty of Law

Accepting Opaque Algorithms? Legitimate Use in a Situation of (Partial) Ignorance

On Thursday, June 24, 2021, an interdisciplinary symposium took place on Tübingen’s “AI MEETS LAW” platform in cooperation with the Carl Friedrich von Weizsäcker-Centre. It was dedicated to “Accepting Opaque Algorithms? Legitimate Use in a Situation of (Partial) Ignorance”.

"Black-box" or "opaque" algorithms, especially Machine Learning (ML) models, have been the object of a recent and intense debate. Public and private institutions rely, in an increasing number of contexts, in their decision making on computers with ML capabilities. As such ML solutions are often developed by third parties, such as software companies, it is not clear whether the institutions using the tools always understand the way the algorithm works, and whether the algorithm produces biased decisions. ML based decision making, however, can affect people's lives in various ways. That is especially relevant if the government relies on it.

Scholarly debate has dealt extensively with the transparency and explainability of ML decision making. Yet the preliminary state of the discourse suggests that, irrespective of the measures taken, a degree of opacity remains. In this webinar, we therefore did not discuss how transparency and explainability could be improved. Rather, we took the opposite route of accepting the opacity of these models as a given and discussed what legitimate use public or private institutions can make of ML despite its opacity. In other words, since algorithmic opacity seems to be here to stay for a while, our intention was to find a path towards reasonable use in a situation of (partial) ignorance.

The symposium put an emphasis on historical and philosophical approaches to these questions in order to establish a foundation for further legal discourse.

Thomas Haigh (Professor, University of Wisconsin-Milwaukee) is a historian of science. Drawing on his work on the computerization of businesses, he offered a historical perspective on the use of computer systems by corporations, including ideological fantasies and failed attempts, and showed how this history illuminates the current debate on opaque AI in institutions.

The next speaker was John Zerilli, Research Fellow at the Leverhulme Centre for the Future of Intelligence in Cambridge. Trained in both philosophy and law, he is the lead author of A Citizen's Guide to Artificial Intelligence. John explained new approaches to explainability. He placed AI decision making in concrete decisional contexts and argued which specific explainability requirements can suffice to make the use of AI-based decision making acceptable. It became clear that the focal point of an AI application can be pivotal for understanding its rationale and that important conclusions on explainability can already be drawn from there.

The subsequent discussion among the panelists and the audience revealed that the legal and societal acceptance of opacity is interrelated with society's historical experience of emerging technologies. While technological progress can bring economic efficiency, it also shapes societal perceptions of which concomitant risks we are willing to accept. Constant technological and societal change leads to a permanent process of adaptation to the evolving AI landscape. The main conclusion, therefore, was that the legal and societal approach to dealing with opacity should not be conceived of as a static set of rules and requirements but rather as a dynamic process of risk evaluation and reassessment of society's historical experiences.

The AI MEETS LAW platform was established in 2020 at the Law Faculty in Tübingen. It is an informal discussion group on the intersection of Artificial Intelligence and Law. You can learn more about it here: https://uni-tuebingen.de/en/167985.

Sign up for our mailing list and watch out for upcoming events!