On Thursday, June 24th, 2021, an interdisciplinary symposium took place on Tübingen’s “AI MEETS LAW” platform in cooperation with the Carl Friedrich von Weizsäcker-Centre. It was dedicated to “Accepting Opaque Algorithms? Legitimate Use in a Situation of (Partial) Ignorance”.
"Black-box" or "opaque" algorithms, especially Machine Learning (ML) models, have been the object of a recent and intense debate. Public and private institutions rely, in an increasing number of contexts, in their decision making on computers with ML capabilities. As such ML solutions are often developed by third parties, such as software companies, it is not clear whether the institutions using the tools always understand the way the algorithm works, and whether the algorithm produces biased decisions. ML based decision making, however, can affect people's lives in various ways. That is especially relevant if the government relies on it.
Scholarly debate has dealt extensively with the transparency and explainability of ML decision-making. Yet the current state of the discourse suggests that, irrespective of the measures taken, a degree of opacity remains. In this webinar, we therefore did not discuss how transparency and explainability could be improved. Rather, we took the opposite route: accepting the opacity of these models as a given, we discussed what kind of legitimate use public or private institutions can make of ML despite that opacity. In other words, since algorithmic opacity seems to be here to stay for a while, our intention was to find a path towards reasonable use in a situation of (partial) ignorance.
The symposium placed particular emphasis on historical and philosophical approaches to these questions in order to establish a foundation for further legal discourse.