Accepting Opaque Algorithms?
Legitimate Use in a Situation of (Partial) Ignorance
This is the link to the event: https://laterna.jura.uni-tuebingen.de/b/nil-pve-vne
In cooperation with the Carl Friedrich von Weizsäcker Zentrum, this conference is devoted to historical and philosophical approaches to these questions.
"Black-box" or "opaque" algorithms, especially Machine Learning (ML) models, have been the object of a recent and intense debate. In a growing number of contexts, public and private institutions rely on computers with ML capabilities for their decision-making. Since such ML solutions are often developed by third parties, such as software companies, it is not clear whether the institutions using these tools always understand how the algorithm works, or whether the algorithm produces biased decisions. ML-based decision-making, however, can affect people's lives in various ways. This is especially relevant when the government relies on it.
Scholarly debate has expounded extensively on the transparency and explainability of ML decision-making. Yet the preliminary state of the discourse seems to suggest that, irrespective of the measures taken, a degree of opacity remains. In this webinar, we will therefore not discuss how transparency and explainability can be improved. Rather, we take the opposite route of accepting the opacity of those models as a given, and we discuss what kind of legitimate use public or private institutions can make of them despite the opacity of ML. In other words, since algorithmic opacity seems likely to stay with us for a while, we will try to find a path towards reasonable use in a situation of (partial) ignorance.
Thomas Haigh (Professor, University of Wisconsin-Milwaukee) is a historian of science. Drawing on his work on the computerization of businesses, he will bring a historical perspective to the use of computer systems by corporations, including ideological fantasies and failed attempts, and show how this history illuminates the current debate on opaque AI in institutions.
John Zerilli is a Leverhulme Fellow at the University of Oxford. Trained in both philosophy and law, he is the lead author of A Citizen's Guide to Artificial Intelligence.
The topics of the conference comprise all areas of logic relating, in a narrower or wider sense, to Gödel's incompleteness results. This includes the history of logic, proof theory, the philosophy of mathematics, and aspects of incompleteness in computer science, among others. The conference is organised as a collection of workshops on these specific topics.
Workshop at the annual conference of the Gesellschaft für Informatik.
The Carl Friedrich von Weizsäcker Colloquium offers weekly talks by guests and members of the CFvW Center on topics that are, in a broad sense, discussed at the Center.
Making Responsible Decisions in and about Science
Responsible Life Science Policy between Public and Private Funding
Conceptual Challenges for AI
Celebrating and commemorating Erwin Engeler’s 90th birthday
Vlasta Sikimić - Team composition and inclusion in contemporary science
3rd Workshop on the VolkswagenStiftung Planning Grant
Hilbert-Bernays Summer School on Logic and Computation
Proof, Computation, Complexity
2nd Workshop on the VolkswagenStiftung Planning Grant
1st Workshop on the VolkswagenStiftung Planning Grant
Proof-Theoretic Semantics: Assessment and Future Perspectives