Faculty of Law

"When you introduce powerful technologies into this world you have a responsibility that they are introduced in the right way"

On Thursday, March 4, 2021, the inaugural symposium of Tübingen’s “AI MEETS LAW” platform took place. It was dedicated to “Trustworthy AI – Determinants for Designing Ethical and Legally Compliant Solutions”.

AI, especially with machine learning capabilities, offers huge efficiency gains that can serve a wide range of societal needs. Yet the progress of these technologies raises questions of transparency, predictability, and accountability. Enshrining trustworthiness in AI will therefore bear significantly on its further development and on the societal acceptance of AI-based decision-making and data processing.

The symposium on trustworthy AI explored the implications of this issue for technological development, political discourse, and ethics. These appear to be the foundations of any AI-related regulatory regime, as Stefan Thomas noted in his brief welcome address.

Andrea Martin, Leader of the IBM Watson Center in Munich and the EMEA Client Centers, spoke on “Trustworthy AI: Notes on the dialogue between AI innovators and the political actors”. She shared insights from her experience in dealing with trustworthiness from a company perspective and from her role as a policy advisor to the German Parliament. Ethical self-regulation by AI developers and legal boundaries emerged as two complementary concepts for ensuring the safety and societal acceptance of AI-based solutions. The essence of Andrea’s talk is reflected in the words of Ginni Rometty, former President, CEO and Chairman of IBM Corporation: “when you introduce powerful technologies into this world you have a responsibility that they are introduced in the right way.”

Matthias Biniok, Lead Architect IBM Watson / Leader Space Tech Division DACH, subsequently spoke about “The design principles of trustworthy AI – from conception to market realization”. He provided a clear overview of current development approaches in machine learning, with a focus on applications in space exploration and legal tech. While these may seem like disparate fields of use, developers in both areas face similar challenges when creating practical and ethically responsible AI-based products.

The subsequent discussion was moderated by the panel chair, Sebastian Brüggemann, a tech-specialized attorney and lecturer at the Tübingen faculty. The questions and comments from the audience focused on the role of data in the further development of AI, especially machine learning, and on the regulatory framework needed to ensure the trustworthiness of a system. With regard to trustworthiness, fairness and discrimination turned out to be key issues. Since AI cannot produce absolute certainty in its predictions or outcomes, ways must be found to deal with these inherent uncertainties. When evaluating such AI-related risks, it can be helpful to bear in mind that human decision-making can likewise be significantly influenced by biases and errors. Reflecting critically on the error margin of human decision-making proved helpful in defining areas where AI-based approaches might contribute to greater fairness and equality.


The AI MEETS LAW platform was established in 2020 at the Faculty of Law in Tübingen. It is an informal discussion group on the intersection of artificial intelligence and law. You can learn more about it here: https://uni-tuebingen.de/en/167985.

Sign up for our mailing list and look out for upcoming events!