
02.04.2020

From principles to practice: How we can make AI ethics measurable

As part of the AI Ethics Impact Group, the Ethics Centre of the University of Tübingen (IZEW) has published a study on the ethical design of artificial intelligence (AI) applications. At its core, the paper presents a label for indicating the ethical evaluation of AI systems.

With the increasing use of algorithmic systems in all areas of life, the debate on the social impact of technology and on the development of a "European pathway to artificial intelligence" has gained momentum. "Humane" and "trustworthy" AI are the keywords with which political actors in Germany and at the European level describe this path. A number of ethical guidelines for the design of AI have been published to make this possible. There seems to be general agreement that AI systems must be subject to certain principles such as fairness, transparency, and data protection.

General principles must be made measurable

However, the question of how the principles contained in these guidelines are to be implemented in practice remains largely unanswered. Concepts such as transparency and justice are understood in many different ways. As a result, both companies developing AI and users, such as public authorities, lack the necessary orientation, and effective control of the systems is not possible. This lack of concreteness is thus one of the major obstacles to the development and deployment of public-interest-oriented artificial intelligence.


Under the leadership of the non-profit standardization organization VDE and in cooperation with the Bertelsmann Foundation, PD Dr. Jessica Heesen, Dr. Thilo Hagendorff, and Dr. Wulf Loh from the Tübingen Ethics Centre have been working in the interdisciplinary AI Ethics Impact Group since October 2019. Their working paper "AI Ethics: From Principles to Practice - An interdisciplinary framework to operationalise AI ethics" now shows how AI ethics principles can be operationalised and transferred into practice. The AI Ethics Impact Group brought together experts from computer science, philosophy, engineering, and the social sciences. In addition to the International Centre for Ethics in the Sciences and Humanities (IZEW) at the University of Tübingen, participants included researchers from the Algorithmic Accountability Lab at TU Kaiserslautern, the High Performance Computing Centre at the University of Stuttgart, the Institute for Technology Assessment and Systems Analysis (ITAS) in Karlsruhe, the Institute of Philosophy at TU Darmstadt, and the think tank iRights.Lab.

Twitter: #Principles2Practice and #AIEIG

Contact:
jessica.heesen@uni-tuebingen.de; thilo.hagendorff@uni-tuebingen.de; wulf.loh@uni-tuebingen.de
