06.04.2021 · Is technology ruling our lives? Yes and no. Technology in general allows us to live longer and makes our lives easier, and it is part of our nature, since humans depend heavily on the creation of a civilization to stay alive. We have no fur, so we need heating; we cannot communicate with everyone in a modern society, so we need media to enable democratic deliberation; we are not good at seeing into the future, so we need data-driven foresight; and so on.
The question of whether technologies generate dystopia or social welfare depends on how we design them - technologies shape our society and the way people interact. Technologies can lead to discrimination, to an unequal distribution of power and money or to the destruction of our natural environment. This applies to technology in general, but especially to digital technologies, which are at the forefront of technological progress and enable many innovations in various areas.
Digital technologies in the form of social media can be used to foster surveillance or the suppression of free speech; they can lead to marginalization and manipulation. Digital infrastructure can make our society more vulnerable, given the risks of power failures or hacker attacks. At the same time, most of us do not want to do without the benefits of free, fast and easy communication worldwide, the availability of a vast store of knowledge on the Internet, and all the advances in science, health and work that have been made thanks to computer technology.
Technologies are, and always have been, a part of human activity and human identity. They do not come to us from another planet or a hostile superpower. Technologies that seem new and perhaps strange to one generation are often already seen as a natural part of life by the next. Technologies are an integral element of the common effort to create a "good" society. For this reason, prudent design and a well-founded assessment of the possible negative side effects of technological innovations are a must. Each of us is required to use technologies responsibly, but without question, powerful actors and politicians have a special responsibility to create safe and ethical frameworks for a welfare-oriented development of the digital society. Rules are therefore needed to ensure that the promises and benefits of technology are fairly distributed. If some actors in a society make full use of their freedom, they can restrict the freedom and opportunities of others. The German idiom "The freedom of the wolves is the death of the lambs" illustrates this mechanism, which can turn the utopian potential of technologies into a dystopia.
With the progress of Artificial Intelligence (AI), we are in the midst of setting a new technological course. It will determine how individuals, companies and politics act and decide with the help of algorithmic decision-making systems - or, conversely, how these systems and an AI-driven social infrastructure constrain their autonomous decisions.
AI development should serve society and not create new technical or economic constraints that conflict with ethical norms or impede progress in social welfare. This is what is actually meant when many political documents and speeches speak of "humanistic AI". In general, it is important not to accept the patterns and recommendations for action produced by AI systems as being without alternative or as "factual constraints". Like other technical products, AI is inscribed with specific, changeable purposes and preferences that benefit certain groups and individuals but can harm others.
Technical innovations often create new spaces for action for which known ethical forms of behavior provide no answer. They must therefore be discussed, considered, and negotiated anew. Questions of ethics become a topic of social and scientific discussion whenever technologies lead to uncertainty about the choice of the most appropriate action. This is the case when technologies enable actions that pose new moral problems (1), when new routines of action cause institutional changes (2), or when new technologies lead to uncertain (risky) consequences of action (3). Risky actions always come under special pressure of ethical justification. Often these three issues occur in combination.
When we think of autonomous systems in cars, for example, complex questions of responsibility arise: Does the developer of the software in a car bear any responsibility if a hacker manipulates it in such a way that an accident is caused? And how do we assess this issue when intelligent technology makes driving safer overall?
For all its critical and reflective distance from technical progress, ethics also emphasizes the positive effects of technical innovations. In this respect, weighing the expected positive consequences of a technology against its possible unintended negative ones is at the core of its task. Methodologically, technology ethics focuses on a thorough reconstruction of the normative background and motives of scientific and technological progress. By clarifying the often implicit value assumptions, conflicts of interest, and purposes, ethics seeks to contribute to ethically reflected and responsible decisions.
The ethics center contributes to this process with a range of papers, presentations and discussions, and through participation in various projects and committees such as the “Plattform Lernende Systeme – Germany’s platform for Artificial Intelligence” or the public advisory board of the “Cyber Valley”. Our papers address, for example, the problem of putting AI ethics into practice, issues of discrimination through AI, the protection of diversity, and the role of AI in public communication. The following selection of the ethics center's papers on AI ethics is intended to give you a first overview:
Values in the development of AI
Hagendorff, Thilo, The Ethics of AI Ethics. An Evaluation of Guidelines, Minds and Machines 30/3 (2020), 457-461, https://doi.org/10.1007/s11023-020-09517-8
Heesen, Jessica, Verantwortlich Forschen mit und zu Big Data und Künstlicher Intelligenz. In: Anja Seibert-Fohr (ed.): Entgrenzte Verantwortung. Zur Reichweite und Regulierung von Verantwortung in Wirtschaft, Medien, Technik und Umwelt, New York/Heidelberg: Springer 2020, 285-303, https://doi.org/10.1007/978-3-662-60564-6_14
Anthropological aspects of AI
Ammicht Quinn, Regina, Digitale Aufklärung und Aufklärung des Digitalen: Menschen als sinnliche Wesen. In: Armin Grunwald (ed.): Wer bist du, Mensch? Transformationen menschlichen Selbstverständnisses im wissenschaftlich-technischen Fortschritt, Freiburg i. Br.: Herder, June 2021, 64-82, https://www.herder.de/theologie-pastoral-shop/wer-bist-du%2c-mensch-gebundene-ausgabe/c-37/p-21357/
Reinhardt, Karoline, Digitaler Humanismus. Jenseits von Utopie und Dystopie, Berliner Debatte Initial, Themenschwerpunkt Digitale Dystopie, 2020, 111-123, https://shop.welttrends.de/e-journals/e-paper/2020-digitale-dystopien/digitaler-humanismus
Regulation of AI
Heesen, Jessica/Müller-Quade, Jörn/Wrobel, Stefan et al., Zertifizierung von KI-Systemen. Kompass für die Entwicklung und Anwendung vertrauenswürdiger KI-Systeme, whitepaper from the Plattform Lernende Systeme, November 2020, https://www.plattform-lernende-systeme.de/publikationen-details/zertifizierung-von-ki-systemen-kompass-fuer-die-entwicklung-und-anwendung-vertrauenswuerdiger-ki-systeme.html
AI and Diversity
Heesen, Jessica/Reinhardt, Karoline/Schelenz, Laura, Diskriminierung durch Algorithmen vermeiden. Analysen und Instrumente für eine digitale demokratische Gesellschaft, in: Gero Bauer et al. (eds.): Diskriminierung und Antidiskriminierung. Bielefeld: transcript 2021, 129-148, https://doi.org/10.14361/9783839450819-008
Reinhardt, Karoline, Between Identity and Ambiguity. Some Conceptual Considerations on Diversity, in: Symposion. Theoretical and Applied Inquiries in Philosophy and Social Sciences 7 (2), 2020, 261-283, http://symposion.acadiasi.ro/wp-content/uploads/2020/12/2020.7.2.-Reinhardt.pdf
Putting AI ethics into practice
Hagendorff, Thilo, AI Virtues. The Missing Link in Putting AI Ethics into Practice, arXiv:2011.12750 (2020), 1-20
Heesen, Jessica et al., Ethik-Briefing. Leitfaden für eine verantwortungsvolle Entwicklung und Anwendung von KI-Systemen, whitepaper from the Plattform Lernende Systeme, October 2020, https://www.plattform-lernende-systeme.de/files/Downloads/Publikationen/AG3_Whitepaper_EB_200831.pdf
From Principles to Practice. An interdisciplinary framework to operationalise AI ethics, Artificial Intelligence Ethics Impact Group (AIEIGroup), VDE/Bertelsmann Stiftung, Creative Commons 2020, https://www.ai-ethics-impact.org/resource/blob/1961130/c6db9894ee73aefa489d6249f5ee2b9f/aieig---report---download-hb-data.pdf (with Thilo Hagendorff, Jessica Heesen and Wulf Loh from the IZEW)
Short link for sharing: https://uni-tuebingen.de/de/208225