International Center for Ethics in the Sciences and Humanities (IZEW)

Ethics, Law and Security of Digital Afterlife (Edilife)

Digital technologies increasingly shape our lives, and they also increasingly influence the way we deal with death, grief and memory. Technologies of the Digital Afterlife Industry (DAI) enable a form of "continued existence" and interaction with digital representations of deceased persons as avatars, chatbots or senders of text messages. The Edilife project aims to reflect on current services of the digital afterlife industry, to anticipate future developments, to identify where action is needed, and to develop options for future action. Using participatory processes, the spectrum of societal perspectives on the "digital afterlife" is analysed from the viewpoints of ethics, IT security and law. Edilife contributes to opening up the societal discussion and sets the course for a successful implementation of new digital practices in the culture of death and remembrance.



July 2022 – February 2024

Insight – interdisciplinary perspectives on societal and technological change

In cooperation with


The so-called Digital Afterlife Industry (DAI) is a new growth market for start-ups and large commercial platform operators, in which interactions with deceased persons via communication platforms, chatbots or avatars play a significant role. The field of the "digital afterlife" includes situations concerning one's own death, when people prepare, or explicitly forbid, a corresponding use of their data before they die; and it includes situations concerning the death of others, when relatives or friends want to continue to experience the "presence" of the deceased. At the same time, the research questions go beyond individual situations: in memory culture, avatars of Holocaust survivors answer the questions of student groups in museums; in education, digital teaching can also be "delivered" by the deceased; and deceased public figures are brought back into the spotlight on certain occasions (for example, when a simulation of the voice of U.S. President Kennedy delivers the speech Kennedy would have given on the day of his assassination, cf. Rothco, 2018).

Currently, such applications exist in the advertising, music, and film industries (McEvoy, 2021; Chesney & Citron, 2019, p. 1770). Visitors to the "Dalí Lives" exhibition in the USA were able to "interact" with the artist, who died in 1989 (Kwok & Koh, 2020, p. 3). In the context of the 2020 U.S. election, a deepfake of a teenager killed in the Parkland massacre enabled a particularly forceful political campaign to tighten gun laws (Diaz, 2020). But such applications are also already being used in personal contexts, as in the case of a Korean woman who had her dead daughter simulated in this way (McEvoy, 2021), or in the form of short video animations of deceased relatives reconstructed from photos, offered as a service by the genealogy platform MyHeritage.

Advances in AI will significantly accelerate this development and make such applications more widely and easily accessible. It can therefore be assumed that private use will also increase in the future. The Edilife project takes a strongly participatory approach: based on analyses of the relevant applications, stakeholder workshops, and group and individual interviews, it maps out and conducts an initial exploration of the research field of the "digital afterlife".

The goals of the project are:

  • gaining scientific knowledge in a significant new research area, which on the one hand encompasses ethical questions of dealing with grief and death in relation to digital-afterlife technologies (IZEW), and on the other refers to the technical possibilities of virtually representing natural persons and the associated data protection and security implications (SIT);
  • identifying the need for political action regarding ethical issues (IZEW) as well as security and data protection (SIT), and opening up a societal discussion (IZEW and SIT).

Central research questions include: What wishes do people have for their digital existence after their death? How can grief and reverence find a place in this socio-technical context? In what ways will technology influence religious life, and vice versa? What possibilities for manipulation and abuse arise? How can everyone's privacy be protected? Can a digital twin of a deceased person break the law? How can the rights of data subjects be enforced against the commercial interests of the international DAI and other industries (such as the entertainment industry)?

Just as social interaction in a digital society requires media literacy and an (ethically) reflective understanding of technology, it is also an urgent task to search for a respectful approach to death and grief in a data-driven and mediatized digital culture. In the project, these questions are addressed from an interdisciplinary perspective that is shaped both technically and culturally.


  • Ajder, H., Patrini, G., Cavalli, F. & Cullen, L. (2019). The State of Deepfakes: Landscape, Threats, and Impact. Deeptrace.
  • Anderson, J., Rainie, L. & Luchsinger, A. (2018). Artificial intelligence and the future of humans. Pew Research Center.
  • Bohnstedt, J. (2019). Vom Personenbezug zum Gerätebezug – KI und Datenschutz. DSRITB, 409–421.
  • Brundage, M. et al. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. arXiv:1802.07228
  • Chesney, R. & Citron, D. (2019). Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. California Law Review, 107, 1753–1819.
  • Deutscher Bundestag (28 October 2020). Bericht der Enquete-Kommission Künstliche Intelligenz – Gesellschaftliche Verantwortung und wirtschaftliche, soziale und ökologische Potenziale (Vorabfassung), Drucksache 19/23700.
  • Diaz, A.-C. (2 October 2020). Parkland victim Joaquin Oliver comes back to life in heartbreaking plea to voters. AdAge.
  • Park, H.-s. (17 January 2021). Holographic performances of dead stars welcomed, with caution. The Korea Times.
  • Heesen, J. (2022). Verstorbene als Medienprodukt. Die Programmierung von Unendlichkeit als ethische Herausforderung. In W. George & K. Weber (eds.), Die Grenzen des Wachstums: Eigene Endlichkeit. Gießen (in press).
  • Hennig, M. (2018). Fiktionen vom digitalen Körper, Leben und Tod in Literatur, Film und Computerspiel. In A. Hartung-Griemberg, R. Vollbrecht & C. Dallmann (eds.), Körpergeschichten. Körper als Fluchtpunkte medialer Biografisierungspraxen (pp. 195–215). Baden-Baden: Nomos.
  • Jordan, M. I. (2019). Artificial intelligence – The revolution hasn’t happened yet. Harvard Data Science Review, 1.
  • Kasket, E. (2020). All the Ghosts in the Machine: The Digital Afterlife of your Personal Data. London: Robinson.
  • Kneese, T. (2 November 2020). How Data Can Create Full-On Apparitions of the Dead. Slate.
  • Kubis, M., Naczinsky, M., Selzer, A., Sperlich, T., Steiner, S. & Waldmann, U. (2019). Der digitale Nachlass – Eine Untersuchung aus rechtlicher und technischer Sicht.
  • Kwok, A. O. J. & Koh, S. G. M. (2020). Deepfake: a social construction of technology perspective. Current Issues in Tourism, 1–5.
  • Lagerkvist, A. (2017). The Media End: Digital Afterlife Agencies and Techno-existential Closure. In A. Hoskins (ed.), Digital Memory Studies: Media Pasts in Transition (pp. 48–84). New York: Routledge.
  • Loh, J. (2020). Trans- und Posthumanismus zur Einführung. Hamburg: Junius, 3. korrigierte Auflage.
  • Marshall, A., Rojas, R., Stokes, J. & Brinkman, D. (2018). Securing the Future of Artificial Intelligence and Machine Learning at Microsoft.
  • McEvoy, F. J. (23 January 2021). Deepfaking the Deceased: Is it Ever Okay? You the Data.
  • Morse, T. & Birnhack, M. D. (2019). Digital Remains: The Users’ Perspectives. SSRN Electronic Journal.
  • Netzwerk Datenschutzexpertise. (2016). Postmortaler Datenschutz Auskunftsansprüche von Erben und Angehörigen zu personenbezogenen Internetdaten eines Verstorbenen.
  • Öhman, C. & Watson, D. (2019). Are the dead taking over Facebook? A Big Data approach to the future of death online. Big Data & Society, January–June 2019, 1–13. https://doi.org/10.1177/2053951719842540
  • Öhman, C. & Floridi, L. (2018). An ethical framework for the digital afterlife industry. Nature Human Behaviour, 2, 318–320.
  • Rothco (2018). JFK Unsilenced.
  • Savin-Baden, M. & Mason-Robbie, V. (eds.). (2020). Digital Afterlife. Death Matters in a Digital Age. Taylor & Francis.
  • Schindler, S. (2019). Künstliche Intelligenz und (Datenschutz-)Recht. ZD-Aktuell, 06647.
  • Sisto, D. (2020). Online Afterlives. Immortality, Memory, and Grief in Digital Culture. Translated by B. McClellan-Broussard. MIT Press.
  • Schmidt, J.-H. & Taddicken, M. (2017). Soziale Medien: Funktionen, Praktiken, Formationen. In J.-H. Schmidt & M. Taddicken (eds.), Handbuch Soziale Medien (pp. 23–37). Wiesbaden: Springer.
  • Smith, M. (2021). The Intangible Ossuaries: The Ethical Dilemmas that Come with Handling the Data of the Deceased. APPE Conference.
  • Spies, U. (2020). Klinische Krebsregistrierung aus Sicht der Tumorzentren und postmortaler Datenschutz. NZS, 921–926.
  • Topol, E. J. (2019). High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine, 25, 44–56.
  • Voinea, C. & Uszkai, R. (2019). An ethical framework for digital afterlife industries. In Proceedings of the 13th International Management Conference „Management Strategies for High Performance“, 31 October – 1 November 2019, Bucharest, Romania.
  • Weissman, J. (2021). The Crowdsourced Panopticon. Conformity and Control on Social Media. London: Rowman & Littlefield.
  • Wieder, C. (2018). Datenschutzrechtliche Betroffenenrechte bei der Verarbeitung von personenbezogenen Daten mittels künstlicher Intelligenz. In J. Taeger (ed.), Rechtsfragen digitaler Transformationen. Edewecht, 505–518.
  • Yampolskiy, R. V. (2019). Unexplainability and Incomprehensibility of Artificial Intelligence. University of Louisville. arXiv:1907.03869