International Center for Ethics in the Sciences and Humanities (IZEW)

Do deepfakes (really) harm democracy?

Why the debate about deepfakes in politics often falls short

by Cora Bieß and Maria Pawelec

12.11.2020 · Deepfakes are synthetic audio-visual media (i.e. images, videos, and audio files), often created using artificial intelligence (AI). Many concerns are associated with the use of deepfakes, in particular that they could undermine democratic processes and institutions as a new and more dangerous form of fake news. These concerns are definitely justified. At the same time, the debate neglects two important aspects: firstly, deepfakes may be causing greater damage in a different context, that of pornography, and secondly, the technology has many legitimate and even pro-democratic applications.

Deepfakes are used by a wide range of actors in different fields. Their impact and ethical implications depend on their context of use and are correspondingly diverse. The debate about deepfakes frequently revolves around their (often hypothetical) potential for political damage. Potential applications and consequences include:

  1. Deepfakes in election campaigns: Deepfakes could threaten democratic elections by depicting candidates doing or saying things they never actually did or said. This could manipulate voters and distort election results.

  2. "Liar's dividend": People who are criticised for certain statements or actions can increasingly claim that incriminating files are deepfakes (Chesney/Citron 2019). For example, in 2017 [former] US President Trump claimed that the "Access Hollywood" video, in which he bragged about harassing women, was manipulated.

  3. Weakening the media: Deepfakes pose new challenges to journalists evaluating sources and can weaken public trust in the media.

  4. Political destabilisation: Deepfakes can deepen social rifts. In 2018, for example, a deepfake picture and video of Emma Gonzalez, a survivor of the school shooting in Parkland, Florida, circulated. It showed her tearing up the US Constitution; in the original footage, she was tearing up a shooting target. This deepfake further aggravated the already tense debate about gun legislation in the US.
     In a worst-case scenario, deepfakes could even be used to provoke domestic or interstate conflicts, for instance by depicting politicians in ways that antagonise certain groups or states. So far, we have only been able to find one confirmed case of such an application: in late 2018, accusations that a video of the President of Gabon's New Year's address was a deepfake fuelled unrest and ultimately an attempted military coup. No evidence of manipulation was ever found, but the accusations alone stoked the political unrest.

  5. Damaging foreign policy: Diplomatic relations can be harmed by targeted disinformation, and deepfakes are increasingly used for political propaganda. For instance, a few weeks ago Facebook reported that it had deleted two networks of fake accounts on Facebook and Instagram. One of these was a Chinese network whose fake accounts used deepfake profile pictures. Unlike stolen photos of real people, such synthetic pictures cannot be traced back to an original source. These accounts were then used to spread propaganda on geopolitical topics such as maritime security in the South China Sea.

  6. Damaging political opinion formation: Ultimately, deepfakes threaten fundamental discourses and processes in an open and democratic society. They may create an environment in which citizens feel they can no longer believe anything at all. This could cause a loss of trust and disenchantment with democracy.

  7. Pornography/exploitation of women's sexual identity/threat to gender equality: 96% of all deepfakes are pornographic, and almost 100% of these depict women (Ajder et al. 2019). Deepfake technology in fact originated in this context, when female celebrities' faces were swapped into pornographic videos in 2017. Women who are not famous are now also affected, as deepfake creation can increasingly be purchased as a service. The only prerequisite is sufficient image material, which can often be found on social media profiles, for example. Pornographic deepfakes violate the personal rights of those affected and can cause profound personal damage. They can also serve as a basis for blackmail, defamation, or other criminal acts. Deepfake revenge pornography thus threatens both the personal integrity of the women concerned and women's equality in a digital society.

However, deepfakes do not only threaten democratic institutions and processes. Besides legitimate applications in the film and video industry and the arts, for example, they can also be put to pro-democratic ends.

  1. Deepfakes to protect persecuted groups: In 2020, deepfakes were used in the documentary "Welcome to Chechnya", in which LGBTQ activists recount their escape from anti-gay purges in Chechnya. Their identities were obscured using deepfakes to protect them from persecution, while their emotions and facial expressions were preserved so that the audience could empathise with them. The use of AI thus enabled the documentary to tell of experiences that would otherwise have remained untold, since the identities of the persecuted had to be protected.

  2. Deepfakes in activism and political art: In autocratic systems, deepfakes could help activists spread opposition views, as the technology can be used to conceal one's own identity. In political art, deepfakes are increasingly used to draw attention to social and political grievances and to stimulate political action. The art project "Spectre", for example, uses deepfakes to highlight the misuse of personal data for political influence.

  3. Deepfakes for education: Deepfakes could make the history told in museums more vivid and tangible. The Dalí Museum in Florida, for example, currently uses deepfakes to enable visitors to interact directly "with" the artist Salvador Dalí, who died in 1989.

  4. Deepfakes in assistive technology: Deepfakes can provide people who have lost their voice due to disability or chronic illness with an authentic-sounding synthetic copy of their own voice for communication (see e.g. Project Revoice). This promotes the inclusion and participation of people with disabilities.

  5. Deepfake satire/parody: Deepfakes are increasingly used for political commentary, satire, and irony, and thus contribute to political debate. So far, such uses have mostly been seen, rather uncritically, as ethically unproblematic. Yet ethical questions arise here as well: Who determines what counts as satire or irony? What about people who cannot recognise satire, for instance because of limited language skills or cognitive abilities? In addition, democracy-endangering deception can occur here too, when a deepfake parody is taken out of its original context and circulated elsewhere.

As shown, deepfakes are thus used for a variety of purposes, raising very different ethical and societal questions. The widespread fear that deepfakes could endanger democracy is definitely justified. It is true that there are only a few documented cases in which deepfakes have had a significant negative impact on democratic processes. However, the barriers to creating convincing deepfakes in terms of the required computing power, technical knowledge, and other resources, such as training data, are constantly falling, making their successful use in disinformation campaigns ever more likely. This could cause a loss of trust in political processes and institutions, thus threatening social cohesion, participation, the autonomy of voters, and, ultimately, democracy. Equally worrying is the growing number of pornographic deepfakes, which are often used as a "weapon against women". They massively violate the personal rights and dignity of those affected and threaten inclusion, participation, privacy, and gender justice. At the same time, deepfakes, as a means of entertainment and commentary, are covered by freedom of expression and can be put to a variety of pro-democratic purposes in addition to numerous economic applications. In this way, they enable new forms of political debate and mobilisation, raise awareness of social grievances, and foster the inclusion of people with physical disabilities.

A general ban on deepfakes is therefore neither feasible nor desirable. However, deepfakes in all of the application areas mentioned above should be labelled as such to counteract deception and create transparency for viewers. In addition, regulatory discussions specific to each application context are needed, informed by an ethical and societal impact assessment.

Literature

Ajder, H., Patrini, G., Cavalli, F. and Cullen, L. (2019), “The State of Deepfakes. Landscape, Threats, and Impact”, Deeptrace.

Chesney, R. and Citron, D. (2019), “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security”, California Law Review, Vol. 107, pp. 1753–1819.