Internationales Zentrum für Ethik in den Wissenschaften (IZEW)

Human(s) in the loop(s): On the use of AI in German law enforcement

by Lou Brandner & Anna Louban

16.07.2024 · The criminal justice system is one of many areas of society undergoing a comprehensive process of digitalization and datafication. This development is accompanied and potentially accelerated by the impact of rapidly evolving artificial intelligence (AI) technologies. Digital technology, and particularly AI, can step in where human skills and capacities reach their limits. But AI systems can also perpetuate discrimination, violate privacy laws, and operate as opaque “black boxes” whose outputs are difficult to understand. The EU’s Artificial Intelligence Act (AI Act) addresses these issues by establishing a legal framework for the development, deployment, and use of AI systems according to their risk category.

While the final version of the AI Act exempts law enforcement authorities from many obligations, it requires all deployers and providers of high-risk systems to ensure “human oversight”. In this text, we look at the current organization of the German police and particularly at datafied police work. Our argument is informed by findings from interviews conducted in two interdisciplinary BMBF-funded[1] projects: FAKE-ID, which deals with AI-based video analysis to detect false and manipulated identities, and PEGASUS, which analyzes the processing of heterogeneous data in the context of organized crime. These projects explore the legal, ethical, and societal impacts of AI-based technology in law enforcement. Against this background, we examine the complexity of datafied police work and the difficulties it might entail for the implementation of meaningful human oversight in AI-based policing, emphasizing the importance of ethical reflection.

Identifying the humans in the loops

For PEGASUS, the interviews focused on police investigators and data analysts to explore how the authorities currently work with digital technologies, data filtering, and evaluation systems in light of the increasing use of AI-based policing technology. For FAKE-ID, actors in various positions of datafied police work were interviewed, including experts whose area of responsibility involves the detailed documentation and evaluation of image and video material. The qualitative material we gathered provides insights into the status quo of datafied police work processes, chains of responsibility, and interactions between technology, police investigators, and other experts.

To outline the entangled relationship between police work and digital technology, we borrow the data science term human in the loop, which describes systems that require human interaction, typically a decision, to complete a process. Recognizing the multiplicity of human-produced decisions involved, we use the term humans in the loops. Adopting this terminology captures that, within the many partly intertwined processes of criminal justice, different (types of) police actors navigate the various interactions (a) with technology and (b) with other individuals involved in the complex field of datafied police work.
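
For readers unfamiliar with the data science usage, the following minimal sketch (in Python, with purely hypothetical names that do not correspond to any real police system or dataset) illustrates what a single human in the loop means in that sense: an automated step produces an output, but only a human decision completes the process.

```python
# A minimal, purely illustrative sketch of a "human in the loop" process in the
# data-science sense used above. All names (automated_screening, human_decision,
# etc.) are hypothetical and do not refer to any real police system or dataset.

from dataclasses import dataclass


@dataclass
class AutomatedResult:
    item: str
    label: str         # e.g. "suspicious" / "not suspicious"
    confidence: float  # model confidence between 0 and 1


def automated_screening(item: str) -> AutomatedResult:
    """Stand-in for an AI-based analysis step (e.g. image or text triage)."""
    return AutomatedResult(item=item, label="suspicious", confidence=0.62)


def human_decision(result: AutomatedResult) -> bool:
    """The process is only completed once a human confirms or rejects the output."""
    print(f"{result.item}: model output '{result.label}' ({result.confidence:.0%})")
    answer = input("Accept this output? [y/n] ")
    return answer.strip().lower() == "y"


def process(item: str) -> str:
    result = automated_screening(item)
    # The automated output alone does not complete the process; a human decision does.
    if human_decision(result):
        return f"{item}: forwarded to the responsible investigators"
    return f"{item}: output rejected, returned for manual review"


if __name__ == "__main__":
    print(process("video_0001.mp4"))
```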

The ideal of a criminal case solved with the help of digital technology might look something like this: To investigate a criminal case, police investigators submit applications to the public prosecution, for example to monitor a suspect’s smartphone and to subsequently seize the technical device. The seized device is passed on to digital forensics, where analysts examine data such as images, audio, or text messages and forward their findings to the investigators responsible for the seizure. At this stage, AI-based technologies promise more objective, faster, and therefore more efficient data-based investigative analysis. If evidence is found, the investigators approach the public prosecution, which can either approve further measures or issue an arrest warrant. In the subsequent court proceedings, investigators and data analysts can appear in court to present and explain their findings and evidence.

Reality is rarely that straightforward. Police personnel need to handle immense amounts of heterogeneous and often unstructured data when investigating crime; analyzing data obtained from PCs or mobile phones as well as internet data (e.g. in cybercrime investigations) often involves terabytes of photos, videos, text messages, voice messages, et cetera. This work spans a variety of sub-areas, such as development, technical integration, internal service points, the coordination of system allocation, and “pure” usage (by investigators and analysts), and it draws on a multitude of programs and program types, such as case processing systems, geo-information systems, device analysis tools, recognition software, or translation tools. Furthermore, because German police agencies are organized under a federal structure, police regulations, roles, and chains of responsibility can differ from one state, department, and actor to another.

Our combined data thus show that, for different police personnel, the engagement with AI in their work is both varied and highly specific. Some members of police forces contribute to the ambition of developing AI in-house, which makes the police the provider of such systems, while others focus on reviewing and selecting AI tools that are acquired by the police as a deployer. Meanwhile, police actors in other positions oversee the ongoing customization that purchased systems require or are engaged in incorporating or implementing AI within their current or potential job functions.

The AI Act reflects these various types of involvement of police actors in AI-based processes. The final draft was endorsed by the European Parliament on March 14th, 2024, and the documents published during the trilogue provide insight into how the identification of roles, and the risks and obligations attached to those roles, evolved. For instance, the European Commission’s initial proposal for the AI Act, published in 2021, primarily mentions developers and users. Amendments adopted by the European Parliament on June 14th, 2023, however, already acknowledge the diversity of actors involved in the “AI game”, encompassing not only the use of AI systems but also their deployment, provision, distribution, and importation. The compromise version, finalized in January 2024, goes even further and acknowledges the necessity of ensuring AI literacy for all persons professionally involved with AI. This makes it obvious that, in the workplace context, the concept of the uninformed technology user is outdated, placing a particular responsibility on law enforcement actors.

Human oversight: Not only the solution but also the problem?

In light of this complex constellation of humans in the loops in police decision-making processes involving technology, we want to consider the implementation of the “human oversight” obligations of the AI Act as listed in Article 14. There, the AI Act assigns the main responsibility to providers of high-risk systems, who must ensure that the systems are “provided to the deployer in such a way that natural persons to whom human oversight is assigned” are able to understand the system’s capacities and limitations, while also remaining aware of the possibility of over-reliance on automated outputs and decisions. These persons must be capable of correctly interpreting system outputs and of making informed decisions to disregard them. They should also be able to intervene in the operation or interrupt the system, for instance through a “stop” button, or to entirely reject the use of AI systems unsuitable for the case at hand.
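
To make these capabilities more tangible, the following minimal sketch (again in Python, with purely hypothetical names and no claim to implement Article 14 or to reflect any real system) renders the options named above, namely accepting, disregarding, interrupting, or rejecting the system, as explicit actions a human overseer takes before an automated output has any effect.

```python
# A minimal, purely illustrative sketch, not an implementation of Article 14:
# it merely renders the oversight capabilities discussed above as distinct actions
# a human overseer takes before an automated output is acted on. All names are hypothetical.

from enum import Enum, auto


class OversightAction(Enum):
    ACCEPT = auto()      # informed decision to rely on the output
    DISREGARD = auto()   # informed decision to disregard or override the output
    STOP = auto()        # intervene and interrupt the system ("stop" button)
    REJECT_USE = auto()  # judge the system unsuitable for the case at hand


def oversee(output: dict, choice: OversightAction) -> str:
    """A single automated output passes through a human overseer before it has any effect."""
    if choice is OversightAction.ACCEPT:
        return f"output used as an investigative lead: {output}"
    if choice is OversightAction.DISREGARD:
        return "output disregarded; decision made on other grounds"
    if choice is OversightAction.STOP:
        return "system interrupted; no further outputs processed"
    return "AI system not used for this case"


if __name__ == "__main__":
    model_output = {"match": "person_A", "score": 0.91}
    print(oversee(model_output, OversightAction.DISREGARD))
```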

As discussed, German police assume many different roles in developing, purchasing, providing, deploying, and using AI. This means that, depending on the system in question, an external company can be the provider of AI software that the police then deploy. The police can also be provider and deployer simultaneously. In both cases, human oversight needs to be assigned to natural persons within the police. These individuals require the necessary competence, training, and authority to fulfill this role effectively. Respective roles and obligations within specific police forces and teams need to be identified and differentiated. Moreover, the police as an institution is responsible for establishing the structures necessary for obtaining and maintaining the essential skills for this task.

Our qualitative data show that these expectations meet a complex reality. AI-based systems and their outputs can be particularly difficult for investigators to comprehend due to their complexity and opacity. Investigators are thus dependent on a lengthy process of “back and forth” with different data analysts (text analysts, image analysts, etc.) to obtain useful results. AI technology can easily introduce errors into an investigation if results do not undergo specialized review; for instance, evidence such as texts, images, and video recordings often contains languages investigators do not understand, polylingual speech, and coded or “slang” terminology. An AI-based transcription or translation system might not grasp these intricacies correctly. In such instances, external actors such as specialized translators additionally need to be consulted.

This raises the question of who could effectively assume the role of human overseer: investigators are often insufficiently familiar with the technical operating principles of the systems, while data analysts do not necessarily have enough insight into investigative details to evaluate the significance of decisions. The creation of dedicated roles that combine both kinds of expertise across the German police can be one strategy to put human oversight into practice, but wider structural changes might be necessary as well. In our empirical research, we observe an error culture in need of improvement. Furthermore, individual as well as organizational responsibility in working with digital policing technology often appears unclear, and institutionalized discussions of normative issues rarely take place. The introduction of increasingly complex technology and of additional emerging legal obligations might render these existing problems more pressing; the use of AI can exacerbate reputational risks for the institution in the case of public scandals, but also, most importantly, risks to individual and societal wellbeing.

German law enforcement using AI: Where decisions have serious consequences, ethical reflection is essential

The argument presented above underscores the importance of ethical, legal, and social implications (ELSI) in AI development and use: Human oversight, as required by the AI Act, is an essential aspect of ethically acceptable technology implementation that promotes human agency and autonomy. Especially in high-risk contexts like law enforcement, AI technologies can perpetuate structural discrimination and other undesirable social phenomena; against this backdrop, human overseers are supposed to make informed decisions on the validity of automated outcomes. Accordingly, we contend that the required training must include discussions of ethical concepts such as AI fairness and transparency. Support structures (for instance in the form of ombudspersons) should also account for these issues, offering the possibility to report erroneous, harmful, or otherwise ethically concerning uses of AI technology. The integration of ELSI aspects into projects researching police AI use can thus be an essential measure to ensure that relevant individuals or teams within the police bear responsibilities aligned with their particular tasks and role(s).

Through our research, we have been able to ascertain that the concept of sole organizational responsibility, established through legal obligations, is insufficient to address the far-reaching consequences of AI use within law enforcement. Presently, police actors in various positions assume disparate roles in AI development, implementation, deployment, and other AI-related processes. The AI Act diversifies obligations according to role. It demonstrates that the legal compliance and ethical acceptability of a system do not solely depend on how it is developed but need to be ensured throughout the whole technology lifecycle. Beyond this, how concrete responsibilities are best arranged in practice must be determined on a case-by-case basis, considering the numerous human actors operating in various loops and capacities.

-------------------------------------------

[1] The Federal Ministry of Education and Research of Germany (BMBF) funded both of the projects referred to here as part of the thematic area “Artificial Intelligence in Civil Security Research”.

-------------------------------------------


About the authors:

Lou Brandner is a sociologist and postdoctoral research associate at the IZEW of the University of Tübingen. She received her PhD in Sociology from La Sapienza University in Rome in 2021. Her current research applies AI and data ethics to the use of AI-based technologies in different societal areas, including law enforcement.

Anna Louban is a sociologist/anthropologist with research experience in legal anthropology, immigration, and citizenship, as well as the anthropology of the state and bureaucracy. Since 2021, she has been a research associate at the Research Institute for Public and Private Security (FÖPS) at the Berlin School of Economics and Law (HWR), working on research projects focused on the use of AI within German law enforcement.