The PEGASUS project deals with the use of machine-learning-based software in the fight against organised crime, a use that raises a wide range of ethical questions. Two strands of reflection must be distinguished: the ethical examination of how crime is defined and dealt with, and the ethical analysis of the use of the technology itself. PEGASUS focuses on the latter without losing sight of the former. The aim of the project is to carry out ethically informed accompanying research in close cooperation with technical and legal partners in order to define normative guidelines and framework conditions for an acceptable use of the technology.
April 2020 – April 2023
Organised crime, like other social fields, is subject to digitalisation. This offers law enforcement a multitude of new investigative possibilities. However, evaluating the accumulated data is so complex that specific ethical problems arise. This is where the accompanying ethical research in the PEGASUS project comes in. It reflects on the various scenarios for collecting and evaluating heterogeneous mass data, sets out the fundamentals of data and AI ethics, and applies them to the case-related topics relevant to the project. In this context, questions of the (re)identification of individuals, "group privacy", algorithmic discrimination, transparency, algorithmic ethics, the explainability of machine learning methods and much more are of importance. Normative settings, such as fairness criteria, thresholds and scores, are inscribed in technical artefacts already during the design of ADM (algorithmic decision-making) processes. These normative settings must be discussed before ADM processes are implemented in law enforcement practice.
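The point that normative settings are inscribed in ADM processes can be made concrete with a minimal sketch. All names and numbers below are illustrative assumptions, not part of the PEGASUS project or any real law-enforcement system: a flagging threshold and a tolerated demographic-parity gap appear as explicit, and therefore discussable, parameters rather than hidden design choices.

```python
# Hypothetical sketch: normative settings as explicit parameters of an
# ADM process. The threshold and parity gap are value judgements that
# must be debated before deployment; nothing here is a real system.

from dataclasses import dataclass


@dataclass
class AdmPolicy:
    flag_threshold: float   # normative choice: how suspicious counts as "suspicious"?
    max_parity_gap: float   # normative choice: tolerated difference in flag rates


def flag_cases(scores, threshold):
    """Flag every case whose model score meets the threshold."""
    return [s >= threshold for s in scores]


def demographic_parity_gap(flags, groups):
    """Absolute difference in flag rates between two groups 'a' and 'b'."""
    rates = {}
    for g in ("a", "b"):
        members = [f for f, grp in zip(flags, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return abs(rates["a"] - rates["b"])


def policy_is_acceptable(scores, groups, policy):
    """Check whether flagging under this policy stays within the parity gap."""
    flags = flag_cases(scores, policy.flag_threshold)
    return demographic_parity_gap(flags, groups) <= policy.max_parity_gap
```

Changing `flag_threshold` or `max_parity_gap` changes who is flagged and which disparities are tolerated; making such parameters visible in the design is precisely what allows them to be scrutinised before an ADM process reaches practice.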