Internationales Zentrum für Ethik in den Wissenschaften (IZEW)

Trust in AI Ethics

by Karoline Reinhardt

04.04.2023 · With the publication of the Ethics Guidelines for Trustworthy AI (2019), notions of trust and trustworthiness gained particular attention in AI ethics debates: despite an apparent consensus that AI should be trustworthy, it is less clear what trust and trustworthiness entail in the field of AI. Currently, AI Ethics tends to overload the notion of trustworthiness, turning it into a buzzword that cannot be operationalized. In what follows, I give an overview of the notion of trust employed in AI Ethics Guidelines. Based on this overview, I assess the guidelines' conceptual overlaps and omissions from the perspective of practical philosophy. In the final section, I formulate points to consider for future research on trustworthy AI (TAI).

Trust and Trustworthiness in AI Ethics: An Evaluation of Guidelines

In the roughly 120 guidelines analyzed, trust is generally perceived as something ‘good’. Only a few guidelines warn against blind or excessive trust and against being overly trusting. Most guidelines refer to trust as something to be advanced, built, created, cultivated, earned, elevated, enabled, or established, or they use similar notions.[1]

The guidelines differ in the envisioned addressee of trust building: sometimes it is the general public or society as a whole that is addressed, and the aim is to build public or social trust. Corporate ethics guidelines as well as guidelines by business associations, unsurprisingly, tend to emphasize the trust of clients, consumers, customers, and users, but other organizations do so as well. Mentioned less often are human trust and trust between humans and technology, machines, or robots.

The proposed answer to the question of who or what is to be trusted also diverges among the guidelines: in the Asilomar AI Principles we read about trust and transparency among AI researchers and developers. Other guidelines also regard developers and designers, though not exclusively, as the appropriate addressees of calls for trustworthiness.

What makes AI trustworthy?

In the guidelines, trustworthiness is linked to a whole range of different principles: transparency plays a major role and is tied to trust and trustworthiness in several guidelines. Trustworthiness is, however, also linked to, for instance:

  • reliability, robustness
  • safety, security
  • traceability, verifiability
  • understandability, interpretability, explainability
  • monitoring, evaluation processes, auditing procedures, human oversight
  • compliance with norms and standards, lawfulness, alignment with moral values and ethical principles
  • enhancement of environmental and societal well-being, sustainable development, human agency
  • non-discrimination, fairness
  • data security, privacy protection

Conceptual overlaps between the guidelines

  1. The guidelines that refer to trust at all predominantly view building trust as something 'good' that is to be promoted, whereas a lack of trust is predominantly seen as something to be overcome.
  2. Most guidelines are based on an instrumental understanding of trust: trust is described as a precondition for achieving other things, such as reaping the benefits connected to AI or realizing AI’s full potential for society.
  3. Dimensions of interpersonal trust, institutional and social trust, and trust in technology are lumped together. In what way and to what extent these dimensions are and ought to be relevant for TAI, and which aspects of them are desirable or justifiable in liberal democratic societies, still needs to be laid out more precisely.
  4. Though the guidelines connect trust to a variety of ethical principles, no single principle is mentioned consistently throughout the entire corpus of documents. The conceptualization of trust in general, and the definition of what makes AI trustworthy in particular, thus remain inconclusive.
  5. Possible trade-offs and conflicts between the various values and principles that are supposed to generate trust are rarely reflected upon.

Omissions of the Guidelines with regard to Trust

  1. The guidelines overlook that trust is an ambivalent concept. Trust has to do with uncertainty and with vulnerability. We only need to trust where there is uncertainty about the outcome of a given situation and where that outcome puts us at risk. Increasing trust is therefore not necessarily ethically unproblematic.
  2. Trust is often a fallback position in situations that are too complex to understand or where the costs of establishing understanding outweigh the supposed gains of doing so. From this perspective, increasing transparency, which most guidelines view as conducive to trust building, actually decreases the need for trust by decreasing uncertainty.
  3. The focus is clearly on the side of those who have an interest in building trust. The dominant envisioned actor in the trust game is the trustee. The role of the trustor in granting trust, and thus in establishing a trust relation, is not sufficiently reflected upon.
  4. The dynamic and flexible aspects of trust building, as well as of trust withdrawal, are not a focus of the guidelines. Reading the guidelines, one sometimes gets the impression that trust, once gained, could and would never be withdrawn.
  5. The conditions under which AI products are created hardly play a role and are rarely mentioned as a factor in increasing or decreasing people’s trust in AI. Also rarely mentioned is whether an AI system could potentially be deployed for military purposes.

Closing the Gaps: Points for Further Research

  1. The ambivalence of trust must be addressed, not only to capture the nature of trust relations appropriately, but also because of its practical relevance: trust comes with several ethically relevant risks. The nature of trust is thus not only of interest for classroom discussions but of high practical importance. When algorithmic decisions are as interwoven with the fabric of society and political structures as they are today, this generates a normative claim for a justification of their deployment. Ultimately, we might not need more trust in the application of AI systems, but rather structures that institutionalize “distrust”, for instance in the form of mandatory auditing and monitoring.
  2. In the guidelines, we observe a problematic conflation of trust and trustworthiness. Ideally, we would only trust things and people that are trustworthy. However, this is obviously not how trust works: people trust things and persons that are utterly unworthy of trust, and they fail to trust things and persons that are utterly trustworthy. This observation has practical implications. Putting emphasis on designing TAI as a means to wider adoption of AI systems might in the end lead to disappointment on the side of developers: people might still not trust it, let alone adopt it. There are good reasons to design trustworthy AI, just as there are good reasons for many of the values and principles mentioned in the guidelines, but employing them might ultimately not lead to a wider adoption of AI systems. Further research needs to address this issue.
  3. The guidelines thus far combine conflicting principles regarding the foundation of trustworthiness. This is problematic because it leaves developers unclear about which principle should be applied in case of conflict. It leaves room for arbitrariness and ultimately puts the whole endeavor of well-founded trust in AI at risk, because practitioners and users cannot be sure which part of the trustworthiness canon was applied, and to what extent, to the system in question. Further research on TAI therefore has to address how trade-offs and conflicts between principles are to be resolved.


[1] For references, see Reinhardt (2022): Trust and Trustworthiness in AI Ethics, in: AI and Ethics, pp. 1–10, https://doi.org/10.1007/s43681-022-00200-5.

This article is based on “Trust and Trustworthiness in AI Ethics”, published in AI and Ethics, 2022, pp. 1–10 (https://doi.org/10.1007/s43681-022-00200-5).


----------------------

About the author:

Karoline Reinhardt has been Junior Professor of Applied Ethics at the University of Passau since 2022. Before that, she was, among other roles, a postdoctoral fellow at the Ethics&Philosophy Lab of the DFG Cluster of Excellence "Machine Learning: New Perspectives for Science" at the University of Tübingen, working in the project "AITE – Artificial Intelligence, Trustworthiness, Explainability" (Baden-Württemberg Stiftung), and a research associate at the IZEW. She is a member of the Junge Akademie of the Heidelberg Academy of Sciences and Humanities and served as spokesperson of the Akademie-Kolleg from 2020 to 2022.