International Center for Ethics in the Sciences and Humanities (IZEW)

Artificial Intelligence, Trustworthiness and Explainability (AITE)

The AITE project investigates which epistemological and scientific standards Artificial Intelligence must meet to be considered explainable, and which ethical standards it should fulfill to be considered trustworthy.

The sub-project based at the IZEW focuses in particular on the concept of trust in artificial intelligence and its relationship to explainability.



Project outline

Presently, it remains opaque why machine learning (ML) systems decide or answer as they do. How can we be sure that AI systems reach their conclusions for the right reasons? This problem lies at the heart of at least two debates: Can we trust AI systems, and if so, on what basis? Would an explanation of a decision help our understanding and ultimately foster trust, and if so, what kind of explanation? These are the central questions we address in this project.

The project is divided into three interrelated subprojects. In Subproject 1, we formulate epistemological and scientific norms of explanation that constrain explainable AI (XAI). In Subproject 2, we investigate moral norms for XAI, based on a classification of morally loaded cases of algorithmic decision-making. In Subproject 3, we analyse the notion of “trust” in AI systems and its relation to explainability. The three subprojects will collaborate to establish ethical, epistemological and scientific standards for trustworthy/reliable AI and XAI.

The project is a joint endeavour of the “Ethics and Philosophy Lab” (EPL) of the DFG Cluster of Excellence “Machine Learning: New Perspectives for Science” (ML-Cluster) and the “International Center for Ethics in the Sciences and Humanities” (IZEW) at the University of Tübingen.