International Center for Ethics in the Sciences and Humanities (IZEW)

Conversational Agents in Education: Ethical Reflection

AI systems, and in particular interactive, personalised applications such as chatbots, offer new opportunities for knowledge transfer that can be tailored to the needs of students and teachers in a way that is accessible and appropriate for the target group. At the same time, challenges such as the increasingly anthropomorphic design of AI applications, blind trust in and potential dependencies on AI-generated information, insufficient data and AI literacy, as well as the large-scale processing of personal data, pose considerable risks in the educational sector. Supported by the Hector Foundation, the ethics project accompanies the interdisciplinary work of the AI + Education Future Fund initiatives through methods of integrated and cooperative research.

Team

Duration

01.07.2025 – 30.06.2028

Funding

Hector Foundation

Project description

The interdisciplinary projects of the AI + Education Future Fund are developing various pedagogical AI solutions for children, students, and teachers. A fundamental understanding of how these applications work is essential if they are to be used confidently and critically. Building this media literacy requires empowerment programs that provide guided experiences and create opportunities for reflection on AI use. Questions of media literacy currently apply in particular to applications of communicative AI (AI chatbots and conversational agents), which engage learners in human-like interaction. As technical artifacts, such systems must withstand critical assessment of their design and (unintended) consequences so that their benefits can be realised both in schools and in extracurricular contexts.

The ethics project critically and reflexively accompanies the interdisciplinary work of the research network, drawing on media and AI ethics as well as a children’s rights perspective. The research focuses on three main areas:

1. Communication design for technological competence: Applications of communicative AI can easily lead users, especially children, to forget that they are interacting with statistically operating language models. From an ethical perspective, the design of AI chatbots must avoid inappropriate anthropomorphization while still making use of the advantages of low-threshold communication settings for a productive learning experience.

2. AI as a source of knowledge: When users do not understand how AI processes or generates information, they cannot properly assess whether and to what extent its output may be incorrect or biased. Children and teachers therefore need support in developing the knowledge required for a critical and reflective engagement with AI-generated information. The project explores ways of embedding such knowledge into the relevant applications.

3. Data literacy and informational self-determination: The use of AI systems generally involves the recording of personal or personally attributable data. Transparency and informed, voluntary consent for the processing of personal data are central prerequisites for informational self-determination. The project places particular emphasis on enabling children and young people, in age-appropriate ways, to make such decisions.

The ethics project also supports the other projects of the AI + Education Future Fund in reflecting on their own study designs, especially with regard to participatory research methods with children, and in planning possible empirical sub-studies on ethical issues. Together with the project partners in the consortium, it develops interdisciplinary proposals and guidelines for AI labeling and transparency in order to ensure audience-appropriate informed consent in the context of conversational agents. Building on the project results, the ethics project formulates guidelines for the development of interactive, language-based AI and makes them available to the professional community in the form of a policy paper.