Generative artificial intelligence (AI), such as OpenAI's Generative Pre-trained Transformer (GPT), is rapidly transforming many areas of everyday life. Especially since the release of the GPT-3.5 and GPT-4 models underlying ChatGPT, the usability and everyday impact of text-based generative AI have expanded rapidly. One of the areas most profoundly affected is academic research and teaching. Universities around the world are struggling to make sense of the newfound power of generative AI. Not only can students now easily produce academic texts, numerical calculations, or code with a few simple “prompts”; senior researchers are also starting to apply generative AI in their epistemic practices. This introduces unprecedented uncertainties into academic life and challenges the value of academic work more generally. Previous AI systems have mostly (though not exclusively) affected disciplines such as computer science, the natural sciences, quantitative areas of the social sciences, and the digital humanities. In these fields, AI has long been both an object of research and a tool for data analysis. Generative AI is very different in that it has abruptly and radically affected the qualitative social sciences and humanities (QSSH) more broadly. Universities urgently need to find solutions to issues of authenticity, copyright, and plagiarism, among others, while researchers and students must contend with the possibility that generative AI might actually make meaningful contributions to their work. In the QSSH, this has implications not only for concrete epistemic practices, but also for the self-understanding of academics and university students, who are wondering whether their work will soon become – or perhaps already is – partially expendable.
Due to this rapid technological development, there is a lack of empirical knowledge of how generative AI is being used by students and academic staff, and of how it is affecting everyday academic life more generally. The project thus uses the current phase of early adoption of generative AI as a unique research opportunity to generate a much-needed empirical knowledge base. By conducting an ethnographic study of how students and staff within the QSSH at UT are implementing generative AI in their everyday academic lives, it sheds light on hybrid epistemic practices – that is, epistemic practices that emerge through collaboration between human actors and AI systems – and assesses the role of these practices within academic assemblages more generally.
While the project studies generative AI’s epistemic potential, it also engages in a critical evaluation of its broader impact. It considers the ways that the new technology might introduce false information into academic workflows or might be misleading in its explanations of theoretical concepts; the serious issues generative AI raises relating to copyright, plagiarism, and data management; the potential reproduction of biases in the training data, such as gender or racial stereotypes; the hidden politics and cultural assumptions in generated texts; generative AI’s significant environmental footprint in terms of CO2 emissions, water consumption, and the use of rare minerals for hardware; and the real danger that the corporate logic of many AI development companies will foster inequalities between different user groups. Ultimately, universities will have to find a middle ground between utilizing the epistemic capacities of generative AI and critically reflecting on its problematic implications. By looking at how generative AI co-constitutes or disrupts insightful epistemic practices and how it impacts universities more broadly, our project aims to spark a debate on critical AI literacy in the QSSH.