
Christian Stegemann-Philipps

Proceedings opened: 09 September 2020
Dissertation colloquium: 18 February 2021

Biographical Information

  • since July 2017: Research Assistant and PhD Student at the Research Training Group 1808: Ambiguity – Production and Perception at the Eberhard Karls Universität Tübingen
  • 2015 - 2017: Intern and working student at Horváth & Partners (computational linguistics)
  • 2012 - 2017: Studies in philosophy, computational linguistics and mathematics with a research focus on philosophy of mind at Eberhard Karls Universität Tübingen, Paris I – Panthéon Sorbonne, Kyoto University and Fernuniversität Hagen
  • Degree: M.A. in Philosophy
  • 2009 - 2012: Studies in mathematics and philosophy with a focus on combinatorial optimization at Rheinische Friedrich-Wilhelms-Universität Bonn
  • Degree: B.Sc. in Mathematics
  • 2009: Abitur, Gymnasium Andreanum Hildesheim


Research Interests

  • Ambiguity in virtual environments
  • Predictive cognitive models
  • Natural language understanding in simulated agents
  • Artificial intelligence
  • Mental representation

Publications

  • Achimova, Asya, Gregory Scontras, Christian Stegemann-Philipps, Johannes Lohmann & Martin V. Butz (2021). “Learning about others: Pragmatic social inference through ambiguity resolution.” Cognition.

Abstract: "Ambiguity in a Virtual Agent" (working title)

In everyday linguistic communication, speakers routinely produce sentences that are superficially ambiguous, yet whose intended meaning is immediately obvious to human listeners. Knowledge about the situation, the intentions of the speaker, or general world knowledge is used to resolve the ambiguity. An example is "the city councilmen refused the demonstrators a permit because they [feared/advocated] violence" (Winograd, 1972): the referent of "they" changes depending on which verb is used. Artificial intelligence (or computer-aided methods in general) cannot resolve such ambiguities, or does so only very poorly (see the Winograd Schema Challenge: http://www.cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WS.html). It has been clear for some time that artificial intelligence has difficulty making use of the situation, the intentions involved, or world knowledge (see, for example, Haugeland, 1979). Yet precisely this situational and world knowledge is necessary to resolve ambiguities such as the one above.
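
To make the resolution problem concrete, the following toy sketch (in Python; the lookup table and all names are hypothetical illustrations, not part of the project) resolves the pronoun in the Winograd example above with a single hand-coded piece of world knowledge:

    # Toy world knowledge (hypothetical): which referent each verb
    # plausibly attaches to in the Winograd (1972) example.
    ROLE_KNOWLEDGE = {
        "feared": "the city councilmen",    # authorities fear violence
        "advocated": "the demonstrators",   # protesters might advocate it
    }

    def resolve_pronoun(verb):
        """Resolve 'they' in: 'The city councilmen refused the demonstrators
        a permit because they <verb> violence.'"""
        return ROLE_KNOWLEDGE.get(verb, "unresolved")

    for verb in ("feared", "advocated"):
        print(f"... because they {verb} violence -> 'they' = {resolve_pronoun(verb)}")

The point of the Winograd Schema Challenge is precisely that no such lookup table scales: each schema requires a different, open-ended piece of situational or world knowledge.
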
In my project, I am implementing a virtual agent (similar to an autonomous character in a computer game) that acquires world knowledge and a basic understanding of situations on its own. The focus is on modeling a plausible cognitive architecture that incorporates and integrates current theories such as predictive processing and embodied cognition with respect to elementary language processing. The resulting cognitive model should develop a sufficiently rich representation (also called an encoding) of the situational environment and its objects. Using this representation, the model should be able to resolve ambiguities like the one above.
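
As a rough illustration of the kind of loop such an agent might run, here is a schematic sketch (my own simplification, not the project's actual architecture; all class and function names are hypothetical) of the predict-observe-update cycle at the heart of predictive-processing accounts:

    import random

    class PredictiveAgent:
        """Schematic predictive-processing loop: predict, observe, update."""

        def __init__(self):
            # Internal representation: the agent's current estimate of a
            # single environmental feature (e.g. an object's position).
            self.estimate = 0.0
            self.learning_rate = 0.2

        def predict(self):
            # Top-down prediction of the next sensory input.
            return self.estimate

        def update(self, observation):
            # The bottom-up prediction error drives learning: nudge the
            # internal representation toward what was actually observed.
            error = observation - self.predict()
            self.estimate += self.learning_rate * error
            return error

    def sense(true_value=3.0, noise=0.3):
        # Stand-in for sensing a simulated environment whose true state
        # the agent has no direct access to.
        return random.gauss(true_value, noise)

    agent = PredictiveAgent()
    for step in range(10):
        error = agent.update(sense())
        print(f"step {step}: estimate={agent.estimate:.2f}, error={error:+.2f}")

The agent's internal representation improves only through its prediction errors; the project's model operates on far richer, situated representations, but this cycle is the common core of predictive-processing theories.
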
The primary goal is thus to apply, link, concretize and implement existing theories, with the modeling of cognitive processes and representations at the center. In this respect, the project is distinguished from approaches that attempt to learn disambiguation from text alone.