Cognitive Modeling

General Research Approach of the Cognitive Modeling Group:

Studying how minds learn conceptual structures from sensorimotor experiences, how self-referential encodings develop, how these structures and encodings interact with language, and how all of this depends on, and develops through, progressively more complex object and social interactions, cooperation, and communication.

The Cognitive Modeling Group takes an ecological stance in all these respects, that is, the neural learning processes develop for the purpose of maintaining the internal homeostasis of the organism. As a result, all involved structures, encodings, and neural processes serve the purpose of optimizing behavior. In the case of our highly interactive and social minds, this requires the ability to plan, decide upon, and execute versatile, adaptive, highly flexible, context-dependent behavior, which in turn enables deep communicative, cooperative, social interactions as well as tool use. It appears that evolution has thus produced brains that develop distributed, well-structured, event-predictive structures and, meanwhile, control the inference processes that dynamically unfold within them.

As a result, our theoretical considerations focus on the principle of event-predictive inference, which integrates formulations of free energy minimization and active inference with event-integrative common encodings, event segmentations, and event schema-theoretic approaches.
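For reference, the variational free energy that such formulations build on can be written in its generic textbook form (as a point of orientation rather than as a description of our specific models) as

    F(q, o) = E_{q(s)}[ ln q(s) − ln p(o, s) ] = D_KL[ q(s) || p(s | o) ] − ln p(o),

where o denotes observations, s hidden (event) states, and q(s) an approximate posterior over those states. Minimizing F with respect to q(s) both approximates Bayesian inference over the hidden states and bounds the surprise −ln p(o); active inference extends this principle by selecting actions that are expected to minimize future free energy.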

Three focused research areas:


1. The Dynamics of Anticipatory Processes During Environmental Interactions – Behavioral Psychological Studies

We develop and run behavioral experiments – mostly in virtual reality settings including eye- and motion-tracking equipment – to assess how far and in which manner our brain explores the future while, for example, interacting with objects or tools, but also while interacting socially with others. Moreover, we model the behavioral results by means of variational free energy formulations of the interaction events and event sequences that the participants of the psychological experiments underwent.
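As a minimal illustration of what such a formulation computes (using a toy discrete model with hypothetical event and observation labels, not our actual experimental models), the free energy of a single observed interaction event can be evaluated as follows:

```python
# Toy sketch: variational free energy of one observed interaction event under a
# discrete generative model. Event and observation labels are hypothetical.
import numpy as np

events = ["reach", "grasp", "transport"]       # hidden event states (assumed)
observations = ["hand_moves", "hand_closes"]   # observable cues (assumed)

prior = np.array([0.5, 0.3, 0.2])              # p(s): prior over events
likelihood = np.array([                        # p(o | s), one row per event
    [0.9, 0.1],   # reach
    [0.2, 0.8],   # grasp
    [0.6, 0.4],   # transport
])

def free_energy(q, o_idx):
    """F = E_q[ln q(s) - ln p(o, s)] for the observation with index o_idx."""
    joint = prior * likelihood[:, o_idx]       # p(o, s)
    return np.sum(q * (np.log(q + 1e-12) - np.log(joint + 1e-12)))

o_idx = observations.index("hand_closes")
posterior = prior * likelihood[:, o_idx]
posterior /= posterior.sum()                   # exact posterior p(s | o)

surprise = -np.log((prior * likelihood[:, o_idx]).sum())   # -ln p(o)
print(free_energy(posterior, o_idx), surprise)
# With q equal to the exact posterior, F coincides with the surprise -ln p(o);
# any other q yields a larger F, which is what the minimization exploits.
```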


2. Language, Conversation, and Artificial Systems

We are working on modeling the unfolding dynamics during conversations – including the inference of preferences – by means of enhanced rational speech act models. Current model enhancements go beyond the processing of instructions towards deeper intentional speech production and the consequent observation inferences. In addition, we work on the BrainControl program, developing artificially intelligent agents that learn to converse about their world given their current event-predictive knowledge of it. For example, one current goal is to use the available event-predictive structures to assess which observations about the world may be of interest to the listener. Finally, we relate the developing structures to episodic memory, where memorization is expected to focus on those events and event successions that were experienced as particularly “surprising”.
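For orientation, the following is a minimal sketch of a vanilla rational speech act model in a toy reference game. The utterances, states, and lexicon are made-up illustrations, and our enhanced models go well beyond this basic, instruction-level formulation:

```python
# Vanilla RSA sketch: literal listener, pragmatic speaker, pragmatic listener.
# The toy domain (utterances, states, lexicon) is a hypothetical illustration.
import numpy as np

utterances = ["blue", "circle"]
states = ["blue circle", "blue square", "green circle"]

# Literal semantics: 1 if the utterance truthfully applies to the state.
lexicon = np.array([
    [1.0, 1.0, 0.0],   # "blue"   applies to blue circle, blue square
    [1.0, 0.0, 1.0],   # "circle" applies to blue circle, green circle
])

alpha = 1.0  # speaker rationality parameter

def literal_listener(lex):
    # P_L0(s | u): truth values normalized over states (uniform state prior)
    return lex / lex.sum(axis=1, keepdims=True)

def pragmatic_speaker(lex):
    # P_S1(u | s) proportional to exp(alpha * ln P_L0(s | u))
    l0 = literal_listener(lex)
    util = np.exp(alpha * np.log(l0 + 1e-12))
    return util / util.sum(axis=0, keepdims=True)

def pragmatic_listener(lex):
    # P_L1(s | u) proportional to P_S1(u | s) times the (uniform) state prior
    s1 = pragmatic_speaker(lex)
    return s1 / s1.sum(axis=1, keepdims=True)

print(pragmatic_listener(lexicon))
# Hearing "blue", the pragmatic listener now favors the blue square, because a
# speaker who meant the blue circle would rather have said "circle".
```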


3. Artificial Neural Network-Based Cognitive Models

Our deep generative RNN group focuses on developing artificial creatures that first learn to control their own bodies by building predictive models of them. These models are suitably compressed into motor primitives, which constitute particular event codes. Progressively, these event codes will enable a creature to think in terms of them, and thus in a temporally more abstract manner than at the level of actual sensorimotor interaction codes. This process will enable the creatures to mentally explore the future in an extended manner.
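As a rough sketch of the first of these steps, a toy recurrent forward model can learn to predict the next sensory state from the current state and motor command. The dimensions, data, and architecture below are illustrative assumptions rather than our actual models:

```python
# Minimal sketch of a sensorimotor forward model: an RNN that learns to predict
# the next sensory state from the current state and motor command.
import torch
import torch.nn as nn

obs_dim, act_dim, hidden_dim = 8, 3, 32   # illustrative dimensions

class ForwardModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(obs_dim + act_dim, hidden_dim, batch_first=True)
        self.readout = nn.Linear(hidden_dim, obs_dim)

    def forward(self, obs, act, h=None):
        # obs, act: (batch, time, dim); returns a prediction of obs at t+1
        x = torch.cat([obs, act], dim=-1)
        out, h = self.rnn(x, h)
        return self.readout(out), h

model = ForwardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy training loop on random sequences (placeholder for real sensorimotor data).
for _ in range(100):
    obs = torch.randn(16, 20, obs_dim)        # 16 sequences of length 20
    act = torch.randn(16, 20, act_dim)
    pred, _ = model(obs[:, :-1], act[:, :-1])
    loss = loss_fn(pred, obs[:, 1:])          # predict the next observation
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Compressing such a forward model into motor primitives and event codes, for example by segmenting its latent dynamics at peaks of prediction error, would then build on top of this basic setup.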