This is a selection of research topics that we are studying in our lab.

Cognitive Offloading of Working Memory Processes

Mobile devices such as tablets enable users to offload internal memory content and memory processes, thus offering the potential to overcome limitations of working memory that generally restrict cognitive performance. In our projects, we systematically address the potentials, benefits, and risks of offloading cognitive processes onto mobile touch devices. To this end, we investigate how the intuitive control of mobile touch devices facilitates interactions between internally and externally stored information. Furthermore, we explore the potential of the mobility of touch devices to externalize transformation processes that would otherwise draw heavily on internal resources. Importantly, we also investigate how cognitive offloading affects the explicit and implicit acquisition of long-term memory representations. Besides these more general aspects of cognitive offloading, we also study individual differences in the cognitive offloading of working memory processes, focusing on the influence of metacognition.

This project is funded by the Leibniz WissenschaftsCampus Tübingen: Smartphones and Tablets: Potentials for Learning and Identification?

Smartphones and Tablets: Potentials for Learning and Identification?

Touch is one of the basic ways through which we interact with our environment. It has often been shown that touch is an intuitive way of exploring the world from infancy onward, that it represents an important channel of communication, and that it affects our well-being, our attitudes toward objects, and our relationships. With the ubiquity of touch technology, a new dimension of touch has arisen, presenting new possibilities. In our studies, we explore how far these touch dynamics extend into digital, abstract environments. We focus on educational and social contexts, investigating whether touch-based interactions with abstract symbols can improve learning and enhance identification. From the results of this project, we ultimately aim to derive specific recommendations for the design of applications intended to improve learning and identification.

This project is funded by the Leibniz WissenschaftsCampus Tübingen:

Processing of Spatial Configurations in Visual Working Memory

Spatial configurations are an important part of the organization of visual working memory. For example, even when observers are asked to encode multiple object locations independently, they automatically process and encode the spatial configuration of those objects as well. With this project, we contribute to the theoretical understanding of how spatial configurations are processed within visual working memory, and thereby aim to expand our understanding of its structure. The project focuses on two research questions: (a) Can spatial configurations be updated during active memorization? (b) Is there a common mechanism driving the configuration and context effects observed in multiple paradigms in previous research?

This project is funded by the Deutsche Forschungsgemeinschaft (DFG):

Digital dynamics of the self

We are constantly acting in a digital, online world, using apps and websites for our daily chores and interactions: for entertainment, time management, learning, social contact, health, and more. The ways we are represented in this world are varied. Whether as an arrow or mannequin on Google Maps, the avatar on our Netflix account, or an actual photo of us on Facebook, we are represented in different, constantly changing ways. Our cognitive system seems flexible in integrating these various representations of the self. Our studies address how the processes of self-representation and integration in the digital world actually work, and what role self-representation and perception play there. We look not only at the processes of integrating representations of the self, but also at their behavioral consequences.

This project is funded by the Leibniz WissenschaftsCampus Tübingen:

Event Cognition

How do human observers comprehend their dynamic environment, for example when watching sports broadcasts, movies, or everyday actions? Instead of processing all information in this constant stream equally, observers segment the stream into meaningful units, so-called events. In this project, we study how human observers construct event models of their dynamic environment and how they update these event models while observing dynamic scenes. Furthermore, we investigate the consequences of event model construction for human perception, such as the illusory perception of information that was actually missing from the dynamic environment.

Attitude-dependent reception and evaluation of information

People tend to prefer information that confirms their beliefs while ignoring information that contradicts them. This tendency is referred to as the selective exposure bias. People also tend to overvalue attitude-consistent information while devaluing attitude-inconsistent information, a tendency referred to as the attitudinal evaluation bias. It becomes particularly evident, for instance, when people are asked to examine pro and con arguments on controversial issues. Our research focuses on the fine-grained modelling and estimation of these attitude-dependent processes and parameters.

Seeing, Hearing, Feeling: Similarities, differences and interactions between information processing in the sensory modalities

At every waking moment, we are bombarded with visual, auditory, and tactile input. Our cognitive system must constantly filter the information that is relevant at each moment in order to respond adaptively to the external world. Often, information from several sensory modalities must be considered to fulfil this task; indeed, it is often simply more efficient to rely on combinations of sensory input rather than on unisensory information. In our studies, we shed light on these crossmodal interactions. Because tactile information processing has long been neglected, it serves as the focus of our research: on the one hand, we adapt established paradigms of visual and auditory perception to compare results from vision and audition with those from touch; on the other hand, we test how vision and audition influence tactile information processing (and vice versa, how touch affects vision and audition). These studies aim to reveal the degree to which attention-related cognitive processes are modality-specific versus modality-unspecific.