Werner Reichardt Centrum für Integrative Neurowissenschaften (CIN)

Computational Neuroscience

The Three Facets of Computational Neuroscience

The first facet of computational neuroscience is also called ‘theoretical neuroscience’ - the development of mathematical models to better understand empirical data. Mathematics has proved an invaluable tool for describing physical systems, and this applies to the brain as well. Casting our knowledge and hypotheses about brain function in mathematical language allows us to see them more clearly, to analyze them with the powerful tools of mathematics, and to generate testable predictions.

The second facet is the use of computers as a tool to study the brain. With recent advances in experimental methods, empirical data sets have become so rich that computers are indispensable even for basic data analysis. Furthermore, many of the mathematical models devised to describe the brain are so complex that their study requires numerical simulations.

The third facet refers to the computations performed by the brain itself. The brain can be thought of as a complex neural system which constantly carries out computations - transforming sensory inputs (e.g. the number of photons hitting the retina) into internal representations of the outside world (e.g. spiking activity in the brain) and into well-defined motor outputs (e.g. electrical pulses operating muscles). These computations can be simple, as when a reflex is triggered, or very complicated, as when sensory input has to be combined with prior experience to select the appropriate one of many action programs. Understanding these processes is of great importance for both clinical and technological applications.

A Success Story of Neuroscience

One of the success stories of neuroscience, involving all three facets, is our understanding of color opponency in the retina. Horace Barlow’s idea about the retina - already put forward in the early 1960s - was that its main function is to find a compact and efficient representation of the visual input. The necessity for such a representation is evident: more than 120 million photoreceptors react to the incoming light, but the optic nerve is a bottleneck, with only about 1.2 million ganglion cells communicating the visual signal to the brain - a roughly hundredfold reduction. How does the retina decide which features to transmit?

Barlow’s ingenious idea was to posit that the retina tries to remove redundancies from the visual input, that is, parts of the signal that can easily be reconstructed from what remains after processing. As an example, consider a black square: in a raster image format, this square is stored as, say, 128 by 128 pixel values. Without any loss of information, however, so-called vector graphics formats can store the same image with only a few numbers: e.g. identity (square), size, position and color. The latter is a much more efficient representation, with far less redundancy than the original.
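To make this concrete, here is a minimal Python sketch contrasting the two representations. It illustrates the idea rather than any real file format; the array, the description dictionary and the render function are hypothetical names chosen for this example.

```python
import numpy as np

# Raster representation: a 128 x 128 image containing a black square,
# stored pixel by pixel (128 * 128 = 16,384 numbers).
canvas = np.full((128, 128), 255, dtype=np.uint8)   # white background
canvas[32:96, 32:96] = 0                            # black square

# 'Vector graphics' representation: the same image described by a
# handful of parameters (identity, position, size, color).
desc = {"shape": "square", "top_left": (32, 32), "side": 64, "color": 0}

def render(desc, height=128, width=128):
    """Reconstruct every pixel from the compact description."""
    img = np.full((height, width), 255, dtype=np.uint8)
    r, c = desc["top_left"]
    img[r:r + desc["side"], c:c + desc["side"]] = desc["color"]
    return img

# Both representations carry exactly the same information ...
assert np.array_equal(render(desc), canvas)
# ... but one uses 16,384 numbers and the other only four.
print(canvas.size, "pixel values vs.", len(desc), "parameters")
```

The pixels of the raster image are highly predictable from one another - and that predictability is precisely the redundancy which the compact description removes.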

Barlow’s hypothesis is consequently known as the ‘redundancy reduction hypothesis’. The full value of this idea was only uncovered once it was cast into mathematically precise terms. Starting solely from the redundancy reduction hypothesis and the fact that we have three types of color-sensitive photoreceptors with different spectral sensitivities, Buchsbaum and Gottschalk derived that two color-opponent channels and one luminance channel are optimal for an efficient representation in Barlow’s sense. Stunningly, this prediction matches exactly the properties of ganglion cells: there is one cell type with red-green opponent receptive fields, one with a blue-yellow preference, and one encoding luminance. In this case, casting the redundancy reduction hypothesis in precise mathematical terms has led to quantitative predictions about features of retinal ganglion cells, and allowed us to make sense of their response properties.
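The mathematical core of the Buchsbaum-Gottschalk argument is decorrelation: because the spectral sensitivities of the three cone types overlap, their signals are highly correlated, and diagonalizing the covariance of the cone responses yields one luminance-like channel and two opponent-like channels. The Python sketch below reproduces this logic on synthetic cone signals; the correlation strengths are invented for illustration and are not fitted to real cone spectra.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cone responses for many stimuli. The overlapping spectral
# sensitivities of the cones are mimicked by driving all three cone
# types with a shared intensity signal plus smaller cone-specific
# fluctuations (toy numbers, not real spectra).
n = 100_000
intensity = rng.normal(size=n)
L = intensity + 0.2 * rng.normal(size=n)
M = intensity + 0.2 * rng.normal(size=n)
S = intensity + 0.6 * rng.normal(size=n)
X = np.stack([L, M, S])                  # shape (3, n_samples)

# Decorrelate by diagonalizing the covariance matrix (PCA). The
# eigenvectors define three new channels whose outputs are mutually
# uncorrelated - a redundancy-reduced code in Barlow's sense.
eigvals, eigvecs = np.linalg.eigh(np.cov(X))
for v, w in zip(eigvals[::-1], eigvecs[:, ::-1].T):
    print(f"variance {v:5.2f}  weights on (L, M, S): {np.round(w, 2)}")
```

Up to an arbitrary overall sign, the first channel weights all cones with the same sign (a luminance channel), while the other two combine the cones with opposing signs (S against L+M, and L against M) - the qualitative structure that Buchsbaum and Gottschalk derived analytically from measured cone spectra.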

A Three-Layered Approach

At the Centre for Integrative Neuroscience, the computational neuroscience groups likewise take this three-layered approach to computational neuroscience:

In the spirit of Barlow’s original idea, the group of Matthias Bethge tries to find good representations of natural images that allow them to be encoded as efficiently as possible. Beyond storage, such representations can also be used for denoising, for filling in missing image information, or to ‘fantasize’ new images. Using psychophysical techniques, the group then investigates how well human observers can tell images fantasized by the model apart from real-world photographs. In addition to technical applications, this yields interesting insights into which features of an image are perceptually important and therefore processed by the visual system. Bethge’s group combines such computational approaches with the question of how neural populations in the brain actually perform the computations required to interpret visual input signals. To this end, they develop new models and techniques for analyzing and understanding the data collected by experimentalists.

The group of Martin Giese focuses on the motor side, studying how complex movements and actions are represented in the brain and how the learning principles underlying these representations can be exploited for technical applications in computer vision, robotics and biomedical systems. One focus of Giese’s group is the development and experimental testing of models for action representation in the brain. This work includes devising neural models and testing them in psychophysical, neurophysiological and fMRI experiments. The second focus is the development of technical systems that exploit learning-based action representations for medical diagnosis, computer animation and movement programming in robots. For this purpose, they use special learning techniques that make it possible to represent complex movements and actions on the basis of very few learned example patterns.

In 2010, the Bernstein Center for Computational Neuroscience Tübingen was founded to integrate the work of these computational neuroscience groups with the experimental community, which acquires massive amounts of highly complex data, on the one hand, and with the machine learning community, whose members are experts in developing algorithms for large-scale problems, on the other. At the Bernstein Center, scientists from these backgrounds work closely together to investigate the neural basis of perceptual inference - the process of extracting those underlying aspects of the external world that are potentially relevant to the organism. In particular, a main research goal is to understand the coordinated interaction of neurons during perceptual information processing.

Introductory Reading

Dayan & Abbott. Theoretical Neuroscience. MIT Press 2001.

Rieke, Warland, de Ruyter van Steveninck & Bialek. Spikes: Exploring the Neural Code. MIT Press 1997.

Wandell. Foundations of Vision. Sinauer Associates 1995.