The first facet of computational neuroscience is also called ‘theoretical neuroscience’: the development of mathematical models to better understand empirical data. Mathematics has proved invaluable for describing physical systems, and this applies to the brain as well. Casting our knowledge and hypotheses about brain function in mathematical language allows us to see them more clearly, to analyze them with the powerful tools of mathematics, and to generate testable predictions.
The second facet is the use of computers as a tool to study the brain. With recent advances in experimental methods, the empirical data sets have become so rich that computers are needed even for the simplest computations, e.g. in basic data analysis. Furthermore, many of the mathematical models devised to describe the brain are so complex that their study requires numerical simulations.
The third facet refers to the computations performed by the brain itself. The brain can be thought of as a complex neural system that constantly carries out computations: it transforms sensory inputs (e.g. the number of photons hitting the retina) into internal representations of the outside world (e.g. spiking activity in the brain) and into well-defined motor outputs (e.g. electrical pulses operating muscles). These computations can be simple, as when triggering a reflex, or very complicated, as when sensory input has to be combined with prior experience to select appropriately among many possible action programs. Understanding these processes is of great importance for both clinical and technological applications.
One of the success stories of neuroscience, involving all three facets, is our understanding of color opponency in the retina. Horace Barlow’s idea about the retina, already put forward in the early 1960s, was that its main function is to find a compact and efficient representation of the visual input. The necessity for such a representation is evident: more than 120 million photoreceptors react to the incoming light, but the optic nerve represents a bottleneck, with only 1.2 million ganglion cells communicating the visual signal to the brain. How does the retina decide which features to transmit?
Barlow’s ingenious idea was to posit that the retina tries to remove redundancies from the visual input, that is, parts of the signal that can easily be reconstructed from what remains after processing. As an example, consider a black square: a bitmap format stores this square as, say, 128 by 128 pixel values. Without any loss of information, however, so-called vector graphics formats can store the same image with only a few numbers, e.g. identity (square), size, position and color. The latter is a much more efficient representation, with far less redundancy than the original.
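The gain from such a parametric description can be made concrete in a few lines of code. The following is a minimal sketch of the black-square example, not any particular image format; the array size and field names are purely illustrative.

```python
import numpy as np

# Raw bitmap: one grey value per pixel (all zeros = black), 128 x 128 numbers.
SIDE = 128
bitmap = np.zeros((SIDE, SIDE), dtype=np.uint8)

# Compact, vector-graphics-style description of the very same image:
# identity, size, position and color suffice to reconstruct it exactly.
compact = {"shape": "square", "size": SIDE, "position": (0, 0), "color": "black"}

print("numbers stored by the bitmap:   ", bitmap.size)   # 16384
print("parameters stored by the sketch:", len(compact))  # 4
```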
Barlow’s hypothesis is consequently known as the ‘redundancy reduction hypothesis’. The full value of this idea was only uncovered once it was cast into mathematically precise terms. Starting from nothing more than the redundancy reduction hypothesis and the fact that we have three types of color-sensitive photoreceptors with different spectral sensitivities, Buchsbaum and Gottschalk derived that one luminance channel and two color-opponent channels are optimal for an efficient representation in Barlow’s sense. Stunningly, this prediction matches exactly the properties of retinal ganglion cells: there is one cell type with red-green opponent receptive fields, one with a blue-yellow preference, and one encoding luminance. In this case, casting the redundancy reduction hypothesis in precise mathematical terms has led to quantitative predictions about features of retinal ganglion cells and has allowed us to make sense of their response properties.
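Buchsbaum and Gottschalk’s derivation is analytical, but its core idea, decorrelating the three cone signals, can be illustrated with a small simulation. The sketch below is an assumption-laden toy model, not their actual calculation: it simulates L, M and S cone responses that are strongly correlated through a shared intensity drive and then decorrelates them with principal component analysis. The first component weights all cones with the same sign (a luminance-like channel), while the remaining two mix positive and negative weights (opponent-like channels).

```python
import numpy as np

# Toy model (not Buchsbaum & Gottschalk's derivation): cone responses that are
# correlated because every cone is driven by the same stimulus intensity.
rng = np.random.default_rng(0)
n_samples = 10_000
intensity = rng.lognormal(mean=0.0, sigma=0.5, size=n_samples)  # shared drive

# Hypothetical cone-specific modulations on top of the shared drive
l_cone = intensity * (1.0 + 0.10 * rng.standard_normal(n_samples))
m_cone = intensity * (1.0 + 0.10 * rng.standard_normal(n_samples))
s_cone = intensity * (1.0 + 0.20 * rng.standard_normal(n_samples))

cones = np.stack([l_cone, m_cone, s_cone], axis=1)
cones -= cones.mean(axis=0)

# Decorrelate via the eigenvectors of the cone covariance matrix (PCA)
eigvals, eigvecs = np.linalg.eigh(np.cov(cones, rowvar=False))
for rank, idx in enumerate(np.argsort(eigvals)[::-1], start=1):
    w = eigvecs[:, idx]
    print(f"channel {rank}: L={w[0]:+.2f}  M={w[1]:+.2f}  S={w[2]:+.2f}")
# Expected pattern: channel 1 has same-sign weights (luminance),
# channels 2 and 3 have mixed signs (color-opponent).
```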
At the Centre for Integrative Neuroscience, the computational neuroscience groups likewise take this three-faceted approach:
In the spirit of Barlow’s original idea, the group of Matthias Bethge tries to find good representations of natural images that allow them to be encoded as efficiently as possible. Beyond storage, such representations can also be used for denoising, for filling in missing image information, or to ‘fantasize’ new images. Using psychophysical techniques, the group then investigates how well human observers can tell images fantasized by the model apart from real-world photographs. In addition to technical applications, this can yield interesting insights into which features of an image are perceptually important and therefore processed by the visual system. Bethge’s group combines such computational approaches with the question of how neural populations in the brain actually perform the computations required to interpret visual input signals. To this end, they develop new models and data analysis techniques for analyzing and understanding the data collected by experimentalists.
The group of Martin Giese focuses on the motor side and studies how complex movements and actions are represented in the brain, and how the learning principles underlying these representations can be exploited for technical applications in computer vision, robotics and biomedical systems. One focus of Giese’s group is the development and experimental testing of models for action representation in the brain. This work includes developing neural models and testing them in psychophysical, neurophysiological and fMRI experiments. The second focus is the development of technical systems that exploit learning-based action representations for medical diagnosis, computer animation and movement programming in robots. For this purpose, the group uses special learning techniques that allow complex movements and actions to be represented on the basis of very few learned example patterns.
In 2010, the Bernstein Center for Computational Neuroscience Tübingen was founded to integrate the work of these computational neuroscience groups with, on the one hand, the experimental community, which acquires massive amounts of highly complex data, and, on the other hand, the machine learning community, whose expertise lies in developing algorithms for large-scale problems. At the Bernstein Center, scientists from these backgrounds work closely together to investigate the neural basis of perceptual inference: the process of extracting underlying aspects of the external world that are potentially relevant to the organism. In particular, a main research goal is to understand the coordinated interaction of neurons during perceptual information processing.