Reinforcement learning has become a wide and deep conduit that links ideas and results in computer science, statistics, control theory and economics to a near century's worth of psychological data on animal and human decision-making, and a fantastic wealth of findings concerning the neural basis of choice. There is a ready and free flow of ideas among these disciplines, providing a powerful foundation for exploring some of the complexities of both normal and abnormal behaviors.
I will provide an overview, illustrating the themes with examples showing how far we have come, and how far we still have to go.
Peter Samuel Dayan FRS is a director at the Max Planck Institute for Biological Cybernetics in Tübingen, Germany. He is co-author of Theoretical Neuroscience, an influential textbook on computational neuroscience. He is known for applying Bayesian methods from machine learning and artificial intelligence to understand neural function, and is particularly recognized for relating neurotransmitter levels to prediction errors and Bayesian uncertainties. A pioneer of reinforcement learning (RL), he helped develop the Q-learning algorithm, and he has also made contributions to unsupervised learning, including the wake-sleep algorithm for neural networks and the Helmholtz machine.