Slowness and Sparseness Lead to Place, Head-Direction, and Spatial-View Cells
Open Access
- Research article
- Published 31 August 2007 by Public Library of Science (PLoS) in PLoS Computational Biology
- Vol. 3(8), e166
- https://doi.org/10.1371/journal.pcbi.0030166
Abstract
We present a model for the self-organized formation of place cells, head-direction cells, and spatial-view cells in the hippocampal formation, based on unsupervised learning on quasi-natural visual stimuli. The model comprises a hierarchy of Slow Feature Analysis (SFA) nodes, which were recently shown to reproduce many properties of complex cells in the early visual system [1]. The system extracts a distributed grid-like representation of position and orientation, which is then transcoded into a localized place-field, head-direction, or view representation by sparse coding. The type of cell that develops depends solely on the relevant input statistics, i.e., on the movement pattern of the simulated animal. The numerical simulations are complemented by a mathematical analysis that allows us to accurately predict the output of the top SFA layer.

Author Summary

Rats excel at navigating through complex environments. In order to find their way, they need to answer two basic questions: Where am I? In which direction am I heading? Since the brain has no direct access to information about its position in space, it has to rely on sensory signals, for example from the eyes and ears, to answer these questions. Information about position and orientation is typically present in these sensory signals, but it is encoded in a way that is not straightforward to decode. Three major types of cells in the brain whose firing directly reflects spatial information are place, head-direction, and view cells. Place cells, for example, fire when the animal is at a particular location, independent of the direction in which the animal is looking. In this study, we present a self-organizing model that develops all three representation types by learning on naturalistic videos mimicking the visual input of a rat. Although the model works on complex visual stimuli, a rigorous mathematical description of the system is given as well.
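As an illustration of the core operation performed in each node, the following is a minimal NumPy sketch of linear SFA: it finds projections of a multichannel signal that vary as slowly as possible over time, under unit-variance and decorrelation constraints. This is only the building block; the published model additionally applies a quadratic expansion before each SFA step, stacks such nodes in a converging hierarchy over image patches, and adds a sparse-coding stage on top. The function name and toy data below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def linear_sfa(x, n_out):
    """Linear Slow Feature Analysis on a (time, channels) signal x.

    Returns the n_out projections of x that change most slowly over
    time, with unit variance and mutually decorrelated outputs.
    """
    x = x - x.mean(axis=0)
    # Whiten: rotate and rescale so the input covariance is the identity.
    d, E = np.linalg.eigh(np.cov(x, rowvar=False))
    z = x @ (E / np.sqrt(d))
    # In whitened space, the slowest directions are the eigenvectors of the
    # covariance of the temporal derivative with the smallest eigenvalues.
    d_dot, E_dot = np.linalg.eigh(np.cov(np.diff(z, axis=0), rowvar=False))
    return z @ E_dot[:, :n_out]  # eigh sorts ascending: slowest first

# Toy demo (hypothetical data): a slow 2-D oscillation hidden in a fast,
# randomly mixed 10-D signal.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0 * np.pi, 5000)
slow = np.column_stack([np.sin(0.05 * t), np.cos(0.05 * t)])
fast = rng.standard_normal((t.size, 8))
mixed = np.concatenate([slow, fast], axis=1) @ rng.standard_normal((10, 10))

features = linear_sfa(mixed, n_out=2)
# The recovered features should correlate strongly with the hidden slow pair.
corr = np.corrcoef(features.T, slow.T)[:2, 2:]
print(np.round(np.abs(corr), 2))
```

On this toy input the two extracted features should align almost perfectly with the hidden slow subspace; in the full model, the same principle lets the top SFA layer recover position and orientation from the video stream, after which sparse coding (independent component analysis in the paper) localizes the distributed representation into individual place, head-direction, or view fields.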
This publication has 49 references indexed in Scilit:
- Modular toolkit for Data Processing (MDP): a Python data processing framework. Frontiers in Neuroscience, 2008.
- Slowness: An Objective for Spike-Timing–Dependent Plasticity? PLoS Computational Biology, 2007.
- A Model of the Ventral Visual System Based on Temporal Stability and Local Memory. PLoS Biology, 2006.
- Microstructure of a spatial map in the entorhinal cortex. Nature, 2005.
- Theta-Modulated Place-by-Direction Cells in the Hippocampal Formation in the Rat. Journal of Neuroscience, 2004.
- Role of active movement in place-specific firing of hippocampal neurons. Hippocampus, 2004.
- A view model which accounts for the spatial fields of hippocampal primate spatial view cells and rat place cells. Hippocampus, 2001.
- Discharge correlates of hippocampal complex spike neurons in behaving rats passively displaced on a mobile robot. Hippocampus, 1998.
- Multiple Spatial/Behavioral Correlates for Cells in the Rat Postsubiculum: Multiple Regression Analysis and Comparison to Other Hippocampal Areas. Cerebral Cortex, 1996.
- The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain Research, 1971.