Slowness and Sparseness Lead to Place, Head-Direction, and Spatial-View Cells

Abstract
We present a model for the self-organized formation of place cells, head-direction cells, and spatial-view cells in the hippocampal formation, based on unsupervised learning on quasi-natural visual stimuli. The model comprises a hierarchy of Slow Feature Analysis (SFA) nodes, which were recently shown to reproduce many properties of complex cells in the early visual system [1]. The system extracts a distributed grid-like representation of position and orientation, which is transcoded into a localized place-field, head-direction, or view representation by sparse coding. The type of cell that develops depends solely on the relevant input statistics, i.e., the movement pattern of the simulated animal. The numerical simulations are complemented by a mathematical analysis that allows us to accurately predict the output of the top SFA layer.

Author Summary

Rats excel at navigating through complex environments. To find their way, they need to answer two basic questions: Where am I? In which direction am I heading? As the brain has no direct access to information about its position in space, it has to rely on sensory signals, for example from the eyes and ears, to answer these questions. Information about position and orientation is typically present in these sensory signals, but it is encoded in a way that is not straightforward to decode. Three major types of cells in the brain whose firing directly reflects spatial information are place, head-direction, and view cells. Place cells, for example, fire when the animal is at a particular location, independent of the direction in which the animal is looking. In this study, we present a self-organizing model that develops these three representation types by learning on naturalistic videos that mimic the visual input of a rat. Although the model works on complex visual stimuli, we also give a rigorous mathematical description of the system.
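To make the core computation concrete, the sketch below shows a minimal linear SFA step in Python/NumPy. It is not the paper's implementation, which stacks many SFA nodes with nonlinear expansion into a hierarchy and follows them with a sparse-coding stage; the function name, toy signals, and parameters here are illustrative assumptions. The idea: whiten the input, then project onto the directions in which the whitened signal changes most slowly over time.

```python
import numpy as np

def slow_feature_analysis(x, n_features):
    """Minimal linear SFA (illustrative sketch, not the paper's code).

    x          : array of shape (T, D), one D-dimensional sample per time step
    n_features : number of slow features to extract
    Returns an array of shape (T, n_features), slowest feature first.
    """
    # Center the data.
    x = x - x.mean(axis=0)

    # Whiten via PCA so every direction has unit variance
    # (assumes the covariance matrix has full rank).
    cov = np.cov(x, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    z = x @ (eigvecs / np.sqrt(eigvals))

    # Approximate the temporal derivative by finite differences.
    z_dot = np.diff(z, axis=0)

    # The slowest directions are the eigenvectors of the derivative
    # covariance with the smallest eigenvalues (eigh sorts ascending).
    _, d_eigvecs = np.linalg.eigh(np.cov(z_dot, rowvar=False))
    return z @ d_eigvecs[:, :n_features]


# Toy demo: recover a slow sine wave from a noisy random mixture.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 2000)
sources = np.column_stack([np.sin(t), np.sin(29 * t)])  # slow + fast source
x = sources @ rng.standard_normal((2, 5))               # mix into 5 channels
x += 0.01 * rng.standard_normal(x.shape)                # keep covariance full rank
y = slow_feature_analysis(x, 1)  # y approximates sin(t) up to sign and scale
```

In the full model, each layer applies such a step after a fixed nonlinear expansion, so the hierarchy as a whole learns slowly varying nonlinear functions of the video input; the final sparse-coding stage then localizes the distributed slow features into place, head-direction, or view fields.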