A Balanced Memory Network
- Open Access
- 7 September 2007
- journal article
- research article
- Published by Public Library of Science (PLoS) in PLoS Computational Biology
- Vol. 3 (9), e141
- https://doi.org/10.1371/journal.pcbi.0030141
Abstract
A fundamental problem in neuroscience is understanding how working memory—the ability to store information at intermediate timescales, like tens of seconds—is implemented in realistic neuronal networks. The most likely candidate mechanism is the attractor network, and a great deal of effort has gone toward investigating it theoretically. Yet, despite almost a quarter century of intense work, attractor networks are not fully understood. In particular, there are still two unanswered questions. First, how is it that attractor networks exhibit irregular firing, as is observed experimentally during working memory tasks? And second, how many memories can be stored under biologically realistic conditions? Here we answer both questions by studying an attractor neural network in which inhibition and excitation balance each other. Using mean-field analysis, we derive a three-variable description of attractor networks. From this description it follows that irregular firing can exist only if the number of neurons involved in a memory is large. The same mean-field analysis also shows that the number of memories that can be stored in a network scales with the number of excitatory connections, a result that has been suggested for simple models but never shown for realistic ones. Both of these predictions are verified using simulations with large networks of spiking neurons.
Author Summary
A critical component of cognition is memory—the ability to store information, and to readily retrieve it on cue. Existing models postulate that recalled items are represented by self-sustained activity; that is, they are represented by activity that can exist in the absence of input. These models, however, are incomplete, in the sense that they do not explain two salient experimentally observed features of persistent activity: low firing rates and high neuronal variability. Here we propose a model that can explain both. The model makes two predictions: changes in synaptic weights during learning should be much smaller than the background weights, and the fraction of neurons selective for a memory should be above some threshold. Experimental confirmation of these predictions would provide strong support for the model, and constitute an important step toward a complete theory of memory storage and retrieval.
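To make the mechanism concrete, here is a minimal sketch of the kind of three-variable description referred to above: mean firing rates for a memory-selective excitatory population, the background excitatory population, and the inhibitory population. This is an illustration under assumed parameters, not the paper's actual equations; the gain function `f`, the `simulate` helper, and all weights are hypothetical, hand-tuned so that inhibition cancels the recurrent excitation seen by the background population (the balance condition) and the selective population is bistable.

```python
import numpy as np

def f(x, theta=1.0, sigma=0.1):
    """Sigmoidal rate gain (a generic, assumed choice)."""
    return 1.0 / (1.0 + np.exp(-(x - theta) / sigma))

def simulate(cue=0.0, cue_window=(2.0, 4.0), T=12.0, dt=0.01):
    """Integrate the three population rates; return final (nu_s, nu_b, nu_i)."""
    # w_sel > w_bg encodes potentiation of the memory-selective population
    # (exaggerated here for a clean illustration); w_inh is set so that
    # inhibition cancels recurrent excitation for the background population.
    w_sel, w_bg, w_inh, h = 3.0, 1.0, 1.0, 0.5
    nu_s = nu_b = nu_i = 0.0  # selective, background, inhibitory rates
    for step in range(int(T / dt)):
        t = step * dt
        I_cue = cue if cue_window[0] <= t < cue_window[1] else 0.0
        x_s = w_sel * nu_s + w_bg * nu_b - w_inh * nu_i + h + I_cue
        x_b = w_bg * nu_s + w_bg * nu_b - w_inh * nu_i + h
        nu_s += dt * (-nu_s + f(x_s))
        nu_b += dt * (-nu_b + f(x_b))
        nu_i += dt * (-nu_i + nu_s + nu_b)  # inhibition tracks total excitation
    return round(nu_s, 3), round(nu_b, 3), round(nu_i, 3)

# Without a cue the network stays at a low-rate baseline; a transient cue
# switches the selective population into an elevated state that persists
# after the cue is removed, i.e., the attractor picture of working memory.
print("no cue ->", simulate(cue=0.0))
print("cued   ->", simulate(cue=1.0))
```

Running the sketch, the uncued network settles near a low-rate baseline, while a transient cue leaves the selective population at an elevated rate that persists after the cue ends; that self-sustained state is the stored memory. The potentiation w_sel > w_bg is much larger here than the paper's prediction that learning-induced weight changes are small relative to the background weights, a simplification made purely so the bistability is easy to see.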