Spatial Representation of Temporal Information by Networks that Learn

  • 4 September 2002
Abstract
In networks that learn, the coupling strengths among neurons are altered according to some rule that is implemented as input signals are presented to the network. We investigate here a mechanism, based on the observed response of biological synapses to presynaptic and postsynaptic spikes at excitatory synapses, for storing, retrieving, and predicting temporal sequences. Our model system is composed of realistic conductance-based Hodgkin-Huxley neurons operating in a spiking mode and densely coupled through learning synapses. After conditioning through repeated input of a limited number of temporal sequences, the system is able to predict those sequences upon receiving a new input consisting of a fraction of the original training sequence. This is an example of effective unsupervised learning. We investigate the dependence of learning success on entrainment time, system size, and the presence of noise. This modeling has implications for the learning of motor sequences, for the recognition and prediction of temporal sensory information in the visual and auditory systems, and for late processing in the olfactory system.
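The synaptic mechanism the abstract alludes to is spike-timing-dependent plasticity, in which the sign and size of a weight change depend on the interval between presynaptic and postsynaptic spikes. A minimal sketch of such a pair-based rule is below; the function name, amplitudes, and time constants are illustrative assumptions, not values taken from the paper.

```python
import math

# Illustrative STDP parameters (assumed, not from the paper)
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # exponential time constants in ms

def stdp_dw(dt_ms):
    """Weight change for one pre/post spike pair.

    dt_ms = t_post - t_pre in milliseconds.
    Pre-before-post (dt_ms >= 0) potentiates; post-before-pre depresses.
    """
    if dt_ms >= 0:
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)
    return -A_MINUS * math.exp(dt_ms / TAU_MINUS)

# A presynaptic spike 5 ms before a postsynaptic spike strengthens the synapse;
# the reversed ordering weakens it.
print(stdp_dw(5.0) > 0, stdp_dw(-5.0) < 0)
```

Repeated presentation of an ordered spike sequence under such a rule asymmetrically strengthens synapses from earlier-firing to later-firing neurons, which is what allows a partial cue to drive the network forward through the stored sequence.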