Gradient Learning in Spiking Neural Networks by Dynamic Perturbation of Conductances
- 28 July 2006
- Research article
- Published by American Physical Society (APS) in Physical Review Letters
- Vol. 97 (4), 048104
- https://doi.org/10.1103/physrevlett.97.048104
Abstract
We present a method of estimating the gradient of an objective function with respect to the synaptic weights of a spiking neural network. The method works by measuring the fluctuations in the objective function in response to dynamic perturbation of the membrane conductances of the neurons. It is compatible with recurrent networks of conductance-based model neurons with dynamic synapses. The method can be interpreted as a biologically plausible synaptic learning rule, if the dynamic perturbations are generated by a special class of "empiric" synapses driven by random spike trains from an external source.
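
As a rough illustration of the approach, the sketch below implements generic perturbation-based gradient estimation on a static rate network used as a stand-in for the paper's conductance-based spiking model. The network, the quadratic objective, and all names (`run_network`, `objective`, `sigma`, `eta`) are assumptions for illustration, not the authors' implementation: a random perturbation is injected into each neuron's input, and the resulting fluctuation in the objective, correlated with the perturbation, yields an unbiased gradient estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 5, 3
W = rng.normal(scale=0.1, size=(n_out, n_in))   # synaptic weights
x = rng.normal(size=n_in)                        # fixed input pattern
y_target = rng.normal(size=n_out)                # desired output

def run_network(W, x, xi):
    # xi plays the role of the dynamic conductance perturbation:
    # it is injected additively into each neuron's input.
    return np.tanh(W @ x + xi)

def objective(y):
    # Negative quadratic error (to be maximized); the paper's
    # objective is generic, this choice is illustrative.
    return -np.sum((y - y_target) ** 2)

sigma = 0.01   # perturbation amplitude
eta = 0.05     # learning rate

for step in range(2000):
    xi = sigma * rng.normal(size=n_out)          # random perturbation
    R = objective(run_network(W, x, xi))         # perturbed objective
    R0 = objective(run_network(W, x, 0.0))       # unperturbed baseline
    # Correlating the objective fluctuation with the perturbation
    # estimates the gradient w.r.t. each neuron's input; the chain
    # rule through the synapse contributes the presynaptic factor x.
    grad_est = np.outer((R - R0) / sigma**2 * xi, x)
    W += eta * grad_est

print("final error:", -objective(run_network(W, x, 0.0)))
```

Subtracting the unperturbed baseline `R0` reduces the variance of the estimator without biasing it; in a fully online setting one would instead use a running estimate of the objective.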