Universality in neural networks: the importance of the ‘mean firing rate’

Abstract
We present a general analysis of highly connected recurrent neural networks that are able to learn and retrieve a finite number of static patterns. The arguments are based on spike trains and their interval distribution and require no specific model of a neuron. In particular, they apply to formal two-state neurons as well as to more refined models such as the integrate-and-fire neuron or the Hodgkin-Huxley equations. We show that the mean firing rate, defined as the inverse of the mean interval length, is the only relevant parameter (apart from the synaptic weights) that determines the existence of retrieval solutions with a large overlap with one of the learnt patterns. The statistics of the spiking noise (Gaussian, Poisson, or other), and hence the shape of the interval distribution, do not matter. Thus our unifying approach explains why, and when, all the different associative networks that treat static patterns yield basically the same results, i.e., belong to the same universality class.
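
To make the central definition concrete, here is a minimal illustrative sketch (not from the paper): it generates two spike trains whose interspike-interval distributions have different shapes (Gaussian-like vs. exponential, i.e. Poisson-like) but the same mean interval, and shows that both give the same mean firing rate, computed as the inverse of the mean interval length. The library (numpy), the chosen mean interval, and the variable names are assumptions made for illustration only.

```python
# Illustrative sketch, assuming numpy; parameters chosen arbitrarily for the demo.
import numpy as np

rng = np.random.default_rng(0)
mean_isi = 0.02        # assumed mean interspike interval in seconds (i.e. 50 Hz)
n_spikes = 100_000

# Gaussian-distributed intervals (clipped to stay positive)
isi_gauss = np.clip(rng.normal(mean_isi, 0.2 * mean_isi, n_spikes), 1e-6, None)
# Exponentially distributed intervals (a Poisson-like spike train)
isi_poisson = rng.exponential(mean_isi, n_spikes)

for name, isi in [("Gaussian", isi_gauss), ("Poisson", isi_poisson)]:
    rate = 1.0 / isi.mean()   # mean firing rate = inverse of the mean interval length
    print(f"{name:8s} intervals: mean firing rate ~ {rate:.1f} Hz")
# Both print roughly 50 Hz: only the mean interval enters the rate,
# not the shape of the interval distribution.
```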
