Democratic reinforcement: A principle for brain function

Abstract
We introduce a simple “toy” brain model. The model consists of a set of randomly connected, or layered, integrate-and-fire neurons. Inputs to and outputs from the environment are connected randomly to subsets of neurons. The connections between firing neurons are strengthened or weakened according to whether or not the action was successful. Unlike previous reinforcement learning algorithms, the feedback from the environment is democratic: it affects all neurons in the same way, irrespective of their position in the network and independent of the output signal. Thus no unrealistic backpropagation or other external computation is needed. This is accomplished by a global threshold regulation that allows the system to self-organize into a highly susceptible, possibly “critical” state with low activity and sparse connections between firing neurons. The low activity allows memory in quiescent areas to be preserved, since only firing neurons are modified when new information is taught.
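As a rough illustration of the rule described in the abstract, the following is a minimal Python sketch, not the authors' implementation: the network size, weight scale and bounds, learning rate, target activity level, threshold-regulation gain, and the stub environment are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

N = 100                          # number of neurons (assumed size)
W = 0.1 * rng.random((N, N))     # random synaptic weights (assumed scale)
np.fill_diagonal(W, 0.0)         # no self-connections
theta = 1.0                      # global firing threshold shared by all neurons
eta = 0.05                       # learning rate (assumed)
rho = 0.05                       # target fraction of firing neurons (assumed)
gain = 0.1                       # threshold-regulation gain (assumed)

fired = np.zeros(N, dtype=bool)  # which neurons fired on the previous step

def update(external_input, success):
    """One integrate-and-fire step followed by democratic reinforcement."""
    global W, theta, fired
    # Integrate: external drive plus input from last step's firing neurons.
    v = external_input + W @ fired.astype(float)
    now_fired = v > theta

    # Democratic feedback: every synapse from a neuron that fired on the
    # previous step to a neuron firing now is strengthened on success and
    # weakened on failure, regardless of its position in the network.
    delta = eta if success else -eta
    W += delta * np.outer(now_fired.astype(float), fired.astype(float))
    np.clip(W, 0.0, 1.0, out=W)  # keep weights bounded (assumed bounds)

    # Global threshold regulation: nudge the shared threshold so the mean
    # activity stays near the low target rho, keeping firing sparse.
    theta += gain * (now_fired.mean() - rho)

    fired = now_fired
    return now_fired

# Toy usage with a stub environment: random drive, random success signal.
for t in range(20):
    drive = 1.5 * rng.random(N)
    out = update(drive, success=bool(rng.integers(2)))

Note that the only feedback entering the weight update is the scalar success signal, and the threshold regulation alone keeps activity low, so synapses between quiescent neurons, and hence stored memories, are left untouched.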
