Democratic reinforcement: A principle for brain function
- 1 May 1995
- research article
- Published by American Physical Society (APS) in Physical Review E
- Vol. 51 (5), 5033-5039
- https://doi.org/10.1103/physreve.51.5033
Abstract
We introduce a simple "toy" brain model. The model consists of a set of randomly connected, or layered, integrate-and-fire neurons. Inputs to and outputs from the environment are connected randomly to subsets of neurons. The connections between firing neurons are strengthened or weakened according to whether the action was successful or not. Unlike previous reinforcement learning algorithms, the feedback from the environment is democratic: it affects all neurons in the same way, irrespective of their position in the network and independent of the output signal. Thus no unrealistic back propagation or other external computation is needed. This is accomplished by a global threshold regulation which allows the system to self-organize into a highly susceptible, possibly "critical" state with low activity and sparse connections between firing neurons. The low activity permits memory in quiescent areas to be conserved, since only firing neurons are modified when new information is being taught.
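The abstract describes the update rule in words; as a rough illustration, the following Python sketch implements one reading of it. This is not the authors' code: the network size, connection sparsity, learning rate, threshold step, input/output subset sizes, and settling loop are all assumed values, and details of the original model (e.g., the layered variant and the exact threshold dynamics) differ.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64                                                # number of neurons (illustrative)
W = rng.random((N, N)) * (rng.random((N, N)) < 0.1)   # sparse random connectivity
np.fill_diagonal(W, 0.0)                              # no self-connections

inputs = rng.choice(N, size=4, replace=False)   # neurons receiving environmental input
outputs = rng.choice(N, size=4, replace=False)  # neurons whose firing is the action

theta = 1.0                 # global firing threshold (regulated below)
eta, d_theta = 0.05, 0.01   # learning rate and threshold step (assumed values)

def step(stimulus, target):
    """Present one stimulus and apply democratic feedback to all active synapses."""
    global theta
    v = np.zeros(N)
    v[inputs] = stimulus                 # inject the input pattern
    fired = np.zeros(N, dtype=bool)
    for _ in range(10):                  # let activity propagate and settle
        new = (v >= theta) & ~fired      # integrate-and-fire: each neuron fires once
        if not new.any():
            break
        fired |= new
        v += W[:, new].sum(axis=1)       # spikes feed the potentials of neighbours

    # Democratic feedback: one scalar signal, the same for every neuron,
    # independent of the neuron's position and of the output signal itself.
    success = np.array_equal(fired[outputs].astype(int), np.asarray(target))
    r = 1.0 if success else -1.0

    active = np.outer(fired, fired)      # synapses between pairs of firing neurons
    np.fill_diagonal(active, False)      # keep self-connections switched off
    W[active] += eta * r                 # strengthen on success, weaken on failure
    np.clip(W, 0.0, None, out=W)

    # Global threshold regulation keeps activity low, so quiescent regions
    # (and the memories stored there) are left untouched.
    theta += d_theta if fired.sum() > 0.1 * N else -d_theta
    return success
```

Because only synapses between neurons that fired on the current trial are modified, repeated calls to `step` with different stimulus/target pairs leave the weights in quiescent parts of the network unchanged, which is the memory-conservation property the abstract highlights.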