Nonlinear reinforcement schemes for learning automata
- 1 January 1990
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- p. 2204-2207 vol.4
- https://doi.org/10.1109/cdc.1990.204017
Abstract
The development and evaluation of two novel nonlinear reinforcement schemes for learning automata are presented. These schemes are designed to increase the rate of adaptation of the existing linear reward-penalty (L_{R-P}) scheme while interacting with nonstationary environments. The first scheme is called the nonlinear scheme incorporating history (NSIH) and the second the nonlinear scheme with unstable zones (NSWUZ). The prime objective of these algorithms is to reduce the number of iterations needed for the action probability vector to reach a desired level of accuracy, rather than to converge to a specific unit vector in Cartesian coordinates. Simulation experiments were conducted to assess the learning properties of NSIH and NSWUZ in nonstationary environments. The simulation results show that the proposed nonlinear algorithms respond to environmental changes faster than the L_{R-P} scheme.
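The abstract measures the new schemes against the standard linear reward-penalty (L_{R-P}) baseline. As context, a minimal sketch of that baseline update for an r-action automaton follows; the nonlinear NSIH and NSWUZ update rules themselves are not specified in the abstract, and the parameter names `a` and `b` are the conventional reward and penalty step sizes, not values taken from this paper.

```python
def lrp_update(p, chosen, reward, a=0.1, b=0.1):
    """One step of the classical linear reward-penalty (L_R-P) scheme.

    p      -- action probability vector (sums to 1)
    chosen -- index of the action just performed
    reward -- True if the environment rewarded the action
    a, b   -- reward and penalty step sizes in (0, 1)
    """
    r = len(p)
    q = list(p)
    if reward:
        # Reward: shift probability mass toward the chosen action.
        for j in range(r):
            if j == chosen:
                q[j] = p[j] + a * (1.0 - p[j])
            else:
                q[j] = (1.0 - a) * p[j]
    else:
        # Penalty: shift probability mass away from the chosen action,
        # redistributing it equally over the other r-1 actions.
        for j in range(r):
            if j == chosen:
                q[j] = (1.0 - b) * p[j]
            else:
                q[j] = b / (r - 1) + (1.0 - b) * p[j]
    return q
```

Both branches preserve the total probability mass, so `q` remains a valid distribution after every step; because the penalty branch keeps every component strictly positive, L_{R-P} is ergodic and can track a nonstationary environment, which is the setting the paper targets.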