Using Expectation-Maximization for Reinforcement Learning
- 1 February 1997
- journal article
- Published by MIT Press in Neural Computation
- Vol. 9 (2), 271-278
- https://doi.org/10.1162/neco.1997.9.2.271
Abstract
We discuss Hinton's (1989) relative payoff procedure (RPP), a static reinforcement learning algorithm whose foundation is not stochastic gradient ascent. We show circumstances under which applying the RPP is guaranteed to increase the mean return, even though it can make large changes in the values of the parameters. The proof is based on a mapping between the RPP and a form of the expectation-maximization procedure of Dempster, Laird, and Rubin (1977).
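To give a rough, informal sense of the update the abstract refers to (this is not the paper's own code), below is a minimal sketch of an RPP-style update for independent Bernoulli action units in a toy bandit setting: each parameter is set to the reward-weighted average of the sampled actions. The reward function, dimensions, and sample sizes are hypothetical illustrations.

```python
import numpy as np

def relative_payoff_update(p, actions, rewards):
    """One RPP-style update: reward-weighted average of sampled binary actions.

    p       : current Bernoulli parameters, shape (d,)
    actions : sampled binary action vectors, shape (n, d)
    rewards : nonnegative returns for each sampled action vector, shape (n,)
    """
    total = rewards.sum()
    if total == 0.0:          # no payoff observed: leave parameters unchanged
        return p
    return (rewards[:, None] * actions).sum(axis=0) / total

# Toy example with a hypothetical reward: payoff is larger when the sampled
# action vector is close to a fixed target pattern.
rng = np.random.default_rng(0)
target = np.array([1.0, 0.0, 1.0, 1.0, 0.0])
p = np.full(5, 0.5)

for _ in range(50):
    actions = (rng.random((100, 5)) < p).astype(float)        # sample actions
    rewards = np.exp(-np.abs(actions - target).sum(axis=1))    # nonnegative payoff
    p = relative_payoff_update(p, actions, rewards)

print(np.round(p, 2))  # parameters drift toward the rewarded target pattern
```

Note the contrast with stochastic gradient ascent: the update can move a parameter by a large amount in one step, which is the setting the paper's EM-based argument addresses.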
This publication has 4 references indexed in Scilit:
- Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 1992
- Connectionist learning procedures. Artificial Intelligence, 1989
- Pattern-recognizing stochastic learning automata. IEEE Transactions on Systems, Man, and Cybernetics, 1985
- A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. The Annals of Mathematical Statistics, 1970