Two Competing Models of How People Learn in Games
Preprint
- 1 January 2000
- Published in RePEc
Abstract
Reinforcement learning and stochastic fictitious play are apparent rivals as models of human learning. They embody quite different assumptions about the processing of information and optimisation. This paper compares their properties and finds that they are far more similar than previously thought. In particular, the expected motion of stochastic fictitious play and of reinforcement learning with experimentation can both be written as a perturbed form of the evolutionary replicator dynamics. They will therefore in many cases have the same asymptotic behaviour; notably, they have identical local stability properties at mixed equilibria. The main identifiable difference between the two models is speed: stochastic fictitious play gives rise to faster learning.
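For reference, the shared structure the abstract describes can be sketched as a perturbed replicator dynamic. In the sketch below, $A$ is the row player's payoff matrix and $x$, $y$ are the two players' mixed strategies; the entropy-gradient perturbation shown is the one commonly associated with logit choice and is an illustrative assumption, not necessarily the exact form derived in the paper.

\[
  \dot{x}_i \;=\; \underbrace{x_i\Big[(Ay)_i \;-\; x^{\top}Ay\Big]}_{\text{replicator term}}
  \;+\; \varepsilon\,\underbrace{x_i\Big[\sum\nolimits_j x_j \ln x_j \;-\; \ln x_i\Big]}_{\text{perturbation (entropy gradient)}}
\]

On this reading, both learning processes share the replicator term, which is consistent with their having identical local stability properties at mixed equilibria; the main remaining difference is the effective step size, which governs the speed of learning.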