Coupled replicator equations for the dynamics of learning in multiagent systems
- 31 January 2003
- journal article
- research article
- Published by American Physical Society (APS) in Physical Review E
- Vol. 67 (1), 015206
- https://doi.org/10.1103/physreve.67.015206
Abstract
Starting with a group of reinforcement-learning agents we derive coupled replicator equations that describe the dynamics of collective learning in multiagent systems. We show that, although agents model their environment in a self-interested way without sharing knowledge, a game dynamics emerges naturally through environment-mediated interactions. An application to rock-scissors-paper game interactions shows that the collective learning dynamics exhibits a diversity of competitive and cooperative behaviors. These include quasiperiodicity, stable limit cycles, intermittency, and deterministic chaos: behaviors that should be expected in heterogeneous multiagent systems described by the general replicator equations we derive.
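The abstract's central object, the replicator equation applied to rock-scissors-paper, can be illustrated with a minimal numerical sketch. This is not the paper's coupled multiagent derivation; it is the standard single-population replicator dynamics dx_i/dt = x_i[(Ax)_i − x·Ax] with the classic zero-sum RPS payoff matrix (win = +1, lose = −1, tie = 0), integrated by forward Euler. The initial condition and step size are arbitrary choices for illustration.

```python
import numpy as np

# Classic rock-scissors-paper payoff matrix (an assumption for this sketch;
# the paper's coupled equations generalize this single-population form).
A = np.array([[ 0.0, -1.0,  1.0],   # rock:     ties rock, loses to paper, beats scissors
              [ 1.0,  0.0, -1.0],   # paper:    beats rock, ties paper, loses to scissors
              [-1.0,  1.0,  0.0]])  # scissors: loses to rock, beats paper, ties scissors

def replicator_step(x, dt=0.01):
    """One forward-Euler step of dx_i/dt = x_i * ((A x)_i - x . A x)."""
    fitness = A @ x            # payoff of each pure strategy against mix x
    avg = x @ fitness          # population-average payoff
    return x + dt * x * (fitness - avg)

# Arbitrary interior starting point on the strategy simplex.
x = np.array([0.5, 0.3, 0.2])
for _ in range(2000):
    x = replicator_step(x)
```

Because the growth rates are deviations from the average payoff, each Euler step preserves the simplex constraint sum(x) = 1 exactly (up to floating point); for this zero-sum game the exact flow cycles around the interior fixed point (1/3, 1/3, 1/3), which is why limit cycles and quasiperiodicity are natural behaviors to look for once such systems are coupled.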