Coupled Replicator Equations for the Dynamics of Learning in Multiagent Systems

24 April 2002
Abstract
Starting with a group of reinforcement-learning agents, we derive coupled replicator equations that describe the dynamics of collective learning in multiagent systems. We show that, although agents model their environment in a self-interested way without sharing knowledge, game dynamics emerge naturally through the shared environment. As an application, for agents interacting through a rock-scissors-paper game, the collective learning dynamics exhibits a diversity of competitive and cooperative behaviors, including quasiperiodicity, stable limit cycles, intermittency, and deterministic chaos. Such behaviors are to be expected in the multiagent, heterogeneous setting described by the general coupled replicator equations.
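To make the coupled-replicator picture concrete, the following is a minimal sketch assuming the standard two-population replicator form, with each agent's strategy growing in proportion to how its payoff exceeds the agent's current average. The zero-sum rock-scissors-paper payoff matrix, the Euler integration, and all numerical values are illustrative assumptions, not the specific equations or parameters derived in the paper.

```python
import numpy as np

# Assumed rock-scissors-paper payoff matrix (win = +1, lose = -1, tie = 0).
A = np.array([[ 0.0, -1.0,  1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0]])
B = A.T  # zero-sum: the opponent's payoffs are the negatives, transposed

def coupled_replicator_step(x, y, dt=0.01):
    """One Euler step of the standard two-population coupled replicator equations."""
    fx = A @ y                    # agent X's payoffs against Y's mixed strategy
    fy = B @ x                    # agent Y's payoffs against X's mixed strategy
    dx = x * (fx - x @ fx)        # above-average strategies grow, others shrink
    dy = y * (fy - y @ fy)
    return x + dt * dx, y + dt * dy

# Start both agents near, but not at, the mixed equilibrium (1/3, 1/3, 1/3).
x = np.array([0.4, 0.3, 0.3])
y = np.array([0.3, 0.4, 0.3])
for _ in range(50_000):
    x, y = coupled_replicator_step(x, y)

print("final strategies:", x.round(3), y.round(3))
```

In this sketch the two agents are coupled only through each other's mixed strategy, mirroring the abstract's point that game dynamics arise through the environment rather than through shared knowledge; richer behaviors such as intermittency and chaos depend on the learning terms developed in the paper itself.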
