Language Evolution by Iterated Learning With Bayesian Agents
Open Access
- 6 May 2007
- journal article
- Published by Wiley in Cognitive Science
- Vol. 31 (3) , 441-480
- https://doi.org/10.1080/15326900701326576
Abstract
Languages are transmitted from person to person and generation to generation via a process of iterated learning: people learn a language from other people who once learned that language themselves. We analyze the consequences of iterated learning for learning algorithms based on the principles of Bayesian inference, assuming that learners compute a posterior distribution over languages by combining a prior (representing their inductive biases) with the evidence provided by linguistic data. We show that when learners sample languages from this posterior distribution, iterated learning converges to a distribution over languages that is determined entirely by the prior. Under these conditions, iterated learning is a form of Gibbs sampling, a widely‐used Markov chain Monte Carlo algorithm. The consequences of iterated learning are more complicated when learners choose the language with maximum posterior probability, being affected by both the prior of the learners and the amount of information transmitted between generations. We show that in this case, iterated learning corresponds to another statistical inference algorithm, a variant of the expectation‐maximization (EM) algorithm. These results clarify the role of iterated learning in explanations of linguistic universals and provide a formal connection between constraints on language acquisition and the languages that come to be spoken, suggesting that information transmitted via iterated learning will ultimately come to mirror the minds of the learners.
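The abstract's central claim — that iterated learning with posterior-sampling learners converges to the prior — can be illustrated with a minimal simulation. The sketch below is not the paper's model; it assumes a toy setup of two "languages" (biased coins over two utterance forms), an illustrative prior, and a fixed number of utterances passed per generation. Running a long chain of sampler learners, the empirical frequency of each language approaches the prior.

```python
import random

random.seed(0)

# Hypothetical two-language setup (illustrative parameters, not from the paper):
# each "language" is a distribution over two utterance forms.
PRIOR = [0.8, 0.2]    # learner's prior over languages 0 and 1
P_FORM0 = [0.9, 0.1]  # P(utterance form 0 | language)
N_DATA = 5            # utterances transmitted per generation

def likelihood(data, lang):
    # P(data | language), assuming independent utterances
    p = P_FORM0[lang]
    out = 1.0
    for x in data:
        out *= p if x == 0 else (1 - p)
    return out

def sample_learner(data):
    # posterior ∝ prior × likelihood; sample a language from the posterior
    w = [PRIOR[h] * likelihood(data, h) for h in (0, 1)]
    return 0 if random.random() < w[0] / (w[0] + w[1]) else 1

def produce(lang, n):
    # generate n utterances from the speaker's language
    return [0 if random.random() < P_FORM0[lang] else 1 for _ in range(n)]

# Iterate the transmission chain: speak -> learn -> become the next speaker.
lang = 0
counts = [0, 0]
for _ in range(20000):
    data = produce(lang, N_DATA)
    lang = sample_learner(data)
    counts[lang] += 1

freq = counts[0] / sum(counts)
print(f"P(language 0) across the chain: {freq:.2f}  (prior: {PRIOR[0]})")
```

Because each generation resamples a language from the posterior given the previous generation's data, the chain is a Markov chain whose stationary distribution is the prior — the Gibbs-sampling correspondence the paper establishes. Replacing `sample_learner` with an argmax over the posterior (a MAP learner) breaks this convergence, which is the case the paper connects to EM.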