Empirical Tests of the Gradual Learning Algorithm
- 1 January 2001
- journal article
- Published by MIT Press in Linguistic Inquiry
- Vol. 32 (1), 45-86
- https://doi.org/10.1162/002438901554586
Abstract
The Gradual Learning Algorithm (Boersma 1997) is a constraint-ranking algorithm for learning optimality-theoretic grammars. The purpose of this article is to assess the capabilities of the Gradual Learning Algorithm, particularly in comparison with the Constraint Demotion algorithm of Tesar and Smolensky (1993, 1996, 1998, 2000), which initiated the learnability research program for Optimality Theory. We argue that the Gradual Learning Algorithm has a number of special advantages: it can learn free variation, deal effectively with noisy learning data, and account for gradient well-formedness judgments. The case studies we examine involve Ilokano reduplication and metathesis, Finnish genitive plurals, and the distribution of English light and dark /l/.
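The core mechanism the abstract alludes to can be illustrated with a minimal sketch. In a stochastic OT grammar, each constraint carries a real-valued ranking; at each evaluation, Gaussian noise is added to these values, so closely ranked constraints can swap order (yielding free variation), and learning errors trigger small promotions and demotions. The function names, dictionary representation, and the plasticity and noise values below are illustrative assumptions, not the paper's implementation:

```python
import random

def gla_update(ranking, winner_viols, loser_viols, plasticity=0.1):
    """One GLA error-driven step (illustrative sketch).

    ranking: dict mapping each constraint name to its ranking value.
    winner_viols / loser_viols: violation counts for the learning datum
    (the adult "winner") and the learner's own erroneous output (the "loser").
    Constraints that penalize the loser more are promoted; constraints that
    penalize the winner more are demoted, each by the small plasticity step.
    """
    for c in ranking:
        w = winner_viols.get(c, 0)
        l = loser_viols.get(c, 0)
        if l > w:                      # constraint favors the winner: promote
            ranking[c] += plasticity
        elif w > l:                    # constraint favors the loser: demote
            ranking[c] -= plasticity
    return ranking

def noisy_ranking(ranking, noise=2.0):
    """Stochastic evaluation: perturb each ranking value with Gaussian noise
    before sorting constraints, so near-ties produce variable outputs."""
    return {c: v + random.gauss(0.0, noise) for c, v in ranking.items()}
```

Because every update moves ranking values by only a small step, the learned grammar degrades gracefully under noisy data, in contrast to Constraint Demotion's all-or-nothing reranking; this is the property the case studies in the article test empirically.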
This publication has 8 references indexed in Scilit:
- Sympathy and phonological opacity. Phonology, 1999
- Formal and Empirical Arguments concerning Phonological Acquisition. Linguistic Inquiry, 1998
- Quatrain Form in English Folk Verse. Language, 1998
- Learnability in Optimality Theory. Linguistic Inquiry, 1998
- Optimality Theory and variable word-final deletion in Faetar. Language Variation and Change, 1997
- Mechanism of sound change in Optimality Theory. Language Variation and Change, 1997
- Reduplication and syllabification in Ilokano. Lingua, 1989
- A re-examination of phonological neutralization. Journal of Linguistics, 1985