Integrating Form and Meaning: A Distributed Model of Speech Perception

Abstract
We present a new distributed connectionist model of the perception of spoken words. The model employs a representation of speech that combines lexical information with abstract phonological information, and lexical access is modelled as a direct mapping from the speech input onto this single distributed representation. We first examine the integration of partial cues to phonological identity, showing that the model provides a sound basis for simulating the phonetic and lexical decision data of Marslen-Wilson and Warren (1994). We then investigate the time course of lexical access, arguing that competition between word candidates can be interpreted as interference between distributed lexical representations. Finally, we discuss the relation between our model and other models of spoken word recognition.
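
To make the architecture described above concrete, the following is a minimal sketch (not the authors' implementation) of a simple recurrent network that maps a sequence of phonetic-feature vectors directly onto a single distributed output vector whose units are divided, by assumption, into phonological and lexical (semantic) portions. All layer sizes, variable names, and the choice of an untrained Elman-style network are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    N_FEATURES = 11   # phonetic features per input time step (assumed)
    N_HIDDEN = 50     # recurrent hidden units (assumed)
    N_PHON = 30       # phonological output units (assumed)
    N_SEM = 50        # lexical/semantic output units (assumed)

    # Untrained random weights; a real model would learn these,
    # e.g. with backpropagation through time.
    W_in = rng.normal(0, 0.1, (N_HIDDEN, N_FEATURES))
    W_rec = rng.normal(0, 0.1, (N_HIDDEN, N_HIDDEN))
    W_out = rng.normal(0, 0.1, (N_PHON + N_SEM, N_HIDDEN))

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def map_speech(feature_sequence):
        """Map incoming phonetic features, time step by time step,
        onto one combined distributed lexical representation."""
        hidden = np.zeros(N_HIDDEN)
        outputs = []
        for features in feature_sequence:
            hidden = sigmoid(W_in @ features + W_rec @ hidden)
            outputs.append(sigmoid(W_out @ hidden))
        return outputs

    # Example: three time steps of (random) phonetic features for one word.
    speech = [rng.random(N_FEATURES) for _ in range(3)]
    for t, rep in enumerate(map_speech(speech)):
        phon, sem = rep[:N_PHON], rep[N_PHON:]
        print(f"t={t}: phonological units {phon.shape}, semantic units {sem.shape}")

In such a sketch, partial or ambiguous phonetic input would drive the output towards a blend of the representations of all still-consistent word candidates, which is one way the competition-as-interference idea mentioned in the abstract could be visualised.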