Cross-linguistic comparisons in the integration of visual and auditory speech
- 1 January 1995
- journal article
- Published by Springer Nature in Memory & Cognition
- Vol. 23 (1), 113-131
- https://doi.org/10.3758/bf03210561
Abstract
We examined how speakers of different languages perceive speech in face-to-face communication. These speakers identified unimodal and bimodal speech syllables constructed from synthetic auditory and visual five-step /ba/-/da/ continua. In the first experiment, Dutch speakers identified the test syllables as either /ba/ or /da/. To explore the robustness of the results, Dutch and English speakers were then given a completely open-ended response task, whereas tasks in previous studies had always specified a set of alternatives. Similar results were found in the two-alternative and open-ended tasks. Identification of the speech segments was influenced by both the auditory and the visual sources of information. The results falsified an auditory dominance model (ADM), which assumes that the contribution of visible speech is dependent on poor-quality audible speech. The results also falsified an additive model of perception (AMP), in which the auditory and visual sources are linearly combined. The fuzzy logical model of perception (FLMP) provided a good description of performance, supporting the claim that multiple sources of continuous information are evaluated and integrated in speech perception. These results replicate previous findings with English, Spanish, and Japanese speakers. Although there were significant performance differences, the model analyses indicated no differences in the nature of information processing across language groups. The performance differences across languages were caused by information differences arising from the different phonologies of Dutch and English. These results suggest that the underlying mechanisms for speech perception are similar across languages.
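The abstract contrasts integration rules without stating them. As a rough illustration of how the FLMP and the additive model make different predictions, the sketch below computes P(/da/) for each cell of a hypothetical 5 x 5 bimodal design. The support values and function names are illustrative assumptions rather than parameters estimated in the study, although the two equations follow the standard published forms of the FLMP (multiplicative integration with a relative-goodness decision rule) and a simple averaging (additive) model.

```python
# Illustrative sketch (not the authors' code): FLMP vs. additive-model
# predictions for bimodal /ba/-/da/ identification. Support values are
# made up for demonstration; in the study they would be estimated from data.

from itertools import product

# Hypothetical degrees of support for /da/ at each step of the five-step
# auditory and visual continua (0 = clear /ba/, 1 = clear /da/).
auditory_support = [0.05, 0.25, 0.50, 0.75, 0.95]
visual_support = [0.10, 0.30, 0.50, 0.70, 0.90]

def flmp(a, v):
    """FLMP: multiply the two sources, then normalize by the total goodness."""
    return (a * v) / (a * v + (1 - a) * (1 - v))

def additive(a, v):
    """Additive model: the two sources are linearly combined (averaged)."""
    return (a + v) / 2

for a, v in product(auditory_support, visual_support):
    print(f"A={a:.2f} V={v:.2f}  FLMP P(/da/)={flmp(a, v):.3f}  "
          f"additive P(/da/)={additive(a, v):.3f}")
```

The qualitative difference is visible in the mixed cells: when one source strongly supports /da/ and the other is ambiguous, the FLMP prediction stays close to the stronger source, whereas the additive prediction is pulled toward the midpoint, which is the kind of pattern the factorial bimodal design can discriminate.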