Speechreading sentences I: Development of a sequence comparator
- 1 May 1989
- journal article
- Published by Acoustical Society of America (ASA) in The Journal of the Acoustical Society of America
- Vol. 85 (S1), S59
- https://doi.org/10.1121/1.2027055
Abstract
Previous research suggests that many lexical errors in the speechreading of sentences can be explained in terms of visual phonemic errors. However, describing and quantifying perceptual errors at the phonemic level requires specification of stimulus-to-response alignments. Because speechreading produces numerous errors, including phoneme insertions, deletions, and substitutions, alignment is a nontrivial problem. This paper describes the development of a sequence comparator that can be used to obtain alignments automatically for phonemically transcribed sentences. The comparator employs a weights matrix that reflects presumed visual distances between all possible segmental stimulus-response pairs to find the alignment that minimizes overall stimulus-response distance. Initially, the comparator used weights based on viseme groupings, but these weights resulted in multiple, equal-distance, alternative alignments. More effective weights were obtained empirically via multidimensional scaling of phonemic confusions. Vowel data were obtained from Montgomery and Jackson [J. Acoust. Soc. Am. 73, 2134–2144 (1983)] and consonant data from a nonsense syllable identification task, which employed 22 consonants spoken by the same talkers who produced the sentence stimuli for this study [Bernstein et al., J. Acoust. Soc. Am. 85, 397–405 (1989)]. [Work supported by NIH.]
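The alignment problem the abstract describes can be approached as a weighted edit-distance computation: a dynamic-programming search over substitutions, insertions, and deletions that minimizes total stimulus-to-response distance under a weights matrix. The sketch below is only a minimal illustration of that general technique, not the authors' actual comparator; the dictionary-based weights, the flat insertion/deletion cost, and the phoneme symbols in the example are hypothetical.

```python
from typing import Dict, List, Tuple

def align(stimulus: List[str], response: List[str],
          sub_cost: Dict[Tuple[str, str], float],
          indel_cost: float = 1.0) -> Tuple[float, List[Tuple[str, str]]]:
    """Weighted edit-distance alignment of two phoneme sequences.

    sub_cost maps (stimulus_phoneme, response_phoneme) pairs to a
    visual-distance weight; unlisted pairs fall back to indel_cost.
    Returns the total distance and one minimal-distance alignment,
    with '-' marking insertions or deletions.
    """
    n, m = len(stimulus), len(response)
    # dp[i][j] = minimal distance aligning stimulus[:i] with response[:j]
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * indel_cost
    for j in range(1, m + 1):
        dp[0][j] = j * indel_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s, r = stimulus[i - 1], response[j - 1]
            cost = 0.0 if s == r else sub_cost.get((s, r), indel_cost)
            dp[i][j] = min(dp[i - 1][j - 1] + cost,    # substitution / match
                           dp[i - 1][j] + indel_cost,  # deletion
                           dp[i][j - 1] + indel_cost)  # insertion
    # Trace back through the table to recover one optimal alignment.
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0:
            s, r = stimulus[i - 1], response[j - 1]
            cost = 0.0 if s == r else sub_cost.get((s, r), indel_cost)
            if dp[i][j] == dp[i - 1][j - 1] + cost:
                pairs.append((s, r)); i -= 1; j -= 1
                continue
        if i > 0 and dp[i][j] == dp[i - 1][j] + indel_cost:
            pairs.append((stimulus[i - 1], '-')); i -= 1
        else:
            pairs.append(('-', response[j - 1])); j -= 1
    return dp[n][m], list(reversed(pairs))

# Hypothetical weights: visually confusable pairs get small distances.
weights = {('p', 'b'): 0.1, ('b', 'p'): 0.1, ('m', 'b'): 0.2}
dist, alignment = align(list("pat"), list("bat"), weights)
print(dist, alignment)  # 0.1 [('p', 'b'), ('a', 'a'), ('t', 't')]
```

As the abstract notes, the choice of weights is critical: coarse viseme-group weights produce many ties among equal-distance alignments, whereas graded distances derived from multidimensional scaling of phonemic confusions break those ties.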