Abstract
The ability to form perceptual equivalence classes from variable input stimuli is common to both animals and humans. Neural circuitry that can disambiguate ambiguous stimuli to arrive at perceptual constancy has been documented in the barn owl's inferior colliculus, where sound-source azimuth is signaled by interaural phase differences spanning the frequency spectrum of the sound wave. Extrapolating from the sound-localization system of the barn owl to human speech, 2 hypothetical models are offered to conceptualize the neural realization of relative invariance in (a) categorization of the stop consonants /b, d, g/ across varying vowel contexts and (b) vowel identity across speakers. 2 computational algorithms employing real speech data were used to establish acoustic commonalities that form neural mappings representing phonemic equivalence classes as functional arrays similar to those seen in the barn owl.
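The cross-frequency disambiguation mechanism the abstract cites from the barn owl literature can be illustrated with a minimal sketch (not part of the original study): at any single frequency an interaural phase difference is ambiguous over whole cycles, but pooling channels across the spectrum selects the one interaural delay, and hence azimuth, consistent with all of them. The frequency set, delay range, and scoring function below are illustrative assumptions.

    import numpy as np

    def best_itd(freqs_hz, ipds_rad, max_itd_s=250e-6, n_candidates=501):
        """Return the interaural time difference (ITD) most consistent with
        the measured interaural phase differences (IPDs) across channels.

        At a single frequency an IPD maps onto many candidate ITDs,
        (phi + 2*pi*k) / (2*pi*f); combining channels picks out the delay
        shared by all of them, resolving the phase ambiguity.
        """
        candidates = np.linspace(-max_itd_s, max_itd_s, n_candidates)
        # cos(phase error) equals 1 when a candidate delay matches a
        # channel's IPD modulo a full cycle, so the summed score across
        # channels peaks at the delay common to the whole spectrum.
        score = np.zeros_like(candidates)
        for f, phi in zip(freqs_hz, ipds_rad):
            score += np.cos(2 * np.pi * f * candidates - phi)
        return candidates[np.argmax(score)]

    if __name__ == "__main__":
        true_itd = 100e-6  # 100 microseconds of interaural delay
        freqs = np.array([3000.0, 4000.0, 5000.0, 6000.0, 7000.0])
        # IPDs wrapped to (-pi, pi], each individually ambiguous
        ipds = (2 * np.pi * freqs * true_itd + np.pi) % (2 * np.pi) - np.pi
        print(f"estimated ITD: {best_itd(freqs, ipds) * 1e6:.1f} microseconds")

The same pooling logic is what the abstract's proposed models carry over to speech: individual acoustic cues are ambiguous in isolation, and equivalence-class membership emerges from their joint pattern across a functional array.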
