The Identification of Affective-Prosodic Stimuli by Left- and Right-Hemisphere-Damaged Subjects

Abstract
Impairments in listening tasks that require subjects to match affective-prosodic speech utterances with appropriate facial expressions have been reported after both left- and right-hemisphere damage. In the present study, both left- and right-hemisphere-damaged patients performed poorly relative to a non-brain-damaged control group on a typical affective-prosodic listening task using four emotion types (happy, sad, angry, surprised). To determine whether the two brain-damaged groups exhibited a similar pattern of performance with respect to their use of acoustic cues, the 16 stimulus utterances were analyzed acoustically, and the results were incorporated into an analysis of the errors made by the patients. A discriminant function analysis using acoustic cues alone indicated that fundamental frequency (F0) variability, mean F0, and syllable durations most successfully distinguished the four emotional sentence types. A similar analysis incorporating the patients' misclassifications revealed that the left- and right-hemisphere-damaged groups were using these acoustic cues differently. The results of this and other studies suggest that prosodic processing, rather than being lateralized to a single cerebral hemisphere in a fashion analogous to language, comprises multiple skills and functions distributed across cerebral systems.
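For readers who wish to see the analytic approach concretely, the following is a minimal sketch of a discriminant function analysis over the three acoustic cues named above, written with scikit-learn's LinearDiscriminantAnalysis. The per-emotion feature means and the simulated 16-utterance data set are hypothetical placeholders standing in for the study's acoustic measurements, not the paper's actual data.

# Illustrative sketch only: linear discriminant analysis over three acoustic
# cues (F0 variability, mean F0, mean syllable duration), classifying
# utterances into four emotion types. All numbers below are synthetic
# placeholders, not the study's measurements.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
emotions = ["happy", "sad", "angry", "surprised"]

# Hypothetical per-emotion means for [F0 SD (Hz), mean F0 (Hz), syllable dur (ms)]
means = {
    "happy":     [55.0, 220.0, 180.0],
    "sad":       [20.0, 160.0, 260.0],
    "angry":     [45.0, 200.0, 170.0],
    "surprised": [65.0, 240.0, 190.0],
}

# Simulate 16 utterances (4 per emotion), mirroring the stimulus-set size.
X = np.vstack([rng.normal(means[e], [8.0, 15.0, 20.0], size=(4, 3))
               for e in emotions])
y = np.repeat(emotions, 4)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# Resubstitution accuracy and a predicted label per stimulus; with real data,
# the model's misclassifications could be compared against the patients'
# error patterns, as the study describes.
print("training accuracy:", lda.score(X, y))
print("predictions:", lda.predict(X))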
Keywords