Abstract
The purpose of this study was two-fold: (1) to determine whether developmental, gender-based, or emotion-based differences exist in children's ability to interpret emotion in music, and (2) to determine which musical elements contribute to children's interpretations of emotion. Subjects aged 6 to 12 (N = 658) were asked on two occasions to interpret the emotion present in 30 recorded musical excerpts. On one measure subjects were asked to respond by circling happy or sad faces as they listened, and on another measure subjects circled excited or calm faces. The data revealed that subjects of all age groups and both genders were highly consistent in their interpretations. No age or gender differences were found, although subjects were significantly more consistent in their distinction between happy and sad music as compared to their distinction between excited and calm music. Stepwise regression analysis showed that happy-sad distinctions were based largely on the rhythmic activity and articulation of the excerpt, and that excited-calm distinctions were based on rhythmic activity and meter.