Dissociating Face Processing Skills: Decisions about Lip-read Speech, Expression, and Identity

Abstract
The separability of different subcomponents of face processing has been regularly affirmed, but not always so clearly demonstrated. In particular, the ability to extract speech from faces (lip-reading) has been shown to dissociate doubly from face identification in neurological populations, but not in others. In this series of experiments with undergraduates, the classification of speech sounds (lip-reading) from personally familiar and unfamiliar face photographs was explored using speeded manual responses. The independence of lip-reading from identity-based processing was confirmed. Furthermore, the established pattern of independence of expression-matching from, and dependence of identity-matching on, face familiarity was extended to personally familiar faces and to “difficult” emotion decisions. The implications of these findings are discussed.
