Rotating objects to determine orientation, not identity: Evidence from a backward-masking/dual-task procedure

Abstract
The effects of picture-plane rotations on times taken to name familiar objects (RTs) may reflect a process of mental rotation to stored viewpoint-specific representations: the rotate-to-recognize hypothesis. Alternatively, mental rotation might be used after stored object representations are activated by a misoriented stimulus in order to verify a weak or distorted shape percept: the double-checking hypothesis. We tested these two accounts of rotation effects in object recognition by having subjects verify the orientations (to within 90°) and basic-level names of 14-msec, backward-masked depictions of common objects. The stimulus-mask interval (SOA) varied from 14 to 41 msec, permitting interpolation of the SOA required for 75% accuracy (SOAc). Whereas the SOAc to verify orientation increased with rotation up to 180°, the SOAc to verify identity was briefer and asymptoted at ∼60°. We therefore reject the rotate-to-recognize hypothesis, which implies that SOAc should increase steadily with rotation in both tasks. Instead, we suggest that upright and near-upright stimuli are matched by a fast direct process and that misoriented stimuli are matched at a featural level by a slightly slower view-independent process. We also suggest that rotation effects on RTs reflect a postrecognition stage of orientation verification: the rotate-to-orient hypothesis, a version of double-checking that also explains the well-known reduction in orientation effects on RTs when naming repeated objects.
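The SOAc measure is the SOA at which verification accuracy reaches the 75% criterion, obtained by interpolation across the sampled SOAs. The sketch below illustrates one simple way this could be computed (linear interpolation between the two bracketing SOAs); the intermediate SOA values and accuracy figures are hypothetical, and the abstract does not specify the interpolation method actually used.

```python
def interpolate_soac(soas, accuracies, criterion=0.75):
    """Return the SOA at which accuracy first reaches `criterion`,
    linearly interpolated between adjacent (SOA, accuracy) points.
    A minimal sketch; assumes accuracy rises monotonically with SOA."""
    points = list(zip(soas, accuracies))
    for (s0, a0), (s1, a1) in zip(points, points[1:]):
        if a0 < criterion <= a1:
            # linear interpolation between the bracketing SOAs
            return s0 + (criterion - a0) / (a1 - a0) * (s1 - s0)
    raise ValueError("accuracy never crosses the criterion in the sampled range")

# Hypothetical accuracies at three stimulus-mask SOAs spanning 14-41 msec
soas = [14, 27, 41]              # msec (intermediate value is illustrative)
accuracies = [0.60, 0.72, 0.86]
print(interpolate_soac(soas, accuracies))  # -> 30.0 msec (SOAc for these data)
```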
