Perceptual constraints on implicit learning of spatial context

Abstract
Invariant spatial relationships among objects may provide a rich source of contextual information. Visual context can assist the localization of individual objects via an implicit learning mechanism, as revealed in the contextual cueing paradigm (Chun & Jiang, 1998). What defines a visual context? How robust is contextual learning? And is it perceptually constrained? Here we investigated whether local context that surrounds a target and long-range context that does not spatially coincide with a target can both influence target localization. In the contextual cueing task, participants implicitly learned a context through repeated exposure to items arranged in invariant patterns. Experiments 1 and 2 suggested that only local context facilitates target localization. However, Experiment 3 showed that long-range context can prime target location when the target and context are not separated by random information. Experiment 4 showed that grouping by colour does not affect contextual cueing, suggesting that spatial features play a more important role than surface features in spatial contextual cueing. In separate analyses, visual hemifield differences were found for both learning and performance. In sum, the results indicate that implicit learning of spatial context is robust to noise and biased towards spatially grouped information.