Multisensory information for human postural control: integrating touch and vision

Abstract
Despite extensive research on the influence of visual, vestibular, and somatosensory information on human postural control, it remains unclear how these sensory channels are fused for self-orientation. The present study tested whether a linear additive model could account for the fusion of touch and vision in postural control. We simultaneously manipulated visual and somatosensory (touch) stimuli in five conditions of single- and multisensory stimulation. The visual stimulus was a display of random dots projected onto a screen in front of the standing subject. The somatosensory stimulus was a rigid plate that subjects contacted lightly (<1 N of force) with the right index fingertip. In each condition, one sensory stimulus oscillated (dynamic) in the medial-lateral direction while the other stimulus was dynamic, static, or absent. The results qualitatively supported five predictions of the linear additive model, in that the patterns of gain and variability across conditions were consistent with those predicted. However, a strict quantitative comparison revealed significant deviations from the model's predictions, indicating that the sensory fusion process clearly has nonlinear aspects. We suggest that sensory fusion behaved in an approximately linear fashion because the experimental paradigm tested postural control very close to the equilibrium point of vertical upright.
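
As a minimal sketch of what a linear additive model of this kind implies (the notation below is illustrative and not taken from the study), the medial-lateral sway response can be written as a weighted superposition of the two stimulus motions plus residual sway:

x(t) = G_v \, s_v(t) + G_t \, s_t(t) + \eta(t)

where x(t) is the sway trajectory, s_v(t) and s_t(t) are the visual and touch stimulus displacements, G_v and G_t are the corresponding gains, and \eta(t) is sway variability not driven by either stimulus. Under strict additivity, the gain to one stimulus would not depend on whether the other stimulus is dynamic, static, or absent; cross-condition comparisons of gain and variability are the kind of predictions such a model makes testable.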
