Using sensory weighting to model the influence of canal, otolith and visual cues on spatial orientation and eye movements |
| |
Authors: | L. H. Zupan, D. M. Merfeld, C. Darlot |
| |
Institution: | (1) Jenks Vestibular Physiology Laboratory, Massachusetts Eye and Ear Infirmary, Department of Otology and Laryngology, Harvard Medical School, 243 Charles Street, Suite 421, Boston, MA 02114, USA; (2) École Nationale Supérieure des Télécommunications, URA CNRS 820, Paris, France |
| |
Abstract: | The sensory weighting model is a general model of sensory integration that consists of three processing layers. First, each
sensor provides the central nervous system (CNS) with information regarding a specific physical variable. Due to sensor dynamics,
this measure is only reliable for the frequency range over which the sensor is accurate. Therefore, we hypothesize that the
CNS improves on the reliability of the individual sensor outside this frequency range by using information from other sensors,
a process referred to as “frequency completion.” Frequency completion uses internal models of sensory dynamics. This “improved”
sensory signal is designated as the “sensory estimate” of the physical variable. Second, before being combined, information
with different physical meanings is transformed into a common representation; sensory estimates are converted to intermediate
estimates. This conversion uses internal models of body dynamics and physical relationships. Third, several sensory systems
may provide information about the same physical variable (e.g., semicircular canals and vision both measure self-rotation).
Therefore, we hypothesize that the “central estimate” of a physical variable is computed as a weighted sum of all available
intermediate estimates of this physical variable, a process referred to as “multicue weighted averaging.” The resulting central
estimate is fed back to the first two layers. The sensory weighting model is applied to three-dimensional (3D) visual–vestibular
interactions and their associated eye movements and perceptual responses. The model inputs are 3D angular and translational
stimuli. The sensory inputs are the 3D sensory signals coming from the semicircular canals, otolith organs, and the visual
system. The angular and translational components of visual movement are assumed to be available as separate stimuli measured
by the visual system using retinal slip and image deformation. In addition, both tonic (“regular”) and phasic (“irregular”)
otolithic afferents are implemented. Whereas neither tonic nor phasic otolithic afferents distinguish gravity from linear
acceleration, the model uses tonic afferents to estimate gravity and phasic afferents to estimate linear acceleration. The
model outputs are the internal estimates of physical motion variables and 3D slow-phase eye movements. The model also includes
a smooth pursuit module. The model matches eye responses and perceptual effects measured during various motion paradigms in
darkness (e.g., centered and eccentric yaw rotation about an earth-vertical axis, yaw rotation about an earth-horizontal axis)
and with visual cues (e.g., stabilized visual stimulation or optokinetic stimulation).
Received: 20 September 2000 / Accepted in revised form: 28 September 2001 |
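The two central operations described in the abstract, “frequency completion” (blending a sensor that is reliable at high frequencies with one that is reliable at low frequencies) and “multicue weighted averaging” (a weighted sum of intermediate estimates of the same physical variable), can be illustrated with a minimal sketch. This is not the paper's model: the complementary-filter form, the function names, and the time constant `tau` are all assumptions introduced here for illustration only.

```python
import numpy as np

def complementary_filter(high_freq_cue, low_freq_cue, dt, tau=1.0):
    """Toy "frequency completion": blend two noisy estimates of the
    same variable, trusting `high_freq_cue` (e.g., a canal-like signal)
    at high frequencies and `low_freq_cue` (e.g., a visual signal) at
    low frequencies.

    high_freq_cue, low_freq_cue: 1-D arrays sampled every `dt` seconds.
    tau: assumed crossover time constant (s); larger tau favors the
         high-frequency cue over a wider band.
    """
    alpha = tau / (tau + dt)
    est = np.empty_like(np.asarray(high_freq_cue, dtype=float))
    est[0] = low_freq_cue[0]
    for k in range(1, len(est)):
        # High-pass the fast sensor's increments, low-pass the slow one.
        est[k] = alpha * (est[k - 1]
                          + high_freq_cue[k] - high_freq_cue[k - 1]) \
                 + (1 - alpha) * low_freq_cue[k]
    return est

def weighted_average(estimates, weights):
    """Toy "multicue weighted averaging": central estimate as a
    normalized weighted sum of all available intermediate estimates."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the weights sum to one
    return np.tensordot(w, np.asarray(estimates, dtype=float), axes=1)
```

For example, `weighted_average([1.0, 3.0], [1, 1])` returns the equally weighted mean `2.0`; in the model, the weights would instead reflect the relative reliability of each cue, and the central estimate would be fed back to the earlier processing layers.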
| |