Similar documents
20 similar documents found (search time: 15 ms)
1.
We have reviewed evidence that suggests that the target for limb motion is encoded in a retinocentric frame of reference. Errors in pointing that are elicited by an illusion that distorts the perceived motion of a target are strongly correlated with errors in gaze position. The modulations in the direction and speed of ocular smooth pursuit and of the hand show remarkable similarities, even though the inertia of the arm is much larger than that of the eye. We have suggested that ocular motion is constrained so that gaze provides an appropriate target signal for the hand. Finally, ocular and manual tracking deficits in patients with cerebellar ataxia are very similar. These deficits are also consistent with the idea that a gaze signal provides the target for hand motion; in some cases limb ataxia would be a consequence of optic ataxia rather than reflecting a deficit in the control of limb motion per se. These results, as well as neurophysiological data summarized here, have led us to revise a hypothesis we have previously put forth to account for the initial stages of sensorimotor transformations underlying targeted limb motions. In the original hypothesis, target location and initial arm posture were ultimately encoded in a common frame of reference tied to somatosensation, i.e. a body-centered frame of reference, and a desired change in posture was derived from the difference between the two. In our new scheme, a movement vector is derived from the difference between variables encoded in a retinocentric frame of reference. Accordingly, gaze, with its exquisite ability to stabilize a target image even under dynamic conditions, would be used as a reference signal. Consequently, this scheme would facilitate the processing of information under conditions in which the body and the target are moving relative to each other.

2.

Background

A key aspect of representations for object recognition and scene analysis in the ventral visual stream is the spatial frame of reference, be it a viewer-centered, object-centered, or scene-based coordinate system. Coordinate transforms from retinocentric space to other reference frames involve combining neural visual responses with extraretinal postural information.

Methodology/Principal Findings

We examined whether such spatial information is available to anterior inferotemporal (AIT) neurons in the macaque monkey by measuring the effect of eye position on responses to a set of simple 2D shapes. We report, for the first time, a significant eye position effect in over 40% of recorded neurons with small gaze angle shifts from central fixation. Although eye position modulates responses, it does not change shape selectivity.

Conclusions/Significance

These data demonstrate that spatial information is available in AIT for the representation of objects and scenes within a non-retinocentric frame of reference. More generally, the availability of spatial information in AIT calls into question the classic dichotomy in visual processing that associates object shape processing with ventral structures such as AIT but places spatial processing in a separate anatomical stream projecting to dorsal structures.
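The coordinate transform described in the Background, combining a retinocentric visual response with an extraretinal eye-position signal, can be sketched in its simplest form. The function below is an illustration only (not the study's analysis code), assuming small angles and ignoring ocular torsion:

```python
import numpy as np

def retinal_to_head_centered(target_retinal_deg, eye_position_deg):
    """Convert a target location from retinocentric to head-centric
    coordinates by adding the current eye-in-head position.
    Angles are (azimuth, elevation) in degrees; small-angle
    approximation, torsion ignored."""
    return np.asarray(target_retinal_deg, float) + np.asarray(eye_position_deg, float)

# A target 10 deg right of the fovea while gaze is 5 deg left of
# straight ahead lies 5 deg right of the head's midline.
print(retinal_to_head_centered([10.0, 0.0], [-5.0, 0.0]))  # [5. 0.]
```

The same additive scheme, generalized with head- and body-position signals, underlies the transforms into the other reference frames discussed in these abstracts.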

3.
The report ‘I saw the stimulus’ operationally defines visual consciousness, but where does the ‘I’ come from? To account for the subjective dimension of perceptual experience, we introduce the concept of the neural subjective frame. The neural subjective frame would be based on the constantly updated neural maps of the internal state of the body and constitute a neural reference from which first-person experience can be created. We propose to root the neural subjective frame in the neural representation of visceral information which is transmitted through multiple anatomical pathways to a number of target sites, including posterior insula, ventral anterior cingulate cortex, amygdala and somatosensory cortex. We review existing experimental evidence showing that the processing of external stimuli can interact with visceral function. The neural subjective frame is a low-level building block of subjective experience that is not explicitly experienced by itself and that is necessary but not sufficient for perceptual experience. It could also underlie other types of subjective experiences such as self-consciousness and emotional feelings. Because the neural subjective frame is tightly linked to homeostatic regulations involved in vigilance, it could also make a link between state and content consciousness.

4.
To form an accurate internal representation of visual space, the brain must accurately account for movements of the eyes, head or body. Updating of internal representations in response to these movements is especially important when remembering spatial information, such as the location of an object, since the brain must rely on non-visual extra-retinal signals to compensate for self-generated movements. We investigated the computations underlying spatial updating by constructing a recurrent neural network model to store and update a spatial location based on a gaze shift signal, and to do so flexibly based on a contextual cue. We observed a striking similarity between the patterns of behaviour produced by the model and monkeys trained to perform the same task, as well as between the hidden units of the model and neurons in the lateral intraparietal area (LIP). In this report, we describe the similarities between the model and single unit physiology to illustrate the usefulness of neural networks as a tool for understanding specific computations performed by the brain.
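The task logic the recurrent network has to learn can be written down directly: a location stored in eye-centered coordinates must be remapped by subtracting each gaze shift, and a contextual cue gates whether the update is applied. This sketch captures only that input-output rule, not the recurrent network itself:

```python
import numpy as np

def update_remembered_location(loc_eye_centered, gaze_shift, context_update=True):
    """Eye-centered spatial updating: after a gaze shift, a remembered
    location moves by minus the shift. `context_update` stands in for
    the contextual cue that gates updating (an illustrative reading of
    the task, not the authors' network)."""
    loc = np.asarray(loc_eye_centered, float)
    if context_update:
        loc = loc - np.asarray(gaze_shift, float)
    return loc

# Target 8 deg right; after a 10 deg rightward saccade it is 2 deg left.
print(update_remembered_location([8.0, 0.0], [10.0, 0.0]))  # [-2.  0.]
```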

5.
Rapid orienting movements of the eyes are believed to be controlled ballistically. The mechanism underlying this control is thought to involve a comparison between the desired displacement of the eye and an estimate of its actual position (obtained from the integration of the eye velocity signal). This study shows, however, that under certain circumstances fast gaze movements may be controlled quite differently and may involve mechanisms which use visual information to guide movements prospectively. Subjects were required to make large gaze shifts in yaw towards a target whose location and motion were unknown prior to movement onset. Six of those tested demonstrated remarkable accuracy when making gaze shifts towards a target that appeared during their ongoing movement. In fact their level of accuracy was not significantly different from that shown when they performed a 'remembered' gaze shift to a known stationary target (F(3,15) = 0.15, p > 0.05). The lack of a stereotypical relationship between the skew of the gaze velocity profile and movement duration indicates that on-line modifications were being made. It is suggested that a fast route from the retina to the superior colliculus could account for this behaviour and that models of oculomotor control need to be updated.

6.
Smooth pursuit eye movements change the retinal image velocity of objects in the visual field. In order to change from a retinocentric frame of reference into a head-centric one, the visual system has to take the eye movements into account. Studies on motion perception during smooth pursuit eye movements have measured either perceived speed or perceived direction during smooth pursuit to investigate this frame of reference transformation, but never both at the same time. We devised a new velocity matching task, in which participants matched both perceived speed and direction during fixation to that during pursuit. In Experiment 1, the velocity matches were determined for a range of stimulus directions, with the head-centric stimulus speed kept constant. In Experiment 2, the retinal stimulus speed was kept approximately constant, with the same range of stimulus directions. In both experiments, the velocity matches for all directions were shifted against the pursuit direction, suggesting an incomplete transformation of the frame of reference. The degree of compensation was approximately constant across stimulus direction. We fitted the classical linear model, the model of Turano and Massof (2001) and that of Freeman (2001) to the velocity matches. The model of Turano and Massof fitted the velocity matches best, but the differences between the model fits were quite small. Evaluation of the models and comparison to a few alternatives suggests that further specification of the potential effect of retinal image characteristics on the eye movement signal is needed.
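The classical linear model mentioned above can be sketched in a few lines: perceived head-centric velocity is the retinal image velocity plus a scaled estimate of eye velocity, and a gain below one produces exactly the incomplete compensation reported, with matches shifted against the pursuit direction. The gain value here is illustrative, not a value fitted to these data:

```python
import numpy as np

def perceived_velocity(retinal_vel, eye_vel, gain=0.8):
    """Classical linear model of head-centric motion perception during
    pursuit: retinal image velocity summed with a gain-scaled eye
    velocity estimate. gain < 1 means the eye movement is only
    partially compensated (illustrative value)."""
    return np.asarray(retinal_vel, float) + gain * np.asarray(eye_vel, float)

# A physically stationary stimulus during 10 deg/s rightward pursuit
# (retinal velocity = -eye velocity) is perceived as moving leftward,
# i.e. against the pursuit direction.
eye = np.array([10.0, 0.0])
retinal = -eye
print(perceived_velocity(retinal, eye))  # [-2.  0.]
```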

7.
Eye position influences auditory responses in primate inferior colliculus
Groh JM, Trause AS, Underhill AM, Clark KR, Inati S. Neuron, 2001, 29(2): 509-518
We examined the frame of reference of auditory responses in the inferior colliculus in monkeys fixating visual stimuli at different locations. Eye position modulated the level of auditory responses in 33% of the neurons we encountered, but it did not appear to shift their spatial tuning. The effect of eye position on auditory responses was substantial, comparable in magnitude to that of sound location. The eye position signal appeared to interact with the auditory responses in at least a partly multiplicative fashion. We conclude that the representation of sound location in primate IC is distributed and that the frame of reference is intermediate between head- and eye-centered coordinates. The information contained in these neurons appears to be sufficient for later neural stages to calculate the positions of sounds with respect to the eyes.
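A multiplicative interaction of the kind reported, in which eye position scales response level without shifting spatial tuning, can be illustrated with a toy gain-field model. All parameter values below are hypothetical, chosen only to show the qualitative effect, not fitted to the recordings:

```python
import numpy as np

def auditory_response(sound_az, eye_az, pref_az=20.0, sigma=30.0,
                      gain_slope=0.01):
    """Toy gain-field model: a Gaussian tuning curve for head-centered
    sound azimuth (degrees) is scaled multiplicatively by a linear
    function of eye position, so eye position changes response level
    but leaves the tuning peak in place. Parameters are illustrative."""
    tuning = np.exp(-0.5 * ((np.asarray(sound_az, float) - pref_az) / sigma) ** 2)
    gain = 1.0 + gain_slope * eye_az
    return gain * tuning

# Same sound at the preferred azimuth, two fixation positions:
print(auditory_response(20.0, 0.0))                 # 1.0
print(round(auditory_response(20.0, 15.0), 2))      # 1.15
```

Plotting `auditory_response` over a range of `sound_az` for several `eye_az` values shows the peak staying at `pref_az` while the curve scales up and down, the signature of a multiplicative (rather than additive or shifting) eye-position effect.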

8.
Familiarity accentuates gaze cuing in women but not men
Gaze cuing, the tendency to shift attention in the direction other individuals are looking, is hypothesized to depend on a distinct neural module. One expectation of such a module is that information processing should be encapsulated within it. Here, we tested whether familiarity, a type of social knowledge, penetrates the neural circuits governing gaze cuing. Male and female subjects viewed the face of an adult male looking left or right and then pressed a keypad to indicate the location of a target appearing randomly left or right. Responses were faster for targets congruent with gaze direction. Moreover, gaze cuing was stronger in females than males. Contrary to the modularity hypothesis, familiarity enhanced gaze cuing, but only in females. Sex differences in the effects of familiarity on gaze cuing may reflect greater adaptive significance of social information for females than males.

9.
The supplementary eye field (SEF) is a region within medial frontal cortex that integrates complex visuospatial information and controls eye-head gaze shifts. Here, we test if the SEF encodes desired gaze directions in a simple retinal (eye-centered) frame, such as the superior colliculus, or in some other, more complex frame. We electrically stimulated 55 SEF sites in two head-unrestrained monkeys to evoke 3D eye-head gaze shifts and then mathematically rotated these trajectories into various reference frames. Each stimulation site specified a specific spatial goal when plotted in its intrinsic frame. These intrinsic frames varied site by site, in a continuum from eye-, to head-, to space/body-centered coding schemes. This variety of coding schemes provides the SEF with a unique potential for implementing arbitrary reference frame transformations.

10.
Lesion studies of the parietal cortex have led to a wide range of conclusions regarding the coordinate reference frame in which hemineglect is expressed. A model of spatial representation in the parietal cortex has recently been developed in which the position of an object is not encoded in a particular frame of reference, but instead involves neurones computing basis functions of sensory inputs. In this type of representation, a nonlinear sensorimotor transformation of an object is represented in a population of units having the response properties of neurones that are observed in the parietal cortex. A simulated lesion in a basis-function representation was found to replicate three of the most important aspects of hemineglect: (i) the model behaved like parietal patients in line-cancellation and line-bisection experiments; (ii) the deficit affected multiple frames of reference; and (iii) the deficit could be object-centred. These results support the basis-function hypothesis for spatial representations and provide a testable computational theory of hemineglect at the level of single cells.
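The basis-function representation described above is commonly built from units that each multiply a Gaussian of retinal target position with a sigmoid of eye position; downstream units can then read out the target in retinal, head-centered, or other frames by linear combination. The sketch below uses illustrative parameter values and one spatial dimension:

```python
import numpy as np

def basis_function_responses(retinal_x, eye_x,
                             retinal_prefs, eye_prefs,
                             sigma=8.0, slope=0.125):
    """Basis-function population: unit (i, j) responds with a Gaussian
    of retinal position (preferred value retinal_prefs[i]) multiplied
    by a sigmoid of eye position (inflection at eye_prefs[j]). All
    positions in degrees; parameters are illustrative."""
    g = np.exp(-0.5 * ((retinal_x - retinal_prefs) / sigma) ** 2)
    s = 1.0 / (1.0 + np.exp(-slope * (eye_x - eye_prefs)))
    return np.outer(g, s)   # one response per (retinal pref, eye pref) pair

retinal_prefs = np.linspace(-40, 40, 9)
eye_prefs = np.linspace(-40, 40, 9)
r = basis_function_responses(10.0, -5.0, retinal_prefs, eye_prefs)
print(r.shape)  # (9, 9)
```

A simulated lesion in this scheme amounts to removing the units whose preferred values code one hemispace; because each unit carries information in several frames at once, the resulting deficit is expressed in multiple frames of reference, as the abstract reports.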

11.
Heading estimation involves both inertial and visual cues. Inertial motion is sensed by the labyrinth, somatic sensation by the body, and optic flow by the retina. Because the eye and head are mobile these stimuli are sensed relative to different reference frames and it remains unclear if a perception occurs in a common reference frame. Recent neurophysiologic evidence has suggested the reference frames remain separate even at higher levels of processing but has not addressed the resulting perception. Seven human subjects experienced a 2 s, 16 cm/s translation and/or a visual stimulus corresponding with this translation. For each condition 72 stimuli (360° in 5° increments) were delivered in random order. After each stimulus the subject identified the perceived heading using a mechanical dial. Some trial blocks included interleaved conditions in which the influence of ±28° of gaze and/or head position were examined. The observations were fit using a two-degree-of-freedom population vector decoder (PVD) model which considered the relative sensitivity to lateral motion and coordinate system offset. For visual stimuli gaze shifts caused shifts in perceived heading estimates in the direction opposite the gaze shift in all subjects. These perceptual shifts averaged 13 ± 2° for eye only gaze shifts and 17 ± 2° for eye-head gaze shifts. This finding indicates visual headings are biased towards retinal coordinates. Similar gaze and head direction shifts prior to inertial headings had no significant influence on heading direction. Thus inertial headings are perceived in body-centered coordinates. Combined visual and inertial stimuli yielded intermediate results.
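A two-parameter population-vector decoder of the kind described, with one parameter for relative sensitivity to lateral motion and one for a coordinate-system offset, can be sketched as follows. This is one plausible reading of the model, not the authors' implementation, and the parameter values are hypothetical:

```python
import numpy as np

def pvd_heading(true_heading_deg, lateral_gain=1.5, frame_offset_deg=0.0):
    """Population-vector decoder sketch: cosine-tuned units span 360
    deg of heading; the lateral (sine) component is weighted by
    `lateral_gain` relative to the fore-aft (cosine) component, and
    `frame_offset_deg` shifts the reference frame (e.g. toward retinal
    coordinates after a gaze shift). Returns decoded heading in deg."""
    theta = np.deg2rad(true_heading_deg - frame_offset_deg)
    x, y = np.cos(theta), lateral_gain * np.sin(theta)
    return (np.rad2deg(np.arctan2(y, x)) + frame_offset_deg) % 360.0

# Equal sensitivity, no offset: decoding is veridical.
print(round(pvd_heading(30.0, lateral_gain=1.0), 1))  # 30.0
# A partial frame offset (as after a gaze shift) biases the estimate.
print(round(pvd_heading(30.0, frame_offset_deg=14.0), 1))
```

In this scheme a nonzero `frame_offset_deg` for visual but not inertial stimuli reproduces the qualitative pattern reported: visual headings shift with gaze, inertial headings do not.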

12.
Sensory processing in the brain includes three key operations: multisensory integration—the task of combining cues into a single estimate of a common underlying stimulus; coordinate transformations—the change of reference frame for a stimulus (e.g., retinotopic to body-centered) effected through knowledge about an intervening variable (e.g., gaze position); and the incorporation of prior information. Statistically optimal sensory processing requires that each of these operations maintains the correct posterior distribution over the stimulus. Elements of this optimality have been demonstrated in many behavioral contexts in humans and other animals, suggesting that the neural computations are indeed optimal. That the relationships between sensory modalities are complex and plastic further suggests that these computations are learned—but how? We provide a principled answer, by treating the acquisition of these mappings as a case of density estimation, a well-studied problem in machine learning and statistics, in which the distribution of observed data is modeled in terms of a set of fixed parameters and a set of latent variables. In our case, the observed data are unisensory-population activities, the fixed parameters are synaptic connections, and the latent variables are multisensory-population activities. In particular, we train a restricted Boltzmann machine with the biologically plausible contrastive-divergence rule to learn a range of neural computations not previously demonstrated under a single approach: optimal integration; encoding of priors; hierarchical integration of cues; learning when not to integrate; and coordinate transformation. The model makes testable predictions about the nature of multisensory representations.
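The contrastive-divergence rule named above can be sketched for a small binary restricted Boltzmann machine. Layer sizes, the learning rate, and the synthetic data are illustrative stand-ins for the unisensory (visible) and multisensory (hidden) populations:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, a, b, v0, lr=0.05):
    """One CD-1 update for a binary RBM: sample hidden units from the
    data, reconstruct the visible layer, and move the weights toward
    the data statistics and away from the reconstruction statistics."""
    ph0 = sigmoid(v0 @ W + b)                       # P(h=1 | data)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + a)                     # reconstruction
    ph1 = sigmoid(pv1 @ W + b)                      # P(h=1 | reconstruction)
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    a += lr * (v0 - pv1).mean(axis=0)
    b += lr * (ph0 - ph1).mean(axis=0)
    return W, a, b

n_visible, n_hidden = 12, 6                 # illustrative sizes
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
a = np.zeros(n_visible)                     # visible biases
b = np.zeros(n_hidden)                      # hidden biases
data = (rng.random((100, n_visible)) < 0.3).astype(float)
for _ in range(50):
    W, a, b = cd1_step(W, a, b, data)
```

In the paper's setting the visible vectors would be unisensory population activities rather than random bits; the point here is only the shape of the learning rule.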

13.
Humans show a remarkable ability to discriminate others' gaze direction, even though a given direction can be conveyed by many physically dissimilar configurations of different eye positions and head views. For example, eye contact can be signaled by a rightward glance in a left-turned head or by direct gaze in a front-facing head. Such acute gaze discrimination implies considerable perceptual invariance. Previous human research found that superior temporal sulcus (STS) responds preferentially to gaze shifts [1], but the underlying representation that supports such general responsiveness remains poorly understood. Using multivariate pattern analysis (MVPA) of human functional magnetic resonance imaging (fMRI) data, we tested whether STS contains a higher-order, head view-invariant code for gaze direction. The results revealed a finely graded gaze direction code in right anterior STS that was invariant to head view and physical image features. Further analyses revealed similar gaze effects in left anterior STS and precuneus. Our results suggest that anterior STS codes the direction of another's attention regardless of how this information is conveyed and demonstrate how high-level face areas carry out fine-grained, perceptually relevant discrimination through invariance to other face features.

14.
Orchestrating a movement towards a sensory target requires many computational processes, including a transformation between reference frames. This transformation is important because the reference frames in which sensory stimuli are encoded often differ from those of motor effectors. The posterior parietal cortex has an important role in these transformations. Recent work indicates that a significant proportion of parietal neurons in two cortical areas transforms the sensory signals that are used to guide movements into a common reference frame. This common reference frame is an eye-centred representation that is modulated by eye-, head-, body- or limb-position signals. A common reference frame might facilitate communication between different areas that are involved in coordinating the movements of different effectors. It might also be an efficient way to represent the locations of different sensory targets in the world.

15.
Prior results on the spatial integration of layouts within a room differed regarding the reference frame that participants used for integration. We asked whether these differences also occur when integrating 2D screen views and, if so, what the reasons for this might be. In four experiments we showed that integrating reference frames varied as a function of task familiarity combined with processing time, cues for spatial transformation, and information about action requirements, paralleling results in the 3D case. Participants saw part of an object layout in screen 1, another part in screen 2, and responded to the integrated layout in screen 3. Layout presentations between two screens coincided or differed in orientation. Aligning misaligned screens for integration is known to increase errors/latencies. The error/latency pattern was thus indicative of the reference frame used for integration. We showed that task familiarity combined with self-paced learning, visual updating, and knowing from where to act prioritized the integration within the reference frame of the initial presentation, which was updated later, and from where participants acted respectively. Participants also heavily relied on layout intrinsic frames. The results show how humans flexibly adjust their integration strategy to a wide variety of conditions.

16.

Background

Several psychophysical experiments found evidence for the involvement of gaze-centered and/or body-centered coordinates in arm-movement planning and execution. Here we aimed at investigating the frames of reference involved in the visuomotor transformations for reaching towards visual targets in space by taking target eccentricity and performing hand into account.

Methodology/Principal Findings

We examined several performance measures while subjects reached, in complete darkness, memorized targets situated at different locations relative to the gaze and/or to the body, thus distinguishing between an eye-centered and a body-centered frame of reference involved in the computation of the movement vector. The errors seem to be mainly affected by the visual hemifield of the target, independently from its location relative to the body, with an overestimation error in the horizontal reaching dimension (retinal exaggeration effect). The use of several target locations within the perifoveal visual field allowed us to reveal a novel finding, that is, a positive linear correlation between horizontal overestimation errors and target retinal eccentricity. In addition, we found an independent influence of the performing hand on the visuomotor transformation process, with each hand misreaching towards the ipsilateral side.

Conclusions

While supporting the existence of an internal mechanism of target-effector integration in multiple frames of reference, the present data, especially the linear overshoot at small target eccentricities, clearly indicate the primary role of gaze-centered coding of target location in the visuomotor transformation for reaching.

17.
For effective social interactions with other people, information about the physical environment must be integrated with information about the interaction partner. In order to achieve this, processing of social information is guided by two components: a bottom-up mechanism reflexively triggered by stimulus-related information in the social scene and a top-down mechanism activated by task-related context information. In the present study, we investigated whether these components interact during attentional orienting to gaze direction. In particular, we examined whether the spatial specificity of gaze cueing is modulated by expectations about the reliability of gaze behavior. Expectations were either induced by instruction or could be derived from experience with displayed gaze behavior. Spatially specific cueing effects were observed with highly predictive gaze cues, but also when participants merely believed that actually non-predictive cues were highly predictive. Conversely, cueing effects for the whole gazed-at hemifield were observed with non-predictive gaze cues, and spatially specific cueing effects were attenuated when actually predictive gaze cues were believed to be non-predictive. This pattern indicates that (i) information about cue predictivity gained from sampling gaze behavior across social episodes can be incorporated in the attentional orienting to social cues, and that (ii) beliefs about gaze behavior modulate attentional orienting to gaze direction even when they contradict information available from social episodes.

18.
The presumed role of the primate sensorimotor system is to transform reach targets from retinotopic to joint coordinates for producing motor output. However, the interpretation of neurophysiological data within this framework is ambiguous, and has led to the view that the underlying neural computation may lack a well-defined structure. Here, I consider a model of sensorimotor computation in which temporal as well as spatial transformations generate representations of desired limb trajectories, in visual coordinates. This computation is suggested by behavioral experiments, and its modular implementation makes predictions that are consistent with those observed in monkey posterior parietal cortex (PPC). In particular, the model provides a simple explanation for why PPC encodes reach targets in reference frames intermediate between the eye and hand, and further explains why these reference frames shift during movement. Representations in PPC are thus consistent with the orderly processing of information, provided we adopt the view that sensorimotor computation manipulates desired movement trajectories, and not desired movement endpoints.

19.
Damage to the human parietal cortex leads to disturbances of spatial perception and of motor behaviour. Within the parietal lobe, lesions of the superior and of the inferior lobule induce quite different, characteristic deficits. Patients with inferior (predominantly right) parietal lobe lesions fail to explore the contralesional part of space by eye or limb movements (spatial neglect). In contrast, superior parietal lobe lesions lead to specific impairments of goal-directed movements (optic ataxia). The observations reported in this paper support the view of dissociated functions represented in the inferior and the superior lobule of the human parietal cortex. They suggest that a spatial reference frame for exploratory behaviour is disturbed in patients with neglect. Data from these patients' visual search argue that their failure to explore the contralesional side is due to a disturbed input transformation leading to a deviation of egocentric space representation to the ipsilesional side. Data further show that this deviation follows a rotation around the earth-vertical body axis to the ipsilesional side rather than a translation towards that side. The results are in clear contrast to explanations that assume a lateral gradient ranging from a minimum of exploration in the extreme contralesional to a maximum in the extreme ipsilesional hemispace. Moreover, the failure to orient towards and to explore the contralesional part of space appears to be distinct from those deficits observed once an object of interest has been located and releases reaching. Although patients with neglect exhibit a severe bias of exploratory movements, their hand trajectories to targets in peripersonal space may follow a straight path. This result suggests that (i) exploratory and (ii) goal-directed behaviour in space do not share the same neural control mechanisms.
Neural representation of space in the inferior parietal lobule seems to serve as a matrix for spatial exploration and for orienting in space but not for visuomotor processes involved in reaching for objects. Disturbances of such processes rather appear to be prominent in patients with more superior parietal lobe lesions and optic ataxia.

20.
Neurophysiological studies focus on memory retrieval as a reproduction of what was experienced and have established that neural discharge is replayed to express memory. However, cognitive psychology has established that recollection is not a verbatim replay of stored information. Recollection is constructive, the product of memory retrieval cues, the information stored in memory, and the subject's state of mind. We discovered key features of constructive recollection embedded in the rat CA1 ensemble discharge during an active avoidance task. Rats learned two task variants, one with the arena stable, the other with it rotating; each variant defined a distinct behavioral episode. During the rotating episode, the ensemble discharge of CA1 principal neurons was dynamically organized to concurrently represent space in two distinct codes. The code for spatial reference frame switched rapidly between representing the rat's current location in either the stationary spatial frame of the room or the rotating frame of the arena. The code for task variant switched less frequently between a representation of the current rotating episode and the stable episode from the rat's past. The characteristics and interplay of these two hippocampal codes revealed three key properties of constructive recollection. (1) Although the ensemble representations of the stable and rotating episodes were distinct, ensemble discharge during rotation occasionally resembled the stable condition, demonstrating cross-episode retrieval of the representation of the remote, stable episode. (2) This cross-episode retrieval at the level of the code for task variant was more likely when the rotating arena was about to match its orientation in the stable episode. (3) The likelihood of cross-episode retrieval was influenced by preretrieval information that was signaled at the level of the code for spatial reference frame.
Thus key features of episodic recollection manifest in rat hippocampal representations of space.
