Similar Literature
20 similar articles found.
1.
Bernier PM, Grafton ST. Neuron. 2010;68(4):776-788
Current models of sensorimotor transformations emphasize the dominant role of gaze-centered representations for reach planning in the posterior parietal cortex (PPC). Here we exploit fMRI repetition suppression to test whether the sensory modality of a target determines the reference frame used to define the motor goal in the PPC and premotor cortex. We show that when targets are defined visually, the anterior precuneus selectively encodes the motor goal in gaze-centered coordinates, whereas the parieto-occipital junction, Brodmann Area 5 (BA 5), and PMd use a mixed gaze- and body-centered representation. In contrast, when targets are defined by unseen proprioceptive cues, activity in these areas switches to represent the motor goal predominantly in body-centered coordinates. These results support computational models arguing for flexibility in reference frames for action according to sensory context. Critically, they provide neuroanatomical evidence that flexibility is achieved by exploiting a multiplicity of reference frames that can be expressed within individual areas.

2.
Orchestrating a movement towards a sensory target requires many computational processes, including a transformation between reference frames. This transformation is important because the reference frames in which sensory stimuli are encoded often differ from those of motor effectors. The posterior parietal cortex has an important role in these transformations. Recent work indicates that a significant proportion of parietal neurons in two cortical areas transforms the sensory signals that are used to guide movements into a common reference frame. This common reference frame is an eye-centred representation that is modulated by eye-, head-, body- or limb-position signals. A common reference frame might facilitate communication between different areas that are involved in coordinating the movements of different effectors. It might also be an efficient way to represent the locations of different sensory targets in the world.

3.
Lesion studies of the parietal cortex have led to a wide range of conclusions regarding the coordinate reference frame in which hemineglect is expressed. A model of spatial representation in the parietal cortex has recently been developed in which the position of an object is not encoded in a particular frame of reference, but instead involves neurones computing basis functions of sensory inputs. In this type of representation, a nonlinear sensorimotor transformation of an object is represented in a population of units having the response properties of neurones that are observed in the parietal cortex. A simulated lesion in a basis-function representation was found to replicate three of the most important aspects of hemineglect: (i) the model behaved like parietal patients in line-cancellation and line-bisection experiments; (ii) the deficit affected multiple frames of reference; and (iii) the deficit could be object-centred. These results support the basis-function hypothesis for spatial representations and provide a testable computational theory of hemineglect at the level of single cells.
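The basis-function idea above can be illustrated with a minimal sketch. All tuning shapes, widths, and preferred values below are illustrative assumptions, not parameters from the model itself:

```python
import math

def gaussian(x, mu, sigma=5.0):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def sigmoid(x, mu, slope=0.2):
    return 1.0 / (1.0 + math.exp(-slope * (x - mu)))

def basis_unit(retinal_x, eye_x, pref_ret, pref_eye):
    # Gain-modulated response: Gaussian retinal tuning multiplied by a
    # sigmoidal eye-position "gain field", the response profile the
    # model attributes to parietal neurones.
    return gaussian(retinal_x, pref_ret) * sigmoid(eye_x, pref_eye)

# A small population tiling retinal position x eye position (degrees).
prefs = [(pr, pe) for pr in range(-20, 21, 10) for pe in range(-20, 21, 10)]

def population(retinal_x, eye_x):
    return [basis_unit(retinal_x, eye_x, pr, pe) for pr, pe in prefs]

# Two stimuli at the same head-centred location (retinal + eye = 10 deg)
# but with different retinal/eye decompositions evoke different
# population patterns: no single frame of reference is privileged, yet
# any frame can be recovered downstream by a linear readout.
a = population(10, 0)
b = population(0, 10)
```

Because the population jointly encodes retinal position and eye position, a lesion removing part of it degrades readouts in several reference frames at once, which is the property the simulated-lesion results exploit.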

4.
The presumed role of the primate sensorimotor system is to transform reach targets from retinotopic to joint coordinates for producing motor output. However, the interpretation of neurophysiological data within this framework is ambiguous, and has led to the view that the underlying neural computation may lack a well-defined structure. Here, I consider a model of sensorimotor computation in which temporal as well as spatial transformations generate representations of desired limb trajectories, in visual coordinates. This computation is suggested by behavioral experiments, and its modular implementation makes predictions that are consistent with those observed in monkey posterior parietal cortex (PPC). In particular, the model provides a simple explanation for why PPC encodes reach targets in reference frames intermediate between the eye and hand, and further explains why these reference frames shift during movement. Representations in PPC are thus consistent with the orderly processing of information, provided we adopt the view that sensorimotor computation manipulates desired movement trajectories, and not desired movement endpoints.

5.
During saccadic eye movements, the visual world shifts rapidly across the retina. Perceptual continuity is thought to be maintained by active neural mechanisms that compensate for this displacement, bringing the presaccadic scene into a postsaccadic reference frame. Because of this active mechanism, objects appearing briefly around the time of the saccade are perceived at erroneous locations, a phenomenon called perisaccadic mislocalization. The position and direction of localization errors can inform us about the different reference frames involved. It has been found, for example, that errors are not simply made in the direction of the saccade but directed toward the saccade target, indicating that the compensatory mechanism involves spatial compression rather than translation. A recent study confirmed that localization errors also occur in the direction orthogonal to saccade direction, but only for eccentricities far from the fovea, beyond the saccade target. This spatially specific pattern of distortion cannot be explained by a simple compression of space around the saccade target. Here I show that a change of reference frames (i.e., translation) in cortical (logarithmic) coordinates, taking into account the cortical magnification factor, can accurately predict these spatial patterns of mislocalization. The flashed object projects onto the cortex in presaccadic (fovea-centered) coordinates but is perceived in postsaccadic (target-centered) coordinates.
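The core prediction can be sketched in one dimension along the saccade direction. The magnification constants `K` and `A` below are assumed example values for a monopole log-magnification model, not parameters fitted in the study, and the sketch ignores the orthogonal component the study also addresses:

```python
import math

# Illustrative monopole magnification model: cortical distance from the
# foveal representation grows logarithmically with eccentricity.
K, A = 17.3, 0.75  # assumed example values (mm, degrees)

def to_cortex(ecc):
    return K * math.log(1 + ecc / A)

def from_cortex(c):
    return A * (math.exp(c / K) - 1)

def perceived(flash_ecc, saccade_amp):
    # Presaccadic encoding is fovea-centred; the postsaccadic readout
    # re-references the same cortical locus to the saccade target by a
    # pure translation in cortical (logarithmic) coordinates.
    c = to_cortex(flash_ecc) - to_cortex(saccade_amp)
    return saccade_amp + math.copysign(from_cortex(abs(c)), c)
```

Because the log map compresses eccentric space, a pure translation in cortical coordinates maps back to visual space as apparent compression toward the target: flashes both nearer and farther than the target are predicted to be mislocalized toward it.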

6.
Pesaran B, Nelson MJ, Andersen RA. Neuron. 2006;51(1):125-134
When reaching to grasp an object, we often move our arm and orient our gaze together. How are these movements coordinated? To investigate this question, we studied neuronal activity in the dorsal premotor area (PMd) and the medial intraparietal area (area MIP) of two monkeys while systematically varying the starting position of the hand and eye during reaching. PMd neurons encoded the relative position of the target, hand, and eye. MIP neurons encoded target location with respect to the eye only. These results indicate that whereas MIP encodes target locations in an eye-centered reference frame, PMd uses a relative position code that specifies the differences in locations between all three variables. Such a relative position code may play an important role in coordinating hand and eye movements by computing their relative position.
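The contrast between the two codes reduces to which difference vectors each area carries. A minimal 1-D sketch (function names and values are illustrative):

```python
def mip_code(target, eye, hand):
    # Area MIP: target location in an eye-centred frame only.
    return (target - eye,)

def pmd_code(target, eye, hand):
    # PMd: a relative-position code over all three variables.
    return (target - eye, target - hand, hand - eye)

# Moving the hand alone leaves the MIP code unchanged but alters two
# components of the PMd code; carrying hand-eye differences explicitly
# is what makes the PMd code suited to coordinating the two effectors.
```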

7.
In the presence of vision, finalized motor acts can trigger spatial remapping, i.e., reference frame transformations that allow for a better interaction with targets. However, it is yet unclear how the peripersonal space is encoded and remapped depending on the availability of visual feedback and on the target position within the individual's reachable space, and which cerebral areas subserve such processes. Here, functional magnetic resonance imaging (fMRI) was used to examine neural activity while healthy young participants performed reach-to-grasp movements with and without visual feedback and at different distances of the target from the effector (near to the hand, about 15 cm from the starting position, vs. far from the hand, about 30 cm from the starting position). Brain response in the superior parietal lobule bilaterally, in the right dorsal premotor cortex, and in the anterior part of the right inferior parietal lobule was significantly greater during visually-guided grasping of targets located at the far distance compared to grasping of targets located near to the hand. In the absence of visual feedback, the inferior parietal lobule exhibited a greater activity during grasping of targets at the near compared to the far distance. Results suggest that in the presence of visual feedback, a visuo-motor circuit integrates visuo-motor information when targets are located farther away. Conversely, in the absence of visual feedback, encoding of space may demand multisensory remapping processes, even in the case of more proximal targets.

8.
9.
Flexible representations of dynamics are used in object manipulation
To manipulate an object skillfully, the brain must learn its dynamics, specifying the mapping between applied force and motion. A fundamental issue in sensorimotor control is whether such dynamics are represented in an extrinsic frame of reference tied to the object or an intrinsic frame of reference linked to the arm. Although previous studies have suggested that objects are represented in arm-centered coordinates [1-6], all of these studies have used objects with unusual and complex dynamics. Thus, it is not known how objects with natural dynamics are represented. Here we show that objects with simple (or familiar) dynamics and those with complex (or unfamiliar) dynamics are represented in object- and arm-centered coordinates, respectively. We also show that objects with simple dynamics are represented with an intermediate coordinate frame when vision of the object is removed. These results indicate that object dynamics can be flexibly represented in different coordinate frames by the brain. We suggest that with experience, the representation of the dynamics of a manipulated object may shift from a coordinate frame tied to the arm toward one that is linked to the object. The additional complexity required to represent dynamics in object-centered coordinates would be economical for familiar objects because such a representation allows object use regardless of the orientation of the object in hand.

10.
Three sets of 20 trinucleotides are preferentially associated with the reading frames and their 2 shifted frames of both eukaryotic and prokaryotic genes. These 3 sets are circular codes. They allow retrieval of any frame in genes (containing these circular code words), locally anywhere in the 3 frames, in particular without start codons in the reading frame, and automatically with the reading of a few nucleotides. The circular code in the reading frame, denoted X, from which the 2 other circular codes in the shifted frames can be deduced by permutation, is the information used for analysing frameshift genes, i.e., genes with a change of reading frame during translation. This work studies the circular code signal around their frameshift sites. Two scoring methods are developed, a function P based on this code X and a function Q based both on this code X and the 4 trinucleotides with identical nucleotides. They detect a significant correlation between the code X and the -1 frameshift signals in both eukaryotic and prokaryotic genes, and the +1 frameshift signals in eukaryotic genes.
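The frame-retrieval property can be sketched with a crude score: the fraction of trinucleotides read in each frame that belong to the code. The 20-word set below is one commonly cited version of the code X; verify it against the original publication before serious use, and note this score is a simplified stand-in for the paper's P and Q functions:

```python
# One commonly cited version of the circular code X (20 trinucleotides);
# listed here as an assumption, to be checked against the source paper.
X = {"AAC", "AAT", "ACC", "ATC", "ATT", "CAG", "CTC", "CTG", "GAA", "GAC",
     "GAG", "GAT", "GCC", "GGC", "GGT", "GTA", "GTC", "GTT", "TAC", "TTC"}

def frame_score(seq, frame, code=X):
    # Fraction of trinucleotides read in `frame` (0, 1 or 2) that belong
    # to the code.
    codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
    return sum(c in code for c in codons) / len(codons) if codons else 0.0

def best_frame(seq, code=X):
    # The circular-code property lets a short window of code words
    # reveal its own reading frame, with no start codon needed.
    return max(range(3), key=lambda f: frame_score(seq, f, code))
```

Sliding such a score across a frameshift site would show the dominant frame switching, which is the kind of signal the paper's P and Q functions quantify.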

11.
12.
We have reviewed evidence that suggests that the target for limb motion is encoded in a retinocentric frame of reference. Errors in pointing that are elicited by an illusion that distorts the perceived motion of a target are strongly correlated with errors in gaze position. The modulations in the direction and speed of ocular smooth pursuit and of the hand show remarkable similarities, even though the inertia of the arm is much larger than that of the eye. We have suggested that ocular motion is constrained so that gaze provides an appropriate target signal for the hand. Finally, ocular and manual tracking deficits in patients with cerebellar ataxia are very similar. These deficits are also consistent with the idea that a gaze signal provides the target for hand motion; in some cases limb ataxia would be a consequence of optic ataxia rather than reflecting a deficit in the control of limb motion per se. These results, as well as neurophysiological data summarized here, have led us to revise a hypothesis we have previously put forth to account for the initial stages of sensorimotor transformations underlying targeted limb motions. In the original hypothesis, target location and initial arm posture were ultimately encoded in a common frame of reference tied to somatosensation, i.e. a body-centered frame of reference, and a desired change in posture was derived from the difference between the two. In our new scheme, a movement vector is derived from the difference between variables encoded in a retinocentric frame of reference. Accordingly, gaze, with its exquisite ability to stabilize a target image even under dynamic conditions, would be used as a reference signal. Consequently, this scheme would facilitate the processing of information under conditions in which the body and the target are moving relative to each other.

13.
Whenever we shift our gaze, any location information encoded in the retinocentric reference frame that is predominant in the visual system is obliterated. How is spatial memory retained across gaze changes? Two different explanations have been proposed: Retinocentric information may be transformed into a gaze-invariant representation through a mechanism consistent with gain fields observed in parietal cortex, or retinocentric information may be updated in anticipation of the shift expected with every gaze change, a proposal consistent with neural observations in LIP. The explanations were considered incompatible with each other, because retinocentric update is observed before the gaze shift has terminated. Here, we show that a neural dynamic mechanism for coordinate transformation can also account for retinocentric updating. Our model postulates an extended mechanism of reference frame transformation that is based on bidirectional mapping between a retinocentric and a body-centered representation and that enables transforming multiple object locations in parallel. The dynamic coupling between the two reference frames generates a shift of the retinocentric representation for every gaze change. We account for the predictive nature of the observed remapping activity by using the same kind of neural mechanism to generate an internal representation of gaze direction that is predictively updated based on corollary discharge signals. We provide evidence for the model by accounting for a series of behavioral and neural experimental observations.
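Stripped of its neural dynamics, the bidirectional transformation reduces to simple vector arithmetic, and the "remapping" falls out of it. A 1-D sketch (all names and values are illustrative, not from the model):

```python
def to_body(retinal, gaze):
    # forward mapping: body-centred = retinocentric + gaze direction
    return retinal + gaze

def to_retina(body, gaze):
    # backward mapping of the same bidirectional transformation
    return body - gaze

def remap(retinal_locs, old_gaze, new_gaze):
    # Holding body-centred locations fixed across a gaze change shifts
    # every retinocentric location by -(new_gaze - old_gaze). Driving
    # this with a corollary-discharge prediction of new_gaze, available
    # before the eyes move, yields predictive remapping; multiple
    # locations transform in parallel.
    return [to_retina(to_body(r, old_gaze), new_gaze) for r in retinal_locs]
```

The point of the sketch is that transformation and updating are not rival mechanisms: one bidirectional mapping, updated by a predicted gaze signal, produces both.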

14.
Neurophysiological studies focus on memory retrieval as a reproduction of what was experienced and have established that neural discharge is replayed to express memory. However, cognitive psychology has established that recollection is not a verbatim replay of stored information. Recollection is constructive, the product of memory retrieval cues, the information stored in memory, and the subject's state of mind. We discovered key features of constructive recollection embedded in the rat CA1 ensemble discharge during an active avoidance task. Rats learned two task variants, one with the arena stable, the other with it rotating; each variant defined a distinct behavioral episode. During the rotating episode, the ensemble discharge of CA1 principal neurons was dynamically organized to concurrently represent space in two distinct codes. The code for spatial reference frame switched rapidly between representing the rat's current location in either the stationary spatial frame of the room or the rotating frame of the arena. The code for task variant switched less frequently between a representation of the current rotating episode and the stable episode from the rat's past. The characteristics and interplay of these two hippocampal codes revealed three key properties of constructive recollection. (1) Although the ensemble representations of the stable and rotating episodes were distinct, ensemble discharge during rotation occasionally resembled the stable condition, demonstrating cross-episode retrieval of the representation of the remote, stable episode. (2) This cross-episode retrieval at the level of the code for task variant was more likely when the rotating arena was about to match its orientation in the stable episode. (3) The likelihood of cross-episode retrieval was influenced by preretrieval information that was signaled at the level of the code for spatial reference frame. Thus key features of episodic recollection manifest in rat hippocampal representations of space.

15.
BACKGROUND: Neurons in primary auditory cortex are known to be sensitive to the locations of sounds in space, but the reference frame for this spatial sensitivity has not been investigated. Conventional wisdom holds that the auditory and visual pathways employ different reference frames, with the auditory pathway using a head-centered reference frame and the visual pathway using an eye-centered reference frame. Reconciling these discrepant reference frames is therefore a critical component of multisensory integration. RESULTS: We tested the reference frame of neurons in the auditory cortex of primates trained to fixate visual stimuli at different orbital positions. We found that eye position altered the activity of about one third of the neurons in this region (35 of 113, or 31%). Eye position affected not only the responses to sounds (26 of 113, or 23%), but also the spontaneous activity (14 of 113, or 12%). Such effects were also evident when monkeys moved their eyes freely in the dark. Eye position and sound location interacted to produce a representation for auditory space that was neither head- nor eye-centered in reference frame. CONCLUSIONS: Taken together with emerging results in both visual and other auditory areas, these findings suggest that neurons whose responses reflect complex interactions between stimulus position and eye position set the stage for the eventual convergence of auditory and visual information.

16.

Objective

Biomechanical effects of laterally wedged insoles are assessed by reduction in the knee adduction moment. However, the degree of reduction may vary depending on the reference frame with which it is calculated. The purpose of this study was to clarify the effect of reference frame on the reduction in the knee adduction moment by laterally wedged insoles.

Methods

Twenty-nine healthy participants performed gait trials with a laterally wedged insole and with a flat insole as a control. The knee adduction moment variables, including the first and second peaks and the angular impulse, were calculated using four different reference frames: the femoral frame, tibial frame, laboratory frame and the Joint Coordinate System.

Results

There were significant effects of reference frame on the knee adduction moment first and second peaks (P < 0.001 for both variables), while the effect was not significant for the angular impulse (P = 0.84). No significant interaction between the gait condition and reference frame was found for any of the knee adduction moment variables (P = 0.99 for all variables), indicating that the effects of the laterally wedged insole on the knee adduction moments were similar across the four reference frames. On the other hand, the average percent changes ranged from 9% to 16% for the first peak, from 16% to 18% for the second peak and from 17% to 21% for the angular impulse when using the different reference frames.

Conclusion

The effects of laterally wedged insoles on the reduction in the knee adduction moment were similar across the reference frames. However, researchers should recognize that when the percent change is used as the measure of the efficacy of laterally wedged insoles, the choice of reference frame may influence the interpretation of how laterally wedged insoles affect the knee adduction moment.
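Why the reference frame matters at all can be shown with a toy calculation: the same physical moment vector has different "adduction" components when resolved in frames that are rotated relative to one another. The moment values and rotation angles below are illustrative assumptions, not data from the study:

```python
import math

def rotate_z(v, deg):
    # Express vector v in a frame rotated by `deg` about the vertical axis.
    r = math.radians(deg)
    x, y, z = v
    return (x * math.cos(r) + y * math.sin(r),
            -x * math.sin(r) + y * math.cos(r),
            z)

# Illustrative knee joint moment in laboratory coordinates (N*m); the
# first component plays the role of the adduction moment.
M_lab = (40.0, 5.0, 3.0)

# Resolving the same moment in frames slightly rotated with respect to
# one another (standing in for lab vs. femoral vs. tibial frames)
# changes the scalar adduction component, and therefore any
# percent-change statistic computed from it.
adduction = {deg: rotate_z(M_lab, deg)[0] for deg in (0, 5, 10)}
```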

17.
The mammalian forebrain is characterized by the presence of several parallel cortico‐basal ganglia circuits that shape the learning and control of actions. Among these are the associative, limbic and sensorimotor circuits. The function of all of these circuits has now been implicated in responses to drugs of abuse, as well as drug seeking and drug taking. While the limbic circuit has been most widely examined, key roles for the other two circuits in control of goal‐directed and habitual instrumental actions related to drugs of abuse have been shown. In this review we describe the three circuits and effects of acute and chronic drug exposure on circuit physiology. Our main emphasis is on drug actions in dorsal striatal components of the associative and sensorimotor circuits. We then review key findings that have implicated these circuits in drug seeking and taking behaviors, as well as drug use disorders. Finally, we consider different models describing how the three cortico‐basal ganglia circuits become involved in drug‐related behaviors. This topic has implications for drug use disorders and addiction, as treatments that target the balance between the different circuits may be useful for reducing excessive substance use.

18.
Chersi F, Ferrari PF, Fogassi L. PLoS ONE. 2011;6(11):e27652
The inferior part of the parietal lobe (IPL) is known to play a very important role in sensorimotor integration. Neurons in this region code goal-related motor acts performed with the mouth, with the hand and with the arm. It has been demonstrated that most IPL motor neurons coding a specific motor act (e.g., grasping) show markedly different activation patterns according to the final goal of the action sequence in which the act is embedded (grasping for eating or grasping for placing). Some of these neurons (parietal mirror neurons) show a similar selectivity also during the observation of the same action sequences when executed by others. Thus, it appears that the neuronal response occurring during the execution and the observation of a specific grasping act codes not only the executed motor act, but also the agent's final goal (intention). In this work we present a biologically inspired neural network architecture that models mechanisms of motor sequence execution and recognition. In this network, pools composed of motor and mirror neurons that encode motor acts of a sequence are arranged in the form of action goal-specific neuronal chains. The execution and the recognition of actions is achieved through the propagation of activity bursts along specific chains modulated by visual and somatosensory inputs. The implemented spiking neuron network is able to reproduce the results found in neurophysiological recordings of parietal neurons during task performance and provides a biologically plausible implementation of the action selection and recognition process. Finally, the present paper proposes a mechanism for the formation of new neural chains by linking together in a sequential manner neurons that represent subsequent motor acts, thus producing goal-directed sequences.

19.
Vision not only provides us with detailed knowledge of the world beyond our bodies, but it also guides our actions with respect to objects and events in that world. The computations required for vision-for-perception are quite different from those required for vision-for-action. The former uses relational metrics and scene-based frames of reference while the latter uses absolute metrics and effector-based frames of reference. These competing demands on vision have shaped the organization of the visual pathways in the primate brain, particularly within the visual areas of the cerebral cortex. The ventral ‘perceptual’ stream, projecting from early visual areas to inferior temporal cortex, helps to construct the rich and detailed visual representations of the world that allow us to identify objects and events, attach meaning and significance to them and establish their causal relations. By contrast, the dorsal ‘action’ stream, projecting from early visual areas to the posterior parietal cortex, plays a critical role in the real-time control of action, transforming information about the location and disposition of goal objects into the coordinate frames of the effectors being used to perform the action. The idea of two visual systems in a single brain might seem initially counterintuitive. Our visual experience of the world is so compelling that it is hard to believe that some other quite independent visual signal—one that we are unaware of—is guiding our movements. But evidence from a broad range of studies from neuropsychology to neuroimaging has shown that the visual signals that give us our experience of objects and events in the world are not the same ones that control our actions.

20.
Multiple particle tracking (MPT) has seen numerous applications in live-cell imaging studies of subcellular dynamics. Establishing correspondence between particles in a sequence of frames with high particle density, particles merging and splitting, particles entering and exiting the frame, temporary particle disappearance, and an ill-performing detection algorithm is the most challenging part of MPT. Here we propose a tracking method based on multidimensional assignment to address these problems. We combine an Interacting Multiple Model (IMM) filter, multidimensional assignment, particle occlusion handling, and merge-split event detection in a single software analysis package. The main advantage of a multidimensional assignment is that both spatial and temporal information can be used by using several later frames as reference. The IMM filter, which is used to maintain and predict the state of each track, contains several models which correspond to different types of biologically realistic movements. It works especially well with multidimensional assignment, because there tends to be a higher probability of correct particle association over time. First the method generates many particle-correspondence hypotheses, merge-split hypotheses and misdetection hypotheses within the framework of a sliding window over the frames of the image sequence. Then it builds a multidimensional assignment problem (MAP) accordingly. The particle is tracked with gap-filling, and merging and splitting events are then detected using the MAP solution. The tracking method is validated on both simulated tracks and microscopy image sequences. The results of these experiments show that the method is more accurate and robust than other "tracking from detected features" methods in dense particle situations.
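The simplest building block of such a tracker, two-frame linear assignment with distance gating, can be sketched as follows. This is a drastic simplification under stated assumptions (equal particle counts in both frames, exhaustive search over permutations, no motion model); the paper's method generalises it to a multiframe, multidimensional assignment with IMM prediction and merge/split/occlusion hypotheses:

```python
from itertools import permutations
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def link_frames(prev, curr, max_disp=15.0):
    # Two-frame linear assignment by exhaustive search over permutations
    # (tractable only for a handful of particles; assumes
    # len(prev) == len(curr)). Returns (prev_index, curr_index) links.
    best = min(permutations(range(len(curr))),
               key=lambda p: sum(dist(prev[i], curr[j])
                                 for i, j in enumerate(p)))
    # Gate out implausibly long links, a crude stand-in for handling
    # missed detections and particles leaving the field of view.
    return [(i, j) for i, j in enumerate(best)
            if dist(prev[i], curr[j]) <= max_disp]
```

Minimising total displacement rather than greedily linking nearest neighbours is what resolves ambiguous crossings; the multidimensional version extends the same idea across several later frames at once.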
