Similar Documents
20 similar documents retrieved (search time: 15 ms).
1.
Lesion studies of the parietal cortex have led to a wide range of conclusions regarding the coordinate reference frame in which hemineglect is expressed. A model of spatial representation in the parietal cortex has recently been developed in which the position of an object is not encoded in a particular frame of reference, but instead involves neurones computing basis functions of sensory inputs. In this type of representation, a nonlinear sensorimotor transformation of an object is represented in a population of units having the response properties of neurones that are observed in the parietal cortex. A simulated lesion in a basis-function representation was found to replicate three of the most important aspects of hemineglect: (i) the model behaved like parietal patients in line-cancellation and line-bisection experiments; (ii) the deficit affected multiple frames of reference; and (iii) the deficit could be object-centred. These results support the basis-function hypothesis for spatial representations and provide a testable computational theory of hemineglect at the level of single cells.
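A minimal sketch of the kind of basis-function unit this model proposes (the functional forms and parameters here are illustrative assumptions, not the paper's exact implementation): each unit multiplies a Gaussian tuning curve over retinal target position by a sigmoid of eye position, so the population jointly encodes both variables without committing to any single frame of reference.

```python
import numpy as np

def basis_unit(retinal_pos, eye_pos, pref_ret=0.0, pref_eye=0.0,
               sigma=10.0, slope=0.1):
    """One parietal-style basis-function unit: Gaussian retinal tuning
    multiplicatively gain-modulated by a sigmoid of eye position."""
    gaussian = np.exp(-(retinal_pos - pref_ret) ** 2 / (2 * sigma ** 2))
    sigmoid = 1.0 / (1.0 + np.exp(-slope * (eye_pos - pref_eye)))
    return gaussian * sigmoid

# Eye position scales the response (a gain field) without shifting
# the unit's preferred retinal location.
r_low = basis_unit(retinal_pos=0.0, eye_pos=-20.0)
r_high = basis_unit(retinal_pos=0.0, eye_pos=+20.0)
assert r_high > r_low  # same retinal stimulus, higher gain
```

Because the combination is multiplicative, a downstream linear readout of such a population can extract the target location in either retinal or head-centered coordinates, which is why no single frame is privileged.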

2.
The posterior parietal cortex has long been considered an 'association' area that combines information from different sensory modalities to form a cognitive representation of space. However, until recently little has been known about the neural mechanisms responsible for this important cognitive process. Recent experiments from the author's laboratory indicate that visual, somatosensory, auditory and vestibular signals are combined in areas LIP and 7a of the posterior parietal cortex. The integration of these signals can represent the locations of stimuli with respect to the observer and within the environment. Area MSTd combines visual motion signals, similar to those generated during an observer's movement through the environment, with eye-movement and vestibular signals. This integration appears to play a role in specifying the path on which the observer is moving. All three cortical areas combine different modalities into common spatial frames by using a gain-field mechanism. The spatial representations in areas LIP and 7a appear to be important for specifying the locations of targets for actions such as eye movements or reaching; the spatial representation within area MSTd appears to be important for navigation and the perceptual stability of motion signals.

3.
Recent studies of visually guided reaching in monkeys support the hypothesis that the visuomotor transformations underlying arm movements to spatial targets involve a parallel mechanism that simultaneously engages functionally related frontal and parietal areas linked by reciprocal cortico-cortical connections. The neurons in these areas possess similar combinations of response properties. The multimodal combinatorial properties of these neurons and the gradient architecture of the parieto-frontal network emerge as a potential substrate to link the different sensory and motor signals that arise during reaching behavior into common hybrid reference frames. This convergent combinatorial process is evident at early stages of visual information processing in the occipito-parietal cortex, suggesting the existence of re-entrant motor influences on cortical areas once believed to have only visual functions.

4.
Bernier PM, Grafton ST. Neuron 2010, 68(4): 776-788
Current models of sensorimotor transformations emphasize the dominant role of gaze-centered representations for reach planning in the posterior parietal cortex (PPC). Here we exploit fMRI repetition suppression to test whether the sensory modality of a target determines the reference frame used to define the motor goal in the PPC and premotor cortex. We show that when targets are defined visually, the anterior precuneus selectively encodes the motor goal in gaze-centered coordinates, whereas the parieto-occipital junction, Brodmann Area 5 (BA 5), and PMd use a mixed gaze- and body-centered representation. In contrast, when targets are defined by unseen proprioceptive cues, activity in these areas switches to represent the motor goal predominantly in body-centered coordinates. These results support computational models arguing for flexibility in reference frames for action according to sensory context. Critically, they provide neuroanatomical evidence that flexibility is achieved by exploiting a multiplicity of reference frames that can be expressed within individual areas.

5.
Whenever we shift our gaze, any location information encoded in the retinocentric reference frame that is predominant in the visual system is obliterated. How is spatial memory retained across gaze changes? Two different explanations have been proposed: Retinocentric information may be transformed into a gaze-invariant representation through a mechanism consistent with gain fields observed in parietal cortex, or retinocentric information may be updated in anticipation of the shift expected with every gaze change, a proposal consistent with neural observations in LIP. The explanations were considered incompatible with each other, because retinocentric update is observed before the gaze shift has terminated. Here, we show that a neural dynamic mechanism for coordinate transformation can also account for retinocentric updating. Our model postulates an extended mechanism of reference frame transformation that is based on bidirectional mapping between a retinocentric and a body-centered representation and that enables transforming multiple object locations in parallel. The dynamic coupling between the two reference frames generates a shift of the retinocentric representation for every gaze change. We account for the predictive nature of the observed remapping activity by using the same kind of neural mechanism to generate an internal representation of gaze direction that is predictively updated based on corollary discharge signals. We provide evidence for the model by accounting for a series of behavioral and neural experimental observations.
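A toy 1D sketch (illustrative bookkeeping, not the authors' dynamic field model) of what the bidirectional mapping must accomplish: a body-centered location stays fixed while every stored retinocentric location is shifted by the opposite of the gaze change, and driving that shift from a corollary-discharge copy of the planned saccade lets the retinocentric map update before the eye actually lands.

```python
def to_body(retinal, gaze):
    """Body-centered location = retinocentric location + gaze direction (1D)."""
    return retinal + gaze

def remap(retinal_locs, gaze_shift):
    """Predictively shift every stored retinocentric location by the
    corollary-discharge estimate of the upcoming gaze shift."""
    return [r - gaze_shift for r in retinal_locs]

gaze = 0.0
retinal_memory = [10.0, -5.0]           # two remembered targets (retinal deg)
body_locs = [to_body(r, gaze) for r in retinal_memory]

planned_shift = 8.0                     # corollary discharge of the saccade
retinal_memory = remap(retinal_memory, planned_shift)  # before the eye moves
gaze += planned_shift                   # the eye then completes the shift

# The body-centered locations are preserved across the gaze change.
assert [to_body(r, gaze) for r in retinal_memory] == body_locs
```

The model's point is that this update need not be a separate mechanism: the same coupled transformation that maps between the two frames produces the shift whenever the gaze estimate changes.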

6.
LR Bremner, RA Andersen. Neuron 2012, 75(2): 342-351
Competing models of sensorimotor computation predict different topological constraints in the brain. Some models propose population coding of particular reference frames in anatomically distinct nodes, whereas others require no such dedicated subpopulations and instead predict that regions will simultaneously code in multiple, intermediate, reference frames. Current empirical evidence is conflicting, partly due to difficulties involved in identifying underlying reference frames. Here, we independently varied the locations of hand, gaze, and target over many positions while recording from the dorsal aspect of parietal area 5. We find that the target is represented in a predominantly hand-centered reference frame here, contrasting with the relative code seen in dorsal premotor cortex and the mostly gaze-centered reference frame in the parietal reach region. This supports the hypothesis that different nodes of the sensorimotor circuit contain distinct and systematic representations, and this constrains the types of computational model that are neurobiologically relevant.

7.
Vision not only provides us with detailed knowledge of the world beyond our bodies, but it also guides our actions with respect to objects and events in that world. The computations required for vision-for-perception are quite different from those required for vision-for-action. The former uses relational metrics and scene-based frames of reference while the latter uses absolute metrics and effector-based frames of reference. These competing demands on vision have shaped the organization of the visual pathways in the primate brain, particularly within the visual areas of the cerebral cortex. The ventral ‘perceptual’ stream, projecting from early visual areas to inferior temporal cortex, helps to construct the rich and detailed visual representations of the world that allow us to identify objects and events, attach meaning and significance to them and establish their causal relations. By contrast, the dorsal ‘action’ stream, projecting from early visual areas to the posterior parietal cortex, plays a critical role in the real-time control of action, transforming information about the location and disposition of goal objects into the coordinate frames of the effectors being used to perform the action. The idea of two visual systems in a single brain might seem initially counterintuitive. Our visual experience of the world is so compelling that it is hard to believe that some other quite independent visual signal—one that we are unaware of—is guiding our movements. But evidence from a broad range of studies from neuropsychology to neuroimaging has shown that the visual signals that give us our experience of objects and events in the world are not the same ones that control our actions.

8.
9.
Topographic maps are a fundamental and ubiquitous feature of the sensory and motor regions of the brain. There is less evidence for the existence of conventional topographic maps in associational areas of the brain such as the prefrontal cortex and parietal cortex. The existence of topographically arranged anatomical projections is far more widespread and occurs in associational regions of the brain as well as sensory and motor regions: this points to a more widespread existence of topographically organised maps within associational cortex than currently recognised. Indeed, there is increasing evidence that abstract topographic representations may also occur in these regions. For example, a topographic mnemonic map of visual space has been described in the dorsolateral prefrontal cortex and topographically arranged visuospatial attentional signals have been described in parietal association cortex. This article explores how abstract representations might be extracted from sensory topographic representations and subsequently code abstract information. Finally a simple model is presented that shows how abstract topographic representations could be integrated with other information within the brain to solve problems or form abstract associations. The model uses correlative firing to detect associations between different types of stimuli. It is flexible because it can produce correlations between information represented in a topographic or non-topographic coordinate system. It is proposed that a similar process could be used in high-level cognitive operations such as learning and reasoning.
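A minimal sketch, under assumed details, of the correlative-firing idea: a Hebbian weight matrix accumulates the outer product of two co-active population vectors, so later presentation of one pattern retrieves its associate regardless of whether either population is topographically organized.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two populations: A could be a topographic map, B an abstract
# (non-topographic) code; the mechanism is indifferent to which is which.
pattern_a = rng.random(20)
pattern_b = rng.random(30)

# Hebbian association: weights grow in proportion to correlated
# (co-occurring) firing across the two populations.
weights = np.outer(pattern_b, pattern_a)

# Presenting pattern_a alone now recalls a vector proportional to pattern_b.
recall = weights @ pattern_a
cos = recall @ pattern_b / (np.linalg.norm(recall) * np.linalg.norm(pattern_b))
assert cos > 0.999  # recall is collinear with the stored associate
```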

10.
11.
Prior results on the spatial integration of layouts within a room differed regarding the reference frame that participants used for integration. We asked whether these differences also occur when integrating 2D screen views and, if so, what the reasons for this might be. In four experiments we showed that integrating reference frames varied as a function of task familiarity combined with processing time, cues for spatial transformation, and information about action requirements, paralleling results in the 3D case. Participants saw part of an object layout in screen 1, another part in screen 2, and responded to the integrated layout in screen 3. Layout presentations between two screens coincided or differed in orientation. Aligning misaligned screens for integration is known to increase errors/latencies. The error/latency pattern was thus indicative of the reference frame used for integration. We showed that task familiarity combined with self-paced learning, visual updating, and knowing from where to act prioritized integration within the reference frame of the initial presentation (later updated) or within the reference frame from which participants acted, respectively. Participants also heavily relied on layout-intrinsic frames. The results show how humans flexibly adjust their integration strategy to a wide variety of conditions.

12.
BACKGROUND: Neurons in primary auditory cortex are known to be sensitive to the locations of sounds in space, but the reference frame for this spatial sensitivity has not been investigated. Conventional wisdom holds that the auditory and visual pathways employ different reference frames, with the auditory pathway using a head-centered reference frame and the visual pathway using an eye-centered reference frame. Reconciling these discrepant reference frames is therefore a critical component of multisensory integration. RESULTS: We tested the reference frame of neurons in the auditory cortex of primates trained to fixate visual stimuli at different orbital positions. We found that eye position altered the activity of about one third of the neurons in this region (35 of 113, or 31%). Eye position affected not only the responses to sounds (26 of 113, or 23%), but also the spontaneous activity (14 of 113, or 12%). Such effects were also evident when monkeys moved their eyes freely in the dark. Eye position and sound location interacted to produce a representation for auditory space that was neither head- nor eye-centered in reference frame. CONCLUSIONS: Taken together with emerging results in both visual and other auditory areas, these findings suggest that neurons whose responses reflect complex interactions between stimulus position and eye position set the stage for the eventual convergence of auditory and visual information.
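A simplified sketch of the diagnostic logic behind such recordings (a simulated neuron, not the authors' data or analysis): measure spatial tuning at several fixation positions and ask in which coordinate the curves align. A purely eye-centered cell's tuning shifts with the eyes in head coordinates but aligns when replotted against sound location relative to gaze; the study above found cells intermediate between these two extremes.

```python
import numpy as np

def eye_centered_response(sound_head, eye_pos, pref=0.0, sigma=15.0):
    """Hypothetical neuron tuned to sound location relative to gaze (deg)."""
    sound_eye = sound_head - eye_pos
    return np.exp(-(sound_eye - pref) ** 2 / (2 * sigma ** 2))

locs = np.linspace(-40, 40, 81)  # head-centered sound locations (deg)
curve_left = eye_centered_response(locs, eye_pos=-10.0)
curve_right = eye_centered_response(locs, eye_pos=+10.0)

# In head-centered coordinates the tuning peak shifts with the eyes...
shift_head = locs[np.argmax(curve_right)] - locs[np.argmax(curve_left)]
# ...but replotted in eye-centered coordinates the curves align.
shift_eye = (locs[np.argmax(curve_right)] - 10.0) - (locs[np.argmax(curve_left)] + 10.0)

assert shift_head == 20.0 and shift_eye == 0.0
```

A head-centered cell would show the mirror-image pattern; a partial shift (between 0 and the full gaze displacement) indicates the intermediate frame reported here.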

13.
Reaches to sounds encoded in an eye-centered reference frame
Cohen YE, Andersen RA. Neuron 2000, 27(3): 647-652
A recent hypothesis suggests that neurons in the lateral intraparietal area (LIP) and the parietal reach region (PRR) encode movement plans in a common eye-centered reference frame. To test this hypothesis further, we examined how PRR neurons encode reach plans to auditory stimuli. We found that PRR activity was affected by eye and initial hand position. Population analyses, however, indicated that PRR neurons were affected more strongly by eye position than by initial hand position. These eye position effects were appropriate to maintain coding in eye coordinates. Indeed, a significant population of PRR neurons encoded reaches to auditory stimuli in an eye-centered reference frame. These results extend the hypothesis that, regardless of the modality of the sensory input or the eventual action, PRR and LIP neurons represent movement plans in a common, eye-centered representation.

14.
To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals’ causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via two distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.

A combination of psychophysics, computational modelling and fMRI reveals novel insights into how the brain controls the binding of information across the senses, such as the voice and lip movements of a speaker.
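Under the common-source branch of Bayesian causal inference (forced fusion), the optimal audiovisual location estimate weights each cue by its reliability, i.e. its inverse variance. A minimal sketch of this computation with made-up numbers:

```python
def fuse(mu_v, sigma_v, mu_a, sigma_a):
    """Reliability-weighted combination of visual and auditory location cues.
    Returns the fused mean and its (reduced) variance."""
    w_v = sigma_v ** -2 / (sigma_v ** -2 + sigma_a ** -2)
    mu = w_v * mu_v + (1 - w_v) * mu_a
    var = 1.0 / (sigma_v ** -2 + sigma_a ** -2)
    return mu, var

# Vision is usually the more reliable spatial cue, so the fused estimate
# sits closer to the visual location; attending to vision (modelled here
# as a smaller sigma_v) pulls it closer still.
mu, var = fuse(mu_v=0.0, sigma_v=1.0, mu_a=10.0, sigma_a=3.0)
mu_attn, _ = fuse(mu_v=0.0, sigma_v=0.5, mu_a=10.0, sigma_a=3.0)
assert mu < 5.0 and mu_attn < mu
```

The full causal-inference model additionally weighs this fused estimate against the segregated (independent-sources) estimates by the posterior probability of a common source, which is how large spatial disparities reduce binding.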

15.
A hallmark of higher brain functions is the ability to contemplate the world rather than to respond reflexively to it. To do so, the nervous system makes use of a modular architecture in which sensory representations are dissociated from areas that control actions. This flexibility however necessitates a recoding scheme that would put sensory information to use in the control of behavior. Sensory recoding faces two important challenges. First, recoding must take into account the inherent variability of sensory responses. Second, it must be flexible enough to satisfy the requirements of different perceptual goals. Recent progress in theory, psychophysics, and neurophysiology indicate that cortical circuitry might meet these challenges by evaluating sensory signals probabilistically.

16.
BACKGROUND: Recent neuroimaging studies have found that several areas of the human brain, including parietal regions, can respond multimodally. But given single-cell evidence that responses in primate parietal cortex can be motor-related, some of the human multimodal activations might reflect convergent activation of potentially motor-related areas, rather than multimodal representations of space independent of motor factors. Here we crossed sensory stimulation of different modalities (vision or touch, in left or right hemifield) with spatially directed responses to such stimulation by different effector-systems (saccadic or manual). RESULTS: The fMRI results revealed representations of contralateral space in both the posterior part of the superior parietal gyrus and the anterior intraparietal sulcus that activated independently of both sensory modality and motor response. Multimodal saccade-related or manual-related activations were found, by contrast, in different regions of parietal cortex. CONCLUSIONS: Whereas some parietal regions have specific motor functions, others are engaged during the execution of movements to the contralateral hemifield irrespective of both input modality and the type of motor effector.

17.
G Blohm. PLoS ONE 2012, 7(7): e41241
We effortlessly perform reach movements to objects in different directions and depths. However, how networks of cortical neurons compute reach depth from binocular visual inputs remains largely unknown. To bridge the gap between behavior and neurophysiology, we trained a feed-forward artificial neural network to uncover potential mechanisms that might underlie the 3D transformation of reach depth. Our physiologically-inspired 4-layer network receives distributed 3D visual inputs (1st layer) along with eye, head and vergence signals. The desired motor plan was coded in a population (3rd layer) that we read out (4th layer) using an optimal linear estimator. After training, our network was able to reproduce all known single-unit recording evidence on depth coding in the parietal cortex. Network analyses predict the presence of eye/head and vergence changes of depth tuning, pointing towards a gain-modulation mechanism of depth transformation. In addition, reach depth was computed directly from eye-centered (relative) visual distances, without explicit absolute depth coding. We suggest that these effects should be observable in parietal and pre-motor areas.
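The geometry that makes vergence a depth signal is simple (a textbook sketch with assumed numbers, separate from the network itself): for a fixated target on the midline, the two lines of sight converge at an angle that shrinks with distance, so fixation depth follows from the vergence angle and the interocular distance.

```python
import math

def fixation_depth(vergence_deg, interocular=0.065):
    """Depth (m) of a midline fixation point, given the vergence angle (deg)
    and the interocular distance (m, ~6.5 cm assumed for an adult)."""
    half_angle = math.radians(vergence_deg) / 2.0
    return (interocular / 2.0) / math.tan(half_angle)

# Nearer targets require larger vergence angles.
assert fixation_depth(7.4) < fixation_depth(3.7) < fixation_depth(1.9)
```

The network's reported gain modulation by vergence is consistent with this relation: scaling eye-centered (relative) disparities by a vergence-dependent gain is one way to recover depth without ever representing absolute distance explicitly.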

18.
Biotic and abiotic factors have been proposed to explain patterns of reproductive character displacement, but which factor is most important to character displacement of acoustic signals is not clear. Male vocalizations of the frog Pseudacris feriarum are known to undergo reproductive character displacement in areas of sympatry with P. brimleyi and P. nigrita. Despite evidence for reinforcement as an important mechanism, local adaptation via sensory drive might explain this pattern because Pseudacris breed in different habitat types and mating signals are exposed to a variety of environments. We tested the sensory drive hypothesis by playing synthesized vocalizations representing the spectrum of variation in P. feriarum at 12 different study sites. If sensory drive has occurred, then vocalizations should transmit better in the site of origin or at ecologically similar sites. We found that variation in acoustic signals did not produce better transmission in particular sites, the effect of site was uniform, and acoustic signals often transmitted better in habitats external to their origin. Ecological variation among habitats did not explain signal degradation. Our playback experiments, ecological analyses, and comparisons of different habitat types provide no support for sensory drive as a process promoting reproductive character displacement in this system. Reinforcement is the more likely primary mechanism.

19.
While sensory neurons carry behaviorally relevant information in responses that often extend over hundreds of milliseconds, the key units of neural information likely consist of much shorter and temporally precise spike patterns. The mechanisms and temporal reference frames by which sensory networks partition responses into these shorter units of information remain unknown. One hypothesis holds that slow oscillations provide a network-intrinsic reference to temporally partitioned spike trains without exploiting the millisecond-precise alignment of spikes to sensory stimuli. We tested this hypothesis on neural responses recorded in visual and auditory cortices of macaque monkeys in response to natural stimuli. Comparing different schemes for response partitioning revealed that theta band oscillations provide a temporal reference that permits extracting significantly more information than can be obtained from spike counts, and sometimes almost as much information as obtained by partitioning spike trains using precisely stimulus-locked time bins. We further tested the robustness of these partitioning schemes to temporal uncertainty in the decoding process and to noise in the sensory input. This revealed that partitioning using an oscillatory reference provides greater robustness than partitioning using precisely stimulus-locked time bins. Overall, these results provide a computational proof of concept for the hypothesis that slow rhythmic network activity may serve as internal reference frame for information coding in sensory cortices and they foster the notion that slow oscillations serve as key elements for the computations underlying perception.
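A schematic sketch of the two partitioning schemes being compared (invented spike times and an assumed theta rhythm, not the recorded data): the same spike train is divided either into fixed bins locked to stimulus onset or into bins bounded by successive cycles of a slow oscillation, yielding one spike-count word per scheme that a decoder could then evaluate for stimulus information.

```python
import numpy as np

spikes = np.array([0.012, 0.055, 0.061, 0.140, 0.210, 0.330, 0.345])  # s

def stimulus_locked_words(spike_times, bin_width=0.1, t_end=0.4):
    """Spike counts in fixed bins aligned to stimulus onset (t = 0)."""
    edges = np.arange(0.0, t_end + bin_width, bin_width)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts

def phase_locked_words(spike_times, cycle_starts):
    """Spike counts per oscillation cycle, using theta troughs as bin edges."""
    counts, _ = np.histogram(spike_times, bins=cycle_starts)
    return counts

theta_troughs = np.array([0.0, 0.13, 0.26, 0.40])  # ~7.5 Hz rhythm (assumed)
word_stim = stimulus_locked_words(spikes)
word_theta = phase_locked_words(spikes, theta_troughs)

# Both schemes partition the identical spikes, just in different frames.
assert word_stim.sum() == word_theta.sum() == len(spikes)
```

The oscillatory scheme's advantage in the study above is that its reference travels with the network's own dynamics, so it tolerates temporal jitter between stimulus onset and the response better than externally locked bins do.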

20.
This article addresses the intersection between perceptual estimates of head motion based on purely vestibular and purely visual sensation, by considering how nonvisual (e.g. vestibular and proprioceptive) sensory signals for head and eye motion can be combined with visual signals available from a single landmark to generate a complete perception of self-motion. In order to do this, mathematical dimensions of sensory signals and perceptual parameterizations of self-motion are evaluated, and equations for the sensory-to-perceptual transition are derived. With constant velocity translation and vision of a single point, it is shown that visual sensation allows only for the externalization, to the frame of reference given by the landmark, of an inertial self-motion estimate from nonvisual signals. However, it is also shown that, with nonzero translational acceleration, use of simple visual signals provides a biologically plausible strategy for integration of inertial acceleration sensation, to recover translational velocity. A dimension argument proves similar results for horizontal flow of any number of discrete visible points. The results provide insight into the convergence of visual and vestibular sensory signals for self-motion and indicate perceptual algorithms by which primitive visual and vestibular signals may be integrated for self-motion perception.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号