Similar Articles
20 similar articles found (search time: 46 ms)
1.
Eye movements constitute one of the most basic means of interacting with our environment, allowing us to orient to, localize and scrutinize the variety of potentially interesting objects that surround us. In this review we discuss the role of the parietal cortex in the control of saccadic and smooth pursuit eye movements, whose purpose is to rapidly displace the line of gaze and to maintain a moving object on the central retina, respectively. From single-cell recording studies in the monkey we know that distinct sub-regions of the parietal lobe are implicated in these two kinds of movement. The middle temporal (MT) and medial superior temporal (MST) areas show neuronal activity related to moving visual stimuli and to ocular pursuit. The lateral intraparietal (LIP) area exhibits visual and saccadic neuronal responses. Electrophysiology, which is in essence a correlational method, cannot entirely resolve the question of the functional role of these areas: are they primarily involved in sensory processing, in motor processing, or in some intermediate function? Lesion approaches (reversible or permanent) in the monkey can provide important information in this respect. Lesions of MT or MST produce deficits in the perception of visual motion, which argues for a role in the sensory guidance of ocular pursuit rather than in directing motor commands to the eye muscles. Lesions of LIP do not produce specific visual impairments and cause only subtle saccadic deficits. However, recent results have shown severe deficits in spatial attention tasks. LIP could thus be implicated in the selection of relevant objects in the visual scene and provide a signal for directing the eyes toward these objects. Functional imaging studies in humans confirm the role of the parietal cortex in pursuit, saccadic, and attentional networks, and show a high degree of overlap with the monkey data. Parietal lobe lesions in humans also result in behavioral deficits very similar to those observed in the monkey. Altogether, these different sources of data consistently point to the involvement of the parietal cortex in the representation of space, at an intermediate stage between vision and action.

2.
Journal of Physiology, 2013, 107(6): 459-470
In the present paper, we focus on coding by cell assemblies in the prefrontal cortex (PFC) and discuss the diversity of this coding, which results in stable and dynamic representations and the processing of various kinds of information in this higher brain region. The key activity that reflects cell-assembly coding is the synchrony of the firing of multiple neurons when animals are performing cognitive and memory tasks. First, we introduce studies that have shown task-related synchrony of neuronal firing in the monkey PFC. These studies have reported fixed synchronous firing as well as several types of dynamic synchronous firing during working memory, long-term visual memory, and goal selection. The results of these studies indicate that cell assemblies in the PFC can contribute to both the stability and the dynamics of various types of information. Second, we refer to rat studies and introduce findings on cellular interactions that contribute to synchrony in working memory, learning-induced changes in synchrony in spatial tasks, and interactions of the PFC and hippocampus in dynamic synchrony. These studies have proposed neuronal mechanisms of cell-assembly coding in the PFC and its critical role in the learning of task demands in problematic situations. Based on the monkey and rat studies, we conclude that cell-assembly coding in the PFC is diverse and has various facets, which allow multipotentiality in this higher brain region. Finally, we discuss the problem of cell-assembly size, how diverse these sizes are in the PFC, and the technical challenges in investigating them. We introduce a unique spike-sorting method that can detect small and local cell assemblies consisting of closely neighboring neurons. We then describe findings from our own study showing that the monkey PFC has both small and large cell assemblies, which have different roles in information coding in the working brain.

3.
Ambiguous visual stimuli provide the brain with sensory information that contains conflicting evidence for multiple mutually exclusive interpretations. Two distinct aspects of the phenomenological experience associated with viewing ambiguous visual stimuli are the apparent stability of perception whenever one perceptual interpretation is dominant, and the instability of perception that causes perceptual dominance to alternate between perceptual interpretations upon extended viewing. This review summarizes several ways in which contextual information can help the brain resolve visual ambiguities and construct temporarily stable perceptual experiences. Temporal context through prior stimulation or internal brain states brought about by feedback from higher cortical processing levels may alter the response characteristics of specific neurons involved in rivalry resolution. Furthermore, spatial or crossmodal context may strengthen the neuronal representation of one of the possible perceptual interpretations and consequently bias the rivalry process towards it. We suggest that contextual influences on perceptual choices with ambiguous visual stimuli can be highly informative about the neuronal mechanisms of context-driven inference in the general processes of perceptual decision-making.

4.
A Johnston, Spatial Vision, 1986, 1(4): 319-331
Striate cortex topography derives from a stretching of retinal space along the optic axis. At the retina, relative distances are preserved in a mapping of retinal space onto a spherical surface in the environment. At the cortex, relative distances along visual meridians in the cortical map are preserved in a mapping of striate cortex onto an environmental conic surface whose base is in the plane of the eye. This eco-cortical relationship can be considered a reference frame through which spatial relationships at the cortex might provide information about the environment. The present analysis provides an explanation of changes in cortical magnification with visual eccentricity in the primate and a detailed three-dimensional model of striate topography for the macaque monkey. In man, a conic environmental surface is shown to be uniformly resolvable along meridians in the visual field. Finally, the implications of this analysis for the structural properties of the retino-striate pathway and for visual resolution are considered in relation to depth and distance perception.
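To make the eccentricity dependence discussed in this abstract concrete, the sketch below evaluates a commonly used inverse-linear form of cortical magnification and the logarithmic eccentricity-to-cortical-distance map it implies. It is only an illustrative Python sketch under assumed parameter values (M0, E2); it is not Johnston's own model or parameter set.

```python
import numpy as np

# Illustrative sketch only: a commonly used inverse-linear form of linear cortical
# magnification, M(E) = M0 / (1 + E / E2), and its integral, which gives a
# logarithmic mapping from eccentricity to cortical distance. Parameter values
# are assumptions for demonstration, not those of Johnston (1986).
M0 = 15.0   # assumed foveal magnification (mm of cortex per degree)
E2 = 0.75   # assumed eccentricity at which magnification has halved (degrees)

def magnification(ecc_deg):
    """Linear cortical magnification (mm/deg) at eccentricity ecc_deg."""
    return M0 / (1.0 + ecc_deg / E2)

def cortical_distance(ecc_deg):
    """Cortical distance (mm) from the foveal representation along a meridian,
    obtained by integrating M(E); the closed form is logarithmic."""
    return M0 * E2 * np.log(1.0 + ecc_deg / E2)

for ecc in (0.5, 2.0, 10.0, 40.0):
    print(f"E = {ecc:5.1f} deg   M = {magnification(ecc):5.2f} mm/deg   "
          f"cortical distance = {cortical_distance(ecc):6.2f} mm")
```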

5.
Vision research has the potential to reveal fundamental mechanisms underlying sensory experience. Causal experimental approaches, such as electrical microstimulation, provide a unique opportunity to test the direct contributions of visual cortical neurons to perception and behaviour. But in spite of their importance, causal methods constitute a minority of the experiments used to investigate the visual cortex to date. We reconsider the function and organization of visual cortex according to results obtained from stimulation techniques, with a special emphasis on electrical stimulation of small groups of cells in awake subjects who can report their visual experience. We compare findings from humans and monkeys, striate and extrastriate cortex, and superficial versus deep cortical layers, and identify a number of revealing gaps in the 'causal map' of visual cortex. Integrating results from different methods and species, we provide a critical overview of the ways in which causal approaches have been used to further our understanding of circuitry, plasticity and information integration in visual cortex. Electrical stimulation not only elucidates the contributions of different visual areas to perception, but also contributes to our understanding of neuronal mechanisms underlying memory, attention and decision-making.

6.
In the present review, we address the relationship between attention and visual stability. Even though the retinal image changes dramatically with each eye, head and body movement, we perceive the world as stable and are able to perform visually guided actions. However, visual stability is not as complete as introspection would lead us to believe. We attend to only a few items at a time and stability is maintained only for those items. There appear to be two distinct mechanisms underlying visual stability. The first is a passive mechanism: the visual system assumes the world to be stable, unless there is a clear discrepancy between the pre- and post-saccadic image of the region surrounding the saccade target. This is related to the pre-saccadic shift of attention, which allows for an accurate preview of the saccade target. The second is an active mechanism: information about attended objects is remapped within retinotopic maps to compensate for eye movements. The locus of attention itself, which is also characterized by localized retinotopic activity, is remapped as well. We conclude that visual attention is crucial in our perception of a stable world.

7.
Does the primary visual cortex mediate consciousness for higher-level stages of information processing by providing an outlet for mental imagery? Evidence based on neural electrical activity is inconclusive, as reflected in the "imagery debate" in cognitive science. Neural information and activity, however, also depend on regulated biophoton (optical) signaling. During encoding and retrieval of visual information, regulated electrical (redox) signals of neurons are converted into synchronized biophoton signals by bioluminescent radical processes. That is, visual information may be represented by regulated biophotons of mitochondrial networks in retinotopically organized cytochrome oxidase-rich neural networks within early visual areas. Therefore, we hypothesize that regulated biophotons can generate intrinsic optical representations in the primary visual cortex and then propagate variably degraded versions along cytochrome oxidase pathways during both perception and imagery. Testing this hypothesis requires establishing a methodology for measuring in vivo and/or in vitro increases in biophoton emission in the human brain during phosphene induction by transcranial magnetic stimulation, and comparing the decrease in phosphene thresholds during transcranial magnetic stimulation and imagery. Our hypothesis provides a molecular mechanism for the visual buffer and for imagery as the prevalent communication mode (through optical signaling) within the brain. If confirmed empirically, this hypothesis could resolve the imagery debate and the underlying issue of continuity between perception and abstract thought.

8.
Eye movements modulate visual receptive fields of V4 neurons
The receptive field, defined as the spatiotemporal selectivity of neurons to sensory stimuli, is central to our understanding of the neuronal mechanisms of perception. However, despite the fact that eye movements are critical during normal vision, the influence of eye movements on the structure of receptive fields has never been characterized. Here, we map the receptive fields of macaque area V4 neurons during saccadic eye movements and find that receptive fields are remarkably dynamic. Specifically, before the initiation of a saccadic eye movement, receptive fields shrink and shift towards the saccade target. These spatiotemporal dynamics may enhance information processing of relevant stimuli during the scanning of a visual scene, thereby assisting the selection of saccade targets and accelerating the analysis of the visual scene during free viewing.
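As a rough illustration of what such receptive-field mapping involves, the Python sketch below estimates a receptive-field map as the mean spike count per probe position on a grid. The probe grid, firing-rate profile and spike counts are simulated placeholders; this is not the authors' data or analysis code.

```python
import numpy as np

# Rough illustration only (not the authors' analysis code): a receptive-field map
# estimated as the mean spike count per probe position. All inputs are simulated.
rng = np.random.default_rng(0)
GRID = 16

def rf_map(probe_xy, spike_counts, grid=GRID):
    """Mean spike count at each probe position on a grid x grid map."""
    total = np.zeros((grid, grid))
    n_probes = np.zeros((grid, grid))
    for (x, y), s in zip(probe_xy, spike_counts):
        total[y, x] += s
        n_probes[y, x] += 1
    return total / np.maximum(n_probes, 1)

# Simulated probe positions and Poisson responses for one (e.g. pre-saccadic) epoch.
probes = rng.integers(0, GRID, size=(2000, 2))
rates = 20.0 * np.exp(-np.sum((probes - np.array([10, 10])) ** 2, axis=1) / 8.0) + 2.0
counts = rng.poisson(rates)

epoch_map = rf_map(probes, counts)
peak_y, peak_x = np.unravel_index(epoch_map.argmax(), epoch_map.shape)
print("estimated RF peak (x, y):", (peak_x, peak_y))
# Repeating the estimate separately for pre- vs. post-saccadic epochs would reveal
# shifts and shrinkage of the kind reported in the abstract.
```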

9.
Retinal activity is the first stage of visual perception. Retinal sampling is non-uniform and not continuous, yet visual experience is not characterized by holes and discontinuities in the world. How does the brain achieve this perceptual completion? Fifty years ago, it was suggested that visual perception involves a two-stage process of (i) edge detection followed by (ii) neural filling-in of surface properties. We examine whether this general hypothesis can account for the specific example of perceptual completion of a small target surrounded by dynamic dots (an 'artificial scotoma'), a phenomenon argued to provide insight into the mechanisms responsible for perception. We degrade the target's borders using first blur and then depth continuity, and find that border degradation does not influence time to target disappearance. This indicates that important information for the continuity of target perception is conveyed at a coarse spatial scale. We suggest that target disappearance could result from adaptation that is not specific to borders, and question the need to hypothesize an active filling-in process to explain this phenomenon.

10.
Like most sensory modalities, the visual system must deal with very fast changes in the environment. Instead of processing all sensory stimuli, the brain is able to construct a perceptual experience by combining selected sensory input with ongoing internal activity. Thus, the study of visual perception needs to be approached by examining not only the physical properties of stimuli, but also the brain's ongoing dynamical states onto which these perturbations are imposed. At least three different models account for this internal dynamics. One model is based on cardinal cells, in which the activity of a few cells by itself constitutes the neuronal correlate of perception, while a second model is based on population coding, which states that the neuronal correlate of perception requires distributed activity throughout many areas of the brain. A third proposition, known as the temporal correlation hypothesis, states that the distributed neuronal populations that correlate with perception are also defined by synchronization of their activity on a millisecond time scale. This would serve to encode contextual information by defining relations between the features of visual objects. If temporal properties of neural activity are important for establishing the neural mechanisms of perception, then the study of appropriately dynamic stimuli should be instrumental in determining how these systems operate. The use of natural stimuli and natural behaviors such as free viewing, which features fast changes of internal brain states as revealed by motor markers, is proposed as a new experimental paradigm for studying visual perception.
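To make the millisecond-scale synchrony invoked by the temporal correlation hypothesis concrete, the Python sketch below counts near-coincident spikes between two simulated trains within a +/- 5 ms window. The spike trains, rates and window size are illustrative assumptions, not data or methods from the studies reviewed.

```python
import numpy as np

# Toy illustration of millisecond-scale synchrony: count spikes in one train that
# have a partner in a second train within +/- 5 ms. All spike times are simulated.
rng = np.random.default_rng(1)

def coincidences(spikes_a, spikes_b, window_ms=5.0):
    """Number of spikes in train A with at least one spike in train B within +/- window_ms."""
    spikes_b = np.sort(spikes_b)
    idx = np.searchsorted(spikes_b, spikes_a)
    below = np.abs(spikes_a - spikes_b[np.clip(idx - 1, 0, len(spikes_b) - 1)])
    above = np.abs(spikes_b[np.clip(idx, 0, len(spikes_b) - 1)] - spikes_a)
    return int(np.sum(np.minimum(below, above) <= window_ms))

# Two 10-second trains (~20 spikes/s) sharing a set of near-synchronous events (times in ms).
shared = rng.uniform(0, 10_000, 50)
train_a = np.sort(np.concatenate([rng.uniform(0, 10_000, 150), shared]))
train_b = np.sort(np.concatenate([rng.uniform(0, 10_000, 150),
                                  shared + rng.normal(0.0, 1.0, 50)]))

print("coincidences within 5 ms:", coincidences(train_a, train_b))
```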

11.
Ganglion cell axon pathfinding in the retina and optic nerve
The eye is a highly specialized structure that gathers and converts light information into neuronal signals. These signals are relayed along axons of retinal ganglion cells (RGCs) to visual centers in the brain for processing. In this review, we discuss the pathfinding tasks RGC axons face during development and the molecular mechanisms known to be involved. The data at hand support the presence of multiple axon guidance mechanisms concentrically organized around the optic nerve head, each of which appears to involve both growth-promoting and growth-inhibitory guidance molecules. Together, these strategies ensure proper optic nerve formation and establish the anatomical pathway for faithful transmission of information between the retina and the brain.

12.
Here, we put forward a redox molecular hypothesis about the natural biophysical substrate of visual perception and visual imagery. This hypothesis is based on the redox and bioluminescent processes of neuronal cells in retinotopically organized cytochrome oxidase-rich visual areas. Our hypothesis is in line with the functional roles of reactive oxygen and nitrogen species in living cells, which are not part of a haphazard process but rather constitute a tightly regulated mechanism used in signaling pathways. We point out that there is a direct relationship between neuronal activity and the biophoton emission process in the brain. Electrical and biochemical processes in the brain represent sensory information from the external world. During encoding or retrieval of information, electrical signals of neurons can be converted into synchronized biophoton signals by bioluminescent radical and non-radical processes. Therefore, information in the brain appears not only as an electrical (chemical) signal but also as a regulated biophoton (weak optical) signal inside neurons.

13.
Cortical analysis of visual context
Bar M, Aminoff E, Neuron, 2003, 38(2): 347-358
Objects in our environment tend to be grouped in typical contexts. How does the human brain analyze such associations between visual objects and their specific context? We addressed this question in four functional neuroimaging experiments and revealed the cortical mechanisms that are uniquely activated when people recognize highly contextual objects (e.g., a traffic light). Our findings indicate that a region in the parahippocampal cortex and a region in the retrosplenial cortex together comprise a system that mediates both spatial and nonspatial contextual processing. Interestingly, each of these regions has been identified in the past with two functions: the processing of spatial information and episodic memory. Attributing contextual analysis to these two areas instead provides a framework for bridging these previous reports.

14.
Neurons in posterior parietal cortex of the awake, trained monkey respond to passive visual and/or somatosensory stimuli. In general, the receptive fields of these cells are large and nonspecific. When these neurons are studied during visually guided hand movements and eye movements, most of their activity can be accounted for by passive sensory stimulation. However, for some visual cells, the response to a stimulus is enhanced when it is to be the target for a saccadic eye movement. This enhancement is selective for eye movements into the visual receptive field since it does not occur with eye movements to other parts of the visual field. Cells that discharge in association with a visual fixation task have foveal receptive fields and respond to the spots of light used as fixation targets. Cells discharging selectively in association with different directions of tracking eye movements have directionally selective responses to moving visual stimuli. Every cell in our sample discharging in association with movement could be driven by passive sensory stimuli. We conclude that the activity of neurons in posterior parietal cortex is dependent on and indicative of external stimuli but not predictive of movement.

15.
Spatial updating in human parietal cortex
Merriam EP, Genovese CR, Colby CL, Neuron, 2003, 39(2): 361-373
Single neurons in monkey parietal cortex update visual information in conjunction with eye movements. This remapping of stimulus representations is thought to contribute to spatial constancy. We hypothesized that a similar process occurs in human parietal cortex and that we could visualize it with functional MRI. We scanned subjects during a task that involved remapping of visual signals across hemifields. We observed an initial response in the hemisphere contralateral to the visual stimulus, followed by a remapped response in the hemisphere ipsilateral to the stimulus. We ruled out the possibility that this remapped response resulted from either eye movements or visual stimuli alone. Our results demonstrate that updating of visual information occurs in human parietal cortex.

16.
We performed a systematic study to test whether neurons in area TE (the anterior part of the inferotemporal cortex) of the rhesus monkey, regarded as the last stage of the ventral visual pathway, can be modulated by auditory stimuli. Two fixating rhesus monkeys were presented with visual, auditory or combined audiovisual stimuli while neuronal responses were recorded. We found that visually sensitive neurons are also modulated by audiovisual stimuli; this modulation is manifested as a change in response rate. Our results also showed that these visual neurons were responsive to auditory stimuli alone. Therefore, the concept that the inferotemporal cortex is unimodal in information processing should be re-evaluated.

17.
Our ability to interact with the environment hinges on creating a stable visual world despite the continuous changes in retinal input. To achieve visual stability, the brain must distinguish retinal image shifts caused by eye movements from shifts due to movements of the visual scene. This process appears not to be flawless: during saccades, we often fail to detect whether visual objects remain stable or move, which is called saccadic suppression of displacement (SSD). How does the brain weigh the memorized information about the presaccadic scene against the actual visual feedback from the postsaccadic scene in the computations for visual stability? Using an SSD task, we test how participants localize the presaccadic position of the fixation target, the saccade target or a peripheral non-foveated target that was displaced parallel or orthogonal to the saccade direction during a horizontal saccade, and subsequently viewed for three different durations. Results showed different localization errors for the three targets, depending on the viewing time of the postsaccadic stimulus and its spatial separation from the presaccadic location. We modeled the data with a Bayesian causal inference mechanism in which, at the trial level, an optimal mixture of two possible strategies (integration versus separation of the presaccadic memory and the postsaccadic sensory signals) is applied. Fits of this model generally outperformed other plausible decision strategies for producing SSD. Our findings suggest that humans exploit a Bayesian inference process with two causal structures to mediate visual stability.
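The Python sketch below illustrates the general shape of such a causal-inference localization rule: an integration estimate and a separation estimate are mixed according to the posterior probability of a common cause. The noise levels, prior, prior range and the function name are illustrative assumptions; this is not the model actually fitted in the study.

```python
import numpy as np

def localize_presaccadic(pre_mem, post_vis, sigma_mem=3.0, sigma_vis=1.0,
                         p_common=0.7, prior_range=20.0):
    """Estimate the presaccadic target position by mixing the 'same object'
    (integrate) and 'different object' (separate) interpretations, weighted by
    the posterior probability of a common cause (Bayesian model averaging).
    All parameter values are illustrative assumptions."""
    # How well is the observed discrepancy explained by measurement noise alone?
    var_sum = sigma_mem ** 2 + sigma_vis ** 2
    like_common = np.exp(-(pre_mem - post_vis) ** 2 / (2 * var_sum)) / np.sqrt(2 * np.pi * var_sum)
    like_separate = 1.0 / prior_range      # discrepancy unconstrained if the causes differ
    p_c = like_common * p_common / (like_common * p_common + like_separate * (1 - p_common))

    # Integrated estimate: reliability-weighted combination of memory and postsaccadic vision.
    w_vis = (1 / sigma_vis ** 2) / (1 / sigma_vis ** 2 + 1 / sigma_mem ** 2)
    est_integrated = w_vis * post_vis + (1 - w_vis) * pre_mem
    # Separate estimate: the postsaccadic image says nothing about the presaccadic position.
    est_separate = pre_mem
    return p_c * est_integrated + (1 - p_c) * est_separate

# A small displacement is largely attributed to a common cause, so the report is drawn
# toward the postsaccadic location (the displacement goes unnoticed, as in SSD); a large
# displacement is attributed to object motion and the memory estimate dominates.
print(localize_presaccadic(pre_mem=0.0, post_vis=0.5))    # close to 0.4
print(localize_presaccadic(pre_mem=0.0, post_vis=10.0))   # close to 0.3
```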

18.
A prevailing theory proposes that the brain's two visual pathways, the ventral and dorsal, lead to visual processing and world representations for conscious perception that differ from those for action. Others have claimed that perception and action share much of their visual processing. But which of these two neural architectures is favored by evolution? Successful visual search is life-critical, and here we investigate the evolution and optimality of the neural mechanisms mediating perception and eye movement actions for visual search in natural images. We implement an approximation to the ideal Bayesian searcher with two separate processing streams, one controlling the eye movements and the other determining the perceptual search decisions. We virtually evolved the neural mechanisms of the searchers' two separate pathways, built from linear combinations of primary visual cortex (V1) receptive fields, by making the simulated individuals' probability of survival depend on their perceptual accuracy in finding targets in cluttered backgrounds. We find that for a variety of targets, backgrounds, and dependences of target detectability on retinal eccentricity, the mechanisms of the searchers' two processing streams converge to similar representations, showing that mismatches between the mechanisms for perception and eye movements lead to suboptimal search. Three exceptions, which resulted in partial or no convergence, were an organism for which the targets are equally detectable across the retina, an organism with sufficient time to foveate all possible target locations, and a strict two-pathway model with no interconnections and differential pre-filtering based on parvocellular and magnocellular lateral geniculate cell properties. Thus, similar neural mechanisms for perception and eye movement actions during search are optimal and should be expected from the effects of natural selection on an organism that has limited time to search for food that is not equally detectable across its retina and whose perception and action pathways are interconnected.
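A toy version of the posterior update at the heart of an ideal Bayesian searcher is sketched below in Python: noisy responses whose reliability (d') falls with retinal eccentricity update a posterior over candidate target locations on each fixation. The grid, the d' falloff and the greedy fixation-selection rule are simplifying assumptions and do not reproduce the study's implementation.

```python
import numpy as np

# Toy, assumption-laden sketch of the posterior update inside an ideal Bayesian
# searcher. The greedy "fixate the current MAP location" rule is a simplification;
# the full ideal searcher chooses fixations to maximize expected search accuracy.
rng = np.random.default_rng(2)

locations = np.stack(np.meshgrid(np.arange(5), np.arange(5)), axis=-1).reshape(-1, 2).astype(float)
true_target = 7
log_post = np.full(len(locations), -np.log(len(locations)))   # uniform prior over locations

def dprime(fixation):
    """Assumed inverse-linear falloff of target detectability with eccentricity."""
    ecc = np.linalg.norm(locations - fixation, axis=1)
    return 3.0 / (1.0 + 0.6 * ecc)

for step in range(4):
    fixation = locations[np.argmax(log_post)]          # simplified fixation-selection rule
    d = dprime(fixation)
    means = np.where(np.arange(len(locations)) == true_target, 0.5 * d, -0.5 * d)
    w = rng.normal(means, 1.0)                         # noisy template responses at every location
    log_post += d * w                                  # per-location log-likelihood ratio is d_i * w_i
    log_post -= np.logaddexp.reduce(log_post)          # renormalize the posterior
    print(f"fixation {step}: current MAP target location index = {int(np.argmax(log_post))}")
```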

19.
Little is known about mechanisms mediating a stable perception of the world during pursuit eye movements. Here, we used fMRI to determine to what extent human motion-responsive areas integrate planar retinal motion with nonretinal eye movement signals in order to discard self-induced planar retinal motion and to respond to objective ("real") motion. In contrast to other areas, V3A lacked responses to self-induced planar retinal motion but responded strongly to head-centered motion, even when retinally canceled by pursuit. This indicates a near-complete multimodal integration of visual with nonvisual planar motion signals in V3A. V3A could be mapped selectively and robustly in every single subject on this basis. V6 also reported head-centered planar motion, even when 3D flow was added to it, but was suppressed by retinal planar motion. These findings suggest a dominant contribution of human areas V3A and V6 to head-centered motion perception and to perceptual stability during eye movements.
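As a minimal, hypothetical illustration of the signal combination the abstract attributes to V3A, the Python sketch below adds an eye-velocity (pursuit) signal to retinal image motion to recover head-centered stimulus motion; the function and the numerical values are purely illustrative and are not taken from the study.

```python
import numpy as np

# Minimal illustration (not the study's analysis): head-centered stimulus motion
# can be recovered by adding an eye-velocity signal to retinal image motion.
def head_centered_velocity(retinal_velocity, eye_velocity):
    """Planar stimulus velocity in head-centered coordinates (deg/s)."""
    return np.asarray(retinal_velocity, dtype=float) + np.asarray(eye_velocity, dtype=float)

# Pursuit perfectly tracking a moving stimulus: retinal motion is nulled, yet the
# head-centered estimate still reports the objective motion.
print(head_centered_velocity(retinal_velocity=[0.0, 0.0], eye_velocity=[8.0, 0.0]))
# A stationary scene swept across the retina by pursuit: the retinal motion is
# self-induced and the head-centered estimate is correctly zero.
print(head_centered_velocity(retinal_velocity=[-8.0, 0.0], eye_velocity=[8.0, 0.0]))
```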

20.
Preparing a goal-directed movement often requires detailed analysis of our environment. When picking up an object, its orientation, size and relative distance are relevant parameters for preparing a successful grasp. It would therefore be beneficial if the motor system were able to influence early perception such that the information-processing needs of action control are met at the earliest possible stage. However, only a few studies have reported (indirect) evidence for action-induced improvements in visual perception. We therefore aimed to provide direct evidence for a feature-specific perceptual modulation during the planning phase of a grasping action. Human subjects were instructed to either grasp or point to a bar while simultaneously performing an orientation discrimination task. The bar could slightly change its orientation during grasping preparation. By analyzing discrimination response probabilities, we found increased perceptual sensitivity to orientation changes when subjects were instructed to grasp the bar rather than point to it. As a control, the same experiment was repeated using bar luminance changes, a feature that is not relevant for either grasping or pointing. Here, no differences in visual sensitivity between grasping and pointing were found. The present results constitute the first direct evidence for increased perceptual sensitivity to a visual feature that is relevant for a particular skeletomotor act during the movement preparation phase. We speculate that such action-induced perception improvements are controlled by neuronal feedback mechanisms from cortical motor planning areas to early visual cortex, similar to what was recently established for spatial perception improvements shortly before eye movements.
