Similar Literature
 20 similar documents found, search time 109 ms
1.
There is much evidence in primates' visual processing for distinct mechanisms involved in object recognition and encoding object position and motion, which have been identified with 'ventral' and 'dorsal' streams, respectively, of the extra-striate visual areas [1] [2] [3]. This distinction may yield insights into normal human perception, its development and pathology. Motion coherence sensitivity has been taken as a test of global processing in the dorsal stream [4] [5]. We have proposed an analogous 'form coherence' measure of global processing in the ventral stream [6]. In a functional magnetic resonance imaging (fMRI) experiment, we found that the cortical regions activated by form coherence did not overlap with those activated by motion coherence in the same individuals. Areas differentially activated by form coherence included regions in the middle occipital gyrus, the ventral occipital surface, the intraparietal sulcus, and the temporal lobe. Motion coherence activated areas consistent with those previously identified as V5 and V3a, the ventral occipital surface, the intraparietal sulcus, and temporal structures. Neither form nor motion coherence activated area V1 differentially. Form and motion foci in occipital, parietal, and temporal areas were nearby but showed almost no overlap. These results support the idea that form and motion coherence test distinct functional brain systems, but that these do not necessarily correspond to a gross anatomical separation of dorsal and ventral processing streams.
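The motion-coherence measure referred to above is typically implemented as a random-dot field in which only a fraction of the dots shares a common direction, so the global direction can only be recovered by pooling across dots; the 'form coherence' analogue replaces coherent motion with coherently oriented line segments. The Python sketch below illustrates the motion version; the dot count, coherence level, and step size are illustrative choices, not parameters from this study.

```python
import numpy as np

def motion_coherence_frame(dots, coherence, direction, step, rng):
    """Advance a random-dot field by one frame.

    A `coherence` fraction of dots moves in the signal `direction` (radians);
    the remainder is replotted at random positions, so only global pooling
    across dots reveals the common motion.
    """
    n = dots.shape[0]
    signal = rng.random(n) < coherence
    # Signal dots translate coherently.
    dots[signal, 0] += step * np.cos(direction)
    dots[signal, 1] += step * np.sin(direction)
    # Noise dots are replotted at random locations in the unit aperture.
    dots[~signal] = rng.random((np.count_nonzero(~signal), 2))
    # Wrap dots that leave the aperture.
    dots %= 1.0
    return dots

rng = np.random.default_rng(0)
dots = rng.random((200, 2))            # 200 dots in a unit square
for _ in range(60):                    # 60 frames at 10% coherence
    dots = motion_coherence_frame(dots, coherence=0.10,
                                  direction=np.pi / 2, step=0.01, rng=rng)
```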

2.
Conscious perception depends not only on sensory input, but also on attention [1, 2]. Recent studies in monkeys [3-6] and humans [7-12] suggest that influences of spatial attention on visual awareness may reflect top-down influences on excitability of visual cortex. Here we tested this specifically, by providing direct input into human visual cortex via cortical transcranial magnetic stimulation (TMS) to produce illusory visual percepts, called phosphenes. We found that a lower TMS intensity was needed to elicit a conscious phosphene when its apparent spatial location was attended, rather than unattended. Our results indicate that spatial attention can enhance visual-cortex excitability, and visual awareness, even when sensory signals from the eye via the thalamic pathway are bypassed.

3.
4.
Quantitative modeling of human brain activity can provide crucial insights about cortical representations [1, 2] and can form the basis for brain decoding devices [3-5]. Recent functional magnetic resonance imaging (fMRI) studies have modeled brain activity elicited by static visual patterns and have reconstructed these patterns from brain activity [6-8]. However, blood oxygen level-dependent (BOLD) signals measured via fMRI are very slow [9], so it has been difficult to model brain activity elicited by dynamic stimuli such as natural movies. Here we present a new motion-energy [10, 11] encoding model that largely overcomes this limitation. The model describes fast visual information and slow hemodynamics by separate components. We recorded BOLD signals in occipitotemporal visual cortex of human subjects who watched natural movies and fit the model separately to individual voxels. Visualization of the fit models reveals how early visual areas represent the information in movies. To demonstrate the power of our approach, we also constructed a Bayesian decoder [8] by combining estimated encoding models with a sampled natural movie prior. The decoder provides remarkable reconstructions of the viewed movies. These results demonstrate that dynamic brain activity measured under naturalistic conditions can be decoded using current fMRI technology.
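As a rough illustration of the two-component idea described above (fast visual features, slow hemodynamics), the sketch below builds a design matrix by convolving stimulus features with a canonical hemodynamic response and fits a single voxel with ridge regression. The HRF parameters, the 1 s sampling, and the ridge penalty are assumptions for illustration; this is not the authors' actual pipeline.

```python
import numpy as np
from scipy.stats import gamma

def hrf(t, peak=6.0, undershoot=16.0, ratio=6.0):
    """Canonical double-gamma hemodynamic response (illustrative parameters)."""
    return gamma.pdf(t, peak) - gamma.pdf(t, undershoot) / ratio

def fit_voxel(features, bold, tr=1.0, alpha=1.0):
    """Ridge-regress HRF-convolved stimulus features onto one voxel's BOLD signal.

    features: (time, n_features) motion-energy-like outputs sampled at the TR.
    bold:     (time,) BOLD time course for one voxel.
    """
    h = hrf(np.arange(0.0, 30.0, tr))
    # Separate components: fast visual features convolved with slow hemodynamics.
    design = np.column_stack([np.convolve(f, h)[: len(bold)] for f in features.T])
    # Closed-form ridge solution for the voxel's feature weights.
    w = np.linalg.solve(design.T @ design + alpha * np.eye(design.shape[1]),
                        design.T @ bold)
    return w
```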

5.
Visual perceptual learning (VPL) is defined as an improvement in visual performance after visual experience. VPL is often highly specific to a visual feature presented during training. Such specificity is observed as changes in behavioral tuning functions, with the greatest improvement centered on the trained feature, and was originally taken as evidence for changes in the early visual system associated with VPL. However, results of neurophysiological studies have been highly controversial concerning whether the plasticity underlying VPL occurs within the visual cortex. The controversy may be partly due to the lack of observations of neural tuning function changes across multiple visual areas in association with VPL. Here, using human subjects, we systematically compared behavioral tuning function changes after global motion detection training with decoded tuning function changes in 8 visual areas, obtained by pattern classification analysis of functional magnetic resonance imaging (fMRI) signals. We found that the behavioral tuning function changes were very highly correlated with the decoded tuning function changes only in V3A, an area known to be highly responsive to global motion in humans. We conclude that VPL of a global motion detection task involves plasticity in a specific visual cortical area.
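The key analysis described above reduces, per visual area, to a correlation between two profiles: the change in the behavioral tuning function and the change in the decoded tuning function. The sketch below illustrates that comparison, assuming hypothetical arrays behav_change and decoded_change; it is not the study's actual decoding pipeline.

```python
import numpy as np

def best_matching_area(behav_change, decoded_change):
    """Correlate behavioral and decoded tuning-function changes for each area.

    behav_change:   change in behavioral tuning, sampled at the tested directions.
    decoded_change: dict mapping area name -> change in decoded tuning (same sampling).
    Returns the area with the highest correlation and the full set of correlations.
    """
    r = {area: np.corrcoef(behav_change, d)[0, 1]
         for area, d in decoded_change.items()}
    return max(r, key=r.get), r
```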

6.
In Li and Atick's [1, 2] theory of efficient stereo coding, the two eyes' signals are transformed into uncorrelated binocular summation and difference signals, and gain control is applied to the summation and differencing channels to optimize their sensitivities. In natural vision, the optimal channel sensitivities vary from moment to moment, depending on the strengths of the summation and difference signals; these channels should therefore be separately adaptable, whereby a channel's sensitivity is reduced following overexposure to adaptation stimuli that selectively stimulate that channel. This predicts a remarkable effect of binocular adaptation on perceived direction of a dichoptic motion stimulus [3]. For this stimulus, the summation and difference signals move in opposite directions, so perceived motion direction (upward or downward) should depend on which of the two binocular channels is most strongly adapted, even if the adaptation stimuli are completely static. We confirmed this prediction: a single static dichoptic adaptation stimulus presented for less than 1 s can control perceived direction of a subsequently presented dichoptic motion stimulus. This is not predicted by any current model of motion perception and suggests that the visual cortex quickly adapts to the prevailing binocular image statistics to maximize information-coding efficiency.
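For concreteness, the summation and difference channels in Li and Atick's scheme are simple linear combinations of the two eyes' signals, and the adaptable gains can be sketched as sensitivities that fall when a channel has recently been driven strongly. The gain formula below is an illustrative stand-in, not the model's exact gain-control rule.

```python
import numpy as np

def binocular_channels(left, right):
    """Decorrelating transform into summation (S) and difference (D) channels."""
    s = (left + right) / np.sqrt(2.0)
    d = (left - right) / np.sqrt(2.0)
    return s, d

def adapted_gains(s_history, d_history, eps=1e-6):
    """Each channel's gain falls as its recent stimulation rises (illustrative form)."""
    g_s = 1.0 / (eps + np.std(s_history))
    g_d = 1.0 / (eps + np.std(d_history))
    return g_s, g_d

# For the dichoptic motion stimulus, the S and D signals drift in opposite
# directions, so perceived direction follows whichever channel carries the
# larger (gain-weighted) signal after adaptation, e.g.:
#   perceived = "up" if g_s * s_strength > g_d * d_strength else "down"
```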

7.
The extent to which areas in the visual cerebral cortex differ in their ability to support perceptions has been the subject of considerable speculation. Experiments examining the activity of individual neurons have suggested that activity in later stages of the visual cortex is more closely linked to perception than that in earlier stages [1-9]. In contrast, results from functional imaging, transcranial magnetic stimulation, and lesion studies have been interpreted as showing that earlier stages are more closely coupled to perception [10-15]. We examined whether neuronal activity in early and later stages differs in its ability to support detectable signals by measuring behavioral thresholds for detecting electrical microstimulation in different cortical areas in two monkeys. By training the animals to perform a two-alternative temporal forced-choice task, we obtained criterion-free thresholds from five visual areas: V1, V2, V3A, MT, and the inferotemporal cortex. Every site tested yielded a reliable threshold. Thresholds varied little within and between visual areas, rising gradually from early to later stages. We similarly found no systematic differences in the slopes of the psychometric detection functions from different areas. These results suggest that neuronal signals of similar magnitude evoked in any part of visual cortex can generate percepts.
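A criterion-free threshold from a two-alternative forced-choice task is usually obtained by fitting a psychometric function with a 50% guessing floor to the proportion of correct choices at each stimulation current. The sketch below does this with a Weibull function on hypothetical data; the currents, proportions, and starting values are made up for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_2afc(x, alpha, beta):
    """2AFC psychometric function: 50% guessing floor rising toward 100% correct."""
    return 0.5 + 0.5 * (1.0 - np.exp(-(x / alpha) ** beta))

# Hypothetical data: microstimulation currents (uA) and proportion correct.
currents = np.array([2, 4, 8, 16, 32, 64], dtype=float)
p_correct = np.array([0.52, 0.55, 0.68, 0.85, 0.96, 0.99])

(alpha, beta), _ = curve_fit(weibull_2afc, currents, p_correct, p0=[10.0, 2.0])
print(f"threshold (alpha) ~ {alpha:.1f} uA, slope (beta) ~ {beta:.2f}")
```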

8.
The visual cortex in primates is parcellated into cytoarchitectonically, physiologically, and connectionally distinct areas: the striate cortex (V1) and the extrastriate cortex, consisting of V2 and numerous higher association areas [1]. The innervation of distinct visual cortical areas by the thalamus is especially segregated in primates, such that the lateral geniculate (LG) nucleus specifically innervates striate cortex, whereas pulvinar projections are confined to extrastriate cortex [2-8]. The molecular bases for the parcellation of the visual cortex and thalamus, as well as the establishment of reciprocal connections between distinct compartments within these two structures, are largely unknown. Here, we show that prospective visual cortical areas and corresponding thalamic nuclei in the embryonic rhesus monkey (Macaca mulatta) can be defined by combinatorial expression of genes encoding Eph receptor tyrosine kinases and their ligands, the ephrins, prior to obvious cytoarchitectonic differentiation within the cortical plate and before the establishment of reciprocal connections between the cortical plate and thalamus. These results indicate that molecular patterns of presumptive visual compartments in both the cortex and thalamus can form independently of one another and suggest a role for EphA family members in both compartment formation and axon guidance within the visual thalamocortical system.

9.
The visual system has the remarkable ability to extract several types of meaningful global-motion signals, such as radial motion, translation motion, and rotation, for different visual functions and actions. In the monkey brain, different groups of cells in MST respond best to different types of global motion [1, 2], whereas in lower cortical areas, including MT, no such differential responses have been found. Here, we show that an area (or areas) lower than MST in the human brain [3] responds to different types of global motion. A series of human functional magnetic resonance imaging (fMRI) experiments, in which attention was controlled for, indicated that the center of radial motion activates the corresponding location in the V3A representation, whereas translation motion mainly activates a more peripheral representation of V3A. These results suggest that in the human brain, V3A is an area that differentially responds according to the type of global motion.

10.
As we move through our environment, the flow of deforming images on the retinae provides a rich source of information about the three-dimensional structure of the external world and how to navigate through it. Recent evidence from psychophysical [1] [2] [3] [4], electrophysiological [5] [6] [7] [8] [9] and imaging [10] [11] studies suggests that there are neurons in the primate visual system - in the medial superior temporal cortex - that are specialised to respond to this type of complex 'optic flow' motion. In principle, optic flow could be encoded by a small number of neural mechanisms tuned to 'cardinal directions', including radial and circular motion [12] [13]. There is little support for this idea at present, however, from either physiological [6] [7] or psychophysical [14] research. We have measured the sensitivity of human subjects for detection of motion and for discrimination of motion direction over a wide and densely sampled range of complex motions. Average sensitivity was higher for inward and outward radial movement and for both directions of rotation, consistent with the existence of detectors tuned to these four types of motion. Principal component analysis revealed two clear components, one for radial stimuli (outward and inward) and the other for circular stimuli (clockwise and counterclockwise). The results imply that the mechanisms that analyse optic flow in humans tend to be tuned to the cardinal axes of radial and rotational motion.
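The principal component analysis mentioned above can be pictured as decomposing a matrix of sensitivities (observers or sessions by sampled complex-motion types) into orthogonal components. The sketch below is a generic version of such an analysis, assuming a hypothetical sensitivity matrix; it simply shows where a radial and a circular component would appear.

```python
import numpy as np

def principal_components(sensitivity):
    """PCA of an observers-by-motion-type sensitivity matrix (illustrative).

    Rows: observers (or sessions); columns: sampled complex motions
    (e.g. expansion, contraction, CW and CCW rotation, spirals...).
    Returns the components and the variance each one explains.
    """
    centered = sensitivity - sensitivity.mean(axis=0, keepdims=True)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)
    # In the scenario described above, vt[0] and vt[1] would load mainly on
    # the radial and the circular motion types, respectively.
    return vt, explained
```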

11.
The right and left visual hemifields are represented in different cerebral hemispheres and are bound together by connections through the corpus callosum. Much has been learned about the functions of these connections from split-brain patients [1-4], but little is known about their contribution to conscious visual perception in healthy humans. We used diffusion tensor imaging and functional magnetic resonance imaging to investigate which callosal connections contribute to the subjective experience of a visual motion stimulus that requires interhemispheric integration. The "motion quartet" is an ambiguous version of apparent motion that leads to perceptions of either horizontal or vertical motion [5]. Interestingly, observers are more likely to perceive vertical than horizontal motion when the stimulus is presented centrally in the visual field [6]. This asymmetry has been attributed to the fact that, with central fixation, perception of horizontal motion requires integration across hemispheres whereas perception of vertical motion requires only intrahemispheric processing [7]. We show that the microstructure of individually tracked callosal segments connecting motion-sensitive areas of the human MT/V5 complex (hMT/V5+; [8]) can predict the conscious perception of observers. Neither the connections between the primary visual cortices (V1) nor other surrounding callosal regions exhibit a similar relationship.

12.
There is now good evidence that perception of motion is strongly suppressed during saccades (rapid shifts of gaze), presumably to blunt the disturbing sense of motion that saccades would otherwise elicit. Other aspects of vision, such as contrast detection of high-frequency or equiluminant gratings, are virtually unaffected by saccades [1] [2] [3] [4] [5]. This has led to the suggestion that saccades may selectively suppress the magnocellular pathway (which is strongly implicated in motion perception), leaving the parvocellular pathway unaffected [5] [6]. Here, we investigate the neural level at which perception of motion is suppressed. We used a simple technique in which an impression of motion is generated from only two frames, allowing precise control over the stimulus [7] [8]. One frame has a certain fixed contrast, whereas the contrast of the other (the test frame) is varied to determine the threshold for motion discrimination (that is, the lowest test-frame contrast level at which the direction of motion can be correctly guessed). Contrast thresholds of the test frame depended strongly and non-monotonically on the contrast of the fixed-contrast frame, with a minimum at medium contrast. To study the effect of saccadic suppression, we triggered the two-frame sequence by a voluntary saccade. Thresholds during saccades increased in a way that suggested that saccadic suppression precedes motion analysis: when the test frame was first in the motion sequence there was a general depression of sensitivity, whereas when it was second, the contrast response curve was shifted to a higher contrast range, sometimes even resulting in higher sensitivity than without a saccade. The dependence on presentation order suggests that saccadic suppression occurs at an early stage of visual processing, on the single frames themselves rather than on the combined motion signal. As motion detection itself is thought to occur at an early stage, saccadic suppression must take place at a very early stage.

13.
Seitz AR, Kim R, Shams L. Current Biology: CB. 2006, 16(14): 1422-1427
Numerous studies show that practice can result in performance improvements on low-level visual perceptual tasks [1-5]. However, such learning is characteristically difficult and slow, requiring many days of training [6-8]. Here, we show that a multisensory audiovisual training procedure facilitates visual learning and results in significantly faster learning than unisensory visual training. We trained one group of subjects with an audiovisual motion-detection task and a second group with a visual motion-detection task, and compared performance on trials containing only visual signals across ten days of training. Whereas observers in both groups showed improvements of visual sensitivity with training, subjects trained with multisensory stimuli showed significantly more learning both within and across training sessions. These benefits of multisensory training are particularly surprising given that the learning of visual motion stimuli is generally thought to be mediated by low-level visual brain areas [6, 9, 10]. Although crossmodal interactions are ubiquitous in human perceptual processing [11-13], the contribution of crossmodal information to perceptual learning has not been studied previously. Our results show that multisensory interactions can be exploited to yield more efficient learning of sensory information and suggest that multisensory training programs would be most effective for the acquisition of new skills.

14.
Although considerable effort has been devoted to investigating how birds migrate over large distances, surprisingly little is known about how they tackle so successfully the moment-to-moment challenges of rapid flight through cluttered environments [1]. It has been suggested that birds detect and avoid obstacles [2] and control landing maneuvers [3-5] by using cues derived from the image motion that is generated in the eyes during flight. Here we investigate the ability of budgerigars to fly through narrow passages in a collision-free manner, by filming their trajectories during flight in a corridor where the walls are decorated with various visual patterns. The results demonstrate, unequivocally and for the first time, that birds negotiate narrow gaps safely by balancing the speeds of image motion that are experienced by the two eyes and that the speed of flight is regulated by monitoring the speed of image motion that is experienced by the two eyes. These findings have close parallels with those previously reported for flying insects [6-13], suggesting that some principles of visual guidance may be shared by all diurnal, flying animals.
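The gap-negotiation and speed-control strategies described above can be written as a simple control law: turn away from the eye seeing faster image motion, and adjust flight speed to hold the summed image motion near a set point. The sketch below is a minimal illustration of that idea; the gains and set point are assumptions, not values measured in the budgerigar experiments.

```python
def steering_and_speed(omega_left, omega_right, speed,
                       k_turn=0.5, k_speed=0.1, omega_set=2.0):
    """Optic-flow balancing control law (illustrative gains and set point).

    omega_left / omega_right: image angular velocities (rad/s) seen by each eye.
    Turn away from the faster-moving wall; slow down when the total image
    motion exceeds the set point, speed up when it falls below it.
    """
    turn_rate = k_turn * (omega_right - omega_left)        # > 0 means turn left
    new_speed = speed + k_speed * (omega_set - (omega_left + omega_right))
    return turn_rate, max(new_speed, 0.0)

# Example: the right wall is closer, so its image moves faster; the bird
# turns left and slows slightly.
turn, v = steering_and_speed(omega_left=1.0, omega_right=1.6, speed=1.0)
```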

15.
The posterior parietal cortex has long been considered an 'association' area that combines information from different sensory modalities to form a cognitive representation of space. However, until recently little has been known about the neural mechanisms responsible for this important cognitive process. Recent experiments from the author's laboratory indicate that visual, somatosensory, auditory and vestibular signals are combined in areas LIP and 7a of the posterior parietal cortex. The integration of these signals can represent the locations of stimuli with respect to the observer and within the environment. Area MSTd combines visual motion signals, similar to those generated during an observer's movement through the environment, with eye-movement and vestibular signals. This integration appears to play a role in specifying the path on which the observer is moving. All three cortical areas combine different modalities into common spatial frames by using a gain-field mechanism. The spatial representations in areas LIP and 7a appear to be important for specifying the locations of targets for actions such as eye movements or reaching; the spatial representation within area MSTd appears to be important for navigation and the perceptual stability of motion signals.
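A gain-field neuron, as invoked above, can be sketched as a retinotopic tuning curve whose amplitude is scaled multiplicatively by a planar function of eye (or head) position, so that a population of such cells jointly encodes target location in a body- or world-centred frame. The parameters below are illustrative, not fitted values.

```python
import numpy as np

def gain_field_response(stim_retinal, eye_pos, pref_retinal=0.0,
                        sigma=5.0, gain_slope=0.02, gain_offset=1.0):
    """Gain-field model of a parietal neuron (illustrative parameters).

    stim_retinal: stimulus position relative to the fovea (deg).
    eye_pos:      eye position in the orbit (deg).
    A Gaussian retinotopic tuning curve is multiplicatively scaled by a
    planar function of eye position.
    """
    tuning = np.exp(-((stim_retinal - pref_retinal) ** 2) / (2 * sigma ** 2))
    gain = gain_offset + gain_slope * eye_pos
    return gain * tuning

# Same retinal stimulus, two eye positions: the response differs only through
# the eye-position gain, which is what lets a population recover head-centred
# target location.
r_left, r_right = (gain_field_response(2.0, e) for e in (-10.0, 10.0))
```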

16.
The primate visual system consists of a ventral stream, specialized for object recognition, and a dorsal visual stream, which is crucial for spatial vision and actions. However, little is known about the interactions and information flow between these two streams. We investigated these interactions within the network processing three-dimensional (3D) object information, comprising both the dorsal and ventral stream. Reversible inactivation of the macaque caudal intraparietal area (CIP) during functional magnetic resonance imaging (fMRI) reduced fMRI activations in posterior parietal cortex in the dorsal stream and, surprisingly, also in the inferotemporal cortex (ITC) in the ventral visual stream. Moreover, CIP inactivation caused a perceptual deficit in a depth-structure categorization task. CIP microstimulation during fMRI further suggests that CIP projects via posterior parietal areas to the ITC in the ventral stream. To our knowledge, these results provide the first causal evidence for the flow of visual 3D information from the dorsal stream to the ventral stream, and identify CIP as a key area for depth-structure processing. Thus, combining reversible inactivation and electrical microstimulation during fMRI provides a detailed view of the functional interactions between the two visual processing streams.

17.
Adab HZ, Vogels R. Current Biology: CB. 2011, 21(19): 1661-1666
Practice improves performance in visual tasks, but the mechanisms underlying this adult brain plasticity are unclear. Single-cell studies have reported no [1], weak [2], or moderate [3, 4] perceptual learning-related changes in macaque visual areas V1 and V4, whereas none were found in the middle temporal area (MT) [5]. These conflicting results, together with modeling of human (e.g., [6, 7]) and monkey data [8], suggested that perceptual learning reflects changes in the readout of visual cortical signals rather than changes in those signals themselves. In the V4 learning studies, monkeys discriminated small differences in orientation, whereas in the MT study, the animals discriminated opponent motion directions. Analogous to the latter study, we trained monkeys to discriminate static orthogonal orientations masked by noise. V4 neurons showed robust increases in their capacity to discriminate the trained orientations during the course of the training. This effect was observed during discrimination and passive fixation but specifically for the trained orientations. The improvement in neural discrimination was due to decreased response variability and an increase of the difference between the mean responses for the two trained orientations. These findings demonstrate that perceptual learning in a coarse discrimination task can indeed change the response properties of a cortical sensory area.
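The reported improvement in neural discrimination can be summarized with a d'-like index: the difference between the mean responses to the two trained orientations divided by the pooled response variability, so that either a larger mean difference or lower variability raises discriminability. The sketch below uses made-up pre- and post-training spike counts purely for illustration.

```python
import numpy as np

def neural_dprime(resp_a, resp_b):
    """Discriminability of two trained orientations from one neuron's spike counts."""
    mu_a, mu_b = np.mean(resp_a), np.mean(resp_b)
    pooled_sd = np.sqrt((np.var(resp_a) + np.var(resp_b)) / 2.0)
    return np.abs(mu_a - mu_b) / pooled_sd

# Illustrative numbers only: after training, a larger mean difference and a
# smaller variance both push d' up, as reported above.
rng = np.random.default_rng(1)
d_pre = neural_dprime(rng.normal(20, 6, 200), rng.normal(22, 6, 200))
d_post = neural_dprime(rng.normal(19, 4, 200), rng.normal(24, 4, 200))
```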

18.
The ventral form vision pathway of the primate brain comprises a sequence of areas that include V1, V2, V4 and the inferior temporal cortex (IT) [1]. Although contour extraction in the V1 area and responses to complex images, such as faces, in the IT have been studied extensively, much less is known about shape extraction at intermediate cortical levels such as V4. Here, we used functional magnetic resonance imaging (fMRI) to demonstrate that the human V4 is more strongly activated by concentric and radial patterns than by conventional sinusoidal gratings. This is consistent with global pooling of local V1 orientations to extract concentric and radial shape information in V4. Furthermore, concentric patterns were found to be effective in activating the fusiform face area. These findings support recent psychophysical [2,3] and physiological [4,5] data indicating that analysis of concentric and radial structure represents an important aspect of processing at intermediate levels of form vision.
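Global pooling of local V1 orientations, as invoked above, can be sketched as follows: in a concentric pattern the local orientation at each point is perpendicular to the radius from the pattern centre, whereas in a radial pattern it is parallel, so summing the alignment between local orientation and radial angle yields separate concentric and radial evidence. The pooling rule below is an illustrative simplification, not a model fitted in the study.

```python
import numpy as np

def concentric_radial_evidence(x, y, local_theta):
    """Pool local orientations (radians) into concentric vs radial evidence.

    x, y:        positions of local orientation samples relative to the centre.
    local_theta: local orientation at each position.
    cos(2 * delta) is +1 when the local orientation is parallel to the radius
    (radial structure) and -1 when perpendicular (concentric structure).
    """
    radial_angle = np.arctan2(y, x)
    delta = local_theta - radial_angle
    radial = np.mean(np.cos(2 * delta))
    concentric = np.mean(-np.cos(2 * delta))
    return concentric, radial
```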

19.
Pei YC, Hsiao SS, Craig JC, Bensmaia SJ. Neuron. 2011, 69(3): 536-547
How are local motion signals integrated to form a global motion percept? We investigate the neural mechanisms of tactile motion integration by presenting tactile gratings and plaids to the fingertips of monkeys, using the tactile analogue of a visual monitor and recording the responses evoked in somatosensory cortical neurons. The perceived directions of the gratings and plaids are measured in parallel psychophysical experiments. We identify a population of somatosensory neurons that exhibit integration properties comparable to those induced by analogous visual stimuli in area MT and find that these neural responses account for the perceived direction of the stimuli across all stimulus conditions tested. The preferred direction of the neurons and the perceived direction of the stimuli can be predicted from the weighted average of the directions of the individual stimulus features, highlighting that the somatosensory system implements a vector average mechanism to compute tactile motion direction that bears striking similarities to its visual counterpart.
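The vector average computation described above takes each stimulus feature's direction as a unit vector, weights it (for example by the feature's strength), and reads the perceived or preferred direction from the angle of the resultant. A minimal sketch, with the plaid example as a hypothetical illustration:

```python
import numpy as np

def vector_average_direction(directions, weights):
    """Weighted vector average of stimulus-feature directions (radians).

    Each feature contributes a unit vector scaled by its weight; the perceived
    (or preferred) direction is the angle of the resultant vector.
    """
    vx = np.sum(weights * np.cos(directions))
    vy = np.sum(weights * np.sin(directions))
    return np.arctan2(vy, vx)

# e.g. a plaid whose two component gratings drift at 60 and 120 degrees with
# equal weight yields a perceived direction of 90 degrees.
theta = vector_average_direction(np.deg2rad([60.0, 120.0]), np.array([1.0, 1.0]))
```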

20.
Since Barlow and Hill's classic study of the adaptation of the rabbit ganglion cell to movement [1], there have been several reports that motion adaptation is accompanied by an exponential reduction in spike rate, and similar estimates of the time course of velocity adaptation have been found across species [2-4]. Psychophysical studies in humans have shown that perceived velocity may decrease exponentially with adaptation [5,6]. It has been suggested that the reduction in firing of single cells may constitute the neural substrate of the reduction in perceived speed in humans [1,5-7]. Although a model of velocity coding in which the firing rate directly encodes speed may have the advantage of simplicity, it is not supported by psychophysical research. Furthermore, psychophysical estimates of the time course of perceived speed adaptation are not entirely consistent with physiological estimates. This discrepancy between psychophysical and physiological estimates may be due to the unrealistic assumption that speed is coded in the gross spike rate of neurons in the primary visual cortex. The psychophysical data on motion processing are, however, generally consistent with a model in which perceived velocity is derived from the ratio of two temporal channels [8-14]. We have examined the time course of speed adaptation and recovery to determine whether the observed rates can be better related to the established physiology if a ratio model of velocity processing is assumed. Our results indicate that such a model describes the data well and can accommodate the observed difference in the time courses of physiological and psychophysical processes.
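A ratio model of the kind favored above codes perceived speed as the ratio of the outputs of two temporal channels (a transient, band-pass channel over a sustained, low-pass one) rather than as the firing rate of either channel alone; adaptation then changes perceived speed through the channels' relative gains. The sketch below is an illustrative form with assumed time constants, not the specific model fitted in the paper.

```python
import numpy as np

def perceived_speed(speed, t_adapt, tau_fast=20.0, tau_slow=60.0):
    """Ratio model of perceived speed under adaptation (illustrative form).

    Two temporal channels respond to the stimulus; each channel's gain decays
    exponentially with adaptation time but with a different time constant, so
    the ratio (taken as the speed code) follows a time course that need not
    match either channel's firing rate alone.
    """
    g_fast = np.exp(-t_adapt / tau_fast)      # transient (band-pass) channel gain
    g_slow = np.exp(-t_adapt / tau_slow)      # sustained (low-pass) channel gain
    fast = g_fast * speed / (speed + 1.0)     # speed-loving channel response
    slow = g_slow * 1.0 / (speed + 1.0)       # low-pass channel response
    return fast / (slow + 1e-9)

# Perceived speed of the same stimulus falls as adaptation time grows,
# because the transient channel adapts faster than the sustained one.
before, after = perceived_speed(4.0, 0.0), perceived_speed(4.0, 30.0)
```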
