Similar Literature
20 similar results found
1.
Perceptual learning of complex stimuli refers to long-lasting, stable changes in the perception of complex visual stimuli such as objects or faces, induced by training or experience; it is generally thought to reflect plasticity in higher-level visual cortex. Studies of perceptual learning with simple stimuli have revealed partial plasticity in early visual cortex, but the neural mechanisms of perceptual learning for complex stimuli remain controversial. This article reviews theoretical models of and experimental evidence for perceptual learning, focusing on the characteristics, neural mechanisms, and research methods of perceptual learning for complex stimuli such as objects and faces. Future work in this field should further examine the persistence of complex-stimulus perceptual learning, the mechanisms of perceptual learning for different facial attributes, and theoretical models of complex-stimulus perceptual learning.

2.
Yotsumoto Y, Watanabe T, Sasaki Y. Neuron, 2008, 57(6): 827-833
Perceptual learning is regarded as a manifestation of experience-dependent plasticity in the sensory systems, yet the underlying neural mechanisms remain unclear. We measured the dynamics of performance on a visual task and brain activation in the human primary visual cortex (V1) across the time course of perceptual learning. Within the first few weeks of training, brain activation in a V1 subregion corresponding to the trained visual field quadrant and task performance both increased. However, while performance levels then saturated and were maintained at a constant level, brain activation in the corresponding areas decreased to the level observed before training. These findings indicate that there are distinct temporal phases in the time course of perceptual learning, related to differential dynamics of BOLD activity in visual cortex.

3.
Karmarkar UR, Dan Y. Neuron, 2006, 52(4): 577-585
Experience-dependent plasticity is a prominent feature of the mammalian visual cortex. Although such neural changes are most evident during development, adult cortical circuits can be modified by a variety of manipulations, such as perceptual learning and visual deprivation. Elucidating the underlying mechanisms at the cellular and synaptic levels is an essential step in understanding neural plasticity in the mature animal. Although developmental and adult plasticity share many common features, notable differences may be attributed to developmental cortical changes at multiple levels. These range from shifts in the molecular profiles of cortical neurons to changes in the spatiotemporal dynamics of network activity. In this review, we will discuss recent progress and remaining challenges in understanding adult visual plasticity, focusing on the primary visual cortex.

4.
Humans are able to efficiently learn and remember complex visual patterns after only a few seconds of exposure [1]. At a cellular level, such learning is thought to involve changes in synaptic efficacy, which have been linked to the precise timing of action potentials relative to synaptic inputs [2-4]. Previous experiments have tapped into the timing of neural spiking events by using repeated asynchronous presentation of visual stimuli to induce changes in both the tuning properties of visual neurons and the perception of simple stimulus attributes [5, 6]. Here we used a similar approach to investigate potential mechanisms underlying the perceptual learning of face identity, a high-level stimulus property based on the spatial configuration of local features. Periods of stimulus pairing induced a systematic bias in face-identity perception in a manner consistent with the predictions of spike timing-dependent plasticity. The perceptual shifts induced for face identity were tolerant to a 2-fold change in stimulus size, suggesting that they reflected neuronal changes in nonretinotopic areas, and were more than twice as strong as the perceptual shifts induced for low-level visual features. These results support the idea that spike timing-dependent plasticity can rapidly adjust the neural encoding of high-level stimulus attributes [7-11].

5.
Perception of objects and motions in the visual scene is one of the basic problems in the visual system. There exist 'What' and 'Where' pathways in the higher visual cortex, starting from the simple cells in the primary visual cortex. The former perceives object properties such as form, color, and texture, while the latter perceives 'where', for example, the velocity and direction of spatial movement of objects. This paper explores brain-like computational architectures of visual information processing. We propose a visual perceptual model and a computational mechanism for training it. The computational model is a three-layer network. The first layer is the input layer, which receives stimuli from natural environments. The second layer represents the internal neural information. The connections between the first and second layers, called the receptive fields of neurons, are learned self-adaptively based on the principle of sparse neural representation. To this end, we introduce the Kullback-Leibler divergence as the measure of independence between neural responses and derive the learning algorithm by minimizing the cost function. The proposed algorithm is applied to train the basis functions, namely receptive fields, which are localized, oriented, and bandpass. The resulting receptive fields of neurons in the second layer resemble those of simple cells in the primary visual cortex. On top of these basis functions, we construct the third layer for perception of 'what' and 'where' in the higher visual cortex. The proposed model perceives objects and their motions with high accuracy and strong robustness against additive noise. Computer simulation results in the final section show the feasibility of the proposed perceptual model and the efficiency of the learning algorithm.
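The sparseness objective described in this abstract can be sketched in a few lines. The paper's exact cost function is not reproduced here, so this sketch uses the standard Bernoulli KL-divergence sparsity penalty familiar from sparse autoencoders as a stand-in; the patch data, layer sizes, and target activation level are all illustrative, not taken from the paper.

```python
import numpy as np

def kl_sparsity_penalty(rho, rho_hat):
    """Bernoulli KL divergence between a target sparse activation level rho
    and the observed mean activations rho_hat of each unit. Values near zero
    mean the units' responses are as sparse as desired."""
    rho_hat = np.clip(rho_hat, 1e-8, 1 - 1e-8)
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

# Toy "second layer": sigmoid responses of 4 units to random image patches.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))           # 100 patches, 16 pixels each
W = rng.normal(scale=0.1, size=(4, 16))  # receptive fields (to be learned)
responses = 1.0 / (1.0 + np.exp(-X @ W.T))

# The penalty is one term of the cost a learning rule would minimize.
penalty = kl_sparsity_penalty(0.05, responses.mean(axis=0))
```

In a full implementation this penalty would be added to a reconstruction-error term and the combined cost minimized by gradient descent on W, driving the receptive fields toward the localized, oriented, bandpass solutions the abstract describes.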

6.
Haushofer J, Kanwisher N. Neuron, 2007, 53(6): 773-775
How does experience change representations of visual objects in the brain? Do cortical object representations reflect category membership? In this issue of Neuron, Jiang et al. show that category training leads to sharpening of neural responses in high-level visual cortex; in contrast, category boundaries may be represented only in prefrontal cortex.

8.
We have previously shown that transcranial direct current stimulation (tDCS) improved performance of a complex visual perceptual learning task (Clark et al. 2012). However, it is not known whether tDCS can enhance perceptual sensitivity independently of non-specific, arousal-linked changes in response bias, nor whether any such sensitivity benefit can be retained over time. We examined the influence of stimulation of the right inferior frontal cortex using tDCS on perceptual learning and retention in 37 healthy participants, using signal detection theory to distinguish effects on perceptual sensitivity (d′) from response bias (β). Anodal stimulation with 2 mA increased d′, compared to a 0.1 mA sham stimulation control, with no effect on β. On completion of training, participants in the active stimulation group had more than double the perceptual sensitivity of the control group. Furthermore, the performance enhancement was maintained for 24 hours. The results show that tDCS augments both skill acquisition and retention in a complex detection task and that the benefits are rooted in an improvement in sensitivity (d′), rather than changes in response bias (β). Stimulation-driven acceleration of learning and its retention over 24 hours may result from increased activation of prefrontal cortical regions that provide top-down attentional control signals to object recognition areas.
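The signal-detection quantities this study relies on, sensitivity d′ and response bias β, are straightforward to compute from yes/no trial counts. A minimal sketch (the correction scheme and the example counts below are illustrative, not taken from the study):

```python
from statistics import NormalDist

_N = NormalDist()  # standard normal, for z-scores and densities

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and response bias beta from yes/no trial counts.
    A log-linear correction keeps hit/false-alarm rates away from 0 and 1,
    which would otherwise give infinite z-scores."""
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_h, z_f = _N.inv_cdf(h), _N.inv_cdf(f)
    d_prime = z_h - z_f
    beta = _N.pdf(z_h) / _N.pdf(z_f)  # likelihood ratio at the criterion
    return d_prime, beta

# Example: many hits, few false alarms, symmetric (unbiased) criterion.
d_prime, beta = sdt_measures(hits=80, misses=20,
                             false_alarms=20, correct_rejections=80)
```

Because the example counts are symmetric, β comes out at 1 (no bias) while d′ is clearly positive, which is the kind of dissociation the study uses to argue that tDCS affected sensitivity rather than bias.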

9.
Visual perceptual learning (VPL) is defined as visual performance improvement after visual experiences. VPL is often highly specific for a visual feature presented during training. Such specificity is observed in behavioral tuning function changes with the highest improvement centered on the trained feature and was originally thought to be evidence for changes in the early visual system associated with VPL. However, results of neurophysiological studies have been highly controversial concerning whether the plasticity underlying VPL occurs within the visual cortex. The controversy may be partially due to the lack of observation of neural tuning function changes in multiple visual areas in association with VPL. Here, using human subjects, we systematically compared behavioral tuning function changes after global motion detection training with decoded tuning function changes for 8 visual areas, using pattern classification analysis on functional magnetic resonance imaging (fMRI) signals. We found that the behavioral tuning function changes were extremely highly correlated to decoded tuning function changes only in V3A, which is known to be highly responsive to global motion in humans. We conclude that VPL of a global motion detection task involves plasticity in a specific visual cortical area.
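The analysis logic here, correlating the behavioral tuning-function change with each visual area's decoded tuning-function change, can be illustrated with made-up numbers. The direction values and curves below are hypothetical stand-ins, not the study's data:

```python
import numpy as np

# Hypothetical tuning-change curves: improvement at 8 motion directions,
# expressed as angular distance from the trained direction.
directions = np.array([0, 10, 20, 40, 80, 120, 160, 180])  # deg from trained
behavior_change = np.exp(-directions / 30.0)               # peaks at trained dir

# Decoded tuning changes per area (illustrative): one flat, one mirroring
# the behavioral tuning curve.
decoded_change = {
    "V1":  np.full(8, 0.1),                   # no learning-related change
    "V3A": 0.9 * np.exp(-directions / 30.0),  # tracks the behavioral curve
}

# Pearson correlation between behavioral and decoded tuning changes per area;
# a constant decoded curve has zero variance, so its correlation is undefined
# and reported as 0 here.
corr = {area: (np.corrcoef(behavior_change, d)[0, 1] if np.std(d) > 0 else 0.0)
        for area, d in decoded_change.items()}
```

Under this toy setup only the area whose decoded curve tracks behavior yields a high correlation, which mirrors the study's logic for singling out V3A.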

10.

Background

Experience can alter how objects are represented in the visual cortex. But experience can take different forms. It is unknown whether the kind of visual experience systematically alters the nature of visual cortical object representations.

Methodology/Principal Findings

We take advantage of different training regimens found to produce qualitatively different types of perceptual expertise behaviorally in order to contrast the neural changes that follow different kinds of visual experience with the same objects. Two groups of participants went through training regimens that required either subordinate-level individuation or basic-level categorization of a set of novel, artificial objects, called “Ziggerins”. fMRI activity of a region in the right fusiform gyrus increased after individuation training and was correlated with the magnitude of configural processing of the Ziggerins observed behaviorally. In contrast, categorization training caused distributed changes, with increased activity in the medial portion of the ventral occipito-temporal cortex relative to more lateral areas.

Conclusions/Significance

Our results demonstrate that the kind of experience with a category of objects can systematically influence how those objects are represented in visual cortex. The demands of prior learning experience therefore appear to be one factor determining the organization of activity patterns in visual cortex.

11.
Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus features that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (an oriented grating), while two similar task-irrelevant stimuli were presented in the adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) being perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity.

12.
Human performance on various visual tasks can be improved substantially via training. However, the enhancements are frequently specific to relatively low-level stimulus dimensions. While such specificity has often been thought to be indicative of a low-level neural locus of learning, recent research suggests that these same effects can be accounted for by changes in higher-level areas, in particular in the way higher-level areas read out information from lower-level areas in the service of highly practiced decisions. Here we contrast the degree of orientation transfer seen after training on two different tasks: vernier acuity and stereoacuity. Importantly, while the decision rule that could improve vernier acuity (i.e. a discriminant in the image plane) would not be transferable across orientations, the simplest rule that could be learned to solve the stereoacuity task (i.e. a discriminant in the depth plane) would be insensitive to changes in orientation. Thus, given a read-out hypothesis, more substantial transfer would be expected as a result of stereoacuity than vernier acuity training. To test this prediction, participants were trained (7500 total trials) on either a stereoacuity (N = 9) or vernier acuity (N = 7) task with the stimuli in either a vertical or horizontal configuration (balanced across participants). Following training, transfer to the untrained orientation was assessed. As predicted, evidence for relatively orientation-specific learning was observed in vernier-trained participants, while no evidence of specificity was observed in stereo-trained participants. These results build upon the emerging view that perceptual learning (even very specific learning effects) may reflect changes in inferences made by high-level areas, rather than necessarily fully reflecting changes in the receptive field properties of low-level areas.

13.
Seitz AR, Kim R, Shams L. Current Biology: CB, 2006, 16(14): 1422-1427
Numerous studies show that practice can result in performance improvements on low-level visual perceptual tasks [1-5]. However, such learning is characteristically difficult and slow, requiring many days of training [6-8]. Here, we show that a multisensory audiovisual training procedure facilitates visual learning and results in significantly faster learning than unisensory visual training. We trained one group of subjects with an audiovisual motion-detection task and a second group with a visual motion-detection task, and compared performance on trials containing only visual signals across ten days of training. Whereas observers in both groups showed improvements of visual sensitivity with training, subjects trained with multisensory stimuli showed significantly more learning both within and across training sessions. These benefits of multisensory training are particularly surprising given that the learning of visual motion stimuli is generally thought to be mediated by low-level visual brain areas [6, 9, 10]. Although crossmodal interactions are ubiquitous in human perceptual processing [11-13], the contribution of crossmodal information to perceptual learning has not been studied previously. Our results show that multisensory interactions can be exploited to yield more efficient learning of sensory information and suggest that multisensory training programs would be most effective for the acquisition of new skills.

14.
Learning to link visual contours
Li W, Piëch V, Gilbert CD. Neuron, 2008, 57(3): 442-451
In complex visual scenes, linking related contour elements is important for object recognition. This process, thought to be stimulus driven and hard wired, has substrates in primary visual cortex (V1). Here, however, we find contour integration in V1 to depend strongly on perceptual learning and top-down influences that are specific to contour detection. In naive monkeys, the information about contours embedded in complex backgrounds is absent in V1 neuronal responses and is independent of the locus of spatial attention. Training animals to find embedded contours induces strong contour-related responses specific to the trained retinotopic region. These responses are most robust when animals perform the contour detection task but disappear under anesthesia. Our findings suggest that top-down influences dynamically adapt neural circuits according to specific perceptual tasks. This may serve as a general neuronal mechanism of perceptual learning and reflect top-down mediated changes in cortical states.

15.
What we see depends on where we look. This paper characterizes the modulatory effects of point of regard in three-dimensional space on responsiveness of visual cortical neurons in areas V1, V2, and V4. Such modulatory effects are both common, affecting 85% of cells, and strong, frequently producing changes of mean firing rate by a factor of 10. The prevalence of neurons in area V4 showing a preference for near distances may be indicative of the involvement of this area in close scrutiny during object recognition. We propose that eye-position signals can be exploited by visual cortex as classical conditioning stimuli, enabling the perceptual learning of systematic relationships between point of regard and the structure of the visual environment.

16.
Hierarchical generative models, such as Bayesian networks, and belief propagation have been shown to provide a theoretical framework that can account for perceptual processes, including feedforward recognition and feedback modulation. The framework explains both psychophysical and physiological experimental data and maps well onto the hierarchical distributed cortical anatomy. However, the complexity required to model cortical processes makes inference, even using approximate methods, very computationally expensive. Thus, existing object perception models based on this approach are typically limited to tree-structured networks with no loops, use small toy examples or fail to account for certain perceptual aspects such as invariance to transformations or feedback reconstruction. In this study we develop a Bayesian network with an architecture similar to that of HMAX, a biologically-inspired hierarchical model of object recognition, and use loopy belief propagation to approximate the model operations (selectivity and invariance). Crucially, the resulting Bayesian network extends the functionality of HMAX by including top-down recursive feedback. Thus, the proposed model not only achieves successful feedforward recognition invariant to noise, occlusions, and changes in position and size, but is also able to reproduce modulatory effects such as illusory contour completion and attention. Our novel and rigorous methodology covers key aspects such as learning using a layerwise greedy algorithm, combining feedback information from multiple parents and reducing the number of operations required. Overall, this work extends an established model of object recognition to include high-level feedback modulation, based on state-of-the-art probabilistic approaches. The methodology employed, consistent with evidence from the visual cortex, can be potentially generalized to build models of hierarchical perceptual organization that include top-down and bottom-up interactions, for example, in other sensory modalities.

17.

Background

Humans and other animals change the way they perceive the world due to experience. This process has been labeled as perceptual learning, and implies that adult nervous systems can adaptively modify the way in which they process sensory stimulation. However, the mechanisms by which the brain modifies this capacity have not been sufficiently analyzed.

Methodology/Principal Findings

We studied the neural mechanisms of human perceptual learning by combining electroencephalographic (EEG) recordings of brain activity and the assessment of psychophysical performance during training in a visual search task. All participants improved their perceptual performance as reflected by an increase in sensitivity (d′) and a decrease in reaction time. The EEG signal was acquired throughout the entire experiment revealing amplitude increments, specific and unspecific to the trained stimulus, in event-related potential (ERP) components N2pc and P3, respectively. The unspecific P3 modification can be related to context- or task-based learning, while N2pc may be reflecting a more specific attention-related boosting of target detection. Moreover, bell- and U-shaped profiles of oscillatory brain activity in gamma (30–60 Hz) and alpha (8–14 Hz) frequency bands may suggest the existence of two phases of learning acquisition, which can be understood as distinct optimization mechanisms in stimulus processing.
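The gamma- and alpha-band power measures referred to above can be estimated from a raw trace with a simple periodogram. A minimal sketch on synthetic data; the sampling rate, band limits, and signal composition are illustrative, not the study's recording parameters:

```python
import numpy as np

def band_power(eeg, fs, lo, hi):
    """Mean power in the [lo, hi] Hz band of a 1-D trace, from a
    plain FFT periodogram (no windowing or averaging)."""
    freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2 / eeg.size
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

# Synthetic 2-second trace: a strong 10 Hz (alpha) component, a weaker
# 40 Hz (gamma) component, plus white noise.
fs = 250
t = np.arange(0, 2, 1.0 / fs)
rng = np.random.default_rng(1)
eeg = (np.sin(2 * np.pi * 10 * t)
       + 0.3 * np.sin(2 * np.pi * 40 * t)
       + 0.1 * rng.normal(size=t.size))

alpha = band_power(eeg, fs, 8, 14)   # alpha band, 8–14 Hz
gamma = band_power(eeg, fs, 30, 60)  # gamma band, 30–60 Hz
```

Tracking such band-power values across training sessions is what would produce the bell- and U-shaped learning profiles the abstract describes; a production analysis would typically use a windowed, averaged estimator such as Welch's method rather than this raw periodogram.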

Conclusions/Significance

We conclude that there are reorganizations in several neural processes that contribute differently to perceptual learning in a visual search task. We propose an integrative model of neural activity reorganization, whereby perceptual learning takes place as a two-stage phenomenon including perceptual, attentional and contextual processes.

18.
The primate visual system achieves remarkable visual object recognition performance even in brief presentations, and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. To accurately produce such a comparison, a major difficulty has been a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work, we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of “kernel analysis” that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT, and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds.

19.
Investigation of perceptual rivalry between conflicting stimuli presented one to each eye can further understanding of the neural underpinnings of conscious visual perception. During rivalry, visual awareness fluctuates between perceptions of the two stimuli. Here, we demonstrate that high-level perceptual grouping can promote rivalry between stimulus pairs that would otherwise be perceived as nonrivalrous. Perceptual grouping was generated with point-light walker stimuli that simulate human motion, visible only as lights placed on the joints. Although such walking figures are unrecognizable when stationary, recognition judgments as complex as gender and identity can accurately be made from animated displays, demonstrating the efficiency with which our visual system can group dynamic local signals into a globally coherent walking figure. We find that point-light walker stimuli presented one to each eye and in different colors and configurations results in strong rivalry. However, rivalry is minimal when the two walkers are split between the eyes or both presented to one eye. This pattern of results suggests that processing animated walker figures promotes rivalry between signals from the two eyes rather than between higher-level representations of the walkers. This leads us to hypothesize that awareness during binocular rivalry involves the integrated activity of high-level perceptual mechanisms in conjunction with lower-level ocular suppression modulated via cortical feedback.

20.
Background

Under conditions of visual fixation, perceptual fading occurs when a stationary object, though present in the world and continually casting light upon the retina, vanishes from visual consciousness. The neural correlates of the consciousness of such an object will presumably modulate in activity with the onset and cessation of perceptual fading.

Method

In order to localize the neural correlates of perceptual fading, a green disk that had been individually set to be equiluminant with the orange background was presented in one of the four visual quadrants; subjects indicated with a button press whether or not the disk was subjectively visible as it perceptually faded in and out.

Results

Blood oxygen-level dependent (BOLD) signal in V1 and ventral retinotopic areas V2v and V3v decreases when the disk subjectively disappears, and increases when it subjectively reappears. This effect occurs in early visual areas both ipsilaterally and contralaterally to the fading figure. That is, it occurs regardless of whether the fading stimulus is presented inside or outside of the corresponding portion of visual field. In addition, we find that the microsaccade rate rises before and after perceptual transitions from not seeing to seeing the disk, and decreases before perceptual transitions from seeing to not seeing the disk. These BOLD signal changes could be driven by a global process that operates across contralateral and ipsilateral visual cortex or by a confounding factor, such as microsaccade rate.
