Similar Articles
20 similar articles found (search time: 15 ms)
1.
Perceptual decisions depend on the ability to exploit available sensory information in order to select the most adaptive option from a set of alternatives. Such decisions depend on the perceptual sensitivity of the organism, which is generally accompanied by a corresponding level of certainty about the choice made. Here, using a cortico-cortical paired associative transcranial magnetic stimulation (ccPAS) protocol aimed at inducing plastic changes, we shaped perceptual sensitivity and metacognitive ability in a motion discrimination task depending on the targeted network, demonstrating their functional dissociation. Neurostimulation aimed at boosting V5/MT+-to-V1/V2 back-projections enhanced motion sensitivity without impacting metacognition, whereas boosting IPS/LIP-to-V1/V2 back-projections increased metacognitive efficiency without impacting motion sensitivity. This double dissociation provides causal evidence of distinct networks for perceptual sensitivity and metacognitive ability in humans.

Transcranial magnetic stimulation targeting cortico-cortical connections reveals a functional dissociation between temporo-visual and parieto-visual re-entrant pathways in humans, which control perceptual sensitivity and metacognitive ability, respectively, during a visual motion perception task.

2.
We have previously shown that transcranial direct current stimulation (tDCS) improved performance of a complex visual perceptual learning task (Clark et al. 2012). However, it is not known whether tDCS can enhance perceptual sensitivity independently of non-specific, arousal-linked changes in response bias, nor whether any such sensitivity benefit can be retained over time. We examined the influence of tDCS over the right inferior frontal cortex on perceptual learning and retention in 37 healthy participants, using signal detection theory to distinguish effects on perceptual sensitivity (d′) from response bias (β). Anodal stimulation at 2 mA increased d′, compared to a 0.1 mA sham stimulation control, with no effect on β. On completion of training, participants in the active stimulation group had more than double the perceptual sensitivity of the control group. Furthermore, the performance enhancement was maintained for 24 hours. The results show that tDCS augments both skill acquisition and retention in a complex detection task and that the benefits are rooted in an improvement in sensitivity (d′) rather than changes in response bias (β). Stimulation-driven acceleration of learning and its retention over 24 hours may result from increased activation of prefrontal cortical regions that provide top-down attentional control signals to object recognition areas.
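The signal-detection quantities in the abstract above, sensitivity d′ and response bias β, can be computed directly from trial counts. The sketch below shows one standard way; the function name and the log-linear correction are illustrative choices, not details of the study.

```python
import math
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return (d_prime, beta) from the four cells of a detection table."""
    # Log-linear correction: add 0.5 to each cell so that perfect hit or
    # false-alarm rates do not produce infinite z-scores.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)        # perceptual sensitivity
    c = -0.5 * (z(hit_rate) + z(fa_rate))     # criterion location
    beta = math.exp(d_prime * c)              # likelihood-ratio response bias
    return d_prime, beta
```

For example, 20 hits, 5 misses, 5 false alarms, and 20 correct rejections give d′ of roughly 1.6 with β of 1, an unbiased but moderately sensitive observer.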

3.
In humans and some other species, perceptual decision-making is complemented by the ability to make confidence judgements about the certainty of sensory evidence. While both forms of decision process have been studied empirically, the precise relationship between them remains poorly understood. We performed an experiment that combined a perceptual decision-making task (identifying the category of a faint visual stimulus) with a confidence-judgement task (wagering on the accuracy of each perceptual decision). The visual stimulation paradigm required steady fixation, so we used eye-tracking to control for stray eye movements. Our data analyses revealed an unexpected and counterintuitive interaction between the steadiness of fixation (prior to and during stimulation), perceptual decision making, and post-decision wagering: greater variability in gaze direction during fixation was associated with significantly increased visual-perceptual sensitivity, but significantly decreased reliability of confidence judgements. The latter effect could not be explained by a simple change in overall confidence (i.e. a criterion artifact), but rather was tied to a change in the degree to which high wagers predicted correct decisions (i.e. the sensitivity of the confidence judgement). We found no evidence of a differential change in pupil diameter that could account for the effect, and thus our results are consistent with fixational eye movements being the relevant covariate. However, we note that small changes in pupil diameter can sometimes cause artifactual fluctuations in measured gaze direction, and this possibility could not be fully ruled out. In either case, our results suggest that perceptual decisions and confidence judgements can be processed independently and point toward a new avenue of research into the relationship between them.

4.
Kim RS, Seitz AR, Shams L. PLoS ONE 2008;3(1):e1532

Background

Studies of perceptual learning have largely focused on unisensory stimuli. However, multisensory interactions are ubiquitous in perception, even at early processing stages, and thus can potentially play a role in learning. Here, we examine the effect of auditory-visual congruency on visual learning.

Methodology/Principal Findings

Subjects were trained over five days on a visual motion coherence detection task with either congruent or incongruent audiovisual stimuli. Comparing performance on visual-only trials, we find that training with congruent audiovisual stimuli produces significantly better learning than training with incongruent audiovisual stimuli or with visual stimuli alone.

Conclusions/Significance

This advantage of stimulus congruency during training suggests that the benefits of multisensory training may result from audiovisual interactions at a perceptual rather than a cognitive level.

5.
Choice certainty is a probabilistic estimate of past performance and expected outcome. In perceptual decisions, the degree of confidence correlates closely with choice accuracy and reaction times, suggesting an intimate relationship to objective performance. Here we show that spatial and feature-based attention increase human subjects' certainty more than their accuracy in visual motion discrimination tasks. Our findings demonstrate for the first time a dissociation of choice accuracy and certainty, with a significantly stronger influence of voluntary top-down attention on subjective performance measures than on objective performance. These results reveal a previously unknown mechanism of the selection process implemented by attention and suggest a unique biological valence of choice certainty beyond a faithful reflection of the decision process.

6.

Background

Humans and other animals change the way they perceive the world through experience. This process has been labeled perceptual learning, and it implies that adult nervous systems can adaptively modify the way in which they process sensory stimulation. However, the mechanisms by which the brain modifies this capacity have not been sufficiently analyzed.

Methodology/Principal Findings

We studied the neural mechanisms of human perceptual learning by combining electroencephalographic (EEG) recordings of brain activity with the assessment of psychophysical performance during training in a visual search task. All participants improved their perceptual performance, as reflected by an increase in sensitivity (d′) and a decrease in reaction time. The EEG signal was acquired throughout the entire experiment, revealing amplitude increments in the event-related potential (ERP) components N2pc and P3 that were specific and unspecific to the trained stimulus, respectively. The unspecific P3 modification can be related to context- or task-based learning, while N2pc may reflect a more specific attention-related boosting of target detection. Moreover, bell- and U-shaped profiles of oscillatory brain activity in the gamma (30–60 Hz) and alpha (8–14 Hz) frequency bands suggest the existence of two phases of learning acquisition, which can be understood as distinct optimization mechanisms in stimulus processing.

Conclusions/Significance

We conclude that there are reorganizations in several neural processes that contribute differently to perceptual learning in a visual search task. We propose an integrative model of neural activity reorganization, whereby perceptual learning takes place as a two-stage phenomenon including perceptual, attentional, and contextual processes.

7.

Background

The duration of sounds can affect the perceived duration of co-occurring visual stimuli. However, it is unclear whether this is limited to amodal processes of duration perception or affects other non-temporal qualities of visual perception.

Methodology/Principal Findings

Here, we tested the hypothesis that visual sensitivity, rather than only the perceived duration of visual stimuli, can be affected by the duration of co-occurring sounds. We found that visual detection sensitivity (d′) for unimodal stimuli was higher for stimuli of longer duration. Crucially, in a cross-modal condition, we replicated previous unimodal findings, observing that visual sensitivity was shaped by the duration of co-occurring sounds. When short visual stimuli (∼24 ms) were accompanied by sounds of matching duration, visual sensitivity was decreased relative to the unimodal visual condition. However, when the same visual stimuli were accompanied by longer auditory stimuli (∼60–96 ms), visual sensitivity was increased relative to the performance for ∼24 ms auditory stimuli. Across participants, this sensitivity enhancement was observed within a critical time window of ∼60–96 ms. Moreover, the amplitude of this effect correlated across participants with the sensitivity enhancement found for longer-lasting visual stimuli.

Conclusions/Significance

Our findings show that the duration of co-occurring sounds affects visual perception: it changes visual sensitivity in much the same way as altering the actual duration of the visual stimuli does.

8.
Does our perceptual awareness consist of a continuous stream, or of a discrete sequence of perceptual cycles, possibly associated with the rhythmic structure of brain activity? This has been a long-standing question in neuroscience. We review recent psychophysical and electrophysiological studies indicating that part of our visual awareness proceeds in approximately 7–13 Hz cycles rather than continuously. On the other hand, experimental attempts at applying similar tools to demonstrate the discreteness of auditory awareness have been largely unsuccessful. We argue and demonstrate experimentally that visual and auditory perception are not equally affected by temporal subsampling of their respective input streams: video sequences remain intelligible at sampling rates of two to three frames per second, whereas audio inputs lose their fine temporal structure, and with it their intelligibility, below 20–30 samples per second. This does not mean, however, that our auditory perception must proceed continuously. Instead, we propose that audition could still involve perceptual cycles, but with the periodic sampling happening only after the stage of auditory feature extraction. In addition, although visual perceptual cycles can follow one another at a spontaneous pace largely independent of the visual input, auditory cycles may need to sample the input stream more flexibly, by adapting to the temporal structure of the auditory inputs.

9.
Seitz AR, Kim R, Shams L. Current Biology 2006;16(14):1422–1427
Numerous studies show that practice can result in performance improvements on low-level visual perceptual tasks [1-5]. However, such learning is characteristically difficult and slow, requiring many days of training [6-8]. Here, we show that a multisensory audiovisual training procedure facilitates visual learning and results in significantly faster learning than unisensory visual training. We trained one group of subjects with an audiovisual motion-detection task and a second group with a visual motion-detection task, and compared performance on trials containing only visual signals across ten days of training. Whereas observers in both groups showed improvements in visual sensitivity with training, subjects trained with multisensory stimuli showed significantly more learning both within and across training sessions. These benefits of multisensory training are particularly surprising given that the learning of visual motion stimuli is generally thought to be mediated by low-level visual brain areas [6, 9, 10]. Although crossmodal interactions are ubiquitous in human perceptual processing [11-13], the contribution of crossmodal information to perceptual learning has not been studied previously. Our results show that multisensory interactions can be exploited to yield more efficient learning of sensory information and suggest that multisensory training programs would be most effective for the acquisition of new skills.

10.
During steady fixation, observers make small fixational saccades at a rate of around 1–2 per second. Presentation of a visual stimulus triggers a biphasic modulation in fixational saccade rate: an initial inhibition followed by a period of elevated rate and a subsequent return to baseline. Here we show that, during passive viewing, this rate signature is highly sensitive to small changes in stimulus contrast. By training a linear support vector machine to classify trials in which a stimulus is either present or absent, we directly compared the contrast sensitivity of fixational eye movements with individuals' psychophysical judgements. Classification accuracy closely matched psychophysical performance and predicted individuals' threshold estimates with less bias and lower overall error than estimates obtained using specific features of the signature. Performance of the classifier was robust to changes in the training set (novel subjects and/or contrasts), and good prediction accuracy was obtained with a practicable number of trials. Our results indicate a tight coupling between the sensitivity of visual perceptual judgements and fixational eye control mechanisms. This raises the possibility that fixational saccades could provide a novel and objective means of estimating visual contrast sensitivity without the need for observers to make any explicit judgement.
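The decoding idea in the abstract above can be illustrated with a toy simulation. The study trained a linear support vector machine on the saccade-rate signature; to keep the sketch dependency-free, a one-feature midpoint classifier stands in for the linear SVM, and all rates and noise levels below are invented for illustration.

```python
import random

random.seed(1)

def saccade_rate(present):
    # Stimulus onset transiently inhibits fixational saccades, so
    # "present" trials show a lower rate in the post-onset window.
    # The rates and noise level here are invented for illustration.
    base = 0.6 if present else 1.4   # saccades per second
    return base + random.gauss(0, 0.3)

present_trials = [saccade_rate(True) for _ in range(100)]
absent_trials = [saccade_rate(False) for _ in range(100)]

# One-feature linear classifier (a stand-in for the study's linear SVM):
# put the decision threshold at the midpoint between the class means.
threshold = (sum(present_trials) / 100 + sum(absent_trials) / 100) / 2
correct = (sum(r < threshold for r in present_trials)
           + sum(r >= threshold for r in absent_trials))
accuracy = correct / 200
```

On this synthetic data the midpoint rule classifies trials well above chance, mirroring the study's point that the post-onset saccadic inhibition alone carries stimulus information.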

11.
Metacognition is the ability to reflect on, and evaluate, our cognition and behaviour. Distortions in metacognition are common in mental health disorders, though the neural underpinnings of such dysfunction are unknown. One reason for this is that models of key components of metacognition, such as decision confidence, are generally specified at an algorithmic or process level. While such models can be used to relate brain function to psychopathology, they are difficult to map to a neurobiological mechanism. Here, we develop a biologically plausible model of decision uncertainty in an attempt to bridge this gap. We first relate the model's uncertainty in perceptual decisions to standard metrics of metacognition, namely mean confidence level (bias) and the accuracy of metacognitive judgments (sensitivity). We show that dissociable shifts in metacognition are associated with isolated disturbances at higher-order levels of a circuit associated with self-monitoring, akin to neuropsychological findings that highlight the detrimental effect of prefrontal brain lesions on metacognitive performance. Notably, we are able to account for empirical confidence judgements by fitting the parameters of our biophysical model to first-order performance data, specifically choice and response times. Lastly, in a reanalysis of existing data, we show that self-reported mental health symptoms relate to disturbances in an uncertainty-monitoring component of the network. By bridging the gap between a biologically plausible model of confidence formation and observed disturbances of metacognition in mental health disorders, we provide a first step towards mapping theoretical constructs of metacognition onto dynamical models of decision uncertainty. In doing so, we provide a computational framework for modelling metacognitive performance in settings where access to explicit confidence reports is not possible.

12.
Human performance on various visual tasks can be improved substantially via training. However, the enhancements are frequently specific to relatively low-level stimulus dimensions. While such specificity has often been thought to indicate a low-level neural locus of learning, recent research suggests that these same effects can be accounted for by changes in higher-level areas, in particular in the way higher-level areas read out information from lower-level areas in the service of highly practiced decisions. Here we contrast the degree of orientation transfer seen after training on two different tasks: vernier acuity and stereoacuity. Importantly, while the decision rule that could improve vernier acuity (i.e. a discriminant in the image plane) would not be transferable across orientations, the simplest rule that could be learned to solve the stereoacuity task (i.e. a discriminant in the depth plane) would be insensitive to changes in orientation. Thus, given a read-out hypothesis, more substantial transfer would be expected from stereoacuity than from vernier acuity training. To test this prediction, participants were trained (7500 total trials) on either a stereoacuity (N = 9) or vernier acuity (N = 7) task with the stimuli in either a vertical or horizontal configuration (balanced across participants). Following training, transfer to the untrained orientation was assessed. As predicted, evidence for relatively orientation-specific learning was observed in vernier-trained participants, while no evidence of specificity was observed in stereo-trained participants. These results build upon the emerging view that perceptual learning (even very specific learning effects) may reflect changes in inferences made by high-level areas, rather than necessarily fully reflecting changes in the receptive field properties of low-level areas.

13.
Subliminal perception studies have shown that one can objectively discriminate a stimulus without subjectively perceiving it. We show how a minimalist framework based on Signal Detection Theory and Bayesian inference can account for this dissociation, by describing subjective and objective tasks with similar decision-theoretic mechanisms. Each of these tasks relies on distinct response classes, and therefore distinct priors and decision boundaries. As a result, they may reach different conclusions. By formalizing, within the same framework, forced-choice discrimination responses, subjective visibility reports, and confidence ratings, we show that this decision model suffices to account for several classical characteristics of conscious and unconscious perception. Furthermore, the model provides a set of original predictions on the nonlinear profiles of discrimination performance obtained at various levels of visibility. We successfully test one such prediction in a novel experiment: when the degree of perceptual ambiguity between two visual symbols presented at perceptual threshold is varied continuously, identification performance varies quasi-linearly when the stimulus is unseen and in an 'all-or-none' manner when it is seen. The present model highlights how conscious and non-conscious decisions may correspond to distinct categorizations of the same stimulus encoded by a high-dimensional neuronal population vector.
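The dissociation described above, objective discrimination without subjective visibility, falls out of a minimal signal-detection model in which the forced-choice response and the visibility report apply different decision boundaries to the same internal evidence. The sketch below is such a minimal model; the d′ value and the visibility criterion are arbitrary illustrative parameters, not values from the study.

```python
import random

random.seed(0)

D_PRIME = 1.0         # weak stimulus; value chosen for illustration
SEEN_CRITERION = 1.5  # subjective visibility criterion (illustrative)

unseen_correct = unseen_total = 0
for _ in range(20000):
    category = random.choice((-1, 1))          # which of two symbols
    evidence = category * D_PRIME / 2 + random.gauss(0, 1)
    choice = 1 if evidence > 0 else -1         # objective forced choice
    seen = abs(evidence) > SEEN_CRITERION      # subjective visibility report
    if not seen:
        unseen_total += 1
        unseen_correct += int(choice == category)

unseen_accuracy = unseen_correct / unseen_total
```

Because the visibility report uses a stricter boundary (|evidence| > 1.5) than the forced choice (sign of evidence), accuracy on "unseen" trials stays well above 50%, the signature of subliminal discrimination.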

14.
Electrophysiological oscillations in different frequency bands co-occur with perceptual, motor, and cognitive processes, but their function and respective contributions to these processes need further investigation. Here, we recorded MEG signals and searched for percept-related modulations of alpha, beta, and gamma band activity during a perceptual form/motion integration task. Participants reported their bound or unbound perception of ambiguously moving displays that could either be seen as a whole square-like shape moving along a Lissajous figure (bound percept) or as pairs of bars oscillating independently along cardinal axes (unbound percept). We found that beta (15–25 Hz), but not gamma (55–85 Hz), oscillations index perceptual states at the individual and group level. The gamma band activity found in the occipital lobe, although significantly higher during visual stimulation than during baseline, is similar in all perceptual states. Similarly, decreased alpha activity during visual stimulation does not differ between percepts. Trial-by-trial classification of perceptual reports based on beta band oscillations was significant in most observers, further supporting the view that modulation of beta power reliably indexes perceptual integration of form/motion stimuli, even at the individual level.

15.

Background

High order cognitive processing and learning, such as reading, interact with lower-level sensory processing and learning. Previous studies have reported that visual perceptual training enlarges visual span and, consequently, improves reading speed in young and old people with amblyopia. Recently, a visual perceptual training study in Chinese-speaking children with dyslexia found that the visual texture discrimination thresholds of these children in visual perceptual training significantly correlated with their performance in Chinese character recognition, suggesting that deficits in visual perceptual processing/learning might partly underpin the difficulty in reading Chinese.

Methodology/Principal Findings

To further clarify whether visual perceptual training improves measures of reading performance, eighteen children with dyslexia and eighteen age- and IQ-matched typically developing readers completed a series of reading measures before and after training on a visual texture discrimination task (TDT). Prior to TDT training, each group of children was split into training and non-training subgroups that were equivalent in all reading measures, IQ, and TDT performance. The results revealed that before training, the discrimination threshold SOAs of the TDT were significantly higher for the children with dyslexia than for the control children. Interestingly, training significantly decreased the TDT discrimination threshold SOAs for both the typically developing readers and the children with dyslexia. More importantly, the training group with dyslexia exhibited a significant enhancement in reading fluency, while the non-training group with dyslexia showed no such improvement. Additional follow-up tests showed that the improvement in reading fluency is long-lasting and was maintained for up to two months in the training group with dyslexia.

Conclusions/Significance

These results suggest that basic visual perceptual processing/learning and reading ability in Chinese might at least partially rely on overlapping mechanisms.

16.
Jain A, Fuller S, Backus BT. PLoS ONE 2010;5(10):e13295
The visual system can learn to use information in new ways to construct appearance. Thus, signals such as the location or translation direction of an ambiguously rotating wire-frame cube, which are normally uninformative, can be learned as cues to determine the rotation direction. This perceptual learning occurs when the formerly uninformative signal is statistically associated with long-trusted visual cues (such as binocular disparity) that disambiguate appearance during training. In previous demonstrations, the newly learned cue was intrinsic to the perceived object, in that the signal was conveyed by the same image elements as the object itself. Here we used extrinsic new signals and observed no learning. We correlated three new signals with long-trusted cues in the rotating cube paradigm: one crossmodal (an auditory signal) and two within modality (visual). Cue recruitment did not occur in any of these conditions, either in single sessions or in ten sessions across as many days. These results suggest that the intrinsic/extrinsic distinction is important for the perceptual system in determining whether it can learn and use new information from the environment to construct appearance. Extrinsic cues do have perceptual effects (e.g. the "bounce-pass" illusion and the McGurk effect), so we speculate that extrinsic signals can be recruited for perception, but only if certain conditions are met. These conditions might specify the age of the observer, the strength of the long-trusted cues, or the amount of exposure to the correlation.

17.
With intensive training, humans can achieve impressive behavioral improvement on various perceptual tasks. This phenomenon, termed perceptual learning, has long been considered a hallmark of the plasticity of the sensory neural system. Not surprisingly, high-level vision, such as object perception, can also be improved by perceptual learning. Here we review recent psychophysical, electrophysiological, and neuroimaging studies investigating the effects of training on object-selective cortex, such as monkey inferior temporal cortex and the human lateral occipital area. Evidence shows that learning leads to an increase in object selectivity at the single-neuron level and/or the neuronal population level. These findings indicate that high-level visual cortex in humans is highly plastic and that visual experience can strongly shape the neural functions of these areas. At the end of the review, we discuss several important future directions in this area.

18.
Can subjective belief about one's own perceptual competence change one's perception? To address this question, we investigated the influence of self-efficacy on sensory discrimination in two low-level visual tasks: contrast and orientation discrimination. We utilised a pre-post manipulation approach whereby two experimental groups (high and low self-efficacy) and a control group made objective perceptual judgments on the contrast or the orientation of visual stimuli. High and low self-efficacy were induced by the provision of fake social-comparative performance feedback and fictional research findings. Subsequently, the post-manipulation phase was performed to assess changes in visual discrimination thresholds as a function of the self-efficacy manipulations. The results showed that the high self-efficacy group demonstrated greater improvement in visual discrimination sensitivity compared to both the low self-efficacy and control groups. These findings suggest that subjective beliefs about one's own perceptual competence can affect low-level visual processing.

19.
The sources of evidence contributing to metacognitive assessments of confidence in decision-making remain unclear. Previous research has shown that pupil dilation is related to the signaling of uncertainty in a variety of decision tasks. Here we ask whether pupil dilation is also related to metacognitive estimates of confidence. Specifically, we measure the relationship between pupil dilation and confidence during an auditory decision task, using a general linear model approach to take into account delays in the pupillary response. We found that pupil dilation responses track the inverse of confidence before, but not after, a decision is made, even when controlling for stimulus difficulty. In support of an additional post-decisional contribution to the accuracy of confidence judgments, we found that participants with better metacognitive ability, that is, more accurate appraisal of their own decisions, showed a tighter relationship between post-decisional pupil dilation and confidence. Together our findings show that a physiological index of uncertainty, pupil dilation, predicts both confidence and metacognitive accuracy for auditory decisions.

20.
We propose a novel explanation for bistable perception, namely, the collective dynamics of multiple neural populations that are individually meta-stable. Distributed representations of sensory input and of perceptual state build gradually through noise-driven transitions in these populations, until the competition between alternative representations is resolved by a threshold mechanism. The perpetual repetition of this collective race to threshold renders perception bistable. This collective dynamics, which is largely uncoupled from the time scales that govern individual populations or neurons, explains many hitherto puzzling observations about bistable perception: the wide range of mean alternation rates exhibited by bistable phenomena, the consistent variability of successive dominance periods, and the stabilizing effect of past perceptual states. It also predicts a number of previously unsuspected relationships between observable quantities characterizing bistable perception. We conclude that bistable perception reflects the collective nature of neural decision making rather than properties of individual populations or neurons.
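The race-to-threshold account above can be caricatured in a few lines of simulation: a noise-driven accumulator, standing in for the suppressed representation's climb toward dominance, rises to a switching threshold, and the first-passage times play the role of dominance durations. All parameters below are invented for illustration, not taken from the model in the paper.

```python
import random

random.seed(2)

THRESHOLD = 50.0         # switching threshold (illustrative)
DRIFT, NOISE = 0.5, 2.0  # per-step drift and noise (illustrative)

def dominance_duration():
    """Steps until a noise-driven accumulator, a stand-in for the
    suppressed population's collective race, reaches threshold."""
    x, t = 0.0, 0
    while x < THRESHOLD:
        x = max(x + DRIFT + random.gauss(0, NOISE), 0.0)  # activity >= 0
        t += 1
    return t

durations = [dominance_duration() for _ in range(500)]
mean_duration = sum(durations) / len(durations)
variance = sum((d - mean_duration) ** 2 for d in durations) / len(durations)
cv = variance ** 0.5 / mean_duration  # variability of dominance periods
```

Even with fixed parameters, the simulated dominance periods vary widely from trial to trial rather than clustering at a single value, which is the kind of "consistent variability of successive dominance periods" the collective-dynamics account is meant to explain.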
