Similar articles
1.

Background

It is well known that facial expressions are important social cues. In humans, expressions of fear may be configured to maximize sensory exposure (e.g., increased visual input), whereas expressions of disgust may reduce sensory exposure (e.g., decreased visual input). To investigate whether such effects also extend to the attentional system, we used the “attentional blink” (AB) paradigm. Many studies have documented that the second target (T2) of a pair is typically missed when presented within a time window of about 200–500 ms after the first to-be-detected target (T1; i.e., the AB effect). It has recently been proposed that the AB effect depends on the efficiency of a gating system that facilitates the entrance of relevant input into working memory while inhibiting irrelevant input. Following the inhibitory response to post-T1 distractors, prolonged inhibition of the subsequent T2 is observed. In the present study, we hypothesized that processing facial expressions of emotion would influence this attentional gating: fearful faces would increase, whereas disgust faces would decrease, inhibition of the second target.

Methodology/Principal Findings

We showed that processing fearful versus disgust faces has different effects on these attentional processes: processing fearful faces impaired the detection of T2 to a greater extent than did processing disgust faces. This finding implies emotion-specific modulation of attention.

Conclusions/Significance

Based on the recent literature on attention, our finding suggests that processing fear-related stimuli exerts greater inhibitory responses on distractors than does processing disgust-related stimuli. This finding is of particular interest for researchers examining the influence of emotional processing on attention and memory in both clinical and normal populations. For example, future research could build on the current study to examine whether the inhibitory processes invoked by fear-related stimuli underlie the enhanced learning of fear-related stimuli.

2.
Synchronized gamma frequency oscillations in neural networks are thought to be important to sensory information processing, and their effects have been intensively studied. Here we describe a mechanism by which the nervous system can readily control gamma oscillation effects, depending selectively on visual stimuli. Using a model neural network simulation, we found that sensory response in the primary visual cortex is significantly modulated by the resonance between “spontaneous” and “stimulus-driven” oscillations. This gamma resonance can be precisely controlled by the synaptic plasticity of thalamocortical connections, and cortical response is regulated differentially according to the resonance condition. The mechanism produces a selective synchronization between the afferent and downstream neural population. Our simulation results explain experimental observations such as stimulus-dependent synchronization between the thalamus and the cortex at different oscillation frequencies. The model generally shows how sensory information can be selectively routed depending on its frequency components.
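The abstract does not give the model equations, and the network in the paper is a spiking simulation; purely as a toy illustration of the resonance idea (a driven system responds most strongly when the drive frequency matches its intrinsic frequency), a damped harmonic oscillator can be sketched as follows. All parameters are illustrative assumptions, not values from the paper.

    import numpy as np

    # Toy analogy only: steady-state response amplitude of a damped harmonic
    # oscillator peaks when the drive frequency matches its natural
    # ("spontaneous") frequency. Parameters are illustrative.
    f0 = 40.0                                  # intrinsic gamma frequency (Hz)
    gamma = 30.0                               # damping rate (1/s)
    w0 = 2 * np.pi * f0
    f_drive = np.linspace(20.0, 60.0, 401)     # "stimulus-driven" frequencies (Hz)
    w = 2 * np.pi * f_drive
    amplitude = 1.0 / np.sqrt((w0**2 - w**2)**2 + (gamma * w)**2)
    print("peak response near", f_drive[np.argmax(amplitude)], "Hz")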

3.
In the premature infant, somatosensory and visual stimuli trigger an immature electroencephalographic (EEG) pattern, “delta-brushes,” in the corresponding sensory cortical areas. Whether auditory stimuli evoke delta-brushes in the premature auditory cortex has not been reported. Here, responses to auditory stimuli were studied in 46 premature infants without neurologic risk, aged 31 to 38 postmenstrual weeks (PMW), during routine EEG recording. Stimuli consisted of either low-volume technogenic “clicks” near the background noise level of the neonatal care unit, or a human voice at conversational sound level. Stimuli were administered pseudo-randomly during quiet and active sleep. In another protocol, the cortical response to a composite stimulus (“click” and voice) was manually triggered during EEG hypoactive periods of quiet sleep. Cortical responses were analyzed by event detection, power-frequency analysis and stimulus-locked averaging. Before 34 PMW, both voice and “click” stimuli evoked cortical responses with similar frequency-power topographic characteristics, namely a temporal negative slow-wave and rapid oscillations similar to spontaneous delta-brushes. Responses to composite stimuli also showed a maximal frequency-power increase in temporal areas before 35 PMW. From 34 PMW, the topography of responses in quiet sleep differed for “click” and voice stimuli: responses to “clicks” became diffuse, but responses to voice remained limited to temporal areas. After the age of 35 PMW, auditory evoked delta-brushes progressively disappeared and were replaced by a low-amplitude response in the same location. Our data show that auditory stimuli mimicking ambient sounds efficiently evoke delta-brushes in temporal areas in the premature infant before 35 PMW. Along with findings in other sensory modalities (visual and somatosensory), these findings suggest that sensory-driven delta-brushes represent a ubiquitous feature of the human sensory cortex during fetal stages and provide a potential test of functional cortical maturation during fetal development.
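As a generic illustration of the stimulus-locked averaging step mentioned above (not the authors' actual pipeline; sampling rate, window and variable names are assumptions), the core operation is simply epoching around stimulus onsets and averaging across trials:

    import numpy as np

    # Minimal sketch of stimulus-locked averaging on a single EEG channel.
    # All sizes and names are illustrative, not taken from the study.
    fs = 256                                        # sampling rate (Hz), assumed
    eeg = np.random.randn(120 * fs)                 # 2 minutes of toy EEG data
    onsets = np.array([10, 30, 50, 70, 90]) * fs    # stimulus onsets (samples)
    pre, post = int(0.5 * fs), int(2.0 * fs)        # window: -0.5 s to +2.0 s
    epochs = np.stack([eeg[s - pre:s + post] for s in onsets])
    evoked = epochs.mean(axis=0)                    # stimulus-locked average
    print(evoked.shape)                             # (pre + post,) samples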

4.
We show how anomalous time reversal of stimuli and their associated responses can exist in very small connectionist models. These networks are built from dynamical toy model neurons which adhere to a minimal set of biologically plausible properties. The appearance of a “ghost” response, temporally and spatially located in between responses caused by actual stimuli, as in the phi phenomenon, is demonstrated in a similar small network, where it is caused by priming and long-distance feedforward paths. We then demonstrate that the color phi phenomenon can be present in an echo state network, a recurrent neural network, without explicitly training for the presence of the effect, such that it emerges as an artifact of the dynamical processing. Our results suggest that the color phi phenomenon might simply be a feature of the inherent dynamical and nonlinear sensory processing in the brain and in and of itself is not related to consciousness.
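The abstract names an echo state network but gives no architecture details; a generic ESN skeleton (reservoir size, scaling, and input dimensionality below are assumptions for illustration, not the authors' setup) looks like this, with only a linear readout trained on the reservoir states:

    import numpy as np

    # Generic echo state network skeleton (illustrative only).
    rng = np.random.default_rng(0)
    n_in, n_res = 3, 200
    W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
    W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius < 1

    def run_reservoir(inputs):                        # inputs: array (T, n_in)
        x = np.zeros(n_res)
        states = []
        for u in inputs:
            x = np.tanh(W @ x + W_in @ u)             # fixed recurrent dynamics
            states.append(x.copy())
        return np.array(states)

    states = run_reservoir(rng.uniform(size=(100, n_in)))
    # A readout (e.g., ridge regression from `states` to the desired outputs)
    # is the only trained component; the reservoir weights stay fixed.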

5.
Adaptive sequential behavior is a hallmark of human cognition. In particular, humans can learn to produce precise spatiotemporal sequences given a certain context. For instance, musicians can not only reproduce learned action sequences in a context-dependent manner, they can also quickly and flexibly reapply them in any desired tempo or rhythm without overwriting previous learning. Existing neural network models fail to account for these properties. We argue that this limitation emerges from the fact that sequence information (i.e., the position of the action) and timing (i.e., the moment of response execution) are typically stored in the same neural network weights. Here, we augment a biologically plausible recurrent neural network of cortical dynamics to include a basal ganglia-thalamic module which uses reinforcement learning to dynamically modulate action. This “associative cluster-dependent chain” (ACDC) model modularly stores sequence and timing information in distinct loci of the network. This feature increases computational power and allows ACDC to display a wide range of temporal properties (e.g., multiple sequences, temporal shifting, rescaling, and compositionality), while still accounting for several behavioral and neurophysiological empirical observations. Finally, we apply this ACDC network to show how it can learn the famous “Thunderstruck” song intro and then flexibly play it in a “bossa nova” rhythm without further training.

6.

Background

When two targets are presented in close temporal proximity amongst a rapid serial visual stream of distractors, a period of disrupted attention and attenuated awareness lasting 200–500 ms follows identification of the first target (T1). This phenomenon is known as the “attentional blink” (AB) and is generally attributed to a failure to consolidate information in visual short-term memory due to depleted or disrupted attentional resources. Previous research has shown that items presented during the AB that fail to reach conscious awareness are still processed to relatively high levels, including the level of meaning. For example, missed word stimuli have been shown to prime later targets that are closely associated words. Although these findings have been interpreted as evidence for semantic processing during the AB, closely associated words (e.g., day-night) may also rely on specific, well-worn lexical associative links that enhance attention to the relevant target.

Methodology/Principal Findings

We used a measure of semantic distance to create prime-target pairs that are conceptually close but have low word associations (e.g., wagon and van), and investigated priming from a distractor stimulus presented during the AB to a subsequent target (T2). The stimuli were words (concrete nouns) in Experiment 1 and the corresponding pictures of objects in Experiment 2. In both experiments, report of T2 was facilitated when this item was preceded by a semantically related distractor.

Conclusions/Significance

This study is the first to show conclusively that conceptual information is extracted from distractor stimuli presented during a period of attenuated awareness and that this information spreads to neighbouring concepts within a semantic network.

7.
Decoding human speech requires both perception and integration of brief, successive auditory stimuli that enter the central nervous system, as well as the allocation of attention to language-relevant signals. This study assesses the role of attention in processing rapid transient stimuli in adults and children. Cortical responses (EEG/ERPs), specifically mismatch negativity (MMN) responses, to paired tones (standard 100–100 Hz; deviant 100–300 Hz) separated by a 300, 70 or 10 ms silent gap (ISI) were recorded under Ignore and Attend conditions in 21 adults and 23 children (6–11 years old). In adults, an attention-related enhancement was found for all rate conditions, and laterality effects (L>R) were observed. In children, two auditory discrimination-related peaks were identified from the difference wave (deviant − standard): an early peak (eMMN) at about 100–300 ms indexing sensory processing, and a later peak (LDN), at about 400–600 ms, thought to reflect reorientation to the deviant stimuli or “second-look” processing. Results revealed differing patterns of activation and attention modulation for the eMMN in children as compared to the MMN in adults: the eMMN had a more frontal topography than in adults, and attention played a significantly greater role in children's rate processing. The pattern of findings for the LDN was consistent with hypothesized mechanisms related to further processing of complex stimuli. The differences between the eMMN and LDN observed here support the premise that separate cognitive processes and mechanisms underlie these ERP peaks. These findings are the first to show that the eMMN and LDN differ under different temporal and attentional conditions, and that a more complete understanding of children's responses to rapid successive auditory stimulation requires an examination of both peaks.
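For readers unfamiliar with the difference-wave logic, the computation is simply the deviant average minus the standard average, with amplitudes quantified in the latency windows quoted above. The sketch below uses toy data and assumed sampling parameters, not the study's recordings:

    import numpy as np

    # Difference-wave sketch: eMMN and LDN windows follow the latencies in the
    # abstract; data, sampling rate and epoch counts are toy assumptions.
    fs = 500                                      # sampling rate (Hz), assumed
    t = np.arange(-0.1, 0.8, 1.0 / fs)            # epoch time axis (s)
    standard = np.random.randn(300, t.size)       # ERP epochs, standard pairs
    deviant = np.random.randn(60, t.size)         # ERP epochs, deviant pairs
    diff_wave = deviant.mean(axis=0) - standard.mean(axis=0)
    emmn = diff_wave[(t >= 0.1) & (t <= 0.3)].mean()   # early MMN (100-300 ms)
    ldn = diff_wave[(t >= 0.4) & (t <= 0.6)].mean()    # late negativity (400-600 ms)
    print(emmn, ldn)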

8.
In addition to impairments in social communication and the presence of restricted interests and repetitive behaviors, deficits in sensory processing are now recognized as a core symptom in autism spectrum disorder (ASD). Our ability to perceive and interact with the external world is rooted in sensory processing. For example, listening to a conversation entails processing the auditory cues coming from the speaker (speech content, prosody, syntax) as well as the associated visual information (facial expressions, gestures). Collectively, the “integration” of these multisensory (i.e., combined audiovisual) pieces of information results in better comprehension. Such multisensory integration has been shown to be strongly dependent upon the temporal relationship of the paired stimuli. Thus, stimuli that occur in close temporal proximity are highly likely to result in behavioral and perceptual benefits – gains believed to be reflective of the perceptual system's judgment of the likelihood that these two stimuli came from the same source. Changes in this temporal integration are expected to strongly alter perceptual processes, and are likely to diminish the ability to accurately perceive and interact with our world. Here, a battery of tasks designed to characterize various aspects of sensory and multisensory temporal processing in children with ASD is described. In addition to its utility in autism, this battery has great potential for characterizing changes in sensory function in other clinical populations, as well as being used to examine changes in these processes across the lifespan.

9.
The use of naturalistic stimuli to probe sensory functions in the human brain is gaining increasing interest. Previous imaging studies examined brain activity associated with the processing of cinematographic material using both standard “condition-based” designs and “computational” methods based on the extraction of time-varying features of the stimuli (e.g. motion). Here, we exploited both approaches to investigate the neural correlates of complex visual and auditory spatial signals in cinematography. In the first experiment, the participants watched a piece of a commercial movie presented in four blocked conditions: 3D vision with surround sound (3D-Surround), 3D with monaural sound (3D-Mono), 2D-Surround, and 2D-Mono. In the second experiment, they watched two different segments of the movie, both presented continuously in 3D-Surround. The blocked presentation served for standard condition-based analyses, while all datasets were submitted to computation-based analyses. The latter assessed where activity co-varied with visual disparity signals and with the complexity of auditory multi-source signals. The blocked analyses associated 3D viewing with activation of the dorsal and lateral occipital cortex and superior parietal lobule, while the surround sounds activated the superior and middle temporal gyri (S/MTG). The computation-based analyses revealed effects of absolute disparity in dorsal occipital and posterior parietal cortices and of disparity gradients in the posterior middle temporal gyrus plus the inferior frontal gyrus. The complexity of the surround sounds was associated with activity in specific sub-regions of S/MTG, even after accounting for changes of sound intensity. These results demonstrate that the processing of naturalistic audio-visual signals entails an extensive set of visual and auditory areas, and that computation-based analyses can track the contribution of complex spatial aspects characterizing such life-like stimuli.

10.
To form a veridical percept of the environment, the brain needs to integrate sensory signals from a common source but segregate those from independent sources. Thus, perception inherently relies on solving the “causal inference problem.” Behaviorally, humans solve this problem optimally as predicted by Bayesian Causal Inference; yet, the underlying neural mechanisms are unexplored. Combining psychophysics, Bayesian modeling, functional magnetic resonance imaging (fMRI), and multivariate decoding in an audiovisual spatial localization task, we demonstrate that Bayesian Causal Inference is performed by a hierarchy of multisensory processes in the human brain. At the bottom of the hierarchy, in auditory and visual areas, location is represented on the basis that the two signals are generated by independent sources (= segregation). At the next stage, in posterior intraparietal sulcus, location is estimated under the assumption that the two signals are from a common source (= forced fusion). Only at the top of the hierarchy, in anterior intraparietal sulcus, the uncertainty about the causal structure of the world is taken into account and sensory signals are combined as predicted by Bayesian Causal Inference. Characterizing the computational operations of signal interactions reveals the hierarchical nature of multisensory perception in human neocortex. It unravels how the brain accomplishes Bayesian Causal Inference, a statistical computation fundamental for perception and cognition. Our results demonstrate how the brain combines information in the face of uncertainty about the underlying causal structure of the world.
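A standard formalization of Bayesian Causal Inference for audiovisual localization (model averaging over the common-cause and independent-causes hypotheses, following Körding et al., 2007) is sketched below; the numerical parameters are illustrative, and the paper's exact implementation may differ in detail:

    import numpy as np

    # Standard Bayesian Causal Inference (model averaging) for audiovisual
    # localization, after Kording et al. (2007). Parameters are illustrative.
    x_a, x_v = 10.0, 4.0                   # noisy auditory / visual locations (deg)
    va, vv, vp = 8.0**2, 2.0**2, 15.0**2   # sensory and spatial-prior variances
    p_c = 0.5                              # prior probability of a common cause

    denom = va * vv + va * vp + vv * vp
    like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va) / denom) \
              / (2 * np.pi * np.sqrt(denom))
    like_c2 = np.exp(-0.5 * (x_a**2 / (va + vp) + x_v**2 / (vv + vp))) \
              / (2 * np.pi * np.sqrt((va + vp) * (vv + vp)))
    post_c1 = like_c1 * p_c / (like_c1 * p_c + like_c2 * (1 - p_c))

    s_fused = (x_a / va + x_v / vv) / (1 / va + 1 / vv + 1 / vp)   # forced fusion
    s_seg_a = (x_a / va) / (1 / va + 1 / vp)                       # segregation (auditory)
    s_audio = post_c1 * s_fused + (1 - post_c1) * s_seg_a          # model averaging
    print(post_c1, s_audio)

The three quantities mirror the three levels described in the abstract: the segregated estimate, the forced-fusion estimate, and the final model-averaged (Bayesian Causal Inference) estimate.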

11.
How does the brain process sensory stimuli, and decide whether to initiate locomotor behaviour? To investigate this question we develop two whole body computer models of a tadpole. The “Central Nervous System” (CNS) model uses evidence from whole-cell recording to define 2300 neurons in 12 classes to study how sensory signals from the skin initiate and stop swimming. In response to skin stimulation, it generates realistic sensory pathway spiking and shows how hindbrain sensory memory populations on each side can compete to initiate reticulospinal neuron firing and start swimming. The 3-D “Virtual Tadpole” (VT) biomechanical model with realistic muscle innervation, body flexion, body-water interaction, and movement is then used to evaluate if motor nerve outputs from the CNS model can produce swimming-like movements in a volume of “water”. We find that the whole tadpole VT model generates reliable and realistic swimming. Combining these two models opens new perspectives for experiments.

12.
Auditory feedback is required to maintain fluent speech. At present, it is unclear how attention modulates auditory feedback processing during ongoing speech. In this event-related potential (ERP) study, participants vocalized /a/ while they heard their vocal pitch suddenly shifted downward by ½ semitone, in both single- and dual-task conditions. During the single-task condition, participants passively viewed a visual stream for cues to start and stop vocalizing. In the dual-task condition, participants vocalized while they identified target stimuli in a visual stream of letters. The presentation rate of the visual stimuli was manipulated in the dual-task condition in order to produce a low, intermediate, and high attentional load. Visual target identification accuracy was lowest in the high attentional load condition, indicating that attentional load was successfully manipulated. Results further showed that participants who were exposed to the single-task condition prior to the dual-task condition produced larger vocal compensations during the single-task condition. Thus, when participants' attention was divided, less attention was available for the monitoring of their auditory feedback, resulting in smaller compensatory vocal responses. However, P1-N1-P2 ERP responses were not affected by divided attention, suggesting that the effect of attentional load was not on the auditory processing of pitch-altered feedback; instead, it interfered with the integration of auditory and motor information, or with motor control itself.
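For reference (this arithmetic is not spelled out in the abstract), a downward shift of ½ semitone, i.e. 50 cents, corresponds to multiplying the fundamental frequency by

    f_{\text{shifted}} = f_0 \cdot 2^{-0.5/12} = f_0 \cdot 2^{-1/24} \approx 0.9715\, f_0,

a decrease of roughly 2.9%.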

13.
Fluctuations in the temporal durations of sensory signals constitute a major source of variability within natural stimulus ensembles. The neuronal mechanisms through which sensory systems can stabilize perception against such fluctuations are largely unknown. An intriguing instantiation of such robustness occurs in human speech perception, which relies critically on temporal acoustic cues that are embedded in signals with highly variable duration. Across different instances of natural speech, auditory cues can undergo temporal warping that ranges from 2-fold compression to 2-fold dilation without significant perceptual impairment. Here, we report that time-warp–invariant neuronal processing can be subserved by the shunting action of synaptic conductances that automatically rescales the effective integration time of postsynaptic neurons. We propose a novel spike-based learning rule for synaptic conductances that adjusts the degree of synaptic shunting to the temporal processing requirements of a given task. Applying this general biophysical mechanism to the example of speech processing, we propose a neuronal network model for time-warp–invariant word discrimination and demonstrate its excellent performance on a standard benchmark speech-recognition task. Our results demonstrate the important functional role of synaptic conductances in spike-based neuronal information processing and learning. The biophysics of temporal integration at neuronal membranes can endow sensory pathways with powerful time-warp–invariant computational capabilities.
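The biophysical intuition can be made explicit with the standard conductance-based membrane equation (textbook biophysics, not a formula quoted from the abstract): with a leak conductance g_L and a shunting synaptic conductance g_s(t), the membrane potential V obeys

    C_m \frac{dV}{dt} = -g_L\,(V - E_L) - g_s(t)\,(V - E_s) + I(t),
    \qquad \tau_{\mathrm{eff}} = \frac{C_m}{g_L + g_s(t)},

so increasing the total shunting conductance shortens the effective integration time constant; this is the quantity the proposed learning rule adjusts to match the temporal scale of the input.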

14.
The relation of gamma-band synchrony to holistic perception, with regard to the effects of sensory processing, high-level perceptual gestalt formation, motor planning and response, is still controversial. To provide a more direct link to emergent perceptual states, we used holistic EEG/ERP paradigms in which the moment of perceptual “discovery” of a global pattern was variable. Using rapid visual presentation of short-lived Mooney objects, we found an increase of gamma-band activity locked to perceptual events. Additional experiments using dynamic Mooney stimuli showed that gamma activity increases well before the report of an emergent holistic percept. To confirm these findings in a data-driven manner, we further used a support vector machine classification approach to distinguish between perceptual vs. non-perceptual states, based on time-frequency features. Sensitivity, specificity and accuracy were all above 95%. Modulations in the 30–75 Hz range were larger for perception states. Interestingly, phase synchrony was also larger for perception states in high frequency bands. By focusing on global gestalt mechanisms instead of local processing, we conclude that gamma-band activity and synchrony provide a signature of holistic perceptual states of variable onset, which are separable from sensory and motor processing.
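A minimal sketch of the classification step using scikit-learn is shown below; the toy data, feature extraction, kernel, and cross-validation scheme are assumptions rather than the study's exact settings:

    import numpy as np
    from sklearn.model_selection import cross_val_predict
    from sklearn.svm import SVC

    # Toy sketch: classify perception vs. non-perception states from
    # time-frequency features. Sizes, kernel and CV scheme are assumptions.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 50))        # trials x time-frequency features
    y = rng.integers(0, 2, 200)               # 1 = holistic percept reported
    pred = cross_val_predict(SVC(kernel="rbf", C=1.0), X, y, cv=5)
    accuracy = np.mean(pred == y)
    sensitivity = np.mean(pred[y == 1] == 1)
    specificity = np.mean(pred[y == 0] == 0)
    print(accuracy, sensitivity, specificity)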

15.
Sensory deprivation has long been known to cause hallucinations or “phantom” sensations, the most common of which is tinnitus induced by hearing loss, affecting 10–20% of the population. An observable hearing loss, causing auditory sensory deprivation over a band of frequencies, is present in over 90% of people with tinnitus. Existing plasticity-based computational models for tinnitus are usually driven by homeostatic mechanisms, modeled to fit phenomenological findings. Here, we use an objective-driven learning algorithm to model an early auditory processing neuronal network, e.g., in the dorsal cochlear nucleus. The learning algorithm maximizes the network’s output entropy by learning the feed-forward and recurrent interactions in the model. We show that the connectivity patterns and responses learned by the model display several hallmarks of early auditory neuronal networks. We further demonstrate that attenuation of peripheral inputs drives the recurrent network towards its critical point and a transition into a tinnitus-like state. In this state, the network activity resembles responses to genuine inputs even in the absence of external stimulation, namely, it “hallucinates” auditory responses. These findings demonstrate how objective-driven plasticity mechanisms that normally act to optimize the network’s input representation can also elicit pathologies such as tinnitus as a result of sensory deprivation.
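The abstract specifies the objective (maximizing output entropy) but not the update rule; as an illustration of what an entropy-maximizing ("infomax") rule looks like in the simplest possible case, the classic single-input, single-output rule of Bell and Sejnowski (1995) for a logistic unit y = 1/(1 + e^{-(wx + w_0)}) is

    \Delta w \;\propto\; \frac{1}{w} + x\,(1 - 2y), \qquad \Delta w_0 \;\propto\; 1 - 2y,

which stretches and centers the nonlinearity so that the output distribution approaches uniformity (maximum entropy). The model in the paper applies the same kind of objective to the feed-forward and recurrent weights of a full network, which this toy rule does not capture.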

16.
Brains were built by evolution to react swiftly to environmental challenges. Thus, sensory stimuli must be processed ad hoc, i.e., independent—to a large extent—of the momentary brain state incidentally prevailing during stimulus occurrence. Accordingly, computational neuroscience strives to model the robust processing of stimuli in the presence of dynamical cortical states. A pivotal feature of ongoing brain activity is the regional predominance of EEG eigenrhythms, such as the occipital alpha or the pericentral mu rhythm, both peaking spectrally at 10 Hz. Here, we establish a novel generalized concept for measuring event-related desynchronization (ERD), which allows one to model neural oscillatory dynamics also in the presence of dynamical cortical states. Specifically, we demonstrate that a somatosensory stimulus causes a stereotypic sequence of first an ERD and then an ensuing amplitude overshoot (event-related synchronization), which at a dynamical cortical state becomes evident only if the natural relaxation dynamics of unperturbed EEG rhythms is used as the reference dynamics. Moreover, this computational approach also encompasses the more general notion of a “conditional ERD,” through which candidate explanatory variables can be scrutinized with regard to their possible impact on a particular oscillatory dynamics under study. Thus, the generalized ERD represents a powerful novel analysis tool for extending our understanding of the inter-trial variability of evoked responses and therefore of the robust processing of environmental stimuli.
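For orientation, the classical (non-generalized) ERD measure that this work extends is commonly defined as the relative band-power change with respect to a pre-stimulus reference interval (Pfurtscheller's convention; not a formula quoted from the abstract):

    \mathrm{ERD}(t)\,[\%] = \frac{A(t) - A_{\mathrm{ref}}}{A_{\mathrm{ref}}} \times 100,

where A(t) is the band power at time t after the stimulus and A_ref is the mean band power in the reference interval; negative values (a power decrease) correspond to desynchronization, and positive values to the synchronization overshoot mentioned above.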

17.

Background

When two tasks are presented within a short interval, a delay in the execution of the second task has been systematically observed. Psychological theorizing has argued that while sensory and motor operations can proceed in parallel, the coordination between these modules establishes a processing bottleneck. This model predicts that the timing, but not the characteristics (duration, precision, variability…), of each processing stage is affected by interference. Thus, a critical test of this hypothesis is to explore whether the quality of the decision is unaffected by a concurrent task.

Methodology/Principal Findings

In number comparison–as in most decision comparison tasks with a scalar measure of the evidence–the extent to which two stimuli can be discriminated is determined by their ratio, referred to as the Weber fraction. We investigated performance in a rapid succession of two non-symbolic comparison tasks (number comparison and tone discrimination) in which error rates in both tasks could be manipulated parametrically from chance to almost perfect. We observed that dual-task interference has a massive effect on RT but does not affect the error rates, or the distribution of errors as a function of the evidence.

Conclusions/Significance

Our results imply that while the decision process itself is delayed during multiple task execution, its workings are unaffected by task interference, providing strong evidence in favor of a sequential model of task execution.
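One standard way to formalize the ratio dependence mentioned above (an illustrative assumption, not a formula quoted from the abstract) places the internal magnitude representations on a logarithmic scale with Gaussian noise of width w, the Weber fraction; the probability of correctly comparing two numerosities n_1 and n_2 is then

    P(\text{correct}) = \Phi\!\left(\frac{\lvert \ln(n_1/n_2)\rvert}{\sqrt{2}\,w}\right),

where Φ is the standard normal cumulative distribution, so performance depends only on the ratio n_1/n_2 and on w.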

18.
Although serotonin is known to play an important role in pain processing, the relationship between the polymorphism in 5-HTTLPR and pain processing is not well understood. To examine the relationship more comprehensively, various factors of pain processing having putative associations with 5-HT functioning were studied, namely the subjective pain experience (pain threshold, rating of experimental pain), catastrophizing about pain (Pain Catastrophizing Scale = PCS) and motor responsiveness (facial expression of pain). In 60 female and 67 male participants, heat pain stimuli were applied by a contact thermode to assess pain thresholds, supra-threshold ratings and a composite score of pain-relevant facial responses. Participants also completed the PCS and were grouped based on their 5-HTTLPR genotype (bi-allelic evaluation) into a group with s-allele carriers (ss, sl) and a second group without (ll). S-allele carriers proved to have lower pain thresholds and higher PCS scores. These two positive findings were unrelated to each other. No other difference between genotype groups became significant. In all analyses, “age” and “gender” were controlled for. In s-allele carriers the subjective pain experience and the tendency to catastrophize about pain were enhanced, suggesting that the s-allele might be a risk factor for the development and maintenance of pain. This risk factor seems to act via two independent routes, namely via the sensory processes of subjective pain experiences and via the booster effects of pain catastrophizing.

19.
Why is the human brain fundamentally limited when attempting to execute two tasks at the same time or in close succession? Two classical paradigms, psychological refractory period (PRP) and task switching, have independently approached this issue, making significant advances in our understanding of the architecture of cognition. Yet, there is an apparent contradiction between the conclusions derived from these two paradigms. The PRP paradigm, on the one hand, suggests that the simultaneous execution of two tasks is limited solely by a passive structural bottleneck in which the tasks are executed on a first-come, first-served basis. The task-switching paradigm, on the other hand, argues that switching back and forth between task configurations must be actively controlled by a central executive system (the system controlling voluntary, planned, and flexible action). Here we have explicitly designed an experiment mixing the essential ingredients of both paradigms: task uncertainty and task simultaneity. In addition to a central bottleneck, we obtain evidence for active processes of task setting (planning of the appropriate sequence of actions) and task disengaging (suppression of the plan set for the first task in order to proceed with the next one). Our results clarify the chronometric relations between these central components of dual-task processing, and in particular whether they operate serially or in parallel. On this basis, we propose a hierarchical model of cognitive architecture that provides a synthesis of task-switching and PRP paradigms.
