Similar Articles (20 results)
1.
Initiating an eye movement towards a suddenly appearing visual target is faster when an accessory auditory stimulus occurs in close spatiotemporal vicinity. Such facilitation of saccadic reaction time (SRT) is well-documented, but the exact neural mechanisms underlying the crossmodal effect remain to be elucidated. From EEG/MEG studies it has been hypothesized that coupled oscillatory activity in primary sensory cortices regulates multisensory processing. Specifically, it is assumed that the phase of an ongoing neural oscillation is shifted due to the occurrence of a sensory stimulus so that, across trials, phase values become highly consistent (phase reset). If one can identify the phase an oscillation is reset to, it is possible to predict when temporal windows of high and low excitability will occur. However, in behavioral experiments the pre-stimulus phase will be different on successive repetitions of the experimental trial, and average performance over many trials will show no signs of the modulation. Here we circumvent this problem by repeatedly presenting an auditory accessory stimulus followed by a visual target stimulus with a temporal delay varied in steps of 2 ms. Performing a discrete time series analysis on SRT as a function of the delay, we provide statistical evidence for the existence of distinct peak spectral components in the power spectrum. These frequencies, although varying across participants, fall within the beta and gamma range (20 to 40 Hz) of neural oscillatory activity observed in neurophysiological studies of multisensory integration. Some evidence for high-theta/alpha activity was found as well. Our results are consistent with the phase reset hypothesis and demonstrate that it is amenable to testing by purely psychophysical methods. Thus, any theory of multisensory processes that connects specific brain states with patterns of saccadic responses should be able to account for traces of oscillatory activity in observable behavior.
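A minimal sketch of the kind of delay-series spectral analysis described above, assuming mean SRT has already been computed for each audio-visual delay: the series (sampled every 2 ms) is detrended and its power spectrum is searched for peaks in the 20–40 Hz band. Function names and the simulated data are illustrative assumptions, not taken from the study.

```python
import numpy as np
from scipy.signal import periodogram, detrend

def srt_power_spectrum(srt_ms, delay_step_ms=2.0):
    """Power spectrum of mean SRT treated as a function of AV delay.

    srt_ms: 1-D array of mean saccadic RTs, one per delay value,
            with delays spaced delay_step_ms apart.
    """
    fs = 1000.0 / delay_step_ms            # "sampling rate" of the delay axis, in Hz
    series = detrend(np.asarray(srt_ms))   # remove linear trend before the FFT
    freqs, power = periodogram(series, fs=fs)
    return freqs, power

# Simulated example: recover a 30 Hz modulation of SRT across delays
rng = np.random.default_rng(0)
delays = np.arange(0, 200, 2)              # 0-198 ms in 2 ms steps
srt = 180 - 5 * np.sin(2 * np.pi * 30 * delays / 1000) + rng.normal(0, 2, delays.size)
freqs, power = srt_power_spectrum(srt)
band = (freqs >= 20) & (freqs <= 40)
print("peak in 20-40 Hz band:", freqs[band][np.argmax(power[band])], "Hz")
```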

2.
Audiovisual integration of speech falters under high attention demands
One of the most commonly cited examples of human multisensory integration occurs during exposure to natural speech, when the vocal and the visual aspects of the signal are integrated in a unitary percept. Audiovisual association of facial gestures and vocal sounds has been demonstrated in nonhuman primates and in prelinguistic children, arguing for a general basis for this capacity. One critical question, however, concerns the role of attention in such multisensory integration. Although both behavioral and neurophysiological studies have converged on a preattentive conceptualization of audiovisual speech integration, this mechanism has rarely been measured under conditions of high attentional load, when the observers' attention resources are depleted. We tested the extent to which audiovisual integration was modulated by the amount of available attentional resources by measuring the observers' susceptibility to the classic McGurk illusion in a dual-task paradigm. The proportion of visually influenced responses was severely, and selectively, reduced if participants were concurrently performing an unrelated visual or auditory task. In contrast with the assumption that crossmodal speech integration is automatic, our results suggest that these multisensory binding processes are subject to attentional demands.

3.
Humans and animals readily generalize previously learned knowledge to new situations. Determining similarity is critical for assigning category membership to a novel stimulus. We tested the hypothesis that category membership is initially encoded by the similarity of the activity pattern evoked by a novel stimulus to the patterns from known categories. We provide behavioral and neurophysiological evidence that activity patterns in primary auditory cortex contain sufficient information to explain behavioral categorization of novel speech sounds by rats. Our results suggest that category membership might be encoded by the similarity of the activity pattern evoked by a novel speech sound to the patterns evoked by known sounds. Categorization based on featureless pattern matching may represent a general neural mechanism for ensuring accurate generalization across sensory and cognitive systems.

4.
Evidence from human neuroimaging and animal electrophysiological studies suggests that signals from different sensory modalities interact early in cortical processing, including in primary sensory cortices. The present study aimed to test whether functional near-infrared spectroscopy (fNIRS), an emerging, non-invasive neuroimaging technique, is capable of measuring such multisensory interactions. Specifically, we tested for a modulatory influence of sounds on activity in visual cortex, while varying the temporal synchrony between trains of transient auditory and visual events. Related fMRI studies have consistently reported enhanced activation in response to synchronous compared to asynchronous audiovisual stimulation. Unexpectedly, we found that synchronous sounds significantly reduced the fNIRS response from visual cortex, compared both to asynchronous sounds and to a visual-only baseline. It is possible that this suppressive effect of synchronous sounds reflects the use of an efficacious visual stimulus, chosen for consistency with previous fNIRS studies. Discrepant results may also be explained by differences between studies in how attention was deployed to the auditory and visual modalities. The presence and relative timing of sounds did not significantly affect performance in a simultaneously conducted behavioral task, although the data were suggestive of a positive relationship between the strength of the fNIRS response from visual cortex and the accuracy of visual target detection. Overall, the present findings indicate that fNIRS is capable of measuring multisensory cortical interactions. In multisensory research, fNIRS can offer complementary information to the more established neuroimaging modalities, and may prove advantageous for testing in naturalistic environments and with infant and clinical populations.

5.
The ability to integrate information across multiple sensory systems offers several behavioral advantages, from quicker reaction times and more accurate responses to better detection and more robust learning. At the neural level, multisensory integration requires large-scale interactions between different brain regions: the convergence of information from separate sensory modalities, represented by distinct neuronal populations. The interactions between these neuronal populations must be fast and flexible, so that behaviorally relevant signals belonging to the same object or event can be immediately integrated and integration of unrelated signals can be prevented. Looming signals are a particular class of signals that are behaviorally relevant for animals and that occur in both the auditory and visual domain. These signals indicate the rapid approach of objects and provide highly salient warning cues about impending impact. We show here that multisensory integration of auditory and visual looming signals may be mediated by functional interactions between auditory cortex and the superior temporal sulcus, two areas involved in integrating behaviorally relevant auditory-visual signals. Audiovisual looming signals elicited increased gamma-band coherence between these areas, relative to unimodal or receding-motion signals. This suggests that the neocortex uses fast, flexible intercortical interactions to mediate multisensory integration.

6.
Animals can make faster behavioral responses to multisensory stimuli than to unisensory stimuli. The superior colliculus (SC), which receives multiple inputs from different sensory modalities, is considered to be involved in the initiation of motor responses. However, the mechanism by which multisensory information facilitates motor responses is not yet understood. Here, we demonstrate that multisensory information modulates competition among SC neurons to elicit faster responses. We conducted multiunit recordings from the SC of rats performing a two-alternative spatial discrimination task using auditory and/or visual stimuli. We found that a large population of SC neurons showed direction-selective activity before the onset of movement in response to the stimuli irrespective of stimulation modality. Trial-by-trial correlation analysis showed that the premovement activity of many SC neurons increased with faster reaction speed for the contraversive movement, whereas the premovement activity of another population of neurons decreased with faster reaction speed for the ipsiversive movement. When visual and auditory stimuli were presented simultaneously, the premovement activity of a population of neurons for the contraversive movement was enhanced, whereas the premovement activity of another population of neurons for the ipsiversive movement was depressed. Unilateral inactivation of SC using muscimol prolonged reaction times of contraversive movements, but it shortened those of ipsiversive movements. These findings suggest that the difference in activity between the SC hemispheres regulates the reaction speed of motor responses, and multisensory information enlarges the activity difference resulting in faster responses.

7.
Debate currently exists regarding the interplay between multisensory processes and bottom-up and top-down influences. However, few studies have looked at neural responses to newly paired audiovisual stimuli that differ in their prescribed relevance. For such newly associated audiovisual stimuli, optimal facilitation of motor actions was observed only when both components of the audiovisual stimuli were targets. Relevant auditory stimuli were found to significantly increase the amplitudes of the event-related potentials at the occipital pole during the first 100 ms post-stimulus onset, though this early integration was not predictive of multisensory facilitation. Activity related to multisensory behavioral facilitation was observed approximately 166 ms post-stimulus, at left central and occipital sites. Furthermore, optimal multisensory facilitation was found to be associated with a latency shift of induced oscillations in the beta range (14–30 Hz) at right hemisphere parietal scalp regions. These findings demonstrate the importance of stimulus relevance to multisensory processing by providing the first evidence that the neural processes underlying multisensory integration are modulated by the relevance of the stimuli being combined. We also provide evidence that such facilitation may be mediated by changes in neural synchronization in occipital and centro-parietal neural populations at early and late stages of neural processing that coincided with stimulus selection, and with the preparation and initiation of motor action.

8.
The notion of the temporal window of integration, when applied in a multisensory context, refers to the breadth of the interval across which the brain perceives two stimuli from different sensory modalities as synchronous. It maintains a unitary perception of multisensory events despite physical and biophysical timing differences between the senses. The boundaries of the window can be influenced by attention and past sensory experience. Here we examined whether task demands could also influence the multisensory temporal window of integration. We varied the stimulus onset asynchrony between simple, short-lasting auditory and visual stimuli while participants performed two tasks in separate blocks: a temporal order judgment task that required the discrimination of subtle auditory-visual asynchronies, and a reaction time task to the first incoming stimulus irrespective of its sensory modality. We defined the temporal window of integration as the range of stimulus onset asynchronies where performance was below 75% in the temporal order judgment task, as well as the range of stimulus onset asynchronies where responses showed multisensory facilitation (race model violation) in the reaction time task. In 5 of 11 participants, we observed audio-visual stimulus onset asynchronies where reaction time was significantly accelerated (indicating successful integration in this task) while performance was accurate in the temporal order judgment task (indicating successful segregation in that task). This dissociation suggests that in some participants, the boundaries of the temporal window of integration can adaptively recalibrate in order to optimize performance according to specific task demands.
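As a rough illustration of the first criterion above (the SOA range where temporal order judgments stay below 75% correct), the sketch below simply picks out the outermost SOAs still judged below criterion. All names and numbers are hypothetical; the study's actual estimation procedure may differ.

```python
import numpy as np

def toj_window(soas_ms, accuracy, criterion=0.75):
    """Return the outermost SOAs at which order-judgment accuracy
    is still below criterion, i.e. a crude window of integration."""
    soas = np.asarray(soas_ms, float)
    acc = np.asarray(accuracy, float)
    below = soas[acc < criterion]
    if below.size == 0:
        return 0.0, 0.0          # discrimination succeeds at every tested SOA
    return below.min(), below.max()

soas = np.array([-200, -100, -50, -25, 0, 25, 50, 100, 200])  # AV asynchronies (ms)
acc  = np.array([0.95, 0.85, 0.70, 0.60, 0.50, 0.62, 0.72, 0.88, 0.97])
print(toj_window(soas, acc))     # -> (-50.0, 50.0)
```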

9.
During the last two decades, ferrets (Mustela putorius) have been established as a highly efficient animal model in different fields of neuroscience. Here we asked whether ferrets integrate sensory information according to the same principles established for other species. Since only a few methods and protocols are available for behaving ferrets, we developed a head-free, body-restrained approach allowing a standardized stimulation position and the utilization of the ferret’s natural response behavior. We established a behavioral paradigm to test audiovisual integration in the ferret. Animals had to detect a brief auditory and/or visual stimulus presented either left or right from their midline. We first determined detection thresholds for auditory amplitude and visual contrast. In a second step, we combined both modalities and compared psychometric fits and the reaction times between all conditions. We employed Maximum Likelihood Estimation (MLE) to model bimodal psychometric curves and to investigate whether ferrets integrate modalities in an optimal manner. Furthermore, to test for a redundant signal effect we pooled the reaction times of all animals to calculate a race model. We observed that detection thresholds were reduced and reaction times were faster in the bimodal compared to unimodal conditions. The race model and MLE modeling showed that ferrets integrate modalities in a statistically optimal fashion. Taken together, the data indicate that principles of multisensory integration previously demonstrated in other species also apply to crossmodal processing in the ferret.
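The standard MLE benchmark used in such tests can be stated compactly: if unimodal thresholds (sigmas) are combined by reliability-weighted averaging, the predicted bimodal sigma is sqrt(sA²·sV² / (sA² + sV²)), which is always below the better unimodal threshold. A minimal sketch, with made-up threshold values for illustration:

```python
import numpy as np

def mle_bimodal_sigma(sigma_a, sigma_v):
    """Predicted bimodal sigma if auditory and visual cues are combined
    by reliability-weighted (maximum-likelihood) averaging."""
    return np.sqrt((sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2))

def mle_weights(sigma_a, sigma_v):
    """Relative weights given to each cue under MLE (less variable cue wins)."""
    w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)
    return w_a, 1.0 - w_a

sigma_a, sigma_v = 12.0, 8.0                 # hypothetical unimodal thresholds
print(mle_bimodal_sigma(sigma_a, sigma_v))   # ~6.66, below min(sigma_a, sigma_v)
print(mle_weights(sigma_a, sigma_v))         # (~0.31 auditory, ~0.69 visual)
```

Comparing the observed bimodal threshold against this prediction is what licenses the claim of "statistically optimal" integration.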

10.
Sensory processing in the brain includes three key operations: multisensory integration—the task of combining cues into a single estimate of a common underlying stimulus; coordinate transformations—the change of reference frame for a stimulus (e.g., retinotopic to body-centered) effected through knowledge about an intervening variable (e.g., gaze position); and the incorporation of prior information. Statistically optimal sensory processing requires that each of these operations maintains the correct posterior distribution over the stimulus. Elements of this optimality have been demonstrated in many behavioral contexts in humans and other animals, suggesting that the neural computations are indeed optimal. That the relationships between sensory modalities are complex and plastic further suggests that these computations are learned—but how? We provide a principled answer, by treating the acquisition of these mappings as a case of density estimation, a well-studied problem in machine learning and statistics, in which the distribution of observed data is modeled in terms of a set of fixed parameters and a set of latent variables. In our case, the observed data are unisensory-population activities, the fixed parameters are synaptic connections, and the latent variables are multisensory-population activities. In particular, we train a restricted Boltzmann machine with the biologically plausible contrastive-divergence rule to learn a range of neural computations not previously demonstrated under a single approach: optimal integration; encoding of priors; hierarchical integration of cues; learning when not to integrate; and coordinate transformation. The model makes testable predictions about the nature of multisensory representations.
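For readers unfamiliar with the learning rule named above, here is a generic restricted Boltzmann machine trained with one-step contrastive divergence (CD-1). In the paper's framing, the visible layer would carry unisensory population activity and the hidden layer the multisensory population; the sizes, data, and hyperparameters below are placeholders, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def sample_h(self, v):
        p = sigmoid(v @ self.W + self.b_h)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        p = sigmoid(h @ self.W.T + self.b_v)
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1_step(self, v0):
        """One contrastive-divergence (CD-1) update from a batch of data."""
        ph0, h0 = self.sample_h(v0)      # positive phase: data-driven hidden stats
        pv1, v1 = self.sample_v(h0)      # one-step Gibbs reconstruction
        ph1, _ = self.sample_h(v1)       # negative phase: model-driven stats
        n = v0.shape[0]
        self.W   += self.lr * (v0.T @ ph0 - v1.T @ ph1) / n
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)

# Toy usage: binary "population activity" patterns as training data
data = (rng.random((256, 20)) < 0.3).astype(float)
rbm = RBM(n_visible=20, n_hidden=10)
for _ in range(100):
    rbm.cd1_step(data)
```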

11.
Multimodal integration, which mainly refers to multisensory facilitation and multisensory inhibition, is the process of merging multisensory information in the human brain. However, the neural mechanisms underlying the dynamic characteristics of multimodal integration are not fully understood. The objective of this study is to investigate the basic mechanisms of multimodal integration by assessing the intermodal influences of vision, audition, and somatosensation (the influence of multisensory background events on the target event). We used a timed target detection task, and measured both behavioral and electroencephalographic responses to visual target events (green solid circle), auditory target events (2 kHz pure tone) and somatosensory target events (1.5 ± 0.1 mA square wave pulse) from 20 normal participants. There were significant differences in both behavioral performance and ERP components when comparing the unimodal target stimuli with multimodal (bimodal and trimodal) target stimuli for all target groups. A significant correlation between reaction time and P3 latency was observed across all target conditions. The perceptual processing of auditory target events (A) was inhibited by the background events, while the perceptual processing of somatosensory target events (S) was facilitated by the background events. In contrast, the perceptual processing of visual target events (V) remained impervious to multisensory background events.

12.
Parallel processing of multiple sensory stimuli is critical for efficient, successful interaction with the environment. An experimental approach to studying parallel processing in sensorimotor integration is to examine reaction times to multiple copies of the same stimulus. Reaction times to bilateral copies of light flashes are faster than to single, unilateral light flashes. These faster responses may be due to 'statistical facilitation' between independent processing streams engaged by the two copies of the light flash. On some trials, however, reaction times are faster than predicted by statistical facilitation. This indicates that a neural 'coactivation' of the two processing streams must have occurred. Here we use fMRI to investigate the neural locus of this coactivation. Subjects responded manually to the detection of unilateral light flashes presented to the left or right visual hemifield, and to the detection of bilateral light flashes. We compared the bilateral trials where subjects' reaction times exceeded the limit predicted by statistical facilitation to bilateral trials that did not exceed the limit. Activity in the right temporo-parietal junction was higher in those bilateral trials that showed coactivation than in those that did not. These results suggest the neural coactivation observed in visuomotor integration occurs at a cognitive rather than sensory or motor stage of processing.
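The statistical-facilitation limit referred to here is usually formalized as Miller's race-model inequality: the cumulative RT distribution for bilateral stimuli may not exceed the sum of the two unilateral CDFs, F_bi(t) ≤ F_left(t) + F_right(t); trials or time points beyond that bound imply coactivation. A hedged sketch, with fabricated RTs purely for demonstration:

```python
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative RT distribution evaluated at times t."""
    rts = np.sort(np.asarray(rts, float))
    return np.searchsorted(rts, t, side="right") / rts.size

def race_model_violation(rt_bi, rt_left, rt_right, t_grid):
    """True wherever the bilateral CDF exceeds the race-model bound."""
    bound = np.minimum(ecdf(rt_left, t_grid) + ecdf(rt_right, t_grid), 1.0)
    return ecdf(rt_bi, t_grid) > bound

# Toy example with made-up RTs (ms)
rng = np.random.default_rng(1)
left  = rng.normal(320, 40, 200)
right = rng.normal(325, 40, 200)
bi    = rng.normal(270, 35, 200)   # deliberately fast, to show a violation
t = np.arange(150, 500, 5)
print(race_model_violation(bi, left, right, t).any())   # -> True
```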

13.
Eriksson J, Villa AE. Biosystems 2005;79(1-3):207-212
Evoked potentials were recorded from the auditory cortex of both freely moving and anesthetized rats while deviant sounds were presented in a homogeneous series of standard sounds (oddball condition). A component of the evoked response to deviant sounds, the mismatch negativity (MMN), may underlie the ability to discriminate acoustic differences, a fundamental aspect of auditory perception. Whereas most MMN studies in animals have used simple sounds, this study involved a more complex set of sounds (synthesized vowels). The freely moving rats had previously undergone behavioral training in which they learned to respond differentially to these sounds. Although we found little evidence in this preparation for the typical, epidurally recorded MMN response, a significant difference between deviant and standard evoked potentials was noted for the freely moving animals in the 100-200 ms range following stimulus onset. No such difference was found in the anesthetized animals.

14.
Cross-modal processing depends strongly on the compatibility between different sensory inputs, the relative timing of their arrival to brain processing components, and on how attention is allocated. In this behavioral study, we employed a cross-modal audio-visual Stroop task in which we manipulated the within-trial stimulus-onset-asynchronies (SOAs) of the stimulus-component inputs, the grouping of the SOAs (blocked vs. random), the attended modality (auditory or visual), and the congruency of the Stroop color-word stimuli (congruent, incongruent, neutral) to assess how these factors interact within a multisensory context. One main result was that visual distractors produced larger incongruency effects on auditory targets than vice versa. Moreover, as revealed by both overall shorter response times (RTs) and relative shifts in the psychometric incongruency-effect functions, visual-information processing was faster and produced stronger and longer-lasting incongruency effects than did auditory. When attending to either modality, stimulus incongruency from the other modality interacted with SOA, yielding larger effects when the irrelevant distractor occurred prior to the attended target, but no interaction with SOA grouping. Finally, relative to neutral-stimuli, and across the wide range of the SOAs employed, congruency led to substantially more behavioral facilitation than did incongruency to interference, in contrast to findings that within-modality stimulus-compatibility effects tend to be more evenly split between facilitation and interference. In sum, the present findings reveal several key characteristics of how we process the stimulus compatibility of cross-modal sensory inputs, reflecting stimulus processing patterns that are critical for successfully navigating our complex multisensory world.

15.
A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190–210 ms, for 1 kHz stimuli from 170–200 ms, for 2.5 kHz stimuli from 140–200 ms, and for 5 kHz stimuli from 100–200 ms. These findings suggest that a higher-frequency sound paired with visual stimuli might be processed or integrated earlier, even though the auditory stimuli were task-irrelevant. Furthermore, audiovisual integration in late-latency (300–340 ms) ERPs with a fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies.
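A common way such ERP integration effects are quantified, though not necessarily this study's exact pipeline, is the additive model: the bimodal response minus the sum of the unimodal responses, AV − (A + V), averaged within a latency window. A generic sketch with placeholder array shapes:

```python
import numpy as np

def additive_model_difference(erp_av, erp_a, erp_v):
    """Difference wave: bimodal ERP minus the sum of unimodal ERPs.
    Inputs: arrays of shape (n_channels, n_timepoints), baseline-corrected."""
    return erp_av - (erp_a + erp_v)

def window_mean(diff, times_ms, t0, t1):
    """Mean difference amplitude per channel inside a latency window [t0, t1] ms."""
    mask = (times_ms >= t0) & (times_ms <= t1)
    return diff[:, mask].mean(axis=1)

times = np.arange(-100, 500)                    # 1 kHz sampling, in ms
erp_av, erp_a, erp_v = (np.zeros((32, times.size)) for _ in range(3))
diff = additive_model_difference(erp_av, erp_a, erp_v)
print(window_mean(diff, times, 190, 210))       # e.g., the 0.5 kHz window above
```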

16.
Factors influencing crossmodal interactions are manifold and operate in a stimulus-driven, bottom-up fashion, as well as via top-down control. Here, we evaluate the interplay of stimulus congruence and attention in a visual-tactile task. To this end, we used a matching paradigm requiring the identification of spatial patterns that were concurrently presented visually on a computer screen and haptically to the fingertips by means of a Braille stimulator. Stimulation in our paradigm was always bimodal with only the allocation of attention being manipulated between conditions. In separate blocks of the experiment, participants were instructed to (a) focus on a single modality to detect a specific target pattern, (b) pay attention to both modalities to detect a specific target pattern, or (c) to explicitly evaluate if the patterns in both modalities were congruent or not. For visual as well as tactile targets, congruent stimulus pairs led to quicker and more accurate detection compared to incongruent stimulation. This congruence facilitation effect was more prominent under divided attention. Incongruent stimulation led to behavioral decrements under divided attention as compared to selectively attending a single sensory channel. Additionally, when participants were asked to evaluate congruence explicitly, congruent stimulation was associated with better performance than incongruent stimulation. Our results extend previous findings from audiovisual studies, showing that stimulus congruence also resulted in behavioral improvements in visuotactile pattern matching. The interplay of stimulus processing and attentional control seems to be organized in a highly flexible fashion, with the integration of signals depending on both bottom-up and top-down factors, rather than occurring in an ‘all-or-nothing’ manner.

17.
It is well known that even under identical task conditions, there is a tremendous amount of trial-to-trial variability in both brain activity and behavioral output. Thus far the vast majority of event-related potential (ERP) studies investigating the relationship between trial-to-trial fluctuations in brain activity and behavioral performance have only tested a monotonic relationship between them. However, it was recently found that across-trial variability can correlate with behavioral performance independent of trial-averaged activity. This finding predicts a U-shaped or inverted-U-shaped relationship between trial-to-trial brain activity and behavioral output, depending on whether larger brain variability is associated with better or worse behavior, respectively. Using a visual stimulus detection task, we provide evidence from human electrocorticography (ECoG) for an inverted-U brain-behavior relationship: when the raw fluctuation in broadband ECoG activity is closer to the across-trial mean, hit rate is higher and reaction times are faster. Importantly, we show that this relationship is present not only in the post-stimulus task-evoked brain activity, but also in the pre-stimulus spontaneous brain activity, suggesting anticipatory brain dynamics. Our findings are consistent with the presence of stochastic noise in the brain. They further support attractor network theories, which postulate that the brain settles into a more confined state space under task performance, and that proximity to the targeted trajectory is associated with better performance.
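A hypothetical sketch of the inverted-U analysis this implies: trials are binned by how far single-trial activity falls from the across-trial mean, and behavior is compared across bins. All names and the simulated data are illustrative, not the authors' code.

```python
import numpy as np

def performance_by_deviation(activity, hits, n_bins=5):
    """Bin trials by |activity - across-trial mean| and return hit rate per bin."""
    dev = np.abs(activity - activity.mean())
    edges = np.quantile(dev, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(dev, edges[1:-1]), 0, n_bins - 1)
    return np.array([hits[idx == b].mean() for b in range(n_bins)])

rng = np.random.default_rng(2)
act = rng.normal(0, 1, 1000)                         # single-trial broadband power
p_hit = 0.9 - 0.3 * np.abs(act) / np.abs(act).max()  # performance peaks near the mean
hits = rng.random(1000) < p_hit
print(performance_by_deviation(act, hits))           # hit rate falls with deviation
```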

18.
We measured local field potential (LFP) and blood-oxygen-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) in the medial temporal lobes of monkeys and humans, respectively, as they performed the same conditional motor associative learning task. Parallel analyses were used to examine both data sets. Despite significantly faster learning in humans relative to monkeys, we found equivalent neural signals differentiating new versus highly familiar stimuli, first stimulus presentation, trial outcome, and learning strength in the entorhinal cortex and hippocampus of both species. Thus, the use of parallel behavioral tasks and analyses in monkeys and humans revealed conserved patterns of neural activity across the medial temporal lobe during an associative learning task.

19.
Prolonged response times are observed for targets that were presented as distractors immediately beforehand, a phenomenon called the negative priming effect. Among others, inhibitory and retrieval processes have been suggested to underlie this behavioral effect. As those processes would involve different neural activation patterns, a functional magnetic resonance imaging (fMRI) study including 28 subjects was conducted. Two tasks were used to investigate stimulus repetition effects. One task focused on target location, the other on target identity. Both tasks are known to elicit the expected response time effects. However, there is less agreement about the relationship of those tasks with the explanatory accounts under consideration. Based on within-subject comparisons, we found clear differences at the neural level between the experimental repetition conditions and the neutral control condition for both tasks. Fronto-striatal hemodynamic activation patterns occurred for the location-based task, favoring the selective inhibition account. Hippocampal activation found for the identity-based task suggests an assignment to the retrieval account; however, this task lacked a behavioral effect.

20.
Objective: To combine a delayed match-to-sample task with the divided visual field paradigm to examine the influence of negative emotion on verbal and spatial working memory. Methods: Thirty-two undergraduates participated in the experiment. While neutral or negative emotional pictures were displayed, all participants completed 160 trials each of a verbal and a spatial working memory task. Results: The verbal working memory task yielded higher accuracy and shorter reaction times when stimuli were presented in the right visual field, whereas the spatial working memory task showed a corresponding response advantage in the left visual field; working memory performance was better under the negative emotional state than under the neutral state. Conclusion: Verbal and spatial working memory show left- and right-hemisphere processing advantages, respectively, and negative emotion facilitates working memory.
