Similar Articles
20 similar articles found (search time: 31 ms)
1.
Multisensory integration was once thought to be the domain of brain areas high in the cortical hierarchy, with early sensory cortical fields devoted to unisensory processing of inputs from their given set of sensory receptors. More recently, a wealth of evidence documenting visual and somatosensory responses in auditory cortex, even as early as the primary fields, has changed this view of cortical processing. These multisensory inputs may serve to enhance responses to sounds that are accompanied by other sensory cues, effectively making them easier to hear, but may also act more selectively to shape the receptive field properties of auditory cortical neurons to the location or identity of these events. We discuss the new, converging evidence that multiplexing of neural signals may play a key role in informatively encoding and integrating signals in auditory cortex across multiple sensory modalities. We highlight some of the many open research questions that exist about the neural mechanisms that give rise to multisensory integration in auditory cortex, which should be addressed in future experimental and theoretical studies.

2.
Recent anatomical, physiological, and neuroimaging findings indicate multisensory convergence at early, putatively unisensory stages of cortical processing. The objective of this study was to confirm somatosensory-auditory interaction in A1 and to define both its physiological mechanisms and its consequences for auditory information processing. Laminar current source density and multiunit activity sampled during multielectrode penetrations of primary auditory area A1 in awake macaques revealed clear somatosensory-auditory interactions, with a novel mechanism: somatosensory inputs appear to reset the phase of ongoing neuronal oscillations, so that accompanying auditory inputs arrive during an ideal, high-excitability phase, and produce amplified neuronal responses. In contrast, responses to auditory inputs arriving during the opposing low-excitability phase tend to be suppressed. Our findings underscore the instrumental role of neuronal oscillations in cortical operations. The timing and laminar profile of the multisensory interactions in A1 indicate that nonspecific thalamic systems may play a key role in the effect.
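
The phase-reset mechanism can be illustrated with a minimal simulation (a sketch with assumed parameters, not a reconstruction of the study's analysis): a somatosensory event resets the phase of an ongoing low-frequency oscillation so that excitability peaks one full cycle later; an auditory input arriving at that peak is amplified, while one arriving half a cycle later, at the low-excitability phase, is suppressed.

```python
import numpy as np

# Minimal illustration of oscillatory phase reset (hypothetical parameters).
# Ongoing excitability oscillates at a theta-like frequency; an auditory
# input is gated multiplicatively by the instantaneous excitability.

fs = 1000.0                      # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)    # 1 s of simulated time
f_osc = 7.0                      # assumed ongoing oscillation frequency (Hz)

def response(reset_time, auditory_time, input_strength=1.0):
    """Response amplitude to an auditory input, given a somatosensory
    phase reset at reset_time. Excitability = cos(phase), rescaled to [0, 1]."""
    phase = 2 * np.pi * f_osc * (t - reset_time)   # reset sets phase to 0 (peak)
    excitability = 0.5 * (1 + np.cos(phase))       # 1 = high, 0 = low excitability
    idx = int(auditory_time * fs)
    return input_strength * excitability[idx]

reset = 0.2                                    # somatosensory input at 200 ms
period = 1.0 / f_osc
high = response(reset, reset + period)         # arrives one full cycle later: peak
low = response(reset, reset + period / 2)      # arrives half a cycle later: trough

print(f"response at high-excitability phase: {high:.2f}")   # ~1.0 (amplified)
print(f"response at low-excitability phase:  {low:.2f}")    # ~0.0 (suppressed)
```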

3.
Perception of our environment is a multisensory experience; information from different sensory systems, such as the auditory, visual and tactile systems, is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks, but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG) to investigate how audio-tactile perception is integrated in the human brain and whether musicians show enhancement of the corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response generated in frontal, cingulate and cerebellar regions; an auditory mismatch response generated mainly in the auditory cortex; and a tactile mismatch response generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile mismatch negativity (MMN) were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas supporting top-down models of multisensory expectancies were modulated by training.

4.
The integration of multisensory information takes place in the optic tectum where visual and auditory/mechanosensory inputs converge and regulate motor outputs. The circuits that integrate multisensory information are poorly understood. In an effort to identify the basic components of a multisensory integrative circuit, we determined the projections of the mechanosensory input from the periphery to the optic tectum and compared their distribution to the retinotectal inputs in Xenopus laevis tadpoles using dye-labeling methods. The peripheral ganglia of the lateral line system project to the ipsilateral hindbrain and the axons representing mechanosensory inputs along the anterior/posterior body axis are mapped along the ventrodorsal axis in the axon tract in the dorsal column of the hindbrain. Hindbrain neurons project axons to the contralateral optic tectum. The neurons from anterior and posterior hindbrain regions project axons to the dorsal and ventral tectum, respectively. While the retinotectal axons project to a superficial lamina in the tectal neuropil, the hindbrain axons project to a deep neuropil layer. Calcium imaging showed that multimodal inputs converge on tectal neurons. The layer-specific projections of the hindbrain and retinal axons suggest a functional segregation of sensory inputs to proximal and distal tectal cell dendrites, respectively.

5.
Animals can make faster behavioral responses to multisensory stimuli than to unisensory stimuli. The superior colliculus (SC), which receives multiple inputs from different sensory modalities, is considered to be involved in the initiation of motor responses. However, the mechanism by which multisensory information facilitates motor responses is not yet understood. Here, we demonstrate that multisensory information modulates competition among SC neurons to elicit faster responses. We conducted multiunit recordings from the SC of rats performing a two-alternative spatial discrimination task using auditory and/or visual stimuli. We found that a large population of SC neurons showed direction-selective activity before the onset of movement in response to the stimuli irrespective of stimulation modality. Trial-by-trial correlation analysis showed that the premovement activity of many SC neurons increased with faster reaction speed for the contraversive movement, whereas the premovement activity of another population of neurons decreased with faster reaction speed for the ipsiversive movement. When visual and auditory stimuli were presented simultaneously, the premovement activity of a population of neurons for the contraversive movement was enhanced, whereas the premovement activity of another population of neurons for the ipsiversive movement was depressed. Unilateral inactivation of SC using muscimol prolonged reaction times of contraversive movements, but it shortened those of ipsiversive movements. These findings suggest that the difference in activity between the SC hemispheres regulates the reaction speed of motor responses, and multisensory information enlarges the activity difference resulting in faster responses.
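
The proposed mechanism can be captured in a toy model (all numbers below are assumptions, for illustration only): reaction speed tracks the activity difference between the two SC hemispheres, and a bimodal stimulus enlarges that difference by enhancing contraversive and depressing ipsiversive premovement activity.

```python
# Toy model of the hemispheric-difference idea (hypothetical numbers):
# reaction time shrinks as the contraversive-minus-ipsiversive activity
# difference grows; multisensory input enlarges the difference.

def reaction_time(contra, ipsi, baseline_rt=300.0, gain=150.0):
    """Hypothetical mapping from the inter-hemispheric activity
    difference (arbitrary units) to reaction time in ms."""
    return baseline_rt - gain * (contra - ipsi)

rt_uni = reaction_time(contra=1.0, ipsi=0.6)     # unisensory stimulus
rt_multi = reaction_time(contra=1.2, ipsi=0.4)   # bimodal: enhanced / depressed

print(f"unisensory RT:   {rt_uni:.0f} ms")    # 240 ms
print(f"multisensory RT: {rt_multi:.0f} ms")  # 180 ms (faster response)
```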

6.
Cross-modal processing depends strongly on the compatibility between different sensory inputs, the relative timing of their arrival to brain processing components, and on how attention is allocated. In this behavioral study, we employed a cross-modal audio-visual Stroop task in which we manipulated the within-trial stimulus-onset-asynchronies (SOAs) of the stimulus-component inputs, the grouping of the SOAs (blocked vs. random), the attended modality (auditory or visual), and the congruency of the Stroop color-word stimuli (congruent, incongruent, neutral) to assess how these factors interact within a multisensory context. One main result was that visual distractors produced larger incongruency effects on auditory targets than vice versa. Moreover, as revealed by both overall shorter response times (RTs) and relative shifts in the psychometric incongruency-effect functions, visual-information processing was faster and produced stronger and longer-lasting incongruency effects than did auditory. When attending to either modality, stimulus incongruency from the other modality interacted with SOA, yielding larger effects when the irrelevant distractor occurred prior to the attended target, but no interaction with SOA grouping. Finally, relative to neutral stimuli, and across the wide range of SOAs employed, congruency produced substantially more behavioral facilitation than incongruency produced interference, in contrast to findings that within-modality stimulus-compatibility effects tend to be more evenly split between facilitation and interference. In sum, the present findings reveal several key characteristics of how we process the stimulus compatibility of cross-modal sensory inputs, reflecting stimulus-processing patterns that are critical for successfully navigating our complex multisensory world.

7.
Town SM, McCabe BJ. PLoS ONE. 2011;6(3):e17777.
Many organisms sample their environment through multiple sensory systems, and the integration of multisensory information enhances learning. However, the mechanisms underlying multisensory memory formation and their similarity to unisensory mechanisms remain unclear. Filial imprinting is one example in which experience is multisensory, and the mechanisms of unisensory neuronal plasticity are well established. We investigated the storage of audiovisual information through experience by comparing the activity of neurons in the intermediate and medial mesopallium (IMM) of imprinted and naïve domestic chicks (Gallus gallus domesticus) in response to an audiovisual imprinting stimulus, a novel object, and their auditory and visual components. We find that imprinting enhanced the mean response magnitude of neurons to unisensory but not multisensory stimuli. Furthermore, imprinting enhanced responses to incongruent audiovisual stimuli comprised of mismatched auditory and visual components. Our results suggest that the effects of imprinting on the unisensory and multisensory responsiveness of IMM neurons differ and that IMM neurons may function to detect unexpected deviations from the audiovisual imprinting stimulus.

8.
Research on the neural basis of speech-reading implicates a network of auditory language regions involving inferior frontal cortex, premotor cortex and sites along superior temporal cortex. In audiovisual speech studies, neural activity is consistently reported in posterior superior temporal sulcus (pSTS) and this site has been implicated in multimodal integration. Traditionally, multisensory interactions are considered high-level processing that engages heteromodal association cortices (such as STS). Recent work, however, challenges this notion and suggests that multisensory interactions may occur in low-level unimodal sensory cortices. While previous audiovisual speech studies demonstrate that high-level multisensory interactions occur in pSTS, what remains unclear is how early in the processing hierarchy these multisensory interactions may occur. The goal of the present fMRI experiment is to investigate how visual speech can influence activity in auditory cortex above and beyond its response to auditory speech. In an audiovisual speech experiment, subjects were presented with auditory speech with and without congruent visual input. Holding the auditory stimulus constant across the experiment, we investigated how the addition of visual speech influences activity in auditory cortex. We demonstrate that congruent visual speech increases the activity in auditory cortex.

9.
Our nervous system is confronted with a barrage of sensory stimuli, but neural resources are limited and not all stimuli can be processed to the same extent. Mechanisms exist to bias attention toward the particularly salient events, thereby providing a weighted representation of our environment. Our understanding of these mechanisms is still limited, but theoretical models can replicate such a weighting of sensory inputs and provide a basis for understanding the underlying principles. Here, we describe such a model for the auditory system: an auditory saliency map. We experimentally validate the model on natural acoustical scenarios, demonstrating that it reproduces human judgments of auditory saliency and predicts the detectability of salient sounds embedded in noisy backgrounds. In addition, it also predicts the natural orienting behavior of naive macaque monkeys to the same salient stimuli. The structure of the suggested model is identical to that of successfully used visual saliency maps. Hence, we conclude that saliency is determined either by implementing similar mechanisms in different unisensory pathways or by the same mechanism in multisensory areas. In any case, our results demonstrate that different primate sensory systems rely on common principles for extracting relevant sensory events.
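
A hedged sketch of how such a visual-style saliency map might be computed for sound (the feature set, scales, and normalization below are assumptions; the published model's filters differ in detail): decompose the signal into a time-frequency representation, extract center-surround contrast at several scales, normalize, and sum into a single saliency map.

```python
import numpy as np
from scipy.signal import spectrogram
from scipy.ndimage import gaussian_filter

# Sketch of a saliency-map computation for audio: log spectrogram ->
# center-surround contrast at two assumed scales -> rectify -> normalize -> sum.

def auditory_saliency(x, fs):
    f, t, S = spectrogram(x, fs, nperseg=256, noverlap=192)
    S = np.log1p(S)                                  # compressive "intensity" map
    saliency = np.zeros_like(S)
    for center, surround in [(1, 4), (2, 8)]:        # two center-surround scales
        cs = gaussian_filter(S, center) - gaussian_filter(S, surround)
        cs = np.clip(cs, 0, None)                    # half-wave rectify
        if cs.max() > 0:
            cs /= cs.max()                           # simple normalization
        saliency += cs
    return f, t, saliency

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
noise = np.random.randn(t.size) * 0.1                # noisy background
tone = np.sin(2 * np.pi * 2000 * t) * (t > 0.5)      # salient tone starting at 0.5 s
f, tt, sal = auditory_saliency(noise + tone, fs)
print("peak saliency at t =", tt[sal.max(axis=0).argmax()], "s")  # within the tone interval
```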

10.
BACKGROUND: Integrating information from the different senses markedly enhances the detection and identification of external stimuli. Compared with unimodal inputs, semantically and/or spatially congruent multisensory cues speed discrimination and improve reaction times. Discordant inputs have the opposite effect, reducing performance and slowing responses. These behavioural features of crossmodal processing appear to have parallels in the response properties of multisensory cells in the superior colliculi and cerebral cortex of non-human mammals. Although spatially concordant multisensory inputs can produce a dramatic, often multiplicative, increase in cellular activity, spatially disparate cues tend to induce a profound response depression. RESULTS: Using functional magnetic resonance imaging (fMRI), we investigated whether similar indices of crossmodal integration are detectable in human cerebral cortex, and for the synthesis of complex inputs relating to stimulus identity. Ten human subjects were exposed to varying epochs of semantically congruent and incongruent audio-visual speech and to each modality in isolation. Brain activations to matched and mismatched audio-visual inputs were contrasted with the combined response to both unimodal conditions. This strategy identified an area of heteromodal cortex in the left superior temporal sulcus that exhibited significant supra-additive response enhancement to matched audio-visual inputs and a corresponding sub-additive response to mismatched inputs. CONCLUSIONS: The data provide fMRI evidence of crossmodal binding by convergence in the human heteromodal cortex. They further suggest that response enhancement and depression may be a general property of multisensory integration operating at different levels of the neuraxis and irrespective of the purpose for which sensory inputs are combined.
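
The additive criterion behind this fMRI contrast can be made concrete with illustrative numbers (these are not data from the study): a region is classified as integrating multisensory input when its audiovisual response exceeds the sum of its unimodal responses.

```python
# Sketch of the additive criterion used in this style of fMRI analysis
# (illustrative response values): supra-additive = AV response exceeds the
# sum of the unimodal responses; sub-additive = it falls below that sum.

def classify(av, a, v):
    if av > a + v:
        return "supra-additive (enhancement)"
    if av < a + v:
        return "sub-additive (depression)"
    return "additive"

print(classify(av=2.6, a=1.0, v=1.2))   # matched speech: enhancement
print(classify(av=1.4, a=1.0, v=1.2))   # mismatched speech: depression
```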

11.

Background

The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood.

Methodology/Findings

We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perception of duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by the presence of auditory information and was never perceived as shorter than the actual duration.

Conclusions/Significance

These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Insofar as distortions in subjective duration cannot be accounted for by the unpredictability of an auditory, visual or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions.

12.
Multisensory learning and resulting neural brain plasticity have recently become a topic of renewed interest in human cognitive neuroscience. Music notation reading is an ideal stimulus to study multisensory learning, as it allows studying the integration of visual, auditory and sensorimotor information processing. The present study aimed to determine whether multisensory learning alters uni-sensory structures, the interconnections of uni-sensory structures, or specific multisensory areas. In a short-term piano training procedure, musically naive subjects were trained to play tone sequences from visually presented patterns in a music notation-like system [Auditory-Visual-Somatosensory group (AVS)], while another group received audio-visual training only, which involved viewing the patterns and attentively listening to the recordings of the AVS training sessions [Auditory-Visual group (AV)]. Training-related changes in cortical networks were assessed by pre- and post-training magnetoencephalographic (MEG) recordings of an auditory, a visual and an integrated audio-visual mismatch negativity (MMN). The two groups (AVS and AV) were differently affected by the training. The results suggest that multisensory training alters the function of multisensory structures, and not the uni-sensory ones along with their interconnections, and thus provide an answer to an important question presented by cognitive models of multisensory training.

13.
The visual and auditory systems frequently work together to facilitate the identification and localization of objects and events in the external world. Experience plays a critical role in establishing and maintaining congruent visual-auditory associations, so that the different sensory cues associated with targets that can be both seen and heard are synthesized appropriately. For stimulus location, visual information is normally more accurate and reliable and provides a reference for calibrating the perception of auditory space. During development, vision plays a key role in aligning neural representations of space in the brain, as revealed by the dramatic changes produced in auditory responses when visual inputs are altered, and is used throughout life to resolve short-term spatial conflicts between these modalities. However, accurate, and even supra-normal, auditory localization abilities can be achieved in the absence of vision, and the capacity of the mature brain to relearn to localize sound in the presence of substantially altered auditory spatial cues does not require visuomotor feedback. Thus, while vision is normally used to coordinate information across the senses, the neural circuits responsible for spatial hearing can be recalibrated in a vision-independent fashion. Nevertheless, early multisensory experience appears to be crucial for the emergence of an ability to match signals from different sensory modalities and therefore for the outcome of audiovisual-based rehabilitation of deaf patients in whom hearing has been restored by cochlear implantation.

14.
Social animals learn to perceive their social environment, and their social skills and preferences are thought to emerge from greater exposure to and hence familiarity with some social signals rather than others. Familiarity appears to be tightly linked to multisensory integration. The ability to differentiate and categorize familiar and unfamiliar individuals and to build a multisensory representation of known individuals emerges from successive social interactions, in particular with adult, experienced models. In different species, adults have been shown to shape the social behavior of young by promoting selective attention to multisensory cues. This raises the question of what representation of known conspecifics adult-deprived animals may build. Here we show that starlings raised with no experience with adults fail to develop a multisensory representation of familiar and unfamiliar starlings. Electrophysiological recordings of neuronal activity throughout the primary auditory area of these birds, while they were exposed to audio-only or audiovisual familiar and unfamiliar cues, showed that visual stimuli did, as in wild-caught starlings, modulate auditory responses but that, unlike what was observed in wild-caught birds, this modulation was not influenced by familiarity. Thus, adult-deprived starlings seem to fail to discriminate between familiar and unfamiliar individuals. This suggests that adults may shape multisensory representation of known individuals in the brain, possibly by focusing the young's attention on relevant, multisensory cues. Multisensory stimulation by experienced, adult models may thus be ubiquitously important for the development of social skills (and of the neural properties underlying such skills) in a variety of species.

15.
Looming objects produce ecologically important signals that can be perceived in both the visual and auditory domains. Using a preferential looking technique with looming and receding visual and auditory stimuli, we examined the multisensory integration of looming stimuli by rhesus monkeys. We found a strong attentional preference for coincident visual and auditory looming but no analogous preference for coincident stimulus recession. Consistent with previous findings, the effect occurred only with tonal stimuli and not with broadband noise. The results suggest an evolved capacity to integrate multisensory looming objects.

16.
Our understanding of multisensory integration has advanced because of recent functional neuroimaging studies of three areas in human lateral occipito-temporal cortex: superior temporal sulcus, area LO and area MT (V5). Superior temporal sulcus is activated strongly in response to meaningful auditory and visual stimuli, but responses to tactile stimuli have not been well studied. Area LO shows strong activation in response to both visual and tactile shape information, but not to auditory representations of objects. Area MT, an important region for processing visual motion, also shows weak activation in response to tactile motion, and a signal that drops below resting baseline in response to auditory motion. Within superior temporal sulcus, a patchy organization of regions is activated in response to auditory, visual and multisensory stimuli. This organization appears similar to that observed in polysensory areas in macaque superior temporal sulcus, suggesting that it is an anatomical substrate for multisensory integration. A patchy organization might also be a neural mechanism for integrating disparate representations within individual sensory modalities, such as representations of visual form and visual motion.

17.
The ability to integrate information across multiple sensory systems offers several behavioral advantages, from quicker reaction times and more accurate responses to better detection and more robust learning. At the neural level, multisensory integration requires large-scale interactions between different brain regions: the convergence of information from separate sensory modalities, represented by distinct neuronal populations. The interactions between these neuronal populations must be fast and flexible, so that behaviorally relevant signals belonging to the same object or event can be immediately integrated and integration of unrelated signals can be prevented. Looming signals are a particular class of signals that are behaviorally relevant for animals and that occur in both the auditory and visual domain. These signals indicate the rapid approach of objects and provide highly salient warning cues about impending impact. We show here that multisensory integration of auditory and visual looming signals may be mediated by functional interactions between auditory cortex and the superior temporal sulcus, two areas involved in integrating behaviorally relevant auditory-visual signals. Audiovisual looming signals elicited increased gamma-band coherence between these areas, relative to unimodal or receding-motion signals. This suggests that the neocortex uses fast, flexible intercortical interactions to mediate multisensory integration.
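
The coherence measure behind this kind of analysis can be sketched on simulated signals (the frequencies, noise levels, and band limits below are assumptions, not the study's data): two areas sharing a common gamma-band drive show elevated 40-80 Hz coherence relative to two independent signals.

```python
import numpy as np
from scipy.signal import butter, filtfilt, coherence

fs = 1000
rng = np.random.default_rng(0)
n = 10 * fs                                      # 10 s of simulated data

# Shared gamma-band drive: white noise band-passed to 40-80 Hz.
b, a = butter(4, [40, 80], btype="bandpass", fs=fs)
gamma_drive = 3 * filtfilt(b, a, rng.standard_normal(n))

# "Looming" condition: both areas receive the common drive plus private noise.
ac = gamma_drive + rng.standard_normal(n)        # auditory cortex
sts = gamma_drive + rng.standard_normal(n)       # superior temporal sulcus
f, c_loom = coherence(ac, sts, fs, nperseg=1024)

# Control condition: independent noise in both areas.
f, c_ctrl = coherence(rng.standard_normal(n), rng.standard_normal(n),
                      fs, nperseg=1024)

band = (f >= 40) & (f <= 80)
print(f"gamma coherence, shared drive: {c_loom[band].mean():.2f}")  # high (~0.8)
print(f"gamma coherence, control:      {c_ctrl[band].mean():.2f}")  # near 0
```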

18.
Multimodal integration, which mainly refers to multisensory facilitation and multisensory inhibition, is the process of merging multisensory information in the human brain. However, the neural mechanisms underlying the dynamic characteristics of multimodal integration are not fully understood. The objective of this study is to investigate the basic mechanisms of multimodal integration by assessing the intermodal influences of vision, audition, and somatosensory sensations (the influence of multisensory background events on the target event). We used a timed target-detection task, and measured both behavioral and electroencephalographic responses to visual target events (green solid circle), auditory target events (2 kHz pure tone) and somatosensory target events (1.5 ± 0.1 mA square-wave pulse) from 20 normal participants. There were significant differences in both behavioral performance and ERP components when comparing the unimodal target stimuli with multimodal (bimodal and trimodal) target stimuli for all target groups. A significant correlation between reaction time and P3 latency was observed across all target conditions. The perceptual processing of auditory target events (A) was inhibited by the background events, while the perceptual processing of somatosensory target events (S) was facilitated by the background events. In contrast, the perceptual processing of visual target events (V) remained impervious to multisensory background events.

19.
Neurons in the superior colliculus (SC) are known to integrate stimuli of different modalities (e.g., visual and auditory) according to specific principles. In this work, we present a mathematical model of the integrative response of SC neurons, in order to suggest a possible physiological mechanism underlying multisensory integration in the SC. The model includes three distinct neural areas: two unimodal areas (auditory and visual) are devoted to a topological representation of external stimuli, and communicate via synaptic connections with a third downstream area (in the SC) responsible for multisensory integration. The present simulations show that the model, with a single set of parameters, can mimic various responses to different combinations of external stimuli, including inverse effectiveness (in terms of both multisensory enhancement and contrast), the existence of within- and cross-modality suppression between spatially disparate stimuli, and a reduction of network settling time in response to cross-modal stimuli compared with individual stimuli. The model suggests that non-linearities in neural responses and synaptic (excitatory and inhibitory) connections can explain several aspects of multisensory integration.
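
The inverse-effectiveness property that such models reproduce can be sketched with a minimal rate model (the sigmoid parameters below are assumptions, not the published model's equations): because unimodal inputs pass through a saturating nonlinearity, combining two weak inputs yields a proportionally much larger response than combining two strong ones.

```python
import numpy as np

# Minimal rate-model sketch of inverse effectiveness: inputs converge
# before a sigmoid nonlinearity, so weak stimuli show the largest
# proportional multisensory enhancement.

def sigmoid(x, slope=1.0, threshold=5.0):
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

def enhancement(aud, vis):
    """Multisensory enhancement (%) relative to the best unimodal response."""
    r_a, r_v = sigmoid(aud), sigmoid(vis)
    r_av = sigmoid(aud + vis)          # inputs converge before the nonlinearity
    return 100 * (r_av - max(r_a, r_v)) / max(r_a, r_v)

for strength in (3.0, 5.0, 8.0):       # weak, moderate, strong stimuli
    print(f"input {strength}: enhancement = {enhancement(strength, strength):.0f}%")
# Output: ~514% for weak, ~99% for moderate, ~5% for strong stimuli,
# i.e. inverse effectiveness.
```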

20.
Multisensory integration: maintaining the perception of synchrony
Spence C, Squire S. Current Biology. 2003;13(13):R519-R521.
We are rarely aware of differences in the arrival time of inputs to each of our senses. New research suggests that this is explained by a 'moveable window' for multisensory integration and by a 'temporal ventriloquism' effect.
