Similar Literature
20 similar records retrieved (search time: 15 ms)
1.
The ability to integrate information across multiple sensory systems offers several behavioral advantages, from quicker reaction times and more accurate responses to better detection and more robust learning. At the neural level, multisensory integration requires large-scale interactions between different brain regions: information from separate sensory modalities, represented by distinct neuronal populations, must converge. The interactions between these neuronal populations must be fast and flexible, so that behaviorally relevant signals belonging to the same object or event can be immediately integrated and integration of unrelated signals can be prevented. Looming signals are a particular class of signals that are behaviorally relevant for animals and that occur in both the auditory and visual domains. These signals indicate the rapid approach of objects and provide highly salient warning cues about impending impact. We show here that multisensory integration of auditory and visual looming signals may be mediated by functional interactions between auditory cortex and the superior temporal sulcus, two areas involved in integrating behaviorally relevant auditory-visual signals. Audiovisual looming signals elicited increased gamma-band coherence between these areas, relative to unimodal or receding-motion signals. This suggests that the neocortex uses fast, flexible intercortical interactions to mediate multisensory integration.

2.
Tinnitus is the perception of sound in the absence of external stimulus. Currently, the pathophysiology of tinnitus is not fully understood, but recent studies indicate that alterations in the brain involve non-auditory areas, including the prefrontal cortex. In experiment 1, we used a go/no-go paradigm to evaluate the target detection speed and the inhibitory control in tinnitus participants (TP) and control subjects (CS), both in unimodal and bimodal conditions in the auditory and visual modalities. We also tested whether the sound frequency used for targets and distractors affected performance. We observed that TP were slower and made more false alarms than CS in all unimodal auditory conditions. TP were also slower than CS in the bimodal conditions. In addition, when comparing the response times in bimodal and auditory unimodal conditions, the expected gain in bimodal conditions was present in CS, but not in TP when tinnitus-matched frequency sounds were used as targets. In experiment 2, we tested the sensitivity to cross-modal interference in TP during auditory and visual go/no-go tasks where each stimulus was preceded by an irrelevant pre-stimulus in the untested modality (e.g., a high-frequency auditory pre-stimulus in the visual go/no-go condition). We observed that TP had longer response times than CS and made more false alarms in all conditions. In addition, the highest false alarm rate occurred in TP when tinnitus-matched/high-frequency sounds were used as pre-stimuli. We conclude that inhibitory control is altered in TP and that TP are abnormally sensitive to cross-modal interference, reflecting difficulty in ignoring irrelevant stimuli. The fact that the strongest interference effect was caused by tinnitus-like auditory stimulation is consistent with the hypothesis that such stimulation generates emotional responses that affect cognitive processing in TP. We postulate that executive function deficits play a key role in the perception and maintenance of tinnitus.

3.
Neurons in the superior colliculus (SC) are known to integrate stimuli of different modalities (e.g., visual and auditory) following specific properties. In this work, we present a mathematical model of the integrative response of SC neurons, in order to suggest a possible physiological mechanism underlying multisensory integration in the SC. The model includes three distinct neural areas: two unimodal areas (auditory and visual) are devoted to a topological representation of external stimuli, and communicate via synaptic connections with a third downstream area (in the SC) responsible for multisensory integration. The present simulations show that the model, with a single set of parameters, can mimic various responses to different combinations of external stimuli: inverse effectiveness, both in terms of multisensory enhancement and contrast; within- and cross-modality suppression between spatially disparate stimuli; and a reduction of network settling time in response to cross-modal stimuli compared with individual stimuli. The model suggests that non-linearities in neural responses and synaptic (excitatory and inhibitory) connections can explain several aspects of multisensory integration.
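
A minimal sketch of the kind of nonlinearity such a model relies on (illustrative only, not the authors' actual network): a sigmoidal rate function makes the response to combined near-threshold inputs proportionally larger than the best unimodal response, reproducing inverse effectiveness.

```python
import numpy as np

def response(x, slope=8.0, center=0.5):
    """Sigmoidal rate function: a simple stand-in for the nonlinear
    neural activation used in models of SC integration."""
    return 1.0 / (1.0 + np.exp(-slope * (x - center)))

def enhancement(a, v):
    """Multisensory enhancement (%) relative to the best unimodal response."""
    multi = response(a + v)                    # spatially aligned inputs sum
    best_uni = max(response(a), response(v))
    return 100.0 * (multi - best_uni) / best_uni

# Inverse effectiveness: weaker unimodal inputs yield larger enhancement.
for intensity in (0.2, 0.4, 0.6):
    print(f"input={intensity:.1f}  enhancement={enhancement(intensity, intensity):6.1f}%")
```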

4.
In addition to visually driven cells, we found within the lateral suprasylvian visual cortex of cats a considerable number of auditory and/or bimodal cells. Most of the visually driven cells were direction and orientation selective, with responses that were neither highly stimulus time locked nor very stable. Most of the auditory responses were also not very stable, had relatively high thresholds and were readily habituated. Previous studies have suggested that populations of cells within the lateral suprasylvian area are specialized for the analysis of optic flow fields. Given that a remarkable proportion of cells within this area can also be driven by auditory stimuli, we hypothesize that the "optic flow" model may be extended to the bimodal domain rather than restricted to visual cues only. This, however, remains to be corroborated experimentally.

5.
Sensory information from different modalities is processed in parallel, and then integrated in associative brain areas to improve object identification and the interpretation of sensory experiences. The superior colliculus (SC) is a midbrain structure that plays a critical role in integrating visual, auditory, and somatosensory input to assess saliency and promote action. Although the response properties of individual SC neurons to visuoauditory stimuli have been characterized, little is known about the spatial and temporal dynamics of the integration at the population level. Here we recorded the response properties of SC neurons to spatially restricted visual and auditory stimuli using large-scale electrophysiology. We then created a general, population-level model that explains the spatial, temporal, and intensity requirements of stimuli needed for sensory integration. We found that the mouse SC contains topographically organized visual and auditory neurons that exhibit nonlinear multisensory integration. We show that nonlinear integration depends on properties of auditory but not visual stimuli. We also find that a heuristically derived nonlinear modulation function reveals conditions required for sensory integration that are consistent with previously proposed models of sensory integration, such as spatial matching and the principle of inverse effectiveness.

6.
Young children do not integrate visual and haptic form information
Several studies have shown that adults integrate visual and haptic information (and information from other modalities) in a statistically optimal fashion, weighting each sense according to its reliability [1, 2]. When does this capacity for crossmodal integration develop? Here, we show that prior to 8 years of age, integration of visual and haptic spatial information is far from optimal, with either vision or touch dominating totally, even in conditions in which the dominant sense is far less precise than the other (assessed by discrimination thresholds). For size discrimination, haptic information dominates in determining both perceived size and discrimination thresholds, whereas for orientation discrimination, vision dominates. By 8-10 years, integration becomes statistically optimal, as in adults. We suggest that during development, perceptual systems require constant recalibration, for which cross-sensory comparison is important. Using one sense to calibrate the other precludes useful combination of the two sources.
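
The optimal rule the abstract refers to is the standard maximum-likelihood (reliability-weighted) combination. A minimal sketch, with illustrative numbers:

```python
import numpy as np

def optimal_combination(est_v, sigma_v, est_h, sigma_h):
    """Reliability-weighted (MLE) fusion of a visual and a haptic estimate.
    Each cue is weighted by its inverse variance; the fused estimate has
    lower variance than either cue alone."""
    w_v = (1 / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_h**2)
    w_h = 1.0 - w_v
    fused = w_v * est_v + w_h * est_h
    sigma_fused = np.sqrt(1 / (1 / sigma_v**2 + 1 / sigma_h**2))
    return fused, sigma_fused

# Example: vision twice as precise as touch, as in orientation discrimination.
fused, sigma = optimal_combination(est_v=10.0, sigma_v=1.0, est_h=14.0, sigma_h=2.0)
print(f"fused estimate = {fused:.2f}, fused sd = {sigma:.2f}")  # 10.80, 0.89
```

Total dominance by one sense, as reported here for young children, corresponds to setting one weight to 1 regardless of the relative reliabilities.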

7.
The notion of the temporal window of integration, when applied in a multisensory context, refers to the breadth of the interval across which the brain perceives two stimuli from different sensory modalities as synchronous. It maintains a unitary perception of multisensory events despite physical and biophysical timing differences between the senses. The boundaries of the window can be influenced by attention and past sensory experience. Here we examined whether task demands could also influence the multisensory temporal window of integration. We varied the stimulus onset asynchrony between simple, short-lasting auditory and visual stimuli while participants performed two tasks in separate blocks: a temporal order judgment task that required the discrimination of subtle auditory-visual asynchronies, and a reaction time task to the first incoming stimulus irrespective of its sensory modality. We defined the temporal window of integration as the range of stimulus onset asynchronies where performance was below 75% in the temporal order judgment task, as well as the range of stimulus onset asynchronies where responses showed multisensory facilitation (race model violation) in the reaction time task. In 5 of 11 participants, we observed audio-visual stimulus onset asynchronies where reaction time was significantly accelerated (indicating successful integration in this task) while performance was accurate in the temporal order judgment task (indicating successful segregation in that task). This dissociation suggests that in some participants, the boundaries of the temporal window of integration can adaptively recalibrate in order to optimize performance according to specific task demands.
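
The race model violation used here as the integration criterion is Miller's inequality. A minimal sketch of the test on hypothetical RT samples:

```python
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative RT distribution evaluated at times t."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t, side="right") / len(rts)

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Miller's inequality: under a race (no integration),
    P(RT<=t | AV) <= P(RT<=t | A) + P(RT<=t | V) for all t.
    Positive values of the returned difference indicate violation,
    i.e. genuine multisensory facilitation."""
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return ecdf(rt_av, t_grid) - bound

# Illustrative (hypothetical) RT samples in milliseconds.
rng = np.random.default_rng(0)
rt_a = rng.normal(260, 40, 500)
rt_v = rng.normal(280, 40, 500)
rt_av = rng.normal(215, 35, 500)   # faster than the race bound allows
t = np.arange(120, 320, 10)
print(np.round(race_model_violation(rt_a, rt_v, rt_av, t).max(), 3))
```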

8.
When dealing with natural scenes, sensory systems have to process an often messy and ambiguous flow of information. A stable perceptual organization nevertheless has to be achieved in order to guide behavior. The neural mechanisms involved can be highlighted by intrinsically ambiguous situations. In such cases, bistable perception occurs: distinct interpretations of the unchanging stimulus alternate spontaneously in the mind of the observer. Bistable stimuli have been used extensively for more than two centuries to study visual perception. Here we demonstrate that bistable perception also occurs in the auditory modality. We compared the temporal dynamics of percept alternations observed during auditory streaming with those observed for visual plaids and the susceptibilities of both modalities to volitional control. Strong similarities indicate that auditory and visual alternations share common principles of perceptual bistability. The absence of correlation across modalities for subject-specific biases, however, suggests that these common principles are implemented at least partly independently across sensory modalities. We propose that visual and auditory perceptual organization could rely on distributed but functionally similar neural competition mechanisms aimed at resolving sensory ambiguities.

9.
Unwanted scar tissue after surgical procedures remains a central problem in medicine. Nowhere is this problem more evident than within the pediatric airway, where excess scarring, termed subglottic stenosis, can compromise breathing. Recent advances in molecular biology have focused on ways to decrease scar formation through understanding of the wound repair process. Transforming growth factor beta (TGFbeta) plays a central role in this pathway. Ferrets serve as an ideal model for the pediatric airway, and reproduction of subglottic stenosis in ferrets is possible. However, ferret cytokine profiles have not been established. In this study, we characterized the presence and nucleotide sequence of the TGFbeta1 and TGFbeta2 genes in ferrets by using total RNA isolated from airways. Amino acid sequence homology between human and ferret was determined to be 96.6% for TGFbeta1 and 99.3% for TGFbeta2. Given the nearly total homology between TGFbetas of ferret and human origin, the ferret may serve as an ideal model for future molecular studies.
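
Homology figures like the 96.6% reported here are percent amino-acid identity over an alignment. A minimal sketch of that computation, on toy fragments (hypothetical, not real TGFbeta sequence):

```python
def percent_identity(seq1, seq2):
    """Percent identity between two aligned amino-acid sequences
    (gap characters '-' count toward alignment length but never match)."""
    assert len(seq1) == len(seq2), "sequences must be aligned to equal length"
    matches = sum(a == b and a != "-" for a, b in zip(seq1, seq2))
    return 100.0 * matches / len(seq1)

# Toy aligned fragments (hypothetical, for illustration only).
human  = "ALDTNYCFSSTEKNCCVRQL"
ferret = "ALDTNYCFSATEKNCCVRQL"
print(f"{percent_identity(human, ferret):.1f}% identity")  # 95.0%
```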

10.
Modern driver assistance systems make increasing use of auditory and tactile signals in order to reduce the driver's visual information load. This entails potential crossmodal interaction effects that need to be taken into account in designing an optimal system. Here we show that saccadic reaction times to visual targets (cockpit or outside mirror), presented in a driving simulator environment and accompanied by auditory or tactile accessories, follow some well-known spatiotemporal rules of multisensory integration, usually found under confined laboratory conditions. Auditory nontargets speed up reaction time by about 80 ms. The effect tends to be maximal when the nontarget is presented 50 ms before the target and when target and nontarget are spatially coincident. The effect of a tactile nontarget (vibrating steering wheel) was less pronounced and not spatially specific. It is shown that the average reaction times are well described by the stochastic "time window of integration" model for multisensory integration developed by the authors. This two-stage model postulates that crossmodal interaction occurs only if the peripheral processes from the different sensory modalities terminate within a fixed temporal interval, and that the amount of crossmodal interaction manifests itself in an increase or decrease of second-stage processing time. A qualitative test is consistent with the model prediction that the probability of interaction, but not the amount of crossmodal interaction, depends on target-nontarget onset asynchrony. A quantitative model fit yields estimates of individual participants' parameters, including the size of the time window. Some consequences for the design of driver assistance systems are discussed.
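
A Monte Carlo sketch of the two-stage idea, under simplifying assumptions (exponential peripheral latencies, a constant second stage, a fixed facilitation amount); parameters are illustrative, not fitted values from the study. As the model predicts, SOA changes the probability of integration while the amount of facilitation stays fixed.

```python
import numpy as np

def twin_mean_rt(soa, window=200.0, delta=80.0,
                 mu_a=60.0, mu_v=90.0, mu_second=250.0, n=100_000, seed=1):
    """Monte Carlo sketch of a time-window-of-integration (TWIN) style model.
    First stage: independent exponential peripheral latencies for the
    auditory nontarget (onset shifted by `soa` ms) and the visual target.
    If the two peripheral processes terminate within `window` ms of each
    other, the second stage is shortened by `delta` ms (facilitation)."""
    rng = np.random.default_rng(seed)
    t_a = soa + rng.exponential(mu_a, n)   # auditory peripheral finish time
    t_v = rng.exponential(mu_v, n)         # visual peripheral finish time
    integrate = np.abs(t_a - t_v) < window
    rt = t_v + mu_second - delta * integrate
    return rt.mean(), integrate.mean()

for soa in (-100, -50, 0, 50):   # negative: sound precedes the visual target
    mean_rt, p_int = twin_mean_rt(soa)
    print(f"SOA {soa:+4d} ms  mean RT {mean_rt:6.1f} ms  P(integration) {p_int:.2f}")
```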

11.
A behavioral profile of the ferret is presented for those who would like to use this animal in behavioral teratology and toxicology, or other disciplines involving behavior. We have reviewed the neurobehavioral teratology of lissencephalic ferrets and the neuropsychology of ferrets sustaining frontal lesions, as well as most of the studies of "normal" ferret behavior that have appeared in the research literature. Emphasis is placed on discussion of the tests used and how ferrets behaved on them. The behaviors discussed include spatial (maze) learning, delayed response, visual discrimination learning, discrimination learning sets, schedule-maintained behavior, shock avoidance learning and spontaneously occurring behaviors, such as ambulation in the open field, spontaneous alternation and species-specific behaviors. Although the use of the ferret in behavioral experiments is not yet extensive and large gaps exist in our knowledge about the basic functional capacities of this animal, the ferret is unquestionably well suited for behavioral studies.

12.
The corpus callosum (CC) is a brain structure composed of axon fibres linking the right and left hemispheres. Musical training is associated with larger midsagittal cross-sectional area of the CC, suggesting that interhemispheric communication may be faster in musicians. Here we compared interhemispheric transmission times (ITTs) for musicians and non-musicians. ITT was measured by comparing simple reaction times to stimuli presented to the same hemisphere that controlled a button-press response (uncrossed reaction time), or to the contralateral hemisphere (crossed reaction time). Both visual and auditory stimuli were tested. We predicted that the crossed-uncrossed difference (CUD) for musicians would be smaller than for non-musicians as a result of faster interhemispheric transfer times. We did not expect a difference in CUDs between the visual and auditory modalities for either musicians or non-musicians, as previous work indicates that interhemispheric transfer may happen through the genu of the CC, which contains motor fibres rather than sensory fibres. There were no significant differences in CUDs between musicians and non-musicians. However, auditory CUDs were significantly smaller than visual CUDs. Although this auditory-visual difference was larger in musicians than non-musicians, the interaction between modality and musical training was not significant. Therefore, although musical training does not significantly affect ITT, the crossing of auditory information between hemispheres appears to be faster than visual information, perhaps because subcortical pathways play a greater role for auditory interhemispheric transfer.
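
A worked example of the CUD computation, with hypothetical condition means; the CUD, typically a few milliseconds, is taken as an estimate of interhemispheric transfer time:

```python
# Hypothetical condition means (ms) in the Poffenberger paradigm:
# stimulus in left/right visual field (LVF/RVF) x responding left/right hand.
rt = {("LVF", "left"): 352.0,   # uncrossed: stimulus and hand, same hemisphere
      ("RVF", "right"): 354.0,  # uncrossed
      ("LVF", "right"): 356.5,  # crossed: response requires callosal transfer
      ("RVF", "left"): 357.5}   # crossed

uncrossed = (rt[("LVF", "left")] + rt[("RVF", "right")]) / 2
crossed = (rt[("LVF", "right")] + rt[("RVF", "left")]) / 2
cud = crossed - uncrossed   # crossed-uncrossed difference
print(f"CUD = {cud:.1f} ms")  # 4.0 ms with these illustrative numbers
```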

13.
King AJ. Current Biology 2002, 12(11): R393-R395.
Recent studies in owls and ferrets seem to have identified the origin and nature of the visual signals that shape the development of the auditory space map in the midbrain, which ensures that the neural representations of both sensory modalities share the same topographic organization.

14.
The intermediate and deep layers of the superior colliculus (SC) are known for their role in initiating orienting behaviors. To direct these orienting functions, the SC of some animals (e.g., primates, carnivores) is dominated by inputs from the distance senses (vision, audition). In contrast, the rodent SC relies more heavily on non-visual inputs, such as touch and nociception, possibly as an adaptive response to the proximity of dangers encountered during their somatosensory-dominant search behaviors. The ferret (a carnivore) seems to employ strategies of both groups: above ground they use visual/auditory cues, but during subterranean hunting ferrets must rely on non-visual signals to direct orienting. Therefore, the present experiments sought to determine whether the sensory inputs to the ferret SC reveal adaptations common to functioning in both environments. The results showed that the ferret SC is dominated (63%; 181/286) by visual/auditory inputs (like the cat), rather than by somatosensory inputs (as found in rodents). Furthermore, tactile responses were driven primarily from hair-receptors (like cats), not from the vibrissae (as in rodents). Additionally, while a majority of collicular neurons in rodents respond to brief noxious stimulation, no such neurons were encountered in the ferret SC. A small proportion (4%; 13/286) of the ferret SC neurons were responsive to long-duration (>5 s) noxious stimulation, but further tests could not establish these responses as nociceptive. Collectively, these data indicate that the ferret SC is best adapted for the animal's visually/acoustically guided activities and most closely resembles the SC of its phylogenetic relative, the cat.

15.
People often coordinate their movement with visual and auditory environmental rhythms. Previous research showed better performances when coordinating with auditory compared to visual stimuli, and with bimodal compared to unimodal stimuli. However, these results have been demonstrated with discrete rhythms and it is possible that such effects depend on the continuity of the stimulus rhythms (i.e., whether they are discrete or continuous). The aim of the current study was to investigate the influence of the continuity of visual and auditory rhythms on sensorimotor coordination. We examined the dynamics of synchronized oscillations of a wrist pendulum with auditory and visual rhythms at different frequencies, which were either unimodal or bimodal and discrete or continuous. Specifically, the stimuli used were a light flash, a fading light, a short tone and a frequency-modulated tone. The results demonstrate that the continuity of the stimulus rhythms strongly influences visual and auditory motor coordination. Participants' movement led continuous stimuli and followed discrete stimuli. Asymmetries between the half-cycles of the movement in terms of duration and nonlinearity of the trajectory occurred with slower discrete rhythms. Furthermore, the results show that the differences of performance between visual and auditory modalities depend on the continuity of the stimulus rhythms, as indicated by movements closer to the instructed coordination for the auditory modality when coordinating with discrete stimuli. The results also indicate that visual and auditory rhythms are integrated together in order to better coordinate irrespective of their continuity, as indicated by less variable coordination closer to the instructed pattern. Generally, the findings have important implications for understanding how we coordinate our movements with visual and auditory environmental rhythms in everyday life.
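
The abstract does not specify the analysis, but the lead/lag relations it describes are commonly quantified as continuous relative phase between movement and stimulus, e.g. via the Hilbert transform. A sketch on synthetic signals (positive mean phase meaning movement leads the stimulus):

```python
import numpy as np
from scipy.signal import hilbert

fs = 100.0                                    # sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)
f = 1.0                                       # coordination frequency (Hz)
stimulus = np.sin(2 * np.pi * f * t)
movement = np.sin(2 * np.pi * f * t + 0.3)    # movement leads by 0.3 rad

# Continuous relative phase from the analytic signal.
phase_stim = np.angle(hilbert(stimulus))
phase_move = np.angle(hilbert(movement))
rel = np.exp(1j * (phase_move - phase_stim))  # unit phasors, wrap-safe
mean_phase = np.degrees(np.angle(rel.mean())) # circular mean

print(f"mean relative phase = {mean_phase:.1f} deg (positive: movement leads)")
```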

16.
A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli, comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190–210 ms, for 1 kHz stimuli from 170–200 ms, for 2.5 kHz stimuli from 140–200 ms, and for 5 kHz stimuli from 100–200 ms. These findings suggest that a higher frequency sound signal paired with visual stimuli might be processed or integrated earlier, even though the auditory stimuli were task-irrelevant. Furthermore, audiovisual integration at late latencies (300–340 ms), with a fronto-central topography, was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirmed that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain integrates a visual signal with auditory stimuli of different frequencies.
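
The abstract does not state the criterion used, but ERP studies of audiovisual integration commonly apply the additive model, comparing the bimodal response with the sum of the unimodal responses. A sketch on hypothetical waveforms:

```python
import numpy as np

def additive_model_difference(erp_av, erp_a, erp_v):
    """Additive-model criterion often used in ERP studies of audiovisual
    integration: any deviation of the bimodal response from the sum of
    the unimodal responses, AV - (A + V), is taken as evidence of
    nonlinear integration."""
    return erp_av - (erp_a + erp_v)

# Hypothetical single-channel ERPs (microvolts), 1 ms resolution, 0-400 ms.
t = np.arange(400)
erp_a = 2.0 * np.exp(-((t - 100) / 30.0) ** 2)
erp_v = 3.0 * np.exp(-((t - 150) / 40.0) ** 2)
erp_av = erp_a + erp_v + 1.5 * np.exp(-((t - 180) / 25.0) ** 2)  # super-additive part

diff = additive_model_difference(erp_av, erp_a, erp_v)
window = (t >= 140) & (t <= 200)
print(f"peak AV-(A+V) in 140-200 ms window: {diff[window].max():.2f} uV")
```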

17.
The intermediate and deep layers of the superior colliculus (SC) are known for their role in initiating orienting behaviors. To direct these orienting functions, the SC of some animals (e.g., primates, carnivores) is dominated by inputs from the distance senses (vision, audition). In contrast, the rodent SC relies more heavily on non-visual inputs, such as touch and nociception, possibly as an adaptive response to the proximity of dangers encountered during their somatosensory-dominant search behaviors. The ferret (a carnivore) seems to employ strategies of both groups: above ground they use visual/auditory cues, but during subterranean hunting ferrets must rely on non-visual signals to direct orienting. Therefore, the present experiments sought to determine whether the sensory inputs to the ferret SC reveal adaptations common to functioning in both environments. The results showed that the ferret SC is dominated (63%; 181/286) by visual/auditory inputs (like the cat), rather than by somatosensory inputs (as found in rodents). Furthermore, tactile responses were driven primarily from hair-receptors (like cats), not from the vibrissae (as in rodents). Additionally, while a majority of collicular neurons in rodents respond to brief noxious stimulation, no such neurons were encountered in the ferret SC. A small proportion (4%; 13/286) of the ferret SC neurons were responsive to long-duration (>5 s) noxious stimulation, but further tests could not establish these responses as nociceptive. Collectively, these data indicate that the ferret SC is best adapted for the animal's visual/acoustically guided activities and most closely resembles the SC of its phylogenetic relative, the cat.

18.
Looming objects produce ecologically important signals that can be perceived in both the visual and auditory domains. Using a preferential looking technique with looming and receding visual and auditory stimuli, we examined the multisensory integration of looming stimuli by rhesus monkeys. We found a strong attentional preference for coincident visual and auditory looming but no analogous preference for coincident stimulus recession. Consistent with previous findings, the effect occurred only with tonal stimuli and not with broadband noise. The results suggest an evolved capacity to integrate multisensory looming objects.

19.
Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
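
A minimal sketch of the inference scheme described (words as points in a feature space, precision-weighted fusion of noisy auditory and visual observations, nearest-word decision). It illustrates the machinery only, with illustrative parameters; it is not the paper's fitted model and makes no claim about the dimensionality result.

```python
import numpy as np

def recognition_accuracy(dim, sigma_a, sigma_v=1.0, n_words=50, n_trials=2000, seed=0):
    """Words are random points in a `dim`-dimensional feature space.
    On each trial, noisy auditory and visual observations of a target
    word are fused by precision weighting; under isotropic Gaussian
    noise the posterior is maximized by the nearest word."""
    rng = np.random.default_rng(seed)
    words = rng.normal(0, 1, (n_words, dim))
    targets = rng.integers(0, n_words, n_trials)
    obs_a = words[targets] + rng.normal(0, sigma_a, (n_trials, dim))
    obs_v = words[targets] + rng.normal(0, sigma_v, (n_trials, dim))
    w_a = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_v**2)
    fused = w_a * obs_a + (1 - w_a) * obs_v
    dists = np.linalg.norm(fused[:, None, :] - words[None, :, :], axis=2)
    return (dists.argmin(axis=1) == targets).mean()

for sigma_a in (0.5, 1.5, 3.0):   # increasing auditory noise
    av = recognition_accuracy(dim=8, sigma_a=sigma_a)
    a_only = recognition_accuracy(dim=8, sigma_a=sigma_a, sigma_v=1e6)  # vision uninformative
    print(f"sigma_a={sigma_a:.1f}  A-only={a_only:.2f}  AV={av:.2f}")
```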

20.
Visual and auditory reaction times (RTs) have been reported to decrease during moderate aerobic exercise, and this has been interpreted as reflecting an exercise-induced activation (EIA) of cognitive information processing. In the present study we examined changes in several independent measures of information processing (RT, accuracy, P300 latency and amplitude) during exercise, and their relationship to visual or auditory modalities and to gender. P300 latencies offer independent measures of cognitive speed that are unrelated to motor output, and P300 amplitudes have been used as measures of attentional allocation. Twenty-four healthy college students [mean (SD) age 20 (2) years] performed auditory and visual "oddball" tasks during resting baseline, aerobic exercise, and recovery periods. Consistent with previous studies, both visual and auditory RTs during exercise were significantly shortened compared to control and recovery periods (which did not differ from each other). We now report that, paralleling the RT changes, auditory and visual P300 latencies decreased during exercise, indicating the occurrence of faster cognitive information processing in both sensory modalities. However, both auditory and visual P300 amplitudes decreased during exercise, suggesting diminished attentional resource allocation. In addition, error rates increased during exercise. Taken together, these results suggest that the enhancement of cognitive information processing speed during moderate aerobic exercise, although operating across genders and sensory modalities, is not a global facilitation of cognition, but is accompanied by decreased attention and increased errors.
