Similar Documents
20 similar documents found
1.
While the different sensory modalities are sensitive to different stimulus energies, they are often charged with extracting analogous information about the environment. Neural systems may thus have evolved to implement similar algorithms across modalities to extract behaviorally relevant stimulus information, leading to the notion of a canonical computation. In both vision and touch, information about motion is extracted from a spatiotemporal pattern of activation across a sensory sheet (in the retina and in the skin, respectively), a process that has been extensively studied in both modalities. In this essay, we examine the processing of motion information as it ascends the primate visual and somatosensory neuraxes and conclude that similar computations are implemented in the two sensory systems.
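To make the proposed canonical computation concrete, the sketch below implements a generic correlation-based (Reichardt-style) motion detector on a time-by-position activation pattern, the kind of spatiotemporal sheet the abstract describes for both retina and skin. This is a textbook construction with my own illustrative parameters, not the specific model analyzed in the paper.

```python
import numpy as np

def reichardt_direction(sheet, dt=1, dx=1):
    """Opponent correlation detector on a 2-D (time x position)
    activation pattern. Positive output signals rightward motion.
    A generic sketch, not the paper's specific model."""
    past, present = sheet[:-dt, :], sheet[dt:, :]
    # rightward subunit: past activity at x predicts present activity at x+dx
    right = (past[:, :-dx] * present[:, dx:]).sum()
    # leftward subunit: the mirror image
    left = (past[:, dx:] * present[:, :-dx]).sum()
    return right - left

# a Gaussian bar drifting rightward across a 40-receptor sheet
t, x = np.arange(60)[:, None], np.arange(40)[None, :]
stimulus = np.exp(-0.5 * ((x - 0.5 * t) / 2.0) ** 2)
print(reichardt_direction(stimulus) > 0)  # True: rightward motion detected
```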

2.
Self-localization requires that information from several sensory modalities and knowledge domains be integrated in order to identify an environment and determine current location and heading. This integration occurs by the convergence of highly processed sensory information onto neural systems in entorhinal cortex and hippocampus. Entorhinal neurons combine angular and linear self-motion information to generate an oriented metric signal that is then 'attached' to each environment using information about landmarks and context. Neurons in hippocampus use this signal to determine the animal's unique position within a particular environment. Elucidating this process illuminates not only spatial processing but also, more generally, how the brain builds knowledge representations from inputs carrying heterogeneous sensory and semantic content.

3.
Humans are, without a doubt, first and foremost visual beings. Even though the other sensory modalities provide us with valuable information, it is vision that generally offers the most reliable and detailed information about our immediate surroundings. It is therefore not surprising that nearly a third of the human brain processes, in one way or another, visual information. But what happens when visual information no longer reaches the brain regions responsible for processing it? Numerous medical conditions, such as congenital glaucoma, retinitis pigmentosa, and retinal detachment, to name a few, can disrupt the visual system and lead to blindness. So, do the brain areas responsible for processing visual stimuli simply shut down and become non-functional? Do they become dead weight and stop contributing to cognitive and sensory processes? Current data suggest that this is not the case. Quite the contrary: congenitally blind individuals appear to benefit from the recruitment of these areas by other sensory modalities to carry out non-visual tasks. Our laboratory has been studying blindness and its consequences for both brain and behaviour for many years. We have shown that blind individuals demonstrate exceptional hearing abilities, for stimuli originating in both near and far space. This also holds true, under certain circumstances, for those who lost their sight later in life, beyond the period generally believed to limit brain changes following the loss of sight. In the early blind, we have shown that the ability to localize sounds is strongly correlated with activity in the occipital cortex (the site of visual processing), demonstrating that these areas are functionally engaged by the task. It would therefore seem that the plastic nature of the human brain allows blind individuals to make new use of the cerebral areas normally dedicated to visual processing.

4.
Gottfried JA, Dolan RJ. Neuron 2003, 39(2): 375-386
Human olfactory perception is notoriously unreliable but shows substantial benefits from visual cues, suggesting important crossmodal integration between these primary sensory modalities. We used event-related fMRI to determine the underlying neural mechanisms of olfactory-visual integration in the human brain. Subjects participated in an olfactory detection task in which odors and pictures were delivered separately or together. By manipulating the degree of semantic correspondence between odor-picture pairs, we show a perceptual olfactory facilitation for semantically congruent (versus incongruent) trials. This behavioral advantage was associated with enhanced neural activity in the anterior hippocampus and rostromedial orbitofrontal cortex. We suggest that these findings indicate that the human hippocampus mediates reactivation of crossmodal semantic associations, even in the absence of explicit memory processing.

5.
Recent brain imaging studies have revealed sensory substitution in blind subjects and others with sensory deficits: cortical regions traditionally thought to respond only to stimuli from a single sensory modality also participate in processing information from other modalities. Similar effects have been observed in sighted subjects under sensory deprivation (blindfolding), suggesting that neural pathways for multisensory interaction may pre-exist in the brain. It is generally assumed that such pathways exist in latent form in the normal human brain and are revealed or strengthened only under sensory deprivation. However, existing studies lack firm evidence on whether sensory deprivation is a necessary condition for these pathways to operate. Using an experimental design with strong statistical power, we auditorily presented a set of nouns to sighted, non-blindfolded subjects and asked them to judge, for each word heard, whether it named an artificial or a natural object. Statistical analysis of simultaneously acquired functional MRI signals revealed significant activation in visual cortical regions. These results show that cross-modal neural pathways can be demonstrated without sensory deprivation and therefore do not exist in a purely latent form in the normal human brain. This finding provides a constraint for concrete theoretical models of the neural mechanisms of multisensory interaction.

6.
The integration of information from different sensory modalities has many advantages for human observers, including increased salience, resolution of perceptual ambiguities, and unified perception of objects and surroundings. Behavioral, electrophysiological, and neuroimaging data collected in various tasks, including localization and detection of spatial events, crossmodal perception of object properties, and scene analysis, are reviewed here. The results highlight the many faces of crossmodal interactions and provide converging evidence that the brain exploits spatial and temporal coincidence between events in the crossmodal binding of spatial features gathered through different modalities. Furthermore, the elaboration of a multimodal percept appears to be based on an adaptive combination of the contribution of each modality, weighted by the intrinsic reliability of each sensory cue, which itself depends on the task at hand and the kind of perceptual cues involved. Computational models based on Bayesian sensory estimation provide valuable explanations of how the perceptual system could perform such crossmodal integration. Recent anatomical evidence suggests that crossmodal interactions affect early stages of sensory processing and could be mediated by a dynamic recurrent network involving backprojections from multimodal areas as well as lateral connections that modulate the activity of primary sensory cortices, though future behavioral and neurophysiological studies should allow a better understanding of the underlying mechanisms.
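As a concrete illustration of the adaptive combination scheme described above, the sketch below implements standard maximum-likelihood cue fusion under Gaussian assumptions, the textbook form of the Bayesian models the review refers to; the numbers are purely illustrative.

```python
import numpy as np

def fuse(estimates, variances):
    """Reliability-weighted (maximum-likelihood) cue combination:
    each cue is weighted by its reliability 1/variance, and the fused
    variance is never worse than the best single cue's variance."""
    w = 1.0 / np.asarray(variances, dtype=float)   # reliabilities
    fused = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    return fused, 1.0 / np.sum(w)

# vision locates an event at 10 deg (sharp), audition at 14 deg (blurry)
pos, var = fuse([10.0, 14.0], [1.0, 4.0])
print(pos, var)   # 10.8, 0.8: the estimate is dominated by the reliable cue
```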

7.
8.
The semantic content, or meaning, is the essence of autobiographical memories. Compared with previous research, which has mainly focused on the phenomenological experience and the age distribution of retrieved events, the present study provides a novel view of the retrieval of event information by quantifying that information as semantic representations. We investigated the semantic representation of sensory-cued autobiographical events and studied the modality hierarchy within multimodal retrieval cues. The experiment comprised a cued-recall task in which participants were presented with visual, auditory, olfactory, or multimodal retrieval cues and asked to recall autobiographical events. The results indicated that the three unimodal retrieval cues generate significantly different semantic representations. Further, the auditory and visual modalities contributed the most to the semantic representation of multimodally retrieved events. Finally, the semantic representation in the multimodal condition could be described as a combination of the three unimodal conditions. In conclusion, these results suggest that the meaning of the retrieved event information depends on the modality of the retrieval cues.

9.
Anthrozoös 2013, 26(3): 222-235
Stimuli and events that animals have to learn about in both their natural environments and in modern environments, such as our homes and farms, are often “multisensory,” i.e., they usually occur in more than one sensory modality. There are studies in humans and animals showing that stimuli that are multisensory are learned more quickly than stimuli presented in just a single sensory modality. My aim is to highlight how animals can combine information from several sensory modalities simultaneously, and show how using multisensory stimuli in training can enhance an animal's ability to learn new behaviors.

10.
In a multisensory task, human adults integrate information from different sensory modalities, behaviorally in an optimal Bayesian fashion, while children mostly rely on a single sensory modality for decision making. The reason behind this change of behavior over age, and the process behind learning the statistics required for optimal integration, remain unclear and are not explained by conventional Bayesian modeling. We propose an interactive multisensory learning framework that makes no prior assumptions about the sensory models. In this framework, learning in every modality and in their joint space proceeds in parallel using a single-step reinforcement learning method. A simple statistical test on confidence intervals around the mean of the reward distributions is used to select the most informative source of information among the individual modalities and the joint space. Analyses of the method and simulation results on a multimodal localization task show that the learning system autonomously starts with sensory selection and gradually switches to sensory integration. This is because relying more on individual modalities (i.e., selection) at early learning steps (childhood) is more rewarding than favoring decisions learned in the joint space: the smaller state space of each modality results in faster learning within that modality. In contrast, after gaining sufficient experience (adulthood), the quality of learning in the joint space matures, while learning in the individual modalities suffers from insufficient accuracy due to perceptual aliasing. This yields a tighter confidence interval for the joint space and consequently causes a smooth shift from selection to integration. It suggests that sensory selection and integration are emergent behaviors, both outputs of a single reward-maximization process; i.e., the transition is not a preprogrammed phenomenon.
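The sketch below illustrates only the source-selection step of this framework, under my own assumptions (Gaussian confidence intervals on mean reward, selection by the highest lower bound); the single-step reinforcement learning updates themselves are omitted, and all names are hypothetical.

```python
import math

class Source:
    """Reward statistics for one information source: a single
    modality or the joint space."""
    def __init__(self, name):
        self.name, self.rewards = name, []

    def update(self, reward):
        self.rewards.append(reward)

    def ci(self, z=1.96):
        """Gaussian 95% confidence interval on the mean reward."""
        n = len(self.rewards)
        if n < 2:
            return (-math.inf, math.inf)
        mean = sum(self.rewards) / n
        var = sum((r - mean) ** 2 for r in self.rewards) / (n - 1)
        half = z * math.sqrt(var / n)
        return (mean - half, mean + half)

def most_informative(sources):
    """Act on the source whose CI lower bound is highest: early on, the
    small single-modality spaces win (fast, confident learning); with
    experience, the joint space's interval tightens and takes over."""
    return max(sources, key=lambda s: s.ci()[0])
```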

11.
12.
Tai Mei-Hui, Zipser Birgit. Brain Cell Biology 2002, 31(8-9): 743-754
Differences in carbohydrate signaling control sequential steps in synaptic growth of sensory afferents in the leech. The relevant glycans are constitutive and developmentally regulated modifications of leechCAM and Tractin (family members of NCAM and L1) that are specific to the surface of sensory afferents. A mannosidic glycosylation mediates the dynamic growth of early afferents as they explore their target region through sprouting sensory arbors rich with synaptic vesicles. Later emerging galactosidic glycosylations serve as markers for subsets of the same sensory afferents that correlate with different sensory modalities. These developmentally regulated galactose markers now oppose the function of the constitutive mannose marker. Sensory afferents gain cell-cell contact with central neurons and self-similar afferents, but lose filopodia and synaptic vesicles. Extant vesicles are confined to sites of en passant synapse formation. The transformation of sensory afferent growth, progressing from mannose- to galactose-specific recognition, is consistent with a change from cell-matrix to cell-cell contact. While the constitutive mannosidic glycosylation promotes dynamic growth, developmentally regulated galactosidic glycosylations of the same cell adhesion molecules promote tissue stability. The persistence of both types of neutral glycans beyond embryonic age allows their function in synaptic plasticity during habituation and learning.

13.
Stimuli from different sensory modalities are thought to be processed initially in distinct unisensory brain areas prior to convergence in multisensory areas. However, signals in one modality can influence the processing of signals from other modalities, and recent studies suggest that this cross-modal influence may occur early on, even in ‘unisensory’ areas. Some recent psychophysical studies have shown specific cross-modal effects between touch and vision during binocular rivalry, but these cannot completely rule out a response bias. To test for genuine cross-modal integration of haptic and visual signals, we investigated whether congruent haptic input could influence visual contrast sensitivity compared to incongruent haptic input in three psychophysical experiments using a two-interval, two-alternative forced-choice method to eliminate response bias. The initial experiment demonstrated that contrast thresholds for a visual grating were lower when exploring a haptic grating that shared the same orientation compared to an orthogonal orientation. Two subsequent experiments mapped the orientation and spatial frequency tunings of the congruent haptic facilitation of vision, finding a clear orientation tuning effect but no spatial frequency tuning. In addition to an increased contrast sensitivity for iso-oriented visual-haptic gratings, we found a significant loss of sensitivity for orthogonally oriented visual-haptic gratings. We conclude that the tactile influence on vision is a result of tactile input to orientation-tuned visual areas.

14.
Speech perception often benefits from vision of the speaker's lip movements when they are available. One potential mechanism underlying this reported gain in perception arising from audio-visual integration is on-line prediction. In this study we address whether preceding speech context in a single modality can improve audiovisual processing and whether this improvement is based on on-line information transfer across sensory modalities. In the experiments presented here, during each trial, a speech fragment (context) presented in a single sensory modality (voice or lips) was immediately continued by an audiovisual target fragment. Participants made speeded judgments about whether voice and lips were in agreement in the target fragment. The leading single-modality context and the subsequent audiovisual target fragment could be continuous in one modality only, in both (context in one modality continues into both modalities in the target fragment), or in neither (i.e., discontinuous). The results showed quicker audiovisual matching responses when context was continuous with the target within either the visual or the auditory channel (Experiment 1). Critically, prior visual context also provided an advantage when it was cross-modally continuous (with the auditory channel in the target), but auditory-to-visual cross-modal continuity resulted in no advantage (Experiment 2). This suggests that visual speech information can provide an on-line benefit for processing the upcoming auditory input through the use of predictive mechanisms. We hypothesize that this benefit is expressed at an early level of speech analysis.

15.
Most animal species use distinctive courtship patterns to choose among potential mates. Over time, the sensory signaling and preferences used during courtship can diverge among groups that are reproductively isolated. This divergence of signal traits and preferences is thought to be an important cause of behavioral isolation during the speciation process. Here, we examine the sensory modalities used in courtship by two closely related species, Drosophila subquinaria and Drosophila recens, which overlap in geographic range and are incompletely reproductively isolated. We use observational studies of courtship patterns and manipulation of male and female sensory modalities to determine the relative roles of visual, olfactory, gustatory, and auditory signals during conspecific mate choice. We find that sex-specific, species-specific, and population-specific cues are used during mate acquisition within populations of D. subquinaria and D. recens. We identify shifts in both male and female sensory modalities between species, and also between populations of D. subquinaria. Our results indicate that divergence in mating signals and preferences has occurred on a relatively short timescale within and between these species. Finally, we suggest that because olfactory cues are essential for D. subquinaria females to mate within species, they may also underlie variation in behavioral discrimination across populations and species.

16.
The posterior parietal cortex has long been considered an 'association' area that combines information from different sensory modalities to form a cognitive representation of space. However, until recently little has been known about the neural mechanisms responsible for this important cognitive process. Recent experiments from the author's laboratory indicate that visual, somatosensory, auditory and vestibular signals are combined in areas LIP and 7a of the posterior parietal cortex. The integration of these signals can represent the locations of stimuli with respect to the observer and within the environment. Area MSTd combines visual motion signals, similar to those generated during an observer's movement through the environment, with eye-movement and vestibular signals. This integration appears to play a role in specifying the path on which the observer is moving. All three cortical areas combine different modalities into common spatial frames by using a gain-field mechanism. The spatial representations in areas LIP and 7a appear to be important for specifying the locations of targets for actions such as eye movements or reaching; the spatial representation within area MSTd appears to be important for navigation and the perceptual stability of motion signals.
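The gain-field mechanism can be illustrated with a toy model (my own construction, not a fit to LIP, 7a, or MSTd data): a Gaussian retinotopic tuning curve whose amplitude, but not peak location, is scaled by eye position, which lets a downstream population decode head-centered locations.

```python
import numpy as np

def gain_field_response(retinal_pos, eye_pos,
                        preferred_retinal=0.0, gain_slope=0.02):
    """Planar gain field: Gaussian retinal tuning multiplicatively
    modulated by a linear function of eye position."""
    tuning = np.exp(-0.5 * ((retinal_pos - preferred_retinal) / 10.0) ** 2)
    gain = 1.0 + gain_slope * eye_pos   # eye position scales the response
    return tuning * gain

# Same retinal stimulus at two eye positions: response amplitude changes
# while the preferred retinal location stays fixed, the gain-field signature.
print(gain_field_response(0.0, eye_pos=-20.0))  # 0.6: reduced gain
print(gain_field_response(0.0, eye_pos=+20.0))  # 1.4: increased gain
```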

17.
Integrating information across sensory domains to construct a unified representation of multi-sensory signals is a fundamental characteristic of perception in ecological contexts. One provocative hypothesis deriving from neurophysiology suggests that there exists early and direct cross-modal phase modulation. We provide evidence, based on magnetoencephalography (MEG) recordings from participants viewing audiovisual movies, that low-frequency neuronal information lies at the basis of the synergistic coordination of information across auditory and visual streams. In particular, the phase of the 2–7 Hz delta and theta band responses carries robust (in single trials) and usable information (for parsing the temporal structure) about stimulus dynamics in both sensory modalities concurrently. These experiments are the first to show in humans that a particular cortical mechanism, delta-theta phase modulation across early sensory areas, plays an important “active” role in continuously tracking naturalistic audio-visual streams, carrying dynamic multi-sensory information, and reflecting cross-sensory interaction in real time.
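Phase coordination of this kind is commonly quantified as a phase-locking value. The sketch below (a generic computation with numpy/scipy, not the authors' MEG pipeline) band-passes two signals at 2-7 Hz, extracts instantaneous phase with the Hilbert transform, and measures the consistency of the phase difference.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def delta_theta_plv(x, y, fs, band=(2.0, 7.0)):
    """Phase-locking value between two time series in the 2-7 Hz
    delta-theta band: 0 = no phase relation, 1 = perfect locking."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# two noisy sensors driven by the same 4 Hz rhythm lock strongly
fs = 250
t = np.arange(0, 10, 1 / fs)
drive = np.sin(2 * np.pi * 4 * t)
x = drive + 0.5 * np.random.randn(t.size)
y = drive + 0.5 * np.random.randn(t.size)
print(delta_theta_plv(x, y, fs))   # close to 1
```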

18.
Sensory information from different modalities is processed in parallel, and then integrated in associative brain areas to improve object identification and the interpretation of sensory experiences. The Superior Colliculus (SC) is a midbrain structure that plays a critical role in integrating visual, auditory, and somatosensory input to assess saliency and promote action. Although the response properties of individual SC neurons to visuoauditory stimuli have been characterized, little is known about the spatial and temporal dynamics of the integration at the population level. Here we recorded the response properties of SC neurons to spatially restricted visual and auditory stimuli using large-scale electrophysiology. We then created a general, population-level model that explains the spatial, temporal, and intensity requirements of stimuli needed for sensory integration. We found that the mouse SC contains topographically organized visual and auditory neurons that exhibit nonlinear multisensory integration. We show that nonlinear integration depends on properties of auditory but not visual stimuli. We also find that a heuristically derived nonlinear modulation function reveals conditions required for sensory integration that are consistent with previously proposed models of sensory integration such as spatial matching and the principle of inverse effectiveness.
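To make 'nonlinear integration' and 'inverse effectiveness' concrete, here is the standard multisensory enhancement index from the literature; this is a generic formula, not the paper's heuristically derived modulation function, which the abstract does not specify.

```python
def enhancement(multi, visual, auditory):
    """Multisensory enhancement (percent): how much the combined
    response exceeds the best unisensory response. A combined response
    above the sum of the unisensory ones is superadditive (nonlinear)."""
    best = max(visual, auditory)
    return 100.0 * (multi - best) / best

# inverse effectiveness: weaker unisensory responses yield larger gains
print(enhancement(multi=30.0, visual=10.0, auditory=8.0))     # 200.0
print(enhancement(multi=120.0, visual=100.0, auditory=60.0))  # 20.0
```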

19.
The wind-sensitive cercal system of orthopteroid insects, which mediates detection of an approaching predator, is a very sensitive sensory system. It has been intensively analysed from behavioural and neurobiological points of view and constitutes a classical model system in neuroethology. Escape behaviour is triggered in orthopteroids by the detection of air currents produced by approaching objects, allowing these insects to keep away from potential dangers. Nevertheless, escape behaviour has not been studied in terms of success. Moreover, an attacking predator is more than “air movement”; it is also a visible moving entity. The sensory basis of predator detection is thus probably more complex than the perception of air movement by the cerci. We used a piston mimicking an attacking running predator for a quantitative evaluation of the escape behaviour of wood crickets, Nemobius sylvestris. The movement of the piston not only generates air movement; it can also be seen by the insect and can touch it, like a natural predator. This procedure allowed us to study escape behaviour in terms of detection and also in terms of success. Our results showed that 5-52% of crickets that detected the piston thrust were nonetheless touched. Crickets escaped stimulation from behind better than stimulation from the front, even though they detected the approaching object similarly in both cases. After cerci ablation, 48% of crickets were still able to detect a piston approaching from behind (compared with 79% detection in intact insects) and 24% of crickets escaped successfully (compared with 62% in intact insects). Thus, the cerci play a major role in the detection of an approaching object, but other mechanoreceptors or sensory modalities are implicated in this detection. We cannot conclude that other sensory modalities participate in the behaviour of intact animals; rather, in the absence of cerci, other sensory modalities can partially mediate the behaviour. Nevertheless, neither the antennae nor the eyes seem to be used for detecting approaching objects, as their inactivation did not reduce detection and escape abilities when the cerci were present.

20.
The ability to integrate information across multiple sensory systems offers several behavioral advantages, from quicker reaction times and more accurate responses to better detection and more robust learning. At the neural level, multisensory integration requires large-scale interactions between different brain regions: the convergence of information from separate sensory modalities, represented by distinct neuronal populations. The interactions between these neuronal populations must be fast and flexible, so that behaviorally relevant signals belonging to the same object or event can be immediately integrated and integration of unrelated signals can be prevented. Looming signals are a particular class of signals that are behaviorally relevant for animals and that occur in both the auditory and visual domain. These signals indicate the rapid approach of objects and provide highly salient warning cues about impending impact. We show here that multisensory integration of auditory and visual looming signals may be mediated by functional interactions between auditory cortex and the superior temporal sulcus, two areas involved in integrating behaviorally relevant auditory-visual signals. Audiovisual looming signals elicited increased gamma-band coherence between these areas, relative to unimodal or receding-motion signals. This suggests that the neocortex uses fast, flexible intercortical interactions to mediate multisensory integration.
