Similar Articles (20 results found)
1.
Our understanding of multisensory integration has advanced because of recent functional neuroimaging studies of three areas in human lateral occipito-temporal cortex: superior temporal sulcus, area LO and area MT (V5). Superior temporal sulcus is activated strongly in response to meaningful auditory and visual stimuli, but responses to tactile stimuli have not been well studied. Area LO shows strong activation in response to both visual and tactile shape information, but not to auditory representations of objects. Area MT, an important region for processing visual motion, also shows weak activation in response to tactile motion, and a signal that drops below resting baseline in response to auditory motion. Within superior temporal sulcus, a patchy organization of regions is activated in response to auditory, visual and multisensory stimuli. This organization appears similar to that observed in polysensory areas in macaque superior temporal sulcus, suggesting that it is an anatomical substrate for multisensory integration. A patchy organization might also be a neural mechanism for integrating disparate representations within individual sensory modalities, such as representations of visual form and visual motion.

2.
In order to determine precisely the location of a tactile stimulus presented to the hand, it is necessary to know not only which part of the body has been stimulated, but also where that part of the body lies in space. This involves the multisensory integration of visual, tactile, proprioceptive, and even auditory cues regarding limb position. In recent years, researchers have become increasingly interested in the question of how these various sensory cues are weighted and integrated to enable people to localize tactile stimuli, as well as to give rise to the 'felt' position of our limbs, and ultimately the multisensory representation of 3-D peripersonal space. We highlight recent research on this topic using the crossmodal congruency task, in which participants make speeded elevation discrimination responses to vibrotactile targets presented to the thumb or index finger, while simultaneously trying to ignore irrelevant visual distractors presented from either the same (i.e., congruent) or a different (i.e., incongruent) elevation. Crossmodal congruency effects (calculated as the difference in performance between incongruent and congruent trials) are greatest when visual and vibrotactile stimuli are presented from the same azimuthal location, thus providing an index of common position across different sensory modalities. The crossmodal congruency task has been used to investigate a number of questions related to the representation of space in both normal participants and brain-damaged patients. In this review, we detail the major findings from this research, and highlight areas of convergence with other cognitive neuroscience disciplines.
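A minimal sketch of how that congruency index is computed, using simulated reaction times (all numbers below are invented for illustration; published studies often combine speed and accuracy into inverse-efficiency scores):

```python
import numpy as np

# The crossmodal congruency effect (CCE) is the difference in performance
# between incongruent and congruent trials; trial data simulated for illustration.
rng = np.random.default_rng(0)
rt_congruent = rng.normal(550, 60, size=200)    # ms; distractor at the same elevation
rt_incongruent = rng.normal(610, 60, size=200)  # ms; distractor at a different elevation

cce = rt_incongruent.mean() - rt_congruent.mean()
print(f"CCE = {cce:.0f} ms")  # a larger CCE indexes stronger visual-tactile binding
```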

3.
The question of how we experience ownership of an entire body distinct from the external world is a fundamental problem in psychology and neuroscience [1-6]. Earlier studies suggest that integration of visual, tactile, and proprioceptive information in multisensory areas [7-11] mediates self-attribution of single limbs. However, it is still unknown how ownership of individual body parts translates into the unitary experience of owning a whole body. Here, we used a "body-swap" illusion [12], in which people experienced an artificial body to be their own, in combination with functional magnetic resonance imaging to reveal a coupling between the experience of full-body ownership and neural responses in bilateral ventral premotor and left intraparietal cortices, and left putamen. Importantly, activity in the ventral premotor cortex reflected the construction of ownership of a whole body from the parts, because it was stronger when the stimulated body part was attached to a body, was present irrespective of whether the illusion was triggered by stimulation of the hand or the abdomen, and displayed multivoxel patterns carrying information about full-body ownership. These findings suggest that the unitary experience of owning an entire body is produced by neuronal populations that integrate multisensory information across body segments.

4.
Here we report findings from neuropsychological investigations showing the existence, in humans, of intersensory integrative systems representing space through the multisensory coding of visual and tactile events. In addition, these findings show that visuo-tactile integration may take place in a privileged manner within a limited sector of space closely surrounding the body surface, i.e., the near-peripersonal space. They also demonstrate that the representation of near-peripersonal space is not static: objects in out-of-reach space can be processed as nearer, depending upon (illusory) visual information about hand position in space and the use of tools as physical extensions of reachable space. Finally, new evidence is provided suggesting that the multisensory coding of peripersonal space can be achieved through bottom-up processing that, at least in some instances, is not necessarily modulated by more "cognitive" top-down processing, such as expectations regarding the possibility of being touched. These findings are entirely consistent with the functional properties of multisensory neuronal structures coding near-peripersonal space in monkeys, as well as with behavioral and neuroimaging evidence for the cross-modal coding of space in normal subjects. This high level of convergence ultimately favors the idea that multisensory space coding is achieved through similar multimodal structures in both humans and non-human primates.

5.
Driver J, Noesselt T. Neuron. 2008;57(1):11-23.
Although much traditional sensory research has studied each sensory modality in isolation, there has been a recent explosion of interest in causal interplay between different senses. Various techniques have now identified numerous multisensory convergence zones in the brain. Some convergence may arise surprisingly close to low-level sensory-specific cortex, and some direct connections may exist even between primary sensory cortices. A variety of multisensory phenomena have now been reported in which sensory-specific brain responses and perceptual judgments concerning one sense can be affected by relations with other senses. We survey recent progress in this multisensory field, foregrounding human studies against the background of invasive animal work and highlighting possible underlying mechanisms. These include rapid feedforward integration, possible thalamic influences, and/or feedback from multisensory regions to sensory-specific brain areas. Multisensory interplay is more prevalent than classic modular approaches assumed, and new methods are now available to determine the underlying circuits.

6.
The neurobiology of reaching has been extensively studied in human and non-human primates. However, the mechanisms that allow a subject to decide—without engaging in explicit action—whether an object is reachable are not fully understood. Some studies conclude that decisions near the reach limit depend on motor simulations of the reaching movement. Others have shown that the body schema plays a role in explicit and implicit distance estimation, especially after motor practice with a tool. In this study we evaluate the causal role of multisensory body representations in the perception of reachable space. We reasoned that if the body schema is used to estimate reach, an illusion of finger size induced by proprioceptive stimulation should propagate to the perception of reaching distances. To test this hypothesis, we induced a proprioceptive illusion of extension or shrinkage of the right index finger while participants judged a series of LEDs as reachable or non-reachable without actual movement. Our results show that reach distance estimation depends on the illusory perceived size of the finger: illusory elongation produced a shift of reaching distance away from the body, whereas illusory shrinkage produced the opposite effect. Combining these results with previous findings, we suggest that deciding whether a target is reachable requires an integration of body inputs in higher-order multisensory parietal areas that engage in movement simulations through connections with frontal premotor areas.

7.

Background

Body image distortion is a central symptom of Anorexia Nervosa (AN). Although corporeal awareness is multisensory, the majority of AN studies have investigated only visual misperception. We systematically reviewed AN studies that have investigated different nonvisual sensory inputs, using an integrative multisensory approach to body perception. We also discuss the findings in the light of neuroimaging evidence in AN.

Methods

PubMed and PsycINFO were searched through March 2014. To be included in the review, studies were required to investigate a sample of patients with current or past AN together with a control group, and to use tasks that directly engaged one or more nonvisual sensory domains.

Results

Thirteen studies were included, covering a total of 223 people with current or past AN and 273 control subjects. Overall, the results show impairment in the tactile and proprioceptive domains of body perception in AN patients. Interoception and multisensory integration have rarely been explored directly in AN patients. A limitation of this review is the relatively small amount of literature available.

Conclusions

Our results showed that AN patients have a multisensory impairment of body perception that goes beyond visual misperception and involves tactile and proprioceptive sensory components. Furthermore, the impairment of tactile and proprioceptive components may be associated with parietal cortex alterations in AN patients. Interoception and multisensory integration have been only weakly explored directly. Further research, using multisensory approaches as well as neuroimaging techniques, is needed to better define the complexity of body image distortion in AN.

Key Findings

The review suggests an altered capacity of AN patients to process and integrate bodily signals: body parts are experienced as dissociated from their holistic and perceptive dimensions. Specifically, it is likely that not only perception but also memory, in particular sensorimotor/proprioceptive memory, shapes bodily experience in patients with AN.

8.
How do I know the person I see in the mirror is really me? Is it because I know the person simply looks like me, or is it because the mirror reflection moves when I move, and I see it being touched when I feel touch myself? Studies of face recognition suggest that visual recognition of stored visual features informs self-face recognition. In contrast, body-recognition studies conclude that multisensory integration is the main cue to selfhood. The present study investigates for the first time the specific contribution of current multisensory input to self-face recognition. Participants were stroked on their face while they were looking at a morphed face being touched in synchrony or asynchrony. Before and after the visuo-tactile stimulation, participants performed a self-recognition task. The results show that multisensory signals have a significant effect on self-face recognition. Synchronous tactile stimulation while watching another person's face being similarly touched produced a bias in recognizing one's own face, in the direction of the other person being included in the representation of one's own face. Multisensory integration can update cognitive representations of one's body, such as the sense of ownership. The present study extends this converging evidence by showing that the correlation of synchronous multisensory signals also updates the representation of one's face. The face is a key feature of our identity, but at the same time is a source of rich multisensory experiences used to maintain or update self-representations.

9.
Historically, body size overestimation has been linked to the abnormal levels of body dissatisfaction found in eating disorders. However, this relationship has recently been called into question. Indeed, although a link between how we perceive our body and how we feel about it seems intuitive, the lack of an experimental method for manipulating body size has meant that a causal link, even in healthy participants, has remained elusive. Recent developments in body perception research demonstrate that the perceptual experience of the body can be readily manipulated using multisensory illusions. The current study exploits such illusions to modulate perceived body size in an attempt to influence body satisfaction. Participants were presented with stereoscopic video images of slimmer and wider mannequin bodies, viewed through head-mounted displays from a first-person perspective. Illusory ownership was induced by synchronously stroking the seen mannequin body and the unseen real body. Pre- and post-illusion affective and perceptual measures captured changes in perceived body size and body satisfaction. Illusory ownership of a slimmer body resulted in participants perceiving their actual body as slimmer and giving higher ratings of body satisfaction, demonstrating a direct link between perceptual and affective body representations. Change in body satisfaction following illusory ownership of a wider body, however, was related to the degree of (non-clinical) eating disorder psychopathology, which can be linked to the fluctuating body representations found in clinical samples. The results suggest that body perception is linked to body satisfaction and may be of importance for eating disorder symptomatology.

10.
Cohen L, Rothschild G, Mizrahi A. Neuron. 2011;72(2):357-369.
Motherhood is associated with different forms of physiological alteration, including transient hormonal changes and brain plasticity. The impact of these changes on the emergence of maternal behaviors and on sensory processing within the mother's brain is largely unknown. Using in vivo cell-attached recordings in the primary auditory cortex of female mice, we discovered that exposure to pups' body odor reshapes neuronal responses to pure tones and natural auditory stimuli. This olfactory-auditory interaction appeared naturally in lactating mothers shortly after parturition and was long lasting. Naive virgins that had experience with the pups also showed olfactory-auditory integration in A1, suggesting that multisensory integration may be experience dependent. Neurons from lactating mothers were more sensitive to sounds than those from experienced mice, independent of the odor effects. These uni- and multisensory cortical changes may facilitate the detection and discrimination of pup distress calls and strengthen the bond between mothers and their neonates.

11.
Research on the neural basis of speech-reading implicates a network of auditory language regions involving inferior frontal cortex, premotor cortex, and sites along superior temporal cortex. In audiovisual speech studies, neural activity is consistently reported in the posterior superior temporal sulcus (pSTS), and this site has been implicated in multimodal integration. Traditionally, multisensory interactions are considered high-level processing that engages heteromodal association cortices (such as STS). Recent work, however, challenges this notion and suggests that multisensory interactions may occur in low-level unimodal sensory cortices. While previous audiovisual speech studies demonstrate that high-level multisensory interactions occur in pSTS, what remains unclear is how early in the processing hierarchy these multisensory interactions may occur. The goal of the present fMRI experiment was to investigate how visual speech can influence activity in auditory cortex above and beyond its response to auditory speech. In an audiovisual speech experiment, subjects were presented with auditory speech with and without congruent visual input. Holding the auditory stimulus constant across the experiment, we investigated how the addition of visual speech influences activity in auditory cortex. We demonstrate that congruent visual speech increases activity in auditory cortex.

12.
Individual recognition can be facilitated by creating representations of familiar individuals, whereby information from signals in multiple sensory modalities becomes linked. Many vertebrate species use auditory–visual matching to recognize familiar conspecifics and heterospecifics, but we currently do not know whether representations of familiar individuals incorporate information from other modalities. Ring-tailed lemurs (Lemur catta) are highly visual, but also communicate via scents and vocalizations. To investigate the role of olfactory signals in multisensory recognition, we tested whether lemurs can recognize familiar individuals by matching scents and vocalizations. We presented lemurs with female scents that were paired with the contact call either of the female whose scent was presented or of another familiar female from the same social group. When the scent and the vocalization came from the same individual rather than from different individuals, females showed greater interest in the scents, and males showed greater interest in both the scents and the vocalizations, suggesting that lemurs can recognize familiar females via olfactory–auditory matching. Because identity signals in lemur scents and vocalizations are produced by different effectors and are often encountered at different times (uncoupled in space and time), this matching suggests that lemurs form multisensory representations through a newly recognized form of sensory integration underlying individual recognition.

13.
We argue that current theories of multisensory representations are inconsistent with the existence of a large proportion of multimodal neurons with gain fields and partially shifting receptive fields. Moreover, these theories do not fully resolve the recoding and statistical issues involved in multisensory integration. An alternative theory, which we have recently developed and review here, has important implications for the idea of 'frame of reference' in neural spatial representations. This theory is based on a neural architecture that combines basis functions and attractor dynamics. Basis function units are used to solve the recoding problem, whereas attractor dynamics are used for optimal statistical inferences. This architecture accounts for gain fields and partially shifting receptive fields, which emerge naturally as a result of the network connectivity and dynamics.
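A toy numerical sketch of the basis-function idea (all tuning parameters are invented for illustration, and the attractor dynamics the theory adds for statistical inference are omitted): units that multiplicatively combine retinal and eye-position tuning exhibit gain fields, yet one fixed readout of the same layer recovers head-centered location, since head-centered position is retinal position plus eye position.

```python
import numpy as np

# Basis-function layer: each unit multiplies Gaussian tuning to retinal
# location by Gaussian tuning to eye position, producing gain-field-like
# responses. Parameters are illustrative, not fitted to data.
prefs = np.linspace(-40, 40, 41)  # preferred values (deg) on both axes

def basis_layer(ret_x, eye_x, sigma=8.0):
    vis = np.exp(-(ret_x - prefs[:, None]) ** 2 / (2 * sigma ** 2))
    eye = np.exp(-(eye_x - prefs[None, :]) ** 2 / (2 * sigma ** 2))
    return vis * eye  # 41 x 41 multiplicative (gain-modulated) responses

def head_centered_readout(resp):
    # A single population-vector readout over ret_pref + eye_pref recodes
    # the signal without any explicit coordinate transformation step.
    head_prefs = prefs[:, None] + prefs[None, :]
    return (resp * head_prefs).sum() / resp.sum()

# The same head-centered target (10 deg) under two different eye positions:
print(head_centered_readout(basis_layer(ret_x=10.0, eye_x=0.0)))    # ~10.0
print(head_centered_readout(basis_layer(ret_x=-10.0, eye_x=20.0)))  # ~10.0
```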

14.

Background

A stimulus approaching the body requires fast processing and appropriate motor reactions. In monkeys, fronto-parietal networks are involved both in integrating multisensory information within a limited space surrounding the body (i.e. peripersonal space, PPS) and in action planning and execution, suggesting an overlap between sensory representations of space and motor representations of action. In the present study we investigate whether these overlapping representations also exist in the human brain.

Methodology/Principal Findings

We recorded motor-evoked potentials (MEPs) from hand muscles, induced by single-pulse transcranial magnetic stimulation (TMS), after presenting an auditory stimulus either near the hand or in far space. MEPs recorded 50 ms after near-sound onset were enhanced compared to MEPs evoked after far sounds. This near-far modulation faded at longer inter-stimulus intervals, and reversed completely for MEPs recorded 300 ms after sound onset: at that time point, higher motor excitability was associated with far sounds. This auditory modulation of the hand motor representation was specific to a hand-centred, not a body-centred, reference frame.

Conclusions/Significance

This pattern of corticospinal modulation highlights the relation between space and time in the PPS representation: an early facilitation for near stimuli may reflect immediate motor preparation, whereas, at later time intervals, motor preparation relates to distant stimuli potentially approaching the body.

15.
Despite its theoretical and clinical interest, there are no experimental studies exploring obsessive-compulsive disorder (OCD)-like disgust sensations using somatosensory illusions. Such illusions provide important clues to the nature and limits of multisensory integration and to how the brain constructs body image, and may potentially inform novel therapies. One such effect is the rubber hand illusion (RHI), in which tactile sensations are referred to a rubber hand: if the experimenter simultaneously strokes a subject's occluded hand together with a visible fake hand, the subject starts experiencing the touch sensations as arising from the dummy. In this study, we explore whether OCD-like disgust may result from contamination of a dummy hand during the RHI, suggesting a possible integration of somatosensory and limbic inputs in the construction of body image. We predicted that participants would experience sensations of disgust when placing a disgusting stimulus (fake feces, vomit, or blood) on the dummy hand after establishing the RHI. We found that 9 out of 11 participants experienced greater disgust during the synchronous condition (real hidden hand and fake hand stroked in synchrony) than during the asynchronous control condition (real hidden hand and fake hand stroked in asynchrony); on average, disgust was significantly greater during the synchronous condition than during the asynchronous control condition, Z = 2.7, p = .008. These results argue against a strictly hierarchical, modular approach to brain function and suggest that a four-way multisensory interaction occurs between vision, touch, and proprioception on the one hand and primal emotions like disgust on the other. These findings may inform novel clinical approaches for OCD; that is, contaminating a dummy during the RHI could possibly be used as part of an in-vivo exposure intervention for OCD.

16.
The rubber hand illusion (RHI) is a popular experimental paradigm. Participants view touch on an artificial rubber hand while their own hidden hand is touched. If the viewed and felt touches are given at the same time, this is sufficient to induce the compelling experience that the rubber hand is one's own hand. The RHI can be used to investigate exactly how the brain constructs distinct body representations for one's own body. Such representations are crucial for successful interactions with the external world. To obtain a subjective measure of the RHI, researchers typically ask participants to rate statements such as "I felt as if the rubber hand were my hand". Here we demonstrate how the crossmodal congruency task can be used to obtain an objective behavioral measure within this paradigm. The variant of the crossmodal congruency task we employ involves the presentation of tactile targets and visual distractors. Targets and distractors are spatially congruent (i.e. same finger) on some trials and incongruent (i.e. different finger) on others. The difference in performance between incongruent and congruent trials - the crossmodal congruency effect (CCE) - indexes multisensory interactions. Importantly, the CCE is modulated both by viewing a hand and by the synchrony of viewed and felt touch, which are both crucial factors for the RHI. The use of the crossmodal congruency task within the RHI paradigm has several advantages. It is a simple behavioral measure that can be repeated many times and obtained during the illusion while participants view the artificial hand. Furthermore, this measure is not susceptible to observer and experimenter biases. The combination of the RHI paradigm with the crossmodal congruency task allows, in particular, for the investigation of the multisensory processes that are critical for modulations of body representations such as the RHI.

17.
Multisensory integration was once thought to be the domain of brain areas high in the cortical hierarchy, with early sensory cortical fields devoted to unisensory processing of inputs from their given set of sensory receptors. More recently, a wealth of evidence documenting visual and somatosensory responses in auditory cortex, even as early as the primary fields, has changed this view of cortical processing. These multisensory inputs may serve to enhance responses to sounds that are accompanied by other sensory cues, effectively making them easier to hear, but may also act more selectively to shape the receptive field properties of auditory cortical neurons to the location or identity of these events. We discuss the new, converging evidence that multiplexing of neural signals may play a key role in informatively encoding and integrating signals in auditory cortex across multiple sensory modalities. We highlight some of the many open research questions that exist about the neural mechanisms that give rise to multisensory integration in auditory cortex, which should be addressed in future experimental and theoretical studies.

18.
This article reviews recent findings on how forces are detected by sense organs of insect legs and how this information is integrated in control of posture and walking. These experiments have focused upon campaniform sensilla, receptors that detect forces as strains in the exoskeleton, and include studies of sensory discharges in freely moving animals and intracellular characterization of connectivity of afferent inputs in the central nervous system. These findings provide insights into how campaniform sensilla can contribute to the adjustment of motor outputs to changes in load. In this review we discuss (1) the anatomy of the receptors and their activities in freely moving insects, (2) the mechanisms by which inputs are incorporated into motor outputs and (3) the integration of sensory signals of diverse modalities. The discharges of some groups of receptors can encode body load when standing. Responses are also correlated with muscle-generated forces during specific times in walking. These activities can enhance motor outputs through reflexes and can affect the timing of motoneuron firing through inputs to pattern generating interneurons. Flexibility in the system is also provided by interactions of afferent inputs at several levels. These mechanisms can contribute to the adaptability of insect locomotion to diverse terrains and environments.

19.
A key goal for the perceptual system is to optimally combine information from all the senses that may be available in order to develop the most accurate and unified picture possible of the outside world. The contemporary theoretical framework of ideal-observer maximum likelihood integration (MLI) has been highly successful in modelling how the human brain combines information from a variety of different sensory modalities. However, in various recent experiments involving multisensory stimuli of uncertain correspondence, MLI breaks down as a successful model of sensory combination. Within the paradigm of direct stimulus estimation, perceptual models that use Bayesian inference to resolve correspondence have recently been shown to generalize successfully to these cases where MLI fails. This approach has been known variously as model inference, causal inference, or structure inference. In this paper, we examine causal uncertainty in another important class of multisensory perception paradigm, oddity detection, and demonstrate how a Bayesian ideal observer also treats oddity detection as a structure inference problem. We validate this approach by showing that it provides an intuitive and quantitative explanation of an important pair of multisensory oddity detection experiments, involving cues across and within modalities, for which MLI previously failed dramatically, allowing a novel unifying treatment of within- and cross-modal multisensory perception. Our successful application of structure inference models to the new 'oddity detection' paradigm, and the resultant unified explanation of the across- and within-modality cases, provide further evidence to suggest that structure inference may be a commonly evolved principle for combining perceptual information in the brain.
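To make the contrast concrete, here is a single-trial numerical sketch in the standard Gaussian causal-inference formulation that the structure-inference literature builds on (e.g. Kording et al., 2007); all noise levels, priors, and measurements are illustrative assumptions:

```python
import numpy as np

# Forced-fusion MLI versus Bayesian structure (causal) inference on one trial.
sig_a, sig_v, sig_p = 4.0, 2.0, 20.0  # auditory/visual noise, prior width (deg)
mu_p, p_common = 0.0, 0.5             # prior mean; prior P(common cause)
x_a, x_v = 8.0, 0.0                   # one trial's noisy measurements (deg)

# MLI: reliability-weighted average, regardless of cue correspondence.
w_a, w_v, w_p = sig_a**-2, sig_v**-2, sig_p**-2
s_mli = (w_a * x_a + w_v * x_v) / (w_a + w_v)

# Structure inference: marginal likelihoods under one common cause (C=1)
# versus two independent causes (C=2).
var1 = sig_a**2 * sig_v**2 + sig_a**2 * sig_p**2 + sig_v**2 * sig_p**2
like_c1 = (np.exp(-0.5 * ((x_a - x_v)**2 * sig_p**2 + (x_a - mu_p)**2 * sig_v**2
                          + (x_v - mu_p)**2 * sig_a**2) / var1)
           / (2 * np.pi * np.sqrt(var1)))
like_c2 = (np.exp(-0.5 * ((x_a - mu_p)**2 / (sig_a**2 + sig_p**2)
                          + (x_v - mu_p)**2 / (sig_v**2 + sig_p**2)))
           / (2 * np.pi * np.sqrt((sig_a**2 + sig_p**2) * (sig_v**2 + sig_p**2))))
post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

# Model-averaged estimate of the auditory source: fuse the cues only to
# the degree that a common cause is credible.
s_fused = (w_a * x_a + w_v * x_v + w_p * mu_p) / (w_a + w_v + w_p)
s_alone = (w_a * x_a + w_p * mu_p) / (w_a + w_p)
s_ci = post_c1 * s_fused + (1 - post_c1) * s_alone

print(f"MLI: {s_mli:.2f}  P(common|x): {post_c1:.2f}  structure inference: {s_ci:.2f}")
```

When the cues conflict strongly, the posterior probability of a common cause drops and the structure-inference estimate moves toward the unisensory estimate, which is exactly the regime where forced-fusion MLI fails.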

20.
Understanding the conditions under which the brain integrates the different sensory streams, and the mechanisms supporting this phenomenon, is now a question at the forefront of neuroscience. In this paper, we discuss the opportunities for investigating these multisensory processes using modern imaging techniques, the nature of the information obtainable from each method, and their benefits and limitations. Despite considerable variability in paradigm design and analysis, some consistent findings are beginning to emerge. The detection of brain activity in human neuroimaging studies that resembles multisensory integration responses at the cellular level in other species suggests that similar crossmodal binding mechanisms may be operational in the human brain. These mechanisms appear to be distributed across distinct neuronal networks that vary depending on the nature of the information shared between different sensory cues. For example, differing extents of correspondence in time, space, or content seem to reliably bias the involvement of the different integrative networks which code for these cues. A combination of data obtained from haemodynamic and electromagnetic methods, which offer high spatial or temporal resolution respectively, is providing converging evidence of multisensory interactions at both "early" and "late" stages of processing, suggesting a cascade of synergistic processes operating in parallel at different levels of the cortex.
