Similar articles (20 results)
1.
Self-consciousness has mostly been approached by philosophical enquiry rather than empirical neuroscientific study, leading to an overabundance of diverging theories and an absence of data-driven theories. Using robotic technology, we induced specific bodily conflicts and produced predictable changes in a fundamental aspect of self-consciousness by altering where healthy subjects experienced themselves to be (self-location). Functional magnetic resonance imaging revealed that temporo-parietal junction (TPJ) activity reflected experimental changes in self-location that also depended on the first-person perspective due to visuo-tactile and visuo-vestibular conflicts. Moreover, in a large lesion analysis study of neurological patients with a well-defined state of abnormal self-location, brain damage was also localized at the TPJ, providing causal evidence that the TPJ encodes self-location. Our findings reveal that multisensory integration at the TPJ reflects one of the most fundamental subjective feelings of humans: the feeling of being an entity localized at a position in space and perceiving the world from this position and perspective.

2.

Background

The spatial unity between self and body can be disrupted by employing conflicting visual-somatosensory bodily input, thereby bringing neurological observations on bodily self-consciousness under scientific scrutiny. Here we designed a novel paradigm linking the study of bodily self-consciousness to the spatial representation of visuo-tactile stimuli by measuring crossmodal congruency effects (CCEs) for the full body.

Methodology/Principal Findings

We measured full-body CCEs by attaching four vibrator-light pairs to the trunks (backs) of subjects who viewed their bodies from behind via a camera and a head-mounted display (HMD). Subjects made speeded elevation (up/down) judgments of the tactile stimuli while ignoring the light stimuli. To modulate self-identification with the seen body, subjects were stroked on their backs with a stick, and the felt stroking was either synchronous or asynchronous with the stroking that could be seen via the HMD. We found that (1) tactile stimuli were mislocalized towards the seen body, (2) CCEs were modulated systematically during visual-somatosensory conflict when subjects viewed their body but not when they viewed a body-sized object, i.e. CCEs were larger during synchronous than during asynchronous stroking of the body, and (3) these changes in the mapping of tactile stimuli were induced in the same experimental condition in which predictable changes in bodily self-consciousness occurred.
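The CCE measure used here is the reaction-time cost of spatially incongruent visual distractors. A minimal sketch of the computation, using hypothetical reaction times rather than the study's data:

```python
# CCE = mean RT on incongruent trials - mean RT on congruent trials.
def cce(rts_congruent_ms, rts_incongruent_ms):
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rts_incongruent_ms) - mean(rts_congruent_ms)

# Hypothetical speeded elevation-judgment reaction times (ms)
sync_cce = cce([520, 540, 510], [640, 660, 650])    # synchronous stroking
async_cce = cce([530, 545, 520], [600, 610, 605])   # asynchronous stroking
# A larger CCE under synchronous stroking mirrors the reported pattern.
```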

Conclusions/Significance

These data reveal that systematic alterations in the mapping of tactile stimuli occur in a full body illusion and thus establish CCE magnitude as an online performance proxy for subjective changes in global bodily self-consciousness.

3.
Previous research suggests that bodily self-identification, bodily self-localization, agency, and the sense of being present in space are critical aspects of conscious full-body self-perception. However, none of the existing studies have investigated the relationship of these aspects to each other, i.e., whether they can be identified as distinguishable components of the structure of conscious full-body self-perception. Therefore, the objective of the present investigation is to elucidate the structure of conscious full-body self-perception. We performed two studies in which we stroked the back of healthy individuals for three minutes while they watched the back of a distant virtual body being synchronously stroked with a virtual stick. After visuo-tactile stimulation, participants assessed changes in their bodily self-perception with a custom-made self-report questionnaire. In the first study, we investigated the structure of conscious full-body self-perception by analyzing the responses to the questionnaire by means of multidimensional scaling combined with cluster analysis. In the second study, we then extended the questionnaire and validated the stability of the structure found in the first study within a larger sample of individuals by performing a principal component analysis of the questionnaire responses. The results of the two studies converge in suggesting that the structure of conscious full-body self-perception consists of the following three distinct components: bodily self-identification, space-related self-perception (spatial presence), and agency.
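The component extraction in the second study rests on principal component analysis of the item covariance structure. The sketch below illustrates the technique on synthetic questionnaire data with two built-in latent factors; it is not the authors' pipeline, and the factor structure is assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40                                   # synthetic participants
f1 = rng.normal(size=(n, 1))             # hypothetical latent factor, items 0-2
f2 = rng.normal(size=(n, 1))             # hypothetical latent factor, items 3-5
X = np.hstack([f1 + 0.1 * rng.normal(size=(n, 3)),
               f2 + 0.1 * rng.normal(size=(n, 3))])

Xc = X - X.mean(axis=0)                  # center each questionnaire item
cov = Xc.T @ Xc / (n - 1)                # item covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
explained = eigvals[::-1] / eigvals.sum()   # variance explained per component
# With this synthetic two-factor structure, the first two components
# account for nearly all of the variance.
```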

4.
How the brain combines information from different sensory modalities and of differing reliability is an important and still-unanswered question. Using the head direction (HD) system as a model, we explored the resolution of conflicts between landmarks and background cues. Sensory cue integration models predict averaging of the two cues, whereas attractor models predict capture of the signal by the dominant cue. We found that a visual landmark mostly captured the HD signal at low conflicts; however, there was an increasing propensity for the cells to integrate the cues at larger conflicts. A large conflict presented to naive rats resulted in greater visual cue capture (less integration) than in experienced rats, revealing an effect of experience. We propose that weighted cue integration in HD cells arises from dynamic plasticity of the feed-forward inputs to the network, causing within-trial spatial redistribution of the visual inputs onto the ring. This suggests that an attractor network can implement decision processes about cue reliability using simple architecture and learning rules, thus providing a potential neural substrate for weighted cue integration.
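The two competing predictions can be made concrete with a weighted circular mean: capture corresponds to all weight on one cue, integration to intermediate weights. A minimal sketch with hypothetical angles and weights:

```python
import math

def circ_weighted_mean(a_deg, b_deg, w_a):
    """Weighted circular mean of two directions (degrees)."""
    ax, ay = math.cos(math.radians(a_deg)), math.sin(math.radians(a_deg))
    bx, by = math.cos(math.radians(b_deg)), math.sin(math.radians(b_deg))
    x = w_a * ax + (1 - w_a) * bx
    y = w_a * ay + (1 - w_a) * by
    return math.degrees(math.atan2(y, x)) % 360

landmark, background = 0.0, 45.0          # hypothetical 45-degree cue conflict
capture = circ_weighted_mean(landmark, background, 1.0)   # visual capture
average = circ_weighted_mean(landmark, background, 0.5)   # equal integration
# capture follows the landmark (0 deg); average lands on the bisector (22.5 deg)
```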

5.
When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while heading estimation could improve with the ongoing motion, due to the constant flow of information, the estimate of how far we move requires the integration of sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even if their reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They were subjected to a passive linear motion involving visual and vestibular cues with different levels of visual coherence to change relative cue reliability and with cue discrepancies to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to their reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement.
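Statistically optimal integration weights each cue by its reliability (inverse variance), and the combined variance is lower than either single-cue variance. A minimal sketch with hypothetical displacement estimates:

```python
def integrate(est_vis, var_vis, est_vest, var_vest):
    """Reliability-weighted (maximum-likelihood) combination of two cues."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_vest)
    est = w_vis * est_vis + (1 - w_vis) * est_vest
    var = 1 / (1 / var_vis + 1 / var_vest)   # combined variance is lower
    return est, var, w_vis

# High visual coherence: the visual cue is more reliable and gets more weight.
# All numbers are hypothetical (displacements in metres).
est, var, w = integrate(est_vis=1.0, var_vis=0.04, est_vest=1.2, var_vest=0.16)
# w = 0.8; est = 1.04; var = 0.032, below either single-cue variance
```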

6.
Development of cue integration in human navigation
Mammalian navigation depends both on visual landmarks and on self-generated (e.g., vestibular and proprioceptive) cues that signal the organism's own movement [1-5]. When these conflict, landmarks can either reset estimates of self-motion or be integrated with them [6-9]. We asked how humans combine these information sources and whether children, who use both from a young age [10-12], combine them as adults do. Participants attempted to return an object to its original place in an arena when given either visual landmarks only, nonvisual self-motion information only, or both. Adults, but not 4- to 5-year-olds or 7- to 8-year-olds, reduced their response variance when both information sources were available. In an additional "conflict" condition that measured relative reliance on landmarks and self-motion, we predicted behavior under two models: integration (weighted averaging) of the cues and alternation between them. Adults' behavior was predicted by integration, in which the cues were weighted nearly optimally to reduce variance, whereas children's behavior was predicted by alternation. These results suggest that development of individual spatial-representational systems precedes development of the capacity to combine these within a common reference frame. Humans can integrate spatial cues nearly optimally to navigate, but this ability depends on an extended developmental process.
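The integration and alternation models make different predictions for response variance: weighted averaging reduces it below either single cue, whereas switching between cues does not. A minimal sketch, assuming equal single-cue variances and unbiased cues:

```python
def integration_var(var_a, var_b):
    # Optimal weighted averaging: combined variance below either cue alone
    return (var_a * var_b) / (var_a + var_b)

def alternation_var(var_a, var_b, p_a=0.5):
    # Random switching between two unbiased cues: a mixture distribution,
    # so the variance is the weighted mean of the cue variances
    return p_a * var_a + (1 - p_a) * var_b

v_landmark, v_self_motion = 1.0, 1.0     # hypothetical single-cue variances
pred_adults = integration_var(v_landmark, v_self_motion)     # 0.5
pred_children = alternation_var(v_landmark, v_self_motion)   # 1.0
# Only the integration model predicts the adults' variance reduction.
```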

7.
In anorexia nervosa (AN), body distortions have been associated with parietal cortex (PC) dysfunction. The PC is the anatomical substrate for a supramodal reference framework involved in spatial orientation constancy. Here, we sought to evaluate spatial orientation constancy and the perception of body orientation in AN patients. In the present study, we investigated the effect of passive lateral body inclination on the visual and tactile subjective vertical (SV) and body Z-axis in 25 AN patients and 25 healthy controls. Subjects performed visual- and tactile-spatial judgments of axis orientations in an upright position and tilted 90° clockwise or counterclockwise. We observed a significant deviation of the tactile and visual SV towards the body (an A-effect) under tilted conditions, suggesting a multisensory impairment in spatial orientation. Deviation of the Z-axis in the direction of the tilt was also observed in the AN group. The greater A-effect in AN patients may reflect reduced interoceptive awareness and thus inadequate consideration of gravitational inflow. Furthermore, marked body weight loss could decrease the somatosensory inputs required for spatial orientation. Our study results suggest that spatial references are impaired in AN. This may be due to particular integration of visual, tactile and gravitational information (e.g. vestibular and proprioceptive cues) in the PC.

8.
Desert ants, Cataglyphis fortis, perform large-scale foraging trips in their featureless habitat using path integration as their main navigation tool. To determine their walking direction they use primarily celestial cues: the sky's polarization pattern and the sun position. To examine the relative importance of these two celestial cues, we performed cue conflict experiments. We manipulated the polarization pattern experienced by the ants during their outbound foraging excursions, reducing it to a single electric field (e-)vector direction with a linear polarization filter. The simultaneous view of the sun created situations in which the directional information of the sun and the polarization compass disagreed. The heading directions of the homebound runs recorded on a test field with full view of the natural sky demonstrate that neither compass completely dominated the other. Rather, the ants seemed to compute an intermediate homing direction to which both compass systems contributed roughly equally. Direct sunlight and polarized light are detected in different regions of the ant's compound eye, suggesting two separate pathways for obtaining directional information. In the experimental paradigm applied here, these two pathways seem to feed into the path integrator with similar weights.

9.
Foraging ants are known to use multiple sources of information to return to the nest. These cue sets are employed by independent navigational systems, including path integration in the case of celestial cues and vision-based learning in the case of terrestrial landmarks and the panorama. When cue sets are presented in conflict, the Australian desert ant species Melophorus bagoti will choose a compromise heading between the directions dictated by the cues or, when navigating on well-known routes, foragers choose the direction indicated by the terrestrial cues of the panorama against the dictates of celestial cues. Here, we explore the roles of learning terrestrial cues and of delays since cue exposure in these navigational decisions by testing restricted foragers with differing levels of terrestrial cue experience under the maximum (180°) cue conflict. Restricted foragers appear unable to extrapolate landmark information from the nest to a displacement site 8 m away. Given only one homeward experience, foragers can successfully orient using terrestrial cues, but this experience is not sufficient to override a conflicting vector. Terrestrial cue strength increases with multiple experiences and eventually overrides the celestial cues. This appears to be a dynamic choice, as foragers discount the reliability of the terrestrial cues over time, reverting to a preference for the celestial vector when the forager has an immediate vector but its last exposure to the terrestrial cues was 24 hr in the past. Foragers may be employing navigational decision making that can be predicted by the temporal weighting rule.
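One way to picture a temporal-weighting-rule-style choice is a cue weight that grows with the number of experiences and decays with time since the last exposure. The functional form, the decay constant, and the fixed celestial weight below are assumptions for illustration only, not the published rule:

```python
import math

def cue_weight(n_experiences, hours_since_exposure, tau=12.0):
    # Weight grows with experience and decays with time since last exposure
    # (exponential decay and the 12 h constant are assumed, not measured).
    return n_experiences * math.exp(-hours_since_exposure / tau)

def preferred_cue(n_terrestrial, hours_since_terrestrial):
    w_terr = cue_weight(n_terrestrial, hours_since_terrestrial)
    w_cel = 1.0   # fixed reference weight for an immediate celestial vector
    return "terrestrial" if w_terr > w_cel else "celestial"

# One homeward experience does not override the vector; several recent
# experiences do; a 24 h delay restores the celestial preference.
```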

10.
Harris MA, Wolbers T. Hippocampus. 2012;22(8):1770-1780.
Navigation abilities show marked decline in both normal ageing and dementia. Path integration may be particularly affected, as it is supported by the hippocampus and entorhinal cortex, both of which show severe degeneration with ageing. Age differences in path integration based on kinaesthetic and vestibular cues have been clearly demonstrated, but very little research has focused on visual path integration, based only on optic flow. Path integration is complemented by landmark navigation, which may also show age differences, but has not been well studied either. Here we present a study using several simple virtual navigation tasks to explore age differences in path integration both with and without landmark information. We report that, within a virtual environment that provided only optic flow information, older participants exhibited deficits in path integration in terms of distance reproduction, rotation reproduction, and triangle completion. We also report age differences in triangle completion within an environment that provided landmark information. In all tasks, we observed a more restricted range of responses in the older participants, which we discuss in terms of a leaky integrator model, as older participants showed greater leak than younger participants. Our findings begin to explain the mechanisms underlying age differences in path integration, and thus contribute to an understanding of the substantial decline in navigation abilities observed in ageing.
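In a leaky integrator model, the position estimate accumulates self-motion input but decays toward zero, compressing the response range; a larger leak produces greater underestimation. A minimal sketch with hypothetical parameters:

```python
# dx/dt = v - leak * x, integrated with forward Euler
def integrate_path(velocities, dt=0.1, leak=0.0):
    x = 0.0
    for v in velocities:
        x += (v - leak * x) * dt
    return x

steps = [1.0] * 100                        # 1 m/s for 10 s: true distance 10 m
young = integrate_path(steps, leak=0.0)    # lossless integrator recovers 10 m
old = integrate_path(steps, leak=0.2)      # leaky integrator underestimates
```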

11.
Multisensory integration is a key factor in establishing bodily self-consciousness and in adapting humans to novel environments. The rubber hand illusion paradigm, in which humans can immediately perceive illusory ownership over an artificial hand, is a traditional technique for investigating multisensory integration and the feeling of illusory ownership. However, the long-term learning properties of the rubber hand illusion have not been previously investigated. Moreover, although sleep contributes to various aspects of cognition, including learning and memory, its influence on illusory learning of the artificial hand has not yet been assessed. We determined the effects of daily repetitive training and sleep on learning visuo-tactile-proprioceptive sensory integration and illusory ownership in healthy adult participants by using the traditional rubber hand illusion paradigm. Subjective ownership of the rubber hand, proprioceptive drift, and galvanic skin response were measured as learning indexes. Subjective ownership was maintained and proprioceptive drift increased with daily training. Proprioceptive drift, but not subjective ownership, was significantly attenuated after sleep. A significantly greater reduction in galvanic skin response was observed after wakefulness than after sleep. Our results suggest that although repetitive rubber hand illusion training facilitates multisensory integration and physiological habituation to a multisensory incongruent environment, sleep corrects illusory integration and habituation based on experiences in that environment. These findings may increase our understanding of neural adaptation to novel environments, specifically bodily self-consciousness and sleep-dependent neuroplasticity.

12.
Simultaneous object motion and self-motion give rise to complex patterns of retinal image motion. In order to estimate object motion accurately, the brain must parse this complex retinal motion into self-motion and object motion components. Although this computational problem can be solved, in principle, through purely visual mechanisms, extra-retinal information that arises from the vestibular system during self-motion may also play an important role. Here we investigate whether combining vestibular and visual self-motion information improves the precision of object motion estimates. Subjects were asked to discriminate the direction of object motion in the presence of simultaneous self-motion, depicted either by visual cues alone (i.e. optic flow) or by combined visual/vestibular stimuli. We report a small but significant improvement in object motion discrimination thresholds with the addition of vestibular cues. This improvement was greatest for eccentric heading directions and negligible for forward movement, a finding that could reflect increased relative reliability of vestibular versus visual cues for eccentric heading directions. Overall, these results are consistent with the hypothesis that vestibular inputs can help parse retinal image motion into self-motion and object motion components.

13.
This review shows how well the published work on the neural basis of balance and hydrostatic pressure reception in crabs agrees with the analyses and models of path integration. Fiddler crabs allow analyses at the level of behaviour. With considerable accuracy, they continuously show the direction to home with their body orientation and use idiothetic path integration to calculate a home vector from the internal measurements of their locomotion. All crabs have a well-developed vestibular system in the statocyst with horizontal and vertical canals which is used for angular acceleration sensing and depth reception. Large identified interneurones abstract the component of angular acceleration in one of the three orthogonal planes. These have properties consistent with a key role in path integration, combining vestibular and proprioceptor information with a central excitatory drive from the hemiellipsoid bodies. They have been monitored during walking, swimming and even in freefall for a 22 s period in parabolic flight.

14.
As we move through the world, information can be combined from multiple sources in order to allow us to perceive our self-motion. The vestibular system detects and encodes the motion of the head in space. In addition, extra-vestibular cues such as retinal-image motion (optic flow), proprioception, and motor efference signals provide valuable motion cues. Here I focus on the coding strategies that are used by the brain to create neural representations of self-motion. I review recent studies comparing the thresholds of single versus populations of vestibular afferent and central neurons. I then consider recent advances in understanding the brain's strategy for combining information from the vestibular sensors with extra-vestibular cues to estimate self-motion. These studies emphasize the need to consider not only the rules by which multiple inputs are combined, but also how differences in the behavioral context govern the nature of what defines the optimal computation.

15.
Responses of multisensory neurons to combinations of sensory cues are generally enhanced or depressed relative to single cues presented alone, but the rules that govern these interactions have remained unclear. We examined integration of visual and vestibular self-motion cues in macaque area MSTd in response to unimodal as well as congruent and conflicting bimodal stimuli in order to evaluate hypothetical combination rules employed by multisensory neurons. Bimodal responses were well fit by weighted linear sums of unimodal responses, with weights typically less than one (subadditive). Surprisingly, our results indicate that weights change with the relative reliabilities of the two cues: visual weights decrease and vestibular weights increase when visual stimuli are degraded. Moreover, both modulation depth and neuronal discrimination thresholds improve for matched bimodal compared to unimodal stimuli, which might allow for increased neural sensitivity during multisensory stimulation. These findings establish important new constraints for neural models of cue integration.
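The weighted-linear-sum fit can be reproduced in miniature: regress bimodal responses on the two unimodal responses and read off the weights. The firing rates below are synthetic, with subadditive weights built in; this is a sketch of the fitting technique, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
r_vis = rng.uniform(10, 50, size=40)    # synthetic unimodal visual responses (Hz)
r_vest = rng.uniform(10, 50, size=40)   # synthetic unimodal vestibular responses
w_true = np.array([0.6, 0.5])           # assumed subadditive weights, both < 1
r_bi = w_true[0] * r_vis + w_true[1] * r_vest + rng.normal(0, 1, size=40)

# Least-squares fit of r_bi ~ w_vis * r_vis + w_vest * r_vest
A = np.column_stack([r_vis, r_vest])
w_hat, *_ = np.linalg.lstsq(A, r_bi, rcond=None)
# Recovered weights land close to 0.6 and 0.5, i.e. subadditive.
```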

16.
Multi-modal visuo-tactile stimulation of the type performed in the rubber hand illusion can induce the brain to temporarily incorporate external objects into the body image. In this study we show that audio-visual stimulation combined with mental imagery elicits an elevated physiological response (skin conductance) more rapidly after an unexpected threat to a virtual limb than audio-visual stimulation alone. Two groups of subjects seated in front of a monitor watched a first-person perspective view of slow movements of two virtual arms intercepting virtual balls rolling towards the viewer. One group was instructed to simply observe the movements of the two virtual arms, while the other group was instructed to observe the virtual arms and imagine that the arms were their own. After 84 seconds the right virtual arm was unexpectedly "stabbed" by a knife and began "bleeding". This aversive stimulus caused both groups to show a significant increase in skin conductance. In addition, the observation-with-imagery group showed a significantly higher skin conductance (p<0.05) than the observation-only group over a 2-second period shortly after the aversive stimulus onset. No corresponding change was found in subjects' heart rates. Our results suggest that simple visual input combined with mental imagery may induce the brain to temporarily, and measurably, incorporate external objects into its body image.

17.
In a vertically rotating centrifuge, the direction of the resultant gravitational and centrifugal forces is constantly changing. Hornets placed in such a centrifuge will build their combs in the direction of the resultant only if the centrifuge is stopped every day and left in the same position for at least half an hour, because during the cessation of motion, they presumably "learn" some geometrical cues which enable them to determine the preferred angle of building. Hornets can detect and respond to a centrifugal force as small as 0.18% of the earth's gravitational force. At a rotational rate of 1/8 of a revolution per minute there was no comb construction whatsoever and hornet mortality rate was 100% within three days.

18.
Motion sickness: a synthesis and evaluation of the sensory conflict theory
"Motion sickness" is the general term describing a group of common nausea syndromes originally attributed to motion-induced cerebral ischemia, stimulation of abdominal organ afferents, or overstimulation of the vestibular organs of the inner ear. Seasickness, car sickness, and airsickness are commonly experienced examples. However, the identification of other variants such as spectacle sickness and flight simulator sickness in which the physical motion of the head and body is normal or even absent has led to a succession of "sensory conflict" theories that offer a more comprehensive etiologic perspective. Implicit in the conflict theory is the hypothesis that neural and (or) humoral signals originate in regions of the brain subserving spatial orientation, and that these signals somehow traverse to other centers mediating sickness symptoms. Unfortunately, our present understanding of the neurophysiological basis of motion sickness is incomplete. No sensory conflict neuron or process has yet been physiologically identified. This paper reviews the types of stimuli that cause sickness and synthesizes a mathematical statement of the sensory conflict hypothesis based on observer theory from control engineering. A revised mathematical model is presented that describes the dynamic coupling between the putative conflict signals and nausea magnitude estimates. Based on the model, what properties would a conflict neuron be expected to have?  相似文献   

19.
Vestibular signals are strongly integrated with information from several other sensory modalities. For example, vestibular stimulation was reported to improve tactile detection. However, this improvement could reflect either a multimodal interaction or an indirect interaction driven by vestibular effects on spatial attention and orienting. Here we investigate whether natural vestibular activation induced by passive whole-body rotation influences tactile detection. In particular, we assessed the ability to detect faint tactile stimuli to the fingertips of the left and right hand during spatially congruent or incongruent rotations. We found that passive whole-body rotations significantly enhanced sensitivity to faint shocks, without affecting response bias. Critically, this enhancement of somatosensory sensitivity did not depend on the spatial congruency between the direction of rotation and the hand stimulated. Thus, our results support a multimodal interaction, likely in brain areas receiving both vestibular and somatosensory signals.

20.
The relative roles of visual and vestibular cues in determining the perceived distance of passive, linear self-motion were assessed. Seventeen subjects were given cues to constant-acceleration motion: either optic flow, physical motion in the dark, or combinations of visual and physical motion. Subjects indicated when they perceived they had traversed a distance that had been previously indicated either visually or physically. The perceived distance of motion evoked by optic flow was accurate relative to a visual target but was perceptually equivalent to a shorter physical motion. The perceived distance of physical motion in the dark was accurate relative to a previously presented physical motion but was perceptually equivalent to a much longer visually presented distance. The perceived distance of self-motion when both visual and physical cues were present was perceptually equivalent to the physical motion experienced and not the simultaneous visual motion, even when the target was presented visually. We describe this dominance of the physical cues in determining the perceived distance of self-motion as "vestibular capture".
