Similar Literature
20 similar documents found (search time: 31 ms).
1.
Loss of integrity of the basal forebrain cholinergic neurons is a consistent feature of Alzheimer’s disease, and measurement of basal forebrain degeneration by magnetic resonance imaging is emerging as a sensitive diagnostic marker for prodromal disease. It is also known that Alzheimer’s disease patients perform poorly on both real space and computerized cued (allothetic) or uncued (idiothetic) recall navigation tasks. Although the hippocampus is required for allothetic navigation, lesions of this region only mildly affect idiothetic navigation. Here we tested the hypothesis that the cholinergic medial septo-hippocampal circuit is important for idiothetic navigation. Basal forebrain cholinergic neurons were selectively lesioned in mice using the toxin saporin conjugated to a basal forebrain cholinergic neuronal marker, the p75 neurotrophin receptor. Control animals were able to learn and remember spatial information when tested on a modified version of the passive place avoidance test where all extramaze cues were removed, and animals had to rely on idiothetic signals. However, the exploratory behaviour of mice with cholinergic basal forebrain lesions was highly disorganized during this test. By contrast, the lesioned animals performed no differently from controls in tasks involving contextual fear conditioning and spatial working memory (Y maze), and displayed no deficits in potentially confounding behaviours such as motor performance, anxiety, or disturbed sleep/wake cycles. These data suggest that the basal forebrain cholinergic system plays a specific role in idiothetic navigation, a modality that is impaired early in Alzheimer’s disease.

2.
Animals are able to update their knowledge about their current position solely by integrating the speed and direction of their movement, a process known as path integration. Recent discoveries suggest that grid cells in the medial entorhinal cortex might perform some of the essential computations underlying path integration. However, a major concern about path integration is that, because the measurement of speed and direction is inaccurate, the representation of position becomes increasingly unreliable. In this paper, we study how allothetic inputs can be used to continually correct the accumulating error in the path integrator system. We set up a model of a mobile agent equipped with entorhinal representations of idiothetic (grid cell) and allothetic (visual cell) information and simulated its place learning in a virtual environment. Due to competitive learning, a robust hippocampal place code emerges rapidly in the model. At the same time, the hippocampo-entorhinal feedback connections are modified via Hebbian learning to allow hippocampal place cells to influence the attractor dynamics in the entorhinal cortex. We show that continuous feedback from the integrated hippocampal place representation is able to stabilize the grid cell code. This research was supported by the EU Framework 6 ICEA project (IST-4-027819-IP).
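The core loop of such a model can be sketched compactly. Below is a minimal, hypothetical illustration (not the authors' implementation) of the abstract's central point: pure path integration accumulates error from noisy speed estimates, while a corrective feedback term, standing in for the learned hippocampal place code, keeps the position estimate bounded. All names and parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_drift(steps=2000, dt=0.1, noise=0.05, feedback_gain=0.0):
    """1-D toy agent: integrate noisy self-motion (idiothetic path
    integration); an allothetic feedback term pulls the estimate
    toward the true position, as the hippocampal place code does
    for the entorhinal grid code in the model."""
    true_pos, est_pos, errs = 0.0, 0.0, []
    for _ in range(steps):
        v = rng.uniform(-1.0, 1.0)                       # self-motion velocity
        true_pos += v * dt
        est_pos += (v + rng.normal(0.0, noise)) * dt     # noisy integration
        est_pos += feedback_gain * (true_pos - est_pos)  # allothetic correction
        errs.append(abs(true_pos - est_pos))
    return np.mean(errs)

print("mean error, no feedback: ", mean_drift(feedback_gain=0.0))
print("mean error, with feedback:", mean_drift(feedback_gain=0.05))
```

In the actual model the correction comes not from the true position but from a place-cell estimate learned from visual cells via Hebbian plasticity; the sketch only shows why any such feedback stabilizes the grid code.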

3.
This work builds on the enfacement effect, which occurs when one experiences rhythmic stimulation on one’s own cheek while watching someone else’s face being touched synchronously; this typically produces cognitive and social-cognitive effects similar to self-other merging. In two studies, we demonstrate that this multisensory stimulation can change the evaluation of the other’s face. In the first study, participants judged a stranger’s face and similar faces as more trustworthy after synchronous stimulation, but not after asynchronous stimulation. Synchrony interacted with the order of stroking: trustworthiness changed only when the synchronous stimulation occurred before the asynchronous one. In the second study, synchronous stimulation caused participants to remember the stranger’s face as more trustworthy, but again only when the synchronous stimulation came before the asynchronous one. The results of both studies show that the order of stroking creates a context in which multisensory synchrony can affect the perceived trustworthiness of faces.

4.
The notion of body-based scaling suggests that our body and its action capabilities are used to scale the spatial layout of the environment. Here we present four studies supporting this perspective by showing that the hand acts as a metric which individuals use to scale the apparent sizes of objects in the environment. Testing this requires manipulating the size and/or dimensions of the perceiver’s hand, which is difficult in the real world because hand dimensions cannot readily be altered. To overcome this limitation, we used virtual reality to manipulate the dimensions of participants’ fully tracked virtual hands and investigated their influence on the perceived size and shape of virtual objects. In a series of experiments, using several measures, we show that individuals’ estimates of the sizes of virtual objects differ depending on the size of their virtual hand, in the direction consistent with the body-based scaling hypothesis. Additionally, we found that these effects were specific to participants’ own virtual hands rather than another avatar’s hands or a salient familiar-sized object. While these studies provide support for a body-based approach to the scaling of spatial layout, they also demonstrate the influence of virtual bodies on the perception of virtual environments.

5.
It has been shown that the central nervous system (CNS) integrates visual and inertial information in heading estimation for congruent multisensory stimuli and for stimuli with small discrepancies. Multisensory information should, however, only be integrated when the cues are redundant. Here, we investigated how the CNS constructs an estimate of heading for combinations of visual and inertial heading stimuli with a wide range of discrepancies. Participants were presented with 2-s visual-only and inertial-only motion stimuli, and combinations thereof. Discrepancies between visual and inertial heading ranging from 0° to 90° were introduced for the combined stimuli. In the unisensory conditions, visual heading was generally biased towards the fore-aft axis, while inertial heading was biased away from it. For multisensory stimuli, five of nine participants integrated visual and inertial heading information regardless of the size of the discrepancy; for one participant, the data were best described by a model that explicitly performs causal inference. For the remaining three participants, the evidence could not readily distinguish between these models. The finding that multisensory information is integrated is in line with earlier findings, but the finding that even large discrepancies are generally disregarded is surprising. Possibly, people are insensitive to discrepancies in visual-inertial heading angle because such discrepancies are only encountered in artificial environments, making a neural mechanism to account for them otiose. An alternative explanation is that detection of a discrepancy may depend on stimulus duration, with sensitivity to detect discrepancies differing between people.
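The two candidate models the data were compared against can be written in a few lines. The sketch below is a hypothetical simplification (the parameter values and the hard discrepancy criterion are assumptions, not the authors' fitting procedure): forced fusion always takes a reliability-weighted average, whereas causal inference discounts the visual cue when the discrepancy is too large to share a common cause.

```python
def fused_heading(visual, inertial, sigma_v, sigma_i):
    """Forced fusion: reliability-weighted average of the two cues."""
    w_v = sigma_i**2 / (sigma_v**2 + sigma_i**2)  # weight of the visual cue
    return w_v * visual + (1.0 - w_v) * inertial

def causal_inference(visual, inertial, sigma_v, sigma_i, criterion=30.0):
    """Toy causal inference: fuse only when the discrepancy is small
    enough to plausibly arise from one common heading; otherwise
    fall back on the inertial cue alone."""
    if abs(visual - inertial) <= criterion:
        return fused_heading(visual, inertial, sigma_v, sigma_i)
    return inertial

# 30 deg discrepancy: both models fuse; 80 deg: only forced fusion does
for v, i in ((40.0, 10.0), (90.0, 10.0)):
    print(fused_heading(v, i, 5.0, 10.0), causal_inference(v, i, 5.0, 10.0))
```

A full causal-inference model averages over the two causal structures weighted by their posterior probability rather than thresholding, but the threshold version captures the qualitative difference being tested.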

6.
Brain regions in the intraparietal and premotor cortices selectively process visual and multisensory events near the hands (peri-hand space). Visual information from the hand itself modulates this processing, potentially because it is used to estimate the location of one’s own body and the surrounding space. In humans, specific occipitotemporal areas process visual information about specific body parts such as hands. Here we used a block-design fMRI paradigm to investigate whether anterior intraparietal and ventral premotor ‘peri-hand areas’ exhibit selective responses to viewing images of hands and to viewing specific hand orientations. Furthermore, we investigated whether the occipitotemporal ‘hand area’ is sensitive to viewed hand orientation. Our findings demonstrate increased BOLD responses in the left anterior intraparietal area when participants viewed hands and feet as compared to faces and objects. Anterior intraparietal and occipitotemporal areas in the left hemisphere also exhibited response preferences for right hands viewed in orientations commonly seen for one’s own hand, as compared to uncommon own-hand orientations. Our results indicate that both anterior intraparietal and occipitotemporal areas encode visual limb-specific shape and orientation information.

7.
Head direction (HD) cell responses are thought to be derived from a combination of internal (idiothetic) and external (allothetic) sources of information. Recent work from the Jeffery laboratory shows that the relative influence of visual versus vestibular inputs upon the HD cell response depends on the disparity between these sources. In this paper, we present simulation results from a model designed to explain these observations. The model accurately replicates the Knight et al. data. We suggest that cue conflict resolution is critically dependent on plastic remapping of visual information onto the HD cell layer. This remapping results in a shift in the preferred directions of a subset of HD cells, which is then inherited by the rest of the cells during path integration. Thus, we demonstrate how, over a period of several minutes, a visual landmark may gain cue control. Furthermore, simulation results show that weaker visual landmarks fail to gain cue control as readily. We therefore suggest a second, longer-term plasticity in visual projections onto HD cell areas, through which landmarks with an inconsistent relationship to idiothetic information are made less salient, significantly hindering their ability to gain cue control. Our results provide a mechanism for reliability-weighted cue averaging that may pertain to other neural systems in addition to the HD system.
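The reliability-weighted interaction of idiothetic and visual inputs can be illustrated with a toy update rule. This sketch is not the authors' attractor network; the single gain w_visual is a stand-in for the strength of the learned visual projection onto the HD layer, and all values are invented.

```python
def wrap(angle_deg):
    """Map an angular difference into (-180, 180]."""
    return (angle_deg + 180.0) % 360.0 - 180.0

def hd_step(hd, angular_velocity, landmark_bearing, w_visual, dt=0.02):
    """One update of a toy HD estimate: idiothetic path integration
    plus a correction toward the visual landmark, weighted by the
    landmark's learned reliability."""
    hd += angular_velocity * dt                   # vestibular/idiothetic
    hd += w_visual * wrap(landmark_bearing - hd)  # visual cue control
    return hd % 360.0

# a reliable landmark (w=0.2) captures the HD signal within 50 updates;
# a weak one (w=0.02) has barely begun to shift it
for w in (0.2, 0.02):
    hd = 0.0
    for _ in range(50):
        hd = hd_step(hd, angular_velocity=0.0, landmark_bearing=90.0, w_visual=w)
    print(f"w_visual={w}: HD settled at {hd:.1f} deg")
```

The paper's proposed longer-term plasticity would correspond to lowering w_visual for landmarks whose bearing is inconsistent with idiothetic updates.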

8.
There is growing evidence that individuals are able to understand others’ emotions because they “embody” them, i.e., re-experience them by activating a representation of the observed emotion within their own body. One way to study emotion embodiment is provided by a multisensory stimulation paradigm called emotional visual remapping of touch (eVRT), in which the degree of embodiment/remapping of emotions is measured as enhanced detection of near-threshold tactile stimuli on one’s own face while viewing different emotional facial expressions. Here, we measured remapping of fear and disgust in participants with low (LA) and high (HA) levels of alexithymia, a personality trait characterized by difficulty in recognizing emotions. The results showed that fear is remapped in LA but not in HA participants, while disgust is remapped in HA but not in LA participants. To investigate the hypothesis that HA participants might exhibit increased responses to emotional stimuli that produce heightened physical and visceral sensations, i.e., disgust, in a second experiment we assessed participants’ interoceptive abilities and the link between interoception and emotional modulation of VRT. The results showed that participants’ disgust modulation of VRT correlated with their ability to perceive bodily signals. We suggest that the emotional profile of HA individuals on the eVRT task could be related to their abnormal tendency to focus on internal bodily signals and to experience emotions in a “physical” way. Finally, we speculate that these results in HA participants could be due to an enhancement of insular activity during the perception of disgusted faces.

9.
Hearing one’s own voice is critical for fluent speech production as it allows for the detection and correction of vocalization errors in real time. This behavior, known as the auditory feedback control of speech, is impaired in various neurological disorders ranging from stuttering to aphasia; however, the underlying neural mechanisms are still poorly understood. Computational models of speech motor control suggest that, during speech production, the brain uses an efference copy of the motor command to generate an internal estimate of the speech output. When actual feedback differs from this internal estimate, an error signal is generated to correct the internal estimate and update the motor commands necessary to produce the intended speech. We localized the auditory error signal using electrocorticographic recordings from neurosurgical participants during a delayed auditory feedback (DAF) paradigm. In this task, participants heard their own voice with a time delay as they produced words and sentences (similar to an echo on a conference call), a manipulation well known to disrupt fluency by causing slow and stutter-like speech. We observed a significant response enhancement in auditory cortex that scaled with the duration of the feedback delay, indicating an auditory speech error signal. Immediately following auditory cortex, the dorsal precentral gyrus (dPreCG), a region not previously implicated in auditory feedback processing, exhibited a markedly similar response enhancement, suggesting tight coupling between the two regions. Critically, response enhancement in dPreCG occurred only during articulation of long utterances, due to a continuous mismatch between produced speech and reafferent feedback. These results suggest that dPreCG plays an essential role in processing auditory error signals during speech production to maintain fluency.

Hearing one’s own voice is critical for fluent speech production, allowing detection and correction of vocalization errors in real time. This study shows that the dorsal precentral gyrus is a critical component of a cortical network that monitors auditory feedback to produce fluent speech; this region is engaged specifically when speech production is effortful, during articulation of long utterances.
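The forward-model logic behind the error signal can be made concrete with a toy computation. The sketch below illustrates the general efference-copy idea, not the authors' analysis pipeline: the predicted feedback is the produced signal itself, the actual feedback is a delayed copy, and the mismatch grows with the delay, mirroring the delay-scaled response enhancement reported in auditory cortex. The signal and delays are invented stand-ins.

```python
import numpy as np

def feedback_error(produced, delay_samples):
    """Mean squared mismatch between the efference-copy prediction
    (the produced signal) and delayed auditory feedback."""
    actual = np.roll(produced, delay_samples)
    actual[:delay_samples] = 0.0               # nothing heard yet
    return np.mean((actual - produced) ** 2)

t = np.linspace(0.0, 1.0, 1000)
envelope = np.sin(2 * np.pi * 4.0 * t)         # stand-in speech envelope
for d in (0, 20, 50, 100):                     # delay in samples
    print(f"delay = {d:3d} samples -> error = {feedback_error(envelope, d):.3f}")
```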

10.
A computational model of hippocampal activity during spatial cognition and navigation tasks is presented. The spatial representation in our model of the rat hippocampus is built online during exploration via two processing streams. An allothetic, vision-based representation is built by unsupervised Hebbian learning that extracts spatio-temporal properties of the environment from visual input. An idiothetic representation is learned from internal movement-related information provided by path integration. At the level of the hippocampus, allothetic and idiothetic representations are integrated to yield a stable representation of the environment by a population of localized, overlapping CA3-CA1 place fields. The hippocampal spatial representation is used as a basis for goal-oriented spatial behavior. We focus on the neural pathway connecting the hippocampus to the nucleus accumbens. Place cells drive a population of locomotor action neurons in the nucleus accumbens. Reward-based learning is applied to map place cell activity into action cell activity. The ensemble action cell activity provides navigational maps to support spatial behavior. We present experimental results obtained with a mobile Khepera robot.
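The hippocampus-to-accumbens stage of the model, mapping place-cell activity onto locomotor action neurons through reward-based learning, can be sketched in one dimension. Everything below (a linear track, eight action cells, the learning rate and reward definition) is an assumed simplification for illustration, not the robot implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
centers = np.linspace(0.0, 1.0, 50)                 # place-field centers on a track
headings = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
W = np.zeros((8, 50))                               # place-cell -> action-cell weights

def place_activity(x, sigma=0.05):
    """Gaussian place-cell population code for position x."""
    return np.exp(-(x - centers) ** 2 / (2.0 * sigma**2))

def step(x, goal=0.9, lr=0.1, eps=0.2):
    """Pick an action from the place-driven action cells (eps-greedy),
    move, and reinforce the pairing by the reward (progress to goal)."""
    p = place_activity(x)
    a = rng.integers(8) if rng.random() < eps else int(np.argmax(W @ p))
    x_new = float(np.clip(x + 0.02 * np.cos(headings[a]), 0.0, 1.0))
    reward = abs(goal - x) - abs(goal - x_new)      # positive if we got closer
    W[a] += lr * reward * p                         # reward-modulated Hebbian update
    return x_new

for episode in range(200):
    x = 0.1
    for _ in range(100):
        x = step(x)
print("position after training:", round(x, 2))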

11.
Sensorimotor learning critically depends on error signals. Learning usually tries to minimise these error signals to guarantee optimal performance. Errors can, however, have both internal causes, resulting from one’s own sensorimotor system, and external causes, resulting from external disturbances. Does learning take into account the perceived cause of error information? Here, we investigated the recalibration of internal predictions about the sensory consequences of one’s actions. Since these predictions underlie the distinction between self- and externally produced sensory events, we assumed them to be recalibrated only by prediction errors attributed to internal causes. When subjects were confronted with experimentally induced visual prediction errors about their pointing movements in virtual reality, they recalibrated the predicted visual consequences of their movements. Recalibration was not proportional to the externally generated prediction error, but correlated with the error component that subjects attributed to internal causes. We also observed adaptation in subjects’ motor performance that reflected their recalibrated sensory predictions. Thus, causal attribution of error information is essential for sensorimotor learning.
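The abstract's central claim, that recalibration tracks only the internally attributed error component, corresponds to a very simple update rule. The sketch below is hypothetical (the attribution probability and learning rate are made up) and is meant only to make the proposed computation explicit.

```python
def recalibrate(prediction, observed, p_internal, lr=0.5):
    """Update the predicted sensory consequence using only the error
    component attributed to internal causes (p_internal in [0, 1])."""
    error = observed - prediction
    return prediction + lr * p_internal * error

prediction = 0.0
for trial in range(5):
    # a 10-degree visual offset, 60% of which is attributed to oneself
    prediction = recalibrate(prediction, observed=10.0, p_internal=0.6)
    print(f"trial {trial + 1}: predicted consequence = {prediction:.2f} deg")
```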

12.
Understanding of adaptive behavior requires the precisely controlled presentation of multisensory stimuli combined with simultaneous measurement of multiple behavioral modalities. Hence, we developed a virtual reality apparatus that allows for simultaneous measurement of reward checking, a commonly used measure in associative learning paradigms, and navigational behavior, along with precisely controlled presentation of visual, auditory and reward stimuli. Rats performed a virtual spatial navigation task analogous to the Morris maze where only distal visual or auditory cues provided spatial information. Spatial navigation and reward checking maps showed experience-dependent learning and were in register for distal visual cues. However, they showed a dissociation, whereby distal auditory cues failed to support spatial navigation but did support spatially localized reward checking. These findings indicate that rats can navigate in virtual space with only distal visual cues, without significant vestibular or other sensory inputs. Furthermore, they reveal the simultaneous dissociation between two reward-driven behaviors.

13.
Mammalian spatial navigation systems utilize several different sensory information channels. This information is converted into a neural code that represents the animal’s current position in space by engaging place cell, grid cell, and head direction cell networks. In particular, sensory landmark (allothetic) cues can be utilized in concert with an animal’s knowledge of its own velocity (idiothetic) cues to generate a more accurate representation of position than path integration provides on its own (Battaglia et al., The Journal of Neuroscience 24(19):4541–4550, 2004). We develop a computational model that merges path integration with feedback from external sensory cues that provide a reliable representation of spatial position along an annular track. Starting with a continuous bump attractor model, we explore the impact of synaptic spatial asymmetry and heterogeneity, which disrupt the position code of the path integration process. We use asymptotic analysis to reduce the bump attractor model to a single scalar equation whose potential represents the impact of asymmetry and heterogeneity. Such imperfections cause errors to build up when the network performs path integration, but these errors can be corrected by an external control signal representing the effects of sensory cues. We demonstrate that there is an optimal strength and decay rate of the control signal when cues appear either periodically or randomly. A similar analysis is performed when errors in path integration arise from dynamic noise fluctuations. Again, there is an optimal strength and decay of discrete control that minimizes the path integration error.
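The reduced scalar model can be simulated directly to show why control strength has an optimum. The sketch below is a loose numerical caricature of the analysis, not the paper's equations: a deterministic drift stands in for synaptic asymmetry/heterogeneity, and periodic cues inject an exponentially decaying corrective signal. All constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_sq_error(steps=5000, dt=0.01, drift=0.2, noise=0.1,
                  cue_every=500, strength=1.0, decay=5.0):
    """Scalar position estimate on a track: drift and noise corrupt
    path integration; periodic cues reset a decaying control signal
    that pushes the estimate back toward the true position."""
    true_x, est_x, control, errs = 0.0, 0.0, 0.0, []
    for t in range(steps):
        v = np.cos(2.0 * np.pi * t * dt / 10.0)      # movement on the track
        true_x += v * dt
        est_x += (v + drift + rng.normal(0.0, noise)) * dt
        if t % cue_every == 0:
            control = strength * (true_x - est_x)    # sensory cue arrives
        est_x += control * dt
        control *= np.exp(-decay * dt)               # control decays away
        errs.append((true_x - est_x) ** 2)
    return np.mean(errs)

for s in (0.0, 2.0, 5.0, 10.0):
    print(f"control strength {s:4.1f}: mean sq. error = {mean_sq_error(strength=s):.3f}")
```

Too weak a control undercorrects the accumulated drift; too strong a control overshoots before it decays, so the error is minimized at an intermediate strength, qualitatively matching the paper's result.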

14.
Humans can learn and store multiple visuomotor mappings (dual adaptation) when feedback for each is provided alternately. Moreover, learned context cues associated with each mapping can be used to switch between the stored mappings. However, little is known about the associative learning between cue and required visuomotor mapping, and about how learning generalises to novel but similar conditions. To investigate these questions, participants performed a rapid target-pointing task while we manipulated the offset between visual feedback and movement end-points. The visual feedback was presented with horizontal offsets of different amounts, dependent on the target’s shape. Participants thus needed to use different visuomotor mappings between target location and required motor response, depending on the target shape, in order to “hit” it. The target shapes were drawn from a continuous set of shapes morphed between spiky and circular. After training, we tested participants’ performance, without feedback, on target shapes that had not been learned previously. We compared two hypotheses. First, participants could (explicitly) extract the linear relationship between target shape and visuomotor mapping and generalise accordingly. Second, building on previous findings in visuomotor learning, we developed an (implicit) Bayesian learning model that predicts generalisation more consistent with categorisation (i.e., using one mapping or the other). The experimental results show that, although learning the associations requires explicit awareness of the cues’ role, participants apply the mapping corresponding to the trained shape most similar to the current one, consistent with the Bayesian learning model. Furthermore, the Bayesian learning model predicts that learning should slow down with an increasing number of training pairs, which was confirmed by the present results. In short, the good correspondence between the Bayesian learning model and the empirical results indicates that this model offers a possible mechanism for simultaneously learning multiple visuomotor mappings.
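The two generalisation hypotheses can be stated in a few lines of code. This is an illustrative reduction (one morph dimension, two trained shape-offset pairs, both values invented), not the authors' Bayesian model.

```python
import numpy as np

trained_shapes = np.array([0.0, 1.0])     # morph level: 0 = spiky, 1 = circular
trained_offsets = np.array([-3.0, 3.0])   # visuomotor offset trained per shape

def generalise_linear(shape):
    """Hypothesis 1: extrapolate the linear shape -> offset relation."""
    return float(np.interp(shape, trained_shapes, trained_offsets))

def generalise_categorical(shape):
    """Hypothesis 2 (favoured by the data): apply the mapping of the
    most similar trained shape, i.e., categorise the cue."""
    return float(trained_offsets[np.argmin(np.abs(trained_shapes - shape))])

for s in (0.0, 0.3, 0.7, 1.0):
    print(f"shape {s}: linear {generalise_linear(s):+.1f}, "
          f"categorical {generalise_categorical(s):+.1f}")
```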

15.
In the presence of vision, finalized motor acts can trigger spatial remapping, i.e., reference-frame transformations that allow for better interaction with targets. However, it is still unclear how peripersonal space is encoded and remapped depending on the availability of visual feedback and on the target position within the individual’s reachable space, and which cerebral areas subserve these processes. Here, functional magnetic resonance imaging (fMRI) was used to examine neural activity while healthy young participants performed reach-to-grasp movements with and without visual feedback and at different distances of the target from the effector (near the hand, about 15 cm from the starting position, vs. far from the hand, about 30 cm from the starting position). Brain responses in the superior parietal lobule bilaterally, in the right dorsal premotor cortex, and in the anterior part of the right inferior parietal lobule were significantly greater during visually guided grasping of targets located at the far distance compared to grasping of targets located near the hand. In the absence of visual feedback, the inferior parietal lobule exhibited greater activity during grasping of targets at the near compared to the far distance. The results suggest that, in the presence of visual feedback, a visuo-motor circuit integrates visuo-motor information when targets are located farther away. Conversely, in the absence of visual feedback, encoding of space may demand multisensory remapping processes, even for more proximal targets.

16.
Freezing of gait (FOG) is arguably the most severe symptom associated with Parkinson’s disease (PD), and often occurs while performing dual tasks or approaching narrowed and cluttered spaces. While it is well known that visual cues alleviate FOG, it is not clear whether this effect is the result of cognitive or sensorimotor mechanisms. Nevertheless, the role of vision may be the critical link that allows us to disentangle this question. Gaze behaviour has yet to be carefully investigated while freezers approach narrow spaces, thus the overall objective of this study was to explore the interaction between cognitive and sensory-perceptual influences on FOG. In experiment #1, if cognitive load is the underlying factor leading to FOG, then one might expect a dual task to elicit FOG episodes even in the presence of visual cues, since the load on attention would interfere with the utilization of visual cues. Alternatively, if visual cues alleviate gait despite performance of a dual task, then it is more probable that sensory mechanisms are at play. To complement this, the aim of experiment #2 was to further challenge the sensory systems by removing vision of the lower limbs, thereby forcing participants to rely on forms of sensory feedback other than vision while walking toward the narrow space. Spatiotemporal aspects of gait, percentage of gaze-fixation frequency and duration, and skin conductance levels were measured in freezers and non-freezers across both experiments. Results from experiment #1 indicated that although freezers and non-freezers both walked with worse gait while performing the dual task, in freezers gait was relieved by visual cues regardless of whether the cognitive demands of the dual task were present. At baseline and while dual-tasking, freezers demonstrated a gaze behaviour that neglected the doorway and instead focused primarily on the pathway, a strategy that non-freezers adopted only when performing the dual task. Interestingly, with the combination of visual cues and the dual task, freezers increased the frequency and duration of fixations toward the doorway compared to non-freezers. These results suggest that although increasing demand on attention significantly deteriorates gait in freezers, an increase in cognitive demand is not exclusively responsible for freezing (since visual cues were able to overcome any interference elicited by the dual task). When vision of the lower limbs was removed in experiment #2, only the freezers’ gait was affected. However, when visual cues were present, freezers’ gait improved regardless of the dual task. This gait behaviour was accompanied by a greater amount of time spent looking at the visual cues irrespective of the dual task. Since removing vision of the lower limbs hindered gait even under low attentional demand, restricted sensory feedback may be an important factor in the mechanisms underlying FOG.

17.
A biologically inspired model of head direction cells is presented and tested on a small mobile robot. Head direction cells (discovered in the brain of rats in 1984) encode the head orientation of their host irrespective of the host’s location in the environment. The head direction system thus acts as a biological compass (though not a magnetic one) for its host. Head direction cells are influenced in different ways by idiothetic (host-centred) and allothetic (not host-centred) cues. The model presented here uses visual, vestibular and kinesthetic inputs simulated by robot sensors. Real robot-sensor data were used to train the model’s artificial neural network connections. The main contribution of this paper lies in the use of an evolutionary algorithm to determine the values of the parameters that govern the model’s behaviour. Importantly, the objective function of the evolutionary strategy takes into consideration quantitative biological observations reported in the literature.
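The paper's use of an evolutionary strategy to set model parameters can be sketched generically. The code below is a minimal (mu + lambda)-style strategy with an invented objective; the real objective function scores the HD model against quantitative biological observations instead.

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(params):
    """Stand-in objective: negative distance to a target parameter
    vector. The real objective compares HD-model behaviour against
    quantitative biological observations."""
    target = np.array([0.5, -1.2, 2.0])
    return -np.sum((params - target) ** 2)

def evolve(pop_size=20, generations=100, sigma=0.3, n_params=3):
    """Simple evolutionary strategy: keep the best half, mutate it."""
    pop = rng.normal(0.0, 1.0, size=(pop_size, n_params))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]
        children = parents + rng.normal(0.0, sigma, parents.shape)
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(p) for p in pop])]

print("best parameters found:", np.round(evolve(), 2))
```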

18.
Auditory cues can create the illusion of self-motion (vection) in the absence of visual or physical stimulation. The present study aimed to determine whether auditory cues alone can also elicit motion sickness, and how auditory cues contribute to motion sickness when added to visual motion stimuli. Twenty participants were seated in front of a curved projection display and were exposed to a virtual scene that constantly rotated around the participant’s vertical axis. The virtual scene contained either visual-only, auditory-only, or a combination of corresponding visual and auditory cues. All participants performed all three conditions in a counterbalanced order. Participants tilted their heads alternately toward the right or left shoulder in all conditions during stimulus exposure in order to create pseudo-Coriolis effects and to maximize the likelihood of motion sickness. Measures of motion sickness (onset, severity), vection (latency, strength, duration), and postural steadiness (center of pressure) were recorded. Results showed that adding auditory cues to the visual stimuli did not, on average, affect motion sickness or postural steadiness, but it did reduce vection onset times and increase vection strength compared to pure visual or pure auditory stimulation. Eighteen of the 20 participants reported at least slight motion sickness in the two conditions that included visual stimuli. More interestingly, six participants also reported slight motion sickness during pure auditory stimulation, and two of those six stopped the pure auditory session because of motion sickness. The present study is the first to demonstrate that motion sickness may be caused by pure auditory stimulation, which we refer to as “auditorily induced motion sickness”.

19.
Stepp CE, An Q, Matsuoka Y. PLoS ONE. 2012;7(2):e32743.
Most users of prosthetic hands must rely on visual feedback alone, which requires visual attention and cognitive resources. Providing haptic feedback of variables relevant to manipulation, such as contact force, may thus improve the usability of prosthetic hands for tasks of daily living. Vibrotactile stimulation was explored as a feedback modality in ten unimpaired participants across eight sessions in a two-week period. Participants used their right index finger to perform a virtual object manipulation task with both visual and augmentative vibrotactile feedback related to force. Through repeated training, participants were able to learn to use the vibrotactile feedback to significantly improve object manipulation. Removal of vibrotactile feedback in session 8 significantly reduced task performance. These results suggest that vibrotactile feedback paired with training may enhance the manipulation ability of prosthetic hand users without the need for more invasive strategies.

20.
Keller GB, Bonhoeffer T, Hübener M. Neuron. 2012;74(5):809-815.
Studies in anesthetized animals have suggested that activity in early visual cortex is mainly driven by visual input and is well described by a feedforward processing hierarchy. However, evidence from experiments on awake animals has shown that both eye movements and behavioral state can strongly modulate responses of neurons in visual cortex, although the functional significance of this modulation remains elusive. Using visual-flow feedback manipulations during locomotion in a virtual reality environment, we found that responses in layer 2/3 of mouse primary visual cortex are strongly driven by locomotion and by mismatch between actual and expected visual feedback. These data suggest that processing in visual cortex may be based on predictive coding strategies that use motor-related and visual input to detect mismatches between predicted and actual visual feedback.
