Similar Articles
1.
Head direction (HD) cell responses are thought to be derived from a combination of internal (or idiothetic) and external (or allothetic) sources of information. Recent work from the Jeffery laboratory shows that the relative influence of visual versus vestibular inputs upon the HD cell response depends on the disparity between these sources. In this paper, we present simulation results from a model designed to explain these observations. The model accurately replicates the Knight et al. data. We suggest that cue conflict resolution is critically dependent on plastic remapping of visual information onto the HD cell layer. This remap results in a shift in preferred directions of a subset of HD cells, which is then inherited by the rest of the cells during path integration. Thus, we demonstrate how, over a period of several minutes, a visual landmark may gain cue control. Furthermore, simulation results show that weaker visual landmarks fail to gain cue control as readily. We therefore suggest a second, longer-term plasticity in visual projections onto HD cell areas, through which landmarks with an inconsistent relationship to idiothetic information are made less salient, significantly hindering their ability to gain cue control. Our results provide a mechanism for reliability-weighted cue averaging that may pertain to other neural systems in addition to the HD system.

2.
Environmental information is required to stabilize estimates of head direction (HD) based on angular path integration. However, it is unclear how this happens in real-world (visually complex) environments. We present a computational model of how visual feedback can stabilize HD information in environments that contain multiple cues of varying stability and directional specificity. We show how combinations of feature-specific visual inputs can generate a stable unimodal landmark bearing signal, even in the presence of multiple cues and ambiguous directional specificity. This signal is associated with the retrosplenial HD signal (inherited from thalamic HD cells) and conveys feedback to the subcortical HD circuitry. The model predicts neurons with a unimodal encoding of the egocentric orientation of the array of landmarks, rather than any one particular landmark. The relationship between these abstract landmark bearing neurons and head direction cells is reminiscent of the relationship between place cells and grid cells. Their unimodal encoding is formed from visual inputs via a modified version of Oja’s Subspace Algorithm. The rule allows the landmark bearing signal to disconnect from directionally unstable or ephemeral cues, incorporate newly added stable cues, support orientation across many different environments (high memory capacity), and is consistent with recent empirical findings on bidirectional HD firing reported in the retrosplenial cortex. Our account of visual feedback for HD stabilization provides a novel perspective on neural mechanisms of spatial navigation within richer sensory environments, and makes experimentally testable predictions.
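The learning rule referenced in this abstract is a modified version of Oja's Subspace Algorithm; the modification itself is not described here. As an illustrative sketch only, the following shows the standard (unmodified) Oja subspace rule, with a toy input distribution invented for the demonstration:

```python
import numpy as np

def oja_subspace_step(W, x, lr=0.01):
    """One update of the standard Oja subspace rule.

    W : (k, d) weight matrix, x : (d,) input vector.
    The rule drives the rows of W toward an orthonormal basis of the
    top-k principal subspace of the input distribution.
    """
    y = W @ x                                         # (k,) output activities
    return W + lr * (np.outer(y, x) - np.outer(y, y) @ W)

# Toy demo (invented data): inputs dominated by one direction;
# W converges toward that direction.
rng = np.random.default_rng(0)
d, k = 5, 1
W = rng.normal(size=(k, d)) * 0.1
principal = np.zeros(d)
principal[0] = 1.0
for _ in range(5000):
    x = principal * rng.normal(scale=2.0) + rng.normal(scale=0.1, size=d)
    W = oja_subspace_step(W, x, lr=0.005)
print(np.round(np.abs(W), 2))   # first component near 1, rest near 0
```

In the model above, the analogous computation would extract a stable landmark-bearing subspace from feature-specific visual inputs; the paper's modification additionally handles disconnecting from unstable cues.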

3.
Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses, based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in the velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of ∼10–100 meters and ∼1–10 minutes. These findings form a proof-of-concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also generate pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other.
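The error accumulation described here is diffusive: integrating a velocity signal corrupted by independent noise each timestep yields a position error whose spread grows like the square root of time. A stripped-down illustration of that scaling (pure noise integration with invented parameters, not the attractor-network model itself):

```python
import numpy as np

# Integrate per-step velocity noise across many independent trials and
# measure how the across-trial spread of the accumulated error grows.
rng = np.random.default_rng(1)
dt, sigma = 0.01, 0.5            # timestep and noise scale (arbitrary)
n_trials, n_steps = 1000, 4000
noise = rng.normal(scale=sigma, size=(n_trials, n_steps))
err = np.cumsum(noise * dt, axis=1)   # accumulated position error per trial
sd = err.std(axis=0)                  # spread across trials at each time

# Quadrupling the elapsed time should roughly double the spread (sqrt law).
t_mid, t_end = n_steps // 4, n_steps - 1
ratio = sd[t_end] / sd[t_mid]
print(round(ratio, 2))                # close to sqrt(4) = 2
```

This square-root growth is why the abstract reports a finite usable integration range (∼10–100 meters, ∼1–10 minutes) before sensory resets become necessary.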

4.
Responses of multisensory neurons to combinations of sensory cues are generally enhanced or depressed relative to single cues presented alone, but the rules that govern these interactions have remained unclear. We examined integration of visual and vestibular self-motion cues in macaque area MSTd in response to unimodal as well as congruent and conflicting bimodal stimuli in order to evaluate hypothetical combination rules employed by multisensory neurons. Bimodal responses were well fit by weighted linear sums of unimodal responses, with weights typically less than one (subadditive). Surprisingly, our results indicate that weights change with the relative reliabilities of the two cues: visual weights decrease and vestibular weights increase when visual stimuli are degraded. Moreover, both modulation depth and neuronal discrimination thresholds improve for matched bimodal compared to unimodal stimuli, which might allow for increased neural sensitivity during multisensory stimulation. These findings establish important new constraints for neural models of cue integration.
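The weighted-linear-sum fit described in this abstract can be sketched as an ordinary least-squares regression of bimodal responses on the two unimodal responses. All values below are invented stand-ins, not MSTd data:

```python
import numpy as np

# Synthetic unimodal firing rates and a subadditive linear combination.
rng = np.random.default_rng(2)
n = 200
r_vis = rng.gamma(4.0, 5.0, size=n)      # "visual" unimodal responses
r_vest = rng.gamma(4.0, 5.0, size=n)     # "vestibular" unimodal responses
w_true = np.array([0.7, 0.5])            # weights < 1, i.e. subadditive
r_bi = w_true[0] * r_vis + w_true[1] * r_vest + rng.normal(0.0, 1.0, n)

# Recover the weights by least squares:  r_bi ~ w_vis*r_vis + w_vest*r_vest
X = np.column_stack([r_vis, r_vest])
w_hat, *_ = np.linalg.lstsq(X, r_bi, rcond=None)
print(np.round(w_hat, 2))                # close to [0.7, 0.5]
```

The study's key finding would correspond to fitted weights that shift (visual down, vestibular up) when the visual stimulus is degraded, rather than staying fixed across reliability conditions.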

5.
To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying visual reliability. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During an audiovisual recalibration phase, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the audiovisual recalibration phase. We compared participants’ behavior to the predictions of three models of recalibration: (a) Reliability-based: each modality is recalibrated based on its relative reliability—less reliable cues are recalibrated more; (b) Fixed-ratio: the degree of recalibration for each modality is fixed; (c) Causal-inference: recalibration is directly determined by the discrepancy between a cue and its estimate, which in turn depends on the reliability of both cues, and inference about how likely the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, peaked at medium visual reliability, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or fixed-ratio models. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration. 
We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli.

6.
Previous cue integration studies have examined continuous perceptual dimensions (e.g., size) and have shown that human cue integration is well described by a normative model in which cues are weighted in proportion to their sensory reliability, as estimated from single-cue performance. However, this normative model may not be applicable to categorical perceptual dimensions (e.g., phonemes). In tasks defined over categorical perceptual dimensions, optimal cue weights should depend not only on the sensory variance affecting the perception of each cue but also on the environmental variance inherent in each task-relevant category. Here, we present a computational and experimental investigation of cue integration in a categorical audio-visual (articulatory) speech perception task. Our results show that human performance during audio-visual phonemic labeling is qualitatively consistent with the behavior of a Bayes-optimal observer. Specifically, we show that the participants in our task are sensitive, on a trial-by-trial basis, to the sensory uncertainty associated with the auditory and visual cues during phonemic categorization. In addition, we show that while sensory uncertainty is a significant factor in determining cue weights, it is not the only one, and participants' performance is consistent with an optimal model in which environmental, within-category variability also plays a role in determining cue weights. Furthermore, we show that in our task, the sensory variability affecting the visual modality during cue-combination is not well estimated from single-cue performance, but can be estimated from multi-cue performance. The findings and computational principles described here represent a principled first step towards characterizing the mechanisms underlying human cue integration in categorical tasks.

7.
In order to maintain a coherent, unified percept of the external environment, the brain must continuously combine information encoded by our different sensory systems. Contemporary models suggest that multisensory integration produces a weighted average of sensory estimates, where the contribution of each system to the ultimate multisensory percept is governed by the relative reliability of the information it provides (maximum-likelihood estimation). In the present study, we investigate interactions between auditory and visual rate perception, where observers are required to make judgments in one modality while ignoring conflicting rate information presented in the other. We show a gradual transition between partial cue integration and complete cue segregation with increasing inter-modal discrepancy that is inconsistent with mandatory implementation of maximum-likelihood estimation. To explain these findings, we implement a simple Bayesian model of integration that is also able to predict observer performance with novel stimuli. The model assumes that the brain takes into account prior knowledge about the correspondence between auditory and visual rate signals, when determining the degree of integration to implement. This provides a strategy for balancing the benefits accrued by integrating sensory estimates arising from a common source, against the costs of conflating information relating to independent objects or events.
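The maximum-likelihood baseline that this Bayesian model relaxes weights each cue by its inverse variance; the combined estimate is then more reliable than either cue alone. A minimal sketch (the example rates and variances are invented):

```python
def mle_combine(mu_a, var_a, mu_v, var_v):
    """Reliability-weighted (maximum-likelihood) cue combination.

    Each cue is weighted by its inverse variance (its reliability);
    the combined variance is lower than either single-cue variance.
    """
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    w_v = 1.0 - w_a
    mu = w_a * mu_a + w_v * mu_v
    var = 1.0 / (1.0 / var_a + 1.0 / var_v)
    return mu, var

# Example: auditory rate 4 Hz (variance 1.0), visual rate 5 Hz (variance 0.25).
# The more reliable visual cue dominates the combined estimate.
mu, var = mle_combine(4.0, 1.0, 5.0, 0.25)
print(mu, var)   # 4.8, 0.2
```

Mandatory application of this rule would pull the estimates together at any discrepancy; the abstract's finding is that integration instead gives way to segregation as the inter-modal discrepancy grows, which the Bayesian model captures via a prior on audio-visual correspondence.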

8.
Mammalian spatial navigation systems utilize several different sensory information channels. This information is converted into a neural code that represents the animal’s current position in space by engaging place cell, grid cell, and head direction cell networks. In particular, sensory landmark (allothetic) cues can be utilized in concert with an animal’s knowledge of its own velocity (idiothetic) cues to generate a more accurate representation of position than path integration provides on its own (Battaglia et al. The Journal of Neuroscience 24(19):4541–4550 (2004)). We develop a computational model that merges path integration with feedback from external sensory cues that provide a reliable representation of spatial position along an annular track. Starting with a continuous bump attractor model, we explore the impact of synaptic spatial asymmetry and heterogeneity, which disrupt the position code of the path integration process. We use asymptotic analysis to reduce the bump attractor model to a single scalar equation whose potential represents the impact of asymmetry and heterogeneity. Such imperfections cause errors to build up when the network performs path integration, but these errors can be corrected by an external control signal representing the effects of sensory cues. We demonstrate that there is an optimal strength and decay rate of the control signal when cues appear either periodically or randomly. A similar analysis is performed when errors in path integration arise from dynamic noise fluctuations. Again, there is an optimal strength and decay of discrete control that minimizes the path integration error.

9.
In situations with redundant or competing sensory information, humans have been shown to perform cue integration, weighting different cues according to their certainty in a quantifiably optimal manner. Ants have been shown to merge the directional information available from their path integration (PI) and visual memory, but as yet it is not clear that they do so in a way that reflects the relative certainty of the cues. In this study, we manipulate the variance of the PI home vector by allowing ants (Cataglyphis velox) to run different distances and testing their directional choice when the PI vector direction is put in competition with visual memory. Ants show progressively stronger weighting of their PI direction as PI length increases. The weighting is quantitatively predicted by modelling the expected directional variance of home vectors of different lengths and assuming optimal cue integration. However, a subsequent experiment suggests ants may not actually compute an internal estimate of the PI certainty, but are using the PI home vector length as a proxy.

10.
Motivated by experimental observations of the head direction system, we study a three population network model that operates as a continuous attractor network. This network is able to store in a short-term memory an angular variable (the head direction) as a spatial profile of activity across neurons in the absence of selective external inputs, and to accurately update this variable on the basis of angular velocity inputs. The network is composed of one excitatory population and two inhibitory populations, with inter-connections between populations but no connections within the neurons of a same population. In particular, there are no excitatory-to-excitatory connections. Angular velocity signals are represented as inputs in one inhibitory population (clockwise turns) or the other (counterclockwise turns). The system is studied using a combination of analytical and numerical methods. Analysis of a simplified model composed of threshold-linear neurons gives the conditions on the connectivity for (i) the emergence of the spatially selective profile, (ii) reliable integration of angular velocity inputs, and (iii) the range of angular velocities that can be accurately integrated by the model. Numerical simulations allow us to study the proposed scenario in a large network of spiking neurons and compare their dynamics with that of head direction cells recorded in the rat limbic system. In particular, we show that the directional representation encoded by the attractor network can be rapidly updated by external cues, consistent with the very short update latencies observed experimentally by Zugaro et al. (2003) in thalamic head direction cells.

11.
How the brain constructs a coherent representation of the environment from noisy visual input remains poorly understood. Here, we explored whether awareness of the stimulus plays a role in the integration of local features into a representation of global shape. Participants were primed with a shape defined either by position or orientation cues, and performed a shape-discrimination task on a subsequently presented probe shape. Crucially, the probe could either be defined by the same or different cues as the prime, which allowed us to distinguish the effect of priming by local features and global shape. We found a robust priming benefit for visible primes, with response times being faster when the probe and prime were the same shape, regardless of the defining cue. However, rendering the prime invisible uncovered a dissociation: position-defined primes produced behavioural benefit only for probes of the same cue type. Surprisingly, orientation-defined primes afforded an enhancement only for probes of the opposite cue. In further experiments, we showed that the effect of priming was confined to retinotopic coordinates and that there was no priming effect by invisible orientation cues in an orientation-discrimination task. This explains the absence of priming by the same cue in our shape-discrimination task. In summary, our findings show that while in the absence of awareness orientation signals can recruit retinotopic circuits (e.g. intrinsic lateral connections), conscious processing is necessary to interpret local features as global shape.

12.
When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while heading estimation could improve with the ongoing motion, due to the constant flow of information, the estimate of how far we move requires the integration of sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even if their reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They were subjected to a passive linear motion involving visual and vestibular cues with different levels of visual coherence to change relative cue reliability and with cue discrepancies to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to their reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement.

13.
Continuous attractor networks require calibration. Computational models of the head direction (HD) system of the rat usually assume that the connections that maintain HD neuron activity are pre-wired and static. Ongoing activity in these models relies on precise continuous attractor dynamics. It is currently unknown how such connections could be so precisely wired, and how accurate calibration is maintained in the face of ongoing noise and perturbation. We present an adaptive attractor model of the HD system that uses symmetric angular head velocity (AHV) cells as a training signal; the model shows that the HD system can learn to support stable firing patterns from poorly-performing, unstable starting conditions. The proposed calibration mechanism suggests a requirement for symmetric AHV cells, the existence of which has previously been unexplained, and predicts that symmetric and asymmetric AHV cells should be distinctly different (in morphology, synaptic targets and/or methods of action on postsynaptic HD cells) due to their distinctly different functions.

14.
The Euclidean and MAX metrics have been widely used to model cue summation psychophysically and computationally. Both rules are special cases of a more general Minkowski summation rule, (Σ_i |c_i|^m)^(1/m), with m = 2 and m = ∞, respectively. In vision research, Minkowski summation with power m = 3-4 has been shown to be a superior model of how subthreshold components sum to give an overall detection threshold. We have previously reported that Minkowski summation with power m = 2.84 accurately models summation of suprathreshold visual cues in photographs. In four suprathreshold discrimination experiments, we confirm the previous findings with new visual stimuli and extend the applicability of this rule to cue combination in auditory stimuli (musical sequences and phonetic utterances, where m = 2.95 and 2.54, respectively) and cross-modal stimuli (m = 2.56). In all cases, Minkowski summation with power m = 2.5-3 outperforms the Euclidean and MAX operator models. We propose that this reflects the summation of neuronal responses that are not entirely independent but which show some correlation in their magnitudes. Our findings are consistent with electrophysiological research that demonstrates signal correlations (r = 0.1-0.2) between sensory neurons when these are presented with natural stimuli.
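The general rule is simple to compute; a sketch with arbitrary example cue strengths (the values are not from the study):

```python
import numpy as np

def minkowski_sum(cues, m):
    """Minkowski summation of cue strengths: (sum_i |c_i|^m)^(1/m).

    m = 2 recovers the Euclidean rule; as m grows the result
    approaches the MAX rule (the largest single cue dominates).
    """
    cues = np.abs(np.asarray(cues, dtype=float))
    return float((cues ** m).sum() ** (1.0 / m))

cues = [1.0, 0.8]
print(round(minkowski_sum(cues, 2), 3))    # Euclidean: 1.281
print(round(minkowski_sum(cues, 3), 3))    # m = 3, between Euclidean and MAX
print(round(minkowski_sum(cues, 50), 3))   # large m approaches MAX = 1.0
```

The abstract's empirical claim is that observed summation sits at m ≈ 2.5-3, i.e. between the Euclidean (m = 2) and MAX (m → ∞) extremes.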

15.
Development of cue integration in human navigation
Mammalian navigation depends both on visual landmarks and on self-generated (e.g., vestibular and proprioceptive) cues that signal the organism's own movement [1-5]. When these conflict, landmarks can either reset estimates of self-motion or be integrated with them [6-9]. We asked how humans combine these information sources and whether children, who use both from a young age [10-12], combine them as adults do. Participants attempted to return an object to its original place in an arena when given either visual landmarks only, nonvisual self-motion information only, or both. Adults, but not 4- to 5-year-olds or 7- to 8-year-olds, reduced their response variance when both information sources were available. In an additional "conflict" condition that measured relative reliance on landmarks and self-motion, we predicted behavior under two models: integration (weighted averaging) of the cues and alternation between them. Adults' behavior was predicted by integration, in which the cues were weighted nearly optimally to reduce variance, whereas children's behavior was predicted by alternation. These results suggest that development of individual spatial-representational systems precedes development of the capacity to combine these within a common reference frame. Humans can integrate spatial cues nearly optimally to navigate, but this ability depends on an extended developmental process.

16.
Cockroaches use navigational cues to elaborate their return path to the shelter. Our experiments investigated how individuals weighted information to choose where to search for the shelter in situations where path integration, visual and olfactory cues were conflicting. We showed that homing relied on a complex set of environmental stimuli, each playing a particular part. Path integration cues give cockroaches an estimation of the position of their goal, visual landmarks guide them to that position from a distance, while olfactory cues indicate the end of the path. Cockroaches gave the greatest importance to the first cues they encountered along their return path. Nevertheless, visual cues placed beyond aggregation pheromone deposits reduced their arrest efficiency and induced search in the area near the visual cues.

17.
Visually targeted reaching to a specific object is a demanding neuronal task requiring the translation of the location of the object from a two-dimensional set of retinotopic coordinates to a motor pattern that guides a limb to that point in three-dimensional space. This sensorimotor transformation has been intensively studied in mammals, but was not previously thought to occur in animals with smaller nervous systems such as insects. We studied horse-head grasshoppers (Orthoptera: Proscopididae) crossing gaps and found that visual inputs are sufficient for them to target their forelimbs to a foothold on the opposite side of the gap. High-speed video analysis showed that these reaches were targeted accurately and directly to footholds at different locations within the visual field through changes in forelimb trajectory and body position, and did not involve stereotyped searching movements. The proscopids estimated distant locations using peering to generate motion parallax, a monocular distance cue, but appeared to use binocular visual cues to estimate the distance of nearby footholds. Following occlusion of regions of binocular overlap, the proscopids resorted to peering to target reaches even to nearby locations. Monocular cues were sufficient for accurate targeting of the ipsilateral but not the contralateral forelimb. Thus, proscopids are capable not only of the sensorimotor transformations necessary for visually targeted reaching with their forelimbs but also of flexibly using different visual cues to target reaches.

18.
Behavioural responses of the gastropod Nerita fulgurans Gmelin, 1791 to flat black rectangles and intraspecific mucus trails were measured in a circular arena. Snails were tested in water either in the presence or absence of chemicals generated from a predator gastropod, Chicoreus brevifrons (Lamarck, 1822). The test hypothesis was that this snail has different behavioural responses as a result of visual and chemical cue integration. Nerita fulgurans has the capacity to orient to solid targets subtending angles larger than 10° and follow its own mucus trails. In water conditioned by the predator C. brevifrons, snails exhibited an avoidance response when 10°, 20° and 45° sectors were presented, demonstrating an integration of chemical and visual information. The simultaneous presentation of two orienting cues (black sectors and mucus trails) was tested to determine the nature of the interaction. When the two cues were oriented in the same direction there was no effect. When the two cues were presented from directions 180° apart, a preference for visual cues over mucus trail cues was evident when the visual cue subtended angles greater than 90°. This result demonstrates a hierarchical usage of the orienting references.

20.
Cells in several areas of the hippocampal formation show place specific firing patterns, and are thought to form a distributed representation of an animal's current location in an environment. Experimental results suggest that this representation is continually updated even in complete darkness, indicating the presence of a path integration mechanism in the rat. Adopting the Neural Engineering Framework (NEF) presented by Eliasmith and Anderson (2003) we derive a novel attractor network model of path integration, using heterogeneous spiking neurons. The network we derive incorporates representation and updating of position into a single layer of neurons, eliminating the need for a large external control population, and without making use of multiplicative synapses. An efficient and biologically plausible control mechanism results directly from applying the principles of the NEF. We simulate the network for a variety of inputs, analyze its performance, and give three testable predictions of our model.
