Similar Literature
A total of 20 similar articles were found.
1.
How the brain combines information from different sensory modalities and of differing reliability is an important and still-unanswered question. Using the head direction (HD) system as a model, we explored the resolution of conflicts between landmarks and background cues. Sensory cue integration models predict averaging of the two cues, whereas attractor models predict capture of the signal by the dominant cue. We found that a visual landmark mostly captured the HD signal at low conflicts; however, there was an increasing propensity for the cells to integrate the cues thereafter. A large conflict presented to naive rats resulted in greater visual cue capture (less integration) than in experienced rats, revealing an effect of experience. We propose that weighted cue integration in HD cells arises from dynamic plasticity of the feed-forward inputs to the network, causing within-trial spatial redistribution of the visual inputs onto the ring. This suggests that an attractor network can implement decision processes about cue reliability using simple architecture and learning rules, thus providing a potential neural substrate for weighted cue integration.
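
The two model classes make testably different predictions under conflict. A minimal sketch (not the authors' model; the equal weights and conflict angles are illustrative): the integration account predicts a reliability-weighted circular mean of the two cue directions, whereas the capture account predicts that the dominant cue wins outright.

```python
import numpy as np

def weighted_circular_mean(theta_a, theta_b, w_a, w_b):
    """Reliability-weighted circular mean of two directional cues (radians)."""
    x = w_a * np.cos(theta_a) + w_b * np.cos(theta_b)
    y = w_a * np.sin(theta_a) + w_b * np.sin(theta_b)
    return np.arctan2(y, x)

# Hypothetical conflict: landmark at 0 deg, background cues rotated by `conflict`.
for conflict in (15, 45, 90, 120):
    theta_lm, theta_bg = 0.0, np.radians(conflict)
    # Integration prediction: average weighted by (assumed equal) reliability.
    avg = np.degrees(weighted_circular_mean(theta_lm, theta_bg, 0.5, 0.5))
    # Attractor-capture prediction: the dominant (landmark) cue wins outright.
    print(f"conflict {conflict:3d} deg -> integration {avg:5.1f} deg, capture 0.0 deg")
```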

2.
Cockroaches use navigational cues to elaborate their return path to the shelter. Our experiments investigated how individuals weighted information to choose where to search for the shelter in situations where path integration, visual and olfactory cues were conflicting. We showed that homing relied on a complex set of environmental stimuli, each playing a particular part. Path integration cues give cockroaches an estimation of the position of their goal, visual landmarks guide them to that position from a distance, while olfactory cues indicate the end of the path. Cockroaches gave the greatest importance to the first cues they encountered along their return path. Nevertheless, visual cues placed beyond aggregation pheromone deposits reduced their arrest efficiency and induced search in the area near the visual cues.

3.
Recent interest in the neural bases of spatial navigation stems from the discovery of neuronal populations with strong, specific spatial signals. The regular firing field arrays of medial entorhinal grid cells suggest that they may provide place cells with distance information extracted from the animal's self-motion, a notion we critically review by citing new contrary evidence. Next, we question the idea that grid cells provide a rigid distance metric. We also discuss evidence that normal navigation is possible using only landmarks, without self-motion signals. We then propose a model that supposes that information flow in the navigational system changes between light and dark conditions. We assume that the true map-like representation is hippocampal and argue that grid cells have a crucial navigational role only in the dark. In this view, their activity in the light is predominantly shaped by landmarks rather than self-motion information, and so follows place cell activity; in the dark, their activity is determined by self-motion cues and controls place cell activity. A corollary is that place cell activity in the light depends on non-grid cells in ventral medial entorhinal cortex. We conclude that analysing navigational system changes between landmark and no-landmark conditions will reveal key functional properties.

4.
Foraging ants are known to use multiple sources of information to return to the nest. These cue sets are employed by independent navigational systems including path integration in the case of celestial cues and vision-based learning in the case of terrestrial landmarks and the panorama. When cue sets are presented in conflict, the Australian desert ant species, Melophorus bagoti, will choose a compromise heading between the directions dictated by the cues or, when navigating on well-known routes, foragers choose the direction indicated by the terrestrial cues of the panorama against the dictates of celestial cues. Here, we explore the roles of learning terrestrial cues and delays since cue exposure in these navigational decisions by testing restricted foragers with differing levels of terrestrial cue experience under the maximum (180°) cue conflict. Restricted foragers appear unable to extrapolate landmark information from the nest to a displacement site 8 m away. Given only one homeward experience, foragers can successfully orient using terrestrial cues, but this experience is not sufficient to override a conflicting vector. Terrestrial cue strength increases with multiple experiences and eventually overrides the celestial cues. This appears to be a dynamic choice, as foragers discount the reliability of the terrestrial cues over time, reverting to preferring the celestial vector when the forager has an immediate vector but its last exposure to the terrestrial cues was 24 hr in the past. Foragers may be employing navigational decision making that can be predicted by the temporal weighting rule.
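
The temporal weighting rule discounts a cue's influence by how recently it was experienced. A minimal sketch, assuming an illustrative exponential recency discount (the time constant and reliability values below are hypothetical, not quantities fitted in this study):

```python
import numpy as np

def cue_weight(reliability, hours_since_exposure, tau_hours=12.0):
    """Recency-discounted cue weight: reliability decays exponentially
    with time since the cue was last experienced (illustrative form)."""
    return reliability * np.exp(-hours_since_exposure / tau_hours)

# Terrestrial cues last seen 24 h ago vs. a just-acquired celestial vector.
w_terrestrial = cue_weight(reliability=0.9, hours_since_exposure=24.0)
w_celestial   = cue_weight(reliability=0.6, hours_since_exposure=0.0)
print(f"terrestrial {w_terrestrial:.2f} vs celestial {w_celestial:.2f}")
# With this parameterization the fresh celestial vector dominates,
# matching the reported reversion after a 24 h delay.
```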

5.
Gaze following in human infants depends on communicative signals (total citations: 1; self-citations: 0; citations by others: 1)
Humans are extremely sensitive to ostensive signals, like eye contact or having their name called, that indicate someone's communicative intention toward them [1-3]. Infants also pay attention to these signals [4-6], but it is unknown whether they appreciate their significance in the initiation of communicative acts. In two experiments, we employed video presentation of an actor turning toward one of two objects and recorded infants' gaze-following behavior [7-13] with eye-tracking techniques [11, 12]. We found that 6-month-old infants followed the adult's gaze (a potential communicative-referential signal) toward an object only when such an act was preceded by ostensive cues such as direct gaze (experiment 1) and infant-directed speech (experiment 2). Such a link between the presence of ostensive signals and gaze following suggests that this behavior serves a functional role in assisting infants to effectively respond to referential communication directed to them. Whereas gaze following in many nonhuman species supports social information gathering [14-18], in humans it initially appears to reflect the expectation of a more active, communicative role from the information source.

6.
Responses of multisensory neurons to combinations of sensory cues are generally enhanced or depressed relative to single cues presented alone, but the rules that govern these interactions have remained unclear. We examined integration of visual and vestibular self-motion cues in macaque area MSTd in response to unimodal as well as congruent and conflicting bimodal stimuli in order to evaluate hypothetical combination rules employed by multisensory neurons. Bimodal responses were well fit by weighted linear sums of unimodal responses, with weights typically less than one (subadditive). Surprisingly, our results indicate that weights change with the relative reliabilities of the two cues: visual weights decrease and vestibular weights increase when visual stimuli are degraded. Moreover, both modulation depth and neuronal discrimination thresholds improve for matched bimodal compared to unimodal stimuli, which might allow for increased neural sensitivity during multisensory stimulation. These findings establish important new constraints for neural models of cue integration.
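
The combination rule under test can be written as R_bimodal ≈ w_vis·R_vis + w_vest·R_vest with subadditive weights. A short sketch recovering such weights from synthetic tuning curves by least squares (all tuning parameters and the neuron itself are made up for illustration, not MSTd data):

```python
import numpy as np

rng = np.random.default_rng(0)
headings = np.linspace(-np.pi, np.pi, 36)

# Synthetic unimodal tuning curves (spikes/s) for one MSTd-like neuron.
r_visual     = 20 * np.exp(np.cos(headings - 0.3)) / np.exp(1)
r_vestibular = 12 * np.exp(np.cos(headings + 0.1)) / np.exp(1)

# Synthetic bimodal response: subadditive weighted sum plus noise.
r_bimodal = 0.7 * r_visual + 0.5 * r_vestibular + rng.normal(0, 1, headings.size)

# Recover the weights by ordinary least squares.
X = np.column_stack([r_visual, r_vestibular])
w, *_ = np.linalg.lstsq(X, r_bimodal, rcond=None)
print(f"fitted weights: visual {w[0]:.2f}, vestibular {w[1]:.2f}")  # ~0.7, ~0.5
```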

7.
Desert ants learn vibration and magnetic landmarks (total citations: 1; self-citations: 0; citations by others: 1)
Desert ants of the genus Cataglyphis navigate not only by path integration but also by using visual and olfactory landmarks to pinpoint the nest entrance. Here we show that Cataglyphis noda can additionally use magnetic and vibrational landmarks as nest-defining cues. The magnetic field may typically provide directional rather than positional information, and vibrational signals so far have been shown to be involved in social behavior. Thus it remains questionable whether magnetic and vibration landmarks are usually provided by the ants' habitat as nest-defining cues. However, our results point to the flexibility of the ants' navigational system, which even makes use of cues that are probably most often sensed in a different context.

8.
As we move through the world, information can be combined from multiple sources in order to allow us to perceive our self-motion. The vestibular system detects and encodes the motion of the head in space. In addition, extra-vestibular cues such as retinal-image motion (optic flow), proprioception, and motor efference signals provide valuable motion cues. Here I focus on the coding strategies that are used by the brain to create neural representations of self-motion. I review recent studies comparing the thresholds of single versus populations of vestibular afferent and central neurons. I then consider recent advances in understanding the brain's strategy for combining information from the vestibular sensors with extra-vestibular cues to estimate self-motion. These studies emphasize the need to consider not only the rules by which multiple inputs are combined, but also how differences in the behavioral context govern the nature of what defines the optimal computation.

9.
In many tropical animals, male and female breeding partners combine their songs to produce vocal duets [1-5]. Duets are often so highly coordinated that human listeners mistake them for the songs of a single animal [6]. Behavioral ecologists rank duets among the most complex vocal performances in the animal kingdom [7, 8]. Despite much research, the evolutionary significance of duets remains elusive [9], in part because many duetting animals live in tropical habitats where dense vegetation makes behavioral observation difficult or impossible. Here, we evaluate the duetting behavior of rufous-and-white wrens (Thryothorus rufalbus) in the humid forests of Costa Rica. We employ two innovative technical approaches to study duetting behavior: an eight-microphone acoustic location system capable of triangulating animals' positions on the basis of recordings of their vocalizations [10] and dual-speaker playback capable of simulating duets in a spatially realistic manner [11]. Our analyses provide the first detailed spatial information on duetting both in a natural context and during confrontations with rivals. We demonstrate that birds perform duets across highly variable distances, that birds approach their partner after performing duets, and that duets of rivals induce aggressive, sex-specific responses. We conclude that duets serve distinct functions in aggressive and nonaggressive contexts.
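
Acoustic location systems of this kind work from time differences of arrival (TDOA) across the microphone array. A minimal sketch, assuming idealized synchronized clocks and flat 2D geometry (the array layout and source position are hypothetical, not the study's field configuration):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def localize_tdoa(mics, arrival_times, half=25.0, step=0.05):
    """Grid-search 2D source localization from time differences of arrival.
    mics: (N, 2) microphone coordinates in metres; arrival_times: (N,) s."""
    xs = np.arange(-half, half, step)
    X, Y = np.meshgrid(xs, xs)
    dt = arrival_times - arrival_times[0]          # mic 0 as time reference
    d0 = np.hypot(X - mics[0, 0], Y - mics[0, 1])
    err = np.zeros_like(X)
    for m, t in zip(mics[1:], dt[1:]):
        dm = np.hypot(X - m[0], Y - m[1])
        err += ((dm - d0) / SPEED_OF_SOUND - t) ** 2
    i = np.unravel_index(np.argmin(err), err.shape)
    return X[i], Y[i]

# Hypothetical 8-microphone array (10 m square) and a singer at (3, -4).
mics = np.array([[0, 0], [10, 0], [10, 10], [0, 10],
                 [5, 0], [10, 5], [5, 10], [0, 5]], dtype=float)
source = np.array([3.0, -4.0])
times = np.hypot(*(mics - source).T) / SPEED_OF_SOUND
print(localize_tdoa(mics, times))   # recovers ~ (3.0, -4.0)
```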

10.
The ability to determine one's location is fundamental to spatial navigation. Here, it is shown that localization is theoretically possible without the use of external cues, and without knowledge of initial position or orientation. With only error-prone self-motion estimates as input, a fully disoriented agent can, in principle, determine its location in familiar spaces with 1-fold rotational symmetry. Surprisingly, localization does not require the sensing of any external cue, including the boundary. The combination of self-motion estimates and an internal map of the arena provide enough information for localization. This stands in conflict with the supposition that 2D arenas are analogous to open fields. Using a rodent error model, it is shown that the localization performance which can be achieved is enough to initiate and maintain stable firing patterns like those of grid cells, starting from full disorientation. Successful localization was achieved when the rotational asymmetry was due to the external boundary, an interior barrier or a void space within an arena. Optimal localization performance was found to depend on arena shape, arena size, local and global rotational asymmetry, and the structure of the path taken during localization. Since allothetic cues including visual and boundary contact cues were not present, localization necessarily relied on the fusion of idiothetic self-motion cues and memory of the boundary. Implications for spatial navigation mechanisms are discussed, including possible relationships with place field overdispersion and hippocampal reverse replay. Based on these results, experiments are suggested to identify if and where information fusion occurs in the mammalian spatial memory system.
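
The claimed mechanism can be illustrated with a simple particle filter: propagate a cloud of position-and-heading hypotheses with the shared noisy odometry and prune any hypothesis whose path would exit the remembered arena. This is a sketch under assumed noise levels, not the paper's rodent error model; the notched-rectangle arena and all parameters are illustrative, and convergence may need more particles or steps.

```python
import numpy as np

rng = np.random.default_rng(1)

def inside(p):
    """Remembered arena: a 2 x 1 m rectangle with one corner notched out,
    giving it 1-fold rotational symmetry (no rotation maps it to itself)."""
    in_rect = (np.abs(p[..., 0]) < 1.0) & (np.abs(p[..., 1]) < 0.5)
    in_notch = (p[..., 0] > 0.5) & (p[..., 1] > 0.0)
    return in_rect & ~in_notch

def sample_uniform(n):
    """Rejection-sample n points uniformly inside the arena."""
    pts = np.empty((0, 2))
    while len(pts) < n:
        cand = np.column_stack([rng.uniform(-1, 1, n), rng.uniform(-0.5, 0.5, n)])
        pts = np.vstack([pts, cand[inside(cand)]])
    return pts[:n]

N, STEP = 8000, 0.05
true_pos, true_hd = sample_uniform(1)[0], rng.uniform(0, 2 * np.pi)
pos, hd = sample_uniform(N), rng.uniform(0, 2 * np.pi, N)  # full disorientation

for _ in range(600):
    # True agent takes a random turn; re-draw the turn if it would exit.
    turn = rng.normal(0, 0.5)
    while not inside(true_pos + STEP * np.array(
            [np.cos(true_hd + turn), np.sin(true_hd + turn)])):
        turn = rng.uniform(-np.pi, np.pi)
    true_hd += turn
    true_pos = true_pos + STEP * np.array([np.cos(true_hd), np.sin(true_hd)])
    # All particles share the noisy odometry (idiothetic cues only).
    hd = hd + turn + rng.normal(0, 0.02, N)
    pos = pos + STEP * np.column_stack([np.cos(hd), np.sin(hd)])
    # Map constraint: a hypothesis whose path leaves the arena is impossible.
    ok = inside(pos)
    if not ok.any():                       # degenerate case: reinitialize
        pos, hd = sample_uniform(N), rng.uniform(0, 2 * np.pi, N)
        continue
    idx = rng.choice(np.flatnonzero(ok), N)  # resample the surviving hypotheses
    pos, hd = pos[idx], hd[idx]

# Error should shrink as inconsistent hypotheses are pruned at the walls.
print(f"position error: {np.linalg.norm(pos.mean(axis=0) - true_pos):.3f} m")
```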

11.
Our inner ear is equipped with a set of linear accelerometers, the otolith organs, that sense the inertial accelerations experienced during self-motion. However, as Einstein pointed out nearly a century ago, this signal would by itself be insufficient to detect our real movement, because gravity, another form of linear acceleration, and self-motion are sensed identically by otolith afferents. To deal with this ambiguity, it was proposed that neural populations in the pons and midline cerebellum compute an independent, internal estimate of gravity using signals arising from the vestibular rotation sensors, the semicircular canals. This hypothesis, regarding a causal relationship between firing rates and postulated sensory contributions to inertial motion estimation, has been directly tested here by recording neural activities before and after inactivation of the semicircular canals. We show that, unlike cells in normal animals, the gravity component of neural responses was nearly absent in canal-inactivated animals. We conclude that, through integration of temporally matched, multimodal information, neurons derive the mathematical signals predicted by the equations describing the physics of the outside world.
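
The computation being tested is the textbook internal-model solution to the otolith ambiguity (sign conventions vary across authors):

```latex
% Otoliths sense the gravito-inertial acceleration; the semicircular
% canals supply the angular velocity needed to track gravity in
% head-fixed coordinates, which disambiguates the otolith signal.
\begin{align}
  \mathbf{f} &= \mathbf{a} + \mathbf{g}
    && \text{otolith signal: inertial plus gravitational acceleration}\\
  \dot{\hat{\mathbf{g}}} &= -\,\boldsymbol{\omega} \times \hat{\mathbf{g}}
    && \text{gravity estimate rotated by the canal signal } \boldsymbol{\omega}\\
  \hat{\mathbf{a}} &= \mathbf{f} - \hat{\mathbf{g}}
    && \text{recovered self-motion (inertial) acceleration}
\end{align}
```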

12.
Perceived depth is conveyed by multiple cues, including binocular disparity and luminance shading. Depth perception from luminance shading information depends on the perceptual assumption for the incident light, which has been shown to default to a diffuse illumination assumption. We focus on the case of sinusoidally corrugated surfaces, asking how shading and disparity cues combine when they are defined by the joint luminance gradients and intrinsic disparity modulation that would occur in viewing the physical corrugation of a uniform surface under diffuse illumination. Such surfaces were simulated with a sinusoidal luminance modulation (0.26 or 1.8 cy/deg, contrast 20%-80%) modulated either in phase or in opposite phase with a sinusoidal disparity of the same corrugation frequency, with disparity amplitudes ranging from 0′ to 20′. The observers' task was to adjust the binocular disparity of a comparison random-dot stereogram surface to match the perceived depth of the joint luminance/disparity-modulated corrugation target. Regardless of target spatial frequency, the perceived target depth increased with the luminance contrast and depended on luminance phase but was largely unaffected by the disparity modulation. These results validate the idea that human observers can use the diffuse illumination assumption to perceive depth from luminance gradients alone without making an assumption of light direction. For depth judgments with combined cues, the observers gave much greater weighting to the luminance shading than to the disparity modulation of the targets. The results were not well fit by a Bayesian cue-combination model weighted in inverse proportion to the variance of the measurements for each cue in isolation. Instead, they suggest that the visual system uses disjunctive mechanisms to process these two types of information rather than combining them according to their likelihood ratios.
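
For reference, the rejected model is the standard maximum-likelihood combination in which each cue's weight is proportional to its reliability (inverse variance); here L denotes the luminance-shading estimate and D the disparity estimate:

```latex
% Standard reliability-weighted (maximum-likelihood) cue combination,
% the benchmark model that failed to fit these data.
\begin{equation}
  \hat{d} = w_L \hat{d}_L + w_D \hat{d}_D,
  \qquad
  w_L = \frac{1/\sigma_L^2}{1/\sigma_L^2 + 1/\sigma_D^2},
  \qquad
  w_D = 1 - w_L
\end{equation}
```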

13.
Head direction (HD) cell responses are thought to be derived from a combination of internal (or idiothetic) and external (or allothetic) sources of information. Recent work from the Jeffery laboratory shows that the relative influence of visual versus vestibular inputs upon the HD cell response depends on the disparity between these sources. In this paper, we present simulation results from a model designed to explain these observations. The model accurately replicates the Knight et al. data. We suggest that cue conflict resolution is critically dependent on plastic remapping of visual information onto the HD cell layer. This remap results in a shift in preferred directions of a subset of HD cells, which is then inherited by the rest of the cells during path integration. Thus, we demonstrate how, over a period of several minutes, a visual landmark may gain cue control. Furthermore, simulation results show that weaker visual landmarks fail to gain cue control as readily. We therefore suggest a second, longer-term plasticity in visual projections onto HD cell areas, through which landmarks with an inconsistent relationship to idiothetic information are made less salient, significantly hindering their ability to gain cue control. Our results provide a mechanism for reliability-weighted cue averaging that may pertain to other neural systems in addition to the HD system.
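
A minimal sketch of the remapping idea, not the authors' full model (ring size, tuning widths, and learning rate are illustrative): Hebbian potentiation of visual projections onto the currently active HD cells gradually shifts where a conflicting landmark drives the ring.

```python
import numpy as np

N = 60  # HD cells and visual bearing channels, evenly spaced over 360 deg
dirs = np.linspace(0, 2 * np.pi, N, endpoint=False)

def bump(center, width=0.3):
    """Gaussian-like activity profile on the ring (wrapped differences)."""
    d = np.angle(np.exp(1j * (dirs - center)))
    return np.exp(-d ** 2 / (2 * width ** 2))

# Feed-forward visual weights, initially an identity-like mapping:
# a landmark at bearing theta drives HD cells tuned near theta.
W = np.stack([bump(c) for c in dirs])   # W[j, i]: visual channel j -> HD cell i

conflict = np.radians(30)
lr = 0.05
for _ in range(100):
    hd_act = bump(0.0)                  # attractor holds heading at 0 (idiothetic)
    vis_act = bump(conflict)            # landmark now signals a rotated bearing
    # Hebbian remap of visual projections onto the currently active HD cells.
    W += lr * np.outer(vis_act, hd_act)
    W /= W.sum(axis=1, keepdims=True)   # normalization keeps weights bounded

# Where does the rotated landmark now drive the ring?
drive = bump(conflict) @ W
print(f"visual drive now peaks at {np.degrees(dirs[np.argmax(drive)]):.0f} deg "
      "(remapped toward the attractor's heading at 0 deg)")
```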

14.
Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point (1). Humans can do path integration based exclusively on visual (2-3), auditory (4), or inertial cues (5). However, with multiple cues present, inertial cues - particularly kinaesthetic - seem to dominate (6-7). In the absence of vision, humans tend to overestimate short distances (<5 m) and turning angles (<30°), but underestimate longer ones (5). Movement through physical space therefore does not seem to be accurately represented by the brain. Extensive work has been done on evaluating path integration in the horizontal plane, but little is known about vertical movement (see (3) for virtual movement from vision alone). One reason for this is that traditional motion simulators have a small range of motion restricted mainly to the horizontal plane. Here we take advantage of a motion simulator (8-9) with a large range of motion to assess whether path integration is similar between horizontal and vertical planes. The relative contributions of inertial and visual cues for path navigation were also assessed. Sixteen observers sat upright in a seat mounted to the flange of a modified KUKA anthropomorphic robot arm. Sensory information was manipulated by providing visual (optic flow, limited lifetime star field), vestibular-kinaesthetic (passive self motion with eyes closed), or visual and vestibular-kinaesthetic motion cues. Movement trajectories in the horizontal, sagittal and frontal planes consisted of two segment lengths (1st: 0.4 m, 2nd: 1 m; ±0.24 m/s² peak acceleration). The angle of the two segments was either 45° or 90°. Observers pointed back to their origin by moving an arrow that was superimposed on an avatar presented on the screen. Observers were more likely to underestimate angle size for movement in the horizontal plane compared to the vertical planes. In the frontal plane observers were more likely to overestimate angle size, while there was no such bias in the sagittal plane. Finally, observers responded more slowly when answering based on vestibular-kinaesthetic information alone. Human path integration based on vestibular-kinaesthetic information alone thus takes longer than when visual information is present. That pointing was consistent with underestimation of the traversed angle in the horizontal plane and overestimation in the vertical planes suggests that the neural representation of self-motion through space is non-symmetrical, which may relate to the fact that humans experience movement mostly within the horizontal plane.
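
For the trajectories used here, the correct pointing response follows from plane geometry. A sketch computing the egocentric home direction after the two segments (segment lengths and turn angles are taken from the protocol above; the sign conventions and frame are illustrative):

```python
import numpy as np

def homing_angle(seg1=0.4, seg2=1.0, turn_deg=90.0):
    """Direction back to the origin after two path segments in a plane.
    Start at the origin heading along +x; turn by turn_deg; traverse seg2."""
    turn = np.radians(turn_deg)
    end = np.array([seg1, 0.0]) + seg2 * np.array([np.cos(turn), np.sin(turn)])
    final_heading = turn
    to_origin = np.arctan2(-end[1], -end[0])
    # Pointing response expressed relative to the final heading, wrapped.
    return np.degrees(np.angle(np.exp(1j * (to_origin - final_heading))))

for turn in (45, 90):
    print(f"turn {turn} deg -> correct pointing {homing_angle(turn_deg=turn):.1f} deg")
```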

15.
Simultaneous object motion and self-motion give rise to complex patterns of retinal image motion. In order to estimate object motion accurately, the brain must parse this complex retinal motion into self-motion and object motion components. Although this computational problem can be solved, in principle, through purely visual mechanisms, extra-retinal information that arises from the vestibular system during self-motion may also play an important role. Here we investigate whether combining vestibular and visual self-motion information improves the precision of object motion estimates. Subjects were asked to discriminate the direction of object motion in the presence of simultaneous self-motion, depicted either by visual cues alone (i.e. optic flow) or by combined visual/vestibular stimuli. We report a small but significant improvement in object motion discrimination thresholds with the addition of vestibular cues. This improvement was greatest for eccentric heading directions and negligible for forward movement, a finding that could reflect increased relative reliability of vestibular versus visual cues for eccentric heading directions. Overall, these results are consistent with the hypothesis that vestibular inputs can help parse retinal image motion into self-motion and object motion components.

16.
Bishop CW, Miller LM. PLoS ONE 2011, 6(8): e24016
Speech is the most important form of human communication but ambient sounds and competing talkers often degrade its acoustics. Fortunately the brain can use visual information, especially its highly precise spatial information, to improve speech comprehension in noisy environments. Previous studies have demonstrated that audiovisual integration depends strongly on spatiotemporal factors. However, some integrative phenomena such as McGurk interference persist even with gross spatial disparities, suggesting that spatial alignment is not necessary for robust integration of audiovisual place-of-articulation cues. It is therefore unclear how speech cues interact with audiovisual spatial integration mechanisms. Here, we combine two well established psychophysical phenomena, the McGurk effect and the ventriloquist's illusion, to explore this dependency. Our results demonstrate that conflicting spatial cues may not interfere with audiovisual integration of speech, but conflicting speech cues can impede integration in space. This suggests a direct but asymmetrical influence between ventral 'what' and dorsal 'where' pathways.

17.
The object of this study is to mathematically specify important characteristics of visual flow during translation of the eye for the perception of depth and self-motion. We address various strategies by which the central nervous system may estimate self-motion and depth from motion parallax, using equations for the visual velocity field generated by translation of the eye through space. Our results focus on information provided by the movement and deformation of three-dimensional objects and on local flow behavior around a fixated point. All of these issues are addressed mathematically in terms of definite equations for the optic flow. This formal characterization of the visual information presented to the observer is then considered in parallel with other sensory cues to self-motion in order to see how these contribute to the effective use of visual motion parallax, and how parallactic flow can, conversely, contribute to the sense of self-motion. This article focuses on a central case for understanding motion parallax in spacious real-world environments: monocular visual cues observable during pure horizontal translation of the eye through a stationary environment. We suggest that the global optokinetic stimulus associated with visual motion parallax must converge in significant fashion with vestibular and proprioceptive pathways that carry signals related to self-motion. Suggestions of experiments to test some of the predictions of this study are made.
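
A central equation of this kind, for the case named above, is the bearing rate of a stationary point during pure translation of the eye:

```latex
% v = translation speed of the eye, d = distance to the point, and
% theta = the point's eccentricity from the translation direction.
% Retinal motion specifies only the ratio v/d, which is why
% extra-retinal signals about v are needed to recover absolute depth
% from motion parallax.
\begin{equation}
  \dot{\theta} = \frac{v \sin\theta}{d}
\end{equation}
```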

18.
In situations with redundant or competing sensory information, humans have been shown to perform cue integration, weighting different cues according to their certainty in a quantifiably optimal manner. Ants have been shown to merge the directional information available from their path integration (PI) and visual memory, but as yet it is not clear that they do so in a way that reflects the relative certainty of the cues. In this study, we manipulate the variance of the PI home vector by allowing ants (Cataglyphis velox) to run different distances and testing their directional choice when the PI vector direction is put in competition with visual memory. Ants show progressively stronger weighting of their PI direction as PI length increases. The weighting is quantitatively predicted by modelling the expected directional variance of home vectors of different lengths and assuming optimal cue integration. However, a subsequent experiment suggests ants may not actually compute an internal estimate of the PI certainty, but are using the PI home vector length as a proxy.
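
The modelling step can be sketched as follows: accumulate noisy unit steps, measure the circular spread of the resulting home-vector direction, and convert it to an inverse-variance weight against a fixed visual uncertainty. The noise magnitudes below are illustrative, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(7)

def pi_direction_sd(n_steps, step_noise_deg=15.0, trials=2000):
    """Circular spread of the PI home-vector direction after a straight
    outbound run of n_steps unit steps with noisy heading estimates."""
    noise = np.radians(step_noise_deg)
    headings = rng.normal(0.0, noise, (trials, n_steps))
    vx = np.cos(headings).sum(axis=1)
    vy = np.sin(headings).sum(axis=1)
    return np.degrees(np.std(np.arctan2(vy, vx)))

sigma_vis = 20.0  # assumed fixed directional uncertainty of the visual memory
for n in (2, 8, 32):
    s = pi_direction_sd(n)
    w_pi = (1 / s**2) / (1 / s**2 + 1 / sigma_vis**2)  # inverse-variance weight
    print(f"{n:2d} steps: PI sd {s:4.1f} deg -> optimal PI weight {w_pi:.2f}")
# Longer home vectors average more steps, so their direction is more
# certain and the predicted PI weight rises, as observed behaviourally.
```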

19.
Visual perception is burdened with a highly discontinuous input stream arising from saccadic eye movements. For successful integration into a coherent representation, the visuomotor system needs to deal with these self-induced perceptual changes and distinguish them from external motion. Forward models are one way to solve this problem: the brain uses internal monitoring signals associated with oculomotor commands to predict the visual consequences of corresponding eye movements during active exploration. Visual scenes typically contain a rich structure of spatial relational information, providing additional cues that may help disambiguate self-induced from external changes of perceptual input. We reasoned that a weighted integration of these two inherently noisy sources of information should lead to better perceptual estimates. Volunteer subjects performed a simple perceptual decision on the apparent displacement of a visual target that jumped unpredictably in sync with a saccadic eye movement. In a critical test condition, the target was presented together with a flanker object, where perceptual decisions could take into account the spatial distance between target and flanker object. Here, precision was better compared to control conditions in which target displacements could only be estimated from either extraretinal or visual relational information alone. Our findings suggest that under natural conditions, integration of visual space across eye movements is based upon close to optimal integration of both retinal and extraretinal pieces of information.
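
The benchmark for "precision was better" is the standard optimal-integration prediction that the combined estimate's variance falls below that of either cue alone (ext = extraretinal, rel = visual relational):

```latex
% Optimal (inverse-variance weighted) combination of the extraretinal
% and visual relational estimates of target displacement.
\begin{equation}
  \sigma^2_{\mathrm{comb}}
    = \frac{\sigma^2_{\mathrm{ext}}\,\sigma^2_{\mathrm{rel}}}
           {\sigma^2_{\mathrm{ext}} + \sigma^2_{\mathrm{rel}}}
    \;\le\; \min\!\left(\sigma^2_{\mathrm{ext}},\ \sigma^2_{\mathrm{rel}}\right)
\end{equation}
```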

20.
Self-localization requires that information from several sensory modalities and knowledge domains be integrated in order to identify an environment and determine current location and heading. This integration occurs by the convergence of highly processed sensory information onto neural systems in entorhinal cortex and hippocampus. Entorhinal neurons combine angular and linear self-motion information to generate an oriented metric signal that is then 'attached' to each environment using information about landmarks and context. Neurons in hippocampus use this signal to determine the animal's unique position within a particular environment. Elucidating this process illuminates not only spatial processing but also, more generally, how the brain builds knowledge representations from inputs carrying heterogeneous sensory and semantic content.
