Similar Documents
20 similar documents found (search time: 15 ms)
1.
Heading estimation is vital to everyday navigation and locomotion. Despite extensive behavioral and physiological research on both visual and vestibular heading estimation over more than two decades, the accuracy of heading estimation has not yet been systematically evaluated. Therefore, human visual and vestibular heading estimation was assessed in the horizontal plane using a motion platform and a stereo visual display. Heading angle was overestimated during forward movements and underestimated during backward movements in response to both visual and vestibular stimuli, indicating an overall multimodal bias toward lateral directions. Lateral biases are consistent with the overrepresentation of lateral preferred directions observed in neural populations that carry visual and vestibular heading information, including MSTd and otolith afferent populations. Because of this overrepresentation, population vector decoding yields patterns of bias remarkably similar to those observed behaviorally. Lateral biases are inconsistent with standard Bayesian accounts, which predict that estimates should be biased toward the most common, straight-forward heading direction. Nevertheless, lateral biases may be functionally relevant: they effectively constitute a perceptual scale expansion around straight ahead, which could allow for more precise estimation and provide a high-gain feedback signal to facilitate maintenance of straight-forward heading during everyday navigation and locomotion.
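The population-vector account in this abstract lends itself to a short simulation. The sketch below is a minimal illustration, not the authors' model: it assumes Gaussian-shaped circular tuning, noise-free responses, and a hypothetical population whose preferred directions cluster around ±90° (lateral), and shows that vector decoding then pushes forward headings toward the sides.

```python
import numpy as np

def population_vector_estimate(true_heading_deg, preferred_dirs_deg, tuning_width_deg=45.0):
    """Decode heading as the vector sum of preferred directions weighted by
    each unit's (noise-free) circular-Gaussian tuning response."""
    diff = np.deg2rad(preferred_dirs_deg - true_heading_deg)
    responses = np.exp((np.cos(diff) - 1.0) / np.deg2rad(tuning_width_deg) ** 2)
    prefs = np.deg2rad(preferred_dirs_deg)
    x = np.sum(responses * np.cos(prefs))
    y = np.sum(responses * np.sin(prefs))
    return np.rad2deg(np.arctan2(y, x))

# hypothetical population: preferred directions clustered around +/-90 deg (lateral)
rng = np.random.default_rng(0)
lateral = np.concatenate([rng.normal(90, 30, 400), rng.normal(-90, 30, 400)])

for heading in [10, 30, 60]:  # forward headings, deg from straight ahead
    est = population_vector_estimate(heading, lateral)
    print(f"true {heading:+.0f} deg -> decoded {est:+.1f} deg")
# decoded headings fall farther from straight ahead than the true headings,
# i.e., forward heading angles are overestimated toward lateral directions
```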

2.
Heading estimation involves both inertial and visual cues. Inertial motion is sensed by the labyrinth, somatic sensation by the body, and optic flow by the retina. Because the eye and head are mobile, these stimuli are sensed relative to different reference frames, and it remains unclear whether perception occurs in a common reference frame. Recent neurophysiological evidence has suggested that the reference frames remain separate even at higher levels of processing, but it has not addressed the resulting perception. Seven human subjects experienced a 2 s, 16 cm/s translation and/or a visual stimulus corresponding to this translation. For each condition, 72 stimuli (360° in 5° increments) were delivered in random order. After each stimulus, the subject identified the perceived heading using a mechanical dial. Some trial blocks included interleaved conditions in which the influence of ±28° of gaze and/or head position was examined. The observations were fit using a two-degree-of-freedom population vector decoder (PVD) model that considered the relative sensitivity to lateral motion and the coordinate-system offset. For visual stimuli, gaze shifts caused shifts in perceived heading in the direction opposite the gaze shift in all subjects. These perceptual shifts averaged 13 ± 2° for eye-only gaze shifts and 17 ± 2° for eye-head gaze shifts. This finding indicates that visual headings are biased toward retinal coordinates. Similar gaze and head-direction shifts prior to inertial headings had no significant influence on perceived heading direction; thus, inertial headings are perceived in body-centered coordinates. Combined visual and inertial stimuli yielded intermediate results.
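The reference-frame result can be summarized with a little coordinate bookkeeping. The sketch below assumes a simple linear mixture of body-centered and retinal frames; the weights are hypothetical, chosen only to echo the roughly 13-17° shifts reported above.

```python
def perceived_heading(world_heading_deg, gaze_deg, frame_weight):
    """Heading percept under a partial reference-frame shift.

    frame_weight = 0 -> body-centered percept (unaffected by gaze)
    frame_weight = 1 -> fully retinal percept (shifts opposite the gaze shift)
    Intermediate values model a percept between the two frames.
    """
    retinal_heading = world_heading_deg - gaze_deg  # heading re-expressed on the retina
    return (1 - frame_weight) * world_heading_deg + frame_weight * retinal_heading

gaze = 28.0  # deg, as in the interleaved gaze conditions
# illustrative weights (hypothetical, chosen to reproduce ~13 and ~17 deg shifts)
for label, w in [("visual, eye-only", 13 / 28), ("visual, eye+head", 17 / 28), ("inertial", 0.0)]:
    shift = perceived_heading(0.0, gaze, w) - perceived_heading(0.0, 0.0, w)
    print(f"{label}: perceptual shift {shift:+.1f} deg for a {gaze:+.0f} deg gaze shift")
```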

3.
It has been shown that the central nervous system (CNS) integrates visual and inertial information in heading estimation for congruent multisensory stimuli and for stimuli with small discrepancies. Multisensory information should, however, only be integrated when the cues are redundant. Here, we investigated how the CNS constructs an estimate of heading for combinations of visual and inertial heading stimuli with a wide range of discrepancies. Participants were presented with 2 s visual-only and inertial-only motion stimuli, and combinations thereof. Discrepancies between visual and inertial heading ranging from 0° to 90° were introduced for the combined stimuli. In the unisensory conditions, visual heading was generally biased toward the fore-aft axis, while inertial heading was biased away from it. For multisensory stimuli, five out of nine participants integrated visual and inertial heading information regardless of the size of the discrepancy; for one participant, the data were best described by a model that explicitly performs causal inference. For the remaining three participants, the evidence could not readily distinguish between these models. The finding that multisensory information is integrated is in line with earlier findings, but the finding that even large discrepancies are generally disregarded is surprising. Possibly, people are insensitive to discrepancies in visual-inertial heading angle because such discrepancies are encountered only in artificial environments, making a neural mechanism to account for them otiose. An alternative explanation is that discrepancy detection depends on stimulus duration, with sensitivity to discrepancies differing between individuals.
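The two candidate models (mandatory integration versus causal inference) can be contrasted in a few lines. The sketch below assumes Gaussian likelihoods, hypothetical noise levels, and a standard model-averaging causal-inference scheme; it is not the paper's fitted model.

```python
import numpy as np

def fusion_estimate(x_vis, x_ine, s_vis, s_ine):
    """Reliability-weighted (forced-fusion) heading estimate."""
    w = s_ine**2 / (s_vis**2 + s_ine**2)  # visual weight
    return w * x_vis + (1 - w) * x_ine

def causal_inference_estimate(x_vis, x_ine, s_vis, s_ine, p_common=0.5, s_range=90.0):
    """Model-averaging causal inference for the inertial-heading percept
    (simplified: flat heading prior; all parameters hypothetical)."""
    var_sum = s_vis**2 + s_ine**2
    # likelihood of the observed discrepancy under one vs. two causes
    like_c1 = np.exp(-(x_vis - x_ine) ** 2 / (2 * var_sum)) / np.sqrt(2 * np.pi * var_sum)
    like_c2 = 1.0 / (2 * s_range)  # discrepancy "explained" by independent causes
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    fused = fusion_estimate(x_vis, x_ine, s_vis, s_ine)
    return post_c1 * fused + (1 - post_c1) * x_ine

s_vis, s_ine = 5.0, 10.0  # hypothetical sensory noise (deg)
for disc in [10, 45, 90]:  # visual-inertial discrepancy (deg)
    x_ine, x_vis = 0.0, float(disc)
    print(f"discrepancy {disc:2d} deg: fusion -> {fusion_estimate(x_vis, x_ine, s_vis, s_ine):5.1f}, "
          f"causal inference -> {causal_inference_estimate(x_vis, x_ine, s_vis, s_ine):5.1f}")
# fusion keeps weighting vision even at 90 deg; causal inference discounts
# the visual cue once the discrepancy becomes implausible for a common cause
```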

4.
Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localization should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localization, but these studies used small sample sizes and the results have been mixed. It is therefore not clear (1) whether the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may arise from the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine the presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. The data revealed that, on average, observers were biased toward the center when localizing visual stimuli and toward the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian causal inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled when both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact or cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but its magnitude was reduced compared to the unisensory conditions. Therefore, multisensory integration improves not only the precision of perceptual estimates but also their accuracy.

5.
Simultaneous object motion and self-motion give rise to complex patterns of retinal image motion. In order to estimate object motion accurately, the brain must parse this complex retinal motion into self-motion and object-motion components. Although this computational problem can, in principle, be solved through purely visual mechanisms, extra-retinal information arising from the vestibular system during self-motion may also play an important role. Here we investigate whether combining vestibular and visual self-motion information improves the precision of object-motion estimates. Subjects were asked to discriminate the direction of object motion in the presence of simultaneous self-motion, depicted either by visual cues alone (i.e., optic flow) or by combined visual/vestibular stimuli. We report a small but significant improvement in object-motion discrimination thresholds with the addition of vestibular cues. This improvement was greatest for eccentric heading directions and negligible for forward movement, a finding that could reflect the increased relative reliability of vestibular versus visual cues for eccentric headings. Overall, these results are consistent with the hypothesis that vestibular inputs can help parse retinal image motion into self-motion and object-motion components.

6.
When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while a heading estimate can improve with ongoing motion, thanks to the constant flow of information, estimating how far we move requires integrating sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even when cue reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual-reality environment. They were subjected to passive linear motion involving visual and vestibular cues, with different levels of visual coherence to change relative cue reliability and with cue discrepancies to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to cue reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement.
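The "weighting in proportion to reliability" rule tested here is the standard maximum-likelihood integration formula. A minimal sketch with hypothetical noise values (the coherence-to-sigma mapping is illustrative, not the study's):

```python
import numpy as np

def optimal_weights(sigma_vis, sigma_vest):
    """Maximum-likelihood cue weights: inversely proportional to cue variance."""
    r_vis, r_vest = 1 / sigma_vis**2, 1 / sigma_vest**2
    return r_vis / (r_vis + r_vest), r_vest / (r_vis + r_vest)

def combined_sigma(sigma_vis, sigma_vest):
    """Predicted (optimal) noise of the integrated displacement estimate."""
    return np.sqrt(1 / (1 / sigma_vis**2 + 1 / sigma_vest**2))

# hypothetical: visual reliability manipulated via optic-flow coherence,
# vestibular reliability held fixed
sigma_vest = 2.0
for sigma_vis in [1.0, 2.0, 4.0]:  # high -> low visual coherence
    w_vis, w_vest = optimal_weights(sigma_vis, sigma_vest)
    print(f"sigma_vis={sigma_vis}: w_vis={w_vis:.2f}, w_vest={w_vest:.2f}, "
          f"combined sigma={combined_sigma(sigma_vis, sigma_vest):.2f}")
# as visual coherence drops, weight shifts to the vestibular cue, and the
# combined estimate is always at least as precise as the better single cue
```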

7.
Zhang T, Heuer HW, Britten KH. Neuron. 2004;42(6):993-1001.
The ventral intraparietal area (VIP) is a multimodal parietal area where visual responses are brisk, directional, and typically selective for complex optic-flow patterns. VIP could thus provide signals useful for the visual estimation of heading (self-motion direction). A central problem in heading estimation is how observers compensate for eye velocity, which distorts the retinal motion cues upon which perception depends. To find out whether VIP could be useful for heading estimation, we measured its responses to simulated trajectories, both with and without eye movements. Our results showed that most VIP neurons signal heading direction very strongly. Furthermore, the tuning of most VIP neurons was remarkably stable in the presence of eye movements. This stability was such that the population of VIP neurons represented heading very nearly in head-centered coordinates. This makes VIP the most robust source of such signals yet described, with properties ideal for supporting perception.

8.
A moving visual field can induce the feeling of self-motion, or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear whether such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing a fixed visual stimulus coordinated with varying inertial stimuli, testing the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into five blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. The static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition: the illusory-motion controls were altered versions of the experimental image with the illusory motion effect removed, the moving-visual-stimulus controls were carried out in a dark room, and the arrow controls used a gray screen. In blocks containing a visual stimulus there was an 8 s viewing interval, with the inertial stimulus occurring over the final 1 s; this allowed the perception of the visual illusion to be measured using objective methods. When no visual stimulus was present, only the 1 s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, we measured the effect of each visual stimulus on the self-motion velocity (cm/s) at which subjects were equally likely to report motion in either direction. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p < 0.001), and for arrows (p = 0.02). For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception, driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p > 0.1 for both). Thus, although a truly moving visual field can induce self-motion, the results of this study show that illusory motion does not.
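The measure described here (the inertial velocity at which both directions are reported equally often) is a point of subjective equality from a psychometric function. A sketch, with made-up response proportions, of how such a PSE shift could be extracted:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, pse, sigma):
    """P(report rightward motion) as a function of inertial velocity x (cm/s)."""
    return norm.cdf(x, loc=pse, scale=abs(sigma))

# hypothetical proportions of "rightward" reports for one subject, measured
# in darkness and while viewing a leftward-moving star field
velocities   = np.array([-4.0, -2.0, -1.0, 0.0, 1.0, 2.0, 4.0])   # cm/s
p_dark       = np.array([0.02, 0.10, 0.30, 0.50, 0.70, 0.90, 0.98])
p_starfield  = np.array([0.01, 0.04, 0.12, 0.25, 0.50, 0.80, 0.95])

popt_dark, _ = curve_fit(psychometric, velocities, p_dark, p0=[0.0, 1.0])
popt_star, _ = curve_fit(psychometric, velocities, p_starfield, p0=[0.0, 1.0])
print(f"PSE in darkness:     {popt_dark[0]:+.2f} cm/s")
print(f"PSE with star field: {popt_star[0]:+.2f} cm/s")
print(f"shift induced by the visual stimulus: {popt_star[0] - popt_dark[0]:+.2f} cm/s")
```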

9.
Vestibular inputs are constantly processed and integrated with signals from other sensory modalities, such as vision and touch. The multiply-connected nature of vestibular cortical anatomy led us to investigate whether vestibular signals could participate in a multi-way interaction with visual and somatosensory perception. We used signal detection methods to identify whether vestibular stimulation might interact with both visual and somatosensory events in a detection task. Participants were instructed to detect near-threshold somatosensory stimuli delivered to the left index finger in half of the experimental trials. A visual signal occurred close to the finger in half of the trials, independently of the somatosensory stimuli. A novel near-infrared caloric vestibular stimulus (NirCVS) was used to artificially activate the vestibular organs, and sham stimulation was used to control for its non-specific effects. We found that both visual and vestibular events increased somatosensory sensitivity. Critically, we found no evidence of supra-additive multisensory enhancement when visual and vestibular signals were administered together; in fact, we found a trend toward a sub-additive interaction. The results are compatible with a vestibular role in somatosensory gain regulation.
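Signal detection analysis of this kind typically reduces to hit and false-alarm rates and d'. A small sketch with hypothetical rates and trial counts (the condition labels echo the design, but the numbers are illustrative, not the study's data):

```python
from scipy.stats import norm

def d_prime(hit_rate, fa_rate, n_signal, n_noise):
    """Sensitivity index with a log-linear correction for rates of 0 or 1."""
    hr = (hit_rate * n_signal + 0.5) / (n_signal + 1)
    fa = (fa_rate * n_noise + 0.5) / (n_noise + 1)
    return norm.ppf(hr) - norm.ppf(fa)

# hypothetical hit / false-alarm rates for detecting the near-threshold touch
conditions = {
    "baseline":            (0.55, 0.10),
    "visual event":        (0.65, 0.12),
    "vestibular (NirCVS)": (0.63, 0.11),
    "visual + vestibular": (0.66, 0.13),  # sub-additive: < sum of single boosts
}
base = d_prime(*conditions["baseline"], 100, 100)
for name, (hr, fa) in conditions.items():
    dp = d_prime(hr, fa, 100, 100)
    print(f"{name:22s} d' = {dp:.2f} (delta vs baseline {dp - base:+.2f})")
```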

10.
Delayed comparison tasks are widely used in the study of working memory and perception in psychology and neuroscience. It has long been known, however, that decisions in these tasks are biased. When the two stimuli in a delayed comparison trial are small in magnitude, subjects tend to report that the first stimulus was larger than the second; in contrast, they tend to report that the second stimulus was larger than the first when the stimuli are relatively large. Here we study the computational principles underlying this bias, also known as the contraction bias. We propose that the contraction bias results from a Bayesian computation in which a noisy representation of a magnitude is combined with a priori information about the distribution of magnitudes to optimize performance. We test our hypothesis on choice behavior in a visual delayed comparison experiment by studying the effects of (i) changing the prior distribution and (ii) changing the uncertainty in the memorized stimulus. We show that choice behavior under both manipulations is consistent with an observer who uses Bayesian inference to improve performance. Moreover, our results suggest that the contraction bias arises during memory retrieval/decision making and not during memory encoding. These results support the notion that the contraction-bias illusion can be understood as the outcome of optimality considerations.
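The proposed Bayesian account has a compact closed form: the posterior mean shrinks a noisy magnitude toward the prior mean in proportion to its uncertainty. A sketch with hypothetical numbers, assuming the first (remembered) stimulus is noisier than the second:

```python
def bayesian_percept(stimulus, noise_sd, prior_mean, prior_sd):
    """Posterior-mean estimate: the noisy trace is shrunk toward the prior
    mean in proportion to its uncertainty (Gaussian prior and likelihood)."""
    w_prior = noise_sd**2 / (noise_sd**2 + prior_sd**2)
    return (1 - w_prior) * stimulus + w_prior * prior_mean

# hypothetical setup: magnitudes drawn from a prior centered at 5 units;
# the first stimulus is held in memory, so its representation is noisier
prior_mean, prior_sd = 5.0, 2.0
sd_first, sd_second = 1.5, 0.5   # memory noise > perceptual noise

for s in [2.0, 5.0, 8.0]:
    m1 = bayesian_percept(s, sd_first, prior_mean, prior_sd)
    m2 = bayesian_percept(s, sd_second, prior_mean, prior_sd)
    # equal stimuli; small magnitudes: m1 > m2 -> "first larger" reports;
    # large magnitudes: m1 < m2 -> "second larger" reports (contraction bias)
    print(f"stimulus {s}: remembered 1st -> {m1:.2f}, perceived 2nd -> {m2:.2f}")
```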

11.
Responses to visual, acoustic, and vestibular stimuli were studied in neurons of the middle and deep layers of the tectum in the pigeon. Changes in the receptive field (RF) were assessed by comparing unit responses to isolated movement of a shaped visual stimulus with responses to movement of the stimulus during simultaneous presentation of a vestibular or acoustic stimulus. Changes in the RF of a neuron could be observed during the action of both vestibular and acoustic stimuli. These changes affected the identification of the predominant direction of stimulus movement, the position of the maximum in the response histogram, and the duration and number of spikes in the response. The direction of change in the RF of a neuron did not necessarily coincide with the sign of that neuron's response to isolated presentation of a vestibular or acoustic stimulus. On the basis of these results and data in the literature, it is postulated that the tectum transforms the flow of impulses arriving from the retina depending on the nature of the information it receives from other sensory systems.

12.
Yabe Y, Watanabe H, Taga G. PLoS ONE. 2011;6(7):e21642.
Information about ongoing body movements can affect the perception of ambiguous visual motion. Previous studies on "treadmill capture" have shown that treadmill walking biases the perception of ambiguous apparent motion in the backward direction, in accordance with the optic flow experienced during normal walking, and that long-term treadmill experience changes the effect of treadmill capture. To understand the mechanisms underlying these phenomena, we conducted Experiment 1 with non-treadmill runners and Experiment 2 with treadmill runners. Participants judged the motion direction of apparent-motion stimuli of horizontal gratings in front of their feet under three conditions: walking on a treadmill, standing on a treadmill, and standing on the floor. The non-treadmill runners showed a downward bias only in the walking condition, indicating that ongoing treadmill walking, but not the awareness of being on a treadmill, biased visual direction discrimination. In contrast, the treadmill runners showed no downward bias under any condition, indicating that neither ongoing activity nor awareness of the spatial context produced a perceptual bias. This suggests that long-term repetitive experience of treadmill walking without optic flow induces the formation of a treadmill-specific locomotor-visual linkage for perceiving the complex relationship between the self and the environment.

13.
We investigated how visual attentional resources are allocated during reaching movements; in particular, this study examined whether the direction of the reaching movement affects the allocation of visual attention. Participants held a stylus and reached toward a target stimulus on a graphics tablet as quickly and accurately as possible. The direction of the hand movement was either from near to far space or the reverse. Instead of directly seeing their hand movements, participants observed the visual stimuli and a cursor representing the hand position on a perpendicularly positioned display. Regardless of movement direction, participants responded more quickly to target stimuli located far from the start position than to those located near it. These results led us to conclude that attentional resources are preferentially allocated to areas far from the start position of a reaching movement. These findings not only provide information for basic research on attention but may also contribute to reducing human error in manipulation tasks performed with visual feedback on visual display terminals.

14.
Humans can distinguish between contours of similar orientation and between directions of visual motion. There is consensus that both of these capabilities depend on the selective activation of tuned neural channels. The bandwidths of these tuned channels are estimated here by modelling previously published empirical data. Human subjects were presented with a rapid stream of randomly oriented gratings, or randomly directed motions, and asked to respond when they saw a target stimulus. For the orientation task, subjects were less likely to respond when two preceding orientations were close to the target orientation but differed from each other, presumably due to a failure of summation. For the motion data, by contrast, subjects were more likely to respond when the vector sum of two preceding directions pointed in the target direction. Fitting a cortical signal-processing model to these data showed that the direction bandwidth of motion sensors is about three times the bandwidth of orientation sensors, and that it is this large bandwidth that allows the summation of motion stimuli. The differing bandwidths of orientation and motion sensors presumably equip them for differing tasks, such as orientation discrimination and estimation of heading, respectively.
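The bandwidth argument can be made concrete with Gaussian channels on a circular dimension. The sketch below is my illustration rather than the paper's model: it assumes hypothetical bandwidths in the reported roughly 3:1 ratio and shows why two flanking stimuli summate in a broad motion channel but not in a narrow orientation channel.

```python
import numpy as np

def channel_response(stimulus_deg, preferred_deg, bandwidth_deg, period):
    """Gaussian tuning on a circular dimension (period 180 for orientation,
    360 for motion direction), with wrap-around handled by the modulo."""
    d = (stimulus_deg - preferred_deg + period / 2) % period - period / 2
    return np.exp(-d**2 / (2 * bandwidth_deg**2))

bw_ori, bw_mot = 15.0, 45.0   # hypothetical bandwidths, ~3:1 ratio
flank = 30.0                  # two stimuli flanking a target channel at 0 deg

r_ori = channel_response(-flank, 0, bw_ori, 180) + channel_response(flank, 0, bw_ori, 180)
r_mot = channel_response(-flank, 0, bw_mot, 360) + channel_response(flank, 0, bw_mot, 360)
r_ori_single = channel_response(0, 0, bw_ori, 180)
r_mot_single = channel_response(0, 0, bw_mot, 360)

print(f"orientation channel: flanker pair {r_ori:.2f} vs target alone {r_ori_single:.2f}")
print(f"motion channel:      flanker pair {r_mot:.2f} vs target alone {r_mot_single:.2f}")
# the narrow orientation channel responds weakly to the flanker pair
# (summation fails), while the broad motion channel responds at least as
# strongly to the pair as to the target direction itself
```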

15.
Cueing attention after the disappearance of visual stimuli biases which items will be remembered best. This observation has historically been attributed to the influence of attention on memory, as opposed to subjective visual experience. We recently challenged this view by showing that cueing attention after the stimulus can improve the perception of a single Gabor patch at threshold levels of contrast. Here, we test whether this retro-perception actually increases the frequency of consciously perceiving the stimulus, or simply allows a more precise recall of its features. We used retro-cues in an orientation-matching task and performed mixture-model analysis to independently estimate the proportion of guesses and the precision of non-guess responses. We find that the improvements in performance conferred by retrospective attention are overwhelmingly driven by a reduction in the proportion of guesses, providing strong evidence that attracting attention to the target's location after its disappearance increases the likelihood of perceiving it consciously.
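The mixture-model analysis referred to here is commonly implemented as a uniform "guess" component plus a von Mises "memory" component fit by maximum likelihood. A self-contained sketch on simulated data (the guess rates and precision values are hypothetical):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

def neg_log_lik(params, errors):
    """Mixture of a uniform guess distribution and a von Mises centered on
    the target. params = (guess_rate, kappa)."""
    g, kappa = params
    pdf = (1 - g) * vonmises.pdf(errors, kappa) + g / (2 * np.pi)
    return -np.sum(np.log(pdf))

def fit_mixture(errors):
    res = minimize(neg_log_lik, x0=[0.3, 5.0], args=(errors,),
                   bounds=[(1e-3, 1 - 1e-3), (0.1, 100.0)])
    return res.x  # (guess_rate, kappa)

# simulate hypothetical response errors (radians) in the matching task
rng = np.random.default_rng(1)
def simulate(n, guess_rate, kappa):
    guesses = rng.random(n) < guess_rate
    return np.where(guesses, rng.uniform(-np.pi, np.pi, n), rng.vonmises(0, kappa, n))

for label, g in [("no cue", 0.45), ("retro-cue", 0.25)]:  # same precision, fewer guesses
    g_hat, k_hat = fit_mixture(simulate(2000, g, kappa=8.0))
    print(f"{label}: guess rate {g_hat:.2f}, precision kappa {k_hat:.1f}")
```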

16.
Noninformative vision improves haptic spatial perception
Previous studies have attempted to map somatosensory space via haptic matching tasks and have shown that individuals make large and systematic matching errors, whose magnitude and angular direction vary systematically through the workspace. On the basis of such demonstrations, it has been suggested that haptic space is non-Euclidean. This conclusion assumes that spatial perception is modality-specific, and it largely ignores the fact that tactile matching tasks involve active, exploratory arm movements. Here we demonstrate that, when individuals match two bar stimuli (i.e., make them parallel) in circumstances favoring extrinsic (visual) coordinates, providing noninformative visual information significantly increases the accuracy of haptic perception. In contrast, when individuals match the same bar stimuli in circumstances favoring the coding of movements in intrinsic (limb-based) coordinates, providing identical noninformative visual information either has no effect or decreases the accuracy of haptic perception. These results are consistent with optimal models of sensory integration, in which the weighting given to visual and somatosensory signals depends upon the precision of the visual and somatosensory information, and they provide important evidence for the task-dependent integration of visual and somatosensory signals during the construction of a representation of peripersonal space.

17.
Vection is an illusory perception of self-motion that can occur when visual motion fills the majority of the visual field. This study examines the effect of the duration of visual field movement (VFM) on the perceived strength of self-motion using an inertial nulling (IN) technique and a magnitude-estimation technique based on the certainty that motion occurred (certainty estimation, CE). These techniques were then used to investigate the association between migraine diagnosis and the strength of perceived vection. Visual star-field stimuli consistent with either looming or receding motion were presented for 1, 4, 8, or 16 s. Subjects reported the perceived direction of self-motion during the final 1 s of the stimulus. For the IN method, a nulling inertial motion was delivered during this final 1 s, and subjects reported the direction of perceived self-motion during that second. The magnitude of the inertial motion was varied adaptively to determine the point of subjective equality (PSE), at which forward and backward responses were equally likely. For the CE trials, the same range of VFM durations was used but without inertial motion, and subjects rated their certainty of motion on a scale of 0-100. The PSE determined with the IN technique depended on the direction and duration of visual motion, and the CE technique showed greater certainty of perceived vection with longer VFM durations. A strong correlation between the CE and IN techniques was present for the 8 s stimulus. There was appreciable between-subject variation in both techniques, and migraine was associated with significantly increased perception of self-motion by CE and IN at 8 and 16 s. Together, these results suggest that vection can be measured by both CE and IN techniques with good correlation. They also suggest that susceptibility to vection may be higher in subjects with a history of migraine.
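Varying the nulling motion "adaptively" to find the PSE is typically done with a staircase. A minimal sketch using a plain 1-up/1-down rule on a simulated observer; the bias and noise values are hypothetical, and the study's actual adaptive procedure may have differed.

```python
import numpy as np

rng = np.random.default_rng(2)

def subject_reports_forward(inertial_cm_s, vection_bias_cm_s=1.5, noise_sd=1.0):
    """Simulated observer: the percept is the inertial motion plus a bias
    induced by the star field (all values hypothetical)."""
    return inertial_cm_s + vection_bias_cm_s + rng.normal(0, noise_sd) > 0

def staircase_pse(n_trials=60, start=4.0, step=0.5):
    """1-up/1-down staircase on the nulling motion: step against the reported
    direction, converging on the 50% ('either direction') point."""
    level, reversals, last = start, [], None
    for _ in range(n_trials):
        resp = subject_reports_forward(level)
        level += -step if resp else step   # null the reported direction
        if last is not None and resp != last:
            reversals.append(level)
        last = resp
    return np.mean(reversals[-10:])        # PSE: mean of the last reversals

pse = staircase_pse()
print(f"estimated PSE (nulling motion): {pse:+.2f} cm/s")
# a non-zero PSE means inertial motion was needed to cancel the visually
# induced self-motion; its magnitude indexes the strength of vection
```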

18.
To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying visual reliability. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During an audiovisual recalibration phase, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the audiovisual recalibration phase. We compared participants' behavior to the predictions of three models of recalibration: (a) reliability-based: each modality is recalibrated in proportion to its relative reliability, so less reliable cues are recalibrated more; (b) fixed-ratio: the degree of recalibration for each modality is fixed; (c) causal-inference: recalibration is directly determined by the discrepancy between a cue and its estimate, which in turn depends on the reliability of both cues and on the inference about how likely the two cues are to derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, peaked at medium visual reliability, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or the fixed-ratio model. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration. We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli.
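The three models can be compared directly by the auditory shift each predicts as visual reliability varies. The sketch below uses a simplified, hypothetical parameterization (not the paper's fitted models), but it reproduces the qualitative signature: only causal inference can be non-monotonic in visual reliability.

```python
import numpy as np

def reliability_based(disc, s_aud, s_vis):
    """(a) Recalibration proportional to relative unreliability: the less
    reliable cue moves more. Returns the auditory shift toward vision."""
    return disc * s_aud**2 / (s_aud**2 + s_vis**2)

def fixed_ratio(disc, ratio_aud=0.9):
    """(b) A fixed fraction of the discrepancy, regardless of reliability."""
    return disc * ratio_aud

def causal_inference(disc, s_aud, s_vis, p_common=0.5, s_range=60.0):
    """(c) Shift set by the gap between the auditory cue and its percept,
    where the percept arises from model-averaging causal inference
    (simplified; all parameters hypothetical)."""
    var_sum = s_aud**2 + s_vis**2
    like_c1 = np.exp(-disc**2 / (2 * var_sum)) / np.sqrt(2 * np.pi * var_sum)
    like_c2 = 1.0 / s_range                      # cues from independent sources
    p_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    w_aud = s_vis**2 / (s_aud**2 + s_vis**2)     # auditory weight in the fused percept
    return p_c1 * (1 - w_aud) * disc             # distance the auditory percept moves

disc, s_aud = 20.0, 5.0
print("visual noise | (a) reliability | (b) fixed | (c) causal inference")
for s_vis in [1.0, 10.0, 25.0]:                  # high -> low visual reliability
    print(f"{s_vis:12.0f} | {reliability_based(disc, s_aud, s_vis):15.1f} "
          f"| {fixed_ratio(disc):9.1f} | {causal_inference(disc, s_aud, s_vis):20.1f}")
# (a) falls monotonically as vision gets noisier, (b) stays flat, and (c)
# peaks at intermediate reliability: low visual noise strengthens the
# common-cause inference but also leaves the fused percept needing less
# correction, so the two forces trade off
```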

19.
We continuously receive external information from multiple senses simultaneously. The brain must judge the source event of these sensory signals and integrate them. Judging the simultaneity of multisensory stimuli is thought to be an important cue for discriminating whether the stimuli derive from a single event. Although some previous studies have investigated the correspondence between auditory-visual (AV) simultaneity perception and neural responses, such studies remain few. Electrophysiological studies have reported that ongoing oscillations in the human cortex affect perception; in particular, the phase resetting of ongoing oscillations has been examined because it plays an important role in multisensory integration. The aim of this study was to investigate the relationship between phase resetting and AV simultaneity judgments. Subjects were successively presented with auditory and visual stimuli separated by stimulus onset asynchronies (SOAs) at which the detection rate of asynchrony was 50% (SOA50%), and they reported whether or not they perceived the stimuli as simultaneous. We investigated the effects of the phase of ongoing oscillations on these simultaneity judgments. We found that phase resetting in the beta frequency band occurred after the onset of the preceding stimulus, in the brain area related to the modality of the following stimulus, only when subjects perceived the AV stimuli as simultaneous. This result suggests that beta phase resetting occurs in areas related to the subsequent stimulus, supporting the perception of multisensory stimuli as simultaneous.
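Phase resetting is usually quantified as inter-trial phase coherence (ITPC): if the preceding stimulus resets the beta oscillation, post-stimulus phases align across trials. A sketch on simulated EEG (sampling rate, band edges, timing, and effect sizes are all hypothetical, not the study's parameters):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0  # Hz, hypothetical EEG sampling rate

def beta_phase(eeg, lo=15.0, hi=25.0):
    """Instantaneous beta-band phase via band-pass filtering + Hilbert transform."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, eeg)))

def itpc(phases):
    """Inter-trial phase coherence: 1 = identical phase on every trial
    (phase resetting), ~0 = phases uniformly scattered."""
    return np.abs(np.mean(np.exp(1j * phases)))

# simulate hypothetical trials: beta phase after the preceding stimulus is
# aligned on "simultaneous" trials and random on "asynchronous" trials
rng = np.random.default_rng(3)
t = np.arange(0, 1.0, 1 / fs)
def trial(reset):
    phase0 = 0.3 if reset else rng.uniform(0, 2 * np.pi)
    return np.sin(2 * np.pi * 20 * t + phase0) + rng.normal(0, 1.0, t.size)

sample = int(0.2 * fs)  # 200 ms after preceding-stimulus onset
for label, reset in [("perceived simultaneous", True), ("perceived asynchronous", False)]:
    phases = np.array([beta_phase(trial(reset))[sample] for _ in range(100)])
    print(f"{label}: beta ITPC = {itpc(phases):.2f}")
```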

20.
Time, space, and numbers are closely linked in the physical world. However, the relativistic-like effects of spatial and magnitude factors on time perception remain poorly investigated. Here we investigated whether duration judgments of digit visual stimuli are biased depending on the side of space where the stimuli are presented and on the magnitude of the stimulus itself. Different groups of healthy subjects performed duration-judgment tasks on various types of visual stimuli. In the first two experiments the visual stimuli consisted of digit pairs (1 and 9), presented at the centre of the screen or in the right or left hemispace. In a third experiment the visual stimuli consisted of black circles. The duration of the reference stimulus was fixed at 300 ms, and subjects indicated the relative duration of the test stimulus compared with the reference. The main results showed that, regardless of digit magnitude, the duration of stimuli presented in the left hemispace was underestimated and that of stimuli presented in the right hemispace was overestimated. In the midline position, on the other hand, duration judgments were affected by the numerical magnitude of the presented stimulus, with underestimation of the duration of low-magnitude stimuli and overestimation of that of high-magnitude stimuli. These results argue for close interactions between the representations of space, time, and magnitude in the human brain.
