Similar Articles
 Found 20 similar articles (search time: 36 ms)
1.
To stabilize our position in space we use visual information as well as non-visual physical motion cues. However, visual cues can be ambiguous: visually perceived motion may be caused by self-movement, movement of the environment, or both. The nervous system must combine the ambiguous visual cues with noisy physical motion cues to resolve this ambiguity and control our body posture. Here we have developed a Bayesian model that formalizes how the nervous system could solve this problem. In this model, the nervous system combines the sensory cues to estimate the movement of the body. We analytically demonstrate that, as long as visual stimulation is fast in comparison to the uncertainty in our perception of body movement, the optimal strategy is to weight visually perceived movement velocities proportional to a power law. We find that this model accounts for the nonlinear influence of experimentally induced visual motion on human postural behavior both in our data and in previously published results.
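The combination scheme described in this abstract can be sketched in a few lines. This is a minimal illustration, not the paper's model: the compressive power-law exponent `p`, the gain `k`, and the fixed weighting with the physical cue are placeholder assumptions.

```python
import numpy as np

def visual_weighting(v_visual, p=0.6, k=1.0):
    """Compressive power-law response to visually perceived velocity.
    p < 1 downweights fast visual motion; p and k are illustrative."""
    return k * np.sign(v_visual) * np.abs(v_visual) ** p

def posture_estimate(v_visual, v_physical, w_phys=0.5):
    """Combine the power-law-weighted visual cue with a noisy physical
    motion cue by a simple weighted sum (an assumed combination rule)."""
    return (1.0 - w_phys) * visual_weighting(v_visual) + w_phys * v_physical
```

With `p < 1`, a fourfold increase in visual scene velocity produces less than a fourfold increase in its postural influence, matching the nonlinear dependence the abstract reports.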

2.
Simultaneous object motion and self-motion give rise to complex patterns of retinal image motion. In order to estimate object motion accurately, the brain must parse this complex retinal motion into self-motion and object motion components. Although this computational problem can be solved, in principle, through purely visual mechanisms, extra-retinal information that arises from the vestibular system during self-motion may also play an important role. Here we investigate whether combining vestibular and visual self-motion information improves the precision of object motion estimates. Subjects were asked to discriminate the direction of object motion in the presence of simultaneous self-motion, depicted either by visual cues alone (i.e. optic flow) or by combined visual/vestibular stimuli. We report a small but significant improvement in object motion discrimination thresholds with the addition of vestibular cues. This improvement was greatest for eccentric heading directions and negligible for forward movement, a finding that could reflect increased relative reliability of vestibular versus visual cues for eccentric heading directions. Overall, these results are consistent with the hypothesis that vestibular inputs can help parse retinal image motion into self-motion and object motion components.

3.
Although many sources of three-dimensional information have been isolated and demonstrated to contribute independently to depth vision in animal studies, it is not clear whether these distinct cues are perceived to be perceptually equivalent. Such an ability is observed in humans and would seem to be advantageous for animals as well in coping with the often co-varying (or ambiguous) information about the layout of physical space. We introduce the expression primary-depth-cue equivalence to refer to the ability to perceive mutually consistent information about differences in depth from either stereopsis or motion parallax. We found that owls trained to detect relative depth as a perceptual category (objects versus holes) when specified by binocular disparity alone (stereopsis) immediately transferred this discrimination to novel stimuli where the equivalent depth categories were available only through differences in motion information produced by head movements (observer-produced motion parallax). Motion-parallax discrimination occurred under monocular viewing conditions, and reliable performance depended heavily on the amplitude of side-to-side head movements. The presence of primary-depth-cue equivalence in the visual system of the owl provides further confirmation of the hypothesis that neural systems that evolved to detect differences in either disparity or motion information are likely to share similar processing mechanisms.

4.
To detect and avoid collisions, animals need to perceive and control the distance and the speed with which they are moving relative to obstacles. This is especially challenging for swimming and flying animals, which must control movement in a dynamic fluid without reference from physical contact with the ground. Flying animals primarily rely on optic flow to control flight speed and distance to obstacles. Here, we investigate whether swimming animals use similar strategies for self-motion control to flying animals by directly comparing the trajectories of zebrafish (Danio rerio) and bumblebees (Bombus terrestris) moving through the same experimental tunnel. As the animals moved through the tunnel, black-and-white patterns produced (i) strong horizontal optic flow cues on both walls, (ii) weak horizontal optic flow cues on both walls, or (iii) strong optic flow cues on one wall and weak optic flow cues on the other. We find that the mean speed of zebrafish does not depend on the amount of optic flow perceived from the walls. We further show that zebrafish, unlike bumblebees, move closer to the wall that provides the strongest visual feedback. This unexpected preference for strong optic flow cues may reflect an adaptation for self-motion control in water or in environments where visibility is limited.

5.
When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while heading estimation can improve with ongoing motion, owing to the constant flow of information, estimating how far we move requires the integration of sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even if their reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They were subjected to passive linear motion involving visual and vestibular cues, with different levels of visual coherence to change relative cue reliability and with cue discrepancies to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to their reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement.
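The reliability-weighted integration rule this study tests is the standard maximum-likelihood scheme for two Gaussian cues. A generic textbook sketch, not the authors' analysis code:

```python
def integrate_cues(x_vis, sigma_vis, x_vest, sigma_vest):
    """Optimal fusion of two Gaussian estimates: weight each cue by its
    reliability (inverse variance); the fused variance is always lower
    than either single-cue variance."""
    r_vis = 1.0 / sigma_vis ** 2
    r_vest = 1.0 / sigma_vest ** 2
    w_vis = r_vis / (r_vis + r_vest)
    x_hat = w_vis * x_vis + (1.0 - w_vis) * x_vest
    sigma_hat = (1.0 / (r_vis + r_vest)) ** 0.5
    return x_hat, sigma_hat
```

Degrading visual coherence raises `sigma_vis`, which automatically shifts weight toward the vestibular cue, which is exactly the trial-to-trial reweighting the participants exhibited.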

6.
Visually targeted reaching to a specific object is a demanding neuronal task requiring the translation of the location of the object from a two-dimensional set of retinotopic coordinates to a motor pattern that guides a limb to that point in three-dimensional space. This sensorimotor transformation has been intensively studied in mammals, but was not previously thought to occur in animals with smaller nervous systems such as insects. We studied horse-head grasshoppers (Orthoptera: Proscopididae) crossing gaps and found that visual inputs are sufficient for them to target their forelimbs to a foothold on the opposite side of the gap. High-speed video analysis showed that these reaches were targeted accurately and directly to footholds at different locations within the visual field through changes in forelimb trajectory and body position, and did not involve stereotyped searching movements. The proscopids estimated distant locations using peering to generate motion parallax, a monocular distance cue, but appeared to use binocular visual cues to estimate the distance of nearby footholds. Following occlusion of regions of binocular overlap, the proscopids resorted to peering to target reaches even to nearby locations. Monocular cues were sufficient for accurate targeting of the ipsilateral but not the contralateral forelimb. Thus, proscopids are capable not only of the sensorimotor transformations necessary for visually targeted reaching with their forelimbs but also of flexibly using different visual cues to target reaches.

7.
Heading direction is determined from visual and vestibular cues. Both sensory modalities have been shown to have better direction discrimination for headings near straight ahead. Previous studies of visual heading estimation have not used the full range of stimuli, and vestibular heading estimation has not previously been reported. The current experiments measure human heading estimation in the horizontal plane with vestibular, visual, and spoken stimuli. The vestibular and visual tasks involved 16 cm of platform or visual motion. The spoken stimulus was a voice command speaking a heading angle. All conditions demonstrated direction-dependent biases in perceived headings, such that biases increased for headings further from the fore-aft axis. The bias was larger with the visual stimulus than with the vestibular stimulus in all 10 subjects. For the visual and vestibular tasks, precision was best for headings near fore-aft. The spoken headings had the least bias, and the variation in precision was less dependent on direction. In a separate experiment in which headings were limited to ±45°, the biases were much smaller, demonstrating that the range of headings influences perception. There was a strong and highly significant correlation between the bias curves for visual and spoken stimuli in every subject. The correlations between visual-vestibular and vestibular-spoken biases were weaker but remained significant. The observed biases in both visual and vestibular heading perception qualitatively resembled predictions of a recent population vector decoder model (Gu et al., 2010) based on the known distribution of neuronal sensitivities.

8.
In contradistinction to conventional wisdom, we propose that retinal image slip of a visual scene (optokinetic pattern, OP) does not constitute the only crucial input for visually induced percepts of self-motion (vection). Instead, we investigate the hypothesis that there are three input factors: 1) OP retinal image slip, 2) motion of the ocular orbital shadows across the retinae, and 3) smooth pursuit eye movements (efference copy). To test this hypothesis, we visually induced percepts of sinusoidal rotatory self-motion (circular vection, CV) in the absence of vestibular stimulation. Subjects were presented with three concurrent stimuli: a large visual OP, a fixation point to be pursued with the eyes (both projected in superposition on a semi-circular screen), and a dark window frame placed close to the eyes to create artificial visual field boundaries that simulate ocular orbital rim boundary shadows, but which could be moved across the retinae independently of eye movements. These stimuli were independently moved or kept stationary in different combinations. When moved together (horizontally and sinusoidally around the subject's head), they did so in precise temporal synchrony at 0.05 Hz. The results show that the occurrence of CV requires retinal slip of the OP and/or relative motion between the orbital boundary shadows and the OP. On the other hand, CV does not develop when the two retinal slip signals equal each other (no relative motion) and concur with pursuit eye movements (as is the case, e.g., when we follow with our eyes the motion of a target across a stationary visual scene). The findings were formalized in terms of a simulation model. In the model, two signals coding relative motion between OP and head are fused and fed into the mechanism for CV: a visuo-oculomotor signal, derived from OP retinal slip and the eye-movement efference copy, and a purely visual signal of relative motion between the orbital rims (head) and the OP.
The latter signal is also used, together with a version of the oculomotor efference copy, by a mechanism that suppresses CV at a later stage of processing in conditions in which the retinal slip signals are self-generated by smooth pursuit eye movements.

9.
Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point (1). Humans can perform path integration based exclusively on visual (2-3), auditory (4), or inertial cues (5). However, with multiple cues present, inertial cues - particularly kinaesthetic - seem to dominate (6-7). In the absence of vision, humans tend to overestimate short distances (<5 m) and turning angles (<30°), but underestimate longer ones (5). Movement through physical space therefore does not seem to be accurately represented by the brain. Extensive work has been done on evaluating path integration in the horizontal plane, but little is known about vertical movement (see (3) for virtual movement from vision alone). One reason for this is that traditional motion simulators have a small range of motion restricted mainly to the horizontal plane. Here we take advantage of a motion simulator (8-9) with a large range of motion to assess whether path integration is similar between the horizontal and vertical planes. The relative contributions of inertial and visual cues for path navigation were also assessed. Sixteen observers sat upright in a seat mounted to the flange of a modified KUKA anthropomorphic robot arm. Sensory information was manipulated by providing visual (optic flow, limited-lifetime star field), vestibular-kinaesthetic (passive self-motion with eyes closed), or combined visual and vestibular-kinaesthetic motion cues. Movement trajectories in the horizontal, sagittal and frontal planes consisted of two segment lengths (1st: 0.4 m, 2nd: 1 m; ±0.24 m/s² peak acceleration). The angle between the two segments was either 45° or 90°. Observers pointed back to their origin by moving an arrow that was superimposed on an avatar presented on the screen. Observers were more likely to underestimate angle size for movement in the horizontal plane compared to the vertical planes.
In the frontal plane observers were more likely to overestimate angle size, while there was no such bias in the sagittal plane. Finally, observers responded more slowly when answering based on vestibular-kinaesthetic information alone. Human path integration based on vestibular-kinaesthetic information alone thus takes longer than when visual information is present. That pointing is consistent with underestimating the angle one has moved through in the horizontal plane and overestimating it in the vertical planes suggests that the neural representation of self-motion through space is non-symmetrical, which may relate to the fact that humans experience movement mostly within the horizontal plane.
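The pointing task above reduces to planar dead reckoning over two straight segments. The following sketch computes the ideal, bias-free answer for comparison with the over- and underestimates reported; the function and its angle convention are illustrative assumptions, not the study's analysis code.

```python
import math

def homing_angle(seg1, seg2, turn_deg):
    """Ideal path integration over two straight segments: move seg1 along
    the initial heading, turn by turn_deg, move seg2, then return the
    angle (degrees, relative to the final heading) pointing back to the
    origin, normalized to [-180, 180)."""
    turn = math.radians(turn_deg)
    x = seg1 + seg2 * math.cos(turn)   # final position; initial heading = +x
    y = seg2 * math.sin(turn)
    back = math.atan2(-y, -x) - turn   # direction to origin, minus heading
    return (math.degrees(back) + 180.0) % 360.0 - 180.0
```

For the study's 0.4 m / 1 m segments with a 90° turn, the ideal homing response is a turn of about 158°; systematic deviations from such values are what the pointing biases measure.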

10.
Flying insects maneuver adeptly in unpredictable environments. Their tiny brains contain neurons that are sensitive to visual motion, also called optic flow. Consequently, flying insects rely mainly on visual motion during flight maneuvers such as takeoff and landing, terrain following, tunnel crossing, lateral and frontal obstacle avoidance, and adjusting flight speed in a cluttered environment. Optic flow can be defined as the vector field of the apparent motion of objects, surfaces, and edges in a visual scene generated by the relative motion between an observer (an eye or a camera) and the scene. Translational optic flow is particularly interesting for short-range navigation because it depends on the ratio between (i) the relative linear speed of the visual scene with respect to the observer and (ii) the distance of the observer from obstacles in the surrounding environment, without any direct measurement of either speed or distance. In flying insects, the roll stabilization reflex and yaw saccades attenuate rotation at the eye level in roll and yaw, respectively (i.e. they cancel rotational optic flow), ensuring purely translational optic flow between two successive saccades. Our survey focuses on the feedback loops using translational optic flow that insects employ for collision-free navigation. Over the next decade, optic flow is likely to be one of the most important visual cues for explaining flying insects' behaviors during short-range navigation maneuvers in complex tunnels. Conversely, the biorobotic approach can help to develop innovative flight control systems for flying robots, with the aim of mimicking flying insects' abilities and better understanding their flight.
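The speed/distance ratio that makes translational optic flow useful can be stated directly. A toy sketch (the function names are mine, not from the surveyed work):

```python
def translational_flow(speed_mps, distance_m):
    """Angular speed (rad/s) of a surface element at right angles to the
    direction of travel: the ratio v/d, obtained without measuring
    either v or d individually."""
    return speed_mps / distance_m

def speed_for_constant_flow(flow_setpoint, distance_m):
    """An optic-flow regulator holding flow at a setpoint must slow down
    as obstacles get closer, yielding automatic speed control in
    narrowing tunnels."""
    return flow_setpoint * distance_m
```

Holding `translational_flow` constant is the core of the feedback loops the survey describes: no explicit rangefinder or speedometer is needed.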

11.

Background

The focus in the research on biological motion perception traditionally has been restricted to the visual modality. Recent neurophysiological and behavioural evidence, however, supports the idea that actions are not represented merely visually but rather audiovisually. The goal of the present study was to test whether the perceived in-depth orientation of depth-ambiguous point-light walkers (plws) is affected by the presentation of looming or receding sounds synchronized with the footsteps.

Methodology/Principal Findings

In Experiment 1, orthographic frontal/back projections of plws were presented either without sound or with sounds whose intensity level was rising (looming), falling (receding) or stationary. Despite instructions to ignore the sounds and to report only the visually perceived in-depth orientation, plws accompanied by looming sounds were more often judged to be facing the viewer, whereas plws paired with receding sounds were more often judged to be facing away from the viewer. To test whether the effects observed in Experiment 1 act at a perceptual rather than a decisional level, in Experiment 2 observers perceptually compared orthographic plws presented without sound or paired with either looming or receding sounds to plws without sound but with perspective cues making them objectively either face towards or away from the viewer. Judging whether an orthographic plw or a plw with looming (receding) perspective cues is visually most looming becomes harder (easier) when the orthographic plw is paired with looming sounds.

Conclusions/Significance

The present results suggest that looming and receding sounds alter judgements of the in-depth orientation of depth-ambiguous point-light walkers. While looming sounds are demonstrated to act at a perceptual level and make plws look more looming, it remains a challenge for future research to clarify at what level in the processing hierarchy receding sounds affect how observers judge the in-depth orientation of plws.

12.
It is still an enigma how human subjects combine visual and vestibular inputs for their self-motion perception. Visual cues have the benefit of high spatial resolution but entail the danger of self-motion illusions. We performed psychophysical experiments (verbal estimates as well as pointer indications of perceived self-motion in space) in normal subjects (Ns) and patients with loss of vestibular function (Ps). Subjects were presented with horizontal sinusoidal rotations of an optokinetic pattern (OKP) alone (visual stimulus; 0.025-3.2 Hz; displacement amplitude, 8 degrees) or in combination with rotations of a Bárány chair (vestibular stimulus; 0.025-0.4 Hz; +/- 8 degrees). We found that specific instructions to the subjects created different perceptual states in which their self-motion perception essentially reflected three processing steps during pure visual stimulation: i) When Ns were primed by a procedure based on induced motion and then estimated perceived self-rotation upon pure optokinetic stimulation (circular vection, CV), the CV had a gain close to unity up to frequencies of almost 0.8 Hz, followed by a sharp decrease at higher frequencies (i.e., characteristics resembling those of the optokinetic reflex, OKR, and of smooth pursuit, SP). ii) When Ns were instructed to "stare through" the optokinetic pattern, CV was absent at high frequency, but increasingly developed as frequency was decreased below 0.1 Hz. iii) When Ns "looked at" the optokinetic pattern (accurately tracked it with their eyes), CV was usually absent, even at low frequency. CV in Ps showed dynamics similar to those of Ns in condition i), independently of the instruction. During vestibular stimulation, self-motion perception in Ns fell from a maximum at 0.4 Hz to zero at 0.025 Hz. When vestibular stimulation was combined with visual stimulation while Ns "stared through" the OKP, perception at low frequencies became modulated in magnitude.
When Ns "looked at" the OKP, this modulation was reduced, apart from the synergistic stimulus combination (OKP stationary), where the magnitude was similar to that during "staring". The obtained gain and phase curves of perception were incompatible with linear-systems predictions. We therefore describe the present findings with a non-linear dynamic model in which the visual input is processed in three steps: i) it shows dynamics similar to those of the OKR and SP; ii) it is shaped to complement the vestibular dynamics and is fused with the vestibular signal by linear summation; and iii) it can be suppressed by a visual-vestibular conflict mechanism when the visual scene is moving in space. Finally, an important element of the model is a velocity threshold of about 1.2 degrees/s, which is instrumental in maintaining perceptual stability and in explaining the observed dynamics of perception. We conclude from the experimental and theoretical evidence that self-motion perception is normally related to the visual scene as a reference, while the vestibular input is used to check the kinematic state of the scene; if the scene appears to move, the visual signal is suppressed and perception is based on the vestibular cue.

13.
The object of this study is to mathematically specify important characteristics of visual flow during translation of the eye for the perception of depth and self-motion. We address various strategies by which the central nervous system may estimate self-motion and depth from motion parallax, using equations for the visual velocity field generated by translation of the eye through space. Our results focus on information provided by the movement and deformation of three-dimensional objects and on local flow behavior around a fixated point. All of these issues are addressed mathematically in terms of definite equations for the optic flow. This formal characterization of the visual information presented to the observer is then considered in parallel with other sensory cues to self-motion in order to see how these contribute to the effective use of visual motion parallax, and how parallactic flow can, conversely, contribute to the sense of self-motion. This article will focus on a central case, for understanding of motion parallax in spacious real-world environments, of monocular visual cues observable during pure horizontal translation of the eye through a stationary environment. We suggest that the global optokinetic stimulus associated with visual motion parallax must converge in significant fashion with vestibular and proprioceptive pathways that carry signals related to self-motion. Suggestions of experiments to test some of the predictions of this study are made.
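The core geometric fact underlying such velocity-field equations can be stated compactly. For an eye translating at speed $T$ through a stationary scene, a point at distance $d$ and at angle $\theta$ from the direction of translation sweeps across the retina at angular speed (standard motion-parallax geometry; the notation here is generic, not the article's):

```latex
\left|\dot{\theta}\right| = \frac{T \sin\theta}{d}
```

Flow vanishes along the heading direction ($\theta = 0$), is maximal at right angles to it, and scales inversely with distance, which is why near objects sweep past faster than far ones and why the parallactic field carries depth information.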

14.
Responses of multisensory neurons to combinations of sensory cues are generally enhanced or depressed relative to single cues presented alone, but the rules that govern these interactions have remained unclear. We examined integration of visual and vestibular self-motion cues in macaque area MSTd in response to unimodal as well as congruent and conflicting bimodal stimuli in order to evaluate hypothetical combination rules employed by multisensory neurons. Bimodal responses were well fit by weighted linear sums of unimodal responses, with weights typically less than one (subadditive). Surprisingly, our results indicate that weights change with the relative reliabilities of the two cues: visual weights decrease and vestibular weights increase when visual stimuli are degraded. Moreover, both modulation depth and neuronal discrimination thresholds improve for matched bimodal compared to unimodal stimuli, which might allow for increased neural sensitivity during multisensory stimulation. These findings establish important new constraints for neural models of cue integration.
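The weighted linear-sum rule reported for MSTd neurons is easy to state as code. The weight values and the reweighting function below are illustrative placeholders, not fitted parameters from the study:

```python
def bimodal_response(r_visual, r_vestibular, w_vis=0.7, w_vest=0.6):
    """Bimodal firing rate modeled as a weighted sum of the two unimodal
    responses; weights below 1 make the combination subadditive."""
    return w_vis * r_visual + w_vest * r_vestibular

def reweighted(r_visual, r_vestibular, coherence):
    """Reliability-dependent weights: as visual motion coherence drops,
    the visual weight falls and the vestibular weight rises (these
    functional forms are assumptions, not fits)."""
    w_vis = 0.7 * coherence
    w_vest = 0.4 + 0.4 * (1.0 - coherence)
    return bimodal_response(r_visual, r_vestibular, w_vis, w_vest)
```

The subadditive weighted sum reproduces the key qualitative finding: the bimodal response lies below the sum of the unimodal responses, and degrading the visual cue shifts weight toward the vestibular input.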

15.
Chien SE, Ono F, Watanabe K. PLoS ONE 2011, 6(12): e28371
Shifts of visual attention cause systematic distortions of the perceived locations of visual objects around the focus of attention. In the attentional repulsion effect, the perceived location of a visual target is shifted away from an attention-attracting cue when the cue is presented before the target. Recently it has been found that, if the visual cue is presented after the target, the perceived location of the target shifts toward the location of the following cue. One unanswered question is whether a single mechanism underlies both the attentional repulsion and attraction effects. We presented participants with two disks at diagonal locations as visual cues and two vertical lines as targets. Participants were asked to perform a forced-choice task to judge the targets' positions. The present study examined whether the magnitudes of the repulsion and attraction effects would differ (Experiment 1), whether the two effects would interact (Experiment 2), and whether the location or the dynamic shift of attentional focus determines the distortion effects (Experiment 3). The results showed that the attraction effect was slightly larger than the repulsion effect and that the preceding and following cues had independent influences on perceived positions. The repulsion effect was caused by the location of attention, and the attraction effect was due to the dynamic shift of attentional focus, suggesting that the mechanisms underlying the retrospective attraction effect might differ from those underlying the repulsion effect.

16.
Traditionally, research on visual attention has been focused on the processes involved in conscious, explicit selection of task-relevant sensory input. Recently, however, it has been shown that attending to a specific feature of an object automatically increases neural sensitivity to this feature throughout the visual field. Here we show that directing attention to a specific color of an object results in attentional modulation of the processing of task-irrelevant and not consciously perceived motion signals that are spatiotemporally associated with this color throughout the visual field. Such implicit cross-feature spreading of attention takes place according to the veridical physical associations between the color and motion signals, even under special circumstances when they are perceptually misbound. These results imply that the units of implicit attentional selection are spatiotemporally colocalized feature clusters that are automatically bound throughout the visual field.

17.
To determine how the vestibular sense controls balance, we used instantaneous head angular velocity to drive a galvanic vestibular stimulus so that afference would signal that head movement was faster or slower than actual. In effect, this changed vestibular afferent gain. This increased sway 4-fold when subjects (N = 8) stood without vision. However, after a 240 s conditioning period with stable balance achieved through reliable visual or somatosensory cues, sway returned to normal. An equivalent galvanic stimulus unrelated to sway (not driven by head motion) was equally destabilising, but in this situation the conditioning period of stable balance did not reduce sway. Reflex muscle responses evoked by an independent, higher-bandwidth vestibular stimulus were initially reduced in amplitude by the galvanic stimulus but returned to normal levels after the conditioning period, contrary to predictions that they would decrease after adaptation to increased sensory gain and increase after adaptation to decreased sensory gain. We conclude that an erroneous vestibular signal of head motion during standing has profound effects on balance control. If it is unrelated to current head motion, the CNS has no immediate mechanism for ignoring the vestibular signal and reducing its destabilising influence on balance. This result is inconsistent with sensory reweighting based on disturbances. The increase in sway with increased sensory gain is also inconsistent with a simple feedback model of vestibular reflex action. Thus, we propose that recalibration of a forward sensory model best explains the reinterpretation of an altered reafferent signal of head motion during stable balance.

18.
As we move through the world, information can be combined from multiple sources in order to allow us to perceive our self-motion. The vestibular system detects and encodes the motion of the head in space. In addition, extra-vestibular cues such as retinal-image motion (optic flow), proprioception, and motor efference signals, provide valuable motion cues. Here I focus on the coding strategies that are used by the brain to create neural representations of self-motion. I review recent studies comparing the thresholds of single versus populations of vestibular afferent and central neurons. I then consider recent advances in understanding the brain's strategy for combining information from the vestibular sensors with extra-vestibular cues to estimate self-motion. These studies emphasize the need to consider not only the rules by which multiple inputs are combined, but also how differences in the behavioral context govern the nature of what defines the optimal computation.

19.
To avoid collisions when navigating through cluttered environments, flying insects must control their flight so that their sensory systems have time to detect obstacles and avoid them. To do this, day-active insects rely primarily on the pattern of apparent motion generated on the retina during flight (optic flow). However, many flying insects are active at night, when obtaining reliable visual information for flight control presents much more of a challenge. To assess whether nocturnal flying insects also rely on optic flow cues to control flight in dim light, we recorded flights of the nocturnal neotropical sweat bee, Megalopta genalis, flying along an experimental tunnel when: (i) the visual texture on each wall generated strong horizontal (front-to-back) optic flow cues, (ii) the texture on only one wall generated these cues, and (iii) horizontal optic flow cues were removed from both walls. We find that Megalopta increase their groundspeed when horizontal motion cues in the tunnel are reduced (conditions (ii) and (iii)). However, differences in the amount of horizontal optic flow on each wall of the tunnel (condition (ii)) do not affect the centred position of the bee within the flight tunnel. To better understand the behavioural response of Megalopta, we repeated the experiments on day-active bumble-bees (Bombus terrestris). Overall, our findings demonstrate that despite the limitations imposed by dim light, Megalopta, like their day-active relatives, rely heavily on vision to control flight, but that they use visual cues in a different manner from diurnal insects.

20.

Background

When stimuli are presented over headphones, they are typically perceived as internalized; i.e., they appear to emanate from inside the head. Sounds presented in the free-field tend to be externalized, i.e., perceived to be emanating from a source in the world. This phenomenon is frequently attributed to reverberation and to the spectral characteristics of the sounds: those sounds whose spectrum and reverberation matches that of free-field signals arriving at the ear canal tend to be more frequently externalized. Another factor, however, is that the virtual location of signals presented over headphones moves in perfect concert with any movements of the head, whereas the location of free-field signals moves in opposition to head movements. The effects of head movement have not been systematically disentangled from reverberation and/or spectral cues, so we measured the degree to which movements contribute to externalization.

Methodology/Principal Findings

We performed two experiments: 1) Using motion tracking and free-field loudspeaker presentation, we presented signals that moved in their spatial location to match listeners' head movements. 2) Using motion tracking and binaural room impulse responses, we presented filtered signals over headphones that appeared to remain static relative to the world. The results from Experiment 1 showed that free-field signals from the front that move with the head are less likely to be externalized (23%) than those that remain fixed (63%). Experiment 2 showed that virtual signals whose position was fixed relative to the world are more likely to be externalized (65%) than those fixed relative to the head (20%), regardless of the fidelity of the individual impulse responses.

Conclusions/Significance

Head movements play a significant role in the externalization of sound sources. These findings imply tight integration between binaural cues and self-motion cues and underscore the importance of self-motion for spatial auditory perception.
