Similar Articles
20 similar articles found.
1.
Simultaneous object motion and self-motion give rise to complex patterns of retinal image motion. In order to estimate object motion accurately, the brain must parse this complex retinal motion into self-motion and object motion components. Although this computational problem can be solved, in principle, through purely visual mechanisms, extra-retinal information that arises from the vestibular system during self-motion may also play an important role. Here we investigate whether combining vestibular and visual self-motion information improves the precision of object motion estimates. Subjects were asked to discriminate the direction of object motion in the presence of simultaneous self-motion, depicted either by visual cues alone (i.e. optic flow) or by combined visual/vestibular stimuli. We report a small but significant improvement in object motion discrimination thresholds with the addition of vestibular cues. This improvement was greatest for eccentric heading directions and negligible for forward movement, a finding that could reflect increased relative reliability of vestibular versus visual cues for eccentric heading directions. Overall, these results are consistent with the hypothesis that vestibular inputs can help parse retinal image motion into self-motion and object motion components.
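For context, the standard maximum-likelihood cue-combination account often used to interpret such threshold improvements (not necessarily the specific model fitted in this study) predicts that the combined estimate weights each cue by its reliability:

\[
\hat{s}_{\mathrm{comb}} = w_{\mathrm{vis}}\,\hat{s}_{\mathrm{vis}} + w_{\mathrm{vest}}\,\hat{s}_{\mathrm{vest}},
\qquad
w_i = \frac{1/\sigma_i^2}{1/\sigma_{\mathrm{vis}}^2 + 1/\sigma_{\mathrm{vest}}^2},
\qquad
\sigma_{\mathrm{comb}}^2 = \frac{\sigma_{\mathrm{vis}}^2\,\sigma_{\mathrm{vest}}^2}{\sigma_{\mathrm{vis}}^2 + \sigma_{\mathrm{vest}}^2}.
\]

On this account, the predicted benefit of adding vestibular input grows as its reliability approaches or exceeds that of the visual cue, consistent with the larger improvement reported for eccentric headings.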

2.
An important role of visual systems is to detect nearby predators, prey, and potential mates, which may be distinguished in part by their motion. When an animal is at rest, an object moving in any direction may easily be detected by motion-sensitive visual circuits. During locomotion, however, this strategy is compromised because the observer must detect a moving object within the pattern of optic flow created by its own motion through the stationary background. However, objects whose movement creates back-to-front (regressive) motion can be unambiguously distinguished from stationary objects, because forward locomotion creates only front-to-back (progressive) optic flow. Thus, moving animals should exhibit an enhanced sensitivity to regressively moving objects. We explicitly tested this hypothesis by constructing a simple fly-sized robot that was programmed to interact with a real fly. Our measurements indicate that whereas walking female flies freeze in response to a regressively moving object, they ignore a progressively moving one. Regressive motion salience also explains observations of behaviors exhibited by pairs of walking flies. Because the assumptions underlying the regressive motion salience hypothesis are general, we suspect that the behavior we have observed in Drosophila may be widespread among eyed, motile organisms.

3.
The optic flow generated when a person moves through the environment can be locally decomposed into several basic components, including radial, circular, translational and spiral motion. Since their analysis plays an important part in the visual perception and control of locomotion and posture, it is likely that some brain regions in the primate dorsal visual pathway are specialized to distinguish among them. The aim of this study is to explore the sensitivity to different types of egomotion-compatible visual stimulation in the human motion-sensitive regions of the brain. Event-related fMRI experiments, 3D motion and wide-field stimulation, functional localizers and brain mapping methods were used to study the sensitivity of six distinct motion areas (V6, MT, MST+, V3A, CSv and an intraparietal sulcus motion [IPSmot] region) to different types of optic flow stimuli. Results show that only areas V6, MST+ and IPSmot are specialized in distinguishing among the various types of flow patterns, with a high response to translational flow that was maximal in V6 and IPSmot and less marked in MST+. Given that during egomotion the translational optic flow conveys differential information about near and far external objects, areas V6 and IPSmot likely process visual egomotion signals to extract information about the relative distance of objects with respect to the observer. Since area V6 is also involved in distinguishing object motion from self-motion, it could provide information about the location in space of moving and static objects during self-motion, particularly in a dynamically unstable environment.

4.
The object of this study is to mathematically specify important characteristics of visual flow during translation of the eye for the perception of depth and self-motion. We address various strategies by which the central nervous system may estimate self-motion and depth from motion parallax, using equations for the visual velocity field generated by translation of the eye through space. Our results focus on information provided by the movement and deformation of three-dimensional objects and on local flow behavior around a fixated point. All of these issues are addressed mathematically in terms of definite equations for the optic flow. This formal characterization of the visual information presented to the observer is then considered in parallel with other sensory cues to self-motion, in order to see how these contribute to the effective use of visual motion parallax, and how parallactic flow can, conversely, contribute to the sense of self-motion. This article focuses on a case that is central to understanding motion parallax in spacious real-world environments: monocular visual cues observable during pure horizontal translation of the eye through a stationary environment. We suggest that the global optokinetic stimulus associated with visual motion parallax must converge in significant fashion with vestibular and proprioceptive pathways that carry signals related to self-motion. Suggestions of experiments to test some of the predictions of this study are made.
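As a point of reference for the equations discussed above, a standard formulation of the translational flow field (the notation is generic and not necessarily that used by the authors) is as follows: for an eye translating with velocity (T_x, T_y, T_z) through a stationary scene, a point at depth Z that projects to image position (x, y) (focal length normalized to 1) moves with image velocity

\[
u = \frac{x\,T_z - T_x}{Z}, \qquad v = \frac{y\,T_z - T_y}{Z},
\]

so that for the pure horizontal translation considered here (T_y = T_z = 0) the flow reduces to u = -T_x/Z and v = 0: image speed is inversely proportional to distance, which is exactly the motion-parallax cue to relative depth.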

5.
Reaching movements towards an object are continuously guided by visual information about the target and the arm. Such guidance increases precision and allows one to adjust the movement if the target unexpectedly moves. On-going arm movements are also influenced by motion in the surroundings. Fast responses to motion in the surroundings could help cope with moving obstacles and with the consequences of changes in one’s eye orientation and vantage point. To further evaluate how motion in the surroundings influences interceptive movements, we asked subjects to tap a moving target when it reached a second, static target. We varied the direction and location of motion in the surroundings, as well as details of the stimuli that are known to influence eye movements. Subjects were most sensitive to motion in the background when such motion was near the targets. Whether or not the eyes were moving, and the direction of the background motion in relation to the direction in which the eyes were moving, had very little influence on the response to the background motion. We conclude that the responses to background motion are driven by motion near the target rather than by a global analysis of the optic flow and its relation to other information about self-motion.

6.
Human heading perception based on optic flow is not only accurate but also remarkably robust and stable. These qualities are especially apparent when observers move through environments containing other moving objects, which introduce optic flow that is inconsistent with observer self-motion and therefore uninformative about heading direction. Moving objects may also occupy large portions of the visual field and occlude regions of the background optic flow that are most informative for heading perception. The fact that heading perception is biased by no more than a few degrees under such conditions attests to the robustness of the visual system and warrants further investigation. The aim of the present study was to investigate whether recurrent, competitive dynamics among MSTd neurons that serve to reduce uncertainty about heading over time offer a plausible mechanism for capturing the robustness of human heading perception. Simulations of existing heading models that do not contain competitive dynamics yield heading estimates that are far more erratic and unstable than human judgments. We present a dynamical model of primate visual areas V1, MT, and MSTd based on that of Layton, Mingolla, and Browning; it is similar to the other models except that it includes recurrent interactions among model MSTd neurons. Competitive dynamics stabilize the model’s heading estimate over time, even when a moving object crosses the future path. Soft winner-take-all dynamics enhance units that code a heading direction consistent with the time history and suppress responses to transient changes in the optic flow field. Our findings support recurrent competitive temporal dynamics as a crucial mechanism underlying the robustness and stability of heading perception.
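A minimal NumPy sketch of the kind of recurrent soft winner-take-all dynamics described above is given below; the unit count, gains, noise level and the "moving object" perturbation are illustrative assumptions, not the authors' published model or its parameters.

```python
import numpy as np

def soft_wta_heading(ff_input, n_steps=200, dt=0.1,
                     self_excitation=1.2, inhibition=0.8, tau=1.0):
    """Recurrent soft winner-take-all over heading-tuned units.

    ff_input: (n_steps, n_units) feedforward drive (e.g., MT-like input to
    heading-tuned units), possibly noisy and transiently perturbed.
    Returns unit activities over time; the decoded heading is the preferred
    direction of the most active unit.
    """
    n_units = ff_input.shape[1]
    r = np.zeros(n_units)
    history = np.empty((n_steps, n_units))
    for t in range(n_steps):
        # self-excitation plus pooled (divisive-style) inhibition: soft WTA
        recurrent = self_excitation * r - inhibition * r.sum()
        dr = (-r + np.maximum(ff_input[t] + recurrent, 0.0)) / tau
        r = np.maximum(r + dt * dr, 0.0)
        history[t] = r
    return history

# Toy demo: 36 heading-tuned units, true heading = unit 10, with a transient
# perturbation (a "moving object") injected between steps 80 and 120.
rng = np.random.default_rng(0)
ff = 0.2 * rng.random((200, 36))
ff[:, 10] += 1.0                       # sustained drive at the true heading
ff[80:120, 25] += 1.5                  # transient drive from object motion
acts = soft_wta_heading(ff)
print("decoded heading unit:", acts[-1].argmax())   # stays at 10
```

In this toy run the recurrent excitation sustains the unit that is driven consistently over time, while the pooled inhibition suppresses the transient drive injected mid-trial, so the decoded heading does not jump when the "object" crosses the flow field.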

7.
Object detection in the fly during simulated translatory flight
Translatory movement of an animal in its environment induces optic flow that contains information about the three-dimensional layout of the surroundings: as a rule, images of objects that are closer to the animal move faster across the retina than those of more distant objects. Such relative motion cues are used by flies to detect objects in front of a structured background. We confronted flying flies, tethered to a torque meter, with front-to-back motion of patterns displayed on two CRT screens, thereby simulating translatory motion of the background as experienced by an animal during straight flight. The torque meter measured the instantaneous turning responses of the fly around its vertical body axis. During short time intervals, object motion was superimposed on background pattern motion. The average turning response towards such an object depends on both object and background velocity in a characteristic way: (1) in order to elicit significant responses object motion has to be faster than background motion; (2) background motion within a certain range of velocities improves object detection. These properties can be interpreted as adaptations to situations as they occur in natural free flight. We confirmed that the measured responses were mediated mainly by a control system specialized for the detection of objects rather than by the compensatory optomotor system responsible for course stabilization.

8.
Temporal integration in the visual system causes fast-moving objects to generate static, oriented traces (‘motion streaks’), which could be used to help judge direction of motion. While human psychophysics and single-unit studies in non-human primates are consistent with this hypothesis, direct neural evidence from the human cortex is still lacking. First, we provide psychophysical evidence that faster and slower motions are processed by distinct neural mechanisms: faster motion raised human perceptual thresholds for static orientations parallel to the direction of motion, whereas slower motion raised thresholds for orthogonal orientations. We then used functional magnetic resonance imaging to measure brain activity while human observers viewed either fast (‘streaky’) or slow random dot stimuli moving in different directions, or corresponding static-oriented stimuli. We found that local spatial patterns of brain activity in early retinotopic visual cortex reliably distinguished between static orientations. Critically, a multivariate pattern classifier trained on brain activity evoked by these static stimuli could then successfully distinguish the direction of fast (‘streaky’) but not slow motion. Thus, signals encoding static-oriented streak information are present in human early visual cortex when viewing fast motion. These experiments show that motion streaks are present in the human visual system for faster motion.
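The cross-decoding logic described above (train on patterns evoked by static orientations, test on motion directions relabelled by the orientation of the streak they would leave) can be sketched as follows with synthetic "voxel" patterns and scikit-learn; the data-generation scheme, voxel and trial counts, and classifier settings are assumptions for illustration, not the study's analysis pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_voxels, n_trials = 200, 80

def simulate_patterns(signal, label_set):
    """Toy voxel patterns: a shared orientation signature plus independent noise."""
    labels = rng.choice(label_set, size=n_trials)
    X = np.array([signal[l] + rng.normal(0, 1.0, n_voxels) for l in labels])
    return X, labels

# Two orientations (0 = streaks parallel to horizontal motion, 1 = vertical)
# share a voxel-level signature across static and fast-motion conditions.
signature = {0: rng.normal(0, 1, n_voxels), 1: rng.normal(0, 1, n_voxels)}
X_static, y_static = simulate_patterns(signature, [0, 1])
X_fast,   y_fast   = simulate_patterns(signature, [0, 1])   # 'streaky' motion
X_slow,   y_slow   = simulate_patterns({0: np.zeros(n_voxels),
                                        1: np.zeros(n_voxels)}, [0, 1])

clf = LinearSVC(C=1.0, max_iter=10000).fit(X_static, y_static)
print("fast-motion decoding accuracy:", clf.score(X_fast, y_fast))  # well above chance
print("slow-motion decoding accuracy:", clf.score(X_slow, y_slow))  # near chance
```

The point of the sketch is only that a classifier fitted to static-orientation patterns transfers to motion patterns exactly when the motion carries an orientation (streak) signature.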

9.
Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPRs). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high-quality virtual reality (VR) environment where real and virtual foreground objects serve as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual or auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses to laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high-fidelity virtual environments should mimic those seen in real situations, we propose to use the observed effect as a robust objective test for presence and fidelity in VR.

10.
A moving visual field can induce the feeling of self-motion, or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear whether such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing fixed visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory-motion controls were altered versions of the experimental image from which the illusory motion effect had been removed. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8 s viewing interval, with the inertial stimulus occurring over the final 1 s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1 s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess shifts in self-motion perception, we measured the effect of each visual stimulus on the inertial stimulus velocity (cm/s) at which subjects were equally likely to report motion in either direction. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p<0.001), and for arrows (p = 0.02). For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception, driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p>0.1 for both). Thus, although a true moving visual field can induce self-motion, the results of this study show that illusory motion does not.

11.
The accessory optic system and pretectum are highly conserved brainstem visual pathways that process the visual consequences of self-motion (i.e. optic flow) and generate the optokinetic response. Neurons in these nuclei have very large receptive fields in the contralateral eye, and exhibit direction selectivity to large-field moving stimuli. Previous research on visual motion pathways in the geniculostriate system has employed "plaids" composed of two non-parallel sine-wave gratings to investigate the visual system's ability to detect the global direction of pattern motion as opposed to the direction of motion of the components within the plaids. In this study, using standard extracellular techniques, we recorded the responses of 47 neurons in the nucleus of the basal optic root of the accessory optic system and 49 cells in the pretectal nucleus lentiformis mesencephali of pigeons to large-field gratings and plaids. We found that most neurons were classified as pattern-selective (41-49%) whereas fewer were classified as component-selective (8-17%). There were no striking differences between nucleus of the basal optic root and lentiformis mesencephali neurons in this regard. These data indicate that most of the input to the optokinetic system is orientation-insensitive, but a small proportion is orientation-selective. The implications for the connectivity of the motion-processing system are discussed.
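For reference, the pattern/component distinction is conventionally framed by the intersection-of-constraints account of plaid motion (a standard framework, not a claim about the authors' specific analysis): each component grating with unit normal \(\mathbf{n}_i\) and normal speed \(s_i\) constrains the pattern velocity \(\mathbf{v}\) only along its normal,

\[
\mathbf{v}\cdot\mathbf{n}_i = s_i, \qquad i = 1, 2,
\]

so the two-dimensional pattern velocity is the unique intersection of the two constraint lines. Component-selective cells respond along the individual \(\mathbf{n}_i\) directions, whereas pattern-selective cells respond along \(\mathbf{v}\).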

12.
Freely flying honeybees are innately attracted to moving objects, as revealed by their spontaneous preference for a moving disc over an identical but stationary disc. We have exploited this spontaneous preference to explore the visual cues by which a bee, which is herself in motion, recognizes a moving object. We find that the moving disc is not detected on the basis that it produces a more rapidly moving image on the retina. The relevant cue might therefore be the motion of the disc relative to the visual surround. We have attempted to test this hypothesis by artificially rotating the structured environment, together with the moving disc, around the bee. Under these conditions, the image of the stationary disc rather than that of the actually moving disc is in motion relative to the surround. We find that rotation of the surround disrupts the bee's capacity not only to distinguish a moving object from a stationary one, but also to discriminate stationary objects at different ranges. Possible interpretations of these results are discussed.

13.
The relative roles of visual and vestibular cues in determining the perceived distance of passive, linear self-motion were assessed. Seventeen subjects were given cues to constant-acceleration motion: either optic flow, physical motion in the dark, or combinations of visual and physical motion. Subjects indicated when they perceived they had traversed a distance that had been previously indicated either visually or physically. The perceived distance of motion evoked by optic flow was accurate relative to a visual target but was perceptually equivalent to a shorter physical motion. The perceived distance of physical motion in the dark was accurate relative to a previously presented physical motion but was perceptually equivalent to a much longer visually presented distance. The perceived distance of self-motion when both visual and physical cues were present was perceptually equivalent to the physical motion experienced and not to the simultaneous visual motion, even when the target was presented visually. We describe this dominance of the physical cues in determining the perceived distance of self-motion as "vestibular capture".

14.
Radial expanding optic flow is a visual consequence of forward locomotion. Presented on screen, it generates illusory forward self-motion, pointing at a close vision-gait interrelation. As parkinsonian gait in particular is vulnerable to external stimuli, effects of optic flow on motor-related cerebral circuitry were explored with functional magnetic resonance imaging in healthy controls (HC) and patients with Parkinson’s disease (PD). Fifteen HC and 22 PD patients, of whom 7 experienced freezing of gait (FOG), watched wide-field flow, interruptions by narrowing or deceleration, and equivalent control conditions with static dots. Statistical parametric mapping revealed that wide-field flow interruption evoked activation of the (pre-)supplementary motor area (SMA) in HC, which was decreased in PD. During wide-field flow, dorsal occipito-parietal activations were reduced in PD relative to HC, with stronger functional connectivity between right visual motion area V5, pre-SMA and cerebellum (in PD without FOG). Non-specific ‘changes’ in stimulus patterns activated dorsolateral fronto-parietal regions and the fusiform gyrus. This attention-associated network was more strongly activated in HC than in PD. PD patients thus appeared compromised in recruiting medial frontal regions that facilitate internally generated virtual locomotion when visual motion support falls away. Reduced dorsal visual and parietal activations during wide-field optic flow in PD were explained by impaired feedforward visual and visuomotor processing within a magnocellular (visual motion) functional chain. Compensation of impaired feedforward processing by distant fronto-cerebellar circuitry in PD is consistent with motor responses to visual motion stimuli being either too strong or too weak. The ‘change’-related activations pointed to covert (stimulus-driven) attention.

15.
Beauchamp MS, Lee KE, Argall BD, Martin A. Neuron. 2004;41(5):809-823.
Two categories of objects in the environment, animals and man-made manipulable objects (tools), are easily recognized by either their auditory or visual features. Although these features differ across modalities, the brain integrates them into a coherent percept. In three separate fMRI experiments, posterior superior temporal sulcus and middle temporal gyrus (pSTS/MTG) fulfilled objective criteria for an integration site. pSTS/MTG showed signal increases in response to either auditory or visual stimuli and responded more to auditory or visual objects than to meaningless (but complex) control stimuli. pSTS/MTG showed an enhanced response when auditory and visual object features were presented together, relative to presentation in a single modality. Finally, pSTS/MTG responded more to object identification than to other components of the behavioral task. We suggest that pSTS/MTG is specialized for integrating different types of information both within modalities (e.g., visual form, visual motion) and across modalities (auditory and visual).

16.

Background

Optic flow is an important cue for object detection. Humans are able to perceive objects in a scene using only kinetic boundaries, and can perform the task even when other shape cues are not provided. These kinetic boundaries are characterized by the presence of motion discontinuities in a local neighbourhood. In addition, temporal occlusions appear along the boundaries as the object in front covers the background and the objects that are spatially behind it.

Methodology/Principal Findings

From a technical point of view, the detection of motion boundaries for segmentation based on optic flow is a difficult task, because flow estimates along such boundaries are generally unreliable. We propose a model derived from mechanisms found in visual areas V1, MT, and MSTl of human and primate cortex that achieves robust detection along motion boundaries. It includes two separate mechanisms for detecting motion discontinuities and occlusion regions, based on how neurons respond to spatial and temporal contrast, respectively. The mechanisms are embedded in a biologically inspired architecture that integrates information from different components of the visual processing model via feedback connections. In particular, mutual interactions between the detection of motion discontinuities and temporal occlusions allow a considerable improvement in kinetic boundary detection.

Conclusions/Significance

A new model is proposed that uses optic flow cues to detect motion discontinuities and object occlusion. We suggest that by combining these results for motion discontinuities and object occlusion, object segmentation within the model can be improved. This idea could also be applied in other models for object segmentation. In addition, we discuss how this model relates to neurophysiological findings. The model was successfully tested with both artificial and real sequences, including self- and object motion.
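As a rough illustration of the two cues the model exploits, the sketch below reduces them to plain NumPy operations on a dense flow field: spatial flow contrast for motion discontinuities and temporal flow contrast for occlusion candidates. It is only a caricature of the idea, not the published V1/MT/MSTl architecture with feedback; thresholds and the toy stimulus are assumptions.

```python
import numpy as np

def motion_discontinuities(flow, threshold=0.25):
    """Flag kinetic boundaries as points of high spatial contrast in the
    flow field. `flow` has shape (H, W, 2) holding (u, v) components."""
    du_dy, du_dx = np.gradient(flow[..., 0])
    dv_dy, dv_dx = np.gradient(flow[..., 1])
    contrast = np.sqrt(du_dx**2 + du_dy**2 + dv_dx**2 + dv_dy**2)
    return contrast > threshold

def temporal_occlusions(flow_t0, flow_t1, threshold=0.5):
    """Flag occlusion/disocclusion candidates as points where the flow
    changes abruptly between frames (high temporal contrast)."""
    return np.linalg.norm(flow_t1 - flow_t0, axis=-1) > threshold

# Toy example: an object region translating rightward over a static background.
flow = np.zeros((64, 64, 2))
flow[20:44, 20:44, 0] = 1.0              # object: u = 1, background: u = 0
edges = motion_discontinuities(flow)
print("boundary pixels detected:", int(edges.sum()))

flow_next = np.zeros((64, 64, 2))
flow_next[22:46, 20:44, 0] = 1.0         # object shifted down by two pixels
occl = temporal_occlusions(flow, flow_next)
print("occlusion candidates:", int(occl.sum()))
```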

17.
Gray R, Regan D. Current Biology. 2000;10(10):587-590.
Many authors have assumed that motor actions required for collision avoidance and for collision achievement (for example, in driving a car or hitting a ball) are guided by monitoring the time to collision (TTC), and that this is done on the basis of moment-to-moment values of the optical variable tau [1] [2] [3]. This assumption has also motivated the search for single neurons that fire when tau reaches a certain value [4] [5] [6] [7] [8]. Almost all of the laboratory studies and all the animal experiments were restricted to the case of a stationary observer and a moving object. On the face of it, this would seem reasonable. Even though humans and other animals routinely perform visually guided actions that require the TTC of an approaching object to be estimated while the observer is moving, tau provides an accurate estimate of TTC regardless of whether the approach is produced by self-motion, object motion or a combination of both. One might therefore expect that judgements of TTC would be independent of self-motion. We report here, however, that simulated self-motion using a peripheral flow field substantially altered estimates of TTC for an approaching object, even though the peripheral flow field did not affect the value of tau for the approaching object. This finding points to long-range interactions between collision-sensitive visual neurons and neural mechanisms for processing self-motion.
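For reference, the optical variable tau referred to above is conventionally defined (following Lee's formulation) from the optical angle \(\theta(t)\) subtended by the approaching object:

\[
\tau(t) = \frac{\theta(t)}{\dot{\theta}(t)} \approx \frac{D(t)}{v} = \mathrm{TTC},
\]

where the approximation holds for an object closing at constant speed \(v\) from distance \(D(t)\) while \(\theta\) remains small. Because \(\tau\) depends only on retinal quantities, its value is the same whether the closure is produced by self-motion, object motion or both, which is what makes the reported influence of the peripheral flow field notable.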

18.
Object detection on the basis of relative motion was investigated in the fly at the neuronal level. A representative of the figure detection cells (FD-cells), the FD1b-cell, was characterized with respect to its responses to optic flow that simulated the presence of an object during translatory flight. The figure detection cells reside in the fly's third visual neuropil and are believed to play a central role in mediating object-directed turning behaviour. The dynamical response properties as well as the mean response amplitudes of the FD1b-cell depend on the temporal frequency of object motion and on the presence or absence of background motion. The responses of the FD1b-cell to object motion during simulated translatory flight were compared to behavioural responses of the fly obtained with identical stimuli in a previous study. The behavioural responses could only partly be explained on the basis of the FD1b-cell's responses. Further processing between the third visual neuropil and the final motor output has to be assumed, which involves (1) facilitation of the object-induced responses during translatory background motion at moderate temporal frequencies, and (2) inhibition of the object-induced turning responses during translatory background motion at high temporal frequencies.

19.
Wang S, Fukuchi M, Koch C, Tsuchiya N. PLoS ONE. 2012;7(8):e41040.
While a single approaching object is known to attract spatial attention, it is unknown how attention is directed when the background looms towards the observer as s/he moves forward in a quasi-stationary environment. In Experiment 1, we used a cued speeded discrimination task to quantify where and how spatial attention is directed towards the target superimposed onto a cloud of moving dots. We found that when the motion was expansive, attention was attracted towards the singular point of the optic flow (the focus of expansion, FOE) in a sustained fashion. The effects were less pronounced when the motion was contractive. The more ecologically valid the motion features became (e.g., temporal expansion of each dot, spatial depth structure implied by distribution of the size of the dots), the stronger the attentional effects. Further, the attentional effects were sustained over 1000 ms. Experiment 2 quantified these attentional effects using a change detection paradigm by zooming into or out of photographs of natural scenes. Spatial attention was attracted in a sustained manner such that change detection was facilitated or delayed depending on the location of the FOE only when the motion was expansive. Our results suggest that focal attention is strongly attracted towards singular points that signal the direction of forward ego-motion.

20.
In order to follow optic neuritis patients and evaluate the effectiveness of their treatment, a handy, accurate and quantifiable tool is required to assess changes in myelination in the central nervous system (CNS). However, standard measurements, including routine visual tests and MRI scans, are not sensitive enough for this purpose. We present two visual tests addressing dynamic monocular and binocular functions which may be closely associated with the extent of myelination along visual pathways. These are the Object From Motion (OFM) extraction and Time-constrained Stereo protocols. In the OFM test, an array of dots composes an object: the dots within the object image move rightward while the dots outside the image move leftward, or vice versa. The dot pattern generates a camouflaged object that cannot be detected when the dots are stationary or moving as a whole. Importantly, object recognition is critically dependent on motion perception. In the Time-constrained Stereo protocol, spatially disparate images are presented for a limited length of time, challenging binocular 3-dimensional integration in time. Both tests are appropriate for clinical usage and provide a simple, yet powerful, way to identify and quantify processes of demyelination and remyelination along visual pathways. These protocols may be useful for diagnosing and following optic neuritis and multiple sclerosis patients. In the diagnostic process, these protocols may reveal visual deficits that cannot be identified via current standard visual measurements. Moreover, these protocols sensitively identify the basis of the currently unexplained continued visual complaints of patients following recovery of visual acuity. During longitudinal follow-up, the protocols can be used as sensitive markers of demyelinating and remyelinating processes over time. These protocols may therefore be used to evaluate the efficacy of current and evolving therapeutic strategies targeting myelination of the CNS.
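A hypothetical sketch of how an OFM-style dot stimulus of the kind described above might be generated is given below; the object region, dot count, speed and display size are invented for illustration and are not the published protocol's parameters.

```python
import numpy as np

# Dots are assigned opposite horizontal velocities depending on whether they
# fall inside or outside a camouflaged object region, so the object's shape
# is visible only while the dots move (any single static frame is shapeless).
rng = np.random.default_rng(2)
n_dots, speed, n_frames = 2000, 2.0, 60
xy = rng.uniform(0, 512, size=(n_dots, 2))           # dot positions (pixels)

def inside_object(xy):
    """Object mask: here, a circle at the display centre (illustrative)."""
    return np.hypot(xy[:, 0] - 256, xy[:, 1] - 256) < 80

frames = []
for _ in range(n_frames):
    dx = np.where(inside_object(xy), speed, -speed)   # opposite directions
    xy[:, 0] = (xy[:, 0] + dx) % 512                  # wrap at display edge
    frames.append(xy.copy())
# The circle emerges only from the relative motion across frames, so
# recognition depends on intact motion processing, as the abstract notes.
```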
