Similar Literature
20 similar records found (search time: 31 ms)
1.
In experiments described in the literature, objects presented to restrained goldfish failed to induce eye movements such as fixation or tracking. We show here that eye movements can be induced only if the background (visual surround) is not stationary relative to the fish but moving. We investigated the influence of background motion on eye movements at angular velocities of 5–20°/s. The response to presentation of an object is a transient shift in mean horizontal eye position which lasts for about 10 s. If an object is presented in front of the fish, the eyes move in a direction such that it is seen more or less symmetrically by both eyes. If it is presented at ±70° from the fish's long axis, the eye on the side of the object moves in the direction that brings the object more centrally onto its retina. During these object-induced eye responses the typical optokinetic nystagmus, with an amplitude of about 5° and alternating fast and slow phases, is maintained, and the eye velocity during the slow phase is not modified by presentation of the object. Presenting an object in front of stationary or moving backgrounds leads to transient suppression of respiration, which habituates to repeated object presentations. Accepted: 14 April 2000

2.
An important role of visual systems is to detect nearby predators, prey, and potential mates, which may be distinguished in part by their motion. When an animal is at rest, an object moving in any direction may easily be detected by motion-sensitive visual circuits. During locomotion, however, this strategy is compromised because the observer must detect a moving object within the pattern of optic flow created by its own motion through the stationary background. However, objects whose movement creates back-to-front (regressive) motion may be unambiguously distinguished from stationary objects, because forward locomotion creates only front-to-back (progressive) optic flow. Thus, moving animals should exhibit an enhanced sensitivity to regressively moving objects. We explicitly tested this hypothesis by constructing a simple fly-sized robot that was programmed to interact with a real fly. Our measurements indicate that whereas walking female flies freeze in response to a regressively moving object, they ignore a progressively moving one. Regressive motion salience also explains observations of behaviors exhibited by pairs of walking flies. Because the assumptions underlying the regressive motion salience hypothesis are general, we suspect that the behavior we have observed in Drosophila may be widespread among eyed, motile organisms.
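The geometric rule behind the regressive motion salience hypothesis can be stated in a few lines of code: under forward locomotion, a stationary point's azimuth always drifts away from the heading direction, so any image point whose azimuth drifts toward the heading must be self-propelled. The predicate below is an illustrative sketch with an assumed sign convention, not an implementation from the study:

```python
def is_regressive(azimuth_deg, azimuth_vel_deg_s):
    """True if the image motion is back-to-front (regressive).

    Assumed convention: azimuth is measured from the heading direction,
    so forward locomotion makes |azimuth| of every stationary point grow
    (front-to-back, progressive flow).  A point whose |azimuth| shrinks
    is therefore moving on its own and should be salient.
    """
    if azimuth_deg == 0:
        return False  # on the heading axis: pure expansion, no azimuth drift
    return azimuth_deg * azimuth_vel_deg_s < 0
```

For example, a target at +40° drifting at -10°/s toward the heading is regressive (predicted to trigger freezing), while the same target drifting at +5°/s is consistent with the observer's own motion and is predicted to be ignored.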

3.
Huang X, Albright TD, Stoner GR. Neuron. 2007;53(5):761-770.
Visual motion perception relies on two opposing operations: integration and segmentation. Integration overcomes motion ambiguity in the visual image by spatial pooling of motion signals, whereas segmentation identifies differences between adjacent moving objects. For visual motion area MT, previous investigations have reported that stimuli in the receptive field surround, which do not elicit a response when presented alone, can nevertheless modulate responses to stimuli in the receptive field center. The directional tuning of this "surround modulation" has been found to be mainly antagonistic and hence consistent with segmentation. Here, we report that surround modulation in area MT can be either antagonistic or integrative depending upon the visual stimulus. Both types of modulation were delayed relative to response onset. Our results suggest that the dominance of antagonistic modulation in previous MT studies was due to stimulus choice and that segmentation and integration are achieved, in part, via adaptive surround modulation.

4.
Ilg UJ, Schumann S, Thier P. Neuron. 2004;43(1):145-151.
The motion areas of posterior parietal cortex extract information on visual motion for perception as well as for the guidance of movement. It is usually assumed that neurons in posterior parietal cortex represent visual motion relative to the retina. Current models describing action guided by moving objects work successfully based on this assumption. However, here we show that the pursuit-related responses of a distinct group of neurons in area MST of monkeys are at odds with this view. Rather than signaling object image motion on the retina, they represent object motion in world-centered coordinates. This representation may simplify the coordination of object-directed action and ego motion-invariant visual perception.

5.
In humans, as well as most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations.
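The flow-parsing mechanism, estimating the self-motion component of the optic flow field and subtracting it, can be illustrated with a minimal NumPy sketch. Here self-motion is simplified to pure forward translation toward a fronto-parallel surface, so its flow is radial expansion v = k·x; the least-squares fit and all numbers are assumptions of this sketch, not the mechanism tested in the study:

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(400, 2))         # image positions of scene points
k_true = 0.5                                    # expansion rate from forward self-motion
flow = k_true * pts                             # radial optic flow of stationary points
obj_idx = 0
obj_vel = np.array([0.3, -0.2])                 # one object moves independently
flow[obj_idx] += obj_vel

# flow parsing: fit the global expansion component, then subtract it
k_hat = np.sum(flow * pts) / np.sum(pts * pts)  # least-squares expansion rate
residual = flow - k_hat * pts                   # object motion "pops out" here
```

After subtraction, the residual field is near zero everywhere except at the moving object, whose velocity is recovered almost exactly; a congruent auditory cue would, on this picture, help single out which residual vector belongs to the target.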

6.
We study the orientation and speed tuning properties of spatiotemporal three-dimensional (3D) Gabor and motion energy filters as models of time-dependent receptive fields of simple and complex cells in the primary visual cortex (V1). We augment the motion energy operator with surround suppression to model the inhibitory effect of stimuli outside the classical receptive field. We show that spatiotemporal integration and surround suppression lead to substantial noise reduction. We propose an effective and straightforward motion detection computation that uses the population code of a set of motion energy filters tuned to different velocities. We also show that surround inhibition leads to suppression of texture and thus improves the visibility of object contours and facilitates figure/ground segregation and the detection and recognition of objects.  
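The core motion energy operator (a quadrature pair of spatiotemporal Gabor filters, squared and summed) can be written down directly. This sketch uses one spatial dimension plus time; the grid size, frequencies, and bandwidths are arbitrary illustrative choices, and the surround-suppression and population-decoding stages of the model are omitted:

```python
import numpy as np

def gabor_qpair(nx, nt, fx, ft, sigma_x, sigma_t):
    """Space-time Gabor quadrature pair tuned to velocity v = ft / fx."""
    x = np.arange(nx) - nx // 2
    t = np.arange(nt) - nt // 2
    X, T = np.meshgrid(x, t, indexing="ij")
    env = np.exp(-X**2 / (2 * sigma_x**2) - T**2 / (2 * sigma_t**2))
    phase = 2 * np.pi * (fx * X + ft * T)
    return env * np.cos(phase), env * np.sin(phase)

def motion_energy(stim, fx, ft):
    """Squared quadrature-pair responses summed: phase-invariant,
    direction-selective motion energy."""
    even, odd = gabor_qpair(*stim.shape, fx, ft, sigma_x=6, sigma_t=6)
    return np.sum(stim * even) ** 2 + np.sum(stim * odd) ** 2

# drifting gratings in the two opposite directions
nx = nt = 64
x = np.arange(nx) - nx // 2
t = np.arange(nt) - nt // 2
X, T = np.meshgrid(x, t, indexing="ij")
pref = np.cos(2 * np.pi * (0.1 * X + 0.1 * T))   # matches the filter's direction
null = np.cos(2 * np.pi * (0.1 * X - 0.1 * T))   # same grating, opposite direction
```

A filter tuned to (fx, ft) responds strongly to `pref` and barely to `null`; a population of such units tuned to different ft/fx ratios yields the velocity population code, and subtracting energy pooled from neighboring locations would add the surround suppression described above.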

7.
Perceiving which of a scene's objects are adjacent may require selecting them with a limited-capacity attentional process. Previous results support this notion [1-3] but leave open whether the process operates simultaneously on several objects or proceeds one by one. With arrays of colored discs moving together, we first tested the effect of moving the discs faster than the speed limit for following them with attentional selection [4]. At these high speeds, participants could identify which colors were present and determine whether identical arrays were aligned or offset by one disc. They could not, however, apprehend which colors in the arrays were adjacent, indicating that attentional selection is required for this judgment. If selection operates serially to determine which colors are neighbors, then after the color of one disc is identified, attention must shift to the adjacent disc. As a result of the motion, attention might occasionally miss its target and land on the trailing disc. We cued attention to first select one or the other of a pair of discs and found the pattern of errors predicted. Perceiving these spatial relationships evidently requires selecting and processing objects one by one and is only possible at low object speeds.

8.
Born RT, Groh JM, Zhao R, Lukasewycz SJ. Neuron. 2000;26(3):725-734.
To track a moving object, its motion must first be distinguished from that of the background. The center-surround properties of neurons in the middle temporal visual area (MT) may be important for signaling the relative motion between object and background. To test this, we microstimulated within MT and measured the effects on monkeys' eye movements to moving targets. We found that stimulation at "local motion" sites, where receptive fields possessed antagonistic surrounds, shifted pursuit in the preferred direction of the neurons, whereas stimulation at "wide-field motion" sites shifted pursuit in the opposite, or null, direction. We propose that activating wide-field sites simulated background motion, thus inducing a target motion signal in the opposite direction. Our results support the hypothesis that neuronal center-surround mechanisms contribute to the behavioral segregation of objects from the background.

9.
The primate brain intelligently processes visual information from the world as the eyes move constantly. The brain must take into account visual motion induced by eye movements, so that visual information about the outside world can be recovered. Certain neurons in the dorsal part of monkey medial superior temporal area (MSTd) play an important role in integrating information about eye movements and visual motion. When a monkey tracks a moving target with its eyes, these neurons respond to visual motion as well as to smooth pursuit eye movements. Furthermore, the responses of some MSTd neurons to the motion of objects in the world are very similar during pursuit and during fixation, even though the visual information on the retina is altered by the pursuit eye movement. We call these neurons compensatory pursuit neurons. In this study we develop a computational model of MSTd compensatory pursuit neurons based on physiological data from single unit studies. Our model MSTd neurons can simulate the velocity tuning of monkey MSTd neurons. The model MSTd neurons also show the pursuit compensation property. We find that pursuit compensation can be achieved by divisive interaction between signals coding eye movements and signals coding visual motion. The model generates two implications that can be tested in future experiments: (1) compensatory pursuit neurons in MSTd should have the same direction preference for pursuit and retinal visual motion; (2) there should be non-compensatory pursuit neurons that show opposite preferred directions of pursuit and retinal visual motion.

10.
Simultaneous object motion and self-motion give rise to complex patterns of retinal image motion. In order to estimate object motion accurately, the brain must parse this complex retinal motion into self-motion and object motion components. Although this computational problem can be solved, in principle, through purely visual mechanisms, extra-retinal information that arises from the vestibular system during self-motion may also play an important role. Here we investigate whether combining vestibular and visual self-motion information improves the precision of object motion estimates. Subjects were asked to discriminate the direction of object motion in the presence of simultaneous self-motion, depicted either by visual cues alone (i.e. optic flow) or by combined visual/vestibular stimuli. We report a small but significant improvement in object motion discrimination thresholds with the addition of vestibular cues. This improvement was greatest for eccentric heading directions and negligible for forward movement, a finding that could reflect increased relative reliability of vestibular versus visual cues for eccentric heading directions. Overall, these results are consistent with the hypothesis that vestibular inputs can help parse retinal image motion into self-motion and object motion components.
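The reliability argument above has a standard quantitative form: under maximum-likelihood cue combination, two independent cues with thresholds σ₁ and σ₂ predict a combined threshold of √(σ₁²σ₂²/(σ₁²+σ₂²)), so the gain over the better cue grows as the added cue becomes relatively more reliable. This is the conventional benchmark for such experiments, not the paper's own analysis, and the example numbers are invented:

```python
import math

def combined_threshold(sigma_vis, sigma_vest):
    """Optimal (maximum-likelihood) prediction for the discrimination
    threshold when two independent cues are combined."""
    return math.sqrt(sigma_vis**2 * sigma_vest**2 /
                     (sigma_vis**2 + sigma_vest**2))

# Illustrative numbers only: for forward headings the vestibular cue is
# relatively poor, so adding it barely helps; for eccentric headings it
# is comparable to vision, so the predicted improvement is larger.
gain_forward   = 2.0 - combined_threshold(2.0, 10.0)
gain_eccentric = 2.0 - combined_threshold(2.0, 3.0)
```

The combined threshold is always below the better single cue, and `gain_eccentric` exceeds `gain_forward`, mirroring the heading-dependent improvement reported above.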

11.
Ground-nesting wasps (Odynerus spinipes, Eumenidae) perform characteristic zig-zag flight manoeuvres when they encounter a novel object in the vicinity of their nests. We analysed flight parameters and flight control mechanisms and reconstructed the optical flow fields which the wasps generate by these flight manoeuvres. During zig-zag flights, the wasps move sideways and turn to keep the object in their frontal visual field. Their turning speed is controlled by the relative motion between object and background. We find that the wasps adjust their rotational and translational speed in such a way as to produce a specific vortex field of image motion that is centred on the novel object. As a result, differential image motion and changes in the direction of motion vectors are maximal in the vicinity and at the edges of the object. Zig-zag flights thus seem to be a 'depth from motion' procedure for the extraction of object-related depth information. Accepted: 31 August 1997

12.
To follow optic neuritis patients and evaluate the effectiveness of their treatment, a convenient, accurate and quantifiable tool is required to assess changes in myelination in the central nervous system (CNS). However, standard measurements, including routine visual tests and MRI scans, are not sensitive enough for this purpose. We present two visual tests addressing dynamic monocular and binocular functions, which may closely associate with the extent of myelination along the visual pathways: an Object From Motion (OFM) extraction protocol and a Time-constrained Stereo protocol. In the OFM test, an array of dots composes an object: dots within the object's boundary move rightward while dots outside it move leftward, or vice versa. The dot pattern generates a camouflaged object that cannot be detected when the dots are stationary or moving as a whole; object recognition is thus critically dependent on motion perception. In the Time-constrained Stereo protocol, spatially disparate images are presented for a limited length of time, challenging binocular 3-dimensional integration in time. Both tests are appropriate for clinical use and provide a simple yet powerful way to identify and quantify processes of demyelination and remyelination along the visual pathways. These protocols may be useful for diagnosing and following optic neuritis and multiple sclerosis patients. In the diagnostic process, they may reveal visual deficits that cannot be identified with current standard visual measurements. Moreover, they can sensitively identify the basis of the currently unexplained visual complaints that persist in patients after visual acuity has recovered. In longitudinal follow-up, the protocols can serve as a sensitive marker of demyelinating and remyelinating processes over time, and may therefore be used to evaluate the efficacy of current and evolving therapeutic strategies targeting myelination of the CNS.
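An OFM-style stimulus is easy to generate: a uniform random-dot field in which dots inside a statically invisible object region step one way and all other dots step the other way, so the shape exists only in the motion. A minimal NumPy sketch; the square object region, dot count, and speed are arbitrary illustrative choices, not the published protocol's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
n_dots, n_frames, speed = 400, 20, 0.01
dots = rng.uniform(0, 1, size=(n_dots, 2))   # uniform field: no static shape cue

def inside_object(xy):
    """Hypothetical camouflaged object: a centred square region."""
    return (np.abs(xy[:, 0] - 0.5) < 0.15) & (np.abs(xy[:, 1] - 0.5) < 0.15)

frames = []
for _ in range(n_frames):
    frames.append(dots.copy())
    inside = inside_object(dots)
    # opposite horizontal motion inside vs. outside the object region
    dots[:, 0] += np.where(inside, speed, -speed)
    dots[:, 0] %= 1.0                         # wrap at the display edges
```

Any single frame is just a uniform dot field; only across frames does the opposed motion segment the square, which is why recognition of the object probes motion perception specifically.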

13.
In a typical visual scene, one or more objects move relative to a larger background, which can itself be in motion as a result of the observer’s eyes moving with respect to the outside world. Here we show that accurate estimation of the background motion from an image velocity field can be accomplished through an iterative cooperation between two modules: one that specializes in calculating a weighted average velocity and another one calculating a velocity contrast map. We build on our analysis to provide a model for the tectum-pretectum loop in the nonmammalian midbrain. Our model accounts for some of the known properties of the tectal neurons (sensitivity to relative motion) and pretectal neurons (sensitivity to whole-field motion). It also agrees with our knowledge of the pretectotectal projection (divergent and inhibitory), and with the results of lesion studies in which the pretectal input to the tectum was removed, leading to hyperactivity of the tectal neurons and the animal. Our model also makes a testable prediction regarding the tectopretectal projection, i.e., that the presence of a larger object and a bigger discrepancy between the directions of motion for the object and the background lead to a larger error by the pretectum in estimating the background motion when the tectal input is abolished.
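The two-module loop can be sketched concretely: one module computes a weighted average velocity (the background estimate), the other a velocity-contrast map (deviation from that estimate), and the contrast map feeds back to down-weight object vectors in the average. The weighting function and all parameters below are illustrative assumptions, not the paper's equations:

```python
import numpy as np

def estimate_background(v, n_iter=10, beta=5.0):
    """Iteratively estimate the background velocity of a flow field v
    (shape: n_vectors x 2).  Module 1: weighted average velocity.
    Module 2: velocity-contrast map, fed back as suppressive weights."""
    w = np.ones(len(v))
    contrast = np.zeros(len(v))
    for _ in range(n_iter):
        bg = (w[:, None] * v).sum(axis=0) / w.sum()  # weighted mean velocity
        contrast = np.linalg.norm(v - bg, axis=1)    # relative-motion map
        w = 1.0 / (1.0 + beta * contrast)            # down-weight object vectors
    return bg, contrast

v = np.tile([1.0, 0.0], (100, 1))   # background drifting rightward
v[:10] = [0.0, 2.0]                 # a 10% "object" moving upward
bg, contrast = estimate_background(v)
```

After a few iterations the background estimate converges near the true background velocity despite the object's pull, and the contrast map (the "tectal" relative-motion signal in the model's terms) is large only on the object vectors.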

14.
Beauchamp MS, Lee KE, Haxby JV, Martin A. Neuron. 2002;34(1):149-159.
We tested the hypothesis that different regions of lateral temporal cortex are specialized for processing different types of visual motion by studying the cortical responses to moving gratings and to humans and manipulable objects (tools and utensils) that were either stationary or moving with natural or artificially generated motions. Segregated responses to human and tool stimuli were observed in both ventral and lateral regions of posterior temporal cortex. Relative to ventral cortex, lateral temporal cortex showed a larger response for moving compared with static humans and tools. Superior temporal cortex preferred human motion, and middle temporal gyrus preferred tool motion. A greater response was observed in STS to articulated compared with unarticulated human motion. Specificity for different types of complex motion (in combination with visual form) may be an organizing principle in lateral temporal cortex.

15.
Beauchamp MS, Lee KE, Argall BD, Martin A. Neuron. 2004;41(5):809-823.
Two categories of objects in the environment, animals and man-made manipulable objects (tools), are easily recognized by either their auditory or visual features. Although these features differ across modalities, the brain integrates them into a coherent percept. In three separate fMRI experiments, the posterior superior temporal sulcus and middle temporal gyrus (pSTS/MTG) fulfilled objective criteria for an integration site. pSTS/MTG showed signal increases in response to either auditory or visual stimuli and responded more to auditory or visual objects than to meaningless (but complex) control stimuli. pSTS/MTG showed an enhanced response when auditory and visual object features were presented together, relative to presentation in a single modality. Finally, pSTS/MTG responded more to object identification than to other components of the behavioral task. We suggest that pSTS/MTG is specialized for integrating different types of information both within modalities (e.g., visual form, visual motion) and across modalities (auditory and visual).

16.
Development of the perception of spatial relations between objects was studied in infants aged 3–4 to 24–25 months. The following tests were performed: prediction of the results of rectilinear and nonrectilinear toy motion; search for a toy hidden before the baby's eyes under a cup, under one of two to five similar cups, or under a cup different from the others (a "local mark") that was stationary or moving in the visual field; and search for a toy hidden under the "local mark" while the baby's attention was distracted. It was shown that a child first masters the regularities of spatial motion of an object (at the age of 4–5 to 8–9 months). Up to the age of 10–11 months, all the children remember the location of a hidden toy using the egocentric location strategy ("Self" and "Object"). This strategy gradually improves and allows a child to ignore irrelevant objects in the visual field. The ability to use a "local mark" as a direct indicator of a hidden toy's location appears and is consolidated beginning at 14–15 months of age. This testifies to a transition from the egocentric strategy of object location to assessment of the relative location of two objects in the visual field. The capability of estimating the relative spatial position of three or more objects develops from the age of two years.

17.
Motion tracking is a challenge the visual system has to solve by reading out the retinal population. It is still unclear how the information from different neurons can be combined together to estimate the position of an object. Here we recorded a large population of ganglion cells in a dense patch of salamander and guinea pig retinas while displaying a bar moving diffusively. We show that the bar’s position can be reconstructed from retinal activity with a precision in the hyperacuity regime using a linear decoder acting on 100+ cells. We then took advantage of this unprecedented precision to explore the spatial structure of the retina’s population code. The classical view would have suggested that the firing rates of the cells form a moving hill of activity tracking the bar’s position. Instead, we found that most ganglion cells in the salamander fired sparsely and idiosyncratically, so that their neural image did not track the bar. Furthermore, ganglion cell activity spanned an area much larger than predicted by their receptive fields, with cells coding for motion far in their surround. As a result, population redundancy was high, and we could find multiple, disjoint subsets of neurons that encoded the trajectory with high precision. This organization allows for diverse collections of ganglion cells to represent high-accuracy motion information in a form easily read out by downstream neural circuits.
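The decoding step can be sketched with synthetic data: a population of position-tuned cells with Poisson spiking, and a ridge-regression linear decoder mapping spike counts to bar position. All numbers (cell count, tuning width, firing rates) are invented for illustration, and the precision of this toy decoder is not meant to reproduce the hyperacuity figure reported in the study:

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells, n_t = 120, 2000
pos = np.cumsum(rng.normal(0.0, 0.05, n_t))            # diffusive bar trajectory
centers = rng.uniform(pos.min(), pos.max(), n_cells)   # cells tile the track
rates = 5.0 * np.exp(-(pos[:, None] - centers) ** 2 / (2 * 0.5 ** 2))
counts = rng.poisson(rates).astype(float)              # population spike counts

# linear decoder: ridge regression from counts (plus a bias term) to position
X = np.c_[counts, np.ones(n_t)]
idx = rng.permutation(n_t)
train, test = idx[:1500], idx[500:]
lam = 1.0
w = np.linalg.solve(X[train].T @ X[train] + lam * np.eye(X.shape[1]),
                    X[train].T @ pos[train])
rmse = np.sqrt(np.mean((X[test] @ w - pos[test]) ** 2))
```

Even this simple linear readout recovers the trajectory with an error far below the tuning width, illustrating why 100+ cells suffice for very precise reconstruction; the paper's redundancy result corresponds to the observation that many disjoint subsets of regressors support a comparably precise fit.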

18.
Olveczky BP, Baccus SA, Meister M. Neuron. 2007;56(4):689-700.
Due to fixational eye movements, the image on the retina is always in motion, even when one views a stationary scene. When an object moves within the scene, the corresponding patch of retina experiences a different motion trajectory than the surrounding region. Certain retinal ganglion cells respond selectively to this condition, when the motion in the cell's receptive field center is different from that in the surround. Here we show that this response is strongest at the very onset of differential motion, followed by gradual adaptation with a time course of several seconds. Different subregions of a ganglion cell's receptive field can adapt independently. The circuitry responsible for differential motion adaptation lies in the inner retina. Several candidate mechanisms were tested, and the adaptation most likely results from synaptic depression at the synapse from bipolar to ganglion cell. Similar circuit mechanisms may act more generally to emphasize novel features of a visual stimulus.
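A rate-based sketch of the proposed mechanism: if a sustained differential-motion signal drives the ganglion cell through a depressing bipolar-to-ganglion synapse, the onset response is strong and then decays over seconds as the synaptic resource runs down. The model form and parameters are generic short-term-depression assumptions chosen to give a seconds-scale decay, not values from the paper:

```python
def depressing_synapse(rate, dt=0.001, t_end=8.0, U=0.02, tau_rec=8.0):
    """Rate-based short-term synaptic depression: a fraction U of the
    available resource x is released per unit presynaptic input, and x
    recovers toward 1 with time constant tau_rec (seconds)."""
    x, out = 1.0, []
    for _ in range(int(t_end / dt)):
        release = U * x * rate                     # postsynaptic drive
        x += dt * ((1.0 - x) / tau_rec - release)  # deplete, then recover
        out.append(release)
    return out

# differential motion switched on at t = 0 and held constant
drive = depressing_synapse(rate=20.0)
```

The output is maximal at onset and declines monotonically toward a depressed steady state over a few seconds, matching the adaptation time course described above; independent subunits would simply carry independent copies of the resource variable x.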

19.
In the animal world, which is full of competition for survival, visual camouflage and counter-camouflage are found everywhere. What is the principle underlying visual counter-camouflage? In this paper we extend Reichardt's figure-ground relative-motion discrimination model and propose a moving-image filter model of the visual counter-camouflage function. To test this model, we built a biologically plausible electronic neural network device for real-time motion information processing, which achieves real-time, high-resolution filtering of moving-target images. Compared with the moving-target detection function of Mead's artificial retina, the resolution of the detected moving-target images is greatly improved while the noise level is markedly reduced, overcoming some limitations of the artificial retina.
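The Reichardt detector at the base of such figure-ground models is a delay-and-correlate circuit. Below is a minimal opponent version over a one-dimensional receptor array, a simplified sketch of the classic scheme rather than the hardware described above:

```python
import numpy as np

def reichardt_output(s, delay=1):
    """Opponent Reichardt detector array on s[time, space]: each arm
    multiplies one receptor's delayed signal with its neighbour's
    current signal, and the mirror-symmetric arm is subtracted.
    Positive mean output signals rightward motion, negative leftward."""
    a = s[:-delay, :-1] * s[delay:, 1:]   # left input delayed x right input current
    b = s[delay:, :-1] * s[:-delay, 1:]   # right input delayed x left input current
    return float(np.mean(a - b))

rng = np.random.default_rng(3)
pattern = rng.normal(size=64)                                # random 1-D texture
right = np.stack([np.roll(pattern, t) for t in range(32)])   # drifts rightward
left = np.stack([np.roll(pattern, -t) for t in range(32)])   # drifts leftward
```

The detector's sign follows the drift direction even for a fully camouflaged random texture; a counter-camouflage filter in the spirit of the model compares such local outputs between a region and its surround, so that an object moving relative to the background stands out despite identical texture statistics.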

20.
BACKGROUND: In anorthoscopic viewing conditions, observers can perceive a moving object through a narrow slit even when only portions of its contour are visible at any time. We used fMRI to examine the contribution of early and later visual cortical areas to dynamic shape integration. Observers' success at integrating the shape of the slit-viewed object was manipulated by varying the degree to which the stimulus was dynamically distorted. Line drawings of common objects were either moderately distorted, strongly distorted, or shown undistorted. Phenomenologically, increasing the stimulus distortion made both object shape and motion more difficult to perceive.
RESULTS: We found that bilateral cortical activity in portions of the ventral occipital cortex, corresponding to known object areas within the lateral occipital complex (LOC), was inversely correlated with the degree of stimulus distortion. Activity in left MT+, the human cortical area specialized for motion, showed a similar pattern to the ventral occipital region. The LOC also showed greater activity to a fully visible moving object than to the undistorted slit-viewed object. Area MT+, however, responded roughly equally to the slit-viewed and fully visible moving objects.
CONCLUSIONS: In early retinotopic cortex, the distorted and undistorted stimuli elicited the same amount of activity. Activity in higher visual areas, however, was correlated with the percept of the coherent object, suggesting that shape integration is mediated by later visual cortical areas. Motion information from the dorsal stream may project to the LOC to produce the shape percept.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号