Similar articles
1.
Self-motion, steering, and obstacle avoidance during navigation in the real world require humans to travel along curved paths. Many perceptual models have been proposed that focus on heading, which specifies the direction of travel along straight paths, but not on path curvature, which humans accurately perceive and which is critical to everyday locomotion. In primates, including humans, the dorsal medial superior temporal area (MSTd) has been implicated in heading perception. However, the majority of MSTd neurons respond optimally to spiral patterns, rather than to the radial expansion patterns associated with heading. No existing theory of curved path perception explains the neural mechanisms by which humans accurately assess path, and no functional role for spiral-tuned cells has yet been proposed. Here we present a computational model that demonstrates how the continuum of observed cells (radial to circular) in MSTd can simultaneously code curvature and heading across the neural population. Curvature is encoded through the spirality of the most active cell, and heading is encoded through the visuotopic location of the center of the most active cell's receptive field. Model curvature and heading errors fit those made by humans. Our model challenges the view that the function of MSTd is heading estimation; based on our analysis, we claim that it is primarily concerned with trajectory estimation, the simultaneous representation of both curvature and heading. In our model, temporal dynamics afford time-history in the neural representation of optic flow, which may modulate its structure. This has far-reaching implications for the interpretation of studies that assume that optic flow is, and should be, represented as an instantaneous vector field.
Our results suggest that spiral motion patterns that emerge in spatio-temporal optic flow are essential for guiding self-motion along complex trajectories, and that cells in MSTd are specifically tuned to extract estimates of complex trajectories from flow.
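The population read-out described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' model: a bank of spiral-tuned templates spanning the radial-to-circular continuum is matched against an input flow field, and the best-matching template's spiral pitch stands in for curvature while its center stands in for heading.

```python
import numpy as np

def spiral_template(grid, center, angle):
    """Unit flow vectors of a spiral pattern centred at `center`.

    angle = 0 gives pure radial expansion; angle = 90 deg gives pure
    circular flow; intermediate angles give spirals.
    """
    d = grid - center                                 # vectors from centre
    r = np.linalg.norm(d, axis=1, keepdims=True) + 1e-9
    radial = d / r                                    # expansion component
    circular = np.stack([-radial[:, 1], radial[:, 0]], axis=1)
    return np.cos(angle) * radial + np.sin(angle) * circular

xs = np.linspace(-1, 1, 15)
grid = np.array([(x, y) for x in xs for y in xs])

angles = np.deg2rad(np.arange(0, 91, 15))             # spiral-pitch bank
centers = [np.array([x, 0.0]) for x in np.linspace(-0.5, 0.5, 11)]

# Input: spiral flow with 30 deg pitch centred at x = 0.2 (hypothetical)
stim = spiral_template(grid, np.array([0.2, 0.0]), np.deg2rad(30))

# "Most active cell" = template with the largest dot product with input
best, best_resp = None, -np.inf
for a in angles:
    for c in centers:
        resp = np.sum(spiral_template(grid, c, a) * stim)
        if resp > best_resp:
            best_resp, best = resp, (np.rad2deg(a), c[0])

decoded_pitch, decoded_heading = best
```

The winning unit's pitch recovers the stimulus spirality (a proxy for curvature) and its receptive-field center recovers the heading, mirroring the coding scheme described in the abstract.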

2.
The primate brain intelligently processes visual information from the world as the eyes move constantly. The brain must take into account visual motion induced by eye movements, so that visual information about the outside world can be recovered. Certain neurons in the dorsal part of monkey medial superior temporal area (MSTd) play an important role in integrating information about eye movements and visual motion. When a monkey tracks a moving target with its eyes, these neurons respond to visual motion as well as to smooth pursuit eye movements. Furthermore, the responses of some MSTd neurons to the motion of objects in the world are very similar during pursuit and during fixation, even though the visual information on the retina is altered by the pursuit eye movement. We call these neurons compensatory pursuit neurons. In this study we develop a computational model of MSTd compensatory pursuit neurons based on physiological data from single unit studies. Our model MSTd neurons can simulate the velocity tuning of monkey MSTd neurons. The model MSTd neurons also show the pursuit compensation property. We find that pursuit compensation can be achieved by divisive interaction between signals coding eye movements and signals coding visual motion. The model generates two implications that can be tested in future experiments: (1) compensatory pursuit neurons in MSTd should have the same direction preference for pursuit and retinal visual motion; (2) there should be non-compensatory pursuit neurons that show opposite preferred directions of pursuit and retinal visual motion.
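The divisive interaction can be illustrated with a toy tuning curve (the exponential tuning and all parameter values are my assumptions, not the paper's fitted model): with exponential velocity tuning, dividing the retinal-motion response by an eye-velocity-dependent gain leaves the neuron tuned to world velocity regardless of pursuit.

```python
import numpy as np

# With tuning g(v) = exp(v / s), the retinal response during pursuit is
# g(v_world - v_eye). Dividing by the eye-movement signal exp(-v_eye / s)
# gives g(v_world - v_eye) / exp(-v_eye / s) = g(v_world): the compensated
# response is independent of the pursuit velocity.

s = 5.0                                    # tuning scale (assumed, deg/s)

def retinal_response(v_world, v_eye):
    return np.exp((v_world - v_eye) / s)   # driven by retinal motion only

def compensated_response(v_world, v_eye):
    return retinal_response(v_world, v_eye) / np.exp(-v_eye / s)

v_world = np.linspace(-10, 10, 5)
fixation = compensated_response(v_world, 0.0)
pursuit = compensated_response(v_world, 8.0)    # during 8 deg/s pursuit

# tuning to world motion is identical during fixation and pursuit
match = np.allclose(fixation, pursuit)
```

The uncompensated `retinal_response` shifts with pursuit, while the divisively compensated response does not, which is the defining property of the compensatory pursuit neurons above.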

3.
Zhang T, Heuer HW, Britten KH. Neuron. 2004;42(6):993-1001.
The ventral intraparietal area (VIP) is a multimodal parietal area, where visual responses are brisk, directional, and typically selective for complex optic flow patterns. VIP thus could provide signals useful for visual estimation of heading (self-motion direction). A central problem in heading estimation is how observers compensate for eye velocity, which distorts the retinal motion cues upon which perception depends. To find out if VIP could be useful for heading, we measured its responses to simulated trajectories, both with and without eye movements. Our results showed that most VIP neurons very strongly signal heading direction. Furthermore, the tuning of most VIP neurons was remarkably stable in the presence of eye movements. This stability was such that the population of VIP neurons represented heading very nearly in head-centered coordinates. This makes VIP the most robust source of such signals yet described, with properties ideal for supporting perception.

4.
Optic flow, the pattern of apparent motion elicited on the retina during movement, has been demonstrated to be widely used by animals living in the aerial habitat, whereas underwater optic flow has not been intensively studied so far. However, optic flow would also provide aquatic animals with valuable information about their own movement relative to the environment, even under conditions in which vision is generally thought to be drastically impaired, e.g. in turbid waters. Here, we tested underwater optic flow perception for the first time in a semi-aquatic mammal, the harbor seal, by simulating a forward movement on a straight path through a cloud of dots on an underwater projection. The translatory motion pattern expanded radially out of a singular point along the direction of heading, the focus of expansion. We assessed the seal's accuracy in determining the simulated heading in a task in which the seal had to judge whether a cross superimposed on the flow field was deviating from or congruent with the actual focus of expansion. The seal perceived optic flow and determined deviations from the simulated heading with a threshold of 0.6 deg of visual angle. Optic flow is thus a source of information that seals, fish, and most likely aquatic species in general may rely on, e.g. for controlling locomotion and orientation under water. This suggests that optic flow is a tool universally used by any moving organism possessing eyes.
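The focus of expansion the seal had to localize is well defined computationally. As a hypothetical aside (the study is behavioral and involved no such analysis), the FOE of a purely translational flow field can be recovered by least squares, since every flow vector points radially away from it:

```python
import numpy as np

# Each flow vector v at image point p satisfies v x (p - foe) = 0,
# i.e.  v_y * f_x - v_x * f_y = v_y * p_x - v_x * p_y,
# which is linear in the FOE coordinates f = (f_x, f_y).

rng = np.random.default_rng(0)
foe = np.array([0.3, -0.1])                # simulated heading point
pts = rng.uniform(-1, 1, size=(200, 2))    # sample image locations
d = pts - foe
flow = d / (np.linalg.norm(d, axis=1, keepdims=True) + 1e-9)  # radial field

A = np.stack([flow[:, 1], -flow[:, 0]], axis=1)
b = flow[:, 1] * pts[:, 0] - flow[:, 0] * pts[:, 1]
foe_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With a noiseless radial field the recovery is exact; with noisy flow the same least-squares system yields the best-fitting FOE, which is the quantity the seal's 0.6 deg threshold was measured against.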

5.
Intracellular responses of medulla neurons (second-order visual interneurons) have been examined in the tiger beetle larva. The larva possesses six stemmata on either side of the head, two of which are much larger than the remaining four. Beneath the cuticle housing the stemmata lies an optic neuropil complex consisting of lamina and medulla neuropils. Response patterns of medulla neurons to illumination and to moving objects varied from neuron to neuron. For movement stimuli, black discs and a black bar were moved in the rostro-caudal direction above the larva. Comparison of responses to the discs and the bar suggested a spatial summation of responses in some neurons, and tuning to small objects in others. The majority of neurons responded to objects moving at heights of 10 mm and 50 mm with the same discharge pattern. A few neurons, however, showed distance sensitivity, responding with an increase of spike discharges to moving objects at only one of the two heights. Such distance sensitivity remained in one-stemma larvae, in which three of the four stemmata were occluded. These data are discussed in relation to the distinct visual behavior of the larva, with special reference to perception of the hunting range.

6.
Observers moving through a three-dimensional environment can use optic flow to determine their direction of heading. Existing heading algorithms use Cartesian flow fields in which image flow is the displacement of image features over time. I explore a heading algorithm that uses affine flow instead. The affine flow at an image feature is its displacement modulo an affine transformation defined by its neighborhood. Modeling the observer's instantaneous motion by a translation and a rotation about an axis through its eye, affine flow is tangent to the translational field lines on the observer's viewing sphere. These field lines form a radial flow field whose center is the direction of heading. The affine flow heading algorithm has characteristics that can be used to determine whether the human visual system relies on it. The algorithm is immune to observer rotation and arbitrary affine transformations of its input images; its accuracy improves with increasing variation in environmental depth; and it cannot recover heading in an environment consisting of a single plane because affine flow vanishes in this case. Translational field lines can also be approximated through differential Cartesian motion. I compare the performance of heading algorithms based on affine flow, differential Cartesian flow, and least-squares search.
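The rotation immunity claimed for the algorithm follows from the linearity of affine fitting, which a short sketch can verify (my illustration, not the paper's implementation): the residuals of a least-squares affine fit are unchanged when any affine field, such as the locally near-affine flow produced by observer rotation, is added to the input.

```python
import numpy as np

# Affine flow at a feature = its displacement minus the best affine fit
# v ~ A p + t over the neighbourhood. Because X @ C lies in the column
# space of the design matrix X, adding an affine field shifts the fitted
# coefficients by exactly C and leaves the residuals untouched.

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(50, 2))     # feature positions in a patch

def affine_residuals(pts, flow):
    """Residual flow after a least-squares affine fit v ~ A p + t."""
    X = np.hstack([pts, np.ones((len(pts), 1))])     # [x, y, 1] rows
    coef, *_ = np.linalg.lstsq(X, flow, rcond=None)
    return flow - X @ coef

flow = rng.normal(size=(50, 2))            # arbitrary (toy) flow field

A = np.array([[0.2, -0.7], [0.5, 0.1]])    # arbitrary affine contaminant
t = np.array([0.3, -0.4])
affine_field = pts @ A.T + t

r1 = affine_residuals(pts, flow)
r2 = affine_residuals(pts, flow + affine_field)      # contaminated input
same = np.allclose(r1, r2)
```

This is also why the algorithm fails for a single plane: there the flow itself is (locally) affine, so the residuals vanish and carry no heading signal.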

7.
The optic flow generated when a person moves through the environment can be locally decomposed into several basic components, including radial, circular, translational and spiral motion. Since their analysis plays an important part in the visual perception and control of locomotion and posture, it is likely that some brain regions in the primate dorsal visual pathway are specialized to distinguish among them. The aim of this study is to explore the sensitivity to different types of egomotion-compatible visual stimulation in the human motion-sensitive regions of the brain. Event-related fMRI experiments, 3D motion and wide-field stimulation, functional localizers and brain mapping methods were used to study the sensitivity of six distinct motion areas (V6, MT, MST+, V3A, CSv and an Intra-Parietal Sulcus motion [IPSmot] region) to different types of optic flow stimuli. Results show that only areas V6, MST+ and IPSmot are specialized in distinguishing among the various types of flow patterns, with a high response to translational flow that was maximal in V6 and IPSmot and less marked in MST+. Given that during egomotion the translational optic flow conveys differential information about near and far external objects, areas V6 and IPSmot likely process visual egomotion signals to extract information about the relative distance of objects with respect to the observer. Since area V6 is also involved in distinguishing object-motion from self-motion, it could provide information about the location in space of moving and static objects during self-motion, particularly in a dynamically unstable environment.

8.
In humans, as well as most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations.

9.
An important role of visual systems is to detect nearby predators, prey, and potential mates, which may be distinguished in part by their motion. When an animal is at rest, an object moving in any direction may easily be detected by motion-sensitive visual circuits. During locomotion, however, this strategy is compromised because the observer must detect a moving object within the pattern of optic flow created by its own motion through the stationary background. However, objects that move creating back-to-front (regressive) motion may be unambiguously distinguished from stationary objects because forward locomotion creates only front-to-back (progressive) optic flow. Thus, moving animals should exhibit an enhanced sensitivity to regressively moving objects. We explicitly tested this hypothesis by constructing a simple fly-sized robot that was programmed to interact with a real fly. Our measurements indicate that whereas walking female flies freeze in response to a regressively moving object, they ignore a progressively moving one. Regressive motion salience also explains observations of behaviors exhibited by pairs of walking flies. Because the assumptions underlying the regressive motion salience hypothesis are general, we suspect that the behavior we have observed in Drosophila may be widespread among eyed, motile organisms.

10.
The mouse is emerging as an important model for understanding how sensory neocortex extracts cues to guide behavior, yet little is known about how these cues are processed beyond primary cortical areas. Here, we used two-photon calcium imaging in awake mice to compare visual responses in primary visual cortex (V1) and in two downstream target areas, AL and PM. Neighboring V1 neurons had diverse stimulus preferences spanning five octaves in spatial and temporal frequency. By contrast, AL and PM neurons responded best to distinct ranges of stimulus parameters. Most strikingly, AL neurons preferred fast-moving stimuli while PM neurons preferred slow-moving stimuli. By contrast, neurons in V1, AL, and PM demonstrated similar selectivity for stimulus orientation but not for stimulus direction. Based on these findings, we predict that area AL helps guide behaviors involving fast-moving stimuli (e.g., optic flow), while area PM helps guide behaviors involving slow-moving objects.

11.
Over successive stages, the ventral visual system of the primate brain develops neurons that respond selectively to particular objects or faces with translation, size and view invariance. The powerful neural representations found in Inferotemporal cortex form a remarkably rapid and robust basis for object recognition which belies the difficulties faced by the system when learning in natural visual environments. A central issue in understanding the process of biological object recognition is how these neurons learn to form separate representations of objects from complex visual scenes composed of multiple objects. We show how a one-layer competitive network comprised of ‘spiking’ neurons is able to learn separate transformation-invariant representations (exemplified by one-dimensional translations) of visual objects that are always seen together moving in lock-step, but separated in space. This is achieved by combining ‘Mexican hat’ functional lateral connectivity with cell firing-rate adaptation to temporally segment input representations of competing stimuli through anti-phase oscillations (perceptual cycles). These spiking dynamics are quickly and reliably generated, enabling selective modification of the feed-forward connections to neurons in the next layer through Spike-Time-Dependent Plasticity (STDP), resulting in separate translation-invariant representations of each stimulus. Variations in key properties of the model are investigated with respect to the network’s ability to develop appropriate input representations and subsequently output representations through STDP. Contrary to earlier rate-coded models of this learning process, this work shows how spiking neural networks may learn about more than one stimulus together without suffering from the ‘superposition catastrophe’. We take these results to suggest that spiking dynamics are key to understanding biological visual object recognition.
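The STDP rule that gates the feed-forward learning can be sketched in its standard pairwise form (the amplitudes and time constants here are illustrative, not the paper's): pre-before-post spike pairs potentiate the synapse and post-before-pre pairs depress it, each with an exponential window.

```python
import numpy as np

A_plus, A_minus = 0.01, 0.012      # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0   # window time constants (ms)

def stdp_dw(pre_times, post_times):
    """Total weight change from all pre/post spike pairs (times in ms)."""
    dw = 0.0
    for tp in pre_times:
        for tq in post_times:
            dt = tq - tp                          # post minus pre
            if dt > 0:                            # causal pair: potentiate
                dw += A_plus * np.exp(-dt / tau_plus)
            elif dt < 0:                          # anti-causal: depress
                dw -= A_minus * np.exp(dt / tau_minus)
    return dw

causal = stdp_dw([10.0], [15.0])   # pre leads post by 5 ms
anti = stdp_dw([15.0], [10.0])     # post leads pre by 5 ms
```

In the model above, the anti-phase oscillations ensure that only the neurons representing one stimulus fire near a given postsynaptic spike, so this temporally asymmetric rule strengthens connections stimulus by stimulus rather than to their superposition.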

12.
In motion-processing areas of the visual cortex in cats and monkeys, an anisotropic distribution of direction selectivities displays a preference for movements away from the fovea. This ‘centrifugal bias’ has been hypothetically linked to the processing of optic flow fields generated during forward locomotion. In this paper, we show that flow fields induced on the retina in many natural situations of locomotion of higher mammals are indeed qualitatively centrifugal in structure, even when biologically plausible eye movements to stabilize gaze on environmental targets are performed. We propose a network model of heading detection that carries an anisotropy similar to the one found in cat and monkey. In simulations, this model reproduces a number of psychophysical results of human heading detection. It suggests that a recently reported human disability to correctly identify the direction of heading from optic flow when a certain type of eye movement is simulated might be linked to the noncentrifugal structure of the resulting retinal flow field and to the neurophysiological anisotropies. Received: 1 April 1994/Accepted in revised form: 4 August 1994

13.
The retinal image flow a blowfly experiences in its daily life on the wing is determined by both the structure of the environment and the animal’s own movements. To understand the design of visual processing mechanisms, there is thus a need to analyse the performance of neurons under natural operating conditions. To this end, we recorded flight paths of flies outdoors and reconstructed what they had seen, by moving a panoramic camera along exactly the same paths. The reconstructed image sequences were later replayed on a fast, panoramic flight simulator to identified motion-sensitive neurons of the so-called horizontal system (HS) in the lobula plate of the blowfly, which are assumed to extract self-motion parameters from optic flow. We show that under real-life conditions HS-cells not only encode information about self-rotation, but are also sensitive to translational optic flow and, thus, indirectly signal information about the depth structure of the environment. These properties do not require an elaboration of the known model of these neurons, because the natural optic flow sequences generate, at least qualitatively, the same depth-related response properties when used as input to a computational HS-cell model and to real neurons.

14.
The accessory optic system and pretectum are highly conserved brainstem visual pathways that process the visual consequences of self-motion (i.e. optic flow) and generate the optokinetic response. Neurons in these nuclei have very large receptive fields in the contralateral eye, and exhibit direction-selectivity to large-field moving stimuli. Previous research on visual motion pathways in the geniculostriate system has employed "plaids" composed of two non-parallel sine-wave gratings to investigate the visual system's ability to detect the global direction of pattern motion as opposed to the direction of motion of the components within the plaids. In this study, using standard extracellular techniques, we recorded the responses of 47 neurons in the nucleus of the basal optic root of the accessory optic system and 49 cells in the pretectal nucleus lentiformis mesencephali of pigeons to large-field gratings and plaids. We found that most neurons were classified as pattern-selective (41-49%) whereas fewer were classified as component-selective (8-17%). There were no striking differences between nucleus of the basal optic root and lentiformis mesencephali neurons in this regard. These data indicate that most of the input to the optokinetic system is orientation-insensitive but a small proportion is orientation-selective. The implications for the connectivity of the motion processing system are discussed.
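Pattern versus component selectivity can be made concrete with the intersection-of-constraints rule: each grating constrains only the velocity component along its own normal, so the pattern velocity is the solution of a 2x2 linear system. A minimal sketch (my formulation, not tied to the recording methods above):

```python
import numpy as np

def pattern_velocity(n1, s1, n2, s2):
    """Solve v . n1 = s1 and v . n2 = s2 for the plaid velocity v.

    n1, n2: unit normals of the two gratings; s1, s2: their drift speeds.
    """
    N = np.array([n1, n2], dtype=float)
    return np.linalg.solve(N, np.array([s1, s2], dtype=float))

# Two gratings drifting at 45 deg either side of rightward, equal speed 1:
c, s = np.cos(np.deg2rad(45)), np.sin(np.deg2rad(45))
v = pattern_velocity([c, s], 1.0, [c, -s], 1.0)   # -> rightward, speed sqrt(2)
```

A pattern-selective cell (the majority in both nuclei above) behaves as if it computed `v`; a component-selective cell instead follows one grating's normal direction.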

15.
Heading estimation is vital to everyday navigation and locomotion. Despite extensive behavioral and physiological research on both visual and vestibular heading estimation over more than two decades, the accuracy of heading estimation has not yet been systematically evaluated. Therefore, human visual and vestibular heading estimation was assessed in the horizontal plane using a motion platform and stereo visual display. Heading angle was overestimated during forward movements and underestimated during backward movements in response to both visual and vestibular stimuli, indicating an overall multimodal bias toward lateral directions. Lateral biases are consistent with the overrepresentation of lateral preferred directions observed in neural populations that carry visual and vestibular heading information, including MSTd and otolith afferent populations. Due to this overrepresentation, population vector decoding yields patterns of bias remarkably similar to those observed behaviorally. Lateral biases are inconsistent with standard Bayesian accounts, which predict that estimates should be biased toward the most common straight-forward heading direction. Nevertheless, lateral biases may be functionally relevant. They effectively constitute a perceptual scale expansion around straight ahead which could allow for more precise estimation and provide a high-gain feedback signal to facilitate maintenance of straight-forward heading during everyday navigation and locomotion.
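The population-vector account of the lateral bias can be reproduced with a toy population (the cosine tuning and the degree of lateral over-representation are illustrative assumptions): adding extra units preferring lateral headings (±90 deg) pulls the decoded heading of a forward-quadrant stimulus away from straight ahead.

```python
import numpy as np

def decode(theta_deg, pref_deg):
    """Population-vector estimate from rectified cosine-tuned units."""
    pref = np.deg2rad(np.asarray(pref_deg))
    resp = np.maximum(0.0, np.cos(np.deg2rad(theta_deg) - pref))
    vec = np.array([np.sum(resp * np.cos(pref)),
                    np.sum(resp * np.sin(pref))])
    return np.rad2deg(np.arctan2(vec[1], vec[0]))

uniform = np.arange(0, 360, 1.0)            # unbiased reference population
lateral = np.concatenate([uniform,
                          np.full(90, 90.0),    # extra units at +90 deg
                          np.full(90, -90.0)])  # extra units at -90 deg

true_heading = 20.0                         # deg right of straight ahead
est_uniform = decode(true_heading, uniform)
est_lateral = decode(true_heading, lateral)  # pushed toward +90 deg
```

With the uniform population the decode is unbiased; with the laterally over-represented population the same 20 deg heading is decoded as noticeably more eccentric, i.e. the overestimation of forward heading angles described above.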

16.
An evolutionarily conserved system of small retinotopic neurons in dipteran insects, called bushy T-cells, provides information about directional motion to large collator neurons in the lobula plate. Physiological and anatomical features of these cells provide the basis for a model that is used to investigate requirements for generating optic flow selectivity in collators while allowing for evolutionary variations. This account focuses on the role of physiological tuning properties of T5 neurons. Various flow fields are defined as inputs to retinotopic arrays of T5 cells, the responses of which are mapped onto collators using innervation matrices that promote selectivity for flow type and position. Properties known or inferred from physiological and anatomical studies of neurons contributing to motion detection are incorporated into the model: broad tuning to local motion direction and the representation of each visual sampling unit by a quartet of small-field T5-like neurons with orthogonal preferred directions. The model predicts hitherto untested response properties of optic flow selective collators, and predicts that selectivity for a given flow field can be highly sensitive to perturbations in physiological properties of the motion detectors.

17.
We generated panoramic imagery by simulating a fly-like robot carrying an imaging sensor, moving in free flight through a virtual arena bounded by walls, and containing obstructions. Flight was conducted under closed-loop control by a bio-inspired algorithm for visual guidance with feedback signals corresponding to the true optic flow that would be induced on an imager (computed from the known kinematics and position of the robot relative to the environment). The robot had dynamics representative of a housefly-sized organism, although simplified to two-degree-of-freedom flight to generate uniaxial (azimuthal) optic flow on the retina in the plane of travel. Surfaces in the environment contained images of natural and man-made scenes that were captured by the moving sensor. Two bio-inspired motion detection algorithms and two computational optic flow estimation algorithms were applied to sequences of image data, and their performance as optic flow estimators was evaluated by estimating the mutual information between outputs and true optic flow in an equatorial section of the visual field. Mutual information for individual estimators at particular locations within the visual field was surprisingly low (less than 1 bit in all cases) and considerably poorer for the bio-inspired algorithms than for the man-made computational algorithms. However, mutual information between weighted sums of these signals and comparable sums of the true optic flow showed significant increases for the bio-inspired algorithms, whereas such improvement did not occur for the computational algorithms. Such summation is representative of the spatial integration performed by wide-field motion-sensitive neurons in the third optic ganglia of flies.
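The figure of merit above, mutual information between an estimator's output and the true optic flow, can be estimated from histograms. A minimal sketch with toy signals standing in for the real estimator outputs (the bin count and noise levels are my choices):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in histogram estimate of I(X; Y) in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                           # joint distribution
    px = pxy.sum(axis=1, keepdims=True)        # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                               # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(2)
true_flow = rng.normal(size=20000)
noisy_est = true_flow + 2.0 * rng.normal(size=20000)   # poor single estimator
better_est = true_flow + 0.5 * rng.normal(size=20000)  # e.g. after summation

mi_noisy = mutual_information(true_flow, noisy_est)
mi_better = mutual_information(true_flow, better_est)
```

Reducing estimator noise (here standing in for the spatial summation that helped the bio-inspired estimators) raises the measured mutual information, which is the effect the study quantified across the visual field.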

18.
Two strategies can guide walking to a stationary goal: (1) the optic-flow strategy, in which one aligns the direction of locomotion or "heading" specified by optic flow with the visual goal; and (2) the egocentric-direction strategy, in which one aligns the locomotor axis with the perceived egocentric direction of the goal and in which error results in optical target drift. Optic flow appears to dominate steering control in richly structured visual environments, whereas the egocentric-direction strategy prevails in visually sparse environments. Here we determine whether optic flow also drives visuo-locomotor adaptation in visually structured environments. Participants adapted to walking with the virtual-heading direction displaced 10 degrees to the right of the actual walking direction and were then tested with a normally aligned heading. Two environments, one visually structured and one visually sparse, were crossed in adaptation and test phases. Adaptation of the walking path was more rapid and complete in the structured environment; the negative aftereffect on path deviation was twice that in the sparse environment, indicating that optic flow contributes over and above target drift alone. Optic flow thus plays a central role in both online control of walking and adaptation of the visuo-locomotor mapping.

19.
Perception of objects and motions in the visual scene is one of the basic problems in the visual system. There exist 'What' and 'Where' pathways in the superior visual cortex, starting from the simple cells in the primary visual cortex. The former is able to perceive objects such as forms, color, and texture, and the latter perceives 'where', for example, the velocity and direction of spatial movement of objects. This paper explores brain-like computational architectures of visual information processing. We propose a visual perceptual model and a computational mechanism for training the perceptual model. The computational model is a three-layer network. The first layer is the input layer, which receives the stimuli from natural environments. The second layer is designed for representing the internal neural information. The connections between the first layer and the second layer, called the receptive fields of neurons, are self-adaptively learned based on the principle of sparse neural representation. To this end, we introduce the Kullback-Leibler divergence as the measure of independence between neural responses and derive the learning algorithm by minimizing the cost function. The proposed algorithm is applied to train the basis functions, namely receptive fields, which are localized, oriented, and bandpassed. The resultant receptive fields of neurons in the second layer have characteristics resembling those of simple cells in the primary visual cortex. Based on these basis functions, we further construct the third layer for perception of what and where in the superior visual cortex. The proposed model is able to perceive objects and their motions with high accuracy and strong robustness against additive noise. Computer simulation results in the final section show the feasibility of the proposed perceptual model and the high efficiency of the learning algorithm.



Copyright©北京勤云科技发展有限公司  京ICP备09084417号