Similar Articles
1.
Understanding how organismal design evolves in response to environmental challenges is a central goal of evolutionary biology. In particular, assessing the extent to which environmental requirements drive general design features among distantly related groups is a major research question. The visual system is a critical sensory apparatus that evolves in response to changing light regimes. In vertebrates, the optic tectum is the primary visual processing centre of the brain and yet it is unclear how or whether this structure evolves while lineages adapt to changes in photic environment. On one hand, dim‐light adaptation is associated with larger eyes and enhanced light‐gathering power that could require larger information processing capacity. On the other hand, dim‐light vision may evolve to maximize light sensitivity at the cost of acuity and colour sensitivity, which could require less processing power. Here, we use X‐ray microtomography and phylogenetic comparative methods to examine the relationships between diel activity pattern, optic morphology, trophic guild and investment in the optic tectum across the largest radiation of vertebrates—teleost fishes. We find that despite driving the evolution of larger eyes, enhancement of the capacity for dim‐light vision generally is accompanied by a decrease in investment in the optic tectum. These findings underscore the importance of considering diel activity patterns in comparative studies and demonstrate how vision plays a role in brain evolution, illuminating common design principles of the vertebrate visual system.

2.
Flying insects are able to fly adeptly in unpredictable environments. Their tiny brains contain neurons that are sensitive to visual motion, also called optic flow. Consequently, flying insects rely mainly on visual motion during flight maneuvers such as takeoff and landing, terrain following, tunnel crossing, lateral and frontal obstacle avoidance, and adjusting flight speed in a cluttered environment. Optic flow can be defined as the vector field of the apparent motion of objects, surfaces, and edges in a visual scene generated by the relative motion between an observer (an eye or a camera) and the scene. Translational optic flow is particularly interesting for short-range navigation because it depends on the ratio between (i) the relative linear speed of the visual scene with respect to the observer and (ii) the distance of the observer from obstacles in the surrounding environment, without any direct measurement of either speed or distance. In flying insects, the roll stabilization reflex and yaw saccades attenuate rotation at the eye level in roll and yaw, respectively (i.e. they cancel rotational optic flow), ensuring pure translational optic flow between two successive saccades. Our survey focuses on the feedback loops based on translational optic flow that insects employ for collision-free navigation. Over the next decade, optic flow is likely to be one of the most important visual cues for explaining flying insects' behaviors during short-range navigation maneuvers in complex tunnels. Conversely, the biorobotic approach can help to develop innovative flight control systems for flying robots, with the aim of mimicking flying insects' abilities and better understanding their flight.
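The translational-flow relationship described above can be written as a simple ratio: the angular speed of an image feature equals the observer's speed divided by the obstacle's distance, scaled by the sine of the feature's bearing from the heading direction. A minimal sketch of this relationship (the function and variable names are illustrative, not taken from the survey):

```python
import numpy as np

def translational_optic_flow(v, d, theta_deg):
    """Angular speed (rad/s) of an object seen by a purely translating observer.

    v         : observer's linear speed (m/s)
    d         : distance to the object (m)
    theta_deg : angle between the heading and the viewing direction (degrees)
    """
    theta = np.radians(theta_deg)
    return (v / d) * np.sin(theta)

# Halving the distance or doubling the speed doubles the perceived flow:
print(translational_optic_flow(1.0, 2.0, 90))  # 0.5 rad/s
print(translational_optic_flow(2.0, 2.0, 90))  # 1.0 rad/s
print(translational_optic_flow(1.0, 1.0, 90))  # 1.0 rad/s
```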

3.
Optic flow, the pattern of apparent motion elicited on the retina during movement, has been demonstrated to be widely used by animals living in the aerial habitat, whereas underwater optic flow has not been intensively studied so far. However, optic flow would also provide aquatic animals with valuable information about their own movement relative to the environment, even under conditions in which vision is generally thought to be drastically impaired, e.g. in turbid waters. Here, we tested underwater optic flow perception for the first time in a semi-aquatic mammal, the harbor seal, by simulating a forward movement on a straight path through a cloud of dots on an underwater projection. The translatory motion pattern expanded radially out of a singular point along the direction of heading, the focus of expansion. We assessed the seal's accuracy in determining the simulated heading in a task in which the seal had to judge whether a cross superimposed on the flow field was deviating from or congruent with the actual focus of expansion. The seal perceived optic flow and determined deviations from the simulated heading with a threshold of 0.6 deg of visual angle. Optic flow is thus a source of information that seals, fish and most likely aquatic species in general may rely on, e.g. for controlling locomotion and orientation under water. This leads to the notion that optic flow seems to be a tool universally used by any moving organism possessing eyes.
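For the simulated straight-ahead translation described above, every dot's image motion points radially away from the focus of expansion (FOE), and the seal's task amounts to judging the angular offset of a probe cross from that point. A minimal sketch under these assumptions (the names and the flat-projection geometry are illustrative, not taken from the study):

```python
import numpy as np

def radial_flow(points_xy, foe_xy, speed=1.0):
    """Image motion of dots for a simulated forward translation.

    points_xy : (N, 2) dot positions on the projection plane
    foe_xy    : (2,) focus of expansion (simulated heading)
    Returns (N, 2) flow vectors pointing radially away from the FOE.
    """
    return speed * (points_xy - foe_xy)

def probe_deviation_deg(probe_xy, foe_xy, viewing_distance):
    """Angular deviation of a probe cross from the FOE (deg of visual angle)."""
    offset = np.hypot(*(np.asarray(probe_xy) - np.asarray(foe_xy)))
    return np.degrees(np.arctan2(offset, viewing_distance))

flow = radial_flow(np.array([[0.2, 0.0], [0.0, -0.3]]), np.array([0.0, 0.0]))
print(flow)  # each vector points away from the FOE at the origin

# A probe 1 cm from the FOE viewed from 1 m corresponds to ~0.57 deg,
# close to the 0.6 deg threshold reported for the seal:
print(probe_deviation_deg((0.01, 0.0), (0.0, 0.0), 1.0))
```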

4.
Sensing is often implicitly assumed to be the passive acquisition of information. However, part of the sensory information is generated actively when animals move. For instance, humans shift their gaze actively in a sequence of saccades towards interesting locations in a scene. Likewise, many insects shift their gaze by saccadic turns of body and head, keeping their gaze fixed between saccades. Here we employ a novel panoramic virtual reality stimulator and show that motion computation in a blowfly visual interneuron is tuned to make efficient use of the characteristic dynamics of retinal image flow. The neuron is able to extract information about the spatial layout of the environment by utilizing intervals of stable vision resulting from the saccadic viewing strategy. The extraction is possible because the retinal image flow evoked by translation, containing information about object distances, is confined to low frequencies. This flow component can be derived from the total optic flow between saccades because the residual intersaccadic head rotations are small and encoded at higher frequencies. Information about the spatial layout of the environment can thus be extracted by the neuron in a computationally parsimonious way. These results on neuronal function, based on naturalistic, behaviourally generated optic flow, stand in stark contrast to the conclusion drawn from conventional visual stimuli that the neuron primarily represents a detector for yaw rotations of the animal.

5.
For optimal visual control of compensatory eye movements during locomotion it is necessary to distinguish the rotational and translational components of the optic flow field. Optokinetic eye movements can reduce the rotational component only, making the information contained in the translational flow readily available to the animal. We investigated optokinetic eye rotation in the marble rock crab, Pachygrapsus marmoratus, during translational movement, either by displacing the animal or its visual surroundings. Any eye movement in response to such stimuli is taken as an indication that the system is unable to separate the translational and the rotational components of the optic flow in a mathematically perfect way. When the crabs are translated within a pseudo-natural environment, eye movements are negligible, especially during sideways translation. When, however, crabs are placed in a gangway between two elongated rectangular sidewalls carrying dotted patterns that are translated back and forth, marked eye movements are elicited, depending on the translational velocity. To resolve this discrepancy, we tested several hypotheses about the underlying mechanisms, based either on detailed analysis of the optic flow or on whole-field integration. We found that the latter is sufficient to explain the efficient separation of translation and rotation by crabs in quasi-natural situations. Accepted: 6 May 1997

6.
Observers moving through a three-dimensional environment can use optic flow to determine their direction of heading. Existing heading algorithms use Cartesian flow fields in which image flow is the displacement of image features over time. I explore a heading algorithm that uses affine flow instead. The affine flow at an image feature is its displacement modulo an affine transformation defined by its neighborhood. Modeling the observer's instantaneous motion by a translation and a rotation about an axis through its eye, affine flow is tangent to the translational field lines on the observer's viewing sphere. These field lines form a radial flow field whose center is the direction of heading. The affine flow heading algorithm has characteristics that can be used to determine whether the human visual system relies on it. The algorithm is immune to observer rotation and arbitrary affine transformations of its input images; its accuracy improves with increasing variation in environmental depth; and it cannot recover heading in an environment consisting of a single plane because affine flow vanishes in this case. Translational field lines can also be approximated through differential Cartesian motion. I compare the performance of heading algorithms based on affine flow, differential Cartesian flow, and least-squares search.
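For comparison with the least-squares search mentioned above, the sketch below recovers the centre of a purely radial (rotation-free) flow field by linear least squares, using the constraint that each flow vector must be parallel to the line joining the heading point to the feature. This is a generic estimator given for illustration, not the paper's affine-flow algorithm:

```python
import numpy as np

def heading_from_radial_flow(pts, flow):
    """Least-squares focus of expansion of a (noisy) radial flow field.

    pts  : (N, 2) image positions (x, y)
    flow : (N, 2) flow vectors (u, v) at those positions
    For purely translational flow, (x - x0, y - y0) is parallel to (u, v),
    i.e. v*(x - x0) - u*(y - y0) = 0, which is linear in the heading (x0, y0).
    """
    u, v = flow[:, 0], flow[:, 1]
    A = np.column_stack([v, -u])
    b = v * pts[:, 0] - u * pts[:, 1]
    x0, y0 = np.linalg.lstsq(A, b, rcond=None)[0]
    return x0, y0

# Synthetic check: radial flow expanding from (0.2, -0.1) plus a little noise.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(200, 2))
flow = (pts - np.array([0.2, -0.1])) + 0.01 * rng.normal(size=(200, 2))
print(heading_from_radial_flow(pts, flow))  # approximately (0.2, -0.1)
```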

7.
A system for the back projection of computer-generated visual images onto a screen or screens that cover 240° of the horizontal visual field is described. Its applicability for the study of crab vision is tested by comparing the frequency response of the optokinetic response of the land crab, Cardisoma guanhumi, to sinusoidal oscillation of computer-generated striped patterns and a real striped drum. Significant differences were observed only at the low end of the frequency spectrum. The flexibility of computer-generated visual stimulation and its advantages for the study of optic flow are illustrated by experiments that: (a) demonstrate how well crabs separate the translational and rotational components of optic flow by showing compensatory eye movements to only the latter; (b) show that the ability to compensate for rotation is not impaired by combinations of rotation and translation; (c) show that motion parallax cues are used in addition to previously-described global cues for making the distinction between rotation and translation. Finally, the use of these methods in a successful search for visual interneurones sensitive to optic flow stimuli is demonstrated for the shore crab, Carcinus maenas.

9.
Human heading perception based on optic flow is not only accurate, it is also remarkably robust and stable. These qualities are especially apparent when observers move through environments containing other moving objects, which introduce optic flow that is inconsistent with observer self-motion and therefore uninformative about heading direction. Moving objects may also occupy large portions of the visual field and occlude regions of the background optic flow that are most informative about heading perception. The fact that heading perception is biased by no more than a few degrees under such conditions attests to the robustness of the visual system and warrants further investigation. The aim of the present study was to investigate whether recurrent, competitive dynamics among MSTd neurons that serve to reduce uncertainty about heading over time offer a plausible mechanism for capturing the robustness of human heading perception. Simulations of existing heading models that do not contain competitive dynamics yield heading estimates that are far more erratic and unstable than human judgments. We present a dynamical model of primate visual areas V1, MT, and MSTd based on that of Layton, Mingolla, and Browning, which is similar to the other models except that it includes recurrent interactions among model MSTd neurons. Competitive dynamics stabilize the model's heading estimate over time, even when a moving object crosses the future path. Soft winner-take-all dynamics enhance units that code a heading direction consistent with the time history and suppress responses to transient changes to the optic flow field. Our findings support recurrent competitive temporal dynamics as a crucial mechanism underlying the robustness and stability of heading perception.
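The competitive mechanism described above can be illustrated with a toy population of heading-tuned units whose recurrent, soft winner-take-all dynamics suppress transient evidence introduced by a moving object. This is a schematic sketch with illustrative parameters, not the published V1-MT-MSTd model:

```python
import numpy as np

def soft_wta_step(r, feedforward, dt=0.1, tau=1.0, inhibition=0.6):
    """One Euler step of a soft winner-take-all population.

    r           : current firing rates of heading-tuned units
    feedforward : instantaneous optic-flow evidence for each heading
    Each unit is driven by its input plus self-excitation and suppressed
    by the pooled activity of the whole population (recurrent competition).
    """
    pooled = inhibition * r.sum()
    drdt = (-r + np.maximum(feedforward + r - pooled, 0.0)) / tau
    return np.maximum(r + dt * drdt, 0.0)

# Steady evidence for heading unit 20, briefly corrupted by a moving object
# that adds spurious evidence at unit 30:
n = 64
evidence = np.zeros(n)
evidence[20] = 1.0
r = np.zeros(n)
for t in range(300):
    noisy = evidence.copy()
    if 100 <= t < 120:
        noisy[30] += 1.5          # transient, object-induced flow evidence
    r = soft_wta_step(r, noisy)
print(int(np.argmax(r)))          # 20: the transient does not capture the estimate
```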

10.
The optic flow generated when a person moves through the environment can be locally decomposed into several basic components, including radial, circular, translational and spiral motion. Since their analysis plays an important part in the visual perception and control of locomotion and posture, it is likely that some brain regions in the primate dorsal visual pathway are specialized to distinguish among them. The aim of this study is to explore the sensitivity to different types of egomotion-compatible visual stimulation in the human motion-sensitive regions of the brain. Event-related fMRI experiments, 3D motion and wide-field stimulation, functional localizers and brain mapping methods were used to study the sensitivity of six distinct motion areas (V6, MT, MST+, V3A, CSv and an Intra-Parietal Sulcus motion [IPSmot] region) to different types of optic flow stimuli. Results show that only areas V6, MST+ and IPSmot are specialized in distinguishing among the various types of flow patterns, with a high response to translational flow that was maximal in V6 and IPSmot and less marked in MST+. Given that during egomotion the translational optic flow conveys differential information about near and far external objects, areas V6 and IPSmot likely process visual egomotion signals to extract information about the relative distance of objects with respect to the observer. Since area V6 is also involved in distinguishing object-motion from self-motion, it could provide information about the location in space of moving and static objects during self-motion, particularly in a dynamically unstable environment.

11.
The control of self-motion is a basic but complex task for both technical and biological systems. Various algorithms have been proposed that allow the estimation of self-motion from the optic flow on the eyes. We show that two apparently very different approaches to solve this task, one technically and one biologically inspired, can be transformed into each other under certain conditions. One estimator of self-motion is based on a matched filter approach; it has been developed to describe the function of motion sensitive cells in the fly brain. The other estimator, the Koenderink and van Doorn (KvD) algorithm, was derived analytically with a technical background. If the distances to the objects in the environment can be assumed to be known, the two estimators are linear and equivalent, but are expressed in different mathematical forms. However, for most situations it is unrealistic to assume that the distances are known. Therefore, the depth structure of the environment needs to be determined in parallel to the self-motion parameters and leads to a non-linear problem. It is shown that the standard least mean square approach that is used by the KvD algorithm leads to a biased estimator. We derive a modification of this algorithm in order to remove the bias and demonstrate its improved performance by means of numerical simulations. For self-motion estimation it is beneficial to have a spherical visual field, similar to many flying insects. We show that in this case the representation of the depth structure of the environment derived from the optic flow can be simplified. Based on this result, we develop an adaptive matched filter approach for systems with a nearly spherical visual field. Then only eight parameters about the environment have to be memorized and updated during self-motion.
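When the distances to all objects are assumed known, the optic-flow equation is linear in the translation and rotation, so self-motion can be recovered by ordinary least squares. The sketch below illustrates that linear case on a spherical eye; it is the generic estimator, not the matched-filter model or the bias-corrected KvD variant derived in the paper, and the function names are illustrative:

```python
import numpy as np

def flow_on_sphere(d, dist, T, R):
    """Optic flow at viewing directions d (N, 3 unit vectors), given known
    distances dist (N,), translation T (3,) and rotation R (3,), using the
    standard rigid-motion flow model on a spherical eye."""
    mu = 1.0 / dist[:, None]
    trans = -mu * (T - (d @ T)[:, None] * d)   # translational component
    rot = -np.cross(R, d)                      # rotational component
    return trans + rot

def estimate_self_motion(d, dist, flow):
    """Least-squares estimate of (T, R) from flow, distances assumed known.
    Because the flow is linear in (T, R), the design matrix is built by
    probing the forward model with unit translations and rotations."""
    cols = []
    for k in range(3):
        e = np.zeros(3); e[k] = 1.0
        cols.append(flow_on_sphere(d, dist, e, np.zeros(3)).reshape(-1))
    for k in range(3):
        e = np.zeros(3); e[k] = 1.0
        cols.append(flow_on_sphere(d, dist, np.zeros(3), e).reshape(-1))
    A = np.column_stack(cols)
    x, *_ = np.linalg.lstsq(A, flow.reshape(-1), rcond=None)
    return x[:3], x[3:]

rng = np.random.default_rng(1)
d = rng.normal(size=(200, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
dist = rng.uniform(1, 5, size=200)
T_true, R_true = np.array([0.3, 0.0, 1.0]), np.array([0.0, 0.1, 0.0])
flow = flow_on_sphere(d, dist, T_true, R_true)
print(estimate_self_motion(d, dist, flow))  # recovers T_true and R_true
```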

12.
The optokinetic response in wild type and white zebra finches
Optic flow is a main source of information about self movement and the three-dimensional composition of the environment during locomotion. It is processed by the accessory optic system in all vertebrates. The optokinetic response is elicited by rotational optic flow, e.g. in a rotating drum lined with vertical stripes. We investigated here the effect of rotational optic flow on the optokinetic response in wild type and white zebra finches. The highest stimulus velocity eliciting an optokinetic response (upper velocity threshold) was dependent on stimulus direction and illumination level, but did not differ between the colour morphs. The upper velocity threshold was higher for temporal-to-nasal movement in monocularly exposed birds and symmetrical with binocular exposure. Its increase with illumination level followed Fechner's law and reached a plateau at about 560 lux. In bright daylight, white birds did not show optokinetic responses. We conclude that the altered wiring of the visual system of white birds has no influence on accessory optic system function. The failure of white birds to show an optokinetic response in bright daylight may be due to a substantial lack of inhibition within the visual system, as demonstrated earlier, which may enhance their sensitivity to glare.
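Fechner's law states that the response grows with the logarithm of stimulus intensity. A minimal sketch of the saturating relationship implied here, with the 560 lux plateau taken from the abstract and the other parameters purely illustrative:

```python
import numpy as np

def fechner_threshold(illuminance_lux, k=1.0, i0=0.1, plateau_lux=560.0):
    """Upper velocity threshold growing as k*ln(I/I0), clipped at the plateau."""
    i = np.minimum(illuminance_lux, plateau_lux)  # no further increase above ~560 lux
    return k * np.log(i / i0)

for lux in (1, 10, 100, 560, 1000):
    print(lux, round(float(fechner_threshold(lux)), 2))
```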

13.
Acquiring visual information through sight is essential to human learning and daily life, and blindness markedly reduces quality of life. People who lose their sight later in life owing to diseases such as retinitis pigmentosa, glaucoma and macular degeneration, or to ocular trauma caused by accidents or war, may recover partial vision, or accomplish complex everyday tasks, with the help of artificial vision aid systems. In some blind patients, part of the neural pathway of the visual system remains functional, so visual information can be delivered to the brain by stimulating the optic nerve with electrode arrays, or by placing electrode arrays on the visual cortex. In addition, external devices can help blind patients obtain information about their surroundings, for example by using artificial intelligence to convert visual scenes into voice instructions or tactile-array codes. This article reviews the current state of the various types of artificial vision aid systems, discusses their development trends, and proposes new ideas for implantable devices and wearable external devices.

14.
We generated panoramic imagery by simulating a fly-like robot carrying an imaging sensor, moving in free flight through a virtual arena bounded by walls and containing obstructions. Flight was conducted under closed-loop control by a bio-inspired algorithm for visual guidance with feedback signals corresponding to the true optic flow that would be induced on an imager (computed from the known kinematics and position of the robot relative to the environment). The robot had dynamics representative of a housefly-sized organism, although simplified to two-degree-of-freedom flight to generate uniaxial (azimuthal) optic flow on the retina in the plane of travel. Surfaces in the environment contained images of natural and man-made scenes that were captured by the moving sensor. Two bio-inspired motion detection algorithms and two computational optic flow estimation algorithms were applied to sequences of image data, and their performance as optic flow estimators was evaluated by estimating the mutual information between outputs and true optic flow in an equatorial section of the visual field. Mutual information for individual estimators at particular locations within the visual field was surprisingly low (less than 1 bit in all cases) and considerably poorer for the bio-inspired algorithms than for the man-made computational algorithms. However, mutual information between weighted sums of these signals and comparable sums of the true optic flow showed significant increases for the bio-inspired algorithms, whereas such improvement did not occur for the computational algorithms. Such summation is representative of the spatial integration performed by wide-field motion-sensitive neurons in the third optic ganglia of flies.
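Mutual information between an estimator's output and the true optic flow can be approximated from a joint histogram of paired samples. A minimal sketch of such an estimate (the binning and the Gaussian test signal are illustrative, not the procedure used in the study):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X; Y) in bits for paired samples x, y."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])))

# A noisy estimate of the true flow carries limited information about it:
rng = np.random.default_rng(2)
true_flow = rng.normal(size=100_000)
estimate = true_flow + 1.0 * rng.normal(size=100_000)      # signal-to-noise ratio of 1
print(round(mutual_information(true_flow, estimate), 2))   # roughly 0.5 bit, i.e. under 1 bit
```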

15.
By expanding on issues raised by D'Eath (1998), I address in this article three aspects of vision that are difficult to reproduce in the video- and computer-generated images used in experiments in which images of conspecifics or of predators are replayed to animals. The lack of depth cues derived from binocular stereopsis, from accommodation, and from motion parallax may be one of the reasons why animals do not respond to video displays in the same way as they do to real conspecifics or to predators. Part of the problem is the difficulty of reproducing the closed-loop nature of natural vision in video playback experiments. Every movement an animal makes has consequences for the pattern of stimulation on its retina, and this "optic flow" in turn carries information about both the animal's own movement and the three-dimensional structure of the environment. A further critical issue is the behavioural context that often determines what animals attend to but that may be difficult to induce or reproduce in an experimental setting. I illustrate this point by describing some visual behaviours in fiddler crabs, in which social and spatial context define which part of the visual field a crab attends to and which visual information is used to guide behaviour. I finally mention some aspects of natural illumination that may influence how animals perceive an object or a scene: shadows, specular reflections, and polarisation reflections. Received: 23 November 1999 / Received in revised form: 9 February 2000 / Accepted: 10 February 2000

16.
The retinal image flow a blowfly experiences in its daily life on the wing is determined by both the structure of the environment and the animal's own movements. To understand the design of visual processing mechanisms, there is thus a need to analyse the performance of neurons under natural operating conditions. To this end, we recorded flight paths of flies outdoors and reconstructed what they had seen by moving a panoramic camera along exactly the same paths. The reconstructed image sequences were later replayed on a fast, panoramic flight simulator to identified motion-sensitive neurons of the so-called horizontal system (HS) in the lobula plate of the blowfly, which are assumed to extract self-motion parameters from optic flow. We show that under real life conditions HS-cells not only encode information about self-rotation, but are also sensitive to translational optic flow and, thus, indirectly signal information about the depth structure of the environment. These properties do not require an elaboration of the known model of these neurons, because the natural optic flow sequences generate—at least qualitatively—the same depth-related response properties when used as input to a computational HS-cell model and to real neurons.

17.
An evolutionarily conserved system of small retinotopic neurons in dipteran insects, called bushy T-cells, provides information about directional motion to large collator neurons in the lobula plate. Physiological and anatomical features of these cells provide the basis for a model that is used to investigate requirements for generating optic flow selectivity in collators while allowing for evolutionary variations. This account focuses on the role of physiological tuning properties of T5 neurons. Various flow fields are defined as inputs to retinotopic arrays of T5 cells, the responses of which are mapped onto collators using innervation matrices that promote selectivity for flow type and position. Properties known or inferred from physiological and anatomical studies of neurons contributing to motion detection are incorporated into the model: broad tuning to local motion direction and the representation of each visual sampling unit by a quartet of small-field T5-like neurons with orthogonal preferred directions. The model predicts hitherto untested response properties of optic flow selective collators, and predicts that selectivity for a given flow field can be highly sensitive to perturbations in physiological properties of the motion detectors.
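The mapping from retinotopic T5 quartets to a flow-selective collator can be sketched as a weighted sum over local direction channels. The toy example below uses an innervation matrix templated on rightward translation; it is schematic and does not reproduce the published model's innervation matrices or tuning curves:

```python
import numpy as np

def t5_quartet(local_flow):
    """Half-wave-rectified responses of four T5-like cells with preferred
    directions right, up, left, down, given local flow vectors (N, 2)."""
    dirs = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], dtype=float)
    return np.maximum(local_flow @ dirs.T, 0.0)

def collator_response(t5_rates, innervation):
    """Weighted pooling of retinotopic T5-like responses by one collator.

    t5_rates    : (N, 4) rates of the four direction channels at N sampling units
    innervation : (N, 4) synaptic weights, the collator's "template" flow field
    """
    return float(np.sum(t5_rates * innervation))

# A collator templated on rightward whole-field motion responds more strongly
# to that flow than to an expansion-like field (leftward on one side,
# rightward on the other):
N = 100
rightward = np.tile([1.0, 0.0], (N, 1))
template = t5_quartet(rightward)
mixed = np.vstack([np.tile([-1.0, 0.0], (N // 2, 1)), np.tile([1.0, 0.0], (N // 2, 1))])
print(collator_response(t5_quartet(rightward), template))  # 100.0
print(collator_response(t5_quartet(mixed), template))      # 50.0
```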

18.
Horseshoe crabs use vision to find mates. They can reliably detect objects resembling potential mates under a variety of lighting conditions. To understand how they achieve this remarkable performance, we constructed a cell-based realistic model of the lateral eye to compute the ensembles of optic nerve activity ("neural images") it transmits to the brain. The neural images reveal a robust encoding of mate-like objects that move underwater during the day. The neural images are much less clear at night, even though the eyes undergo large circadian increases of sensitivity that nearly compensate for the millionfold decrease in underwater lighting after sundown. At night the neural images are noisy, dominated by bursts of nerve impulses from random photon events that occur at low nighttime levels of illumination. Deciphering the eye's input to the brain begins at the first synaptic level with low-pass temporal and spatial filtering. Both neural filtering mechanisms improve the signal-to-noise properties of the eye's input, yielding clearer neural images of potential mates, especially at night. Insights about visual processing by the relatively simple visual system of Limulus may aid in the design of robotic sensors for the marine environment.
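The benefit of the low-pass filtering mentioned above can be illustrated with a first-order temporal filter applied to a slow signal buried in photon-like noise. This is a generic sketch with illustrative parameters, not the cell-based Limulus eye model:

```python
import numpy as np

def lowpass(signal, tau=10.0, dt=1.0):
    """First-order (exponential) low-pass filter, a stand-in for the
    temporal filtering at the eye's first synaptic level."""
    alpha = dt / (tau + dt)
    out = np.empty_like(signal)
    acc = signal[0]
    for i, s in enumerate(signal):
        acc += alpha * (s - acc)
        out[i] = acc
    return out

def snr_db(x, ref):
    """Signal-to-noise ratio of x relative to the clean reference, in dB."""
    return 10 * np.log10(np.var(ref) / np.var(x - ref))

# A slow "mate-like" signal buried in photon-shot-like noise:
rng = np.random.default_rng(3)
t = np.arange(2000)
signal = np.sin(2 * np.pi * t / 500)                 # slow object passage
noisy = signal + rng.normal(scale=2.0, size=t.size)  # random photon events
filtered = lowpass(noisy, tau=25.0)
print(round(snr_db(noisy, signal), 1), round(snr_db(filtered, signal), 1))
# Filtering raises the SNR from well below 0 dB to a clearly positive value.
```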

19.
The shift from a diurnal to a nocturnal lifestyle in vertebrates is generally associated with either enhanced visual sensitivity or a decreased reliance on vision. Within birds, most studies have focused on differences in the visual system across species with respect to nocturnality and diurnality. The critically endangered Kakapo (Strigops habroptilus), a parrot endemic to New Zealand, is an example of a species that has evolved a nocturnal lifestyle in an otherwise diurnal lineage, but nothing is known about its visual system. Here, we provide a detailed morphological analysis of the orbits, brain, eye, and retina of the Kakapo and comparisons with other birds. Morphometric analyses revealed that the Kakapo's orbits are significantly more convergent than those of other parrots, suggesting an increased binocular overlap in the visual field. The Kakapo exhibits an eye shape that is consistent with that of other nocturnal birds, including owls and nightjars, but is also within the range of the diurnal parrots. With respect to the brain, the Kakapo has a significantly smaller optic nerve and tectofugal visual pathway. Specifically, the optic tectum, nucleus rotundus and entopallium were significantly reduced in relative size compared to other parrots. There was no apparent reduction of the thalamofugal visual pathway. Finally, the retinal morphology of the Kakapo is similar to that of both diurnal and nocturnal birds, suggesting a retina that is specialised for a crepuscular niche. Overall, this suggests that the Kakapo has enhanced light sensitivity, poor visual acuity and a larger binocular field than other parrots. We conclude that the Kakapo possesses a visual system unlike that of either strictly nocturnal or diurnal birds and therefore does not adhere to the traditional view of the evolution of nocturnality in birds.

20.
As an important component of insect communities, nocturnal insects have evolved sensory mechanisms suited to the environments they inhabit. It is generally assumed that nocturnal insects explore their surroundings mainly through olfaction and mechanoreception, and that their visual organs have degenerated or lost function. In recent years, with the application of new biological techniques such as infrared night-vision imaging, electroretinogram (ERG) recording and visual neurophysiology, insect visual ecology has made breakthrough progress. Since 2002, nocturnal insects such as moths, bees and dung beetles have successively been shown to have evolved remarkable dim-light vision: at night (illuminance below 0.3 lx) they can still perceive specific visual properties of target objects (brightness, colour, shape, size, contrast, polarised light and motion state) as clearly and accurately as in bright daylight, demonstrating the great potential of vision in regulating the behaviour of nocturnal insects. In addition, these nocturnal insects have evolved corresponding morphological and physiological features of the compound-eye pupil, ommatidial focal length, rhabdoms and pigment granules that increase optical sensitivity and adapt them to dim nocturnal environments. Given that research on the dim-light visual behaviour of nocturnal insects and the underlying visual adaptation mechanisms is still in its infancy and limited to a few flower-visiting or dung-feeding species, we suggest strengthening research in the following areas: (1) dim-light vision of major nocturnal agricultural pests and its applications; (2) the optical structure of atypical superposition compound eyes and their adaptation to dim-light environments; (3) visual adaptation mechanisms by which nocturnal insects respond to dim-light environments; and (4) development of novel pest-control techniques based on the dim-light visual behaviour of nocturnal insects.
