Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
As animals travel through the environment, powerful reflexes help stabilize their gaze by actively maintaining head and eyes in a level orientation. Gaze stabilization reduces motion blur and prevents image rotations. It also assists in depth perception based on translational optic flow. Here we describe side-to-side flight manoeuvres in honeybees and investigate how the bees’ gaze is stabilized against rotations during these movements. We used high-speed video equipment to record flight paths and head movements in honeybees visiting a feeder. We show that during their approach, bees generate lateral movements with a median amplitude of about 20 mm. These movements occur with a frequency of up to 7 Hz and are generated by periodic roll movements of the thorax with amplitudes of up to ±60°. During such thorax roll oscillations, the head is held close to horizontal, thereby minimizing rotational optic flow. By having bees fly through an oscillating, patterned drum, we show that head stabilization is based mainly on visual motion cues. Bees exposed to a continuously rotating drum, however, hold their head fixed at an oblique angle. This result shows that although gaze stabilization is driven by visual motion cues, it is limited by other mechanisms, such as the dorsal light response or gravity reception.

2.
Barn owls exhibit a rich repertoire of head movements before taking off for prey capture. These movements occur mainly at light levels that allow for the visual detection of prey. To investigate these movements and their functional relevance, we filmed the pre-attack behavior of barn owls. Off-line image analysis enabled reconstruction of all six degrees of freedom of head movements. Three categories of head movements were observed: fixations, head translations and head rotations. The observed rotations contained a translational component. Head rotations did not follow Listing’s law, but could be well described by a second-order surface, which indicated that they are in close agreement with Donders’ law. Head translations did not contain any significant rotational components. Translations were further segmented into straight-line and curved paths. Translations along an axis perpendicular to the line of sight were similar to peering movements observed in other animals. We suggest that these basic motion elements (fixations, head rotations, translations along a straight line, and translation along a curved trajectory) may be combined to form longer and more complex behavior. We speculate that these head movements mainly underlie estimation of distance during prey capture.

3.
Understanding motion perception continues to be the subject of much debate, a central challenge being to account for why the speeds and directions seen accord with neither the physical movements of objects nor their projected movements on the retina. Here we investigate the varied perceptions of speed that occur when stimuli moving across the retina traverse different projected distances (the speed-distance effect). By analyzing a database of moving objects projected onto an image plane we show that this phenomenology can be quantitatively accounted for by the frequency of occurrence of image speeds generated by perspective transformation. These results indicate that speed-distance effects are determined empirically from accumulated past experience with the relationship between image speeds and moving objects.
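The perspective relationship underlying this account can be illustrated with a toy simulation (the focal length, depth range and speed range below are assumptions for illustration, not values from the study): under a pinhole projection, the image speed of a transversely moving object falls off as 1/distance, so a population of physically similar moving objects generates a characteristic frequency distribution of image speeds.

```python
import numpy as np

rng = np.random.default_rng(0)
f = 1.0                      # focal length (arbitrary units, assumed)
n = 100_000

# Sample objects uniformly in depth, each with a random transverse speed.
z = rng.uniform(1.0, 20.0, n)          # depth from the image plane
v = rng.uniform(0.0, 2.0, n)           # physical transverse speed

# Pinhole projection: image speed scales with 1/z for transverse motion.
image_speed = f * v / z

# Conditioning on depth shows the speed-distance relationship: identical
# physical speeds generate systematically slower image motion at distance.
near = image_speed[z < 5.0].mean()
far = image_speed[z > 15.0].mean()
print(near > far)   # True
```

Accumulated statistics of this kind are what the abstract proposes the visual system draws on when assigning perceived speeds.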

4.
Many flying insects, such as flies, wasps and bees, pursue a saccadic flight and gaze strategy. This behavioral strategy is thought to separate the translational and rotational components of self-motion and, thereby, to reduce the computational effort needed to extract information about the environment from the retinal image flow. Because of the distinguishing dynamic features of this active flight and gaze strategy of insects, the present study systematically analyzes the spatiotemporal statistics of image sequences generated during saccades and intersaccadic intervals in cluttered natural environments. We show that, in general, rotational movements with saccade-like dynamics elicit fluctuations and overall changes in brightness, contrast and spatial frequency up to two orders of magnitude larger than those caused by translational movements at velocities characteristic of insects. Distinct changes in image parameters during translations are only caused by nearby objects. Image analysis based on larger patches in the visual field reveals smaller fluctuations in brightness and spatial frequency composition compared to small patches. The temporal structure and extent of these changes in image parameters define the temporal constraints imposed on signal processing performed by the insect visual system under behavioral conditions in natural environments.
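The scale of this rotation/translation asymmetry can be sketched with a toy one-dimensional panorama (the synthetic scene and the shift magnitudes are assumptions, not the study's imagery): a saccade-like image shift of tens of degrees produces brightness changes far larger than a degree-scale intersaccadic shift.

```python
import numpy as np

x = np.arange(360)          # azimuth in degrees, 1 sample per degree
# Smooth synthetic panorama: a few spatial frequencies with 1/k amplitudes.
scene = sum(np.sin(2 * np.pi * k * x / 360) / k for k in range(1, 6))

def rms_change(shift_deg):
    """RMS brightness change when the panorama rotates by shift_deg."""
    return float(np.sqrt(np.mean((np.roll(scene, shift_deg) - scene) ** 2)))

# A saccade-like turn moves the image tens of degrees between frames;
# intersaccadic residual rotation moves it ~1 degree. The induced
# brightness change differs by more than an order of magnitude.
print(rms_change(30) / rms_change(1) > 10)   # True
```

A real analysis would use natural panoramas and also track contrast and spatial-frequency statistics, but the ordering of effects is the same.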

5.
Discovering that a shrimp can flick its eyes over to a fish and follow up by tracking it or flicking back to observe something else implies a ‘primate-like’ awareness of the immediate environment that we do not normally associate with crustaceans. For several reasons, stomatopods (mantis shrimp) do not fit the general mould of their subphylum, and here we add saccadic, acquisitional eye movements to their repertoire of unusual visual capabilities. Optically, their apposition compound eyes contain an area of heightened acuity, in some ways similar to the fovea of vertebrate eyes. Using rapid eye movements of up to several hundred degrees per second, objects of interest are placed under the scrutiny of this area. While other arthropod species, including insects and spiders, are known to possess and use acute zones in similar saccadic gaze relocations, stomatopods are the only crustacean known with such abilities. Differences among species exist, generally reflecting both the eye size and lifestyle of the animal, with the larger-eyed more sedentary species producing slower saccades than the smaller-eyed, more active species. Possessing the ability to rapidly look at and assess objects is ecologically important for mantis shrimps, as their lifestyle is, by any standards, fast, furious and deadly.

6.
In experiments described in the literature, objects presented to restrained goldfish failed to induce eye movements like fixation and/or tracking. We show here that eye movements can be induced only if the background (visual surround) is not stationary relative to the fish but moving. We investigated the influence of background motion on eye movements in the range of angular velocities of 5–20° s⁻¹. The response to presentation of an object is a transient shift in mean horizontal eye position which lasts for some 10 s. If an object is presented in front of the fish, the eyes move in a direction such that it is seen more or less symmetrically by both eyes. If it is presented at ±70° from the fish's long axis, the eye on the side of the object moves so that the object falls more centrally on its retina. During these object-induced eye responses the typical optokinetic nystagmus, with an amplitude of some 5° and alternating fast and slow phases, is maintained, and the eye velocity during the slow phase is not modified by presentation of the object. Presenting an object in front of stationary or moving backgrounds leads to transient suppression of respiration, which shows habituation to repeated object presentations. Accepted: 14 April 2000

7.
Using video recordings of hens, Gallus gallus domesticus, as they approached different kinds of objects, I examined how change in object distance is associated with a change from lateral to binocular viewing. The birds tended to view distant objects laterally while they preferentially viewed objects less than 20–30 cm away frontally; this was true whether they were looking at another bird or at an inanimate object. However, as well as switching between lateral and frontal viewing, the hens also swung their heads from side to side with movements so large that the same object appeared to be viewed with completely different parts of the retina, and even with different eyes, in rapid succession. When confronted with a novel object, the hens walked more slowly but continued to show large head movements. This suggests that, unlike mammals, which gaze fixedly at novel objects, hens investigate them by moving the head and looking at them with different, specialized, parts of their eyes. Many aspects of bird behaviour, such as search image formation, vigilance and visual discriminations, may be affected by the way they move the head and eyes. Copyright 2002 The Association for the Study of Animal Behaviour. Published by Elsevier Science Ltd. All rights reserved.

8.
Although considerable effort has been devoted to investigating how birds migrate over large distances, surprisingly little is known about how they tackle so successfully the moment-to-moment challenges of rapid flight through cluttered environments [1]. It has been suggested that birds detect and avoid obstacles [2] and control landing maneuvers [3-5] by using cues derived from the image motion that is generated in the eyes during flight. Here we investigate the ability of budgerigars to fly through narrow passages in a collision-free manner, by filming their trajectories during flight in a corridor where the walls are decorated with various visual patterns. The results demonstrate, unequivocally and for the first time, that birds negotiate narrow gaps safely by balancing the speeds of image motion experienced by the two eyes, and that the speed of flight is regulated by monitoring the speed of image motion experienced by the two eyes. These findings have close parallels with those previously reported for flying insects [6-13], suggesting that some principles of visual guidance may be shared by all diurnal, flying animals.
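A minimal sketch of the flow-balancing idea (the corridor geometry, speeds and gain below are assumptions for illustration, not the authors' model): for forward flight, the translational image speed on each eye is roughly forward speed divided by the distance to that wall, so an agent that steers away from the eye experiencing faster image motion converges to the corridor midline.

```python
# Assumed geometry: corridor of width W; agent flies forward at speed V
# with lateral offset x from the midline.
W, V = 1.0, 2.0
x = 0.3                       # initial lateral offset (assumed)
gain, dt = 0.05, 0.01

for _ in range(2000):
    omega_left = V / (W / 2 + x)      # farther wall -> slower image motion
    omega_right = V / (W / 2 - x)     # nearer wall -> faster image motion
    # Steer away from the side with faster image motion (flow balancing).
    x -= gain * (omega_right - omega_left) * dt

print(round(x, 3))   # 0.0 (converged to the midline)
```

The same error signal, summed rather than differenced across the two eyes, could serve the abstract's second finding of regulating flight speed, since total image motion grows as the passage narrows.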

9.
Hand-eye coordination during sequential tasks. (Cited by: 4; self-citations: 0; cited by others: 4)
The small angle subtended by the human fovea places a premium on the ability to quickly and accurately direct the gaze to targets of interest. Thus the resultant saccadic eye fixations are a very instructive behaviour, revealing much about the underlying cognitive mechanisms that guide them. Of particular interest are the eye fixations used in hand-eye coordination. Such coordination has been extensively studied for single movements from a source location to a target location. In contrast, we have studied multiple fixations where the sources and targets are a function of a task and chosen dynamically by the subject according to task requirements. The task chosen is a copying task: subjects must copy a figure made up of contiguous coloured blocks as fast as possible. The main observation is that although eye fixations are used for the terminal phase of hand movements, they are used for other tasks before and after that phase. The analysis of the spatial and temporal details of these fixations suggests that the underlying decision process that moves the eyes leaves key decisions until just before they are required.

10.
In contradistinction to conventional wisdom, we propose that retinal image slip of a visual scene (optokinetic pattern, OP) does not constitute the only crucial input for visually induced percepts of self-motion (vection). Instead, the hypothesis is investigated that there are three input factors: 1) OP retinal image slip, 2) motion of the ocular orbital shadows across the retinae, and 3) smooth pursuit eye movements (efference copy). To test this hypothesis, we visually induced percepts of sinusoidal rotatory self-motion (circular vection, CV) in the absence of vestibular stimulation. Subjects were presented with three concurrent stimuli: a large visual OP, a fixation point to be pursued with the eyes (both projected in superposition on a semi-circular screen), and a dark window frame placed close to the eyes to create artificial visual field boundaries that simulate ocular orbital rim boundary shadows, but which could be moved across the retinae independently of eye movements. In different combinations these stimuli were independently moved or kept stationary. When moved together (horizontally and sinusoidally around the subject's head), they did so in precise temporal synchrony at 0.05 Hz. The results show that the occurrence of CV requires retinal slip of the OP and/or relative motion between the orbital boundary shadows and the OP. On the other hand, CV does not develop when the two retinal slip signals equal each other (no relative motion) and concur with pursuit eye movements (as is the case, e.g., when we follow with the eyes the motion of a target on a stationary visual scene). The findings were formalized in terms of a simulation model. In the model, two signals coding relative motion between OP and head are fused and fed into the mechanism for CV: a visuo-oculomotor signal, derived from OP retinal slip and the eye-movement efference copy, and a purely visual signal of relative motion between the orbital rims (head) and the OP. The latter signal is also used, together with a version of the oculomotor efference copy, by a mechanism that suppresses CV at a later stage of processing in conditions in which the retinal slip signals are self-generated by smooth pursuit eye movements.

11.
Rapid orientating movements of the eyes are believed to be controlled ballistically. The mechanism underlying this control is thought to involve a comparison between the desired displacement of the eye and an estimate of its actual position (obtained from the integration of the eye velocity signal). This study shows, however, that under certain circumstances fast gaze movements may be controlled quite differently and may involve mechanisms which use visual information to guide movements prospectively. Subjects were required to make large gaze shifts in yaw towards a target whose location and motion were unknown prior to movement onset. Six of those tested demonstrated remarkable accuracy when making gaze shifts towards a target that appeared during their ongoing movement. In fact their level of accuracy was not significantly different from that shown when they performed a 'remembered' gaze shift to a known stationary target (F(3,15) = 0.15, p > 0.05). The lack of a stereotypical relationship between the skew of the gaze velocity profile and movement duration indicates that on-line modifications were being made. It is suggested that a fast route from the retina to the superior colliculus could account for this behaviour and that models of oculomotor control need to be updated.

12.
Honeybees fixed in small tubes scan an object within the range of the antennae by touching it briefly and frequently. In our experiments the animals were able to scan an object for several minutes with the antennae. After moving the object out of the range of the antennae, the animals showed antennal movements for several minutes that were correlated with the position of the removed object. These changes of antennal movements are called “behavioural plasticity” and are interpreted as a form of motor learning. Bees showed behavioural plasticity only for objects with relatively large surfaces. Plasticity was more pronounced in bees whose compound eyes were occluded. Behavioural plasticity was related to the duration of object presentation. Repeated presentations of the object increased the degree of plasticity. After presentation durations of 30 min the animals showed a significant increase of antennal positions related to the surface of the object and avoidance of areas corresponding to the edges. Behavioural plasticity was compared with reward-dependent learning by conditioning bees to objects. The results of motor learning and reward-dependent conditioning suggest that bees have tactile spatial memory. Accepted: 13 May 1997

13.
Appropriate response to companions’ emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in their form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore whether domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs’ gaze fixation distribution among the facial features (eyes, midface and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant and neutral). We found that dogs’ gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but eyes were the most probable targets of the first fixations and gathered longer looking durations than the mouth regardless of the viewed expression. Examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but on the interpretation of the composition formed by eyes, midface and mouth. Dogs evaluated social threat rapidly, and this evaluation led to an attentional bias that depended on the depicted species: threatening conspecifics’ faces evoked heightened attention, while threatening human faces instead evoked an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways. Both of these mechanisms may have an adaptive significance for domestic dogs. The findings provide a novel perspective on understanding the processing of emotional expressions and sensitivity to social threat in non-primates.

14.
For natural scenes, attention is frequently quantified either by performance during rapid presentation or by gaze allocation during prolonged viewing. Both paradigms operate on different time scales, and tap into covert and overt attention, respectively. To compare these, we ask some observers to detect targets (animals/vehicles) in rapid sequences, and others to freely view the same target images for 3 s, while their gaze is tracked. In some stimuli, the target's contrast is modified (increased/decreased) and its background modified either in the same or in the opposite way. We find that increasing target contrast relative to the background increases fixations and detection alike, whereas decreasing target contrast and simultaneously increasing background contrast has little effect. Contrast increase for the whole image (target + background) improves detection, decrease worsens detection, whereas fixation probability remains unaffected by whole-image modifications. Object-unrelated local increase or decrease of contrast attracts gaze, but less than actual objects, supporting a precedence of objects over low-level features. Detection and fixation probability are correlated: the more likely a target is detected in one paradigm, the more likely it is fixated in the other. Hence, the link between overt and covert attention, which has been established in simple stimuli, transfers to more naturalistic scenarios.

15.
We investigated coordinated movements between the eyes and head (“eye-head coordination”) in relation to vision for action. Several studies have measured eye and head movements during a single gaze shift, focusing on the mechanisms of motor control during eye-head coordination. However, in everyday life, gaze shifts occur sequentially and are accompanied by movements of the head and body. Under such conditions, visual cognitive processing influences eye movements and might also influence eye-head coordination because sequential gaze shifts include cycles of visual processing (fixation) and data acquisition (gaze shifts). In the present study, we examined how the eyes and head move in coordination during visual search in a large visual field. Subjects moved their eyes, head, and body without restriction inside a 360° visual display system. We found patterns of eye-head coordination that differed from those observed in single gaze-shift studies. First, we frequently observed multiple saccades during one continuous head movement, and the contribution of head movement to gaze shifts increased as the number of saccades increased. This relationship between head movements and sequential gaze shifts suggests eye-head coordination over several saccade-fixation sequences; this could be related to cognitive processing because saccade-fixation cycles are the result of visual cognitive processing. Second, distribution bias of eye position during gaze fixation was highly correlated with head orientation. The distribution peak of eye position was biased in the same direction as head orientation. This influence of head orientation suggests that eye-head coordination is involved in gaze fixation, when the visual system processes retinal information. This further supports the role of eye-head coordination in visual cognitive processing.

16.
The coordination of visual attention among social partners is central to many components of human behavior and human development. Previous research has focused on one pathway to the coordination of looking behavior by social partners, gaze following. The extant evidence shows that even very young infants follow the direction of another's gaze but they do so only in highly constrained spatial contexts because gaze direction is not a spatially precise cue as to the visual target and not easily used in spatially complex social interactions. Our findings, derived from the moment-to-moment tracking of eye gaze of one-year-olds and their parents as they actively played with toys, provide evidence for an alternative pathway, through the coordination of hands and eyes in goal-directed action. In goal-directed actions, the hands and eyes of the actor are tightly coordinated both temporally and spatially, and thus, in contexts including manual engagement with objects, hand movements and eye movements provide redundant information about where the eyes are looking. Our findings show that one-year-olds rarely look to the parent's face and eyes in these contexts but rather infants and parents coordinate looking behavior without gaze following by attending to objects held by the self or the social partner. This pathway, through eye-hand coupling, leads to coordinated joint switches in visual attention and to an overall high rate of looking at the same object at the same time, and may be the dominant pathway through which physically active toddlers align their looking behavior with a social partner.

17.
The LGMD2 belongs to a group of giant movement-detecting neurones which have fan-shaped arbors in the lobula of the locust optic lobe and respond to movements of objects. One of these neurones, the LGMD1, has been shown to respond directionally to movements of objects in depth, generating vigorous, maintained spike discharges during object approach. Here we compare the responses of the LGMD2 neurone with those of the LGMD1 to simulated movements of objects in depth and examine different image cues which could allow the LGMD2 to distinguish approaching from receding objects. In the absence of stimulation, the LGMD2 has a resting discharge of 10–40 spikes s⁻¹ compared with <1 spike s⁻¹ for the LGMD1. The most powerful excitatory stimulus for the LGMD2 is a dark object approaching the eye. Responses to approaching objects are suppressed by wide-field movements of the background. Unlike the LGMD1, the LGMD2 is not excited by the approach of light objects; it specifically responds to movement of edges in the light-to-dark direction. Both neurones rely on the same monocular image cues to distinguish approaching from receding objects: an increase in the velocity with which edges of images travel over the eye; and an increase in the extent of edges in the image during approach. Accepted: 23 October 1996

18.
This study examined behavioral strategies for texture discrimination by echolocation in free-flying bats. Big brown bats, Eptesicus fuscus, were trained to discriminate a smooth 16 mm diameter object (S+) from a size-matched textured object (S−), both of which were tethered in random locations in a flight room. The bat’s three-dimensional flight path was reconstructed using stereo images from high-speed video recordings, and the bat’s sonar vocalizations were recorded for each trial and analyzed off-line. A microphone array permitted reconstruction of the sonar beam pattern, allowing us to study the bat’s directional gaze and inspection of the objects. Bats learned the discrimination, but performance varied with S−. In acoustic studies of the objects, the S+ and S− stimuli were ensonified with frequency-modulated sonar pulses. Mean intensity differences between S+ and S− were within 4 dB. Performance data, combined with analyses of echo recordings, suggest that the big brown bat listens to changes in sound spectra from echo to echo to discriminate between objects. Bats adapted their sonar calls as they inspected the stimuli, and their sonar behavior resembled that of animals foraging for insects. Analysis of sonar beam-directing behavior in certain trials clearly showed that the bat sequentially inspected S+ and S−.

19.
Ever since the Renaissance, speaking about paintings has been a fundamental approach for beholders, especially experts. However, it is unclear whether and how speaking about art modifies the way we look at it, and this has not yet been empirically tested. The present study investigated, to the best of our knowledge for the first time, in what way speaking modifies the patterns of fixations and gaze movements while looking at paintings. Ninety-nine university students looked at four paintings, selected to cover different art-historical typologies, for periods of 15 minutes each while gaze-movement data were recorded. After 10 minutes, the participants of the experimental group were asked open questions about the painting. Speaking dramatically reduced the duration of fixations and the painting area covered by fixations while at the same time increasing the frequency of fixations, gaze length and the number of repeated transitions between fixation clusters. These results suggest that the production of texts as well-organised sequences of information structures the gazes of art beholders by making them quicker, more focused and better connected.

20.
The influence of Brownian motion on marine bacteria was examined. Due to their small size, marine bacteria rotate up to 1,400 degrees in one second. This rapid rotation makes directional swimming difficult or impossible, as a bacterium may point in a particular direction for only a few tens of milliseconds on average. Some directional movement, however, was found to be possible if swimming speed is sufficiently great, over approximately 100 μm s⁻¹. This led to the testable hypothesis that marine bacteria with radii less than about 0.75 μm should exceed this speed. The result of the increased speed is that marine bacteria may spend in excess of 10% of their total energy budget on movement. This expenditure is 100 times greater than values for enteric bacteria, and indicates that marine bacteria are likely to be immotile below critical size-specific nutrient concentrations.
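The strong size dependence follows from the Stokes-Einstein-Debye relation for rotational diffusion of a sphere, D_r = k_B·T/(8πηr³). A rough numeric sketch (the temperature and viscosity are assumed values, and a bacterium is approximated as a sphere; the abstract's "up to 1,400°" presumably reflects finely sampled accumulated rotation, which exceeds the net RMS displacement computed here):

```python
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 298.0                 # temperature, K (assumed)
eta = 1.0e-3              # viscosity of seawater, Pa*s (approximate)

def rot_diffusion(radius_m):
    """Stokes-Einstein-Debye rotational diffusion coefficient (rad^2/s)
    for a sphere, used here as a rough model of a bacterium."""
    return k_B * T / (8 * math.pi * eta * radius_m ** 3)

def rms_rotation_deg(radius_m, t=1.0):
    """RMS angular displacement about one axis after time t, in degrees."""
    return math.degrees(math.sqrt(2 * rot_diffusion(radius_m) * t))

# The r^-3 scaling means halving the radius makes orientation
# randomize nearly three times faster.
for r_um in (0.25, 0.5, 0.75):
    print(r_um, round(rms_rotation_deg(r_um * 1e-6)), "deg in 1 s")
```

Because D_r scales as r⁻³, cells below roughly a micron lose a chosen heading within a second unless swimming speed is high enough to outrun the reorientation, which is the hypothesis the abstract states.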


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号