Similar Literature
20 similar records found
1.
This review article is devoted to results on distance measurement in locusts (e.g., Wallace, 1959; Collett, 1978; Sobel, 1990) and mantids. Before locusts or mantids jump toward a stationary object, they perform characteristic pendulum movements of the head or body in the direction of the object, called peering movements. The animals over- or underestimate the distance to the object when the object is moved with or against the peering movement, and accordingly make jumps that are too long or too short; this indicates that motion parallax is used in this distance measurement. The way the peering parameters vary with object distance also indicates that not only retinal image motion but also the animal's own movement enters into the distance calculation.
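The geometric core of this parallax cue can be written down directly. The following minimal sketch (hypothetical code, not taken from any of the cited studies) estimates object distance from the amplitude of a lateral peering movement and the resulting retinal image shift.

```python
import math

def distance_from_parallax(peering_amplitude_mm, retinal_shift_deg):
    """Estimate object distance from motion parallax during a peering movement.

    A lateral head translation of `peering_amplitude_mm` makes a stationary
    object's image shift by `retinal_shift_deg` on the retina; the distance
    is then approximately translation / tan(retinal shift).
    """
    return peering_amplitude_mm / math.tan(math.radians(retinal_shift_deg))

# Example: a 10 mm peering sweep producing a 5.7 degree image shift
# corresponds to an object roughly 100 mm away.
print(round(distance_from_parallax(10.0, 5.7)))  # ~100
```

In this formulation, a smaller retinal shift for the same peering amplitude yields a larger distance estimate, which is how moving the object with or against the peering movement produces the over- and underestimates described above.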

2.
Before jumping to a landing object, praying mantids determine the distance using information obtained from the retinal image motion produced by horizontal peering movements. The present study investigates the peering-jump behaviour of Mantis religiosa larvae with regard to jump targets differing in shape and size. The experimental animals were presented with square, triangular and round target objects subtending visual angles of 20° and 40°. The cardboard objects, presented against a uniform white background, were solid black or shaded with a gradation from white to black. Larger objects were preferred to smaller ones as jump targets, and the square and triangle were preferred to the round disk. When two objects were presented, no preference was exhibited between square and triangular objects; when three objects were presented, the square was preferred. For targets with a visual angle of 40°, the amplitude and velocity of the horizontal peering movements were greater for the round disk than for the square or triangle. This amplification of the peering movements suggests that curved edges generate weaker motion signals, which may help to account for the preference for the square and triangle as jump targets.

3.
Karl Kral, Insect Science, 2008, 15(4): 369-374
The peering-jump behavior was studied for the common field grasshopper Chorthippus brunneus, the meadow grasshopper C. parallelus and the alpine grasshopper Miramella alpina (Orthoptera, Caelifera). Immediately before jumping, M. alpina executes primarily unilateral object-related peering movements with approximately twice the amplitude and velocity of the predominantly bilateral object-related peering movements of the other two species. Whereas M. alpina almost always jumped toward the black stripes in the experimental arena, the other species jumped toward both the black stripes and the white spaces between them. All three species preferred the same pattern of black stripes, which permitted them to view one black stripe frontally, with an additional black stripe to the left and right in the lateral visual field. The similarities and differences in the peering-jump behavior of the three grasshopper species are discussed with regard to visual perception (parallax cues) and environmental adaptation.

4.
The aim of the present study was to investigate the distance at which vertical black and white stripes (contrast boundaries) elicit object-related behavioral responses in 6th-instar larvae and adults of the praying mantis Mantis religiosa. The mantids reacted when the contrast boundaries were no further away than 60 cm. However, with increasing distance (>20 cm), the contrast boundaries became progressively less significant for the mantids. Jumps or preparation of jumps could be observed between 10 and 30 cm. The results support distance measurement up to 20–30 cm, which corresponds to the distances actually accessible to the insect, and suggest that image motion cues induced by peering movements play an important role.

5.
In the present study, peering behaviour, which is used to measure distance from the image motion caused by head movement, is examined in two species of mantid. Mantis religiosa inhabits dense grass consisting of uniform, generally uniformly aligned and closely spaced elements, and executes slow, simple peering movements. In contrast, Empusa fasciata climbs about in open regions of shrubs and bushes consisting of irregular, variably aligned and variably spaced elements, and executes comparatively quick, complex peering movements. It thus appears that in these two mantid species the same orientation mechanism has been adapted to the particular structure of their visual surroundings. Apparently M. religiosa uses motion parallax, whereas E. fasciata uses a combination of motion parallax and forward and backward movements (image expansion/contraction over time) to detect object distances.

6.
Visually targeted reaching to a specific object is a demanding neuronal task requiring the translation of the location of the object from a two-dimensional set of retinotopic coordinates to a motor pattern that guides a limb to that point in three-dimensional space. This sensorimotor transformation has been intensively studied in mammals, but was not previously thought to occur in animals with smaller nervous systems such as insects. We studied horse-head grasshoppers (Orthoptera: Proscopididae) crossing gaps and found that visual inputs are sufficient for them to target their forelimbs to a foothold on the opposite side of the gap. High-speed video analysis showed that these reaches were targeted accurately and directly to footholds at different locations within the visual field through changes in forelimb trajectory and body position, and did not involve stereotyped searching movements. The proscopids estimated distant locations using peering to generate motion parallax, a monocular distance cue, but appeared to use binocular visual cues to estimate the distance of nearby footholds. Following occlusion of regions of binocular overlap, the proscopids resorted to peering to target reaches even to nearby locations. Monocular cues were sufficient for accurate targeting of the ipsilateral but not the contralateral forelimb. Thus, proscopids are capable not only of the sensorimotor transformations necessary for visually targeted reaching with their forelimbs but also of flexibly using different visual cues to target reaches.

7.
Barn owls exhibit a rich repertoire of head movements before taking off for prey capture. These movements occur mainly at light levels that allow for the visual detection of prey. To investigate these movements and their functional relevance, we filmed the pre-attack behavior of barn owls. Off-line image analysis enabled reconstruction of all six degrees of freedom of head movement. Three categories of head movements were observed: fixations, head translations and head rotations. The observed rotations contained a translational component. Head rotations did not follow Listing's law, but could be well described by a second-order surface, which indicated that they are in close agreement with Donders' law. Head translations did not contain any significant rotational components. Translations were further segmented into straight-line and curved paths. Translations along an axis perpendicular to the line of sight were similar to the peering movements observed in other animals. We suggest that these basic motion elements (fixations, head rotations, translations along a straight line, and translations along a curved trajectory) may be combined to form longer and more complex behavior. We speculate that these head movements mainly underlie estimation of distance during prey capture.
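The claim that head rotations follow Donders' law but not Listing's law can be checked with a quadratic surface fit of torsion against yaw and pitch; a small residual torsional scatter indicates that torsion is a fixed function of gaze direction. The sketch below is only a hypothetical illustration of that kind of analysis (the array names and degree-based units are assumptions, not the authors' code).

```python
import numpy as np

def fit_second_order_surface(yaw, pitch, torsion):
    """Least-squares fit of torsion = f(yaw, pitch) with a full quadratic surface.

    Inputs are 1-D arrays of head angles in degrees; returns the six
    polynomial coefficients and the RMS residual torsion (degrees).
    """
    yaw, pitch, torsion = map(np.asarray, (yaw, pitch, torsion))
    # Design matrix columns: 1, yaw, pitch, yaw^2, yaw*pitch, pitch^2
    A = np.column_stack([np.ones_like(yaw), yaw, pitch,
                         yaw**2, yaw * pitch, pitch**2])
    coeffs, *_ = np.linalg.lstsq(A, torsion, rcond=None)
    rms_residual = np.sqrt(np.mean((A @ coeffs - torsion) ** 2))
    return coeffs, rms_residual
```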

8.
Adult females of the mantis Tenodera angustipennis were presented with the "nonlocomotive" prey model, a static rectangle with two lines oscillating regularly at its sides, generated on a computer display. The models were varied in rectangle luminance (black, gray, and light gray), rectangle height (0.72, 3.6, and 18 mm), rectangle width (0.72, 3.6, and 18 mm), and angular velocity of oscillating lines (65°, 260°, and 1040°/s) to examine their effects on prey recognition. Before striking the model, the mantis sometimes showed peering movements that involved swaying its body from side to side. The black model of medium size (both height and width) elicited higher rates of fixation, peering, and strike responses than the large, small, or gray model. The model of medium angular velocity elicited a higher strike rate than that of large or small angular velocity, but angular velocity had little effect on fixation and peering. We conclude that mantises respond to a rectangle in deciding whether to fixate, and to both rectangle and lines in deciding whether to strike after fixation.

9.
Kirsch W, Herbort O, Butz MV, Kunde W, PLoS ONE, 2012, 7(4): e34880
We examined whether movement costs, as defined by movement magnitude, have an impact on distance perception in near space. In Experiment 1, participants were given a numerical cue regarding the amplitude of a hand movement to be carried out. Before the movement was executed, the length of a visual distance had to be judged. The judged length of the visual distance increased with the amplitude of the concurrently prepared hand movement. In Experiment 2, in which the numerical cues were merely memorized without concurrent movement planning, this increase of judged distance with cue size was not observed. The results of these experiments indicate that visual perception of near space is specifically affected by the costs of planned hand movements.

10.
In the praying mantis, vision plays a major role even in newly hatched nymphs, whose compound eyes are not yet fully developed. This study examines how this factor affects the visual orientation behavior of freely mobile Mantis religiosa. Mantises from three age groups (nymphs newly hatched to 2 h old, three-day-old nymphs, and three- to four-month-old female and male adults) were placed in a completely unprotected open area, either with or without visual cues in the surroundings. As a visual cue, five vertical rods with high-, medium- or low-luminance contrast, having a vertical extension of 45° and an overall horizontal extension of 40°, were presented at a distance of 300 mm, simulating a group of plant stems with differing contrasts. The mantis search behavior, probability of reaction to the visual cues, distance at which the first target-related reaction occurred, and target approach behavior were investigated. The search behavior differed among the age groups. With high-contrast visual cues, newly hatched nymphs performed similarly to adults, but this was not the case for medium- and low-contrast visual cues. Visual performance was greatly improved 3 days after hatching, presumably due to the complete hardening of the cuticle. Nevertheless, despite differences in visual acuity, even newly hatched nymphs used visual orientation mechanisms similar to those of adult mantises, including fixation, scanning and peering.

11.
Measurement of the optomotor response is a common way to determine thresholds of the visual system in animals. Particularly in mice, it is frequently used to characterize the visual performance of different genetically modified strains or to test the effect of various drugs on visual performance. Several methods have been developed to facilitate the presentation of stimuli using computer screens or projectors. Common methods are either based on the measurement of eye movements during optokinetic reflex behavior or rely on the measurement of head and/or body movements during optomotor responses. Eye movements can easily and objectively be quantified, but their measurement requires invasive fixation of the animals. Head movements can be observed in freely moving animals, but until now their measurement depended on the judgment of a human observer who reported the counted tracking movements of the animal during an experiment. In this study we present a novel measurement and stimulation system based on open-source building plans and software. This system presents appropriate 360° stimuli while simultaneously video-tracking the animal's head movements without fixation. The head gaze determined on-line is used to adjust the stimulus to the head position, as well as to automatically calculate visual acuity. As an example, we show that automatically measured visual response curves of mice match the results obtained by a human observer very well. The spatial acuity thresholds yielded by the automatic analysis are also consistent with the human-observer approach and with published results. Hence, OMR-arena provides an affordable, convenient and objective way to measure mouse visual performance.

12.
Perception and encoding of object size is an important feature of sensory systems. In the visual system object size is encoded by the visual angle (visual aperture) on the retina, but the aperture depends on the distance of the object. As object distance is not unambiguously encoded in the visual system, higher computational mechanisms are needed. This phenomenon is termed “size constancy”. It is assumed to reflect an automatic re-scaling of visual aperture with perceived object distance. Recently, it was found that in echolocating bats, the ‘sonar aperture’, i.e., the range of angles from which sound is reflected from an object back to the bat, is unambiguously perceived and neurally encoded. Moreover, it is well known that object distance is accurately perceived and explicitly encoded in bat sonar. Here, we addressed size constancy in bat biosonar, recruiting virtual-object techniques. Bats of the species Phyllostomus discolor learned to discriminate two simple virtual objects that only differed in sonar aperture. Upon successful discrimination, test trials were randomly interspersed using virtual objects that differed in both aperture and distance. It was tested whether the bats spontaneously assigned absolute width information to these objects by combining distance and aperture. The results showed that while the isolated perceptual cues encoding object width, aperture, and distance were all perceptually well resolved by the bats, the animals did not assign absolute width information to the test objects. This lack of sonar size constancy may result from the bats relying on different modalities to extract size information at different distances. Alternatively, it is conceivable that familiarity with a behaviorally relevant, conspicuous object is required for sonar size constancy, as it has been argued for visual size constancy. Based on the current data, it appears that size constancy is not necessarily an essential feature of sonar perception in bats.
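The width assignment the bats were tested for amounts to a geometric re-scaling of the sonar aperture by distance. The sketch below (hypothetical code, not part of the study) shows that combination and why aperture alone cannot encode absolute size.

```python
import math

def absolute_width(aperture_deg, distance_m):
    """Absolute width of an object subtending `aperture_deg` at `distance_m`.

    Size constancy would require re-scaling the perceived aperture by the
    perceived distance: width = 2 * distance * tan(aperture / 2).
    """
    return 2.0 * distance_m * math.tan(math.radians(aperture_deg) / 2.0)

# The same 20 degree aperture corresponds to very different widths at
# different distances, which is why aperture alone cannot encode size.
print(round(absolute_width(20.0, 0.5), 3))  # ~0.176 m
print(round(absolute_width(20.0, 1.0), 3))  # ~0.353 m
```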

13.
Mantises (Mantodea, Mantidae) visually detect insect prey and capture it with a ballistic strike of their specialized forelegs. We tested the predatory responses of female mantises, Sphodromantis viridis, to computer-generated visual stimuli to determine the effects of (i) target size and velocity, (ii) discrete changes in target size and (iii) visual occlusion. Maximal predatory responses were elicited by stimuli that (i) subtended ~20°–23° horizontally and ~16°–19° vertically at the eye and moved across the screen at angular velocities of ~46°–119°/s, and (ii) increased in size in a stepwise manner with step duration ≥0.8 s, whereas stimuli decreasing in size elicited only peering movements; (iii) stimuli disappearing gradually behind a virtual occlusion elicited one or more head saccades but no actual interception.

14.
Adult females of the mantis, Tenodera angustipennis, were presented with a wriggling model consisting of six circular spots positioned adjacently in a horizontal row. During presentation, this model wriggled like a worm by moving some of its spots. When the motion of the model was small (the number of moving spots ≤2), the mantis sometimes stalked the model with peering movements but seldom struck it. When the motion was large (the number of moving spots ≥3), the mantis frequently fixated, rapidly approached, and struck the model. These results suggest that the mantis changes its approach behavior depending on the amount of prey motion. Disappearance of some terminal spots at the stationary end hardly affected the rates of fixation, peering, and strike. The model that wriggled at each end elicited lower rates of fixation and strike than the model that wriggled at one end. These results suggest that the mantis responds to only the fastest-moving part of the wriggling model when the motion of the model is large.

15.
Relative binocular disparity cannot tell us the absolute 3D shape of an object, nor the 3D trajectory of its motion, unless the visual system has independent access to how far away the object is at any moment. Indeed, as the viewing distance is changed, the same disparate retinal motions will correspond to very different real 3D trajectories. In this paper we were interested in whether binocular 3D motion detection is affected by viewing distance. A visual search task was used, in which the observer is asked to detect a target dot, moving in 3D, amidst 3D stationary distractor dots. We found that distance does not affect detection performance. Motion-in-depth is consistently harder to detect than the equivalent lateral motion, for all viewing distances. For a constant retinal motion with both lateral and motion-in-depth components, detection performance is constant despite variations in viewing distance that produce large changes in the direction of the 3D trajectory. We conclude that binocular 3D motion detection relies on retinal, not absolute, visual signals.
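The distance dependence at issue can be made explicit with the standard small-angle disparity geometry. This is a hedged sketch, not the paper's analysis; the 6.5 cm interocular distance is an assumed typical value.

```python
import math

def depth_from_disparity(disparity_deg, viewing_distance_m, interocular_m=0.065):
    """Approximate depth interval corresponding to a relative disparity.

    Small-angle approximation: depth ~ D**2 * delta / IOD, so the same
    retinal disparity corresponds to a much larger depth at a greater
    viewing distance.
    """
    delta = math.radians(disparity_deg)
    return (viewing_distance_m ** 2) * delta / interocular_m

# The same 0.1 degree disparity at two viewing distances:
print(round(depth_from_disparity(0.1, 0.5), 4))  # ~0.0067 m
print(round(depth_from_disparity(0.1, 2.0), 4))  # ~0.1074 m
```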

16.
Foraging mode influences the dominant sensory modality used by a forager and likely the strategies of information gathering used in foraging and anti-predator contexts. We assessed three components of visual information gathering in a sit-and-wait avian predator, the black phoebe (Sayornis nigricans): configuration of the visual field, degree of eye movement, and scanning behavior through head-movement rates. We found that black phoebes have larger lateral visual fields than similarly sized ground-foraging passerines, as well as relatively narrower binocular and blind areas. Black phoebes moved their eyes, but eye movement amplitude was relatively smaller than in other passerines. Black phoebes may compensate for eye movement constraints with head movements. The rate of head movements increased before attacking prey in comparison to non-foraging contexts and before movements between perches. These findings suggest that black phoebes use their lateral visual fields, likely subtended by areas of high acuity in the retina, to track prey items in a three-dimensional space through active head movements. These head movements may increase depth perception, motion detection and tracking. Studying information gathering through head movement changes, rather than body posture changes (head-up, head-down) as generally presented in the literature, may allow us to better understand the mechanisms of information gathering from a comparative perspective.

17.
The proximity of visual landmarks impacts reaching performance
The control of goal-directed reaching movements is thought to rely upon egocentric visual information derived from the visuomotor networks of the dorsal visual pathway. However, recent research (Krigolson and Heath, 2004) suggests it is also possible to make allocentric comparisons between a visual background and a target object to facilitate reaching accuracy. Here we sought to determine whether the effectiveness of these allocentric comparisons is reduced as the distance between a visual background and a target object increases. To accomplish this, participants completed memory-guided reaching movements to targets presented in an otherwise empty visual background or positioned within a proximal, medial, or distal visual background. Our results indicated that the availability of a proximal or medial visual background reduced endpoint variability relative to reaches made without a visual background. Interestingly, we found that endpoint variability was not reduced when participants reached to targets framed within a distal visual background. Such findings suggest that allocentric visual information is used to facilitate reaching performance; however, the fidelity with which such cues are used appears linked to their proximity to the veridical target location. Importantly, these data also suggest that information from both the dorsal and ventral visual streams can be integrated to facilitate the online control of reaching movements.

18.
1. Voluntary saccadic eye movements were made toward flashes of light on the horizontal meridian whose duration and distance from the point of fixation were varied; eye movements were measured using d.c. electrooculography. 2. Targets within 10°–15° eccentricity are usually reached by one saccadic eye movement. When the eyes turn toward targets of more than 10°–15° eccentricity, the first saccadic eye movement falls short of the target by an angle usually not exceeding 10°. The presence of the image of the target off the fovea (visual error signal) after such an undershoot elicits, after a short interval, corrective saccades (usually one) which place the image of the target on the fovea. In the absence of a visual error signal, the probability of occurrence of corrective saccades is low, but it increases with greater target eccentricities. These observations suggest that there are different, eccentricity-dependent modes of programming saccadic eye movements. 3. Saccadic eye movements appear to be programmed in retinal coordinates. This conclusion is based on the observations that, irrespective of the initial position of the eyes in the orbit, a) there are different programming modes for eye movements to targets within and beyond 10°–15° of the fixation point, and b) the maximum velocity of saccadic eye movements is always reached at 25°–30° target eccentricity. 4. Distributions of latency and intersaccadic interval (ISI) are frequently multimodal, with a separation between modes of 30 to 40 msec. These observations suggest that saccadic eye movements are produced by mechanisms which process visual information at a frequency of 30 Hz. 5. Corrective saccades may occur after extremely short intervals (30 to 60 msec) regardless of whether or not a visual error signal is present; the eyes may not even come to a complete stop during these very short intersaccadic intervals. It is suggested that these corrective saccades are triggered by errors in the programming of the initial saccadic eye movements, and not by a visual error signal. 6. The existence of different, eccentricity-dependent programming modes of saccadic eye movements is further supported by anatomical, physiological, psychophysical, and neuropathological observations that suggest a dissociation of visual functions dependent on retinal eccentricity. Saccadic eye movements to targets more eccentric than 10°–15° appear to be executed by a mechanism involving the superior colliculus (perhaps independent of the visual cortex), whereas saccadic eye movements to less eccentric targets appear to depend on a mechanism involving the geniculo-cortical pathway (perhaps in collaboration with the superior colliculus).

19.
In natural images, the distance measure between two images taken at different locations rises smoothly with increasing distance between the locations. This fact can be exploited for local visual homing where the task is to reach a goal location that is characterized by a snapshot image: descending in the image distance will lead the agent to the goal location. To compute an estimate of the spatial gradient in the distance measure, its value must be sampled at three noncollinear points. An animal or robot would have to insert exploratory movements into its home trajectory to collect these samples. Here we suggest a method based on the matched-filter concept that allows one to estimate the gradient without exploratory movements. Two matched filters – optical flow fields resulting from translatory movements in the horizontal plane – are used to predict two images in perpendicular directions from the current location. We investigate the relation to differential flow methods applied to the local homing problem and show that the matched-filter approach produces reliable homing behavior on image databases. Two alternative methods that only require a single matched filter are suggested. The matched-filter concept is also applied to derive a home-vector equation for a Fourier-based parameter method.
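A minimal sketch of the gradient-descent idea behind this kind of local visual homing is given below. It is hypothetical code; in particular, `predict_shifted` is only a placeholder standing in for the paper's matched-filter view prediction (any method that predicts the view after a small translation along 'x' or 'y' could be plugged in).

```python
import numpy as np

def image_distance(img_a, img_b):
    """Root-mean-square pixel difference between two equally sized images."""
    return np.sqrt(np.mean((img_a.astype(float) - img_b.astype(float)) ** 2))

def home_vector(current_img, snapshot_img, predict_shifted):
    """Estimate a 2-D home vector by descending the image-distance gradient.

    `predict_shifted(img, direction)` is a placeholder for the matched-filter
    step: it predicts the current view after a small translation along
    'x' or 'y', so no exploratory movement is needed to sample the gradient.
    """
    d0 = image_distance(current_img, snapshot_img)
    dx = image_distance(predict_shifted(current_img, 'x'), snapshot_img)
    dy = image_distance(predict_shifted(current_img, 'y'), snapshot_img)
    grad = np.array([dx - d0, dy - d0])          # finite-difference gradient
    norm = np.linalg.norm(grad)
    return -grad / norm if norm > 0 else grad    # move downhill toward the goal
```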

20.
Many of the brain structures involved in performing real movements also show increased activity during imagined movements or during motor observation, and this could be the neural substrate underlying the effects of motor imagery in motor learning or motor rehabilitation. In the absence of any objective physiological measure, it is currently impossible to be sure that the patient is indeed performing the task as instructed. Eye-gaze recording during a motor imagery task could be a way to "spy" on the activity an individual is really engaged in. The aim of the present study was to compare eye movement metrics during motor observation, visual and kinesthetic motor imagery (VI, KI), target fixation, and mental calculation. Twenty-two healthy subjects (16 females and 6 males) were required to perform tests in five conditions using imagery in Box and Block Test tasks, following the procedure described by Liepert et al. Eye movements were analysed with a non-invasive oculometric measure (SMI RED250 system). Two parameters describing the gaze pattern were calculated: the index of ocular mobility (saccade duration over saccade plus fixation duration) and the number of midline crossings (the number of times the subject's gaze crossed the midline of the screen when performing the different tasks). Both parameters differed significantly between visual imagery and kinesthetic imagery, between visual imagery and mental calculation, and between visual imagery and target fixation. For the first time we were able to show that eye movement patterns differ between VI and KI tasks. Our results suggest that these gaze metrics could be used as an objective, unobtrusive approach to assess engagement in a motor imagery task. Further studies should define how oculomotor parameters could be used as an indicator of the rehabilitation task a patient is engaged in.
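The two gaze-pattern parameters defined in the abstract are straightforward to compute from segmented eye-tracking data. The sketch below is hypothetical helper code under that definition, not the authors' analysis pipeline.

```python
def index_of_ocular_mobility(saccade_durations, fixation_durations):
    """Index of ocular mobility: total saccade time over saccade plus fixation time."""
    total_saccade = sum(saccade_durations)
    total_fixation = sum(fixation_durations)
    return total_saccade / (total_saccade + total_fixation)

def midline_crossings(gaze_x, screen_midline_x):
    """Count how often the horizontal gaze position crosses the screen midline."""
    sides = [x > screen_midline_x for x in gaze_x]
    return sum(1 for a, b in zip(sides, sides[1:]) if a != b)

# Example with arbitrary numbers (durations in ms, positions in pixels):
print(round(index_of_ocular_mobility([30, 45, 25], [250, 300, 220]), 3))  # 0.115
print(midline_crossings([400, 620, 580, 700, 300], 512))                  # 2
```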
