Similar Documents
20 similar documents found (search time: 78 ms)
1.
In experiments described in the literature, objects presented to restrained goldfish failed to induce eye movements such as fixation or tracking. We show here that eye movements can be induced only if the background (visual surround) is not stationary relative to the fish but moving. We investigated the influence of background motion on eye movements at angular velocities of 5–20° s−1. The response to presentation of an object is a transient shift in mean horizontal eye position lasting some 10 s. If an object is presented in front of the fish, the eyes move so that it is seen more or less symmetrically by both eyes. If it is presented at ±70° from the fish's long axis, the eye on the side of the object moves so that the object falls more centrally on its retina. During these object-induced eye responses, the typical optokinetic nystagmus of some 5° amplitude with alternating fast and slow phases is maintained, and eye velocity during the slow phase is not modified by presentation of the object. Presenting an object in front of stationary or moving backgrounds leads to transient suppression of respiration, which habituates to repeated object presentations. Accepted: 14 April 2000

2.
It is well known that the canal-driven vestibulo-ocular reflex (VOR) is controlled and modulated through the central nervous system by external sensory information (e.g. visual, otolithic and somatosensory inputs) and by mental state. Because retinal image motion originates both in the subject (eye, head and body motion) and in the external world (object motion), head motion must be cancelled and/or the object followed by smooth eye movements. Humans have developed numerous central nervous mechanisms for smooth eye movements (e.g. the VOR, the optokinetic reflex and smooth pursuit), all of which are thought to serve the goal of better vision. A distinct mechanism operates for each combination of self motion and object motion, yet the mechanisms as a whole are controlled in a purpose-directed manner. This can be achieved by a self-organizing holistic system, and such a holistic view is useful for understanding human oculomotor behavior.

3.
Among the various possible criteria guiding eye movement selection, we investigate the role of position uncertainty in the peripheral visual field. In particular, we suggest that in everyday object-tracking situations, eye movement selection probably includes a principle of uncertainty reduction. To evaluate this hypothesis, we compare the movement predictions of computational models with human results from a psychophysical task: a freely moving eye version of the multiple object tracking task, in which eye movements may be used to compensate for low peripheral resolution. We design several Bayesian models of eye movement selection of increasing complexity, whose layered structures are inspired by the neurobiology of the brain areas involved in this process. Finally, we compare the relative performance of these models in predicting the recorded human movements, and show the advantage of explicitly taking uncertainty into account when predicting eye movements.

4.
Visual perception is burdened with a highly discontinuous input stream arising from saccadic eye movements. For successful integration into a coherent representation, the visuomotor system needs to deal with these self-induced perceptual changes and distinguish them from external motion. Forward models are one way to solve this problem: the brain uses internal monitoring signals associated with oculomotor commands to predict the visual consequences of the corresponding eye movements during active exploration. Visual scenes typically contain a rich structure of spatial relational information, providing additional cues that may help disambiguate self-induced from external changes of perceptual input. We reasoned that a weighted integration of these two inherently noisy sources of information should lead to better perceptual estimates. Volunteer subjects performed a simple perceptual decision on the apparent displacement of a visual target that jumped unpredictably in sync with a saccadic eye movement. In a critical test condition, the target was presented together with a flanker object, so that perceptual decisions could take into account the spatial distance between target and flanker. Here, precision was better than in control conditions in which target displacements could only be estimated from either extraretinal or visual relational information alone. Our findings suggest that under natural conditions, integration of visual space across eye movements is based upon close to optimal integration of both retinal and extraretinal pieces of information.
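The "close to optimal integration" referred to in the abstract above is the standard maximum-likelihood cue-combination rule: each noisy estimate is weighted by the inverse of its variance, and the combined estimate is never noisier than the better single cue. A minimal sketch (variable names and the example numbers are illustrative, not taken from the study):

```python
def integrate_cues(mu_extraretinal, var_extraretinal, mu_visual, var_visual):
    """Combine two noisy displacement estimates by inverse-variance weighting.

    This is the textbook maximum-likelihood cue-combination rule; it is a
    sketch of the idea, not the study's actual analysis.
    """
    w_e = 1.0 / var_extraretinal          # weight of the extraretinal cue
    w_v = 1.0 / var_visual                # weight of the visual relational cue
    mu = (w_e * mu_extraretinal + w_v * mu_visual) / (w_e + w_v)
    var = 1.0 / (w_e + w_v)               # combined variance <= either cue's variance
    return mu, var

# Hypothetical example: a noisy extraretinal estimate of a target jump (2.0 deg,
# variance 1.0) combined with a more precise target-flanker distance estimate
# (1.5 deg, variance 0.25) -> the combined estimate leans toward the precise cue.
mu, var = integrate_cues(2.0, 1.0, 1.5, 0.25)
```

The reduction in combined variance is exactly the precision benefit the critical test condition measured relative to the single-cue controls.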

5.

Background

The aim of this longitudinal study was to investigate how the kinematic organization of upper limb movements changes from fetal to post-natal life. By means of off-line kinematical techniques we compared the kinematics of hand-to-mouth and hand-to-eye movements, in the same individuals, during prenatal life and early postnatal life, as well as the kinematics of hand-to-mouth and reaching-toward-object movements in the later age periods.

Methodology/Principal Findings

Movements recorded at the 14th, 18th and 22nd weeks of gestation were compared with similar movements recorded in an ecological context at 1, 2, 3, 4, 8, and 12 months after birth. The results indicate a similar kinematic organization depending on movement type (i.e., hand-to-eye, hand-to-mouth) for infants at one month and for fetuses at 22 weeks of gestation. At two and three months this differential motor planning depending on target is lost and no statistical differences emerge. Hand-to-eye movements were no longer observed after the fourth month of life, so we compared the kinematics of hand-to-mouth with hand-to-object movements. These analyses revealed differences between hand-to-mouth and reaching-toward-object movements in the length of the deceleration phase, depending on target.

Conclusion/Significance

Data are discussed in terms of how the passage from the intrauterine to the extrauterine environment modifies motor planning. These results provide novel evidence of how different types of upper extremity movements, those directed towards one’s own face and those directed to external objects, develop.

6.
The LGMD2 belongs to a group of giant movement-detecting neurones which have fan-shaped arbors in the lobula of the locust optic lobe and respond to movements of objects. One of these neurones, the LGMD1, has been shown to respond directionally to movements of objects in depth, generating vigorous, maintained spike discharges during object approach. Here we compare the responses of the LGMD2 neurone with those of the LGMD1 to simulated movements of objects in depth and examine different image cues which could allow the LGMD2 to distinguish approaching from receding objects. In the absence of stimulation, the LGMD2 has a resting discharge of 10–40 spikes s−1 compared with <1 spike s−1 for the LGMD1. The most powerful excitatory stimulus for the LGMD2 is a dark object approaching the eye. Responses to approaching objects are suppressed by wide field movements of the background. Unlike the LGMD1, the LGMD2 is not excited by the approach of light objects; it specifically responds to movement of edges in the light to dark direction. Both neurones rely on the same monocular image cues to distinguish approaching from receding objects: an increase in the velocity with which edges of images travel over the eye; and an increase in the extent of edges in the image during approach. Accepted: 23 October 1996

7.
We use visual information to guide our grasping movements. When grasping an object with a precision grip, the two digits need to reach two different positions more or less simultaneously, but the eyes can only be directed to one position at a time. Several studies that have examined eye movements in grasping have found that people tend to direct their gaze near where their index finger will contact the object. Here we aimed at better understanding why people do so by asking participants to lift an object off a horizontal surface. They were to grasp the object with a precision grip while movements of their hand, eye and head were recorded. We confirmed that people tend to look closer to positions that a digit needs to reach more accurately. Moreover, we show that where they look as they reach for the object depends on where they were looking before, presumably because they try to minimize the time during which the eyes are moving so fast that no new visual information is acquired. Most importantly, we confirmed that people have a bias to direct gaze towards the index finger’s contact point rather than towards that of the thumb. In our study, this cannot be explained by the index finger contacting the object before the thumb. Instead, it appears to be because the index finger moves to a position that is hidden behind the object that is grasped, probably making this the place at which one is most likely to encounter unexpected problems that would benefit from visual guidance. However, this cannot explain the bias that was found in previous studies, where neither contact point was hidden, so it cannot be the only explanation for the bias.

8.
The question of whether perceptual illusions influence eye movements is critical for the long-standing debate regarding the separation between action and perception. To test the role of auditory context on a visual illusion and on eye movements, we took advantage of the fact that the presence of an auditory cue can successfully modulate illusory motion perception of an otherwise static flickering object (sound-induced visual motion effect). We found that illusory motion perception modulated by an auditory context consistently affected saccadic eye movements. Specifically, the landing positions of saccades performed towards flickering static bars in the periphery were biased in the direction of illusory motion. Moreover, the magnitude of this bias was strongly correlated with the effect size of the perceptual illusion. These results show that both an audio-visual and a purely visual illusion can significantly affect visuo-motor behavior. Our findings are consistent with arguments for a tight link between perception and action in localization tasks.

9.
V R Galoian 《Biofizika》1978,23(2):370-378
A comparative study of torsional eye movements during passive and active tilting of the subject's head and body was carried out, and the torsional movements were shown to be similar in the passive and active conditions. Using exclusion and selective stimulation of the vestibular, cervical and lumbar optokinetic reflexes, it was found that neither the cervical nor the lumbar reflexes elicited spontaneous torsional eye movements or influenced them. Direct torsional tracking of a rotating disc (head rotation in the same direction as the stimulus), reverse tracking (head rotation in the opposite direction) and tracking without head movements were investigated. During direct tracking, suppression of saccades and extension of the slow phase of torsion were found; during reverse tracking, a decrease in eye drift and an increase in the amplitude and number of saccades. Phenomena of apparent acceleration and deceleration of disc rotation were also observed. Vision was found to be retained during torsional saccades, and optokinetic control over the phases of torsional eye movements was recorded. Tracking without head rotation was accompanied by torsional nystagmus. Possible causes of the incomplete stabilization seen in optokinetic torsional tracking are discussed.

10.
The perception of visual information in cytoscreening was studied: eye movements were recorded while the cytotechnologist was screening cervical smears by means of a projection screen. Four phases of eye movement could be distinguished: small, aimless movements during the stage movement; a latency period with a duration of about 180 milliseconds; saccadic movement to the position of an object; and fixation on an object. These components explain the two-phase behavior of cytoscreening found in our previous investigations of the stage movement. Visual perception during the period of latency was found to be the most important since only those objects that are recognized by peripheral vision during this period can trigger the necessary saccadic movement before fixation takes place. The scanpath of search in the stationary field of view is determined by the conspicuousness of the objects; the main features of conspicuousness are size and contrast. Even with the comparatively small fields of view (24 degrees and 29 degrees in diameter) used in these experiments, it was found that the detection threshold of peripheral vision increases towards the margin of the field of view. This raises the question of whether the use of large-field binoculars (with 40-degree visual angles) may cause higher false-negative rates for samples with only a few atypical cells.

11.
Does a dysfunction in the mirror neuron system (MNS) underlie the social symptoms defining autism spectrum disorder (ASD)? Research suggests that the MNS matches observed actions to motor plans for similar actions, and that these motor plans include directions for predictive eye movements when observing goal-directed actions. Thus, one important question is whether children with ASD use predictive eye movements in action observation. Young children with ASD as well as typically developing children and adults were shown videos in which an actor performed object-directed actions (human agent condition). Children with ASD were also shown control videos showing objects moving by themselves (self-propelled condition). Gaze was measured using a corneal reflection technique. Children with ASD and typically developing individuals used strikingly similar goal-directed eye movements when observing others’ actions in the human agent condition. Gaze was reactive in the self-propelled condition, suggesting that prediction is linked to seeing a hand–object interaction. This study does not support the view that ASD is characterized by a global dysfunction in the MNS.

12.
Eye movements are very important for tracking an object or stabilizing an image on the retina during movement. Animals without a fovea, such as the mouse, have a limited capacity to lock their eyes onto a target. In contrast to these target-directed eye movements, compensatory ocular movements are easily elicited in afoveate animals1,2,3,4. Compensatory ocular movements are generated by processing vestibular and optokinetic information into a command signal that drives the eye muscles. The processing of vestibular and optokinetic information can be investigated separately and together, allowing a deficit in the oculomotor system to be localized. The oculomotor system can be tested by evoking an optokinetic reflex (OKR), a vestibulo-ocular reflex (VOR) or a visually-enhanced vestibulo-ocular reflex (VVOR). The OKR is a reflex movement that compensates for "full-field" image motion on the retina, whereas the VOR is a reflex eye movement that compensates for head movements. The VVOR is a reflex eye movement that uses both vestibular and optokinetic information to make the appropriate compensation. The cerebellum monitors and is able to adjust these compensatory eye movements. Oculography is therefore a very powerful tool to investigate the brain-behavior relationship under normal as well as pathological conditions (e.g. of vestibular, ocular and/or cerebellar origin).

Testing the oculomotor system as a behavioral paradigm is interesting for several reasons. First, the oculomotor system is a well understood neural system5. Second, it is relatively simple6: the range of possible eye movements is limited by the ball-in-socket architecture of the eye ("single joint") and its three pairs of extra-ocular muscles7. Third, the behavioral output and sensory input can easily be measured, which makes this a highly accessible system for quantitative analysis8; many behavioral tests lack this level of quantitative power. Finally, both the performance and the plasticity of the oculomotor system can be tested, allowing research on learning and memory processes9.

Genetically modified mice are nowadays widely available and form an important resource for exploring brain function at various levels10. In addition, they can be used as models of human disease. Applying oculography to normal, pharmacologically treated or genetically modified mice is a powerful research tool for exploring the physiology underlying motor behavior under normal and pathological conditions. Here, we describe how to measure video-oculography in mice8.

13.
Pesaran B  Nelson MJ  Andersen RA 《Neuron》2006,51(1):125-134
When reaching to grasp an object, we often move our arm and orient our gaze together. How are these movements coordinated? To investigate this question, we studied neuronal activity in the dorsal premotor area (PMd) and the medial intraparietal area (area MIP) of two monkeys while systematically varying the starting position of the hand and eye during reaching. PMd neurons encoded the relative position of the target, hand, and eye. MIP neurons encoded target location with respect to the eye only. These results indicate that whereas MIP encodes target locations in an eye-centered reference frame, PMd uses a relative position code that specifies the differences in locations between all three variables. Such a relative position code may play an important role in coordinating hand and eye movements by computing their relative position.
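The two encoding schemes contrasted in the abstract above can be made concrete in a toy one-dimensional sketch (function and field names are illustrative, not from the paper): an eye-centered code carries only the target-eye difference, while a relative position code carries all pairwise differences, and both are unchanged when the whole scene shifts together.

```python
def eye_centered_code(target, eye, hand):
    """MIP-style encoding: target location relative to the eye only.

    The hand position is deliberately not encoded (1-D toy sketch).
    """
    return {"target_re_eye": target - eye}

def relative_position_code(target, eye, hand):
    """PMd-style encoding: pairwise differences between target, hand and eye.

    The hand-eye difference is what makes this code useful for
    coordinating hand and eye movements.
    """
    return {
        "target_re_eye": target - eye,
        "target_re_hand": target - hand,
        "hand_re_eye": hand - eye,
    }
```

Because both codes are built from differences, translating target, eye and hand by the same amount leaves every encoded value unchanged, which is the defining property of a relative (as opposed to world-centered) reference frame.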

14.
1. In the crayfish, behavioral arousal is known to elicit walking and to enhance compensatory eye movements.
2. To see if serotonin and octopamine modulate arousal, we measured their effects on walking and eye movements in tethered crayfish, Procambarus clarkii. Serotonin strongly suppresses both walking and eye movements.
3. In contrast, octopamine elicits an arousal-like state of continuous jittery leg movements and increased eye movements.
4. Serotonin's effect on arousal is uncertain, but octopamine remains a plausible modulator of behavioral arousal.

15.
Based on an information theoretical approach, we investigate feature selection processes in saccadic object and scene analysis. Saccadic eye movements of human observers are recorded for a variety of natural and artificial test images. These experimental data are used for a statistical evaluation of the fixated image regions. Analysis of second-order statistics indicates that regions with higher spatial variance have a higher probability to be fixated, but no significant differences beyond these variance effects could be found at the level of power spectra. By contrast, an investigation with higher-order statistics, as reflected in the bispectral density, yielded clear structural differences between the image regions selected by saccadic eye movements as opposed to regions selected by a random process. These results indicate that nonredundant, intrinsically two-dimensional image features like curved lines and edges, occlusions, isolated spots, etc. play an important role in the saccadic selection process which must be integrated with top-down knowledge to fully predict object and scene analysis by human observers.  相似文献   
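The second-order statistic mentioned above, the spatial variance of an image region, is straightforward to compute. A minimal sketch (the patch size is an arbitrary choice, and this is only an illustration of the statistic, not the study's analysis pipeline): regions with higher grey-level variance would, by the reported finding, be more likely fixation targets than flat regions.

```python
import numpy as np

def patch_variance_map(image, patch=16):
    """Grey-level variance in non-overlapping square patches of an image.

    image: 2-D numpy array of grey levels; patch: side length in pixels.
    Returns an array with one variance value per patch.
    """
    h, w = image.shape
    h, w = h - h % patch, w - w % patch          # crop to a whole number of patches
    blocks = image[:h, :w].reshape(h // patch, patch, w // patch, patch)
    return blocks.var(axis=(1, 3))               # variance within each patch

# Toy image: a checkerboard texture in the top-left patch, flat elsewhere.
img = np.zeros((32, 32))
img[:16, :16] = np.indices((16, 16)).sum(axis=0) % 2
variance_map = patch_variance_map(img, patch=16)  # textured patch has the highest variance
```

Note that variance is blind to structure: a checkerboard and random noise of the same contrast score identically, which is why the study needed higher-order (bispectral) statistics to isolate intrinsically two-dimensional features.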

16.
When Cambarus clarkii is exposed to a source of light so that both eyes are equally illuminated, leg movements of the two sides are equal in frequency and amplitude. On covering one eye and exposing the uncovered eye to light, leg movements on the side of the uncovered eye are more frequent and are of greater amplitude than on the side of the covered eye. On covering the exposed eye also the leg movements on the two sides again tend to become equal in frequency and amplitude. When one eye is lost and the other remains functional, the leg movements on the side of the lost eye will be similar to those on the side of a normal, covered eye.

17.
1. Voluntary saccadic eye movements were made toward flashes of light on the horizontal meridian whose duration and distance from the point of fixation were varied; eye movements were measured using d.c. electrooculography.
2. Targets within 10°–15° eccentricity are usually reached by one saccadic eye movement. When the eyes turn toward targets of more than 10°–15° eccentricity, the first saccadic eye movement falls short of the target by an angle usually not exceeding 10°. The presence of the image of the target off the fovea (visual error signal) subsequent to such an undershoot elicits, after a short interval, corrective saccades (usually one) which place the image of the target on the fovea. In the absence of a visual error signal, the probability of occurrence of corrective saccades is low, but it increases with greater target eccentricities. These observations suggest that there are different, eccentricity-dependent modes of programming saccadic eye movements.
3. Saccadic eye movements appear to be programmed in retinal coordinates. This conclusion is based on the observations that, irrespective of the initial position of the eyes in the orbit, a) there are different programming modes for eye movements to targets within and beyond 10°–15° from the fixation point, and b) the maximum velocity of saccadic eye movements is always reached at 25°–30° target eccentricity.
4. Distributions of latency and intersaccadic interval (ISI) are frequently multimodal, with a separation between modes of 30 to 40 msec. These observations suggest that saccadic eye movements are produced by mechanisms which process visual information at a frequency of 30 Hz.
5. Corrective saccades may occur after extremely short intervals (30 to 60 msec) regardless of whether or not a visual error signal is present; the eyes may not even come to a complete stop during these very short intersaccadic intervals. It is suggested that these corrective saccades are triggered by errors in the programming of the initial saccadic eye movements, and not by a visual error signal.
6. The existence of different, eccentricity-dependent programming modes of saccadic eye movements is further supported by anatomical, physiological, psychophysical and neuropathological observations that suggest a dissociation of visual functions dependent on retinal eccentricity. Saccadic eye movements to targets more eccentric than 10°–15° appear to be executed by a mechanism involving the superior colliculus (perhaps independent of the visual cortex), whereas saccadic eye movements to less eccentric targets appear to depend on a mechanism involving the geniculo-cortical pathway (perhaps in collaboration with the superior colliculus).

18.
The location of visual objects in the world around us is reconstructed in a complex way from the image falling on the retina. Recent studies have begun to reveal the different ways in which the brain dynamically re-maps retinal information across eye movements to compute object locations for perception and directing actions.

19.
Reaching movements towards an object are continuously guided by visual information about the target and the arm. Such guidance increases precision and allows one to adjust the movement if the target unexpectedly moves. Ongoing arm movements are also influenced by motion in the surroundings. Fast responses to such motion could help one cope with moving obstacles and with the consequences of changes in one’s eye orientation and vantage point. To further evaluate how motion in the surroundings influences interceptive movements, we asked subjects to tap a moving target when it reached a second, static target. We varied the direction and location of motion in the surroundings, as well as details of the stimuli that are known to influence eye movements. Subjects were most sensitive to motion in the background when such motion was near the targets. Whether or not the eyes were moving, and the direction of the background motion relative to the direction in which the eyes were moving, had very little influence on the response to the background motion. We conclude that responses to background motion are driven by motion near the target rather than by a global analysis of the optic flow and its relation to other information about self-motion.

20.
The dot-probe paradigm is one of the most frequently used paradigms for investigating attentional biases towards emotional information. However, many dot-probe studies to date have used a long stimulus onset asynchrony that allows eye movements to occur, which might increase error variance. This study addressed this methodological issue by varying the instructions regarding gaze behavior and calculating the reaction time (RT) bias score (i.e., RTs for targets presented at the location of the emotional compared to the neutral stimulus) separately for trials with eye movements and trials without eye movements. Results of Experiment 1 (using typical instructions, i.e., instructions that are lenient with regard to eye movements) showed an RT bias, but only in the trials without eye movements; the overall RT bias (calculated "blind" to eye movements) was non-significant. In Experiment 2, stricter instructions and small changes in the procedure led to a sharp decrease in the number of eye movements, such that the RT bias was significant both in the trials without eye movements and across all trials.
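The analysis described above, an RT bias score computed separately for trials with and without eye movements, can be sketched as follows. This is a minimal illustration under the usual dot-probe convention (mean RT when the probe replaces the neutral stimulus minus mean RT when it replaces the emotional stimulus, so positive values indicate attention drawn to the emotional stimulus); the trial representation and field names are assumptions, not taken from the study.

```python
def rt_bias_by_eye_movement(trials):
    """Dot-probe RT bias, split by whether an eye movement occurred.

    trials: list of (rt_ms, probe_at_emotional, eye_movement) tuples,
    where the last two entries are booleans. Returns the bias score for
    each trial subset, or None if a subset lacks one of the trial types.
    """
    def bias(subset):
        emo = [rt for rt, at_emo, _ in subset if at_emo]
        neu = [rt for rt, at_emo, _ in subset if not at_emo]
        if not emo or not neu:
            return None
        # positive bias = faster at the emotional location = attentional capture
        return sum(neu) / len(neu) - sum(emo) / len(emo)

    with_em = [t for t in trials if t[2]]
    without_em = [t for t in trials if not t[2]]
    return {"with_eye_movements": bias(with_em),
            "without_eye_movements": bias(without_em)}

# Hypothetical data: bias is present in the no-eye-movement trials.
trials = [(520, True, False), (480, True, False),
          (560, False, False), (540, False, False),
          (500, True, True), (510, False, True)]
scores = rt_bias_by_eye_movement(trials)
```

Computing the score "blind" to the third field reproduces the overall bias of Experiment 1, which is why separating the two trial types mattered.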


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)