Similar Documents
20 similar documents found (search time: 411 ms)
1.
We investigated coordinated movements between the eyes and head (“eye-head coordination”) in relation to vision for action. Several studies have measured eye and head movements during a single gaze shift, focusing on the mechanisms of motor control during eye-head coordination. However, in everyday life, gaze shifts occur sequentially and are accompanied by movements of the head and body. Under such conditions, visual cognitive processing influences eye movements and might also influence eye-head coordination, because sequential gaze shifts comprise cycles of visual processing (fixation) and data acquisition (gaze shifts). In the present study, we examined how the eyes and head move in coordination during visual search in a large visual field. Subjects moved their eyes, head, and body without restriction inside a 360° visual display system. We found patterns of eye-head coordination that differed from those observed in single gaze-shift studies. First, we frequently observed multiple saccades during one continuous head movement, and the contribution of head movement to gaze shifts increased with the number of saccades. This relationship between head movements and sequential gaze shifts suggests eye-head coordination across several saccade-fixation sequences; this could be related to cognitive processing, because saccade-fixation cycles are the product of visual cognitive processing. Second, the distribution bias of eye position during gaze fixation was highly correlated with head orientation: the distribution peak of eye position was biased in the same direction as head orientation. This influence of head orientation suggests that eye-head coordination is also involved in gaze fixation, when the visual system processes retinal information. This further supports a role for eye-head coordination in visual cognitive processing.
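For reference, the "contribution of head movement to gaze shifts" reported above is conventionally quantified through the standard gaze decomposition; the abstract does not give the exact metric used, so this is the textbook form, shown as a sketch:

\[
G(t) = E(t) + H(t), \qquad \Delta G = \Delta E + \Delta H, \qquad c_{\mathrm{head}} = \frac{\Delta H}{\Delta G},
\]

where \(G\) is gaze-in-space, \(E\) is eye-in-head, and \(H\) is head-in-space orientation. The head contribution \(c_{\mathrm{head}}\) is the fraction of a gaze shift carried by the head, the quantity the study reports to grow as more saccades ride on a single continuous head movement.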

2.
For humans, social cues often guide the focus of attention. Although many nonhuman primates, like humans, live in large, complex social groups, the extent to which human and nonhuman primates share fundamental mechanisms of social attention remains unexplored. Here, we show that, when viewing a rhesus macaque looking in a particular direction, both rhesus macaques and humans reflexively and covertly orient their attention in the same direction. Specifically, when performing a peripheral visual target detection task, viewing a monkey with either its eyes alone or with both its head and eyes averted to one side facilitated the detection of peripheral targets when they randomly appeared on the same side. Moreover, viewing images of a monkey with averted gaze evoked small but systematic shifts in eye position in the direction of gaze in the image. The similar magnitude and temporal dynamics of response facilitation and eye deviation in monkeys and humans suggest shared neural circuitry mediating social attention.

3.

Background

Humans detect faces with direct gazes among those with averted gazes more efficiently than they detect faces with averted gazes among those with direct gazes. We examined whether this “stare-in-the-crowd” effect occurs in chimpanzees (Pan troglodytes), whose eye morphology differs from that of humans (i.e., low-contrast eyes, dark sclera).

Methodology/Principal Findings

An adult female chimpanzee was trained to search for an odd-item target (front view of a human face) among distractors that differed from the target only with respect to the direction of the eye gaze. During visual-search testing, she performed more efficiently when the target was a direct-gaze face than when it was an averted-gaze face. This direct-gaze superiority was maintained when the faces were inverted and when parts of the face were scrambled. Subsequent tests revealed that gaze perception in the chimpanzee was controlled by the contrast between iris and sclera, as in humans, but that the chimpanzee attended only to the position of the iris in the eye, irrespective of head direction.

Conclusion/Significance

These results suggest that the chimpanzee can discriminate among human gaze directions and is more sensitive to direct gazes. However, limitations in her perception of human gaze are suggested by her inability to completely transfer her performance to faces shown in three-quarter view.

4.
Vision is important for postural control, as shown by the Romberg quotient (RQ): with eyes closed, postural instability increases relative to eyes open (RQ = 2). Yet while fixating at far distance, postural stability is similar with eyes open and eyes closed (RQ = 1). Postural stability can be better with both eyes viewing than with one, but this effect is not consistent among healthy subjects. The first goal of this study was to test the RQ as a function of distance for children with convergent versus divergent strabismus. The second goal was to test whether vision with two eyes provides better postural stability than vision with one eye. Thirteen children with divergent strabismus and eleven with convergent strabismus participated in this study. Posturography was performed with the Techno concept device. Experiment 1 had four conditions: fixation at 40 cm and at 200 cm, each with eyes open and eyes covered (evaluation of RQ). Experiment 2 had six conditions: fixation at 40 cm and at 200 cm, with both eyes viewing or under monocular vision (dominant and non-dominant eye). For convergent strabismus, the group mean RQ was 1.3 at near and 0.94 at far distance; for divergent strabismus, it was 1.06 at near and 1.68 at far. For all children, the surface of body sway was significantly smaller with both eyes viewing than under monocular viewing (either eye). The increased RQ at near for convergent and at far for divergent strabismus is attributed to the influence of the default strabismus angle and to better use of ocular motor signals. Vision with two eyes improves postural control at both viewing distances and for both types of strabismus. This benefit may be due to complementary mechanisms: a larger visual field, and better quality of fixation and vergence angle owing to the use of visual inputs from both eyes.
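The Romberg quotient values quoted above follow its standard definition (the abstract does not spell out the formula): the ratio of a sway measure recorded with eyes closed to the same measure with eyes open,

\[
RQ = \frac{S_{\text{eyes closed}}}{S_{\text{eyes open}}},
\]

so RQ = 2 means sway doubles without vision, RQ ≈ 1 means vision contributes nothing to stability, and RQ < 1 (the convergent group's 0.94 at far distance) means the children were actually steadier with their eyes covered.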

5.
Humans are able to judge whether a target is accelerating in many viewing contexts, but it is an open question how the motion pattern per se affects visual acceleration perception. We measured acceleration and deceleration detection using patterns of random dots with horizontal (simpler) or radial motion (more visually complex). The results suggest that we detect acceleration better when viewing radial optic flow than horizontal translation. However, the direction within each type of pattern has no effect on performance and observers detect acceleration and deceleration similarly within each condition. We conclude that sensitivity to the presence of acceleration is generally higher for more complex patterns, regardless of the direction within each type of pattern or the sign of acceleration.
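A minimal sketch of how the two random-dot patterns contrasted above are typically generated; field size, speed, and acceleration are illustrative assumptions, not parameters from the study:

import numpy as np

def update_dots(xy, t, dt, pattern="radial", v0=2.0, a=0.5):
    """Advance random-dot positions by one frame.

    xy      : (N, 2) dot positions in deg, origin at the pattern centre.
    pattern : "horizontal" for uniform translation, or "radial" for
              expansion away from the centre (optic-flow-like).
    v0, a   : initial speed (deg/s) and acceleration (deg/s^2);
              a < 0 gives deceleration. Illustrative values only.
    """
    speed = v0 + a * t
    if pattern == "horizontal":
        xy[:, 0] += speed * dt                        # same vector for every dot
    else:
        r = np.linalg.norm(xy, axis=1, keepdims=True)
        xy += speed * dt * xy / np.maximum(r, 1e-6)   # move along each radius
    return xy

# 200 dots in a 20 x 20 deg field, one 0.5 s accelerating radial sequence.
dots = np.random.default_rng(0).uniform(-10, 10, (200, 2))
dt = 1 / 60
for frame in range(30):
    dots = update_dots(dots, frame * dt, dt, pattern="radial")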

6.
The influence of eye movement-related artifacts on the electroencephalography (EEG) signals of human subjects performing a direction- or viewing-area-dependent saccade task was investigated using simultaneous recording of ocular potentials by electro-oculography (EOG). Previous work on EOG artifact removal has mostly used tasks with a single fixation point at the screen center, paying little attention to how the orientation of the corneo-retinal dipole affects the EEG head map. In the present study, we hypothesized a systematic EOG influence that differs according to how eye-movement directions are coupled with viewing areas containing different fixation points. This effect was validated by linear regression analysis across 12 task conditions combining horizontal/vertical eye-movement direction with three segregated gaze zones on the screen. First, event-related potential topographic patterns were compared across the 12 conditions, and the propagation coefficients of the linear regression were then calculated for each condition. The EOG influences differed significantly across a large number of EEG channels, especially for horizontal eye movements. In cross-validation, linear regression using the dataset appropriate to the target direction/viewing-area combination outperformed the traditional method using a single central fixation. This result may open a way to improve artifact-correction methods by accounting for the systematic EOG influence, which can be predicted from the viewing angle, for example with an eye-tracker system.
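A minimal sketch of the regression-based correction described above: propagation coefficients are fit by least squares and the scaled EOG is subtracted from each EEG channel. Synthetic data stand in for recordings, and the single condition shown would be repeated for each of the 12 direction/viewing-area combinations; none of this is the authors' actual implementation:

import numpy as np

rng = np.random.default_rng(0)
T, C = 1000, 8                        # samples and EEG channels (illustrative)

# Synthetic calibration data for one condition: EEG = brain signal + EOG leakage.
eog = rng.standard_normal((T, 2))             # horizontal and vertical EOG
b_true = 0.3 * rng.standard_normal((2, C))    # true propagation coefficients
eeg = rng.standard_normal((T, C)) + eog @ b_true

# Fit the propagation coefficients for this condition by least squares;
# the study fits a separate set per condition and applies the matching one.
b_hat, *_ = np.linalg.lstsq(eog, eeg, rcond=None)

# Correction: subtract the predicted ocular contribution from every channel.
eeg_clean = eeg - eog @ b_hat
print(np.abs(b_hat - b_true).max())   # small value -> coefficients recovered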

7.
Numerous studies have addressed the issue of where people look when they perform hand movements. Yet very little is known about how visuomotor performance is affected by fixation location. Previous studies investigating the accuracy of actions performed in visual periphery have revealed inconsistent results. While movements performed under full visual feedback (closed-loop) seem to remain surprisingly accurate, open-loop as well as memory-guided movements usually show a distinct bias (i.e., overestimation of target eccentricity) when executed in periphery. In this study, we aimed to investigate whether gaze position affects movements that are performed under full vision but cannot be corrected based on a direct comparison between the hand and target position. To do so, we employed a classical visuomotor reaching task in which participants were required to move their hand through a gap between two obstacles into a target area. Participants performed the task in four gaze conditions: free viewing (no restrictions on gaze), central fixation, or fixation on one of the two obstacles. Our findings show that obstacle-avoidance behaviour is moderated by fixation position. Specifically, participants tended to select movement paths that veered away from the fixated obstacle, indicating that perceptual errors persist in closed-loop vision conditions if they cannot be corrected effectively based on visual feedback. Moreover, by measuring eye movements in a free-viewing task (Experiment 2), we confirmed that participants naturally prefer to move their eyes and hand to the same spatial location.

8.
Measurement of the optomotor response is a common way to determine thresholds of the visual system in animals. Particularly in mice, it is frequently used to characterize the visual performance of different genetically modified strains or to test the effect of various drugs on visual performance. Several methods have been developed to facilitate the presentation of stimuli using computer screens or projectors. Common methods are based either on the measurement of eye movements during optokinetic reflex behavior or on the measurement of head and/or body movements during optomotor responses. Eye movements can easily and objectively be quantified, but their measurement requires invasive fixation of the animals. Head movements can be observed in freely moving animals, but have until now depended on the judgment of a human observer, who reported the counted tracking movements of the animal during an experiment. In this study we present a novel measurement and stimulation system based on open-source building plans and software. This system presents appropriate 360° stimuli while simultaneously video-tracking the animal's head movements without fixation. The head gaze determined on-line is used to adjust the stimulus to the head position, as well as to automatically calculate visual acuity. As an example, we show that automatically measured visual response curves of mice match the results obtained by a human observer very well. The spatial acuity thresholds yielded by the automatic analysis are also consistent with the human-observer approach and with published results. Hence, OMR-arena provides an affordable, convenient and objective way to measure mouse visual performance.
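A minimal sketch of the two automated ingredients the abstract describes, re-centring the stimulus on the tracked head direction and scoring tracking without a human observer; function names, drift speed, and thresholds are illustrative assumptions, not taken from the OMR-arena software:

import numpy as np

def grating_angle(head_yaw_deg, t, drift_deg_s=12.0):
    """Angular position of the drifting pattern, anchored to the head.

    Redrawing the stimulus around the current head yaw keeps the spatial
    frequency seen by the freely moving animal constant without fixation.
    """
    return head_yaw_deg + drift_deg_s * t

def tracking_score(head_yaw_deg, dt, drift_deg_s):
    """Fraction of frames in which the head follows the rotating pattern,
    a crude automated stand-in for an observer counting tracking movements."""
    head_vel = np.gradient(head_yaw_deg, dt)
    same_direction = np.sign(head_vel) == np.sign(drift_deg_s)
    plausible_speed = np.abs(head_vel) < 3.0 * abs(drift_deg_s)  # skip fast resets
    return float(np.mean(same_direction & plausible_speed))

# Demo: a synthetic head trace that partly follows a 12 deg/s drift.
dt = 1.0 / 60.0
t = np.arange(0.0, 5.0, dt)
head = 12.0 * t + np.cumsum(np.random.default_rng(1).normal(0.0, 0.3, t.size))
print(f"tracking score: {tracking_score(head, dt, 12.0):.2f}")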

9.
Using video recordings of hens, Gallus gallus domesticus, as they approached different kinds of objects, I examined how change in object distance is associated with a change from lateral to binocular viewing. The birds tended to view distant objects laterally while they preferentially viewed objects less than 20-30 cm away frontally; this was true whether they were looking at another bird or at an inanimate object. However, as well as switching between lateral and frontal viewing, the hens also swung their heads from side to side with movements so large that the same object appeared to be viewed with completely different parts of the retina, and even with different eyes, in rapid succession. When confronted with a novel object, the hens walked more slowly but continued to show large head movements. This suggests that, unlike mammals, which gaze fixedly at novel objects, hens investigate them by moving the head and looking at them with different, specialized, parts of their eyes. Many aspects of bird behaviour, such as search image formation, vigilance and visual discriminations, may be affected by the way they move the head and eyes.

10.
The coordination of visual attention among social partners is central to many components of human behavior and human development. Previous research has focused on one pathway to the coordination of looking behavior by social partners, gaze following. The extant evidence shows that even very young infants follow the direction of another's gaze but they do so only in highly constrained spatial contexts because gaze direction is not a spatially precise cue as to the visual target and not easily used in spatially complex social interactions. Our findings, derived from the moment-to-moment tracking of eye gaze of one-year-olds and their parents as they actively played with toys, provide evidence for an alternative pathway, through the coordination of hands and eyes in goal-directed action. In goal-directed actions, the hands and eyes of the actor are tightly coordinated both temporally and spatially, and thus, in contexts including manual engagement with objects, hand movements and eye movements provide redundant information about where the eyes are looking. Our findings show that one-year-olds rarely look to the parent's face and eyes in these contexts but rather infants and parents coordinate looking behavior without gaze following by attending to objects held by the self or the social partner. This pathway, through eye-hand coupling, leads to coordinated joint switches in visual attention and to an overall high rate of looking at the same object at the same time, and may be the dominant pathway through which physically active toddlers align their looking behavior with a social partner.

11.
Observers made a saccade between two fixation markers while a probe was flashed sequentially at two locations on a side screen. The first probe was presented in the far periphery just within the observer's visual field. This target was extinguished and the observers made a large saccade away from the probe, which would have left it far outside the visual field if it had still been present. The second probe was then presented, displaced from the first in the same direction as the eye movement and by about the same distance as the saccade step. Because both eyes and probes shifted by similar amounts, there was little or no shift between the first and second probe positions on the retina. Nevertheless, subjects reported seeing motion corresponding to the spatial displacement not the retinal displacement. When the second probe was presented, the effective location of the first probe lay outside the visual field demonstrating that apparent motion can be seen from a location outside the visual field to a second location inside the visual field. Recent physiological results suggest that target locations are “remapped” on retinotopic representations to correct for the effects of eye movements. Our results suggest that the representations on which this remapping occurs include locations that fall beyond the limits of the retina.

12.
We use visual information to guide our grasping movements. When grasping an object with a precision grip, the two digits need to reach two different positions more or less simultaneously, but the eyes can only be directed to one position at a time. Several studies that have examined eye movements in grasping have found that people tend to direct their gaze near where their index finger will contact the object. Here we aimed at better understanding why people do so by asking participants to lift an object off a horizontal surface. They were to grasp the object with a precision grip while movements of their hand, eye and head were recorded. We confirmed that people tend to look closer to positions that a digit needs to reach more accurately. Moreover, we show that where they look as they reach for the object depends on where they were looking before, presumably because they try to minimize the time during which the eyes are moving so fast that no new visual information is acquired. Most importantly, we confirmed that people have a bias to direct gaze towards the index finger’s contact point rather than towards that of the thumb. In our study, this cannot be explained by the index finger contacting the object before the thumb. Instead, it appears to be because the index finger moves to a position that is hidden behind the object that is grasped, probably making this the place at which one is most likely to encounter unexpected problems that would benefit from visual guidance. However, this cannot explain the bias that was found in previous studies, where neither contact point was hidden, so it cannot be the only explanation for the bias.

13.
Determining where another person is attending is an important skill for social interaction that relies on various visual cues, including the turning direction of the head and body. This study reports a novel high-level visual aftereffect that addresses the important question of how these sources of information are combined in gauging social attention. We show that adapting to images of heads turned 25° to the right or left produces a perceptual bias in judging the turning direction of subsequently presented bodies. In contrast, little to no change in the judgment of head orientation occurs after adapting to extremely oriented bodies. The unidirectional nature of the aftereffect suggests that cues from the human body signaling social attention are combined in a hierarchical fashion and is consistent with evidence from single-cell recording studies in nonhuman primates showing that information about head orientation can override information about body posture when both are visible.

14.
The goal of this study was to test whether a superposition model of smooth-pursuit and vestibulo-ocular reflex (VOR) eye movements could account for the stability of gaze that subjects show as they view a stationary target, during head rotations at frequencies that correspond to natural movements. Horizontal smooth-pursuit and the VOR were tested using sinusoidal stimuli with frequencies in the range 1.0–3.5 Hz. During head rotation, subjects viewed a stationary target either directly or through an optical device that required eye movements to be approximately twice the amplitude of head movements in order to maintain foveal vision of the target. The gain of compensatory eye movements during viewing through the optical device was generally greater than during direct viewing or during attempted fixation of the remembered target location in darkness. This suggests that visual factors influence the response, even at high frequencies of head rotation. During viewing through the optical device, the gain of compensatory eye movements declined as a function of the frequency of head rotation (P < 0.001) but, at any particular frequency, there was no correlation with peak head velocity (P > 0.23), peak head acceleration (P > 0.22) or retinal slip speed (P > 0.22). The optimal values of parameters of smooth-pursuit and VOR components of a simple superposition model were estimated in the frequency domain, using the measured responses during head rotation, as each subject viewed the stationary target through the optical device. We then compared the model's prediction of smooth-pursuit gain and phase, at each frequency, with values obtained experimentally. Each subject's pursuit showed lower gain and greater phase lag than the model predicted. Smooth-pursuit performance did not improve significantly if the moving target was a 10 deg × 10 deg Amsler grid, or if sinusoidal oscillation of the target was superimposed on ramp motion. Further, subjects were still able to modulate the gain of compensatory eye movements during pseudo-random head perturbations, making improved predictor performance during visual-vestibular interactions unlikely. We conclude that the increase in gain of eye movements that compensate for head rotations when subjects view, rather than imagine, a stationary target cannot be adequately explained by superposition of VOR and smooth-pursuit signals. Instead, vision may affect VOR performance by determining the context of the behavior.
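In its simplest frequency-domain form, the superposition model tested here is a linear sum of the two oculomotor commands; this is a sketch of the generic model class, not the authors' exact parameterization:

\[
\dot{E}(f) = -\,G_{\mathrm{VOR}}(f)\,\dot{H}(f) + G_{\mathrm{P}}(f)\,\dot{R}(f),
\]

where \(\dot{E}\), \(\dot{H}\), and \(\dot{R}\) are eye-in-head, head, and retinal-slip velocities, and each \(G(f)\) is a complex transfer function (gain and phase) estimated from the sinusoidal responses. Viewing through the doubling optics requires a compensatory gain near 2, and the study's point is that the pursuit gain and phase measured in isolation fall short of what this linear sum would need in order to reproduce the observed stabilization.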

15.
Near work is associated with increased activity in the neck and shoulder muscles, but the underlying mechanism is still unknown. This study was designed to determine whether a dynamic change in focus, alternating between a nearby and a more distant visual target, produces a direct parallel change in trapezius muscle activity. Fourteen healthy controls and 12 patients with a history of visual and neck/shoulder symptoms performed a Near-Far visual task under three different viewing conditions; one neutral condition with no trial lenses, one condition with negative trial lenses to create increased accommodation, and one condition with positive trial lenses to create decreased accommodation. Eye lens accommodation and trapezius muscle activity were continuously recorded. The trapezius muscle activity was significantly higher during Near than during Far focusing periods for both groups within the neutral viewing condition, and there was a significant co-variation in time between accommodation and trapezius muscle activity within the neutral and positive viewing conditions for the control group. In conclusion, these results reveal a connection between Near focusing and increased muscle activity during dynamic changes in focus between a nearby and a far target. A direct link, from the accommodation/vergence system to the trapezius muscles cannot be ruled out, but the connection may also be explained by an increased need for eye-neck (head) stabilization when focusing on a nearby target as compared to a more distant target.

16.
Blake R, Sobel KV, Gilroy LA. Neuron, 2003, 39(5): 869-878.
When the visual system is faced with conflicting or ambiguous stimulus information, visual perception fluctuates over time. We found that perceptual alternations are slowed when inducing stimuli move within the visual field, constantly engaging fresh, unadapted neural tissue. During binocular rivalry, dominance durations were longer when rival figures moved compared to when they were stationary, yielding lower alternation rates. Rate was not reduced, however, when observers tracked the moving targets, keeping the images on approximately the same retinal area. Alternations were reliably triggered when rival targets passed through a local region of the visual field preadapted to one of the rival targets. During viewing of a kinetic globe whose direction of rotation was ambiguous, observers experienced fewer alternations in perceived direction when the globe moved around the visual field or when the globe's axis of rotation changed continuously. Evidently, local neural adaptation is a key ingredient in the instability of perception.

17.
Heading direction is determined from visual and vestibular cues. Both sensory modalities have been shown to have better direction discrimination for headings near straight ahead. Previous studies of visual heading estimation have not used the full range of stimuli, and vestibular heading estimation has not previously been reported. The current experiments measured human heading estimation in the horizontal plane with vestibular, visual, and spoken stimuli. The vestibular and visual tasks involved 16 cm of platform or visual motion. The spoken stimulus was a voice command speaking a heading angle. All conditions demonstrated direction-dependent biases in perceived headings, such that biases increased for headings further from the fore-aft axis. The bias was larger with the visual stimulus than with the vestibular stimulus in all 10 subjects. For the visual and vestibular tasks, precision was best for headings near fore-aft. The spoken headings had the least bias, and their precision was less dependent on direction. In a separate experiment in which headings were limited to ±45°, the biases were much smaller, demonstrating that the range of headings influences perception. There was a strong and highly significant correlation between the bias curves for visual and spoken stimuli in every subject. The correlations between visual-vestibular and vestibular-spoken biases were weaker but remained significant. The observed biases in both visual and vestibular heading perception qualitatively resembled the predictions of a recent population vector decoder model (Gu et al., 2010) based on the known distribution of neuronal sensitivities.
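A minimal sketch of the population-vector decoding idea invoked above (Gu et al., 2010): each model neuron votes along its preferred heading, weighted by its response, and a non-uniform distribution of preferred headings biases the estimate. The tuning form and cluster parameters are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(2)

# Preferred headings clustered near lateral (+/-90 deg), as reported for
# macaque MSTd; this non-uniformity is what biases the decoder.
pref = np.deg2rad(np.concatenate([rng.normal(90, 30, 100),
                                  rng.normal(-90, 30, 100)]))

def decode(heading_deg):
    """Population-vector estimate: responses vote along preferred headings."""
    h = np.deg2rad(heading_deg)
    rate = np.exp(np.cos(h - pref))           # broad cosine-like tuning
    x = np.sum(rate * np.cos(pref))
    y = np.sum(rate * np.sin(pref))
    return np.rad2deg(np.arctan2(y, x))

# Estimates are pushed away from the fore-aft (0 deg) axis: bias is near
# zero at 0 and 90 deg and largest in between, qualitatively as observed.
for h in (0, 15, 30, 45, 60, 90):
    print(f"true {h:3d} deg -> decoded {decode(h):6.1f} deg")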

18.

Background

Visual exploration of the surroundings during locomotion at heights has not yet been investigated in subjects suffering from fear of heights.

Methods

Eye and head movements were recorded separately in 16 subjects susceptible to fear of heights and in 16 non-susceptible controls while walking on an emergency escape balcony 20 meters above ground level. Participants wore mobile infrared eye-tracking goggles with a head-fixed scene camera and integrated 6-degrees-of-freedom inertial sensors for recording head movements. Video recordings of the subjects were simultaneously made to correlate gaze and gait behavior.

Results

Susceptibles exhibited limited visual exploration of the surroundings, particularly in depth. Head movements were significantly reduced in all three planes (yaw, pitch, and roll), with fewer vertical head oscillations, whereas total eye movements (saccade amplitudes, frequencies, fixation durations) did not differ from those of controls. However, there was an anisotropy, with a preference for the vertical as opposed to the horizontal direction of saccades. Comparison of eye and head movement histograms and the resulting gaze-in-space revealed a smaller total area of visual exploration, which was mainly directed straight ahead and covered vertically an area from the horizon to the ground in front of the feet. This gaze behavior was associated with a slow, cautious gait.

Conclusions

The visual exploration of the surroundings by subjects susceptible to fear of heights differs during locomotion at heights from the previously investigated behavior of standing still and looking from a balcony. The anisotropy of gaze-in-space, with its preference for the vertical as opposed to the horizontal direction, appears during locomotion as it does during stance. Avoiding looking into the abyss may reduce anxiety in both conditions; exploration of the “vertical strip” in the heading direction is beneficial for visual control of balance and avoidance of obstacles during locomotion.

19.
When viewing two superimposed, translating sets of dots moving in different directions, one overestimates the direction difference. This phenomenon of direction repulsion is thought to be driven by inhibitory interactions between directionally tuned motion detectors. However, there is disagreement about where this occurs: at early stages of motion processing, when local motions are extracted, or at the later, global motion-processing stage following the "pooling" of these local measures. These two stages of motion processing have been identified with area V1 and the human homolog of macaque MT/V5, respectively. We designed experiments in which local and global predictions of repulsion are pitted against one another. Our stimuli contained a target set of dots, moving at a uniform speed, superimposed on a "mixed-speed" distractor set. Because the perceived speed of a mixed-speed stimulus is equal to the dots' average speed, a global-processing account of direction repulsion predicts that the repulsion magnitude induced by a mixed-speed distractor will be indistinguishable from that induced by a single-speed distractor moving at the same mean speed. This is exactly what we found. These results provide compelling evidence that global-motion interactions play a major role in driving direction repulsion.

20.
Research on sensory perception now often considers more than one sense at a time. This approach reflects real-world situations, such as when a visible object touches us. Indeed, vision and touch show great interdependence: the sight of a body part can reduce tactile target detection times [1], visual and tactile attentional systems are spatially linked [2], and the texture of surfaces that are actively touched with the fingertips is perceived using both vision and touch [3]. However, these previous findings might be mediated by spatial attention [1, 2] or by improved guidance of movement [3] via visually enhanced body position sense [4-6]. Here, we investigate the direct effects of viewing the body on passive touch. We measured tactile two-point discrimination thresholds [7] on the forearm while manipulating the visibility of the arm but holding gaze direction constant. The spatial resolution of touch was better when the arm was visible than when it was not. Tactile performance was further improved when the view of the arm was magnified. In contrast, performance was not improved by viewing a neutral object at the arm's location, ruling out improved spatial orienting as a possible account. Controls confirmed that no information about the tactile stimulation was provided by visibility of the arm. This visual enhancement of touch may point to online reorganization of tactile receptive fields.
