Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
The sequential deployment of gaze to regions of interest is an integral part of human visual function. Owing to its central importance, decades of research have focused on predicting gaze locations, but there has been relatively little formal attempt to predict the temporal aspects of gaze deployment in natural multi-tasking situations. We approach this problem by decomposing complex visual behaviour into individual task modules that require independent sources of visual information for control, in order to model human gaze deployment on different task-relevant objects. We introduce a softmax barrier model for gaze selection that uses two key elements: a priority parameter that represents task importance per module, and noise estimates that allow modules to represent uncertainty about the state of task-relevant visual information. Comparisons with human gaze data gathered in a virtual driving environment show that the model closely approximates human performance.
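The selection step of such a priority-and-uncertainty model can be sketched roughly as follows. This is an illustrative toy version, not the authors' implementation: the product scoring rule, the temperature parameter, and all function names are our assumptions.

```python
import math
import random

def softmax(scores, temperature=1.0):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp((s - m) / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def select_gaze_module(priorities, uncertainties, temperature=1.0):
    """Pick which task module receives gaze next.

    Each module's score grows with its task priority and with its
    current uncertainty about task-relevant visual state, so important
    or long-neglected tasks attract gaze more often (hypothetical rule).
    """
    scores = [p * u for p, u in zip(priorities, uncertainties)]
    probs = softmax(scores, temperature)
    r, cum = random.random(), 0.0
    for i, pr in enumerate(probs):
        cum += pr
        if r < cum:
            return i
    return len(probs) - 1
```

Stochastic (softmax) rather than greedy selection lets low-priority modules still receive occasional fixations, which is qualitatively what interleaved gaze in multi-tasking looks like.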

2.
Eye contact is a crucial social cue constituting a frequent preliminary to interaction. Thus, the perception of others' gaze may be associated with specific processes beginning with asymmetries in the detection of direct versus averted gaze. We tested this hypothesis in two behavioural experiments using realistic eye stimuli in a visual search task. We manipulated the head orientation (frontal or deviated) and the visual field (right or left) in which the target appeared at display onset. We found that direct gaze targets presented among averted gaze distractors were detected faster and better than averted gaze targets among direct gaze distractors, but only when the head was deviated. Moreover, direct gaze targets were detected very quickly and efficiently regardless of head orientation and visual field, whereas the detection of averted gaze was strongly modulated by these factors. These results suggest that gaze contact has precedence over contextual information such as head orientation and visual field.

3.
Gaze following is a socio-cognitive process that provides adaptive information about potential threats and opportunities in the individual's environment. The aim of the present study was to investigate the potential interaction between emotional context and facial dominance in gaze following. We used the gaze cue task to induce attention to or away from the location of a target stimulus. In the experiment, the gaze cue either belonged to a (dominant looking) male face or a (non-dominant looking) female face. Critically, prior to the task, individuals were primed with pictures of threat or no threat to induce either a dangerous or safe environment. Findings revealed that the primed emotional context critically influenced the gaze cuing effect. While a gaze cue of the dominant male face influenced performance in both the threat and no-threat conditions, the gaze cue of the non-dominant female face only influenced performance in the no-threat condition. This research suggests an implicit, context-dependent follower bias, which carries implications for research on visual attention, social cognition, and leadership.

4.
Comparative analysis of EEG amplitude depression in the α band was performed in two paradigms that differ in the degree of involvement of functionally different attention processes: visual search for a relevant stimulus (RS) among many irrelevant stimuli (iRS), and an oddball task. Simple visual examination of several identical stimuli served as a control for the visual search task. Video-oculography was used to verify gaze direction during the RS search. EEG dynamics in the α band (desynchronization, D) was taken as a correlate of attention processes. Visual search performance revealed considerably higher D after the RS was found compared to the control. The higher degree of D during the search seems to be due to the higher complexity of the task and of the visual environment. The D in the frontal regions, which has the greatest amplitude, presumably reflects the execution of an appropriate motor program under the control of voluntary (top-down) attention. At the same time, the D in the occipital and parietal areas seems to reflect processes of involuntary attention activated by the change in visual information (finding the only RS among numerous iRS). In the oddball task, presentation of both RS and iRS also induced D, which proved to be more marked in response to the RS and maximal in the visual areas. We suppose that D in the oddball task reflects the involvement of involuntary attention.

5.
The aim of this study was to clarify the nature of visual processing deficits caused by cerebellar disorders. We studied the performance of two types of visual search (top-down visual scanning and bottom-up visual scanning) in 18 patients with pure cerebellar types of spinocerebellar degeneration (SCA6: 11; SCA31: 7). The gaze fixation position was recorded with an eye-tracking device while the subjects performed two visual search tasks in which they looked for a target Landolt figure among distractors. In the serial search task, the target was similar to the distractors and the subject had to search for the target by processing each item with top-down visual scanning. In the pop-out search task, the target and distractor were clearly discernible and the visual salience of the target allowed the subjects to detect it by bottom-up visual scanning. The saliency maps clearly showed that the serial search task required top-down visual attention and the pop-out search task required bottom-up visual attention. In the serial search task, the search time to detect the target was significantly longer in SCA patients than in normal subjects, whereas the search time in the pop-out search task was comparable between the two groups. These findings suggested that SCA patients cannot efficiently scan a target using a top-down attentional process, whereas scanning with a bottom-up attentional process is not affected. In the serial search task, the amplitude of saccades was significantly smaller in SCA patients than in normal subjects. The variability of saccade amplitude (saccadic dysmetria), number of re-fixations, and unstable fixation (nystagmus) were larger in SCA patients than in normal subjects, accounting for a substantial proportion of scattered fixations around the items. Saccadic dysmetria, re-fixation, and nystagmus may play important roles in the impaired top-down visual scanning in SCA, hampering precise visual processing of individual items.

6.
We examined the effect of increased cognitive load on visual search behavior and measures of gait performance during locomotion. Also, we investigated how personality traits, specifically the propensity to consciously control or monitor movements (trait movement 'reinvestment'), impacted the ability to maintain effective gaze under conditions of cognitive load. Healthy young adults traversed a novel adaptive walking path while performing a secondary serial subtraction task. Performance was assessed using correct responses to the cognitive task, gaze behavior, stepping accuracy, and time to complete the walking task. When walking while simultaneously carrying out the secondary serial subtraction task, participants visually fixated on task-irrelevant areas 'outside' the walking path more often and for longer durations of time, and fixated on task-relevant areas 'inside' the walkway for shorter durations. These changes were most pronounced in high-trait-reinvesters. We speculate that reinvestment-related processes placed an additional cognitive demand upon working memory. These increased task-irrelevant 'outside' fixations were accompanied by slower completion rates on the walking task and greater gross stepping errors. Findings suggest that attention is important for the maintenance of effective gaze behaviors, supporting previous claims that the maladaptive changes in visual search observed in high-risk older adults may be a consequence of inefficiencies in attentional processing. Identifying the underlying attentional processes that disrupt effective gaze behaviour during locomotion is an essential step in the development of rehabilitation, with this information allowing for the emergence of interventions that reduce the risk of falling.

7.
Numerous studies have addressed the issue of where people look when they perform hand movements. Yet, very little is known about how visuomotor performance is affected by fixation location. Previous studies investigating the accuracy of actions performed in visual periphery have revealed inconsistent results. While movements performed under full visual feedback (closed-loop) seem to remain surprisingly accurate, open-loop as well as memory-guided movements usually show a distinct bias (i.e. overestimation of target eccentricity) when executed in periphery. In this study, we aimed to investigate whether gaze position affects movements that are performed under full vision but cannot be corrected based on a direct comparison between the hand and target position. To do so, we employed a classical visuomotor reaching task in which participants were required to move their hand through a gap between two obstacles into a target area. Participants performed the task in four gaze conditions: free-viewing (no restrictions on gaze), central fixation, or fixation on one of the two obstacles. Our findings show that obstacle avoidance behaviour is moderated by fixation position. Specifically, participants tended to select movement paths that veered away from the fixated obstacle, indicating that perceptual errors persist in closed-loop vision conditions if they cannot be corrected effectively based on visual feedback. Moreover, measuring eye movements in a free-viewing task (Experiment 2), we confirmed that participants naturally prefer to move their eyes and hand to the same spatial location.

8.
Many of the brain structures involved in performing real movements also have increased activity during imagined movements or during motor observation, and this could be the neural substrate underlying the effects of motor imagery in motor learning or motor rehabilitation. In the absence of any objective physiological method of measurement, it is currently impossible to be sure that the patient is indeed performing the task as instructed. Eye gaze recording during a motor imagery task could be a possible way to "spy" on the activity an individual is really engaged in. The aim of the present study was to compare the pattern of eye movement metrics during motor observation, visual and kinesthetic motor imagery (VI, KI), target fixation, and mental calculation. Twenty-two healthy subjects (16 females and 6 males) were required to perform tests in five conditions using imagery in the Box and Block Test tasks, following the procedure described by Liepert et al. Eye movements were analysed with a non-invasive oculometric measure (SMI RED250 system). Two parameters describing gaze pattern were calculated: the index of ocular mobility (saccade duration over saccade + fixation duration) and the number of midline crossings (i.e. the number of times the subject's gaze crossed the midline of the screen when performing the different tasks). Both parameters were significantly different between visual imagery and kinesthetic imagery, visual imagery and mental calculation, and visual imagery and target fixation. For the first time, we were able to show that eye movement patterns differ during VI and KI tasks. Our results suggest that gaze metric parameters could be used as an objective, unobtrusive approach to assess engagement in a motor imagery task. Further studies should define how oculomotor parameters could be used as an indicator of the rehabilitation task a patient is engaged in.
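The two gaze-pattern parameters described above are simple ratios and counts, and could be computed along these lines. This is a hypothetical sketch: variable names and the strict-inequality crossing rule are our assumptions, not the study's analysis code.

```python
def ocular_mobility_index(saccade_duration, fixation_duration):
    """Index of ocular mobility: saccade time over total saccade + fixation time."""
    total = saccade_duration + fixation_duration
    if total <= 0:
        raise ValueError("no gaze data")
    return saccade_duration / total

def midline_crossings(x_positions, midline):
    """Count how many times consecutive gaze samples cross a vertical midline.

    A crossing is counted when two successive x-positions fall on opposite
    sides of the midline (samples exactly on the midline are ignored here).
    """
    return sum(
        1 for a, b in zip(x_positions, x_positions[1:])
        if (a - midline) * (b - midline) < 0
    )
```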

9.
Variability is an inherent and important feature of human movement. This variability exhibits a chaotic structure. Visual feedback training using regular, predictable visual target motions does not take this essential characteristic of human movement into account, and may result in task-specific learning and loss of visuo-motor adaptability. In this study, we asked how well healthy young adults can track visual target cues of varying degrees of complexity during whole-body swaying in the anterior-posterior (AP) and medio-lateral (ML) directions. Participants were asked to track three visual target motions: a complex (Lorenz attractor), a noise (brown noise), and a periodic (sine) moving target, while receiving online visual feedback about their performance. Postural sway, gaze, and target motion were synchronously recorded, and the degree of force-target and gaze-target coupling was quantified using spectral coherence and cross-approximate entropy. Analysis revealed that both force-target and gaze-target coupling were sensitive to the complexity of the stimulus motions. Postural sway showed a higher degree of coherence with the Lorenz attractor than with the brown noise or sinusoidal stimulus motion. Similarly, gaze was more synchronous with the Lorenz attractor than with the brown noise and sinusoidal stimulus motion. These results were similar regardless of whether tracking was performed in the AP or ML direction. Based on the theoretical model of optimal movement variability, tracking a complex signal may provide a better stimulus for improving visuo-motor adaptation and learning in postural control.
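A complex target motion of the kind described can be generated by numerically integrating the Lorenz system. The sketch below uses forward Euler with standard Lorenz parameters; the step size, initial condition, and function name are our assumptions (the study does not specify its generation procedure), and one coordinate of the trajectory would serve as the 1-D target signal.

```python
def lorenz_trajectory(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with forward Euler.

    Returns a list of (x, y, z) points; any single coordinate can be
    rescaled and used as a chaotic 1-D target motion for tracking.
    """
    x, y, z = 1.0, 1.0, 1.0  # arbitrary initial condition
    trajectory = []
    for _ in range(n_steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        trajectory.append((x, y, z))
    return trajectory
```

For a real experiment a higher-order integrator (e.g. Runge-Kutta) would reduce numerical error, but Euler suffices to illustrate the signal's structure.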

10.
It remains unclear whether spontaneous eye movements during visual imagery reflect the mental generation of a visual image (i.e. the arrangement of the component parts of a mental representation). To address this question, we recorded eye movements in an imagery task and in a phonological fluency (non-imagery) task, both consisting of naming French towns from long-term memory. Only in the visual imagery condition did spontaneous eye positions reflect the geographic positions of the towns evoked by the subjects. This demonstrates that eye positions closely reflect the mapping of mental images. Advanced analysis of gaze positions using the bi-dimensional regression model confirmed the spatial correlation of gaze and town locations in every single individual in the visual imagery task, and in none of the individuals when no imagery accompanied memory retrieval. In addition, the evolution of the bi-dimensional regression's coefficient of determination revealed, in each individual, a process of generating several iterative series of a limited number of towns mapped with the same spatial distortion, despite different individual orders of town evocation and different individual mappings. Such consistency across subjects, revealed by gaze (the mind's eye), gives empirical support to theories postulating that visual imagery, like visual sampling, is an iterative, fragmented process.

11.
Diurnal flying animals such as birds depend primarily on vision to coordinate their flight path during goal-directed flight tasks. To extract the spatial structure of the surrounding environment, birds are thought to use retinal image motion (optical flow) that is primarily induced by motion of their head. It is unclear what gaze behaviors birds perform to support visuomotor control during rapid maneuvering flight in which they continuously switch between flight modes. To analyze this, we measured the gaze behavior of rapidly turning lovebirds in a goal-directed task: take off and fly away from a perch, turn on a dime, and fly back and land on the same perch. High-speed flight recordings revealed that rapidly turning lovebirds perform a remarkably stereotyped gaze behavior, with peak saccadic head turns of up to 2700 degrees per second, as fast as insects, enabled by fast neck muscles. In between saccades, gaze orientation is held constant. By comparing saccade and wingbeat phase, we find that these super-fast saccades are coordinated with the downstroke, when the lateral visual field is occluded by the wings. Lovebirds thus maximize visual perception by overlapping behaviors that impair vision, which helps coordinate maneuvers. Before the turn, lovebirds keep a high-contrast edge in their visual midline. Similarly, before landing, the lovebirds stabilize the center of the perch in their visual midline. The perch on which the birds land swings, like a branch in the wind, and we find that the retinal size of the perch is the most parsimonious visual cue to initiate landing. Our observations show that rapidly maneuvering birds use precisely timed, stereotypic gaze behaviors consisting of rapid head turns and frontal feature stabilization, which facilitates optical-flow-based flight control. Similar gaze behaviors have been reported for visually navigating humans. This finding can inspire more effective vision-based autopilots for drones.

12.
The perception of pictorial gaze cues was examined in long-tailed macaques (Macaca fascicularis). A computerised object-location task was used to explore whether the monkeys would show faster response times to locate a target when its appearance was preceded by congruent as opposed to incongruent gaze cues. Despite existing evidence that macaques preferentially attend to the eyes in facial images and also visually orient with depicted gaze cues, the monkeys did not show faster response times on congruent trials in response to either schematic or photographic stimuli. These findings coincide with those reported for baboons tested with a similar paradigm in which gaze cues preceded a target identification task [Fagot, J., Deruelle, C., 2002. Perception of pictorial gaze by baboons (Papio papio). J. Exp. Psychol. 28, 298-308]. When tested with either pictorial stimuli or interactants, nonhuman primates readily follow gaze but do not seem to use this mechanism to identify a target object; there appears to be a mismatch between attentional changes and manual responses to gaze cues on ostensibly similar tasks.

13.
Training has been shown to improve perceptual performance on limited sets of stimuli. However, whether training can generally improve top-down biasing of visual search in a target-nonspecific manner remains unknown. We trained subjects over ten days on a visual search task, challenging them with a novel target (top-down goal) on every trial, while bottom-up uncertainty (distribution of distractors) remained constant. We analyzed the changes in saccade statistics and visual behavior over the course of training by recording eye movements as subjects performed the task. Subjects became experts at this task, with twofold increased performance, decreased fixation duration, and stronger tendency to guide gaze toward items with color and spatial frequency (but not necessarily orientation) that resembled the target, suggesting improved general top-down biasing of search.

14.
Despite possessing the capacity for selective attention, we often fail to notice the obvious. We investigated participants' (n = 39) failures to detect salient changes in a change blindness experiment. Surprisingly, change detection success varied by over two-fold across participants. These variations could not be readily explained by differences in scan paths or fixated visual features. Yet, two simple gaze metrics, the mean duration of fixations and the variance of saccade amplitudes, systematically predicted change detection success. We explored the mechanistic underpinnings of these results with a neurally-constrained model based on the Bayesian framework of sequential probability ratio testing, with a posterior odds-ratio rule for shifting gaze. The model's gaze strategies and success rates closely mimicked human data. Moreover, the model outperformed a state-of-the-art deep neural network (DeepGaze II) at predicting human gaze patterns in this change blindness task. Our mechanistic model reveals putative rational observer search strategies for change detection during change blindness, with critical real-world implications.
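The core of a sequential probability ratio test, as used in the framework above, can be sketched for a single fixated location: evidence samples accumulate a log-likelihood ratio until it crosses an upper or lower bound. This is a generic textbook SPRT with Gaussian likelihoods, not the paper's neurally-constrained model; the parameter values and function name are illustrative assumptions.

```python
def sprt_decision(samples, mu0=0.0, mu1=1.0, sigma=1.0, log_threshold=2.2):
    """Sequential probability ratio test: 'change' (mean mu1) vs
    'no_change' (mean mu0) from noisy Gaussian evidence samples.

    Returns (decision, n_samples_used), where decision is 'change',
    'no_change', or 'undecided' if neither bound is reached.
    """
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # Log-likelihood ratio of one Gaussian sample under H1 vs H0
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= log_threshold:
            return "change", n
        if llr <= -log_threshold:
            return "no_change", n
    return "undecided", len(samples)
```

In a full gaze model, the bound crossing (or failure to cross) at one location would then feed a posterior odds rule deciding where to fixate next.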

15.
Although the visual system is known to provide relevant information to guide stair locomotion, less is understood about the specific contributions of foveal and peripheral visual field information. The present study investigated the specific role of foveal vision during stair locomotion and ground-stairs transitions by using a dual-task paradigm to influence the ability to rely on foveal vision. Fifteen healthy adults (26.9±3.3 years; 8 females) ascended a 7-step staircase under four conditions: no secondary task (CONTROL); gaze fixation on a fixed target located at the end of the pathway (TARGET); a visual reaction time task (VRT); and an auditory reaction time task (ART). Gaze fixations towards stair features were significantly reduced in TARGET and VRT compared to CONTROL and ART. Despite the reduced fixations, participants were able to successfully ascend the stairs and rarely used the handrail. Step time was increased during VRT compared to CONTROL on most stair steps. Navigating the transition steps did not require more gaze fixations than the middle steps. However, reaction time tended to increase during locomotion on transitions, suggesting additional executive demands during this phase. These findings suggest that foveal vision may not be an essential source of visual information about stair features to guide stair walking, despite the unique control challenges at transition phases highlighted by these phase-specific dual-task effects. Instead, the tendency to look at the steps in usual conditions likely provides a stable reference frame for extraction of visual information about step features from the entire visual field.

16.
We present a novel "Gaze-Replay" paradigm that allows the experimenter to directly test how particular patterns of visual input—generated from people's actual gaze patterns—influence the interpretation of the visual scene. Although this paradigm can potentially be applied across domains, here we applied it specifically to social comprehension. Participants viewed complex, dynamic scenes through a small window displaying only the foveal gaze pattern of a gaze "donor." This was intended to simulate the donor's visual selection, such that a participant could effectively view scenes "through the eyes" of another person. Throughout the presentation of scenes presented in this manner, participants completed a social comprehension task, assessing their abilities to recognize complex emotions. The primary aim of the study was to assess the viability of this novel approach by examining whether these Gaze-Replay windowed stimuli contain sufficient and meaningful social information for the viewer to complete this social perceptual and cognitive task. The results of the study suggested this to be the case; participants performed better in the Gaze-Replay condition compared to a temporally disrupted control condition, and compared to when they were provided with no visual input. This approach has great future potential for the exploration of experimental questions aiming to unpack the relationship between visual selection, perception, and cognition.

17.
This paper reports a comparison between two tasks of visual search. Two observers carried out, in separate blocks, a saccade-to-target task and a manual-target-detection task. The displays, which were identical for the two tasks, consisted of a ring of eight equally spaced Gabor patches. The target could be defined by a difference from the distractors along four possible dimensions: orientation, spatial frequency, contrast or size. These four dimensions were used as variables in separate experiments. In each experiment, performance was measured over an extensive range of values of the particular dimension. Thresholds were thus obtained for the saccade and the manual response tasks. The nature of the response was found to modify the relative visual sensitivity. For orientation differences, manual response performance was better than saccade-to-target performance. The reverse was true for spatial frequency and contrast differences, where saccade-to-target performance was better than manual response performance. We conclude that saccade-selection in a search task draws on different visual information from that used for manual responding in the equivalent task. The two tasks thus differ in more than the different response systems used: the results suggest the action of different underlying neural visual mechanisms as well as different neural motor mechanisms.

18.
Reading performance during standing and walking was assessed for information presented on earth-fixed and head-fixed displays by determining the minimal duration during which a numerical time stimulus needed to be presented for 50% correct naming answers. Reading from the earth-fixed display was comparable during standing and walking, with optimal performance being attained for visual character sizes in the range of 0.2° to 1°. Reading from the head-fixed display was impaired for small (0.2-0.3°) and large (5°) visual character sizes, especially during walking. Analysis of head and eye movements demonstrated that retinal slip was larger during walking than during standing, but remained within the functional acuity range when reading from the earth-fixed display. The detrimental effects on performance of reading from the head-fixed display during walking could be attributed to loss of acuity resulting from large retinal slip. Because walking activated the angular vestibulo-ocular reflex, the resulting compensatory eye movements acted to stabilize gaze on the information presented on the earth-fixed display but destabilized gaze from the information presented on the head-fixed display. We conclude that the gaze stabilization mechanisms that normally allow visual performance to be maintained during physical activity adversely affect reading performance when the information is presented on a display attached to the head.

19.
Attending where others gaze is one of the most fundamental mechanisms of social cognition. The present study is the first to examine the impact of the attribution of mind to others on gaze-guided attentional orienting and its ERP correlates. Using a paradigm in which attention was guided to a location by the gaze of a centrally presented face, we manipulated participants' beliefs about the gazer: gaze behavior was believed to result either from operations of a mind or from a machine. In Experiment 1, beliefs were manipulated by cue identity (human or robot), while in Experiment 2, cue identity (robot) remained identical across conditions and beliefs were manipulated solely via instruction, which was irrelevant to the task. ERP results and behavior showed that participants' attention was guided by gaze only when gaze was believed to be controlled by a human. Specifically, the P1 was more enhanced for validly, relative to invalidly, cued targets only when participants believed the gaze behavior was the result of a mind, rather than of a machine. This shows that sensory gain control can be influenced by higher-order (task-irrelevant) beliefs about the observed scene. We propose a new interdisciplinary model of social attention, which integrates ideas from cognitive and social neuroscience, as well as philosophy, in order to provide a framework for understanding a crucial aspect of how humans' beliefs about the observed scene influence sensory processing.

20.
Gregoriou GG, Gotts SJ, Desimone R. Neuron. 2012;73(3):581-594.
Shifts of gaze and shifts of attention are closely linked and it is debated whether they result from the same neural mechanisms. Both processes involve the frontal eye fields (FEF), an area which is also a source of top-down feedback to area V4 during covert attention. To test the relative contributions of oculomotor and attention-related FEF signals to such feedback, we recorded simultaneously from both areas in a covert attention task and in a saccade task. In the attention task, only visual and visuomovement FEF neurons showed enhanced responses, whereas movement cells were unchanged. Importantly, visual, but not movement or visuomovement cells, showed enhanced gamma frequency synchronization with activity in V4 during attention. Within FEF, beta synchronization was increased for movement cells during attention but was suppressed in the saccade task. These findings support the idea that the attentional modulation of visual processing is not mediated by movement neurons.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)