Similar Documents
 20 similar documents found (search time: 429 ms)
1.
Our eyes move continuously. Even when we attempt to fix our gaze, we produce “fixational” eye movements including microsaccades, drift and tremor. The potential role of microsaccades versus drifts in the control of eye position has been debated for decades and remains in question today. Here we set out to determine the corrective functions of microsaccades and drifts on gaze-position errors due to blinks in non-human primates (Macaca mulatta) and humans. Our results show that blinks contribute to the instability of gaze during fixation, and that microsaccades, but not drifts, correct fixation errors introduced by blinks. These findings provide new insights about eye position control during fixation, and indicate a more general role of microsaccades in fixation correction than previously thought.

2.
We investigated coordinated movements between the eyes and head (“eye-head coordination”) in relation to vision for action. Several studies have measured eye and head movements during a single gaze shift, focusing on the mechanisms of motor control during eye-head coordination. However, in everyday life, gaze shifts occur sequentially and are accompanied by movements of the head and body. Under such conditions, visual cognitive processing influences eye movements and might also influence eye-head coordination because sequential gaze shifts include cycles of visual processing (fixation) and data acquisition (gaze shifts). In the present study, we examined how the eyes and head move in coordination during visual search in a large visual field. Subjects moved their eyes, head, and body without restriction inside a 360° visual display system. We found patterns of eye-head coordination that differed from those observed in single gaze-shift studies. First, we frequently observed multiple saccades during one continuous head movement, and the contribution of head movement to gaze shifts increased as the number of saccades increased. This relationship between head movements and sequential gaze shifts suggests eye-head coordination over several saccade-fixation sequences; this could be related to cognitive processing because saccade-fixation cycles are the result of visual cognitive processing. Second, distribution bias of eye position during gaze fixation was highly correlated with head orientation. The distribution peak of eye position was biased in the same direction as head orientation. This influence of head orientation suggests that eye-head coordination is involved in gaze fixation, when the visual system processes retinal information. This further supports the role of eye-head coordination in visual cognitive processing.

3.
Cognitive scientists have long been interested in the role that eye gaze plays in social interactions. Previous research suggests that gaze acts as a signaling mechanism and can be used to control turn-taking behaviour. However, early research on this topic employed methods of analysis that aggregated gaze information across an entire trial (or trials), which masks any temporal dynamics that may exist in social interactions. More recently, attempts have been made to understand the temporal characteristics of social gaze but little research has been conducted in a natural setting with two interacting participants. The present study combines a temporally sensitive analysis technique with modern eye tracking technology to 1) validate the overall results from earlier aggregated analyses and 2) provide insight into the specific moment-to-moment temporal characteristics of turn-taking behaviour in a natural setting. Dyads played two social guessing games (20 Questions and Heads Up) while their eyes were tracked. Our general results are in line with past aggregated data, and using cross-correlational analysis on the specific gaze and speech signals of both participants we found that 1) speakers end their turn with direct gaze at the listener and 2) the listener in turn begins to speak with averted gaze. Convergent with theoretical models of social interaction, our data suggest that eye gaze can be used to signal both the end and the beginning of a speaking turn during a social interaction. The present study offers insight into the temporal dynamics of live dyadic interactions and also provides a new method of analysis for eye gaze data when temporal relationships are of interest.
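The cross-correlational analysis described above can be illustrated with a short sketch. This is not the authors' code; it is a minimal, generic normalized cross-correlation over two binary time series (e.g. "speaker gazing at listener" and "listener speaking"), where the lag of the peak indicates which signal leads the other. The signal names and sampling are assumptions for illustration.

```python
import numpy as np

def cross_correlation(gaze, speech, max_lag):
    """Normalized cross-correlation between two time series at integer
    lags from -max_lag to +max_lag samples. A peak at a negative lag
    means `speech` lags behind (follows) `gaze`."""
    gaze = np.asarray(gaze, dtype=float) - np.mean(gaze)
    speech = np.asarray(speech, dtype=float) - np.mean(speech)
    denom = np.sqrt((gaze**2).sum() * (speech**2).sum())
    lags = list(range(-max_lag, max_lag + 1))
    out = []
    for lag in lags:
        if lag < 0:
            r = np.dot(gaze[:lag], speech[-lag:])   # speech shifted forward
        elif lag > 0:
            r = np.dot(gaze[lag:], speech[:-lag])   # gaze shifted forward
        else:
            r = np.dot(gaze, speech)
        out.append(r / denom)
    return lags, out
```

With two copies of the same event train, one delayed by five samples, the peak falls at the corresponding lag, which is how a temporal ordering such as "direct gaze precedes the turn boundary" would show up in the data.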

4.
The primary purpose of this study was to investigate the effects of cognitive loading on movement kinematics and trajectory formation during goal-directed walking in a virtual reality (VR) environment. The secondary objective was to measure how participants corrected their trajectories for perturbed feedback and how participants' awareness of such perturbations changed under cognitive loading. We asked 14 healthy young adults to walk towards four different target locations in a VR environment while their movements were tracked and played back in real-time on a large projection screen. In 75% of all trials we introduced angular deviations of ±5° to ±30° between the veridical walking trajectory and the visual feedback. Participants performed a second experimental block under cognitive load (serial-7 subtraction, counter-balanced across participants). We measured walking kinematics (joint-angles, velocity profiles) and motor performance (end-point-compensation, trajectory-deviations). Motor awareness was determined by asking participants to rate the veracity of the feedback after every trial. In line with previous findings in natural settings, participants displayed stereotypical walking trajectories in a VR environment. Our results extend these findings as they demonstrate that taxing cognitive resources did not affect trajectory formation and deviations although it interfered with the participants' movement kinematics, in particular walking velocity. Additionally, we report that motor awareness was selectively impaired by the secondary task in trials with high perceptual uncertainty. Compared with data on eye and arm movements our findings lend support to the hypothesis that the central nervous system (CNS) uses common mechanisms to govern goal-directed movements, including locomotion. We discuss our results with respect to the use of VR methods in gait control and rehabilitation.

5.
The dynamics of collective decision making is not yet well understood. Its practical relevance however can be of utmost importance, as experienced by people who lost their fortunes in turbulent moments of financial markets. In this paper we show how spontaneous collective “moods” or “biases” emerge dynamically among human participants playing a trading game in a simple model of the stock market. Applying theory and computer simulations to the experimental data generated by humans, we are able to predict the onset of such moods before they actually emerge.

6.
We present a novel “Gaze-Replay” paradigm that allows the experimenter to directly test how particular patterns of visual input—generated from people’s actual gaze patterns—influence the interpretation of the visual scene. Although this paradigm can potentially be applied across domains, here we applied it specifically to social comprehension. Participants viewed complex, dynamic scenes through a small window displaying only the foveal gaze pattern of a gaze “donor.” This was intended to simulate the donor’s visual selection, such that a participant could effectively view scenes “through the eyes” of another person. Throughout the presentation of scenes presented in this manner, participants completed a social comprehension task, assessing their abilities to recognize complex emotions. The primary aim of the study was to assess the viability of this novel approach by examining whether these Gaze-Replay windowed stimuli contain sufficient and meaningful social information for the viewer to complete this social perceptual and cognitive task. The results of the study suggested this to be the case; participants performed better in the Gaze-Replay condition compared to a temporally disrupted control condition, and compared to when they were provided with no visual input. This approach has great future potential for the exploration of experimental questions aiming to unpack the relationship between visual selection, perception, and cognition.

7.
Computer based video games are receiving great interest as a means to learn and acquire new skills. As a novel approach to teaching navigation skills in the blind, we have developed Audio-based Environment Simulator (AbES); a virtual reality environment set within the context of a video game metaphor. Despite the fact that participants were naïve to the overall purpose of the software, we found that early blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building using audio based cues alone. This was confirmed by a series of behavioral performance tests designed to assess the transfer of acquired spatial information to a large-scale, real-world indoor navigation task. Furthermore, learning the spatial layout through a goal directed gaming strategy allowed for the mental manipulation of spatial information as evidenced by enhanced navigation performance when compared to an explicit route learning strategy. We conclude that the immersive and highly interactive nature of the software greatly engages the blind user to actively explore the virtual environment. This in turn generates an accurate sense of a large-scale three-dimensional space and facilitates the learning and transfer of navigation skills to the physical world.

8.
9.
Han X, Byrne P, Kahana M, Becker S. PLoS ONE. 2012;7(5):e35940
We investigated how objects come to serve as landmarks in spatial memory, and more specifically how they form part of an allocentric cognitive map. Participants performing a virtual driving task incidentally learned the layout of a virtual town and locations of objects in that town. They were subsequently tested on their spatial and recognition memory for the objects. To assess whether the objects were encoded allocentrically we examined pointing consistency across tested viewpoints. In three experiments, we found that spatial memory for objects at navigationally relevant locations was more consistent across tested viewpoints, particularly when participants had more limited experience of the environment. When participants' attention was focused on the appearance of objects, the navigational relevance effect was eliminated, whereas when their attention was focused on objects' locations, this effect was enhanced, supporting the hypothesis that when objects are processed in the service of navigation, rather than merely being viewed as objects, they engage qualitatively distinct attentional systems and are incorporated into an allocentric spatial representation. The results are consistent with evidence from the neuroimaging literature that when objects are relevant to navigation, they not only engage the ventral "object processing stream", but also the dorsal stream and medial temporal lobe memory system classically associated with allocentric spatial memory.

10.

Background

Recent studies have shown that playing prosocial video games leads to greater subsequent prosocial behavior in the real world. However, immersive virtual reality allows people to occupy avatars that are different from them in a perceptually realistic manner. We examine how occupying an avatar with the superhero ability to fly increases helping behavior.

Principal Findings

Using a two-by-two design, participants were either given the power of flight (their arm movements were tracked to control their flight akin to Superman’s flying ability) or rode as a passenger in a helicopter, and were assigned one of two tasks, either to help find a missing diabetic child in need of insulin or to tour a virtual city. Participants in the “super-flight” conditions helped the experimenter pick up spilled pens after their virtual experience significantly more than those who were virtual passengers in a helicopter.

Conclusion

The results indicate that having the “superpower” of flight leads to greater helping behavior in the real world, regardless of how participants used that power. A possible mechanism for this result is that having the power of flight primed concepts and prototypes associated with superheroes (e.g., Superman). This research illustrates the potential of using experiences in virtual reality technology to increase prosocial behavior in the physical world.

11.
Although rises in cortisol can benefit memory consolidation, as can sleep soon after encoding, there is currently a paucity of literature as to how these two factors may interact to influence consolidation. Here we present a protocol to examine the interactive influence of cortisol and sleep on memory consolidation, by combining three methods: eye tracking, salivary cortisol analysis, and behavioral memory testing across sleep and wake delays. To assess resting cortisol levels, participants gave a saliva sample before viewing negative and neutral objects within scenes. To measure overt attention, participants’ eye gaze was tracked during encoding. To manipulate whether sleep occurred during the consolidation window, participants either encoded scenes in the evening, slept overnight, and took a recognition test the next morning, or encoded scenes in the morning and remained awake during a comparably long retention interval. Additional control groups were tested after a 20 min delay in the morning or evening, to control for time-of-day effects. Together, results showed that there is a direct relation between resting cortisol at encoding and subsequent memory, only following a period of sleep. Through eye tracking, it was further determined that for negative stimuli, this beneficial effect of cortisol on subsequent memory may be due to cortisol strengthening the relation between where participants look during encoding and what they are later able to remember. Overall, results obtained by a combination of these methods uncovered an interactive effect of sleep and cortisol on memory consolidation.

12.
It remains unclear whether spontaneous eye movements during visual imagery reflect the mental generation of a visual image (i.e. the arrangement of the component parts of a mental representation). To address this specificity, we recorded eye movements in an imagery task and in a phonological fluency (non-imagery) task, both consisting of naming French towns from long-term memory. Only in the visual imagery condition did the spontaneous eye positions reflect the geographic position of the towns evoked by the subjects. This demonstrates that eye positions closely reflect the mapping of mental images. Advanced analysis of gaze positions using the bi-dimensional regression model confirmed the spatial correlation of gaze and towns’ locations in every single individual in the visual imagery task and in none of the individuals when no imagery accompanied memory retrieval. In addition, the evolution of the bi-dimensional regression’s coefficient of determination revealed, in each individual, a process of generating several iterative series of a limited number of towns mapped with the same spatial distortion, despite different individual order of towns’ evocation and different individual mappings. Such consistency across subjects revealed by gaze (the mind’s eye) gives empirical support to theories postulating that visual imagery, like visual sampling, is an iterative, fragmented process.
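Bi-dimensional regression, as used in the abstract above, maps one set of 2-D points (gaze positions) onto another (the towns' geographic positions) and yields a coefficient of determination for the fit. The sketch below is an illustrative Euclidean variant (translation + rotation + uniform scaling, fit by least squares), not the authors' implementation; the parameter names follow the common (a1, a2, b1, b2) notation.

```python
import numpy as np

def bidimensional_regression(src, dst):
    """Euclidean bi-dimensional regression: fit
        x' = a1 + b1*x - b2*y
        y' = a2 + b2*x + b1*y
    mapping src -> dst by least squares, and return R^2 plus the
    fitted parameters. src, dst: (n, 2) arrays of (x, y) points."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    x, y = src[:, 0], src[:, 1]
    n = len(src)
    # Stack the x' equations on top of the y' equations.
    A = np.zeros((2 * n, 4))          # columns: a1, a2, b1, b2
    A[:n, 0] = 1; A[:n, 2] = x;  A[:n, 3] = -y
    A[n:, 1] = 1; A[n:, 2] = y;  A[n:, 3] = x
    t = np.concatenate([dst[:, 0], dst[:, 1]])
    (a1, a2, b1, b2), *_ = np.linalg.lstsq(A, t, rcond=None)
    pred = np.column_stack([a1 + b1 * x - b2 * y,
                            a2 + b2 * x + b1 * y])
    sse = ((dst - pred) ** 2).sum()
    sst = ((dst - dst.mean(axis=0)) ** 2).sum()
    return 1 - sse / sst, (a1, a2, b1, b2)
```

When the gaze map is a rigidly transformed copy of the true town layout, R² approaches 1; a common spatial distortion across repeated series would show up as a stable set of fitted parameters.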

13.
14.
Social psychology is fundamentally the study of individuals in groups, yet there remain basic unanswered questions about group formation, structure, and change. We argue that the problem is methodological. Until recently, there was no way to track who was interacting with whom with anything approximating valid resolution and scale. In the current study we describe a new method that applies recent advances in image-based tracking to study incipient group formation and evolution with experimental precision and control. In this method, which we term “in vivo behavioral tracking,” we track individuals’ movements with a high definition video camera mounted atop a large field laboratory. We report results of an initial study that quantifies the composition, structure, and size of the incipient groups. We also apply in vivo behavioral tracking to study participants’ tendency to cooperate as a function of their embeddedness in those crowds. We find that participants form groups of seven on average, are more likely to approach others of similar attractiveness and (to a lesser extent) gender, and that participants’ gender and attractiveness are both associated with their proximity to the spatial center of groups (such that women and attractive individuals are more likely than men and unattractive individuals to end up in the center of their groups). Furthermore, participants’ proximity to others early in the study predicted the effort they exerted in a subsequent cooperative task, suggesting that submergence in a crowd may predict social loafing. We conclude that in vivo behavioral tracking is a uniquely powerful new tool for answering longstanding, fundamental questions about group dynamics.

15.
Previous research has demonstrated that the way human adults look at others’ faces is modulated by their cultural background, but very little is known about how such a culture-specific pattern of face gaze develops. The current study investigated the role of cultural background on the development of face scanning in young children between the ages of 1 and 7 years, and its modulation by the eye gaze direction of the face. British and Japanese participants’ eye movements were recorded while they observed faces moving their eyes towards or away from the participants. British children fixated more on the mouth whereas Japanese children fixated more on the eyes, replicating the results with adult participants. No cultural differences were observed in the differential responses to direct and averted gaze. The results suggest that different patterns of face scanning exist between different cultures from the first years of life, but differential scanning of direct and averted gaze associated with different cultural norms develops later in life.

16.
The role of contingency awareness in simple associative learning experiments with human participants is currently debated. Since prior work suggests that eye movements can index mnemonic processes that occur without awareness, we used eye tracking to better understand the role of awareness in learning aversive Pavlovian conditioning. A complex real-world scene containing four embedded household items was presented to participants while skin conductance, eye movements, and pupil size were recorded. One item embedded in the scene served as the conditional stimulus (CS). One exemplar of that item (e.g. a white pot) was paired with shock 100 percent of the time (CS+) while a second exemplar (e.g. a gray pot) was never paired with shock (CS-). The remaining items were paired with shock on half of the trials. Participants rated their expectation of receiving a shock during each trial, and these expectancy ratings were used to identify when (i.e. on what trial) each participant became aware of the programmed contingencies. Disproportionate viewing of the CS was found both before and after explicit contingency awareness, and patterns of viewing distinguished the CS+ from the CS-. These observations are consistent with “dual process” models of fear conditioning, as they indicate that learning can be expressed in patterns of viewing prior to explicit contingency awareness.

17.
Human eyes move continuously, even during visual fixation. These “fixational eye movements” (FEMs) include microsaccades, intersaccadic drift and oculomotor tremor. Research in human FEMs has grown considerably in the last decade, facilitated by the manufacture of noninvasive, high-resolution/speed video-oculography eye trackers. Due to the small magnitude of FEMs, obtaining reliable data can be challenging, however, and depends critically on the sensitivity and precision of the eye tracking system. Yet, no study has conducted an in-depth comparison of human FEM recordings obtained with the search coil (considered the gold standard for measuring microsaccades and drift) and with contemporary, state-of-the-art video trackers. Here we measured human microsaccades and drift simultaneously with the search coil and a popular state-of-the-art video tracker. We found that 95% of microsaccades detected with the search coil were also detected with the video tracker, and 95% of microsaccades detected with video tracking were also detected with the search coil, indicating substantial agreement between the two systems. Peak/mean velocities and main sequence slopes of microsaccades detected with video tracking were significantly higher than those of the same microsaccades detected with the search coil, however. Ocular drift was significantly correlated between the two systems, but drift speeds were higher with video tracking than with the search coil. Overall, our combined results suggest that contemporary video tracking now approaches the search coil for measuring FEMs.
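Comparisons like the one above require detecting microsaccades in each recording first. A widely used approach is velocity-threshold detection in the spirit of Engbert & Kliegl (2003): compute a smoothed velocity, set a threshold at a multiple of the robust (median-based) velocity spread, and keep supra-threshold runs of a minimum duration. The sketch below is a generic illustration of that idea, not the code used in the study; the default values (lambda = 6, 3-sample minimum duration) are common choices, assumed here for illustration.

```python
import numpy as np

def detect_microsaccades(x, y, fs, lam=6.0, min_dur=3):
    """Velocity-threshold microsaccade detection (Engbert-Kliegl style).
    x, y: gaze position traces (deg); fs: sampling rate (Hz).
    Returns a list of (onset, offset) sample indices."""
    # 5-point moving-window derivative: v[t] = (x[t+2]+x[t+1]-x[t-1]-x[t-2])*fs/6
    vx = np.convolve(x, [1, 1, 0, -1, -1], "same") * (fs / 6.0)
    vy = np.convolve(y, [1, 1, 0, -1, -1], "same") * (fs / 6.0)
    # Robust, median-based estimate of the velocity spread per axis
    sx = np.sqrt(np.median(vx**2) - np.median(vx) ** 2)
    sy = np.sqrt(np.median(vy**2) - np.median(vy) ** 2)
    # Elliptic threshold at lam times the robust spread
    above = (vx / (lam * sx)) ** 2 + (vy / (lam * sy)) ** 2 > 1
    # Keep supra-threshold runs lasting at least min_dur samples
    events, start = [], None
    for i, a in enumerate(above):
        if a and start is None:
            start = i
        elif not a and start is not None:
            if i - start >= min_dur:
                events.append((start, i - 1))
            start = None
    if start is not None and len(above) - start >= min_dur:
        events.append((start, len(above) - 1))
    return events
```

Running the same detector on coil and video traces, event by event, is what makes agreement figures such as the 95% reported above computable.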

18.
Perspective (route or survey) during the encoding of spatial information can influence recall and navigation performance. In our experiment we investigated a third type of perspective, which is a slanted view. This slanted perspective is a compromise between route and survey perspectives, offering both information about landmarks as in route perspective and geometric information as in survey perspective. We hypothesized that the use of slanted perspective would allow the brain to use either egocentric or allocentric strategies during storage and recall. Twenty-six subjects were scanned (3-Tesla fMRI) during the encoding of a path (40-s navigation movie within a virtual city). They were given the task of encoding a segment of travel in the virtual city and of subsequent shortcut-finding for each perspective: route, slanted and survey. The analysis of the behavioral data revealed that perspective influenced response accuracy, with significantly more correct responses for slanted and survey perspectives than for route perspective. Comparisons of brain activation with route, slanted, and survey perspectives suggested that slanted and survey perspectives share common brain activity in the left lingual and fusiform gyri and lead to very similar behavioral performance. Slanted perspective was also associated with similar activation to route perspective during encoding in the right middle occipital gyrus. Furthermore, slanted perspective induced intermediate patterns of activation (in between route and survey) in some brain areas, such as the right lingual and fusiform gyri. Our results suggest that the slanted perspective may be considered as a hybrid perspective. This result offers the first empirical support for the choice to present the slanted perspective in many navigational aids.  相似文献   

19.
Successful navigation requires the ability to compute one’s location and heading from incoming multisensory information. Previous work has shown that this multisensory input comes in two forms: body-based idiothetic cues, from one’s own rotations and translations, and visual allothetic cues, from the environment (usually visual landmarks). However, exactly how these two streams of information are integrated is unclear, with some models suggesting the body-based idiothetic and visual allothetic cues are combined, while others suggest they compete. In this paper we investigated the integration of body-based idiothetic and visual allothetic cues in the computation of heading using virtual reality. In our experiment, participants performed a series of body turns of up to 360 degrees in the dark with only a brief flash (300 ms) of visual feedback en route. Because the environment was virtual, we had full control over the visual feedback and were able to vary the offset between this feedback and the true heading angle. By measuring the effect of the feedback offset on the angle participants turned, we were able to determine the extent to which they incorporated visual feedback as a function of the offset error. By further modeling this behavior we were able to quantify the computations people used. While there were considerable individual differences in performance on our task, with some participants mostly ignoring the visual feedback and others relying on it almost entirely, our modeling results suggest that almost all participants used the same strategy in which idiothetic and allothetic cues are combined when the mismatch between them is small, but compete when the mismatch is large. These findings suggest that participants update their estimate of heading using a hybrid strategy that mixes the combination and competition of cues.
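The hybrid strategy described above ("combine when the mismatch is small, compete when it is large") can be sketched as a toy model. This is not the authors' model; it is a minimal illustration assuming reliability-weighted (inverse-variance) fusion below a hypothetical mismatch threshold and winner-take-all discounting of vision above it. All parameter names are assumptions.

```python
import numpy as np

def heading_estimate(idiothetic, visual, sigma_i, sigma_v, threshold):
    """Hybrid cue integration for heading (degrees).
    Below `threshold` degrees of mismatch, fuse the two cues weighted
    by their reliabilities (1/sigma^2); above it, keep the body-based
    (idiothetic) estimate and discount the visual cue."""
    # Signed circular difference in (-180, 180]
    mismatch = (visual - idiothetic + 180.0) % 360.0 - 180.0
    if abs(mismatch) > threshold:
        return idiothetic % 360.0            # competition: vision rejected
    w_i = (1 / sigma_i**2) / (1 / sigma_i**2 + 1 / sigma_v**2)
    return (idiothetic + (1 - w_i) * mismatch) % 360.0  # combination
```

Sweeping the visual offset through increasing values, this model reproduces the qualitative signature the abstract describes: the influence of visual feedback grows linearly with small offsets, then drops to zero once the offset becomes implausibly large.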

20.

Background

Humans detect faces with direct gazes among those with averted gazes more efficiently than they detect faces with averted gazes among those with direct gazes. We examined whether this “stare-in-the-crowd” effect occurs in chimpanzees (Pan troglodytes), whose eye morphology differs from that of humans (i.e., low-contrast eyes, dark sclera).

Methodology/Principal Findings

An adult female chimpanzee was trained to search for an odd-item target (front view of a human face) among distractors that differed from the target only with respect to the direction of the eye gaze. During visual-search testing, she performed more efficiently when the target was a direct-gaze face than when it was an averted-gaze face. This direct-gaze superiority was maintained when the faces were inverted and when parts of the face were scrambled. Subsequent tests revealed that gaze perception in the chimpanzee was controlled by the contrast between iris and sclera, as in humans, but that the chimpanzee attended only to the position of the iris in the eye, irrespective of head direction.

Conclusion/Significance

These results suggest that the chimpanzee can discriminate among human gaze directions and is more sensitive to direct gazes. However, limitations in the perception of human gaze by the chimpanzee are suggested by her inability to completely transfer her performance to faces showing a three-quarter view.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号