Similar Articles
20 similar articles found (search time: 31 ms)
1.
The ability to devote attention simultaneously to multiple visual objects plays an important role in domains ranging from everyday activities to the workplace. Yet, no studies have systematically explored the fixation strategies that optimize attention to two spatially distinct objects. Assuming the two objects require attention nearly simultaneously, subjects could either fixate one object or fixate between the objects. Studies measuring the breadth of attention have focused almost exclusively on the former strategy, by having subjects simultaneously perform one attention-demanding task at fixation and another in the periphery. We compared performance when one object was at fixation and the other was in the periphery to a condition in which both objects were in the periphery and subjects fixated between them. Performance was better with two peripheral stimuli than with one central and one peripheral stimulus, meaning that a strategy of fixating between stimuli permitted greater attention breadth. Consistent with the idea that both measures tap attention breadth, sport experts consistently outperformed novices with both fixation strategies. Our findings suggest a way to improve performance when observers must pay attention to multiple objects across spatial regions. We discuss possible explanations for this performance advantage.

2.

Background

Human vision is vital in determining our interaction with the outside world. In this study we characterize our ability to judge changes in the direction of motion of objects, a common task which can allow us either to intercept moving objects or else avoid them if they pose a threat.

Methodology/Principal Findings

Observers were presented with objects which moved across a computer monitor on a linear path until the midline, at which point they changed their direction of motion, and observers were required to judge the direction of change. In keeping with the variety of objects we encounter in the real world, we varied characteristics of the moving stimuli such as velocity, extent of motion path and the object size. Furthermore, we compared performance for moving objects with the ability of observers to detect a deviation in a line which formed the static trace of the motion path, since it has been suggested that a form of static memory trace may form the basis for these types of judgment. The static line judgments were well described by a ‘scale invariant’ model in which any two stimuli which possess the same two-dimensional geometry (length/width) result in the same level of performance. Performance for the moving objects was entirely different. Irrespective of the path length, object size or velocity of motion, path deviation thresholds depended simply upon the duration of the motion path in seconds.
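To make the duration account concrete, here is a minimal Python sketch (not from the paper; the stimulus values are invented for illustration): path duration is simply path length divided by velocity, and on the authors' account two stimuli with equal durations should yield equal deviation thresholds regardless of their geometry.

```python
# Hypothetical illustration of the duration account: deviation thresholds for
# moving objects depend only on motion-path duration in seconds, not on
# velocity or path length separately. Stimulus values below are assumptions.

def path_duration(path_length_deg, velocity_deg_per_s):
    """Duration of the motion path in seconds."""
    return path_length_deg / velocity_deg_per_s

# Two stimuli with very different geometry but identical duration should,
# on the duration account, produce the same path-deviation threshold.
slow_short = path_duration(4.0, 2.0)    # 4 deg path at 2 deg/s
fast_long = path_duration(16.0, 8.0)    # 16 deg path at 8 deg/s
print(slow_short, fast_long)            # equal durations
```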

Conclusions/Significance

Human vision has long been known to integrate information across space in order to solve spatial tasks such as judgment of orientation or position. Here we demonstrate an intriguing mechanism which integrates direction information across time in order to optimize the judgment of path deviation for moving objects.

3.
4.
The location of visual objects in the world around us is reconstructed in a complex way from the image falling on the retina. Recent studies have begun to reveal the different ways in which the brain dynamically re-maps retinal information across eye movements to compute object locations for perception and directing actions.

5.
When watching an actor manipulate objects, observers, like the actor, naturally direct their gaze to each object as the hand approaches and typically maintain gaze on the object until the hand departs. Here, we probed the function of observers' eye movements, focusing on two possibilities: (i) that observers' gaze behaviour arises from processes involved in the prediction of the target object of the actor's reaching movement and (ii) that this gaze behaviour supports the evaluation of mechanical events that arise from interactions between the actor's hand and objects. Observers watched an actor reach for and lift one of two presented objects. The observers' task was either to predict the target object or judge its weight. Proactive gaze behaviour, similar to that seen in self-guided action observation, was seen in the weight judgement task, which requires evaluating mechanical events associated with lifting, but not in the target prediction task. We submit that an important function of gaze behaviour in self-guided action observation is the evaluation of mechanical events associated with interactions between the hand and object. By comparing predicted and actual mechanical events, observers, like actors, can gain knowledge about the world, including information about objects they may subsequently act upon.

6.
Han X, Byrne P, Kahana M, Becker S. PLoS ONE. 2012;7(5):e35940.
We investigated how objects come to serve as landmarks in spatial memory, and more specifically how they form part of an allocentric cognitive map. Participants performing a virtual driving task incidentally learned the layout of a virtual town and locations of objects in that town. They were subsequently tested on their spatial and recognition memory for the objects. To assess whether the objects were encoded allocentrically we examined pointing consistency across tested viewpoints. In three experiments, we found that spatial memory for objects at navigationally relevant locations was more consistent across tested viewpoints, particularly when participants had more limited experience of the environment. When participants' attention was focused on the appearance of objects, the navigational relevance effect was eliminated, whereas when their attention was focused on objects' locations, this effect was enhanced, supporting the hypothesis that when objects are processed in the service of navigation, rather than merely being viewed as objects, they engage qualitatively distinct attentional systems and are incorporated into an allocentric spatial representation. The results are consistent with evidence from the neuroimaging literature that when objects are relevant to navigation, they not only engage the ventral "object processing stream", but also the dorsal stream and medial temporal lobe memory system classically associated with allocentric spatial memory.
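One simple way to quantify pointing consistency across viewpoints is the mean resultant length of the pointing directions, a standard circular statistic. This is an assumed implementation for illustration; the authors' exact consistency measure may differ.

```python
import math

def pointing_consistency(angles_deg):
    """Mean resultant vector length of a set of pointing directions.
    Returns a value in [0, 1]: 0 = directions cancel out (inconsistent),
    1 = all responses point the same way (perfectly consistent)."""
    xs = [math.cos(math.radians(a)) for a in angles_deg]
    ys = [math.sin(math.radians(a)) for a in angles_deg]
    n = len(angles_deg)
    return math.hypot(sum(xs) / n, sum(ys) / n)

# Pooled pointing responses for one object across tested viewpoints
# (hypothetical data): tight clustering yields a value near 1.
print(pointing_consistency([12.0, 15.0, 10.0, 13.0]))
```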

7.
Certain visual stimuli can give rise to contradictory perceptions. In this paper we examine the temporal dynamics of perceptual reversals experienced with biological motion, comparing these dynamics to those observed with other ambiguous structure from motion (SFM) stimuli. In our first experiment, naïve observers monitored perceptual alternations with an ambiguous rotating walker, a figure that randomly alternates between walking in clockwise (CW) and counter-clockwise (CCW) directions. While the number of reported reversals varied between observers, the observed dynamics (distribution of dominance durations, CW/CCW proportions) were comparable to those experienced with an ambiguous kinetic depth cylinder. In a second experiment, we compared reversal profiles with rotating and standard point-light walkers (i.e. non-rotating). Over multiple test repetitions, three out of four observers experienced consistently shorter mean percept durations with the rotating walker, suggesting that the added rotational component may speed up reversal rates with biomotion. For both stimuli, the drift in alternation rate across trial and across repetition was minimal. In our final experiment, we investigated whether reversals with the rotating walker and a non-biological object with similar global dimensions (rotating cuboid) occur at random phases of the rotation cycle. We found evidence that some observers experience peaks in the distribution of response locations that are relatively stable across sessions. Using control data, we discuss the role of eye movements in the development of these reversal patterns, and the related role of exogenous stimulus characteristics. In summary, we have demonstrated that the temporal dynamics of reversal with biological motion are similar to other forms of ambiguous SFM. We conclude that perceptual switching with biological motion is a robust bistable phenomenon.
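As a rough sketch of how mean dominance durations and CW/CCW proportions might be summarized from a record of percept episodes (a hypothetical helper, not the authors' analysis code):

```python
def percept_summary(epochs):
    """Summarize a bistable-perception record.
    epochs: list of (label, duration_s) percept episodes, e.g. ("CW", 2.4).
    Returns (mean dominance duration in seconds, proportion of time in "CW")."""
    total = sum(d for _, d in epochs)
    mean_duration = total / len(epochs)
    cw_time = sum(d for label, d in epochs if label == "CW")
    return mean_duration, cw_time / total

# Hypothetical reversal record for one observer and one trial
mean_dur, cw_prop = percept_summary([("CW", 2.0), ("CCW", 4.0), ("CW", 3.0)])
print(mean_dur, cw_prop)
```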

8.
Functional anatomical studies indicate that a set of neural signals in parietal and frontal cortex mediates the covert allocation of attention to visual locations across a wide variety of visual tasks. This frontoparietal network includes areas such as the frontal eye field and supplementary eye field. This anatomical overlap suggests that shifts of attention to visual locations of objects recruit areas involved in oculomotor programming and execution. Finally, the frontoparietal network may be the source of spatial attentional modulations in the ventral visual system during object recognition or discrimination.

9.
Behavioural studies of the perceptual cues for female physical attractiveness have suggested two potentially important features: body fat distribution [the waist-to-hip ratio (WHR)] and overall body fat [often estimated by the body mass index (BMI)]. However, none of these studies tell us directly which regions of the stimulus images inform observers' judgments. Therefore, we recorded the eye movements of three groups of 10 male observers and three groups of 10 female observers, when they rated a set of 46 photographs of female bodies. The first sets of observers rated the images for attractiveness, the second sets rated for body fat and the third sets for WHR. If WHR and/or body fat is used to judge attractiveness, then observers rating attractiveness should look at those areas of the body which allow assessment of these features, and they should look in the same areas when they are directly asked to estimate WHR and body fat. We are thus able to compare the fixation patterns for the explicit judgments with those for attractiveness judgments and infer which features were used for attractiveness. Prior to group analysis of the eye-movement data, the locations of individual eye fixations were transformed into a common reference space to permit comparisons of fixation density at high resolution across all stimuli. This manipulation allowed us to use spatial statistical analysis techniques to show the following: (1) Observers' fixations for attractiveness and body fat clustered in the central and upper abdomen and chest, but not the pelvic or hip areas, consistent with the finding that WHR had little influence over attractiveness judgments. (2) The pattern of fixations for attractiveness ratings was very similar to the fixation patterns for body fat judgments. (3) The fixations for WHR ratings were significantly different from those for attractiveness and body fat.
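The two candidate cues named in the abstract are simple ratios, shown here as a minimal sketch (the numeric values are illustrative only, not data from the study):

```python
def whr(waist_cm, hip_cm):
    """Waist-to-hip ratio: waist circumference divided by hip circumference."""
    return waist_cm / hip_cm

def bmi(weight_kg, height_m):
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

# Illustrative values only
print(whr(70.0, 100.0))   # waist 70 cm, hips 100 cm
print(bmi(65.0, 1.70))    # 65 kg at 1.70 m
```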

10.
Hudson C, Howe PD, Little DR. PLoS ONE. 2012;7(8):e43796.
In everyday life, we often need to attentively track moving objects. A previous study has claimed that this tracking occurs independently in the left and right visual hemifields (Alvarez & Cavanagh, 2005, Psychological Science, 16, 637–647). Specifically, it was shown that observers were much more accurate at tracking objects that were spread over both visual hemifields as opposed to when all were confined to a single visual hemifield. In that study, observers were not required to remember the identities of the objects. In real life, by contrast, there is seldom any benefit to tracking an object unless you can also recall its identity. It has been predicted that when observers are required to remember the identities of the tracked objects a bilateral advantage should no longer be observed (Oksama & Hyönä, 2008, Cognitive Psychology, 56, 237–283). We tested this prediction and found that a bilateral advantage still occurred, though it was not as strong as when observers were not required to remember the identities of the targets. Even in the latter case we found that tracking was not completely independent in the two visual hemifields. We present a combined model of multiple object tracking and multiple identity tracking that can explain our data.

11.

Background

How does the brain estimate object stability? Objects fall over when the gravity-projected centre-of-mass lies outside the point or area of support. To estimate an object's stability visually, the brain must integrate information across the shape and compare its orientation to gravity. When observers lie on their sides, gravity is perceived as tilted toward body orientation, consistent with a representation of gravity derived from multisensory information. We exploited this to test whether vestibular and kinesthetic information affect this visual task or whether the brain estimates object stability solely from visual information.

Methodology/Principal Findings

In three body orientations, participants viewed images of objects close to a table edge. We measured the critical angle at which each object appeared equally likely to fall over or right itself. Perceived gravity was measured using the subjective visual vertical. The results show that the perceived critical angle was significantly biased in the same direction as the subjective visual vertical (i.e., towards the multisensory estimate of gravity).
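As a physical point of reference (a rigid-body idealization, not the authors' stimulus model), the critical angle at which an object with a flat base tips over is the angle whose tangent is the support half-width divided by the centre-of-mass height:

```python
import math

def critical_angle_deg(support_half_width, com_height):
    """Tilt angle (degrees) at which the gravity-projected centre of mass
    reaches the edge of the support base, so the object neither rights
    itself nor falls. Rigid-body sketch under uniform gravity."""
    return math.degrees(math.atan2(support_half_width, com_height))

# A squat object (wide base, low centre of mass) tolerates more tilt
# than a tall, narrow one. Dimensions are arbitrary illustrative units.
print(critical_angle_deg(1.0, 1.0))   # square cross-section
print(critical_angle_deg(0.5, 2.0))   # tall and narrow
```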

Conclusions/Significance

Our results rule out a general explanation that the brain depends solely on visual heuristics and assumptions about object stability. Instead, they suggest that multisensory estimates of gravity govern the perceived stability of objects, resulting in objects appearing more stable than they are when the head is tilted in the same direction in which they fall.

12.
The perception of visual information in cytoscreening was studied: eye movements were recorded while the cytotechnologist was screening cervical smears by means of a projection screen. Four phases of eye movement could be distinguished: small, aimless movements during the stage movement; a latency period with a duration of about 180 milliseconds; saccadic movement to the position of an object; and fixation on an object. These components explain the two-phase behavior of cytoscreening found in our previous investigations of the stage movement. Visual perception during the period of latency was found to be the most important since only those objects that are recognized by peripheral vision during this period can trigger the necessary saccadic movement before fixation takes place. The scanpath of search in the stationary field of view is determined by the conspicuousness of the objects; the main features of conspicuousness are size and contrast. Even with the comparatively small fields of view (24 degrees and 29 degrees in diameter) used in these experiments, it was found that the detection threshold of peripheral vision increases towards the margin of the field of view. This raises the question of whether the use of large-field binoculars (with 40-degree visual angles) may cause higher false-negative rates for samples with only a few atypical cells.

13.
We examined whether the abilities of observers to perform an analogue of a real-world monitoring task involving detection and identification of changes to items in a visual display could be explained better by models based on signal detection theory (SDT) or high threshold theory (HTT). Our study differed from most previous studies in that observers were allowed to inspect the initial display for 3 s, simulating the long inspection times typical of natural viewing, and their eye movements were not constrained. For the majority of observers, combined change detection and identification performance was best modelled by an SDT-based process that assumed that memory resources were distributed across all eight items in our displays. Some observers required a parameter to allow for sometimes making random guesses at the identities of changes they had missed. However, the performance of a small proportion of observers was best explained by an HTT-based model that allowed for lapses of attention.
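A minimal illustration of the SDT side of such model comparisons (the generic textbook sensitivity formula, not the authors' full model): sensitivity d' is the difference of the z-transformed hit and false-alarm rates.

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal detection sensitivity: d' = z(H) - z(FA), where z is the
    inverse of the standard normal CDF. Rates must lie strictly in (0, 1)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Illustrative rates: an observer who detects changes well above chance
print(d_prime(0.84, 0.16))
```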

14.
Beauchamp MS, Lee KE, Argall BD, Martin A. Neuron. 2004;41(5):809-823.
Two categories of objects in the environment, animals and man-made manipulable objects (tools), are easily recognized by either their auditory or visual features. Although these features differ across modalities, the brain integrates them into a coherent percept. In three separate fMRI experiments, posterior superior temporal sulcus and middle temporal gyrus (pSTS/MTG) fulfilled objective criteria for an integration site. pSTS/MTG showed signal increases in response to either auditory or visual stimuli and responded more to auditory or visual objects than to meaningless (but complex) control stimuli. pSTS/MTG showed an enhanced response when auditory and visual object features were presented together, relative to presentation in a single modality. Finally, pSTS/MTG responded more to object identification than to other components of the behavioral task. We suggest that pSTS/MTG is specialized for integrating different types of information both within modalities (e.g., visual form, visual motion) and across modalities (auditory and visual).

15.
The goal of the current study is to clarify the relationship between social information processing (e.g., visual attention to cues of hostility, hostility attribution bias, and facial expression emotion labeling) and aggressive tendencies. Thirty adults were recruited for an eye-tracking study that measured various components of social information processing. Baseline aggressive tendencies were measured using the Buss-Perry Aggression Questionnaire (AQ). Visual attention towards hostile objects was measured as the proportion of eye gaze fixation duration on cues of hostility. Hostility attribution bias was measured with the rating results for emotions of characters in the images. The results show that eye gaze duration on hostile characters was significantly inversely correlated with the AQ score, as was eye contact with an angry face. Eye gaze duration on hostile objects was not significantly associated with hostility attribution bias, although hostility attribution bias was significantly positively associated with the AQ score. Our findings suggest that eye gaze fixation time towards non-hostile cues may predict aggressive tendencies.
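The attention measure described, the proportion of fixation duration spent on hostile cues, can be sketched as follows (hypothetical data structure; the area-of-interest labels are assumptions, not the study's coding scheme):

```python
def hostile_gaze_proportion(fixations, hostile_aois):
    """Proportion of total fixation duration spent on hostile areas of interest.
    fixations: list of (aoi_label, duration_ms) tuples.
    hostile_aois: set of labels counted as hostile cues."""
    total = sum(d for _, d in fixations)
    hostile = sum(d for label, d in fixations if label in hostile_aois)
    return hostile / total

# Hypothetical fixation record for one image
record = [("hostile_character", 200), ("neutral_character", 600),
          ("hostile_character", 200)]
print(hostile_gaze_proportion(record, {"hostile_character"}))
```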

16.
Researchers have conjectured that eye movements during visual search are selected to minimize the number of saccades. The optimal Bayesian eye movement strategy minimizing saccades does not simply direct the eye to whichever location is judged most likely to contain the target but makes use of the entire retina as an information gathering device during each fixation. Here we show that human observers do not minimize the expected number of saccades in planning saccades in a simple visual search task composed of three tokens. In this task, the optimal eye movement strategy varied, depending on the spacing between tokens (in the first experiment) or the size of tokens (in the second experiment), and changed abruptly once the separation or size surpassed a critical value. None of our observers changed strategy as a function of separation or size. Human performance fell far short of ideal, both qualitatively and quantitatively.

17.
Human observers see a single mixed color (yellow) when different colors (red and green) rapidly alternate. Accumulating evidence suggests that the critical temporal frequency beyond which chromatic fusion occurs does not simply reflect the temporal limit of peripheral encoding. However, it remains poorly understood how central processing controls the fusion frequency. Here we show that the fusion frequency can be elevated by extra-retinal signals during smooth pursuit. This eye movement can keep the image of a moving target in the fovea, but it also introduces a backward retinal sweep of the stationary background pattern. We found that the fusion frequency was higher when retinal color changes were generated by pursuit-induced background motions than when the same retinal color changes were generated by object motions during eye fixation. This temporal improvement cannot be ascribed to a general increase in contrast gain of specific neural mechanisms during pursuit, since the improvement was not observed with a pattern flickering without changing position on the retina or with a pattern moving in the direction opposite to the background motion during pursuit. Our findings indicate that chromatic fusion is controlled by a cortical mechanism that suppresses motion blur. A plausible mechanism is that eye-movement signals change spatiotemporal trajectories along which color signals are integrated so as to reduce chromatic integration at the same locations (i.e., along stationary trajectories) on the retina that normally causes retinal blur during fixation.

18.
To date, it has been shown that cognitive map representations based on cartographic visualisations are systematically distorted. The grid is a traditional element of map graphics that has rarely been considered in research on perception-based spatial distortions. Grids not only support the map reader in finding coordinates or locations of objects; they also provide a systematic structure for clustering visual map information ("spatial chunks"). The aim of this study was to examine whether different cartographic kinds of grids reduce spatial distortions and improve recall memory for object locations. Recall performance was measured as both the percentage of correctly recalled objects (hit rate) and the mean distance errors of correctly recalled objects (spatial accuracy). Different kinds of grids (continuous lines, dashed lines, crosses) were applied to topographic maps. These maps were also varied in their type of characteristic areas (LANDSCAPE) and different information layer compositions (DENSITY) to examine the effects of map complexity. The study involving 144 participants shows that all experimental cartographic factors (GRID, LANDSCAPE, DENSITY) improve recall performance and spatial accuracy of learned object locations. Overlaying a topographic map with a grid significantly reduces the mean distance errors of correctly recalled map objects. The paper includes a discussion of a square grid's usefulness concerning object location memory, independent of whether the grid is clearly visible (continuous or dashed lines) or only indicated by crosses.
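The two recall measures, hit rate and mean distance error of correctly recalled objects, might be computed as in this sketch (hypothetical data layout; not the study's analysis code):

```python
import math

def recall_performance(trials):
    """trials: list of (recalled: bool, placed_xy, true_xy) per object.
    Returns (hit rate, mean Euclidean distance error over correctly
    recalled objects, or None if nothing was recalled)."""
    hits = [(placed, true) for ok, placed, true in trials if ok]
    hit_rate = len(hits) / len(trials)
    if not hits:
        return hit_rate, None
    errors = [math.dist(placed, true) for placed, true in hits]
    return hit_rate, sum(errors) / len(errors)

# Hypothetical session: one object recalled (5 units off), one forgotten
print(recall_performance([(True, (0, 0), (3, 4)), (False, (1, 1), (9, 9))]))
```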

19.
Recognition memories are formed during perceptual experience and allow subsequent recognition of previously encountered objects as well as their distinction from novel objects. As a consequence, novel objects are generally explored longer than familiar objects by many species. This novelty preference has been documented in rodents using the novel object recognition (NOR) test, as well as in primates, including humans, using preferential looking time paradigms. Here, we examine novelty preference using the NOR task in the tree shrew, a small animal species that is considered to be an intermediary between rodents and primates. Our paradigm consisted of three phases: arena familiarization, object familiarization sessions with two identical objects in the arena and finally a test session following a 24-h retention period with a familiar and a novel object in the arena. We employed two different object familiarization durations: one and three sessions on consecutive days. After three object familiarization sessions, tree shrews exhibited robust preference for novel objects on the test day. This was accompanied by significant reduction in familiar object exploration time, occurring largely between the first and second day of object familiarization. By contrast, tree shrews did not show a significant preference for the novel object after a one-session object familiarization. Nonetheless, they spent significantly less time exploring the familiar object on the test day compared to the object familiarization day, indicating that they did maintain a memory trace for the familiar object. Our study revealed different time courses for familiar object habituation and emergence of novelty preference, suggesting that novelty preference is dependent on well-consolidated memory of the competing familiar object. Taken together, our results demonstrate robust novelty preference of tree shrews, in general similarity to previous findings in rodents and primates.
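Novelty preference in NOR studies is commonly summarized with a discrimination index computed from exploration times; the abstract does not state which index was used here, so the following is a generic sketch of one widely used form:

```python
def discrimination_index(novel_s, familiar_s):
    """Common NOR discrimination index:
    (novel - familiar) / (novel + familiar) exploration time.
    Ranges from -1 (only familiar explored) to +1 (only novel explored);
    0 indicates no preference."""
    return (novel_s - familiar_s) / (novel_s + familiar_s)

# Hypothetical test-day exploration times in seconds
print(discrimination_index(30.0, 10.0))  # clear novelty preference
print(discrimination_index(10.0, 10.0))  # no preference
```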

20.
The primate brain intelligently processes visual information from the world as the eyes move constantly. The brain must take into account visual motion induced by eye movements, so that visual information about the outside world can be recovered. Certain neurons in the dorsal part of monkey medial superior temporal area (MSTd) play an important role in integrating information about eye movements and visual motion. When a monkey tracks a moving target with its eyes, these neurons respond to visual motion as well as to smooth pursuit eye movements. Furthermore, the responses of some MSTd neurons to the motion of objects in the world are very similar during pursuit and during fixation, even though the visual information on the retina is altered by the pursuit eye movement. We call these neurons compensatory pursuit neurons. In this study we develop a computational model of MSTd compensatory pursuit neurons based on physiological data from single unit studies. Our model MSTd neurons can simulate the velocity tuning of monkey MSTd neurons. The model MSTd neurons also show the pursuit compensation property. We find that pursuit compensation can be achieved by divisive interaction between signals coding eye movements and signals coding visual motion. The model generates two implications that can be tested in future experiments: (1) compensatory pursuit neurons in MSTd should have the same direction preference for pursuit and retinal visual motion; (2) there should be non-compensatory pursuit neurons that show opposite preferred directions of pursuit and retinal visual motion.
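As context for the compensation problem, here is a minimal additive sketch of the quantity a compensatory neuron must recover; the paper's actual model implements compensation via a divisive interaction between eye-movement and visual-motion signals, which is not reproduced here.

```python
def world_motion(retinal_motion_deg_s, eye_velocity_deg_s):
    """Recover object motion in world coordinates from retinal motion.
    During pursuit at velocity v, a stationary background sweeps across
    the retina at -v, so adding the eye velocity back yields 0 (stationary);
    a genuinely moving object yields its world velocity."""
    return retinal_motion_deg_s + eye_velocity_deg_s

# Pursuit at 10 deg/s (hypothetical values):
print(world_motion(-10.0, 10.0))  # stationary background
print(world_motion(5.0, 10.0))    # object moving in the world
```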


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号