Similar articles
20 similar articles retrieved (search time: 15 ms)
1.
Recent studies provide evidence for task-specific influences on saccadic eye movements. For instance, saccades exhibit higher peak velocity when the task requires coordinating eye and hand movements. The current study shows that the need to process task-relevant visual information at the saccade endpoint can be, in itself, sufficient to cause such effects. In this study, participants performed a visual discrimination task which required a saccade for successful completion. We compared the characteristics of these task-related saccades to those of classical target-elicited saccades, which required participants to fixate a visual target without performing a discrimination task. The results show that task-related saccades are faster and initiated earlier than target-elicited saccades. Differences between both saccade types are also noted in their saccade reaction time distributions and their main sequences, i.e., the relationship between saccade velocity, duration, and amplitude.
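The main-sequence relationship mentioned in this abstract can be sketched numerically. A common description has peak velocity rising with amplitude and saturating exponentially; the constants below (`v_max`, `c`) are illustrative assumptions, not values fitted in the study.

```python
import math

def peak_velocity(amplitude_deg, v_max=500.0, c=8.0):
    """Hypothetical main-sequence relation: peak saccade velocity (deg/s)
    rises with amplitude (deg) and saturates at v_max. Both constants are
    illustrative, not measurements from the study."""
    return v_max * (1.0 - math.exp(-amplitude_deg / c))

for amp in (2, 10, 30):
    print(f"{amp:>2} deg -> {peak_velocity(amp):6.1f} deg/s")
```

A task-related speeding of saccades would appear in this picture as a shift of the whole curve (e.g., a larger effective `v_max`) rather than a change in amplitude.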

2.
A model of the extraocular plant of the human visual tracking system is discussed. Its sensitivity to variations in the nervous activity of the controller signal is studied in order to determine the type of activity that yields realistic simulations characteristic of typical saccadic eye movements.

3.
E. Scheller, C. Büchel, M. Gamer. PLoS ONE, 2012, 7(7): e41792
Diagnostic features of emotional expressions are differentially distributed across the face. The current study examined whether these diagnostic features are preferentially attended to even when they are irrelevant for the task at hand or when faces appear at different locations in the visual field. To this aim, fearful, happy and neutral faces were presented to healthy individuals in two experiments while measuring eye movements. In Experiment 1, participants had to accomplish an emotion classification, a gender discrimination or a passive viewing task. To differentiate fast, potentially reflexive, eye movements from a more elaborate scanning of faces, stimuli were either presented for 150 or 2000 ms. In Experiment 2, similar faces were presented at different spatial positions to rule out the possibility that eye movements only reflect a general bias for certain visual field locations. In both experiments, participants fixated the eye region much longer than any other region in the face. Furthermore, the eye region was attended to more pronouncedly when fearful or neutral faces were shown whereas more attention was directed toward the mouth of happy facial expressions. Since these results were similar across the other experimental manipulations, they indicate that diagnostic features of emotional expressions are preferentially processed irrespective of task demands and spatial locations. Saliency analyses revealed that a computational model of bottom-up visual attention could not explain these results. Furthermore, as these gaze preferences were evident very early after stimulus onset and occurred even when saccades did not allow for extracting further information from these stimuli, they may reflect a preattentive mechanism that automatically detects relevant facial features in the visual field and facilitates the orientation of attention towards them. This mechanism might crucially depend on amygdala functioning and it is potentially impaired in a number of clinical conditions such as autism or social anxiety disorders.

4.
Pursuit eye movements were recorded with the photo-oculographic technique from newborn infants during the presentation of stimuli specific to spatial discrimination functions. Of the 51 subjects whose eye movements were recorded, 72.5 per cent successfully followed stimuli with spatial frequencies up to 0.4 cycles per degree. Estimates of grating visual acuity are similar to those provided by the preferential-looking technique.

5.
Preparing a goal-directed movement often requires detailed analysis of our environment. When picking up an object, its orientation, size and relative distance are relevant parameters when preparing a successful grasp. It would therefore be beneficial if the motor system were able to influence early perception such that the information-processing needs of action control are met at the earliest possible stage. However, only a few studies have reported (indirect) evidence for action-induced visual perception improvements. We therefore aimed to provide direct evidence for a feature-specific perceptual modulation during the planning phase of a grasping action. Human subjects were instructed to either grasp or point to a bar while simultaneously performing an orientation discrimination task. The bar could slightly change its orientation during grasping preparation. By analyzing discrimination response probabilities, we found increased perceptual sensitivity to orientation changes when subjects were instructed to grasp the bar, rather than point to it. As a control, the experiment was repeated using bar luminance changes, a feature that is not relevant for either grasping or pointing. Here, no differences in visual sensitivity between grasping and pointing were found. The present results constitute the first direct evidence for increased perceptual sensitivity to a visual feature that is relevant for a certain skeletomotor act during the movement preparation phase. We speculate that such action-induced perception improvements are controlled by neuronal feedback mechanisms from cortical motor planning areas to early visual cortex, similar to what was recently established for spatial perception improvements shortly before eye movements.

6.
The primate brain intelligently processes visual information from the world as the eyes move constantly. The brain must take into account visual motion induced by eye movements, so that visual information about the outside world can be recovered. Certain neurons in the dorsal part of monkey medial superior temporal area (MSTd) play an important role in integrating information about eye movements and visual motion. When a monkey tracks a moving target with its eyes, these neurons respond to visual motion as well as to smooth pursuit eye movements. Furthermore, the responses of some MSTd neurons to the motion of objects in the world are very similar during pursuit and during fixation, even though the visual information on the retina is altered by the pursuit eye movement. We call these neurons compensatory pursuit neurons. In this study we develop a computational model of MSTd compensatory pursuit neurons based on physiological data from single unit studies. Our model MSTd neurons can simulate the velocity tuning of monkey MSTd neurons. The model MSTd neurons also show the pursuit compensation property. We find that pursuit compensation can be achieved by divisive interaction between signals coding eye movements and signals coding visual motion. The model generates two implications that can be tested in future experiments: (1) compensatory pursuit neurons in MSTd should have the same direction preference for pursuit and retinal visual motion; (2) there should be non-compensatory pursuit neurons that show opposite preferred directions of pursuit and retinal visual motion.
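The divisive interaction described in this abstract can be illustrated with a toy calculation. The sketch below is a deliberately simplified stand-in for the published model: a pursuit-velocity signal divides the retinal visual drive, and with the divisive gain set to `1/preferred_speed` the unit's output to an object moving at its preferred world speed is the same during fixation and pursuit (compensation is only partial at other speeds; all numbers are assumptions for illustration).

```python
def retinal_velocity(v_world, v_pursuit):
    # retinal image motion = object motion in the world minus eye rotation
    return v_world - v_pursuit

def compensatory_response(v_world, v_pursuit, preferred_speed=10.0):
    """Toy divisive interaction: an eye-velocity signal divides the visual
    drive (assumes v_pursuit < preferred_speed). At the preferred world
    speed the pursuit-induced change in retinal motion cancels exactly."""
    g = 1.0 / preferred_speed
    return retinal_velocity(v_world, v_pursuit) / (1.0 - g * v_pursuit)

print(compensatory_response(10.0, 0.0))  # response during fixation
print(compensatory_response(10.0, 5.0))  # same response during pursuit
```

Note that the raw retinal drive differs between the two conditions; only the divisively modulated output is pursuit-invariant, which is the signature of the compensatory neurons the abstract describes.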

7.
Humans and other species continually perform microscopic eye movements, even when attending to a single point. These movements, which include drifts and microsaccades, are under oculomotor control, elicit strong neural responses, and have been thought to serve important functions. The influence of these fixational eye movements on the acquisition and neural processing of visual information remains unclear. Here, we show that during viewing of natural scenes, microscopic eye movements carry out a crucial information-processing step: they remove predictable correlations in natural scenes by equalizing the spatial power of the retinal image within the frequency range of ganglion cells' peak sensitivity. This transformation, which had been attributed to center-surround receptive field organization, occurs prior to any neural processing and reveals a form of matching between the statistics of natural images and those of normal eye movements. We further show that the combined effect of microscopic eye movements and retinal receptive field organization is to convert spatial luminance discontinuities into synchronous firing events, beginning the process of edge detection. Thus, microscopic eye movements are fundamental to two goals of early visual processing: redundancy reduction and feature extraction.
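The power equalization described here can be shown with a toy calculation: natural scenes have spatial power falling roughly as 1/f², and, in a deliberately simplified reading of the result, fixational drift redistributes power in proportion to f², so the product is flat across frequency. Both functional forms are assumptions for illustration only, not the paper's actual analysis.

```python
def natural_scene_power(f):
    # spatial power spectrum of natural images falls roughly as 1/f^2
    return 1.0 / f ** 2

def drift_gain(f):
    # simplified stand-in for the power redistribution produced by
    # fixational drift, here taken to grow as f^2
    return f ** 2

# the product is flat: predictable 1/f^2 correlations are removed
for f in (1.0, 2.0, 4.0, 8.0):
    print(f, natural_scene_power(f) * drift_gain(f))
```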

8.
Barn owls are nocturnal predators which have evolved specific sensory and morphological adaptations to a life in dim light. Here, some of the most fundamental properties of spatial vision in barn owls are reviewed. The eye with its tubular shape is rigidly integrated in the skull so that eye movements are very much restricted. The eyes are oriented frontally, allowing for a large binocular overlap. Accommodation, but not pupil dilation, is coupled between the two eyes. The retina is rod dominated and lacks a visible fovea. Retinal ganglion cells form a marked region of highest density that extends to a horizontally oriented visual streak. Behavioural visual acuity and contrast sensitivity are poor, although the optical quality of the ocular media is excellent. A low f-number allows high image quality at low light levels. Vernier acuity was found to be a hyperacute percept. Owls have global stereopsis with hyperacute stereo acuity thresholds. Neurons of the visual Wulst are sensitive to binocular disparities. Orientation-based saliency was demonstrated in a visual-search experiment, and higher cognitive abilities were shown when the owls were able to use illusory contours for object discrimination.

9.
A model is described which provides a simple algorithm to compute the reaction times of saccadic eye movements and reach movements aimed at a single visual target. It is assumed that the two movements are prepared in parallel and initiated independently unless the preparation of the saccade for some reason takes longer than the preparation of the reach movement. In the latter case the final command to execute the reach movement is synchronized with that to execute the eye movement, and therefore the corresponding reaction times are highly correlated in a one-to-one relationship. Random variables are used to predict sets of data that are directly comparable with the experimental results. The algorithm includes the effects of daily practice (learning). The structure of the model and its computational results are compared with the physiological data from monkey and man.
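A minimal sketch of the parallel-preparation rule described in this abstract, with Gaussian reaction-time distributions whose means and spreads (in ms) are illustrative assumptions rather than the model's fitted parameters:

```python
import random

def simulate_trial(rng, eye_mu=180.0, eye_sd=30.0, hand_mu=260.0, hand_sd=40.0):
    """One trial: eye and hand commands are prepared independently; if
    saccade preparation finishes later than reach preparation, the reach
    command is synchronized to the eye command. All parameters (ms) are
    illustrative, not values from the study."""
    t_eye = rng.gauss(eye_mu, eye_sd)
    t_hand = rng.gauss(hand_mu, hand_sd)
    if t_eye > t_hand:
        t_hand = t_eye  # synchronized trials: RTs correlate one-to-one
    return t_eye, t_hand

rng = random.Random(1)
trials = [simulate_trial(rng) for _ in range(1000)]
synced = sum(1 for e, h in trials if e == h)
print(f"{synced} of {len(trials)} trials synchronized")
```

The synchronized subset reproduces the one-to-one RT correlation the model predicts, while the remaining trials show independent eye and hand reaction times.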

10.
Sensory responses of the brain are known to be highly variable, but the origin and functional relevance of this variability have long remained enigmatic. Using the variable foreperiod of a visual discrimination task to assess variability in the primate cerebral cortex, we report that visual evoked response variability is not only tied to variability in ongoing cortical activity, but also predicts mean response time. We used cortical local field potentials, simultaneously recorded from widespread cortical areas, to gauge both ongoing and visually evoked activity. Trial-to-trial variability of sensory evoked responses was strongly modulated by foreperiod duration and correlated both with the cortical variability before stimulus onset as well as with response times. In a separate set of experiments we probed the relation between small saccadic eye movements, foreperiod duration and manual response times. The rate of eye movements was modulated by foreperiod duration and eye position variability was positively correlated with response times. Our results indicate that when the time of a sensory stimulus is predictable, reduction in cortical variability before the stimulus can improve normal behavioral function that depends on the stimulus.

11.
Humans exhibit an anisotropy in direction perception: discrimination is superior when motion is around horizontal or vertical rather than diagonal axes. In contrast to the consistent directional anisotropy in perception, we found only small idiosyncratic anisotropies in smooth pursuit eye movements, a motor action requiring accurate discrimination of visual motion direction. Both pursuit and perceptual direction discrimination rely on signals from the middle temporal visual area (MT), yet analysis of multiple measures of MT neuronal responses in the macaque failed to provide evidence of a directional anisotropy. We conclude that MT represents different motion directions uniformly, and subsequent processing creates a directional anisotropy in pathways unique to perception. Our data support the hypothesis that, at least for visual motion, perception and action are guided by inputs from separate sensory streams. The directional anisotropy of perception appears to originate after the two streams have segregated and downstream from area MT.

12.
Smooth pursuit eye movements provide a good model system for cerebellar studies of complex motor control in monkeys. First, the pursuit system exhibits predictive control along complex trajectories and this control improves with training. Second, the flocculus/paraflocculus region of the cerebellum appears to generate this control. Lesions impair pursuit and neural activity patterns are closely related to eye motion during complex pursuit. Importantly, neural responses lead eye motion during predictive pursuit and lag eye motion during non-predictable target motions that require visual control. The idea that flocculus/paraflocculus predictive control is non-visual is also supported by a lack of correlation between neural activity and retinal image motion during pursuit. Third, biologically accurate neural network models of the flocculus/paraflocculus allow the exploration and testing of pursuit mechanisms. Our current model can generate predictive control without visual input in a manner that is compatible with the extensive experimental data available for this cerebellar system. Similar types of non-visual cerebellar control are likely to facilitate the wide range of other skilled movements that are observed.

13.
Eye movements modulate visual receptive fields of V4 neurons (cited 11 times: 0 self-citations, 11 by others)
The receptive field, defined as the spatiotemporal selectivity of neurons to sensory stimuli, is central to our understanding of the neuronal mechanisms of perception. However, despite the fact that eye movements are critical during normal vision, the influence of eye movements on the structure of receptive fields has never been characterized. Here, we map the receptive fields of macaque area V4 neurons during saccadic eye movements and find that receptive fields are remarkably dynamic. Specifically, before the initiation of a saccadic eye movement, receptive fields shrink and shift towards the saccade target. These spatiotemporal dynamics may enhance information processing of relevant stimuli during the scanning of a visual scene, thereby assisting the selection of saccade targets and accelerating the analysis of the visual scene during free viewing.
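The presaccadic shrink-and-shift described here can be sketched with a one-dimensional Gaussian receptive field. The shift and shrink fractions below are hypothetical parameters chosen for illustration, not measurements from the study.

```python
import math

def gaussian_rf(x, center, sigma):
    # 1-D Gaussian receptive-field profile
    return math.exp(-0.5 * ((x - center) / sigma) ** 2)

def presaccadic_rf(center, sigma, saccade_target,
                   shift_frac=0.3, shrink_frac=0.7):
    """Hypothetical presaccadic remapping: the RF centre moves a fixed
    fraction of the way toward the saccade target and the RF narrows.
    Both fractions are illustrative assumptions."""
    new_center = center + shift_frac * (saccade_target - center)
    return new_center, sigma * shrink_frac

c, s = presaccadic_rf(center=5.0, sigma=2.0, saccade_target=15.0)
print(c, s)  # centre shifted toward the target at 15, width reduced
```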

14.
Neurons in posterior parietal cortex of the awake, trained monkey respond to passive visual and/or somatosensory stimuli. In general, the receptive fields of these cells are large and nonspecific. When these neurons are studied during visually guided hand movements and eye movements, most of their activity can be accounted for by passive sensory stimulation. However, for some visual cells, the response to a stimulus is enhanced when it is to be the target for a saccadic eye movement. This enhancement is selective for eye movements into the visual receptive field since it does not occur with eye movements to other parts of the visual field. Cells that discharge in association with a visual fixation task have foveal receptive fields and respond to the spots of light used as fixation targets. Cells discharging selectively in association with different directions of tracking eye movements have directionally selective responses to moving visual stimuli. Every cell in our sample discharging in association with movement could be driven by passive sensory stimuli. We conclude that the activity of neurons in posterior parietal cortex is dependent on and indicative of external stimuli but not predictive of movement.

15.
Rhegmatogenous retinal detachment (RD) is a sight-threatening condition. In this type of RD a break in the retina allows retrohyaloid fluid to enter the subretinal space. The prognosis concerning the patients' visual acuity is better if the RD has not progressed to the macula. The patient is given posturing advice of bed rest and semi-supine positioning (with the RD as low as possible) to allow the utilisation of gravity and immobilisation in preventing progression of the RD. It is, however, unknown which external loads on the eye contribute the most to the progression of an RD. The goal of this exploratory study is to elucidate the role of eye movements caused by head movements and saccades on the progression of an RD. A finite element model is produced and evaluated in this study. The model is based on geometric and material properties reported in the literature. The model shows that a mild head movement and a severe eye movement produce similar traction loads on the retina. This implies that head movements—and not eye movements—are able to cause loads that can trigger and progress an RD. These preliminary results suggest that head movements have a larger effect on the progression of an RD than saccadic eye movements. This study is the first to use numerical analysis to investigate the development and progression of RD and shows promise for future work.

16.
In motion-processing areas of the visual cortex in cats and monkeys, an anisotropic distribution of direction selectivities displays a preference for movements away from the fovea. This 'centrifugal bias' has been hypothetically linked to the processing of optic flow fields generated during forward locomotion. In this paper, we show that flow fields induced on the retina in many natural situations of locomotion of higher mammals are indeed qualitatively centrifugal in structure, even when biologically plausible eye movements to stabilize gaze on environmental targets are performed. We propose a network model of heading detection that carries an anisotropy similar to the one found in cat and monkey. In simulations, this model reproduces a number of psychophysical results of human heading detection. It suggests that a recently reported human disability to correctly identify the direction of heading from optic flow when a certain type of eye movement is simulated might be linked to the noncentrifugal structure of the resulting retinal flow field and to the neurophysiological anisotropies. Received: 1 April 1994/Accepted in revised form: 4 August 1994
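The centrifugal structure of translational flow can be verified with a toy pinhole-camera calculation: under pure forward translation, the image velocity of every static scene point is proportional to its image position, so flow streams radially away from the focus of expansion. The camera model (focal length 1) and the sample points are assumptions for illustration.

```python
def image_flow(x, y, z, vz=1.0):
    """Image-plane velocity of a static scene point (x, y, z) under pure
    forward translation at speed vz (pinhole camera, focal length 1).
    Because (du, dv) is proportional to (u, v), flow points away from
    the focus of expansion at the image centre, i.e. it is centrifugal."""
    u, v = x / z, y / z              # projected image coordinates
    du, dv = u * vz / z, v * vz / z  # flow grows with distance from FOE
    return u, v, du, dv

for point in [(1.0, 0.5, 4.0), (-2.0, 1.0, 8.0), (0.5, -3.0, 2.0)]:
    u, v, du, dv = image_flow(*point)
    print((u, v), "->", (du, dv))
```

Gaze-stabilizing eye movements add a rotational component on top of this pattern; the paper's point is that the combined field remains qualitatively centrifugal in many natural cases.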

17.
All known photoreceptor cells adapt to constant light stimuli, so the retinal image fades when the eye is exposed to an immobile visual scene. Counter-strategies are therefore necessary to prevent blindness, and in mammals this is accomplished by fixational eye movements. Cubomedusae occupy a key position for understanding the evolution of complex visual systems, and their eyes are presumably subject to the same adaptive problems as the vertebrate eye, but they lack motor control of their visual system. The morphology of the visual system of cubomedusae ensures a constant orientation of the eyes and a clear division of the visual field, but thereby also a constant retinal image when exposed to stationary visual scenes. Here we show that the bell contractions used for swimming in the medusae refresh the retinal image in the upper lens eye of Tripedalia cystophora. This strongly suggests that strategies comparable to fixational eye movements evolved at the earliest metazoan stage to compensate for this intrinsic property of photoreceptors. Since the timing and amplitude of the rhopalial movements concur with the spatial and temporal resolution of the eye, this circumvents the need for post-processing in the central nervous system to remove image blur.

18.
Visual perception is burdened with a highly discontinuous input stream arising from saccadic eye movements. For successful integration into a coherent representation, the visuomotor system needs to deal with these self-induced perceptual changes and distinguish them from external motion. Forward models are one way to solve this problem where the brain uses internal monitoring signals associated with oculomotor commands to predict the visual consequences of corresponding eye movements during active exploration. Visual scenes typically contain a rich structure of spatial relational information, providing additional cues that may help disambiguate self-induced from external changes of perceptual input. We reasoned that a weighted integration of these two inherently noisy sources of information should lead to better perceptual estimates. Volunteer subjects performed a simple perceptual decision on the apparent displacement of a visual target, jumping unpredictably in sync with a saccadic eye movement. In a critical test condition, the target was presented together with a flanker object, where perceptual decisions could take into account the spatial distance between target and flanker object. Here, precision was better compared to control conditions in which target displacements could only be estimated from either extraretinal or visual relational information alone. Our findings suggest that under natural conditions, integration of visual space across eye movements is based upon close to optimal integration of both retinal and extraretinal pieces of information.

19.
The present experiment was designed to assess daily fluctuations of visual discriminability, a function reflecting the resolving power of visual sensitivity as measured by a differential threshold. Sixteen subjects underwent a visual discrimination threshold task (using the constant method) in a protocol allowing one point every 2 h over the 24 h period. The results show that the visual discrimination threshold is low in the morning and increases progressively over the day, reaching a first peak at 22:00. During the night, the same pattern occurs, with low threshold levels at the beginning of the night and high levels at the end. This profile is quite different from that of detection threshold variations, suggesting that the two visual functions are under the control of different underlying mechanisms. Two interpretations could account for this discrepancy. The first relates to different oscillators in the eye for detection and discrimination. The second refers to a possible linkage of visual discriminability with the sleep-wake cycle, since threshold measures were systematically low (i.e., high resolution power) after long sleep periods. (Chronobiology International, 17(12), 187-195, 2000)

20.
We introduce a model of eye movements during categorical search, the task of finding and recognizing categorically defined targets. It extends a previous model of eye movements during search (target acquisition model, TAM) by using distances from a support vector machine classification boundary to create probability maps indicating pixel-by-pixel evidence for the target category in search images. Other additions include functionality enabling target-absent searches, and a fixation-based blurring of the search images now based on a mapping between visual and collicular space. We tested this model on images from a previously conducted variable set-size (6/13/20) present/absent search experiment where participants searched for categorically defined teddy bear targets among random category distractors. The model not only captured target-present/absent set-size effects, but also accurately predicted for all conditions the numbers of fixations made prior to search judgements. It also predicted the percentages of first eye movements during search landing on targets, a conservative measure of search guidance. Effects of set size on false negative and false positive errors were also captured, but error rates in general were overestimated. We conclude that visual features discriminating a target category from non-targets can be learned and used to guide eye movements during categorical search.
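Converting classifier-boundary distances into a pixel-wise evidence map, as this abstract describes, can be sketched by squashing signed distances through a logistic function. This is a generic stand-in for the model's SVM-margin-based maps; the `scale` parameter and sample distances are assumptions for illustration.

```python
import math

def probability_map(distances, scale=1.0):
    """Map signed classifier distances (one per pixel; positive = target
    side of the decision boundary) to [0, 1] evidence scores with a
    logistic squashing. A hypothetical stand-in for the model's actual
    distance-to-probability mapping."""
    return [1.0 / (1.0 + math.exp(-scale * d)) for d in distances]

print(probability_map([-2.0, 0.0, 2.0]))  # pixels on the boundary map to 0.5
```

Peaks in such a map can then serve as candidate fixation targets, which is how margin-based evidence guides eye movements in this class of model.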


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)