Found 20 similar documents. Search time: 15 ms.
1.
David P. Crabb, Nicholas D. Smith, Franziska G. Rauscher, Catharine M. Chisholm, John L. Barbur, David F. Edgar, David F. Garway-Heath. PLoS ONE 2010, 5(3)
Background
Glaucoma is a progressive eye disease and a leading cause of visual disability. Automated assessment of the visual field determines the different stages in the disease process: it would be desirable to link these measurements taken in the clinic with patients' actual function, or to establish whether patients compensate for their restricted field of view when performing everyday tasks. Hence, this study investigated eye movements in glaucomatous patients viewing driving scenes in a hazard perception test (HPT).
Methodology/Principal Findings
The HPT is a component of the UK driving licence test consisting of a series of short film clips of various traffic scenes viewed from the driver's perspective, each containing hazardous situations that require the camera car to change direction or slow down. Data from nine glaucomatous patients with binocular visual field defects and ten age-matched control subjects were considered (all experienced drivers). Each subject viewed 26 different films with eye movements simultaneously monitored by an eye tracker. Purpose-written computer software was used to pre-process the data, co-register it to the film clips, and quantify eye movements and point-of-regard (using a dynamic bivariate contour ellipse analysis). On average, and across all HPT films, patients exhibited different eye movement characteristics from controls, making, for example, significantly more saccades (P<0.001; 95% confidence interval for mean increase: 9.2 to 22.4%). Whilst the average region of ‘point-of-regard’ of the patients did not differ significantly from that of the controls, there were revealing cases where patients failed to see a hazard in relation to their binocular visual field defect.
Conclusions/Significance
Characteristics of eye movement patterns in patients with bilateral glaucoma can differ significantly from those of age-matched controls when viewing a traffic scene. Further studies of eye movements made by glaucomatous patients could provide useful information about the definition of the visual field component required for fitness to drive.
2.
Background
Impairment of spatiotemporal visual processing in amblyopia has been studied extensively, but its effects on visuomotor tasks have rarely been examined. Here, we investigate how visual deficits in amblyopia affect motor planning and online control of visually guided, unconstrained reaching movements.
Methods
Thirteen patients with mild amblyopia, 13 with severe amblyopia and 13 visually normal participants were recruited. Participants reached and touched a visual target during binocular and monocular viewing. Motor planning was assessed by examining spatial variability of the trajectory at 50–100 ms after movement onset. Online control was assessed by examining endpoint variability and by calculating the coefficient of determination (R2), which correlates the spatial position of the limb during the movement with the endpoint position.
Results
Patients with amblyopia had reduced precision of the motor plan in all viewing conditions, as evidenced by increased variability of the reach early in the trajectory. Endpoint precision was comparable between patients with mild amblyopia and control participants. Patients with severe amblyopia had reduced endpoint precision along azimuth and elevation during amblyopic eye viewing only, and along the depth axis in all viewing conditions. In addition, they had significantly higher R2 values at 70% of movement time along the elevation and depth axes during amblyopic eye viewing.
Conclusion
Sensory uncertainty due to amblyopia leads to reduced precision of the motor plan. The ability to implement online corrections depends on the severity of the visual deficit, the viewing condition, and the axis of the reaching movement. Patients with mild amblyopia used online control effectively to compensate for the reduced precision of the motor plan. In contrast, patients with severe amblyopia were not able to use online control as effectively to amend the limb trajectory, especially along the depth axis, which could be due to their abnormal stereopsis.
3.
Kessler H, Taubner S, Buchheim A, Münte TF, Stasch M, Kächele H, Roth G, Heinecke A, Erhard P, Cierpka M, Wiswede D. PLoS ONE 2011, 6(1): e15712
Objectives
In the search for neurobiological correlates of depression, a major finding is hyperactivity in limbic-paralimbic regions. However, results so far have been inconsistent, and the stimuli used are often unspecific to depression. This study explored hemodynamic responses of the brain in patients with depression while processing individualized and clinically derived stimuli.
Methods
Eighteen unmedicated patients with recurrent major depressive disorder and 17 never-depressed control subjects took part in standardized clinical interviews from which individualized formulations of core interpersonal dysfunction were derived. In the patient group, such formulations reflected core themes relating to the onset and maintenance of depression. In controls, formulations reflected a major source of distress. This material was thereafter presented to subjects during functional magnetic resonance imaging (fMRI) assessment.
Results
Increased hemodynamic responses in the anterior cingulate cortex, medial frontal gyrus, fusiform gyrus and occipital lobe were observed in both patients and controls when viewing individualized stimuli. Relative to control subjects, patients with depression showed increased hemodynamic responses in limbic-paralimbic and subcortical regions (e.g. amygdala and basal ganglia) but no signal decrease in prefrontal regions.
Conclusions
This study provides the first evidence that individualized stimuli derived from standardized clinical interviewing can lead to hemodynamic responses in regions associated with self-referential and emotional processing in both groups, and in limbic-paralimbic and subcortical structures in individuals with depression. Although the regions with increased responses in patients have been previously reported, this study enhances the ecological value of fMRI findings by applying stimuli that are of personal relevance to each individual's depression.
4.
Background
Selective visual attention is the process by which the visual system enhances behaviorally relevant stimuli and filters out others. Visual attention is thought to operate through a cortical mechanism known as biased competition. Representations of stimuli within cortical visual areas compete such that they mutually suppress each other's neural responses. Competition increases with stimulus proximity and can be biased in favor of one stimulus (over another) as a function of stimulus significance, salience, or expectancy. Though there is considerable evidence of biased competition within the human visual system, the dynamics of the process remain unknown.
Methodology/Principal Findings
Here, we used scalp-recorded electroencephalography (EEG) to examine neural correlates of biased competition in the human visual system. In two experiments, subjects performed a task requiring them either to identify two targets simultaneously (Experiment 1) or to discriminate one target while ignoring a decoy (Experiment 2). Competition was manipulated by altering the spatial separation between the target(s) and/or decoy. Both experimental tasks should induce competition between stimuli. However, only the task of Experiment 2 should invoke a strong bias in favor of the target (over the decoy). The amplitude of two lateralized components of the event-related potential, the N2pc and Ptc, mirrored these predictions. N2pc amplitude increased with increasing stimulus separation in Experiments 1 and 2. However, Ptc amplitude varied only in Experiment 2, becoming more positive with decreased spatial separation.
Conclusions/Significance
These results suggest that the N2pc and Ptc components may index distinct processes of biased competition: the N2pc reflecting visual competitive interactions, and the Ptc reflecting a bias in processing necessary to individuate task-relevant stimuli.
5.
Background
Converging evidence from different species indicates that some newborn vertebrates, including humans, have visual predispositions to attend to the head region of animate creatures. It has been claimed that newborn preferences for faces are domain-relevant and similar in different species. One of the most common criticisms of the work supporting domain-relevant face biases in human newborns is that, in most studies, the newborns already have several hours of visual experience when tested. This issue can be addressed by testing newly hatched face-naïve chicks (Gallus gallus), whose preferences can be assessed prior to any other visual experience with faces.
Methods
In the present study, for the first time, we test the prediction that both newly hatched chicks and human newborns will demonstrate similar preferences for face stimuli over spatial-frequency-matched structured noise. Chicks and babies were tested using identical stimuli for the two species. Chicks underwent a spontaneous preference task, in which they had to approach one of two stimuli simultaneously presented at the ends of a runway. Human newborns participated in a preferential looking task.
Results and Significance
We observed a significant preference for orienting toward the face stimulus in both species. Further, human newborns spent more time looking at the face stimulus, and chicks preferentially approached and stood near the face stimulus. These results confirm the view that widely diverging vertebrates possess similar domain-relevant biases toward faces shortly after hatching or birth, and provide a behavioural basis for comparison with neuroimaging studies using similar stimuli.
6.
Purpose
We sought brain activity that predicts visual consciousness.
Methods
We used electroencephalography (EEG) to measure brain activity in response to a 1000-ms display of sine-wave gratings, oriented vertically in one eye and horizontally in the other. This display yields binocular rivalry: irregular alternations in visual consciousness between the images viewed by the two eyes. We replaced both gratings with 200 ms of darkness, the gap, before showing a second display of the same rival gratings for another 1000 ms. We followed this with a 1000-ms mask and then a 2000-ms inter-trial interval (ITI). Eleven participants pressed keys after the second display in numerous trials to report whether or not the orientation of the visible grating changed from before to after the gap. Each participant also responded to numerous non-rivalry trials in which the gratings had identical orientations in the two eyes and in which the orientation of both either changed physically after the gap or did not.
Results
We found that greater activity from lateral occipital-parietal-temporal areas about 180 ms after initial onset of rival stimuli predicted a change in visual consciousness more than 1000 ms later, on re-presentation of the rival stimuli. We also found that less activity from parietal, central, and frontal electrodes about 400 ms after initial onset of rival stimuli predicted a change in visual consciousness about 800 ms later, on re-presentation of the rival stimuli. There was no such predictive activity when the change in visual consciousness occurred because the stimuli changed physically.
Conclusion
We found early EEG activity that predicted later visual consciousness. Predictive activity 180 ms after onset of the first display may reflect adaptation of the neurons mediating visual consciousness in our displays. Predictive activity 400 ms after onset of the first display may reflect a less reliable brain state mediating visual consciousness.
7.
Ping Xie, Zizhong Hu, Xiaojun Zhang, Xinhua Li, Zhishan Gao, Dongqing Yuan, Qinghuai Liu. PLoS ONE 2014, 9(11)
Objective
To construct a life-sized eye model using three-dimensional (3D) printing technology for the study of fundus viewing systems.
Methods
We devised our schematic model eye based on Navarro's eye and redesigned some parameters to account for the change of corneal material and the implantation of intraocular lenses (IOLs). The optical performance of our schematic model eye was compared with Navarro's schematic eye and two other reported physical model eyes using the ZEMAX optical design software. With computer-aided design (CAD) software, we designed the 3D digital model of the main structure of the physical model eye, which was used for 3D printing. Together with the main printed structure, a polymethyl methacrylate (PMMA) aspherical cornea, a variable iris, and IOLs were assembled into a physical eye model. Angle scale bars were glued from the posterior to the periphery of the retina. We then fabricated three other physical models with different states of ametropia. Optical parameters of these physical eye models were measured to verify the 3D printing accuracy.
Results
In on-axis calculations, our schematic model eye possessed a spot-diagram size similar to those of Navarro's and Bakaraju's model eyes, and much smaller than that of Arianpour's model eye. Moreover, the spherical aberration of our schematic eye was much lower than in the other three model eyes. In off-axis simulation, it possessed slightly higher coma and similar astigmatism, field curvature and distortion. The MTF curves showed that all the model eyes diminished in resolution with increasing field of view, and that the resolution of our physical eye model diminished with a tendency similar to that of Navarro's eye. The measured parameters of our eye models with different states of ametropia were in line with the theoretical values.
Conclusions
The schematic eye model we designed simulates the optical performance of the human eye well, and the fabricated physical model can be used as a tool in research on the fundus viewing range.
8.
Background
Studies of perceptual learning have largely focused on unisensory stimuli. However, multisensory interactions are ubiquitous in perception, even at early processing stages, and thus can potentially play a role in learning. Here, we examine the effect of auditory-visual congruency on visual learning.
Methodology/Principal Findings
Subjects were trained over five days on a visual motion coherence detection task with either congruent or incongruent audiovisual stimuli. Comparing performance on visual-only trials, we find that training with congruent audiovisual stimuli produces significantly better learning than training with incongruent audiovisual stimuli or with visual stimuli alone.
Conclusions/Significance
This advantage of stimulus congruency during training suggests that the benefits of multisensory training may result from audiovisual interactions at a perceptual rather than a cognitive level.
9.
Background
Neuroimaging has demonstrated that voluntary emotion regulation is effective in reducing amygdala activation to aversive stimuli during regulation. However, to date little is known about the sustainability of these neural effects once active emotion regulation has been terminated.
Methodology/Principal Findings
We addressed this issue by means of functional magnetic resonance imaging (fMRI) in healthy female subjects. We performed an active emotion regulation task using aversive visual scenes (task 1) and a subsequent passive viewing task using the same stimuli (task 2). Here we demonstrate not only a significantly reduced amygdala activation during active regulation but also a sustained regulation effect on the amygdala in the subsequent passive viewing task. This effect was related to an immediate increase of the amygdala signal in task 1 once active emotion regulation had been terminated: the larger this peak post-regulation signal in the amygdala in task 1, the smaller the sustained regulation effect in task 2.
Conclusions/Significance
In summary, we found clear evidence that effects of voluntary emotion regulation extend beyond the period of active regulation. These findings are of importance for the understanding of emotion regulation in general, for disorders of emotion regulation and for psychotherapeutic interventions.
10.
Background
Normal reading requires eye guidance and activation of lexical representations so that words in text can be identified accurately. However, little is known about how the visual content of text supports eye guidance and lexical activation, and thereby enables normal reading to take place.
Methods and Findings
To investigate this issue, we examined eye movement performance when reading sentences displayed as normal and when the spatial frequency content of the text was filtered to contain just one of five types of visual content: very coarse, coarse, medium, fine, and very fine. The effect of each type of visual content specifically on lexical activation was assessed using a target word of either high or low lexical frequency embedded in each sentence.
Results
No type of visual content produced normal eye movement performance, but performance was closest to normal for medium and fine visual content. However, effects of lexical frequency emerged early in the eye movement record for coarse, medium, fine, and very fine visual content, and were observed in total reading times for target words for all types of visual content.
Conclusion
These findings suggest that while the orchestration of multiple scales of visual content is required for normal eye guidance during reading, a broad range of visual content can activate processes of word identification independently. Implications for understanding the role of visual content in reading are discussed.
11.
12.
Background
The duration of sounds can affect the perceived duration of co-occurring visual stimuli. However, it is unclear whether this is limited to amodal processes of duration perception or affects other non-temporal qualities of visual perception.
Methodology/Principal Findings
Here, we tested the hypothesis that visual sensitivity, rather than only the perceived duration of visual stimuli, can be affected by the duration of co-occurring sounds. We found that visual detection sensitivity (d’) for unimodal stimuli was higher for stimuli of longer duration. Crucially, in a cross-modal condition, we replicated previous unimodal findings, observing that visual sensitivity was shaped by the duration of co-occurring sounds. When short visual stimuli (∼24 ms) were accompanied by sounds of matching duration, visual sensitivity was decreased relative to the unimodal visual condition. However, when the same visual stimuli were accompanied by longer auditory stimuli (∼60–96 ms), visual sensitivity was increased relative to the performance for ∼24 ms auditory stimuli. Across participants, this sensitivity enhancement was observed within a critical time window of ∼60–96 ms. Moreover, the amplitude of this effect correlated with the visual sensitivity enhancement found for longer-lasting visual stimuli across participants.
Conclusions/Significance
Our findings show that the duration of co-occurring sounds affects visual perception; it changes visual sensitivity in a similar way as altering the (actual) duration of the visual stimuli does.
13.
Saskia J. te Velde, Amika Singh, Mai Chinapaw, Ilse De Bourdeaudhuij, Natasa Jan, Eva Kovacs, Elling Bere, Froydis N. Vik, Bettina Bringolf-Isler, Yannis Manios, Luis Moreno, Johannes Brug. PLoS ONE 2014, 9(11)
Objective
To design interventions that target energy balance-related behaviours, knowledge of primary schoolchildren's perceptions regarding soft drink intake, fruit juice intake, breakfast consumption, TV viewing and physical activity (PA) is essential. The current study describes personal beliefs and attitudes, and home- and friend-related variables, regarding these behaviours across Europe.
Design
Cross-sectional study in which personal, family- and friend-related variables were assessed by validated questionnaires and dichotomized as favourable versus unfavourable answers. Logistic regression analyses were conducted to estimate the proportions of children giving unfavourable answers and to test between-country differences.
Setting
A survey in eight European countries.
Subjects
A total of 7903 primary schoolchildren aged 10–12 years.
Results
A majority of the children reported unfavourable attitudes, preferences and subjective norms regarding soft drink intake, fruit juice intake and TV viewing, accompanied by high availability and accessibility at home. Few children reported unfavourable attitudes and preferences regarding breakfast consumption and PA. Many children reported unfavourable health beliefs regarding breakfast consumption and TV viewing. Substantial differences between countries were observed, especially for variables regarding soft drink intake, breakfast consumption and TV viewing.
Conclusion
The surveyed children demonstrated favourable attitudes to some healthy behaviours (PA, breakfast intake) as well as to some unhealthy behaviours (soft drink consumption, TV viewing). Additionally, many children across Europe have personal beliefs, and are exposed to social environments, that are not supportive of engagement in healthy behaviours. Moreover, the large differences in personal, family- and friend-related variables across Europe argue for implementing different strategies in the different European countries.
14.
Background
The image formed by the eye's optics is inherently blurred by aberrations specific to an individual's eyes. We examined how visual coding is adapted to the optical quality of the eye.
Methods and Findings
We assessed the relationship between perceived blur and the retinal image blur resulting from high-order aberrations in an individual's optics. Observers judged perceptual blur in a psychophysical two-alternative forced-choice paradigm, on stimuli viewed through perfectly corrected optics (using a deformable mirror to compensate for the individual's aberrations). Realistic blur of different amounts and forms was computer-simulated using real aberrations from a population. The blur levels perceived as best focused were close to the levels predicted by an individual's high-order aberrations over a wide range of blur magnitudes, and were systematically biased when observers were instead adapted to the blur reproduced from a different observer's eye.
Conclusions
Our results provide strong evidence that spatial vision is calibrated for the specific blur levels present in each individual's retinal image, and that this adaptation at least partly reflects how spatial sensitivity is normalized in the neural coding of blur.
15.
Background
Synesthesia is a condition in which the stimulation of one sense elicits an additional experience, often in a different (i.e., unstimulated) sense. Although only a small proportion of the population is synesthetic, there is growing evidence to suggest that neurocognitively normal individuals also experience some form of synesthetic association between stimuli presented to different sensory modalities (e.g., between auditory pitch and visual size, where lower-frequency tones are associated with large objects and higher-frequency tones with small objects). While previous research has highlighted crossmodal interactions between synesthetically corresponding dimensions, the possible role of synesthetic associations in multisensory integration has not been considered previously.
Methodology
Here we investigate the effects of synesthetic associations by presenting pairs of asynchronous or spatially discrepant visual and auditory stimuli that were either synesthetically matched or mismatched. In a series of three psychophysical experiments, participants reported the relative temporal order of presentation or the relative spatial locations of the two stimuli.
Principal Findings
The reliability of non-synesthetic participants' estimates of both audiovisual temporal asynchrony and spatial discrepancy was lower for pairs of synesthetically matched than for synesthetically mismatched audiovisual stimuli.
Conclusions
Recent studies of multisensory integration have shown that reduced reliability of perceptual estimates regarding intersensory conflicts is the marker of a stronger coupling between the unisensory signals. Our results therefore indicate a stronger coupling of synesthetically matched versus mismatched stimuli and provide the first psychophysical evidence that synesthetic congruency can promote multisensory integration. Synesthetic crossmodal correspondences therefore appear to play a crucial (if unacknowledged) role in the multisensory integration of auditory and visual information.
16.
Susan B. Perlman, James P. Morris, Brent C. Vander Wyk, Steven R. Green, Jaime L. Doyle, Kevin A. Pelphrey. PLoS ONE 2009, 4(6)
Background
Determining the ways in which personality traits interact with contextual determinants to shape social behavior remains an important area of empirical investigation. The specific personality trait of neuroticism has been related to characteristic negative emotionality and associated with heightened attention to negative, emotionally arousing environmental signals. However, the mechanisms by which this personality trait may shape social behavior remain largely unspecified.
Methodology/Principal Findings
We employed eye tracking to investigate the relationship between characteristics of visual scanpaths in response to emotional facial expressions and individual differences in personality. We discovered that the amount of time spent looking at the eyes of fearful faces was positively related to neuroticism.
Conclusions/Significance
This finding is discussed in relation to previous behavioral research relating personality to selective attention for trait-congruent emotional information, neuroimaging studies relating differences in personality to amygdala reactivity to socially relevant stimuli, and genetic studies suggesting linkages between the serotonin transporter gene and neuroticism. We conclude that personality may be related to interpersonal interaction by shaping aspects of social cognition as basic as eye contact. In this way, eye gaze represents a possible behavioral link in a complex relationship between genes, brain function, and personality.
17.
Frick C, Lang S, Kotchoubey B, Sieswerda S, Dinu-Biringer R, Berger M, Veser S, Essig M, Barnow S. PLoS ONE 2012, 7(8): e41650
Background
One of the core symptoms of borderline personality disorder (BPD) is instability in interpersonal relationships. This might be related to differences in mindreading between BPD patients and healthy individuals.
Methods
We examined the behavioural and neurophysiological (fMRI) responses of BPD patients and healthy controls (HC) during performance of the ‘Reading the Mind in the Eyes’ test (RMET).
Results
Mental state discrimination was significantly better and faster for affective eye gazes in BPD patients than in HC. At the neurophysiological level, this was manifested in stronger activation of the amygdala and greater activity of the medial frontal gyrus, the left temporal pole and the middle temporal gyrus during affective eye gazes. In contrast, HC subjects showed greater activation in the insula and the superior temporal gyri.
Conclusion
These findings indicate that BPD patients are highly vigilant to social stimuli, possibly because they resonate intuitively with the mental states of others.
18.
Background
Vision provides the most salient information with regard to stimulus motion. However, it has recently been demonstrated that static visual stimuli are perceived as moving laterally when paired with alternating left-right sound sources. The underlying mechanism of this phenomenon remains unclear; it has not yet been determined whether auditory motion signals, rather than auditory positional signals, can directly contribute to visual motion perception.
Methodology/Principal Findings
Static visual flashes were presented at retinal locations outside the fovea together with lateral auditory motion provided by a virtual stereo noise source smoothly shifting in the horizontal plane. The flashes appeared to move with the auditory motion when their spatiotemporal position was in the middle of the auditory motion trajectory. Furthermore, the lateral auditory motion altered visual motion perception in a global motion display in which localized motion signals from multiple visual stimuli were combined to produce a coherent visual motion percept.
Conclusions/Significance
These findings suggest that direct interactions exist between auditory and visual motion signals, and that there might be common neural substrates for auditory and visual motion processing.
19.
Teresa A. Victor, Maura L. Furey, Stephen J. Fromm, Patrick S. F. Bellgowan, Arne Öhman, Wayne C. Drevets. PLoS ONE 2012, 7(10)
Background
Major depressive disorder (MDD) is associated with a mood-congruent processing bias in the amygdala toward face stimuli portraying sad expressions, a bias that is evident even when such stimuli are presented below the level of conscious awareness. However, the extended functional anatomical network that maintains this response bias has not been established.
Aims
To identify neural network differences in the hemodynamic response to implicitly presented facial expressions between depressed and healthy control participants.
Method
Unmedicated depressed participants with MDD (n = 22) and healthy controls (HC; n = 25) underwent functional MRI as they viewed face stimuli showing sad, happy or neutral expressions, presented using a backward masking design. The blood-oxygen-level-dependent (BOLD) signal was measured to identify regions where the hemodynamic response to the emotionally valenced stimuli differed between groups.
Results
The MDD subjects showed greater BOLD responses than the controls to masked-sad versus masked-happy faces in the hippocampus, amygdala and anterior inferotemporal cortex. While viewing both masked-sad and masked-happy faces relative to masked-neutral faces, the depressed subjects showed greater hemodynamic responses than the controls in a network that included the medial and orbital prefrontal cortices and anterior temporal cortex.
Conclusions
Depressed and healthy participants showed distinct hemodynamic responses to masked-sad and masked-happy faces in neural circuits known to support the processing of emotionally valenced stimuli and to integrate the sensory and visceromotor aspects of emotional behavior. Altered function within these networks in MDD may establish and maintain illness-associated differences in the salience of sensory/social stimuli, such that attention is biased toward negative and away from positive stimuli.
20.