Similar Articles (20 results)
1.
Shapiro AG, Knight EJ, Lu ZL. PLoS ONE. 2011;6(4):e18719

Background

Anatomical and physiological differences between the central and peripheral visual systems are well documented. Recent findings have suggested that vision in the periphery is not just a scaled version of foveal vision, but rather is relatively poor at representing spatial and temporal phase and other visual features. Shapiro, Lu, Huang, Knight, and Ennis (2010) have recently examined a motion stimulus (the “curveball illusion”) in which the shift from foveal to peripheral viewing results in a dramatic spatial/temporal discontinuity. Here, we apply a similar analysis to a range of other spatial/temporal configurations that create perceptual conflict between foveal and peripheral vision.

Methodology/Principal Findings

To elucidate how the differences between foveal and peripheral vision affect super-threshold vision, we created a series of complex visual displays that contain opposing sources of motion information. The displays (referred to as the peripheral escalator illusion, peripheral acceleration and deceleration illusions, rotating reversals illusion, and disappearing squares illusion) create dramatically different perceptions when viewed foveally versus peripherally. We compute the first-order and second-order directional motion energy available in the displays using a three-dimensional Fourier analysis in the (x, y, t) space. The peripheral escalator, acceleration and deceleration illusions, and the rotating reversals illusion all show a similar trend: in the fovea, the first-order and second-order motion energy can be perceptually separated from each other; in the periphery, the perception seems to correspond to a combination of the multiple sources of motion information. The disappearing squares illusion shows that the assembly of the features of Kanizsa squares becomes slower in the periphery.
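The three-dimensional Fourier analysis described above can be sketched in a few lines of NumPy. This is a minimal illustration rather than the authors' code: it assumes the display is available as a (t, y, x) luminance array, considers only horizontal motion, and uses full-wave rectification as one common way to expose second-order (contrast-defined) motion energy to the same linear analysis.

```python
import numpy as np

def directional_motion_energy(movie):
    """Sum spectral power in the (ft, fx) quadrants associated with the
    two horizontal drift directions. `movie` has shape (t, y, x); which
    sign pairing maps onto leftward versus rightward depends on axis
    conventions, so the labels here are illustrative."""
    power = np.abs(np.fft.fftn(movie)) ** 2
    nt, ny, nx = movie.shape
    ft = np.fft.fftfreq(nt)[:, None, None]   # temporal frequencies
    fx = np.fft.fftfreq(nx)[None, None, :]   # horizontal spatial frequencies
    one_direction = (power * (ft * fx > 0)).sum()
    other_direction = (power * (ft * fx < 0)).sum()
    return one_direction, other_direction

def second_order_energy(movie):
    # Rectifying the luminance signal first exposes contrast-defined
    # (second-order) motion to the same first-order analysis.
    return directional_motion_energy(np.abs(movie - movie.mean()))
```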

Conclusions/Significance

The results lead us to hypothesize “feature blur” in the periphery (i.e., the peripheral visual system combines features that the foveal visual system can separate). Feature blur is of general importance because humans frequently bring information from the periphery to the fovea and vice versa.

2.

Background

When a moving stimulus and a briefly flashed static stimulus are physically aligned in space, the static stimulus is perceived as lagging behind the moving stimulus. This widely replicated phenomenon is known as the Flash-Lag Effect (FLE). For the first time, we employed biological motion as the moving stimulus, which is important for two reasons. Firstly, biological motion is processed by visual as well as somatosensory brain areas, which makes it a prime candidate for elucidating the interplay between the two systems with respect to the FLE. Secondly, discussions about the mechanisms of the FLE tend to resort to evolutionary arguments, while most studies employ highly artificial stimuli with constant velocities.

Methodology/Principal Findings

Since biological motion is ecologically valid, it follows complex patterns with changing velocity. We therefore compared biological to symbolic motion with the same acceleration profile. Our results with 16 observers revealed a qualitatively different pattern for biological compared to symbolic motion, and this pattern was predicted by the characteristics of motor resonance: the amount of anticipatory processing of perceived actions, based on the induced perspective and agency, modulated the FLE.

Conclusions/Significance

Our study provides the first evidence for an FLE with non-linear motion in general and with biological motion in particular. Our results suggest that predictive coding within the sensorimotor system alone cannot explain the FLE. Our findings are compatible with visual prediction (Nijhawan, 2008), which assumes that extrapolated motion representations within the visual system generate the FLE. These representations are modulated by sudden visual input (e.g. offset signals) or by input from other systems (e.g. sensorimotor) that can boost or attenuate overshooting representations in accordance with biased neural competition (Desimone & Duncan, 1995).

3.

Background

It is well established that foveating a behaviorally relevant part of the visual field improves localization performance as compared to the situation where the gaze is directed elsewhere. Reduced localization performance in the peripheral encoding conditions has been attributed to an eccentricity-dependent increase in positional uncertainty. It is not known, however, whether and how the foveal and peripheral encoding conditions can influence spatial interval estimation. In this study we compare observers' estimates of a distance between two co-planar dots in the condition where they foveate the two sample dots and where they fixate a central dot while viewing the sample dots peripherally.

Methodology/Principal Findings

Observers were required to reproduce, after a short delay, the distance between two sample dots, using a stationary reference dot and a movable mouse pointer. When both sample dots are foveated, we find that the distance estimation error is small but consistently increases with the separation between the dots. In comparison, distance judgments in the peripheral encoding condition are significantly overestimated for smaller separations and become similar to foveal performance for distances from 10 to 16 degrees.

Conclusions/Significance

Although we find improved accuracy of distance estimation in the foveal condition, the fact that the difference reflects a reduction of the estimation bias present in the peripheral condition challenges a simple account in terms of reduced eccentricity-dependent positional uncertainty. Instead, we present evidence for an explanation in terms of the neuronal populations activated by the two sample dots and their inhibitory interactions under the different visual encoding conditions. We support our claims with simulations that take into account receptive field size differences between the two encoding conditions.
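A toy version of such a population simulation (not the authors' actual model; all parameters are illustrative) shows the claimed effect: when two Gaussian activity bumps with mutual subtractive inhibition overlap, the decoded peaks are pushed apart, and the repulsion grows with receptive field size, i.e., with eccentricity, while vanishing for large separations.

```python
import numpy as np

def estimated_separation(true_sep, rf_sigma, inhibition=0.5):
    """Each dot evokes a Gaussian population activity bump; each bump is
    suppressed by a scaled copy of the other (lateral inhibition), which
    shifts the decoded peaks outward. Units are degrees."""
    x = np.linspace(-30.0, 30.0, 6001)
    bump = lambda center: np.exp(-(x - center) ** 2 / (2 * rf_sigma ** 2))
    a, b = bump(-true_sep / 2), bump(true_sep / 2)
    peak_a = x[np.argmax(a - inhibition * b)]
    peak_b = x[np.argmax(b - inhibition * a)]
    return peak_b - peak_a

for sep in (2, 6, 10, 16):
    print(sep,
          round(estimated_separation(sep, rf_sigma=1.0), 2),   # foveal-like RFs
          round(estimated_separation(sep, rf_sigma=4.0), 2))   # peripheral-like RFs
```

With the broad, periphery-like receptive fields, small separations are overestimated, while estimates for separations of about 10 degrees and beyond approach the fovea-like values, mirroring the pattern reported above.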

4.

Background

How does the brain estimate object stability? Objects fall over when the gravity-projected centre-of-mass lies outside the point or area of support. To estimate an object's stability visually, the brain must integrate information across the shape and compare its orientation to gravity. When observers lie on their sides, gravity is perceived as tilted toward body orientation, consistent with a representation of gravity derived from multisensory information. We exploited this to test whether vestibular and kinesthetic information affect this visual task or whether the brain estimates object stability solely from visual information.
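The physical rule is simple to state: for an object tilted about one edge of its base, the critical angle at which toppling and righting are equally imminent is the tilt at which the centre of mass passes vertically above the pivot. A small sketch (illustrative, not from the paper):

```python
import math

def critical_angle_deg(com_height, com_offset):
    """Critical tilt angle for a rigid object pivoting about a base edge.
    Below atan(com_offset / com_height) the gravity vector through the
    centre of mass still falls inside the base and the object rights
    itself; beyond it, the object falls over.

    com_height -- height of the centre of mass above the base
    com_offset -- horizontal distance from the pivot edge to the COM
    """
    return math.degrees(math.atan2(com_offset, com_height))

# A uniform box 4 units wide and 10 tall pivoting about a base corner
# (COM at height 5, offset 2) topples beyond about 21.8 degrees.
print(critical_angle_deg(5.0, 2.0))
```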

Methodology/Principal Findings

In three body orientations, participants viewed images of objects close to a table edge. We measured the critical angle at which each object appeared equally likely to fall over or right itself. Perceived gravity was measured using the subjective visual vertical. The results show that the perceived critical angle was significantly biased in the same direction as the subjective visual vertical (i.e., towards the multisensory estimate of gravity).

Conclusions/Significance

Our results rule out a general explanation that the brain depends solely on visual heuristics and assumptions about object stability. Instead, they suggest that multisensory estimates of gravity govern the perceived stability of objects, resulting in objects appearing more stable than they are when the head is tilted in the same direction in which they fall.

5.

Background

The image formed by the eye's optics is inherently blurred by aberrations specific to an individual's eyes. We examined how visual coding is adapted to the optical quality of the eye.

Methods and Findings

We assessed the relationship between perceived blur and the retinal image blur resulting from high order aberrations in an individual's optics. Observers judged perceptual blur in a psychophysical two-alternative forced choice paradigm, on stimuli viewed through perfectly corrected optics (using a deformable mirror to compensate for the individual's aberrations). Realistic blur of different amounts and forms was computer simulated using real aberrations from a population. The blur levels perceived as best focused were close to the levels predicted by an individual's high order aberrations over a wide range of blur magnitudes, and were systematically biased when observers were instead adapted to the blur reproduced from a different observer's eye.
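Simulating realistic blur from measured aberrations follows standard Fourier optics: the wavefront error over the pupil defines a point-spread function, and the stimulus is convolved with it. The sketch below is a simplified illustration with a single defocus term and arbitrary scaling; the study used full sets of real high-order aberrations.

```python
import numpy as np

def psf_from_wavefront(wavefront_waves, pupil):
    """PSF = |FT{pupil * exp(i * 2*pi * W)}|^2, with W in waves."""
    field = pupil * np.exp(1j * 2 * np.pi * wavefront_waves)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf / psf.sum()

def blur(image, psf):
    """Circular convolution of an image with a same-sized PSF."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) *
                                np.fft.fft2(np.fft.ifftshift(psf))))

# Illustrative aberration: pure defocus (proportional to the Zernike
# Z(2,0) mode) over a circular pupil.
n = 256
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
r2 = x ** 2 + y ** 2
pupil = (r2 <= 1.0).astype(float)
defocus_waves = 0.25 * (2.0 * r2 - 1.0) * pupil
psf = psf_from_wavefront(defocus_waves, pupil)
```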

Conclusions

Our results provide strong evidence that spatial vision is calibrated for the specific blur levels present in each individual's retinal image and that this adaptation at least partly reflects how spatial sensitivity is normalized in the neural coding of blur.

6.
Kramer P, Di Bono MG, Zorzi M. PLoS ONE. 2011;6(2):e17378

Background

Numerosity estimation is a basic preverbal ability that humans share with many animal species and that is believed to be foundational to numeracy skills. It is notoriously difficult, however, to establish whether numerosity estimation is based on numerosity itself, or on one or more non-numerical cues such as, in visual stimuli, spatial extent and density. Frequently, different non-numerical cues are held constant on different trials. This strategy, however, still allows numerosity estimation to be based on a combination of non-numerical cues rather than on any particular one by itself.

Methodology/Principal Findings

Here we introduce a novel method, based on second-order (contrast-based) visual motion, to create stimuli that exclude all first-order (luminance-based) cues to numerosity. We show that numerosities can be estimated almost as well in second-order motion as in first-order motion.
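The logic of the stimulus can be made concrete with a short sketch (a schematic reconstruction, not the authors' stimulus code): regenerate zero-mean noise on every frame and let only a contrast envelope drift, so the expected luminance is identical at every pixel and no first-order cue tracks the motion.

```python
import numpy as np

def second_order_motion(n_frames=60, size=256, speed=2.0, period=64.0, seed=0):
    """Contrast-defined drift: dynamic binary noise (zero-mean carrier)
    multiplied by a rightward-moving sinusoidal contrast envelope.
    Expected luminance is 0.5 everywhere; only contrast moves."""
    rng = np.random.default_rng(seed)
    x = np.arange(size)
    frames = []
    for t in range(n_frames):
        carrier = rng.choice([-1.0, 1.0], size=(size, size))
        envelope = 0.5 + 0.5 * np.sin(2 * np.pi * (x - speed * t) / period)
        frames.append(0.5 + 0.45 * carrier * envelope[None, :])  # in [0.05, 0.95]
    return np.array(frames)
```

Items to be enumerated can then be defined as regions of high contrast modulation rather than of luminance difference.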

Conclusions/Significance

The results show that numerosity estimation need not be based on first-order spatial filtering, first-order density perception, or any other processing of luminance-based cues to numerosity. Our method can be used as an effective tool to control non-numerical variables in studies of numerosity estimation.

7.

Background

Vision provides the most salient information with regard to stimulus motion. However, it has recently been demonstrated that static visual stimuli are perceived as moving laterally when paired with alternating left-right sound sources. The underlying mechanism of this phenomenon remains unclear; it has not yet been determined whether auditory motion signals, rather than auditory positional signals, can directly contribute to visual motion perception.

Methodology/Principal Findings

Static visual flashes were presented at retinal locations outside the fovea, together with lateral auditory motion produced by a virtual stereo noise source smoothly shifting in the horizontal plane. The flashes appeared to move with the auditory motion when their spatiotemporal position was in the middle of the auditory motion trajectory. Furthermore, the lateral auditory motion altered visual motion perception in a global motion display, in which localized motion signals from multiple visual stimuli are combined to produce a coherent visual motion percept.

Conclusions/Significance

These findings suggest that direct interactions exist between auditory and visual motion signals, and that there may be common neural substrates for auditory and visual motion processing.

8.

Background

Vision provides the most salient information with regard to stimulus motion, but audition can also provide important cues that affect visual motion perception. Here, we show that sounds containing no motion or positional cues can induce illusory visual motion perception for static visual objects.

Methodology/Principal Findings

Two circles placed side by side were presented in alternation, producing apparent motion perception, and each onset was accompanied by a tone burst of a specific and unique frequency. After exposure to this visual apparent motion with tones for a few minutes, the tones became drivers for illusory motion perception. When the flash onset was synchronized to tones of alternating frequencies, a circle blinking at a fixed location was perceived as moving laterally in the same direction as the previously exposed apparent motion. Furthermore, the effect lasted for at least a few days. The effect was well observed at the retinal position that had previously been exposed to apparent motion with tone bursts.

Conclusions/Significance

The present results indicate that a strong association between a sound sequence and visual motion is easily formed within a short period and that, after the association is formed, sounds are able to trigger visual motion perception for a static visual object.

9.

Background

In everyday life, signals of danger, such as aversive facial expressions, usually appear in the peripheral visual field. Although facial expression processing in central vision has been extensively studied, its counterpart in peripheral vision has received little attention.

Methodology/Principal Findings

Using behavioral measures, we explored the human ability to detect fear and disgust vs. neutral expressions and compared it to the ability to discriminate between genders at eccentricities up to 40°. Responses were faster for the detection of emotion compared to gender. Emotion was detected from fearful faces up to 40° of eccentricity.

Conclusions

Our results demonstrate the human ability to detect facial expressions presented in the far periphery, up to 40° of eccentricity. The increasing advantage of emotion over gender processing with increasing eccentricity might reflect a major involvement of the magnocellular visual pathway in facial expression processing. This advantage may suggest that emotion detection, relative to gender identification, is less affected by visual acuity and within-face crowding in the periphery. These results are consistent with specific and automatic processing of danger-related information, which may drive attention to those messages and allow for a fast behavioral reaction.

10.

Background

Early deafness leads to enhanced attention in the visual periphery. Yet whether this enhancement confers advantages in everyday life remains unknown, as deaf individuals have been shown to be more distracted by irrelevant information in the periphery than their hearing peers. Here, we show that, in a complex attentional task, deaf individuals exhibit a performance advantage.

Methodology/Principal Findings

We employed the Useful Field of View (UFOV) task, which requires central target identification concurrent with peripheral target localization in the presence of distractors – a divided, selective attention task. First, a comparison of deaf and hearing adults with or without sign language skills establishes that deafness, and not sign language use, drives the UFOV enhancement. Second, UFOV performance was enhanced in deaf children, but only after 11 years of age.

Conclusions/Significance

This work demonstrates that, following early auditory deprivation, visual attention resources toward the periphery are slowly augmented, eventually resulting in a clear behavioral advantage by pre-adolescence on a selective visual attention task.

11.
Dessing JC, Craig CM. PLoS ONE. 2010;5(10):e13161

Background

As bending free-kicks become the norm in modern-day soccer, the implications for goalkeepers have largely been ignored. Although it has been reported that poor sensitivity to visual acceleration makes it harder for expert goalkeepers to perceptually judge where curved free-kicks will cross the goal line, it is unknown how this affects the goalkeeper's actual movements.

Methodology/Principal Findings

Here, an in-depth analysis of goalkeepers' hand movements in immersive, interactive virtual reality shows that they do not fully account for spin-induced lateral ball acceleration. Hand movements were found to be biased in the direction of initial ball heading, and for curved free-kicks this resulted in biases in a direction opposite to those necessary to save the free-kick. These movement errors leave less time to cover a now greater distance to stop the ball entering the goal. These and other details of the interceptive behaviour are explained using a simple mathematical model in which the goalkeeper controls his movements online with respect to the ball's current heading direction. Furthermore, our results and model suggest how visual landmarks, such as the goalposts in this instance, may constrain the extent of the movement biases.
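A minimal version of such an online-control model (a toy sketch under assumed parameters, not the authors' fitted model) makes the predicted bias explicit: if the hand continuously pursues the point where the ball's current heading meets the goal line, spin-induced lateral acceleration leaves the hand lagging on the side of the initial heading.

```python
def simulate_save(duration=0.65, dt=0.01, gain=8.0, lateral_accel=8.0,
                  ball_speed=24.0, goal_distance=16.0):
    """First-order pursuit of a straight-line extrapolation of the
    ball's current velocity; the spin-induced lateral acceleration is
    never part of the prediction. All units are metres and seconds."""
    ball_y, ball_vy, hand, t = 0.0, 0.0, 0.0, 0.0
    history = []
    while t < duration:
        time_left = max((goal_distance - ball_speed * t) / ball_speed, 1e-3)
        predicted_y = ball_y + ball_vy * time_left   # ignores the curve
        hand += gain * (predicted_y - hand) * dt     # move toward prediction
        ball_vy += lateral_accel * dt                # spin curves the ball
        ball_y += ball_vy * dt
        history.append((round(t, 2), round(ball_y, 3), round(hand, 3)))
        t += dt
    return history

t_final, ball_final, hand_final = simulate_save()[-1]
print(ball_final - hand_final)   # positive lag: hand trails the curving ball
```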

Conclusions

While it has previously been shown that humans can internalize the effects of gravitational acceleration, these results show that it is much more difficult for goalkeepers to account for spin-induced visual acceleration, which varies from situation to situation. The limited sensitivity of the human visual system for detecting acceleration suggests that curved free-kicks are an important goal-scoring opportunity in the game of soccer.

12.

Background

Audition provides important cues with regard to stimulus motion, although vision may provide the most salient information. It has been reported that a sound of fixed intensity tends to be judged as decreasing in intensity after adaptation to looming visual stimuli, or as increasing in intensity after adaptation to receding visual stimuli. This audiovisual interaction in motion aftereffects indicates that there are multimodal contributions to motion perception at early levels of sensory processing. However, there has been no report that sounds can induce the perception of visual motion.

Methodology/Principal Findings

A visual stimulus blinking at a fixed location was perceived to be moving laterally when the flash onset was synchronized to an alternating left-right sound source. This illusory visual motion strengthened with increasing retinal eccentricity (2.5 deg to 20 deg) and occurred more frequently when the onsets of the audio and visual stimuli were synchronized.

Conclusions/Significance

We clearly demonstrated that the alternation of sound location induces illusory visual motion when vision cannot provide accurate spatial information. The present findings strongly suggest that the neural representations of auditory and visual motion processing can bias each other, which yields the best estimates of external events in a complementary manner.

13.

Background

In ecological situations, threatening stimuli often emerge from the peripheral visual field. Such aggressive messages must trigger rapid attention to the periphery to allow a fast and well-adapted motor reaction. Several lines of evidence converge on the hypothesis that peripherally presented danger can trigger a fast arousal network that is potentially independent of conscious awareness.

Methodology/Principal Findings

In the present MEG study, the spatio-temporal dynamics of the neural processing of danger-related stimuli were explored as a function of stimulus position in the visual field. Fearful and neutral faces were briefly presented in the central or peripheral visual field and were followed by target face stimuli. An event-related beamformer source analysis model was applied in three time windows following the first face presentation: 80 to 130 ms, 140 to 190 ms, and 210 to 260 ms. The frontal lobe and the medial part of the right temporal lobe, including the amygdala, responded to fear in the peripheral visual field at latencies as short as 80 ms. For central presentation, fearful faces evoked the classical neuronal activity along the occipito-temporal visual pathway between 140 and 190 ms.

Conclusions

Thus, the high spatio-temporal resolution of MEG revealed a fast response of a network involving medial temporal and frontal structures in the processing of fear-related stimuli occurring unconsciously in the peripheral visual field. Whereas centrally presented stimuli are precisely processed by the ventral occipito-temporal cortex, danger-related stimuli appearing in the peripheral visual field more effectively produce a fast, automatic alert response, possibly conveyed by subcortical structures.

14.
Kim J, Park S, Blake R. PLoS ONE. 2011;6(5):e19971

Background

Anomalous visual perception is a common feature of schizophrenia plausibly associated with impaired social cognition that, in turn, could affect social behavior. Past research suggests impairment in biological motion perception in schizophrenia. Behavioral and functional magnetic resonance imaging (fMRI) experiments were conducted to verify the existence of this impairment, to clarify its perceptual basis, and to identify accompanying neural concomitants of those deficits.

Methodology/Findings

In Experiment 1, we measured the ability to detect biological motion portrayed by point-light animations embedded within masking noise. Experiment 2 measured discrimination accuracy for pairs of point-light biological motion sequences differing in the degree of perturbation of the kinematics portrayed in those sequences. Experiment 3 measured BOLD signals using event-related fMRI during a biological motion categorization task. Compared to healthy individuals, schizophrenia patients performed significantly worse on both the detection (Experiment 1) and discrimination (Experiment 2) tasks. Consistent with the behavioral results, the fMRI study revealed that healthy individuals exhibited strong activation to biological motion, but not to scrambled motion, in the posterior portion of the superior temporal sulcus (STSp). Interestingly, strong STSp activation was also observed for scrambled or partially scrambled motion when the healthy participants perceived it as normal biological motion. On the other hand, STSp activation in schizophrenia patients was not selective for biological versus scrambled motion.

Conclusion

Schizophrenia is accompanied by difficulties discriminating biological from non-biological motion, and associated with those difficulties are altered patterns of neural responses within brain area STSp. The perceptual deficits exhibited by schizophrenia patients may be an exaggerated manifestation of neural events within STSp associated with perceptual errors made by healthy observers on these same tasks. The present findings fit within the context of theories of delusion involving perceptual and cognitive processes.

15.

Background

Research on biological motion perception has traditionally been restricted to the visual modality. Recent neurophysiological and behavioural evidence, however, supports the idea that actions are represented not merely visually but audiovisually. The goal of the present study was to test whether the perceived in-depth orientation of depth-ambiguous point-light walkers (plws) is affected by the presentation of looming or receding sounds synchronized with the footsteps.

Methodology/Principal Findings

In Experiment 1, orthographic frontal/back projections of plws were presented either without sound or with sounds whose intensity level was rising (looming), falling (receding), or stationary. Despite instructions to ignore the sounds and to report only the visually perceived in-depth orientation, plws accompanied by looming sounds were more often judged to be facing the viewer, whereas plws paired with receding sounds were more often judged to be facing away from the viewer. To test whether the effects observed in Experiment 1 act at a perceptual rather than a decisional level, in Experiment 2 observers perceptually compared orthographic plws, presented without sound or paired with either looming or receding sounds, to plws without sound but with perspective cues making them objectively either face towards or away from the viewer. Judging whether an orthographic plw or a plw with looming (receding) perspective cues looks more looming becomes harder (easier) when the orthographic plw is paired with looming sounds.

Conclusions/Significance

The present results suggest that looming and receding sounds alter judgements of the in-depth orientation of depth-ambiguous point-light walkers. While looming sounds are demonstrated to act at a perceptual level, making plws look more looming, it remains a challenge for future research to clarify at what level in the processing hierarchy receding sounds affect how observers judge the in-depth orientation of plws.

16.
Jolij J, Meurs M. PLoS ONE. 2011;6(4):e18861

Background

Visual perception is not a passive process: in order to efficiently process visual input, the brain actively uses previous knowledge (e.g., memory) and expectations about what the world should look like. However, perception is not only influenced by previous knowledge. The perception of emotional stimuli in particular is influenced by the emotional state of the observer. In other words, how we perceive the world depends not only on what we know of the world, but also on how we feel. In this study, we further investigated the relation between mood and perception.

Methods and Findings

Observers performed a difficult detection task in which they had to detect schematic happy and sad faces embedded in noise. Mood was manipulated by means of music. We found that observers were more accurate in detecting faces congruent with their mood, corroborating earlier research. However, in trials in which no actual face was presented, observers made a significant number of false alarms. The content of these false alarms, or illusory percepts, was strongly influenced by the observers' mood.

Conclusions

As illusory percepts are believed to reflect the content of internal representations that are employed by the brain during top-down processing of visual input, we conclude that top-down modulation of visual processing is not purely predictive in nature: mood, in this case manipulated by music, may also directly alter the way we perceive the world.

17.

Background

It is known that subjective contours are perceived even when a figure involves motion. However, whether this extends to the perception of rigidity or deformation of an illusory surface remains unknown. In particular, since most visual stimuli used in previous studies were designed to induce illusory rigid objects, the potential perception of material properties such as rigidity or elasticity in these illusory surfaces has not been examined. Here, we elucidate whether the magnitude of the phase difference in oscillation influences the visual impression of an object's elasticity (Experiment 1) and whether such elasticity percepts are accompanied by changes in the shape of the subjective contours, which can be assumed to be strongly correlated with the perception of rigidity (Experiment 2).

Methodology/Principal Findings

In Experiment 1, the phase differences in the oscillating motion of the inducers were controlled to investigate whether they influenced the visual impression of an illusory object's elasticity. The results demonstrated that the impression of the elasticity of an illusory surface with subjective contours varied systematically with the degree of phase difference. In Experiment 2, we examined whether the subjective contours of a perceived object appeared linear or curved, using multi-dimensional scaling analysis. The results indicated that the contours of a moving illusory object were perceived as more curved than linear in all phase-difference conditions.
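The multi-dimensional scaling step can be sketched as follows: pairwise dissimilarity judgements between contour stimuli are embedded in a low-dimensional space, and the recovered configuration is compared against reference stimuli with objectively linear or curved contours. The matrix below is hypothetical; only the analysis pattern is illustrated.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical mean dissimilarity judgements between four contour
# stimuli (0 = judged identical); rows/columns index the stimuli.
dissimilarity = np.array([[0.0, 1.2, 3.1, 3.4],
                          [1.2, 0.0, 2.8, 3.0],
                          [3.1, 2.8, 0.0, 0.9],
                          [3.4, 3.0, 0.9, 0.0]])

# Embed the stimuli in 2-D so that inter-point distances approximate
# the judged dissimilarities.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)
print(coords)
```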

Conclusions/Significance

These findings suggest that the phase difference in an object's motion is a significant factor in the material perception of motion-related elasticity.

18.

Background

Incidental CT findings may provide an opportunity for early detection of chronic obstructive pulmonary disease (COPD), which may prove important in a CT-based lung cancer screening setting. We aimed to determine the diagnostic performance of human observers in visually evaluating the presence of COPD on CT images, compared with automated evaluation using quantitative CT measures.

Methods

This study was approved by the Dutch Ministry of Health and the institutional review board. All participants provided written informed consent. We studied 266 heavy smokers enrolled in a lung cancer screening trial. All subjects underwent volumetric inspiratory and expiratory chest computed tomography (CT). Pulmonary function testing was used as the reference standard for COPD. We evaluated the diagnostic performance of eight observers and one automated model based on quantitative CT measures.

Results

The prevalence of COPD in the study population was 44% (118/266), of whom 62% (73/118) had mild disease. Diagnostic accuracy was 74.1% for the automated evaluation and ranged between 58.3% and 74.3% for visual evaluation of the CT images. The positive predictive value was 74.3% for the automated evaluation and ranged between 52.9% and 74.7% for visual evaluation. Interobserver variation was substantial, even within the subgroup of experienced observers. Agreement between observers yielded kappa values between 0.28 and 0.68, regardless of the level of expertise. Agreement between the observers and the automated CT model yielded kappa values of 0.12–0.35.
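The reported metrics are straightforward to compute from binary labels; a minimal sketch, assuming 1 codes COPD according to the pulmonary-function reference standard (the example arrays are made up):

```python
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def positive_predictive_value(y_true, y_pred):
    true_pos = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    return true_pos / sum(p == 1 for p in y_pred)

def cohens_kappa(a, b):
    """Chance-corrected agreement between two binary raters."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    p_exp = (sum(a) / n) * (sum(b) / n) + (1 - sum(a) / n) * (1 - sum(b) / n)
    return (p_obs - p_exp) / (1 - p_exp)

reference = [1, 1, 0, 0, 1, 0]   # hypothetical PFT-based labels
observer  = [1, 0, 0, 1, 1, 0]   # hypothetical visual CT reading
print(accuracy(reference, observer),
      positive_predictive_value(reference, observer),
      cohens_kappa(reference, observer))
```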

Conclusions

Visual evaluation of COPD presence on chest CT images provides at best modest accuracy and is associated with substantial interobserver variation. Automated evaluation of COPD subjects using quantitative CT measures appears superior to visual evaluation by human observers.

19.

Background

Face processing, amongst many basic visual skills, is thought to be invariant across all humans. From as early as 1965, studies of eye movements have consistently revealed a systematic triangular sequence of fixations over the eyes and the mouth, suggesting that faces elicit a universal, biologically-determined information extraction pattern.

Methodology/Principal Findings

Here we monitored the eye movements of Western Caucasian and East Asian observers while they learned, recognized, and categorized by race Western Caucasian and East Asian faces. Western Caucasian observers reproduced a scattered triangular pattern of fixations for faces of both races and across tasks. Contrary to intuition, East Asian observers focused more on the central region of the face.

Conclusions/Significance

These results demonstrate that face processing can no longer be considered as arising from a universal series of perceptual events. The strategy employed to extract visual information from faces differs across cultures.

20.

Background

Individuals' faces communicate a great deal of information about them. Although some of this information tends to be perceptually obvious (such as race and sex), much of it is perceptually ambiguous, without clear or obvious visual cues.

Methodology/Principal Findings

Here we found that individuals' political affiliations could be accurately discerned from their faces. In Study 1, perceivers were able to accurately distinguish whether U.S. Senate candidates were Democrats or Republicans based on photos of their faces. Study 2 showed that these effects extended to Democrat and Republican college students, based on their senior yearbook photos. Study 3 then showed that these judgments were related to differences in perceived traits among the Democrat and Republican faces. Republicans were perceived as more powerful than Democrats. Moreover, as individual targets were perceived to be more powerful, they were more likely to be perceived as Republicans by others. Similarly, as individual targets were perceived to be warmer, they were more likely to be perceived as Democrats.

Conclusions/Significance

These data suggest that perceivers' beliefs about who is a Democrat and who is a Republican may be based on perceptions of traits stereotypically associated with the two political parties and that, indeed, the guidance of these stereotypes may lead to categorizations of others' political affiliations at rates significantly more accurate than chance guessing.
