Similar documents
1.
When a static textured background is covered and uncovered by a moving bar of the same mean luminance, we can clearly see the motion of the bar. Texture-defined motion provides an example of a naturally occurring second-order motion. Second-order motion sequences defeat standard spatio-temporal energy models of motion perception. It has been proposed that second-order stimuli are analysed by separate systems, operating in parallel with luminance-defined motion processing, which incorporate identifiable pre-processing stages that make second-order patterns visible to standard techniques. However, the proposal of multiple paths to motion analysis remains controversial. Here we describe the behaviour of a model that recovers both luminance-defined and an important class of texture-defined motion. The model also accounts for the induced motion that is seen in some texture-defined motion sequences. We measured the perceived direction and speed of both the contrast envelope and induced motion in the case of a contrast modulation of static noise textures. Significantly, the model predicts the perceived speed of the induced motion seen at second-order texture boundaries. The induced motion investigated here appears distinct from classical induced effects resulting from motion contrast or the movement of a reference frame.
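Why second-order stimuli defeat purely linear energy models can be illustrated with a minimal sketch (our own illustration, not the authors' model): a contrast envelope on a zero-mean carrier is invisible to a linear spatial filter, but becomes visible once a rectifying pre-processing stage is added.

```python
import numpy as np

rng = np.random.default_rng(0)

# Static binary noise carrier: zero mean luminance everywhere.
carrier = rng.choice([-1.0, 1.0], size=4096)

# Second-order stimulus: a sinusoidal contrast envelope modulates the
# carrier without changing the mean luminance.
x = np.arange(carrier.size)
envelope = 1.0 + 0.8 * np.sin(2 * np.pi * x / 512)
stimulus = envelope * carrier

# Coarse spatial averaging stands in for a linear (first-order) filter.
kernel = np.ones(64) / 64
linear_out = np.convolve(stimulus, kernel, mode="valid")

# Rectification before filtering exposes the envelope to the same filter.
rect_out = np.convolve(np.abs(stimulus), kernel, mode="valid")

# Align the envelope with the centres of the 'valid' filter outputs.
env_aligned = envelope[32:32 + linear_out.size]

corr_lin = np.corrcoef(linear_out, env_aligned)[0, 1]
corr_rect = np.corrcoef(rect_out, env_aligned)[0, 1]
print(f"linear: {corr_lin:.2f}, rectified: {corr_rect:.2f}")
```

The linear path's output is essentially uncorrelated with the envelope, while the rectified path tracks it almost perfectly; in a full model the same rectification stage would be followed by spatio-temporal filtering to recover envelope motion.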

2.
According to the complexity-specific hypothesis, the efficacy with which individuals with autism spectrum disorder (ASD) process visual information varies according to the extensiveness of the neural network required to process stimuli. Specifically, adults with ASD are less sensitive to texture-defined (or second-order) information, which necessitates the involvement of several cortical visual areas. Conversely, the sensitivity to simple, luminance-defined (or first-order) information, which mainly relies on primary visual cortex (V1) activity, has been found to be either superior (static material) or intact (dynamic material) in ASD. It is currently unknown if these autistic perceptual alterations are present in childhood. In the present study, behavioural (threshold) and electrophysiological measures were obtained for static luminance- and texture-defined gratings presented to school-aged children with ASD and compared to those of typically developing children. Our behavioural and electrophysiological (P140) results indicate that luminance processing is likely unremarkable in autistic children. With respect to texture processing, there was no significant threshold difference between groups. However, unlike typical children, autistic children did not show reliable enhancements of brain activity (N230 and P340) in response to texture-defined gratings relative to luminance-defined gratings. This suggests reduced efficiency of neuro-integrative mechanisms operating at a perceptual level in autism. These results are in line with the idea that visual atypicalities mediated by intermediate-scale neural networks emerge before or during the school-age period in autism.

3.
The successful detection of biological motion can have important consequences for survival. Previous studies have demonstrated the ease and speed with which observers can extract a wide range of information from impoverished dynamic displays in which only an actor's joints are visible. Although it has often been suggested that such biological motion processing can be accomplished relatively automatically, few studies have directly tested this assumption by using behavioral methods. Here we used a flanker paradigm to assess how peripheral "to-be-ignored" walkers affect the processing of a central target walker. Our results suggest that task-irrelevant dynamic figures cannot be ignored and are processed to a level where they influence behavior. These findings provide the first direct evidence that complex dynamic patterns can be processed incidentally, a finding that may have important implications for cognitive, neurophysiological, and computational models of biological motion processing.

4.
The information processing mechanism of the visual nervous system is an unresolved scientific problem that has long puzzled neuroscientists. The amount of visual information is significantly degraded by the time it reaches V1 after entering the retina; nevertheless, this does not affect our visual perception of the outside world. The mechanisms underlying this degradation of visual information from retina to V1 are still unclear. To address this, the current study used the experimental data summarized by Marcus E. Raichle to investigate the neural mechanisms underlying the degradation of visual data in the topological mapping from retina to V1, starting from a photoreceptor model. The results showed that edge features of the visual input are extracted by a convolution operation, reflecting the function of synaptic plasticity, as visual signals are processed hierarchically from low to high levels. This degradation of visual information acts as a compensatory mechanism embodying the principles of energy minimization and maximal transmission efficiency in brain activity, consistent with Raichle's data. Our results further the understanding of the information processing mechanism of the visual nervous system.
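The convolution-based edge extraction described above can be sketched in a few lines (a toy illustration with an assumed Laplacian-style kernel, not the study's actual model): convolving an image with a centre-surround kernel leaves responses only at luminance boundaries, so far fewer values need to be carried forward.

```python
import numpy as np

# Toy "retinal image": a uniform bright square on a dark background.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0

# Discrete Laplacian kernel: a crude stand-in for centre-surround
# receptive fields whose outputs converge on V1 edge detectors.
kernel = np.array([[0.0,  1.0, 0.0],
                   [1.0, -4.0, 1.0],
                   [0.0,  1.0, 0.0]])

# Plain 2D convolution (no padding), written out explicitly.
h, w = img.shape
edges = np.zeros((h - 2, w - 2))
for i in range(h - 2):
    for j in range(w - 2):
        edges[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)

# Only pixels at the square's boundary respond: the interior and the
# background produce zeros, a large reduction in the data passed on.
active = np.count_nonzero(edges)
print(active, "of", edges.size, "responses are non-zero")
```

Only a thin band of responses around the square's outline survives, which is one way to picture how a large retinal data stream can be compressed on its way to V1 without losing the structure that perception needs.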

5.
Our visual system segments images into objects and background. Figure-ground segregation relies on the detection of feature discontinuities that signal boundaries between the figures and the background and on a complementary region-filling process that groups together image regions with similar features. The neuronal mechanisms for these processes are not well understood and it is unknown how they depend on visual attention. We measured neuronal activity in V1 and V4 in a task where monkeys either made an eye movement to texture-defined figures or ignored them. V1 activity predicted the timing and the direction of the saccade if the figures were task relevant. We found that boundary detection is an early process that depends little on attention, whereas region filling occurs later and is facilitated by visual attention, which acts in an object-based manner. Our findings are explained by a model with local, bottom-up computations for boundary detection and feedback processing for region filling.

6.
Presenting the eyes with spatially mismatched images causes a phenomenon known as binocular rivalry: a fluctuation of awareness whereby each eye's image alternately determines perception. Binocular rivalry is used to study interocular conflict resolution and the formation of conscious awareness from retinal images. Although the spatial determinants of rivalry have been well characterized, the temporal determinants are still largely unstudied. We confirm a previous observation that conflicting images do not need to be presented continuously or simultaneously to elicit binocular rivalry. This process has a temporal limit of about 350 ms, which is an order of magnitude larger than the visual system's temporal resolution. We characterize this temporal limit of binocular rivalry by showing that it is independent of low-level information such as interocular timing differences, contrast reversals, stimulus energy, and eye-of-origin information. This suggests that the temporal factors maintaining rivalry relate more to higher-level form information than to low-level visual information. Systematically comparing the roles of form and motion, the processing of which may be assigned to the ventral and dorsal visual pathways, respectively, reveals that this temporal limit is determined by form conflict rather than motion conflict. Together, our findings demonstrate that binocular conflict resolution depends on temporally coarse form-based processing, possibly originating in the ventral visual pathway.

7.
Sound symbolism is the systematic and non-arbitrary link between word and meaning. Although a number of behavioral studies demonstrate that both children and adults are universally sensitive to sound symbolism in mimetic words, the neural mechanisms underlying this phenomenon have not yet been extensively investigated. The present study used functional magnetic resonance imaging to investigate how Japanese mimetic words are processed in the brain. In Experiment 1, we compared processing for motion mimetic words with that for non-sound symbolic motion verbs and adverbs. Mimetic words uniquely activated the right posterior superior temporal sulcus (STS). In Experiment 2, we further examined the generalizability of the findings from Experiment 1 by testing another domain: shape mimetics. Our results show that the right posterior STS was active when subjects processed both motion and shape mimetic words, thus suggesting that this area may be the primary structure for processing sound symbolism. Increased activity in the right posterior STS may also reflect how sound symbolic words function as both linguistic and non-linguistic iconic symbols.  相似文献   

8.
Krauzlis RJ, Hafed ZM. Neuron, 2007, 54(6): 852-854
Our understanding of how sensory information is transformed into motor commands has grown increasingly sophisticated. In this issue of Neuron, Wilmer and Nakayama use a novel analysis to show that the initial changes in smooth-pursuit eye speed are driven by low-level motion signals, whereas the later eye speed is determined by high-level signals.

9.
The proposal that motion is processed by multiple mechanisms in the human brain has received little anatomical support so far. Here, we compared higher- and lower-level motion processing in the human brain using functional magnetic resonance imaging. We observed activation of an inferior parietal lobule (IPL) motion region by isoluminant red-green gratings when saliency of one color was increased and by long-range apparent motion at 7 Hz but not 2 Hz. This higher order motion region represents the entire visual field, while traditional motion regions predominantly process contralateral motion. Our results suggest that there are two motion-processing systems in the human brain: a contralateral lower-level luminance-based system, extending from hMT/V5+ into dorsal IPS and STS, and a bilateral higher-level saliency-based system in IPL.

10.
Our understanding of how the visual system processes motion transparency, the phenomenon by which multiple directions of motion are perceived to coexist in the same spatial region, has grown considerably in the past decade. There is compelling evidence that the process is driven by global-motion mechanisms. Consequently, although transparently moving surfaces are readily segmented over an extended space, the visual system cannot separate two motion signals that coexist in the same local region. A related issue is whether the visual system can detect transparently moving surfaces simultaneously or whether the component signals encounter a serial 'bottleneck' during their processing. Our initial results show that, at sufficiently short stimulus durations, observers cannot accurately detect two superimposed directions; yet they have no difficulty in detecting one pattern direction in noise, supporting the serial-bottleneck scenario. However, in a second experiment, the difference in performance between the two tasks disappears when the component patterns are segregated. This discrepancy between the processing of transparent and non-overlapping patterns may be a consequence of suppressed activity of global-motion mechanisms when the transparent surfaces are presented in the same depth plane. To test this explanation, we repeated our initial experiment while separating the motion components in depth. The marked improvement in performance leads us to conclude that transparent motion signals are represented simultaneously.

11.
Auditory information is processed in a fine-to-crude hierarchical scheme, from low-level acoustic information to high-level abstract representations, such as phonological labels. We now ask whether fine acoustic information, which is not retained at high levels, can still be used to extract speech from noise. Previous theories suggested either full availability of low-level information or availability that is limited by task difficulty. We propose a third alternative, based on the Reverse Hierarchy Theory (RHT), originally derived to describe the relations between the processing hierarchy and visual perception. RHT asserts that only the higher levels of the hierarchy are immediately available for perception. Direct access to low-level information requires specific conditions, and can be achieved only at the cost of concurrent comprehension. We tested the predictions of these three views in a series of experiments in which we measured the benefits from utilizing low-level binaural information for speech perception, and compared it to that predicted from a model of the early auditory system. Only auditory RHT could account for the full pattern of the results, suggesting that similar defaults and tradeoffs underlie the relations between hierarchical processing and perception in the visual and auditory modalities.

13.
Castet E, Zanker J. Spatial Vision, 1999, 12(3): 287-307
When a sinewave grating moves within a cross-shaped aperture, a strongly multi-stable phenomenon is perceived. The percept switches between the coherence of an extended surface moving in a single direction and the segregation of two patterned strips sliding across each other in directions parallel to the branches of the cross. We studied how the balance between these two percepts is affected by the length of the arms and by the shape of their ends. We report here that observers perceive the segregation into two surfaces more often when the branches of the cross are extended, and when the small sides of the arms are oriented parallel to the grating. Two kinds of early motion signals interact in the crossed barber-pole stimulus: (a) the signals extracted in the middle of the bars are ambiguous with regard to their direction, and would usually be interpreted as motion normal to the grating orientation; (b) the signals from regions where the grating is intersected by the borders of the aperture convey motion signals in the direction of the border. Our results show that the global appearance of our display can be dramatically influenced by the reliability of motion signals located in small regions that may be separated by large distances. To explain this long-range effect, we tentatively propose the existence of a representation level situated between the extraction of low-level local signals and the final global percept. The postulated processing level is concerned with segmenting the entire image into surfaces that are likely to belong to the same object, even if they are not contiguous in space. This hypothetical mechanism involves the construction of coarse-scale 'patches' from the local motion signal distributions, each carrying a single velocity associated with a certain degree of reliability. Our experiments indicate that the probability of grouping together similar patches depends on their respective reliabilities.

14.
Traditional models of insect vision have assumed that insects are only capable of low-level analysis of local cues and are incapable of global, holistic perception. However, recent studies on honeybee (Apis mellifera) vision have refuted this view by showing that this insect also processes complex visual information by using spatial configurations or relational rules. In the light of these findings, we asked whether bees prioritize global configurations or local cues by setting these two levels of image analysis in competition. We trained individual free-flying honeybees to discriminate hierarchical visual stimuli within a Y-maze and tested bees with novel stimuli in which local and/or global cues were manipulated. We demonstrate that even when local information is accessible, bees prefer global information, thus relying mainly on the object's spatial configuration rather than on elemental, local information. This preference can be reversed if bees are pre-trained to discriminate isolated local cues. In this case, bees prefer the hierarchical stimuli with the local elements previously primed even if they build an incorrect global configuration. Pre-training with local cues induces a generic attentional bias towards any local elements as local information is prioritized in the test, even if the local cues used in the test are different from the pre-trained ones. Our results thus underline the plasticity of visual processing in insects and provide new insights for the comparative analysis of visual recognition in humans and animals.

15.
Here, we describe a motion stimulus in which the quality of rotation is fractal. This makes its motion unavailable to the translation-based motion analysis known to underlie much of our motion perception. In contrast, normal rotation can be extracted through the aggregation of the outputs of translational mechanisms. Neural adaptation of these translation-based motion mechanisms is thought to drive the motion after-effect, a phenomenon in which prolonged viewing of motion in one direction leads to a percept of motion in the opposite direction. We measured the motion after-effects induced in static and moving stimuli by fractal rotation. The after-effects found were an order of magnitude smaller than those elicited by normal rotation. Our findings suggest that the analysis of fractal rotation involves different neural processes than those for standard translational motion. Given that the percept of motion elicited by fractal rotation is a clear example of motion derived from form analysis, we propose that the extraction of fractal rotation may reflect the operation of a general mechanism for inferring motion from changes in form.

16.
Vagaries of visual perception in autism
Dakin S, Frith U. Neuron, 2005, 48(3): 497-507
Three classes of perceptual phenomena have repeatedly been associated with autism spectrum disorder (ASD): superior processing of fine detail (local structure), either inferior processing of overall/global structure or an ability to ignore disruptive global/contextual information, and impaired motion perception. This review evaluates the quality of the evidence bearing on these three phenomena. We argue that while superior local processing has been robustly demonstrated, conclusions about global processing cannot be definitively drawn from the experiments to date, which have generally not precluded observers using more local cues. Perception of moving stimuli is impaired in ASD, but explanations in terms of magnocellular/dorsal deficits do not appear to be sufficient. We suggest that abnormalities in the superior temporal sulcus (STS) may provide a neural basis for the range of motion-processing deficits observed in ASD, including biological motion perception. Such an explanation may also provide a link between perceptual abnormalities and specific deficits in social cognition associated with autism.

17.
It is shown that existing processing schemes of 3D motion perception such as interocular velocity difference, changing disparity over time, as well as joint encoding of motion and disparity, do not offer a general solution to the inverse optics problem of local binocular 3D motion. Instead we suggest that local velocity constraints in combination with binocular disparity and other depth cues provide a more flexible framework for the solution of the inverse problem. In the context of the aperture problem we derive predictions from two plausible default strategies: (1) the vector normal prefers slow motion in 3D whereas (2) the cyclopean average is based on slow motion in 2D. Predicting perceived motion directions for ambiguous line motion provides an opportunity to distinguish between these strategies of 3D motion processing. Our theoretical results suggest that velocity constraints and disparity from feature tracking are needed to solve the inverse problem of 3D motion perception. It seems plausible that motion and disparity input is processed in parallel and integrated late in the visual processing hierarchy.
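The "prefer slow motion" default can be made concrete with the textbook 2D aperture problem (a simplified illustration, not the paper's 3D formulation): a line's local motion constrains only the velocity component along its normal, and of all velocities satisfying that constraint, the vector-normal solution is the slowest.

```python
import numpy as np

# Aperture problem: a moving line constrains only v . n = s, where n is
# the line's unit normal and s is the measured normal speed.
n = np.array([0.6, 0.8])   # unit normal of the line (|n| = 1)
s = 2.0                    # normal speed seen through the aperture

# Vector-normal solution: the minimum-norm velocity on the constraint line.
v_normal = s * n

# Any other consistent velocity adds a component parallel to the line
# (orthogonal to n) and is therefore strictly faster.
t = np.array([-0.8, 0.6])  # unit vector along the line
v_other = v_normal + 3.0 * t

print(v_normal @ n, v_other @ n)                        # both equal s
print(np.linalg.norm(v_normal), np.linalg.norm(v_other))
```

Every velocity on the constraint line is a valid interpretation of the local measurement; picking `v_normal` is exactly the slow-motion prior, and the paper's question is how this default generalizes when two eyes each contribute such a constraint in 3D.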

18.
Attentional selection plays a critical role in conscious perception. When attention is diverted, even salient stimuli fail to reach visual awareness. Attention can be voluntarily directed to a spatial location or a visual feature for facilitating the processing of information relevant to current goals. In everyday situations, attention and awareness are tightly coupled. This has led some to suggest that attention and awareness might be based on a common neural foundation, whereas others argue that they are mediated by distinct mechanisms. A body of evidence shows that visual stimuli can be processed at multiple stages of the visual-processing streams without evoking visual awareness. To illuminate the relationship between visual attention and conscious perception, we investigated whether top-down attention can target and modulate the neural representations of unconsciously processed visual stimuli. Our experiments show that spatial attention can target only consciously perceived stimuli, whereas feature-based attention can modulate the processing of invisible stimuli. The attentional modulation of unconscious signals implies that attention and awareness can be dissociated, challenging a simplistic view of the boundary between conscious and unconscious visual processing.

19.
Seitz AR, Kim R, Shams L. Current Biology, 2006, 16(14): 1422-1427
Numerous studies show that practice can result in performance improvements on low-level visual perceptual tasks [1-5]. However, such learning is characteristically difficult and slow, requiring many days of training [6-8]. Here, we show that a multisensory audiovisual training procedure facilitates visual learning and results in significantly faster learning than unisensory visual training. We trained one group of subjects with an audiovisual motion-detection task and a second group with a visual motion-detection task, and compared performance on trials containing only visual signals across ten days of training. Whereas observers in both groups showed improvements of visual sensitivity with training, subjects trained with multisensory stimuli showed significantly more learning both within and across training sessions. These benefits of multisensory training are particularly surprising given that the learning of visual motion stimuli is generally thought to be mediated by low-level visual brain areas [6, 9, 10]. Although crossmodal interactions are ubiquitous in human perceptual processing [11-13], the contribution of crossmodal information to perceptual learning has not been studied previously. Our results show that multisensory interactions can be exploited to yield more efficient learning of sensory information and suggest that multisensory training programs would be most effective for the acquisition of new skills.

20.
Humans are remarkably adept at recognizing objects across a wide range of views. A notable exception to this general rule is that turning a face upside down makes it particularly difficult to recognize. This striking effect has prompted speculation that inversion qualitatively changes the way faces are processed. Researchers commonly assume that configural cues strongly influence the recognition of upright, but not inverted, faces. Indeed, the assumption is so well accepted that the inversion effect itself has been taken as a hallmark of qualitative processing differences. Here, we took a novel approach to understand the inversion effect. We used response classification to obtain a direct view of the perceptual strategies underlying face discrimination and to determine whether orientation effects can be explained by differential contributions of nonlinear processes. Inversion significantly impaired performance in our face discrimination task. However, surprisingly, observers utilized similar, local regions of faces for discrimination in both upright and inverted face conditions, and the relative contributions of nonlinear mechanisms to performance were similar across orientations. Our results suggest that upright and inverted face processing differ quantitatively, not qualitatively; information is extracted more efficiently from upright faces, perhaps as a by-product of orientation-dependent expertise.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号