Similar articles
20 similar articles found (search time: 0 ms)
1.
2.
Recent developments have led to a greater insight into the complex processes of perception of visual motion. A better understanding of the neuronal circuitry involved and advances in electrophysiological techniques have allowed researchers to alter the perception of an animal with a stimulating electrode. In addition, studies have further elucidated the processes by which signals are combined and compared, allowing a greater understanding of the effects of selective brain damage.

3.
4.
5.
T Haarmeier  F Bunjes  A Lindner  E Berret  P Thier 《Neuron》2001,32(3):527-535
We usually perceive a stationary, stable world and we are able to correctly estimate the direction of heading from optic flow despite coherent visual motion induced by eye movements. This astonishing example of perceptual invariance results from a comparison of visual information with internal reference signals predicting the visual consequences of an eye movement. Here we demonstrate that the reference signal predicting the consequences of smooth-pursuit eye movements is continuously calibrated on the basis of direction-selective interactions between the pursuit motor command and the rotational flow induced by the eye movement, thereby minimizing imperfections of the reference signal and guaranteeing an ecologically optimal interpretation of visual motion.
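The comparison the abstract describes can be caricatured in a few lines of Python. This is an illustrative sketch only, not the authors' model: the gain parameter, learning rate, and update rule are assumptions, chosen to show how a reference signal predicting the visual consequences of pursuit could be calibrated until a stationary world is perceived as stationary.

```python
def perceived_motion(retinal_motion, eye_velocity, reference_gain):
    """Head-centred motion estimate: retinal motion compared against an
    internal reference signal predicting the visual consequences of the
    eye movement (here, reference_gain * eye_velocity)."""
    return retinal_motion + reference_gain * eye_velocity

def calibrate(reference_gain, eye_velocity, retinal_motion, lr=0.5):
    """Nudge the gain so that a stationary background yields zero perceived
    motion, minimizing imperfections of the reference signal."""
    residual = perceived_motion(retinal_motion, eye_velocity, reference_gain)
    return reference_gain - lr * residual / eye_velocity

# Pursuit at 10 deg/s sweeps a stationary scene across the retina at -10 deg/s.
gain = 0.8  # imperfect reference signal -> illusory motion of the stationary world
for _ in range(30):
    gain = calibrate(gain, eye_velocity=10.0, retinal_motion=-10.0)
# gain converges toward 1.0, so the stationary scene is again perceived as still
```

With the assumed update rule, the residual perceived motion of the stationary background shrinks geometrically, mirroring the continuous calibration the paper reports.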

6.
Stationary objects appear to move in the opposite direction to a pursuit eye movement (Filehne illusion) and moving objects appear slower when pursued (Aubert-Fleischl phenomenon). Both illusions imply that extra-retinal, eye-velocity signals lead to lower estimates of speed than corresponding retinal motion signals. Intriguingly, the velocity (i.e. speed and direction) of the Filehne illusion depends on the age of the observer, especially for brief display durations (Wertheim and Bekkering, 1992). This suggests relative signal size changes as the visual system matures. To test the signal-size hypothesis, we compared the Filehne illusion and Aubert-Fleischl phenomenon in young and old observers using short and long display durations. The trends in the Filehne data were similar to those reported by Wertheim and Bekkering. However, we found no evidence for an effect of age or duration in the Aubert-Fleischl phenomenon. The differences between the two illusions could not be reconciled on the basis of actual eye movements made. The findings suggest a more complicated explanation of the combined influence of age and duration on head-centred motion perception than that described by the signal-size hypothesis.

7.
8.
9.
10.
We sought to determine the extent to which red-green, colour-opponent mechanisms in the human visual system play a role in the perception of drifting luminance-modulated targets. Contrast sensitivity for the directional discrimination of drifting luminance-modulated (yellow-black) test sinusoids was measured following adaptation to isoluminant red-green sinusoids drifting in either the same or opposite direction. When the test and adapt stimuli drifted in the same direction, large sensitivity losses were evident at all test temporal frequencies employed (1-16 Hz). The magnitude of the loss was independent of temporal frequency. When adapt and test stimuli drifted in opposing directions, large sensitivity losses were evident at lower temporal frequencies (1-4 Hz) and declined with increasing temporal frequency. Control studies showed that this temporal-frequency-dependent effect could not reflect the activity of achromatic units. Our results provide evidence that chromatic mechanisms contribute to the perception of luminance-modulated motion targets drifting at speeds of up to at least 32° s⁻¹. We argue that such mechanisms most probably lie within a parvocellular-dominated cortical visual pathway, sensitive to both chromatic and luminance modulation, but only weakly selective for the direction of stimulus motion.

11.
How do visual form and motion processes cooperate to compute object motion when each process separately is insufficient? Consider, for example, a deer moving behind a bush. Here the partially occluded fragments of motion signals available to an observer must be coherently grouped into the motion of a single object. A 3D FORMOTION model comprises five important functional interactions involving the brain's form and motion systems that address such situations. Because the model's stages are analogous to areas of the primate visual system, we refer to the stages by corresponding anatomical names. In one of these functional interactions, 3D boundary representations, in which figures are separated from their backgrounds, are formed in cortical area V2. These depth-selective V2 boundaries select motion signals at the appropriate depths in MT via V2-to-MT signals. In another, motion signals in MT disambiguate locally incomplete or ambiguous boundary signals in V2 via MT-to-V1-to-V2 feedback. The third functional property concerns resolution of the aperture problem along straight moving contours by propagating the influence of unambiguous motion signals generated at contour terminators or corners. Here, sparse 'feature tracking signals' from, for example, line ends are amplified to overwhelm numerically superior ambiguous motion signals along line segment interiors. In the fourth, a spatially anisotropic motion grouping process takes place across perceptual space via MT-MST feedback to integrate veridical feature-tracking and ambiguous motion signals to determine a global object motion percept. The fifth property uses the MT-MST feedback loop to convey an attentional priming signal from higher brain areas back to V1 and V2. The model's use of mechanisms such as divisive normalization, endstopping, cross-orientation inhibition, and long-range cooperation is described. 
Simulated data include: the degree of motion coherence of rotating shapes observed through apertures, the coherent vs. element motion percepts separated in depth during the chopsticks illusion, and the rigid vs. nonrigid appearance of rotating ellipses.
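The claim that sparse feature-tracking signals are "amplified to overwhelm numerically superior ambiguous motion signals" can be illustrated with a toy competition loop. This is a sketch under assumed values, not the FORMOTION model itself: repeated self-excitation followed by divisive normalization (one of the mechanisms the model names) lets one slightly stronger direction channel win over many weaker ones.

```python
import numpy as np

def normalize(r, sigma=0.01):
    """Divisive normalization: each response is scaled by the pooled activity."""
    return r / (sigma + r.sum())

# Eight direction channels: seven ambiguous signals of equal strength along a
# contour interior, plus one slightly stronger feature-tracking signal at a
# line end (index 2). Values are illustrative.
responses = np.ones(8)
responses[2] += 1.5

for _ in range(10):
    responses = normalize(responses ** 2)  # self-excitation + competition

# The feature-tracking direction now dominates the population response.
```

After a handful of iterations the sparse signal captures nearly all of the normalized activity, which is the qualitative behaviour the aperture-problem resolution requires.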

12.
《Current biology : CB》2022,32(16):3529-3544.e2

13.
14.
Perceptual aftereffects following adaptation to simple stimulus attributes (e.g., motion, color) have been studied for hundreds of years. A striking recent discovery was that adaptation also elicits contrastive aftereffects in visual perception of complex stimuli and faces [1-6]. Here, we show for the first time that adaptation to nonlinguistic information in voices elicits systematic auditory aftereffects. Prior adaptation to male voices causes a voice to be perceived as more female (and vice versa), and these auditory aftereffects were measurable even minutes after adaptation. By contrast, crossmodal adaptation effects were absent, both when male or female first names and when silently articulating male or female faces were used as adaptors. When sinusoidal tones (with frequencies matched to male and female voice fundamental frequencies) were used as adaptors, no aftereffects on voice perception were observed. This excludes explanations for the voice aftereffect in terms of both pitch adaptation and postperceptual adaptation to gender concepts and suggests that contrastive voice-coding mechanisms may routinely influence voice perception. The role of adaptation in calibrating properties of high-level voice representations indicates that adaptation is not confined to vision but is a ubiquitous mechanism in the perception of nonlinguistic social information from both faces and voices.

15.
The extraction of the direction of motion from time-varying retinal images is one of the most basic tasks any visual system is confronted with. However, retinal images are severely corrupted by photon noise, particularly at low light levels, limiting the performance of motion-detection mechanisms of whatever sort. Here, we study how photon noise propagates through an array of Reichardt-type motion detectors that are commonly believed to underlie fly motion vision. We provide closed-form analytical expressions for the signal and noise spectra at the output of such a motion-detector array. We find that Reichardt detectors reveal favorable noise suppression in the frequency range where most of the signal power resides. Most notably, due to inherent adaptive properties, the transmitted information about stimulus velocity remains nearly constant over a large range of velocity entropies.
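A minimal correlation-type detector of the kind the abstract analyses can be written in a few lines. This is an assumption-laden caricature (discrete time, unit delay, noiseless step-edge stimulus), not the paper's analytical treatment: two mirror-symmetric subunits each multiply one input with a delayed copy of the other, and their difference is direction-selective.

```python
import numpy as np

def reichardt(left, right, delay=1):
    """Reichardt correlator: (delayed left)*(right) - (delayed right)*(left).
    Positive summed output signals left-to-right motion, negative the reverse."""
    left_d = np.concatenate((np.repeat(left[0], delay), left[:-delay]))
    right_d = np.concatenate((np.repeat(right[0], delay), right[:-delay]))
    return left_d * right - right_d * left

# A brightness edge moving rightward hits the left input one step before the right.
t = np.arange(20)
left = (t >= 5).astype(float)
right = (t >= 6).astype(float)

rightward = reichardt(left, right).sum()   # > 0 for rightward motion
leftward = reichardt(right, left).sum()    # < 0 for the reversed stimulus
```

Adding photon noise to `left` and `right` and examining the output spectra is exactly the exercise the paper carries out analytically.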

16.
Humans exhibit an anisotropy in direction perception: discrimination is superior when motion is around horizontal or vertical rather than diagonal axes. In contrast to the consistent directional anisotropy in perception, we found only small idiosyncratic anisotropies in smooth pursuit eye movements, a motor action requiring accurate discrimination of visual motion direction. Both pursuit and perceptual direction discrimination rely on signals from the middle temporal visual area (MT), yet analysis of multiple measures of MT neuronal responses in the macaque failed to provide evidence of a directional anisotropy. We conclude that MT represents different motion directions uniformly, and subsequent processing creates a directional anisotropy in pathways unique to perception. Our data support the hypothesis that, at least for visual motion, perception and action are guided by inputs from separate sensory streams. The directional anisotropy of perception appears to originate after the two streams have segregated and downstream from area MT.

17.
Among topics related to the evolution of language, the evolution of speech is particularly fascinating. Early theorists believed that it was the ability to produce articulate speech that set the stage for the evolution of the "special" speech processing abilities that exist in modern-day humans. Prior to the evolution of speech production, speech processing abilities were presumed not to exist. The data reviewed here support a different view. Two lines of evidence, one from young human infants and the other from infrahuman species, neither of whom can produce articulate speech, show that in the absence of speech production capabilities, the perception of speech sounds is robust and sophisticated. Human infants and non-human animals evidence auditory perceptual categories that conform to those defined by the phonetic categories of language. These findings suggest the possibility that in evolutionary history the ability to perceive rudimentary speech categories preceded the ability to produce articulate speech. This in turn suggests that it may be audition that structured, at least initially, the formation of phonetic categories.

18.
Jolij J  Meurs M 《PloS one》2011,6(4):e18861

Background

Visual perception is not a passive process: in order to efficiently process visual input, the brain actively uses previous knowledge (e.g., memory) and expectations about what the world should look like. However, perception is not only influenced by previous knowledge. The perception of emotional stimuli, in particular, is influenced by the emotional state of the observer. In other words, how we perceive the world depends not only on what we know of the world, but also on how we feel. In this study, we further investigated the relation between mood and perception.

Methods and Findings

We let observers do a difficult stimulus detection task, in which they had to detect schematic happy and sad faces embedded in noise. Mood was manipulated by means of music. We found that observers were more accurate in detecting faces congruent with their mood, corroborating earlier research. However, in trials in which no actual face was presented, observers made a significant number of false alarms. The content of these false alarms, or illusory percepts, was strongly influenced by the observers' mood.

Conclusions

As illusory percepts are believed to reflect the content of internal representations that are employed by the brain during top-down processing of visual input, we conclude that top-down modulation of visual processing is not purely predictive in nature: mood, in this case manipulated by music, may also directly alter the way we perceive the world.
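The hit/false-alarm pattern described in this study is naturally quantified with signal-detection theory. The numbers below are invented for illustration (the abstract reports no such values): sensitivity d′ and criterion c, computed from hit and false-alarm rates, show how a mood-congruent condition can combine higher sensitivity with a more liberal criterion, i.e. more mood-congruent false alarms.

```python
from statistics import NormalDist

def sdt(hit_rate, fa_rate):
    """Signal-detection sensitivity (d') and criterion (c) from a
    detection task's hit rate and false-alarm rate."""
    z = NormalDist().inv_cdf  # probability -> z-score
    return z(hit_rate) - z(fa_rate), -0.5 * (z(hit_rate) + z(fa_rate))

# Hypothetical rates for mood-congruent vs. mood-incongruent faces.
d_congruent, c_congruent = sdt(hit_rate=0.80, fa_rate=0.30)
d_incongruent, c_incongruent = sdt(hit_rate=0.65, fa_rate=0.20)
# Congruent condition: higher d' and a negative (liberal) criterion,
# consistent with frequent mood-congruent illusory percepts.
```

Separating d′ from c is what lets a study like this distinguish genuinely better detection from a bias toward reporting mood-congruent faces.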

19.
How does an animal conceal itself from visual detection by other animals? This review paper seeks to identify general principles that may apply in this broad area. It considers mechanisms of visual encoding, of grouping and object encoding, and of search. In most cases, the evidence base comes from studies of humans or species whose vision approximates to that of humans. The effort is hampered by a relatively sparse literature on visual function in natural environments and with complex foraging tasks. However, some general constraints emerge as being potentially powerful principles in understanding concealment—a ‘constraint’ here means a set of simplifying assumptions. Strategies that disrupt the unambiguous encoding of discontinuities of intensity (edges), and of other key visual attributes, such as motion, are key here. Similar strategies may also defeat grouping and object-encoding mechanisms. Finally, the paper considers how we may understand the processes of search for complex targets in complex scenes. The aim is to provide a number of pointers towards issues that may be of assistance in understanding camouflage and concealment, particularly with reference to how visual systems can detect the shape of complex, concealed objects.

20.
Somewhere between the retina and our conscious visual experience, the majority of the information impinging on the eye is lost. We are typically aware of only either the most salient parts of a visual scene or the parts that we are actively paying attention to. Recent research on visual neurons in monkeys is beginning to show how the brain both selects and discards incoming visual information. For example, what happens to the responses of visual neurons when attention is directed to one element, such as an oriented colored bar, embedded among an array of other oriented bars? Some of this research shows that attention to the oriented bar restricts the receptive field of visual neurons down to this single element. However, other research shows that attention to this single element affects the responses of neurons with receptive fields throughout the visual field. In this review, these two seemingly contradictory results are shown to actually be mutually consistent. A simple computational model is described that explains these results, and also provides a framework for predicting a variety of additional neurophysiological, neuroimaging and behavioral studies of attention.
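The abstract does not specify the model, but the receptive-field-shrinkage result is often captured by attention-weighted divisive normalization (a biased-competition-style account). The sketch below is a generic caricature with invented drive values, not the review's model: attending one of two elements in a receptive field pulls the pooled response toward the response that element would evoke alone.

```python
import numpy as np

def rf_response(drives, attn_weights, sigma=0.1):
    """Attention-weighted divisive normalization: the pooled response is a
    weighted average of each element's drive, biased toward attended elements."""
    return (drives * attn_weights).sum() / (sigma + attn_weights.sum())

preferred, nonpreferred = 10.0, 2.0  # illustrative drives for two bars in the RF

alone = rf_response(np.array([preferred]), np.array([1.0]))
pair_unattended = rf_response(np.array([preferred, nonpreferred]),
                              np.array([1.0, 1.0]))
pair_attended = rf_response(np.array([preferred, nonpreferred]),
                            np.array([4.0, 1.0]))  # attention boosts one weight

# Attending the preferred bar shifts the pair response toward the response to
# that bar alone, as if the receptive field had shrunk around the attended element.
```

Because the same attentional weight applies wherever the attended feature drives neurons, this kind of mechanism can also produce the field-wide modulation the review reconciles with receptive-field shrinkage.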
