Similar articles
20 similar articles found (search time: 15 ms)
1.
2.
A powerful effect resembling an afterimage is demonstrated on the pathway to the motion-sensitive neuron H1. This effect is independent of the locally generated gain control described in an earlier paper (Maddess & Laughlin 1985, Proc. R. Soc. Lond. B 225, 251). The afterimage, produced across the eye by a stationary pattern, causes the sensitivity to movement to be different according to the local stimulus history, and the effects of low-contrast (0.1) patterns, presented for as little as a few hundred milliseconds, remain for up to 2 s. Moving patterns interact with the afterimage to modulate the spike rate of H1. The afterimage increases with contrast but saturates at contrasts above 0.5. Low spatial frequencies generate afterimages less effectively than moderate ones; this result indicates that the afterimage process could lie at, or after, lateral inhibition between tonic units. This is supported by the fact that the altered sensitivity profiles generated by single bright and dark vertical bars initially resemble Mach bands. However, this character alters as the afterimage decays, and the depression of H1's response to moving bright stimuli, produced by the afterimage of a dark bar, continues to grow for up to 1 s after the adapting bar is removed. A short-lived (0.5 s) reduction of H1's directional selectivity accompanies strong afterimage formation. All these factors, especially the saturation at low contrasts and the spatial frequency tuning, rule out light adaptation by photoreceptors as the afterimage source. Luminances used were also low enough to exclude influence by the pupil mechanism. Lastly, responses to patterns that are occasionally jumped by large or small distances are broadened by stimuli that produce an afterimage. Responses to small displacements have previously been described as 'velocity impulse responses' (Srinivasan 1983, Vision Res. 23, 659; Zaagman et al. 1983, IEEE Trans. SMC 13, 900) and so the response broadening (stimulus blurring) can be taken as a reduction of the fly's temporal resolution of moving objects. Previously reported work shows that afterimages seen in humans and the effect reported here act over the same range of temporal frequencies rather than retinal drift speeds. This may suggest an important role for afterimage-like effects in the processing of the low temporal frequency components of moving images. Certainly, the fly's afterimage system reduces the visibility of moving objects within patches of an image that have, on average, contained slowly varying motion signals. (ABSTRACT TRUNCATED AT 400 WORDS)

3.
Most studies of human motion perception have been based on the implicit assumption that the brain has only one motion-detection system, or at least that only one is operational in any given instance. We show, in the context of direction perception in spatially filtered two-frame random-dot kinematograms, that two quite different mechanisms operate simultaneously in the detection of such patterns. One mechanism causes reversal of the perceived direction (reversed-phi motion) when the image contrast is reversed between frames, and is highly dependent on the spatial-frequency content of the image. These characteristics are both signatures of detection based on motion energy. The other mechanism does not produce reversed-phi motion and is unaffected by spatial filtering. This appears to involve the tracking of unsigned complex spatial features. The perceived direction of a filtered dot pattern typically reflects a mixture of the two types of behaviour in any given instance. Although both types of mechanism have previously been invoked to explain the perception of motion of different types of image, the simultaneous involvement of two mechanisms in the detection of the same simple rigid motion of a pattern suggests that motion perception in general results from a combination of mechanisms working simultaneously on different principles in the same circumstances.

4.
Human observers perceive illusory rotations after the disappearance of circularly repeating patches containing dark-to-light luminance. This afterimage rotation is a very powerful phenomenon, but little is known about the mechanisms underlying it. Here, we use a computational model to show that the afterimage rotation can be explained by a combination of fast light adaptation and the physiological architecture of the early visual system, consisting of ON- and OFF-type visual pathways. In this retinal ON/OFF model, the afterimage rotation appeared as a rotation of focus lines of retinal ON/OFF responses. Focus lines rotated clockwise on a light background, but counterclockwise on a dark background. These findings were consistent with the results of psychophysical experiments, which were also performed by us. Additionally, the velocity of the afterimage rotation was comparable with that observed in our psychophysical experiments. These results suggest that the early visual system (including the retina) is responsible for the generation of the afterimage rotation, and that this illusory rotation may be systematically misinterpreted by our high-level visual system.

5.
Harding G, Harris JM, Bloj M. PLoS ONE, 2012, 7(4): e35950.
The luminance and colour gradients across an image are the result of complex interactions between object shape, material and illumination. Using such variations to infer object shape or surface colour is therefore a difficult problem for the visual system. We know that changes to the shape of an object can affect its perceived colour, and that shading gradients confer a sense of shape. Here we investigate if the visual system is able to effectively utilise these gradients as a cue to shape perception, even when additional cues are not available. We tested shape perception of a folded card object that contained illumination gradients in the form of shading and more subtle effects such as inter-reflections. Our results suggest that observers are able to use the gradients to make consistent shape judgements. In order to do this, observers must be given the opportunity to learn suitable assumptions about the lighting and scene. Using a variety of different training conditions, we demonstrate that learning can occur quickly and requires only coarse information. We also establish that learning does not deliver a trivial mapping between gradient and shape; rather learning leads to the acquisition of assumptions about lighting and scene parameters that subsequently allow for gradients to be used as a shape cue. The perceived shape is shown to be consistent for convex and concave versions of the object that exhibit very different shading, and also similar to that delivered by outline, a largely unrelated cue to shape. Overall our results indicate that, although gradients are less reliable than some other cues, the relationship between gradients and shape can be quickly assessed and the gradients therefore used effectively as a visual shape cue.

6.
Many aspects of Drosophila segmentation can be discussed in one-dimensional terms as a linear pattern of repeated elements or cell states. But the initial metameric pattern seen in the expression of pair-rule genes is fully two-dimensional, i.e. a pattern of stripes. Several lines of evidence suggest a kinetic mechanism acting globally during the syncytial blastoderm stage may be responsible for generating this pattern. The requirement that the mechanism should produce stripes, not spots or some other periodic pattern, imposes preconditions on this act, namely (1) sharp anterior and posterior boundaries that delimit the pattern-forming region, and (2) an axial asymmetrizing influence in the form of an anteroposterior gradient. Models for Drosophila segmentation generally rely on the gradient to provide positional information in the form of concentration thresholds that cue downstream elements of a hierarchical control system. This imposes restrictions on how such models cope with experimental disturbances to the gradient. A shallower gradient, for example, means fewer pattern elements. This need not be the case if the gradient acts through a kinetic mechanism like reaction-diffusion that involves the whole system. It is then the overall direction of the gradient that is important rather than specific concentration values. (ABSTRACT TRUNCATED AT 250 WORDS)

7.
Richards (1985) showed that veridical three-dimensional shape may be recovered from the integration of binocular disparity and retinal motion information, but proposed that this integration may only occur for horizontal retinal motion. Psychophysical evidence supporting the combination of stereo and motion information is limited to the case of horizontal motion (Johnston et al., 1994), and has been criticised on the grounds of potential object boundary cues to shape present in the stimuli. We investigated whether veridical shape can be recovered under more general conditions. Observers viewed cylinders that were defined by binocular disparity, two-frame motion or a combination of disparity and motion, presented at simulated distances of 30 cm, 90 cm or 150 cm. Horizontally and vertically oriented cylinders were rotated about vertical and horizontal axes. When rotation was about the cylinder's own axis, no boundary cues to shape were introduced. Settings were biased for the disparity and two-frame motion stimuli, while more veridical shape judgements were made under all conditions for combined cue stimuli. These results demonstrate that the improved perception of three-dimensional shape in these stimuli is not a consequence of the presence of object boundary cues, and that the combination of disparity and motion is not restricted to horizontal image motion.

8.
After fixating on a colored pattern, observers see a similar pattern in complementary colors when the stimulus is removed [1-6]. Afterimages were important in disproving the theory that visual rays emanate from the eye, in demonstrating interocular interactions, and in revealing the independence of binocular vision from eye movements. Afterimages also prove invaluable in exploring selective attention, filling in, and consciousness. Proposed physiological mechanisms for color afterimages range from bleaching of cone photopigments to cortical adaptation [4-9], but direct neural measurements have not been reported. We introduce a time-varying method for evoking afterimages, which provides precise measurements of adaptation and a direct link between visual percepts and neural responses [10]. We then use in vivo electrophysiological recordings to show that all three classes of primate retinal ganglion cells exhibit subtractive adaptation to prolonged stimuli, with much slower time constants than those expected of photoreceptors. At the cessation of the stimulus, ganglion cells generate rebound responses that can provide afterimage signals for later neurons. Our results indicate that afterimage signals are generated in the retina but may be modified like other retinal signals by cortical processes, so that evidence presented for cortical generation of color afterimages is explainable by spatiotemporal factors that modify all signals.
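The subtractive adaptation and offset rebound described in this abstract can be illustrated with a minimal sketch: a leaky integrator whose slowly accumulating state is subtracted from the input, so that removing the stimulus leaves a residual of opposite sign. This is an illustration only, not the authors' model; the function name and all parameter values are made up.

```python
# Hedged sketch: subtractive adaptation as a leaky integrator.
# The adaptation state a(t) tracks the input s(t) with a slow time
# constant tau; the cell's drive is s(t) - a(t). When the stimulus
# ends, the residual state produces a rebound of opposite sign --
# an afterimage-like signal. Parameter values are illustrative.

def simulate(stimulus, tau=2.0, dt=0.01):
    a = 0.0
    drive = []
    for s in stimulus:
        drive.append(s - a)
        a += dt / tau * (s - a)  # slow subtractive adaptation
    return drive

# 3 s of a unit-contrast stimulus followed by 1 s of blank
stim = [1.0] * 300 + [0.0] * 100
out = simulate(stim)
print(round(out[0], 3))    # initial response
print(round(out[299], 3))  # adapted response, reduced
print(round(out[300], 3))  # rebound: opposite sign just after offset
```

Because tau is on the order of seconds (rather than the milliseconds expected of photoreceptor bleaching recovery at these luminances), the rebound persists long enough to act as an afterimage signal for downstream neurons.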

9.
Chemotaxis, the directed motion of a cell toward a chemical source, plays a key role in many essential biological processes. Here, we derive a statistical model that quantitatively describes the chemotactic motion of eukaryotic cells in a chemical gradient. Our model is based on observations of the chemotactic motion of the social ameba Dictyostelium discoideum, a model organism for eukaryotic chemotaxis. A large number of cell trajectories in stationary, linear chemoattractant gradients is measured, using microfluidic tools in combination with automated cell tracking. We describe the directional motion as the interplay between deterministic and stochastic contributions based on a Langevin equation. The functional form of this equation is directly extracted from experimental data by angle-resolved conditional averages. It contains quadratic deterministic damping and multiplicative noise. In the presence of an external gradient, the deterministic part shows a clear angular dependence that takes the form of a force pointing in gradient direction. With increasing gradient steepness, this force passes through a maximum that coincides with maxima in both speed and directionality of the cells. The stochastic part, on the other hand, does not depend on the orientation of the directional cue and remains independent of the gradient magnitude. Numerical simulations of our probabilistic model yield quantitative agreement with the experimental distribution functions. Thus our model captures well the dynamics of chemotactic cells and can serve to quantify differences and similarities of different chemotactic eukaryotes. Finally, on the basis of our model, we can characterize the heterogeneity within a population of chemotactic cells.
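A Langevin description of this kind can be sketched with a simple Euler–Maruyama simulation of the cell's heading angle relative to the gradient. This is illustrative only: `simulate_heading`, `k`, and `sigma` are hypothetical, and the deterministic term here is a simple sinusoidal pull toward the gradient direction rather than the functional form the authors extracted from their data.

```python
import math
import random

# Hedged sketch: Langevin-type dynamics for a cell's heading angle
# phi relative to the gradient direction (phi = 0 means moving
# straight up the gradient). The deterministic drift pulls phi
# toward 0, like a force pointing in the gradient direction; the
# stochastic term is independent of orientation, as in the abstract.

def simulate_heading(k=1.0, sigma=0.8, dt=0.01, steps=20000, seed=1):
    random.seed(seed)
    phi = math.pi / 2          # start moving perpendicular to gradient
    cos_sum = 0.0
    for _ in range(steps):
        drift = -k * math.sin(phi)  # gradient-directed "force"
        phi += drift * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
        cos_sum += math.cos(phi)
    return cos_sum / steps     # directionality index in (-1, 1]

print(simulate_heading())      # positive: net motion up the gradient
```

The stationary heading distribution of such a process concentrates around the gradient direction, so the time-averaged cosine of the heading (a common directionality index) is well above zero even though the noise term never references the gradient.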

10.
The question of whether perceptual illusions influence eye movements is critical for the long-standing debate regarding the separation between action and perception. To test the role of auditory context on a visual illusion and on eye movements, we took advantage of the fact that the presence of an auditory cue can successfully modulate illusory motion perception of an otherwise static flickering object (sound-induced visual motion effect). We found that illusory motion perception modulated by an auditory context consistently affected saccadic eye movements. Specifically, the landing positions of saccades performed towards flickering static bars in the periphery were biased in the direction of illusory motion. Moreover, the magnitude of this bias was strongly correlated with the effect size of the perceptual illusion. These results show that both an audio-visual and a purely visual illusion can significantly affect visuo-motor behavior. Our findings are consistent with arguments for a tight link between perception and action in localization tasks.

11.
The effects of novelty on low-level visual perception were investigated in two experiments using a two-alternative forced-choice tilt detection task. A target, consisting of a Gabor patch, was preceded by a cue that was either a novel or a familiar fractal image. Participants had to indicate whether the Gabor stimulus was vertically oriented or slightly tilted. In the first experiment, the tilt angle was manipulated; in the second, the contrast of the Gabor patch was varied. In the first, we found that sensitivity was enhanced after a novel compared to a familiar cue, and in the second we found sensitivity to be enhanced for novel cues in later experimental blocks, when participants had become more and more familiarized with the familiar cue. These effects were not caused by a shift in the response criterion. This shows for the first time that novel stimuli affect low-level characteristics of perception. We suggest that novelty can elicit a transient attentional response, thereby enhancing perception.

12.
Feature-tracking explanations of 2D motion perception are fundamentally distinct from motion-energy, correlation, and gradient explanations, all of which can be implemented by applying spatiotemporal filters to raw image data. Filter-based explanations usually suffer from the aperture problem, but 2D motion predictions for moving plaids have been derived from the intersection of constraints (IOC) imposed by the outputs of such filters, and from the vector sum of signals generated by such filters. In most previous experiments, feature-tracking and IOC predictions are indistinguishable. By constructing plaids in apparent motion from missing-fundamental gratings, we set feature-tracking predictions in opposition to both IOC and vector-sum predictions. The perceived directions that result are inconsistent with feature tracking. Furthermore, we show that increasing size and spatial frequency in Type 2 missing-fundamental plaids drives perceived direction from vector-sum toward IOC directions. This reproduces results that have been used to support feature-tracking, but under experimental conditions that rule it out. We discuss our data in the context of a Bayesian model with a gradient-based likelihood and a prior favoring slow speeds. We conclude that filter-based explanations alone can explain both veridical and non-veridical 2D motion perception in such stimuli.
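The two filter-based predictions contrasted in this abstract can be computed directly. Each component grating constrains the plaid's 2D velocity v to satisfy n·v = s, where n is the grating's unit normal and s its normal speed; IOC solves the two constraints exactly, while the vector sum simply adds the component normal-velocity vectors. A sketch with made-up component angles and speeds (a Type 2 configuration, where the two predictions diverge):

```python
import math

# Hedged illustration: IOC vs vector-sum predictions for a
# two-grating plaid. Component angles and speeds are made up.

def ioc(n1, s1, n2, s2):
    # Solve the 2x2 system [n1; n2] v = [s1; s2]
    det = n1[0] * n2[1] - n1[1] * n2[0]
    vx = (s1 * n2[1] - s2 * n1[1]) / det
    vy = (n1[0] * s2 - n2[0] * s1) / det
    return (vx, vy)

def vector_sum(n1, s1, n2, s2):
    # Add the component normal-velocity vectors s_i * n_i
    return (s1 * n1[0] + s2 * n2[0], s1 * n1[1] + s2 * n2[1])

# Component gratings with normals at 10 and 40 degrees
a1, a2 = math.radians(10), math.radians(40)
n1 = (math.cos(a1), math.sin(a1))
n2 = (math.cos(a2), math.sin(a2))
v_ioc = ioc(n1, 1.0, n2, 2.0)
v_sum = vector_sum(n1, 1.0, n2, 2.0)
print(math.degrees(math.atan2(v_ioc[1], v_ioc[0])))  # IOC direction
print(math.degrees(math.atan2(v_sum[1], v_sum[0])))  # vector-sum direction
```

For these numbers the IOC direction (~76°) falls outside the 10°–40° span of the component normal directions, while the vector sum (~30°) falls between them — the signature of a Type 2 plaid, which is what makes the two predictions experimentally separable.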

13.
We have previously reported a transparent motion after-effect indicating that the human visual system comprises separate slow and fast motion channels. Here, we report that the presentation of a fast motion in one eye and a slow motion in the other eye does not result in binocular rivalry but in a clear percept of transparent motion. We call this new visual phenomenon 'dichoptic motion transparency' (DMT). So far only the DMT phenomenon and the two motion after-effects (the 'classical' motion after-effect, seen after motion adaptation on a static test pattern, and the dynamic motion after-effect, seen on a dynamic-noise test pattern) appear to isolate the channels completely. The speed ranges of the slow and fast channels overlap strongly and are observer dependent. A model is presented that links after-effect durations of an observer to the probability of rivalry or DMT as a function of dichoptic velocity combinations. Model results support the assumption of two highly independent channels showing only within-channel rivalry, and no rivalry or after-effect interactions between the channels. The finding of two independent motion vision channels, each with a separate rivalry stage and a private line to conscious perception, might be helpful in visualizing or analysing pathways to consciousness.

14.

Background

Surface lightness perception is affected by scene interpretation. There is some experimental evidence that perceived lightness under bi-ocular viewing conditions is different from perceived lightness in actual scenes but there are also reports that viewing conditions have little or no effect on perceived color. We investigated how mixes of depth cues affect perception of lightness in three-dimensional rendered scenes containing strong gradients of illumination in depth.

Methodology/Principal Findings

Observers viewed a virtual room (4 m width×5 m height×17.5 m depth) with checkerboard walls and floor. In four conditions, the room was presented with or without binocular disparity (BD) depth cues and with or without motion parallax (MP) depth cues. In all conditions, observers were asked to adjust the luminance of a comparison surface to match the lightness of test surfaces placed at seven different depths (8.5–17.5 m) in the scene. We estimated lightness versus depth profiles in all four depth cue conditions. Even when observers had only pictorial depth cues (no MP, no BD), they partially but significantly discounted the illumination gradient in judging lightness. Adding either MP or BD led to significantly greater discounting and both cues together produced the greatest discounting. The effects of MP and BD were approximately additive. BD had greater influence at near distances than far.

Conclusions/Significance

These results suggest that surface lightness perception is modulated by three-dimensional perception/interpretation, using pictorial, binocular-disparity, and motion-parallax cues additively. We propose a two-stage (2D and 3D) processing model for lightness perception.

15.
It is still an enigma how human subjects combine visual and vestibular inputs for their self-motion perception. Visual cues have the benefit of high spatial resolution but entail the danger of self motion illusions. We performed psychophysical experiments (verbal estimates as well as pointer indications of perceived self-motion in space) in normal subjects (Ns) and patients with loss of vestibular function (Ps). Subjects were presented with horizontal sinusoidal rotations of an optokinetic pattern (OKP) alone (visual stimulus; 0.025-3.2 Hz; displacement amplitude, 8 degrees) or in combinations with rotations of a Bárány chair (vestibular stimulus; 0.025-0.4 Hz; +/- 8 degrees). We found that specific instructions to the subjects created different perceptual states in which their self-motion perception essentially reflected three processing steps during pure visual stimulation: i) When Ns were primed by a procedure based on induced motion and then they estimated perceived self-rotation upon pure optokinetic stimulation (circular vection, CV), the CV has a gain close to unity up to frequencies of almost 0.8 Hz, followed by a sharp decrease at higher frequencies (i.e., characteristics resembling those of the optokinetic reflex, OKR, and of smooth pursuit, SP). ii) When Ns were instructed to "stare through" the optokinetic pattern, CV was absent at high frequency, but increasingly developed as frequency was decreased below 0.1 Hz. iii) When Ns "looked at" the optokinetic pattern (accurately tracked it with their eyes) CV was usually absent, even at low frequency. CV in Ps showed similar dynamics as in Ns in condition i), independently of the instruction. During vestibular stimulation, self-motion perception in Ns fell from a maximum at 0.4 Hz to zero at 0.025 Hz. When vestibular stimulation was combined with visual stimulation while Ns "stared through" OKP, perception at low frequencies became modulated in magnitude. When Ns "looked" at OKP, this modulation was reduced, apart from the synergistic stimulus combination (OKP stationary) where magnitude was similar as during "staring". The obtained gain and phase curves of the perception were incompatible with linear systems prediction. We therefore describe the present findings by a non-linear dynamic model in which the visual input is processed in three steps: i) It shows dynamics similar to those of OKR and SP; ii) it is shaped to complement the vestibular dynamics and is fused with a vestibular signal by linear summation; and iii) it can be suppressed by a visual-vestibular conflict mechanism when the visual scene is moving in space. Finally, an important element of the model is a velocity threshold of about 1.2 degrees/s which is instrumental in maintaining perceptual stability and in explaining the observed dynamics of perception. We conclude from the experimental and theoretical evidence that self-motion perception normally is related to the visual scene as a reference, while the vestibular input is used to check the kinematic state of the scene; if the scene appears to move, the visual signal becomes suppressed and perception is based on the vestibular cue.

16.
The central problems of vision are often divided into object identification and localization. Object identification, at least at fine levels of discrimination, may require the application of top-down knowledge to resolve ambiguous image information. Utilizing top-down knowledge, however, may require the initial rapid access of abstract object categories based on low-level image cues. Does object localization require a different set of operating principles than object identification or is category determination also part of the perception of depth and spatial layout? Three-dimensional graphics movies of objects and their cast shadows are used to argue that identifying perceptual categories is important for determining the relative depths of objects. Processes that can identify the causal class (e.g. the kind of material) that generates the image data can provide information to determine the spatial relationships between surfaces. Changes in the blurriness of an edge may be characteristically associated with shadows caused by relative motion between two surfaces. The early identification of abstract events such as moving object/shadow pairs may also be important for depth from shadows. Knowledge of how correlated motion in the image relates to an object and its shadow may provide a reliable cue to access such event categories.

17.
The perception of visual motion information involves processing stages from local motion detection to the perception of global pattern motion. Taking the neural circuitry for figure-ground relative-motion discrimination in the fly visual system as a basic framework, we constructed a simplified brain model for perceiving visual motion information, using a hexagonal array of elementary motion detectors as the input layer, and simulated the processing of motion information at each level of this neural computation model. The model correctly predicted the results of discrimination behavior experiments. The neural mechanism of spatial physiological integration is also discussed.

18.
19.
Freely flying honeybees are innately attracted to moving objects, as revealed by their spontaneous preference for a moving disc over an identical, but stationary disc. We have exploited this spontaneous preference to explore the visual cues by which a bee, which is herself in motion, recognizes a moving object. We find that the moving disc is not detected on the basis that it produces a more rapidly moving image on the retina. The relevant cue might therefore be the motion of the disc relative to the visual surround. We have attempted to test this hypothesis by artificially rotating the structured environment, together with the moving disc, around the bee. Under these conditions, the image of the stationary disc rather than that of the actually moving disc is in motion relative to the surround. We find that rotation of the surround disrupts the bee's capacity not only to distinguish a moving object from a stationary one, but also to discriminate stationary objects at different ranges. Possible interpretations of these results are discussed.

20.
Whether fundamental visual attributes, such as color, motion, and shape, are analyzed separately in specialized pathways has been one of the central questions of visual neuroscience. Although recent studies have revealed various forms of cross-attribute interactions, including significant contributions of color signals to motion processing, it is still widely believed that color perception is relatively independent of motion processing. Here, we report a new color illusion, motion-induced color mixing, in which moving bars, the color of each of which alternates between two colors (e.g., red and green), are perceived as the mixed color (e.g., yellow) even though the two colors are never superimposed on the retina. The magnitude of color mixture is significantly stronger than that expected from direction-insensitive spatial integration of color signals. This illusion cannot be ascribed to optical image blurs, including those induced by chromatic aberration, or to involuntary eye movements of the observer. Our findings indicate that color signals are integrated not only at the same retinal location, but also along a motion trajectory. It is possible that this neural mechanism helps us to see veridical colors for moving objects by reducing motion blur, as in the case of luminance-based pattern perception.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司) | 京ICP备09084417号