Similar documents
20 similar documents found (search time: 359 ms)
1.
Current computational models of motion processing in the primate motion pathway do not cope well with image sequences in which a moving pattern is superimposed upon a static texture. The use of non-linear operations and the need for contrast normalization in motion models mean that the separation of the influences of moving and static patterns on the motion computation is not trivial. Therefore, the response to the superposition of static and moving patterns provides an important means of testing various computational strategies. Here we describe a computational model of motion processing in the visual cortex, one of the advantages of which is that it is highly resistant to interference from static patterns.

2.
Langley K. Spatial Vision, 2002, 15(2): 171-190
A computational model of motion perception is proposed. The model is gradient-based and adheres to the neural constraint that transmitted signals are positive-valued functions: it poses the estimation of image motion as a quadratic programming problem combined with total least squares, under the assumption that image signals are contaminated by noise in both the spatial and temporal dimensions. Motion estimates are shrunk by a regularizer whose subtractive effect introduces a contrast-dependent speed threshold into the motion computation. Posed as a quadratic programming problem, the total-least-squares model is shown to explain both the increases and decreases in perceived speed that Thompson (1982) reported as a function of image contrast and temporal frequency. The correlation between the model's contrast-speed response and results from visual psychophysics is consistent with the view that the visual system assumes that image signals may be contaminated by noise in both the spatial and temporal domains, and that visual motion perception is shaped by the consequences of these assumptions.
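A minimal numerical sketch of the shrinkage idea: a 1-D gradient-based motion estimator in which a subtractive regularizer pulls weak (low-contrast) estimates toward zero, yielding a contrast-dependent speed threshold. The plain least-squares formulation, function names, and parameter values here are illustrative assumptions, not the paper's total-least-squares/quadratic-programming model.

```python
import numpy as np

def gradient_motion_estimate(frames, dt=1.0, reg=0.02):
    """Toy 1-D gradient-based velocity estimate with subtractive shrinkage."""
    f0, f1 = frames
    fx = np.gradient((f0 + f1) / 2.0)   # spatial derivative (central differences)
    ft = (f1 - f0) / dt                 # temporal derivative
    den = np.sum(fx * fx)
    if den <= 0:
        return 0.0
    v = -np.sum(fx * ft) / den          # least-squares velocity
    # subtractive shrinkage: low-contrast (small-den) estimates are pulled
    # toward zero, producing a contrast-dependent speed threshold
    return v * max(den - reg, 0.0) / den

x = np.arange(64)

def grating(phase, contrast=1.0):
    return contrast * np.sin(2 * np.pi * x / 16 + phase)

v_high = gradient_motion_estimate([grating(0.0), grating(0.5)])
v_low = gradient_motion_estimate([grating(0.0, 0.1), grating(0.5, 0.1)])
# shrinkage reduces the low-contrast speed estimate more than the high-contrast one
```

Both gratings drift at the same physical speed; only contrast differs, so any difference between `v_high` and `v_low` is due to the regularizer.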

3.
Visual illusions are valuable tools for the scientific examination of the mechanisms underlying perception. In the peripheral drift illusion special drift patterns appear to move although they are static. During fixation small involuntary eye movements generate retinal image slips which need to be suppressed for stable perception. Here we show that the peripheral drift illusion reveals the mechanisms of perceptual stabilization associated with these micromovements. In a series of experiments we found that illusory motion was only observed in the peripheral visual field. The strength of illusory motion varied with the degree of micromovements. However, drift patterns presented in the central (but not the peripheral) visual field modulated the strength of illusory peripheral motion. Moreover, although central drift patterns were not perceived as moving, they elicited illusory motion of neutral peripheral patterns. Central drift patterns modulated illusory peripheral motion even when micromovements remained constant. Interestingly, perceptual stabilization was only affected by static drift patterns, but not by real motion signals. Our findings suggest that perceptual instabilities caused by fixational eye movements are corrected by a mechanism that relies on visual rather than extraretinal (proprioceptive or motor) signals, and that drift patterns systematically bias this compensatory mechanism. These mechanisms may be revealed by utilizing static visual patterns that give rise to the peripheral drift illusion, but remain undetected with other patterns. Accordingly, the peripheral drift illusion is of unique value for examining processes of perceptual stabilization.

4.
In Li and Atick's [1, 2] theory of efficient stereo coding, the two eyes' signals are transformed into uncorrelated binocular summation and difference signals, and gain control is applied to the summation and differencing channels to optimize their sensitivities. In natural vision, the optimal channel sensitivities vary from moment to moment, depending on the strengths of the summation and difference signals; these channels should therefore be separately adaptable, whereby a channel's sensitivity is reduced following overexposure to adaptation stimuli that selectively stimulate that channel. This predicts a remarkable effect of binocular adaptation on perceived direction of a dichoptic motion stimulus [3]. For this stimulus, the summation and difference signals move in opposite directions, so perceived motion direction (upward or downward) should depend on which of the two binocular channels is most strongly adapted, even if the adaptation stimuli are completely static. We confirmed this prediction: a single static dichoptic adaptation stimulus presented for less than 1 s can control perceived direction of a subsequently presented dichoptic motion stimulus. This is not predicted by any current model of motion perception and suggests that the visual cortex quickly adapts to the prevailing binocular image statistics to maximize information-coding efficiency.

5.
The perceived speed of moving images changes over time. Prolonged viewing of a pattern (adaptation) leads to an exponential decrease in its perceived speed. Similarly, responses of neurones tuned to motion reduce exponentially over time. It is tempting to link these phenomena. However, under certain conditions, perceived speed increases after adaptation and the time course of these perceptual effects varies widely. We propose a model that comprises two temporally tuned mechanisms whose sensitivities reduce exponentially over time. Perceived speed is taken as the ratio of these filters' outputs. The model captures increases and decreases in perceived speed following adaptation and describes our data well with just four free parameters. Whilst the model captures perceptual time courses that vary widely, parameter estimates for the time constants of the underlying filters are in good agreement with estimates of the time course of adaptation of direction selective neurones in the mammalian visual system.
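A minimal sketch of the two-channel ratio idea: perceived speed is the ratio of a fast and a slow temporal channel whose sensitivities decay exponentially during adaptation. Depending on which channel adapts faster, the ratio falls or rises over time. All time constants and gains are illustrative assumptions, not the paper's fitted parameters.

```python
import math

def perceived_speed(t, tau_fast=5.0, tau_slow=20.0, gain=2.0):
    """Ratio of two exponentially adapting temporal channels at adaptation time t."""
    fast = gain * math.exp(-t / tau_fast)   # fast channel sensitivity
    slow = math.exp(-t / tau_slow)          # slow channel sensitivity
    return fast / slow

# fast channel adapts quicker -> perceived speed decreases with adaptation
dec = [perceived_speed(t) for t in (0.0, 10.0, 40.0)]
# swap the time constants -> perceived speed increases with adaptation
inc = [perceived_speed(t, tau_fast=20.0, tau_slow=5.0) for t in (0.0, 10.0)]
```

The same four quantities (two time constants, two gains) control both regimes, mirroring the paper's point that one compact model captures opposite perceptual effects.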

6.
Whether fundamental visual attributes, such as color, motion, and shape, are analyzed separately in specialized pathways has been one of the central questions of visual neuroscience. Although recent studies have revealed various forms of cross-attribute interactions, including significant contributions of color signals to motion processing, it is still widely believed that color perception is relatively independent of motion processing. Here, we report a new color illusion, motion-induced color mixing, in which moving bars, the color of each of which alternates between two colors (e.g., red and green), are perceived as the mixed color (e.g., yellow) even though the two colors are never superimposed on the retina. The magnitude of color mixture is significantly stronger than that expected from direction-insensitive spatial integration of color signals. This illusion cannot be ascribed to optical image blurs, including those induced by chromatic aberration, or to involuntary eye movements of the observer. Our findings indicate that color signals are integrated not only at the same retinal location, but also along a motion trajectory. It is possible that this neural mechanism helps us to see veridical colors for moving objects by reducing motion blur, as in the case of luminance-based pattern perception.

7.
When a static textured background is covered and uncovered by a moving bar of the same mean luminance we can clearly see the motion of the bar. Texture-defined motion provides an example of a naturally occurring second-order motion. Second-order motion sequences defeat standard spatio-temporal energy models of motion perception. It has been proposed that second-order stimuli are analysed by separate systems, operating in parallel with luminance-defined motion processing, which incorporate identifiable pre-processing stages that make second-order patterns visible to standard techniques. However, the proposal of multiple paths to motion analysis remains controversial. Here we describe the behaviour of a model that recovers both luminance-defined and an important class of texture-defined motion. The model also accounts for the induced motion that is seen in some texture-defined motion sequences. We measured the perceived direction and speed of both the contrast envelope and induced motion in the case of a contrast modulation of static noise textures. Significantly, the model predicts the perceived speed of the induced motion seen at second-order texture boundaries. The induced motion investigated here appears distinct from classical induced effects resulting from motion contrast or the movement of a reference frame.

8.
When humans detect and discriminate visual motion, some neural mechanism extracts the motion information that is embedded in the noisy spatio-temporal stimulus. We show that an ideal mechanism in a motion discrimination experiment cross-correlates the received waveform with the signals to be discriminated. If the human visual system uses such a cross-correlator mechanism, discrimination performance should depend on the cross-correlation between the two signals. Manipulations of the signals' cross-correlation using differences in the speed and phase of moving gratings produced the predicted changes in the performance of human observers. The cross-correlator's motion performance improves linearly as contrast increases and human performance is similar. The ideal cross-correlator can be implemented by passing the stimulus through linear spatio-temporal filters matched to the signals. We propose that directionally selective simple cells in the striate cortex serve as matched filters during motion detection and discrimination.
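The cross-correlator idea reduces to template matching: choose whichever candidate signal has the larger correlation (inner product) with the received noisy waveform. The signal shapes, noise level, and variable names below are illustrative assumptions, not the paper's stimuli.

```python
import numpy as np

def discriminate(received, template_a, template_b):
    """Pick the candidate with the larger cross-correlation with the input."""
    score_a = float(np.dot(received, template_a))
    score_b = float(np.dot(received, template_b))
    return 'a' if score_a >= score_b else 'b'

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200, endpoint=False)
# two candidate signals differing in temporal frequency (a speed proxy)
sig_a = np.sin(2 * np.pi * 4 * t)
sig_b = np.sin(2 * np.pi * 6 * t)
# noisy observation generated from signal "a"
noisy = sig_a + 0.5 * rng.standard_normal(t.size)
choice = discriminate(noisy, sig_a, sig_b)
```

Because the two sinusoids are orthogonal over this window, the correct template's score dominates even at this noise level, which is the sense in which the cross-correlator is "ideal" here.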

9.

Background

Vision provides the most salient information regarding stimulus motion. However, it has recently been demonstrated that static visual stimuli are perceived as moving laterally when presented with alternating left-right sound sources. The underlying mechanism of this phenomenon remains unclear; it has not yet been determined whether auditory motion signals, rather than auditory positional signals, can directly contribute to visual motion perception.

Methodology/Principal Findings

Static visual flashes were presented at retinal locations outside the fovea together with a lateral auditory motion provided by a virtual stereo noise source smoothly shifting in the horizontal plane. The flash appeared to move in the direction of the auditory motion when the spatiotemporal position of the flashes was in the middle of the auditory motion trajectory. Furthermore, the lateral auditory motion altered visual motion perception in a global motion display in which different localized motion signals from multiple visual stimuli were combined to produce a coherent visual motion percept.

Conclusions/Significance

These findings suggest that direct interactions exist between auditory and visual motion signals, and that auditory and visual motion processing may share common neural substrates.

10.
Kinetic occlusion produces discontinuities in the optic flow field, whose perception requires the detection of an unexpected onset or offset of otherwise predictably moving or stationary contrast patches. Many cells in primate visual cortex are directionally selective for moving contrasts, and recent reports suggest that this selectivity arises through the inhibition of contrast signals moving in the cells' null direction, as in the rabbit retina. This nulling inhibition circuit (Barlow-Levick) is here extended to also detect motion onsets and offsets. The selectivity of extended circuit units, measured as a peak evidence accumulation response to motion onset/offset compared to the peak response to constant motion, is analyzed as a function of stimulus speed. Model onset cells are quiet during constant motion, but model offset cells activate during constant motion at slow speeds. Consequently, model offset cell speed tuning is biased towards higher speeds than onset cell tuning, similarly to the speed tuning of cells in the middle temporal area when exposed to speed ramps. Given a population of neurons with different preferred speeds, this asymmetry addresses a behavioral paradox: why human subjects in a simple reaction time task respond more slowly to motion offsets than onsets for low speeds, even though monkey neuron firing rates react more quickly to the offset of a preferred stimulus than to its onset.
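A minimal sketch of the base Barlow-Levick circuit that the paper extends: a unit preferring rightward motion receives delayed inhibition from its null-side (right-hand) neighbour, so leftward motion is vetoed while rightward motion is not. The binary stimulus and grid size are illustrative, and the onset/offset extension is not modelled here.

```python
def barlow_levick_response(frames, preferred='right'):
    """Sum of rectified responses of a row of Barlow-Levick units over time."""
    n = len(frames[0])
    total = 0
    for t in range(1, len(frames)):
        cur, prev = frames[t], frames[t - 1]
        for i in range(n):
            # delayed inhibition arrives from the null-side neighbour
            j = i + 1 if preferred == 'right' else i - 1
            inhib = prev[j] if 0 <= j < n else 0
            total += max(cur[i] - inhib, 0)
    return total

def moving_dot(direction, n=5):
    """A single bright dot stepping across n positions, one step per frame."""
    frames = []
    for t in range(n):
        pos = t if direction == 'right' else n - 1 - t
        frames.append([1 if x == pos else 0 for x in range(n)])
    return frames

r_pref = barlow_levick_response(moving_dot('right'))   # preferred direction
r_null = barlow_levick_response(moving_dot('left'))    # null direction: vetoed
```

For null-direction motion the delayed inhibition from the neighbour arrives exactly when the excitation does, cancelling the response; in the preferred direction the inhibition always arrives from an empty location.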

11.
Smooth pursuit eye movements change the retinal image velocity of objects in the visual field. In order to change from a retinocentric frame of reference into a head-centric one, the visual system has to take the eye movements into account. Studies on motion perception during smooth pursuit eye movements have measured either perceived speed or perceived direction during smooth pursuit to investigate this frame of reference transformation, but never both at the same time. We devised a new velocity matching task, in which participants matched both perceived speed and direction during fixation to that during pursuit. In Experiment 1, the velocity matches were determined for a range of stimulus directions, with the head-centric stimulus speed kept constant. In Experiment 2, the retinal stimulus speed was kept approximately constant, with the same range of stimulus directions. In both experiments, the velocity matches for all directions were shifted against the pursuit direction, suggesting an incomplete transformation of the frame of reference. The degree of compensation was approximately constant across stimulus direction. We fitted the classical linear model, the model of Turano and Massof (2001) and that of Freeman (2001) to the velocity matches. The model of Turano and Massof fitted the velocity matches best, but the differences between the model fits were quite small. Evaluation of the models and comparison to a few alternatives suggests that further specification of the potential effect of retinal image characteristics on the eye movement signal is needed.
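The classical linear model can be sketched directly: perceived head-centric velocity is retinal velocity plus a gain-scaled eye-velocity signal, and a gain below 1 produces exactly the incomplete compensation (velocity matches shifted against pursuit) described above. The gain value here is an illustrative assumption, not a fitted estimate.

```python
def head_centric_velocity(retinal_v, eye_v, gain=0.8):
    """Classical linear model: perceived velocity = retinal + gain * eye velocity.

    gain = 1 would give complete compensation; gain < 1 leaves a residual
    shift against the pursuit direction.
    """
    return tuple(r + gain * e for r, e in zip(retinal_v, eye_v))

# stationary object during 10 deg/s rightward pursuit: retinal slip is -10 deg/s
perceived = head_centric_velocity((-10.0, 0.0), (10.0, 0.0))
# the residual leftward component is the shift against pursuit
```

With `gain=0.8` the stationary object is perceived as drifting at 2 deg/s opposite to the pursuit, a Filehne-illusion-like residual.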

12.
Since Barlow and Hill's classic study of the adaptation of the rabbit ganglion cell to movement [1], there have been several reports that motion adaptation is accompanied by an exponential reduction in spike rate, and similar estimates of the time course of velocity adaptation have been found across species [2-4]. Psychophysical studies in humans have shown that perceived velocity may reduce exponentially with adaptation [5,6]. It has been suggested that the reduction in firing of single cells may constitute the neural substrate of the reduction in perceived speed in humans [1,5-7]. Although a model of velocity coding in which the firing rate directly encodes speed may have the advantage of simplicity, it is not supported by psychophysical research. Furthermore, psychophysical estimates of the time course of perceived speed adaptation are not entirely consistent with physiological estimates. This discrepancy between psychophysical and physiological estimates may be due to the unrealistic assumption that speed is coded in the gross spike rate of neurons in the primary visual cortex. The psychophysical data on motion processing are, however, generally consistent with a model in which perceived velocity is derived from the ratio of two temporal channels [8-14]. We have examined the time course of speed adaptation and recovery to determine whether the observed rates can be better related to the established physiology if a ratio model of velocity processing is assumed. Our results indicate that such a model describes the data well and can accommodate the observed difference in the time courses of physiological and psychophysical processes.

13.
It is widely supposed that things tend to look blurred when they are moving fast. Previous work has shown that this is true for sharp edges but, paradoxically, blurred edges look sharper when they are moving than when stationary. This is 'motion sharpening'. We show that blurred edges also look up to 50% sharper when they are presented briefly (8-24 ms) than at longer durations (100-500 ms) without motion. This argues strongly against high-level models of sharpening based specifically on compensation for motion blur. It also argues against a recent, low-level, linear filter model that requires motion to produce sharpening. No linear filter model can explain our finding that sharpening was similar for sinusoidal and non-sinusoidal gratings, since linear filters can never distort sine waves. We also conclude that the idea of a 'default' assumption of sharpness is not supported by experimental evidence. A possible source of sharpening is a nonlinearity in the contrast response of early visual mechanisms to fast or transient temporal changes, perhaps based on the magnocellular (M-cell) pathway. Our finding that sharpening is not diminished at low contrast sets strong constraints on the nature of the nonlinearity.

14.
Ilg UJ, Schumann S, Thier P. Neuron, 2004, 43(1): 145-151
The motion areas of posterior parietal cortex extract information on visual motion for perception as well as for the guidance of movement. It is usually assumed that neurons in posterior parietal cortex represent visual motion relative to the retina. Current models describing action guided by moving objects work successfully based on this assumption. However, here we show that the pursuit-related responses of a distinct group of neurons in area MST of monkeys are at odds with this view. Rather than signaling object image motion on the retina, they represent object motion in world-centered coordinates. This representation may simplify the coordination of object-directed action and ego motion-invariant visual perception.

15.
We have previously reported a transparent motion after-effect indicating that the human visual system comprises separate slow and fast motion channels. Here, we report that the presentation of a fast motion in one eye and a slow motion in the other eye does not result in binocular rivalry but in a clear percept of transparent motion. We call this new visual phenomenon 'dichoptic motion transparency' (DMT). So far only the DMT phenomenon and the two motion after-effects (the 'classical' motion after-effect, seen after motion adaptation on a static test pattern, and the dynamic motion after-effect, seen on a dynamic-noise test pattern) appear to isolate the channels completely. The speed ranges of the slow and fast channels overlap strongly and are observer dependent. A model is presented that links after-effect durations of an observer to the probability of rivalry or DMT as a function of dichoptic velocity combinations. Model results support the assumption of two highly independent channels showing only within-channel rivalry, and no rivalry or after-effect interactions between the channels. The finding of two independent motion vision channels, each with a separate rivalry stage and a private line to conscious perception, might be helpful in visualizing or analysing pathways to consciousness.

16.
Tian J, Wang C, Sun F. Spatial Vision, 2003, 16(5): 407-418
When gratings moving in different directions are presented separately to the two eyes, we typically perceive periods of the combination of motion in the two eyes as well as periods of one or the other monocular motions. To investigate whether such interocular motion combination is determined by the intersection-of-constraints (IOC) or vector average mechanism, we recorded both optokinetic nystagmus eye movements (OKN) and perception during dichoptic presentation of moving gratings and random-dot patterns with various differences of interocular motion direction. For moving gratings, OKN alternately tracks not only the direction of the two monocular motions but also the direction of their combined motion. The OKN in the combined motion direction is highly correlated with the perceived direction of combined motion; its velocity complies with the IOC rule rather than the vector average of the dichoptic motion stimuli. For moving random-dot patterns, both OKN and perceived motion alternate only between the directions of the two monocular motions. These results suggest that interocular motion combination in dichoptic gratings is determined by the IOC and depends on their form.

17.
Biological motion displays depict a moving human figure by means of just a few isolated points of light attached to the major joints of the body. Naive observers readily interpret the moving pattern of dots as representing a human figure, despite the complete absence of form cues. This paper reports a series of experiments which investigated the visual processes underlying the phenomenon. Results suggest that (i) the effect relies upon responses in low-level motion-detecting processes, which operate over short temporal and spatial intervals and respond to local modulations in image intensity; and (ii) the effect does not involve hierarchical visual analysis of motion components, nor does it require the presence of dots which move in rigid relation to each other. Instead, movements of the extremities are crucial. Data are inconsistent with current theoretical treatments.

18.
Zoology (Jena, Germany), 2014, 117(3): 163-170
The functional significance of the zebra coat stripe pattern is one of the oldest questions in evolutionary biology, having troubled scientists ever since Charles Darwin and Alfred Russel Wallace first disagreed on the subject. While different theories have been put forward to address this question, the idea that the stripes act to confuse or 'dazzle' observers remains one of the most plausible. However, the specific mechanisms by which this may operate have not been investigated in detail. In this paper, we investigate how motion of the zebra's high contrast stripes creates visual effects that may act as a form of motion camouflage. We simulated a biologically motivated motion detection algorithm to analyse motion signals generated by different areas on a zebra's body during displacements of their retinal images. Our simulations demonstrate that the motion signals that these coat patterns generate could be a highly misleading source of information. We suggest that the observer's visual system is flooded with erroneous motion signals that correspond to two well-known visual illusions: (i) the wagon-wheel effect (perceived motion inversion due to spatiotemporal aliasing); and (ii) the barber-pole illusion (misperceived direction of motion due to the aperture effect), and predict that these two illusory effects act together to confuse biting insects approaching from the air, or possibly mammalian predators during the hunt, particularly when two or more zebras are observed moving together as a herd.
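The wagon-wheel effect invoked above is plain temporal aliasing, which can be sketched in a few lines: a periodic pattern drifting at a temporal frequency above half the sampling rate appears to move at the folded (aliased) frequency, which can be negative, i.e. reversed. The frequencies below are illustrative values.

```python
def aliased_frequency(temporal_freq, sample_rate):
    """Fold a drifting pattern's temporal frequency into the observable
    band [-sample_rate/2, sample_rate/2). A negative result means the
    sampled motion appears to move in the reverse direction."""
    f = temporal_freq % sample_rate
    if f >= sample_rate / 2:
        f -= sample_rate
    return f

reversed_motion = aliased_frequency(9.0, 12.0)   # above Nyquist: appears reversed
veridical_motion = aliased_frequency(3.0, 12.0)  # below Nyquist: appears correct
```

In the zebra argument, the stripes' high spatial frequency pushes the effective temporal frequency of the retinal image past the observer's sampling limit, so the perceived stripe motion can invert.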

19.
Feature-tracking explanations of 2D motion perception are fundamentally distinct from motion-energy, correlation, and gradient explanations, all of which can be implemented by applying spatiotemporal filters to raw image data. Filter-based explanations usually suffer from the aperture problem, but 2D motion predictions for moving plaids have been derived from the intersection of constraints (IOC) imposed by the outputs of such filters, and from the vector sum of signals generated by such filters. In most previous experiments, feature-tracking and IOC predictions are indistinguishable. By constructing plaids in apparent motion from missing-fundamental gratings, we set feature-tracking predictions in opposition to both IOC and vector-sum predictions. The perceived directions that result are inconsistent with feature tracking. Furthermore, we show that increasing size and spatial frequency in Type 2 missing-fundamental plaids drives perceived direction from vector-sum toward IOC directions. This reproduces results that have been used to support feature-tracking, but under experimental conditions that rule it out. We discuss our data in the context of a Bayesian model with a gradient-based likelihood and a prior favoring slow speeds. We conclude that filter-based explanations alone can explain both veridical and non-veridical 2D motion perception in such stimuli.
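The IOC and vector-sum predictions contrasted above can be computed for a simple plaid. Each grating component with unit normal n and normal speed s constrains the 2-D velocity v by n · v = s; IOC solves the two constraints exactly, while vector sum adds the component normal-velocity vectors. The component orientations and speeds below are illustrative; for a symmetric plaid with components at ±60° the two rules agree on direction but differ in speed.

```python
import math

def ioc_velocity(n1, s1, n2, s2):
    """Solve n1 . v = s1, n2 . v = s2 for the 2-D pattern velocity v."""
    a, b = n1
    c, d = n2
    det = a * d - b * c
    return ((s1 * d - s2 * b) / det, (a * s2 - c * s1) / det)

def vector_sum(n1, s1, n2, s2):
    """Add the two component normal-velocity vectors."""
    return (n1[0] * s1 + n2[0] * s2, n1[1] * s1 + n2[1] * s2)

# symmetric plaid: unit normals at +/-60 deg from horizontal, equal unit speeds
n1 = (0.5, math.sqrt(3) / 2)
n2 = (0.5, -math.sqrt(3) / 2)
v_ioc = ioc_velocity(n1, 1.0, n2, 1.0)
v_sum = vector_sum(n1, 1.0, n2, 1.0)
# both predict rightward motion, but IOC predicts twice the vector-sum speed
```

For Type 2 plaids (both normals on the same side of the pattern direction) the two rules also disagree in direction, which is what makes the paper's size and spatial-frequency manipulation diagnostic.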

20.
Understanding the evolution of animal signals has to include consideration of the structure of signal and noise, and the sensory mechanisms that detect the signals. Considerable progress has been made in understanding sounds and colour signals; however, the degree to which movement-based signals are constrained by the particular patterns of environmental image motion is poorly understood. Here we have quantified the image motion generated by wind-blown plants at 12 sites in the coastal habitat of the Australian lizard Amphibolurus muricatus. Sampling across different plant communities and meteorological conditions revealed distinct image motion environments. At all locations, image motion became more directional and apparent speed increased as wind speeds increased. The magnitude of these changes and the spatial distribution of image motion, however, varied between locations probably as a function of plant structure and the topographic location. In addition, we show that the background motion noise depends strongly on the particular depth-structure of the environment and argue that such microhabitat differences suggest specific strategies to preserve signal efficacy. Movement-based signals and motion processing mechanisms, therefore, may reveal the same type of habitat specific structural variation that we see for signals from other modalities.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号