Similar Articles
20 similar articles found.
1.
In the animal world, which is filled with competition for survival, visual camouflage and counter-camouflage are everywhere. What is the principle underlying visual counter-camouflage? In this paper we extend Reichardt's figure-ground relative-motion discrimination model and propose a moving-image filter model of visual counter-camouflage. To test the model, we built a biologically plausible electronic neural-network device for real-time motion-information processing, achieving real-time, high-resolution filtering of moving-target images. Compared with the moving-target detection of Mead's artificial retina, the resolution of the detected moving-target images is greatly improved and the noise level markedly reduced, overcoming some limitations of the artificial retina.
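The correlation-type detector that figure-ground relative-motion models build on can be sketched as a generic Reichardt elementary motion detector. This is an illustrative sketch, not the authors' filter; the signal shapes and one-step delay are invented for the example:

```python
import numpy as np

def reichardt_emd(left, right, delay=1):
    """Opponent Reichardt detector: correlate each input with a
    delayed copy of its neighbour; the sign of the net output
    indicates motion direction (positive = left-to-right)."""
    l_d = np.roll(left, delay)   # delayed left signal
    r_d = np.roll(right, delay)  # delayed right signal
    l_d[:delay] = 0              # discard wrapped-around samples
    r_d[:delay] = 0
    return l_d * right - left * r_d

# A bright feature passing the left receptor one step before the right
t = np.arange(50)
left = np.exp(-0.5 * ((t - 20) / 2.0) ** 2)
right = np.exp(-0.5 * ((t - 21) / 2.0) ** 2)  # same profile, delayed

response = reichardt_emd(left, right)
print(np.sum(response) > 0)  # True: net positive output for rightward motion
```

Swapping the two inputs reverses the sign of the summed response, which is the opponent property that lets such detectors report direction rather than mere change.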

2.
In order to follow optic neuritis patients and evaluate the effectiveness of their treatment, a handy, accurate and quantifiable tool is required to assess changes in myelination in the central nervous system (CNS). However, standard measurements, including routine visual tests and MRI scans, are not sensitive enough for this purpose. We present two visual tests addressing dynamic monocular and binocular functions which may closely associate with the extent of myelination along visual pathways: an Object From Motion (OFM) extraction protocol and a time-constrained stereo protocol. In the OFM test, an array of dots composes an object by moving the dots within the object's outline rightward while moving the dots outside it leftward, or vice versa. The dot pattern generates a camouflaged object that cannot be detected when the dots are stationary or moving as a whole; object recognition is therefore critically dependent on motion perception. In the time-constrained stereo protocol, spatially disparate images are presented for a limited length of time, challenging binocular three-dimensional integration in time. Both tests are appropriate for clinical use and provide a simple, yet powerful, way to identify and quantify processes of demyelination and remyelination along visual pathways. These protocols may be effective for diagnosing and following optic neuritis and multiple sclerosis patients. In the diagnostic process, they may reveal visual deficits that cannot be identified with current standard visual measurements. Moreover, they may sensitively identify the basis of the currently unexplained visual complaints that persist after recovery of visual acuity. In longitudinal follow-up, the protocols can serve as a sensitive marker of demyelinating and remyelinating processes over time. They may therefore be used to evaluate the efficacy of current and evolving therapeutic strategies targeting myelination of the CNS.
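The OFM dot stimulus described above can be sketched as follows. The dot count, object size, and drift speed here are hypothetical, not the published protocol's values; the point is that every single frame is uniform random dots, and only the opposed motion of the two dot populations defines the shape:

```python
import numpy as np

rng = np.random.default_rng(0)

def ofm_frames(n_dots=500, n_frames=10, speed=0.01):
    """Object-from-motion stimulus sketch: dots starting inside a
    central square drift rightward, background dots drift leftward.
    Any single frame is indistinguishable from noise; only the
    relative motion reveals the camouflaged shape."""
    xy = rng.uniform(0, 1, size=(n_dots, 2))
    inside = (np.abs(xy[:, 0] - 0.5) < 0.2) & (np.abs(xy[:, 1] - 0.5) < 0.2)
    frames = []
    for _ in range(n_frames):
        frames.append(xy.copy())
        xy[:, 0] += np.where(inside, speed, -speed)  # opposed horizontal drift
        xy[:, 0] %= 1.0  # wrap around the display
    return frames, inside

frames, inside = ofm_frames()
# 'inside' dots moved net rightward between frames, the rest leftward
```

A static observer sees only a uniform dot field in each frame; recognition requires integrating the motion, which is what makes the test sensitive to conduction along motion pathways.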

3.
An important role of visual systems is to detect nearby predators, prey, and potential mates, which may be distinguished in part by their motion. When an animal is at rest, an object moving in any direction may easily be detected by motion-sensitive visual circuits. During locomotion, however, this strategy is compromised because the observer must detect a moving object within the pattern of optic flow created by its own motion through the stationary background. However, objects that move creating back-to-front (regressive) motion may be unambiguously distinguished from stationary objects because forward locomotion creates only front-to-back (progressive) optic flow. Thus, moving animals should exhibit an enhanced sensitivity to regressively moving objects. We explicitly tested this hypothesis by constructing a simple fly-sized robot that was programmed to interact with a real fly. Our measurements indicate that whereas walking female flies freeze in response to a regressively moving object, they ignore a progressively moving one. Regressive motion salience also explains observations of behaviors exhibited by pairs of walking flies. Because the assumptions underlying the regressive motion salience hypothesis are general, we suspect that the behavior we have observed in Drosophila may be widespread among eyed, motile organisms.
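The logic of the regressive motion salience hypothesis reduces to a sign test: during forward locomotion the stationary background can only stream front-to-back, so any patch moving against that flow must be an independently moving object. A minimal sketch of this classification rule (the sign convention is an assumption for illustration):

```python
def is_regressive(object_velocity_x, optic_flow_x):
    """A patch moving against the self-motion-induced optic flow
    (back-to-front while the background streams front-to-back)
    cannot be a stationary object, so it is flagged as salient.
    Convention here: positive x = front-to-back on this eye."""
    return object_velocity_x * optic_flow_x < 0

# background flow is front-to-back (+1); a target drifting back-to-front (-0.5)
print(is_regressive(-0.5, +1.0))  # True: salient, predicts freezing
print(is_regressive(+0.5, +1.0))  # False: indistinguishable from background
```

Note that a progressively moving object yields the same sign as background flow, which is why, on this rule, it is ignored by a walking fly even though it would be detectable at rest.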

4.
BACKGROUND: In anorthoscopic viewing conditions, observers can perceive a moving object through a narrow slit even when only portions of its contour are visible at any time. We used fMRI to examine the contribution of early and later visual cortical areas to dynamic shape integration. Observers' success at integrating the shape of the slit-viewed object was manipulated by varying the degree to which the stimulus was dynamically distorted. Line drawings of common objects were either moderately distorted, strongly distorted, or shown undistorted. Phenomenologically, increasing the stimulus distortion made both object shape and motion more difficult to perceive. RESULTS: We found that bilateral cortical activity in portions of the ventral occipital cortex, corresponding to known object areas within the lateral occipital complex (LOC), was inversely correlated with the degree of stimulus distortion. Activity in left MT+, the human cortical area specialized for motion, showed a pattern similar to that of the ventral occipital region. The LOC also showed greater activity to a fully visible moving object than to the undistorted slit-viewed object. Area MT+, however, responded about equally to the slit-viewed and fully visible moving objects. CONCLUSIONS: In early retinotopic cortex, the distorted and undistorted stimuli elicited the same amount of activity. Activity in higher visual areas, however, was correlated with the percept of the coherent object, suggesting that shape integration is mediated by later visual cortical areas. Motion information from the dorsal stream may project to the LOC to produce the shape percept.

5.
Simultaneous object motion and self-motion give rise to complex patterns of retinal image motion. In order to estimate object motion accurately, the brain must parse this complex retinal motion into self-motion and object motion components. Although this computational problem can be solved, in principle, through purely visual mechanisms, extra-retinal information that arises from the vestibular system during self-motion may also play an important role. Here we investigate whether combining vestibular and visual self-motion information improves the precision of object motion estimates. Subjects were asked to discriminate the direction of object motion in the presence of simultaneous self-motion, depicted either by visual cues alone (i.e. optic flow) or by combined visual/vestibular stimuli. We report a small but significant improvement in object motion discrimination thresholds with the addition of vestibular cues. This improvement was greatest for eccentric heading directions and negligible for forward movement, a finding that could reflect increased relative reliability of vestibular versus visual cues for eccentric heading directions. Overall, these results are consistent with the hypothesis that vestibular inputs can help parse retinal image motion into self-motion and object motion components.
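The reliability argument above is the standard maximum-likelihood cue-combination prediction (a textbook benchmark, not necessarily the authors' analysis): thresholds combine in inverse quadrature, so the gain from adding a second cue is large only when its reliability is comparable to the first's. The numbers below are hypothetical:

```python
import math

def combined_threshold(t_visual, t_vestibular):
    """Optimal (maximum-likelihood) cue combination: discrimination
    thresholds add in inverse quadrature, so the combined threshold
    is never worse than the better single cue."""
    return 1.0 / math.sqrt(1.0 / t_visual**2 + 1.0 / t_vestibular**2)

# Visual cue much more reliable (as for forward headings): negligible gain
print(round(combined_threshold(2.0, 10.0), 3))  # 1.961
# Comparable reliabilities (as for eccentric headings): clear gain
print(round(combined_threshold(2.0, 2.5), 3))   # 1.562
```

This reproduces the qualitative pattern reported: near-zero benefit when one cue dominates, and a measurable threshold reduction when the vestibular cue's relative reliability rises.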

6.
Huang X, Albright TD, Stoner GR. Neuron. 2007;53(5):761-770.
Visual motion perception relies on two opposing operations: integration and segmentation. Integration overcomes motion ambiguity in the visual image by spatial pooling of motion signals, whereas segmentation identifies differences between adjacent moving objects. For visual motion area MT, previous investigations have reported that stimuli in the receptive field surround, which do not elicit a response when presented alone, can nevertheless modulate responses to stimuli in the receptive field center. The directional tuning of this "surround modulation" has been found to be mainly antagonistic and hence consistent with segmentation. Here, we report that surround modulation in area MT can be either antagonistic or integrative depending upon the visual stimulus. Both types of modulation were delayed relative to response onset. Our results suggest that the dominance of antagonistic modulation in previous MT studies was due to stimulus choice and that segmentation and integration are achieved, in part, via adaptive surround modulation.

7.
The interaction of visual and proprioceptive afferent information was studied in a motor task requiring discrimination of the weights of falling objects. The availability of visual information reduced motor response time; however, the degree of shortening depended on the type of information. The decrease in response time was significantly greater when the subject saw the actual onset of the object's fall rather than receiving only a visual signal marking it. Thus, a subject solves the weight-discrimination task more efficiently when seeing the real beginning of the fall than when receiving only a visual signal at the moment the electromagnet releases the object. This may be because seeing the initial part of the real trajectory, rather than an abstract signal marking the beginning of the fall, allows the subject to better predict the moment of impact.

8.
The central problems of vision are often divided into object identification and localization. Object identification, at least at fine levels of discrimination, may require the application of top-down knowledge to resolve ambiguous image information. Utilizing top-down knowledge, however, may require the initial rapid access of abstract object categories based on low-level image cues. Does object localization require a different set of operating principles than object identification or is category determination also part of the perception of depth and spatial layout? Three-dimensional graphics movies of objects and their cast shadows are used to argue that identifying perceptual categories is important for determining the relative depths of objects. Processes that can identify the causal class (e.g. the kind of material) that generates the image data can provide information to determine the spatial relationships between surfaces. Changes in the blurriness of an edge may be characteristically associated with shadows caused by relative motion between two surfaces. The early identification of abstract events such as moving object/shadow pairs may also be important for depth from shadows. Knowledge of how correlated motion in the image relates to an object and its shadow may provide a reliable cue to access such event categories.

9.
In primates, tracking eye movements help vision by stabilising onto the retinas the images of a moving object of interest. This sensorimotor transformation involves several stages of motion processing, from the local measurement of one-dimensional luminance changes up to the integration of first and higher-order local motion cues into a global two-dimensional motion immune to antagonistic motions arising from the surrounding. The dynamics of this surface motion segmentation is reflected in the various components of the tracking responses, and its underlying neural mechanisms can be correlated with behaviour at both single-cell and population levels. I review a series of behavioural studies which demonstrate that the neural representation driving eye movements evolves over time from a fast vector average of the outputs of linear and non-linear spatio-temporal filtering to a progressive and slower accurate solution for global motion. Because of the sensitivity of earliest ocular following to binocular disparity, antagonistic visual motion from surfaces located at different depths is filtered out. Thus, global motion integration is restricted to the depth plane of the object to be tracked. Similar dynamics were found at the level of monkey extra-striate areas MT and MST, and I suggest that several parallel pathways along the motion stream are involved, albeit with different latencies, in building up this accurate surface motion representation. After 200-300 ms, most of the computational problems of early motion processing (aperture problem, motion integration, motion segmentation) are solved and the eye velocity matches the global object velocity to maintain a clear and steady retinal image.

10.
Freely flying honeybees are innately attracted to moving objects, as revealed by their spontaneous preference for a moving disc over an identical, but stationary disc. We have exploited this spontaneous preference to explore the visual cues by which a bee, which is herself in motion, recognizes a moving object. We find that the moving disc is not detected on the basis that it produces a more rapidly moving image on the retina. The relevant cue might therefore be the motion of the disc relative to the visual surround. We have attempted to test this hypothesis by artificially rotating the structured environment, together with the moving disc, around the bee. Under these conditions, the image of the stationary disc rather than that of the actually moving disc is in motion relative to the surround. We find that rotation of the surround disrupts the bee's capacity not only to distinguish a moving object from a stationary one, but also to discriminate stationary objects at different ranges. Possible interpretations of these results are discussed.

11.
Ilg UJ, Schumann S, Thier P. Neuron. 2004;43(1):145-151.
The motion areas of posterior parietal cortex extract information on visual motion for perception as well as for the guidance of movement. It is usually assumed that neurons in posterior parietal cortex represent visual motion relative to the retina. Current models describing action guided by moving objects work successfully based on this assumption. However, here we show that the pursuit-related responses of a distinct group of neurons in area MST of monkeys are at odds with this view. Rather than signaling object image motion on the retina, they represent object motion in world-centered coordinates. This representation may simplify the coordination of object-directed action and ego motion-invariant visual perception.

12.
Normal visual acuity requires a stationary retinal image on the fovea. If fixation instability moves the retinal image across the fovea by a few degrees, visual acuity is diminished; nystagmus, as a fixation instability, may consequently impair vision. The foveation period is the part of the nystagmus waveform, a brief interval, in which the eye is still and pointed at the object of regard; during this period eye velocity is at a minimum and visual acuity is best. In children with congenital ocular nystagmus, electronystagmography (ENG) was performed with standard clinical equipment (time constants 1.0 and 0.3 s) and the recorded nystagmus waveforms were analyzed. In some patients visual acuity was also examined. The ENG records were classified according to Dell'Osso's waveform criteria. Findings of jerk nystagmus with extended foveation (J(EF)) and of bidirectional jerk nystagmus (BDJ) were singled out. The foveation time measured in these waveforms was compared with visual acuity. Visual acuity was better in jerk nystagmus waveforms with an extended foveation period (J(EF)) than in bidirectional jerk nystagmus with shorter foveation time.

13.
Mirror agnosia.     
Normal people rarely confuse the mirror image of an object with a real object so long as they realize they are looking into a mirror. We report a new neurological sign, "mirror agnosia", following right parietal lesions, in which this ability is severely compromised. We studied four right-hemisphere stroke patients who had left visual field "neglect", i.e. they were indifferent to objects in their left visual field even though they were not blind. We then placed a vertical parasagittal mirror on each patient's right so that they could clearly see the reflection of objects placed in the (neglected) left visual field. When shown a candy or pen on their left, the patients kept banging their hand into the mirror or groped behind it attempting to grab the reflection; they did not reach for the real object on the left, even though they were mentally quite lucid and knew they were looking into a mirror. Remarkably, all four patients kept complaining that the object was "in the mirror", "outside my reach" or "behind the mirror". Thus, even the patients' ability to make simple logical inferences about mirrors had been selectively warped to accommodate the strange new sensory world they now inhabit. The finding may have implications for understanding how the brain creates representations of mirror reflections.

14.
The visual angle that is projected by an object (e.g. a ball) on the retina depends on the object's size and distance. Without further information, however, the visual angle is ambiguous with respect to size and distance, because equal visual angles can be obtained from a big ball at a longer distance and a smaller one at a correspondingly shorter distance. Failure to recover the true 3D structure of the object (e.g. a ball's physical size) causing the ambiguous retinal image can lead to a timing error when catching the ball. Two opposing views are currently prevailing on how people resolve this ambiguity when estimating time to contact. One explanation challenges any inference about what causes the retinal image (i.e. the necessity to recover this 3D structure), and instead favors a direct analysis of optic flow. In contrast, the second view suggests that action timing could be rather based on obtaining an estimate of the 3D structure of the scene. With the latter, systematic errors will be predicted if our inference of the 3D structure fails to reveal the underlying cause of the retinal image. Here we show that hand closure in catching virtual balls is triggered by visual angle, using an assumption of a constant ball size. As a consequence of this assumption, hand closure starts when the ball is at similar distance across trials. From that distance on, the remaining arrival time, therefore, depends on ball's speed. In order to time the catch successfully, closing time was coupled with ball's speed during the motor phase. This strategy led to an increased precision in catching but at the cost of committing systematic errors.
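The constant-size strategy described above can be sketched with the small-angle relation distance = size / visual angle. The ball size and trigger angle below are hypothetical values chosen for illustration, not the study's parameters:

```python
def triggered_distance(assumed_diameter, trigger_angle):
    """Small-angle estimate: a ball of assumed physical size
    subtending `trigger_angle` radians is at distance size/angle.
    With a fixed assumed size, a fixed trigger angle implies that
    hand closure always starts at the same estimated distance."""
    return assumed_diameter / trigger_angle

def remaining_time(distance, ball_speed):
    """Time left before contact once closure is triggered; it
    depends on the ball's speed, not on its true size."""
    return distance / ball_speed

d = triggered_distance(0.07, 0.14)  # 7 cm assumed ball, 0.14 rad trigger angle
print(d)                            # 0.5 (m): same trigger distance on every trial
print(remaining_time(d, 2.0))       # 0.25 (s) at 2 m/s; slower balls leave more time
```

A ball whose true size differs from the assumed size is triggered at the wrong physical distance, which is exactly the systematic timing error the constant-size assumption predicts.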

15.
16.
We propose a computational model of contour integration for visual saliency. The model uses biologically plausible devices to simulate how the representations of elements aligned collinearly along a contour in an image are enhanced. Our model adds such devices as a dopamine-like fast plasticity, local GABAergic inhibition and multi-scale processing of images. The fast plasticity addresses the problem of how neurons in visual cortex seem to be able to influence neurons they are not directly connected to, for instance, as observed in contour closure effect. Local GABAergic inhibition is used to control gain in the system without using global mechanisms which may be non-plausible given the limited reach of axonal arbors in visual cortex. The model is then used to explore not only its validity in real and artificial images, but to discover some of the mechanisms involved in processing of complex visual features such as junctions and end-stops as well as contours. We present evidence for the validity of our model in several phases, starting with local enhancement of only a few collinear elements. We then test our model on more complex contour integration images with a large number of Gabor elements. Sections of the model are also extracted and used to discover how the model might relate contour integration neurons to neurons that process end-stops and junctions. Finally, we present results from real world images. Results from the model suggest that it is a good current approximation of contour integration in human vision. As well, it suggests that contour integration mechanisms may be strongly related to mechanisms for detecting end-stops and junction points. Additionally, a contour integration mechanism may be involved in finding features for objects such as faces. This suggests that visual cortex may be more information efficient and that neural regions may have multiple roles.
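The core operation, enhancing elements that have collinear neighbours, can be illustrated with a toy association-field score. This is a sketch in the spirit of such models, not the cited model itself; the scoring function and decay constant are invented:

```python
import math

def collinear_support(elements, sigma_d=2.0):
    """Each element (x, y, theta) is boosted by neighbours that are
    roughly collinear with it: they share its orientation (mod pi)
    and lie along its orientation axis, with support decaying over
    distance."""
    scores = []
    for i, (xi, yi, ti) in enumerate(elements):
        s = 0.0
        for j, (xj, yj, tj) in enumerate(elements):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            dist = math.hypot(dx, dy)
            link = math.atan2(dy, dx)          # direction to the neighbour
            align = math.cos(2 * (ti - tj))    # same orientation (mod pi)
            along = math.cos(2 * (link - ti))  # neighbour lies on the element's axis
            s += max(0.0, align) * max(0.0, along) * math.exp(-dist / sigma_d)
        scores.append(s)
    return scores

# three horizontal elements on a horizontal line, one vertical outlier
elems = [(0, 0, 0.0), (1, 0, 0.0), (2, 0, 0.0), (1, 2, math.pi / 2)]
scores = collinear_support(elems)
print(scores[1] > scores[3])  # True: the contour element is enhanced
```

The outlier gets no support because both its orientation and its position violate collinearity; in the full model this enhancement is implemented with fast plasticity and kept in check by local inhibition rather than a global gain control.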

17.
Temporal correlation of neuronal activity has been suggested as a criterion for multiple object recognition. In this work, a two-dimensional network of simplified Wilson-Cowan oscillators is used to manage the binding and segmentation of a visual scene according to the connectedness Gestalt criterion. Binding is achieved via original coupling terms that link excitatory units to both excitatory and inhibitory units of adjacent neurons. These local coupling terms are time independent, i.e., they do not require Hebbian learning during the simulations. Segmentation is realized by two-layer processing of the visual image. The first layer extracts all object contours from the image by means of "retinal cells" with an "on-center" receptive field. Contour information is used to selectively inhibit Wilson-Cowan oscillators in the second layer, thus realizing a strong separation among neurons in different objects. Accidental synchronism between oscillations in different objects is prevented with a global inhibitor, i.e., a global neuron that computes the overall activity in the Wilson-Cowan network and sends back an inhibitory signal. Simulations performed on a 50×50 neural grid with 21 different visual scenes (containing up to eight objects plus background) and random initial conditions demonstrate that the network correctly segments objects in almost 100% of cases using a single set of parameters, i.e., without the need to adjust parameters from one visual scene to the next. The network is robust to dynamical noise superimposed on the oscillatory neurons. Moreover, it can segment both black objects on a white background and vice versa, and it is able to deal with the problem of "fragmentation." The main limitation of the network is its sensitivity to static noise superimposed on the objects; overcoming this problem requires more robust mechanisms for contour enhancement in the first layer, in agreement with mechanisms actually realized in the visual cortex.
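The building block of such a network, a single excitatory/inhibitory Wilson-Cowan pair, can be sketched with forward-Euler integration. The gains, sigmoid parameters, and drive below are illustrative, not the cited network's values; depending on the parameters the pair settles to a fixed point or a limit cycle, and firing rates always remain in (0, 1):

```python
import math

def wilson_cowan(steps=5000, dt=0.01, c1=16.0, c2=12.0, c3=15.0, c4=3.0, P=1.25):
    """Forward-Euler integration of one simplified Wilson-Cowan
    excitatory (E) / inhibitory (I) pair:
        dE/dt = -E + S(c1*E - c2*I + P)
        dI/dt = -I + S(c3*E - c4*I)
    where S is a sigmoid response function."""
    def S(x, a=1.3, theta=4.0):
        return 1.0 / (1.0 + math.exp(-a * (x - theta)))
    E, I = 0.1, 0.1
    trace = []
    for _ in range(steps):
        dE = -E + S(c1 * E - c2 * I + P)
        dI = -I + S(c3 * E - c4 * I, a=2.0, theta=3.7)
        E += dt * dE
        I += dt * dI
        trace.append(E)
    return trace

trace = wilson_cowan()
# the excitatory rate stays bounded in (0, 1) throughout the run
```

In the full model, many such pairs are coupled through their excitatory and inhibitory units so that oscillators within one connected object phase-lock while a global inhibitor keeps different objects desynchronized.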

18.
Current gated radiation therapy starts with simulation 4DCT images of a patient with lung cancer. We propose a method to confirm the phase of 4DCT for planning and setup position at the time of treatment. An intensity-based rigid algorithm was developed in this work to register an orthogonal set of on-board projection X-ray images with each phase of the 4DCT. Multiple DRRs for one of ten 4DCT phases are first generated and the correlation coefficient (CC) between the projection X-ray image and each DRR is computed. The maximum value of CC for the phase is found via a simulated annealing optimization process. The whole process repeats for all ten phases. The 4DCT phase that has the highest CC is identified as the breathing phase of the X-ray. The phase verification process is validated by a moving phantom study. Thus, the method may be used to independently confirm the correspondence between the gating phase at the times of 4DCT simulation and radiotherapy delivery. When the intended X-ray phase and actual gating phase are consistent, the registration of the DRRs and the projection images may also yield the values of patient shifts for treatment setup. This method could serve as the 4D analog of the conventional setup film as it provides both verification of the specific phase at the time of treatment and isocenter positioning shifts for treatment delivery.
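The phase-selection step above reduces to an argmax over per-phase correlation scores. A minimal sketch with synthetic images follows; the random "DRRs" stand in for real renderings, and the per-phase search is a plain maximum rather than the paper's simulated-annealing optimization over rigid-shift parameters:

```python
import numpy as np

def correlation_coefficient(a, b):
    """Pearson correlation between two images, the similarity score
    used to match an on-board projection X-ray against a DRR."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def best_phase(xray, drrs_per_phase):
    """Return the index of the 4DCT phase whose best-matching DRR
    has the highest CC with the projection X-ray."""
    scores = [max(correlation_coefficient(xray, d) for d in drrs)
              for drrs in drrs_per_phase]
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
phases = [rng.normal(size=(32, 32)) for _ in range(10)]
xray = phases[3] + 0.1 * rng.normal(size=(32, 32))  # noisy copy of phase 3
print(best_phase(xray, [[p] for p in phases]))  # 3
```

With real data each phase would contribute many DRRs over candidate shifts, and the shift maximizing the winning phase's CC would also supply the setup correction.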

19.
Journal of Physiology. 1996;90(2):53-62.
The anteroposterior sway of subjects maintaining spontaneous dynamic balance on a wobbly platform was measured during stimulation by a visual target executing a circular trajectory in the frontal plane. The target was either a component of a wholly moving visual scene or moved on a stationary background. The former stimulation, obtained through the use of rotating prismatic glasses, made every point of the visual field appear to describe a circular trajectory around its real position, so that the whole visual field appeared to be circularly translated, undistorted, inducing a binocular pursuit movement. Under these conditions, stereotyped anteroposterior dynamic balance reactions synchronous with the position of the stimulus were elicited. The latter stimulation consisted of pursuing a luminous target describing a trajectory similar to that of the fixation point seen through the rotating prisms, on the same, this time stable, visual background. Although the pursuit eye movements were comparable, as demonstrated by electro-oculographic recordings, no stereotyped equilibration reaction was induced. It is concluded that the translatory motion of the background image on the retina in the latter experiments contributed to the body's stability as well as to the perception of a stable environment.

20.
During information processing, a visual system extracts characteristic information from the visual image and simultaneously integrates spatial and temporal visual information. In this study, we investigate the integration effect of neurons in the primary visual cortex (area V1) under grating stimulation. First, an information-integration model was established based on the receptive-field properties underlying feature extraction from visual images, the interaction between neurons, and the nonlinear integration across those neurons. Neuropsychological experiments were then designed both to provide parameters for the model and to verify its predictions. The experimental results with real visual images were largely consistent with the model's forecast output, demonstrating that the model can faithfully reflect the integration effect of the primary visual system under grating stimulation at different orientations. Our results indicate that the primary visual system integrates visual information in the following manner: it first extracts visual information through different types of receptive field, then its neurons interact with each other nonlinearly, and finally the neurons fire spikes recorded as responses to the visual stimulus.
