Similar Documents
19 similar documents found.
1.
A neural network for visual motion perception was constructed by combining Reichardt's array of correlation-type elementary motion detectors with Rumelhart's error back-propagation (BP) learning network, and the process of perceiving visual motion information was investigated. The aim was to clarify, from the standpoint of computational neuroscience, the neural principles leading from the detection of one-dimensional motion components to the perception of two-dimensional pattern motion, and thus to answer how motion vectors are represented in the brain. Computer simulations show that, under supervised learning, the network can learn to resolve the ambiguity introduced by local motion detection and report the true orientation, direction, and speed of a moving pattern.
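The abstract gives no implementation details, so the following is only a minimal Python sketch of a single correlation-type (Reichardt) elementary motion detector of the kind used as this model's front end; the time constant, the test stimulus, and all names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def reichardt_emd(left, right, tau=5.0, dt=1.0):
    """Correlation-type elementary motion detector (Reichardt).

    left, right : 1-D luminance signals from two neighbouring photoreceptors,
                  sampled at interval dt.
    tau         : time constant of the delay (first-order low-pass) filter.
    Returns the opponent detector output; positive values signal motion from
    the 'left' receptor towards the 'right' one, negative values the reverse.
    """
    alpha = dt / (tau + dt)          # coefficient of the low-pass delay filter
    d_left = np.zeros_like(left)
    d_right = np.zeros_like(right)
    for t in range(1, len(left)):    # first-order low-pass acts as the delay line
        d_left[t] = d_left[t - 1] + alpha * (left[t] - d_left[t - 1])
        d_right[t] = d_right[t - 1] + alpha * (right[t] - d_right[t - 1])
    # each half-detector correlates a delayed signal with the undelayed neighbour;
    # subtracting the two mirror-symmetric halves gives direction selectivity
    return d_left * right - d_right * left

# a drifting sinusoid sampled at two nearby points; the right receptor lags the left,
# i.e. the pattern moves from left to right
t = np.arange(0, 200.0, 1.0)
phase_shift = 0.5
stim_l = np.sin(0.2 * t)
stim_r = np.sin(0.2 * t - phase_shift)
print(reichardt_emd(stim_l, stim_r).mean())   # mean output > 0 for this direction
```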

2.
Stereoscopic vision refers not only to the perception of static depth information but also to judging the direction in which objects move in three-dimensional space. This study measured human observers' ability to discriminate the direction of motion in dynamic random-dot patterns and the role of visual training in improving the discrimination of dynamic information. The results show that normally sighted subjects with no relevant experience find it very difficult to discriminate the direction of motion in depth of dynamic random dots, whereas visual training greatly improves sensitivity to an object's direction of motion in depth. Moreover, the effect of this training is long-lasting (at least six months). Improving subjects' sensitivity to stereoscopic motion information through visual training in this way provides a new perspective for experiments and research related to stereoscopic vision.

3.
A computational model for the perception of content and motion direction (cited 1 time: 0 self-citations, 1 by others)
Perceiving objects in the visual field and their direction of motion is one of the basic problems of visual perception. Starting from the simple cells of area V1, the higher visual cortex divides into two pathways, the "What" pathway and the "Where" pathway: the former perceives object content such as shape, color, and texture, while the latter perceives spatial motion speed and direction. This paper uses a brain-like computational architecture for visual information processing to study a perceptual computational model, its computational mechanism, and its learning algorithm for visual content and motion direction. The model is a three-layer neural network. The first layer is the visual input layer, which receives external image stimuli. The second layer is an internal representation layer for neural information; its connections to the first layer, i.e., the neurons' receptive fields, are formed adaptively according to the principle of sparse neural representation. To this end, the Kullback-Leibler divergence is introduced to describe the independence of neural responses, and minimizing this cost function yields the learning algorithm for the connection weights. The image basis functions learned from natural image patches are localized, oriented, and band-pass, properties consistent with the receptive-field characteristics of V1 simple cells reported in physiological experiments. These basis functions serve as the neurons' receptive fields, and the third layer models the content-perceiving and motion-perceiving neurons of the higher visual cortex. When a certain amount of noise is added to the ideal stimuli, the model still perceives content and motion direction with high accuracy and good robustness. Simulation results given at the end demonstrate the feasibility of the model and the simplicity and effectiveness of the learning algorithm.
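Items 3, 11, and 12 describe the same model, so a single illustration is given here. The abstracts do not specify the exact KL-divergence cost; the sketch below only shows the general recipe of learning localized, oriented, band-pass basis functions from image patches by sparse coding (reconstruction error plus a generic sparseness penalty), with a synthetic 1/f image standing in for natural photographs. Patch size, learning rates, and all function names are assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- toy "natural image": 1/f-filtered noise stands in for real photographs ---
def toy_image(n=256):
    spec = np.fft.fft2(rng.standard_normal((n, n)))
    fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0
    return np.real(np.fft.ifft2(spec / f))

def sample_patches(img, patch=8, n_patches=5000):
    ps = []
    for _ in range(n_patches):
        y = rng.integers(0, img.shape[0] - patch)
        x = rng.integers(0, img.shape[1] - patch)
        p = img[y:y + patch, x:x + patch].ravel()
        ps.append(p - p.mean())                 # remove the DC component
    return np.array(ps)

# --- sparse-coding dictionary learning (generic objective, not the paper's KL cost) ---
def learn_basis(X, n_basis=64, n_iter=200, lr_a=0.05, lr_phi=0.5, lam=0.1):
    dim = X.shape[1]
    phi = rng.standard_normal((dim, n_basis)) * 0.1        # basis functions (columns)
    for _ in range(n_iter):
        batch = X[rng.integers(0, len(X), 100)]
        a = np.zeros((len(batch), n_basis))                 # sparse coefficients
        for _ in range(30):                                 # inner inference loop
            resid = batch - a @ phi.T
            # gradient step on ||x - phi a||^2 + lam * log(1 + a^2)
            a += lr_a * (resid @ phi - lam * 2 * a / (1 + a**2))
        resid = batch - a @ phi.T
        phi += lr_phi * resid.T @ a / len(batch)            # dictionary update
        phi /= np.linalg.norm(phi, axis=0, keepdims=True)   # keep basis normalized
    return phi

patches = sample_patches(toy_image())
basis = learn_basis(patches)   # columns should develop localized, oriented structure;
                               # real natural images and longer training give cleaner
                               # Gabor-like receptive fields
print(basis.shape)             # (64, 64): 8x8 patches, 64 basis functions
```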

4.
The study of subjective contours is important for exploring how object recognition arises in the human visual system. Many studies indicate that subjective contours formed during motion and in stereopsis arise from the presence of unmatchable parts. Using psychophysical experiments, this work attempts to show that Kanizsa-type subjective contours likewise arise from unmatchable information. Kanizsa subjective contours under different modes of motion (horizontal motion and contraction-expansion motion) were used. The results show that in both modes of motion, perception of the subjective contour was enhanced when the figure and the background moved separately, with no significant difference in perceived strength between the two motion modes, whereas no enhancement occurred when the figure and background moved together. Analysis indicates that the enhancement of subjective contours during motion results from enhancement of the unmatchable information, which indirectly suggests that processing of unmatchable information is a key step in the formation of subjective contours.

5.
Stereoscopic vision refers not only to the perception of static depth information but also to judging the direction in which objects move in three-dimensional space. This study measured human observers' ability to discriminate the direction of motion in dynamic random-dot patterns and the role of visual training in improving the discrimination of dynamic information. The results show that normally sighted subjects with no relevant experience find it very difficult to discriminate the direction of motion in depth of dynamic random dots, whereas visual training greatly improves sensitivity to an object's direction of motion in depth. Moreover, the effect of this training is long-lasting (at least six months). This way of improving subjects' sensitivity to stereoscopic motion information through visual training provides a new perspective for experiments and research related to stereoscopic vision.

6.
Proprioception is one of the important bodily senses: the perception of body posture, movement state, position, and the amount of force being exerted, independent of vision and the other senses. During movement, motor coordination requires integrating proprioceptive information with visual and other sensory information, so proprioception is an essential component of the body's motor control system. This paper reviews the basic structure, molecular basis, developmental pathways, and physiological characteristics of the proprioceptive system, and discusses prospects for further research and application.

7.
Microsaccades are the largest and fastest eye movements that occur during visual fixation; they counteract the visual fading produced by neural adaptation and play an important role in visual information processing. Based on the relationship between microsaccades and visual perceptual function, experiments were designed to compare microsaccades during fixation while macaques performed overt and covert attention tasks, and overt attention tasks of different difficulty. Comparing microsaccade parameters across overt-attention task difficulties showed that as task difficulty increased, microsaccade amplitude, velocity, and frequency were all suppressed. Comparing different types of visual perceptual task (overt versus covert attention) under similar paradigms showed that covert attention clearly suppressed microsaccade frequency, whereas amplitude and velocity did not yield consistent results, suggesting that different types of visual attention task may lead the macaques to adopt different task strategies. This work lays a foundation for further study of the neural mechanisms that generate microsaccades and of the role of eye movements during visual attention.

8.
Effects of aging on early-stage direction selectivity of cells in the macaque middle temporal area (cited 1 time: 0 self-citations, 1 by others)
The middle temporal area (MT/V5) plays an important role in visual motion processing. MT neurons are strongly selective for the direction of object motion, and this direction selectivity is considered the neural basis of motion direction perception; experiments have also shown that, under the influence of attention, direction selectivity unfolds in two temporal stages. The group's earlier experiments found that direction selectivity of MT cells declines in anesthetized rhesus macaques (Rhesus macaque), but that decline was averaged over the entire response time course and therefore could not reveal the underlying neural mechanism in time. To further explore the neural mechanism of the decline in motion direction perception, this experiment used single-unit recording in MT of anesthetized macaques to study changes in early-stage direction selectivity (early stage direction selectivity, esDB) of MT cells during normal aging. The results show that early-stage direction selectivity of MT cells in old macaques is significantly reduced, and that cells with strong early-stage direction selectivity are significantly fewer. These results further suggest that the early decline of direction selectivity in MT cells may mediate the decline in visual motion perception.

9.
Animals produce different behavioral responses to different sensory stimuli, which is crucial for survival. Previous work on the underlying neural mechanisms has focused mostly on information processing in sensory systems, but how the brain processes the behavioral meaning carried by a visual stimulus, and how it then gates behavior according to that meaning, remained unclear. To better dissect the neural mechanism of behavioral choice, Yao Yuanyuan and colleagues in Du Jiulin's group at the Institute of Neuroscience, Chinese Academy of Sciences, used the zebrafish escape circuit as a model to study how visual stimuli with different behavioral meanings elicit different behavioral responses. First, they found that zebrafish produce escape behavior only in response to threatening, not non-threatening, visual stimuli, and that this behavioral control occurs at the stage where visual information is relayed from visual centers to the escape command neurons (the visuomotor transformation stage). Second, hypothalamic dopaminergic neurons and hindbrain glycinergic inhibitory neurons form a "switch"-like functional module that controls this behavioral choice. Furthermore, the differential control of threatening versus non-threatening visual stimuli by this "switch"-like module is implemented by the visual response properties of these neurons. This work reveals the role of neuromodulatory systems in behavioral choice and deepens our understanding of how sensorimotor transformations are controlled. The responsiveness of neuromodulatory systems to sensory stimuli found here may be a general neural mechanism in the brain: neuromodulatory systems receive and process the behavioral meaning carried by sensory stimuli and then, by regulating sensorimotor pathways, help the animal make the corresponding behavioral choice. The work provides further experimental evidence for the group's "Bi-modal Brain Function Hypothesis".

10.
Luminance is the most basic visual information. Compared with other visual features, the neural mechanisms of luminance coding are poorly understood, because visual neurons respond weakly to luminance stimuli and many neurons do not respond to uniform luminance at all. Some neurons in primary visual cortex respond to luminance more slowly than to contrast and have been regarded as the neural basis of brightness perception induced by border contrast. Our work shows that many primary visual cortex neurons respond to luminance faster than to contrast, and that these neurons prefer low spatial frequencies, high temporal frequencies, and high motion speeds, suggesting that subcortical input from a low-spatial-frequency, high-speed pathway contributes to the luminance responses of primary visual cortex neurons. It is known that the temporal course of spatial-frequency responses in primary visual cortex proceeds from low to high spatial frequencies; the early luminance response we found is a response to extremely low spatial frequencies, consistent with this time course, and constitutes the first step of this coarse-to-fine processing of visual information, revealing the neural basis for processing the earliest, coarse visual information. In addition, primary visual cortex contains neurons that prefer luminance decrements and high motion speeds; the activity of this population helps detect fast-moving, low-luminance objects in poorly lit environments.

11.
Perception of objects and motions in the visual scene is one of the basic problems in the visual system. There exist 'What' and 'Where' pathways in the superior visual cortex, starting from the simple cells in the primary visual cortex. The former is able to perceive objects such as forms, color, and texture, and the latter perceives 'where', for example, velocity and direction of spatial movement of objects. This paper explores brain-like computational architectures of visual information processing. We propose a visual perceptual model and computational mechanism for training the perceptual model. The computational model is a three-layer network. The first layer is the input layer which is used to receive the stimuli from natural environments. The second layer is designed for representing the internal neural information. The connections between the first layer and the second layer, called the receptive fields of neurons, are self-adaptively learned based on principle of sparse neural representation. To this end, we introduce Kullback-Leibler divergence as the measure of independence between neural responses and derive the learning algorithm based on minimizing the cost function. The proposed algorithm is applied to train the basis functions, namely receptive fields, which are localized, oriented, and bandpassed. The resultant receptive fields of neurons in the second layer have the characteristics resembling that of simple cells in the primary visual cortex. Based on these basis functions, we further construct the third layer for perception of what and where in the superior visual cortex. The proposed model is able to perceive objects and their motions with a high accuracy and strong robustness against additive noise. Computer simulation results in the final section show the feasibility of the proposed perceptual model and high efficiency of the learning algorithm.

12.
Perception of objects and motions in the visual scene is one of the basic problems in the visual system. There exist ‘What’ and ‘Where’ pathways in the superior visual cortex, starting from the simple cells in the primary visual cortex. The former is able to perceive objects such as forms, color, and texture, and the latter perceives ‘where’, for example, velocity and direction of spatial movement of objects. This paper explores brain-like computational architectures of visual information processing. We propose a visual perceptual model and computational mechanism for training the perceptual model. The computational model is a three-layer network. The first layer is the input layer which is used to receive the stimuli from natural environments. The second layer is designed for representing the internal neural information. The connections between the first layer and the second layer, called the receptive fields of neurons, are self-adaptively learned based on principle of sparse neural representation. To this end, we introduce Kullback-Leibler divergence as the measure of independence between neural responses and derive the learning algorithm based on minimizing the cost function. The proposed algorithm is applied to train the basis functions, namely receptive fields, which are localized, oriented, and bandpassed. The resultant receptive fields of neurons in the second layer have the characteristics resembling that of simple cells in the primary visual cortex. Based on these basis functions, we further construct the third layer for perception of what and where in the superior visual cortex. The proposed model is able to perceive objects and their motions with a high accuracy and strong robustness against additive noise. Computer simulation results in the final section show the feasibility of the proposed perceptual model and high efficiency of the learning algorithm.

13.
A multilayer neural network model for the perception of rotational motion has been developed using Reichardt's motion detector array of correlation type, Kohonen's self-organizing feature map, and Schuster-Wagner's oscillating neural network. It is shown that unsupervised learning can make the neurons in the second layer of the network self-organize into a form resembling the columnar organization of preferred directions in area MT of the primate visual cortex. The output layer can interpret rotation information and give the directions and velocities of rotational motion. The computer simulation results are in agreement with some psychophysical observations of rotational perception. It is demonstrated that temporal correlation between the oscillating neurons would be powerful for solving the "binding problem" of the shear components of rotational motion.
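Only the Kohonen stage of this model is illustrated below; the Reichardt front end and the oscillatory binding layer are omitted. This is a minimal sketch, assuming a 1-D self-organizing map whose inputs are unit motion-direction vectors; the map size, learning schedule, and names are illustrative, not the authors' parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D Kohonen map: each unit learns a preferred motion direction (unit vector).
n_units = 32
weights = rng.standard_normal((n_units, 2))
weights /= np.linalg.norm(weights, axis=1, keepdims=True)

def train_som(weights, n_steps=5000, sigma0=8.0, lr0=0.5):
    for step in range(n_steps):
        # training sample: a local motion vector with a random direction
        theta = rng.uniform(0, 2 * np.pi)
        x = np.array([np.cos(theta), np.sin(theta)])
        # best-matching unit and a shrinking Gaussian neighbourhood
        bmu = np.argmax(weights @ x)
        sigma = sigma0 * 0.05 ** (step / n_steps)   # neighbourhood width decays
        lr = lr0 * 0.02 ** (step / n_steps)         # learning rate decays
        dist = np.arange(len(weights)) - bmu
        h = np.exp(-dist**2 / (2 * sigma**2))
        # move each unit's weight toward the input, scaled by the neighbourhood
        weights += lr * h[:, None] * (x - weights)
        weights /= np.linalg.norm(weights, axis=1, keepdims=True)
    return weights

weights = train_som(weights)
preferred = np.degrees(np.arctan2(weights[:, 1], weights[:, 0]))
print(np.round(preferred))   # neighbouring units end up preferring similar directions,
                             # a column-like arrangement of direction preference
```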

14.
The ability to perceive time intervals is very important during movement: it helps an individual judge durations and predict and prepare for events. In recent years, a growing number of studies have found that movement itself directly affects interval timing. This paper reviews the behavioral evidence for the influence of movement on interval timing from four aspects: movement parameters, movement phase, visual motion stimuli, and movement-related individual factors. A large body of research from different perspectives has shown that the brain's motor system forms part of the neural network supporting subjective time perception and both encodes and participates in human interval timing. The theoretical mechanisms by which movement acts on interval timing can be framed within the internal clock model and further explained from three perspectives: the interaction of sensorimotor information, movement-induced changes in arousal, and embodied cognition theory. Future research should examine the influence of movement on interval timing at different time scales and advance the paradigms and techniques for measuring interval timing during movement, so as to better reveal how movement modulates the timing process and its underlying mechanisms. It should also take into account the characteristics of competitive sports, providing help and guidance for reducing timing errors during exercise and improving athletes' interval timing ability.

15.
To stabilize our position in space we use visual information as well as non-visual physical motion cues. However, visual cues can be ambiguous: visually perceived motion may be caused by self-movement, movement of the environment, or both. The nervous system must combine the ambiguous visual cues with noisy physical motion cues to resolve this ambiguity and control our body posture. Here we have developed a Bayesian model that formalizes how the nervous system could solve this problem. In this model, the nervous system combines the sensory cues to estimate the movement of the body. We analytically demonstrate that, as long as visual stimulation is fast in comparison to the uncertainty in our perception of body movement, the optimal strategy is to weight visually perceived movement velocities proportional to a power law. We find that this model accounts for the nonlinear influence of experimentally induced visual motion on human postural behavior both in our data and in previously published results.
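The paper derives its power-law weighting from a full generative model of self-motion versus scene motion, which is not reproduced here. The sketch below shows only the standard linear-Gaussian special case of Bayesian cue combination, with reliability-weighted averaging of a visual and a vestibular velocity cue; all numbers and names are assumptions, not the paper's model.

```python
import numpy as np

def fuse_gaussian_cues(mu_visual, var_visual, mu_vestibular, var_vestibular):
    """Posterior over own body velocity from two conditionally independent
    Gaussian cues (standard reliability-weighted Bayesian cue combination)."""
    precision = 1.0 / var_visual + 1.0 / var_vestibular
    mu_post = (mu_visual / var_visual + mu_vestibular / var_vestibular) / precision
    return mu_post, 1.0 / precision

# visual cue: retinal motion may reflect self-motion or scene motion, so it is
# treated here simply as a noisy observation of body velocity
mu, var = fuse_gaussian_cues(mu_visual=2.0, var_visual=1.0,
                             mu_vestibular=0.5, var_vestibular=0.25)
print(mu, var)   # 0.8, 0.2: the estimate is pulled toward the more reliable cue
```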

16.
A prevailing theory proposes that the brain's two visual pathways, the ventral and dorsal, lead to visual processing and world representations for conscious perception that differ from those for action. Others have claimed that perception and action share much of their visual processing. But which of these two neural architectures is favored by evolution? Successful visual search is life-critical, and here we investigate the evolution and optimality of neural mechanisms mediating perception and eye movement actions for visual search in natural images. We implement an approximation to the ideal Bayesian searcher with two separate processing streams, one controlling the eye movements and the other stream determining the perceptual search decisions. We virtually evolved the neural mechanisms of the searchers' two separate pathways built from linear combinations of primary visual cortex receptive fields (V1) by making the simulated individuals' probability of survival depend on the perceptual accuracy finding targets in cluttered backgrounds. We find that for a variety of targets, backgrounds, and dependence of target detectability on retinal eccentricity, the mechanisms of the searchers' two processing streams converge to similar representations showing that mismatches in the mechanisms for perception and eye movements lead to suboptimal search. Three exceptions which resulted in partial or no convergence were a case of an organism for which the targets are equally detectable across the retina, an organism with sufficient time to foveate all possible target locations, and a strict two-pathway model with no interconnections and differential pre-filtering based on parvocellular and magnocellular lateral geniculate cell properties. Thus, similar neural mechanisms for perception and eye movement actions during search are optimal and should be expected from the effects of natural selection on an organism with limited time to search for food that is not equi-detectable across its retina and interconnected perception and action neural pathways.
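The paper's simulations involve an ideal Bayesian searcher, foveated detectability, and separate perception and eye-movement streams; none of that is reproduced here. The sketch below only illustrates the core idea of evolving linear combinations of a fixed filter bank when survival probability depends on detection accuracy in noise, with a random filter bank standing in for V1 receptive fields; all sizes, parameters, and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# fixed bank of "V1-like" features (random here; Gabor-like filters in the real model)
n_features, n_filters = 64, 16
filters = rng.standard_normal((n_filters, n_features))
target = rng.standard_normal(n_features)

def detection_accuracy(w, n_trials=200, noise=4.0):
    """Fraction of trials in which a linear template built from the filter bank
    ranks the target-plus-noise patch above a noise-only patch."""
    template = w @ filters
    hits = 0
    for _ in range(n_trials):
        present = target + noise * rng.standard_normal(n_features)
        absent = noise * rng.standard_normal(n_features)
        hits += template @ present > template @ absent
    return hits / n_trials

def evolve(pop_size=40, n_gen=30, mut=0.2):
    pop = rng.standard_normal((pop_size, n_filters))
    for _ in range(n_gen):
        fitness = np.array([detection_accuracy(w) for w in pop])
        # survival depends on perceptual accuracy; keep the better half and mutate it
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]
        children = parents + mut * rng.standard_normal(parents.shape)
        pop = np.vstack([parents, children])
    return pop[np.argmax([detection_accuracy(w) for w in pop])]

best = evolve()
print(detection_accuracy(best))   # approaches the accuracy of a matched template
```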

17.

Background

Audition provides important cues with regard to stimulus motion although vision may provide the most salient information. It has been reported that a sound of fixed intensity tends to be judged as decreasing in intensity after adaptation to looming visual stimuli or as increasing in intensity after adaptation to receding visual stimuli. This audiovisual interaction in motion aftereffects indicates that there are multimodal contributions to motion perception at early levels of sensory processing. However, there has been no report that sounds can induce the perception of visual motion.

Methodology/Principal Findings

A visual stimulus blinking at a fixed location was perceived to be moving laterally when the flash onset was synchronized to an alternating left-right sound source. This illusory visual motion was strengthened with increasing retinal eccentricity (2.5 deg to 20 deg) and occurred more frequently when the onsets of the audio and visual stimuli were synchronized.

Conclusions/Significance

We clearly demonstrated that the alternation of sound location induces illusory visual motion when vision cannot provide accurate spatial information. The present findings strongly suggest that the neural representations of auditory and visual motion processing can bias each other, which yields the best estimates of external events in a complementary manner.

18.

Background

Vision provides the most salient information with regard to the stimulus motion. However, it has recently been demonstrated that static visual stimuli are perceived as moving laterally by alternating left-right sound sources. The underlying mechanism of this phenomenon remains unclear; it has not yet been determined whether auditory motion signals, rather than auditory positional signals, can directly contribute to visual motion perception.

Methodology/Principal Findings

Static visual flashes were presented at retinal locations outside the fovea together with a lateral auditory motion provided by a virtual stereo noise source smoothly shifting in the horizontal plane. The flash appeared to move by means of the auditory motion when the spatiotemporal position of the flashes was in the middle of the auditory motion trajectory. Furthermore, the lateral auditory motion altered visual motion perception in a global motion display where different localized motion signals of multiple visual stimuli were combined to produce a coherent visual motion perception.

Conclusions/Significance

These findings suggest there exist direct interactions between auditory and visual motion signals, and that there might be common neural substrates for auditory and visual motion processing.

19.
Ilg UJ, Schumann S, Thier P. Neuron 2004, 43(1): 145-151.
The motion areas of posterior parietal cortex extract information on visual motion for perception as well as for the guidance of movement. It is usually assumed that neurons in posterior parietal cortex represent visual motion relative to the retina. Current models describing action guided by moving objects work successfully based on this assumption. However, here we show that the pursuit-related responses of a distinct group of neurons in area MST of monkeys are at odds with this view. Rather than signaling object image motion on the retina, they represent object motion in world-centered coordinates. This representation may simplify the coordination of object-directed action and ego motion-invariant visual perception.
