Similar Literature
20 matching documents found (search time: 15 ms)
1.
The perception of events in space and time is at the root of our interactions with the environment. The precision with which we perceive visual events in time enables us to act upon objects with great accuracy and the loss of such functions due to brain lesions can be catastrophic. We outline a visual timing mechanism that deals with the trajectory of an object's existence across time, a crucial function when keeping track of multiple objects that temporally overlap or occur sequentially. Recent evidence suggests these functions are served by an extended network of areas, which we call the 'when' pathway. Here we show that the when pathway is distinct from and interacts with the well-established 'where' and 'what' pathways.

2.

Background

Tracking moving objects in space is important for the maintenance of spatiotemporal continuity in everyday visual tasks. In the laboratory, this ability is tested using the Multiple Object Tracking (MOT) task, where participants track a subset of moving objects with attention over an extended period of time. The ability to track multiple objects with attention is severely limited. Recent research has shown that this ability may improve with extensive practice (e.g., from action videogame playing). However, whether tracking also improves in a short training session with repeated trajectories has rarely been investigated. In this study we examine the role of visual learning in multiple-object tracking and characterize how varieties of attention interact with visual learning.

Methodology/Principal Findings

Participants first performed attentive tracking on trials with repeated motion trajectories for a short session. In a transfer phase we used the same motion trajectories but changed the roles of tracking targets and nontargets. We found that, compared with novel trials, tracking was enhanced only when the target subset was the same as that used during training. Learning did not transfer when the previously trained targets and nontargets switched roles or were intermixed. However, learning was not specific to the trained temporal order, as it transferred to trials where the motion was played backwards.

Conclusions/Significance

These findings suggest that the demanding task of tracking multiple objects can benefit from learning of repeated motion trajectories. Such learning potentially facilitates tracking in natural vision, although it is largely confined to the trajectories of attended objects. Furthermore, we showed that learning in attentive tracking relies on relational coding of all target trajectories. Surprisingly, learning was not specific to the trained temporal context, probably because observers learned the motion path of each trajectory independently of the exact temporal order.

3.
D Cheong, JK Zubieta, J Liu. PLoS ONE 2012, 7(6): e39854
Predicting the trajectories of moving objects in our surroundings is important for many life scenarios, such as driving, walking, reaching, hunting and combat. We determined human subjects' performance and task-related brain activity in a motion trajectory prediction task. The task required spatial and motion working memory as well as the ability to extrapolate motion information in time to predict future object locations. We showed that the neural circuits associated with motion prediction included frontal, parietal and insular cortex, as well as the thalamus and the visual cortex. Interestingly, deactivation of many of these regions seemed to be more closely related to task performance. The differential activity during motion prediction vs. direct observation was also correlated with task performance. The neural networks involved in our visual motion prediction task are significantly different from those that underlie visual motion memory and imagery. Our results set the stage for the examination of the effects of deficiencies in these networks, such as those caused by aging and mental disorders, on visual motion prediction and its consequences on mobility-related daily activities.

4.
Complex neurodynamical systems are difficult to analyze and understand. New types of plots are introduced to help visualize high-dimensional trajectories and to show a global picture of the phase space, including relations between basins of attractors. Color recurrence plots (RPs) display distances from each point on the trajectory to all other points in a two-dimensional matrix. Fuzzy Symbolic Dynamics (FSD) plots enhance this information by mapping the whole trajectory to two or three dimensions. Each coordinate is defined by the value of a fuzzy localized membership function, optimized to visualize interesting features of the dynamics, showing to what degree a point on the trajectory belongs to some neighborhood. The variance of the trajectory within an attraction basin, plotted against the variance of the synaptic noise, provides information about the sizes and shapes of these basins. Plots that use color to show the distance between each trajectory point and a larger number of selected reference points (for example, centers of attractor basins) are also introduced. The activity of 140 neurons in the semantic layer of a dyslexia model implemented in the Emergent neural simulator is analyzed in detail, showing different aspects of neurodynamics that may be understood in this way. The influence of connectivity and various neural properties on network dynamics is illustrated using these visualization techniques. A number of interesting conclusions about the cognitive neurodynamics of lexical concept activations are drawn. Changing neural accommodation parameters has a very strong influence on the dwell time of the trajectories. This may be linked to the attention deficits observed in autism in the case of strong enslavement, and to ADHD-like behavior in the case of weak enslavement.
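The color recurrence plot described in this abstract is, at bottom, the matrix of pairwise distances between all points on a trajectory. A minimal sketch of that computation (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def recurrence_matrix(trajectory):
    """Pairwise distances between all points of a trajectory.

    trajectory: (T, D) array of T states in a D-dimensional phase space.
    Returns a (T, T) matrix whose (i, j) entry is ||x_i - x_j||;
    rendering it with a color map gives a color recurrence plot.
    """
    diff = trajectory[:, None, :] - trajectory[None, :, :]
    return np.linalg.norm(diff, axis=-1)

# Toy example: two loops of a circular trajectory in 2-D phase space.
t = np.linspace(0, 4 * np.pi, 200)
traj = np.stack([np.cos(t), np.sin(t)], axis=1)
R = recurrence_matrix(traj)
```

Plotted as an image, near-zero off-diagonal bands in `R` mark the times at which the trajectory revisits earlier regions of the phase space.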

5.
Chaotic dynamics introduced into a recurrent neural network model is applied to controlling an object so that it tracks a moving target in two-dimensional space, a problem set up here as ill-posed. The motion increments of the object are determined by a group of motion functions calculated in real time from the firing states of the neurons in the network. Several cyclic memory attractors, corresponding to simple motions of the object in two-dimensional space, are embedded. The chaotic dynamics introduced into the network causes correspondingly complex motions of the object. Adaptive real-time switching of a control parameter produces constrained chaos (chaotic itinerancy) in the state space of the network and enables the object to successfully track a moving target along a given trajectory. Tracking performance is evaluated by the success rate over 100 trials for each of nine kinds of trajectory along which the target moves. Computer experiments show that chaotic dynamics is useful for tracking a moving target, and the structure of the underlying chaotic dynamics is investigated from a dynamical-systems viewpoint to understand these results.

6.
In this paper we propose an unsupervised neural architecture, called the Temporal Parametrized Self-Organizing Map (TEPSOM), capable of learning and reproducing complex robot trajectories and of interpolating new states between the learned ones. The TEPSOM combines the Self-Organizing NARX (SONARX) network, responsible for coding the temporal associations of the robotic trajectory, with the Parametrized Self-Organizing Map (PSOM) network, responsible for an efficient interpolation mechanism acting on the SONARX neurons. The TEPSOM network is used to model the inverse kinematics of the PUMA 560 robot during the execution of trajectories with repeated states. Simulation results show that the TEPSOM is more accurate than the SONARX in reproducing the learned trajectories.

7.
Organization of voluntary movement.
There have recently been a number of advances in our knowledge of the organization of complex, multi-joint movements. Promising starts have been made in our understanding of how the motor system translates information about the location of external targets into motor commands encoded in a body-based coordinate system. Two simplifying strategies for trajectory control are discussed: parallel specification of response features and the programming of equilibrium trajectories. New insights have also been gained into how neural systems process sensory information to plan and assist with task performance. A number of recent papers emphasize the feedforward use of sensory input, which is mediated through models of the external world, the body's physical plant, and the task structure. These models exert their influence at both reflex and higher levels and permit the preparation of predictive default parameters of trajectories as well as strategies for resolving task demands.

8.
Men and women differ in their ability to solve spatial problems. There are two possible proximate explanations for this: (i) men and women differ in the kind (and value) of information they use and/or (ii) their cognitive abilities differ with respect to spatial problems. Using a simple computerized task which could be solved either by choosing an object based on what it looked like, or by its location, we found that the women relied on the object's visual features to solve the task, while the men used both visual and location information. There were no differences between the sexes in memory for the visual features of the objects, but women were poorer than men at remembering the locations of objects.

9.
I investigate essential neuronal mechanisms of visual attention based on object-based theory and a biased-competition scheme. A neural network model is proposed that consists of two feature networks, FI and FII, and one object network, OJ. The FI and FII networks send feedforward projections to the OJ network and receive feedback projections from it in a convergent/divergent manner. The OJ network integrates information about sensory features originating from the FI and FII networks into information about objects. I let the feature networks and the object network memorize individual features and objects according to the Hebbian learning rule, creating point attractors corresponding to these features and objects as long-term memories in the network dynamics. When the model tries to attend to objects that are superimposed, the point attractors relevant to the two objects emerge in each network. After a short interval (hundreds of milliseconds), the point attractors relevant to one of the two objects are selected and the others are completely suppressed. I suggest that coherent interactions of dynamical attractors relevant to the selected object may be the neuronal substrate for object-based selective attention. Bottom-up (FI-to-OJ and FII-to-OJ) neuronal mechanisms separate candidate objects from the background, and top-down (OJ-to-FI and OJ-to-FII) mechanisms resolve the competition through which one relevant object is selected from the candidates.
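The Hebbian storage of point attractors that this model relies on can be illustrated with a classic Hopfield-style network. This is a generic sketch of the principle only, not the paper's two-feature/one-object architecture; all names and patterns are illustrative:

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian outer-product rule: W = (1/P) * sum_p x_p x_p^T, zero diagonal."""
    X = np.asarray(patterns, dtype=float)
    W = X.T @ X
    np.fill_diagonal(W, 0.0)
    return W / len(patterns)

def settle(W, state, steps=20):
    """Synchronous sign updates; stored patterns act as point attractors."""
    s = np.asarray(state, dtype=float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0  # break ties toward +1
    return s

# Store two orthogonal +/-1 patterns, then recover one from a corrupted cue.
p1 = np.array([1, -1, 1, -1, 1, -1, 1, -1])
p2 = np.array([1, 1, -1, -1, 1, 1, -1, -1])
W = hebbian_weights([p1, p2])
cue = p1.copy()
cue[0] = -1               # flip one element of the memorized pattern
recovered = settle(W, cue)
```

With orthogonal patterns, the one-bit-corrupted cue settles back onto the stored pattern in a single update, which is the point-attractor behavior the abstract describes.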

10.
Journal of Physiology 2013, 107(5): 409-420
During normal viewing, the continuous stream of visual input is regularly interrupted, for instance by eye blinks. Despite these frequent blanks (that is, the transient absence of a raw sensory source), the visual system is most often able to maintain a continuous representation of motion; for instance, it maintains the movement of the eye so as to stabilize the image of an object. This ability suggests the existence of a generic neural mechanism of motion extrapolation that deals with fragmented inputs. In this paper, we model how the visual system may extrapolate the trajectory of an object during a blank using motion-based prediction. This implies that, using a prior on the coherency of motion, the system may integrate previous motion information even in the absence of a stimulus. In order to compare with experimental results, we simulated tracking velocity responses. We found that the response of the motion integration process to a blanked trajectory pauses at the onset of the blank, but quickly recovers the information on the trajectory after reappearance. This is compatible with behavioral and neural observations on motion extrapolation. To understand these mechanisms, we recorded the response of the model to a noisy stimulus. Crucially, we found that motion-based prediction acts at the global level as a gain control mechanism, and that the model can switch from a smooth regime to a binary tracking behavior in which the dot is either tracked or lost. Our results imply that a local prior implementing motion-based prediction is sufficient to explain a large range of neural and behavioral results at a more global level. We show that tracking behavior deteriorates for sensory noise levels above a certain value, beyond which motion coherency and predictability no longer hold. In particular, we found that motion-based prediction leads to the emergence of tracking behavior only when enough information from the trajectory has been accumulated. Then, during tracking, trajectory estimation is robust to blanks even in the presence of relatively high levels of noise. Finally, we found that tracking is necessary for motion extrapolation; this calls for further experimental work exploring the role of noise in motion extrapolation.
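The core idea here, extrapolating a trajectory through a blank by trusting a motion prediction when no measurement arrives, can be sketched with a simple predictor-corrector filter. This is a toy 1-D illustration under assumed parameter values, not the paper's probabilistic model:

```python
def track_with_blank(observations, dt=1.0, gain=0.5):
    """Estimate position from noisy 1-D observations of a moving target.

    observations: list of floats, with None marking a blank (stimulus
    absent). During a blank the filter extrapolates with its current
    velocity estimate (motion-based prediction); when the target
    reappears, it blends the prediction with the new measurement.
    """
    pos, vel = observations[0], 0.0
    estimates = [pos]
    for z in observations[1:]:
        pred = pos + vel * dt            # predict along the trajectory
        if z is None:                    # blank: trust the prediction
            pos = pred
        else:                            # visible: correct the prediction
            innovation = z - pred
            pos = pred + gain * innovation
            vel = vel + (gain / dt) * innovation
        estimates.append(pos)
    return estimates

# Target moves at +1 per step and disappears for three steps (a "blank").
obs = [0, 1, 2, 3, 4, None, None, None, 8, 9]
est = track_with_blank(obs)
```

Once enough of the trajectory has been seen to build up a velocity estimate, the position estimate keeps advancing through the blank and re-locks onto the target at reappearance, mirroring the behavior the abstract reports.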

11.
One of the most remarkable capabilities of the adult brain is its ability to learn and continuously adapt to an ever-changing environment. While many studies have documented how learning improves the perception and identification of visual stimuli, relatively little is known about how it modifies the underlying neural mechanisms. We trained monkeys to identify natural images that were degraded by interpolation with visual noise. We found that learning led to an improvement in monkeys' ability to identify these indeterminate visual stimuli. We link this behavioral improvement to a learning-dependent increase in the amount of information communicated by V4 neurons. This increase was mediated by a specific enhancement in neural activity. Our results reveal a mechanism by which learning increases the amount of information that V4 neurons are able to extract from the visual environment. This suggests that V4 plays a key role in resolving indeterminate visual inputs by coordinated interaction between bottom-up and top-down processing streams.

12.
Towards understanding of the cortical network underlying associative memory
Declarative knowledge and experiences are represented in the association cortex and are recalled by reactivation of the neural representation. Electrophysiological experiments have revealed that associations between semantically linked visual objects are formed in neural representations in the temporal and limbic cortices. Memory traces are created by the reorganization of neural circuits, and these regions are reactivated during retrieval to contribute the contents of a memory. Two different types of retrieval signal are suggested: automatic and active. One flows backward from the medial temporal lobe during the automatic retrieval process, whereas the other is conveyed as a top-down signal from the prefrontal cortex to the temporal cortex during the active retrieval process. By sending the top-down signal, the prefrontal cortex manipulates and organizes to-be-remembered information, devises strategies for retrieval and monitors the outcome. To further understand the neural mechanism of memory, two complementary views are needed: how the multiple cortical areas in the brain-wide network interact to orchestrate cognitive functions, and how the properties of single neurons and their synaptic connections with neighbouring neurons combine to form local circuits that exhibit the function of each cortical area. We discuss some new methodological innovations that tackle these challenges.

13.
Perception of objects and motions in the visual scene is one of the basic problems in the visual system. There exist 'what' and 'where' pathways in the superior visual cortex, starting from the simple cells in the primary visual cortex. The former perceives object properties such as form, color, and texture, while the latter perceives spatial properties such as the velocity and direction of object motion. This paper explores brain-like computational architectures for visual information processing. We propose a visual perceptual model and a computational mechanism for training it. The computational model is a three-layer network. The first layer is the input layer, which receives stimuli from natural environments. The second layer represents the internal neural information. The connections between the first and second layers, called the receptive fields of the neurons, are learned self-adaptively based on the principle of sparse neural representation. To this end, we introduce the Kullback-Leibler divergence as a measure of independence between neural responses and derive the learning algorithm by minimizing the corresponding cost function. The proposed algorithm is applied to train the basis functions, namely the receptive fields, which turn out to be localized, oriented, and bandpass. The resultant receptive fields of the neurons in the second layer thus resemble those of simple cells in the primary visual cortex. Based on these basis functions, we further construct the third layer for the perception of what and where, as in the superior visual cortex. The proposed model perceives objects and their motions with high accuracy and strong robustness against additive noise. Computer simulation results in the final section show the feasibility of the proposed perceptual model and the high efficiency of the learning algorithm.
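The use of Kullback-Leibler divergence as a dependence measure can be illustrated generically: the KL divergence between a joint response distribution and the product of its marginals is zero exactly when the responses are independent (this quantity is the mutual information). A minimal sketch over discretized responses; names and distributions are illustrative, not the paper's algorithm:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) = sum_i p_i * log(p_i / q_i) for discrete distributions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def dependence(joint):
    """KL divergence between a joint distribution over two neurons'
    (discretized) responses and the product of its marginals.
    Zero iff the responses are statistically independent."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)   # marginal of neuron 1
    py = joint.sum(axis=0, keepdims=True)   # marginal of neuron 2
    return kl_divergence(joint.ravel(), (px * py).ravel())

independent = np.outer([0.5, 0.5], [0.25, 0.75])  # joint factorizes
correlated = np.array([[0.5, 0.0],                # responses perfectly
                       [0.0, 0.5]])               # coupled
```

Minimizing such a dependence measure over neural responses is one way to drive a code toward independent (and, with suitable priors, sparse) representations.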


15.
An unsupervised neural network is proposed to learn and recall complex robot trajectories. Two cases are considered: (i) A single trajectory in which a particular arm configuration (state) may occur more than once, and (ii) trajectories sharing states with each other. Ambiguities occur in both cases during recall of such trajectories. The proposed model consists of two groups of synaptic weights trained by competitive and Hebbian learning laws. They are responsible for encoding spatial and temporal features of the input sequences, respectively. Three mechanisms allow the network to deal with repeated or shared states: local and global context units, neurons disabled from learning, and redundancy. The network reproduces the current and the next state of the learned sequences and is able to resolve ambiguities. The model was simulated over various sets of robot trajectories in order to evaluate learning and recall, trajectory sampling effects and robustness.

16.
Over successive stages, the ventral visual system of the primate brain develops neurons that respond selectively to particular objects or faces with translation, size and view invariance. The powerful neural representations found in inferotemporal cortex form a remarkably rapid and robust basis for object recognition, which belies the difficulties faced by the system when learning in natural visual environments. A central issue in understanding the process of biological object recognition is how these neurons learn to form separate representations of objects from complex visual scenes composed of multiple objects. We show how a one-layer competitive network composed of 'spiking' neurons is able to learn separate transformation-invariant representations (exemplified by one-dimensional translations) of visual objects that are always seen together moving in lock-step, but separated in space. This is achieved by combining 'Mexican hat' functional lateral connectivity with cell firing-rate adaptation to temporally segment input representations of competing stimuli through anti-phase oscillations (perceptual cycles). These spiking dynamics are quickly and reliably generated, enabling selective modification of the feed-forward connections to neurons in the next layer through Spike-Time-Dependent Plasticity (STDP), resulting in separate translation-invariant representations of each stimulus. Variations in key properties of the model are investigated with respect to the network's ability to develop appropriate input representations and, subsequently, output representations through STDP. Contrary to earlier rate-coded models of this learning process, this work shows how spiking neural networks may learn about more than one stimulus together without suffering from the 'superposition catastrophe'. We take these results to suggest that spiking dynamics are key to understanding biological visual object recognition.
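The pair-based STDP rule invoked above, potentiation when a presynaptic spike precedes a postsynaptic one and depression otherwise, both decaying exponentially with the spike-time difference, can be sketched as follows. Parameter values and names are illustrative, not taken from this model:

```python
import numpy as np

def stdp(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP window.

    delta_t = t_post - t_pre (ms). Pre-before-post (delta_t > 0)
    potentiates the synapse; post-before-pre depresses it, with
    exponential decay over the timing difference.
    """
    if delta_t > 0:
        return a_plus * np.exp(-delta_t / tau)
    return -a_minus * np.exp(delta_t / tau)

def apply_stdp(w, pre_times, post_times, w_min=0.0, w_max=1.0):
    """Accumulate all pairwise updates for one synapse, then clip."""
    dw = sum(stdp(tp - tq) for tq in pre_times for tp in post_times)
    return float(np.clip(w + dw, w_min, w_max))

# Pre fires 5 ms before post -> the synapse strengthens.
w_potentiated = apply_stdp(0.5, pre_times=[10.0], post_times=[15.0])
# Reversed order -> the synapse weakens.
w_depressed = apply_stdp(0.5, pre_times=[15.0], post_times=[10.0])
```

Because updates depend only on relative spike timing, the anti-phase oscillations described in the abstract let each stimulus modify its own feed-forward weights during its half of the perceptual cycle.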

17.
Sensory information about the outside world is encoded by neurons in sequences of discrete, identical pulses termed action potentials or spikes. There is persistent controversy about the extent to which the precise timing of these spikes is relevant to the function of the brain. We revisit this issue, using the motion-sensitive neurons of the fly visual system as a test case. Our experimental methods allow us to deliver more nearly natural visual stimuli, comparable to those which flies encounter in free, acrobatic flight. New mathematical methods allow us to draw more reliable conclusions about the information content of neural responses even when the set of possible responses is very large. We find that significant amounts of visual information are represented by details of the spike train at millisecond and sub-millisecond precision, even though the sensory input has a correlation time of ~55 ms; different patterns of spike timing represent distinct motion trajectories, and the absolute timing of spikes points to particular features of these trajectories with high precision. Finally, the efficiency of our entropy estimator makes it possible to uncover features of neural coding relevant for natural visual stimuli. First, the system's information transmission rate varies with natural fluctuations in light intensity resulting from varying cloud cover, such that marginal increases in information rate occur even when the individual photoreceptors are counting on the order of one million photons per second. Second, the system exploits the relatively slow dynamics of the stimulus to remove coding redundancy and so generate a more efficient neural code.

18.
Andrews TJ. Current Biology 2005, 15(12): R451-R453
The way in which information about complex objects and faces is represented in visual cortex is controversial. One model posits that information is processed in modules, highly specialized for different categories of objects; an opposing model appeals to a distributed representation across a large network of visual areas. A recent paper uses a novel imaging technique to address this controversy.

19.
The interaction of visual and proprioceptive afferentation was studied in a motor task requiring discrimination of the weights of falling objects. The availability of visual information reduced motor response time; however, the degree of shortening depended on the type of information. The decrease in response time was significantly greater when the subject saw the actual onset of the object's fall than when the subject received only an abstract visual signal at the moment an electromagnet released the object. Seeing the initial part of the real trajectory, rather than an abstract signal marking the beginning of the fall, may allow the subject to better predict the moment of impact.

20.
Spinal motor control system incorporates an internal model of limb dynamics
The existence and utilization of an internal representation of the controlled object is one of the most important features of neural motor control systems. This study demonstrates that this property already exists at the level of the spinal motor control system (SMCS), which is capable of generating motor patterns for reflex rhythmic movements, such as locomotion and scratching, without the aid of peripheral afferent feedback, yet substantially modifies the generated activity in response to peripheral afferent stimuli. The SMCS is presented as an optimal control system whose optimality requires that it incorporate an internal model (IM) of the controlled object's dynamics. A novel functional mechanism for the integration of peripheral sensory signals with the corresponding predictive output from the IM, the summation of information precision (SIP), is proposed. In contrast to other models, in which the correction of the internal representation of the controlled object's state is based on calculating a mismatch between the internal and external information sources, the SIP mechanism merges the information from these sources in order to optimize the precision of the estimate of the controlled object's state. Using scratching in decerebrate cats as an example of spinal control of goal-directed movements, it is demonstrated that the results of computer modeling agree with experimental observations of the SMCS's reactions to phasic and tonic peripheral afferent stimuli. It is also shown that the functional requirements imposed by the mathematical model of the SMCS comply with current knowledge about the relevant properties of spinal neuronal circuitry. The crucial role of the spinal presynaptic inhibition mechanism in the neuronal implementation of SIP is elucidated. Important differences between the IM and a state predictor employed to compensate for a neural reflex time delay are discussed.
Received: 8 February 2000 / Accepted: 24 March 2000
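The SIP idea of merging sources by precision, rather than computing a mismatch between them, resembles generic inverse-variance fusion of two Gaussian estimates of the same state. A minimal sketch under that reading (function names and numbers are illustrative, not the paper's formulation):

```python
def fuse(mu_sensor, var_sensor, mu_model, var_model):
    """Precision-weighted fusion of two estimates of the same state.

    Each estimate is a (mean, variance) pair; precision = 1/variance.
    The fused precision is the sum of the input precisions, so the
    combined estimate is never less certain than either source.
    """
    p_s, p_m = 1.0 / var_sensor, 1.0 / var_model
    p_fused = p_s + p_m
    mu_fused = (p_s * mu_sensor + p_m * mu_model) / p_fused
    return mu_fused, 1.0 / p_fused

# Noisy afferent reading vs. a more confident internal-model prediction:
mu, var = fuse(mu_sensor=12.0, var_sensor=4.0, mu_model=10.0, var_model=1.0)
```

The fused estimate (mean 10.4, variance 0.8) lies closer to the more precise internal-model prediction, capturing how merging by precision differs from mismatch-based correction.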
