Similar Documents
 20 similar documents found (search time: 15 ms)
1.
The role of the medial shell of the nucleus accumbens in the acquisition of spatial behavior was studied in rats performing a choice task in a radial maze with asymmetrical water reinforcement. Rats with nucleus accumbens lesions failed to find the larger rewards but preserved reward-seeking behavior guided by visual discriminative stimuli. These results are consistent with the suggestion that the nucleus accumbens is a site of convergence of spatial information (from the hippocampus) with reward information (from the amygdala and VTA), providing a bridge in the limbic-motor interface that underlies motivated, goal-directed behavior in animals.

2.
Human performance on various visual tasks can be improved substantially via training. However, the enhancements are frequently specific to relatively low-level stimulus dimensions. While such specificity has often been thought to be indicative of a low-level neural locus of learning, recent research suggests that these same effects can be accounted for by changes in higher-level areas, in particular in the way higher-level areas read out information from lower-level areas in the service of highly practiced decisions. Here we contrast the degree of orientation transfer seen after training on two different tasks: vernier acuity and stereoacuity. Importantly, while the decision rule that could improve vernier acuity (i.e. a discriminant in the image plane) would not be transferable across orientations, the simplest rule that could be learned to solve the stereoacuity task (i.e. a discriminant in the depth plane) would be insensitive to changes in orientation. Thus, given a read-out hypothesis, more substantial transfer would be expected as a result of stereoacuity than vernier acuity training. To test this prediction, participants were trained (7500 total trials) on either a stereoacuity (N = 9) or vernier acuity (N = 7) task with the stimuli in either a vertical or horizontal configuration (balanced across participants). Following training, transfer to the untrained orientation was assessed. As predicted, evidence for relatively orientation specific learning was observed in vernier trained participants, while no evidence of specificity was observed in stereo trained participants. These results build upon the emerging view that perceptual learning (even very specific learning effects) may reflect changes in inferences made by high-level areas, rather than necessarily fully reflecting changes in the receptive field properties of low-level areas.

3.
The visual system needs to extract the most important elements of the external world from a large flux of information in a short time for survival purposes. It is widely believed that in performing this task, it operates a strong data reduction at an early stage, by creating a compact summary of relevant information that can be handled by further levels of processing. In this work we formulate a model of early vision based on a pattern-filtering architecture, partly inspired by high-speed digital data reduction in experimental high-energy physics (HEP). This allows a much stronger data reduction than models based just on redundancy reduction. We show that optimizing this model for best information preservation under tight constraints on computational resources yields surprisingly specific a-priori predictions for the shape of biologically plausible features, and for experimental observations on fast extraction of salient visual features by human observers. Interestingly, applying the same optimized model to HEP data acquisition systems based on pattern-filtering architectures leads to specific a-priori predictions for the relevant data patterns that these devices extract from their inputs. These results suggest that the limitedness of computing resources can play an important role in shaping the nature of perception, by determining what is perceived as “meaningful features” in the input data.

4.
Variable saccade trajectories are produced in visual search paradigms in which multiple potential target stimuli are present. These variable trajectories provide a rich source of information that may lead to a deeper understanding of the basic control mechanisms of the saccadic system. We have used published behavioral observations and neural recordings in the superior colliculus (SC), gathered in monkeys performing visual search paradigms, to guide the construction of a new distributed model of the saccadic system. The new model can account for many of the variations in saccade trajectory produced by the appearance of multiple visual stimuli in a search paradigm. The model uses distributed feedback about current eye motion from the brainstem to the SC to reduce activity there at physiologically realistic rates during saccades. The long-range lateral inhibitory connections between SC cells used in previous models have been eliminated to match recent physiological evidence. The model features interactions between multiple visually activated populations of cells in the SC and distributed, topologically organized inhibitory input to the SC from the substantia nigra (SNr) to produce some of the types of variable saccadic trajectories, including slightly curved and averaging saccades, observed in visual search tasks. The distributed perisaccadic disinhibition of the SC from the SNr is assumed to have broad spatial tuning. In order to produce the strongly curved saccades occasionally recorded in visual search, the existence of a parallel input to the saccadic burst generators, in addition to that provided by the distributed input from the SC, is required. The spatiotemporal form of this additional parallel input is computed based on the assumption that the input from the model SC is realistic. In accordance with other recent models, it is assumed that the parallel input comes from the cerebellum, but our model predicts that the parallel input is delayed during highly curved saccadic trajectories.

5.
Humans and animals are able to learn complex behaviors based on a massive stream of sensory information from different modalities. Early animal studies have identified learning mechanisms that are based on reward and punishment, such that animals tend to avoid actions that lead to punishment whereas rewarded actions are reinforced. However, most algorithms for reward-based learning are only applicable if the dimensionality of the state-space is sufficiently small or its structure is sufficiently simple. Therefore, the question arises of how the problem of learning on high-dimensional data is solved in the brain. In this article, we propose a biologically plausible generic two-stage learning system that can directly be applied to raw high-dimensional input streams. The system is composed of a hierarchical slow feature analysis (SFA) network for preprocessing and a simple neural network on top that is trained based on rewards. We demonstrate by computer simulations that this generic architecture is able to learn quite demanding reinforcement learning tasks on high-dimensional visual input streams in a time that is comparable to the time needed when an explicit, highly informative low-dimensional state-space representation is given instead of the high-dimensional visual input. The learning speed of the proposed architecture in a task similar to the Morris water maze task is comparable to that found in experimental studies with rats. This study thus supports the hypothesis that slowness learning is one important unsupervised learning principle utilized in the brain to form efficient state representations for behavioral learning.
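As an illustration of the preprocessing stage described in this abstract, the following Python sketch implements plain linear slow feature analysis on a toy multivariate time series. It is a minimal, hypothetical stand-in for the authors' hierarchical SFA network (which operates on raw visual input), shown only to make the slowness objective concrete.

```python
import numpy as np

def linear_sfa(x, n_features=2):
    """Minimal linear slow feature analysis.

    x: array of shape (T, D) -- a multivariate time series.
    Returns the n_features output signals that vary most slowly in time.
    """
    # Center and whiten the input so that its covariance is the identity.
    x = x - x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    whitener = eigvec / np.sqrt(eigval)        # column j scaled by 1/sqrt(eigval[j])
    z = x @ whitener

    # Slowness: find directions that minimize the variance of the temporal derivative.
    dz = np.diff(z, axis=0)
    dcov = np.cov(dz, rowvar=False)
    dval, dvec = np.linalg.eigh(dcov)          # eigenvalues in ascending order
    w = dvec[:, :n_features]                   # slowest directions
    return z @ w

# Toy usage: a slow sinusoid mixed into fast noise is recovered as the slowest feature.
t = np.linspace(0, 10, 2000)
signal = np.column_stack([np.sin(0.5 * t), np.cos(0.5 * t)])
mixed = signal @ np.random.randn(2, 10) + 0.3 * np.random.randn(2000, 10)
slow = linear_sfa(mixed, n_features=1)
```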

6.
In everyday life, eye movements enable the eyes to gather the information required for motor actions. They are thus proactive, anticipating actions rather than just responding to stimuli. This means that the oculomotor system needs to know where to look and what to look for. Using examples from table tennis, driving and music reading we show that the information the eye movement system requires is very varied in origin and highly task specific, and it is suggested that the control program or schema for a particular action must include directions for the oculomotor and visual processing systems. In many activities (reading text and music, typing, steering) processed information is held in a memory buffer for a period of about a second. This permits a match between the discontinuous input from the eyes and continuous motor output, and in particular allows the eyes to be involved in more than one task.

7.
This paper contributes the first validation of swarm cognition as a useful framework for the design of autonomous robot controllers. The proposed model builds upon the authors' previous work, validated on a simulated robot performing local navigation in a 2-D deterministic world. Based on the ant foraging metaphor and motivated by the multiple covert attention hypothesis, the model consists of a set of simple virtual agents inhabiting the robot's visual input, searching in a collectively coordinated way for obstacles. Parsimonious and accurate visual attention, operating on a by-need basis, is attained by modulating the activity of these agents through the robot's action selection process. A by-product of the system is the maintenance of active, parallel and sparse spatial working memories. In short, the model exhibits the self-organisation of a relevant set of features composing a cognitive system. To show its robustness, the model is extended in this paper to handle the challenges of physical off-road robots equipped with noisy stereoscopic vision sensors. Furthermore, an extensive set of biological arguments supporting the model is provided. Experimental results show the ability of the model to robustly control the robot in a local navigation task, with less than 1% of the robot's visual input being analysed. Hence, with this system the computational cost of perception is considerably reduced, thus fostering robot miniaturisation and energetic efficiency. This confirms the advantages of using a swarm-based system, operating in an intricate way with action selection, to judiciously control visual attention and maintain sparse spatial memories, constituting a basic form of swarm cognition.

8.
Ribbon synapses of the retina
Vision is a highly complex task that involves several steps of parallel information processing in various areas of the central nervous system. Complex processing of visual signals occurs as early as the retina, the first stage in the visual system. Various aspects of visual information are transmitted in parallel from the photoreceptors (the input neurons of the retina) through their interconnecting bipolar cells to the ganglion cells (the output neurons). Photoreceptors and bipolar cells transfer information via the release of the neurotransmitter glutamate at a specialized synapse, the ribbon synapse. Although ribbon synapses have been known since the early days of electron microscopy, their precise functioning has yet to be explained. In this review, we highlight recent advances towards understanding the molecular composition and function of this enigmatic synapse. This study was supported by a grant from the Deutsche Forschungsgemeinschaft (BR 1643/4-1) to J.H.B.

9.
Selecting and remembering visual information is an active and competitive process. In natural environments, representations are tightly coupled to task. Objects that are task-relevant are remembered better due to a combination of increased selection for fixation and strategic control of encoding and/or retaining viewed information. However, it is not understood how physically manipulating objects when performing a natural task influences priorities for selection and memory. In this study, we compare priorities for selection and memory when actively engaged in a natural task with first-person observation of the same object manipulations. Results suggest that active manipulation of a task-relevant object results in a specific prioritization for object position information compared with other properties and compared with action observation of the same manipulations. Experiment 2 confirms that this spatial prioritization is likely to arise from manipulation rather than differences in spatial representation in real environments and the movies used for action observation. Thus, our findings imply that physical manipulation of task-relevant objects results in a specific prioritization of spatial information about task-relevant objects, possibly coupled with strategic de-prioritization of colour memory for irrelevant objects.

10.
11.
Genome annotation conceptually consists of inferring and assigning biological information to gene products. Over the years, numerous pipelines and computational tools have been developed aiming to automate this task and assist researchers in gaining knowledge about target genes of study. However, even with these technological advances, manual annotation or manual curation is necessary, in which the information attributed to the gene products is verified and enriched. Despite being called the gold-standard process for depositing data in a biological database, the task of manual curation requires significant time and effort from researchers, who sometimes have to parse through numerous products in various public databases. To assist with this problem, we present CODON, a tool for manual curation of genomic data, capable of performing the prediction and annotation process. This software makes use of a finite state machine in the prediction process and automatically annotates products based on information obtained from the Uniprot database. CODON is equipped with a simple and intuitive graphical interface that assists with manual curation, enabling the user to make decisions about the analysis based on the identity, the length of the alignment, and the name of the organism in which the product obtained a match. Further, visual analysis of all matches found in the database is possible, which significantly benefits the curation task, since the user has at their disposal all the information available for a given product. An analysis performed on eleven organisms was used to test the efficiency of this tool by comparing the prediction and annotation results from CODON with those from the NCBI and RAST platforms.
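The abstract does not describe CODON's finite state machine in detail, so the following Python sketch is purely hypothetical: a minimal two-state machine that scans one strand of a DNA sequence for open reading frames, switching from an intergenic to a coding state at ATG and back at a stop codon. It illustrates the general idea of FSM-based gene prediction, not CODON's actual implementation.

```python
# Hypothetical finite-state-machine gene prediction sketch (not CODON's implementation):
# scan one strand, one reading frame at a time, switching between an "intergenic"
# and a "coding" state on start/stop codons.
STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_len=30):
    seq = seq.upper()
    orfs = []
    for frame in range(3):
        state, start = "intergenic", None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if state == "intergenic" and codon == "ATG":
                state, start = "coding", i          # start codon: enter coding state
            elif state == "coding" and codon in STOPS:
                if i + 3 - start >= min_len:        # keep ORFs above a length cutoff
                    orfs.append((start, i + 3))
                state = "intergenic"
        # ORFs still open at the end of the sequence are discarded in this sketch.
    return orfs

print(find_orfs("CCATGAAATTTGGGCCCAAATTTGGGTAACC", min_len=15))   # [(2, 29)]
```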

12.
How and where object and spatial information are perceptually integrated in the brain is a central question in visual cognition. Single-unit physiology, scalp EEG, and fMRI research suggests that the prefrontal cortex (PFC) is a critical locus for object-spatial integration. To test the causal participation of the PFC in an object-spatial integration network, we studied ten patients with unilateral PFC damage performing a lateralized object-spatial integration task. Consistent with single-unit and neuroimaging studies, we found that PFC lesions result in a significant behavioral impairment in object-spatial integration. Furthermore, by manipulating inter-hemispheric transfer of object-spatial information, we found that masking of visual transfer impairs performance in the contralesional visual field in the PFC patients. Our results provide the first evidence that the PFC plays a key, causal role in an object-spatial integration network. Patient performance is also discussed within the context of compensation by the non-lesioned PFC.

13.

Background

In the continuum between a straight stroke and a circle, which includes all possible ellipses, some eccentricities seem more “biologically preferred” by the motor system than others, probably because they imply less demanding coordination patterns. Based on the idea that biological motion perception relies on knowledge of the laws that govern the motor system, we investigated whether motorically preferential and non-preferential eccentricities are visually discriminated differently. In contrast with previous studies that examined the effect of kinematic/temporal features of movements on their visual perception, we focused on geometric/spatial features, and therefore used a static visual display.

Methodology/Principal Findings

In a dual-task paradigm, participants visually discriminated 13 static ellipses of various eccentricities while performing a finger-thumb opposition sequence with either the dominant or the non-dominant hand. Our assumption was that because the movements used to trace ellipses are strongly lateralized, a motor task performed with the dominant hand should affect the simultaneous visual discrimination more strongly. We found that visual discrimination was not affected when the motor task was performed by the non-dominant hand. Conversely, it was impaired when the motor task was performed with the dominant hand, but only for the ellipses that we defined as preferred by the motor system, based on an assessment of individual preferences during an independent graphomotor task.

Conclusions/Significance

Visual discrimination of ellipses depends on the state of the motor neural networks controlling the dominant hand, but only when their eccentricity is “biologically preferred”. Importantly, this effect emerges on the basis of a static display, suggesting that what we call “biological geometry”, i.e., geometric features resulting from preferential movements, is relevant information for the visual processing of bidimensional shapes.
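For reference (not stated in the abstract), the eccentricity that parameterizes this stroke-to-circle continuum can be written, for an ellipse with semi-major axis $a$ and semi-minor axis $b$, as

$$e = \sqrt{1 - \frac{b^{2}}{a^{2}}},$$

so that $e = 0$ corresponds to a circle and $e \to 1$ to an ellipse collapsing toward a straight stroke.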

14.
Recalling information from visual short-term memory (VSTM) involves the same neural mechanisms as attending to an actually perceived scene. In particular, retrieval from VSTM has been associated with orienting of visual attention towards a location within a spatially-organized memory representation. However, an open question concerns whether spatial attention is also recruited during VSTM retrieval even when performing the task does not require access to spatial coordinates of items in the memorized scene. The present study combined a visual search task with a modified, delayed central probe protocol, together with EEG analysis, to answer this question. We found a temporal contralateral negativity (TCN) elicited by a centrally presented go-signal which was spatially uninformative and featurally unrelated to the search target and informed participants only about a response key that they had to press to indicate a prepared target-present vs. -absent decision. This lateralization during VSTM retrieval (TCN) provides strong evidence of a shift of attention towards the target location in the memory representation, which occurred despite the fact that the present task required no spatial (or featural) information from the search to be encoded, maintained, and retrieved to produce the correct response and that the go-signal did not itself specify any information relating to the location and defining feature of the target.

15.
Recent studies provide evidence for task-specific influences on saccadic eye movements. For instance, saccades exhibit higher peak velocity when the task requires coordinating eye and hand movements. The current study shows that the need to process task-relevant visual information at the saccade endpoint can be, in itself, sufficient to cause such effects. In this study, participants performed a visual discrimination task which required a saccade for successful completion. We compared the characteristics of these task-related saccades to those of classical target-elicited saccades, which required participants to fixate a visual target without performing a discrimination task. The results show that task-related saccades are faster and initiated earlier than target-elicited saccades. Differences between both saccade types are also noted in their saccade reaction time distributions and their main sequences, i.e., the relationship between saccade velocity, duration, and amplitude.

16.
Even though auditory stimuli do not directly convey information related to visual stimuli, they often improve visual detection and identification performance. Auditory stimuli often alter visual perception depending on the reliability of the sensory input, with visual and auditory information reciprocally compensating for ambiguity in the other sensory domain. Perceptual processing is characterized by hemispheric asymmetry: while the left hemisphere is more involved in linguistic processing, the right hemisphere dominates spatial processing. In this context, we hypothesized that an auditory facilitation effect would be observed in the right visual field for the target identification task and in the left visual field for the target localization task. In the present study, we conducted target identification and localization tasks using a dual-stream rapid serial visual presentation. When two targets are embedded in a rapid serial visual presentation stream, the target detection or discrimination performance for the second target is generally lower than for the first target; this deficit is well known as the attentional blink. Our results indicate that auditory stimuli improved target identification performance for the second target within the stream when visual stimuli were presented in the right, but not the left, visual field. In contrast, auditory stimuli improved second-target localization performance when visual stimuli were presented in the left visual field. An auditory facilitation effect was thus observed in perceptual processing, depending on hemispheric specialization. Our results demonstrate a dissociation between the lateral visual hemifield in which a stimulus is projected and the kind of visual judgment that may benefit from the presentation of an auditory cue.

17.
To analyze the information provided about individual visual stimuli in the responses of single neurons in the primate temporal lobe visual cortex, neuronal responses to a set of 65 visual stimuli were recorded in macaques performing a visual fixation task and analyzed using information theoretical measures. The population of neurons analyzed responded primarily to faces. The stimuli included 23 faces and 42 nonface images of real-world scenes, so that the function of this brain region could be analyzed when it was processing relatively natural scenes. It was found that for the majority of the neurons significant amounts of information were reflected about which of several of the 23 faces had been seen. Thus the representation was not local, for in a local representation almost all the information available can be obtained when the single stimulus to which the neuron responds best is shown. It is shown that the information available about any one stimulus depended on how different (for example, how many standard deviations) the response to that stimulus was from the average response to all stimuli. This was the case for responses below the average response as well as above. It is shown that the fraction of information carried by the low firing rates of a cell was large, much larger than that carried by the high firing rates. Part of the reason for this is that the probability distribution of different firing rates is biased toward low values (though with fewer very low values than would be predicted by an exponential distribution). Another factor is that the variability of the response is large at intermediate and high firing rates. Another finding is that at short sampling intervals (such as 20 ms) the neurons code information efficiently, by effectively acting as binary variables and behaving less noisily than would be expected of a Poisson process.
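As a toy illustration of the kind of information measure used here, the Python sketch below computes a plug-in estimate of the Shannon mutual information between a stimulus label and a binned spike-count response. It is a generic estimator applied to made-up data, not the authors' exact procedure (which is not specified in the abstract).

```python
import numpy as np

def mutual_information(stimuli, responses):
    """Plug-in estimate of I(S;R) in bits from paired stimulus/response samples.

    stimuli:   1-D array of stimulus labels (one entry per trial).
    responses: 1-D array of discretized responses, e.g. spike counts per trial.
    Note: this naive estimator is upward-biased when trial counts are small.
    """
    s_vals, s_idx = np.unique(stimuli, return_inverse=True)
    r_vals, r_idx = np.unique(responses, return_inverse=True)
    joint = np.zeros((len(s_vals), len(r_vals)))
    for i, j in zip(s_idx, r_idx):
        joint[i, j] += 1                       # joint histogram of (stimulus, response)
    joint /= joint.sum()
    ps = joint.sum(axis=1, keepdims=True)      # marginal over stimuli
    pr = joint.sum(axis=0, keepdims=True)      # marginal over responses
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])))

# Toy usage: a "face cell" that fires more to stimulus 0 than to stimuli 1 and 2.
rng = np.random.default_rng(0)
stims = rng.integers(0, 3, size=600)
rates = rng.poisson(np.where(stims == 0, 8, 2))
print(round(mutual_information(stims, rates), 2), "bits")
```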

18.
The effect upon perceived location of adding an extra dot offset from the centre of a cluster of pseudorandom dots was investigated using a vernier acuity task. With this technique, weighting functions showing the extent to which the added dot pulls the apparent location of the entire cluster can be defined as a function of distance from the centre of the cluster. When dot density within the cluster is high, the weighting functions approximate to what would be expected on the basis of centroid alignment. With low dot densities, it appears that performance is determined by aligning the outermost dots within each cluster. The peak amplitudes of these weighting functions are proportional to the square root of dot density within the clusters. The results are consistent with the view that each vernier element is localised in an orthoaxial direction prior to discrimination of the vernier offset.
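For the centroid-alignment account mentioned above, the pull of the added dot can be worked out directly: adding one dot at offset d to a cluster of N dots centred on the origin moves the centroid by d/(N+1). The short Python sketch below (with illustrative values, not the authors' stimuli or weighting-function fits) checks this arithmetic numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
n_dots = 63                                    # illustrative cluster size
cluster = rng.uniform(-1.0, 1.0, size=(n_dots, 2))
cluster -= cluster.mean(axis=0)                # centre the cluster on the origin

offset = np.array([0.4, 0.0])                  # extra dot offset from the cluster centre
with_extra = np.vstack([cluster, offset])

shift = with_extra.mean(axis=0)                # centroid of cluster plus the added dot
print(shift, offset / (n_dots + 1))            # both equal d/(N+1) under the centroid rule
```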

19.
A mathematical model of the primary visual cortex is presented. The model comprises two main features. First, in analogy with the principle of computerized tomography (CT), it assumes that simple cells in each hypercolumn are not merely detecting line segments in images as features, but rather that, as a whole, they represent the local image with a certain representation. Second, it assumes that each hypercolumn performs spatial frequency analyses of local images using that representation, and that the resultant spectra are represented by complex cells. The model is analyzed using numerical simulations and its advantages are discussed from the viewpoint of visual information processing. It is shown that 1) the proposed processing is tolerant to shifts in the position of input images, and that 2) spatial frequency filtering operations can be easily performed in the model.
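The shift tolerance claimed in point 1) follows from a basic Fourier property: translating image content changes only the phase of its spectrum, so an amplitude-spectrum representation of a local patch is unchanged. The Python sketch below checks this numerically for a circularly shifted patch; the plain 2-D FFT here is only an assumed stand-in for the model's hypercolumn representation.

```python
import numpy as np

rng = np.random.default_rng(2)
image = rng.standard_normal((64, 64))

def amplitude_spectrum(patch):
    # Local "hypercolumn" stand-in: magnitude of the 2-D Fourier transform of the patch.
    return np.abs(np.fft.fft2(patch))

patch = image[10:26, 10:26]
shifted = np.roll(patch, shift=(3, 5), axis=(0, 1))   # circularly shifted copy of the patch

a, b = amplitude_spectrum(patch), amplitude_spectrum(shifted)
print(np.allclose(a, b))   # True: the amplitude spectrum is invariant to the shift
```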

20.
The present paper concentrates on the impact of a visual attention task on the structure of the brain's functional and effective connectivity networks, using coherence and Granger causality methods. Since most studies have used correlation methods and resting-state functional connectivity, a task-based approach was selected for this experiment to extend our knowledge of spatial and feature-based attention. In the present study, the whole brain was divided into 82 sub-regions based on Brodmann areas, and coherence and Granger causality were applied to construct functional and effective connectivity matrices. These matrices were converted into graphs using a threshold, and graph-theoretic measures, including degree and characteristic path length, were calculated from them. Visual attention was found to reveal more information during the spatial-based task: degree was higher and characteristic path length was lower during the spatial-based task in both functional and effective connectivity. The primary and secondary visual cortices (Brodmann areas 17 and 18) were highly connected to the parietal and prefrontal cortices during the visual attention task. Whole-brain connectivity was also calculated for both functional and effective connectivity. Our results reveal that Brodmann areas 17, 18, 19, 46, 3 and 4 played a significant role, showing that somatosensory, parietal and prefrontal regions, along with the visual cortex, were highly connected to other parts of the cortex during the visual attention task. Characteristic path length results indicated an increase in functional connectivity and more functional integration in spatial-based attention compared with feature-based attention. The results of this work can provide useful information about the mechanism of visual attention at the network level.
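As a generic illustration of the graph construction described above (with made-up data, not the authors' 82-region Brodmann parcellation or their connectivity estimates), the Python sketch below thresholds a symmetric connectivity matrix into a binary graph and computes node degree and the characteristic path length by breadth-first search.

```python
import numpy as np
from collections import deque

def graph_measures(connectivity, threshold):
    """Degree and characteristic path length of a thresholded connectivity matrix."""
    adj = (connectivity > threshold).astype(int)
    np.fill_diagonal(adj, 0)                   # no self-loops
    degree = adj.sum(axis=1)

    n = adj.shape[0]
    lengths = []
    for src in range(n):                       # BFS shortest paths from each node
        dist = np.full(n, -1)
        dist[src] = 0
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in np.flatnonzero(adj[u]):
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        lengths.extend(dist[dist > 0])         # keep finite, nonzero distances only
    char_path_length = float(np.mean(lengths)) if lengths else float("inf")
    return degree, char_path_length

# Toy usage with a random symmetric "coherence" matrix standing in for real data.
rng = np.random.default_rng(3)
c = rng.uniform(size=(10, 10))
c = (c + c.T) / 2
deg, cpl = graph_measures(c, threshold=0.5)
print(deg, cpl)
```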
