Similar Articles
Found 20 similar articles (search time: 484 ms)
1.
Attention to surfaces modulates motion processing in extrastriate area MT   Cited by: 1 (self-citations: 0, other citations: 1)
Wannig A, Rodríguez V, Freiwald WA. Neuron, 2007, 54(4): 639-651
In the visual system, early atomized representations are grouped into higher-level entities through processes of perceptual organization. Here we present neurophysiological evidence that a representation of a simple object, a surface defined by color and motion, can be the unit of attentional selection at an early stage of visual processing. Monkeys were cued by the color of a fixation spot to attend to one of two transparent random-dot surfaces, one red and one green, which occupied the same region of space. Motion of the attended surface drove neurons in the middle temporal (MT) visual area more strongly than physically identical motion of the non-attended surface, even though both occurred within the spotlight of attention. Surface-based effects of attention persisted even without differential surface coloring, but attentional modulation was stronger with color. These results show that attention can select surface representations to modulate visual processing as early as cortical area MT.

2.
3.
Chromatic information is carried only by the parvocellular pathway, giving the neurophysiologist the opportunity for eliciting specific responses. Further subdivision of the parvo chromatic system into two opponent chromatic mechanisms is potentially of great interest, given that the anatomical correlate seems to reside in subclasses of parvo ganglion cells that show differences both in size and in susceptibility to disease. We separately recorded responses arising from each chromatic opponent mechanism using visual stimuli chosen to belong to one of the "cardinal" chromatic axes. A calibrated color monitor, driven by a high resolution (14 bits/gun) computer board, was used for visualization of 1 c/deg isoluminant color gratings, sinusoidally modulated in time at 4 Hz. VECPs were recorded at several color contrasts along both cardinal axes, allowing extrapolation of contrast thresholds. Psychophysical thresholds were derived in the same stimulus conditions for comparison and found to correlate very well with the electrophysiologically derived values, both across subjects and across chromatic axes. The S-(L+M) opponent mechanism consistently yielded higher thresholds, smaller amplitudes, and higher phase lags than the L-M mechanism. This finding was largely explained by the perceptual non-uniformity of the CIE chromaticity diagram. Correcting the VECP data for the perceptual differences yielded comparable responses, supporting the view that the two mechanisms are similarly represented in the cortex. In conclusion, recording of cortical responses to color contrast stimuli belonging to the cardinal chromatic axes seems a reliable procedure and may prove to be useful in performing clinical evaluations that refine the assessment of the physiology of the visual system.

4.
It is clear that humans have mental representations of their spatial environments and that these representations are useful, if not essential, in a wide variety of cognitive tasks such as identification of landmarks and objects, guiding actions and navigation and in directing spatial awareness and attention. Determining the properties of mental representation has long been a contentious issue (see Pinker, 1984). One method of probing the nature of human representation is by studying the extent to which representation can surpass or go beyond the visual (or sensory) experience from which it derives. From a strictly empiricist standpoint what is not sensed cannot be represented; except as a combination of things that have been experienced. But perceptual experience is always limited by our view of the world and the properties of our visual system. It is therefore not surprising when human representation is found to be highly dependent on the initial viewpoint of the observer and on any shortcomings thereof. However, representation is not a static entity; it evolves with experience. The debate as to whether human representation of objects is view-dependent or view-invariant that has dominated research journals recently may simply be a discussion concerning how much information is available in the retinal image during experimental tests and whether this information is sufficient for the task at hand. Here we review an approach to the study of the development of human spatial representation under realistic problem solving scenarios. This is facilitated by the use of realistic virtual environments, exploratory learning and redundancy in visual detail.

5.
Visual saliency is a fundamental yet hard to define property of objects or locations in the visual world. In a context where objects and their representations compete to dominate our perception, saliency can be thought of as the "juice" that makes objects win the race. It is often assumed that saliency is extracted and represented in an explicit saliency map, which serves to determine the location of spatial attention at any given time. It is then by drawing attention to a salient object that it can be recognized or categorized. I argue against this classical view that visual "bottom-up" saliency automatically recruits the attentional system prior to object recognition. A number of visual processing tasks are clearly performed too fast for such a costly strategy to be employed. Rather, visual attention could simply act by biasing a saliency-based object recognition system. Under natural conditions of stimulation, saliency can be represented implicitly throughout the ventral visual pathway, independent of any explicit saliency map. At any given level, the most activated cells of the neural population simply represent the most salient locations. The notion of saliency itself grows increasingly complex throughout the system, mostly based on luminance contrast until information reaches visual cortex, gradually incorporating information about features such as orientation or color in primary visual cortex and early extrastriate areas, and finally the identity and behavioral relevance of objects in temporal cortex and beyond. Under these conditions the object that dominates perception, i.e. the object yielding the strongest (or the first) selective neural response, is by definition the one whose features are most "salient"--without the need for any external saliency map. In addition, I suggest that such an implicit representation of saliency can be best encoded in the relative times of the first spikes fired in a given neuronal population. 
In accordance with our subjective experience that saliency and attention do not modify the appearance of objects, the feed-forward propagation of this first spike wave could serve to trigger saliency-based object recognition outside the realm of awareness, while conscious perceptions could be mediated by the remaining discharges of longer neuronal spike trains.
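The latency-coding idea sketched in this abstract can be illustrated in a few lines: if the first-spike time of a cell decreases with its activation, then the rank order of spikes across the population carries the saliency ranking with no explicit saliency map. The linear intensity-to-latency mapping and the activation values below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def first_spike_latencies(activations, t_max=100.0):
    """Map activation strength to first-spike time (ms): the most activated
    cell fires at t = 0 and weaker cells fire later, so the rank order of
    first spikes implicitly encodes saliency."""
    a = np.asarray(activations, dtype=float)
    return t_max * (1.0 - a / a.max())

# Hypothetical population activations for four locations.
acts = [0.2, 0.9, 0.5, 0.7]
lat = first_spike_latencies(acts)

# The earliest spike "wins the race": that location dominates perception.
most_salient = int(np.argmin(lat))
```

A feed-forward readout only needs the order of these latencies, which is why a single spike wave can suffice for fast recognition.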

6.

Background

How do people sustain a visual representation of the environment? Currently, many researchers argue that a single visual working memory system sustains non-spatial object information such as colors and shapes. However, previous studies tested visual working memory for two-dimensional objects only. In consequence, the nature of visual working memory for three-dimensional (3D) object representation remains unknown.

Methodology/Principal Findings

Here, I show that when sustaining information about 3D objects, visual working memory clearly divides into two separate, specialized memory systems, rather than one system, as was previously thought. One memory system gradually accumulates sensory information, forming an increasingly precise view-dependent representation of the scene over the course of several seconds. A second memory system sustains view-invariant representations of 3D objects. The view-dependent memory system has a storage capacity of 3–4 representations and the view-invariant memory system has a storage capacity of 1–2 representations. These systems can operate independently from one another and do not compete for working memory storage resources.

Conclusions/Significance

These results provide evidence that visual working memory sustains object information in two separate, specialized memory systems. One memory system sustains view-dependent representations of the scene, akin to the view-specific representations that guide place recognition during navigation in humans, rodents and insects. The second memory system sustains view-invariant representations of 3D objects, akin to the object-based representations that underlie object cognition.

7.
Perception of objects and motions in the visual scene is one of the basic problems in the visual system. There exist 'What' and 'Where' pathways in the superior visual cortex, starting from the simple cells in the primary visual cortex. The former perceives object properties such as form, color, and texture, and the latter perceives 'where', for example, velocity and direction of spatial movement of objects. This paper explores brain-like computational architectures of visual information processing. We propose a visual perceptual model and a computational mechanism for training it. The computational model is a three-layer network. The first layer is the input layer, which receives stimuli from natural environments. The second layer represents the internal neural information. The connections between the first layer and the second layer, called the receptive fields of neurons, are self-adaptively learned based on the principle of sparse neural representation. To this end, we introduce Kullback-Leibler divergence as the measure of independence between neural responses and derive the learning algorithm by minimizing the cost function. The proposed algorithm is applied to train the basis functions, namely receptive fields, which are localized, oriented, and bandpassed. The resultant receptive fields of neurons in the second layer have characteristics resembling those of simple cells in the primary visual cortex. Based on these basis functions, we further construct the third layer for perception of what and where in the superior visual cortex. The proposed model is able to perceive objects and their motions with high accuracy and strong robustness against additive noise. Computer simulation results in the final section show the feasibility of the proposed perceptual model and the high efficiency of the learning algorithm.
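The KL-divergence independence measure named in this abstract can be illustrated on a discrete toy problem: the divergence between a joint response distribution and the product of its marginals is zero exactly when the responses are independent, and grows as they become correlated. The distributions below are made up for illustration, not taken from the paper:

```python
import numpy as np

def kl_divergence(p, q):
    """KL divergence D(p||q) in bits between discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / q[nz])).sum())

def independence_cost(joint):
    """KL divergence between a joint distribution of two neural responses
    and the product of its marginals: zero iff the responses are
    independent, so minimizing it pushes responses toward independence."""
    joint = np.asarray(joint, dtype=float)
    marg = np.outer(joint.sum(axis=1), joint.sum(axis=0))
    return kl_divergence(joint.ravel(), marg.ravel())

# An exactly independent joint (outer product of marginals) ...
independent = np.outer([0.5, 0.5], [0.25, 0.75])
# ... versus a strongly correlated one.
correlated = np.array([[0.45, 0.05],
                       [0.05, 0.45]])
```

This quantity is also known as the mutual information of the two responses; a sparse-coding learner would use it (or a gradient of it) inside the cost function.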

8.

9.
Journal of Physiology, 2014, 108(1): 11-17
In the primate visual system, information about color is known to be carried in separate divisions of the retino-geniculo-cortical pathway. From the retina, responses of photoreceptors to short (S), medium (M), and long (L) wavelengths of light are processed in two different opponent pathways. Signals in the S-opponent pathway, or blue/yellow channel, have been found to lag behind signals in the L/M-opponent pathway, or red/green channel in primary visual area V1, and psychophysical studies have suggested similar perceptual delays. However, more recent psychophysical studies have found that perceptual differences are negligible with the proper controls, suggesting that information between the two channels is integrated at some stage of processing beyond V1. To study the timing of color signals further downstream in visual cortex, we examined the responses of neurons in area V4 to colored stimuli varying along the two cardinal axes of the equiluminant opponent color space. We used information theory to measure the mutual information between the stimuli presented and the neural responses in short time windows in order to estimate the latency of color information in area V4. We found that on average, despite the latency difference in V1, information about S-opponent signals arrives in V4 at the same time as information about L/M-opponent signals. This work indicates a convergence of signal timing among chromatic channels within extrastriate cortex.
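The information-theoretic latency estimate described here can be sketched end to end: compute the mutual information between the stimulus identity and spike counts in short sliding windows, and take the first window where the information rises clearly above baseline. The simulated neuron, trial counts, window sizes, and threshold below are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def mutual_information(x, y):
    """Mutual information (bits) between two discrete sequences,
    estimated from their joint histogram."""
    xs, ys = np.unique(x), np.unique(y)
    joint = np.zeros((len(xs), len(ys)))
    for xi, yi in zip(x, y):
        joint[np.searchsorted(xs, xi), np.searchsorted(ys, yi)] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Simulated experiment: 400 trials, stimulus is one of 4 colors; the model
# neuron's spike count becomes stimulus-dependent only after a 60 ms latency.
n_trials, latency_ms = 400, 60
stim = rng.integers(0, 4, n_trials)

def spike_count(t_ms):
    """Spike counts in a 20 ms window starting t_ms after stimulus onset."""
    base = rng.poisson(2.0, n_trials)
    if t_ms >= latency_ms:
        return base + 3 * stim      # stimulus-driven component kicks in
    return base                     # baseline firing only

# Scan windows; latency = first time MI clearly exceeds baseline.
times = range(0, 200, 20)
mi = {t: mutual_information(stim, spike_count(t)) for t in times}
est_latency = min(t for t in times if mi[t] > 0.5)
```

In real data one would also correct the MI estimate for finite-sample bias before thresholding; that step is omitted here.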

10.
We develop a neural network model that instantiates color constancy and color categorization in a single unified framework. Previous models achieve similar effects but ignore important biological constraints. Color constancy in this model is achieved by a new application of the double opponent cells found in the blobs of the visual cortex. Color categorization emerges naturally, as a consequence of processing chromatic stimuli as vectors in a four-dimensional color space. A computer simulation of this model is subjected to the classic psychophysical tests that first uncovered these phenomena, and its response matches psychophysical results very closely.

11.
To form an accurate internal representation of visual space, the brain must accurately account for movements of the eyes, head or body. Updating of internal representations in response to these movements is especially important when remembering spatial information, such as the location of an object, since the brain must rely on non-visual extra-retinal signals to compensate for self-generated movements. We investigated the computations underlying spatial updating by constructing a recurrent neural network model to store and update a spatial location based on a gaze shift signal, and to do so flexibly based on a contextual cue. We observed a striking similarity between the patterns of behaviour produced by the model and monkeys trained to perform the same task, as well as between the hidden units of the model and neurons in the lateral intraparietal area (LIP). In this report, we describe the similarities between the model and single unit physiology to illustrate the usefulness of neural networks as a tool for understanding specific computations performed by the brain.

12.
A computational model of hippocampal activity during spatial cognition and navigation tasks is presented. The spatial representation in our model of the rat hippocampus is built on-line during exploration via two processing streams. An allothetic vision-based representation is built by unsupervised Hebbian learning extracting spatio-temporal properties of the environment from visual input. An idiothetic representation is learned based on internal movement-related information provided by path integration. On the level of the hippocampus, allothetic and idiothetic representations are integrated to yield a stable representation of the environment by a population of localized overlapping CA3-CA1 place fields. The hippocampal spatial representation is used as a basis for goal-oriented spatial behavior. We focus on the neural pathway connecting the hippocampus to the nucleus accumbens. Place cells drive a population of locomotor action neurons in the nucleus accumbens. Reward-based learning is applied to map place cell activity into action cell activity. The ensemble action cell activity provides navigational maps to support spatial behavior. We present experimental results obtained with a mobile Khepera robot. Received: 02 July 1999 / Accepted in revised form: 20 March 2000

13.
Several experimental studies in the literature have shown that even when performing purely kinesthetic tasks, such as reaching for a kinesthetically felt target with a hidden hand, the brain reconstructs a visual representation of the movement. In our previous studies, however, we did not observe any role of a visual representation of the movement in a purely kinesthetic task. This apparent contradiction could be related to a fundamental difference between the studied tasks. In our study subjects used the same hand to both feel the target and to perform the movement, whereas in most other studies, pointing to a kinesthetic target consisted of pointing with one hand to the finger of the other, or to some other body part. We hypothesize, therefore, that it is the necessity of performing inter-limb transformations that induces a visual representation of purely kinesthetic tasks. To test this hypothesis we asked subjects to perform the same purely kinesthetic task in two conditions: INTRA and INTER. In the former they used the right hand to both perceive the target and to reproduce its orientation. In the latter, subjects perceived the target with the left hand and responded with the right. To quantify the use of a visual representation of the movement we measured deviations induced by an imperceptible conflict that was generated between visual and kinesthetic reference frames. Our hypothesis was confirmed by the observed deviations of responses due to the conflict in the INTER, but not in the INTRA, condition. To reconcile these observations with recent theories of sensori-motor integration based on maximum likelihood estimation, we propose here a new model formulation that explicitly considers the effects of covariance between sensory signals that are directly available and internal representations that are 'reconstructed' from those inputs through sensori-motor transformations.
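The maximum-likelihood cue-combination framework this abstract extends has a compact closed form: the minimum-variance linear fusion of two unbiased estimates, with weights that depend not only on the individual variances but also on their covariance. The function below is the standard textbook rule with the covariance term included, not the paper's exact model, and the numbers are illustrative:

```python
def fuse(x1, x2, var1, var2, cov=0.0):
    """Minimum-variance linear fusion of two unbiased estimates with
    variances var1, var2 and covariance cov. With cov = 0 this reduces
    to the familiar inverse-variance-weighted MLE cue-combination rule."""
    w1 = (var2 - cov) / (var1 + var2 - 2.0 * cov)
    return w1 * x1 + (1.0 - w1) * x2

# Independent cues: weights follow the inverse variances (w1 = 3/4).
est_indep = fuse(10.0, 14.0, var1=1.0, var2=3.0)

# Correlated cues: the same readings fuse to a different estimate;
# accounting for this covariance is the point of the revised formulation.
est_corr = fuse(10.0, 14.0, var1=1.0, var2=3.0, cov=0.5)
```

With positive covariance the less reliable cue is down-weighted further, since part of its information is redundant with the first cue.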

14.
15.
Cell membrane proteins play an important role in tissue architecture and cell-cell communication. We hypothesize that segmentation and multidimensional characterization of the distribution of cell membrane proteins, on a cell-by-cell basis, enable improved classification of treatment groups and identify important characteristics that can otherwise be hidden. We have developed a series of computational steps to 1) delineate cell membrane protein signals and associate them with a specific nucleus; 2) compute a coupled representation of the multiplexed DNA content with membrane proteins; 3) rank computed features associated with such a multidimensional representation; 4) visualize selected features for comparative evaluation through heatmaps; and 5) discriminate between treatment groups in an optimal fashion. The novelty of our method is in the segmentation of the membrane signal and the multidimensional representation of phenotypic signature on a cell-by-cell basis. To test the utility of this method, the proposed computational steps were applied to images of cells that have been irradiated with different radiation qualities in the presence and absence of other small molecules. These samples are labeled for their DNA content and E-cadherin membrane proteins. We demonstrate that multidimensional representations of cell-by-cell phenotypes improve predictive and visualization capabilities among different treatment groups, and identify hidden variables.

16.
Adaptive behavior guided by unconscious visual cues occurs in patients with various kinds of brain damage as well as in normal observers, all of whom can process visual information of which they are fully unaware [1] [2] [3] [4] [5] [6] [7] [8]. Little is known about the possibility that unconscious vision is influenced by visual cues that have access to consciousness [9]. Here we report a 'blind' letter discrimination induced through a semantic interaction with conscious color processing in a patient who is agnosic for visual shapes, but has normal color vision and visual imagery. When seeing the initial letters of color names printed in different colors, it is normally easier to name the print color when it is congruent with the initial letter of the color name than when it is not [10]. The patient could discriminate the initial letters of the words 'red' and 'green' printed in the corresponding colors significantly above chance but without any conscious accompaniment, whereas he performed at chance with the reverse color-letter mapping as well as in standard tests of letter reading. We suggest that the consciously perceived colors activated a representation of the corresponding word names and their component letters, which in turn brought out a partially successful, unconscious processing of visual inputs corresponding to the activated letter representations.

17.
We present a model for the self-organized formation of place cells, head-direction cells, and spatial-view cells in the hippocampal formation based on unsupervised learning on quasi-natural visual stimuli. The model comprises a hierarchy of Slow Feature Analysis (SFA) nodes, which were recently shown to reproduce many properties of complex cells in the early visual system []. The system extracts a distributed grid-like representation of position and orientation, which is transcoded into a localized place-field, head-direction, or view representation, by sparse coding. The type of cells that develops depends solely on the relevant input statistics, i.e., the movement pattern of the simulated animal. The numerical simulations are complemented by a mathematical analysis that allows us to accurately predict the output of the top SFA layer.
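The core operation of each SFA node can be sketched in its linear form: whiten the input signal, then find the projection whose temporal derivative has minimal variance. The toy two-channel signal below (a slow sine mixed with a fast one) is an assumption for demonstration; the paper's model stacks many such nodes with nonlinear expansions:

```python
import numpy as np

def linear_sfa(X):
    """Linear Slow Feature Analysis: return the input-space direction whose
    unit-variance projection changes most slowly over time."""
    X = X - X.mean(axis=0)
    # Whiten: decorrelate channels and normalize their variance.
    d, E = np.linalg.eigh(np.cov(X.T))
    W = E / np.sqrt(d)                 # whitening matrix
    Z = X @ W
    # Among whitened directions, pick the one whose time derivative
    # has the smallest variance (the "slowest" feature).
    d2, E2 = np.linalg.eigh(np.cov(np.diff(Z, axis=0).T))
    return W @ E2[:, 0]                # back in input coordinates

# Toy signal: a slow sine mixed into two channels with fast "noise".
t = np.linspace(0, 8 * np.pi, 2000)
slow = np.sin(0.1 * t)
fast = np.sin(7.0 * t)
X = np.column_stack([slow + 0.5 * fast, slow - 0.5 * fast])

w = linear_sfa(X)
y = (X - X.mean(axis=0)) @ w           # extracted slow feature
```

The recovered feature y should track the slow sine (up to sign and scale), which is the property that lets stacked SFA layers pull out slowly varying quantities like position and orientation.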

18.
A challenging goal for cognitive neuroscience researchers is to determine how mental representations are mapped onto patterns of neural activity. To address this problem, functional magnetic resonance imaging (fMRI) researchers have developed a large number of encoding and decoding methods. However, previous studies typically used rather limited stimulus representations, such as semantic labels and wavelet Gabor filters, and largely focused on voxel-based brain patterns. Here, we present a new fMRI encoding model, aimed at overcoming this limitation, that predicts the human brain's responses to free viewing of video clips. In this model, we represent the stimuli using a variety of representative visual features from the computer vision community, describing the global color distribution, local shape, spatial information, and motion contained in the videos, and apply functional connectivity to model the brain's activity pattern evoked by these video clips. Our experimental results demonstrate that brain network responses during free viewing of videos can be robustly and accurately predicted across subjects by using visual features. Our study suggests the feasibility of exploring cognitive neuroscience studies by computational image/video analysis and provides a novel concept of using brain encoding as a test-bed for evaluating visual feature extraction.
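An encoding model of this kind boils down to a regularized regression from stimulus features to measured responses, evaluated by prediction accuracy on held-out stimuli. The sketch below uses ridge regression and synthetic data; the feature dimensions, split sizes, and regression choice are assumptions, since the abstract does not specify the estimator:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical setup: 200 video clips, each summarized by 10 visual
# features (color, shape, motion descriptors); one measured brain-network
# response per clip, generated here from a known linear rule plus noise.
X = rng.standard_normal((200, 10))
true_w = rng.standard_normal(10)
y = X @ true_w + 0.1 * rng.standard_normal(200)

def ridge_fit(X, y, alpha=1.0):
    """Ridge regression weights: (X'X + alpha*I)^-1 X'y."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ y)

# Fit on the first 150 clips, evaluate on the held-out 50 by the
# correlation between predicted and observed responses.
w = ridge_fit(X[:150], y[:150])
pred = X[150:] @ w
r = float(np.corrcoef(pred, y[150:])[0, 1])
```

Comparing this held-out r across different feature sets is exactly the "encoding as a test-bed for feature extraction" idea the abstract proposes.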

19.
Distance matrices obtained from allozyme studies on tilapiine fish are analysed by a multivariate approach. A hierarchical clustering procedure facilitated comparison with tree representations. A map-like representation provided information additional to the tree representation. The current belief that Sarotherodon is closer to Oreochromis than to Tilapia is strengthened. But while it may be the link between these genera at the species level, it is not entirely distinct from Oreochromis at the molecular level. Further, Sarotherodon and Oreochromis species may have arisen from Tilapia in several speciation events. Some of the species interrelations agreed with inferences from morphological data, and disagreed with those from a consensus maximum parsimony (MP) tree. It is suggested that both Chromidotilapia guntheri and Tylochromis jentinki are ancestral to different sub-groups of Tilapia, so that inferences from morphological studies and the consensus MP method are both partially correct. The graphical representation also suggests that the Nile tilapia strains in Asia may be derived from Egypt rather than from Ghana. It is advantageous to use the map-like and tree representations together for maximum visual informativeness and inference from the data.
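The "map-like representation" of a distance matrix described here is typically produced by multidimensional scaling. A minimal sketch of classical MDS on a made-up four-taxon distance matrix (the matrix values are illustrative, not the allozyme data):

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical multidimensional scaling: embed an n x n distance matrix
    in k dimensions, yielding the map-like representation."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J              # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1]            # largest eigenvalues first
    vals, vecs = vals[order[:k]], vecs[:, order[:k]]
    return vecs * np.sqrt(np.maximum(vals, 0.0))

# Hypothetical genetic distances for four taxa: A and B close, C and D
# close, and the two pairs far apart.
D = np.array([[0.0, 0.1, 0.9, 1.0],
              [0.1, 0.0, 0.8, 0.9],
              [0.9, 0.8, 0.0, 0.1],
              [1.0, 0.9, 0.1, 0.0]])
coords = classical_mds(D)

# On the resulting map, A-B should come out much closer than A-C.
d_ab = float(np.linalg.norm(coords[0] - coords[1]))
d_ac = float(np.linalg.norm(coords[0] - coords[2]))
```

Unlike a tree, the map preserves pairwise distances directly, which is why the two views complement each other as the abstract argues.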

20.
Predictions derived from modelling the hippocampal role in navigation   Cited by: 2 (self-citations: 0, other citations: 2)
A computational model of the lesion and single unit data from navigation in rats is reviewed. The model uses external (visual) and internal (odometric) information from the environment to drive the firing of simulated hippocampal place cells. Constraints on the functional form of these inputs are drawn from experiments using an environment of modifiable shape. The place cell representation is then used to guide navigation: a representation of goal location is created via Hebbian modification of synaptic strengths. The model includes consideration of the phase of firing of place cells with respect to the theta rhythm of hippocampal EEG. A series of predictions for behavioural and single-unit data in rats are derived from the input and output representations of the model. Received: 15 July 1999 / Accepted in revised form: 20 March 2000
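The Hebbian goal-learning step this abstract describes can be sketched on a 1-D track: whenever the simulated animal is at the goal, connections from the currently active place cells to a goal cell are strengthened, so the learned weight profile ends up peaked over the goal location. The place-field parameters, goal position, and learning rate below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Gaussian place fields tiling a 1-D track from 0 to 1.
centres = np.linspace(0.0, 1.0, 20)

def place_activity(pos, width=0.08):
    """Population activity of the place cells at position pos."""
    return np.exp(-((pos - centres) ** 2) / (2.0 * width ** 2))

goal = 0.7
w = np.zeros_like(centres)   # place-cell -> goal-cell weights
lr = 0.5

# Random exploration; Hebbian update only fires when pre- (place cells)
# and post- (goal cell, active at the goal) are both active.
for _ in range(1000):
    pos = rng.random()
    if abs(pos - goal) < 0.02:
        w += lr * place_activity(pos)

# The weights now peak at the place cells whose fields cover the goal.
peak = float(centres[np.argmax(w)])
```

Reading out this weight vector against the current place-cell activity then gives a goal signal that rises as the animal approaches the goal, which is what supports navigation in the model.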


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号