Similar Documents
20 similar documents found (search time: 187 ms)
1.
Andrews TJ. Current Biology: CB, 2005, 15(12): R451-R453
The way in which information about complex objects and faces is represented in visual cortex is controversial. One model posits that information is processed in modules, highly specialized for different categories of objects; an opposing model appeals to a distributed representation across a large network of visual areas. A recent paper uses a novel imaging technique to address this controversy.

2.
The ability to integrate information across multiple sensory systems offers several behavioral advantages, from quicker reaction times and more accurate responses to better detection and more robust learning. At the neural level, multisensory integration requires large-scale interactions between different brain regions: the convergence of information from separate sensory modalities, represented by distinct neuronal populations. The interactions between these neuronal populations must be fast and flexible, so that behaviorally relevant signals belonging to the same object or event can be immediately integrated and integration of unrelated signals can be prevented. Looming signals are a particular class of signals that are behaviorally relevant for animals and that occur in both the auditory and visual domains. These signals indicate the rapid approach of objects and provide highly salient warning cues about impending impact. We show here that multisensory integration of auditory and visual looming signals may be mediated by functional interactions between auditory cortex and the superior temporal sulcus, two areas involved in integrating behaviorally relevant auditory-visual signals. Audiovisual looming signals elicited increased gamma-band coherence between these areas, relative to unimodal or receding-motion signals. This suggests that the neocortex uses fast, flexible intercortical interactions to mediate multisensory integration.
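The gamma-band coherence finding above can be illustrated with a minimal magnitude-squared coherence estimator (a generic Welch-style sketch; the 40 Hz signal, noise levels, and sampling rate are invented for illustration and this is not the authors' analysis pipeline):

```python
import numpy as np

def coherence(x, y, fs, nperseg=256):
    """Magnitude-squared coherence via Welch-style segment averaging."""
    win = np.hanning(nperseg)
    step = nperseg // 2
    nseg = (len(x) - nperseg) // step + 1
    Pxx = np.zeros(nperseg // 2 + 1)
    Pyy = np.zeros(nperseg // 2 + 1)
    Pxy = np.zeros(nperseg // 2 + 1, dtype=complex)
    for k in range(nseg):
        sl = slice(k * step, k * step + nperseg)
        X = np.fft.rfft(win * (x[sl] - x[sl].mean()))
        Y = np.fft.rfft(win * (y[sl] - y[sl].mean()))
        Pxx += np.abs(X) ** 2          # averaged auto-spectrum of x
        Pyy += np.abs(Y) ** 2          # averaged auto-spectrum of y
        Pxy += X * np.conj(Y)          # averaged cross-spectrum
    freqs = np.fft.rfftfreq(nperseg, 1.0 / fs)
    return freqs, np.abs(Pxy) ** 2 / (Pxx * Pyy)

# two noisy "recordings" sharing a 40 Hz (gamma-band) component
rng = np.random.default_rng(0)
fs = 1000.0
t = np.arange(8192) / fs
shared = np.sin(2 * np.pi * 40.0 * t)
x = shared + 0.5 * rng.normal(size=t.size)
y = shared + 0.5 * rng.normal(size=t.size)
freqs, cxy = coherence(x, y, fs)
```

Coherence is high only near the shared 40 Hz component and near chance elsewhere, which is why it serves as a frequency-resolved index of functional coupling between two signals.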

3.
Perception of objects and motions in the visual scene is one of the basic problems in the visual system. There exist 'What' and 'Where' pathways in the superior visual cortex, starting from the simple cells in the primary visual cortex. The former is able to perceive objects such as forms, color, and texture, and the latter perceives 'where', for example, the velocity and direction of spatial movement of objects. This paper explores brain-like computational architectures of visual information processing. We propose a visual perceptual model and a computational mechanism for training it. The computational model is a three-layer network. The first layer is the input layer, which receives stimuli from natural environments. The second layer represents the internal neural information. The connections between the first and second layers, called the receptive fields of neurons, are self-adaptively learned based on the principle of sparse neural representation. To this end, we introduce the Kullback-Leibler divergence as the measure of independence between neural responses and derive the learning algorithm by minimizing the cost function. The proposed algorithm is applied to train the basis functions, namely receptive fields, which are localized, oriented, and bandpass. The resultant receptive fields of neurons in the second layer have characteristics resembling those of simple cells in the primary visual cortex. Based on these basis functions, we further construct the third layer for perception of what and where in the superior visual cortex. The proposed model is able to perceive objects and their motions with high accuracy and strong robustness against additive noise. Computer simulation results in the final section show the feasibility of the proposed perceptual model and the high efficiency of the learning algorithm.
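The Kullback-Leibler divergence used above as an independence measure is compact to write down. The sketch below is illustrative only (the toy distributions are invented, and the abstract does not specify the exact cost function): it computes D(p‖q) for discrete response histograms.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D(p || q) for discrete distributions; eps guards against log(0)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()   # renormalize after adding eps
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# toy response histograms: a flat one and a sparse, peaked one
uniform = [0.25, 0.25, 0.25, 0.25]
peaked = [0.85, 0.05, 0.05, 0.05]
```

Note that KL divergence is asymmetric (D(p‖q) ≠ D(q‖p) in general) and zero only when the two distributions coincide, which is what makes it usable as a deviation-from-independence cost.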

4.
Perception of objects and motions in the visual scene is one of the basic problems in the visual system. There exist ‘What’ and ‘Where’ pathways in the superior visual cortex, starting from the simple cells in the primary visual cortex. The former is able to perceive objects such as forms, color, and texture, and the latter perceives ‘where’, for example, the velocity and direction of spatial movement of objects. This paper explores brain-like computational architectures of visual information processing. We propose a visual perceptual model and a computational mechanism for training it. The computational model is a three-layer network. The first layer is the input layer, which receives stimuli from natural environments. The second layer represents the internal neural information. The connections between the first and second layers, called the receptive fields of neurons, are self-adaptively learned based on the principle of sparse neural representation. To this end, we introduce the Kullback-Leibler divergence as the measure of independence between neural responses and derive the learning algorithm by minimizing the cost function. The proposed algorithm is applied to train the basis functions, namely receptive fields, which are localized, oriented, and bandpass. The resultant receptive fields of neurons in the second layer have characteristics resembling those of simple cells in the primary visual cortex. Based on these basis functions, we further construct the third layer for perception of what and where in the superior visual cortex. The proposed model is able to perceive objects and their motions with high accuracy and strong robustness against additive noise. Computer simulation results in the final section show the feasibility of the proposed perceptual model and the high efficiency of the learning algorithm.

5.
Aging and dual-task paradigms often degrade fine motor performance, but the effects of aging on correlated neural activity between motor cortex and contracting muscle during dual tasks requiring fine motor performance are unknown. The purpose of this study was to compare corticomuscular coherence between young and elderly adults during the performance of a unilateral fine motor task and concurrent motor and cognitive tasks. Twenty-nine healthy young (18-38 yr) and elderly (61-75 yr) adults performed unilateral motor, bilateral motor, concurrent motor-cognitive, and cognitive tasks. Peak corticomuscular coherence between the electroencephalogram from the primary motor cortex and the surface electromyogram from the first dorsal interosseous muscle was compared during steady abduction of the index finger with visual feedback. In the alpha-band (8-14 Hz), corticomuscular coherence was greater in elderly than young adults, especially during the motor-cognitive task. Beta-band (15-32 Hz) corticomuscular coherence was higher in elderly than young adults across unilateral motor and dual tasks. In addition, beta-band corticomuscular coherence in the motor-cognitive task was negatively correlated with motor output error across young but not elderly adults. The results suggest that 1) corticomuscular coherence increases with age, with a greater influence of an additional cognitive task in the alpha-band, and 2) in young, but not elderly, adults, individuals with greater beta-band corticomuscular coherence may exhibit more accurate motor output during steady contraction with visual feedback.

6.
How are invariant representations of objects formed in the visual cortex? We describe a neurophysiological and computational approach which focuses on a feature hierarchy model in which invariant representations can be built by self-organizing learning based on the statistics of the visual input. The model can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in Continuous Transformation learning. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and, as we show in this paper, lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects, such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The model has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene.
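The temporal-trace idea mentioned above can be sketched in a few lines. Below is a toy single-neuron version of a trace rule (the two-view setup, parameter values, and initial weights are all invented for illustration, not the authors' full network): the short-term trace carries activity from one view of an object into the next time step, so a view that does not yet drive the neuron still has its weights strengthened, binding the views together.

```python
import numpy as np

# two one-hot "views" of the same object, shown in temporal succession
view1 = np.array([1.0, 0.0])
view2 = np.array([0.0, 1.0])

w = np.array([1.0, 0.0])   # neuron initially responds only to view1
trace, eta, lr = 0.0, 0.5, 0.1

for x in (view1, view2, view1, view2):
    y = float(w @ x)                      # neuron output for this view
    trace = (1 - eta) * y + eta * trace   # short-term memory trace of activity
    w += lr * trace * x                   # Hebbian update gated by the trace
```

After the sequence, the weight onto view2 has become positive even though view2 alone never drove the neuron: the trace from view1 did the gating. A plain Hebbian rule (no trace) would leave that weight at zero.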

7.
We analyzed EEG theta-rhythm coherence in adult subjects who performed a visual object classification task under uncertainty. The coherence function was estimated for the EEG segment following a feedback signal. Functional coupling of cortical areas was stronger during strategy discovery than in the final period, when the strategy had already been found. The theta-related functional links are characterized by a specific topographical pattern: they converge on foci located in the polar frontal cortex and reflect the interaction between the latter and the anterior associative cortices of the left hemisphere and the occipital areas of both hemispheres. This pattern of functional connectivity may reflect an interaction between limbic structures and the frontal cortex in the process of strategy formation.

8.
Image motion is a primary source of visual information about the world. However, before this information can be used, the visual system must determine the spatio-temporal displacements of the features in the dynamic retinal image, which originate from objects moving in space. This is known as the motion correspondence problem. We investigated whether cross-cue matching constraints contribute to the solution of this problem, which would be consistent with physiological reports that many directionally selective cells in the visual cortex also respond to additional visual cues. We measured the maximum displacement limit (Dmax) for two-frame apparent motion sequences. Dmax increases as the number of elements in such sequences decreases. However, in our displays the total number of elements was kept constant while the number of a subset of elements, defined by a difference in contrast polarity, binocular disparity or colour, was varied. Dmax increased as the number of elements distinguished by a particular cue was decreased. Dmax was affected by contrast polarity for all observers, but only some observers were influenced by binocular disparity and others by colour information. These results demonstrate that the human visual system exploits local, cross-cue matching constraints in the solution of the motion correspondence problem.

9.
Transcranial magnetic stimulation (TMS) noninvasively interferes with human cortical function, and is widely used as an effective technique for probing causal links between neural activity and cognitive function. However, the physiological mechanisms underlying TMS-induced effects on neural activity remain unclear. We examined the mechanism by which TMS disrupts neural activity in a local circuit in early visual cortex using a computational model consisting of conductance-based spiking neurons with excitatory and inhibitory synaptic connections. We found that single-pulse TMS suppressed spiking activity in a local circuit model, disrupting the population response. Spike suppression was observed when TMS was applied to the local circuit within a limited time window after the local circuit received sensory afferent input, as observed in experiments investigating suppression of visual perception with TMS targeting early visual cortex. Quantitative analyses revealed that the magnitude of suppression was significantly larger for synaptically connected neurons than for isolated individual neurons, suggesting that intracortical inhibitory synaptic coupling also plays an important role in TMS-induced suppression. A conventional local circuit model of early visual cortex explained only the early period of visual suppression observed in experiments. However, models either involving strong recurrent excitatory synaptic connections or sustained excitatory input were able to reproduce the late period of visual suppression. These results suggest that TMS targeting early visual cortex disrupts functionally distinct neural signals, possibly corresponding to feedforward and recurrent information processing, by imposing inhibitory effects through intracortical inhibitory synaptic connections.
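The suppression effect described above can be caricatured with a single leaky integrate-and-fire neuron (a drastic simplification of the authors' conductance-based network; all parameters, units, and the modeling of TMS as a transient inhibitory current are invented for illustration): a brief inhibitory pulse standing in for TMS-driven inhibition visibly reduces the spike count.

```python
def lif_spike_count(i_ext, tms_onset=None, tms_amp=0.0, tms_dur=30.0,
                    t_total=200.0, dt=0.1):
    """Leaky integrate-and-fire neuron, Euler integration (toy units: ms, mV)."""
    tau, v_rest, v_thresh, v_reset = 20.0, -65.0, -50.0, -65.0
    v, spikes = v_rest, 0
    for i in range(int(t_total / dt)):
        t = i * dt
        i_in = i_ext
        if tms_onset is not None and tms_onset <= t < tms_onset + tms_dur:
            i_in -= tms_amp          # transient inhibition standing in for TMS
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:            # threshold crossing: emit spike and reset
            spikes += 1
            v = v_reset
    return spikes

baseline = lif_spike_count(20.0)
suppressed = lif_spike_count(20.0, tms_onset=100.0, tms_amp=15.0)
```

With these toy numbers the inhibitory pulse at 100 ms silences the neuron for the duration of the pulse, so the suppressed run emits fewer spikes than the baseline run while firing normally before the pulse.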

10.
How do we see the motion of objects as well as their shapes? The Gaussian Derivative (GD) spatial model is extended to time to help answer this question. The GD spatio-temporal model requires only two numbers to describe the complete three-dimensional space-time shapes of individual receptive fields in primate visual cortex. These two numbers are the derivative numbers along the respective spatial and temporal principal axes of a given receptive field. Nine transformation parameters allow for a standard geometric association of these intrinsic axes with the extrinsic environment. The GD spatio-temporal model describes in one framework the following properties of primate simple cell fields: motion properties, number of lobes in space-time, spatial orientation, location, and size. A discrete difference-of-offset-Gaussians (DOOG) model provides a plausible physiological mechanism to form GD-like model fields in both space and time. The GD model hypothesizes that receptive fields at the first stage of processing in the visual cortex approximate 'derivative analyzers' that estimate local spatial and temporal derivatives of the intensity profile in the visual environment. The receptive fields as modeled provide operators that can allow later stages of processing in either a biological or machine vision system to estimate the motion as well as the shapes of objects in the environment.
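The relationship between a Gaussian-derivative receptive-field profile and its difference-of-offset-Gaussians (DOOG) approximation is easy to check numerically. The sketch below is one-dimensional and purely illustrative (the paper's model is spatio-temporal, with derivative numbers along principal axes that are not reproduced here):

```python
import numpy as np

def gaussian(x, sigma):
    """Normalized Gaussian."""
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

def gd1(x, sigma):
    """First-derivative-of-Gaussian receptive-field profile: G'(x)."""
    return -x / sigma**2 * gaussian(x, sigma)

def doog1(x, sigma, offset):
    """Difference of offset Gaussians approximating the first derivative."""
    return (gaussian(x + offset / 2, sigma)
            - gaussian(x - offset / 2, sigma)) / offset

x = np.linspace(-4.0, 4.0, 401)
exact = gd1(x, 1.0)
approx = doog1(x, 1.0, 0.01)
```

As the offset shrinks, the two offset Gaussians converge (by the definition of the derivative) to the exact Gaussian-derivative profile, which is why a DOOG circuit of excitatory and inhibitory subunits can physiologically implement a derivative analyzer.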

11.
Prestimulus EEG was recorded in the state of "operative rest" after the instruction and at the stages of formation, actualization, and extinction of an unconscious visual set to the perception of unequal circles. Two motivation conditions were used: (1) subjects were promised a small monetary reward for each correct response (a "general" rise of motivation) and (2) only correct assessments of stimuli of a certain kind were rewarded (a "selective" rise of motivation). In both conditions, additional motivation of subjects toward the results of their performance led to an increase in EEG coherence, most pronounced in the theta and alpha-1 frequency ranges in the left temporal area of the cortex. During the "general" rise of motivation, the EEG coherence (as compared to the control group) was higher in a greater number of derivation pairs than during the "selective" rise. EEG coherence in "motivated" subjects was already increased at the stage of operative rest. Later on, at the set stages, no significant changes were revealed. Thus, the realized set formed by the verbal instruction, which increased subjects' motivation toward the results of their performance, produced substantially more prominent changes in coherence of cortical potentials than the unconscious set formed during perception of visual stimuli.

12.
We propose a computational model of contour integration for visual saliency. The model uses biologically plausible devices to simulate how the representations of elements aligned collinearly along a contour in an image are enhanced. Our model adds such devices as dopamine-like fast plasticity, local GABAergic inhibition, and multi-scale processing of images. The fast plasticity addresses the problem of how neurons in visual cortex seem to be able to influence neurons they are not directly connected to, as observed, for instance, in the contour closure effect. Local GABAergic inhibition is used to control gain in the system without using global mechanisms, which may be implausible given the limited reach of axonal arbors in visual cortex. The model is then used not only to explore its validity in real and artificial images, but also to discover some of the mechanisms involved in the processing of complex visual features such as junctions and end-stops as well as contours. We present evidence for the validity of our model in several phases, starting with local enhancement of only a few collinear elements. We then test our model on more complex contour integration images with a large number of Gabor elements. Sections of the model are also extracted and used to discover how the model might relate contour integration neurons to neurons that process end-stops and junctions. Finally, we present results from real-world images. Results from the model suggest that it is a good current approximation of contour integration in human vision. It also suggests that contour integration mechanisms may be strongly related to mechanisms for detecting end-stops and junction points. Additionally, a contour integration mechanism may be involved in finding features for objects such as faces. This suggests that visual cortex may be more information-efficient and that neural regions may have multiple roles.

13.
The part of the primate visual cortex responsible for the recognition of objects is parcelled into about a dozen areas organized somewhat hierarchically (the region is called the ventral stream). Why are there approximately this many hierarchical levels? Here I put forth a generic information-processing hierarchical model, and show how the total number of neurons required depends on the number of hierarchical levels and on the complexity of visual objects that must be recognized. Because the recognition of written words appears to occur in a similar part of inferotemporal cortex as other visual objects, the complexity of written words may be similar to that of other visual objects for humans; for this reason, I measure the complexity of written words, and use it as an approximate estimate of the complexity more generally of visual objects. I then show that the information-processing hierarchy that accommodates visual objects of that complexity possesses the minimum number of neurons when the number of hierarchical levels is approximately 15.
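The paper's specific model is not reproduced in the abstract, but the flavor of a minimize-neurons-over-levels argument can be illustrated with a generic radix-style trade-off (this toy cost function is entirely my own formulation, not the author's): if recognizing objects of complexity C with L levels costs roughly L·C^(1/L) units, the total is minimized near L = ln C, so a hypothetical complexity of about e^15 favors roughly 15 levels.

```python
import math

def total_neurons(levels, complexity):
    """Toy cost: `levels` stages, each needing ~complexity**(1/levels) units."""
    return levels * complexity ** (1.0 / levels)

complexity = math.exp(15)  # hypothetical object complexity, chosen so ln C = 15
costs = {L: total_neurons(L, complexity) for L in range(1, 31)}
best = min(costs, key=costs.get)   # number of levels with the smallest cost
```

The minimum of L·C^(1/L) over L sits at L = ln C (set the derivative to zero), which is the same shape of argument as choosing the optimal radix for a positional number system.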

14.
Groups of neurons synchronize their activities during a variety of conditions, but whether this synchronization is functionally relevant has remained a matter of debate. Here, we survey recent findings showing that synchronization is dynamically modulated during cognitive processes. Based on this evidence, synchronization appears to reflect a general mechanism that renders interactions among selective subsets of neurons effective. We show that neuronal synchronization predicts which sensory input is processed and how efficiently it is transmitted to postsynaptic target neurons during sensory-motor integration. Four lines of evidence are presented supporting the hypothesis that rhythmic neuronal synchronization, also called neuronal coherence, underlies effective and selective neuronal communication. (1) Findings from intracellular recordings strongly suggest that postsynaptic neurons are particularly sensitive to synaptic input that is synchronized in the gamma-frequency (30-90 Hz) range. (2) Neurophysiological studies in awake animals revealed enhanced rhythmic synchronization among neurons encoding task-relevant information. (3) The trial-by-trial variation in the precision of neuronal synchronization predicts part of the trial-by-trial variation in the speed of visuo-motor integration. (4) The planning and selection of specific movements can be predicted by the strength of coherent oscillations among local neuronal groups in frontal and parietal cortex. Thus, neuronal coherence appears as a neuronal substrate of an effective neuronal communication structure that dynamically links neurons into functional groups processing task-relevant information and selecting appropriate actions during attention and effective sensory-motor integration.

15.
The primate brain intelligently processes visual information from the world as the eyes move constantly. The brain must take into account visual motion induced by eye movements, so that visual information about the outside world can be recovered. Certain neurons in the dorsal part of monkey medial superior temporal area (MSTd) play an important role in integrating information about eye movements and visual motion. When a monkey tracks a moving target with its eyes, these neurons respond to visual motion as well as to smooth pursuit eye movements. Furthermore, the responses of some MSTd neurons to the motion of objects in the world are very similar during pursuit and during fixation, even though the visual information on the retina is altered by the pursuit eye movement. We call these neurons compensatory pursuit neurons. In this study we develop a computational model of MSTd compensatory pursuit neurons based on physiological data from single unit studies. Our model MSTd neurons can simulate the velocity tuning of monkey MSTd neurons. The model MSTd neurons also show the pursuit compensation property. We find that pursuit compensation can be achieved by divisive interaction between signals coding eye movements and signals coding visual motion. The model generates two implications that can be tested in future experiments: (1) compensatory pursuit neurons in MSTd should have the same direction preference for pursuit and retinal visual motion; (2) there should be non-compensatory pursuit neurons that show opposite preferred directions of pursuit and retinal visual motion.

16.
Humans can effectively and swiftly recognize objects in complex natural scenes. This outstanding ability has motivated many computational object recognition models, most of which try to emulate the behavior of this remarkable system. The human visual system recognizes objects hierarchically in several processing stages. Along these stages, a set of features of increasing complexity is extracted by different parts of the visual system. Elementary features like bars and edges are processed in earlier levels of the visual pathway, and the further one proceeds along this pathway, the more complex the extracted features become. An important question in the field of visual processing is which features of an object are selected and represented by the visual cortex. To address this issue, we extended a biologically motivated hierarchical model for different object recognition tasks. In this model, a set of object parts, called patches, is extracted in the intermediate stages. These object parts are used in the training procedure and play an important role in object recognition. The patches are selected indiscriminately from different positions of an image, which can lead to the extraction of non-discriminative patches and eventually reduce performance. In the proposed model, we used an evolutionary algorithm to select a set of informative patches. Our results indicate that these patches are more informative than the usual random patches. We demonstrate the strength of the proposed model on a range of object recognition tasks, where it outperforms the original model. The experiments show that the selected features are generally particular parts of the target images. Our results suggest that selected features that are parts of target objects provide an efficient set for robust object recognition.
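The evolutionary selection step above can be caricatured as feature-subset selection with a (1+1) evolutionary algorithm. This is a toy stand-in (Fisher scores replace the model's patch responses, the data are synthetic, and the fitness function with its size penalty is invented), but it shows the mechanism: mutate a binary selection mask and keep the child when it is at least as fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic data: features 0-2 carry the class label, 3-9 are pure noise
n, d = 200, 10
labels = rng.integers(0, 2, n)
X = rng.normal(0.0, 1.0, (n, d))
X[:, :3] += 3.0 * labels[:, None]

# Fisher score per feature: between-class separation over within-class spread
m0, m1 = X[labels == 0].mean(0), X[labels == 1].mean(0)
v0, v1 = X[labels == 0].var(0), X[labels == 1].var(0)
scores = (m1 - m0) ** 2 / (v0 + v1)

def fitness(mask, lam=1.0):
    # reward informative features, penalize subset size
    return scores[mask].sum() - lam * mask.sum()

mask = rng.random(d) < 0.5                 # random initial subset
for _ in range(1000):                      # (1+1) evolutionary algorithm
    child = mask.copy()
    flips = rng.random(d) < 1.0 / d        # flip each bit with prob 1/d
    child[flips] = ~child[flips]
    if fitness(child) >= fitness(mask):
        mask = child

selected = set(np.flatnonzero(mask).tolist())
```

Because the penalty sits between the Fisher scores of the noise features and those of the informative ones, the optimizer converges on exactly the informative subset.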

17.
Zimmer U, Macaluso E. Neuron, 2005, 47(6): 893-905
Our brain continuously receives complex combinations of sounds originating from different sources and relating to different events in the external world. Timing differences between the two ears can be used to localize sounds in space, but only when the inputs to the two ears have similar spectrotemporal profiles (high binaural coherence). We used fMRI to investigate any modulation of auditory responses by binaural coherence. We assessed how processing of these cues depends on whether spatial information is task relevant and whether brain activity correlates with subjects' localization performance. We found that activity in Heschl's gyrus increased with increasing coherence, irrespective of whether localization was task relevant. Posterior auditory regions also showed increased activity for high coherence, primarily when sound localization was required and subjects successfully localized sounds. We conclude that binaural coherence cues are processed throughout the auditory cortex and that these cues are used in posterior regions for successful auditory localization.

18.

Background

Recent neuroimaging studies have revealed that putatively unimodal regions of visual cortex can be activated during auditory tasks in sighted as well as in blind subjects. However, the task determinants and functional significance of auditory occipital activations (AOAs) remain unclear.

Methodology/Principal Findings

We examined AOAs in an intermodal selective attention task to distinguish whether they were stimulus-bound or recruited by higher-level cognitive operations associated with auditory attention. Cortical surface mapping showed that auditory occipital activations were localized to retinotopic visual cortex subserving the far peripheral visual field. AOAs depended strictly on the sustained engagement of auditory attention and were enhanced in more difficult listening conditions. In contrast, unattended sounds produced no AOAs regardless of their intensity, spatial location, or frequency.

Conclusions/Significance

Auditory attention, but not passive exposure to sounds, routinely activated peripheral regions of visual cortex when subjects attended to sound sources outside the visual field. Functional connections between auditory cortex and visual cortex subserving the peripheral visual field appear to underlie the generation of AOAs, which may reflect the priming of visual regions to process soon-to-appear objects associated with unseen sound sources.

19.
During the formation of new episodic memories, a rich array of perceptual information is bound together for long-term storage. However, the brain mechanisms by which sensory representations (such as colors, objects, or individuals) are selected for episodic encoding are currently unknown. We describe a functional magnetic resonance imaging experiment in which participants encoded the association between two classes of visual stimuli that elicit selective responses in the extrastriate visual cortex (faces and houses). Using connectivity analyses, we show that correlation in the hemodynamic signal between face- and place-sensitive voxels and the left dorsolateral prefrontal cortex is a reliable predictor of successful face-house binding. These data support the view that during episodic encoding, "top-down" control signals originating in the prefrontal cortex help determine which perceptual information is fated to be bound into the new episodic memory trace.

20.
Using an Hebbian Learning Rule for Multi-Class SVM Classifiers
Regarding biological visual classification, a recent series of experiments has highlighted the fact that data classification can be realized in the human visual cortex with latencies of about 100-150 ms, which, considering visual pathway latencies, is only compatible with a very specific processing architecture, described by the models of Thorpe et al. Surprisingly enough, this experimental evidence is consistent with algorithms derived from statistical learning theory. More precisely, there is a double link: on one hand, the so-called Vapnik theory offers tools to evaluate and analyze the biological model's performance; on the other hand, this model is an interesting front-end for algorithms derived from the Vapnik theory. The present contribution develops this idea, introducing a model derived from statistical learning theory and using the biological model of Thorpe et al. We evaluate its performance on a restricted sign language recognition experiment. This paper is intended for biologists as well as statisticians; consequently, basic material from both fields is reviewed.
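The Hebbian flavor of such a learning rule can be sketched on a multi-class toy problem. This is a generic outer-product Hebbian classifier on synthetic data (the prototypes, noise level, and learning rate are invented; it is not the authors' SVM construction, only an illustration of the w_ij += lr·x_i·y_j update):

```python
import numpy as np

rng = np.random.default_rng(1)

# three orthogonal class prototypes in a 6-d input space, plus noise
prototypes = np.eye(3, 6)
X = np.vstack([p + 0.1 * rng.normal(size=6)
               for p in prototypes for _ in range(10)])
labels = np.repeat(np.arange(3), 10)
Y = np.eye(3)[labels]                  # one-hot targets

# plain Hebbian outer-product rule: w_ij += lr * x_i * y_j
lr = 0.1
W = np.zeros((6, 3))
for x, y in zip(X, Y):
    W += lr * np.outer(x, y)

def classify(x):
    """Pick the class whose Hebbian weight column best matches x."""
    return int(np.argmax(x @ W))

accuracy = np.mean([classify(x) == c for x, c in zip(X, labels)])
```

Each weight column ends up proportional to the mean input of its class, so classification by the largest dot product behaves like a nearest-prototype rule, which is enough to separate well-spaced classes like these.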
