Similar Documents
20 similar documents found (search time: 15 ms)
1.
In the struggle for survival in a complex and dynamic environment, nature has developed a multitude of sophisticated sensory systems. To exploit the information these sensory systems provide, higher vertebrates reconstruct the spatio-temporal environment from each of the sensory systems at their disposal. That is, for each modality the animal computes a neuronal representation of the outside world, a monosensory neuronal map. Here we present a universal framework that allows one to calculate the specific layout of the involved neuronal network by means of a general mathematical principle, viz., stochastic optimality. To illustrate the use of this theoretical framework, we provide a step-by-step tutorial on how to apply our model. In doing so, we present a spatial and a temporal example of optimal stimulus reconstruction that underline the advantages of our approach. That is, given a known physical signal transmission and rudimentary knowledge of the detection process, our approach allows one to estimate the possible performance and to predict neuronal properties of biological sensory systems. Finally, information from different sensory modalities has to be integrated so as to gain a unified perception of reality for further processing, e.g., for distinct motor commands. We briefly discuss concepts of multimodal interaction and how a multimodal space can evolve by alignment of monosensory maps.

2.
Neurons must faithfully encode signals that can vary over many orders of magnitude despite having only limited dynamic ranges. For a correlated signal, this dynamic-range constraint can be relieved by subtracting away components of the signal that can be predicted from the past, a strategy known as predictive coding, which relies on learning the input statistics. However, the statistics of natural input signals can also vary over very short time scales, e.g., following saccades across a visual scene. To maintain a reduced transmission cost for signals with rapidly varying statistics, neuronal circuits implementing predictive coding must also rapidly adapt their properties. Experimentally, in different sensory modalities, sensory neurons have shown such adaptations within 100 ms of an input change. Here, we show first that linear neurons connected in a feedback inhibitory circuit can implement predictive coding. We then show that adding a rectification nonlinearity to such a feedback inhibitory circuit allows it to automatically adapt and approximate the performance of an optimal linear predictive coding network over a wide range of inputs, while keeping its underlying temporal and synaptic properties unchanged. We demonstrate that the resulting changes to the linearized temporal filters of this nonlinear network match the fast adaptations observed experimentally in different sensory modalities and in different vertebrate species. Therefore, the nonlinear feedback inhibitory network can provide automatic adaptation to rapidly varying signals, maintaining the dynamic range necessary for accurate neuronal transmission of natural inputs.
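The dynamic-range argument above can be illustrated with a minimal sketch, assuming a hypothetical AR(1) input and a fixed linear predictor (not the authors' adaptive circuit model): transmitting only the residual that the past fails to predict drastically reduces the variance the neuron must encode.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated (AR(1)) input signal: x_t = a * x_{t-1} + noise_t
a, n = 0.95, 10000
noise = rng.normal(size=n)
x = np.empty(n)
x[0] = noise[0]
for t in range(1, n):
    x[t] = a * x[t - 1] + noise[t]

# Predictive coding: transmit only the part of x_t not predicted
# from the past.  For an AR(1) signal the optimal linear predictor
# is a * x_{t-1}; the residual has much smaller variance, relieving
# the neuron's dynamic-range constraint.
prediction = a * x[:-1]
residual = x[1:] - prediction

print(np.var(x))         # variance of the raw signal
print(np.var(residual))  # variance of the transmitted residual
```

For a = 0.95 the raw signal's variance is roughly ten times the residual's, which is the transmission saving predictive coding buys on correlated input.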

3.
We present a network model of visual map development in layer 4 of primary visual cortex. Our model comprises excitatory and inhibitory spiking neurons. The input to the network consists of correlated spike trains that mimic the activity of neurons in the lateral geniculate nucleus (LGN). An activity-driven Hebbian learning mechanism governs the development of both the network's lateral connectivity and its feedforward projections from LGN to cortex. Plasticity of inhibitory synapses is included in the model to control overall cortical activity. Even without feedforward input, Hebbian modification of the excitatory lateral connections can lead to the development of an intracortical orientation map. We have found that such an intracortical map can guide the development of feedforward connections from LGN to cortical simple cells, so that the structure of the final feedforward orientation map is predetermined by the intracortical map. In a scenario in which left- and right-eye geniculocortical inputs develop sequentially, one after the other, the resulting maps are therefore very similar, provided the intracortical connectivity remains unaltered. This may explain the outcome of so-called reverse lid-suture experiments, in which animals are reared so that the two eyes never receive input at the same time, yet the orientation maps measured separately for the two eyes are nearly identical. Received: 20 December 1999 / Accepted in revised form: 9 June 2000

4.
Protein topology representations such as residue contact maps are an important intermediate step towards ab initio prediction of protein structure, but the problem of predicting reliable contact maps is far from solved. One of the main pitfalls of existing contact map predictors is that they generally predict unphysical maps, i.e., maps that cannot be embedded into three-dimensional structures or, at best, violate a number of basic constraints observed in real protein structures, such as the maximum number of contacts for a residue. Here, we focus on the problem of learning to predict more "physical" contact maps. We do so by first predicting contact maps with a traditional system (XXStout), and then filtering these maps with an ensemble of artificial neural networks. The filter receives as input not only the bare predicted map, but also a number of global or long-range features extracted from it. In a rigorous cross-validation test, we show that the filter greatly improves the predicted maps it receives as input. CASP7 results, on which we report here, corroborate this finding. Importantly, since the approach we present here is fully modular, it may benefit any other ab initio contact map predictor.

5.
In a multisensory task, human adults integrate information from different sensory modalities (behaviorally, in an optimal Bayesian fashion), while children mostly rely on a single sensory modality for decision making. The reason behind this change of behavior over age, and the process behind learning the statistics required for optimal integration, are still unclear and have not been explained by conventional Bayesian modeling. We propose an interactive multisensory learning framework that makes no prior assumptions about the sensory models. In this framework, learning in every modality and in their joint space proceeds in parallel using a single-step reinforcement learning method. A simple statistical test on confidence intervals on the means of the reward distributions is used to select the most informative source of information among the individual modalities and the joint space. Analyses of the method and simulation results on a multimodal localization task show that the learning system autonomously starts with sensory selection and gradually switches to sensory integration. This is because relying more on individual modalities (i.e., selection) at early learning steps (childhood) is more rewarding than favoring decisions learned in the joint space: the smaller state space of each modality leads to faster learning in that modality. In contrast, after gaining sufficient experience (adulthood), the quality of learning in the joint space matures, while learning in the individual modalities suffers from insufficient accuracy due to perceptual aliasing. This results in a tighter confidence interval for the joint space and consequently causes a smooth shift from selection to integration. This suggests that sensory selection and integration are emergent behaviors and that both are outputs of a single reward-maximization process; i.e., the transition is not a preprogrammed phenomenon.
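The selection-to-integration transition can be sketched with a toy confidence-interval test. Everything below is an illustrative assumption, not the paper's model: the reward histories are invented, and "most informative" is operationalized as the source with the highest lower confidence bound on mean reward.

```python
import numpy as np

rng = np.random.default_rng(1)

def ci(rewards, z=1.96):
    """95% confidence interval on the mean of a reward sample."""
    m = rewards.mean()
    half = z * rewards.std(ddof=1) / np.sqrt(len(rewards))
    return m - half, m + half

# Hypothetical reward histories: individual modalities learn fast
# (small state space) but plateau; the joint space learns slowly
# but eventually becomes more reliable.
early = {"vision": rng.normal(0.6, 0.1, 50),
         "audio":  rng.normal(0.5, 0.1, 50),
         "joint":  rng.normal(0.4, 0.3, 50)}
late  = {"vision": rng.normal(0.6, 0.2, 500),
         "audio":  rng.normal(0.5, 0.2, 500),
         "joint":  rng.normal(0.8, 0.1, 500)}

def select(sources):
    # Pick the source with the highest CI lower bound: a proxy for
    # "most informative" that penalizes low and uncertain reward alike.
    return max(sources, key=lambda k: ci(sources[k])[0])

print(select(early))  # early on, a single modality wins ("selection")
print(select(late))   # later, the joint space wins ("integration")
```

The shift emerges from the same maximization rule applied at both stages; only the reward statistics change, mirroring the abstract's claim that selection and integration need not be separately preprogrammed.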

6.
The stimulation of the brachial plexus and sciatic nerve resulted in a precisely timed, synchronous volley of inputs to ventroposterolateral (VPL) neurons from either forelimb or hindlimb. Such stimulation activated sensory fibers of all modalities and was therefore modality-nonspecific. Extracellular recordings of modality-nonspecific single-unit evoked responses from VPL showed that 13% of VPL projection neurons responded to both forelimb and hindlimb inputs. We also demonstrated mutually inhibitory interactions between inputs from forelimb and hindlimb in 45% of VPL units. Unlike the somatotopic map produced by others using modality-specific inputs, the modality-nonspecific evoked response map of VPL had a broadly overlapping distribution of evoked responses. This was especially true for the more caudal aspects of VPL. When the delivery of stimuli was appropriately timed, forelimb inputs caused the inhibition of responses to hindlimb stimulation; similarly, hindlimb inputs inhibited responses to forelimb stimulation. The inhibition had a variable duration that may reflect a combination of processes, including recurrent inhibitory collateral input from the thalamic reticular nucleus (TRN) or an intrinsic hyperpolarizing inhibitory afterpotential of the VPL neuron. The presence of an extensive converging input on VPL neurons, and an inhibitory correlate to this overlapping of inputs, may explain the shifting of VPL maps following lesions of peripheral nerve, spinal cord, or dorsal column nuclei (DCN).

7.
Issues in the classification of multimodal communication signals
Communication involves complex behavior in multiple sensory channels, or "modalities." We provide an overview of multimodal communication and its costs and benefits, place examples of signals and displays from an array of taxa, sensory systems, and functions into our signal classification system, and consider issues surrounding the categorization of multimodal signals. The broadest level of classification is between signals with redundant and nonredundant components, with finer distinctions in each category. We recommend that researchers gather information on responses to each component of a multimodal signal as well as the response to the signal as a whole. We discuss the choice of categories, whether to categorize signals on the basis of the signal or the response, and how to classify signals if data are missing. The choice of behavioral assay may influence the outcome, as may the context of the communicative event. We also consider similarities and differences between multimodal and unimodal composite signals and signals that are sequentially, rather than simultaneously, multimodal.

8.
Implicit multisensory associations influence voice recognition
Natural objects provide partially redundant information to the brain through different sensory modalities. For example, voices and faces both give information about the speech content, age, and gender of a person. Thanks to this redundancy, multimodal recognition is fast, robust, and automatic. In unimodal perception, however, only part of the information about an object is available. Here, we addressed whether, even under conditions of unimodal sensory input, crossmodal neural circuits that have been shaped by previous associative learning become activated and underpin a performance benefit. We measured brain activity with functional magnetic resonance imaging before, during, and after participants learned to associate either sensory-redundant stimuli, i.e., voices and faces, or arbitrary multimodal combinations, i.e., voices and written names, or ring tones and cell phones or brand names of these cell phones. After learning, participants were better at recognizing unimodal auditory voices that had been paired with faces than those paired with written names, and the association of voices with faces resulted in increased functional coupling between voice and face areas. No such effects were observed for ring tones that had been paired with cell phones or names. These findings demonstrate that brief exposure to ecologically valid and sensory-redundant stimulus pairs, such as voices and faces, induces specific multisensory associations. Consistent with predictive coding theories, these associative representations thereafter become available for unimodal perception and facilitate object recognition. These data suggest that, for natural objects, effective predictive signals can be generated across sensory systems and proceed by optimization of functional connectivity between specialized cortical sensory modules.

9.
We analyse a Markovian algorithm for the formation of topologically correct feature maps proposed earlier by Kohonen. The maps, from a space of input signals onto an array of formal neurons, are generated by a learning scheme driven by a random sequence of input samples. The learning is described by an equivalent Fokker-Planck equation. Convergence to an equilibrium map can be ensured by a criterion for the time dependence of the learning step size. We investigate the stability of the equilibrium map and calculate the fluctuations around it. We also study an instability responsible for a phenomenon termed by Kohonen the "automatic selection of feature dimensions."
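The algorithm analyzed above, with the kind of decaying learning step size that the convergence criterion concerns, can be sketched as a toy 1-D Kohonen map; the schedule constants below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D Kohonen map: 20 units learn to cover the interval [0, 1].
n_units, n_steps = 20, 5000
w = rng.uniform(0, 1, n_units)               # reference vectors

for t in range(n_steps):
    x = rng.uniform(0, 1)                    # random input sample
    winner = np.argmin(np.abs(w - x))        # best-matching unit
    eps = 0.5 * (1.0 - t / n_steps) + 0.01   # decaying step size
    sig = 3.0 * (1.0 - t / n_steps) + 0.5    # shrinking neighborhood
    h = np.exp(-((np.arange(n_units) - winner) ** 2) / (2 * sig**2))
    w += eps * h * (x - w)                   # neighborhood-weighted update

# After convergence the map is topologically ordered: the reference
# vectors are monotonic along the chain of units.
print(np.all(np.diff(w) > 0) or np.all(np.diff(w) < 0))
```

The stochastic updates make `w` a Markov chain driven by the input samples, which is exactly the setting in which the abstract's Fokker-Planck description and step-size convergence criterion apply.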

10.
11.
A self-organizing neural network model called LISSOM for the synergetic development of afferent and lateral connections in cortical feature maps is presented. The weight adaptation process is purely activity-dependent, unsupervised, and local. The afferent input weights self-organize into a topological map of the input space. At the same time, the lateral interconnection weights adapt, and a unique lateral interaction profile develops for each neuron. Weak lateral connections die off, leaving a pattern of connections that represents the significant long-term correlations of activity on the feature map. LISSOM demonstrates how self-organization can bootstrap based on input information only, without global supervision or predetermined lateral interaction. The model gives rise to a nontopographically organized lateral connectivity similar to that observed in the mammalian neocortex as illustrated by a LISSOM model of ocular dominance column formation in the primary visual cortex. In addition, LISSOM can potentially account for the development of multiple maps of different modalities on the same undifferentiated cortical architecture. Received: 12 May 1993/Accepted in revised form: 22 September 1993

12.
Second-year undergraduate students from the 2008, 2009, and 2010 cohorts were asked to respond to a questionnaire to determine their learning style preferences: the VARK questionnaire (where V is visual, A is aural, R is reading-writing, and K is kinesthetic), which was translated into Spanish by the author. The translated questionnaire was tested for wording comprehension before its application in the actual study. Using the results of the VARK questionnaire, students were classified as unimodal or multimodal and, according to the first preferred sensory modality used for learning, as V, A, R, or K learners. Multiple-choice questions (MCQs) and problems that required simple arithmetic calculations (arithmetic-type questions) were administered to the students. The relation between the main sensory modality used for learning and the grades obtained in each question type was analyzed in both unimodal and multimodal students. It was found that R unimodal students performed significantly better on arithmetic questions than A and K unimodal students (P < 0.001 by a Bonferroni multiple-comparison test after ANOVA). R unimodal students also performed better than R multimodal students on arithmetic questions (P = 0.02 by a Mann-Whitney U-test). However, no differences were observed on MCQs in either unimodal or multimodal students with different first sensory modalities used for learning. When MCQ scores between unimodal and multimodal students were compared, no differences were detected. It was concluded that the sensory learning style affects student outcomes on arithmetic questions but not on MCQs.
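The statistical pipeline described above (ANOVA followed by Bonferroni-corrected pairwise comparisons, plus a Mann-Whitney U test) can be sketched with SciPy. The group means, spreads, and sample sizes below are invented for illustration; they are not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical grade samples (0-100 scale) mimicking the comparison
# reported in the study: R (reading-writing) unimodal learners vs.
# A (aural) and K (kinesthetic) unimodal learners.
grades = {"R": rng.normal(75, 10, 40),
          "A": rng.normal(62, 10, 40),
          "K": rng.normal(60, 10, 40)}

# One-way ANOVA across the three groups...
f, p_anova = stats.f_oneway(*grades.values())
print(f"ANOVA: p = {p_anova:.4f}")

# ...followed by pairwise t-tests with a Bonferroni correction
# (3 comparisons, so each p-value is multiplied by 3 and capped at 1).
pairs = [("R", "A"), ("R", "K"), ("A", "K")]
for g1, g2 in pairs:
    t, p = stats.ttest_ind(grades[g1], grades[g2])
    print(f"{g1} vs {g2}: corrected p = {min(p * len(pairs), 1.0):.4f}")

# Nonparametric alternative, as used for the unimodal-vs-multimodal
# comparison in the study: the Mann-Whitney U test.
u, p_mw = stats.mannwhitneyu(grades["R"], grades["A"])
print(f"Mann-Whitney U (R vs A): p = {p_mw:.4f}")
```

Bonferroni is the most conservative multiple-comparison correction; with only three groups it costs little power, which fits the study's design.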

13.
Auditory cortex mapmaking: principles, projections, and plasticity
Schreiner CE, Winer JA. Neuron 2007, 56(2):356-365
Maps of sensory receptor epithelia and computed features of the sensory environment are common elements of auditory, visual, and somatic sensory representations from the periphery to the cerebral cortex. Maps enhance the understanding of normal neural organization and its modification by pathology and experience. They underlie the derivation of the computational principles that govern sensory processing and the generation of perception. Despite their intuitive explanatory power, the functions of and rules for organizing maps and their plasticity are not well understood. Some puzzles of auditory cortical map organization are that few complete receptor maps are available and that even fewer computational maps are known beyond primary cortical areas. Neuroanatomical evidence suggests equally organized connectional patterns throughout the cortical hierarchy that might underlie map stability. Here, we consider the implications of auditory cortical map organization and its plasticity and evaluate the complementary role of maps in representation and computation from an auditory perspective.

14.
In this paper, we propose a modification of Kohonen's self-organizing map (SOM) algorithm. When the input signal space is not convex, some reference vectors of the SOM can protrude from it; the input signal space must be convex for all reference vectors to remain on it under arbitrary updates. We therefore introduce a projection learning method that fixes the reference vectors onto the input signal space, so that this version of the SOM can be applied to a non-convex input signal space. We applied SOM with projection learning to a direction map observed in the primary visual cortex of area 17 of ferrets and area 18 of cats. Neurons in those areas respond selectively to the orientation of edges or line segments and to their direction of motion, and some iso-orientation domains are subdivided into regions selective for opposite directions of motion. The abstract input signal space of the direction map, described in the manner proposed by Obermayer and Blasdel [(1993) J Neurosci 13: 4114–4129], is not convex. We successfully used SOM with projection learning to reproduce a direction-orientation joint map. Received: 29 September 2000 / Accepted: 7 March 2001
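The projection-learning idea can be sketched on a toy non-convex input space. The unit circle, the ring topology of the units, and the schedule constants are illustrative assumptions, not the paper's direction-map setup: after each SOM update, the reference vectors are projected back onto the input manifold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Input space: the unit circle, a non-convex subset of R^2.  Plain
# SOM updates pull reference vectors off the circle (toward its
# interior); projection learning pushes them back after each step.
n_units, n_steps = 16, 4000
theta0 = rng.uniform(0, 2 * np.pi, n_units)
w = np.column_stack([np.cos(theta0), np.sin(theta0)])

def project(v):
    """Project reference vectors back onto the unit circle."""
    return v / np.linalg.norm(v, axis=1, keepdims=True)

idx = np.arange(n_units)
for t in range(n_steps):
    phi = rng.uniform(0, 2 * np.pi)
    x = np.array([np.cos(phi), np.sin(phi)])          # sample on circle
    winner = np.argmin(np.linalg.norm(w - x, axis=1))
    eps = 0.5 * (1.0 - t / n_steps) + 0.01            # decaying step size
    sig = 2.0 * (1.0 - t / n_steps) + 0.5             # shrinking neighborhood
    d = np.minimum(np.abs(idx - winner),
                   n_units - np.abs(idx - winner))    # ring distance
    h = np.exp(-d**2 / (2 * sig**2))
    w = project(w + eps * h[:, None] * (x - w))       # update, then project

# Every reference vector now lies exactly on the input manifold.
print(np.allclose(np.linalg.norm(w, axis=1), 1.0))
```

Without the `project` step the averaging in the update rule would leave the vectors strictly inside the circle, which is exactly the protrusion problem the paper's method addresses.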

15.
In this paper we propose a novel saliency-based computational model for visual attention. This model processes both top-down (goal-directed) and bottom-up information. Processing in the top-down channel creates the so-called skin conspicuity map and emulates the visual search for human faces performed by humans; this is clearly a goal-directed task but is generic enough to be context independent. Processing in the bottom-up information channel follows the principles set by Itti et al. but deviates from them by computing the orientation, intensity, and color conspicuity maps within a unified multi-resolution framework based on wavelet subband analysis. In particular, we apply a wavelet-based approach for efficient computation of the topographic feature maps. Given that wavelets and multiresolution theory are naturally connected, the use of wavelet decomposition for mimicking the center-surround process in humans is an obvious choice. However, our implementation goes further: we utilize the wavelet decomposition for inline computation of the features (such as orientation angles) that are used to create the topographic feature maps. The bottom-up topographic feature maps and the top-down skin conspicuity map are then combined through a sigmoid function to produce the final saliency map. A prototype of the proposed model was realized on the TMDSDMK642-0E DSP platform as an embedded system allowing real-time operation. For evaluation purposes, in terms of perceived visual quality and video compression improvement, a ROI-based video compression setup was followed. Extended experiments on both MPEG-1 and low bit-rate MPEG-4 video encoding showed significant improvement in video compression efficiency without perceived deterioration in visual quality.

16.
One way in which fish can move around efficiently is to learn and remember a spatial map of their environment. This can be a relatively simple process where, for example, sequences of landmarks are learned. However, more complex spatial representations can be generated by integrating multiple pieces of information. In this review, we consider what types of information fish use to generate a spatial map; for instance, beacons (single landmarks) that signal a specific location, or learned geometric relationships between multiple landmarks that allow fish to guide their movements. Owing to the diversity of fish species and the broad range of environments that they inhabit, there is considerable diversity in the maps that they develop and the sensory systems that they use to detect spatial information. This chapter uses a series of examples to investigate the types of spatial information that fish encode, for instance, how they map three-dimensional space, how they make use of different sensory modalities, and where this information might be processed. We also highlight the versatility of short-range orientation in fish, and discuss a number of similarities between the mapping mechanisms used by fish and terrestrial vertebrates.

17.
In most sensory systems, the sensory cortex is the place where sensation approaches perception. As described in this review, olfaction is no different. The olfactory system includes both primary and higher order cortical regions. These cortical structures perform computations that take highly analytical afferent input and synthesize it into configural odor objects. Cortical plasticity plays an important role in this synthesis and may underlie olfactory perceptual learning. Olfactory cortex is also involved in odor memory and association of odors with multimodal input and contexts. Finally, the olfactory cortex serves as an important sensory gate, modulating information throughput based on recent experience and behavioral state.

18.
The presence of "maps" in sensory cortex is a hallmark of the mammalian nervous system, but the functional significance of topographic organization has been called into question by physiological studies claiming that patterns of neural behavioral activity transcend topographic boundaries. This paper discusses recent behavioral and physiological studies suggesting that, when animals or human subjects learn perceptual tasks, the neural modifications associated with the learning are distributed according to the spatial arrangement of the primary sensory cortical map. Topographical cortical representations of sensory events, therefore, appear to constitute a true structural framework for information processing and plasticity.

19.
Humans and animals are able to learn complex behaviors based on a massive stream of sensory information from different modalities. Early animal studies have identified learning mechanisms that are based on reward and punishment such that animals tend to avoid actions that lead to punishment whereas rewarded actions are reinforced. However, most algorithms for reward-based learning are only applicable if the dimensionality of the state-space is sufficiently small or its structure is sufficiently simple. Therefore, the question arises how the problem of learning on high-dimensional data is solved in the brain. In this article, we propose a biologically plausible generic two-stage learning system that can directly be applied to raw high-dimensional input streams. The system is composed of a hierarchical slow feature analysis (SFA) network for preprocessing and a simple neural network on top that is trained based on rewards. We demonstrate by computer simulations that this generic architecture is able to learn quite demanding reinforcement learning tasks on high-dimensional visual input streams in a time that is comparable to the time needed when an explicit highly informative low-dimensional state-space representation is given instead of the high-dimensional visual input. The learning speed of the proposed architecture in a task similar to the Morris water maze task is comparable to that found in experimental studies with rats. This study thus supports the hypothesis that slowness learning is one important unsupervised learning principle utilized in the brain to form efficient state representations for behavioral learning.
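Stage 1 of the proposed architecture, slow feature analysis, can be sketched in its linear form on a toy two-dimensional input. The paper uses a hierarchical SFA network on raw visual streams; the two-signal mixing below is an illustrative assumption. Linear SFA minimizes the variance of the output's temporal derivative subject to unit output variance, a generalized eigenproblem.

```python
import numpy as np

# Toy input: a slow latent signal mixed with a fast one in two channels.
t = np.linspace(0, 4 * np.pi, 2000)
slow = np.sin(t)                 # slowly varying latent
fast = np.sin(25 * t)            # quickly varying latent
X = np.column_stack([slow + 0.5 * fast, slow - 0.5 * fast])

# Linear SFA: minimize w' Cdot w subject to w' C w = 1, i.e. take the
# eigenvector of C^{-1} Cdot with the smallest eigenvalue.
Xc = X - X.mean(axis=0)
dX = np.diff(Xc, axis=0)
C = Xc.T @ Xc / len(Xc)          # covariance of the signal
Cdot = dX.T @ dX / len(dX)       # covariance of its temporal derivative
eigvals, eigvecs = np.linalg.eig(np.linalg.solve(C, Cdot))
wslow = eigvecs[:, np.argmin(eigvals.real)].real

y = Xc @ wslow                   # slowest extracted feature
y /= y.std()

# The slow feature tracks the slow latent and ignores the fast one,
# producing the compact state representation the reward stage needs.
corr_slow = abs(np.corrcoef(y, slow)[0, 1])
corr_fast = abs(np.corrcoef(y, fast)[0, 1])
print(corr_slow, corr_fast)
```

The extracted feature can then serve as the low-dimensional state fed to the reward-trained network on top, which is the division of labor the two-stage system relies on.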

20.
We introduce an unsupervised competitive learning rule, called the extended Maximum Entropy learning Rule (eMER), for topographic map formation. Unlike Kohonen's Self-Organizing Map (SOM) algorithm, the presence of a neighborhood function is not a prerequisite for achieving topology-preserving mappings; instead, it is intended (1) to speed up the learning process and (2) to perform nonparametric regression. We show that, when the neighborhood function vanishes, the neural weight density at convergence approaches a linear function of the input density, so that the map can be regarded as a nonparametric model of the input density. We apply eMER to density estimation and compare its performance with that of the SOM algorithm and the variable kernel method. Finally, we apply the "batch" version of eMER to nonparametric projection pursuit regression and compare its performance with that of back-propagation learning, projection pursuit learning, constrained topological mapping, and the Heskes and Kappen approach. Received: 12 August 1996 / Accepted in revised form: 9 April 1997
