Similar Articles
20 similar articles found (search time: 31 ms)
1.
Nere A, Olcese U, Balduzzi D, Tononi G. PLoS ONE 2012, 7(5): e36958
In this work we investigate the possibilities offered by a minimal framework of artificial spiking neurons to be deployed in silico. Here we introduce a hierarchical network architecture of spiking neurons which learns to recognize moving objects in a visual environment and determine the correct motor output for each object. These tasks are learned through both supervised and unsupervised spike timing dependent plasticity (STDP). STDP is responsible for the strengthening (or weakening) of synapses in relation to pre- and post-synaptic spike times and has been described as a Hebbian paradigm taking place both in vitro and in vivo. We utilize a variation of STDP learning, called burst-STDP, which is based on the notion that, since spikes are expensive in terms of energy consumption, strong bursting activity carries more information than single (sparse) spikes. Furthermore, this learning algorithm takes advantage of homeostatic renormalization, which has been hypothesized to promote memory consolidation during NREM sleep. Using this learning rule, we design a spiking neural network architecture capable of object recognition, motion detection, attention towards important objects, and motor control outputs. We demonstrate the abilities of our design in a simple environment with distractor objects, multiple objects moving concurrently, and in the presence of noise. Most importantly, we show how this neural network is capable of performing these tasks using a simple leaky-integrate-and-fire (LIF) neuron model with binary synapses, making it fully compatible with state-of-the-art digital neuromorphic hardware designs. As such, the building blocks and learning rules presented in this paper appear promising for scalable fully neuromorphic systems to be implemented in hardware chips.
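The building block named in the abstract, a leaky integrate-and-fire neuron with binary synapses, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the time constants, threshold, and input normalization are assumptions:

```python
import numpy as np

def simulate_lif(spike_inputs, weights, tau_m=20.0, v_thresh=1.0,
                 v_reset=0.0, dt=1.0):
    """Minimal leaky integrate-and-fire neuron with binary synapses.

    spike_inputs: (T, N) array of 0/1 presynaptic spikes per time step
    weights:      (N,) array of binary synaptic weights (0 or 1)
    Returns the list of time steps at which the neuron fired.
    """
    v = v_reset
    out_spikes = []
    scale = 1.0 / max(weights.sum(), 1)  # normalize total drive (assumption)
    for t, spikes in enumerate(spike_inputs):
        v += dt * (-(v - v_reset) / tau_m)    # leak toward resting potential
        v += scale * float(weights @ spikes)  # binary synaptic input
        if v >= v_thresh:
            out_spikes.append(t)
            v = v_reset
    return out_spikes
```

With all ten inputs firing on every step, the normalized drive reaches threshold each step, so the neuron fires continuously; with silent inputs it never fires.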

2.
Inferior temporal (IT) cortex as the final stage of the ventral visual pathway is involved in visual object recognition. In our everyday life we need to recognize visual objects that are degraded by noise. Psychophysical studies have shown that the accuracy and speed of object recognition decrease as the amount of visual noise increases. However, the neural representation of ambiguous visual objects and the underlying neural mechanisms of such changes in the behavior are not known. Here, by recording the neuronal spiking activity of macaque monkeys’ IT, we explored the relationship between stimulus ambiguity and the IT neural activity. We found smaller amplitude, later onset, earlier offset and shorter duration of the response as visual ambiguity increased. All of these modulations were gradual and correlated with the level of stimulus ambiguity. We found that while category selectivity of IT neurons decreased with noise, it was preserved for a large extent of visual ambiguity. This noise tolerance for category selectivity in IT was lost at 60% noise level. Interestingly, while the response of the IT neurons to visual stimuli at 60% noise level was significantly larger than their baseline activity and full (100%) noise, it was no longer category selective. The latter finding shows a neural representation that signals the presence of a visual stimulus without signaling what it is. In general, these findings, in the context of a drift diffusion model, explain the neural mechanisms of perceptual accuracy and speed changes in the process of recognizing ambiguous objects.
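The drift diffusion interpretation invoked above can be sketched numerically: higher stimulus ambiguity maps onto a lower drift rate, which simultaneously lowers accuracy and lengthens reaction times. A hedged illustration with assumed parameter values, not the authors' model:

```python
import numpy as np

def drift_diffusion_trial(drift, bound=1.0, noise_sd=1.0, dt=0.001,
                          max_t=5.0, rng=None):
    """One drift-diffusion trial: evidence accumulates to a +/- bound.

    Returns (choice, reaction_time); choice is +1 or -1, or 0 on timeout.
    Greater stimulus ambiguity is modeled as a lower drift rate.
    """
    rng = rng if rng is not None else np.random.default_rng()
    x, t = 0.0, 0.0
    while t < max_t:
        x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if abs(x) >= bound:
            return (1 if x > 0 else -1), t
    return 0, max_t

def accuracy_and_rt(drift, n=500, seed=0):
    """Mean accuracy (fraction of +1 choices) and mean RT over n trials."""
    rng = np.random.default_rng(seed)
    trials = [drift_diffusion_trial(drift, rng=rng) for _ in range(n)]
    acc = sum(1 for choice, _ in trials if choice == 1) / n
    mean_rt = sum(rt for _, rt in trials) / n
    return acc, mean_rt
```

Running this with a high drift (unambiguous stimulus) versus a low drift (noisy stimulus) reproduces the psychophysical pattern: accuracy falls and reaction time rises with ambiguity.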

3.
Multisensory integration is synergistic—input from one sensory modality might modulate the behavioural response to another. Work in flies has shown that a small visual object presented in the periphery elicits innate aversive steering responses in flight, likely representing an approaching threat. Object aversion is switched to approach when paired with a plume of food odour. The ‘open-loop’ design of prior work facilitated the observation of changing valence. How does odour influence visual object responses when an animal has naturally active control over its visual experience? In this study, we use closed-loop feedback conditions, in which a fly's steering effort is coupled to the angular velocity of the visual stimulus, to confirm that flies steer toward or ‘fixate’ a long vertical stripe on the visual midline. They tend either to steer away from or ‘antifixate’ a small object or to disengage active visual control, which manifests as uncontrolled object ‘spinning’ within this experimental paradigm. Adding a plume of apple cider vinegar decreases the probability of both antifixation and spinning, while increasing the probability of frontal fixation for objects of any size, including a typically aversive small object.

4.
Despite the vital importance of our ability to accurately process and encode temporal information, the underlying neural mechanisms are largely unknown. We have previously described a theoretical framework that explains how temporal representations, similar to those reported in the visual cortex, can form in locally recurrent cortical networks as a function of reward-modulated synaptic plasticity. This framework allows networks of both linear and spiking neurons to learn the temporal interval between a stimulus and paired reward signal presented during training. Here we use a mean field approach to analyze the dynamics of non-linear stochastic spiking neurons in a network trained to encode specific time intervals. This analysis explains how recurrent excitatory feedback allows a network structure to encode temporal representations.

5.
To investigate scene segmentation in the visual system, we present a model of two reciprocally connected visual areas comprising spiking neurons. The peripheral area P is modeled similar to the primary visual cortex, while the central area C is modeled as an associative memory representing stimulus objects according to Hebbian learning. Without feedback from area C, spikes corresponding to stimulus representations in P are synchronized only locally (slow state). Feedback from C can induce fast oscillations and an increase of synchronization ranges (fast state). Presenting a superposition of several stimulus objects, scene segmentation happens on a time scale of hundreds of milliseconds by alternating epochs of the slow and fast state, where neurons representing the same object are simultaneously in the fast state. We relate our simulation results to various phenomena observed in neurophysiological experiments, such as stimulus-dependent synchronization of fast oscillations, synchronization on different time scales, ongoing activity, and attention-dependent neural activity.

6.
Concepts act as a cornerstone of human cognition. Humans and non-human primates learn conceptual relationships such as ‘same’, ‘different’, ‘larger than’, ‘better than’, among others. In all cases, the relationships have to be encoded by the brain independently of the physical nature of objects linked by the relation. Consequently, concepts are associated with high levels of cognitive sophistication and are not expected in an insect brain. Yet, various works have shown that the miniature brain of honeybees rapidly learns conceptual relationships involving visual stimuli. Concepts such as ‘same’, ‘different’, ‘above/below’ or ‘left/right’ are well mastered by bees. We review here evidence about concept learning in honeybees and discuss both its potential adaptive advantage and its possible neural substrates. The results reviewed here challenge the traditional view attributing supremacy to larger brains when it comes to the elaboration of concepts and have wide implications for understanding how brains can form conceptual relations.

7.
Recognizing an object takes just a fraction of a second, less than the blink of an eye. Applying multivariate pattern analysis, or “brain decoding”, methods to magnetoencephalography (MEG) data has allowed researchers to characterize, in high temporal resolution, the emerging representation of object categories that underlie our capacity for rapid recognition. Shortly after stimulus onset, object exemplars cluster by category in a high-dimensional activation space in the brain. In this emerging activation space, the decodability of exemplar category varies over time, reflecting the brain’s transformation of visual inputs into coherent category representations. How do these emerging representations relate to categorization behavior? Recently, it has been proposed that the distance of an exemplar representation from a categorical boundary in an activation space is critical for perceptual decision-making, and that reaction times should therefore correlate with distance from the boundary. The predictions of this distance hypothesis have been borne out in human inferior temporal cortex (IT), an area of the brain crucial for the representation of object categories. When viewed in the context of a time-varying neural signal, the optimal time to “read out” category information is when category representations in the brain are most decodable. Here, we show that the distance from a decision boundary through activation space, as measured using MEG decoding methods, correlates with reaction times for visual categorization during the period of peak decodability. Our results suggest that the brain begins to read out information about exemplar category at the optimal time for use in choice behaviour, and support the hypothesis that the structure of the representation for objects in the visual system is partially constitutive of the decision process in recognition.

8.
People learn modality-independent, conceptual representations from modality-specific sensory signals. Here, we hypothesize that any system that accomplishes this feat will include three components: a representational language for characterizing modality-independent representations, a set of sensory-specific forward models for mapping from modality-independent representations to sensory signals, and an inference algorithm for inverting forward models—that is, an algorithm for using sensory signals to infer modality-independent representations. To evaluate this hypothesis, we instantiate it in the form of a computational model that learns object shape representations from visual and/or haptic signals. The model uses a probabilistic grammar to characterize modality-independent representations of object shape, uses a computer graphics toolkit and a human hand simulator to map from object representations to visual and haptic features, respectively, and uses a Bayesian inference algorithm to infer modality-independent object representations from visual and/or haptic signals. Simulation results show that the model infers identical object representations when an object is viewed, grasped, or both. That is, the model’s percepts are modality invariant. We also report the results of an experiment in which different subjects rated the similarity of pairs of objects in different sensory conditions, and show that the model provides a very accurate account of subjects’ ratings. Conceptually, this research significantly contributes to our understanding of modality invariance, an important type of perceptual constancy, by demonstrating how modality-independent representations can be acquired and used. Methodologically, it provides an important contribution to cognitive modeling, particularly an emerging probabilistic language-of-thought approach, by showing how symbolic and statistical approaches can be combined in order to understand aspects of human perception.

9.
Our nervous system can efficiently recognize objects in spite of changes in contextual variables such as perspective or lighting conditions. Several lines of research have proposed that this ability for invariant recognition is learned by exploiting the fact that object identities typically vary more slowly in time than contextual variables or noise. Here, we study the question of how this “temporal stability” or “slowness” approach can be implemented within the limits of biologically realistic spike-based learning rules. We first show that slow feature analysis, an algorithm that is based on slowness, can be implemented in linear continuous model neurons by means of a modified Hebbian learning rule. This approach provides a link to the trace rule, which is another implementation of slowness learning. Then, we show analytically that for linear Poisson neurons, slowness learning can be implemented by spike-timing–dependent plasticity (STDP) with a specific learning window. By studying the learning dynamics of STDP, we show that for functional interpretations of STDP, it is not the learning window alone that is relevant but rather the convolution of the learning window with the postsynaptic potential. We then derive STDP learning windows that implement slow feature analysis and the “trace rule.” The resulting learning windows are compatible with physiological data both in shape and timescale. Moreover, our analysis shows that the learning window can be split into two functionally different components that are sensitive to reversible and irreversible aspects of the input statistics, respectively. The theory indicates that irreversible input statistics are not in favor of stable weight distributions but may generate oscillatory weight dynamics. Our analysis offers a novel interpretation for the functional role of STDP in physiological neurons.
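The pairwise STDP learning window discussed above (whose convolution with the postsynaptic potential the abstract identifies as the functionally relevant quantity) is commonly modeled as a pair of exponentials. A generic sketch with illustrative, not fitted, amplitudes and time constants:

```python
import numpy as np

def stdp_window(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Standard pairwise STDP window W(dt), with dt = t_post - t_pre in ms.

    Pre-before-post (dt > 0) potentiates; post-before-pre depresses.
    Amplitudes and time constants here are illustrative values only.
    """
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

def pairwise_weight_change(pre_spikes, post_spikes):
    """Sum the STDP window over all pre/post spike pairs."""
    dts = np.subtract.outer(post_spikes, pre_spikes)  # t_post - t_pre
    return stdp_window(dts).sum()
```

A causal pairing (pre at 0 ms, post at 5 ms) yields a positive weight change; the reversed order yields a negative one, matching the usual Hebbian sign convention.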

10.
Spike timing-dependent plasticity (STDP) has been shown to enable single neurons to detect repeatedly presented spatiotemporal spike patterns. This holds even when such patterns are embedded in equally dense random spiking activity, that is, in the absence of external reference times such as a stimulus onset. Here we demonstrate, both analytically and numerically, that STDP can also learn repeating rate-modulated patterns, which have received more experimental evidence, for example, through post-stimulus time histograms (PSTHs). Each input spike train is generated from a rate function using a stochastic sampling mechanism, chosen to be an inhomogeneous Poisson process here. Learning is feasible provided significant covarying rate modulations occur within the typical timescale of STDP (~10-20 ms) for sufficiently many inputs (~100 among 1000 in our simulations), a condition that is met by many experimental PSTHs. Repeated pattern presentations induce spike-time correlations that are captured by STDP. Despite imprecise input spike times and even variable spike counts, a single trained neuron robustly detects the pattern just a few milliseconds after its presentation. Therefore, temporal imprecision and Poisson-like firing variability are not an obstacle to fast temporal coding. STDP provides an appealing mechanism to learn such rate patterns, which, beyond sensory processing, may also be involved in many cognitive tasks.
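Sampling spike trains from a rate function via an inhomogeneous Poisson process, as described above, is commonly done with the thinning (Lewis-Shedler) algorithm. A minimal sketch; the function name and parameters are illustrative, and the rate function must be bounded by `rate_max`:

```python
import numpy as np

def inhomogeneous_poisson(rate_fn, t_max, rate_max, rng=None):
    """Sample spike times on [0, t_max) from rate_fn (Hz) by thinning.

    Candidates are drawn from a homogeneous Poisson process at rate_max
    and accepted with probability rate_fn(t) / rate_max.
    """
    rng = rng if rng is not None else np.random.default_rng()
    t, spikes = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)  # next candidate spike time
        if t >= t_max:
            return np.array(spikes)
        if rng.uniform() < rate_fn(t) / rate_max:  # thinning step
            spikes.append(t)
```

A PSTH-like rate modulation can then be modeled, e.g., as a low baseline rate with a brief high-rate bump, and fed to many input synapses of an STDP neuron.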

11.
Where neural information processing is concerned, there is no debate that spikes are the basic currency for transmitting information between neurons. How the brain actually uses them to encode information remains more controversial. It is commonly assumed that neuronal firing rate is the key variable, but the speed with which images can be analysed by the visual system poses a major challenge for rate-based approaches. We thus consider here the possibility that the brain makes use of the spatio-temporal structure of spike patterns to encode information. We then consider how such selective neural responses can be generated rapidly through spike-timing-dependent plasticity (STDP) and how these selectivities can be used for visual representation and recognition. Finally, we show how temporal codes and sparse representations may well arise from one another and explain some of the remarkable features of processing in the visual system.

12.
Reward-modulated spike-timing-dependent plasticity (STDP) has recently emerged as a candidate for a learning rule that could explain how behaviorally relevant adaptive changes in complex networks of spiking neurons could be achieved in a self-organizing manner through local synaptic plasticity. However, the capabilities and limitations of this learning rule could so far only be tested through computer simulations. This article provides tools for an analytic treatment of reward-modulated STDP, which allows us to predict under which conditions reward-modulated STDP will achieve a desired learning effect. These analytical results imply that neurons can learn through reward-modulated STDP to classify not only spatial but also temporal firing patterns of presynaptic neurons. They also can learn to respond to specific presynaptic firing patterns with particular spike patterns. Finally, the resulting learning theory predicts that even difficult credit-assignment problems, where it is very hard to tell which synaptic weights should be modified in order to increase the global reward for the system, can be solved in a self-organizing manner through reward-modulated STDP. This yields an explanation for a fundamental experimental result on biofeedback in monkeys by Fetz and Baker. In this experiment monkeys were rewarded for increasing the firing rate of a particular neuron in the cortex and were able to solve this extremely difficult credit assignment problem. Our model for this experiment relies on a combination of reward-modulated STDP with variable spontaneous firing activity. Hence it also provides a possible functional explanation for trial-to-trial variability, which is characteristic for cortical networks of neurons but has no analogue in currently existing artificial computing systems. 
In addition, our model demonstrates that reward-modulated STDP can be applied to all synapses in a large recurrent neural network without endangering the stability of the network dynamics.
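The core mechanism of reward-modulated STDP described in this abstract, STDP pairings feeding a slowly decaying eligibility trace that is converted into a weight change only when a reward signal arrives, can be sketched for a single synapse. All parameter values are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def reward_modulated_stdp(pre, post, rewards, dt=1.0, tau_e=500.0,
                          a_plus=0.01, a_minus=0.012, tau_stdp=20.0,
                          lr=1.0, w0=0.5):
    """Reward-modulated STDP on one synapse (illustrative parameters).

    pre, post: length-T arrays of 0/1 spikes; rewards: length-T scalar
    reward signal (e.g. dopamine relative to baseline). STDP pairings
    accumulate into an eligibility trace e(t); the weight only changes
    when reward is nonzero: dw = lr * reward * e * dt.
    """
    x_pre = x_post = e = 0.0
    w = w0
    for s_pre, s_post, r in zip(pre, post, rewards):
        # low-pass traces of pre- and postsynaptic spiking
        x_pre += -dt * x_pre / tau_stdp + s_pre
        x_post += -dt * x_post / tau_stdp + s_post
        # pairwise STDP term: pre-before-post potentiates, reverse depresses
        pairing = a_plus * x_pre * s_post - a_minus * x_post * s_pre
        e += -dt * e / tau_e + pairing      # decaying eligibility trace
        w += lr * r * e * dt                # reward gates the weight change
    return w
```

With a causal pre-post pairing followed shortly by a positive reward, the weight grows; with the reward signal held at zero, the same pairing leaves the weight untouched, which is the defining property of the three-factor rule.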

13.
Cortical networks show a large heterogeneity of neuronal properties. However, traditional coding models have focused on homogeneous populations of excitatory and inhibitory neurons. Here, we analytically derive a class of recurrent networks of spiking neurons that track a continuously varying input online close to optimally, based on two assumptions: 1) every spike is decoded linearly and 2) the network aims to reduce the mean-squared error between the input and the estimate. From this we derive a class of predictive coding networks that unifies encoding and decoding, and in which we can investigate the difference between homogeneous networks and heterogeneous networks, in which each neuron represents different features and has different spike-generating properties. We find that in this framework, ‘type 1’ and ‘type 2’ neurons arise naturally and networks consisting of a heterogeneous population of different neuron types are both more efficient and more robust against correlated noise. We make two experimental predictions: 1) we predict that integrators show strong correlations with other integrators and resonators are correlated with resonators, whereas the correlations are much weaker between neurons with different coding properties and 2) that ‘type 2’ neurons are more coherent with the overall network activity than ‘type 1’ neurons.

14.
Exposure to pleasant and rewarding visual stimuli can bias people's choices towards either immediate or delayed gratification. We hypothesised that this phenomenon might be based on carry-over effects from a fast, unconscious assessment of the abstract ‘time reference’ of a stimulus, i.e. how the stimulus relates to one's personal understanding and connotation of time. Here we investigated whether participants' post-experiment ratings of task-irrelevant, positive background visual stimuli for the dimensions ‘arousal’ (used as a control condition) and ‘time reference’ were related to differences in single-channel event-related potentials (ERPs) and whether they could be predicted from spatio-temporal patterns of ERPs. Participants performed a demanding foreground choice-reaction task while on each trial one task-irrelevant image (depicting objects, people and scenes) was presented in the background. Conventional ERP analyses as well as multivariate support vector regression (SVR) analyses were conducted to predict participants' subsequent ratings. We found that only SVR allowed both ‘arousal’ and ‘time reference’ ratings to be predicted during the first 200 ms post-stimulus. This demonstrates an early, automatic semantic stimulus analysis, which might be related to the high relevance of ‘time reference’ to everyday decision-making and preference formation.

15.
We present a hypothesis for how head-centered visual representations in primate parietal areas could self-organize through visually-guided learning, and test this hypothesis using a neural network model. The model consists of a competitive output layer of neurons that receives afferent synaptic connections from a population of input neurons with eye position gain modulated retinal receptive fields. The synaptic connections in the model are trained with an associative trace learning rule which has the effect of encouraging output neurons to learn to respond to subsets of input patterns that tend to occur close together in time. This network architecture and synaptic learning rule are hypothesized to promote the development of head-centered output neurons during periods of time when the head remains fixed while the eyes move. This hypothesis is demonstrated to be feasible, and each of the core model components described is tested and found to be individually necessary for successful self-organization.

16.
The most influential theory of learning to read is based on the idea that children rely on phonological decoding skills to learn novel words. According to the self-teaching hypothesis, each successful decoding encounter with an unfamiliar word provides an opportunity to acquire word-specific orthographic information that is the foundation of skilled word recognition. Therefore, phonological decoding acts as a self-teaching mechanism or ‘built-in teacher’. However, all previous connectionist models have learned the task of reading aloud through exposure to a very large corpus of spelling–sound pairs, where an ‘external’ teacher supplies the pronunciation of all words that should be learnt. Such a supervised training regimen is highly implausible. Here, we implement and test the developmentally plausible phonological decoding self-teaching hypothesis in the context of the connectionist dual process model. In a series of simulations, we provide a proof of concept that this mechanism works. The model was able to acquire word-specific orthographic representations for more than 25 000 words even though it started with only a small number of grapheme–phoneme correspondences. We then show how visual and phoneme deficits that are present at the outset of reading development can cause dyslexia in the course of reading development.

17.
To understand visual cognition, it is imperative to determine when, how and with what information the human brain categorizes the visual input. Visual categorization consistently involves at least an early and a late stage: the occipito-temporal N170 event-related potential related to stimulus encoding and the parietal P300 involved in perceptual decisions. Here we sought to understand how the brain globally transforms its representations of face categories from their early encoding to the later decision stage over the 400 ms time window encompassing the N170 and P300 brain events. We applied classification image techniques to the behavioral and electroencephalographic data of three observers who categorized seven facial expressions of emotion and report two main findings: (1) over the 400 ms time course, processing of facial features initially spreads bilaterally across the left and right occipito-temporal regions to dynamically converge onto the centro-parietal region; (2) concurrently, information processing gradually shifts from encoding common face features across all spatial scales (e.g., the eyes) to representing only the finer scales of the diagnostic features that are richer in useful information for behavior (e.g., the wide-open eyes in ‘fear’; the detailed mouth in ‘happy’). Our findings suggest that the brain refines its diagnostic representations of visual categories over the first 400 ms of processing by trimming a thorough encoding of features over the N170, to leave only the detailed information important for perceptual decisions over the P300.

18.
Spike timing dependent plasticity (STDP) likely plays an important role in forming and changing connectivity patterns between neurons in our brain. In a unidirectional synaptic connection between two neurons, it uses the causal relation between spiking activity of a presynaptic input neuron and a postsynaptic output neuron to change the strength of this connection. While the nature of STDP benefits unsupervised learning of correlated inputs, any incorporation of value into the learning process needs some form of reinforcement. Chemical neuromodulators such as dopamine or acetylcholine are thought to signal changes between external reward and internal expectation to many brain regions, including the basal ganglia. This effect is often modelled through a direct inclusion of the level of dopamine as a third factor into the STDP rule. While this gives the benefit of direct control over synaptic modification, it does not account for observed instantaneous effects in neuronal activity on application of dopamine agonists. Specifically, an instant facilitation of neuronal excitability in the striatum cannot be explained by the only indirect effect that dopamine-modulated STDP has on a neuron’s firing pattern. We therefore propose a model for synaptic transmission where the level of neuromodulator does not directly influence synaptic plasticity, but instead alters the relative firing causality between pre- and postsynaptic neurons. Through the direct effect on postsynaptic activity, our rule allows indirect modulation of the learning outcome even with unmodulated, two-factor STDP. However, it also does not prohibit joint operation together with three-factor STDP rules.

19.
Spike-timing-dependent plasticity (STDP) has been observed in many brain areas such as sensory cortices, where it is hypothesized to structure synaptic connections between neurons. Previous studies have demonstrated how STDP can capture spiking information at short timescales using specific input configurations, such as coincident spiking, spike patterns and oscillatory spike trains. However, the corresponding computation in the case of arbitrary input signals is still unclear. This paper provides an overarching picture of the algorithm inherent to STDP, tying together many previous results for commonly used models of pairwise STDP. For a single neuron with plastic excitatory synapses, we show how STDP performs a spectral analysis on the temporal cross-correlograms between its afferent spike trains. The postsynaptic responses and STDP learning window determine kernel functions that specify how the neuron "sees" the input correlations. We thus denote this unsupervised learning scheme as 'kernel spectral component analysis' (kSCA). In particular, the whole input correlation structure must be considered since all plastic synapses compete with each other. We find that kSCA is enhanced when weight-dependent STDP induces gradual synaptic competition. For a spiking neuron with a "linear" response and pairwise STDP alone, we find that kSCA resembles principal component analysis (PCA). However, plain STDP does not isolate correlation sources in general, e.g., when they are mixed among the input spike trains. In other words, it does not perform independent component analysis (ICA). Tuning the neuron to a single correlation source can be achieved when STDP is paired with a homeostatic mechanism that reinforces the competition between synaptic inputs. Our results suggest that neuronal networks equipped with STDP can process signals encoded in transient spiking activity on timescales of tens of milliseconds for typical STDP parameters.

20.
Frequency modulated (FM) sweeps are common in species-specific vocalizations, including human speech. Auditory neurons selective for the direction and rate of frequency change in FM sweeps are present across species, but the synaptic mechanisms underlying such selectivity are only beginning to be understood. Even less is known about mechanisms of experience-dependent changes in FM sweep selectivity. We present three network models of synaptic mechanisms of FM sweep direction and rate selectivity that explain experimental data: (1) The ‘facilitation’ model contains frequency selective cells operating as coincidence detectors, summing up multiple excitatory inputs with different time delays. (2) The ‘duration tuned’ model depends on interactions between delayed excitation and early inhibition. The strength of delayed excitation determines the preferred duration. Inhibitory rebound can reinforce the delayed excitation. (3) The ‘inhibitory sideband’ model uses frequency selective inputs to a network of excitatory and inhibitory cells. The strength and asymmetry of these connections results in neurons responsive to sweeps in a single direction of sufficient sweep rate. Variations of these properties can explain the diversity of rate-dependent direction selectivity seen across species. We show that the inhibitory sideband model can be trained using spike timing dependent plasticity (STDP) to develop direction selectivity from a non-selective network. These models provide a means to compare the proposed synaptic and spectrotemporal mechanisms of FM sweep processing and can be utilized to explore cellular mechanisms underlying experience- or training-dependent changes in spectrotemporal processing across animal models. Given the analogy between FM sweeps and visual motion, these models can serve a broader function in studying stimulus movement across sensory epithelia.
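The first ('facilitation') mechanism, coincidence detection over delayed excitatory inputs, can be sketched abstractly: an upward sweep activates low-frequency channels before high-frequency ones, and per-channel delays are arranged so that inputs arrive at the detector simultaneously only for the preferred direction. The delay, window and threshold values below are illustrative assumptions:

```python
def sweep_response(activation_times, delays, window=1.0, threshold=2):
    """Coincidence-detector sketch of FM direction selectivity
    (the 'facilitation' model; all numbers are illustrative).

    activation_times: time (ms) at which each frequency channel fires
    delays: per-channel transmission delays (ms), tuned so arrivals
    align at the detector only for the preferred sweep direction.
    Returns True if at least `threshold` inputs arrive within any
    `window`-ms-wide interval.
    """
    arrivals = sorted(t + d for t, d in zip(activation_times, delays))
    # maximum number of arrivals inside any window anchored at an arrival
    best = max(sum(1 for a in arrivals if s <= a <= s + window)
               for s in arrivals)
    return best >= threshold
```

With delays of 10, 5 and 0 ms on the low, middle and high channels, an upward sweep (activations at 0, 5, 10 ms) makes all three arrivals coincide at 10 ms, while the downward sweep spreads them 10 ms apart and the detector stays silent.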


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号