Similar articles
20 similar articles retrieved (search time: 234 ms)
1.
This article describes a neural network model that addresses the acquisition of speaking skills by infants and the subsequent motor-equivalent production of speech sounds. The model learns two mappings during a babbling phase. A phonetic-to-orosensory mapping specifies a vocal tract target for each speech sound; these targets take the form of convex regions in orosensory coordinates defining the shape of the vocal tract. The babbling process wherein these convex region targets are formed explains how an infant can learn phoneme-specific and language-specific limits on acceptable variability of articulator movements. The model also learns an orosensory-to-articulatory mapping wherein cells coding desired movement directions in orosensory space learn articulator movements that achieve these orosensory movement directions. The resulting mapping provides a natural explanation for the formation of coordinative structures. This mapping also makes efficient use of redundancy in the articulator system, thereby providing the model with motor-equivalent capabilities. Simulations verify the model's ability to compensate, automatically and without new learning, for constraints or perturbations applied to the articulators, and to explain contextual variability seen in human speech production. (Supported in part by AFOSR F49620-92-J-0499.)
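The motor-equivalence claim can be illustrated with a toy direction-to-articulator mapping. The sketch below is hypothetical: the random Jacobian J and the pseudoinverse-based mapping are illustrative stand-ins, not the model's learned babbling-derived mappings. It only shows how redundancy lets the same orosensory movement direction be achieved even when some articulators are blocked, with no relearning.

```python
import numpy as np

# Hedged sketch: motor-equivalent control of a redundant articulator system.
# J and articulator_velocity are illustrative stand-ins, not the model's weights.
rng = np.random.default_rng(0)

n_articulators, n_orosensory = 7, 3
J = rng.standard_normal((n_orosensory, n_articulators))  # orosensory change per articulator change

def articulator_velocity(desired_direction, blocked=None):
    """Map a desired orosensory movement direction to articulator movements.

    Redundancy (more articulators than orosensory dimensions) lets the same
    orosensory direction be achieved even when some articulators are blocked.
    """
    J_eff = J.copy()
    if blocked is not None:
        J_eff[:, blocked] = 0.0          # a clamped articulator contributes nothing
    return np.linalg.pinv(J_eff) @ desired_direction

target_dir = np.array([1.0, 0.0, -0.5])
free = articulator_velocity(target_dir)
perturbed = articulator_velocity(target_dir, blocked=[0, 1])  # e.g. a bite block

# Both commands produce (nearly) the same orosensory movement direction:
print("unconstrained:", np.round(J @ free, 3))
print("two articulators blocked:", np.round(J @ perturbed, 3))
```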

2.
Even in the absence of sensory stimulation the brain is spontaneously active. This background “noise” seems to be the dominant cause of the notoriously high trial-to-trial variability of neural recordings. Recent experimental observations have extended our knowledge of trial-to-trial variability and spontaneous activity in several directions: (1) trial-to-trial variability systematically decreases following the onset of a sensory stimulus or the start of a motor act; (2) spontaneous activity states in sensory cortex outline the region of evoked sensory responses; (3) across development, spontaneous activity aligns itself with typical evoked activity patterns; (4) the spontaneous brain activity prior to the presentation of an ambiguous stimulus predicts how the stimulus will be interpreted. At present it is unclear how these observations relate to each other and how they arise in cortical circuits. Here we demonstrate that all of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN), which learns a predictive model of its sensory environment. The SORN comprises recurrently coupled populations of excitatory and inhibitory threshold units and learns via a combination of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. Similar to balanced network architectures, units in the network show irregular activity and variable responses to inputs. Additionally, however, the SORN exhibits sequence learning abilities matching recent findings from visual cortex and the network’s spontaneous activity reproduces the experimental findings mentioned above. Intriguingly, the network’s behaviour is reminiscent of sampling-based probabilistic inference, suggesting that correlates of sampling-based inference can develop from the interaction of STDP and homeostasis in deterministic networks. We conclude that key observations on spontaneous brain activity and the variability of neural responses can be accounted for by a simple deterministic recurrent neural network which learns a predictive model of its sensory environment via a combination of generic neural plasticity mechanisms.
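A minimal sketch of a SORN-style update may help fix ideas: binary threshold units, discrete-time STDP on the excitatory recurrent weights, synaptic normalization, and intrinsic plasticity driving each unit toward a target rate. All sizes, rates, and initializations below are illustrative assumptions, not the published model's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

N_E, N_I = 200, 40            # excitatory / inhibitory threshold units (assumed sizes)
eta_stdp, eta_ip = 0.004, 0.01
h_target = 0.1                # target firing rate for intrinsic plasticity

# Sparse random weights (illustrative initialization)
W_EE = rng.random((N_E, N_E)) * (rng.random((N_E, N_E)) < 0.05)
W_EI = rng.random((N_E, N_I)) * 0.1
W_IE = rng.random((N_I, N_E)) * 0.1
np.fill_diagonal(W_EE, 0.0)

T_E = rng.random(N_E) * 0.5   # excitatory thresholds
T_I = rng.random(N_I) * 0.5
x = (rng.random(N_E) < h_target).astype(float)   # binary excitatory activity
y = np.zeros(N_I)

def step(x, y, u_ext):
    """One discrete update of the binary recurrent network."""
    x_new = ((W_EE @ x - W_EI @ y + u_ext - T_E) > 0).astype(float)
    y_new = ((W_IE @ x_new - T_I) > 0).astype(float)
    return x_new, y_new

def plasticity(W_EE, T_E, x_prev, x_new):
    """STDP on E->E weights plus homeostatic mechanisms."""
    # Discrete-time STDP: potentiate pre(t-1)->post(t), depress post(t-1)->pre(t)
    W_EE += eta_stdp * (np.outer(x_new, x_prev) - np.outer(x_prev, x_new))
    W_EE = np.clip(W_EE, 0.0, None)
    # Synaptic normalization: incoming excitatory weights sum to a constant
    W_EE /= W_EE.sum(axis=1, keepdims=True) + 1e-12
    # Intrinsic plasticity: drive each unit toward the target rate
    T_E += eta_ip * (x_new - h_target)
    return W_EE, T_E

for t in range(1000):
    u_ext = (rng.random(N_E) < 0.02).astype(float)   # weak random external drive
    x_new, y = step(x, y, u_ext)
    W_EE, T_E = plasticity(W_EE, T_E, x, x_new)
    x = x_new
```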

3.
Songbirds are one of the few groups of animals that learn the sounds used for vocal communication during development. Like humans, songbirds memorize vocal sounds based on auditory experience with vocalizations of adult “tutors”, and then use auditory feedback of self-produced vocalizations to gradually match their motor output to the memory of tutor sounds. In humans, investigations of early vocal learning have focused mainly on perceptual skills of infants, whereas studies of songbirds have focused on measures of vocal production. In order to fully exploit songbirds as a model for human speech, understand the neural basis of learned vocal behavior, and investigate links between vocal perception and production, studies of songbirds must examine both behavioral measures of perception and neural measures of discrimination during development. Here we used behavioral and electrophysiological assays of the ability of songbirds to distinguish vocal calls of varying frequencies at different stages of vocal learning. The results show that neural tuning in auditory cortex mirrors behavioral improvements in the ability to make perceptual distinctions of vocal calls as birds are engaged in vocal learning. Thus, separate measures of neural discrimination and behavioral perception yielded highly similar trends during the course of vocal development. The timing of this improvement in the ability to distinguish vocal sounds correlates with our previous work showing substantial refinement of axonal connectivity in cortico-basal ganglia pathways necessary for vocal learning.

4.
Previous research has shown that postnatal exposure to simple, synthetic sounds can affect the sound representation in the auditory cortex, as reflected by changes in the tonotopic map or other relatively simple tuning properties, such as AM tuning. However, the functional implications for neural processing in the generation of ethologically based perception remain unexplored. Here we examined the effects of noise-rearing and social isolation on the neural processing of communication sounds, such as species-specific song, in the primary auditory cortex analog of adult zebra finches. Our electrophysiological recordings reveal that neural tuning to simple frequency-based synthetic sounds is initially established in all laminae independent of patterned acoustic experience; however, we provide the first evidence that early exposure to patterned sound statistics, such as those found in native sounds, is required for the subsequent emergence of neural selectivity for complex vocalizations, for shaping neural spiking precision in superficial and deep cortical laminae, and for creating efficient neural representations of song and a less redundant ensemble code in all laminae. Our study also provides the first causal evidence for ‘sparse coding’, showing that when the statistics of the stimuli were changed during rearing, as in noise-rearing, the sparse or optimal representation of species-specific vocalizations disappeared. Taken together, these results imply that layer-specific differential development of the auditory cortex requires patterned acoustic input, and that a specialized and robust sensory representation of complex communication sounds in the auditory cortex requires a rich acoustic and social environment.

5.
Auditory communication in humans and other animals frequently takes place in noisy environments with many co‐occurring signallers. Receivers are thus challenged to rapidly recognize salient auditory signals and filter out irrelevant sounds. Most bird species produce a variety of complex vocalizations that function to communicate with other members of their own species, and behavioural evidence broadly supports preferences for conspecific over heterospecific sounds (auditory species recognition). However, it remains unclear whether such auditory signals are categorically recognized by the sensory and central nervous system. Here, we review 53 published studies that compare avian neural responses to conspecific versus heterospecific vocalizations. Irrespective of the techniques used to characterize neural activity, distinct nuclei of the auditory forebrain are consistently shown to be conspecific-selective across taxa, even in response to unfamiliar individuals with distinct acoustic properties. Yet species‐specific neural discrimination is not a stereotyped auditory response, but is modulated according to its salience depending, for example, on ontogenetic exposure to conspecific versus heterospecific stimuli. Neuromodulators, in particular norepinephrine, may mediate species recognition by regulating the accuracy of neuronal coding for salient conspecific stimuli. Our review lends strong support to the existence of neural structures that categorically recognize conspecific signals despite the highly variable physical properties of the stimulus. The available data support a ‘perceptual filter’-based mechanism for determining the saliency of the signal, in that species identity and social experience combine to influence the neural processing of species‐specific auditory stimuli. Finally, we present hypotheses and their testable predictions to propose next steps for species‐recognition research within the emerging model of the neural conceptual construct in avian auditory recognition.

6.
It is a long-established fact that neuronal plasticity plays a central role in generating neural function and computation. Nevertheless, no unifying account exists of how neurons in a recurrent cortical network learn to compute on temporally and spatially extended stimuli, even though such stimuli constitute the norm, rather than the exception, of the brain's input. Here, we introduce a geometric theory of learning spatiotemporal computations through neuronal plasticity. To that end, we rigorously formulate the problem of neural representations as a relation in space between stimulus-induced neural activity and the asymptotic dynamics of excitable cortical networks. Backed up by computer simulations and numerical analysis, we show that two canonical and widespread forms of neuronal plasticity, that is, spike-timing-dependent synaptic plasticity and intrinsic plasticity, are both necessary for creating neural representations, such that these computations become realizable. Interestingly, the effects of these forms of plasticity on the emerging neural code relate to properties necessary for both combating and utilizing noise. The neural dynamics also exhibits features of the most likely stimulus in the network's spontaneous activity. These properties of the spatiotemporal neural code resulting from plasticity, having their grounding in nature, further consolidate the biological relevance of our findings.

7.
Various hippocampal and neocortical synapses of the mammalian brain show both short-term plasticity and long-term plasticity, which are considered to underlie learning and memory by the brain. According to Hebb’s postulate, synaptic plasticity encodes memory traces of past experiences into cell assemblies in cortical circuits. However, it remains unclear how the various forms of long-term and short-term synaptic plasticity cooperatively create and reorganize such cell assemblies. Here, we investigate the mechanism by which the three forms of synaptic plasticity known in cortical circuits, i.e., spike-timing-dependent plasticity (STDP), short-term depression (STD), and homeostatic plasticity, cooperatively generate, retain, and reorganize cell assemblies in a recurrent neuronal network model. We show that multiple cell assemblies generated by external stimuli can survive noisy spontaneous network activity for an appropriate range of STD strength. Furthermore, our model predicts that a symmetric temporal window of STDP, such as observed under dopaminergic modulation of hippocampal neurons, is crucial for the retention and integration of multiple cell assemblies. These results may have implications for the understanding of cortical memory processes.
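For concreteness, the sketch below contrasts an asymmetric (temporal-order-dependent) STDP window with a symmetric, Mexican-hat-like window of the kind the abstract refers to. Both kernels and their time constants are illustrative assumptions, not the paper's fitted functions.

```python
import numpy as np

# Illustrative STDP temporal windows (assumed shapes, not the paper's kernels).
A_plus, A_minus = 1.0, 1.0
tau = 20.0   # ms, assumed time constant

def asymmetric_stdp(dt):
    """Classic Hebbian STDP: potentiation when pre precedes post (dt > 0), depression otherwise."""
    return np.where(dt > 0, A_plus * np.exp(-dt / tau), -A_minus * np.exp(dt / tau))

def symmetric_stdp(dt):
    """Symmetric window: the sign depends only on |dt| (potentiation for near-coincident
    spikes, weak depression at larger lags), irrespective of spike order."""
    return A_plus * np.exp(-np.abs(dt) / tau) - 0.5 * A_plus * np.exp(-np.abs(dt) / (2.0 * tau))

lags = np.array([-10.0, 10.0])   # pre-post spike time differences in ms
print("asymmetric at -10, +10 ms:", np.round(asymmetric_stdp(lags), 3))
print("symmetric  at -10, +10 ms:", np.round(symmetric_stdp(lags), 3))
```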

8.
9.
Sensorimotor control has traditionally been considered from a control theory perspective, without relation to neurobiology. In contrast, here we utilized a spiking-neuron model of motor cortex and trained it to perform a simple movement task, which consisted of rotating a single-joint “forearm” to a target. Learning was based on a reinforcement mechanism analogous to that of the dopamine system. This provided a global reward or punishment signal in response to decreasing or increasing distance from hand to target, respectively. Output was partially driven by Poisson motor babbling, creating stochastic movements that could then be shaped by learning. The virtual forearm consisted of a single segment rotated around an elbow joint, controlled by flexor and extensor muscles. The model consisted of 144 excitatory and 64 inhibitory event-based neurons, each with AMPA, NMDA, and GABA synapses. Proprioceptive cell input to this model encoded the two muscle lengths. Plasticity was only enabled in feedforward connections between input and output excitatory units, using spike-timing-dependent eligibility traces for synaptic credit or blame assignment. Learning resulted from a global 3-valued signal: reward (+1), no learning (0), or punishment (−1), corresponding to phasic increases, lack of change, or phasic decreases of dopaminergic cell firing, respectively. Successful learning only occurred when both reward and punishment were enabled. In this case, 5 target angles were learned successfully within 180 s of simulation time, with a median error of 8 degrees. Motor babbling allowed exploratory learning, but decreased the stability of the learned behavior, since the hand continued moving after reaching the target. Our model demonstrated that a global reinforcement signal, coupled with eligibility traces for synaptic plasticity, can train a spiking sensorimotor network to perform goal-directed motor behavior.
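The reward-gated learning scheme can be sketched as eligibility traces tagging recently co-active synapses, with the weight change gated by a three-valued reward signal. The sketch below is a toy stand-in: the network sizes, time constants, and the distance-based reward rule are assumptions, and random spiking replaces the actual Poisson babbling and virtual-arm dynamics.

```python
import numpy as np

# Hedged sketch of reward-modulated plasticity with eligibility traces.
# Sizes, time constants, and the reward rule are illustrative, not the paper's.
rng = np.random.default_rng(1)

n_in, n_out = 16, 8
W = rng.random((n_out, n_in)) * 0.1      # feedforward input -> output weights
elig = np.zeros_like(W)                  # one eligibility trace per synapse
tau_e, eta, dt = 100.0, 0.01, 1.0        # trace time constant (ms), learning rate, step (ms)

def update(pre, post, reward):
    """Tag recently co-active synapses, then gate the weight change by reward in {-1, 0, +1}."""
    global W, elig
    elig += -elig * dt / tau_e + np.outer(post, pre)
    W = np.clip(W + eta * reward * elig, 0.0, 1.0)

prev_dist = None
for t in range(500):
    pre = (rng.random(n_in) < 0.05).astype(float)    # stand-in proprioceptive spikes
    post = (rng.random(n_out) < 0.05).astype(float)  # stand-in babbling-driven motor spikes
    dist = rng.random()                               # stand-in hand-to-target distance
    if prev_dist is None:
        reward = 0                                    # no learning before a comparison exists
    else:
        reward = 1 if dist < prev_dist else (-1 if dist > prev_dist else 0)
    update(pre, post, reward)
    prev_dist = dist
```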

10.
The ultrasonic vocalizations of mice are attracting increasing attention, because they have been recognized as an informative readout in genetically modified strains. In addition, the observation that male mice produce elaborate sequences of ultrasonic vocalizations (‘song’) when exposed to female mice or their scents has sparked a debate as to whether these sounds are—in terms of their structure and function—analogous to bird song. We conducted playback experiments with cycling female mice to explore the function of male mouse songs. Using a place preference design, we show that these vocalizations elicited approach behaviour in females. In contrast, the playback of pup isolation calls or whistle-like artificial control sounds did not evoke approach responses. Surprisingly, the females also did not respond to pup isolation calls. In addition, female responses did not vary in relation to reproductive cycle, i.e. whether they were in oestrus or not. Furthermore, our data revealed a rapid habituation of subjects to the experimental situation, which stands in stark contrast to other species' responses to courtship vocalizations. Nevertheless, our results clearly demonstrate that male mouse songs elicit females' interest.

11.
12.
Neural networks are considered the origin of intelligence in organisms. In this paper, a new intelligent system merging biological intelligence with artificial intelligence was designed: a neural controller bidirectionally connected to an actual mobile robot to implement a novel vehicle. Two types of experimental preparations were used as the neural controller: a ‘random’ culture and a ‘4Q’ culture (cultured neurons artificially divided into four interconnected parts). Compared with the random cultures, the ‘4Q’ cultures showed markedly different activity, and the robot controlled by the ‘4Q’ network performed better in search tasks. Our results show that neural cultures can be successfully employed to control an artificial agent, and that the robot's performance improved with repeated stimulation owing to short-term plasticity. A new framework is provided to investigate the bidirectional biological-artificial interface and to develop new strategies for future intelligent systems using these simplified model systems.

13.
Animals must respond selectively to specific combinations of salient environmental stimuli in order to survive in complex environments. A task with these features, biconditional discrimination, requires responses to particular pairs of stimuli that are opposite to the responses to the same stimuli in other combinations. We investigate the characteristics of synaptic plasticity and network connectivity needed to produce stimulus-pair neural responses within randomly connected model networks of spiking neurons trained in biconditional discrimination. Using reward-based plasticity for synapses from the random associative network onto a winner-takes-all decision-making network representing perceptual decision-making, we find that reliably correct decision making requires upstream neurons with strong stimulus-pair selectivity. By chance, selective neurons were present in the initial networks; appropriate plasticity mechanisms improved task performance by enhancing the initial diversity of responses. We find long-term potentiation of inhibition to be the most beneficial plasticity rule: by suppressing weak responses, it produces reliably correct decisions across an extensive range of networks.
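The core argument, that biconditional (XOR-like) discrimination needs stimulus-pair-selective upstream units, can be shown with a toy readout comparison; this is an illustration of the logic only, not the paper's spiking model or its inhibitory-plasticity rule, and all names below are hypothetical.

```python
import numpy as np

# Toy demonstration: a linear readout of single stimuli cannot solve biconditional
# (XOR-like) discrimination, but a readout of pair-selective units can.
pairs = {(0, 0): +1, (1, 1): +1, (0, 1): -1, (1, 0): -1}   # required decisions

def single_stimulus_features(a, b):
    return np.array([a, b, 1.0])

def pair_selective_features(a, b):
    # One unit per stimulus pair, as produced by a selective associative layer
    return np.array([a * b, (1 - a) * (1 - b), a * (1 - b), (1 - a) * b])

def train_readout(feature_fn, lr=0.1, epochs=200):
    w = np.zeros(len(feature_fn(0, 0)))
    for _ in range(epochs):
        for (a, b), target in pairs.items():
            x = feature_fn(a, b)
            decision = 1 if w @ x > 0 else -1
            w += lr * (target - decision) * x        # reward-like error-gated update
    return w

for fn in (single_stimulus_features, pair_selective_features):
    w = train_readout(fn)
    correct = sum((1 if w @ fn(a, b) > 0 else -1) == t for (a, b), t in pairs.items())
    print(fn.__name__, f"{correct}/4 correct")
```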

14.
Birds are major predators of many eared insects including moths, butterflies, crickets and cicadas. We provide evidence supporting the hypothesis that insect ears can function as ‘bird detectors’. First, we show that birds produce flight sounds while foraging. Eastern phoebes (Sayornis phoebe) and chickadees (Poecile atricapillus) generate broadband sounds composed of distinct repetitive elements (approx. 18 and 20 Hz, respectively) that correspond to cyclic wing beating. We estimate that insects can detect an approaching bird from distances of at least 2.5 m, based on insect hearing thresholds and sound level measurements of bird flight. Second, we show that insects with both high and low frequency hearing can hear bird flight sounds. Auditory nerve cells of noctuid moths (Trichoplusia ni) and nymphalid butterflies (Morpho peleides) responded in a bursting pattern to playbacks of an attacking bird. This is the first study to demonstrate that foraging birds generate flight sound cues that are detectable by eared insects. Whether insects exploit these sound cues, and alternatively, if birds have evolved sound-reducing foraging tactics to render them acoustically ‘cryptic’ to their prey, are tantalizing questions worthy of further investigation.

15.
Sensory deprivation has long been known to cause hallucinations or “phantom” sensations, the most common of which is tinnitus induced by hearing loss, affecting 10–20% of the population. An observable hearing loss, causing auditory sensory deprivation over a band of frequencies, is present in over 90% of people with tinnitus. Existing plasticity-based computational models for tinnitus are usually driven by homeostatic mechanisms, modeled to fit phenomenological findings. Here, we use an objective-driven learning algorithm to model an early auditory processing neuronal network, e.g., in the dorsal cochlear nucleus. The learning algorithm maximizes the network’s output entropy by learning the feed-forward and recurrent interactions in the model. We show that the connectivity patterns and responses learned by the model display several hallmarks of early auditory neuronal networks. We further demonstrate that attenuation of peripheral inputs drives the recurrent network towards its critical point and transition into a tinnitus-like state. In this state, the network activity resembles responses to genuine inputs even in the absence of external stimulation, namely, it “hallucinates” auditory responses. These findings demonstrate how objective-driven plasticity mechanisms that normally act to optimize the network’s input representation can also elicit pathologies such as tinnitus as a result of sensory deprivation.
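As a rough illustration of objective-driven, entropy-maximizing plasticity, the sketch below uses a generic infomax rule in the style of Bell and Sejnowski (1995) on a single feed-forward layer with some input channels attenuated; the paper's model additionally learns recurrent interactions, so this is only a hedged stand-in and all parameters are assumptions.

```python
import numpy as np

# Hedged sketch: a generic infomax (output-entropy-maximizing) learning rule
# applied to a feed-forward layer with some input channels attenuated.
rng = np.random.default_rng(2)

n = 8
W = np.eye(n) + 0.01 * rng.standard_normal((n, n))
eta = 0.001

def infomax_step(x):
    """One natural-gradient step that increases the entropy of y = sigmoid(W x)."""
    global W
    u = np.clip(W @ x, -50.0, 50.0)
    y = 1.0 / (1.0 + np.exp(-u))
    W = W + eta * (np.eye(n) + np.outer(1.0 - 2.0 * y, u)) @ W

attenuation = 0.2              # crude stand-in for peripheral hearing loss
for t in range(2000):
    x = rng.standard_normal(n)
    x[:3] *= attenuation       # deprive the low "frequency" channels
    infomax_step(x)

# Columns acting on the deprived channels tend to develop larger gains
# (a central-gain-like compensation for the attenuated input).
print(np.round(np.linalg.norm(W, axis=0), 2))
```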

16.
Recent physiological findings have revealed that long-term adaptation of the synaptic strengths between cortical pyramidal neurons depends on the temporal order of presynaptic and postsynaptic spikes, which is called spike-timing-dependent plasticity (STDP) or temporally asymmetric Hebbian (TAH) learning. Here I prove by analytical means that a physiologically plausible variant of STDP adapts synaptic strengths such that the presynaptic spikes predict the postsynaptic spikes with minimal error. This prediction error model of STDP implies a mechanism for cortical memory: cortical tissue learns temporal spike patterns if these spike patterns are repeatedly elicited in a set of pyramidal neurons. The trained network completes these patterns if their beginnings are presented, thereby recalling the memory. Implementations of the proposed algorithms may be useful for applications in voice recognition and computer vision.
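The pattern-completion idea can be illustrated with a toy binary network in which asymmetric, temporal-order STDP stores a spike sequence that is then recalled from its beginning. The network size, learning rate, and threshold below are illustrative assumptions, not the paper's analytical model.

```python
import numpy as np

# Hedged sketch: sequence storage and completion with asymmetric STDP in a
# binary recurrent network (a toy illustration of "finish the pattern").
n = 30
W = np.zeros((n, n))
sequence = [np.arange(i * 3, i * 3 + 3) for i in range(10)]   # 10 groups of 3 cells

def to_vec(idx):
    v = np.zeros(n)
    v[idx] = 1.0
    return v

# Training: repeatedly elicit the spike sequence; pre-before-post synapses grow.
for _ in range(20):
    for t in range(len(sequence) - 1):
        pre, post = to_vec(sequence[t]), to_vec(sequence[t + 1])
        W += 0.1 * np.outer(post, pre)     # asymmetric (temporal-order) STDP term
W = np.clip(W, 0.0, 1.0)

# Recall: present only the beginning of the pattern and let the network finish it.
x = to_vec(sequence[0])
recalled = [np.where(x > 0)[0]]
for t in range(len(sequence) - 1):
    x = (W @ x > 0.5).astype(float)
    recalled.append(np.where(x > 0)[0])
print(recalled)   # reproduces the stored groups in order
```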

17.
Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used is considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps; in research this is often undesirable, so it is important to understand why the animals are disturbed. We conducted laboratory-based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing ranges (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine whether animals can see and hear camera traps. We report that camera traps produce sounds that are well within the hearing range of most mammals and produce illumination that can be seen by many species.

18.
The ability to flexibly produce facial expressions and vocalizations has a strong impact on the way humans communicate, as it promotes more explicit and versatile forms of communication. Whereas facial expressions and vocalizations are unarguably closely linked in primates, the extent to which these expressions can be produced independently in nonhuman primates is unknown. The present work therefore examined whether chimpanzees produce the same types of facial expressions with and without accompanying vocalizations, as humans do. Forty-six chimpanzees (Pan troglodytes) were video-recorded during spontaneous play with conspecifics at the Chimfunshi Wildlife Orphanage. ChimpFACS, a standardized coding system for measuring chimpanzee facial movements based on the FACS developed for humans, was applied. The data showed that the chimpanzees produced the same 14 configurations of open-mouth faces whether laugh sounds were present or absent. Chimpanzees thus produce these facial expressions flexibly, without being morphologically constrained by the accompanying vocalizations. Furthermore, the data indicated that the facial expression plus vocalization and the facial expression alone were used differently in social play, i.e., when in physical contact with playmates and when matching the playmates’ open-mouth faces. These findings provide empirical evidence that chimpanzees produce distinctive facial expressions independently of a vocalization, and that their multimodal use affects communicative meaning, both important traits for a more explicit and versatile way of communication. As it is still uncertain how human laugh faces evolved, the ChimpFACS data were also used to empirically examine the evolutionary relation between the open-mouth faces with laugh sounds of chimpanzees and the laugh faces of humans. The ChimpFACS results revealed that human laugh faces must have gradually emerged from the laughing open-mouth faces of ancestral apes. This work thus examines the main evolutionary changes of laugh faces since the last common ancestor of chimpanzees and humans.

19.
Heterochronic formation of basic and language-specific speech sounds in the first year of life in infants from different ethnic groups (Chechens, Russians, and Mongols) has been studied. Spectral analysis of the frequency, amplitude, and formant characteristics of speech sounds has shown a universal pattern of organization of the basic sound repertoire and “language-specific” sounds in the process of babbling and prattle of infants of different ethnic groups. Possible mechanisms of the formation of specific speech sounds in early ontogeny are discussed.

20.
We propose a neural circuit model that forms a semantic network with exceptions using spike-timing-dependent plasticity (STDP) of inhibitory synapses. To evaluate the proposed model, we conducted nine types of computer simulation by combining three STDP rules for inhibitory synapses with three spike-pairing rules. The simulation results obtained with the inhibitory STDP rule of Haas et al. [Haas, J.S., Nowotny, T., Abarbanel, H.D.I., 2006. Spike-timing-dependent plasticity of inhibitory synapses in the entorhinal cortex. J. Neurophysiol. 96, 3305–3313] are successful, whereas the others are not. These results suggest that an inhibitory connection from the concept linked with an exceptional feature to the general feature is necessary for forming a semantic network with an exception.
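The structural conclusion, that an inhibitory link from the exceptional concept to the general feature implements the exception, can be illustrated with a toy spreading-activation network. Node names, weights, and the rate-based update below are assumptions for illustration; the paper's model instead uses spiking neurons and learns such links with inhibitory STDP.

```python
import numpy as np

# Hedged sketch: a semantic network with an exception implemented by an inhibitory link.
nodes = ["penguin", "bird", "flies", "swims"]
idx = {name: i for i, name in enumerate(nodes)}

W = np.zeros((4, 4))
W[idx["bird"], idx["penguin"]] = 1.0      # penguin -> bird (is-a)
W[idx["flies"], idx["bird"]] = 1.0        # bird -> flies (general feature)
W[idx["swims"], idx["penguin"]] = 1.0     # penguin -> swims (specific feature)
W[idx["flies"], idx["penguin"]] = -2.0    # exception: penguin inhibits "flies"

def activate(cue, steps=3):
    """Clamp the cue node and let activation spread through excitation and inhibition."""
    x = np.zeros(4)
    x[idx[cue]] = 1.0
    for _ in range(steps):
        x = np.clip(x + W @ x, 0.0, 1.0)   # spreading activation with inhibition
        x[idx[cue]] = 1.0
    return {name: x[i] for name, i in idx.items()}

print(activate("bird"))     # "flies" becomes active
print(activate("penguin"))  # "flies" is suppressed by the inhibitory exception link
```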
