Similar articles
20 similar articles found (search time: 921 ms)
1.
Amplitude modulation can serve as a cue for segregating streams of sounds from different sources. Here we evaluate stream segregation in humans using ABA- sequences of sinusoidally amplitude modulated (SAM) tones. A and B represent SAM tones with the same carrier frequency (1000, 4000 Hz) and modulation depth (30, 100%). The modulation frequency of the A signals (fmodA) was 30, 100 or 300 Hz. The modulation frequency of the B signals was up to four octaves higher (Δfmod). Three different ABA- tone patterns varying in tone duration and stimulus onset asynchrony were presented to evaluate the effect of forward suppression. Subjects indicated their 1- or 2-stream percept on a touch screen at the end of each ABA- sequence (presentation time 5 or 15 s). Tone pattern, fmodA, Δfmod, carrier frequency, modulation depth and presentation time significantly affected the percentage of 2-stream percepts. The human psychophysical results are compared to responses of avian forebrain neurons evoked by different ABA- SAM tone conditions [1], which broadly overlapped those of the present study. The neurons also showed significant effects of tone pattern and Δfmod that were comparable to the effects observed in the present psychophysical study. Depending on the carrier frequency, modulation frequency, modulation depth and the width of the auditory filters, SAM tones may provide mainly temporal cues (sidebands fall within the range of the filter), spectral cues (sidebands fall outside the range of the filter) or possibly both. A computational model based on excitation pattern differences was used to predict the 50% threshold of 2-stream responses. In conditions for which the model predicts a considerably larger 50% threshold (i.e., a larger Δfmod at threshold) than was observed, spectral cues are unlikely to explain stream segregation by SAM.
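The excitation-pattern model is described only at a high level here; the following toy sketch (not the authors' code) illustrates the idea. The Gaussian ERB-scaled filters, the 1-dB difference criterion, and all parameter values are assumptions made for illustration.

```python
# Illustrative sketch only: predict the Delta-fmod at which the excitation patterns
# of the A and B SAM tones differ by an assumed criterion.
import numpy as np

def erb(f):
    """Equivalent rectangular bandwidth (Glasberg & Moore, 1990); f in Hz."""
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def sam_components(fc, fmod, m):
    """Spectral lines of a SAM tone: the carrier plus two sidebands at fc +/- fmod."""
    return np.array([fc - fmod, fc, fc + fmod]), np.array([m / 2.0, 1.0, m / 2.0])

def excitation_pattern(freqs, amps, centers):
    """Crude excitation pattern: component powers summed through Gaussian filters."""
    sigma = erb(centers) / 2.0                       # assumed filter width
    e = np.zeros_like(centers)
    for f, a in zip(freqs, amps):
        e += (a ** 2) * np.exp(-0.5 * ((f - centers) / sigma) ** 2)
    return 10.0 * np.log10(e + 1e-12)                # dB re an arbitrary reference

fc, m, fmod_a = 1000.0, 1.0, 100.0                   # 1-kHz carrier, 100% depth
centers = np.linspace(200.0, 4000.0, 500)
freqs_a, amps_a = sam_components(fc, fmod_a, m)
ep_a = excitation_pattern(freqs_a, amps_a, centers)

criterion_db = 1.0                                   # assumed just-detectable difference
for octaves in np.arange(0.1, 4.05, 0.1):
    freqs_b, amps_b = sam_components(fc, fmod_a * 2.0 ** octaves, m)
    ep_b = excitation_pattern(freqs_b, amps_b, centers)
    if np.max(np.abs(ep_a - ep_b)) > criterion_db:
        print(f"spectral cue predicted to appear near Delta-fmod = {octaves:.1f} octaves")
        break
```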

2.
An auditory neuron can preserve the temporal fine structure of a low-frequency tone by phase-locking its response to the stimulus. Apart from sound localization, however, much about the role of this temporal information for signal processing in the brain remains unknown. Through psychoacoustic studies we provide direct evidence that humans employ temporal fine structure to discriminate between frequencies. To this end we construct tones that are based on a single frequency but in which, through the concatenation of wavelets, the phase changes randomly every few cycles. We then test the frequency discrimination of these phase-changing tones, of control tones without phase changes, and of short tones that consist of a single wavelet. For carrier frequencies below a few kilohertz we find that phase changes systematically worsen frequency discrimination. No such effect appears for higher carrier frequencies at which temporal information is not available in the central auditory system.  相似文献   
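The stimulus construction lends itself to a short sketch. The following is a hypothetical re-creation of a phase-changing tone, not the authors' code; the wavelet length, Hann windowing, and sampling rate are assumptions.

```python
# Illustrative sketch only: a tone built from short wavelets of one carrier frequency
# whose starting phase is re-randomized every few cycles.
import numpy as np

def phase_changing_tone(f_carrier, cycles_per_wavelet=4, duration=0.5, fs=44100, seed=0):
    rng = np.random.default_rng(seed)
    wavelet_len = int(round(fs * cycles_per_wavelet / f_carrier))
    t = np.arange(wavelet_len) / fs
    window = np.hanning(wavelet_len)                 # smooth each wavelet to avoid clicks
    n_wavelets = int(np.ceil(duration * fs / wavelet_len))
    pieces = [window * np.sin(2.0 * np.pi * f_carrier * t + rng.uniform(0.0, 2.0 * np.pi))
              for _ in range(n_wavelets)]            # random phase per wavelet disrupts the
    return np.concatenate(pieces)[: int(duration * fs)]   # temporal fine structure

tone = phase_changing_tone(500.0)                    # a control tone would keep the phase fixed
```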

3.
Neural responses to tones in the mammalian primary auditory cortex (A1) exhibit adaptation over the course of several seconds. Important questions remain about the taxonomic distribution of multi-second adaptation and its possible roles in hearing. It has been hypothesized that neural adaptation could explain the gradual “build-up” of auditory stream segregation. We investigated the influence of several stimulus-related factors on neural adaptation in the avian homologue of mammalian A1 (field L2) in starlings (Sturnus vulgaris). We presented awake birds with sequences of repeated triplets of two interleaved tones (ABA–ABA–…) in which we varied the frequency separation between the A and B tones (ΔF), the stimulus onset asynchrony (time from tone onset to onset within a triplet), and tone duration. We found that stimulus onset asynchrony generally had larger effects on adaptation compared with ΔF and tone duration over the parameter range tested. Using a simple model, we show how time-dependent changes in neural responses can be transformed into neurometric functions that make testable predictions about the dependence of the build-up of stream segregation on various spectral and temporal stimulus properties.  相似文献   
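As a rough illustration of how adapting responses can be converted into a neurometric build-up function, here is a hypothetical sketch; the exponential adaptation, its depth and time constant, and the sigmoid read-out are assumptions, not the model used in the paper.

```python
# Illustrative sketch only: time-dependent (adapting) responses transformed into a
# neurometric "build-up" function for the 2-stream percept.
import numpy as np

def adapted_response(t, r0, r_inf, tau):
    """Multi-second adaptation of the response to repeated triplets."""
    return r_inf + (r0 - r_inf) * np.exp(-t / tau)

def p_two_streams(resp_a, resp_b, slope=8.0, criterion=0.3):
    """Read-out: probability of a 2-stream report grows as the B response falls
    relative to the A response."""
    return 1.0 / (1.0 + np.exp(-slope * ((resp_a - resp_b) - criterion)))

t = np.linspace(0.0, 10.0, 101)                      # seconds from sequence onset
for depth in (0.1, 0.3, 0.5):                        # deeper B adaptation for larger dF
    resp_a = adapted_response(t, r0=1.0, r_inf=0.8, tau=2.0)
    resp_b = adapted_response(t, r0=1.0, r_inf=0.8 - depth, tau=2.0)
    build_up = p_two_streams(resp_a, resp_b)
    print(f"B adaptation depth {depth:.1f}: p(2 streams) at 10 s = {build_up[-1]:.2f}")
```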

4.
The phase of cortical oscillations contains rich information and is valuable for encoding sound stimuli. Here we hypothesized that oscillatory phase modulation, rather than amplitude modulation, is a neural correlate of auditory streaming. Our behavioral evaluation provided compelling evidence, for the first time, that rats are able to organize auditory streams. Local field potentials (LFPs) were recorded in cortical layer IV or deeper layers of the primary auditory cortex of anesthetized rats. In response to ABA- sequences with different inter-tone intervals and frequency differences, neurometric functions were characterized using stimulus phase locking as well as the band-specific amplitude evoked by the test tones. Our results demonstrated that, under large frequency differences and short inter-tone intervals, the neurometric function based on stimulus phase locking in higher frequency bands, particularly the gamma band, described van Noorden's perceptual boundary better than the LFP amplitude did. Furthermore, the gamma-band neurometric function showed a build-up-like effect within around 3 seconds of sequence onset. These findings suggest that phase locking and amplitude play different roles in neural computation, and support our hypothesis that temporal modulation of cortical oscillations should be considered a neurophysiological mechanism of auditory streaming, in addition to forward suppression, tonotopic separation, and multi-second adaptation.
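A phase-locking read-out of the kind described can be sketched as follows; this is an illustrative example, not the authors' analysis pipeline, and the gamma band edges, sampling rate, and filter order are assumptions.

```python
# Illustrative sketch only: band-pass the LFP in an assumed gamma range, take the
# Hilbert phase at tone onset, and compute inter-trial phase locking.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(lfp_trials, fs=1000.0, band=(30.0, 80.0), onset_sample=0):
    """lfp_trials: (n_trials, n_samples) LFP segments aligned to tone onset."""
    b, a = butter(4, [band[0] / (fs / 2.0), band[1] / (fs / 2.0)], btype="band")
    filtered = filtfilt(b, a, lfp_trials, axis=1)
    phases = np.angle(hilbert(filtered, axis=1))[:, onset_sample]
    return np.abs(np.mean(np.exp(1j * phases)))      # 1 = perfect locking, 0 = none

# toy usage: 50 trials of noise plus a weak 40-Hz component locked across trials
fs, n_trials, n_samples = 1000.0, 50, 500
t = np.arange(n_samples) / fs
rng = np.random.default_rng(1)
trials = 0.5 * np.sin(2 * np.pi * 40.0 * t) + rng.standard_normal((n_trials, n_samples))
print(f"gamma-band PLV = {phase_locking_value(trials, fs):.2f}")
```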

5.
When human subjects hear a sequence of two alternating pure tones, they often perceive it in one of two ways: as one integrated sequence (a single "stream" consisting of the two tones), or as two segregated sequences, one sequence of low tones perceived separately from another sequence of high tones (two "streams"). Perception of this stimulus is thus bistable. Moreover, subjects report on-going switching between the two percepts: unless the frequency separation is large, initial perception tends to be of integration, followed by toggling between integration and segregation phases. The process of stream formation is loosely named “auditory streaming”. Auditory streaming is believed to be a manifestation of human ability to analyze an auditory scene, i.e. to attribute portions of the incoming sound sequence to distinct sound generating entities. Previous studies suggested that the durations of the successive integration and segregation phases are statistically independent. This independence plays an important role in current models of bistability. Contrary to this, we show here, by analyzing a large set of data, that subsequent phase durations are positively correlated. To account together for bistability and positive correlation between subsequent durations, we suggest that streaming is a consequence of an evidence accumulation process. Evidence for segregation is accumulated during the integration phase and vice versa; a switch to the opposite percept occurs stochastically based on this evidence. During a long phase, a large amount of evidence for the opposite percept is accumulated, resulting in a long subsequent phase. In contrast, a short phase is followed by another short phase. We implement these concepts using a probabilistic model that shows both bistability and correlations similar to those observed experimentally.  相似文献   
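The proposed mechanism can be illustrated with a toy simulation showing that evidence carried over from one phase to the next yields positively correlated consecutive durations. This is a hypothetical sketch, not the authors' model; the drift, noise, hazard rule, and carry-over factor are assumptions.

```python
# Illustrative sketch only: evidence for the opposite percept accumulates noisily,
# a switch becomes likely once it exceeds the support carried over from the previous
# phase, and part of the accumulated evidence is carried into the next phase.
import numpy as np

def simulate_phase_durations(n_phases=1500, dt=0.01, drift=1.0, noise=0.7,
                             hazard_gain=5.0, carryover=0.6, seed=0):
    rng = np.random.default_rng(seed)
    durations, support = [], 1.0
    for _ in range(n_phases):
        t, opposite = 0.0, 0.0
        while True:
            t += dt
            # noisy accumulation of evidence for the opposite percept
            opposite = max(opposite + drift * dt
                           + noise * np.sqrt(dt) * rng.standard_normal(), 0.0)
            # stochastic switch, more likely once the evidence exceeds current support
            if rng.random() < hazard_gain * dt * max(opposite - support, 0.0):
                break
        durations.append(t)
        # a long phase ends with more accumulated evidence, which becomes support
        # for the new percept and tends to lengthen the next phase
        support = carryover * opposite
    return np.array(durations)

d = simulate_phase_durations()
print(f"lag-1 correlation of consecutive phase durations: "
      f"{np.corrcoef(d[:-1], d[1:])[0, 1]:.2f}")
```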

6.
Taaseh N, Yaron A, Nelken I. PLoS ONE 2011; 6(8): e23369
Stimulus-specific adaptation (SSA) is the specific decrease in the response to a frequent ('standard') stimulus, which does not generalize, or generalizes only partially, to another, rare stimulus ('deviant'). Stimulus-specific adaptation could result simply from the depression of the responses to the standard. Alternatively, there may be an increase in the responses to the deviant stimulus due to the violation of expectations set by the standard, indicating the presence of true deviance detection. We studied SSA in the auditory cortex of halothane-anesthetized rats, recording local field potentials and multi-unit activity. We tested the responses to pure tones of one frequency when embedded in sequences that differed from each other in the frequency and probability of the tones composing them. The responses to tones of the same frequency were larger when deviant than when standard, even with inter-stimulus time intervals of almost 2 seconds. Thus, SSA is present and strong in rat auditory cortex. SSA was present even when the frequency difference between deviants and standards was as small as 10%, substantially smaller than the typical width of cortical tuning curves, revealing hyper-resolution in frequency. Strong responses were evoked also by a rare tone presented by itself, and by rare tones presented as part of a sequence of many widely spaced frequencies. On the other hand, when presented within a sequence of narrowly spaced frequencies, the responses to a tone, even when rare, were smaller. A model of SSA that included only adaptation of the responses in narrow frequency channels predicted responses to the deviants that were substantially smaller than the observed ones. Thus, the response to a deviant is at least partially due to the change it represents relative to the regularity set by the standard tone, indicating the presence of true deviance detection in rat auditory cortex.  相似文献   
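The comparison model (adaptation confined to narrow frequency channels) can be sketched as follows; the tuning width, adaptation and recovery rates, and stimulus parameters are assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch only: responses generated by narrowly tuned channels whose gains
# adapt with use. With a small frequency difference the two tones share channels, so an
# adaptation-only model predicts only weak SSA - the paper's point is that the observed
# deviant responses exceed such a prediction.
import numpy as np

def adaptation_only_ssa(f_std, f_dev, p_std=0.9, n_tones=400, tuning_oct=0.25,
                        depletion=0.3, recovery=0.1, seed=0):
    rng = np.random.default_rng(seed)
    channels = np.linspace(-1.0, 1.0, 81)            # channel CFs, in octaves re f_std
    gain = np.ones_like(channels)                    # adapting channel gains
    resp = {f_std: [], f_dev: []}
    for _ in range(n_tones):
        f = f_std if rng.random() < p_std else f_dev
        oct_dist = channels - np.log2(f / f_std)
        drive = np.exp(-0.5 * (oct_dist / tuning_oct) ** 2)
        resp[f].append(float(np.sum(gain * drive)))
        gain = gain + recovery * (1.0 - gain) - depletion * gain * drive  # use-dependent
        gain = np.clip(gain, 0.0, 1.0)                                    # adaptation
    return np.mean(resp[f_std]), np.mean(resp[f_dev])

std, dev = adaptation_only_ssa(f_std=4000.0, f_dev=4400.0)   # ~10% frequency difference
print(f"adaptation-only prediction: standard = {std:.2f}, deviant = {dev:.2f}")
```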

7.
A repeated stimulus causes a specific suppression of neuronal responses, a phenomenon known as stimulus-specific adaptation (SSA). The response recovers when the stimulus changes. In the auditory system, SSA is a well-known phenomenon that appears at different levels of the mammalian auditory pathway. In this study, we explored the effects of adaptation to a particular stimulus on the auditory tuning curves of anesthetized rats. We used two sequences and compared the responses to each tone combination under the two conditions. The first sequence consisted of different pure-tone combinations presented in random order. In the second, the same stimuli were presented in the context of an adapting stimulus (adapter) that made up 80% of the sequence. The population results demonstrated that adaptation decreased the frequency response area and shifted the tuning curve unevenly toward higher tone thresholds. The local field potential and multi-unit activity responses indicated that neural activity at the adapted frequency was suppressed, with weaker suppression at neighboring frequencies. This reduction changed the characteristic frequency of the tuning curve.

8.
In experiments on anesthetized cats, 80 neurons of the primary auditory cortex (A1) were studied. Within the examined neuronal population, 66 cells (82.5%) were monosensory units, i.e., they responded only to acoustic stimulation (sound clicks and tones); 8 neurons (10.1%) responded to both acoustic stimulation and electrocutaneous stimulation (ECS); the remaining units (7.4%) were either trisensory (responding also to visual stimulation) or responded only to non-acoustic stimulation. In the A1 area, neurons responding to ECS with rather short latencies (15.6–17.0 msec) were found. ECS usually suppressed the neuronal impulse responses evoked by sound clicks. It is concluded that somatosensory afferent signals exert a predominantly inhibitory effect on the transmission of the acoustic afferent volley to the auditory cortex at a subcortical level; however, rare cases of excitatory convergence of acoustic and somatosensory inputs to A1 neurons were observed.

9.
Neural adaptation, a reduction in the response to a maintained stimulus, is an important mechanism for detecting stimulus change. Contributing to change detection is the fact that adaptation is often stimulus specific: adaptation to a particular stimulus reduces excitability to a specific subset of stimuli, while the ability to respond to other stimuli is unaffected. Phasic cells (e.g., cells responding to stimulus onset) are good candidates for detecting the most rapid changes in natural auditory scenes, as they exhibit fast and complete adaptation to an initial stimulus presentation. We made recordings of single phasic auditory units in the frog midbrain to determine if adaptation was specific to stimulus frequency and ear of input. In response to an instantaneous frequency step in a tone, 28 % of phasic cells exhibited frequency specific adaptation based on a relative frequency change (delta-f = ±16 %). Frequency specific adaptation was not limited to frequency steps, however, as adaptation was also overcome during continuous frequency modulated stimuli and in response to spectral transients interrupting tones. The results suggest that adaptation is separated for peripheral (e.g., frequency) channels. This was tested directly using dichotic stimuli. In 45 % of binaural phasic units, adaptation was ear specific: adaptation to stimulation of one ear did not affect responses to stimulation of the other ear. Thus, adaptation exhibited specificity for stimulus frequency and lateralization at the level of the midbrain. This mechanism could be employed to detect rapid stimulus change within and between sound sources in complex acoustic environments.  相似文献   

10.
Learning-induced changes in the spectro-temporal characteristics of primary auditory cortex (AI) units were studied by response plane analysis of recordings from AI in unanaesthetized Mongolian gerbils. Using response planes obtained prior to and after auditory discrimination training, bins of significant change were identified and their spectro-temporal distribution was studied. Bins of significant change were generally found to be distributed over the entire spectro-temporal receptive field but occurred most frequently within the first 100 ms of the response, in the spectral neighbourhood (1.5 octaves) of the frequency of the reinforced conditioned stimulus. Training-induced response decreases occurred early, after 10 ms, for reinforced conditioned tones and for tones in their frequency neighbourhood. Response increases occurred that early only for non-reinforced tones in the neighbourhood of the reinforced frequency, and occurred later (after 40 ms) for the reinforced tones. The results are discussed in the light of dynamic disinhibition. Accepted: 13 August 1997

11.
We offer a model of how human cortex detects changes in the auditory environment. Auditory change detection has recently been the object of intense investigation via the mismatch negativity (MMN). MMN is a preattentive response to sudden changes in stimulation, measured noninvasively in the electroencephalogram (EEG) and the magnetoencephalogram (MEG). It is elicited in the oddball paradigm, where infrequent deviant tones intersperse a series of repetitive standard tones. However, little apart from the participation of tonotopically organized auditory cortex is known about the neural mechanisms underlying change detection and the MMN. In the present study, we investigate how poststimulus inhibition might account for MMN and compare the effects of adaptation with those of lateral inhibition in a model describing tonotopically organized cortex. To test the predictions of our model, we performed MEG and EEG measurements on human subjects and used both small- (<1/3 octave) and large- (>5 octaves) frequency differences between the standard and deviant tones. The experimental results bear out the prediction that MMN is due to both adaptation and lateral inhibition. Finally, we suggest that MMN might serve as a probe of what stimulus features are mapped by human auditory cortex.  相似文献   
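The two ingredients of the model, frequency-specific adaptation and broader lateral (post-stimulus) inhibition along a tonotopic axis, can be illustrated with a toy sketch; the kernel widths, adaptation and recovery rates, and the simple summed read-out are assumptions, not the published model.

```python
# Illustrative sketch only: each tone drives a tonotopic array; after the tone, the
# driven units lose excitability (narrow adaptation) and a broader region around them
# is inhibited, so a deviant escapes this suppression in proportion to its distance.
import numpy as np

def oddball_responses(tone_seq_oct, tuning=0.2, inhib_width=1.0,
                      adapt=0.5, inhib=0.2, recovery=0.15):
    axis = np.linspace(-3.0, 3.0, 241)               # tonotopic axis (octaves re standard)
    gain = np.ones_like(axis)                        # current excitability of each unit
    responses = []
    for f in tone_seq_oct:
        drive = np.exp(-0.5 * ((axis - f) / tuning) ** 2)
        responses.append(float(np.sum(gain * drive)))
        # post-stimulus loss of excitability: narrow adaptation of the driven units
        # plus broader lateral inhibition spreading from the activated region
        suppression = adapt * drive + inhib * np.exp(-0.5 * ((axis - f) / inhib_width) ** 2)
        gain = np.clip(gain + recovery * (1.0 - gain) - suppression * gain, 0.0, 1.0)
    return np.array(responses)

# toy oddball: standards at 0 octaves with occasional deviants 1/3 octave higher
seq = np.array([0.0] * 9 + [1.0 / 3.0] + [0.0] * 9 + [1.0 / 3.0])
r = oddball_responses(seq)
print(f"mean standard response = {r[seq == 0.0].mean():.2f}, "
      f"mean deviant response = {r[seq != 0.0].mean():.2f}")
```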

12.
In many sensory systems, the formation of burst firing can be observed along the way from the periphery to the central nuclei. We investigate the putative transformation of spontaneous activity in the auditory pathway using a neuron model trained with real firing recorded in the auditory nuclei of the frog. The model has 200 separate inputs (neuronal spines). Every spine is assumed to be a coincidence detector: its output (synaptic potential) increases sharply when a specific interpulse interval appears in the input pulse sequence. If the total synaptic potential exceeds a threshold, the model generates an output spike, which changes the weights of all spines according to a simplified Hebbian principle. The model was trained with real firing evoked in the auditory nuclei of the frog by tones modulated by low-frequency noise in the frequency ranges 0–15 Hz, 0–50 Hz or 0–150 Hz. After training, the synaptic weights of the spines changed substantially: along with some increase in the weights of spines tuned to the boundary frequencies of the modulating noise, the most characteristic change was an emphasis on the weights of spines tuned to short interpulse intervals. As a result, spontaneous activity passed through the trained model became much more bursty. Signal transmission in the model was more efficient when the input spontaneous activity of real cells contained bursts of spikes. The modeling results are discussed in connection with modern physiological data demonstrating the functional advantage of bursting.
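A hypothetical re-implementation sketch of the described architecture (not the authors' code): 200 spines, each tuned to one interpulse interval, a spike threshold on the summed weighted spine outputs, and a simplified Hebbian weight update. The interval tuning width, threshold, learning rate, and the surrogate input train are assumptions.

```python
# Illustrative sketch only of the spine-based coincidence-detector model.
import numpy as np

class IntervalDetectorNeuron:
    """200 'spines', each responding sharply to one preferred interpulse interval."""
    def __init__(self, n_spines=200, max_interval_ms=100.0, sigma_ms=0.5,
                 threshold=1.0, lr=0.01):
        self.intervals = np.linspace(0.5, max_interval_ms, n_spines)  # preferred IPIs (ms)
        self.weights = np.full(n_spines, 0.5)
        self.sigma, self.threshold, self.lr = sigma_ms, threshold, lr

    def process(self, spike_times_ms):
        out_spikes = []
        for prev, cur in zip(spike_times_ms[:-1], spike_times_ms[1:]):
            ipi = cur - prev
            act = np.exp(-0.5 * ((ipi - self.intervals) / self.sigma) ** 2)
            if np.sum(self.weights * act) > self.threshold:
                out_spikes.append(cur)
                # simplified Hebb: strengthen spines active at the output spike,
                # with a small decay that keeps the weights bounded
                self.weights += self.lr * act - 0.1 * self.lr * self.weights
        return out_spikes

# surrogate input train (Poisson-like, mean interval 10 ms), a stand-in for recorded firing
rng = np.random.default_rng(0)
train = np.cumsum(rng.exponential(10.0, 2000))
neuron = IntervalDetectorNeuron()
out = neuron.process(train)
top = neuron.intervals[np.argsort(neuron.weights)[-3:]]
print(f"{len(out)} output spikes; strongest spines prefer intervals near {np.round(top, 1)} ms")
```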

13.
The effects of waking and sleep on the response properties of auditory units in the ventral cochlear nucleus (CN) were explored using extracellular recordings in chronically implanted guinea pigs. Significant increases and decreases in firing rate were detected in two neuronal groups: a) "sound-responding" units and b) "spontaneous" units (units that did not respond to any acoustic stimuli controlled by the experimenter). The "spontaneous" units may be considered as belonging to the auditory system because their discharge was suppressed when the receptor was destroyed. The auditory CN units were characterized by their PSTHs in response to tones at their characteristic frequency and by the changes in firing rate and probability of discharge evaluated during periods of waking, slow-wave and paradoxical sleep. The CNS performs functions dependent on sensory inputs during wakefulness and sleep phases. By studying the auditory input at the level of the ventral CN with constant sound stimuli, it was shown that, in addition to the shifts in firing rate, some units presented changes in the temporal probability of discharge, implying central actions on the corresponding neurons. The mean latency of the responses, however, did not show significant changes throughout the sleep-waking cycle. The auditory efferent pathways are postulated to modulate the auditory input at the CN level during the different behavioral states. The probability of firing and the changes in temporal pattern, as shown by the PSTH, are thus dependent on both the auditory input and the functional brain state related to the sleep-waking cycle.

14.
The influence of stimulus duration on auditory evoked potentials (AEPs) was examined for tones varying randomly in duration, location, and frequency in an auditory selective attention task. Stimulus duration effects were isolated as duration difference waves by subtracting AEPs to short duration tones from AEPs to longer duration tones of identical location, frequency and rise time. This analysis revealed that AEP components generally increased in amplitude and decreased in latency with increments in signal duration, with evidence of longer temporal integration times for lower frequency tones. Different temporal integration functions were seen for different N1 subcomponents. The results suggest that different auditory cortical areas have different temporal integration times, and that these functions vary as a function of tone frequency.  相似文献   
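The difference-wave logic can be written down in a few lines; this is an illustrative sketch, with array shapes and sampling rate assumed.

```python
# Illustrative sketch only: isolate duration effects by subtracting the average AEP to
# short tones from the average AEP to longer tones of matched location, frequency and rise time.
import numpy as np

def duration_difference_wave(aep_long, aep_short, fs=500.0):
    """aep_long, aep_short: (n_trials, n_samples) epochs aligned to tone onset."""
    diff = aep_long.mean(axis=0) - aep_short.mean(axis=0)
    peak_latency_ms = np.argmax(np.abs(diff)) / fs * 1000.0
    return diff, peak_latency_ms

# toy usage with random epochs standing in for recorded AEPs
rng = np.random.default_rng(0)
wave, latency = duration_difference_wave(rng.standard_normal((100, 300)),
                                         rng.standard_normal((100, 300)))
```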

15.
In this article, we review a combined experimental-neuromodeling framework for understanding brain function with a specific application to auditory object processing. Within this framework, a model is constructed using the best available experimental data and is used to make predictions. The predictions are verified by conducting specific or directed experiments and the resulting data are matched with the simulated data. The model is refined or tested on new data and generates new predictions. The predictions in turn lead to better-focused experiments. The auditory object processing model was constructed using available neurophysiological and neuroanatomical data from mammalian studies of auditory object processing in the cortex. Auditory objects are brief sounds such as syllables, words, melodic fragments, etc. The model can simultaneously simulate neuronal activity at a columnar level and neuroimaging activity at a systems level while processing frequency-modulated tones in a delayed-match-to-sample task. The simulated neuroimaging activity was quantitatively matched with neuroimaging data obtained from experiments; both the simulations and the experiments used similar tasks, sounds, and other experimental parameters. We then used the model to investigate the neural bases of the auditory continuity illusion, a type of perceptual grouping phenomenon, without changing any of its parameters. Perceptual grouping enables the auditory system to integrate brief, disparate sounds into cohesive perceptual units. The neural mechanisms underlying auditory continuity illusion have not been studied extensively with conventional neuroimaging or electrophysiological techniques. Our modeling results agree with behavioral studies in humans and an electrophysiological study in cats. The results predict a particular set of bottom-up cortical processing mechanisms that implement perceptual grouping, and also attest to the robustness of our model.  相似文献   

16.
 Perception of complex communication sounds is a major function of the auditory system. To create a coherent percept of these sounds the auditory system may instantaneously group or bind multiple harmonics within complex sounds. This perception strategy simplifies further processing of complex sounds and facilitates their meaningful integration with other sensory inputs. Based on experimental data and a realistic model, we propose that associative learning of combinations of harmonic frequencies and nonlinear facilitation of responses to those combinations, also referred to as “combination-sensitivity,” are important for spectral grouping. For our model, we simulated combination sensitivity using Hebbian and associative types of synaptic plasticity in auditory neurons. We also provided a parallel tonotopic input that converges and diverges within the network. Neurons in higher-order layers of the network exhibited an emergent property of multifrequency tuning that is consistent with experimental findings. Furthermore, this network had the capacity to “recognize” the pitch or fundamental frequency of a harmonic tone complex even when the fundamental frequency itself was missing. Received: 6 October 2001 / Accepted in revised form: 21 January 2002  相似文献   
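The proposed mechanism, Hebbian association of co-activated tonotopic channels leading to combination sensitivity and missing-fundamental responses, can be illustrated with a toy sketch; the channel spacing, bandwidth, learning rate, and read-out are assumptions, not the published network.

```python
# Illustrative sketch only: repeated exposure to a harmonic complex strengthens
# associations between co-activated frequency channels, so a unit at f0 can be driven
# associatively even when the fundamental itself is missing.
import numpy as np

channels = np.arange(100, 3001, 100)                 # tonotopic input channels (Hz)
n = len(channels)
w = np.zeros((n, n))                                 # associative weights between channels

def activate(freqs, bw=50.0):
    a = np.zeros(n)
    for f in freqs:
        a += np.exp(-0.5 * ((channels - f) / bw) ** 2)
    return a

# exposure phase: repeated harmonic complexes with f0 = 200 Hz (harmonics 1-8)
for _ in range(100):
    a = activate([200 * k for k in range(1, 9)])
    w += 0.01 * np.outer(a, a)                       # Hebbian co-activation learning
w /= w.max()

# test: the same complex with the fundamental (200 Hz) removed
probe = activate([200 * k for k in range(2, 9)])
recovered = w @ probe                                # associative drive to each channel's unit
f0_unit = int(np.argmin(np.abs(channels - 200)))
print(f"drive to the 200-Hz unit without bottom-up input at 200 Hz: {recovered[f0_unit]:.2f} "
      f"(maximum over all units: {recovered.max():.2f})")
```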

17.
Visually induced plasticity of auditory spatial perception in macaques
When experiencing spatially disparate visual and auditory stimuli, a common percept is that the sound originates from the location of the visual stimulus, an illusion known as the ventriloquism effect. This illusion can persist for tens of minutes, a phenomenon termed the ventriloquism aftereffect. The underlying neuronal mechanisms of this rapidly induced plasticity remain unclear; indeed, it remains untested whether similar multimodal interactions occur in other species. We therefore tested whether macaque monkeys experience the ventriloquism aftereffect similar to the way humans do. The ability of two monkeys to determine which side of the midline a sound was presented from was tested before and after a period of 20-60 min in which the monkeys experienced either spatially identical or spatially disparate auditory and visual stimuli. In agreement with human studies, the monkeys did experience a shift in their auditory spatial perception in the direction of the spatially disparate visual stimulus, and the aftereffect did not transfer across sounds that differed in frequency by two octaves. These results show that macaque monkeys experience the ventriloquism aftereffect similar to the way humans do in all tested respects, indicating that these multimodal interactions are a basic phenomenon of the central nervous system.  相似文献   

18.
When dealing with natural scenes, sensory systems have to process an often messy and ambiguous flow of information. A stable perceptual organization nevertheless has to be achieved in order to guide behavior. The neural mechanisms involved can be highlighted by intrinsically ambiguous situations. In such cases, bistable perception occurs: distinct interpretations of the unchanging stimulus alternate spontaneously in the mind of the observer. Bistable stimuli have been used extensively for more than two centuries to study visual perception. Here we demonstrate that bistable perception also occurs in the auditory modality. We compared the temporal dynamics of percept alternations observed during auditory streaming with those observed for visual plaids and the susceptibilities of both modalities to volitional control. Strong similarities indicate that auditory and visual alternations share common principles of perceptual bistability. The absence of correlation across modalities for subject-specific biases, however, suggests that these common principles are implemented at least partly independently across sensory modalities. We propose that visual and auditory perceptual organization could rely on distributed but functionally similar neural competition mechanisms aimed at resolving sensory ambiguities.  相似文献   

19.
The amygdala plays a central role in evaluating the behavioral importance of sensory information. Anatomical subcortical pathways provide direct input to the amygdala from early sensory systems and may support an adaptively valuable rapid appraisal of salient information. However, the functional significance of these subcortical inputs remains controversial. We recorded magnetoencephalographic activity evoked by tones in the context of emotionally valent faces and tested two competing biologically motivated dynamic causal models against these data: the dual and cortical models. The dual model comprised two parallel (cortical and subcortical) routes to the amygdala, whereas the cortical model excluded the subcortical path. We found that neuronal responses elicited by salient information were better explained when a subcortical pathway was included. In keeping with its putative functional role of rapid stimulus appraisal, the subcortical pathway was most important early in stimulus processing. However, as often assumed, its action was not limited to the context of fear, pointing to a more widespread information processing role. Thus, our data supports the idea that an expedited evaluation of sensory input is best explained by an architecture that involves a subcortical path to the amygdala.  相似文献   

20.
The unique temporal and spectral properties of chopper neurons in the cochlear nucleus cannot be fully explained by current popular models. A new model of sustained chopper neurons was therefore suggested based on the assumption that chopper neurons receive input both from onset neurons and the auditory nerve (Bahmer and Langner in Biol Cybern 95:4, 2006). As a result of the interaction of broadband input from onset neurons and narrowband input from the auditory nerve, the chopper neurons in our model are characterized by a remarkable combination of sharp frequency tuning to pure tones and faithful periodicity coding. Our simulations show that the width of the spectral integration of the onset neuron is crucial for both the precision of periodicity coding and their resolution of single components of sinusoidally amplitude-modulated sine waves. One may hypothesize, therefore, that it would be an advantage if the hearing system were able to adapt the spectral integration of onset neurons to varying stimulus conditions.  相似文献   
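A loose toy sketch in the spirit of this architecture (not the model of Bahmer and Langner): a leaky integrate-and-fire "chopper" driven by a sustained narrowband auditory-nerve-like current plus brief onset-neuron pulses locked to the modulation envelope. All parameters are assumptions.

```python
# Illustrative sketch only: the intrinsic drive produces regular chopping, while the
# envelope-locked onset pulses pull spike times toward the modulation cycle.
import numpy as np

def chopper_lif(duration=0.2, dt=1e-5, tau=0.005, v_thresh=1.0,
                an_drive=1.3, onset_kick=0.4, fmod=100.0):
    n = int(round(duration / dt))
    onset_idx = set(np.round(np.arange(0.0, duration, 1.0 / fmod) / dt).astype(int))
    v, spikes = 0.0, []
    for i in range(n):
        v += (dt / tau) * (an_drive - v)             # sustained narrowband (AN) drive at CF
        if i in onset_idx:
            v += onset_kick                          # onset-neuron pulse, one per envelope cycle
        if v >= v_thresh:
            spikes.append(i * dt)
            v = 0.0                                  # reset after the output spike
    return np.array(spikes)

spk = chopper_lif()
isi = np.diff(spk) * 1000.0
phase = (spk % 0.01) * 1000.0                        # spike time within the 10-ms envelope cycle
print(f"chopping interval {isi.mean():.1f} +/- {isi.std():.1f} ms; "
      f"spread of spike times within the envelope cycle {phase.std():.1f} ms")
```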

