Similar articles
 Found 20 similar articles; search time: 937 ms
1.
Research strategy in the auditory system has tended to parallel that in the visual system, where neurons have been shown to respond selectively to specific stimulus parameters. Auditory neurons are sensitive to changes in acoustic parameters, but only rarely have neurons been reported that respond exclusively to a single biologically significant sound. Even at higher levels of the auditory system, very few cells have been found that could be described as "vocalization detectors." In addition, variability in responses to artificial sounds has been reported for auditory cortical neurons, similar to the response variability reported in the visual system. Recent evidence indicates that the responses of auditory cortical neurons to species-specific vocalizations can also be labile, varying in both strength and selectivity. This is especially true of the secondary auditory cortex. This variability, coupled with the lack of extreme specificity in the secondary auditory cortex, suggests that secondary cortical neurons are not well suited for the role of "vocalization detectors."

2.
The auditory cortex
The division of the auditory cortex into various fields, functional aspects of these fields, and neuronal coding in the primary auditory cortical field (AI) are reviewed, with emphasis on features that may be common to mammals. On the basis of 14 topographies and clustered distributions of neuronal response characteristics in AI, a hypothesis is developed of how a complex acoustic pattern may be encoded in an equivalent spatial activity pattern in AI, generated by time-coordinated firing of groups of neurons. The auditory cortex, as demonstrated specifically for AI, appears to perform sound analysis by synthesis, i.e., by combining spatially distributed coincident or time-coordinated neuronal responses. The dynamics of sounds and the plasticity of cortical responses are discussed as topics for further research. Accepted: 25 July 1997

3.

Background

Recent research has addressed the suppression of cortical sensory responses to altered auditory feedback at the onset of an utterance during speech. However, there is reason to assume that the mechanisms underlying sensorimotor processing at mid-utterance differ from those involved in sensorimotor control at utterance onset. The present study examined the dynamics of event-related potentials (ERPs) in response to different acoustic versions of auditory feedback at mid-utterance.

Methodology/Principal findings

Subjects produced a vowel sound while hearing, via headphones at mid-utterance, their pitch-shifted voice (100 cents), a sum of their vocalization and pure tones, or a sum of their vocalization and white noise. Subjects also passively listened to playback of what they heard during active vocalization. Cortical ERPs were recorded in response to the different acoustic versions of feedback changes during both active vocalization and passive listening. The results showed that, relative to passive listening, active vocalization yielded enhanced P2 responses to the 100-cent pitch shifts, whereas P2 responses were suppressed when voice auditory feedback was distorted by pure tones or white noise.

Conclusion/Significance

The present findings demonstrate, for the first time, a dynamic modulation of cortical activity as a function of the quality of acoustic feedback at mid-utterance, suggesting that auditory cortical responses can be enhanced or suppressed to distinguish self-produced speech from externally produced sounds.
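The ERP measures in this entry rest on time-locked epoch averaging: single-trial responses are aligned to the feedback perturbation and averaged so that non-phase-locked noise cancels. A minimal sketch with synthetic data follows; the sampling rate, the P2-like waveform, and the 180-220 ms measurement window are illustrative assumptions, not the study's parameters.

```python
import numpy as np

fs = 500                                   # sampling rate in Hz (assumed)
n_trials, epoch_len = 60, int(0.5 * fs)    # 0-500 ms epochs
t = np.arange(epoch_len) / fs
rng = np.random.default_rng(2)

# Synthetic single trials: a P2-like positivity at ~200 ms buried in noise
p2 = 2.0 * np.exp(-((t - 0.2) ** 2) / (2 * 0.02 ** 2))
trials = p2 + rng.standard_normal((n_trials, epoch_len))

# Averaging across trials cancels activity not phase-locked to the stimulus
erp = trials.mean(axis=0)

# P2 amplitude: mean voltage in a 180-220 ms window (illustrative choice)
win = (t >= 0.18) & (t <= 0.22)
p2_amp = erp[win].mean()
print(p2_amp)
```

Comparing `p2_amp` between conditions (e.g., active vocalization vs. passive listening) is the kind of contrast the entry describes.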

4.
Previous research has shown that postnatal exposure to simple, synthetic sounds can affect sound representation in the auditory cortex, as reflected by changes in the tonotopic map or in relatively simple tuning properties such as AM tuning. However, the functional implications for neural processing in the generation of ethologically based perception remain unexplored. Here we examined the effects of noise-rearing and social isolation on the neural processing of communication sounds, such as species-specific song, in the primary auditory cortex analog of adult zebra finches. Our electrophysiological recordings reveal that neural tuning to simple frequency-based synthetic sounds is initially established in all laminae independent of patterned acoustic experience. However, we provide the first evidence that early exposure to patterned sound statistics, such as those found in native sounds, is required for the subsequent emergence of neural selectivity for complex vocalizations, for shaping neural spiking precision in superficial and deep cortical laminae, and for creating efficient neural representations of song and a less redundant ensemble code in all laminae. Our study also provides the first causal evidence for 'sparse coding': when the statistics of the stimuli were changed during rearing, as in noise-rearing, the sparse, optimal representation of species-specific vocalizations disappeared. Taken together, these results imply that layer-specific differential development of the auditory cortex requires patterned acoustic input, and that a specialized and robust sensory representation of complex communication sounds in the auditory cortex requires a rich acoustic and social environment.

5.
The auditory system consists of the ascending and descending (corticofugal) systems. The corticofugal system forms multiple feedback loops. Repetitive acoustic or auditory cortical electric stimulation activates the cortical neural net and the corticofugal system and evokes cortical plastic changes as well as subcortical plastic changes. These changes are short-term and are specific to the properties of the acoustic stimulus or electrically stimulated cortical neurons. These plastic changes are modulated by the neuromodulatory system. When the acoustic stimulus becomes behaviorally relevant to the animal through auditory fear conditioning or when the cortical electric stimulation is paired with an electric stimulation of the cholinergic basal forebrain, the cortical plastic changes become larger and long-term, whereas the subcortical changes stay short-term, although they also become larger. Acetylcholine plays an essential role in augmenting the plastic changes and in producing long-term cortical changes. The corticofugal system has multiple functions. One of the most important functions is the improvement and adjustment (reorganization) of subcortical auditory signal processing for cortical signal processing.

6.
The size of a resonant source can be estimated from the acoustic-scale information in the sound [1-3]. Previous studies revealed that the posterior superior temporal gyrus (STG) responds to acoustic scale in human speech when spectral-envelope change is controlled for (unpublished data). Here we investigate whether this STG activity is specific to the processing of acoustic scale in the human voice or whether it reflects a generic mechanism for the analysis of acoustic scale in resonant sources. In two functional magnetic resonance imaging (fMRI) experiments, we measured brain activity in response to changes in acoustic scale in different categories of resonant sound (human voice, animal call, and musical instrument). We show that STG is activated bilaterally for spectral-envelope changes in general; it responds to changes in category as well as acoustic scale. Activity in left posterior STG is specific to acoustic scale in human voices and is not responsive to acoustic scale in other resonant sources. In contrast, the anterior temporal lobe and intraparietal sulcus are activated by changes in acoustic scale across categories. The results imply that the human voice requires special processing of acoustic scale, whereas the anterior temporal lobe and intraparietal sulcus process auditory size information independent of source category.

7.
This work presents experimental data on changes in the electrical responses of the auditory midbrain center during antiphasic binaural presentation of sound-pulse trains. Cortical neuronal activity is selective for dynamic interaural changes in the signals' phase spectra, which may serve as a basis for mechanisms that localize a moving sound source. Human auditory evoked potentials show evidence of memory for the direction of auditory image movement, as indicated by the mismatch negativity elicited by stimuli deviating from the standard.

8.
Unit activity in cortical areas 24 and 32 was studied during conditioned placing reflex formation in cats. Neuronal responses in the limbic cortex of trained animals correlated with acoustic stimulation, the motor response, and also with the presentation of food reinforcement. In untrained animals 16% of neurons responded to acoustic stimulation. After training the number of neurons responding to sound in area 32 increased to 51.3%. Of the total number of neurons, 34.6% responded by initial excitation and 26.7% by inhibition of spike activity. The latent period of these responses was about 50 msec and their duration up to 200 msec. Similar but weaker responses were observed in area 24. Short-latency activation responses to conditioned and differential stimulation were similar in character. It is suggested that after training processes taking place in the limbic cortex may contribute to better perception of both conditioned and differential acoustic stimuli, irrespective of their functional significance.
A. A. Bogomolets Institute of Physiology, Academy of Sciences of the Ukrainian SSR, Kiev. Translated from Neirofiziologiya, Vol. 16, No. 2, pp. 201–208, March–April, 1984.

9.
How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex.

10.
The effects of weak clicks, click trains, and wide-band noise at intensities of 10-20 dB were studied in healthy subjects. Changes in the activity of central perceptual mechanisms were evaluated with long-latency auditory evoked potentials, and changes in autonomic functions were assessed by analysis of the periodic structure of cardiac activity. Weak acoustic stimuli were shown to exert a relatively weak direct influence on the auditory system, revealed as weak and unstable registration of the stimuli together with a reduction of high-amplitude long-latency auditory evoked potentials. However, these signals had a significant background effect on the subjects' functional state; such changes in functional state presumably affect the activity of the central parts of the auditory and associative systems of the brain, which is registered as changes in the long-latency auditory evoked potentials.

11.
There have been recent developments in our understanding of the auditory neuroscience of non-human primates that, to a certain extent, can be integrated with findings from human functional neuroimaging studies. This framework can be used to consider the cortical basis of complex sound processing in humans, including implications for speech perception, spatial auditory processing and auditory scene segregation.

12.
The coding of complex sounds in the early auditory system has a 'standard model' based on the known physiology of the cochlea and the main brainstem pathways. This model accounts for a wide range of perceptual capabilities. It is generally accepted that higher cortical areas encode abstract qualities such as spatial location or speech-sound identity. Between the early and late auditory system, the role of primary auditory cortex (A1) is still debated. A1 is clearly much more than a 'whiteboard' of acoustic information: neurons in A1 have complex response properties, showing sensitivity to both low-level and high-level features of sounds.

13.
Across multiple timescales, acoustic regularities of speech match rhythmic properties of both the auditory and motor systems. Syllabic rate corresponds to natural jaw-associated oscillatory rhythms, and phonemic length could reflect endogenous oscillatory auditory cortical properties. Hemispheric lateralization for speech could result from an asymmetry of cortical tuning, with left and right auditory areas differentially sensitive to spectro-temporal features of speech. Using simultaneous electroencephalographic (EEG) and functional magnetic resonance imaging (fMRI) recordings from humans, we show that spontaneous EEG power variations within the gamma range (phonemic rate) correlate best with left auditory cortical synaptic activity, while fluctuations within the theta range correlate best with that in the right. Power fluctuations in both ranges correlate with activity in the mouth premotor region, indicating coupling between temporal properties of speech perception and production. These data show that endogenous cortical rhythms provide temporal and spatial constraints on the neuronal mechanisms underlying speech perception and production.
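The band-limited power fluctuations at the core of this analysis can be sketched as follows. The signal is synthetic, the sampling rate is an assumed value, and the theta (4-8 Hz) and gamma (30-80 Hz) band edges are illustrative conventions; real EEG pipelines would use dedicated filtering and artifact rejection.

```python
import numpy as np

fs = 250  # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)

# Synthetic EEG: a 6 Hz (theta) component, a weaker 40 Hz (gamma) component, and noise
eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 40 * t) \
      + 0.2 * rng.standard_normal(t.size)

def band_power(x, fs, lo, hi):
    """Mean periodogram power in the [lo, hi] Hz band."""
    spec = np.abs(np.fft.rfft(x)) ** 2 / x.size
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spec[mask].mean()

theta = band_power(eeg, fs, 4, 8)    # theta range: syllabic-rate fluctuations
gamma = band_power(eeg, fs, 30, 80)  # gamma range: phonemic-rate fluctuations
print(theta > gamma)
```

Correlating time courses of such band-power estimates with the fMRI signal is the kind of EEG-fMRI coupling analysis the entry describes.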

14.
The auditory cortex supports the processing of sound, which is at the basis of speech- and music-related processing [1]. However, despite considerable recent progress, the functional properties and lateralization of the human auditory cortex are far from fully understood. Transcranial magnetic stimulation (TMS) is a non-invasive technique that can transiently or lastingly modulate cortical excitability via the application of localized magnetic field pulses, and it represents a unique method for exploring plasticity and connectivity. It has only recently begun to be applied to the study of auditory cortical function [2]. An important issue in using TMS is that the physiological consequences of the stimulation are difficult to establish. Although many TMS studies make the implicit assumption that the area targeted by the coil is the area affected, this need not be the case, particularly for complex cognitive functions that depend on interactions across many brain regions [3]. One solution to this problem is to combine TMS with functional magnetic resonance imaging (fMRI). The idea is that fMRI provides an index of changes in brain activity associated with TMS, and thus an independent means of assessing which areas are affected by TMS and how they are modulated [4]. In addition, fMRI allows the assessment of functional connectivity, a measure of the temporal coupling between distant regions. It can thus be used not only to measure the net activity modulation induced by TMS at given locations, but also to gauge the degree to which network properties are affected by TMS, via any observed changes in functional connectivity. Different approaches exist for combining TMS and functional imaging, according to the temporal order of the methods: fMRI can be applied before, during, after, or both before and after TMS. Recently, some studies have interleaved TMS and fMRI to provide online mapping of the functional changes induced by TMS [5-7].
However, this online combination has many technical problems, including static artifacts resulting from the presence of the TMS coil in the scanner room and the effects of TMS pulses on the process of MR image formation. More importantly, the loud acoustic noise induced by TMS (increased relative to standard use because of the resonance of the scanner bore) and the increased TMS coil vibrations (caused by the strong mechanical forces due to the static magnetic field of the MR scanner) constitute a crucial problem when studying auditory processing. This is one reason why fMRI was carried out before and after TMS in the present study. Similar approaches have been used to target the motor cortex [8,9], premotor cortex [10], primary somatosensory cortex [11,12], and language-related areas [13], but so far no combined TMS-fMRI study has investigated the auditory cortex. The purpose of this article is to provide details of the protocol and the considerations necessary to successfully combine these two neuroscientific tools to investigate auditory processing. Previously, we showed that repetitive TMS (rTMS) at high and low frequencies (10 Hz and 1 Hz, respectively) applied over the auditory cortex modulated response time (RT) in a melody discrimination task [2]. We also showed that RT modulation was correlated with functional connectivity in the auditory network assessed using fMRI: the higher the functional connectivity between left and right auditory cortices during task performance, the greater the facilitatory effect (i.e., decreased RT) observed with rTMS. However, those findings were mainly correlational, as fMRI was performed before rTMS. Here, fMRI was carried out before and immediately after TMS to provide direct measures of the functional organization of the auditory cortex, and more specifically of the plastic reorganization of the auditory neural network occurring after the neural intervention provided by TMS.
Combined fMRI and TMS applied over the auditory cortex should enable a better understanding of the brain mechanisms of auditory processing, providing physiological information about the functional effects of TMS. This knowledge could be useful for many cognitive neuroscience applications, as well as for optimizing therapeutic applications of TMS, particularly in auditory-related disorders.

15.
Perceptual organization of sound begins in the auditory periphery
Segmenting the complex acoustic mixture that makes up a typical auditory scene into relevant perceptual objects is one of the main challenges of the auditory system [1], for both human and nonhuman species. Several recent studies indicate that perceptual auditory object formation, or "streaming," may be based on neural activity within the auditory cortex and beyond [2, 3]. Here, we find that scene analysis starts much earlier in the auditory pathways. Single units were recorded from a peripheral structure of the mammalian auditory brainstem, the cochlear nucleus. Peripheral responses were similar to cortical responses and displayed all of the functional properties required for streaming, including multisecond adaptation. Behavioral streaming was also measured in human listeners. Neurometric functions derived from the peripheral responses accurately predicted behavioral streaming. This reveals that subcortical structures may already contribute to the analysis of auditory scenes. The finding is consistent with the observation that species lacking a neocortex can still achieve, and benefit from, behavioral streaming [4]. For humans, we argue that analysis of complex auditory scenes is probably based on interactions between subcortical and cortical neural processes, with the relative contribution of each stage depending on the nature of the acoustic cues forming the streams.
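A neurometric function of the kind used to predict behavioral streaming can be sketched by converting spike-count distributions into a detection probability via ROC analysis. Everything below is a synthetic illustration: the Poisson firing rates and the "separation" axis are assumptions standing in for a real stimulus manipulation.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200  # trials per condition (illustrative)

def roc_auc(a, b):
    """Probability that a random draw from b exceeds one from a (ties count half)."""
    a, b = np.asarray(a), np.asarray(b)
    gt = (b[:, None] > a[None, :]).mean()
    eq = (b[:, None] == a[None, :]).mean()
    return gt + 0.5 * eq

# Spike counts for a reference condition and increasingly different test conditions
reference = rng.poisson(5.0, n)
separations = [0.0, 1.0, 2.0, 4.0]  # added spikes/trial (illustrative stimulus axis)
neurometric = [roc_auc(reference, rng.poisson(5.0 + d, n)) for d in separations]

print([round(p, 2) for p in neurometric])
```

Plotting `neurometric` against the stimulus separation gives a curve that can be compared directly with a psychometric function measured behaviorally, which is the logic behind "neurometric functions predicted behavioral streaming."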

16.
Lewald J, Getzmann S. PLoS One 2011, 6(9): e25146
The modulation of brain activity as a function of auditory location was investigated using electroencephalography in combination with standardized low-resolution brain electromagnetic tomography. Auditory stimuli were presented at various positions under anechoic conditions in free-field space, thus providing the complete set of natural spatial cues. Variation of electrical activity in cortical areas depending on sound location was analyzed by contrasting sound locations at the times of the N1 and P2 responses of the auditory evoked potential. A clear-cut double dissociation with respect to cortical location and timing was found, indicating spatial processing (1) in the primary auditory cortex and the posterodorsal auditory cortical pathway at the time of the N1, and (2) in anteroventral pathway regions about 100 ms later, at the time of the P2. Thus, both auditory pathways appear to be involved in spatial analysis, but at different points in time. It is possible that the late processing in the anteroventral auditory network reflected the sharing of this region by the analysis of object-feature information and spectral localization cues, or even the integration of spatial and non-spatial sound features.

17.
1. Frequency and space representation in the auditory cortex of the big brown bat, Eptesicus fuscus, was studied by recording the responses of 223 neurons to acoustic stimuli presented in the bat's frontal auditory space. 2. The majority of the auditory cortical neurons were recorded at a depth of less than 500 microns, with response latencies between 8 and 20 ms. They generally discharged phasically and had nonmonotonic intensity-rate functions. The minimum threshold (MT) of these neurons was between 8 and 82 dB sound pressure level (SPL). Half of the cortical neurons showed spontaneous activity. All 55 threshold curves were V-shaped and could be described as broad, intermediate, or narrow. 3. Auditory cortical neurons are tonotopically organized along the anteroposterior axis of the auditory cortex: high-frequency-sensitive neurons are located anteriorly and low-frequency-sensitive neurons posteriorly. The overwhelming majority of neurons were sensitive to frequencies between 30 and 75 kHz. 4. When a sound was delivered from a neuron's response center in the bat's frontal auditory space, the neuron had its lowest MT. When the stimulus amplitude was increased above the MT, the neuron responded to sound delivered within a defined spatial area. The response center was not always at the geometric center of the spatial response area, which also expanded with stimulus amplitude. High-frequency-sensitive neurons tended to have smaller spatial response areas than low-frequency-sensitive neurons. 5. The response centers of all 223 neurons were located between 0 and 50 degrees in azimuth, and between 2 degrees up and 25 degrees down in elevation, in the contralateral frontal auditory space. Response centers of auditory cortical neurons tended to move toward the midline, and slightly downward, with increasing best frequency. 6. Auditory space representation appears to be systematically arranged according to the tonotopic axis of the auditory cortex: the lateral space is represented posteriorly and the middle space anteriorly. Space representation, however, is less systematic in the vertical direction. 7. Auditory cortical neurons are columnarly organized: the best frequencies (BFs), MTs, threshold curves, azimuthal locations of response centers, and auditory spatial response areas of neurons sequentially isolated along an orthogonal electrode penetration are similar.

18.
Several acoustic cues contribute to auditory distance estimation. Nonacoustic cues, including familiarity, may also play a role. We tested participants' ability to distinguish the distances of acoustically similar sounds that differed in familiarity. Participants were better able to judge the distances of familiar sounds. Electroencephalographic (EEG) recordings collected while participants performed this auditory distance judgment task revealed that several cortical regions responded in different ways depending on sound familiarity. Surprisingly, these differences were observed in auditory cortical regions as well as other cortical regions distributed throughout both hemispheres. These data suggest that learning about subtle, distance-dependent variations in complex speech sounds involves processing in a broad cortical network that contributes both to speech recognition and to how spatial information is extracted from speech.

19.
Dysfunction of the inner ear, as caused by presbyacusis, injury, or noise trauma, may result in subjective tinnitus, but not everyone suffering from one of these conditions develops a tinnitus percept, and vice versa. The reasons for these individual differences are still unclear and may explain why different treatments are beneficial for some patients but not for others. Here we compare, for the first time, behavioral and neurophysiological data from hearing-impaired Mongolian gerbils with (T) and without (NT) a tinnitus percept, which may elucidate why some individuals develop subjective tinnitus after noise trauma while others do not. Although noise trauma induced a similar permanent hearing loss in all animals, tinnitus developed in only about three quarters of them. NT animals showed higher overall cortical and auditory brainstem activity before noise trauma than T animals; that is, animals with low overall neuronal activity in the auditory system seem to be prone to develop tinnitus after noise trauma. Furthermore, T animals showed increased activity of cortical neurons representing the tinnitus frequencies after acoustic trauma, whereas NT animals exhibited an activity decrease at moderate sound intensities by that time. Spontaneous activity was generally increased in T but decreased in NT animals. Plastic changes of tonotopic organization were transient, were seen only in T animals, and had vanished by the time the tinnitus percept became chronic. We propose a model of tinnitus prevention that points to a global inhibitory mechanism in auditory cortex that may prevent tinnitus genesis in animals with high overall activity in the auditory system, whereas this mechanism seems not potent enough to prevent tinnitus in animals with low overall activity.

20.
Han L, Zhang Y, Lou Y, Xiong Y. PLoS One 2012, 7(4): e34837
Auditory cortical plasticity can be induced through various approaches. The medial geniculate body (MGB) of the auditory thalamus gates the ascending auditory inputs to the cortex, and the thalamocortical system has been proposed to play a critical role in the responses of the auditory cortex (AC). In the present study, we investigated the cellular mechanism of this cortical activity using in vivo intracellular recording from the primary auditory cortex (AI) of the rat while presenting an acoustic stimulus and electrically stimulating the MGB. We found that low-frequency stimulation enhanced the amplitudes of sound-evoked excitatory postsynaptic potentials (EPSPs) in AI neurons, whereas high-frequency stimulation depressed these auditory responses. The degree of this modulation depended on the intensity of the train stimuli as well as on the interval between the electrical stimulation and the paired sound stimulation. These findings may have implications for the basic mechanisms by which MGB activation shapes auditory cortical plasticity and cortical signal processing.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号