Similar Articles
1.
Li ZF, Gao EQ. Progress in Physiological Sciences (生理科学进展), 2005, 36(2): 175-178
The auditory corticofugal system consists of descending fibers that project directly from the auditory cortex to the subcortical auditory nuclei and the cochlea. These fibers fairly strictly follow tonotopic organization and form multiple feedback loops with the ascending afferent fibers. Through highly focused positive feedback, the corticofugal system facilitates the activity of subcortical auditory neurons whose physiological properties match its own, while suppressing, through widespread collateral inhibition, the activity of subcortical neurons whose properties do not match. It thereby adjusts and improves subcortical auditory information processing and participates in the plasticity of the central auditory system. Corticofugal descending modulation is also widespread in the visual and somatosensory systems, where similar neural mechanisms may operate.

2.
Yao XH, Xiong Y. Progress in Physiological Sciences (生理科学进展), 2004, 35(4): 345-348
The medial geniculate body (MGB) is the principal thalamic relay nucleus of the auditory system. Thalamocortical neurons of the MGB send ascending projections to the auditory cortex and, in turn, receive descending projections from corticothalamic neurons of the auditory cortex. Auditory information is therefore both encoded and integrated along the ascending pathway through the MGB and modulated by the corticofugal descending pathway. The MGB also participates in sound localization, auditory plasticity, and related processes. This review summarizes recent anatomical and physiological studies of the MGB, focusing on its fiber connections with the auditory cortex and its role in the modulation of auditory information.

3.
Music is an auditory art form that plays an important role in children's education and development, especially in the development of language ability. Phonological awareness (the ability to perceive, identify, analyze, and use speech sounds) is an important predictor of children's reading and writing ability. This article reviews the evidence from the past decade on how music training affects children's phonological awareness and discusses the neural bases and explanatory models of this facilitation. A large body of research shows that, at the behavioral level, music training improves children's performance on phonological awareness tasks. Music training also shapes the neural basis of phonological processing in two ways: by affecting the basic subcortical auditory pathway and the auditory cortex, it enhances children's pre-attentive speech perception; and by affecting functional connectivity among speech-processing brain regions, it promotes phonological encoding and strengthens auditory-motor integration in speech processing. These neural mechanisms provide a biological foundation for the facilitating effect of music training on children's phonological awareness. Building on existing research, this article proposes an integrated hierarchical model that systematically explains the cognitive-neural mechanisms of this effect at three levels: at the first level, music training promotes basic auditory processing of speech by shaping the basic auditory pathway, with rhythm training enhancing the perception of speech timing information and pitch training enhancing the perception of speech frequency information; at the second level, music training further promotes phonological encoding by shaping the speech-processing neural network, with rhythm training mainly promoting...

4.
Although the coding of spatial information by auditory cortical neurons has been studied extensively, the underlying mechanisms are not well understood, and detailed reports from the rat primary auditory cortex are lacking. Using neurophysiological methods, we examined the auditory spatial response fields of 151 neurons in the rat primary auditory cortex and analyzed the relationship between the spike count and the mean first-spike latency of responses to sound stimuli delivered from different spatial directions. Most neurons (52.32%) responded more strongly to stimuli from the contralateral hemifield and were classified as contralateral-preferring; the remainder were classified as ipsilateral-preferring (18.54%), midline-preferring (18.54%), omnidirectional (3.31%), or complex (7.28%). The geometric centers of most neurons' preferred spatial regions lay in the middle and upper parts of the hemifield contralateral to the recording site. The vast majority of primary auditory cortex neurons fired more spikes at shorter latencies in response to stimuli from their preferred region and fewer spikes at longer latencies to stimuli from non-preferred regions, with spike count significantly negatively correlated with mean first-spike latency. In encoding spatial information, the primary auditory cortex may thus combine spike-count and latency information to represent sound-source azimuth.
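The spike-count/latency relationship reported above is easy to quantify. The following is a minimal illustrative sketch with synthetic data (all numbers hypothetical, not the study's recordings), computing the Pearson correlation between spike counts and mean first-spike latencies across azimuths:

```python
import numpy as np

# Synthetic per-azimuth responses for one hypothetical neuron:
# stronger responses (more spikes) tend to have shorter first-spike latencies.
rng = np.random.default_rng(0)
azimuths = np.arange(-90, 91, 15)                    # degrees
spike_counts = 20 * np.exp(-((azimuths - 45) / 60) ** 2) + rng.poisson(1, azimuths.size)
latencies_ms = 30 - 0.5 * spike_counts + rng.normal(0, 1, azimuths.size)

# Pearson correlation between spike count and mean first-spike latency.
r = np.corrcoef(spike_counts, latencies_ms)[0, 1]
print(f"spike count vs. latency: r = {r:.2f}")       # expected to be strongly negative
```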

5.
The sounds the auditory system can perceive vary enormously, and their parameters span huge ranges: the upper and lower limits of frequency differ by a factor of 1,000, and those of intensity, expressed in terms of energy, differ by a factor of 10,000. How, then, does the auditory system encode such an enormous amount of auditory information? Sound waves are transduced by cochlear hair cells into nerve impulses, which carry the acoustic information. But nerve impulses propagate in all-or-none fashion: on a single fiber their amplitude and waveform are essentially fixed, so amplitude and waveform cannot convey the properties of a sound. Different forms of acoustic information can therefore be conveyed only by the rhythm and inter-spike intervals of the impulses and by the site of origin, along the cochlear basilar membrane, of the fibers that fire. The way nerve impulses are transmitted along the auditory nerve fibers...
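As a quick sanity check on the ranges quoted above (my own arithmetic, not part of the original abstract), the energy and frequency ratios translate into the logarithmic units used in hearing science:

```python
import math

# A 10,000-fold ratio of sound energies expressed in decibels:
energy_ratio = 10_000
print(10 * math.log10(energy_ratio), "dB")   # 40.0 dB

# A 1,000-fold frequency range expressed in octaves (doublings):
freq_ratio = 1_000
print(math.log2(freq_ratio), "octaves")      # ~9.97 octaves
```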

6.
This article reviews the neural mechanisms underlying the processing of social cues in faces. Through a systematic review of research on facial social cues, it describes the neural mechanisms of four kinds of processing: facial emotion, facial attractiveness, gaze direction and head orientation, and lip-reading. It first briefly describes the brain's general mechanisms for processing face stimuli, including the functions of the fusiform face area in the inferior temporal lobe, the face area in the posterior superior temporal sulcus, and the occipital face area in the inferior occipital gyrus. It then discusses the processing of emotional faces, which involves both perceptual encoding and emotional encoding. Studies show that, beyond the face-processing regions of visual cortex, the amygdala plays an important role in emotional encoding; neural responses to facial expressions are modulated by emotion category, by whether the emotional face is dynamic, and by subliminal presentation, among other factors. Regarding facial attractiveness, studies show that highly attractive faces activate reward-related neural circuits, but the precise effect of attractiveness on neural activity remains controversial; neural responses to facial attractiveness may be modulated by task type, the observer's sexual orientation and gender, the observer's psychological state, and other social cues in the face. Gaze direction and head orientation are related to visual attention, and their processing recruits attention-related regions such as the intraparietal sulcus in addition to the face-processing areas. Research on lip-reading shows that it plays an important role in speech perception and activates auditory and speech-related cortices. Finally, the article summarizes how this evidence supports and refines theories of face-information processing, examines deficits in these processes in special populations, and points out future directions for the field.

7.
In natural environments, humans and animals often perceive signal sounds against background noise, yet how weak, low-intensity background noise affects the coding of stimulus frequency by auditory cortical neurons remains unclear. In this study we measured the effect of subthreshold background noise on the frequency response areas of 79 neurons in the rat primary auditory cortex. The results show that weak background noise exerts both suppressive and facilitatory influences on the auditory responses of these neurons: in general, suppression shrank a neuron's frequency tuning range and best-frequency response area, whereas facilitation enlarged them. In a minority of neurons, weak background noise did not significantly change the frequency tuning range but did change the extent of the best-frequency response area. Weak background noise had no significant effect on the characteristic frequency of 63.64% of the neurons or on the minimum threshold of 55.84% of the neurons. The tip of the frequency tuning curve was more susceptible to weak background noise than the middle portion. These results help us understand how the auditory cortex encodes auditory information in complex acoustic environments.
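One simple way to operationalize the suppressive/facilitatory classification described above is to compare the area of a neuron's frequency response area (FRA) measured in quiet and in noise. A minimal sketch under that assumption; the function name, criterion, and toy FRAs are all hypothetical, not the paper's procedure:

```python
import numpy as np

def classify_noise_effect(fra_quiet: np.ndarray, fra_noise: np.ndarray,
                          criterion: float = 0.1) -> str:
    """Compare FRA areas measured in quiet vs. with weak background noise.

    fra_quiet, fra_noise: boolean arrays over (level, frequency) bins,
    True where the response exceeds the response criterion.
    """
    area_quiet = fra_quiet.sum()
    area_noise = fra_noise.sum()
    change = (area_noise - area_quiet) / max(area_quiet, 1)
    if change <= -criterion:
        return "suppressive"      # tuning range shrank
    if change >= criterion:
        return "facilitatory"     # tuning range grew
    return "no significant change"

# Toy example: noise removes the low-level tip of the tuning curve.
quiet = np.zeros((8, 16), dtype=bool); quiet[2:, 4:12] = True
noise = quiet.copy(); noise[2:4, :] = False
print(classify_noise_effect(quiet, noise))   # suppressive
```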

8.
Previous studies have found that auditory perception comprises multiple cognitive processes, including the detection, sensation, attention, and perception of sound signals, but how the brain decodes and processes different types of complex sound signals (such as conspecific calls versus other sounds), and how brain activity unfolds dynamically during the perception of different sound types, remain unclear. In this study, we recorded activity from the left and right telencephalon, diencephalon, and mesencephalon of the music frog Nidirana daunchina during random playback of white noise and of calls produced inside nest burrows...

9.
Bai J, Tang J. Journal of Biology (生物学杂志), 2011, 28(2): 62-65
Frequency, an important parameter of sound, plays a key role in how auditory neurons analyze and encode sounds. A neuron's frequency tuning is usually characterized by its frequency tuning curve, whose sharpness is expressed by Qn values (Q10, Q30, Q50): the larger the Qn value, the sharper the tuning curve, the better the neuron's frequency tuning, and the finer its frequency resolution. From the auditory periphery to the central auditory system, the frequency tuning of auditory neurons is progressively sharpened, mainly through the action of several inhibitory neurotransmitters in the auditory centers, chiefly GABAergic and glycinergic transmission. In addition, corticofugal modulation, the commissural projections between the two inferior colliculi, and forward masking by weak noise can also affect the frequency tuning of auditory neurons.
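The Qn value mentioned above is conventionally defined as the characteristic frequency (CF) divided by the bandwidth of the tuning curve measured n dB above the neuron's minimum threshold. A minimal sketch of that textbook computation (the sample numbers are illustrative):

```python
def q_value(cf_khz: float, f_low_khz: float, f_high_khz: float) -> float:
    """Qn = characteristic frequency / bandwidth at n dB above minimum threshold.

    f_low_khz and f_high_khz are the tuning-curve edges read off n dB
    above the neuron's minimum threshold; a larger Qn means sharper tuning.
    """
    return cf_khz / (f_high_khz - f_low_khz)

# Illustrative neuron: CF = 20 kHz, 10-dB bandwidth from 18 to 22 kHz.
print(q_value(20.0, 18.0, 22.0))   # Q10 = 5.0
```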

10.
Robust speech signal representation based on the integration of temporal and place mechanisms
Traditional spectral features of speech are extracted with FFT-based power-spectrum analysis, which in noisy conditions treats the spectral components of noise and those of the speech signal on an equal footing: noise components receive the same weight as speech components, so in a noisy environment this processing lets the noise mask the speech. In the auditory system, this style of processing corresponds to the frequency-analysis function of the cochlear filters, i.e., the place mechanism. In reality, however, the auditory system does not treat noise and periodic signals equally: it is sensitive to periodic signals and insensitive to noise, and auditory nerve fibers encode the stimulus through the periodic intervals between their spikes, corresponding to the temporal coding mechanism of auditory processing. Building on these two mechanisms, this paper proposes a method that integrates the place and temporal mechanisms, which is precisely how the auditory system processes stimuli. The method combines the strengths of the two mechanisms and can effectively detect speech signals in noisy environments.
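The proposed integration can be caricatured as pairing a place feature (an FFT magnitude spectrum) with a timing feature (a normalized autocorrelation peak, which is large for periodic signals and small for noise). The sketch below is an illustration of that general idea, not the paper's algorithm:

```python
import numpy as np

def place_timing_features(frame: np.ndarray, sr: int):
    """Compute a crude 'place' and 'timing' feature pair for one frame.

    place: FFT magnitude spectrum (cochlear-filter-like frequency analysis).
    timing: normalized autocorrelation peak, high for periodic signals
    and near zero for noise, mimicking inter-spike-interval coding.
    """
    frame = frame - frame.mean()
    place = np.abs(np.fft.rfft(frame * np.hanning(frame.size)))
    ac = np.correlate(frame, frame, mode="full")[frame.size - 1:]
    ac = ac / ac[0]                          # normalize; ac[0] is signal energy
    lo = sr // 400                           # search pitch lags for 50-400 Hz
    hi = sr // 50
    timing = float(ac[lo:hi].max())          # periodicity strength in [-1, 1]
    return place, timing

sr = 8000
t = np.arange(0, 0.032, 1 / sr)
tone = np.sin(2 * np.pi * 200 * t)           # periodic: timing near 1
noise = np.random.default_rng(1).normal(size=t.size)
print(place_timing_features(tone, sr)[1], place_timing_features(noise, sr)[1])
```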

11.
How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex.
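Stimulus reconstruction of the kind described above is commonly posed as regularized linear regression from population activity to the auditory spectrogram. A minimal ridge-regression sketch of that generic setup (shapes, noise level, and regularization are placeholders, not the study's parameters):

```python
import numpy as np

def fit_linear_reconstruction(R: np.ndarray, S: np.ndarray, lam: float = 1.0):
    """Fit spectrogram S (time x freq) from neural responses R (time x channels).

    Closed-form ridge regression: W = (R^T R + lam*I)^-1 R^T S.
    """
    n = R.shape[1]
    return np.linalg.solve(R.T @ R + lam * np.eye(n), R.T @ S)

# Toy data: 500 time bins, 30 "electrodes", 16 spectrogram channels.
rng = np.random.default_rng(2)
S = rng.normal(size=(500, 16))                    # target spectrogram
W_true = rng.normal(size=(30, 16))
R = S @ np.linalg.pinv(W_true) + 0.1 * rng.normal(size=(500, 30))
W = fit_linear_reconstruction(R, S)
S_hat = R @ W
r = np.corrcoef(S.ravel(), S_hat.ravel())[0, 1]   # reconstruction accuracy
print(f"reconstruction r = {r:.2f}")
```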

12.
The representation of sound information in the central nervous system relies on the analysis of time-varying features in communication and other environmental sounds. How are auditory physiologists and theoreticians to choose an appropriate method for characterizing spectral and temporal acoustic feature representations in single neurons and neural populations? A brief survey of currently available scientific methods and their potential usefulness is given, with a focus on the strengths and weaknesses of using noise analysis techniques for approximating spectrotemporal response fields (STRFs). Noise analysis has been used to foster several conceptual advances in describing neural acoustic feature representation in a variety of species and auditory nuclei. STRFs have been used to quantitatively assess spectral and temporal transformations across mutually connected auditory nuclei, to identify neuronal interactions between spectral and temporal sound dimensions, and to compare linear vs. nonlinear response properties through state-dependent comparisons. We propose that noise analysis techniques used in combination with novel stimulus paradigms and parametric experiment designs will provide powerful means of exploring acoustic feature representations in the central nervous system.
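The simplest noise-analysis technique in this family is reverse correlation, in which the STRF is approximated by the spike-triggered average (STA) of the stimulus spectrogram. A minimal sketch with synthetic data (a real analysis would further correct for stimulus correlations):

```python
import numpy as np

def strf_spike_triggered_average(spec: np.ndarray, spikes: np.ndarray,
                                 n_lags: int = 20) -> np.ndarray:
    """Estimate an STRF by averaging the spectrogram history preceding spikes.

    spec:   (time, freq) stimulus spectrogram (noise stimulus).
    spikes: (time,) spike counts per bin.
    Returns an (n_lags, freq) array; for white noise the STA is
    proportional to the linear STRF.
    """
    sta = np.zeros((n_lags, spec.shape[1]))
    for t in np.nonzero(spikes)[0]:
        if t >= n_lags:
            sta += spikes[t] * spec[t - n_lags:t]
    return sta / max(spikes.sum(), 1)

# Toy simulation: a neuron driven by one frequency channel at a 5-bin lag.
rng = np.random.default_rng(3)
spec = rng.normal(size=(5000, 32))
drive = np.roll(spec[:, 12], 5)                  # channel 12, 5 bins earlier
spikes = (drive > 1.5).astype(int)               # threshold "neuron"
sta = strf_spike_triggered_average(spec, spikes)
print(np.unravel_index(np.abs(sta).argmax(), sta.shape))  # peak near (15, 12)
```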

13.
Previous research has shown that postnatal exposure to simple, synthetic sounds can affect the sound representation in the auditory cortex as reflected by changes in the tonotopic map or other relatively simple tuning properties, such as AM tuning. However, their functional implications for neural processing in the generation of ethologically-based perception remain unexplored. Here we examined the effects of noise-rearing and social isolation on the neural processing of communication sounds such as species-specific song, in the primary auditory cortex analog of adult zebra finches. Our electrophysiological recordings reveal that neural tuning to simple frequency-based synthetic sounds is initially established in all the laminae independent of patterned acoustic experience; however, we provide the first evidence that early exposure to patterned sound statistics, such as those found in native sounds, is required for the subsequent emergence of neural selectivity for complex vocalizations and for shaping neural spiking precision in superficial and deep cortical laminae, and for creating efficient neural representations of song and a less redundant ensemble code in all the laminae. Our study also provides the first causal evidence for 'sparse coding', such that when the statistics of the stimuli were changed during rearing, as in noise-rearing, the sparse or optimal representation for species-specific vocalizations disappeared. Taken together, these results imply that a layer-specific differential development of the auditory cortex requires patterned acoustic input, and a specialized and robust sensory representation of complex communication sounds in the auditory cortex requires a rich acoustic and social environment.
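Claims of "sparse coding" like the one above are often quantified with a sparseness index such as the Treves-Rolls measure; the sketch below illustrates that metric on toy data (my illustration, not the paper's exact analysis):

```python
import numpy as np

def treves_rolls_sparseness(rates: np.ndarray) -> float:
    """Treves-Rolls sparseness: 0 = dense (uniform rates), 1 = maximally sparse.

    S = (1 - (mean r)^2 / mean(r^2)) / (1 - 1/n) for nonnegative rates r.
    """
    r = np.asarray(rates, dtype=float)
    n = r.size
    a = (r.mean() ** 2) / (r ** 2).mean()
    return (1 - a) / (1 - 1 / n)

dense = np.ones(100)                       # every stimulus drives the neuron
sparse = np.zeros(100); sparse[3] = 50.0   # responds to one stimulus only
print(treves_rolls_sparseness(dense), treves_rolls_sparseness(sparse))
# -> 0.0 and 1.0
```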

14.
Early in auditory processing, neural responses faithfully reflect acoustic input. At higher stages of auditory processing, however, neurons become selective for particular call types, eventually leading to specialized regions of cortex that preferentially process calls at the highest auditory processing stages. We previously proposed that an intermediate step in how nonselective responses are transformed into call-selective responses is the detection of informative call features. But how neural selectivity for informative call features emerges from nonselective inputs, whether feature selectivity gradually emerges over the processing hierarchy, and how stimulus information is represented in nonselective and feature-selective populations remain open questions. In this study, using unanesthetized guinea pigs (GPs), a highly vocal and social rodent species, as an animal model, we characterized the neural representation of calls in 3 auditory processing stages—the thalamus (ventral medial geniculate body (vMGB)), and thalamorecipient (L4) and superficial layers (L2/3) of primary auditory cortex (A1). We found that neurons in vMGB and A1 L4 did not exhibit call-selective responses and responded throughout the call durations. However, A1 L2/3 neurons showed high call selectivity with about a third of neurons responding to only 1 or 2 call types. These A1 L2/3 neurons only responded to restricted portions of calls suggesting that they were highly selective for call features. Receptive fields of these A1 L2/3 neurons showed complex spectrotemporal structures that could underlie their high call feature selectivity. Information theoretic analysis revealed that in A1 L4, stimulus information was distributed over the population and was spread out over the call durations. In contrast, in A1 L2/3, individual neurons showed brief bursts of high stimulus-specific information and conveyed high levels of information per spike. These data demonstrate that a transformation in the neural representation of calls occurs between A1 L4 and A1 L2/3, leading to the emergence of a feature-based representation of calls in A1 L2/3. Our data thus suggest that observed cortical specializations for call processing emerge in A1 and set the stage for further mechanistic studies.

A study of the neuronal representations elicited in guinea pigs by conspecific calls at different auditory processing stages reveals insights into where call-selective neuronal responses emerge; the transformation from nonselective to call-selective responses occurs in the superficial layers of the primary auditory cortex.
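One common estimator for the "information per spike" quantity mentioned above is the single-spike information (e.g., Brenner et al., 2000), computed from a trial-averaged rate (PSTH). A minimal sketch on synthetic rate profiles (not the study's data):

```python
import numpy as np

def info_per_spike(psth: np.ndarray) -> float:
    """Single-spike information (bits/spike) from a trial-averaged rate r(t):

    I = mean over time of (r/rbar) * log2(r/rbar); zero-rate bins add 0.
    """
    r = np.asarray(psth, dtype=float)
    rbar = r.mean()
    x = r[r > 0] / rbar
    return float(np.sum(x * np.log2(x)) / r.size)

flat = np.full(100, 10.0)                      # uniform firing: uninformative
burst = np.zeros(100)
burst[40:42] = 500.0                           # brief stimulus-locked burst
print(info_per_spike(flat))                    # 0.0 bits/spike
print(info_per_spike(burst))                   # log2(50) ~ 5.64 bits/spike
```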

15.
Functional neuroimaging research provides detailed observations of the response patterns that natural sounds (e.g. human voices and speech, animal cries, environmental sounds) evoke in the human brain. The computational and representational mechanisms underlying these observations, however, remain largely unknown. Here we combine high spatial resolution (3 and 7 Tesla) functional magnetic resonance imaging (fMRI) with computational modeling to reveal how natural sounds are represented in the human brain. We compare competing models of sound representations and select the model that most accurately predicts fMRI response patterns to natural sounds. Our results show that the cortical encoding of natural sounds entails the formation of multiple representations of sound spectrograms with different degrees of spectral and temporal resolution. The cortex derives these multi-resolution representations through frequency-specific neural processing channels and through the combined analysis of the spectral and temporal modulations in the spectrogram. Furthermore, our findings suggest that a spectral-temporal resolution trade-off may govern the modulation tuning of neuronal populations throughout the auditory cortex. Specifically, our fMRI results suggest that neuronal populations in posterior/dorsal auditory regions preferably encode coarse spectral information with high temporal precision. Vice versa, neuronal populations in anterior/ventral auditory regions preferably encode fine-grained spectral information with low temporal precision. We propose that such a multi-resolution analysis may be crucially relevant for flexible and behaviorally-relevant sound processing and may constitute one of the computational underpinnings of functional specialization in auditory cortex.
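The spectral and temporal modulations such encoding models analyze are often summarized by the modulation power spectrum, i.e. the 2-D Fourier transform of the (mean-removed) spectrogram. A minimal sketch of that computation (illustrative only, not the authors' model):

```python
import numpy as np

def modulation_power_spectrum(spec: np.ndarray, dt: float, df: float):
    """2-D FFT of a spectrogram -> power over (temporal, spectral) modulations.

    spec: (time, freq) spectrogram; dt: seconds per time bin; df: frequency
    units per frequency bin. Returns power plus the modulation-axis
    coordinates (temporal in Hz, spectral in cycles per frequency unit).
    """
    m = np.fft.fftshift(np.fft.fft2(spec - spec.mean()))
    power = np.abs(m) ** 2
    temp_mod = np.fft.fftshift(np.fft.fftfreq(spec.shape[0], d=dt))
    spec_mod = np.fft.fftshift(np.fft.fftfreq(spec.shape[1], d=df))
    return power, temp_mod, spec_mod

# Toy spectrogram: a 4 Hz amplitude modulation over all frequency channels.
dt, df = 0.01, 0.1
t = np.arange(200) * dt
spec = np.outer(1 + np.sin(2 * np.pi * 4 * t), np.ones(32))
power, temp_mod, spec_mod = modulation_power_spectrum(spec, dt, df)
i, j = np.unravel_index(power.argmax(), power.shape)
print(temp_mod[i], spec_mod[j])   # peak near +/-4 Hz temporal, 0 spectral
```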

16.
In this article, we study the neural encoding of acoustic information for FM-bats (such as Eptesicus fuscus) in simulation. In echolocation research, the frequency-time sound representation as expressed by the spectrogram is often considered as input. The rationale behind this is that a similar representation is present in the cochlea, i.e. the receptor potential of the inner hair cells (IHC) along the length of the cochlea, and hence similar acoustic information is relayed to the brain. In this article, we study to what extent the latter assumption is true. The receptor potential is converted into neural activity of the synapsing auditory nerve cells (ANC), and information might be lost in this conversion process. Especially for FM-bats, this information transmission is not trivial: in contrast to other mammals, they detect short transient signals, and consequently neural activity can only be integrated over very limited time intervals. To quantify the amount of information transmitted we design a neural network-based algorithm to reconstruct the IHC receptor potentials from the spiking activity of the synapsing auditory neurons. Both the receptor potential and the resulting neural activity are simulated using Meddis' peripheral model. Comparing the reconstruction to the IHC receptor potential, we quantify the information transmission of the bat hearing system and investigate how this depends on the intensity of the incoming signal, the distribution of auditory neurons, and previous masking stimulation (adaptation). In addition, we show how this approach allows to inspect which spectral features survive neural encoding and hence can be relevant for echolocation.

17.
Sparse representation of sounds in the unanesthetized auditory cortex
How do neuronal populations in the auditory cortex represent acoustic stimuli? Although sound-evoked neural responses in the anesthetized auditory cortex are mainly transient, recent experiments in the unanesthetized preparation have emphasized subpopulations with other response properties. To quantify the relative contributions of these different subpopulations in the awake preparation, we have estimated the representation of sounds across the neuronal population using a representative ensemble of stimuli. We used cell-attached recording with a glass electrode, a method for which single-unit isolation does not depend on neuronal activity, to quantify the fraction of neurons engaged by acoustic stimuli (tones, frequency modulated sweeps, white-noise bursts, and natural stimuli) in the primary auditory cortex of awake head-fixed rats. We find that the population response is sparse, with stimuli typically eliciting high firing rates (>20 spikes/second) in less than 5% of neurons at any instant. Some neurons had very low spontaneous firing rates (<0.01 spikes/second). At the other extreme, some neurons had driven rates in excess of 50 spikes/second. Interestingly, the overall population response was well described by a lognormal distribution, rather than the exponential distribution that is often reported. Our results represent, to our knowledge, the first quantitative evidence for sparse representations of sounds in the unanesthetized auditory cortex. Our results are compatible with a model in which most neurons are silent much of the time, and in which representations are composed of small dynamic subsets of highly active neurons.
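The lognormal description of the population response reported above is straightforward to check on rate data: fit a Gaussian to the log-rates and compare the empirical fraction of high-rate neurons with the fitted prediction. A minimal sketch on synthetic rates (all parameters hypothetical):

```python
import math
import numpy as np

rng = np.random.default_rng(4)
# Synthetic population of firing rates drawn from a lognormal distribution.
rates = rng.lognormal(mean=0.0, sigma=1.5, size=10_000)   # spikes/second

# Fit a lognormal by taking the mean and std of the log-rates.
mu, sigma = np.log(rates).mean(), np.log(rates).std()

# Fraction of neurons firing above 20 spikes/s, from the data and from the
# fit (lognormal survival function via the complementary error function).
empirical = (rates > 20).mean()
z = (math.log(20) - mu) / (sigma * math.sqrt(2))
predicted = 0.5 * math.erfc(z)
print(f"empirical {empirical:.3f}, lognormal fit {predicted:.3f}")
```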

18.
Miura K, Mainen ZF, Uchida N. Neuron, 2012, 74(6): 1087-1098
How information encoded in neuronal spike trains is used to guide sensory decisions is a fundamental question. In olfaction, a single sniff is sufficient for fine odor discrimination but the neural representations on which olfactory decisions are based are unclear. Here, we recorded neural ensemble activity in the anterior piriform cortex (aPC) of rats performing an odor mixture categorization task. We show that odors evoke transient bursts locked to sniff onset and that odor identity can be better decoded using burst spike counts than by spike latencies or temporal patterns. Surprisingly, aPC ensembles also exhibited near-zero noise correlations during odor stimulation. Consequently, fewer than 100 aPC neurons provided sufficient information to account for behavioral speed and accuracy, suggesting that behavioral performance limits arise downstream of aPC. These findings demonstrate profound transformations in the dynamics of odor representations from the olfactory bulb to cortex and reveal likely substrates for odor-guided decisions.
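Decoding stimulus identity from burst spike counts, as described above, can be illustrated with a simple nearest-class-mean readout of population count vectors. A sketch on synthetic Poisson data (not the study's recordings; using the true class means as templates for brevity):

```python
import numpy as np

rng = np.random.default_rng(5)
n_neurons, n_trials = 50, 40
# Synthetic mean response profiles for two odor categories.
mu_a = rng.gamma(2.0, 2.0, n_neurons)
mu_b = rng.gamma(2.0, 2.0, n_neurons)
# Poisson spike counts per trial -> near-zero noise correlations by design.
trials_a = rng.poisson(mu_a, (n_trials, n_neurons))
trials_b = rng.poisson(mu_b, (n_trials, n_neurons))

def decode(counts, mean_a, mean_b):
    """Nearest-class-mean decoder on population spike-count vectors."""
    da = np.linalg.norm(counts - mean_a, axis=1)
    db = np.linalg.norm(counts - mean_b, axis=1)
    return np.where(da < db, "A", "B")

acc = np.concatenate([decode(trials_a, mu_a, mu_b) == "A",
                      decode(trials_b, mu_a, mu_b) == "B"]).mean()
print(f"decoding accuracy: {acc:.2f}")
```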

19.
Auditory information is processed in a fine-to-crude hierarchical scheme, from low-level acoustic information to high-level abstract representations, such as phonological labels. We now ask whether fine acoustic information, which is not retained at high levels, can still be used to extract speech from noise. Previous theories suggested either full availability of low-level information or availability that is limited by task difficulty. We propose a third alternative, based on the Reverse Hierarchy Theory (RHT), originally derived to describe the relations between the processing hierarchy and visual perception. RHT asserts that only the higher levels of the hierarchy are immediately available for perception. Direct access to low-level information requires specific conditions, and can be achieved only at the cost of concurrent comprehension. We tested the predictions of these three views in a series of experiments in which we measured the benefits from utilizing low-level binaural information for speech perception, and compared it to that predicted from a model of the early auditory system. Only auditory RHT could account for the full pattern of the results, suggesting that similar defaults and tradeoffs underlie the relations between hierarchical processing and perception in the visual and auditory modalities.
