Similar articles
1.
Previous studies have found that auditory perception comprises multiple cognitive processes, including the detection of, sensation of, attention to, and perception of acoustic signals. However, it remains unclear how the brain decodes and processes different types of complex acoustic signals (e.g., conspecific calls versus other sounds), and what the dynamic characteristics of brain activity are during the perception of different types of sound. In this study, we recorded, while white noise and calls produced inside burrows were played back in random order, activity from the left and right telencephalon, diencephalon, and mesencephalon of the music frog Nidirana daunchina...

2.
The basic characteristics and mechanisms of human hearing are similar to those of other mammals; therefore, auditory studies carried out in animals, and the results obtained from them, help us understand human hearing. This article briefly reviews research on how central auditory neurons recognize and process different patterns of acoustic signals. Recognition of sound signals and sound patterns is of central importance for the reception and processing of acoustic signals in the auditory centers. Auditory neurons, as the structural and functional basis of sound-pattern recognition, respond differently to different patterns of acoustic stimulation; even under the same stimulation pattern, changing a single acoustic parameter produces a corresponding change in the neuronal response, and the properties and mechanisms of these responses still require further study. In addition, acoustic signals are carriers of acoustic information, and different information resides in different acoustic parameters and features. Studies have found that central auditory neurons possess a neural substrate for discriminating and selecting such information, responding to and encoding dynamically changing sound frequency, amplitude, duration, and other features. Moreover, the results obtained in different animal species are remarkably similar, indicating that the recognition, analysis, and processing of different acoustic signals and stimulation patterns by the auditory centers are common and universal.

3.
Psychophysiological evidence suggests that music and language are intimately coupled such that experience/training in one domain can influence processing required in the other domain. While the influence of music on language processing is now well-documented, evidence of language-to-music effects has yet to be firmly established. Here, using a cross-sectional design, we compared the performance of musicians to that of tone-language (Cantonese) speakers on tasks of auditory pitch acuity, music perception, and general cognitive ability (e.g., fluid intelligence, working memory). While musicians demonstrated superior performance on all auditory measures, comparable perceptual enhancements were observed for Cantonese participants, relative to English-speaking nonmusicians. These results provide evidence that tone-language background is associated with higher auditory perceptual performance for music listening. Musicians and Cantonese speakers also showed superior working memory capacity relative to nonmusician controls, suggesting that in addition to basic perceptual enhancements, tone-language background and music training might also be associated with enhanced general cognitive abilities. Our findings support the notion that tone-language speakers and musically trained individuals outperform English-speaking listeners in the perceptual-cognitive processing necessary for basic auditory as well as complex music perception. These results illustrate bidirectional influences between the domains of music and language.

4.
Perception of sound categories is an important aspect of auditory perception. The extent to which the brain’s representation of sound categories is encoded in specialized subregions or distributed across the auditory cortex remains unclear. Recent studies using multivariate pattern analysis (MVPA) of brain activations have provided important insights into how the brain decodes perceptual information. In the large existing literature on brain decoding using MVPA methods, relatively few studies have been conducted on multi-class categorization in the auditory domain. Here, we investigated the representation and processing of auditory categories within the human temporal cortex using high-resolution fMRI and MVPA methods. More importantly, we decoded multiple sound categories simultaneously, using multi-class support vector machine-recursive feature elimination (MSVM-RFE) as our MVPA tool. Results show that for all classifications the MSVM-RFE model was able to learn the functional relation between the multiple sound categories and the corresponding evoked spatial patterns and to classify the unlabeled sound-evoked patterns significantly above chance. This indicates the feasibility of decoding multiple sound categories not only within but also across subjects. However, across-subject variation affects classification performance more than within-subject variation, as the across-subject analysis yielded significantly lower classification accuracies. Sound category-selective brain maps were identified based on multi-class classification and revealed distributed patterns of brain activity in the superior temporal gyrus and the middle temporal gyrus. This is in accordance with previous studies, indicating that information in the spatially distributed patterns may reflect a more abstract perceptual level of representation of sound categories. Further, we show that across-subject classification performance can be significantly improved by averaging the fMRI images over items, because the irrelevant variations between different items of the same sound category are reduced, and in turn the proportion of signal relevant to sound categorization increases.
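The general shape of an SVM-RFE decoding analysis like the one described in this abstract can be sketched with off-the-shelf tools. The sketch below uses scikit-learn's `RFE` wrapper around a linear SVM on synthetic stand-in data; all dimensions, parameter values, and variable names are illustrative assumptions, not details from the original study.

```python
# Minimal sketch of multi-class SVM decoding with recursive feature
# elimination (RFE), in the spirit of the MSVM-RFE analysis above.
# The "voxel patterns" are synthetic; every number here is illustrative.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels, n_categories = 120, 200, 4

# Synthetic "fMRI" patterns: each sound category shifts its own subset
# of voxels, mimicking category-selective activity.
labels = np.repeat(np.arange(n_categories), n_trials // n_categories)
X = rng.normal(size=(n_trials, n_voxels))
for c in range(n_categories):
    X[labels == c, c * 10:(c + 1) * 10] += 1.5

# Linear SVM with RFE: iteratively drop the least informative voxels,
# then evaluate with cross-validation (chance level is 1/4 here).
svm = LinearSVC(dual=False, max_iter=5000)
selector = RFE(svm, n_features_to_select=40, step=0.2)
acc = cross_val_score(selector, X, labels, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f} (chance = {1 / n_categories:.2f})")
```

With informative voxels planted this strongly, the cross-validated accuracy should land well above the 0.25 chance level; on real fMRI data the margin would of course be far smaller.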

5.
Growing evidence indicates that syntax and semantics are basic aspects of music. After the onset of a chord, initial music-syntactic processing can be observed at about 150-400 ms and processing of musical semantics at about 300-500 ms. Processing of musical syntax activates inferior frontolateral cortex, ventrolateral premotor cortex and presumably the anterior part of the superior temporal gyrus. These brain structures have been implicated in sequencing of complex auditory information, identification of structural relationships, and serial prediction. Processing of musical semantics appears to activate posterior temporal regions. The processes and brain structures involved in the perception of syntax and semantics in music have considerable overlap with those involved in language perception, underlining intimate links between music and language in the human brain.

6.
The main trends in the study of the human and animal auditory system using psychoacoustic and electrophysiological methods are reviewed. For psychoacoustic studies, basic findings are presented along with current trends in hearing physiology concerning the analysis of the intensity, frequency, and temporal characteristics of sound signals, together with data on phenomena such as masking and adaptation. Data on directional hearing are presented in detail, as the basis of auditory virtual reality. In electrophysiological studies of the auditory system, a detailed analysis has been performed of mapping in the auditory centers and of the mechanisms underlying the localization of stationary and moving auditory stimuli. Special attention is paid to how different types of auditory signals are reflected in human evoked potentials.

7.
Rhythmic grouping and discrimination is fundamental to music. When compared to the perception of pitch, rhythmic abilities in animals have received scant attention until recently. In this experiment, four pigeons were tested with three types of auditory rhythmic discriminations to investigate their processing of this aspect of sound and music. Two experiments examined a meter discrimination in which successively presented idiophonic sounds were repeated in meters of different lengths in a go/no-go discrimination task. With difficulty, the birds eventually learned to discriminate between 8/4 and 3/4 meters constructed from cymbal and tom drum sounds at 180 beats per minute. This discrimination subsequently transferred to faster tempos, but not to different drum sounds or their combination. Experiment 3 tested rhythmic and arrhythmic patterns of sounds. After 40 sessions of training, these same pigeons showed no discrimination. Experiment 4 tested repetitions of a piano sound at fast and slow tempos. This discrimination was readily learned and showed transfer to novel tempos. The pattern of results suggests that pigeons can time periodic auditory events, but their capacity to understand generalized rhythmic groupings appears limited.

8.
Perception of movement in acoustic space depends on comparison of the sound waveforms reaching the two ears (binaural cues) as well as spectrotemporal analysis of the waveform at each ear (monaural cues). The relative importance of these two cues is different for perception of vertical or horizontal motion, with spectrotemporal analysis likely to be more important for perceiving vertical shifts. In humans, functional imaging studies have shown that sound movement in the horizontal plane activates brain areas distinct from the primary auditory cortex, in parietal and frontal lobes and in the planum temporale. However, no previous work has examined activations for vertical sound movement. It is therefore difficult to generalize previous imaging studies, based on horizontal movement only, to multidimensional auditory space perception. Using externalized virtual-space sounds in a functional magnetic resonance imaging (fMRI) paradigm to investigate this, we compared vertical and horizontal shifts in sound location. A common bilateral network of brain areas was activated in response to both horizontal and vertical sound movement. This included the planum temporale, superior parietal cortex, and premotor cortex. Sounds perceived laterally in virtual space were associated with contralateral activation of the auditory cortex. These results demonstrate that sound movement in vertical and horizontal dimensions engages a common processing network in the human cerebral cortex and show that multidimensional spatial properties of sounds are processed at this level.

9.
Many sound sources can only be recognised from the pattern of sounds they emit, and not from the individual sound events that make up their emission sequences. Auditory scene analysis addresses the difficult task of interpreting the sound world in terms of an unknown number of discrete sound sources (causes) with possibly overlapping signals, and therefore of associating each event with the appropriate source. There are potentially many different ways in which incoming events can be assigned to different causes, which means that the auditory system has to choose between them. This problem has been studied for many years using the auditory streaming paradigm, and recently it has become apparent that instead of making one fixed perceptual decision, given sufficient time, auditory perception switches back and forth between the alternatives—a phenomenon known as perceptual bi- or multi-stability. We propose a new model of auditory scene analysis at the core of which is a process that seeks to discover predictable patterns in the ongoing sound sequence. Representations of predictable fragments are created on the fly, and are maintained, strengthened or weakened on the basis of their predictive success, and conflict with other representations. Auditory perceptual organisation emerges spontaneously from the nature of the competition between these representations. We present detailed comparisons between the model simulations and data from an auditory streaming experiment, and show that the model accounts for many important findings, including: the emergence of, and switching between, alternative organisations; the influence of stimulus parameters on perceptual dominance, switching rate and perceptual phase durations; and the build-up of auditory streaming. 
The principal contribution of the model is to show that a two-stage process of pattern discovery and competition between incompatible patterns can account for both the contents (perceptual organisations) and the dynamics of human perception in auditory streaming.

10.
Livshits MS. Biofizika, 2000, 45(5): 922-926
The study is based on a model of sound perception that involves two systems for measuring the frequency of the perceived sound. The system that analyzes the periodicity of spike sequences in the axons of neurons innervating the inner auditory hair cells excited by the traveling wave is less precise, but it provides a frequency estimate for any periodic sound. Exact measurement of the frequency of a sinusoidal sound derives from the spikes in the axons of neurons innervating the inner hair cells of the auditory reception field, which uses the entire train of waves excited by this sound in the critical layer of the waveguide of the inner-ear cochlea corresponding to the frequency of the sound. The octave effect is explained by the fact that the frequency spectrum of speech, singing, and music coincides with the region of the audibility range in which the impulses of the auditory nerve fibers are synchronized by the incoming signals. Octave similarity, i.e., the similarity in the sounding of harmonic signals whose frequencies are related by ratios of powers of two (2:1, etc.), is explained by an unambiguous match between the sound frequency and the pulse rate in auditory fibers coming from the auditory reception field. The presence, in the inferior colliculi (the posterior tubercles of the midbrain), of multipeak neurons whose peaks stand in octave ratios confirms the crucial role of the exact frequency-measurement system in the phenomenon of octave similarity. The phenomenon of diplacusis, which is particularly pronounced in persons with Ménière's disease, is caused by a shift in the position of the auditory reception field in the diseased ear relative to the healthy ear. The alternating switching of reception from one ear to the other is related to a disturbance of the unitary image of pitch.

11.
In humans, emotions from music serve important communicative roles. Despite a growing interest in the neural basis of music perception, action and emotion, the majority of previous studies in this area have focused on the auditory aspects of music performances. Here we investigate how the brain processes the emotions elicited by audiovisual music performances. We used event-related functional magnetic resonance imaging, and in Experiment 1 we defined the areas responding to audiovisual (musician's movements with music), visual (musician's movements only), and auditory emotional (music only) displays. Subsequently a region of interest analysis was performed to examine if any of the areas detected in Experiment 1 showed greater activation for emotionally mismatching performances (combining the musician's movements with mismatching emotional sound) than for emotionally matching music performances (combining the musician's movements with matching emotional sound) as presented in Experiment 2 to the same participants. The insula and the left thalamus were found to respond consistently to visual, auditory and audiovisual emotional information and to have increased activation for emotionally mismatching displays in comparison with emotionally matching displays. In contrast, the right thalamus was found to respond to audiovisual emotional displays and to have similar activation for emotionally matching and mismatching displays. These results suggest that the insula and left thalamus have an active role in detecting emotional correspondence between auditory and visual information during music performances, whereas the right thalamus has a different role.

12.
In natural environments that contain multiple sound sources, acoustic energy arising from the different sources sums to produce a single complex waveform at each of the listener's ears. The auditory system must segregate this waveform into distinct streams to permit identification of the objects from which the signals emanate [1]. Although the processes involved in stream segregation are now reasonably well understood [1, 2 and 3], little is known about the nature of our perception of complex auditory scenes. Here, we examined complex scene perception by having listeners detect a discrete change to an auditory scene comprising multiple concurrent naturalistic sounds. We found that listeners were remarkably poor at detecting the disappearance of an individual auditory object when listening to scenes containing more than four objects, but they performed near perfectly when their attention was directed to the identity of a potential change. In the absence of directed attention, this "change deafness" [4] was greater for objects arising from a common location in space than for objects separated in azimuth. Change deafness was also observed for changes in object location, suggesting that it may reflect a general effect of the dependence of human auditory perception on attention.

13.
Pitch perception is important for understanding speech prosody, music perception, recognizing tones in tonal languages, and perceiving speech in noisy environments. The two principal pitch perception theories consider the place of maximum neural excitation along the auditory nerve and the temporal pattern of the auditory neurons’ action potentials (spikes) as pitch cues. This paper describes a biophysical mechanism by which fine-structure temporal information can be extracted from the spikes generated at the auditory periphery. Deriving meaningful pitch-related information from spike times requires neural structures specialized in capturing synchronous or correlated activity from amongst neural events. The emergence of such pitch-processing neural mechanisms is described through a computational model of auditory processing. Simulation results show that a correlation-based, unsupervised, spike-based form of Hebbian learning can explain the development of neural structures required for recognizing the pitch of simple and complex tones, with or without the fundamental frequency. The temporal code is robust to variations in the spectral shape of the signal and thus can explain the phenomenon of pitch constancy.
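The claim that a temporal code can recover pitch "with or without the fundamental frequency" can be illustrated with a much simpler stand-in than the paper's spike-based Hebbian network: autocorrelation of a harmonic complex. The sketch below, with illustrative numbers of my own choosing, shows that a tone containing only the 3rd-5th harmonics of 200 Hz still yields a 200 Hz period estimate (the "missing fundamental").

```python
# Toy temporal-code demonstration: autocorrelation recovers the pitch of a
# complex tone even when the fundamental itself carries no energy. This is
# a simplified illustration, not the paper's actual model.
import numpy as np

fs = 16000                      # sample rate (Hz)
t = np.arange(0, 0.1, 1 / fs)   # 100 ms of signal
f0 = 200.0                      # fundamental, deliberately omitted below

# Harmonics 3, 4 and 5 of 200 Hz; no component at 200 Hz itself.
x = sum(np.sin(2 * np.pi * k * f0 * t) for k in (3, 4, 5))

# Autocorrelation; the strongest peak past lag 0 marks the common period.
ac = np.correlate(x, x, mode="full")[len(x) - 1:]
min_lag = int(fs / 1000)        # ignore lags shorter than 1 ms
peak_lag = min_lag + np.argmax(ac[min_lag:])
pitch = fs / peak_lag
print(f"estimated pitch: {pitch:.1f} Hz")   # close to 200 Hz
```

The harmonics 600, 800 and 1000 Hz share no period shorter than 5 ms, so the autocorrelation peaks at a lag of 5 ms and the estimate lands at the absent fundamental, which is the essence of a periodicity-based temporal pitch code.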

14.
The processing of species-specific communication signals in the auditory system is an important aspect of an animal's behavior and is crucial for its social interactions, reproduction, and survival. In this article the neuronal mechanisms underlying the processing of communication signals in the higher centers of the auditory system (the inferior colliculus (IC), medial geniculate body (MGB), and auditory cortex (AC)) are reviewed, with particular attention to the guinea pig. The selectivity of neuronal responses for individual calls in these auditory centers in the guinea pig is usually low: most neurons respond to calls as well as to artificial sounds, and the coding of complex sounds in the central auditory nuclei is apparently based on the representation of temporal and spectral features of acoustical stimuli in neural networks. Neuronal response patterns in the IC reliably match the sound envelope for calls characterized by one or more short impulses, but do not exactly fit the envelope for long calls. The main spectral peaks are also represented by neuronal firing rates in the IC. In comparison with the IC, response patterns in the MGB and AC demonstrate a less precise representation of the sound envelope, especially in the case of longer calls. The spectral representation is worse for low-frequency calls, but not for broad-band calls. The emotional content of a call may influence neuronal responses in the auditory pathway, which can be demonstrated by stimulation with time-reversed calls or by measurements performed under different levels of anesthesia. The investigation of the principles of neural coding of species-specific vocalizations offers some keys to understanding the neural mechanisms underlying human speech perception.

15.
We propose in this paper a new class of model processes for the extraction of spectral information from the neural representation of acoustic signals in mammals. We are concerned particularly with mechanisms for detecting the phase-locked activity of auditory neurons in response to frequencies and intensities of sound associated with speech perception. Recent psychophysical tests on deaf human subjects implanted with intracochlear stimulating electrodes as an auditory prosthesis have produced results which are in conflict with the predictions of the classical place-pitch and periodicity-pitch theories. In our model, the detection of synchronicity between two phase-locked signals derived from sources spaced a finite distance apart on the basilar membrane can be used to extract spectral information from the spatiotemporal pattern of basilar membrane motion. Computer simulations of this process suggest an optimal spacing of about 0.3–0.4 of the wavelength of the frequency to be detected. This interval is consistent with a number of psychophysical, neurophysiological, and anatomical observations, including the results of high resolution frequency-mapping of the anteroventral cochlear nucleus which are presented here. One particular version of this model, invoking the binaurally sensitive cells of the medial superior olive as the critical detecting elements, has properties which are useful in accounting for certain complex binaural psychophysical observations.
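The core idea of this model, detecting synchrony between two phase-locked signals tapped from points a fixed distance apart on the basilar membrane, can be illustrated with a toy calculation: two copies of a sinusoid whose relative phase is set by the tap spacing expressed in wavelengths. The mean product of the two copies then varies with that spacing, so a fixed pair of taps is frequency selective. All numbers below are illustrative; this sketch makes no attempt to reproduce the paper's 0.3-0.4 wavelength optimum, which comes from a fuller simulation.

```python
# Toy coincidence detector: mean product of two phase-locked signals
# tapped at points separated by a given fraction of a wavelength.
# Analytically the mean equals 0.5 * cos(2*pi*spacing).
import numpy as np

fs = 40000                       # sample rate (Hz), illustrative
t = np.arange(0, 0.05, 1 / fs)   # 50 ms window

def synchrony(spacing_in_wavelengths):
    """Mean product of two unit sinusoids offset by the given spacing."""
    phase = 2 * np.pi * spacing_in_wavelengths
    a = np.sin(2 * np.pi * 1000 * t)           # tap 1: 1 kHz component
    b = np.sin(2 * np.pi * 1000 * t + phase)   # tap 2: same, phase-shifted
    return np.mean(a * b)

print(f"in phase (d=0):    {synchrony(0.0):+.2f}")   # ~ +0.50
print(f"quarter wave:      {synchrony(0.25):+.2f}")  # ~  0.00
print(f"half wave (d=0.5): {synchrony(0.5):+.2f}")   # ~ -0.50
```

Because the spacing in wavelengths depends on the stimulus frequency, a detector wired to a fixed pair of basilar-membrane taps responds maximally only to frequencies whose wavelength matches that spacing, which is how the model extracts spectral information from timing alone.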

16.
Pitch perception is crucial for vocal communication, music perception, and auditory object processing in a complex acoustic environment. How pitch is represented in the cerebral cortex has for a long time remained an unanswered question in auditory neuroscience. Several lines of evidence now point to a distinct non-primary region of auditory cortex in primates that contains a cortical representation of pitch.

17.
Recognizing that two elements within a sequence of variable length depend on each other is a key ability in understanding the structure of language and music. Perception of such interdependencies has previously been documented in chimpanzees in the visual domain and in human infants and common squirrel monkeys with auditory playback experiments, but it remains unclear whether it typifies primates in general. Here, we investigated the ability of common marmosets (Callithrix jacchus) to recognize and respond to such dependencies. We tested subjects in a familiarization-discrimination playback experiment using stimuli composed of pure tones that either conformed or did not conform to a grammatical rule. After familiarization to sequences with dependencies, marmosets spontaneously discriminated between sequences containing and lacking dependencies (‘consistent’ and ‘inconsistent’, respectively), independent of stimulus length. Marmosets looked more often to the sound source when hearing sequences consistent with the familiarization stimuli, as previously found in human infants. Crucially, looks were coded automatically by computer software, avoiding human bias. Our results support the hypothesis that the ability to perceive dependencies at variable distances was already present in the common ancestor of all anthropoid primates (Simiiformes).

18.

Background

Individuals with the rare genetic disorder Williams-Beuren syndrome (WS) are known for their characteristic auditory phenotype, including a strong affinity for music and sounds. In this work we attempted to pinpoint a neural substrate for the characteristic musicality of WS individuals by studying the structure-function relationship of their auditory cortex. Since WS subjects had only minor musical training due to psychomotor constraints, we hypothesized that any changes compared to the control group would reflect the contribution of genetic factors to auditory processing and musicality.

Methodology/Principal Findings

Using psychoacoustics, magnetoencephalography and magnetic resonance imaging, we show that WS individuals exhibit extreme and almost exclusive holistic sound perception, which stands in marked contrast to the even distribution of this trait in the general population. Functionally, this was reflected by increased amplitudes of left auditory evoked fields. On the structural level, volume of the left auditory cortex was 2.2-fold increased in WS subjects as compared to control subjects. Equivalent volumes of the auditory cortex have been previously reported for professional musicians.

Conclusions/Significance

There has been an ongoing debate in the neuroscience community as to whether increased gray matter of the auditory cortex in musicians is attributable to the amount of training or innate disposition. In this study musical education of WS subjects was negligible and control subjects were carefully matched for this parameter. Therefore our results not only unravel the neural substrate for this particular auditory phenotype, but in addition propose WS as a unique genetic model for training-independent auditory system properties.

19.
It is well known that damage to the peripheral auditory system causes deficits in tone detection as well as pitch and loudness perception across a wide range of frequencies. However, the extent to which the auditory cortex plays a critical role in these basic aspects of spectral processing, especially with regard to speech, music, and environmental sound perception, remains unclear. Recent experiments indicate that primary auditory cortex is necessary for the normally high perceptual acuity exhibited by humans in pure-tone frequency discrimination. The present study assessed whether the auditory cortex plays a similar role in the intensity domain and contrasted its contribution to sensory versus discriminative aspects of intensity processing. We measured intensity thresholds for pure-tone detection and pure-tone loudness discrimination in a population of healthy adults and in a middle-aged man with complete or near-complete lesions of the auditory cortex bilaterally. Detection thresholds in his left and right ears were 16 and 7 dB HL, respectively, within clinically defined normal limits. In contrast, the intensity threshold for monaural loudness discrimination at 1 kHz was 6.5±2.1 dB in the left ear and 6.5±1.9 dB in the right ear at 40 dB sensation level, well above the means of the control population (left ear: 1.6±0.22 dB; right ear: 1.7±0.19 dB). The results indicate that auditory cortex lowers just-noticeable differences for loudness discrimination by approximately 5 dB but is not necessary for tone detection in quiet. Previous human and Old-World monkey experiments employing lesion-effect, neurophysiology, and neuroimaging methods to investigate the role of auditory cortex in intensity processing are reviewed.

20.
The perception of music depends on many culture-specific factors, but is also constrained by properties of the auditory system. This has been best characterized for those aspects of music that involve pitch. Pitch sequences are heard in terms of relative as well as absolute pitch. Pitch combinations give rise to emergent properties not present in the component notes. In this review we discuss the basic auditory mechanisms contributing to these and other perceptual effects in music.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号