20 similar documents found (search time: 15 ms)
1.
Alwina Stein Alva Engell Hidehiko Okamoto Andreas Wollbrink Pia Lau Robert Wunderlich Claudia Rudack Christo Pantev 《PloS one》2013,8(12)
We investigated the modulation of lateral inhibition in the human auditory cortex by means of magnetoencephalography (MEG). In the first experiment, five acoustic masking stimuli (MS), consisting of noise passed through a digital notch filter centered at 1 kHz, were presented. The spectral energy contrasts of four MS were modified systematically by either amplifying or attenuating the edge-frequency bands around the notch (EFB) by 30 dB. Additionally, the width of EFB amplification/attenuation was varied (3/8 or 7/8 octave on each side of the notch). N1m and auditory steady-state responses (ASSR), evoked by a test stimulus with a carrier frequency of 1 kHz, were evaluated. A consistent dependence of N1m responses upon the preceding MS was observed. The minimal N1m source strength was found in the narrowest amplified EFB condition, representing pronounced lateral inhibition of neurons with characteristic frequencies corresponding to the center frequency of the notch (NOTCH CF) in secondary auditory cortical areas. In a second experiment, we tested whether an even narrower bandwidth of EFB amplification would result in further enhanced lateral inhibition at the NOTCH CF. Here three MS were presented, two of which were modified by amplifying a 1/8 or 1/24 octave EFB width around the notch. We found that N1m responses were again significantly smaller in both amplified EFB conditions compared to the NFN condition. To our knowledge, this is the first study demonstrating that the energy and width of the EFB around the notch modulate lateral inhibition in human secondary auditory cortical areas. Because chronic tinnitus is assumed to be caused by a lack of lateral inhibition, these new insights could be used to further improve tinnitus treatments that focus on lateral inhibition of neurons corresponding to the tinnitus frequency, such as tailor-made notched music training.
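A minimal sketch of how such a masker could be synthesized, assuming frequency-domain shaping of white noise; the sample rate, duration, notch width, and the helper name `notched_noise` are illustrative assumptions, not parameters reported in the study:

```python
import numpy as np

def notched_noise(fs=44100, dur=3.0, f_center=1000.0,
                  notch_half_octaves=0.5, efb_octaves=3/8, efb_gain_db=30.0):
    """Hypothetical masker: white noise with a notch around f_center and
    edge-frequency bands (EFB) boosted/cut by efb_gain_db. Illustrative
    parameter values, not those of the original study."""
    n = int(fs * dur)
    spec = np.fft.rfft(np.random.randn(n))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)

    lo = f_center * 2.0 ** -notch_half_octaves
    hi = f_center * 2.0 ** notch_half_octaves
    spec[(freqs >= lo) & (freqs <= hi)] = 0.0            # carve the notch

    gain = 10.0 ** (efb_gain_db / 20.0)                  # +30 dB ~ x31.6
    lo_band = (freqs >= lo * 2.0 ** -efb_octaves) & (freqs < lo)
    hi_band = (freqs > hi) & (freqs <= hi * 2.0 ** efb_octaves)
    spec[lo_band | hi_band] *= gain                      # shape the edge bands

    out = np.fft.irfft(spec, n)
    return out / np.max(np.abs(out))                     # peak-normalize
```

Passing a negative `efb_gain_db` attenuates rather than amplifies the edge bands, covering both manipulations described above.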
2.
3.
Given the extraordinary ability of humans and animals to recognize communication signals over a background of noise, describing noise-invariant neural responses is critical not only to pinpoint the brain regions that are mediating our robust perceptions but also to understand the neural computations that are performing these tasks and the underlying circuitry. Although invariant neural responses, such as rotation-invariant face cells, are well described in the visual system, high-level auditory neurons that can represent the same behaviorally relevant signal in a range of listening conditions have yet to be discovered. Here we found neurons in a secondary area of the avian auditory cortex that exhibit noise-invariant responses in the sense that they responded with similar spike patterns to song stimuli presented in silence and over a background of naturalistic noise. By characterizing the neurons' tuning in terms of their responses to modulations in the temporal and spectral envelope of the sound, we then show that noise invariance is partly achieved by selectively responding to long sounds with sharp spectral structure. Finally, to demonstrate that such computations could explain noise invariance, we designed a biologically inspired noise-filtering algorithm that can be used to separate song or speech from noise. This novel noise-filtering method performs as well as other state-of-the-art de-noising algorithms and could be used in clinical or consumer-oriented applications. Our biologically inspired model also shows how high-level noise-invariant responses could be created from neural responses typically found in primary auditory cortex.
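One way to make the described computation concrete (favoring long sounds with sharp spectral structure) is to filter the magnitude spectrogram in the modulation domain, keeping only slow temporal modulations and resynthesizing with the noisy phase. The sketch below illustrates that general idea only; it is not the authors' published algorithm, and the STFT settings and rate cutoff are assumptions:

```python
import numpy as np
from scipy.signal import stft, istft

def modulation_denoise(x, fs, max_rate_hz=16.0, nperseg=512):
    """Crude modulation-domain filter: keep temporal modulations of the
    magnitude spectrogram below max_rate_hz (i.e., favor long sounds),
    then resynthesize using the original (noisy) phase."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    mag, phase = np.abs(Z), np.angle(Z)

    # FFT along time gives the temporal modulation spectrum of each band
    M = np.fft.fft(mag, axis=1)
    rates = np.fft.fftfreq(mag.shape[1], d=t[1] - t[0])  # modulation rate (Hz)
    M[:, np.abs(rates) > max_rate_hz] = 0.0              # drop fast modulations
    mag_f = np.clip(np.real(np.fft.ifft(M, axis=1)), 0.0, None)

    _, y = istft(mag_f * np.exp(1j * phase), fs=fs, nperseg=nperseg)
    return y
```

A spectral-modulation constraint (keeping sharp spectral structure) could be added analogously with an FFT along the frequency axis.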
4.
Roberta Santoro Michelle Moerel Federico De Martino Rainer Goebel Kamil Ugurbil Essa Yacoub Elia Formisano 《PLoS computational biology》2014,10(1)
Functional neuroimaging research provides detailed observations of the response patterns that natural sounds (e.g. human voices and speech, animal cries, environmental sounds) evoke in the human brain. The computational and representational mechanisms underlying these observations, however, remain largely unknown. Here we combine high spatial resolution (3 and 7 Tesla) functional magnetic resonance imaging (fMRI) with computational modeling to reveal how natural sounds are represented in the human brain. We compare competing models of sound representations and select the model that most accurately predicts fMRI response patterns to natural sounds. Our results show that the cortical encoding of natural sounds entails the formation of multiple representations of sound spectrograms with different degrees of spectral and temporal resolution. The cortex derives these multi-resolution representations through frequency-specific neural processing channels and through the combined analysis of the spectral and temporal modulations in the spectrogram. Furthermore, our findings suggest that a spectral-temporal resolution trade-off may govern the modulation tuning of neuronal populations throughout the auditory cortex. Specifically, our fMRI results suggest that neuronal populations in posterior/dorsal auditory regions preferably encode coarse spectral information with high temporal precision. Vice versa, neuronal populations in anterior/ventral auditory regions preferably encode fine-grained spectral information with low temporal precision. We propose that such a multi-resolution analysis may be crucially relevant for flexible and behaviorally relevant sound processing and may constitute one of the computational underpinnings of functional specialization in auditory cortex.
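The model-comparison logic, selecting the feature space that best predicts measured responses, can be sketched as a cross-validated encoding-model loop. Everything below (input shapes, the ridge regularization grid, correlation as the score) is a generic assumed recipe, not the authors' exact pipeline:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def compare_sound_models(feature_spaces, Y, n_splits=5):
    """Score each candidate representation by how well a ridge model
    trained on it predicts voxel responses Y (n_sounds x n_voxels).
    feature_spaces: dict of name -> (n_sounds x n_features) matrix."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    scores = {}
    for name, X in feature_spaces.items():
        folds = []
        for train, test in kf.split(X):
            model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X[train], Y[train])
            pred = model.predict(X[test])
            # mean prediction-measurement correlation across voxels
            r = [np.corrcoef(pred[:, v], Y[test][:, v])[0, 1]
                 for v in range(Y.shape[1])]
            folds.append(np.nanmean(r))
        scores[name] = float(np.mean(folds))
    return scores  # the best-predicting representation wins
```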
5.
Jaakko Kauramäki Iiro P. Jääskeläinen Jarno L. Hänninen Toni Auranen Aapo Nummenmaa Jouko Lampinen Mikko Sams 《PloS one》2012,7(10)
Selectively attending to task-relevant sounds whilst ignoring background noise is one of the most amazing feats performed by the human brain. Here, we studied the underlying neural mechanisms by recording magnetoencephalographic (MEG) responses of 14 healthy human subjects while they performed a near-threshold auditory discrimination task vs. a visual control task of similar difficulty. The auditory stimuli consisted of notch-filtered continuous noise masker sounds and of 1020-Hz target tones occasionally replacing 1000-Hz standard tones of 300-ms duration that were embedded at the center of the notches, the widths of which were parametrically varied. As a control for masker effects, tone-evoked responses were additionally recorded without the masker sound. Selective attention to tones significantly increased the amplitude of the onset M100 response at 100 ms to the standard tones in the presence of the masker sounds, especially with notches narrower than the critical band. Further, attention modulated the sustained response most clearly in the 300–400 ms range from sound onset, at notch widths narrower than those affecting the M100, thus selectively reducing the masker-induced suppression of the tone-evoked response. Our results show evidence of a multiple-stage filtering mechanism of sensory input in the human auditory cortex: 1) one at early (100 ms) latencies bilaterally in posterior parts of the secondary auditory areas, and 2) adaptive filtering of attended sounds from the task-irrelevant background masker at longer latency (300 ms) in more medial auditory cortical regions, predominantly in the left hemisphere, enhancing processing of near-threshold sounds.
6.
Research on the neural basis of speech-reading implicates a network of auditory language regions involving inferior frontal cortex, premotor cortex and sites along superior temporal cortex. In audiovisual speech studies, neural activity is consistently reported in the posterior superior temporal sulcus (pSTS), and this site has been implicated in multimodal integration. Traditionally, multisensory interactions are considered high-level processing that engages heteromodal association cortices (such as STS). Recent work, however, challenges this notion and suggests that multisensory interactions may occur in low-level unimodal sensory cortices. While previous audiovisual speech studies demonstrate that high-level multisensory interactions occur in pSTS, what remains unclear is how early in the processing hierarchy these multisensory interactions may occur. The goal of the present fMRI experiment is to investigate how visual speech can influence activity in auditory cortex above and beyond its response to auditory speech. In an audiovisual speech experiment, subjects were presented with auditory speech with and without congruent visual input. Holding the auditory stimulus constant across the experiment, we investigated how the addition of visual speech influences activity in auditory cortex. We demonstrate that congruent visual speech increases activity in auditory cortex.
7.
Ken Rosslau Sibylle C. Herholz Arne Knief Magdalene Ortmann Dirk Deuster Claus-Michael Schmidt Antoinette am Zehnhoff-Dinnesen Christo Pantev Christian Dobel 《PloS one》2016,11(2)
The cortical correlates of speech and music perception are essentially overlapping, and the specific effects of different types of training on these networks remain unknown. We compared two groups of vocally trained professionals for music and speech, singers and actors, using recited and sung rhyme sequences from German art songs with semantic and/or prosodic/melodic violations (i.e. violations of pitch) of the last word, in order to measure the evoked activation in a magnetoencephalographic (MEG) experiment. MEG data confirmed the existence of intertwined networks for the sung and spoken modality in an early time window after word violation. For this early response, higher activity was measured after melodic/prosodic than after semantic violations in predominantly right temporal areas. For singers as well as for actors, modality-specific effects were evident as predominantly left-lateralized temporal activity after semantic expectancy violations in the spoken modality, and right-dominant temporal activity in response to melodic violations in the sung modality. As an indication of a specific group-dependent audiation process, singers showed higher neuronal activity in a late time window in right temporal and left parietal areas, after both the recited and the sung sequences.
8.
I. G. Andreeva V. A. Orlov V. L. Ushakov 《Journal of Evolutionary Biochemistry and Physiology》2018,54(5):363-373
Functional magnetic resonance imaging (fMRI) was used to investigate activation of multimodal areas of the cerebral cortex (the supramarginal and angular gyri, the precuneus, and the middle temporal visual area MT/V5) in response to motion of biologically significant sounds (human footsteps). The subjects listened to approaching or receding footstep sounds for 45 s; such stimulation was expected to evoke auditory adaptation to biological motion. Listening conditions alternated with stimulation-free control periods. To reveal activity in the regions of interest, the periods before and during stimulation were compared. The most stable and extensive activation was detected in the supramarginal and angular gyri, registered for all footstep sound types (approaching, receding, and stepping in place). Listening to approaching steps activated the precuneus, with the volume of activation clusters varying considerably between subjects. In the MT/V5 area, activation was revealed in 5 of 21 subjects. The involvement of the tested multimodal cortical areas in analyzing biological motion is discussed.
9.
A fundamental principle of brain organization is bilateral symmetry of structures and functions. For spatial sensory and motor information processing, this organization is plausible, subserving orientation and coordination of a bilaterally symmetric body. However, breaking of the symmetry principle is often seen for functions that depend on convergent information processing and lateralized output control, e.g. the left-hemispheric dominance of the linguistic speech system. Conversely, a subtle splitting of functions between hemispheres may occur if peripheral information from symmetric sense organs is partly redundant, e.g. in auditory pattern recognition, and therefore allows central conceptualizations of complex stimuli from different feature viewpoints, as demonstrated, for example, for hemispheric analysis of frequency modulations in the auditory cortex (AC) of mammals, including humans. Here we demonstrate that discrimination learning of rapidly, but not of slowly, amplitude-modulated tones is non-uniformly distributed across the hemispheres: while unilateral ablation of the left AC in gerbils impairs normal discrimination learning of rapid amplitude modulations, right-side ablation leads to improvement over normal learning. These results point to a rivalry interaction between the two ACs in the intact brain, in which the right side competes with and weakens the learning capability maximally attainable by the dominant left side alone.
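For concreteness, a sinusoidally amplitude-modulated tone, the stimulus class at issue, can be generated as below; the carrier frequency and the "slow" vs "rapid" modulation rates are illustrative values, not the gerbil study's parameters:

```python
import numpy as np

def am_tone(fs=44100, dur=1.0, fc=2000.0, fm=20.0, depth=1.0):
    """Sinusoidally amplitude-modulated tone; e.g. fm ~ 4 Hz for a
    'slow' and fm ~ 20-100 Hz for a 'rapid' modulation (illustrative)."""
    t = np.arange(int(fs * dur)) / fs
    envelope = 1.0 + depth * np.sin(2.0 * np.pi * fm * t)
    return envelope * np.sin(2.0 * np.pi * fc * t) / (1.0 + depth)
```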
10.
11.
12.
Activation of p53 by MEG3 non-coding RNA
Zhou Y Zhong Y Wang Y Zhang X Batista DL Gejman R Ansell PJ Zhao J Weng C Klibanski A 《The Journal of biological chemistry》2007,282(34):24731-24742
13.
Ying Wang Yigang Feng Yanbin Jia Yanping Xie Wensheng Wang Yufang Guan Shuming Zhong Dan Zhu Li Huang 《PloS one》2013,8(12)
Background
Whether schizophrenia and bipolar disorder are the clinical outcomes of discrete or shared causative processes is much debated in psychiatry. Several studies have demonstrated anomalous structural and functional superior temporal gyrus (STG) asymmetries in schizophrenia. We examined bipolar patients to determine whether they also have altered STG asymmetry.

Methods
Whole-head magnetoencephalography (MEG) recordings of auditory evoked fields were obtained for 20 subjects with schizophrenia, 20 with bipolar disorder, and 20 control subjects. Neural generators of the M100 auditory response were modeled using a single equivalent current dipole for each hemisphere. The source location of the M100 response was used as a measure of functional STG asymmetry.

Results
Control subjects showed the typical asymmetrical M100 pattern, with more anterior sources in the right STG. In contrast, both schizophrenia and bipolar disorder patients displayed a symmetrical M100 source pattern. There were no significant differences in M100 latency or strength in either hemisphere among the three groups.

Conclusions
Our results indicate that disturbed asymmetry of temporal lobe function may reflect a common deviance present in schizophrenia and bipolar disorder, suggesting that the two disorders might share etiological and pathophysiological factors.
14.
Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied human listeners' recognition of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences were observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performance, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, for studying sound recognition.
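The peak-picking step can be illustrated with a simple sketch: find local spectral maxima in an STFT magnitude and keep only the largest ones at a target density (here, the "10 features per second" condition). This is a generic reconstruction of the idea; the published sketch algorithm and its resynthesis may differ in detail:

```python
import numpy as np
from scipy.signal import stft

def sparse_sketch(x, fs, peaks_per_second=10, nperseg=1024):
    """Keep only the largest local spectral maxima of a spectrogram,
    roughly peaks_per_second kept elements overall."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    mag = np.abs(Z)
    # local maxima along the frequency axis (edge bins handled crudely)
    local_max = (mag >= np.roll(mag, 1, axis=0)) & (mag >= np.roll(mag, -1, axis=0))
    cand = np.where(local_max, mag, 0.0)
    n_keep = max(1, int(peaks_per_second * len(x) / fs))
    thresh = np.sort(cand.ravel())[-n_keep]          # n_keep-th largest peak
    return f, t, np.where(cand >= thresh, Z, 0.0)    # sparse sketch of Z
```

Resynthesizing the retained peaks (e.g., via an inverse STFT) would yield the "sketch" sound presented to listeners.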
15.
The last decades have evidenced auditory laterality in vertebrates, offering important new insights for understanding the origin of human language. Factors such as the social (e.g. specificity, familiarity) and emotional value of sounds have been shown to influence hemispheric specialization. However, little is known about the crossed effect of these two factors in animals. In addition, human-animal comparative studies using the same methodology are rare. In our study, we adapted the head-turn paradigm, a widely used non-invasive method, to 8–9-year-old schoolgirls and to adult female Campbell's monkeys, focusing on head and/or eye orientations in response to sound playbacks. We broadcast communicative signals (monkeys: calls, humans: speech) emitted by familiar individuals presenting distinct degrees of social value (female monkeys: conspecific group members vs heterospecific neighbours; human girls: from the same vs a different classroom) and emotional value (monkeys: contact vs threat calls; humans: friendly vs aggressive intonation). We found a crossed effect of social and emotional values in both species, since only "negative" voices from same-class/group members elicited significant auditory laterality (Wilcoxon tests: monkeys, T = 0, p = 0.03; girls, T = 4.5, p = 0.03). Moreover, we found differences between species: humans showed a left-hemisphere preference and monkeys a right-hemisphere preference. Furthermore, while monkeys almost exclusively responded by turning their heads, girls sometimes also just moved their eyes. This study supports theories proposing differential roles of the two hemispheres in primates' auditory laterality and indicates that more systematic species comparisons are needed before proposing evolutionary scenarios. Moreover, the choice of sound stimuli and behavioural measures in such studies deserves careful attention.
16.
GINO J. D'ANGELO ALBERT R. DE CHICCHIS DAVID A. OSBORN GEORGE R. GALLAGHER ROBERT J. WARREN KARL V. MILLER 《The Journal of wildlife management》2007,71(4):1238-1242
Abstract: Basic knowledge of white-tailed deer (Odocoileus virginianus) hearing can improve understanding of deer behavior and may assist in the development of effective deterrent strategies. Using auditory brainstem response testing, we determined that white-tailed deer hear within the range of frequencies we tested, between 0.25–30 kilohertz (kHz), with best sensitivity between 4–8 kHz. The upper limit of human hearing lies at about 20 kHz, whereas we demonstrated that white-tailed deer detected frequencies to at least 30 kHz. This difference suggests that research on the use of ultrasonic (frequencies >20 kHz) auditory deterrents is justified as a possible means of reducing deer-human conflicts.
17.
Serially presented tones are sometimes segregated into two perceptually distinct streams. An ongoing debate is whether this basic streaming phenomenon reflects automatic processes or requires attention focused on the stimuli. Here, we examined the influence of focused attention on streaming-related activity in human auditory cortex using magnetoencephalography (MEG). Listeners were presented with a dichotic paradigm in which left-ear stimuli consisted of canonical streaming stimuli (ABA_ or ABAA) and right-ear stimuli consisted of a classical oddball paradigm. In phase one, listeners were instructed to attend the right-ear oddball sequence and detect rare deviants. In phase two, they were instructed to attend the left-ear streaming stimulus and report whether they heard one or two streams. The frequency difference (ΔF) of the sequences was set such that the smallest and largest ΔF conditions generally induced one- and two-stream percepts, respectively. Two intermediate ΔF conditions were chosen to elicit bistable percepts (i.e., either one or two streams). Attention enhanced the peak-to-peak amplitude of the P1-N1 complex, but only for ambiguous ΔF conditions, consistent with the notion that automatic mechanisms for streaming tightly interact with attention and that the latter is of particular importance for ambiguous sound sequences.
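The ABA_ triplet paradigm itself is straightforward to reproduce; below is a minimal generator with illustrative tone and gap durations (the abstract does not give the study's exact timing or ΔF values):

```python
import numpy as np

def aba_sequence(fs=44100, f_a=500.0, delta_f_semitones=6.0,
                 tone_dur=0.1, gap_dur=0.1, n_triplets=20):
    """ABA_ streaming sequence: small delta-F tends to be heard as one
    stream, large delta-F as two. All parameter values are illustrative."""
    f_b = f_a * 2.0 ** (delta_f_semitones / 12.0)     # B sits delta-F above A
    t = np.arange(int(fs * tone_dur)) / fs
    ramp = np.minimum(1.0, t / 0.01)                  # 10-ms onset ramp
    env = ramp * ramp[::-1]                           # symmetric on/off ramps
    tone = lambda f: env * np.sin(2.0 * np.pi * f * t)
    gap = np.zeros(int(fs * gap_dur))
    triplet = np.concatenate([tone(f_a), tone(f_b), tone(f_a), gap])
    return np.tile(triplet, n_triplets)
```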
18.
Junko Matsuzaki Kuriko Kagitani-Shimono Hisato Sugata Masayuki Hirata Ryuzo Hanaie Fumiyo Nagatani Masaya Tachibana Koji Tominaga Ikuko Mohri Masako Taniike 《PloS one》2014,9(7)
The aim of this study was to investigate the differential time-course responses of the auditory cortex to repeated auditory stimuli in children with autism spectrum disorder (ASD) showing auditory hypersensitivity. Auditory-evoked field values were obtained from 21 boys with ASD (12 with and 9 without auditory hypersensitivity) and 15 age-matched typically developing controls. M50 dipole moments increased significantly over the time course only in the ASD group with auditory hypersensitivity, compared with the other two groups. The boys having ASD with auditory hypersensitivity also showed a more prolonged response duration than the other two groups. The response duration was significantly related to the severity of auditory hypersensitivity. We propose that auditory hypersensitivity is associated with decreased inhibitory processing, possibly resulting from an abnormal sensory gating system or dysfunction of inhibitory interneurons.
19.
20.
Nelken I Chechik G Mrsic-Flogel TD King AJ Schnupp JW 《Journal of computational neuroscience》2005,19(2):199-221
Neurons can transmit information about sensory stimuli via their firing rate, spike latency, or the occurrence of complex spike patterns. Identifying which aspects of the neural responses actually encode sensory information remains a fundamental question in neuroscience. Here we compared various approaches for estimating the information transmitted by neurons in auditory cortex in two very different experimental paradigms, one measuring spatial tuning and the other responses to complex natural stimuli. We demonstrate that, in both cases, spike counts and mean response times jointly carry essentially all the available information about the stimuli. Thus, in auditory cortex, whereas spike counts carry only partial information about stimulus identity or location, the additional availability of relatively coarse temporal information is sufficient to extract essentially all the sensory information available in the spike discharge pattern, at least for the relatively short stimuli (<~100 ms) commonly used in auditory research.
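For illustration, the information carried by a discrete response code (a spike count, or a binned mean response time) can be estimated with the plug-in histogram estimator sketched below; this is the textbook estimator, not the bias-corrected methods the paper actually compares:

```python
import numpy as np

def plugin_mi(stimuli, responses, n_bins=8):
    """Plug-in estimate of I(stimulus; response) in bits from paired
    trials; 'responses' may be spike counts or mean response times."""
    stimuli = np.asarray(stimuli)
    edges = np.histogram_bin_edges(responses, bins=n_bins)
    r = np.clip(np.digitize(responses, edges) - 1, 0, n_bins - 1)

    stim_vals = np.unique(stimuli)
    joint = np.zeros((len(stim_vals), n_bins))
    for i, s in enumerate(stim_vals):
        joint[i] = np.bincount(r[stimuli == s], minlength=n_bins)

    p = joint / joint.sum()                      # joint probability table
    ps = p.sum(axis=1, keepdims=True)            # P(stimulus)
    pr = p.sum(axis=0, keepdims=True)            # P(response bin)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (ps @ pr)[nz])))
```

Counts and times can be combined by binning them jointly; with the small trial counts typical of physiology, such plug-in estimates are biased upward, which is one reason more careful estimators are compared in studies like this one.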