Similar Articles
20 similar articles were retrieved (search time: 31 ms).
1.

Background

Understanding the time course of how listeners reconstruct a missing fundamental component in an auditory stimulus remains elusive. We report MEG evidence that the missing fundamental component of a complex auditory stimulus is recovered in auditory cortex within 100 ms post stimulus onset.

Methodology

Two outside tones of four-tone complex stimuli were held constant (1200 Hz and 2400 Hz), while two inside tones were systematically modulated (between 1300 Hz and 2300 Hz), such that the restored fundamental (also known as “virtual pitch”) changed from 100 Hz to 600 Hz. Constructing the auditory stimuli in this manner controls for a number of spectral properties known to modulate the neuromagnetic signal. The tone-complex stimuli diverged only in the value of the missing fundamental component.
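As a rough illustration of the stimulus logic (not code from the study), the sketch below synthesizes a four-tone complex and recovers its implied fundamental as the greatest common divisor of the partial frequencies, a deliberate simplification of virtual-pitch models; the inside-tone values shown are one plausible combination within the stated 1300–2300 Hz range, and the duration and sampling rate are assumptions.

```python
import numpy as np
from math import gcd
from functools import reduce

def missing_fundamental(freqs_hz):
    """Largest f0 of which every partial is an integer harmonic
    (the GCD of the partials) -- a simplification of virtual-pitch models."""
    return reduce(gcd, [int(round(f)) for f in freqs_hz])

def tone_complex(freqs_hz, dur=0.4, fs=44100):
    """Synthesize an equal-amplitude tone complex (duration and sampling
    rate are illustrative, not the study's values)."""
    t = np.arange(int(dur * fs)) / fs
    return sum(np.sin(2 * np.pi * f * t) for f in freqs_hz) / len(freqs_hz)

# Outside tones fixed at 1200/2400 Hz; one plausible inside-tone pair that
# implies a 100 Hz fundamental, even though no 100 Hz component is present.
complex_100 = [1200, 1300, 2300, 2400]
print(missing_fundamental(complex_100))  # -> 100
stimulus = tone_complex(complex_100)
```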

Principal Findings

We compared the M100 latencies of these tone complexes to the M100 latencies elicited by their respective pure tone (spectral pitch) counterparts. The M100 latencies for the tone complexes matched their pure sinusoid counterparts, while also replicating the M100 temporal latency response curve found in previous studies.

Conclusions

Our findings suggest that listeners are reconstructing the inferred pitch by roughly 100 ms after stimulus onset and are consistent with previous electrophysiological research suggesting that the inferred pitch is perceived in early auditory cortex.

2.
Wang XD, Gu F, He K, Chen LH, Chen L. PLoS ONE 2012, 7(1): e30027

Background

Extraction of linguistically relevant auditory features is critical for speech comprehension in complex auditory environments, in which the relationships between acoustic stimuli are often abstract and constant while the stimuli per se are varying. These relationships are referred to as the abstract auditory rule in speech and have been investigated for their underlying neural mechanisms at an attentive stage. However, the issue of whether or not there is a sensory intelligence that enables one to automatically encode abstract auditory rules in speech at a preattentive stage has not yet been thoroughly addressed.

Methodology/Principal Findings

We chose Chinese lexical tones for the current study because they help to define word meaning and hence facilitate the fabrication of an abstract auditory rule in a speech sound stream. We continuously presented native Chinese speakers with Chinese vowels differing in formant, intensity, and level of pitch to construct a complex and varying auditory stream. In this stream, most of the sounds shared flat lexical tones to form an embedded abstract auditory rule. Occasionally, the rule was randomly violated by sounds carrying a rising or falling lexical tone. The results showed that the violation of the abstract auditory rule of lexical tones evoked a robust preattentive auditory response, as revealed by whole-head electrical recordings of the mismatch negativity (MMN), even though none of the subjects acquired explicit knowledge of the rule or became aware of the violation.

Conclusions/Significance

Our results demonstrate that there is an auditory sensory intelligence in the perception of Chinese lexical tones. The existence of this intelligence suggests that humans can automatically extract abstract auditory rules in speech at a preattentive stage to ensure speech communication in complex and noisy auditory environments without drawing on conscious resources.

3.

Background

Whether schizophrenia and bipolar disorder are the clinical outcomes of discrete or shared causative processes is much debated in psychiatry. Several studies have demonstrated anomalous structural and functional asymmetries of the superior temporal gyrus (STG) in schizophrenia. We examined bipolar patients to determine whether they also have altered STG asymmetry.

Methods

Whole-head magnetoencephalography (MEG) recordings of auditory evoked fields were obtained for 20 subjects with schizophrenia, 20 with bipolar disorder, and 20 control subjects. Neural generators of the M100 auditory response were modeled using a single equivalent current dipole for each hemisphere. The source location of the M100 response was used as a measure of functional STG asymmetry.

Results

Control subjects showed the typical asymmetrical M100 pattern, with more anterior sources in the right STG. In contrast, both schizophrenia and bipolar disorder patients displayed a symmetrical M100 source pattern. M100 latency and strength did not differ significantly between hemispheres in any of the three groups.

Conclusions

Our results indicate that disturbed asymmetry of temporal lobe function may reflect a common deviance present in schizophrenia and bipolar disorder, suggesting the two disorders might share etiological and pathophysiological factors.

4.
Liu H, Wang EQ, Metman LV, Larson CR. PLoS ONE 2012, 7(3): e33629

Background

One of the most common speech deficits in individuals with Parkinson's disease (PD) is significantly reduced vocal loudness and pitch range. The present study investigated whether abnormal vocalizations in individuals with PD are related to sensory processing of voice auditory feedback. Perturbations in the loudness or pitch of voice auditory feedback are known to elicit short-latency, compensatory responses in voice amplitude or fundamental frequency.

Methodology/Principal Findings

Twelve individuals with Parkinson's disease and 13 age- and sex-matched healthy control subjects sustained a vowel sound (/α/) and received unexpected, brief (200 ms) perturbations in voice loudness (±3 or 6 dB) or pitch (±100 cents) auditory feedback. Results showed that, while all subjects produced compensatory responses in their voice amplitude or fundamental frequency, individuals with PD exhibited larger response magnitudes than the control subjects. Furthermore, for loudness-shifted feedback, upward stimuli resulted in shorter response latencies than downward stimuli in the control subjects but not in individuals with PD.
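For reference, the perturbation sizes map onto simple ratios; the following minimal sketch (illustrative only, with a hypothetical 200 Hz voice F0) converts a cent-based pitch shift and a dB-based loudness shift into frequency and amplitude scaling factors.

```python
def pitch_shift_hz(f0_hz, cents):
    """Apply a pitch shift given in cents (100 cents = one semitone)."""
    return f0_hz * 2.0 ** (cents / 1200.0)

def level_shift(amplitude, db):
    """Scale a linear amplitude by a level change given in dB."""
    return amplitude * 10.0 ** (db / 20.0)

# Hypothetical 200 Hz voice F0 with a +100-cent shift, and a -3 dB loudness step.
print(pitch_shift_hz(200.0, +100))  # ~211.9 Hz
print(level_shift(1.0, -3))         # ~0.708 (about 71% of the original amplitude)
```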

Conclusions/Significance

The larger response magnitudes in individuals with PD compared with the control subjects suggest that processing of voice auditory feedback is abnormal in PD. Although the precise mechanisms of the voice feedback processing are unknown, results of this study suggest that abnormal voice control in individuals with PD may be related to dysfunctional mechanisms of error detection or correction in sensory feedback processing.

5.

Background

Synesthesia is a condition in which the stimulation of one sense elicits an additional experience, often in a different (i.e., unstimulated) sense. Although only a small proportion of the population is synesthetic, there is growing evidence to suggest that neurocognitively-normal individuals also experience some form of synesthetic association between stimuli presented to different sensory modalities (e.g., between auditory pitch and visual size, where lower-frequency tones are associated with large objects and higher-frequency tones with small objects). While previous research has highlighted crossmodal interactions between synesthetically corresponding dimensions, the possible role of synesthetic associations in multisensory integration has not been considered previously.

Methodology

Here we investigate the effects of synesthetic associations by presenting pairs of asynchronous or spatially discrepant visual and auditory stimuli that were either synesthetically matched or mismatched. In a series of three psychophysical experiments, participants reported the relative temporal order of presentation or the relative spatial locations of the two stimuli.

Principal Findings

The reliability of non-synesthetic participants' estimates of both audiovisual temporal asynchrony and spatial discrepancy was lower for pairs of synesthetically matched as compared to synesthetically mismatched audiovisual stimuli.

Conclusions

Recent studies of multisensory integration have shown that reduced reliability of perceptual estimates regarding intersensory conflicts constitutes a marker of stronger coupling between the unisensory signals. Our results therefore indicate a stronger coupling of synesthetically matched vs. mismatched stimuli and provide the first psychophysical evidence that synesthetic congruency can promote multisensory integration. Synesthetic crossmodal correspondences therefore appear to play a crucial (if unacknowledged) role in the multisensory integration of auditory and visual information.

6.

Background

Decoding of frequency-modulated (FM) sounds is essential for phoneme identification. This study investigates selectivity to FM direction in the human auditory system.

Methodology/Principal Findings

Magnetoencephalography was recorded in 10 adults during a two-tone adaptation paradigm with a 200-ms interstimulus interval. Stimuli were pairs of FM sweeps with either the same or different modulation direction. To verify that FM repetition effects could not be accounted for by onset and offset properties, we additionally assessed responses to pairs of unmodulated tones with either the same or different frequency composition. For the FM sweeps, N1m event-related magnetic field components were found at 103 and 130 ms after onset of the first (S1) and second stimulus (S2), respectively. This was followed by a sustained component starting at about 200 ms after S2. The sustained response was significantly stronger for stimulation with the same compared to different FM direction. This effect was not observed for the non-modulated control stimuli.
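A minimal sketch of how such stimulus pairs could be generated as linear FM sweeps is given below; only the 200-ms ISI is taken from the paradigm described above, while the sweep frequencies, duration, and sampling rate are illustrative assumptions.

```python
import numpy as np

def fm_sweep(f_start, f_end, dur=0.1, fs=44100):
    """Linear FM sweep; the direction is upward when f_end > f_start."""
    t = np.arange(int(dur * fs)) / fs
    phase = 2 * np.pi * (f_start * t + (f_end - f_start) / (2 * dur) * t ** 2)
    return np.sin(phase)

fs = 44100
isi = np.zeros(int(0.2 * fs))            # 200-ms silent interstimulus interval
up = fm_sweep(800, 1600)                 # sweep range/duration are illustrative
pair_same = np.concatenate([up, isi, fm_sweep(800, 1600)])   # same direction
pair_diff = np.concatenate([up, isi, fm_sweep(1600, 800)])   # different direction
```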

Conclusions/Significance

Low-level processing of FM sounds was characterized by repetition enhancement for stimulus pairs with the same, compared to different, FM direction. This effect was FM-specific; it did not occur for unmodulated tones. The present findings may reflect specific interactions between frequency separation and temporal distance in the processing of consecutive FM sweeps.

7.

Background

Recent research has addressed the suppression of cortical sensory responses to altered auditory feedback at the onset of speech utterances. However, there is reason to assume that the mechanisms underlying sensorimotor processing at mid-utterance differ from those involved in sensorimotor control at utterance onset. The present study examined the dynamics of event-related potentials (ERPs) to different acoustic versions of auditory feedback at mid-utterance.

Methodology/Principal findings

Subjects produced a vowel sound while hearing their pitch-shifted voice (100 cents), a sum of their vocalization and pure tones, or a sum of their vocalization and white noise at mid-utterance via headphones. Subjects also passively listened to playback of what they heard during active vocalization. Cortical ERPs were recorded in response to different acoustic versions of feedback changes during both active vocalization and passive listening. The results showed that, relative to passive listening, active vocalization yielded enhanced P2 responses to the 100-cent pitch shifts, whereas suppression of P2 responses was observed when voice auditory feedback was distorted by pure tones or white noise.

Conclusion/Significance

The present findings, for the first time, demonstrate a dynamic modulation of cortical activity as a function of the quality of acoustic feedback at mid-utterance, suggesting that auditory cortical responses can be enhanced or suppressed to distinguish self-produced speech from externally-produced sounds.

8.

Background

A generally reduced responsiveness to external stimuli is vital for the consolidation and maintenance of sleep. Despite this, the sleeper maintains a level of stimulus processing that allows a response to potentially dangerous environmental signals. The mechanisms that subserve these contradictory functions are only incompletely understood.

Methodology/Principal Findings

Using combined EEG/fMRI we investigated the neural substrate of sleep protection by applying an acoustic oddball paradigm during light NREM sleep. Further, we studied the role of evoked K-complexes (KCs), an electroencephalographic hallmark of NREM sleep with a still unknown role in sleep protection. Our main results were: (1) Unlike in wakefulness, rare tones did not induce a blood oxygenation level-dependent (BOLD) signal increase in the auditory pathway but instead elicited a strong negative BOLD response in motor areas and the amygdala. (2) Stratification of rare tones by the presence of evoked KCs revealed activation of the auditory cortex, hippocampus, superior and middle frontal gyri, and posterior cingulate only for rare tones followed by a KC. (3) The typical large frontocentral EEG deflections of KCs were not paralleled by a BOLD equivalent.

Conclusions/Significance

We observed that rare tones lead to transient disengagement of motor and amygdala responses during light NREM sleep. We interpret this as a sleep-protective mechanism that limits motor responses and reduces the sensitivity of the amygdala towards further incoming stimuli. Evoked KCs are suggested to originate from a brain state with relatively increased stimulus processing, revealing an activity pattern resembling novelty processing as previously reported during wakefulness. The KC itself is not reflected by increased metabolic demand in BOLD-based imaging, arguing that evoked KCs result from increased neural synchronicity without altered metabolic demand.

9.

Background

A paradoxical enhancement of the magnitude of the N1 wave of the auditory event-related potential (ERP) has been described when auditory stimuli are presented at very short (<400 ms) inter-stimulus intervals (ISI). Here, we examined whether this enhancement is specific for the auditory system, or whether it also affects ERPs elicited by stimuli belonging to other sensory modalities.

Methodology and Principal Findings

We recorded ERPs elicited by auditory and somatosensory stimuli in 13 healthy subjects. For each sensory modality, 4800 stimuli were presented. Auditory stimuli consisted of brief tones presented binaurally, and somatosensory stimuli consisted of constant-current electrical pulses applied to the right median nerve. Stimuli were delivered continuously, and the ISI was varied randomly between 100 and 1000 ms. We found that the ISI had a similar effect on both auditory and somatosensory ERPs. In both sensory modalities, ISI had an opposite effect on the magnitude of the N1 and P2 waves: the magnitude of the auditory and the somatosensory N1 was significantly increased at ISI≤200 ms, while the magnitude of the auditory and the somatosensory P2 was significantly decreased at ISI≤200 ms.

Conclusion and Significance

The observation that both the auditory and the somatosensory N1 are enhanced at short ISIs indicates that this phenomenon reflects a physiological property that is common across sensory systems, rather than, as previously suggested, unique to the auditory system. Two of the hypotheses most frequently put forward to explain this observation, namely (i) the decreased contribution of inhibitory postsynaptic potentials to the recorded scalp ERPs and (ii) the decreased contribution of ‘latent inhibition’, are discussed. Because neither of these two hypotheses can satisfactorily account for the concomitant reduction of the auditory and the somatosensory P2, we propose a third, novel hypothesis, based on the modulation of a single neural component contributing to both the N1 and the P2 waves.

10.

Background

In a previous report we showed that cognitive training fostering auditory-verbal discrimination and working memory normalized magnetoencephalographic (MEG) M50 gating ratio in schizophrenia patients. The present analysis addressed whether training effects on M50 ratio and task performance are mediated by changes in brain oscillatory activity. Such evidence should improve understanding of the role of oscillatory activity in phenomena such as M50 ratio, the role of dysfunctional oscillatory activity in processing abnormalities in schizophrenia, and mechanisms of action of cognitive training.
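For readers unfamiliar with the measure, the M50 gating ratio in a paired-click design is conventionally taken as the test/conditioning amplitude ratio; the toy example below (with made-up amplitudes, not data from the study) illustrates the computation.

```python
def gating_ratio(amp_s1, amp_s2):
    """Paired-click gating ratio: response amplitude to the second (test) click
    divided by the response to the first (conditioning) click; smaller values
    indicate stronger gating."""
    return amp_s2 / amp_s1

# Made-up M50 amplitudes (arbitrary units), not data from the study:
print(gating_ratio(amp_s1=40.0, amp_s2=12.0))  # -> 0.3
```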

Methodology/Principal Findings

Time-locked and non-time-locked oscillatory activity was measured together with M50 ratio in a paired-click design before and after a 4-week training of 36 patients randomly assigned to specific cognitive exercises (CE) or standard (comparison) cognitive training (CP). Patient data were compared to those of 15 healthy controls who participated in two MEG measurements 4 weeks apart without training. Training led to more time-locked gamma-band response and more non-time-locked alpha-band desynchronization, more so after CE than after CP. Only after CE was increased alpha desynchronization associated with normalized M50 ratio and with improved verbal memory performance. Thus, both types of cognitive training normalized gamma activity, associated with improved stimulus encoding. More targeted training of auditory-verbal discrimination and memory additionally normalized alpha desynchronization, associated with improved elaborative processing. The latter presumably contributes to improved auditory gating and cognitive function.

Conclusions/Significance

Results suggest that the dysfunctional interplay of oscillatory activity that may contribute to auditory processing disruption in schizophrenia can be modified by targeted training.

11.

Background

The sound-induced flash illusion is an auditory-visual illusion – when a single flash is presented along with two or more beeps, observers report seeing two or more flashes. Previous research has shown that the illusion gradually disappears as the temporal delay between auditory and visual stimuli increases, suggesting that the illusion is consistent with existing temporal rules of neural activation in the superior colliculus to multisensory stimuli. However, little is known about the effect of spatial incongruence, and whether the illusion follows the corresponding spatial rule. If the illusion occurs less strongly when auditory and visual stimuli are separated, then the integrative processes supporting the illusion must be strongly dependent on spatial congruence. In this case, the illusion would be consistent with both the spatial and temporal rules describing response properties of multisensory neurons in the superior colliculus.

Methodology/Principal Findings

The main aim of this study was to investigate the importance of spatial congruence in the flash-beep illusion. Selected combinations of one to four short flashes and zero to four short 3.5 kHz tones were presented. Observers were asked to count the number of flashes they saw. After replication of the basic illusion using centrally-presented stimuli, the auditory and visual components of the illusion stimuli were presented either both 10 degrees to the left or right of fixation (spatially congruent) or on opposite (spatially incongruent) sides, for a total separation of 20 degrees.

Conclusions/Significance

The sound-induced flash fission illusion was successfully replicated. However, when the sources of the auditory and visual stimuli were spatially separated, perception of the illusion was unaffected, suggesting that the “spatial rule” does not extend to describing behavioural responses in this illusion. We also found no evidence for an associated “fusion” illusion reportedly occurring when multiple flashes are accompanied by a single beep.

12.

Objectives

To compare the characteristics of, and changes in, event-related potentials (ERPs) and brain topographic maps in normal controls and in subjective tinnitus patients before and after repetitive transcranial magnetic stimulation (rTMS) treatment.

Methods and Participants

The ERPs and brain topographic maps elicited by the target stimulus were compared before and after one week of rTMS treatment in 20 subjective tinnitus patients and 16 healthy controls.

Results

Before rTMS, the target stimulus elicited a larger N1 component than the standard stimuli (repeating sounds) in the control group but not in tinnitus patients. Instead, before treatment the tinnitus group exhibited a larger N1 amplitude in response to standard stimuli than to deviant stimuli. Furthermore, tinnitus patients had smaller mismatch negativity (MMN) and late discriminative negativity (LDN) components at Fz compared with the control group. After rTMS treatment, tinnitus patients showed an increased N1 response to deviant stimuli and larger MMN and LDN compared with pre-treatment. The topographic maps for the tinnitus group before rTMS treatment demonstrated global asymmetry between the left and right cerebral hemispheres, with more negative activity on the left side and more positive activity on the right side. In contrast, the brain topographic maps for patients after rTMS treatment and for controls appeared roughly symmetrical. The ERP amplitudes and brain topographic maps in the post-treatment patient group showed no significant differences from those of the controls.

Conclusions

The characteristic changes in ERPs and brain topographic maps in tinnitus patients may be related to the electrophysiological mechanisms of tinnitus induction and development, and could serve as an objective biomarker for evaluating central auditory function in subjective tinnitus patients. These findings support the notion that rTMS treatment may exert a beneficial effect in tinnitus patients.

13.

Background

The cortical representation of the visual field is split along the vertical midline, with the left and the right hemi-fields projecting to separate hemispheres. Connections between the visual areas of the two hemispheres are abundant near the representation of the visual midline. It was suggested that they re-establish the functional continuity of the visual field by controlling the dynamics of the responses in the two hemispheres.

Methods/Principal Findings

To understand if and how interactions between the two hemispheres participate in processing visual stimuli, the synchronization of responses to identical or different moving gratings in the two hemi-fields was studied in anesthetized ferrets. The responses were recorded by multiple electrodes in the primary visual areas, and the synchronization of local field potentials across the electrodes was analyzed with a recent method derived from dynamical systems theory. Inactivating the visual areas of one hemisphere modulated the synchronization of the stimulus-driven activity in the other hemisphere. The modulation was stimulus-specific and was consistent with the fine morphology of callosal axons, in particular with the spatio-temporal pattern of activity that axonal geometry can generate.

Conclusions/Significance

These findings describe a new kind of interaction between the cerebral hemispheres and highlight the role of axonal geometry in modulating aspects of cortical dynamics responsible for stimulus detection and/or categorization.

14.

Objective

Interaural level difference (ILD) is the difference in sound pressure level (SPL) between the two ears and is one of the key physical cues used by the auditory system in sound localization. Our current understanding of ILD encoding has come primarily from invasive studies of individual structures, which have implicated subcortical structures such as the cochlear nucleus (CN), superior olivary complex (SOC), lateral lemniscus (LL), and inferior colliculus (IC). Noninvasive brain imaging enables studying ILD processing in multiple structures simultaneously.
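As a worked illustration of the cue itself (not of the study's analysis), ILD can be expressed either directly as a level difference in dB or from linear sound pressures; the left-minus-right sign convention and the numbers in the example are assumptions for illustration.

```python
import math

def ild_db(spl_left_db, spl_right_db):
    """ILD as the left-minus-right difference in sound pressure level (dB)."""
    return spl_left_db - spl_right_db

def ild_from_pressures(p_left, p_right):
    """Equivalent ILD computed from linear sound pressures."""
    return 20.0 * math.log10(p_left / p_right)

# Hypothetical source to the left: 70 dB SPL at the left ear, 60 dB at the right.
print(ild_db(70.0, 60.0))  # -> +10 dB
```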

Methods

In this study, blood oxygenation level-dependent (BOLD) functional magnetic resonance imaging (fMRI) is used for the first time to measure changes in the hemodynamic responses in the adult Sprague-Dawley rat subcortex during binaural stimulation with different ILDs.

Results and Significance

Consistent responses are observed in the CN, SOC, LL, and IC in both hemispheres. Voxel-by-voxel analysis of the change of response amplitude with ILD indicates statistically significant ILD dependence in the dorsal LL, the IC, and a region containing parts of the SOC and LL. For all three regions, the larger-amplitude response is located in the hemisphere contralateral to the side receiving the higher-SPL stimulus. These findings are supported by region-of-interest analysis. fMRI shows that ILD dependence occurs in both hemispheres and at multiple subcortical levels of the auditory system. This study is the first step towards future studies examining subcortical binaural processing and sound localization in animal models of hearing.

15.
Liu P, Chen Z, Jones JA, Huang D, Liu H. PLoS ONE 2011, 6(7): e22791

Background

Auditory feedback has been demonstrated to play an important role in the control of voice fundamental frequency (F0), but the mechanisms underlying the processing of auditory feedback remain poorly understood. It has been well documented that young adults can use auditory feedback to stabilize their voice F0 by making compensatory responses to perturbations they hear in their vocal pitch feedback. However, little is known about the effects of aging on the processing of audio-vocal feedback during vocalization.

Methodology/Principal Findings

In the present study, we recruited adults who were between 19 and 75 years of age and divided them into five age groups. Using a pitch-shift paradigm, the pitch of their vocal feedback was unexpectedly shifted ±50 or ±100 cents during sustained vocalization of the vowel sound /u/. Compensatory vocal F0 response magnitudes and latencies to pitch feedback perturbations were examined. A significant effect of age was found such that response magnitudes increased with increasing age until maximal values were reached for adults 51–60 years of age and then decreased for adults 61–75 years of age. Adults 51–60 years of age were also more sensitive to the direction and magnitude of the pitch feedback perturbations compared to younger adults.
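Response magnitudes in such pitch-shift paradigms are typically expressed in cents relative to the pre-perturbation F0; the short sketch below (with illustrative F0 values, not data from the study) shows that conversion.

```python
import math

def f0_deviation_cents(f0_hz, baseline_hz):
    """Express an F0 deviation from a baseline F0 in cents."""
    return 1200.0 * math.log2(f0_hz / baseline_hz)

# Illustrative values only: a speaker with a 220 Hz baseline raises F0 to 226 Hz
# in response to a downward (-100 cent) feedback shift.
print(f0_deviation_cents(226.0, 220.0))  # ~ +46.6 cents compensatory response
```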

Conclusion

These findings demonstrate that the pitch-shift reflex systematically changes across the adult lifespan. Understanding aging-related changes to the role of auditory feedback is critically important for our theoretical understanding of speech production and the clinical applications of that knowledge.

16.

Background

The well-established left hemisphere specialisation for language processing has long been claimed to be based on a low-level auditory specialisation for specific acoustic features in speech, particularly regarding ‘rapid temporal processing’.

Methodology

A novel analysis/synthesis technique was used to construct a variety of sounds based on simple sentences, which could be manipulated in spectro-temporal complexity and in whether or not they were intelligible. All sounds consisted of two noise-excited spectral prominences (based on the lower two formants in the original speech) which could be static or varying in frequency and/or amplitude independently. Dynamically varying both acoustic features based on the same sentence led to intelligible speech, but when either or both acoustic features were static, the stimuli were not intelligible. Using the frequency dynamics from one sentence with the amplitude dynamics of another led to unintelligible sounds of comparable spectro-temporal complexity to the intelligible ones. Positron emission tomography (PET) was used to compare which brain regions were active when participants listened to the different sounds.

Conclusions

Neural activity in response to spectral and amplitude modulations sufficient to support speech intelligibility (without actually being intelligible) was seen bilaterally, with right temporal lobe dominance. A left-dominant response was seen only to intelligible sounds. It thus appears that the left hemisphere specialisation for speech is based on the linguistic properties of utterances, not on particular acoustic features.

17.

Background

Paired associative stimulation (PAS) consisting of repeated application of transcranial magnetic stimulation (TMS) pulses and contingent exteroceptive stimuli has been shown to induce neuroplastic effects in the motor and somatosensory system. The objective was to investigate whether the auditory system can be modulated by PAS.

Methods

Acoustic stimuli (4 kHz) were paired with TMS of the auditory cortex at intervals of either 45 ms (PAS(45 ms)) or 10 ms (PAS(10 ms)). Two hundred paired stimuli were applied at 0.1 Hz and effects were compared with low-frequency repetitive TMS (rTMS) at 0.1 Hz (200 stimuli) and 1 Hz (1000 stimuli) in eleven healthy students. Auditory cortex excitability was measured before and after the interventions by long-latency auditory evoked potentials (AEPs) for the tone (4 kHz) used in the pairing and for a control tone (1 kHz), in a within-subjects design.

Results

Amplitudes of the N1-P2 complex were reduced for the 4 kHz tone after both PAS(45 ms) and PAS(10 ms), but not after the 0.1 Hz and 1 Hz rTMS protocols, with more pronounced effects for PAS(45 ms). Similar, but less pronounced, effects were observed for the 1 kHz control tone.

Conclusion

These findings indicate that paired associative stimulation may induce both tonotopically specific and tone-unspecific plasticity in the human auditory cortex.

18.

Background

Apart from findings on both functional and motor asymmetries in captive aquatic mammals, only a few studies have focused on the lateralized behaviour of these species in the wild.

Methodology/Principal Findings

In this study we focused on lateralized visual behaviour by presenting wild striped dolphins with objects of different degrees of familiarity (fish, ball, toy). Surveys were conducted in the Gulf of Taranto, the portion of the northern Ionian Sea delimited by the Italian regions of Calabria, Basilicata and Apulia. After striped dolphins were sighted from a research vessel, the different stimuli were presented in random order on a telescopic bar connected to the prow of the boat. The preferential use of right or left monocular viewing during inspection of the stimuli was analysed.

Conclusion

Results clearly showed a monocular viewing preference that depended on the type of stimulus employed. Because of the complete decussation of the optic nerves in the dolphin brain, our results reflect a differential specialization of the brain hemispheres for visual scanning, confirming that in this species different stimuli evoke different patterns of eye use. A preferential use of the right eye (left hemisphere) during visual inspection of unfamiliar targets was observed, supporting the hypothesis that, in dolphins, the organization of the functional neural structures underlying cerebral asymmetries for visual object recognition may have deviated from the evolutionary line of most terrestrial vertebrates.

19.

Background

Prepulse inhibition (PPI) describes the effect of a weak sound preceding a strong acoustic stimulus on the acoustic startle response (ASR). Previous studies suggest that PPI is influenced by physical parameters of the prepulse sound, such as its intensity and the time by which it precedes the startle stimulus. The present study characterizes the impact of prepulse tone frequency on PPI.

Methods

Seven female C57BL mice were used in the present study. ASR was induced by a 100 dB SPL white noise burst. After assessing the effect of background sounds (white noise and pure tones) on ASR, PPI was tested using prepulse pure tones against a background tone of either 10 or 18 kHz. The inhibitory effect was assessed by measuring and analyzing changes in the first peak-to-peak magnitude, root-mean-square value, duration, and latency of the ASR as a function of the frequency difference between the prepulse and background tones.
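Although the study reports several ASR measures, the conventional summary of the inhibitory effect is percent PPI, the relative reduction of startle magnitude on prepulse trials; the sketch below uses made-up magnitudes to illustrate the calculation.

```python
def percent_ppi(startle_pulse_alone, startle_with_prepulse):
    """Conventional percent prepulse inhibition: relative reduction of the
    startle magnitude when the pulse is preceded by a prepulse."""
    return 100.0 * (startle_pulse_alone - startle_with_prepulse) / startle_pulse_alone

# Made-up peak-to-peak startle magnitudes (arbitrary units), not study data:
print(percent_ppi(startle_pulse_alone=5.0, startle_with_prepulse=2.0))  # -> 60.0
```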

Results

Our data showed that ASR magnitude with a pure-tone background varied with tone frequency and was smaller than with a white-noise background. The prepulse tone systematically reduced the ASR as a function of the difference in frequency between the prepulse and background tones. A frequency difference of 0.5 kHz appeared to be a prerequisite for inducing substantial ASR inhibition. The frequency dependence of PPI was similar under either a 10 or an 18 kHz background tone.

Conclusion

PPI is sensitive to frequency information of the prepulse sound. However, the critical factor is not tone frequency itself, but the frequency difference between the prepulse and background tones.

20.
Bishop DV, Anderson M, Reid C, Fox AM. PLoS ONE 2011, 6(5): e18993

Background

There is considerable uncertainty about the time-course of central auditory maturation. On some indices, children appear to have adult-like competence by school age, whereas for other measures development follows a protracted course.

Methodology

We studied auditory development using auditory event-related potentials (ERPs) elicited by tones in 105 children on two occasions two years apart. Just over half of the children were 7 years initially and 9 years at follow-up, whereas the remainder were 9 years initially and 11 years at follow-up. We used conventional analysis of peaks in the auditory ERP, independent component analysis, and time-frequency analysis.

Principal Findings

We demonstrated maturational changes in the auditory ERP between 7 and 11 years, both using conventional peak measurements, and time-frequency analysis. The developmental trajectory was different for temporal vs. fronto-central electrode sites. Temporal electrode sites showed strong lateralisation of responses and no increase of low-frequency phase-resetting with age, whereas responses recorded from fronto-central electrode sites were not lateralised and showed progressive change with age. Fronto-central vs. temporal electrode sites also mapped onto independent components with differently oriented dipole sources in auditory cortex. A global measure of waveform shape proved to be the most effective method for distinguishing age bands.

Conclusions/Significance

The results supported the idea that different cortical regions mature at different rates. The ICC measure is proposed as the best measure of ‘auditory ERP age’.
