Similar articles
1.

Background

Recent research has addressed the suppression of cortical sensory responses to altered auditory feedback at the onset of a speech utterance. However, there is reason to assume that the mechanisms underlying sensorimotor processing at mid-utterance differ from those involved in sensorimotor control at utterance onset. The present study examined the dynamics of event-related potentials (ERPs) to different acoustic versions of auditory feedback at mid-utterance.

Methodology/Principal findings

Subjects produced a vowel sound while hearing their pitch-shifted voice (100 cents), a sum of their vocalization and pure tones, or a sum of their vocalization and white noise at mid-utterance via headphones. Subjects also passively listened to playback of what they heard during active vocalization. Cortical ERPs were recorded in response to different acoustic versions of feedback changes during both active vocalization and passive listening. The results showed that, relative to passive listening, active vocalization yielded enhanced P2 responses to the 100 cents pitch shifts, whereas suppression effects of P2 responses were observed when voice auditory feedback was distorted by pure tones or white noise.

Conclusion/Significance

The present findings, for the first time, demonstrate a dynamic modulation of cortical activity as a function of the quality of acoustic feedback at mid-utterance, suggesting that auditory cortical responses can be enhanced or suppressed to distinguish self-produced speech from externally-produced sounds.

2.

Background

The time course over which listeners reconstruct a missing fundamental component in an auditory stimulus remains elusive. We report MEG evidence that the missing fundamental component of a complex auditory stimulus is recovered in auditory cortex within 100 ms of stimulus onset.

Methodology

Two outside tones of four-tone complex stimuli were held constant (1200 Hz and 2400 Hz), while two inside tones were systematically modulated (between 1300 Hz and 2300 Hz), such that the restored fundamental (also known as “virtual pitch”) changed from 100 Hz to 600 Hz. Constructing the auditory stimuli in this manner controls for a number of spectral properties known to modulate the neuromagnetic signal. The tone complex stimuli diverged only in the value of the missing fundamental component.
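As a sketch of how such missing-fundamental stimuli can be constructed (the harmonic numbers and sampling parameters below are illustrative assumptions, not the study's exact values), a tone complex can be built by summing harmonics of f0 while omitting f0 itself, so the waveform carries a "virtual pitch" at f0 with no spectral energy there:

```python
import math

def harmonic_complex(f0, harmonics, fs=16000, dur=0.2):
    """Sum equal-amplitude sinusoids at the given harmonic numbers of f0.

    Harmonic 1 (the fundamental) is deliberately left out of `harmonics`,
    so the complex evokes a virtual pitch at f0 without energy at f0.
    """
    n = int(fs * dur)
    return [sum(math.sin(2 * math.pi * h * f0 * t / fs) for h in harmonics)
            for t in range(n)]

def power_at(signal, freq, fs=16000):
    """Normalized magnitude of the DFT coefficient at `freq` (crude spectral probe)."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq * t / fs) for t, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * t / fs) for t, x in enumerate(signal))
    return math.hypot(re, im) / n

# Illustrative example: f0 = 300 Hz with outer harmonics at 1200 Hz (4*f0)
# and 2400 Hz (8*f0) and two inner harmonics in between.
tones = harmonic_complex(300, [4, 5, 7, 8])
```

Probing the spectrum of `tones` shows strong energy at the component frequencies (e.g., 1200 Hz) and essentially none at the 300 Hz fundamental.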

Principal Findings

We compared the M100 latencies of these tone complexes to the M100 latencies elicited by their respective pure tone (spectral pitch) counterparts. The M100 latencies for the tone complexes matched their pure sinusoid counterparts, while also replicating the M100 temporal latency response curve found in previous studies.

Conclusions

Our findings suggest that listeners reconstruct the inferred pitch by roughly 100 ms after stimulus onset, consistent with previous electrophysiological research suggesting that the inferred pitch is perceived in early auditory cortex.

3.

Background

In this study we investigated the association between instrumental music training in childhood and outcomes closely related to music training as well as those more distantly related.

Methodology/Principal Findings

Children who received at least three years (M = 4.6 years) of instrumental music training outperformed their control counterparts on two outcomes closely related to music (auditory discrimination abilities and fine motor skills) and on two outcomes distantly related to music (vocabulary and nonverbal reasoning skills). Duration of training also predicted these outcomes. Contrary to previous research, instrumental music training was not associated with heightened spatial skills, phonemic awareness, or mathematical abilities.

Conclusions/Significance

While these results are correlational only, the strong predictive effect of training duration suggests that instrumental music training may enhance auditory discrimination, fine motor skills, vocabulary, and nonverbal reasoning. Alternative explanations for these results are discussed.

4.

Background

It is usually possible to identify the sex of a pre-pubertal child from their voice, despite the absence of sex differences in fundamental frequency at these ages. While it has been suggested that the overall spacing between formants (formant frequency spacing, ΔF) is a key component of the expression and perception of sex in children's voices, the effect of its continuous variation on sex and gender attribution has not yet been investigated.

Methodology/Principal findings

In the present study we manipulated the voice ΔF of eight-year-olds (two boys and two girls) along continua covering the observed variation of this parameter in pre-pubertal voices, and assessed the effect of this variation on adult ratings of speakers' sex and gender in two separate experiments. In the first experiment (sex identification), adults were asked to categorise each voice as either male or female. The resulting identification function exhibited a gradual slope from the male to the female voice category. In the second experiment (gender rating), adults rated the voices on a continuum from “masculine boy” to “feminine girl”, gradually decreasing their masculinity ratings as ΔF increased.

Conclusions/Significance

These results indicate that the role of ΔF in voice gender perception, which has been reported in adult voices, extends to pre-pubertal children's voices: variation in ΔF not only affects the perceived sex, but also the perceived masculinity or femininity of the speaker. We discuss the implications of these observations for the expression and perception of gender in children's voices given the absence of anatomical dimorphism in overall vocal tract length before puberty.

5.
Liu H  Wang EQ  Metman LV  Larson CR 《PloS one》2012,7(3):e33629

Background

One of the most common speech deficits in individuals with Parkinson's disease (PD) is significantly reduced vocal loudness and pitch range. The present study investigated whether abnormal vocalizations in individuals with PD are related to sensory processing of voice auditory feedback. Perturbations in the loudness or pitch of voice auditory feedback are known to elicit short-latency, compensatory responses in voice amplitude or fundamental frequency.

Methodology/Principal Findings

Twelve individuals with Parkinson's disease and 13 age- and sex-matched healthy control subjects sustained a vowel sound (/α/) and received unexpected, brief (200 ms) perturbations in the loudness (±3 or 6 dB) or pitch (±100 cents) of their voice auditory feedback. Results showed that, while all subjects produced compensatory responses in their voice amplitude or fundamental frequency, individuals with PD exhibited larger response magnitudes than the control subjects. Furthermore, for loudness-shifted feedback, upward stimuli resulted in shorter response latencies than downward stimuli in the control subjects but not in individuals with PD.
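Pitch perturbations in such paradigms are specified in cents (hundredths of a semitone). As a minimal sketch of the underlying arithmetic (the helper names here are ours, not from the study), the frequency ratio for a shift of c cents is 2^(c/1200):

```python
def cents_to_ratio(cents):
    """Frequency ratio corresponding to a pitch shift in cents.

    1200 cents = one octave (ratio 2); 100 cents = one semitone.
    """
    return 2.0 ** (cents / 1200.0)

def shift_pitch(f0_hz, cents):
    """Apply a pitch shift of `cents` to a frequency in Hz."""
    return f0_hz * cents_to_ratio(cents)

# A +100-cent perturbation raises a 220 Hz voice to about 233.1 Hz;
# a -100-cent perturbation lowers it to about 207.7 Hz.
```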

Conclusions/Significance

The larger response magnitudes in individuals with PD compared with the control subjects suggest that processing of voice auditory feedback is abnormal in PD. Although the precise mechanisms of the voice feedback processing are unknown, results of this study suggest that abnormal voice control in individuals with PD may be related to dysfunctional mechanisms of error detection or correction in sensory feedback processing.

6.

Background

There is growing interest in the relation between the brain and music. The appealing similarity between brainwaves and the rhythms of music has motivated many scientists to seek a connection between them. A variety of transfer rules has been used to convert brainwaves into music, most of them based on spectral features of the EEG.

Methodology/Principal Findings

In this study, audibly recognizable scale-free music was derived from individual electroencephalogram (EEG) waveforms. The translation rules include a direct mapping from the period of an EEG waveform to the duration of a note, a logarithmic mapping of the change in average EEG power to music intensity according to Fechner's law, and a scale-free mapping from EEG amplitude to music pitch according to the power law. To show the actual effect, we applied the derived sonification rules to EEG segments recorded during rapid-eye-movement (REM) sleep and slow-wave sleep (SWS). The resulting music is vivid and differs between the two mental states: the melody during REM sleep sounds fast and lively, whereas that during SWS is slow and tranquil. Sixty volunteers evaluated 25 music pieces (10 from REM, 10 from SWS, and 5 from white noise); 74.3% reported a happy emotion from the REM music and felt bored and drowsy when listening to the SWS music, and the average accuracy for identifying all the music pieces was 86.8% (κ = 0.800, P<0.001). We also applied the method to EEG data recorded with eyes closed and eyes open, and to epileptic EEG; the results showed that these mental states can likewise be identified by listeners.
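The three mapping rules can be sketched as follows. This is a hedged illustration: the scaling constants, the sign of the power-law exponent, and the function name are all invented here for clarity; the paper's actual parameters are not given in the abstract.

```python
import math

def eeg_wave_to_note(period_s, amplitude_uv, power_ratio,
                     pitch_scale=60.0, pitch_exponent=-0.5, intensity_gain=20.0):
    """Map one EEG wave to an illustrative (duration, pitch, intensity) triple.

    Assumed mappings in the spirit of the abstract:
      - duration: the waveform period, taken over directly (seconds)
      - intensity: logarithmic in the power change (Fechner's law), dB-like units
      - pitch: power-law function of amplitude (scale-free mapping), MIDI-like number
    All constants (and the negative exponent) are assumptions for illustration.
    """
    duration = period_s
    intensity = intensity_gain * math.log10(power_ratio)
    pitch = pitch_scale * (amplitude_uv ** pitch_exponent)
    return duration, pitch, intensity
```

Under these assumptions, a faster EEG rhythm (shorter period) yields shorter notes, a larger power change yields a louder note, and amplitude modulates pitch along a power law.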

Conclusions/Significance

The sonification rules allow the mental states of the brain to be identified by ear, which provides a real-time strategy for monitoring brain activity and is potentially useful for neurofeedback therapy.

7.

Background

Enjoyment of music is an important part of life that may be degraded for people with hearing impairments, especially those using cochlear implants. The ability to follow separate lines of melody is an important factor in music appreciation. This ability relies on effective auditory streaming, which is much reduced in people with hearing impairment, contributing to difficulties in music appreciation. The aim of this study was to assess whether visual cues could reduce the subjective difficulty of segregating a melody from interleaved background notes in normally hearing listeners, those using hearing aids, and those using cochlear implants.

Methodology/Principal Findings

Normally hearing listeners (N = 20), hearing aid users (N = 10), and cochlear implant users (N = 11) were asked to rate the difficulty of segregating a repeating four-note melody from random interleaved distracter notes. The pitch of the background notes was gradually increased or decreased throughout blocks, providing a range of difficulty from easy (with a large pitch separation between melody and distracter) to impossible (with the melody and distracter completely overlapping). Visual cues were provided on half the blocks, and difficulty ratings for blocks with and without visual cues were compared between groups. Visual cues reduced the subjective difficulty of extracting the melody from the distracter notes for normally hearing listeners and cochlear implant users, but not hearing aid users.

Conclusion/Significance

Simple visual cues may improve the ability of cochlear implant users to segregate lines of music, thus potentially increasing their enjoyment of music. More research is needed to determine what type of acoustic cues to encode visually in order to optimise the benefits they may provide.

8.
Liu P  Chen Z  Jones JA  Huang D  Liu H 《PloS one》2011,6(7):e22791

Background

Auditory feedback has been demonstrated to play an important role in the control of voice fundamental frequency (F0), but the mechanisms underlying the processing of auditory feedback remain poorly understood. It has been well documented that young adults can use auditory feedback to stabilize their voice F0 by making compensatory responses to perturbations they hear in their vocal pitch feedback. However, little is known about the effects of aging on the processing of audio-vocal feedback during vocalization.

Methodology/Principal Findings

In the present study, we recruited adults who were between 19 and 75 years of age and divided them into five age groups. Using a pitch-shift paradigm, the pitch of their vocal feedback was unexpectedly shifted ±50 or ±100 cents during sustained vocalization of the vowel sound /u/. Compensatory vocal F0 response magnitudes and latencies to pitch feedback perturbations were examined. A significant effect of age was found such that response magnitudes increased with increasing age until maximal values were reached for adults 51–60 years of age and then decreased for adults 61–75 years of age. Adults 51–60 years of age were also more sensitive to the direction and magnitude of the pitch feedback perturbations compared to younger adults.

Conclusion

These findings demonstrate that the pitch-shift reflex systematically changes across the adult lifespan. Understanding aging-related changes to the role of auditory feedback is critically important for our theoretical understanding of speech production and the clinical applications of that knowledge.

9.
10.

Background

There is a lack of neuroscientific studies investigating music processing with naturalistic stimuli, and brain responses to real music thus remain largely unknown.

Methodology/Principal Findings

This study investigates event-related brain potentials (ERPs), skin conductance responses (SCRs) and heart rate (HR) elicited by unexpected chords of piano sonatas as they were originally arranged by composers, and as they were played by professional pianists. From the musical excerpts played by the pianists (with emotional expression), we also created versions without variations in tempo and loudness (without musical expression) to investigate effects of musical expression on ERPs and SCRs. Compared to expected chords, unexpected chords elicited an early right anterior negativity (ERAN, reflecting music-syntactic processing) and an N5 (reflecting processing of meaning information) in the ERPs, as well as clear changes in the SCRs (reflecting that unexpected chords also elicited emotional responses). The ERAN was not influenced by emotional expression, whereas N5 potentials elicited by chords in general (regardless of their chord function) differed between the expressive and the non-expressive condition.

Conclusions/Significance

These results show that the neural mechanisms of music-syntactic processing operate independently of the emotional qualities of a stimulus, justifying the use of stimuli without emotional expression to investigate the cognitive processing of musical structure. Moreover, the data indicate that musical expression affects the neural mechanisms underlying the processing of musical meaning. Our data are the first to reveal influences of musical performance on ERPs and SCRs, and to show physiological responses to unexpected chords in naturalistic music.

11.

Background

Synesthesia is a condition in which the stimulation of one sense elicits an additional experience, often in a different (i.e., unstimulated) sense. Although only a small proportion of the population is synesthetic, there is growing evidence that neurocognitively-normal individuals also experience some form of synesthetic association between stimuli presented to different sensory modalities (e.g., between auditory pitch and visual size, where lower-frequency tones are associated with large objects and higher-frequency tones with small objects). While previous research has highlighted crossmodal interactions between synesthetically corresponding dimensions, the possible role of synesthetic associations in multisensory integration has not been considered previously.

Methodology

Here we investigate the effects of synesthetic associations by presenting pairs of asynchronous or spatially discrepant visual and auditory stimuli that were either synesthetically matched or mismatched. In a series of three psychophysical experiments, participants reported the relative temporal order of presentation or the relative spatial locations of the two stimuli.

Principal Findings

The reliability of non-synesthetic participants' estimates of both audiovisual temporal asynchrony and spatial discrepancy was lower for pairs of synesthetically matched than for synesthetically mismatched audiovisual stimuli.

Conclusions

Recent studies of multisensory integration have shown that the reduced reliability of perceptual estimates regarding intersensory conflicts constitutes the marker of a stronger coupling between the unisensory signals. Our results therefore indicate a stronger coupling of synesthetically matched vs. mismatched stimuli and provide the first psychophysical evidence that synesthetic congruency can promote multisensory integration. Synesthetic crossmodal correspondences therefore appear to play a crucial (if unacknowledged) role in the multisensory integration of auditory and visual information.

12.
Wang XD  Gu F  He K  Chen LH  Chen L 《PloS one》2012,7(1):e30027

Background

Extraction of linguistically relevant auditory features is critical for speech comprehension in complex auditory environments, in which the relationships between acoustic stimuli are often abstract and constant while the stimuli per se are varying. These relationships are referred to as the abstract auditory rule in speech and have been investigated for their underlying neural mechanisms at an attentive stage. However, the issue of whether or not there is a sensory intelligence that enables one to automatically encode abstract auditory rules in speech at a preattentive stage has not yet been thoroughly addressed.

Methodology/Principal Findings

We chose Chinese lexical tones for the current study because they help to define word meaning and hence facilitate the fabrication of an abstract auditory rule in a speech sound stream. We continuously presented native Chinese speakers with Chinese vowels differing in formant, intensity, and level of pitch to construct a complex and varying auditory stream. In this stream, most of the sounds shared flat lexical tones to form an embedded abstract auditory rule. Occasionally the rule was randomly violated by those with a rising or falling lexical tone. The results showed that the violation of the abstract auditory rule of lexical tones evoked a robust preattentive auditory response, as revealed by whole-head electrical recordings of the mismatch negativity (MMN), though none of the subjects acquired explicit knowledge of the rule or became aware of the violation.

Conclusions/Significance

Our results demonstrate that there is an auditory sensory intelligence in the perception of Chinese lexical tones. The existence of this intelligence suggests that humans can automatically extract abstract auditory rules in speech at a preattentive stage to ensure speech communication in complex and noisy auditory environments without drawing on conscious resources.

13.

Background

Accessory pathway (AP) ablation is not always easy. Our purpose was to assess the age-related prevalence of AP locations, together with electrophysiological and prognostic data according to location.

Methods

Electrophysiologic study (EPS) was performed in 994 patients for a pre-excitation syndrome. AP location was determined on a 12 lead ECG during atrial pacing at maximal preexcitation and confirmed at intracardiac EPS in 494 patients.

Results

AP location was classified as anteroseptal (AS, n = 96), right lateral (RL, n = 54), posteroseptal (PS, n = 459), left lateral (LL, n = 363), or nodoventricular (NV, n = 22). Patients with an AS or RL AP were younger than patients with other AP locations. Poorly-tolerated arrhythmias were more frequent in patients with a LL AP than in other patients (P = 0.009 vs. AS AP, P = 0.0037 vs. RL AP, P<0.0001 vs. PS AP). The maximal rate conducted over the AP was significantly slower in patients with AS or RL APs than in other patients. Malignant forms at EPS were more frequent in patients with a LL AP than in patients with an AS AP (P = 0.002) or PS AP (P = 0.001). Similar data were noted when AP location was confirmed at intracardiac EPS. Among untreated patients, poorly-tolerated arrhythmias occurred in patients with a LL AP (n = 3) or PS AP (n = 6). Failures of ablation were more frequent for AS or RL APs than for LL or PS APs.

Conclusions

AS and RL AP locations in pre-excitation syndrome were more frequent in young patients, and the maximal rate conducted over the AP was lower than at other locations. The absence of poorly-tolerated arrhythmias during follow-up and the higher risk of ablation failure should be taken into account when considering AP ablation in children with few symptoms.

14.

Introduction

Antipsychotics (AP) induce weight gain. However, reviews and meta-analyses are generally restricted to second-generation antipsychotics (SGA) and do not stratify for duration of AP use. It is hypothesised that patients gain more weight if the duration of AP use is longer.

Method

A meta-analysis was conducted of clinical trials of AP that reported weight change. Outcome measures were body weight change, change in BMI and clinically relevant weight change (7% weight gain or loss). Duration of AP-use was stratified as follows: ≤6 weeks, 6–16 weeks, 16–38 weeks and >38 weeks. Forest plots stratified by AP as well as by duration of use were generated and results were summarised in figures.

Results

307 articles met the inclusion criteria. The majority were AP switch studies. Almost all APs showed a degree of weight gain after prolonged use, except for amisulpride, aripiprazole and ziprasidone, for which prolonged exposure resulted in negligible weight change. The level of weight gain per AP varied from slight to severe. Contrary to expectations, switching AP did not result in weight loss for amisulpride, aripiprazole or ziprasidone. In AP-naive patients, weight gain was much more pronounced for all APs.

Conclusion

Given prolonged exposure, virtually all APs are associated with weight gain. The rationale of switching APs to achieve weight reduction may be overrated. In AP-naive patients, weight gain is more pronounced.

15.

Background

Prepulse inhibition (PPI) refers to the attenuation of the acoustic startle response (ASR) by a weak sound preceding a strong acoustic stimulus. Previous studies suggest that PPI is influenced by physical parameters of the prepulse sound, such as its intensity and lead time. The present study characterizes the impact of prepulse tone frequency on PPI.

Methods

Seven female C57BL mice were used in the present study. The ASR was induced by a 100 dB SPL white-noise burst. After assessing the effect of background sounds (white noise and pure tones) on the ASR, PPI was tested using prepulse pure tones against a background tone of either 10 or 18 kHz. The inhibitory effect was assessed by measuring and analyzing the changes in the first peak-to-peak magnitude, root-mean-square value, duration and latency of the ASR as a function of the frequency difference between the prepulse and background tones.

Results

Our data showed that the ASR magnitude with a pure-tone background varied with tone frequency and was smaller than that with a white-noise background. The prepulse tone systematically reduced the ASR as a function of the difference in frequency between the prepulse and background tones. A difference of at least 0.5 kHz appeared to be a prerequisite for inducing substantial ASR inhibition. The frequency dependence of PPI was similar under either a 10 or 18 kHz background tone.

Conclusion

PPI is sensitive to frequency information of the prepulse sound. However, the critical factor is not tone frequency itself, but the frequency difference between the prepulse and background tones.

16.

Objectives

Intonation may serve as a cue for facilitated recognition and processing of spoken words and it has been suggested that the pitch contour of spoken words is implicitly remembered. Thus, using the repetition suppression (RS) effect of BOLD-fMRI signals, we tested whether the same spoken words are differentially processed in language and auditory brain areas depending on whether or not they retain an arbitrary intonation pattern.

Experimental design

Words were presented repeatedly in three blocks for passive and active listening tasks. There were three prosodic conditions, in each of which a different set of words was used and specific task-irrelevant intonation changes were applied: (i) all words were presented with a fixed, flat, monotonous pitch contour; (ii) each word had an arbitrary pitch contour that was kept constant throughout the three repetitions; (iii) each word had a different arbitrary pitch contour on each of its repetitions.

Principal findings

The repeated presentation of words with a set pitch contour resulted in robust behavioral priming effects as well as in significant RS of the BOLD signals in primary auditory cortex (BA 41), temporal areas (BA 21/22) bilaterally, and in Broca's area. However, changing the intonation of the same words on each successive repetition resulted in reduced behavioral priming and the abolition of RS effects.

Conclusions

Intonation patterns are retained in memory even when the intonation is task-irrelevant. Implicit memory traces for the pitch contour of spoken words were reflected in facilitated neuronal processing in auditory and language-associated areas. Thus, the results lend support to the notion that prosody, and specifically pitch contour, is strongly associated with the memory representation of spoken words.

17.

Introduction

Difficulties in word-level reading skills are prevalent in Brazilian schools and may deter children from gaining the knowledge obtained through reading and from academic achievement. Music education has emerged as a potential method to improve reading skills because reading and music processing share a common neurobiological substrate.

Objective

To evaluate the effectiveness of music education for the improvement of reading skills and academic achievement among children (eight to 10 years of age) with reading difficulties.

Method

235 children with reading difficulties in 10 schools participated in a five-month, cluster-randomized clinical trial (RCT) in an impoverished zone within the city of São Paulo to test the effects of a music education intervention, with reading skills and academic achievement assessed during the school year. Five schools were chosen randomly to incorporate music classes (n = 114), and five served as controls (n = 121). Two different methods of analysis were used to evaluate the effectiveness of the intervention: the standard intention-to-treat (ITT) method, and the Complier Average Causal Effect (CACE) estimation method, which took compliance status into account.

Results

The ITT analyses were not very promising; only one marginal effect existed for the rate of correct real words read per minute. Indeed, considering ITT, improvements were observed in the secondary outcomes (slope of Portuguese = 0.21 [p<0.001] and slope of math = 0.25 [p<0.001]). As for CACE estimation (i.e., complier children versus non-complier children), more promising effects were observed in terms of the rate of correct words read per minute [β = 13.98, p<0.001] and phonological awareness [β = 19.72, p<0.001] as well as secondary outcomes (academic achievement in Portuguese [β = 0.77, p<0.0001] and math [β = 0.49, p<0.001] throughout the school year).

Conclusion

The results may be seen as promising, but they are not, in themselves, enough to justify music lessons as public policy.

18.
Cartei V  Cowles HW  Reby D 《PloS one》2012,7(2):e31353

Background

The frequency components of the human voice play a major role in signalling the gender of the speaker. A voice imitation study was conducted to investigate individuals' ability to make behavioural adjustments to fundamental frequency (F0) and formants (Fi) in order to manipulate their expression of voice gender.

Methodology/Principal Findings

Thirty-two native British-English adult speakers were asked to read out loud different types of text (words, sentence, passage) using their normal voice and then while sounding as ‘masculine’ and ‘feminine’ as possible. Overall, the results show that both men and women raised their F0 and Fi when feminising their voice, and lowered their F0 and Fi when masculinising their voice.

Conclusions/Significance

These observations suggest that adult speakers are capable of spontaneous glottal and vocal tract length adjustments to express masculinity and femininity in their voice. These results point to a “gender code”, where speakers make a conventionalized use of the existing sex dimorphism to vary the expression of their gender and gender-related attributes.

19.
20.

Background

Different mechanisms have been proposed to be involved in tinnitus generation, among them reduced lateral inhibition and homeostatic plasticity. On a perceptual level, these different mechanisms should be reflected in the relationship between the individual audiometric slope and the perceived tinnitus pitch. Whereas some studies found the tinnitus pitch to correspond to the frequency of maximum hearing loss, others stressed the relevance of the edge frequency. This study investigates the relationship between tinnitus pitch and audiometric slope in a large sample.

Methodology

This retrospective observational study analyzed 286 patients. The matched tinnitus pitch was compared to the frequency of maximum hearing loss and the edge of the audiogram (steepest hearing loss) by t-tests and correlation coefficients. These analyses were performed for the whole group and for sub-groups (uni- vs. bilateral (117 vs. 338 ears), pure-tone vs. narrow-band (340 vs. 115 ears), and low and high audiometric slope (114 vs. 113 ears)).

Findings

For the right ear, the tinnitus pitch was in the same range as, and correlated significantly with, the frequency of maximum hearing loss, but differed from and did not correlate with the edge frequency. For the left ear, similar results were found, but the correlation between tinnitus pitch and maximum hearing loss did not reach significance. Sub-group analyses (bi- and unilateral, tinnitus character, slope steepness) showed identical results, except for the sub-group with a high audiometric slope, which showed a higher frequency of maximum hearing loss as compared to the tinnitus pitch.

Conclusion

The results confirm a relationship between tinnitus pitch and the frequency of maximum hearing loss, but not the edge frequency, suggesting that tinnitus is a fill-in phenomenon resulting from homeostatic mechanisms rather than the result of deficient lateral inhibition. Sub-group analyses suggest that audiometric steepness and the side of the affected ear affect this relationship. Future studies should control for these potential confounding factors.
