Similar Articles

20 similar articles were retrieved.
1.

Background

Humans can easily restore a speech signal that is temporally masked by an interfering sound (e.g., a cough masking parts of a word in a conversation), and listeners have the illusion that the speech continues through the interfering sound. This perceptual restoration of human speech is affected by prior experience. Here we provide evidence for perceptual restoration in the complex vocalizations of a songbird, which are acquired by vocal learning in much the same way as humans learn their language.

Methodology/Principal Findings

European starlings were trained in a same/different paradigm to report salient differences between successive sounds. The birds' response latency for discriminating between a stimulus pair is an indicator of the salience of the difference, and these latencies can be used to derive perceptual distances using multidimensional scaling (a brief computational sketch follows this abstract). For familiar motifs, the birds showed a large perceptual distance when discriminating between complete motifs and motifs that were muted for brief periods. If the muted periods were filled with noise, the perceptual distance was reduced. For unfamiliar motifs, no such difference was observed.

Conclusions/Significance

The results suggest that starlings are able to perceptually restore partly masked sounds and, like humans, rely on prior experience. They may be a suitable model for studying the mechanisms underlying experience-dependent perceptual restoration.
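The scaling step described above can be illustrated with a minimal sketch. The latency matrix and the reciprocal conversion to dissimilarities are illustrative assumptions (the abstract does not specify them); the sketch simply shows how such dissimilarities can be embedded in a 2-D perceptual map with scikit-learn's MDS.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical pairwise discrimination latencies (s) among four motifs;
# shorter latency = more salient difference = larger perceptual distance.
latencies = np.array([
    [0.0, 0.8, 1.2, 1.5],
    [0.8, 0.0, 0.9, 1.4],
    [1.2, 0.9, 0.0, 0.7],
    [1.5, 1.4, 0.7, 0.0],
])

# One simple (assumed) conversion: dissimilarity as the reciprocal of latency.
dissim = np.zeros_like(latencies)
mask = latencies > 0
dissim[mask] = 1.0 / latencies[mask]

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)
print(coords)  # 2-D coordinates whose inter-point distances approximate the dissimilarities
```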

2.

Background

Prepulse inhibition (PPI) describes the effect of a weak sound preceding a strong acoustic stimulus on the acoustic startle response (ASR). Previous studies suggest that PPI is influenced by physical parameters of the prepulse sound, such as its intensity and lead time. The present study characterizes the impact of prepulse tone frequency on PPI.

Methods

Seven female C57BL mice were used in the present study. The ASR was induced by a 100 dB SPL white-noise burst. After assessing the effect of background sounds (white noise and pure tones) on the ASR, PPI was tested using prepulse pure tones with a background tone of either 10 or 18 kHz. The inhibitory effect was assessed by measuring the changes in the first peak-to-peak magnitude, root-mean-square value, duration and latency of the ASR as a function of the frequency difference between the prepulse and background tones (a sketch of such response metrics follows this abstract).

Results

Our data showed that ASR magnitude with a pure-tone background varied with tone frequency and was smaller than that with a white-noise background. The prepulse tone systematically reduced the ASR as a function of the difference in frequency between the prepulse and background tones. A difference of 0.5 kHz appeared to be a prerequisite for inducing substantial ASR inhibition. The frequency dependence of PPI was similar under either a 10 or 18 kHz background tone.

Conclusion

PPI is sensitive to the frequency content of the prepulse sound. However, the critical factor is not the tone frequency itself but the frequency difference between the prepulse and background tones.
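A minimal sketch of the response metrics named in the Methods, computed on a toy trace. The trace, sampling rate, and onset criterion are illustrative assumptions, not the study's procedure.

```python
import numpy as np

def asr_metrics(trace, fs):
    """Assumed summary metrics for a startle-response window (duration omitted for brevity)."""
    peak_to_peak = trace.max() - trace.min()        # simplified stand-in for the first peak-to-peak magnitude
    rms = np.sqrt(np.mean(trace ** 2))              # root-mean-square value of the response window
    onset_idx = np.argmax(np.abs(trace) > 0.1 * np.abs(trace).max())  # crude onset criterion (10% of max)
    latency_ms = 1000.0 * onset_idx / fs
    return peak_to_peak, rms, latency_ms

# Hypothetical 100-ms response window sampled at 10 kHz (a toy damped oscillation)
fs = 10_000
t = np.arange(0.0, 0.1, 1.0 / fs)
trace = np.exp(-t / 0.02) * np.sin(2 * np.pi * 80 * t)
print(asr_metrics(trace, fs))
```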

3.

Background

Barn owls integrate spatial information across frequency channels to localize sounds in space.

Methodology/Principal Findings

We presented barn owls with synchronous sounds that contained different bands of frequencies (3–5 kHz and 7–9 kHz) from different locations in space. When the owls were confronted with the conflicting localization cues from two synchronous sounds of equal level, their orienting responses were dominated by one of the sounds: they oriented toward the location of the low frequency sound when the sources were separated in azimuth; in contrast, they oriented toward the location of the high frequency sound when the sources were separated in elevation. We identified neural correlates of this behavioral effect in the optic tectum (OT, superior colliculus in mammals), which contains a map of auditory space and is involved in generating orienting movements to sounds. We found that low frequency cues dominate the representation of sound azimuth in the OT space map, whereas high frequency cues dominate the representation of sound elevation.

Conclusions/Significance

We argue that the dominance hierarchy of localization cues reflects several factors: 1) the relative amplitude of the sound providing the cue, 2) the resolution with which the auditory system measures the value of a cue, and 3) the spatial ambiguity in interpreting the cue. These same factors may contribute to the relative weighting of sound localization cues in other species, including humans.

4.

Background

Sound production is widespread among fishes and accompanies many social interactions. The literature reports twenty-nine cichlid species known to produce sounds during aggressive and courtship displays, but the precise range of behavioural contexts is unclear. This study aims to describe the various Oreochromis niloticus behaviours that are associated with sound production, in order to delimit the role of sound during different activities, including agonistic behaviours, pit activities, and reproduction and parental care, by males and females of the species.

Methodology/Principal Findings

Sounds mostly occur during the day. The sounds recorded during this study accompany previously known behaviours, and no particular behaviour is systematically associated with sound production. Males and females make sounds during territorial defence but not during courtship and mating. Sounds support visual behaviours but are not used alone. During agonistic interactions, a calling Oreochromis niloticus does not bite after producing sounds, and more sounds are produced when defending a territory than when dominating individuals. Females produce sounds to defend eggs but not larvae.

Conclusion/Significance

Sounds are produced to reinforce visual behaviours. Moreover, comparisons with O. mossambicus indicate that two sister species can differ in their use of sound, their acoustic characteristics, and the function of sound production. These findings support the role of sounds in differentiating species and promoting speciation. They also make clear that the association of sounds with specific life-cycle roles cannot be generalized to the entire taxon.

5.

Background

Vision provides the most salient information with regard to stimulus motion, but audition can also provide important cues that affect visual motion perception. Here, we show that sounds containing no motion or positional cues can induce illusory visual motion perception for static visual objects.

Methodology/Principal Findings

Two circles placed side by side were presented in alternation, producing apparent motion perception, and each onset was accompanied by a tone burst of a specific and unique frequency. After exposure to this visual apparent motion with tones for a few minutes, the tones became drivers of illusory motion perception: when the flash onset was synchronized to tones of alternating frequencies, a circle blinking at a fixed location was perceived as moving laterally in the same direction as the previously exposed apparent motion. Furthermore, the effect lasted for at least a few days. The effect was clearly observed at the retinal position that had previously been exposed to apparent motion with tone bursts.

Conclusions/Significance

The present results indicate that a strong association between a sound sequence and visual motion is easily formed within a short period and that, after forming the association, sounds are able to trigger visual motion perception for a static visual object.

6.

Background

Research on biological motion perception has traditionally been restricted to the visual modality. Recent neurophysiological and behavioural evidence, however, supports the idea that actions are represented not merely visually but rather audiovisually. The goal of the present study was to test whether the perceived in-depth orientation of depth-ambiguous point-light walkers (plws) is affected by the presentation of looming or receding sounds synchronized with the footsteps.

Methodology/Principal Findings

In Experiment 1, orthographic frontal/back projections of plws were presented either without sound or with sounds whose intensity level was rising (looming), falling (receding) or stationary. Despite instructions to ignore the sounds and to report only the visually perceived in-depth orientation, plws accompanied by looming sounds were more often judged to be facing the viewer, whereas plws paired with receding sounds were more often judged to be facing away from the viewer. To test whether the effects observed in Experiment 1 act at a perceptual rather than a decisional level, in Experiment 2 observers perceptually compared orthographic plws, presented without sound or paired with either looming or receding sounds, to plws without sound but with perspective cues making them objectively face either towards or away from the viewer. Judging which of an orthographic plw and a plw with looming (receding) perspective cues looks more looming becomes harder (easier) when the orthographic plw is paired with looming sounds.

Conclusions/Significance

The present results suggest that looming and receding sounds alter judgements of the in-depth orientation of depth-ambiguous point-light walkers. While looming sounds are demonstrated to act at a perceptual level and make plws look more looming, it remains a challenge for future research to clarify at what level in the processing hierarchy receding sounds affect how observers judge the in-depth orientation of plws.

7.

Background

Previous work on the human auditory cortex has revealed areas specialized in spatial processing, but how the neurons in these areas represent the location of a sound source remains unknown.

Methodology/Principal Findings

Here, we performed a magnetoencephalography (MEG) experiment with the aim of revealing the neural code of auditory space implemented by the human cortex. In a stimulus-specific adaptation paradigm, realistic spatial sound stimuli were presented in pairs of adaptor and probe locations. We found that the attenuation of the N1m response depended strongly on the spatial arrangement of the two sound sources. These location-specific effects showed that sounds originating from locations within the same hemifield activated the same neuronal population regardless of the spatial separation between the sound sources. In contrast, sounds originating from opposite hemifields activated separate groups of neurons.

Conclusions/Significance

These results are highly consistent with a rate code of spatial location formed by two opponent populations, one tuned to locations in the left hemifield and the other to those in the right. This indicates that the neuronal code of sound source location implemented by the human auditory cortex is similar to that previously found in other primates.
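The opponent-population rate code described in this conclusion can be illustrated with a toy model (not the authors' analysis): two hemifield-tuned channels with an assumed sigmoidal tuning, whose rate difference varies monotonically with azimuth.

```python
import numpy as np

def channel_rate(azimuth_deg, preferred_side):
    """Assumed sigmoidal hemifield tuning: the rate grows toward the preferred side.
    Negative azimuth = left of the midline; the 15-degree slope constant is arbitrary."""
    s = 1.0 if preferred_side == "right" else -1.0
    return 1.0 / (1.0 + np.exp(-s * azimuth_deg / 15.0))

def decode_azimuth(rate_left, rate_right):
    """Opponent read-out: the rate difference is a monotonic proxy for azimuth."""
    return rate_right - rate_left

for az in (-60, -20, 0, 20, 60):
    rl, rr = channel_rate(az, "left"), channel_rate(az, "right")
    print(az, round(decode_azimuth(rl, rr), 3))  # negative read-out for leftward sources
```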

8.
Schmidt AK, Römer H. PLoS ONE 2011; 6(12): e28593

Background

Insects often communicate by sound in mixed-species choruses; like humans and many vertebrates in crowded social environments, they thus have to solve cocktail-party-like problems in order to ensure successful communication with conspecifics. This is even more of a problem in species-rich environments such as tropical rainforests, where background noise levels of up to 60 dB SPL have been measured.

Principal Findings

Using neurophysiological methods, we investigated the effect of natural background noise (masker) on signal-detection thresholds in two tropical cricket species, Paroecanthus podagrosus and Diatrypa sp., both in the laboratory and outdoors. We identified three 'bottom-up' mechanisms which contribute to an excellent neuronal representation of conspecific signals despite the masking background. First, the sharply tuned frequency selectivity of the receiver reduces the amount of masking energy around the species-specific calling-song frequency. Laboratory experiments yielded an average signal-to-noise ratio (SNR) of −8 dB when masker and signal were broadcast from the same side. Secondly, displacing the masker by 180° from the signal improved SNRs by a further 6 to 9 dB, a phenomenon known as spatial release from masking (a short numerical sketch follows this abstract). Surprisingly, experiments carried out directly in the nocturnal rainforest yielded SNRs of about −23 dB, compared with only −14.5 and −16 dB in the two species in the laboratory with the same masker. Finally, a neuronal gain-control mechanism enhances the contrast between the responses to signals and to the masker, by inhibition of neuronal activity in interstimulus intervals.

Conclusions

Thus, conventional speaker playbacks in the lab apparently do not reconstruct the masking noise situation in a spatially realistic manner, since under real-world conditions multiple sound sources are distributed in space. Our results also indicate that, without knowledge of the receiver properties and the spatial release mechanisms, the detrimental effect of noise may be strongly overestimated.
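A short numerical sketch of the SNR bookkeeping referred to above. The toy signal, the toy masker, and the ~7 dB spatial-release figure (chosen from within the reported 6–9 dB range) are illustrative assumptions, not the study's data.

```python
import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio in dB, computed from RMS amplitudes."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20.0 * np.log10(rms(signal) / rms(noise))

rng = np.random.default_rng(0)
fs = 44_100
t = np.arange(0, 0.5, 1 / fs)
signal = 0.05 * np.sin(2 * np.pi * 4800 * t)                # toy calling-song carrier
masker_colocated = 0.125 * rng.standard_normal(t.size)      # toy broadband masker, same side
masker_displaced = masker_colocated * 10 ** (-7.0 / 20.0)   # assumed ~7 dB spatial release

print(round(snr_db(signal, masker_colocated), 1))   # about -11 dB with a co-located masker
print(round(snr_db(signal, masker_displaced), 1))   # exactly 7 dB better once the masker is displaced
```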

9.
Neuhofer D, Ronacher B. PLoS ONE 2012; 7(3): e34384

Background

Animals that communicate by sound face the problem that the signals arriving at the receiver are often degraded and masked by noise. Frequency filters in the receiver's auditory system may improve the signal-to-noise ratio (SNR) by excluding parts of the spectrum which are not occupied by the species-specific signals. This solution, however, is hardly available to species that produce broad-band signals or have ears with broad frequency tuning. In mammals, auditory filters exist that work in the temporal domain of amplitude modulations (AM). Do insects also use this type of filtering?

Principal Findings

Combining behavioural and neurophysiological experiments, we investigated whether AM filters may improve the recognition of masked communication signals in grasshoppers. The AM pattern of the sound, its envelope, is crucial for signal recognition in these animals (a sketch of how an envelope spectrum can be computed follows this abstract). We degraded the species-specific song by adding random fluctuations to its envelope. Six noise bands were used that differed in their overlap with the spectral content of the song envelope. If AM filters contribute to reduced masking, signal recognition should depend on the degree of overlap between the song envelope spectrum and the noise spectra. Contrary to this prediction, the resistance against signal degradation was the same for five of the six masker bands. Most remarkably, the band with the strongest frequency overlap with the natural song envelope (0–100 Hz) impaired acceptance of degraded signals the least. To assess the noise-filtering capacities of single auditory neurons, we quantified how their spike trains changed as a function of masking level. Increasing levels of signal degradation in the different frequency bands led to similar changes in the spike trains of most neurons.

Conclusions

There is no indication that auditory neurons of grasshoppers are specialized to improve the SNR with respect to the pattern of amplitude modulations.
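A minimal sketch of extracting a sound's envelope and its modulation spectrum, the quantity whose 0–100 Hz band is discussed above. The toy carrier and modulation rates are assumptions, not the grasshopper song.

```python
import numpy as np
from numpy.fft import rfft, rfftfreq
from scipy.signal import hilbert

fs = 8_000
t = np.arange(0, 2.0, 1 / fs)
# Toy "song": a carrier amplitude-modulated at a few low envelope frequencies
envelope = 1.0 + 0.6 * np.sin(2 * np.pi * 12 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)
song = envelope * np.sin(2 * np.pi * 3000 * t)

# Envelope via the analytic signal, then its modulation (envelope) spectrum
env = np.abs(hilbert(song))
spectrum = np.abs(rfft(env - env.mean()))
freqs = rfftfreq(env.size, 1 / fs)

low_band = spectrum[(freqs > 0) & (freqs <= 100)].sum()      # 0-100 Hz band of the envelope
next_band = spectrum[(freqs > 100) & (freqs <= 200)].sum()
print(low_band > next_band)  # True: the toy envelope energy sits below 100 Hz
```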

10.
Valor LM, Grant SG. PLoS ONE 2007; 2(12): e1303

Background

Gene expression profiling using microarrays is a powerful technology widely used to study regulatory networks. Profiling of mRNA levels in mutant organisms has the potential to identify genes regulated by the mutated protein.

Methodology/Principal Findings

Using tissues from multiple lines of knockout mice, we examined genome-wide changes in gene expression. We report that a significant proportion of the changed genes were found near the targeted gene.

Conclusions/Significance

The apparent clustering of these genes was explained by the presence of flanking DNA from the parental ES cell. We provide recommendations for the analysis and reporting of microarray data from knockout mice.

11.

Background/Methodology

A significant consequence of increasing urbanization is anthropogenic noise pollution. Although noise is strongly associated with the disruption of animal communication systems and with negative health effects in humans, the study of these consequences at ecologically relevant spatial and temporal scales (termed soundscape ecology) is in its early stages. In this study, we examined the above- and below-water soundscapes of recreational and residential lakes in the region surrounding a large metropolitan area. Using univariate and multivariate approaches, we tested the importance of large- and local-scale landscape factors in driving acoustic characteristics across an urbanization gradient, and visualized changes in the soundscape over space and time.

Principal Findings

Anthropogenic noise (anthrophony) was strongly predicted by a landcover-based metric of urbanization (within a 10 km radius), with the presence of a public park as a secondary influence; this urbanization signal was apparent even in below-water recordings. The percentage of hourly measurements exceeding noise thresholds associated with outdoor disturbance was 67%, 17%, and 0%, respectively, for lakes characterized as High, Medium, and Low urbanization (a small sketch of this exceedance calculation follows this abstract). Decreased biophony (the proportion of natural sounds) was associated with the presence of a public park, followed by increased urbanization; time of day was also a significant predictor of biophony. Local-scale (shoreline) residential development was not related to changes in anthrophony or biophony. The patterns we identify are illustrated with a multivariate approach, which allows use of entire sound samples and facilitates interpretation of changes in a soundscape.

Conclusions/Significance

As highly valued residential and recreation areas, lakes represent everyday soundscapes important to both humans and wildlife. Our finding that many of these areas, particularly those with public parks, routinely experience sound types and levels associated with disturbance suggests that urban planners need to account for the effect of increasing development on soundscapes to avoid compromising goals for ecological and human health.
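The threshold-exceedance percentages reported above amount to a simple calculation. The hourly levels and the 55 dB threshold below are hypothetical; the study's actual threshold and data may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical hourly equivalent sound levels (dB) for one lake over one week (24 * 7 values)
hourly_leq_db = rng.normal(loc=52.0, scale=6.0, size=24 * 7)

threshold_db = 55.0  # assumed outdoor-disturbance threshold (illustrative only)
percent_exceeding = 100.0 * np.mean(hourly_leq_db > threshold_db)
print(f"{percent_exceeding:.0f}% of hourly measurements exceed {threshold_db} dB")
```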

12.

Objective

Although awareness of sleep disorders is increasing, limited information is available on whole-night detection of snoring. Our study aimed to develop and validate a robust, high-performance, and sensitive whole-night snore detector based on non-contact technology.

Design

Sounds during polysomnography (PSG) were recorded using a directional condenser microphone placed 1 m above the bed. An AdaBoost classifier was trained and validated on manually labeled snoring and non-snoring acoustic events.

Patients

Sixty-seven subjects (age 52.5±13.5 years, BMI 30.8±4.7 kg/m2, m/f 40/27) referred for PSG for obstructive sleep apnea diagnosis were prospectively and consecutively recruited. Twenty-five subjects were used for the design study; the validation study was performed blindly on the remaining forty-two subjects.

Measurements and Results

To train the proposed sound detector, >76,600 acoustic episodes collected in the design study were manually classified by three scorers into snore and non-snore episodes (e.g., bedding noise, coughing, environmental noise). A feature-selection process was applied to select the most discriminative features extracted from the time and spectral domains. The average snore/non-snore detection rate (accuracy) for the design group was 98.4%, based on ten-fold cross-validation (a minimal classifier sketch follows this abstract). When tested on the validation group, the average detection rate was 98.2%, with a sensitivity of 98.0% (snore classified as snore) and a specificity of 98.3% (noise classified as noise).

Conclusions

Audio-based features extracted from the time and spectral domains can accurately discriminate between snore and non-snore acoustic events. This audio-analysis approach enables detection and analysis of snoring sounds over a full night in order to produce quantified measures for objective follow-up of patients.
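A minimal sketch of the classification-and-validation pipeline described above, using scikit-learn's AdaBoost with ten-fold cross-validation. The synthetic feature matrix and labels stand in for the real acoustic features, which are not available here; only the classifier and validation scheme follow the abstract.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical features: rows = acoustic events, columns = time/spectral descriptors
# (e.g., RMS energy, zero-crossing rate, spectral centroid); labels: 1 = snore, 0 = non-snore.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + 0.2 * rng.normal(size=1000) > 0).astype(int)

clf = AdaBoostClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)   # ten-fold cross-validation, as in the design study
print(f"mean accuracy: {scores.mean():.3f}")
```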

13.
Yamamoto K, Kawabata H. PLoS ONE 2011; 6(12): e29414

Background

We ordinarily perceive our own voice as occurring simultaneously with vocal production, but this sense of simultaneity in vocalization can easily be disrupted by delayed auditory feedback (DAF). DAF causes normal speakers to have difficulty speaking fluently but helps people who stutter to improve speech fluency. However, the temporal mechanism underlying the integration of the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism that integrates vocal motor sensation and voice sounds under DAF, using an adaptation technique.

Methods and Findings

Participants repeatedly produced a single voice sound with specific DAF delay times (0, 66, 133 ms) for three minutes to induce 'lag adaptation'. They then judged the simultaneity between the motor sensation and the vocal sound given as feedback. We found that lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. Furthermore, we found that this temporal recalibration in vocalization can be affected by the averaged delay times in the adaptation phase.

Conclusions

These findings suggest that vocalization is finely tuned by the temporal recalibration mechanism, which acutely monitors the temporal delays between motor sensation and vocal sound.

14.
Ren T, He W, Porsov E. PLoS ONE 2011; 6(5): e20149

Background

To detect soft sounds, the mammalian cochlea increases its sensitivity by amplifying incoming sounds up to one thousand times. Although the cochlear amplifier is thought to be a local cellular process at an area basal to the response peak on the spiral basilar membrane, its location has not been demonstrated experimentally.

Methodology and Principal Findings

Using a sensitive laser interferometer to measure sub-nanometer vibrations at two locations along the basilar membrane in sensitive gerbil cochleae, we show here that the cochlea can boost soft-sound-induced vibrations by as much as 50 dB/mm at an area proximal to the response peak on the basilar membrane. The observed amplification works maximally at low sound levels and at frequencies immediately below the peak-response frequency of the measured apical location. The amplification decreases by more than 65 dB/mm as the sound level increases.

Conclusions and Significance

We conclude that the cochlear amplifier resides in a small longitudinal region basal to the response peak in the sensitive cochlea. These data provide critical information for advancing our knowledge of the cochlear mechanisms responsible for the remarkable hearing sensitivity, frequency selectivity and dynamic range.

15.
Römer H, Lang A, Hartbauer M. PLoS ONE 2010; 5(10): e13325

Background

Understanding the diversity of animal signals requires knowledge of factors which may influence the different stages of communication, from the production of a signal by the sender up to the detection, identification and final decision-making in the receiver. Yet, many studies on signalling systems focus exclusively on the sender, and often ignore the receiver side and the ecological conditions under which signals evolve.

Methodology/Principal Findings

We study a neotropical katydid which uses airborne sound for long-distance communication but also an alternative form of private signalling through substrate vibration. We quantified the strength of predation by bats which eavesdrop on the airborne sound signal by analysing insect remains at the roosts of a bat family. Males do not use one or the other channel arbitrarily, but spend more time on private signalling under full-moon conditions, when the nocturnal rainforest favours predation by visually hunting predators. Measurements of metabolic CO2 production indicate that the energy necessary for signalling increases three-fold on full-moon nights, when private signalling is favoured. The background noise level in the airborne sound channel can amount to 70 dB SPL, whereas it is low in the vibration channel in the low-frequency range of the vibration signal. The active space of the airborne sound signal varies between 22 and 35 meters, contrasting with about 4 meters for the vibration signal transmitted on the insect's favourite roost plant. Signal perception was studied using neurophysiological methods under outdoor conditions and was found to be more reliable for the private mode of communication.

Conclusions/Significance

Our results demonstrate the complex effects of ecological conditions, such as predation, nocturnal ambient light levels, and masking noise levels, on the performance of receivers in detecting mating signals, and show that the net advantage or disadvantage of a mode of communication strongly depends on these conditions.

16.

Background

Recent neuroimaging studies have revealed that putatively unimodal regions of visual cortex can be activated during auditory tasks in sighted as well as in blind subjects. However, the task determinants and functional significance of auditory occipital activations (AOAs) remain unclear.

Methodology/Principal Findings

We examined AOAs in an intermodal selective attention task to distinguish whether they were stimulus-bound or recruited by higher-level cognitive operations associated with auditory attention. Cortical surface mapping showed that auditory occipital activations were localized to retinotopic visual cortex subserving the far peripheral visual field. AOAs depended strictly on the sustained engagement of auditory attention and were enhanced in more difficult listening conditions. In contrast, unattended sounds produced no AOAs regardless of their intensity, spatial location, or frequency.

Conclusions/Significance

Auditory attention, but not passive exposure to sounds, routinely activated peripheral regions of visual cortex when subjects attended to sound sources outside the visual field. Functional connections between auditory cortex and visual cortex subserving the peripheral visual field appear to underlie the generation of AOAs, which may reflect the priming of visual regions to process soon-to-appear objects associated with unseen sound sources.

17.

Background

Continuity of care (COC) is a widely accepted core principle of primary care and has been associated with patient satisfaction, healthcare utilization and mortality in many, albeit small, studies.

Objective

To assess the relationship between longitudinal continuity with a primary care physician (PCP) and likelihood of death in the French general population.

Design

Observational study based on reimbursement claims from the French national health insurance (NHI) database for salaried workers (2007–2010).

Setting

Primary care.

Patients

We extracted data on the number and pattern of visits made to a PCP and excluded all patients who did not visit a PCP at least twice within 6 months. We recorded age, gender, comorbidities, social status, and deaths.

Main outcome measures

The primary endpoint was death from all causes. We measured longitudinal continuity of care (COC) with a PCP twice a year between 2007 and 2010, using the COC index developed by Bice and Boxerman (a small worked example of this index follows this abstract). We introduced the COC index as a time-dependent variable in a survival analysis adjusted for age and gender and stratified on comorbidities and social status.

Results

A total of 325 742 patients were included in the analysis. The average COC index ranged from 0.74 (SD 0.35) to 0.76 (0.35) (where 1.0 is perfect continuity). The likelihood of death was lower in patients with higher continuity (hazard ratio for an increase of 0.1 in continuity, adjusted for age and sex and stratified on comorbidities and social status: 0.96 [0.95–0.96]).

Conclusion

Higher longitudinal continuity was associated with a reduced likelihood of death.
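The Bice-Boxerman continuity-of-care index used above has a simple closed form, COC = (Σ n_j² − N) / (N(N − 1)), where n_j is the number of visits to provider j and N the total number of visits. A small sketch with hypothetical visit sequences:

```python
from collections import Counter

def bice_boxerman_coc(provider_ids):
    """Bice-Boxerman continuity of care index:
    COC = (sum(n_j^2) - N) / (N * (N - 1)),
    where n_j is the number of visits to provider j and N the total number of visits.
    Ranges from 0 (every visit to a different provider) to 1 (all visits to one provider)."""
    n = len(provider_ids)
    if n < 2:
        raise ValueError("COC is undefined for fewer than two visits")
    counts = Counter(provider_ids)
    return (sum(c * c for c in counts.values()) - n) / (n * (n - 1))

print(bice_boxerman_coc(["A", "A", "A", "A"]))  # 1.0: perfect continuity
print(bice_boxerman_coc(["A", "A", "B", "C"]))  # ~0.17: visits spread over three providers
```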

18.
Kawashima T, Sato T. PLoS ONE 2012; 7(7): e41328

Background

When a second sound follows a long first sound, its location is perceived as shifted away from that of the first sound (the localization/lateralization aftereffect). This aftereffect has often been considered to reflect efficient neural coding of sound locations in the auditory system. To identify determinants of the localization aftereffect, the current study examined whether it is induced by an interaural temporal difference (ITD) in the amplitude envelope of high-frequency transposed tones (over 2 kHz), which is known to function as a sound localization cue.

Methodology/Principal Findings

In Experiment 1, participants were required to adjust the position of a pointer to the perceived location of test stimuli before and after adaptation. Test and adapter stimuli were amplitude-modulated (AM) sounds presented at high frequencies, and their positional differences were manipulated solely by the envelope ITD (a sketch of such a stimulus follows this abstract). Results showed that the adapter's ITD systematically shifted the perceived position of test sounds in the directions expected from the localization/lateralization aftereffect when the adapter was presented at ±600 µs ITD; no corresponding significant effect was observed for a 0 µs ITD adapter. In Experiment 2, the observed adapter effect was confirmed using a forced-choice task. It was also found that adaptation to the AM sounds at high frequencies did not significantly change the perceived position of pure-tone test stimuli in the low-frequency region (128 and 256 Hz).

Conclusions/Significance

The findings in the current study indicate that ITD in the envelope at high frequencies induces the localization aftereffect. This suggests that ITD in the high frequency region is involved in adaptive plasticity of auditory localization processing.
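A simplified sketch of a binaural, amplitude-modulated high-frequency stimulus whose interaural difference lies only in the envelope, as in the adapters described above. The carrier and modulation frequencies are assumptions, and the half-wave-rectified envelope omits the low-pass filtering used for true transposed tones.

```python
import numpy as np

fs = 48_000
dur = 0.5
t = np.arange(0, dur, 1 / fs)

carrier_hz, mod_hz = 4000.0, 128.0   # assumed carrier and modulation rates
itd_s = 600e-6                       # 600 microsecond envelope ITD, as in the adapters

def am_tone(env_delay_s):
    """Carrier with a half-wave-rectified envelope; only the envelope is delayed,
    so the carrier fine structure is identical in both ears."""
    env = np.maximum(np.sin(2 * np.pi * mod_hz * (t - env_delay_s)), 0.0)
    return env * np.sin(2 * np.pi * carrier_hz * t)

left = am_tone(0.0)
right = am_tone(itd_s)               # envelope lags by the ITD in the right ear
stereo = np.stack([left, right], axis=1)
print(stereo.shape)                  # (samples, 2), ready to write to a sound device or file
```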

19.

Background

Wind turbine noise exposure and suspected health-related effects thereof have attracted substantial attention. Various symptoms such as sleep-related problems, headache, tinnitus and vertigo have been described by subjects suspected of having been exposed to wind turbine noise.

Objective

This review was conducted systematically with the purpose of identifying any reported associations between wind turbine noise exposure and suspected health-related effects.

Data Sources

A search of the scientific literature concerning the health-related effects of wind turbine noise was conducted on PubMed, Web of Science, Google Scholar and various other Internet sources.

Study Eligibility Criteria

All studies investigating suspected health-related outcomes associated with wind turbine noise exposure were included.

Results

Wind turbines emit noise, including low-frequency noise, which decreases with increasing distance from the turbines. Evidence of a dose-response relationship linking wind turbine noise to noise annoyance, sleep disturbance and possibly even psychological distress was present in the literature. Currently, there is no statistically significant evidence indicating any association between wind turbine noise exposure and tinnitus, hearing loss, vertigo or headache.

Limitations

Selection bias and information bias of differing magnitudes were found to be present in all current studies investigating wind turbine noise exposure and adverse health effects. Only articles published in English, German or Scandinavian languages were reviewed.

Conclusions

Exposure to wind turbines does seem to increase the risk of annoyance and self-reported sleep disturbance in a dose-response relationship. There appears, though, to be a tolerable level of around 35 dB LAeq. For the many other claimed health effects of wind turbine noise exposure reported in the literature, however, no conclusive evidence could be found. Future studies should focus on investigations aimed at objectively demonstrating whether or not measurable health-related outcomes fluctuate depending on exposure to wind turbines.

20.

Background

Recent research on speech has addressed the suppression of cortical sensory responses to altered auditory feedback at utterance onset. However, there is reason to assume that the mechanisms underlying sensorimotor processing at mid-utterance differ from those involved in sensorimotor control at utterance onset. The present study examined the dynamics of event-related potentials (ERPs) to different acoustic versions of auditory feedback at mid-utterance.

Methodology/Principal Findings

Subjects produced a vowel sound while hearing, via headphones, their pitch-shifted voice (100 cents; see the small conversion sketch after this abstract), a sum of their vocalization and pure tones, or a sum of their vocalization and white noise at mid-utterance. Subjects also passively listened to playback of what they heard during active vocalization. Cortical ERPs were recorded in response to the different acoustic versions of feedback changes during both active vocalization and passive listening. The results showed that, relative to passive listening, active vocalization yielded enhanced P2 responses to the 100-cent pitch shifts, whereas suppression of P2 responses was observed when voice auditory feedback was distorted by pure tones or white noise.

Conclusion/Significance

The present findings, for the first time, demonstrate a dynamic modulation of cortical activity as a function of the quality of acoustic feedback at mid-utterance, suggesting that auditory cortical responses can be enhanced or suppressed to distinguish self-produced speech from externally-produced sounds.
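For reference, the 100-cent shift used above corresponds to a frequency ratio of 2^(100/1200) ≈ 1.059 (one semitone). A small sketch with an assumed vocal fundamental:

```python
def cents_to_ratio(cents):
    """Frequency ratio for a pitch shift given in cents (100 cents = 1 semitone)."""
    return 2.0 ** (cents / 1200.0)

f0 = 220.0                              # hypothetical vocal fundamental (Hz)
shifted = f0 * cents_to_ratio(100)      # the +100 cent feedback condition
print(round(shifted, 2))                # ~233.08 Hz, i.e. roughly a 5.9% increase
```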

