Similar Articles
 20 similar articles found
1.

Background

The auditory continuity illusion, or the perceptual restoration of a target sound briefly interrupted by an extraneous sound, has been shown to depend on masking. However, little is known about factors other than masking.

Methodology/Principal Findings

We examined whether a sequence of flanking transient sounds affects the apparent continuity of a target tone alternated with a bandpass noise at regular intervals. The flanking sounds significantly increased the limit of perceiving apparent continuity in terms of the maximum target level at a fixed noise level, irrespective of the frequency separation between the target and flanking sounds: the flanking sounds enhanced the continuity illusion. This effect was dependent on the temporal relationship between the flanking sounds and noise bursts.

Conclusions/Significance

The spectrotemporal characteristics of the enhancement effect suggest that a mechanism to compensate for exogenous attentional distraction may contribute to the continuity illusion.

2.

Background

Humans can easily restore a speech signal that is temporally masked by an interfering sound (e.g., a cough masking parts of a word in a conversation), and listeners have the illusion that the speech continues through the interfering sound. This perceptual restoration of human speech is affected by prior experience. Here we provide evidence for perceptual restoration of the complex vocalizations of a songbird, which are acquired by vocal learning in a similar way to how humans learn their language.

Methodology/Principal Findings

European starlings were trained in a same/different paradigm to report salient differences between successive sounds. The birds' response latency for discriminating between a stimulus pair is an indicator of the salience of the difference, and these latencies can be used to evaluate perceptual distances using multidimensional scaling. For familiar motifs, the birds showed a large perceptual distance when discriminating between complete motifs and motifs that were muted for brief periods. If the muted periods were filled with noise, the perceptual distance was reduced. For unfamiliar motifs no such difference was observed.
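As an aside, the step from discrimination latencies to perceptual distances can be illustrated with a minimal sketch. This is not the authors' analysis pipeline; it only assumes that pairwise latencies are converted into a symmetric dissimilarity matrix (the values below are made up) and embedded with metric multidimensional scaling.

import numpy as np
from sklearn.manifold import MDS

# Hypothetical mean response latencies (s) for three stimuli compared pairwise;
# faster responses are taken to indicate more salient differences.
latencies = np.array([
    [0.0, 1.2, 0.9],
    [1.2, 0.0, 1.5],
    [0.9, 1.5, 0.0],
])

# Convert latency to dissimilarity: shorter latency -> larger perceptual distance.
dissimilarity = latencies.max() - latencies
np.fill_diagonal(dissimilarity, 0.0)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)   # 2-D perceptual map of the stimuli
print(coords)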

Conclusions/Significance

The results suggest that starlings are able to perceptually restore partly masked sounds and, similarly to humans, rely on prior experience. They may be a suitable model to study the mechanism underlying experience-dependent perceptual restoration.

3.

Background

Research on biological motion perception has traditionally been restricted to the visual modality. Recent neurophysiological and behavioural evidence, however, supports the idea that actions are not represented merely visually but rather audiovisually. The goal of the present study was to test whether the perceived in-depth orientation of depth-ambiguous point-light walkers (plws) is affected by the presentation of looming or receding sounds synchronized with the footsteps.

Methodology/Principal Findings

In Experiment 1, orthographic frontal/back projections of plws were presented either without sound or with sounds whose intensity was rising (looming), falling (receding), or stationary. Despite instructions to ignore the sounds and to report only the visually perceived in-depth orientation, plws accompanied by looming sounds were more often judged to be facing the viewer, whereas plws paired with receding sounds were more often judged to be facing away from the viewer. To test whether the effects observed in Experiment 1 act at a perceptual rather than a decisional level, in Experiment 2 observers perceptually compared orthographic plws, presented without sound or paired with either looming or receding sounds, to plws without sound but with perspective cues that made them objectively face towards or away from the viewer. Judging whether an orthographic plw or a plw with looming (receding) perspective cues is visually most looming becomes harder (easier) when the orthographic plw is paired with looming sounds.

Conclusions/Significance

The present results suggest that looming and receding sounds alter judgements of the in-depth orientation of depth-ambiguous point-light walkers. While looming sounds are shown to act at a perceptual level and make plws look more looming, it remains a challenge for future research to clarify at what level in the processing hierarchy receding sounds affect how observers judge the in-depth orientation of plws.

4.
Papes S, Ladich F. PLoS ONE 2011, 6(10): e26479

Background

Sound production and hearing sensitivity in ectothermic animals are affected by ambient temperature. This is the first study to investigate the influence of temperature on both sound production and hearing abilities in a fish species, the neotropical Striped Raphael catfish Platydoras armatulus.

Methodology/Principal Findings

Doradid catfishes produce stridulation sounds by rubbing the pectoral spines in the shoulder girdle and drumming sounds with an elastic spring mechanism that vibrates the swimbladder. Eight fish were acclimated for at least three weeks to 22°C, then to 30°C, and again to 22°C. Sounds were recorded in distress situations while the fish were hand-held. The stridulation sounds became shorter at the higher temperature, whereas pulse number, maximum pulse period and sound pressure level did not change with temperature. The dominant frequency increased when the temperature was raised to 30°C, and the minimum pulse period became longer when the temperature decreased again. The fundamental frequency of drumming sounds increased at the higher temperature. Using the auditory evoked potential (AEP) recording technique, hearing thresholds were tested at six frequencies from 0.1 to 4 kHz. Temporal resolution was determined by analyzing the minimum resolvable click period (0.3–5 ms). Hearing sensitivity was higher at the higher temperature, and the differences were more pronounced at higher frequencies. In general, latencies of AEPs in response to single clicks became shorter at the higher temperature, whereas temporal resolution in response to double clicks did not change.
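For readers unfamiliar with the sound measurements mentioned above, the sketch below shows one generic way to estimate a dominant frequency from a recording: take the magnitude spectrum and pick its peak. It is not the authors' analysis; the signal is synthetic, and file handling and windowing are omitted.

import numpy as np

fs = 44100                        # sampling rate, Hz
t = np.arange(0, 0.5, 1.0 / fs)   # 0.5 s of signal
# Synthetic stand-in for a recorded stridulation sound: a 180 Hz tone plus noise.
sound = np.sin(2 * np.pi * 180.0 * t) + 0.3 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(sound))
freqs = np.fft.rfftfreq(sound.size, d=1.0 / fs)
dominant_hz = freqs[np.argmax(spectrum)]
print(f"dominant frequency ≈ {dominant_hz:.0f} Hz")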

Conclusions/Significance

These data indicate that sound characteristics as well as hearing abilities are affected by temperature in fishes. Constraints imposed on hearing sensitivity at different temperatures cannot be compensated for even by longer acclimation periods. These changes in sound production and detection suggest that acoustic orientation and communication are affected by temperature changes in the neotropical catfish P. armatulus.

5.

Background

Fundamental to understanding the evolution of communication systems are both the variation in a signal and how it affects the behavior of receivers, and the variation in receiver preference functions and how it affects the variability of the signal. However, individual differences in female preference functions and their proximate causation have rarely been studied.

Methodology/Principal Findings

Calling songs of male field crickets represent secondary sexual characters and are subject to sexual selection by female choice. Following predictions from the “matched filter hypothesis”, we studied the tuning of an identified interneuron in a field cricket, known for its function in phonotaxis, and correlated this with the preference of the same females in two-choice trials. Females vary in their neuronal frequency tuning, which strongly predicts their preference in a choice between two songs differing in carrier frequency. A second “matched filter” exists in directional hearing, where reliable cues for sound localization occur only in a narrow frequency range. There is a strong correlation between directional tuning and behavioural preference in no-choice tests. This second “matched filter” also varies widely among females and, surprisingly, differs on average by 400 Hz from the neuronal frequency tuning.

Conclusions/Significance

Our findings on the mismatch of the two “matched filters” suggest that the difference between these two filters is caused by their evolutionary history and by the different trade-offs that exist between sound emission, transmission and detection, as well as directional hearing under specific ecological settings. The mismatched filter situation may ultimately explain the maintenance of considerable variation in the carrier frequency of the male signal despite stabilizing selection.

6.
Ren T, He W, Porsov E. PLoS ONE 2011, 6(5): e20149

Background

To detect soft sounds, the mammalian cochlea increases its sensitivity by amplifying incoming sounds up to one thousand times. Although the cochlear amplifier is thought to be a local cellular process at an area basal to the response peak on the spiral basilar membrane, its location has not been demonstrated experimentally.

Methodology and Principal Findings

Using a sensitive laser interferometer to measure sub-nanometer vibrations at two locations along the basilar membrane in sensitive gerbil cochleae, here we show that the cochlea can boost soft sound-induced vibrations by as much as 50 dB/mm in an area proximal to the response peak on the basilar membrane. The observed amplification works maximally at low sound levels and at frequencies immediately below the peak-response frequency of the measured apical location. The amplification decreases by more than 65 dB/mm as the sound level increases.
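The 50 dB/mm figure can be made concrete with a minimal calculation sketch (not the authors' analysis): given vibration magnitudes measured at two basilar-membrane locations a known distance apart, the spatial gain is the amplitude ratio in decibels divided by the separation. The displacement values below are hypothetical.

import math

amp_basal = 0.1e-9      # hypothetical displacement at the more basal location, m
amp_apical = 31.6e-9    # hypothetical displacement at the location nearer the peak, m
distance_mm = 1.0       # assumed separation between the two measurement points, mm

# Gain in dB per millimetre along the basilar membrane.
gain_db_per_mm = 20.0 * math.log10(amp_apical / amp_basal) / distance_mm
print(f"{gain_db_per_mm:.1f} dB/mm")   # ≈ 50 dB/mm for these example values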

Conclusions and Significance

We conclude that the cochlear amplifier resides in a small longitudinal region basal to the response peak in the sensitive cochlea. These data provide critical information for advancing our knowledge of the cochlear mechanisms responsible for the remarkable hearing sensitivity, frequency selectivity and dynamic range.

7.

Background

Vocal learning is a central functional constituent of human speech, and recent studies showing that adult male mice emit ultrasonic sound sequences characterized as “songs” have suggested that the ultrasonic courtship sounds of mice provide a mammalian model of vocal learning.

Objectives

We tested whether mouse songs are learned, by examining the relative role of rearing environment in a cross-fostering experiment.

Methods and Findings

We found that C57BL/6 and BALB/c males emit clearly different patterns of songs with different frequency and syllable compositions; C57BL/6 males showed a higher peak syllable frequency, shorter intervals between syllables, and more upward frequency modulations with jumps, whereas BALB/c males produced more “chevron” and “harmonics” syllables. To establish the degree of environmental influence on mouse song development, sons of these two strains were cross-fostered to parents of the other strain. Songs were recorded once the cross-fostered pups were fully developed and were compared with those of male mice reared by their genetic parents. The cross-fostered animals sang songs with acoustic characteristics (including syllable interval, peak frequency, and modulation patterns) similar to those of their genetic parents. In addition, their song elements retained sequential characteristics similar to those of their genetic parents' songs.

Conclusion

These results do not support the hypothesis that mouse “song” is learned; we found no evidence for vocal learning of any sort under the conditions of this experiment. Our observation that the strain-specific character of the song profile persisted even after changing the developmental auditory environment suggests that the structure of these courtship sound sequences is under strong genetic control. Thus, the usefulness of mouse “song” as a model of mammalian vocal learning is limited, but mouse song has the potential to be an indispensable model to study genetic mechanisms for vocal patterning and behavioral sequences.

8.

Background

Male parasitic wasps attract females with a courtship song produced by rapid wing fanning. Songs have been described for several parasitic wasp species; however, beyond association with wing fanning, the mechanism of sound generation has not been examined. We characterized the male courtship song of Cotesia congregata (Hymenoptera: Braconidae) and investigated the biomechanics of sound production.

Methods and Principal Findings

Courtship songs were documented with high-speed videography (2,000 fps) and audio recordings. The song consists of a long-duration, amplitude-modulated “buzz” followed by a series of pulsatile, higher-amplitude “boings,” each decaying into a terminal buzz followed by a short inter-boing pause while the wings are stationary. Boings have higher amplitude and lower frequency than buzz components. The lower frequency of the boing sound is due to greater wing displacement. The power spectrum is a harmonic series dominated by the wing repetition rate (∼220 Hz), but the sound waveform indicates a higher-frequency resonance (∼5 kHz). Sound is not generated by the wings contacting each other, the substrate, or the abdomen. The abdomen is elevated during the first several wing cycles of the boing, but its position is unrelated to sound amplitude. Unlike most sounds generated by volume velocity, the boing is generated at the termination of the wing downstroke, when displacement is maximal and wing velocity is zero. Calculation indicates a low Reynolds number of ∼1000.
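The Reynolds number quoted above follows from Re = v·L/ν. The sketch below reproduces a value of that order; the wing length and tip speed are illustrative assumptions, not measurements from the paper.

wing_length_m = 3.0e-3    # assumed characteristic wing length, m
tip_speed_m_s = 5.0       # assumed mean wing-tip speed during fanning, m/s
nu_air = 1.5e-5           # kinematic viscosity of air, m^2/s

reynolds = tip_speed_m_s * wing_length_m / nu_air
print(f"Re ≈ {reynolds:.0f}")   # ~1000, the order reported above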

Conclusions and Significance

Acoustic pressure is proportional to velocity for typical sound sources. Our finding that the boing sound was generated at maximal wing displacement, coincident with the cessation of wing motion, indicates that it is caused by acceleration of the wing tips, consistent with a dipole source. The low Reynolds number requires a high wing-flap rate for flight and predisposes the wings of small insects to sound production.

9.

Background

Many people with tinnitus also suffer from hyperacusis. Both clinical and basic scientific data indicate an overlap in pathophysiologic mechanisms. To further elucidate the interplay between tinnitus and hyperacusis, we compared clinical and demographic characteristics of tinnitus patients with and without hyperacusis by analyzing a large sample from an international tinnitus patient database.

Materials

The default dataset import (November 1st, 2012) from the Tinnitus Research Initiative (TRI) Database was used for the analyses. Hyperacusis was defined by the question “Do sounds cause you pain or physical discomfort?” of the Tinnitus Sample Case History Questionnaire. Patients who answered “yes” were contrasted with those who answered “no” with respect to 41 variables.

Results

935 (55%) of 1713 patients were characterized as hyperacusis patients. Hyperacusis in tinnitus was associated with younger age; higher tinnitus-related, mental and general distress; and higher rates of pain disorders and vertigo. Relative to objective audiological assessment, patients with hyperacusis rated their subjective hearing function worse than those without hyperacusis. Similarly, hyperacusis patients rated their tinnitus pitch higher relative to the audiometrically determined tinnitus pitch. Among patients with tinnitus and hyperacusis, the tinnitus was more frequently modulated by external noise and somatic maneuvers, i.e., exposure to environmental sounds and head and neck movements changed the tinnitus percept.

Conclusions

Our findings suggest that comorbid hyperacusis is a useful criterion for defining a subtype of tinnitus characterized by a greater need for treatment. The higher sensitivity to auditory, somatosensory and vestibular input confirms the notion of overactivation of a nonspecific hypervigilance network in tinnitus patients with hyperacusis.

10.

Background

Most research on the role of auditory information and its interaction with vision has focused on perceptual performance. Little is known about the effects of sound cues on visually guided hand movements.

Methodology/Principal Findings

We recorded the sound produced by the fingers upon contact as participants grasped stimulus objects covered with different materials. In a further session, the pre-recorded contact sounds were delivered to participants via headphones before or after the initiation of reach-to-grasp movements towards the stimulus objects. Reach-to-grasp movement kinematics were measured under the following conditions: (i) congruent, in which the presented contact sound corresponded to the contact sound elicited by the to-be-grasped stimulus; (ii) incongruent, in which the presented contact sound differed from that generated by the stimulus upon contact; and (iii) control, in which a synthetic sound not associated with a real event was presented. Facilitation effects were found for congruent trials; interference effects were found for incongruent trials. In a second experiment, the upper and lower parts of the stimulus were covered with different materials. The presented sound was always congruent with the material covering either the upper or the lower half of the stimulus. Participants consistently placed their fingers on the half of the stimulus that corresponded to the presented contact sound.

Conclusions/Significance

Altogether, these findings offer a substantial contribution to the current debate about the type of object representations elicited by auditory stimuli and about the multisensory nature of the sensorimotor transformations underlying action.

11.

Background

Previous work on the human auditory cortex has revealed areas specialized in spatial processing, but how the neurons in these areas represent the location of a sound source remains unknown.

Methodology/Principal Findings

Here, we performed a magnetoencephalography (MEG) experiment with the aim of revealing the neural code of auditory space implemented by the human cortex. In a stimulus-specific adaptation paradigm, realistic spatial sound stimuli were presented in pairs of adaptor and probe locations. We found that the attenuation of the N1m response depended strongly on the spatial arrangement of the two sound sources. These location-specific effects showed that sounds originating from locations within the same hemifield activated the same neuronal population regardless of the spatial separation between the sound sources. In contrast, sounds originating from opposite hemifields activated separate groups of neurons.

Conclusions/Significance

These results are highly consistent with a rate code of spatial location formed by two opponent populations, one tuned to locations in the left hemifield and the other to locations in the right. This indicates that the neuronal code of sound-source location implemented by the human auditory cortex is similar to that previously found in other primates.
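The opponent-population idea can be illustrated with a toy model (ours, not the authors'): two broadly tuned populations, one preferring the left hemifield and one the right, whose firing-rate difference varies monotonically with azimuth, while two sources within the same hemifield drive largely the same population.

import numpy as np

def population_rate(azimuth_deg, preferred_side):
    """Broad sigmoidal tuning over azimuth; preferred_side is +1 (right) or -1 (left)."""
    return 1.0 / (1.0 + np.exp(-preferred_side * azimuth_deg / 20.0))

azimuths = np.array([-80.0, -40.0, 0.0, 40.0, 80.0])
right_pop = population_rate(azimuths, +1)
left_pop = population_rate(azimuths, -1)

# Opponent read-out: the rate difference encodes azimuth across the midline.
print(np.round(right_pop - left_pop, 2))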

12.
Perez E, Edmonds BA. PLoS ONE 2012, 7(3): e31831

Objective

A systematic review was conducted to identify and quality-assess how studies published since 1999 have measured and reported hearing aid usage in older adults. The relationships between usage and other dimensions of hearing aid outcome, age and hearing loss are summarised.

Data sources

Articles were identified through systematic searches in PubMed/MEDLINE, The University of Nottingham Online Catalogue and Web of Science, and through reference checking. Study eligibility criteria: (1) participants aged fifty years or over with sensorineural hearing loss, (2) provision of an air-conduction hearing aid, (3) inclusion of hearing aid usage measure(s), and (4) published between 1999 and 2011.

Results

Of the initial 1933 papers obtained from the searches, a total of 64 were found eligible for review and were quality-assessed on six dimensions: study design, choice of outcome instruments, level of reporting (usage, age, and audiometry) and cross-validation of usage measures. Five papers were rated as high quality (scoring 10–12), 35 as moderate quality (scoring 7–9), 22 as low quality (scoring 4–6) and two as very low quality (scoring 0–2). Fifteen different methods were identified for assessing the usage of hearing aids.
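The quality banding used above maps directly onto a small helper; the cut-offs follow the review's 0–12 scale, while the function name is ours.

def quality_band(score: int) -> str:
    """Map a 0-12 quality score to the bands used in the review."""
    if 10 <= score <= 12:
        return "high"
    if 7 <= score <= 9:
        return "moderate"
    if 4 <= score <= 6:
        return "low"
    if 0 <= score <= 2:
        return "very low"
    return "unclassified"   # a score of 3 is not assigned a band in the abstract

print([quality_band(s) for s in (11, 8, 5, 1)])   # ['high', 'moderate', 'low', 'very low']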

Conclusions

Generally, the usage data reviewed were not well specified. There was a lack of consistency and robustness in the way that hearing aid usage was assessed and categorised. There is a need for a more standardised level of reporting of hearing aid usage data to further understand the relationship between usage and hearing aid outcomes.

13.

Background

Data on sex-specific differences in sound production, acoustic behaviour and hearing abilities in fishes are rare. Representatives of numerous catfish families are known to produce sounds in agonistic contexts (intraspecific aggression and interspecific disturbance situations) using their pectoral fins. The present study investigates differences in agonistic behaviour, sound production and hearing abilities in males and females of a callichthyid catfish.

Methodology/Principal Findings

Eight males and nine females of the armoured catfish Megalechis thoracata were investigated. Agonistic behaviour displayed during male-male and female-female dyadic contests and the sounds emitted were recorded, sound characteristics were analysed, and hearing thresholds were measured using the auditory evoked potential (AEP) recording technique. Male pectoral spines were on average 1.7-fold longer than those of same-sized females. Visual and acoustic threat displays differed between the sexes. Males produced low-frequency harmonic barks at longer distances and thumps at close distances, whereas females emitted broad-band pulsed crackles when close to each other. Female aggressive sounds were significantly shorter than those of males (167 ms versus 219 to 240 ms) and of higher dominant frequency (562 Hz versus 132 to 403 Hz). Sound duration and sound level were positively correlated with body and pectoral spine length, but dominant frequency was inversely correlated only with spine length. Both sexes showed a similar U-shaped hearing curve with the lowest thresholds between 0.2 and 1 kHz and a drop in sensitivity above 1 kHz. The main energies of the sounds were located at the most sensitive frequencies.

Conclusions/Significance

The current data demonstrate that both male and female M. thoracata produce aggressive sounds, but the behavioural contexts and sound characteristics differ between the sexes. The sexes do not differ in hearing, but it remains to be clarified whether this is a general pattern among fishes. This is the first study to describe sex-specific differences in agonistic behaviour in fishes.

14.

Background

Sound production is widespread among fishes and accompanies many social interactions. The literature reports twenty-nine cichlid species known to produce sounds during aggressive and courtship displays, but the precise range of behavioural contexts is unclear. This study aims to describe the various Oreochromis niloticus behaviours associated with sound production in order to delimit the role of sound during different activities, including agonistic behaviours, pit activities, and reproduction and parental care, in males and females of the species.

Methodology/Principal Findings

Sounds mostly occur during the day. The sounds recorded during this study accompany previously known behaviours, and no particular behaviour is systematically associated with sound production. Males and females make sounds during territorial defence but not during courtship and mating. Sounds support visual behaviours but are not used alone. During agonistic interactions, a calling Oreochromis niloticus does not bite after producing sounds, and more sounds are produced when defending a territory than when dominating other individuals. Females produce sounds to defend eggs but not larvae.

Conclusion/Significance

Sounds are produced to reinforce visual behaviours. Moreover, comparisons with O. mossambicus indicate that two sister species can differ in their use of sound, their acoustic characteristics, and the function of sound production. These findings support a role for sounds in differentiating species and promoting speciation. They also make clear that the association of sounds with specific life-cycle roles cannot be generalized to the entire taxon.

15.

Background

Barn owls integrate spatial information across frequency channels to localize sounds in space.

Methodology/Principal Findings

We presented barn owls with synchronous sounds that contained different bands of frequencies (3–5 kHz and 7–9 kHz) from different locations in space. When the owls were confronted with the conflicting localization cues from two synchronous sounds of equal level, their orienting responses were dominated by one of the sounds: they oriented toward the location of the low frequency sound when the sources were separated in azimuth; in contrast, they oriented toward the location of the high frequency sound when the sources were separated in elevation. We identified neural correlates of this behavioral effect in the optic tectum (OT, superior colliculus in mammals), which contains a map of auditory space and is involved in generating orienting movements to sounds. We found that low frequency cues dominate the representation of sound azimuth in the OT space map, whereas high frequency cues dominate the representation of sound elevation.

Conclusions/Significance

We argue that the dominance hierarchy of localization cues reflects several factors: 1) the relative amplitude of the sound providing the cue, 2) the resolution with which the auditory system measures the value of a cue, and 3) the spatial ambiguity in interpreting the cue. These same factors may contribute to the relative weighting of sound localization cues in other species, including humans.

16.
Li H, Wang Q, Steyger PS. PLoS ONE 2011, 6(4): e19130

Background

Exposure to intense sound or high doses of aminoglycoside antibiotics can increase hearing thresholds, induce cochlear dysfunction, disrupt hair cell morphology and promote hair cell death, leading to permanent hearing loss. When the two insults are combined, synergistic ototoxicity occurs, exacerbating cochlear vulnerability to sound exposure. The underlying mechanism of this synergism remains unknown. In this study, we tested the hypothesis that sound exposure enhances the intra-cochlear trafficking of aminoglycosides, such as gentamicin, leading to increased hair cell uptake of aminoglycosides and subsequent ototoxicity.

Methods

Juvenile C57BL/6 mice were exposed to moderate or intense sound levels while fluorescently conjugated or native gentamicin was administered concurrently with or following sound exposure. Drug uptake was then examined in cochlear tissues by confocal microscopy.

Results

Prolonged sound exposure that induced temporary threshold shifts increased gentamicin uptake by cochlear hair cells, and increased gentamicin permeation across the strial blood-labyrinth barrier. Enhanced intra-cochlear trafficking and hair cell uptake of gentamicin also occurred when prolonged sound, and subsequent aminoglycoside exposure were temporally separated, confirming previous observations. Acute, concurrent sound exposure did not increase cochlear uptake of aminoglycosides.

Conclusions

Prolonged, moderate sound exposures enhanced intra-cochlear aminoglycoside trafficking into the stria vascularis and hair cells. Changes in strial and/or hair cell physiology and integrity due to acoustic overstimulation could increase hair cell uptake of gentamicin, and may represent one mechanism of synergistic ototoxicity.

17.

Background

Autistic perception is characterized by atypical and sometimes exceptional performance in several low-level (e.g., discrimination) and mid-level (e.g., pattern matching) tasks in both the visual and auditory domains. A factor that specifically affects perceptual abilities in autistic individuals should manifest as an autism-specific association between perceptual tasks. The first purpose of this study was to explore how perceptual performances are associated within or across processing levels and/or modalities. The second purpose was to determine whether general intelligence, the major factor that accounts for covariation in task performance in non-autistic individuals, equally controls perceptual abilities in autistic individuals.

Methods

We asked 46 autistic individuals and 46 typically developing controls to perform four tasks measuring low- or mid-level visual or auditory processing. Intelligence was measured with Wechsler's Intelligence Scale (FSIQ) and Raven Progressive Matrices (RPM). We conducted linear regression analyses to compare task performance between groups and the patterns of covariation between tasks. The addition of either Wechsler's FSIQ or RPM to the regression models controlled for the effects of intelligence.
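The residual-covariation logic can be sketched as follows (our illustration, not the authors' code): regress each task score on an intelligence measure, then correlate the residuals to see what covariation intelligence does not explain. The column names and simulated data are hypothetical.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 46
iq = rng.normal(100, 15, n)
tasks = pd.DataFrame({
    "visual_low": 0.02 * iq + rng.normal(0, 1, n),
    "auditory_low": 0.02 * iq + rng.normal(0, 1, n),
})

def residualize(y, x):
    """Residuals of y after removing its linear dependence on x."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

residuals = tasks.apply(lambda col: residualize(col, iq))
print(residuals.corr())   # covariation between tasks with intelligence partialled out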

Results

In typically developing individuals, most perceptual tasks were associated with intelligence, measured either by the RPM or the Wechsler FSIQ. The residual covariation between unimodal tasks, i.e., covariation not explained by intelligence, could be explained by a modality-specific factor. In the autistic group, the residual covariation revealed the presence of a plurimodal factor specific to autism.

Conclusions

Autistic individuals show exceptional performance in some perceptual tasks. Here, we demonstrate the existence of a specific, plurimodal covariation that does not depend on general intelligence (the “g” factor). Instead, this residual covariation is accounted for by a common perceptual process (a “p” factor), which may drive perceptual abilities differently in autistic and non-autistic individuals.

18.

Background

The capacity to memorize speech sounds is crucial for language acquisition. Newborn human infants can discriminate phonetic contrasts and extract rhythm, prosodic information, and simple regularities from speech. Yet, there is scarce evidence that infants can recognize common words from the surrounding language before four months of age.

Methodology/Principal Findings

We studied 112 infants aged 1–5 days, using functional near-infrared spectroscopy (fNIRS). We found that newborns tested with a novel bisyllabic word show a greater hemodynamic brain response than newborns tested with a familiar bisyllabic word. We also showed that newborns recognize the familiar word after two minutes of silence or after hearing music, but not after hearing a different word.

Conclusions/Significance

The data show that retroactive interference is an important cause of forgetting in the early stages of language acquisition. Moreover, because neonates forget words in the presence of some, but not all, sounds, the results indicate that the interference phenomenon that causes forgetting is selective.

19.

Background

Approximately 2–4% of newborns with perinatal risk factors present with hearing loss. Our aim was to analyze the effect of hearing aid use on auditory function evaluated based on otoacoustic emissions (OAEs), auditory brain responses (ABRs) and auditory steady state responses (ASSRs) in infants with perinatal brain injury and profound hearing loss.

Methodology/Principal Findings

This was a prospective, longitudinal study of auditory function in infants with profound hearing loss. Right-side hearing before and after hearing aid use was compared with left-side hearing (not stimulated and used as a control). All infants underwent OAE, ABR and ASSR evaluations before and after hearing aid use. The average ABR threshold decreased from 90.0 to 80.0 dB (p = 0.003) after six months of hearing aid use. In the left ear, which served as the control, the ABR threshold decreased from 94.6 to 87.6 dB, which was not significant (p>0.05). In addition, the ASSR threshold at 4000 Hz decreased from 89 dB to 72 dB (p = 0.013) after six months of right-ear hearing aid use; the other frequencies in the right ear and all frequencies in the left ear did not show significant differences in any of the measured parameters (p>0.05). OAEs were absent at baseline and showed no changes after hearing aid use in the right ear (p>0.05).
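As a generic illustration of the before/after threshold comparison (the abstract does not state which test yielded p = 0.003), a paired t-test on hypothetical per-infant ABR thresholds might look like this:

import numpy as np
from scipy import stats

# Hypothetical right-side ABR thresholds (dB) per infant, before and after six months of hearing aid use.
before_db = np.array([95.0, 90.0, 90.0, 85.0, 95.0, 90.0, 85.0, 90.0])
after_db = np.array([85.0, 80.0, 80.0, 75.0, 85.0, 80.0, 80.0, 75.0])

t_stat, p_value = stats.ttest_rel(before_db, after_db)
print(f"mean change = {np.mean(after_db - before_db):.1f} dB, p = {p_value:.4f}")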

Conclusions/Significance

This study provides evidence that early hearing aid use decreases the hearing threshold in ABR and ASSR assessments with no functional modifications in the auditory receptor, as evaluated by OAEs.

20.

Background

The Weberian apparatus of otophysine fishes facilitates sound transmission from the swimbladder to the inner ear, increasing hearing sensitivity. It has been of great interest to biologists since the 19th century. No studies, however, are available on the development of the Weberian ossicles and their effect on the development of hearing in catfishes.

Methodology/Principal Findings

We investigated the development of the Weberian apparatus and auditory sensitivity in the catfish Lophiobagrus cyclurus. Specimens from 11.3 mm to 85.5 mm in standard length were studied. Morphology was assessed using sectioning, histology, and X-ray computed tomography, along with 3D reconstruction. Hearing thresholds were measured using the auditory evoked potential (AEP) recording technique. The Weberian ossicles and interossicular ligaments were fully developed in all stages investigated except the smallest size group. In the smallest catfish, the intercalarium and the interossicular ligaments were still missing and the tripus was not yet fully developed. The smallest juveniles showed the lowest auditory sensitivity and were unable to detect frequencies higher than 2 or 3 kHz; sensitivity increased in larger specimens by up to 40 dB, and frequency detection extended up to 6 kHz. In the size groups capable of perceiving frequencies up to 6 kHz, larger individuals had better hearing abilities at low frequencies (0.05–2 kHz), whereas smaller individuals showed better hearing at the highest frequencies (4–6 kHz).

Conclusions/Significance

Our data indicate that the ability of otophysine fish to detect sounds at low levels and high frequencies largely depends on the development of the Weberian apparatus. A significant increase in auditory sensitivity was observed as soon as all Weberian ossicles and interossicular ligaments were present and the chain for transmitting sounds from the swimbladder to the inner ear was complete. This contrasts with findings in another otophysine, the zebrafish, where no such threshold changes have been observed.
