Similar documents
20 similar documents found
1.
Cochlear implant (CI) users have difficulty understanding speech in noisy listening conditions and perceiving music. Aided residual acoustic hearing in the contralateral ear can mitigate these limitations. The present study examined contributions of electric and acoustic hearing to speech understanding in noise and melodic pitch perception. Data were collected with the CI only, the hearing aid (HA) only, and both devices together (CI+HA). Speech reception thresholds (SRTs) were adaptively measured for simple sentences in speech babble. Melodic contour identification (MCI) was measured with and without a masker instrument; the fundamental frequency of the masker was varied to be overlapping or non-overlapping with the target contour. Results showed that the CI contributes primarily to bimodal speech perception and that the HA contributes primarily to bimodal melodic pitch perception. In general, CI+HA performance was slightly improved relative to the better ear alone (CI-only) for SRTs but not for MCI, with some subjects experiencing a decrease in bimodal MCI performance relative to the better ear alone (HA-only). Individual performance was highly variable, and the contribution of either device to bimodal perception was both subject- and task-dependent. The results suggest that individualized mapping of CIs and HAs may further improve bimodal speech and music perception.

2.
A significant fraction of newly implanted cochlear implant recipients use a hearing aid in their non-implanted ear. SCORE bimodal is a sound processing strategy developed for this configuration, aimed at normalising loudness perception and improving binaural loudness balance. Speech perception performance in quiet and noise and sound localisation ability of six bimodal listeners were measured with and without application of SCORE. Speech perception in quiet was measured either with only acoustic, only electric, or bimodal stimulation, at soft and normal conversational levels. For speech in quiet there was a significant improvement with application of SCORE. Speech perception in noise was measured for either steady-state noise, fluctuating noise, or a competing talker, at conversational levels with bimodal stimulation. For speech in noise there was no significant effect of application of SCORE. Modelling of interaural loudness differences in a long-term-average-speech-spectrum-weighted click train indicated that left-right discrimination of sound sources can improve with application of SCORE. As SCORE was found to leave speech perception unaffected or to improve it, it seems suitable for implementation in clinical devices.

3.
Binaural hearing involves using information relating to the differences between the signals that arrive at the two ears, and it can make it easier to detect and recognize signals in a noisy environment. This phenomenon of binaural hearing is quantified in laboratory studies as the binaural masking-level difference (BMLD). Mandarin is one of the most commonly used languages, but there are no published values of the BMLD or the binaural intelligibility-level difference (BILD) based on Mandarin tones. Therefore, this study investigated the BMLD and BILD of Mandarin tones. The BMLDs of Mandarin tone detection were measured based on the detection threshold differences for the four tones of the voiced vowels /i/ (i.e., /i1/, /i2/, /i3/, and /i4/) and /u/ (i.e., /u1/, /u2/, /u3/, and /u4/) in the presence of speech-spectrum noise when presented interaurally in phase (S0N0) and interaurally in antiphase (SπN0). The BILDs of Mandarin tone recognition in speech-spectrum noise were determined as the differences in the target-to-masker ratio (TMR) required for 50% correct tone recognition between the S0N0 and SπN0 conditions. The detection thresholds for the four tones of /i/ and /u/ differed significantly (p<0.001) between the S0N0 and SπN0 conditions. The average detection thresholds of Mandarin tones were all lower in the SπN0 condition than in the S0N0 condition, and the BMLDs ranged from 7.3 to 11.5 dB. The TMR for 50% correct Mandarin tone recognition differed significantly (p<0.001) between the S0N0 and SπN0 conditions, at –13.4 and –18.0 dB, respectively, with a mean BILD of 4.6 dB. The study showed that the thresholds of Mandarin tone detection and recognition in the presence of speech-spectrum noise are improved when phase inversion is applied to the target speech. The average BILDs of Mandarin tones are smaller than the average BMLDs of Mandarin tones.
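The BMLD and BILD are simple threshold differences between the diotic (S0N0) and antiphasic (SπN0) conditions. A minimal sketch in Python; the numbers below are the mean TMRs reported in this abstract:

```python
def masking_level_difference(threshold_s0n0_db, threshold_spin0_db):
    """Binaural masking-level difference: the release from masking obtained
    by inverting the interaural phase of the signal (S0N0 minus SpiN0).
    A positive value means the antiphasic condition has the lower threshold."""
    return threshold_s0n0_db - threshold_spin0_db

# Mean TMRs for 50% correct Mandarin tone recognition from the abstract:
# -13.4 dB (S0N0) and -18.0 dB (SpiN0), giving the reported mean BILD.
bild = masking_level_difference(-13.4, -18.0)
print(bild)  # 4.6 dB
```

The same subtraction applied to detection thresholds of the individual tones yields the per-tone BMLDs (7.3 to 11.5 dB reported here).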

4.
The most common complaint of older hearing impaired (OHI) listeners is difficulty understanding speech in the presence of noise. However, tests of consonant identification and sentence reception threshold (SeRT) provide different perspectives on the magnitude of impairment. Here we quantified speech perception difficulties in 24 OHI listeners in unaided and aided conditions by analyzing (1) consonant-identification thresholds and consonant confusions for 20 onset and 20 coda consonants in consonant-vowel-consonant (CVC) syllables presented at consonant-specific signal-to-noise (SNR) levels, and (2) SeRTs obtained with the Quick Speech in Noise Test (QSIN) and the Hearing in Noise Test (HINT). Compared to older normal hearing (ONH) listeners, nearly all unaided OHI listeners showed abnormal consonant-identification thresholds, abnormal consonant confusions, and reduced psychometric function slopes. Average elevations in consonant-identification thresholds exceeded 35 dB, correlated strongly with impairments in mid-frequency hearing, and were greater for hard-to-identify consonants. Advanced digital hearing aids (HAs) improved average consonant-identification thresholds by more than 17 dB, with significant HA benefit seen in 83% of OHI listeners. HAs partially normalized consonant-identification thresholds, reduced abnormal consonant confusions, and increased the slope of psychometric functions. Unaided OHI listeners showed much smaller elevations in SeRTs (mean 6.9 dB) than in consonant-identification thresholds, and SeRTs in unaided listening conditions correlated strongly (r = 0.91) with identification thresholds of easily identified consonants. HAs produced minimal SeRT benefit (2.0 dB), with only 38% of OHI listeners showing significant improvement. HA benefit on SeRTs was accurately predicted (r = 0.86) by HA benefit on easily identified consonants. Consonant-identification tests can accurately predict sentence processing deficits and HA benefit in OHI listeners.

5.
Spectrotemporal modulation (STM) detection performance was examined for cochlear implant (CI) users. The test involved discriminating between an unmodulated steady noise and a modulated stimulus. The modulated stimulus presents frequency modulation patterns that change in frequency over time. In order to examine STM detection performance for different modulation conditions, two different temporal modulation rates (5 and 10 Hz) and three different spectral modulation densities (0.5, 1.0, and 2.0 cycles/octave) were employed, producing a total of 6 different STM stimulus conditions. In order to explore how electric hearing constrains STM sensitivity for CI users differently from acoustic hearing, normal-hearing (NH) and hearing-impaired (HI) listeners were also tested on the same tasks. STM detection performance was best in NH subjects, followed by HI subjects. On average, CI subjects showed the poorest performance, but some CI subjects showed high levels of STM detection performance that were comparable to acoustic hearing. Significant correlations were found between STM detection performance and speech identification performance in quiet and in noise. In order to understand the relative contribution of spectral and temporal modulation cues to speech perception abilities for CI users, spectral and temporal modulation detection was performed separately and related to STM detection and speech perception performance. The results suggest that slow spectral modulation rather than slow temporal modulation may be important for determining speech perception capabilities for CI users. Lastly, test–retest reliability for STM detection was good, with no learning effects. The present study demonstrates that STM detection may be a useful tool to evaluate the ability of CI sound processing strategies to deliver clinically pertinent acoustic modulation information.
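STM stimuli of this kind are typically moving "ripples": a sinusoidal spectral envelope in log-frequency that drifts over time. A minimal sketch of the modulation pattern, assuming a generic ripple formulation and an arbitrary 4-octave carrier band; the abstract does not specify the exact synthesis parameters:

```python
import numpy as np

def ripple_envelope(freqs_hz, times_s, density_cyc_per_oct, rate_hz,
                    depth=1.0, f0=350.0):
    """Spectrotemporal modulation pattern of a moving-ripple stimulus:
    a sinusoid in log2-frequency (density, cycles/octave) drifting in time
    (rate, Hz). Returns linear gains to apply to a noise carrier.
    f0 (lower band edge) and depth are assumed values for illustration."""
    octs = np.log2(np.asarray(freqs_hz) / f0)          # position in octaves
    phase = 2 * np.pi * (density_cyc_per_oct * octs[:, None]
                         + rate_hz * np.asarray(times_s)[None, :])
    return 1.0 + depth * np.sin(phase)

# One of the six conditions above: 1.0 cyc/oct density, 5 Hz rate.
freqs = np.geomspace(350, 5600, 32)    # assumed 4-octave band
times = np.linspace(0, 1, 100)
env = ripple_envelope(freqs, times, density_cyc_per_oct=1.0, rate_hz=5.0)
```

Multiplying a flat noise spectrogram by `env` (versus leaving it unmodulated) produces the two intervals of the discrimination task; threshold is usually tracked in modulation depth.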

6.
The habitat ambient noise may exert an important selective pressure on frequencies used in acoustic communication by animals. A previous study demonstrated the presence of a match between the low-frequency quiet region of the stream ambient noise (termed ‘quiet window’) and the main frequencies used for sound production and hearing by two stream gobies (Padogobius bonelli, Gobius nigricans). The present study examines the spectral features of ambient noise in very shallow freshwater, brackish and marine habitats and correlates them to the range of dominant frequencies of sounds used by nine species of Mediterranean gobies reproducing in these environments. Ambient noise spectra of these habitats featured a low-frequency quiet window centered at 100 Hz (stream, sandy/rocky sea shore), or at 200 Hz (spring, brackish lagoon). The analysis of the ambient noise/sound spectrum relationships showed the sound frequencies matched the frequency band of the quiet window in the ambient noise typical of their own habitat. Analogous ambient noise/sound frequency relationships were observed in other shallow-water teleosts living in similar underwater environments. Conclusions may be relevant to the understanding of evolution of fish acoustic communication and hearing.

7.
In the real world, human speech recognition nearly always involves listening in background noise. The impact of such noise on speech signals and on intelligibility performance increases with the separation of the listener from the speaker. The present behavioral experiment provides an overview of the effects of such acoustic disturbances on speech perception in conditions approaching ecologically valid contexts. We analysed the intelligibility loss in spoken word lists with increasing listener-to-speaker distance in a typical low-level natural background noise. The noise was combined with the simple spherical amplitude attenuation due to distance, basically changing the signal-to-noise ratio (SNR). Therefore, our study draws attention to some of the most basic environmental constraints that have pervaded spoken communication throughout human history. We evaluated the ability of native French participants to recognize French monosyllabic words (spoken at 65.3 dB(A), reference at 1 meter) at distances from 11 to 33 meters, which corresponded to the SNRs most revealing of the progressive effect of the selected natural noise (−8.8 dB to −18.4 dB). Our results showed that in such conditions the identity of vowels is largely preserved, with strikingly few vowel confusions. The results also confirmed the functional role of consonants during lexical identification. The extensive analysis of recognition scores, confusion patterns and associated acoustic cues revealed that sonorant, sibilant and burst properties were the most important parameters influencing phoneme recognition. Altogether these analyses allowed us to extract a resistance scale from consonant recognition scores. We also identified specific perceptual consonant confusion groups depending on their position in the word (onset vs. coda). Finally, our data suggested that listeners may access some acoustic cues of the CV transition, opening interesting perspectives for future studies.
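The distance manipulation described above amounts to free-field spherical spreading: each doubling of distance lowers the speech level by 6 dB while the noise floor stays fixed. A small sketch, assuming a constant background of about 53.3 dB(A); that value is not stated in the abstract and is chosen here only because it reproduces the reported SNRs:

```python
import math

def speech_level_at_distance(level_at_1m_dba, distance_m):
    """Spherical spreading: level drops 20*log10(d) dB re the 1 m reference
    (6 dB per doubling of distance)."""
    return level_at_1m_dba - 20 * math.log10(distance_m)

def snr_at_distance(level_at_1m_dba, noise_dba, distance_m):
    return speech_level_at_distance(level_at_1m_dba, distance_m) - noise_dba

# 65.3 dB(A) speech at 1 m (from the abstract) against an assumed
# ~53.3 dB(A) background reproduces the reported SNR endpoints.
print(round(snr_at_distance(65.3, 53.3, 11), 1))   # -8.8
print(round(snr_at_distance(65.3, 53.3, 33), 1))   # -18.4
```

Real outdoor propagation adds absorption and ground effects on top of this geometric term, so the sketch is a lower bound on the attenuation.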

8.

Objectives

(1) To evaluate the recognition of words, phonemes and lexical tones in audiovisual (AV) and auditory-only (AO) modes in Mandarin-speaking adults with cochlear implants (CIs); (2) to understand the effect of presentation levels on AV speech perception; (3) to learn the effect of hearing experience on AV speech perception.

Methods

Thirteen deaf adults (age = 29.1±13.5 years; 8 male, 5 female) who had used CIs for >6 months and 10 normal-hearing (NH) adults participated in this study. Seven of the CI users were prelingually deaf, and 6 postlingually deaf. The Mandarin Monosyllabic Word Recognition Test was used to assess recognition of words, phonemes and lexical tones in AV and AO conditions at 3 presentation levels: speech detection threshold (SDT), speech recognition threshold (SRT) and 10 dB SL (re: SRT).

Results

The prelingual group had better phoneme recognition in the AV mode than in the AO mode at SDT and SRT (both p = 0.016), and so did the NH group at SDT (p = 0.004). No mode difference was noted in the postlingual group. None of the groups had significantly different tone recognition in the 2 modes. The prelingual and postlingual groups had significantly better phoneme and tone recognition than the NH group at SDT in the AO mode (p = 0.016 and p = 0.002 for phonemes; p = 0.001 and p<0.001 for tones) but were outperformed by the NH group at 10 dB SL (re: SRT) in both modes (both p<0.001 for phonemes; p<0.001 and p = 0.002 for tones). The recognition scores correlated significantly with group after controlling for age and sex (p<0.001).

Conclusions

Visual input may help prelingually deaf implantees to recognize phonemes but may not augment Mandarin tone recognition. The effect of presentation level on CI users' AV perception seems minimal. These findings indicate that special considerations are needed when developing audiological assessment protocols and rehabilitation strategies for implantees who speak tonal languages.

9.
Objective: The cochlear implant is a device that helps deaf people recover hearing. Based on bionic principles of the human ear, it stimulates the auditory nerve with a limited number of electrodes to restore hearing. Cochlear implant technology in practical use can already restore a degree of hearing to deaf users in quiet environments. Building on the GIS strategy, this paper applies spectral enhancement to improve cochlear implant performance in noisy environments. In addition, computer simulation and sound synthesis were used to evaluate the sound heard by implant recipients. The experiments yielded good listening results, and the proposed method has practical significance for cochlear implant research and engineering.

10.
In the premature infant, somatosensory and visual stimuli trigger an immature electroencephalographic (EEG) pattern, “delta-brushes,” in the corresponding sensory cortical areas. Whether auditory stimuli evoke delta-brushes in the premature auditory cortex has not been reported. Here, responses to auditory stimuli were studied in 46 premature infants without neurologic risk aged 31 to 38 postmenstrual weeks (PMW) during routine EEG recording. Stimuli consisted of either low-volume technogenic “clicks” near the background noise level of the neonatal care unit, or a human voice at conversational sound level. Stimuli were administrated pseudo-randomly during quiet and active sleep. In another protocol, the cortical response to a composite stimulus (“click” and voice) was manually triggered during EEG hypoactive periods of quiet sleep. Cortical responses were analyzed by event detection, power frequency analysis and stimulus locked averaging. Before 34 PMW, both voice and “click” stimuli evoked cortical responses with similar frequency-power topographic characteristics, namely a temporal negative slow-wave and rapid oscillations similar to spontaneous delta-brushes. Responses to composite stimuli also showed a maximal frequency-power increase in temporal areas before 35 PMW. From 34 PMW the topography of responses in quiet sleep was different for “click” and voice stimuli: responses to “clicks” became diffuse but responses to voice remained limited to temporal areas. After the age of 35 PMW auditory evoked delta-brushes progressively disappeared and were replaced by a low amplitude response in the same location. Our data show that auditory stimuli mimicking ambient sounds efficiently evoke delta-brushes in temporal areas in the premature infant before 35 PMW. 
Along with findings in other sensory modalities (visual and somatosensory), these findings suggest that sensory driven delta-brushes represent a ubiquitous feature of the human sensory cortex during fetal stages and provide a potential test of functional cortical maturation during fetal development.  相似文献   

11.

Background

Although nurses play an important role in humanitarian aid and disaster relief (HA/DR), little is known about the nursing activities that are performed in HA/DR. We aimed to clarify the nursing activities performed by Japanese nurses in HA/DR and to examine the factors associated with the frequency of nursing activities.

Methods

A self-administered questionnaire survey was completed by 147 nurses with HA/DR experience. The survey collected information on demographic characteristics, past experience (e.g., disaster medical training experience, HA/DR experience), the circumstances surrounding their dispatch to HA/DR (e.g., team size, disaster type, post-disaster phase, mission term), and the frequency of nursing activities performed in HA/DR. The frequency of nursing activities was rated on a 5-point Likert scale. Nursing activities were evaluated using a “nursing activity score”, which represents the frequency of each nursing activity. Factors related to the nursing activity score were evaluated by multiple logistic regression analysis.

Results

Nurses were involved in 27 nursing activities in HA/DR, 10 of which were performed frequently. On analysis, the factors significantly associated with the nursing activity score were licensure as a registered nurse (OR 7.79, 95% CI 2.95–20.57), two or more experiences of disaster medical training (OR 2.90, 95% CI 1.12–7.49), and a post-disaster phase of three weeks or longer (OR 8.77, 95% CI 2.59–29.67).

Conclusions

These results will contribute to the design of evidence-based disaster medical training that improves the quality of nursing activities.

12.
This work studied human perception of acoustic signals varying in amplitude against a background of producing and hearing syllables composed of the ontogenetically earliest and latest consonants and vowels, [pa] and [ly], as well as against background noise. During syllable pronunciation, external signals were recognized at the same rate for both syllables; however, the number of mistakes with the syllable [ly] was statistically significantly greater than with the syllable [pa]. According to paired comparisons and analysis of variance, the differences in recognition of external stimuli were statistically significant for the conditions: hearing [pa] while pronouncing [ly]; hearing [ly] while pronouncing [pa]; and hearing [ly] while pronouncing [ly]. The most difficult task turned out to be signal recognition during silent articulation, i.e., mouthing without voice. When evaluating sound stimuli against noise, masking affected the correctness of signal recognition more than reaction time. Signal perception against wideband noise differed qualitatively and quantitatively from recognition during both intensive verbal activity and passive listening to speech. Translated from Zhurnal Evolyutsionnoi Biokhimii i Fiziologii, Vol. 40, No. 5, 2004, pp. 423–426. Original Russian text copyright © 2004 by Vartanyan, Tokareva, Lange. Dedicated to the 100th anniversary of N. N. Traugott.

13.
Some combinations of musical tones sound pleasing to Western listeners, and are termed consonant, while others sound discordant, and are termed dissonant. The perceptual phenomenon of consonance has been traced to the acoustic property of harmonicity. It has been repeatedly shown that neural correlates of consonance can be found as early as the auditory brainstem as reflected in the harmonicity of the scalp-recorded frequency-following response (FFR). “Neural Pitch Salience” (NPS) measured from FFRs—essentially a time-domain equivalent of the classic pattern recognition models of pitch—has been found to correlate with behavioral judgments of consonance for synthetic stimuli. Following the idea that the auditory system has evolved to process behaviorally relevant natural sounds, and in order to test the generalizability of this finding made with synthetic tones, we recorded FFRs for consonant and dissonant intervals composed of synthetic and natural stimuli. We found that NPS correlated with behavioral judgments of consonance and dissonance for synthetic but not for naturalistic sounds. These results suggest that while some form of harmonicity can be computed from the auditory brainstem response, the general percept of consonance and dissonance is not captured by this measure. It might either be represented in the brainstem in a different code (such as place code) or arise at higher levels of the auditory pathway. Our findings further illustrate the importance of using natural sounds, as a complementary tool to fully-controlled synthetic sounds, when probing auditory perception.

14.

Background

Hearing thresholds of fishes are typically acquired under laboratory conditions. This does not reflect the situation in natural habitats, where ambient noise may mask their hearing sensitivities. In the current study we investigate hearing in terms of sound pressure levels (SPL) and particle acceleration levels (PAL) of two cichlid species within the naturally occurring range of noise levels. This enabled us to determine whether species with and without hearing specializations are differently affected by noise.

Methodology/Principal Findings

We investigated auditory sensitivities in the orange chromide Etroplus maculatus, which possesses anterior swim bladder extensions, and the slender lionhead cichlid Steatocranus tinanti, in which the swim bladder is much smaller and lacks extensions. E. maculatus was tested between 0.2 and 3 kHz and S. tinanti between 0.1 and 0.5 kHz using the auditory evoked potential (AEP) recording technique. In both species, SPL and PAL audiograms were determined in the presence of quiet laboratory conditions (baseline) and continuous white noise of 110 and 130 dB RMS. Baseline thresholds showed greatest hearing sensitivity around 0.5 kHz (SPL) and 0.2 kHz (PAL) in E. maculatus and 0.2 kHz in S. tinanti. White noise of 110 dB elevated the thresholds by 0–11 dB (SPL) and 7–11 dB (PAL) in E. maculatus and by 1–2 dB (SPL) and by 1–4 dB (PAL) in S. tinanti. White noise of 130 dB elevated hearing thresholds by 13–29 dB (SPL) and 26–32 dB (PAL) in E. maculatus and 6–16 dB (SPL) and 6–19 dB (PAL) in S. tinanti.

Conclusions

Our data showed for the first time for SPL and PAL thresholds that the specialized species was masked by different noise regimes at almost all frequencies, whereas the non-specialized species was much less affected. This indicates that noise can limit sound detection and acoustic orientation differently within a single fish family.

15.
Mutations in otoferlin, a C2 domain-containing ferlin family protein, cause non-syndromic hearing loss in humans (DFNB9 deafness). Furthermore, transmitter secretion of cochlear inner hair cells is compromised in mice lacking otoferlin. In the present study, we show that the C2F domain of otoferlin directly binds calcium (KD = 267 μM) with diminished binding in a pachanga (D1767G) C2F mouse mutation. Calcium was found to differentially regulate binding of otoferlin C2 domains to target SNARE (t-SNARE) proteins and phospholipids. C2D–F domains interact with the syntaxin-1 t-SNARE motif with maximum binding within the range of 20–50 μM Ca2+. At 20 μM Ca2+, the dissociation rate was substantially lower, indicating increased binding (KD ≈ 10⁻⁹) compared with 0 μM Ca2+ (KD ≈ 10⁻⁸), suggesting a calcium-mediated stabilization of the C2 domain·t-SNARE complex. C2A and C2B interactions with t-SNAREs were insensitive to calcium. The C2F domain directly binds the t-SNARE SNAP-25 maximally at 100 μM Ca2+ and with reduction at 0 μM Ca2+, a pattern repeated for C2F domain interactions with phosphatidylinositol 4,5-bisphosphate. In contrast, C2F did not bind the vesicle SNARE protein synaptobrevin-1 (VAMP-1). Moreover, an antibody targeting otoferlin immunoprecipitated syntaxin-1 and SNAP-25 but not synaptobrevin-1. As opposed to an increase in binding with increased calcium, interactions between the otoferlin C2F domain and intramolecular C2 domains occurred in the absence of calcium, consistent with intra-C2 domain interactions forming a “closed” tertiary structure at low calcium that “opens” as calcium increases. These results suggest a direct role for otoferlin in exocytosis and modulation of calcium-dependent membrane fusion.
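For intuition about the reported affinity, single-site (Langmuir) occupancy relates free calcium to the fraction of protein bound; this is a simplifying assumption for illustration, not the authors' binding model:

```python
def fraction_bound(ca_uM, kd_uM=267.0):
    """Single-site equilibrium occupancy: [Ca] / (Kd + [Ca]).
    kd_uM defaults to the C2F calcium affinity reported in the abstract.
    Assumes one independent site; real C2 domains may bind cooperatively."""
    return ca_uM / (kd_uM + ca_uM)
```

At the Kd itself (267 μM) occupancy is 50%, while in the 20–50 μM range where syntaxin-1 binding peaks it is only roughly 7–16%, which illustrates why the calcium dependence of the t-SNARE interactions cannot be read directly off the isolated C2F affinity.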

16.
The design of acoustic signals and hearing sensitivity in socially communicating species would normally be expected to closely match in order to minimize signal degradation and attenuation during signal propagation. Nevertheless, other factors such as sensory biases as well as morphological and physiological constraints may affect strict correspondence between signal features and hearing sensitivity. Thus study of the relationships between sender and receiver characteristics in species utilizing acoustic communication can provide information about how acoustic communication systems evolve. The genus Gekko includes species emitting high-amplitude vocalizations for long-range communication (loud callers) as well as species producing only low-amplitude vocalizations when in close contact with conspecifics (quiet callers), which have rarely been investigated. In order to investigate relationships between auditory physiology and the frequency characteristics of acoustic signals in a quiet caller, Gekko subpalmatus, we measured the subjects’ vocal signal characteristics as well as auditory brainstem responses (ABRs) to assess auditory sensitivity. The results show that G. subpalmatus males emit low amplitude calls when encountering females, ranging in dominant frequency from 2.47 to 4.17 kHz with an average at 3.35 kHz. The auditory range with highest sensitivity closely matches the dominant frequency of the vocalizations. This correspondence is consistent with the notion that quiet and loud calling species are under similar selection pressures for matching auditory sensitivity with spectral characteristics of vocalizations.

17.

Objectives

Questionnaire studies suggest that hearing is declining among young adults. However, few studies have examined the reliability of hearing questionnaires among young adult subjects. This study examined the associations between pure tone audiometrically assessed (PTA) hearing loss and questionnaire responses in young to middle aged adults.

Materials and Methods

A cross-sectional study using questionnaire and screening PTA (500 through 6000 Hz) data from 15,322 Swedish subjects (62% women) aged 18 through 50 years. PTA hearing loss was defined as a hearing threshold above 20 dB in both ears at one or more frequencies. Data were analysed with chi-square tests, nonlinear regression, binary logistic regression, and the generalized estimating equation (GEE) approach.

Results

The prevalence of PTA hearing loss was 6.0% in men and 2.9% in women (p < 0.001). Slight hearing impairment was reported by 18.5% of the men and 14.8% of the women (p < 0.001), whereas 0.5% of men and women reported very impaired hearing. Using multivariate GEE modelling, the odds ratio of PTA hearing loss was 30.4 (95% CI, 12.7-72.9) in men and 36.5 (17.2-77.3) in women reporting very impaired hearing. The corresponding figures in those reporting slightly impaired hearing were 7.06 (5.25-9.49) in men and 8.99 (6.38-12.7) in women. These values depended on the sound stimulus frequency (p = 0.001). The area under the ROC curve was 0.904 (0.892-0.915) in men and 0.886 (0.872-0.900) in women.

Conclusions

Subjective hearing impairment predicted clinically assessed hearing loss, suggesting that there is cause for concern regarding the future development of hearing in young to middle-aged people.
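Odds ratios like those reported above are usually read off a fitted model, but the underlying arithmetic is easy to sketch from a 2×2 table. The counts below are hypothetical, and the abstract's estimates come from multivariate GEE models rather than raw tables:

```python
import math

def odds_ratio_with_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] (exposed cases, exposed
    non-cases, unexposed cases, unexposed non-cases), with a Woolf
    (log-normal) 95% confidence interval."""
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lower = math.exp(math.log(odds_ratio) - z * se_log_or)
    upper = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, lower, upper

# Hypothetical counts: 40/60 with hearing loss among those reporting
# impairment, 10/90 among those not reporting it.
or_, lo, hi = odds_ratio_with_ci(40, 60, 10, 90)
# or_ is 6.0 with a CI of roughly (2.8, 12.9)
```

Adjusted models shift these estimates by conditioning on covariates (here age, sex, and frequency), which is why the reported ORs are far larger than any crude table would suggest.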

18.
Recognition of personally familiar voices benefits from the concurrent presentation of the corresponding speakers’ faces. This effect of audiovisual integration is most pronounced for voices combined with dynamic articulating faces. However, it is unclear if learning unfamiliar voices also benefits from audiovisual face-voice integration or, alternatively, is hampered by attentional capture of faces, i.e., “face-overshadowing”. In six study-test cycles we compared the recognition of newly-learned voices following unimodal voice learning vs. bimodal face-voice learning with either static (Exp. 1) or dynamic articulating faces (Exp. 2). Voice recognition accuracies significantly increased for bimodal learning across study-test cycles while remaining stable for unimodal learning, as reflected in numerical costs of bimodal relative to unimodal voice learning in the first two study-test cycles and benefits in the last two cycles. This was independent of whether faces were static images (Exp. 1) or dynamic videos (Exp. 2). In both experiments, slower reaction times to voices previously studied with faces compared to voices only may result from visual search for faces during memory retrieval. A general decrease of reaction times across study-test cycles suggests facilitated recognition with more speaker repetitions. Overall, our data suggest two simultaneous and opposing mechanisms during bimodal face-voice learning: while attentional capture of faces may initially impede voice learning, audiovisual integration may facilitate it thereafter.

19.
Although many studies have focused on a role for hyaluronan (HA) of interstitial extracellular matrix (presumably produced by non-vascular “stromal” cells) in regulating vascular growth, we herein examine the influence of “autocrine HA” produced by vascular endothelial cells themselves on tubulogenesis, using human umbilical vein endothelial cells (HUVECs) in angiogenic and vasculogenic three-dimensional collagen gel cultures. Relative to unstimulated controls, tubulogenic HUVECs upregulated HAS2 mRNA and increased the synthesis of cell-associated HA (but not HA secreted into media). Confocal microscopy/immunofluorescence on cultures fixed with neutral-buffered 10% formalin (NBF) revealed cytoplasmic HAS2 in HUVEC cords and tubes. Cultures fixed with NBF (with cetylpyridinium chloride added to retain HA), stained for HA using “affinity fluorescence” (biotinylated HA-binding protein with streptavidin-fluor), and viewed by confocal microscopy showed HA throughout tube lumens, but little/no HA on the abluminal sides of the tubes or in the surrounding collagen gel. Lumen formation in angiogenic and vasculogenic cultures was strongly suppressed by metabolic inhibitors of HA synthesis (mannose and 4-methylumbelliferone). Hyaluronidase strongly inhibited lumen formation in angiogenic cultures, but not in vasculogenic cultures (where developing lumens are not open to culture medium). Collectively, our results point to a role for autocrine, luminal HA in microvascular sprouting and lumen development. (J Histochem Cytochem 69: 415–428, 2021)

20.
The aim of the investigation was to study whether dysfunctions associated with the cochlea or its regulatory system can be found, and possibly explain hearing problems, in subjects with normal or near-normal audiograms. The design was a prospective study of subjects recruited from the general population. The included subjects were persons with auditory problems who had normal, or near-normal, pure tone hearing thresholds, assigned to one of three subgroups: teachers (Education); people working with music (Music); and people with moderate or negligible noise exposure (Other). A fourth group included people with poorer pure tone hearing thresholds and a history of severe occupational noise (Industry). N total = 193. The following hearing tests were used:
- pure tone audiometry with Békésy technique;
- transient evoked otoacoustic emissions and distortion product otoacoustic emissions, without and with contralateral noise;
- psychoacoustical modulation transfer function;
- forward masking;
- speech recognition in noise;
- tinnitus matching.
A questionnaire about occupations, noise exposure, stress/anxiety, muscular problems, medication, and heredity was addressed to the participants. Forward masking results were significantly worse for Education and Industry than for the other groups, possibly associated with the inner hair cell area. Forward masking results were significantly correlated with louder matched tinnitus. For many subjects, speech recognition in noise in the left ear did not improve in a normal way when the listening level was increased. Subjects hypersensitive to loud sound had significantly better speech recognition in noise at the lower test level than subjects who were not hypersensitive. Self-reported stress/anxiety was similar for all groups. In conclusion, hearing dysfunctions were found in subjects with tinnitus and other auditory problems, combined with normal or near-normal pure tone thresholds. The teachers, mostly regarded as a group exposed to noise below risk levels, had dysfunctions almost identical to those of the more exposed Industry group.
