Similar Documents
 20 similar documents were retrieved (search time: 15 ms)
1.
Behavioral manifestations of processing deficits associated with auditory processing disorder (APD) have been well documented. However, little is known about their anatomical underpinnings, especially at the level of cochlear processing. Cochlear delays, a proxy for cochlear tuning measured using stimulus frequency otoacoustic emission (SFOAE) group delay, and the influence of medial olivocochlear (MOC) system activation at the auditory periphery were studied in 23 children suspected of APD (sAPD) and 22 typically developing (TD) children. Results suggest that children suspected of APD have longer SFOAE group delays (possibly due to sharper cochlear tuning) and reduced MOC function compared with TD children. Other differences between the groups include a correlation between MOC function and SFOAE delay in quiet in the TD group, and the lack of such a correlation in the sAPD group. MOC-mediated changes in SFOAE delay went in opposite directions: an increase in delay in the TD group versus a reduction in the sAPD group. Longer SFOAE group delays in the sAPD group may lead to longer cochlear filter ringing and a potential increase in forward masking. These results indicate differences in cochlear and MOC function between the sAPD and TD groups. Further studies are warranted to explore the cochlea as a potential site of the processing deficits in APD.

2.
Lesion-induced cochlear damage can result in synaptic outgrowth in the ventral cochlear nucleus (VCN), and tinnitus may be associated with this synaptic outgrowth and hyperactivity in the VCN. However, it remains unclear how hearing loss triggers structural synaptic modifications in the VCN of rats with salicylate-induced tinnitus. To address this issue, we evaluated tinnitus-like behavior in rats after salicylate treatment and compared the amplitudes of the distortion product otoacoustic emission (DPOAE) and auditory brainstem response (ABR) between control and treated rats. Moreover, we examined changes in synaptic ultrastructure and in the expression levels of growth-associated protein 43 (GAP-43), brain-derived neurotrophic factor (BDNF), the microglial marker Iba-1, and glial fibrillary acidic protein (GFAP) in the VCN. After salicylate treatment (300 mg/kg/day for 4 and 8 days), analysis of gap prepulse inhibition of the acoustic startle showed that the rats were experiencing tinnitus. The changes in DPOAE and ABR amplitudes indicated an improvement in cochlear sensitivity and a reduction in auditory input following salicylate treatment. The treated rats displayed more synaptic vesicles and longer postsynaptic densities in the VCN than the control rats. GAP-43 expression, predominantly from medial olivocochlear (MOC) neurons, was significantly up-regulated, and BDNF- and Iba-1-immunoreactive cells were persistently decreased after salicylate administration. Furthermore, GFAP-immunoreactive astrocytes, which are associated with synaptic regrowth, were significantly increased in the treated groups. Our study revealed that reduced auditory nerve activity triggers synaptic outgrowth and hyperactivity in the VCN via an MOC neural feedback circuit. Structural synaptic modifications may be a reflexive process that compensates for the reduced auditory input after salicylate administration. However, massive increases in excitatory synapses in the VCN may represent a detrimental process that causes central hyperactivity, leading to tinnitus.

3.
Previous studies have indicated that extended exposure to high sound levels might increase the risk of hearing loss among professional symphony orchestra musicians. One of the major problems associated with musicians' hearing loss is the difficulty of estimating its risk simply on the basis of the physical amount of exposure, i.e., the exposure level and duration. The aim of this study was to examine whether measurement of the medial olivocochlear reflex (MOCR), which is assumed to protect the cochlea from acoustic damage, could enable us to assess the risk of hearing loss among musicians. To test this, we compared MOCR strength with the hearing deterioration caused by one hour of instrument practice. The participants were music university students majoring in violin, whose left ears are exposed to intense violin sounds (broadband sounds containing substantial high-frequency content) during regular instrument practice. Audiograms and click-evoked otoacoustic emissions (CEOAEs) were measured before and after a one-hour violin practice session. Exposure was greater at the left ear than at the right ear, and we observed a left-ear-specific temporary threshold shift (TTS) after the practice session. Left-ear CEOAEs decreased in proportion to the TTS. The exposure level, however, could not entirely explain the inter-individual variation in the TTS and the CEOAE decrease. On the other hand, the MOCR strength could predict the size of the TTS and the CEOAE decrease. Our findings imply that, among other factors, the MOCR is a promising measure for assessing the risk of hearing loss among musicians.
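The hearing-deterioration measures in this study are simple pre/post differences: the temporary threshold shift (TTS) is the post-practice audiometric threshold minus the pre-practice threshold at each frequency, and the CEOAE change is the corresponding drop in emission level. Below is a minimal bookkeeping sketch of those two quantities; the frequencies, values, and use of NumPy are illustrative assumptions, not data or code from the study.

```python
import numpy as np

# Hypothetical pre/post audiometric thresholds (dB HL) for one ear at
# standard test frequencies; all values are illustrative only.
freqs_hz = np.array([1000, 2000, 3000, 4000, 6000, 8000])
thr_pre  = np.array([5.0, 5.0, 10.0, 10.0, 15.0, 15.0])
thr_post = np.array([5.0, 10.0, 20.0, 25.0, 20.0, 15.0])

# Temporary threshold shift: positive values mean poorer hearing after practice.
tts_db = thr_post - thr_pre

# Hypothetical CEOAE levels (dB SPL) before and after the practice session.
ceoae_pre_db, ceoae_post_db = 12.3, 9.8
ceoae_change_db = ceoae_post_db - ceoae_pre_db   # negative = emission decrease

print("TTS per frequency (dB):", dict(zip(freqs_hz.tolist(), tts_db.tolist())))
print("Maximum TTS: %.1f dB" % tts_db.max())
print("CEOAE change: %.1f dB" % ceoae_change_db)
```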

4.
Spatial release from masking refers to a benefit for speech understanding that occurs when a target talker and a masking talker are spatially separated: speech intelligibility for the target is then typically higher than when both talkers are at the same location. In cochlear implant listeners, spatial release from masking is much reduced or absent compared with normal-hearing listeners. Perhaps this reduced spatial release occurs because cochlear implant listeners cannot effectively attend to spatial cues. Three experiments examined factors that may interfere with deploying spatial attention to a target talker masked by another talker. To simulate cochlear implant listening, stimuli were vocoded with two distinctive features. First, we used 50-Hz low-pass filtered speech envelopes and noise carriers, strongly reducing the possibility of temporal pitch cues; second, co-modulation was imposed on target and masker utterances to enhance perceptual fusion between the two sources. Stimuli were presented over headphones. Experiments 1 and 2 presented high-fidelity spatial cues with unprocessed and vocoded speech. Experiment 3 maintained faithful long-term average interaural level differences but presented scrambled interaural time differences with vocoded speech. Results show a robust spatial release from masking in Experiments 1 and 2, and a greatly reduced spatial release in Experiment 3. Faithful long-term average interaural level differences were insufficient to produce spatial release from masking. This suggests that appropriate interaural time differences are necessary for restoring spatial release from masking, at least in situations where few viable alternative segregation cues are available.
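The vocoding described here follows the general recipe of a noise vocoder: split the signal into frequency bands, extract each band's envelope, low-pass filter the envelope (at 50 Hz in this study, which removes temporal pitch cues), and use it to modulate band-limited noise. The sketch below illustrates only that generic recipe; the channel count, band edges, filter orders, and function names are assumptions, and the co-modulation step applied to target and masker in the study is not included.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(x, fs, band_edges_hz, env_cutoff_hz=50.0, order=4):
    """Generic noise vocoder sketch: band-split, extract each band envelope,
    low-pass the envelope, and remodulate band-limited noise. All parameters
    here are illustrative assumptions."""
    rng = np.random.default_rng(0)
    out = np.zeros_like(x, dtype=float)
    b_env, a_env = butter(order, env_cutoff_hz / (fs / 2), btype="low")
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        b_bp, a_bp = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b_bp, a_bp, x)
        env = filtfilt(b_env, a_env, np.abs(hilbert(band)))          # smoothed envelope
        env = np.clip(env, 0.0, None)
        carrier = filtfilt(b_bp, a_bp, rng.standard_normal(len(x)))  # band-limited noise
        out += env * carrier
    return out / (np.max(np.abs(out)) + 1e-12)

# Example: 8 log-spaced channels between 100 Hz and 7 kHz on a dummy signal.
fs = 16000
t = np.arange(fs) / fs
dummy = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
vocoded = noise_vocode(dummy, fs, np.geomspace(100, 7000, 9))
```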

5.

Objectives

(1) To report the speech perception and intelligibility results of Mandarin-speaking patients with large vestibular aqueduct syndrome (LVAS) after cochlear implantation (CI); (2) to compare their performance with that of a group of CI users without LVAS; (3) to understand the effects of age at implantation and duration of implant use on CI outcomes. The obtained data may be used to guide decisions about CI candidacy and surgical timing.

Methods

Forty-two patients with LVAS participated in this study and were divided into two groups: the early group received CIs before 5 years of age and the late group after 5 years of age. Open-set speech perception tests (of Mandarin tones, words, and sentences) were administered one year after implantation and at the most recent follow-up visit. Categories of Auditory Performance (CAP) and Speech Intelligibility Rating (SIR) scale scores were also obtained.

Results

The patients with LVAS with more than 5 years of implant use (18 cases) achieved a mean score higher than 80% on the most recent speech perception tests and reached the highest level on the CAP/SIR scales. The early group developed speech perception and intelligibility steadily over time, while the late group had a rapid improvement during the first year after implantation. The two groups, regardless of their age at implantation, reached a similar performance level at the most recent follow-up visit.

Conclusion

High levels of speech performance are reached after 5 years of implant use in patients with LVAS. These patients do not necessarily need to wait until their hearing thresholds exceed 90 dB HL or their phonetically balanced (PB) word scores fall below 40% to receive a CI; implantation can be considered earlier when their speech perception and/or speech intelligibility does not reach the performance level suggested in this study.

6.
One of the putative functions of the medial olivocochlear (MOC) system is to enhance signal detection in noise. The objective of this study was to elucidate the role of the MOC system in speech perception in noise. In normal-hearing human listeners, we examined (1) the association between the magnitude of MOC inhibition and speech-in-noise performance, and (2) the association between MOC inhibition and the amount of contralateral acoustic stimulation (CAS)-induced shift in speech-in-noise acuity. MOC reflex measurements in this study addressed critical measurement issues overlooked in past work by recording relatively low-level, linear click-evoked otoacoustic emissions (CEOAEs), adopting a 6 dB signal-to-noise ratio (SNR) criterion, and computing normalized CEOAE differences. We found the normalized index to be a stable measure of MOC inhibition (mean = 17.21%). MOC inhibition was not related to speech-in-noise performance measured without CAS. However, CAS during a speech-in-noise task produced an enhancement in speech-in-noise acuity (SNRSP; mean = 2.45 dB), and this improvement was directly related to the listeners' MOC reflex as assayed by CEOAEs. Individuals do not necessarily use the available MOC unmasking while listening to speech in noise, or do not utilize it to the extent that can be demonstrated by artificial MOC activation. It may be that the MOC is not actually engaged under natural listening conditions and that the higher auditory centers recruit MOC-mediated mechanisms only in specific listening conditions; those conditions remain to be investigated.
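The "normalized CEOAE differences" mentioned above reduce, in one common formulation, to the percent reduction in CEOAE amplitude under contralateral acoustic stimulation, computed over recordings that pass the SNR criterion. The sketch below is a hedged illustration of that idea only, not necessarily the exact formula used in the study; the units and example values are assumptions.

```python
import numpy as np

def moc_inhibition_percent(ceoae_quiet, ceoae_cas, noise_floor, snr_criterion_db=6.0):
    """Percent reduction in CEOAE amplitude with contralateral acoustic
    stimulation (CAS), averaged over recordings whose quiet-condition CEOAE
    exceeds the noise floor by at least the SNR criterion. Inputs are linear
    RMS amplitudes (e.g., mPa) per recording or frequency band."""
    ceoae_quiet = np.asarray(ceoae_quiet, dtype=float)
    ceoae_cas = np.asarray(ceoae_cas, dtype=float)
    noise_floor = np.asarray(noise_floor, dtype=float)

    snr_db = 20.0 * np.log10(ceoae_quiet / noise_floor)
    valid = snr_db >= snr_criterion_db
    if not np.any(valid):
        raise ValueError("no recordings meet the SNR criterion")

    inhibition = 100.0 * (ceoae_quiet[valid] - ceoae_cas[valid]) / ceoae_quiet[valid]
    return float(np.mean(inhibition))

# Illustrative numbers only: the third recording fails the 6 dB SNR criterion.
print(moc_inhibition_percent([0.30, 0.28, 0.05], [0.25, 0.24, 0.05], [0.03, 0.03, 0.04]))
```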

7.
Evidence of visual-auditory cross-modal plasticity in deaf individuals has been widely reported. Superior visual abilities of deaf individuals have been shown to result in enhanced reactivity to visual events and/or enhanced peripheral spatial attention. The goal of this study was to investigate the association between visual-auditory cross-modal plasticity and speech perception in post-lingually deafened, adult cochlear implant (CI) users. Post-lingually deafened adults with CIs (N = 14) and a group of normal-hearing adult controls (N = 12) participated in this study. The CI participants were divided into a good performer group (good CI, N = 7) and a poor performer group (poor CI, N = 7) based on word recognition scores. Visual evoked potentials (VEPs) were recorded from the temporal and occipital cortex to assess reactivity. Visual field (VF) testing was used to assess spatial attention, and Goldmann perimetry measures were analyzed to identify differences in the VF across groups. The amplitude of the P1 VEP response over the right temporal and occipital cortices was compared across the three groups (control, good CI, poor CI). In addition, the association between the VF for different stimuli and word perception scores was evaluated. The P1 VEP amplitude recorded from the right temporal cortex was larger in the group of poorly performing CI users than in the group of good performers. The P1 amplitude recorded from electrodes near the occipital cortex was smaller in the poorly performing group. P1 VEP amplitude in the right temporal lobe was negatively correlated with speech perception outcomes in the CI participants (r = -0.736, P = 0.003). However, P1 VEP amplitudes recorded near the occipital cortex were positively correlated with speech perception outcomes in the CI participants (r = 0.775, P = 0.001). In the VF analysis, CI users showed a narrowed central VF (VF for low-intensity stimuli), whereas their far peripheral VF (VF for high-intensity stimuli) did not differ from that of the controls. In addition, the extent of the central VF was positively correlated with speech perception outcome (r = 0.669, P = 0.009). Persistent visual activation of the right temporal cortex, even after CI, has a negative effect on outcomes in post-lingually deafened adults. We interpret these results to suggest that insufficient intra-modal (visual) compensation by the occipital cortex may also have a negative effect on outcome. Based on our results, it appears that a narrowed central VF could help identify CI users likely to have poor outcomes with their device.

8.
There is a natural symbiosis between vergence and vestibular responses. Deficits in vergence can lead to vertigo, disequilibrium, and postural instability. This study examines vergence eye movements in patients with idiopathic bilateral vestibular loss, as well as their standing balance in relation to vergence. Eleven patients and 16 controls participated in the study. Bilateral loss of vestibular function was confirmed objectively with several tests; only patients with no significant response to caloric tests or video head impulse tests and with absent vestibular evoked myogenic potentials were included.

Vergence testing (8 patients and 15 controls)

An LED display with targets at 20, 40, and 100 cm along the median plane was used to elicit vergence eye movements, which were recorded with the IRIS device.

Standing balance (11 patients and 16 controls)

Four conditions were run, each lasting 1 min: fixation of an LED at 40 cm (convergence of 9°) and at 150 cm (convergence of 2.3°), with this last condition repeated with eyes closed. Comparison of the eyes-closed and eyes-open conditions at 150 cm allowed evaluation of the Romberg Quotient. In the fourth condition, two LEDs, at 20 and at 100 cm, were lit alternately for 1 s each, causing the eyes to converge and then diverge. Standing balance was recorded with an accelerometer placed on the back near the center of mass (McRoberts, Dynaport).

Results

Vergence

Relative to controls, convergence eye movements in patients showed significantly lower accuracy, lower mean velocity, and saccade intrusions of significantly higher amplitude.

Balance

The normalized 90% area of body sway was significantly higher for patients than for controls in all conditions. Yet, as in controls, postural stability was better while fixating at near (sustained convergence) than at far, or while making active vergence movements. We argue that vestibular loss degrades convergence, but that even deficient convergence can help postural control.
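Two of the balance measures above come down to simple computations: the 90% sway area can be estimated as the area of a 90% confidence ellipse fitted to the mediolateral/anteroposterior sway trace, and the Romberg Quotient is the eyes-closed sway divided by the eyes-open sway. The sketch below uses those common definitions as assumptions; the study's exact normalization and accelerometer processing are not specified here.

```python
import numpy as np

CHI2_90_2DOF = 4.605  # chi-square value for a 90% region with 2 degrees of freedom

def sway_area_90(ml, ap):
    """Area of the 90% confidence ellipse of body sway, from mediolateral (ml)
    and anteroposterior (ap) traces (one common estimator, assumed here)."""
    cov = np.cov(np.vstack([ml, ap]))
    return float(np.pi * CHI2_90_2DOF * np.sqrt(np.linalg.det(cov)))

def romberg_quotient(area_eyes_closed, area_eyes_open):
    """Eyes-closed sway relative to eyes-open sway; values above 1 indicate
    greater reliance on vision for postural stability."""
    return area_eyes_closed / area_eyes_open

# Illustrative sway traces only.
rng = np.random.default_rng(1)
eo = rng.standard_normal((2, 600)) * 0.8   # eyes open: smaller sway
ec = rng.standard_normal((2, 600)) * 1.3   # eyes closed: larger sway
print("Romberg Quotient: %.2f" % romberg_quotient(sway_area_90(*ec), sway_area_90(*eo)))
```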

9.

Objectives

To investigate speech and language outcomes in children with cochlear implants (CIs) who had mutations in common deafness genes, and to compare their performance with that of children without mutations.

Study Design

Prospective study.

Methods

Patients who received CIs before 18 years of age and had used CIs for more than 3 years were enrolled in this study. All patients underwent mutation screening of three common deafness genes: GJB2, SLC26A4 and the mitochondrial 12S rRNA gene. The outcomes with CIs were assessed at post-implant years 3 and 5 using the Categories of Auditory Performance (CAP) scale, Speech Intelligibility Rating (SIR) scale, speech perception tests and language skill tests.

Results

Forty-eight patients were found to have confirmed mutations in GJB2 or SLC26A4, and 123 patients without detected mutations were included for comparison. Among children who received CIs before 3.5 years of age, patients with GJB2 or SLC26A4 mutations showed significantly higher CAP/SIR scores than those without mutations at post-implant year 3 (p = 0.001 for CAP; p = 0.004 for SIR) and year 5 (p = 0.035 for CAP; p = 0.038 for SIR). By contrast, among children who received CIs after age 3.5, no significant differences were noted in post-implant outcomes between patients with and without mutations (all p > 0.05).

Conclusion

GJB2 and SLC26A4 mutations are associated with good post-implant outcomes. However, their effects on CI outcomes may be modulated by age at implantation: the association between mutations and CI outcomes is observed in young recipients who received CIs before age 3.5 years but not in older recipients.

10.

Objective

To examine the direct and indirect effects of demographic factors on speech perception and vocabulary outcomes of Mandarin-speaking children with cochlear implants (CIs).

Methods

A total of 115 participants who were implanted before the age of 5 and had used their CIs for 1 to 3 years were evaluated using a battery of speech perception and vocabulary tests. Structural equation modeling was used to test the proposed hypotheses.

Results

Early implantation contributed significantly to speech perception outcomes, while having undergone a hearing aid trial (HAT) before implantation, maternal educational level (MEL), and having undergone universal newborn hearing screening (UNHS) had indirect effects on speech perception outcomes via their effects on age at implantation. In addition, both age at implantation and MEL had direct and indirect effects on vocabulary skills, while UNHS and HAT had indirect effects on vocabulary outcomes via their effects on age at implantation.

Conclusion

A number of factors had direct and indirect effects on speech perception and vocabulary outcomes in Mandarin-speaking children with CIs, and these factors were not necessarily identical to those reported for their English-speaking counterparts.

11.

Objective

To investigate the performance of monaural and binaural beamforming technology combined with an additional noise reduction algorithm in cochlear implant recipients.

Method

This experimental study was conducted as a single-subject, repeated-measures design within a large German cochlear implant centre. Twelve experienced users of an Advanced Bionics HiRes90K or CII implant with a Harmony speech processor were enrolled. The cochlear implant processor of each subject was connected to one of two bilaterally placed state-of-the-art hearing aids (Phonak Ambra) providing three alternative directional processing options: an omnidirectional setting, an adaptive monaural beamformer, and a binaural beamformer. A further noise reduction algorithm (ClearVoice) was applied to the signal on the cochlear implant processor itself. The speech signal was presented from 0°, and speech-shaped noise was presented from loudspeakers placed at ±70°, ±135° and 180°. The Oldenburg sentence test was used to determine the signal-to-noise ratio at which subjects scored 50% correct.

Results

Both the adaptive monaural beamformer and the binaural beamformer were significantly better than the omnidirectional condition (improvements of 5.3 ± 1.2 dB and 7.1 ± 1.6 dB, respectively; p < 0.001). The best score was achieved with the binaural beamformer in combination with the ClearVoice noise reduction algorithm, with a significant SRT improvement of 7.9 ± 2.4 dB (p < 0.001) over the omnidirectional-alone condition.

Conclusions

The study showed that the binaural beamformer implemented in the Phonak Ambra hearing aid could be used in conjunction with a Harmony speech processor to produce a substantial average SRT improvement of 7.1 dB. The adaptive monaural beamformer provided an average SRT improvement of 5.3 dB.
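The SRT used throughout this study is the SNR at which 50% of the speech is understood, tracked adaptively with the Oldenburg sentence test. The sketch below conveys only the general idea using a simplified 1-down/1-up rule on a sentence-level correct/incorrect criterion; the actual Oldenburg procedure adapts on the proportion of correctly repeated words with variable step sizes, so the function and its parameters are illustrative assumptions.

```python
import random

def adaptive_srt(trial_fn, start_snr_db=0.0, step_db=2.0, n_trials=20):
    """Simplified 1-down/1-up staircase converging near the SNR of 50% correct.
    trial_fn(snr_db) must return True if the listener got at least half of the
    sentence's words correct at that SNR."""
    snr, history = start_snr_db, []
    for _ in range(n_trials):
        correct = trial_fn(snr)
        history.append(snr)
        snr += -step_db if correct else step_db   # harder after a hit, easier after a miss
    return sum(history[-10:]) / 10.0              # average SNR of the last 10 trials

# Toy listener with a "true" SRT of -4 dB SNR (logistic psychometric function).
def simulated_trial(snr_db, true_srt=-4.0, slope=4.0):
    p_correct = 1.0 / (1.0 + 10 ** (-(snr_db - true_srt) / slope))
    return random.random() < p_correct

random.seed(0)
print("Estimated SRT: %.1f dB SNR" % adaptive_srt(simulated_trial))
```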

12.
Objectives

Previous studies investigating speech perception in noise have typically been conducted with static masker positions. The aim of this study was to investigate the effect of spatial separation of source and masker (spatial release from masking, SRM) in a moving-masker setup and to evaluate the impact of adaptive beamforming in comparison with fixed directional microphones in cochlear implant (CI) users.

Design

Speech reception thresholds (SRTs) were measured in a co-located setup (S0N0) and in a moving-masker setup (S0Nmove) in 12 normal-hearing participants and 14 CI users (7 bilateral, 7 bimodal with a hearing aid in the contralateral ear). Speech processor settings were a moderately directional microphone, a fixed beamformer, or an adaptive beamformer. The moving noise source was generated by means of wave field synthesis and was moved smoothly in a half-circle from one ear to the contralateral ear. Noise was presented in one of two conditions: continuous or modulated.

Results

SRTs in the S0Nmove setup were significantly better than in the S0N0 setup for both the normal-hearing control group and the bilateral group in continuous noise, and for the control group in modulated noise. There was no effect of subject group. A significant effect of directional sensitivity was found in the S0Nmove setup. In the bilateral group, the adaptive beamformer achieved lower SRTs than the fixed beamformer setting. Adaptive beamforming improved SRTs substantially in both CI user groups, by about 3 dB (bimodal group) and 8 dB (bilateral group), depending on masker type.

Conclusions

CI users showed SRM comparable to that of normal-hearing subjects. In everyday listening situations with spatial separation of source and masker, directional microphones significantly improved speech perception, with individual improvements of up to 15 dB SNR. Users of bilateral speech processors, both with directional microphones, obtained the highest benefit.
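As defined above, SRM is simply the difference between the SRT measured with the masker co-located (S0N0) and the SRT with the masker spatially separated (here, the moving S0Nmove condition); positive values mean the separation helped. A worked example with illustrative numbers only:

```python
# Illustrative SRTs in dB SNR (not values from the study).
srt_s0n0_db    = -2.1   # speech and noise both at 0 degrees
srt_s0nmove_db = -6.4   # moving masker
srm_db = srt_s0n0_db - srt_s0nmove_db
print("SRM = %.1f dB" % srm_db)   # 4.3 dB in this example
```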

13.
Normal hearing requires exquisite cooperation between bony and sensorineural structures within the cochlea. For example, the inner ear secretes proteins such as osteoprotegerin (OPG) that can prevent cochlear bone remodeling. Accordingly, diseases that affect bone regulation can also result in hearing loss. Patients with fibrous dysplasia develop trabecular bone overgrowth that results in hearing loss if the lesions affect the temporal bones. Unfortunately, the mechanisms responsible for this hearing loss, which could be sensorineural and/or conductive, remain unclear. In this study, we used a unique transgenic mouse model of increased Gs G-protein coupled receptor (GPCR) signaling induced by expression of an engineered receptor, Rs1, in osteoblastic cells. These ColI(2.3)+/Rs1+ mice showed dramatic bone lesions that histologically and radiologically resembled fibrous dysplasia. We found that ColI(2.3)+/Rs1+ mice showed progressive and severe conductive hearing loss. Ossicular chain impingement increased with the size and number of dysplastic lesions. While sensorineural structures were unaffected, ColI(2.3)+/Rs1+ cochleae had abnormally high osteoclast activity, together with elevated tartrate-resistant acid phosphatase (TRAP) activity and receptor activator of nuclear factor kappa-B ligand (Rankl) mRNA expression. ColI(2.3)+/Rs1+ cochleae also showed decreased expression of Sclerostin (Sost), an antagonist of the Wnt signaling pathway that normally increases bone formation. The osteocyte canalicular networks of ColI(2.3)+/Rs1+ cochleae were disrupted and showed abnormal osteocyte morphology. Osteocytes in the ColI(2.3)+/Rs1+ cochleae showed increased expression of matrix metalloproteinase 13 (MMP-13) and TRAP, both of which can support osteocyte-mediated peri-lacunar remodeling. Thus, while ossicular chain impingement is sufficient to account for the progressive hearing loss in fibrous dysplasia, the dysregulation of bone remodeling extends to the cochlea as well. Our findings suggest that factors regulating bone remodeling, including peri-lacunar remodeling by osteocytes, may be useful targets for treating the bony overgrowth and hearing changes of fibrous dysplasia and other bony pathologies.

14.
Localizing sounds in our environment is one of the fundamental perceptual abilities that enable humans to communicate and to remain safe. Because the acoustic cues necessary for computing source locations consist of differences between the two ears in signal intensity and arrival time, sound localization is fairly poor when only a single ear is available. In adults who become deaf and are fitted with cochlear implants (CIs), sound localization is known to improve when bilateral CIs (BiCIs) are used compared with a single CI. The aim of the present study was to investigate the emergence of spatial hearing sensitivity in children who use BiCIs, with a particular focus on the development of behavioral localization patterns when stimuli are presented in free-field horizontal acoustic space. A new analysis was implemented to quantify the patterns children use to map acoustic space onto a spatially relevant perceptual representation. Children with normal hearing were found to distribute their responses in a manner that demonstrated high spatial sensitivity. In contrast, children with BiCIs tended to classify sound source locations to the left and right; with increased bilateral hearing experience, they developed a perceptual map of space that was better aligned with acoustic space. The results indicate experience-dependent refinement of spatial hearing skills in children with CIs. Localization strategies appear to undergo a transition from sound source categorization strategies to more fine-grained location identification strategies. This may provide evidence for neural plasticity, with implications for training of spatial hearing ability in CI users.

15.
Bilateral cochlear implants aim to provide hearing to both ears for children who are deaf and to promote binaural/spatial hearing. Benefits are limited by mismatched devices and unilaterally driven development, which could compromise the normal integration of left and right ear input. We thus asked whether children hear a fused image (i.e., one vs. two sounds) from their bilateral implants and whether this "binaural fusion" reduces listening effort. Binaural fusion was assessed by asking 25 deaf children with cochlear implants and 24 peers with normal hearing whether they heard one or two sounds when listening to bilaterally presented acoustic click trains/electric pulses (250-Hz trains of 36 ms duration presented at 1 Hz). Reaction times and pupillary changes were recorded simultaneously to measure listening effort. Bilaterally implanted children heard one image of bilateral input less frequently than their normal-hearing peers, particularly when intensity levels on each side were balanced. Binaural fusion declined as brainstem asymmetries increased and age at implantation decreased. Children implanted later had had access to acoustic input prior to implantation due to the progressive deterioration of their hearing. Increases in both pupil diameter and reaction time occurred as perception of binaural fusion decreased. Results indicate that, without binaural level cues, children have difficulty fusing input from their bilateral implants to perceive one sound, which costs them increased listening effort. Brainstem asymmetries exacerbate this issue. By contrast, later implantation, reflecting longer access to bilateral acoustic hearing, may have supported the development of auditory pathways underlying binaural fusion. Improved integration of bilateral cochlear implant signals is required to improve binaural hearing in these children.

16.
17.
18.
A significant fraction of newly implanted cochlear implant recipients use a hearing aid in their non-implanted ear. SCORE bimodal is a sound processing strategy developed for this configuration, aimed at normalising loudness perception and improving binaural loudness balance. Speech perception performance in quiet and in noise and the sound localisation ability of six bimodal listeners were measured with and without application of SCORE. Speech perception in quiet was measured with acoustic-only, electric-only, or bimodal stimulation, at soft and normal conversational levels. For speech in quiet, there was a significant improvement with application of SCORE. Speech perception in noise was measured for steady-state noise, fluctuating noise, or a competing talker, at conversational levels with bimodal stimulation. For speech in noise, there was no significant effect of application of SCORE. Modelling of interaural loudness differences for a long-term-average-speech-spectrum-weighted click train indicated that left-right discrimination of sound sources can improve with application of SCORE. As SCORE was found to leave speech perception unaffected or to improve it, it seems suitable for implementation in clinical devices.

19.
20.
The objective was to determine whether one of the neural temporal features, neural adaptation, can account for the across-subject variability in behavioral measures of temporal processing and speech perception performance in cochlear implant (CI) recipients. Neural adaptation is the phenomenon in which neural responses are strongest at the beginning of the stimulus and decline with stimulus repetition (e.g., in stimulus trains). It is unclear how this temporal property of neural responses relates to psychophysical measures of temporal processing (e.g., gap detection) or to speech perception. Adaptation of the electrical compound action potential (ECAP) was obtained using 1000-pulses-per-second (pps) biphasic pulse trains presented directly to the electrode. Adaptation of the late auditory evoked potential (LAEP) was obtained using a sequence of 1-kHz tone bursts presented acoustically through the cochlear implant. Behavioral temporal processing was measured using the Random Gap Detection Test at the most comfortable listening level. Consonant-nucleus-consonant (CNC) words and AzBio sentences were also tested. The results showed that both the ECAP and the LAEP display adaptive patterns, with substantial across-subject variability in the amount of adaptation. No correlations were found between the amount of neural adaptation and gap detection thresholds (GDTs) or speech perception scores. Correlations between the degree of neural adaptation and demographic factors showed that CI users with more LAEP adaptation tended to have been implanted at a younger age than CI users with less LAEP adaptation. The results suggest that neural adaptation, at least this feature alone, cannot account for the across-subject variability in temporal processing ability in CI users. However, the finding that the LAEP adaptive pattern was less prominent in the CI group than in the normal-hearing group may point to an important role for a normal adaptation pattern at the cortical level in speech perception.
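Neural adaptation, as described above, is a decline in response amplitude over the course of the stimulus train. One simple way to quantify it, assumed here for illustration and not necessarily the metric used in the study, is the percent drop from the first response to the steady-state responses:

```python
import numpy as np

def adaptation_percent(amplitudes):
    """Percent decline from the first response amplitude to the mean of the
    last few responses in the train (illustrative adaptation index)."""
    a = np.asarray(amplitudes, dtype=float)
    steady_state = a[-3:].mean()
    return 100.0 * (a[0] - steady_state) / a[0]

# Illustrative ECAP amplitudes (uV) across successive pulses of a 1000-pps train.
ecap_uv = [620, 540, 470, 430, 410, 400, 395, 390]
print("Adaptation: %.1f%%" % adaptation_percent(ecap_uv))
```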
