Similar Articles
 20 similar articles retrieved (search time: 93 ms)
1.

Background

Recent research has addressed the suppression of cortical sensory responses to altered auditory feedback that occurs at the onset of a speech utterance. However, there is reason to assume that the mechanisms underlying sensorimotor processing at mid-utterance differ from those involved in sensorimotor control at utterance onset. The present study examined the dynamics of event-related potentials (ERPs) to different acoustic versions of auditory feedback at mid-utterance.

Methodology/Principal findings

Subjects produced a vowel sound while hearing their pitch-shifted voice (100 cents), a sum of their vocalization and pure tones, or a sum of their vocalization and white noise at mid-utterance via headphones. Subjects also passively listened to playback of what they heard during active vocalization. Cortical ERPs were recorded in response to the different acoustic versions of feedback changes during both active vocalization and passive listening. The results showed that, relative to passive listening, active vocalization yielded enhanced P2 responses to the 100-cent pitch shifts, whereas P2 responses were suppressed when voice auditory feedback was distorted by pure tones or white noise.
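The abstract does not describe the analysis pipeline; as a rough, hypothetical sketch of how condition-wise ERPs of this kind are typically obtained, the snippet below epochs a single continuous EEG channel around feedback-change triggers, baseline-corrects, averages per condition, and quantifies P2 as the mean amplitude in a window around 200 ms. The sampling rate, window limits, and condition names are assumptions, not values from the study.

```python
import numpy as np

FS = 500                       # sampling rate in Hz (assumed, not from the study)
PRE, POST = 0.2, 0.5           # epoch from 200 ms before to 500 ms after feedback onset

def erp_average(eeg, onsets, labels):
    """Average baseline-corrected epochs of one EEG channel per condition.

    eeg    : 1-D array, continuous voltage trace
    onsets : sample indices of feedback-change onsets
    labels : condition label per event, e.g. 'pitch', 'tones', 'noise' (hypothetical names)
    """
    n_pre, n_post = int(PRE * FS), int(POST * FS)
    erps = {}
    for cond in set(labels):
        epochs = []
        for onset, lab in zip(onsets, labels):
            if lab != cond or onset < n_pre or onset + n_post > len(eeg):
                continue
            epoch = eeg[onset - n_pre:onset + n_post].astype(float)
            epoch -= epoch[:n_pre].mean()          # pre-stimulus baseline correction
            epochs.append(epoch)
        erps[cond] = np.mean(epochs, axis=0)
    return erps

def p2_mean_amplitude(erp, window=(0.15, 0.25)):
    """Mean amplitude in an assumed 150-250 ms post-onset window (typical P2 range)."""
    i0, i1 = (int((PRE + w) * FS) for w in window)
    return erp[i0:i1].mean()
```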

Conclusion/Significance

The present findings, for the first time, demonstrate a dynamic modulation of cortical activity as a function of the quality of acoustic feedback at mid-utterance, suggesting that auditory cortical responses can be enhanced or suppressed to distinguish self-produced speech from externally-produced sounds.

2.

Background

Our motor actions normally generate sensory events, but how do we know which events were self-generated and which have external causes? Here we use temporal adaptation to investigate the processing stage and generality of our sensorimotor timing estimates.

Methodology/Principal Findings

Adaptation to artificially-induced delays between action and event can produce a startling percept—upon removal of the delay it feels as if the sensory event precedes its causative action. This temporal recalibration of action and event occurs in a quantitatively similar manner across the sensory modalities. Critically, it is robust to the replacement of one sense during the adaptation phase with another sense during the test judgment.

Conclusions/Significance

Our findings suggest a high-level, supramodal recalibration mechanism. The effects are well described by a simple model which attempts to preserve the expected synchrony between action and event, but only when causality indicates it is reasonable to do so. We further demonstrate that this model successfully characterises related adaptation data from outside the sensorimotor domain.

3.

Background

When one watches a sports game, one may feel her/his own muscles moving in synchrony with the player's. Such parallels between the observed actions of others and one's own have been well supported by recent progress in neuroscience and have been attributed to what has been coined the "mirror system." It is likely that due to such phenomena, we are able to learn motor skills just by observing an expert's performance. Yet it is unknown whether such indirect learning occurs only at higher cognitive levels, or also at basic sensorimotor levels where sensorimotor delay is compensated and the timing of sensory feedback is constantly calibrated.

Methodology/Principal Findings

Here, we show that subjects' passive observation of an actor manipulating a computer mouse with delayed auditory feedback led to shifts in the subjective simultaneity of the observers' own mouse manipulation and an auditory stimulus. Likewise, the subjects' own adaptation to the delayed feedback modulated their simultaneity judgments of another person's mouse manipulation and an auditory stimulus. Meanwhile, the subjective simultaneity of a simple visual disc and the auditory stimulus (flash test) was affected neither by observation of an actor nor by self-adaptation.

Conclusions/Significance

The lack of shift in the flash test for both conditions indicates that the recalibration transfer is specific to the action domain, and is not due to a general sensory adaptation. This points to the involvement of a system for the temporal monitoring of actions, one that processes both one's own actions and those of others.

4.
Liu P, Chen Z, Jones JA, Huang D, Liu H. PLoS ONE. 2011;6(7):e22791.

Background

Auditory feedback has been demonstrated to play an important role in the control of voice fundamental frequency (F0), but the mechanisms underlying the processing of auditory feedback remain poorly understood. It has been well documented that young adults can use auditory feedback to stabilize their voice F0 by making compensatory responses to perturbations they hear in their vocal pitch feedback. However, little is known about the effects of aging on the processing of audio-vocal feedback during vocalization.

Methodology/Principal Findings

In the present study, we recruited adults who were between 19 and 75 years of age and divided them into five age groups. Using a pitch-shift paradigm, the pitch of their vocal feedback was unexpectedly shifted ±50 or ±100 cents during sustained vocalization of the vowel sound /u/. Compensatory vocal F0 response magnitudes and latencies to pitch feedback perturbations were examined. A significant effect of age was found such that response magnitudes increased with increasing age until maximal values were reached for adults 51–60 years of age and then decreased for adults 61–75 years of age. Adults 51–60 years of age were also more sensitive to the direction and magnitude of the pitch feedback perturbations compared to younger adults.
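A cent is 1/100 of an equal-tempered semitone, so a shift of c cents corresponds to a frequency ratio of 2^(c/1200). The short sketch below, with a hypothetical baseline F0, makes the magnitudes of the ±50 and ±100 cent perturbations concrete.

```python
def cents_to_ratio(cents):
    """A shift of c cents multiplies frequency by 2**(c/1200)."""
    return 2 ** (cents / 1200)

f0 = 220.0                                  # hypothetical baseline F0 in Hz
for c in (+50, -50, +100, -100):
    print(f"{c:+d} cents -> {f0 * cents_to_ratio(c):.2f} Hz")
# +100 cents raises F0 by about 5.9%; -100 cents lowers it by about 5.6%.
```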

Conclusion

These findings demonstrate that the pitch-shift reflex systematically changes across the adult lifespan. Understanding aging-related changes to the role of auditory feedback is critically important for our theoretical understanding of speech production and the clinical applications of that knowledge.

5.

Background

The auditory continuity illusion, or the perceptual restoration of a target sound briefly interrupted by an extraneous sound, has been shown to depend on masking. However, little is known about factors other than masking.

Methodology/Principal Findings

We examined whether a sequence of flanking transient sounds affects the apparent continuity of a target tone alternated with a bandpass noise at regular intervals. The flanking sounds significantly increased the limit of perceiving apparent continuity in terms of the maximum target level at a fixed noise level, irrespective of the frequency separation between the target and flanking sounds: the flanking sounds enhanced the continuity illusion. This effect was dependent on the temporal relationship between the flanking sounds and noise bursts.

Conclusions/Significance

The spectrotemporal characteristics of the enhancement effect suggest that a mechanism to compensate for exogenous attentional distraction may contribute to the continuity illusion.

6.

Background

Singing in songbirds is a complex, learned behavior which shares many parallels with human speech. The avian vocal organ (syrinx) has two potential sound sources, and each sound generator is under unilateral, ipsilateral neural control. Different songbird species vary in their use of bilateral or unilateral phonation (lateralized sound production) and rapid switching between left and right sound generation (interhemispheric switching of motor control). Bengalese finches (Lonchura striata domestica) have received considerable attention, because they rapidly modify their song in response to manipulations of auditory feedback. However, how the left and right sides of the syrinx contribute to acoustic control of song has not been studied.

Methodology

Three manipulations of lateralized syringeal control of sound production were conducted. First, unilateral syringeal muscular control was eliminated by resection of the left or right tracheosyringeal portion of the hypoglossal nerve, which provides neuromuscular innervation of the syrinx. Spectral and temporal features of song were compared before and after lateralized nerve injury. In a second experiment, either the left or right sound source was devoiced to confirm the role of each sound generator in the control of acoustic phonology. Third, air pressure was recorded before and after unilateral denervation to enable quantification of acoustic change within individual syllables following lateralized nerve resection.

Significance

These experiments demonstrate that the left sound source produces louder, higher-frequency, lower-entropy sounds, and the right sound generator produces lower-amplitude, lower-frequency, higher-entropy sounds. The bilateral division of labor is complex, and the frequency specialization is the opposite of the pattern observed in most songbirds. Further, there is evidence for rapid interhemispheric switching during song production. Lateralized control of song production in Bengalese finches may enhance the acoustic complexity of song and facilitate the rapid modification of sound production following manipulations of auditory feedback.

7.

Background

Previous work on the human auditory cortex has revealed areas specialized in spatial processing, but how the neurons in these areas represent the location of a sound source remains unknown.

Methodology/Principal Findings

Here, we performed a magnetoencephalography (MEG) experiment with the aim of revealing the neural code of auditory space implemented by the human cortex. In a stimulus-specific adaptation paradigm, realistic spatial sound stimuli were presented in pairs of adaptor and probe locations. We found that the attenuation of the N1m response depended strongly on the spatial arrangement of the two sound sources. These location-specific effects showed that sounds originating from locations within the same hemifield activated the same neuronal population regardless of the spatial separation between the sound sources. In contrast, sounds originating from opposite hemifields activated separate groups of neurons.

Conclusions/Significance

These results are highly consistent with a rate code of spatial location formed by two opponent populations, one tuned to locations in the left hemifield and the other to those in the right. This indicates that the neuronal code of sound source location implemented by the human auditory cortex is similar to that previously found in other primates.

8.
The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity.

9.
Liu H, Wang EQ, Metman LV, Larson CR. PLoS ONE. 2012;7(3):e33629.

Background

One of the most common symptoms of speech deficits in individuals with Parkinson's disease (PD) is significantly reduced vocal loudness and pitch range. The present study investigated whether abnormal vocalizations in individuals with PD are related to sensory processing of voice auditory feedback. Perturbations in loudness or pitch of voice auditory feedback are known to elicit short latency, compensatory responses in voice amplitude or fundamental frequency.

Methodology/Principal Findings

Twelve individuals with Parkinson's disease and 13 age- and sex-matched healthy control subjects sustained a vowel sound (/α/) and received unexpected, brief (200 ms) perturbations in voice loudness (±3 or 6 dB) or pitch (±100 cents) auditory feedback. Results showed that, while all subjects produced compensatory responses in their voice amplitude or fundamental frequency, individuals with PD exhibited larger response magnitudes than the control subjects. Furthermore, for loudness-shifted feedback, upward stimuli resulted in shorter response latencies than downward stimuli in the control subjects but not in individuals with PD.
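The abstract does not state how compensatory response magnitudes and latencies were computed. One common convention in this literature, used here purely as an illustrative assumption rather than the authors' stated method, defines latency as the first post-perturbation sample exceeding 2 SD of the pre-perturbation baseline and magnitude as the peak deviation of the averaged, baseline-normalized F0 contour.

```python
import numpy as np

def response_metrics(f0_cents, fs, onset_idx, baseline_sd):
    """Estimate latency (ms) and magnitude (cents) of a compensatory vocal response.

    f0_cents    : averaged F0 contour in cents re: the pre-perturbation mean
    fs          : sampling rate of the contour (Hz)
    onset_idx   : sample index of the perturbation onset
    baseline_sd : standard deviation of the pre-perturbation contour
    """
    post = f0_cents[onset_idx:]
    above = np.abs(post) > 2 * baseline_sd        # assumed 2-SD threshold criterion
    if not above.any():
        return None, None                         # no detectable response
    first = int(np.argmax(above))                 # first supra-threshold sample
    latency_ms = 1000 * first / fs
    magnitude_cents = float(np.abs(post).max())   # peak deviation as response magnitude
    return latency_ms, magnitude_cents
```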

Conclusions/Significance

The larger response magnitudes in individuals with PD compared with the control subjects suggest that processing of voice auditory feedback is abnormal in PD. Although the precise mechanisms of the voice feedback processing are unknown, results of this study suggest that abnormal voice control in individuals with PD may be related to dysfunctional mechanisms of error detection or correction in sensory feedback processing.

10.
Kawashima T, Sato T. PLoS ONE. 2012;7(7):e41328.

Background

When a second sound follows a long first sound, its perceived location appears to be shifted away from that of the first sound (the localization/lateralization aftereffect). This aftereffect has often been considered to reflect an efficient neural coding of sound locations in the auditory system. To understand the determinants of the localization aftereffect, the current study examined whether it is induced by an interaural time difference (ITD) in the amplitude envelope of high-frequency transposed tones (over 2 kHz), which is known to function as a sound localization cue.

Methodology/Principal Findings

In Experiment 1, participants were required to adjust the position of a pointer to the perceived location of test stimuli before and after adaptation. Test and adapter stimuli were amplitude-modulated (AM) sounds presented at high frequencies, and their positional differences were manipulated solely by the envelope ITD. Results showed that the adapter's ITD systematically affected the perceived position of test sounds in the directions expected from the localization/lateralization aftereffect when the adapter was presented at ±600 µs ITD; a corresponding significant effect was not observed for a 0 µs ITD adapter. In Experiment 2, the observed adapter effect was confirmed using a forced-choice task. It was also found that adaptation to the AM sounds at high frequencies did not significantly change the perceived position of pure-tone test stimuli in the low frequency region (128 and 256 Hz).
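As a sketch of the kind of stimulus manipulated here, the snippet below builds a stereo amplitude-modulated high-frequency tone whose modulation envelope, but not carrier, is delayed in one ear. Carrier and modulation frequencies, ITD, duration, and sampling rate are illustrative values only, not the study's exact parameters.

```python
import numpy as np

def am_tone_with_envelope_itd(fc=4000, fm=128, itd_us=600, dur=0.5, fs=48000):
    """Stereo AM tone with an interaural delay applied to the envelope only.

    The carrier (fc) is identical in both channels; only the modulation
    envelope (fm) is shifted by the ITD, as in envelope-ITD experiments.
    All parameter values are illustrative assumptions.
    """
    t = np.arange(int(dur * fs)) / fs
    itd = itd_us * 1e-6
    carrier = np.sin(2 * np.pi * fc * t)
    env_left = 0.5 * (1 + np.sin(2 * np.pi * fm * t))
    env_right = 0.5 * (1 + np.sin(2 * np.pi * fm * (t - itd)))   # delayed envelope
    return np.column_stack([env_left * carrier, env_right * carrier])
```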

Conclusions/Significance

The findings in the current study indicate that ITD in the envelope at high frequencies induces the localization aftereffect. This suggests that ITD in the high frequency region is involved in adaptive plasticity of auditory localization processing.

11.

Background

A stimulus approaching the body requires fast processing and appropriate motor reactions. In monkeys, fronto-parietal networks are involved both in integrating multisensory information within a limited space surrounding the body (i.e. peripersonal space, PPS) and in action planning and execution, suggesting an overlap between sensory representations of space and motor representations of action. In the present study we investigate whether these overlapping representations also exist in the human brain.

Methodology/Principal Findings

We recorded motor-evoked potentials (MEPs) from hand muscles, induced by single-pulse transcranial magnetic stimulation (TMS), after presenting an auditory stimulus either near the hand or in far space. MEPs recorded 50 ms after the near-sound onset were enhanced compared to MEPs evoked after far sounds. This near-far modulation faded at longer inter-stimulus intervals, and reversed completely for MEPs recorded 300 ms after the sound onset. At that time point, higher motor excitability was associated with far sounds. Such auditory modulation of hand motor representation was specific to a hand-centred, not a body-centred, reference frame.

Conclusions/Significance

This pattern of corticospinal modulation highlights the relation between space and time in the PPS representation: an early facilitation for near stimuli may reflect immediate motor preparation, whereas, at later time intervals, motor preparation relates to distant stimuli potentially approaching the body.

12.

Background

Because songbirds, like humans, learn their vocalizations through imitation during a juvenile stage, they have often been used as model animals to study the mechanisms of human verbal learning. Numerous anatomical and physiological studies have suggested that songbirds have a neural network in their brain, called the ‘song system’, specialized for vocal learning and production. However, the molecular mechanisms that regulate their vocal development remain unknown. It has been suggested that type-II cadherins are involved in synapse formation and function. Previously, we found that in a songbird, the Bengalese finch, type-II cadherin expression in the robust nucleus of the arcopallium (RA) switches from cadherin-7-positive to cadherin-6B-positive during the transition from the sensory to the sensorimotor learning stage. Furthermore, in vitro analysis using cultured rat hippocampal neurons revealed that cadherin-6B enhanced and cadherin-7 suppressed the frequency of miniature excitatory postsynaptic currents by regulating dendritic spine morphology.

Methodology/Principal Findings

To explore the role of cadherins in vocal development, we performed an in vivo behavioral analysis of cadherin function with lentiviral vectors. Overexpression of cadherin-7 in the juvenile and the adult stages resulted in severe defects in vocal production. In both cases, harmonic sounds typically seen in the adult Bengalese finch songs were particularly affected.

Conclusions/Significance

Our results suggest that cadherins control vocal production, particularly harmonic sounds, probably by modulating neuronal morphology in the RA. It appears that the switch in cadherin expression from the sensory to the sensorimotor learning stage enhances the ability to produce the varied vocalizations that are essential for sensorimotor learning in a trial-and-error manner.

13.

Background

Recent neuroimaging studies have revealed that putatively unimodal regions of visual cortex can be activated during auditory tasks in sighted as well as in blind subjects. However, the task determinants and functional significance of auditory occipital activations (AOAs) remain unclear.

Methodology/Principal Findings

We examined AOAs in an intermodal selective attention task to distinguish whether they were stimulus-bound or recruited by higher-level cognitive operations associated with auditory attention. Cortical surface mapping showed that auditory occipital activations were localized to retinotopic visual cortex subserving the far peripheral visual field. AOAs depended strictly on the sustained engagement of auditory attention and were enhanced in more difficult listening conditions. In contrast, unattended sounds produced no AOAs regardless of their intensity, spatial location, or frequency.

Conclusions/Significance

Auditory attention, but not passive exposure to sounds, routinely activated peripheral regions of visual cortex when subjects attended to sound sources outside the visual field. Functional connections between auditory cortex and visual cortex subserving the peripheral visual field appear to underlie the generation of AOAs, which may reflect the priming of visual regions to process soon-to-appear objects associated with unseen sound sources.

14.

Background

Barn owls integrate spatial information across frequency channels to localize sounds in space.

Methodology/Principal Findings

We presented barn owls with synchronous sounds that contained different bands of frequencies (3–5 kHz and 7–9 kHz) from different locations in space. When the owls were confronted with the conflicting localization cues from two synchronous sounds of equal level, their orienting responses were dominated by one of the sounds: they oriented toward the location of the low frequency sound when the sources were separated in azimuth; in contrast, they oriented toward the location of the high frequency sound when the sources were separated in elevation. We identified neural correlates of this behavioral effect in the optic tectum (OT, superior colliculus in mammals), which contains a map of auditory space and is involved in generating orienting movements to sounds. We found that low frequency cues dominate the representation of sound azimuth in the OT space map, whereas high frequency cues dominate the representation of sound elevation.

Conclusions/Significance

We argue that the dominance hierarchy of localization cues reflects several factors: 1) the relative amplitude of the sound providing the cue, 2) the resolution with which the auditory system measures the value of a cue, and 3) the spatial ambiguity in interpreting the cue. These same factors may contribute to the relative weighting of sound localization cues in other species, including humans.

15.

Background

Audition provides important cues about stimulus motion, although vision may provide the most salient information. It has been reported that a sound of fixed intensity tends to be judged as decreasing in intensity after adaptation to looming visual stimuli or as increasing in intensity after adaptation to receding visual stimuli. This audiovisual interaction in motion aftereffects indicates that there are multimodal contributions to motion perception at early levels of sensory processing. However, there has been no report that sounds can induce the perception of visual motion.

Methodology/Principal Findings

A visual stimulus blinking at a fixed location was perceived to be moving laterally when the flash onset was synchronized to an alternating left-right sound source. This illusory visual motion was strengthened with increasing retinal eccentricity (2.5 to 20 deg) and occurred more frequently when the onsets of the audio and visual stimuli were synchronized.

Conclusions/Significance

We clearly demonstrated that the alternation of sound location induces illusory visual motion when vision cannot provide accurate spatial information. The present findings strongly suggest that the neural representations of auditory and visual motion processing can bias each other, which yields the best estimates of external events in a complementary manner.

16.

Background

We physically interact with external stimuli when they occur within a limited space immediately surrounding the body, i.e., Peripersonal Space (PPS). In the primate brain, specific fronto-parietal areas are responsible for the multisensory representation of PPS, by integrating tactile, visual and auditory information occurring on and near the body. Dynamic stimuli are particularly relevant for PPS representation, as they might refer to potential harms approaching the body. However, behavioural tasks for studying PPS representation with moving stimuli are lacking. Here we propose a new dynamic audio-tactile interaction task in order to assess the extension of PPS in a more functionally and ecologically valid condition.

Methodology/Principal Findings

Participants vocally responded to a tactile stimulus administered to the hand at different delays from the onset of task-irrelevant dynamic sounds, which gave the impression of a sound source either approaching or receding from the subject's hand. Results showed that a moving auditory stimulus speeded up the processing of a tactile stimulus at the hand as long as it was perceived at a limited distance from the hand, that is, within the boundaries of the PPS representation. The audio-tactile interaction effect was stronger when sounds were approaching than when sounds were receding.

Conclusion/Significance

This study provides a new method to dynamically assess PPS representation: The function describing the relationship between tactile processing and the position of sounds in space can be used to estimate the location of PPS boundaries, along a spatial continuum between far and near space, in a valuable and ecologically significant way.
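The abstract leaves the functional form unspecified; a sigmoid fit of reaction time against sound-source distance is a common choice in this literature and is assumed here purely for illustration, with invented data, to show how a PPS boundary estimate could be read off the fitted midpoint.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(d, rt_min, rt_max, d_c, slope):
    """Reaction time as a function of sound distance d; the midpoint d_c is
    often taken as an estimate of the PPS boundary (assumed model form)."""
    return rt_min + (rt_max - rt_min) / (1 + np.exp(-(d - d_c) / slope))

# Hypothetical data: tactile RTs (ms) at increasing sound-source distances (cm)
distance = np.array([5, 15, 25, 35, 45, 55, 65, 75], dtype=float)
rt = np.array([355, 358, 362, 375, 392, 401, 404, 405], dtype=float)

params, _ = curve_fit(sigmoid, distance, rt, p0=[350, 410, 40, 5])
print(f"estimated PPS boundary ≈ {params[2]:.1f} cm")
```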

17.
Mochida T, Gomi H, Kashino M. PLoS ONE. 2010;5(11):e13866.

Background

There has been plentiful evidence of kinesthetically induced rapid compensation for unanticipated perturbation in speech articulatory movements. However, the role of auditory information in stabilizing articulation has been little studied except for the control of voice fundamental frequency, voice amplitude and vowel formant frequencies. Although the influence of auditory information on the articulatory control process is evident in unintended speech errors caused by delayed auditory feedback, the direct and immediate effect of auditory alteration on the movements of articulators has not been clarified.

Methodology/Principal Findings

This work examined whether temporal changes in the auditory feedback of bilabial plosives immediately affect the subsequent lip movement. We conducted experiments with an auditory feedback alteration system that enabled us to replace or block speech sounds in real time. Participants were asked to produce the syllable /pa/ repeatedly at a constant rate. During the repetition, normal auditory feedback was interrupted, and one of three pre-recorded syllables /pa/, /ɸa/, or /pi/, spoken by the same participant, was presented once at a different timing from the anticipated production onset, while no feedback was presented for subsequent repetitions. Comparisons of the labial distance trajectories under altered and normal feedback conditions indicated that the movement quickened during the short period immediately after the alteration onset when /pa/ was presented 50 ms before the expected timing. Such a change was not significant under the other feedback conditions we tested.

Conclusions/Significance

The earlier articulation rapidly induced by the progressive auditory input suggests that a compensatory mechanism helps to maintain a constant speech rate by detecting errors between the internally predicted and actually provided auditory information associated with self movement. The timing- and context-dependent effects of feedback alteration suggest that the sensory error detection works in a temporally asymmetric window where acoustic features of the syllable to be produced may be coded.

18.
Papes S, Ladich F. PLoS ONE. 2011;6(10):e26479.

Background

Sound production and hearing sensitivity of ectothermic animals are affected by the ambient temperature. This is the first study investigating the influence of temperature on both sound production and on hearing abilities in a fish species, namely the neotropical Striped Raphael catfish Platydoras armatulus.

Methodology/Principal Findings

Doradid catfishes produce stridulation sounds by rubbing the pectoral spines in the shoulder girdle and drumming sounds by an elastic spring mechanism which vibrates the swimbladder. Eight fish were acclimated for at least three weeks to 22°C, then to 30°C, and again to 22°C. Sounds were recorded in distress situations when fish were hand-held. The stridulation sounds became shorter at the higher temperature, whereas pulse number, maximum pulse period and sound pressure level did not change with temperature. The dominant frequency increased when the temperature was raised to 30°C and the minimum pulse period became longer when the temperature decreased again. The fundamental frequency of drumming sounds increased at the higher temperature. Using the auditory evoked potential (AEP) recording technique, the hearing thresholds were tested at six different frequencies from 0.1 to 4 kHz. The temporal resolution was determined by analyzing the minimum resolvable click period (0.3–5 ms). The hearing sensitivity was higher at the higher temperature and differences were more pronounced at higher frequencies. In general, latencies of AEPs in response to single clicks became shorter at the higher temperature, whereas temporal resolution in response to double-clicks did not change.

Conclusions/Significance

These data indicate that sound characteristics as well as hearing abilities are affected by temperature in fishes. Constraints imposed on hearing sensitivity at different temperatures cannot be compensated for even by longer acclimation periods. These changes in sound production and detection suggest that acoustic orientation and communication are affected by temperature changes in the neotropical catfish P. armatulus.

19.

Background

Hearing ability is essential for normal speech development; however, the precise mechanisms linking auditory input to the improvement of speaking ability remain poorly understood. Auditory feedback during speech production is believed to play a critical role by providing the nervous system with information about speech outcomes that is used to learn and subsequently fine-tune speech motor output. Surprisingly, few studies have directly investigated such auditory-motor learning in the speech production of typically developing children.

Methodology/Principal Findings

In the present study, we manipulated auditory feedback during speech production in a group of 9- to 11-year-old children, as well as in adults. Following a period of speech practice under conditions of altered auditory feedback, compensatory changes in speech production and perception were examined. Consistent with prior studies, the adults exhibited compensatory changes in both their speech motor output and their perceptual representations of speech sound categories. The children exhibited compensatory changes in the motor domain, with a change in speech output that was similar in magnitude to that of the adults; however, the children showed no reliable compensatory effect on their perceptual representations.

Conclusions

The results indicate that 9- to 11-year-old children, whose speech motor and perceptual abilities are still not fully developed, are nonetheless capable of auditory-feedback-based sensorimotor adaptation, supporting a role for such learning processes in speech motor development. Auditory feedback may play a more limited role, however, in the fine-tuning of children's perceptual representations of speech sound categories.

20.

Background

Humans can easily restore a speech signal that is temporally masked by an interfering sound (e.g., a cough masking parts of a word in a conversation), and listeners have the illusion that the speech continues through the interfering sound. This perceptual restoration for human speech is affected by prior experience. Here we provide evidence for perceptual restoration in complex vocalizations of a songbird that are acquired by vocal learning in a similar way as humans learn their language.

Methodology/Principal Findings

European starlings were trained in a same/different paradigm to report salient differences between successive sounds. The birds' response latency for discriminating between a stimulus pair is an indicator of the salience of the difference, and these latencies can be used to evaluate perceptual distances using multi-dimensional scaling. For familiar motifs, the birds showed a large perceptual distance when discriminating between song motifs that were muted for brief periods and complete motifs. If the muted periods were filled with noise, the perceptual distance was reduced. For unfamiliar motifs, no such difference was observed.
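As a sketch of the scaling step, the snippet below feeds a latency-derived dissimilarity matrix to metric multidimensional scaling via scikit-learn. The matrix values are invented, and the mapping from latencies to dissimilarities (longer discrimination latencies taken to mean smaller perceptual distances) is an assumption for illustration.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical symmetric dissimilarity matrix for four stimuli, derived by
# rescaling discrimination latencies so that faster responses (more salient
# differences) map to larger dissimilarities (all values invented).
dissimilarity = np.array([
    [0.0, 0.8, 0.9, 0.3],
    [0.8, 0.0, 0.4, 0.7],
    [0.9, 0.4, 0.0, 0.6],
    [0.3, 0.7, 0.6, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)   # 2-D perceptual map of the four stimuli
print(coords)
```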

Conclusions/Significance

The results suggest that starlings are able to perceptually restore partly masked sounds and, similarly to humans, rely on prior experience. They may be a suitable model to study the mechanism underlying experience-dependent perceptual restoration.

