Similar Articles
Found 20 similar articles (search time: 875 ms)
1.
Ward LM, MacLean SE, Kirschner A. PLoS ONE. 2010;5(12):e14371
Neural synchronization is a mechanism whereby functionally specific brain regions establish transient networks for perception, cognition, and action. Direct addition of weak noise (fast random fluctuations) to various neural systems enhances synchronization through the mechanism of stochastic resonance (SR). Moreover, SR also occurs in human perception, cognition, and action. Perception, cognition, and action are closely correlated with, and may depend upon, synchronized oscillations within specialized brain networks. We tested the hypothesis that SR-mediated neural synchronization occurs within and between functionally relevant brain areas and thus could be responsible for behavioral SR. We measured the 40-Hz transient response of the human auditory cortex to brief pure tones. This response arises when the ongoing, random-phase, 40-Hz activity of a group of tuned neurons in the auditory cortex becomes synchronized in response to the onset of an above-threshold sound at its "preferred" frequency. We presented a stream of near-threshold standard sounds in various levels of added broadband noise and measured subjects' 40-Hz response to the standards in a deviant-detection paradigm using high-density EEG. We used independent component analysis and dipole fitting to locate neural sources of the 40-Hz response in bilateral auditory cortex, left posterior cingulate cortex and left superior frontal gyrus. We found that added noise enhanced the 40-Hz response in all these areas. Moreover, added noise also increased the synchronization between these regions in alpha and gamma frequency bands both during and after the 40-Hz response. Our results demonstrate neural SR in several functionally specific brain regions, including areas not traditionally thought to contribute to the auditory 40-Hz transient response. In addition, we demonstrated SR in the synchronization between these brain regions. 
Thus, both intra- and inter-regional synchronization of neural activity are facilitated by the addition of moderate amounts of random noise. Because the noise levels in the brain fluctuate with arousal system activity, particularly across sleep-wake cycles, optimal neural noise levels, and thus SR, could be involved in optimizing the formation of task-relevant brain networks at several scales under normal conditions.
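The core stochastic-resonance effect the abstract describes, where adding a moderate amount of noise makes a subthreshold periodic input detectable, can be illustrated with a toy threshold detector. This is an illustrative sketch, not the authors' analysis; the signal amplitude, threshold, and noise levels are invented for demonstration:

```python
import numpy as np

def detector_output(noise_sd, seed=0):
    """Threshold detector driven by a subthreshold 40-Hz sinusoid plus noise.

    Returns the correlation between the input sinusoid and the binary
    threshold-crossing output -- a simple proxy for how well the periodic
    signal is recovered at a given noise level.
    """
    rng = np.random.default_rng(seed)
    t = np.linspace(0, 1, 4000, endpoint=False)
    signal = 0.8 * np.sin(2 * np.pi * 40 * t)  # subthreshold 40-Hz input
    threshold = 1.0                            # never crossed without noise
    x = signal + rng.normal(0, noise_sd, t.size)
    spikes = (x > threshold).astype(float)
    if spikes.std() == 0:                      # no crossings at all
        return 0.0
    return float(np.corrcoef(signal, spikes)[0, 1])

# Output quality is zero without noise, rises at moderate noise, and
# degrades again when noise dominates -- the stochastic-resonance signature.
quality = {sd: detector_output(sd) for sd in (0.0, 0.3, 3.0)}
```

The non-monotonic dependence on `noise_sd` is the point: the detector is silent with no noise, phase-locked to the 40-Hz input at moderate noise, and nearly random at high noise.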

2.
Althen H, Grimm S, Escera C. PLoS ONE. 2011;6(12):e28522
The detection of deviant sounds is a crucial function of the auditory system and is reflected by the automatically elicited mismatch negativity (MMN), an auditory evoked potential at 100 to 250 ms from stimulus onset. It has recently been shown that rarely occurring frequency and location deviants in an oddball paradigm trigger a more negative response than standard sounds at very early latencies in the middle latency response of the human auditory evoked potential. This fast, early capability of the auditory system is corroborated by the finding of neurons in the animal auditory cortex and subcortical structures that recover their adapted responsiveness to standard sounds when a rare change in a sound feature occurs. In this study, we investigated whether the detection of intensity deviants is also reflected at shorter latencies than those of the MMN. Auditory evoked potentials in response to click sounds were analyzed with respect to the auditory brainstem response, the middle latency response (MLR) and the MMN. Rare stimuli with a lower intensity level than standard stimuli elicited (in addition to an MMN) a more negative potential in the MLR at the transition from the Na to the Pa component, at approximately 24 ms from stimulus onset. This finding, together with the studies of frequency and location changes, suggests that the early automatic detection of deviant sounds in an oddball paradigm is a general property of the auditory system.

3.
Much of what we know regarding the effect of stimulus repetition on neuroelectric adaptation comes from studies using artificially produced pure tones or harmonic complex sounds. Little is known about the neural processes associated with the representation of everyday sounds and how these may be affected by aging. In this study, we used real-life, meaningful sounds presented at various azimuth positions and found that auditory evoked responses peaking at about 100 and 180 ms after sound onset decreased in amplitude with stimulus repetition. This neural adaptation was greater in young than in older adults and was more pronounced when the same sound was repeated at the same location. Moreover, the P2 waves showed differential patterns of domain-specific adaptation when location and identity were repeated among young adults. Background noise decreased ERP amplitudes and modulated the magnitude of repetition effects on both the N1 and P2 amplitudes, and these effects were comparable in young and older adults. These findings reveal an age-related difference in the neural processes associated with adaptation to meaningful sounds, which may relate to older adults' difficulty in ignoring task-irrelevant stimuli.

4.
To study how auditory cortical processing is affected by the anticipation and hearing of long emotional sounds, we recorded auditory evoked magnetic fields with a whole-scalp MEG device from 15 healthy adults who were listening to emotional or neutral sounds. Pleasant, unpleasant, or neutral sounds, each lasting 6 s, were played in random order, each preceded by a 100-ms cue tone (0.5, 1, or 2 kHz) presented 2 s before sound onset. The cue tones, indicating the valence of the upcoming emotional sounds, evoked typical transient N100m responses in the auditory cortex. During the rest of the anticipation period (until the beginning of the emotional sound), the auditory cortices of both hemispheres generated slow shifts of the same polarity as the N100m. During anticipation, the relative strengths of the auditory-cortex signals depended on the upcoming sound: towards the end of the anticipation period the activity became stronger when the subject was anticipating emotional rather than neutral sounds. During the actual emotional and neutral sounds, sustained fields were predominant in the left hemisphere for all sounds. The DC MEG signals measured during both anticipation and hearing of emotional sounds imply that, once the cue indicates the valence of the upcoming sound, auditory-cortex activity is modulated by the upcoming sound category throughout the anticipation period.

5.
Combining signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190–210 ms, for 1 kHz stimuli from 170–200 ms, for 2.5 kHz stimuli from 140–200 ms, and for 5 kHz stimuli from 100–200 ms. These findings suggest that a higher-frequency sound paired with a visual stimulus may be processed or integrated earlier, even though the auditory stimuli are task-irrelevant. Furthermore, audiovisual integration at late latencies (300–340 ms), with a fronto-central topography, was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain integrates a visual signal with auditory stimuli of different frequencies.

6.
Vinnik E, Itskov PM, Balaban E. PLoS ONE. 2011;6(2):e17266
Important sounds can be easily missed or misidentified in the presence of extraneous noise. We describe an auditory illusion in which a continuous ongoing tone becomes inaudible during a brief, non-masking noise burst more than one octave away, which is unexpected given the frequency resolution of human hearing. Participants strongly susceptible to this illusory discontinuity did not perceive illusory auditory continuity (in which a sound subjectively continues during a burst of masking noise) when the noises were short, yet did so at longer noise durations. Participants who were not prone to illusory discontinuity showed robust early electroencephalographic responses at 40-66 ms after noise burst onset, whereas those prone to the illusion lacked these early responses. These data suggest that short-latency neural responses to auditory scene components predict individual differences in how auditory scenes are subsequently parsed.

7.

Background

Recent research has addressed the suppression of cortical sensory responses to altered auditory feedback at speech-utterance onset. However, there is reason to assume that the mechanisms underlying sensorimotor processing at mid-utterance differ from those involved in sensorimotor control at utterance onset. The present study examined the dynamics of event-related potentials (ERPs) to different acoustic versions of auditory feedback at mid-utterance.

Methodology/Principal findings

Subjects produced a vowel sound while hearing their pitch-shifted voice (100 cents), a sum of their vocalization and pure tones, or a sum of their vocalization and white noise at mid-utterance via headphones. Subjects also passively listened to a playback of what they heard during active vocalization. Cortical ERPs were recorded in response to the different acoustic versions of feedback changes during both active vocalization and passive listening. The results showed that, relative to passive listening, active vocalization yielded enhanced P2 responses to the 100-cent pitch shifts, whereas suppression of P2 responses was observed when voice auditory feedback was distorted by pure tones or white noise.

Conclusion/Significance

The present findings demonstrate, for the first time, a dynamic modulation of cortical activity as a function of the quality of acoustic feedback at mid-utterance, suggesting that auditory cortical responses can be enhanced or suppressed to distinguish self-produced speech from externally produced sounds.

8.
Acoustic imaging of the respiratory system demonstrates regional changes of lung sounds that correspond to pulmonary ventilation. We investigated volume-dependent variations of lung sound phase and amplitude between two closely spaced sensors in five adults. Lung sounds were recorded at the posterior right upper, right lower, and left lower lobes during targeted breathing (1.2 ± 0.2 l/s; volume = 20-50 and 50-80% of vital capacity) and passive sound transmission (≤0.2 l/s; volumes as above). Average sound amplitudes were obtained after band-pass filtering to 75-150, 150-300, and 300-600 Hz. Cross-correlation established the phase relation of sound between sensors. Volume-dependent variations in phase (≤1.5 ms) and amplitude (≤11 dB) were observed at the lower lobes in the 150- to 300-Hz band. During inspiration, increasing delay and amplitude of sound at the caudal relative to the cranial sensor were also observed during passive transmission in several subjects. This previously unrecognized behavior of lung sounds over short distances might reflect spatial variations of the airways and diaphragm during breathing.
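The inter-sensor phase relation in the study above was established by cross-correlation. A minimal numpy sketch of the idea follows; synthetic white noise stands in for a band-passed lung-sound segment, and the sampling rate and delay values are invented for illustration:

```python
import numpy as np

def xcorr_lag(a, b):
    """Lag of `a` relative to `b` (in samples) at the cross-correlation peak."""
    c = np.correlate(a, b, mode="full")
    return int(np.argmax(c) - (len(b) - 1))

fs = 8000                            # assumed sampling rate, Hz
delay = 12                           # true inter-sensor delay: 12 samples = 1.5 ms
rng = np.random.default_rng(1)
src = rng.normal(size=4000 + delay)  # stand-in for a recorded lung sound
cranial = src[delay:]                # upper sensor hears the source first
caudal = src[:-delay]                # lower sensor hears it `delay` samples later

lag = xcorr_lag(caudal, cranial)     # recovered delay of caudal re: cranial
lag_ms = 1000.0 * lag / fs
```

At 8 kHz one sample corresponds to 0.125 ms, so phase differences on the ≤1.5 ms scale reported above are comfortably resolvable by peak-picking the cross-correlation.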

9.
Goense JB, Feng AS. PLoS ONE. 2012;7(2):e31589
Natural auditory scenes such as frog choruses consist of multiple sound sources (i.e., individual vocalizing males) producing sounds that overlap extensively in time and spectrum, often in the presence of other biotic and abiotic background noise. Detection of a signal in such environments is challenging, but it is facilitated when the noise shares common amplitude modulations across a wide frequency range, owing to a phenomenon called comodulation masking release (CMR). Here, we examined how properties of the background noise, such as its bandwidth and amplitude modulation, influence the detection threshold of a target sound (pulsed amplitude-modulated tones) by single neurons in the frog auditory midbrain (torus semicircularis, TS). We found that for both modulated and unmodulated masking noise, masking was generally stronger with increasing bandwidth, but it was weakened at the widest bandwidths. Masking was weaker for modulated than for unmodulated noise at all bandwidths. However, responses were heterogeneous, and only for a subpopulation of neurons was detection of the probe facilitated when the bandwidth of the modulated masker was increased beyond a certain value - such neurons might contribute to CMR. We found evidence that TS neurons exploit dips in the noise amplitude, responding strongly to target signals occurring during such dips. However, the interactions between the probe and masker responses were nonlinear, and other mechanisms, e.g., selective suppression of the response to the noise, may also be involved in the masking release.

10.
Sounds were produced by the topmouth minnow Pseudorasbora parva, a common Eurasian cyprinid, during feeding but not during intraspecific interactions. Feeding sounds were short broadband pulses with main energies between 100 and 800 Hz. They varied in their characteristics (number of single sounds per feeding sequence, sound duration and period, and sound pressure level) depending on the food type (chironomid larvae, Tubifex worms and flake food). The loudest sounds were emitted when food was taken up at the water surface, most probably reflecting 'suctorial' feeding. Auditory sensitivities were determined between 100 and 4000 Hz using the auditory evoked potential recording technique. Under laboratory conditions and in the presence of natural ambient noise recorded in Lake Neusiedl in eastern Austria, best hearing sensitivities were between 300 and 800 Hz (57 dB re 1 μPa vs. 72 dB in the presence of ambient noise). Threshold-to-noise ratios were positively correlated with sound frequency. The correlation between sound spectra and auditory thresholds revealed that P. parva can detect conspecific sounds at distances of up to 40 cm under ambient noise conditions. Thus, feeding sounds could serve as an auditory cue for the presence of food during foraging.

11.
For a gleaning bat hunting prey from the ground, rustling sounds generated by prey movements are essential for evoking hunting behaviour. The detection of prey-generated rustling sounds may depend heavily on the time structure of the prey-generated and masking sounds, owing to their spectral similarity. Here, we systematically investigate the effect of temporal structure on psychophysical rustling-sound detection in the gleaning bat Megaderma lyra. A recorded rustling sound serves as the signal; the maskers are either Gaussian noise or broadband noise with various degrees of envelope fluctuation. Exploratory experiments indicated that selective manipulation of the temporal structure of the rustling sound does not influence its detection in a Gaussian-noise masker. The results of the main experiment show, however, that the temporal structure of the masker has a strong and systematic effect on rustling-sound detection: when the width of irregularly spaced gaps in the masker exceeded about 0.3 ms, rustling-sound detection improved monotonically with increasing gap duration. Computer simulations of this experiment reveal that a combined detection strategy of spectral and temporal analysis underlies rustling-sound detection in fluctuating masking sounds.

12.
This evoked potential study of the bullfrog's auditory thalamic area (an auditory-responsive region in the posterior dorsal thalamus) shows that complex processing, distinct from that reported in lower auditory regions, occurs in this center. An acoustic stimulus consisting of two tones, one of which stimulates either the low-frequency or the mid-frequency sensitive population of auditory nerve fibers from the amphibian papilla and the other the high-frequency sensitive population of fibers from the basilar papilla, evoked a maximal response. The amplitude of the response to simultaneous stimulation of the two auditory organs was, in some locations, much larger than the linear sum of the responses to the individual tones presented separately. Bimodal spectral stimuli with relatively long rise-times (≥100 ms) evoked much larger responses than similar sounds with short rise-times. The optimal rise-times were close to those occurring in the bullfrog's mating call. The response depended on waveform periodicity and harmonic content, with a fundamental frequency of 200 Hz producing a larger response than fundamentals of 50, 100 or 300 Hz. Six of the natural calls in the bullfrog's vocal repertoire were tested, and the mating call and warning call were found to evoke the best responses. Each of these calls stimulates the two auditory organs simultaneously. The evoked response had a long refractory period that could not be altered by lesioning the efferent telencephalic pathways. The type of spectral and temporal information extracted by the auditory thalamic area suggests that this center is involved in processing complex sounds and likely plays an important role in the bullfrog's detection of some of its vocal signals.

13.
This psychophysics study investigated whether prior auditory conditioning influences how a sound interacts with visual perception. In the conditioning phase, subjects were presented with three pure tones (conditioned stimuli, CS) that were paired with positive, negative or neutral unconditioned stimuli. As unconditioned reinforcers we employed pictures (highly pleasant, unpleasant and neutral) or monetary outcomes (+50 euro cents, −50 cents, 0 cents). In the subsequent visual selective attention paradigm, subjects were presented with near-threshold Gabors displayed in their left or right hemifield. Critically, the Gabors were presented in synchrony with one of the conditioned sounds. Subjects discriminated whether the Gabors were presented in their left or right hemifield. Participants determined the location more accurately when the Gabors were presented in synchrony with positive relative to neutral sounds, irrespective of reinforcer type. Thus, previously rewarded, relative to neutral, sounds increased the bottom-up salience of the visual Gabors. Our results are the first demonstration that prior auditory conditioning is a potent mechanism for modulating the effect of sounds on visual perception.

14.
The goal of the study was to enlarge knowledge of discrimination of complex sound signals by the auditory system in masking noise. For that, influence of masking noise on detection of shift of rippled spectrum was studied in normal listeners. The signal was a shift of ripple phase within a 0.5-oct wide rippled spectrum centered at 2 kHz. The ripples were frequency-proportional (throughout the band, ripple spacing was a constant proportion of the ripple center frequency). Simultaneous masker was a 0.5-oct noise below-, on-, or above the signal band. Both the low-frequency (center frequency 1 kHz) and on-frequency (the same center frequency as for the signal) maskers increased the thresholds for detecting ripple phase shift. However, the threshold dependence on the masker level was different for these two maskers. For the on-frequency masker, the masking effect primarily depended on the masker/signal ratio: the threshold steeply increased at a ratio of 5 dB, and no shift was detectable at a ratio of 10 dB. For the low-frequency masker, the masking effect primarily depended on the masker level: the threshold increased at a masker level of 80 dB SPL, and no shift was detectable at a masker level of 90 dB (for a signal level of 50 dB) or 100 dB (for a signal level of 80 dB). The high-frequency masker had little effect. The data were successfully simulated using an excitation-pattern model. In this model, the effect of the on-frequency masker appeared to be primarily due to a decrease of ripple depth. The effect of the low-frequency masker appeared due to widening of the auditory filters at high sound levels.  相似文献   

15.
Temporal summation was estimated by measuring detection thresholds for pulses with durations of 1–50 ms in the presence of noise maskers. The purpose of the study was to examine the effects of the spectral profiles and intensities of noise maskers on temporal summation, to investigate how peripheral processing of pulses with various frequency-time structures manifests in auditory responses, and to test whether temporal summation can be used to assess speech recognition. The central frequencies of the pulses and maskers were the same. The maskers had rippled amplitude spectra of two types: in some maskers the central frequency coincided with a spectral hump, whereas in others it coincided with a spectral dip (so-called on- and off-maskers). When the auditory system resolved the masker's spectral humps, the difference between detection thresholds for stimuli presented with each of the two masker types was non-zero. Assessing temporal summation and the threshold difference between the on- and off-masker conditions allowed conclusions about auditory sensitivity and about the resolution of the maskers' spectral structure (frequency selectivity) for pulses of various durations within local frequency regions. To estimate the effect of the dynamic properties of hearing on sensitivity and frequency selectivity, we varied masker intensity. Temporal summation was measured with on- and off-maskers of various intensities in two frequency ranges (2 and 4 kHz) in four subjects with normal hearing and in one person with age-related hearing impairment who complained of reduced speech recognition in noise.
Pulses shorter than 10 ms were treated as simple models of consonant sounds, and tone pulses longer than 10 ms as simple models of vowel sounds. In subjects with normal hearing, at moderate masker intensities, temporal summation was enhanced for the short ('consonant') pulses, and resolution of the rippled masker spectra improved for both short and tone ('consonant' and 'vowel') pulses. We suppose that the enhanced summation is related to refractoriness of the auditory nerve fibers. In the 4-kHz range, the subject with age-related hearing impairment did not resolve the ripple structure of the maskers in the presence of the short ('consonant') pulses. We suppose that this impairment was caused by abnormal synchronization of the auditory nerve fiber responses evoked by the pulses, resulting in reduced speech recognition.

16.
Two freshwater gobies Padogobius martensii and Gobius nigricans live in shallow (5-70 cm) stony streams, and males of both species produce courtship sounds. A previous study demonstrated high noise levels near waterfalls, a quiet window in the noise around 100 Hz at noisy locations, and extremely short-range propagation of noise and goby signals. To investigate the relationship of this acoustic environment to communication, we determined audiograms for both species and measured parameters of courtship sounds produced in the streams. We also deflated the swimbladder in P. martensii to determine its effect on frequency utilization in sound production and hearing. Both species are maximally sensitive at 100 Hz and produce low-frequency sounds with main energy from 70 to 100-150 Hz. Swimbladder deflation does not affect auditory threshold or dominant frequency of courtship sounds and has no or minor effects on sound amplitude. Therefore, both species utilize frequencies for hearing and sound production that fall within the low-frequency quiet region, and the equivalent relationship between auditory sensitivity and maximum ambient noise levels in both species further suggests that ambient noise shapes hearing sensitivity.

17.
Cats were stimulated with tones and with natural sounds selected from the normal acoustic environment of the animal. Neural activity evoked by the natural sounds and tones was recorded in the cochlear nucleus and in the medial geniculate body. The set of biological sounds proved to be effective in influencing the neural activity of single cells at both levels of the auditory system. At the level of the cochlear nucleus, the response of a neuron evoked by a natural sound stimulus could be understood reasonably well on the basis of the structure of the spectrograms of the natural sounds and the unit's responses to tones. At the level of the medial geniculate body, analysis with tones did not provide sufficient information to explain the responses to natural sounds. At this level, the use of an ensemble of natural sound stimuli allows the investigation of neural properties that are not revealed by analysis with simple artificial stimuli. Guidelines for the construction of an ensemble of complex natural sound stimuli, based on the ecology and ethology of the animal under investigation, are discussed. This stimulus ensemble is defined as the Acoustic Biotope.

18.
Klinge A, Beutelmann R, Klump GM. PLoS ONE. 2011;6(10):e26124
The amount of masking of sounds from one source (signals) by sounds from a competing source (maskers) depends heavily on the sound characteristics of the masker and the signal and on their relative spatial locations. Numerous studies have investigated the ability to detect a signal in a speech or noise masker, or the effect of spatial separation of signal and masker on the amount of masking, but few studies have investigated the combined effects of many cues on masking, as is typical of natural listening situations. The current study, using free-field listening, systematically evaluates the combined effects of harmonicity and inharmonicity cues in multi-tone maskers, and of cues resulting from spatial separation of target signal and masker, on the detection of a pure tone in a multi-tone or noise masker. A linear binaural processing model was implemented to predict the masked thresholds, in order to estimate whether the observed thresholds can be accounted for by energetic masking in the auditory periphery or whether other effects are involved. Thresholds were determined for combinations of two target frequencies (1 and 8 kHz), two spatial configurations (masker and target either co-located or spatially separated by 90 degrees azimuth), and five masker types (four complex multi-tone stimuli, one noise masker). Spatial separation of target and masker resulted in a release from masking for all masker types. The amount of masking depended significantly on masker type and frequency range. The various harmonic and inharmonic relations between target and masker, or between components of the masker, resulted in a complex pattern of masked thresholds that were increased or decreased relative to the predicted energetic masking. The results indicate that harmonicity cues affect the detectability of a tonal target in a complex masker.

19.
Cornella M, Leung S, Grimm S, Escera C. PLoS ONE. 2012;7(8):e43604
Auditory deviance detection in humans is indexed by the mismatch negativity (MMN), a component of the auditory evoked potential (AEP) of the electroencephalogram (EEG) occurring at a latency of 100-250 ms after stimulus onset. However, using classic oddball paradigms, differential responses to regularity violations of simple auditory features have been found at the level of the middle latency response (MLR) of the AEP, within the first 50 ms after stimulus (deviation) onset. These findings suggest the existence of fast deviance-detection mechanisms for simple feature changes, but it is not clear whether deviance detection among more complex acoustic regularities can be observed at such early latencies. To test this, we examined the pre-attentive processing of rare stimulus repetitions in a sequence of tones alternating in frequency, in both the long and middle latency ranges. Additionally, we introduced occasional changes in the interaural time difference (ITD), so that a simple-feature regularity could be examined in the same paradigm. MMN was obtained for both repetition and ITD deviants, occurring at 150 ms and 100 ms after stimulus onset, respectively. At the level of the MLR, a difference between standards and ITD deviants was observed at the Na component (20-30 ms after stimulus onset) for 800 Hz tones, but not for repetition deviants. These findings suggest that detection mechanisms for violations of simple, but not more complex, regularities are already active in the MLR range, supporting the view that the auditory deviance-detection system is organized hierarchically.

20.
Some combinations of musical tones sound pleasing to Western listeners, and are termed consonant, while others sound discordant, and are termed dissonant. The perceptual phenomenon of consonance has been traced to the acoustic property of harmonicity. It has been repeatedly shown that neural correlates of consonance can be found as early as the auditory brainstem, as reflected in the harmonicity of the scalp-recorded frequency-following response (FFR). "Neural Pitch Salience" (NPS) measured from FFRs, essentially a time-domain equivalent of the classic pattern-recognition models of pitch, has been found to correlate with behavioral judgments of consonance for synthetic stimuli. Following the idea that the auditory system has evolved to process behaviorally relevant natural sounds, and in order to test the generalizability of this finding made with synthetic tones, we recorded FFRs for consonant and dissonant intervals composed of synthetic and natural stimuli. We found that NPS correlated with behavioral judgments of consonance and dissonance for synthetic but not for naturalistic sounds. These results suggest that while some form of harmonicity can be computed from the auditory brainstem response, the general percept of consonance and dissonance is not captured by this measure. It might either be represented in the brainstem in a different code (such as a place code) or arise at higher levels of the auditory pathway. Our findings further illustrate the importance of using natural sounds, as a complement to fully controlled synthetic sounds, when probing auditory perception.
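A crude time-domain analogue of a pitch-salience measure can be computed as the height of the normalized autocorrelation peak within the candidate pitch range; with synthetic two-tone intervals, a consonant fifth (3:2 ratio) yields a higher peak than a tritone-like ratio. This is a simplified stand-in for the FFR-based NPS measure, not the authors' method, and all frequencies and parameter values are invented:

```python
import numpy as np

def pitch_salience(x, fs, fmin=100.0, fmax=400.0):
    """Peak of the normalized autocorrelation within the candidate
    pitch-lag range -- a crude time-domain pitch-salience proxy."""
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
    ac = ac / ac[0]                                    # normalize by energy
    lo, hi = int(fs / fmax), int(fs / fmin) + 1
    return float(ac[lo:hi].max())

fs = 8000
t = np.arange(0, 0.2, 1.0 / fs)
# Consonant interval: perfect fifth (300 + 450 Hz), common periodicity at 150 Hz.
fifth = np.sin(2 * np.pi * 300 * t) + np.sin(2 * np.pi * 450 * t)
# Dissonant interval: tritone-like ratio (300 + 424 Hz), weak common periodicity.
tritone = np.sin(2 * np.pi * 300 * t) + np.sin(2 * np.pi * 424 * t)
```

The fifth has a strong autocorrelation peak near the lag of its common fundamental, while the near-irrational tritone ratio never lines both components up, so its best peak in the pitch range is lower.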
