Similar Articles
20 similar articles found (search time: 15 ms)
1.

Background

Synesthesia is a condition in which the stimulation of one sense elicits an additional experience, often in a different (i.e., unstimulated) sense. Although only a small proportion of the population is synesthetic, there is growing evidence that neurocognitively normal individuals also experience some form of synesthetic association between stimuli presented to different sensory modalities (e.g., between auditory pitch and visual size, where lower-frequency tones are associated with large objects and higher-frequency tones with small objects). While previous research has highlighted crossmodal interactions between synesthetically corresponding dimensions, the possible role of synesthetic associations in multisensory integration has not previously been considered.

Methodology

Here we investigate the effects of synesthetic associations by presenting pairs of asynchronous or spatially discrepant visual and auditory stimuli that were either synesthetically matched or mismatched. In a series of three psychophysical experiments, participants reported the relative temporal order of presentation or the relative spatial locations of the two stimuli.

Principal Findings

The reliability of non-synesthetic participants' estimates of both audiovisual temporal asynchrony and spatial discrepancy was lower for pairs of synesthetically matched than for synesthetically mismatched audiovisual stimuli.

Conclusions

Recent studies of multisensory integration have shown that reduced reliability of perceptual estimates of intersensory conflict is a marker of stronger coupling between the unisensory signals. Our results therefore indicate stronger coupling of synesthetically matched than mismatched stimuli and provide the first psychophysical evidence that synesthetic congruency can promote multisensory integration. Synesthetic crossmodal correspondences therefore appear to play a crucial (if unacknowledged) role in the multisensory integration of auditory and visual information.
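
A standard way to formalize this coupling claim is the maximum-likelihood (forced-fusion) model of cue combination, in which each cue is weighted by its reliability and the fused estimate always has lower variance than either unisensory estimate. A minimal sketch (this is the textbook model, not the authors' analysis code; the variance values are illustrative):

```python
import numpy as np

def fuse(s_a, var_a, s_v, var_v):
    """Maximum-likelihood fusion of an auditory and a visual estimate.

    Each cue i contributes an estimate s_i with variance var_i; weights
    are proportional to reliability (1 / variance).
    """
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    w_v = 1 - w_a
    s_hat = w_a * s_a + w_v * s_v
    var_hat = 1 / (1 / var_a + 1 / var_v)  # always <= min(var_a, var_v)
    return s_hat, var_hat

# illustrative values: a noisy auditory cue and a reliable visual cue
s_hat, var_hat = fuse(s_a=10.0, var_a=4.0, s_v=12.0, var_v=1.0)
```

Under stronger coupling, more of the audiovisual conflict is absorbed into a single fused estimate, which is why participants' estimates of the conflict itself become less reliable for synesthetically matched pairs.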

2.

Background

Body image distortion is a central symptom of Anorexia Nervosa (AN). Although corporeal awareness is multisensory, the majority of AN studies have investigated only visual misperception. We systematically reviewed AN studies that investigated nonvisual sensory inputs, using an integrative multisensory approach to body perception. We also discuss the findings in the light of AN neuroimaging evidence.

Methods

PubMed and PsycINFO were searched through March 2014. To be included in the review, studies were required to investigate a sample of patients with current or past AN together with a control group, and to use tasks that directly engaged one or more nonvisual sensory domains.

Results

Thirteen studies were included, covering a total of 223 people with current or past AN and 273 control subjects. Overall, the results show impairment in the tactile and proprioceptive domains of body perception in AN patients. Interoception and multisensory integration have rarely been explored directly in AN patients. A limitation of this review is the relatively small amount of literature available.

Conclusions

Our results showed that AN patients have a multisensory impairment of body perception that goes beyond visual misperception and involves tactile and proprioceptive sensory components. Furthermore, impairment of the tactile and proprioceptive components may be associated with parietal cortex alterations in AN patients. Interoception and multisensory integration have rarely been explored directly. Further research, using multisensory approaches as well as neuroimaging techniques, is needed to better define the complexity of body image distortion in AN.

Key Findings

The review suggests an altered capacity of AN patients to process and integrate bodily signals: body parts are experienced as dissociated from their holistic and perceptive dimensions. Specifically, not only perception but also memory, in particular sensorimotor/proprioceptive memory, likely shapes bodily experience in patients with AN.

3.

Background

Vision provides the most salient information about stimulus motion. However, it has recently been demonstrated that static visual stimuli are perceived as moving laterally when paired with alternating left-right sound sources. The underlying mechanism of this phenomenon remains unclear; it has not yet been determined whether auditory motion signals, rather than auditory positional signals, can directly contribute to visual motion perception.

Methodology/Principal Findings

Static visual flashes were presented at retinal locations outside the fovea, together with lateral auditory motion produced by a virtual stereo noise source smoothly shifting in the horizontal plane. The flashes appeared to move with the auditory motion when their spatiotemporal position fell in the middle of the auditory motion trajectory. Furthermore, the lateral auditory motion altered visual motion perception in a global motion display, in which localized motion signals from multiple visual stimuli were combined to produce a coherent visual motion percept.

Conclusions/Significance

These findings suggest that direct interactions exist between auditory and visual motion signals, and that there may be common neural substrates for auditory and visual motion processing.

4.

Background

We physically interact with external stimuli when they occur within a limited space immediately surrounding the body, i.e., peripersonal space (PPS). In the primate brain, specific fronto-parietal areas are responsible for the multisensory representation of PPS, integrating tactile, visual and auditory information occurring on and near the body. Dynamic stimuli are particularly relevant for PPS representation, as they may signal potential threats approaching the body. However, behavioural tasks for studying PPS representation with moving stimuli are lacking. Here we propose a new dynamic audio-tactile interaction task to assess the extension of PPS under more functionally and ecologically valid conditions.

Methodology/Principal Findings

Participants vocally responded to a tactile stimulus administered at the hand at different delays from the onset of task-irrelevant dynamic sounds which gave the impression of a sound source either approaching or receding from the subject’s hand. Results showed that a moving auditory stimulus speeded up the processing of a tactile stimulus at the hand as long as it was perceived at a limited distance from the hand, that is within the boundaries of PPS representation. The audio-tactile interaction effect was stronger when sounds were approaching compared to when sounds were receding.

Conclusion/Significance

This study provides a new method to dynamically assess PPS representation: The function describing the relationship between tactile processing and the position of sounds in space can be used to estimate the location of PPS boundaries, along a spatial continuum between far and near space, in a valuable and ecologically significant way.

5.

Background

The timing at which sensory input reaches conscious perception is an intriguing question still awaiting an answer. It is often assumed that visual and auditory percepts each have a modality-specific processing delay, and that their difference determines the perceived temporal offset.

Methodology/Principal Findings

Here, we show that the perception of audiovisual simultaneity can change flexibly and fluctuates over a short period of time while subjects observe a constant stimulus. We investigated the mechanisms underlying the spontaneous alternations in this audiovisual illusion and found that attention plays a crucial role. When attention was distracted from the stimulus, the perceptual transitions disappeared. When attention was directed to a visual event, the perceived timing of an auditory event was attracted towards that event.

Conclusions/Significance

This multistable display illustrates how flexible perceived timing can be, and at the same time offers a paradigm to dissociate perceptual from stimulus-driven factors in crossmodal feature binding. Our findings suggest that the perception of crossmodal synchrony depends on perceptual binding of audiovisual stimuli as a common event.

6.

Background

An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question.

Methodology/Principal Findings

Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ) task. Before training, groups were pre-tested on a range of TOJ tasks within and beyond their group's modality, so that transfer of any learning from the trained task could be measured by post-testing on the other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, nor did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes.

Conclusions/Significance

The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally the patterns of featural transfer suggest that perceptual learning of temporal order may be optimised to object-centered rather than viewer-centered constraints.

7.

Background

Many situations involving animal communication are dominated by recurring, stereotyped signals. How do receivers optimally distinguish between frequently recurring signals and novel ones? Cortical auditory systems are known to be pre-attentively sensitive to short-term delivery statistics of artificial stimuli, but it is unknown if this phenomenon extends to the level of behaviorally relevant delivery patterns, such as those used during communication.

Methodology/Principal Findings

We recorded and analyzed complete auditory scenes of spontaneously communicating zebra finch (Taeniopygia guttata) pairs over a week-long period, and show that they can produce tens of thousands of short-range contact calls per day. Individual calls recur at time scales (median interval 1.5 s) matching those at which mammalian sensory systems are sensitive to recent stimulus history. Next, we presented anesthetized birds with sequences of frequently recurring calls interspersed with rare ones, and recorded, in parallel, action potential and local field potential responses in the medio-caudal auditory forebrain at 32 unique sites. Variation in call recurrence rate over natural ranges led to widespread and significant modulation in the strength of neural responses. This modulation was highly call-specific in secondary auditory areas, but not in the main thalamo-recipient, primary auditory area.

Conclusions/Significance

Our results support the hypothesis that pre-attentive neural sensitivity to short-term stimulus recurrence is involved in the analysis of auditory scenes at the level of delivery patterns of meaningful sounds. This may enable birds to efficiently and automatically distinguish frequently recurring vocalizations from other events in their auditory scene.

8.
Neuhofer D, Ronacher B. PLoS ONE. 2012;7(3):e34384

Background

Animals that communicate by sound face the problem that the signals arriving at the receiver are often degraded and masked by noise. Frequency filters in the receiver's auditory system may improve the signal-to-noise ratio (SNR) by excluding parts of the spectrum not occupied by the species-specific signals. This solution, however, is hardly available to species that produce broadband signals or have ears with broad frequency tuning. In mammals, auditory filters exist that work in the temporal domain of amplitude modulations (AM). Do insects also use this type of filtering?

Principal Findings

Combining behavioural and neurophysiological experiments, we investigated whether AM filters may improve the recognition of masked communication signals in grasshoppers. The AM pattern of the sound, its envelope, is crucial for signal recognition in these animals. We degraded the species-specific song by adding random fluctuations to its envelope, using six noise bands that differed in their overlap with the spectral content of the song envelope. If AM filters contribute to reduced masking, signal recognition should depend on the degree of overlap between the song envelope spectrum and the noise spectra. Contrary to this prediction, resistance to signal degradation was the same for five of the six masker bands. Most remarkably, the band with the strongest frequency overlap with the natural song envelope (0–100 Hz) impaired acceptance of degraded signals the least. To assess the noise-filtering capacity of single auditory neurones, we quantified how spike trains changed as a function of masking level. Increasing signal degradation in the different frequency bands led to similar changes in the spike trains of most neurones.
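
The song envelope and its modulation spectrum (the quantity against which the noise bands were matched) can be estimated from the analytic signal. A minimal sketch on a synthetic AM signal (the 400 Hz carrier and 30 Hz modulation rate are illustrative, not grasshopper song parameters):

```python
import numpy as np

fs = 2000                      # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)  # 2 s of signal, 4000 samples
# synthetic "song": 400 Hz carrier, amplitude-modulated at 30 Hz
x = (1 + 0.8 * np.sin(2 * np.pi * 30 * t)) * np.sin(2 * np.pi * 400 * t)

# analytic signal via the FFT (what scipy.signal.hilbert computes)
X = np.fft.fft(x)
h = np.zeros(len(x))
h[0] = h[len(x) // 2] = 1      # keep DC and Nyquist
h[1:len(x) // 2] = 2           # double positive frequencies
envelope = np.abs(np.fft.ifft(X * h))    # instantaneous amplitude

# modulation spectrum: FFT of the envelope after removing its mean
env_spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
peak_freq = freqs[np.argmax(env_spec)]   # dominant modulation frequency
```

The envelope spectrum peaks at the modulation rate, well below the carrier, which is what makes it possible to quantify a noise band's overlap with the song envelope independently of the song's audio spectrum.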

Conclusions

There is no indication that auditory neurones of grasshoppers are specialized to improve the SNR with respect to the pattern of amplitude modulations.

9.

Background

The spatial unity between self and body can be disrupted by employing conflicting visual-somatosensory bodily input, thereby bringing neurological observations on bodily self-consciousness under scientific scrutiny. Here we designed a novel paradigm linking the study of bodily self-consciousness to the spatial representation of visuo-tactile stimuli by measuring crossmodal congruency effects (CCEs) for the full body.

Methodology/Principal Findings

We measured full body CCEs by attaching four vibrator-light pairs to the trunks (backs) of subjects, who viewed their own bodies from behind via a camera and a head-mounted display (HMD). Subjects made speeded elevation (up/down) judgments of the tactile stimuli while ignoring the light stimuli. To modulate self-identification with the seen body, subjects were stroked on their backs with a stick, and the felt stroking was either synchronous or asynchronous with the stroking seen via the HMD. We found that (1) tactile stimuli were mislocalized towards the seen body; (2) CCEs were modulated systematically during visual-somatosensory conflict when subjects viewed their body but not when they viewed a body-sized object, i.e., CCEs were larger during synchronous than during asynchronous stroking of the body; and (3) these changes in the mapping of tactile stimuli were induced in the same experimental condition in which predictable changes in bodily self-consciousness occurred.
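
The CCE itself is a simple difference score between incongruent and congruent trials. A minimal sketch with hypothetical reaction times (not the study's data):

```python
import numpy as np

# Crossmodal congruency effect (CCE): how much slower tactile elevation
# judgments are when the visual distractor appears at the incongruent
# elevation, relative to the congruent one.
rt_congruent = np.array([512.0, 498.0, 530.0, 505.0])    # ms, illustrative
rt_incongruent = np.array([575.0, 560.0, 590.0, 571.0])  # ms, illustrative

# a larger CCE indicates stronger visuo-tactile binding at that location
cce = rt_incongruent.mean() - rt_congruent.mean()
```

Comparing CCE magnitude across stroking conditions is then a direct readout of how strongly the seen body captures the mapping of the tactile stimuli.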

Conclusions/Significance

These data reveal that systematic alterations in the mapping of tactile stimuli occur in a full body illusion and thus establish CCE magnitude as an online performance proxy for subjective changes in global bodily self-consciousness.

10.
Ptitsyn A. PLoS ONE. 2008;3(3):e1842

Background

Microarrays are widely used to estimate the expression of thousands of genes in a biological sample. The resolution of this method is limited by background noise: genes expressed at low levels are detected with insufficient reliability, and the expression of many genes is never detected at all.

Methodology/Principal Findings

We have applied the principles of stochastic resonance to detect the expression of genes whose microarray signals fall below the background noise level. We report a periodic pattern detected in genes called “Absent” by traditional analysis. The pattern is consistent with the expression of conventionally detected genes and is specific to the tissue of origin. This effect is corroborated by analysis of oscillating gene expression in mouse (M. musculus) and yeast (S. cerevisiae).

Conclusion/Significance

Most genes usually considered silent are in fact expressed at a very low level. Stochastic resonance can be applied to detect changes in the expression pattern of low-expressed genes, as well as to validate probe performance in microarrays.

11.

Background:

One area of nanoscience deals with nanoscopic interactions between nanostructured materials and biological systems. To elucidate the effects of the substrate surface morphology and viscoelasticity on cell proliferation, fractal analysis was performed on endothelial cells cultured on nanocomposite samples based on silicone rubber (SR) and various concentrations of organomodified nanoclay (OC).

Methods:

The nanoclay/SR ratio was tailored to enhance cell behavior via changes in sample substrate surface roughness and viscoelasticity.

Results:

The surface roughness of the cured SR filled with negatively charged nanosilicate layers had a greater effect than elasticity on cell growth. The surface roughness of the SR nanocomposite samples increased with OC content, leading to enhanced cell growth and extracellular matrix (ECM) remodeling. This was consistent with the reduction, by the nanosilicate layers, of SR segmental motion and damping factor, the primary viscoelastic parameters, at higher clay concentrations.

Conclusions:

The inclusion of clay nanolayers affected the growth and behavior of endothelial cells on microtextured SR.

Key Words: silicone rubber, nanoclay, elastic modulus, roughness, cell proliferation

12.

Background

Tinnitus is an auditory sensation characterized by the perception of sound or noise in the absence of any external sound source. Based on neurobiological research, it is generally accepted that most forms of tinnitus are attributable to maladaptive plasticity following damage to the auditory system. Changes have been observed in auditory structures such as the inferior colliculus, the thalamus and the auditory cortex, as well as in non-auditory brain areas. However, the observed changes show great variability, so no conclusive picture has emerged. One reason may be the use of heterogeneous patient groups in data analysis.

Methodology

The aim of the present study was to delineate the differences between the neural networks involved in narrow-band noise and pure-tone tinnitus by conducting LORETA-based source analysis of resting-state EEG.

Conclusions

Results demonstrated that narrow-band noise tinnitus patients differ from pure-tone tinnitus patients in the lateral frontopolar cortex (BA 10), the posterior cingulate cortex (PCC) and the parahippocampal area, in the delta, beta and gamma frequency bands, respectively. The parahippocampal-PCC current density differences might be load-dependent, as noise-like tinnitus comprises multiple frequencies in contrast to pure-tone tinnitus. The lateral frontopolar differences might be related to pitch-specific memory retrieval.

13.
Modern driver assistance systems make increasing use of auditory and tactile signals in order to reduce the driver's visual information load. This entails potential crossmodal interaction effects that need to be taken into account in designing an optimal system. Here we show that saccadic reaction times to visual targets (cockpit or outside mirror), presented in a driving simulator environment and accompanied by auditory or tactile accessory stimuli, follow some well-known spatiotemporal rules of multisensory integration usually found under confined laboratory conditions. Auditory nontargets speed up reaction time by about 80 ms. The effect tends to be maximal when the nontarget is presented 50 ms before the target and when target and nontarget are spatially coincident. The effect of a tactile nontarget (vibrating steering wheel) was less pronounced and not spatially specific. The average reaction times are well described by the stochastic “time window of integration” model for multisensory integration developed by the authors. This two-stage model postulates that crossmodal interaction occurs only if the peripheral processes from the different sensory modalities terminate within a fixed temporal interval, and that the amount of crossmodal interaction manifests itself in an increase or decrease of second-stage processing time. A qualitative test is consistent with the model prediction that the probability of interaction, but not the amount of crossmodal interaction, depends on target–nontarget onset asynchrony. A quantitative model fit yields estimates of individual participants' parameters, including the size of the time window. Some consequences for the design of driver assistance systems are discussed.
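
The two-stage "time window of integration" structure described above can be sketched by Monte Carlo simulation; all parameter values here are illustrative, not the fitted estimates reported for these participants:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # simulated trials

# First stage: independent peripheral processing times, here exponential.
# Integration occurs only if the nontarget's peripheral process finishes
# before the target's, within a window of width w.
tau = -50.0   # nontarget onset 50 ms before the target (SOA)
w = 200.0     # width of the time window of integration (ms)
delta = 80.0  # second-stage facilitation when integration occurs (ms)
mu2 = 250.0   # mean second-stage duration without interaction (ms)

V = rng.exponential(70.0, n)  # visual target peripheral time (ms)
A = rng.exponential(90.0, n)  # auditory nontarget peripheral time (ms)
integrate = (A + tau < V) & (V < A + tau + w)

# Second stage: processing time is shortened by delta on integrated trials
rt = V + mu2 - delta * integrate
p_integration = integrate.mean()
mean_rt = rt.mean()
```

Note how the SOA (`tau`) changes only the probability that the two peripheral processes fall inside the window, while the size of the facilitation (`delta`) is fixed, matching the model prediction tested qualitatively in the study.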

14.

Background

Non-pulsatile tinnitus is considered a subjective auditory phantom phenomenon present in 10 to 15% of the population. Tinnitus as a phantom phenomenon is related to hyperactivity and reorganization of the auditory cortex. Magnetoencephalography studies demonstrate a correlation between gamma band activity in the contralateral auditory cortex and the presence of tinnitus. The present study aims to investigate the relation between objective gamma-band activity in the contralateral auditory cortex and subjective tinnitus loudness scores.

Methods and Findings

In unilateral tinnitus patients (N = 15; 10 right-sided, 5 left-sided), source analysis of resting-state electroencephalographic gamma band oscillations showed a strong positive correlation with Visual Analogue Scale loudness scores in the contralateral auditory cortex (max r = 0.73, p < 0.05).

Conclusion

Auditory phantom percepts thus show the same sound-level-dependent activation of the contralateral auditory cortex as observed in normal audition. In view of recent consciousness models and tinnitus network models, these results suggest that tinnitus loudness is coded by gamma band activity in the contralateral auditory cortex, but that this activity might not, by itself, be responsible for tinnitus perception.

15.

Background

The sound-induced flash illusion is an auditory-visual illusion: when a single flash is presented along with two or more beeps, observers report seeing two or more flashes. Previous research has shown that the illusion gradually disappears as the temporal delay between the auditory and visual stimuli increases, suggesting that the illusion is consistent with existing temporal rules of neural activation in the superior colliculus for multisensory stimuli. However, little is known about the effect of spatial incongruence, and whether the illusion follows the corresponding spatial rule. If the illusion occurs less strongly when the auditory and visual stimuli are separated, then the integrative processes supporting the illusion must be strongly dependent on spatial congruence. In that case, the illusion would be consistent with both the spatial and temporal rules describing the response properties of multisensory neurons in the superior colliculus.

Methodology/Principal Findings

The main aim of this study was to investigate the importance of spatial congruence in the flash-beep illusion. Selected combinations of one to four short flashes and zero to four short 3.5 kHz tones were presented, and observers were asked to count the number of flashes they saw. After replication of the basic illusion using centrally presented stimuli, the auditory and visual components of the illusion stimuli were presented either both 10 degrees to the left or right of fixation (spatially congruent) or on opposite sides (spatially incongruent), for a total separation of 20 degrees.

Conclusions/Significance

The sound-induced flash fission illusion was successfully replicated. However, when the sources of the auditory and visual stimuli were spatially separated, perception of the illusion was unaffected, suggesting that the “spatial rule” does not extend to behavioural responses in this illusion. We also found no evidence for the associated “fusion” illusion reportedly occurring when multiple flashes are accompanied by a single beep.

16.

Background

The auditory continuity illusion, the perceptual restoration of a target sound briefly interrupted by an extraneous sound, has been shown to depend on masking. However, little is known about factors other than masking.

Methodology/Principal Findings

We examined whether a sequence of flanking transient sounds affects the apparent continuity of a target tone alternated with a bandpass noise at regular intervals. The flanking sounds significantly increased the limit of perceiving apparent continuity in terms of the maximum target level at a fixed noise level, irrespective of the frequency separation between the target and flanking sounds: the flanking sounds enhanced the continuity illusion. This effect was dependent on the temporal relationship between the flanking sounds and noise bursts.

Conclusions/Significance

The spectrotemporal characteristics of the enhancement effect suggest that a mechanism to compensate for exogenous attentional distraction may contribute to the continuity illusion.

17.
Bair WN, Kiemel T, Jeka JJ, Clark JE. PLoS ONE. 2012;7(7):e40932

Background

Developmental Coordination Disorder (DCD) is a leading movement disorder in children that commonly involves poor postural control. A multisensory integration deficit, especially the inability to adaptively reweight to changing sensory conditions, has been proposed as a possible mechanism but remains insufficiently characterized. Empirical quantification of reweighting significantly advances our understanding of its developmental onset and improves the characterization of how it differs in children with DCD compared to their typically developing (TD) peers.

Methodology/Principal Findings

Twenty children with DCD (6.6 to 11.8 years) were tested with a protocol in which a visual scene and a touch bar simultaneously oscillated medio-laterally at different frequencies and various amplitudes. Their data were compared to data on TD children (4.2 to 10.8 years) from a previous study. Gains and phases were calculated for the medio-lateral responses of the head and center of mass to both sensory stimuli. Gains and phases were simultaneously fitted by linear functions of age for each amplitude condition, segment, modality and group. Fitted gains and phases at two comparison ages (6.6 and 10.8 years) were tested for reweighting within each group and for group differences. Children with DCD reweight touch and vision at a later age (10.8 years) than their TD peers (4.2 years). Children with DCD demonstrate weak visual reweighting, no advanced multisensory fusion, and phase lags larger than those of TD children in response to both touch and vision.
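
Gain and phase of a sway response to a sinusoidal stimulus can be read off the FFT bin at the driving frequency. A minimal sketch with synthetic data (the amplitudes, the 0.2 Hz driving frequency and the 0.6 rad lag are illustrative, not the study's stimulus parameters):

```python
import numpy as np

fs = 100                      # sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)  # one 20 s trial
f_stim = 0.2                  # visual scene oscillation frequency (Hz)

# synthetic stimulus and body-sway response: the response is smaller
# (gain < 1) and lags the stimulus by 0.6 rad
stimulus = 0.5 * np.sin(2 * np.pi * f_stim * t)
response = 0.3 * np.sin(2 * np.pi * f_stim * t - 0.6)

# transfer function value at the FFT bin of the driving frequency
k = int(round(f_stim * len(t) / fs))
S = np.fft.fft(stimulus)[k]
R = np.fft.fft(response)[k]
transfer = R / S
gain = np.abs(transfer)     # amplitude ratio response/stimulus
phase = np.angle(transfer)  # negative = response lags stimulus
```

Because the two stimuli (vision and touch) oscillate at different frequencies, the same computation at each frequency separates the response to each modality, which is what makes reweighting measurable as a change in the two gains.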

Conclusions/Significance

Two developmental perspectives, postural body scheme and dorsal stream development, are provided to explain the weak vision reweighting. The lack of multisensory fusion supports the notion that optimal multisensory integration is a slow developmental process and is vulnerable in children with DCD.

18.
Kim RS, Seitz AR, Shams L. PLoS ONE. 2008;3(1):e1532

Background

Studies of perceptual learning have largely focused on unisensory stimuli. However, multisensory interactions are ubiquitous in perception, even at early processing stages, and thus can potentially play a role in learning. Here, we examine the effect of auditory-visual congruency on visual learning.

Methodology/Principal Findings

Subjects were trained over five days on a visual motion coherence detection task with either congruent or incongruent audiovisual stimuli. Comparing performance on visual-only trials, we find that training with congruent audiovisual stimuli produces significantly better learning than training with incongruent audiovisual stimuli or with visual stimuli alone.

Conclusions/Significance

This advantage from stimulus congruency during training suggests that the benefits of multisensory training may result from audiovisual interactions at a perceptual rather than cognitive level.

19.
Gao J, Hu J, Tung WW. PLoS ONE. 2011;6(9):e24331

Background

Chaos and random fractal theories are among the most important tools for fully characterizing the nonlinear dynamics of complicated multiscale biosignals. Chaos analysis requires that signals be relatively noise-free and stationary, while fractal analysis requires that they be non-rhythmic and scale-free.

Methodology/Principal Findings

To facilitate joint chaos and fractal analysis of biosignals, we present an adaptive algorithm that: (1) can readily remove nonstationarities from the signal; (2) can reduce noise in signals more effectively than linear filters, wavelet denoising, and chaos-based noise reduction techniques; (3) can readily decompose a multiscale biosignal into a series of intrinsically band-limited functions; and (4) offers a new formulation of fractal and multifractal analysis that outperforms existing methods when a biosignal contains a strong oscillatory component.
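
The fractal side of such an analysis typically reduces to estimating a scaling exponent from the fluctuation function F(n) ~ n^α. As an illustration of the idea, here is standard detrended fluctuation analysis (DFA), not the authors' adaptive decomposition:

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: returns the scaling exponent alpha."""
    y = np.cumsum(x - np.mean(x))  # integrated profile of the signal
    fluct = []
    for n in scales:
        n_seg = len(y) // n
        f2 = []
        for i in range(n_seg):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)  # local linear trend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))  # RMS fluctuation at scale n
    # alpha is the slope of log F(n) versus log n
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

rng = np.random.default_rng(1)
white = rng.standard_normal(10_000)
alpha = dfa(white, scales=[16, 32, 64, 128, 256])  # ~0.5 for white noise
```

White noise yields α near 0.5, long-range correlated signals α > 0.5; a strong oscillatory component distorts F(n) at the oscillation's scale, which is the failure mode the authors' new formulation is designed to handle.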

Conclusions

The presented approach is a valuable, versatile tool for the analysis of various types of biological signals. Its effectiveness is demonstrated by important new insights into brainwave dynamics and by very high accuracy in automatically detecting epileptic seizures from EEG signals.

20.

Background

The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood.

Methodology/Findings

We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perceived duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by conflicting visual information; however, the perceived duration of visual events was seldom distorted by auditory information, and visual events were never perceived as shorter than their actual durations.

Conclusions/Significance

These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Insofar as distortions in subjective duration cannot be accounted for by the unpredictability of an auditory, visual or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically determine subjective time distortions.
