Similar Literature
 20 similar articles found (search time: 15 ms)
1.
Heading direction is determined from visual and vestibular cues. Both sensory modalities have been shown to support better direction discrimination for headings near straight ahead. Previous studies of visual heading estimation have not used the full range of stimuli, and vestibular heading estimation has not previously been reported. The current experiments measure human heading estimation in the horizontal plane for vestibular, visual, and spoken stimuli. The vestibular and visual tasks involved 16 cm of platform or visual motion. The spoken stimulus was a voice command speaking a heading angle. All conditions demonstrated direction-dependent biases in perceived headings such that biases increased with headings further from the fore-aft axis. The bias was larger with the visual stimulus than with the vestibular stimulus in all 10 subjects. For the visual and vestibular tasks, precision was best for headings near fore-aft. The spoken headings had the least bias, and the variation in precision was less dependent on direction. In a separate experiment, when headings were limited to ±45°, the biases were much smaller, demonstrating that the range of headings influences perception. There was a strong and highly significant correlation between the bias curves for visual and spoken stimuli in every subject. The correlations between visual-vestibular and vestibular-spoken biases were weaker but remained significant. The observed biases in both visual and vestibular heading perception qualitatively resembled predictions of a recent population vector decoder model (Gu et al., 2010) based on the known distribution of neuronal sensitivities.
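A population vector decoder of the kind cited above can be sketched in a few lines. Everything below is an illustrative assumption, not the model or parameters of Gu et al. (2010): cosine tuning with unit amplitude, and preferred headings clustered near the lateral axes (±90°), which is what pushes decoded headings away from the fore-aft axis.

```python
import numpy as np

# Hypothetical population: preferred headings cluster near +/-90 deg (lateral)
rng = np.random.default_rng(0)
pref = np.deg2rad(np.concatenate([rng.normal(90, 30, 500),
                                  rng.normal(-90, 30, 500)]))

def decoded_heading(heading_deg):
    h = np.deg2rad(heading_deg)
    rates = np.cos(h - pref)  # broad cosine tuning, unit amplitude
    # Population vector: rate-weighted sum of preferred-direction unit vectors
    x = np.sum(rates * np.cos(pref))
    y = np.sum(rates * np.sin(pref))
    return float(np.degrees(np.arctan2(y, x)))
```

For intermediate headings (e.g. 30°) this toy decoder overshoots toward 90°, reproducing qualitatively the pattern of biases growing with distance from the fore-aft axis, while headings at 0° and 90° are decoded with little bias.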

2.
We investigated coordinated movements between the eyes and head (“eye-head coordination”) in relation to vision for action. Several studies have measured eye and head movements during a single gaze shift, focusing on the mechanisms of motor control during eye-head coordination. However, in everyday life, gaze shifts occur sequentially and are accompanied by movements of the head and body. Under such conditions, visual cognitive processing influences eye movements and might also influence eye-head coordination because sequential gaze shifts include cycles of visual processing (fixation) and data acquisition (gaze shifts). In the present study, we examined how the eyes and head move in coordination during visual search in a large visual field. Subjects moved their eyes, head, and body without restriction inside a 360° visual display system. We found patterns of eye-head coordination that differed from those observed in single gaze-shift studies. First, we frequently observed multiple saccades during one continuous head movement, and the contribution of head movement to gaze shifts increased as the number of saccades increased. This relationship between head movements and sequential gaze shifts suggests eye-head coordination over several saccade-fixation sequences; this could be related to cognitive processing because saccade-fixation cycles are the result of visual cognitive processing. Second, the distribution bias of eye position during gaze fixation was highly correlated with head orientation. The distribution peak of eye position was biased in the same direction as head orientation. This influence of head orientation suggests that eye-head coordination is involved in gaze fixation, when the visual system processes retinal information. This further supports the role of eye-head coordination in visual cognitive processing.

3.
It has been shown that the Central Nervous System (CNS) integrates visual and inertial information in heading estimation for congruent multisensory stimuli and stimuli with small discrepancies. Multisensory information should, however, only be integrated when the cues are redundant. Here, we investigated how the CNS constructs an estimate of heading for combinations of visual and inertial heading stimuli with a wide range of discrepancies. Participants were presented with 2-s visual-only and inertial-only motion stimuli, and combinations thereof. Discrepancies between visual and inertial heading ranging from 0° to 90° were introduced for the combined stimuli. In the unisensory conditions, it was found that visual heading was generally biased towards the fore-aft axis, while inertial heading was biased away from the fore-aft axis. For multisensory stimuli, it was found that five out of nine participants integrated visual and inertial heading information regardless of the size of the discrepancy; for one participant, the data were best described by a model that explicitly performs causal inference. For the remaining three participants the evidence could not readily distinguish between these models. The finding that multisensory information is integrated is in line with earlier findings, but the finding that even large discrepancies are generally disregarded is surprising. Possibly, people are insensitive to discrepancies in visual-inertial heading angle because such discrepancies are only encountered in artificial environments, making a neural mechanism to account for them otiose. An alternative explanation is that detection of a discrepancy may depend on stimulus duration, where sensitivity to detect discrepancies differs between people.
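The forced-fusion model that most participants' data followed can be written down directly. This is a minimal sketch assuming independent Gaussian likelihoods for the two cues; the sigma values are hypothetical noise standard deviations (in degrees), not fitted values from the study:

```python
# Reliability-weighted averaging (forced fusion) of two heading cues.
# Weights are inverse variances; all parameter values are illustrative.
def fuse_headings(h_visual, sigma_visual, h_inertial, sigma_inertial):
    w = (1 / sigma_visual ** 2) / (1 / sigma_visual ** 2 + 1 / sigma_inertial ** 2)
    return w * h_visual + (1 - w) * h_inertial
```

Full integration applies this average no matter how large the discrepancy; a causal-inference model would instead discount the averaging as the discrepancy grows, which is the behavior only one participant showed.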

4.
Reading performance during standing and walking was assessed for information presented on earth-fixed and head-fixed displays by determining the minimal duration during which a numerical time stimulus needed to be presented for 50% correct naming responses. Reading from the earth-fixed display was comparable during standing and walking, with optimal performance being attained for visual character sizes in the range of 0.2° to 1°. Reading from the head-fixed display was impaired for small (0.2-0.3°) and large (5°) visual character sizes, especially during walking. Analysis of head and eye movements demonstrated that retinal slip was larger during walking than during standing, but remained within the functional acuity range when reading from the earth-fixed display. The detrimental effects on performance of reading from the head-fixed display during walking could be attributed to loss of acuity resulting from large retinal slip. Because walking activated the angular vestibulo-ocular reflex, the resulting compensatory eye movements acted to stabilize gaze on the information presented on the earth-fixed display but destabilized gaze from the information presented on the head-fixed display. We conclude that the gaze stabilization mechanisms that normally allow visual performance to be maintained during physical activity adversely affect reading performance when the information is presented on a display attached to the head.

5.
This study examined adaptive changes of eye-hand coordination during a visuomotor rotation task under the use of terminal visual feedback. Young adults made reaching movements to targets on a digitizer while looking at targets on a monitor where the rotated feedback (a cursor) of hand movements appeared after each movement. Three rotation angles (30°, 75° and 150°) were examined in three groups in order to vary the task difficulty. The results showed that the 30° group gradually reduced direction errors of reaching with practice and adapted well to the visuomotor rotation. The 75° group made large direction errors of reaching, and the 150° group applied a 180° reversal shift from early practice. The 75° and 150° groups, however, overcompensated the respective rotations at the end of practice. Despite these group differences in adaptive changes of reaching, all groups gradually adapted gaze directions prior to reaching from the target area to the areas related to the final positions of reaching during the course of practice. The adaptive changes of both hand and eye movements in all groups mainly reflected adjustments of movement directions based on explicit knowledge of the applied rotation acquired through practice. Only the 30° group showed small implicit adaptation in both effectors. The results suggest that by adapting gaze directions from the target to the final position of reaching based on explicit knowledge of the visuomotor rotation, the oculomotor system supports the limb-motor system to make precise preplanned adjustments of reaching directions during learning of visuomotor rotation under terminal visual feedback.
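The geometry of terminal feedback under a visuomotor rotation is simple to make concrete. In this sketch (function name and coordinates are illustrative, not from the study), the cursor shown after each reach is the hand endpoint rotated about the start position:

```python
import math

# Terminal feedback under a visuomotor rotation: the displayed cursor is the
# hand endpoint rotated counterclockwise by rotation_deg about the origin.
def rotated_cursor(hand_x, hand_y, rotation_deg):
    a = math.radians(rotation_deg)
    return (hand_x * math.cos(a) - hand_y * math.sin(a),
            hand_x * math.sin(a) + hand_y * math.cos(a))
```

Full compensation means aiming opposite to the rotation: under a 90° rotation, reaching to (0, -1) puts the cursor on a target at (1, 0). This also shows why the 180° reversal strategy reported for the 150° group leaves a residual error: reversing a reach compensates 180°, so a 150° rotation is overcompensated by 30°.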

6.
During saccadic eye movements, the visual world shifts rapidly across the retina. Perceptual continuity is thought to be maintained by active neural mechanisms that compensate for this displacement, bringing the presaccadic scene into a postsaccadic reference frame. Because of this active mechanism, objects appearing briefly around the time of the saccade are perceived at erroneous locations, a phenomenon called perisaccadic mislocalization. The position and direction of localization errors can inform us about the different reference frames involved. It has been found, for example, that errors are not simply made in the direction of the saccade but directed toward the saccade target, indicating that the compensatory mechanism involves spatial compression rather than translation. A recent study confirmed that localization errors also occur in the direction orthogonal to saccade direction, but only for eccentricities far from the fovea, beyond the saccade target. This spatially specific pattern of distortion cannot be explained by a simple compression of space around the saccade target. Here I show that a change of reference frames (i.e., translation) in cortical (logarithmic) coordinates, taking into account the cortical magnification factor, can accurately predict these spatial patterns of mislocalization. The flashed object projects onto the cortex in presaccadic (fovea-centered) coordinates but is perceived in postsaccadic (target-centered) coordinates.
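The core idea, that a translation in logarithmic cortical coordinates looks like a compression in visual coordinates, can be sketched in one dimension. The magnification parameters below are hypothetical stand-ins, not fitted values from the study, and positions are eccentricities along the saccade direction in degrees:

```python
import math

# Hypothetical monopole magnification parameters (not fitted values)
K, A = 15.0, 0.7

def to_cortex(ecc_deg):
    return K * math.log(ecc_deg + A)

def to_visual(cortex_coord):
    return math.exp(cortex_coord / K) - A

def perceived_position(flash_deg, target_deg):
    # The flash is encoded in presaccadic, fovea-centered cortical coordinates,
    # then read out in a target-centered frame after a cortical translation by
    # the fovea-to-target cortical distance.
    shift = to_cortex(target_deg) - to_cortex(0.0)
    return target_deg + to_visual(to_cortex(flash_deg) - shift)
```

Because the cortical map is compressive, a fixed cortical translation spans more visual angle at high eccentricity, so flashes beyond the target are predicted to be mislocalized toward it rather than uniformly shifted, which is the signature pattern described above.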

7.
To interpret visual scenes, visual systems need to segment or integrate multiple moving features into distinct objects or surfaces. Previous studies have found that the perceived direction separation between two transparently moving random-dot stimuli is wider than the actual direction separation. This perceptual “direction repulsion” is useful for segmenting overlapping motion vectors. Here we investigate the effects of motion noise on the directional interaction between overlapping moving stimuli. Human subjects viewed two overlapping random-dot patches moving in different directions and judged the direction separation between the two motion vectors. We found that the perceived direction separation progressively changed from wide to narrow as the level of motion noise in the stimuli was increased, showing a switch from direction repulsion to attraction (i.e., smaller than the veridical direction separation). We also found that direction attraction occurred over a wider range of direction separations than direction repulsion. The normalized effects of both direction repulsion and attraction were strongest near a direction separation of ∼25° and declined as the direction separation further increased. These results support the idea that motion noise prompts motion integration to overcome stimulus ambiguity. Our findings provide new constraints on neural models of motion transparency and segmentation.

8.
As animals travel through the environment, powerful reflexes help stabilize their gaze by actively maintaining head and eyes in a level orientation. Gaze stabilization reduces motion blur and prevents image rotations. It also assists in depth perception based on translational optic flow. Here we describe side-to-side flight manoeuvres in honeybees and investigate how the bees’ gaze is stabilized against rotations during these movements. We used high-speed video equipment to record flight paths and head movements in honeybees visiting a feeder. We show that during their approach, bees generate lateral movements with a median amplitude of about 20 mm. These movements occur with a frequency of up to 7 Hz and are generated by periodic roll movements of the thorax with amplitudes of up to ±60°. During such thorax roll oscillations, the head is held close to horizontal, thereby minimizing rotational optic flow. By having bees fly through an oscillating, patterned drum, we show that head stabilization is based mainly on visual motion cues. Bees exposed to a continuously rotating drum, however, hold their head fixed at an oblique angle. This result shows that although gaze stabilization is driven by visual motion cues, it is limited by other mechanisms, such as the dorsal light response or gravity reception.

9.
We examine the structure of the visual motion projected on the retina during natural locomotion in real world environments. Bipedal gait generates a complex, rhythmic pattern of head translation and rotation in space, so without gaze stabilization mechanisms such as the vestibulo-ocular reflex (VOR) a walker’s visually specified heading would vary dramatically throughout the gait cycle. The act of fixation on stable points in the environment nulls image motion at the fovea, resulting in stable patterns of outflow on the retinae centered on the point of fixation. These outflowing patterns retain a higher order structure that is informative about the stabilized trajectory of the eye through space. We measure this structure by applying the curl and divergence operators to the retinal flow velocity vector fields, and we found features that may be valuable for the control of locomotion. In particular, the sign and magnitude of foveal curl in retinal flow specifies the body’s trajectory relative to the gaze point, while the point of maximum divergence in the retinal flow field specifies the walker’s instantaneous overground velocity/momentum vector in retinotopic coordinates. Assuming that walkers can determine the body position relative to gaze direction, these time-varying retinotopic cues for the body’s momentum could provide a visual control signal for locomotion over complex terrain. In contrast, the temporal variation of the eye-movement-free, head-centered flow fields is large enough to be problematic for use in steering towards a goal. Consideration of optic flow in the context of real-world locomotion therefore suggests a re-evaluation of the role of optic flow in the control of action during natural behavior.
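The curl and divergence measurements described above can be sketched on a synthetic flow field. The field below is a saturating radial outflow centered on a hypothetical expansion point (x0, y0), not data from the study; the point of maximum divergence recovers that expansion point in retinotopic coordinates:

```python
import numpy as np

# Synthetic retinal flow: saturating radial outflow about a chosen point
x0, y0 = 3.0, -1.0
y, x = np.mgrid[-20:20:81j, -20:20:81j]   # visual angles in degrees
r2 = (x - x0) ** 2 + (y - y0) ** 2
u = (x - x0) / (1 + r2)                   # horizontal flow component
v = (y - y0) / (1 + r2)                   # vertical flow component

step = x[0, 1] - x[0, 0]
du_dy, du_dx = np.gradient(u, step)       # gradients: axis 0 is y, axis 1 is x
dv_dy, dv_dx = np.gradient(v, step)
curl = dv_dx - du_dy                      # rotation about the line of sight
div = du_dx + dv_dy                       # local expansion

# Peak divergence marks the expansion point in retinotopic coordinates
iy, ix = np.unravel_index(np.argmax(div), div.shape)
```

For this purely radial field the curl is numerically near zero everywhere; foveal curl becomes nonzero when the gaze point rotates relative to the path, which is the cue to body trajectory described above.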

10.
Driving is associated with high activation of low-back and neck muscles due to the sitting position and perturbations imposed by the vehicle. The aim of this study was to investigate the use of a neck balance system together with a lumbar support on the activation of low-back and neck muscles during driving. Twelve healthy male subjects (age 32±6.71 years) were asked to drive in two conditions: 1) with devices; 2) without devices. During vehicle accelerations and decelerations the root mean square (RMS) of surface electromyography (sEMG) was recorded from the erector spinae, semispinalis capitis and sternocleidomastoid muscles and expressed as a percentage of maximal voluntary contraction (MVC). The pitch of the head was obtained by means of an inertial sensor placed on the subjects’ head. A visual analog scale (VAS) was used to assess the level of perceived comfort. RMS of the low back muscles was lower with than without devices during both acceleration and deceleration of the vehicle (1.40±0.93% vs 2.32±1.90% and 1.88±1.45% vs 2.91±2.33%, respectively), while RMS of neck extensor muscles was reduced only during acceleration (5.18±1.96% vs 5.91±2.16%). There were no differences between the two conditions in RMS of neck flexor muscles, the pitch of the head and the VAS score. The use of these two ergonomic devices is therefore effective in reducing the activation of low-back and neck muscles during driving with no changes in the level of perceived comfort, which is likely due to rebalancing weight on the neck and giving a neutral position to lumbar segments.
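The %MVC normalization used above is a standard computation and can be sketched directly; window lengths and signal units here are hypothetical:

```python
import numpy as np

def rms(signal):
    # Root mean square of an sEMG window
    return float(np.sqrt(np.mean(np.square(signal))))

def percent_mvc(emg_window, mvc_window):
    # Express a task window's RMS as a percentage of the RMS recorded
    # during a maximal voluntary contraction (MVC)
    return 100.0 * rms(emg_window) / rms(mvc_window)
```

Normalizing to MVC makes RMS values comparable across muscles and subjects, which is why the results above are reported as percentages rather than raw voltages.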

11.
Migratory birds are known to be sensitive to the external magnetic field (MF). Much indirect evidence suggests that the avian magnetic compass is localized in the retina. Previously, we showed that changes in the MF direction could modulate retinal responses in pigeons. In the present study, we performed similar experiments using the traditional model animal for magnetic compass studies, the European robin. The photoresponses of isolated retina were recorded using ex vivo electroretinography (ERG). Blue- and red-light stimuli were applied under an MF of natural intensity and two MF directions, with the angle between the plane of the retina and the field lines at 0° and 90°, respectively. The results were separately analysed for four quadrants of the retina. A comparison of the amplitudes of the a- and b-waves of the ERG responses to blue stimuli under the two MF directions revealed a small but significant difference in a- but not b-waves, and in only one (nasal) quadrant of the retina. The amplitudes of both the a- and b-waves of the ERG responses to red stimuli did not show significant effects of the MF direction. Thus, changes in the external MF modulate European robin retinal responses to blue flashes, but not to red flashes. This result is in good agreement with behavioural data showing the successful orientation of birds in an MF under blue, but not under red, illumination.

12.
The vestibular system detects motion of the head in space and in turn generates reflexes that are vital for our daily activities. The eye movements produced by the vestibulo-ocular reflex (VOR) play an essential role in stabilizing the visual axis (gaze), while vestibulo-spinal reflexes ensure the maintenance of head and body posture. The neuronal pathways from the vestibular periphery to the cervical spinal cord potentially serve a dual role, since they function to stabilize the head relative to inertial space and could thus contribute to gaze (eye-in-head + head-in-space) and posture stabilization. To date, however, the functional significance of vestibular-neck pathways in alert primates remains a matter of debate. Here we used a vestibular prosthesis to 1) quantify vestibularly-driven head movements in primates, and 2) assess whether these evoked head movements make a significant contribution to gaze as well as postural stabilization. We stimulated electrodes implanted in the horizontal semicircular canal of alert rhesus monkeys, and measured the head and eye movements evoked during a 100-ms time period for which the contribution of longer-latency voluntary inputs to the neck would be minimal. Our results show that prosthetic stimulation evoked significant head movements with latencies consistent with known vestibulo-spinal pathways. Furthermore, while the evoked head movements were substantially smaller than the coincidentally evoked eye movements, they made a significant contribution to gaze stabilization, complementing the VOR to ensure that the appropriate gaze response is achieved. We speculate that analogous compensatory head movements will be evoked when implanted prosthetic devices are transitioned to human patients.

13.
How the brain maintains an accurate and stable representation of visual target locations despite the occurrence of saccadic gaze shifts is a classical problem in oculomotor research. Here we test and dissociate the predictions of different conceptual models for head-unrestrained gaze-localization behavior of macaque monkeys. We adopted the double-step paradigm with rapid eye-head gaze shifts to measure localization accuracy in response to flashed visual stimuli in darkness. We presented the second target flash either before (static) or during (dynamic) the first gaze displacement. In the dynamic case the brief visual flash induced a small retinal streak of up to about 20° at an unpredictable moment and retinal location during the eye-head gaze shift, which provides serious challenges for the gaze-control system. However, for both stimulus conditions, monkeys localized the flashed targets with accurate gaze shifts, which rules out several models of visuomotor control. First, these findings exclude the possibility that gaze-shift programming relies on retinal inputs only. Instead, they support the notion that accurate eye-head motor feedback updates the gaze-saccade coordinates. Second, in dynamic trials the visuomotor system cannot rely on the coordinates of the planned first eye-head saccade either, which rules out remapping on the basis of a predictive corollary gaze-displacement signal. Finally, because gaze-related head movements were also goal-directed, requiring continuous access to eye-in-head position, we propose that our results best support a dynamic feedback scheme for spatial updating in which visuomotor control incorporates accurate signals about instantaneous eye- and head positions rather than relative eye- and head displacements.
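The contrast between retinal-only programming and gaze-displacement feedback can be sketched in one dimension. All angles below (in degrees) are hypothetical, and the two functions are illustrative models, not the monkeys' actual computations:

```python
# 1-D double-step sketch: angles in degrees, rightward positive.
def second_shift_retinal_only(target_retinal):
    # Ignores the intervening gaze shift: reuses the stored retinal error
    return target_retinal

def second_shift_updated(target_retinal, gaze_at_flash, gaze_after_first_shift):
    # Spatial updating: recover the target in space, then subtract the
    # current gaze position (motor feedback) to get the new motor error
    target_in_space = gaze_at_flash + target_retinal
    return target_in_space - gaze_after_first_shift
```

For example, a flash 10° right of gaze followed by a 15° rightward first gaze shift requires a 5° leftward second shift; the retinal-only scheme would wrongly command another 10° rightward, which is the kind of error the monkeys did not make.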

14.
Contextual information can have a huge impact on our sensory experience. The tilt illusion is a classic example of contextual influence exerted by an oriented surround on a target's perceived orientation. Traditionally, the tilt illusion has been described as the outcome of inhibition between cortical neurons with adjacent receptive fields and a similar preference for orientation. An alternative explanation is that tilted contexts could produce a re-calibration of the subjective frame of reference. Although the distinction is subtle, only the latter model makes clear predictions for unoriented stimuli. In the present study, we tested one such prediction by asking four naive subjects to estimate three positions (4, 6, and 8 o'clock) on an imaginary clock face within a tilted surround. To indicate their estimates, they used either an unoriented dot or a line segment, with one endpoint at fixation in the middle of the surround. The surround's tilt was randomly chosen from a set of orientations (±75°, ±65°, ±55°, ±45°, ±35°, ±25°, ±15°, ±5° with respect to vertical) across trials. Our results showed systematic biases consistent with the tilt illusion in both conditions. Biases were largest when observers attempted to estimate the 4 and 8 o'clock positions, but there was no significant difference between data gathered with the dot and data gathered with the line segment. A control experiment confirmed that biases were better accounted for by a local coordinate shift than by torsional eye movements induced by the tilted context. This finding supports the idea that tilted contexts distort perceived positions as well as perceived orientations and cannot be readily explained by lateral interactions between orientation selective cells in V1.

15.
This article aims to investigate whether auditory stimuli in the horizontal plane, particularly those originating from behind the participant, affect audiovisual integration, using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants; auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants; and audiovisual stimuli that included both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in front of or behind the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160–200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360–400 milliseconds. Our results confirmed that audiovisual integration was elicited even though auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side.

16.

Background

Visual exploration of the surroundings during locomotion at heights has not yet been investigated in subjects suffering from fear of heights.

Methods

Eye and head movements were recorded separately in 16 subjects susceptible to fear of heights and in 16 non-susceptible controls while walking on an emergency escape balcony 20 meters above ground level. Participants wore mobile infrared eye-tracking goggles with a head-fixed scene camera and integrated 6-degrees-of-freedom inertial sensors for recording head movements. Video recordings of the subjects were simultaneously made to correlate gaze and gait behavior.

Results

Susceptibles exhibited a limited visual exploration of the surroundings, particularly the depth. Head movements were significantly reduced in all three planes (yaw, pitch, and roll) with less vertical head oscillations, whereas total eye movements (saccade amplitudes, frequencies, fixation durations) did not differ from those of controls. However, there was an anisotropy, with a preference for the vertical as opposed to the horizontal direction of saccades. Comparison of eye and head movement histograms and the resulting gaze-in-space revealed a smaller total area of visual exploration, which was mainly directed straight ahead and covered vertically an area from the horizon to the ground in front of the feet. This gaze behavior was associated with a slow, cautious gait.

Conclusions

The visual exploration of the surroundings by susceptibles to fear of heights during locomotion at heights differs from the previously investigated behavior of standing still and looking from a balcony. During locomotion, as during stance, the anisotropy of gaze-in-space shows a preference for the vertical as opposed to the horizontal direction. Avoiding looking into the abyss may reduce anxiety in both conditions; exploration of the “vertical strip” in the heading direction is beneficial for visual control of balance and avoidance of obstacles during locomotion.

17.
When we look at a stationary object, the perceived direction of gaze (where we are looking) is aligned with the physical direction of the eyes (where our eyes are oriented) by which the object is foveated. However, this alignment may not hold in a dynamic situation. Our experiments assessed the perceived locations of two brief stimuli (1 ms) simultaneously displayed at two different physical locations during a saccade. The first stimulus was at the instantaneous location to which the eyes were oriented, and the second one was always at the same location as the initial fixation point. When the timing of these stimuli was changed intra-saccadically, their perceived locations were dissociated. The first stimuli were consistently perceived near the target that would be foveated at saccade termination. The second stimulus, once perceived near the target location, shifted in the direction opposite to that of the saccade as its latency from the saccade increased. These results suggest an independent adjustment of gaze orientation from the physical orientation of the eyes during saccades. The spatial dissociation of the two stimuli may reflect sensorimotor control of gaze during saccades.

18.
We determined whether binocular central scotomas above or below the preferred retinal locus affect detection of hazards (pedestrians) approaching from the side. Seven participants with central field loss (CFL), and seven age- and sex-matched controls with normal vision (NV), each completed two sessions of 5 test drives (each approximately 10 minutes long) in a driving simulator. Participants pressed the horn when detecting pedestrians that appeared at one of four eccentricities relative to the car heading (-14° or -4°, left; 4° or 14°, right). Pedestrians walked or ran towards the travel lane on a collision course with the participant's vehicle, thus remaining in the same area of the visual field, assuming the participant's steady forward gaze down the travel lane. Detection rates were nearly 100% for all participants. CFL participant reaction times were longer (median 2.27 s, 95% CI 2.13 to 2.47) than NVs' (median 1.17 s, 95% CI 1.10 to 2.13; difference p<0.01), and CFL participants would have been unable to stop for 21% of pedestrians, compared with 3% for NV, p<0.001. Although the scotomas were not expected to obscure pedestrian hazards, gaze tracking revealed that scotomas did sometimes interfere with detection; late reactions usually occurred when pedestrians were entirely or partially obscured by the scotoma (time obscured correlated with reaction times, r = 0.57, p<0.001). We previously showed that scotomas lateral to the preferred retinal locus delay reaction times to a greater extent; however, taken together, the results of our studies suggest that any binocular CFL might negatively impact timely hazard detection while driving and should be a consideration when evaluating vision for driving.

19.
Vection is an illusory perception of self-motion that can occur when visual motion fills the majority of the visual field. This study examines the effect of the duration of visual field movement (VFM) on the perceived strength of self-motion using an inertial nulling (IN) technique and a magnitude estimation technique based on the certainty that motion occurred (certainty estimation, CE). These techniques were then used to investigate the association between migraine diagnosis and the strength of perceived vection. Visual star-field stimuli consistent with either looming or receding motion were presented for 1, 4, 8, or 16 s. Subjects reported the perceived direction of self-motion during the final 1 s of the stimulus. For the IN method, an inertial nulling motion was delivered during this final second, and subjects reported the direction of perceived self-motion. The magnitude of inertial motion was varied adaptively to determine the point of subjective equality (PSE) at which forward and backward responses were equally likely. For the CE trials the same range of VFM was used but without inertial motion, and subjects rated their certainty of motion on a scale of 0–100. PSE determined with the IN technique depended on the direction and duration of visual motion, and the CE technique showed greater certainty of perceived vection with longer VFM duration. A strong correlation between the CE and IN techniques was present for the 8-s stimulus. There was appreciable between-subject variation in both techniques, and migraine was associated with significantly increased perception of self-motion by CE and IN at 8 and 16 s. Together, these results suggest that vection may be measured by both CE and IN techniques with good correlation. The results also suggest that susceptibility to vection may be higher in subjects with a history of migraine.
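Adaptively varying a stimulus to find the PSE is typically done with a staircase. The sketch below is a generic 1-up/1-down staircase, which converges on the 50% point where "forward" and "backward" reports are equally likely; the step size, start level, and estimator (mean of later reversals) are illustrative choices, not the study's actual procedure:

```python
# Generic 1-up/1-down staircase for a PSE (illustrative parameters).
# report_forward(level) -> True if the subject reports "forward" at this
# nulling magnitude; a "forward" report lowers the level, "backward" raises it.
def staircase_pse(report_forward, start=4.0, step=1.0, n_reversals=8):
    level, direction, reversals = start, -1, []
    while len(reversals) < n_reversals:
        new_direction = -1 if report_forward(level) else 1
        if new_direction != direction:
            reversals.append(level)      # track levels where direction flips
        direction = new_direction
        level += direction * step
    # Discard early reversals, average the rest as the PSE estimate
    return sum(reversals[2:]) / len(reversals[2:])
```

With a deterministic simulated observer whose true PSE is 1.5, the staircase oscillates around that value and the reversal average recovers it.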

20.
Monocular threshold stimulus intensities (ΔIo, photons) were measured along the 0–180° meridian of human retinae for three observers. The test image was small (0.08°) and of short duration (0.20 s). ΔIo was found to decrease as the angular distance from the fovea was increased. Actual counts of the number of retinal elements per mm² along the 0–180° meridian (Østerberg) were compared with the obtained results. No direct correlation was found to exist between visual sensitivity and the number of retinal elements. Binocular threshold stimuli were also measured along the same meridian. The form of the function relating binocular visual sensitivity and retinal position was discovered to be essentially similar to that for monocular sensitivity, but is more symmetrical about the center of the fovea. The magnitude of the binocular measurement is in each case smaller than that of the monocular threshold stimulus intensity for the more sensitive eye. The ratio is statistically equal to 1.4 (a fact which suggests Piper's rule). These results are shown to be consistent with the hypothesis that the process critical for the eventuation of the threshold response is localized in the central nervous system. They are not consistent with the view that the quantitative properties of visual data are directly determined by properties of the peripheral retina.
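The ~1.4 ratio is suggestive of √2 ≈ 1.414, which falls out of a simple independent-detector model. The sketch below is an assumed model, not the paper's analysis: each eye detects according to a Weibull psychometric function, the binocular response is probability summation ("either eye detects"), and with slope β = 2 the monocular/binocular threshold ratio comes out exactly √2:

```python
import math

# Assumed Weibull psychometric function per eye: p(I) = 1 - exp(-(I/t)^beta)
BETA, T = 2.0, 1.0

def p_mono(i):
    return 1 - math.exp(-((i / T) ** BETA))

def p_bino(i):
    return 1 - (1 - p_mono(i)) ** 2   # probability summation over two eyes

def threshold(p, crit=0.5, lo=1e-6, hi=10.0):
    # Bisection on a monotone psychometric function
    for _ in range(80):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if p(mid) < crit else (lo, mid)
    return (lo + hi) / 2

ratio = threshold(p_mono) / threshold(p_bino)   # equals sqrt(2) when BETA = 2
```

This is only one way to obtain the ratio; a central (neural) summation stage, as the abstract argues, predicts a similar value, so the ratio alone does not distinguish the hypotheses.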
