Similar Articles
20 similar articles found.
1.
The performance of a gymnastic exercise, the splits leap in rhythmic sports gymnastics, was compared with a subjective evaluation by an experienced referee who ranked the gymnasts' skill. The execution of the splits leaps was quantified by measuring ground reaction forces, electromyographic activity in the leg muscles and by analyzing film. The referee's evaluation of the splits leaps was found to depend on minimal bending in the knee joint of the take-off leg at touch-down and during the support phase. Maximal flexion in the knee joint was inversely correlated (r = -0.53, p < 0.0002) with the ranking of the leap. The vertical component of the ground reaction force had a maximal amplitude of about 1900 N, equal to 3.5 times the force produced by the body weight of the gymnast. The force record was smooth and without inflections in the highly ranked gymnasts, whereas the other gymnasts had one or two inflections in the rising phase of the vertical force record. Electromyographic activity in both the medial head of gastrocnemius and tibialis anterior began before touch-down. The pattern consisted of periods of activity with pauses in between. There was no correlation between rank and electromyographic pattern, but "talented" gymnasts had shorter periods of activity and longer pauses between the periods of activity.
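As a quick sanity check on the figures above (a back-of-the-envelope calculation, not taken from the paper), a 1900 N peak equal to 3.5 times body weight implies

$$W \approx \frac{1900\ \mathrm{N}}{3.5} \approx 543\ \mathrm{N}, \qquad m \approx \frac{543\ \mathrm{N}}{9.81\ \mathrm{m/s^2}} \approx 55\ \mathrm{kg},$$

a plausible body mass for an elite rhythmic gymnast.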

2.
People often coordinate their movement with visual and auditory environmental rhythms. Previous research showed better performances when coordinating with auditory compared to visual stimuli, and with bimodal compared to unimodal stimuli. However, these results have been demonstrated with discrete rhythms, and it is possible that such effects depend on the continuity of the stimulus rhythms (i.e., whether they are discrete or continuous). The aim of the current study was to investigate the influence of the continuity of visual and auditory rhythms on sensorimotor coordination. We examined the dynamics of synchronized oscillations of a wrist pendulum with auditory and visual rhythms at different frequencies, which were either unimodal or bimodal and discrete or continuous. Specifically, the stimuli used were a light flash, a fading light, a short tone and a frequency-modulated tone. The results demonstrate that the continuity of the stimulus rhythms strongly influences visual and auditory motor coordination. Participants' movement led continuous stimuli and followed discrete stimuli. Asymmetries between the half-cycles of the movement in terms of duration and nonlinearity of the trajectory occurred with slower discrete rhythms. Furthermore, the results show that the differences in performance between the visual and auditory modalities depend on the continuity of the stimulus rhythms, as indicated by movements closer to the instructed coordination for the auditory modality when coordinating with discrete stimuli. The results also indicate that visual and auditory rhythms are integrated together in order to better coordinate irrespective of their continuity, as indicated by less variable coordination closer to the instructed pattern. Generally, the findings have important implications for understanding how we coordinate our movements with visual and auditory environmental rhythms in everyday life.
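Lead/lag ("movement led continuous stimuli and followed discrete stimuli") and coordination variability in this kind of pendulum data are commonly quantified via the relative phase between movement and stimulus. The sketch below shows one standard Hilbert-transform estimate; this is an assumed analysis for illustration, not necessarily the method used in this study:

```python
import numpy as np
from scipy.signal import hilbert

def relative_phase(movement, stimulus):
    """Mean and variability of the phase difference between two
    zero-centered oscillatory signals (e.g., wrist angle vs. stimulus).
    A positive circular mean means the movement leads the stimulus."""
    dphi = np.angle(hilbert(movement)) - np.angle(hilbert(stimulus))
    z = np.mean(np.exp(1j * dphi))   # resultant vector of phase differences
    mean_phase = np.angle(z)         # circular mean, rad
    circ_var = 1.0 - np.abs(z)       # 0 = perfectly stable coordination
    return mean_phase, circ_var
```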

3.
It has previously been shown that male gymnasts using the "scooped" giant circling technique were able to flatten the path followed by their mass center, resulting in a larger margin for error when releasing the high bar (Hiley and Yeadon, 2003a). The circling technique prior to performing double layout somersault dismounts from the asymmetric bars in women's artistic gymnastics appears to be similar to the "traditional" technique used by some male gymnasts on the high bar. It was speculated that as a result the female gymnasts would have margins for error similar to those of male gymnasts who use the traditional technique. However, it is unclear how the technique of the female gymnasts is affected by the need to avoid the lower bar. A 4-segment planar simulation model of the gymnast and upper bar was used to determine the margins for error when releasing the bar for 9 double layout somersault dismounts at the Sydney 2000 Olympics. The elastic properties of the gymnast and bar were modeled using damped linear springs. Model parameters, primarily the inertia and spring parameters, were optimized to obtain a close match between simulated and actual performances in terms of rotation angle (1.2 degrees), bar displacement (0.011 m), and release velocities (<1%). Each matching simulation was used to determine the time window around the actual point of release for which the model had appropriate release parameters to complete the dismount successfully. The margins for error of the 9 female gymnasts (release window 43-102 ms) were comparable to those of the 3 male gymnasts using the traditional technique (release window 79-84 ms).
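The damped linear spring used to model bar elasticity in such models can be sketched as follows. The stiffness, damping, and mass values here are hypothetical placeholders for illustration, not parameters from the paper's optimization:

```python
import numpy as np

# Hypothetical placeholder parameters -- not values from the paper.
k = 3.0e4    # bar stiffness, N/m (assumed)
c = 3.0e2    # damping coefficient, N*s/m (assumed)
m = 55.0     # effective mass loading the bar, kg (assumed)

def bar_displacement(x0=0.0, v0=-1.5, dt=1e-4, t_end=0.5):
    """Integrate m*x'' = -k*x - c*x' (damped linear spring) with
    semi-implicit Euler; returns the bar displacement time series."""
    n = int(t_end / dt)
    x, v = x0, v0
    xs = np.empty(n)
    for i in range(n):
        a = (-k * x - c * v) / m   # spring plus damping acceleration
        v += a * dt
        x += v * dt
        xs[i] = x
    return xs

print(f"peak displacement: {np.abs(bar_displacement()).max():.3f} m")
```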

4.
Many elite gymnasts perform the straight arm backward longswing on rings in competition. Since points are deducted if gymnasts possess motion on completion of the movement, the ability to successfully perform the longswing to a stationary final handstand is of great importance. Sprigings et al. (1998) found that for a longswing initiated from a still handstand the optimum performance of an inelastic planar simulation model resulted in a residual swing of more than 3 degrees in the final handstand. For the present study, a three-dimensional simulation model of a gymnast swinging on rings, incorporating lateral arm movements used by gymnasts and mandatory apparatus elasticity, was used to investigate the possibility of performing a backward longswing initiated and completed in handstands with minimal swing. Root mean square differences between the actual and simulated performances for the orientations of the gymnast and rings cables, the combined cable tension and the extension of the gymnast were 3.2 degrees, 1.0 degrees, 270 N and 0.05 m respectively. The optimised simulated performance initiated from a handstand with 2.1 degrees of swing and using realistic changes to the gymnast's technique resulted in 0.6 degrees of residual swing in the final handstand. The sensitivity of the backward longswing to perturbations in the technique used for the optimised performance was determined. For a final handstand with minimal residual swing (2 degrees) the changes in body configuration must be timed to within 15 ms, while a delay of 30 ms will result in considerable residual swing (7 degrees).
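The root mean square differences quoted above are, in the usual formulation, computed per matched time series. A minimal sketch, assuming uniformly sampled and time-aligned signals:

```python
import numpy as np

def rms_difference(actual, simulated):
    """RMS difference between time-aligned actual and simulated series,
    e.g. gymnast orientation (deg) or combined cable tension (N)."""
    actual = np.asarray(actual, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return np.sqrt(np.mean((actual - simulated) ** 2))
```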

5.
Speech perception often benefits from vision of the speaker's lip movements when they are available. One potential mechanism underlying this reported gain in perception arising from audio-visual integration is on-line prediction. In this study we address whether the preceding speech context in a single modality can improve audiovisual processing and whether this improvement is based on on-line information-transfer across sensory modalities. In the experiments presented here, during each trial, a speech fragment (context) presented in a single sensory modality (voice or lips) was immediately continued by an audiovisual target fragment. Participants made speeded judgments about whether voice and lips were in agreement in the target fragment. The leading single sensory context and the subsequent audiovisual target fragment could be continuous in one modality only, both (context in one modality continues into both modalities in the target fragment) or neither modality (i.e., discontinuous). The results showed quicker audiovisual matching responses when context was continuous with the target within either the visual or auditory channel (Experiment 1). Critically, prior visual context also provided an advantage when it was cross-modally continuous (with the auditory channel in the target), but auditory to visual cross-modal continuity resulted in no advantage (Experiment 2). This suggests that visual speech information can provide an on-line benefit for processing the upcoming auditory input through the use of predictive mechanisms. We hypothesize that this benefit is expressed at an early level of speech analysis.

6.
In humans, emotions from music serve important communicative roles. Despite a growing interest in the neural basis of music perception, action and emotion, the majority of previous studies in this area have focused on the auditory aspects of music performances. Here we investigate how the brain processes the emotions elicited by audiovisual music performances. We used event-related functional magnetic resonance imaging, and in Experiment 1 we defined the areas responding to audiovisual (musician's movements with music), visual (musician's movements only), and auditory emotional (music only) displays. Subsequently a region of interest analysis was performed to examine if any of the areas detected in Experiment 1 showed greater activation for emotionally mismatching performances (combining the musician's movements with mismatching emotional sound) than for emotionally matching music performances (combining the musician's movements with matching emotional sound) as presented in Experiment 2 to the same participants. The insula and the left thalamus were found to respond consistently to visual, auditory and audiovisual emotional information and to have increased activation for emotionally mismatching displays in comparison with emotionally matching displays. In contrast, the right thalamus was found to respond to audiovisual emotional displays and to have similar activation for emotionally matching and mismatching displays. These results suggest that the insula and left thalamus have an active role in detecting emotional correspondence between auditory and visual information during music performances, whereas the right thalamus has a different role.

7.
The perception and production of biological movements is characterized by the 1/3 power law, a relation linking the curvature and the velocity of an intended action. In particular, motions are perceived and reproduced distorted when their kinematics deviate from this biological law. Whereas most studies dealing with this perceptual-motor relation have focused on visual or kinaesthetic modalities in a unimodal context, in this paper we show that auditory dynamics strikingly biases visuomotor processes. Biologically consistent or inconsistent circular visual motions were used in combination with circular or elliptical auditory motions. Auditory motions were synthesized friction sounds mimicking those produced by a pen rubbing on paper when someone is drawing. Sounds were presented diotically, and the auditory motion velocity was evoked through the friction sound timbre variations without any spatial cues. Remarkably, when subjects were asked to reproduce circular visual motion while listening to sounds that evoked elliptical kinematics without seeing their hand, they drew elliptical shapes. Moreover, distortions induced by inconsistent elliptical kinematics in the visual and auditory modalities added up linearly. These results bring to light the substantial role of auditory dynamics in the visuo-motor coupling in a multisensory context.
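For reference, the one-third power law mentioned above is the standard kinematic relation between tangential velocity and curvature (equivalently stated as the two-thirds power law between angular velocity and curvature):

$$v(t) = K\,\kappa(t)^{-1/3} \quad\Longleftrightarrow\quad \omega(t) = K\,\kappa(t)^{2/3},$$

where $v$ is tangential velocity, $\omega$ angular velocity, $\kappa$ trajectory curvature, and $K$ a velocity gain factor that is roughly constant over a movement segment. Stimuli whose kinematics violate this relation are the "biologically inconsistent" motions used in the study.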

8.
Using state-of-the-art technology, interactions of eye, head and intersegmental body movements were analyzed for the first time during multiple twisting somersaults of high-level gymnasts. With this aim, we used a unique combination of a 16-channel infrared kinemetric system; a three-dimensional video kinemetric system; wireless electromyography; and a specialized wireless sport-video-oculography system, which was able to capture and calculate precise oculomotor data under conditions of rapid multiaxial acceleration. All data were synchronized and integrated in a multimodal software tool for three-dimensional analysis. During specific phases of the recorded movements, a previously unknown eye-head-body interaction was observed. The phenomenon was marked by a prolonged and complete suppression of gaze-stabilizing eye movements, in favor of a tight coupling with the head, spine and joint movements of the gymnasts. Potential reasons for these observations are discussed with regard to earlier findings and integrated within a functional model.

9.
Systematic differences in circadian rhythmicity are thought to be a substantial factor determining inter-individual differences in fatigue and cognitive performance. The synchronicity effect (when time of testing coincides with the respective circadian peak period) seems to play an important role. Eye movements have been shown to be a reliable indicator of fatigue due to sleep deprivation or time spent on cognitive tasks. However, eye movements have not been used so far to investigate the circadian synchronicity effect and the resulting differences in fatigue. The aim of the present study was to assess how different oculomotor parameters in a free visual exploration task are influenced by: a) fatigue due to chronotypical factors (being a 'morning type' or an 'evening type'); b) fatigue due to the time spent on task. Eighteen healthy participants performed a free visual exploration task of naturalistic pictures while their eye movements were recorded. The task was performed twice, once at their optimal and once at their non-optimal time of the day. Moreover, participants rated their subjective fatigue. The non-optimal time of the day triggered a significant and stable increase in the mean visual fixation duration during the free visual exploration task for both chronotypes. The increase in the mean visual fixation duration correlated with the difference in subjectively perceived fatigue at optimal and non-optimal times of the day. Conversely, the mean saccadic speed significantly and progressively decreased throughout the duration of the task, but was not influenced by the optimal or non-optimal time of the day for both chronotypes. The results suggest that different oculomotor parameters are discriminative for fatigue due to different sources. A decrease in saccadic speed seems to reflect fatigue due to time spent on task, whereas an increase in mean fixation duration reflects a lack of synchronicity between chronotype and time of the day.

10.
Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
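A toy sketch in the spirit of the model described above: words as points in a feature space, recognition as probabilistic inference from noisy auditory and visual observations. The word count, dimensionality, and noise levels below are invented for illustration; this is not the paper's actual model or parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, n_dims = 50, 4                      # invented feature space
words = rng.normal(size=(n_words, n_dims))   # hypothetical word prototypes

def p_correct(true_word, sigma_a, sigma_v):
    """Posterior probability of the correct word given one noisy auditory
    and one noisy visual observation (independent Gaussian noise)."""
    x_a = words[true_word] + rng.normal(0.0, sigma_a, n_dims)
    x_v = words[true_word] + rng.normal(0.0, sigma_v, n_dims)
    ll = (-np.sum((words - x_a) ** 2, axis=1) / (2 * sigma_a ** 2)
          - np.sum((words - x_v) ** 2, axis=1) / (2 * sigma_v ** 2))
    post = np.exp(ll - ll.max())             # unnormalized posterior
    return (post / post.sum())[true_word]

# Visual enhancement = accuracy gain from adding vision, swept across
# auditory noise levels (sigma_v -> large approximates audio-only).
for sigma_a in (0.5, 1.0, 2.0, 4.0):
    av = np.mean([p_correct(w, sigma_a, 1.0) for w in range(n_words)])
    a_only = np.mean([p_correct(w, sigma_a, 1e6) for w in range(n_words)])
    print(f"sigma_a = {sigma_a}: enhancement = {av - a_only:+.3f}")
```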

11.
Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.

12.
Sense of agency, the experience of controlling external events through one's actions, stems from contiguity between action- and effect-related signals. Here we show that human observers link their action- and effect-related signals using a computational principle common to cross-modal sensory grouping. We first report that the detection of a delay between tactile and visual stimuli is enhanced when both stimuli are synchronized with separate auditory stimuli (experiment 1). This occurs because the synchronized auditory stimuli hinder the potential grouping between tactile and visual stimuli. We subsequently demonstrate an analogous effect on observers' key press as an action and a sensory event. This change is associated with a modulation in sense of agency; namely, sense of agency, as evaluated by apparent compressions of action–effect intervals (intentional binding) or subjective causality ratings, is impaired when both the participant's action and its putative visual effect events are synchronized with auditory tones (experiments 2 and 3). Moreover, a similar role of action–effect grouping in determining sense of agency is demonstrated when the additional signal is presented in the modality identical to an effect event (experiment 4). These results are consistent with the view that sense of agency is the result of general processes of causal perception and that cross-modal grouping plays a central role in these processes.

13.
We compared sensorimotor adaptation in the visual and the auditory modality. Subjects pointed to visual targets while receiving direct spatial information about fingertip position in the visual modality, or they pointed to visual targets while receiving indirect information about fingertip position in the visual modality, or they pointed to auditory targets while receiving indirect information about fingertip position in the auditory modality. Feedback was laterally shifted to induce adaptation, and aftereffects were tested with both target modalities and both hands. We found that aftereffects of adaptation were smaller when tested with the non-adapted hand, i.e., intermanual transfer was incomplete. Furthermore, aftereffects were smaller when tested in the non-adapted target modality, i.e., intermodal transfer was incomplete. Aftereffects were smaller following adaptation with indirect rather than direct feedback, but they were not smaller following adaptation with auditory rather than visual targets. From this we conclude that the magnitude of adaptive recalibration rather depends on the method of feedback delivery (indirect versus direct) than on the modality of feedback (visual versus auditory).

14.
In natural environments, sensory information is embedded in temporally contiguous streams of events. This is typically the case when seeing and listening to a speaker or when engaged in scene analysis. In such contexts, two mechanisms are needed to single out and build a reliable representation of an event (or object): the temporal parsing of information and the selection of relevant information in the stream. It has previously been shown that rhythmic events naturally build temporal expectations that improve sensory processing at predictable points in time. Here, we asked to which extent temporal regularities can improve the detection and identification of events across sensory modalities. To do so, we used a dynamic visual conjunction search task accompanied by auditory cues synchronized or not with the color change of the target (horizontal or vertical bar). Sounds synchronized with the visual target improved search efficiency for temporal rates below 1.4 Hz but did not affect efficiency above that stimulation rate. Desynchronized auditory cues consistently impaired visual search below 3.3 Hz. Our results are interpreted in the context of the Dynamic Attending Theory: specifically, we suggest that a cognitive operation structures events in time irrespective of the sensory modality of input. Our results further support and specify recent neurophysiological findings by showing strong temporal selectivity for audiovisual integration in the auditory-driven improvement of visual search efficiency.

15.
Visual search is markedly improved when a target color change is synchronized with a spatially non-informative auditory signal. This "pip and pop" effect is an automatic process, as even a distractor captures attention when accompanied by a tone. Previous studies investigating visual attention have indicated that automatic capture is susceptible to the size of the attentional window. The present study investigated whether the pip and pop effect is modulated by the extent to which participants divide their attention across the visual field. We show that participants were better in detecting a synchronized audiovisual event when they divided their attention across the visual field relative to a condition in which they focused their attention. We argue that audiovisual capture is reduced under focused conditions relative to distributed settings.

16.
In the optimisation of sports movements using computer simulation models, the joint actuators must be constrained in order to obtain realistic results. In models of a gymnast, the main constraint used in previous studies was maximum voluntary active joint torque. In the stalder, gymnasts reach their maximal hip flexion under the bar. The purpose of this study was to introduce a model of passive torque to assess the effect of the gymnast's flexibility on the technique of the straddled stalder. A three-dimensional kinematics-driven simulation model was developed. The kinematics of the shoulder flexion, hip flexion and hip abduction were optimised to minimise torques for four hip flexion flexibilities: 100°, 110°, 120° and 130°. With decreased flexibility, the piked posture period is shorter and occurs later. Moreover, the peaks of shoulder and hip torques increase. Gymnasts with low hip flexibility need to be stronger to achieve a stalder; hip flexibility should be considered by coaches before teaching this skill.

17.
In natural audio-visual environments, a change in depth is usually correlated with a change in loudness. In the present study, we investigated whether correlating changes in disparity and loudness would provide a functional advantage in binding disparity and sound amplitude in a visual search paradigm. To test this hypothesis, we used a method similar to that used by van der Burg et al. to show that non-spatial transient (square-wave) modulations of loudness can drastically improve spatial visual search for a correlated luminance modulation. We used dynamic random-dot stereogram displays to produce pure disparity modulations. Target and distractors were small disparity-defined squares (either 6 or 10 in total). Each square moved back and forth in depth in front of the background plane at different phases. The target's depth modulation was synchronized with an amplitude-modulated auditory tone. Visual and auditory modulations were always congruent (both sine-wave or square-wave). In a speeded search task, five observers were asked to identify the target as quickly as possible. Results show a significant improvement in visual search times in the square-wave condition compared to the sine condition, suggesting that transient auditory information can efficiently drive visual search in the disparity domain. In a second experiment, participants performed the same task in the absence of sound and showed a clear set-size effect in both modulation conditions. In a third experiment, we correlated the sound with a distractor instead of the target. This produced longer search times, indicating that the correlation is not easily ignored.
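The congruent stimulus pairs described above (target disparity and tone amplitude following the same waveform, either sine or square) can be sketched as follows; the modulation frequency and depth range are invented for illustration, not stimulus values from the paper:

```python
import numpy as np

fs = 1000                       # sample rate, Hz
f_mod = 1.0                     # modulation frequency, Hz (assumed)
t = np.arange(0.0, 4.0, 1.0 / fs)

sine = np.sin(2 * np.pi * f_mod * t)
square = np.sign(sine)          # transient version of the same rhythm

# Congruent audiovisual pair: depth excursion and tone amplitude
# envelope share one waveform (both sine, or both square).
disparity_mod = 0.1 * sine              # target depth excursion (assumed units)
amplitude_env = 0.5 * (1.0 + sine)      # tone amplitude envelope in [0, 1]
disparity_sq = 0.1 * square
amplitude_sq = 0.5 * (1.0 + square)
```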

18.
Auditory feedback is required to maintain fluent speech. At present, it is unclear how attention modulates auditory feedback processing during ongoing speech. In this event-related potential (ERP) study, participants vocalized /a/ while they heard their vocal pitch suddenly shifted downward a ½ semitone in both single- and dual-task conditions. During the single-task condition participants passively viewed a visual stream for cues to start and stop vocalizing. In the dual-task condition, participants vocalized while they identified target stimuli in a visual stream of letters. The presentation rate of the visual stimuli was manipulated in the dual-task condition in order to produce a low, intermediate, and high attentional load. Visual target identification accuracy was lowest in the high attentional load condition, indicating that attentional load was successfully manipulated. Results further showed that participants who were exposed to the single-task condition, prior to the dual-task condition, produced larger vocal compensations during the single-task condition. Thus, when participants' attention was divided, less attention was available for the monitoring of their auditory feedback, resulting in smaller compensatory vocal responses. However, P1-N1-P2 ERP responses were not affected by divided attention, suggesting that the effect of attentional load was not on the auditory processing of pitch-altered feedback, but instead it interfered with the integration of auditory and motor information, or motor control itself.
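For reference (standard equal-temperament arithmetic, not a figure from the paper), a downward shift of half a semitone scales the fundamental frequency by

$$2^{-1/24} \approx 0.9715,$$

i.e. a decrease of about 2.9%; for a 200 Hz voice, roughly 5.7 Hz.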

19.
Franklin DW, So U, Burdet E, Kawato M. PLoS ONE 2007, 2(12): e1336

Background

When learning to perform a novel sensorimotor task, humans integrate multi-modal sensory feedback such as vision and proprioception in order to make the appropriate adjustments to successfully complete the task. Sensory feedback is used both during movement to control and correct the current movement, and to update the feed-forward motor command for subsequent movements. Previous work has shown that adaptation to stable dynamics is possible without visual feedback. However, it is not clear to what degree visual information during movement contributes to this learning or whether it is essential to the development of an internal model or impedance controller.

Methodology/Principal Findings

We examined the effects of the removal of visual feedback during movement on the learning of both stable and unstable dynamics in comparison with the case when both vision and proprioception are available. Subjects were able to learn to make smooth movements in both types of novel dynamics after learning with or without visual feedback. By examining the endpoint stiffness and force after learning it could be shown that subjects adapted to both types of dynamics in the same way whether they were provided with visual feedback of their trajectory or not. The main effects of visual feedback were to increase the success rate of movements, slightly straighten the path, and significantly reduce variability near the end of the movement.

Conclusions/Significance

These findings suggest that visual feedback of the hand during movement is not necessary for the adaptation to either stable or unstable novel dynamics. Instead, vision appears to be used to fine-tune corrections of hand trajectory at the end of reaching movements.

20.

Background

Audition provides important cues with regard to stimulus motion although vision may provide the most salient information. It has been reported that a sound of fixed intensity tends to be judged as decreasing in intensity after adaptation to looming visual stimuli or as increasing in intensity after adaptation to receding visual stimuli. This audiovisual interaction in motion aftereffects indicates that there are multimodal contributions to motion perception at early levels of sensory processing. However, there has been no report that sounds can induce the perception of visual motion.

Methodology/Principal Findings

A visual stimulus blinking at a fixed location was perceived to be moving laterally when the flash onset was synchronized to an alternating left-right sound source. This illusory visual motion was strengthened with increasing retinal eccentricity (2.5 deg to 20 deg) and occurred more frequently when the onsets of the audio and visual stimuli were synchronized.

Conclusions/Significance

We clearly demonstrated that the alternation of sound location induces illusory visual motion when vision cannot provide accurate spatial information. The present findings strongly suggest that the neural representations of auditory and visual motion processing can bias each other, which yields the best estimates of external events in a complementary manner.
