Similar Documents
 20 similar documents found (search time: 31 ms)
1.
Numerous studies have addressed the issue of where people look when they perform hand movements. Yet, very little is known about how visuomotor performance is affected by fixation location. Previous studies investigating the accuracy of actions performed in visual periphery have revealed inconsistent results. While movements performed under full visual feedback (closed-loop) seem to remain surprisingly accurate, open-loop as well as memory-guided movements usually show a distinct bias (i.e. overestimation of target eccentricity) when executed in periphery. In this study, we aimed to investigate whether gaze position affects movements that are performed under full vision but cannot be corrected based on a direct comparison between the hand and target position. To do so, we employed a classical visuomotor reaching task in which participants were required to move their hand through a gap between two obstacles into a target area. Participants performed the task in four gaze conditions: free viewing (no restrictions on gaze), central fixation, or fixation on one of the two obstacles. Our findings show that obstacle avoidance behaviour is moderated by fixation position. Specifically, participants tended to select movement paths that veered away from the fixated obstacle, indicating that perceptual errors persist in closed-loop vision conditions if they cannot be corrected effectively based on visual feedback. Moreover, by measuring eye movements in a free-viewing task (Experiment 2), we confirmed that participants naturally prefer to move their eyes and hand to the same spatial location.

2.
It is widely known that the pinch-grip forces of the human hand are linearly related to the weight of the grasped object. Less is known about the relationship between grip force and grip stiffness. We set out to determine how these dependencies vary across different tasks with and without visual feedback. In two different settings, subjects were asked to (a) grasp and hold a stiffness-measuring manipulandum with a predefined grip force, which differed from experiment to experiment, or (b) grasp and hold the same manipulandum, whose weight we varied between trials, in a more natural task. Both situations led to grip forces in comparable ranges. As the measured grip stiffness results from muscle and tendon properties, and since muscle/tendon stiffness increases more or less linearly as a function of muscle force, we found, as might be predicted, a linear relationship between grip force and grip stiffness. However, the measured stiffness ranges and the increase of stiffness with grip force varied significantly between the two tasks. Furthermore, we found a strong correlation between regression slope and mean stiffness for the force task, which we ascribe to a force-stiffness curve passing through the origin. Based on a biomechanical model, we attributed the difference between the tasks to changes in wrist configuration rather than to changes in cocontraction. In a new set of experiments in which we prevented the wrist from moving by fixing it and resting it on a pedestal, we found that subjects exhibited similar stiffness/force characteristics in both tasks.
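The grip-force/grip-stiffness relationship described above is a linear regression per task; a minimal sketch of that analysis step is given below, assuming per-trial grip-force and stiffness measurements are already available as arrays. The numbers and the helper name `stiffness_force_fit` are illustrative, not taken from the study.

```python
# Minimal sketch (not the authors' analysis code): fit a linear grip-force vs. grip-stiffness
# relationship for one task and report the regression slope, intercept and correlation.
import numpy as np
from scipy import stats

def stiffness_force_fit(grip_force_n, grip_stiffness_n_per_m):
    """Return slope, intercept and r for a linear stiffness-vs-force fit."""
    slope, intercept, r, p, se = stats.linregress(grip_force_n, grip_stiffness_n_per_m)
    return slope, intercept, r

# Hypothetical per-trial data for the force task (grip force in N, stiffness in N/m).
force_task_gf = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
force_task_k  = np.array([300., 520., 760., 980., 1210.])
slope, intercept, r = stiffness_force_fit(force_task_gf, force_task_k)
print(f"force task: stiffness ~ {slope:.1f}*GF + {intercept:.1f} (r = {r:.2f})")
```

Comparing the slopes fitted separately for the force task and the weight-holding task would reproduce the kind of between-task contrast reported above.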

3.
Stepp CE, An Q, Matsuoka Y. PLoS ONE. 2012;7(2):e32743
Most users of prosthetic hands must rely on visual feedback alone, which requires visual attention and cognitive resources. Providing haptic feedback of variables relevant to manipulation, such as contact force, may thus improve the usability of prosthetic hands for tasks of daily living. Vibrotactile stimulation was explored as a feedback modality in ten unimpaired participants across eight sessions in a two-week period. Participants used their right index finger to perform a virtual object manipulation task with both visual and augmentative vibrotactile feedback related to force. Through repeated training, participants were able to learn to use the vibrotactile feedback to significantly improve object manipulation. Removal of vibrotactile feedback in session 8 significantly reduced task performance. These results suggest that vibrotactile feedback paired with training may enhance the manipulation ability of prosthetic hand users without the need for more invasive strategies.
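For illustration only (the study does not specify its mapping), one simple way to drive augmentative vibrotactile feedback is to map the measured contact force linearly onto a normalized vibration amplitude; the force range and function below are assumptions.

```python
# Hypothetical sketch of a force-to-vibration mapping for augmentative feedback;
# not the study's device driver.
def force_to_vibration(force_n, f_min=0.0, f_max=10.0, amp_max=1.0):
    """Linearly map contact force onto a normalized vibration drive amplitude in [0, 1]."""
    force_clamped = min(max(force_n, f_min), f_max)
    return amp_max * (force_clamped - f_min) / (f_max - f_min)

for f in (0.0, 2.5, 7.0, 12.0):
    print(f"{f:4.1f} N -> drive amplitude {force_to_vibration(f):.2f}")
```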

4.
Few studies have investigated the control of grip force when manipulating an object with an extremely small mass using a precision grip, although some related information has been provided by studies conducted in an unusual microgravity environment. Grip-load force coordination was examined while healthy adults (N = 17) held a moveable instrumented apparatus with its mass changed between 6 g and 200 g in 14 steps, with its grip surface set as either sandpaper or rayon. Additional measurements of grip-force-dependent finger-surface contact area and finger skin indentation, as well as a test of weight discrimination, were also performed. For each surface condition, the static grip force was modulated in parallel with load force while holding an object with a mass above 30 g. For objects with a mass smaller than 30 g, on the other hand, the parallel relationship changed, resulting in a progressive increase in the grip-to-load force (GF/LF) ratio. The rayon surface yielded a higher GF/LF ratio across all mass levels. The proportion of safety margin in the static grip force and the normalized moment-to-moment variability of the static grip force were also elevated towards the lower end of the object mass range for both surfaces. These findings indicate that the strategy of grip force control for holding objects with an extremely small mass differs from that for objects with a mass above 30 g. The data for the contact area, skin indentation, and weight discrimination suggest that a decreased level of cutaneous feedback signals from the finger pads could have played some role in a cost function for efficient grip force control with low-mass objects. The elevated grip force variability associated with signal-dependent and internal noises, and the anticipated inertial force on the held object due to acceleration of the arm and hand, could also have contributed to the cost function.
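As a rough illustration of the quantities reported above, the sketch below computes a grip-to-load force (GF/LF) ratio and a safety-margin proportion for a statically held object; the two-digit slip model, friction coefficient and example numbers are assumptions, not values from the study.

```python
# Minimal sketch: GF/LF ratio and safety-margin proportion for a two-finger precision grip,
# assuming a known finger-object friction coefficient mu.
def grip_load_metrics(grip_force_n, object_mass_kg, mu, g=9.81):
    load_force_n = object_mass_kg * g            # static load while holding still
    slip_force_n = load_force_n / (2.0 * mu)     # minimum grip force that prevents slip
    gf_lf_ratio = grip_force_n / load_force_n
    safety_margin = (grip_force_n - slip_force_n) / grip_force_n
    return gf_lf_ratio, safety_margin

# Hypothetical example: a 30 g object held with 0.9 N grip force on a high-friction surface.
ratio, margin = grip_load_metrics(grip_force_n=0.9, object_mass_kg=0.030, mu=1.0)
print(f"GF/LF = {ratio:.2f}, safety margin = {margin:.0%}")
```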

5.
To produce skilled movements, the brain flexibly adapts to different task requirements and movement contexts. Two core abilities underlie this flexibility. First, depending on the task, the motor system must rapidly switch the way it produces motor commands and how it corrects movements online, i.e. it switches between different (feedback) control policies. Second, it must also adapt to environmental changes for different tasks separately. Here we show that these two abilities are related. In a bimanual movement task, we show that participants can switch on a movement-by-movement basis between two feedback control policies, depending only on a static visual cue. When this cue indicates that the hands control separate objects, reactions to force field perturbations of each arm are purely unilateral. In contrast, when the visual cue indicates a commonly controlled object, reactions are shared across hands. Participants are also able to learn different force fields associated with a visual cue. This is, however, only the case when the visual cue is associated with different feedback control policies. These results indicate that when the motor system can flexibly switch between different control policies, it is also able to adapt separately to the dynamics of different environmental contexts. In contrast, visual cues that are not associated with different control policies are not effective for learning different task dynamics.
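A toy numerical sketch of the two feedback control policies described above: with separate objects each hand corrects only its own perturbation, whereas with a commonly controlled object corrections are shared across hands. The gain values below are invented for illustration.

```python
# Illustrative sketch only: unilateral vs. shared feedback corrections for a bimanual task.
import numpy as np

def correction(gains, perturbation):
    """Corrective command for the (left, right) hands given a perturbation of each hand."""
    return gains @ perturbation

separate_objects = np.array([[1.0, 0.0],
                             [0.0, 1.0]])   # each hand corrects only its own error
common_object    = np.array([[0.6, 0.4],
                             [0.4, 0.6]])   # corrections are shared across hands

perturb_left_only = np.array([1.0, 0.0])    # force-field perturbation of the left arm only
print("separate objects:", correction(separate_objects, perturb_left_only))
print("common object:   ", correction(common_object, perturb_left_only))
```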

6.
We use visual information to guide our grasping movements. When grasping an object with a precision grip, the two digits need to reach two different positions more or less simultaneously, but the eyes can only be directed to one position at a time. Several studies that have examined eye movements in grasping have found that people tend to direct their gaze near where their index finger will contact the object. Here we aimed at better understanding why people do so by asking participants to lift an object off a horizontal surface. They were to grasp the object with a precision grip while movements of their hand, eye and head were recorded. We confirmed that people tend to look closer to positions that a digit needs to reach more accurately. Moreover, we show that where they look as they reach for the object depends on where they were looking before, presumably because they try to minimize the time during which the eyes are moving so fast that no new visual information is acquired. Most importantly, we confirmed that people have a bias to direct gaze towards the index finger’s contact point rather than towards that of the thumb. In our study, this cannot be explained by the index finger contacting the object before the thumb. Instead, it appears to be because the index finger moves to a position that is hidden behind the object that is grasped, probably making this the place at which one is most likely to encounter unexpected problems that would benefit from visual guidance. However, this cannot explain the bias that was found in previous studies, where neither contact point was hidden, so it cannot be the only explanation for the bias.

7.
It has been shown that target-pointing arm movements without visual feedback shift downward in space microgravity and upward in centrifuge hypergravity. Under the gravity changes of aircraft parabolic flight, however, arm movements have also been reported to shift upward in hypergravity, whereas a downward shift under microgravity has been contradicted by those reports. In order to explain this discrepancy, we reexamined pointing movements using an experimental design that differed from prior ones. Arm-pointing movements were measured by goniometry around the shoulder joint of subjects with or without the eyes closed, or with a weight in the hand, during hyper- and microgravity in parabolic flight. Subjects were fastened securely to the seat with the neck fixed and the elbow maintained in an extended position, and the eyes were kept closed for a period of time before each episode of parabolic flight. Under these new conditions, the arm consistently shifted downward during microgravity and mostly upward during hypergravity, as expected. We concluded that arm-pointing deviation induced by parabolic flight could also be valid for studying the mechanism underlying disorientation under varying gravity conditions.

8.
The posterior inner perisylvian region, including the secondary somatosensory cortex (area SII) and the adjacent region of the posterior insular cortex (pIC), has been implicated in haptic processing by integrating somato-motor information during hand manipulation, both in humans and in non-human primates. However, motor-related properties during hand manipulation are still largely unknown. To investigate motor-related activity in the hand region of SII/pIC, two macaque monkeys were trained to perform a hand-manipulation task requiring three different grip types (precision grip, finger exploration, side grip) in both light and dark conditions. Our results showed that 70% (n = 33/48) of task-related neurons within SII/pIC were activated only during the monkeys’ active hand manipulation. Of those 33 neurons, 15 (45%) began to discharge before hand-target contact, while the remaining neurons were tonically active after contact. Thirty percent (n = 15/48) of the studied neurons responded both to passive somatosensory stimulation and to the motor task. A consistent percentage of task-related neurons in SII/pIC was selectively activated during finger exploration (FE) and precision grasping (PG) execution, suggesting they play a pivotal role in controlling skilled finger movements. Furthermore, hand-manipulation-related neurons also responded when visual feedback was absent in the dark. Altogether, our results suggest that somato-motor neurons in SII/pIC likely contribute to haptic processing from the initial to the final phase of grasping and object manipulation. Such motor-related activity could also provide the somato-motor binding principle enabling the translation of diachronic somatosensory inputs into a coherent image of the explored object.

9.
In daily life, object manipulation is usually performed concurrently with the execution of cognitive tasks. The aim of the present study was to determine which aspects of precision grip require cognitive resources, using a motor-cognitive dual-task paradigm. Eighteen healthy participants took part in the experiment, which comprised two conditions. In the first condition, participants performed a motor task without any concomitant cognitive task. They were instructed to grip, lift and hold an apparatus incorporating strain gauges allowing a continuous measurement of the force perpendicular to each contact surface (grip force, GF) as well as the total tangential force applied on the object (load force, LF). In the second condition, participants performed the same motor task while concurrently performing a cognitive task consisting of a complex visual search combined with counting. In the dual-task condition, we found a significant increase in the duration of the preload phase (the time between initial contact of the fingers with the apparatus and the onset of load force), as well as a significant increase of grip force during the holding phase, indicating that the cognitive task interfered with the initial force scaling performed during the preload phase and with the fine-tuning of grip force during the hold phase. These findings indicate that these aspects of precision grip require cognitive resources. In contrast, other aspects of the precision grip, such as the temporal coupling between grip and load forces, were not affected by the cognitive task, suggesting that they reflect more automatic processes. Taken together, our results suggest that assessing the dynamic and temporal parameters of precision grip in the context of a concurrent cognitive task may constitute a more ecological and better-suited tool to characterize motor dysfunction in patients.
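A minimal sketch of how the preload-phase duration mentioned above can be extracted from grip- and load-force records: take the time between grip-force contact onset and load-force onset. The thresholds and synthetic force ramps are assumptions, not the study's actual detection algorithm.

```python
# Illustrative sketch: estimate preload-phase duration from sampled GF and LF signals.
import numpy as np

def preload_duration(grip_force, load_force, fs, gf_thresh=0.2, lf_thresh=0.1):
    contact_idx = np.argmax(grip_force > gf_thresh)   # first sample above contact threshold
    lf_onset_idx = np.argmax(load_force > lf_thresh)  # first sample of load-force increase
    return (lf_onset_idx - contact_idx) / fs          # duration in seconds

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
gf = np.clip(5 * (t - 0.30), 0, 4)   # synthetic grip force ramping up from 0.30 s
lf = np.clip(8 * (t - 0.45), 0, 3)   # synthetic load force ramping up from 0.45 s
print(f"preload phase ~ {preload_duration(gf, lf, fs):.3f} s")
```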

10.
When watching an actor manipulate objects, observers, like the actor, naturally direct their gaze to each object as the hand approaches and typically maintain gaze on the object until the hand departs. Here, we probed the function of observers’ eye movements, focusing on two possibilities: (i) that observers’ gaze behaviour arises from processes involved in the prediction of the target object of the actor’s reaching movement and (ii) that this gaze behaviour supports the evaluation of mechanical events that arise from interactions between the actor’s hand and objects. Observers watched an actor reach for and lift one of two presented objects. The observers’ task was either to predict the target object or judge its weight. Proactive gaze behaviour, similar to that seen in self-guided action observation, was seen in the weight judgement task, which requires evaluating mechanical events associated with lifting, but not in the target prediction task. We submit that an important function of gaze behaviour in self-guided action observation is the evaluation of mechanical events associated with interactions between the hand and object. By comparing predicted and actual mechanical events, observers, like actors, can gain knowledge about the world, including information about objects they may subsequently act upon.

11.
It has been argued that visual perception and the visual control of action depend upon functionally distinct and anatomically separable brain systems. Electrophysiological evidence indicates that binocular vision may be particularly important for the visuomotor processing within the posterior parietal cortex, and neuropsychological and psychophysical studies confirm that binocular vision is crucial for the accurate planning and control of prehension movements. An unresolved issue concerns the consequences for visuomotor processing of removing binocular vision. By one account, monocular viewing leads to reliance upon pictorial visual cues to calibrate grasping and results in disruption to normal size-constancy mechanisms. This proposal is based on the finding that maximum grip apertures are reduced with monocular vision. By a second account, monocular viewing results in the loss of binocular visual cues and leads to strategic changes in visuomotor processing by way of altered safety margins. This proposal is based on the finding that maximum grip apertures are increased with monocular vision. We measured both grip aperture and grip force during prehension movements executed with binocular and monocular viewing. We demonstrate that each of the above accounts may be correct and can be observed within the same task. Specifically, we show that, while grip apertures increase with monocular vision, consistent with altered visuomotor safety margins, maximum grip force is nevertheless reduced, consistent with a misperception of object size. These results are related to differences in visual processing required for calibrating grip aperture and grip force during reaching.

12.
Recent studies have shown that articulatory gestures are systematically associated with specific manual grip actions. Here we show that executing such actions can influence performance on a speech-categorization task. Participants watched and/or listened to speech stimuli while executing either a power or a precision grip. Grip performance influenced the syllable categorization by increasing the proportion of responses of the syllable congruent with the executed grip (power grip—[ke] and precision grip—[te]). Two follow-up experiments indicated that the effect was based on action-induced bias in selecting the syllable.

13.
When moving grasped objects, people automatically modulate grip force (GF) with movement-dependent load force (LF) in order to prevent object slip. However, GF can also be modulated voluntarily, as when squeezing an object. Here we investigated possible interactions between automatic and voluntary GF control. Participants were asked to generate horizontal cyclic movements (between 0.6 and 2.0 Hz) of a hand-held object that was restrained by an elastic band such that the load force (LF) reached a peak once per movement cycle, and to simultaneously squeeze the object at each movement reversal (i.e., twice per cycle). Participants also performed two control tasks in which they either only moved (between 0.6 and 2.0 Hz) or squeezed (between 1.2 and 4.0 Hz) the object. The extent to which GF modulation in the simultaneous task could be predicted from the two control tasks was assessed using power spectral analyses. At all frequencies, the GF power spectra from the simultaneous task exhibited two prominent components that occurred at the cycle frequency (ƒ) and at twice this frequency (2ƒ), whereas the spectra from the movement and squeeze control tasks exhibited only single peaks at ƒ and 2ƒ, respectively. At lower frequencies, the magnitudes of both frequency components in the simultaneous task were similar to the magnitudes of the corresponding components in the control tasks. However, as frequency increased, the magnitudes of both components in the simultaneous task were greater than the magnitudes of the corresponding control task components. Moreover, the phase relationship between the ƒ components of GF and LF began to drift from the value observed in the movement control task. Overall these results suggest that, at lower movement frequencies, voluntary and automatic GF control processes operate at different hierarchical levels. Several mechanisms are discussed to account for interaction effects observed at higher movement frequencies.
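A minimal sketch of the power-spectral step described above, assuming a sampled grip-force record: estimate power near the movement frequency ƒ and its first harmonic 2ƒ with Welch's method. The sampling rate, movement frequency and synthetic signal are placeholders, not the experimental data.

```python
# Illustrative sketch: power of a grip-force record at the cycle frequency f and at 2f.
import numpy as np
from scipy.signal import welch

fs = 500.0                       # assumed sampling rate (Hz)
f_move = 1.2                     # assumed movement cycle frequency (Hz)
t = np.arange(0, 30, 1 / fs)

# Synthetic grip force: automatic modulation at f plus voluntary squeezes at 2f, plus noise.
gf = 5 + 1.0 * np.sin(2 * np.pi * f_move * t) + 0.8 * np.sin(2 * np.pi * 2 * f_move * t)
gf += 0.1 * np.random.randn(t.size)

freqs, psd = welch(gf, fs=fs, nperseg=4096)
for target in (f_move, 2 * f_move):
    idx = np.argmin(np.abs(freqs - target))   # nearest spectral bin to the target frequency
    print(f"power near {target:.1f} Hz: {psd[idx]:.3f}")
```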

14.
An observer traversing an environment actively relocates gaze to fixate objects. Evidence suggests that gaze is frequently directed toward the center of an object considered a target, but more likely toward the edges of an object that appears as an obstacle. We suggest that this difference in gaze might be motivated by the specific patterns of optic flow generated by fixating either the center or the edge of an object. To support our suggestion we derive an analytical model which shows the following: tangentially fixating the outer surface of an obstacle leads to strong flow discontinuities that can be used for flow-based segmentation, whereas fixating the target center while gaze and heading are locked, without head, body, or eye rotations, gives rise to a symmetric expansion flow with its center at the point being approached, which facilitates steering toward a target. We conclude that gaze control incorporates ecological constraints to improve the robustness of steering and collision avoidance by actively generating flows appropriate to solve the task.
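To make the expansion-flow case concrete, the sketch below generates the radial flow field that results from pure approach toward a fixated point (the focus of expansion); the geometry is simplified and the rate parameter is arbitrary, so this is illustrative rather than the paper's analytical model.

```python
# Illustrative sketch: radial expansion flow for pure translation toward a fixated point.
# Flow magnitude grows with distance from the focus of expansion (FOE).
import numpy as np

def expansion_flow(points_xy, foe_xy, rate=1.0):
    """Flow vectors for image points under approach toward the focus of expansion."""
    return rate * (np.asarray(points_xy) - np.asarray(foe_xy))

pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.2], [-0.3, -0.1]])
print(expansion_flow(pts, foe_xy=(0.0, 0.0), rate=0.5))
```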

15.
The objective of this study was to measure the forces applied to an object manipulated in the different gravitational fields attained during parabolic flights. Eight subjects participated: four were experienced in parabolic flights (ES) and four were inexperienced (NES). They had to continuously move an instrumented object up and down in three different gravitational conditions (1 g, 1.8 g, 0 g). In 1 g, grip force precisely anticipated the fluctuations of load force, which was maximal at the bottom and minimal at the top of the arm trajectory. When gravity changed (0 g and 1.8 g), the grip-load force coupling persisted for all subjects from the first parabola. While the ES immediately exerted a grip force appropriate to the gravity level, the NES dramatically increased their grip when faced with hyper- and microgravity for the first time. They then progressively relaxed their grip until, after the fifth parabola, a grip-load force relationship similar to that observed in 1 g was established. We suggest that each new gravitational field is rapidly incorporated into an internal model within the CNS, which can then be reused as the occasion requires.
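As a worked illustration of the anticipatory grip-load coupling discussed above, the sketch below computes the load force LF = m(g + a) for different gravity levels and a grip force scaled to it with a proportional safety margin; the mass, friction coefficient and margin are assumed values, not those of the experiment.

```python
# Illustrative sketch: load force under different gravity levels and an anticipatory grip force.
def load_force(mass_kg, accel_m_s2, g_level=9.81):
    """Vertical load force on a hand-held object moved with upward acceleration accel_m_s2."""
    return mass_kg * (g_level + accel_m_s2)

def anticipated_grip(load_n, mu=0.8, margin=0.3):
    slip = load_n / (2 * mu)       # minimum grip force preventing slip (two-digit grip)
    return slip * (1 + margin)     # add a proportional safety margin

for label, g in (("1 g", 9.81), ("1.8 g", 17.66), ("0 g", 0.0)):
    lf = load_force(0.4, accel_m_s2=2.0, g_level=g)   # assumed 400 g object, 2 m/s^2 accel
    print(f"{label}: LF = {lf:.2f} N, anticipated GF = {anticipated_grip(lf):.2f} N")
```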

16.
This study examined adaptive changes of eye-hand coordination during a visuomotor rotation task. Young adults made aiming movements to targets on a horizontal plane while looking at the rotated feedback (cursor) of their hand movements on a monitor. To vary task difficulty, three rotation angles (30°, 75°, and 150°) were tested in three groups. All groups shortened hand movement time and trajectory length with practice. However, the control strategies used differed among groups. The 30° group used proportionately more implicit adjustments of hand movements than the other groups. The 75° group used more on-line feedback control, whereas the 150° group used explicit strategic adjustments. Regarding eye-hand coordination, the timing of gaze shift to the target gradually changed with practice from the late to the early phase of hand movements in all groups, indicating an emerging gaze-anchoring behavior. Gaze locations prior to the gaze anchoring were also modified with practice, from the cursor vicinity to an area between the starting position and the target. Reflecting the varying task difficulty, these changes occurred fastest in the 30° group, followed by the 75° group; the 150° group persisted in gazing at the cursor vicinity. These results suggest that the function of gaze control during visuomotor adaptation changes from reactive control, for exploring the relation between cursor and hand movements, to predictive control, for guiding the hand to the task goal. That gaze-anchoring behavior emerged in all groups despite the various control strategies indicates the generality of this adaptive pattern of eye-hand coordination in goal-directed actions.
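A minimal sketch of the visuomotor-rotation manipulation itself: the cursor shown on the monitor is the hand position rotated by a fixed angle about the start position. The coordinates and helper name are illustrative; the actual apparatus and calibration are not described here.

```python
# Illustrative sketch: rotate hand-position feedback about the start position, as in a
# visuomotor-rotation task.
import numpy as np

def rotated_cursor(hand_xy, start_xy, angle_deg):
    """Return the cursor position displayed for a given hand position."""
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return np.asarray(start_xy) + rot @ (np.asarray(hand_xy) - np.asarray(start_xy))

# A hand movement straight ahead (10 cm) appears rotated by 30 degrees on the screen.
print(rotated_cursor(hand_xy=(0.0, 0.10), start_xy=(0.0, 0.0), angle_deg=30))
```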

17.
Schema design and implementation of the grasp-related mirror neuron system
Mirror neurons within a monkey's premotor area F5 fire not only when the monkey performs a certain class of actions but also when the monkey observes another monkey (or the experimenter) perform a similar action. It has thus been argued that these neurons are crucial for understanding of actions by others. We offer the hand-state hypothesis as a new explanation of the evolution of this capability: the basic functionality of the F5 mirror system is to elaborate the appropriate feedback – what we call the hand state – for opposition-space-based control of manual grasping of an object. Given this functionality, the social role of the F5 mirror system in understanding the actions of others may be seen as an exaptation gained by generalizing from one's own hand to another's hand. In other words, mirror neurons first evolved to augment the “canonical” F5 neurons (active during self-movement based on observation of an object) by providing visual feedback on “hand state”, relating the shape of the hand to the shape of the object. We then introduce the MNS1 (mirror neuron system 1) model of F5 and related brain regions. The existing Fagg–Arbib–Rizzolatti–Sakata model represents circuitry for visually guided grasping of objects, linking the anterior intraparietal area (AIP) with F5 canonical neurons. The MNS1 model extends the AIP visual pathway by also modeling pathways, directed toward F5 mirror neurons, which match arm–hand trajectories to the affordances and location of a potential target object. We present the basic schemas for the MNS1 model, then aggregate them into three “grand schemas” – visual analysis of hand state, reach and grasp, and the core mirror circuit – for each of which we present a useful implementation (a non-neural visual processing system, a multijoint 3-D kinematics simulator, and a learning neural network, respectively). With this implementation we show how the mirror system may learn to recognize actions already in the repertoire of the F5 canonical neurons. We show that the connectivity pattern of mirror neuron circuitry can be established through training, and that the resultant network can exhibit a range of novel, physiologically interesting behaviors during the process of action recognition. We train the system on the basis of final grasp but then observe the whole time course of mirror neuron activity, yielding predictions for neurophysiological experiments under conditions of spatial perturbation, altered kinematics, and ambiguous grasp execution which highlight the importance of the timing of mirror neuron activity.

18.
Schizophrenia is characterized by an altered sense of reality, associated with hallucinations and delusions. Some theories suggest that schizophrenia is related to a deficiency of the system that generates information about the sensory consequences of the actions performed by the subject. This system monitors the reafferent information resulting from an action and allows its anticipation. In the present study, we examined visual event-related potentials (ERPs) generated by a sensorimotor task in 15 patients with schizophrenia and 15 normal controls. The visual feedback from hand movements performed by the subjects was experimentally distorted. Behavioral results showed that patients were impaired in recognizing their own movements. The ERP signal in patients also differed from that of control subjects. In patients, the ERP waveform was affected during the early part of the response (200 ms). This early effect in patients with schizophrenia reveals modified processing of the visual consequences of their actions.

19.
20.
Brain computer interface (BCI) technology has been proposed for motor neurorehabilitation, motor replacement and assistive technologies. It is an open question whether proprioceptive feedback affects the regulation of brain oscillations and therefore BCI control. We developed a BCI coupled online with a robotic hand exoskeleton for flexing and extending the fingers. Twenty-four healthy participants performed five different tasks of closing and opening the hand: (1) motor imagery of the hand movement without any overt movement and without feedback, (2) motor imagery with movement as online feedback (participants see and feel their hand, with the exoskeleton moving according to their brain signals), (3) passive movement (the orthosis passively opens and closes the hand without imagery), (4) active (overt) movement of the hand, and (5) rest. Performance was defined as the difference in sensorimotor rhythm power between the motor task and rest, calculated offline for the different tasks. Participants were divided into three groups depending on the feedback received during task 2 (the other tasks were the same for all participants). Group 1 (n = 9) received contingent positive feedback (participants’ sensorimotor rhythm (SMR) desynchronization was directly linked to hand orthosis movements), group 2 (n = 8) contingent “negative” feedback (participants’ sensorimotor rhythm synchronization was directly linked to hand orthosis movements) and group 3 (n = 7) sham feedback (no link between brain oscillations and orthosis movements). We observed that proprioceptive feedback (feeling and seeing hand movements) significantly improved BCI performance. Furthermore, only in the contingent positive group was a significant motor learning effect observed, with SMR desynchronization during motor imagery without feedback increasing over time. We also observed significantly stronger SMR desynchronization in the contingent positive group compared to the other groups during active and passive movements. To summarize, we demonstrated that the use of contingent positive proprioceptive feedback in a BCI enhanced SMR desynchronization during motor tasks.
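A minimal sketch of the performance measure described above, under the assumption that SMR power is taken in the 8-13 Hz band: event-related desynchronization is the relative drop in band power during the motor task compared with rest. The sampling rate, band limits and synthetic signals are placeholders.

```python
# Illustrative sketch: SMR (8-13 Hz) desynchronization as the relative band-power difference
# between a motor task and rest for one EEG channel.
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, lo=8.0, hi=13.0):
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])   # integrated band power

def smr_erd(task_eeg, rest_eeg, fs):
    """Negative values indicate desynchronization (power drop) during the task."""
    p_task, p_rest = band_power(task_eeg, fs), band_power(rest_eeg, fs)
    return (p_task - p_rest) / p_rest

fs = 250.0
t = np.arange(0, 10, 1 / fs)
rest = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)        # strong 10 Hz rhythm
task = 0.4 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # attenuated rhythm
print(f"SMR ERD: {smr_erd(task, rest, fs):.0%}")
```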

