Similar documents
Found 20 similar documents (search time: 237 ms)
1.
This study used kinematics to investigate the integration between vision and olfaction during grasping movements. Participants were requested to smell an odorant and then grasp an object presented in central vision. The results indicate that if the target was small (e.g., a strawberry), the time and amplitude of maximum hand aperture were later and greater, respectively, when the odor evoked a larger object (e.g., an orange) than when the odor evoked an object of similar size to the target or no odor was presented. Conversely, the time and amplitude of maximum hand aperture were earlier and reduced, respectively, when the target was large (e.g., a peach) and the odor evoked a smaller object (e.g., an almond) than when the odor evoked an object of similar size to the target or no odor was presented. Taken together, these results add to the evidence of cross-modal links between olfaction and vision and extend this notion to goal-directed actions.

2.
Reaching-to-grasp has generally been classified as the coordination of two separate visuomotor processes: transporting the hand to the target object and performing the grip. An alternative view has recently emerged: that grasping can be explained as pointing movements performed by the digits of the hand to target positions on the object. We have previously implemented the minimum variance model of human movement as an optimal control scheme suitable for control of a robot arm reaching to a target. Here, we extend that scheme to perform grasping movements with a hand and arm model. Since the minimum variance model requires that signal-dependent noise be present on the motor commands to the actuators of the movement, our approach is to plan the reach and the grasp separately, in line with the classical view, but using the same computational model for pointing, in line with the alternative view. We show that our model successfully captures some of the key characteristics of human grasping movements, including the observations that maximum grip size increases with object size (with a slope of approximately 0.8) and that this maximum grip occurs at 60–80% of the movement time. We then use our model to analyse contributions to the digit end-point variance from the two components of the grasp (the transport and the grip). We also briefly discuss further areas of investigation that are prompted by our model.
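The minimum variance model's core assumption is that motor-command noise scales with command magnitude. A minimal sketch of that assumption (the scaling constant `k` and command values are illustrative, not the paper's parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_command(u, k=0.1):
    """Signal-dependent noise: the noise standard deviation scales with |u|."""
    return u + rng.normal(0.0, k * abs(u))

# Larger motor commands incur proportionally larger noise, which is why an
# optimal controller trades movement speed against end-point variance.
small = np.array([noisy_command(1.0) for _ in range(10_000)])
large = np.array([noisy_command(10.0) for _ in range(10_000)])
```

Under this noise model, minimizing end-point variance over the whole movement yields smooth, bell-shaped velocity profiles, which is what makes it usable as a shared planner for both the transport and the grip components.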

3.

Background

Research on multisensory integration during natural tasks such as reach-to-grasp is still in its infancy. Crossmodal links between vision, proprioception and audition have been identified, but how olfaction contributes to the planning and control of reach-to-grasp movements has not been decisively shown. We used kinematics to explicitly test the influence of olfactory stimuli on reach-to-grasp movements.

Methodology/Principal Findings

Subjects were requested to reach towards and grasp a small or a large visual target (i.e., precision grip, involving the opposition of index finger and thumb for a small size target and a power grip, involving the flexion of all digits around the object for a large target) in the absence or in the presence of an odour evoking either a small or a large object that if grasped would require a precision grip and a whole hand grasp, respectively. When the type of grasp evoked by the odour did not coincide with that for the visual target, interference effects were evident on the kinematics of hand shaping and the level of synergies amongst fingers decreased. When the visual target and the object evoked by the odour required the same type of grasp, facilitation emerged and the intrinsic relations amongst individual fingers were maintained.

Conclusions/Significance

This study demonstrates that olfactory stimuli carry information detailed enough to elicit the planning of a reach-to-grasp movement suited to interacting with the evoked object. The findings offer a substantial contribution to the current debate about the multisensory nature of the sensorimotor transformations underlying grasping.

4.
Preparing a goal-directed movement often requires detailed analysis of our environment. When picking up an object, its orientation, size and relative distance are relevant parameters when preparing a successful grasp. It would therefore be beneficial if the motor system were able to influence early perception such that the information-processing needs of action control are met at the earliest possible stage. However, only a few studies have reported (indirect) evidence for action-induced improvements in visual perception. We therefore aimed to provide direct evidence for a feature-specific perceptual modulation during the planning phase of a grasping action. Human subjects were instructed to either grasp or point to a bar while simultaneously performing an orientation discrimination task. The bar could slightly change its orientation during grasping preparation. By analyzing discrimination response probabilities, we found increased perceptual sensitivity to orientation changes when subjects were instructed to grasp the bar rather than point to it. As a control, the same experiment was repeated using bar luminance changes, a feature that is not relevant for either grasping or pointing. Here, no differences in visual sensitivity between grasping and pointing were found. The present results constitute the first direct evidence for increased perceptual sensitivity to a visual feature that is relevant for a particular skeletomotor act during the movement preparation phase. We speculate that such action-induced perceptual improvements are controlled by neuronal feedback mechanisms from cortical motor planning areas to early visual cortex, similar to what was recently established for spatial perception improvements shortly before eye movements.

5.
In anurans with axillary amplexus, males may be physically unable to clasp females that differ greatly from them in body size. Such a mechanical constraint during grasping is thought to be one of the proximate mechanisms leading pairs to form size-assortatively. Using a pairing experiment, this study tested this prediction in a temperate frog (Rana chensinensis) in which some size-assortative mating occurs in natural populations. We found that the probability of pairing success decreased as the size difference between the sexes increased. When a female was much larger than the male attempting to grasp her, she tended to dislodge him aggressively, suggesting a role for the mechanical constraint in facilitating female choice against small-sized mates. By contrast, when the male was much larger than the female, he often failed to grasp her effectively or to retain her in amplexus for long, indicating that the mechanical constraint also limits male pairing attempts and female preference for large-sized mates.

6.
There is ample evidence that people plan their movements to ensure comfortable final grasp postures at the end of a movement. The end-state comfort effect has been found to be a robust constraint during unimanual movements, and leads to the inference that goal postures are represented and planned prior to movement initiation. The purpose of this study was to examine whether individuals make appropriate corrections to ensure comfortable final goal postures when faced with an unexpected change in action goal. Participants reached for a horizontal cylinder and placed the left or right end of the object into the target disk. As soon as the participant began to move, a secondary stimulus was triggered, which indicated whether the intended action goal had changed or not. Confirming previous research, participants selected initial grasp postures that ensured end-state comfort during non-perturbed trials. In addition, participants made appropriate on-line corrections to their reach-to-grasp movements to ensure end-state comfort during perturbed trials. Corrections in grasp posture occurred either early or late in the reach-to-grasp phase. The results indicate that individuals plan their movements to afford comfort at the end of the movement, and that grasp posture planning is controlled via both feedforward and feedback mechanisms.

7.
The classic understanding of prehension is that of coordinated reaching and grasping. An alternative view is that the grasping in prehension emerges from independently controlled individual digit movements (the double-pointing model). The current study tested this latter model in bimanual prehension: participants had to grasp an object between their two index fingers. Right after the start of the movement, the future end position of one of the digits was perturbed. The perturbations resulted in expected changes in the kinematics of the perturbed digit but also in adjusted kinematics in the unperturbed digit. The latter effects showed up when the end position of the right index finger was perturbed, but not when the end position of the left index finger was perturbed. Because the absence of a coupling between the digits is the core assumption of the double-pointing model, finding any perturbation effects challenges this account of prehension; the double-pointing model predicts that the unperturbed digit would be unaffected by the perturbation. The authors conclude that the movement of the digits in prehension is coupled into a grasping component.

8.
We use visual information to guide our grasping movements. When grasping an object with a precision grip, the two digits need to reach two different positions more or less simultaneously, but the eyes can only be directed to one position at a time. Several studies that have examined eye movements in grasping have found that people tend to direct their gaze near where their index finger will contact the object. Here we aimed at better understanding why people do so by asking participants to lift an object off a horizontal surface. They were to grasp the object with a precision grip while movements of their hand, eye and head were recorded. We confirmed that people tend to look closer to positions that a digit needs to reach more accurately. Moreover, we show that where they look as they reach for the object depends on where they were looking before, presumably because they try to minimize the time during which the eyes are moving so fast that no new visual information is acquired. Most importantly, we confirmed that people have a bias to direct gaze towards the index finger’s contact point rather than towards that of the thumb. In our study, this cannot be explained by the index finger contacting the object before the thumb. Instead, it appears to be because the index finger moves to a position that is hidden behind the object that is grasped, probably making this the place at which one is most likely to encounter unexpected problems that would benefit from visual guidance. However, this cannot explain the bias that was found in previous studies, where neither contact point was hidden, so it cannot be the only explanation for the bias.

9.
Ganel T, Freud E, Chajut E, Algom D. PLoS ONE. 2012;7(4):e36253

Background

Human resolution for object size is typically determined by psychophysical methods that are based on conscious perception. In contrast, grasping of the same objects might be less conscious. It is suggested that grasping is mediated by mechanisms other than those mediating conscious perception. In this study, we compared the visual resolution for object size of the visuomotor and the perceptual system.

Methodology/Principal Findings

In Experiment 1, participants discriminated the size of pairs of objects once through perceptual judgments and once by grasping movements toward the objects. Notably, the actual size differences were set below the Just Noticeable Difference (JND). We found that grasping trajectories reflected the actual size differences between the objects regardless of the JND. This pattern was observed even in trials in which the perceptual judgments were erroneous. The results of an additional control experiment showed that these findings were not confounded by task demands. Participants were not aware, therefore, that their size discrimination via grasp was veridical.

Conclusions/Significance

We conclude that human resolution is not fully tapped by perceptually determined thresholds. Grasping likely exhibits greater resolving power than people usually realize.

10.
We reach for and grasp different sized objects numerous times per day. Most of these movements are visually-guided, but some are guided by the sense of touch (i.e. haptically-guided), such as reaching for your keys in a bag, or for an object in a dark room. A marked right-hand preference has been reported during visually-guided grasping, particularly for small objects. However, little is known about hand preference for haptically-guided grasping. Recently, a study has shown a reduction in right-hand use in blindfolded individuals, and an absence of hand preference if grasping was preceded by a short haptic experience. These results suggest that vision plays a major role in hand preference for grasping. If this were the case, then one might expect congenitally blind (CB) individuals, who have never had a visual experience, to exhibit no hand preference. Two novel findings emerge from the current study: first, the results showed that contrary to our expectation, CB individuals used their right hand during haptically-guided grasping to the same extent as visually-unimpaired (VU) individuals did during visually-guided grasping. And second, object size affected hand use in an opposite manner for haptically- versus visually-guided grasping. Big objects were more often picked up with the right hand during haptically-guided, but less often during visually-guided grasping. This result highlights the different demands that object features pose on the two sensory systems. Overall the results demonstrate that hand preference for grasping is independent of visual experience, and they suggest a left-hemisphere specialization for the control of grasping that goes beyond sensory modality.

11.
Reach-to-grasp movements change quantitatively in a lawful (i.e. predictable) manner with changes in object properties. We explored whether altering object texture would produce qualitative changes in the form of the precontact movement patterns. Twelve participants reached to lift objects from a tabletop. Nine objects were produced, each with one of three grip surface textures (high-friction, medium-friction and low-friction) and one of three widths (50 mm, 70 mm and 90 mm). Each object was placed at three distances (100 mm, 300 mm and 500 mm), representing a total of 27 trial conditions. We observed two distinct movement patterns across all trials. Participants either: (i) brought their arm to a stop, secured the object and lifted it from the tabletop; or (ii) grasped the object 'on-the-fly', so it was secured in the hand while the arm was moving. A majority of grasps were on-the-fly when the texture was high-friction and none when the object was low-friction, with medium-friction producing an intermediate proportion. Previous research has shown that the probability of on-the-fly behaviour is a function of grasp surface accuracy constraints. A finger friction rig was used to calculate the coefficients of friction for the objects and these calculations showed that the area available for a stable grasp (the 'functional grasp surface size') increased with surface friction coefficient. Thus, knowledge of functional grasp surface size is required to predict the probability of observing a given qualitative form of grasping in human prehensile behaviour.
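The link between friction coefficient and usable grasp surface follows from elementary Coulomb contact mechanics. A minimal sketch (the circular grip-surface geometry and radius `r` are illustrative assumptions, not the study's finger friction rig):

```python
import math

def friction_cone_half_angle(mu):
    """A contact resists slip while the applied force stays within
    atan(mu) of the surface normal (the Coulomb friction cone)."""
    return math.atan(mu)

def functional_surface_arc(mu, r=0.025):
    """Arc length (m) of a hypothetical circular grip surface of radius r
    over which the surface normal stays inside the friction cone,
    i.e. where a stable grasp can be secured."""
    return 2.0 * r * friction_cone_half_angle(mu)

# Higher-friction surfaces expose a larger functional grasp surface,
# consistent with more 'on-the-fly' grasps on high-friction objects.
```

On this account, a low-friction object shrinks the region where contact forces can be applied without slip, tightening the accuracy constraint on the digits and favouring the stop-then-grasp pattern.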

12.
Various movement parameters of grasping movements, like velocity or type of the grasp, have been successfully decoded from neural activity. However, the question of movement event detection from brain activity, that is, decoding the time at which an event occurred (e.g. movement onset), has been addressed less often. Yet, this may be a topic of key importance, as a brain-machine interface (BMI) that controls a grasping prosthesis could be realized by detecting the time of grasp, together with an optional decoding of which type of grasp to apply. We therefore studied the detection of the time of grasp from human ECoG recordings during a sequence of natural and continuous reach-to-grasp movements. Using signals recorded from the motor cortex, a detector based on regularized linear discriminant analysis was able to retrieve the time-point of grasp with high reliability and only few false detections. Best performance was achieved using a combination of signal components from time and frequency domains. Sensitivity, measured by the amount of correct detections, and specificity, represented by the amount of false detections, depended strongly on the imposed restrictions on temporal precision of detection and on the delay between event detection and the time the event occurred. Including neural data from after the event into the decoding analysis slightly increased accuracy; however, reasonable performance could also be obtained when grasping events were detected 125 ms in advance. In summary, our results provide a good basis for using detection of grasping movements from ECoG to control a grasping prosthesis.
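A regularized-LDA event detector of this kind can be sketched with scikit-learn's shrinkage LDA. The data below are synthetic stand-ins, not ECoG; the feature dimensionality, class balance, and effect size are all assumptions for illustration:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

# Stand-in for windowed ECoG features: rows are time windows, columns mix
# time- and frequency-domain components; label 1 marks windows at a grasp.
X = rng.normal(size=(400, 16))
y = np.zeros(400, dtype=int)
y[rng.choice(400, 40, replace=False)] = 1
X[y == 1] += 1.5  # grasp events shift the feature distribution

# Regularized LDA: Ledoit-Wolf shrinkage stabilises the covariance
# estimate, which matters when features are many relative to windows.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X, y)
detections = clf.decision_function(X) > 0.0  # threshold the discriminant
```

In an online setting the threshold on the discriminant score trades sensitivity against false detections, mirroring the sensitivity/specificity trade-off the abstract describes.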

13.
Patterns of precision grasp are described in stumptail macaques (Macaca arctoides) before and after lesions of the fasciculus cuneatus (FC). Three monkeys were videotaped while reaching for and grasping small food items. From these videotapes, records were made of the style and outcome of each grasp. Kinematic measurements were also made to describe grip formation and terminal grasp. During grip formation, grip aperture was measured as the distance between the tips of the index finger and the thumb. For terminal grasp, the joint angles of the index finger were measured. The majority of grasps by normal monkeys were of the precision type, in which the item was carried between the tips of the index finger and thumb. Each normal monkey approached objects with a highly consistent grip formation; that is, the fingertips formed a small grip aperture during the approach, and the aperture varied little on repeated grasps. To grasp an item, the forefinger moved in a multiarticular pattern, in which the proximal joint flexed and the distal joint extended. As a result of this combination of movements, the forefinger pad was placed directly onto the object. Following FC transection, the monkeys were studied for 10 months, beginning 1 month after the lesion, to allow for recovery from the acute effects of surgery. The monkeys could grasp the food items, but they rarely opposed the fingertips in precision grasp. Grip formation was altered and was characterized either by excessive grip aperture or by little to no finger opening. All of the monkeys used the table surface to help grasp items. Combined multiarticular patterns of flexion and extension were never observed postoperatively; they were replaced by flexion at all joints of the fingers. These results suggest that the FCs are more important for precision grasping than for other, less refined grasp forms (e.g., power grasps; Napier, 1956). The FCs provide critical proprioceptive feedback to cerebral areas involved in the planning and/or the execution of these movements.

14.

Background

Most of us are poor at faking actions. Kinematic studies have shown that when pretending to pick up imagined objects (pantomimed actions), we move and shape our hands quite differently from when grasping real ones. These differences between real and pantomimed actions have been linked to separate brain pathways specialized for different kinds of visuomotor guidance. Yet professional magicians regularly use pantomimed actions to deceive audiences.

Methodology and Principal Findings

In this study, we tested whether, despite their skill, magicians might still show kinematic differences between grasping actions made toward real versus imagined objects. We found that their pantomimed actions in fact closely resembled real grasps when the object was visible (but displaced) (Experiment 1), but failed to do so when the object was absent (Experiment 2).

Conclusions and Significance

We suggest that although the occipito-parietal visuomotor system in the dorsal stream is designed to guide goal-directed actions, prolonged practice may enable it to calibrate actions based on visual inputs displaced from the action.

15.
16.
Substantial evidence has highlighted the significant role of associative brain areas, such as the posterior parietal cortex (PPC), in transforming multimodal sensory information into motor plans. However, little is known about how different sensory information, which can have different delays or be absent, combines to produce a motor plan, such as executing a reaching movement. To address these issues, we constructed four biologically plausible network architectures to simulate PPC: 1) feedforward from sensory input to the PPC to a motor output area, 2) feedforward with the addition of an efference copy from the motor area, 3) feedforward with the addition of lateral or recurrent connectivity across PPC neurons, and 4) feedforward plus efference copy and lateral connections. Using an evolutionary strategy, the connectivity of these network architectures was evolved to execute visually guided movements, where the target stimulus provided visual input for the entirety of each trial. The models were then tested on a memory-guided motor task, where the visual target disappeared after a short duration. Sensory input to the neural networks had sensory delays consistent with results from monkey studies. We found that lateral connections within the PPC resulted in smoother movements and were necessary for accurate movements in the absence of visual input. The addition of lateral connections resulted in velocity profiles consistent with those observed in human and non-human primate visually guided studies of reaching, and allowed for smooth, rapid, and accurate movements under all conditions. In contrast, feedforward or feedback architectures were insufficient to overcome these challenges. Our results suggest that intrinsic lateral connections are critical for executing accurate, smooth motor plans.
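The claim that lateral (recurrent) connectivity sustains a target representation after visual input disappears can be illustrated with a toy leaky unit; the gain values below are arbitrary assumptions, not the evolved networks of the study:

```python
def simulate(steps=50, target=1.0, visible_until=10, lateral=0.9):
    """Toy PPC unit: with a lateral (recurrent) gain it holds a trace of
    the target after visual input is removed; a purely feedforward unit
    (lateral gain 0) tracks the input and loses the target immediately."""
    x, trace = 0.0, []
    for t in range(steps):
        inp = target if t < visible_until else 0.0
        x = lateral * x + (1.0 - lateral) * inp
        trace.append(x)
    return trace

recurrent = simulate(lateral=0.9)
feedforward = simulate(lateral=0.0)
# Once the target disappears (t >= 10), only the recurrent unit retains
# a usable memory trace for a memory-guided movement.
```

The same recurrence also low-pass filters the command, which is one simple way lateral connectivity could produce the smoother velocity profiles the study reports.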

17.
We tested the hypothesis that A.I., a subject who has total ophthalmoplegia, resulting in a lack of eye movements, used her head to orientate in a qualitatively similar way to eye-based orientating of control subjects. We used four classic eye-movement paradigms and measured A.I.'s head movements while she performed the tasks. These paradigms were (i) the gap paradigm, (ii) the remote-distractor effect, (iii) the anti-saccade paradigm, and (iv) tests of saccadic suppression. In all cases, A.I.'s head saccades were qualitatively similar to previously reported eye-movement data. We conclude that A.I.'s head movements are probably controlled by the same neural mechanisms that control eye movements in unimpaired subjects.

18.
Recent research has shown that neurophysiological activation during action planning depends on the orientation to initial or final action goals for precision grips. However, the neural signature for a distinct class of grasping, power grips, is still unknown. The aim of the present study was to differentiate between cerebral activity, by means of event-related potentials (ERPs), and its temporal organization during power grips executed with an emphasis on either the initial or final parts of movement sequences. In a grasp and transportation task, visual cues emphasized either the grip (the immediate goal) or the target location (the final goal). ERPs differed between immediate and final goal-cued conditions, suggesting different means of operation dependent on goal-relatedness. Differences in mean amplitude occurred earlier for power grips than for recently reported precision grips time-locked to grasping over parieto-occipital areas. Time-locked to final object placement, differences occurred within a similar time window for power and precision grips over frontal areas. These results suggest that a parieto-frontal network of activation is of crucial importance for grasp planning and execution. Our results indicate that power grip preparation and execution for goal-related actions are controlled by similar neural mechanisms as have been observed during precision grips, but with a distinct temporal pattern.

19.
Different primate species have developed extensive capacities for grasping and manipulating objects. However, the manual abilities of primates remain poorly known from a dynamic point of view. The aim of the present study was to quantify the functional and behavioral strategies used by captive bonobos (Pan paniscus) during tool use tasks. The study was conducted on eight captive bonobos which we observed during two tool use tasks: food extraction from a large piece of wood and food recovery from a maze. We focused on grasping postures, in-hand movements, the sequences of grasp postures used, which have not been studied in bonobos, and the kinds of tools selected. Bonobos used a great variety of grasping postures during both tool use tasks. They were capable of in-hand movement, demonstrated complex sequences of contacts, and showed more dynamic manipulation during the maze task than during the extraction task. They arrived at the location of the task with the tool already modified and used different kinds of tools according to the task. We also observed individual manual strategies. Bonobos were thus able to perform in-hand movements similar to those of humans and chimpanzees, demonstrated dynamic manipulation, and responded to task constraints by selecting and modifying tools appropriately, usually before they started the tasks. These results show the necessity of quantifying object manipulation in different species to better understand their real manual specificities, which is essential to reconstruct the evolution of primate manual abilities.

20.
Interacting in the peripersonal space requires coordinated arm and eye movements to visual targets in depth. In primates, the medial posterior parietal cortex (PPC) represents a crucial node in the process of visual-to-motor signal transformations. The medial PPC area V6A is a key region engaged in the control of these processes because it jointly processes visual information, eye position and arm movement related signals. However, to date, there is no evidence in the medial PPC of spatial encoding in three dimensions. Here, using single neuron recordings in behaving macaques, we studied the neural signals related to binocular eye position in a task that required the monkeys to perform saccades and fixate targets at different locations in peripersonal and extrapersonal space. A significant proportion of neurons were modulated by both gaze direction and depth, i.e., by the location of the foveated target in 3D space. The population activity of these neurons displayed a strong preference for peripersonal space in a time interval around the saccade that preceded fixation and during fixation as well. This preference for targets within reaching distance during both target capturing and fixation suggests that binocular eye position signals are implemented functionally in V6A to support its role in reaching and grasping.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号