Similar Articles
20 similar articles found.
1.

Background

Successful object manipulation relies on the ability to form and retrieve sensorimotor memories of the digit forces and positions used in previous object lifts. Past studies of patients affected by Parkinson's disease (PD) have revealed that the basal ganglia play a crucial role in the acquisition and/or retrieval of sensorimotor memories for grasp control. Whereas it is known that PD impairs anticipatory control of digit forces during grasping, learning deficits associated with the planning of digit placement have yet to be explored. This question is motivated by recent work in healthy subjects revealing that anticipatory control of digit placement plays a crucial role in successful manipulation.

Methodology/Principal Findings

We asked ten PD patients off medication and ten age-matched controls to reach, grasp, and lift an object whose center of mass (CM) was on the left, right, or center. The only task requirement was to minimize object roll during lift. The CM remained the same across consecutive trials (blocked condition) or was altered from trial to trial (random condition). We hypothesized that impairment of the basal ganglia-thalamo-cortical circuits in PD patients would reduce their ability to anticipate digit placement appropriate to the CM location. Consequently, we predicted that PD patients would exhibit similar digit placement in the blocked and random conditions and produce larger peak object rolls than control subjects. In the blocked condition, PD patients exhibited significantly weaker modulation of fingertip contact points to CM location and larger object roll than controls (p<0.05 and p<0.01, respectively). Nevertheless, both controls and PD patients minimized object roll more in the blocked than in the random condition (p<0.01).

Conclusions/Significance

Our findings indicate that, even though PD patients may retain a residual capacity for anticipatory control of digit contact points and forces, they fail to implement a motor plan with the same degree of effectiveness as controls. We conclude that intact basal ganglia-thalamo-cortical circuits are necessary for successful sensorimotor learning of both the grasp kinematics and the kinetics required for dexterous hand-object interactions.

2.
Anticipatory force planning during grasping is based on visual cues about the object's physical properties and on sensorimotor memories of previous actions with grasped objects. Vision can be used to estimate object mass from object size, and thereby to identify and recall sensorimotor memories of previously manipulated objects. It is not known whether subjects can use density cues to identify an object's center of mass (CM) and create compensatory moments in an anticipatory fashion during initial object lifts to prevent tilt. We asked subjects (n = 8) to estimate the CM location of visually symmetric objects of uniform density (plastic or brass; symmetric CM) and non-uniform density (a mixture of plastic and brass; asymmetric CM). We then asked whether subjects could use density cues to scale fingertip forces when lifting the visually symmetric objects of uniform and non-uniform density. Subjects were able to accurately estimate an object's center of mass based on visual density cues. When the mass distribution was uniform, subjects could scale their fingertip forces in an anticipatory fashion based on this estimate. However, despite their ability to explicitly estimate CM location when object density was non-uniform, subjects were unable to scale their fingertip forces to create a compensatory moment and prevent tilt on initial lifts. Hefting the object's parts in the hand before the experiment did not affect this ability. This suggests a dichotomy between the ability to accurately identify CM location from non-uniform density cues and the ability to use this information to correctly scale fingertip forces. These results are discussed in the context of possible neural mechanisms underlying the sensorimotor integration that links visual cues and anticipatory control of grasping.
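
To make the density-based CM estimate concrete, here is a minimal sketch of the underlying physics: the CM of a composite object is the mass-weighted mean of its parts' positions. All dimensions and density values below are illustrative assumptions, not the study's actual stimulus specifications.

```python
# Sketch: center of mass of a visually symmetric object built from two
# side-by-side blocks of different density (plastic vs. brass).
PLASTIC = 1.2e3   # kg/m^3, typical rigid plastic (assumed value)
BRASS   = 8.5e3   # kg/m^3 (assumed value)

def center_of_mass(blocks):
    """blocks: list of (density, volume, x_position_of_block_center)."""
    masses = [rho * v for rho, v, _ in blocks]
    total = sum(masses)
    return sum(m * x for m, (_, _, x) in zip(masses, blocks)) / total

# A 10 cm wide object: left half plastic, right half brass (asymmetric CM).
half_volume = 0.05 ** 3   # a 5 cm cube per half, in m^3
asymmetric = [(PLASTIC, half_volume, -0.025), (BRASS, half_volume, +0.025)]
print(f"CM offset: {center_of_mass(asymmetric) * 1000:+.1f} mm from midline")
# -> offset toward the denser (brass) side, which is what subjects must
#    infer from visual material cues alone.
```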

3.
The idea that language can affect how we see the world continues to create controversy. A potentially important study in this field has shown that when an object is suppressed from visual awareness using continuous flash suppression (a form of binocular rivalry), detection of the object is differently affected by a preceding word prime depending on whether the prime matches the object. This suggests that language may affect early stages of vision. We replicated this paradigm and further investigated whether colour terms likewise influence the detection of colours or colour-associated object images suppressed from visual awareness by continuous flash suppression. This method presents rapidly changing visual noise to one eye while the target stimulus is presented to the other, and has been shown to delay conscious perception of a target for up to several minutes. In Experiment 1 we presented greyscale photos of objects, preceded by a congruent object label, an incongruent label, or white noise. Detection sensitivity (d') and hit rates were significantly poorer for suppressed objects preceded by an incongruent label than for those preceded by a congruent label or noise. In Experiment 2, targets were coloured discs preceded by a colour term. Detection sensitivity was significantly worse for suppressed colour patches preceded by an incongruent colour term than by a congruent term or white noise. In Experiment 3, targets were suppressed greyscale object images preceded by an auditory presentation of a colour term: on congruent trials the colour term matched the object's stereotypical colour, and on incongruent trials it did not. Detection sensitivity was significantly poorer on incongruent than congruent trials. Overall, these findings suggest that colour terms affect awareness of coloured stimuli and colour-associated objects, and provide new evidence for language-perception interaction in the brain.
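
For readers unfamiliar with the d' measure reported here, the sketch below computes detection sensitivity under the standard equal-variance signal detection model. The trial counts are invented illustrative values, not data from the experiments, and the log-linear correction is one common convention.

```python
# Sketch of a detection-sensitivity (d') computation.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear
    correction so rates of exactly 0 or 1 stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for congruent vs. incongruent label primes:
print(d_prime(hits=40, misses=10, false_alarms=8, correct_rejections=42))
print(d_prime(hits=28, misses=22, false_alarms=9, correct_rejections=41))
# A lower d' for the second (incongruent) case would mirror the
# pattern the experiments report.
```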

4.
Seo HS, Hummel T. Chemical Senses. 2011;36(3):301-309.
Even though we often perceive odors while hearing auditory stimuli, surprisingly little is known about auditory-olfactory integration. This study investigated the influence of auditory cues on ratings of odor intensity and/or pleasantness, focusing on two factors: "congruency" (Experiment 1) and the "halo/horns effect" of auditory pleasantness (Experiment 2). In Experiment 1, participants were presented with congruent, incongruent, or neutral sounds before and during the presentation of an odor. Participants rated the odors as more pleasant while listening to a congruent sound than while listening to an incongruent one. In Experiment 2, participants received pleasant or unpleasant sounds before and during the presentation of either a pleasant or an unpleasant odor. The hedonic valence of the sounds transferred to the odors, irrespective of the hedonic tone of the odor itself: the more the participants liked the preceding sound, the more pleasant the subsequent odor became. In contrast, ratings of odor intensity appeared to be little or not at all influenced by the congruency or hedonic valence of the auditory cue. In conclusion, the present study provides the first empirical demonstration that auditory cues can modulate odor pleasantness.

5.
Dexterous manipulation relies on modulation of digit forces as a function of digit placement. However, little is known about the sense of position of the vertical distance between finger pads relative to each other. We quantified subjects' ability to match the perceived vertical distance between the thumb and index finger pads (dy) of the right hand ("reference" hand) using the same or the opposite hand ("test" hand) after a 10-second delay without vision of the hands. The reference hand digits were placed passively, either non-collinearly so that the thumb was higher or lower than the index finger (dy = 30 or -30 mm, respectively) or collinearly (dy = 0 mm). Subjects reproduced the reference hand dy using a congruent or inverse test hand posture while exerting negligible digit forces on a handle. We hypothesized that matching error (reference hand dy minus test hand dy) would be greater (a) for collinear than for non-collinear dy values, (b) when reference and test hand postures were not congruent, and (c) when subjects reproduced dy using the opposite hand. Our results confirmed these hypotheses. Under-estimation errors occurred when the postures of the reference and test hands were not congruent, and when the test hand was the opposite hand. These findings indicate that perceived finger pad distance is reproduced less accurately (1) with the opposite than with the same hand and (2) when higher-level processing of somatosensory feedback is required for non-congruent hand postures. We propose that erroneous sensing of finger pad distance, if not compensated for during contact and the onset of manipulation, might lead to manipulation performance errors, as digit forces have to be modulated to perceived digit placement.
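
A minimal sketch of the matching-error measure defined above (reference-hand dy minus test-hand dy, aggregated per condition); the condition labels and numbers are invented for illustration.

```python
# Sketch: computing mean signed matching error per condition.
ref_dy = 30.0  # mm, thumb placed above the index finger

# (condition, reproduced test-hand dy in mm) -- hypothetical trials
trials = [("same-hand congruent", 28.5), ("same-hand congruent", 31.0),
          ("opposite-hand inverse", 21.0), ("opposite-hand inverse", 19.5)]

errors = {}
for condition, test_dy in trials:
    errors.setdefault(condition, []).append(ref_dy - test_dy)

for condition, errs in errors.items():
    print(f"{condition}: mean matching error {sum(errs)/len(errs):+.1f} mm")
# Positive errors correspond to the under-estimation reported for
# non-congruent postures and opposite-hand matching.
```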

6.
Recent studies of sensorimotor control of the human hand have focused on how dexterous manipulation is learned and generalized. Here we address this question by testing the extent to which learned manipulation transfers when the contralateral hand is used and/or object orientation is reversed. We asked subjects to use a precision grip to lift a grip device with an asymmetrical mass distribution while minimizing object roll during lifting by generating a compensatory torque. Subjects were allowed to grasp anywhere on the object's vertical surfaces and were therefore able to modulate both digit positions and forces. After every block of eight trials performed in one manipulation context (i.e., using the right hand at a given object orientation), subjects had to lift the same object in a second context for one trial (transfer trial). Context changes were made by asking subjects to switch the hand used to lift the object and/or by rotating the object 180° about a vertical axis. Three transfer conditions, hand switch (HS), object rotation (OR), and both hand switch and object rotation (HS+OR), were therefore tested and compared with hand-matched control groups who did not experience context changes. We found that subjects in all transfer conditions adapted digit positions across multiple transfer trials much as the control groups did, regardless of the change of context. Moreover, subjects in the HS and HS+OR groups also adapted digit forces similarly to the control group, suggesting independent learning by the left hand. In contrast, the OR group showed significant negative transfer of the compensatory torque due to an inability to adapt digit forces. Our results indicate that internal representations of dexterous manipulation tasks may be built primarily through the hand used for learning and cannot be transferred across hands.
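
As a rough illustration of the torque balance such tasks impose, the sketch below compares the external torque created by an off-axis CM with a simplified two-term digit moment: a load-force difference acting across the grip width, plus a couple from equal grip forces applied at vertically offset contact points. The decomposition, sign convention, and every number are assumptions for illustration, not the study's exact model.

```python
# Sketch: does the grasp generate enough compensatory torque?
G = 9.81  # m/s^2

def compensatory_torque(load_thumb, load_index, grip_force, dy, width):
    """Moment (N*m) about the object's roll axis generated by the digits:
    a load-force difference acting at +/- width/2, plus the couple from
    equal, opposite grip forces applied at vertically offset points."""
    return (load_thumb - load_index) * width / 2 + grip_force * dy

mass, cm_offset, width = 0.4, 0.02, 0.06   # kg, m (CM 2 cm off-axis), m
external = mass * G * cm_offset            # torque the grasp must cancel

torque = compensatory_torque(load_thumb=2.6, load_index=1.3,
                             grip_force=8.0, dy=0.005, width=width)
print(f"required {external:.3f} N*m, generated {torque:.3f} N*m")
# A shortfall here would appear as object roll at lift onset -- the
# error subjects must learn to prevent.
```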

7.
The perception of pictorial gaze cues was examined in long-tailed macaques (Macaca fascicularis). A computerised object-location task was used to explore whether the monkeys would locate a target faster when its appearance was preceded by congruent rather than incongruent gaze cues. Despite existing evidence that macaques preferentially attend to the eyes in facial images and visually orient to depicted gaze cues, the monkeys did not show faster response times on congruent trials in response to either schematic or photographic stimuli. These findings coincide with those reported for baboons tested with a similar paradigm in which gaze cues preceded a target identification task [Fagot, J., Deruelle, C., 2002. Perception of pictorial gaze by baboons (Papio papio). J. Exp. Psychol. 28, 298-308]. When tested with either pictorial stimuli or live interactants, nonhuman primates readily follow gaze but do not seem to use this mechanism to identify a target object; there appears to be a mismatch between attentional changes and manual responses to gaze cues on ostensibly similar tasks.

8.
The language difficulties often seen in individuals with autism might stem from an inability to integrate audiovisual information, a skill important for language development. We investigated whether 9-month-old siblings of older children with autism, who are at increased risk of developing autism, are able to integrate audiovisual speech cues. We used an eye-tracker to record where infants looked when shown a screen displaying two faces of the same model, one articulating /ba/ and the other /ga/, with one face congruent with the syllable sound presented simultaneously and the other incongruent. This method successfully showed that low-risk infants can integrate audiovisual speech: they looked for the same amount of time at the mouths in both the fusible visual /ga/-audio /ba/ and the congruent visual /ba/-audio /ba/ displays, indicating that the auditory and visual streams fuse into a McGurk-type syllabic percept in the incongruent condition. It also showed that low-risk infants could perceive a mismatch between auditory and visual cues: they looked longer at the mouth in the mismatched, non-fusible visual /ba/-audio /ga/ display than in the congruent visual /ga/-audio /ga/ display, demonstrating that they perceive an uncommon, and therefore interesting, speech-like percept when looking at the incongruent mouth (repeated-measures ANOVA, displays × fusion/mismatch conditions interaction: F(1,16) = 17.153, p = 0.001). The looking behaviour of high-risk infants did not differ according to the type of display, suggesting difficulties in matching auditory and visual information (repeated-measures ANOVA, displays × conditions interaction: F(1,25) = 0.09, p = 0.767), in contrast to low-risk infants (repeated-measures ANOVA, displays × conditions × low/high-risk groups interaction: F(1,41) = 4.466, p = 0.041). In some cases this reduced ability might lead to the poor communication skills characteristic of autism.
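
For readers who want to see the shape of the within-subject analysis reported above, here is a sketch of a 2 × 2 repeated-measures ANOVA (displays × fusion/mismatch conditions) for a single group, using statsmodels. The long-format data frame is synthetic; note that statsmodels' AnovaRM handles only within-subject factors, so the reported group interaction would require a mixed design handled elsewhere.

```python
# Sketch: within-subject 2x2 repeated-measures ANOVA on looking times.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for infant in range(17):                      # e.g., a 17-infant group -> F(1,16)
    for display in ("ba_mouth", "ga_mouth"):
        for condition in ("fusible", "mismatch"):
            rows.append({"infant": infant, "display": display,
                         "condition": condition,
                         "looking_time": rng.normal(6.0, 1.5)})  # synthetic data
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="looking_time", subject="infant",
              within=["display", "condition"]).fit()
print(res)   # the display:condition row is the interaction of interest
```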

9.
Attention is intrinsic to our perceptual representations of sensory inputs. Best characterized in the visual domain, it is typically depicted as a spotlight moving over a saliency map that topographically encodes the strengths of visual features and feedback modulations over the visual scene. By introducing smells into two well-established attentional paradigms, the dot-probe and visual-search paradigms, we find that a smell reflexively directs attention to the congruent visual image and facilitates visual search for that image without the mediation of visual imagery. Furthermore, this effect is independent of, and can override, top-down bias. We thus propose that smell quality acts as an object feature whose presence enhances the perceptual saliency of that object, thereby guiding the spotlight of visual attention. Our findings provide robust empirical evidence for a multimodal saliency map that weighs not only visual but also olfactory inputs.

10.
Neurons in the macaque Anterior Intraparietal area (AIP) encode depth structure in random-dot stimuli defined by gradients of binocular disparity, but the importance of binocular disparity in real-world objects for AIP neurons is unknown. We investigated the effect of binocular disparity on the responses of AIP neurons to images of real-world objects during passive fixation. We presented stereoscopic images of natural and man-made objects in which the disparity information was congruent or incongruent with the disparity gradients present in the real-world objects, together with images of the same objects in which such gradients were absent. Although more than half of the AIP neurons were significantly affected by binocular disparity, the great majority remained image selective even in the absence of binocular disparity. AIP neurons tended to prefer stimuli in which the depth information derived from binocular disparity was congruent with that signaled by monocular depth cues, indicating that monocular depth cues influence AIP neurons. Finally, in contrast to neurons in the inferior temporal cortex, AIP neurons do not represent images of objects in terms of categories such as animate versus inanimate, but use representations based on simple shape features, including aspect ratio.

11.
Gottfried JA, Dolan RJ. Neuron. 2003;39(2):375-386.
Human olfactory perception is notoriously unreliable but benefits substantially from visual cues, suggesting important crossmodal integration between these primary sensory modalities. We used event-related fMRI to determine the neural mechanisms underlying olfactory-visual integration in the human brain. Subjects participated in an olfactory detection task in which odors and pictures were delivered separately or together. By manipulating the degree of semantic correspondence between odor-picture pairs, we show a perceptual olfactory facilitation for semantically congruent (versus incongruent) trials. This behavioral advantage was associated with enhanced neural activity in the anterior hippocampus and rostromedial orbitofrontal cortex. We suggest that these findings indicate that the human hippocampus mediates reactivation of crossmodal semantic associations, even in the absence of explicit memory processing.

12.
The present study employs a stereoscopic manipulation to present sentences in three dimensions to subjects as they read for comprehension. Subjects read sentences with (a) no depth cues, (b) a monocular depth cue implying that the sentence loomed out of the screen (i.e., increasing retinal size), (c) congruent monocular and binocular (retinal disparity) depth cues (both implying that the sentence loomed out of the screen), and (d) incongruent monocular and binocular depth cues (the monocular cue implying that the sentence loomed out of the screen and the binocular cue implying that it receded behind the screen). Reading efficiency was mostly unaffected, suggesting that reading in three dimensions is similar to reading in two dimensions. Importantly, fixation disparity was driven by retinal disparity: fixations were significantly more crossed as readers progressed through the sentence in the congruent condition and significantly more uncrossed in the incongruent condition. We conclude that disparity depth cues are used on-line to drive binocular coordination during reading.

13.
Grapheme-color synesthesia is a condition in which the perception of graphemes consistently and automatically evokes an experience of non-physical color. Many studies have examined how synesthesia affects the processing of achromatic graphemes, but less is known about the synesthetic processing of physically colored graphemes. Here, we investigated how the visual processing of colored letters is affected by the congruence or incongruence of synesthetic grapheme-color associations. We briefly presented graphemes (10-150 ms) to 9 grapheme-color synesthetes and 9 control observers. Their task was to report as many letters (targets) as possible while ignoring digits (distractors). Graphemes were colored either congruently or incongruently with each synesthete's reported grapheme-color associations. A mathematical model based on Bundesen's (1990) Theory of Visual Attention (TVA) was fitted to each observer's data, allowing us to estimate discrete components of visual attention. The models suggested that the synesthetes processed congruent letters faster than incongruent ones and were able to retain more congruent letters in visual short-term memory, whereas the control group's model parameters were not significantly affected by congruence. The increase in processing speed when synesthetes process congruent letters suggests that synesthesia affects the processing of letters at a perceptual level. To account for the processing-speed benefit, we propose that synesthetic associations become integrated into the categories of graphemes, and that letter colors are treated as evidence for certain perceptual categorizations in the visual system. We also propose that the enhanced visual short-term memory capacity for congruently colored graphemes can be explained by the synesthetes' expertise regarding their specific grapheme-color associations.
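
To illustrate the kind of parameter the TVA fit yields, here is a deliberately reduced sketch: it fits only an exponential encoding model, P(encoded) = 1 − exp(−v·(t − t0)), to accuracy at several exposure durations, estimating processing speed v and threshold t0. The accuracy data are invented; full TVA fitting (Bundesen, 1990) is a maximum-likelihood model that also estimates VSTM capacity K and attentional weights.

```python
# Sketch: estimating TVA-style processing speed from exposure-duration data.
import numpy as np
from scipy.optimize import curve_fit

def encoding_prob(t, v, t0):
    """P(letter encoded) = 1 - exp(-v * (t - t0)) for t > t0, else 0."""
    return np.where(t > t0, 1.0 - np.exp(-v * np.clip(t - t0, 0, None)), 0.0)

durations = np.array([0.010, 0.030, 0.050, 0.080, 0.150])   # seconds
p_congruent = np.array([0.05, 0.35, 0.55, 0.75, 0.92])      # hypothetical
p_incongruent = np.array([0.03, 0.22, 0.38, 0.58, 0.80])    # hypothetical

for label, p in (("congruent", p_congruent), ("incongruent", p_incongruent)):
    (v, t0), _ = curve_fit(encoding_prob, durations, p, p0=(20.0, 0.005))
    print(f"{label}: v = {v:.1f} items/s, t0 = {t0*1000:.1f} ms")
# A higher v for congruent letters would mirror the processing-speed
# benefit the fitted models showed for the synesthetes.
```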

14.
Studies have shown that internal representations of manipulations of objects with asymmetric mass distributions, generated within a specific orientation, do not generalize to novel orientations: subjects fail to prevent object roll on their first grasp-lift attempt following 180° object rotation. This suggests that representations of these manipulations are specific to the reference frame in which they are formed. However, it is unknown whether that reference frame is specific to the hand, the body, or both, because rotating the object 180° modifies the relation between object and body as well as between object and hand. An alternative, untested explanation for this failure to generalize is that any rotation disrupts grasp performance, regardless of whether the reference frame in which the manipulation was learned is maintained or modified. We examined the effect of rotations that (1) maintain and (2) modify the relations between object and body and between object and hand on the generalizability of learned two-digit manipulation of an object with an asymmetric mass distribution. Following rotations that maintained the relations between object and body and between object and hand (e.g., rotating the object and the subject 180°), subjects continued to use appropriate digit placement and load force distributions, thus generating compensatory moments sufficient to minimize object roll. In contrast, following rotations that modified the relation between (1) object and hand (e.g., rotating the hand around to the opposite object side), (2) object and body (e.g., rotating subject and hand 180°), or (3) both (e.g., rotating the subject 180°), subjects used the same, yet now inappropriate, digit placement and load force distribution as before the rotation. Consequently, the compensatory moments were insufficient to prevent large object rolls. These findings suggest that representations of learned manipulation of objects with asymmetric mass distributions are specific to the body and hand reference frames in which they were learned.

15.
Many behaviourally relevant sensory events, such as motion stimuli and speech, have an intrinsic spatio-temporal structure. This engages intentional and most likely unintentional (automatic) prediction mechanisms that enhance the perception of upcoming stimuli in the event stream. Here we sought to probe the anticipatory processes automatically driven by rhythmic input streams in terms of their spatial and temporal components. To this end, we employed an apparent visual motion paradigm testing the effects of pre-target motion on lateralized visual target discrimination. The motion stimuli moved either towards or away from peripheral target positions (valid vs. invalid spatial motion cueing) at a rhythmic or arrhythmic pace (valid vs. invalid temporal motion cueing). Crucially, we emphasized automatic motion-induced anticipatory processes by rendering the motion stimuli non-predictive of upcoming target position (by design) and task-irrelevant (by instruction), and by instead creating endogenous (orthogonal) expectations using symbolic cueing. Our data revealed that the apparent motion cues automatically engaged both spatial and temporal anticipatory processes, but that these processes were dissociated. We further found evidence for lateralisation of anticipatory temporal but not spatial processes. This indicates that distinct mechanisms may drive automatic spatial and temporal extrapolation of upcoming events from rhythmic event streams, in contrast to previous findings suggesting an interaction between spatial and temporal attention processes when endogenously driven. Our results further highlight the need to isolate intentional from unintentional processes in order to better understand the various anticipatory mechanisms engaged in processing behaviourally relevant stimuli with predictable spatio-temporal structure, such as motion and speech.

16.
Although infant speech perception is often studied in isolated modalities, infants' experience with speech is largely multimodal (i.e., the speech sounds they hear are accompanied by articulating faces). Across two experiments, we tested infants' sensitivity to the relationship between the auditory and visual components of audiovisual speech in their native (English) and a non-native (Spanish) language. In Experiment 1, infants' looking times were measured during a preferential looking task in which they saw two simultaneous visual speech streams articulating a story, one in English and the other in Spanish, while they heard either the English or the Spanish version of the story. In Experiment 2, looking times from another group of infants were measured as they watched single displays of congruent and incongruent combinations of English and Spanish audio and visual speech streams. Findings demonstrated an age-related increase in looking towards the native relative to the non-native visual speech stream when it was accompanied by the corresponding (native) auditory speech. This increase in native-language preference did not appear to be driven by a difference in preference for native vs. non-native audiovisual congruence, as we observed no difference in looking times at the audiovisual streams in Experiment 2.

17.
18.
Rigoulot S, Pell MD. PLoS ONE. 2012;7(1):e30740.
Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality), which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior towards facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality while listening to an emotionally inflected pseudo-utterance ("Someone migged the pazing") spoken in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 ms of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows (0-1250 ms, 1250-2500 ms, 2500-5000 ms) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (an emotion congruency effect), although this effect was often emotion-specific (with the greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions.
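
A small sketch of the windowed gaze analysis this design implies: bin fixations into the three analysis windows and tally looks to prosody-matching versus mismatching faces. The column names and sample records are invented for illustration.

```python
# Sketch: binning fixations into temporal windows and summarising looks.
import pandas as pd

fixations = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2],
    "onset_ms":    [150, 1400, 2600, 300, 3000],
    "duration_ms": [420, 380, 510, 450, 600],
    "face_matches_prosody": [True, False, True, True, False],
})

windows = pd.cut(fixations["onset_ms"], bins=[0, 1250, 2500, 5000],
                 labels=["0-1250", "1250-2500", "2500-5000"])

summary = (fixations
           .groupby([windows, "face_matches_prosody"], observed=True)
           ["duration_ms"].agg(["count", "sum"]))
print(summary)   # frequency and total duration of looks per window
```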

19.
Visual crowding—the inability to see an object when it is surrounded by flankers in the periphery—does not block semantic activation: words rendered unrecognizable by visual crowding still generate robust semantic priming in subsequent lexical decision tasks. Building on this finding, the current study explored whether unrecognizable crowded words can be temporally integrated into a phrase. By showing one word at a time, we presented Chinese four-word idioms with either a congruent or an incongruent ending word, in order to examine whether the three preceding crowded words can be temporally integrated to form a semantic context that affects processing of the ending word. Results from both behavioral (Experiment 1) and event-related potential (Experiments 2 and 3) measures showed a congruency effect only in the non-crowded condition, which does not support the existence of unconscious multi-word integration. Beyond four-word idioms, we also found that two-word (modifier + adjective combination) integration—the simplest kind of temporal semantic integration—did not occur under visual crowding (Experiment 4). Our findings suggest that integration of temporally separated words may require conscious awareness, at least under the timing conditions tested in the current study.

20.
Chimpanzee (Pan troglodytes) agonistic screams are graded vocal signals produced in a context-specific manner. Screams given by aggressors and victims can be discriminated by their acoustic structure, but the mechanisms by which listeners comprehend these calls are currently unknown. In this study, we show that chimpanzees extract social information from these vocal signals that, combined with their more general social knowledge, enables them to understand the nature of out-of-sight social interactions. In playback experiments, we broadcast congruent and incongruent sequences of agonistic calls and monitored the responses of bystanders. Congruent sequences were in accordance with existing social dominance relations; incongruent ones violated them. Subjects looked significantly longer at incongruent sequences, despite these being acoustically less salient (fewer call types from fewer individuals) than congruent ones. We conclude that chimpanzees categorised an apparently simple acoustic signal into victim and aggressor screams and used pragmatics to form inferences about third-party interactions they could not see.
