Similar Articles
20 similar articles found.
1.
This study examined the effects of hand movement on the visual perception of 3-D movement. I used an apparatus in which the position of a cursor in a simulated 3-D space and the position of a stylus on a haptic device could be made to coincide using a mirror. In three experiments, participants touched the center of a rectangle in the visual display with the stylus of the force-feedback device. The rectangle's surface then stereoscopically either protruded toward the participant or indented away from the participant. Simultaneously, the stylus either pushed the participant's hand back, pulled it away, or remained static. Visual and haptic information were manipulated independently. Participants judged whether the rectangle visually protruded or indented. Results showed that when the hand was pulled away, participants were biased to perceive the rectangle as indented; however, when the hand was pushed back, no effect of haptic information was observed (Experiment 1). This effect persisted even when the cursor position was spatially separated from the hand position (Experiment 2). However, when participants touched an object different from the visual stimulus, the effect disappeared (Experiment 3). These results suggest that the visual system attempts to integrate dynamic visual and haptic information when the two cognitively coincide, and that the effect of haptic information on visually perceived depth is direction-dependent.

2.
SD Kelly, BC Hansen, DT Clark. PLoS One. 2012;7(8):e42620
Co-speech hand gestures influence language comprehension. The present experiment explored which part of the visual processing system is optimized for processing these gestures. Participants viewed short video clips of speech and gestures (e.g., a person saying "chop" or "twist" while making a chopping gesture) and had to determine whether the two modalities were congruent or incongruent. Gesture videos were designed to stimulate the parvocellular or magnocellular visual pathways by filtering out low or high spatial frequencies (yielding high-spatial-frequency, HSF, versus low-spatial-frequency, LSF, stimuli) at two levels of degradation severity (moderate and severe). Participants were less accurate and slower at processing gesture and speech at severe versus moderate levels of degradation. In addition, they were slower for LSF versus HSF stimuli, and this difference was most pronounced in the severely degraded condition. However, exploratory item analyses showed that the HSF advantage was modulated by the range of motion and the amount of motion energy in each video. The results suggest that hand gestures exploit a wide range of spatial frequencies and that, depending on which frequencies carry the most motion energy, the parvocellular or magnocellular visual pathway is engaged to extract meaning quickly and optimally.
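The spatial-frequency manipulation described above (removing low or high frequencies from each video frame) can be illustrated with a short sketch. The following Python code is a minimal, hypothetical example of FFT-based filtering, not the authors' actual stimulus pipeline; the Gaussian filter shape, the 2 cycles/degree cutoff, and the pixels-per-degree value are illustrative assumptions.

```python
# Minimal sketch of FFT-based spatial-frequency filtering of one video
# frame, illustrating the HSF/LSF stimulus construction described above.
# The Gaussian envelope and all parameter values are assumptions, not the
# authors' actual settings.
import numpy as np

def filter_spatial_frequencies(frame, cutoff_cpd, pixels_per_degree, keep="high"):
    """Remove low (keep='high') or high (keep='low') spatial frequencies."""
    rows, cols = frame.shape
    fy = np.fft.fftfreq(rows)[:, None]   # vertical frequency, cycles/pixel
    fx = np.fft.fftfreq(cols)[None, :]   # horizontal frequency, cycles/pixel
    radius_cpd = np.sqrt(fx**2 + fy**2) * pixels_per_degree  # cycles/degree
    low_pass = np.exp(-(radius_cpd / cutoff_cpd)**2)         # smooth low-pass mask
    mask = low_pass if keep == "low" else 1.0 - low_pass
    return np.fft.ifft2(np.fft.fft2(frame) * mask).real

# Example: an HSF version of a random test frame, keeping only content
# above a hypothetical 2 cycles/degree cutoff.
frame = np.random.rand(240, 320)
hsf_frame = filter_spatial_frequencies(frame, cutoff_cpd=2.0,
                                       pixels_per_degree=30.0, keep="high")
```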

3.

Background

This fMRI study investigated the influence of the visual environment on the process of symbolic gesture recognition. Emblems are semiotic gestures that use movements or hand postures to symbolically encode and communicate meaning, independently of language. They often require contextual information to be correctly understood. Until now, the observation of symbolic gestures had been studied against a blank background, in which the meaning and intentionality of the gesture could not be fully realized.

Methodology/Principal Findings

Normal subjects were scanned while observing short videos of an individual performing symbolic gestures with or without the corresponding visual context, and of the context scenes without gestures. The comparison between gestures, regardless of context, demonstrated increased activity in the inferior frontal gyrus, the superior parietal cortex, and the temporoparietal junction in the right hemisphere, and in the precuneus and posterior cingulate bilaterally, whereas the comparison between context and gestures alone did not recruit any of these regions.

Conclusions/Significance

These areas seem to be crucial for inferring intentions from symbolic gestures observed in their natural context, and they represent an interrelated network formed by components of the putative human mirror neuron system as well as of the mentalizing system.

4.
Humans form impressions of others by associating persons (faces) with negative or positive social outcomes. This learning process has been referred to as social conditioning. In everyday life, affective nonverbal gestures may constitute important social signals cueing threat or safety, and may therefore support such learning processes. In conventional aversive conditioning, studies using electroencephalography to investigate the visuocortical processing of visual stimuli paired with danger cues such as aversive noise have demonstrated facilitated processing and enhanced sensory gain in visual cortex. The present study aimed to extend this line of research to the field of social conditioning by pairing neutral face stimuli with affective nonverbal gestures. To this end, the electrocortical processing of faces serving as different conditioned stimuli was investigated in a differential social conditioning paradigm. Behavioral ratings and steady-state visually evoked potentials (ssVEPs) were recorded in twenty healthy human participants, who underwent a differential conditioning procedure in which three neutral faces were paired with pictures of negative (raised middle finger), neutral (pointing), or positive (thumbs-up) gestures. As expected, faces associated with the aversive hand gesture (raised middle finger) elicited larger ssVEP amplitudes during conditioning. Moreover, these faces were rated as more arousing and unpleasant. These results suggest that cortical engagement in response to faces aversively conditioned with nonverbal gestures is facilitated in order to establish persistent vigilance for social threat-related cues. This form of social conditioning makes it possible to establish a predictive relationship between social stimuli and motivationally relevant outcomes.
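As a rough illustration of the ssVEP measure used here, the amplitude of the EEG spectrum at the flicker (driving) frequency can be read out from an FFT. The sketch below is a generic, hypothetical Python example; the 15 Hz driving frequency and 500 Hz sampling rate are assumptions, not values taken from the study.

```python
# Minimal sketch of ssVEP amplitude extraction: read the single-sided
# amplitude spectrum of an EEG epoch at the stimulation frequency.
# The sampling rate and flicker frequency are assumed values.
import numpy as np

def ssvep_amplitude(epoch, fs, target_hz):
    """Amplitude of one EEG epoch at the driving frequency (nearest FFT bin)."""
    n = epoch.size
    amplitude = 2.0 * np.abs(np.fft.rfft(epoch)) / n   # single-sided spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return amplitude[np.argmin(np.abs(freqs - target_hz))]

# Example with synthetic data: a 15 Hz oscillation buried in noise.
fs, flicker_hz = 500.0, 15.0                # assumed parameters
t = np.arange(0, 4.0, 1.0 / fs)             # one 4 s epoch
epoch = 0.5 * np.sin(2 * np.pi * flicker_hz * t) + np.random.randn(t.size)
print(ssvep_amplitude(epoch, fs, flicker_hz))
```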

5.
6.
7.
Varieties of nonmanipulative motor responses were observed in chimpanzees and squirrel monkeys. Chimpanzees displayed a right-hand preference for touching their inanimate environments but used their right and left hands equally for touching their faces and bodies. The latter result was not consistent with previous reports of a left-hand preference for face touching in apes. The right-hand preference for environmental touching was stronger in male than in female chimpanzees. Squirrel monkeys had a right-side preference for combined hand and foot responses directed to their bodies, but expressed no handedness for environmentally directed touching. These limb preferences in chimpanzees and squirrel monkeys indicate that neither precise, complex manipulation nor postural instability is a necessary condition for population-level hand preferences. Factor analysis of the chimpanzee manual responses showed distinct self-directed and environmentally directed factors. Analysis of the squirrel monkey data also showed self and environmental factors, except that body scratching had a negative loading on the environmental factor. This latter result suggests that self-scratching by squirrel monkeys is a displacement activity that suppresses manual exploration of the environment. © 1992 Wiley-Liss, Inc.

8.
Dyslexic children, besides difficulties in mastering literacy, also show poor postural control, which might be related to how sensory cues coming from different sensory channels are integrated into proper motor activity. The aim of this study was therefore to examine the relationship between sensory information and body sway in dyslexic children, with visual and somatosensory information manipulated independently and concurrently. Thirty dyslexic and 30 non-dyslexic children were asked to stand as still as possible inside a moving room for 60 seconds, either with eyes closed or open, and either lightly touching a movable surface or not, under five experimental conditions: (1) no vision and no touch; (2) moving room; (3) moving bar; (4) moving room and stationary touch; and (5) stationary room and moving bar. Body sway magnitude and the relationship between room/bar movement and body sway were examined. Results showed that dyslexic children swayed more than non-dyslexic children in all sensory conditions. Moreover, in trials with conflicting vision and touch manipulation, dyslexic children's sway was less coherent with the stimulus manipulation than that of non-dyslexic children. Finally, dyslexic children showed higher body sway variability and applied greater force while touching the bar than non-dyslexic children. Based on these results, we suggest that dyslexic children are able to use visual and somatosensory information to control their posture and use the same underlying neural control processes as non-dyslexic children. However, dyslexic children show poorer performance and more variability in relating visual and somatosensory information to motor action, even in a task that does not require active cognitive and motor involvement. Further, in sensory conflict conditions, dyslexic children showed less coherent and more variable body sway. These results suggest that dyslexic children have difficulties in multisensory integration because they may struggle to integrate sensory cues coming from multiple sources.
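One common way to quantify the coupling between room movement and body sway in moving-room paradigms is the coherence between the two signals at the driving frequency. The sketch below is a generic, hypothetical illustration of that analysis; the 0.2 Hz room frequency, 100 Hz sampling rate, and simulated signals are assumptions, not the study's data.

```python
# Minimal sketch: magnitude-squared coherence between room displacement
# and body sway at the room's driving frequency. All signal parameters
# here are simulated assumptions for illustration only.
import numpy as np
from scipy.signal import coherence

fs, room_hz = 100.0, 0.2                       # assumed sampling rate and room frequency
t = np.arange(0, 60.0, 1.0 / fs)               # one 60 s trial
room = np.sin(2 * np.pi * room_hz * t)         # room displacement
sway = 0.4 * np.sin(2 * np.pi * room_hz * t - 0.5)  # sway coupled to the room, with lag
sway += 0.3 * np.random.randn(t.size)          # postural noise

freqs, coh = coherence(room, sway, fs=fs, nperseg=2048)
idx = np.argmin(np.abs(freqs - room_hz))
print(f"coherence at {room_hz} Hz: {coh[idx]:.2f}")  # near 1 = sway tightly coupled to room
```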

9.
The hypothesis that the ability to coordinate information between the tactual and visual modalities is present at birth and depends on inherent perceptual structures was tested in human newborns. Using an intersensory paired-preference procedure, we showed that newborns can visually recognize the shape of an object that they have previously manipulated with their right hand, out of sight. This is experimental evidence that newborns can extract shape information in a tactual format and transform it into a visual format before they have had the opportunity to learn from the pairing of visual and tactual experience. This is contrary to a host of theories and models of perceptual learning, both traditional (empiricist philosophers) and modern (connectionist).

10.

Background

It has been reported that participants judge an object to be closer after touching it with a stick than after touching it with the hand. In this study, we try to find out why this is so.

Methodology

We showed six participants a cylindrical object on a table. On separate trials (randomly intermixed), participants either estimated verbally how far the object was from their body or touched a remembered location. Touching was done either with the hand or with a stick (in separate blocks). In three different sessions, participants touched either the object's location or the location halfway to it. Verbal judgments were given either in centimeters or in terms of whether the object would be reachable with the hand. No differences in verbal distance judgments or touching responses were found between the blocks in which the stick or the hand was used.

Conclusion

Instead of finding out why the judged distance changes when using a tool, we found that using a stick does not necessarily alter judged distances or judgments about the reachability of objects.

11.
Gestural communication in a group of 19 captive chimpanzees (Pan troglodytes) was observed, with particular attention paid to gesture sequences (combinations). A complete inventory of gesture sequences is reported. The majority of these sequences were repetitions of the same gestures, which were often tactile gestures and often occurred in play contexts. Other sequences combined gestures within a modality (visual, auditory, or tactile) or across modalities. The emergence of gesture sequences was ascribed to a recipient's lack of responsiveness rather than to a premeditated combination of gestures intended to increase the efficiency of particular gestures. In terms of audience effects, the chimpanzees were sensitive to the attentional state of the recipient and therefore used visually based gestures mostly when others were already attending, as opposed to tactile gestures, which were used regardless of whether the recipient was attending. However, the chimpanzees did not use gesture sequences in which the first gesture served to attract the recipient's visual attention before a second, visually based gesture was produced. Instead, they used other strategies, such as locomoting in front of the recipient before producing a visually based gesture.

12.
Although visual information seems to affect thermal perception (e.g., the color red is associated with heat), previous studies have failed to demonstrate an interaction between the visual and thermal senses. However, it has been reported that humans feel an illusory thermal sensation in conjunction with an apparently thermal visual stimulus placed on a prosthetic hand in the rubber hand illusion (RHI), wherein an individual feels that a prosthetic (rubber) hand belongs to him or her. This study tests the possibility that ownership of the body surface on which a visual stimulus is placed increases the likelihood of a visual-thermal interaction. We orthogonally manipulated three variables: induced hand ownership, visually presented thermal information, and tactually presented physical thermal information. Results indicated that the sight of an apparently thermal object on a rubber hand that is illusorily perceived as one's own hand affects thermal judgments about the object physically touching that hand. This effect was not observed without the RHI. The importance, for the visual-thermal interaction, of ownership of the body part touched by the visual object is discussed.

13.
The topography of the somatosensory maps of our body can be largely shaped by alterations of peripheral sensory inputs. Following hand amputation, the hand's cortical territory becomes responsive to facial cutaneous stimulation. Amputation-induced remapping, however, reverses after transplantation, as the grafted hand (re)gains its sensorimotor representation. Here, we investigate hand tactile perception in a former amputee by touching either grafted hand singly or in combination with another body part. The results showed that tactile sensitivity recovered rapidly, being remarkably good 5 months after transplant. In the right grafted hand, however, the newly acquired somatosensory awareness was strikingly hampered when the ipsilateral face was touched simultaneously; that is, right-face perception extinguished right-hand perception. Ipsilateral face-hand extinction was present in the formerly dominant right hand 5 months after transplant and eventually disappeared 6 months later. Results from control conditions showed that right-hand tactile awareness was extinguished neither by contralateral left-face and left-hand stimulation nor by ipsilateral stimulation of the arm, which is bodily close to, but cortically far from, the hand. We suggest that ipsilateral face-hand extinction is a perceptual counterpart of the remapping that occurs after allograft and bears witness to the inherently competitive nature of sensory representations.

14.
Reconstructing the ancient technical gestures associated with simple tool actions is crucial for understanding the co-evolution of the human forelimb and its associated control-related cognitive functions on the one hand, and of the human technological arsenal on the other. Although gesture is an old topic in Paleolithic archaeology and in anthropology in general, very few studies have taken advantage of new technologies from the science of kinematics to improve replicative experimental protocols. Recent work in paleoanthropology has shown the potential of monitored replicative experiments for reconstructing tool-use-related motions through the study of fossil bones, but so far comparatively little has been done to examine the dynamics of the tool itself. In this paper, we demonstrate that we can statistically differentiate the gestures used in a simple scraping task through dynamic monitoring. Dynamics combines kinematics (position, orientation, and speed) with contact mechanical parameters (force and torque). Taken together, these parameters are important because they play a role in the formation of a visible archaeological signature: use-wear. We present a new, affordable yet precise methodology for measuring the dynamics of a simple hide-scraping task carried out using a pull-to (PT) and a push-away (PA) gesture. A strain-gage force sensor combined with a visual tag-tracking system records force and torque, as well as the position and orientation of hafted flint stone tools. The setup allows switching between two tool configurations, one with distal and the other with perpendicular hafting of the scrapers, to allow for ethnographically plausible reconstructions. The data show statistically significant differences between the two gestures: scraping away from the body (PA) generates higher shearing forces but requires greater hand torque. Moreover, most benchmarks associated with the PA gesture are more variable than in the PT gesture. These results demonstrate that different gestures used in 'common' prehistoric tasks can be distinguished quantitatively on the basis of their dynamic parameters. Future research needs to assess our ability to reconstruct these parameters from observed use-wear patterns.
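The kind of statistical differentiation reported above, with PA strokes generating higher shearing forces than PT strokes, can be illustrated with a simple two-sample comparison. The sketch below uses simulated force values and a Welch's t-test as a stand-in analysis; only the comparison pattern, not the numbers or the specific test, is taken from the study.

```python
# Minimal sketch: comparing peak shear force between pull-to (PT) and
# push-away (PA) scraping strokes with Welch's t-test. The force values
# are simulated assumptions, not measured data from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pt_force = rng.normal(loc=18.0, scale=2.5, size=30)  # hypothetical PT peak forces (N)
pa_force = rng.normal(loc=24.0, scale=4.0, size=30)  # hypothetical PA peak forces (N), more variable

t_stat, p_value = stats.ttest_ind(pa_force, pt_force, equal_var=False)  # Welch's t-test
print(f"PA - PT mean difference: {pa_force.mean() - pt_force.mean():.1f} N, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```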

15.
Research on sensory perception now often considers more than one sense at a time. This approach reflects real-world situations, such as when a visible object touches us. Indeed, vision and touch show great interdependence: the sight of a body part can reduce tactile target detection times [1], the visual and tactile attentional systems are spatially linked [2], and the texture of surfaces actively touched with the fingertips is perceived using both vision and touch [3]. However, these previous findings might be mediated by spatial attention [1, 2] or by improved guidance of movement [3] via visually enhanced body position sense [4-6]. Here, we investigate the direct effects of viewing the body on passive touch. We measured tactile two-point discrimination thresholds [7] on the forearm while manipulating the visibility of the arm but holding gaze direction constant. The spatial resolution of touch was better when the arm was visible than when it was not. Tactile performance was further improved when the view of the arm was magnified. In contrast, performance was not improved by viewing a neutral object at the arm's location, ruling out improved spatial orienting as a possible account. Controls confirmed that visibility of the arm provided no information about the tactile stimulation. This visual enhancement of touch may point to online reorganization of tactile receptive fields.
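A two-point discrimination threshold of the kind measured here is typically estimated by fitting a psychometric function to proportion-correct data across probe separations. The sketch below is a generic, hypothetical Python example with invented data; it is not the authors' procedure.

```python
# Minimal sketch: estimating a two-point discrimination threshold by
# fitting a logistic psychometric function (guess rate 0.5) to invented
# proportion-"two points" data across probe separations.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(sep_mm, threshold, slope):
    """P('two points'), rising from 0.5 (chance) to 1.0; equals 0.75 at threshold."""
    return 0.5 + 0.5 / (1.0 + np.exp(-slope * (sep_mm - threshold)))

separations = np.array([10, 20, 30, 40, 50, 60])         # mm between probe tips (invented)
p_two = np.array([0.52, 0.55, 0.70, 0.85, 0.95, 0.99])   # proportion 'two' responses (invented)

params, _ = curve_fit(psychometric, separations, p_two, p0=[35.0, 0.2])
print(f"estimated threshold: {params[0]:.1f} mm")        # separation at 75% correct
```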

16.
Using a free-choice task, visual preference was estimated in five adult chimpanzees (Pan troglodytes). The subjects were presented with digitized color photographs of various species of primates on a CRT screen. Their touching responses to the photographs were reinforced with a food reward irrespective of which photographs they touched. The results revealed that all chimpanzees touched the photographs of humans significantly more than those of any other species or phylogenetic family of primates. This tendency was consistent across different stimulus sets. The results suggest that the chimpanzees showed a visual preference for photographs of humans over those of their own species, and that the degree of this preference was not in accordance with phylogenetic distance from the subjects' own species. The preference for humans was stronger for color photographs than for monochromatic ones. All five chimpanzees had been in captivity for at least 16 years. They had been reared by humans from just after birth, or at least from 1.5 years of age. Their preference might have developed through social experience, especially during infancy.

17.
The present study investigates whether producing gestures facilitates route learning in a navigation task and whether this facilitation effect is comparable to that of hand movements that leave visible physical traces. In two experiments, we focused on gestures produced without accompanying speech, i.e., co-thought gestures (e.g., an index finger tracing the spatial sequence of a route in the air). Adult participants were asked to study routes shown in four diagrams, one at a time. Participants reproduced the routes (verbally in Experiment 1 and non-verbally in Experiment 2) without rehearsal, or after rehearsing by mentally simulating the route, by drawing it, or by gesturing it (either in the air or on paper). Participants who moved their hands (either in the form of gestures or drawing) recalled better than those who mentally simulated the routes and those who did not rehearse, suggesting that hand movements produced during rehearsal facilitate route learning. Interestingly, participants who gestured the routes in the air or on paper recalled better than those who drew them on paper in both experiments, suggesting that the facilitation effect of co-thought gesture holds for both verbal and nonverbal recall modalities. This is possibly because co-thought gesture, as a kind of representational action, consolidates spatial sequences better than drawing does and thus exerts a more powerful influence on spatial representation.

18.
How do animals determine when others are able and disposed to receive their communicative signals? In particular, it is futile to make a silent gesture when the intended audience cannot see it. Some non-human primates use the head and body orientation of their audience to infer visual attentiveness when signalling, but whether species that rely less on visual information use such cues when producing visual signals is unknown. Here, we test whether African elephants (Loxodonta africana) are sensitive to the visual perspective of a human experimenter. We examined whether the frequency of head and trunk gestures, produced to request food, was influenced by indications of the experimenter's visual attention. Elephants signalled significantly more towards the experimenter when her face was oriented towards them, except when her body faced away from them. These results suggest that elephants understand the importance of visual attention for effective communication.

19.
How do I know the person I see in the mirror is really me? Is it because I know the person simply looks like me, or is it because the mirror reflection moves when I move, and I see it being touched when I myself feel touch? Studies of face recognition suggest that visual recognition of stored visual features informs self-face recognition. In contrast, body-recognition studies conclude that multisensory integration is the main cue to selfhood. The present study investigates for the first time the specific contribution of current multisensory input to self-face recognition. Participants were stroked on the face while they looked at a morphed face being touched in synchrony or asynchrony. Before and after the visuo-tactile stimulation, participants performed a self-recognition task. The results show that multisensory signals have a significant effect on self-face recognition. Synchronous tactile stimulation while watching another person's face being similarly touched produced a bias in recognizing one's own face, in the direction of the other person being included in the representation of one's own face. Multisensory integration can update cognitive representations of one's body, such as the sense of ownership. The present study extends this converging evidence by showing that the correlation of synchronous multisensory signals also updates the representation of one's face. The face is a key feature of our identity, but at the same time it is a source of rich multisensory experiences used to maintain or update self-representations.

20.

Background

Although gestural communication is widespread in primates, few studies have focused on the cognitive processes underlying the gestures produced by monkeys.

Methodology/Principal Findings

The present study asked whether red-capped mangabeys (Cercocebus torquatus) trained to produce visually based requesting gestures modify their gestural behavior in response to a human's attentional state. The experimenter held a food item and displayed five different attentional states that differed in body, head, and gaze orientation; the mangabeys had to request food by extending an arm toward the food item (a begging gesture). The mangabeys were sensitive, at least to some extent, to the human's attentional state. They reacted to some postural cues of the human recipient: they gestured more, and faster, when both the body and the head of the experimenter were oriented toward them than when they were oriented away. However, they did not seem to use gaze cues to recognize an attentive human: the monkeys begged at similar levels regardless of the state of the experimenter's eyes.

Conclusions/Significance

These results indicate that the mangabeys lowered their production of begging gestures when these could not be perceived by the human who had to respond to them. This finding provides important evidence that the acquired begging gestures of monkeys might be used intentionally.
