Similar Articles
 20 similar articles found (search time: 31 ms)
1.

Background

The humanoid robot WE4-RII was designed to express human emotions in order to improve human-robot interaction. We can read the emotions depicted in its gestures, yet we may rely on different neural processes than those used to read emotions in human agents.

Methodology

Here, fMRI was used to assess how brain areas activated by the perception of human basic emotions (facial expression of Anger, Joy, Disgust) and silent speech respond to a humanoid robot impersonating the same emotions, while participants were instructed to attend either to the emotion or to the motion depicted.

Principal Findings

Increased responses to robot compared to human stimuli in the occipital and posterior temporal cortices suggest additional visual processing when perceiving a mechanical anthropomorphic agent. In contrast, activity in cortical areas endowed with mirror properties, like left Broca's area for the perception of speech, and in areas involved in the processing of emotions, like the left anterior insula for the perception of disgust and the orbitofrontal cortex for the perception of anger, is reduced for robot stimuli, suggesting lesser resonance with the mechanical agent. Finally, instructions to explicitly attend to the emotion significantly increased response to robot, but not human, facial expressions in the anterior part of the left inferior frontal gyrus, a neural marker of motor resonance.

Conclusions

Motor resonance towards a humanoid robot's, but not a human's, display of facial emotion is increased when attention is directed towards judging emotions.

Significance

Artificial agents can be used to assess how factors like anthropomorphism affect neural response to the perception of human actions.

2.
When we observe a motor act (e.g. grasping a cup) done by another individual, we extract, according to how the motor act is performed and its context, two types of information: the goal (grasping) and the intention underlying it (e.g. grasping for drinking). Here we examined whether children with autistic spectrum disorder (ASD) are able to understand these two aspects of motor acts. Two experiments were carried out. In the first, one group of high-functioning children with ASD and one of typically developing (TD) children were presented with pictures showing hand-object interactions and asked what the individual was doing and why. In half of the “why” trials the observed grip was congruent with the function of the object (“why-use” trials), in the other half it corresponded to the grip typically used to move that object (“why-place” trials). The results showed that children with ASD have no difficulties in reporting the goals of individual motor acts. In contrast they made several errors in the why task with all errors occurring in the “why-place” trials. In the second experiment the same two groups of children saw pictures showing a hand-grip congruent with the object use, but within a context suggesting either the use of the object or its placement into a container. Here children with ASD performed as TD children, correctly indicating the agent's intention. In conclusion, our data show that understanding others' intentions can occur in two ways: by relying on motor information derived from the hand-object interaction, and by using functional information derived from the object's standard use. Children with ASD have no deficit in the second type of understanding, while they have difficulties in understanding others' intentions when they have to rely exclusively on motor cues.

3.
Our long-term goal is to enable a robot to engage in partner dance for use in rehabilitation therapy, assessment, diagnosis, and scientific investigations of two-person whole-body motor coordination. Partner dance has been shown to improve balance and gait in people with Parkinson's disease and in older adults, which motivates our work. During partner dance, dance couples rely heavily on haptic interaction to convey motor intent such as speed and direction. In this paper, we investigate the potential for a wheeled mobile robot with a human-like upper-body to perform partnered stepping with people based on the forces applied to its end effectors. Blindfolded expert dancers (N=10) performed a forward/backward walking step to a recorded drum beat while holding the robot's end effectors. We varied the admittance gain of the robot's mobile base controller and the stiffness of the robot's arms. The robot followed the participants with low lag (M=224, SD=194 ms) across all trials. High admittance gain and high arm stiffness conditions resulted in significantly improved performance with respect to subjective and objective measures. Biomechanical measures such as the human hand to human sternum distance, center-of-mass of leader to center-of-mass of follower (CoM-CoM) distance, and interaction forces correlated with the expert dancers' subjective ratings of their interactions with the robot, which were internally consistent (Cronbach's α=0.92). In response to a final questionnaire, 1/10 expert dancers strongly agreed, 5/10 agreed, and 1/10 disagreed with the statement "The robot was a good follower." 2/10 strongly agreed, 3/10 agreed, and 2/10 disagreed with the statement "The robot was fun to dance with." The remaining participants were neutral with respect to these two questions.
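The abstract varies the admittance gain of the robot's mobile base controller. As a rough illustration of what such a controller does (not the authors' implementation, and with made-up gain, damping, and force values), an admittance law turns measured end-effector forces into a base velocity command:

```python
import numpy as np

def admittance_step(force_xy, v, gain=0.02, damping=5.0, dt=0.01):
    """One step of a simple admittance law: interaction forces measured at the
    end effectors become a planar velocity command for the mobile base.

    force_xy : measured force [N] in the base frame (2-vector)
    v        : current commanded base velocity [m/s] (2-vector)
    gain     : admittance gain (higher -> the base yields more readily)
    damping  : virtual damping opposing motion
    All numbers are placeholders, not parameters from the study.
    """
    # Virtual unit mass: dv/dt = gain * F - damping * v
    dv = (gain * np.asarray(force_xy) - damping * np.asarray(v)) * dt
    return np.asarray(v) + dv

# A steady 10 N forward pull drives the base toward gain*F/damping = 0.04 m/s.
v = np.zeros(2)
for _ in range(200):            # 2 s of simulated interaction
    v = admittance_step([10.0, 0.0], v)
print(v)                        # approximately [0.04, 0.0]
```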

4.
Prediction of “when” a partner will act and “what” he is going to do is crucial in joint-action contexts. However, studies on face-to-face interactions in which two people have to mutually adjust their movements in time and space are lacking. Moreover, while studies on passive observation have shown that somato-motor simulative processes are disrupted when the observed actor is perceived as an out-group or unfair individual, the impact of interpersonal perception on joint-actions has never been directly addressed. Here we explored this issue by comparing the ability of pairs of participants who did or did not undergo an interpersonal perception manipulation procedure to synchronise their reach-to-grasp movements during: i) a guided interaction, requiring pure temporal reciprocal coordination, and ii) a free interaction, requiring both time and space adjustments. Behavioural results demonstrate that while in neutral situations free and guided interactions are equally challenging for participants, a negative interpersonal relationship improves performance in guided interactions at the expense of the free interactive ones. This was paralleled at the kinematic level by the absence of movement corrections and by low movement variability in these participants, indicating that partners cooperating within a negative interpersonal bond executed the cooperative task on their own, without reciprocally adapting to the partner's motor behaviour. Crucially, participants' performance in the free interaction improved in the manipulated group during the second experimental session while partners became interdependent as suggested by higher movement variability and by the appearance of interference between the self-executed actions and those observed in the partner. Our study expands current knowledge about on-line motor interactions by showing that visuo-motor interference effects, mutual motor adjustments and motor-learning mechanisms are influenced by social perception.

5.
Recent findings in neuroscience suggest an overlap between brain regions involved in the execution of movement and perception of another's movement. This so-called "action-perception coupling" is supposed to serve our ability to automatically infer the goals and intentions of others by internal simulation of their actions. A consequence of this coupling is motor interference (MI), the effect of movement observation on the trajectory of one's own movement. Previous studies emphasized that various features of the observed agent determine the degree of MI, but could not clarify how human-like an agent has to be for its movements to elicit MI and, more importantly, what 'human-like' means in the context of MI. Thus, we investigated in several experiments how different aspects of appearance and motility of the observed agent influence MI. Participants performed arm movements in horizontal and vertical directions while observing videos of a human, a humanoid robot, or an industrial robot arm with either artificial (industrial) or human-like joint configurations. Our results show that, given a human-like joint configuration, MI was elicited by observing arm movements of both humanoid and industrial robots. However, if the joint configuration of the robot did not resemble that of the human arm, MI could no longer be demonstrated. Our findings present evidence for the importance of human-like joint configuration rather than other human-like features for perception-action coupling when observing inanimate agents.

6.
Mirror neurons are visuo-motor neurons found in primates and thought to be significant for imitation learning. The proposition that mirror neurons result from associative learning while the neonate observes his own actions has received noteworthy empirical support. Self-exploration is regarded as a procedure by which infants become perceptually observant of their own body and engage in a perceptual communication with themselves. We assume that a crude sense of self is the prerequisite for social interaction. However, the contribution of mirror neurons to encoding the perspective from which the motor acts of others are seen has not been addressed in relation to humanoid robots. In this paper we present a computational model for the development of a mirror neuron system (MNS) for humanoid robots, based on the hypothesis that infants acquire the MNS by sensorimotor associative learning through self-exploration, which is capable of sustaining early imitation skills. The purpose of our proposed model is to take into account the view-dependency of neurons as a probable outcome of the associative connectivity between motor and visual information. In our experiment, a humanoid robot stands in front of a mirror (represented through a self-image captured by a camera) in order to obtain the associative relationship between its own motor-generated actions and its own visual body-image. In the learning process the network first forms a mapping from each motor representation onto the visual representation from the self-exploratory perspective. Afterwards, the representation of the motor commands is learned to be associated with all possible visual perspectives. The complete architecture was evaluated by simulation experiments performed on the DARwIn-OP humanoid robot.
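The described model associates self-generated motor commands with the visual body image seen in the mirror. A minimal sketch of sensorimotor associative (Hebbian) learning in that spirit, with arbitrary dimensions and a simulated visual mapping standing in for the authors' network, is shown below:

```python
import numpy as np

rng = np.random.default_rng(0)
n_motor, n_visual = 20, 50            # dimensions chosen arbitrarily

# A stand-in for "what the mirror shows" when a motor command is executed.
body_image = rng.normal(size=(n_visual, n_motor))

def hebbian_update(W, motor_vec, visual_vec, lr=0.01):
    """Outer-product Hebbian rule: strengthen connections between
    co-active motor and visual units."""
    return W + lr * np.outer(visual_vec, motor_vec)

# Self-exploration phase: random motor babbling paired with the observed image.
W = np.zeros((n_visual, n_motor))
for _ in range(500):
    m = rng.normal(size=n_motor)      # self-generated motor command
    v = body_image @ m                # visual consequence seen in the mirror
    W = hebbian_update(W, m, v)

# After learning, a motor command predicts its own visual consequence.
m_test = rng.normal(size=n_motor)
v_pred, v_true = W @ m_test, body_image @ m_test
print(np.corrcoef(v_pred, v_true)[0, 1])   # correlation close to 1
```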

7.
In the last few years there has been increasing interest in building companion robots that interact in a socially acceptable way with humans. In order to interact in a meaningful way, a robot has to convey intentionality and emotions of some sort to increase believability. We suggest that human-robot interaction should be considered as a specific form of inter-specific interaction and that human–animal interaction can provide a useful biological model for designing social robots. Dogs can provide a promising biological model since during the domestication process dogs were able to adapt to the human environment and to participate in complex social interactions. In this observational study we propose to design emotionally expressive behaviour of robots using the behaviour of dogs as inspiration and to test these dog-inspired robots with humans in an inter-specific context. In two experiments (wizard-of-oz scenarios) we examined humans' ability to recognize two basic and a secondary emotion expressed by a robot. In Experiment 1 we provided our companion robot with two kinds of emotional behaviour (“happiness” and “fear”), and studied whether people attribute the appropriate emotion to the robot, and interact with it accordingly. In Experiment 2 we investigated whether participants tend to attribute guilty behaviour to a robot in a relevant context by examining whether, relying on the robot's greeting behaviour, human participants can detect if the robot has transgressed a predetermined rule. Results of Experiment 1 showed that people readily attribute emotions to a social robot and interact with it in accordance with the expressed emotional behaviour. Results of Experiment 2 showed that people are able to recognize if the robot transgressed on the basis of its greeting behaviour. In summary, our findings showed that dog-inspired behaviour is a suitable medium for making people attribute emotional states to a non-humanoid robot.

8.

Background

When our PC goes on strike again we tend to curse it as if it were a human being. Why and under which circumstances do we attribute human-like properties to machines? Although humans increasingly interact directly with machines, it remains unclear whether humans implicitly attribute intentions to them and, if so, whether such interactions resemble human-human interactions on a neural level. In social cognitive neuroscience the ability to attribute intentions and desires to others is referred to as having a Theory of Mind (ToM). With the present study we investigated whether an increase in the human-likeness of interaction partners modulates the participants' ToM-associated cortical activity.

Methodology/Principal Findings

By means of functional magnetic resonance imaging (subjects n = 20) we investigated cortical activity modulation during a highly interactive human-robot game. Increasing degrees of human-likeness for the game partner were introduced by means of a computer partner, a functional robot, an anthropomorphic robot and a human partner. The classical iterated prisoner's dilemma game was applied as the experimental task, which allowed for an implicit detection of ToM-associated cortical activity. During the experiment participants always played against a random sequence, unknowingly to them. Irrespective of the surmised interaction partners' responses, participants indicated having experienced more fun and competition in the interaction with increasing human-like features of their partners. Parametric modulation of the functional imaging data revealed a highly significant linear increase of cortical activity in the medial frontal cortex as well as in the right temporo-parietal junction in correspondence with the increase of human-likeness of the interaction partner (computer < functional robot < anthropomorphic robot < human).

Conclusions/Significance

Both regions correlating with the degree of human-likeness, the medial frontal cortex and the right temporo-parietal junction, have been associated with Theory of Mind. The results demonstrate that the tendency to build a model of another's mind linearly increases with its perceived human-likeness. Moreover, the present data provides first evidence of a contribution of higher human cognitive functions such as ToM in direct interactions with artificial robots. Our results shed light on the long-lasting psychological and philosophical debate regarding human-machine interaction and the question of what makes humans be perceived as human.
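The experimental task was the iterated prisoner's dilemma played, unknowingly to the participants, against a random sequence. A toy sketch of that paradigm using the conventional 3/0/5/1 payoff scheme (the abstract does not report the actual payoffs used) could look like this:

```python
import random

# Conventional prisoner's dilemma payoffs (assumed for illustration).
# Keys: (own choice, partner's choice); 'C' = cooperate, 'D' = defect.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def play_round(own_choice):
    """The 'partner' answers at random, mirroring the covert random sequence
    that participants actually played against."""
    partner_choice = random.choice(['C', 'D'])
    return PAYOFF[(own_choice, partner_choice)], partner_choice

total = 0
for _ in range(30):                           # arbitrary number of iterations
    points, _partner = play_round(random.choice(['C', 'D']))
    total += points
print("points earned:", total)
```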

9.
The goal of this work is to develop a humanoid robot's perceptual mechanisms through the use of learning aids. We describe methods to enable learning on a humanoid robot using learning aids such as books, drawing materials, boards, educational videos or other children's toys. Visual properties of objects are learned and inserted into a recognition scheme, which is then applied to acquire new object representations; we propose learning through developmental stages. Inspired by infant development, we also boost the robot's perceptual capabilities by having a human caregiver perform educational and play activities with the robot (such as drawing, painting or playing with a toy train on a railway). We describe original algorithms to extract meaningful percepts from such learning experiments. Experimental evaluation of the algorithms corroborates the theoretical framework.

10.
The ability to anticipate others' actions is crucial for social interaction. It has been shown that this ability relies on motor areas of the human brain that are not only active during action execution and action observation, but also during anticipation of another person's action. Recording electroencephalograms during a triadic social interaction, we assessed whether activation of motor areas pertaining to the human mirror-neuron system prior to action observation depends on the social relationship between the actor and the observer. Anticipatory motor activation was stronger when participants expected an interaction partner to perform a particular action than when they anticipated that the same action would be performed by a third person they did not interact with. These results demonstrate that social interaction modulates action simulation.

11.
Humanoid robots are designed and built to mimic human form and movement. Ultimately, they are meant to resemble the size and physical abilities of a human in order to function in human-oriented environments and to work autonomously, but to pose no physical threat to humans. Here, a humanoid robot that resembles a human in appearance and movement is built using powerful actuators paired with gear trains, joint mechanisms, and motor drivers that are all encased in a package no larger than that of the human physique. In this paper, we propose the construction of a humanoid-applicable anthropomorphic 7-DoF arm complete with an 8-DoF hand. The novel mechanical design of this humanoid arm makes it sufficiently compact to be compatible with currently available narrating-model humanoids, and sufficiently powerful and flexible to be functional; the number of degrees of freedom endowed in this robotic arm is sufficient for executing a wide range of tasks, including dexterous hand movements. The developed humanoid arm and hand are capable of sensing and interpreting incoming external force using the motor current in each joint, without conventional torque sensors. The humanoid arm adopts an algorithm to avoid obstacles and the dexterous hand is capable of grasping objects. The developed robotic arm is suitable for use in an interactive humanoid robot.
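The arm senses external force from the motor current in each joint rather than from dedicated torque sensors. The usual relation behind such an estimate is τ ≈ k_t · i scaled by the gear ratio; the sketch below uses placeholder motor constants, not values from the paper:

```python
def estimate_joint_torque(current_a, torque_constant=0.068, gear_ratio=100.0,
                          gear_efficiency=0.85, friction_torque=0.3):
    """Rough output-side torque estimate from measured motor current.

    current_a       : motor current [A]
    torque_constant : motor torque constant k_t [N*m/A]
    gear_ratio      : transmission reduction ratio
    gear_efficiency : fraction of torque transmitted through the gear train
    friction_torque : crude constant friction term subtracted out [N*m]
    All constants are placeholders, not values from the paper.
    """
    motor_torque = torque_constant * current_a
    return motor_torque * gear_ratio * gear_efficiency - friction_torque

# Example: 1.2 A on a 100:1 joint suggests roughly 6.6 N*m of external load.
print(estimate_joint_torque(1.2))
```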

12.
Parkinson's disease (PD) is characterized by typical extrapyramidal motor features and increasingly recognized non-motor symptoms such as working memory (WM) deficits. Using functional magnetic resonance imaging (fMRI), we investigated differences in neuronal activation during a motor WM task in 23 non-demented PD patients and 23 age- and gender-matched healthy controls. Participants had to memorize and retype variably long visuo-spatial stimulus sequences after short or long delays (immediate or delayed serial recall). PD patients showed deficient WM performance compared to controls, which was accompanied by reduced encoding-related activation in WM-related regions. Mirroring slower motor initiation and execution, reduced activation in motor structures such as the basal ganglia and superior parietal cortex was detected for both immediate and delayed recall. Increased activation in limbic, parietal and cerebellar regions was found during delayed recall only. Increased load-related activation for delayed recall was found in the posterior midline and the cerebellum. Overall, our results demonstrate that impairment of WM in PD is primarily associated with a widespread reduction of task-relevant activation, whereas additional parietal, limbic and cerebellar regions become more activated relative to matched controls. While the reduced WM-related activity mirrors the deficient WM performance, the additional recruitment may point to either dysfunctional compensatory strategies or detrimental crosstalk from “default-mode” regions, contributing to the observed impairment.

13.
Interactive behavior among humans is governed by the dynamics of movement synchronization in a variety of repetitive tasks. This requires the interaction partners to perform for example rhythmic limb swinging or even goal-directed arm movements. Inspired by that essential feature of human interaction, we present a novel concept and design methodology to synthesize goal-directed synchronization behavior for robotic agents in repetitive joint action tasks. The agents’ tasks are described by closed movement trajectories and interpreted as limit cycles, for which instantaneous phase variables are derived based on oscillator theory. Events segmenting the trajectories into multiple primitives are introduced as anchoring points for enhanced synchronization modes. Utilizing both continuous phases and discrete events in a unifying view, we design a continuous dynamical process synchronizing the derived modes. Inverse to the derivation of phases, we also address the generation of goal-directed movements from the behavioral dynamics. The developed concept is implemented to an anthropomorphic robot. For evaluation of the concept an experiment is designed and conducted in which the robot performs a prototypical pick-and-place task jointly with human partners. The effectiveness of the designed behavior is successfully evidenced by objective measures of phase and event synchronization. Feedback gathered from the participants of our exploratory study suggests a subjectively pleasant sense of interaction created by the interactive behavior. The results highlight potential applications of the synchronization concept both in motor coordination among robotic agents and in enhanced social interaction between humanoid agents and humans.
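The concept derives instantaneous phase variables for closed movement trajectories and synchronizes them through a continuous dynamical process. A minimal sketch in the spirit of coupled phase oscillators (not the authors' exact behavioral dynamics), with assumed frequencies and coupling gain, illustrates how a robot phase can lock onto a human partner's phase:

```python
import numpy as np

def simulate_phase_locking(omega_robot=2.0, omega_human=2.4, coupling=1.5,
                           dt=0.01, steps=2000):
    """The robot's phase is attracted to the human's phase:
        d(phi_r)/dt = omega_r + coupling * sin(phi_h - phi_r)
    The human is modeled as a free-running oscillator purely for illustration.
    """
    phi_r, phi_h = 0.0, np.pi            # start half a cycle apart
    phase_error = []
    for _ in range(steps):
        phi_r += (omega_robot + coupling * np.sin(phi_h - phi_r)) * dt
        phi_h += omega_human * dt
        phase_error.append(np.angle(np.exp(1j * (phi_h - phi_r))))  # wrapped error
    return np.array(phase_error)

err = simulate_phase_locking()
# With coupling > |omega_human - omega_robot|, the error settles to a small,
# constant offset: the two movements are phase locked.
print(err[0], err[-1])
```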

14.

Background

The observation of conspecifics influences our bodily perceptions and actions: Contagious yawning, contagious itching, or empathy for pain, are all examples of mechanisms based on resonance between our own body and others. While there is evidence for the involvement of the mirror neuron system in the processing of motor, auditory and tactile information, it has not yet been associated with the perception of self-motion.

Methodology/Principal Findings

We investigated whether viewing our own body, the body of another, and an object in motion influences self-motion perception. We found a visual-vestibular congruency effect for self-motion perception when observing self and object motion, and a reduction in this effect when observing someone else's body motion. The congruency effect was correlated with empathy scores, revealing the importance of empathy in mirroring mechanisms.

Conclusions/Significance

The data show that vestibular perception is modulated by agent-specific mirroring mechanisms. The observation of conspecifics in motion is an essential component of social life, and self-motion perception is crucial for the distinction between the self and the other. Finally, our results hint at the presence of a “vestibular mirror neuron system”.

15.
16.
Perception is fundamentally underconstrained because different combinations of object properties can generate the same sensory information. To disambiguate sensory information into estimates of scene properties, our brains incorporate prior knowledge and additional “auxiliary” (i.e., not directly relevant to the desired scene property) sensory information to constrain perceptual interpretations. For example, knowing the distance to an object helps in perceiving its size. The literature contains few demonstrations of the use of prior knowledge and auxiliary information in combined visual and haptic disambiguation and almost no examination of haptic disambiguation of vision beyond “bistable” stimuli. Previous studies have reported humans integrate multiple unambiguous sensations to perceive single, continuous object properties, like size or position. Here we test whether humans use visual and haptic information, individually and jointly, to disambiguate size from distance. We presented participants with a ball moving in depth with a changing diameter. Because no unambiguous distance information is available under monocular viewing, participants rely on prior assumptions about the ball's distance to disambiguate their size percept. Presenting auxiliary binocular and/or haptic distance information augments participants' prior distance assumptions and improves their size judgment accuracy, though binocular cues were trusted more than haptic ones. Our results suggest both visual and haptic distance information disambiguate size perception, and we interpret these results in the context of probabilistic perceptual reasoning.
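The study's logic is that the retinal angle constrains only the ratio of size to distance, so a prior or auxiliary distance cue is needed to recover size. A hedged sketch of that probabilistic reasoning, with grids and noise values chosen here for illustration rather than taken from the paper:

```python
import numpy as np

def size_posterior(visual_angle_rad, size_grid, distance_grid,
                   distance_prior_mean=1.0, distance_prior_sd=0.1,
                   angle_noise_sd=0.002):
    """Posterior over object size given an ambiguous visual angle.
    The angle only constrains the ratio size/distance (size ~= angle * distance
    for small angles); a Gaussian prior over distance resolves the ambiguity.
    All parameter values are illustrative, not fitted to the study's data.
    """
    D, S = np.meshgrid(distance_grid, size_grid)      # rows: size, cols: distance
    predicted_angle = S / D                            # small-angle approximation
    likelihood = np.exp(-0.5 * ((visual_angle_rad - predicted_angle)
                                / angle_noise_sd) ** 2)
    prior = np.exp(-0.5 * ((D - distance_prior_mean) / distance_prior_sd) ** 2)
    joint = likelihood * prior
    return joint.sum(axis=1) / joint.sum()             # marginalize over distance

sizes = np.linspace(0.01, 0.20, 200)                   # candidate diameters [m]
dists = np.linspace(0.3, 2.0, 200)                     # candidate distances [m]
posterior = size_posterior(0.05, sizes, dists)
print(sizes[np.argmax(posterior)])  # peaks near 0.05 m with a ~1 m distance prior
```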

17.
This article presents work carried out as part of the robot sécurisé d’assistance à la chirurgie endoscopique (Rosace) project (funding: ANR TecSan06), involving both academic and clinical partners along with an industrial partner in charge of technology integration. The main subject is a lightweight and compact robot for assistance in the field of endoscopic surgery. The goal of the project has been to improve, and then transfer to a medical-grade product, technologies initially developed by the two academic partners. These technologies are: a first prototype of a robotic endoscope holder, an original method for visual servoing based on instrument tracking, and work done on the comanipulation concept, which consists of synergistic interaction between robot and user. In accordance with the initial goals, major improvements have been obtained on these three aspects of the project. Improvements to the robotic architecture have contributed to enhancing the robot's versatility, while the robot command has been made more efficient and simpler to use thanks to instrument tracking and comanipulation. After this 3-year project, the initial prototype has turned into a commercially available product integrating (or that will integrate in a few months) these new technologies.

18.
The philosophical and interdisciplinary debate about the nature of social cognition, and the processes involved, has important implications for psychiatry. On one account, mindreading depends on making theoretical inferences about another person's mental states based on knowledge of folk psychology, the so-called “theory theory” (TT). On a different account, “simulation theory” (ST), mindreading depends on simulating the other's mental states within one's own mental or motor system. A third approach, “interaction theory” (IT), looks to embodied processes (involving movement, gesture, facial expression, vocal intonation, etc.) and the dynamics of intersubjective interactions (joint attention, joint action, and processes not confined to an individual system) in highly contextualized situations to explain social cognition, and disruptions of these processes in some psychopathological conditions. In this paper, we present a brief summary of these three theoretical frameworks (TT, ST, IT). We then focus on impaired social abilities in autism and schizophrenia from the perspective of the three approaches. We discuss the limitations of such approaches in the scientific studies of these and other pathologies, and we close with a short reflection on the future of the field. In this regard we argue that, to the extent that TT, ST and IT offer explanations that capture different (limited) aspects of social cognition, a pluralist approach might be best.

19.
The way we experience the space around us is highly subjective. It has been shown that motion potentialities that are intrinsic to our body influence our space categorization. Furthermore, we have recently demonstrated that in the extrapersonal space, our categorization also depends on the movement potential of other agents. When we have to categorize the space as “Near” or “Far” between a reference and a target, the space categorized as “Near” is wider if the reference corresponds to a biological agent that has the potential to walk, instead of a biological or non-biological agent that cannot walk. But what exactly drives this “Near space extension”? In the present paper, we tested whether abstract beliefs about the biological nature of an agent determine how we categorize the space between the agent and an object. Participants were asked to first read a Pinocchio story and watch a corresponding video in which Pinocchio acts like a real human, in order to become more transported into the initial story. Then they had to categorize the location (“Near” or “Far”) of a target object located at progressively increasing or decreasing distances from a non-biological agent (i.e., a wooden dummy) and from a biological agent (i.e., a human-like avatar). The results indicate that being transported into the Pinocchio story induces an equal “Near” space threshold with both the avatar and the wooden dummy as reference frames.

20.
Robots have been used in a variety of education, therapy or entertainment contexts. This paper introduces the novel application of using humanoid robots for robot-mediated interviews. An experimental study examines how children’s responses towards the humanoid robot KASPAR in an interview context differ in comparison to their interaction with a human in a similar setting. Twenty-one children aged between 7 and 9 took part in this study. Each child participated in two interviews, one with an adult and one with a humanoid robot. Measures include the behavioural coding of the children’s behaviour during the interviews and questionnaire data. The questions in these interviews focused on a special event that had recently taken place in the school. The results reveal that the children interacted with KASPAR very similarly to how they interacted with a human interviewer. The quantitative behaviour analysis reveals that the most notable differences between the interviews with KASPAR and the human interviewer were the duration of the interviews, the eye gaze directed towards the different interviewers, and the response time of the interviewers. These results are discussed in light of future work towards developing KASPAR as an ‘interviewer’ for young children in application areas where a robot may have advantages over a human interviewer, e.g. in police, social services, or healthcare applications.
