Similar Articles
20 similar articles found (search time: 11 ms)
1.
To test the role of gestures in the origin of language, we studied hand preferences for grasping or pointing to objects at several spatial positions in human infants and adult baboons. If the roots of language are indeed in gestural communication, we expect that human infants and baboons will present a comparable difference in their pattern of laterality according to task: both should be more right-hand/left-hemisphere specialized when communicating by pointing than when simply grasping objects. Our study is the first to test both human infants and baboons on the same communicative task. Our results show remarkable convergence in the distribution of the two species' hand biases on the two kinds of tasks: In both human infants and baboons, right-hand preference was significantly stronger for the communicative task than for grasping objects. Our findings support the hypothesis that left-lateralized language may be derived from a gestural communication system that was present in the common ancestor of baboons and humans.

2.
Previous research has shown that young infants perceive others' actions as structured by goals. One open question is whether the recruitment of this understanding when predicting others' actions imposes a cognitive challenge for young infants. The current study explored infants' ability to utilize their knowledge of others' goals to rapidly predict future behavior in complex social environments and distinguish goal-directed actions from other kinds of movements. Fifteen-month-olds (N = 40) viewed videos of an actor engaged in either a goal-directed (grasping) or an ambiguous (brushing the back of her hand) action on a Tobii eye-tracker. At test, critical elements of the scene were changed and infants' predictive fixations were examined to determine whether they relied on goal information to anticipate the actor's future behavior. Results revealed that infants reliably generated goal-based visual predictions for the grasping action, but not for the back-of-hand behavior. Moreover, response latencies were longer for goal-based predictions than for location-based predictions, suggesting that goal-based predictions are cognitively taxing. Analyses of areas of interest indicated that heightened attention to the overall scene, as opposed to specific patterns of attention, was the critical indicator of successful judgments regarding an actor's future goal-directed behavior. These findings shed light on the processes that support “smart” social behavior in infants, as it may be a challenge for young infants to use information about others' intentions to inform rapid predictions.

3.
Over 18 months, almost one quarter of infants born before 30 weeks' gestation in a tertiary perinatal centre who required intensive care had to be transferred to other tertiary centres because intensive care facilities were fully occupied. When infants with lethal congenital malformations were excluded, half of the 34 infants who were transferred died; this was twice the mortality (24%) in the 111 infants remaining. The difference between the groups was significant (relative odds = 3.1) and remained so after adjustment for any discrepancies in gestational age (relative odds = 4.0). After adjustment for potential confounding variables by logistic regression, the risk of dying for those transferred remained significantly higher than that for infants who remained (relative odds = 4.6, 95% confidence interval 1.8 to 12.1). As the requirement for neonatal intensive care is episodic and unpredictable, more flexibility has to be built into the perinatal health care system to enable preterm infants delivered in tertiary perinatal centres to be cared for where they are born.
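The unadjusted relative odds of 3.1 reported above can be recovered from the abstract's own percentages. As a sketch only: the 2×2 counts below are a reconstruction (half of 34 transferred infants dying, roughly 24% of the 111 remaining infants dying), and the Wald-type confidence interval shown is the standard textbook method, not the paper's adjusted logistic-regression estimate of 4.6 (1.8 to 12.1).

```python
import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald-type 95% CI from a 2x2 table.

    a = deaths among transferred,    b = survivors among transferred,
    c = deaths among non-transferred, d = survivors among non-transferred.
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) is the square root of the sum of
    # reciprocal cell counts.
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts consistent with the abstract's percentages:
# 17 deaths / 17 survivors among the 34 transferred infants,
# 27 deaths / 84 survivors among the 111 remaining infants.
or_, lo, hi = odds_ratio_wald_ci(17, 17, 27, 84)
print(round(or_, 1), round(lo, 1), round(hi, 1))  # → 3.1 1.4 6.9
```

The unadjusted point estimate matches the abstract's relative odds of 3.1; the wider adjusted interval in the paper reflects the additional covariates in the logistic model.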

4.
Both the acoustic variability of a distractor sequence and the degree to which it violates expectations are important determinants of auditory distraction. In four experiments we examined the relative contribution of local auditory changes on the one hand and expectation violations on the other hand in the disruption of serial recall by irrelevant sound. We present evidence for a greater disruption by auditory sequences ending in unexpected steady state distractor repetitions compared to auditory sequences with expected changing state endings even though the former contained fewer local changes. This effect was demonstrated with piano melodies (Experiment 1) and speech distractors (Experiment 2). Furthermore, it was replicated when the expectation violation occurred after the encoding of the target items (Experiment 3), indicating that the items' maintenance in short-term memory was disrupted by attentional capture and not their encoding. This seems to be primarily due to the violation of a model of the specific auditory distractor sequences because the effect vanishes and even reverses when the experiment provides no opportunity to build up a specific neural model about the distractor sequence (Experiment 4). Nevertheless, the violation of abstract long-term knowledge about auditory regularities seems to cause a small and transient capture effect: Disruption decreased markedly over the course of the experiments indicating that participants habituated to the unexpected distractor repetitions across trials. The overall pattern of results adds to the growing literature that the degree to which auditory distractors violate situation-specific expectations is a more important determinant of auditory distraction than the degree to which a distractor sequence contains local auditory changes.

5.
Certain regions of the human brain are activated both during action execution and action observation. This so-called ‘mirror neuron system’ has been proposed to enable an observer to understand an action through a process of internal motor simulation. Although there has been much speculation about the existence of such a system from early in life, to date there is little direct evidence that young infants recruit brain areas involved in action production during action observation. To address this question, we identified the individual frequency range in which sensorimotor alpha-band activity was attenuated in nine-month-old infants' electroencephalograms (EEGs) during elicited reaching for objects, and measured whether activity in this frequency range was also modulated by observing others' actions. We found that observing a grasping action resulted in motor activation in the infant brain, but that this activity began prior to observation of the action, once it could be anticipated. These results demonstrate not only that infants, like adults, display overlapping neural activity during execution and observation of actions, but that this activation, rather than being directly induced by the visual input, is driven by infants' understanding of a forthcoming action. These results provide support for theories implicating the motor system in action prediction.

6.
Gaze following in human infants depends on communicative signals (total citations: 1; self-citations: 0; by others: 1)
Humans are extremely sensitive to ostensive signals, like eye contact or having their name called, that indicate someone's communicative intention toward them [1-3]. Infants also pay attention to these signals [4-6], but it is unknown whether they appreciate their significance in the initiation of communicative acts. In two experiments, we employed video presentation of an actor turning toward one of two objects and recorded infants' gaze-following behavior [7-13] with eye-tracking techniques [11, 12]. We found that 6-month-old infants followed the adult's gaze (a potential communicative-referential signal) toward an object only when such an act was preceded by ostensive cues such as direct gaze (experiment 1) or infant-directed speech (experiment 2). Such a link between the presence of ostensive signals and gaze following suggests that this behavior serves a functional role in assisting infants to effectively respond to referential communication directed to them. Whereas gaze following in many nonhuman species supports social information gathering [14-18], in humans it initially appears to reflect the expectation of a more active, communicative role from the information source.

7.
One of the most important faculties of humans is to understand the behaviour of other conspecifics. The present study aimed at determining whether, in a social context, the request gesture and gaze direction of an individual are enough to infer his/her intention to communicate, by searching for their effects on the kinematics of another individual's arm action. In four experiments participants reached, grasped and lifted a bottle filled with orange juice in the presence of an empty glass. In experiment 1, the further presence of a conspecific producing no request with hand or gaze did not modify the kinematics of the sequence. Conversely, experiments 2 and 3 showed that the presence of a conspecific producing only a request of pouring, by holding the glass with his/her right hand, or only a request of communicating with the conspecific, by using his/her gaze, affected the lifting and grasping phases of the sequence, respectively. Experiment 4 showed that hand gesture and eye contact produced simultaneously affected the entire sequence. The results suggest that the presence of both a request gesture and direct gaze produced by an individual changes the control of a motor sequence executed by another individual. We propose that a social request activates a social affordance that interferes with the control of any sequence being executed, and that the gaze of the potential receiver who held the glass with her hand modulates the effectiveness of the manual gesture. This paradigm, if applied to individuals affected by autism spectrum disorder, could give new insight into the nature of their impairment in social interaction and communication.

8.
Understanding the intentions of others while watching their actions is a fundamental building block of social behavior. The neural and functional mechanisms underlying this ability are still poorly understood. To investigate these mechanisms we used functional magnetic resonance imaging. Twenty-three subjects watched three kinds of stimuli: grasping hand actions without a context, context only (scenes containing objects), and grasping hand actions performed in two different contexts. In the latter condition the context suggested the intention associated with the grasping action (either drinking or cleaning). Actions embedded in contexts, compared with the other two conditions, yielded a significant signal increase in the posterior part of the inferior frontal gyrus and the adjacent sector of the ventral premotor cortex where hand actions are represented. Thus, premotor mirror neuron areas—areas active during the execution and the observation of an action—previously thought to be involved only in action recognition are actually also involved in understanding the intentions of others. To ascribe an intention is to infer a forthcoming new goal, and this is an operation that the motor system does automatically.

9.
Dux PE, Marois R. PLoS ONE. 2008;3(10):e3330

Background

The attentional blink (AB) refers to humans' impaired ability to detect the second of two targets (T2) in a rapid serial visual presentation (RSVP) stream of distractors if it appears within 200–600 ms of the first target (T1). Here we examined whether humans' ability to inhibit distractors in the RSVP stream is a key determinant of individual differences in T1 performance and AB magnitude.

Methodology/Principal Findings

We presented subjects with RSVP streams (93.3 ms/item) of letters containing white distractors, a red T1 and a green T2. Subjects' ability to suppress distractors was assessed by determining the extent to which their second target performance was primed by a preceding distractor that shared the same identity as T2. Individual subjects' magnitude of T2 priming from this distractor was found to be negatively correlated with their T1 accuracy and positively related to their AB magnitude. In particular, subjects with attenuated ABs showed negative priming (i.e., worse T2 performance when the priming distractor appeared in the RSVP stream compared to when it was absent), whereas those with large ABs displayed positive priming (i.e., better T2 performance when the priming distractor appeared in the RSVP stream compared to when it was absent). Thus, a subject's ability to suppress distractors, as assessed by T2 priming magnitude, predicted both their T1 performance and AB magnitude.

Conclusions/Significance

These results confirm that distractor suppression plays a key role in RSVP target selection and support the hypothesis that the AB results, at least in part, from a failure of distractor inhibition.

10.
"Optic ataxia" is caused by damage to the human posterior parietal cortex (PPC). It disrupts all components of a visually guided prehension movement, not only the transport of the hand toward an object's location, but also the in-flight finger movements pretailored to the metric properties of the object. Like previous cases, our patient (I.G.) was quite unable to open her handgrip appropriately when directly reaching out to pick up objects of different sizes. When first tested, she failed to do this even when she had previewed the target object 5 s earlier. Yet despite this deficit in "real" grasping, we found, counterintuitively, that I.G. showed good grip scaling when "pantomiming" a grasp for an object seen earlier but no longer present. We then found that, after practice, I.G. became able to scale her handgrip when grasping a real target object that she had previewed earlier. By interposing catch trials in which a different object was covertly substituted for the original object during the delay between preview and grasp, we found that I.G. was now using memorized visual information to calibrate her real grasping movements. These results provide new evidence that "off-line" visuomotor guidance can be provided by networks independent of the PPC.  相似文献   

11.

Background

When we observe an individual performing a motor act (e.g. grasping a cup) we get two types of information on the basis of how the motor act is done and the context: what the agent is doing (i.e. grasping) and the intention underlying it (i.e. grasping for drinking). Here we examined the temporal dynamics of the brain activations that follow the observation of a motor act and underlie the observer's capacity to understand what the agent is doing and why.

Methodology/Principal Findings

Volunteers were presented with two-frame video-clips. The first frame (T0) showed an object with or without context; the second frame (T1) showed a hand interacting with the object. The volunteers were instructed to understand the intention of the observed actions while their brain activity was recorded with a high-density 128-channel EEG system. Visual event-related potentials (VEPs) were recorded time-locked with the frame showing the hand-object interaction (T1). The data were analyzed by using electrical neuroimaging, which combines a cluster analysis performed on the group-averaged VEPs with the localization of the cortical sources that give rise to different spatio-temporal states of the global electrical field. Electrical neuroimaging results revealed four major steps: 1) bilateral posterior cortical activations; 2) a strong activation of the left posterior temporal and inferior parietal cortices with almost a complete disappearance of activations in the right hemisphere; 3) a significant increase of the activations of the right temporo-parietal region with simultaneously co-active left hemispheric sources, and 4) a significant global decrease of cortical activity accompanied by the appearance of activation of the orbito-frontal cortex.

Conclusions/Significance

We conclude that the early striking left hemisphere involvement is due to the activation of a lateralized action-observation/action execution network. The activation of this lateralized network mediates the understanding of the goal of object-directed motor acts (mirror mechanism). The successive right hemisphere activation indicates that this hemisphere plays an important role in understanding the intention of others.

12.

Background

Converging evidence indicates that action observation and action-related sounds activate cross-modally the human motor system. Since olfaction, the most ancestral sense, may have behavioural consequences on human activities, we causally investigated by transcranial magnetic stimulation (TMS) whether food odour could additionally facilitate the human motor system during the observation of grasping actions directed at objects with alimentary valence, and the degree of specificity of these effects.

Methodology/Principal Findings

In a repeated-measure block design, carried out on 24 healthy individuals participating in three different experiments, we show that sniffing alimentary odorants immediately increases the motor potentials evoked in hand muscles by TMS of the motor cortex. This effect was odorant-specific and was absent when subjects were presented with odorants including a potentially noxious trigeminal component. The smell-induced corticospinal facilitation of hand muscles during observation of grasping was an additive effect superimposed on that induced by the mere observation of grasping actions for food or non-food objects. The odour-induced motor facilitation took place only in case of congruence between the sniffed odour and the observed grasped food, and specifically involved the muscle acting as prime mover for hand/finger shaping in the observed action.

Conclusions/Significance

Complex olfactory cross-modal effects on the human corticospinal system are physiologically demonstrable. They are odorant-specific and, depending on the experimental context, muscle- and action-specific as well. This finding implies potential new diagnostic and rehabilitative applications.

13.
When we observe a motor act (e.g. grasping a cup) done by another individual, we extract, according to how the motor act is performed and its context, two types of information: the goal (grasping) and the intention underlying it (e.g. grasping for drinking). Here we examined whether children with autistic spectrum disorder (ASD) are able to understand these two aspects of motor acts. Two experiments were carried out. In the first, one group of high-functioning children with ASD and one of typically developing (TD) children were presented with pictures showing hand-object interactions and asked what the individual was doing and why. In half of the “why” trials the observed grip was congruent with the function of the object (“why-use” trials), in the other half it corresponded to the grip typically used to move that object (“why-place” trials). The results showed that children with ASD have no difficulties in reporting the goals of individual motor acts. In contrast they made several errors in the why task with all errors occurring in the “why-place” trials. In the second experiment the same two groups of children saw pictures showing a hand-grip congruent with the object use, but within a context suggesting either the use of the object or its placement into a container. Here children with ASD performed as TD children, correctly indicating the agent's intention. In conclusion, our data show that understanding others' intentions can occur in two ways: by relying on motor information derived from the hand-object interaction, and by using functional information derived from the object's standard use. Children with ASD have no deficit in the second type of understanding, while they have difficulties in understanding others' intentions when they have to rely exclusively on motor cues.

14.
The ‘uncanny valley’ response is a phenomenon involving the elicitation of a negative feeling and subsequent avoidant behaviour in human adults and infants as a result of viewing very realistic human-like robots or computer avatars. It is hypothesized that this uncanny feeling occurs because the realistic synthetic characters elicit the concept of ‘human’ but fail to satisfy it. Such violations of our normal expectations regarding social signals generate a feeling of unease. This conflict-induced uncanny valley between mutually exclusive categories (human and synthetic agent) raises a new question: could an uncanny feeling be elicited by other mutually exclusive categories, such as familiarity and novelty? Given that infants prefer both familiarity and novelty in social objects, we address this question as well as the associated developmental profile. Using the morphing technique and a preferential-looking paradigm, we demonstrated uncanny valley responses of infants to faces of mothers (i.e. familiarity) and strangers (i.e. novelty). Furthermore, this effect strengthened with the infant's age. We excluded the possibility that infants detect and avoid traces of morphing. This conclusion follows from our finding that the infants equally preferred strangers’ faces and the morphed faces of two strangers. These results indicate that an uncanny valley between familiarity and novelty may accentuate the categorical perception of familiar and novel objects.

15.
EEG power in the beta2 (18.5-29.5 Hz) and low gamma (30-40 Hz) frequency bands was compared while subjects read texts aloud using the "self-regulative utterance" technique. The texts were as follows: a text with a neutral emotional-semantic dominant; literary texts with either a positive or a negative emotional-semantic dominant; and personal recollection texts with similar dominants. Two groups of healthy subjects participated: acting students (N=22) and non-acting students (N=23). EEG power values while reading emotiogenic texts differed reproducibly and with statistical significance from those while reading a non-emotiogenic text. Reading emotionally positive texts was characterized by increases in EEG power in these bands, whereas reading emotionally negative texts was characterized by decreases, compared with reading the emotionally neutral text.

17.
Recent evidence suggests that the visual control of prehension may be less dependent on binocular information than has previously been thought. Studies investigating this question, however, have generally only examined reaches to single objects presented in isolation, even though natural prehensile movements are typically directed at objects in cluttered scenes which contain many objects. The present study was designed, therefore, to assess the contribution of binocular information to the control of prehensile movements in multiple-object scenes. Subjects reached for and grasped objects presented either in isolation or in the presence of one, two or four additional 'flanking' objects, under binocular and monocular viewing conditions. So that the role of binocular information could be clearly determined, subjects made reaches both in the absence of a visible scene around the target objects (self-illuminated objects presented in the dark) and under normal ambient lighting conditions. Analysis of kinematic parameters indicated that the removal of binocular information did not significantly affect many of the major indices of the transport component, including peak wrist velocity. However, peak grip apertures increased and subjects spent more time in the final slow phase of movement, prior to grasping the object, during monocularly guided reaches. The dissociation between effects of binocular versus monocular viewing on transport and grasp parameters was observed irrespective of the presence of flanking objects. These results therefore further question the view that binocular vision is pre-eminent in the control of natural prehensile movements.

18.
We reach for and grasp different sized objects numerous times per day. Most of these movements are visually-guided, but some are guided by the sense of touch (i.e. haptically-guided), such as reaching for your keys in a bag, or for an object in a dark room. A marked right-hand preference has been reported during visually-guided grasping, particularly for small objects. However, little is known about hand preference for haptically-guided grasping. Recently, a study has shown a reduction in right-hand use in blindfolded individuals, and an absence of hand preference if grasping was preceded by a short haptic experience. These results suggest that vision plays a major role in hand preference for grasping. If this were the case, then one might expect congenitally blind (CB) individuals, who have never had a visual experience, to exhibit no hand preference. Two novel findings emerge from the current study: first, the results showed that contrary to our expectation, CB individuals used their right hand during haptically-guided grasping to the same extent as visually-unimpaired (VU) individuals did during visually-guided grasping. And second, object size affected hand use in an opposite manner for haptically- versus visually-guided grasping. Big objects were more often picked up with the right hand during haptically-guided, but less often during visually-guided grasping. This result highlights the different demands that object features pose on the two sensory systems. Overall the results demonstrate that hand preference for grasping is independent of visual experience, and they suggest a left-hemisphere specialization for the control of grasping that goes beyond sensory modality.

19.
How do animals determine when others are able and disposed to receive their communicative signals? In particular, it is futile to make a silent gesture when the intended audience cannot see it. Some non-human primates use the head and body orientation of their audience to infer visual attentiveness when signalling, but whether species relying less on visual information use such cues when producing visual signals is unknown. Here, we test whether African elephants (Loxodonta africana) are sensitive to the visual perspective of a human experimenter. We examined whether the frequency of gestures of head and trunk, produced to request food, was influenced by indications of an experimenter's visual attention. Elephants signalled significantly more towards the experimenter when her face was oriented towards them, except when her body faced away from them. These results suggest that elephants understand the importance of visual attention for effective communication.

20.
Facial displays are important for communication, and their ontogeny has been studied primarily in chimpanzees and macaques. We investigated the ontogeny, communicative function and target of facial displays in Cebus apella. Our results show that facial displays are absent at birth and develop as infants grow older. Lip-smacking appears first (at about 1 month of age), followed by scalp-lifting, relaxed open-mouth, silent bared-teeth, open-mouth silent bared-teeth displays and finally the open-mouth threat face. Infants perform most facial displays in the same contexts as adults, with the exception of the silent bared-teeth display that young capuchins use primarily, or exclusively, in affiliative contexts. Interestingly, facial displays are exchanged very often with peers, less frequently with adults and almost never with the mother.

