Similar Articles
Found 20 similar articles (search time: 15 ms)
1.
Recent studies have provided evidence that labeling can influence the outcome of infants’ visual categorization. However, what exactly happens during learning remains unclear. Using eye-tracking, we examined infants’ attention to object parts during learning. Our analysis of looking behaviors during learning provides insights going beyond merely observing the learning outcome. Both labeling and non-labeling phrases facilitated category formation in 12-month-olds but not 8-month-olds (Experiment 1). Non-linguistic sounds did not produce this effect (Experiment 2). Detailed analyses of infants’ looking patterns during learning revealed that only infants who heard labels exhibited a rapid focus on the object part successive exemplars had in common. Although other linguistic stimuli may also be beneficial for learning, it is therefore concluded that labels have a unique impact on categorization.

2.
Recent studies have shown that infants’ face recognition rests on a robust face representation that is resilient to a variety of facial transformations such as rotations in depth, motion, occlusion or deprivation of inner/outer features. Here, we investigated whether 3-month-old infants’ ability to represent the invariant aspects of a face is affected by the presence of an external add-on element, i.e. a hat. Using a visual habituation task, three experiments were carried out in which face recognition was investigated by manipulating the presence/absence of a hat during face encoding (i.e. habituation phase) and face recognition (i.e. test phase). An eye-tracker system was used to record the time infants spent looking at face-relevant information compared to the hat. The results showed that infants’ face recognition was not affected by the presence of the external element when the type of the hat did not vary between the habituation and test phases, and when both the novel and the familiar face wore the same hat during the test phase (Experiment 1). Infants’ ability to recognize the invariant aspects of a face was preserved also when the hat was absent in the habituation phase and the same hat was shown only during the test phase (Experiment 2). Conversely, when the novel face identity competed with a novel hat, the hat triggered the infants’ attention, interfering with the recognition process and preventing the infants’ preference for the novel face during the test phase (Experiment 3). Findings from the current study shed light on how faces and objects are processed when they are simultaneously presented in the same visual scene, contributing to an understanding of how infants respond to the multiple and composite information available in their surrounding environment.

3.
Many studies have shown that during the first year of life infants start learning the prosodic, phonetic and phonotactic properties of their native language. In parallel, infants start associating sound sequences with semantic representations. However, the question of how these two processes interact remains largely unknown. The current study explores whether (and when) the relative phonotactic probability of a sound sequence in the native language has an impact on infants’ word learning. We exploit the fact that Labial-Coronal (LC) words are more frequent than Coronal-Labial (CL) words in French, and that French-learning infants prefer LC over CL sequences at 10 months of age, to explore the possibility that LC structures might be learned more easily and thus at an earlier age than CL structures. Eye movements of French-learning 14- and 16-month-olds were recorded while they watched animated cartoons in a word learning task. The experiment involved four trials testing LC sequences and four trials testing CL sequences. Our data reveal that 16-month-olds were able to learn the LC and CL words, while 14-month-olds were only able to learn the LC words, which are the words with the more frequent phonotactic pattern. The present results provide evidence that infants’ knowledge of their native language phonotactic patterns influences their word learning: Words with a frequent phonotactic structure could be acquired at an earlier age than those with a lower probability. Developmental changes are discussed and integrated with previous findings.

4.
Inferring the epistemic states of others is considered to be an essential requirement for humans to communicate; however, the developmental trajectory of this ability is unclear. The aim of the current study was to determine developmental trends in this ability by using pointing behavior as a dependent measure. Infants aged 13 to 18 months (n = 32, 16 females) participated in the study. The experiment consisted of two phases. In the Shared Experience Phase, both the participant and the experimenter experienced (played with) an object, and the participant experienced a second object while the experimenter was absent. In the Pointing Phase, the participant was seated on his/her mother’s lap, facing the experimenter, and the same two objects from the Shared Experience Phase were presented side-by-side behind the experimenter. The participants’ spontaneous pointing was analyzed from video footage. While the analysis of the Shared Experience Phase suggested that there was no significant difference in the duration of the participants’ visual attention to the two objects, the participants pointed more frequently to the object that could be considered “new” for the experimenter (in Experiment 1). This selective pointing was not observed when the experimenter could be considered unfamiliar with both of the objects (in Experiment 2). These findings suggest that infants in this age group spontaneously point, presumably to inform about an object, reflecting the partner’s attentional and knowledge states.

5.
‘Infant shyness’, in which infants react shyly to adult strangers, presents during the third quarter of the first year. Researchers claim that shy children over the age of three years are experiencing approach-avoidance conflicts. Counter-intuitively, shy children do not avoid the eyes when scanning faces; rather, they spend more time looking at the eye region than non-shy children do. It is currently unknown whether young infants show this conflicted shyness and its corresponding characteristic pattern of face scanning. Here, using infant behavioral questionnaires and an eye-tracking system, we found that highly shy infants had high scores for both approach and fear temperaments (i.e., approach-avoidance conflict) and that they showed longer dwell times in the eye regions than less shy infants during their initial fixations to facial stimuli. This initial hypersensitivity to the eyes was independent of whether the viewed faces were of their mothers or strangers. Moreover, highly shy infants preferred strangers with an averted gaze and face to strangers with a directed gaze and face. This initial scanning of the eye region and the overall preference for averted gaze faces were not explained solely by the infants’ age or temperament (i.e., approach or fear). We suggest that infant shyness involves a conflict in temperament between the desire to approach and the fear of strangers, and this conflict is the psychological mechanism underlying infants’ characteristic behavior in face scanning.

6.
Although infant speech perception is often studied in isolated modalities, infants’ experience with speech is largely multimodal (i.e., speech sounds they hear are accompanied by articulating faces). Across two experiments, we tested infants’ sensitivity to the relationship between the auditory and visual components of audiovisual speech in their native (English) and non-native (Spanish) language. In Experiment 1, infants’ looking times were measured during a preferential looking task in which they saw two simultaneous visual speech streams articulating a story, one in English and the other in Spanish, while they heard either the English or the Spanish version of the story. In Experiment 2, looking times from another group of infants were measured as they watched single displays of congruent and incongruent combinations of English and Spanish audio and visual speech streams. Findings demonstrated an age-related increase in looking towards the native relative to non-native visual speech stream when accompanied by the corresponding (native) auditory speech. This increase in native language preference did not appear to be driven by a difference in preference for native vs. non-native audiovisual congruence, as we observed no difference in looking times at the audiovisual streams in Experiment 2.

7.
The present study asks when infants are able to selectively anticipate the goals of observed actions, and how this ability relates to infants’ own abilities to produce those specific actions. Using eye-tracking technology to measure on-line anticipation, 6-, 8- and 10-month-old infants and a control group of adults were tested while observing an adult reach with a whole hand grasp, a precision grasp or a closed fist towards one of two different sized objects. The same infants were also given a comparable action production task. All infants showed proactive gaze to the whole hand grasps, with increased degrees of proactivity in the older groups. Gaze proactivity to the precision grasps, however, was present from 8 months of age. Moreover, the infants’ ability in performing precision grasping strongly predicted their ability in using the actor’s hand shape cues to differentially anticipate the goal of the observed action, even when age was partialled out. The results are discussed in terms of the specificity of action anticipation, and the fine-grained relationship between action production and action perception.

8.
Human infants’ developing manipulatory transformations involved in classifying objects from ages 6 to 24 months were investigated. Infants’ manipulations develop from predominantly serial one-at-a-time acts with one object to predominantly parallel two-at-a-time acts with two objects. This shift is marked by increasingly overt transformational consequences for the objects manipulated. When infants construct parallel transformations they are initially different. With age they are increasingly identical or reciprocal. Also during this age period, as the number of objects manipulated at the same time increases, so does the frequency with which infants coordinate them. At the same time, the kind of objects infants manipulate simultaneously changes. Six-month-olds manipulate different objects when acting on more than one object at a time. By age 12 months, infants switch to manipulating identical objects at the same time, indicating that they are beginning to construct identity classes. Since this development occurs about a half year before human infants develop any substantial naming behavior, the origins of classification cannot depend on this linguistic development.

9.
For infants, the first problem in learning a word is to map the word to its referent; a second problem is to remember that mapping when the word and/or referent are again encountered. Recent infant studies suggest that spatial location plays a key role in how infants solve both problems. Here we provide a new theoretical model and new empirical evidence on how the body – and its momentary posture – may be central to these processes. The present study uses a name-object mapping task in which names are either encountered in the absence of their target (experiments 1–3, 6 & 7), or when their target is present but in a location previously associated with a foil (experiments 4, 5, 8 & 9). A humanoid robot model (experiments 1–5) is used to instantiate and test the hypothesis that body-centric spatial location, and thus the body’s momentary posture, is used to centrally bind the multimodal features of heard names and visual objects. The robot model is shown to replicate existing infant data and then to generate novel predictions, which are tested in new infant studies (experiments 6–9). Despite spatial location being task-irrelevant in this second set of experiments, infants use body-centric spatial contingency over temporal contingency to map the name to the object. Both infants and the robot remember the name-object mapping even in new spatial locations. However, the robot model shows how this memory can emerge not from separating bodily information from the word-object mapping, as proposed in previous models of the role of space in word-object mapping, but through the body’s momentary disposition in space.

10.
The present study investigated whether infants reason about others’ social preferences based on the intentions of others’ interactive actions. In Experiment 1, 12-month-old infants were familiarized with an event in which an agent either successfully helped a circle to climb up a hill (successful-helping condition) or failed to help the circle to achieve its goal (failed-helping condition). During the test, the infants saw the circle approach either the helper (approach-helper event) or the hinderer (approach-hinderer event). In the successful-helping condition, the 12-month-old infants looked for longer at the approach-hinderer event than at the approach-helper event, but in the failed-helping condition, looking times were about equal for the two test events. These results suggest that 12-month-old infants could not infer the circle’s preference when the helper’s action did not lead to its intended outcome. In Experiment 2, 16-month-olds were tested in the failed-helping condition; they looked longer at the approach-hinderer event than at the approach-helper event, which suggests that they could reason about the third party’s social preferences based on the exhibited intentions. In Experiment 3, 12-month-olds were familiarized with events in which the final outcomes of helping and hindering actions were ambiguous. The results revealed that 12-month-old infants are also sensitive to intentions when inferring others’ social preferences. The results suggest that by 12 months of age, infants expect an agent to prefer and approach another who intends to help the circle to achieve its goal, regardless of the outcome. The current research has implications for moral reasoning and social evaluation in infancy.

11.
Infants are known to possess two different cognitive systems to encode numerical information. The first system encodes approximate numerosities, has no known upper limit and is functional from birth on. The second system relies on infants’ ability to track up to 3 objects in parallel, and enables them to represent exact numerosity for such small sets. It is unclear, however, whether infants may be able to represent numerosities from all ranges in a common format. In various studies, infants failed to discriminate a small vs. a large numerosity (e.g., 2 vs. 4, 3 vs. 6), although more recent studies presented evidence that infants can succeed at these discriminations in some situations. Here, we used a transfer paradigm between the tactile and visual modalities in 5-month-olds, assuming that such a cross-modal paradigm may promote access to abstract representations of numerosities, continuous across the small and large ranges. Infants were first familiarized with 2 to 4 objects in the tactile modality, and subsequently tested for their preference between 2 vs. 4, or 3 vs. 6 visual objects. Results were mixed, with only partial evidence that infants may have transferred numerical information across modalities. Implications for 5-month-old infants’ ability to represent small and large numerosities in a single or in separate formats are discussed.

12.
Evidence that children maintain some memories of labels that are unlikely to be shared by the broader linguistic community suggests that children’s selective learning is not an all-or-none phenomenon. Across three experiments, we examine the contexts in which 24-month-olds show selective learning and whether they adjust their selective learning if provided with cues of in-context relevance. In each experiment, toddlers were first familiarized with a source who acted on familiar objects in either typical or atypical ways (e.g., used a car to mimic driving or hop like a rabbit) or labeled familiar objects incorrectly (e.g., called a spoon a “brush”). The source then labeled unfamiliar objects using either a novel word (e.g., fep; Experiment 1) or sound (e.g., ring; Experiments 2 and 3). Results indicated that toddlers learnt words from the typical source but not from the atypical or inaccurate source. In contrast, toddlers extended sound labels only when a source who had previously acted atypically provided the sound labels. Thus, toddlers, like preschoolers, avoid forming semantic representations of new object labels that are unlikely to be relevant in the broader community, but will form event-based memories of such labels if they have reason to suspect such labels will have in-context relevance.

13.
The discovery of mirror neurons in the monkey motor cortex has inspired wide-ranging hypotheses about the potential relationship between action control and social cognition. In this paper, we consider the hypothesis that this relationship supports the early development of a critical aspect of social understanding, the ability to analyse others’ actions in terms of goals. Recent investigations of infant action understanding have revealed rich connections between motor development and the analysis of goals in others’ actions. In particular, infants’ own goal-directed actions influence their analysis of others’ goals. This evidence indicates that the cognitive systems that drive infants’ own actions contribute to their analysis of goals in others’ actions. These effects occur at a relatively abstract level of analysis both in terms of the structure infants perceive in others’ actions and relevant structure in infants’ own actions. Although the neural bases of these effects in infants are not yet well understood, current evidence indicates that connections between action production and action perception in infancy involve the interrelated neural systems at work in generating planned, intelligent action.

14.
The sense of touch provides fundamental information about the surrounding world, and feedback about our own actions. Although touch is very important during the earliest stages of life, to date no study has investigated infants’ abilities to process visual stimuli implying touch. This study explores the developmental origins of the ability to visually recognize touching gestures involving others. Looking times and orienting responses were measured in a visual preference task, in which participants were simultaneously presented with two videos depicting a touching and a no-touching gesture involving human body parts (face, hand) and/or an object (spoon). In Experiment 1, 2-day-old newborns and 3-month-old infants viewed two videos: in one video a moving hand touched a static face, in the other the moving hand stopped before touching it. Results showed that only 3-month-olds, but not newborns, differentiated the touching from the no-touching gesture, displaying a preference for the former over the latter. To test whether newborns could manifest a preferential visual response when the touched body part is different from the face, in Experiment 2 newborns were presented with touching/no-touching gestures in which a hand or an inanimate object (i.e., a spoon) moved towards a static hand. Newborns were able to discriminate a hand-to-hand touching gesture, but they did not manifest any preference for the object-to-hand touch. The present findings speak in favour of an early ability to visually recognize touching gestures involving the interaction between human body parts.

15.
Faces convey primal information for our social life. This information is so primal that we sometimes find faces in non-face objects. Such illusory perception is called pareidolia. In this study, using infants’ orientation behavior toward a sound source, we demonstrated that infants also perceive pareidolic faces. An image formed by four blobs and an outline was shown to infants with or without pure tones, and the time they spent looking at each blob was compared. Since the mouth is the only sound source in a face, and the literature has shown that infants older than 6 months already have a sound-mouth association, increased looking time towards the bottom blob (pareidolic mouth area) during sound presentation indicated that they illusorily perceive a face in the image. Infants aged 10 and 12 months looked longer at the bottom blob under the upright-image condition, whereas no differences in looking time were observed for any blob under the inverted-image condition. However, 8-month-olds did not show any difference in looking time under either the upright or the inverted condition, suggesting that the perception of pareidolic faces, through sound association, develops at around 8 to 10 months of age.

16.
Human infants rapidly learn new skills and customs via imitation, but the neural linkages between action perception and production are not well understood. Neuroscience studies in adults suggest that a key component of imitation (identifying the corresponding body part used in the acts of self and other) has an organized neural signature. In adults, perceiving someone using a specific body part (e.g., hand vs. foot) is associated with activation of the corresponding area of the sensory and/or motor strip in the observer’s brain, a phenomenon called neural somatotopy. Here we examine whether preverbal infants also exhibit somatotopic neural responses during the observation of others’ actions. 14-month-old infants were randomly assigned to watch an adult reach towards and touch an object using either her hand or her foot. The scalp electroencephalogram (EEG) was recorded and event-related changes in the sensorimotor mu rhythm were analyzed. Mu rhythm desynchronization was greater over hand areas of sensorimotor cortex during observation of hand actions and was greater over the foot area for observation of foot actions. This provides the first evidence that infants’ observation of someone else using a particular body part activates the corresponding areas of sensorimotor cortex. We hypothesize that this somatotopic organization in the developing brain supports imitation and cultural learning. The findings connect developmental cognitive neuroscience, adult neuroscience, action representation, and behavioral imitation.
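Mu-rhythm desynchronization of the kind reported above is commonly quantified as event-related desynchronization (ERD): the percentage change in band power during action observation relative to a pre-stimulus baseline, with negative values indicating desynchronization. The sketch below illustrates the computation on synthetic data; it is not the authors' pipeline, and the 6-9 Hz band (a typical infant mu range) and all signal parameters are assumptions for the example.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean spectral power of `signal` in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def erd_percent(baseline, event, fs, lo=6.0, hi=9.0):
    """ERD% = (event - baseline) / baseline * 100; negative => desynchronization."""
    p_base = band_power(baseline, fs, lo, hi)
    p_event = band_power(event, fs, lo, hi)
    return (p_event - p_base) / p_base * 100.0

# Synthetic example: a 7 Hz oscillation whose amplitude halves during "observation"
fs = 250
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
baseline = np.sin(2 * np.pi * 7 * t) + 0.1 * rng.standard_normal(t.size)
event = 0.5 * np.sin(2 * np.pi * 7 * t) + 0.1 * rng.standard_normal(t.size)
print(erd_percent(baseline, event, fs))  # clearly negative: mu power dropped
```

Comparing ERD over hand-area versus foot-area electrodes, as in the study, would amount to running this computation per channel and contrasting the two conditions.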

17.
This study examined the effects of tonal and atonal music on respiratory sinus arrhythmia (RSA) in 40 mothers and their 3-month-old infants. The tonal music fragment was composed using the structure of a harmonic series that corresponds with the pitch ratio characteristics of mother–infant vocal dialogues. The atonal fragment did not correspond with a tonal structure. Mother–infant ECG and respiration were registered along with simultaneous video recordings. RR-interval, respiration rate, and RSA were calculated. RSA was corrected for any confounding respiratory and motor activities. The results showed that the infants’ and the mothers’ RSA-responses to the tonal and atonal music differed. The infants showed significantly higher RSA-levels during the tonal fragment than during the atonal fragment and baseline, suggesting increased vagal activity during tonal music. The mothers showed RSA-responses that were equal to their infants only when the infants were lying close to their bodies and when they heard the difference between the two fragments, preferring the tonal over the atonal fragment. The results are discussed with regard to music-related topics, psychophysiological integration and mother–infant vocal interaction processes.
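RSA is typically quantified as the (log) power of heart-period variability in the respiratory-frequency band, computed from the RR-interval series. The sketch below shows one minimal, numpy-only version of that computation; the 0.12-0.40 Hz band is the adult HF convention and is an assumption here (infants breathe faster, so a higher band would apply), and the study's corrections for respiration and movement are omitted.

```python
import numpy as np

def rsa_ln_power(rr_ms, beat_times_s, fs=4.0, lo=0.12, hi=0.40):
    """ln of heart-period power (ms^2, unnormalized) in the respiratory band.

    rr_ms:        successive RR intervals in milliseconds.
    beat_times_s: timestamp (s) of the beat ending each interval.
    """
    # Resample the unevenly spaced RR series onto an even grid at fs Hz
    grid = np.arange(beat_times_s[0], beat_times_s[-1], 1.0 / fs)
    rr_even = np.interp(grid, beat_times_s, rr_ms)
    rr_even = rr_even - rr_even.mean()  # remove the mean heart period
    freqs = np.fft.rfftfreq(len(rr_even), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(rr_even)) ** 2 / len(rr_even)
    band = (freqs >= lo) & (freqs <= hi)
    return np.log(psd[band].sum())

# Synthetic beats: 800 ms mean RR, modulated at a 0.25 Hz breathing rate
times = [0.0]
while times[-1] < 120.0:
    times.append(times[-1] + 0.8 + 0.05 * np.sin(2 * np.pi * 0.25 * times[-1]))
times = np.array(times)
print(rsa_ln_power(np.diff(times) * 1000, times[1:]))  # large positive ln-power
```

A condition effect like the one reported (higher infant RSA during the tonal fragment) would correspond to a larger value of this statistic for the tonal than for the atonal epoch.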

18.
Background: Exposure to second hand smoke (SHS) is one of the major causes of premature death and disease among children. While socioeconomic inequalities exist for adult smoking, such evidence is limited for SHS exposure in children. Thus, this study examined changes over time in socioeconomic inequalities in infants’ SHS exposure in Japan.
Methods: This is a repeated cross-sectional study of 41,833 infants born in 2001 and 32,120 infants born in 2010 in Japan from nationally representative surveys using questionnaires. The prevalence of infants’ SHS exposure was determined and related to household income and parental education level. The magnitudes of income and educational inequalities in infants’ SHS exposure were estimated in 2001 and 2010 using both absolute and relative inequality indices.
Results: The prevalence of SHS exposure in infants declined from 2001 to 2010. The relative index of inequality increased from 0.85 (95% confidence interval [CI], 0.80 to 0.89) to 1.47 (95% CI, 1.37 to 1.56) based on income and from 1.22 (95% CI, 1.17 to 1.26) to 2.09 (95% CI, 2.00 to 2.17) based on education. In contrast, the slope index of inequality decreased from 30.9 (95% CI, 29.3 to 32.6) to 20.1 (95% CI, 18.7 to 21.5) based on income and from 44.6 (95% CI, 43.1 to 46.2) to 28.7 (95% CI, 27.3 to 30.0) based on education. Having only a father who smoked indoors was a major contributor to absolute income inequality in infants’ SHS exposure in 2010, which increased in importance from 45.1% in 2001 to 67.0% in 2010.
Conclusions: The socioeconomic inequalities in infants’ second hand smoke exposure increased in relative terms but decreased in absolute terms from 2001 to 2010. Further efforts are needed to encourage parents to quit smoking and protect infants from second hand smoke exposure, especially in low socioeconomic households that include non-smoking mothers.
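The slope and relative indices of inequality used above are typically obtained by regressing group prevalence on the midpoint of each group's cumulative population rank (a "ridit" score from 0 at the lowest to 1 at the highest socioeconomic position). The sketch below uses made-up numbers, not the study's data, and one common definition of the RII (ratio of fitted prevalence at the bottom versus the top of the hierarchy); published analyses often use regression-based variants instead.

```python
import numpy as np

def inequality_indices(prevalence, pop_share):
    """Slope (SII) and relative (RII) index of inequality.

    prevalence: exposure prevalence (%) per socioeconomic group,
                ordered from lowest to highest position.
    pop_share:  each group's share of the population (sums to 1).
    """
    prevalence = np.asarray(prevalence, dtype=float)
    pop_share = np.asarray(pop_share, dtype=float)
    # Ridit score: midpoint of each group's cumulative population rank
    cum = np.cumsum(pop_share)
    ridit = cum - pop_share / 2
    # Population-weighted least-squares fit: prevalence ~ a + b * ridit
    w = np.sqrt(pop_share)
    A = np.column_stack([np.ones_like(ridit), ridit]) * w[:, None]
    (a, b), *_ = np.linalg.lstsq(A, prevalence * w, rcond=None)
    sii = -b           # absolute gap: bottom minus top of the hierarchy
    rii = a / (a + b)  # ratio of fitted prevalence at rank 0 vs. rank 1
    return sii, rii

# Hypothetical data: five income quintiles, lowest to highest
sii, rii = inequality_indices(
    prevalence=[60, 50, 40, 30, 20], pop_share=[0.2] * 5)
print(sii, rii)  # SII = 50 percentage points, RII = 65/15 (about 4.33)
```

Because the toy prevalences lie exactly on a line, the fit recovers intercept 65 and slope -50, so the SII is 50 percentage points and the RII is 65/15.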

19.
Plants produce dangerous chemical and physical defenses that have shaped the physiology and behavior of the herbivorous predators that feed on them. Here we explore the impact that these plant defenses may have had on humans by testing infants' responses to plants with and without sharp-looking thorns. To do this, we presented 8- to 18-month-olds with plants and control stimuli and measured their initial reaching behavior and subsequent object exploration behavior. Half of the stimuli had sharp-looking thorns or pointed parts while the other half did not. We found that infants exhibited both an initial reluctance to touch and minimized subsequent physical contact with plants compared to other object types. Further, infants treated all plants as potentially dangerous, whether or not they possessed sharp-looking thorns. These results reveal novel dimensions of a behavioral avoidance strategy in infancy that would mitigate potential harm from plants.

20.
It is widely accepted that people establish allocentric spatial representation after learning a map. However, it is unknown whether people can directly acquire egocentric representation after map learning. In two experiments, the participants learned a distal environment through a map and then performed the egocentric pointing tasks in that environment under three conditions: with the heading aligned with the learning perspective (baseline), after 240° rotation from the baseline (updating), and after disorientation (disorientation). Disorientation disrupted the internal consistency of pointing among objects when the participants learned the sequentially displayed map, on which only one object name was displayed at a time while the location of “self” remained on the screen all the time. However, disorientation did not affect the internal consistency of pointing among objects when the participants learned the simultaneously displayed map. These results suggest that the egocentric representation can be acquired from a sequentially presented map.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号