Similar Documents
20 similar documents retrieved.
1.
The movements we make with our hands both reflect our mental processes and help to shape them. Our actions and gestures can affect our mental representations of actions and objects. In this paper, we explore the relationship between action, gesture and thought in both humans and non-human primates and discuss its role in the evolution of language. Human gesture (specifically representational gesture) may provide a unique link between action and mental representation. It is kinaesthetically close to action and is, at the same time, symbolic. Non-human primates use gesture frequently to communicate, and do so flexibly. However, their gestures mainly resemble incomplete actions and lack the representational elements that characterize much of human gesture. Differences in the mirror neuron system provide a potential explanation for non-human primates' lack of representational gestures; the monkey mirror system does not respond to representational gestures, while the human system does. In humans, gesture grounds mental representation in action, but there is no evidence for this link in other primates. We argue that gesture played an important role in the transition to symbolic thought and language in human evolution, following a cognitive leap that allowed gesture to incorporate representational elements.

2.
In the present review we summarize evidence that the control of spoken language shares the same system as the control of arm gestures. Studies of primate premotor cortex discovered the so-called mirror system, as well as a system of double commands to hand and mouth. These systems may have evolved initially in the context of ingestion, and later formed a platform for combined manual and vocal communication. In humans, manual gestures are integrated with speech production when they accompany speech. Lip kinematics and parameters of voice spectra during speech production are influenced by executing or observing transitive actions (i.e. actions guided by an object). Manual actions also play an important role in language acquisition in children, from the babbling stage onwards. Behavioural data reported here even show a reciprocal influence between words and symbolic gestures, and studies employing neuroimaging and repetitive transcranial magnetic stimulation (rTMS) techniques suggest that the system governing both speech and gesture is located in Broca's area.

3.
The chimpanzee's use of American Sign Language (ASL) to communicate with humans and with each other has been empirically demonstrated in several reports, but this is the first research to experimentally examine their use of sign language in a nonsocial fashion: private signing. This experiment examined the private signing behavior of five signing chimpanzees, using a remote videotaping technique with no human present. It was found that all five chimpanzees signed to themselves for a total of 368 instances. These instances of private signing were classified into nine different functional categories as has been done in the analysis of private speech and signing in hearing and deaf human children. Similar to humans, a few of the categories accounted for the majority of the instances of private signing. These findings empirically demonstrate a behavior similar to private speech and signing in humans.

4.
Our understanding of the cognitive and neural underpinnings of language has traditionally been firmly based on spoken Indo-European languages and on language studied as speech or text. However, in face-to-face communication, language is multimodal: speech signals are invariably accompanied by visual information on the face and in manual gestures, and sign languages deploy multiple channels (hands, face and body) in utterance construction. Moreover, the narrow focus on spoken Indo-European languages has entrenched the assumption that language consists wholly of an arbitrary system of symbols and rules. However, iconicity (i.e. resemblance between aspects of communicative form and meaning) is also present: speakers use iconic gestures when they speak; many non-Indo-European spoken languages exhibit a substantial amount of iconicity in word forms and, finally, iconicity is the norm, rather than the exception, in sign languages. This introduction provides the motivation for taking a multimodal approach to the study of language learning, processing and evolution, and discusses the broad implications of shifting our current dominant approaches and assumptions to encompass multimodal expression in both signed and spoken languages.

5.
Evidence indicates that adequate phonological abilities are necessary to develop proficient reading skills and that later in life phonology also has a role in the covert visual word recognition of expert readers. Impairments of acoustic perception, such as deafness, can lead to atypical phonological representations of written words and letters, which in turn can affect reading proficiency. Here, we report an experiment in which young adults with different levels of acoustic perception (i.e., hearing and deaf individuals) and different modes of communication (i.e., hearing individuals using spoken language, deaf individuals with a preference for sign language, and deaf individuals using the oral modality with little or no competence in sign language) performed a visual lexical decision task, which consisted of categorizing real words and consonant strings. The lexicality effect was restricted to deaf signers, who responded faster to real words than consonant strings, showing over-reliance on whole-word lexical processing of stimuli. No effect of stimulus type was found in deaf individuals using the oral modality or in hearing individuals. Thus, mode of communication modulates the lexicality effect. This suggests that learning a sign language during development shapes visuo-motor representations of words, which are tuned to the actions used to express them (phono-articulatory movements vs. hand movements) and to associated perceptions. As these visuo-motor representations are elicited during on-line linguistic processing and can overlap with the perceptual-motor processes required to execute the task, they can potentially produce interference or facilitation effects.
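As an illustration of the analysis described in this abstract, the following is a minimal sketch of how a per-group lexicality effect could be computed from trial-level response times. The column names and values are hypothetical placeholders, not the study's data.

```python
# Minimal sketch: per-group lexicality effect from trial-level response times.
# All values below are hypothetical placeholders.
import pandas as pd

trials = pd.DataFrame({
    "group":    ["deaf_signer"] * 4 + ["deaf_oral"] * 4 + ["hearing"] * 4,
    "stimulus": ["word", "word", "consonant_string", "consonant_string"] * 3,
    "rt":       [610, 595, 680, 670, 640, 645, 642, 650, 600, 605, 598, 607],
})

# Mean RT per group and stimulus type, then the word advantage per group.
mean_rt = trials.groupby(["group", "stimulus"])["rt"].mean().unstack("stimulus")
mean_rt["lexicality_effect"] = mean_rt["consonant_string"] - mean_rt["word"]
print(mean_rt)  # a positive value = faster responses to real words than consonant strings
```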

6.

Objective

The purpose of this study was to examine the level of empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls and to define the influence of language and various hearing loss characteristics on the development of empathy.

Methods

The study group (mean age 11.9 years) consisted of 122 deaf and hard of hearing children (52 children with cochlear implants and 70 children with conventional hearing aids) and 162 normal hearing children. The two groups were compared using self-reports, a parent-report and observation tasks to rate the children’s level of empathy, their attendance to others’ emotions, emotion recognition, and supportive behavior.

Results

Deaf and hard of hearing children reported lower levels of cognitive empathy and prosocial motivation than normal hearing children, regardless of their type of hearing device. The level of emotion recognition was equal in both groups. During observations, deaf and hard of hearing children showed more attention to the emotion-evoking events but less supportive behavior compared to their normal hearing peers. Deaf and hard of hearing children attending mainstream education or using oral language showed higher levels of cognitive empathy and prosocial motivation than deaf and hard of hearing children who use sign (supported) language or attend special education. However, they were still outperformed by normal hearing children.

Conclusions

Deaf and hard of hearing children, especially those in special education, show lower levels of empathy than normal hearing children, which can have consequences for initiating and maintaining relationships.

7.
An important question in captioning research is whether adding captions to sign language interpreter videos improves viewers' comprehension compared with sign language interpreter videos without captions. In our study, an experiment was conducted using four video clips with information about everyday events. Fifty-one deaf and hard of hearing sign language users alternately watched the sign language interpreter videos with and without captions. Afterwards, they answered ten questions. The results showed that the presence of captions positively affected their rates of comprehension, which increased by 24% among deaf viewers and 42% among hard of hearing viewers. The most obvious differences in comprehension between watching sign language interpreter videos with and without captions were found for the subjects of hiking and culture, where comprehension was higher when captions were used. The results led to suggestions for the consistent use of captions in sign language interpreter videos in various media.

8.
Research suggests that the experiences recollected from the dreams of persons who are deaf or who have hearing loss reflect their personal background and circumstances. However, this literature also indicated that few studies have surveyed the occurrence of color and communication styles. Individual differences in the perception of color and affect were especially noted. These differences appeared dependent upon whether the impairment was congenital or acquired. In this study, 24 deaf persons and a person with hearing loss who use American Sign Language (ASL) were compared to a sample of hearing persons regarding colors and communication occurring in their dreams. Both groups were found to communicate in dreams as they do in life, the deaf persons and the person with hearing loss by signing, and the hearing persons by speech. The deaf persons and the person with hearing loss experienced more color and more vividness, and the time of onset of hearing impairment showed differences among persons with hearing loss. The findings also suggest that utilizing dreams as therapeutic material when treating persons with hearing loss and nonimpaired persons may have clinical utility.

9.
Deaf youth easily become communicatively isolated in public schools, where they are in a small minority among a majority of hearing peers and teachers. This article examines communicative strategies of deaf children in an American "mainstream" school setting to discover how they creatively manage their casual communicative interactions with hearing peers across multimodal communicative channels, visual and auditory. We argue that unshared sociolinguistic practices and hearing-oriented participation frameworks are crucial aspects of communicative failure in these settings. We also show that what look like "successful" conversational interactions between deaf and hearing children actually contain little real language and few of the complex communication skills vital to cognitive and social development. This study contributes to understanding the social production of communicative isolation of deaf students and implications of mainstream education for this minority group.

10.
Psycholinguistic studies of sign language processing provide valuable opportunities to assess whether language phenomena, which are primarily studied in spoken language, are fundamentally shaped by peripheral biology. For example, we know that when given a choice between two syntactically permissible ways to express the same proposition, speakers tend to choose structures that were recently used, a phenomenon known as syntactic priming. Here, we report two experiments testing syntactic priming of a noun phrase construction in American Sign Language (ASL). Experiment 1 shows that second language (L2) signers with normal hearing exhibit syntactic priming in ASL and that priming is stronger when the head noun is repeated between prime and target (the lexical boost effect). Experiment 2 shows that syntactic priming is equally strong among deaf native L1 signers, deaf late L1 learners, and hearing L2 signers. Experiment 2 also tested for, but did not find evidence of, phonological or semantic boosts to syntactic priming in ASL. These results show that despite the profound differences between spoken and signed languages in terms of how they are produced and perceived, the psychological representation of sentence structure (as assessed by syntactic priming) operates similarly in sign and speech.
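As an illustration of how the priming and lexical-boost effects described above could be quantified, here is a minimal sketch using a plain logistic regression on simulated trial data. All names and numbers are hypothetical, and the by-signer and by-item random effects that a full analysis would likely include are omitted.

```python
# Minimal sketch: syntactic priming and the lexical boost as a logistic regression.
# Data are simulated; this is not the study's analysis pipeline.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
prime_b = rng.integers(0, 2, n)        # 1 = prime used construction B
noun_repeated = rng.integers(0, 2, n)  # 1 = head noun repeats between prime and target
# Simulated priming effect (0.8) plus an extra lexical boost (0.6) when the noun repeats.
logits = -0.4 + 0.8 * prime_b + 0.6 * prime_b * noun_repeated
produced_b = rng.binomial(1, 1 / (1 + np.exp(-logits)))

trials = pd.DataFrame({"produced_b": produced_b,
                       "prime_b": prime_b,
                       "noun_repeated": noun_repeated})
model = smf.logit("produced_b ~ prime_b * noun_repeated", data=trials).fit(disp=False)
# prime_b term indexes priming; the prime_b:noun_repeated interaction indexes the lexical boost.
print(model.summary())
```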

11.

Background

Visual cross-modal re-organization is a neurophysiological process that occurs in deafness. The intact sensory modality of vision recruits cortical areas from the deprived sensory modality of audition. Such compensatory plasticity is documented in deaf adults and animals, and is related to deficits in speech perception performance in cochlear-implanted adults. However, it is unclear whether visual cross-modal re-organization takes place in cochlear-implanted children and whether it may be a source of variability contributing to speech and language outcomes. Thus, the aim of this study was to determine if visual cross-modal re-organization occurs in cochlear-implanted children, and whether it is related to deficits in speech perception performance.

Methods

Visual evoked potentials (VEPs) were recorded via high-density EEG in 41 normal hearing children and 14 cochlear-implanted children, aged 5–15 years, in response to apparent motion and form change. Comparisons of VEP amplitude and latency, as well as source localization results, were conducted between the groups in order to assess evidence of visual cross-modal re-organization. Finally, performance on speech perception in background noise was correlated with the visual response in the implanted children.

Results

Distinct VEP morphological patterns were observed in both the normal hearing and cochlear-implanted children. However, the cochlear-implanted children demonstrated larger VEP amplitudes and earlier latency, concurrent with activation of right temporal cortex including auditory regions, suggestive of visual cross-modal re-organization. The VEP N1 latency was negatively related to speech perception in background noise for children with cochlear implants.

Conclusion

Our results are among the first to describe cross-modal re-organization of auditory cortex by the visual modality in deaf children fitted with cochlear implants. Our findings suggest that, as a group, children with cochlear implants show evidence of visual cross-modal recruitment, which may be a contributing source of variability in speech perception outcomes with their implant.
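As an illustration of the brain-behaviour relationship reported in the Results above, here is a minimal sketch that correlates VEP N1 latency with speech-in-noise scores. The values are simulated placeholders, not the study's data.

```python
# Minimal sketch: correlating VEP N1 latency with speech-in-noise performance
# in the implanted group. Values are simulated placeholders.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n1_latency_ms = rng.normal(150, 20, size=14)  # hypothetical N1 latencies, one per child
# Simulate a negative relationship, matching the reported direction of the effect.
speech_in_noise = 80 - 0.3 * (n1_latency_ms - 150) + rng.normal(0, 5, size=14)

r, p = pearsonr(n1_latency_ms, speech_in_noise)
print(f"r = {r:.2f}, p = {p:.3f}")  # a negative r mirrors the reported pattern
```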

12.
Gestural communication in a group of 19 captive chimpanzees (Pan troglodytes) was observed, with particular attention paid to gesture sequences (combinations). A complete inventory of gesture sequences is reported. The majority of these sequences were repetitions of the same gestures, which were often tactile gestures and often occurred in play contexts. Other sequences combined gestures within a modality (visual, auditory, or tactile) or across modalities. The emergence of gesture sequences was ascribed to a recipient's lack of responsiveness rather than a premeditated combination of gestures to increase the efficiency of particular gestures. In terms of audience effects, the chimpanzees were sensitive to the attentional state of the recipient, and therefore used visually-based gestures mostly when others were already attending, as opposed to tactile gestures, which were used regardless of whether the recipient was attending or not. However, the chimpanzees did not use gesture sequences in which the first gesture served to attract the recipient's visual attention before they produced a second gesture that was visually-based. Instead, they used other strategies, such as locomoting in front of the recipient, before they produced a visually-based gesture.

13.

Background

The aim was to investigate, by means of fMRI, the influence of the visual environment on the process of symbolic gesture recognition. Emblems are semiotic gestures that use movements or hand postures to symbolically encode and communicate meaning, independently of language. They often require contextual information to be correctly understood. Until now, observation of symbolic gestures had been studied against a blank background, where the meaning and intentionality of the gesture were not fulfilled.

Methodology/Principal Findings

Normal subjects were scanned while observing short videos of an individual performing symbolic gestures with or without the corresponding visual context, and the context scenes without gestures. The comparison between gestures regardless of the context demonstrated increased activity in the inferior frontal gyrus, the superior parietal cortex and the temporoparietal junction in the right hemisphere and the precuneus and posterior cingulate bilaterally, while the comparison between context and gestures alone did not recruit any of these regions.

Conclusions/Significance

These areas seem to be crucial for the inference of intentions in symbolic gestures observed in their natural context and represent an interrelated network formed by components of the putative human mirror neuron system as well as the mentalizing system.

14.
The present study investigated haptic spatial configuration learning in deaf individuals, hearing sign language interpreters and hearing controls. In three trials, participants had to match ten shapes haptically to the cut-outs in a board as fast as possible. Deaf and hearing sign language users outperformed the hearing controls. A similar difference was observed for a rotated version of the board. The groups did not differ, however, on a free relocation trial. Though a significant sign language experience advantage was observed, comparison with results from a previous study testing the same task in a group of blind individuals showed that this advantage was smaller than that observed for the blind group. These results are discussed in terms of how sign language experience and sensory deprivation benefit haptic spatial configuration processing.

15.
SD Kelly, BC Hansen, DT Clark. PLoS One. 2012;7(8):e42620
Co-speech hand gestures influence language comprehension. The present experiment explored what part of the visual processing system is optimized for processing these gestures. Participants viewed short video clips of speech and gestures (e.g., a person saying "chop" or "twist" while making a chopping gesture) and had to determine whether the two modalities were congruent or incongruent. Gesture videos were designed to stimulate the parvocellular or magnocellular visual pathways by filtering out low or high spatial frequencies (HSF versus LSF) at two levels of degradation severity (moderate and severe). Participants were less accurate and slower at processing gesture and speech at severe versus moderate levels of degradation. In addition, they were slower for LSF versus HSF stimuli, and this difference was most pronounced in the severely degraded condition. However, exploratory item analyses showed that the HSF advantage was modulated by the range of motion and amount of motion energy in each video. The results suggest that hand gestures exploit a wide range of spatial frequencies and that, depending on which frequencies carry the most motion energy, the parvocellular or magnocellular visual pathway is best suited to extract meaning quickly and efficiently.
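As an illustration of the stimulus manipulation described above, here is a minimal sketch of low- versus high-spatial-frequency filtering of a single grayscale frame using an ideal Fourier-domain mask. The cutoff values and the random frame are illustrative; the study's actual filter parameters are not given in this abstract.

```python
# Minimal sketch: keep only low (LSF) or high (HSF) spatial frequencies in a frame.
# Cutoffs are illustrative, not the study's parameters.
import numpy as np

def spatial_frequency_filter(frame: np.ndarray, cutoff: float, keep: str = "low") -> np.ndarray:
    """Keep frequencies below (keep='low') or above (keep='high') the radial cutoff."""
    spectrum = np.fft.fftshift(np.fft.fft2(frame))
    rows, cols = frame.shape
    y, x = np.ogrid[:rows, :cols]
    # Radial distance of each frequency component from the centre of the spectrum.
    dist = np.sqrt((y - rows / 2) ** 2 + (x - cols / 2) ** 2)
    mask = dist <= cutoff if keep == "low" else dist > cutoff
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.real(filtered)

# A random 256x256 "frame" stands in for a real video frame here.
frame = np.random.rand(256, 256)
lsf_frame = spatial_frequency_filter(frame, cutoff=8, keep="low")    # magnocellular-biased
hsf_frame = spatial_frequency_filter(frame, cutoff=24, keep="high")  # parvocellular-biased
```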

16.
The increasing body of research into human and non-human primates' gestural communication reflects the interest in a comparative approach to human communication, particularly possible scenarios of language evolution. One of the central challenges of this field of research is to identify appropriate criteria to differentiate a gesture from other non-communicative actions. After an introduction to the criteria currently used to define non-human primates' gestures and an overview of ongoing research, we discuss different pathways of how manual actions are transformed into manual gestures in both phylogeny and ontogeny. Currently, the relationship between actions and gestures is not only investigated on a behavioural, but also on a neural level. Here, we focus on recent evidence concerning the differential laterality of manual actions and gestures in apes in the framework of a functional asymmetry of the brain for both hand use and language.

17.
Studies have shown that American Sign Language (ASL) fluency has a positive impact on deaf individuals' English reading, but the cognitive and cross-linguistic mechanisms permitting the mapping of a visual-manual language onto a sound-based language have yet to be elucidated. Fingerspelling, which represents English orthography with 26 distinct hand configurations, is an integral part of ASL and has been suggested to provide deaf bilinguals with important cross-linguistic links between sign language and orthography. Using a hierarchical multiple regression analysis, this study examined the relationship of age of ASL exposure, ASL fluency, and fingerspelling skill to reading fluency in deaf college-age bilinguals. After controlling for ASL fluency, fingerspelling skill significantly predicted reading fluency, revealing for the first time that fingerspelling, above and beyond ASL skills, contributes to reading fluency in deaf bilinguals. We suggest that fingerspelling (in the visual-manual modality) and reading (in the visual-orthographic modality) are mutually facilitating because they share common underlying cognitive capacities of word decoding accuracy and automaticity of word recognition. The findings provide support for the hypothesis that the development of English reading proficiency may be facilitated through strengthening of the relationship among fingerspelling, sign language, and orthographic decoding en route to reading mastery, and may also reveal optimal approaches for reading instruction for deaf and hard of hearing children.
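As an illustration of the hierarchical (nested-model) regression logic described above, here is a minimal sketch that tests whether fingerspelling skill predicts reading fluency beyond age of ASL exposure and ASL fluency. Variable names and simulated values are hypothetical.

```python
# Minimal sketch: does fingerspelling predict reading fluency after the ASL
# variables are already in the model? Data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 40
df = pd.DataFrame({
    "age_of_asl_exposure": rng.uniform(0, 15, n),
    "asl_fluency": rng.normal(size=n),
    "fingerspelling": rng.normal(size=n),
})
df["reading_fluency"] = (0.4 * df["asl_fluency"]
                         + 0.5 * df["fingerspelling"]
                         + rng.normal(size=n))

step1 = smf.ols("reading_fluency ~ age_of_asl_exposure + asl_fluency", data=df).fit()
step2 = smf.ols("reading_fluency ~ age_of_asl_exposure + asl_fluency + fingerspelling",
                data=df).fit()

r2_change = step2.rsquared - step1.rsquared               # variance added by fingerspelling
f_stat, p_value, df_diff = step2.compare_f_test(step1)    # F-test of that change
print(f"R2 change = {r2_change:.3f}, F = {f_stat:.2f}, p = {p_value:.4f}")
```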

18.
This review highlights the scientific advances concerning the origins of human right-handedness and language (speech and gestures). The comparative approach we adopted provides evidence that research on human and non-human animals' behavioural asymmetries helps understand the processes that lead to the strong human left-hemisphere specialisation. We review four major non-mutually exclusive environmental factors that are likely to have shaped the evolution of human and non-human primates' manual asymmetry: socioecological lifestyle, postural characteristics, task-level complexity and tool use. We hypothesise the following scenario for the evolutionary origins of human right-handedness: the rightward direction of modern humans' manual laterality would have emerged from our ecological (terrestrial) and social (multilevel system) lifestyle; then, it would have been strengthened by the gradual adoption of the bipedal stance associated with bipedal locomotion, and the increasing level of complexity of our daily tasks including bimanual coordinated actions and tool use. Although hemispheric functional lateralisation has been shaped through evolution, reports indicate that many factors and their mutual intertwinement can modulate human and non-human primates' manual laterality throughout their life cycle: genetic and environmental factors, mainly individual sociodemographic characteristics (e.g., age, sex and rank), behavioural characteristics (e.g., gesture per se and gestural sensory modality) and context-related characteristics (e.g., emotional context and position of target). These environmental (evolutionary and life cycle) factors could also have influenced primates' manual asymmetry indirectly through epigenetic modifications. All these findings led us to propose the hypothesis of a multicausal origin of human right-handedness.

19.
As we speak, we use not only the arbitrary form–meaning mappings of the speech channel but also motivated form–meaning correspondences, i.e. iconic gestures that accompany speech (e.g. an inverted V-shaped hand wiggling across gesture space to demonstrate walking). This article reviews what we know about processing of semantic information from speech and iconic gestures in spoken languages during comprehension of such composite utterances. Several studies have shown that comprehension of iconic gestures involves brain activations known to be involved in semantic processing of speech: i.e. modulation of the electrophysiological recording component N400, which is sensitive to the ease of semantic integration of a word into its previous context, and recruitment of the left-lateralized frontal–posterior temporal network (left inferior frontal gyrus (IFG), medial temporal gyrus (MTG) and superior temporal gyrus/sulcus (STG/S)). Furthermore, we integrate the information coming from both channels, recruiting brain areas such as left IFG, posterior superior temporal sulcus (STS)/MTG and even motor cortex. Finally, this integration is flexible: the temporal synchrony between the iconic gesture and the speech segment, as well as the perceived communicative intent of the speaker, modulate the integration process. Whether these findings are special to gestures or are shared with actions or other visual accompaniments to speech (e.g. lips) or other visual symbols such as pictures is discussed, as well as the implications for a multimodal view of language.

20.
The present study investigates whether producing gestures would facilitate route learning in a navigation task and whether its facilitation effect is comparable to that of hand movements that leave physical visible traces. In two experiments, we focused on gestures produced without accompanying speech, i.e., co-thought gestures (e.g., an index finger traces the spatial sequence of a route in the air). Adult participants were asked to study routes shown in four diagrams, one at a time. Participants reproduced the routes (verbally in Experiment 1 and non-verbally in Experiment 2) without rehearsal or after rehearsal by mentally simulating the route, by drawing it, or by gesturing (either in the air or on paper). Participants who moved their hands (either in the form of gestures or drawing) recalled better than those who mentally simulated the routes and those who did not rehearse, suggesting that hand movements produced during rehearsal facilitate route learning. Interestingly, participants who gestured the routes in the air or on paper recalled better than those who drew them on paper in both experiments, suggesting that the facilitation effect of co-thought gesture holds for both verbal and nonverbal recall modalities. This is possibly because co-thought gesture, as a kind of representational action, consolidates the spatial sequence better than drawing does and thus exerts a more powerful influence on spatial representation.
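As an illustration of the condition comparison described above, here is a minimal sketch that compares route-recall accuracy across rehearsal conditions with a one-way ANOVA. Condition labels and scores are illustrative placeholders, not the study's data.

```python
# Minimal sketch: comparing route-recall accuracy across rehearsal conditions.
# The scores below are illustrative placeholders.
import numpy as np
from scipy.stats import f_oneway

recall = {
    "no_rehearsal":      np.array([0.45, 0.50, 0.40, 0.55, 0.48]),
    "mental_simulation": np.array([0.50, 0.52, 0.47, 0.58, 0.51]),
    "drawing":           np.array([0.60, 0.65, 0.58, 0.62, 0.66]),
    "gesture_air":       np.array([0.70, 0.68, 0.72, 0.75, 0.69]),
    "gesture_paper":     np.array([0.71, 0.67, 0.73, 0.70, 0.74]),
}

f_stat, p_value = f_oneway(*recall.values())
print({name: round(scores.mean(), 3) for name, scores in recall.items()})
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
```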
