Similar Documents
A total of 20 similar documents were found (search time: 46 ms).
1.
Neuropsychological and imaging studies have shown that the left supramarginal gyrus (SMG) is specifically involved in processing spatial terms (e.g. above, left of), which locate places and objects in the world. The current fMRI study focused on the nature and specificity of representing spatial language in the left SMG by combining behavioral and neuronal activation data in blind and sighted individuals. Data from the blind provide an elegant way to test the supramodal representation hypothesis, i.e. the idea that abstract codes represent spatial relations, which predicts no activation differences between blind and sighted individuals. Indeed, the left SMG was activated during spatial language processing in both blind and sighted individuals, implying a supramodal representation of spatial and other dimensional relations that does not require visual experience to develop. However, in the absence of vision, functional reorganization of the visual cortex is known to take place. An important consideration with respect to our finding is therefore the amount of functional reorganization during language processing in our blind participants. To assess this, the participants also performed a verb generation task. We observed that occipital areas were activated during covert language generation only in the blind. Additionally, in the first task, functional reorganization was observed for processing language with a high linguistic load. As the visual cortex was not specifically active for spatial content in the first task, and no reorganization was observed in the SMG, the latter finding further supports the notion that the left SMG is the main node for a supramodal representation of verbal spatial relations.

2.

Background

Early deafness leads to enhanced attention in the visual periphery. Yet whether this enhancement confers advantages in everyday life remains unknown, as deaf individuals have been shown to be more distracted by irrelevant peripheral information than their hearing peers. Here, we show that deaf individuals exhibit a performance advantage in a complex attentional task.

Methodology/Principal Findings

We employed the Useful Field of View (UFOV) task, which requires central target identification concurrent with peripheral target localization in the presence of distractors – a divided, selective attention task. First, a comparison of deaf and hearing adults with or without sign language skills established that deafness, and not sign language use, drives the UFOV enhancement. Second, UFOV performance was enhanced in deaf children, but only after 11 years of age.

Conclusions/Significance

This work demonstrates that, following early auditory deprivation, visual attention resources toward the periphery are gradually enhanced, eventually producing a clear behavioral advantage on a selective visual attention task by pre-adolescence.

3.
Conditions for the persistence (i.e., protection from loss) of a sign language are investigated assuming monogenic recessive inheritance of deafness, assortative mating for deafness or hearing, and cultural transmission of the sign language to deaf individuals from their deaf parents and deaf maternal grandparents. A new method is introduced to deal with the problem of grandparental transmission, in which the basic variables are the frequencies of triplets comprising a mother, a father, and their daughter of permissible phenogenotypes. Standard stability analysis is then performed on the system of linear recursions in the frequencies of these triplets, derived on the assumption that signers (users of the sign language) are rare. It is shown that assortative mating is the most important factor contributing to persistence, but that grandparental transmission can also have a significant effect when assortment is as strong as observed in England and the United States.
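As a numerical illustration of the stability analysis described above, the following Python sketch checks persistence via the dominant eigenvalue of a linearized recursion. It is hypothetical: the paper's recursions act on the frequencies of mother-father-daughter triplets, and the entries of the placeholder matrix M_example below are illustrative values, not the parameters derived in the study.

import numpy as np

def signers_persist(M):
    # Signers, assumed rare, follow the linear recursion x' = M x.
    # They are protected from loss iff the spectral radius of M exceeds 1.
    return max(abs(np.linalg.eigvals(M))) > 1.0

# Placeholder 3x3 matrix over three hypothetical triplet types; real entries
# would be derived from the assortative mating and transmission parameters.
M_example = np.array([[0.6, 0.3, 0.1],
                      [0.4, 0.7, 0.2],
                      [0.1, 0.2, 0.9]])
print(signers_persist(M_example))  # True: spectral radius exceeds 1

Here a spectral radius above 1 means a rare signer population grows rather than dies out, which is the sense of "protection from loss" used in the abstract.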

4.
Evidence indicates that adequate phonological abilities are necessary to develop proficient reading skills and that, later in life, phonology also plays a role in the covert visual word recognition of expert readers. Impairments of acoustic perception, such as deafness, can lead to atypical phonological representations of written words and letters, which in turn can affect reading proficiency. Here, we report an experiment in which young adults with different levels of acoustic perception (i.e., hearing and deaf individuals) and different modes of communication (i.e., hearing individuals using spoken language, deaf individuals with a preference for sign language, and deaf individuals using the oral modality with little or no competence in sign language) performed a visual lexical decision task, which consisted of categorizing real words and consonant strings. The lexicality effect was restricted to deaf signers, who responded faster to real words than to consonant strings, showing over-reliance on whole-word lexical processing of stimuli. No effect of stimulus type was found in deaf individuals using the oral modality or in hearing individuals. Thus, mode of communication modulates the lexicality effect. This suggests that learning a sign language during development shapes visuo-motor representations of words, which are tuned to the actions used to express them (phono-articulatory movements vs. hand movements) and to the associated perceptions. As these visuo-motor representations are elicited during on-line linguistic processing and can overlap with the perceptual-motor processes required to execute the task, they can produce interference or facilitation effects.

5.
We model the cultural transmission of sign language when there is one-locus genetic variation for deafness and hearing. Our premises are that the deaf are more motivated to learn sign language than the hearing, and that a vertically transmitted sign language, unlike recessive hereditary deafness, cannot "jump a generation." Conditions are obtained for the persistence (i.e., protection from loss) of signers. These conditions are more easily satisfied the greater the fraction of the hearing who also learn sign language and the higher the frequency of the recessive gene for deafness. Persistence is also facilitated by assortative mating for deafness, but not by assortment for signing. With vertical transmission only, it is necessary that one signer parent be able to transmit sign language with greater than one-half the efficiency of two. Under the assumption that the hearing do not learn sign language, the following additional results are obtained. Persistence is more likely with dominant as opposed to recessive inheritance. When recessive hereditary and acquired deafness co-occur, increasing the frequency of the latter has opposite effects depending on the degree of assortment. Opportunities for the deaf to learn sign language outside the family appear not to affect the conditions for persistence.
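The vertical-transmission condition stated above can be written as a one-line check. This is a minimal sketch under assumed parameter names: b1 and b2 are illustrative labels for the probability that a child acquires sign language with one or two signer parents, and the full model's dependence on allele frequency and assortative mating is omitted.

def one_parent_transmission_sufficient(b1, b2):
    # b1: probability of acquiring sign language with one signer parent.
    # b2: probability with two signer parents (0 <= b1 <= b2 <= 1).
    # With vertical transmission only, persistence requires b1 > b2 / 2.
    return b1 > 0.5 * b2

print(one_parent_transmission_sufficient(0.40, 0.90))  # False: 0.40 <= 0.45
print(one_parent_transmission_sufficient(0.60, 0.90))  # True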

6.
One important question in captioning research is whether adding captions to sign language interpreter videos can improve viewers' comprehension compared with sign language interpreter videos without captions. In our study, an experiment was conducted using four video clips containing information about everyday events. Fifty-one deaf and hard of hearing sign language users alternately watched sign language interpreter videos with and without captions, and afterwards answered ten questions. The results showed that the presence of captions improved comprehension rates, which increased by 24% among deaf viewers and 42% among hard of hearing viewers. The largest differences in comprehension between videos with and without captions were found for the topics of hiking and culture, where comprehension was higher when captions were used. These results support the consistent use of captions in sign language interpreter videos in various media.

7.
In many nonhuman species, neural computations of navigational information such as position and orientation are not tied to a specific sensory modality [1, 2]. Rather, spatial signals are integrated from multiple input sources, likely leading to abstract representations of space. In contrast, the potential for abstract spatial representations in humans is not known, because most neuroscientific experiments on human navigation have focused exclusively on visual cues. Here, we tested the modality independence hypothesis with two functional magnetic resonance imaging (fMRI) experiments that characterized computations in regions implicated in processing spatial layout [3]. According to the hypothesis, such regions should be recruited for spatial computation of 3D geometric configuration, independent of a specific sensory modality. In support of this view, sighted participants showed strong activation of the parahippocampal place area (PPA) and the retrosplenial cortex (RSC) for visual and haptic exploration of information-matched scenes but not objects. Functional connectivity analyses suggested that these effects were not related to visual recoding, which was further supported by a similar preference for haptic scenes found with blind participants. Taken together, these findings establish the PPA/RSC network as critical in modality-independent spatial computations and provide important evidence for a theory of high-level abstract spatial information processing in the human brain.

8.
We reach for and grasp different-sized objects numerous times per day. Most of these movements are visually-guided, but some are guided by the sense of touch (i.e. haptically-guided), such as reaching for your keys in a bag, or for an object in a dark room. A marked right-hand preference has been reported during visually-guided grasping, particularly for small objects. However, little is known about hand preference for haptically-guided grasping. Recently, a study has shown a reduction in right-hand use in blindfolded individuals, and an absence of hand preference if grasping was preceded by a short haptic experience. These results suggest that vision plays a major role in hand preference for grasping. If this were the case, then one might expect congenitally blind (CB) individuals, who have never had visual experience, to exhibit no hand preference. Two novel findings emerge from the current study. First, contrary to our expectation, CB individuals used their right hand during haptically-guided grasping to the same extent as visually-unimpaired (VU) individuals did during visually-guided grasping. Second, object size affected hand use in opposite ways for haptically- versus visually-guided grasping: big objects were picked up with the right hand more often during haptically-guided grasping, but less often during visually-guided grasping. This result highlights the different demands that object features pose on the two sensory systems. Overall the results demonstrate that hand preference for grasping is independent of visual experience, and they suggest a left-hemisphere specialization for the control of grasping that goes beyond sensory modality.

9.
Psychology and neuroscience have a long-standing tradition of studying blind individuals to investigate how visual experience shapes perception of the external world. Here, we study how blind people experience their own body by exposing them to a multisensory body illusion: the somatic rubber hand illusion. In this illusion, healthy blindfolded participants experience that they are touching their own right hand with their left index finger, when in fact they are touching a rubber hand with their left index finger while the experimenter touches their right hand in a synchronized manner (Ehrsson et al. 2005). We compared the strength of this illusion in a group of blind individuals (n = 10), all of whom had experienced severe visual impairment or complete blindness from birth, and a group of age-matched blindfolded sighted participants (n = 12). The illusion was quantified subjectively using questionnaires and behaviorally by asking participants to point to the felt location of the right hand. The results showed that the sighted participants experienced a strong illusion, whereas the blind participants experienced no illusion at all, a difference that was evident in both tests employed. A further experiment testing the participants' basic ability to localize the right hand in space without vision (proprioception) revealed no difference between the two groups. Taken together, these results suggest that blind individuals with impaired visual development have a more veridical percept of self-touch and a less flexible and dynamic representation of their own body in space compared to sighted individuals. We speculate that the multisensory brain systems that re-map somatosensory signals onto external reference frames are less developed in blind individuals and therefore do not allow efficient fusion of tactile and proprioceptive signals from the two upper limbs into a single illusory experience of self-touch as in sighted individuals.

10.
The occipital cortex (OC) of early-blind humans is activated during various nonvisual perceptual and cognitive tasks, but little is known about its modular organization. Using functional MRI we tested whether processing of auditory versus tactile and spatial versus nonspatial information was dissociated in the OC of the early blind. No modality-specific OC activation was observed. However, the right middle occipital gyrus (MOG) showed a preference for spatial over nonspatial processing of both auditory and tactile stimuli. Furthermore, MOG activity was correlated with accuracy of individual sound localization performance. In sighted controls, most of extrastriate OC, including the MOG, was deactivated during auditory and tactile conditions, but the right MOG was more activated during spatial than nonspatial visual tasks. Thus, although the sensory modalities driving the neurons in the reorganized OC of blind individuals are altered, the functional specialization of extrastriate cortex is retained regardless of visual experience.

11.
Deaf individuals are known to process visual stimuli at the periphery better than the normal hearing population. However, very few studies have examined attention orienting in the oculomotor domain in the deaf, particularly when targets appear at variable eccentricities. In this study, we examined whether the visual perceptual processing advantage reported in deaf people also modulates spatial attentional orienting in eye movement responses. We used a spatial cueing task with cued and uncued targets that appeared at two different eccentricities and explored attentional facilitation and inhibition, eliciting both a saccadic and a manual response. The deaf showed a larger cueing effect for ocular responses than the normal hearing participants, but there was no group difference for manual responses. There was also greater facilitation at the periphery for both saccadic and manual responses, irrespective of group. These results suggest that, owing to their superior visual processing ability, the deaf may orient attention to targets faster. We discuss the results in terms of previous studies on cueing and attentional orienting in the deaf.

12.
Under certain specific conditions, people who are blind have a perception of space that is equivalent to that of sighted individuals. However, in most cases their spatial perception is impaired. Is this simply due to their current lack of access to visual information, or does the lack of visual information throughout development prevent the proper integration of the neural systems underlying spatial cognition? Sensory Substitution Devices (SSDs) can transfer visual information via other senses and provide a unique tool for examining this question. We hypothesized that the use of our SSD (the EyeCane: a device that translates distance information into sounds and vibrations) can enable blind people to attain a performance level similar to that of the sighted in a spatial navigation task. We gave fifty-six participants training with the EyeCane. They navigated in real life-size mazes using the EyeCane SSD and in virtual renditions of the same mazes using a virtual EyeCane. The participants were divided into four groups according to visual experience: congenitally blind, low vision & late blind, blindfolded sighted, and sighted visual controls. We found that with the EyeCane participants made fewer errors in the maze, had fewer collisions, and completed the maze in less time in the last session compared to the first. By the third session, participants had improved to the point where their individual trials were no longer significantly different from the initial performance of the sighted visual group in terms of errors, time, and collisions.
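To illustrate the kind of encoding such a device performs, the sketch below maps a distance reading to a tone pitch. The abstract does not specify the EyeCane's actual encoding, so the linear mapping, range limit, and frequency values here are purely hypothetical.

def distance_to_tone_hz(distance_m, max_range_m=5.0,
                        f_near_hz=1000.0, f_far_hz=200.0):
    # Hypothetical mapping: nearer obstacles yield higher-pitched tones.
    d = min(max(distance_m, 0.0), max_range_m)  # clamp to the sensing range
    return f_near_hz + (f_far_hz - f_near_hz) * (d / max_range_m)

print(distance_to_tone_hz(0.5))  # 920.0 Hz for a nearby obstacle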

13.
14.

Objective

The purpose of this study was to examine the level of empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls, and to determine the influence of language and various hearing loss characteristics on the development of empathy.

Methods

The study group (mean age 11.9 years) consisted of 122 deaf and hard of hearing children (52 with cochlear implants and 70 with conventional hearing aids); the control group consisted of 162 normal hearing children. The two groups were compared using self-reports, a parent report, and observation tasks rating the children's level of empathy, their attendance to others' emotions, emotion recognition, and supportive behavior.

Results

Deaf and hard of hearing children reported lower levels of cognitive empathy and prosocial motivation than normal hearing children, regardless of their type of hearing device. The level of emotion recognition was equal in both groups. During observations, deaf and hard of hearing children showed more attention to the emotion-evoking events but less supportive behavior compared to their normal hearing peers. Deaf and hard of hearing children attending mainstream education or using oral language showed higher levels of cognitive empathy and prosocial motivation than deaf and hard of hearing children who use sign (supported) language or attend special education; however, they were still outperformed by normal hearing children.

Conclusions

Deaf and hard of hearing children, especially those in special education, show lower levels of empathy than normal hearing children, which can have consequences for initiating and maintaining relationships.

15.
Blind individuals manifest remarkable abilities in navigating through space despite their lack of vision. They have previously been shown to perform normally, or even supra-normally, in tasks involving spatial hearing in near space, a region that can, however, be calibrated through sensory-motor feedback. Here we show that blind individuals not only properly map auditory space beyond their peri-personal environment but also demonstrate supra-normal performance when subtle acoustic cues for target location and distance must be used to carry out the task. Moreover, it is generally postulated that such abilities rest in part on cross-modal cortical reorganization, particularly in the immature brain, where substantial synaptogenesis is still possible. Nonetheless, we show for the first time that even late-onset blind subjects develop above-normal spatial abilities, suggesting that significant compensation can occur in the adult.

16.
A recent experimental study suggests that blind individuals may compensate for their lack of vision with better-than-normal hearing. This provides support for a view dating back to 18th-century philosophers, but the data raise as many problems as they solve.

17.
We examined the effects of visual deprivation at birth on the development of the corpus callosum in a large group of congenitally blind individuals. We acquired high-resolution T1-weighted MRI scans in 28 congenitally blind and 28 normally sighted subjects matched for age and gender. There was no overall group effect of visual deprivation on the total surface area of the corpus callosum. However, subdividing the corpus callosum into five subdivisions revealed significant regional changes in its three most posterior parts. Compared to the sighted controls, congenitally blind individuals showed a 12% reduction in the splenium and a 20% increase in the isthmus and the posterior part of the body. A shape analysis further revealed that the bending angle of the corpus callosum was more convex in congenitally blind than in sighted control subjects. The observed morphometric changes in the corpus callosum are in line with the well-described cross-modal functional and structural neuroplastic changes in congenital blindness.

18.
The present study was carried out to investigate whether sign language structure plays a role in the processing of complex words (i.e., derivational and compound words), in particular in the delayed reading of complex words by deaf adolescents. Chinese deaf adolescents were found to respond faster to derivational words than to compound words for one-sign-structure words, but showed comparable performance for two-sign-structure words. For both derivational and compound words, response latencies to one-sign-structure words were shorter than to two-sign-structure words. These results provide strong evidence that the structure of sign language affects written word processing in Chinese. Additionally, the differences between derivational and compound words in the one-sign-structure condition indicate that Chinese deaf adolescents acquire print morphological awareness. The results also showed delayed word reading for derivational words with two signs (DW-2), compound words with one sign (CW-1), and compound words with two signs (CW-2), but not for derivational words with one sign (DW-1), with the delay being largest for DW-2, intermediate for CW-2, and smallest for CW-1. This suggests that the structure of sign language has an impact on the delayed processing of Chinese written words in deaf adolescents. These results provide insight into the mechanisms by which sign language structure affects written word processing, and into why such processing is delayed relative to that of hearing peers of the same age.

19.
Sight is undoubtedly important for finding food, appreciating it, and cooking. Blind individuals are strongly impaired in finding food, which limits the variety of flavours they are exposed to. We have previously shown that, compared to sighted controls, congenitally blind individuals have enhanced olfactory but reduced taste perception. In this study we tested the hypothesis that congenitally blind subjects have enhanced orthonasal, but not retronasal, olfactory skills. Twelve congenitally blind and 14 sighted control subjects, matched in age, gender, and body mass index, were asked to identify odours from grocery-available food powders. Results showed that blind subjects were significantly faster, and tended to be better, at identifying odours presented orthonasally. This was not the case when odorants were presented retronasally. We also found a significant group × route interaction: although both groups performed better for retronasally than orthonasally presented odours, this gain was less pronounced in blind subjects. Finally, our data revealed that blind subjects were more familiar with the orthonasal odorants and used the retronasal odorants less often for cooking than their sighted counterparts. These results confirm that orthonasal, but not retronasal, olfactory perception is enhanced in congenital blindness, a result concordant with the reduced exposure to food variety in this group.

20.
Psycholinguistic studies of sign language processing provide valuable opportunities to assess whether language phenomena, which are primarily studied in spoken language, are fundamentally shaped by peripheral biology. For example, we know that when given a choice between two syntactically permissible ways to express the same proposition, speakers tend to choose structures that were recently used, a phenomenon known as syntactic priming. Here, we report two experiments testing syntactic priming of a noun phrase construction in American Sign Language (ASL). Experiment 1 shows that second language (L2) signers with normal hearing exhibit syntactic priming in ASL and that priming is stronger when the head noun is repeated between prime and target (the lexical boost effect). Experiment 2 shows that syntactic priming is equally strong among deaf native L1 signers, deaf late L1 learners, and hearing L2 signers. Experiment 2 also tested for, but did not find evidence of, phonological or semantic boosts to syntactic priming in ASL. These results show that despite the profound differences between spoken and signed languages in terms of how they are produced and perceived, the psychological representation of sentence structure (as assessed by syntactic priming) operates similarly in sign and speech.
