Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Sound symbolism is a non-arbitrary relationship between speech sounds and meaning. We review evidence that, contrary to the traditional view in linguistics, sound symbolism is an important design feature of language, which affects online processing of language, and most importantly, language acquisition. We propose the sound symbolism bootstrapping hypothesis, claiming that (i) pre-verbal infants are sensitive to sound symbolism, due to a biologically endowed ability to map and integrate multi-modal input, (ii) sound symbolism helps infants gain referential insight for speech sounds, (iii) sound symbolism helps infants and toddlers associate speech sounds with their referents to establish a lexical representation and (iv) sound symbolism helps toddlers learn words by allowing them to focus on referents embedded in a complex scene, alleviating Quine's problem. We further explore the possibility that sound symbolism is deeply related to language evolution, drawing the parallel between historical development of language across generations and ontogenetic development within individuals. Finally, we suggest that sound symbolism bootstrapping is a part of a more general phenomenon of bootstrapping by means of iconic representations, drawing on similarities and close behavioural links between sound symbolism and speech-accompanying iconic gesture.

2.
Thierry G, Giraud AL, Price C. Neuron. 2003;38(3):499-506.
Patient studies suggest that speech and environmental sounds are differentially processed by the left and right hemispheres. Here, using functional imaging in normal subjects, we compared semantic processing of spoken words to equivalent processing of environmental sounds, after controlling for low-level perceptual differences. Words enhanced activation in left anterior and posterior superior temporal regions, while environmental sounds enhanced activation in a right posterior superior temporal region. This left/right dissociation was unchanged by different attentional/working memory contexts, but it was specific to tasks requiring semantic analysis. While semantic processing involves widely distributed networks in both hemispheres, our results support the hypothesis of a dual access route specific for verbal and nonverbal material, respectively.

3.
Memory may have evolved to preserve information processed in terms of its fitness-relevance. Based on the assumption that the human mind comprises different fitness-relevant adaptive mechanisms contributing to survival and reproductive success, we compared alternative fitness-relevant processing scenarios with survival processing. Participants rated words for relevancy to fitness-relevant and control conditions, followed by a delay and a surprise recall test (Experiment 1a). Participants recalled more words processed for their relevance to a survival situation. We replicated these findings in an online study (Experiment 2) and a study using revised fitness-relevant scenarios (Experiment 3). Across all experiments, we did not find a mnemonic benefit for alternative fitness-relevant processing scenarios, questioning assumptions associated with an evolutionary account of remembering. Based on these results, fitness-relevance seems to be too wide-ranging a construct to account for the memory findings associated with survival processing. We propose that memory may be hierarchically sensitive to fitness-relevant processing instructions. We encourage future researchers to investigate the underlying mechanisms responsible for survival processing effects and work toward developing a taxonomy of adaptive memory.

4.
Recent demonstrations that music is capable of conveying semantically meaningful information have raised several questions as to what the underlying mechanisms of establishing meaning in music are, and whether the meaning of music is represented in a comparable fashion to language meaning. This paper presents evidence showing that expressed affect is a primary pathway to music meaning and that meaning in music is represented in a very similar fashion to language meaning. In two experiments using EEG and fMRI, it was shown that single chords varying in harmonic roughness (consonance/dissonance) and thus perceived affect could prime the processing of subsequently presented affective target words, as indicated by an increased N400 and activation of the right middle temporal gyrus (MTG). Most importantly, however, when primed by affective words, single chords incongruous to the preceding affect also elicited an N400 and activated the right posterior STS, an area implicated in processing meaning of a variety of signals (e.g. prosody, voices, motion). This provides an important piece of evidence that music meaning is represented in a very similar but also distinct fashion to language meaning: both elicit an N400, but they activate different portions of the right temporal lobe.

5.
6.
It is now apparent that the visual system reacts to stimuli very fast, with many brain areas activated within 100 ms. It is, however, unclear how much detail is extracted about stimulus properties in the early stages of visual processing. Here, using magnetoencephalography, we show that the visual system separates different facial expressions of emotion well within 100 ms after image onset, and that this separation is processed differently depending on where in the visual field the stimulus is presented. Seven right-handed males participated in a face affect recognition experiment in which they viewed happy, fearful and neutral faces. Blocks of images were shown either at the center or in one of the four quadrants of the visual field. For centrally presented faces, the emotions were separated fast, first in the right superior temporal sulcus (STS; 35–48 ms), followed by the right amygdala (57–64 ms) and medial pre-frontal cortex (83–96 ms). For faces presented in the periphery, the emotions were separated first in the ipsilateral amygdala and contralateral STS. We conclude that the amygdala and STS likely play different roles in early visual processing, recruiting distinct neural networks for action: the amygdala alerts sub-cortical centers for an appropriate autonomic system response for fight-or-flight decisions, while the STS facilitates more cognitive appraisal of situations and links appropriate cortical sites together. Different problems may then arise when either network fails to initiate or function properly.

7.

Background

Previous studies have claimed that a precise split at the vertical midline of each fovea causes all words to the left and right of fixation to project to the opposite, contralateral hemisphere, and this division in hemispheric processing has considerable consequences for foveal word recognition. However, research in this area is dominated by the use of stimuli from Latinate languages, which may induce specific effects on performance. Consequently, we report two experiments using stimuli from a fundamentally different, non-Latinate language (Arabic) that offers an alternative way of revealing effects of split-foveal processing, if they exist.

Methods and Findings

Words (and pseudowords) were presented to the left or right of fixation, either close to fixation and entirely within foveal vision, or further from fixation and entirely within extrafoveal vision. Fixation location and stimulus presentations were carefully controlled using an eye-tracker linked to a fixation-contingent display. To assess word recognition, Experiment 1 used the Reicher-Wheeler task and Experiment 2 used the lexical decision task.

Results

Performance in both experiments indicated a functional division in hemispheric processing for words in extrafoveal locations (in recognition accuracy in Experiment 1 and in reaction times and error rates in Experiment 2) but no such division for words in foveal locations.

Conclusions

These findings from a non-Latinate language provide new evidence that although a functional division in hemispheric processing exists for word recognition outside the fovea, this division does not extend up to the point of fixation. Some implications for word recognition and reading are discussed.

8.
Beauchamp MS, Lee KE, Haxby JV, Martin A. Neuron. 2002;34(1):149-159.
We tested the hypothesis that different regions of lateral temporal cortex are specialized for processing different types of visual motion by studying the cortical responses to moving gratings and to humans and manipulable objects (tools and utensils) that were either stationary or moving with natural or artificially generated motions. Segregated responses to human and tool stimuli were observed in both ventral and lateral regions of posterior temporal cortex. Relative to ventral cortex, lateral temporal cortex showed a larger response for moving compared with static humans and tools. Superior temporal cortex preferred human motion, and middle temporal gyrus preferred tool motion. A greater response was observed in STS to articulated compared with unarticulated human motion. Specificity for different types of complex motion (in combination with visual form) may be an organizing principle in lateral temporal cortex.

9.
Jiang Y, He S. Current Biology. 2006;16(20):2023-2029.
Perceiving faces is critical for social interaction. Evidence suggests that different neural pathways may be responsible for processing face identity and expression information. By using functional magnetic resonance imaging (fMRI), we measured brain responses when observers viewed neutral, fearful, and scrambled faces, either visible or rendered invisible through interocular suppression. The right fusiform face area (FFA), the right superior temporal sulcus (STS), and the amygdala responded strongly to visible faces. However, when face images became invisible, activity in FFA to both neutral and fearful faces was much reduced, although still measurable; activity in the STS was robust only to invisible fearful faces but not to neutral faces. Activity in the amygdala was equally strong in both the visible and invisible conditions to fearful faces but much weaker in the invisible condition for the neutral faces. In the invisible condition, amygdala activity was highly correlated with that of the STS but not with FFA. The results in the invisible condition support the existence of dissociable neural systems specialized for processing facial identity and expression information. When images are invisible, cortical responses may reflect primarily feed-forward visual-information processing and thus allow us to reveal the distinct functions of FFA and STS.

10.
Visual motion information from dynamic environments is important in multisensory temporal perception. However, it is unclear how visual motion information influences the integration of multisensory temporal perceptions. We investigated whether visual apparent motion affects audiovisual temporal perception. Visual apparent motion is a phenomenon in which two flashes presented in sequence in different positions are perceived as continuous motion. Across three experiments, participants performed temporal order judgment (TOJ) tasks. Experiment 1 was a TOJ task conducted in order to assess audiovisual simultaneity during perception of apparent motion. The results showed that the point of subjective simultaneity (PSS) was shifted toward a sound-lead stimulus, and the just noticeable difference (JND) was reduced compared with a normal TOJ task with a single flash. This indicates that visual apparent motion affects audiovisual simultaneity and improves temporal discrimination in audiovisual processing. Experiment 2 was a TOJ task conducted in order to remove the influence of the amount of flash stimulation from Experiment 1. The PSS and JND during perception of apparent motion were almost identical to those in Experiment 1, but differed from those for successive perception when long temporal intervals were included between two flashes without motion. This showed that the result obtained under the apparent motion condition was unaffected by the amount of flash stimulation. Because apparent motion was produced by a constant interval between two flashes, the results may be accounted for by specific prediction. In Experiment 3, we eliminated the influence of prediction by randomizing the intervals between the two flashes. However, the PSS and JND did not differ from those in Experiment 1. It became clear that the results obtained for the perception of visual apparent motion were not attributable to prediction. 
Our findings suggest that visual apparent motion changes temporal simultaneity perception and improves temporal discrimination in audiovisual processing.
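The PSS (point of subjective simultaneity) and JND (just noticeable difference) reported in this abstract are standard psychometric quantities derived from TOJ data. As a minimal illustrative sketch (not from the study itself: the SOA values and response proportions below are invented, and SciPy is assumed), they are typically obtained by fitting a cumulative Gaussian to the proportion of "visual first" judgments at each stimulus onset asynchrony:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical TOJ data: SOA in ms (negative = sound leads) and the
# proportion of "visual first" responses observed at each SOA.
soas = np.array([-120, -80, -40, 0, 40, 80, 120], dtype=float)
p_visual_first = np.array([0.05, 0.12, 0.30, 0.55, 0.78, 0.93, 0.97])

def psychometric(soa, pss, sigma):
    """Cumulative Gaussian: probability of judging the visual stimulus first."""
    return norm.cdf(soa, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(psychometric, soas, p_visual_first, p0=(0.0, 50.0))

# PSS: the SOA at which the two stimuli appear simultaneous (50% point).
# JND: half the distance between the 25% and 75% points of the fitted curve.
jnd = sigma * norm.ppf(0.75)
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```

A shift of the fitted PSS toward negative SOAs corresponds to the "sound-lead" shift described above, and a smaller JND corresponds to the improved temporal discrimination.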

11.
Electrophysiological recording in the anterior superior temporal sulcus (STS) of monkeys has demonstrated separate cell populations responsive to direct and averted gaze. Human functional imaging has demonstrated posterior STS activation in gaze processing, particularly in coding the intentions conveyed by gaze, but to date has provided no evidence of dissociable coding of different gaze directions. Because the spatial resolution typical of group-based fMRI studies (approximately 6-10 mm) exceeds the size of cellular patches sensitive to different facial characteristics (1-4 mm in monkeys), a more sensitive technique may be required. We therefore used fMRI adaptation, which is considered to offer superior resolution, to investigate whether the human anterior STS contains representations of different gaze directions, as suggested by non-human primate research. Subjects viewed probe faces gazing left, directly ahead, or right. Adapting to leftward gaze produced a reduction in BOLD response to left relative to right (and direct) gaze probes in the anterior STS and inferior parietal cortex; rightward gaze adaptation produced a corresponding reduction to right gaze probes. Consistent with these findings, averted gaze in the adapted direction was misidentified as direct. Our study provides the first human evidence of dissociable neural systems for left and right gaze.

12.
The hedonic meaning of words affects word recognition, as shown by behavioral, functional imaging, and event-related potential (ERP) studies. However, the underlying spatiotemporal dynamics and cognitive functions remain elusive, partly due to methodological limitations of previous studies. Here, we address these difficulties by applying combined electro-magnetoencephalographic (EEG/MEG) source localization techniques. Participants covertly read emotionally high-arousing positive and negative nouns, while EEG and MEG were recorded simultaneously. Combined EEG/MEG current-density reconstructions for the P1 (80–120 ms), P2 (150–190 ms) and EPN components (200–300 ms) were computed using realistic individual head models, with a cortical constraint. Relative to negative words, the P1 to positive words predominantly involved language-related structures (left middle temporal and inferior frontal regions), and posterior structures related to directed attention (occipital and parietal regions). Effects shifted to the right hemisphere in the P2 component. By contrast, negative words received more activation in the P1 time-range only, recruiting prefrontal regions, including the anterior cingulate cortex (ACC). Effects in the EPN were not statistically significant. These findings show that different neuronal networks are active when positive versus negative words are processed. We account for these effects in terms of an "emotional tagging" of word forms during language acquisition. These tags then give rise to different processing strategies, including enhanced lexical processing of positive words and a very fast language-independent alert response to negative words. The valence-specific recruitment of different networks might underlie fast adaptive responses to both approach- and withdrawal-related stimuli, be they acquired or biological.

13.
Neural processing of auditory looming in the human brain
Acoustic intensity change, along with interaural, spectral, and reverberation information, is an important cue for the perception of auditory motion. Approaching sound sources produce increases in intensity, and receding sound sources produce corresponding decreases. Human listeners typically overestimate increasing compared to equivalent decreasing sound intensity and underestimate the time to contact of approaching sound sources. These characteristics could provide a selective advantage by increasing the margin of safety for response to looming objects. Here, we used dynamic intensity and functional magnetic resonance imaging to examine the neural underpinnings of the perceptual priority for rising intensity. We found that, consistent with activation by horizontal and vertical auditory apparent motion paradigms, rising and falling intensity activated the right temporal plane more than constant intensity. Rising compared to falling intensity activated a distributed neural network subserving space recognition, auditory motion perception, and attention and comprising the superior temporal sulci and the middle temporal gyri, the right temporoparietal junction, the right motor and premotor cortices, the left cerebellar cortex, and a circumscribed region in the midbrain. This anisotropic processing of acoustic intensity change may reflect the salience of rising intensity produced by looming sources in natural environments.

14.
Rubinsten O, Sury D. PLoS ONE. 2011;6(9):e24079.
In contrast to quantity processing, the nature of ordinality has to date received little attention from researchers, despite the fact that both quantity and ordinality are embodied in numerical information. Here we ask whether two separate core systems lie at the foundations of numerical cognition: (1) the traditional and well-accepted numerical magnitude system, and (2) a core system for representing ordinal information. We report two novel experiments of ordinal processing that explored the relation between ordinal and numerical information processing in typically developing adults and adults with developmental dyscalculia (DD). Participants made "ordered" or "non-ordered" judgments about 3 groups of dots (non-symbolic numerical stimuli; Experiment 1) and 3 numbers (symbolic task; Experiment 2). In contrast to previous findings and arguments about a quantity deficit in DD participants, when quantity and ordinality are dissociated (as in the current tasks), DD participants exhibited a normal ratio effect in the non-symbolic ordinal task. They did not show, however, the ordinality effect. The ordinality effect in DD appeared only when area and density were randomized, and only in the descending direction. In the symbolic task, the ordinality effect was modulated by ratio and direction in both groups. These findings suggest that there might be two separate cognitive representations of ordinal and quantity information and that linguistic knowledge may facilitate estimation of ordinal information.

15.
The proposal that motion is processed by multiple mechanisms in the human brain has received little anatomical support so far. Here, we compared higher- and lower-level motion processing in the human brain using functional magnetic resonance imaging. We observed activation of an inferior parietal lobule (IPL) motion region by isoluminant red-green gratings when saliency of one color was increased and by long-range apparent motion at 7 Hz but not 2 Hz. This higher order motion region represents the entire visual field, while traditional motion regions predominantly process contralateral motion. Our results suggest that there are two motion-processing systems in the human brain: a contralateral lower-level luminance-based system, extending from hMT/V5+ into dorsal IPS and STS, and a bilateral higher-level saliency-based system in IPL.

16.
The successful detection of biological motion can have important consequences for survival. Previous studies have demonstrated the ease and speed with which observers can extract a wide range of information from impoverished dynamic displays in which only an actor's joints are visible. Although it has often been suggested that such biological motion processing can be accomplished relatively automatically, few studies have directly tested this assumption by using behavioral methods. Here we used a flanker paradigm to assess how peripheral "to-be-ignored" walkers affect the processing of a central target walker. Our results suggest that task-irrelevant dynamic figures cannot be ignored and are processed to a level where they influence behavior. These findings provide the first direct evidence that complex dynamic patterns can be processed incidentally, a finding that may have important implications for cognitive, neurophysiological, and computational models of biological motion processing.

17.
Sound symbolism, or the nonarbitrary link between linguistic sound and meaning, has often been discussed in connection with language evolution, where the oral imitation of external events links phonetic forms with their referents (e.g., Ramachandran & Hubbard, 2001). In this research, we explore whether sound symbolism may also facilitate synchronic language learning in human infants. Sound symbolism may be a useful cue particularly at the earliest developmental stages of word learning, because it potentially provides a way of bootstrapping word meaning from perceptual information. Using an associative word learning paradigm, we demonstrated that 14-month-old infants could detect Köhler-type (1947) shape-sound symbolism, and could use this sensitivity in their effort to establish a word-referent association.

18.
The extent to which the auditory system, like the visual system, processes spatial stimulus characteristics such as location and motion in separate specialized neuronal modules or in one homogeneously distributed network is unresolved. Here we present a patient with a selective deficit for the perception and discrimination of auditory motion following resection of the right anterior temporal lobe and the right posterior superior temporal gyrus (STG). Analysis of stimulus identity and location within the auditory scene remained intact. In addition, intracranial auditory evoked potentials, recorded preoperatively, revealed motion-specific responses selectively over the resected right posterior STG, and electrical cortical stimulation of this region was experienced by the patient as incoming moving sounds. Collectively, these data present a patient with cortical motion deafness, providing evidence that cortical processing of auditory motion is performed in a specialized module within the posterior STG.

19.
Bilingualism provides a unique opportunity for understanding the relative roles of proficiency and order of acquisition in determining how the brain represents language. In a previous study, we combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine the spatiotemporal dynamics of word processing in a group of Spanish-English bilinguals who were more proficient in their native language. We found that from the earliest stages of lexical processing, words in the second language evoke greater activity in bilateral posterior visual regions, while activity to the native language is largely confined to classical left hemisphere fronto-temporal areas. In the present study, we sought to examine whether these effects relate to language proficiency or order of language acquisition by testing Spanish-English bilingual subjects who had become dominant in their second language. Additionally, we wanted to determine whether activity in bilateral visual regions was related to the presentation of written words in our previous study, so we presented subjects with both written and auditory words. We found greater activity for the less proficient native language in bilateral posterior visual regions for both the visual and auditory modalities, which started during the earliest word encoding stages and continued through lexico-semantic processing. In classical left fronto-temporal regions, the two languages evoked similar activity. Therefore, it is the lack of proficiency rather than secondary acquisition order that determines the recruitment of non-classical areas for word processing.

20.
In humans, emotions from music serve important communicative roles. Despite a growing interest in the neural basis of music perception, action and emotion, the majority of previous studies in this area have focused on the auditory aspects of music performances. Here we investigate how the brain processes the emotions elicited by audiovisual music performances. We used event-related functional magnetic resonance imaging, and in Experiment 1 we defined the areas responding to audiovisual (musician's movements with music), visual (musician's movements only), and auditory emotional (music only) displays. Subsequently a region of interest analysis was performed to examine if any of the areas detected in Experiment 1 showed greater activation for emotionally mismatching performances (combining the musician's movements with mismatching emotional sound) than for emotionally matching music performances (combining the musician's movements with matching emotional sound) as presented in Experiment 2 to the same participants. The insula and the left thalamus were found to respond consistently to visual, auditory and audiovisual emotional information and to have increased activation for emotionally mismatching displays in comparison with emotionally matching displays. In contrast, the right thalamus was found to respond to audiovisual emotional displays and to have similar activation for emotionally matching and mismatching displays. These results suggest that the insula and left thalamus have an active role in detecting emotional correspondence between auditory and visual information during music performances, whereas the right thalamus has a different role.


Copyright © Beijing Qinyun Technology Development Co., Ltd. ICP license: 京ICP备09084417号