Similar Articles
20 similar articles found (search time: 31 ms)
1.
The dual-route model of speech processing includes a dorsal stream that maps auditory to motor features at the sublexical level rather than at the lexico-semantic level. However, the literature on gesture is an invitation to revise this model, because it suggests that the premotor cortex of the dorsal route is a major site of lexico-semantic interaction. Here we investigated lexico-semantic mapping using word-gesture pairs that were either congruent or incongruent. Using fMRI adaptation in 28 subjects, we found that temporo-parietal and premotor activity during auditory processing of single action words was modulated by the prior audiovisual context in which the words had been repeated. The BOLD signal was suppressed following repetition of the auditory word alone, and further suppressed following repetition of the word accompanied by a congruent gesture (e.g. [“grasp” + grasping gesture]). Conversely, repetition suppression was not observed when the same action word was accompanied by an incongruent gesture (e.g. [“grasp” + sprinkling gesture]). We propose a simple model to explain these results: auditory and visual information converge in the premotor cortex, where they are represented in a comparable format that allows the (in)congruence between speech and gesture to be determined. This ability of the dorsal route to detect audiovisual semantic (in)congruence suggests that its function is not restricted to the sublexical level.

2.
This article discusses recent functional magnetic resonance imaging (fMRI) and repetitive transcranial magnetic stimulation (rTMS) data that suggest a direct involvement of premotor cortical areas in speech perception. These new data map well onto psychological theories advocating an active role of motor structures in the perception of speech sounds. It is proposed that the perception of speech is enabled, at least in part, by a process that simulates speech production.

3.
Speech perception is thought to be linked to speech motor production. This linkage is considered to mediate multimodal aspects of speech perception, such as audio-visual and audio-tactile integration. However, direct coupling between articulatory movement and auditory perception has been little studied. The present study reveals a clear dissociation between the effects of a listener’s own speech action and the effects of viewing another’s speech movements on the perception of auditory phonemes. We assessed the intelligibility of the syllables [pa], [ta], and [ka] when listeners silently and simultaneously articulated syllables that were congruent/incongruent with the syllables they heard. The intelligibility was compared with a condition where the listeners simultaneously watched another’s mouth producing congruent/incongruent syllables, but did not articulate. The intelligibility of [ta] and [ka] was degraded by articulating [ka] and [ta] respectively, which are associated with the same primary articulator (tongue) as the heard syllables. But they were not affected by articulating [pa], which is associated with a different primary articulator (lips) from the heard syllables. In contrast, the intelligibility of [ta] and [ka] was degraded by watching the production of [pa]. These results indicate that the articulatory-induced distortion of speech perception occurs in an articulator-specific manner while visually induced distortion does not. The articulator-specific nature of the auditory-motor interaction in speech perception suggests that speech motor processing directly contributes to our ability to hear speech.

4.
Perirhinal contributions to human visual perception (Total citations: 1; self-citations: 0; citations by others: 1)
Medial temporal lobe (MTL) structures including the hippocampus, entorhinal cortex, and perirhinal cortex are thought to be part of a unitary system dedicated to memory [1, 2], although recent studies suggest that at least one component, perirhinal cortex, might also contribute to perceptual processing [3, 4, 5, 6]. To date, the strongest evidence for this comes from animal lesion studies [7, 8, 9, 10, 11, 12, 13, 14]. In contrast, the findings from human patients with naturally occurring MTL lesions are less clear and suggest a possible functional difference between species [15, 16, 17, 18, 19, 20]. Here, both these issues were addressed with functional neuroimaging in healthy volunteers performing a perceptual discrimination task originally developed for monkeys [7]. This revealed perirhinal activation when the task required the integration of visual features into a view-invariant representation but not when it could be accomplished on the basis of simple features (e.g., color and shape). This activation pattern matched that of lateral inferotemporal regions classically associated with visual processing but differed from that of entorhinal cortex, associated with memory encoding. The results demonstrate a specific role for the perirhinal cortex in visual perception and establish a functional homology for perirhinal cortex between species, although we propose that in humans, the region contributes to a wider behavioral repertoire including mnemonic, perceptual, and linguistic processes.

5.
Russ BE, Orr LE, Cohen YE. Current Biology: CB. 2008;18(19):1483-1488
The detection of stimuli is critical for an animal's survival [1]. However, it is not adaptive for an animal to respond automatically to every stimulus that is present in the environment [2-5]. Given that the prefrontal cortex (PFC) plays a key role in executive function [6-8], we hypothesized that PFC activity should be involved in context-dependent responses to uncommon stimuli. As a test of this hypothesis, monkeys participated in a same-different task, a variant of an oddball task [2]. During this task, a monkey heard multiple presentations of a "reference" stimulus that were followed by a "test" stimulus and reported whether these stimuli were the same or different. While they participated in this task, we recorded from neurons in the ventrolateral prefrontal cortex (vPFC; a cortical area involved in aspects of nonspatial auditory processing [9, 10]). We found that vPFC activity was correlated with the monkeys' choices. This finding demonstrates a direct link between single neurons and behavioral choices in the PFC on a nonspatial auditory task.

6.
Levodopa (L-dopa) effects on the cardinal and axial symptoms of Parkinson’s disease (PD) differ greatly, leading to therapeutic challenges for managing the disabilities in this patient population. In this context, we studied the cerebral networks associated with the production of a unilateral hand movement, speech production, and a task combining both, in 12 individuals with PD, both off and on L-dopa. Unilateral hand movements in the off-medication state elicited brain activations in motor regions (primary motor cortex, supplementary motor area, premotor cortex, cerebellum), as well as additional areas (anterior cingulate, putamen, associative parietal areas); following L-dopa administration, the brain activation profile was globally reduced, highlighting activations in the parietal and posterior cingulate cortices. For the speech production task, brain activation patterns were similar with and without medication, including the orofacial primary motor cortex (M1), the primary somatosensory cortex and the cerebellar hemispheres bilaterally, as well as the left premotor, anterior cingulate and supramarginal cortices. For the combined task off L-dopa, the cerebral activation profile was restricted to the right cerebellum (hand movement), reflecting the difficulty in performing two movements simultaneously in PD. Under L-dopa, the brain activation profile of the combined task involved a larger pattern, including additional fronto-parietal activations, without reaching the sum of the areas activated during the simple hand and speech tasks separately. Our results question both the role of the basal ganglia system in speech production and the modulation of task-dependent cerebral networks by dopaminergic treatment.

7.
Motor and cognitive functions of the ventral premotor cortex (Total citations: 21; self-citations: 0; citations by others: 21)
Recent data show that the ventral premotor cortex in both humans and monkeys has motor and cognitive functions. The cognitive functions include space perception, action understanding and imitation. The data also show a clear functional homology between monkey area F5 and human area 44. Preliminary evidence suggests that the ventral part of the lateral premotor cortex in humans may correspond to monkey area F4. A tentative map of the human lateral premotor areas founded on the reviewed evidence is presented.

8.
Research on the neural basis of speech-reading implicates a network of auditory language regions involving inferior frontal cortex, premotor cortex and sites along superior temporal cortex. In audiovisual speech studies, neural activity is consistently reported in the posterior superior temporal sulcus (pSTS), and this site has been implicated in multimodal integration. Traditionally, multisensory interactions are considered high-level processing that engages heteromodal association cortices (such as STS). Recent work, however, challenges this notion and suggests that multisensory interactions may occur in low-level unimodal sensory cortices. While previous audiovisual speech studies demonstrate that high-level multisensory interactions occur in pSTS, what remains unclear is how early in the processing hierarchy these multisensory interactions may occur. The goal of the present fMRI experiment was to investigate how visual speech can influence activity in auditory cortex above and beyond its response to auditory speech. In an audiovisual speech experiment, subjects were presented with auditory speech with and without congruent visual input. Holding the auditory stimulus constant across the experiment, we investigated how the addition of visual speech influences activity in auditory cortex. We demonstrate that congruent visual speech increases activity in auditory cortex.

9.
The perception of vowels was studied in chimpanzees and humans, using a reaction time task in which reaction times for discrimination of vowels were taken as an index of similarity between vowels. Vowels used were five synthetic and natural Japanese vowels and eight natural French vowels. The chimpanzees required long reaction times for discrimination of synthetic [i] from [u] and [e] from [o]; that is, they required long latencies for discrimination between vowels based on differences in frequency of the second formant. A similar tendency was observed for discrimination of natural [i] from [u]. The human subject required long reaction times for discrimination between vowels along the first formant axis. These differences can be explained by differences in auditory sensitivity between the two species and by the motor theory of speech perception. A vowel pronounced by different speakers has different acoustic properties; humans nonetheless perceive these speech sounds as the same vowel. This phenomenon of perceptual constancy in speech perception was studied in chimpanzees using natural vowels and a synthetic [o]-[a] continuum. The chimpanzees ignored the difference in the sex of the speakers and showed a capacity for vocal tract normalization.

10.
Visual neuroscience has long sought to determine the extent to which stimulus-evoked activity in visual cortex depends on attention and awareness. Some influential theories of consciousness maintain that the allocation of attention is restricted to conscious representations [1, 2]. However, in the load theory of attention [3], competition between task-relevant and task-irrelevant stimuli for limited-capacity attention does not depend on conscious perception of the irrelevant stimuli. The critical test is whether the level of attentional load in a relevant task would determine unconscious neural processing of invisible stimuli. Human participants were scanned with high-field fMRI while they performed a foveal task of low or high attentional load. Irrelevant, invisible monocular stimuli were simultaneously presented peripherally and were continuously suppressed by a flashing mask in the other eye [4]. Attentional load in the foveal task strongly modulated retinotopic activity evoked in primary visual cortex (V1) by the invisible stimuli. Contrary to traditional views [1, 2, 5, 6], we found that availability of attentional capacity determines neural representations related to unconscious processing of continuously suppressed stimuli in human primary visual cortex. Spillover of attention to cortical representations of invisible stimuli (under low load) cannot be a sufficient condition for their awareness.

11.
Arithmetic is one of the complex forms of human intellectual activity. This kind of intellectual operation is usually studied by psychologists. It has been found that the learning of arithmetic by children is closely related to the development of speech, the perception of spatial relations, and the maturation of higher forms of analytic-synthetic activity by the cerebral cortex. The development of the intellectual operations of arithmetic goes through several stages from visual-operational forms to abstract forms [9-11].

12.
The extent to which areas in the visual cerebral cortex differ in their ability to support perceptions has been the subject of considerable speculation. Experiments examining the activity of individual neurons have suggested that activity in later stages of the visual cortex is more closely linked to perception than that in earlier stages [1-9]. In contrast, results from functional imaging, transcranial magnetic stimulation, and lesion studies have been interpreted as showing that earlier stages are more closely coupled to perception [10-15]. We examined whether neuronal activity in early and later stages differs in its ability to support detectable signals by measuring behavioral thresholds for detecting electrical microstimulation in different cortical areas in two monkeys. By training the animals to perform a two-alternative temporal forced-choice task, we obtained criterion-free thresholds from five visual areas: V1, V2, V3A, MT, and the inferotemporal cortex. Every site tested yielded a reliable threshold. Thresholds varied little within and between visual areas, rising gradually from early to later stages. We similarly found no systematic differences in the slopes of the psychometric detection functions from different areas. These results suggest that neuronal signals of similar magnitude evoked in any part of visual cortex can generate percepts.

13.
Seeing the articulatory gestures of the speaker (“speech reading”) enhances speech perception, especially in noisy conditions. Recent neuroimaging studies tentatively suggest that speech reading activates the speech motor system, which then influences superior-posterior temporal lobe auditory areas via an efference copy. Here, nineteen healthy volunteers were presented with silent videoclips of a person articulating Finnish vowels /a/, /i/ (non-targets), and /o/ (targets) during event-related functional magnetic resonance imaging (fMRI). Speech reading significantly activated visual cortex, posterior fusiform gyrus (pFG), posterior superior temporal gyrus and sulcus (pSTG/S), and the speech motor areas, including premotor cortex, parts of the inferior (IFG) and middle (MFG) frontal gyri extending into frontal polar (FP) structures, somatosensory areas, and supramarginal gyrus (SMG). Structural equation modelling (SEM) of these data suggested that information flows first from extrastriate visual cortex to pFG, and from there, in parallel, to pSTG/S and MFG/FP. From pSTG/S, information flow continues to IFG or SMG and eventually to somatosensory areas. Feedback connectivity was estimated to run from MFG/FP to IFG and pSTG/S. The direct functional connection from pFG to MFG/FP and the feedback connection from MFG/FP to pSTG/S and IFG support the hypothesis of prefrontal speech motor areas influencing auditory speech processing in pSTG/S via an efference copy.

14.
Speech perception provides compelling examples of a strong link between auditory and visual modalities. This link originates in the mechanics of speech production, which, in shaping the vocal tract, determine the movement of the face as well as the sound of the voice. In this paper, we present evidence that equivalent information about identity is available cross-modally from both the face and voice. Using a delayed matching-to-sample (XAB) task, we show that people can match the video of an unfamiliar face, X, to an unfamiliar voice, A or B, and vice versa, but only when stimuli are moving and are played forward. The critical role of time-varying information is underlined by the ability to match faces to voices containing only the coarse spatial and temporal information provided by sine wave speech [5]. The effect of varying sentence content across modalities was small, showing that identity-specific information is not closely tied to particular utterances. We conclude that the physical constraints linking faces to voices result in bimodally available dynamic information, not only about what is being said, but also about who is saying it.

15.
In this study, we examined event-related potentials (ERPs) in rats performing a timing task. The ERPs were recorded during a timing task and a control task from five regions (frontal cortex, striatum, hippocampus, thalamus, and cerebellum) that are related to time perception. In the timing task, the rats were required to judge the interval between two tones. This interval could be either 500 or 2000 ms. In the control task, only the 500 ms interval between tones was presented and only one lever was available for responses. Any difference in ERPs between the two tasks was considered to reflect the processes that are related to temporal discrimination. The frontal cortex, striatum, and thalamus yielded concurrent differences in ERPs between the two tasks. The results suggest that these regions might play an important role in temporal discrimination.

16.
The precise neural mechanisms underlying speech sound representations are still a matter of debate. Proponents of 'sparse representations' assume that on the level of speech sounds, only contrastive or otherwise unpredictable information is stored in long-term memory. Here, in a passive oddball paradigm, we test the neural foundations of such a 'sparse' representation; we use words that differ only in their penultimate consonant ("coronal" [t] vs. "dorsal" [k] place of articulation) and, for example, distinguish between the German nouns Latz ([lats]; bib) and Lachs ([laks]; salmon). Changes from standard [t] to deviant [k] and vice versa elicited a discernible Mismatch Negativity (MMN) response. Crucially, however, the MMN for the deviant [lats] was stronger than the MMN for the deviant [laks]. Source localization showed this difference to be due to enhanced brain activity in right superior temporal cortex. These findings reflect a difference in phonological 'sparsity': coronal [t] segments, but not dorsal [k] segments, are based on sparser representations and elicit less specific neural predictions; sensory deviations from this prediction are more readily 'tolerated' and accordingly trigger weaker MMNs. The results support the neurocomputational reality of 'representationally sparse' models of speech perception that are compatible with more general predictive mechanisms in auditory perception.

17.
Functional magnetic resonance imaging (fMRI) was used to study the activation of cerebral motor networks during auditory perception of music in professional keyboard musicians (n = 12). In the activation paradigm, subjects listened to two-part polyphonic music while either critically appraising the performance or imagining they were performing themselves. Two-part polyphonic audition and bimanual motor imagery circumvented a hemisphere bias associated with the convention of playing the melody with the right hand. Both tasks activated ventral premotor and auditory cortices, bilaterally, and the right anterior parietal cortex, when contrasted to 12 musically unskilled controls. Although left ventral premotor activation was increased during imagery (compared to judgment), bilateral dorsal premotor and right posterior-superior parietal activations were quite unique to motor imagery. The latter suggests that musicians not only recruited their manual motor repertoire but also performed a spatial transformation from the vertically perceived pitch axis (high and low sound) to the horizontal axis of the keyboard. Imagery-specific activations in controls were seen in left dorsal parietal-premotor and supplementary motor cortices. Although these activations were less strong compared to musicians, this overlapping distribution indicated the recruitment of a general ‘mirror-neuron’ circuitry. These two levels of sensorimotor transformation point towards common principles by which the brain organizes audition-driven music performance and visually guided task performance.

18.
Previous studies have suggested that the premotor cortex plays a role in motor preparation. We have tested this hypothesis in macaque monkeys by examining neuronal activity during an enforced, 1.5-3.0 s delay period between the presentation of an instruction for movement and the onset of that movement. Two targets for movement were available to the monkey, one on the left and one on the right. Illumination of one of the targets served as the instruction for a forelimb movement. It is known that there are cells in the premotor cortex that have directionally specific, sustained activity increases or decreases following such instructions. If the premotor cortex is involved in the preparation for movement in a particular direction, then changing the target from one to the opposite side during the delay period should lead to a pronounced change in sustained neuronal activity. Further, removing the instruction, while still requiring movement to the target, should have little or no sustained effect. Seventy cells showed the predicted activity patterns, thus supporting the view that the premotor cortex plays a role in motor preparation.

19.
Sensorimotor learning configures the human mirror system (Total citations: 8; self-citations: 0; citations by others: 8)
Catmur C, Walsh V, Heyes C. Current Biology: CB. 2007;17(17):1527-1531
Cells in the "mirror system" fire not only when an individual performs an action but also when one observes the same action performed by another agent [1-4]. The mirror system, found in premotor and parietal cortices of human and monkey brains, is thought to provide the foundation for social understanding and to enable the development of theory of mind and language [5-9]. However, it is unclear how mirror neurons acquire their mirror properties: how they derive the information necessary to match observed with executed actions [10]. We address this by showing that it is possible to manipulate the selectivity of the human mirror system, and thereby make it operate as a countermirror system, by giving participants training to perform one action while observing another. Before this training, participants showed event-related muscle-specific responses to transcranial magnetic stimulation over motor cortex during observation of little- and index-finger movements [11-13]. After training, this normal mirror effect was reversed. These results indicate that the mirror properties of the mirror system are neither wholly innate [14] nor fixed once acquired; instead they develop through sensorimotor learning [15, 16]. Our findings indicate that the human mirror system is, to some extent, both a product and a process of social interaction.

20.
After unilateral stroke, the dorsal premotor cortex (PMd) in the intact hemisphere is often more active during movement of an affected limb. Whether this contributes to motor recovery is unclear. Functional magnetic resonance imaging (fMRI) was used to investigate short-term reorganization in right PMd after transcranial magnetic stimulation (TMS) disrupted the dominant left PMd, which is specialized for action selection. Even when 1 Hz left PMd TMS had no effect on behavior, there was a compensatory increase in activity in right PMd and connected medial premotor areas. This activity was specific to task periods of action selection as opposed to action execution. Compensatory activation changes were both functionally specific and anatomically specific: the same pattern was not seen after TMS of left sensorimotor cortex. Subsequent TMS of the reorganized right PMd did disrupt performance. Thus, this pattern of functional reorganization has a causal role in preserving behavior after neuronal challenge.


Copyright © Beijing Qinyun Technology Development Co., Ltd. | 京ICP备09084417号