Similar Articles
20 similar articles found (search time: 15 ms)
1.
The interhemispheric interactions in the perception of Russian prosody were studied in healthy subjects and in schizophrenia as a clinical model of impaired hemispheric interactions. Monaural presentation of stimuli and binaural presentation in a free acoustic field were used. Sentences with the main variants of Russian prosodic intonation served as stimuli. The response time and the number of erroneous responses were recorded. In binaural listening without headphones, no significant difference in the percentage of errors in identifying emotional prosody was found between healthy subjects and schizophrenic patients. Compared with the healthy subjects, the patients made more errors in understanding logical stress and fewer errors in understanding syntagmatic segmentation. By response time, a significant dominance of the left ear was revealed in the healthy subjects during monaural listening to sentences with emotional prosody and to complete or incomplete sentences, whereas no significant ear dominance was found in the patients. During monaural listening to sentences with logical stress, the response time was shorter when stimuli were presented to the right ear in both the healthy subjects and the patients. The results indicate that functional brain asymmetry is flattened in schizophrenia. The flattening was less evident in the perception of logical stress in a sentence and did not significantly affect the efficiency of identifying emotional prosody or the syntagmatic segmentation of a sentence.

2.

Background

Studies demonstrating the involvement of motor brain structures in language processing typically focus on time windows beyond the latencies of lexical-semantic access. Consequently, such studies remain inconclusive regarding whether motor brain structures are recruited directly in language processing or through post-linguistic conceptual imagery. In the present study, we introduce a grip-force sensor that allows online measurements of language-induced motor activity during sentence listening. We use this tool to investigate whether language-induced motor activity remains constant or is modulated in negative, as opposed to affirmative, linguistic contexts.

Methodology/Principal Findings

Participants listened to spoken action target words in either affirmative or negative sentences while holding a sensor in a precision grip. The participants were asked to count the sentences containing the name of a country to ensure attention. The grip force signal was recorded continuously. The action words elicited an automatic and significant enhancement of the grip force starting at approximately 300 ms after target word onset in affirmative sentences; however, no comparable grip force modulation was observed when these action words occurred in negative contexts.

Conclusions/Significance

Our findings demonstrate that this simple experimental paradigm can be used to study the online crosstalk between language and the motor systems in an ecological and economical manner. Our data further confirm that the motor brain structures that can be called upon during action word processing are not mandatorily involved; the crosstalk is asymmetrically governed by the linguistic context and not vice versa.
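A minimal sketch (not the authors' analysis pipeline) of how a language-induced grip-force enhancement could be detected offline: the continuously sampled force is compared with its pre-word baseline, and the first threshold crossing after word onset is taken as the enhancement onset. The sampling rate, threshold criterion, and synthetic trace below are assumptions made for illustration.

```python
import numpy as np

fs = 1000                                    # assumed sampling rate (Hz)
t = np.arange(-0.5, 1.5, 1 / fs)             # time relative to target-word onset (s)
rng = np.random.default_rng(0)

# Synthetic single-trial trace: ~2 N holding force plus a small rise starting at 300 ms.
force = 2.0 + 0.005 * rng.standard_normal(t.size)
force[t >= 0.3] += 0.05 * (1 - np.exp(-(t[t >= 0.3] - 0.3) / 0.1))

baseline = force[t < 0]
threshold = baseline.mean() + 5 * baseline.std()   # assumed criterion: baseline mean + 5 SD

above = np.flatnonzero((t >= 0) & (force > threshold))
if above.size:
    print(f"estimated grip-force enhancement onset: {1000 * t[above[0]]:.0f} ms")
else:
    print("no grip-force enhancement detected")
```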

3.
Cochlear implant (CI) users have difficulty understanding speech in noisy listening conditions and perceiving music. Aided residual acoustic hearing in the contralateral ear can mitigate these limitations. The present study examined the contributions of electric and acoustic hearing to speech understanding in noise and to melodic pitch perception. Data were collected with the CI only, the hearing aid (HA) only, and both devices together (CI+HA). Speech reception thresholds (SRTs) were measured adaptively for simple sentences in speech babble. Melodic contour identification (MCI) was measured with and without a masker instrument; the fundamental frequency of the masker was varied to be overlapping or non-overlapping with the target contour. Results showed that the CI contributes primarily to bimodal speech perception and that the HA contributes primarily to bimodal melodic pitch perception. In general, CI+HA performance was slightly improved relative to the better ear alone (CI only) for SRTs but not for MCI, with some subjects showing a decrease in bimodal MCI performance relative to the better ear alone (HA only). Individual performance was highly variable, and the contribution of either device to bimodal perception was both subject- and task-dependent. The results suggest that individualized mapping of CIs and HAs may further improve bimodal speech and music perception.
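The adaptive SRT measurement mentioned above tracks the SNR at which roughly half of the sentences are understood. The sketch below implements a generic one-down/one-up staircase run against a simulated listener; the step sizes, stopping rule, and listener model are assumptions, not the exact procedure used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
true_srt = -6.0                                # SNR (dB) at which the simulated listener scores 50%

def listener_correct(snr_db):
    """Simulated listener: logistic psychometric function centered on true_srt."""
    p = 1.0 / (1.0 + np.exp(-(snr_db - true_srt) / 1.5))
    return rng.random() < p

snr, step = 0.0, 4.0                           # starting SNR and initial step size (dB)
reversals, last_dir = [], None

while len(reversals) < 8:                      # stop after 8 reversals (assumed rule)
    direction = -1 if listener_correct(snr) else +1   # correct -> harder, wrong -> easier
    if last_dir is not None and direction != last_dir:
        reversals.append(snr)
        if len(reversals) == 2:
            step = 2.0                         # smaller steps after the first reversals
    last_dir = direction
    snr += direction * step

srt_estimate = np.mean(reversals[2:])          # average SNR at the later reversal points
print(f"estimated SRT: {srt_estimate:.1f} dB SNR")
```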

4.
The objective was to determine whether one of the neural temporal features, neural adaptation, can account for the across-subject variability in behavioral measures of temporal processing and speech perception performance in cochlear implant (CI) recipients. Neural adaptation is the phenomenon in which neural responses are strongest at the beginning of a stimulus and decline with stimulus repetition (e.g., in stimulus trains). It is unclear how this temporal property of neural responses relates to psychophysical measures of temporal processing (e.g., gap detection) or to speech perception. Adaptation of the electrical compound action potential (ECAP) was obtained using 1000 pulses per second (pps) biphasic pulse trains presented directly to the electrode. Adaptation of the late auditory evoked potential (LAEP) was obtained using a sequence of 1-kHz tone bursts presented acoustically through the cochlear implant. Behavioral temporal processing was measured using the Random Gap Detection Test at the most comfortable listening level. Consonant-nucleus-consonant (CNC) words and AzBio sentences were also tested. The results showed that both the ECAP and the LAEP display adaptive patterns, with substantial across-subject variability in the amount of adaptation. No correlations between the amount of neural adaptation and gap detection thresholds (GDTs) or speech perception scores were found. Correlations between the degree of neural adaptation and demographic factors showed that CI users with more LAEP adaptation tended to have been implanted at a younger age than CI users with less LAEP adaptation. The results suggest that neural adaptation, at least this feature alone, cannot account for the across-subject variability in temporal processing ability in CI users. However, the finding that the LAEP adaptive pattern was less prominent in the CI group than in the normal-hearing group may point to an important role of a normal adaptation pattern at the cortical level in speech perception.
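As an illustration of how the relation between adaptation and gap detection could be examined across subjects, the sketch below computes a simple adaptation index (the fractional decline from the first to the last response in a pulse train) and correlates it with gap detection thresholds. The index definition and all data are assumptions, not the measures reported in the study.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n_subjects, n_pulses = 12, 10

# Simulated per-subject response amplitudes across a pulse train (arbitrary units):
# responses decline with repetition by a subject-specific amount.
final_fraction = rng.uniform(0.3, 0.9, n_subjects)
amps = np.array([np.linspace(1.0, f, n_pulses) for f in final_fraction])
amps += 0.02 * rng.standard_normal(amps.shape)

adaptation_index = 1.0 - amps[:, -1] / amps[:, 0]   # 0 = no adaptation, 1 = complete adaptation
gdt_ms = rng.uniform(2.0, 20.0, n_subjects)         # simulated gap detection thresholds (ms)

r, p = pearsonr(adaptation_index, gdt_ms)
print(f"adaptation index vs. GDT: r = {r:.2f}, p = {p:.3f}")
```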

5.
Relations between the brain hemispheres were studied during the human perception of various types of Russian intonation. Fifty healthy subjects with normal hearing took part in tests based on the monaural presentation of stimuli: sentences representing the main kinds of Russian emotional and linguistic intonations. The linguistic intonations expressed various communicative types of sentences; completeness/incompleteness of a statement; various types of syntagmatic segmentation of statements; and various logical stresses. Sentences that required identification of the quality of an emotion were used to study the perception of emotional intonations. Statistical analysis of the latencies and errors made by the subjects demonstrated a significant preference of the right hemisphere in perceiving emotional intonations and complete/incomplete sentences, whereas sentences with different logical stress were perceived mainly by the left hemisphere. No significant differences were found in the perception of various communicative types of sentences or of statements with different syntagmatic segmentation. The data also indicate a difference between males and females in the degree to which the hemispheres are involved in the perception and analysis of the prosodic characteristics of speech.

6.
Although several cognitive processes, including speech processing, have been studied during sleep, working memory (WM) had not previously been explored. Our study assessed the capacity of WM by testing speech perception while the level of background noise and the sentential semantic length (SSL; the amount of semantic information required to perceive the incongruence of a sentence) were modulated. Speech perception was explored with the N400 component of the event-related potentials recorded in response to sentence-final words (50% semantically congruent with the sentence, 50% semantically incongruent). During sleep stage 2 and paradoxical sleep: (1) without noise, a larger N400 was observed for (short and long SSL) sentences ending with a semantically incongruent word compared with a congruent word (i.e., an N400 effect); (2) with moderate noise, the N400 effect (observed at wake with short and long SSL sentences) was attenuated for long SSL sentences. Our results suggest that WM for linguistic information is partially preserved during sleep, with a smaller capacity than during wakefulness.
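For reference, the N400 effect referred to above is conventionally quantified as the difference in mean ERP amplitude between incongruent and congruent sentence endings within a window of roughly 300-500 ms after word onset. The sketch below computes such a measure on simulated averaged ERPs; the window, sampling rate, and waveforms are assumptions rather than the study's parameters.

```python
import numpy as np

fs = 250                                        # assumed sampling rate (Hz)
t = np.arange(-0.2, 1.0, 1 / fs)                # epoch time relative to final-word onset (s)
rng = np.random.default_rng(3)

def simulate_erp(n400_amplitude_uv):
    """One averaged ERP with a negative deflection peaking near 400 ms plus noise."""
    wave = n400_amplitude_uv * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
    return wave + 0.3 * rng.standard_normal(t.size)

erp_congruent = simulate_erp(-1.0)              # microvolts
erp_incongruent = simulate_erp(-4.0)

window = (t >= 0.3) & (t <= 0.5)
n400_effect = erp_incongruent[window].mean() - erp_congruent[window].mean()
print(f"N400 effect (incongruent minus congruent): {n400_effect:.2f} uV")
```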

7.
Sheppard JP, Wang JP, Wong PC. PLoS One. 2011;6(1):e16510
Aging is accompanied by substantial changes in brain function, including functional reorganization of large-scale brain networks. Such differences in network architecture have been reported both at rest and during cognitive task performance, but an open question is whether these age-related differences show task-dependent effects or represent only task-independent changes attributable to a common factor (i.e., underlying physiological decline). To address this question, we used graph theoretic analysis to construct weighted cortical functional networks from hemodynamic (functional MRI) responses in 12 younger and 12 older adults during a speech perception task performed in both quiet and noisy listening conditions. Functional networks were constructed for each subject and listening condition based on inter-regional correlations of the fMRI signal among 66 cortical regions, and network measures of global and local efficiency were computed. Across listening conditions, older adult networks showed significantly decreased global (but not local) efficiency relative to younger adults after normalizing measures to surrogate random networks. Although listening condition produced no main effects on whole-cortex network organization, a significant age group × listening condition interaction was observed. Additionally, an exploratory analysis of regional effects uncovered age-related declines in both global and local efficiency concentrated exclusively in auditory areas (bilateral superior and middle temporal cortex), further suggestive of specificity to the speech perception tasks. Global efficiency also correlated positively with mean cortical thickness across all subjects, establishing gross cortical atrophy as a task-independent contributor to age-related differences in functional organization. Together, our findings provide evidence of age-related disruptions in cortical functional network organization during speech perception tasks, and suggest that although task-independent effects such as cortical atrophy clearly underlie age-related changes in cortical functional organization, age-related differences also demonstrate sensitivity to task domains.
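To make the network measures concrete, the sketch below builds a cortical graph from a simulated inter-regional correlation matrix, computes global and local efficiency with networkx, and normalizes both against size-matched random surrogate graphs. Unlike the study, which used weighted networks, this illustration binarizes the matrix at an arbitrary threshold, so treat it only as a simplified outline of the approach.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(4)
n_regions = 66

# Simulated fMRI time courses and the resulting inter-regional correlation matrix.
timecourses = rng.standard_normal((200, n_regions))      # 200 time points (assumed)
corr = np.corrcoef(timecourses, rowvar=False)
np.fill_diagonal(corr, 0.0)

adjacency = (np.abs(corr) > 0.1).astype(int)             # arbitrary binarization threshold
G = nx.from_numpy_array(adjacency)

def normalized_efficiencies(graph, n_surrogates=10):
    """Efficiency of the graph divided by the mean efficiency of size-matched random graphs."""
    n, m = graph.number_of_nodes(), graph.number_of_edges()
    rand_glob = np.mean([nx.global_efficiency(nx.gnm_random_graph(n, m, seed=s))
                         for s in range(n_surrogates)])
    rand_loc = np.mean([nx.local_efficiency(nx.gnm_random_graph(n, m, seed=s))
                        for s in range(n_surrogates)])
    return nx.global_efficiency(graph) / rand_glob, nx.local_efficiency(graph) / rand_loc

e_glob, e_loc = normalized_efficiencies(G)
print(f"normalized global efficiency: {e_glob:.2f}, normalized local efficiency: {e_loc:.2f}")
```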

8.
This review concerns the cortical mechanisms of postural control. Data on the differing roles of the right and left cerebral hemispheres in postural control are analyzed and compared with data on the lateralization of perception and action. The distinctive features of sensory perception and motor control in the right and left hemispheres appear to be connected with a specialization of the left hemisphere for dynamic tasks and of the right hemisphere for static tasks, the latter including the control of upright posture in humans.

9.
We investigated whether corticospinal excitability during motor imagery of actions with objects (the power grip or the pincer grip) was influenced by actually touching the objects (tactile input) and by the congruency of posture with the imagined action (proprioceptive input). Corticospinal excitability was assessed by monitoring motor evoked potentials (MEPs) in the first dorsal interosseous following transcranial magnetic stimulation over the motor cortex. MEPs were recorded during imagery of the power grip of a larger ball (7 cm) or the pincer grip of a smaller ball (3 cm), with or without passively holding the larger ball in a holding posture or the smaller ball in a pinching posture. During imagery of the power grip, MEP amplitude was increased only when the actual posture was the same as the imagined action (the holding posture). In contrast, during imagery of the pincer grip while touching the ball, MEP amplitude was enhanced in both postures. To examine the pure effect of touching (tactile input), we recorded MEPs during imagery of the power and pincer grips while various areas of an open palm were touched with a flat foam pad. MEP amplitude was not affected by this palmar touching. These findings suggest that corticospinal excitability during imagery with an object is modulated by actually touching the object through a combination of tactile and proprioceptive inputs.

10.
The postural control system has two main functions: first, to build up posture against gravity and ensure that balance is maintained; and second, to fix the orientation and position of the segments that serve as a reference frame for perception and action with respect to the external world. This dual function of postural control is based on four components: reference values, such as the orientation of body segments and the position of the center of gravity; an internal representation of the body (the postural body scheme); multisensory inputs regulating the orientation and stabilization of body segments; and flexible postural reactions or anticipations for balance recovery after disturbance or for postural stabilization during voluntary movement. Recent data on the organization of this system are discussed for normal subjects (during ontogenesis), the elderly, and patients with relevant deficits.

11.

Background

Theories of embodied language suggest that the motor system is differentially called into action when processing motor-related versus abstract content words or sentences. It has been recently shown that processing negative polarity action-related sentences modulates neural activity of premotor and motor cortices.

Methods and Findings

We sought to determine whether reading negative polarity sentences brought about differential modulation of cortico-spinal motor excitability depending on whether hand-action-related or abstract sentences were processed. Facilitatory paired-pulse transcranial magnetic stimulation (pp-TMS) was applied to the primary motor representation of the right hand, and the amplitude of the induced motor-evoked potentials (MEPs) was used to index M1 activity during passive reading of either hand-action-related or abstract-content sentences presented in both negative and affirmative polarity. Results showed that cortico-spinal excitability was affected by sentence polarity only in the hand-action-related condition. Indeed, in keeping with previous TMS studies, reading positive-polarity, hand-action-related sentences suppressed cortico-spinal reactivity. This effect was absent when reading hand-action-related negative-polarity sentences. Moreover, no modulation of cortico-spinal reactivity was associated with either negative- or positive-polarity abstract sentences.

Conclusions

Our results indicate that grammatical cues prompting motor negation reduce the cortico-spinal suppression associated with reading affirmative action sentences, and thus suggest that the motor simulation processes underlying embodiment may extend even to syntactic features of language.

12.
Nonnative speech poses a challenge to speech perception, especially in challenging listening environments. Audiovisual (AV) cues are known to improve native speech perception in noise. The extent to which AV cues benefit nonnative speech perception in noise, however, is much less well understood. Here, we examined native American English-speaking and native Korean-speaking listeners' perception of English sentences produced by a native American English speaker and a native Korean speaker across a range of signal-to-noise ratios (SNRs; −4 to −20 dB) in audio-only and audiovisual conditions. We employed psychometric function analyses to characterize the pattern of AV benefit across SNRs. For native English speech, the largest AV benefit occurred at an intermediate SNR (−12 dB); but for nonnative English speech, the largest AV benefit occurred at a higher SNR (−4 dB). The psychometric function analyses demonstrated that the AV benefit patterns were different between native and nonnative English speech. The nativeness of the listener exerted negligible effects on the AV benefit across SNRs. However, the nonnative listeners' ability to gain AV benefit in native English speech was related to their proficiency in English. These findings suggest that the native language background of both the speaker and the listener clearly modulates the optimal use of AV cues in speech recognition.
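A sketch of the psychometric-function approach described above: logistic functions are fit to audio-only and audiovisual intelligibility across SNRs, and the AV benefit is taken as their difference at each SNR. The data points, parameterization, and starting values are illustrative assumptions, not the study's data or model.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, midpoint, slope):
    """Proportion of keywords correct as a logistic function of SNR (dB)."""
    return 1.0 / (1.0 + np.exp(-(snr - midpoint) / slope))

snr = np.array([-20, -16, -12, -8, -4], dtype=float)
audio_only = np.array([0.05, 0.15, 0.40, 0.75, 0.92])      # illustrative proportions correct
audiovisual = np.array([0.15, 0.40, 0.72, 0.90, 0.97])

params_a, _ = curve_fit(logistic, snr, audio_only, p0=[-10.0, 3.0])
params_av, _ = curve_fit(logistic, snr, audiovisual, p0=[-12.0, 3.0])

fine_snr = np.linspace(-20, -4, 100)
av_benefit = logistic(fine_snr, *params_av) - logistic(fine_snr, *params_a)
print(f"largest AV benefit at about {fine_snr[np.argmax(av_benefit)]:.1f} dB SNR")
```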

13.
Musical training leads to sensory and motor neuroplastic changes in the human brain. Motivated by findings on enlarged corpus callosum in musicians and asymmetric somatomotor representation in string players, we investigated the relationship between musical training, callosal anatomy, and interhemispheric functional symmetry during music listening. Functional symmetry was increased in musicians compared to nonmusicians, and in keyboardists compared to string players. This increased functional symmetry was prominent in visual and motor brain networks. Callosal size did not significantly differ between groups except for the posterior callosum in musicians compared to nonmusicians. We conclude that the distinctive postural and kinematic symmetry in instrument playing cross-modally shapes information processing in sensory-motor cortical areas during music listening. This cross-modal plasticity suggests that motor training affects music perception.
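One simple way to index interhemispheric functional symmetry, shown here purely as an illustration and not necessarily the measure applied in the study, is the correlation between homotopic left and right regional time courses, averaged over region pairs.

```python
import numpy as np

rng = np.random.default_rng(6)
n_pairs, n_timepoints = 30, 240                 # assumed number of homotopic pairs and volumes

shared = rng.standard_normal((n_pairs, n_timepoints))               # activity common to both hemispheres
left = shared + 0.5 * rng.standard_normal((n_pairs, n_timepoints))
right = shared + 0.5 * rng.standard_normal((n_pairs, n_timepoints))

homotopic_r = np.array([np.corrcoef(left[i], right[i])[0, 1] for i in range(n_pairs)])
print(f"mean homotopic correlation (symmetry index): {homotopic_r.mean():.2f}")
```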

14.
The dynamics of speech perception were studied with a dichotic listening test in 20 subjects during intensive courses in French and German. Before training, the left hemisphere was dominant for verbal functions in all subjects except three. In the course of learning, activation of the opposite hemisphere was observed in 84% of the subjects. Among subjects with the same initial level of language proficiency, expert ratings of success in conversational practice were higher the larger the shift in the right-ear coefficient during learning. This shift makes it possible to quantify the intellectual effort expended in the process of learning. The authors suggest that such psychophysiological monitoring of perception could help optimize learning through an individualized approach.
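The right-ear coefficient used above is commonly computed in dichotic listening as the normalized difference between correctly reported right-ear and left-ear items, and the shift is the post-training minus the pre-training value. The sketch below applies this standard formula to made-up scores; the exact definition used by the authors may differ in detail.

```python
def right_ear_coefficient(right_correct, left_correct):
    """Laterality index in percent: positive values indicate a right-ear (left-hemisphere) advantage."""
    return 100.0 * (right_correct - left_correct) / (right_correct + left_correct)

pre = right_ear_coefficient(right_correct=28, left_correct=22)    # before the language course
post = right_ear_coefficient(right_correct=24, left_correct=27)   # after the language course
print(f"REC before: {pre:.1f}, after: {post:.1f}, shift: {post - pre:.1f}")
```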

15.
This paper examines the effects of anthropometry on the body posture of trumpeters playing in a standing position. Sixteen virtuoso trumpeters were photographed while playing three notes (low C, high F, and sustained high F) during the performance of musical tasks. Initial standing posture and anthropometric data were recorded. Six body-segment angles were computed, and a vectorial sum was obtained to describe whole-body posture in the neutral and playing conditions. Horn angle and dental overbite were also computed. Earlier results had shown that the musical task has no effect on playing posture. One-way ANOVA showed notable differences between the neutral posture and the note-related playing postures. A multiple regression model showed that, in addition to the note effect, anthropometric variables, mainly neck length, explain the changes in playing posture. Horn angle is determined by the dental overbite. The importance of the anthropometric variables in playing the more demanding notes indicates that anthropometry may act to constrain trumpeters' performance.
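The multiple-regression step can be illustrated as follows: the change in whole-body posture angle is regressed on anthropometric predictors (here only neck length and dental overbite, with simulated values), and the coefficients and R^2 are reported. The variable set and all numbers are assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 16                                                   # sixteen trumpeters

neck_length_cm = rng.uniform(9.0, 14.0, n)
overbite_mm = rng.uniform(1.0, 5.0, n)
posture_change_deg = 0.8 * neck_length_cm - 0.3 * overbite_mm + rng.normal(0.0, 1.0, n)

X = np.column_stack([np.ones(n), neck_length_cm, overbite_mm])   # design matrix with intercept
beta, _, _, _ = np.linalg.lstsq(X, posture_change_deg, rcond=None)

predicted = X @ beta
ss_res = np.sum((posture_change_deg - predicted) ** 2)
ss_tot = np.sum((posture_change_deg - posture_change_deg.mean()) ** 2)
print(f"coefficients (intercept, neck, overbite): {np.round(beta, 2)}, R^2 = {1 - ss_res / ss_tot:.2f}")
```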

16.
There is ample evidence that people plan their movements to ensure comfortable final grasp postures at the end of a movement. The end-state comfort effect has been found to be a robust constraint during unimanual movements, and it leads to the inference that goal postures are represented and planned prior to movement initiation. The purpose of this study was to examine whether individuals make appropriate corrections to ensure comfortable final goal postures when faced with an unexpected change in action goal. Participants reached for a horizontal cylinder and placed the left or right end of the object into a target disk. As soon as the participant began to move, a secondary stimulus was triggered, indicating whether the intended action goal had changed. Confirming previous research, participants selected initial grasp postures that ensured end-state comfort during non-perturbed trials. In addition, participants made appropriate online corrections to their reach-to-grasp movements to ensure end-state comfort during perturbed trials. Corrections in grasp posture occurred either early or late in the reach-to-grasp phase. The results indicate that individuals plan their movements to afford comfort at the end of the movement, and that grasp posture planning is controlled via both feedforward and feedback mechanisms.

17.
Extensive research shows that inter-talker variability (i.e., changing the talker) affects recognition memory for speech signals. However, relatively little is known about the consequences of intra-talker variability (i.e., changes in speaking style within a talker) for the encoding of speech signals in memory. It is well established that speakers can modulate the characteristics of their own speech and produce a listener-oriented, intelligibility-enhancing speaking style in response to communication demands (e.g., when speaking to listeners with hearing impairment or to non-native speakers of the language). Here we conducted two experiments to examine the role of speaking-style variation in spoken language processing. First, we examined the extent to which clear speech provided benefits in challenging listening environments (i.e., speech in noise). Second, we compared recognition memory for sentences produced in conversational and clear speaking styles. In both experiments, semantically normal and anomalous sentences were included to investigate the role of higher-level linguistic information in the processing of speaking-style variability. The results show that the acoustic-phonetic modifications implemented in listener-oriented speech lead to improved speech recognition in challenging listening conditions and, crucially, to a substantial enhancement in recognition memory for sentences.

18.
The aim of the present study was to investigate whether the perceived presentation durations of pictures of different body postures are distorted as a function of the embodied movement that originally produced those postures. Participants were presented with two pictures, one showing a low-arousal body posture judged to require no movement and the other a high-arousal body posture judged to require considerable movement. In a temporal bisection task with two ranges of standard durations (0.4/1.6 s and 2/8 s), the participants had to judge whether the presentation duration of each picture was more similar to the short or to the long standard duration. The results showed that duration was judged longer for the posture requiring more movement than for the posture requiring less movement. However, the magnitude of this overestimation was relatively greater for the range of short durations than for the range of longer durations. Further analyses suggest that this lengthening effect was mediated by an arousal effect of limited duration on the speed of the internal clock system.
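To illustrate the bisection analysis, the sketch below fits the proportion of "long" responses against probe duration for the two posture conditions and compares their bisection points (the duration judged "short" and "long" equally often); a lower bisection point for the high-arousal posture corresponds to the lengthening effect. All response proportions are simulated for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def p_long(duration, bisection_point, slope):
    """Probability of a 'long' response as a function of probe duration (s)."""
    return 1.0 / (1.0 + np.exp(-(duration - bisection_point) / slope))

durations_s = np.linspace(0.4, 1.6, 7)                                  # short-range probes (0.4-1.6 s)
prop_long_low = np.array([0.02, 0.10, 0.30, 0.55, 0.78, 0.92, 0.98])    # low-arousal posture
prop_long_high = np.array([0.05, 0.18, 0.45, 0.70, 0.88, 0.96, 0.99])   # high-arousal posture

bp_low, _ = curve_fit(p_long, durations_s, prop_long_low, p0=[1.0, 0.2])
bp_high, _ = curve_fit(p_long, durations_s, prop_long_high, p0=[1.0, 0.2])

# A lower bisection point means durations are judged "long" sooner, i.e., subjective lengthening.
print(f"bisection point, low-arousal posture: {bp_low[0]:.2f} s")
print(f"bisection point, high-arousal posture: {bp_high[0]:.2f} s")
```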

19.
The study was aimed at a deeper understanding of the interaction between the system of vertical posture control and the system of voluntary movement control, based on the analysis of postural muscle activity components produced by one system or the other. For this purpose, a quick arm raise was performed in standing and sitting positions with body fixation at different levels, so that the task of maintaining a vertical posture was simplified or completely eliminated. Under these conditions, the muscle activity associated with posture control was expected to change, while the activity of the muscles raising the arm was expected to remain invariant. The results showed that simplifying postural control led to a decrease or elimination of anticipatory changes in the activity of some muscles. However, most of the muscle activity variations were retained even in the sitting position, and these variations appeared simultaneously with the activity of the muscles raising the arm. The so-called “anticipatory postural activity” during an arm raise in a normal standing position therefore appears to consist of two components: an initial component reflecting the work of the posture control system and a later component reflecting the work of the movement control system. It is suggested that the planning of muscle activity and the exchange of information between these two systems take place only before the beginning of the movement; after that, the two systems act independently and in parallel.
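Anticipatory postural activity of the kind discussed above is usually quantified by comparing EMG onset times: the onset of a postural muscle burst is measured relative to the onset of the prime mover raising the arm. The sketch below applies an assumed envelope-and-threshold onset criterion to simulated rectified EMG; it is an illustration, not the authors' processing.

```python
import numpy as np

fs = 1000                                        # assumed sampling rate (Hz)
t = np.arange(-0.5, 0.5, 1 / fs)                 # time relative to trial start (s)
rng = np.random.default_rng(7)

def rectified_emg(onset_s, amplitude):
    """Baseline noise plus a burst of rectified EMG starting at onset_s."""
    burst = amplitude * (t >= onset_s) * (1 - np.exp(-np.clip(t - onset_s, 0, None) / 0.05))
    return np.abs(0.02 * rng.standard_normal(t.size)) + burst

def onset_time(emg, win_ms=20):
    """Onset = first time the smoothed envelope exceeds the baseline mean + 4 SD."""
    win = int(fs * win_ms / 1000)
    envelope = np.convolve(emg, np.ones(win) / win, mode="same")
    baseline = envelope[t < -0.2]
    above = np.flatnonzero(envelope > baseline.mean() + 4 * baseline.std())
    return t[above[0]] if above.size else None

deltoid = rectified_emg(onset_s=0.00, amplitude=1.0)       # prime mover of the arm raise
postural = rectified_emg(onset_s=-0.06, amplitude=0.4)     # e.g., a leg or trunk muscle, leading

lead_ms = 1000 * (onset_time(deltoid) - onset_time(postural))
print(f"postural muscle leads the prime mover by about {lead_ms:.0f} ms")
```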

20.
Hoekert M, Bais L, Kahn RS, Aleman A. PLoS One. 2008;3(5):e2244
In verbal communication, not only does the meaning of the words convey information; the tone of voice (prosody) also conveys crucial information about the emotional state and intentions of others. Several studies have found that right frontal and right temporal regions play a role in emotional prosody perception. Here, we used triple-pulse repetitive transcranial magnetic stimulation (rTMS) to shed light on the precise time course of involvement of the right anterior superior temporal gyrus and the right fronto-parietal operculum. We hypothesized that information would be processed in the right anterior superior temporal gyrus before being processed in the right fronto-parietal operculum. Right-handed healthy subjects performed an emotional prosody task. While subjects listened to each sentence, a triplet of TMS pulses was applied to one of the regions at one of six time points (400-1900 ms). Results showed a significant main effect of time for both the right anterior superior temporal gyrus and the right fronto-parietal operculum. The largest interference was observed half-way through the sentence. This effect was stronger for withdrawal emotions than for the approach emotion. A further experiment that included an active control condition, TMS over the EEG site POz (midline parietal-occipital junction), revealed stronger effects at the fronto-parietal operculum and anterior superior temporal gyrus relative to the active control condition. No evidence was found for sequential processing of emotional prosodic information from the right anterior superior temporal gyrus to the right fronto-parietal operculum; rather, the results point to more parallel processing. Our results suggest that both the right fronto-parietal operculum and the right anterior superior temporal gyrus are critical for emotional prosody perception at a relatively late period after sentence onset. This may reflect the fact that emotional cues can still be ambiguous at the beginning of a sentence but become more apparent half-way through.
