Similar Documents
20 similar documents retrieved (search time: 46 ms)
1.

Background

Psychosis has various causes, including mania and schizophrenia. Since the differential diagnosis of psychosis is exclusively based on subjective assessments of oral interviews with patients, an objective quantification of the speech disturbances that characterize mania and schizophrenia is in order. In principle, such quantification could be achieved by the analysis of speech graphs. A graph represents a network with nodes connected by edges; in speech graphs, nodes correspond to words and edges correspond to semantic and grammatical relationships.
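A minimal sketch of the graph representation described above, assuming Python with the networkx library; for simplicity, edges here link consecutive words in a transcript, a reduced stand-in for the semantic and grammatical relationships the entry mentions:

```python
import networkx as nx

# Build a directed speech graph: one node per word, one edge per word
# transition. (The study also encodes semantic/grammatical relations;
# consecutive-word edges are a simplification for this sketch.)
transcript = "i went out and then i went home".split()
G = nx.DiGraph()
G.add_edges_from(zip(transcript, transcript[1:]))

# Graph measures of the kind compared between groups:
print("nodes:", G.number_of_nodes())   # lexical repertoire of the sample
print("edges:", G.number_of_edges())
largest_scc = max(nx.strongly_connected_components(G), key=len)
print("largest strongly connected component:", len(largest_scc))  # recurrence
```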

Methodology/Principal Findings

To quantify speech differences related to psychosis, interviews with schizophrenics, manics and normal subjects were recorded and represented as graphs. Manics scored significantly higher than schizophrenics in ten graph measures. Psychopathological symptoms such as logorrhea, poor speech, and flight of thoughts were captured by the analysis even when verbosity differences were discounted. Binary classifiers based on speech graph measures sorted schizophrenics from manics with up to 93.8% sensitivity and 93.7% specificity. In contrast, sorting based on the scores of two standard psychiatric scales (BPRS and PANSS) reached only 62.5% sensitivity and specificity.

Conclusions/Significance

The results demonstrate that alterations of the thought process manifested in the speech of psychotic patients can be objectively measured using graph-theoretical tools, developed to capture specific features of the normal and dysfunctional flow of thought, such as divergence and recurrence. The quantitative analysis of speech graphs is not redundant with standard psychometric scales but rather complementary, as it yields a very accurate sorting of schizophrenics and manics. Overall, the results point to automated psychiatric diagnosis based not on what is said, but on how it is said.

2.
3.
The notion of the phase structure of the speech act—or, to be more precise, of the special structure of the "inner speech" stage in utterance production—belongs to L. S. Vygotsky. Vygotsky conceptualized the process of speech production, the progress from thought to word to external speech, as follows: "from the motive that engenders a thought, to the formulation of that thought, its mediation by the inner word, and then by the meanings of external words, and finally, by words themselves."[1] Elsewhere he said, "Thought is an internally mediated process. It moves from a vague desire to the mediated formulation of meaning, or rather, not the formulation, but the fulfillment of the thought in the word." And finally, "Thought is not something ready-made that needs to be expressed. Thought strives to fulfill some function or goal. This is achieved by moving from the sensation of a task—through construction of meaning—to the elaboration of the thought itself."[2]

4.
Some of the most essential and currently promising achievements of the psychology of the 1930s, resulting from the application of the methodological principle of the unity of consciousness and activity, are identified and analyzed here. The starting conditions for a child's mastery of speech on the basis of initially practical communicative contacts with adults and material objects are set forth; these conditions have been ignored by cultural-historical theory and by nonactivity approaches. It is concluded that speech is not an activity. The relation between activity and behavior is examined.

5.
A complete neurobiological understanding of speech motor control requires determination of the relationship between simultaneously recorded neural activity and the kinematics of the lips, jaw, tongue, and larynx. Many speech articulators are internal to the vocal tract, and therefore simultaneously tracking the kinematics of all articulators is nontrivial—especially in the context of human electrophysiology recordings. Here, we describe a noninvasive, multi-modal imaging system to monitor vocal tract kinematics, demonstrate this system in six speakers during production of nine American English vowels, and provide new analysis of such data. Classification and regression analysis revealed considerable variability in the articulator-to-acoustic relationship across speakers. Non-negative matrix factorization extracted basis sets capturing vocal tract shapes allowing for higher vowel classification accuracy than traditional methods. Statistical speech synthesis generated speech from vocal tract measurements, and we demonstrate perceptual identification. We demonstrate the capacity to predict lip kinematics from ventral sensorimotor cortical activity. These results demonstrate a multi-modal system to non-invasively monitor articulator kinematics during speech production, describe novel analytic methods for relating kinematic data to speech acoustics, and provide the first decoding of speech kinematics from electrocorticography. These advances will be critical for understanding the cortical basis of speech production and the creation of vocal prosthetics.
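As a hedged illustration of the matrix-factorization step, the sketch below factors a placeholder matrix of nonnegative vocal tract measurements into basis shapes with scikit-learn's NMF; the matrix dimensions and component count are assumptions for the example, not values from the study:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Placeholder data: 500 imaging frames x 40 nonnegative articulator
# coordinates (real input would be tracked vocal tract measurements).
X = rng.random((500, 40))

model = NMF(n_components=8, init="nndsvd", max_iter=500)
W = model.fit_transform(X)   # per-frame weights over the basis shapes
H = model.components_        # 8 basis "vocal tract shapes"
print(W.shape, H.shape)      # W rows can serve as features for vowel classification
```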

6.
This paper reviews the basic aspects of auditory processing that play a role in the perception of speech. The frequency selectivity of the auditory system, as measured using masking experiments, is described and used to derive the internal representation of the spectrum (the excitation pattern) of speech sounds. The perception of timbre and distinctions in quality between vowels are related to both static and dynamic aspects of the spectra of sounds. The perception of pitch and its role in speech perception are described. Measures of the temporal resolution of the auditory system are described and a model of temporal resolution based on a sliding temporal integrator is outlined. The combined effects of frequency and temporal resolution can be modelled by calculation of the spectro-temporal excitation pattern, which gives good insight into the internal representation of speech sounds. For speech presented in quiet, the resolution of the auditory system in frequency and time usually markedly exceeds the resolution necessary for the identification or discrimination of speech sounds, which partly accounts for the robust nature of speech perception. However, for people with impaired hearing, speech perception is often much less robust.
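A minimal sketch of the sliding temporal integrator idea, assuming Python/NumPy; the exponential window and its time constant are illustrative choices, not the model's published parameters:

```python
import numpy as np

def temporal_integrator(intensity, fs, tau=0.008):
    """Sliding (leaky) temporal integration of an intensity envelope.
    tau is the window time constant in seconds (illustrative value)."""
    alpha = np.exp(-1.0 / (fs * tau))
    out = np.empty_like(intensity)
    acc = 0.0
    for i, x in enumerate(intensity):
        acc = alpha * acc + (1.0 - alpha) * x  # exponentially forget older input
        out[i] = acc
    return out

fs = 16000
t = np.arange(fs) / fs
envelope = (np.sin(2 * np.pi * 4 * t) > 0).astype(float)  # 4 Hz on/off intensity
smoothed = temporal_integrator(envelope, fs)  # fast fluctuations are smeared out
```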

7.
Hearing one’s own voice is critical for fluent speech production as it allows for the detection and correction of vocalization errors in real time. This behavior, known as the auditory feedback control of speech, is impaired in various neurological disorders ranging from stuttering to aphasia; however, the underlying neural mechanisms are still poorly understood. Computational models of speech motor control suggest that, during speech production, the brain uses an efference copy of the motor command to generate an internal estimate of the speech output. When actual feedback differs from this internal estimate, an error signal is generated to correct the internal estimate and update the motor commands necessary to produce the intended speech. We were able to localize the auditory error signal using electrocorticographic recordings from neurosurgical participants during a delayed auditory feedback (DAF) paradigm. In this task, participants hear their voice with a time delay as they produce words and sentences (similar to an echo on a conference call), which is well known to disrupt fluency by causing slow and stutter-like speech in humans. We observed a significant response enhancement in auditory cortex that scaled with the duration of feedback delay, indicating an auditory speech error signal. Immediately following auditory cortex, dorsal precentral gyrus (dPreCG), a region that has not been implicated in auditory feedback processing before, exhibited a markedly similar response enhancement, suggesting a tight coupling between the two regions. Critically, response enhancement in dPreCG occurred only during articulation of long utterances due to a continuous mismatch between produced speech and reafferent feedback. These results suggest that dPreCG plays an essential role in processing auditory error signals during speech production to maintain fluency.

Hearing one’s own voice is critical for fluent speech production, allowing detection and correction of vocalization errors in real time. This study shows that the dorsal precentral gyrus is a critical component of a cortical network that monitors auditory feedback to produce fluent speech; this region is engaged specifically when speech production is effortful during articulation of long utterances.
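For illustration, a minimal delayed auditory feedback loop can be sketched with the third-party sounddevice library; the 200 ms delay and mono configuration are assumptions for the example, not the study's settings:

```python
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 44100
DELAY_MS = 200  # assumed delay; the study varied this parameter
delay_samples = int(SAMPLE_RATE * DELAY_MS / 1000)
ring = np.zeros((delay_samples, 1), dtype="float32")  # holds the delayed audio
pos = 0

def callback(indata, outdata, frames, time, status):
    """Play back each microphone sample delay_samples later."""
    global pos
    for i in range(frames):
        outdata[i] = ring[pos]   # emit the sample captured DELAY_MS ago
        ring[pos] = indata[i]    # store the current sample
        pos = (pos + 1) % delay_samples

# Full-duplex stream: speak into the microphone and hear yourself delayed.
with sd.Stream(samplerate=SAMPLE_RATE, channels=1, callback=callback):
    sd.sleep(10_000)  # run the feedback loop for 10 seconds
```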

8.
It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers’ voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker’s face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

9.
The internal genetic structure and outcrossing rate of a population of Araucaria angustifolia (Bert.) O. Kuntze were investigated using 16 allozyme loci. Estimates of the mean number of alleles per locus (1.6), percentage of polymorphic loci (43.8%), and expected genetic diversity (0.170) were similar to those obtained for other gymnosperms. The analysis of spatial autocorrelation demonstrated the presence of internal structure in the first distance classes (up to 70 m), suggesting the presence of family structure. The outcrossing rate was high (0.956), as expected for a dioecious species. However, it differed from unity, indicating outcrossing between related individuals and corroborating the presence of internal genetic structure. The results of this study have implications for the methodologies used in conservation collections and for the use or analysis of this forest species.
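A small sketch of how diversity statistics of this kind are computed from per-locus allele frequencies, assuming Python/NumPy; the frequencies below are placeholders, not the study's data:

```python
import numpy as np

# Placeholder allele frequencies for a few allozyme loci (each sums to 1).
allele_freqs = [
    [1.0],             # monomorphic locus
    [0.7, 0.3],
    [0.5, 0.4, 0.1],
]

mean_alleles = np.mean([len(p) for p in allele_freqs])  # mean alleles per locus
pct_polymorphic = 100 * np.mean(                        # 95% polymorphism criterion
    [max(p) < 0.95 for p in allele_freqs])
He = np.mean(                                           # expected heterozygosity
    [1.0 - np.sum(np.square(p)) for p in allele_freqs])
print(mean_alleles, pct_polymorphic, He)
```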

10.
People of all ages display the ability to detect and learn from patterns in seemingly random stimuli. Referred to as statistical learning (SL), this process is particularly critical when learning a spoken language, helping in the identification of discrete words within a spoken phrase. Here, by considering individual differences in speech auditory–motor synchronization, we demonstrate that recruitment of a specific neural network supports behavioral differences in SL from speech. While independent component analysis (ICA) of fMRI data revealed that a network of auditory and superior pre/motor regions is universally activated in the process of learning, a frontoparietal network is additionally and selectively engaged by only some individuals (high auditory–motor synchronizers). Importantly, activation of this frontoparietal network is related to a boost in learning performance, and interference with this network via articulatory suppression (AS; i.e., producing irrelevant speech during learning) normalizes performance across the entire sample. Our work provides novel insights on SL from speech and reconciles previous contrasting findings. These findings also highlight a more general need to factor in fundamental individual differences for a precise characterization of cognitive phenomena.

In the context of speech, statistical learning is thought to be an important mechanism for language acquisition. This study shows that statistical learning of language is boosted by the recruitment of a frontoparietal brain network related to auditory–motor synchronization and its interplay with a mandatory auditory–motor learning system.
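As a hedged illustration of the ICA step, the sketch below unmixes synthetic time courses with scikit-learn's FastICA; the signals and dimensions are invented for the example, not taken from the fMRI data:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.linspace(0, 100, 1000)
# Two invented latent "network" time courses, linearly mixed into 20 signals.
sources = np.c_[np.sin(2.0 * t), np.sign(np.sin(3.0 * t))]
mixing = rng.random((20, 2))
X = sources @ mixing.T + 0.05 * rng.standard_normal((1000, 20))

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)  # estimated independent component time courses
print(recovered.shape)            # (1000, 2)
```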

11.
The purpose of the present study was to determine whether different cues to increase loudness in speech result in different internal targets (or goals) for respiratory movement and whether the neural control of the respiratory system is sensitive to changes in the speaker's internal loudness target. This study examined respiratory mechanisms during speech in 30 young adults at comfortable and increased loudness levels. Increased loudness was elicited using three methods: asking subjects to target a specific sound pressure level, asking subjects to speak twice as loud as comfortable, and asking subjects to speak in noise. All three loud conditions resulted in similar increases in sound pressure level. However, the respiratory mechanisms used to support the increase in loudness differed significantly depending on how the louder speech was elicited. When asked to target a particular sound pressure level, subjects increased the lung volume at which speech was initiated to take advantage of higher recoil pressures. When asked to speak twice as loud as comfortable, subjects increased expiratory muscle tension, for the most part, to increase the pressure for speech. However, in the most natural of the elicitation methods, speaking in noise, the subjects used a combined respiratory approach, using both increased recoil pressures and increased expiratory muscle tension. In noise, an additional target, possibly improving the intelligibility of speech, was reflected in the slowing of speech rate and in larger volume excursions even though the speakers were producing the same number of syllables.

12.
Moore DR. Current Biology: CB 2000, 10(10): R362-R364
Speech is thought to be perceived and processed in a unique way by the auditory system of the brain. A recent study has provided evidence that a part of the brain's temporal lobe is specifically responsive to speech and other vocal stimuli.

13.
Language can be viewed as a set of cues that modulate the comprehender’s thought processes. It is a very subtle instrument. For example, the literature suggests that people perceive direct speech (e.g., Joanne said: ‘I went out for dinner last night’) as more vivid and perceptually engaging than indirect speech (e.g., Joanne said that she went out for dinner last night). But how is this alleged vividness evident in comprehenders’ mental representations? We sought to address this question in a series of experiments. Our results do not support the idea that, compared to indirect speech, direct speech enhances the accessibility of information from the communicative or the referential situation during comprehension. Neither do our results support the idea that the hypothesized more vivid experience of direct speech is caused by a switch from the visual to the auditory modality. However, our results do show that direct speech leads to a stronger mental representation of the exact wording of a sentence than does indirect speech. These results show that language has a more subtle influence on memory representations than was previously suggested.

14.
We present electron-microscope visualizations of stromule-like protrusions of the membrane envelope of plastids in root cells. Cases are discussed in which a long, narrow protrusion of the outer membrane contains a shorter protrusion of the inner membrane of the plastid envelope. The possible roles of the cytoskeleton and the plastoskeleton in the formation of the "external" and "internal" protrusions, respectively, are considered. We conclude that the structure and functions of stromules in plant cells should be considered in unity with the structure and functions of the internal space of the endoplasmic reticulum.

15.
Speech perception is thought to be linked to speech motor production. This linkage is considered to mediate multimodal aspects of speech perception, such as audio-visual and audio-tactile integration. However, direct coupling between articulatory movement and auditory perception has been little studied. The present study reveals a clear dissociation between the effects of a listener’s own speech action and the effects of viewing another’s speech movements on the perception of auditory phonemes. We assessed the intelligibility of the syllables [pa], [ta], and [ka] when listeners silently and simultaneously articulated syllables that were congruent/incongruent with the syllables they heard. The intelligibility was compared with a condition where the listeners simultaneously watched another’s mouth producing congruent/incongruent syllables, but did not articulate. The intelligibility of [ta] and [ka] was degraded by articulating [ka] and [ta], respectively, which are associated with the same primary articulator (tongue) as the heard syllables, but it was not affected by articulating [pa], which is associated with a different primary articulator (lips) from the heard syllables. In contrast, the intelligibility of [ta] and [ka] was degraded by watching the production of [pa]. These results indicate that the articulatory-induced distortion of speech perception occurs in an articulator-specific manner, while visually induced distortion does not. The articulator-specific nature of the auditory-motor interaction in speech perception suggests that speech motor processing directly contributes to our ability to hear speech.

16.
When we speak, we provide ourselves with auditory speech input. Efficient monitoring of speech is often hypothesized to depend on matching the predicted sensory consequences of internal motor commands (the forward model) with actual sensory feedback. In this paper we tested the forward-model hypothesis using functional magnetic resonance imaging. We administered an overt picture-naming task in which we parametrically reduced the quality of verbal feedback by noise masking. Presentation of the same auditory input in the absence of overt speech served as a listening control condition. Our results suggest that a match between predicted and actual sensory feedback results in inhibition or cancellation of auditory activity, because speaking with normal unmasked feedback reduced activity in the auditory cortex compared to the listening control conditions. Moreover, during self-generated speech, activation in auditory cortex increased as the feedback quality of the self-generated speech decreased. We conclude that during speaking early auditory cortex is involved in matching external signals with an internally generated model or prediction of sensory consequences, the locus of which may reside in auditory or higher-order brain areas. Matching at early auditory cortex may provide a very sensitive monitoring mechanism that highlights speech production errors at very early levels of processing and may efficiently determine the self-agency of speech input.
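A toy sketch of the forward-model logic described here: auditory activity is treated as the mismatch between the efference-copy prediction and the (noise-masked) feedback, so the proxy rises as masking increases. All quantities are illustrative, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
predicted = np.sin(np.linspace(0, 10, 200))  # efference-copy prediction of feedback

for noise_level in (0.0, 0.5, 1.0):  # parametric noise masking, as in the task
    actual = predicted + noise_level * rng.standard_normal(200)
    error = np.mean((actual - predicted) ** 2)  # prediction-error magnitude
    print(f"masking {noise_level:.1f} -> auditory activity proxy {error:.2f}")
```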

17.
Swelling-induced human erythrocyte K-Cl cotransport is membrane potential independent and capable of uphill transport. However, a complete thermodynamic analysis of basal and stimulated K-Cl cotransport, at constant cell volume, is missing. This study was performed in low-K sheep red blood cells before and after reducing cellular free Mg into the nanomolar range with the divalent cation ionophore A23187 and a chelator, an intervention known to stimulate K-Cl cotransport. The anion exchange inhibitor 4,4′-diisothiocyanato-2,2′-stilbenedisulfonic acid was used to clamp intracellular pH and Cl or NO3 concentrations. Cell volume was maintained constant as external and internal pH differed by more than two units. K-Cl cotransport was calculated from the K effluxes and Rb (as K congener) influxes measured in Cl and NO3, at constant internal K and external anions, and variable concentrations of extracellular Rb and internal anions, respectively. The external Rb concentration at which net K-Cl cotransport is zero was defined as the flux reversal point, which changed with internal pH and hence Cl. Plots of the ratio of the external Rb concentrations corresponding to the flux reversal points and the internal K concentration versus the ratio of the internal and external Cl concentrations (i.e., the Donnan ratio of the transported ions) yielded slopes near unity for both control and low-internal-Mg cells. Thus, basal as well as low-internal-Mg-stimulated net K-Cl cotransport depends on the electrochemical potential gradient of KCl.
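A hedged restatement, in LaTeX, of the thermodynamic condition the abstract describes: net K-Cl cotransport vanishes when the chemical potential gradient of KCl is zero, which is consistent with the reported near-unity slope of the flux-reversal ratio against the Donnan ratio (symbols are standard, not the paper's notation):

```latex
% Flux reversal condition: net K-Cl cotransport stops when the chemical
% potential gradient of KCl vanishes (Rb serves as the K congener).
\[
  \Delta\mu_{\mathrm{KCl}}
    = RT \,\ln\!\frac{[\mathrm{K}]_i\,[\mathrm{Cl}]_i}{[\mathrm{K}]_o\,[\mathrm{Cl}]_o}
    = 0
  \quad\Longleftrightarrow\quad
  \frac{[\mathrm{Rb}]_o^{\mathrm{rev}}}{[\mathrm{K}]_i}
    = \frac{[\mathrm{Cl}]_i}{[\mathrm{Cl}]_o}
\]
```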

18.
In this paper we propose that the internal bracketing of a word with more than two morphemes is reflected in the phonetic implementation. We hypothesize that embedded forms show more phonetic reduction than forms at higher structural levels (‘Embedded Reduction Hypothesis’). This paper tests the prediction of the Embedded Reduction Hypothesis with triconstituent compounds. The analysis of the durational properties of almost 500 compound tokens shows that there is a lengthening effect on the non-embedded constituent, and a shortening effect on the adjacent embedded constituent. Yet, this predicted effect of embedding interacts with other lexical factors, above all the bigram frequency of the embedded compound. At a theoretical level, these effects mean that the durational properties of the cross-boundary constituents are indicative of the hierarchical structure and of the strength of the internal boundary of triconstituent compounds. Hence, morphological structure is reflected in the speech signal.
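As a hedged illustration of the kind of durational analysis described, the sketch below regresses constituent duration on embedding status and bigram frequency over simulated data; the variable names and effect sizes are invented, and the study's actual model is richer:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 480  # roughly the ~500 compound tokens analyzed
df = pd.DataFrame({
    "embedded": rng.integers(0, 2, n),           # 1 = constituent inside the embedded compound
    "log_bigram_freq": rng.normal(8.0, 1.0, n),  # frequency of the embedded compound
})
# Invented effects: embedded constituents shorten, modulated by bigram frequency.
df["duration_ms"] = (
    300.0
    - 25.0 * df["embedded"]
    - 5.0 * df["log_bigram_freq"]
    + 2.0 * df["embedded"] * df["log_bigram_freq"]
    + rng.normal(0.0, 20.0, n)
)

model = smf.ols("duration_ms ~ embedded * log_bigram_freq", data=df).fit()
print(model.summary())
```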

19.
Luo H, Poeppel D. Neuron 2007, 54(6): 1001-1010
How natural speech is represented in the auditory cortex constitutes a major challenge for cognitive neuroscience. Although many single-unit and neuroimaging studies have yielded valuable insights about the processing of speech and matched complex sounds, the mechanisms underlying the analysis of speech dynamics in human auditory cortex remain largely unknown. Here, we show that the phase pattern of theta band (4-8 Hz) responses recorded from human auditory cortex with magnetoencephalography (MEG) reliably tracks and discriminates spoken sentences and that this discrimination ability is correlated with speech intelligibility. The findings suggest that an approximately 200 ms temporal window (period of theta oscillation) segments the incoming speech signal, resetting and sliding to track speech dynamics. This hypothesized mechanism for cortical speech analysis is based on the stimulus-induced modulation of inherent cortical rhythms and provides further evidence implicating the syllable as a computational primitive for the representation of spoken language.
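A minimal sketch of extracting theta-band phase patterns from a neural time series, assuming Python with SciPy; the sampling rate and random placeholder data are assumptions for the example:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_phase(x, fs):
    """Band-pass a time series to 4-8 Hz and return its instantaneous phase."""
    b, a = butter(4, [4.0 / (fs / 2), 8.0 / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, x)))

fs = 600.0                              # assumed MEG sampling rate
trial1 = np.random.randn(int(5 * fs))   # placeholder single-trial responses
trial2 = np.random.randn(int(5 * fs))

# Phase coherence across trials: high for repeats of the same sentence,
# lower across different sentences -- the basis of the reported discrimination.
itpc = np.abs(np.mean(np.exp(1j * (theta_phase(trial1, fs) - theta_phase(trial2, fs)))))
print(f"theta phase coherence: {itpc:.3f}")
```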

20.
Much recent research has shown that the capacity for mental time travel and temporal reasoning emerges during the preschool years. Nothing is known so far, however, about young children's grasp of the normative dimension of future-directed thought and speech. The present study is the first to show that children from age 4 understand the normative outreach of such future-directed speech acts: subjects at time 1 witnessed a speaker make future-directed speech acts about/towards an actor A, either in imperative mode (“A, do X!”) or as a prediction (“the actor A will do X”). When at time 2 the actor A performed an action that did not match the content of the speech act at time 1, children identified the speaker as the source of the mistake in the prediction case and the actor as the source of the mistake in the imperative case, and leveled criticism accordingly. These findings add to our knowledge about the emergence and development of temporal cognition in revealing an early sensitivity to the normative aspects of future-orientation.
