Similar Documents
20 similar documents found (search time: 31 ms)
1.
The evolutionary origins of speech remain obscure. Recently, it was proposed that speech derived from monkey facial signals which exhibit a speech-like rhythm of ∼5 open-close lip cycles per second. In monkeys, these signals may also be vocalized, offering a plausible evolutionary stepping stone towards speech. Three essential predictions remain, however, to be tested to assess this hypothesis's validity: (i) great apes, our closest relatives, should likewise produce 5 Hz-rhythm signals, (ii) speech-like rhythm should involve calls articulatorily similar to consonants and vowels, given that speech rhythm is the direct product of stringing together these two basic elements, and (iii) speech-like rhythm should be experience-based. Via cinematic analyses we demonstrate that an ex-entertainment orangutan produces two calls at a speech-like rhythm, coined “clicks” and “faux-speech.” Like voiceless consonants, clicks required no vocal fold action, but did involve independent manoeuvring over lips and tongue. In parallel to vowels, faux-speech showed harmonic and formant modulations, implying vocal fold and supralaryngeal action. This rhythm was several times faster than orangutan chewing rates, as observed in monkeys and humans. Critically, this rhythm was seven-fold faster than, and contextually distinct from, any other known rhythmic calls described to date in the largest database of the orangutan repertoire ever assembled. The first two predictions advanced by this study are validated and, based on parsimony and exclusion of potential alternative explanations, initial support is given to the third prediction. Irrespective of the putative origins of these calls and underlying mechanisms, our findings demonstrate irrevocably that great apes are not respiratorily, articulatorily, or neurologically constrained for the production of consonant- and vowel-like calls at speech rhythm. Orangutan clicks and faux-speech confirm the importance of rhythmic speech antecedents within the primate lineage, and highlight potential articulatory homologies between great ape calls and human consonants and vowels.

2.
3.
Is speech rhythmic? In the absence of evidence for a traditional view that languages strive to coordinate either syllables or stress-feet with regular time intervals, we consider the alternative that languages exhibit contrastive rhythm subsisting merely in the alternation of stronger and weaker elements. This is initially plausible, particularly for languages with a steep ‘prominence gradient’, i.e. a large disparity between stronger and weaker elements; but we point out that alternation is poorly achieved even by a ‘stress-timed’ language such as English, and, historically, languages have conspicuously failed to adopt simple phonological remedies that would ensure alternation. Languages seem more concerned to allow ‘syntagmatic contrast’ between successive units and to use durational effects to support linguistic functions than to facilitate rhythm. Furthermore, some languages (e.g. Tamil, Korean) lack the lexical prominence which would most straightforwardly underpin prominence of alternation. We conclude that speech is not incontestably rhythmic, and may even be antirhythmic. However, its linguistic structure and patterning allow the metaphorical extension of rhythm in varying degrees and in different ways depending on the language, and it is this analogical process which allows speech to be matched to external rhythms.

4.
Skliarov OP. Biofizika, 2003, 48(3): 553-557.
There is increasing evidence that the synergistic interaction of the objectively accepted low-frequency rhythm and the subjectively accepted high-frequency sound is a source of speech "sense". The lack of one of these components, or of their synergistic interaction, results in the loss of "sense" of the acoustic signal being perceived. As for the universality of perception of the sound component of speech, there is a model of sound perception which ensures this universality. This model describes the dynamics of audition using the synchronization model of oscillations in the Ruelle-Takens scenario of transition to chaos. The oscillations are described by Hopf bifurcations known to be structurally stable, which provides the universality of sound perception. However, until now nothing was known about the mechanism of rhythm. It is shown in our study that the mechanism of rhythm is described by the Feigenbaum scenario of transition to chaos. Upon transition from oscillations to chaos, this scenario incorporates a critical point near which the dynamics of the system is described in a universal way.
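The abstract invokes the Feigenbaum period-doubling route to chaos without giving the underlying equations. As a generic, minimal sketch (not Skliarov's model of rhythm), the logistic map below shows the cascade of period doublings that this scenario names; the parameter values and the counting of attractor states are purely illustrative.

    # Minimal illustration of the Feigenbaum period-doubling route to chaos,
    # using the textbook logistic map (not the author's model).
    import numpy as np

    def logistic_orbit(r, x0=0.5, n_transient=2000, n_keep=64):
        """Iterate x -> r*x*(1-x), discard transients, return the orbit tail."""
        x = x0
        for _ in range(n_transient):
            x = r * x * (1.0 - x)
        tail = []
        for _ in range(n_keep):
            x = r * x * (1.0 - x)
            tail.append(x)
        return np.array(tail)

    # Successive period doublings (1, 2, 4, 8 attractor states), then chaos at r = 3.9.
    for r in (2.8, 3.2, 3.5, 3.56, 3.9):
        n_states = len(np.unique(np.round(logistic_orbit(r), 3)))
        print(f"r = {r:.2f}: ~{n_states} distinct attractor states")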

5.
A new method has been developed to compute the probability that each amino acid in a protein sequence is in a particular secondary structural element. Each of these probabilities is computed using the entire sequence and a set of predefined structural class models. This set of structural classes is patterned after Jane Richardson's taxonomy for the domains of globular proteins. For each structural class considered, a mathematical model is constructed to represent constraints on the pattern of secondary structural elements characteristic of that class. These are stochastic models having discrete state spaces (referred to as hidden Markov models by researchers in signal processing and automatic speech recognition). Each model is a mathematical generator of amino acid sequences; the sequence under consideration is modeled as having been generated by one model in the set of candidates. The probability that each model generated the given sequence is computed using a filtering algorithm. The protein is then classified as belonging to the structural class having the most probable model. The secondary structure of the sequence is then analyzed using a "smoothing" algorithm that is optimal for that structural class model. For each residue position in the sequence, the smoother computes the probability that the residue is contained within each of the defined secondary structural elements of the model. This method has two important advantages: (1) the probability of each residue being in each of the modeled secondary structural elements is computed using the totality of the amino acid sequence, and (2) these probabilities are consistent with prior knowledge of realizable domain folds as encoded in each model. As an example of the method's utility, we present its application to flavodoxin, a prototypical alpha/beta protein having a central beta-sheet, and to thioredoxin, which belongs to a similar structural class but shares no significant sequence similarity.
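The filtering and smoothing steps described here are standard hidden Markov model computations. The sketch below shows the idea on a toy three-state model (helix/strand/coil) with invented transition, initial and emission probabilities; it is not the authors' structural class models, only the forward-backward posterior computation they refer to.

    # Toy forward filtering and forward-backward smoothing on a 3-state HMM.
    # All probabilities are invented for illustration.
    import numpy as np

    states = ["H", "E", "C"]                    # helix, strand, coil
    A = np.array([[0.90, 0.02, 0.08],           # hypothetical transition matrix
                  [0.02, 0.85, 0.13],
                  [0.10, 0.15, 0.75]])
    pi = np.array([0.3, 0.2, 0.5])              # hypothetical initial distribution

    def posteriors(emission_probs):
        """emission_probs[t, s] = P(residue_t | state s); return smoothed P(state_t | whole sequence)."""
        T, S = emission_probs.shape
        alpha = np.zeros((T, S))                 # forward (filtering) pass
        alpha[0] = pi * emission_probs[0]
        alpha[0] /= alpha[0].sum()
        for t in range(1, T):
            alpha[t] = emission_probs[t] * (alpha[t - 1] @ A)
            alpha[t] /= alpha[t].sum()
        beta = np.ones((T, S))                   # backward pass
        for t in range(T - 2, -1, -1):
            beta[t] = A @ (emission_probs[t + 1] * beta[t + 1])
            beta[t] /= beta[t].sum()
        gamma = alpha * beta                     # smoothed per-residue posteriors
        return gamma / gamma.sum(axis=1, keepdims=True)

    # Five toy residues: the first three look helix-like, the last two coil-like.
    toy = np.array([[0.6, 0.1, 0.3]] * 3 + [[0.1, 0.2, 0.7]] * 2)
    print(np.round(posteriors(toy), 2))

The smoothed posteriors use the entire sequence in both directions, which is the first advantage the abstract lists.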

6.
Enhancer-promoter interactions in eukaryotic genomes are often controlled by sequence elements that block the actions of enhancers. Although the experimental evidence suggests that those sequence elements contribute to forming loops of chromatin, the molecular mechanism of how such looping affects the enhancer-blocking activity is still largely unknown. In this article, the roles of DNA looping in enhancer blocking are investigated by numerically simulating the DNA conformation of a prototypical model system of gene regulation. The simulated results show that the enhancer function is indeed blocked when the enhancer is looped out so that it is separated from the promoter, which explains experimental observations of gene expression in the model system. The local structural distortion of DNA caused by looping is important for blocking, so the ability of looping to block enhancers can be lost when the loop length is much larger than the persistence length of the chain.
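The abstract's point that blocking weakens once the loop is much longer than the persistence length can be made plausible with a textbook worm-like-chain estimate (not the authors' conformational simulation): the elastic cost of closing a chain of contour length L into a circle falls off as 1/L.

    # Back-of-the-envelope bending cost of a circular DNA loop (worm-like chain):
    # E / kT = 2 * pi^2 * Lp / L for a circle of contour length L.
    import math

    Lp = 50.0                                    # DNA persistence length, ~50 nm
    for L in (50.0, 150.0, 500.0, 1500.0):       # loop contour lengths in nm (illustrative)
        print(f"L = {L:6.0f} nm  ->  bending cost ~ {2 * math.pi ** 2 * Lp / L:5.1f} kT")

Loops comparable to Lp carry a large bending penalty and strongly distort the chain locally, while loops many times longer than Lp cost only a few kT and distort it weakly.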

7.
A key feature of speech is its stereotypical 5 Hz rhythm. One theory posits that this rhythm evolved through the modification of rhythmic facial movements in ancestral primates. If the hypothesis has any validity, then a comparative approach may shed some light. We tested this idea by using cineradiography (X-ray movies) to characterize and quantify the internal dynamics of the macaque monkey vocal tract during lip-smacking (a rhythmic facial expression) versus chewing. Previous human studies showed that speech movements are faster than chewing movements, and the functional coordination between vocal tract structures is different between the two behaviors. If rhythmic speech evolved through a rhythmic ancestral facial movement, then one hypothesis is that monkey lip-smacking versus chewing should also exhibit these differences. We found that the lips, tongue, and hyoid move with a speech-like 5 Hz rhythm during lip-smacking, but not during chewing. Most importantly, the functional coordination between these structures was distinct for each behavior. These data provide empirical support for the idea that the human speech rhythm evolved from the rhythmic facial expressions of ancestral primates.
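The rate comparison behind statements such as "a speech-like 5 Hz rhythm" amounts to finding the dominant frequency of a tracked articulator trajectory. A minimal sketch, with an assumed frame rate and a synthetic trajectory standing in for the cineradiographic data:

    # Estimate the dominant movement rhythm of a tracked articulator signal
    # (e.g., lip aperture over time) from the peak of its Fourier spectrum.
    import numpy as np

    fs = 60.0                                     # assumed frame rate (frames/s)
    t = np.arange(0, 5, 1 / fs)
    aperture = np.sin(2 * np.pi * 5.0 * t) + 0.2 * np.random.randn(t.size)  # toy 5 Hz motion

    spectrum = np.abs(np.fft.rfft(aperture - aperture.mean()))
    freqs = np.fft.rfftfreq(aperture.size, d=1 / fs)
    print(f"dominant movement frequency: {freqs[spectrum.argmax()]:.1f} Hz")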

8.
Yao MC, Yao CH. Nucleic Acids Research, 1994, 22(25): 5702-5708.
Extensive programmed DNA deletion occurs in ciliates during development. In this study we examine the excised forms of two previously characterized deletion elements, the R- and M-element, in Tetrahymena. Using divergently oriented primers in polymerase chain reactions we have detected the junctions formed by joining the two ends of these elements, providing evidence for the presence of circular excised forms. These circular forms were detected in developing macronuclear DNA from 12-24 h after mating began, but not in micronuclear or whole cell DNA of vegetative cells. They are present at very low abundance, detectable after PCR only through hybridization with specific probes. Sequence analysis shows that the circle junctions occur at or very near the known ends of the elements. There is sequence microheterogeneity in these junctions, which does not support a simple reciprocal exchange model for DNA deletion. A model involving staggered cuts and variable mismatch repair is proposed to explain these results. This model also explains the sequence microheterogeneity previously detected among the junction sequences retained in the macronuclear chromosome.
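As a toy illustration of the proposed mechanism (invented sequences, not the R- or M-element), staggered cuts at the two ends of an excised element, followed by joining into a circle, naturally produce junction sequences that differ by a few bases, which is one simple way to obtain the reported microheterogeneity:

    # Toy model: variable (staggered) cut positions at the element ends give
    # slightly different circle-junction sequences after end joining.
    import random

    random.seed(1)
    element = "GATTACAGTT" + "N" * 20 + "ACCATGGTTA"   # hypothetical deletion element

    for trial in range(3):
        left_stagger = random.randint(0, 3)            # bases lost from the left end
        right_stagger = random.randint(0, 3)           # bases lost from the right end
        right_end = element[len(element) - (10 - right_stagger):]
        left_end = element[: 10 - left_stagger]
        print(f"circle junction {trial + 1}: ...{right_end}|{left_end}...")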

9.
10.
Rhythmic sensory or electrical stimulation will produce rhythmic brain responses. These rhythmic responses are often interpreted as endogenous neural oscillations aligned (or “entrained”) to the stimulus rhythm. However, stimulus-aligned brain responses can also be explained as a sequence of evoked responses, which only appear regular due to the rhythmicity of the stimulus, without necessarily involving underlying neural oscillations. To distinguish evoked responses from true oscillatory activity, we tested whether rhythmic stimulation produces oscillatory responses which continue after the end of the stimulus. Such sustained effects provide evidence for true involvement of neural oscillations. In Experiment 1, we found that rhythmic intelligible, but not unintelligible speech produces oscillatory responses in magnetoencephalography (MEG) which outlast the stimulus at parietal sensors. In Experiment 2, we found that transcranial alternating current stimulation (tACS) leads to rhythmic fluctuations in speech perception outcomes after the end of electrical stimulation. We further report that the phase relation between electroencephalography (EEG) responses and rhythmic intelligible speech can predict the tACS phase that leads to most accurate speech perception. Together, we provide fundamental results for several lines of research—including neural entrainment and tACS—and reveal endogenous neural oscillations as a key underlying principle for speech perception.

Just as a child on a swing continues to move after the pushing stops, this study reveals similar entrained rhythmic echoes in brain activity after hearing speech and electrical brain stimulation; perturbation with tACS shows that these brain oscillations help listeners to understand speech.
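The phase relation between a band-limited neural signal and a rhythmic stimulus is typically quantified with an analytic-signal (Hilbert) phase estimate. A minimal sketch with synthetic data and an assumed sampling rate; this is not the authors' MEG/EEG pipeline, only the kind of phase estimate such analyses rest on:

    # Band-pass an "EEG" trace around the stimulus rate, take instantaneous
    # phase via the Hilbert transform, and summarise the EEG-stimulus phase lag.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 500.0                                   # assumed sampling rate (Hz)
    t = np.arange(0, 10, 1 / fs)
    stim = np.sin(2 * np.pi * 3.0 * t)           # 3 Hz rhythmic stimulus (illustrative rate)
    eeg = 0.5 * np.sin(2 * np.pi * 3.0 * t - 1.0) + 0.3 * np.random.randn(t.size)

    b, a = butter(3, [2.0 / (fs / 2), 4.0 / (fs / 2)], btype="band")
    eeg_band = filtfilt(b, a, eeg)               # isolate the stimulus-rate band

    phase_diff = np.angle(hilbert(eeg_band)) - np.angle(hilbert(stim))
    mean_lag = np.angle(np.mean(np.exp(1j * phase_diff)))   # circular mean phase lag
    print(f"mean EEG-stimulus phase lag: {mean_lag:.2f} rad")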

11.
When acquiring language, young children may use acoustic spectro-temporal patterns in speech to derive phonological units in spoken language (e.g., prosodic stress patterns, syllables, phonemes). Children appear to learn acoustic-phonological mappings rapidly, without direct instruction, yet the underlying developmental mechanisms remain unclear. Across different languages, a relationship between amplitude envelope sensitivity and phonological development has been found, suggesting that children may make use of amplitude modulation (AM) patterns within the envelope to develop a phonological system. Here we present the Spectral Amplitude Modulation Phase Hierarchy (S-AMPH) model, a set of algorithms for deriving the dominant AM patterns in child-directed speech (CDS). Using Principal Components Analysis, we show that rhythmic CDS contains an AM hierarchy comprising 3 core modulation timescales. These timescales correspond to key phonological units: prosodic stress (Stress AM, ~2 Hz), syllables (Syllable AM, ~5 Hz) and onset-rime units (Phoneme AM, ~20 Hz). We argue that these AM patterns could in principle be used by naïve listeners to compute acoustic-phonological mappings without lexical knowledge. We then demonstrate that the modulation statistics within this AM hierarchy indeed parse the speech signal into a primitive hierarchically-organised phonological system comprising stress feet (proto-words), syllables and onset-rime units. We apply the S-AMPH model to two other CDS corpora, one spontaneous and one deliberately-timed. The model accurately identified 72–82% (freely-read CDS) and 90–98% (rhythmically-regular CDS) stress patterns, syllables and onset-rime units. This in-principle demonstration that primitive phonology can be extracted from speech AMs is termed Acoustic-Emergent Phonology (AEP) theory. AEP theory provides a set of methods for examining how early phonological development is shaped by the temporal modulation structure of speech across languages. The S-AMPH model reveals a crucial developmental role for stress feet (AMs ~2 Hz). Stress feet underpin different linguistic rhythm typologies, and speech rhythm underpins language acquisition by infants in all languages.
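The core operation, extracting the amplitude envelope and splitting it into stress-, syllable- and phoneme-rate modulation bands, can be sketched as below. The band edges, sampling rates and the synthetic "speech" are assumptions chosen for illustration; the published S-AMPH model derives its bands from Principal Components Analysis rather than fixed cutoffs.

    # Broadband amplitude envelope, downsampled, then band-passed into three
    # amplitude-modulation (AM) timescales (~2 Hz, ~5 Hz, ~20 Hz).
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert, resample_poly

    fs = 16000
    t = np.arange(0, 5.0, 1 / fs)
    speech = np.random.randn(t.size) * (1.0 + 0.5 * np.sin(2 * np.pi * 5.0 * t))  # toy signal, 5 Hz AM

    envelope = np.abs(hilbert(speech))                    # broadband amplitude envelope
    fs_env = 100
    envelope = resample_poly(envelope, 1, fs // fs_env)   # envelope downsampled to 100 Hz

    def am_band(env, lo, hi, fs):
        """Band-pass the envelope to isolate one AM timescale."""
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, env)

    bands = {"stress (~2 Hz)": (0.9, 2.5), "syllable (~5 Hz)": (2.5, 12.0), "phoneme (~20 Hz)": (12.0, 40.0)}
    for name, (lo, hi) in bands.items():
        am = am_band(envelope, lo, hi, fs_env)
        print(f"{name:18s} AM power: {np.mean(am ** 2):.4f}")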

12.
Disordered speech can present with rhythmic problems, impacting on an individual's ability to communicate. Effective treatment relies on the availability of sensitive methods to characterize the problem. Rhythm metrics based on segmental durations originally designed for cross-linguistic research have the potential to provide such information. However, these measures may be associated with problems that impact on their clinical usefulness. This paper aims to address the perceptual validity of cross-linguistic metrics as indicators of rhythmic disorder. Speakers with dysarthria and matched healthy participants performed a range of tasks, including syllable and sentence repetition and a spontaneous monologue. A range of rhythm metrics as well as clinical measures were applied. Results showed that none of the metrics could differentiate disordered from healthy speakers, despite clear perceptual differences, suggesting that factors beyond segment duration impacted on rhythm perception. The investigation also highlighted a number of areas where caution needs to be exercised in the application of rhythm metrics to disordered speech. The paper concludes that the underlying speech impairment leading to the perceptual and acoustic characterization of rhythmic problems needs to be established through detailed analysis of speech characteristics in order to construct effective treatment plans for individuals with speech disorders.
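The duration-based metrics alluded to here include measures such as the normalised Pairwise Variability Index (nPVI) and the proportion of vocalic time (%V). The sketch below implements these two as representative examples; the toy durations are invented, and the paper's exact metric set is not reproduced here.

    # Two standard duration-based rhythm metrics from the cross-linguistic literature.
    import numpy as np

    def npvi(durations):
        """nPVI = 100 * mean of |d_k - d_{k+1}| / ((d_k + d_{k+1}) / 2) over successive segments."""
        d = np.asarray(durations, dtype=float)
        pairs = np.abs(np.diff(d)) / ((d[:-1] + d[1:]) / 2.0)
        return 100.0 * pairs.mean()

    def percent_v(vocalic, consonantal):
        """%V = vocalic time as a share of total segmental time."""
        v, c = float(np.sum(vocalic)), float(np.sum(consonantal))
        return 100.0 * v / (v + c)

    vowel_durations = [0.08, 0.15, 0.07, 0.18, 0.09]      # toy segment durations (s)
    consonant_durations = [0.06, 0.09, 0.05, 0.11, 0.07]
    print(f"nPVI-V = {npvi(vowel_durations):.1f}, %V = {percent_v(vowel_durations, consonant_durations):.1f}")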

13.
The perception of prosodic cues in human speech may be rooted in mechanisms common to mammals. The present study explores to what extent bats use rhythm and frequency, which typically carry prosodic information in human speech, for the classification of communication call series. Using a two-alternative forced-choice procedure, we trained Megaderma lyra to discriminate between synthetic contact call series differing in frequency, rhythm at the level of calls, and rhythm at the level of call series, and measured the classification performance for stimuli differing in only one, or two, of the above parameters. A comparison with predictions from models based on one, combinations of two, or all parameters revealed that the bats based their decision predominantly on frequency and, in addition, on rhythm at the level of call series, whereas rhythm at the level of calls was not taken into account in this paradigm. Moreover, frequency and rhythm at the level of call series were evaluated independently. Our results show that parameters corresponding to prosodic cues in human languages are perceived and evaluated by bats. Thus, these necessary prerequisites for communication via prosodic structures in mammals evolved long before human speech.

14.
15.
Splitting of locomotor activity rhythm in hamsters occurs when the animals are exposed for several weeks to constant light. The authors propose a mathematical model that explains splitting in terms of a switch in the sign of coupling of two oscillators, from positive to negative, due to long-term exposure to constant light. The model assumes that the two oscillators are not identical and that the negative coupling strengths achieved by each individual animal are variable. With these assumptions, the model provides a unified picture of all different splitting patterns presented by the hamsters, provides an explanation for why the two activity components cross each other during many patterns, and explains why the phase difference achieved by the split components is often near 180 degrees.
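The abstract does not give the model equations, but its central claim, that flipping the coupling sign drives the two components toward an antiphase (near 180 degrees) split, can be illustrated with two generic coupled phase oscillators; the frequencies and coupling strength below are assumptions.

    # Two mutually coupled phase oscillators: positive coupling locks them
    # nearly in phase, negative coupling locks them near antiphase (~180 deg).
    import numpy as np

    def settled_phase_gap(coupling, w1=1.00, w2=1.03, dt=0.01, steps=20000):
        """Integrate dphi_i/dt = w_i + K*sin(phi_j - phi_i); return final phase gap in degrees."""
        p1, p2 = 0.0, 0.5
        for _ in range(steps):
            p1 += dt * (w1 + coupling * np.sin(p2 - p1))
            p2 += dt * (w2 + coupling * np.sin(p1 - p2))
        return np.degrees((p2 - p1) % (2 * np.pi))

    print(f"positive coupling: phase gap ~ {settled_phase_gap(+0.2):.0f} deg")
    print(f"negative coupling: phase gap ~ {settled_phase_gap(-0.2):.0f} deg")

With the assumed detuning, the positive-coupling gap settles at a few degrees while the negative-coupling gap settles slightly past 180 degrees, matching the near-antiphase splitting the model is meant to capture.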

16.
Skliarov OP. Biofizika, 2005, 50(4): 735-742.
It was shown that the Feigenbaum scenario of period-doubling bifurcations leading to chaos, which explains the singularities of V-rhythm disorders in the neighbourhood of the critical point, is a good model of phonetic development in children's speech. It was also shown that the singularities of V-rhythm dynamics in the bifurcation lacuna within the chaotic zone, which are intrinsic to the Pomeau-Manneville scenario of period-3 bifurcations leading to chaos, can in principle describe some features of both recall from memory during speech production and storage in memory during speech perception.

17.
18.
Enright's theory, which explains entrainment as periodically repeated phase response, is applied to Wever's self-sustained oscillation model of the circadian rhythm and tested by computer simulation. Ranges of phase response and entrainment are compared, and the oscillatory behaviour is shown in the phase diagram for the cases of phase response and entrainment. It is shown that Enright's theory is not valid for self-sustained oscillations in general, but it need not necessarily fail in the case of the biological circadian rhythm.
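Enright's idea of entrainment as a periodically repeated phase response can be stated as a one-dimensional phase map: each Zeitgeber cycle advances the oscillator's phase by the Zeitgeber period plus a phase-response-curve correction, and entrainment corresponds to that map settling on a fixed point. The sketch below uses an invented sinusoidal phase response curve and an assumed 25 h intrinsic period, not Wever's model equations.

    # Entrainment as a repeated phase response: iterate the phase at each
    # Zeitgeber pulse and test whether it converges (entrained) or drifts.
    import numpy as np

    tau = 25.0                                   # intrinsic period (h), illustrative

    def prc(phi):
        """Toy phase response curve (hours of shift as a function of phase)."""
        return 1.5 * np.sin(2 * np.pi * phi / tau)

    def entrains(T, phi0=5.0, n=300, tol=1e-3):
        phi, history = phi0, []
        for _ in range(n):
            phi = (phi + T + prc(phi)) % tau     # phase at the next pulse
            history.append(phi)
        return np.ptp(history[-20:]) < tol       # settled phase => entrained

    for T in (23.0, 24.0, 26.0, 28.0):           # Zeitgeber periods (h)
        print(f"Zeitgeber period {T:.0f} h: entrained = {entrains(T)}")

With this toy phase response curve the range of entrainment is |tau - T| <= 1.5 h, so the 24 h and 26 h Zeitgebers entrain while the 23 h and 28 h ones do not.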

19.
20.
Rhythm is important in the production of motor sequences such as speech and song. Deficits in rhythm processing have been implicated in human disorders that affect speech and language processing, including stuttering, autism, and dyslexia. Songbirds provide a tractable model for studying the neural underpinnings of rhythm processing due to parallels with humans in neural structures and vocal learning patterns. In this study, adult zebra finches were exposed to naturally rhythmic conspecific song or arrhythmic song. Immunohistochemistry for the immediate early gene ZENK was used to detect neural activation in response to these two types of stimuli. ZENK was increased in response to arrhythmic song in the auditory association cortex homologs, caudomedial nidopallium (NCM) and caudomedial mesopallium (CMM), and the avian amygdala, nucleus taeniae (Tn). CMM also had greater ZENK labeling in females than males. The increased neural activity in NCM and CMM during perception of arrhythmic stimuli parallels increased activity in the human auditory cortex following exposure to unexpected, or perturbed, auditory stimuli. These auditory areas may be detecting errors in arrhythmic song when comparing it to a stored template of how conspecific song is expected to sound. CMM may also be important for females in evaluating songs of potential mates. In the context of other research in songbirds, we suggest that the increased activity in Tn may be related to the value of song for assessing mate choice and bonding or it may be related to perception of arrhythmic song as aversive.
