Similar Literature
20 similar documents found (search time: 15 ms)
1.
Brain-computer interfaces (BCIs) are systems that use real-time analysis of neuroimaging data to determine the mental state of their user for purposes such as providing neurofeedback. Here, we investigate the feasibility of a BCI based on speech perception. Multivariate pattern classification methods were applied to single-trial EEG data collected during speech perception by native and non-native speakers. Two principal questions were asked: 1) Can differences in the perceived categories of pairs of phonemes be decoded at the single-trial level? 2) Can these same categorical differences be decoded across participants, within or between native-language groups? Results indicated that classification performance progressively increased with respect to the categorical status (within, boundary or across) of the stimulus contrast, and was also influenced by the native language of individual participants. Classifier performance showed strong relationships with traditional event-related potential measures and behavioral responses. The results of the cross-participant analysis indicated an overall increase in average classifier performance when trained on data from all participants (native and non-native). A second cross-participant classifier trained only on data from native speakers led to an overall improvement in performance for native speakers, but a reduction in performance for non-native speakers. We also found that the native language of a given participant could be decoded on the basis of EEG data with accuracy above 80%. These results indicate that electrophysiological responses underlying speech perception can be decoded at the single-trial level, and that decoding performance systematically reflects graded changes in the responses related to the phonological status of the stimuli. This approach could be used in extensions of the BCI paradigm to support perceptual learning during second language acquisition.
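The single-trial decoding idea in this abstract can be illustrated with a minimal sketch (not the authors' pipeline): cross-validate a nearest-class-mean decoder on synthetic two-category "EEG" features. The feature dimensions, trial counts and effect size are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for single-trial EEG features: two phoneme categories,
# 80 trials each, 32 features (e.g. electrode-by-time-window amplitudes).
# The shift in class means plays the role of a decodable categorical contrast.
n_trials, n_feat = 80, 32
X = np.vstack([rng.normal(0.0, 1.0, (n_trials, n_feat)),
               rng.normal(0.6, 1.0, (n_trials, n_feat))])
y = np.repeat([0, 1], n_trials)

def nearest_mean_cv(X, y, n_folds=5):
    """Fold-wise accuracy of a nearest-class-mean decoder."""
    idx = rng.permutation(len(y))
    accs = []
    for test in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, test)
        mu0 = X[train][y[train] == 0].mean(axis=0)
        mu1 = X[train][y[train] == 1].mean(axis=0)
        # Assign each held-out trial to the nearer class mean.
        pred = (np.linalg.norm(X[test] - mu1, axis=1) <
                np.linalg.norm(X[test] - mu0, axis=1)).astype(int)
        accs.append(float((pred == y[test]).mean()))
    return float(np.mean(accs))

acc = nearest_mean_cv(X, y)
print(f"cross-validated decoding accuracy: {acc:.2f}")
```

Shrinking the assumed 0.6-unit mean shift toward zero drives accuracy back to the 0.5 chance level, mirroring the graded within/boundary/across effect the abstract reports.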

2.
A word like Huh?, used as a repair initiator when, for example, one has not clearly heard what someone just said, is found in roughly the same form and function in spoken languages across the globe. We investigate it in naturally occurring conversations in ten languages and present evidence and arguments for two distinct claims: that Huh? is universal, and that it is a word. In support of the first, we show that the similarities in form and function of this interjection across languages are much greater than expected by chance. In support of the second claim we show that it is a lexical, conventionalised form that has to be learnt, unlike grunts or emotional cries. We discuss possible reasons for the cross-linguistic similarity and propose an account in terms of convergent evolution. Huh? is a universal word not because it is innate but because it is shaped by selective pressures in an interactional environment that all languages share: that of other-initiated repair. Our proposal enhances evolutionary models of language change by suggesting that conversational infrastructure can drive the convergent cultural evolution of linguistic items.

3.
Do principles of language processing in the brain affect the way grammar evolves over time or is language change just a matter of socio-historical contingency? While the balance of evidence has been ambiguous and controversial, we identify here a neurophysiological constraint on the processing of language that has a systematic effect on the evolution of how noun phrases are marked by case (i.e. by such contrasts as between the English base form she and the object form her). In neurophysiological experiments across diverse languages we found that during processing, participants initially interpret the first base-form noun phrase they hear (e.g. she…) as an agent (which would fit a continuation like … greeted him), even when the sentence later requires the interpretation of a patient role (as in … was greeted). We show that this processing principle is also operative in Hindi, a language where initial base-form noun phrases most commonly denote patients because many agents receive a special case marker ("ergative") and are often left out in discourse. This finding suggests that the principle is species-wide and independent of the structural affordances of specific languages. As such, the principle favors the development and maintenance of case-marking systems that equate base-form cases with agents rather than with patients. We confirm this evolutionary bias by statistical analyses of phylogenetic signals in over 600 languages worldwide, controlling for confounding effects from language contact. Our findings suggest that at least one core property of grammar systematically adapts in its evolution to the neurophysiological conditions of the brain, independently of socio-historical factors. This opens up new avenues for understanding how specific properties of grammar have developed in tight interaction with the biological evolution of our species.

4.
Discrete phonological phenomena form our conscious experience of language: continuous changes in pitch appear as distinct tones to the speakers of tone languages, whereas the speakers of quantity languages experience duration categorically. The categorical nature of our linguistic experience is directly reflected in the traditionally clear-cut linguistic classification of languages into tonal or non-tonal. However, some evidence suggests that duration and pitch are fundamentally interconnected and co-vary in signaling word meaning in non-tonal languages as well. We show that pitch information affects real-time language processing in a (non-tonal) quantity language. The results suggest that there is no unidirectional causal link from a genetically-based perceptual sensitivity towards pitch information to the appearance of a tone language. They further suggest that the contrastive categories tone and quantity may be based on simultaneously co-varying properties of the speech signal and the processing system, even though the conscious experience of the speakers may highlight only one discrete variable at a time.

5.
This study investigated how speech recognition in noise is affected by language proficiency for individual non-native speakers. The recognition of English and Chinese sentences was measured as a function of the signal-to-noise ratio (SNR) in sixty native Chinese speakers who had never lived in an English-speaking environment. The recognition score for speech in quiet (which varied from 15% to 92%) was found to be uncorrelated with the speech recognition threshold (SRTQ/2), i.e. the SNR at which the recognition score drops to 50% of the recognition score in quiet. This result demonstrates separable contributions of language proficiency and auditory processing to speech recognition in noise.
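The SRTQ/2 measure defined above (the SNR at which the score falls to half the quiet score) can be estimated from a measured psychometric function. A minimal sketch, with invented data points for one hypothetical listener:

```python
import numpy as np

# Hypothetical psychometric data for one listener: recognition score (%)
# as a function of SNR (dB), plus that listener's quiet-condition score.
snr = np.array([-12, -9, -6, -3, 0, 3, 6], dtype=float)
score = np.array([5, 14, 30, 52, 66, 74, 78], dtype=float)
quiet_score = 80.0

# SRTQ/2: the SNR at which the score drops to half the quiet score,
# estimated here by linear interpolation between measured points
# (np.interp requires the score axis to be monotonically increasing).
target = quiet_score / 2
srt = float(np.interp(target, score, snr))
print(f"SRT(Q/2) = {srt:.1f} dB SNR")
```

With these invented points the target of 40% falls between the measurements at -6 and -3 dB, so the estimate lands in that interval; a real analysis would fit a sigmoid rather than interpolate.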

6.
In contrast with animal communication systems, diversity is characteristic of almost every aspect of human language. Languages variously employ tones, clicks, or manual signs to signal differences in meaning; some languages lack the noun-verb distinction (e.g., Straits Salish), whereas others have a proliferation of fine-grained syntactic categories (e.g., Tzeltal); and some languages do without morphology (e.g., Mandarin), while others pack a whole sentence into a single word (e.g., Cayuga). A challenge for evolutionary biology is to reconcile the diversity of languages with the high degree of biological uniformity of their speakers. Here, we model processes of language change and geographical dispersion and find a consistent pressure for flexible learning, irrespective of the language being spoken. This pressure arises because flexible learners can best cope with the observed high rates of linguistic change associated with divergent cultural evolution following human migration. Thus, rather than genetic adaptations for specific aspects of language, such as recursion, the coevolution of genes and fast-changing linguistic structure provides the biological basis for linguistic diversity. Only biological adaptations for flexible learning combined with cultural evolution can explain how each child has the potential to learn any human language.

7.
The necessary but not sufficient conditions for biological informational concepts like signs, symbols, memories, instructions, and messages are (1) an object or referent that the information is about, (2) a physical embodiment or vehicle that stands for what the information is about (the object), and (3) an interpreter or agent that separates the referent information from the vehicle’s material structure, and that establishes the stands-for relation. This separation is named the epistemic cut, and explaining clearly how the stands-for relation is realized is named the symbol-matter problem. (4) A necessary physical condition is that all informational vehicles are material boundary conditions or constraints acting on the lawful dynamics of local systems. It is useful to define a dependency hierarchy of information types: (1) syntactic information (i.e., communication theory), (2) heritable information acquired by variation and natural selection, (3) non-heritable learned or creative information, and (4) measured physical information in the context of natural laws. High information storage capacity is most reliably implemented by discrete linear sequences of non-dynamic vehicles, while the execution of information for control and construction is a non-holonomic dynamic process. The first epistemic cut occurs in self-replication. The first interpretation of base sequence information is by protein folding; the last interpretation of base sequence information is by natural selection. Evolution has evolved senses and nervous systems that acquire non-heritable information, and only very recently after billions of years, the competence for human language. Genetic and human languages are the only known complete general purpose languages. They have fundamental properties in common, but are entirely different in their acquisition, storage and interpretation.

8.

Background

Time-compressed speech, a form of rapidly presented speech, is harder to comprehend than natural speech, especially for non-native speakers. Although it is possible to adapt to time-compressed speech after a brief exposure, it is not known whether additional perceptual learning occurs with further practice. Here, we ask whether multiday training on time-compressed speech yields more learning than that observed during the initial adaptation phase and whether the pattern of generalization following successful learning is different from that observed with initial adaptation only.

Methodology/Principal Findings

Two groups of non-native Hebrew speakers were tested on five different conditions of time-compressed speech identification in two assessments conducted 10–14 days apart. Between those assessments, one group of listeners received five practice sessions on one of the time-compressed conditions. Between the two assessments, trained listeners improved significantly more than untrained listeners on the trained condition. Furthermore, the trained group generalized its learning to two untrained conditions in which different talkers presented the trained speech materials. In addition, when the performance of the non-native speakers was compared to that of a group of naïve native Hebrew speakers, performance of the trained group was equivalent to that of the native speakers on all conditions on which learning occurred, whereas performance of the untrained non-native listeners was substantially poorer.

Conclusions/Significance

Multiday training on time-compressed speech results in significantly more perceptual learning than brief adaptation. Compared to previous studies of adaptation, the training-induced learning is more stimulus-specific. Taken together, the perceptual learning of time-compressed speech appears to progress from an initial, rapid adaptation phase to a subsequent, prolonged and more stimulus-specific phase. These findings are consistent with the predictions of the Reverse Hierarchy Theory of perceptual learning and suggest constraints on the use of perceptual-learning regimens during second language acquisition.

9.
We present data from 17 languages on the frequency with which a common set of words is used in everyday language. The languages are drawn from six language families representing 65 per cent of the world's 7000 languages. Our data were collected from linguistic corpora that record frequencies of use for the 200 meanings in the widely used Swadesh fundamental vocabulary. Our interest is to assess evidence for shared patterns of language use around the world, and for the relationship of language use to rates of lexical replacement, defined as the replacement of a word by a new unrelated or non-cognate word. Frequencies of use for words in the Swadesh list range from just a few per million words of speech to 191 000 or more. The average inter-correlation among languages in the frequency of use across the 200 words is 0.73 (p < 0.0001). The first principal component of these data accounts for 70 per cent of the variance in frequency of use. Elsewhere, we have shown that frequently used words in the Indo-European languages tend to be more conserved, and that this relationship holds separately for different parts of speech. A regression model combining the principal factor loadings derived from the worldwide sample along with their part of speech predicts 46 per cent of the variance in the rates of lexical replacement in the Indo-European languages. This suggests that Indo-European lexical replacement rates might be broadly representative of worldwide rates of change. Evidence for this speculation comes from using the same factor loadings and part-of-speech categories to predict a word's position in a list of 110 words ranked from slowest to most rapidly evolving among 14 of the world's language families. This regression model accounts for 30 per cent of the variance. 
Our results point to a remarkable regularity in the way that human speakers use language, and hint that some of the words for a shared set of meanings have been evolving slowly and others more rapidly throughout human history.
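The kind of regression described above, predicting lexical replacement rates from frequency of use, can be sketched with ordinary least squares. The word frequencies, the negative slope and the noise level below are invented stand-ins for the reported frequency-conservation relationship, not the paper's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Swadesh-style data: per-word log10 frequency of use
# (uses per million words) and a lexical replacement rate. The assumed
# negative slope mirrors the finding that frequent words are conserved.
n_words = 200
log_freq = rng.uniform(0.5, 5.3, n_words)
repl_rate = 2.0 - 0.3 * log_freq + rng.normal(0, 0.3, n_words)

# Ordinary least squares: rate ~ intercept + slope * log_freq
A = np.column_stack([np.ones(n_words), log_freq])
coef, *_ = np.linalg.lstsq(A, repl_rate, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((repl_rate - pred) ** 2) / np.sum((repl_rate - repl_rate.mean()) ** 2)
print(f"slope = {coef[1]:.2f}, R^2 = {r2:.2f}")
```

A recovered negative slope with a sizable R^2 is the signature pattern; the paper's model additionally includes part-of-speech categories and worldwide principal-factor loadings as predictors.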

10.
It is well known that natural languages share certain aspects of their design. For example, across languages, syllables like blif are preferred to lbif. But whether language universals are myths or mentally active constraints—linguistic or otherwise—remains controversial. To address this question, we used fMRI to investigate brain response to four syllable types, arrayed on their linguistic well-formedness (e.g., blif≻bnif≻bdif≻lbif, where ≻ indicates preference). Results showed that syllable structure monotonically modulated hemodynamic response in Broca's area, and its pattern mirrored participants' behavioral preferences. In contrast, ill-formed syllables did not systematically tax sensorimotor regions—while such syllables engaged primary auditory cortex, they tended to deactivate (rather than engage) articulatory motor regions. The convergence between the cross-linguistic preferences and English participants' hemodynamic and behavioral responses is remarkable given that most of these syllables are unattested in their language. We conclude that human brains encode broad restrictions on syllable structure.

11.
Across spoken languages, properties of wordforms (e.g. the sounds in the word hammer) do not generally evoke mental images associated with meanings. However, across signed languages, many signforms readily evoke mental images (e.g. the sign HAMMER resembles the motion involved in hammering). Here we assess the relationship between language and imagery, comparing the performance of English speakers and British sign language (BSL) signers in meaning similarity judgement tasks. In experiment 1, we found that BSL signers used these imagistic properties in making meaning similarity judgements, in contrast with English speakers. In experiment 2, we found that English speakers behaved more like BSL signers when asked to develop mental images for the words before performing the same task. These findings show that language differences can bias users to attend more to those aspects of the world encoded in their language than to those that are not; and that language modality (spoken versus signed) can affect the degree to which imagery is involved in language.

12.
13.
Beat gestures—spontaneously produced biphasic movements of the hand—are among the most frequently encountered co-speech gestures in human communication. They are closely temporally aligned to the prosodic characteristics of the speech signal, typically occurring on lexically stressed syllables. Despite their prevalence across speakers of the world's languages, how beat gestures impact spoken word recognition is unclear. Can these simple ‘flicks of the hand’ influence speech perception? Across a range of experiments, we demonstrate that beat gestures influence the explicit and implicit perception of lexical stress (e.g. distinguishing OBject from obJECT), and in turn can influence what vowels listeners hear. Thus, we provide converging evidence for a manual McGurk effect: relatively simple and widely occurring hand movements influence which speech sounds we hear.

14.
There would be little adaptive value in a complex communication system like human language if there were no ways to detect and correct problems. A systematic comparison of conversation in a broad sample of the world’s languages reveals a universal system for the real-time resolution of frequent breakdowns in communication. In a sample of 12 languages of 8 language families of varied typological profiles we find a system of ‘other-initiated repair’, where the recipient of an unclear message can signal trouble and the sender can repair the original message. We find that this system is frequently used (on average about once per 1.4 minutes in any language), and that it has detailed common properties, contrary to assumptions of radical cultural variation. Unrelated languages share the same three functionally distinct types of repair initiator for signalling problems and use them in the same kinds of contexts. People prefer to choose the type that is the most specific possible, a principle that minimizes cost both for the sender being asked to fix the problem and for the dyad as a social unit. Disruption to the conversation is kept to a minimum, with the two-utterance repair sequence being on average no longer than the single utterance which is being fixed. The findings, controlled for historical relationships, situation types and other dependencies, reveal the fundamentally cooperative nature of human communication and offer support for the pragmatic universals hypothesis: while languages may vary in the organization of grammar and meaning, key systems of language use may be largely similar across cultural groups. They also provide a fresh perspective on controversies about the core properties of language, by revealing a common infrastructure for social interaction which may be the universal bedrock upon which linguistic diversity rests.

15.
Lexical reconstructions of Proto-Polynesian (Branstetter 1977), Proto-Miwok (Callaghan 1979), and Proto-Mayan (Fox 1977; Kaufman 1964, 1969) basic color term vocabularies are presented and compared. Although these reconstructed systems accord with the well-known Berlin and Kay (1969) color encoding sequence, they all contain a subtle distortion: they generally are as large as color term systems found in contemporary daughter languages. Considering the additive nature of the color encoding sequence, these reconstructions are unlikely. Comparative linguistic evidence is presented suggesting that all three systems are over-reconstructed. Encoding sequences, then, prove to be instruments of considerable usefulness to add to the traditional tool kit of historical linguistics. [cognitive anthropology, color classification, language change, language universals, Mayan languages, Miwok languages, Polynesian languages]

16.
Bilingual dictionaries for technical terms such as biomedical terms are an important resource for machine translation systems as well as for humans who would like to understand a concept described in a foreign language. Often a biomedical term is first proposed in English and later it is manually translated to other languages. Despite the fact that there are large monolingual lexicons of biomedical terms, only a fraction of those term lexicons are translated to other languages. Manually compiling large-scale bilingual dictionaries for technical domains is a challenging task because it is difficult to find a sufficiently large number of bilingual experts. We propose a cross-lingual similarity measure for detecting most similar translation candidates for a biomedical term specified in one language (source) from another language (target). Specifically, a biomedical term in a language is represented using two types of features: (a) intrinsic features that consist of character n-grams extracted from the term under consideration, and (b) extrinsic features that consist of unigrams and bigrams extracted from the contextual windows surrounding the term under consideration. We propose a cross-lingual similarity measure using each of those feature types. First, to reduce the dimensionality of the feature space in each language, we propose prototype vector projection (PVP)—a non-negative lower-dimensional vector projection method. Second, we propose a method to learn a mapping between the feature spaces in the source and target language using partial least squares regression (PLSR). The proposed method requires only a small number of training instances to learn a cross-lingual similarity measure. The proposed PVP method outperforms popular dimensionality reduction methods such as the singular value decomposition (SVD) and non-negative matrix factorization (NMF) in a nearest neighbor prediction task. 
Moreover, our experimental results covering several language pairs such as English–French, English–Spanish, English–Greek, and English–Japanese show that the proposed method outperforms several other feature projection methods in biomedical term translation prediction tasks.
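A minimal sketch of the cross-lingual retrieval idea above: represent terms by character n-grams (the intrinsic features), learn a linear map between source and target feature spaces, and rank target candidates by cosine similarity. Ordinary least squares stands in for the paper's PLSR mapping, the contextual (extrinsic) features are omitted, and the tiny English–French glossary is hypothetical.

```python
import numpy as np

def char_ngrams(term, n=3):
    """Intrinsic features of a term: padded character n-grams."""
    s = f"#{term.lower()}#"
    return [s[i:i + n] for i in range(len(s) - n + 1)]

def vectorize(terms):
    """Bag-of-n-grams count matrix plus the n-gram-to-column index."""
    vocab = sorted({g for t in terms for g in char_ngrams(t)})
    index = {g: i for i, g in enumerate(vocab)}
    M = np.zeros((len(terms), len(vocab)))
    for r, t in enumerate(terms):
        for g in char_ngrams(t):
            M[r, index[g]] += 1
    return M, index

# Tiny, hypothetical English-French biomedical glossary used for training.
pairs = [("artery", "artère"), ("anemia", "anémie"), ("hepatitis", "hépatite"),
         ("gastric", "gastrique"), ("cardiac", "cardiaque"), ("tumor", "tumeur")]
src_terms = [e for e, _ in pairs]
tgt_terms = [f for _, f in pairs]
S, src_index = vectorize(src_terms)
T, _ = vectorize(tgt_terms)

# Learn a linear map between the two feature spaces (least squares here,
# standing in for the partial least squares regression used in the paper).
W, *_ = np.linalg.lstsq(S, T, rcond=None)

def translate(term):
    """Return the target-side term most similar to the mapped source term."""
    q = np.zeros(S.shape[1])
    for g in char_ngrams(term):
        if g in src_index:          # unseen n-grams are simply ignored
            q[src_index[g]] += 1
    mapped = q @ W
    sims = (T @ mapped) / (np.linalg.norm(T, axis=1) *
                           np.linalg.norm(mapped) + 1e-12)
    return tgt_terms[int(np.argmax(sims))]

print(translate("cardiac"))
```

On held-out terms the quality of this toy map degrades quickly; the paper's point is that PVP dimensionality reduction plus PLSR makes such a mapping learnable from only a small number of training pairs.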

17.
The complexity of different components of the grammars of human languages can be quantified. For example, languages vary greatly in the size of their phonological inventories, and in the degree to which they make use of inflectional morphology. Recent studies have shown that there are relationships between these types of grammatical complexity and the number of speakers a language has. Languages spoken by large populations have been found to have larger phonological inventories, but simpler morphology, than languages spoken by small populations. The results require further investigation, and, most importantly, the mechanism whereby the social context of learning and use affects the grammatical evolution of a language needs elucidation.

18.
Iconicity, a resemblance between properties of linguistic form (both in spoken and signed languages) and meaning, has traditionally been considered to be a marginal, irrelevant phenomenon for our understanding of language processing, development and evolution. Rather, the arbitrary and symbolic nature of language has long been taken as a design feature of the human linguistic system. In this paper, we propose an alternative framework in which iconicity in face-to-face communication (spoken and signed) is a powerful vehicle for bridging between language and human sensori-motor experience, and, as such, iconicity provides a key to understanding language evolution, development and processing. In language evolution, iconicity might have played a key role in establishing displacement (the ability of language to refer beyond what is immediately present), which is core to what language does; in ontogenesis, iconicity might play a critical role in supporting referentiality (learning to map linguistic labels to objects, events, etc., in the world), which is core to vocabulary development. Finally, in language processing, iconicity could provide a mechanism to account for how language comes to be embodied (grounded in our sensory and motor systems), which is core to meaningful communication.

19.
Human language is a complex communication system with unlimited expressibility. Children spontaneously develop a native language by exposure to linguistic data from their speech community. Over historical time, languages change dramatically and unpredictably by accumulation of small changes and by interaction with other languages. We have previously developed a mathematical model for the acquisition and evolution of language in heterogeneous populations of speakers. This model is based on game dynamical equations with learning. Here, we show that simple examples of such equations can display complex limit cycles and chaos. Hence, language dynamical equations mimic complicated and unpredictable changes of languages over time. In terms of evolutionary game theory, we note that imperfect learning can induce chaotic switching among strict Nash equilibria.
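Game dynamical equations with learning of the kind this abstract refers to can be sketched with a small forward-Euler simulation of a Nowak-style language dynamics model. The payoff matrix, the learning-fidelity parameter q and the step size are all assumed for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Language dynamical equation (illustrative parameters):
#   dx_i/dt = sum_j x_j * f_j * Q_ji - phi * x_i,   f_i = sum_j F_ij * x_j,
# where x_i is the fraction of the population speaking grammar i, F_ij the
# mutual intelligibility (payoff) between grammars i and j, Q_ji the chance
# that a learner exposed to grammar j acquires grammar i, and
# phi = sum_i f_i * x_i the mean fitness, which keeps x on the simplex.
n = 3
F = np.full((n, n), 0.5) + 0.5 * np.eye(n)          # payoff: 1 on, 0.5 off diagonal
q = 0.8                                             # assumed learning fidelity
Q = np.full((n, n), (1 - q) / (n - 1)) + (q - (1 - q) / (n - 1)) * np.eye(n)

x = rng.dirichlet(np.ones(n))                       # random initial frequencies
dt = 0.01
for _ in range(20000):
    f = F @ x                                       # fitness of each grammar
    phi = f @ x                                     # mean fitness
    dx = (x * f) @ Q - phi * x
    x = x + dt * dx

print("grammar frequencies:", np.round(x, 3), "sum =", round(float(x.sum()), 6))
```

Because each row of Q sums to one, the frequencies stay on the simplex at every step. Varying q in this model family changes whether one grammar comes to dominate or the population stays mixed; the paper's point is that richer payoff and learning matrices can additionally produce limit cycles and chaotic trajectories.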

20.

Background

Languages differ in the marking of the sentence mood of a polar interrogative (yes/no question). For instance, the interrogative mood is marked at the beginning of the surface structure in Polish, whereas the marker appears at the end in Chinese. In order to generate the corresponding sentence frame, the syntactic specification of the interrogative mood is early in Polish and late in Chinese. In this respect, German belongs to an interesting intermediate class. The yes/no question is expressed by a shift of the finite verb from its final position in the underlying structure into the utterance-initial position, a move that thus affects both the sentence-final and the sentence-initial constituents. The present study aimed to investigate whether during generation of the semantic structure of a polar interrogative, i.e., the processing preceding the grammatical formulation, the interrogative mood is encoded according to its position in the syntactic structure at distinctive time points in Chinese, German, and Polish.

Methodology/Principal Findings

In a two-choice go/nogo experimental design, native speakers of the three languages responded to pictures by pressing buttons and producing utterances in their native language while their brain potentials were recorded. The emergence and latency of lateralized readiness potentials (LRP) in nogo conditions, in which speakers asked a yes/no question, should indicate the time point of processing the interrogative mood. The results revealed that Chinese, German, and Polish native speakers did not differ from each other in the electrophysiological indicator.

Conclusions/Significance

The findings suggest that the semantic encoding of the interrogative mood is temporally consistent across languages despite its disparate syntactic specification. The consistent encoding may be ascribed to economic processing of interrogative moods at various sentential positions of the syntactic structures in languages or, more generally, to the overarching status of sentence mood in the semantic structure.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)