Similar Articles
20 similar articles retrieved.
1.

Background

Humans can easily restore a speech signal that is temporally masked by an interfering sound (e.g., a cough masking parts of a word in a conversation), and listeners have the illusion that the speech continues through the interfering sound. This perceptual restoration of human speech is affected by prior experience. Here we provide evidence for perceptual restoration of the complex vocalizations of a songbird, which are acquired by vocal learning much as humans learn their language.

Methodology/Principal Findings

European starlings were trained in a same/different paradigm to report salient differences between successive sounds. The birds' response latency for discriminating between a stimulus pair is an indicator of the salience of the difference, and these latencies can be used to evaluate perceptual distances using multidimensional scaling. For familiar motifs, the birds showed a large perceptual distance when discriminating between song motifs that were muted for brief periods and complete motifs. If the muted periods were filled with noise, the perceptual distance was reduced. For unfamiliar motifs, no such difference was observed.
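As an aside, the multidimensional scaling step can be illustrated with a minimal sketch: given a matrix of pairwise perceptual dissimilarities (which, in the study, would be derived from the birds' response latencies), a low-dimensional perceptual map can be recovered. The matrix values and the use of scikit-learn here are illustrative assumptions, not the authors' actual analysis.

```python
# Minimal sketch: embed stimuli in a 2-D perceptual space from a pairwise
# dissimilarity matrix. The matrix below is made up for illustration; in the
# study, dissimilarities would be derived from the birds' response latencies.
import numpy as np
from sklearn.manifold import MDS

dissimilarity = np.array([
    [0.0, 1.2, 2.5, 2.8],
    [1.2, 0.0, 2.3, 2.6],
    [2.5, 2.3, 0.0, 0.9],
    [2.8, 2.6, 0.9, 0.0],
])  # hypothetical, symmetric, zero diagonal

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)  # one 2-D point per stimulus
print(coords)
```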

Conclusions/Significance

The results suggest that starlings are able to perceptually restore partly masked sounds and, like humans, rely on prior experience. They may be a suitable model for studying the mechanisms underlying experience-dependent perceptual restoration.

2.

Background

Zipf's discovery that word frequency distributions obey a power law established parallels between biological and physical processes, and language, laying the groundwork for a complex systems perspective on human communication. More recent research has also identified scaling regularities in the dynamics underlying the successive occurrences of events, suggesting the possibility of similar findings for language as well.

Methodology/Principal Findings

By considering frequent words in USENET discussion groups and in disparate databases where the language has different levels of formality, here we show that the distributions of distances between successive occurrences of the same word display bursty deviations from a Poisson process and are well characterized by a stretched exponential (Weibull) scaling. The extent of this deviation depends strongly on semantic type – a measure of the logicality of each word – and less strongly on frequency. We develop a generative model of this behavior that fully determines the dynamics of word usage.
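A stretched-exponential (Weibull) description of recurrence times can be sketched as follows; the token list is a toy example and the fitting choices (a two-parameter Weibull via SciPy) are assumptions rather than the authors' exact procedure.

```python
# Minimal sketch: distances between successive occurrences of a word, fitted with a
# stretched-exponential (Weibull) model. A shape parameter c < 1 indicates bursty,
# over-dispersed recurrence relative to a Poisson (exponential, c = 1) process.
import numpy as np
from scipy.stats import weibull_min

tokens = "the cat sat on the mat and the dog lay by the door near the cat".split()
positions = np.array([i for i, t in enumerate(tokens) if t == "the"])
gaps = np.diff(positions)                        # recurrence distances for "the"

c, loc, scale = weibull_min.fit(gaps, floc=0)    # location fixed at zero
print(f"shape c = {c:.2f}, scale = {scale:.2f}")
```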

Conclusions/Significance

Recurrence patterns of words are well described by a stretched exponential distribution of recurrence times, an empirical scaling that cannot be anticipated from Zipf's law. Because the use of words provides a uniquely precise and powerful lens on human thought and activity, our findings also have implications for other overt manifestations of collective human dynamics.

3.

Background

It is well established that the left inferior frontal gyrus plays a key role in the cerebral cortical network that supports reading and visual word recognition. Less clear is when in time this contribution begins. We used magnetoencephalography (MEG), which has both good spatial and excellent temporal resolution, to address this question.

Methodology/Principal Findings

MEG data were recorded during a passive viewing paradigm, chosen to emphasize the stimulus-driven component of the cortical response, in which right-handed participants were presented with words, consonant strings, and unfamiliar faces at central vision. Time-frequency analyses showed a left-lateralized inferior frontal gyrus (pars opercularis) response to words between 100–250 ms in the beta frequency band that was significantly stronger than the response to consonant strings or faces. The left inferior frontal gyrus response to words peaked at ∼130 ms. This response was significantly later than that of the left middle occipital gyrus, which peaked at ∼115 ms, but not significantly different from the peak response of the left mid-fusiform gyrus, which peaked at ∼140 ms at a location coincident with the fMRI-defined visual word form area (VWFA). Significant responses to words were also detected in other parts of the reading network, including the anterior middle temporal gyrus, the left posterior middle temporal gyrus, the angular and supramarginal gyri, and the left superior temporal gyrus.
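For readers unfamiliar with this kind of analysis, a generic beta-band time-frequency computation in MNE-Python might look like the sketch below; the simulated data, channel layout, and parameter choices are assumptions and do not reproduce the authors' MEG pipeline.

```python
# Minimal sketch: beta-band (13-30 Hz) time-frequency power around 100-250 ms,
# computed with MNE-Python on simulated data standing in for word-evoked epochs.
import numpy as np
import mne
from mne.time_frequency import tfr_morlet

sfreq = 600.0                                              # assumed sampling rate
rng = np.random.default_rng(0)
data = rng.standard_normal((20, 5, int(sfreq)))            # 20 trials, 5 channels, 1 s
info = mne.create_info([f"MEG{i:03d}" for i in range(5)], sfreq, ch_types="mag")
epochs = mne.EpochsArray(data, info, tmin=-0.2)

freqs = np.arange(13.0, 31.0)                              # beta band, 1-Hz steps
power = tfr_morlet(epochs, freqs=freqs, n_cycles=freqs / 2.0,
                   return_itc=False, average=True)
beta_100_250 = power.copy().crop(tmin=0.10, tmax=0.25).data.mean()
print(beta_100_250)
```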

Conclusions/Significance

These findings suggest very early interactions between the vision and language domains during visual word recognition, with speech motor areas being activated at the same time as the orthographic word-form is being resolved within the fusiform gyrus. This challenges the conventional view of a temporally serial processing sequence for visual word recognition in which letter forms are initially decoded, interact with their phonological and semantic representations, and only then gain access to a speech code.

4.

Background

Word frequency is the most important variable in language research. However, despite the growing interest in the Chinese language, only a few sources of word frequency measures are available to researchers, and their quality is lower than what researchers working on other languages are accustomed to.

Methodology

Following recent work by New, Brysbaert, and colleagues in English, French and Dutch, we assembled a database of word and character frequencies based on a corpus of film and television subtitles (46.8 million characters, 33.5 million words). In line with what has been found in the other languages, the new word and character frequencies explain significantly more of the variance in Chinese word naming and lexical decision performance than measures based on written texts.
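As a rough illustration of how subtitle-based frequency and contextual diversity measures can be computed, consider the sketch below; the file names and the use of the jieba segmenter are assumptions for illustration, not the authors' pipeline.

```python
# Minimal sketch: word frequency and contextual diversity (number of films in which
# a word appears) from a set of subtitle files. File names are hypothetical.
from collections import Counter
import jieba  # a common Chinese word segmenter; its use here is an assumption

subtitle_files = ["film1.txt", "film2.txt"]   # hypothetical subtitle corpus

word_freq = Counter()
contextual_diversity = Counter()

for path in subtitle_files:
    with open(path, encoding="utf-8") as f:
        words = [w for w in jieba.lcut(f.read()) if w.strip()]
    word_freq.update(words)
    contextual_diversity.update(set(words))   # each word counted once per film

for word, freq in word_freq.most_common(10):
    print(word, freq, contextual_diversity[word])
```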

Conclusions

Our results confirm that word frequencies based on subtitles are a good estimate of daily language exposure and capture much of the variance in word processing efficiency. In addition, our database is the first to include information about the contextual diversity of the words and to provide good frequency estimates for multi-character words and the different syntactic roles in which the words are used. The word frequencies are freely available for research purposes.

5.

Background

The well-established left hemisphere specialisation for language processing has long been claimed to be based on a low-level auditory specialisation for specific acoustic features in speech, particularly regarding ‘rapid temporal processing’.

Methodology

A novel analysis/synthesis technique was used to construct a variety of sounds, based on simple sentences, that could be manipulated in spectro-temporal complexity and in whether or not they were intelligible. All sounds consisted of two noise-excited spectral prominences (based on the lower two formants in the original speech) that could be static or could vary in frequency and/or amplitude independently. Dynamically varying both acoustic features based on the same sentence led to intelligible speech, but when either or both acoustic features were static, the stimuli were not intelligible. Using the frequency dynamics from one sentence with the amplitude dynamics of another led to unintelligible sounds of comparable spectro-temporal complexity to the intelligible ones. Positron emission tomography (PET) was used to compare which brain regions were active when participants listened to the different sounds.
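The stimulus construction can be pictured with a generic sketch of noise excited through two spectral prominences whose amplitudes either stay constant or vary over time; the centre frequencies, bandwidths, and modulation rates below are assumed values, and this is not the authors' analysis/synthesis technique.

```python
# Minimal sketch: a sound made of two noise-excited spectral prominences, in a
# "static" version (constant amplitudes) and a version with independently varying
# amplitudes. All parameter values are assumptions for illustration.
import numpy as np
from scipy.signal import butter, lfilter

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
noise = np.random.default_rng(0).standard_normal(t.size)

def prominence(center_hz, bandwidth_hz):
    low, high = center_hz - bandwidth_hz / 2, center_hz + bandwidth_hz / 2
    b, a = butter(2, [low, high], btype="bandpass", fs=fs)
    return lfilter(b, a, noise)

p1 = prominence(500, 200)    # lower prominence (roughly an F1 region, assumed)
p2 = prominence(1500, 300)   # higher prominence (roughly an F2 region, assumed)

static_stimulus = 0.5 * p1 + 0.5 * p2                       # both features static

env1 = 0.5 + 0.5 * np.sin(2 * np.pi * 3 * t)                # slow amplitude contours
env2 = 0.5 + 0.5 * np.sin(2 * np.pi * 5 * t)
amplitude_varying_stimulus = env1 * p1 + env2 * p2          # amplitudes vary independently
```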

Conclusions

Neural activity to spectral and amplitude modulations sufficient to support speech intelligibility (without actually being intelligible) was seen bilaterally, with a right temporal lobe dominance. A left dominant response was seen only to intelligible sounds. It thus appears that the left hemisphere specialisation for speech is based on the linguistic properties of utterances, not on particular acoustic features.

6.

Background

Approximately 2–4% of newborns with perinatal risk factors present with hearing loss. Our aim was to analyze the effect of hearing aid use on auditory function evaluated based on otoacoustic emissions (OAEs), auditory brain responses (ABRs) and auditory steady state responses (ASSRs) in infants with perinatal brain injury and profound hearing loss.

Methodology/Principal Findings

A prospective, longitudinal study of auditory function in infants with profound hearing loss. Right-ear hearing before and after hearing aid use was compared with left-ear hearing (not stimulated and used as a control). All infants underwent OAE, ABR and ASSR evaluations before and after hearing aid use. The average ABR threshold decreased from 90.0 to 80.0 dB (p = 0.003) after six months of hearing aid use. In the left ear, which was used as a control, the ABR threshold decreased from 94.6 to 87.6 dB, which was not significant (p>0.05). In addition, the ASSR threshold at 4000 Hz decreased from 89 dB to 72 dB (p = 0.013) after six months of right-ear hearing aid use; the other frequencies in the right ear and all frequencies in the left ear did not show significant differences in any of the measured parameters (p>0.05). OAEs were absent in the baseline test and showed no changes after hearing aid use in the right ear (p>0.05).
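The before/after threshold comparison could be carried out as a paired test along the lines of the sketch below; the per-infant values are invented and the choice of a paired t-test is an assumption, since the abstract does not name the test used.

```python
# Minimal sketch: paired comparison of ABR thresholds (dB) before and after six
# months of hearing aid use. All values are hypothetical.
from scipy.stats import ttest_rel

abr_before = [95, 90, 90, 85, 95, 90, 90, 85]
abr_after = [82, 81, 78, 77, 84, 79, 81, 76]

stat, p_value = ttest_rel(abr_before, abr_after)
print(f"t = {stat:.2f}, p = {p_value:.4f}")
```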

Conclusions/Significance

This study provides evidence that early hearing aid use decreases the hearing threshold in ABR and ASSR assessments with no functional modifications in the auditory receptor, as evaluated by OAEs.

7.

Background

Previous studies have claimed that a precise split at the vertical midline of each fovea causes all words to the left and right of fixation to project to the opposite, contralateral hemisphere, and this division in hemispheric processing has considerable consequences for foveal word recognition. However, research in this area is dominated by the use of stimuli from Latinate languages, which may induce specific effects on performance. Consequently, we report two experiments using stimuli from a fundamentally different, non-Latinate language (Arabic) that offers an alternative way of revealing effects of split-foveal processing, if they exist.

Methods and Findings

Words (and pseudowords) were presented to the left or right of fixation, either close to fixation and entirely within foveal vision, or further from fixation and entirely within extrafoveal vision. Fixation location and stimulus presentations were carefully controlled using an eye-tracker linked to a fixation-contingent display. To assess word recognition, Experiment 1 used the Reicher-Wheeler task and Experiment 2 used the lexical decision task.

Results

Performance in both experiments indicated a functional division in hemispheric processing for words in extrafoveal locations (in recognition accuracy in Experiment 1 and in reaction times and error rates in Experiment 2) but no such division for words in foveal locations.

Conclusions

These findings from a non-Latinate language provide new evidence that although a functional division in hemispheric processing exists for word recognition outside the fovea, this division does not extend up to the point of fixation. Some implications for word recognition and reading are discussed.

8.

Background

DHA accumulates in the central nervous system (CNS) before birth and is involved in early developmental processes, such as neurite outgrowth and gene expression.

Objective

To determine whether fetal DHA insufficiency occurs and constrains CNS development in term gestation infants.

Design

A risk-reduction model using a randomized prospective study of healthy, singleton, term-gestation infants born to women (n = 270) given a placebo or 400 mg/day DHA from 16 weeks' gestation to delivery. Fetal DHA deficiency sufficient to constrain CNS development was assessed based on the increased risk that infants in the placebo group would not achieve neurodevelopment scores in the top quartile of all infants in the study.

Results

Infants in the placebo group were at increased risk of lower language development, assessed as words understood (OR 3.22, CL 1.49–6.94, P = 0.002) and produced (OR 2.61, CL 1.22–5.58, P = 0.01) at 14 mo, and words understood (OR 2.77, CL 1.23–6.28, P = 0.03) and sentences produced (OR 2.60, CL 1.15–5.89, P = 0.02) at 18 mo using the MacArthur Communicative Development Inventory; receptive (OR 2.23, CL 1.08–4.60, P = 0.02) and expressive language (OR 1.89, CL 0.94–3.83, P = 0.05) at 18 mo using the Bayley Scales of Infant Development III; and visual acuity (OR 2.69, CL 1.10–6.54, P = 0.03) at 2 mo.
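For orientation, an odds ratio and its 95% Wald confidence interval of the kind reported above can be computed from a 2×2 table as in the sketch below; the counts are invented and do not reproduce the study's data.

```python
# Minimal sketch: odds ratio and 95% Wald confidence interval from a 2x2 table.
# Counts are hypothetical; the abstract reports only the resulting ORs and CIs.
import math

a, b = 40, 20   # placebo group: below top quartile / in top quartile (illustrative)
c, d = 25, 35   # DHA group:     below top quartile / in top quartile (illustrative)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
```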

Trial Registration

ClinicalTrials.gov NCT00620672

9.

Background

The existence and function of unilateral hemispheric projections within foveal vision may substantially affect foveal word recognition. The purpose of this research was to reveal these projections and determine their functionality.

Methodology

Single words (and pseudowords) were presented to the left or right of fixation, entirely within either foveal or extrafoveal vision. To maximize the likelihood of unilateral projections for foveal displays, stimuli in foveal vision were presented away from the midline. The processing of stimuli in each location was assessed by combining behavioural measures (reaction times, accuracy) with on-line monitoring of hemispheric activity using event-related potentials recorded over each hemisphere, and carefully controlled presentation procedures using an eye-tracker linked to a fixation-contingent display.

Principal Findings

Event-related potentials 100–150 ms and 150–200 ms after stimulus onset indicated that stimuli in extrafoveal and foveal locations were projected unilaterally to the hemisphere contralateral to the presentation hemifield with no concurrent projection to the ipsilateral hemisphere. These effects were similar for words and pseudowords, suggesting this early division occurred before word recognition. Indeed, event-related potentials revealed differences between words and pseudowords 300–350 ms after stimulus onset, for foveal and extrafoveal locations, indicating that word recognition had now occurred. However, these later event-related potentials also revealed that the hemispheric division observed previously was no longer present for foveal locations but remained for extrafoveal locations. These findings closely matched the behavioural finding that foveal locations produced similar performance on each side of fixation but extrafoveal locations produced left-right asymmetries.

Conclusions

These findings indicate that an initial division in unilateral hemispheric projections occurs in foveal vision away from the midline but is not apparent, or functional, when foveal word recognition actually occurs. In contrast, the division in unilateral hemispheric projections that occurs in extrafoveal locations is still apparent, and is functional, when extrafoveal word recognition takes place.

10.
Kanske P, Kotz SA. PLoS ONE 2012;7(1):e30086

Background

The study of emotional speech perception and emotional prosody requires stimuli with reliable affective norms. However, ratings may be affected by the participants' current emotional state, as increased anxiety and depression have been shown to yield altered neural responses to emotional stimuli. Therefore, the present study had two aims: first, to provide a database of emotional speech stimuli, and second, to probe the influence of depression and anxiety on the affective ratings.

Methodology/Principal Findings

We selected 120 words from the Leipzig Affective Norms for German database (LANG), which includes visual ratings of positive, negative, and neutral word stimuli. These words were spoken by a male and a female native speaker of German with the respective emotional prosody, creating a total set of 240 auditory emotional stimuli. The recordings were rated again by an independent sample of subjects for valence and arousal, yielding groups of highly arousing negative or positive stimuli and neutral stimuli low in arousal. These ratings were correlated with participants' emotional state measured with the Depression Anxiety Stress Scales (DASS). Higher depression scores were related to more negative valence ratings of negative and positive, but not neutral, words. Anxiety scores correlated with increased arousal and more negative valence of negative words.

Conclusions/Significance

These results underscore the importance of representatively distributed depression and anxiety scores in participants of affective rating studies. The LANG-audition database, which provides well-controlled, short-duration auditory word stimuli for the experimental investigation of emotional speech, is available in Supporting Information S1.

11.
McDermott HJ. PLoS ONE 2011;6(7):e22358

Background

Recently two major manufacturers of hearing aids introduced two distinct frequency-lowering techniques that were designed to compensate in part for the perceptual effects of high-frequency hearing impairments. The Widex “Audibility Extender” is a linear frequency transposition scheme, whereas the Phonak “SoundRecover” scheme employs nonlinear frequency compression. Although these schemes process sound signals in very different ways, studies investigating their use by both adults and children with hearing impairment have reported significant perceptual benefits. However, the modifications that these innovative schemes apply to sound signals have not previously been described or compared in detail.
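The two kinds of frequency lowering can be contrasted with illustrative input-output frequency mappings, as in the sketch below; the cutoff, shift, and compression ratio are assumed values chosen for illustration and are not the manufacturers' actual parameters.

```python
# Minimal sketch: illustrative input-output frequency mappings for linear frequency
# transposition and nonlinear frequency compression. Parameter values are assumptions.
import numpy as np

f_in = np.linspace(100, 8000, 200)   # input frequencies in Hz
cutoff = 2000.0                      # assumed frequency above which lowering starts

# Linear transposition: components above the cutoff are shifted down by a fixed amount.
shift = 1000.0                       # assumed shift in Hz
f_transposed = np.where(f_in > cutoff, f_in - shift, f_in)

# Nonlinear compression: frequencies above the cutoff are compressed toward it on a
# logarithmic scale by a compression ratio (here an assumed 2:1).
ratio = 2.0
f_compressed = np.where(f_in > cutoff,
                        cutoff * (f_in / cutoff) ** (1.0 / ratio),
                        f_in)
```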

Methods

The main aim of the present study was to analyze these schemes' technical performance by measuring outputs from each type of hearing aid with the frequency-lowering functions enabled and disabled. The input signals included sinusoids, flute sounds, and speech material. Spectral analyses were carried out on the output signals produced by the hearing aids in each condition.

Conclusions

The results of the analyses confirmed that each scheme was effective at lowering certain high-frequency acoustic signals, although both techniques also distorted some signals. Most importantly, the application of either frequency-lowering scheme would be expected to improve the audibility of many sounds having salient high-frequency components. Nevertheless, considerably different perceptual effects would be expected from these schemes, even when each hearing aid is fitted in accordance with the same audiometric configuration of hearing impairment. In general, these findings reinforce the need for appropriate selection and fitting of sound-processing schemes in modern hearing aids to suit the characteristics and preferences of individual listeners.

12.

Background

Early deafness leads to enhanced attention in the visual periphery. Yet whether this enhancement confers advantages in everyday life remains unknown, as deaf individuals have been shown to be more distracted by irrelevant information in the periphery than their hearing peers. Here, we show that deaf individuals gain a performance advantage in a complex attentional task.

Methodology/Principal Findings

We employed the Useful Field of View (UFOV) task, which requires central target identification concurrent with peripheral target localization in the presence of distractors – a divided, selective attention task. First, a comparison of deaf and hearing adults with or without sign language skills establishes that deafness, and not sign language use, drives the UFOV enhancement. Second, UFOV performance was enhanced in deaf children, but only after 11 years of age.

Conclusions/Significance

This work demonstrates that, following early auditory deprivation, visual attention resources toward the periphery are slowly augmented, eventually resulting in a clear behavioral advantage on a selective visual attention task by pre-adolescence.

13.

Background

Serotype-specific polysaccharide based group B streptococcus (GBS) vaccines are being developed. An understanding of the serotype epidemiology associated with maternal colonization and invasive disease in infants is necessary to determine the potential coverage of serotype-specific GBS vaccines.

Methods

Colonizing GBS isolates were identified by vaginal swabbing of mothers during active labor and from the skin of their newborns post-delivery. Invasive GBS isolates from infants were identified through laboratory-based surveillance. GBS serotyping was done by latex agglutination. Serologically non-typeable isolates were typed by a serotype-specific PCR method. The invasive potential of GBS serotypes associated with sepsis within seven days of birth was evaluated in relation to maternal colonizing serotypes.

Results

GBS was identified in 289 (52.4%) newborns born to 551 women with GBS vaginal colonization and in 113 (5.6%) newborns born to 2,010 mothers in whom GBS was not cultured from vaginal swabs. The serotype distribution among vaginal-colonizing isolates was as follows: III (37.3%), Ia (30.1%), II (11.3%), V (10.2%), Ib (6.7%) and IV (3.7%). There were no significant differences in serotype distribution between vaginal and newborn colonizing isolates (P = 0.77). The serotype distribution of invasive GBS isolates was significantly different from that of colonizing isolates (P<0.0001). Serotype III was the most common invasive serotype in newborns less than 7 days old (57.7%) and in infants 7 to 90 days of age (84.3%; P<0.001). Relative to serotype III, other serotypes showed reduced invasive potential: Ia (0.49; 95%CI 0.31–0.77), II (0.30; 95%CI 0.13–0.67) and V (0.38; 95%CI 0.17–0.83).
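The colonization coverage quoted in the Conclusion for an Ia/Ib/III vaccine follows directly from the serotype percentages above; a trivial check:

```python
# Minimal sketch: potential coverage of an Ia/Ib/III vaccine among maternal
# colonizing isolates, using the percentages reported in the Results.
colonizing = {"III": 37.3, "Ia": 30.1, "II": 11.3, "V": 10.2, "Ib": 6.7, "IV": 3.7}
vaccine_serotypes = ["Ia", "Ib", "III"]

coverage = sum(colonizing[s] for s in vaccine_serotypes)
print(f"Colonization coverage: {coverage:.1f}%")   # 74.1%
```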

Conclusion

In South Africa, an anti-GBS vaccine including serotypes Ia, Ib and III has the potential to prevent 74.1%, 85.4% and 98.2% of GBS associated with maternal vaginal colonization, invasive disease in neonates less than 7 days old and invasive disease in infants between 7–90 days of age, respectively.

14.

Background

Human hearing develops progressively during the last trimester of gestation. Near-term fetuses can discriminate acoustic features, such as frequencies and spectra, and process complex auditory streams. Fetal and neonatal studies show that they can remember frequently recurring sounds. However, existing data can only show retention intervals up to several days after birth.

Methodology/Principal Findings

Here we show that auditory memories can last at least six weeks. Experimental fetuses were given precisely controlled exposure to a descending piano melody twice daily during the 35th, 36th, and 37th weeks of gestation. Six weeks later we assessed the cardiac responses of 25 exposed infants and 25 naive control infants, while in quiet sleep, to the descending melody and to an ascending control piano melody. The melodies had precisely inverse contours but similar spectra and identical duration, tempo and rhythm, and thus almost identical amplitude envelopes. All infants displayed a significant heart rate change. In exposed infants, the descending melody evoked a cardiac deceleration that was twice as large as the decelerations elicited by the ascending melody and by both melodies in control infants.

Conclusions/Significance

Thus, three weeks of prenatal exposure to a specific melodic contour affects infants' auditory processing or perception, i.e., impacts the autonomic nervous system at least six weeks later, when infants are one month old. Our results extend the retention interval over which a prenatally acquired memory of a specific sound stream can be observed from 3–4 days to six weeks. The long-term memory for the descending melody is interpreted in terms of enduring neurophysiological tuning, and its significance for the developmental psychobiology of attention and perception, including early speech perception, is discussed.

15.
Papes S, Ladich F. PLoS ONE 2011;6(10):e26479

Background

Sound production and hearing sensitivity of ectothermic animals are affected by the ambient temperature. This is the first study investigating the influence of temperature on both sound production and hearing abilities in a fish species, namely the neotropical Striped Raphael catfish Platydoras armatulus.

Methodology/Principal Findings

Doradid catfishes produce stridulation sounds by rubbing the pectoral spines in the shoulder girdle and drumming sounds by an elastic spring mechanism that vibrates the swimbladder. Eight fish were acclimated for at least three weeks to 22°C, then to 30°C and again to 22°C. Sounds were recorded in distress situations when fish were hand-held. The stridulation sounds became shorter at the higher temperature, whereas pulse number, maximum pulse period and sound pressure level did not change with temperature. The dominant frequency increased when the temperature was raised to 30°C, and the minimum pulse period became longer when the temperature decreased again. The fundamental frequency of drumming sounds increased at the higher temperature. Using the auditory evoked potential (AEP) recording technique, hearing thresholds were tested at six different frequencies from 0.1 to 4 kHz. The temporal resolution was determined by analyzing the minimum resolvable click period (0.3–5 ms). Hearing sensitivity was higher at the higher temperature, and differences were more pronounced at higher frequencies. In general, latencies of AEPs in response to single clicks became shorter at the higher temperature, whereas temporal resolution in response to double clicks did not change.

Conclusions/Significance

These data indicate that sound characteristics as well as hearing abilities are affected by temperature in fishes. Constraints imposed on hearing sensitivity at different temperatures cannot be compensated for even by longer acclimation periods. These changes in sound production and detection suggest that acoustic orientation and communication are affected by temperature changes in the neotropical catfish P. armatulus.

16.

Background

The Weberian apparatus of otophysine fishes facilitates sound transmission from the swimbladder to the inner ear to increase hearing sensitivity. It has been of great interest to biologists since the 19th century. No studies, however, are available on the development of the Weberian ossicles and their effect on the development of hearing in catfishes.

Methodology/Principal Findings

We investigated the development of the Weberian apparatus and auditory sensitivity in the catfish Lophiobagrus cyclurus. Specimens from 11.3 mm to 85.5 mm in standard length were studied. Morphology was assessed using sectioning, histology, and X-ray computed tomography, along with 3D reconstruction. Hearing thresholds were measured using the auditory evoked potentials recording technique. Weberian ossicles and interossicular ligaments were fully developed in all stages investigated except in the smallest size group. In the smallest catfish, the intercalarium and the interossicular ligaments were still missing and the tripus was not yet fully developed. The smallest juveniles showed the lowest auditory sensitivity and were unable to detect frequencies higher than 2 or 3 kHz; sensitivity increased in larger specimens by up to 40 dB, and frequency detection extended up to 6 kHz. In the size groups capable of perceiving frequencies up to 6 kHz, larger individuals had better hearing abilities at low frequencies (0.05–2 kHz), whereas smaller individuals showed better hearing at the highest frequencies (4–6 kHz).

Conclusions/Significance

Our data indicate that the ability of otophysine fish to detect sounds at low levels and high frequencies largely depends on the development of the Weberian apparatus. A significant increase in auditory sensitivity was observed as soon as all Weberian ossicles and interossicular ligaments were present and the chain for transmitting sounds from the swimbladder to the inner ear was complete. This contrasts with findings in another otophysine, the zebrafish, where no threshold changes have been observed.

17.

Objectives

Intonation may serve as a cue that facilitates the recognition and processing of spoken words, and it has been suggested that the pitch contour of spoken words is implicitly remembered. Thus, using the repetition suppression (RS) effect of BOLD-fMRI signals, we tested whether the same spoken words are differentially processed in language and auditory brain areas depending on whether or not they retain an arbitrary intonation pattern.

Experimental design

Words were presented repeatedly in three blocks for passive and active listening tasks. There were three prosodic conditions, in each of which a different set of words was used and specific task-irrelevant intonation changes were applied: (i) all words were presented with the same flat, monotonous pitch contour; (ii) each word had an arbitrary pitch contour that was kept fixed across the three repetitions; (iii) each word had a different arbitrary pitch contour in each of its repetitions.

Principal findings

The repeated presentation of words with a fixed pitch contour resulted in robust behavioral priming effects as well as in significant RS of the BOLD signals in primary auditory cortex (BA 41), temporal areas (BA 21/22) bilaterally, and in Broca's area. However, changing the intonation of the same words on each successive repetition resulted in reduced behavioral priming and the abolition of the RS effects.

Conclusions

Intonation patterns are retained in memory even when the intonation is task-irrelevant. Implicit memory traces for the pitch contour of spoken words were reflected in facilitated neuronal processing in auditory and language-associated areas. Thus, the results lend support to the notion that prosody, and specifically pitch contour, is strongly associated with the memory representation of spoken words.

18.

Background

Alexithymia, a condition characterized by deficits in interpreting and regulating feelings, is a risk factor for a variety of psychiatric conditions. Little is known about how alexithymia influences the processing of emotions in music and speech. Appreciation of such emotional qualities in auditory material is fundamental to human experience and has profound consequences for functioning in daily life. We investigated the neural signature of such emotional processing in alexithymia by means of event-related potentials.

Methodology

Affective music and speech prosody were presented as targets following affectively congruent or incongruent visual word primes in two conditions. In two further conditions, affective music and speech prosody served as primes and visually presented words with affective connotations were presented as targets. Thirty-two participants (16 male) judged the affective valence of the targets. We tested the influence of alexithymia on cross-modal affective priming and on N400 amplitudes, indicative of individual sensitivity to an affective mismatch between words, prosody, and music. Our results indicate that the affective priming effect for prosody targets tended to be reduced with increasing scores on alexithymia, while no behavioral differences were observed for music and word targets. At the electrophysiological level, alexithymia was associated with significantly smaller N400 amplitudes in response to affectively incongruent music and speech targets, but not to incongruent word targets.

Conclusions

Our results suggest a reduced sensitivity to the emotional qualities of speech and music in alexithymia during affective categorization. This deficit becomes evident primarily in situations in which a verbalization of emotional information is required.

19.

Background

Within the structural and grammatical bounds of a common language, all authors develop their own distinctive writing styles. Whether the relative occurrence of common words can be measured to produce accurate models of authorship is of particular interest. This work introduces a new score that helps to highlight such variations in word occurrence, and is applied to produce models of authorship of a large group of plays from the Shakespearean era.

Methodology

A text corpus containing 55,055 unique words was generated from 168 plays from the Shakespearean era (16th and 17th centuries) of undisputed authorship. A new score, CM1, is introduced to measure variation patterns based on the frequency of occurrence of each word for the authors John Fletcher, Ben Jonson, Thomas Middleton and William Shakespeare, compared to the rest of the authors in the study (which provides a reference of relative word usage at that time). A total of 50 WEKA methods were applied for Fletcher, Jonson and Middleton, to identify those that were able to produce models yielding over 90% classification accuracy. This ensemble of WEKA methods was then applied to model Shakespearean authorship across all 168 plays, yielding a Matthews' correlation coefficient (MCC) performance of over 90%. Furthermore, the best model yielded an MCC of 99%.
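Matthews' correlation coefficient for a binary authorship decision can be computed as in the sketch below; the labels are invented, and neither the CM1 score nor the WEKA model ensemble is reproduced here.

```python
# Minimal sketch: MCC for a hypothetical binary authorship classification
# (1 = play attributed to the candidate author, 0 = other authors).
from sklearn.metrics import matthews_corrcoef

y_true = [1, 1, 1, 0, 0, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 0, 0, 0, 1, 0, 1]

print(f"MCC = {matthews_corrcoef(y_true, y_pred):.2f}")
```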

Conclusions

Our results suggest that different authors, while adhering to the structural and grammatical bounds of a common language, develop measurably distinct styles through a tendency to over-use or avoid particular common words and phrasings. Considering language and the potential of words as an abstract chaotic system with high entropy, similarities can be drawn to the Maxwell's Demon thought experiment; authors subconsciously favour or filter certain words, modifying the probability profile in ways that could reflect their individuality and style.

20.

Objectives

To evaluate the long-term neurodevelopmental outcome of premature infants exposed to either gram-negative sepsis (GNS) or neonatal Candida sepsis (NCS), and to compare their outcome with that of premature infants without sepsis.

Methods

Historical cohort study in a population of infants born at <30 weeks' gestation and admitted to the Neonatal Intensive Care Unit (NICU) of the Academic Medical Center in Amsterdam during the period 1997–2007. Outcomes of infants exposed to GNS or NCS and of 120 randomly chosen uncomplicated controls (UC) from the same NICU were compared. Clinical data during hospitalization and neurodevelopmental outcome data (clinical neurological status, Bayley test results and vision/hearing test results) at the corrected age of 24 months were collected. An association model with sepsis as the central determinant of either good or adverse outcome (death or severe developmental delay) was constructed and corrected for confounders using multiple logistic regression analysis.
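An association model of this general shape (adverse outcome regressed on sepsis group, adjusted for a confounder) could be sketched as below; the data frame, the choice of confounder and the variable names are hypothetical and do not reproduce the study's model.

```python
# Minimal sketch: adjusted odds ratios from a multiple logistic regression with
# sepsis group as the central determinant. Data and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "adverse": rng.integers(0, 2, n),                      # death or severe delay (0/1)
    "sepsis_group": rng.choice(["UC", "GNS", "NCS"], n),   # exposure of interest
    "gestational_age": rng.normal(28, 1.5, n),             # example confounder
})

model = smf.logit("adverse ~ C(sepsis_group, Treatment('UC')) + gestational_age",
                  data=df).fit(disp=0)
adjusted_or = np.exp(model.params)   # exponentiated coefficients = adjusted ORs
print(adjusted_or)
```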

Results

Of 1362 patients, 55 suffered from GNS and 29 suffered from NCS; cumulative incidence 4.2% and 2.2%, respectively. During the follow-up period the mortality rate was 34% for both GNS and NCS and 5% for UC. The adjusted Odds Ratio (OR) [95% CI] for adverse outcome in the GNS group compared to the NCS group was 1.4 [0.4–4.9]. The adjusted ORs [95% CI] for adverse outcome in the GNS and NCS groups compared to the UC group were 4.8 [1.5–15.9] and 3.2 [0.7–14.7], respectively.

Conclusions

We found no statistically significant difference in outcome at the corrected age of 24 months between neonatal GNS and NCS cases. Suffering from either gram-negative or Candida sepsis increased the odds of an adverse outcome compared with an uncomplicated neonatal period.

