Similar documents
 20 similar documents found (search time: 31 ms)
1.
Eight patients with Down syndrome, aged 9 years and 10 months to 25 years and 4 months, underwent partial glossectomy. Preoperative and postoperative videotaped samples of spoken words and connected speech were randomized and rated by two groups of listeners, only one of which knew of the surgery. Aesthetic appearance of speech or visual acceptability of the patient while speaking was judged from visual information only. Judgments of speech intelligibility were made from the auditory portion of the videotapes. Acceptability and intelligibility also were judged together during audiovisual presentation. Statistical analysis revealed that speech was significantly more acceptable aesthetically after surgery. No significant difference was found in speech intelligibility preoperatively and postoperatively. Ratings did not differ significantly depending on whether the rater knew of the surgery. Analysis of results obtained in various presentation modes revealed that the aesthetics of speech did not significantly affect judgment of intelligibility. Conversely, speech acceptability was greater in the presence of higher levels of intelligibility.

2.
Rehabilitation of the face in patients with Down's syndrome
Fifty patients with Down's syndrome underwent surgery for improvement of the facial stigmata. Partial glossectomy, lateral canthoplasty, and nose, cheek, and chin augmentation were the common procedures. With a follow-up of 18 to 24 months, the results were recorded by a multidisciplinary team, whose judgments agreed on the glossectomy, the most satisfactory procedure, and diverged somewhat on the other procedures, canthoplasty and cheek augmentation. There were no infections or extrusions of prostheses, but a rather high incidence of bone resorption was noted in the mandibular area. The facial changes were satisfactory in the majority of cases in both medical and nonmedical evaluation and improved self-confidence, especially in the older patients. The satisfactory results presented here support certain procedures for attenuation of the Down's syndrome stigmata and improvement of some functions by diminishing the size of the tongue.

3.
It is known from the literature that (1) sounds with complex spectral composition are assessed by summing the partial outputs of the spectral channels; (2) electrical stimuli used in cochlear implant systems bring about the perception of a frequency band; and (3) removal of different parts of the auditory spectrum significantly affects phrase intelligibility. The level of acoustic pressure (AP) at a comfortable loudness level and phrase intelligibility after comb filtering of a speech signal were measured in normally hearing subjects. Using a software program for spectral transformation of the speech signal, the phrase spectrum was divided into frequency bands of various widths and only the odd-numbered bands were summed. In three series, the width of the odd bands was 50, 100, or 150 Hz and the width of the even bands was varied; the filter period was equal to the sum of the even and odd band widths. With the filter period held constant, the acoustic pressure of the output signal had to be increased to reach the comfortable loudness level for a speech signal passed through the comb filter, and the narrower the test bands, the greater the required AP increase. With the test-band width held constant, the required increase grew with the filter period. The redundancy of the speech signal with respect to its spectral content can equal or even exceed 97.5%.
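The odd-band summation described above amounts to a spectral comb filter: alternating pass and stop bands tile the spectrum with a fixed period. A minimal FFT-domain sketch (numpy assumed; the band widths and test tones below are illustrative, not the study's stimuli):

```python
import numpy as np

def comb_filter(signal, fs, pass_width, stop_width):
    """Keep alternating frequency bands (the 'odd' bands), zero the rest.

    pass_width: width in Hz of each retained band
    stop_width: width in Hz of each removed band
    The filter period equals pass_width + stop_width.
    """
    n = len(signal)
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    period = pass_width + stop_width
    # A frequency survives if it falls within the first pass_width Hz
    # of each period along the frequency axis.
    mask = (freqs % period) < pass_width
    return np.fft.irfft(spectrum * mask, n)

# A 25 Hz tone falls in a pass band of a 50/950 Hz comb (period 1000 Hz,
# 95% of the spectrum removed); a 500 Hz tone falls in a stop band.
fs = 16000
t = np.arange(fs) / fs
kept = comb_filter(np.sin(2 * np.pi * 25 * t), fs, 50, 950)
removed = comb_filter(np.sin(2 * np.pi * 500 * t), fs, 50, 950)
```

With 50-Hz pass bands and 950-Hz stop bands, 95% of the spectral content is discarded per period, which is the kind of spectral deprivation the intelligibility measurements above quantify.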

4.
This study aimed to characterize the linguistic interference that occurs during speech-in-speech comprehension by combining offline and online measures: an intelligibility task (at a −5 dB signal-to-noise ratio) and 2 lexical decision tasks (at −5 dB and 0 dB SNR) performed with spoken French target words. In these 3 experiments we compared the masking effects of speech backgrounds (4-talker babble) produced in the same language as the target language (French) or in unknown foreign languages (Irish and Italian) with the masking effects of corresponding non-speech backgrounds (speech-derived fluctuating noise). The fluctuating noise contained spectro-temporal information similar to babble but lacked linguistic information. At −5 dB SNR, both tasks revealed significant differences between the unknown languages: Italian and French hindered French target-word identification to a similar extent, whereas Irish led to significantly better performance. By comparing the performances obtained with speech and fluctuating-noise backgrounds, we were able to evaluate the effect of each language. The intelligibility task showed a significant difference between babble and fluctuating noise for French, Irish and Italian, suggesting both acoustic and linguistic effects for each language. However, the lexical decision task, which reduces the effect of post-lexical interference, appeared to be more precise, as it revealed a linguistic effect only for French. Thus, although French and Italian had equivalent masking effects on French word identification, the nature of their interference differed. This finding suggests that the differences observed between the masking effects of Italian and Irish can be explained at an acoustic level but not at a linguistic level.
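The masking conditions above all rest on presenting target and background at a fixed SNR. A small sketch of the standard construction, scaling a masker so the mixture reaches a chosen SNR (numpy assumed; the signals here are synthetic stand-ins, not the study's French words or babble):

```python
import numpy as np

def mix_at_snr(target, masker, snr_db):
    """Scale `masker` so that the target-to-masker power ratio equals
    `snr_db`, then return the mixture and the scaled masker."""
    p_target = np.mean(target ** 2)
    p_masker = np.mean(masker ** 2)
    # Choose gain so that 10*log10(p_target / (gain**2 * p_masker)) == snr_db
    gain = np.sqrt(p_target / (p_masker * 10 ** (snr_db / 10)))
    return target + gain * masker, gain * masker

rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)   # stand-in for a spoken target word
babble = rng.standard_normal(16000)   # stand-in for 4-talker babble
mixture, scaled_babble = mix_at_snr(speech, babble, -5.0)
snr_check = 10 * np.log10(np.mean(speech**2) / np.mean(scaled_babble**2))
```

At −5 dB SNR the masker carries roughly three times the power of the target, which is why lexical access rather than audibility becomes the limiting factor in such designs.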

5.
How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex.
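The linear reconstruction step can be illustrated on toy data: if population activity is approximately a linear mixture of spectrogram channels, a ridge-regularised linear map recovers the spectrogram from the activity. This is only a schematic analogue of the study's decoding model, with simulated data in place of intracranial recordings:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: neural activity (time x electrodes) is a noisy linear
# mixture of a hidden "auditory spectrogram" (time x frequency bins).
T, n_freq, n_elec = 500, 8, 32
spectrogram = rng.standard_normal((T, n_freq))
mixing = rng.standard_normal((n_freq, n_elec))
neural = spectrogram @ mixing + 0.1 * rng.standard_normal((T, n_elec))

# Ridge-regularised linear decoder: map population activity back to the
# spectrogram (closed-form solution of the regularised least squares).
lam = 1.0
W = np.linalg.solve(neural.T @ neural + lam * np.eye(n_elec),
                    neural.T @ spectrogram)
reconstruction = neural @ W

# Reconstruction accuracy as the mean correlation per frequency bin
corr = np.mean([np.corrcoef(spectrogram[:, f], reconstruction[:, f])[0, 1]
                for f in range(n_freq)])
```

In the study, reconstruction of fast temporal fluctuations required replacing the spectrogram with a nonlinear modulation-energy representation; the toy example above corresponds only to the linear, spectrogram-based case.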

6.
A speech enhancement scheme is presented using diverse processing in sub-bands spaced according to a human-cochlear describing function. The binaural adaptive scheme decomposes the wide-band input signals into a number of band-limited signals, superficially similar to the treatment the human ears perform on incoming signals. The results of a series of intelligibility and formal listening tests are presented in which acoustic speech signals corrupted with recorded automobile noise were presented to 15 normal hearing volunteer subjects. For the experimental cases considered, the proposed binaural adaptive sub-band processing scheme delivers a statistically significant improvement in terms of both speech-intelligibility and perceived quality when compared with both the conventional wide-band processed and the noisy unprocessed case. The scheme is capable of extension to a potentially more flexible sub-band processing method based on a constrained artificial neural network (ANN).
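The decomposition stage can be sketched as splitting the signal into band-limited components on a logarithmic frequency scale, a crude stand-in for a cochlear describing function (true ERB spacing and the per-band adaptive processing are omitted; numpy and the band count are assumptions):

```python
import numpy as np

def subband_decompose(signal, fs, n_bands=8, f_lo=100.0, f_hi=6000.0):
    """Split a signal into band-limited components on a logarithmic
    frequency scale via FFT-domain masks. Summing the components
    reconstructs the original signal within [f_lo, f_hi)."""
    n = len(signal)
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)   # log-spaced band edges
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        bands.append(np.fft.irfft(spectrum * mask, n))
    return bands

fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 2000 * t)
bands = subband_decompose(x, fs)
resynth = np.sum(bands, axis=0)   # perfect resynthesis inside the band range
```

An enhancement scheme of the kind described would then attenuate or process each band independently (e.g., according to its estimated noise level) before resynthesis.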

7.
In the real world, human speech recognition nearly always involves listening in background noise. The impact of such noise on speech signals and on intelligibility increases with the separation of the listener from the speaker. The present behavioral experiment provides an overview of the effects of such acoustic disturbances on speech perception in conditions approaching ecologically valid contexts. We analysed the intelligibility loss in spoken word lists with increasing listener-to-speaker distance in a typical low-level natural background noise. The noise was combined with the simple spherical amplitude attenuation due to distance, which together determine the signal-to-noise ratio (SNR). Therefore, our study draws attention to some of the most basic environmental constraints that have pervaded spoken communication throughout human history. We evaluated the ability of native French participants to recognize French monosyllabic words (spoken at 65.3 dB(A), referenced at 1 meter) at distances between 11 and 33 meters, which corresponded to the SNRs most revealing of the progressive effect of the selected natural noise (−8.8 dB to −18.4 dB). Our results showed that in such conditions the identity of vowels is largely preserved, with strikingly few vowel confusions. The results also confirmed the functional role of consonants during lexical identification. Extensive analysis of recognition scores, confusion patterns and associated acoustic cues revealed that sonorant, sibilant and burst properties were the most important parameters influencing phoneme recognition. Altogether, these analyses allowed us to extract a resistance scale from the consonant recognition scores. We also identified specific perceptual consonant confusion groups depending on their position in the word (onset vs. coda). Finally, our data suggested that listeners may access some acoustic cues of the CV transition, opening interesting perspectives for future studies.
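The distance-to-SNR mapping in this design follows from spherical spreading alone: speech level falls by 20·log10(d/d_ref) dB with distance against a constant noise floor. A quick arithmetic check against the reported figures (the ~53.3 dB(A) noise level is inferred here from the reported SNRs, not quoted by the study):

```python
import math

def speech_level_at(distance_m, ref_level_db=65.3, ref_distance_m=1.0):
    """Spherical spreading: level drops 20*log10(d/d_ref) dB with distance."""
    return ref_level_db - 20 * math.log10(distance_m / ref_distance_m)

# Assuming a constant background noise level of ~53.3 dB(A) (an inferred
# value), the SNRs at the two extreme distances come out close to the
# study's reported -8.8 dB and -18.4 dB.
noise_db = 53.3
snr_11m = speech_level_at(11) - noise_db
snr_33m = speech_level_at(33) - noise_db
```

This is why tripling the distance (11 m to 33 m) costs about 9.5 dB of SNR: 20·log10(3) ≈ 9.5.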

8.
It was found that, with a test bandwidth of 50 Hz, 100% speech intelligibility is retained in naive subjects even when, on average, 950 Hz is removed from each successive 1000-Hz band. Thus, speech is 95% redundant with respect to its spectral content. The parameters of the comb filter were chosen from measurements of speech intelligibility in experienced subjects, such that no subject with normal hearing taking part in the experiment for the first time exhibited 100% intelligibility. Two methods of learning to perceive spectrally deprived speech signals are compared: (1) aurally only and (2) with visual enhancement. In the latter case, speech intelligibility is significantly higher. The possibility of using a spectrally deprived speech signal to develop and assess the efficiency of auditory rehabilitation of implanted patients is discussed.

9.
Möbius syndrome is a complex congenital anomaly involving multiple cranial nerves, including the abducens (VI) and facial (VII) nerves, and is often associated with limb anomalies. Muscle transplantation has been used to address the lack of facial animation, lack of lower lip support, and speech difficulties these patients experience. The purpose of this study was to investigate the results of bilateral, segmental gracilis muscle transplantation to the face using the facial vessels for revascularization and the motor nerve to the masseter for reinnervation. The outcome of the two-stage procedure was assessed in 10 consecutive children with Möbius syndrome by direct interview, speech assessment, and oral commissure movement. Preoperative data were collected from direct questioning, review of preoperative videotapes, notes from prior medical evaluations, and rehabilitation medicine and speech pathology assessments. All of the patients developed reinnervation and muscle movement. The children who described self-esteem as an issue preoperatively reported significant posttransplant improvement. The muscle transplants produced a smile with an average commissure excursion of 1.37 cm. The frequency and severity of drooling and drinking difficulties decreased postoperatively in the seven symptomatic children. Speech difficulties improved in all children. Specifically, of the six children with bilabial incompetence, three received complete correction and three had significant improvement. Despite the length and complexity of these procedures, complications were minimal. Muscle transplantation had positive effects in all problematic areas, with a high degree of patient satisfaction and improvement in drooling, drinking, speech, and facial animation. The surgical technique is described in detail and the advantages over regional muscle transfers are outlined.
Segmental gracilis muscle transplantation innervated by the motor nerve to the masseter is an effective method of treating patients with Möbius syndrome.

10.
Summary 235 cases of Down's syndrome were ascertained in a 10-year study of Down's syndrome in Western Australia. Cytogenetic studies performed on 222 subjects confirmed that 95% of cases were trisomic due to nondisjunction, 4% were trisomic due to translocation, and 1% were mosaic; however, the ratio of inherited to sporadic translocations differed from that usually reported. Comparison of the results with those of an earlier Australian survey of Down's syndrome demonstrated a real fall in the incidence of Down's syndrome in Australia but no significant change in maternal age-specific incidences.

11.
Minutiae of the epidermal ridges were examined in 16 children with Down's syndrome and 50 children without genetic or familial abnormalities. Minutiae were examined in standard areas of the palms (according to Grzeszyk's concept). Comparative analysis, confirmed statistically, showed significant differences in the incidence of particular minutiae types between the palms of children with Down's syndrome and those of the control group.

12.

Objectives

(1) To report the speech perception and intelligibility results of Mandarin-speaking patients with large vestibular aqueduct syndrome (LVAS) after cochlear implantation (CI); (2) to compare their performance with a group of CI users without LVAS; (3) to understand the effects of age at implantation and duration of implant use on the CI outcomes. The obtained data may be used to guide decisions about CI candidacy and surgical timing.

Methods

Forty-two patients with LVAS participating in this study were divided into two groups: the early group received CI before 5 years of age and the late group at or after 5 years of age. Open-set speech perception tests (on Mandarin tones, words and sentences) were administered one year after implantation and at the most recent follow-up visit. Categories of auditory perception (CAP) and Speech Intelligibility Rating (SIR) scale scores were also obtained.

Results

The patients with LVAS with more than 5 years of implant use (18 cases) achieved a mean score higher than 80% on the most recent speech perception tests and reached the highest level on the CAP/SIR scales. The early group developed speech perception and intelligibility steadily over time, while the late group had a rapid improvement during the first year after implantation. The two groups, regardless of their age at implantation, reached a similar performance level at the most recent follow-up visit.

Conclusion

High levels of speech performance are reached after 5 years of implant use in patients with LVAS. These patients do not necessarily need to wait until their hearing thresholds exceed 90 dB HL or their PB word scores fall below 40% to receive CI; implantation can be performed earlier, when their speech perception and/or speech intelligibility fall short of the performance levels suggested in this study.

13.
Spectrotemporal modulation (STM) detection performance was examined for cochlear implant (CI) users. The test involved discriminating between an unmodulated steady noise and a modulated stimulus, in which frequency modulation patterns change over time. To examine STM detection under different modulation conditions, two temporal modulation rates (5 and 10 Hz) and three spectral modulation densities (0.5, 1.0, and 2.0 cycles/octave) were employed, producing a total of 6 STM stimulus conditions. To explore how electric hearing constrains STM sensitivity for CI users differently from acoustic hearing, normal-hearing (NH) and hearing-impaired (HI) listeners were also tested on the same tasks. STM detection performance was best in NH subjects, followed by HI subjects. On average, CI subjects showed the poorest performance, but some CI subjects showed levels of STM detection comparable to acoustic hearing. Significant correlations were found between STM detection performance and speech identification performance in quiet and in noise. To understand the relative contributions of spectral and temporal modulation cues to speech perception in CI users, spectral and temporal modulation detection were measured separately and related to STM detection and speech perception performance. The results suggest that slow spectral modulation, rather than slow temporal modulation, may be important for determining speech perception capabilities in CI users. Lastly, test–retest reliability for STM detection was good, with no learning effect. The present study demonstrates that STM detection may be a useful tool for evaluating the ability of CI sound processing strategies to deliver clinically pertinent acoustic modulation information.
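STM stimuli of this kind are often built as "moving ripples": many log-spaced tones whose amplitudes follow a sinusoid drifting in log-frequency over time, parameterised by a temporal rate (Hz) and a spectral density (cycles/octave). A sketch under those assumptions (the construction and parameter values are illustrative; the study's exact stimulus generation may differ):

```python
import numpy as np

def stm_ripple(duration=1.0, fs=16000, rate_hz=5.0, density_cpo=1.0,
               f_lo=200.0, f_hi=6400.0, n_tones=100, depth=0.9):
    """Spectro-temporally modulated ('moving ripple') stimulus: log-spaced
    tones whose envelopes follow a sinusoid in (time, log-frequency).
    rate_hz = temporal modulation rate; density_cpo = spectral modulation
    density in cycles/octave; depth = modulation depth (0 = steady)."""
    t = np.arange(int(duration * fs)) / fs
    freqs = np.geomspace(f_lo, f_hi, n_tones)
    octaves = np.log2(freqs / f_lo)          # tone position in octaves
    rng = np.random.default_rng(0)
    phases = rng.uniform(0, 2 * np.pi, n_tones)
    signal = np.zeros_like(t)
    for f, x, ph in zip(freqs, octaves, phases):
        # Envelope drifts in log-frequency at rate_hz, with density_cpo
        # ripple cycles per octave across the spectrum.
        env = 1.0 + depth * np.sin(2 * np.pi * (rate_hz * t + density_cpo * x))
        signal += env * np.sin(2 * np.pi * f * t + ph)
    return signal / np.max(np.abs(signal))

ripple = stm_ripple(rate_hz=5.0, density_cpo=1.0)       # modulated stimulus
steady = stm_ripple(rate_hz=0.0, density_cpo=0.0, depth=0.0)  # unmodulated
```

The detection task described above then amounts to discriminating `ripple` from `steady` at matched overall level.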

14.
Reexamination of paternal age effect in Down's syndrome
Summary The recent discovery that the extra chromosome in about 30% of 47,+21 trisomy cases is of paternal origin has revived interest in the possibility of paternal age as a risk factor for a Down syndrome birth, independent of maternal age. The parental age distribution for 611 Down's syndrome 47,+21 cases was studied. The mean paternal age was 0.16 year greater than in the entire population of live births after controlling for maternal age; there was no evidence for a significant paternal age effect at the 0.05 level. For 242 of these Down's syndrome cases, control subjects were selected by rigid, systematic matching. Paternal age was the variable studied, with maternal age and time and place of birth controlled. There was no statistically significant association between paternal age and Down's syndrome. After adjustment for maternal age, these two studies were not consistent with an increase of paternal age in Down's syndrome.

15.
To test a hypothesis on the potential role of large heterochromatic regions in chromosome nondisjunction, polymorphism of the C segments of chromosomes 1, 9, and 16 was examined in 70 children with Down's syndrome. The C-segment lengths of these chromosomes were shown not to deviate from normal. To resolve this question, examining children with Down's syndrome alone appears insufficient.

16.
The main goal of this study was to investigate the reorganisation of the spatial structure of systemic EEG interactions during mental speech production in preschool children: generating sentences from a set of words and generating words from a set of phonemes. In both cases, interhemispheric biopotential relations significantly increased as compared with the baseline (resting with closed eyes). Cross-correlation and coherence analyses of the EEG showed marked intensification of hemispheric interaction during the verbal tasks. High coefficients of statistical similarity between the patterns of intercortical interactions in adults and children were observed during sentence and word generation (SC = 0.71 and 0.62, respectively). In contrast, lower coefficients of statistical similarity between the two groups were observed during identification of grammatical and semantic mistakes (SC ≤ 0.50). These data suggest a relatively high maturation level of the central mechanisms underlying speech production, as opposed to those underlying the identification of grammatical and semantic mistakes, in preschool children.

17.
Fluorescence microscopy with acridine orange was applied in these studies. In short-term lymphocyte cultures obtained from mothers of children with Down's syndrome, changes in the structure of interphase chromatin characteristic of their affected children were revealed. Sibling girls also displayed deviations similar to the changes found in their mothers. The data obtained suggest the existence of a population of women whose genotype predisposes to these alterations in structural chromatin organization. Since the changes were revealed only in the mothers and sibling girls, it is suggested that these genotype peculiarities are hereditary and connected with genes (or certain chromatin regions) limited by sex.

18.
Invariant and noise-proof speech understanding is an important human ability, ensured by several mechanisms of the audioverbal system, which develops in parallel with the mastering of linguistic rules. Clarifying these mechanisms, especially their role in speech development, is a fundamental problem of speech studies. The article deals with the regularities of auditory word recognition in noise by preschool children (healthy and with speech development disorders) and by patients with cochlear implants. The authors studied word recognition using pictures (by children) and verbal monitoring, with subjects stimulated by isolated words with one or all syllables in noise. The study showed that children's ability to perceive distorted words develops in ontogeny and is closely related to the development of mental processes and the mastering of linguistic rules. The data on patients with cochlear implants also confirmed the key role of central factors in understanding distorted speech.

19.
Elucidating the structure and function of joint vocal displays (e.g. duet, chorus) recorded with a conventional microphone has proved difficult in some animals owing to the complex acoustic properties of the combined signal, a problem reminiscent of multi-speaker conversations in humans. Towards this goal, we set out to simultaneously compare air-transmitted (AT) with radio-transmitted (RT) vocalizations in one pair of humans and one pair of captive Bolivian grey titi monkeys (Plecturocebus donacophilus) all equipped with an accelerometer – or vibration transducer – closely apposed to the larynx. First, we observed no crosstalk between the two radio transmitters when subjects produced vocalizations at the same time close to each other. Second, compared with AT acoustic recordings, sound segmentation and pitch tracking of the RT signal was more accurate, particularly in a noisy and reverberating environment. Third, RT signals were less noisy than AT signals and displayed more stable amplitude regardless of distance, orientation and environment of the animal. The microphone outperformed the accelerometer with respect to sound spectral bandwidth and speech intelligibility: the sounds of RT speech were more attenuated and dampened as compared to AT speech. Importantly, we show that vocal telemetry allows reliable separation of the subjects’ voices during production of joint vocalizations, which has great potential for future applications of this technique with free-ranging animals.

20.

Background

The well-established left hemisphere specialisation for language processing has long been claimed to be based on a low-level auditory specialization for specific acoustic features in speech, particularly regarding ‘rapid temporal processing’.

Methodology

A novel analysis/synthesis technique was used to construct a variety of sounds based on simple sentences, which could be manipulated in spectro-temporal complexity and in whether or not they were intelligible. All sounds consisted of two noise-excited spectral prominences (based on the lower two formants of the original speech), which could be static or could vary in frequency and/or amplitude independently. Dynamically varying both acoustic features based on the same sentence led to intelligible speech, but when either or both features were static, the stimuli were not intelligible. Using the frequency dynamics from one sentence with the amplitude dynamics of another led to unintelligible sounds of spectro-temporal complexity comparable to the intelligible ones. Positron emission tomography (PET) was used to compare which brain regions were active when participants listened to the different sounds.
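The stimulus construction can be caricatured as frame-wise spectral shaping of noise by two movable prominences. A simplified sketch (no overlap-add smoothing between frames, Gaussian bumps in place of true formant resonances; all parameter values are hypothetical):

```python
import numpy as np

def two_prominence_stimulus(f1_track, f2_track, frame_len=256, fs=8000,
                            bw=100.0, seed=0):
    """Noise-excited stimulus with two spectral prominences whose centre
    frequencies follow the given per-frame tracks (Hz). Each frame of
    white noise is shaped in the FFT domain by two Gaussian bumps."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(frame_len, 1.0 / fs)
    out = []
    for f1, f2 in zip(f1_track, f2_track):
        noise = rng.standard_normal(frame_len)
        spec = np.fft.rfft(noise)
        shape = (np.exp(-0.5 * ((freqs - f1) / bw) ** 2)
                 + np.exp(-0.5 * ((freqs - f2) / bw) ** 2))
        out.append(np.fft.irfft(spec * shape, frame_len))
    return np.concatenate(out)

# "Dynamic" condition: both prominences move over time;
# "static" condition: both prominences fixed.
frames = 50
dynamic = two_prominence_stimulus(np.linspace(300, 800, frames),
                                  np.linspace(1200, 2200, frames))
static = two_prominence_stimulus(np.full(frames, 500.0),
                                 np.full(frames, 1500.0))
```

Crossing the frequency track of one sentence with the amplitude dynamics of another, as the study did, would add a per-frame gain term to this construction while leaving its spectro-temporal complexity comparable.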

Conclusions

Neural activity to spectral and amplitude modulations sufficient to support speech intelligibility (without actually being intelligible) was seen bilaterally, with a right temporal lobe dominance. A left dominant response was seen only to intelligible sounds. It thus appears that the left hemisphere specialisation for speech is based on the linguistic properties of utterances, not on particular acoustic features.
