Similar articles
1.
Increased listening effort represents a major problem for people with hearing impairment. Neurodiagnostic methods for objective listening-effort estimation might support hearing-instrument fitting procedures. However, the cognitive neurodynamics of listening effort is far from understood, and its neural correlates have not yet been identified. In this paper, we analyze the cognitive neurodynamics of listening effort using methods of forward neurophysical modeling and time-scale electroencephalographic neurodiagnostics. In particular, we present a forward neurophysical model for auditory late responses (ALRs) as large-scale listening-effort correlates. Here, endogenously driven top-down projections related to listening effort are mapped to corticothalamic feedback pathways that were previously analyzed for selective-attention neurodynamics. We show that this model agrees well with the time-scale phase-stability analysis of experimental electroencephalographic data from auditory discrimination paradigms. We conclude that the proposed neurophysical and neuropsychological framework is appropriate for the analysis of listening effort and might help to develop objective electroencephalographic methods for its estimation in the future.
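The abstract does not spell out how the time-scale phase-stability analysis is computed. A common way to quantify phase stability of auditory late responses across trials is the inter-trial phase coherence (ITC) of narrow-band EEG; the sketch below is a minimal illustration under that assumption, not the authors' exact pipeline, and all parameter values are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def inter_trial_phase_coherence(trials, fs, fmin, fmax):
    """ITC: stability of narrow-band phase across trials, per time point.

    trials : (n_trials, n_samples) array of single-trial EEG
    Returns values in [0, 1]; 1 means identical phase in every trial.
    """
    b, a = butter(4, [fmin, fmax], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, trials, axis=1)      # narrow-band per trial
    phase = np.angle(hilbert(filtered, axis=1))    # instantaneous phase
    return np.abs(np.mean(np.exp(1j * phase), axis=0))

# Example: 40 simulated trials with a phase-locked 10 Hz component in noise.
fs = 250
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(0)
trials = np.sin(2 * np.pi * 10 * t) + rng.normal(0.0, 1.0, (40, t.size))
itc = inter_trial_phase_coherence(trials, fs, fmin=8.0, fmax=12.0)
print(itc.mean())   # well above the ~1/sqrt(n_trials) chance level
```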

2.
In the present study, we used transcranial magnetic stimulation (TMS) to investigate the influence of phonological and lexical properties of verbal items on the excitability of the tongue's cortical motor representation during passive listening. In particular, we aimed to clarify whether the difference in tongue motor excitability found during listening to words and pseudo-words [Fadiga, L., Craighero, L., Buccino, G., Rizzolatti, G., 2002. Speech listening specifically modulates the excitability of tongue muscles: a TMS study. European Journal of Neuroscience 15, 399-402] is due to lexical frequency or to the presence of a meaning per se. To do this, we investigated the time course of tongue motor-evoked potentials (MEPs) during listening to frequent words, rare words, and pseudo-words embedded with a double consonant requiring relevant tongue movements for its pronunciation. Results showed that at the later stimulation intervals (200 and 300 ms from the double consonant), listening to rare words evoked much larger MEPs than listening to frequent words. Moreover, by comparing pseudo-words embedded with a double consonant requiring or not requiring tongue movements, we found that a pure phonological motor resonance was present only 100 ms after the double consonant. Thus, while the phonological motor resonance appears very early, the lexical-dependent motor facilitation takes more time to appear and depends on the frequency of the stimuli. The present results indicate that the motor system responsible for phonoarticulatory movements during speech production is also involved during speech listening in a highly specific way. This motor facilitation reflects both the difference in the phonoarticulatory characteristics and the difference in the frequency of occurrence of the verbal material.

3.
Vélez A, Bee MA. Animal Behaviour, 2011, (6): 1319-1327
Dip listening refers to our ability to catch brief "acoustic glimpses" of speech and other sounds when fluctuating background noise levels momentarily decrease. Exploiting dips in natural fluctuations of noise contributes to our ability to overcome the "cocktail party problem" of understanding speech in multi-talker social environments. We presently know little about how nonhuman animals solve analogous communication problems. Here, we asked whether female grey treefrogs (Hyla chrysoscelis) might benefit from dip listening in selecting a mate in the noisy social setting of a breeding chorus. Consistent with a dip listening hypothesis, subjects recognized conspecific calls at lower thresholds when the dips in a chorus-like noise masker were long enough to allow glimpses of nine or more consecutive pulses. No benefits of dip listening were observed when dips were shorter and included five or fewer pulses. Recognition thresholds were higher when the noise fluctuated at a rate similar to the pulse rate of the call. In a second experiment, advertisement calls comprising six to nine pulses were necessary to elicit responses under quiet conditions. Together, these results suggest that in frogs, the benefits of dip listening are constrained by neural mechanisms underlying temporal pattern recognition. These constraints have important implications for the evolution of male signalling strategies in noisy social environments.

4.
In this study we tested the often-suggested claim that people are able to recognize their dogs by their barks. Earlier studies in other species indicated that reliable discrimination between individuals cannot be made by listening to chaotic, noisy vocalizations. As barking is typically such a chaotic, noisy vocalization, we hypothesized that reliable discrimination between individuals is not possible by listening to barks. In this study, playback experiments were conducted to explore (1) how accurately humans discriminate between dogs by hearing only their barks, (2) the impact of the eliciting context of calls on this discrimination performance, and (3) how much such discrimination depends on acoustic parameters (tonality and frequency of barks, and the intervals between individual barks). Our findings were consistent with the previous studies: human performance did not pass the empirical threshold of reliable discrimination in most cases. However, a significant effect of tonality was found: discrimination between individuals was more successful when listeners heard low harmonic-to-noise ratio (HNR) barks. The contexts in which barks were recorded significantly affected the listeners' performance: if the dog barked at a stranger, listeners were able to discriminate the vocalizations better than if they listened to sounds recorded when the dog was separated from its owner. These results suggest that barking may be a more efficient communication system between humans and dogs for conveying the motivational state of an animal than for discriminating among unfamiliar individuals.

5.
In this study, we propose a novel estimate of listening effort based on electroencephalographic data. This method translates our past findings, gained from evoked electroencephalographic activity, to oscillatory EEG activity. To test the technique, we recorded electroencephalographic data from experienced hearing aid users with moderate hearing loss while they wore hearing aids. The investigated hearing aid settings were: a directional microphone combined with a noise reduction algorithm in a medium and a strong setting, the same configuration with the noise reduction turned off, and a setting using omnidirectional microphones without any noise reduction. The results suggest that the electroencephalographic estimate of listening effort is a useful tool for mapping the effort exerted by the participants. In addition, the results indicate that a directional processing mode can reduce listening effort in multi-talker listening situations.
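The abstract leaves the oscillatory listening-effort estimate unspecified. Studies in this area often track band-limited EEG power; the sketch below shows a generic band-power computation with SciPy, where the 8-12 Hz band, the single-channel input, and the Welch parameters are our assumptions rather than the authors' method.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, fmin=8.0, fmax=12.0):
    """Mean spectral power of x in [fmin, fmax] Hz via Welch's method."""
    f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))  # 2 s analysis windows
    band = (f >= fmin) & (f <= fmax)
    return np.trapz(pxx[band], f[band])  # integrate PSD over the band

# Toy single-channel EEG: a 10 Hz rhythm buried in noise.
fs = 500
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
print(band_power(eeg, fs))
```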

6.
Sheppard JP, Wang JP, Wong PC. PLoS ONE, 2011, 6(1): e16510
Aging is accompanied by substantial changes in brain function, including functional reorganization of large-scale brain networks. Such differences in network architecture have been reported both at rest and during cognitive task performance, but an open question is whether these age-related differences show task-dependent effects or represent only task-independent changes attributable to a common factor (i.e., underlying physiological decline). To address this question, we used graph theoretic analysis to construct weighted cortical functional networks from hemodynamic (functional MRI) responses in 12 younger and 12 older adults during a speech perception task performed in both quiet and noisy listening conditions. Functional networks were constructed for each subject and listening condition based on inter-regional correlations of the fMRI signal among 66 cortical regions, and network measures of global and local efficiency were computed. Across listening conditions, older adult networks showed significantly decreased global (but not local) efficiency relative to younger adults after normalizing measures to surrogate random networks. Although listening condition produced no main effects on whole-cortex network organization, a significant age group × listening condition interaction was observed. Additionally, an exploratory analysis of regional effects uncovered age-related declines in both global and local efficiency concentrated exclusively in auditory areas (bilateral superior and middle temporal cortex), further suggestive of specificity to the speech perception tasks. Global efficiency also correlated positively with mean cortical thickness across all subjects, establishing gross cortical atrophy as a task-independent contributor to age-related differences in functional organization. Together, our findings provide evidence of age-related disruptions in cortical functional network organization during speech perception tasks, and suggest that although task-independent effects such as cortical atrophy clearly underlie age-related changes in cortical functional organization, age-related differences also demonstrate sensitivity to task domains.
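For readers unfamiliar with the network measures used here: global efficiency is the mean inverse shortest-path length between node pairs, typically normalized against surrogate random networks. Below is a minimal sketch, assuming edges are weighted by inter-regional correlations with distance 1/|r|; the edge construction and the simple weight-shuffling surrogate are simplifications, not the study's exact pipeline.

```python
import numpy as np
import networkx as nx

def weighted_global_efficiency(corr):
    """Global efficiency of a weighted graph built from a correlation matrix.

    Edges carry distance 1/|r|, so stronger correlations mean shorter paths;
    efficiency is the mean inverse shortest-path length over node pairs.
    """
    n = corr.shape[0]
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            r = abs(corr[i, j])
            if r > 1e-12:
                g.add_edge(i, j, distance=1.0 / r)
    lengths = dict(nx.all_pairs_dijkstra_path_length(g, weight="distance"))
    inv = [1.0 / d for src, targets in lengths.items()
           for dst, d in targets.items() if dst != src]
    return sum(inv) / (n * (n - 1))

# Toy example: 66 regions, 200 fMRI-like time points of random data.
rng = np.random.default_rng(0)
ts = rng.normal(size=(200, 66))
corr = np.corrcoef(ts, rowvar=False)
e_real = weighted_global_efficiency(corr)

# Surrogate network: same edge weights, randomly reassigned to node pairs.
iu = np.triu_indices(66, k=1)
w = corr[iu].copy()
rng.shuffle(w)
surr = np.zeros_like(corr)
surr[iu] = w
surr = surr + surr.T
print(e_real / weighted_global_efficiency(surr))  # normalized efficiency
```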

7.
Effective conservation demands more accurate and reliable methods for surveying and monitoring populations. Surveys of gibbon populations have relied mostly on mapping of groups in "listening areas" using acoustical point-count data. Traditional methods of estimating density have usually used counts of gibbon groups within fixed-radius areas or areas bounded by terrain barriers to sound transmission, and have not accounted for a possible decline in detectability with distance. In this study we sampled the eastern hoolock gibbon (Hoolock leuconedys) population in Htamanthi Wildlife Sanctuary (WS), Myanmar, using two methods: the traditional point-count method with fixed-radius listening areas, and a newer method using point-transect Distance analysis from a sample point established in the center of each listening point array. The basic data were obtained by triangulating on singing groups from four listening points (LPs) over 4 days, in 10 randomly selected sample areas within the sanctuary. The point-transect method gave an average density of 3.13 groups/km², higher than the estimates of group density within fixed-radius areas without correction for detectability. A new method of analysis of singing probability per day (p(1)) gave an estimate of 0.547. Htamanthi WS is an important conservation area containing an estimated 7000 (95% confidence interval: 5000-10,000) hoolock groups. Surveys at Htamanthi WS and locations in the Hukaung Valley suggest that the extensive evergreen forests of northern Myanmar can support 2-4 (on average about 3) groups of hoolock gibbons per km², but most forests in the species' range have yet to be surveyed.
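As a worked illustration of how a per-day singing probability enters a survey correction: if groups sing independently across days with daily probability p(1), the chance of hearing a group at least once in d listening days is 1 − (1 − p(1))^d, and raw counts can be divided by this availability. The sketch below uses the abstract's p(1) = 0.547 and 4 survey days; the independence assumption and the hypothetical raw density are ours, not necessarily the authors' estimator.

```python
# Availability correction for acoustic point counts of gibbon groups.
p1 = 0.547          # estimated probability a group sings on a given day
days = 4            # listening days per sample area

# Probability a group sings (and so can be detected) at least once.
p_any = 1.0 - (1.0 - p1) ** days
print(f"P(heard in {days} days) = {p_any:.3f}")   # ~0.958

# Correct a raw density estimate for groups that never sang.
raw_density = 3.0   # hypothetical groups per km^2 from triangulated counts
corrected = raw_density / p_any
print(f"corrected density = {corrected:.2f} groups/km^2")
```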

8.
Recent studies employing speech stimuli to investigate ‘cocktail-party’ listening have focused on entrainment of cortical activity to modulations at syllabic (5 Hz) and phonemic (20 Hz) rates. The data suggest that cortical modulation filters (CMFs) are dependent on the sound-frequency channel in which modulations are conveyed, potentially underpinning a strategy for separating speech from background noise. Here, we characterize modulation filters in human listeners using a novel behavioral method. Within an ‘inverted’ adaptive forced-choice increment detection task, listening level was varied whilst contrast was held constant for ramped increments with effective modulation rates between 0.5 and 33 Hz. Our data suggest that modulation filters are tonotopically organized (i.e., vary along the primary, frequency-organized, dimension). This suggests that the human auditory system is optimized to track rapid (phonemic) modulations at high sound-frequencies and slow (prosodic/syllabic) modulations at low frequencies.
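The abstract does not give the tracking rule of the adaptive task. For background, adaptive forced-choice procedures commonly use a transformed staircase such as 2-down/1-up, which converges on the 70.7%-correct point; the sketch below simulates such a staircase against a toy listener, with the rule, step sizes, and psychometric model all being illustrative assumptions rather than the study's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_listener(level, threshold=40.0, slope=0.15):
    """Toy 2AFC psychometric function: P(correct) rises from 0.5 with level."""
    p = 0.5 + 0.5 / (1.0 + np.exp(-slope * (level - threshold)))
    return rng.random() < p

level, step = 60.0, 4.0               # starting level (dB) and step size
correct_run, reversals, last_dir = 0, [], 0
while len(reversals) < 8:
    if simulated_listener(level):
        correct_run += 1
        if correct_run < 2:           # need two correct in a row to go down
            continue
        correct_run, direction = 0, -1   # 2-down: make the task harder
    else:
        correct_run, direction = 0, +1   # 1-up: make the task easier
    if last_dir != 0 and direction != last_dir:
        reversals.append(level)          # record level at each reversal
        step = max(step / 2.0, 1.0)      # halve step size, floor at 1 dB
    last_dir = direction
    level += direction * step

print(f"threshold estimate: {np.mean(reversals[-6:]):.1f} dB")
```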

9.
In multi-talker situations, individuals mostly adapt to this listening challenge with ease, but how do brain neural networks shape the adaptation? Here we establish a long-sought link between large-scale neural communication in electrophysiology and behavioral success in the control of attention in difficult listening situations. In an age-varying sample of N = 154 individuals, we find that connectivity between intrinsic neural oscillations extracted from source-reconstructed electroencephalography is regulated according to the listener's goal during a challenging dual-talker task. These dynamics occur as spatially organized modulations in power-envelope correlations of alpha and low-beta neural oscillations during approximately 2-s intervals most critical for listening behavior, relative to a resting-state baseline. First, left frontoparietal low-beta connectivity (16 to 24 Hz) increased during anticipation and processing of a spatial-attention cue before speech presentation. Second, posterior alpha connectivity (7 to 11 Hz) decreased during comprehension of competing speech, particularly around target-word presentation. Connectivity dynamics of these networks were predictive of individual differences in the speed and accuracy of target-word identification, respectively, but proved unconfounded by changes in neural oscillatory activity strength. Successful adaptation to a listening challenge thus latches onto two distinct yet complementary neural systems: a beta-tuned frontoparietal network enabling flexible adaptation of the attentive listening state and an alpha-tuned posterior network supporting attention to speech.

This study investigates how intrinsic neural oscillations, acting in concert, tune into attentive listening. Using electroencephalography signals collected from people in a dual-talker listening task, the authors find that network connectivity of frontoparietal beta and posterior alpha oscillations is regulated according to the listener’s goal.
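Power-envelope correlation, the connectivity measure named above, can be computed by band-pass filtering each source signal, taking the Hilbert envelope, and correlating log power envelopes between signals (published pipelines usually add pairwise orthogonalization to suppress volume conduction, omitted here). Below is a minimal sketch for the 7-11 Hz alpha band; the filter design and the omission of orthogonalization are our simplifications.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def envelope_correlation(x, y, fs, fmin=7.0, fmax=11.0):
    """Correlation of log power envelopes of two signals within a band."""
    b, a = butter(4, [fmin, fmax], btype="bandpass", fs=fs)
    env_x = np.abs(hilbert(filtfilt(b, a, x)))  # instantaneous amplitude
    env_y = np.abs(hilbert(filtfilt(b, a, y)))
    return np.corrcoef(np.log(env_x**2), np.log(env_y**2))[0, 1]

# Toy example: two sources sharing a common drive, plus independent noise.
fs = 250
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(2)
shared = rng.normal(size=t.size)
x = shared + rng.normal(scale=0.5, size=t.size)
y = shared + rng.normal(scale=0.5, size=t.size)
print(envelope_correlation(x, y, fs))
```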

10.
We examined the effect of listening to two different types of music (with slow and fast rhythm), prior to supramaximal cycle exercise, on performance, heart rate, the concentration of lactate and ammonia in blood, and the concentration of catecholamines in plasma. Six male students participated in this study. After listening to slow rhythm or fast rhythm music for 20 min, the subjects performed supramaximal exercise for 45 s using a cycle ergometer. Listening to slow and fast rhythm music prior to supramaximal exercise did not significantly affect the mean power output. The plasma norepinephrine concentration immediately before the end of listening to slow rhythm music was significantly lower than before listening (p < 0.05). The plasma epinephrine concentration immediately before the end of listening to fast rhythm music was significantly higher than before listening (p < 0.05). The type of music had no effect on blood lactate and ammonia levels or on plasma catecholamine levels following exercise. In conclusion, listening to slow rhythm music decreases the plasma norepinephrine level, and listening to fast rhythm music increases the plasma epinephrine level. The type of music has no impact on power output during exercise.

11.
The importance of music in our daily life has given rise to an increased number of studies addressing the brain regions involved in its appreciation. Some of these studies controlled only for the familiarity of the stimuli, while others relied on pleasantness ratings, and others still on musical preferences. With a listening test and a functional magnetic resonance imaging (fMRI) experiment, we wished to clarify the role of familiarity in the brain correlates of music appreciation by controlling, in the same study, for both familiarity and musical preferences. First, we conducted a listening test in which participants rated the familiarity and liking of song excerpts from the pop/rock repertoire, allowing us to select a personalized set of stimuli per subject. Then, we used a passive listening paradigm in fMRI to study music appreciation in a naturalistic condition with increased ecological validity. Brain activation data revealed that broad emotion-related limbic and paralimbic regions as well as the reward circuitry were significantly more active for familiar relative to unfamiliar music. Smaller regions in the cingulate cortex and frontal lobe, including the motor cortex and Broca's area, were found to be more active in response to liked music compared to disliked music. Hence, familiarity seems to be a crucial factor in making listeners emotionally engaged with music, as revealed by the fMRI data.

12.
Despite the importance of perceptually separating signals from background noise, we still know little about how nonhuman animals solve this problem. Dip listening, an ability to catch meaningful ‘acoustic glimpses’ of a target signal when fluctuating background noise levels momentarily drop, constitutes one possible solution. Amplitude-modulated noises, however, can sometimes impair signal recognition through a process known as modulation masking. We asked whether fluctuating noise simulating a breeding chorus affects the ability of female green treefrogs (Hyla cinerea) to recognize male advertisement calls. Our analysis of recordings of green treefrog choruses reveals that their levels fluctuate primarily at rates below 10 Hz. In laboratory phonotaxis tests, we found no evidence for dip listening or modulation masking. Mean signal recognition thresholds in the presence of fluctuating chorus-like noises were never statistically different from those in the presence of a non-fluctuating control. An analysis of statistical effect sizes indicates that masker fluctuation rates, and the presence versus absence of fluctuations, had negligible effects on subject behavior. Together, our results suggest that females listening in natural settings should receive no benefits, nor experience any additional constraints, as a result of level fluctuations in the soundscape of green treefrog choruses.
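The chorus analysis described above amounts to computing a modulation spectrum: extract the amplitude envelope of the recording and inspect the spectrum of that envelope to see how fast the level fluctuates. A minimal sketch follows, assuming a mono recording as a NumPy array; the toy 5 Hz modulation and the Welch parameters are illustrative.

```python
import numpy as np
from scipy.signal import hilbert, welch

def modulation_spectrum(x, fs):
    """Spectrum of the amplitude envelope of signal x (modulation domain)."""
    env = np.abs(hilbert(x))            # amplitude envelope
    env = env - env.mean()              # remove DC so slow rates stand out
    return welch(env, fs=fs, nperseg=int(4 * fs))  # 4 s analysis windows

# Toy "chorus": noise whose level fluctuates at 5 Hz, below the 10 Hz bound.
fs = 8000
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(3)
chorus = (1 + 0.8 * np.sin(2 * np.pi * 5 * t)) * rng.normal(size=t.size)
f, pxx = modulation_spectrum(chorus, fs)
mask = f > 0
print(f[mask][np.argmax(pxx[mask])])    # dominant modulation rate (~5 Hz)
```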

13.
Bilateral cochlear implants aim to provide hearing in both ears for children who are deaf and to promote binaural/spatial hearing. Benefits are limited by mismatched devices and unilaterally driven development, which could compromise the normal integration of left- and right-ear input. We thus asked whether children hear a fused image (i.e., one vs. two sounds) from their bilateral implants and whether this “binaural fusion” reduces listening effort. Binaural fusion was assessed by asking 25 deaf children with cochlear implants and 24 peers with normal hearing whether they heard one or two sounds when listening to bilaterally presented acoustic click trains/electric pulses (250 Hz trains of 36 ms presented at 1 Hz). Reaction times and pupillary changes were recorded simultaneously to measure listening effort. Bilaterally implanted children heard one image of bilateral input less frequently than normal-hearing peers, particularly when intensity levels on each side were balanced. Binaural fusion declined as brainstem asymmetries increased and age at implantation decreased. Children implanted later had access to acoustic input prior to implantation owing to the progressive deterioration of their hearing. Both pupil diameter and reaction time increased as perception of binaural fusion decreased. The results indicate that, without binaural level cues, children have difficulty fusing input from their bilateral implants into one perceived sound, at the cost of increased listening effort. Brainstem asymmetries exacerbate this issue. By contrast, later implantation, reflecting longer access to bilateral acoustic hearing, may have supported the development of auditory pathways underlying binaural fusion. Improved integration of bilateral cochlear implant signals is required to improve children's binaural hearing.

14.
Musical training leads to sensory and motor neuroplastic changes in the human brain. Motivated by findings of an enlarged corpus callosum in musicians and asymmetric somatomotor representation in string players, we investigated the relationship between musical training, callosal anatomy, and interhemispheric functional symmetry during music listening. Functional symmetry was increased in musicians compared to nonmusicians, and in keyboardists compared to string players. This increased functional symmetry was prominent in visual and motor brain networks. Callosal size did not differ significantly between groups, except for the posterior callosum in musicians compared to nonmusicians. We conclude that the distinctive postural and kinematic symmetry of instrument playing cross-modally shapes information processing in sensory-motor cortical areas during music listening. This cross-modal plasticity suggests that motor training affects music perception.

15.
How do we understand the actions of other individuals if we can only hear them? Auditory mirror neurons respond both while monkeys perform hand or mouth actions and while they listen to sounds of similar actions. This system might be critical for auditory action understanding and language evolution. Preliminary evidence suggests that a similar system may exist in humans. Using fMRI, we searched for brain areas that respond both during motor execution and when individuals listened to the sound of an action made by the same effector. We show that a left-hemispheric temporo-parieto-premotor circuit is activated in both cases, providing evidence for a human auditory mirror system. In the left premotor cortex, a somatotopic pattern of activation was also observed: a dorsal cluster was more involved during listening and execution of hand actions, and a ventral cluster was more involved during listening and execution of mouth actions. Most of this system appears to be multimodal, because it also responds to the sight of similar actions. Finally, individuals who scored higher on an empathy scale activated this system more strongly, adding evidence for a possible link between the motor mirror system and empathy.

16.
When we speak, we provide ourselves with auditory speech input. Efficient monitoring of speech is often hypothesized to depend on matching the predicted sensory consequences of internal motor commands (the forward model) with actual sensory feedback. In this paper, we tested the forward-model hypothesis using functional magnetic resonance imaging. We administered an overt picture-naming task in which we parametrically reduced the quality of verbal feedback by noise masking. Presentation of the same auditory input in the absence of overt speech served as a listening control condition. Our results suggest that a match between predicted and actual sensory feedback results in inhibition or cancellation of auditory activity, because speaking with normal, unmasked feedback reduced activity in the auditory cortex compared to the listening control conditions. Moreover, during self-generated speech, activation in auditory cortex increased as the feedback quality of the self-generated speech decreased. We conclude that during speaking, early auditory cortex is involved in matching external signals with an internally generated model or prediction of sensory consequences, the locus of which may reside in auditory or higher-order brain areas. Matching at early auditory cortex may provide a very sensitive monitoring mechanism that highlights speech production errors at very early levels of processing and may efficiently determine the self-agency of speech input.

17.
In Senegal, sexual disorders pose a real problem of diagnosis and care, owing to patients' lack of information, sexual taboos, and the small number of reported cases. Patients with anejaculation of no organic or iatrogenic etiology were referred to psychiatry for supportive psychological consultation. The individual sessions identified psychosocial factors common to the circumstances leading up to and sustaining the psychogenic anejaculation. The first common factor was early sexual intercourse prior to puberty in all patients. In addition, we noted fear of potential incest accompanied by feelings of guilt, and a degree of over-appraisal of the anejaculation by partners; all of these factors may contribute to reinforcing the disorder. Providing a listening space, combined with adjustments based on a cognitive-behavioral approach, could bring relief to patients.

18.
Dancing and singing to music involve auditory-motor coordination and have been essential to human culture since ancient times. Although scholars have been trying to understand the evolutionary and developmental origins of music, early human developmental manifestations of auditory-motor interactions in music have not been fully investigated. Here we report limb movements and vocalizations in three- to four-month-old infants while they listened to music and while they were in silence. In the group analysis, we found no significant increase in the amount of movement, or in the relative power spectral density around the musical tempo, in the music condition compared to the silent condition. Intriguingly, however, two infants demonstrated striking increases in rhythmic movements, via kicking or arm-waving around the musical tempo, while listening to music. Monte Carlo statistics with phase-randomized surrogate data revealed that the limb movements of these individuals were significantly synchronized to the musical beat. Moreover, we found a clear increase in the formant variability of vocalizations in the group during music perception. These results suggest that infants at this age are already primed with their bodies to interact with music via limb movements and vocalizations.
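Phase-randomized surrogates, used above to test beat synchronization, preserve a signal's power spectrum while destroying its temporal structure: take the FFT, randomize the phases, and invert. Comparing a synchrony statistic against many surrogates then yields a Monte Carlo p-value. In the sketch below the movement signal is a toy 1-D array and the test statistic is power at the beat frequency; that statistic is our choice, not necessarily the authors'.

```python
import numpy as np

def phase_randomize(x, rng):
    """Surrogate with the same amplitude spectrum but random phases."""
    spec = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, spec.size)
    phases[0] = 0.0    # keep the DC component real
    phases[-1] = 0.0   # keep the Nyquist bin real for even-length signals
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=x.size)

def beat_power(x, fs, beat_hz):
    """Power of x at the musical beat frequency."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    return spec[np.argmin(np.abs(freqs - beat_hz))]

fs, beat = 60.0, 2.0   # 60 Hz movement sampling, 120 bpm beat
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(4)
movement = 0.3 * np.sin(2 * np.pi * beat * t) + rng.normal(size=t.size)

observed = beat_power(movement, fs, beat)
surrogates = [beat_power(phase_randomize(movement, rng), fs, beat)
              for _ in range(999)]
p = (1 + sum(s >= observed for s in surrogates)) / (1 + len(surrogates))
print(f"Monte Carlo p = {p:.3f}")
```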

19.
Some hearing-impaired persons with hearing aids complain of listening difficulty under reverberation. No method, however, is currently available in hearing aid fitting for evaluating the hearing difficulty caused by reverberation. In this study, we produced speech materials with a reverberation time of 2.02 s that mimicked a reverberant environment (a classroom). Speech materials with reverberation times of 0 and 1.01 s were also made. Listening tests were performed with these materials in hearing-impaired subjects and normal-hearing subjects in a soundproof booth. Listening tests were also conducted in a classroom. Our results showed that the speech material with a reverberation time of 2.02 s yielded lower listening-test scores in hearing-impaired subjects with both monaural and binaural hearing aids. Similar results were obtained in the reverberant environment. Our findings suggest the validity of using speech materials with different reverberation times to predict the listening performance of hearing-impaired hearing aid users under reverberation.

20.
Extensive research shows that inter-talker variability (i.e., changing the talker) affects recognition memory for speech signals. However, relatively little is known about the consequences of intra-talker variability (i.e., changes in speaking style within a talker) for the encoding of speech signals in memory. It is well established that speakers can modulate the characteristics of their own speech and produce a listener-oriented, intelligibility-enhancing speaking style in response to communication demands (e.g., when speaking to listeners with hearing impairment or non-native speakers of the language). Here we conducted two experiments to examine the role of speaking-style variation in spoken language processing. First, we examined the extent to which clear speech provides benefits in challenging listening environments (i.e., speech in noise). Second, we compared recognition memory for sentences produced in conversational and clear speaking styles. In both experiments, semantically normal and anomalous sentences were included to investigate the role of higher-level linguistic information in the processing of speaking-style variability. The results show that acoustic-phonetic modifications implemented in listener-oriented speech lead to improved speech recognition in challenging listening conditions and, crucially, to a substantial enhancement in recognition memory for sentences.
