Similar Literature
20 similar articles found (search time: 15 ms)
1.
In this study, we propose a novel estimate of listening effort based on electroencephalographic (EEG) data. The method translates our past findings, gained from evoked EEG activity, to oscillatory EEG activity. To test the technique, EEG data were recorded from experienced hearing aid users with moderate hearing loss while they wore their hearing aids. The hearing aid settings investigated were: a directional microphone combined with a noise reduction algorithm in a medium and a strong setting, the same configuration with noise reduction turned off, and a setting using omnidirectional microphones without any noise reduction. The results suggest that the EEG-based estimate of listening effort is a useful tool for mapping the effort exerted by the participants. In addition, they indicate that a directional processing mode can reduce listening effort in multitalker listening situations.

2.
Despite the importance of perceptually separating signals from background noise, we still know little about how nonhuman animals solve this problem. Dip listening, the ability to catch meaningful ‘acoustic glimpses’ of a target signal when fluctuating background noise levels momentarily drop, constitutes one possible solution. Amplitude-modulated noises, however, can sometimes impair signal recognition through a process known as modulation masking. We asked whether fluctuating noise simulating a breeding chorus affects the ability of female green treefrogs (Hyla cinerea) to recognize male advertisement calls. Our analysis of recordings of green treefrog choruses reveals that their levels fluctuate primarily at rates below 10 Hz. In laboratory phonotaxis tests, we found no evidence for dip listening or modulation masking. Mean signal recognition thresholds in the presence of fluctuating chorus-like noises were never statistically different from those in the presence of a non-fluctuating control. An analysis of statistical effect sizes indicates that masker fluctuation rates, and the presence versus absence of fluctuations, had negligible effects on subject behavior. Together, our results suggest that females listening in natural settings should receive no benefits, nor experience any additional constraints, as a result of level fluctuations in the soundscape of green treefrog choruses.

3.
Objectives

Previous studies investigating speech perception in noise have typically been conducted with static masker positions. The aim of this study was to investigate the effect of spatial separation of source and masker (spatial release from masking, SRM) in a moving masker setup and to evaluate the impact of adaptive beamforming in comparison with fixed directional microphones in cochlear implant (CI) users.

Design

Speech reception thresholds (SRT) were measured in S0N0 and in a moving masker setup (S0Nmove) in 12 normal hearing participants and 14 CI users (7 bilateral, 7 bimodal with a hearing aid in the contralateral ear). Speech processor settings were a moderately directional microphone, a fixed beamformer, or an adaptive beamformer. The moving noise source was generated by means of wave field synthesis and was moved smoothly in a half-circle from one ear to the contralateral ear. Noise was presented in one of two conditions: continuous or modulated.

Results

SRTs in the S0Nmove setup were significantly improved compared to the S0N0 setup for both the normal hearing control group and the bilateral group in continuous noise, and for the control group in modulated noise. There was no effect of subject group. A significant effect of directional sensitivity was found in the S0Nmove setup. In the bilateral group, the adaptive beamformer achieved lower SRTs than the fixed beamformer setting. Depending on masker type, adaptive beamforming substantially improved SRTs in both CI user groups, by about 3 dB (bimodal group) and 8 dB (bilateral group).

Conclusions

CI users showed SRM comparable to that of normal hearing subjects. In everyday listening situations with spatial separation of source and masker, directional microphones significantly improved speech perception, with individual improvements of up to 15 dB SNR. Users of bilateral speech processors with both directional microphones obtained the highest benefit.

4.
Radiotransmitted (RT) calls of monkeys equipped with laryngeal microtransmitters are compared with calls recorded by an external microphone (AT). Sharp attenuation of background noise and echoes yields better sonograms from RT than from AT sounds. Sensitive detection of unvoiced calls and phonatory noises provides insight into the motivational state of the animals and the mechanisms of their vocal production. However, the laryngophone acts as a low-pass filter that limits RT spectra to below 3 kHz. The constant distance and orientation between sound source and microphone permit absolute (low-pitched calls) or relative (high-pitched calls) intensity measurements. Generalizing these measurements should be possible with a specific weighting filter that reconstitutes the original energy of the calls. The system has interesting applications in behavioral and ecological studies.

5.
P. Hansen, Bioacoustics, 2013, 22(1): 51-68
ABSTRACT

Some acoustic signals produced by small insects are very low in amplitude and attenuate rapidly with distance. Achieving high-quality recordings of such signals normally requires specialised microphones or sound-insulated chambers. This paper presents a simple and efficient method for recording acoustic signals emitted by small sources. Its principle is based on two simultaneous digital recordings from two microphones: one records the ambient noise alone, while the other records the ambient noise plus the signal to be analysed. Both recordings are converted into digital files, and a simple subtraction between the two isolates the signal with a good signal-to-noise ratio. With this method of background noise removal, recording low-amplitude sounds in an uninsulated room with common microphones becomes possible. We applied the method to the study of 12 complete courtships of Drosophila melanogaster, and particularly to the analysis of the pulse sounds produced by the male in the presence of a female. The study focuses mainly on the rhythm of production of pulse trains over the course of the courtship.
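The subtraction step described in this abstract can be sketched in a few lines. This is an illustrative sketch, not the authors' code: all names (e.g. `remove_background`) are invented for the example, and the toy scene idealizes the method by assuming both microphones capture identical ambient noise, which real recordings only approximate.

```python
import numpy as np

def remove_background(signal_plus_noise, noise_only, gain=1.0):
    """Sample-wise subtraction of a simultaneously recorded
    noise-only channel from the signal channel."""
    n = min(len(signal_plus_noise), len(noise_only))
    return signal_plus_noise[:n] - gain * noise_only[:n]

# Toy demonstration: a faint 300 Hz tone buried in shared background noise.
fs = 8000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, fs)       # ambient noise heard by both mics
tone = 0.2 * np.sin(2 * np.pi * 300 * t)    # low-amplitude insect-like signal
mic_signal = background + tone              # microphone near the source
mic_reference = background                  # microphone recording ambient noise only
cleaned = remove_background(mic_signal, mic_reference)
```

In practice the two channels are never perfectly coherent, so a calibration gain (the `gain` parameter here) or a frequency-domain subtraction is typically needed.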

6.
This paper introduces passive wireless telemetry for high-frequency acoustic sensors. The focus is on the development, fabrication, and evaluation of wireless, battery-less SAW-IDT MEMS microphones for biomedical applications. Because they contain no batteries, the developed sensors are small, and the batch manufacturing strategy makes them inexpensive enough to be used as disposable sensors. A pulse-modulated surface acoustic wave interdigital transducer (SAW-IDT) sensing strategy has been formulated; it relies on detecting only the ac component of the acoustic pressure signal and does not require calibration. The strategy has been successfully implemented with an in-house fabricated SAW-IDT sensor and a variable capacitor that mimics the impedance change of a capacitive microphone. Wireless telemetry distances of up to 5 centimeters have been achieved. A silicon MEMS microphone to be used with the SAW-IDT device is being microfabricated and tested. The complete passive wireless sensor package will include the MEMS microphone wire-bonded on the SAW substrate and interrogated through an on-board antenna. This work breaks new ground by introducing sensor measurement at high (audio) frequencies with SAW-IDT sensors. The developed sensors can be used for wireless monitoring of body sounds in a number of applications, including monitoring breathing sounds in apnea patients, monitoring chest sounds after cardiac surgery, and feedback sensing in high-frequency chest compression (HFCC) vests used for respiratory ventilation. Another promising application is monitoring chest sounds in neonatal care units, where the miniature sensors will minimize discomfort for the newborns.

7.
Music or other background sounds are often played in barns as environmental enrichment for farm animals or to mask sudden disruptive noises. However, previous studies of the effects of this practice on nonhuman animal well-being and productivity have found contradictory results. This study monitored the vocal responses of piglets, as indicators of well-being, to evaluate the effect of various sounds played during two simulations of stressful farm procedures: (a) the 5 min during which the animals were held as if for castration and (b) the first 20 hr after weaning. The sound treatments included pink noise, music, vocalizations made by other piglets during actual castrations or the first hours after weaning, and silence (control). Pink noise and music were each presented both with and without a binaural beat in the delta-theta frequency range. In both the handling and weaning situations, none of the sound treatments reduced the piglets' call rate below that heard during the control. Piglets vocalized most during playback of pink noise and least during silence and playback of calls from other pigs. These results suggest that playing music or other sounds provides no improvement in conditions for piglets during handling and weaning.


9.
10.
Acoustic detection of termite infestations in urban trees (cited 4 times: 0 self-citations, 4 by others)
A portable, low-frequency acoustic system was used to detect termite infestations in urban trees. The likelihood of infestation was rated independently by a computer program and by an experienced listener who distinguished insect sounds from background noises. Because soil is a good insulator, termite sounds could be detected easily underneath infested trees despite high urban background noise. Termite sounds could also be detected in trunks, but background noise often made it difficult to identify termite signals unambiguously. High likelihoods of termite infestation were predicted at four live oak (Quercus virginiana Mill., Fagaceae), two loblolly pine (Pinus taeda L., Pinaceae), and two baldcypress (Taxodium distichum Rich., Taxodiaceae) trees that wood-baited traps had identified as infested with Coptotermes formosanus Shiraki. Infestations were also predicted at two pine trees with confirmed recoveries of Reticulitermes flavipes (Kollar). Low likelihoods of infestation were predicted in four oak trees where no termites were found. Additional tests were conducted in anechoic environments to determine the range of acoustic detectability and the feasibility of acoustically estimating termite population levels. There was a significant regression between the activity rate and the number of termites present in a wood trap block, with a minimum detectable number of approximately 50 workers per liter of wood. The success of these field tests suggests that currently available acoustic systems have considerable potential to detect and monitor hidden termite infestations in urban trees and around building perimeters, in addition to their present uses in buildings.

11.
Ranging, the ability to judge the distance to a sound source, depends on the presence of predictable patterns of attenuation. We measured long-range sound propagation in coastal waters to assess whether humpback whales might use frequency degradation cues to range singing whales. Two types of neural networks, a multi-layer and a single-layer perceptron, were trained to classify recorded sounds by distance traveled based on their frequency content. The multi-layer network successfully classified received sounds, demonstrating that the distorting effects of underwater propagation on frequency content provide sufficient cues to estimate source distance. Normalizing received sounds with respect to ambient noise levels increased the accuracy of distance estimates by single-layer perceptrons, indicating that familiarity with background noise can potentially improve a listening whale's ability to range. To assess whether frequency patterns predictive of source distance were likely to be perceived by whales, recordings were pre-processed using a computational model of the humpback whale's peripheral auditory system. Although signals processed with this model contained less information than the original recordings, neural networks trained on these physiologically based representations estimated source distance more accurately, suggesting that listening whales should be able to range singers using distance-dependent changes in frequency content.
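As a toy illustration of the single-layer-perceptron approach (not the authors' model or data), one can simulate the fact that high frequencies attenuate faster with distance and train a single logistic unit on a band-energy-ratio feature. Everything below, including the crude moving-average "propagation" model and all names, is an assumption made for the sketch:

```python
import numpy as np

rng = np.random.default_rng(4)

def make_sound(far):
    """Toy 'propagation': distant sounds lose high-frequency energy
    (here via a crude moving-average low-pass filter)."""
    s = rng.normal(0.0, 1.0, 1024)
    if far:
        s = np.convolve(s, np.ones(8) / 8, mode="same")
    return s

def band_ratio(s):
    """Feature: log ratio of high-band to low-band energy, which drops
    with distance because high frequencies attenuate faster."""
    spec = np.abs(np.fft.rfft(s)) ** 2
    half = len(spec) // 2
    return np.log(spec[half:].sum() / spec[:half].sum())

labels = np.array([0] * 50 + [1] * 50)                 # 0 = near, 1 = far
X = np.array([band_ratio(make_sound(far)) for far in labels])

# Single-layer perceptron (one logistic unit) trained by gradient descent.
w, b = 0.0, 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))
    w -= 0.1 * np.mean((p - labels) * X)
    b -= 0.1 * np.mean(p - labels)

pred = (1.0 / (1.0 + np.exp(-(w * X + b))) > 0.5).astype(int)
accuracy = np.mean(pred == labels)
```

The real study worked with measured underwater propagation and multi-band spectra; this sketch only shows why a single perceptron can suffice once a distance-dependent spectral feature exists.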

12.
Accelerometer, electret microphone, and piezoelectric disk acoustic systems were evaluated for their potential to detect hidden insect infestations in soil and in the interior structures of plants. Coleopteran grubs (the scarabaeids Phyllophaga spp. and Cyclocephala spp.) and the curculionids Diaprepes abbreviatus (L.) and Otiorhynchus sulcatus (F.) weighing 50-300 mg were detected easily in the laboratory and in the field, except under extremely windy or noisy conditions. Cephus cinctus Norton (Hymenoptera: Cephidae) larvae weighing 1-12 mg could be detected in small pots of wheat in the laboratory with moderate precautions to eliminate background noise. Insect sounds could be distinguished from background noises by differences in frequency and temporal patterns, but similarly sized insect species could not easily be distinguished from each other. Insect activity was highly variable among individuals and species, although D. abbreviatus grubs tended to be more active than those of O. sulcatus. Tests were done to compare acoustically predicted infestations with the contents of soil samples taken at the recording sites. Under laboratory or ideal field conditions, active insects within approximately 30 cm were identified with nearly 100% reliability; in field tests under adverse conditions, reliability decreased to approximately 75%. These results indicate that acoustic systems with vibration sensors have considerable potential as activity monitors in the laboratory and as field tools for rapid, nondestructive scouting and mapping of soil insect populations.

13.
Autonomous acoustic recorders are an increasingly popular method for low‐disturbance, large‐scale monitoring of sound‐producing animals, such as birds, anurans, bats, and other mammals. A specialized use of autonomous recording units (ARUs) is acoustic localization, in which a vocalizing animal is located spatially, usually by quantifying the time delay of arrival of its sound at an array of time‐synchronized microphones. To describe trends in the literature, identify considerations for field biologists who wish to use these systems, and suggest advancements that will improve the field, we comprehensively review published applications of wildlife localization in terrestrial environments. We describe the wide variety of methods used to complete the five steps of acoustic localization: (1) define the research question, (2) obtain or build a time‐synchronizing microphone array, (3) deploy the array to record sounds in the field, (4) process the recordings, and (5) determine animal location using position estimation algorithms. We find eight general purposes in ecology and animal behavior for localization systems: assessing individual animals' positions or movements, localizing multiple individuals simultaneously to study their interactions, determining animals' individual identities, quantifying sound amplitude or directionality, selecting subsets of sounds for further acoustic analysis, calculating species abundance, inferring territory boundaries or habitat use, and separating animal sounds from background noise to improve species classification. We find that the labor‐intensive steps of processing recordings and estimating animal positions have not yet been automated. In the near future, we expect that increased availability of recording hardware, development of automated and open‐source localization software, and improvement of automated sound classification algorithms will broaden the use of acoustic localization. With these three advances, ecologists will be better able to embrace acoustic localization, enabling low‐disturbance, large‐scale collection of animal position data.
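The core of the position-estimation step, measuring the time delay of arrival between a pair of synchronized microphones, is commonly done by finding the peak of their cross-correlation. The sketch below is a minimal illustration of that idea (the names and toy scene are invented, not code from any system in the review):

```python
import numpy as np

def tdoa_samples(x, y):
    """Delay (in samples) of y relative to x, from the peak of the
    full cross-correlation."""
    corr = np.correlate(y, x, mode="full")
    return int(np.argmax(corr)) - (len(x) - 1)

# Toy scene: the same call reaches microphone 2 twenty-five samples later.
rng = np.random.default_rng(1)
call = rng.normal(0.0, 1.0, 2000)           # broadband animal call
delay = 25
mic1 = np.concatenate([call, np.zeros(delay)])
mic2 = np.concatenate([np.zeros(delay), call])
estimated_delay = tdoa_samples(mic1, mic2)

# Dividing by the sample rate gives the time delay; each microphone pair's
# delay constrains the source to a hyperbola, and intersecting the
# hyperbolas from several pairs yields the position estimate.
```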

14.
Peter P. Morgan, CMAJ, 1984, 130(10): 1255-1258
A case of musicogenic epilepsy is reported in which the seizures were precipitated by singing voices. Some singers' voices were particularly epileptogenic, and some of their songs, but not others, would precipitate a seizure. A study of the "offending" songs and singers did not reveal a common key, chord, harmonic interval, pitch or rhythm, and the emotional feeling or intensity of the music did not seem to be relevant. However, the voices that caused the seizures had a throaty, "metallic" quality. Such a singing voice results from incorrect positioning of the larynx, which is not allowed to descend fully during singing; consequently, the vowel sounds produced must be manipulated by the lips or jaw to be distinguished. This trait is most common in singers with a low voice range who sing softly and use a microphone; it is not seen in trained operatic or musical theatre singers. Repeated testing showed that the seizures in this patient were caused by listening to singers who positioned the larynx incorrectly.

15.

Background

Improvement of the cochlear implant (CI) front-end signal acquisition is needed to increase speech recognition in noisy environments. To suppress directional noise, we introduce a speech-enhancement algorithm based on microphone array beamforming and spectral estimation. The experimental results indicate that this method is robust to directional mobile noise and strongly enhances the desired speech, thereby improving the performance of CI devices in noisy environments.

Methods

Spectrum estimation and array beamforming were combined to suppress the ambient noise. The directivity coefficient was estimated in noise-only intervals and updated to track the mobile noise.

Results

The proposed algorithm was implemented in the CI speech strategy. A maximally flat (Maxflat) filter was used to obtain fractional sampling points, and a cepstrum-based method distinguished speech frames from noise frames. Broadband adjustment coefficients were added to compensate for the energy loss in the low-frequency band.

Discussion

The approximation of the directivity coefficient is tested and its errors are discussed. We also analyze the algorithm's constraints for noise estimation and distortion in CI processing. The performance of the proposed algorithm is analyzed and compared with other prevalent methods.

Conclusions

A hardware platform was constructed for the experiments. The speech-enhancement results showed that the algorithm suppresses non-stationary noise with a high SNR improvement. The proposed algorithm performed well in the speech enhancement experiments and mobile testing, and the signal distortion results indicate that it is robust, achieving high SNR improvement with low speech distortion.
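The abstract does not give the beamformer's internals, but the fixed delay-and-sum idea that array beamforming builds on can be sketched as follows. This is a hedged, minimal illustration under stated assumptions (integer sample delays, a two-microphone toy scene, invented names), not the paper's algorithm:

```python
import numpy as np

def delay_and_sum(mic_signals, delays):
    """Fixed beamformer: advance each channel by an integer number of
    samples so the target direction lines up, then average."""
    length = min(len(s) - d for s, d in zip(mic_signals, delays))
    aligned = [s[d:d + length] for s, d in zip(mic_signals, delays)]
    return np.mean(aligned, axis=0)

# Toy scene: target "speech" from the front (no inter-microphone delay),
# noise from the side that reaches microphone 2 eight samples earlier.
rng = np.random.default_rng(2)
n = 4000
speech = np.sin(2 * np.pi * 0.02 * np.arange(n))  # stand-in for the target
noise = rng.normal(0.0, 1.0, n)
mic1 = speech + noise
mic2 = speech.copy()
mic2[:-8] += noise[8:]                            # side noise, 8 samples early
enhanced = delay_and_sum([mic1, mic2], [0, 0])    # steer toward the front
# Averaging keeps the aligned speech intact while the misaligned noise
# partially cancels, improving the signal-to-noise ratio.
```

Real systems like the one described add fractional delays (hence the Maxflat filter) and adapt the weights to the estimated noise direction.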

16.
Summary. To study the diets of individual animals in the context of intraspecific resource partitioning, it is desirable to detect what individuals are eating without disturbing them. Animals such as slow-moving molluscs on two-dimensional algal foods would be convenient to study, but the mouth is usually difficult to see, especially in limpets. However, one can often hear how an herbivorous mollusc is feeding. Even when the mouth region can be checked for feeding movement, feeding noises can indicate to what degree a mollusc is licking microscopic material off the surface of a plant versus biting into the plant, though licking microscopic material off plants seems to be rare. Noises also indicate the food's texture, identifying the food species when several different algae are near the mollusc's mouth. Comparing various molluscan taxa, differences in radular structure and movement are associated with different feeding noises, even when different molluscs are eating the same alga. Sound thus helps specify which species are feeding where molluscs are close together. Feeding is most common on wet surfaces at night. While the molluscs are above water, or less than 5 cm deep in calm water, several listening methods are useful after some practice. Even the unaided ear can hear emerged molluscs rasping resonant kelps. One can detect rasping by molluscs greater than 1 cm in length by gently contacting the alga closest to the mouth with a stethoscope or with a gum rubber tube sealed against one's ear. A cassette tape recorder with a contact microphone and headphones is useful for both emerged and submerged animals. Representative feeding noises have been documented using oscillograms from tape recordings. Analogous sounds in both terrestrial and marine environments can be useful in numerous behavioral studies.

17.
The subjective representation of the sounds delivered to a human listener's two ears is closely associated with the interaural delay and interaural correlation of the two-ear sounds. When the two-ear sounds, e.g., arbitrary noises, arrive simultaneously, the single auditory image of binaurally identical noises becomes increasingly diffuse, and eventually separates into two auditory images, as the interaural correlation decreases. When the interaural delay increases from zero to several milliseconds, the auditory image of binaurally identical noises likewise changes from a single image into two distinct images. However, the effects of these two factors had not previously been measured in the same group of participants. This study examined the impact of interaural correlation and delay on detecting a binaurally uncorrelated fragment (interaural correlation = 0) embedded in binaurally correlated noises (a binaural gap, or break in interaural correlation). We found that the minimum duration of the binaural gap required for detection (the duration threshold) increased exponentially as the interaural delay between the binaurally identical noises increased linearly from 0 to 8 ms. When no interaural delay was introduced, the duration threshold also increased exponentially as the interaural correlation of the binaurally correlated noises decreased linearly from 1 to 0.4. For the listeners in this study, the two effects were linearly related: a 1 ms increase in interaural delay raised the duration threshold by about as much as a 0.07 decrease in interaural correlation. Our results imply that a tradeoff may exist between the impacts of interaural correlation and interaural delay on the subjective representation of sounds delivered to the two ears.
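Noise pairs with a prescribed interaural correlation, as used in studies like this one, are commonly generated by mixing a shared noise with an independent noise. The sketch below assumes that standard mixing construction; the function name and parameters are illustrative, not taken from the paper:

```python
import numpy as np

def correlated_noise_pair(n, rho, rng):
    """Left/right noise with interaural correlation rho, built by mixing
    a shared noise with an independent one."""
    shared = rng.normal(0.0, 1.0, n)
    independent = rng.normal(0.0, 1.0, n)
    left = shared
    right = rho * shared + np.sqrt(1.0 - rho ** 2) * independent
    return left, right

rng = np.random.default_rng(3)
left, right = correlated_noise_pair(200_000, 0.4, rng)
measured_rho = np.corrcoef(left, right)[0, 1]
# measured_rho is close to the requested 0.4; setting rho = 0 yields a
# binaurally uncorrelated fragment like the "binaural gap" targets.
```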

18.
The echolocation calls of Tadarida teniotis were studied in an outdoor flight enclosure (captive individuals) and in the wild, using single microphones or an array of four microphones. Calls were characterized by measurements of 10 call variables. Comparison of individual calls recorded on four microphones arrayed in a tetrahedron, with 1 m between each microphone, revealed that not all calls were equally detectable by every microphone, but that there were no significant differences in the call features obtained from the different microphones. A comparison of 47 calls recorded by all four microphones showed no significant differences in the features of the four recordings of each call. Analysis of calls of five individuals flying singly in an outdoor flight cage revealed significant individual differences in call features. In the field, T. teniotis used long, narrowband search-phase calls, usually without harmonics. Analysis of 1876 search-phase echolocation calls of T. teniotis recorded in the field in Israel and Greece in 2002, 2005 and 2006 showed significant year-to-year and site-to-site differences in some call features. When flying in the presence of conspecifics, T. teniotis changed their echolocation calls. We found a range of different buzzes in the wild and, based on their structure, attempted to classify them as feeding and social buzzes. The features of the individual calls comprising buzzes differed significantly among buzzes, yet there were no consistent differences between what we classified as feeding and social buzzes.

19.
We studied the effects of acoustic context on active and passive discrimination of moving sound signals. Different contexts were created by reversing the roles of standard and deviant stimuli in the oddball blocks while their acoustical features were kept the same. Three types of sounds were used as standard or deviant stimuli in different blocks: stationary midline noises and two kinds of moving sounds (smooth and abrupt) moving to the left or right of the midline. Auditory event-related potentials (ERPs) were recorded during passive listening (with the sound stimulation ignored), and mismatch negativity potentials (MMNs) were obtained. Active discrimination of sound movements was measured by the hit rate (percent correct responses), the false alarm rate, and the reaction time. The influence of stimulus context on active and passive discrimination of the moving sound stimuli was reflected in the phenomenon known as the deviance-direction effect: the hit rate and MMN amplitude were higher when the deviant moved faster than the standard. The MMN amplitude was more responsive to the velocity of the sound stimuli than the hit rate and false alarm rate were. The psychophysical measurements in the reversed contexts suggest that smooth and abrupt sound movements may belong to the same perceptual category (moving sounds), while the stationary stimuli form another perceptual category.

20.
Introduction

Physicians' expectations concerning e‑Health, and the barriers they perceive to its implementation in clinical practice, are scarcely reported in the literature. The purpose of this study was to assess these aspects of cardiovascular e‑Health.

Methods

A survey was sent to members of the Netherlands Society of Cardiology. In total, the questionnaire contained 30 questions about five topics: personal use of smartphones, digital communication between respondents and patients, current e‑Health implementation in clinical practice, expectations about e‑Health and perceived barriers for e‑Health implementation. Age, personal use of smartphones and professional environment were noted as baseline characteristics.

Results

In total, 255 respondents completed the questionnaire (response rate 25%). Of these, 89.4% considered e‑Health to be clinically beneficial and expected it to improve patient satisfaction (90.2%), but also to increase the workload (83.9%). Age was a negative predictor, and personal use of smartphones a positive predictor, of having high expectations. Lack of reimbursement was identified by 66.7% of respondents as a barrier to e‑Health implementation, as were a lack of reliable devices (52.9%) and a lack of data integration with electronic medical records (EMRs) (69.4%).

Conclusion

Cardiologists are in general positive about the possibilities of e‑Health implementation in routine clinical care; however, they identify deficient data integration into the EMR, reimbursement issues and lack of reliable devices as major barriers. Age and personal use of smartphones are predictors of expectations of e‑Health, but the professional working environment is not.

