Similar Literature
20 similar documents found
1.
This paper introduces passive wireless telemetry based operation for high-frequency acoustic sensors. The focus is on the development, fabrication, and evaluation of wireless, battery-less SAW-IDT MEMS microphones for biomedical applications. Because they contain no batteries, the developed sensors are small, and the batch manufacturing strategy makes them inexpensive, which enables their use as disposable sensors. A pulse-modulated surface acoustic wave interdigital transducer (SAW-IDT) based sensing strategy has been formulated. The sensing strategy relies on detecting only the ac component of the acoustic pressure signal and does not require calibration. The proposed sensing strategy has been successfully implemented on an in-house fabricated SAW-IDT sensor and a variable capacitor that mimics the impedance change of a capacitive microphone. Wireless telemetry distances of up to 5 centimeters have been achieved. A silicon MEMS microphone to be used with the SAW-IDT device is being microfabricated and tested. The complete passive wireless sensor package will include the MEMS microphone wire-bonded on the SAW substrate and interrogated through an on-board antenna. This work on acoustic sensors breaks new ground by introducing high-frequency (i.e., audio-frequency) sensor measurement utilizing SAW-IDT sensors. The developed sensors can be used for wireless monitoring of body sounds in a number of different applications, including monitoring breathing sounds in apnea patients, monitoring chest sounds after cardiac surgery, and feedback sensing in high-frequency chest compression (HFCC) vests used for respiratory therapy. Another promising application is monitoring chest sounds in neonatal care units, where the miniature sensors will minimize discomfort for the newborns.

2.
ABSTRACT

We introduce an inexpensive electronic technique for monitoring the temporal aspects of any captive animal's acoustic signals. The electronic apparatus, attached to a data acquisition unit and personal computer, compares microphone output to a pre-set level and stores calling/non-calling data to disk. Total time calling and temporal signaling patterns of up to 256 individuals can be monitored for indefinite lengths of time. Sampling rate is adjustable, with a maximum rate of 6 samples/microphone/second. The capabilities of the system are illustrated with the field cricket Gryllus integer. Temporal aspects of acoustic signaling are discussed in terms of monitoring time scale and recognition of individual variation, energetics research, and hypothesis testing of the costs and benefits associated with mating success and predation.
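The thresholding scheme described above is simple enough to sketch in software. The following Python snippet is a minimal, hypothetical re-creation (the published apparatus is a hardware comparator feeding a data acquisition unit): it compares each channel's level against a preset threshold and accumulates calling/non-calling records, one per microphone per sample.

```python
import numpy as np

def log_calling_activity(levels, threshold, timestamps):
    """Compare per-channel signal levels to a preset threshold.

    levels     : array of shape (n_samples, n_channels) with microphone output levels
    threshold  : preset level (scalar or per-channel array)
    timestamps : sample times in seconds, length n_samples

    Returns one (time, channel, calling) record per sample per channel,
    mirroring the calling/non-calling data the apparatus writes to disk.
    """
    calling = levels > threshold
    records = []
    for i, t in enumerate(timestamps):
        for ch in range(levels.shape[1]):
            records.append((t, ch, bool(calling[i, ch])))
    return records

# Example: 256 channels sampled at the maximum rate of 6 samples/channel/second
rng = np.random.default_rng(0)
levels = rng.random((6, 256))               # one second of placeholder levels
times = np.arange(6) / 6.0
records = log_calling_activity(levels, threshold=0.9, timestamps=times)
total_time_calling = sum(r[2] for r in records) / 6.0   # seconds of calling, summed over channels
```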

3.
ABSTRACT: Objectives: Many microphones have been developed to meet the implantability requirements of the totally implantable cochlear implant (TICI). However, a biocompatible microphone that does not destroy the intactness of the ossicular chain remains under investigation. Such an implantable floating piezoelectric microphone (FPM) has been manufactured and shows efficient electroacoustic performance in in vitro tests at our lab. We examined whether it picks up sound sensitively from the intact ossicular chain and whether it could be an optimal implantable microphone. METHODS: Controlled animal experiment: five adult cats (eight ears) were sacrificed as the model to test the electroacoustic performance of the FPM. Three groups were studied: (1) experiment group (on malleus): the FPM glued onto the handle of the malleus of the intact ossicular chain; (2) negative control group (in vivo): the FPM merely hung in the tympanic cavity; (3) positive control group (Hy-M30): a HiFi commercial microphone placed close to the experimental ear. The testing speaker played pure tones in order from 0.25 to 8.0 kHz. The FPM inside the ear and the HiFi microphone simultaneously picked up the acoustic vibration, which was recorded as .wav files for analysis. RESULTS: The FPM transduced acoustic vibration sensitively and with a flat response, as in the in vitro test, at frequencies above 2.0 kHz, but inefficiently below 1.0 kHz owing to its excessive mass. Although the HiFi microphone performed more efficiently than the FPM, there was no significant difference at 3.0 kHz and 8.0 kHz. CONCLUSIONS: It is feasible to develop such an implantable FPM for future TICI and totally implantable hearing aid (TIHA) systems, provided that advances in microelectromechanical systems (MEMS) and piezoelectric ceramic materials are applied to reduce its weight and size.
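The electroacoustic comparison described in the Methods — pure tones recorded simultaneously by the FPM and the reference microphone as .wav files — can be summarized per frequency by a relative level measurement. The sketch below is a generic illustration, not the authors' analysis code; the filenames and the 16-bit PCM assumption are hypothetical.

```python
import numpy as np
from scipy.io import wavfile

def tone_level_db(path):
    """RMS level (dBFS) of one recorded pure-tone .wav file (assumes 16-bit PCM)."""
    fs, x = wavfile.read(path)
    x = x.astype(np.float64)
    if x.ndim > 1:                       # average channels if stereo
        x = x.mean(axis=1)
    x /= np.iinfo(np.int16).max
    rms = np.sqrt(np.mean(x ** 2))
    return 20.0 * np.log10(rms + 1e-12)

# Hypothetical file naming: one recording per pure-tone frequency and per microphone.
freqs_khz = [0.25, 0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0]
fpm_levels = [tone_level_db(f"fpm_{f}kHz.wav") for f in freqs_khz]
hifi_levels = [tone_level_db(f"hifi_{f}kHz.wav") for f in freqs_khz]
# Frequency response of the FPM relative to the reference (HiFi) microphone, in dB:
response_re_hifi = np.subtract(fpm_levels, hifi_levels)
```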

4.
This work proposes a new online monitoring method to assist during laser osteotomy. The method differentiates the type of ablated tissue and the applied dose of laser energy. The setup analyzes the laser-induced acoustic emission detected by an airborne microphone sensor. The analysis of the acoustic signals is carried out using a machine learning algorithm that is pre-trained in a supervised manner. The efficiency of the method is experimentally evaluated with several types of tissue: skin, fat, muscle, and bone. Several state-of-the-art machine learning frameworks are compared, with resulting classification accuracies in the range of 84–99%. It is shown that the datasets for training the machine learning algorithms are easy to collect under real-life conditions. In the future, this method could assist surgeons during laser osteotomy, minimizing damage to nearby healthy tissue and providing cleaner removal of pathologic tissue.
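The abstract does not specify which features or classifiers were used, so the following is only a minimal sketch of such a pipeline under common assumptions: short-time spectral features plus an off-the-shelf classifier, with random placeholder data standing in for the recorded acoustic emissions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def spectral_features(emission, n_fft=1024):
    """Normalized magnitude spectrum of one laser-induced acoustic emission."""
    spectrum = np.abs(np.fft.rfft(emission, n=n_fft))
    return spectrum / (np.linalg.norm(spectrum) + 1e-12)

# Placeholder dataset: one recording per ablation event, labelled by tissue type
# (0 = skin, 1 = fat, 2 = muscle, 3 = bone). Real recordings would replace this.
rng = np.random.default_rng(1)
X = np.array([spectral_features(rng.standard_normal(4096)) for _ in range(200)])
y = rng.integers(0, 4, size=200)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)    # cross-validated tissue-type accuracy
```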

5.
1. The directionality of an echolocation system is determined by the acoustic properties of both the emitter and the receiver, i.e., by the radiation pattern of the emitted pulse and the directionality of the external ears. We measured the directionality of the echolocation system of the greater mustache bat (Pteronotus parnellii) at the 30 kHz, 60 kHz and 90 kHz harmonics of its echolocation pulse by summing, at points throughout the frontal sound field, the echo attenuation due to the spread of pulse energy and the attenuation due to the directionality of its external ears. The pulse radiation pattern at the 3 harmonics was measured by comparing the output of a microphone moved throughout the frontal sound field against a second reference microphone at the center of the field. External ear directionality at the 3 harmonics was measured by presenting free-field sounds throughout the frontal sound field and recording the intensity thresholds of cochlear microphonic potentials and of monaural neurons in the inferior colliculus tuned to one of the 3 harmonics. 2. When compared with ear directionality alone, the echolocation system was found to be more directional toward the center of the sound field in several respects. At all harmonics, attenuation of sounds originating in the peripheral part of the field was increased by 10 to 13 dB. Areas of maximum sound intensity contracted toward the center of the field. Also, the isointensity contours of the echolocation system were more radially symmetrical about the center of the field. 3. At 60 kHz, sound intensity along the azimuth within the echolocation system was nearly constant within 26 degrees to either side of the center of the field. This suggests that the radiation pattern of the echolocation pulse and the directionality of the external ears complement one another to produce an acoustic environment at the center of the sound field in which stimulus intensity is stabilized, allowing more effective analysis of various aspects of the echolocation target. In particular, we suggest that this intensity stabilization may allow the bat to more effectively resolve the interaural intensity differences it uses to localize prey. 4. Predictions of the azimuthal spatial tuning of binaurally sensitive neurons in the inferior colliculus within the echolocation system were compared with their spatial tuning when only ear directionality is considered.(ABSTRACT TRUNCATED AT 400 WORDS)
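The "summing" step is straightforward because attenuations expressed in decibels add. The toy Python sketch below uses hypothetical Gaussian-like attenuation patterns, not the measured bat data, purely to illustrate how the combined echolocation-system directionality is obtained.

```python
import numpy as np

# Attenuation expressed in dB adds, so the echolocation system's directionality
# is the sum of the emission pattern and the ear directionality at each point.
azimuth = np.linspace(-90.0, 90.0, 181)                 # degrees from the midline
emission_att = 12.0 * (azimuth / 60.0) ** 2             # hypothetical pulse radiation pattern (dB)
ear_att = 12.0 * ((azimuth - 10.0) / 60.0) ** 2         # hypothetical ear directionality (dB)
system_att = emission_att + ear_att                     # combined echolocation-system attenuation
best_direction = azimuth[np.argmin(system_att)]         # most sensitive direction (~ +5 degrees)
```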

6.
Summary: The directionality of sound emission by a horseshoe bat (Rhinolophus ferrumequinum) has been determined for the constant frequency component of its orientation sounds. The bat was fixed in the center of an acoustic perimeter, and the SPL of the orientation sounds was measured with a scanning microphone at different angles and compared with the SPL measured by another microphone located in the direction perpendicular to the plane of the horseshoe-like structure of the nose-leaf. The maximum SPL was always found in this direction, which also corresponds to the flight direction of a bat in horizontal flight. Above and lateral to this direction the SPL decreases steadily, with -6 dB points at 24° above and 23° lateral. Below the flight direction we found a prominent side lobe with a -6 dB point at 64°. When the present data are combined with measurements of the behavioral directionality of hearing at the same frequency (Grinnell and Schnitzler, 1977), the directionality diagram of the entire echolocation system is very narrow and points in the flight direction. The prominent downward side lobe of emission does not conspicuously increase echolocation effectiveness in the direction of the ground, since hearing sensitivity falls off steeply in that direction. However, without this downward beam of emission, signals from below the bat would be that much less effective. Interference with the structure of the nose-leaf, by covering the upper part with vaseline or plugging the left nostril, destroyed the smoothness of the normal sound field and demonstrated that this complex organ is a highly functional structure optimized in the course of evolution. With differences in mood or attention, the emitted pulses varied by as much as 20 dB (80–100 dB). The emission directionality pattern also varied: in most cases, as orientation sounds increased in SPL, the acoustic beam became narrower. Supported by Deutsche Forschungsgemeinschaft, grant No. Schn 138/1-6, Stiftung Volkswagenwerk, grant No. 111 858, and the Alexander von Humboldt Stiftung. We thank W. Hollerbach and C. Nitsche for technical assistance.

7.
The source-filter theory of vocal production supports the idea that acoustic signatures are preferentially coded by the fundamental frequency (source-induced variability) and the distribution of energy across the frequency spectrum (filter-induced variability). By investigating the acoustic parameters supporting individuality in lamb bleats, a vocalization which mediates recognition by ewes, here we show that amplitude modulation – an acoustic feature largely independent of the shape of the vocal tract – can also be an important cue defining an individual vocal signature. Female sheep (Ovis aries) show an acoustic preference for their own lamb. Although playback experiments have shown that this preference is established soon after birth and relies on a unique vocal signature contained in the bleats of the lamb, the physical parameters that encode this individual identity remained poorly identified. We recorded 152 bleats from 13 fifteen-day-old lambs and analyzed their acoustic structure with four complementary statistical methods (ANOVA, potential for individual identity coding PIC, entropy calculation 2Hs, discriminant function analysis DFA). Although there were slight differences in the acoustic parameters identified by the four methods, the individual signature relies on both the temporal and frequency domains. The coding of identity is thus multi-parametric and integrates modulation of amplitude and energy parameters. Specifically, the contribution of the amplitude modulation is important, together with the fundamental frequency F0 and the distribution of energy in the frequency spectrum.
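Of the four statistical methods, discriminant function analysis (DFA) is the one most directly tied to classifying caller identity. The sketch below shows the general form of such an analysis with scikit-learn on a random placeholder feature table (the real predictors would be the measured acoustic parameters); it is an illustration of the method, not the authors' analysis.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Placeholder feature table: one row per bleat, columns = measured acoustic
# parameters (e.g. F0, amplitude-modulation rate, energy quartiles);
# labels = lamb identity (13 lambs, 152 bleats, as in the study design).
rng = np.random.default_rng(2)
lamb_means = rng.standard_normal((13, 6))               # per-lamb "signature" offsets
X = np.repeat(lamb_means, 12, axis=0)[:152] + rng.standard_normal((152, 6))
y = np.repeat(np.arange(13), 12)[:152]

dfa = LinearDiscriminantAnalysis()
scores = cross_val_score(dfa, X, y, cv=5)               # chance level would be ~1/13
```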

8.
The link between stapedius muscle activity and the acoustic structure of vocalization was analysed in cocks aged from 20–30 to 90–100 days. The results show that stapedius muscle activation depends on the acoustic structure of vocalization and changes during vocal development. This dependence was observed in spontaneous calls and in vocalizations elicited by stimulating the mesencephalic calling area. In 30-day-old cocks, the stapedius muscle EMG response is never associated with vocalizations, whose acoustic energy at this age is always distributed at frequencies higher than 2000 Hz. The coupling between vocalization and stapedius muscle activity begins later, when birds produce vocalizations with acoustic energy shifted towards lower frequencies. Overall, stapedius muscle activity is related to a bird's production of high-amplitude, low-frequency sounds. These results support the hypothesis that the primary role of the stapedius muscle during normal vocal development is to dampen the amplitude of low-frequency energy that reaches the cochlea during vocalization.

9.
Current laser atherectomy technologies for treating patients with challenging-to-cross chronic total occlusions using a step-by-step (SBS) approach (without a leading guide wire) lack real-time monitoring of the ablated tissues and carry a risk of vessel perforation. We present the first post hoc classification of ablated tissues using acoustic signals recorded by a nearby microphone during five atherectomy procedures performed with the 355 nm solid-state Auryon laser device in an SBS approach, some with highly severe calcification. Using our machine-learning algorithm, classification of these recorded ablation signals from five patients distinguished arterial from non-arterial wall material with 93.7% accuracy. While still very preliminary and requiring a larger study and, thereafter, a commercial device, the results of this first acoustic post-classification in SBS cases are very promising. This study implies, as a general statement, that online recording of the acoustic signals using a noncontact microphone may potentially serve for online classification of the ablated tissue in SBS cases. This technology could be used to confirm correct positioning in the vasculature and thereby further reduce the risk of perforation when using 355 nm laser atherectomy in such procedures.

10.
Individually specific acoustic signals in birds are used in territorial defence. These signals reduce energy expenditure by allowing individual recognition between rivals and assessment of the associated threat levels. Mechanisms and acoustic cues used for individual recognition appear to be versatile among birds. However, most studies so far have been conducted on oscine species. Few studies have focused on exactly how the potential for individual recognition changes with distance between the signaller and the receiver. We studied a nocturnally active rail species, the corncrake, which utters a seemingly simple disyllabic call. The inner call structure, however, is quite complex and is expressed as the intervals between maximal amplitude peaks, called pulse-to-pulse durations (PPD). The inner call is characterized by very low within-individual variation and high between-individual differences. These variations and differences enable recognition of individuals. We conducted propagation experiments in a natural corncrake habitat. We found that PPD was not affected by transmission. Correct individual identification was possible regardless of the distance and the position of the microphone above the ground. Results for sounds propagated through the vegetation from the most extreme distance were even better than for those transmitted above the vegetation. These results support the idea that PPD structure has evolved under selection favouring individual recognition in a species signalling at night, in a dense environment and close to the ground.
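Pulse-to-pulse durations are simply the intervals between successive maximal amplitude peaks within a syllable. A minimal way to extract them, assuming an envelope-plus-peak-picking approach (the paper's exact procedure may differ), is sketched below on a synthetic pulse train.

```python
import numpy as np
from scipy.signal import find_peaks, hilbert

def pulse_to_pulse_durations(call, fs, min_separation_s=0.001):
    """Intervals (s) between successive amplitude peaks (PPD) within one syllable."""
    envelope = np.abs(hilbert(call))                     # amplitude envelope
    peaks, _ = find_peaks(envelope,
                          distance=int(min_separation_s * fs),
                          height=0.2 * envelope.max())   # ignore low-level ripple
    return np.diff(peaks) / fs

# Synthetic test: three 3 kHz pulses spaced 5 ms apart
fs = 44_100
t = np.arange(int(0.02 * fs)) / fs
pulses = sum(np.exp(-((t - t0) ** 2) / (2 * 0.0003 ** 2)) for t0 in (0.004, 0.009, 0.014))
call = pulses * np.sin(2 * np.pi * 3000 * t)
ppd = pulse_to_pulse_durations(call, fs)                 # ~ [0.005, 0.005]
```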

11.
Within an area of about 85 hm² in Xiaoheshan Forest Park in the suburbs of Hangzhou, the songs of different individuals of the Brownish-flanked Bush Warbler (Cettia fortipes), continuously distributed in the same habitat during the summer breeding season, were recorded with a SHARP CE-151 recorder (frequency response 30–14,000 Hz) and a highly directional microphone (frequency response 40–14,000 Hz). The songs were analyzed and compared using a computer sound-spectrogram analysis system with respect to phrase structure, sonogram structure, time-domain and frequency-domain characteristics, and short-time energy. Six distinct song types were found within this single species in one small area of the same habitat. The song types differ in pitch, their structures differ greatly, and most acoustic parameters also differ significantly or highly significantly between them. This song diversity may be a manifestation of breeding competition in acoustic behavior.

12.
We demonstrate that natural acoustic signals like speech or music contain synchronous phase information across multiple frequency bands and show how to extract this information using a spiking neural network. This network model is motivated by common neurophysiological findings in the auditory brainstem and midbrain of several species. A computer simulation of the model was tested by applying spoken vowels and organ pipe tones. As expected, spikes occurred synchronously in the activated frequency bands. This phase information may be used for sound separation with one microphone or sound localization with two microphones.
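The core idea — that phase-locked spikes in different frequency channels line up once per fundamental period — can be illustrated without a full spiking-network simulation. The sketch below is a deliberately crude stand-in: a bandpass filterbank whose "spikes" are positive-going zero crossings, applied to a vowel-like harmonic signal. It is not the authors' neuron model.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def band_spikes(x, fs, low, high, threshold=0.0):
    """'Spike' times (s) at positive-going threshold crossings of one bandpass
    channel -- a crude stand-in for a phase-locking auditory neuron."""
    sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
    y = sosfiltfilt(sos, x)
    crossings = np.flatnonzero((y[:-1] < threshold) & (y[1:] >= threshold))
    return crossings / fs

# Vowel-like test signal: harmonics of a 100 Hz fundamental
fs = 16_000
t = np.arange(fs) / fs
x = sum(np.sin(2 * np.pi * 100 * k * t) for k in (1, 2, 3, 4, 5))

spikes_f0 = band_spikes(x, fs, 80, 150)      # channel around the fundamental
spikes_h3 = band_spikes(x, fs, 250, 350)     # channel around the 3rd harmonic
# Once per fundamental period (every 10 ms) the two channels spike together:
# this coincidence across frequency bands is the synchronous phase information
# a spiking-network model can exploit.
```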

13.
Elucidating the structure and function of joint vocal displays (e.g. duet, chorus) recorded with a conventional microphone has proved difficult in some animals owing to the complex acoustic properties of the combined signal, a problem reminiscent of multi-speaker conversations in humans. Towards this goal, we set out to simultaneously compare air-transmitted (AT) with radio-transmitted (RT) vocalizations in one pair of humans and one pair of captive Bolivian grey titi monkeys (Plecturocebus donacophilus), all equipped with an accelerometer – or vibration transducer – closely apposed to the larynx. First, we observed no crosstalk between the two radio transmitters when subjects produced vocalizations at the same time close to each other. Second, compared with AT acoustic recordings, sound segmentation and pitch tracking of the RT signal was more accurate, particularly in a noisy and reverberating environment. Third, RT signals were less noisy than AT signals and displayed more stable amplitude regardless of distance, orientation and environment of the animal. The microphone outperformed the accelerometer with respect to sound spectral bandwidth and speech intelligibility: the sounds of RT speech were more attenuated and dampened as compared to AT speech. Importantly, we show that vocal telemetry allows reliable separation of the subjects' voices during production of joint vocalizations, which has great potential for future applications of this technique with free-ranging animals.

14.
The amplification of acoustic waves due to the transfer of thermal energy from electrons to the neutral component of a glow discharge plasma is studied theoretically. It is shown that, in order for acoustic instability (sound amplification) to occur, the amount of energy transferred should exceed the threshold energy, which depends on the plasma parameters and the acoustic wave frequency. The energy balance equation for an electron gas in the positive column of a glow discharge is analyzed for conditions typical of experiments in which acoustic wave amplification has been observed. Based on this analysis, one can affirm that, first, the energy transferred to neutral gas in elastic electron-atom collisions is substantially lower than the threshold energy for acoustic wave amplification and, second, that the energy transferred from electrons to neutral gas in inelastic collisions is much higher than that transferred in elastic collisions and thus may exceed the threshold energy. It is also shown that, for amplification to occur, there should exist some heat dissipation mechanism more efficient than gas heat conduction. It is suggested that this may be convective radial mixing within a positive column due to acoustic streaming in the field of an acoustic wave. The features of the phase velocity of sound waves in the presence of acoustic instability are investigated.

15.
The LTER Grid Pilot Study was conducted by the National Center for Supercomputing Applications, the University of New Mexico, and Michigan State University to design and build a prototype grid for the ecological community. The featured grid application, the Biophony Grid Portal, manages acoustic data from field sensors and allows researchers to conduct real-time digital signal processing analysis on high-performance systems via a web-based portal. Important characteristics addressed during the study include the management, access, and analysis of a large set of field-collected acoustic observations from microphone sensors, single sign-on, and data provenance. During the development phase of this project, new features were added to standard grid middleware software and have already been successfully leveraged by other, unrelated grid projects. This paper provides an overview of the Biophony Grid Portal application and its requirements, discusses considerations regarding grid architecture and design, details the technical implementation, and summarizes key experiences and lessons learned that are generally applicable to all developers and administrators in a grid environment.

16.
Speaker verification and speech recognition are closely related technologies. Both operate on spoken language input through a microphone or telephone, and both digitize that input and employ digital signal-processing (DSP) techniques to extract information about acoustic data and patterns from it. The principal distinction between speech recognition and speaker verification is functional: the two systems differ markedly in what they do with the speech data once it has been processed.
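The shared front-end can be illustrated with a conventional feature-extraction step such as MFCCs (one common choice, not necessarily the one used in this paper); the two technologies then diverge in the back-end that consumes those features. The snippet assumes the librosa package is available.

```python
import numpy as np
import librosa

# Shared front-end: digitized audio -> spectral features (here, MFCCs).
fs = 16_000
t = np.arange(fs) / fs
speech_like = (np.sin(2 * np.pi * 120 * t) *
               (1.0 + 0.5 * np.sin(2 * np.pi * 3 * t))).astype(np.float32)

mfcc = librosa.feature.mfcc(y=speech_like, sr=fs, n_mfcc=13)   # shape (13, n_frames)

# The back-ends differ:
#  - speaker verification compares `mfcc` statistics against an enrolled speaker model;
#  - speech recognition decodes the `mfcc` frame sequence into words.
```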

17.
A method is presented for measuring the heart rate of avian eggs noninvasively during the last half of incubation. The technique involves briefly placing an egg in a tightly sealed vessel containing an inexpensive condenser microphone. The amplified output of the microphone, termed the acoustocardiogram (ACG), is nearly sinusoidal in shape and synchronous with the electrocardiogram. The ACG can also be obtained by mounting the microphone directly on the shell with Plasticine. The method offers advantages over previously described techniques in simplicity, low cost, and noninvasiveness.
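Because the ACG is nearly sinusoidal and synchronous with the heartbeat, heart rate can be read off as the dominant spectral peak of the microphone signal. The sketch below illustrates that idea on a synthetic signal; it is a generic estimator, not the instrumentation described in the paper.

```python
import numpy as np

def acg_heart_rate_bpm(acg, fs, min_bpm=60.0, max_bpm=400.0):
    """Estimate heart rate (beats/min) from a near-sinusoidal acoustocardiogram
    by locating the dominant spectral peak within a plausible range."""
    acg = acg - np.mean(acg)
    spectrum = np.abs(np.fft.rfft(acg))
    freqs = np.fft.rfftfreq(len(acg), d=1.0 / fs)
    band = (freqs >= min_bpm / 60.0) & (freqs <= max_bpm / 60.0)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Example: 10 s of a synthetic 4 Hz (240 beats/min) ACG with additive noise
fs = 1000
t = np.arange(10 * fs) / fs
acg = np.sin(2 * np.pi * 4.0 * t) + 0.3 * np.random.default_rng(3).standard_normal(t.size)
bpm = acg_heart_rate_bpm(acg, fs)        # ~ 240
```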

18.
To test the hypothesis that muscle sound amplitudes would remain constant during sustained submaximal isometric contractions, we recorded acoustic myograms from the abductor digiti minimi muscle in 12 subjects at 15, 25, 50, and 75% of a maximum voluntary contraction (MVC). Muscle sounds were detected with an omni-directional electret microphone encased in closed-cell foam and attached to the skin over the muscle. Acoustic amplitudes from the middle and end of the sustained contractions were compared with the amplitudes from the beginning of contractions to determine whether acoustic amplitudes varied in magnitude as force remained constant. Physiological tremor was eliminated from the acoustic signal by use of a Fourier truncation at 14 Hz. The amplitudes of the acoustic signal at a contraction intensity of 75% MVC remained constant, reflecting force production over time. At 50% MVC, the root-mean-square amplitude decreased from the beginning to the end of the contraction (P < 0.05). Acoustic amplitudes increased over time at 15 and 25% MVC and were significantly higher at the end of the contractions than at the beginning (P < 0.05). Alterations in the acoustic amplitude, which reflect changes in the lateral vibrations of the muscle, may be indicative of the different recruitment strategies used to maintain force during sustained isometric contractions.
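The tremor-removal step can be reproduced generically as a Fourier truncation below 14 Hz followed by root-mean-square (RMS) amplitude measurements over segments of the contraction. The sketch below uses a random placeholder signal and is only a schematic re-implementation of that processing, not the authors' code.

```python
import numpy as np

def fourier_truncate(signal, fs, cutoff_hz=14.0):
    """Zero out spectral components below `cutoff_hz` (removing physiological
    tremor from the acoustic myogram) and return the filtered signal."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[freqs < cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

def rms(segment):
    """Root-mean-square amplitude of one portion of the contraction."""
    return np.sqrt(np.mean(np.square(segment)))

# Compare acoustic amplitude at the beginning, middle, and end of a contraction
fs = 2000
amg = np.random.default_rng(4).standard_normal(30 * fs)   # placeholder 30 s recording
clean = fourier_truncate(amg, fs)
third = len(clean) // 3
amplitudes = [rms(clean[:third]), rms(clean[third:2 * third]), rms(clean[2 * third:])]
```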

19.
Autonomous acoustic recorders are an increasingly popular method for low‐disturbance, large‐scale monitoring of sound‐producing animals, such as birds, anurans, bats, and other mammals. A specialized use of autonomous recording units (ARUs) is acoustic localization, in which a vocalizing animal is located spatially, usually by quantifying the time delay of arrival of its sound at an array of time‐synchronized microphones. To describe trends in the literature, identify considerations for field biologists who wish to use these systems, and suggest advancements that will improve the field of acoustic localization, we comprehensively review published applications of wildlife localization in terrestrial environments. We describe the wide variety of methods used to complete the five steps of acoustic localization: (1) define the research question, (2) obtain or build a time‐synchronizing microphone array, (3) deploy the array to record sounds in the field, (4) process recordings captured in the field, and (5) determine animal location using position estimation algorithms. We find eight general purposes in ecology and animal behavior for localization systems: assessing individual animals' positions or movements, localizing multiple individuals simultaneously to study their interactions, determining animals' individual identities, quantifying sound amplitude or directionality, selecting subsets of sounds for further acoustic analysis, calculating species abundance, inferring territory boundaries or habitat use, and separating animal sounds from background noise to improve species classification. We find that the labor‐intensive steps of processing recordings and estimating animal positions have not yet been automated. In the near future, we expect that increased availability of recording hardware, development of automated and open‐source localization software, and improvement of automated sound classification algorithms will broaden the use of acoustic localization. With these three advances, ecologists will be better able to embrace acoustic localization, enabling low‐disturbance, large‐scale collection of animal position data.
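Step (5) commonly proceeds by estimating time delays of arrival (TDOA) from cross-correlations between synchronized microphones and then finding the position that best explains those delays. The sketch below illustrates one such approach — correlation-based TDOA plus a brute-force 2-D grid search — as a generic example, not the pipeline of any particular study in the review.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def tdoa(sig_a, sig_b, fs):
    """Time delay of arrival (s) of the same sound at mic A relative to mic B,
    estimated from the peak of their cross-correlation."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)
    return lag / fs

def localize(mic_xy, delays, grid_extent=40.0, step=0.5):
    """Grid search for the 2-D source position whose predicted TDOAs
    (relative to microphone 0) best match the measured ones."""
    best, best_err = None, np.inf
    for x in np.arange(-grid_extent, grid_extent, step):
        for y in np.arange(-grid_extent, grid_extent, step):
            dists = np.hypot(mic_xy[:, 0] - x, mic_xy[:, 1] - y)
            predicted = (dists - dists[0]) / C        # TDOA relative to mic 0
            err = np.sum((predicted[1:] - delays) ** 2)
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Example: four microphones and a synthetic source at (10, 5)
mics = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0], [30.0, 30.0]])
source = np.array([10.0, 5.0])
true_delays = (np.hypot(*(mics - source).T) - np.hypot(*(mics[0] - source))) / C
# With field recordings, `tdoa(rec_i, rec_0, fs)` would supply the measured delays.
estimate = localize(mics, true_delays[1:])            # ~ (10.0, 5.0)
```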

20.
Call structure and behavioral analysis of Velarifictorus aspersus   (Cited 5 times: 0 self-citations, 5 by others)
The calls of Velarifictorus aspersus produced under different conditions were recorded through a microphone connected to a computer, and their structure was systematically analyzed with the software Cool Edit 2000. The results show that V. aspersus produces seven types of calls: calling, warning, provocation, victory, greeting, courtship, and urging calls. These seven call types differ clearly in their acoustic characteristics and are associated with the corresponding behaviors.
