Similar Literature
20 similar records found (search time: 15 ms)
1.
BOOK REVIEWS     
This paper provides our views on the areas of cetacean bioacoustics that are in the greatest need of study over the next several years. In doing this, we ask a number of questions we see as important to developing a better understanding of cetacean bioacoustics. The topics we cover are: Auditory Capabilities, including hearing sensitivity, pathways of sound to the ear, intraspecific variation in hearing capabilities, and the effects of intense sound on hearing capabilities; Echolocation, including the information-bearing parameters exploited by dolphin sonar systems to discriminate and identify objects, and the functional characteristics of the internal representation generated by reflections from ensonified objects; and Acoustic Communication, including the nature of the cetacean sound generation mechanism, the behaviors associated with mysticete communication sounds, and the range over which mysticetes communicate. While other investigators may not fully agree with our suggestions as to which questions are most important for future studies of cetacean bioacoustics, it is clear that a considerable effort must still be made so that we can better understand the bioacoustics and general behavior of these animals.

2.
In communication, animals use a full range of signals: acoustic, visual, chemical, electrical and tactile. How and why animals communicate has long fascinated scientists. Bioacoustics is the branch of science concerned with the production of sound and its effects on living organisms. The main purpose of the present study is to raise and discuss issues related to the relationship between animals, their sounds and ecology, including a presentation of methods for analysing sound recordings. A better understanding of the relationships between the studied animals will allow for the development of a better framework for future research, as well as a better grasp of interactions between different organisms, including humans. The paper discusses the significance of acoustic research in animal ecology and its possible future applications, and summarizes previous research on sound communication in various animal species. The paper argues that the vocalizations of every acoustically communicating animal are threatened by climate change. For marine animals, the sources of change in vocalization abilities are ocean acidification and increased ambient noise, which can affect communication and foraging behavior. For terrestrial animals, changes in precipitation and temperature may result in modifications of the sounds emitted, as well as certain modifications to the auditory system. Together with shifts in species distribution driven by environmental parameters, these factors can cumulatively change entire acoustic ecosystems. Thanks to acoustic biomonitoring, we can understand how the sounds of entire habitats and acoustic ecosystems will change in response to the changing climate and how this will affect bioacoustics on a global scale.

3.
Monitoring animals by the sounds they produce is an important and challenging task, whether the application is outdoors in a natural habitat, or in the controlled environment of a laboratory setting. In the former case, the density and diversity of animal sounds can act as a measure of biodiversity. In the latter case, researchers often create control and treatment groups of animals, expose them to different interventions, and test for different outcomes. One possible manifestation of different outcomes may be changes in the bioacoustics of the animals. With such a plethora of important applications, there have been significant efforts to build bioacoustic classification tools. However, we argue that most current tools are severely limited. They often require the careful tuning of many parameters (and thus huge amounts of training data), are either too computationally expensive for deployment in resource-limited sensors, specialized for a very small group of species, or are simply not accurate enough to be useful. In this work we introduce a novel bioacoustic recognition/classification framework that mitigates or solves all of the above problems. We propose to classify animal sounds in the visual space, by treating the texture of their sonograms as an acoustic fingerprint using a recently introduced parameter-free texture measure as a distance measure. We further show that by searching for the most representative acoustic fingerprint, we can significantly outperform other techniques in terms of speed and accuracy.
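
The "recently introduced parameter-free texture measure" is a compression-based distance; the snippet below sketches the closely related normalized compression distance (NCD) instead, since it needs only the standard library. The byte strings are stand-ins for serialized sonogram textures, not real audio data:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: parameter-free similarity via a compressor."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy "sonogram textures": two similar repetitive patterns and one dissimilar one.
a = bytes([10, 200] * 500)
b = bytes([10, 199] * 500)
c = bytes(range(256)) * 4
print(ncd(a, b) < ncd(a, c))  # similar textures compress well together -> True
```

The appeal, as in the paper, is that no feature tuning is required: the compressor itself decides which regularities two textures share.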

4.
The green anole (Anolis carolinensis) is an invasive lizard on the Ogasawara Islands, Japan. Green anoles have negatively impacted the native fauna, and thus, green anole eradication measures, such as the use of PTFE-sheet fencing to restrict their movement, have been implemented. However, the effectiveness of fencing appears inadequate; therefore, new methods are needed to deter this species. In this study, we explored the use of aversive bioacoustics by testing whether green anoles froze or stayed away on being exposed to predatory sounds and heterospecific alarm calls. Green anoles showed longer freezing times on hearing bird calls than on hearing no sound or single tones. In addition, they stayed far from the audio sources playing the calls of the red-tailed hawk (Buteo jamaicensis) and alarm calls of the warbling white-eye (Zosterops japonicus). Our results suggest that green anoles avoid certain bioacoustics, especially the calls of B. jamaicensis and the alarm call of Z. japonicus. Hence, these bioacoustics can be used as an effective control method to restrict the invasion and dispersion of green anoles.

5.
We present a study of buzzing sounds of several common species of bumblebees, with the focus on automatic classification of bumblebee species and types. Such classification is useful for bumblebee monitoring, which is important in view of evaluating the quality of their living environment and protecting the biodiversity of these important pollinators. We analysed natural buzzing frequencies for queens and workers of 12 species. In addition, we analysed changes in buzzing of Bombus hypnorum worker for different types of behaviour. We developed a bumblebee classification application using machine learning algorithms. We extracted audio features from sound recordings using a large feature library. We used the best features to train a classification model, with Random Forest proving to be the best training algorithm on the testing set of samples. The web and mobile application also allows expert users to upload new recordings that can be later used to improve the classification model and expand it to include more species.
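
The feature library used for classification is not itemized in the abstract; as one hypothetical example of a buzz-relevant audio feature, the dominant spectral peak can be extracted with a windowed FFT (the 180 Hz buzz below is synthetic, not a real recording):

```python
import numpy as np

def dominant_frequency(signal: np.ndarray, sr: int) -> float:
    """Return the frequency (Hz) of the strongest spectral peak."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return float(freqs[np.argmax(spectrum)])

sr = 8000
t = np.arange(sr) / sr
# Synthetic buzz: 180 Hz fundamental plus a weaker first harmonic.
buzz = np.sin(2 * np.pi * 180 * t) + 0.3 * np.sin(2 * np.pi * 360 * t)
print(round(dominant_frequency(buzz, sr)))  # -> 180
```

A feature vector of several such quantities per recording would then be handed to a classifier such as Random Forest, as the study describes.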

6.
Many fish species use active sound production for communication in numerous behaviors. Additionally, likely all fish can make passive or incidental sounds that may also serve some signal functions. Despite the ecological importance of fish sounds, their evident passive acoustic monitoring applications, and extensive endeavors to document soniferous fish diversity, the fields of bioacoustics and ichthyology have historically lacked an easily accessible, global inventory of known fish sound production. To alleviate this limitation, we developed http://FishSounds.net, a website that compiles and disseminates fish sound production information and recordings. FishSounds Version 1.0 launched in 2021, cataloging documented examinations for active and passive sound production for 1185 fish species from 837 references as well as 239 exemplary audio recordings. FishSounds allows users to search by taxa (e.g., family or common name), geographical distribution (e.g., region or water body), sound type, or reference. We have also made available the code used to create the website, so that it may be used in other data-sharing efforts, acoustic or otherwise. Subsequent versions of the website will update the data and improve the website functionality. FishSounds will advance research into fish behavior, passive acoustic monitoring, and human impacts on underwater soundscapes; serve as a resource for public outreach; and provide the foundation needed to investigate more of the 96% of fish species that lack published examinations of sound production. We further hope the FishSounds design, implementation, and engagement strategies will serve as a model for future data management and sharing efforts.

7.
  1. Applications in bioacoustics and its sister discipline ecoacoustics have increased exponentially over the last decade. However, despite knowledge about aquatic bioacoustics dating back to the times of Aristotle and a vast amount of background literature to draw upon, freshwater applications of ecoacoustics have been lagging to date.
  2. In this special issue, we present nine studies that deal with underwater acoustics, plus three acoustic studies on water-dependent birds and frogs. Topics include automatic detection of freshwater organisms by their calls, quantifying habitat change by analysing entire soundscapes, and detecting change in behaviour when organisms are exposed to noise.
  3. We identify six major challenges and review progress through this special issue. Challenges include the characterisation of sounds and the accessibility of archived sounds, as well as improving automated analysis methods. Study design considerations include the challenges of characterising spatial and temporal variation. The final key challenge is the so far largely understudied link between ecological condition and underwater sound.
  4. We hope that this special issue will raise awareness about underwater soundscapes as a survey tool. With a diverse array of field and analysis tools, this issue can act as a manual for future monitoring applications that will hopefully foster further advances in the field.

8.
Bioacoustics has become widely used in the study of acoustically active animals, and machine learning algorithms have emerged as efficient and effective strategies to identify species vocalizations. Current applications of machine learning in bioacoustics often identify acoustic events to the species level but fail to capture the complex acoustic repertoires animals use to communicate, which can inform habitat associations, demography, behavior, and the life history of cryptic species. The penultimate layer of most machine learning algorithms results in a vector of numbers describing the input, called feature embeddings. Here, we demonstrate that the feature embeddings generated by the BirdNET algorithm can enable within-species classifications of acoustic events. First, we successfully differentiated adult and juvenile Great Gray Owls; second, we identified three unique sounds associated with Great Spotted Woodpeckers (series call, alarm call, and drumming). These applications of BirdNET feature embeddings suggest that researchers can classify vocalizations into groups when group membership is unknown, and that within-species grouping is possible even when target signals are extremely rare. These applications of a relatively "black-box" aspect of machine learning algorithms can be used to derive ecologically informative acoustic classifications, which can inform the conservation of cryptic and otherwise difficult-to-study species.
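
As a sketch of the downstream step, grouping calls when group membership is unknown, here is a plain k-means run on synthetic vectors standing in for BirdNET feature embeddings (the dimensionality and values are invented; BirdNET itself is not invoked):

```python
import numpy as np

def kmeans(X, init_idx, iters=20):
    """Minimal k-means: assign each point to the nearest centre, recompute centres."""
    centers = X[list(init_idx)].astype(float).copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) for j in range(len(centers))])
    return labels

# Synthetic "embeddings": two well-separated call types, 20 clips each.
rng = np.random.default_rng(1)
emb = np.vstack([rng.normal(0, 0.1, (20, 8)), rng.normal(3, 0.1, (20, 8))])
labels = kmeans(emb, init_idx=(0, len(emb) - 1))
print(labels[:3].tolist(), labels[-3:].tolist())  # -> [0, 0, 0] [1, 1, 1]
```

With real embeddings, the cluster labels would then be inspected to decide which acoustic group (e.g. adult vs. juvenile) each cluster represents.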

9.
The negative effects of human activities within the ecological space of whales remain an issue of concern to marine ecologists. The accurate detection and subsequent classification of whale species are vital in mitigating these negative effects. Automatic detection techniques enable efficient detection of the various whale species without human error. The hidden Markov model (HMM) remains one of the most efficient detectors of whale species. However, its performance is greatly influenced by the feature vectors adapted with it. In this work, we propose the kernel dynamic mode decomposition (kDMD) algorithm as a tool to extract features of baleen whale species, which are then adapted with an HMM for their detection. Dynamic mode decomposition (DMD) is an eigendecomposition-based algorithm capable of extracting latent underlying features of non-linear signals such as those vocalised by whales. However, the underlying cost of DMD is the singular value decomposition (SVD), which adds significant complexity to the mode-derivation steps. This work therefore introduces the kernel method into DMD, in order to find a more efficient way of computing DMD without explicitly using the SVD algorithm. Furthermore, the feature-formation steps of the original DMD were modified (mDMD) to make the method more generic for datasets with sparse whale sound samples. The performance of the detectors was tested on datasets containing sounds of southern right whales (SRWs) and humpback whales. The results show a high true positive rate (TPR), high precision (PREC) and low error rate (ERR) for both species. The performance of the three DMD-based feature-extraction methods was compared: the kDMD-HMM generally performed better than the mDMD-HMM and DMD-HMM detectors. The methods proposed here can be tailored for the automatic detection and classification of other vocalising animal species through their sounds.
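
kDMD and mDMD are specific to this paper, but the exact-DMD baseline they modify is standard and fits in a few lines of numpy. A noise-free oscillation is used so the expected eigenvalues, exp(±i·2π·f·dt), are known in advance:

```python
import numpy as np

def dmd_eigenvalues(X, r):
    """Exact DMD: eigenvalues of the best-fit linear map X2 ~= A*X1, in a rank-r SVD basis."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    return np.linalg.eigvals(A_tilde)

# A pure 5 Hz oscillation sampled at dt = 0.01 s: DMD should recover
# eigenvalues exp(+/- i*2*pi*5*dt), which lie on the unit circle.
dt = 0.01
t = np.arange(0, 1, dt)
X = np.vstack([np.cos(2 * np.pi * 5 * t), np.sin(2 * np.pi * 5 * t)])
lam = dmd_eigenvalues(X, r=2)
print(np.round(np.abs(lam), 4))  # magnitudes ~1: a sustained oscillation
```

The SVD in the third line is exactly the cost the kernel variant is designed to avoid.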

10.
The paper lists basic data on the role of sounds in fish behavior. The involvement of acoustic signaling in the control of reproductive, territorial, agonistic, aggressive, social, and feeding behavior in fish that differ in systematics and mode of life is considered. Species, population and individual variation in fish sounds, diurnal and seasonal cycles of sound activity, the behavior that accompanies acoustic signaling, and the effects of different environmental factors upon it are considered. Evidence on the formation of acoustic signaling in fish ontogenesis is provided; the range of sound signaling and the correspondence between sound spectra and auditory sensitivity are discussed. Possible applied aspects of the study of fish bioacoustics are analyzed.

11.
Electroencephalography (EEG) signals collected from human brains have generally been used to diagnose diseases. Moreover, EEG signals can be used in several areas such as emotion recognition and driving-fatigue detection. This work presents a new emotion recognition model using EEG signals. Its primary aim is a highly accurate emotion recognition framework combining hand-crafted feature generation with a deep classifier. The presented framework uses a multilevel fused feature generation network with three primary phases: tunable Q-factor wavelet transform (TQWT), statistical feature generation, and nonlinear textural feature generation. TQWT is applied to the EEG data to decompose the signals into sub-bands and create a multilevel feature generation network. In the nonlinear feature generation, an S-box of the LED block cipher is utilized to create a pattern, named the LED pattern. Statistical feature extraction uses the widely used statistical moments. The LED pattern and statistical feature extraction functions are applied to 18 TQWT sub-bands and the original EEG signal; the resulting hand-crafted learning model is named LEDPatNet19. To select the most informative features, the ReliefF and iterative Chi2 (RFIChi2) feature selector is deployed. The model was developed on two EEG emotion datasets, GAMEEMO and DREAMER. The proposed hand-crafted learning network achieved 94.58%, 92.86%, and 94.44% classification accuracies for the arousal, dominance, and valence cases of the DREAMER dataset. Furthermore, its best classification accuracy on the GAMEEMO dataset is 99.29%. These results clearly illustrate the success of the proposed LEDPatNet19.
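
TQWT and the LED-pattern generator are paper-specific, but the statistical-moment phase uses classical quantities. A minimal sketch of a four-moment feature vector (mean, standard deviation, skewness, kurtosis), computed here on a synthetic sine rather than real EEG:

```python
import numpy as np

def statistical_features(x):
    """Four classical statistical moments, a common hand-crafted feature set."""
    x = np.asarray(x, dtype=float)
    mu, sd = x.mean(), x.std()
    skew = ((x - mu) ** 3).mean() / sd ** 3
    kurt = ((x - mu) ** 4).mean() / sd ** 4
    return np.array([mu, sd, skew, kurt])

# For a sine over whole periods: mean ~0, sd ~0.707, skew ~0, kurtosis ~1.5.
sig = np.sin(np.linspace(0, 4 * np.pi, 1000))
feats = statistical_features(sig)
print(np.round(feats, 3))
```

In the paper these moments are computed per TQWT sub-band, so a signal decomposed into 18 sub-bands yields 18 such vectors before feature selection.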

12.
IRBM, 2022, 43(6): 694-704
Background: Respiratory sounds are associated with the flow rate, nasal flow pressure, and physical characteristics of airways. In this study, we aimed to develop flow rate and nasal flow pressure estimation models for clinical application, and to find the optimal feature set that achieves the best model performance.
Methods: Respiratory sounds and flow rate were acquired from nine healthy volunteers. Respiratory sounds and nasal flow pressure were acquired from twenty-three healthy volunteers. Four types of respiratory sound features were extracted for flow rate and nasal flow pressure estimation using different estimation models. Estimates based on these features were evaluated using Bland-Altman analysis, estimation error, and feature calculation time. In addition, estimation errors computed separately for expiratory and inspiratory phases were compared with errors from united (phase-independent) estimation.
Results: The personalized logarithm model was selected as the optimal flow rate estimation model, and nasal flow pressure estimation was also performed with it. Across the four respiratory sound features, there was no statistically significant difference in flow rate or pressure estimation error. LogEnvelope was therefore chosen as the optimal feature because of its lowest computational cost. For every feature, no statistically significant difference was observed between divided and united estimation errors (flow rate and pressure).
Conclusion: Respiratory flow rate and nasal flow pressure can be estimated accurately from respiratory sound features. United estimation over expiratory and inspiratory phases is a more reasonable method than divided estimation. LogEnvelope can be used for this united estimation with minimum computational cost and acceptable error.
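
The abstract does not state the exact form of the "personalized logarithm model"; assuming the common form flow = a*log10(feature) + b, the two per-subject coefficients can be fitted by least squares. The coefficients 2.5 and 1.0 below are invented ground truth for the synthetic calibration data:

```python
import numpy as np

# Synthetic calibration data following flow = 2.5*log10(envelope) + 1.0.
rng = np.random.default_rng(0)
envelope = rng.uniform(0.1, 10, 50)   # stand-in for a LogEnvelope-type sound feature
flow = 2.5 * np.log10(envelope) + 1.0

# Fit the two coefficients with ordinary least squares.
A = np.column_stack([np.log10(envelope), np.ones_like(envelope)])
coef, *_ = np.linalg.lstsq(A, flow, rcond=None)
print(np.round(coef, 3))  # recovers the ground-truth [2.5, 1.0]
```

"Personalized" in the paper means these coefficients are calibrated per subject rather than shared across volunteers.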

13.
The sounds and songs of birds have inspired the musical compositions of numerous cultures throughout the globe. This article examines a variety of compositions from Western music that feature birdsong and explores the concept of birds as both vocalists and instrumentalists. The concept of birds as composers is then developed: how they use rhythmic variations, pitch relationships, and combinations of notes similar to those found in music. The theory that birds create variation in their songs partially to avoid monotony is also considered. Various families of birds that borrow sounds from other species are surveyed, in particular the European starling (Sturnus vulgaris), which may have inspired a Mozart composition. We conclude that the fusion of avian bioacoustics and the study of birdsong in music may function as a conservation tool, raising human awareness and stimulating future generations to save for posterity what remains of the natural world.

14.
15.
Although bioacoustics is increasingly used to study species and environments for monitoring and conservation, detecting calls produced by species of interest is prohibitively time consuming when done manually. Here we compared four methods for detecting and identifying roar-barks of maned wolves (Chrysocyon brachyurus) within long sound recordings: (1) a manual method, (2) an automated detector using Raven Pro 1.4, (3) an automated detector using XBAT and (4) a mixed method using XBAT's detector followed by manual verification. Recordings were made with a song meter installed at the Serra da Canastra National Park (Minas Gerais, Brazil). For each method we evaluated, on a 24-h recording: (1) total time required to analyse the files, (2) number of false positives and (3) number of true positives relative to the total number of target sounds. Automated methods required less time to analyse the recordings (77–93 min) than the manual method (189 min), but consistently produced more false positives and were less efficient at identifying true positives (manual = 91.89%, Raven = 32.43% and XBAT = 84.86%). Adding manual verification after XBAT detection dramatically increased efficiency in identifying target sounds (XBAT+manual = 100% true positives). Manual verification of XBAT detections thus appears to be the best of the proposed methods for collecting target sound data in studies where large amounts of audio must be analysed in a reasonable time (111 min, 58.73% of the time required to find calls manually).
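
The comparison rests on true positives, false positives and analysis time. A stdlib-only sketch of scoring detector time stamps against ground-truth call times; the ±0.5 s matching tolerance and the toy time stamps are assumptions, not values from the study:

```python
def detector_metrics(detected, truth, tol=0.5):
    """Match detections to ground-truth call times within +/- tol seconds."""
    # A detection is a true positive if any ground-truth call lies within tol.
    tp = sum(any(abs(d - t) <= tol for t in truth) for d in detected)
    # TPR: fraction of ground-truth calls that some detection matched.
    tpr = sum(any(abs(t - d) <= tol for d in detected) for t in truth) / len(truth)
    precision = tp / len(detected) if detected else 0.0
    return tpr, precision

truth = [1.0, 5.0, 9.0, 14.0]       # seconds at which real roar-barks occur
detected = [1.1, 5.2, 7.0]          # two hits, one false positive, two misses
tpr, precision = detector_metrics(detected, truth)
print(round(tpr, 2), round(precision, 2))  # -> 0.5 0.67
```

The "XBAT+manual" pipeline in the study effectively raises precision by discarding false positives while keeping the detector's time savings.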

16.
Open audio databases such as Xeno-Canto are widely used to build datasets to explore bird song repertoire or to train models for automatic bird sound classification by deep learning algorithms. However, such databases suffer from the fact that bird sounds are weakly labelled: a species name is attributed to each audio recording without timestamps that provide the temporal localization of the bird song of interest. Manual annotations can solve this issue, but they are time consuming, expert-dependent, and cannot run on large datasets. Another solution consists in using a labelling function that automatically segments audio recordings before assigning a label to each segmented audio sample. Although labelling functions were introduced to expedite strong label assignment, their classification performance remains mostly unknown. To address this issue and reduce label noise (wrong label assignment) in large bird song datasets, we introduce a data-centric novel labelling function composed of three successive steps: 1) time-frequency sound unit segmentation, 2) feature computation for each sound unit, and 3) classification of each sound unit as bird song or noise with either an unsupervised DBSCAN algorithm or the supervised BirdNET neural network. The labelling function was optimized, validated, and tested on the songs of 44 West-Palearctic common bird species. We first showed that segmentation of bird songs alone produced from 10% to 83% label noise depending on the species. We also demonstrated that our labelling function was able to significantly reduce the initial label noise present in the dataset by up to a factor of three. Finally, we discuss different opportunities for designing suitable labelling functions to build high-quality animal vocalization datasets with minimum expert annotation effort.
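
Step 1 of the labelling function is time-frequency sound-unit segmentation; the paper's method is more elaborate, but the idea can be illustrated with a crude energy-threshold segmenter (the frame length and the -30 dB relative threshold are arbitrary choices, not the paper's):

```python
import numpy as np

def segment_sound_units(x, sr, frame=512, thresh_db=-30):
    """Crude segmentation: frames above a relative dB threshold form sound units."""
    n = len(x) // frame
    frames = x[: n * frame].reshape(n, frame)
    energy_db = 10 * np.log10(np.maximum((frames ** 2).mean(1), 1e-12))
    active = energy_db > energy_db.max() + thresh_db
    # Group consecutive active frames into (start, end) sample spans.
    units, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            units.append((start * frame, i * frame)); start = None
    if start is not None:
        units.append((start * frame, n * frame))
    return units

sr = 8000
t = np.arange(sr) / sr
# One second of faint background with two louder "song" bursts.
x = np.where(((t > 0.2) & (t < 0.4)) | ((t > 0.6) & (t < 0.7)),
             np.sin(2 * np.pi * 440 * t), 0.001 * np.sin(2 * np.pi * 50 * t))
print(len(segment_sound_units(x, sr)))  # -> 2
```

Each extracted span would then get features computed (step 2) and be classified as bird song or noise (step 3).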

17.
We propose a novel, two-degree-of-freedom mathematical model of mechanical vibrations of the heart that generates heart sounds in CircAdapt, a complete real-time model of the cardiovascular system. Heart sounds during rest, exercise, biventricular (BiVHF), left ventricular (LVHF) and right ventricular heart failure (RVHF) were simulated to examine model functionality in various conditions. Simulated and experimental heart sound components showed both qualitative and quantitative agreement in terms of heart sound morphology, frequency, and timing. The rate of left ventricular pressure rise (LV dp/dt_max) and the first heart sound (S1) amplitude were proportional to exercise level. The relation of the second heart sound (S2) amplitude with exercise level was less significant. BiVHF resulted in amplitude reduction of S1. LVHF resulted in reverse splitting of S2 and an amplitude reduction of only the left-sided heart sound components, whereas RVHF resulted in prolonged splitting of S2 and only a mild amplitude reduction of the right-sided heart sound components. In conclusion, our hemodynamics-driven mathematical model provides fast and realistic simulations of heart sounds under various conditions and may help find new indicators for diagnosis and prognosis of cardiac diseases.
New & noteworthy: To the best of our knowledge, this is the first hemodynamics-based heart sound generation model embedded in a complete real-time computational model of the cardiovascular system. Simulated heart sounds are similar to experimental and clinical measurements, both quantitatively and qualitatively. Our model can be used to investigate the relationships between heart sound acoustic features and hemodynamic factors/anatomical parameters.
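
The full model couples the vibrations to CircAdapt hemodynamics; stripped of that coupling, each heart-sound component behaves like the impulse response of a damped vibration mode. The frequencies and damping ratio below are illustrative, not the paper's fitted parameters:

```python
import numpy as np

def damped_mode(f0, zeta, sr=4000, dur=0.1):
    """Impulse response of one vibration mode: a decaying sinusoid at the damped frequency."""
    t = np.arange(0, dur, 1.0 / sr)
    wn = 2 * np.pi * f0                  # undamped natural frequency (rad/s)
    wd = wn * np.sqrt(1 - zeta ** 2)     # damped frequency
    return np.exp(-zeta * wn * t) * np.sin(wd * t)

s1 = damped_mode(f0=50, zeta=0.1)    # S1-like: lower-frequency component
s2 = damped_mode(f0=120, zeta=0.1)   # S2-like: higher-frequency component
# Both envelopes decay, as heart sound transients do.
```

In the paper, the amplitude driving such modes comes from simulated pressure dynamics, which is why S1 amplitude tracks LV dp/dt_max.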

18.
Otazu GH, Leibold C. PLoS ONE, 2011, 6(9): e24270
The identification of the sound sources present in the environment is essential for the survival of many animals. However, these sounds are not presented in isolation, as natural scenes consist of a superposition of sounds originating from multiple sources. The identification of a source under these circumstances is a complex computational problem that is readily solved by most animals. We present a model of the thalamocortical circuit that performs level-invariant recognition of auditory objects in complex auditory scenes. The circuit identifies the objects present from a large dictionary of possible elements and operates reliably for real sound signals with multiple concurrently active sources. The key model assumption is that the activities of some cortical neurons encode the difference between the observed signal and an internal estimate. Reanalysis of awake auditory cortex recordings revealed neurons with patterns of activity corresponding to such an error signal.
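
The key model assumption, neurons encoding the difference between the observed signal and an internal estimate, can be caricatured with a linear template dictionary: estimate which "auditory objects" are present, then form the residual. The templates and the mixture below are invented for illustration:

```python
import numpy as np

# Dictionary whose columns are spectral templates of two "auditory objects".
D = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]]).T
# An observed scene: a superposition of both objects at different levels.
scene = 2.0 * D[:, 0] + 0.5 * D[:, 1]

# Least-squares estimate of the objects present, and the error signal
# that the model's cortical neurons are hypothesised to encode.
coeffs, *_ = np.linalg.lstsq(D, scene, rcond=None)
error = scene - D @ coeffs
print(np.round(coeffs, 2), float(np.abs(error).max()) < 1e-9)
```

When the internal estimate is correct, the residual vanishes; in the model, a persistent residual is the signal that drives recognition updates.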

19.
Fishes use a variety of sensory systems to learn about their environments and to communicate. Of the various senses, hearing plays a particularly important role for fishes in providing information, often from great distances, from all around these animals. This information is in all three spatial dimensions, often overcoming the limitations of other senses such as vision, touch, taste and smell. Sound is used for communication between fishes, mating behaviour, the detection of prey and predators, orientation and migration and habitat selection. Thus, anything that interferes with the ability of a fish to detect and respond to biologically relevant sounds can decrease survival and fitness of individuals and populations. Since the onset of the Industrial Revolution, there has been a growing increase in the noise that humans put into the water. These anthropogenic sounds are from a wide range of sources that include shipping, sonars, construction activities (e.g., wind farms, harbours), trawling, dredging and exploration for oil and gas. Anthropogenic sounds may be sufficiently intense to result in death or mortal injury. However, anthropogenic sounds at lower levels may result in temporary hearing impairment, physiological changes including stress effects, changes in behaviour or the masking of biologically important sounds. The intent of this paper is to review the potential effects of anthropogenic sounds upon fishes, the potential consequences for populations and ecosystems and the need to develop sound exposure criteria and relevant regulations. However, assuming that many readers may not have a background in fish bioacoustics, the paper first provides information on underwater acoustics, with a focus on introducing the very important concept of particle motion, the primary acoustic stimulus for all fishes, including elasmobranchs. The paper then provides background material on fish hearing, sound production and acoustic behaviour. 
This is followed by an overview of what is known about effects of anthropogenic sounds on fishes and considers the current guidelines and criteria being used world-wide to assess potential effects on fishes. Most importantly, the paper provides the most complete summary of the effects of anthropogenic noise on fishes to date. It is also made clear that there are currently so many information gaps that it is almost impossible to reach clear conclusions on the nature and levels of anthropogenic sounds that have potential to cause changes in animal behaviour, or even result in physical harm. Further research is required on the responses of a range of fish species to different sound sources, under different conditions. There is a need both to examine the immediate effects of sound exposure and the longer-term effects, in terms of fitness and likely impacts upon populations.

20.
This paper presents a new module for heart sound segmentation based on the S-transform. The segmentation process divides the phonocardiogram (PCG) signal into four parts: S1 (first heart sound), systole, S2 (second heart sound) and diastole, and can be considered one of the most important phases in the automatic analysis of PCG signals. The proposed segmentation module comprises three main blocks: localization of heart sounds, boundary detection of the localized heart sounds, and a classification block to distinguish between S1 and S2. An original localization method for heart sounds is proposed: the method, named SSE, calculates the Shannon energy of the local spectrum obtained by the S-transform for each sample of the heart sound signal. The second block contains a novel approach for detecting the boundaries of S1 and S2. The energy concentrations of the S-transform of localized sounds are optimized using a window width optimization algorithm; the SSE envelope is then recalculated and a local adaptive threshold is applied to refine the estimated boundaries. To distinguish between S1 and S2, a feature extraction method based on the singular value decomposition (SVD) of the S-matrix is applied. The proposed segmentation module is evaluated block by block on a database of 80 sounds, including 40 sounds with cardiac pathologies.
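
The SSE method applies Shannon energy to the local S-transform spectrum at each sample; the S-transform stage is omitted here, but the Shannon-energy operation itself, which favours medium-intensity values over both silence and spikes, is easily sketched:

```python
import numpy as np

def shannon_energy(x, eps=1e-10):
    """Shannon energy -x^2*log(x^2): boosts medium-intensity samples, suppresses extremes."""
    x = np.asarray(x, dtype=float)
    x = x / (np.abs(x).max() + eps)   # normalise into [-1, 1]
    e = x ** 2
    return -e * np.log(e + eps)

samples = np.array([0.0, 0.3, 1.0])   # silence, mid-level sound, peak
se = shannon_energy(samples)
print(np.round(se, 3))  # the mid-level sample gets the largest value
```

This weighting is why a Shannon-energy envelope makes the relatively weak S1/S2 lobes stand out against both background noise and sharp transients.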


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) · 京ICP备09084417号