Similar Documents
20 similar documents found.
1.
《Ecological Informatics》2012,7(6):345-353
Camera traps and the images they generate are becoming an essential tool for field biologists studying and monitoring terrestrial animals, in particular medium to large terrestrial mammals and birds. In the last five years, camera traps have made the transition to digital technology, and these devices now produce hundreds of instantly available images per month and a large amount of ancillary metadata (e.g., date, time, temperature, image size, etc.). Despite this accelerated pace in the development of digital image capture, field biologists still lack adequate software solutions to process and manage the increasing amount of information in a cost-efficient way. In this paper we describe a software system that we have developed, called DeskTEAM, to address this issue. DeskTEAM has been developed in the context of the Tropical Ecology Assessment and Monitoring Network (TEAM), a global network that monitors terrestrial vertebrates. We describe the software architecture and functionality and its utility in managing and processing large amounts of digital camera trap data collected throughout the global TEAM network. DeskTEAM incorporates software features and functionality that make it relevant to the broad camera trapping community. These include the ability to run the application locally on a laptop or desktop computer, without requiring an Internet connection, and on multiple operating systems; an intuitive navigational user interface with multiple levels of detail (from individual images to whole groups of images) that allows users to easily manage hundreds or thousands of images; the ability to automatically extract EXIF and custom metadata information from digital images to increase standardization; embedded taxonomic lists that allow users to easily tag images with species identities; and the ability to export data packages consisting of data, metadata and images in standardized formats so that they can be transferred to online data warehouses for easy archiving and dissemination. Lastly, building these software tools for wildlife scientists provides valuable lessons for the ecoinformatics community.
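The metadata-extraction step that tools like DeskTEAM automate can be illustrated with a short Python sketch. This is not DeskTEAM's code; it only shows how EXIF fields such as capture date and camera model might be read from a camera trap image using the Pillow library, with a hypothetical file name.

# Minimal sketch of EXIF extraction from a camera trap image (not DeskTEAM's code).
from PIL import Image, ExifTags

def read_exif(path):
    """Return a dict of human-readable EXIF tags for one camera trap image."""
    with Image.open(path) as img:
        exif = img.getexif()
    return {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

meta = read_exif("CT-001-0001.JPG")  # hypothetical file name
print(meta.get("DateTime"), meta.get("Make"), meta.get("Model"))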

2.
As the capacity to collect and store large amounts of data expands, identifying and evaluating strategies to efficiently convert raw data into meaningful information is increasingly necessary. Across disciplines, this data processing task has become a significant challenge, delaying progress and actionable insights. In ecology, the growing use of camera traps (i.e., remotely triggered cameras) to collect information on wildlife has led to an enormous volume of raw data (i.e., images) in need of review and annotation. To expedite camera trap image processing, many have turned to the field of artificial intelligence (AI) and use machine learning models to automate tasks such as detecting and classifying wildlife in images. To contribute to understanding of the utility of AI tools for processing wildlife camera trap images, we evaluated the performance of a state-of-the-art computer vision model developed by Microsoft AI for Earth, named MegaDetector, using data from an ongoing camera trap study in Arctic Alaska, USA. Compared to image labels determined by manual human review, we found that MegaDetector reliably determined the presence or absence of wildlife in images generated by motion-detection camera settings (≥94.6% accuracy); however, performance was substantially poorer for images collected with time-lapse camera settings (≤61.6% accuracy). By examining time-lapse images where MegaDetector failed to detect wildlife, we gained practical insights into animal size and distance detection limits and discuss how those may impact the performance of MegaDetector in other systems. We anticipate our findings will stimulate critical thinking about the tradeoffs of using automated AI tools or manual human review to process camera trap images and help to inform effective implementation of study designs.
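An accuracy comparison of this kind can be scripted once MegaDetector's batch output and the manual labels are in hand. The sketch below assumes MegaDetector's standard batch-output JSON (a list of per-image detections with confidence scores) and a hypothetical CSV of manual presence/absence labels; the 0.8 confidence threshold is illustrative, not the study's.

# Sketch: score MegaDetector output against manual presence/absence labels.
import json, csv

THRESHOLD = 0.8  # illustrative confidence cut-off

with open("megadetector_output.json") as f:
    md = json.load(f)

predicted = {
    img["file"]: any(d["conf"] >= THRESHOLD for d in (img.get("detections") or []))
    for img in md["images"]
}

correct = total = 0
with open("manual_labels.csv") as f:  # hypothetical columns: file, wildlife_present (0/1)
    for row in csv.DictReader(f):
        total += 1
        correct += predicted.get(row["file"], False) == bool(int(row["wildlife_present"]))

print(f"Agreement with manual review: {correct / total:.1%}")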

3.
The use of camera traps is now widespread and their importance in wildlife studies is well understood. Camera trap studies can produce millions of photographs and there is a need for software to help manage photographs efficiently. In this paper, we describe a software system that was built to successfully manage a large behavioral camera trap study that produced more than a million photographs. We describe the software architecture and the design decisions that shaped the evolution of the program over the study's three-year period. The software system has the ability to automatically extract metadata from images and add customized metadata to the images in a standardized format. The software system can be installed as a standalone application on popular operating systems. It is minimalistic, scalable and extendable so that it can be used by small teams or individual researchers for a broad variety of camera trap studies.
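Adding customized metadata in a standardized format, as described above, is often done with a machine-readable record stored alongside each image. The sketch below shows one possible approach (a JSON "sidecar" file per image); it is not the system described in the paper, and the field and file names are hypothetical.

# Sketch: attach standardized custom metadata to an image as a JSON sidecar file.
import json, pathlib

def write_sidecar(image_path, species, count, observer):
    record = {
        "image": pathlib.Path(image_path).name,
        "species": species,
        "count": count,
        "observer": observer,
        "schema_version": "1.0",
    }
    sidecar = pathlib.Path(image_path).with_suffix(".json")
    sidecar.write_text(json.dumps(record, indent=2))

write_sidecar("CT-014-0386.JPG", "Panthera pardus", 1, "observer_A")  # hypothetical values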

4.
The increasing prevalence of automated image acquisition systems is enabling new types of microscopy experiments that generate large image datasets. However, there is a perceived lack of robust image analysis systems required to process these diverse datasets. Most automated image analysis systems are tailored for specific types of microscopy, contrast methods, probes, and even cell types. This imposes significant constraints on experimental design, limiting their application to the narrow set of imaging methods for which they were designed. One of the approaches to address these limitations is pattern recognition, which was originally developed for remote sensing, and is increasingly being applied to the biology domain. This approach relies on training a computer to recognize patterns in images rather than developing algorithms or tuning parameters for specific image processing tasks. The generality of this approach promises to enable data mining in extensive image repositories, and provide objective and quantitative imaging assays for routine use. Here, we provide a brief overview of the technologies behind pattern recognition and its use in computer vision for biological and biomedical imaging. We list available software tools that can be used by biologists and suggest practical experimental considerations to make the best use of pattern recognition techniques for imaging assays.
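The core of the pattern-recognition approach described here is that a classifier is trained on labelled example images rather than built from hand-tuned, assay-specific rules. The Python sketch below illustrates that idea with deliberately crude features and simulated data; it is a minimal illustration, not any of the software tools listed in the paper.

# Sketch of the pattern-recognition workflow: train a classifier on labelled images.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def features(img):
    """Very crude image features: mean, standard deviation, and a coarse 8-bin histogram."""
    hist, _ = np.histogram(img, bins=8, range=(0, 255), density=True)
    return np.concatenate([[img.mean(), img.std()], hist])

# Simulated training data: grayscale arrays with two hypothetical phenotype classes.
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, (64, 64)) for _ in range(200)]
labels = rng.integers(0, 2, 200)

X = np.array([features(im) for im in images])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, labels, cv=5).mean())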

5.
Investigating crop feeding patterns by primates is an increasingly important objective for primatologists and conservation practitioners alike. Although camera trap technology is used to study primates and other wildlife in numerous ways, e.g., activity patterns, social structure, species richness, abundance, density, diet, and demography, it is comparatively underused in the study of human–primate interactions. We compare photographic (N = 210) and video (N = 141) data of crop feeding moor macaques (Macaca maura) from remote sensor cameras, functioning for 231 trap days, with ethnographic data generated from semistructured interviews with local farmers. Our results indicate that camera traps can provide data on the following aspects of crop feeding behavior: species, crop type and phase targeted, harvesting technique used, and daily and seasonal patterns of crop feeding activity. We found camera traps less useful, however, in providing information on the individual identification and age/sex class of crop feeders, exact group size, and amount of crops consumed by the moor macaques. While farmer reports match camera trap data regarding crop feeding species and how wildlife access the gardens, they differ when addressing crop feeding event frequency and timing. Understanding the mismatches between camera trap data and farmer reports is valuable to conservation efforts that aim to mitigate the conflict between crop feeding wildlife and human livelihoods. For example, such information can influence changes in the way certain methods are used to deter crop feeding animals from damaging crops. Ultimately, we recommend using remote-sensing camera technology in conjunction with other methods to study crop feeding behavior.

6.
Camera traps are a popular tool for monitoring wildlife, though they can fail to capture enough morphological detail for accurate small mammal species identification. Camera trapping small mammals is often limited by the inability of camera models to: (i) record at close distances; and (ii) provide standardised photos. This study aims to provide a camera trapping method that captures standardised images of the faces of small mammals for accurate species identification, with further potential for individual identification. A novel camera trap design, coined the ‘selfie trap’, was developed. The selfie trap is a camera contained within an enclosed PVC pipe with a modified lens that produces standardised close images of the small mammal species encountered in this study, including the Brown Antechinus (Antechinus stuartii), Bush Rat (Rattus fuscipes) and Sugar Glider (Petaurus breviceps). Individual identification was tested on the common arboreal Sugar Glider. Five individual Sugar Gliders were identified based on unique head stripe pelage. The selfie trap is an accurate camera trapping method for capturing detailed and standardised images of small mammal species. The design described may be useful for wildlife management as a reliable method for surveying small mammal species. However, intraspecies individual identification using the selfie trap requires further testing.

7.
The tiger is one of many species in which individuals can be identified by surface patterns. Camera traps can be used to record individual tigers moving over an array of locations and provide data for monitoring and studying populations and devising conservation strategies. We suggest using a combination of algorithms to calculate similarity scores between pattern samples scanned from the images to automate the search for a match to a new image. We show how using a three-dimensional surface model of a tiger to scan the pattern samples allows comparison of images that differ widely in camera angles and body posture. The software, which is free to download, considerably reduces the effort required to maintain an image catalogue, and we suggest it could be used to trace the origin of a tiger skin by searching a central database of living tigers' images for matches to an image of the skin.
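The published tool combines several algorithms and a three-dimensional body model; the Python sketch below illustrates only the underlying notion of a similarity score between two already aligned, same-sized pattern samples, using a simple normalised cross-correlation and simulated data.

# Sketch of a similarity score between two aligned stripe-pattern samples.
import numpy as np

def pattern_similarity(a, b):
    """Normalised cross-correlation of two same-sized grayscale patches."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

rng = np.random.default_rng(1)
query = rng.random((128, 64))                                   # new pattern sample
catalogue = {f"tiger_{i}": rng.random((128, 64)) for i in range(100)}  # existing catalogue
best = max(catalogue, key=lambda k: pattern_similarity(query, catalogue[k]))
print("best match:", best)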

8.
Camera traps are a popular tool to sample animal populations because they are noninvasive, detect a variety of species, and can record many thousands of animal detections per deployment. Cameras are typically set to take bursts of multiple photographs for each detection and are deployed in arrays of dozens or hundreds of sites, often resulting in millions of photographs per study. The task of converting photographs to animal detection records from such large image collections is daunting, and made worse by situations that generate copious empty pictures from false triggers (e.g., camera malfunction or moving vegetation) or pictures of humans. We developed computer vision algorithms to detect and classify moving objects to aid the first step of camera trap image filtering: separating the animal detections from the empty frames and pictures of humans. Our new work couples foreground object segmentation through background subtraction with deep learning classification to provide a fast and accurate scheme for human–animal detection. We provide these programs both as a Matlab GUI and as a command-line program developed in C++. The software reads folders of camera trap images and outputs images annotated with bounding boxes around moving objects and a text file summary of results. This software maintains high accuracy while reducing execution time by a factor of 14. It takes about 6 seconds to process a sequence of ten frames (on a 2.6 GHz CPU). For cameras with excessive empty frames due to camera malfunction or blowing vegetation, the software automatically removes 54% of the false-trigger sequences without influencing the human/animal sequences. We achieve 99.58% accuracy on image-level empty-versus-object classification on the Serengeti dataset. We offer the first computer vision tool for processing camera trap images that provides substantial time savings for processing large image datasets, thus improving our ability to monitor wildlife across large scales with camera traps.
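The two-stage pipeline described above (background subtraction to flag moving objects, followed by a deep learning classifier) can be sketched in Python with OpenCV. The published tool itself is a Matlab GUI and a C++ program, so this is only an illustration of the idea; the folder name and pixel threshold are hypothetical.

# Sketch: background subtraction flags frames with moving objects; flagged frames
# would then be passed to a human/animal classifier (not shown).
import cv2
import glob

subtractor = cv2.createBackgroundSubtractorMOG2(history=10, varThreshold=32)
MIN_FOREGROUND_PIXELS = 500  # illustrative threshold

for path in sorted(glob.glob("sequence_042/*.JPG")):  # hypothetical image sequence
    frame = cv2.imread(path)
    mask = subtractor.apply(frame)
    moving = cv2.countNonZero(mask) > MIN_FOREGROUND_PIXELS
    print(path, "candidate" if moving else "likely empty")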

9.
Ecological camera traps are increasingly used by wildlife biologists to unobtrusively monitor an ecosystem's animal populations. However, manual inspection of the images produced is expensive, laborious, and time-consuming. The success of deep learning systems using camera trap images has previously been explored in preliminary studies; these studies, however, are lacking in practicality. They are primarily focused on extremely large datasets, often millions of images, and there is little to no focus on performance when tasked with species identification in new locations not seen during training. Our goal was to test the capabilities of deep learning systems trained on camera trap images using modestly sized training data, compare performance when considering unseen background locations, and quantify the gradient of lower-bound performance to provide a guideline of data requirements relative to performance expectations. We use a dataset provided by Parks Canada containing 47,279 images collected from 36 unique geographic locations across multiple environments. Images represent 55 animal species and human activity, with high class imbalance. We trained, tested, and compared the capabilities of six deep learning computer vision networks using transfer learning and image augmentation: DenseNet201, Inception-ResNet-V3, InceptionV3, NASNetMobile, MobileNetV2, and Xception. We compared overall performance at "trained" locations, where DenseNet201 performed best with 95.6% top-1 accuracy, showing promise for deep learning methods for smaller-scale research efforts. Using trained locations, classifications with <500 images had low and highly variable recall of 0.750 ± 0.329, while classifications with over 1,000 images had a high and stable recall of 0.971 ± 0.0137. Models tasked with classifying species from untrained locations were less accurate, with DenseNet201 performing best with 68.7% top-1 accuracy. Finally, we provide an open repository where ecologists can insert their image data to train and test custom species detection models for their desired ecological domain.
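Transfer learning with a pretrained backbone and light image augmentation, as used in this study, follows a standard recipe. The Keras/TensorFlow sketch below shows that recipe with a frozen DenseNet201 backbone; the directory layout, image size, epoch count, and augmentation choices are assumptions for illustration, not the study's configuration.

# Transfer-learning sketch: frozen DenseNet201 backbone plus a new classification head.
import tensorflow as tf

NUM_CLASSES = 55
base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", pooling="avg", input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained feature extractor

model = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),   # simple image augmentation
    tf.keras.layers.RandomRotation(0.05),
    tf.keras.layers.Lambda(tf.keras.applications.densenet.preprocess_input),
    base,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical directory of class-labelled camera trap images.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "camera_trap_images/train", image_size=(224, 224), batch_size=32)
model.fit(train_ds, epochs=5)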

10.
  1. A time-consuming challenge faced by camera trap practitioners is the extraction of meaningful data from images to inform ecological management. An increasingly popular solution is automated image classification software. However, most solutions are not sufficiently robust to be deployed on a large scale due to a lack of location invariance when transferring models between sites. This prevents optimal use of ecological data, resulting in significant expenditure of time and resources to annotate and retrain deep learning models.
  2. We present a method ecologists can use to develop optimized location invariant camera trap object detectors by (a) evaluating publicly available image datasets characterized by high intradataset variability in training deep learning models for camera trap object detection and (b) using small subsets of camera trap images to optimize models for high accuracy domain‐specific applications.
  3. We collected and annotated three datasets of images of striped hyena, rhinoceros, and pigs, from the image‐sharing websites FlickR and iNaturalist (FiN), to train three object detection models. We compared the performance of these models to that of three models trained on the Wildlife Conservation Society and Camera CATalogue datasets, when tested on out‐of‐sample Snapshot Serengeti datasets. We then increased FiN model robustness by infusing small subsets of camera trap images into training.
  4. In all experiments, the mean Average Precision (mAP) of the FiN trained models was significantly higher (82.33%–88.59%) than that achieved by the models trained only on camera trap datasets (38.5%–66.74%). Infusion further improved mAP by 1.78%–32.08%.
  5. Ecologists can use FiN images for training deep learning object detection solutions for camera trap image processing to develop location invariant, robust, out-of-the-box software. Models can be further optimized by infusing 5%–10% camera trap images into the training data, as sketched below. This would allow AI technologies to be deployed on a large scale in ecological applications. Datasets and code related to this study are open source and available on this repository: https://doi.org/10.5061/dryad.1c59zw3tx.
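A minimal sketch of the infusion step, under the assumption that training data are managed as simple file lists: a small fraction of in-domain camera trap images is mixed into the web-sourced (FlickR/iNaturalist) training set. The file paths and the 10% fraction are illustrative.

# Sketch: blend a small share of camera trap images into a web-sourced training list.
import random

random.seed(42)
fin_images = [f"fin/{i:05d}.jpg" for i in range(20000)]           # web-sourced images
camera_trap_images = [f"traps/{i:05d}.jpg" for i in range(5000)]  # in-domain images

infusion_fraction = 0.10
n_infused = int(len(fin_images) * infusion_fraction)
training_set = fin_images + random.sample(camera_trap_images, n_infused)
random.shuffle(training_set)
print(len(training_set), "training images,", n_infused, "from camera traps")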

11.
Exploring camera trap monitoring protocols for forest wildlife in China
Wildlife diversity is a key indicator for biodiversity monitoring and for evaluating conservation management, so long-term wildlife monitoring is an important component of large-scale biodiversity monitoring programs such as the Chinese Forest Biodiversity Monitoring Network (CForBio). Since 2011, the CForBio network has progressively deployed camera traps (infrared-triggered cameras) across multiple forest dynamics plots to monitor wildlife diversity. With a national camera trap monitoring network for wildlife now taking initial shape in China, there is an urgent need to establish and implement unified monitoring protocols based on camera trap technology. Drawing on three years of camera trap monitoring in China's forest dynamics plots, and on the camera trap monitoring protocol for terrestrial vertebrates (mammals and birds) proposed by the Tropical Ecology Assessment and Monitoring Network, this paper discusses the current status and future of camera trap monitoring of forest wildlife in China, covering both monitoring protocols and practical considerations.

12.
13.
Estimating population parameters and spatial distribution patterns is one of the major goals of animal ecology and conservation biology. Over the past decade or so, the camera trap has emerged as a non-invasive field survey technique that offers great advantages where traditional survey methods are impractical, and it has been widely applied in wildlife ecology and conservation research. The animal occurrence data obtained by camera traps provide critically important quantitative information on wildlife populations. Starting from the working principles of camera traps, this paper reviews the principles and applications of two classes of models that are relatively mature in population ecology, for species with and without natural individual markings: (1) estimation of population density and abundance; and (2) estimation of spatial occupancy. Particular attention is paid to the logic of model development, the assumptions on which the models rely, their scope of application, remaining problems, and future directions. Finally, the paper synthesizes issues that still require attention when camera traps are used for population parameter estimation, as well as their potential for studies of population dynamics and biodiversity.
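Of the two model families reviewed here, the single-season occupancy model is the simplest to demonstrate. The Python sketch below estimates occupancy (psi) and detection probability (p) by maximum likelihood from simulated detection histories; it is a worked illustration of the general model, not code from the paper.

# Worked sketch of a single-season occupancy model fitted by maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # inverse logit

rng = np.random.default_rng(0)
n_sites, n_surveys, true_psi, true_p = 100, 6, 0.6, 0.3
occupied = rng.random(n_sites) < true_psi
histories = (rng.random((n_sites, n_surveys)) < true_p) & occupied[:, None]
detections = histories.sum(axis=1)

def neg_log_lik(params):
    psi, p = expit(params)  # keep both parameters in (0, 1)
    detected = detections > 0
    ll_det = np.log(psi) + detections * np.log(p) + (n_surveys - detections) * np.log(1 - p)
    ll_nodet = np.log(psi * (1 - p) ** n_surveys + (1 - psi))
    return -(ll_det[detected].sum() + ll_nodet[~detected].sum())

fit = minimize(neg_log_lik, x0=[0.0, 0.0])
print("psi_hat, p_hat =", expit(fit.x))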

14.
Camera trap data are increasingly being used to characterise relationships between the spatiotemporal activity patterns of sympatric mammal species, often with a view to inferring inter-specific interactions. In this context, we attempted to characterise the kleptoparasitic and predatory tendencies of spotted hyaenas Crocuta crocuta and lions Panthera leo from photographic data collected across 54 camera trap stations and two dry seasons in Tanzania's Ruaha National Park. We applied four different methods of quantifying spatiotemporal associations, including one strictly temporal approach (activity pattern overlap), one strictly spatial approach (co-occupancy modelling), and two spatiotemporal approaches (co-detection modelling and temporal spacing at shared camera trap sites). We expected a kleptoparasitic relationship between spotted hyaenas and lions to result in a positive spatiotemporal association, and further hypothesised that the association between lions and their favourite prey in Ruaha, the giraffe Giraffa camelopardalis and the zebra Equus quagga, would be stronger than those observed with non-preferred prey species (the impala Aepyceros melampus and the dikdik Madoqua kirkii). Only approaches incorporating both the temporal and spatial components of camera trap data resulted in significant associative patterns. The latter were particularly sensitive to the temporal resolution chosen to define species detections (i.e. occasion length), and only revealed a significant positive association between lion and spotted hyaena detections, as well as a tendency for both species to follow each other at camera trap sites, during the dry season of 2013, but not that of 2014. In both seasons, observed spatiotemporal associations between lions and each of the four herbivore species considered provided no convincing or consistent indications of any predatory preferences. Our study suggests that, when making inferences on inter-specific interactions from camera trap data, due regard should be given to the potential behavioural and methodological processes underlying observed spatiotemporal patterns.
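The strictly temporal approach mentioned above, activity pattern overlap, is usually summarised by a coefficient of overlap: the area shared by two species' diel activity densities. The sketch below approximates that coefficient with hourly histograms rather than the circular kernel densities used in practice, and the detection times are simulated.

# Sketch of a temporal activity-overlap coefficient (Delta) from detection times.
import numpy as np

rng = np.random.default_rng(0)
lion_hours = rng.normal(21, 3, 200) % 24     # hypothetical detection hours (0-24)
hyaena_hours = rng.normal(22, 4, 250) % 24

bins = np.linspace(0, 24, 25)
f_lion, _ = np.histogram(lion_hours, bins=bins, density=True)
f_hyaena, _ = np.histogram(hyaena_hours, bins=bins, density=True)

bin_width = bins[1] - bins[0]
delta = np.sum(np.minimum(f_lion, f_hyaena)) * bin_width
print(f"activity overlap (0 = none, 1 = identical): {delta:.2f}")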

15.
Bird surveys conducted using aerial images can be more accurate than those using airborne observers, but can also be more time-consuming if images must be analyzed manually. Recent advances in digital cameras and image-analysis software offer unprecedented potential for computer-automated bird detection and counts in high-resolution aerial images. We review the literature on this subject and provide an overview of the main image-analysis techniques. Birds that contrast sharply with image backgrounds (e.g., bright birds on dark ground) are generally the most amenable to automated detection, in some cases requiring only basic image-analysis software. However, the sophisticated analysis capabilities of modern object-based image analysis software provide ways to detect birds in more challenging situations based on a variety of attributes including color, size, shape, texture, and spatial context. Some techniques developed to detect mammals may also be applicable to birds, although the prevalent use of aerial thermal-infrared images for detecting large mammals is of limited applicability to birds because of the low pixel resolution of thermal cameras and the smaller size of birds. However, the increasingly high resolution of true-color cameras and availability of small unmanned aircraft systems (drones) that can fly at very low altitude now make it feasible to detect even small shorebirds in aerial images. Continued advances in camera and drone technology, in combination with increasingly sophisticated image analysis software, now make it possible for investigators involved in monitoring bird populations to save time and resources by increasing their use of automated bird detection and counts in aerial images. We recommend close collaboration between wildlife-monitoring practitioners and experts in the fields of remote sensing and computer science to help generate relevant, accessible, and readily applicable computer-automated aerial photographic census techniques.
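The simplest case described above, bright birds against a dark background, needs little more than thresholding and connected-component counting. The OpenCV sketch below illustrates that baseline; the file name, threshold, and size limits are hypothetical, and real workflows would add the colour, shape, texture, and context attributes mentioned above.

# Sketch: count bright, bird-sized blobs in an aerial image by simple thresholding.
import cv2

img = cv2.imread("aerial_scene.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical image
_, binary = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY)  # keep bright pixels only
n_labels, _, stats, _ = cv2.connectedComponentsWithStats(binary)

MIN_AREA, MAX_AREA = 20, 400  # plausible bird sizes in pixels (illustrative)
birds = [i for i in range(1, n_labels)  # label 0 is the background
         if MIN_AREA <= stats[i, cv2.CC_STAT_AREA] <= MAX_AREA]
print(f"candidate birds detected: {len(birds)}")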

16.
Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used is considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps, and in research this is often undesirable, so it is important to understand why the animals are disturbed. We conducted laboratory-based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing range (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine if animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals' hearing and produce illumination that can be seen by many species.
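The comparison at the heart of this study, whether measured camera emissions fall within a species' perceptual range, reduces to a frequency-band overlap check. The sketch below shows that check with placeholder numbers; none of the values are the study's measurements.

# Sketch: does a camera trap's measured sound band overlap a species' hearing range?
def bands_overlap(a, b):
    """True if two (low_Hz, high_Hz) frequency bands overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

camera_emission_hz = (2_000, 60_000)      # hypothetical measurement
hearing_ranges_hz = {                     # illustrative placeholder values
    "species A": (51, 48_000),
    "species B": (100, 60_000),
}
for species, band in hearing_ranges_hz.items():
    audible = bands_overlap(camera_emission_hz, band)
    print(f"{species}: {'can likely hear' if audible else 'unlikely to hear'} the camera")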

17.
The giant panda is a flagship species in ecological conservation. The infrared camera trap is an effective tool for monitoring the giant panda. Images captured by infrared camera traps must be accurately recognized before further statistical analyses can be implemented. Previous research has demonstrated that spatiotemporal and positional contextual information and the species distribution model (SDM) can improve image detection accuracy, especially for difficult-to-see images, i.e., those in which individual animals are only partially visible and therefore challenging for a model to detect. By utilizing the attention mechanism, we developed a unique method based on deep learning that incorporates object detection, contextual information, and the SDM to achieve better detection performance on difficult-to-see images. We obtained 1169 images of the wild giant panda and divided them into a training set and a test set in a 4:1 ratio. Model assessment metrics showed that our proposed model achieved an overall performance of 98.1% in mAP0.5 and 82.9% in recall on difficult-to-see images. Our research demonstrates that the fine-grained multimodal-fusing method applied to monitoring giant pandas in the wild can better detect difficult-to-see panda images and thus enhance the wildlife monitoring system.
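For illustration only: the study fuses detections, context, and the SDM through an attention-based architecture that is not reproduced here. The sketch below shows a far simpler way a detector confidence score and an SDM habitat-suitability score could be combined, with a placeholder weighting.

# Illustrative sketch of combining detector confidence with SDM habitat suitability.
def fused_score(detector_conf, sdm_suitability, w_detector=0.8):
    """Weighted combination of detection confidence and SDM suitability (both 0-1)."""
    return w_detector * detector_conf + (1 - w_detector) * sdm_suitability

# Hypothetical difficult-to-see detection: low visual confidence, but the camera
# sits in highly suitable panda habitat according to the SDM.
print(fused_score(detector_conf=0.42, sdm_suitability=0.95))  # -> 0.526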

18.
Camera traps are used by scientists and natural resource managers to acquire ecological data, and the rapidly increasing camera trapping literature highlights how popular this technique has become. Nevertheless, the methodological information reported in camera trap publications can vary widely, making replication of the study difficult. Here we propose a series of guiding principles for reporting methods and results obtained using camera traps. Attributes of camera trapping we cover include: (i) specifying the model(s) of camera trap(s) used, (ii) mode of deployment, (iii) camera settings, and (iv) study design. In addition to suggestions regarding best-practice data coding and analysis, we present minimum principles for standardizing information that we believe should be reported in all peer-reviewed papers. Standardised reporting enables more robust comparisons among studies, facilitates national and global reviews, enables greater ease of study replication, and leads to improved wildlife research and management outcomes.
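One way to make such reporting consistent is to capture attributes (i) to (iv) in a machine-readable deployment record. The sketch below is a hypothetical example of such a record; the field names and values are illustrative and do not represent a published standard.

# Sketch of a machine-readable deployment record covering the reporting attributes above.
import json

deployment = {
    "camera_model": "Reconyx HC600",                                   # (i) camera trap model
    "deployment_mode": {"height_m": 0.5, "orientation": "north", "bait": None},  # (ii)
    "camera_settings": {"trigger": "motion", "photos_per_trigger": 3,
                        "trigger_delay_s": 0, "sensitivity": "high"},  # (iii)
    "study_design": {"n_stations": 60, "spacing_km": 1.5,
                     "start": "2015-06-01", "end": "2015-09-30"},      # (iv)
}
print(json.dumps(deployment, indent=2))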

19.
Camera traps are a powerful and increasingly popular tool for mammal research, but like all survey methods, they have limitations. Identifying animal species from images is a critical component of camera trap studies, yet while researchers recognize constraints with experimental design or camera technology, image misidentification is still not well understood. We evaluated the effects of a species' attributes (body mass and distinctiveness) and individual observer variables (experience and confidence) on the accuracy of mammal identifications from camera trap images. We conducted an Internet-based survey containing 20 questions about observer experience and 60 camera trap images to identify. Images were sourced from surveys in northern Australia and included 25 species, ranging in body mass from the delicate mouse (Pseudomys delicatulus, 10 g) to the agile wallaby (Macropus agilis, >10 kg). There was a weak relationship between the accuracy of mammal identifications and observer experience. However, accuracy was highest (100%) for distinctive species (e.g. Short-beaked echidna [Tachyglossus aculeatus]) and lowest (36%) for superficially non-distinctive mammals (e.g. rodents like the Pale field-rat [Rattus tunneyi]). There was a positive relationship between the accuracy of identifications and body mass. Participant confidence was highest for large and distinctive mammals, but was not related to participant experience level. Identifications made with greater confidence were more likely to be accurate. Unreliability in identifications of mammal species is a significant limitation to camera trap studies, particularly where small mammals are the focus, or where similar-looking species co-occur. Integration of camera traps with conventional survey techniques (e.g. live-trapping), use of a reference library or computer-automated programs are likely to aid positive identifications, while employing a confidence rating system and/or multiple observers may lead to the collection of more robust data. Although our study focussed on Australian species, our findings apply to camera trap studies globally.
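The multiple-observer and confidence-rating idea raised at the end of this abstract can be implemented as a confidence-weighted consensus. The sketch below is a hypothetical illustration, not a method from the study.

# Sketch: combine several observers' identifications of one image by weighted vote.
from collections import defaultdict

def consensus(identifications):
    """identifications: list of (species, confidence 1-5) tuples from different observers."""
    scores = defaultdict(float)
    for species, confidence in identifications:
        scores[species] += confidence
    return max(scores, key=scores.get)

votes = [("Rattus tunneyi", 2), ("Pseudomys delicatulus", 4), ("Rattus tunneyi", 3)]
print(consensus(votes))  # -> "Rattus tunneyi" (total weight 5 vs. 4)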

20.
Human disturbance from tourism and other non-consumptive activities in protected areas may be stressful to wildlife. Animals may move away in space or time to avoid human interaction. For species of particular conservation concern, such as Baird's tapirs (Tapirus bairdii) and jaguars (Panthera onca), a better understanding of how they respond to different levels and types of disturbance is needed in order to manage human visitation to parks in ways that minimize negative outcomes for wildlife. We describe the overlap in activity patterns of tapirs, jaguars, and humans at logged and unlogged sites and at places with low versus high human visitation using camera survey data from protected areas of NW Belize, 2013–2016. Tapirs were nocturnal in all study sites, with > 80% of all tapir detections occurring between 1900 hr and 0500 hr. Their activity patterns were not different in unlogged versus logged sites and did not change with increased human traffic. Jaguars were cathemeral across sites but had more nocturnal activity at the site with the most human impact. Activity pattern overlap between tapirs and jaguars did not differ significantly between logged and unlogged sites, nor between areas with low and high human activity. Human traffic increased from 2013 to 2016 at most of the study locations. In conclusion, this camera trap dataset suggests that non-consumptive human disturbance does not alter the activity patterns of tapirs and jaguars in protected areas lacking hunting pressure.
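The nocturnality summary used above (the share of detections falling between 1900 and 0500, a window that wraps midnight) is a one-line calculation once detection timestamps are available; the sketch below uses simulated detection hours rather than the study's data.

# Sketch: fraction of detections in a nocturnal window that wraps midnight.
import numpy as np

rng = np.random.default_rng(0)
detection_hours = rng.normal(23, 3, 300) % 24  # hypothetical tapir detection hours

nocturnal = (detection_hours >= 19) | (detection_hours < 5)
print(f"nocturnal detections: {nocturnal.mean():.1%}")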

