Similar Articles
20 similar articles found (search time: 26 ms)
1.
As the capacity to collect and store large amounts of data expands, identifying and evaluating strategies to efficiently convert raw data into meaningful information is increasingly necessary. Across disciplines, this data processing task has become a significant challenge, delaying progress and actionable insights. In ecology, the growing use of camera traps (i.e., remotely triggered cameras) to collect information on wildlife has led to an enormous volume of raw data (i.e., images) in need of review and annotation. To expedite camera trap image processing, many have turned to the field of artificial intelligence (AI) and use machine learning models to automate tasks such as detecting and classifying wildlife in images. To contribute to understanding of the utility of AI tools for processing wildlife camera trap images, we evaluated the performance of a state-of-the-art computer vision model developed by Microsoft AI for Earth, named MegaDetector, using data from an ongoing camera trap study in Arctic Alaska, USA. Compared to image labels determined by manual human review, we found MegaDetector reliably determined the presence or absence of wildlife in images generated by motion detection camera settings (≥94.6% accuracy); however, performance was substantially poorer for images collected with time-lapse camera settings (≤61.6% accuracy). By examining time-lapse images where MegaDetector failed to detect wildlife, we gained practical insights into animal size and distance detection limits, and we discuss how those may impact the performance of MegaDetector in other systems. We anticipate our findings will stimulate critical thinking about the tradeoffs of using automated AI tools or manual human review to process camera trap images and help to inform effective implementation of study designs.
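A presence/absence comparison like the one described can be scripted against MegaDetector's batch-output JSON. The sketch below is our own illustration, not the authors' workflow: the confidence threshold, the `md_results.json`/`manual_labels.csv` file names, and the CSV column names are assumptions, and the JSON field names should be checked against the MegaDetector version actually used.

```python
# Hypothetical sketch: compare MegaDetector batch output against manual labels.
# Assumes the MegaDetector JSON layout ("images" -> "detections" with "category"
# and "conf") and a CSV of human labels; adjust names to your setup.
import csv
import json

CONF_THRESHOLD = 0.8     # detection confidence cut-off (assumption)
ANIMAL_CATEGORY = "1"    # "animal" class in MegaDetector output

def megadetector_presence(results_path: str, threshold: float = CONF_THRESHOLD) -> dict:
    """Return {image_file: True/False} for wildlife presence according to MegaDetector."""
    with open(results_path) as f:
        results = json.load(f)
    presence = {}
    for image in results.get("images", []):
        detections = image.get("detections") or []
        presence[image["file"]] = any(
            d["category"] == ANIMAL_CATEGORY and d["conf"] >= threshold
            for d in detections
        )
    return presence

def accuracy_vs_manual(presence: dict, manual_csv: str) -> float:
    """Manual-label CSV (hypothetical format): columns 'file' and 'animal' (0/1)."""
    correct = total = 0
    with open(manual_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row["file"] in presence:
                total += 1
                correct += presence[row["file"]] == bool(int(row["animal"]))
    return correct / total if total else float("nan")

# Example: accuracy_vs_manual(megadetector_presence("md_results.json"), "manual_labels.csv")
```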

2.
Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera’s field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera’s field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2–2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera’s field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

3.
Commercial camera traps are usually triggered by a Passive Infra-Red (PIR) motion sensor, necessitating a delay between triggering and the image being captured. This often seriously limits the ability to record images of small and fast-moving animals. It also results in many “empty” images, e.g., owing to moving foliage against a background of different temperature. In this paper we detail a new triggering mechanism based solely on the camera sensor. This is intended for use by citizen scientists and for deployment on an affordable, compact, low-power Raspberry Pi computer (RPi). Our system introduces a video frame filtering pipeline consisting of movement and image-based processing, which makes the use of Machine Learning (ML) feasible on a live camera stream on an RPi. We describe our free and open-source software implementation of the system; introduce a suitable ecology efficiency measure that mediates between specificity and recall; provide ground truth for a video clip collection from camera traps; and evaluate the effectiveness of our system thoroughly. Overall, our video camera trap turns out to be robust and effective.
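The movement stage of such a frame-filtering pipeline can be illustrated with simple frame differencing. This is a minimal sketch under our own assumptions (OpenCV available on the RPi, a hypothetical changed-pixel threshold), not the project's actual implementation, which couples movement filtering with ML-based image processing.

```python
# Minimal sketch of the movement-filtering stage of a video camera trap:
# frames whose pixel-difference score exceeds a threshold are passed on
# to an ML classifier. Not the authors' implementation.
import cv2

MOTION_THRESHOLD = 2000   # changed-pixel count that counts as movement (assumption)

def candidate_frames(video_source=0, threshold=MOTION_THRESHOLD):
    """Yield frames from the camera/video that show enough change to be worth classifying."""
    capture = cv2.VideoCapture(video_source)
    previous = None
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
        if previous is not None:
            diff = cv2.absdiff(previous, gray)
            _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
            if cv2.countNonZero(mask) > threshold:
                yield frame          # hand off to the image-based ML stage
        previous = gray
    capture.release()
```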

4.
5.
The use of camera traps is now widespread and their importance in wildlife studies is well understood. Camera trap studies can produce millions of photographs and there is a need for software to help manage photographs efficiently. In this paper, we describe a software system that was built to successfully manage a large behavioral camera trap study that produced more than a million photographs. We describe the software architecture and the design decisions that shaped the evolution of the program over the study's three-year period. The software system has the ability to automatically extract metadata from images, and add customized metadata to the images in a standardized format. The software system can be installed as a standalone application on popular operating systems. It is minimalistic, scalable and extendable so that it can be used by small teams or individual researchers for a broad variety of camera trap studies.
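As an illustration of the metadata-extraction step (not the described software itself), standard EXIF fields can be read with Pillow; the folder layout, the `*.JPG` glob, and the chosen tags below are assumptions.

```python
# Illustrative sketch: pulling standard EXIF metadata from camera trap images
# with Pillow so it can be stored in a standardized format alongside custom fields.
from pathlib import Path
from PIL import Image, ExifTags

def read_exif(image_path: str) -> dict:
    """Return EXIF metadata keyed by human-readable tag names."""
    with Image.open(image_path) as img:
        exif = img.getexif()
    return {ExifTags.TAGS.get(tag_id, str(tag_id)): value for tag_id, value in exif.items()}

def catalog_folder(folder: str) -> list[dict]:
    """Build a simple metadata catalog for all JPEGs in a folder (assumed naming)."""
    records = []
    for path in sorted(Path(folder).glob("*.JPG")):
        meta = read_exif(str(path))
        records.append({
            "file": path.name,
            "datetime": meta.get("DateTime"),   # capture time, if the camera wrote it
            "camera_model": meta.get("Model"),
        })
    return records
```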

6.
Ecological camera traps are increasingly used by wildlife biologists to unobtrusively monitor an ecosystem's animal population. However, manual inspection of the images produced is expensive, laborious, and time-consuming. The success of deep learning systems using camera trap images has been previously explored in preliminary stages. These studies, however, are lacking in their practicality. They are primarily focused on extremely large datasets, often millions of images, and there is little to no focus on performance when tasked with species identification in new locations not seen during training. Our goal was to test the capabilities of deep learning systems trained on camera trap images using modestly sized training data, compare performance when considering unseen background locations, and quantify how lower-bound performance scales with training data in order to provide a guideline linking data requirements to performance expectations. We use a dataset provided by Parks Canada containing 47,279 images collected from 36 unique geographic locations across multiple environments. Images represent 55 animal species and human activity with high class imbalance. We trained, tested, and compared the capabilities of six deep learning computer vision networks using transfer learning and image augmentation: DenseNet201, Inception-ResNet-V3, InceptionV3, NASNetMobile, MobileNetV2, and Xception. We compare overall performance on "trained" locations, where DenseNet201 performed best with 95.6% top-1 accuracy, showing promise for deep learning methods for smaller-scale research efforts. Using trained locations, classifications with <500 images had low and highly variable recall of 0.750 ± 0.329, while classifications with over 1,000 images had a high and stable recall of 0.971 ± 0.0137. Models tasked with classifying species from untrained locations were less accurate, with DenseNet201 performing best with 68.7% top-1 accuracy. Finally, we provide an open repository where ecologists can insert their image data to train and test custom species detection models for their desired ecological domain.
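A transfer-learning setup of the kind compared in the study can be sketched in tf.keras. This is a hedged illustration, not the authors' code: the input resolution, augmentation choices, and frozen-backbone strategy are our assumptions.

```python
# Hedged sketch of transfer learning for camera trap species classification:
# an ImageNet-pretrained DenseNet201 backbone with a new classification head
# and simple image augmentation, built with tf.keras.
import tensorflow as tf

NUM_CLASSES = 55          # species/human classes in the Parks Canada dataset
IMAGE_SIZE = (224, 224)   # assumed input resolution

def build_model(num_classes: int = NUM_CLASSES) -> tf.keras.Model:
    augment = tf.keras.Sequential([
        tf.keras.layers.RandomFlip("horizontal"),
        tf.keras.layers.RandomRotation(0.05),
        tf.keras.layers.RandomZoom(0.1),
    ])
    backbone = tf.keras.applications.DenseNet201(
        weights="imagenet", include_top=False, input_shape=IMAGE_SIZE + (3,))
    backbone.trainable = False   # freeze pretrained features for transfer learning

    inputs = tf.keras.Input(shape=IMAGE_SIZE + (3,))
    x = augment(inputs)
    x = tf.keras.applications.densenet.preprocess_input(x)
    x = backbone(x, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",  # integer species labels assumed
                  metrics=["accuracy"])
    return model
```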

7.
8.
In the current context of biodiversity loss through habitat fragmentation, the effectiveness of wildlife crossings, installed at great expense as compensatory measures, is of vital importance for ecological and socio-economic actors. The evaluation of these structures is directly impacted by the efficiency of the monitoring tools (e.g., camera traps) used to assess the effectiveness of these crossings by observing the animals that use them. The aim of this study was to quantify the efficiency of camera traps in a wildlife crossing evaluation. Six permanent recording video systems sharing the same field of view as six Reconyx HC600 camera traps installed in three wildlife underpasses were used to assess the exact proportion of missed events (an event being the presence of an animal within the field of view) and the error rate concerning underpass crossing behavior (defined as either Entry or Refusal). A sequence of photographs was triggered by either animals (true trigger) or artefacts (false trigger). We quantified the number of false triggers that had actually been caused by animals that were not visible in the images (“false” false triggers). Camera traps failed to record 43.6% of small mammal events (voles, mice, shrews, etc.) and 17% of medium-sized mammal events. The type of crossing behavior (Entry or Refusal) was incorrectly assessed in 40.1% of events, with a higher error rate for entries than for refusals. Among the 3.8% of false triggers, 85% were “false” false triggers. This study indicates a global underestimation of the effectiveness of wildlife crossings for small mammals. Means of improving efficiency are discussed.

9.
  1. Camera traps have become an extensively utilized tool in ecological research, but the manual processing of images created by a network of camera traps rapidly becomes an overwhelming task, even for small camera trap studies.
  2. We used transfer learning to create convolutional neural network (CNN) models for identification and classification. By utilizing a small dataset with an average of 275 labeled images per species class, the model was able to distinguish between species and remove false triggers.
  3. We trained the model to detect 17 object classes with individual species identification, reaching an accuracy of up to 92% and an average F1 score of 85% (the standard precision/recall/F1 relations are sketched after this list). Previous studies have suggested the need for thousands of images of each object class to reach results comparable to those achieved by human observers; however, we show that such accuracy can be achieved with fewer images.
  4. With transfer learning and an ongoing camera trap study, a deep learning model can be successfully created even by a small camera trap study. A generalizable model produced from an unbalanced class set can be utilized to extract trap events that can later be confirmed by human processors.
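For reference (our addition, not part of the abstract), the reported accuracy and F1 figures combine per-class precision and recall in the standard way, where TP, FP, and FN are true positives, false positives, and false negatives:

\[
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
F_{1} = \frac{2\,\mathrm{Precision}\cdot\mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}.
\]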

10.
The Carpentarian Pseudantechinus (Pseudantechinus mimulus) is a poorly studied dasyurid marsupial that inhabits rocky outcrops in the Mount Isa Inlier bioregion in Queensland and the Gulf Coastal and Gulf Fall and Uplands bioregions in the Northern Territory. It is readily detected by passive infrared triggered camera traps (‘camera traps’). Camera trap data can be used to develop detection probability estimates from which activity patterns can be inferred, but no effort has previously been made to determine changes in the detectability of P. mimulus throughout the year. We undertook a 13-month baited camera trap survey across nine sampling periods at 60 locations of known historic presence or nearby suitable habitat to assess the change in detection rates and detection probabilities of P. mimulus across a year. Detection probabilities were calculated from camera trap data within a single-species, multi-season occupancy framework to determine optimal survey timing. Detection probability data were used to calculate the likelihood of false absences to determine optimal survey duration. We recorded 2,493 detections of P. mimulus over 10,966 camera days. Detection probability ranged from 0.009 to 0.179 and was significantly higher from April to October than from November to March. The likelihood of false absences varied by sampling period and desired level of confidence. We find that camera trap surveys for P. mimulus are best conducted from April to October, but optimal survey duration depends upon the time of year and the desired level of confidence that an observed absence from a given site reflects a true absence at that site. Attaining a minimum of 80% confidence of absence requires as few as 9 days of survey effort in May and up to 16 days of survey effort in October.
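The link between detection probability and survey duration follows the standard false-absence calculation. The formulas below are our reconstruction (assuming independent daily detections with a constant probability p within a sampling period), not notation taken from the paper:

\[
\Pr(\text{false absence after } d \text{ days}) = (1 - p)^{d},
\qquad
d_{\min} = \left\lceil \frac{\ln(1 - C)}{\ln(1 - p)} \right\rceil ,
\]

where C is the desired confidence of absence. For example, C = 0.80 with the May-level detection probability of p ≈ 0.179 gives d_min = ⌈ln 0.2 / ln 0.821⌉ = 9 days, consistent with the minimum survey effort reported above.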

11.
Although the use of camera traps in wildlife management is well established, technologies to automate image processing have been much slower in development, despite their potential to drastically reduce the personnel time and cost required to review photos. We developed AnimalFinder in MATLAB® to identify animal presence in time-lapse camera trap images by comparing individual photos to all images contained within the subset of images (i.e. photos from the same survey and site), with some manual processing required to remove false positives and collect other relevant data (species, sex, etc.). We tested AnimalFinder on a set of camera trap images and compared the presence/absence results against manual-only review for white-tailed deer (Odocoileus virginianus), wild pigs (Sus scrofa), and raccoons (Procyon lotor). We compared abundance estimates, model rankings, and coefficient estimates of detection and abundance for white-tailed deer using N-mixture models. AnimalFinder performance varied depending on a threshold value that affects program sensitivity to frequently occurring pixels in a series of images. Higher threshold values led to fewer false negatives (missed deer images) but increased manual processing time; even at the highest threshold value, the program reduced the images requiring manual review by ~40% and correctly identified >90% of deer, raccoon, and wild pig images. Abundance estimates of white-tailed deer were similar between AnimalFinder and the manual-only method (~1–2 deer difference, depending on the model), as were model rankings and coefficient estimates. Our results show that the program significantly reduced data processing time and may increase the efficiency of camera trapping surveys.
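The general idea of flagging frames that differ from the frequently occurring pixels of a site's image series can be illustrated in Python (the actual tool is a MATLAB program; the resize dimensions and difference threshold below are our assumptions).

```python
# Illustrative sketch of background-based flagging for time-lapse images:
# build a background from all photos at a site and flag photos that differ
# from it by more than a tunable threshold, trading missed animals against
# manual review time. Not the AnimalFinder implementation.
import numpy as np
from PIL import Image

def load_gray(path: str, size=(400, 300)) -> np.ndarray:
    """Load an image as a downscaled grayscale array."""
    return np.asarray(Image.open(path).convert("L").resize(size), dtype=np.float32)

def flag_images(image_paths: list[str], diff_threshold: float = 12.0) -> list[str]:
    """Return paths whose mean absolute difference from the site background exceeds the threshold."""
    stack = np.stack([load_gray(p) for p in image_paths])
    background = np.median(stack, axis=0)      # pixels that occur frequently across the series
    flagged = []
    for path, frame in zip(image_paths, stack):
        score = float(np.mean(np.abs(frame - background)))
        if score > diff_threshold:             # lower threshold: fewer missed animals, more review
            flagged.append(path)
    return flagged
```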

12.
Camera traps often produce enormous numbers of images, and the empty images that contain no animals can be overwhelming. Deep learning is a machine-learning approach widely used to identify empty camera trap images automatically. Existing methods with high accuracy are based on millions of training samples (images) and require substantial time and personnel costs to label the training samples manually. Reducing the number of training samples can save the cost of manually labeling images. However, deep learning models based on a small dataset produce a large omission error for animal images, i.e., many animal images tend to be identified as empty images, which may lead to lost opportunities to discover and observe species. Therefore, it is still a challenge to build a DCNN model with small errors on a small dataset. Using deep convolutional neural networks and a small dataset, we proposed an ensemble learning approach based on conservative strategies to identify and remove empty images automatically. Furthermore, we proposed three schemes for automatically identifying empty images, aimed at users who accept different omission errors for animal images. Our experimental results showed that these three schemes automatically identified and removed 50.78%, 58.48%, and 77.51% of the empty images in the dataset when the omission errors were 0.70%, 1.13%, and 2.54%, respectively. The analysis showed that using our schemes to automatically identify empty images did not omit species information; it only slightly changed the frequency of species occurrence. When only a small dataset is available, our approach provides an alternative for users to automatically identify and remove empty images, which can significantly reduce the time and personnel costs required to remove empty images manually. The cost savings are comparable to the percentage of empty images removed by the models.
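One way to picture a "conservative" ensemble rule is unanimous voting: an image is discarded as empty only when every model in the ensemble is confident it is empty. The sketch below is our own illustration (the scoring functions and confidence cut-off are stand-ins), not the authors' implementation.

```python
# Hedged sketch of a conservative ensemble rule for discarding empty images:
# removal requires unanimous, confident agreement, which keeps the omission
# error for animal images low at the cost of removing fewer empty images.
from typing import Callable, Sequence

# Each model maps an image path to P(image is empty); these are assumed stand-ins.
EmptyScorer = Callable[[str], float]

def split_empty_images(image_paths: Sequence[str],
                       models: Sequence[EmptyScorer],
                       confidence: float = 0.95) -> tuple[list[str], list[str]]:
    """Return (removed_as_empty, kept_for_review) under unanimous-vote removal."""
    removed, kept = [], []
    for path in image_paths:
        if all(model(path) >= confidence for model in models):
            removed.append(path)      # every model agrees: very likely empty
        else:
            kept.append(path)         # any disagreement: keep for manual review
    return removed, kept
```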

13.
Effective conservation and management of primates depend on our ability to accurately assess and monitor populations through research. Camera traps are proving to be useful tools for studying a variety of primate species in diverse and often difficult habitats. Here, we discuss the use of camera traps in primatology to survey rare species, assess populations, and record behavior. We also discuss methodological considerations for primate studies, including camera trap research design, inherent biases, and some limitations of camera traps. We encourage other primatologists to use transparent and standardized methods and, when appropriate, to consider using an occupancy framework to account for imperfect detection, as well as complementary techniques (e.g., transect counts, interviews, behavioral observation) to ensure accuracy of data interpretation. In addition, we address the conservation implications of camera trapping, such as using data to inform industry, garnering public support, and contributing photos to large-scale habitat monitoring projects. Camera trap studies such as these are sure to advance research and conservation of primate species. Finally, we provide commentary on the ethical considerations (e.g., photographs of humans and illegal activity) of using camera traps in primate research. We believe ethical considerations will be particularly important in future primate studies, although this topic has not previously been addressed for camera trap use in primatology or for any other wildlife species.

14.
Occupancy models using incidence data collected repeatedly at sites across the range of a population are increasingly employed to infer patterns and processes influencing population distribution and dynamics. While such work is common in terrestrial systems, fewer examples exist in marine applications. This disparity likely exists because the replicate samples required by these models to account for imperfect detection are often impractical to obtain when surveying aquatic organisms, particularly fishes. We employ simultaneous sampling using fish traps and novel underwater camera observations to generate the requisite replicate samples for occupancy models of red snapper, a reef fish species. Since the replicate samples are collected simultaneously by multiple sampling devices, many typical problems encountered when obtaining replicate observations are avoided. Our results suggest that augmenting traditional fish trap sampling with camera observations not only doubled the probability of detecting red snapper in reef habitats off the Southeast coast of the United States, but also supplied the necessary observations to infer factors influencing population distribution and abundance while accounting for imperfect detection. We found that detection probabilities tended to be higher for camera traps than for traditional fish traps. Furthermore, camera trap detections were influenced by the current direction and turbidity of the water, indicating that collecting data on these variables is important for future monitoring. These models indicate that the distribution and abundance of this species is more heavily influenced by latitude and depth than by micro-scale reef characteristics, lending credence to previous characterizations of red snapper as a reef habitat generalist. This study demonstrates the utility of simultaneous sampling devices, including camera traps, in aquatic environments to inform occupancy models and account for imperfect detection when describing factors influencing fish population distribution and dynamics.
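For background (our notation, not the paper's), a standard single-season occupancy model with occupancy probability ψ and replicate-specific detection probabilities p_k gives the probability of a site's detection history y_1, …, y_K as:

\[
\Pr(y_{1},\dots,y_{K}) \;=\;
\psi \prod_{k=1}^{K} p_{k}^{\,y_{k}} \,(1-p_{k})^{\,1-y_{k}}
\;+\; (1-\psi)\,\mathbf{1}\!\left[\sum_{k=1}^{K} y_{k} = 0\right].
\]

Here the replicates would come from the simultaneously deployed trap and camera, and covariates such as current direction and turbidity would enter through the p_k.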

15.
Demographic and life history data from wild populations of long-lived primate species are difficult to acquire but are critical for evaluating population viability and the success of conservation efforts. Camera trapping provides an opportunity for researchers to monitor wild animal populations indirectly and could help provide demographic and life history data in a way that demands fewer person-hours in the field, is less disruptive to the study population because it requires less direct contact, and may be cost effective. Using data on group composition collected concurrently through both direct observation and camera trap monitoring, we evaluate whether camera traps can provide reliable information on population dynamics (births, disappearances, interbirth intervals, and other demographic variables) for a wild population of white-bellied spider monkeys (Ateles belzebuth), an Endangered species. We placed camera traps focused on the sole access point used by the monkeys to visit a geophagy site located roughly in the center of one group’s home range, and we reviewed all of the photos collected at that site over a roughly 3-yr period to identify the individual monkeys recorded in the pictures. Group composition based on 2947 photos containing 3977 individual monkey images matched perfectly the data collected concurrently through direct observation. The camera traps also provided estimates of the dates when individuals disappeared from the study group, and of infant births during the study. We conclude that long-term camera trap monitoring of wild populations of white-bellied spider monkeys—and other animals that are individually recognizable and that regularly visit predictable resources—can be a useful tool for monitoring their population dynamics indirectly.

16.
Camera traps are a powerful and increasingly popular tool for mammal research, but like all survey methods, they have limitations. Identifying animal species from images is a critical component of camera trap studies, yet while researchers recognize constraints with experimental design or camera technology, image misidentification is still not well understood. We evaluated the effects of a species’ attributes (body mass and distinctiveness) and individual observer variables (experience and confidence) on the accuracy of mammal identifications from camera trap images. We conducted an Internet‐based survey containing 20 questions about observer experience and 60 camera trap images to identify. Images were sourced from surveys in northern Australia and included 25 species, ranging in body mass from the delicate mouse (Pseudomys delicatulus, 10 g) to the agile wallaby (Macropus agilis, >10 kg). There was a weak relationship between the accuracy of mammal identifications and observer experience. However, accuracy was highest (100%) for distinctive species (e.g. Short‐beaked echidna [Tachyglossus aculeatus]) and lowest (36%) for superficially non‐distinctive mammals (e.g. rodents like the Pale field‐rat [Rattus tunneyi]). There was a positive relationship between the accuracy of identifications and body mass. Participant confidence was highest for large and distinctive mammals, but was not related to participant experience level. Identifications made with greater confidence were more likely to be accurate. Unreliability in identifications of mammal species is a significant limitation to camera trap studies, particularly where small mammals are the focus, or where similar‐looking species co‐occur. Integration of camera traps with conventional survey techniques (e.g. live‐trapping), use of a reference library or computer‐automated programs are likely to aid positive identifications, while employing a confidence rating system and/or multiple observers may lead to a collection of more robust data. Although our study focussed on Australian species, our findings apply to camera trap studies globally.

17.
Application of camera-trapping technology in wildlife monitoring in China: problems and limitations
Infrared-triggered cameras (camera traps), as a "non-invasive" sampling technique for wildlife, have become one of the most commonly used approaches for studying animal diversity, population ecology, and behavior. Their development and popularization have brought many opportunities for research on wildlife diversity and species conservation in China, and most nature reserves in the country now use camera trap technology for species monitoring. Drawing on relevant studies published over the past 20 years, this paper summarizes common problems arising in the application of camera trap technology with respect to research content, experimental design, and development trends, and discusses the limitations of the technique in practice in four areas: disturbance of animals by the cameras, image identification, the applicable scope of research, and safety and security. Finally, in light of future directions for camera trap technology, we raise issues including the establishment of technical standards, data integration and sharing, maintenance of copyright over image data, and improvement of monitoring efficiency.

18.
Ecological Informatics, 2012, 7(6): 345–353
Camera traps and the images they generate are becoming an essential tool for field biologists studying and monitoring terrestrial animals, in particular medium to large terrestrial mammals and birds. In the last five years, camera traps have made the transition to digital technology, and these devices now produce hundreds of instantly available images per month and a large amount of ancillary metadata (e.g., date, time, temperature, and image size). Despite this accelerated pace in the development of digital image capture, field biologists still lack adequate software solutions to process and manage the increasing amount of information in a cost-efficient way. In this paper we describe a software system that we have developed, called DeskTEAM, to address this issue. DeskTEAM has been developed in the context of the Tropical Ecology Assessment and Monitoring Network (TEAM), a global network that monitors terrestrial vertebrates. We describe the software architecture and functionality and its utility in managing and processing the large amounts of digital camera trap data collected throughout the global TEAM network. DeskTEAM incorporates software features and functionality that make it relevant to the broad camera trapping community. These include the ability to run the application locally on a laptop or desktop computer, without requiring an Internet connection, as well as the ability to run on multiple operating systems; an intuitive navigational user interface with multiple levels of detail (from individual images to whole groups of images) that allows users to easily manage hundreds or thousands of images; the ability to automatically extract EXIF and custom metadata information from digital images to increase standardization; the availability of embedded taxonomic lists to allow users to easily tag images with species identities; and the ability to export data packages consisting of data, metadata and images in standardized formats so that they can be transferred to online data warehouses for easy archiving and dissemination. Lastly, building these software tools for wildlife scientists provides valuable lessons for the ecoinformatics community.

19.
Metal box (e.g., Elliott, Sherman) traps and remote cameras are two of the most commonly employed methods presently used to survey terrestrial mammals. However, their relative efficacy at accurately detecting cryptic small mammals has not been adequately assessed. The present study therefore compared the effectiveness of metal box (Elliott) traps and vertically oriented, close-range, white flash camera traps in detecting small mammals occurring in the Scenic Rim of eastern Australia. We also conducted a preliminary survey to determine the effectiveness of a conservation detection dog (CDD) for identifying the presence of a threatened carnivorous marsupial, Antechinus arktos, in present-day and historical locations, using camera traps to corroborate detections. Two hundred Elliott traps and 20 white flash camera traps were set for four deployments per method across a site where the target small mammals, including A. arktos, are known to occur. Camera traps produced higher detection probabilities than Elliott traps for all four species. Thus, vertically mounted white flash cameras were preferable for detecting the presence of cryptic small mammals in our survey. The CDD, which had been trained to detect A. arktos scat, indicated a total of 31 times when deployed in the field survey area, with subsequent camera trap deployments corroborating A. arktos presence at 100% (3) of the indication locations checked. Importantly, the dog indicated twice within Border Ranges National Park, where historical (1980s–1990s) specimen-based records indicate the species was present, but extensive Elliott and camera trapping over the last 5–10 years has resulted in zero A. arktos captures. Camera traps subsequently corroborated A. arktos presence at these sites. This demonstrates that detection dogs can be a highly effective means of locating threatened, cryptic species, especially when traditional methods are unable to detect low-density mammal populations.

20.
Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used is considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps; in research this is often undesirable, so it is important to understand why the animals are disturbed. We conducted laboratory-based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured the ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing range (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine whether animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals’ hearing and produce illumination that can be seen by many species.
