Similar Articles
1.
《Ecological Informatics》2012,7(6):345-353
Camera traps and the images they generate are becoming an essential tool for field biologists studying and monitoring terrestrial animals, in particular medium to large terrestrial mammals and birds. In the last five years, camera traps have made the transition to digital technology, where these devices now produce hundreds of instantly available images per month and a large amount of ancillary metadata (e.g., date, time, temperature, image size, etc.). Despite this accelerated pace in the development of digital image capture, field biologists still lack adequate software solutions to process and manage the increasing amount of information in a cost efficient way. In this paper we describe a software system that we have developed, called DeskTEAM, to address this issue. DeskTEAM has been developed in the context of the Tropical Ecology Assessment and Monitoring Network (TEAM), a global network that monitors terrestrial vertebrates. We describe the software architecture and functionality and its utility in managing and processing large amounts of digital camera trap data collected throughout the global TEAM network. DeskTEAM incorporates software features and functionality that make it relevant to the broad camera trapping community. These include the ability to run the application locally on a laptop or desktop computer, without requiring an Internet connection, as well as the ability to run on multiple operating systems; an intuitive navigational user interface with multiple levels of detail (from individual images, to whole groups of images) which allows users to easily manage hundreds or thousands of images; ability to automatically extract EXIF and custom metadata information from digital images to increase standardization; availability of embedded taxonomic lists to allow users to easily tag images with species identities; and the ability to export data packages consisting of data, metadata and images in standardized formats so that they can be transferred to online data warehouses for easy archiving and dissemination. Lastly, building these software tools for wildlife scientists provides valuable lessons for the ecoinformatics community.  相似文献   

2.
Camera traps are a popular tool to sample animal populations because they are noninvasive, detect a variety of species, and can record many thousands of animal detections per deployment. Cameras are typically set to take bursts of multiple photographs for each detection and are deployed in arrays of dozens or hundreds of sites, often resulting in millions of photographs per study. The task of converting photographs to animal detection records from such large image collections is daunting, and made worse by situations that generate copious empty pictures from false triggers (e.g., camera malfunction or moving vegetation) or pictures of humans. We developed computer vision algorithms to detect and classify moving objects to aid the first step of camera trap image filtering—separating the animal detections from the empty frames and pictures of humans. Our new work couples foreground object segmentation through background subtraction with deep learning classification to provide a fast and accurate scheme for human–animal detection. We provide these programs both as a Matlab GUI and as a C++ command-line tool. The software reads folders of camera trap images and outputs images annotated with bounding boxes around moving objects and a text file summary of results. This software maintains high accuracy while reducing execution time by a factor of 14: it takes about 6 seconds to process a sequence of ten frames on a 2.6 GHz CPU. For cameras that produce excessive empty frames due to camera malfunction or blowing vegetation, the software automatically removes 54% of the false-trigger sequences without affecting the human/animal sequences. We achieve 99.58% accuracy on image-level empty-versus-object classification of the Serengeti dataset. We offer the first computer vision tool for processing camera trap images that provides substantial time savings for processing large image datasets, thus improving our ability to monitor wildlife across large scales with camera traps.
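The authors' pipeline is distributed as a Matlab GUI and a C++ command-line tool, so the snippet below is only a hedged Python/OpenCV illustration of the same two-stage idea: background subtraction proposes moving regions, and a classifier (here a hypothetical placeholder) decides what each region contains.

```python
# Two-stage human/animal filtering, sketched with OpenCV (the published tools
# are Matlab/C++). Stage 1: background subtraction proposes moving regions.
# Stage 2: a classifier (placeholder below) labels each proposed crop.
import cv2

def moving_object_boxes(frames, min_area=500):
    """Yield (frame_index, x, y, w, h) for moving regions in a frame sequence."""
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    for i, frame in enumerate(frames):
        mask = subtractor.apply(frame)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for contour in contours:
            if cv2.contourArea(contour) >= min_area:
                yield (i,) + cv2.boundingRect(contour)

def classify_crop(crop):
    """Hypothetical hook: a trained CNN would return 'human', 'animal', or 'empty'."""
    raise NotImplementedError
```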

3.
This paper describes and explains design patterns for software that supports how analysts can efficiently inspect and classify camera trap images for wildlife-related ecological attributes. Broadly speaking, a design pattern identifies a commonly occurring problem and a general reusable design approach to solve that problem. A developer can then use that design approach to create a specific software solution appropriate to the particular situation under consideration. In particular, design patterns for camera trap image analysis by wildlife biologists address solutions to commonly occurring problems they face while inspecting a large number of images and entering ecological data describing image attributes. We developed design patterns for image classification based on our understanding of biologists' needs that we acquired over 8 years during development and application of the freely available Timelapse image analysis system. For each design pattern presented, we describe the problem, a design approach that solves that problem, and a concrete example of how Timelapse addresses the design pattern. Our design patterns offer both general and specific solutions related to: maintaining data consistency, efficiencies in image inspection, methods for navigating between images, efficiencies in data entry including highly repetitious data entry, and sorting and filtering images into sequences, episodes, and subsets. These design patterns can inform the design of other camera trap systems and can help biologists assess how competing software products address their project-specific needs along with determining an efficient workflow.

4.
Commercial camera traps are usually triggered by a Passive Infra-Red (PIR) motion sensor, necessitating a delay between triggering and the image being captured. This often seriously limits the ability to record images of small and fast moving animals. It also results in many “empty” images, e.g., owing to moving foliage against a background of different temperature. In this paper we detail a new triggering mechanism based solely on the camera sensor. This is intended for use by citizen scientists and for deployment on an affordable, compact, low-power Raspberry Pi computer (RPi). Our system introduces a video frame filtering pipeline consisting of movement and image-based processing, which makes the use of Machine Learning (ML) feasible on a live camera stream on an RPi. We describe our free and open-source software implementation of the system; introduce a suitable ecology efficiency measure that mediates between specificity and recall; provide ground-truth for a video clip collection from camera traps; and evaluate the effectiveness of our system thoroughly. Overall, our video camera trap turns out to be robust and effective.
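The abstract describes a movement-then-ML filtering pipeline but not its code; the sketch below illustrates only the movement stage, using plain frame differencing in OpenCV. It is an assumption-laden stand-in rather than the authors' implementation, and the thresholds are illustrative only.

```python
# Movement stage only: frame differencing flags frames worth sending to a
# heavier ML classifier. Thresholds are illustrative, not the authors' values.
import cv2

def motion_frames(video_path, pixel_thresh=25, changed_fraction=0.002):
    """Yield frames whose difference from the previous frame exceeds a small
    fraction of changed pixels."""
    cap = cv2.VideoCapture(video_path)
    prev_gray = None
    ok, frame = cap.read()
    while ok:
        gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
        if prev_gray is not None:
            diff = cv2.absdiff(gray, prev_gray)
            if (diff > pixel_thresh).mean() > changed_fraction:
                yield frame            # candidate for the image-based ML stage
        prev_gray = gray
        ok, frame = cap.read()
    cap.release()
```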

5.
  1. Camera traps have become an extensively utilized tool in ecological research, but the manual processing of images created by a network of camera traps rapidly becomes an overwhelming task, even for small camera trap studies.
  2. We used transfer learning to create convolutional neural network (CNN) models for identification and classification (a minimal sketch of this approach follows this list). By utilizing a small dataset with an average of 275 labeled images per species class, the model was able to distinguish between species and remove false triggers.
  3. We trained the model to detect 17 object classes with individual species identification, reaching an accuracy of up to 92% and an average F1 score of 85%. Previous studies have suggested that thousands of images of each object class are needed to reach results comparable to those achieved by human observers; however, we show that such accuracy can be achieved with fewer images.
  4. With transfer learning, a deep learning model can be successfully created even by a small, ongoing camera trap study. A generalizable model produced from an unbalanced class set can be utilized to extract trap events that can later be confirmed by human processors.
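The study's framework and architecture are not specified in this abstract, so the sketch referenced in point 2 assumes a torchvision ResNet-18 pretrained on ImageNet, with the backbone frozen and only a new 17-class head trained on the camera trap images.

```python
# Transfer learning sketch (assumed framework: PyTorch/torchvision >= 0.13;
# the paper does not state its architecture). A pretrained ResNet-18 backbone
# is frozen and only a new 17-class head is trained on the camera trap images.
import torch
import torch.nn as nn
from torchvision import models

def build_transfer_model(num_classes=17):
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for param in model.parameters():               # freeze the pretrained backbone
        param.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head
    return model

model = build_transfer_model()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# A training loop would then iterate over a DataLoader of labeled images.
```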

6.
  1. A time-consuming challenge faced by camera trap practitioners is the extraction of meaningful data from images to inform ecological management. An increasingly popular solution is automated image classification software. However, most solutions are not sufficiently robust to be deployed on a large scale due to a lack of location invariance when transferring models between sites. This prevents optimal use of ecological data, resulting in significant expenditure of time and resources to annotate and retrain deep learning models.
  2. We present a method ecologists can use to develop optimized location invariant camera trap object detectors by (a) evaluating publicly available image datasets characterized by high intradataset variability in training deep learning models for camera trap object detection and (b) using small subsets of camera trap images to optimize models for high accuracy domain‐specific applications.
  3. We collected and annotated three datasets of images of striped hyena, rhinoceros, and pigs from the image-sharing websites Flickr and iNaturalist (FiN) to train three object detection models. We compared the performance of these models to that of three models trained on the Wildlife Conservation Society and Camera CATalogue datasets, when tested on out-of-sample Snapshot Serengeti datasets. We then increased FiN model robustness by infusing small subsets of camera trap images into training.
  4. In all experiments, the mean Average Precision (mAP) of the FiN-trained models was significantly higher (82.33%–88.59%) than that achieved by the models trained only on camera trap datasets (38.5%–66.74%). Infusion further improved mAP by 1.78%–32.08% (a sketch of how per-class average precision is computed appears after this list).
  5. Ecologists can use FiN images for training deep learning object detection solutions for camera trap image processing to develop location invariant, robust, out‐of‐the‐box software. Models can be further optimized by infusion of 5%–10% camera trap images into training data. This would allow AI technologies to be deployed on a large scale in ecological applications. Datasets and code related to this study are open source and available on this repository: https://doi.org/10.5061/dryad.1c59zw3tx.
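As referenced in point 4, the following hedged sketch shows how single-class average precision at a fixed IoU threshold can be computed from predicted and ground-truth boxes. It follows the standard metric definition rather than the authors' exact evaluation code; boxes are assumed to be (x1, y1, x2, y2) tuples.

```python
# Single-class average precision at a fixed IoU threshold (standard definition,
# not the authors' evaluation code). Assumes ground_truth is non-empty.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def average_precision(predictions, ground_truth, iou_thresh=0.5):
    """predictions: list of (image_id, score, box); ground_truth: dict image_id -> list of boxes."""
    predictions = sorted(predictions, key=lambda p: -p[1])      # highest confidence first
    matched = {img: [False] * len(boxes) for img, boxes in ground_truth.items()}
    n_gt = sum(len(boxes) for boxes in ground_truth.values())
    tp = fp = 0
    precisions, recalls = [], []
    for img, score, box in predictions:
        gts = ground_truth.get(img, [])
        best, best_j = 0.0, -1
        for j, gt in enumerate(gts):
            overlap = iou(box, gt)
            if overlap > best and not matched[img][j]:
                best, best_j = overlap, j
        if best >= iou_thresh:
            matched[img][best_j] = True
            tp += 1
        else:
            fp += 1
        precisions.append(tp / (tp + fp))
        recalls.append(tp / n_gt)
    # area under the step-wise precision-recall curve (uninterpolated)
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_r)
        prev_r = r
    return ap
```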

7.
A wide variety of information or ‘metadata’ is required when undertaking dendrochronological sampling. Traditionally, researchers record observations and measurements on field notebooks and/or paper recording forms, and use digital cameras and hand-held GPS devices to capture images and record locations. In the lab, field notes are often manually entered into spreadsheets or personal databases, which are then sometimes linked to images and GPS waypoints. This process is both time consuming and prone to human and instrument error. Specialised hardware technology exists to marry these data sources, but costs can be prohibitive for small-scale operations (>$2000 USD). Such systems often include proprietary software that is tailored to very specific needs and might require a high level of expertise to use. We report on the successful testing and deployment of a dendrochronological field data collection system utilising affordable off-the-shelf devices ($100–300 USD). The method builds upon established open source software that has been widely used in developing countries for public health projects as well as to assist in disaster recovery operations. It includes customisable forms for digital data entry in the field, and marries accurate GPS locations with geotagged photographs (with possible extensions to other measuring devices via Bluetooth) into structured data fields that are easy to learn and operate. Digital data collection is less prone to human error and efficiently captures a range of important metadata. In our experience, the hardware proved field-worthy in terms of size, ruggedness, and dependability (e.g., battery life). The system integrates directly with the Tellervo software to both create forms and populate the database, providing end users with the ability to tailor the solution to their particular field data collection needs.
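One building block of such a workflow, reading the GPS position out of a geotagged photograph so it can be stored alongside form data, can be sketched as follows. This uses Pillow and is not the ODK-based tooling or Tellervo integration the authors describe; the tag id and field names follow standard EXIF GPS conventions.

```python
# Reading the GPS EXIF block from a geotagged photo and converting it to
# decimal degrees with Pillow. Not the authors' ODK/Tellervo tooling.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def gps_decimal_degrees(path):
    with Image.open(path) as img:
        exif = img._getexif() or {}
    gps_raw = exif.get(34853)          # 34853 is the standard GPSInfo tag id
    if not gps_raw:
        return None
    gps = {GPSTAGS.get(k, k): v for k, v in gps_raw.items()}

    def to_degrees(values, ref):
        d, m, s = (float(v) for v in values)
        degrees = d + m / 60.0 + s / 3600.0
        return -degrees if ref in ("S", "W") else degrees

    return (to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
            to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"]))
```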

8.
Camera traps are a popular tool for monitoring wildlife, though they can fail to capture enough morphological detail for accurate small mammal species identification. Camera trapping small mammals is often limited by the inability of camera models to: (i) record at close distances; and (ii) provide standardised photos. This study aims to provide a camera trapping method that captures standardised images of the faces of small mammals for accurate species identification, with further potential for individual identification. A novel camera trap design coined the ‘selfie trap’ was developed. The selfie trap is a camera contained within an enclosed PVC pipe with a modified lens that produces standardised close-range images of the small mammal species encountered in this study, including the Brown Antechinus (Antechinus stuartii), Bush Rat (Rattus fuscipes) and Sugar Glider (Petaurus breviceps). Individual identification was tested on the common arboreal Sugar Glider. Five individual Sugar Gliders were identified based on unique head stripe pelage. The selfie trap is an accurate camera trapping method for capturing detailed and standardised images of small mammal species. The design described may be useful for wildlife management as a reliable method for surveying small mammal species. However, intraspecies individual identification using the selfie trap requires further testing.

9.
Effective conservation and management of primates depend on our ability to accurately assess and monitor populations through research. Camera traps are proving to be useful tools for studying a variety of primate species in diverse and often difficult habitats. Here, we discuss the use of camera traps in primatology to survey rare species, assess populations, and record behavior. We also discuss methodological considerations for primate studies, including camera trap research design, inherent biases, and some limitations of camera traps. We encourage other primatologists to use transparent and standardized methods and, when appropriate, to consider using an occupancy framework to account for imperfect detection, along with complementary techniques (e.g., transect counts, interviews, behavioral observation) to ensure accurate interpretation of data. In addition, we address the conservation implications of camera trapping, such as using data to inform industry, garnering public support, and contributing photos to large-scale habitat monitoring projects. Camera trap studies such as these are sure to advance research and conservation of primate species. Finally, we provide commentary on the ethical considerations (e.g., photographs of humans and illegal activity) of using camera traps in primate research. We believe ethical considerations will be particularly important in future primate studies, although this topic has not previously been addressed for camera trap use in primatology or any wildlife species.

10.
Camera traps are a powerful and increasingly popular tool for mammal research, but like all survey methods, they have limitations. Identifying animal species from images is a critical component of camera trap studies, yet while researchers recognize constraints with experimental design or camera technology, image misidentification is still not well understood. We evaluated the effects of a species’ attributes (body mass and distinctiveness) and individual observer variables (experience and confidence) on the accuracy of mammal identifications from camera trap images. We conducted an Internet-based survey containing 20 questions about observer experience and 60 camera trap images to identify. Images were sourced from surveys in northern Australia and included 25 species, ranging in body mass from the delicate mouse (Pseudomys delicatulus, 10 g) to the agile wallaby (Macropus agilis, >10 kg). There was a weak relationship between the accuracy of mammal identifications and observer experience. However, accuracy was highest (100%) for distinctive species (e.g. Short-beaked echidna [Tachyglossus aculeatus]) and lowest (36%) for superficially non-distinctive mammals (e.g. rodents like the Pale field-rat [Rattus tunneyi]). There was a positive relationship between the accuracy of identifications and body mass. Participant confidence was highest for large and distinctive mammals, but was not related to participant experience level. Identifications made with greater confidence were more likely to be accurate. Unreliability in identifications of mammal species is a significant limitation to camera trap studies, particularly where small mammals are the focus, or where similar-looking species co-occur. Integration of camera traps with conventional survey techniques (e.g. live-trapping), use of a reference library or computer-automated programs are likely to aid positive identifications, while employing a confidence rating system and/or multiple observers may lead to a collection of more robust data. Although our study focussed on Australian species, our findings apply to camera trap studies globally.

11.
Remote cameras are a common method for surveying wildlife and recently have been promoted for implementing large-scale regional biodiversity monitoring programs. The use of camera-trap data depends on the correct identification of animals captured in the photographs, yet misidentification rates can be high, especially when morphologically similar species co-occur, and this can lead to faulty inferences and hinder conservation efforts. Correct identification is dependent on diagnosable taxonomic characters, photograph quality, and the experience and training of the observer. However, keys rooted in taxonomy are rarely used for the identification of camera-trap images and error rates are rarely assessed, even when morphologically similar species are present in the study area. We tested a method for ensuring high identification accuracy using two sympatric and morphologically similar chipmunk (Neotamias) species as a case study. We hypothesized that identification accuracy would improve with use of the identification key and with observer training, resulting in higher levels of observer confidence and higher levels of agreement among observers. We developed an identification key and tested identification accuracy based on photographs of verified museum specimens. Our results supported predictions for each of these hypotheses. In addition, we validated the method in the field by comparing remote-camera data with live-trapping data. We recommend use of these methods to evaluate error rates and to exclude ambiguous records in camera-trap datasets. We urge that ensuring correct and scientifically defensible species identifications is incumbent on researchers and should be incorporated into the camera-trap workflow.

12.
13.
Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera’s field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera’s field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2–2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera’s field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

14.
The giant panda is a flagship species in ecological conservation. The infrared camera trap is an effective tool for monitoring the giant panda. Images captured by infrared camera traps must be accurately recognized before further statistical analyses can be implemented. Previous research has demonstrated that spatiotemporal and positional contextual information and the species distribution model (SDM) can improve image detection accuracy, especially for difficult-to-see images. Difficult-to-see images include those in which individual animals are only partially visible, making them challenging for a model to detect. By utilizing the attention mechanism, we developed a unique method based on deep learning that incorporates object detection, contextual information, and the SDM to achieve better detection performance on difficult-to-see images. We obtained 1169 images of the wild giant panda and divided them into a training set and a test set in a 4:1 ratio. Model assessment metrics showed that our proposed model achieved an overall performance of 98.1% in mAP0.5 and 82.9% in recall on difficult-to-see images. Our research demonstrates that this fine-grained multimodal fusion method, applied to monitoring giant pandas in the wild, can better detect difficult-to-see panda images and thereby enhance the wildlife monitoring system.

15.
The ecoinformatics community recognizes that ecological synthesis across studies, space, and time will require new informatics tools and infrastructure. Recent advances have been encouraging, but many problems still face ecologists who manage their own datasets, prepare data for archiving, and search data stores for synthetic research. In this paper, we describe how work by the Canopy Database Project (CDP) might enable use of database technology by field ecologists: increasing the quality of database design, improving data validation, and providing structural and semantic metadata — all of which might improve the quality of data archives and thereby help drive ecological synthesis. The CDP has experimented with conceptual components for database design, templates, to address information technology issues facing ecologists. Templates represent forest structures and observational measurements on these structures. Using our software, researchers select templates to represent their study’s data and can generate normalized relational databases. Information hidden in those databases is used by ancillary tools, including data intake forms and simple data validation, data visualization, and metadata export. The primary question we address in this paper is: which templates are the right templates? We argue for defining simple templates (with relatively few attributes) that describe the domain's major entities, and for coupling those with focused and flexible observation templates. We present a conceptual model for the observation data type, and show how we have implemented the model as an observation entity in the DataBank database designer and generator. We show how our visualization tool CanopyView exploits metadata made explicit by DataBank to help scientists with analysis and synthesis. We conclude by presenting future plans for tools to conduct statistical calculations common to forest ecology and to enhance data mining with DataBank databases. DataBank could be extended to another domain by replacing our forest–ecology-specific templates with those for the new domain. This work extends the basic computer science idea of abstract data types and user-defined types to ecology-specific database design tools for individual users, and applies to ecoinformatics the software engineering innovations of domain-specific languages, software patterns, components, refactoring, and end-user programming.
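DataBank's generated schemas are not reproduced in this abstract; the sketch below only illustrates the general shape of an observation entity keyed to a structural entity, using SQLite. The table names, columns, and sample rows are illustrative assumptions, not the project's actual design.

```python
# Hedged sketch of an "observation" entity tied to a structural entity,
# using SQLite. All names and sample values are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tree (                 -- a simple structural template
        tree_id   INTEGER PRIMARY KEY,
        species   TEXT,
        plot      TEXT
    );
    CREATE TABLE observation (          -- one measurement on one structure
        obs_id    INTEGER PRIMARY KEY,
        tree_id   INTEGER REFERENCES tree(tree_id),
        variable  TEXT,                 -- e.g. 'dbh_cm', 'height_m'
        value     REAL,
        observed  TEXT                  -- ISO date of the measurement
    );
""")
conn.execute("INSERT INTO tree VALUES (1, 'Pseudotsuga menziesii', 'Plot A')")
conn.execute("INSERT INTO observation VALUES (1, 1, 'dbh_cm', 87.5, '2004-07-12')")
print(conn.execute(
    "SELECT species, variable, value FROM observation JOIN tree USING (tree_id)"
).fetchall())
```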

16.
Capture–recapture analysis of camera trap data is a conventional method to estimate the abundance of free-ranging wild felids. Due to the notoriously low detection rates of felids, it is important to increase the detection probability during sampling. In this study, we report the effectiveness of attractants as a tool for improving the efficiency of camera trap sampling in abundance estimation of the Iberian lynx. We developed a grid system of camera stations in which stations with and without attractant lures were spatially alternated across known Iberian lynx habitat. Of the ten individuals identified, five were detected at stations with no attractant (blind sets), and nine at the lured stations. Thirty-eight percent of independent captures at blind-set stations and 10% at lured stations resulted in photographs unsuitable for correct individual identification. The total capture probability at lured stations was higher than that obtained at blind-set stations. The estimates obtained with blind-set cameras underestimated the number of lynxes compared to lured cameras. In our study, it appears that the use of lures increased the efficiency of trail camera captures and, therefore, the accuracy of capture–recapture analysis. The observed failure to detect known individuals at blind-set camera stations may violate capture–recapture assumptions and bias abundance estimates.

17.
A quality assurance procedure has been developed for a prototype gamma-ray guided stereotactic biopsy system. The system consists of a compact small-field-of-view gamma-ray camera mounted to the rotational arm of a Lorad stereotactic biopsy system. The small-field-of-view gamma-ray camera has been developed for clinical applications where mammographic X-ray localization is not possible. Marker sources that can be imaged with the gamma-camera have been designed and built for quality assurance testing and to provide a fiducial reference mark. An algorithm for determining the three-dimensional location of a region of interest, such as a lesion, relative to the fiducial mark has been implemented into the software control of the camera. This system can be used to determine the three-dimensional location of a region of interest from a stereo pair of images, and that information can be used to guide a biopsy needle to that site. Point source phantom tests performed with the system have demonstrated that the camera can be used to localize a point of interest to within 1 mm, which is satisfactory for its use in needle localization.

18.
Habituation has been the standard methodology used to study the natural history of great apes and other primates. Habituation has invaluable strengths, particularly in the quantity and diversity of data collected, but along with these come substantial weaknesses, i.e., costs both in time and effort, health risks, and potential exposure of subjects to poaching. With new technologies, we are able to extend our studies beyond the limitations of habituation; camera traps are one technology that can be used to study unhabituated primate groups. In this study we used eight camera traps over the course of 2 yr (1542 camera trap days) to capture thousands of still images of West African savanna chimpanzees (Pan troglodytes verus) in the Falémé region of southeastern Senegal. Images corroborated behavioral observations from habituated chimpanzees at the Fongoli site, where researchers have observed nocturnal activity and cave use. The cameras also captured interspecies interactions at water sources during the dry season and allowed us to determine demographic composition and minimum community size. The photographs provide data on local fauna, including predators (Panthera pardus pardus, Panthera leo senegalensis, and Crocuta crocuta), potential prey, and competitor species (Papio papio, Cercopithecus aethiops, and Erythrocebus patas). As primate habitat across Africa is further threatened and human–wildlife conflict increases, camera trapping could be used as an essential conservation tool, expanding studies of primates without exacerbating potential threats to the species.

19.
As the capacity to collect and store large amounts of data expands, identifying and evaluating strategies to efficiently convert raw data into meaningful information is increasingly necessary. Across disciplines, this data processing task has become a significant challenge, delaying progress and actionable insights. In ecology, the growing use of camera traps (i.e., remotely triggered cameras) to collect information on wildlife has led to an enormous volume of raw data (i.e., images) in need of review and annotation. To expedite camera trap image processing, many have turned to the field of artificial intelligence (AI) and use machine learning models to automate tasks such as detecting and classifying wildlife in images. To contribute to an understanding of the utility of AI tools for processing wildlife camera trap images, we evaluated the performance of a state-of-the-art computer vision model developed by Microsoft AI for Earth, named MegaDetector, using data from an ongoing camera trap study in Arctic Alaska, USA. Compared to image labels determined by manual human review, we found that MegaDetector reliably determined the presence or absence of wildlife in images generated by motion-detection camera settings (≥94.6% accuracy); however, performance was substantially poorer for images collected with time-lapse camera settings (≤61.6% accuracy). By examining time-lapse images where MegaDetector failed to detect wildlife, we gained practical insights into animal size and distance detection limits, and discuss how those may impact the performance of MegaDetector in other systems. We anticipate our findings will stimulate critical thinking about the tradeoffs of using automated AI tools or manual human review to process camera trap images and help to inform effective implementation of study designs.
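A hedged sketch of the comparison described here: read MegaDetector's batch-output JSON, call an image "wildlife present" when an animal detection exceeds a confidence threshold, and score agreement with manual labels. The JSON field names follow the commonly documented batch-output format, but they, the threshold, and the manual-label input should be treated as assumptions to verify against the model version actually used.

```python
# Scoring MegaDetector against manual review. JSON field names follow the
# commonly documented batch format; verify against the version in use.
import json

def megadetector_presence(results_path, conf_threshold=0.8):
    """Map each image file to True/False for 'animal present above threshold'."""
    with open(results_path) as f:
        results = json.load(f)
    # invert the category map to find the id used for "animal"
    animal_cat = {v: k for k, v in results["detection_categories"].items()}["animal"]
    presence = {}
    for image in results["images"]:
        detections = image.get("detections") or []
        presence[image["file"]] = any(
            d["category"] == animal_cat and d["conf"] >= conf_threshold
            for d in detections
        )
    return presence

def accuracy(predicted, manual):
    """manual: dict of file -> True/False from human review (hypothetical input)."""
    shared = [f for f in predicted if f in manual]
    return sum(predicted[f] == manual[f] for f in shared) / len(shared)
```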

20.
Aim: To develop software able to provide individual recognition of southern elephant seals as a tool for studying colonies. This analysis was performed within a framework of studies concerning environmental dispersion produced by the El Niño Southern Oscillation effect in the Southern Ocean Ecosystem. Location: Digital photographs of reproductive female elephant seals were taken at Punta Norte (Península Valdés, Patagonia; 42°05′ S, 63°45′ W) during the 2002 breeding season (August to November). The data set under analysis is composed of 96 elephant seal images for a population of 56 individuals. Method: Identification of specimens was carried out using digital pictures taken with a digital video camera and processed through the ‘Eigenfaces’ method, which is based on principal components analysis. Special care was taken to control for possible variations among images of the same individual, such as distance, angle, and light intensity. To deal with these variations, an initial alignment procedure is proposed so that all images are consistently framed; in addition, an initial histogram equalization was applied to attenuate any potential variation in light intensity. The software was developed in IDL 5.5. Results: A complete set of empirical results is presented showing the potential effectiveness of this technique. Tests of individual recognition and of assignment to different population subsets (harems) were carried out. A principal result of this work is that all 96 elephant seal images (representing 56 individuals) were correctly identified. Conclusion: The Eigenfaces method can be used successfully for identification of elephant seals. With the appropriate preparatory treatment of images, high performance results can be expected.
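The original software was written in IDL 5.5; the sketch below restates the Eigenfaces recipe described here (histogram equalization of aligned images, PCA projection, nearest-neighbour matching) in Python, with NumPy and scikit-learn as stand-ins rather than the authors' code.

```python
# Eigenfaces-style matching: equalize aligned 8-bit grayscale images, project
# onto principal components, then match a query to its nearest gallery image.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

def equalize(img):
    """Simple histogram equalization for an 8-bit (uint8) grayscale image array."""
    hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = 255 * (cdf - cdf.min()) / (cdf.max() - cdf.min())
    return cdf[img].astype(np.uint8)

def build_matcher(gallery, n_components=40):
    """gallery: array of shape (n_images, height, width) of aligned images.
    n_components must be smaller than the number of gallery images."""
    X = np.stack([equalize(im).ravel() for im in gallery]).astype(float)
    pca = PCA(n_components=n_components).fit(X)
    nn = NearestNeighbors(n_neighbors=1).fit(pca.transform(X))
    return pca, nn

def identify(pca, nn, query):
    """Return the gallery index of the closest match to a query image."""
    coords = pca.transform(equalize(query).ravel().astype(float)[None, :])
    return int(nn.kneighbors(coords, return_distance=False)[0][0])
```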
