Similar Documents
 20 similar documents found
1.
Although the use of camera traps in wildlife management is well established, technologies to automate image processing have been much slower to develop, despite their potential to drastically reduce the personnel time and cost required to review photos. We developed AnimalFinder in MATLAB® to identify animal presence in time-lapse camera trap images by comparing individual photos to all images within the same subset (i.e., photos from the same survey and site), with some manual processing required to remove false positives and collect other relevant data (species, sex, etc.). We tested AnimalFinder on a set of camera trap images and compared the presence/absence results with manual-only review for white-tailed deer (Odocoileus virginianus), wild pigs (Sus scrofa), and raccoons (Procyon lotor). We compared abundance estimates, model rankings, and coefficient estimates of detection and abundance for white-tailed deer using N-mixture models. AnimalFinder performance varied with a threshold value that controls the program's sensitivity to frequently occurring pixels in a series of images. Higher threshold values led to fewer false negatives (missed deer images) but increased manual processing time; even at the highest threshold value, the program reduced the images requiring manual review by ~40% and correctly identified >90% of deer, raccoon, and wild pig images. Abundance estimates for white-tailed deer were similar between AnimalFinder and the manual-only method (~1–2 deer difference, depending on the model), as were model rankings and coefficient estimates. Our results show that the program significantly reduced data processing time and may increase the efficiency of camera trapping surveys.
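The N-mixture models used above for abundance estimation marginalize a latent site abundance out of repeated counts. A minimal pure-Python sketch of the single-site log-likelihood (the function and parameter names are ours, not AnimalFinder's; real analyses would use dedicated packages such as unmarked):

```python
import math

def nmix_site_loglik(counts, lam, p, K=100):
    """Log-likelihood of repeated counts at one site under an N-mixture model:
    true abundance N ~ Poisson(lam), each repeated count ~ Binomial(N, p).
    The latent N is marginalized out by summing from max(counts) to K."""
    lik = 0.0
    for N in range(max(counts), K + 1):
        pois = math.exp(-lam) * lam ** N / math.factorial(N)
        binoms = 1.0
        for n in counts:
            binoms *= math.comb(N, n) * p ** n * (1 - p) ** (N - n)
        lik += pois * binoms
    return math.log(lik)
```

With counts of [3, 2, 4] deer across three visits, parameters near the data (lam = 5, p = 0.7) score far better than implausible ones (lam = 50), which is the comparison a likelihood maximizer exploits.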

2.
3.
To study the process of morphogenesis, one often needs to collect and segment time-lapse images of living tissues to accurately track changing cellular morphology. This task typically involves segmenting and tracking tens to hundreds of individual cells over hundreds of image frames, a scale that would certainly benefit from automated routines; however, any automated routine would need to reliably handle a large number of sporadic, yet typical, problems (e.g., illumination inconsistency, photobleaching, rapid cell motions, and drift of focus or of cells moving through the imaging plane). Here, we present a segmentation and cell tracking approach based on the premise that users know their data best: interpreting and using image features that are not accounted for in any a priori algorithm design. We have developed a program, SeedWater Segmenter, that combines a parameter-free, fast automated watershed algorithm with a suite of manual intervention tools, enabling users with little to no specialized knowledge of image processing to efficiently segment images with near-perfect accuracy based on simple user interactions.
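The seeded watershed at the core of such a tool can be illustrated with a toy priority-flood implementation, where user-placed seeds grow outward across the intensity landscape. This is our simplified sketch, not SeedWater Segmenter's actual code:

```python
import heapq

def seeded_watershed(image, seeds):
    """Minimal priority-flood watershed: grow integer seed labels outward,
    always expanding the lowest-intensity frontier pixel first.
    `image` is a 2D list of intensities; `seeds` maps (row, col) -> label."""
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    heap = []
    for (r, c), lab in seeds.items():
        labels[r][c] = lab
        heapq.heappush(heap, (image[r][c], r, c))
    while heap:
        _, r, c = heapq.heappop(heap)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and labels[nr][nc] == 0:
                labels[nr][nc] = labels[r][c]  # inherit the basin's label
                heapq.heappush(heap, (image[nr][nc], nr, nc))
    return labels
```

On a one-row "image" with a bright ridge in the middle, two seeds flood their respective basins and meet at the ridge, which is the behavior manual seed editing then refines.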

4.
Ecological camera traps are increasingly used by wildlife biologists to unobtrusively monitor an ecosystem's animal populations. However, manual inspection of the images produced is expensive, laborious, and time-consuming. The success of deep learning systems using camera trap images has previously been explored only in preliminary stages, and these studies lack practicality: they are primarily focused on extremely large datasets, often millions of images, and there is little to no focus on performance when tasked with species identification in new locations not seen during training. Our goal was to test the capabilities of deep learning systems trained on camera trap images using modestly sized training data, compare performance when considering unseen background locations, and quantify lower-bound performance across training-set sizes to provide a guideline relating data requirements to performance expectations. We use a dataset provided by Parks Canada containing 47,279 images collected from 36 unique geographic locations across multiple environments. Images represent 55 animal species and human activity, with high class imbalance. We trained, tested, and compared the capabilities of six deep learning computer vision networks using transfer learning and image augmentation: DenseNet201, Inception-ResNet-V3, InceptionV3, NASNetMobile, MobileNetV2, and Xception. Comparing overall performance on "trained" locations, DenseNet201 performed best with 95.6% top-1 accuracy, showing promise for deep learning methods in smaller-scale research efforts. Using trained locations, classes with <500 images had low and highly variable recall of 0.750 ± 0.329, while classes with over 1,000 images had a high and stable recall of 0.971 ± 0.0137. Models tasked with classifying species from untrained locations were less accurate, with DenseNet201 performing best with 68.7% top-1 accuracy.
Finally, we provide an open repository where ecologists can insert their image data to train and test custom species detection models for their desired ecological domain.
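The per-class recall figures quoted above (mean ± SD across classes) can be reproduced from raw predictions with a few lines of Python. A hypothetical sketch; the class names and data below are invented:

```python
from collections import defaultdict
from statistics import mean, pstdev

def per_class_recall(y_true, y_pred):
    """Recall for each true class: the fraction of that class's images
    the model labeled correctly."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        totals[t] += 1
        if t == p:
            hits[t] += 1
    return {c: hits[c] / totals[c] for c in totals}

def recall_summary(recalls):
    """Mean and population standard deviation of per-class recall."""
    vals = list(recalls.values())
    return mean(vals), pstdev(vals)
```

Computing recall per class rather than overall accuracy is what exposes the class-imbalance effect the study reports: rare classes drag the mean down and inflate the spread.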

5.
This paper describes and explains design patterns for software that supports how analysts can efficiently inspect and classify camera trap images for wildlife-related ecological attributes. Broadly speaking, a design pattern identifies a commonly occurring problem and a general reusable design approach to solve that problem. A developer can then use that design approach to create a specific software solution appropriate to the particular situation under consideration. In particular, design patterns for camera trap image analysis by wildlife biologists address solutions to commonly occurring problems they face while inspecting a large number of images and entering ecological data describing image attributes. We developed design patterns for image classification based on our understanding of biologists' needs, acquired over 8 years during development and application of the freely available Timelapse image analysis system. For each design pattern presented, we describe the problem, a design approach that solves that problem, and a concrete example of how Timelapse addresses the design pattern. Our design patterns offer both general and specific solutions related to: maintaining data consistency, efficiencies in image inspection, methods for navigating between images, efficiencies in data entry including highly repetitious data entry, and sorting and filtering images into sequences, episodes, and subsets. These design patterns can inform the design of other camera trap systems and can help biologists assess how competing software products address their project-specific needs, along with determining an efficient workflow.

6.
Understanding environmental factors that influence forest health, as well as the occurrence and abundance of wildlife, is a central topic in forestry and ecology. However, the manual processing of field habitat data is time-consuming, and months are often needed to progress from data collection to data interpretation. To shorten data processing time, we propose Habitat-Net: a novel deep learning application based on Convolutional Neural Networks (CNN) to segment habitat images of tropical rainforests. Habitat-Net takes color images as input and, after multiple layers of convolution and deconvolution, produces a binary segmentation of the input image. We worked on two different types of habitat datasets that are widely used in ecological studies to characterize forest conditions: canopy closure and understory vegetation. We trained the model with 800 canopy images and 700 understory images separately, and then used 149 canopy and 172 understory images to test the performance of Habitat-Net. We compared the performance of Habitat-Net to that of a simple threshold-based method, manual processing by a second researcher, and a CNN approach called U-Net, upon which Habitat-Net is based. Habitat-Net, U-Net, and simple thresholding reduced total processing time to milliseconds per image, compared to 45 s per image for manual processing. However, the higher mean Dice coefficient of Habitat-Net (0.94 for canopy and 0.95 for understory) indicates that the accuracy of Habitat-Net exceeds that of both simple thresholding (0.64, 0.83) and U-Net (0.89, 0.94). Habitat-Net will be of great relevance for ecologists and foresters who need to monitor changes in forest structure. The automated workflow not only reduces processing time, it also standardizes the analytical pipeline and thus reduces the degree of uncertainty that would be introduced by manual processing of images by different people (either over time or between study sites).
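The mean Dice coefficient used above to compare segmentations has a compact definition. A minimal sketch on flat binary masks (our illustration, not Habitat-Net code):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice = 2|A ∩ B| / (|A| + |B|) for flat binary masks of 0/1 values.
    Two empty masks are treated as a perfect match."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2 * inter / size if size else 1.0
```

Because Dice weights agreement on foreground pixels, it is a stricter score than raw pixel accuracy when canopy or understory covers only part of the frame.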

7.
Camera traps are a popular tool to sample animal populations because they are noninvasive, detect a variety of species, and can record many thousands of animal detections per deployment. Cameras are typically set to take bursts of multiple photographs for each detection and are deployed in arrays of dozens or hundreds of sites, often resulting in millions of photographs per study. The task of converting photographs to animal detection records from such large image collections is daunting, and made worse by situations that generate copious empty pictures from false triggers (e.g., camera malfunction or moving vegetation) or pictures of humans. We developed computer vision algorithms to detect and classify moving objects to aid the first step of camera trap image filtering—separating the animal detections from the empty frames and pictures of humans. Our new work couples foreground object segmentation through background subtraction with deep learning classification to provide a fast and accurate scheme for human–animal detection. We provide these programs as both a MATLAB GUI and a C++ command-line tool. The software reads folders of camera trap images and outputs images annotated with bounding boxes around moving objects, along with a text file summarizing the results. It maintains high accuracy while reducing execution time 14-fold, taking about 6 seconds to process a sequence of ten frames (on a 2.6 GHz CPU). For cameras with excessive empty frames due to camera malfunction or blowing vegetation, it automatically removes 54% of the false-trigger sequences without affecting the human/animal sequences, and achieves 99.58% accuracy on image-level empty-versus-object classification on the Serengeti dataset. We offer the first computer vision tool for processing camera trap images that provides substantial time savings for large image datasets, improving our ability to monitor wildlife across large scales with camera traps.
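Foreground segmentation through background subtraction, the first stage described above, can be sketched as a per-pixel median background model plus a difference threshold. A toy pure-Python illustration; the paper's actual pipeline is more sophisticated:

```python
import statistics

def foreground_mask(frames, frame, thresh=20):
    """Build a median-of-frames background model, then flag pixels in
    `frame` that differ from the background by more than `thresh`
    as moving foreground. Images are 2D lists of intensities."""
    rows, cols = len(frame), len(frame[0])
    mask = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            bg = statistics.median(f[r][c] for f in frames)
            if abs(frame[r][c] - bg) > thresh:
                mask[r][c] = 1
    return mask
```

A median background is robust to a moving animal appearing in a minority of the reference frames, which is why it suits burst sequences better than a single reference image.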

8.
Toward a camera-trapping protocol for monitoring forest wildlife in China   (Cited by: 1; self-citations: 0; other citations: 1)
Wildlife diversity is a key indicator for biodiversity monitoring and for evaluating conservation management, so long-term wildlife monitoring is an important component of large-scale biodiversity monitoring programs such as the Chinese Forest Biodiversity Monitoring Network (CForBio). Since 2011, the CForBio network has progressively deployed camera traps to monitor wildlife diversity at several forest dynamics plots. With a national camera-trap monitoring network for wildlife taking initial shape in China, there is an urgent need to establish and implement a unified camera-trap-based monitoring protocol. Drawing on three years of camera-trap monitoring at Chinese forest dynamics plots, and on the camera-trap protocol for terrestrial vertebrates (mammals and birds) proposed by the Tropical Ecology Assessment and Monitoring Network, this paper discusses the current state and future of camera-trap monitoring of forest wildlife in China, covering monitoring protocols and practical considerations.

9.
Interest in cell heterogeneity and differentiation has recently led to increased use of time-lapse microscopy. Previous studies have shown that cell fate may be determined well in advance of the event. We used a mixture of automation and manual review of time-lapse live cell imaging to track the positions, contours, divisions, deaths, and lineage of 44 B-lymphocyte founders and their 631 progeny in vitro over a period of 108 hours. Using this data to train a Support Vector Machine classifier, we were retrospectively able to predict the fates of individual lymphocytes with more than 90% accuracy, using only time-lapse imaging captured prior to the mitosis or death of 90% of all cells. The motivation for this paper is to explore the impact of labour-efficient assistive software tools that allow larger and more ambitious live-cell time-lapse microscopy studies. After training on this data, we show that machine learning methods can be used for real-time prediction of individual cell fates. These techniques could lead to real-time cell culture segregation for purposes such as phenotype screening. We were able to produce a large volume of data with less effort than previously reported, thanks to the image processing, computer vision, tracking, and human-computer interaction tools used. We describe the workflow of the software-assisted experiments and the graphical interfaces that were needed. To validate our results, we used our methods to reproduce a variety of published data about lymphocyte populations and behaviour. We also make all our data publicly available, including a large quantity of lymphocyte spatio-temporal dynamics and related lineage information.
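A Support Vector Machine classifier of the kind described can be sketched with scikit-learn. The two-feature toy data below (imagine cell area and mean intensity) and the fate labels are invented stand-ins for the imaging-derived features used in the study:

```python
from sklearn.svm import SVC

# Toy "cell features"; labels: 0 = cell will divide, 1 = cell will die.
X = [[1.0, 1.0], [1.2, 0.9], [0.9, 1.1],
     [4.0, 4.2], [4.1, 3.9], [3.8, 4.0]]
y = [0, 0, 0, 1, 1, 1]

# A linear-kernel SVM separates the two clusters with a maximum-margin plane.
clf = SVC(kernel="linear").fit(X, y)
pred = clf.predict([[1.1, 1.0], [4.0, 4.0]])
```

In practice the features would be extracted per cell from the tracked time-lapse frames, and held-out lineages would be used to estimate the reported >90% accuracy.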

10.
Ecological Informatics, 2012, 7(6): 345–353
Camera traps and the images they generate are becoming an essential tool for field biologists studying and monitoring terrestrial animals, in particular medium to large terrestrial mammals and birds. In the last five years, camera traps have made the transition to digital technology, where these devices now produce hundreds of instantly available images per month and a large amount of ancillary metadata (e.g., date, time, temperature, image size, etc.). Despite this accelerated pace in the development of digital image capture, field biologists still lack adequate software solutions to process and manage the increasing amount of information in a cost efficient way. In this paper we describe a software system that we have developed, called DeskTEAM, to address this issue. DeskTEAM has been developed in the context of the Tropical Ecology Assessment and Monitoring Network (TEAM), a global network that monitors terrestrial vertebrates. We describe the software architecture and functionality and its utility in managing and processing large amounts of digital camera trap data collected throughout the global TEAM network. DeskTEAM incorporates software features and functionality that make it relevant to the broad camera trapping community. 
These include the ability to run the application locally on a laptop or desktop computer, without requiring an Internet connection, as well as the ability to run on multiple operating systems; an intuitive navigational user interface with multiple levels of detail (from individual images, to whole groups of images) which allows users to easily manage hundreds or thousands of images; ability to automatically extract EXIF and custom metadata information from digital images to increase standardization; availability of embedded taxonomic lists to allow users to easily tag images with species identities; and the ability to export data packages consisting of data, metadata and images in standardized formats so that they can be transferred to online data warehouses for easy archiving and dissemination. Lastly, building these software tools for wildlife scientists provides valuable lessons for the ecoinformatics community.
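The standardized export of data, metadata, and images described above can be sketched with the Python standard library. The file names and fields below are our assumptions for illustration, not DeskTEAM's actual package format:

```python
import csv
import io
import json
import zipfile

def export_data_package(records, metadata, out_path):
    """Bundle detection records (CSV) and study metadata (JSON) into one
    zip archive, mimicking a standardized export package that could be
    uploaded to an online data warehouse."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=sorted(records[0]))
    writer.writeheader()
    writer.writerows(records)
    with zipfile.ZipFile(out_path, "w") as zf:
        zf.writestr("detections.csv", buf.getvalue())
        zf.writestr("metadata.json", json.dumps(metadata, indent=2))
```

A single self-describing archive keeps data and metadata together, which is what makes downstream archiving and dissemination straightforward.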

11.
  1. Camera traps have become an extensively utilized tool in ecological research, but the manual processing of images created by a network of camera traps rapidly becomes an overwhelming task, even for small camera trap studies.
  2. We used transfer learning to create convolutional neural network (CNN) models for identification and classification. By utilizing a small dataset with an average of 275 labeled images per species class, the model was able to distinguish between species and remove false triggers.
  3. We trained the model to detect 17 object classes with individual species identification, reaching an accuracy up to 92% and an average F1 score of 85%. Previous studies have suggested the need for thousands of images of each object class to reach results comparable to those achieved by human observers; however, we show that such accuracy can be achieved with fewer images.
  4. With transfer learning and an ongoing camera trap study, a deep learning model can be successfully created by a small camera trap study. A generalizable model produced from an unbalanced class set can be utilized to extract trap events that can later be confirmed by human processors.
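The F1 score reported above combines precision and recall. A minimal sketch from raw true-positive, false-positive, and false-negative counts (our illustration):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from raw counts.
    Returns 0.0 when precision and recall are both zero."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Averaging this per-class F1 across all object classes yields the macro score quoted in the abstract, which treats rare and common species equally.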

12.
Ratiometric time-lapse FRET analysis requires a robust and accurate processing pipeline to eliminate bias in intensity measurements on fluorescent images before further quantitative analysis can be conducted. This level of robustness can only be achieved by supplementing automated tools with built-in flexibility for manual ad-hoc adjustments. FRET-IBRA is a modular and fully parallelized configuration file-based tool written in Python. It simplifies the FRET processing pipeline to achieve accurate, registered, and unified ratio image stacks. The flexibility of this tool to handle discontinuous image frame sequences with tailored configuration parameters further streamlines the processing of outliers and time-varying effects in the original microscopy images. FRET-IBRA offers cluster-based channel background subtraction, photobleaching correction, and ratio image construction in an all-in-one solution without the need for multiple applications, image format conversions, and/or plug-ins. The package accepts a variety of input formats and outputs TIFF image stacks along with performance measures to detect both the quality and failure of the background subtraction algorithm on a per frame basis. Furthermore, FRET-IBRA outputs images with superior signal-to-noise ratio and accuracy in comparison to existing background subtraction solutions, whilst maintaining a fast runtime. We have used the FRET-IBRA package extensively to quantify the spatial distribution of calcium ions during pollen tube growth under mechanical constraints. Benchmarks against existing tools clearly demonstrate the need for FRET-IBRA in extracting reliable insights from FRET microscopy images of dynamic physiological processes at high spatial and temporal resolution. 
The source code for Linux and Mac operating systems is released under the BSD license and, along with installation instructions, test images, example configuration files, and a step-by-step tutorial, is freely available at github.com/gmunglani/fret-ibra.
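Ratio image construction after channel background subtraction, the central step above, can be sketched per pixel. A toy illustration with scalar per-channel backgrounds; FRET-IBRA's cluster-based subtraction is more elaborate:

```python
def ratio_image(donor, acceptor, bg_donor, bg_acceptor, eps=1e-9):
    """Per-pixel ratio after channel background subtraction:
    (acceptor - bg_acceptor) / (donor - bg_donor), with the denominator
    clipped at eps to avoid division by zero. Images are 2D lists."""
    rows, cols = len(donor), len(donor[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            num = acceptor[r][c] - bg_acceptor
            den = max(donor[r][c] - bg_donor, eps)
            out[r][c] = num / den
    return out
```

Subtracting each channel's background before dividing is what removes the intensity bias the abstract warns about; applying the same function frame by frame yields the unified ratio stack.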

13.
  1. A time‐consuming challenge faced by camera trap practitioners is the extraction of meaningful data from images to inform ecological management. An increasingly popular solution is automated image classification software. However, most solutions are not sufficiently robust to be deployed on a large scale due to lack of location invariance when transferring models between sites. This prevents optimal use of ecological data resulting in significant expenditure of time and resources to annotate and retrain deep learning models.
  2. We present a method ecologists can use to develop optimized location invariant camera trap object detectors by (a) evaluating publicly available image datasets characterized by high intradataset variability in training deep learning models for camera trap object detection and (b) using small subsets of camera trap images to optimize models for high accuracy domain‐specific applications.
  3. We collected and annotated three datasets of images of striped hyena, rhinoceros, and pigs, from the image‐sharing websites FlickR and iNaturalist (FiN), to train three object detection models. We compared the performance of these models to that of three models trained on the Wildlife Conservation Society and Camera CATalogue datasets, when tested on out‐of‐sample Snapshot Serengeti datasets. We then increased FiN model robustness by infusing small subsets of camera trap images into training.
  4. In all experiments, the mean Average Precision (mAP) of the FiN trained models was significantly higher (82.33%–88.59%) than that achieved by the models trained only on camera trap datasets (38.5%–66.74%). Infusion further improved mAP by 1.78%–32.08%.
  5. Ecologists can use FiN images for training deep learning object detection solutions for camera trap image processing to develop location invariant, robust, out‐of‐the‐box software. Models can be further optimized by infusion of 5%–10% camera trap images into training data. This would allow AI technologies to be deployed on a large scale in ecological applications. Datasets and code related to this study are open source and available on this repository: https://doi.org/10.5061/dryad.1c59zw3tx.
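The mean Average Precision used above to score the detectors rests on intersection-over-union between predicted and ground-truth boxes. A minimal IoU sketch (our illustration):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes:
    the overlap criterion used to decide whether a detection matches a
    ground-truth box when computing mAP."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A detection typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5; precision-recall curves over such matches are then averaged into mAP.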

14.
Effective conservation and management of primates depend on our ability to accurately assess and monitor populations through research. Camera traps are proving to be useful tools for studying a variety of primate species in diverse and often difficult habitats. Here, we discuss the use of camera traps in primatology to survey rare species, assess populations, and record behavior. We also discuss methodological considerations for primate studies, including camera trap research design, inherent biases, and some limitations of camera traps. We encourage other primatologists to use transparent and standardized methods and, when appropriate, to consider using an occupancy framework to account for imperfect detection, together with complementary techniques (e.g., transect counts, interviews, behavioral observation) to ensure accuracy of data interpretation. In addition, we address the conservation implications of camera trapping, such as using data to inform industry, garnering public support, and contributing photos to large-scale habitat monitoring projects. Camera trap studies such as these are sure to advance research and conservation of primate species. Finally, we provide commentary on the ethical considerations (e.g., photographs of humans and illegal activity) of using camera traps in primate research. We believe ethical considerations will be particularly important in future primate studies, although this topic has not previously been addressed for camera trap use in primatology or other wildlife research.

15.
Journal of Asia, 2020, 23(1): 17–28
This work presents an automated insect pest counting and environmental condition monitoring system using integrated camera modules and an embedded system as the sensor node in a wireless sensor network. The sensor node can be used to simultaneously acquire images of sticky paper traps and measure temperature, humidity, and light intensity levels in a greenhouse. An image processing algorithm was applied to automatically detect and count insect pests on a sticky trap with 93% average temporal detection accuracy compared with manual counting. The integrated monitoring system was implemented with multiple sensor nodes in a greenhouse, and experiments were performed to test the system's performance. Experimental results show that the automatic counting of the monitoring system is comparable with manual counting, and insect pest count information can be continuously and effectively recorded. Information on insect pest concentrations was further analyzed temporally and spatially together with environmental factors. Analyses of experimental data reveal that the normalized hourly increase in the insect pest count appears to be associated with changes in light intensity, temperature, and relative humidity. With the proposed system, laborious manual counting can be circumvented and timely assessment of insect pest and environmental information can be achieved. The system also offers an efficient tool for long-term insect pest behavior observations, as well as for practical applications in integrated pest management (IPM).
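Automated counting on a segmented sticky-trap image reduces to counting connected components. A toy sketch on a binary grid, as a stand-in for the paper's image processing algorithm:

```python
def count_blobs(grid):
    """Count 4-connected components of 1s in a binary grid via iterative
    flood fill; each component would correspond to one detected insect."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                count += 1
                stack = [(r, c)]
                while stack:
                    cr, cc = stack.pop()
                    if (0 <= cr < rows and 0 <= cc < cols
                            and grid[cr][cc] and not seen[cr][cc]):
                        seen[cr][cc] = True
                        stack.extend([(cr - 1, cc), (cr + 1, cc),
                                      (cr, cc - 1), (cr, cc + 1)])
    return count
```

Real systems add size filtering so that dust specks and merged insects are not miscounted, but the component count is the core of the automatic tally compared against manual counting.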

16.
Estimating population parameters and spatial distribution patterns is a central goal of animal ecology and conservation biology. Over the past decade or so, camera traps have shown great advantages as a noninvasive field survey technique in situations where traditional methods are impractical, and they have been widely applied in wildlife ecology and conservation research. The animal occurrence data captured by camera traps provide critically important quantitative information on wildlife populations. Starting from the working principles of camera traps, this paper reviews the theory and application of the two classes of models now well established in population ecology, covering species with and without natural individual markings: (1) estimation of population density and population size; and (2) estimation of spatial occupancy. Particular attention is paid to the logic of model development, the underlying assumptions, the scope of application, outstanding problems, and future directions. Finally, the paper discusses issues that still require attention when camera traps are used for population parameter estimation, as well as their potential for studies of population dynamics and biodiversity.
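The spatial occupancy estimation discussed above typically rests on the single-season occupancy likelihood. A minimal sketch (our illustration; real analyses would add covariates and use dedicated software such as PRESENCE or unmarked):

```python
import math

def occupancy_loglik(histories, psi, p):
    """Log-likelihood of a single-season occupancy model: each site is
    occupied with probability psi; if occupied, each survey detects the
    species with probability p. A history is a list of 0/1 detections.
    An all-zero history may mean either absence or non-detection."""
    ll = 0.0
    for h in histories:
        T, d = len(h), sum(h)
        if d > 0:
            ll += math.log(psi * p ** d * (1 - p) ** (T - d))
        else:
            ll += math.log(psi * (1 - p) ** T + (1 - psi))
    return ll
```

The all-zero branch is what lets the model separate true absence from imperfect detection, which is exactly the problem raw camera-trap detection rates cannot resolve on their own.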

17.
The giant panda is a flagship species in ecological conservation. The infrared camera trap is an effective tool for monitoring the giant panda. Images captured by infrared camera traps must be accurately recognized before further statistical analyses can be implemented. Previous research has demonstrated that spatiotemporal and positional contextual information and the species distribution model (SDM) can improve image detection accuracy, especially for difficult-to-see images. Difficult-to-see images include those in which individual animals are only partially observed and it is challenging for the model to detect those individuals. By utilizing the attention mechanism, we developed a unique method based on deep learning that incorporates object detection, contextual information, and the SDM to achieve better detection performance in difficult-to-see images. We obtained 1169 images of the wild giant panda and divided them into a training set and a test set in a 4:1 ratio. Model assessment metrics showed that our proposed model achieved an overall performance of 98.1% in mAP0.5 and 82.9% in recall on difficult-to-see images. Our research demonstrated that the fine-grained multimodal-fusing method applied to monitoring giant pandas in the wild can better detect the difficult-to-see panda images to enhance the wildlife monitoring system.

18.
Journal of Molecular Biology, 2019, 431(23): 4569–4588
Recent research on population heterogeneity has revealed fascinating insights into microbial behavior. In particular, emerging single-cell technologies such as image-based microfluidic lab-on-chip systems generate insights with spatio-temporal resolution that are inaccessible with conventional tools. This review reports recent developments and applications of microfluidic single-cell cultivation technology, highlighting fields of broad interest such as growth, gene expression, and antibiotic resistance and susceptibility. Combining advanced microfluidic single-cell cultivation technology for environmental control with automated time-lapse imaging and smart computational image analysis offers tremendous potential for novel investigation at the single-cell level. We propose on-chip control of parameters like temperature, gas supply, pressure, or a change in cultivation mode, providing a versatile technology platform to mimic more complex and natural habitats. Digital analysis of the acquired images is a prerequisite for the extraction of biological knowledge, and statistically reliable results demand robust and automated solutions. Focusing on microbial cultivations, we compare prominent software systems that emerged during the last decade, discussing their applicability, opportunities, and limitations. Next-generation microfluidic devices with a high degree of environmental control, combined with time-lapse imaging and automated image analysis, will be highly inspiring and beneficial for fruitful interdisciplinary cooperation among microbiologists, microfluidic engineers, and image analysts in the field of microbial single-cell analysis.

19.
Camera traps are used by scientists and natural resource managers to acquire ecological data, and the rapidly increasing camera trapping literature highlights how popular this technique has become. Nevertheless, the methodological information reported in camera trap publications can vary widely, making replication of a study difficult. Here we propose a series of guiding principles for reporting methods and results obtained using camera traps. Attributes of camera trapping we cover include: (i) specifying the model(s) of camera trap(s) used, (ii) mode of deployment, (iii) camera settings, and (iv) study design. In addition to suggestions regarding best-practice data coding and analysis, we present minimum principles for standardizing information that we believe should be reported in all peer-reviewed papers. Standardised reporting enables more robust comparisons among studies, facilitates national and global reviews, enables greater ease of study replication, and leads to improved wildlife research and management outcomes.

20.
From May 2017 to May 2018, we conducted a preliminary camera-trap survey of ground-dwelling birds and mammals in Baishuihe National Nature Reserve, Sichuan. Twenty-four cameras deployed at 24 sites accumulated 3,832 camera-days and yielded 535 independent photographs of identifiable species. These records comprised 17 mammal species in 10 families and 4 orders, and 10 bird species in 4 families and 2 orders, including 5 species under Class I and 8 species under Class II national key protection. The Chinese porcupine (Hystrix hodgsoni), Chinese thrush (Turdus mupinensis), and black-faced laughingthrush (Trochalopteron affine) were new records for the reserve, and the giant panda (Ailuropoda melanoleuca) was photographed for the first time since the Wenchuan earthquake. Among mammals, the masked palm civet (Paguma larvata), yellow-throated marten (Martes flavigula), and Chinese goral (Naemorhedus griseus) together accounted for 50.2% of all independent mammal photographs; among birds, the blood pheasant (Ithaginis cruentus) and Temminck's tragopan (Tragopan temminckii) accounted for 91.6% of all independent bird photographs. This study provides a baseline for wildlife resource management and conservation in Baishuihe National Nature Reserve.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号