Similar Articles
20 similar articles found
1.
The study of animals in the wild offers opportunities to collect relevant information on their natural behavior and abilities to perform ecologically relevant tasks. However, it also poses challenges such as accounting for observer effects, human sensory limitations, and the time intensiveness of this type of research. To meet these challenges, field biologists have deployed camera traps to remotely record animal behavior in the wild. Despite their ubiquity in research, many commercial camera traps have limitations, and the species and behavior of interest may present unique challenges. For example, no camera traps support high‐speed video recording. We present a new and inexpensive camera trap system that increases versatility by separating the camera from the triggering mechanism. Our system design can pair with virtually any camera and allows for independent positioning of a variety of sensors, all while being low‐cost, lightweight, weatherproof, and energy efficient. By using our specialized trigger and customized sensor configurations, many limitations of commercial camera traps can be overcome. We use this system to study hummingbird feeding behavior using high‐speed video cameras to capture fast movements and multiple sensors placed away from the camera to detect small body sizes. While designed for hummingbirds, our application can be extended to any system where specialized camera or sensor features are required, or commercial camera traps are cost‐prohibitive, allowing camera trap use in more research avenues and by more researchers.
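The trigger-camera decoupling this abstract describes can be illustrated with a minimal polling loop. This is not the authors' firmware; the sensor and camera callbacks, threshold, and refractory period are all hypothetical stand-ins:

```python
def run_trigger(read_sensor, fire_camera, threshold=0.5,
                refractory_s=1.0, poll_s=0.01, max_polls=None):
    """Poll a detached sensor and fire the camera when the reading
    crosses a threshold; a refractory period suppresses duplicate
    triggers from the same visit."""
    last_fire = -refractory_s   # allow an immediate first trigger
    fired = 0
    polls = 0
    while max_polls is None or polls < max_polls:
        now = polls * poll_s    # simulated clock, for testability
        if read_sensor() > threshold and now - last_fire >= refractory_s:
            fire_camera()
            fired += 1
            last_fire = now
        polls += 1
    return fired
```

Because the sensor and camera are passed in as callables, the same loop can drive any camera and any sensor arrangement, which is the versatility the abstract emphasizes.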

2.
Camera traps are a popular tool to sample animal populations because they are noninvasive, detect a variety of species, and can record many thousands of animal detections per deployment. Cameras are typically set to take bursts of multiple photographs for each detection and are deployed in arrays of dozens or hundreds of sites, often resulting in millions of photographs per study. The task of converting photographs to animal detection records from such large image collections is daunting, and made worse by situations that generate copious empty pictures from false triggers (e.g., camera malfunction or moving vegetation) or pictures of humans. We developed computer vision algorithms to detect and classify moving objects to aid the first step of camera trap image filtering: separating the animal detections from the empty frames and pictures of humans. Our new work couples foreground object segmentation through background subtraction with deep learning classification to provide a fast and accurate scheme for human–animal detection. We provide these programs both as a Matlab GUI and as a command-line tool developed in C++. The software reads folders of camera trap images and outputs images annotated with bounding boxes around moving objects, along with a text-file summary of results. It maintains high accuracy while reducing execution time by a factor of 14, taking about 6 seconds to process a sequence of ten frames on a 2.6 GHz CPU. For cameras with excessive empty frames due to malfunction or blowing vegetation, the software automatically removes 54% of the false-trigger sequences without affecting the human/animal sequences. We achieve 99.58% accuracy on image-level empty-versus-object classification on the Serengeti dataset. We offer the first computer vision tool for processing camera trap images that provides substantial time savings for large image datasets, thus improving our ability to monitor wildlife across large scales with camera traps.
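The first stage of the pipeline above, background subtraction to separate empty frames from frames containing moving objects, can be sketched in a simplified pure-Python form. The real system uses more sophisticated segmentation plus a deep-learning classifier; the thresholds here are illustrative:

```python
def is_animal_frame(frame, background, diff_thresh=30, min_pixels=5):
    """Flag a frame as containing a moving object if enough pixels
    differ from the background model by more than diff_thresh."""
    changed = sum(
        1
        for row_f, row_b in zip(frame, background)
        for f, b in zip(row_f, row_b)
        if abs(f - b) > diff_thresh
    )
    return changed >= min_pixels

def filter_sequence(frames, background, **kw):
    """Return the indices of frames in a burst that show foreground
    motion; the rest can be discarded as empty."""
    return [i for i, fr in enumerate(frames)
            if is_animal_frame(fr, background, **kw)]
```

In the published tool the surviving frames would then be passed to a classifier that separates humans from animals; this sketch covers only the empty-frame triage step.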

3.
Traditional techniques of human motion analysis use markers located on body articulations. The position of each marker is extracted from each image, and temporal and kinematic analysis is performed by matching these data with a reference model of the human body. However, because human skin is not rigidly linked to the skeleton, each movement displaces the markers and introduces uncertainty into the results. Moreover, such experiments are mostly conducted under restricted laboratory conditions. The aim of our project was to develop a new method for human motion analysis that requires only unsophisticated recording devices, imposes no constraints on the subject studied, and can be used in various surroundings such as stadiums or gymnasiums. Our approach consists of identifying and locating body parts in the image, without markers, using a multi-sensor device that exploits both data from a video camera delivering intensity images and data from a 3D sensor delivering depth images. Our goal in this design was to demonstrate the feasibility of the approach; in any case, the hardware we used could support an automated motion analysis. We used a linked-segment model based on Winter's model, and applied our method not to a human subject but to a life-size articulated locomotion model. Our approach consists of finding the posture of this articulated locomotion model in the image. By performing a telemetric image segmentation, we obtained an approximate correspondence between the linked-segment model position and the locomotion model position. This posture was then refined by injecting the segmentation results into an intensity-image segmentation algorithm. Several tests were conducted with video/telemetric images taken in an outdoor setting with the articulated model. This real, life-size model was equipped with movable joints which, in static positions, described two strides of a runner. With our fusion method, we obtained reliable limb identification and location for most postures.

4.
5.
6.
Insect and pollinator populations are vitally important to the health of ecosystems, food production, and economic stability, but are declining worldwide. New, cheap, and simple monitoring methods are necessary to inform management actions and should be available to researchers around the world. Here, we evaluate the efficacy of a commercially available, close‐focus automated camera trap to monitor insect–plant interactions and insect behavior. We compared two video settings—scheduled and motion‐activated—to a traditional human observation method. Our results show that camera traps with scheduled video settings detected more insects overall than humans, but relative performance varied by insect order. Scheduled cameras significantly outperformed motion‐activated cameras, detecting more insects of all orders and size classes. We conclude that scheduled camera traps are an effective and relatively inexpensive tool for monitoring interactions between plants and insects of all size classes, and their ease of accessibility and set‐up allows for the potential of widespread use. The digital format of video also offers the benefits of recording, sharing, and verifying observations.

7.
Investigators have used a variety of methods to inspect nest cavities, including wireless battery‐powered video cameras mounted on telescoping poles. Using such a monitoring system to inspect cavities located well above ground can be difficult because the weight of the camera can cause flexing of the telescoping pole, making it difficult to insert the camera into cavity entrances. We constructed a system made from commercially available products that transmits wireless video images from nest cavities, and is both lightweight (198 g) and relatively inexpensive (about $520 US). During a study of Pileated Woodpeckers (Dryocopus pileatus), we inspected more than 100 cavities using our monitoring system and found that images were clear enough to allow us to count eggs and nestlings, and determine the sex of adults and nestlings. Because of its light weight, our wireless camera system allows quick inspection of cavities (typically less than 2 min). Although we used our cavity‐monitoring system to inspect cavities used by Pileated Woodpeckers, we believe that the diameter of the camera could be reduced from 5.6 cm to 4.7 cm to allow inspection of cavities with smaller entrances.

8.
Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera’s field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera’s field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2–2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera’s field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

9.
Journal of Asia, 2020, 23(1): 17-28
This work presents an automated insect pest counting and environmental condition monitoring system using integrated camera modules and an embedded system as the sensor node in a wireless sensor network. The sensor node can be used to simultaneously acquire images of sticky paper traps and measure temperature, humidity, and light intensity levels in a greenhouse. An image processing algorithm was applied to automatically detect and count insect pests on an insect sticky trap with 93% average temporal detection accuracy compared with manual counting. The integrated monitoring system was implemented with multiple sensor nodes in a greenhouse and experiments were performed to test the system’s performance. Experimental results show that the automatic counting of the monitoring system is comparable with manual counting, and the insect pest count information can be continuously and effectively recorded. Information on insect pest concentrations was further analyzed temporally and spatially with environmental factors. Analyses of experimental data reveal that the normalized hourly increase in the insect pest count appears to be associated with the change in light intensity, temperature, and relative humidity. With the proposed system, laborious manual counting can be circumvented and timely assessment of insect pest and environmental information can be achieved. The system also offers an efficient tool for long-term insect pest behavior observations, as well as for practical applications in integrated pest management (IPM).
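Once a sticky-trap image has been thresholded to a binary mask, the automated counting step can be approximated by a connected-component count. A minimal sketch, assuming a binary grid rather than the authors' full image-processing algorithm (the speck-size filter is an illustrative parameter):

```python
from collections import deque

def count_blobs(grid, min_size=2):
    """Count 4-connected foreground components (candidate insects) in
    a binary image, ignoring specks smaller than min_size pixels."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                # Flood-fill this component to measure its size.
                size = 0
                q = deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if size >= min_size:
                    blobs += 1
    return blobs
```

A per-hour series of such counts is what the abstract correlates with the logged temperature, humidity, and light-intensity measurements.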

10.
Camera traps are a powerful and increasingly popular tool for mammal research, but like all survey methods, they have limitations. Identifying animal species from images is a critical component of camera trap studies, yet while researchers recognize constraints with experimental design or camera technology, image misidentification is still not well understood. We evaluated the effects of a species’ attributes (body mass and distinctiveness) and individual observer variables (experience and confidence) on the accuracy of mammal identifications from camera trap images. We conducted an Internet‐based survey containing 20 questions about observer experience and 60 camera trap images to identify. Images were sourced from surveys in northern Australia and included 25 species, ranging in body mass from the delicate mouse (Pseudomys delicatulus, 10 g) to the agile wallaby (Macropus agilis, >10 kg). There was a weak relationship between the accuracy of mammal identifications and observer experience. However, accuracy was highest (100%) for distinctive species (e.g. Short‐beaked echidna [Tachyglossus aculeatus]) and lowest (36%) for superficially non‐distinctive mammals (e.g. rodents like the Pale field‐rat [Rattus tunneyi]). There was a positive relationship between the accuracy of identifications and body mass. Participant confidence was highest for large and distinctive mammals, but was not related to participant experience level. Identifications made with greater confidence were more likely to be accurate. Unreliability in identifications of mammal species is a significant limitation to camera trap studies, particularly where small mammals are the focus, or where similar‐looking species co‐occur. Integration of camera traps with conventional survey techniques (e.g. live‐trapping), use of a reference library or computer‐automated programs are likely to aid positive identifications, while employing a confidence rating system and/or multiple observers may lead to a collection of more robust data. Although our study focussed on Australian species, our findings apply to camera trap studies globally.

11.
As the capacity to collect and store large amounts of data expands, identifying and evaluating strategies to efficiently convert raw data into meaningful information is increasingly necessary. Across disciplines, this data processing task has become a significant challenge, delaying progress and actionable insights. In ecology, the growing use of camera traps (i.e., remotely triggered cameras) to collect information on wildlife has led to an enormous volume of raw data (i.e., images) in need of review and annotation. To expedite camera trap image processing, many have turned to the field of artificial intelligence (AI) and use machine learning models to automate tasks such as detecting and classifying wildlife in images. To contribute to the understanding of the utility of AI tools for processing wildlife camera trap images, we evaluated the performance of a state-of-the-art computer vision model developed by Microsoft AI for Earth, named MegaDetector, using data from an ongoing camera trap study in Arctic Alaska, USA. Compared to image labels determined by manual human review, we found MegaDetector reliably determined the presence or absence of wildlife in images generated by motion detection camera settings (≥94.6% accuracy); however, performance was substantially poorer for images collected with time-lapse camera settings (≤61.6% accuracy). By examining time-lapse images where MegaDetector failed to detect wildlife, we gained practical insights into animal size and distance detection limits and discuss how those may impact the performance of MegaDetector in other systems. We anticipate our findings will stimulate critical thinking about the tradeoffs of using automated AI tools versus manual human review to process camera trap images and help to inform effective implementation of study designs.
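The evaluation above reduces to comparing the model's presence/absence call against the human label, image by image, separately for each camera mode. A small sketch of that bookkeeping (the record layout is assumed, not taken from the study):

```python
def detection_accuracy(human_labels, model_labels):
    """Fraction of images where the model's animal-present/absent call
    matches the human review label."""
    assert len(human_labels) == len(model_labels) and human_labels
    agree = sum(h == m for h, m in zip(human_labels, model_labels))
    return agree / len(human_labels)

def accuracy_by_mode(records):
    """Score each camera mode separately, as in the motion-detection
    vs. time-lapse comparison. records: iterable of
    (mode, human_label, model_label) tuples."""
    by_mode = {}
    for mode, h, m in records:
        by_mode.setdefault(mode, ([], []))
        by_mode[mode][0].append(h)
        by_mode[mode][1].append(m)
    return {mode: detection_accuracy(h, m)
            for mode, (h, m) in by_mode.items()}
```

Splitting by mode before scoring is what exposes the gap the abstract reports between motion-triggered (≥94.6%) and time-lapse (≤61.6%) images.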

12.
Investigating crop feeding patterns by primates is an increasingly important objective for primatologists and conservation practitioners alike. Although camera trap technology is used to study primates and other wildlife in numerous ways (e.g., activity patterns, social structure, species richness, abundance, density, diet, and demography), it is comparatively underused in the study of human–primate interactions. We compare photographic (N = 210) and video (N = 141) data of crop feeding moor macaques (Macaca maura) from remote sensor cameras, functioning for 231 trap days, with ethnographic data generated from semistructured interviews with local farmers. Our results indicate that camera traps can provide data on the following aspects of crop feeding behavior: species, crop type and phase targeted, harvesting technique used, and daily and seasonal patterns of crop feeding activity. We found camera traps less useful, however, in providing information on the individual identification and age/sex class of crop feeders, exact group size, and the amount of crops consumed by the moor macaques. While farmer reports match camera trap data regarding crop feeding species and how wildlife access the gardens, they differ when addressing crop feeding event frequency and timing. Understanding the mismatches between camera trap data and farmer reports is valuable to conservation efforts that aim to mitigate the conflict between crop feeding wildlife and human livelihoods. For example, such information can influence changes in the way certain methods are used to deter crop feeding animals from damaging crops. Ultimately, we recommend using remote-sensing camera technology in conjunction with other methods to study crop feeding behavior.

13.
Camera traps are increasingly used in ecological research. However, tests of their performance are scarce. It is already known from previous work that camera traps frequently fail to capture visits by animals, which can lead to misinterpretation of ecological results such as density estimates or predation events. While previous work is based mainly on mammals, for birds no data are available on whether and how camera traps can be successfully used to estimate species diversity or density. Hence, the goal of our study was an empirical validation of six different camera traps in the field. We observed a total of N = 4567 events (independent visits of a bird) in 100 different sessions from March 2017 until January 2018 while camera traps were deployed. In addition, N = 641 events are based on a comparison of the two close‐up camera traps especially designed for birds. These events were all directly observed by the authors; thus, the cameras can be compared against the human observer. To give an overall assessment and a more generalizable result, we combined the data from the six camera traps and showed that bird size category (effect size = 0.207) and distance (effect size = 0.132) are the most important predictors of a successful trigger. Temperature also had a small effect, and flock size had an impact, with larger flocks being captured more often. Whether the bird approached the camera frontally or laterally had no influence. In Table 8, we give recommendations, based on our results, on the distances at which camera traps should be placed to achieve 25%, 50%, and 75% capture rates for a given bird size.
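Placement recommendations like those described for Table 8 can be derived by inverting a fitted capture-probability model to find the distance that yields a target capture rate. The logistic form and the coefficients below are illustrative placeholders, not the study's fitted model:

```python
import math

def capture_prob(distance_m, intercept, slope):
    """Logistic capture probability as a function of camera-to-bird
    distance, for a given (assumed) fitted intercept and slope."""
    return 1.0 / (1.0 + math.exp(-(intercept + slope * distance_m)))

def distance_for_rate(target, intercept, slope):
    """Invert the logistic to find the distance giving a target
    capture rate (e.g. 0.25, 0.50, 0.75)."""
    logit = math.log(target / (1.0 - target))
    return (logit - intercept) / slope
```

Fitting one such curve per bird size category, then solving for the 25%, 50%, and 75% rates, reproduces the shape of a distance-recommendation table.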

14.
The tiger is one of many species in which individuals can be identified by surface patterns. Camera traps can be used to record individual tigers moving over an array of locations and provide data for monitoring and studying populations and devising conservation strategies. We suggest using a combination of algorithms to calculate similarity scores between pattern samples scanned from the images to automate the search for a match to a new image. We show how using a three-dimensional surface model of a tiger to scan the pattern samples allows comparison of images that differ widely in camera angles and body posture. The software, which is free to download, considerably reduces the effort required to maintain an image catalogue, and we suggest it could be used to trace the origin of a tiger skin by searching a central database of living tigers' images for matches to an image of the skin.
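Combining per-algorithm similarity scores into a ranked list of candidate matches, as the abstract suggests, can be sketched as a weighted sum. The score layout and equal default weights are hypothetical, not the authors' scheme:

```python
def rank_matches(query_scores, weights=None):
    """Combine per-algorithm similarity scores for each catalogue
    individual and return candidate IDs ranked best-first.

    query_scores: {individual_id: [score_alg1, score_alg2, ...]}
    weights: optional per-algorithm weights (defaults to equal).
    """
    n = len(next(iter(query_scores.values())))
    w = weights or [1.0 / n] * n
    combined = {ind: sum(wi * si for wi, si in zip(w, s))
                for ind, s in query_scores.items()}
    return sorted(combined, key=combined.get, reverse=True)
```

A curator would then visually confirm only the top few candidates, which is where the effort saving over exhaustive catalogue comparison comes from.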

15.
In the current context of biodiversity loss through habitat fragmentation, the effectiveness of wildlife crossings, installed at great expense as compensatory measures, is of vital importance for ecological and socio‐economic actors. The evaluation of these structures is directly affected by the efficiency of the monitoring tools (camera traps…) used to assess their effectiveness by observing the animals that use them. The aim of this study was to quantify the efficiency of camera traps in a wildlife crossing evaluation. Six permanent video recording systems sharing the same field of view as six Reconyx HC600 camera traps installed in three wildlife underpasses were used to assess the exact proportion of missed events (an event being the presence of an animal within the field of view) and the error rate concerning underpass crossing behavior (defined as either Entry or Refusal). A sequence of photographs was triggered either by animals (true triggers) or by artefacts (false triggers). We quantified the number of false triggers that had actually been caused by animals that were not visible in the images ("false" false triggers). Camera traps failed to record 43.6% of small mammal events (voles, mice, shrews, etc.) and 17% of medium‐sized mammal events. The type of crossing behavior (Entry or Refusal) was incorrectly assessed in 40.1% of events, with a higher error rate for entries than for refusals. Among the 3.8% of triggers that were false, 85% were "false" false triggers. This study indicates a global underestimation of the effectiveness of wildlife crossings for small mammals. Means of improving camera trap efficiency are discussed.

16.
The use of camera traps is now widespread and their importance in wildlife studies is well understood. Camera trap studies can produce millions of photographs, and there is a need for software to help manage photographs efficiently. In this paper, we describe a software system that was built to successfully manage a large behavioral camera trap study that produced more than a million photographs. We describe the software architecture and the design decisions that shaped the evolution of the program over the study's three-year period. The software system can automatically extract metadata from images and add customized metadata to the images in a standardized format. It can be installed as a standalone application on popular operating systems. It is minimalistic, scalable, and extendable, so that it can be used by small teams or individual researchers for a broad variety of camera trap studies.
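The metadata-extraction idea can be sketched with the standard library alone: walk an image folder and emit one standardized row per photograph. This sketch records only file-system metadata; the software the abstract describes reads image metadata (e.g. EXIF) as well:

```python
import csv
import datetime
import os

def index_images(root, out_csv, exts=(".jpg", ".jpeg", ".png")):
    """Walk a camera-trap image folder and write one standardized
    metadata row per image (relative path, size, modification time).
    Returns the number of images indexed."""
    rows = []
    for dirpath, _, files in os.walk(root):
        for name in sorted(files):
            if name.lower().endswith(exts):
                path = os.path.join(dirpath, name)
                st = os.stat(path)
                rows.append({
                    "path": os.path.relpath(path, root),
                    "bytes": st.st_size,
                    "modified": datetime.datetime.fromtimestamp(
                        st.st_mtime).isoformat(timespec="seconds"),
                })
    with open(out_csv, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=["path", "bytes", "modified"])
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)
```

A fixed column set written by `csv.DictWriter` is one simple way to get the "standardized format" the abstract mentions, so downstream tools can rely on the same fields for every study.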

17.
An experimental technique is discussed in which the magnetic susceptibility of immunomagnetically labeled cells can be determined on a cell-by-cell basis. This technique is based on determining the magnetically induced velocity that an immunomagnetically labeled cell has in a well-defined magnetic energy gradient. This velocity is determined through the use of video recordings of microscopic images of cells moving in the magnetic energy gradient. These video images are then computer digitized and processed using a computer algorithm, cell tracking velocimetry, which allows large numbers (>10³) of cells to be analyzed.
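Per cell, the tracking step reduces to converting a sequence of tracked pixel positions into a velocity. A minimal sketch of that conversion; the frame interval and pixel scale are assumed calibration inputs, and the real algorithm handles detection and track linking as well:

```python
def track_velocity(track, frame_interval_s, pixel_size_um):
    """Mean speed (um/s) of one cell from its tracked pixel positions,
    given one (x, y) position per video frame."""
    if len(track) < 2:
        return 0.0
    # Sum the Euclidean step lengths between consecutive frames.
    total_px = sum(
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        for (x1, y1), (x2, y2) in zip(track, track[1:])
    )
    elapsed_s = frame_interval_s * (len(track) - 1)
    return total_px * pixel_size_um / elapsed_s
```

Mapping this over thousands of tracks is what yields the per-cell susceptibility distribution the abstract describes.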

18.
A novel imaging sensor system for the determination of plasmid-carrying yeast cells was developed. The sensor system consisted of a Silicon Intensifier Target (SIT) video camera, a fluorescent microscope, and a personal computer equipped with an image memory board. The system was based on the fact that the membrane integrity of only plasmid-carrying cells is lost following cell growth in 5-fluoro-orotic acid (5-FOA) containing medium; consequently, these target cells can be stained with fluorescent probes and detected. In this study, plasmid-carrying cells were detected and their fraction determined in a mixture of plasmid-carrying and plasmid-free cells. A good correlation was observed between the values determined by this sensor system and the conventional method in the 30%–80% range, and one assay was possible within 4 h. This sensor system could be used for monitoring the plasmid-carrying fraction in recombinant yeast cells during cultivation.

19.
Fu B, Pitter MC, Russell NA. PLoS ONE 2011, 6(10): e26306
Many applications in biology, such as long-term functional imaging of neural and cardiac systems, require continuous high-speed imaging. This is typically not possible, however, using commercially available systems. The frame rate and the recording time of high-speed cameras are limited by the digitization rate and the capacity of on-camera memory. Further restrictions are often imposed by the limited bandwidth of the data link to the host computer. Even if the system bandwidth is not a limiting factor, continuous high-speed acquisition results in very large volumes of data that are difficult to handle, particularly when real-time analysis is required. In response to this issue many cameras allow a predetermined, rectangular region of interest (ROI) to be sampled, however this approach lacks flexibility and is blind to the image region outside of the ROI. We have addressed this problem by building a camera system using a randomly-addressable CMOS sensor. The camera has a low bandwidth, but is able to capture continuous high-speed images of an arbitrarily defined ROI, using most of the available bandwidth, while simultaneously acquiring low-speed, full frame images using the remaining bandwidth. In addition, the camera is able to use the full-frame information to recalculate the positions of targets and update the high-speed ROIs without interrupting acquisition. In this way the camera is capable of imaging moving targets at high-speed while simultaneously imaging the whole frame at a lower speed. We have used this camera system to monitor the heartbeat and blood cell flow of a water flea (Daphnia) at frame rates in excess of 1500 fps.
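The bandwidth split described above is simple arithmetic: reserve pixels per second for the slow full-frame stream and spend the remainder on the ROI. A sketch with illustrative numbers (the link rate and frame sizes are made-up examples, not the authors' hardware figures):

```python
def max_roi_frame_rate(link_px_per_s, full_w, full_h, full_rate_fps,
                       roi_w, roi_h):
    """Highest ROI frame rate sustainable after reserving part of the
    data link for continuous full-frame readout."""
    full_cost = full_w * full_h * full_rate_fps   # pixels/s for full frames
    remaining = link_px_per_s - full_cost
    if remaining <= 0:
        return 0.0                                # link saturated by full frames
    return remaining / (roi_w * roi_h)
```

The trade the abstract describes falls out directly: a small ROI lets the leftover bandwidth support frame rates orders of magnitude above the full-frame rate.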

20.
The giant panda is a flagship species in ecological conservation. The infrared camera trap is an effective tool for monitoring the giant panda. Images captured by infrared camera traps must be accurately recognized before further statistical analyses can be implemented. Previous research has demonstrated that spatiotemporal and positional contextual information and the species distribution model (SDM) can improve image detection accuracy, especially for difficult-to-see images. Difficult-to-see images include those in which individual animals are only partially observed and it is challenging for the model to detect those individuals. By utilizing the attention mechanism, we developed a unique method based on deep learning that incorporates object detection, contextual information, and the SDM to achieve better detection performance in difficult-to-see images. We obtained 1169 images of the wild giant panda and divided them into a training set and a test set in a 4:1 ratio. Model assessment metrics showed that our proposed model achieved an overall performance of 98.1% in mAP0.5 and 82.9% in recall on difficult-to-see images. Our research demonstrated that the fine-grained multimodal-fusing method applied to monitoring giant pandas in the wild can better detect the difficult-to-see panda images to enhance the wildlife monitoring system.
