Similar Articles
20 similar records found (search time: 15 ms)
1.
The correct identification of individuals is a requirement of capture-mark-recapture (CMR) methods, and it is commonly achieved by applying artificial marks or by mutilation of study animals. An alternative, non-invasive method to identify individuals is to utilize the patterns of their natural body markings. However, the use of pattern mapping is not yet widespread, mainly because it is considered time consuming, particularly in large populations and/or long-term CMR studies. Here we explore the use of pattern mapping for the identification of adult individuals in the alpine (Ichthyosaura alpestris) and smooth (Lissotriton vulgaris) newts (Amphibia, Salamandridae), using the freely available, open-source software Wild-ID. Our photographic datasets comprised nearly 4000 images of captured animals, taken during a 3-year period. The spot patterns of individual newts of both species did not change through time, and were sufficiently varied to allow their individual identification, even in the larger datasets. The pattern-recognition algorithm of Wild-ID was highly successful in identifying individual newts in both species. Our findings indicate that pattern mapping can be successfully employed for the identification of individuals in large populations of a broad range of animals that exhibit natural markings. The significance of pattern mapping is accentuated in CMR studies that aim at obtaining long-term information on the demography and population dynamics of species of conservation interest, such as many amphibians facing population declines.
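The core of such pattern-mapping software is ranking catalogued individuals by similarity to a query photograph. The following is an illustrative sketch only (Wild-ID's actual algorithm is feature-based; the binarized spot masks, Jaccard scoring, and all names here are assumptions for demonstration):

```python
# Illustrative sketch, NOT Wild-ID's algorithm: rank catalogued individuals
# by Jaccard similarity of binarized spot-pattern masks.
import numpy as np

def jaccard(a, b):
    """Jaccard similarity of two boolean spot masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def rank_candidates(query, catalogue):
    """Return catalogue IDs sorted by decreasing similarity to the query."""
    scores = {cid: jaccard(query, mask) for cid, mask in catalogue.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Toy example: three catalogued individuals; the query resembles "newt_B"
# apart from one pixel (an imperfect photograph).
rng = np.random.default_rng(0)
catalogue = {f"newt_{c}": rng.random((8, 8)) > 0.5 for c in "ABC"}
query = catalogue["newt_B"].copy()
query[0, 0] = ~query[0, 0]
print(rank_candidates(query, catalogue)[0])  # top-ranked match
```

In a real workflow the top-ranked candidates would be shown to a human observer for confirmation, which is how such programs keep false acceptances near zero.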

2.
Although the use of camera traps in wildlife management is well established, technologies to automate image processing have been much slower in development, despite their potential to drastically reduce the personnel time and cost required to review photos. We developed AnimalFinder in MATLAB® to identify animal presence in time-lapse camera trap images by comparing individual photos to all images contained within the subset of images (i.e. photos from the same survey and site), with some manual processing required to remove false positives and collect other relevant data (species, sex, etc.). We tested AnimalFinder on a set of camera trap images and compared the presence/absence results with manual-only review for white-tailed deer (Odocoileus virginianus), wild pigs (Sus scrofa), and raccoons (Procyon lotor). We compared abundance estimates, model rankings, and coefficient estimates of detection and abundance for white-tailed deer using N-mixture models. AnimalFinder performance varied depending on a threshold value that affects program sensitivity to frequently occurring pixels in a series of images. Higher threshold values led to fewer false negatives (missed deer images) but increased manual processing time; even at the highest threshold value, the program reduced the images requiring manual review by ~ 40% and correctly identified > 90% of deer, raccoon, and wild pig images. Abundance estimates of white-tailed deer were similar between AnimalFinder and the manual-only method (~ 1–2 deer difference, depending on the model), as were model rankings and coefficient estimates. Our results show that the program significantly reduced data processing time and may increase the efficiency of camera trapping surveys.
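The underlying idea of flagging frames that deviate from frequently occurring pixels can be sketched as background subtraction against a pixel-wise median. This is a minimal illustration, not AnimalFinder's MATLAB implementation; the thresholds and array shapes are assumptions:

```python
# Sketch of the general approach (not AnimalFinder itself): model the static
# background of a camera-trap series as the pixel-wise median, then flag
# frames whose deviation from it exceeds a threshold.
import numpy as np

def flag_animal_frames(stack, threshold=0.1, min_frac=0.01):
    """stack: (n_frames, h, w) grayscale array in [0, 1].
    A frame is flagged when more than `min_frac` of its pixels differ
    from the background by more than `threshold`."""
    background = np.median(stack, axis=0)
    changed = np.abs(stack - background) > threshold
    return changed.mean(axis=(1, 2)) > min_frac

# Toy series: 11 frames of uniform background, one with a bright "animal".
stack = np.full((11, 20, 20), 0.3)
stack[5, 5:10, 5:10] = 0.9
print(np.flatnonzero(flag_animal_frames(stack)))  # → [5]
```

Raising `threshold` (or `min_frac`) trades false positives against false negatives, which mirrors the sensitivity/processing-time trade-off the abstract reports.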

3.
The use of camera traps is now widespread and their importance in wildlife studies is well understood. Camera trap studies can produce millions of photographs and there is a need for software to help manage photographs efficiently. In this paper, we describe a software system that was built to successfully manage a large behavioral camera trap study that produced more than a million photographs. We describe the software architecture and the design decisions that shaped the evolution of the program over the study's three-year period. The software system has the ability to automatically extract metadata from images, and add customized metadata to the images in a standardized format. The software system can be installed as a standalone application on popular operating systems. It is minimalistic, scalable and extendable so that it can be used by small teams or individual researchers for a broad variety of camera trap studies.

4.
Methods for long‐term monitoring of coastal species such as harbor seals (Phoca vitulina) are often costly, time‐consuming, and highly invasive, underscoring the need for improved techniques for data collection and analysis. Here, we propose the use of automated facial recognition technology for identification of individual seals and demonstrate its utility in ecological and population studies. We created a software package, SealNet, that automates photo identification of seals, using a graphical user interface (GUI) to detect, align, and chip seal faces from photographs and a deep convolutional neural network (CNN) suitable for small datasets (e.g., 100 seals with five photos per seal) to classify individual seals. We piloted the SealNet technology with a population of harbor seals located within Casco Bay on the coast of Maine, USA. Across two years of sampling, 2019 and 2020, at seven haul‐out sites in Middle Bay, we obtained a dataset optimized for the development and testing of SealNet. We processed 1752 images representing 408 individual seals and achieved 88% Rank‐1 and 96% Rank‐5 accuracy in closed-set seal identification. In identifying individual seals, SealNet software outperformed a similar face recognition method, PrimNet, developed for primates but retrained on seals. The ease with which a wealth of image data can be processed using SealNet makes it a vital tool for ecological and behavioral studies of marine mammals in the developing field of conservation technology.

In this paper, we describe the successful application of our newly developed automated facial recognition software as a tool for ecological analysis of harbor seals (Phoca vitulina). We outline an emerging method of data collection and analysis that facilitates rapid interpretation of large photo datasets over wide temporal and geographic scales. In addition, we use this machine learning‐based technology in a preliminary ecological study in a wild population of seals in the Casco Bay region of Maine to demonstrate the effectiveness of this non-invasive method for use in mark-recapture and site fidelity studies in the field.
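The Rank‐1 and Rank‐5 figures quoted above follow the standard closed-set identification metric: a query is correct at rank k when the true identity appears among the classifier's top k candidates. A minimal sketch (the toy identities and ranked lists are invented for illustration):

```python
# Rank-k accuracy as conventionally computed for closed-set identification.
def rank_k_accuracy(ranked_ids, true_ids, k):
    """ranked_ids: per-query lists of candidate IDs, best first."""
    hits = sum(t in r[:k] for r, t in zip(ranked_ids, true_ids))
    return hits / len(true_ids)

# Toy example: three queries, each with a ranked candidate list.
ranked = [["seal_1", "seal_2", "seal_3"],
          ["seal_2", "seal_1", "seal_3"],
          ["seal_3", "seal_2", "seal_1"]]
truth = ["seal_1", "seal_1", "seal_1"]
print(rank_k_accuracy(ranked, truth, 1))  # 1/3 correct at rank 1
print(rank_k_accuracy(ranked, truth, 2))  # 2/3 correct within the top 2
```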

5.
Comparative proteomic studies often use statistical tests included in the software for the analysis of digitized images of two-dimensional electrophoresis gels. As these programs include only limited capabilities for statistical analysis, many studies do not further describe their statistical approach. To find potential differences produced by different data processing, we compared the results of (1) Student's t-test using a spreadsheet program, (2) the intrinsic algorithms implemented in the Phoretix 2D gel analysis software, and (3) the SAM algorithm originally developed for microarray analysis. We applied the algorithms to proteome data of undifferentiated neural stem cells versus in vitro differentiated neural stem cells. We found (1) 367 spots differentially expressed using Student's t-test, (2) 203 spots using the algorithms in Phoretix 2D, and (3) 119 spots using the algorithms in SAM, respectively, with an overlap of 42 spots detected by all three algorithms. Applying different statistical approaches to the same dataset resulted in divergent sets of protein spots labeled as statistically "significant". Currently, there is no agreement on statistical data processing of 2DE datasets, but the statistical tests applied in 2DE studies should be documented. Tools for the statistical analysis of proteome data should be implemented and documented in the existing 2DE software.
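Approach (1), a per-spot Student's t-test, reduces to comparing spot volumes between the two cell states. A minimal sketch with the pooled-variance statistic (the spot volumes below are hypothetical; real values come from the gel-analysis software):

```python
# Per-spot two-sample Student's t statistic with pooled variance,
# as in approach (1) of the comparison above. Spot volumes are invented.
import numpy as np

def t_statistic(a, b):
    """Pooled-variance two-sample t statistic for two groups of replicates."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(sp2 * (1 / na + 1 / nb))

# A spot clearly up-regulated after differentiation:
undiff = np.array([1.0, 1.1, 0.9, 1.0])
diff_ = np.array([2.0, 2.1, 1.9, 2.2])
# Compare |t| to the critical value (df = 6, two-tailed alpha = 0.05 is ~2.45).
print(abs(t_statistic(undiff, diff_)) > 2.45)  # True
```

Whether a spot is called "significant" then depends on the chosen alpha and on any multiple-testing correction, which is exactly the undocumented choice the abstract criticizes.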

6.
S. Schmidt, M. Balke, S. Lafogler. ZooKeys 2012(209): 183–191
Here we describe a high-performance imaging system for creating high-resolution images of whole insect drawers. All components of the system are industrial standard and can be adapted to meet the specific needs of entomological collections. A controlling unit allows the setting of imaging area (drawer size), step distance between individual images, number of images, image resolution, and shooting sequence order through a set of parameters. The system is highly configurable and can be used with a wide range of different optical hardware and image processing software.

7.
Prior to performance of linkage analysis, elimination of all Mendelian inconsistencies in the pedigree data is essential. Often, identification of erroneous genotypes by visual inspection can be very difficult and time consuming. In fact, sometimes the errors are not recognized until the stage of running linkage-analysis software. The effort then required to find the erroneous genotypes and to cross-reference pedigree and marker data that may have been recoded and renumbered can be not only tedious but also quite daunting in the case of very large pedigrees. We have implemented four error-checking algorithms in a new computer program, PedCheck, which will assist researchers in identifying all Mendelian inconsistencies in pedigree data and will provide them with useful and detailed diagnostic information to help resolve the errors. Our program, which uses many of the algorithms implemented in VITESSE, handles large data sets quickly and efficiently, accepts a variety of input formats, and offers various error-checking algorithms that match the subtlety of the pedigree error. These algorithms range from simple parent-offspring-compatibility checks to a single-locus likelihood-based statistic that identifies and ranks the individuals most likely to be in error. We use various real data sets to illustrate the power and effectiveness of our program.
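The simplest of the checks mentioned, parent-offspring compatibility at a single locus, can be sketched in a few lines (this is an illustration of the principle, not PedCheck's code; genotypes are unordered allele pairs):

```python
# Single-locus Mendelian consistency check for a father-mother-child trio:
# the child must carry one allele transmissible from each parent.
def trio_consistent(father, mother, child):
    """Genotypes are (allele, allele) tuples."""
    a, b = child
    return ((a in father and b in mother) or
            (b in father and a in mother))

print(trio_consistent((1, 2), (3, 4), (2, 3)))  # True: 2 paternal, 3 maternal
print(trio_consistent((1, 2), (3, 4), (1, 1)))  # False: second 1 has no source
```

PedCheck's stronger levels go beyond single trios, since an inconsistency detected across a whole pedigree may be attributable to any of several individuals; that is what its likelihood-based ranking addresses.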

8.
A method has been developed to run the general purpose operating system RDOS on the same disc of the head scanner computer as is used for scanner software and data. This made it possible to develop additional software in a high level programming language for image processing, based on original image data on the disc. All new images produced by the program are stored on the disc in the same format as the original images. This makes it possible to handle processed images exactly as the original ones and to do multiple operations. The following processing has been included in the program so far: subtraction, smoothing, density profiles, vertical reconstructions, magnification and labelling. A set of operator commands has been developed which are very similar to the ordinary commands for the scanner, which makes the program appear to be a direct extension of the standard scanner software.

9.
A procedure named CROPCLASS was developed to semi-automate census parcel crop assessment in any agricultural area using multitemporal remote images. For each area, CROPCLASS consists of a) the definition of census parcels through vector files in all of the images; b) the extraction of spectral band (SB) and key vegetation index (VI) average values for each parcel and image; c) the formation of a data matrix (MD) from the extracted information; d) the classification of the MD using decision trees (DT) and the definition of Structured Query Language (SQL) crop predictive models, based on preliminary land-use ground-truth work in a reduced number of parcels; and e) the application of the predictive models to classify the land uses of unidentified parcels. The software named CROPCLASS-2.0 was developed to semi-automatically perform the described procedure in an economically feasible manner. The CROPCLASS methodology was validated using seven GeoEye-1 satellite images that were taken over the LaVentilla area (Southern Spain) from April to October 2010 at 3- to 4-week intervals. The studied region was visited every 3 weeks, identifying 12 crops and other land uses in 311 parcels. The DT training models for each cropping system were assessed at a 95% to 100% overall accuracy (OA) for each crop within its corresponding cropping system. The DT training models that were used to directly identify the individual crops were assessed with 80.7% OA, with a user accuracy of approximately 80% or higher for most crops. Generally, the DT model accuracy was similar using the seven images that were taken at approximately one-month intervals, a set of three images taken during early spring, summer and autumn, or a set of two images taken at about 2- to 3-month intervals. The classification of the unidentified parcels for the individual crops was achieved with an OA of 79.5%.
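Step b) of the procedure, averaging a vegetation index over each parcel's pixels in every image, can be sketched as follows. The sketch simplifies parcel geometry to a label raster (the real procedure rasterizes vector parcel files), and the pixel values are invented:

```python
# Sketch of CROPCLASS step b): per-parcel mean of a vegetation index,
# using a label raster in place of the real vector parcel boundaries.
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, a standard key VI."""
    return (nir - red) / (nir + red)

def parcel_means(index_image, parcel_labels):
    """Mean index value per parcel ID (0 = no parcel)."""
    return {int(p): float(index_image[parcel_labels == p].mean())
            for p in np.unique(parcel_labels) if p != 0}

# Toy scene: parcel 1 is vegetated, parcel 2 is bare soil.
labels = np.array([[1, 1, 2, 2]])
nir = np.array([[0.6, 0.6, 0.3, 0.3]])
red = np.array([[0.1, 0.1, 0.3, 0.3]])
print(parcel_means(ndvi(nir, red), labels))  # parcel 1 ≈ 0.71, parcel 2 = 0.0
```

Repeating this over all images and bands yields one row per parcel in the data matrix that the decision trees are then trained on.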

10.
MOTIVATION: Advances in microscopy technology have led to the creation of high-throughput microscopes that are capable of generating several hundred gigabytes of images in a few days. Analyzing such a wealth of data manually is nearly impossible and requires an automated approach. There are at present a number of open-source and commercial software packages that allow the user to apply algorithms of different degrees of sophistication to the images and extract desired metrics. However, the types of metrics that can be extracted are severely limited by the specific image processing algorithms that the application implements, and by the expertise of the user. In most commercial software, code unavailability prevents implementation by the end user of newly developed algorithms better suited for a particular type of imaging assay. While it is possible to implement new algorithms in open-source software, rewiring an image processing application requires a high degree of expertise. To obviate these limitations, we have developed an open-source high-throughput application that allows implementation of different biological assays such as cell tracking or ancestry recording, through the use of small, relatively simple image processing modules connected into sophisticated imaging pipelines. By connecting modules, non-expert users can apply the particular combination of well-established and novel algorithms developed by us and others that are best suited for each individual assay type. In addition, our data exploration and visualization modules make it easy to discover or select specific cell phenotypes from a heterogeneous population. AVAILABILITY: CellAnimation is distributed under the Creative Commons Attribution-NonCommercial 3.0 Unported license (http://creativecommons.org/licenses/by-nc/3.0/). CellAnimation source code and documentation may be downloaded from www.vanderbilt.edu/viibre/software/documents/CellAnimation.zip. Sample data are available at www.vanderbilt.edu/viibre/software/documents/movies.zip. CONTACT: walter.georgescu@vanderbilt.edu SUPPLEMENTARY INFORMATION: Supplementary data available at Bioinformatics online.

11.
Population studies often incorporate capture‐mark‐recapture (CMR) techniques to gather information on long‐term biological and demographic characteristics. A fundamental requirement for CMR studies is that an individual must be uniquely and permanently marked to ensure reliable reidentification throughout its lifespan. Photographic identification involving automated photographic identification software has become a popular and efficient noninvasive method for identifying individuals based on natural markings. However, few studies have (a) robustly assessed the performance of automated programs by using a double‐marking system or (b) determined their efficacy for long‐term studies by incorporating multi‐year data. Here, we evaluated the performance of the program Interactive Individual Identification System (I3S) by cross‐validating photographic identifications based on the head scale pattern of the prairie lizard (Sceloporus consobrinus) with individual microsatellite genotyping (N = 863). Further, we assessed the efficacy of the program to identify individuals over time by comparing error rates between within‐year and between‐year recaptures. Recaptured lizards were correctly identified by I3S in 94.1% of cases. We estimated a false rejection rate (FRR) of 5.9% and a false acceptance rate (FAR) of 0%. By using I3S, we correctly identified 97.8% of within‐year recaptures (FRR = 2.2%; FAR = 0%) and 91.1% of between‐year recaptures (FRR = 8.9%; FAR = 0%). Misidentifications were primarily due to poor photograph quality (N = 4). However, two misidentifications were caused by indistinct scale configuration due to scale damage (N = 1) and ontogenetic changes in head scalation between capture events (N = 1). We conclude that automated photographic identification based on head scale patterns is a reliable and accurate method for identifying individuals over time. 
Because many lizard and other reptile species possess variable head squamation, this method has potential for successful application across many taxa.
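The FRR and FAR reported above follow directly from genotype-confirmed ground truth: among true recaptures, FRR is the fraction the software failed to match; among true first captures, FAR is the fraction it wrongly matched to an existing record. A sketch with invented outcome counts:

```python
# False rejection rate (FRR) and false acceptance rate (FAR) computed
# against genotype-confirmed ground truth; the outcome lists are invented.
def error_rates(recapture_matched, new_matched):
    """Each argument is a list of booleans: did the software report a match?
    recapture_matched: outcomes for animals known (by genotype) to be recaptures.
    new_matched: outcomes for animals known to be new individuals."""
    frr = recapture_matched.count(False) / len(recapture_matched)
    far = new_matched.count(True) / len(new_matched)
    return frr, far

# 17 of 18 true recaptures matched; no new animal falsely matched.
frr, far = error_rates([True] * 17 + [False], [False] * 20)
print(round(frr, 3), far)  # 0.056 0.0
```

A double-marking design like the one in this study is what makes these rates estimable at all; with photographs alone, a false rejection is indistinguishable from a genuine first capture.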

12.

Background

In the past two decades the east African highlands have experienced several major malaria epidemics. Currently there is a renewed interest in exploring the possibility of anopheline larval control through environmental management or larvicide as an additional means of reducing malaria transmission in Africa. This study examined the landscape determinants of anopheline mosquito larval habitats and usefulness of remote sensing in identifying these habitats in western Kenya highlands.

Methods

Panchromatic aerial photos, Ikonos and Landsat Thematic Mapper 7 satellite images were acquired for a study area in Kakamega, western Kenya. Supervised classification of land-use and land-cover and visual identification of aquatic habitats were conducted. Ground survey of all aquatic habitats was conducted in the dry and rainy seasons in 2003. All habitats positive for anopheline larvae were identified. The retrieved data from the remote sensors were compared to the ground results on aquatic habitats and land-use. The probability of finding aquatic habitats and habitats with Anopheles larvae were modelled based on the digital elevation model and land-use types.

Results

The misclassification rate of land-cover types was 10.8% based on Ikonos imagery, 22.6% for panchromatic aerial photos and 39.2% for Landsat TM 7 imagery. The Ikonos image identified 40.6% of aquatic habitats, aerial photos identified 10.6%, and the Landsat TM 7 image identified 0%. Computer models based on topographic features and land-cover information obtained from the Ikonos image yielded a misclassification rate of 20.3–22.7% for aquatic habitats, and 18.1–25.1% for anopheline-positive larval habitats.

Conclusion

One-metre spatial resolution Ikonos images combined with computer modelling based on topographic land-cover features are useful tools for identification of anopheline larval habitats, and they can be used to assist malaria vector control in western Kenya highlands.
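The misclassification rates quoted in the Results are one minus overall accuracy, i.e. the off-diagonal fraction of a confusion matrix. A sketch with a hypothetical three-class land-cover matrix (the class names and counts are invented):

```python
# Overall misclassification rate from a land-cover confusion matrix,
# where confusion[i, j] = pixels of true class i assigned to class j.
import numpy as np

def misclassification_rate(confusion):
    return 1.0 - np.trace(confusion) / confusion.sum()

cm = np.array([[80, 10, 10],    # e.g. forest
               [ 5, 90,  5],    # farmland
               [10,  8, 82]])   # water
print(round(misclassification_rate(cm), 3))  # 0.16
```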

13.
Recognition of individuals within an animal population is central to a range of estimates about population structure and dynamics. However, traditional methods of distinguishing individuals, by some form of physical marking, often rely on capture and handling which may affect aspects of normal behavior. Photographic identification has been used as a less‐invasive alternative, but limitations in both manual and computer‐automated recognition of individuals are particularly problematic for smaller taxa (<500 g). In this study, we explored the use of photographic identification for individuals of a free‐ranging, small terrestrial reptile using (a) independent observers, and (b) automated matching with the Interactive Individual Identification System (I3S Pattern) computer algorithm. We tested the technique on individuals of an Australian skink in the Egernia group, Slater's skink Liopholis slateri, whose natural history and varied scale markings make it a potentially suitable candidate for photo‐identification. From ‘photographic captures’ of skink head profiles, we designed a multi‐choice key based on alternate character states and tested the abilities of observers (with or without experience in wildlife survey) to identify individuals using categorized test photos. We also used the I3S Pattern algorithm to match the same set of test photos against a database of 30 individuals. Experienced observers identified a significantly higher proportion of photos correctly (74%) than those with no experience (63%), while the I3S software correctly matched 67% of images as the first ranked match and 83% within the top five ranks. This study is one of the first to investigate photo identification with a free‐ranging small vertebrate. The method demonstrated here has the potential to be applied to the developing field of camera‐traps for wildlife survey and thus a wide range of survey and monitoring applications.

14.
Ecological Informatics 2012, 7(6): 345–353
Camera traps and the images they generate are becoming an essential tool for field biologists studying and monitoring terrestrial animals, in particular medium to large terrestrial mammals and birds. In the last five years, camera traps have made the transition to digital technology, where these devices now produce hundreds of instantly available images per month and a large amount of ancillary metadata (e.g., date, time, temperature, image size, etc.). Despite this accelerated pace in the development of digital image capture, field biologists still lack adequate software solutions to process and manage the increasing amount of information in a cost efficient way. In this paper we describe a software system that we have developed, called DeskTEAM, to address this issue. DeskTEAM has been developed in the context of the Tropical Ecology Assessment and Monitoring Network (TEAM), a global network that monitors terrestrial vertebrates. We describe the software architecture and functionality and its utility in managing and processing large amounts of digital camera trap data collected throughout the global TEAM network. DeskTEAM incorporates software features and functionality that make it relevant to the broad camera trapping community. 
These include the ability to run the application locally on a laptop or desktop computer, without requiring an Internet connection, as well as the ability to run on multiple operating systems; an intuitive navigational user interface with multiple levels of detail (from individual images, to whole groups of images) which allows users to easily manage hundreds or thousands of images; ability to automatically extract EXIF and custom metadata information from digital images to increase standardization; availability of embedded taxonomic lists to allow users to easily tag images with species identities; and the ability to export data packages consisting of data, metadata and images in standardized formats so that they can be transferred to online data warehouses for easy archiving and dissemination. Lastly, building these software tools for wildlife scientists provides valuable lessons for the ecoinformatics community.

15.
Solutions to three problems in using small microcomputers for interactive cell image analysis are discussed. (1) To allow interactive processing of up to 62 × 88 pixels on inexpensive screens, data can be displayed in gray levels with an approximate logarithmic grading. Each pixel is composed of 32 screen coordinates, applying the dither matrix method to avoid artificial structures. (2) To mark special regions of interest in the image, a graphic cursor, handled from the keyboard, was implemented. (3) To evaluate parts of the image, as outlined by the cursor, the program must distinguish whether a particular pixel is outside, inside or on the border of the region. The developed algorithms permit practical interactive evaluation of cell images on a small microcomputer, with no image analysis implementation. However, it is necessary that the assembly language of the microprocessor be available for some sophisticated programming and that the operating system support graphic facilities with an appropriate resolution.
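The dither matrix method referred to in (1) renders gray levels on a binary display by thresholding each pixel against a small tiled matrix, rather than a single global threshold that would create artificial contours. A modern sketch of the idea (the 2×2 Bayer matrix below is the classic minimal case, not necessarily the one used in the paper):

```python
# Ordered (Bayer) dithering: render a gray level on a binary screen by
# thresholding against a tiled dither matrix.
import numpy as np

BAYER_2X2 = np.array([[0, 2],
                      [3, 1]]) / 4.0   # per-cell thresholds in [0, 1)

def dither(gray):
    """gray: 2-D array in [0, 1) -> boolean 'pixel on' array."""
    h, w = gray.shape
    tiled = np.tile(BAYER_2X2, (h // 2 + 1, w // 2 + 1))[:h, :w]
    return gray > tiled

# A uniform 50% gray renders with exactly half the screen pixels on.
out = dither(np.full((4, 4), 0.5))
print(out.mean())  # 0.5
```

Because neighboring pixels use different thresholds, a uniform input produces a fine checkerboard rather than a solid region, which is what suppresses the artificial structures the abstract mentions.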

16.
Kastrikin V. A., Podol’skii S. A., Babykina M. S. Biology Bulletin 2021, 48(10): 1857–1861
Abstract: A new method for calculating the population density of terrestrial animals, which are not amenable to individual identification, using photos or video images...

17.
Remote sensing can be a valuable alternative or complement to traditional techniques for monitoring wildlife populations, but often entails operational bottlenecks at the image analysis stage. For example, photographic aerial surveys have several advantages over surveys employing airborne observers or other more intrusive monitoring techniques, but produce onerous amounts of imagery for manual analysis when conducted across vast areas, such as the Arctic. Deep learning algorithms, chiefly convolutional neural networks (CNNs), have shown promise for automatically detecting wildlife in large and/or complex image sets. But for sparsely distributed species, such as polar bears (Ursus maritimus), there may not be sufficient known instances of the animals in an image set to train a CNN. We investigated the feasibility of instead providing ‘synthesized’ training data to a CNN to detect polar bears throughout large volumes of aerial imagery from a survey of the Baffin Bay subpopulation. We harvested 534 miscellaneous images of polar bears from the Web that we edited to more closely resemble 21 known images of bears from the aerial survey that were solely used for validation. We combined the Web images of polar bears with 6292 random background images from the aerial survey to train a CNN (ResNet-50), which subsequently correctly classified 20/21 (95%) bear images from the survey and 1172/1179 (99.4%) random background validation images. Given that even a small background misclassification rate could produce multitudinous false positives over many thousands of photos, we describe a potential workflow to efficiently screen out erroneous detections. We also discuss potential avenues to improve CNN accuracy, and the broader applicability of our approach to other image-based wildlife monitoring scenarios. Our results demonstrate the feasibility of using miscellaneously sourced images of animals to train deep neural networks for specific wildlife detection tasks.

18.
We have developed a technique to detect, recognize, and track each individual low density lipoprotein receptor (LDL-R) molecule and small receptor clusters on the surface of human skin fibroblasts. Molecular recognition and high precision (30 nm) simultaneous automatic tracking of all of the individual receptors in the cell surface population utilize quantitative time-lapse low light level digital video fluorescence microscopy analyzed by purpose-designed algorithms executed on an image processing work station. The LDL-Rs are labeled with the biologically active, fluorescent LDL derivative DiI-LDL. Individual LDL-Rs and unresolved small clusters are identified by measuring the fluorescence power radiated by the sub-resolution fluorescent spots in the image; identification of single particles is ascertained by four independent techniques. An automated tracking routine was developed to track simultaneously, and without user intervention, a multitude of fluorescent particles through a sequence of hundreds of time-lapse image frames. The limitations on tracking precision were found to depend on the signal-to-noise ratio of the tracked particle image and mechanical drift of the microscope system. We describe the methods involved in (i) time-lapse acquisition of the low-light level images, (ii) simultaneous automated tracking of the fluorescent diffraction limited punctate images, (iii) localizing particles with high precision and limitations, and (iv) detecting and identifying single and clustered LDL-Rs. These methods are generally applicable and provide a powerful tool to visualize and measure dynamics and interactions of individual integral membrane proteins on living cell surfaces.
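The heart of any such automated tracking routine is linking particle positions between consecutive frames. A greedy nearest-neighbour linker sketches the core idea (the paper's algorithms are more elaborate; the coordinates and `max_disp` cutoff below are invented):

```python
# Greedy nearest-neighbour frame-to-frame particle linking: each particle
# in the previous frame claims its nearest unclaimed particle in the next
# frame, with links beyond max_disp rejected (particle lost or new).
import numpy as np

def link_frames(prev_pts, next_pts, max_disp):
    """Return {prev_index: next_index} for accepted links."""
    links, taken = {}, set()
    for i, p in enumerate(prev_pts):
        d = np.linalg.norm(next_pts - p, axis=1)
        d[list(taken)] = np.inf           # already-claimed particles excluded
        j = int(np.argmin(d))
        if d[j] <= max_disp:
            links[i] = j
            taken.add(j)
    return links

prev_pts = np.array([[0.0, 0.0], [10.0, 10.0]])
next_pts = np.array([[10.5, 9.5], [0.4, 0.2]])   # both moved slightly
print(link_frames(prev_pts, next_pts, max_disp=2.0))  # {0: 1, 1: 0}
```

Chaining these links across hundreds of frames yields per-particle trajectories; globally optimal assignment (rather than greedy) is the usual refinement when particles are dense.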

19.
Accurate identification of humpback whales from photographic identification data depends on the quality of the photographs and the distinctiveness of the flukes. Criteria for evaluating photographic quality and individual distinctiveness were developed involving judgments about overall quality or distinctiveness and about specific aspects of each. These criteria were tested for the level of agreement among judges. The distinctiveness scheme was tested for the independence of distinctiveness judgments and photographic quality. Our results show that judges could agree when evaluating specific and overall aspects of photographic quality and individual distinctiveness. The level of agreement varied for different pairs of judges, and less adept judges were identified. Ability to agree on evaluations of photographic quality was independent of the experience of the judges. Overall photographic quality and overall distinctiveness were successfully predicted from more specific variables, but the agreement between judges for these was not significantly greater than the agreement for the overall measures judged directly. There was no correlation between individual distinctiveness and photographic quality for four of the five judges, but the power of this test may be low. Analyses of photographic identification data frequently require evaluations of photographic quality and individual distinctiveness. To obtain reliable results from such analyses, evaluation schemes and judges should be tested to ensure reliable and consistent evaluations.

20.
MOTIVATION: High-resolution mass spectrometers generate large data files that are complex, noisy and require extensive processing to extract the optimal data from raw spectra. This processing is readily achieved in software and is often embedded in manufacturers' instrument control and data processing environments. However, the speed of this data processing is such that it is usually performed off-line, post data acquisition. We have been exploring strategies that would allow real-time advanced processing of mass spectrometric data, making use of the reconfigurable computing paradigm, which exploits the flexibility and versatility of Field Programmable Gate Arrays (FPGAs). This approach has emerged as a powerful solution for speeding up time-critical algorithms. We describe here a reconfigurable computing solution for processing raw mass spectrometric data generated by MALDI-ToF instruments. The hardware-implemented algorithms for de-noising, baseline correction, peak identification and deisotoping, running on a Xilinx Virtex 2 FPGA at 180 MHz, generate a mass fingerprint over 100 times faster than an equivalent algorithm written in C, running on a Dual 3 GHz Xeon workstation.
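Two of the pipeline stages named above, de-noising and peak identification, have simple software reference forms that an FPGA realizes as streaming logic. A sketch (moving-average smoothing and local-maximum peak picking are generic stand-ins, not the paper's exact hardware algorithms; the spectrum is invented):

```python
# Software reference for two pipeline stages: de-noising by moving average,
# then peak identification as local maxima above a minimum height.
import numpy as np

def smooth(spectrum, window=3):
    """Moving-average de-noising (an FPGA implements this as a shift
    register plus adder tree, one sample per clock)."""
    kernel = np.ones(window) / window
    return np.convolve(spectrum, kernel, mode="same")

def find_peaks(spectrum, min_height):
    """Indices of strict-rise/flat-or-fall local maxima above min_height."""
    s = spectrum
    return [i for i in range(1, len(s) - 1)
            if s[i] > s[i - 1] and s[i] >= s[i + 1] and s[i] >= min_height]

raw = np.array([0.0, 0.1, 0.0, 1.0, 3.0, 1.0, 0.1, 0.0, 2.0, 0.1, 0.0])
peaks = find_peaks(smooth(raw), min_height=0.5)
print(peaks)  # indices of the two detected peaks
```

The speedup the abstract reports comes from pipelining: each stage processes one sample per clock cycle concurrently with the others, whereas the C implementation traverses the spectrum stage by stage.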


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号