Similar Documents
 20 similar documents found (search time: 31 ms)
1.
Fluorescence-assisted image analysis of freshwater microalgae   (Cited by: 3; self-citations: 0; citations by others: 3)
We exploit a property of microalgae, namely their ability to autofluoresce when exposed to epifluorescence illumination, to tackle the problem of detecting and analysing microalgae in sediment samples containing complex scenes. We have added fluorescence excitation to the hardware portion of our microalgae image processing system. We quantitatively measured 120 characteristics of each object detected through fluorescence excitation, and used an optimized subset of these characteristics for later automated analysis and species classification. All specimens used for training and testing our system came from natural populations found in Lake Biwa, Japan. Without the use of fluorescence excitation, automated analysis of images containing algae specimens in sediment is nearly impossible. We also used fluorescence imaging to target microalgae in water samples containing large numbers of obtrusive nontargeted objects, which would otherwise slow processing speed and decrease species analysis and classification accuracy. Object drift problems, which arise because both a fluorescence and a greyscale image of each microscope scene must be used, were solved using techniques such as template matching and a novel form of automated seeded region growing (SRG). Our system proved to be not only user-friendly but also highly accurate in classifying two major genera of microalgae found in Lake Biwa: the cyanobacteria Anabaena spp. and Microcystis spp. Classification accuracy was measured to be over 97%.
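The drift-correction ideas named here (template matching between the paired fluorescence and greyscale frames, plus seeded region growing) can be sketched with OpenCV and NumPy. The file names, template window, tolerance, and seed below are illustrative assumptions, not details from the paper, and the SRG variant shown is one simple formulation rather than the authors' novel one.

```python
import cv2
import numpy as np
from collections import deque

# Drift correction: locate a fluorescence patch inside the greyscale scene.
grey = cv2.imread("scene_grey.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file
fluo = cv2.imread("scene_fluor.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file
patch = fluo[100:164, 100:164]                               # 64x64 template (illustrative)
scores = cv2.matchTemplate(grey, patch, cv2.TM_CCOEFF_NORMED)
_, _, _, best_xy = cv2.minMaxLoc(scores)                     # best-match location (x, y)

def seeded_region_grow(img, seed, tol=12):
    """Grow a region from `seed`, accepting 4-neighbours whose intensity is
    within `tol` of the running region mean (one simple SRG formulation)."""
    h, w = img.shape
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(float(img[ny, nx]) - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(img[ny, nx]); count += 1
                    queue.append((ny, nx))
    return mask

region = seeded_region_grow(fluo, seed=(120, 130))           # seed is illustrative
```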

2.
Conceptually, protein crystallization can be divided into two phases: search and optimization. Robotic protein crystallization screening can speed up the search phase and has the potential to increase process quality. Automated image classification helps to increase throughput and consistently generate objective results. Although the classification accuracy can always be improved, our image analysis system can classify images from 1536-well plates with high classification accuracy (85%) and ROC score (0.87), as evaluated on 127 human-classified protein screens containing 5,600 crystal images and 189,472 non-crystal images. Data mining can integrate results from high-throughput screens with information about crystallization conditions, intrinsic protein properties, and results from crystallization optimization. We apply association mining, a data mining approach that identifies frequently occurring patterns among variables and their values. This approach segregates proteins into groups based on how they react in a broad range of conditions, and clusters cocktails to reflect their potential to achieve crystallization. These results may lead to crystallization screen optimization, and reveal associations between protein properties and crystallization conditions. We also postulate that past experience may lead us to the identification of initial conditions favorable to crystallization for novel proteins.
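Association mining over screening results can be illustrated with the Apriori algorithm from the mlxtend library, used here as a stand-in since the abstract does not name an implementation. The condition and outcome column names are invented for the toy table; real input would be one row per screening experiment.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Toy one-hot table: rows are screening experiments, columns are
# conditions/outcomes (all names are illustrative, not from the paper).
screens = pd.DataFrame({
    "PEG_3350":    [1, 1, 0, 1, 0, 1],
    "pH_below_6":  [1, 0, 0, 1, 1, 1],
    "high_ionic":  [0, 1, 1, 0, 0, 1],
    "crystal_hit": [1, 1, 0, 1, 0, 1],
}, dtype=bool)

frequent = apriori(screens, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)

# Keep only the rules whose consequent is a crystallization hit.
hits = rules[rules["consequents"] == frozenset({"crystal_hit"})]
print(hits[["antecedents", "support", "confidence", "lift"]])
```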

3.
INTRODUCTION: This paper concerns the analysis of features obtained from thyroid ultrasound images in left and right transverse and longitudinal sections. In the image analysis, the thyroid lobe is treated as a texture, for both healthy subjects and patients with Hashimoto's disease. The applied methods of analysis and image processing were tailored to extract 10 features from each image, whose significance for classification was then assessed. MATERIAL: The examined group consisted of 29 healthy subjects aged 18 to 60 and 65 patients with Hashimoto's disease. For each subject, four ultrasound images were taken, covering transverse and longitudinal sections of the right and left lobes of the thyroid, giving 376 images in total. METHOD: 10 different features obtained from each ultrasound image were proposed. The analyzed thyroid lobe was marked, automatically or manually, with a rectangular element. RESULTS: Analyzing the 10 features and building a separate decision tree configuration for each of them distinguished the 3 most significant features. The classification quality results show accuracy above 94% for an unpruned decision tree.
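A hedged sketch of a texture-plus-decision-tree pipeline of this kind follows, with grey-level co-occurrence matrix (GLCM) statistics standing in for the paper's unspecified 10 features, and random arrays standing in for real ultrasound regions of interest.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.tree import DecisionTreeClassifier

def texture_features(roi):
    """A few GLCM statistics from a uint8 region of interest; stand-ins
    for the paper's actual 10 features, which are not specified here."""
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")]

rng = np.random.default_rng(0)
rois = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(40)]
labels = rng.integers(0, 2, 40)          # 0 = healthy, 1 = Hashimoto (toy labels)
X = np.array([texture_features(r) for r in rois])

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, labels)
print(clf.score(X, labels))              # resubstitution accuracy on toy data
```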

4.
5.

Background

Automated image analysis on virtual slides is evolving rapidly and will play an important role in the future of digital pathology. Due to the image size, the computational cost of processing whole slide images (WSIs) at full resolution is immense. Moreover, image analysis requires well-focused images at high magnification.

Methods

We present a system that merges virtual microscopy techniques, open-source image analysis software, and distributed parallel processing. We have integrated the parallel processing framework JPPF, so that batch processing can be performed in a distributed, parallel fashion. All resulting metadata and image data are collected and merged. As an example, the system is applied to the specific task of image sharpness assessment. ImageJ is an open-source image editing and processing framework developed at the NIH, with a large user community that contributes image processing algorithms, wrapped as plug-ins, across a wide range of life science applications. We developed an ImageJ plug-in that supports both basic interactive virtual microscopy and batch processing functionality. For sharpness inspection we employ an approach based on non-overlapping tiles. Compute nodes retrieve image tiles of moderate size from the streaming server and compute the focus measure. Each tile is divided into small sub-images on which an edge-based sharpness criterion is calculated and used for classification. The results are aggregated in a sharpness map.
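The tiled, edge-based focus measure can be sketched in a few lines of Python; Tenengrad (Sobel gradient energy) is used as a plausible stand-in for the paper's unspecified sharpness criterion, and the tile size and category thresholds are illustrative assumptions.

```python
import numpy as np
import cv2

def tenengrad(tile):
    """Edge-based focus measure: mean squared Sobel gradient magnitude."""
    gx = cv2.Sobel(tile, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(tile, cv2.CV_64F, 0, 1, ksize=3)
    return float(np.mean(gx * gx + gy * gy))

def sharpness_map(image, tile=256):
    """Score non-overlapping tiles, as the compute nodes do for each WSI tile."""
    h, w = image.shape
    rows, cols = h // tile, w // tile
    m = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            m[r, c] = tenengrad(image[r*tile:(r+1)*tile, c*tile:(c+1)*tile])
    return m

def classify(score_map, t_ok=50.0, t_review=20.0):
    """Map mean sharpness to the paper's categories; thresholds are made up."""
    s = score_map.mean()
    if s >= 2 * t_ok:
        return "excellent"
    if s >= t_ok:
        return "okay"
    if s >= t_review:
        return "review"
    return "defective"

img = cv2.imread("wsi_tile.png", cv2.IMREAD_GRAYSCALE)   # hypothetical tile
print(classify(sharpness_map(img)))
```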

Results

Based on this system we calculate a sharpness measure and classify virtual slides into one of the following categories: excellent, okay, review, and defective. Generating a scaled sharpness map enables the user to evaluate the sharpness of WSIs and shows overall quality at a glance, thus reducing tedious assessment work.

Conclusions

Using sharpness assessment as an example, the introduced system can be used to process and analyze whole slide images, and to parallelize that analysis, based on open-source software.

6.
Suitable shark conservation depends on well-informed population assessments. Direct methods such as scientific surveys and fisheries monitoring are adequate for defining population statuses, but species-specific indices of abundance and distribution from these sources are rare for most shark species. We can rapidly fill these information gaps by boosting media-based remote monitoring efforts with machine learning and automation. We created a database of 53,345 shark images covering 219 species of sharks, and packaged object-detection and image classification models into a Shark Detector bundle. The Shark Detector recognizes and classifies sharks from videos and images using transfer learning and convolutional neural networks (CNNs). We applied these models to common data-generation approaches for sharks: collecting occurrence records from photographs taken by the public or citizen scientists, processing baited remote camera footage and online videos, and data-mining Instagram. We examined the accuracy of each model and tested genus and species prediction correctness as a function of training data quantity. The Shark Detector can classify 47 species pertaining to 26 genera. It sorted heterogeneous datasets of images sourced from Instagram with 91% accuracy and classified species with 70% accuracy. It located sharks in baited remote footage and YouTube videos with 89% accuracy, and classified located subjects to the species level with 69% accuracy. All data-generation methods were processed without manual interaction. As media-based remote monitoring appears set to dominate methods for observing sharks in nature, we developed an open-source Shark Detector to facilitate common identification applications. Prediction accuracy of the software pipeline increases as more images are added to the training dataset. We provide public access to the software on our GitHub page.
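Beyond "transfer learning and CNNs", the abstract does not fix an architecture, so the sketch below is generic: a torchvision ResNet-18 is an assumed backbone, its pretrained features are frozen, and only a new 47-way classification head (matching the species count above) is trained. The batch is random data standing in for shark images.

```python
import torch
import torch.nn as nn
from torchvision import models

# Transfer learning in the spirit of the Shark Detector: reuse a CNN
# pretrained on ImageNet and retrain only the classification head.
# ResNet-18 is an assumed backbone; the paper's actual networks may differ.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                            # freeze pretrained features
backbone.fc = nn.Linear(backbone.fc.in_features, 47)   # 47 shark species

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One toy training step on a random batch (stand-in for real shark images).
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 47, (8,))
optimizer.zero_grad()
loss = loss_fn(backbone(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```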

7.
Data transformations prior to analysis may be beneficial in classification tasks. In this article we investigate a set of such transformations on 2D graph-data derived from facial images and their effect on classification accuracy in a high-dimensional setting. These transformations are low-variance in the sense that each involves only a fixed, small number of input features. We show that classification accuracy can be improved when penalized regression techniques are employed in place of a principal component analysis (PCA) pre-processing step. In our data example, classification accuracy improves from 47% to 62% when switching from PCA to penalized regression. A second goal is to visualize the resulting classifiers. We develop importance plots highlighting the influence of coordinates in the original 2D space. Features used for classification are mapped to coordinates in the original images and combined into an importance measure for each pixel. These plots assist in assessing the plausibility of classifiers, interpreting classifiers, and determining the relative importance of different features.
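A minimal sketch of the penalized-regression idea and the pixel-importance map follows; the image size, data, and choice of L1-penalized logistic regression are illustrative assumptions (the article does not pin down its penalty in this abstract).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
h, w = 32, 32                        # toy image size, not the paper's data
X = rng.normal(size=(120, h * w))    # flattened images
y = rng.integers(0, 2, 120)          # two facial classes (toy labels)

# L1-penalized logistic regression in place of a PCA pre-processing step.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)

# Importance plot: fold the coefficient vector back into image coordinates
# so each pixel's influence on the classifier can be visualized.
importance = np.abs(clf.coef_).reshape(h, w)
print(importance.max(), int((importance > 0).sum()), "active pixels")
```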

8.
In this paper, I describe a set of procedures that automate forest disturbance mapping using a pair of Landsat images. The approach is built on the traditional pair-wise change detection method, but is designed to extract training data without user interaction and uses a robust classification algorithm capable of handling incorrectly labeled training data. The steps in this procedure include: i) creating masks for water, non-forested areas, clouds, and cloud shadows; ii) identifying training pixels whose value is above or below a threshold defined by the number of standard deviations from the mean value of the histograms generated from local windows in the short-wave infrared (SWIR) difference image; iii) filtering the original training data through a number of classification algorithms using n-fold cross validation to eliminate mislabeled training samples; and finally, iv) mapping forest disturbance using a supervised classification algorithm. When applied to 17 Landsat footprints across the U.S. at five-year intervals between 1985 and 2010, the proposed approach produced forest disturbance maps with 80 to 95% overall accuracy, comparable to those obtained from traditional approaches to forest change detection. The primary sources of misclassification errors included inaccurate identification of forests (errors of commission), issues related to the land/water mask, and clouds and cloud shadows missed during image screening. The approach requires images from the peak growing season, at least for the deciduous forest sites, and cannot readily distinguish forest harvest from natural disturbances or other types of land cover change. The accuracy of detecting forest disturbance diminishes with the number of years between the images that make up the image pair. Nevertheless, the relatively high accuracy, the little or no user input needed for processing, the speed of map production, and the simplicity of the approach make the new method especially practical for forest cover change analysis over very large regions.
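Step ii), selecting training pixels by a standard-deviation threshold on the SWIR difference image, reduces to a few NumPy lines. The sketch below simplifies by using global statistics and a k = 2 multiplier; the paper derives its thresholds from histograms of local windows, and the synthetic image is a stand-in for a real Landsat band difference.

```python
import numpy as np

rng = np.random.default_rng(2)
swir_diff = rng.normal(0.0, 0.05, size=(512, 512))   # toy SWIR difference image

k = 2.0                                   # std-dev multiplier (illustrative)
mu, sigma = swir_diff.mean(), swir_diff.std()

# Pixels far below the mean difference are candidate "disturbed" training
# pixels; pixels near the mean are candidate "undisturbed" training pixels.
disturbed = swir_diff < mu - k * sigma
undisturbed = np.abs(swir_diff - mu) < 0.5 * sigma
print(int(disturbed.sum()), int(undisturbed.sum()))
```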

9.
In the following work we discuss the application of image processing and pattern recognition to the field of quantitative phycology. We give an overview of the area of image processing and review previously published literature pertaining to the analysis of phycological images and, in particular, cyanobacterial image processing. We then discuss the main operations used to process images and quantify the data contained within them. To demonstrate the utility of image processing for cyanobacteria classification, we present details of an image analysis system for automatically detecting and classifying several cyanobacterial taxa of Lake Biwa, Japan. Specifically, we initially target the genus Microcystis for detection and classification from among several species of Anabaena. We subsequently extend the system to classify a total of six cyanobacteria species. High-resolution microscope images containing a mix of the above species and other nontargeted objects are analyzed, and any detected objects are extracted from the image for further analysis. Following image enhancement, we measure object properties and compare them to a previously compiled database of species characteristics. Classification of an object into a particular class (e.g., “Microcystis,” “A. smithii,” “Other,” etc.) is performed using parametric statistical methods. Leave-one-out classification results suggest a system error rate of approximately 3%. Received: September 6, 1999 / Accepted: February 6, 2000
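Parametric statistical classification with leave-one-out evaluation, as described, can be mimicked with scikit-learn. Gaussian quadratic discriminant analysis is an assumption here, since the abstract does not name the exact parametric model, and the object measurements are synthetic.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(3)
# Toy object measurements (e.g. size, shape, intensity statistics) for three
# classes standing in for "Microcystis", "A. smithii" and "Other".
X = np.vstack([rng.normal(m, 1.0, size=(30, 4)) for m in (0.0, 2.5, 5.0)])
y = np.repeat([0, 1, 2], 30)

model = QuadraticDiscriminantAnalysis()
acc = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out error rate: {1 - acc:.1%}")
```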

10.
Purpose: The classification of urinary stones is important prior to treatment because the treatment depends on which of three types of urinary stone is present, i.e., calcium, uric acid, or mixture stones. We have developed an automatic approach for classifying urinary stones into the three types from microcomputed tomography (micro-CT) images using a convolutional neural network (CNN). Materials and methods: Thirty urinary stones from different patients were scanned in vitro using micro-CT (pixel size: 14.96 μm; slice thickness: 15 μm); a total of 2,430 images (micro-CT slices) were produced. The slices (227 × 227 pixels) were classified into the three categories based on their energy dispersive X-ray (EDX) spectra obtained via scanning electron microscopy (SEM). The images of urinary stones from each category were divided into three parts; 66%, 17%, and 17% of the dataset were assigned to the training, validation, and test datasets, respectively. The CNN model with 15 layers was assessed on validation accuracy to optimize hyperparameters such as batch size, learning rate, and number of epochs with different optimizers. The model with the optimized hyperparameters was then evaluated on the test dataset to obtain the classification accuracy and error. Results: The validation accuracy of the developed approach with the optimized hyperparameters was 0.9852. The trained CNN model achieved a test accuracy of 0.9959 with a classification error of 1.2%. Conclusions: The proposed automated CNN-based approach successfully classified urinary stones into the three types, namely calcium, uric acid, and mixture stones, using micro-CT images.
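A compact PyTorch sketch of a small CNN over 227 × 227 single-channel micro-CT slices with a three-way output is shown below; the layer sizes are illustrative and do not reproduce the paper's 15-layer architecture.

```python
import torch
import torch.nn as nn

# Small CNN for 227x227 single-channel micro-CT slices, three stone classes
# (calcium / uric acid / mixture). Layer sizes are illustrative only.
model = nn.Sequential(
    nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3),          nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3),          nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 3),
)

x = torch.randn(4, 1, 227, 227)      # a toy batch of slices
print(model(x).shape)                # -> torch.Size([4, 3])
```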

11.
Leaf area and its derivatives (e.g. specific leaf area) are widely used in ecological assessments, especially in the fields of plant–animal interactions, plant community assembly, ecosystem functioning and global change. Estimating leaf area is highly time-consuming, even when using specialized software to process scanned leaf images, because manual inputs are invariably required for scale detection and leaf surface digitisation. We introduce Black Spot Leaf Area Calculator (hereafter, Black Spot), a technique and stand-alone software package for rapid and automated leaf area assessment from images of leaves taken with standard flatbed scanners. Black Spot operates on comprehensive rule-sets for colour band ratios to carry out pixel-based classification, which isolates leaf surfaces from the image background. Importantly, the software extracts information from associated image meta-data to detect image scale, thereby eliminating the need for time-consuming manual scale calibration. Black Spot's output provides the user with estimates of leaf area as well as classified images for error checking. We tested this method and software combination on a set of 100 leaves of 51 different plant species collected from the field. Leaf area estimates generated using Black Spot and by manual processing of the images in image editing software were statistically identical. The mean error rate in leaf area estimates from Black Spot relative to manual processing was −0.4% (SD = 0.76). The key advantage of Black Spot is the ability to rapidly batch process multi-species datasets with minimal user effort and at low cost, making it a valuable tool for field ecologists.
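The core of the approach (colour-band-ratio pixel classification plus scale read from image metadata) might look as follows; the green-dominance rule and the DPI lookup are simplified assumptions, not Black Spot's actual rule-sets, and the file name is hypothetical.

```python
import numpy as np
from PIL import Image

img = Image.open("leaf_scan.jpg")                   # hypothetical flatbed scan
dpi = float(img.info.get("dpi", (300, 300))[0])     # scale from metadata, no ruler
px_per_cm2 = (dpi / 2.54) ** 2                      # pixels per square centimetre

rgb = np.asarray(img, dtype=float)
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

# Simplified colour-band-ratio rule: leaf pixels are green-dominant and darker
# than the white scanner background. (Illustrative, not Black Spot's rule-set.)
leaf = (g > 1.1 * r) & (g > 1.1 * b) & (rgb.sum(axis=-1) < 600)

print(f"leaf area: {leaf.sum() / px_per_cm2:.2f} cm^2")
```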

12.
Accurate identification of DNA polymorphisms using next-generation sequencing technology is challenging because of a high rate of sequencing error and incorrect mapping of reads to reference genomes. Currently available short read aligners and DNA variant callers suffer from these problems. We developed the Coval software to improve the quality of short read alignments. Coval is designed to minimize the incidence of spurious alignment of short reads, by filtering mismatched reads that remained in alignments after local realignment and error correction of mismatched reads. The error correction is executed based on the base quality and allele frequency at the non-reference positions for an individual or pooled sample. We demonstrated the utility of Coval by applying it to simulated genomes and experimentally obtained short-read data of rice, nematode, and mouse. Moreover, we found an unexpectedly large number of incorrectly mapped reads in ‘targeted’ alignments, where the whole genome sequencing reads had been aligned to a local genomic segment, and showed that Coval effectively eliminated such spurious alignments. We conclude that Coval significantly improves the quality of short-read sequence alignments, thereby increasing the calling accuracy of currently available tools for SNP and indel identification. Coval is available at http://sourceforge.net/projects/coval105/.
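The flavour of mismatch-based read filtering can be sketched with pysam; this is only an illustration, not Coval's algorithm, which additionally performs local realignment and quality- and allele-frequency-aware error correction. The file names and threshold are assumptions.

```python
import pysam

# Illustrative alignment filtering: drop mapped reads whose edit distance
# (NM tag) exceeds a threshold. Coval's real pipeline is more involved.
MAX_MISMATCHES = 2                       # illustrative threshold

with pysam.AlignmentFile("in.bam", "rb") as src, \
     pysam.AlignmentFile("filtered.bam", "wb", template=src) as dst:
    for read in src.fetch(until_eof=True):
        if read.is_unmapped:
            continue
        nm = read.get_tag("NM") if read.has_tag("NM") else 0
        if nm <= MAX_MISMATCHES:
            dst.write(read)
```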

13.
14.
We have developed software for fully automated tracking of vibrissae (whiskers) in high-speed videos (>500 Hz) of head-fixed, behaving rodents trimmed to a single row of whiskers. Performance was assessed against a manually curated dataset consisting of 1.32 million video frames comprising 4.5 million whisker traces. The current implementation detects whiskers with a recall of 99.998% and identifies individual whiskers with 99.997% accuracy. The average processing rate for these images was 8 Mpx/s/cpu (2.6 GHz Intel Core2, 2 GB RAM). This translates to 35 processed frames per second for a 640 px × 352 px video of 4 whiskers. The speed and accuracy achieved enable quantitative behavioral studies in which the analysis of millions of video frames is required. We used the software to analyze the evolving whisking strategies as mice learned a whisker-based detection task over the course of 6 days (8,148 trials, 25 million frames) and to measure the forces at the sensory follicle that most underlie haptic perception.
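As a quick consistency check on the quoted rates (a recomputation, not a figure from the paper):

$$
\frac{8\times10^{6}\ \text{px/s}}{640\times352\ \text{px/frame}}
= \frac{8\times10^{6}}{225{,}280} \approx 35.5\ \text{frames/s},
$$

which matches the roughly 35 processed frames per second stated above.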

15.
A variety of recent imaging techniques are able to beat the diffraction limit in fluorescence microscopy by activating and localizing subsets of the fluorescent molecules in the specimen, and repeating this process until all of the molecules have been imaged. In these techniques there is a tradeoff between speed (activating more molecules per imaging cycle) and error rates (activating more molecules risks producing overlapping images that hide information on molecular positions), and so intelligent image processing approaches are needed to identify and reject overlapping images. We introduce here a formalism for defining error rates, derive a general relationship between error rates, image acquisition rates, and the performance characteristics of the image processing algorithms, and show that there is a minimum acquisition time irrespective of algorithm performance. We also consider algorithms that can infer molecular positions from images of overlapping blurs, and derive the dependence of the minimum acquisition time on algorithm performance.
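The speed-versus-error tradeoff can be made concrete with a standard toy calculation (an illustrative assumption, not necessarily the paper's formalism). If activated molecules form a two-dimensional Poisson process of density $\rho$ per imaging cycle, and two blurred images overlap when their centres lie closer than $r$, then the probability that a given molecule's image is corrupted by an overlap is

$$
p_{\text{err}} \;=\; 1 - e^{-\rho \pi r^{2}} \;\approx\; \rho \pi r^{2}
\qquad (\rho \pi r^{2} \ll 1),
$$

so imaging $N$ molecules per unit area requires on the order of $N/\rho \approx N\pi r^{2}/p_{\text{err}}$ cycles: halving the tolerated error rate roughly doubles the acquisition time, which is the kind of relationship the paper derives rigorously.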

16.
Algorithms for large-scale genotyping microarrays   (Cited by: 7; self-citations: 0; citations by others: 7)
MOTIVATION: Analysis of many thousands of single nucleotide polymorphisms (SNPs) across the whole genome is crucial for efficiently mapping disease genes and understanding susceptibility to diseases, drug efficacy and side effects for different populations and individuals. High-density oligonucleotide microarrays make such analysis possible at reasonable cost. Such analysis requires accurate, reliable methods for feature extraction, classification, statistical modeling and filtering. RESULTS: We propose modified partitioning around medoids (PAM) as a classification method for relative allele signals. We use the average silhouette width, separation and other quantities as quality measures for the genotyping classification. We form robust statistical models based on the classification results and use these models to make genotype calls and calculate quality measures for the calls. We apply our algorithms to several different genotyping microarrays. We use reference genotypes, informative Mendelian relationships in families, and leave-one-out cross validation to verify our results. The concordance rates with the single base extension reference genotypes are 99.36% for SNPs on autosomes and 99.64% for SNPs on sex chromosomes. The concordance of the leave-one-out test is over 99.5%, and exceeds 99.9% for AA, AB and BB calls. We also provide a method to determine the gender of a sample based on the heterozygous call rate of SNPs on the X chromosome. See http://www.affymetrix.com for further information. The microarray data will also be available from the Affymetrix web site. AVAILABILITY: The algorithms will be available commercially in the Affymetrix software package.
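The PAM-plus-silhouette idea can be sketched by clustering relative allele signals into three genotype groups and scoring the result with the average silhouette width. KMedoids below comes from the scikit-learn-extra package and is a stand-in for the paper's modified PAM; the signal values are synthetic.

```python
import numpy as np
from sklearn.metrics import silhouette_score
from sklearn_extra.cluster import KMedoids   # from the scikit-learn-extra package

rng = np.random.default_rng(4)
# Relative allele signal values for one SNP across samples: three clusters
# near 0, 0.5 and 1 standing in for BB, AB and AA genotypes.
ras = np.concatenate([rng.normal(m, 0.05, 40) for m in (0.05, 0.5, 0.95)])
X = ras.reshape(-1, 1)

km = KMedoids(n_clusters=3, random_state=0).fit(X)
# Average silhouette width as a quality measure for the genotype clustering.
print(f"average silhouette width: {silhouette_score(X, km.labels_):.2f}")
```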

17.
The early detection of liver lesions plays an extremely important role in preventing, diagnosing, and treating liver diseases. Radiologists mainly rely on Hounsfield Units to locate liver lesions; however, most studies focus on the analysis of unenhanced computed tomography images without considering the attenuation difference in Hounsfield Units before and after contrast injection. The purpose of this work is therefore to develop an improved method for the automatic detection and classification of common liver lesions based on deep learning techniques and the variation of Hounsfield Unit density on computed tomography scans. We design and implement a multi-phase classification model built on Faster Region-based Convolutional Neural Networks (Faster R-CNN), Region-based Fully Convolutional Networks (R-FCN), and Single Shot Detector networks (SSD) with a transfer learning approach. The model considers the variation of Hounsfield Unit density on computed tomography scans across four phases before and after contrast injection (plain, arterial, venous, and delayed). The experiments are conducted on three common types of liver lesions: liver cysts, hemangiomas, and hepatocellular carcinoma. Experimental results show that the proposed method accurately locates and classifies common liver lesions. Liver lesion detection with Hounsfield Units achieves an accuracy of 100%, while lesion classification achieves an accuracy of 95.1%. These promising results show the applicability of the proposed method for automatic liver lesion detection and classification, and the method improves on the accuracy of several preceding methods. It is useful for practical systems that assist doctors in the diagnosis of liver lesions. In further research, we plan to incorporate big data analysis to build real-time processing systems and to expand this study to detect lesions in all parts of the human body, not just the liver.
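The Hounsfield Unit computation that the multi-phase analysis rests on is standard: HU = pixel value × rescale slope + rescale intercept, read from the DICOM tags. A pydicom sketch follows, with hypothetical file names and an illustrative lesion region; the per-ROI attenuation difference between phases is a simplified proxy for what the paper's detectors consume.

```python
import numpy as np
import pydicom

def hounsfield(path):
    """Convert a CT DICOM slice to Hounsfield Units via the rescale tags."""
    ds = pydicom.dcmread(path)
    return (ds.pixel_array.astype(np.float64) * float(ds.RescaleSlope)
            + float(ds.RescaleIntercept))

plain = hounsfield("plain.dcm")          # hypothetical pre-contrast slice
arterial = hounsfield("arterial.dcm")    # hypothetical arterial-phase slice

roi = (slice(200, 240), slice(180, 220))   # illustrative lesion region
enhancement = arterial[roi].mean() - plain[roi].mean()
print(f"mean HU enhancement in ROI: {enhancement:.1f}")
```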

18.
Questions: What is the optimum combination of image dates across a growing season for tree species differentiation in multi-spectral data, and how does species composition affect overstorey canopy density? Location: Monks Wood, Cambridgeshire, eastern England, UK. Methods: Six overstorey tree species were mapped using five Airborne Thematic Mapper images acquired across the 2003 growing season (17 March, 30 May, 16 July, 23 September, 27 October). After image pre-processing, supervised maximum likelihood classification was performed on the individual images and on all two-, three-, four- and five-date combinations. Relationships between tree species composition and canopy density were assessed using regression analyses. Results: The image with the greatest tree species discrimination was acquired on 27 October, when the overstorey species were in different stages of leaf tinting and fall. In this image, tree species were mapped with an overall classification accuracy (OCA) of 71% (kappa 0.63). A similar OCA was achieved from the other four images combined (OCA 72%, kappa 0.64). The highest classification accuracy was achieved by combining three images: 17 March, 16 July, and 27 October. This achieved an OCA of 84% (kappa 0.79), increasing to 88% (kappa 0.85) after a post-classification clump-and-sieve procedure. Canopy height and percentage cover of oak explained 72% of the variance in canopy density. Conclusions: The ability to discriminate and map temperate deciduous tree species in airborne multi-spectral imagery is increased by using time-series data. An autumn image supplemented with an image from each of the green-up and full-leaf phases was optimum. The derived tree species map provides a more powerful ecological tool for determining woodland structural/compositional relationships than field-based measures.
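Maximum likelihood classification with per-class Gaussians is what quadratic discriminant analysis implements, so the date-combination experiment can be mocked with scikit-learn. The band values, labels, band count, and split below are synthetic stand-ins for the Airborne Thematic Mapper data.

```python
import numpy as np
from itertools import combinations
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n_dates, bands, n_px = 5, 3, 600
# Synthetic per-pixel band values for 5 image dates and 6 tree species.
y = rng.integers(0, 6, n_px)
X = rng.normal(y[:, None], 2.0, size=(n_px, n_dates * bands))

best = None
for k in (1, 2, 3):                      # try 1-, 2- and 3-date combinations
    for dates in combinations(range(n_dates), k):
        cols = [d * bands + b for d in dates for b in range(bands)]
        Xtr, Xte, ytr, yte = train_test_split(X[:, cols], y, random_state=0)
        pred = QuadraticDiscriminantAnalysis().fit(Xtr, ytr).predict(Xte)
        kappa = cohen_kappa_score(yte, pred)
        if best is None or kappa > best[0]:
            best = (kappa, dates)
print(f"best kappa {best[0]:.2f} with dates {best[1]}")
```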

19.
Protein fingerprinting is a widely used technique in epidemiological studies for typing bacterial strains. This study reports the development of a computer-based gel analysis system capable of analysing SDS-PAGE whole-cell protein profiles using digital image processing techniques. The software incorporates spatial and frequency domain operators for image enhancement, support for geometric correction of images, and new algorithms for identifying strain tracks and protein bands. The system also provides facilities for correcting imaging defects for inter-gel comparison, similarity analysis, clustering, and pictorial representation of results as a dendrogram. The software is highly interactive and user-friendly, and can produce accurate results for the differentiation of bacterial strains with minimal time overhead.
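The similarity-to-dendrogram step at the end of this pipeline is straightforward to reproduce with SciPy; the band-sharing similarity matrix below is made up for illustration, and UPGMA (average linkage) is an assumed choice since the abstract does not name the clustering method.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform
import matplotlib.pyplot as plt

# Made-up pairwise band-sharing similarities (in [0, 1]) for five bacterial
# strains; real values would come from matched protein band positions.
S = np.array([
    [1.00, 0.85, 0.40, 0.35, 0.30],
    [0.85, 1.00, 0.45, 0.30, 0.25],
    [0.40, 0.45, 1.00, 0.80, 0.75],
    [0.35, 0.30, 0.80, 1.00, 0.70],
    [0.30, 0.25, 0.75, 0.70, 1.00],
])
D = squareform(1.0 - S, checks=False)     # condensed distance matrix
Z = linkage(D, method="average")          # UPGMA, common in typing studies

dendrogram(Z, labels=[f"strain {i + 1}" for i in range(5)])
plt.tight_layout()
plt.show()
```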

20.
Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rates. This creates a bottleneck in the computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit in the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for the segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solution of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.
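The decomposition idea (split the volume into sub-images, process them in parallel, merge the results) can be sketched with Python's multiprocessing. A fixed threshold stands in for the Discrete Region Competition iterations, and a real implementation must also exchange ghost layers at tile borders over the network, which this single-machine sketch omits.

```python
import numpy as np
from multiprocessing import Pool

TILE = 256

def segment_tile(args):
    """Per-tile worker; a fixed threshold stands in for the actual
    Discrete Region Competition iterations."""
    (r, c), tile = args
    return (r, c), tile > 0.5

def split(img):
    """Decompose the image into non-overlapping tiles with their offsets."""
    for r in range(0, img.shape[0], TILE):
        for c in range(0, img.shape[1], TILE):
            yield (r, c), img[r:r + TILE, c:c + TILE]

if __name__ == "__main__":
    img = np.random.default_rng(6).random((1024, 1024))   # toy "large" image
    mask = np.zeros(img.shape, bool)
    with Pool(4) as pool:                # stand-in for a compute cluster
        for (r, c), m in pool.imap_unordered(segment_tile, split(img)):
            mask[r:r + m.shape[0], c:c + m.shape[1]] = m
    print(mask.mean())
```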

