Similar Documents (20 results)
1.
《IRBM》2021,42(5):334-344
Active learning is an effective solution to interactively select a limited number of informative examples and use them to train a learning algorithm that can achieve its optimal performance for specific tasks. It is suitable for medical image applications in which unlabeled data are abundant but manual annotation could be very time-consuming and expensive. However, designing an effective active learning strategy for informative example selection is a challenging task, due to the intrinsic presence of noise in medical images, the large number of images, and the variety of imaging modalities. In this study, a novel low-rank modeling-based multi-label active learning (LRMMAL) method is developed to address these challenges and select informative examples for training a classifier to achieve the optimal performance. The proposed method independently quantifies image noise and integrates it with other measures to guide a pool-based sampling process to determine the most informative examples for training a classifier. In addition, an automatic adaptive cross entropy-based parameter determination scheme is proposed for further optimizing the example sampling strategy. Experimental results on varied medical image datasets and comparisons with other state-of-the-art multi-label active learning methods illustrate the superior performance of the proposed method.
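The pool-based sampling idea above can be sketched numerically: rank unlabeled images by an informativeness score (here, multi-label predictive entropy) discounted by an estimated per-image noise term. The scoring function and the weight `alpha` are illustrative assumptions, not the paper's exact LRMMAL formulation.

```python
import numpy as np

def select_informative(probs, noise, k=2, alpha=0.5):
    """Rank pool examples by multi-label predictive entropy, discounted by
    an estimated per-image noise score, and return the top-k indices."""
    probs = np.asarray(probs, dtype=float)
    eps = 1e-12
    # Multi-label case: sum of per-label binary entropies.
    ent = -(probs * np.log(probs + eps)
            + (1 - probs) * np.log(1 - probs + eps)).sum(axis=1)
    score = ent - alpha * np.asarray(noise, dtype=float)
    return np.argsort(score)[::-1][:k]

# Four pool images, three labels each; the second is maximally uncertain.
probs = [[0.9, 0.1, 0.95],
         [0.5, 0.5, 0.5],
         [0.6, 0.4, 0.7],
         [0.99, 0.01, 0.99]]
noise = [0.0, 0.0, 0.0, 0.0]
picked = select_informative(probs, noise, k=2)
print(picked)  # the two most uncertain images come first
```

With uniform noise scores the selection reduces to plain entropy sampling; a nonzero `noise` vector demotes noisy images even when their predictions are uncertain.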

2.
Using deep learning to estimate strawberry leaf scorch severity often achieves unsatisfactory results when a strawberry leaf image contains complex background information or multi-class diseased leaves and the number of annotated strawberry leaf images is limited. To solve these issues, in this paper we propose a two-stage method combining object detection and few-shot learning to estimate strawberry leaf scorch severity. In the first stage, Faster R-CNN is used to mark the locations of strawberry leaf patches, and each single strawberry leaf patch is clipped from the original strawberry leaf images to compose a new strawberry leaf patch dataset. In the second stage, a Siamese network trained on the new strawberry leaf patch dataset is used to identify the strawberry leaf patches and then estimate the severity of the original strawberry leaf scorch images according to the multi-instance learning concept. Experimental results from the first stage show that Faster R-CNN achieves a higher mAP (94.56%) in strawberry leaf patch detection than other object detection networks. Results from the second stage reveal that the Siamese network achieves an accuracy of 96.67% in the identification of strawberry disease leaf patches, higher than that of the Prototype network. Comprehensive experimental results indicate that, compared with other state-of-the-art models, our proposed two-stage method comprising the Faster R-CNN (VGG16) and Siamese networks achieves the highest estimation accuracy of 96.67%. Moreover, our trained two-stage model achieves an estimation accuracy of 88.83% on a new dataset containing 60 strawberry leaf images taken in the field, which indicates its excellent generalization ability.
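The second stage follows the multi-instance idea: patch-level decisions are aggregated into an image-level severity grade. A minimal sketch, with purely hypothetical grade thresholds (the abstract does not publish its exact mapping):

```python
import numpy as np

def estimate_severity(patch_labels, thresholds=(0.0, 0.3, 0.7)):
    """Multi-instance severity estimate: the image-level grade is derived
    from the fraction of patches classified as diseased.
    Labels: 1 = scorched patch, 0 = healthy patch.
    Grades: 0 = healthy, 1 = mild, 2 = severe (thresholds are illustrative,
    not the paper's actual values)."""
    ratio = float(np.mean(patch_labels))
    grade = int(np.searchsorted(thresholds, ratio, side='right')) - 1
    return ratio, grade

ratio, grade = estimate_severity([1, 0, 0, 0, 1, 0])  # 2 of 6 patches scorched
print(ratio, grade)
```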

3.
The scarcity of training annotation is one of the major challenges for the application of deep learning technology in medical image analysis. Recently, self-supervised learning has provided a powerful solution to alleviate this challenge by extracting useful features from large amounts of unlabeled training data. In this article, we propose a simple and effective self-supervised learning method for leukocyte classification by identifying the different transformations of leukocyte images, without requiring a large batch of negative sampling or specialized architectures. Specifically, a convolutional neural network backbone takes different transformations of a leukocyte image as input for feature extraction. Then, a pretext task of self-supervised transformation recognition on the extracted features is conducted by a classifier, which helps the backbone learn useful representations that generalize well across different leukocyte types and datasets. In the experiments, we systematically study the effect of different transformation compositions on useful leukocyte feature extraction. Compared with five typical baselines of self-supervised image classification, experimental results demonstrate that our method performs better under different evaluation protocols, including linear evaluation, domain transfer, and fine-tuning, which proves the effectiveness of the proposed method.
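The pretext task described above can be sketched as data generation: each unlabeled image is expanded into transformed copies whose transformation index serves as a free label. Rotation is used here as one plausible transformation; the paper studies several compositions.

```python
import numpy as np

def make_pretext_batch(images):
    """Build a rotation-recognition pretext dataset: each image yields four
    rotated copies labeled 0-3 (0, 90, 180, 270 degrees). A backbone trained
    to predict the rotation learns features without any manual labels."""
    xs, ys = [], []
    for img in images:
        for k in range(4):
            xs.append(np.rot90(img, k))
            ys.append(k)
    return np.stack(xs), np.array(ys)

imgs = [np.arange(9).reshape(3, 3)]  # one toy "leukocyte image"
X, y = make_pretext_batch(imgs)
print(X.shape, y.tolist())
```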

4.
Seabirds play an important role in the marine ecosystem and are an indispensable part of the food chain. However, seabird populations have been experiencing a rapid decline due to various factors including climate change, fisheries, and invasive non-native species. To better protect seabirds, the first step is to accurately monitor them, and automatic classification of seabirds would significantly speed up the monitoring process. In this paper, we propose a dual transfer learning framework for improved seabird image classification based on spatial pyramid pooling. Specifically, a dual transfer learning framework is used to capture various patterns to improve the discriminability of the proposed model. Both InceptionV3 and DenseNet201 are used as backbones, whose outputs are concatenated using a spatial pyramid pooling (SPP) layer; SPP is used to handle images of different sizes. Next, two types of pooling, global average-pooling (GAP) and global max-pooling (GMP), are applied to the output of the SPP layer, and the results of GAP and GMP are linearly added up. Our method takes both InceptionV3 and DenseNet201 as feature extractors and is trained offline in an end-to-end style. The proposed dual transfer learning framework-based seabird image classification method reached an accuracy, precision, recall, and F1-score of 95.11%, 95.33%, 95.11%, and 95.13%, respectively, on the 10-class seabird image dataset.
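The SPP layer that lets the framework accept feature maps of different spatial sizes can be sketched as follows; the pyramid levels chosen here are illustrative, not necessarily the paper's configuration.

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Spatial pyramid pooling: max-pool a 2D feature map over 1x1, 2x2 and
    4x4 grids and concatenate the results, producing a fixed-length vector
    regardless of the input's spatial size."""
    h, w = fmap.shape
    out = []
    for n in levels:
        hs = np.array_split(np.arange(h), n)  # row indices per grid cell
        ws = np.array_split(np.arange(w), n)  # column indices per grid cell
        for hi in hs:
            for wi in ws:
                out.append(fmap[np.ix_(hi, wi)].max())
    return np.array(out)

v1 = spatial_pyramid_pool(np.random.rand(13, 9))
v2 = spatial_pyramid_pool(np.random.rand(20, 31))
print(len(v1), len(v2))  # same length: 1 + 4 + 16 = 21
```

Because the output length depends only on the pyramid levels, the pooled vectors from differently sized backbone outputs can be concatenated and fed to a fixed classifier head.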

5.

Background

Images embedded in biomedical publications carry rich information that often concisely summarize key hypotheses adopted, methods employed, or results obtained in a published study. Therefore, they offer valuable clues for understanding main content in a biomedical publication. Prior studies have pointed out the potential of mining images embedded in biomedical publications for automatically understanding and retrieving such images' associated source documents. Within the broad area of biomedical image processing, categorizing biomedical images is a fundamental step for building many advanced image analysis, retrieval, and mining applications. Similar to any automatic categorization effort, discriminative image features can provide the most crucial aid in the process.

Method

We observe that many images embedded in biomedical publications carry versatile annotation text. Based on the locations of and the spatial relationships between these text elements in an image, we thus propose some novel image features for image categorization purpose, which quantitatively characterize the spatial positions and distributions of text elements inside a biomedical image. We further adopt a sparse coding representation (SCR) based technique to categorize images embedded in biomedical publications by leveraging our newly proposed image features.

Results

We randomly selected 990 images of the JPG format for use in our experiments, where 310 images were used as training samples and the rest were used as testing cases. We first segmented the 310 sample images following our proposed procedure. This step produced a total of 1035 sub-images. We then manually labeled all these sub-images according to the two-level hierarchical image taxonomy proposed by [1]. Among our annotation results, 316 are microscopy images, 126 are gel electrophoresis images, 135 are line charts, 156 are bar charts, 52 are spot charts, 25 are tables, 70 are flow charts, and the remaining 155 images are of the type "others". A series of experimental results were obtained. First, the categorization result for each image type is presented, together with performance indexes such as precision, recall, and F-score. Second, we contrast the categorization performance of conventional image features with that of our proposed novel features. Third, we conduct an accuracy comparison between the support vector machine classification method and our proposed sparse representation classification method. Finally, our proposed approach is compared with three peer classification methods, and the experimental results verify its markedly improved performance.

Conclusions

Compared with conventional image features that do not exploit characteristics regarding text positions and distributions inside images embedded in biomedical publications, our proposed image features coupled with the sparse coding representation (SCR) model exhibit superior performance for classifying biomedical images, as demonstrated in our comparative benchmark study.
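The kind of spatial text-layout feature proposed in the Method section can be sketched as follows, assuming text-element bounding boxes have already been detected inside a figure. The specific statistics (normalized centroid, spread, coverage) are illustrative stand-ins for the paper's exact feature set.

```python
import numpy as np

def text_layout_features(boxes, img_w, img_h):
    """Quantify the spatial positions and distribution of text elements in a
    figure. `boxes` holds (x, y, w, h) bounding boxes of detected text.
    Returns [mean cx, mean cy, std cx, std cy, area coverage], all
    normalized to the image size."""
    b = np.asarray(boxes, dtype=float)
    cx = (b[:, 0] + b[:, 2] / 2) / img_w   # normalized box centers, x
    cy = (b[:, 1] + b[:, 3] / 2) / img_h   # normalized box centers, y
    coverage = (b[:, 2] * b[:, 3]).sum() / (img_w * img_h)
    return np.array([cx.mean(), cy.mean(), cx.std(), cy.std(), coverage])

# Two text labels in opposite corners of a 100x100 figure.
f = text_layout_features([(0, 0, 10, 5), (90, 95, 10, 5)], 100, 100)
print(f.round(3))
```

A chart with axis labels clustered along the margins and a micrograph with a single scale-bar caption would produce very different vectors, which is what makes such features discriminative for figure-type categorization.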

6.
The goal of image chromatic adaptation is to remove the effect of illumination and to obtain color data that precisely reflects the physical contents of the scene. We present in this paper an approach to image chromatic adaptation using Neural Networks (NN), with application to detecting and adapting human skin color. The NN is trained on randomly chosen color images containing human subjects under various illuminating conditions, thereby enabling the model to dynamically adapt to changing illumination conditions. The proposed network directly predicts the illuminant estimate in the image so as to adapt to human skin color. A comparison of our method with the Gray World, White Patch, and NN-on-White-Patch methods for skin color stabilization is presented. The skin regions in the NN-stabilized images are successfully detected using a computationally inexpensive thresholding operation. We also present results on detecting skin regions in a data set of test images. The results are promising and suggest a new approach for adapting human skin color using neural networks.

7.
Plants, a major natural source of oxygen, are among the most important resources for every species in the world, and their proper identification matters for many fields. The observation of leaf characteristics is a popular identification method, as leaves are easily available for examination. Researchers are increasingly applying image processing techniques for the identification of plants based on leaf images. In this paper, we propose a leaf image classification model, called BLeafNet, for plant identification, where the concept of deep learning is combined with Bonferroni fusion learning. Initially, we design five classification models using the ResNet-50 architecture, where five different inputs are separately used in the models. The inputs are five variants of the leaf images: grayscale, RGB, and the three individual channels of RGB - red, green, and blue. To fuse the five ResNet-50 outputs, we use the Bonferroni mean operator, as it expresses better connectivity among the confidence scores and obtains better results than the individual models. We also propose a two-tier training method for properly training the end-to-end model. To evaluate the proposed model, we use the MalayaKew dataset, collected at the Royal Botanic Gardens, Kew, England, which is a very challenging dataset since many leaves from different species have a very similar appearance. In addition, the proposed method is evaluated on the Leafsnap and Flavia datasets. The results obtained on these datasets confirm the superiority of the model, as it outperforms many state-of-the-art models.

8.
《IRBM》2020,41(2):106-114
Objectives

Breast cancer (BC) is one of the most commonly reported health issues worldwide, especially in females. Early detection and diagnosis of BC can greatly reduce mortality rates. Samples obtained with different imaging methods such as mammography, computerized tomography, magnetic resonance, ultrasound, and biopsy are used in the diagnosis of BC. Histopathological images obtained from a biopsy contain vital information about the stage of the BC. Computer-aided systems are important tools to assist pathologists in the early detection of BC.

Material and methods

In the current study, the use of the gray-level co-occurrence matrix (GLCM) of Shearlet Transform (ST) coefficients was first scrutinized for textural features. ST is an advanced decomposition-based method that can analyze images in various directions and is sensitive to edge singularities. These properties make ST more robust than other decomposition methods such as Fourier and wavelet. Color channel histogram features were also utilized for a second level of evaluation in the diagnosis of the BC stage; such features are among the most important building blocks that pathologists consider in the course of grading histopathological images. Then, by combining these two feature sets, the classification results were re-assessed using a Support Vector Machine (SVM) as the classifier.

Results

The assessments were performed on the BreaKHis dataset containing benign and malignant histopathological samples. The average accuracy scores were 98.2%, 97.2%, 97.8%, and 97.3% in the sub-databases with 40×, 100×, 200×, and 400× magnification factors, respectively.

Conclusions

The obtained results showed that the proposed method is quite efficient in histopathological image classification. Despite the relative simplicity of the approach, the obtained results were far superior to previously reported results.
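A gray-level co-occurrence matrix, the textural building block named above, can be computed in a few lines. In the paper the GLCM is taken over Shearlet coefficients, whereas this toy example uses a tiny 4-level gray image; the offset and level count are illustrative.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for one offset: count how often gray
    level i at (r, c) co-occurs with gray level j at (r+dy, c+dx), then
    normalize the counts to joint probabilities."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for r in range(h - dy):
        for c in range(w - dx):
            m[img[r, c], img[r + dy, c + dx]] += 1
    return m / m.sum()

img = np.array([[0, 0, 1],
                [1, 2, 3],
                [3, 2, 0]])
P = glcm(img, dx=1, dy=0)
# A standard Haralick statistic derived from the matrix:
contrast = sum(P[i, j] * (i - j) ** 2 for i in range(4) for j in range(4))
print(P.sum(), contrast)
```

Statistics such as contrast, energy, and homogeneity computed from `P` (over several offsets and directions) form the textural feature vector fed to the SVM.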

9.
The benchmark method for the evaluation of breast cancers involves microscopic testing of a hematoxylin and eosin (H&E)-stained tissue biopsy. Resurgery is required in 20% to 30% of cases because of incomplete excision of malignant tissues; therefore, a more accurate method is required to detect the cancer margin and avoid the risk of recurrence. In recent years, convolutional neural networks (CNNs) have achieved excellent performance in the field of medical image diagnosis: they automatically extract features from images and classify them. In the proposed study, we apply a pretrained Inception-v3 CNN with reverse active learning for the classification of healthy and malignant breast tissue using optical coherence tomography (OCT) images. The proposed method attained a sensitivity, specificity, and accuracy of 90.2%, 91.7%, and 90%, respectively, on testing datasets collected from 48 patients (22 normal fibro-adipose tissues and 26 invasive ductal carcinoma tissues). The trained network is used for breast cancer margin assessment to predict tumors with negative margins, and the network output is correlated with the corresponding histology image. Our results lay the foundation for using the proposed method to perform automatic intraoperative identification of breast cancer margins in real time and to guide core needle biopsies.

10.
Improving gene quantification by adjustable spot-image restoration
MOTIVATION: One of the major factors that complicate the task of microarray image analysis is that microarray images are distorted by various types of noise. In this study a robust framework is proposed, designed to take into account the effect of noise in microarray images in order to assist the demanding task of microarray image analysis. The proposed framework, incorporates in the microarray image processing pipeline a novel combination of spot adjustable image analysis and processing techniques and consists of the following stages: (1) gridding for facilitating spot identification, (2) clustering (unsupervised discrimination between spot and background pixels) applied to spot image for automatic local noise assessment, (3) modeling of local image restoration process for spot image conditioning (adjustable wiener restoration using an empirically determined degradation function), (4) automatic spot segmentation employing seeded-region-growing, (5) intensity extraction and (6) assessment of the reproducibility (real data) and the validity (simulated data) of the extracted gene expression levels. RESULTS: Both simulated and real microarray images were employed in order to assess the performance of the proposed framework against well-established methods implemented in publicly available software packages (Scanalyze and SPOT). Regarding simulated images, the novel combination of techniques, introduced in the proposed framework, rendered the detection of spot areas and the extraction of spot intensities more accurate. Furthermore, on real images the proposed framework proved of better stability across replicates. Results indicate that the proposed framework improves spots' segmentation and, consequently, quantification of gene expression levels. AVAILABILITY: All algorithms were implemented in Matlab (The Mathworks, Inc., Natick, MA, USA) environment. 
The codes that implement microarray gridding, adaptive spot restoration and segmentation/intensity extraction are available upon request. Supplementary results and the simulated microarray images used in this study are available for download from: ftp://users:bioinformatics@mipa.med.upatras.gr. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
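Stage (3) of the pipeline, adjustable Wiener restoration with a known degradation function, can be sketched in the frequency domain. The uniform 3x3 PSF and the fixed noise-to-signal ratio `nsr` below are illustrative assumptions; the actual framework determines the degradation function empirically and assesses noise locally per spot.

```python
import numpy as np

def wiener_restore(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener restoration: given the degradation function
    (PSF) and a noise-to-signal ratio, apply the filter
    W = conj(H) / (|H|^2 + nsr) to the blurred image's spectrum."""
    H = np.fft.fft2(psf, s=blurred.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

spot = np.zeros((16, 16))
spot[6:10, 6:10] = 1.0                    # toy "spot" image
psf = np.ones((3, 3)) / 9.0               # uniform blur kernel
# Simulate degradation by circular convolution with the PSF.
blurred = np.real(np.fft.ifft2(np.fft.fft2(spot)
                               * np.fft.fft2(psf, s=spot.shape)))
restored = wiener_restore(blurred, psf)
err_blur = np.abs(blurred - spot).mean()
err_rest = np.abs(restored - spot).mean()
print(err_rest < err_blur)  # restoration brings the image closer to the original
```

Raising `nsr` makes the filter more conservative (closer to a smoothing filter), which is the "adjustable" knob that local noise assessment would tune per spot.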

11.
Advances in reporters for gene expression have made it possible to document and quantify expression patterns in 2D-4D. In contrast to microarrays, which provide data for many genes but averaged and/or at low resolution, images reveal the high spatial dynamics of gene expression. Developing computational methods to compare, annotate, and model gene expression based on images is imperative, considering that available data are rapidly increasing. We have developed a sparse Bayesian factor analysis model in which the observed expression diversity among a large set of high-dimensional images is modeled by a small number of hidden common factors. We apply this approach to embryonic expression patterns from a Drosophila RNA in situ image database, and show that the automatically inferred factors provide a meaningful decomposition and represent common co-regulation or biological functions. The low-dimensional set of factor mixing weights is further used as features by a classifier to annotate expression patterns with functional categories. On human-curated annotations, our sparse approach reaches similar or better classification of expression patterns at different developmental stages, when compared to other automatic image annotation methods using thousands of hard-to-interpret features. Our study therefore outlines a general framework for large microscopy data sets, in which both the generative model itself, as well as its application to analysis tasks such as automated annotation, can provide insight into biological questions.

12.
This paper presents a computational model that addresses one prominent psychological behavior of human beings in recognizing images. The central idea of our method is that differences among multiple images help visual recognition. Generally speaking, we propose a statistical framework to distinguish which image features capture sufficient category information and which are common features shared across multiple classes. Mathematically, the whole formulation is cast as a generative probabilistic model, and a discriminative functionality is incorporated into the model to interpret the differences among all kinds of images. The whole Bayesian formulation is solved in an Expectation-Maximization paradigm. After finding the discriminative patterns among different images, we design an image categorization algorithm to interpret how these differences help visual recognition within the bag-of-features framework. The proposed method is verified on a variety of image categorization tasks including outdoor scene images, indoor scene images, as well as airborne SAR images from different perspectives.

13.
BACKGROUND: Epiluminescence microscopy (ELM) is a noninvasive clinical tool recently developed for the diagnosis of pigmented skin lesions (PSLs), with the aim of improving melanoma screening strategies. However, the complexity of the ELM grading protocol means that considerable expertise is required for differential diagnosis. In this paper we propose a computer-based tool able to screen ELM images of PSLs in order to aid clinicians in the detection of lesion patterns useful for differential diagnosis. METHODS: The method proposed is based on the supervised classification of pixels of digitized ELM images, and leads to the construction of classes of pixels used for image segmentation. This process has two major phases, i.e., a learning phase, where several hundred pixels are used in order to train and validate a classification model, and an application step, which consists of a massive classification of billions of pixels (i.e., the full image) by means of the rules obtained in the first phase. RESULTS: Our results show that the proposed method is suitable for lesion-from-background extraction, for complete image segmentation into several typical diagnostic patterns, and for artifact rejection. Hence, our prototype has the potential to assist in distinguishing lesion patterns which are associated with diagnostic information such as diffuse pigmentation, dark globules (black dots and brown globules), and the gray-blue veil. CONCLUSIONS: The system proposed in this paper can be considered as a tool to assist in PSL diagnosis.
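The supervised pixel-classification scheme above can be sketched with per-class Gaussian models and the minimum-error-rate (maximum posterior) rule. This univariate toy, one intensity value per pixel and two classes, stands in for the full color-feature classifier.

```python
import numpy as np

def train_gaussian(pixels_by_class):
    """Fit a Gaussian (mean, std) per class from labeled training pixels
    and record class priors from the sample counts."""
    stats = []
    for x in pixels_by_class:
        x = np.asarray(x, dtype=float)
        stats.append((x.mean(), x.std() + 1e-9, len(x)))
    n = sum(s[2] for s in stats)
    return [(m, s, k / n) for m, s, k in stats]

def classify(model, x):
    """Minimum-error-rate rule: pick the class with the largest posterior,
    i.e. the largest log prior + Gaussian log-likelihood."""
    scores = [np.log(p) - np.log(s) - 0.5 * ((x - m) / s) ** 2
              for m, s, p in model]
    return int(np.argmax(scores))

# Class 0: dark lesion pixels; class 1: bright background pixels.
model = train_gaussian([[10, 12, 11, 9], [200, 210, 190, 205]])
print(classify(model, 15), classify(model, 180))
```

In the learning phase the model is fit on a few hundred labeled pixels; in the application phase `classify` is applied to every pixel of the full image to produce the segmentation.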

14.
15.
In a wide range of biological studies, it is highly desirable to visualize and analyze three-dimensional (3D) microscopic images. In this primer, we first introduce several major methods for visualizing typical 3D images and related multi-scale, multi-time-point, multi-color data sets. Then, we discuss three key categories of image analysis tasks, namely segmentation, registration, and annotation. We demonstrate how to pipeline these visualization and analysis modules using examples of profiling the single-cell gene-expression of C. elegans and constructing a map of stereotyped neurite tracts in a fruit fly brain.

16.
Feulgen-stained tissue sections of 187 invasive ductal carcinomas (94 with lymph node metastases; mean follow-up: 44 months) were studied using computer assisted image cytometry. Based on survival time, the prognostic significance of nuclear image analysis was compared with the results using conventional histopathological grading according to Bloom and Richardson, as well as with image cytometric DNA measurements. The histopathological grading has the disadvantage of poor interobserver reproducibility (71.1%). Despite statistically significant differences between the actuarial survival curves of grade 1 and grade 3 patients, the prognostic significance of the conventional grading method for individual patients seems to be low and the number of grade 2 cases (42.8%) is large. The quantitative morphological method for analyzing nuclear images gives more reproducible results. Compared to histopathological grading, the predictive values for good or poor prognosis are clearly higher and the number of cases with uncertain prognosis is significantly smaller (20.9%). DNA ploidy measurements also make it possible to distinguish statistically significant differences between favorable and unfavorable prognoses with respect to over-all survival time. However, the classification accuracy based on the best single parameter (DNA-histogram type according to Auer) is 70.2% compared with 78.9% for nuclear image analysis.

17.
OBJECTIVE: To segment and quantify microvessels in renal tumor angiogenesis based on a color image analysis method and to improve the accuracy and reproducibility of quantifying microvessel density. STUDY DESIGN: The segmentation task was based on a supervised learning scheme. First, 12 color features (RGB, HSI, I1I2I3 and L*a*b*) were extracted from a training set. The feature selection procedure selected I2L*S features as the best color feature vector. Then we segmented microvessels using the discriminant function made using the minimum error rate classification rule of Bayesian decision theory. In the quantification step, after applying a connected component-labeling algorithm, microvessels with discontinuities were connected and touching microvessels separated. We tested the proposed method on 23 images. RESULTS: The results were evaluated by comparing them with manual quantification of the same images. The comparison revealed that our computerized microvessel counting correlated highly with manual counting by an expert (r = 0.95754). The association between the number of microvessels after the initial segmentation and manual quantification was also assessed using Pearson's correlation coefficient (r = 0.71187). The results indicate that our method is better than conventional computerized image analysis methods. CONCLUSION: Our method correlated highly with quantification by an expert and could become a way to improve the accuracy, feasibility and reproducibility of quantifying microvessel density. We anticipate that it will become a useful diagnostic tool for angiogenesis studies.
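The connected-component-labeling step used for counting can be sketched with a simple flood fill. Real microvessel quantification additionally reconnects discontinuous vessels and separates touching ones, which this minimal version omits.

```python
import numpy as np

def count_components(mask):
    """Count connected regions (4-connectivity) in a binary segmentation
    mask using an iterative flood fill."""
    mask = np.asarray(mask, dtype=bool).copy()
    h, w = mask.shape
    count = 0
    for r in range(h):
        for c in range(w):
            if mask[r, c]:
                count += 1
                stack = [(r, c)]
                while stack:  # flood-fill this component
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y, x]:
                        mask[y, x] = False
                        stack += [(y + 1, x), (y - 1, x),
                                  (y, x + 1), (y, x - 1)]
    return count

mask = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [1, 0, 0, 1]]
n = count_components(np.array(mask))
print(n)  # 3 separate "vessels"
```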

18.
For modern biology, precise genome annotations are of prime importance, as they allow the accurate definition of genic regions. We employ state-of-the-art machine learning methods to assay and improve the accuracy of the genome annotation of the nematode Caenorhabditis elegans. The proposed machine learning system is trained to recognize exons and introns on the unspliced mRNA, utilizing recent advances in support vector machines and label sequence learning. In 87% (coding and untranslated regions) and 95% (coding regions only) of all genes tested in several out-of-sample evaluations, our method correctly identified all exons and introns. Notably, only 37% and 50%, respectively, of the presently unconfirmed genes in the C. elegans genome annotation agree with our predictions, thus we hypothesize that a sizable fraction of those genes are not correctly annotated. A retrospective evaluation of the Wormbase WS120 annotation [] of C. elegans reveals that splice form predictions on unconfirmed genes in WS120 are inaccurate in about 18% of the considered cases, while our predictions deviate from the truth only in 10%-13%. We experimentally analyzed 20 controversial genes on which our system and the annotation disagree, confirming the superiority of our predictions. While our method correctly predicted 75% of those cases, the standard annotation was never completely correct. The accuracy of our system is further corroborated by a comparison with two other recently proposed systems that can be used for splice form prediction: SNAP and ExonHunter. We conclude that the genome annotation of C. elegans and other organisms can be greatly enhanced using modern machine learning technology.

19.
Protein function prediction with high-throughput data
Zhao XM  Chen L  Aihara K 《Amino acids》2008,35(3):517-530

20.
OBJECTIVE: To introduce a new mathematical method based on the principles of fractal geometry analysis that permits more realistic quantification of some of the physical (morphologic) aspects of irregular bodies appearing under microscopy. STUDY DESIGN: The principles of the method were tested on microscopic images of irregular collagen deposition in liver tissue. The method uses an ad hoc rectified meter implemented in a computer-assisted planar image analysis system that has been adapted to give metric measures of irregular outlines and surfaces that can be used to produce an index capable of quantifying the typical wrinkledness of biologic objects. Prototypical example measures of liver fibrosis were made on biopsy specimens showing chronic hepatitis C virus-related disease. Measurements were also made of the microscopic images of the abnormal deposition of lipid droplets in hepatocytes, a case of amyloid deposition in an osteoarthromuscular structure and a cytologic specimen of human dendritic cells. RESULTS: The proposed computer-aided method permits rapid measurements of the image of a whole biopsy section digitized at high magnification. The snapshot measurement of liver fibrosis deposition offered by a biopsy pattern is a valid means of more rigorously identifying the staging of the process. CONCLUSION: This method can measure liver fibrosis during chronic liver disease as well as any other irregular biologic structure that cannot be correctly quantified using traditional Euclidean-based metric methodologies.
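Fractal measurement of irregular biologic structures is commonly done by box counting. The sketch below estimates a fractal dimension from occupied-box counts at several scales; it is a generic illustration of fractal-geometry quantification, not the paper's ad hoc rectified-meter method.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8)):
    """Box-counting estimate of fractal dimension: count occupied boxes
    N(s) at several box sizes s and fit log N(s) ~ -D log s."""
    mask = np.asarray(mask, dtype=bool)
    counts = []
    for s in sizes:
        h, w = mask.shape
        # Group pixels into s-by-s boxes; any occupied pixel marks the box.
        boxes = mask[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

filled = np.ones((64, 64), dtype=bool)  # a filled square has dimension 2
D_est = box_counting_dimension(filled)
print(round(D_est, 3))  # 2.0
```

For an irregular collagen-deposition mask, the estimated dimension falls strictly between 1 and 2 and grows with the wrinkledness of the outline, which is the property such an index exploits.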
