Similar documents
20 similar documents were retrieved (search time: 218 ms).
1.
OBJECTIVE: To investigate the applicability of different texture features in automatic discrimination of microscopic views from benign common nevi and malignant melanoma lesions. STUDY DESIGN: In tissue counter analysis (TCA) the images are dissected into square elements used for feature calculation. The first class of features is based on the histogram, the co-occurrence matrix and the texture moments. The second class is derived from spectral properties of the Daubechies 4 wavelet and the Fourier transform. Square elements from images of a training set are classified by Classification and Regression Trees (CART) analysis. RESULTS: Features from the histogram and the co-occurrence matrix enable correct classification of 94.7% of nevi elements and 92.6% of melanoma elements in the training set. Classification results are applied to individual test set cases. Discriminant analysis based on the percentage of "malignant elements" showed correct classification of all nevi cases and 95% of melanoma cases. Features derived from the wavelet and Fourier spectrum showed correct results for 88.8% and 79.3% of nevi and 85.6% and 81.5% of melanoma elements, respectively. CONCLUSION: TCA is a potential diagnostic tool in automatic analysis of melanocytic skin tumors. Histogram and co-occurrence matrix features are superior to the wavelet and Fourier features.
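A minimal sketch of this kind of tile-based pipeline, assuming a grey-level image: each square element gets histogram and grey-level co-occurrence (GLCM) features, and the elements are classified with a CART-style decision tree. Function names, tile size and feature choices are illustrative, not the authors' implementation.

```python
# Illustrative sketch only (not the paper's code): dissect a grey-level image into
# square elements, compute histogram + co-occurrence features per element, and
# classify the elements with a CART-style decision tree.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.tree import DecisionTreeClassifier

def element_features(image, tile=32, levels=64):
    """One feature vector (histogram + GLCM statistics) per square element."""
    img = (image.astype(float) / image.max() * (levels - 1)).astype(np.uint8)
    feats = []
    for r in range(0, img.shape[0] - tile + 1, tile):
        for c in range(0, img.shape[1] - tile + 1, tile):
            patch = img[r:r + tile, c:c + tile]
            hist = np.histogram(patch, bins=16, range=(0, levels))[0] / patch.size
            glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                                levels=levels, symmetric=True, normed=True)
            stats = [graycoprops(glcm, p).mean()
                     for p in ("contrast", "homogeneity", "energy", "correlation")]
            feats.append(np.concatenate([hist, stats]))
    return np.asarray(feats)

# Hypothetical usage on labelled training images (0 = nevus element, 1 = melanoma):
# X = np.vstack([element_features(im) for im in train_images]); y = ...
# tree = DecisionTreeClassifier().fit(X, y)   # CART-style classifier
```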

2.

Background

Images embedded in biomedical publications carry rich information that often concisely summarize key hypotheses adopted, methods employed, or results obtained in a published study. Therefore, they offer valuable clues for understanding main content in a biomedical publication. Prior studies have pointed out the potential of mining images embedded in biomedical publications for automatically understanding and retrieving such images' associated source documents. Within the broad area of biomedical image processing, categorizing biomedical images is a fundamental step for building many advanced image analysis, retrieval, and mining applications. Similar to any automatic categorization effort, discriminative image features can provide the most crucial aid in the process.

Method

We observe that many images embedded in biomedical publications carry versatile annotation text. Based on the locations of, and the spatial relationships between, these text elements in an image, we propose novel image features for image categorization purposes, which quantitatively characterize the spatial positions and distributions of text elements inside a biomedical image. We further adopt a sparse coding representation (SCR) based technique to categorize images embedded in biomedical publications by leveraging our newly proposed image features.

Results

We randomly selected 990 images in JPG format for our experiments, of which 310 were used as training samples and the rest as test cases. We first segmented the 310 sample images following our proposed procedure, which produced a total of 1035 sub-images. We then manually labeled all of these sub-images according to the two-level hierarchical image taxonomy proposed by [1]. Among our annotation results, 316 are microscopy images, 126 are gel electrophoresis images, 135 are line charts, 156 are bar charts, 52 are spot charts, 25 are tables, 70 are flow charts, and the remaining 155 images are of the type "others". A series of experimental results were obtained. First, the image categorization results are presented, together with performance indexes such as precision, recall and F-score. Second, we show how conventional image features and our proposed novel features lead to different categorization performance. Third, we compare the accuracy of a support vector machine classifier with that of our proposed sparse representation classifier. Finally, our approach is compared with three peer classification methods, and the experimental results confirm its markedly improved performance.

Conclusions

Compared with conventional image features that do not exploit the positions and distributions of text elements inside images embedded in biomedical publications, our proposed image features coupled with the SCR-based model exhibit superior performance for classifying biomedical images, as demonstrated in our comparative benchmark study.
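For readers unfamiliar with sparse-representation classification, here is a generic sketch, not the authors' SCR implementation: a test feature vector is L1-coded over a dictionary of training vectors and assigned to the class with the smallest reconstruction residual. The `alpha` value and the assumption that features are precomputed are illustrative.

```python
# Generic sparse-representation classification sketch (a stand-in, not the authors'
# SCR code): L1-code the test vector over the training vectors, then pick the class
# whose training columns reconstruct it best.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import normalize

def src_predict(X_train, y_train, x_test, alpha=0.01):
    D = normalize(X_train, axis=1).T        # dictionary: one unit-norm column per training image
    code = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(D, x_test).coef_
    residuals = {c: np.linalg.norm(x_test - D[:, y_train == c] @ code[y_train == c])
                 for c in np.unique(y_train)}
    return min(residuals, key=residuals.get)  # class with the best sparse reconstruction
```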

3.
4.
This paper presents a novel system to compute the automated classification of wireless capsule endoscope images. Classification is achieved by a classical statistical approach, but novel features are extracted from the wavelet domain and they contain both color and texture information. First, a shift-invariant discrete wavelet transform (SIDWT) is computed to ensure that the multiresolution feature extraction scheme is robust to shifts. The SIDWT expands the signal (in a shift-invariant way) over the basis functions which maximize information. Then cross-co-occurrence matrices of wavelet subbands are calculated and used to extract both texture and color information. Canonical discriminant analysis is utilized to reduce the feature space, and then a simple 1D classifier with the leave-one-out method is used to automatically classify normal and abnormal small bowel images. A classification rate of 94.7% is achieved with a database of 75 images (41 normal and 34 abnormal cases). The high success rate could be attributed to the robust feature set, which combines multiresolution color and texture features with shift, scale and semi-rotational invariance. This result is very promising and the method could be used in a computer-aided diagnosis system or a content-based image retrieval scheme.
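A rough sketch of shift-invariant wavelet texture features in this spirit, simplified to a single grey-level channel and to plain (not cross-) co-occurrence statistics; PyWavelets' stationary wavelet transform stands in for the SIDWT and assumes image sides divisible by 2**level.

```python
# Simplified stand-in for SIDWT-based texture features: stationary wavelet
# decomposition, then co-occurrence statistics on each quantized detail subband.
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

def swt_texture_features(gray_image, wavelet="db4", level=2, levels=32):
    feats = []
    for _, (cH, cV, cD) in pywt.swt2(gray_image.astype(float), wavelet, level=level):
        for band in (cH, cV, cD):
            # quantize coefficients into `levels` grey levels for the co-occurrence matrix
            q = np.floor((band - band.min()) / (np.ptp(band) + 1e-12) * (levels - 1)).astype(np.uint8)
            glcm = graycomatrix(q, distances=[1], angles=[0],
                                levels=levels, symmetric=True, normed=True)
            feats += [graycoprops(glcm, p)[0, 0] for p in ("contrast", "energy")]
    return np.asarray(feats)
```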

5.
This paper presents a method for direct identification of fungal species solely by means of digital image analysis of colonies as seen after growth on a standard medium. The method described is completely automated and hence objective once digital images of the reference fungi have been established. Using a digital image it is possible to extract precise information from the surface of the fungal colony, including the color distribution, colony dimensions and texture measurements. For fungal identification this is normally done by visual observation, which often results in very subjective data recording. Isolates of nine different species of the genus Penicillium were selected for the purpose. After incubation for 7 days, the fungal colonies are digitized using a very accurate digital camera. Prior to the image analysis each image is corrected for self-illumination, thereby yielding a set of images with directly comparable illumination. A Windows application has been developed to locate the position and size of up to three colonies in the digitized image. Using the estimated positions and sizes of the colonies, a number of relevant features can be extracted for further analysis. The method used to determine the positions of the colonies is described, as is the feature selection. The texture measurements of colonies of the nine species were analyzed, and a clustering of the data into the correct species was confirmed. This indicates that it is indeed possible to identify a given colony merely by macromorphological features. A classifier based on the normal distribution, trained on measurements of 151 colonies incubated on yeast extract sucrose agar (YES), was used to discriminate between the species. This resulted in a correct classification rate of 100% when used on the training set and 96% using cross-validation. The same methods applied to 194 colonies incubated on Czapek yeast extract agar (CYA) resulted in a correct classification rate of 98% on the training set and 71% using cross-validation.
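A minimal stand-in for a classifier "based on the normal distribution" with cross-validation: a diagonal-covariance Gaussian model per species. The feature matrix below is a random placeholder for the real colony measurements.

```python
# Stand-in sketch: Gaussian (normal-distribution) classifier with cross-validation.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(151, 20))            # placeholder: 151 colonies x 20 features
y = rng.integers(0, 9, size=151)          # placeholder labels for 9 Penicillium species

clf = GaussianNB().fit(X, y)              # one Gaussian per feature and species
print("training accuracy:", clf.score(X, y))
print("cross-validated accuracy:", cross_val_score(GaussianNB(), X, y, cv=5).mean())
```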

6.
The present paper proposes the development of a new approach for automated diagnosis, based on classification of magnetic resonance (MR) human brain images. Wavelet-transform-based methods are a well-known tool for extracting frequency-space information from non-stationary signals. The proposed method employs an improved version of the orthogonal discrete wavelet transform (DWT) for feature extraction, called the Slantlet transform, which provides improved time localization while achieving shorter supports for the filters. For each two-dimensional MR image, we compute its intensity histogram and apply the Slantlet transform to this histogram signal. A feature vector is then created for each image from the magnitudes of the Slantlet transform outputs at six spatial positions, chosen according to a specific logic. The features thus derived are used to train a neural network based binary classifier, which can automatically infer whether the image is that of a normal brain or a pathological brain suffering from Alzheimer's disease. An excellent classification rate of 100% could be achieved for a set of benchmark MR brain images, which was significantly better than the results reported in a very recent research work employing wavelet transforms, neural networks and support vector machines.
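A sketch of the histogram-based pipeline with an ordinary discrete wavelet transform substituted for the Slantlet transform (PyWavelets does not provide the latter); the number of retained positions mirrors the "six positions" above, but the selection logic here is purely illustrative.

```python
# Sketch only: DWT of the intensity histogram as a stand-in for the Slantlet
# transform, six coefficient magnitudes as features, and a small MLP classifier.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def histogram_wavelet_features(image, n_bins=256, n_keep=6):
    hist, _ = np.histogram(image, bins=n_bins, range=(0, 256))
    flat = np.concatenate(pywt.wavedec(hist.astype(float), "db4", level=3))
    idx = np.linspace(0, flat.size - 1, n_keep).astype(int)   # six fixed positions (illustrative)
    return np.abs(flat[idx])

# Hypothetical binary classifier (normal vs. pathological brain):
# X = np.vstack([histogram_wavelet_features(im) for im in train_images])
# clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(X, train_labels)
```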

7.
Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis of brain magnetic resonance (MR) images. For many human experts, manual segmentation is a difficult and time consuming task, which makes an automated brain MR image segmentation method desirable. In this regard, this paper presents a new segmentation method for brain MR images, integrating judiciously the merits of rough-fuzzy computing and multiresolution image analysis. The proposed method assumes that the major brain tissues in the MR images, namely gray matter, white matter, and cerebrospinal fluid, have different textural properties. The dyadic wavelet analysis is used to extract a scale-space feature vector for each pixel, while rough-fuzzy clustering is used to address the uncertainty problem of brain MR image segmentation. An unsupervised feature selection method, based on the maximum relevance-maximum significance criterion, is introduced to select relevant and significant textural features for the segmentation problem, while a mathematical-morphology-based skull-stripping preprocessing step is proposed to remove non-cerebral tissues such as the skull. The performance of the proposed method, along with a comparison with related approaches, is demonstrated on a set of synthetic and real brain MR images using standard validity indices.
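A sketch of per-pixel scale-space feature extraction with an undecimated wavelet transform; rough-fuzzy clustering has no standard Python implementation, so plain k-means is used here purely as a stand-in for grouping pixels into the three tissue classes. Assumes the slice sides are divisible by 2**level.

```python
# Sketch: undecimated wavelet subbands stacked into a per-pixel feature vector,
# followed by a k-means stand-in for the rough-fuzzy clustering step.
import numpy as np
import pywt
from sklearn.cluster import KMeans

def pixelwise_wavelet_features(mr_slice, wavelet="db1", level=2):
    bands = [mr_slice.astype(float)]
    for _, (cH, cV, cD) in pywt.swt2(mr_slice.astype(float), wavelet, level=level):
        bands += [np.abs(cH), np.abs(cV), np.abs(cD)]
    return np.stack(bands, axis=-1).reshape(-1, len(bands))

# labels = KMeans(n_clusters=3, n_init=10).fit_predict(pixelwise_wavelet_features(mr_slice))
# labels.reshape(mr_slice.shape) gives candidate GM / WM / CSF regions
```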

8.
OBJECTIVE: To evaluate the possibilities of describing and discriminating common nevi and malignant melanoma tissue with features based on spectral properties of the Daubechies 4 wavelet transform. STUDY DESIGN: Images of common nevi and malignant melanoma were dissected into square elements. The wavelet coefficients were calculated inside the square elements. The diagonal coefficients and related power spectra were used for further analysis. The analysis results served as a guide for the selection of features, including the standard deviations of wavelet coefficients inside the frequency bands and the energies of the frequency bands. These features describe properties of the frequency bands, representing information on different scales. To test the usefulness of the features for discrimination, a study set of 80 cases was classified by classification and regression trees analysis. The set was divided into a training set and a test set. RESULTS: In benign common nevi the energies of the lower-frequency bands are higher, whereas malignant melanoma tissue shows more variability of the coefficients in the higher-frequency bands. The influence on the detail properties of the images was studied by suppressing coefficients with low values, which are concentrated mainly in the higher-frequency bands. In benign common nevi the main information is contained in 15% of the coefficients, and in malignant melanoma in 39%. The results of classification show a clear-cut difference between the cases: 95.78% of nevi elements and 94.22% of melanoma elements were correctly classified in the training set, and 100% of benign nevi cases and 80% of malignant melanoma cases in the test set. CONCLUSION: Features based on the wavelet power spectrum contain sufficient information for differentiation between common nevi and malignant melanomas.
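An illustrative sketch of per-element wavelet power-spectrum features of the kind described above: the energy and the standard deviation of the coefficients in each frequency band of a Daubechies 4 decomposition. The decomposition level and band grouping are assumptions, not the paper's exact setup.

```python
# Sketch: band energies and coefficient spreads from a Daubechies 4 decomposition
# of one square element.
import numpy as np
import pywt

def wavelet_band_features(element, wavelet="db4", level=3):
    coeffs = pywt.wavedec2(element.astype(float), wavelet, level=level)
    feats = []
    for detail in coeffs[1:]:                     # one (cH, cV, cD) triple per band
        d = np.concatenate([c.ravel() for c in detail])
        feats += [np.sum(d ** 2), np.std(d)]      # band energy, coefficient spread
    return np.asarray(feats)
```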

9.
10.
Deep learning based retinopathy classification with optical coherence tomography (OCT) images has recently attracted great attention. However, existing deep learning methods fail to work well when training and testing datasets differ, owing to the general issue of domain shift between datasets caused by different collection devices, subjects, imaging parameters, etc. To address this practical and challenging issue, we propose a novel deep domain adaptation (DDA) method to train a model on a labeled dataset and adapt it to an unlabeled dataset collected under different conditions. It consists of two modules for domain alignment: adversarial learning and entropy minimization. We conduct extensive experiments on three public datasets to evaluate the performance of the proposed method. The results indicate that there are large domain shifts between datasets, resulting in poor performance for conventional deep learning methods. The proposed DDA method significantly outperforms existing methods for retinopathy classification with OCT images, achieving classification accuracies of 0.915, 0.959 and 0.990 under three cross-domain (cross-dataset) scenarios. Moreover, it achieves performance comparable to that of human experts on a dataset none of whose labeled data were used to train the proposed DDA method. We have also visualized the learnt features using the t-distributed stochastic neighbor embedding (t-SNE) technique. The results demonstrate that the proposed method can learn discriminative features for retinopathy classification.
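A minimal PyTorch sketch of the entropy-minimization half of domain alignment; the model, the adversarial branch and the loss weights are assumptions, not the authors' code.

```python
# Sketch: entropy minimization on unlabeled target-domain predictions, combined with
# a supervised source loss and an (assumed) adversarial alignment loss.
import torch
import torch.nn.functional as F

def entropy_loss(logits):
    """Mean prediction entropy on unlabeled target-domain images."""
    p = F.softmax(logits, dim=1)
    return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()

# total_loss = F.cross_entropy(model(source_x), source_y) \
#            + lambda_adv * adversarial_loss \
#            + lambda_ent * entropy_loss(model(target_x))
```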

11.
Different methods are investigated in selecting and generating the appropriate microscope images for analysis of three-dimensional objects in quantitative microscopy. Traditionally, the ‘best’ focused image from a set is used for quantitative analysis. Such an objectively determined image is optimal for the extraction of some features, but may not be the best image for the extraction of all features. Various methods using multiple images are here developed to obtain a tighter distribution for all features. Three different approaches for analysis of images of stained cervical cells were analyzed. In the first approach, features are extracted from each image in the set. The feature values are then averaged to give the final result. In the second approach, a set of varying focused images are reconstructed to obtain a set of in-focus images. Features are then extracted from this set and averaged. In the third approach, a set of images in the three-dimensional scene is compressed into a single two-dimensional image. Four different compression methods are used. Features are then extracted from the resulting two-dimensional image. The third approach is employed on both the raw and transformed images. Each approach has its advantages and disadvantages. The first approach is fast and produces reasonable results. The second approach is more computationally expensive but produces the best results. The last approach overcomes the memory storage problem of the first two approaches since the set of images is compressed into one. The method of compression using the highest gradient pixel produces better results overall than other data reduction techniques and produces results comparable to the first approach.
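A sketch of the "highest gradient pixel" compression of a focus stack into a single two-dimensional image (one of the four compression methods mentioned above); the smoothing scale sigma is an illustrative choice.

```python
# Sketch: fuse a focus stack by taking, at each pixel, the value from the slice with
# the largest local gradient magnitude.
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def fuse_by_max_gradient(stack, sigma=1.0):
    """stack: numpy array of shape (n_images, H, W); returns one fused H x W image."""
    grads = np.stack([gaussian_gradient_magnitude(im.astype(float), sigma) for im in stack])
    best = np.argmax(grads, axis=0)                # sharpest slice at each pixel
    return np.take_along_axis(stack, best[None, ...], axis=0)[0]
```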

12.
We present the results of a Gray Level Co-occurrence Matrix (GLCM) analysis for two sets of leaf epidermis images, for the adaxial (20×_H) and abaxial (20×_E) sides. The leaves were collected from a dry forest on Mona Island, which is located between the Dominican Republic and Puerto Rico. For each set of images, GLCM texture features were calculated, namely the energy, correlation, contrast, absolute value, inverse difference, homogeneity, and entropy. From the calculated statistics a feature matrix was obtained for each image and randomly divided into a training set and a test set using the hold-out method, in which 70% of the images were used as the training set and 30% as the test set. For each training and test set a linear discriminant analysis (LDA) was performed, resulting in an average correct classification rate of 90% for the abaxial side compared with 80% for the adaxial side.
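A sketch of the seven GLCM statistics listed above for one grey-level image; "absolute value" and "inverse difference" are computed directly from the normalized matrix because skimage does not expose them by name, and the exact definitions used in the paper may differ.

```python
# Sketch: the seven GLCM statistics (energy, correlation, contrast, absolute value,
# inverse difference, homogeneity, entropy) for one uint8 image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_statistics(gray_uint8, levels=256):
    glcm = graycomatrix(gray_uint8, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    P = glcm[:, :, 0, 0]
    i, j = np.indices(P.shape)
    stats = {p: graycoprops(glcm, p)[0, 0]
             for p in ("energy", "correlation", "contrast", "homogeneity")}
    stats["absolute value"] = float(np.sum(P * np.abs(i - j)))
    stats["inverse difference"] = float(np.sum(P / (1.0 + np.abs(i - j))))
    stats["entropy"] = float(-np.sum(P[P > 0] * np.log2(P[P > 0])))
    return stats
```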

13.
Plant diseases cause significant food loss and hence economic loss around the globe. Therefore, automatic plant disease identification is a primary task for applying proper treatments to control the spread of the diseases. The large variety of plant species and their dissimilar phytopathological symptoms call for the implementation of supervised machine learning techniques for efficient and reliable disease identification and classification. With the development of deep learning strategies, the convolutional neural network (CNN) has paved the way for classification of multiple plant diseases by extracting rich features. However, several characteristics of input images captured in real-world environments, viz. complex or indistinguishable backgrounds, the presence of multiple leaves alongside the diseased leaf, and small lesion areas, severely affect the robustness and accuracy of CNN modules. Available strategies usually apply standard CNN architectures to images captured in a laboratory environment, and the few studies that have considered practical in-field leaf images cover only a small number of plant species. Therefore, there is a need for a robust CNN module that can successfully recognize and classify the dissimilar leaf health conditions of non-identical plants from in-field RGB images. To achieve this goal, an attention dense learning (ADL) mechanism is proposed in this article by merging mixed sigmoid attention learning with the basic dense learning process of a deep CNN. The basic dense learning process derives new features at a higher layer by considering all lower-layer features, which provides a fast and efficient training process. Further, the attention learning process amplifies the learning ability of the dense block by discriminating the meaningful lesion portions of the images from the background areas. Rather than adding an extra layer for attention learning, the proposed ADL block uses the output features from higher-layer dense learning as an attention mask on the lower layers. For an effective and fast classification process, five ADL blocks are stacked to build a new CNN architecture, named DADCNN-5, for classification robustness and higher testing accuracy. Initially, the proposed DADCNN-5 module is applied to the publicly available extended PlantVillage dataset to classify 38 different health conditions of 14 plant species from 54,305 images. A classification accuracy of 99.93% shows that the proposed CNN module can be used for successful leaf disease identification. Further, the efficacy of the DADCNN-5 model is checked through stringent experiments on a new real-world plant leaf database created by the authors. The new leaf database contains 10,851 real-world RGB leaf images of 17 plant species, covering 44 distinct health conditions. Experimental outcomes reveal that the proposed DADCNN-5 outperforms existing machine learning and standard CNN architectures, achieving 97.33% accuracy. The obtained sensitivity, specificity and false positive rate values are 96.57%, 99.94% and 0.063%, respectively. The module takes approximately 3235 min to train and achieves a training accuracy of 99.86%. Class activation mapping (CAM) visualization shows that DADCNN-5 learns distinguishable features from semantically important regions (i.e., lesion regions) on the leaves. Further, the robustness of DADCNN-5 is established through experiments with augmented and noise-contaminated images from the practical database.
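A greatly simplified PyTorch sketch of the general idea of an attention dense block, not the authors' DADCNN-5: higher-layer dense features are squashed by a sigmoid and reused as a mask on the lower-layer features. Channel counts and layer shapes are illustrative.

```python
# Loose sketch of an attention-gated dense block (illustrative only).
import torch
import torch.nn as nn

class AttentionDenseBlock(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.mask = nn.Conv2d(channels, channels, 1)             # 1x1 conv -> attention mask

    def forward(self, x):
        f1 = torch.relu(self.conv1(x))
        f2 = torch.relu(self.conv2(torch.cat([x, f1], dim=1)))   # dense connectivity
        attn = torch.sigmoid(self.mask(f2))                      # higher-layer mask ...
        return torch.cat([attn * x, f2], dim=1)                  # ... gates the lower layer

# y = AttentionDenseBlock(32)(torch.randn(1, 32, 64, 64))        # -> (1, 64, 64, 64)
```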

14.
In this paper, a robust algorithm for disease type determination in brain magnetic resonance imaging (MRI) is presented. The proposed method classifies an MRI as normal or as one of seven different diseases. First, a two-level two-dimensional discrete wavelet transform (2D DWT) of the input image is calculated. Our analysis shows that the wavelet coefficients of the detail sub-bands can be modeled by the generalized autoregressive conditional heteroscedasticity (GARCH) statistical model. The parameters of the GARCH model are taken as the primary feature vector. After feature vector normalization, principal component analysis (PCA) and linear discriminant analysis (LDA) are used to extract the proper features and remove redundancy from the primary feature vector. Finally, the extracted features are applied separately to K-nearest neighbor (KNN) and support vector machine (SVM) classifiers to determine whether the image is normal or which disease type is present. Experimental results indicate that the proposed algorithm achieves a high classification rate and outperforms recently introduced methods while requiring fewer features for classification.
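A sketch of the primary-feature step only, under the assumption that the third-party `arch` package is acceptable for GARCH(1,1) fitting: each detail sub-band of a two-level 2D DWT is flattened and its fitted GARCH parameters become features. The PCA/LDA reduction and the KNN/SVM classifiers would follow as in the abstract.

```python
# Sketch: GARCH(1,1) parameters of the wavelet detail sub-bands as a feature vector
# (the "arch" package and the rescaling step are assumptions).
import numpy as np
import pywt
from arch import arch_model

def garch_wavelet_features(mri_slice):
    feats = []
    for detail in pywt.wavedec2(mri_slice.astype(float), "db4", level=2)[1:]:
        for band in detail:
            series = band.ravel()
            series = 100 * series / (np.abs(series).max() + 1e-8)   # rescale for a stable fit
            res = arch_model(series, mean="Zero", vol="GARCH", p=1, q=1).fit(disp="off")
            feats.extend(res.params.values)                         # omega, alpha[1], beta[1]
    return np.asarray(feats)
```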

15.
We have developed an image-analysis and classification system for automatically scoring images from high-throughput protein crystallization trials. Image analysis for this system is performed by the Help Conquer Cancer (HCC) project on the World Community Grid. HCC calculates 12,375 distinct image features on microbatch-under-oil images from the Hauptman-Woodward Medical Research Institute’s High-Throughput Screening Laboratory. Using HCC-computed image features and a massive training set of 165,351 hand-scored images, we have trained multiple Random Forest classifiers that accurately recognize multiple crystallization outcomes, including crystals, clear drops, precipitate, and others. The system successfully recognizes 80% of crystal-bearing images, 89% of precipitate images, and 98% of clear drops.
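A minimal sketch of training a Random Forest on precomputed image features with hand-scored outcome labels; the feature matrix below is a random placeholder for the 12,375 HCC-computed features.

```python
# Sketch: Random Forest on precomputed crystallization-image features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 50))                 # placeholder feature matrix
y = rng.integers(0, 4, size=1000)               # crystal / clear drop / precipitate / other

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=300, n_jobs=-1).fit(X_tr, y_tr)
print(classification_report(y_te, rf.predict(X_te)))
```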

16.
To understand the function of encoded proteins, we need to know the subcellular location of a protein. The most common method for determining subcellular location is fluorescence microscopy, which allows subcellular localizations to be imaged in high throughput. Image feature calculation has proven invaluable in the automated analysis of cellular images. This article proposes a novel feature extraction method named LDPs, which is invariant to translation and rotation of the given images; its essence is to count the local difference features of an image, where the difference features are obtained by calculating the difference between the gray value of the central pixel c and the gray values of the eight pixels in its neighborhood. The novel method is tested on two image sets: the first, in which the fluorescently tagged protein was endogenously expressed in 10 subcellular locations, and the second, in which the protein was transfected in 11 locations. An SVM was trained and tested for each image set, and classification accuracies of 96.7 and 92.3 % were obtained on the endogenous and transfected sets, respectively.
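A sketch of a local-difference descriptor in the spirit described above: for every interior pixel, the differences between the centre and its eight neighbours are pooled into a histogram, which is translation-invariant by construction. The published LDPs may differ in detail.

```python
# Sketch: histogram of centre-vs-neighbour grey-value differences.
import numpy as np

def local_difference_histogram(gray, bins=32, value_range=(-255, 255)):
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    diffs = [g[1 + dr:g.shape[0] - 1 + dr, 1 + dc:g.shape[1] - 1 + dc] - c
             for dr, dc in offsets]                        # neighbour minus centre
    hist, _ = np.histogram(np.stack(diffs), bins=bins, range=value_range)
    return hist / hist.sum()
```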

17.
《IRBM》2021,42(5):378-389
White blood cells play an important role in assessing the health condition of an individual. Diagnosis of blood diseases involves the identification and characterization of a patient's blood sample. Recent approaches employ Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN) and merged CNN-RNN models to enrich the understanding of image content. End-to-end training on large medical image datasets has encouraged the discovery of prominent features from sample images. Techniques that extract single-cell patches from blood samples for blood cell classification have achieved good performance rates. However, these approaches are unable to address the issue of multiple overlapping cells. To address this problem, the Canonical Correlation Analysis (CCA) method is used in this paper. The CCA method accounts for the effects of overlapping nuclei, where multiple nuclei patches are extracted, learned and trained at a time. By handling overlapping blood cell images jointly, the classification time is reduced, the dimension of the input images is compressed, and the network converges faster with more accurate weight parameters. Experimental results evaluated on a publicly available database show that the proposed CNN-RNN merging model with canonical correlation analysis achieves higher accuracy than other state-of-the-art blood cell classification techniques.
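A sketch of fusing two feature sets for the same cells (e.g. CNN and RNN descriptors) with canonical correlation analysis before a final classifier; sample counts and feature dimensions are illustrative placeholders, not the authors' setup.

```python
# Sketch: CCA-based fusion of two feature representations of the same samples.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(2)
cnn_feats = rng.normal(size=(500, 128))           # placeholder CNN features per cell image
rnn_feats = rng.normal(size=(500, 64))            # placeholder RNN features per cell image

cca = CCA(n_components=20).fit(cnn_feats, rnn_feats)
X_c, Y_c = cca.transform(cnn_feats, rnn_feats)
fused = np.concatenate([X_c, Y_c], axis=1)        # 40-D fused representation per cell
```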

18.

Background

Images of frozen hydrated (vitrified) virus particles were taken close to focus in an electron microscope; such images contain structural signals at high spatial frequencies. They had very low contrast due to the high level of noise present, which made particle selection, classification and orientation determination very difficult. The final purpose of the classification is to improve the signal-to-noise ratio of the image representing each class, which is usually the class average. In this paper, the proposed method is based on wavelet filtering and multi-resolution processing for the classification and reconstruction of this very noisy data. A multivariate statistical analysis (MSA) is used for this classification.

Results

The MSA classification method is noise dependent. A set of 2600 projections from a 3D map of a herpes simplex virus (to which noise was added) was classified by MSA. The classification shows the power of wavelet filtering in enhancing the quality of class averages (used in 3D reconstruction) compared to Fourier band-pass filtering. A 3D reconstruction of a recombinant virus (VP5-VP19C) is presented as an application of multi-resolution processing for classification and reconstruction.

Conclusion

The wavelet filtering and multi-resolution processing method proposed in this paper offers a new way of processing very noisy images obtained from electron cryo-microscopes. The multi-resolution processing and filtering improve the speed and accuracy of classification, which is vital for the 3D reconstruction of biological objects. The VP5-VP19C recombinant virus reconstruction presented here is an example which demonstrates the power of this method. Without this processing, it is not possible to obtain the correct 3D map of this virus.
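A sketch of the two ingredients named above: soft-threshold wavelet filtering of a noisy projection image, and PCA as a simple stand-in for the multivariate statistical analysis (MSA) used to group the filtered projections. Wavelet choice, threshold and cluster count are assumptions.

```python
# Sketch: wavelet soft-thresholding of noisy projections, then PCA + k-means as a
# stand-in for MSA-based classification into class averages.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def wavelet_denoise(image, wavelet="db4", level=3, k=2.0):
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745        # noise estimate, finest band
    filtered = [coeffs[0]] + [tuple(pywt.threshold(c, k * sigma, "soft") for c in d)
                              for d in coeffs[1:]]
    return pywt.waverec2(filtered, wavelet)

# filtered = np.array([wavelet_denoise(p).ravel() for p in projections])
# classes = KMeans(n_clusters=50).fit_predict(PCA(n_components=40).fit_transform(filtered))
```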

19.
Inspection of insect sticky paper traps is an essential task for an effective integrated pest management (IPM) programme. However, identification and counting of the insect pests stuck on the traps is a very cumbersome task. Therefore, an efficient approach is needed to alleviate the problem and to provide timely information on insect pests. In this research, an automatic method for the multi-class recognition of small-size greenhouse insect pests on sticky paper trap images acquired by wireless imaging devices is proposed. The developed algorithm features a cascaded approach that uses a convolutional neural network (CNN) object detector and CNN image classifiers, separately. The object detector was trained for detecting objects in an image, and a CNN classifier was applied to further filter out non-insect objects from the detected objects in the first stage. The obtained insect objects were then further classified into flies (Diptera: Drosophilidae), gnats (Diptera: Sciaridae), thrips (Thysanoptera: Thripidae) and whiteflies (Hemiptera: Aleyrodidae), using a multi-class CNN classifier in the second stage. Advantages of this approach include flexibility in adding more classes to the multi-class insect classifier and sample control strategies to improve classification performance. The algorithm was developed and tested for images taken by multiple wireless imaging devices installed in several greenhouses under natural and variable lighting environments. Based on the testing results from long-term experiments in greenhouses, it was found that the algorithm could achieve average F1-scores of 0.92 and 0.90 and mean counting accuracies of 0.91 and 0.90, as tested on a separate 6-month image data set and on an image data set from a different greenhouse, respectively. The proposed method in this research resolves important problems for the automated recognition of insect pests and provides instantaneous information of insect pest occurrences in greenhouses, which offers vast potential for developing more efficient IPM strategies in agriculture.
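A minimal sketch of the two reported metrics, assuming a simple per-image definition of counting accuracy (one minus the relative counting error); the labels and counts below are placeholders.

```python
# Sketch: macro F1 over insect classes and a simple per-image counting accuracy.
import numpy as np
from sklearn.metrics import f1_score

y_true = ["fly", "gnat", "thrips", "whitefly", "fly", "thrips"]   # placeholder labels
y_pred = ["fly", "gnat", "whitefly", "whitefly", "fly", "thrips"]
print("macro F1:", f1_score(y_true, y_pred, average="macro"))

pred_counts = np.array([12, 30, 7])               # insects counted by the algorithm per trap image
true_counts = np.array([13, 28, 7])               # manual counts
print("mean counting accuracy:", np.mean(1 - np.abs(pred_counts - true_counts) / true_counts))
```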

20.
Classification and subsequent diagnosis of cardiac arrhythmias is an important research topic in clinical practice. Confirmation of the type of arrhythmia at an early stage is critical for reducing the risk and occurrence of cardiovascular events. Nevertheless, diagnoses must be confirmed by a combination of specialist experience and electrocardiogram (ECG) examination, which can lead to delays in diagnosis. To overcome such obstacles, this study proposes an automatic ECG classification algorithm based on transfer learning and the continuous wavelet transform (CWT). The transfer learning method transfers domain knowledge and features learned on images to the ECG, which is a one-dimensional signal, so that a convolutional neural network (CNN) can be used for classification. Meanwhile, the CWT is used to convert a one-dimensional ECG signal into a two-dimensional signal map consisting of time-frequency components. Considering that morphological features can be helpful in arrhythmia classification, eight features related to the R peak of an ECG signal are proposed. These auxiliary features are integrated with the features extracted by the CNN and then fed into the fully connected arrhythmia classification layer. The CNN developed in this study can also be used for bird activity detection: classification experiments were performed after converting audio files with and without songbird sounds from the NIPS4Bplus bird song dataset into Mel spectrograms. Compared to the most recent methods in the same field, the classification results improved accuracy and recognition by 11.67% and 11.57%, respectively.
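A sketch of converting a 1-D ECG segment into a 2-D time-frequency map with the CWT and locating R peaks for auxiliary features; the sampling rate, wavelet, scale range and the synthetic signal are illustrative assumptions.

```python
# Sketch: CWT scalogram of an ECG segment plus R-peak detection for auxiliary features.
import numpy as np
import pywt
from scipy.signal import find_peaks

fs = 360                                          # Hz, placeholder sampling rate
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t)                 # placeholder; use a real ECG segment

scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(ecg, scales, "morl", sampling_period=1 / fs)
scalogram = np.abs(coeffs)                        # 2-D time-frequency input for the CNN

r_peaks, _ = find_peaks(ecg, distance=int(0.4 * fs), prominence=0.5)
rr_intervals = np.diff(r_peaks) / fs              # basis for R-peak-related features
```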

