Similar Literature
20 similar documents found.
1.
Subcellular localization of a protein is important for understanding the protein’s functions and interactions. Many computational techniques exist to predict protein subcellular locations, but it has been shown that many prediction tasks suffer from a shortage of training data. This paper introduces a new method to mine proteins with non-experimental annotations, which are labeled by the non-experimental evidence in protein databases, to overcome the training data shortage. A novel active sample selection strategy, built on active learning technology, is designed to find useful samples in the entire pool of candidate proteins with non-experimental annotations. This approach can adequately estimate the “value” of each sample, automatically select the most valuable samples, and add them to the original training set to help retrain the classifiers. Numerical experiments with four popular multi-label classifiers on three benchmark datasets show that the proposed method can effectively select valuable samples to supplement the original training set and significantly improve the performance of the prediction classifiers.
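The abstract's "value" estimate for candidate samples is not spelled out here; a minimal sketch of one common active-learning criterion (margin sampling: prefer candidates whose top-two predicted class probabilities are closest) illustrates the selection step. The function name and toy probabilities are illustrative, not the paper's.

```python
import numpy as np

def select_informative(probs, k):
    """Pick the k candidates whose predictions are least confident,
    measured by the margin between the top-2 class probabilities."""
    sorted_p = np.sort(probs, axis=1)
    margin = sorted_p[:, -1] - sorted_p[:, -2]  # small margin = uncertain
    return np.argsort(margin)[:k]

# toy pool of 4 candidate proteins, 3 locations
pool = np.array([[0.90, 0.05, 0.05],   # confident
                 [0.40, 0.35, 0.25],   # uncertain
                 [0.34, 0.33, 0.33],   # most uncertain
                 [0.70, 0.20, 0.10]])
picked = select_informative(pool, 2)
print(picked.tolist())
```

The selected candidates would then be added to the training set and the classifier retrained, as the abstract describes.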

2.
This paper introduces a novel approach to gene selection based on a substantial modification of the analytic hierarchy process (AHP). The modified AHP systematically integrates the outcomes of individual filter methods to select the most informative genes for microarray classification. Five individual ranking methods, including the t-test, entropy, the receiver operating characteristic (ROC) curve, the Wilcoxon test, and the signal-to-noise ratio, are employed to rank genes. These ranked genes are then used as inputs for the modified AHP. Additionally, a method that uses the fuzzy standard additive model (FSAM) for cancer classification based on the genes selected by AHP is also proposed. Traditional FSAM learning is a hybrid process comprising unsupervised structure learning and supervised parameter tuning. A genetic algorithm (GA) is incorporated between the unsupervised and supervised training to optimize the number of fuzzy rules. The integration of the GA enables FSAM to deal with the high-dimension, low-sample nature of microarray data and thus enhances classification efficiency. Experiments are carried out on numerous microarray datasets. The results demonstrate the dominance of AHP-based gene selection over the single ranking methods. Furthermore, the AHP-FSAM combination achieves high accuracy in microarray data classification compared with various competing classifiers. The proposed approach is therefore useful to medical practitioners and clinicians as a decision support system that can be deployed in real medical practice.
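The modified AHP itself is more involved than space allows; a toy sketch of the underlying idea of fusing several filter rankings, using simple mean-rank aggregation as a stand-in for the AHP weighting, looks like this (the function name and rank values are illustrative):

```python
import numpy as np

def aggregate_ranks(rank_lists, top_k):
    """rank_lists: each row gives the rank (0 = best) of every gene
    under one filter criterion; genes with the lowest mean rank
    across criteria are selected as most informative."""
    mean_rank = np.mean(rank_lists, axis=0)
    return np.argsort(mean_rank)[:top_k]

# ranks of 4 genes under 3 filter methods (rows)
ranks = np.array([[0, 2, 1, 3],    # e.g. t-test
                  [1, 0, 2, 3],    # e.g. entropy
                  [0, 1, 3, 2]])   # e.g. ROC curve
print(aggregate_ranks(ranks, 2).tolist())
```

AHP replaces the flat mean with pairwise-comparison-derived weights, but the input/output shape of the fusion step is the same.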

3.
With great potential for assisting radiological image interpretation and decision making, content-based image retrieval in the medical domain has become a hot topic in recent years. Many methods to enhance the performance of content-based medical image retrieval have been proposed, among which the relevance feedback (RF) scheme is one of the most promising. Given user feedback information, RF algorithms interactively learn a user’s preferences to bridge the “semantic gap” between low-level computerized visual features and high-level human semantic perception, and thus improve retrieval performance. However, most existing RF algorithms operate in the original high-dimensional feature space and ignore the manifold structure of the low-level visual features of the images. In this paper, we propose a new method, termed dual-force ISOMAP (DFISOMAP), for content-based medical image retrieval. Under the assumption that medical images lie on a low-dimensional manifold embedded in a high-dimensional ambient space, DFISOMAP operates in three stages. First, the geometric structure of positive examples in the learned low-dimensional embedding is preserved according to the isometric feature mapping (ISOMAP) criterion; to model the geometric structure precisely, a reconstruction error constraint is also added. Second, the average distance between positive and negative examples is maximized to separate them; this margin maximization acts as a force that pushes negative examples far away from positive examples. Finally, the similarity propagation technique is used to apply a second force to the negative examples that pulls them back into the negative sample set. We evaluate the proposed method on a subset of the IRMA medical image dataset within an RF-based medical image retrieval framework. Experimental results show that DFISOMAP outperforms popular approaches for content-based medical image retrieval in terms of accuracy and stability.

4.
Optical coherence tomography (OCT) imaging shows significant potential for clinical routines due to its noninvasive nature. However, the quality of OCT images is generally limited by the inherent speckle noise of OCT imaging and by low sampling rates. To obtain high signal-to-noise ratio (SNR), high-resolution (HR) OCT images within a short scanning time, we present a learning-based method to recover high-quality OCT images from noisy, low-resolution OCT images. We propose a semisupervised learning approach, named N2NSR-OCT, to generate denoised and super-resolved OCT images simultaneously using up- and down-sampling networks (U-Net (Semi) and DBPN (Semi)). Additionally, two super-resolution and denoising models with different upscale factors (2× and 4×) were trained to recover high-quality OCT images at the corresponding down-sampling rates. The new semisupervised learning approach achieves results comparable with those of supervised learning using up- and down-sampling networks, and outperforms other related state-of-the-art methods at preserving subtle retinal structures.

5.
Micro-CT provides high-resolution 3D imaging of micro-architecture in a non-invasive way, which makes it a significant tool in biomedical research and preclinical applications. Because of the limited power of the micro-focus X-ray tube, photon starvation occurs and noise is inevitable in the projection images, degrading spatial resolution, contrast, and image details. In this paper, we propose a conditional generative adversarial network (C-GAN) denoising algorithm in the projection domain for micro-CT imaging. The noise statistics are utilized directly, and a novel variance loss is developed to suppress blurring during the denoising procedure. The C-GAN serves as the framework for the denoising task. To guarantee pixel-wise accuracy, a fully convolutional network serves as the generator. During the alternating training of the generator and the discriminator, the network learns the noise distribution automatically. Moreover, residual learning and a skip-connection architecture are applied for faster network training and further feature fusion. To evaluate the denoising performance, a mouse lung, a milkvetch root, and a bamboo stick were imaged by micro-CT in the experiments. Compared with BM3D, CNN-MSE, and CNN-VGG, the proposed method suppresses noise effectively and recovers image details without introducing artifacts or blurring. The results show that our method is feasible, efficient, and practical.
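The paper's exact variance loss is not given in this abstract; a minimal sketch of the general idea, assuming a simple variance-gap penalty added to pixel MSE (the function name and form are assumptions, chosen only to show why such a term discourages over-smoothed outputs):

```python
import numpy as np

def variance_loss(pred, target):
    """Hypothetical sketch: pixel MSE plus a penalty on the gap
    between output and target variance. A blurry (over-smoothed)
    prediction has too-low variance, so the second term punishes it
    even when its MSE is moderate."""
    mse = np.mean((pred - target) ** 2)
    var_gap = (np.var(pred) - np.var(target)) ** 2
    return float(mse + var_gap)

target = np.array([0.0, 1.0, 0.0, 1.0])   # high-contrast signal
blurry = np.full(4, 0.5)                  # loses all contrast
print(variance_loss(target, target), variance_loss(blurry, target))
```

A plain MSE loss alone would still penalize the blurry output here, but the variance term grows as contrast is lost, which is the stated motivation for the loss.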

6.
The scarcity of training annotations is one of the major challenges for applying deep learning to medical image analysis. Recently, self-supervised learning has provided a powerful way to alleviate this challenge by extracting useful features from large amounts of unlabeled training data. In this article, we propose a simple and effective self-supervised learning method for leukocyte classification that identifies the different transformations of leukocyte images, without requiring large batches of negative samples or specialized architectures. Specifically, a convolutional neural network backbone takes different transformations of a leukocyte image as input for feature extraction. Then, a classifier performs a pretext task of self-supervised transformation recognition on the extracted features, which helps the backbone learn useful representations that generalize well across different leukocyte types and datasets. In the experiments, we systematically study the effect of different transformation compositions on useful leukocyte feature extraction. Compared with five typical baselines of self-supervised image classification, experimental results demonstrate that our method performs better under different evaluation protocols, including linear evaluation, domain transfer, and fine-tuning, which proves its effectiveness.
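The transformation-recognition pretext task can be sketched concretely. The paper studies several transformation compositions; rotation recognition is shown here only as one common instance (the function name and 3×3 toy image are illustrative):

```python
import numpy as np

def rotation_pretext(images):
    """Build a pretext dataset: each image appears under 4 rotations,
    and the label is the rotation index the network must recognize.
    No human annotation is needed - the labels come from the
    transformations themselves."""
    xs, ys = [], []
    for img in images:
        for k in range(4):                 # 0, 90, 180, 270 degrees
            xs.append(np.rot90(img, k))
            ys.append(k)
    return np.stack(xs), np.array(ys)

imgs = [np.arange(9).reshape(3, 3)]        # one toy "leukocyte image"
x, y = rotation_pretext(imgs)
print(x.shape, y.tolist())
```

Training a classifier on (x, y) pairs like these is what forces the backbone to learn transformation-sensitive, and hence structure-aware, features.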

7.
Camera traps are a method for monitoring wildlife, and they collect large numbers of pictures. The number of images collected of each species usually follows a long-tail distribution, i.e., a few classes have a large number of instances while many species have only a few. Although in most cases these rare species are the ones of interest to ecologists, they are often neglected when using deep-learning models because such models require a large number of images for training. In this work, a simple and effective framework called Square-Root Sampling Branch (SSB) is proposed, which combines two classification branches trained with square-root sampling and instance sampling to improve long-tail visual recognition; it is compared with state-of-the-art methods for this task: square-root sampling, class-balanced focal loss, and balanced group softmax. To reach a more general conclusion, the methods for handling long-tail visual recognition were systematically evaluated across four families of computer vision models (ResNet, MobileNetV3, EfficientNetV2, and Swin Transformer) and four camera-trap datasets with different characteristics. Initially, a robust baseline with the most recent training tricks was prepared; then, the methods for improving long-tail recognition were applied. Our experiments show that square-root sampling was the method that most improved performance for the minority classes, by around 15%, but at the cost of reducing the majority classes' accuracy by at least 3%. Our proposed framework (SSB) proved competitive with the other methods and achieved the best or second-best results in most cases for the tail classes; unlike square-root sampling, however, its loss in head-class performance was minimal, thus achieving the best trade-off among all the evaluated methods. Our experiments also show that the Swin Transformer can achieve high performance for rare classes without any additional method for handling imbalance, attaining an overall accuracy of 88.76% on the WCS dataset and 94.97% on Snapshot Serengeti using a location-based training/test partition. Despite the improvement in tail-class performance, our experiments highlight the need for better methods for handling long-tail visual recognition in camera-trap images, since state-of-the-art approaches perform poorly, especially for classes with just a few training instances.
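Square-root sampling itself is simple to state: each class is sampled with probability proportional to the square root of its instance count, which flattens the long tail less aggressively than uniform class balancing. A minimal sketch:

```python
import numpy as np

def sqrt_sampling_probs(class_counts):
    """Per-class sampling probabilities proportional to sqrt(n_c).
    Head classes are down-weighted and tail classes up-weighted,
    but less drastically than with uniform (1/C) class balancing."""
    w = np.sqrt(np.asarray(class_counts, dtype=float))
    return w / w.sum()

counts = [10000, 100, 1]            # head, mid, tail classes
p = sqrt_sampling_probs(counts)
print(np.round(p, 3).tolist())
```

Under instance sampling the head class above would receive ~99% of the draws; square-root sampling reduces that to ~90% while still giving the tail class a non-negligible share.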

8.
Optical coherence Doppler tomography (ODT) increasingly attracts attention because of its unprecedented advantages in high contrast, capillary-level resolution, and flow speed quantification. However, the trade-off between the signal-to-noise ratio of ODT images and the A-scan sampling density significantly slows down imaging, constraining its clinical applications. To accelerate ODT imaging, a deep-learning-based approach is proposed to suppress the overwhelming phase noise arising from low sampling density. To handle the issue of limited paired training data, a generative adversarial network is used to implicitly learn the distribution underlying Doppler phase noise and to generate synthetic data. A 3D convolutional neural network is then trained and applied for image denoising. We demonstrate that this approach outperforms traditional denoising methods in noise reduction and image-detail preservation, enabling high-speed ODT imaging with low A-scan sampling density.

9.
This paper studies the restoration of images corrupted by mixed Gaussian-impulse noise. In recent years, low-rank matrix reconstruction has become a research hotspot in many scientific and engineering domains such as machine learning, image processing, computer vision, and bioinformatics. It mainly involves matrix completion and robust principal component analysis: recovering a low-rank matrix from an incomplete but accurate sampling subset of its entries, and from an observed data matrix with an unknown fraction of its entries arbitrarily corrupted, respectively. Inspired by these ideas, we consider recovering a low-rank matrix from an incomplete sampling subset of its entries in which an unknown fraction of the samplings is contaminated by arbitrary errors. We define this as the problem of matrix completion from corrupted samplings and model it as a convex optimization problem that minimizes a combination of the nuclear norm and the ℓ1-norm. We also put forward an effective algorithm based on augmented Lagrange multipliers to solve the problem exactly. For mixed Gaussian-impulse noise removal, we treat it as a matrix completion problem from corrupted samplings and restore the noisy image after an impulse-detecting procedure. Compared with some existing methods for mixed noise removal, our method achieves superior recovery quality when images possess low-rank features such as geometrically regular textures and similar structured contents. Especially when the density of the impulse noise is relatively high and the variance of the Gaussian noise is small, our method significantly outperforms the traditional methods, not only in simultaneously removing Gaussian and impulse noise and restoring a low-rank image matrix, but also in preserving textures and details in the image.
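Augmented-Lagrange-multiplier solvers for a nuclear-norm plus ℓ1-norm objective alternate between two standard shrinkage steps; the sketch below shows those two proximal operators in isolation (the full ALM loop, multipliers, and sampling mask are omitted):

```python
import numpy as np

def soft(x, t):
    # proximal operator of the l1-norm: entrywise soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def svt(m, t):
    # proximal operator of the nuclear norm: singular value thresholding,
    # i.e. soft-threshold the singular values and reassemble the matrix
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    return u @ np.diag(soft(s, t)) @ vt

a = np.array([[3.0, -0.5], [0.2, 4.0]])
print(soft(a, 1.0))          # small entries shrink to zero
print(svt(np.eye(3) * 2.0, 1.0))  # singular values 2 -> 1
```

In the ALM iteration, `svt` updates the low-rank component and `soft` updates the sparse (impulse-corrupted) component until the constraint residual vanishes.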

10.
The giant panda is a flagship species in ecological conservation, and the infrared camera trap is an effective tool for monitoring it. Images captured by infrared camera traps must be accurately recognized before further statistical analyses can be implemented. Previous research has demonstrated that spatiotemporal and positional contextual information and the species distribution model (SDM) can improve image detection accuracy, especially for difficult-to-see images, i.e., images in which individual animals are only partially visible and therefore challenging for a model to detect. Utilizing the attention mechanism, we developed a unique deep-learning method that incorporates object detection, contextual information, and the SDM to achieve better detection performance on difficult-to-see images. We obtained 1169 images of the wild giant panda and divided them into training and test sets at a 4:1 ratio. Model assessment metrics showed that our proposed model achieved an overall performance of 98.1% mAP@0.5 and 82.9% recall on difficult-to-see images. Our research demonstrates that this fine-grained multimodal-fusion method applied to monitoring giant pandas in the wild can better detect difficult-to-see panda images and thus enhance the wildlife monitoring system.

11.
Spectral clustering methods have been shown to be effective for image segmentation. Unfortunately, the presence of image noise as well as textural characteristics can significantly degrade segmentation performance. To accommodate image noise and textural characteristics, this study introduces the concept of sub-graph affinity, where each node in the primary graph is modeled as a sub-graph characterizing the neighborhood surrounding the node. The statistical sub-graph affinity matrix is then constructed from the statistical relationships between sub-graphs of connected nodes in the primary graph, thereby counteracting the uncertainty associated with image noise and textural characteristics by utilizing more information than traditional spectral clustering methods. Experiments on both synthetic and natural images under various levels of noise contamination demonstrate that the proposed approach achieves improved segmentation performance compared with existing spectral clustering methods.
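The exact statistical affinity used in the paper is not given in this abstract; a toy sketch of the underlying idea, comparing neighborhood (sub-graph) statistics instead of raw pixel values so a single noisy pixel barely changes the affinity (the function name, patch-mean statistic, and Gaussian kernel are assumptions):

```python
import numpy as np

def patch_affinity(img, i, j, k, l, rad=1, sigma=1.0):
    """Affinity between pixels (i,j) and (k,l) computed from the mean
    intensity of the patches around them, rather than from the two
    raw pixel values - a crude stand-in for sub-graph statistics."""
    def patch_mean(r, c):
        return img[max(r - rad, 0):r + rad + 1,
                   max(c - rad, 0):c + rad + 1].mean()
    d = patch_mean(i, j) - patch_mean(k, l)
    return float(np.exp(-d * d / (2 * sigma ** 2)))

img = np.ones((5, 5))
img[2, 2] = 1.5                     # one noisy pixel in a flat region
print(round(patch_affinity(img, 2, 2, 0, 0), 3))
```

A raw pixel-value affinity between (2,2) and (0,0) would drop sharply because of the noise spike; the patch-level statistic keeps the affinity near 1, which is the robustness the sub-graph construction aims for.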

12.
The one-sample-per-person problem has become an active research topic in face recognition in recent years because of its challenges and its significance for real-world applications. However, achieving relatively high recognition accuracy remains difficult because, typically, too few training samples are available, and because of variations in illumination and expression. To alleviate the negative effects of these unfavorable factors, in this paper we propose a more accurate spectral-feature-image-based 2DLDA (two-dimensional linear discriminant analysis) ensemble algorithm for face recognition with one sample image per person. In our algorithm, multi-resolution spectral feature images are constructed to represent the face images, which greatly enlarges the training set. The proposed method is inspired by our finding that, among these spectral feature images, features extracted by 2DLDA from some orientations and scales are not sensitive to variations in illumination and expression. To maintain the positive characteristics of these filters and to make correct category assignments, a classifier committee learning (CCL) strategy is designed to combine the results obtained from the different spectral feature images. Using these strategies, the negative effects of the unfavorable factors can be alleviated efficiently in face recognition. Experimental results on standard databases demonstrate the feasibility and efficiency of the proposed method.
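The committee-combination step can be illustrated with the simplest instance, majority voting over the per-feature-image classifiers (the paper's CCL strategy may weight members differently; this is only the basic mechanism):

```python
from collections import Counter

def committee_vote(predictions):
    """Combine the identity predicted by each committee member
    (one classifier per spectral feature image) by majority vote;
    ties go to the label that appears first."""
    return Counter(predictions).most_common(1)[0][0]

# three 2DLDA classifiers, each trained on a different spectral
# feature image, vote on a probe face
print(committee_vote(["smith", "jones", "smith"]))
```

Because the illumination-insensitive feature images tend to agree with each other, the vote suppresses the members misled by lighting or expression changes.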

13.
A new method is proposed to identify whether a query protein is singleplex or multiplex, to improve the quality of protein subcellular localization prediction. Based on the transductive learning technique, this approach utilizes information from both the query proteins and the known proteins to estimate the number of subcellular locations of every query protein, so that singleplex and multiplex proteins can be recognized and distinguished. Each query protein is then handled by a targeted single-label or multi-label predictor to achieve a high-accuracy prediction. We assess the performance of the proposed approach by applying it to three groups of protein sequence datasets. Simulation experiments show that the proposed approach can effectively identify singleplex and multiplex proteins. A comparison also verifies the reliability of this method for enhancing the power of protein subcellular localization prediction.

14.
Cryo-electron microscopy (cryo-EM) single-particle analysis is a revolutionary imaging technique for resolving and visualizing biomacromolecules. Image alignment in cryo-EM is an important, basic step for improving the precision of image distance calculations. However, it is very challenging due to the high noise level and low signal-to-noise ratio. We therefore propose a new deep unsupervised difference learning (UDL) strategy with a novel pseudo-label-guided learning network architecture and apply it to pair-wise image alignment in cryo-EM. The training framework is fully unsupervised. Furthermore, a variant of UDL called joint UDL (JUDL) is also proposed, which can utilize the similarity information of the whole dataset and thus further increase alignment precision. Assessments on both real-world and synthetic cryo-EM single-particle image datasets suggest that the new unsupervised joint alignment method achieves more accurate alignment results. Our method is highly efficient, taking advantage of GPU devices. The source code is publicly available at “http://www.csbio.sjtu.edu.cn/bioinf/JointUDL/” for academic use.
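To make the pair-wise alignment task concrete, the classical non-learning baseline is phase correlation, which recovers the translation between two images from the peak of the normalized cross-power spectrum. This is not the paper's UDL method, only an illustration of what "alignment" computes:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Return the integer (dy, dx) such that rolling `b` by (dy, dx)
    gives `a`: peak location of the inverse FFT of the normalized
    cross-power spectrum of the two images."""
    f = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    r = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    return int(dy), int(dx)

img = np.zeros((8, 8))
img[2, 3] = 1.0                                   # a single bright spot
shifted = np.roll(np.roll(img, 1, axis=0), 2, axis=1)
print(phase_correlation_shift(shifted, img))
```

At cryo-EM noise levels this spectrum peak becomes unreliable, which is precisely the regime the learned UDL/JUDL alignment targets.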

15.
Recently, single-image super-resolution reconstruction (SISR) via sparse coding has attracted increasing interest. Considering that medical images contain clearly repetitive structures, in this study we propose a regularized SISR method via sparse coding and structural similarity. Pixel-based recovery is incorporated as a regularization term to exploit the non-local structural similarities of medical images, which is very helpful in further improving the quality of the recovered images. An alternating optimization algorithm over the variables is proposed, and medical images including CT, MRI, and ultrasound images are used to investigate the performance of the proposed method. The results show the superiority of our method over its counterparts.
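The core subproblem of sparse-coding SR, finding a sparse code `a` with `D @ a ≈ y` for a patch `y` and dictionary `D`, is commonly solved with iterative shrinkage-thresholding (ISTA). A minimal sketch with toy values (the dictionary and parameters here are illustrative, not the paper's):

```python
import numpy as np

def ista(D, y, lam, steps=200):
    """ISTA for min_a 0.5*||D a - y||^2 + lam*||a||_1:
    gradient step on the quadratic term, then soft-thresholding."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        g = D.T @ (D @ a - y)              # gradient of the data term
        z = a - g / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return a

D = np.eye(3)                              # trivial dictionary for checking
y = np.array([2.0, 0.05, -1.0])
print(np.round(ista(D, y, lam=0.1), 2).tolist())
```

With an identity dictionary the solution is just entrywise soft-thresholding of `y`, which makes the shrinkage behavior easy to verify; real SR uses learned, overcomplete dictionaries.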

16.
Cell image segmentation plays a central role in numerous biology studies and clinical applications. As a result, the development of cell image segmentation algorithms with high robustness and accuracy is attracting increasing attention. In this study, an automated cell image segmentation algorithm is developed to improve cell boundary detection and the segmentation of clustered cells for all cells in the field of view in negative phase contrast images. A new method combining thresholding with an edge-based active contour method is proposed to optimize cell boundary detection. To segment clustered cells, the geographic peaks of cell light intensity are utilized to detect the number and locations of the clustered cells. In this paper, the working principles of the algorithms are described, and the influence of the parameters in cell boundary detection and of the threshold value on the final segmentation results is investigated. Finally, the proposed algorithm is applied to negative phase contrast images from different experiments and its performance is evaluated. Results show that the proposed method achieves optimized cell boundary detection and highly accurate segmentation of clustered cells.
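The peak-based counting of clustered cells can be illustrated in one dimension: each cell in a cluster contributes a local maximum to the intensity profile, so counting sufficiently bright local maxima estimates the number of touching cells. A crude 1D proxy for the 2D peak detection (the function name and threshold are illustrative):

```python
import numpy as np

def count_intensity_peaks(profile, min_height):
    """Count local maxima above min_height in a 1D intensity profile;
    each qualifying peak is taken as one cell in the cluster."""
    p = np.asarray(profile)
    interior = p[1:-1]
    is_peak = (interior > p[:-2]) & (interior > p[2:]) & (interior >= min_height)
    return int(is_peak.sum())

# two bright cells touching: two humps in the intensity profile
profile = [0, 2, 5, 2, 1, 3, 6, 3, 0]
print(count_intensity_peaks(profile, min_height=4))
```

The height threshold plays the same role as the threshold parameter the paper investigates: too low and noise bumps are counted as cells, too high and dim cells are missed.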

17.
This study aims to develop a content-based image retrieval (CBIR) system for T1-weighted contrast-enhanced MR (CE-MR) images of brain tumors. When a tumor region is fed to the CBIR system as a query, the system attempts to retrieve tumors of the same pathological category. The bag-of-visual-words (BoVW) model with partition learning is incorporated into the system to extract informative features for representing the image contents. Furthermore, a distance metric learning algorithm called Rank Error-based Metric Learning (REML) is proposed to reduce the semantic gap between low-level visual features and high-level semantic concepts. The effectiveness of the proposed method is evaluated on a brain T1-weighted CE-MR dataset with three types of brain tumors (meningioma, glioma, and pituitary tumor). Using the BoVW model with partition learning, the mean average precision (mAP) of retrieval increases by more than 4.6% with the learned distance metrics compared with the spatial-pyramid BoVW method. The distance metric learned by REML significantly outperforms three other existing distance metric learning methods in terms of mAP. The mAP of the CBIR system reaches 91.8% using the proposed method, and the precision reaches 93.1% when the top 10 images are returned by the system. These preliminary results demonstrate that the proposed method is effective and feasible for the retrieval of brain tumors in T1-weighted CE-MR images.
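The BoVW representation underlying the system can be sketched in a few lines: local descriptors are quantized to their nearest codebook word, and the normalized word histogram becomes the image signature that retrieval distances are computed on (partition learning and REML refine this basic pipeline; the toy codebook below is illustrative):

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Quantize each local descriptor to its nearest visual word
    (Euclidean distance) and return the normalized word histogram."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])        # 2 visual words
desc = np.array([[0.1, 0.0], [0.9, 1.1], [1.0, 0.9]])  # 3 local descriptors
print(bovw_histogram(desc, codebook).tolist())
```

Metric learning such as REML then replaces the plain Euclidean distance between such histograms with a learned one that better matches pathological categories.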

18.
Despite increases in image quality, including in medical imaging, image segmentation continues to represent a major bottleneck in practical applications because of noise and lack of contrast. In this paper, we present a new methodology to segment noisy, low-contrast medical images, with a view to developing practical applications. First, the contrast of the image is enhanced; then a modified graph-based method is applied. This paper makes two main contributions: (1) a contrast enhancement stage that suitably utilizes the noise present in the medical data, achieved through stochastic resonance theory applied in the wavelet domain; and (2) a new weighting function for traditional graph-based approaches. Both qualitative evaluation (by our clinicians/radiologists) and quantitative evaluation on publicly available computed tomography (CT) (MICCAI 2007 Grand Challenge workshop database) and cardiac magnetic resonance (CMR) databases reflect the potential of the proposed method, even in the presence of tumors/papillary muscles.

19.
In recent years, the progressive application of convolutional neural networks in image processing has successfully filtered into medical diagnosis. As a prerequisite for image detection and classification, object segmentation in medical images has attracted a great deal of attention. This study builds on the fact that most pathological diagnoses require nuclei detection as the starting phase for gaining insight into the underlying biological process and for further diagnosis. In this paper, inspired by the 2018 Data Science Bowl, we introduce an attention model embedded in a multi-bridge Wnet (AMB-Wnet) that suppresses irrelevant background areas and obtains good features for learning image semantics and modality, in order to segment nuclei automatically. The proposed architecture, consisting of a redesigned down-sample group, an up-sample group, and a middle block (a new multiple-scale convolutional-layer block), is designed to extract features at different levels. In addition, a connection group is proposed instead of skip connections to transfer semantic information among the levels. The attention model is embedded in the connection group, improving the model's performance without increasing the amount of computation. To validate the model's performance, we evaluated it on the BBBC038V1 dataset for nuclei segmentation. Our proposed model achieves an 85.83% F1-score, 97.81% accuracy, 86.12% recall, and 83.52% intersection over union. The proposed AMB-Wnet exhibits superior results compared with the original U-Net, MultiResUNet, and the recent Attention U-Net architecture.

20.
The benchmark method for the evaluation of breast cancers involves microscopic testing of a hematoxylin and eosin (H&E)-stained tissue biopsy. Resurgery is required in 20% to 30% of cases because of incomplete excision of malignant tissue. Therefore, a more accurate method of detecting the cancer margin is required to avoid the risk of recurrence. In recent years, convolutional neural networks (CNNs) have achieved excellent performance in medical image diagnosis: they automatically extract features from images and classify them. In this study, we apply a pretrained Inception-v3 CNN with reverse active learning to the classification of healthy and malignant breast tissue using optical coherence tomography (OCT) images. The proposed method attained a sensitivity, specificity, and accuracy of 90.2%, 91.7%, and 90%, respectively, on test datasets collected from 48 patients (22 with normal fibro-adipose tissue and 26 with invasive ductal carcinoma). The trained network is used for breast cancer margin assessment to predict tumors with negative margins, and the network output is correlated with the corresponding histology image. Our results lay the foundation for using the proposed method to perform automatic intraoperative identification of breast cancer margins in real time and to guide core-needle biopsies.
