Similar documents
1.
PurposeArtificial intelligence (AI) models are playing an increasing role in biomedical research and healthcare services. This review focuses on challenging points that must be clarified in order to develop AI applications as clinical decision support systems in the real-world context.MethodsA narrative review was performed, including a critical assessment of articles published between 1989 and 2021, which guided the challenge-oriented sections.ResultsWe first illustrate the architectural characteristics of machine learning (ML)/radiomics and deep learning (DL) approaches. For ML/radiomics, the phases of feature selection and of training, validation, and testing are described. DL models are presented as multi-layered artificial/convolutional neural networks, which can process images directly. The data curation section covers technical steps such as image labelling, image annotation (with segmentation as a crucial step in radiomics), data harmonization (enabling compensation for differences in imaging protocols that typically generate noise in non-AI imaging studies) and federated learning. Thereafter, we dedicate specific sections to: sample size calculation, considering multiple testing in AI approaches; procedures for data augmentation to work with limited and unbalanced datasets; and the interpretability of AI models (the so-called black box issue). Pros and cons of choosing ML versus DL to implement AI applications in medical imaging are finally presented in a synoptic way.ConclusionsBiomedicine and healthcare are among the most important fields for AI applications, and medical imaging is probably the most suitable and promising domain. Clarifying specific challenging points facilitates the development of such systems and their translation to clinical practice.

2.
PurposeMagnetic Resonance Imaging (MRI) makes an essential contribution to the screening, detection, diagnosis, staging, treatment and follow-up of patients with a neurological neoplasm. Deep learning (DL), a subdomain of artificial intelligence, has the potential to enhance the characterization, processing and interpretation of MRI images. The aim of this review paper is to give an overview of the current state-of-the-art use of DL in MRI for neuro-oncology.MethodsWe searched the PubMed database using a specific strategy combining MRI, DL, neuro-oncology and their corresponding search terminologies, focusing on Medical Subject Headings (MeSH) or title/abstract appearance. The original research papers were classified by application into three categories: technological innovation, diagnosis and follow-up.ResultsForty-one publications were eligible for review, all published after 2016. The majority (N = 22) were assigned to technological innovation, twelve focused on diagnosis and seven related to patient follow-up. Applications ranged from improving acquisition, synthetic CT generation and auto-segmentation to tumor classification, outcome prediction and response assessment. Most publications used standard sequences (T1w, cT1w, T2w and FLAIR imaging), with only a few exceptions using more advanced MRI technologies. Most studies used a variation on convolutional neural network (CNN) architectures.ConclusionDeep learning in MRI for neuro-oncology is a novel field of research with potential in a broad range of applications. Remaining challenges include the accessibility of large imaging datasets, applicability across institutes/vendors, and the validation and implementation of these technologies in clinical practice.

3.
《IRBM》2022,43(4):290-299
ObjectiveIn this research paper, brain MRI images from a public dataset are classified as benign or malignant tumors by exploiting the strengths of convolutional neural networks (CNNs).Materials and MethodsDeep learning (DL) methods have become more popular for image classification in recent years owing to their good performance. A CNN can extract features without handcrafted models and ultimately achieve better classification accuracy. The proposed hybrid model combines a CNN with a support vector machine (SVM) for classification and uses threshold-based segmentation for detection.ResultPrevious studies report the following accuracies for different models: Rough Extreme Learning Machine (RELM), 94.233%; Deep CNN (DCNN), 95%; Deep Neural Network (DNN) with Discrete Wavelet Autoencoder (DWA), 96%; k-nearest neighbors (kNN), 96.6%; CNN, 97.5%. The overall accuracy of the hybrid CNN-SVM is 98.4959%.ConclusionBrain cancer is among the most dangerous diseases, with a high death rate. Detecting and classifying brain tumors, which arise from abnormal cell growth and vary in shape, orientation and location, is a challenging task in medical imaging. Magnetic resonance imaging (MRI) is a typical medical imaging method for brain tumor analysis. Conventional machine learning (ML) techniques categorize brain cancer based on handcrafted features together with the radiologist's expert judgment, which can lead to execution failures and reduce the effectiveness of an algorithm. Overall, the proposed hybrid model provides a more effective classification technique.
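As a concrete illustration of the detection step described above, threshold-based segmentation can be sketched as follows. This is a minimal sketch only: the intensity values, image size, and threshold are invented, and the paper's actual pipeline is more elaborate.

```python
# Illustrative threshold-based segmentation for tumor detection.
# All intensity values and the threshold are hypothetical.

def threshold_segment(image, threshold):
    """Return a binary mask: 1 where intensity exceeds the threshold."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

# Toy 4x4 "MRI slice" with a bright region (tumor candidate).
slice_ = [
    [10, 12, 11,  9],
    [13, 15, 14, 10],
    [12, 90, 95, 11],
    [11, 92, 97, 12],
]
mask = threshold_segment(slice_, threshold=50)
n_tumor_pixels = sum(sum(row) for row in mask)
print(n_tumor_pixels)  # 4: only the four bright pixels exceed the threshold
```

In a real system the mask would feed the CNN/SVM stage; here it simply shows the thresholding idea.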

4.
《IRBM》2022,43(1):62-74
BackgroundThe prediction of breast cancer subtypes plays a key role in the diagnosis and prognosis of breast cancer. In recent years, deep learning (DL) has shown good performance in the intelligent prediction of breast cancer subtypes. However, most traditional DL models use single-modality data, from which only a few features can be extracted, so they cannot establish a stable relationship between patient characteristics and breast cancer subtypes.DatasetWe used the TCGA-BRCA dataset as the sample set for molecular subtype prediction of breast cancer. It is a public dataset available at https://portal.gdc.cancer.gov/projects/TCGA-BRCA.MethodsIn this paper, a hybrid DL model based on multimodal data is proposed. We combine each patient's gene-modality data with image-modality data to construct a multimodal fusion framework. According to their different forms and states, we set up a feature extraction network for each modality, and then fuse the outputs of the two networks based on the idea of weighted linear aggregation. Finally, the fused features are used to predict breast cancer subtypes. In particular, we use principal component analysis to reduce the dimensionality of the high-dimensional gene-modality data and filter the image-modality data. We also improve the traditional feature extraction network to achieve better performance.ResultsThe results show that, compared with traditional DL models, the proposed hybrid DL model predicts breast cancer subtypes more accurately and efficiently. Our model achieved a prediction accuracy of 88.07% over 10 repetitions of 10-fold cross-validation. A separate AUC test for each subtype yielded an average AUC of 0.9427. In terms of subtype prediction accuracy, our model is about 7.45% higher than the previous average.
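The weighted linear aggregation idea behind the fusion step can be sketched as below. The weights and feature values are hypothetical; the paper learns its own extraction networks and fusion weights, while this only shows the combination rule.

```python
# Sketch of weighted linear aggregation of two modality feature vectors.
# Weights (w_gene, w_image) and feature values are invented for illustration.

def fuse(gene_feats, image_feats, w_gene=0.6, w_image=0.4):
    """Weighted linear aggregation of two equal-length feature vectors."""
    assert len(gene_feats) == len(image_feats)
    return [w_gene * g + w_image * i for g, i in zip(gene_feats, image_feats)]

gene_feats  = [0.2, 0.8, 0.5]   # e.g. PCA-reduced gene-expression features
image_feats = [0.6, 0.1, 0.9]   # e.g. features from the image network
fused = fuse(gene_feats, image_feats)
print(fused)  # approximately [0.36, 0.52, 0.66]
```

The fused vector would then go to the subtype classifier head.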

5.
Microarray data analysis has been shown to provide an effective tool for studying cancer and genetic diseases. Although classical machine learning techniques have successfully been applied to find informative genes and to predict class labels for new samples, common restrictions of microarray analysis such as small sample sizes, a large attribute space and high noise levels still limit its scientific and clinical applications. Increasing the interpretability of prediction models while retaining a high accuracy would help to exploit the information content in microarray data more effectively. For this purpose, we evaluate our rule-based evolutionary machine learning systems, BioHEL and GAssist, on three public microarray cancer datasets, obtaining simple rule-based models for sample classification. A comparison with other benchmark microarray sample classifiers based on three diverse feature selection algorithms suggests that these evolutionary learning techniques can compete with state-of-the-art methods like support vector machines. The obtained models reach accuracies above 90% in two-level external cross-validation, with the added value of facilitating interpretation by using only combinations of simple if-then-else rules. As a further benefit, a literature mining analysis reveals that prioritizations of informative genes extracted from BioHEL's classification rule sets can outperform gene rankings obtained from a conventional ensemble feature selection in terms of the pointwise mutual information between relevant disease terms and the standardized names of top-ranked genes.
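The interpretability benefit of if-then-else rule models can be made concrete with a toy rule list in the spirit of those produced by BioHEL/GAssist. The gene names, thresholds, and class labels here are invented for illustration, not taken from the paper's rule sets.

```python
# A toy ordered rule list: the first matching rule decides the class,
# otherwise a default class is returned. All names/thresholds are invented.

rules = [
    # (condition over a sample dict, predicted class)
    (lambda s: s["GENE_A"] > 2.0 and s["GENE_B"] < 0.5, "tumor"),
    (lambda s: s["GENE_C"] > 1.0,                        "tumor"),
]
default_class = "normal"

def classify(sample):
    for condition, label in rules:
        if condition(sample):
            return label
    return default_class

print(classify({"GENE_A": 3.1, "GENE_B": 0.2, "GENE_C": 0.0}))  # tumor
print(classify({"GENE_A": 0.5, "GENE_B": 0.9, "GENE_C": 0.1}))  # normal
```

Such a model is directly readable by a domain expert, which is the interpretability advantage the abstract highlights.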

6.
Mammographic density has been proven to be an independent risk factor for breast cancer. Women with dense breast tissue visible on a mammogram have a much higher cancer risk than women with little density. A great research effort has been devoted to incorporating breast density into risk prediction models to better estimate each individual's cancer risk. In recent years, the passage of breast density notification legislation in many US states has required that every mammography report provide information regarding the patient's breast density. Accurate definition and measurement of breast density are thus important, as they may allow all the potential clinical applications of breast density to be implemented. Because the two-dimensional mammography-based measurement is subject to tissue overlap and thus cannot provide volumetric information, there is an urgent need to develop reliable quantitative measurements of breast density. Various new imaging technologies are being developed. Among these new modalities, volumetric mammographic density methods and three-dimensional magnetic resonance imaging are the best studied. Emerging modalities, including various x-ray-based, optical imaging, and ultrasound-based methods, have also been investigated. All these modalities may either overcome fundamental problems of mammographic density or provide additional density and/or compositional information. This review summarizes the current established and emerging imaging techniques for the measurement of breast density and the literature evidence for the clinical use of these density methods.
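A minimal area-based percent-density computation illustrates the two-dimensional measurement the review says is limited by tissue overlap (which is what the volumetric methods address). The binary mask of dense tissue here is synthetic; real measurements segment an actual mammogram.

```python
# Area-based percent density from a binary mask of dense tissue (synthetic).

def percent_density(dense_mask):
    """Percentage of pixels flagged as dense in a binary mask."""
    total = sum(len(row) for row in dense_mask)
    dense = sum(sum(row) for row in dense_mask)
    return 100.0 * dense / total

mask = [
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
print(percent_density(mask))  # 25.0  (3 of 12 pixels are dense)
```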

7.
Sequence-based residue contact prediction plays a crucial role in protein structure reconstruction. In recent years, the combination of evolutionary coupling analysis (ECA) and deep learning (DL) techniques has made tremendous progress in residue contact prediction, so a comprehensive assessment of current methods on a large-scale benchmark data set is much needed. In this study, we evaluate 18 contact predictors on 610 non-redundant proteins and 32 CASP13 targets from a wide range of perspectives. The results show that different methods suit different application scenarios: (1) DL methods based on multiple categories of inputs and large training sets are the best choices for low-contact-density proteins such as intrinsically disordered ones and proteins with shallow multi-sequence alignments (MSAs). (2) With at least 5L (L is sequence length) effective sequences in the MSA, all methods perform at their best, and methods that rely only on the MSA as input can reach achievements comparable to those of methods that adopt multi-source inputs. (3) For top L/5 and L/2 predictions, DL methods predict more hydrophobic interactions while ECA methods predict more salt bridges and disulfide bonds. (4) ECA methods detect more secondary structure interactions, while DL methods more accurately excavate contact patterns and prune isolated false positives. In general, multi-input DL methods with large training sets dominate current approaches with the best overall performance. Despite the great success of current DL methods, there is still much room for improvement: (1) with shallow MSAs, performance is greatly affected; (2) current methods show lower precisions for inter-domain than for intra-domain contact predictions, as well as highly imbalanced precisions across intra-domain predictions; (3) strong prediction similarities between DL methods indicate that more feature types and more diversified models need to be developed; (4) the runtime of most methods can be further optimized.
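The top-L/5 evaluation mentioned above can be sketched as follows: rank residue pairs by predicted score, keep the top L/5, and count how many are true contacts. The scores and contact set here are made up; benchmark evaluations also apply sequence-separation and distance criteria omitted from this sketch.

```python
# Top-k precision for contact prediction (toy scores and contacts).

def top_k_precision(scores, true_contacts, k):
    """scores: {(i, j): score}; true_contacts: set of (i, j) pairs."""
    ranked = sorted(scores, key=scores.get, reverse=True)[:k]
    hits = sum(1 for pair in ranked if pair in true_contacts)
    return hits / k

L = 10  # toy sequence length
scores = {(1, 5): 0.9, (2, 8): 0.8, (3, 7): 0.4, (4, 9): 0.2}
true_contacts = {(1, 5), (3, 7)}
print(top_k_precision(scores, true_contacts, k=L // 5))  # 0.5
```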

8.
Optical microscopy has emerged as a key driver of fundamental research, since it provides the ability to probe imperceptible structures in the biomedical world. For the detailed investigation of samples, a high-resolution image with enhanced contrast and minimal damage is preferred. To achieve this, automated image analysis is preferable to manual analysis in terms of both acquisition speed and reduced error accumulation. In this regard, deep learning (DL)-based image processing can be highly beneficial. This review summarises and critiques the use of DL in processing images collected with various optical microscopy techniques. In tandem with optical microscopy, DL has already found applications in various problems of image classification and segmentation. It has also performed well in enhancing image resolution in smartphone-based microscopy, which in turn enables crucial medical assistance in remote places.

9.
Innovations in CT have been impressive among imaging and medical technologies, in both the hardware and software domains. The range and speed of CT scanning improved with the introduction of multidetector-row CT scanners with wide-array detectors and faster gantry rotation speeds. To address concerns over rising radiation doses from increasing use, and to improve image quality, CT reconstruction techniques evolved from filtered back projection to the commercial release of iterative reconstruction techniques and, recently, of deep learning (DL)-based image reconstruction. These newer reconstruction techniques enable improved or retained image quality at lower radiation doses compared with filtered back projection. DL can aid image reconstruction by learning from training data without total reliance on the physical model of the imaging process; the unique artifacts of photon-counting detector CT (PCD-CT) due to charge sharing, K-escape, fluorescence x-ray emission, and pulse pileup can be handled in this data-driven fashion. Given sufficiently well-reconstructed images, a well-designed network can be trained to raise image quality above a practical/clinical threshold or to enable new applications. Moreover, the much smaller detector pixels of PCD-CT can lead to huge computational costs with traditional model-based iterative reconstruction methods, whereas deep networks can be much faster to train and validate. In this review, we present techniques, applications, and limitations of deep learning-based image reconstruction methods in CT.

10.
Over the last few years, deep learning (DL) approaches have been shown to outperform state-of-the-art machine learning (ML) techniques in many applications, such as vegetation forecasting, sales forecasting, weather prediction, crop yield prediction, landslide detection and even COVID-19 spread prediction. Several DL algorithms have been employed to facilitate vegetation forecasting research using remotely sensed (RS) data. Vegetation is an extremely important component of our global ecosystem and a necessary indicator of land cover dynamics and productivity. Vegetation phenology is influenced by lifecycle patterns, seasonality and weather conditions, leading to changes in spectral reflectance. Various relevant information, such as vegetation indices (VIs), can be extracted from RS data for vegetation forecasting; among these, the Normalized Difference Vegetation Index (NDVI) is one of the most widely recognized indices for vegetation-related studies. This paper reviews work on DL-based spatio-temporal vegetation forecasting using RS data published between 2015 and 2021. We present several DL-based studies and discuss the DL algorithms and the various data sources used in these studies. The purpose of this work is to highlight open challenges such as spatio-temporal prediction issues, spatial and temporal non-stationarity, data fusion, hybrid approaches, deep transfer learning and large parameter requirements. We also outline future directions and the limits of DL for vegetation forecasting.
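The NDVI highlighted by the review is computed from near-infrared and red reflectance as NDVI = (NIR - Red) / (NIR + Red); a tiny sketch with illustrative reflectance values:

```python
# NDVI = (NIR - Red) / (NIR + Red); reflectance values are illustrative.

def ndvi(nir, red):
    return (nir - red) / (nir + red)

print(ndvi(nir=0.5, red=0.1))   # ~0.667: dense, healthy vegetation
print(ndvi(nir=0.2, red=0.18))  # ~0.05:  sparse or stressed cover
```

Time series of such per-pixel NDVI values are exactly what the reviewed DL models forecast.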

11.
Depression is one of the psychiatric disorders causing the greatest harm in today's society, and one whose etiology and pathological mechanisms are among the most complex. Identifying objective biological markers of depression has long been a key and difficult goal of psychiatric research and clinical practice, and magnetic resonance imaging (MRI) combined with artificial intelligence is currently considered the approach most likely to achieve a breakthrough in objective biomarkers for depression and other psychiatric disorders. However, candidate psychoradiological biomarkers of depression have not yet reached a consistent conclusion. From the perspective of psychoradiology combined with artificial intelligence techniques, represented by machine learning (ML) and deep learning (DL), this paper is the first to review and analyze research on computer-aided care for depression across the three clinical stages of diagnosis, prevention, and treatment. We find that: a. brain regions of diagnostic value are concentrated in the precuneus, cingulate gyrus, inferior parietal (angular) gyrus, insula, thalamus, and hippocampus; b. brain regions of preventive value are concentrated in the precuneus, postcentral gyrus, dorsolateral prefrontal cortex, orbitofrontal cortex, and middle temporal gyrus; c. brain regions of value for predicting treatment response are concentrated in the precuneus, cingulate gyrus, inferior parietal (angular) gyrus, middle frontal gyrus, middle and inferior occipital gyri, and lingual gyrus. Future studies could increase sample sizes through multi-center collaboration and data transformation, and apply diverse non-imaging data to data mining. This would improve the auxiliary classification ability of artificial intelligence models and provide scientific evidence and a reference for the search for objective psychoradiological biomarkers of depression and their clinical application.

12.

Background

Clinical data such as patient history, laboratory analyses, and ultrasound parameters (the basis of day-to-day clinical decision support) are often used to guide the clinical management of cancer even when microarray data are available. Several data fusion techniques are available to integrate genomics or proteomics data, but only a few studies have created a single prediction model using both gene expression and clinical data. These studies often remain inconclusive as to whether prediction performance actually improves. To improve clinical management, these data should be fully exploited. This requires efficient algorithms to integrate the data sets and design a final classifier.LS-SVM classifiers and generalized eigenvalue/singular value decompositions are successfully used in many bioinformatics applications for prediction tasks. Drawing on the benefits of these two techniques, we propose a machine learning approach, a weighted LS-SVM classifier, to integrate two data sources: microarray and clinical parameters.
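One common way to sketch the weighting idea, under the assumption that each data source contributes its own kernel, is a convex combination of per-source Gram matrices, K = w * K_clinical + (1 - w) * K_microarray, which a kernel classifier then uses in place of a single-source kernel. The feature vectors and weight below are toy values, and this is not the paper's exact formulation.

```python
# Weighted combination of two per-source linear kernels (toy data).

def linear_kernel(X):
    """Gram matrix of dot products for a list of feature vectors."""
    return [[sum(a * b for a, b in zip(x, y)) for y in X] for x in X]

def combine(K1, K2, w):
    """Convex combination w*K1 + (1-w)*K2, entry by entry."""
    return [[w * a + (1 - w) * b for a, b in zip(r1, r2)]
            for r1, r2 in zip(K1, K2)]

clinical   = [[1.0, 0.0], [0.0, 1.0]]   # two patients, clinical features
microarray = [[0.5, 0.5], [0.5, 0.5]]   # same patients, expression features
K = combine(linear_kernel(clinical), linear_kernel(microarray), w=0.7)
print(K)  # approximately [[0.85, 0.15], [0.15, 0.85]]
```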

Results

We compared and evaluated the proposed methods on five breast cancer case studies. Compared to an LS-SVM classifier on the individual data sets, generalized eigenvalue decomposition (GEVD) and kernel GEVD, the proposed weighted LS-SVM classifier offers good prediction performance, in terms of test area under the ROC curve (AUC), on all breast cancer case studies.

Conclusions

Thus a clinical classifier weighted with the microarray data set results in significantly improved diagnosis, prognosis and prediction of response to therapy. The proposed model is a promising mathematical framework for both data fusion and non-linear classification problems.

13.
《Médecine Nucléaire》2007,31(4):132-141
Despite recent progress, breast cancer remains a major public health problem today, representing the leading cause of cancer morbidity in women, with 42,000 new cases and 11,600 deaths per year in France. X-ray mammography, the "gold standard" exam for breast screening, offers excellent sensitivity (although its quality varies with breast density). However, its specificity for malignancy diagnosis remains moderate, leading to many unnecessary interventions for lesions proven a posteriori to be benign by histology. The other imaging techniques, such as echography and magnetic resonance imaging (MRI), also have their own limits. Echography is strongly operator-dependent. Dynamic MRI with injection of contrast agents has a high sensitivity for breast cancer detection (>90%) but suffers from moderate specificity (50 to 80% depending on the type of cancer). In parallel, although strongly subjective, palpation remains a major act in the breast screening workflow. Since ancient Egyptian times, physicians have palpated body parts to assess tissue stiffness, and a poorly deformable mass within an organ is often related to the presence of an abnormal lesion. Palpation is useful not only for screening and diagnosis: the surgeon also uses it during interventions to be guided effectively towards the pathological area. Recently, new techniques based on ultrasound or magnetic resonance imaging have finally made it possible to map organ elasticity quantitatively. These "elastography" techniques could soon play an important role in medical imaging.

14.
Background: Breast cancer is the most common type of cancer in women worldwide. Mammography is considered the "gold standard" in the imaging evaluation of the breast. Apart from mammography, ultrasound examination and magnetic resonance imaging are offered as adjuncts to the preoperative workup. Recently, other new modalities such as positron emission tomography, 99mTc-sestamibi scintimammography, and electrical impedance tomography (EIT) have also been offered. However, there is still controversy over the most appropriate use of these new modalities. Based on the literature, this review evaluates the role of the various modalities used in the screening and diagnosis of breast cancer. Methods and Results: Based on the relevant literature, this article gives an overview of the old and new modalities used in breast imaging. A narrative review of all relevant papers known to the authors was conducted; the literature search used the PubMed and Ovid search engines, and additional references were found through bibliography reviews of relevant articles. It was clear that, although various new techniques and methods have emerged, none has replaced mammography, which remains the only proven screening method for the breast to date. Conclusion: From the literature it is clear that, regarding modern radiology's impact on diagnosis, staging and patient follow-up, only one imaging technique has had a significant impact on screening asymptomatic individuals for cancer: low-dose mammography. Mammography is the only proven screening test in breast imaging. Positron emission tomography (PET) also plays an important role in staging breast cancer and monitoring treatment response. As imaging techniques improve, the role of imaging will continue to evolve, with the goal remaining a decrease in breast cancer morbidity and mortality. Progress in the development and commercialisation of EIT breast imaging systems will help promote other systems and applications based on EIT and similar visualization methods. Breast ultrasound and breast magnetic resonance imaging (MRI) are frequently used adjuncts to mammography in today's clinical practice; these techniques enhance the radiologist's ability to detect cancer and assess disease extent, which is crucial in treatment planning and staging.

15.
MOTIVATION: Cancer diagnosis is one of the most important emerging clinical applications of gene expression microarray technology. We are seeking to develop a computer system for powerful and reliable cancer diagnostic model creation based on microarray data. To keep a realistic perspective on clinical applications we focus on multicategory diagnosis. To equip the system with the optimum combination of classifier, gene selection and cross-validation methods, we performed a systematic and comprehensive evaluation of several major algorithms for multicategory classification, several gene selection methods, multiple ensemble classifier methods and two cross-validation designs using 11 datasets spanning 74 diagnostic categories and 41 cancer types and 12 normal tissue types. RESULTS: Multicategory support vector machines (MC-SVMs) are the most effective classifiers in performing accurate cancer diagnosis from gene expression data. The MC-SVM techniques by Crammer and Singer, Weston and Watkins and one-versus-rest were found to be the best methods in this domain. MC-SVMs outperform other popular machine learning algorithms, such as k-nearest neighbors, backpropagation and probabilistic neural networks, often to a remarkable degree. Gene selection techniques can significantly improve the classification performance of both MC-SVMs and other non-SVM learning algorithms. Ensemble classifiers do not generally improve performance of the best non-ensemble models. These results guided the construction of a software system GEMS (Gene Expression Model Selector) that automates high-quality model construction and enforces sound optimization and performance estimation procedures. This is the first such system to be informed by a rigorous comparative analysis of the available algorithms and datasets. AVAILABILITY: The software system GEMS is available for download from http://www.gems-system.org for non-commercial use. CONTACT: alexander.statnikov@vanderbilt.edu.
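The one-versus-rest scheme the paper evaluates trains one binary scorer per class and predicts the class whose scorer gives the largest score. In this sketch the linear scorers are hand-set stand-ins for trained SVMs, and the class names and weights are invented.

```python
# One-versus-rest decision rule with hand-set linear scorers (invented values).

scorers = {  # class -> (weight vector, bias)
    "leukemia": ([1.0, -0.5], 0.0),
    "lymphoma": ([-0.2, 1.0], 0.1),
    "normal":   ([-0.5, -0.5], 0.5),
}

def predict(x):
    def score(wb):
        w, b = wb
        return sum(wi * xi for wi, xi in zip(w, x)) + b
    # argmax over per-class binary scores
    return max(scorers, key=lambda c: score(scorers[c]))

print(predict([2.0, 0.0]))  # leukemia: its scorer gives the largest score
```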

16.
Training and testing of conventional machine learning models on binary classification problems depend on the proportions of the two outcomes in the relevant data sets. This may be especially important in practical terms when real-world applications of the classifier are either highly imbalanced or occur in unknown proportions. Intuitively, it may seem sensible to train machine learning models on data similar to the target data in terms of proportions of the two binary outcomes. However, we show that this is not the case using the example of prediction of deleterious and neutral phenotypes of human missense mutations in human genome data, for which the proportion of the binary outcome is unknown. Our results indicate that using balanced training data (50% neutral and 50% deleterious) results in the highest balanced accuracy (the average of True Positive Rate and True Negative Rate), Matthews correlation coefficient, and area under ROC curves, no matter what the proportions of the two phenotypes are in the testing data. Besides balancing the data by undersampling the majority class, other techniques in machine learning include oversampling the minority class, interpolating minority-class data points and various penalties for misclassifying the minority class. However, these techniques are not commonly used in either the missense phenotype prediction problem or in the prediction of disordered residues in proteins, where the imbalance problem is substantial. The appropriate approach depends on the amount of available data and the specific problem at hand.
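Balancing by undersampling the majority class, the strategy the study finds best for training, can be sketched as below. The labels are synthetic, and a fixed seed is used so the sketch is reproducible.

```python
# Undersample the majority class to a 50/50 training set (synthetic labels).

import random

def undersample(samples, labels):
    """Return a balanced subset by randomly trimming the majority class."""
    pos = [s for s, y in zip(samples, labels) if y == 1]
    neg = [s for s, y in zip(samples, labels) if y == 0]
    n = min(len(pos), len(neg))
    rng = random.Random(0)  # fixed seed for reproducibility
    return rng.sample(pos, n) + rng.sample(neg, n), [1] * n + [0] * n

samples = list(range(10))
labels = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]  # 2 positives, 8 negatives
X, y = undersample(samples, labels)
print(len(X), sum(y))  # 4 2  -> two samples of each class
```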

17.
Medical imaging modalities such as MRI, PET, and CT play an increasingly important role in drug development and precision medicine. Imaging can be used to diagnose disease, evaluate drug efficacy, select suitable patients, and determine dosage. With the development of artificial intelligence, and in particular the application of machine learning and deep learning to medical imaging, higher-quality images can be acquired in less time and with lower radiation doses. These techniques can also help radiologists shorten reading time and improve diagnostic accuracy. In addition, machine learning can improve the feasibility and precision of quantitative analysis and help establish relationships between imaging, genes, and the clinical manifestations of disease. This article first introduces the applications of the different imaging modalities in drug development and precision medicine, then summarizes the roles of machine learning in medical imaging, and finally discusses the challenges and opportunities in this field.

18.
Unsupervised clustering represents a powerful technique for self-organized segmentation of biomedical image time series data describing groups of pixels exhibiting similar properties of local signal dynamics. The theoretical background is presented in the beginning, followed by several medical applications demonstrating the flexibility and conceptual power of these techniques. These applications range from functional MRI data analysis to dynamic contrast-enhanced perfusion MRI and breast MRI. For fMRI, these methods can be employed to identify and separate time courses of interest, along with their associated spatial patterns. When applied to dynamic perfusion MRI, they identify groups of voxels associated with time courses that are clinically informative and straightforward to interpret. In breast MRI, a segmentation of the lesion is achieved and in addition a subclassification is obtained within the lesion with regard to regions characterized by different MRI signal time courses. In the present paper, we conclude that unsupervised clustering techniques provide a robust method for blind analysis of time series image data in the important and current field of functional and dynamic MRI.
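The core idea of grouping pixel time courses by similar signal dynamics can be sketched with a minimal k-means loop. The time courses below are synthetic (a flat baseline group versus an "activated" group), and the deterministic initialization is a simplification of real clustering pipelines.

```python
# Minimal k-means over pixel time courses (synthetic data, deterministic init).

def kmeans(courses, k, iters=10):
    centers = courses[:k]  # deterministic init for this sketch
    groups = []
    for _ in range(iters):
        # assign each time course to the nearest centre (squared distance)
        groups = [[] for _ in range(k)]
        for c in courses:
            d = [sum((a - b) ** 2 for a, b in zip(c, ctr)) for ctr in centers]
            groups[d.index(min(d))].append(c)
        # recompute centres as the per-timepoint mean of each group
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Two kinds of dynamics: flat baseline vs. strong "activation" at t = 2, 3.
courses = [[0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 0.1, 0], [0, 0, 0.9, 1.1]]
centers, groups = kmeans(courses, k=2)
print(sorted(len(g) for g in groups))  # [2, 2]: baseline vs. activated pixels
```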

19.
HER2 testing in breast and gastric cancers is mandatory for treatment with trastuzumab. We hypothesized that imaging mass spectrometry (IMS) of breast cancers may be useful for generating a classifier that can determine HER2 status in other cancer entities irrespective of the primary tumor site. A total of 107 breast (n = 48) and gastric (n = 59) cryo tissue samples were analyzed by IMS (HER2 was present in 29 cases). The obtained proteomic profiles were used to create HER2 prediction models using different classification algorithms. A breast cancer proteome-derived classifier, with HER2 present in 15 cases, correctly predicted HER2 status in gastric cancers with a sensitivity of 65% and a specificity of 92%. To create a universal classifier for HER2 status, breast and nonbreast cancer samples were combined, which increased sensitivity to 78% with a specificity of 88%. Our proof-of-principle study provides evidence that HER2 status can be identified at the proteomic level across different cancer types, suggesting that HER2 overexpression may constitute a unique molecular event independent of the tumor site. Furthermore, these results indicate that IMS may be useful for determining potential druggable targets, as it offers a quicker, cheaper, and more objective analysis than the standard HER2-testing procedures, immunohistochemistry and fluorescence in situ hybridization.

20.
One common and challenging problem faced by many bioinformatics applications, such as promoter recognition, splice site prediction, RNA gene prediction, drug discovery and protein classification, is the imbalance of the available datasets. In most of these applications, positive data examples are largely outnumbered by negative ones, which often leads to sub-optimal prediction models with a high negative recognition rate (Specificity = SP) and a low positive recognition rate (Sensitivity = SE). When class imbalance learning methods are applied, the SE is usually increased at the expense of some SP. In this paper, we point out that in these data-imbalanced bioinformatics applications, the goal of applying class imbalance learning methods should be to increase the SE as much as possible while keeping the reduction of SP as small as possible. We explain that the existing performance measures used in class imbalance learning can still produce sub-optimal models with respect to this classification goal. To overcome these problems, we introduce a new performance measure called the Adjusted Geometric-mean (AGm). Experimental results obtained on ten real-world imbalanced bioinformatics datasets demonstrate that the AGm metric achieves a lower rate of reduction of SP than the existing performance metrics when increasing the SE through class imbalance learning methods. This characteristic makes the AGm metric more suitable for achieving the proposed classification goal when learning from imbalanced bioinformatics datasets.
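The SE/SP trade-off the paper targets can be seen with sensitivity, specificity, and the plain (unadjusted) geometric mean; the paper's AGm adds a weighting whose exact definition is given in the paper itself and is not reproduced here. Confusion-matrix counts below are toy values.

```python
# Sensitivity, specificity, and the unadjusted geometric mean (toy counts).

import math

def se_sp_gmean(tp, fn, tn, fp):
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    return se, sp, math.sqrt(se * sp)

# A model that raises SE from 0.50 to 0.90 at the cost of some SP:
print(se_sp_gmean(tp=5, fn=5, tn=90, fp=10))   # SE 0.5,  SP 0.9,  Gm ~0.67
print(se_sp_gmean(tp=9, fn=1, tn=72, fp=28))   # SE 0.9,  SP 0.72, Gm ~0.81
```

The plain G-mean rewards the SE gain here; AGm is designed to penalize the SP loss more strongly.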


Copyright©北京勤云科技发展有限公司  京ICP备09084417号