Similar Literature
1.
《IRBM》2022,43(6):573-584
Objectives

This paper presents a new method for Atrial Fibrillation detection based on belief functions theory.

Materials and methods

The theoretical framework makes it possible to handle missing and uncertain data, to aggregate evidence independently of the order of the information sources, and to reject a decision when the supporting evidence is insufficient. The proposed method is evaluated on real signals from Intensive Care Units available in the MIMIC-III database and compared with state-of-the-art technologies and methods.

Results

The precision of the proposed method is 90.03%, which is 2% higher than existing methods in the literature.

Conclusion

While almost all existing methods rely on ECG signals sampled at high frequency, mainly 125 Hz, to achieve good accuracy, our approach achieves comparable performance using physiological signals sampled at the low frequency of 0.016 Hz, without the need for an ECG. This allows a significant reduction in energy consumption, data size, and processing complexity.

2.
Introduction

Increased access to remote sensing datasets presents opportunities to model an animal's in-situ experience of the landscape to study behavior and test hypotheses such as geomagnetic map navigation. MagGeo is an open-source tool that combines high spatiotemporal resolution geomagnetic data with animal tracking data. Unlike gridded remote sensing data, satellite geomagnetic data are point-based measurements of the magnetic field at the location of each satellite. MagGeo converts these measurements into geomagnetic values at an animal's location and time. The objective of this paper is to evaluate different interpolation methods and data frameworks within the MagGeo software and to quantify how accurately MagGeo can model the geomagnetic values and patterns experienced by animals.

Method

We tested MagGeo outputs against data from 109 terrestrial geomagnetic observatories across 7 years. Unlike satellite data, ground-based data are more likely to represent how animals near the Earth's surface experience geomagnetic field dynamics. Within the MagGeo framework, we compared an inverse-distance weighting interpolation with three different nearest-neighbour interpolation methods. We also compared model geomagnetic data with combined model and satellite data in their ability to capture geomagnetic fluctuations. Finally, we fit a linear mixed-effect model to understand how error is influenced by factors such as geomagnetic activity and the distance in space and time between the satellite and the point of interest.

Results and conclusions

The overall absolute difference between MagGeo outputs and observatory values was <1% of the total possible range of values for geomagnetic components. Satellite measurements closest in time to the point of interest consistently had the lowest error, which likely reflects the ability of the nearest-neighbour-in-time interpolation method to capture both small continuous daily fluctuations and larger discrete events such as geomagnetic storms. Combined model and satellite data also capture geomagnetic fluctuations better than model data alone across most geomagnetic activity levels. Our linear mixed-effect models suggest that most of the variation in error can be explained by location-specific effects originating largely from local crustal biases, and that high geomagnetic activity usually predicts higher error, though error ultimately remains within the 1% range. Our results indicate that MagGeo can help researchers explore how animals may use the geomagnetic field to navigate long distances by providing access to data and methods that accurately model how animals moving near the Earth's surface experience the geomagnetic field.
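The inverse-distance weighting interpolation evaluated above can be illustrated with a minimal pure-Python sketch over 2-D coordinates (the function name and signature are illustrative, not MagGeo's actual API; MagGeo operates on satellite geomagnetic measurements in space and time):

```python
import math

def idw_interpolate(points, values, target, power=2):
    """Inverse-distance-weighted estimate at `target` from point
    measurements (e.g. readings at known sensor locations).
    Closer points receive larger weights 1/d**power."""
    num = den = 0.0
    for (x, y), v in zip(points, values):
        d = math.hypot(x - target[0], y - target[1])
        if d == 0.0:
            return v  # target coincides with a measurement point
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den
```

For a target midway between two measurements of 10 and 20, the estimate is their plain average, 15.0; moving the target toward the first point pulls the estimate toward 10.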

3.
R.R. Janghel  Y.K. Rathore 《IRBM》2021,42(4):258-267
Objectives

Alzheimer's Disease (AD) is the most common type of dementia and, in all leading countries, one of the primary causes of death among senior citizens. It is currently diagnosed by calculating the MMSE score and by manual study of MRI scans. Various machine learning methods have also been used for automatic diagnosis, but existing approaches have limitations in terms of accuracy. The main objective of this paper is therefore to add a preprocessing step before the CNN model to increase classification accuracy.

Materials and method

In this paper, we present a deep learning-based approach for the detection of Alzheimer's Disease using the ADNI database of Alzheimer's disease patients; the dataset contains fMRI and PET images of Alzheimer's patients along with images of normal subjects. We applied 3D-to-2D conversion and image resizing before using the VGG-16 convolutional neural network architecture for feature extraction. Finally, SVM, linear discriminant, K-means clustering, and decision tree classifiers were used for classification.

Results

The experimental results show that an average accuracy of 99.95% is achieved for classification of the fMRI dataset, while an average accuracy of 73.46% is achieved with the PET dataset. Comparing results on the basis of accuracy, specificity, sensitivity, and other parameters, we found these results to be better than those of existing methods.

Conclusions

This paper suggests a way to increase the performance of CNN models by applying preprocessing to the image dataset before feeding it to the CNN architecture for feature extraction. We applied this method to the ADNI database, and comparison of accuracies with similar approaches shows better results.

4.
Xiong  Ying  Chen  Shuai  Tang  Buzhou  Chen  Qingcai  Wang  Xiaolong  Yan  Jun  Zhou  Yi 《BMC bioinformatics》2021,22(1):1-18
Background

For differential abundance analysis, zero-inflated generalized linear mixed models (GLMMs), typically zero-inflated negative binomial (NB) models, have been increasingly used to model microbiome and other sequencing count data. A common assumption in estimating the false discovery rate (FDR) is that the p values are uniformly distributed under the null hypothesis, which demands that the postulated model fit the count data adequately. Mis-specification of the distribution of the count data may lead to excess false discoveries. Therefore, model checking is critical for controlling the FDR at a nominal level in differential abundance analysis. A growing number of studies show that the method of randomized quantile residuals (RQRs) performs well in diagnosing count regression models. However, the performance of RQRs in diagnosing zero-inflated GLMMs for sequencing count data has not been extensively investigated in the literature.

Results

We conduct large-scale simulation studies to investigate the performance of RQRs for zero-inflated GLMMs. The simulation studies show that the type I error rates of the goodness-of-fit (GOF) tests with RQRs are very close to the nominal level; in addition, scatter plots and Q–Q plots of RQRs are useful for discerning good and bad models. We also apply RQRs to diagnose six GLMMs fitted to a real microbiome dataset. The results show that the OTU counts at the genus level of this dataset (after a truncation treatment) can be modelled well by zero-inflated and zero-modified NB models.

Conclusion

RQRs are an excellent tool for diagnosing GLMMs for zero-inflated count data, particularly the sequencing count data arising in microbiome studies. In the supplementary materials, we provide two generic R functions, rqr.glmmtmb and rqr.hurdle.glmmtmb, for calculating RQRs given the fitted output of the R package glmmTMB.

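The randomized quantile residual construction behind the paper's supplementary rqr.glmmtmb functions can be sketched for the simplest case, a plain Poisson model (the paper's R functions handle zero-inflated GLMMs fitted with glmmTMB; this Python version only illustrates the RQR idea: draw a uniform value between consecutive CDF steps, then map it through the standard normal inverse CDF):

```python
import random
from math import exp
from statistics import NormalDist

def poisson_cdf(k, lam):
    """P(Y <= k) for Y ~ Poisson(lam); returns 0 for k < 0."""
    if k < 0:
        return 0.0
    term, total = exp(-lam), 0.0
    for i in range(k + 1):
        total += term
        term *= lam / (i + 1)
    return min(total, 1.0)

def rqr_poisson(y, lam, rng=random):
    """Randomized quantile residual of observed count y under a fitted
    Poisson(lam): draw U uniformly on (F(y-1), F(y)), then transform by
    the standard normal inverse CDF. Under a correctly specified model
    the residuals are approximately N(0, 1)."""
    lo, hi = poisson_cdf(y - 1, lam), poisson_cdf(y, lam)
    u = min(max(rng.uniform(lo, hi), 1e-12), 1.0 - 1e-12)
    return NormalDist().inv_cdf(u)
```

Plotting many such residuals against fitted values, or in a Q–Q plot against N(0, 1), is exactly the model-checking use described in the Results above.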

5.
MOTIVATION: Microarray data are used in a range of application areas in biology, although they often contain considerable numbers of missing values. These missing values can significantly affect subsequent statistical analysis and machine learning algorithms, so there is a strong motivation to estimate them as accurately as possible before such algorithms are applied. While many imputation algorithms have been proposed, more robust techniques need to be developed so that further analysis of biological data can be undertaken accurately. In this paper, an innovative missing value imputation algorithm called collateral missing value estimation (CMVE) is presented, which uses multiple covariance-based imputation matrices for the final prediction of missing values. The matrices are computed and optimized using least-squares regression and linear programming methods. RESULTS: The new CMVE algorithm has been compared with existing estimation techniques, including Bayesian principal component analysis imputation (BPCA), least square impute (LSImpute) and K-nearest neighbour (KNN). All these methods were rigorously tested on estimating missing values in three separate non-time-series (ovarian cancer based) datasets and one time-series (yeast sporulation) dataset. Each method was quantitatively analyzed using the normalized root mean square (NRMS) error measure, covering a wide range of randomly introduced missing value probabilities from 0.01 to 0.2. Experiments were also undertaken on the yeast dataset, which comprised 1.7% actual missing values, to test the hypothesis that CMVE performs better not only for randomly occurring but also for a real distribution of missing values. The results confirmed that CMVE consistently demonstrated superior and robust estimation of missing values compared with the other methods, for both types of data, at the same order of computational complexity. A concise theoretical framework has also been formulated to validate the improved performance of the CMVE algorithm. AVAILABILITY: The CMVE software is available upon request from the authors.

6.
Background

Recent developments in neuroimaging and genetic testing technologies have made it possible to measure pathological features associated with Alzheimer's disease (AD) in vivo. Mining potential molecular markers of AD from high-dimensional, multi-modal neuroimaging and omics data will provide a new basis for early diagnosis of and intervention in AD. To discover real pathogenic mutations and even understand the pathogenic mechanism of AD, many machine learning methods have been designed and successfully applied to the analysis and processing of large-scale AD biomedical data.

Objective

To introduce and summarize the applications and challenges of machine learning methods in Alzheimer's disease multi-source data analysis.

Methods

The literature selected in the review was obtained from Google Scholar, PubMed, and Web of Science. Retrieval keywords included Alzheimer's disease, bioinformatics, imaging genetics, genome-wide association studies, molecular interaction networks, multi-omics data integration, and so on.

Conclusion

This study comprehensively introduces machine learning-based processing techniques for AD neuroimaging data and then reviews the progress of computational analysis methods for omics data, such as the genome and proteome. Machine learning methods for AD imaging analysis are also summarized. Finally, we elaborate on the current emerging techniques for joint analysis of multi-modal neuroimaging and multi-omics data, and present some outstanding issues and future research directions.

7.
MOTIVATION: High-throughput measurement techniques for metabolism and gene expression provide a wealth of information for the identification of metabolic network models. Yet, missing observations scattered over the dataset restrict the number of effectively available datapoints and make classical regression techniques inaccurate or inapplicable. Thorough exploitation of the data by identification techniques that explicitly cope with missing observations is therefore of major importance. RESULTS: We develop a maximum-likelihood approach for the estimation of unknown parameters of metabolic network models that relies on the integration of statistical priors to compensate for the missing data. In the context of the linlog metabolic modeling framework, we implement the identification method with an Expectation-Maximization (EM) algorithm and with a simpler direct numerical optimization method. We evaluate the performance of our methods by comparison to existing approaches, and show that our EM method provides the best results over a variety of simulated scenarios. We then apply the EM algorithm to a real problem, the identification of a model for the Escherichia coli central carbon metabolism, based on challenging experimental data from the literature. This leads to promising results and allows us to highlight critical identification issues.

8.

Background

Untargeted mass spectrometry (MS)-based metabolomics data often contain missing values that reduce statistical power and can introduce bias in biomedical studies. However, a systematic assessment of the various sources of missing values and strategies to handle these data has received little attention. Missing data can occur systematically, e.g. from run day-dependent effects due to limits of detection (LOD); or it can be random as, for instance, a consequence of sample preparation.

Methods

We investigated patterns of missing data in an MS-based metabolomics experiment of serum samples from the German KORA F4 cohort (n = 1750). We then evaluated 31 imputation methods in a simulation framework and biologically validated the results by applying all imputation approaches to real metabolomics data. We examined the ability of each method to reconstruct biochemical pathways from data-driven correlation networks, and the ability of the method to increase statistical power while preserving the strength of established metabolic quantitative trait loci.

Results

Run day-dependent LOD-based missing data accounts for most missing values in the metabolomics dataset. Although multiple imputation by chained equations performed well in many scenarios, it is computationally and statistically challenging. K-nearest neighbors (KNN) imputation on observations with variable pre-selection showed robust performance across all evaluation schemes and is computationally more tractable.

Conclusion

Missing data in untargeted MS-based metabolomics occur for various reasons. Based on our results, we recommend performing KNN-based imputation on observations with variable pre-selection, since it showed robust results in all evaluation schemes.
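The KNN imputation recommended above can be sketched in a minimal pure-Python form over rows of a data matrix (the paper's approach additionally applies variable pre-selection, which is omitted here; `knn_impute` is an illustrative helper, not the authors' code):

```python
import math

def knn_impute(rows, k=2):
    """Fill None entries of each row with the average of the k nearest
    rows, where distance is Euclidean over jointly observed features.
    Returns a new matrix; the input is left untouched."""
    out = [row[:] for row in rows]
    for i, row in enumerate(rows):
        missing = [j for j, v in enumerate(row) if v is None]
        if not missing:
            continue
        cands = []
        for m, other in enumerate(rows):
            # candidate neighbours must observe the features we need
            if m == i or any(other[j] is None for j in missing):
                continue
            shared = [(a, b) for a, b in zip(row, other)
                      if a is not None and b is not None]
            if shared:
                d = math.sqrt(sum((a - b) ** 2 for a, b in shared))
                cands.append((d, other))
        neighbours = [o for _, o in sorted(cands, key=lambda t: t[0])[:k]]
        for j in missing:
            if neighbours:
                out[i][j] = sum(o[j] for o in neighbours) / len(neighbours)
    return out
```

On a tiny matrix where the third sample is missing its second feature, the two nearest complete samples supply the imputed value as their average.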

9.
Purpose

Accurate detection and treatment of Coronary Artery Disease currently relies mainly on invasive Coronary Angiography, which could be avoided if a robust, non-invasive detection methodology emerged. Despite the progress of computational systems, this remains a challenging issue. The present research investigates Machine Learning and Deep Learning methods for competing with the medical experts' diagnostic yield. Although highly accurate detection of Coronary Artery Disease, even by experts, is presently implausible, developing Artificial Intelligence models that compete with the human eye and expertise is the first step towards a state-of-the-art Computer-Aided Diagnostic system.

Methods

A set of 566 patient samples is analysed. The dataset contains Polar Maps derived from scintigraphic Myocardial Perfusion Imaging studies, clinical data, and Coronary Angiography results, the latter serving as the reference standard. For the classification of the medical images, the InceptionV3 Convolutional Neural Network is employed, while, for the categorical and continuous features, Neural Networks and a Random Forest classifier are proposed.

Results

The research suggests that an optimal strategy for competing with the medical expert's accuracy involves a hybrid multi-input network composed of InceptionV3 and a Random Forest. This method matches the expert's accuracy, which is 79.15% on the particular dataset.

Conclusion

Image classification using deep learning methods can cooperate with clinical data classification methods to enhance the robustness of the predictive model, aiming to compete with the medical expert's ability to identify Coronary Artery Disease subjects in a large-scale patient dataset.

10.

Background

Gene Set Analysis (GSA) identifies differentially expressed gene sets between phenotypes. The results of published papers in this field are inconsistent, and there is no consensus on the best method. In this paper, two new methods for GSA are introduced and compared with previous ones.

Methods

The MMGSA and MRGSA methods, based on multivariate nonparametric techniques, are presented. Implementations of five existing GSA methods (Hotelling's T2, Globaltest, Abs_Cat, Med_Cat and Rs_Cat) and of the novel methods for detecting differential gene expression between phenotypes were compared using simulated and real microarray data sets.

Results

On the real dataset, the results showed that the powers of MMGSA and MRGSA were comparable to those of Globaltest and Tsai. The MRGSA method did not perform well on the simulated data.

Conclusions

The Globaltest method was the best method on both the real and simulated datasets. MMGSA performed well on the simulated data for small gene sets. The GLS methods did not perform well on the simulated data, except the Med_Cat method for large gene sets.

11.
《IRBM》2023,44(3):100747
Objectives

The accurate preoperative segmentation of the uterus and uterine fibroids from magnetic resonance images (MRI) is an essential step for diagnosis and real-time ultrasound guidance during high-intensity focused ultrasound (HIFU) surgery. Conventional supervised methods are effective techniques for image segmentation. Recently, semi-supervised segmentation approaches have been reported in the literature. One popular technique in semi-supervised methods is to use pseudo-labels to artificially annotate unlabeled data. However, many existing pseudo-label generation schemes rely on a fixed threshold to generate a confidence map, regardless of the proportion of unlabeled to labeled data.

Materials and Methods

To address this issue, we propose a novel semi-supervised framework called Confidence-based Threshold Adaptation Network (CTANet) to improve the quality of pseudo-labels. Specifically, we propose an online pseudo-labeling method that automatically adjusts the threshold, producing high-confidence annotations for unlabeled data and boosting segmentation accuracy. To further improve the network's generalization to the diversity of different patients, we design a novel mixup strategy by regularizing the network on each layer of the decoder and introducing a consistency regularization loss between the outputs of the two sub-networks in CTANet.

Results

We compare our method with several state-of-the-art semi-supervised segmentation methods on the same uterine fibroids dataset containing 297 patients. Performance is evaluated by the Dice similarity coefficient, precision, and recall. The results show that our method outperforms the other semi-supervised learning methods. Moreover, for the same training set, our method approaches the segmentation performance of a fully supervised U-Net (100% annotated data) while using 4 times less annotated data (25% annotated, 75% unannotated).

Conclusion

Experimental results illustrate the effectiveness of the proposed semi-supervised approach. The proposed method can contribute to multi-class segmentation of uterine regions from MRI for HIFU treatment.

12.
《IRBM》2022,43(2):107-113
Background and objective

An important task in a motor imagery brain-computer interface (BCI) is to extract effective time-domain, frequency-domain, or time-frequency-domain features from the raw electroencephalogram (EEG) signals for classification of motor imagery. However, choosing an appropriate method for combining time-domain and frequency-domain features to improve the performance of motor imagery recognition is still a research hotspot.

Methods

To fully extract and utilize the time-domain and frequency-domain features of EEG in classification tasks, this paper proposes a novel dual-stream convolutional neural network (DCNN) that takes the time-domain and frequency-domain signals as inputs; the extracted time-domain and frequency-domain features are fused by linear weighting for classification training, and the weight is learned by the DCNN automatically.

Results

Experiments based on BCI competition II dataset III and BCI competition IV dataset 2a showed that the proposed model performs better than other conventional methods. The model using time-frequency signals as inputs outperformed models using only time-domain or frequency-domain signals, and classification accuracy improved for each subject compared with models using a single type of signal as input.

Conclusions

Further analysis showed that the fusion weights are subject-specific, and adjusting the weight coefficients automatically helps to improve classification accuracy.
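The linear-weighted fusion of the two feature streams described above can be sketched as follows (here `alpha` is a fixed scalar for illustration; in the DCNN the weight is a parameter learned during training):

```python
def fuse_features(time_feats, freq_feats, alpha):
    """Weighted linear fusion of time-domain and frequency-domain
    feature vectors: alpha * t + (1 - alpha) * f, elementwise.
    alpha = 1.0 keeps only the time stream, alpha = 0.0 only the
    frequency stream."""
    return [alpha * t + (1.0 - alpha) * f
            for t, f in zip(time_feats, freq_feats)]
```

With alpha = 0.5 the fused vector is the elementwise average of the two streams; a subject-specific alpha, as the Conclusions note, shifts the balance toward whichever stream is more informative for that subject.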

13.
Objective

N6-methyladenosine (m6A) is the most common and abundant chemical modification in RNA and plays an important role in many biological processes. Several computational methods have been developed to predict m6A methylation sites; however, these methods lack robustness across different species or tissues. To improve the robustness of m6A methylation site prediction in different tissues, this paper proposes a two-layer bidirectional gated recurrent unit (BiGRU) network model that incorporates reverse sequence information to extract higher-level features from the data.

Methods

Representative mammalian tissue m6A methylation site datasets were selected as training data, and the two-layer BiGRU network was constructed by tuning the network architecture, the number of layers, and the optimizer.

Results

The model was applied to m6A methylation site prediction in 11 tissues of human, mouse, and rat, and its predictive ability was comprehensively compared with that of other methods on these 11 tissues. The model achieved an average area under the receiver operating characteristic curve (AUC) of 93.72%, on par with the best current prediction methods, while its accuracy (ACC), sensitivity (SN), specificity (SP), and Matthews correlation coefficient (MCC) reached 90.07%, 90.30%, 89.84%, and 80.17%, respectively, all higher than those of current m6A methylation site prediction methods.

Conclusion

Compared with existing methods, the proposed method achieves the highest prediction accuracy for m6A methylation sites in all 11 mammalian tissues, indicating good generalization ability.

14.

Background

Imbalanced data classification is an inevitable problem in intelligent medical diagnosis. Most real-world biomedical datasets have limited samples and high-dimensional features. This seriously affects the classification performance of a model and can give erroneous guidance in the diagnosis of diseases. Finding an effective classification method for imbalanced and limited biomedical datasets is a challenging task.

Methods

In this paper, we propose a novel multilayer extreme learning machine (ELM) classification model combined with a dynamic generative adversarial network (GAN) to tackle limited and imbalanced biomedical data. First, principal component analysis is used to remove irrelevant and redundant features while extracting more meaningful pathological features. Then, a dynamic GAN is designed to generate realistic-looking minority class samples, thereby balancing the class distribution and effectively avoiding overfitting. Finally, a self-adaptive multilayer ELM is proposed to classify the balanced dataset. The analytic expressions for the numbers of hidden layers and nodes are determined by quantitatively establishing the relationship between changes in the imbalance ratio and the hyper-parameters of the model. Reducing interactive parameter adjustment makes the classification model more robust.

Results

To evaluate the classification performance of the proposed method, numerical experiments were conducted on four real-world biomedical datasets. The proposed method can generate authentic minority class samples and self-adaptively select the optimal parameters of the learning model. Compared with the W-ELM, SMOTE-ELM, and H-ELM methods, the quantitative experimental results demonstrate that our method achieves better classification performance and higher computational efficiency in terms of the ROC, AUC, G-mean, and F-measure metrics.

Conclusions

Our study provides an effective solution for imbalanced biomedical data classification under conditions of limited samples and high-dimensional features. The proposed method offers a theoretical basis for computer-aided diagnosis and has the potential to be applied in biomedical clinical practice.

15.
N. Bhaskar  M. Suchetha 《IRBM》2021,42(4):268-276
Objectives

In this paper, we propose a computationally efficient Correlational Neural Network (CorrNN) learning model and an automated diagnosis system for detecting Chronic Kidney Disease (CKD). A Support Vector Machine (SVM) classifier is integrated with the CorrNN model to improve prediction accuracy.

Material and methods

The proposed hybrid model is trained and tested with a novel sensing module. We monitor the concentration of urea in saliva samples to detect the disease. Experiments are carried out to test the model with real-time samples and to compare its performance with a conventional Convolutional Neural Network (CNN) and other traditional data classification methods.

Results

The proposed method outperforms the conventional methods in terms of computational speed and prediction accuracy. The CorrNN-SVM combined network achieved a prediction accuracy of 98.67%. The experimental evaluations show a reduction in overall computation time of about 9.85% compared to the conventional CNN algorithm.

Conclusion

The use of the SVM classifier has improved the network's ability to make more accurate predictions. The proposed framework substantially advances the current methodology and provides more precise results than other data classification methods.

16.
Multispecies occupancy models can estimate species richness from spatially replicated multispecies detection/non‐detection survey data, while accounting for imperfect detection. A model extension using data augmentation allows inferring the total number of species in the community, including those completely missed by sampling (i.e., not detected in any survey, at any site). Here we investigate the robustness of these estimates. We review key model assumptions and test performance via simulations, under a range of scenarios of species characteristics and sampling regimes, exploring sensitivity to the Bayesian priors used for model fitting. We run tests when assumptions are perfectly met and when violated. We apply the model to a real dataset and contrast estimates obtained with and without predictors, and for different subsets of data. We find that, even with model assumptions perfectly met, estimation of the total number of species can be poor in scenarios where many species are missed (>15%–20%) and that commonly used priors can accentuate overestimation. Our tests show that estimation can often be robust to violations of assumptions about the statistical distributions describing variation of occupancy and detectability among species, but lower‐tail deviations can result in large biases. We obtain substantially different estimates from alternative analyses of our real dataset, with results suggesting that missing relevant predictors in the model can result in richness underestimation. In summary, estimates of total richness are sensitive to model structure and often uncertain. Appropriate selection of priors, testing of assumptions, and model refinement are all important to enhance estimator performance. Yet, these do not guarantee accurate estimation, particularly when many species remain undetected. While statistical models can provide useful insights, expectations about accuracy in this challenging prediction task should be realistic. 
Where knowledge about species numbers is considered truly critical for management or policy, survey effort should ideally be such that the chances of missing species altogether are low.

17.
18.
《IRBM》2022,43(1):62-74
Background

The prediction of breast cancer subtypes plays a key role in the diagnosis and prognosis of breast cancer. In recent years, deep learning (DL) has shown good performance in the intelligent prediction of breast cancer subtypes. However, most traditional DL models use single-modality data, from which only a few features can be extracted, so they cannot establish a stable relationship between patient characteristics and breast cancer subtypes.

Dataset

We used the TCGA-BRCA dataset as the sample set for molecular subtype prediction of breast cancer. It is a public dataset that can be obtained through the following link: https://portal.gdc.cancer.gov/projects/TCGA-BRCA

Methods

In this paper, a hybrid DL model based on multimodal data is proposed. We combine each patient's gene-modality data with image-modality data to construct a multimodal fusion framework. We set up separate feature extraction networks for the different forms and states of the two modalities, then fuse the outputs of the two feature networks by weighted linear aggregation. Finally, the fused features are used to predict breast cancer subtypes. In particular, we use principal component analysis to reduce the dimensionality of the high-dimensional gene-modality data and to filter the image-modality data. We also improve the traditional feature extraction network to obtain better performance.

Results

The results show that, compared with traditional DL models, the hybrid DL model proposed in this paper is more accurate and efficient at predicting breast cancer subtypes. Our model achieved a prediction accuracy of 88.07% over 10 runs of 10-fold cross-validation. We performed a separate AUC test for each subtype and obtained an average AUC of 0.9427. In terms of subtype prediction accuracy, our model is about 7.45% higher than the previous average.
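The PCA dimensionality-reduction step applied to the gene modality can be sketched with an SVD-based projection (a generic sketch, not the paper's implementation; assumes NumPy is available):

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto its top-k principal components via
    SVD of the mean-centred matrix, returning an (n_samples, k) array
    of component scores."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```

For data whose features are perfectly correlated (effective rank 1), a single component retains all of the variance, which is the sense in which PCA compresses redundant high-dimensional gene features.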

19.

Introduction

A common problem in metabolomics data analysis is the existence of a substantial number of missing values, which can complicate, bias, or even prevent certain downstream analyses. One of the most widely-used solutions to this problem is imputation of missing values using a k-nearest neighbors (kNN) algorithm to estimate missing metabolite abundances. kNN implicitly assumes that missing values are uniformly distributed at random in the dataset, but this is typically not true in metabolomics, where many values are missing because they are below the limit of detection of the analytical instrumentation.

Objectives

Here, we explore the impact of nonuniformly distributed missing values (missing not at random, or MNAR) on imputation performance. We present a new model for generating synthetic missing data and a new algorithm, No-Skip kNN (NS-kNN), that accounts for MNAR values to provide more accurate imputations.

Methods

We compare the imputation errors of the original kNN algorithm using two distance metrics, NS-kNN, and a recently developed algorithm KNN-TN, when applied to multiple experimental datasets with different types and levels of missing data.

Results

Our results show that NS-kNN typically outperforms kNN when at least 20–30% of missing values in a dataset are MNAR. NS-kNN also has lower imputation errors than KNN-TN on realistic datasets when at least 50% of missing values are MNAR.

Conclusion

Accounting for the nonuniform distribution of missing values in metabolomics data can significantly improve the results of imputation algorithms. The NS-kNN method imputes missing metabolomics data more accurately than existing kNN-based approaches when used on realistic datasets.
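The MNAR mechanism discussed above, where values go missing because they fall below the limit of detection (LOD), can be illustrated with a small sketch of why an imputation that assumes uniformly random missingness goes wrong (all function names are illustrative, not from the NS-kNN paper):

```python
def mask_below_lod(values, lod):
    """Left-censor: abundances below the limit of detection go missing
    (MNAR), mimicking how low-abundance metabolites drop out."""
    return [v if v >= lod else None for v in values]

def impute_mean(values):
    """Fill missing entries with the mean of observed values --
    implicitly treats values as missing at random."""
    obs = [v for v in values if v is not None]
    m = sum(obs) / len(obs)
    return [v if v is not None else m for v in values]

def impute_half_lod(values, lod):
    """Fill missing entries with LOD/2 -- acknowledges that censored
    values must lie below the detection limit."""
    return [v if v is not None else lod / 2.0 for v in values]
```

With true abundances [1, 2, 3, 10, 20] and an LOD of 5, the first three values are censored; mean imputation fills them with 15 (the mean of the surviving high values), far above the true 1-3 range, while the LOD/2 fill of 2.5 is close. This is the bias that MNAR-aware methods like NS-kNN are designed to avoid.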

20.
This study proposes a visualization processing method for the deformation risk level of underground space. The proposed method is based on a BP-Hopfield-RGB (BHR) composite network. Complex environmental factors are integrated in the BP neural network, and dynamic monitoring data are automatically classified in the Hopfield network. The deformation risk level is combined with the RGB color space model and displayed visually in real time. Experiments were then conducted with an ultrasonic omnidirectional sensor device for structural deformation monitoring, and the proposed method was compared with several typical methods on a benchmark dataset. Results show that the BHR composite network visualizes the deformation monitoring process in real time and can dynamically indicate dangerous zones.
