Similar Articles
1.
Z Li  J Möttönen  M J Sillanpää 《Heredity》2015,115(6):556-564
Linear regression-based quantitative trait loci/association mapping methods such as least squares commonly assume normality of residuals. In genetics studies of plants or animals, some quantitative traits may not follow a normal distribution because the data include outlying observations or data collected from multiple sources, and in such cases the normal regression methods may lose statistical power to detect quantitative trait loci. In this work, we propose a robust multiple-locus regression approach for analyzing multiple quantitative traits without the normality assumption. In our method, the objective function is least absolute deviation (LAD), which corresponds to assuming multivariate Laplace-distributed residual errors. This distribution has heavier tails than the normal distribution. In addition, we adopt a group LASSO penalty to produce shrinkage estimation of the marker effects and to describe the genetic correlation among phenotypes. Our LAD-LASSO approach is less sensitive to outliers and is more appropriate for the analysis of data with skewed phenotype distributions. Another application of our robust approach is the missing-phenotype problem in multiple-trait analysis, where the missing phenotype items can simply be filled with extreme values and treated as outliers. The efficiency of the LAD-LASSO approach is illustrated on both simulated and real data sets.
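The abstract does not write the criterion out; one common way to express a LAD objective with a group-LASSO penalty over $q$ traits and $p$ markers (the paper's exact parameterization may differ) is

$$\hat{B} \;=\; \arg\min_{B}\; \sum_{i=1}^{n}\sum_{k=1}^{q} \Bigl| y_{ik} - \sum_{j=1}^{p} x_{ij}\,\beta_{jk} \Bigr| \;+\; \lambda \sum_{j=1}^{p} \bigl\lVert \beta_{j\cdot} \bigr\rVert_{2},$$

where $\beta_{j\cdot}$ collects the effects of marker $j$ on all $q$ traits. The $\ell_{2}$ group penalty shrinks whole marker rows toward zero, so a marker is either selected for all traits or for none, which is what ties the phenotypes together.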

2.
Identification of protein coding regions is fundamentally a statistical pattern recognition problem. Discriminant analysis is a statistical technique for classifying a set of observations into predefined classes and is useful for solving such problems. It is well known that outliers are present in virtually every data set in any application domain, and classical discriminant analysis methods (including linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA)) do not work well if the data set has outliers. To overcome this difficulty, a robust statistical method is used in this paper. We choose four different coding characteristics as discriminant variables, and robust discriminant analysis yields encouraging results.
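The abstract does not specify which robust estimator is used; as an illustration only, one common way to robustify discriminant analysis is to replace the sample means and covariances by Minimum Covariance Determinant (MCD) estimates, roughly as follows (function names are placeholders):

```python
import numpy as np
from sklearn.covariance import MinCovDet

def fit_robust_qda(X, y):
    # Per-class robust location/scatter via Minimum Covariance Determinant.
    # Illustrative sketch only; the paper's own robust estimator may differ.
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        mcd = MinCovDet(random_state=0).fit(Xc)
        params[c] = dict(mean=mcd.location_,
                         prec=np.linalg.inv(mcd.covariance_),
                         logdet=np.linalg.slogdet(mcd.covariance_)[1],
                         logprior=np.log(len(Xc) / len(X)))
    return params

def predict_robust_qda(params, X):
    # Quadratic discriminant scores built from the robust estimates.
    labels = sorted(params)
    scores = []
    for c in labels:
        p = params[c]
        d = X - p["mean"]
        maha = np.einsum("ij,jk,ik->i", d, p["prec"], d)  # robust Mahalanobis distances
        scores.append(-0.5 * (maha + p["logdet"]) + p["logprior"])
    return np.asarray(labels)[np.argmax(np.column_stack(scores), axis=1)]
```

Because the MCD estimates ignore a fraction of the most outlying points in each class, a few extreme observations no longer drag the class means and covariances around, which is the failure mode of classical LDA/QDA noted above.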

3.

Aim

Species distribution data play a pivotal role in the study of ecology, evolution, biogeography and biodiversity conservation. Although large amounts of location data are available and accessible from public databases, data quality remains problematic. Of the potential sources of error, positional errors are critical for spatial applications, particularly where these errors place observations beyond the environmental or geographical range of species. These outliers need to be identified, checked and removed to improve data quality and minimize the impact on subsequent analyses. Manually checking all species records within large multispecies datasets is prohibitively costly. This work investigates algorithms that may assist in the efficient vetting of outliers in such large datasets.

Location

We used real, spatially explicit environmental data derived from the western part of Victoria, Australia, and simulated species distributions within this same region.

Methods

By adapting species distribution modelling (SDM), we developed a pseudo-SDM approach for detecting outliers in species distribution data, which was implemented with random forest (RF) and support vector machine (SVM), resulting in two new methods: RF_pdSDM and SVM_pdSDM. Using virtual species, we compared eight existing multivariate outlier detection methods with these two new methods under various conditions.
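The abstract gives only the outline of the pseudo-SDM idea; a minimal sketch of it (score each occurrence record with a presence-versus-background random forest fitted to the environmental covariates) could look like the following. All names are placeholders, and the published RF_pdSDM procedure differs in its details:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def outlier_scores(presence_env, background_env, n_trees=500, seed=0):
    """Score each presence record by how 'background-like' its environment
    looks to a presence-vs-background random forest (pseudo-SDM idea).

    presence_env   : (n_presence, n_env) environmental values at species records
    background_env : (n_background, n_env) values at random background points
    Returns scores in [0, 1]; higher = more likely a positional outlier.
    Illustrative sketch; the published RF_pdSDM method may differ in detail.
    """
    X = np.vstack([presence_env, background_env])
    y = np.r_[np.ones(len(presence_env)), np.zeros(len(background_env))]
    rf = RandomForestClassifier(n_estimators=n_trees, oob_score=True,
                                random_state=seed).fit(X, y)
    # A low predicted probability of "presence" at a presence record suggests
    # the record sits in environments unlike the rest of the species' data.
    return 1.0 - rf.predict_proba(presence_env)[:, 1]
```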

Results

The two new methods based on the pseudo-SDM approach had higher true skill statistic (TSS) values than the other approaches, with TSS values always exceeding 0. More than 70% of the true outliers in datasets for species with low or intermediate prevalence can be identified by checking the 10% of data points with the highest outlier scores.
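For reference, the true skill statistic used here is the standard skill score

$$\mathrm{TSS} \;=\; \text{sensitivity} + \text{specificity} - 1 \;=\; \frac{TP}{TP+FN} + \frac{TN}{TN+FP} - 1,$$

so TSS = 0 means the flagged points are no better than a random selection, and TSS = 1 means outliers and valid records are separated perfectly.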

Main conclusions

Pseudo-SDM-based methods were more effective than other outlier detection methods. However, this outlier detection procedure can only be considered a screening tool, and putative outliers must be examined by experts to determine whether they are actual errors or important records within an inherently biased set of data.

4.
BACKGROUND: Mass spectrometry (MS) data are often generated from various biological or chemical experiments, and there may be outlying observations that are extreme for technical reasons. The determination of outlying observations is important in the analysis of replicated MS data because elaborate pre-processing is essential for successful analysis with reliable results, and manual outlier detection, as one of the pre-processing steps, is time-consuming. The heterogeneity of variability and low replication are often obstacles to successful analysis, including outlier detection. Existing approaches, which assume constant variability, can generate many false positives (outliers) and/or false negatives (non-outliers). Thus, a more powerful and accurate approach is needed to account for the heterogeneity of variability and low replication. FINDINGS: We proposed an outlier detection algorithm using projection and quantile regression in MS data from multiple experiments. The performance of the algorithm and program was demonstrated using both simulated and real-life data. The projection approach with linear, nonlinear, or nonparametric quantile regression was appropriate for heterogeneous high-throughput data with low replication. CONCLUSION: Various quantile regression approaches combined with projection were proposed for detecting outliers. The choice among linear, nonlinear, and nonparametric regressions depends on the degree of heterogeneity of the data. The proposed approach was illustrated with MS data with two or more replicates.
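The algorithm itself is described in the paper rather than the abstract; as a rough illustration of the quantile-regression ingredient, one can fit an upper-quantile line of replicate spread against mean intensity with statsmodels and flag points that exceed it. The variable names here are hypothetical, and the published projection/quantile-regression algorithm differs in detail:

```python
import numpy as np
import statsmodels.api as sm

def flag_outliers(mean_intensity, spread, q=0.95):
    """Fit an upper-quantile line of replicate spread against mean intensity
    and flag observations whose spread exceeds that line.

    Rough illustration of quantile-regression-based outlier screening; the
    quantile q controls how extreme a point must be to be flagged.
    """
    X = sm.add_constant(mean_intensity)        # intercept + slope design matrix
    fit = sm.QuantReg(spread, X).fit(q=q)      # e.g. the 95th percentile line
    upper = fit.predict(X)                     # fitted upper-quantile spread
    return spread > upper                      # boolean outlier flags
```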

5.
Becker T  Knapp M 《Human heredity》2005,59(4):185-189
In the context of haplotype association analysis of unphased genotype data, methods based on Monte-Carlo simulations are often used to compensate for missing or inappropriate asymptotic theory. Moreover, such methods are an indispensable means of dealing with multiple testing problems. We want to call attention to a potential trap in this usually useful approach: the simulation approach may lead to strongly inflated type I errors in the presence of differing missing rates between cases and controls, depending on the chosen test statistic. Here, we consider four different testing strategies for haplotype analysis of case-control data. We recommend interpreting results for data sets with non-comparable distributions of missing genotypes with special caution when the test statistic is based on haplotypes inferred per individual. Moreover, our results are important for the conduct and interpretation of genome-wide association studies.

6.
Increasing international trade has exacerbated the risks of ecological damage due to invasive pests and diseases. For extreme events such as invasions of damaging exotic species or natural catastrophes, there are no or very few directly relevant data, so expert opinion must be relied on heavily. Expert opinion must be as fully informed and calibrated as possible – by available data, by other experts, and by the reasoned opinions of stakeholders. We survey a number of quantitative and non-quantitative methods that have shown promise for improving extreme risk analysis, particularly for assessing the risks of invasive pests and pathogens associated with international trade. We describe the legally inspired regulatory regime for banks, where these methods have been brought to bear on extreme 'operational risks'. We argue that an 'advocacy model' similar to that used in the Basel II compliance regime for bank operational risks and to a lesser extent in biosecurity import risk analyses is ideal for permitting the diversity of relevant evidence about invasive species to be presented and soundly evaluated. We recommend that the process be enhanced in ways that enable invasion ecology to make more explicit use of the methods found successful in banking.  相似文献   

7.
Research has shown that high blood glucose levels are important predictors of incident diabetes. However, they are also strongly associated with other cardiometabolic risk factors such as high blood pressure, adiposity, and cholesterol, which are also highly correlated with one another. The aim of this analysis was to ascertain how these highly correlated cardiometabolic risk factors might be associated with high levels of blood glucose in adults aged 50 or older from wave 2 of the English Longitudinal Study of Ageing (ELSA). Due to the high collinearity of the predictor variables and our interest in extreme values of blood glucose, we propose a new method, called quantile profile regression, to answer this question. Profile regression, a Bayesian nonparametric model for clustering responses and covariates simultaneously, is a powerful tool for modelling the relationship between a response variable and covariates, but the standard approach of using a mixture of Gaussian distributions for the response model will not identify the underlying clusters correctly, particularly with outliers in the data or a heavy-tailed response distribution. Therefore, we propose quantile profile regression, which models the response variable with an asymmetric Laplace distribution, allowing us to model asymmetric clusters more accurately and to predict extreme values of the response variable and/or outliers more accurately. In simulations, our new method performs more accurately than the normal profile regression approach and remains robust when outliers are present in the data. We conclude with an analysis of the ELSA data.
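For reference, the asymmetric Laplace density used as the response model in quantile regression is usually written via the check loss $\rho_\tau$:

$$f(y \mid \mu, \sigma, \tau) \;=\; \frac{\tau(1-\tau)}{\sigma}\, \exp\!\left\{ -\rho_\tau\!\left( \frac{y-\mu}{\sigma} \right) \right\}, \qquad \rho_\tau(u) = u\,\bigl(\tau - \mathbf{1}\{u < 0\}\bigr),$$

so that maximizing this likelihood in $\mu$ is equivalent to minimizing the $\tau$-quantile check loss; $\tau = 0.5$ recovers the symmetric Laplace (LAD) case.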

8.
Summary In the context of a bioassay or an immunoassay, calibration means fitting a curve, usually nonlinear, through the observations collected on a set of samples containing known concentrations of a target substance, and then using the fitted curve and observations collected on samples of interest to predict the concentrations of the target substance in these samples. Recent technological advances have greatly improved our ability to quantify minute amounts of substance from a tiny volume of biological sample. This has in turn led to a need to improve statistical methods for calibration. In this article, we focus on developing calibration methods robust to dependent outliers. We introduce a novel normal mixture model with dependent error terms to model the experimental noise. In addition, we propose a reparameterization of the five parameter logistic nonlinear regression model that allows us to better incorporate prior information. We examine the performance of our methods with simulation studies and show that they lead to a substantial increase in performance measured in terms of mean squared error of estimation and a measure of the average prediction accuracy. A real data example from the HIV Vaccine Trials Network Laboratory is used to illustrate the methods.  相似文献   
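The article's reparameterization is not given in the abstract, but the underlying five-parameter logistic (5PL) calibration curve is conventionally written as

$$f(x;\, a, b, c, d, g) \;=\; d + \frac{a - d}{\bigl(1 + (x/c)^{b}\bigr)^{g}},$$

where $a$ is the response at zero concentration, $d$ the limiting response at high concentration, $c$ a midpoint (EC50-like) concentration, $b$ a slope parameter and $g$ an asymmetry parameter; setting $g = 1$ recovers the familiar four-parameter logistic.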

9.
We review some approaches to extreme value analysis in the context of biometrical applications. The classical extreme value analysis is based on iid random variables. Two different general methods are applied, which will be discussed together with biometrical examples. Different estimation, testing, and goodness-of-fit procedures for applications are discussed. Furthermore, some non-classical situations are considered where the data are possibly dependent, where a non-stationary behavior is observed in the data, or where the observations are not univariate. A few open problems are also stated.
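As background for the classical iid setting mentioned above (standard theory, not a detail taken from the review itself), maxima of blocks of iid observations are modelled with the generalized extreme value (GEV) distribution,

$$G(z) \;=\; \exp\left\{ -\Bigl[\, 1 + \xi\,\frac{z - \mu}{\sigma} \Bigr]_{+}^{-1/\xi} \right\},$$

with location $\mu$, scale $\sigma > 0$ and shape $\xi$; the limit $\xi \to 0$ gives the Gumbel family, $\xi > 0$ the Fréchet family, and $\xi < 0$ the (reversed) Weibull family.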

10.
Multi-pond salterns constitute an excellent model for the study of the microbial diversity and ecology of hypersaline environments, showing a wide range of salt concentrations, from seawater to salt saturation. Accumulated studies on the Santa Pola (Alicante, Spain) multi-pond solar saltern during the last 35 years include culture-dependent methods, culture-independent molecular methods and, more recently, metagenomics. These approaches have made it possible to characterize in depth the microbial diversity of the ponds with intermediate salinities (from 10 % salts) up to salt saturation, with haloarchaea and bacteria as the two main dominant groups. In this review, we describe the main results obtained using the different methodologies, the most relevant contributions to understanding the ecology of these extreme environments, and the future perspectives for such studies.

11.
Outlier detection and cleaning procedures were evaluated to estimate mathematically restricted variogram models with discrete insect population count data. Because variogram modeling is significantly affected by outliers, methods to detect and clean outliers from data sets are critical for proper variogram modeling. In this study, we examined spatial data in the form of discrete measurements of insect counts on a rectangular grid. Two well-known insect pest population data sets were analyzed; one data set was the western flower thrips, Frankliniella occidentalis (Pergande), on greenhouse cucumbers, and the other was the greenhouse whitefly, Trialeurodes vaporariorum (Westwood), on greenhouse cherry tomatoes. A spatial additive outlier model was constructed to detect outliers in both the isolated and patchy spatial distributions of outliers, and the outliers were cleaned with the neighboring median cleaner. To analyze the effect of outliers, we compared the relative nugget effects of data cleaned of outliers and data still containing outliers after transformation. In addition, the correlation coefficients between the actual and predicted values were compared using the leave-one-out cross-validation method with data cleaned of outliers and non-cleaned data after unbiased back transformation. The outlier detection and cleaning procedure improved geostatistical analysis, particularly by reducing the nugget effect, which greatly impacts the prediction variance of kriging. Consequently, the outlier detection and cleaning procedures used here improved the results of geostatistical analysis with highly skewed and extremely fluctuating data, such as insect counts.
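As a simplified picture of the neighboring-median cleaning step (not the authors' exact procedure), flagged grid cells can be replaced by the median of their non-flagged neighbors:

```python
import numpy as np

def median_clean(counts, outlier_mask, radius=1):
    """Replace flagged grid cells with the median of their neighbours.

    counts       : 2-D array of insect counts on the sampling grid
    outlier_mask : boolean array, True where a cell was flagged as an outlier
    Simplified sketch of a neighbouring-median cleaner; the published
    procedure (and its spatial additive outlier model) has more detail.
    """
    cleaned = counts.astype(float).copy()
    n_rows, n_cols = counts.shape
    for i, j in zip(*np.where(outlier_mask)):
        r0, r1 = max(i - radius, 0), min(i + radius + 1, n_rows)
        c0, c1 = max(j - radius, 0), min(j + radius + 1, n_cols)
        window = counts[r0:r1, c0:c1].astype(float)
        ok = ~outlier_mask[r0:r1, c0:c1]   # ignore the flagged cells themselves
        if ok.any():
            cleaned[i, j] = np.median(window[ok])
    return cleaned
```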

12.
MOTIVATION: A number of community profiling approaches have been widely used to study the microbial community composition and its variations in environmental ecology. Automated Ribosomal Intergenic Spacer Analysis (ARISA) is one such technique. ARISA has been used to study microbial communities using 16S-23S rRNA intergenic spacer length heterogeneity at different times and places. Owing to errors in sampling, random mutations in PCR amplification and, probably most importantly, variations in readings from the equipment used to analyze fragment sizes, the data read directly from the fragment analyzer should not be used for downstream statistical analysis. No optimal data preprocessing methods are available. A commonly used approach is to bin the reading lengths of the 16S-23S intergenic spacer. We have developed a binning method for ARISA data analysis, based on a dynamic programming algorithm, which minimizes the overall differences between replicates from the same sampling location and time. RESULTS: In a test example from an ocean time series sampling program, data preprocessing identified several outliers which, upon re-examination, were found to be due to systematic errors. Clustering analysis of ARISA data from different times, binned with the dynamic programming algorithm, revealed important features of the biodiversity of the microbial communities.

13.
The Lomb-Scargle periodogram was introduced in astrophysics to detect sinusoidal signals in noisy, unevenly sampled time series. It proved to be a powerful tool in time series analysis and has recently been adopted in the biomedical sciences. Its use is motivated by its ability to handle non-uniform data, a common characteristic of the restricted and irregular observations of, for instance, free-living animals. However, the observational data often contain fractions of non-Gaussian noise or may consist of periodic signals with non-sinusoidal shapes. These properties can make the interpretation of Lomb-Scargle periodograms more difficult and can lead to misleading estimates. In this letter we illustrate these difficulties for noise-free bimodal rhythms and sinusoidal signals with outliers. The examples are intended to emphasize limitations and to complement the recent discussion on Lomb-Scargle periodograms.
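For readers who want to reproduce such effects, SciPy provides a direct implementation of the Lomb-Scargle periodogram; a minimal sketch with an unevenly sampled sinusoid contaminated by a few gross outliers (all signal parameters are arbitrary choices for illustration):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 10, 200))          # uneven sampling times (e.g. days)
y = np.sin(2 * np.pi * t)                     # 1 cycle per day signal
y[rng.choice(200, 5, replace=False)] += 8.0   # a few gross outliers
y -= y.mean()                                 # lombscargle expects zero-mean data

freqs = np.linspace(0.1, 4.0, 1000)           # trial frequencies, cycles per day
power = lombscargle(t, y, 2 * np.pi * freqs)  # lombscargle takes angular frequencies
print("peak at %.3f cycles/day" % freqs[np.argmax(power)])
```

Comparing the spectrum with and without the injected outliers shows how a handful of extreme points can inflate the noise floor and distort the estimated peak, which is the kind of difficulty the letter discusses.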

15.
Zhu H  Ibrahim JG  Chi YY  Tang N 《Biometrics》2012,68(3):954-964
Summary This article develops a variety of influence measures for carrying out perturbation (or sensitivity) analysis to joint models of longitudinal and survival data (JMLS) in Bayesian analysis. A perturbation model is introduced to characterize individual and global perturbations to the three components of a Bayesian model, including the data points, the prior distribution, and the sampling distribution. Local influence measures are proposed to quantify the degree of these perturbations to the JMLS. The proposed methods allow the detection of outliers or influential observations and the assessment of the sensitivity of inferences to various unverifiable assumptions on the Bayesian analysis of JMLS. Simulation studies and a real data set are used to highlight the broad spectrum of applications for our Bayesian influence methods.  相似文献   

16.
Phylogenetic methods for the analysis of species data are widely used in evolutionary studies. However, preliminary data transformations and data reduction procedures (such as a size‐correction and principal components analysis, PCA) are often performed without first correcting for nonindependence among the observations for species. In the present short comment and attached R and MATLAB code, I provide an overview of statistically correct procedures for phylogenetic size‐correction and PCA. I also show that ignoring phylogeny in preliminary transformations can result in significantly elevated variance and type I error in our statistical estimators, even if subsequent analysis of the transformed data is performed using phylogenetic methods. This means that ignoring phylogeny during preliminary data transformations can possibly lead to spurious results in phylogenetic statistical analyses of species data.  相似文献   

17.
18.
R. G. Blanks 《Cytopathology》2010,21(6):379-388
R.G. Blanks. Using a graph of the abnormal predictive value versus the positive predictive value for the determination of outlier laboratories in the National Health Service cervical screening programme. Objective: The positive predictive value (PPV) for the detection of cervical intraepithelial neoplasia (CIN) grade 2 or worse of referral to colposcopy from moderate dyskaryosis or worse (equivalent to high-grade squamous intraepithelial lesion or worse) is a standard performance measure in the National Health Service cervical screening programme. The current target is to examine 'outlier' laboratories with PPVs outside the 10th–90th percentile, which automatically identifies 20% of laboratories for further investigation. A more targeted method of identifying outliers may be more useful. Methods: A similar measure to the PPV, the abnormal predictive value (APV), can be defined as the predictive value for CIN2 or worse for referrals from borderline (including atypical squamous and glandular cells) and mild dyskaryosis (equivalent to low-grade squamous intraepithelial lesion) combined. A scatter plot of the APV versus the PPV can be produced (the APV-PPV diagram). Three kinds of 'outlier' can be defined on the diagram to help identify laboratories with unusual data. These are termed a true outlier value (TOV) or an extreme value (EV) for either PPV or APV, or a residual extreme value (REV) from the APV-PPV best line of fit. Results: Using annual return information for 2007/8 from 124 laboratories, two were defined as having EVs for PPV (both had a relatively low PPV of 62%). For APV, four laboratories were considered to have EVs of 34%, 34%, 34% and 4%, and one was considered to be a TOV with an APV of 45%. Five were identified as REV laboratories, although three of these were also identified as having extreme or outlier values, leaving two that had not been identified by the other methods. A total of eight (6%) laboratories were therefore identified as meriting further investigation using this methodology. Conclusions: The method proposed could be a useful alternative to the current method of identifying outliers. Slide exchange studies between the identified laboratories, particularly those at opposing ends of the diagram, or other further investigations of such laboratories, may be instructive in understanding why such variation occurs, and could therefore potentially lead to improvements in the national programme.
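Written out from the definitions above (with CIN2+ meaning histologically confirmed CIN grade 2 or worse), the two quantities plotted in the APV-PPV diagram are

$$\mathrm{PPV} \;=\; \frac{\text{CIN2+ confirmed among referrals for moderate dyskaryosis or worse}}{\text{all referrals for moderate dyskaryosis or worse}},$$

$$\mathrm{APV} \;=\; \frac{\text{CIN2+ confirmed among referrals for borderline changes or mild dyskaryosis}}{\text{all referrals for borderline changes or mild dyskaryosis}}.$$

For instance, a hypothetical laboratory with 100 moderate-or-worse referrals of which 80 are confirmed as CIN2+ has a PPV of 80%; laboratories whose (APV, PPV) pair lies far from the rest, or far from the APV-PPV best line of fit, are the ones the outlier definitions above are designed to flag.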

19.
The advantage of using DNA microarray data when investigating human cancer gene expression is its ability to generate an enormous amount of information from a single assay, speeding up the scientific evaluation process. The number of variables in gene expression data, coupled with a comparably much smaller number of samples, creates new challenges for scientists and statisticians. In particular, the problems include an enormous degree of collinearity among gene expressions, likely violations of model assumptions, and a high level of noise with potential outliers. To deal with these problems, we propose a block wavelet shrinkage principal component analysis (BWSPCA) method to optimize the information during the noise reduction process. This paper firstly uses the National Cancer Institute database (NCI-60) as an illustration and shows a significant improvement in dimension reduction. Secondly, we combine BWSPCA with an artificial neural network-based gene minimization strategy to establish a Block Wavelet-based Neural Network (BWNN) model for robust and accurate cancer classification. Our extensive experiments on six public cancer datasets have shown that the BWNN method for tumor classification performed well, especially on some difficult instances with multi-class (more than two classes) expression data. The proposed method is extremely useful for data denoising and is competitive with other methods such as BagBoost, RandomForest (RanFor), Support Vector Machines (SVM), K-Nearest Neighbor (KNN) and Artificial Neural Network (ANN).

20.
We consider the problem of identifying differentially expressed genes under different conditions using gene expression microarrays. Because of the many steps involved in the experimental process, from hybridization to image analysis, cDNA microarray data often contain outliers. For example, an outlying data value could occur because of scratches or dust on the surface, imperfections in the glass, or imperfections in the array production. We develop a robust Bayesian hierarchical model for testing for differential expression. Errors are modeled explicitly using a t-distribution, which accounts for outliers. The model includes an exchangeable prior for the variances, which allows different variances for the genes but still shrinks extreme empirical variances. Our model can be used for testing for differentially expressed genes among multiple samples, and it can distinguish between the different possible patterns of differential expression when there are three or more samples. Parameter estimation is carried out using a novel version of Markov chain Monte Carlo that is appropriate when the model puts mass on subspaces of the full parameter space. The method is illustrated using two publicly available gene expression data sets. We compare our method to six other baseline and commonly used techniques, namely the t-test, the Bonferroni-adjusted t-test, significance analysis of microarrays (SAM), Efron's empirical Bayes, and EBarrays in both its lognormal-normal and gamma-gamma forms. In an experiment with HIV data, our method performed better than these alternatives, on the basis of between-replicate agreement and disagreement.  相似文献   
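The abstract does not give the model equations; a generic hierarchical formulation with the ingredients it describes (t-distributed errors and an exchangeable variance prior) would look roughly like

$$y_{gsr} \;=\; \mu_{gs} + \varepsilon_{gsr}, \qquad \varepsilon_{gsr} \sim t_{\nu}\!\left(0, \sigma_g^{2}\right), \qquad \sigma_g^{2} \sim \text{Inverse-Gamma}(a, b),$$

where $g$ indexes genes, $s$ samples (conditions) and $r$ replicates. The heavy-tailed $t$ errors down-weight outlying spots, the shared hyperparameters $(a, b)$ shrink extreme gene-specific variances toward a common value, and differential expression corresponds to hypotheses about which of the $\mu_{gs}$ are equal across samples; the authors' actual model and priors may differ in detail.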
