Similar Documents
Found 20 similar documents.
1.
To study chromosomal aberrations that may lead to cancer formation or genetic diseases, the array-based Comparative Genomic Hybridization (aCGH) technique is often used for detecting DNA copy number variants (CNVs). Various methods have been developed for extracting CNV information from aCGH data. However, most of these methods use only the log-intensity ratios in aCGH data, without taking advantage of other information such as the DNA probe (e.g., biomarker) positions/distances contained in the data. Motivated by these specific features of aCGH data, we developed a novel method that estimates the change point, or locus, of a CNV together with its associated biomarker position on the chromosome using a compound Poisson process. We used a Bayesian approach to derive the posterior probability for the estimation of the CNV locus. To detect the loci of multiple CNVs in the data, a sliding window process combined with our derived Bayesian posterior probability is proposed. To evaluate the performance of the method in estimating the CNV locus, we first performed simulation studies. Finally, we applied our approach to real data from aCGH experiments, demonstrating its applicability.
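A minimal sketch of the sliding-window idea described above, not the authors' compound-Poisson formulation: each window is scored with a Bayesian change-point posterior under a simple Gaussian mean-shift model, and the probe position with the highest posterior is reported when it exceeds an illustrative threshold. The window size, noise level `sigma`, and threshold are assumptions for the toy data.

```python
import numpy as np

def changepoint_posterior(y, sigma=0.2):
    """Posterior over the change-point location k under a Gaussian mean-shift
    model with a uniform prior on k (1 <= k < len(y))."""
    n = len(y)
    logp = np.full(n, -np.inf)
    for k in range(1, n):
        left, right = y[:k], y[k:]
        # Profile log-likelihood with the two segment means plugged in
        rss = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        logp[k] = -rss / (2 * sigma ** 2)
    logp -= logp.max()
    post = np.exp(logp)
    return post / post.sum()

def sliding_window_cnv(y, positions, window=50, step=25, threshold=0.5):
    """Scan the chromosome; report the probe position with the highest
    change-point posterior in each window if it exceeds the threshold."""
    hits = []
    for start in range(0, len(y) - window, step):
        post = changepoint_posterior(y[start:start + window])
        k = int(post.argmax())
        if post[k] > threshold:
            hits.append(positions[start + k])
    return hits

# Toy aCGH log-ratios: a gained segment between probes 120 and 160.
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0, 0.2, 120), rng.normal(0.8, 0.2, 40),
                    rng.normal(0, 0.2, 120)])
pos = np.arange(len(y)) * 5000          # toy probe coordinates (bp)
print(sliding_window_cnv(y, pos))
```

Overlapping windows can report the same breakpoint more than once; merging nearby hits is left out of the sketch.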

2.
Hao K, Cawley S. Human Heredity 2007, 63(3-4):219-228.
BACKGROUND: Current biotechnologies are able to achieve high accuracy and call rates. Concerns have been raised about how differential performance on various genotypes may bias association tests. Quantitatively, we define the differential dropout rate as the ratio of the no-call rate among heterozygotes to that among homozygotes. METHODS: The hazard of differential dropout is examined for population- and family-based association tests through a simulation study. We also investigate detection approaches such as testing for Hardy-Weinberg Equilibrium (HWE) and testing for correlation between sample call rate and sample heterozygosity. Finally, we analyze two public datasets and evaluate the magnitude of differential dropout. RESULTS: In case-control settings, differential dropout has a negligible effect on power and odds ratio (OR) estimation. However, the impact on family-based tests ranges from minor to severe depending on the disease parameters. The impact is more prominent when the disease allele frequency is relatively low (e.g., 5%), where a differential dropout rate of 2.5 can dramatically bias OR estimation and reduce power even at a decent 98% overall call rate and moderate effect size (e.g., OR(true) = 2.11). Both public datasets follow HWE; however, the HapMap data carries detectable differential dropout that may endanger family-based studies. CONCLUSIONS: The case-control approach appears to be robust to differential dropout; however, family-based association tests can be heavily biased. Both public genotype datasets show high call rates, but differential dropout is detected in the HapMap data. We suggest that researchers carefully control this potential confounder even when using data of high accuracy and high overall call rate.
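A minimal sketch of the two quantities discussed above: the differential dropout rate (ratio of no-call rates in heterozygotes versus homozygotes) and the sample-level correlation between call rate and heterozygosity used for detection. The genotype coding (0/1/2 allele counts, -1 for no-call) and the toy dropout probabilities are assumptions.

```python
import numpy as np
from scipy.stats import pearsonr

def differential_dropout_rate(true_geno, observed):
    het = true_geno == 1
    hom = (true_geno == 0) | (true_geno == 2)
    return np.mean(observed[het] == -1) / np.mean(observed[hom] == -1)

def callrate_heterozygosity_test(observed):
    """Correlate per-sample call rate with per-sample heterozygosity
    (heterozygosity computed over called genotypes only)."""
    call_rate = (observed != -1).mean(axis=1)
    het_rate = np.array([(row[row != -1] == 1).mean() for row in observed])
    return pearsonr(call_rate, het_rate)

# Toy data: heterozygotes fail to call 2.5x more often than homozygotes.
rng = np.random.default_rng(1)
true_geno = rng.choice([0, 1, 2], size=(2000, 200), p=[0.25, 0.5, 0.25])
p_nocall = np.where(true_geno == 1, 0.05, 0.02)
observed = np.where(rng.random(true_geno.shape) < p_nocall, -1, true_geno)

print(differential_dropout_rate(true_geno, observed))   # close to 2.5
print(callrate_heterozygosity_test(observed))
```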

3.
A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold, or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software package that allows the fully automatic discovery of patterns; the software is publicly available for evaluation. As a second example, an image segmentation method is realized; it achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features.
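A minimal sketch of the simplest possible 1d-decomposition: the joint density is approximated by a product of per-dimension kernel density estimates, so every estimation step stays in 1d. The paper's framework allows far richer decompositions; this independence product is only the most basic instance, and the class name is made up for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

class Product1DDensity:
    """Joint log-density as a sum of independently estimated 1d log-densities."""
    def fit(self, X):
        self.kdes_ = [gaussian_kde(X[:, j]) for j in range(X.shape[1])]
        return self

    def logpdf(self, X):
        return sum(np.log(kde(X[:, j])) for j, kde in enumerate(self.kdes_))

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))        # 10-d data: each estimate is still 1-d
model = Product1DDensity().fit(X)
print(model.logpdf(X[:5]))
```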

4.
Ecological Monographs 2011, 81(4):635-663.
Ecology is inherently multivariate, but high-dimensional data are difficult to understand. Dimension reduction with ordination analysis helps with both data exploration and clarification of the meaning of inferences (e.g., randomization tests, variation partitioning) about a statistical population. Most such inferences are asymmetric, in that variables are classified as either response or explanatory (e.g., factors, predictors). But this asymmetric approach has limitations (e.g., abiotic variables may not entirely explain correlations between interacting species). We study symmetric population-level inferences by modeling correlations and co-occurrences, using these models for out-of-sample prediction. Such modeling requires a novel treatment of ordination axes as random effects, because fixed effects only allow within-sample predictions. We advocate an iterative methodology for random-effects ordination: (1) fit a set of candidate models differing in complexity (e.g., number of axes); (2) use information criteria to choose among models; (3) compare model predictions with data; (4) explore dimension-reduced graphs (e.g., biplots); (5) repeat 1–4 if model performance is poor. We describe and illustrate random-effects ordination models (with software) for two types of data: multivariate-normal (e.g., log morphometric data) and presence–absence community data. A large simulation experiment with multivariate-normal data demonstrates good performance of (1) a small-sample-corrected information criterion and (2) factor analysis relative to principal component analysis. Predictive comparison of multiple alternative models is a powerful form of scientific reasoning: we have shown that unconstrained ordination can be based on such reasoning.
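A minimal sketch of steps 1–2 of the iterative methodology above: candidate factor-analysis ordinations with different numbers of axes are fit and compared with a small-sample-corrected information criterion (AICc). The parameter count is a rough approximation, not the authors' exact formula, and the toy data are an assumption.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

def aicc(loglik, n_params, n_obs):
    aic = -2 * loglik + 2 * n_params
    return aic + 2 * n_params * (n_params + 1) / max(n_obs - n_params - 1, 1)

def choose_n_axes(X, max_axes=5):
    n, d = X.shape
    scores = {}
    for k in range(1, max_axes + 1):
        fa = FactorAnalysis(n_components=k).fit(X)
        loglik = fa.score(X) * n            # score() returns the mean log-likelihood
        n_params = d * k + d                # loadings + unique variances (approximate)
        scores[k] = aicc(loglik, n_params, n)
    return min(scores, key=scores.get), scores

rng = np.random.default_rng(0)
latent = rng.normal(size=(60, 2))                    # true 2-axis structure
X = latent @ rng.normal(size=(2, 8)) + rng.normal(scale=0.5, size=(60, 8))
best, scores = choose_n_axes(X)
print(best, scores)
```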

5.
In the presence of competing causes of event occurrence (e.g., death), interest may lie not only in the overall survival but also in the so-called net survival, that is, the hypothetical survival that would be observed if the disease under study were the only possible cause of death. Net survival estimation is commonly based on the excess hazard approach, in which the hazard rate of individuals is assumed to be the sum of a disease-specific hazard and an expected hazard rate, the latter supposed to be correctly approximated by the mortality rates obtained from general population life tables. However, this assumption might not be realistic if the study participants are not comparable with the general population. In addition, the hierarchical structure of the data can induce a correlation between the outcomes of individuals coming from the same cluster (e.g., hospital, registry). We propose an excess hazard model that corrects simultaneously for these two sources of bias, instead of dealing with them independently as done previously. We assessed the performance of this new model and compared it with three similar models, using an extensive simulation study as well as an application to breast cancer data from a multicenter clinical trial. The new model performed better than the others in terms of bias, root mean square error, and empirical coverage rate. The proposed approach may be useful for accounting simultaneously for the hierarchical structure of the data and the non-comparability bias in studies such as long-term multicenter clinical trials, when there is interest in the estimation of net survival.
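A minimal sketch of the excess-hazard decomposition behind net-survival estimation: the observed hazard is the sum of an excess (disease-specific) hazard and an expected population hazard from life tables. Here the excess hazard is a single constant estimated by maximum likelihood; the model in the paper adds covariates, a flexible baseline, and cluster random effects, none of which are sketched.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_constant_excess_hazard(time, event, pop_hazard):
    """time: follow-up times; event: 1=death, 0=censored;
    pop_hazard: expected (life-table) hazard for each subject."""
    def negloglik(log_lam):
        lam = np.exp(log_lam)
        # Terms involving only the population hazard do not depend on lam and are dropped.
        return -(np.sum(event * np.log(lam + pop_hazard)) - lam * time.sum())
    res = minimize_scalar(negloglik, bounds=(-10, 2), method="bounded")
    return np.exp(res.x)

rng = np.random.default_rng(0)
n = 2000
pop_h = rng.uniform(0.005, 0.02, n)          # expected hazards from life tables
true_excess = 0.03
t_death = rng.exponential(1 / (true_excess + pop_h))
t_cens = rng.uniform(0, 10, n)
time = np.minimum(t_death, t_cens)
event = (t_death <= t_cens).astype(int)
print(fit_constant_excess_hazard(time, event, pop_h))    # close to 0.03
```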

6.
Microarrays are a new technology that allows biologists to better understand the interactions between diverse pathologic states at the gene level. However, the amount of data generated by these tools becomes problematic, even when the data are analyzed automatically (e.g., for diagnostic purposes). The issue becomes more complex when the expression data involve multiple states. We present a novel approach to the gene selection problem in multi-class gene expression-based cancer classification, which combines support vector machines and genetic algorithms. This new method is able to select small gene subsets and still improve classification accuracy.
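A minimal sketch of a genetic-algorithm wrapper around a support vector machine, in the spirit of the combination described above but not the authors' exact operators: chromosomes are binary gene masks, and fitness is cross-validated multi-class accuracy on the selected genes with a small penalty on subset size. Population size, mutation rate, and the toy data are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(SVC(kernel="linear"), X[:, mask], y, cv=3).mean()
    return acc - 0.001 * mask.sum()           # mild penalty favours small subsets

def ga_select(X, y, pop_size=20, n_gen=15, p_mut=0.05):
    n_genes = X.shape[1]
    pop = rng.random((pop_size, n_genes)) < 0.1           # sparse initial masks
    for _ in range(n_gen):
        fit = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(fit)[::-1][: pop_size // 2]]   # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_genes)
            child = np.concatenate([a[:cut], b[cut:]])    # one-point crossover
            child ^= rng.random(n_genes) < p_mut          # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    fit = np.array([fitness(ind, X, y) for ind in pop])
    return pop[fit.argmax()]

# Toy multi-class expression data: 3 classes, only the first 5 genes informative.
y = np.repeat([0, 1, 2], 30)
X = rng.normal(size=(90, 200))
X[:, :5] += y[:, None]
print("selected genes:", np.flatnonzero(ga_select(X, y)))
```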

7.

Background  

Feature selection is a pattern recognition approach for choosing important variables according to some criterion in order to distinguish or explain certain phenomena (i.e., for dimensionality reduction). Many genomic and proteomic applications rely on feature selection to answer questions such as selecting signature genes that are informative about some biological state, e.g., normal tissues versus several types of cancer, or inferring a prediction network among elements such as genes, proteins, and external stimuli. In these applications, a recurrent problem is the lack of samples needed to perform an adequate estimate of the joint probabilities between element states. A myriad of feature selection algorithms and criterion functions have been proposed, although it is difficult to point to the best solution for each application.

8.
In this paper, we study a parametric modeling approach to gene set enrichment analysis. Existing methods have largely relied on nonparametric approaches employing, e.g., categorization, permutation, or resampling-based significance analysis. These methods have proven useful yet may lack power. By formulating the enrichment analysis as a model comparison problem, we adopt a likelihood ratio-based testing approach to assess the significance of enrichment. Through simulation studies and application to gene expression data, we illustrate the competitive performance of the proposed method.
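A minimal sketch of casting enrichment as model comparison: gene-level scores are modeled as Gaussian, and a likelihood ratio test asks whether genes in the set have a shifted mean relative to the background. The paper's parametric model is richer; this shows only the core likelihood-ratio mechanic, and the toy scores are an assumption.

```python
import numpy as np
from scipy.stats import norm, chi2

def enrichment_lrt(scores, in_set):
    sigma = scores.std(ddof=0)                      # common scale in both models
    def ll(x, mu):
        return norm.logpdf(x, mu, sigma).sum()
    # Null: one common mean. Alternative: separate means for set and background.
    ll0 = ll(scores, scores.mean())
    ll1 = ll(scores[in_set], scores[in_set].mean()) + ll(scores[~in_set], scores[~in_set].mean())
    stat = 2 * (ll1 - ll0)
    return stat, chi2.sf(stat, df=1)

rng = np.random.default_rng(0)
scores = rng.normal(size=5000)                      # e.g., per-gene t-statistics
in_set = np.zeros(5000, dtype=bool)
in_set[:100] = True
scores[in_set] += 0.5                               # the set is mildly enriched
print(enrichment_lrt(scores, in_set))
```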

9.
Since measurements of process variables are subject to measurement errors as well as process variability, data reconciliation is the procedure of optimally adjusting measured data so that the adjusted values obey the conservation laws and constraints. Thus, data reconciliation for dynamic systems is fundamental and important for control, fault detection, and system optimization. Attempts to successfully implement estimators are often hindered by severe process nonlinearities, complicated state constraints, and unmeasurable perturbations. As a constrained minimization problem, dynamic data reconciliation is carried out dynamically to produce smoothed estimates with variances from the original data. Many algorithms have been proposed for such state estimation, such as the extended Kalman filter (EKF), the unscented Kalman filter, and the cubature Kalman filter (CKF). In this paper, we investigate the use of the CKF algorithm in comparison with the EKF to solve the nonlinear dynamic data reconciliation problem. First we give a broad overview of the recursive nonlinear dynamic data reconciliation (RNDDR) scheme, then present an extension to the CKF algorithm, and finally address how to handle the constraints in the CKF approach. The CCRNDDR method is proposed by applying the RNDDR within the CKF algorithm to handle nonlinearity as well as algebraic constraints and bounds. As the sampling idea is incorporated into the RNDDR framework, more accurate estimates can be obtained via the recursive nature of the estimation procedure. The performance of the CKF approach is compared with the EKF and RNDDR on nonlinear process systems with constraints. With an error-optimization solution of the correction step, the reformulated CKF shows high performance on the selected nonlinear constrained process systems. Simulation results show that the CCRNDDR is an efficient, accurate, and stable method for real-time state estimation for nonlinear dynamic processes.
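A minimal cubature Kalman filter sketch with a crude constraint step (clipping the corrected state to its bounds) standing in for the optimization-based correction of the CCRNDDR scheme described above. The two-state toy process, noise levels, and bounds are assumptions, not a system from the paper.

```python
import numpy as np

def cubature_points(x, P):
    n = len(x)
    S = np.linalg.cholesky(P)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # 2n cubature points
    return x[:, None] + S @ xi

def ckf_step(x, P, z, f, h, Q, R, lower, upper):
    n = len(x)
    # Predict: propagate cubature points through the process model
    Xp = np.array([f(p) for p in cubature_points(x, P).T]).T
    x_pred = Xp.mean(axis=1)
    P_pred = (Xp - x_pred[:, None]) @ (Xp - x_pred[:, None]).T / (2 * n) + Q
    # Update: propagate fresh points through the measurement model
    Xu = cubature_points(x_pred, P_pred)
    Zu = np.array([h(p) for p in Xu.T]).T
    z_pred = Zu.mean(axis=1)
    Pzz = (Zu - z_pred[:, None]) @ (Zu - z_pred[:, None]).T / (2 * n) + R
    Pxz = (Xu - x_pred[:, None]) @ (Zu - z_pred[:, None]).T / (2 * n)
    K = Pxz @ np.linalg.inv(Pzz)
    x_new = x_pred + K @ (z - z_pred)
    P_new = P_pred - K @ Pzz @ K.T
    return np.clip(x_new, lower, upper), P_new     # constraint handling (simplified)

def f(x):
    """Toy nonlinear dynamics (Euler step); states should stay nonnegative."""
    return np.array([x[0] - 0.05 * x[0] * x[1],
                     x[1] + 0.03 * x[0] - 0.02 * x[1]])

def h(x):
    return x                                        # both states are measured

Q, R = 1e-4 * np.eye(2), 1e-2 * np.eye(2)
x, P = np.array([1.0, 0.5]), 0.1 * np.eye(2)
rng = np.random.default_rng(0)
truth = np.array([1.0, 0.5])
for _ in range(20):
    truth = f(truth)
    z = truth + rng.normal(scale=0.1, size=2)
    x, P = ckf_step(x, P, z, f, h, Q, R, lower=0.0, upper=np.inf)
print(x, truth)
```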

10.
The bootstrap is a tool that allows for efficient evaluation of the prediction performance of statistical techniques without having to set aside data for validation. This is especially important for high-dimensional data, e.g., arising from microarrays, because there the number of observations is often limited. To avoid overoptimism, the statistical technique to be evaluated has to be applied to every bootstrap sample in the same manner it would be used on new data. This includes the selection of complexity, e.g., the number of boosting steps for gradient boosting algorithms. Using the latter, we demonstrate in a simulation study that complexity selection in conventional bootstrap samples, drawn with replacement, is severely biased in many scenarios. This translates into a considerable bias of prediction error estimates, often underestimating the amount of information that can be extracted from high-dimensional data. Potential remedies for this complexity-selection bias, such as using a fixed level of complexity or sampling without replacement, are investigated, and it is shown that the latter works well in many settings. We focus on high-dimensional binary response data, with bootstrap .632+ estimates of the Brier score for performance evaluation, and on censored time-to-event data with .632+ prediction error curve estimates. The latter, with the modified bootstrap procedure, is then applied to an example with microarray data from patients with diffuse large B-cell lymphoma.
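A minimal sketch of the complexity-selection bias: the complexity chosen by cross-validation inside bootstrap samples drawn with replacement (where duplicated observations can fall into both training and test folds) is compared with the complexity chosen in subsamples drawn without replacement. k-nearest-neighbours, with smaller k meaning higher complexity, stands in for the number of boosting steps used in the paper; the data and grid of k values are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def select_k(X, y, ks=(1, 3, 5, 9, 15, 25)):
    scores = [cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5).mean()
              for k in ks]
    return ks[int(np.argmax(scores))]

rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.normal(size=(n, p))
y = (X[:, 0] + rng.normal(scale=2.0, size=n) > 0).astype(int)   # weak signal

with_repl, without_repl = [], []
for _ in range(20):
    idx_b = rng.integers(0, n, n)                   # bootstrap: duplicates likely
    idx_s = rng.permutation(n)[: int(0.632 * n)]    # subsample: no duplicates
    with_repl.append(select_k(X[idx_b], y[idx_b]))
    without_repl.append(select_k(X[idx_s], y[idx_s]))

print("median k, with replacement:   ", np.median(with_repl))
print("median k, without replacement:", np.median(without_repl))
```

Duplicates make 1-NN look artificially accurate within with-replacement samples, pulling the selected complexity away from what would be chosen on genuinely new data.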

11.
Understanding the basis for intracellular motion is critical as the field moves toward a deeper understanding of the relation between Brownian forces, molecular crowding, and anisotropic (or isotropic) energetic forcing. Effective forces and other parameters used to summarize molecular motion change over time in live cells due to latent state changes, e.g., changes induced by dynamic micro-environments, photobleaching, and other heterogeneity inherent in biological processes. This study discusses limitations in currently popular analysis methods (e.g., mean square displacement-based analyses) and how new techniques can be used to systematically analyze Single Particle Tracking (SPT) data experiencing abrupt state changes in time or space. The approach is to track GFP-tagged chromatids in metaphase in live yeast cells and quantitatively probe the effective forces resulting from dynamic interactions that reflect the sum of a number of physical phenomena. State changes can be induced by various sources, including microtubule dynamics exerting force through the centromere, thermal polymer fluctuations, and DNA-based molecular machines including polymerases and protein exchange complexes such as chaperones and chromatin remodeling complexes. Simulations aimed at showing the relevance of the approach to more general SPT data analyses are also studied. Refined force estimates are obtained by adopting and modifying a nonparametric Bayesian modeling technique, the Hierarchical Dirichlet Process Switching Linear Dynamical System (HDP-SLDS), for SPT applications. The HDP-SLDS method shows promise in systematically identifying dynamical regime changes induced by unobserved state changes when the number of underlying states is unknown in advance (a common problem in SPT applications). We expand on the relevance of the HDP-SLDS approach, review the relevant background of Hierarchical Dirichlet Processes, show how to map discrete-time HDP-SLDS models to classic SPT models, and discuss limitations of the approach. In addition, we demonstrate new computational techniques for tuning hyperparameters and for checking the statistical consistency of model assumptions directly against individual experimental trajectories; these techniques circumvent the need for “ground-truth” and/or subjective information.
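A minimal sketch of the mapping mentioned above: an Ornstein–Uhlenbeck model of confined particle motion (a classic SPT model) is written as a discrete-time linear dynamical system x_{t+1} = a x_t + b + noise, the form each regime of an HDP-SLDS takes, and a per-segment AR(1) fit recovers the regime parameters. The parameter values and the abrupt switch halfway through the trajectory are illustrative assumptions.

```python
import numpy as np

def ou_to_lds(kappa, mu, D, dt):
    """OU: dx = -kappa (x - mu) dt + sqrt(2 D) dW  ->  exact AR(1) coefficients."""
    a = np.exp(-kappa * dt)
    b = mu * (1 - a)
    q = D / kappa * (1 - a ** 2)       # process-noise variance per step
    return a, b, q

rng = np.random.default_rng(0)

def simulate_regime_switch(n=2000, dt=0.01):
    """Trajectory whose effective confinement (kappa, mu) changes halfway."""
    x = np.zeros(n)
    for t in range(1, n):
        kappa, mu = (5.0, 0.0) if t < n // 2 else (20.0, 0.5)
        a, b, q = ou_to_lds(kappa, mu, D=0.1, dt=dt)
        x[t] = a * x[t - 1] + b + np.sqrt(q) * rng.normal()
    return x

x = simulate_regime_switch()
# Per-segment least-squares AR(1) fit recovers each regime's (a, b).
for seg in (slice(0, 1000), slice(1000, 2000)):
    X, Y = x[seg][:-1], x[seg][1:]
    A = np.vstack([X, np.ones_like(X)]).T
    a_hat, b_hat = np.linalg.lstsq(A, Y, rcond=None)[0]
    print(a_hat, b_hat)
```

The HDP-SLDS infers the segment boundaries and the number of regimes itself; here the segments are known and only the per-regime fit is shown.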

12.
Serban N, Jiang H. Biometrics 2012, 68(3):805-814.
In this article, we investigate clustering methods for multilevel functional data, which consist of repeated random functions observed for a large number of units (e.g., genes) at multiple subunits (e.g., bacteria types). To describe the within- and between-unit variability induced by the hierarchical structure in the data, we take a multilevel functional principal component analysis (MFPCA) approach. We develop and compare a hard clustering method applied to the scores derived from the MFPCA and a soft clustering method using an MFPCA decomposition. In a simulation study, we assess the estimation accuracy of the clustering membership and the cluster patterns under a series of settings: small versus moderate numbers of time points, various noise levels, and varying numbers of subunits per unit. We demonstrate the applicability of the clustering analysis to a real data set consisting of expression profiles from genes activated by immune system cells. Prevalent response patterns are identified by clustering the expression profiles using our multilevel clustering analysis.
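A minimal sketch of the hard-clustering route: functional principal component scores (here plain PCA on the sampled curves, a simplification of the multilevel FPCA used in the paper) followed by k-means on those scores. The two toy response patterns are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
# Toy expression profiles: two prevalent response patterns plus noise.
curves = np.vstack([np.sin(2 * np.pi * t) + rng.normal(scale=0.3, size=(100, 50)),
                    np.exp(-3 * t) + rng.normal(scale=0.3, size=(100, 50))])

scores = PCA(n_components=3).fit_transform(curves)        # FPCA-score stand-in
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
print(np.bincount(labels[:100]), np.bincount(labels[100:]))
```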

13.
Time series data provided by single-molecule Förster resonance energy transfer (smFRET) experiments offer the opportunity to infer not only model parameters describing molecular complexes, e.g., rate constants, but also information about the model itself, e.g., the number of conformational states. Resolving whether such states exist or how many of them exist requires a careful approach to the problem of model selection, here meaning discrimination among models with differing numbers of states. The most straightforward approach to model selection generalizes the common idea of maximum likelihood—selecting the most likely parameter values—to maximum evidence: selecting the most likely model. In either case, such an inference presents a tremendous computational challenge, which we here address by exploiting an approximation technique termed variational Bayesian expectation maximization. We demonstrate how this technique can be applied to temporal data such as smFRET time series; show superior statistical consistency relative to the maximum likelihood approach; compare its performance on smFRET data generated from experiments on the ribosome; and illustrate how model selection in such probabilistic or generative modeling can facilitate analysis of closely related temporal data currently prevalent in biophysics. Source code used in this analysis, including a graphical user interface, is available open source via http://vbFRET.sourceforge.net.
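A minimal sketch of the model-selection problem only, not of vbFRET itself: maximum-likelihood Gaussian hidden Markov models with different numbers of states are fit to an smFRET-like trace and compared with BIC, a simpler stand-in for the variational-Bayes evidence used in the paper. It assumes the hmmlearn package is available; the trace and parameter count are illustrative.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def bic_for_states(x, max_states=4):
    X = x.reshape(-1, 1)
    out = {}
    for k in range(1, max_states + 1):
        model = GaussianHMM(n_components=k, covariance_type="diag",
                            n_iter=200, random_state=0).fit(X)
        loglik = model.score(X)
        n_params = k * k + 2 * k - 1        # transitions + start probs + means + variances
        out[k] = -2 * loglik + n_params * np.log(len(X))
    return out

# Toy two-state FRET-efficiency trace.
rng = np.random.default_rng(0)
states = (rng.random(2000) < 0.5).astype(int)
x = np.where(states == 1, 0.8, 0.3) + rng.normal(scale=0.05, size=2000)
bics = bic_for_states(x)
print(min(bics, key=bics.get), bics)
```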

14.
Although multiple gene sequences are becoming increasingly available for molecular phylogenetic inference, the analysis of such data has largely relied on inference methods designed for single genes. One of the common approaches to analyzing data from multiple genes is concatenation of the individual gene data to form a single supergene to which traditional phylogenetic inference procedures (e.g., maximum parsimony, MP, or maximum likelihood, ML) are applied. Recent empirical studies have demonstrated that concatenation of sequences from multiple genes prior to phylogenetic analysis often results in inference of a single, well-supported phylogeny. Theoretical work, however, has shown that the coalescent can produce substantial variation in single-gene histories. Using simulation, we combine these ideas to examine the performance of the concatenation approach under conditions in which the coalescent produces a high level of discord among individual gene trees, and we show that it leads to statistically inconsistent estimation in this setting. Furthermore, use of the bootstrap to measure support for the inferred phylogeny can result in moderate to strong support for an incorrect tree under these conditions. These results highlight the importance of incorporating variation in gene histories into multilocus phylogenetics.
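A small worked example of the coalescent result invoked above: for a rooted three-taxon species tree with an internal branch of length t in coalescent units, the probability that a gene tree matches the species tree is 1 − (2/3)e^(−t), so short internal branches produce high discord among single-gene histories. Monte Carlo draws confirm the formula; the branch lengths are illustrative, and the full concatenation experiment of the paper is not reproduced.

```python
import numpy as np

def concordance_probability(t):
    """P(gene tree topology matches the 3-taxon species tree)."""
    return 1 - (2 / 3) * np.exp(-t)

rng = np.random.default_rng(0)
n = 100_000
for t in (0.1, 0.5, 1.0, 2.0):
    # With prob exp(-t) the lineages fail to coalesce in the internal branch,
    # and then all three rooted topologies are equally likely.
    deep = rng.random(n) < np.exp(-t)
    topo = np.where(deep, rng.integers(0, 3, n), 0)   # 0 = matches species tree
    print(t, concordance_probability(t), np.mean(topo == 0))
```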

15.
Most genetically based features should be available for use in cladistic analysis. Palynologists routinely measure polar (P) and equatorial (E) axes and place pollen into size classes defined by earlier pollen workers. Grouping of pollen into globally arbitrary classes may not correspond to statistically significant differences among the taxa of a study. We propose a model using conventional statistical procedures coupled with data visualization and Monte Carlo simulation. This approach is not a final solution to the general problem of coding continuous characters into discrete states; it is an attempt to address the problems of character state delimitation in pollen morphology. We suggest that the coding of continuous measurement variables (e.g., P, E) into character states should follow a logical sequence of interactive visualization (2D and 3D) of bivariate frequency distributions, including the inspection of prediction and confidence ellipses (e.g., 99%), and the use of ANOVA. We illustrate our approach using realistic pollen data sets generated by a computer program (POLSIM) written to perform Monte Carlo sampling from normally distributed statistical populations of polar and equatorial axes. Our model is then applied to an original data set of 4,134 pollen grains from the Ebenaceae, resulting in the coding of the four genera into three character states for pollen size.
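A minimal sketch in the spirit of POLSIM, not the program itself: polar-axis measurements for several taxa are drawn from normal distributions by Monte Carlo, and one-way ANOVA asks whether the taxa differ enough to justify separate character states. The taxon names, means, and standard deviations are hypothetical.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
taxa = {                                  # (mean P, sd P) in micrometres, hypothetical
    "genus_A": (22.0, 1.5),
    "genus_B": (22.5, 1.5),               # barely different from genus_A
    "genus_C": (30.0, 2.0),
}
samples = {name: rng.normal(mu, sd, 200) for name, (mu, sd) in taxa.items()}

# Global ANOVA across all taxa, then the pair that an arbitrary global size
# class might lump together or split without statistical justification.
print(f_oneway(*samples.values()))
print(f_oneway(samples["genus_A"], samples["genus_B"]))
```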

16.
In this paper we consider the setting where a group of n judges are to independently rank a series of k objects, but the intended complete rankings are not realized and we are faced with analyzing randomly incomplete ranking vectors. We propose a new testing procedure for dealing with such data realizations. We concentrate on the problem of testing for no differences among the objects being ranked (i.e., they are indistinguishable) against general alternatives, but our approach could easily be extended to restricted (e.g., ordered or umbrella) alternatives. Using an improvement of a preliminary screening approach previously proposed by the authors, we present an algorithm for computation of the relevant Friedman-type statistic in the general-alternatives setting and present the results of an extensive simulation study comparing the new procedure with the standard approach of imputing average within-judge ranks to the unranked objects.
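A minimal sketch of the standard comparator mentioned above, not the new procedure: each judge ranks only a random subset of the k objects, every unranked object receives the average of that judge's unused ranks, and an (uncorrected) Friedman statistic is computed from the completed rank matrix. The missingness rate and toy rankings are assumptions.

```python
import numpy as np
from scipy.stats import chi2

def friedman_with_imputation(ranks):
    """ranks: n_judges x k matrix with np.nan for unranked objects."""
    filled = ranks.copy()
    k = ranks.shape[1]
    for i, row in enumerate(filled):
        missing = np.isnan(row)
        if missing.any():
            # Average of the ranks this judge did not assign
            unused = np.setdiff1d(np.arange(1, k + 1), row[~missing])
            filled[i, missing] = unused.mean()
    n = filled.shape[0]
    rbar = filled.mean(axis=0)
    stat = 12 * n / (k * (k + 1)) * np.sum((rbar - (k + 1) / 2) ** 2)
    return stat, chi2.sf(stat, df=k - 1)

rng = np.random.default_rng(0)
n, k = 30, 5
full = np.array([rng.permutation(k) + 1 for _ in range(n)]).astype(float)
mask = rng.random((n, k)) < 0.2                  # ~20% of rankings unobserved
incomplete = np.where(mask, np.nan, full)
print(friedman_with_imputation(incomplete))
```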

17.
Chen J, Chatterjee N. Biometrics 2006, 62(1):28-35.
Genetic epidemiologic studies often collect genotype data at multiple loci within a genomic region of interest from a sample of unrelated individuals. One popular method for analyzing such data is to assess whether haplotypes, i.e., the arrangements of alleles along individual chromosomes, are associated with the disease phenotype. For many study subjects, however, the exact haplotype configuration on the pair of homologous chromosomes cannot be derived with certainty from the available locus-specific genotype data (phase ambiguity). In this article, we consider estimating haplotype-specific association parameters in the Cox proportional hazards model, using genotype, environmental exposure, and disease endpoint data collected from cohort or nested case-control studies. We study alternative Expectation-Maximization algorithms for estimating haplotype frequencies from cohort and nested case-control studies. Based on a hazard function of the disease derived from the observed genotype data, we then propose a semiparametric method for joint estimation of the relative-risk parameters and the cumulative baseline hazard function. The method is greatly simplified under a rare disease assumption, for which an asymptotic variance estimator is also proposed. The performance of the proposed estimators is assessed via simulation studies. An application of the proposed method is presented, using data from the Alpha-Tocopherol, Beta-Carotene Cancer Prevention Study.
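A minimal sketch of the expectation-maximization step for phase ambiguity, restricted to two biallelic loci: haplotype frequencies are estimated from unphased genotypes, where only double heterozygotes have ambiguous phase. The paper's algorithms extend this to many loci and to cohort and nested case-control sampling with survival outcomes, none of which are sketched.

```python
import numpy as np

def em_haplotype_freqs(genotypes, n_iter=200):
    """genotypes: (N, 2) minor-allele counts (0/1/2) at two loci.
    Returns frequencies of haplotypes 00, 01, 10, 11."""
    g = np.asarray(genotypes)
    N = len(g)
    idx = {(0, 0): 0, (0, 1): 1, (1, 0): 2, (1, 1): 3}
    base = np.zeros(4)
    n_amb = 0
    for g1, g2 in g:
        if g1 == 1 and g2 == 1:
            n_amb += 1                                  # phase unknown
            continue
        a1 = [0, 0] if g1 == 0 else [1, 1] if g1 == 2 else [0, 1]
        a2 = [0, 0] if g2 == 0 else [1, 1] if g2 == 2 else [0, 1]
        base[idx[(a1[0], a2[0])]] += 1                  # the two haplotypes are determined
        base[idx[(a1[1], a2[1])]] += 1
    p = np.full(4, 0.25)
    for _ in range(n_iter):
        # E-step: split double heterozygotes between the two phase configurations
        w = p[0] * p[3] / (p[0] * p[3] + p[1] * p[2])
        counts = base.copy()
        counts[[0, 3]] += n_amb * w
        counts[[1, 2]] += n_amb * (1 - w)
        p = counts / (2 * N)                            # M-step
    return p

rng = np.random.default_rng(0)
true_p = np.array([0.5, 0.1, 0.1, 0.3])                 # haplotypes 00, 01, 10, 11
haps = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
draws = haps[rng.choice(4, size=(5000, 2), p=true_p)]   # two haplotypes per person
genotypes = draws.sum(axis=1)                           # unphased allele counts
print(em_haplotype_freqs(genotypes))                    # close to true_p
```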

18.
19.
In many clinical trials, multiple time-to-event endpoints, including the primary endpoint (e.g., time to death) and secondary endpoints (e.g., progression-related endpoints), are commonly used to determine treatment efficacy. These endpoints are often biologically related. This work is motivated by a study of bone marrow transplant (BMT) for leukemia patients, who may experience acute graft-versus-host disease (GVHD), relapse of leukemia, and death after an allogeneic BMT. Acute GVHD is associated with relapse-free survival, and both acute GVHD and relapse of leukemia are intermediate nonterminal events subject to dependent censoring by the informative terminal event death, but not vice versa, giving rise to survival data that are subject to two sets of semi-competing risks. It is important to assess the impacts of prognostic factors on these three time-to-event endpoints. We propose a novel statistical approach that jointly models such data via a pair of copulas to account for the multiple dependence structures, while the marginal distribution of each endpoint is formulated by a Cox proportional hazards model. We develop an estimation procedure based on pseudo-likelihood and carry out simulation studies to examine the performance of the proposed method in finite samples. The practical utility of the proposed method is further illustrated with data from the motivating example.
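A minimal sketch of the semi-competing-risks data structure only: a nonterminal and a terminal event time are linked through a Clayton copula (sampled by conditional inversion) with exponential margins, and the nonterminal event is dependently censored by death but not vice versa. The copula family, rates, and censoring window are assumptions; the paper's pair-of-copulas model with Cox margins and its pseudo-likelihood estimation are not reproduced.

```python
import numpy as np

def clayton_pair(n, theta, rng):
    """Sample (U1, U2) from a Clayton copula via conditional inversion."""
    u1 = rng.random(n)
    v = rng.random(n)
    u2 = (u1 ** (-theta) * (v ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
    return u1, u2

def simulate_semicompeting(n=5000, theta=2.0, lam_nonterm=0.10, lam_death=0.05, cens=20.0):
    rng = np.random.default_rng(0)
    u1, u2 = clayton_pair(n, theta, rng)
    t_nonterm = -np.log(u1) / lam_nonterm     # e.g., time to acute GVHD or relapse
    t_death = -np.log(u2) / lam_death         # terminal event
    c = rng.uniform(0, cens, n)               # administrative censoring
    # The nonterminal event is observed only if it precedes both death and censoring;
    # death censors it dependently, but not vice versa (semi-competing risks).
    obs_nonterm = t_nonterm < np.minimum(t_death, c)
    obs_death = t_death < c
    return obs_nonterm.mean(), obs_death.mean()

print("Kendall's tau implied by theta=2:", 2.0 / (2.0 + 2))
print(simulate_semicompeting())
```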

20.
Missing data, measurement error, and misclassification are three important problems in many research fields, such as epidemiological studies. It is well known that missing data and measurement error in covariates may lead to biased estimation. Misclassification may be considered a special type of measurement error for categorical data. Nevertheless, we treat misclassification as a different problem from measurement error because the statistical models for them are different. Indeed, in the literature, methods for these three problems have generally been proposed separately, given that the statistical modeling for them is very different. The problem is more challenging in a longitudinal study with nonignorable missing data. In this article, we consider estimation in generalized linear models under these three incomplete-data models. We propose a general approach based on expected estimating equations (EEEs) to solve these three incomplete-data problems in a unified fashion. The EEE approach can be easily implemented and its asymptotic covariance can be obtained by sandwich estimation. Intensive simulation studies are performed under various incomplete-data settings. The proposed method is applied to a longitudinal study of oral bone density in relation to body bone density.
