Similar Literature
20 similar documents found (search time: 46 ms)
1.
Peng Y, Dear KB. Biometrics 2000, 56(1): 237–243
Nonparametric methods have attracted less attention than their parametric counterparts for cure rate analysis. In this paper, we study a general nonparametric mixture model. The proportional hazards assumption is employed in modeling the effect of covariates on the failure time of patients who are not cured. The EM algorithm, the marginal likelihood approach, and multiple imputations are employed to estimate parameters of interest in the model. This model extends models and improves estimation methods proposed by other researchers. It also extends Cox's proportional hazards regression model by allowing a proportion of event-free patients and investigating covariate effects on that proportion. The model and its estimation method are investigated by simulations. An application to breast cancer data, including comparisons with previous analyses using a parametric model and an existing nonparametric model by other researchers, confirms the conclusions from the parametric model but not those from the existing nonparametric model.
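The mixture idea above can be illustrated nonparametrically: in a cure model the Kaplan–Meier survival curve levels off at the cured proportion, so the KM plateau gives a crude estimate of the cure fraction. A minimal sketch in plain Python with hypothetical data (this illustrates only the mixture concept, not the paper's EM/marginal-likelihood estimators):

```python
# Sketch: estimate the cure fraction as the Kaplan-Meier survival
# plateau. Data below are hypothetical, not the breast cancer data
# analyzed in the paper.

def kaplan_meier(times, events):
    """Return the KM survival estimate after the last observed time.
    times: follow-up times; events: 1 = failure, 0 = censored."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    n_at_risk = len(times)
    surv = 1.0
    for i in order:
        if events[i] == 1:
            surv *= 1.0 - 1.0 / n_at_risk
        n_at_risk -= 1
    return surv

# Hypothetical cohort: 5 early failures, 5 patients censored late
# (apparent long-term survivors -> candidate "cured" group).
times  = [1, 2, 3, 4, 5, 10, 10, 10, 10, 10]
events = [1, 1, 1, 1, 1,  0,  0,  0,  0,  0]
cure_fraction = kaplan_meier(times, events)
print(round(cure_fraction, 3))  # plateau of the KM curve: 0.5
```

With half the cohort failing early and half censored late, the plateau sits at 0.5, the naive cure-fraction estimate.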

2.
Chen SX. Biometrics 1999, 55(3): 754–759
This paper introduces a framework for animal abundance estimation in independent observer line transect surveys of clustered populations. The framework generalizes an approach given in Chen (1999, Environmental and Ecological Statistics 6, in press) to accommodate heterogeneity in detection caused by cluster size and other covariates. Both parametric and nonparametric estimators for the local effective search widths, given the covariates, can be derived from the framework. A nonparametric estimator based on conditional kernel density estimation is proposed and studied owing to its flexibility in modeling the detection functions. A real data set on harbor porpoise in the North Sea is analyzed.
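The role of the effective search width can be sketched with a simplified, unconditional version of the kernel idea: for line transects, the density f of perpendicular detection distances satisfies f(0) = 1/μ, where μ is the effective strip half-width, so a boundary-corrected kernel estimate of f(0) yields an estimate of μ. The half-normal detection distances and rule-of-thumb bandwidth below are assumptions for illustration; the paper's estimator is conditional on covariates such as cluster size.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical perpendicular detection distances: half-normal, sigma = 1.
# The true effective strip half-width is mu = 1 / f(0) = sigma * sqrt(pi/2).
sigma = 1.0
x = np.abs(rng.normal(0.0, sigma, size=2000))

def f0_reflected(x, h):
    """Gaussian kernel density estimate at 0, reflecting about the boundary.
    Reflection at 0 makes the estimate there 2 * mean(K_h(x_i))."""
    k = np.exp(-0.5 * (x / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    return 2.0 * k.mean()

h = 1.06 * x.std() * len(x) ** (-0.2)   # rule-of-thumb bandwidth
mu_hat = 1.0 / f0_reflected(x, h)
print(mu_hat)  # roughly sigma * sqrt(pi/2), i.e. about 1.25
```

Abundance estimates then follow from μ̂ and the survey effort; the paper's contribution is letting this width vary with covariates.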

3.
This paper demonstrates the advantages of sharing information about unknown features of covariates across multiple model components in various nonparametric regression problems, including multivariate, heteroskedastic, and semicontinuous responses. We present a methodology which allows information to be shared nonparametrically across various model components using Bayesian sum-of-trees models. Our simulation results demonstrate that sharing of information across related model components is often very beneficial, particularly in sparse high-dimensional problems in which variable selection must be conducted. We illustrate our methodology by analyzing medical expenditure data from the Medical Expenditure Panel Survey (MEPS). To facilitate the Bayesian nonparametric regression analysis, we develop two novel models for analyzing the MEPS data using Bayesian additive regression trees: a heteroskedastic log-normal hurdle model with a “shrink-toward-homoskedasticity” prior, and a gamma hurdle model.

4.
Although the use of clustering methods has rapidly become one of the standard computational approaches in the literature of microarray gene expression data, little attention has been paid to uncertainty in the results obtained. Dirichlet process mixture (DPM) models provide a nonparametric Bayesian alternative to the bootstrap approach to modeling uncertainty in gene expression clustering. Most previously published applications of Bayesian model-based clustering methods have been to short time series data. In this paper, we present a case study of the application of nonparametric Bayesian clustering methods to the clustering of high-dimensional non-time-series gene expression data using full Gaussian covariances. We use the probability that two genes belong to the same cluster in a DPM model as a measure of the similarity of these gene expression profiles. Conversely, this probability can be used to define a dissimilarity measure, which, for the purposes of visualization, can be input to one of the standard linkage algorithms used for hierarchical clustering. Biologically plausible results are obtained from the Rosetta compendium of expression profiles, extending previously published cluster analyses of these data.
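The similarity-to-dissimilarity step described above is easy to sketch: with P[i, j] the posterior probability that genes i and j share a cluster, D = 1 − P is a dissimilarity that standard linkage algorithms accept. The matrix below is hand-made for illustration; in the paper, P would be estimated by averaging co-clustering indicators over DPM posterior samples.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical posterior co-clustering probabilities for 5 genes:
# genes 0-2 cluster together, genes 3-4 cluster together.
P = np.array([
    [1.0, 0.9, 0.8, 0.1, 0.2],
    [0.9, 1.0, 0.85, 0.15, 0.1],
    [0.8, 0.85, 1.0, 0.2, 0.1],
    [0.1, 0.15, 0.2, 1.0, 0.9],
    [0.2, 0.1, 0.1, 0.9, 1.0],
])
D = 1.0 - P                      # dissimilarity from co-clustering probability
np.fill_diagonal(D, 0.0)
Z = linkage(squareform(D), method="average")   # condensed form for linkage
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)  # genes {0,1,2} and {3,4} fall into separate groups
```

The dendrogram implied by Z is what the paper uses for visualization; the DPM posterior, not the tree, carries the uncertainty statement.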

5.
Hoff PD. Biometrics 2005, 61(4): 1027–1036
This article develops a model-based approach to clustering multivariate binary data, in which the attributes that distinguish a cluster from the rest of the population may depend on the cluster being considered. The clustering approach is based on a multivariate Dirichlet process mixture model, which allows for the estimation of the number of clusters, the cluster memberships, and the cluster-specific parameters in a unified way. Such a clustering approach has applications in the analysis of genomic abnormality data, in which the development of different types of tumors may depend on the presence of certain abnormalities at subsets of locations along the genome. Additionally, such a mixture model provides a nonparametric estimation scheme for dependent sequences of binary data.

6.
Background: Patients with multimorbidities have the greatest healthcare needs and generate the highest expenditure in the health system. There is an increasing focus on identifying specific disease combinations for addressing poor outcomes. Existing research has identified a small number of prevalent “clusters” in the general population, but the limited number examined might oversimplify the problem, and these may not be the ones associated with important outcomes. Combinations with the highest (potentially preventable) secondary care costs may reveal priority targets for intervention or prevention. We aimed to examine the potential of defining multimorbidity clusters for impacting secondary care costs.
Methods and findings: We used national Hospital Episode Statistics data from all hospital admissions in England in 2017/2018 (a cohort of over 8 million patients) and defined multimorbidity based on ICD-10 codes for 28 chronic conditions (we backfilled conditions from 2009/2010 to address potential undercoding). We identified the combinations of multimorbidity which contributed the highest total current and previous 5-year costs of secondary care, and the costs of potentially preventable emergency hospital admissions, in aggregate and per patient. We examined the distribution of costs across unique disease combinations to test the potential of the cluster approach for targeting interventions at high costs. We then estimated the overlap between the unique combinations to test the potential of the cluster approach for targeting prevention of accumulated disease. We examined variability in the ranks and distributions across age (over/under 65) and deprivation (area-level deciles) subgroups, and sensitivity to considering a smaller number of diseases. There were 8,440,133 unique patients in our sample; over 4 million (53.1%) were female, and over 3 million (37.7%) were aged over 65 years.
No clear “high cost” combinations of multimorbidity emerged as possible targets for intervention. Over 2 million (31.6%) patients had 63,124 unique combinations of multimorbidity, each contributing a small fraction (maximum 3.2%) of current-year or 5-year secondary care costs. The highest total cost combinations tended to have fewer conditions (dyads/triads, most including hypertension) affecting a relatively large population. This contrasted with the combinations that generated the highest cost for individual patients, which were complex sets of many (6+) conditions affecting fewer persons. However, all combinations containing chronic kidney disease and hypertension, or diabetes and hypertension, made up a significant proportion of total secondary care costs, and all combinations containing chronic heart failure, chronic kidney disease, and hypertension had the highest proportion of preventable emergency admission costs, which might offer priority targets for prevention of disease accumulation. The results varied little between age and deprivation subgroups and in sensitivity analyses. Key limitations include the availability of data only from hospitals and the reliance on hospital coding of health conditions.
Conclusions: Our findings indicate that there are no clear multimorbidity combinations for a cluster-targeted intervention approach to reduce secondary care costs. The role of risk stratification and a focus on interventions for individual high-cost patients is particularly questionable for this aim. However, if aetiology is favourable for preventing further disease, the cluster approach might be useful for targeting disease prevention efforts, with potential for cost savings in secondary care.

Jonathan Stokes and co-workers explore patterns of multimorbidity and implications for the organization and costs of care.

7.
A method is proposed that aims at identifying clusters of individuals that show similar patterns when observed repeatedly. We consider linear-mixed models that are widely used for the modeling of longitudinal data. In contrast to the classical assumption of a normal distribution for the random effects, a finite mixture of normal distributions is assumed. Typically, the number of mixture components is unknown and has to be chosen, ideally by data-driven tools. For this purpose, an EM algorithm-based approach is considered that uses a penalized normal mixture as the random effects distribution. The penalty term shrinks the pairwise distances of cluster centers based on the group lasso and the fused lasso methods. The effect is that individuals with similar time trends are merged into the same cluster. The strength of regularization is determined by one penalization parameter. For finding the optimal penalization parameter, a new model choice criterion is proposed.

8.
In this article, we study the estimation of the mean response and regression coefficients in semiparametric regression problems when the response variable is subject to nonrandom missingness. When the missingness is independent of the response conditional on high-dimensional auxiliary information, the parametric approach may misspecify the relationship between covariates and response, while the nonparametric approach is infeasible because of the curse of dimensionality. To overcome this, we study a model-based approach to condense the auxiliary information and estimate the parameters of interest nonparametrically on the condensed covariate space. Our estimators possess the double robustness property, i.e., they are consistent whenever the model for the response given auxiliary covariates or the model for the missingness given auxiliary covariates is correct. We conduct a number of simulations to compare the numerical performance between our estimators and other existing estimators in the current missing data literature, including the propensity score approach and the inverse probability weighted estimating equation. A set of real data is used to illustrate our approach.
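The double robustness property mentioned above can be illustrated with the classical augmented inverse-probability-weighted (AIPW) estimator of a mean under missingness at random, a simpler relative of the paper's dimension-reduction approach. Everything below (the data-generating model, logistic propensity, and linear outcome model) is a hypothetical setup in which both working models happen to be correct.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)        # true mean of y is 1.0
p = 1.0 / (1.0 + np.exp(-(0.5 + x)))          # true observation propensity
r = rng.random(n) < p                          # r = True -> y observed

X = np.column_stack([np.ones(n), x])
# Outcome regression m(x): least squares on the observed cases.
beta = np.linalg.lstsq(X[r], y[r], rcond=None)[0]
m_hat = X @ beta

# Propensity e(x): logistic regression fit by Newton-Raphson.
gamma = np.zeros(2)
for _ in range(25):
    e = 1.0 / (1.0 + np.exp(-X @ gamma))
    grad = X.T @ (r - e)
    hess = (X * (e * (1 - e))[:, None]).T @ X
    gamma += np.linalg.solve(hess, grad)
e_hat = 1.0 / (1.0 + np.exp(-X @ gamma))

# AIPW: consistent if either m(x) or e(x) is correctly specified.
y_obs = np.where(r, y, 0.0)                    # y enters only where observed
aipw = np.mean(r * y_obs / e_hat - (r - e_hat) / e_hat * m_hat)
print(aipw)  # close to the true mean 1.0
```

Deliberately misspecifying one of the two working models (but not both) and re-running is a quick way to see the "double" in double robustness.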

9.
In this article, we apply the recently developed Bayesian wavelet-based functional mixed model methodology to analyze MALDI-TOF mass spectrometry proteomic data. By modeling mass spectra as functions, this approach avoids reliance on peak detection methods. The flexibility of this framework in modeling nonparametric fixed and random effect functions enables it to model the effects of multiple factors simultaneously, allowing one to perform inference on multiple factors of interest using the same model fit, while adjusting for clinical or experimental covariates that may affect both the intensities and locations of peaks in the spectra. For example, this provides a straightforward way to account for systematic block and batch effects that characterize these data. From the model output, we identify spectral regions that are differentially expressed across experimental conditions, in a way that takes both statistical and clinical significance into account and controls the Bayesian false discovery rate to a prespecified level. We apply this method to two cancer studies.

10.
Researchers are often interested in predicting outcomes, detecting distinct subgroups of their data, or estimating causal treatment effects. Pathological data distributions that exhibit skewness and zero-inflation complicate these tasks, requiring highly flexible, data-adaptive modeling. In this paper, we present a multipurpose Bayesian nonparametric model for continuous, zero-inflated outcomes that simultaneously predicts structural zeros, captures skewness, and clusters patients with similar joint data distributions. The flexibility of our approach yields predictions that capture the joint data distribution better than commonly used zero-inflated methods. Moreover, we demonstrate that our model can be coherently incorporated into a standardization procedure for computing causal effect estimates that are robust to such data pathologies. Uncertainty at all levels of this model flows through to the causal effect estimates of interest, allowing easy point estimation, interval estimation, and posterior predictive checks verifying positivity, a required causal identification assumption. Our simulation results show point estimates to have low bias and interval estimates to have close to nominal coverage under complicated data settings. Under simpler settings, these results hold while incurring lower efficiency loss than comparator methods. We use our proposed method to analyze zero-inflated inpatient medical costs among endometrial cancer patients receiving either chemotherapy or radiation therapy in the SEER-Medicare database.

11.
Motivated by the absolute risk predictions required in medical decision making and patient counseling, we propose an approach for the combined analysis of case-control and prospective studies of disease risk factors. The approach is hierarchical to account for parameter heterogeneity among studies and among sampling units of the same study. It is based on modeling the retrospective distribution of the covariates given the disease outcome, a strategy that greatly simplifies both the combination of prospective and retrospective studies and the computation of Bayesian predictions in the hierarchical case-control context. Retrospective modeling differentiates our approach from most current strategies for inference on risk factors, which are based on the assumption of a specific prospective model. To ensure modeling flexibility, we propose using a mixture model for the retrospective distributions of the covariates. This leads to a general nonlinear regression family for the implied prospective likelihood. After introducing and motivating our proposal, we present simple results that highlight its relationship with existing approaches, develop Markov chain Monte Carlo methods for inference and prediction, and present an illustration using ovarian cancer data.

12.
We consider inference for data from a clinical trial of treatments for metastatic prostate cancer. Patients joined the trial with diverse prior treatment histories. The resulting heterogeneous patient population gives rise to challenging statistical inference problems when trying to predict time to progression on different treatment arms. Inference is further complicated by the need to include a longitudinal marker as a covariate. To address these challenges, we develop a semiparametric model for joint inference of longitudinal data and an event time. The proposed approach includes the possibility of cure for some patients. The event time distribution is based on a nonparametric Pólya tree prior. For the longitudinal data we assume a mixed effects model. Incorporating a regression on covariates in a nonparametric event time model in general, and for a Pólya tree model in particular, is a challenging problem. We exploit the fact that the covariate itself is a random variable. We achieve an implementation of the desired regression by factoring the joint model for the event time and the longitudinal outcome into a marginal model for the event time and a regression of the longitudinal outcomes on the event time, i.e., we implicitly model the desired regression by modeling the reverse conditional distribution.

13.
Clustering is a major tool for microarray gene expression data analysis. The existing clustering methods fall mainly into two categories: parametric and nonparametric. The parametric methods generally assume a mixture of parametric subdistributions. When the mixture distribution approximately fits the true data generating mechanism, the parametric methods perform well, but not so when there is nonnegligible deviation between them. On the other hand, the nonparametric methods, which usually do not make distributional assumptions, are robust but pay the price of efficiency loss. In an attempt to utilize the known mixture form to increase efficiency, and to avoid assumptions about the unknown subdistributions to enhance robustness, we propose a semiparametric method for clustering. The proposed approach possesses the form of a parametric mixture, with no assumptions on the subdistributions. The subdistributions are estimated nonparametrically, with constraints imposed only on the modes. An expectation-maximization (EM) algorithm along with a classification step is invoked to cluster the data, and a modified Bayesian information criterion (BIC) is employed to guide the determination of the optimal number of clusters. Simulation studies are conducted to assess the performance and the robustness of the proposed method. The results show that the proposed method yields reasonable partitions of the data. As an illustration, the proposed method is applied to a real microarray data set to cluster genes.

14.
For analysis of genomic data, e.g., microarray data from gene expression profiling experiments, the two-component mixture model has been widely used in practice to detect differentially expressed genes. However, it naïvely imposes strong exchangeability assumptions across genes and does not make active use of a priori information about intergene relationships that is currently available, e.g., gene annotations through the Gene Ontology (GO) project. We propose a general strategy that first generates a set of covariates that summarizes the intergene information and then extends the two-component mixture model into a hierarchical semiparametric model utilizing the generated covariates through latent nonparametric regression. Simulations and analysis of real microarray data show that our method can outperform the naïve two-component mixture model.
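The two-component mixture baseline that this paper extends can be sketched directly: fit pi0·N(0,1) + (1−pi0)·N(mu,1) to gene-level z-scores by EM, with the null component fixed at N(0,1), and read off each gene's posterior null probability (the local false discovery rate). The simulated z-scores and the fixed unit variances below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical z-scores: 80% null genes ~ N(0,1), 20% DE genes ~ N(3,1).
z = np.concatenate([rng.normal(0, 1, 8000), rng.normal(3, 1, 2000)])

def phi(x, m):
    """Standard-deviation-1 normal density centered at m."""
    return np.exp(-0.5 * (x - m) ** 2) / np.sqrt(2 * np.pi)

pi0, mu = 0.5, 1.0                  # crude starting values
for _ in range(200):
    f0 = pi0 * phi(z, 0.0)
    f1 = (1 - pi0) * phi(z, mu)
    w = f1 / (f0 + f1)              # E-step: posterior P(DE | z)
    pi0 = 1 - w.mean()              # M-step: mixing proportion
    mu = (w * z).sum() / w.sum()    # M-step: non-null mean

local_fdr = 1 - w                   # P(null | z), gene by gene
print(round(pi0, 2), round(mu, 2))  # near the simulation truth (0.8, 3)
```

The paper's hierarchical extension replaces the single shared pi0 with a gene-specific prior probability driven by GO-derived covariates.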

15.
A novel functional additive model is proposed, which is uniquely modified and constrained to model nonlinear interactions between a treatment indicator and a potentially large number of functional and/or scalar pretreatment covariates. The primary motivation for this approach is to optimize individualized treatment rules based on data from a randomized clinical trial. We generalize functional additive regression models by incorporating treatment-specific components into additive effect components. A structural constraint is imposed on the treatment-specific components in order to provide a class of additive models with main effects and interaction effects that are orthogonal to each other. If primary interest is in the interaction between treatment and the covariates, as is generally the case when optimizing individualized treatment rules, we can thereby circumvent the need to estimate the main effects of the covariates, obviating the need to specify their form and thus avoiding the issue of model misspecification. The methods are illustrated with data from a depression clinical trial with electroencephalogram functional data as patients' pretreatment covariates.

16.
We propose a model-based approach to unify clustering and network modeling using time-course gene expression data. Specifically, our approach uses a mixture model to cluster genes. Genes within the same cluster share a similar expression profile. The network is built over cluster-specific expression profiles using state-space models. We discuss the application of our model to simulated data as well as to time-course gene expression data arising from animal models of prostate cancer progression. The latter application shows that with combined statistical/bioinformatics analyses, we are able to extract gene-to-gene relationships supported by the literature as well as new plausible relationships.

17.
The problem of inferring haplotypes from genotypes of single nucleotide polymorphisms (SNPs) is essential for the understanding of genetic variation within and among populations, with important applications to the genetic analysis of disease propensities and other complex traits. The problem can be formulated as a mixture model, where the mixture components correspond to the pool of haplotypes in the population. The size of this pool is unknown; indeed, knowing the size of the pool would correspond to knowing something significant about the genome and its history. Thus methods for fitting the genotype mixture must crucially address the problem of estimating a mixture with an unknown number of mixture components. In this paper we present a Bayesian approach to this problem based on a nonparametric prior known as the Dirichlet process. The model also incorporates a likelihood that captures statistical errors in the haplotype/genotype relationship, trading off these errors against the size of the pool of haplotypes. We describe an algorithm based on Markov chain Monte Carlo for posterior inference in our model. The overall result is a flexible Bayesian method, referred to as DP-Haplotyper, that is reminiscent of parsimony methods in its preference for small haplotype pools. We further generalize the model to treat pedigree relationships (e.g., trios) between the population's genotypes. We apply DP-Haplotyper to the analysis of both simulated and real genotype data, and compare to extant methods.
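The Dirichlet process prior with an unknown number of components can be sketched through its sequential form, the Chinese restaurant process: each item joins an existing cluster with probability proportional to that cluster's size, or opens a new one with probability proportional to a concentration parameter. This illustrates only the prior over partitions, not DP-Haplotyper itself; the value of alpha below is an arbitrary choice.

```python
import random

random.seed(7)

def crp_partition(n, alpha):
    """Draw one partition of n items from the Chinese restaurant process,
    the sequential form of the Dirichlet process prior over partitions."""
    clusters = []                 # current cluster sizes ("table" counts)
    labels = []
    for _ in range(n):
        weights = clusters + [alpha]   # existing sizes, plus alpha for "new"
        k = random.choices(range(len(weights)), weights=weights)[0]
        if k == len(clusters):
            clusters.append(1)         # open a new cluster
        else:
            clusters[k] += 1
        labels.append(k)
    return labels, clusters

labels, sizes = crp_partition(100, alpha=1.0)
print(len(sizes))   # number of clusters; grows like alpha * log(n) on average
```

The rich-get-richer weights are what give the DP its parsimony-like preference for small pools, which the abstract notes DP-Haplotyper inherits.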

18.
A comparison of cluster analysis methods using DNA methylation data
MOTIVATION: Aberrant DNA methylation is common in cancer. DNA methylation profiles differ between tumor types and subtypes and provide a powerful diagnostic tool for identifying clusters of samples and/or genes. DNA methylation data obtained with the quantitative, highly sensitive MethyLight technology is not normally distributed; it frequently contains an excess of zeros. Established tools to analyze this type of data do not exist. Here, we evaluate a variety of methods for cluster analysis to determine which is most reliable. RESULTS: We introduce a Bernoulli-lognormal mixture model for clustering DNA methylation data obtained using MethyLight. We model the outcomes using a two-part distribution having discrete and continuous components. It is compared with standard cluster analysis approaches for continuous data and for discrete data. In a simulation study, we find that the two-part model has the lowest classification error rate for mixture outcome data compared with other approaches. The methods are illustrated using DNA methylation data from a study of lung cancer cell lines. Compared with competing hierarchical clustering methods, the mixture model approaches have the lowest cross-validation error for detecting lung cancer subtype (non-small versus small cell). The Bernoulli-lognormal mixture assigns observations to subgroups with the lowest uncertainty. AVAILABILITY: Software is available upon request from the authors. SUPPLEMENTARY INFORMATION: http://www-rcf.usc.edu/~kims/SupplementaryInfo.html
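The two-part (discrete plus continuous) structure behind the Bernoulli-lognormal model is easy to sketch outside the clustering context: a Bernoulli component for the exact zeros and a lognormal for the positive values, whose maximum-likelihood fits separate cleanly. The data below are hypothetical methylation-like measurements, not MethyLight output.

```python
import math

# Hypothetical zero-inflated measurements in [0, 1).
y = [0.0, 0.0, 0.0, 0.12, 0.25, 0.08, 0.5, 0.0, 0.31, 0.2]

# Bernoulli part: MLE of the zero probability is the zero fraction.
zeros = sum(1 for v in y if v == 0.0)
p_hat = zeros / len(y)

# Lognormal part: MLEs are the mean and SD of log(y) over positives.
logs = [math.log(v) for v in y if v > 0.0]
mu_hat = sum(logs) / len(logs)
sd_hat = (sum((l - mu_hat) ** 2 for l in logs) / len(logs)) ** 0.5

print(round(p_hat, 2))  # 0.4: four of ten observations are exact zeros
```

In the paper's mixture model, each cluster carries its own (p, mu, sigma) triple and the EM weights observations across clusters; the per-cluster M-steps reduce to the closed forms above.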

19.
Yuan Y, Little RJ. Biometrics 2007, 63(4): 1172–1180
This article concerns item nonresponse adjustment for two-stage cluster samples. Specifically, we focus on two types of nonignorable nonresponse: nonresponse depending on covariates and underlying cluster characteristics, and depending on covariates and the missing outcome. In these circumstances, standard weighting and imputation adjustments are liable to be biased. To obtain consistent estimates, we extend the standard random-effects model by modeling these two types of missing data mechanism. We also propose semiparametric approaches based on fitting a spline on the propensity score, to weaken assumptions about the relationship between the outcome and covariates. These new methods are compared with existing approaches by simulation. The National Health and Nutrition Examination Survey data are used to illustrate these approaches.

20.
Zhou C, Wakefield J. Biometrics 2006, 62(2): 515–525
In recent years there has been great interest in making inference for gene expression data collected over time. In this article, we describe a Bayesian hierarchical mixture model for partitioning such data. While conventional approaches cluster the observed data, we assume a nonparametric, random walk model, and partition on the basis of the parameters of this model. The model is flexible and can be tuned to the specific context, respects the order of observations within each curve, acknowledges measurement error, and allows prior knowledge on parameters to be incorporated. The number of partitions may also be treated as unknown, and inferred from the data, in which case computation is carried out via a birth-death Markov chain Monte Carlo algorithm. We first examine the behavior of the model on simulated data, along with a comparison with more conventional approaches, and then analyze meiotic expression data collected over time on fission yeast genes.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) · ICP licence 京ICP备09084417号