Similar Articles
Found 20 similar articles (search time: 15 ms)
1.
P. Saha  P. J. Heagerty 《Biometrics》2010,66(4):999-1011
Competing risks arise naturally in time-to-event studies. In this article, we propose time-dependent accuracy measures for a marker when we have censored survival times and competing risks. Time-dependent versions of sensitivity or true positive (TP) fraction naturally correspond to consideration of either cumulative (or prevalent) cases that accrue over a fixed time period, or alternatively to incident cases that are observed among event-free subjects at any select time. Time-dependent (dynamic) specificity (1 − false positive (FP) fraction) can be based on the marker distribution among event-free subjects. We extend these definitions to incorporate cause of failure for competing risks outcomes. The proposed estimation for cause-specific cumulative TP/dynamic FP is based on nearest neighbor estimation of the bivariate distribution function of the marker and the event time. On the other hand, incident TP/dynamic FP can be estimated using a possibly nonproportional hazards Cox model for the cause-specific hazards and riskset reweighting of the marker distribution. The proposed methods extend the time-dependent predictive accuracy measures of Heagerty, Lumley, and Pepe (2000, Biometrics 56, 337–344) and Heagerty and Zheng (2005, Biometrics 61, 92–105).
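The cumulative/dynamic definitions above have a simple empirical form when event times are fully observed. The sketch below (an illustration only; the paper's nearest-neighbor estimator is what handles censoring, which this deliberately ignores) computes the cause-specific cumulative TP and dynamic FP fractions at a horizon t:

```python
import numpy as np

def cumulative_dynamic_accuracy(marker, time, cause, t, c, cutoff):
    """Cumulative-case TP and dynamic FP for cause `c` at horizon `t`,
    assuming fully observed (uncensored) event times."""
    cases = (time <= t) & (cause == c)      # cause-c events accrued by time t
    controls = time > t                     # subjects still event-free at t
    tp = np.mean(marker[cases] > cutoff)    # sensitivity among cumulative cases
    fp = np.mean(marker[controls] > cutoff) # 1 - dynamic specificity
    return tp, fp
```

Sweeping `cutoff` over the observed marker values traces out the time-dependent ROC curve at t for cause c.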

2.
Often a binary variable is generated by dichotomizing an underlying continuous variable measured at a specific time point according to a prespecified threshold value. In the event that the underlying continuous measurements are from a longitudinal study, one can use the repeated-measures model to impute missing data on responder status as a result of subject dropout and apply the logistic regression model on the observed or otherwise imputed responder status. Standard Bayesian multiple imputation techniques (Rubin, 1987, in Multiple Imputation for Nonresponse in Surveys) that draw the parameters for the imputation model from the posterior distribution and construct the variance of parameter estimates for the analysis model as a combination of within- and between-imputation variances are found to be conservative. The frequentist multiple imputation approach that fixes the parameters for the imputation model at the maximum likelihood estimates and constructs the variance of parameter estimates for the analysis model using the results of Robins and Wang (2000, Biometrika 87, 113–124) is shown to be more efficient. We propose to apply the Kenward and Roger (1997, Biometrics 53, 983–997) degrees of freedom to account for the uncertainty associated with variance–covariance parameter estimates for the repeated-measures model.
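The "combination of within- and between-imputation variances" referred to is Rubin's combining rule. A minimal sketch of that rule, for m completed-data estimates of a scalar parameter:

```python
import numpy as np

def rubins_rules(estimates, variances):
    """Combine m multiply-imputed estimates via Rubin's rules:
    total variance = within-imputation + (1 + 1/m) * between-imputation."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    qbar = estimates.mean()          # pooled point estimate
    w = variances.mean()             # within-imputation variance
    b = estimates.var(ddof=1)        # between-imputation variance
    total = w + (1 + 1 / m) * b
    return qbar, total
```

The conservatism discussed in the abstract concerns exactly this total-variance formula when the imputation parameters are drawn from a posterior rather than fixed at their MLEs.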

3.
Wang C  Daniels MJ 《Biometrics》2011,67(3):810-818
Pattern mixture modeling is a popular approach for handling incomplete longitudinal data. Such models are not identifiable by construction. Identifying restrictions is one approach to mixture model identification (Little, 1995, Journal of the American Statistical Association 90, 1112–1121; Little and Wang, 1996, Biometrics 52, 98–111; Thijs et al., 2002, Biostatistics 3, 245–265; Kenward, Molenberghs, and Thijs, 2003, Biometrika 90, 53–71; Daniels and Hogan, 2008, in Missing Data in Longitudinal Studies: Strategies for Bayesian Modeling and Sensitivity Analysis) and is a natural starting point for missing-not-at-random sensitivity analysis (Thijs et al., 2002; Daniels and Hogan, 2008). However, when the pattern-specific models are multivariate normal, identifying restrictions corresponding to missing at random (MAR) may not exist. Furthermore, identification strategies can be problematic in models with covariates (e.g., baseline covariates with time-invariant coefficients). In this article, we explore conditions necessary for identifying restrictions that result in MAR to exist under a multivariate normality assumption, and strategies for identifying sensitivity parameters for sensitivity analysis or for a fully Bayesian analysis with informative priors. In addition, we propose alternative modeling and sensitivity analysis strategies under a less restrictive assumption for the distribution of the observed response data. We adopt the deviance information criterion for model comparison and perform a simulation study to evaluate the performance of the different modeling approaches. We also apply the methods to a longitudinal clinical trial. Problems caused by baseline covariates with time-invariant coefficients are investigated, and an alternative identifying restriction based on residuals is proposed as a solution.

4.
Cook, Gold, and Li (2007, Biometrics 63, 540–549) extended the Kulldorff (1997, Communications in Statistics 26, 1481–1496) scan statistic for spatial cluster detection to survival-type observations. Their approach was based on the score statistic, and they proposed a permutation distribution for the maximum of score tests. The score statistic makes it possible to apply the scan statistic idea to models including explanatory variables. However, we show that the permutation distribution requires strong assumptions of independence between the potential cluster and both censoring and explanatory variables. In contrast, we present an approach using the asymptotic distribution of the maximum of score statistics in a manner that does not require these assumptions.

5.
In genetic family studies, ages at onset of diseases are routinely collected. Often one is interested in assessing the familial association of ages at onset of a certain disease type. However, when a competing risk is present and is related to the disease of interest, the usual measure of association obtained by treating the competing event as an independent censoring event is biased. We propose a bivariate model that incorporates two types of association: one between the first event times of paired members, and the other between the failure types given the first event times. We consider flexible measures for both types of association and estimate the corresponding association parameters by adopting the two-stage estimation of Shih and Louis (1995, Biometrics 51, 1384–1399) and Nan et al. (2006, Journal of the American Statistical Association 101, 65–77). The proposed method is illustrated using the kinship data from the Washington Ashkenazi Study.

6.
Menggang Yu  Bin Nan 《Biometrics》2010,66(2):405-414
In large cohort studies, it often happens that some covariates are expensive to measure and hence only measured on a validation set, while relatively cheap but error-prone measurements of the covariates are available for all subjects. The regression calibration (RC) estimation method (Prentice, 1982, Biometrika 69, 331–342) is a popular method for analyzing such data and has been applied to the Cox model by Wang et al. (1997, Biometrics 53, 131–145) under normal measurement error and rare disease assumptions. In this article, we consider the RC estimation method for the semiparametric accelerated failure time model with covariates subject to measurement error. Asymptotic properties of the proposed method are investigated under a two-phase sampling scheme for validation data that are selected via stratified random sampling, resulting in observations that are neither independent nor identically distributed. We show that the estimates converge to well-defined parameters. In particular, unbiased estimation is feasible under additive normal measurement error models for normal covariates and under Berkson error models. The proposed method performs well in finite-sample simulation studies. We also apply the proposed method to a depression mortality study.
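The two-stage logic of regression calibration can be sketched in a few lines. This is a simplified illustration only: it uses a linear model for the log failure time in place of the paper's semiparametric AFT estimation, simple random sampling of the validation set in place of stratified two-phase sampling, and classical additive normal error, all of which are assumptions of the sketch rather than of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_val = 2000, 400

x = rng.normal(0, 1, n)                        # true covariate (expensive)
w = x + rng.normal(0, 0.5, n)                  # error-prone surrogate (cheap)
log_t = 1.0 + 2.0 * x + rng.normal(0, 0.3, n)  # log failure time

# Stage 1: calibration model E[X | W], fit on the validation subset only.
val = np.arange(n_val)
A = np.column_stack([np.ones(n_val), w[val]])
gamma, *_ = np.linalg.lstsq(A, x[val], rcond=None)
x_hat = gamma[0] + gamma[1] * w                # calibrated covariate for everyone

# Stage 2: outcome model fit with the calibrated covariate in place of X.
B = np.column_stack([np.ones(n), x_hat])
beta, *_ = np.linalg.lstsq(B, log_t, rcond=None)
```

Under these joint-normality assumptions, replacing `x` with `E[X | W]` removes the attenuation bias that a naive regression on `w` would suffer, so `beta[1]` recovers the true slope of 2.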

7.
D. Dail  L. Madsen 《Biometrics》2011,67(2):577-587
Using only spatially and temporally replicated point counts, Royle (2004b, Biometrics 60, 108–115) developed an N-mixture model to estimate the abundance of an animal population when individual animal detection probability is unknown. One assumption inherent in this model is that the animal populations at each sampled location are closed with respect to migration, births, and deaths throughout the study. In the past this has been verified solely by biological arguments related to the study design, as no statistical verification was available. In this article, we propose a generalization of the N-mixture model that can be used to formally test the closure assumption. Additionally, when applied to an open metapopulation, the generalized model provides estimates of population dynamics parameters and yields abundance estimates that account for imperfect detection probability and do not require the closure assumption. A simulation study shows these abundance estimates are less biased than the abundance estimate obtained from the original N-mixture model. The proposed model is then applied to two data sets of avian point counts. The first example demonstrates the closure test on a single-season study of Mallards (Anas platyrhynchos), and the second uses the proposed model to estimate the population dynamics parameters and yearly abundance of American Robins (Turdus migratorius) from a multi-year study.

8.
Ye, Lin, and Taylor (2008, Biometrics 64, 1238–1246) proposed a joint model for longitudinal measurements and time-to-event data in which the longitudinal measurements are modeled with a semiparametric mixed model to allow for the complex patterns in longitudinal biomarker data. They proposed a two-stage regression calibration approach that is simpler to implement than a joint modeling approach. In the first stage of their approach, the mixed model is fit without regard to the time-to-event data. In the second stage, the posterior expectations of an individual's random effects from the mixed model are included as covariates in a Cox model. Although Ye et al. (2008) acknowledged that their regression calibration approach may cause bias due to informative dropout and measurement error, they argued that the bias is small relative to alternative methods. In this article, we show that this bias may be substantial. We show how to alleviate much of this bias with an alternative regression calibration approach that can be applied for both discrete and continuous time-to-event data. Through simulations, the proposed approach is shown to have substantially less bias than the regression calibration approach proposed by Ye et al. (2008). In agreement with the methodology proposed by Ye et al. (2008), an advantage of our proposed approach over joint modeling is that it can be implemented with standard statistical software and does not require complex estimation techniques.

9.
Wages NA  Conaway MR  O'Quigley J 《Biometrics》2011,67(4):1555-1563
Much of the statistical methodology underlying the experimental design of phase 1 trials in oncology is intended for studies involving a single cytotoxic agent. The goal of these studies is to estimate the maximally tolerated dose, the highest dose that can be administered with an acceptable level of toxicity. A fundamental assumption of these methods is monotonicity of the dose–toxicity curve. This is a reasonable assumption for single-agent trials, in which the administration of greater doses of the agent can be expected to produce dose-limiting toxicities in increasing proportions of patients. When studying multiple agents, the assumption may not hold because the ordering of the toxicity probabilities could be unknown for several of the available drug combinations. At the same time, some of the orderings are known, and so we describe the whole situation as one of partial ordering. In this article, we propose a new two-dimensional dose-finding method for multiple-agent trials that simplifies to the continual reassessment method (CRM), introduced by O'Quigley, Pepe, and Fisher (1990, Biometrics 46, 33–48), when the ordering is fully known. This design enables us to relax the assumption of a monotonic dose–toxicity curve. We compare our approach, with simulation results, to a CRM design in which the ordering is known, as well as to other suggestions for partial orders.

10.
Stuart G. Baker 《Biometrics》2011,67(1):319-323
Recently, Cheng (2009, Biometrics 65, 96–103) proposed a model for the causal effect of receiving treatment when there is all-or-none compliance in one randomization group, with maximum likelihood estimation based on convex programming. We discuss an alternative approach that involves a model for all-or-none compliance in two randomization groups and estimation via a perfect fit or an expectation–maximization algorithm for count data. We believe this approach is easier to implement, which would facilitate the reproduction of calculations.

11.
The Accelerated Failure Time Model Under Biased Sampling   (Cited: 1; self-citations: 0; citations by others: 1)
Chen (2009, Biometrics) studies the semiparametric accelerated failure time model for data that are size biased. Chen considers only the uncensored case and uses hazard-based estimation methods originally developed for censored observations. However, for uncensored data, a simple linear regression on the log scale is more natural and provides better estimators.
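The log-scale regression the comment favors can be sketched directly. The key property this sketch relies on (an assumption made explicit here, not a claim from the abstract): with log T linear in x and independent errors, size-biased sampling tilts the error distribution by a factor e^eps that does not depend on x, so OLS of log T on x keeps a consistent slope and only the intercept shifts (by sigma^2 = 0.16 here, for normal errors).

```python
import numpy as np

rng = np.random.default_rng(1)
pool = 100000

# Population model: log T = 0.5 + 1.5 x + eps, eps ~ N(0, 0.4^2).
x = rng.normal(0.0, 1.0, pool)
eps = rng.normal(0.0, 0.4, pool)
t = np.exp(0.5 + 1.5 * x + eps)

# Size-biased sample: draw with probability proportional to T.
idx = rng.choice(pool, size=5000, replace=True, p=t / t.sum())

# OLS of log T on x in the biased sample; slope stays near 1.5,
# intercept shifts from 0.5 to roughly 0.5 + 0.4^2 = 0.66.
A = np.column_stack([np.ones(5000), x[idx]])
beta, *_ = np.linalg.lstsq(A, np.log(t[idx]), rcond=None)
```
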

12.
Dahm PF  Olmsted AW  Greenbaum IF 《Biometrics》2002,58(4):1028-1031
Böhm et al. (1995, Human Genetics 95, 249–256) introduced a statistical model (named FSM, the fragile site model) specifically designed for the identification of fragile sites from chromosomal breakage data. In response to claims to the contrary (Hou et al., 1999, Human Genetics 104, 350–355; Hou et al., 2001, Biometrics 57, 435–440), we show how the FSM model is correctly modified for application under the assumption that the probability of random breakage is proportional to chromosomal band length, and how the purportedly alternative procedures proposed by Hou, Chang, and Tai (1999, 2001) are variations of the correctly modified FSM algorithm. With the exception of the test statistic employed, the procedure described by Hou et al. (1999) is shown to be functionally identical to the correctly modified FSM, and the application of an incorrectly modified FSM is shown to invalidate all of the comparisons of FSM to the alternatives proposed by Hou et al. (1999, 2001). Lastly, we discuss the statistical implications of the methodological variations proposed by Hou et al. (2001) and emphasize the logical and statistical necessity for fragile site identifications to be based on data from single individuals.

13.
Hsieh JJ  Ding AA  Wang W 《Biometrics》2011,67(3):719-729
Recurrent events data are commonly seen in longitudinal follow-up studies. Dependent censoring often occurs due to death or exclusion from the study related to the disease process. In this article, we assume flexible marginal regression models on the recurrence process and the dependent censoring time without specifying their dependence structure. The proposed model generalizes the approach of Ghosh and Lin (2003, Biometrics 59, 877–885). The technique of artificial censoring provides a way to maintain the homogeneity of the hypothetical error variables under dependent censoring. Here we propose to apply this technique to two Gehan-type statistics: one considers only order information for pairs, whereas the other utilizes the additional information on observed censoring times available for recurrence data. A model-checking procedure is also proposed to assess the adequacy of the fitted model. The proposed estimators have good asymptotic properties. Their finite-sample performance is examined via simulations. Finally, the proposed methods are applied to analyze the AIDS Linked to the IntraVenous Experience (ALIVE) cohort data.

14.
We discuss the issue of identifiability of models for multiple dichotomous diagnostic tests in the absence of a gold standard (GS) test. Data arise as multinomial or product-multinomial counts depending upon the number of populations sampled. Models are generally posited in terms of population prevalences, test sensitivities and specificities, and test dependence terms. It is commonly believed that if the degrees of freedom in the data meet or exceed the number of parameters in a fitted model, then the model is identifiable. Goodman (1974, Biometrika 61, 215–231) established long ago that this is not the case. We discuss currently available models for multiple tests and argue in favor of an extension of the model developed by Dendukuri and Joseph (2001, Biometrics 57, 158–167). Subsequently, we further develop Goodman's technique and make geometric arguments to give further insight into the nature of models that lack identifiability. We present illustrations using simulated and real data.

15.
Interval estimation of the ratio of two binomial proportions based on the score statistic is superior to other methods. Iterative algorithms for calculating the approximate confidence interval have been provided by, e.g., Koopman (1984, Biometrics 40, 513–517) and Gart and Nam (1988a, Biometrics 44, 323–338). This note presents the analytical solutions for the upper and lower confidence limits in closed form and gives examples for numerical illustration. The non-iterative method is generally more desirable than the iterative method.
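The interval being solved in closed form is the set of ratios phi at which Koopman's score statistic stays below the chi-square critical value. As a hedged illustration of what the closed form replaces, the same limits can be found by the iterative route: compute the constrained MLE of p2 (the smaller root of a quadratic) and bisect the score statistic on either side of the point estimate. This sketch is not the note's closed-form solution.

```python
import numpy as np

CHI2_95_DF1 = 3.841458820694124   # 0.95 quantile of chi-square(1)

def koopman_score_stat(phi, x1, n1, x2, n2):
    """Koopman's score statistic for H0: p1/p2 = phi; the constrained
    MLE of p2 is the smaller root of a quadratic in p2."""
    a = phi * (n1 + n2)
    b = phi * (n1 + x2) + x1 + n2
    c = x1 + x2
    p2 = (b - np.sqrt(b * b - 4 * a * c)) / (2 * a)
    p1 = phi * p2
    return ((x1 - n1 * p1) ** 2 / (n1 * p1 * (1 - p1))
            + (x2 - n2 * p2) ** 2 / (n2 * p2 * (1 - p2)))

def koopman_ci(x1, n1, x2, n2):
    """95% score interval for p1/p2 by bisection around the point
    estimate, where the statistic equals zero (requires 0 < x1, x2)."""
    f = lambda phi: koopman_score_stat(phi, x1, n1, x2, n2) - CHI2_95_DF1
    phi_hat = (x1 / n1) / (x2 / n2)
    def bisect(lo, hi):
        for _ in range(200):
            mid = (lo + hi) / 2
            if f(lo) * f(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2
    return bisect(1e-8, phi_hat), bisect(phi_hat, 1e4)
```

The closed-form limits of the note are the exact roots of the same equation that the bisection approximates.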

16.
Several statistical methods for detecting associations between quantitative traits and candidate genes in structured populations have been developed for fully observed phenotypes. However, many experiments are concerned with failure-time phenotypes, which are usually subject to censoring. In this article, we propose statistical methods for detecting associations between a censored quantitative trait and candidate genes in structured populations with complex multiple levels of genetic relatedness among sampled individuals. The proposed methods correct for continuous population stratification using both population structure variables as covariates and frailty terms attributable to kinship. The relationship between the time-at-onset data and genotypic scores at a candidate marker is modeled via a parametric Weibull frailty accelerated failure time (AFT) model as well as a semiparametric frailty AFT model, in which the baseline survival function is flexibly modeled as a mixture of Polya trees centered around a family of Weibull distributions. For both parametric and semiparametric models, the frailties are modeled via an intrinsic Gaussian conditional autoregressive prior distribution, with the kinship matrix serving as the adjacency matrix connecting subjects. Simulation studies and applications to the Arabidopsis thaliana line flowering time data sets demonstrate the advantage of the new proposals over existing approaches.

17.
Cover: The cover schematically illustrates induction of pluripotent stem cells from somatic cells by protein-based reprogramming. Please see the article by Romli et al., pages 1230–1237.

18.
Wei Zhang  Simon J. Bonner 《Biometrics》2020,76(3):1028-1033
Schofield et al. (2018, Biometrics 74, 626–635) presented simple and efficient algorithms for fitting continuous-time capture-recapture models based on Poisson processes. They also demonstrated with real examples that the standard method of discretizing continuous-time capture-recapture data and then fitting traditional discrete-time models may lead to information loss in population size estimation. In this article, we aim to clarify that the key to the approach of Schofield et al. (2018) is the Poisson model assumed for the number of captures of each individual throughout the study, rather than the fact that the data are collected in continuous time. We further show that the method of data discretization works equally well as the method of Schofield et al. (2018), provided that a Poisson model is applied instead of the traditional Bernoulli model to the number of captures for each individual on each sampling occasion.
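A minimal sketch of why the Poisson assumption on per-individual capture counts is the operative ingredient: with counts Y ~ Poisson(lambda), observed individuals are those with Y >= 1, so lambda can be fit from the zero-truncated counts and the population size inflated by the implied detection probability. This estimator is a textbook zero-truncated Poisson construction offered as an illustration, not the specific algorithm of Schofield et al. (2018).

```python
import numpy as np

def estimate_population(capture_counts):
    """Population size under a Poisson model for per-individual capture
    counts: fit lambda by matching the zero-truncated Poisson mean
    (valid when the mean count exceeds 1), then inflate the number of
    distinct individuals seen by P(captured at least once)."""
    counts = np.asarray(capture_counts, dtype=float)
    d = len(counts)                      # distinct individuals ever captured
    mean = counts.mean()                 # E[Y | Y >= 1] = lam / (1 - exp(-lam))
    lo, hi = 1e-8, max(mean, 1.0) * 10   # bisection bracket for lam
    for _ in range(200):
        lam = (lo + hi) / 2
        if lam / (1 - np.exp(-lam)) < mean:
            lo = lam
        else:
            hi = lam
    p_seen = 1 - np.exp(-lam)            # probability of at least one capture
    return d / p_seen
```
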

19.
This note is in response to Wouters et al. (2003, Biometrics 59, 1131–1139), who compared three methods for exploring gene expression data. Contrary to their summary that principal component analysis is not very informative, we show that it is possible to determine principal component analyses that are useful for exploratory analysis of microarray data. We also present another biplot representation, the GE-biplot (Gene Expression biplot), a useful method for exploring gene expression data with the major advantage of being able to aid interpretation of both the samples and the genes relative to each other.
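A biplot displays samples and variables (here, genes) on the same principal-component axes. The sketch below computes standard form-biplot coordinates from an SVD of the column-centered expression matrix; it illustrates the generic construction, not the specific GE-biplot scaling of the note.

```python
import numpy as np

def biplot_coordinates(x):
    """Principal-component biplot coordinates from an SVD of the
    column-centered data matrix (rows = samples, columns = genes)."""
    xc = x - x.mean(axis=0)
    u, s, vt = np.linalg.svd(xc, full_matrices=False)
    sample_coords = u[:, :2] * s[:2]   # samples in the first two PCs
    gene_loadings = vt[:2].T           # gene directions on the same axes
    return sample_coords, gene_loadings
```

By construction, `sample_coords @ gene_loadings.T` is the best rank-2 approximation of the centered matrix, which is what lets samples and genes be interpreted relative to each other on one plot.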

20.
An accelerated failure time (AFT) model assuming a log-linear relationship between failure time and a set of covariates can be either parametric or semiparametric, depending on the distributional assumption for the error term. Both classes of AFT models have been popular in the analysis of censored failure time data. The semiparametric AFT model is more flexible and robust to departures from the distributional assumption than its parametric counterpart. However, the semiparametric AFT model is subject to producing biased results for estimating any quantities involving an intercept. Estimating an intercept requires a separate procedure; moreover, consistent estimation of the intercept requires stringent conditions. Thus, essential quantities such as mean failure times might not be reliably estimated using semiparametric AFT models, whereas this can be done naturally in the framework of parametric AFT models. Meanwhile, parametric AFT models can be severely impaired by misspecification. To overcome this, we propose a new type of AFT model using a nonparametric Gaussian-scale mixture distribution. We also provide feasible algorithms to estimate the parameters and the mixing distribution. The finite-sample properties of the proposed estimators are investigated via an extensive simulation study. The proposed estimators are illustrated using a real dataset.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司) | ICP license: 京ICP备09084417号