Similar articles (20 results)
1.
Patterns of treatment effects in subsets of patients in clinical trials (cited 2 times: 0 self-citations, 2 by others)
We discuss the practice of examining patterns of treatment effects across overlapping patient subpopulations. In particular, we focus on the case in which patient subgroups are defined to contain patients having increasingly larger (or smaller) values of one particular covariate of interest, with the intent of exploring the possible interaction between treatment effect and that covariate. We formalize these subgroup approaches (STEPP: subpopulation treatment effect pattern plots) and implement them when treatment effect is defined as the difference in survival at a fixed time point between two treatment arms. The joint asymptotic distribution of the treatment effect estimates is derived, and used to construct simultaneous confidence bands around the estimates and to test the null hypothesis of no interaction. These methods are illustrated using data from a clinical trial conducted by the International Breast Cancer Study Group, which demonstrates the critical role of estrogen receptor content of the primary breast cancer for selecting appropriate adjuvant therapy. The considerations are also relevant for general subset analysis, since information from the same patients is typically used in the estimation of treatment effects within two or more subgroups of patients defined with respect to different covariates.
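As a rough illustration of the sliding-subpopulation idea, the sketch below computes the between-arm difference in Kaplan-Meier survival at a fixed time point within overlapping covariate windows. It is not the authors' STEPP implementation (in particular, the simultaneous confidence bands and interaction test are omitted), and all data, window widths, and variable names are invented.

```python
import numpy as np

def km_surv_at(time, event, t0):
    """Kaplan-Meier survival estimate at t0 (assumes no tied event times)."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    n_at_risk, surv = len(time), 1.0
    for t, d in zip(time, event):
        if t > t0:
            break
        if d:                      # an event contributes a factor (1 - 1/n_at_risk)
            surv *= 1.0 - 1.0 / n_at_risk
        n_at_risk -= 1             # events and censorings both leave the risk set
    return surv

rng = np.random.default_rng(1)
n = 600
arm = rng.integers(0, 2, n)                        # two treatment arms
er = rng.uniform(0, 100, n)                        # covariate of interest (e.g. ER content)
# treatment effect grows with the covariate (the interaction to be visualised)
hazard = 0.05 * np.exp(-0.8 * arm * er / 100)
t_event = rng.exponential(1 / hazard)
t_cens = rng.uniform(5, 40, n)
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(int)

# overlapping subpopulations: sliding windows over covariate percentiles
t0, half_width = 10.0, 15.0                        # fixed time point; window half-width in percentiles
for c in np.arange(half_width, 100 - half_width + 1, 5.0):
    lo, hi = np.percentile(er, c - half_width), np.percentile(er, c + half_width)
    sub = (er >= lo) & (er <= hi)
    diff = (km_surv_at(time[sub & (arm == 1)], event[sub & (arm == 1)], t0)
            - km_surv_at(time[sub & (arm == 0)], event[sub & (arm == 0)], t0))
    print(f"covariate window [{lo:5.1f}, {hi:5.1f}]: S1(t0) - S0(t0) = {diff:+.3f}")
```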

2.
Generalized estimating equations (Liang and Zeger, 1986) provide a widely used, moment-based procedure to estimate marginal regression parameters. However, a subtle and often overlooked point is that valid inference requires the mean for the response at time t to be expressed properly as a function of the complete past, present, and future values of any time-varying covariate. For example, with environmental exposures it may be necessary to express the response as a function of multiple lagged values of the covariate series. Despite the fact that multiple lagged covariates may be predictive of outcomes, researchers often focus interest on parameters in a 'cross-sectional' model, where the response is expressed as a function of a single lag in the covariate series. Cross-sectional models yield parameters with simple interpretations and avoid issues of collinearity associated with multiple lagged values of a covariate. Pepe and Anderson (1994) showed that parameter estimates for time-varying covariates may be biased unless the mean, given all past, present, and future covariate values, is equal to the cross-sectional mean or unless independence estimating equations are used. Although working independence avoids potential bias, many authors have shown that a poor choice for the response correlation model can lead to highly inefficient parameter estimates. The purpose of this paper is to study the bias-efficiency trade-off associated with working correlation choices for application with binary response data. We investigate data characteristics or design features (e.g. cluster size, overall response association, functional form of the response association, covariate distribution, and others) that influence the small and large sample characteristics of parameter estimates obtained from several different weighting schemes or, equivalently, 'working' covariance models. We find that the impact of covariance model choice depends highly on the specific structure of the data features, and that key aspects should be examined before choosing a weighting scheme.
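To make the notion of swapping 'working' covariance models concrete, here is a minimal sketch using simulated clustered binary data and the statsmodels GEE implementation; it is not the paper's simulation design, and the effect sizes and variable names are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.genmod.cov_struct import Independence, Exchangeable

rng = np.random.default_rng(0)
n_clusters, n_times = 300, 4

cluster = np.repeat(np.arange(n_clusters), n_times)
b = np.repeat(rng.normal(0, 1.0, n_clusters), n_times)   # cluster-level random effect
x = rng.normal(0, 1, n_clusters * n_times)                # time-varying covariate
p = 1 / (1 + np.exp(-(-0.5 + 0.8 * x + b)))
y = rng.binomial(1, p)

exog = sm.add_constant(pd.DataFrame({"x": x}))
for name, cov in [("independence", Independence()), ("exchangeable", Exchangeable())]:
    res = sm.GEE(y, exog, groups=cluster,
                 family=sm.families.Binomial(), cov_struct=cov).fit()
    print(f"{name:>12}: beta_x = {res.params['x']:.3f} (robust SE = {res.bse['x']:.3f})")
```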

3.
Huang Y, Dagne G. Biometrics 2012, 68(3): 943-953
It is common practice to analyze complex longitudinal data using semiparametric nonlinear mixed-effects (SNLME) models with a normal distribution. The normality assumption for model errors may unrealistically obscure important features of subject variation. To partially explain between- and within-subject variation, covariates are usually introduced in such models, but some covariates may be measured with substantial error. Moreover, the responses may be missing and the missingness may be nonignorable. Inferential procedures become dramatically more complicated when data with skewness, missing values, and measurement error are observed. In the literature, there has been considerable interest in accommodating either skewness, incompleteness, or covariate measurement error in such models, but there has been relatively little study concerning all three features simultaneously. In this article, our objective is to address the simultaneous impact of skewness, missingness, and covariate measurement error by jointly modeling the response and covariate processes based on a flexible Bayesian SNLME model. The method is illustrated using a real AIDS data set to compare potential models under various scenarios and different distribution specifications.

4.
McKeague IW, Tighiouart M. Biometrics 2000, 56(4): 1007-1015
This article introduces a new Bayesian approach to the analysis of right-censored survival data. The hazard rate of interest is modeled as a product of conditionally independent stochastic processes corresponding to (1) a baseline hazard function and (2) a regression function representing the temporal influence of the covariates. These processes jump at times that form a time-homogeneous Poisson process and have a pairwise dependency structure for adjacent values. The two processes are assumed to be conditionally independent given their jump times. Features of the posterior distribution, such as the mean covariate effects and survival probabilities (conditional on the covariate), are evaluated using the Metropolis-Hastings-Green algorithm. We illustrate our methodology by an application to nasopharynx cancer survival data.

5.
We present a parametric family of regression models for interval-censored event-time (survival) data that accommodates both fixed (e.g. baseline) and time-dependent covariates. The model employs a three-parameter family of survival distributions that includes the Weibull, negative binomial, and log-logistic distributions as special cases, and can be applied to data with left-, right-, interval-, or non-censored event times. Standard methods, such as Newton-Raphson, can be employed to estimate the model, and the resulting estimates have an asymptotically normal distribution about the true values with a covariance matrix that is consistently estimated by the information function. The deviance function is described to assess model fit, and a robust sandwich estimate of the covariance may also be employed to provide asymptotically robust inferences when the model assumptions do not apply. Spline functions may also be employed to allow for non-linear covariate effects. The model is applied to data from a long-term study of type 1 diabetes to describe the effects of longitudinal measures of glycemia (HbA1c) over time (the time-dependent covariate) on the risk of progression of diabetic retinopathy (eye disease), an interval-censored event-time outcome.
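A stripped-down sketch of the general approach: a Weibull proportional-hazards likelihood for interval- and right-censored data maximised numerically, with a quasi-Newton optimiser standing in for Newton-Raphson. The paper's three-parameter family, spline terms, and robust sandwich variance are not reproduced, and all data and names are invented.

```python
import numpy as np
from scipy.optimize import minimize

def surv(t, k, log_lam, lin_pred):
    """Weibull PH survival S(t|x) = exp(-exp(log_lam + x'beta) * t^k), with S(0)=1, S(inf)=0."""
    out = np.ones_like(lin_pred)
    pos = np.isfinite(t) & (t > 0)
    out[pos] = np.exp(-np.exp(log_lam + lin_pred[pos]) * t[pos] ** k)
    out[np.isinf(t)] = 0.0
    return out

def neg_loglik(params, L, R, X):
    k, log_lam, beta = np.exp(params[0]), params[1], params[2:]
    lin = X @ beta
    prob = surv(L, k, log_lam, lin) - surv(R, k, log_lam, lin)   # P(L < T <= R | x)
    return -np.sum(np.log(np.clip(prob, 1e-300, None)))

# simulate event times from a Weibull PH model, then interval-censor by periodic inspection visits
rng = np.random.default_rng(2)
n, true_beta, true_k = 400, 0.7, 1.5
X = rng.normal(size=(n, 1))
T = (-np.log(rng.uniform(size=n)) / (0.2 * np.exp(true_beta * X[:, 0]))) ** (1 / true_k)
visits = np.arange(0.5, 6.0, 0.5)                                  # scheduled inspection times
L = np.array([visits[visits < t].max(initial=0.0) for t in T])     # last visit before the event
R = np.array([visits[visits >= t].min(initial=np.inf) for t in T]) # inf => right-censored

fit = minimize(neg_loglik, x0=np.zeros(2 + X.shape[1]), args=(L, R, X), method="BFGS")
print("estimated shape k:", np.exp(fit.x[0]))
print("estimated beta   :", fit.x[2:])
```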

6.
Incomplete covariate data are a common occurrence in studies in which the outcome is survival time. Further, studies in the health sciences often give rise to correlated, possibly censored, survival data. With no missing covariate data, if the marginal distributions of the correlated survival times follow a given parametric model, then the estimates using the maximum likelihood estimating equations, naively treating the correlated survival times as independent, give consistent estimates of the relative risk parameters (Lipsitz et al., 1994, 50, 842-846). Now suppose that some observations within a cluster have missing covariates. We show in this paper that if one naively treats observations within a cluster as independent, one can still use the maximum likelihood estimating equations to obtain consistent estimates of the relative risk parameters. This method requires the estimation of the parameters of the distribution of the covariates. We present results from a clinical trial (Lipsitz and Ibrahim, 1996b, 2, 5-14) with five covariates, four of which have some missing values. In the trial, the clusters are the hospitals in which the patients were treated.

7.
Li E, Wang N, Wang NY. Biometrics 2007, 63(4): 1068-1078
Joint models are formulated to investigate the association between a primary endpoint and features of multiple longitudinal processes. In particular, the subject-specific random effects in a multivariate linear random-effects model for multiple longitudinal processes are predictors in a generalized linear model for primary endpoints. Li, Zhang, and Davidian (2004, Biometrics 60, 1–7) proposed an estimation procedure that makes no distributional assumption on the random effects but assumes independent within-subject measurement errors in the longitudinal covariate process. Based on an asymptotic bias analysis, we found that their estimators can be biased when random effects do not fully explain the within-subject correlations among longitudinal covariate measurements. Specifically, the existing procedure is fairly sensitive to the independent measurement error assumption. To overcome this limitation, we propose new estimation procedures that require neither a distributional or covariance structural assumption on covariate random effects nor an independence assumption on within-subject measurement errors. These new procedures are more flexible, readily cover scenarios that have multivariate longitudinal covariate processes, and can be implemented using available software. Through simulations and an analysis of data from a hypertension study, we evaluate and illustrate the numerical performances of the new estimators.

8.
Zhao and Tsiatis (1997) consider the problem of estimation of the distribution of the quality-adjusted lifetime when the chronological survival time is subject to right censoring. The quality-adjusted lifetime is typically defined as a weighted sum of the times spent in certain states up until death or some other failure time. They propose an estimator and establish the relevant asymptotics under the assumption of independent censoring. In this paper we extend the data structure with a covariate process observed until the end of follow-up and identify the optimal estimation problem. Because of the curse of dimensionality, no globally efficient nonparametric estimators with good practical performance at moderate sample sizes exist. Given a correctly specified model for the hazard of censoring conditional on the observed quality-of-life and covariate processes, we propose a closed-form one-step estimator of the distribution of the quality-adjusted lifetime whose asymptotic variance attains the efficiency bound if we can correctly specify a lower-dimensional working model for the conditional distribution of quality-adjusted lifetime given the observed quality-of-life and covariate processes. The estimator remains consistent and asymptotically normal even if this latter submodel is misspecified. The practical performance of the estimators is illustrated with a simulation study. We also extend our proposed one-step estimator to the case where treatment assignment is confounded by observed risk factors, so that this estimator can be used to test a treatment effect in an observational study.
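The basic weighting idea can be sketched with a simple inverse-probability-of-censoring-weighted (IPCW) estimator in the spirit of Zhao and Tsiatis, reweighting uncensored subjects by a Kaplan-Meier estimate of the censoring survival function. This is not the covariate-adjusted one-step estimator proposed in the paper; the data are simulated and the quality-of-life process is simplified to one constant weight per subject.

```python
import numpy as np

def km_step(time, event):
    """Return (sorted times, KM survival values) as a right-continuous step function."""
    order = np.argsort(time)
    t_sorted, e_sorted = time[order], event[order]
    n, surv, vals = len(t_sorted), 1.0, []
    for i, d in enumerate(e_sorted):
        if d:
            surv *= 1.0 - 1.0 / (n - i)
        vals.append(surv)
    return t_sorted, np.array(vals)

def eval_step(times, vals, x):
    """Evaluate the step function at points x (value 1 before the first jump)."""
    idx = np.searchsorted(times, x, side="right") - 1
    return np.where(idx >= 0, vals[np.maximum(idx, 0)], 1.0)

rng = np.random.default_rng(3)
n = 2000
death = rng.exponential(5.0, n)          # chronological survival time
cens = rng.uniform(0, 12.0, n)           # censoring time
follow = np.minimum(death, cens)
delta = (death <= cens).astype(int)      # 1 = death observed
q = rng.uniform(0.3, 1.0, n)             # subject-specific quality-of-life weight
qal_obs = q * follow                     # quality-adjusted lifetime (known if uncensored)

# Kaplan-Meier estimate of the censoring survival K(t) = P(C > t), treating censorings as "events"
kt, kv = km_step(follow, 1 - delta)
K_at_follow = eval_step(kt, kv, follow)

# IPCW estimate of P(QAL > x): average of delta * 1{QAL > x} / K(T) over all subjects
for x in [1.0, 2.0, 4.0]:
    est = np.mean(delta * (qal_obs > x) / np.clip(K_at_follow, 1e-8, None))
    print(f"P(QAL > {x:.1f}) ~= {est:.3f}")
```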

9.
1. The current study examined the effect of broad-scale climate and individual-specific covariates on nest survival in smallmouth bass over a 20-year period. 2. Large-scale climate indices [winter North Atlantic Oscillation (NAO) and winter El Niño/Southern Oscillation (ENSO)] and body size of parental males were important covariates in nest survival along with nest age and a quadratic trend in survival. 3. We did not find an effect due to a habitat covariate (total effective fetch) or a phenology covariate (degree-days at start of nesting) on nest survival. 4. Male size in the second half of the nesting season was a more influential covariate on nest success than male size in the first half or throughout the nesting period. 5. We present evidence showing that winter NAO/ENSO indices establish limnological conditions the following spring that influence thermal stability of the lake during the nesting period. 6. The combined climate and body size covariates point to nest survival as a function of lagged climate-scale influences on limnology and the individual-scale influence of bioenergetics on the duration of parental care and nest success.

10.
Sinha D, Chen MH, Ghosh SK. Biometrics 1999, 55(2): 585-590
Interval-censored data occur in survival analysis when the survival time of each patient is only known to be within an interval and these censoring intervals differ from patient to patient. For such data, we present some Bayesian discretized semiparametric models, incorporating proportional and nonproportional hazards structures, along with associated statistical analyses and tools for model selection using sampling-based methods. The scope of these methodologies is illustrated through a reanalysis of a breast cancer data set (Finkelstein, 1986, Biometrics 42, 845-854) to test whether the effect of a covariate on survival changes over time.

11.
This paper develops methodology for estimation of the effect of a binary time-varying covariate on failure times when the change time of the covariate is interval censored. The motivating example is a study of cytomegalovirus (CMV) disease in patients with human immunodeficiency virus (HIV) disease. We are interested in determining whether CMV shedding predicts an increased hazard for developing active CMV disease. Since a clinical screening test is needed to detect CMV shedding, the time that shedding begins is only known to lie in an interval bounded by the patient's last negative and first positive tests. In a Cox proportional hazards model with a time-varying covariate for CMV shedding, the partial likelihood depends on the covariate status of every individual in the risk set at each failure time. Due to interval censoring, this is not always known. To solve this problem, we use a Monte Carlo EM algorithm with a Gibbs sampler embedded in the E-step. We generate multiple completed data sets by drawing imputed exact shedding times based on the joint likelihood of the shedding times and event times under the Cox model. The method is evaluated using a simulation study and is applied to the data set described above.
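A simplified multiple-imputation sketch of the data structure: the unknown shedding time is drawn uniformly within its censoring interval (rather than from the joint likelihood via the embedded Gibbs sampler used in the paper), each completed data set is expanded into counting-process form, and a Cox model with the time-varying shedding covariate is fitted to each. The lifelines package and all variable names are assumptions of the sketch.

```python
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(4)
n, n_imp = 300, 10
lam0, beta_true = 0.15, 0.8

# simulate: the hazard of the clinical event rises by exp(beta_true) once shedding starts at s_true
s_true = rng.uniform(0.0, 5.0, n)
e = rng.exponential(1.0, n)
t_event = np.where(e <= lam0 * s_true,
                   e / lam0,
                   s_true + (e - lam0 * s_true) / (lam0 * np.exp(beta_true)))
cens = rng.uniform(2.0, 8.0, n)
time = np.minimum(t_event, cens)
event = (t_event <= cens).astype(int)

# shedding onset is only known to lie between the last negative and first positive monthly tests
# (simplification: assume the bracketing tests are observed for every subject)
left = np.floor(s_true)
right = left + 1.0

betas = []
for _ in range(n_imp):
    s_imp = rng.uniform(left, right)                    # crude imputation within the censoring interval
    rows = []
    for i in range(n):
        if s_imp[i] < time[i]:
            rows.append((i, 0.0, s_imp[i], 0, 0))               # not yet shedding
            rows.append((i, s_imp[i], time[i], 1, event[i]))    # shedding
        else:
            rows.append((i, 0.0, time[i], 0, event[i]))
    long = pd.DataFrame(rows, columns=["id", "start", "stop", "shed", "event"])
    ctv = CoxTimeVaryingFitter()
    ctv.fit(long, id_col="id", start_col="start", stop_col="stop", event_col="event")
    betas.append(ctv.params_["shed"])

print("pooled log hazard ratio for shedding:", np.mean(betas))
# a full analysis would also combine within- and between-imputation variances (Rubin's rules)
```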

12.
Modelling paired survival data with covariates (cited 4 times: 0 self-citations, 4 by others)
The objective of this paper is to consider the parametric analysis of paired censored survival data when additional covariate information is available, as in the Diabetic Retinopathy Study, which assessed the effectiveness of laser photocoagulation in delaying loss of visual acuity. Our first approach is to extend the fully parametric model of Clayton (1978, Biometrika 65, 141-151) to incorporate covariate information. Our second approach is to obtain parameter estimates from an independence working model together with robust variance estimates. The approaches are compared in terms of efficiency and computational considerations. A fundamental consideration in choosing a strategy for the analysis of paired survival data is whether the correlation within a pair is a nuisance parameter or a parameter of intrinsic scientific interest. The approaches are illustrated with the Diabetic Retinopathy Study.
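The second ('independence working model') strategy can be sketched with standard software: fit a model that ignores the pairing for point estimation and request a cluster-robust sandwich variance with the pair as the cluster. In the sketch below a Cox model stands in for the parametric marginal model used in the paper; it assumes the lifelines package, and the paired data are simulated with a shared gamma frailty rather than taken from the Diabetic Retinopathy Study.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n_pairs, beta_true = 200, -0.6

pair = np.repeat(np.arange(n_pairs), 2)
frailty = np.repeat(rng.gamma(shape=2.0, scale=0.5, size=n_pairs), 2)  # shared within-pair frailty
treat = np.tile([0, 1], n_pairs)                                       # one member of each pair treated
t_event = rng.exponential(1.0 / (0.1 * frailty * np.exp(beta_true * treat)))
cens = rng.uniform(5.0, 30.0, 2 * n_pairs)
df = pd.DataFrame({
    "pair": pair,
    "treat": treat,
    "time": np.minimum(t_event, cens),
    "event": (t_event <= cens).astype(int),
})

cph = CoxPHFitter()
# point estimates ignore the pairing; cluster_col requests a sandwich variance clustered on the pair
cph.fit(df, duration_col="time", event_col="event", cluster_col="pair", formula="treat")
print(cph.summary[["coef", "se(coef)"]])
```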

13.
In longitudinal studies where time to a final event is the ultimate outcome, information is often available about intermediate events that individuals may experience during the observation period. Even though many extensions of the Cox proportional hazards model have been proposed to model such multivariate time-to-event data, these approaches are still very rarely applied to real datasets. The aim of this paper is to illustrate the application of extended Cox models for multiple time-to-event data and to show their implementation in popular statistical software packages. We demonstrate a systematic way of jointly modelling similar or repeated transitions in follow-up data by analysing an event-history dataset consisting of 270 breast cancer patients who were followed up for different clinical events during treatment for metastatic disease. First, we show how this methodology can also be applied to non-Markovian stochastic processes by representing these processes as "conditional" Markov processes. Secondly, we compare the application of different Cox-related approaches to the breast cancer data by varying their key model components (i.e. analysis time scale, risk set, and baseline hazard function). Our study showed that extended Cox models are a powerful tool for analysing complex event-history datasets, since the approach can address many dynamic data features such as multiple time scales, dynamic risk sets, time-varying covariates, transition-by-covariate interactions, autoregressive dependence, or intra-subject correlation.

14.
Huang L, Chen MH, Ibrahim JG. Biometrics 2005, 61(3): 767-780
We propose Bayesian methods for estimating parameters in generalized linear models (GLMs) with nonignorably missing covariate data. We show that when improper uniform priors are used for the regression coefficients, phi, of the multinomial selection model for the missing data mechanism, the resulting joint posterior will always be improper if (i) all missing covariates are discrete and an intercept is included in the selection model for the missing data mechanism, or (ii) at least one of the covariates is continuous and unbounded. This impropriety will result regardless of whether proper or improper priors are specified for the regression parameters, beta, of the GLM or the parameters, alpha, of the covariate distribution. To overcome this problem, we propose a novel class of proper priors for the regression coefficients, phi, in the selection model for the missing data mechanism. These priors are robust and computationally attractive in the sense that inferences about beta are not sensitive to the choice of the hyperparameters of the prior for phi and they facilitate a Gibbs sampling scheme that leads to accelerated convergence. In addition, we extend the model assessment criterion of Chen, Dey, and Ibrahim (2004a, Biometrika 91, 45-63), called the weighted L measure, to GLMs and missing data problems as well as extend the deviance information criterion (DIC) of Spiegelhalter et al. (2002, Journal of the Royal Statistical Society B 64, 583-639) for assessing whether the missing data mechanism is ignorable or nonignorable. A novel Markov chain Monte Carlo sampling algorithm is also developed for carrying out posterior computation. Several simulations are given to investigate the performance of the proposed Bayesian criteria as well as the sensitivity of the prior specification. Real datasets from a melanoma cancer clinical trial and a liver cancer study are presented to further illustrate the proposed methods.

15.
Shih JH, Lu SE. Biometrics 2007, 63(3): 673-680
We consider the problem of estimating covariate effects in the marginal Cox proportional hazards model and multilevel associations for child mortality data collected from a vitamin A supplementation trial in Nepal, where the data are clustered within households and villages. For this purpose, a class of multivariate survival models that can be represented by a functional of marginal survival functions and accounts for the hierarchical structure of clustering is exploited. Based on this class of models, an estimation strategy involving a within-cluster resampling procedure is proposed, and a model assessment approach is presented. The asymptotic theory for the proposed estimators and lack-of-fit test is established. The simulation study shows that the estimates are approximately unbiased, and the proposed test statistic is conservative under extremely heavy censoring but approaches the nominal size otherwise. The analysis of the Nepal study data shows that the association of mortality is much greater within households than within villages.
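A bare-bones sketch of the within-cluster resampling step for a single level of clustering (households), assuming the lifelines package and simulated data; the paper's multilevel association model and lack-of-fit test are not reproduced. One child is repeatedly sampled per household, a marginal Cox model is fitted to each resample, and the estimates are averaged, with variance taken as the mean within-resample variance minus the between-resample variance.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(6)
n_house, n_resamples = 300, 200

# simulated clustered data: shared household frailty, 1-4 children per household
rows = []
for h in range(n_house):
    frailty = rng.gamma(2.0, 0.5)
    for _ in range(rng.integers(1, 5)):
        trt = rng.integers(0, 2)                         # e.g. supplementation indicator
        t = rng.exponential(1.0 / (0.05 * frailty * np.exp(-0.5 * trt)))
        c = rng.uniform(5.0, 40.0)
        rows.append((h, trt, min(t, c), int(t <= c)))
df = pd.DataFrame(rows, columns=["house", "trt", "time", "event"])

betas, variances = [], []
for _ in range(n_resamples):
    # draw one child per household and fit an ordinary Cox model to the resampled data
    sampled = df.groupby("house").sample(n=1, random_state=int(rng.integers(1_000_000_000)))
    cph = CoxPHFitter().fit(sampled[["time", "event", "trt"]],
                            duration_col="time", event_col="event")
    betas.append(cph.params_["trt"])
    variances.append(cph.standard_errors_["trt"] ** 2)

# within-cluster resampling combination of the resample-specific fits
beta_hat = np.mean(betas)
var_hat = np.mean(variances) - np.var(betas, ddof=1)
print(f"beta_hat = {beta_hat:.3f}, SE = {np.sqrt(max(var_hat, 0.0)):.3f}")
```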

16.
S. Mandal, J. Qin, R.M. Pfeiffer. Biometrics 2023, 79(3): 1701-1712
We propose and study a simple and innovative non-parametric approach to estimate the age-of-onset distribution for a disease from a cross-sectional sample of the population that includes individuals with prevalent disease. First, we estimate the joint distribution of two event times, the age of disease onset and the survival time after disease onset. We account for the fact that individuals had to be alive at the time of the study by conditioning on their survival until the age at sampling. We propose a computationally efficient expectation–maximization (EM) algorithm and derive the asymptotic properties of the resulting estimates. From these joint probabilities we then obtain non-parametric estimates of the age-at-onset distribution by marginalizing over the survival time after disease onset to death. The method accommodates categorical covariates and can be used to obtain unbiased estimates of the covariate distribution in the source population. We show in simulations that our method performs well in finite samples even under large amounts of truncation for prevalent cases. We apply the proposed method to data from female participants in the Washington Ashkenazi Study to estimate the age-at-onset distribution of breast cancer associated with carrying BRCA1 or BRCA2 mutations.

17.
G. Y. Yi, W. Liu, Lang Wu. Biometrics 2011, 67(1): 67-75
Longitudinal data arise frequently in medical studies, and it is common practice to analyze such data with generalized linear mixed models. Such models enable us to account for various types of heterogeneity, including between- and within-subject heterogeneity. Inferential procedures become dramatically more complicated when missing observations or measurement error arise. In the literature, there has been considerable interest in accommodating either incompleteness or covariate measurement error under random effects models. However, there is relatively little work concerning both features simultaneously. There is a need to fill this gap, as longitudinal data often have both characteristics. In this article, our objectives are to study the simultaneous impact of missingness and covariate measurement error on inferential procedures and to develop a method that is both computationally feasible and theoretically valid. Simulation studies are conducted to assess the performance of the proposed method, and a real example is analyzed with the proposed method.

18.
Researchers in observational survival analysis are interested not only in estimating the survival curve nonparametrically but also in obtaining statistical inference for this parameter. We consider right-censored failure time data where we observe n independent and identically distributed observations of a vector random variable consisting of baseline covariates, a binary treatment at baseline, a survival time subject to right censoring, and the censoring indicator. We allow the baseline covariates to affect both the treatment and the censoring, so that an estimator that ignores covariate information would be inconsistent. The goal is to use these data to estimate the counterfactual average survival curve of the population if all subjects were assigned the same treatment at baseline. Existing observational survival analysis methods do not result in monotone survival curve estimators, which is undesirable and may lose efficiency by not constraining the shape of the estimator using prior knowledge of the estimand. In this paper, we present a one-step Targeted Maximum Likelihood Estimator (TMLE) for estimating the counterfactual average survival curve. We show that this new TMLE can be executed via recursion in small local updates. We demonstrate the finite sample performance of this one-step TMLE in simulations and in an application to monoclonal gammopathy data.

19.
Time-varying, individual covariates are problematic in experiments with marked animals because the covariate can typically only be observed when each animal is captured. We examine three methods to incorporate time-varying, individual covariates of the survival probabilities into the analysis of data from mark-recapture-recovery experiments: deterministic imputation, a Bayesian imputation approach based on modeling the joint distribution of the covariate and the capture history, and a conditional approach considering only the events for which the associated covariate data are completely observed (the trinomial model). After describing the three methods, we compare results from their application to the analysis of the effect of body mass on the survival of Soay sheep (Ovis aries) on the Isle of Hirta, Scotland. Simulations based on these results are then used to make further comparisons. We conclude that the trinomial model and the Bayesian imputation method each perform best in different situations. If the capture and recovery probabilities are all high, then the trinomial model produces precise, unbiased estimators that do not depend on any assumptions regarding the distribution of the covariate. In contrast, the Bayesian imputation method performs substantially better when capture and recovery probabilities are low, provided that the specified model of the covariate is a good approximation to the true data-generating mechanism.
