Similar Articles (20 results)
1.
A time‐specific log‐linear regression method on quantile residual lifetime is proposed. Under the proposed regression model, any quantile of a time‐to‐event distribution among survivors beyond a certain time point is associated with selected covariates under right censoring. Consistency and asymptotic normality of the regression estimator are established. An asymptotic test statistic is proposed to evaluate the covariate effects on the quantile residual lifetimes at a specific time point. Evaluation of the test statistic does not require estimation of the variance–covariance matrix of the regression estimators, which involves the probability density function of the survival distribution with censoring. Simulation studies are performed to assess finite sample properties of the regression parameter estimator and test statistic. The new regression method is applied to a breast cancer data set with long‐term follow‐up to estimate the patients' median residual lifetimes, adjusting for important prognostic factors.

2.
Tan, Z. (2009). Biometrika, 96(1), 229–236.
Suppose that independent observations are drawn from multiple distributions, each of which is a mixture of two component distributions such that their log density ratio satisfies a linear model with a slope parameter and an intercept parameter. Inference for such models has been studied using empirical likelihood, and mixed results have been obtained. The profile empirical likelihood of the slope and intercept has an irregularity at the null hypothesis that the two component distributions are equal. We derive a profile empirical likelihood and maximum likelihood estimator of the slope alone, and obtain the usual asymptotic properties for the estimator and the likelihood ratio statistic regardless of the null. Furthermore, we show the maximum likelihood estimator of the slope and intercept jointly is consistent and asymptotically normal regardless of the null. At the null, the joint maximum likelihood estimator falls along a straight line through the origin with perfect correlation asymptotically to the first order.

3.
A predictive continuous time model is developed for continuous panel data to assess the effect of time‐varying covariates on the general direction of the movement of a continuous response that fluctuates over time. This is accomplished by reparameterizing the infinitesimal mean of an Ornstein–Uhlenbeck process in terms of its equilibrium mean and a drift parameter, which assesses the rate at which the process reverts to its equilibrium mean. The equilibrium mean is modeled as a linear predictor of covariates. This model can be viewed as a continuous time first‐order autoregressive regression model with time‐varying lag effects of covariates and the response, which is more appropriate for unequally spaced panel data than its discrete time analog. Both maximum likelihood and quasi‐likelihood approaches are considered for estimating the model parameters and their performances are compared through simulation studies. The simpler quasi‐likelihood approach is suggested because it yields an estimator that is of high efficiency relative to the maximum likelihood estimator and it yields a variance estimator that is robust to the diffusion assumption of the model. To illustrate the proposed model, an application to diastolic blood pressure data from a follow‐up study on cardiovascular diseases is presented. Missing observations are handled naturally with this model.
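A minimal sketch of the transition structure implied by such a reparameterization, assuming Ornstein–Uhlenbeck dynamics dX_t = −ρ(X_t − μ)dt + σ dW_t with equilibrium mean μ and drift (mean-reversion) parameter ρ; this is a simplified constant-mean version of the model described above, and all names are hypothetical:

```python
import math
import random

def ou_conditional_mean(x0, mu, rho, dt):
    # E[X_{t+dt} | X_t = x0]: the process decays exponentially fast,
    # at rate rho, from its current value toward the equilibrium mean mu
    return mu + (x0 - mu) * math.exp(-rho * dt)

def simulate_ou(x0, mu, rho, sigma, times, seed=0):
    # Simulate the OU process at arbitrary (possibly unequally spaced)
    # observation times using the exact Gaussian transition distribution
    rng = random.Random(seed)
    xs = [x0]
    for t_prev, t_next in zip(times, times[1:]):
        dt = t_next - t_prev
        m = ou_conditional_mean(xs[-1], mu, rho, dt)
        v = sigma**2 / (2 * rho) * (1 - math.exp(-2 * rho * dt))  # exact transition variance
        xs.append(rng.gauss(m, math.sqrt(v)))
    return xs
```

Because the transition is exact for any spacing dt, unequally spaced panel visits pose no difficulty, which is the advantage over a discrete-time AR(1) noted in the abstract.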

4.
Fay MP, Tiwari RC, Feuer EJ, Zou Z (2006). Biometrics, 62(3), 847–854.
The annual percent change (APC) is often used to measure trends in disease and mortality rates, and a common estimator of this parameter uses a linear model on the log of the age-standardized rates. Under the assumption of linearity on the log scale, which is equivalent to a constant change assumption, APC can be equivalently defined in three ways as transformations of either (1) the slope of the line that runs through the log of each rate, (2) the ratio of the last rate to the first rate in the series, or (3) the geometric mean of the proportional changes in the rates over the series. When the constant change assumption fails then the first definition cannot be applied as is, while the second and third definitions unambiguously define the same parameter regardless of whether the assumption holds. We call this parameter the percent change annualized (PCA) and propose two new estimators of it. The first, the two-point estimator, uses only the first and last rates, assuming nothing about the rates in between. This estimator requires fewer assumptions and is asymptotically unbiased as the size of the population gets large, but has more variability since it uses no information from the middle rates. The second estimator is an adaptive one and equals the linear model estimator with a high probability when the rates are not significantly different from linear on the log scale, but includes fewer points if there are significant departures from that linearity. For the two-point estimator we can use confidence intervals previously developed for ratios of directly standardized rates. For the adaptive estimator, we show through simulation that the bootstrap confidence intervals give appropriate coverage.
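As a concrete illustration of definitions (1) and (2), here is a minimal Python sketch (function names are mine, not the paper's) of the linear-model APC estimator and the two-point PCA estimator; when the rates are exactly log-linear the two coincide:

```python
import math

def apc_linear(rates):
    # Definition (1): APC from the OLS slope of log(rate) on the time index
    n = len(rates)
    t = list(range(n))
    y = [math.log(r) for r in rates]
    tbar = sum(t) / n
    ybar = sum(y) / n
    slope = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
             / sum((ti - tbar) ** 2 for ti in t))
    return 100.0 * (math.exp(slope) - 1.0)

def pca_two_point(rates):
    # Definition (2): annualized percent change from only the first and
    # last rates in the series; no assumption about the rates in between
    n = len(rates)
    return 100.0 * ((rates[-1] / rates[0]) ** (1.0 / (n - 1)) - 1.0)

rates = [math.exp(0.02 * t) for t in range(10)]  # exactly log-linear series
```

On this exactly log-linear series both estimators return 100·(e^0.02 − 1); they diverge only when log-linearity fails, which is the situation the PCA is designed for.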

5.
Zhao and Tsiatis (1997) consider the problem of estimation of the distribution of the quality-adjusted lifetime when the chronological survival time is subject to right censoring. The quality-adjusted lifetime is typically defined as a weighted sum of the times spent in certain states up until death or some other failure time. They propose an estimator and establish the relevant asymptotics under the assumption of independent censoring. In this paper we extend the data structure with a covariate process observed until the end of follow-up and identify the optimal estimation problem. Because of the curse of dimensionality, no globally efficient nonparametric estimators, which have a good practical performance at moderate sample sizes, exist. Given a correctly specified model for the hazard of censoring conditional on the observed quality-of-life and covariate processes, we propose a closed-form one-step estimator of the distribution of the quality-adjusted lifetime whose asymptotic variance attains the efficiency bound if we can correctly specify a lower-dimensional working model for the conditional distribution of quality-adjusted lifetime given the observed quality-of-life and covariate processes. The estimator remains consistent and asymptotically normal even if this latter submodel is misspecified. The practical performance of the estimators is illustrated with a simulation study. We also extend our proposed one-step estimator to the case where treatment assignment is confounded by observed risk factors so that this estimator can be used to test a treatment effect in an observational study.

6.
One-stage and two-stage closed form estimators of latent cell frequencies in multidimensional contingency tables are derived from the weighted least squares criterion. The first stage estimator is asymptotically equivalent to the conditional maximum likelihood estimator and does not necessarily have minimum asymptotic variance. The second stage estimator does have minimum asymptotic variance relative to any other existing estimator. The closed form estimators are defined for any number of latent cells in contingency tables of any order under exact general linear constraints on the logarithms of the nonlatent and latent cell frequencies.

7.
In cohort studies the outcome is often time to a particular event, and subjects are followed at regular intervals. Periodic visits may also monitor a secondary irreversible event influencing the event of primary interest, and a significant proportion of subjects develop the secondary event over the period of follow‐up. The status of the secondary event serves as a time‐varying covariate, but is recorded only at the times of the scheduled visits, generating incomplete time‐varying covariates. While information on a typical time‐varying covariate is missing for the entire follow‐up period except at the visit times, the status of the secondary event is unavailable only between visits where the status has changed, and is thus interval‐censored. One may view the interval‐censored secondary event status as a missing time‐varying covariate, yet the missingness is only partial, since partial information is provided throughout the follow‐up period. Current practice of using the latest observed status produces biased estimators, and the existing missing covariate techniques cannot accommodate the special feature of missingness due to interval censoring. To handle interval‐censored covariates in the Cox proportional hazards model, we propose an available‐data estimator and a doubly robust‐type estimator, as well as the maximum likelihood estimator via the EM algorithm, and present their asymptotic properties. We also present practical approaches that are valid. We demonstrate the proposed methods using our motivating example from the Northern Manhattan Study.

8.
Small-Sample Adjustments for Wald-Type Tests Using Sandwich Estimators
The sandwich estimator of variance may be used to create robust Wald-type tests from estimating equations that are sums of K independent or approximately independent terms. For example, for repeated measures data on K individuals, each term relates to a different individual. These tests applied to a parameter may have greater than nominal size if K is small, or more generally if the parameter to be tested is essentially estimated from a small number of terms in the estimating equation. We offer some practical modifications to these robust Wald-type tests, which asymptotically approach the usual robust Wald-type tests. We show that one of these modifications provides exact coverage for a simple case and examine by simulation the modifications applied to the generalized estimating equations of Liang and Zeger (1986), conditional logistic regression, and the Cox proportional hazard model.
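To fix ideas, here is a toy sketch of the sandwich variance in the simplest setting, the sample mean estimated from K independent terms. Referring the resulting Wald statistic to a t distribution with K − 1 degrees of freedom is one textbook-style small-sample adjustment in the same spirit; it is not the paper's specific modification, and the function names are mine:

```python
import math

def sandwich_var_mean(x):
    # Sandwich (robust) variance of the sample mean: for the estimating
    # equation sum_i (x_i - mu) = 0, the "bread" is 1/K and the "meat" is
    # the sum of squared estimating-function contributions, giving
    # sum_i (x_i - xbar)^2 / K^2
    K = len(x)
    xbar = sum(x) / K
    return sum((xi - xbar) ** 2 for xi in x) / K**2

def wald_stat(x, mu0):
    # Robust Wald-type statistic for H0: mu = mu0; with small K, comparing
    # this to N(0,1) is anti-conservative, and a t reference with K - 1
    # degrees of freedom is a simple small-sample correction
    K = len(x)
    xbar = sum(x) / K
    return (xbar - mu0) / math.sqrt(sandwich_var_mean(x))
```

The anti-conservatism arises because the sandwich variance is itself estimated from only K terms, so its own variability is ignored by the normal reference.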

9.
Cai J, Sen PK, Zhou H (1999). Biometrics, 55(1), 182–189.
A random effects model for analyzing multivariate failure time data is proposed. The work is motivated by the need for assessing the mean treatment effect in a multicenter clinical trial study, assuming that the centers are a random sample from an underlying population. An estimating equation for the mean hazard ratio parameter is proposed. The proposed estimator is shown to be consistent and asymptotically normally distributed. A variance estimator, based on large sample theory, is proposed. Simulation results indicate that the proposed estimator performs well in finite samples. The proposed variance estimator effectively corrects the bias of the naive variance estimator, which assumes independence of individuals within a group. The methodology is illustrated with a clinical trial data set from the Studies of Left Ventricular Dysfunction. This shows that the variability of the treatment effect is higher than found by means of simpler models.

10.
Chen Y, Liang KY (2010). Biometrika, 97(3), 603–620.
This paper considers the asymptotic distribution of the likelihood ratio statistic T for testing a subset of the parameter of interest θ, θ = (γ, η), H0 : γ = γ0, based on the pseudolikelihood L(θ, φ̂), where φ̂ is a consistent estimator of φ, the nuisance parameter. We show that the asymptotic distribution of T under H0 is a weighted sum of independent chi-squared variables. Some sufficient conditions are provided for the limiting distribution to be a chi-squared variable. When the true value of the parameter of interest, θ0, or the true value of the nuisance parameter, φ0, lies on the boundary of parameter space, the problem is shown to be asymptotically equivalent to the problem of testing the restricted mean of a multivariate normal distribution based on one observation from a multivariate normal distribution with misspecified covariance matrix, or from a mixture of multivariate normal distributions. A variety of examples are provided for which the limiting distributions of T may be mixtures of chi-squared variables. We conducted simulation studies to examine the performance of the likelihood ratio test statistics in variance component models and teratological experiments.

11.
For multicenter randomized trials or multilevel observational studies, the Cox regression model has long been the primary approach to study the effects of covariates on time-to-event outcomes. A critical assumption of the Cox model is the proportionality of the hazard functions for modeled covariates, violations of which can result in ambiguous interpretations of the hazard ratio estimates. To address this issue, the restricted mean survival time (RMST), defined as the mean survival time up to a fixed time in a target population, has been recommended as a model-free target parameter. In this article, we generalize the RMST regression model to clustered data by directly modeling the RMST as a continuous function of restriction times with covariates while properly accounting for within-cluster correlations to achieve valid inference. The proposed method estimates regression coefficients via weighted generalized estimating equations, coupled with a cluster-robust sandwich variance estimator to achieve asymptotically valid inference with a sufficient number of clusters. In small-sample scenarios where a limited number of clusters are available, however, the proposed sandwich variance estimator can exhibit negative bias in capturing the variability of regression coefficient estimates. To overcome this limitation, we further propose and examine bias-corrected sandwich variance estimators to reduce the negative bias of the cluster-robust sandwich variance estimator. We study the finite-sample operating characteristics of proposed methods through simulations and reanalyze two multicenter randomized trials.
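The RMST itself is just the area under the survival curve up to the restriction time τ. A minimal, unclustered sketch using a Kaplan–Meier-type estimate (a simplified illustration of the target parameter, not the proposed GEE-based regression; ties are handled sequentially):

```python
def rmst(times, events, tau):
    # Restricted mean survival time: area under the Kaplan-Meier survival
    # curve from 0 to tau. `events` is 1 for an observed event, 0 for
    # right censoring.
    data = sorted(zip(times, events))
    at_risk = len(data)
    s = 1.0          # current Kaplan-Meier survival level
    area = 0.0
    prev_t = 0.0
    for t, d in data:
        t_cap = min(t, tau)
        if t_cap > prev_t:           # accumulate area at the current level
            area += s * (t_cap - prev_t)
            prev_t = t_cap
        if t > tau:
            break
        if d:                        # observed event: KM step down
            s *= (at_risk - 1) / at_risk
        at_risk -= 1                 # censored or failed subjects leave risk set
    if prev_t < tau:                 # extend the last level out to tau
        area += s * (tau - prev_t)
    return area
```

With no censoring and times 1, 2, 3 and τ = 3, the curve steps from 1 to 2/3 to 1/3, so the RMST is 1 + 2/3 + 1/3 = 2.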

12.
Single‐catch traps are frequently used in live‐trapping studies of small mammals. Thus far, a likelihood for single‐catch traps has proven elusive and usually the likelihood for multicatch traps is used for spatially explicit capture–recapture (SECR) analyses of such data. Previous work found the multicatch likelihood to provide a robust estimator of average density. We build on a recently developed continuous‐time model for SECR to derive a likelihood for single‐catch traps. We use this to develop an estimator based on observed capture times and compare its performance by simulation to that of the multicatch estimator for various scenarios with nonconstant density surfaces. While the multicatch estimator is found to be a surprisingly robust estimator of average density, its performance deteriorates with high trap saturation and increasing density gradients. Moreover, it is found to be a poor estimator of the height of the detection function. By contrast, the single‐catch estimators of density, distribution, and detection function parameters are found to be unbiased or nearly unbiased in all scenarios considered. This gain comes at the cost of higher variance. If there is no interest in interpreting the detection function parameters themselves, and if density is expected to be fairly constant over the survey region, then the multicatch estimator performs well with single‐catch traps. However if accurate estimation of the detection function is of interest, or if density is expected to vary substantially in space, then there is merit in using the single‐catch estimator when trap saturation is above about 60%. The estimator's performance is improved if care is taken to place traps so as to span the range of variables that affect animal distribution. As a single‐catch likelihood with unknown capture times remains intractable for now, researchers using single‐catch traps should aim to incorporate timing devices with their traps.

13.
In observational cohort studies with complex sampling schemes, truncation arises when the time to event of interest is observed only when it falls below or exceeds another random time, that is, the truncation time. In more complex settings, observation may require a particular ordering of event times; we refer to this as sequential truncation. Estimators of the event time distribution have been developed for simple left-truncated or right-truncated data. However, these estimators may be inconsistent under sequential truncation. We propose nonparametric and semiparametric maximum likelihood estimators for the distribution of the event time of interest in the presence of sequential truncation, under two truncation models. We show the equivalence of an inverse probability weighted estimator and a product limit estimator under one of these models. We study the large sample properties of the proposed estimators and derive their asymptotic variance estimators. We evaluate the proposed methods through simulation studies and apply the methods to an Alzheimer's disease study. We have developed an R package, seqTrun, for implementation of our method.

14.
We propose a method to estimate the regression coefficients in a competing risks model where the cause-specific hazard for the cause of interest is related to covariates through a proportional hazards relationship and when cause of failure is missing for some individuals. We use multiple imputation procedures to impute missing cause of failure, where the probability that a missing cause is the cause of interest may depend on auxiliary covariates, and combine the maximum partial likelihood estimators computed from several imputed data sets into an estimator that is consistent and asymptotically normal. A consistent estimator for the asymptotic variance is also derived. Simulation results suggest the relevance of the theory in finite samples. Results are also illustrated with data from a breast cancer study.
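The step of combining estimators across imputed data sets typically follows Rubin's rules. A minimal sketch for a scalar coefficient (this illustrates the standard combining formula only, not the paper's full cause-specific hazards procedure; names are mine):

```python
def rubin_combine(estimates, variances):
    # Rubin's rules for m multiply-imputed analyses of a scalar parameter:
    # pool the point estimates and add between-imputation variability
    # (inflated by 1 + 1/m) to the average within-imputation variance.
    m = len(estimates)
    qbar = sum(estimates) / m                                  # pooled estimate
    ubar = sum(variances) / m                                  # within-imputation variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)      # between-imputation variance
    total_var = ubar + (1 + 1 / m) * b
    return qbar, total_var
```

Here each element of `estimates` and `variances` would come from fitting the Cox partial likelihood to one imputed data set with the missing failure causes filled in.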

15.
In randomized clinical trials where the times to event of two treatment groups are compared under a proportional hazards assumption, it has been established that omitting prognostic factors from the model entails an underestimation of the hazards ratio. Heterogeneity due to unobserved covariates in cancer patient populations is a concern since genomic investigations have revealed molecular and clinical heterogeneity in these populations. In HIV prevention trials, heterogeneity is unavoidable and has been shown to decrease the treatment effect over time. This article assesses the influence of trial duration on the bias of the estimated hazards ratio resulting from omitting covariates from the Cox analysis. The true model is defined by including an unobserved random frailty term in the individual hazard that reflects the omitted covariate. Three frailty distributions are investigated: gamma, log‐normal, and binary, and the asymptotic bias of the hazards ratio estimator is calculated. We show that the attenuation of the treatment effect resulting from unobserved heterogeneity strongly increases with trial duration, especially for continuous frailties that are likely to reflect omitted covariates, as they are often encountered in practice. The possibility of interpreting the long‐term decrease in treatment effects as a bias induced by heterogeneity and trial duration is illustrated by a trial in oncology where adjuvant chemotherapy in stage 1B NSCLC was investigated.

16.
Chao A, Chu W, Hsu CH (2000). Biometrics, 56(2), 427–433.
We consider a capture-recapture model in which capture probabilities vary with time and with behavioral response. Two inference procedures are developed under the assumption that recapture probabilities bear a constant relationship to initial capture probabilities. These two procedures are the maximum likelihood method (both unconditional and conditional types are discussed) and an approach based on optimal estimating functions. The population size estimators derived from the two procedures are shown to be asymptotically equivalent when population size is large enough. The performance and relative merits of various population size estimators for finite cases are discussed. The bootstrap method is suggested for constructing a variance estimator and confidence interval. An example of the deer mouse analyzed in Otis et al. (1978, Wildlife Monographs 62, 93) is given for illustration.

17.
We consider methods for estimating the effect of a covariate on a disease onset distribution when the observed data structure consists of right-censored data on diagnosis times and current status data on onset times amongst individuals who have not yet been diagnosed. Dunson and Baird (2001, Biometrics 57, 306–403) approached this problem using maximum likelihood, under the assumption that the ratio of the diagnosis and onset distributions is monotonic nondecreasing. As an alternative, we propose a two-step estimator, an extension of the approach of van der Laan, Jewell, and Petersen (1997, Biometrika 84, 539–554) in the single sample setting, which is computationally much simpler and requires no assumptions on this ratio. A simulation study is performed comparing estimates obtained from these two approaches, as well as that from a standard current status analysis that ignores diagnosis data. Results indicate that the Dunson and Baird estimator outperforms the two-step estimator when the monotonicity assumption holds, but the reverse is true when the assumption fails. The simple current status estimator loses only a small amount of precision in comparison to the two-step procedure but requires monitoring time information for all individuals. In the data that motivated this work, a study of uterine fibroids and chemical exposure to dioxin, the monotonicity assumption is seen to fail. Here, the two-step and current status estimators both show no significant association between the level of dioxin exposure and the hazard for onset of uterine fibroids; the two-step estimator of the relative hazard associated with increasing levels of exposure has the least estimated variance amongst the three estimators considered.

18.
We consider the question: In a segregation analysis, can knowledge of the family-size distribution (FSD) in the population from which a sample is drawn improve the estimators of genetic parameters? In other words, should one incorporate the population FSD into a segregation analysis if one knows it? If so, then under what circumstances? And how much improvement may result? We examine the variance and bias of the maximum likelihood estimators both asymptotically and in finite samples. We consider Poisson and geometric FSDs, as well as a simple two-valued FSD in which all families in the population have either one or two children. We limit our study to a simple genetic model with truncate selection. We find that if the FSD is completely specified, then the asymptotic variance of the estimator may be reduced by as much as 5%-10%, especially when the FSD is heavily skewed toward small families. Results in small samples are less clear-cut. For some of the simple two-valued FSDs, the variance of the estimator in small samples of one- and two-child families may actually be increased slightly when the FSD is included in the analysis. If one knows only the statistical form of the FSD, but not its parameter, then the estimator is improved only minutely. Our study also underlines the fact that results derived from asymptotic maximum likelihood theory do not necessarily hold in small samples. We conclude that in most practical applications it is not worth incorporating the FSD into a segregation analysis. However, this practice may be justified under special circumstances where the FSD is completely specified, without error, and the population consists overwhelmingly of small families.

19.
We present an estimator of average regression effect under a non-proportional hazards model, where the regression effect of the covariates on the log hazard ratio changes with time. In the absence of censoring, the new estimate coincides with the usual partial likelihood estimate, both estimates being consistent for a parameter having an interpretation as an average population regression effect. In the presence of an independent censorship, the new estimate is still consistent for this same population parameter, whereas the partial likelihood estimate will converge to a different quantity that depends on censoring. We give an approximation of the population average effect as ∫β(t) dF(t). The new estimate is easy to compute, requiring only minor modifications to existing software. We illustrate the use of the average effect estimate on a breast cancer dataset from Institut Curie. The behavior of the estimator, its comparison with the partial likelihood estimate, as well as the approximation by ∫β(t) dF(t), are studied via simulation.
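The approximation ∫β(t) dF(t) can be made concrete with a minimal sketch that averages a time-varying coefficient over the empirical failure-time distribution (uncensored case; this is a hypothetical simplification for illustration, not the paper's estimator):

```python
def average_effect(beta, failure_times):
    # Approximate the population average effect  integral of beta(t) dF(t)
    # by averaging the time-varying coefficient beta(.) over the empirical
    # distribution F of the observed (uncensored) failure times.
    return sum(beta(t) for t in failure_times) / len(failure_times)
```

For example, a linearly increasing effect β(t) = t averaged over failure times 1, 2, 3 gives an average effect of 2, whereas a naive proportional hazards fit would return some censoring-dependent compromise between the early and late effects.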

20.
Stare J, Perme MP, Henderson R (2011). Biometrics, 67(3), 750–759.
There is no shortage of proposed measures of prognostic value of survival models in the statistical literature. They come under different names, including explained variation, correlation, explained randomness, and information gain, but their goal is common: to define something analogous to the coefficient of determination R² in linear regression. None however have been uniformly accepted, none have been extended to general event history data, including recurrent events, and many cannot incorporate time‐varying effects or covariates. We present here a measure specifically tailored for use with general dynamic event history regression models. The measure is applicable and interpretable in discrete or continuous time; with tied data or otherwise; with time‐varying, time‐fixed, or dynamic covariates; with time‐varying or time‐constant effects; with single or multiple event times; with parametric or semiparametric models; and under general independent censoring/observation. For single‐event survival data with neither censoring nor time dependency it reduces to the concordance index. We give expressions for its population value and the variance of the estimator and explore its use in simulations and applications. A web link to R software is provided.

