Similar Documents (20 results)
1.
In this paper, we develop a Gaussian estimation (GE) procedure to estimate the parameters of a regression model for correlated (longitudinal) binary response data using a working correlation matrix. A two-step iterative procedure is proposed for estimating the regression parameters by the GE method and the correlation parameters by the method of moments. Consistency properties of the estimators are discussed. A simulation study was conducted to compare 11 estimators of the regression parameters, namely, four versions of the GE, five versions of the generalized estimating equations (GEEs), and two versions of the weighted GEE. Simulations show that (i) the Gaussian estimates have the smallest mean square error and the best coverage probability if the working correlation structure is correctly specified, and (ii) when the working correlation structure is misspecified, the GE and the GEE with exchangeable correlation structure perform best.
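The moment step of the two-step procedure above can be sketched in a few lines. This is a toy illustration, not code from the paper; the function name and the assumption that standardized Pearson residuals are already in hand are mine. For an exchangeable working structure, the common correlation is estimated by averaging products of residual pairs across clusters:

```python
def exchangeable_corr(residuals):
    """Method-of-moments estimate of a common within-cluster correlation
    from standardized residuals, given as one list per cluster."""
    num, count = 0.0, 0
    for e in residuals:
        n = len(e)
        for j in range(n):
            for k in range(j + 1, n):
                num += e[j] * e[k]  # pairwise product within a cluster
                count += 1
    return num / count
```

In the full procedure, step one would re-solve the estimating equations with this updated working correlation, iterating the two steps to convergence.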

2.
Summary. A common and important problem in clustered sampling designs is that the effect of within-cluster exposures (i.e., exposures that vary within clusters) on outcome may be confounded by both measured and unmeasured cluster-level factors (i.e., measurements that do not vary within clusters). When some of these factors are inadequately or not accounted for, estimation of this effect through population-averaged models or random-effects models may introduce bias. We accommodate this by developing a general theory for the analysis of clustered data, which enables consistent and asymptotically normal estimation of the effects of within-cluster exposures in the presence of cluster-level confounders. Semiparametric efficient estimators are obtained by solving so-called conditional generalized estimating equations. We compare this approach with a popular proposal by Neuhaus and Kalbfleisch (1998, Biometrics 54, 638–645), who separate the exposure effect into a within- and a between-cluster component within a random intercept model. We find that the latter approach yields consistent and efficient estimators when the model is linear, but is less flexible in terms of model specification. Under nonlinear models, this approach may yield inconsistent and inefficient estimators, though with little bias in most practical settings.
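In the linear case, the within-cluster component that Neuhaus and Kalbfleisch separate out can be obtained by centering exposure and outcome within each cluster, which eliminates any confounder that is constant inside a cluster. A minimal sketch (hypothetical names, not the authors' conditional-GEE implementation):

```python
def within_cluster_slope(clusters):
    """OLS slope of y on x after centering both within each cluster;
    centering removes any cluster-level (constant-within-cluster)
    confounder. `clusters` is a list of (xs, ys) pairs."""
    sxy = sxx = 0.0
    for xs, ys in clusters:
        mx = sum(xs) / len(xs)
        my = sum(ys) / len(ys)
        for x, y in zip(xs, ys):
            sxy += (x - mx) * (y - my)
            sxx += (x - mx) ** 2
    return sxy / sxx
```

For example, with y = 2x + u and cluster effects u correlated with x, pooled OLS is biased, but the within-cluster slope recovers 2 exactly.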

3.
Checking the marginal Cox model for correlated failure time data
Spiekerman CF, Lin DY. Biometrika 1996, 83(1): 143–156.

4.
5.
The differential reinforcement of low-rate 72-second schedule (DRL-72) is a standard behavioral test procedure for screening potential antidepressant compounds. The protocol for the DRL-72 experiment, proposed by Evenden et al. (1993), consists of using a crossover design for the experiment and one-way ANOVA for the statistical analysis. In this paper we discuss the choice of several crossover designs for the DRL-72 experiment and propose to estimate the treatment effects using either generalized linear mixed models (GLMM) or generalized estimating equation (GEE) models for clustered binary data.

6.
Modelling multivariate binary data with alternating logistic regressions

7.
This paper focuses on the development and study of confidence interval procedures for the mean difference between two treatments in the analysis of overdispersed count data, in order to measure the efficacy of an experimental treatment over a standard treatment in clinical trials. In this study, two simple methods are proposed. One is based on a sandwich estimator of the variance of the regression estimator using the generalized estimating equations (GEE) approach of Zeger and Liang (1986), and the other is based on an estimator of the variance of a ratio estimator (1977). We also develop three other procedures following the procedures studied by Newcombe (1998) and the procedure studied by Beal (1987). As assessed by Monte Carlo simulations, all the procedures have reasonably good coverage properties. Moreover, the interval procedure based on GEE outperforms the other interval procedures in the sense that it maintains coverage very close to the nominal level and has the shortest interval length, a satisfactory location property, and a very simple form that can easily be implemented in applied fields. Illustrative applications of these confidence interval procedures in biological studies are also presented.
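A sandwich-type interval of the first kind can be sketched in the simplest two-sample setting: empirical group variances replace the Poisson mean–variance identity, so overdispersion is handled automatically. This is a sketch under assumed names, reduced from the GEE regression formulation to a plain mean difference:

```python
import math

def mean_diff_ci(x, y, z=1.96):
    """Wald CI for E[X] - E[Y] using empirical (sandwich-type) variances,
    valid under overdispersion since no mean = variance assumption is made."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    se = math.sqrt(vx / nx + vy / ny)
    d = mx - my
    return d - z * se, d + z * se
```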

8.
Klein JP, Andersen PK. Biometrics 2005, 61(1): 223–229.
Typically, regression models for competing risks outcomes are based on proportional hazards models for the crude hazard rates. These estimates often do not agree with impressions drawn from plots of cumulative incidence functions for each level of a risk factor. We present a technique which models the cumulative incidence functions directly. The method is based on the pseudovalues from a jackknife statistic constructed from the cumulative incidence curve. These pseudovalues are used in a generalized estimating equation to obtain estimates of model parameters. We study the properties of this estimator and apply the technique to a study of the effect of alternative donors on relapse for patients given a bone marrow transplant for leukemia.
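The pseudovalue construction is easy to state: with θ̂ the cumulative incidence estimate at a fixed time and θ̂(−i) its leave-one-out version, the i-th pseudovalue is nθ̂ − (n−1)θ̂(−i). A minimal sketch for the uncensored case, where the estimator is just a mean of event indicators (the function name is mine):

```python
def pseudo_values(indicators):
    """Leave-one-out jackknife pseudovalues for a sample mean. For the
    empirical cumulative incidence at a fixed time t with no censoring,
    the estimator is the mean of indicators I(T_i <= t, cause = 1)."""
    n = len(indicators)
    total = sum(indicators)
    theta = total / n
    out = []
    for i in range(n):
        theta_i = (total - indicators[i]) / (n - 1)  # leave-one-out estimate
        out.append(n * theta - (n - 1) * theta_i)
    return out
```

A useful sanity check: for a plain mean, the pseudovalue reduces exactly to the individual's own indicator; it is censoring that makes the jackknife construction nontrivial.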

9.
Kauermann G. Biometrics 2000, 56(3): 692–698.
This paper presents a smooth regression model for ordinal data with longitudinal dependence structure. A marginal model with cumulative logit link is applied to cope with the ordinal scale and the main and covariate effects in the model are allowed to vary with time. Local fitting is pursued and asymptotic properties of the estimates are discussed. In a second step, the longitudinal dependence of the observations is considered. Cumulative log odds ratios are fitted locally, which allows investigation of how the longitudinal dependence of the ordinal observations changes with time.

10.
11.
Marginal methods have been widely used for the analysis of longitudinal ordinal and categorical data. These models do not require full parametric assumptions on the joint distribution of repeated response measurements but specify only the marginal, or even just the association, structures. However, inference results obtained from these methods often incur serious bias when variables are subject to error. In this paper, we tackle the problem that misclassification exists in both response and categorical covariate variables. We develop a marginal method for misclassification adjustment, which utilizes second-order estimating functions and a functional modeling approach, and can yield consistent estimates and valid inference for mean and association parameters. We propose a two-stage estimation approach for cases in which validation data are available. Our simulation studies show good performance of the proposed method under a variety of settings. Although the proposed method is phrased in terms of data with a longitudinal design, it also applies to correlated data arising from clustered and family studies, in which association parameters may be of scientific interest. The proposed method is applied to analyze a dataset from the Framingham Heart Study as an illustration.
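A standard building block for misclassification adjustment, shown here only for intuition and much simpler than the second-order estimating functions used above, is the matrix-method correction of an observed proportion given known sensitivity and specificity:

```python
def correct_proportion(p_obs, sens, spec):
    """Invert E[p_obs] = sens*p + (1 - spec)*(1 - p) to recover the
    true prevalence p of a binary trait from a misclassified measurement."""
    return (p_obs + spec - 1) / (sens + spec - 1)
```

For instance, with sensitivity 0.9 and specificity 0.8, a true prevalence of 0.5 yields an expected observed proportion of 0.55; the correction maps 0.55 back to 0.5.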

12.
Summary. Many time-to-event studies are complicated by the presence of competing risks and by nesting of individuals within a cluster, such as patients in the same center in a multicenter study. Several methods have been proposed for modeling the cumulative incidence function with independent observations. However, when subjects are clustered, one needs to account for the presence of a cluster effect either through frailty modeling of the hazard or subdistribution hazard, or by adjusting for the within-cluster correlation in a marginal model. We propose a method for modeling the marginal cumulative incidence function directly. We compute leave-one-out pseudo-observations from the cumulative incidence function at several time points. These are used in a generalized estimating equation to model the marginal cumulative incidence curve, and obtain consistent estimates of the model parameters. A sandwich variance estimator is derived to adjust for the within-cluster correlation. The method is easy to implement using standard software once the pseudovalues are obtained, and is a generalization of several existing models. Simulation studies show that the method works well to adjust the SE for the within-cluster correlation. We illustrate the method on a dataset looking at outcomes after bone marrow transplantation.
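The within-cluster correlation adjustment can be illustrated for the simplest marginal parameter, an overall mean: the sandwich variance sums residuals within clusters before squaring, so positively correlated clusters inflate the standard error relative to the iid formula. A sketch with assumed names:

```python
import math

def cluster_robust_se(clusters):
    """Sandwich standard error of the overall mean. Residuals are summed
    within each cluster first, so within-cluster correlation widens the SE."""
    ys = [y for c in clusters for y in c]
    n = len(ys)
    mu = sum(ys) / n
    meat = sum(sum(y - mu for y in c) ** 2 for c in clusters)
    return math.sqrt(meat) / n
```

With two perfectly homogeneous clusters [1, 1] and [3, 3], the robust variance is 0.5, larger than the naive iid variance of 1/3 for the same four observations.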

13.
Cook RJ, Zeng L, Yi GY. Biometrics 2004, 60(3): 820–828.
In recent years there has been considerable research devoted to the development of methods for the analysis of incomplete data in longitudinal studies. Despite these advances, the methods used in practice have changed relatively little, particularly in the reporting of pharmaceutical trials. In this setting, perhaps the most widely adopted strategy for dealing with incomplete longitudinal data is imputation by the "last observation carried forward" (LOCF) approach, in which values for missing responses are imputed using observations from the most recently completed assessment. We examine the asymptotic and empirical bias, the empirical type I error rate, and the empirical coverage probability associated with estimators and tests of treatment effect based on the LOCF imputation strategy. We consider a setting involving longitudinal binary data with longitudinal analyses based on generalized estimating equations, and an analysis based simply on the response at the end of the scheduled follow-up. We find that for both of these approaches, imputation by LOCF can lead to substantial biases in estimators of treatment effects, the type I error rates of associated tests can be greatly inflated, and the coverage probability can be far from the nominal level. Alternative analyses based on all available data lead to estimators with comparatively small bias, and inverse probability weighted analyses yield consistent estimators subject to correct specification of the missing data process. We illustrate the differences between various methods of dealing with drop-outs using data from a study of smoking behavior.
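LOCF itself is trivial to implement, which partly explains its persistence despite the biases documented above. A minimal sketch, with None marking a missed assessment:

```python
def locf(series):
    """Impute missing responses (None) with the last observed value.
    Entries before the first observed value remain None."""
    out, last = [], None
    for v in series:
        if v is not None:
            last = v
        out.append(last)
    return out
```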

14.
Summary. We consider methods for estimating the effect of a covariate on a disease onset distribution when the observed data structure consists of right-censored data on diagnosis times and current status data on onset times amongst individuals who have not yet been diagnosed. Dunson and Baird (2001, Biometrics 57, 306–403) approached this problem using maximum likelihood, under the assumption that the ratio of the diagnosis and onset distributions is monotonic nondecreasing. As an alternative, we propose a two-step estimator, an extension of the approach of van der Laan, Jewell, and Petersen (1997, Biometrika 84, 539–554) in the single sample setting, which is computationally much simpler and requires no assumptions on this ratio. A simulation study is performed comparing estimates obtained from these two approaches, as well as that from a standard current status analysis that ignores diagnosis data. Results indicate that the Dunson and Baird estimator outperforms the two-step estimator when the monotonicity assumption holds, but the reverse is true when the assumption fails. The simple current status estimator loses only a small amount of precision in comparison to the two-step procedure but requires monitoring time information for all individuals. In the data that motivated this work, a study of uterine fibroids and chemical exposure to dioxin, the monotonicity assumption is seen to fail. Here, the two-step and current status estimators both show no significant association between the level of dioxin exposure and the hazard for onset of uterine fibroids; the two-step estimator of the relative hazard associated with increasing levels of exposure has the least estimated variance amongst the three estimators considered.

15.
Large observational databases derived from disease registries and retrospective cohort studies have proven very useful for the study of health services utilization. However, the use of large databases may introduce computational difficulties, particularly when the event of interest is recurrent. In such settings, grouping the recurrent event data into prespecified intervals leads to a flexible event rate model and a data reduction that remedies the computational issues. We propose a possibly stratified marginal proportional rates model with a piecewise-constant baseline event rate for recurrent event data. Both the absence and the presence of a terminal event are considered. Large-sample distributions are derived for the proposed estimators. Simulation studies are conducted under various data configurations, including settings in which the model is misspecified. Guidelines for interval selection are provided and assessed using numerical studies. We then show that the proposed procedures can be carried out using standard statistical software (e.g., SAS, R). An application based on national hospitalization data for end-stage renal disease patients is provided.
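Grouping recurrent events into prespecified intervals reduces the data to interval-specific event counts and person-time, from which piecewise-constant rates follow directly. A sketch with assumed names, ignoring covariates and terminal events:

```python
def piecewise_rates(event_times, followup, cuts):
    """Event rate (events per unit person-time) in each interval
    [cuts[j], cuts[j+1]) for recurrent-event data. `event_times` holds one
    list of event times per subject; `followup` is each subject's total
    observation time."""
    k = len(cuts) - 1
    events = [0] * k
    person_time = [0.0] * k
    for times, tau in zip(event_times, followup):
        for j in range(k):
            lo, hi = cuts[j], cuts[j + 1]
            person_time[j] += max(0.0, min(tau, hi) - lo)  # time at risk in interval
            events[j] += sum(1 for t in times if lo <= t < min(tau, hi))
    return [e / p if p > 0 else float("nan")
            for e, p in zip(events, person_time)]
```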

16.
17.
18.
Yu Z, Lin X. Biometrika 2008, 95(1): 123–137.
We study nonparametric regression for correlated failure time data. Kernel estimating equations are used to estimate nonparametric covariate effects. Independent and weighted-kernel estimating equations are studied. The derivative of the nonparametric function is first estimated, and the nonparametric function is then estimated by integrating the derivative estimator. We show that the nonparametric kernel estimator is consistent for any arbitrary working correlation matrix and that its asymptotic variance is minimized by assuming working independence. We evaluate the performance of the proposed kernel estimator using simulation studies, and apply the proposed method to the western Kenya parasitaemia data.
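The independence working model, which the paper shows minimizes the asymptotic variance, reduces in the simplest uncensored setting to an ordinary Nadaraya–Watson smoother: each observation is weighted only by its kernel distance to the target point. A toy sketch (the names and the Gaussian kernel are my choices, not the paper's estimator):

```python
import math

def kernel_smooth(x0, xs, ys, h):
    """Nadaraya-Watson kernel regression estimate at x0 with a Gaussian
    kernel and bandwidth h; observations are weighted independently."""
    w = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]
    return sum(wi * y for wi, y in zip(w, ys)) / sum(w)
```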

19.
20.
Sun J, Liao Q, Pagano M. Biometrics 1999, 55(3): 909–914.
In many epidemiological studies, the survival time of interest is the elapsed time between two related events, the originating event and the failure event, and the times of the occurrences of both events are right or interval censored. We discuss the regression analysis of such studies and a simple estimating equation approach is proposed under the proportional hazards model. The method can easily be implemented and does not involve any iteration among unknown parameters, as full likelihood approaches proposed in the literature do. The asymptotic properties of the proposed regression coefficient estimates are derived and an AIDS cohort study is analyzed to illustrate the proposed approach.
