Similar Articles
20 similar articles found.
1.
The nested case–control design is a relatively new type of observational study in which a case–control approach is employed within an established cohort. In this design, cases and controls are observed longitudinally: all cases are sampled whenever they occur, while controls are sampled at certain time points, which may be randomly scheduled or fixed in advance for operational convenience. This longitudinal design is efficient in terms of cost and duration, especially when the disease is rare and assessing exposure levels is difficult. We propose sequential sampling methods and study both (group) sequential testing and estimation so that the study can be stopped as soon as a stopping rule is satisfied. To make such longitudinal sampling more efficient in terms of both the number of subjects and the number of replications, we propose applying sequential sampling to subjects and replications simultaneously until an information criterion is fulfilled. This simultaneous sequential sampling of subjects and replicates gives practitioners more flexibility in designing their sampling schemes and differs from the classical approaches used in longitudinal studies. We define a new σ-field to accommodate the proposed sampling scheme, which contains a mixture of independent and correlated observations, and prove the asymptotic optimality of sequential estimation using martingale theory. We also prove that the independent-increment structure is retained, so that group sequential methods remain applicable. Finally, we present results from applying sequential estimation and group sequential testing to both simulated data and real data on children's diarrhea.
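The stopping-rule idea behind sequential estimation can be illustrated with a minimal, generic sketch (not the authors' group sequential procedure for nested case–control data): keep sampling until a fixed-width confidence criterion for a mean is met. The function name, thresholds, and the use of a simple i.i.d. mean are illustrative assumptions.

```python
import math
import random
import statistics

def sequential_mean(draw, half_width, z=1.96, n_min=10, n_max=100000):
    """Fixed-width sequential estimation: keep drawing observations until
    the z-interval half-width for the mean drops below `half_width`."""
    xs = [draw() for _ in range(n_min)]
    while len(xs) < n_max:
        s = statistics.stdev(xs)
        # stopping rule: estimated half-width of the confidence interval
        if z * s / math.sqrt(len(xs)) <= half_width:
            break
        xs.append(draw())
    return statistics.mean(xs), len(xs)

random.seed(0)
est, n = sequential_mean(lambda: random.gauss(0.0, 1.0), half_width=0.2)
```

The sample size n is determined by the data rather than fixed in advance, which is the essential feature the abstract exploits to stop the study early.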

2.
Zhiguo Li, Peter Gilbert, Bin Nan. Biometrics 2008, 64(4):1247–1255
Grouped failure time data arise often in HIV studies. In a recent preventive HIV vaccine efficacy trial, immune responses generated by the vaccine were measured in a case–cohort sample of vaccine recipients, who were subsequently evaluated for the study endpoint of HIV infection at prespecified follow-up visits. Gilbert et al. (2005, Journal of Infectious Diseases 191, 666–677) and Forthal et al. (2007, Journal of Immunology 178, 6596–6603) analyzed the association between the immune responses and HIV incidence with a Cox proportional hazards model, treating the HIV infection diagnosis time as a right-censored random variable. The data, however, take the form of grouped failure time data with case–cohort covariate sampling, and we propose an inverse selection-probability-weighted likelihood method for fitting the Cox model to these data. The method allows covariates to be time dependent and uses multiple imputation to accommodate covariate data that are missing at random. We establish asymptotic properties of the proposed estimators and present simulation results showing their good finite-sample performance. We apply the method to the HIV vaccine trial data, showing that higher antibody levels are associated with a lower hazard of HIV infection.
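The inverse selection-probability weighting idea can be sketched in isolation: cases are sampled with probability one (weight 1), non-cases with probability p (weight 1/p), and the weighted totals recover full-cohort totals in expectation (a Horvitz–Thompson argument). This is a generic weight-construction sketch, not the paper's grouped-failure-time likelihood; the sampling fractions below are invented for illustration.

```python
import random

def ipw_weights(is_case, sampled, p_control):
    """Inverse selection-probability weights for case–cohort sampling:
    cases are always included (weight 1); sampled non-cases are
    up-weighted by 1/p_control; unsampled subjects get weight 0."""
    w = []
    for case, s in zip(is_case, sampled):
        if case:
            w.append(1.0)
        elif s:
            w.append(1.0 / p_control)
        else:
            w.append(0.0)
    return w

random.seed(1)
N, p = 10000, 0.2
is_case = [random.random() < 0.05 for _ in range(N)]
sampled = [c or random.random() < p for c in is_case]
x = [random.gauss(1.0, 1.0) for _ in range(N)]
w = ipw_weights(is_case, sampled, p)
# Horvitz–Thompson check: weighted covariate total ~ full-cohort total
ht = sum(wi * xi for wi, xi in zip(w, x))
full = sum(x)
```

In practice these weights would multiply the likelihood (or score) contributions of the sampled subjects when fitting the Cox model.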

3.
In individually matched case–control studies, when some covariates are incomplete, an analysis based only on the complete data may lose substantial information from both the missing and the completely observed variables, usually resulting in bias and a loss of efficiency. In this article, we propose a new method for handling missing covariate data based on a missing-data-induced intensity approach when the missingness mechanism does not depend on case–control status, and we show that this leads to a generalization of the missing indicator method. We derive the asymptotic properties of the resulting estimates and, in an extensive simulation study, assess their finite-sample performance in terms of bias, efficiency, and 95% confidence coverage under several missing data scenarios. We also compare the method with complete-case analysis (CCA) and with several previously proposed missing data methods. Our results indicate that, under the assumption of predictable missingness, the suggested method provides valid parameter estimates, is more efficient than CCA, and is competitive with other, more complex methods of analysis. A case–control study of multiple myeloma risk and a polymorphism in the interleukin-6 receptor (IL-6-α) illustrates our findings.
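The classical missing indicator method, which the abstract generalizes, amounts to a simple design-matrix construction: set each missing covariate value to a reference value (here 0) and append a 0/1 missingness indicator, so no matched stratum is dropped. This is a generic sketch of the indicator construction, not the authors' induced-intensity estimator.

```python
def missing_indicator(values):
    """Missing indicator method: replace each missing covariate (None)
    by 0.0 and pair it with a 0/1 indicator of missingness, so that
    every subject is retained in the analysis."""
    out = []
    for x in values:
        miss = 1 if x is None else 0
        out.append((0.0 if x is None else x, miss))
    return out

design = missing_indicator([1.2, None, 0.7, None])
```

The two columns (filled value, indicator) would then both enter the conditional logistic regression for the matched design.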

4.
In case–control research with multiple case groups, standard analyses fail to use all available information. Multiple events case–control (MECC) studies provide a new approach to sampling from a cohort and are useful when multiple types of events in the cohort are of interest. In this design, subjects in the cohort who develop any event of interest are sampled, along with a fraction of the remaining subjects. We show that a simple case–control analysis of data arising from MECC studies is biased, and we develop three general estimating-equation-based approaches to analyzing such data. We conduct simulation studies to compare the efficiency of the various MECC analyses with each other and with the corresponding conventional analyses. The gain in efficiency from the new design is substantial in many situations. We demonstrate the approach with a nested case–control study of the effect of oral sodium phosphate use on chronic kidney injury with multiple case definitions.

5.
Lyles RH, MacFarlane G. Biometrics 2000, 56(2):634–639
When repeated measures of an exposure variable are obtained on individuals, it can be of epidemiologic interest to relate the slope of this variable over time to a subsequent response. Subject-specific estimates of this slope are measured with error, as are corresponding estimates of the level of exposure, i.e., the intercept of a linear regression over time. Because the intercept is often correlated with the slope and may also be associated with the outcome, each error-prone covariate (intercept and slope) is a potential confounder, tending to accentuate potential biases due to measurement error. Under a familiar mixed linear model for the exposure measurements, we present closed-form estimators for the true parameters of interest in the case of a continuous outcome with complete and equally timed follow-up for all subjects. Generalizations to handle incomplete follow-up, other types of outcome variables, and additional fixed covariates are illustrated via maximum likelihood. We provide examples using data from the Multicenter AIDS Cohort Study, in which substantial adjustments are made to uncorrected parameter estimates for the health-related effects of exposure slopes over time, and we illustrate the potential impact of such adjustments on the interpretation of an epidemiologic analysis.
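The error-prone covariates the abstract refers to are the per-subject least-squares intercept and slope of the exposure over time. A minimal sketch of that stage (the paper's closed-form correction for using these in a second-stage regression is not reproduced here):

```python
def ols_line(ts, ys):
    """Per-subject least-squares intercept and slope over time: these
    subject-specific estimates are the error-prone covariates that a
    second-stage outcome regression would need to correct for."""
    n = len(ts)
    tbar = sum(ts) / n
    ybar = sum(ys) / n
    sxx = sum((t - tbar) ** 2 for t in ts)
    sxy = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
    slope = sxy / sxx
    return ybar - slope * tbar, slope

icpt, slope = ols_line([0, 1, 2, 3], [1.0, 1.9, 3.1, 4.0])
```

Because each subject contributes only a few noisy measurements, these fitted intercepts and slopes deviate from the true subject-level values, which is exactly the measurement error the closed-form estimators adjust for.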

6.
We introduce a correction for covariate measurement error in nonparametric regression, applied to longitudinal binary data from a study of human sleep. The data were collected to investigate the association between certain hormone levels and the probability of being asleep. The hormonal effect is modeled flexibly while we account for the error-prone measurement of its blood concentration and for the longitudinal character of the data. We present a fully Bayesian treatment using Markov chain Monte Carlo inference, and introduce block updating to improve sampling and computational performance in the binary case. Our model is partly inspired by the relevance vector machine with radial basis functions, where typically very few basis functions are automatically selected to fit the data. In the proposed approach, we implement such data-driven complexity regulation by adopting the idea of Bayesian model averaging. Besides the general theory and the detailed sampling scheme, we provide a simulation study for the Gaussian and binary cases, comparing our method to the naive analysis that ignores measurement error. The results demonstrate a clear gain from the proposed correction, particularly in the Gaussian case with medium and large measurement error variances, even when the covariate model is misspecified.

7.
We consider the problem of testing mixture proportions using two-sample data, one sample from group one and the other from a mixture of groups one and two with unknown proportion λ of membership in group two. Many statistical applications can be formulated in this setup, including microarray studies, infectious disease epidemiology, case–control studies with contaminated controls, clinical trials allowing "nonresponders," genetic studies of gene mutation, and fishery applications. Under the assumption that the log ratio of the probability (density) functions of the two groups is linear in the observations, we propose a generalized score test statistic for the mixture proportion. Under some regularity conditions, this statistic converges to a weighted chi-squared random variable under the null hypothesis λ = 0, where the weight depends only on the sampling fractions of the two groups. A permutation method is used to provide a more reliable finite-sample approximation. Simulation results and two real data applications are presented.
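The permutation step can be sketched generically: under H0: λ = 0 the two samples are exchangeable, so relabeling them yields a reference distribution for any test statistic. The sketch below uses a simple mean difference as a score-type statistic in place of the paper's generalized score statistic; the sample sizes and mixture below are invented for illustration.

```python
import random

def perm_pvalue(g1, mix, n_perm=999, seed=0):
    """Permutation p-value for H0: lambda = 0 (the second sample is
    pure group one), using the mean difference as a score-type
    statistic; under H0 the pooled labels are exchangeable."""
    rng = random.Random(seed)
    obs = sum(mix) / len(mix) - sum(g1) / len(g1)
    pooled = list(g1) + list(mix)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a, b = pooled[:len(g1)], pooled[len(g1):]
        if sum(b) / len(b) - sum(a) / len(a) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one finite-sample correction

rng = random.Random(42)
g1 = [rng.gauss(0, 1) for _ in range(100)]
mix = [rng.gauss(0, 1) if rng.random() < 0.5 else rng.gauss(2, 1)
       for _ in range(100)]
p = perm_pvalue(g1, mix)
```

With λ ≈ 0.5 and a mean shift of 2, the mixture sample's mean is clearly elevated, so the permutation p-value is small.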

8.
Generalized estimating equations (Liang and Zeger, 1986) are a widely used, moment-based procedure for estimating marginal regression parameters. However, a subtle and often overlooked point is that valid inference requires the mean response at time t to be expressed properly as a function of the complete past, present, and future values of any time-varying covariate. For example, with environmental exposures it may be necessary to express the response as a function of multiple lagged values of the covariate series. Although multiple lagged covariates may be predictive of outcomes, researchers often focus on parameters in a 'cross-sectional' model, where the response is expressed as a function of a single lag of the covariate series. Cross-sectional models yield parameters with simple interpretations and avoid the collinearity associated with multiple lagged values of a covariate. Pepe and Anderson (1994) showed that parameter estimates for time-varying covariates may be biased unless the mean, given all past, present, and future covariate values, equals the cross-sectional mean, or unless independence estimating equations are used. Although working independence avoids this potential bias, many authors have shown that a poor choice of response correlation model can lead to highly inefficient parameter estimates. The purpose of this paper is to study the bias–efficiency trade-off associated with working correlation choices for binary response data. We investigate data characteristics and design features (e.g., cluster size, overall response association, functional form of the response association, and covariate distribution) that influence the small- and large-sample characteristics of parameter estimates obtained from several different weighting schemes, or equivalently, 'working' covariance models. We find that the impact of the covariance model choice depends strongly on the specific structure of the data, and that these key aspects should be examined before choosing a weighting scheme.
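For binary data, GEE under working independence reduces to ordinary logistic-regression score equations, sum_i x_i (y_i - mu_i) = 0, solved here by Newton–Raphson for a single intercept and slope. This is a minimal sketch of the working-independence case only, not the paper's simulation design; the data-generating values are invented.

```python
import math
import random

def gee_independence_logit(x, y, iters=25):
    """Working-independence GEE for binary y with logit link: the
    estimating equations coincide with the logistic-regression scores,
    solved here by Newton-Raphson for (intercept, slope)."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, y):
            mu = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            wgt = mu * (1.0 - mu)
            g0 += yi - mu                      # score for intercept
            g1 += xi * (yi - mu)               # score for slope
            h00 += wgt; h01 += wgt * xi; h11 += wgt * xi * xi
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det      # Newton step (2x2 solve)
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

random.seed(3)
x = [random.gauss(0, 1) for _ in range(2000)]
y = [1 if random.random() < 1.0 / (1.0 + math.exp(-(0.5 + 1.0 * xi))) else 0
     for xi in x]
b0, b1 = gee_independence_logit(x, y)
```

A non-independence working covariance would replace the diagonal weights above with cluster-level weighting, which is where the bias–efficiency trade-off the paper studies enters.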

9.
Wang YG. Biometrics 1999, 55(3):984–989
Troxel, Lipsitz, and Brennan (1997, Biometrics 53, 857–869) considered parameter estimation from survey data with nonignorable nonresponse and proposed weighted estimating equations to remove the biases of the complete-case analysis that ignores missing observations. This paper suggests two alternative modifications for unbiased estimation of regression parameters when a binary outcome is potentially observed at successive time points. The weighting approach of Robins, Rotnitzky, and Zhao (1995, Journal of the American Statistical Association 90, 106–121) is also modified to obtain unbiased estimating functions. The suggested estimating functions are unbiased only when the missingness probability is correctly specified; misspecification of the missingness model results in biased estimates. Simulation studies assess the performance of the different methods when the covariate is binary or normal. For the simulation models used, the relative efficiency of the two new methods to the weighting methods is about 3.0 for the slope parameter and about 2.0 for the intercept parameter when the covariate is continuous and the missingness probability is correctly specified. All methods produce substantial biases when the missingness model is misspecified or underspecified. Analysis of data from a medical survey illustrates the use of, and possible differences among, these estimating functions.
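The core weighting idea can be shown on the simplest estimand, a mean: each observed outcome is weighted by the inverse of its response probability, removing the bias of the complete-case average when missingness depends on a covariate. The response model and sample sizes below are illustrative assumptions, not taken from the paper.

```python
import random

def ipw_mean(y, observed, p_obs):
    """Inverse-probability-weighted (Hajek-style) mean: each observed
    outcome is weighted by 1 / P(observed), assumed known here."""
    num = sum(yi / pi for yi, ri, pi in zip(y, observed, p_obs) if ri)
    den = sum(1.0 / pi for ri, pi in zip(observed, p_obs) if ri)
    return num / den

random.seed(7)
n = 5000
x = [random.random() < 0.5 for _ in range(n)]
y = [1 if random.random() < (0.8 if xi else 0.2) else 0 for xi in x]
p = [0.9 if xi else 0.4 for xi in x]        # response prob. depends on x
r = [random.random() < pi for pi in p]      # observed indicators
cc = sum(yi for yi, ri in zip(y, r) if ri) / sum(r)   # complete-case mean
ipw = ipw_mean(y, r, p)                               # weighted mean
```

The true mean is 0.5; the complete-case mean is biased upward because high-outcome subjects respond more often, while the weighted estimate is approximately unbiased when the response model is correct, which is exactly the sensitivity the abstract emphasizes.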

10.
Huang X, Tebbs JM. Biometrics 2009, 65(3):710–718
We consider structural measurement error models for a binary response. We show that likelihood-based estimators obtained by fitting structural measurement error models to pooled binary responses can be far more robust to covariate measurement error in the presence of latent-variable model misspecification than the corresponding estimators from individual responses. Furthermore, despite the loss of information, pooling can provide improved parameter estimators in terms of mean squared error. Based on these and other findings, we develop a new diagnostic method for detecting latent-variable model misspecification in structural measurement error models with individual binary responses. We use simulation and data from the Framingham Heart Study to illustrate our methods.

11.
Combining data collected from different sources can enhance statistical efficiency in estimating the effects of environmental or genetic factors or gene–environment interactions. However, combining data across studies becomes complicated when the data are collected under different study designs, such as family-based designs and case–control designs based on unrelated individuals. In this article, we describe likelihood-based approaches that permit joint estimation of covariate effects on disease risk under designs that include cases, relatives of cases, and unrelated individuals. Our methods accommodate familial residual correlation and a variety of ascertainment schemes. Extensive simulation experiments demonstrate that the proposed estimation and inference methods perform well in realistic settings, and the efficiencies of the different designs are contrasted in the simulations. We applied the methods to data from the Colorectal Cancer Family Registry.

12.
The nested case–control (NCC) design is a popular sampling method in large epidemiological studies, valued for its cost-effectiveness in investigating the temporal relationship of disease with environmental exposures or biological precursors. Thomas' maximum partial likelihood estimator is commonly used to estimate the regression parameters of Cox's model from NCC data. In this article, we consider a situation in which failure/censoring information and some crude covariates are available for the entire cohort in addition to the NCC data, and we propose an improved estimator that is asymptotically more efficient than Thomas' estimator. We adopt a projection approach that has heretofore been employed only under random validation sampling, and we show that it can be adapted to NCC designs, where the sampling scheme is a dynamic process and is not independent across controls. Under certain conditions, we establish consistency and asymptotic normality of the proposed estimator and develop a consistent variance estimator. We also propose a simplified approximate estimator for when the disease is rare. Extensive simulations evaluate the finite-sample performance of the proposed estimators and compare their efficiency with Thomas' estimator and other competing estimators. Sensitivity analyses demonstrate the behavior of the proposed estimator when model assumptions are violated, and we find that the biases are reasonably small in realistic situations. We illustrate the proposed method with data from studies of Wilms' tumor.

13.
Marginalized models (Heagerty, 1999, Biometrics 55, 688–698) permit likelihood-based inference when interest lies in marginal regression models for longitudinal binary response data. Two such models are the marginalized transition model and the marginalized latent variable model. The former captures within-subject serial dependence among repeated measurements with transition model terms, while the latter assumes exchangeable or nondiminishing response dependence using random intercepts. In this article, we extend the class of marginalized models by proposing a single unifying model that describes both serial and long-range dependence. This model is particularly useful in longitudinal analyses with a moderate to large number of repeated measurements per subject, where both serial and exchangeable forms of response correlation can be identified. We describe maximum likelihood and Bayesian approaches to parameter estimation and inference, and we study the large-sample operating characteristics under two types of dependence-model misspecification. Data from the Madras Longitudinal Schizophrenia Study (Thara et al., 1994, Acta Psychiatrica Scandinavica 90, 329–336) are analyzed.

14.
This paper develops a model for repeated binary regression when a covariate is measured with error. The model allows estimation of the effect of the true value of the covariate on a repeated binary response. The choice of a probit link for the effect of the error-free covariate, coupled with normal measurement error, yields a probit model after integrating over the measurement error distribution. We propose a two-stage estimation procedure: in the first stage, a linear mixed model is used to fit the repeated covariate; in the second stage, a model for the correlated binary responses, conditional on the linear mixed model estimates, is fit to the repeated binary data using generalized estimating equations. The approach is demonstrated using nutrient safety data from the Diet Intervention of School Age Children (DISC) study.

15.
Case–cohort sampling is a commonly used and efficient method for studying large cohorts. Most existing methods of analysis for case–cohort data concern univariate failure time data, yet clustered failure time data are common in public health studies; for example, patients treated at the same center are unlikely to be independent. In this article, we consider estimating-equation-based methods for case–cohort designs with clustered failure time data. We assume a marginal hazards model with a common baseline hazard and a common regression coefficient across clusters. The proposed estimators of the regression parameter and the cumulative baseline hazard are shown to be consistent and asymptotically normal, and consistent estimators of the asymptotic covariance matrices are derived. The regression parameter estimator is easily computed using any standard Cox regression software that allows offset terms. The proposed estimators are investigated in simulation studies and demonstrated empirically to have increased efficiency relative to some existing methods. The methods are applied to a study of mortality among Canadian dialysis patients.
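The "offset terms" remark refers to a standard trick: case–cohort weights can be passed to Cox software as offset = log(weight). A minimal sketch of a Barlow-style weight/offset construction (the subcohort fraction alpha and the three-subject example are invented; this is not the paper's exact weighting):

```python
import math

def case_cohort_offsets(is_case, in_subcohort, alpha):
    """Barlow-style case-cohort weights expressed as Cox offsets
    (offset = log weight): cases get weight 1, subcohort non-cases get
    weight 1/alpha, and subjects outside the sample are excluded."""
    offs = []
    for case, sub in zip(is_case, in_subcohort):
        if case:
            offs.append(0.0)                    # weight 1 -> log 1 = 0
        elif sub:
            offs.append(math.log(1.0 / alpha))  # up-weight subcohort controls
        else:
            offs.append(None)                   # not in the analysis sample
    return offs

offs = case_cohort_offsets([True, False, False], [False, True, False],
                           alpha=0.1)
```

Any Cox routine that accepts an offset column can then fit the weighted pseudo-partial-likelihood without special-purpose software.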

16.
We consider longitudinal studies in which the outcome observed over time is binary and the covariates of interest are categorical. With no missing responses or covariates, one specifies a multinomial model for the responses given the covariates and uses maximum likelihood to estimate the parameters. Unfortunately, incomplete data in the responses and covariates are a common occurrence in longitudinal studies. Here we assume the missing data are missing at random (Rubin, 1976, Biometrika 63, 581–592). Since all of the missing data (responses and covariates) are categorical, a useful technique for obtaining maximum likelihood parameter estimates is the EM algorithm by the method of weights proposed by Ibrahim (1990, Journal of the American Statistical Association 85, 765–769). Using the EM algorithm with missing responses and covariates requires specifying the joint distribution of the responses and covariates; here we treat the parameters of the covariate distribution as nuisance parameters. In data sets with a high percentage of missing data, estimates of the nuisance parameters can lead to highly unstable estimates of the parameters of interest. We propose a conditional model for the covariate distribution that has several modeling advantages for the EM algorithm and reduces the number of nuisance parameters, providing more stable estimates in finite samples.
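In the EM algorithm by the method of weights, a subject with a missing categorical covariate is expanded into one weighted pseudo-subject per covariate level, with E-step weight P(y | level) P(level) / sum_k P(y | k) P(k). A minimal E-step sketch (the toy likelihood values are assumptions, not from the paper):

```python
def e_step_weights(y, levels, prior, lik):
    """EM 'method of weights' E-step for one subject with a missing
    categorical covariate: posterior weight of each candidate level
    given the observed response y."""
    joint = [lik(y, lv) * prior[lv] for lv in levels]
    tot = sum(joint)
    return [j / tot for j in joint]

# toy response model: P(y=1 | level) differs by level (assumed values)
lik = lambda y, lv: ((0.8 if lv == 1 else 0.3) if y == 1
                     else (0.2 if lv == 1 else 0.7))
w = e_step_weights(1, [0, 1], {0: 0.5, 1: 0.5}, lik)
```

The M-step then maximizes the weighted complete-data log-likelihood over these pseudo-subjects; the conditional covariate model the abstract proposes reduces how many nuisance parameters that step must estimate.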

17.
Chen Q, Ibrahim JG. Biometrics 2006, 62(1):177–184
We consider a class of semiparametric models for the covariate distribution and the missing data mechanism with missing covariate and/or response data, for general classes of regression models including generalized linear models and generalized linear mixed models. Both ignorable and nonignorable missing covariate and/or response data are considered. The proposed semiparametric model can be viewed as a sensitivity analysis for misspecification of the missing covariate distribution and/or the missing data mechanism. The model consists of a generalized additive model (GAM) for the covariate distribution and/or the missing data mechanism. Penalized regression splines are used to express the GAMs as generalized linear mixed effects models, in which the variance of the corresponding random effects provides an intuitive index for choosing between the semiparametric and parametric models. Maximum likelihood estimates are then obtained via the EM algorithm. Simulations demonstrate the methodology, and a real data set from a melanoma cancer clinical trial is analyzed using the proposed methods.

18.
With advances in modern medicine and clinical diagnosis, case–control data that characterize finer subtypes of cases are often available. In matched case–control studies, missingness in exposure values often leads to deletion of the entire stratum, and thus entails a significant loss of information. When subtypes of cases are treated as categorical outcomes, the data are further stratified, and deletion of observations becomes even more costly in terms of the precision of the category-specific odds-ratio parameters, especially under the multinomial logit model. The stereotype regression model for categorical responses lies between the proportional odds model and the multinomial (baseline category) logit model. Use of this class of models has been limited because the model structure implies certain inferential challenges involving nonidentifiability and nonlinearity in the parameters. We illustrate how to handle missing data in matched case–control studies with finer disease subclassification within the cases under a stereotype regression model. We present both a Monte Carlo based fully Bayesian approach and an expectation/conditional maximization algorithm for estimating the model parameters in the presence of a completely general missingness mechanism. We illustrate our methods with data from an ongoing matched case–control study of colorectal cancer, and present simulation results under various missing data mechanisms and departures from modeling assumptions.
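The stereotype model's category probabilities take the form P(Y = k | x) proportional to exp(theta_k + phi_k x'beta), with the phi_k scores (fixed at 0 for the baseline category) scaling a single linear predictor; this is the nonlinearity in the parameters the abstract mentions. A minimal probability sketch with invented parameter values:

```python
import math

def stereotype_probs(xb, theta, phi):
    """Stereotype regression: P(Y=k | x) is proportional to
    exp(theta_k + phi_k * (x'beta)); phi is 0 for the baseline category
    and scales a single linear predictor across categories
    (computed with a max-shift for numerical stability)."""
    scores = [t + p * xb for t, p in zip(theta, phi)]
    m = max(scores)
    es = [math.exp(s - m) for s in scores]
    tot = sum(es)
    return [e / tot for e in es]

# illustrative values: 3 categories, baseline first (phi[0] = 0)
probs = stereotype_probs(1.0, theta=[0.0, 0.2, -0.1], phi=[0.0, 0.5, 1.0])
```

Because beta multiplies every phi_k, the likelihood is a product of parameters rather than linear in them, which is why standard multinomial-logit fitting routines do not apply directly.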

19.
We have developed a new general approach for handling misclassification in discrete covariates or responses in regression models. The simulation and extrapolation (SIMEX) method, originally designed for handling additive covariate measurement error, is applied to the case of misclassification. The statistical model characterizing the misclassification is given by the transition matrix Π from the true to the observed variable. We exploit the relationship between the size of the misclassification and the bias in estimating the parameters of interest. Assuming that Π is known or can be estimated from validation data, we simulate data with higher misclassification and extrapolate back to the case of no misclassification. We show that our method is quite general and applicable to models with a misclassified response and/or misclassified discrete regressors. For a binary response with misclassification, we compare our method to the approach of Neuhaus and, for a misclassified binary regressor, to the matrix method of Morrissey and Spiegelman. We apply our method to a study of caries with a misclassified longitudinal response.
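The simulate-then-extrapolate loop can be sketched for a symmetric misclassified binary regressor: add extra flips so the total misclassification rate is (1 + lambda) times the original rate pi, compute a naive estimate at each lambda, then extrapolate back to lambda = -1 (no misclassification). The sketch uses a simple group-mean difference in place of a regression fit, and the rates, extrapolant, and simulation below are illustrative assumptions, not the paper's procedure.

```python
import random

def flip(x, rate, rng):
    """Add symmetric misclassification: flip each 0/1 value w.p. `rate`."""
    return [xi ^ (rng.random() < rate) for xi in x]

def naive_diff(x, y):
    """Naive estimator ignoring misclassification: difference in mean
    response between the two observed x-groups."""
    y1 = [yi for xi, yi in zip(x, y) if xi]
    y0 = [yi for xi, yi in zip(x, y) if not xi]
    return sum(y1) / len(y1) - sum(y0) / len(y0)

def mc_simex(x_obs, y, pi, lambdas=(0.0, 0.5, 1.0), B=40, seed=0):
    """MC-SIMEX: at each lambda, add extra flips so the total rate is
    (1 + lambda) * pi, average the naive estimate over B replicates,
    then extrapolate an exact quadratic (Lagrange form through the
    three points) back to lambda = -1."""
    rng = random.Random(seed)
    ests = []
    for lam in lambdas:
        extra = lam * pi / (1.0 - 2.0 * pi)   # composed flips -> (1+lam)*pi
        ests.append(sum(naive_diff(flip(x_obs, extra, rng), y)
                        for _ in range(B)) / B)
    target, out = -1.0, 0.0
    for i, (li, ei) in enumerate(zip(lambdas, ests)):
        term = ei
        for j, lj in enumerate(lambdas):
            if j != i:
                term *= (target - lj) / (li - lj)
        out += term
    return out

rng = random.Random(5)
n, pi = 3000, 0.2
x_true = [int(rng.random() < 0.5) for _ in range(n)]
y = [2.0 * xt + rng.gauss(0.0, 1.0) for xt in x_true]   # true effect = 2
x_obs = flip(x_true, pi, rng)                           # observed, misclassified
corrected = mc_simex(x_obs, y, pi)
```

Here the naive difference is attenuated toward roughly 2(1 - 2 pi) = 1.2, while the SIMEX extrapolation moves it back near the true value of 2.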

20.
In some clinical trials and in clinical practice, a therapeutic agent is administered repeatedly, and doses are adjusted in each patient, based on repeatedly measured continuous responses, to keep the response levels in a target range. Because a lower dose tends to be selected for patients with a better outcome, simple summaries may wrongly show a better outcome for the lower dose, producing an incorrect dose–response relationship. In this study, we consider the dose–response relationship under such situations. We show that maximum-likelihood estimates are consistent, without modeling the dose-modification mechanism, when the selection of the dose as a time-dependent covariate is based only on observed (not unobserved) responses and the measurements are generated based on the administered doses. We confirmed this property in simulation studies under several dose-modification mechanisms. We examined an autoregressive linear mixed effects model, which represents profiles approaching each patient's asymptote when identical doses are repeatedly administered; the model takes the previous dose history into account and provides a dose–response relationship of the asymptote as a summary measure. We also examined a linear mixed effects model assuming all responses are measured at steady state. In the simulation studies, the estimates of both models were unbiased under dose modification based on observed responses, but biased under dose modification based on unobserved responses. In conclusion, the maximum-likelihood estimates of the dose–response relationship are consistent under dose modification based only on observed responses.
