Similar Documents
A total of 20 similar documents were found (search time: 31 ms).
1.
Shanshan Luo, Wei Li, Yangbo He. Biometrics, 2023, 79(1): 502-513
In many clinical studies, it is challenging to evaluate causal effects when the outcomes of interest suffer from truncation by death; that is, outcomes cannot be observed if patients die before the time of measurement. To address this problem, it is common to consider average treatment effects by principal stratification, for which identifiability results and estimation methods with a binary treatment have been established in the previous literature. However, in multiarm studies with more than two treatment options, estimation of causal effects becomes more complicated and requires additional techniques. In this article, we consider identification, estimation, and bounds of causal effects with multivalued ordinal treatments and outcomes subject to truncation by death. We define causal parameters of interest in this setting and show that they are identifiable either by using an auxiliary variable or under a linear model assumption. We then propose a semiparametric method for estimating the causal parameters and derive their asymptotic results. When the identification conditions are invalid, we derive sharp bounds of the causal effects by use of covariate adjustment. Simulation studies show good performance of the proposed estimator. We use the estimator to analyze the effects of a four-level chronic toxin on fetal developmental outcomes such as birth weight in rats and mice, with data from a developmental toxicity trial conducted by the National Toxicology Program. Data analyses demonstrate that a high dose of the toxin significantly reduces the weights of pups.
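For concreteness, one common way to write a principal-stratification estimand of this type is the always-survivor (survivor average) causal effect; the notation below is generic and not necessarily the authors' exact parameterization. For ordinal dose levels $a < a'$, with $S(a)$ the survival indicator and $Y(a)$ the potential outcome under dose $a$,

```latex
\Delta_{a,a'} \;=\; E\bigl[\,Y(a') - Y(a) \,\bigm|\, S(a) = S(a') = 1\,\bigr],
```

i.e., the average effect of moving from dose $a$ to dose $a'$ among units whose outcome would not be truncated by death under either dose.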

2.
In longitudinal studies, investigators frequently have to assess and address potential biases introduced by missing data. New methods are proposed for modeling longitudinal categorical data with nonignorable dropout using marginalized transition models and shared random effects models. Random effects are introduced for both serial dependence of outcomes and nonignorable missingness. Fisher scoring and quasi-Newton algorithms are developed for parameter estimation. Methods are illustrated with a real dataset.
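As an illustration of the model class (generic notation, not the authors' exact specification), a marginalized transition model with a shared random effect pairs a marginal mean model with a conditional transition model:

```latex
\operatorname{logit}\Pr(Y_{it}=1 \mid X_{it}) = X_{it}^{\top}\beta, \qquad
\operatorname{logit}\Pr(Y_{it}=1 \mid Y_{i,t-1}, b_i) = \Delta_{it} + \gamma\,Y_{i,t-1} + b_i,
```

where $\Delta_{it}$ is determined by requiring the conditional model to average back to the marginal mean, and the random effect $b_i$ can also enter the dropout model to induce nonignorable missingness.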

3.
Missing outcomes or irregularly timed multivariate longitudinal data frequently occur in clinical trials or biomedical studies. The multivariate t linear mixed model (MtLMM) has been shown to be a robust approach to modeling multi-outcome continuous repeated measures in the presence of outliers or heavy-tailed noise. This paper presents a framework for fitting the MtLMM with an arbitrary missing-data pattern embodied within multiple outcome variables recorded at irregular occasions. To address the serial correlation among the within-subject errors, a damped exponential correlation structure is considered in the model. Under the missing at random mechanism, an efficient alternating expectation-conditional maximization (AECM) algorithm is used to carry out estimation of parameters and imputation of missing values. Techniques for the estimation of random effects and the prediction of future responses are also investigated. Applications to an HIV/AIDS study and a pregnancy study involving multivariate longitudinal data with missing outcomes, together with a simulation study, highlight the superiority of the MtLMM in providing more adequate estimation, imputation, and prediction performance.
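For reference, the damped exponential correlation (DEC) structure used for the within-subject errors is usually written as

```latex
\operatorname{Corr}(e_{ij}, e_{ik}) \;=\; \phi^{\,|t_{ij}-t_{ik}|^{\theta}}, \qquad 0 \le \phi < 1,\ \theta \ge 0,
```

which includes compound symmetry ($\theta = 0$) and continuous-time AR(1)-type exponential decay ($\theta = 1$) as special cases.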

4.
We consider longitudinal studies in which the outcome observed over time is binary and the covariates of interest are categorical. With no missing responses or covariates, one specifies a multinomial model for the responses given the covariates and uses maximum likelihood to estimate the parameters. Unfortunately, incomplete data in the responses and covariates are a common occurrence in longitudinal studies. Here we assume the missing data are missing at random (Rubin, 1976, Biometrika 63, 581-592). Since all of the missing data (responses and covariates) are categorical, a useful technique for obtaining maximum likelihood parameter estimates is the EM algorithm by the method of weights proposed in Ibrahim (1990, Journal of the American Statistical Association 85, 765-769). In using the EM algorithm with missing responses and covariates, one specifies the joint distribution of the responses and covariates. Here we consider the parameters of the covariate distribution as a nuisance. In data sets where the percentage of missing data is high, the estimates of the nuisance parameters can lead to highly unstable estimates of the parameters of interest. We propose a conditional model for the covariate distribution that has several modeling advantages for the EM algorithm and provides a reduction in the number of nuisance parameters, thus providing more stable estimates in finite samples.
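The following sketch illustrates the EM algorithm by the method of weights in a deliberately simplified setting: logistic regression of a binary outcome on a single binary covariate that is missing at random for some subjects, with a Bernoulli covariate model and no additional covariates. The function and variable names are hypothetical, and the near-unpenalized sklearn fit stands in for weighted maximum likelihood; it is a sketch of the device, not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def em_method_of_weights(y, x, n_iter=50):
    """EM by the method of weights for logistic regression of binary y on a
    single binary covariate x, where x is np.nan (missing at random) for some
    subjects and the covariate model is Bernoulli(alpha).

    Each subject with missing x contributes two weighted pseudo-records,
    one per possible covariate value.
    """
    miss = np.isnan(x)
    n_obs, n_mis = int((~miss).sum()), int(miss.sum())

    # Augmented data: observed records, then x=0 and x=1 completions.
    x_aug = np.concatenate([x[~miss], np.zeros(n_mis), np.ones(n_mis)])
    y_aug = np.concatenate([y[~miss], y[miss], y[miss]])
    w = np.ones(n_obs + 2 * n_mis)

    alpha = float(np.nanmean(x))   # initial covariate distribution
    beta = np.zeros(2)             # (intercept, slope) of the outcome model

    for _ in range(n_iter):
        # E-step: weight of each completion is proportional to
        # f(y | x = c; beta) * P(x = c; alpha).
        w_parts = []
        for c in (0.0, 1.0):
            p = 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * c)))
            lik_y = np.where(y[miss] == 1, p, 1.0 - p)
            w_parts.append(lik_y * (alpha if c == 1.0 else 1.0 - alpha))
        total = w_parts[0] + w_parts[1]
        w[n_obs:n_obs + n_mis] = w_parts[0] / total
        w[n_obs + n_mis:] = w_parts[1] / total

        # M-step: weighted fits of the outcome and covariate models.
        # C=1e6 makes the sklearn fit essentially unpenalized (approximate ML).
        fit = LogisticRegression(C=1e6).fit(
            x_aug.reshape(-1, 1), y_aug, sample_weight=w)
        beta = np.array([fit.intercept_[0], fit.coef_[0, 0]])
        alpha = float(np.average(x_aug, weights=w))
    return beta, alpha
```

The E-step recomputes the weights of the pseudo-records and the M-step refits both models with those weights; the paper's conditional covariate model plays the role of the simple Bernoulli(alpha) distribution here.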

5.
Evaluation of the impact of potential uncontrolled confounding is an important component of causal inference based on observational studies. In this article, we introduce a general framework of sensitivity analysis that is based on inverse probability weighting. We propose a general methodology that allows both non-parametric and parametric analyses, which are driven by two parameters that govern the magnitude of the variation of the multiplicative errors of the propensity score and their correlations with the potential outcomes. We also introduce a specific parametric model that offers a mechanistic view on how the uncontrolled confounding may bias the inference through these parameters. Our method can be readily applied to both binary and continuous outcomes and depends on the covariates only through the propensity score, which can be estimated by any parametric or non-parametric method. We illustrate our method with two medical data sets.
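A minimal sketch of an IPW-based sensitivity analysis in this spirit, assuming a binary treatment and reducing the two sensitivity parameters described above to a single multiplicative (log-odds) perturbation of the estimated propensity score; the names and the perturbation scheme are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def ipw_ate_sensitivity(X, treat, y, log_lambda_grid):
    """Hajek-style IPW estimates of the average treatment effect under a
    range of hypothetical multiplicative errors in the propensity score.

    log_lambda_grid: log-odds shifts applied to the fitted propensity score
    (0.0 recovers the usual IPW estimate).
    """
    # Working propensity-score model (a penalized fit is fine for a sketch).
    ps_model = LogisticRegression(max_iter=1000).fit(X, treat)
    logit_ps = ps_model.decision_function(X)  # log-odds of treatment

    results = {}
    for log_lam in log_lambda_grid:
        # Perturb the propensity score on the log-odds scale.
        ps = 1.0 / (1.0 + np.exp(-(logit_ps + log_lam)))
        w1 = treat / ps
        w0 = (1 - treat) / (1 - ps)
        ate = np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)
        results[float(log_lam)] = float(ate)
    return results
```

Plotting the returned estimates against the perturbation grid shows how large a departure from the no-uncontrolled-confounding propensity score would be needed to overturn the study conclusion.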

6.
In many observational studies, individuals are measured repeatedly over time, although not necessarily at a set of pre-specified occasions. Instead, individuals may be measured at irregular intervals, with those having a history of poorer health outcomes being measured with somewhat greater frequency and regularity. In this paper, we consider likelihood-based estimation of the regression parameters in marginal models for longitudinal binary data when the follow-up times are not fixed by design, but can depend on previous outcomes. In particular, we consider assumptions regarding the follow-up time process that result in the likelihood function separating into two components: one for the follow-up time process, the other for the outcome measurement process. The practical implication of this separation is that the follow-up time process can be ignored when making likelihood-based inferences about the marginal regression model parameters. That is, maximum likelihood (ML) estimation of the regression parameters relating the probability of success at a given time to covariates does not require that a model for the distribution of follow-up times be specified. However, to obtain consistent parameter estimates, the multinomial distribution for the vector of repeated binary outcomes must be correctly specified. In general, ML estimation requires specification of all higher-order moments and the likelihood for a marginal model can be intractable except in cases where the number of repeated measurements is relatively small. To circumvent these difficulties, we propose a pseudolikelihood for estimation of the marginal model parameters. The pseudolikelihood uses a linear approximation for the conditional distribution of the response at any occasion, given the history of previous responses. The appeal of this approximation is that the conditional distributions are functions of the first two moments of the binary responses only. When the follow-up times depend only on the previous outcome, the pseudolikelihood requires correct specification of the conditional distribution of the current outcome given the outcome at the previous occasion only. Results from a simulation study and a study of asymptotic bias are presented. Finally, we illustrate the main results using data from a longitudinal observational study that explored the cardiotoxic effects of doxorubicin chemotherapy for the treatment of acute lymphoblastic leukemia in children.
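Restricted to dependence on the previous outcome only, the linear approximation underlying the pseudolikelihood is the usual linear projection determined by the first two moments (generic notation):

```latex
E\bigl[Y_{ij} \mid Y_{i,j-1}\bigr] \;\approx\; \mu_{ij} + \frac{\operatorname{Cov}(Y_{ij}, Y_{i,j-1})}{\operatorname{Var}(Y_{i,j-1})}\,\bigl(Y_{i,j-1} - \mu_{i,j-1}\bigr),
```

with $\mu_{ij} = \Pr(Y_{ij} = 1 \mid X_{ij})$ supplied by the marginal regression model, so only first and second moments of the binary responses are required.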

7.
We explore a Bayesian approach to selection of variables that represent fixed and random effects in modeling of longitudinal binary outcomes with missing data caused by dropouts. We show via analytic results for a simple example that nonignorable missing data lead to biased parameter estimates. This bias results in selection of wrong effects asymptotically, which we can confirm via simulations for more complex settings. By jointly modeling the longitudinal binary data with the dropout process that possibly leads to nonignorable missing data, we are able to correct the bias in estimation and selection. Mixture priors with a point mass at zero are used to facilitate variable selection. We illustrate the proposed approach using a clinical trial for acute ischemic stroke.
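The mixture priors with a point mass at zero referred to above are of the generic spike-and-slab form (hyperparameters illustrative):

```latex
\beta_j \;\sim\; (1-\gamma_j)\,\delta_0 \;+\; \gamma_j\, N(0, \tau_j^2), \qquad \gamma_j \sim \mathrm{Bernoulli}(\pi_j),
```

so that the posterior inclusion probability $\Pr(\gamma_j = 1 \mid \text{data})$ drives selection of the corresponding fixed or random effect.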

8.
Matsui S. Biometrics, 2005, 61(3): 816-823
This article develops methods for stratified analyses of additive or multiplicative causal effect on binary outcomes in randomized trials with noncompliance. The methods are based on a weighted estimating function for an unbiased estimating function under randomization in each stratum. When known weights are used, the derived estimator is a natural extension of the instrumental variable estimator for stratified analyses, and test-based confidence limits are solutions of a quadratic equation in the causal parameter. Optimal weights that maximize asymptotic efficiency incorporate variability in compliance aspects across strata. An assessment based on asymptotic relative efficiency shows that a substantial enhancement in efficiency can be gained by using optimal weights instead of conventional ones, which do not incorporate the variability in compliance aspects across strata. Application to a field trial for coronary heart disease is provided.
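A sketch of the stratified instrumental-variable estimator that the weighted estimating-function approach generalizes, with a binary instrument (randomized assignment), binary treatment received, and a binary outcome; the stratum-size weights below are a conventional illustrative choice, not the paper's efficiency-optimal weights, and the names are hypothetical.

```python
import numpy as np


def stratified_iv_effect(z, d, y, stratum, weights=None):
    """Weighted combination of per-stratum Wald/IV estimates of an additive
    causal effect under noncompliance.

    z: randomized assignment (0/1), d: treatment received (0/1),
    y: binary outcome, stratum: stratum labels.
    weights (optional): one weight per stratum, ordered as np.unique(stratum).
    """
    strata = np.unique(stratum)
    if weights is None:
        # Illustrative choice: weight each stratum by its sample size.
        weights = np.array([np.sum(stratum == s) for s in strata], float)
    weights = np.asarray(weights, float)
    weights = weights / weights.sum()

    estimates = []
    for s in strata:
        m = stratum == s
        # Intent-to-treat effects on outcome and on treatment received.
        itt_y = y[m][z[m] == 1].mean() - y[m][z[m] == 0].mean()
        itt_d = d[m][z[m] == 1].mean() - d[m][z[m] == 0].mean()
        estimates.append(itt_y / itt_d)  # stratum-specific IV (Wald) estimate
    return float(np.dot(weights, estimates))
```

Replacing the stratum-size weights with weights that reflect between-stratum variability in compliance is what yields the efficiency gains discussed above.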

9.
Within the pattern-mixture modeling framework for informative dropout, conditional linear models (CLMs) are a useful approach to deal with dropout that can occur at any point in continuous time (not just at observation times). However, in contrast with selection models, inferences about marginal covariate effects in CLMs are not readily available if nonidentity links are used in the mean structures. In this article, we propose a CLM for long series of longitudinal binary data with marginal covariate effects directly specified. The association between the binary responses and the dropout time is taken into account by modeling the conditional mean of the binary response as well as the dependence between the binary responses given the dropout time. Specifically, parameters in both the conditional mean and dependence models are assumed to be linear or quadratic functions of the dropout time; and the continuous dropout time distribution is left completely unspecified. Inference is fully Bayesian. We illustrate the proposed model using data from a longitudinal study of depression in HIV-infected women, where the strategy of sensitivity analysis based on the extrapolation method is also demonstrated.

10.
Multilocus analysis of single-nucleotide-polymorphism (SNP) haplotypes may provide evidence of association with disease, even when the individual loci themselves do not. Haplotype-based methods are expected to outperform single-SNP analyses because (i) common genetic variation can be structured into haplotypes within blocks of strong linkage disequilibrium and (ii) the functional properties of a protein are determined by the linear sequence of amino acids corresponding to DNA variation on a haplotype. Here, I propose a flexible Bayesian framework for modeling haplotype association with disease in population-based studies of candidate genes or small candidate regions. I employ a Bayesian partition model to describe the correlation between marker-SNP haplotypes and causal variants at the underlying functional polymorphism(s). Under this model, haplotypes are clustered according to their similarity, in terms of marker-SNP allele matches, which is used as a proxy for recent shared ancestry. Haplotypes within a cluster are then assigned the same probability of carrying a causal variant at the functional polymorphism(s). In this way, I can account for the dominance effect of causal variants, here corresponding to any deviation from a multiplicative contribution to disease risk. The results of a detailed simulation study demonstrate that there is minimal cost associated with modeling these dominance effects, with substantial gains in power over haplotype-based methods that do not incorporate clustering and that assume a multiplicative model of disease risks.

11.
Clustered interval-censored data commonly arise in many studies of biomedical research where the failure time of interest is subject to interval-censoring and subjects are correlated for being in the same cluster. A new semiparametric frailty probit regression model is proposed to study covariate effects on the failure time by accounting for the intracluster dependence. Under the proposed normal frailty probit model, the marginal distribution of the failure time is a semiparametric probit model, the regression parameters can be interpreted as both the conditional covariate effects given frailty and the marginal covariate effects up to a multiplicative constant, and the intracluster association can be summarized by two nonparametric measures in simple and explicit form. A fully Bayesian estimation approach is developed based on the use of monotone splines for the unknown nondecreasing function and a data augmentation using normal latent variables. The proposed Gibbs sampler is straightforward to implement since all unknowns have standard form in their full conditional distributions. The proposed method performs very well in estimating the regression parameters as well as the intracluster association, and the method is robust to frailty distribution misspecifications as shown in our simulation studies. Two real-life data sets are analyzed for illustration.
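The normal-latent-variable data augmentation can be illustrated on a plain probit regression (an Albert-Chib style Gibbs sampler); this sketch strips away the frailty, monotone splines, and interval censoring of the full model, uses a flat prior on the regression coefficients, and hypothetical names throughout.

```python
import numpy as np
from scipy.stats import truncnorm


def probit_gibbs(X, y, n_iter=2000, seed=0):
    """Gibbs sampler for probit regression via normal latent variables:
    y_i = 1{z_i > 0},  z_i ~ N(x_i' beta, 1),  flat prior on beta.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    XtX_inv = np.linalg.inv(X.T @ X)
    draws = np.empty((n_iter, p))

    for it in range(n_iter):
        mu = X @ beta
        # Truncated-normal latent draws: z_i > 0 if y_i = 1, z_i <= 0 if y_i = 0
        # (bounds are standardized relative to mean mu and unit scale).
        lower = np.where(y == 1, -mu, -np.inf)
        upper = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lower, upper, size=n, random_state=rng)

        # Conjugate update: beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1}).
        mean = XtX_inv @ (X.T @ z)
        beta = rng.multivariate_normal(mean, XtX_inv)
        draws[it] = beta
    return draws
```

All full conditionals are standard (truncated normal and multivariate normal), which is the property the abstract highlights for the full frailty probit model as well.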

12.
O'Brien SM, Dunson DB. Biometrics, 2004, 60(3): 739-746
Bayesian analyses of multivariate binary or categorical outcomes typically rely on probit or mixed effects logistic regression models that do not have a marginal logistic structure for the individual outcomes. In addition, difficulties arise when simple noninformative priors are chosen for the covariance parameters. Motivated by these problems, we propose a new type of multivariate logistic distribution that can be used to construct a likelihood for multivariate logistic regression analysis of binary and categorical data. The model for individual outcomes has a marginal logistic structure, simplifying interpretation. We follow a Bayesian approach to estimation and inference, developing an efficient data augmentation algorithm for posterior computation. The method is illustrated with application to a neurotoxicology study.

13.
The term “effect” in “additive genetic effect” suggests a causal meaning. However, inferences of such quantities for selection purposes are typically viewed and conducted as a prediction task. Predictive ability as tested by cross-validation is currently the most acceptable criterion for comparing models and evaluating new methodologies. Nevertheless, it does not directly indicate if predictors reflect causal effects. Such evaluations would require causal inference methods that are not typical in genomic prediction for selection. This suggests that the usual approach to infer genetic effects contradicts the label of the quantity inferred. Here we investigate if genomic predictors for selection should be treated as standard predictors or if they must reflect a causal effect to be useful, requiring causal inference methods. Conducting the analysis as a prediction or as a causal inference task affects, for example, how covariates of the regression model are chosen, which may heavily affect the magnitude of genomic predictors and therefore selection decisions. We demonstrate that selection requires learning causal genetic effects. However, genomic predictors from some models might capture noncausal signal, providing good predictive ability but poorly representing true genetic effects. Simulated examples are used to show that aiming for predictive ability may lead to poor modeling decisions, while causal inference approaches may guide the construction of regression models that better infer the target genetic effect even when they underperform in cross-validation tests. In conclusion, genomic selection models should be constructed to aim primarily for identifiability of causal genetic effects, not for predictive ability.

14.
F. S. Nathoo. Biometrics, 2010, 66(2): 336-346
In this article, we present a new statistical methodology for longitudinal studies in forestry, where trees are subject to recurrent infection, and the hazard of infection depends on tree growth over time. Understanding the nature of this dependence has important implications for reforestation and breeding programs. Challenges arise for statistical analysis in this setting with sampling schemes leading to panel data, exhibiting dynamic spatial variability, and incomplete covariate histories for hazard regression. In addition, data are collected at a large number of locations, which poses computational difficulties for spatiotemporal modeling. A joint model for infection and growth is developed wherein a mixed nonhomogeneous Poisson process, governing recurring infection, is linked with a spatially dynamic nonlinear model representing the underlying height growth trajectories. These trajectories are based on the von Bertalanffy growth model and a spatially varying parameterization is employed. Spatial variability in growth parameters is modeled through a multivariate spatial process derived through kernel convolution. Inference is conducted in a Bayesian framework with implementation based on hybrid Monte Carlo. Our methodology is applied for analysis in an 11-year study of recurrent weevil infestation of white spruce in British Columbia.
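The von Bertalanffy growth curve mentioned above has the standard form

```latex
H(t) \;=\; H_{\infty}\bigl(1 - e^{-k\,(t - t_0)}\bigr),
```

where $H_{\infty}$ is the asymptotic height, $k$ the growth-rate parameter, and $t_0$ a location parameter; in the spatially varying parameterization, such growth parameters are treated as spatial processes built by kernel convolution.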

15.
In longitudinal studies and in clustered situations, binary and continuous response variables are often observed and need to be modeled together. In a recent publication, Dunson, Chen, and Harry (2003, Biometrics 59, 521-530) (DCH) propose a Bayesian approach for joint modeling of cluster size and binary and continuous subunit-specific outcomes and illustrate this approach with a developmental toxicity data example. In this note we demonstrate how standard software (PROC NLMIXED in SAS) can be used to obtain maximum likelihood estimates in an alternative parameterization of the model with a single cluster-level factor considered by DCH for that example. We also suggest that a more general model with additional cluster-level random effects provides a better fit to the data set. An apparent discrepancy between the estimates obtained by DCH and the estimates obtained earlier by Catalano and Ryan (1992, Journal of the American Statistical Association 87, 651-658) is also resolved. The issue of bias in inferences concerning the dose effect when cluster size is ignored is discussed. The maximum likelihood approach considered herein is applicable to general situations with multiple clustered or longitudinally measured outcomes of different type and does not require prior specification and extensive programming.

16.
Multivariate heterogeneous responses and heteroskedasticity have attracted increasing attention in recent years. In genome-wide association studies, effective simultaneous modeling of multiple phenotypes would improve statistical power and interpretability. However, a flexible common modeling system for heterogeneous data types can pose computational difficulties. Here we build upon a previous method for multivariate probit estimation using a two-stage composite likelihood that exhibits favorable computational time while retaining attractive parameter estimation properties. We extend this approach to incorporate multivariate responses of heterogeneous data types (binary and continuous), and possible heteroskedasticity. Although the approach has wide applications, it would be particularly useful for genomics, precision medicine, or individual biomedical prediction. Using a genomics example, we explore statistical power and confirm that the approach performs well for hypothesis testing and coverage percentages under a wide variety of settings. The approach has the potential to better leverage genomics data and provide interpretable inference for pleiotropy, in which a locus is associated with multiple traits.

17.
This paper presents an extension of the joint modeling strategy to the case of multiple longitudinal outcomes and repeated infections of different types over time, motivated by post-kidney-transplantation data. Our model comprises two parts linked by shared latent terms. On the one hand is a multivariate linear mixed model with random effects, in which a low-rank thin-plate spline function is incorporated to capture the nonlinear behavior of the different profiles over time. On the other hand is an infection-specific Cox model, where the dependence between the different types of infections and the related infection times is modeled through a random effect associated with each infection type, to capture the within-type dependence, and a shared frailty parameter, to capture the dependence between infection types. We implement the parameterization used in joint models in which the fitted longitudinal measurements enter as time-dependent covariates in a relative risk model. The proposed model is implemented in OpenBUGS using an MCMC approach.

18.
Simple tests are given for consistency of the data with additive and with multiplicative effects of two risk factors on a binary outcome. A combination of the procedures will show whether the data are consistent with neither, one, or both of the models of no additive or no multiplicative interaction. Implications for the size of the study needed to detect differences between the models are also addressed. Because of the simple form of the test statistics, combining evidence from different studies or strata is straightforward. An illustration of how the method could be extended to data from a 2×R×C table is also given.
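In generic notation, with $p_{jk}$ the outcome risk at level $j$ of the first factor and level $k$ of the second (both factors binary), the two null hypotheses being examined can be written as

```latex
\text{no additive interaction:}\quad p_{11} - p_{10} - p_{01} + p_{00} = 0, \qquad
\text{no multiplicative interaction:}\quad \frac{p_{11}\,p_{00}}{p_{10}\,p_{01}} = 1,
```

so a given data set can be consistent with one, both, or neither hypothesis.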

19.
Lok JJ, Degruttola V. Biometrics, 2012, 68(3): 745-754
We estimate how the effect of antiretroviral treatment depends on the time from HIV infection to initiation of treatment, using observational data. A major challenge in making inferences from such observational data arises from biases associated with the nonrandom assignment of treatment, for example, bias induced by dependence of the time of initiation on disease status. To address this concern, we develop a new class of Structural Nested Mean Models (SNMMs) to estimate the impact of the time of initiation of treatment after infection on an outcome measured a fixed duration after initiation, compared to the effect of not initiating treatment. This leads to a SNMM that models the effect of multiple dosages of treatment on a time-dependent outcome, in contrast to most existing SNMMs, which focus on the effect of one dosage of treatment on an outcome measured at the end of the study. Our identifying assumption is that there are no unmeasured confounders. We illustrate our methods using the observational Acute Infection and Early Disease Research Program (AIEDRP) Core01 database on HIV. The current standard of care in HIV-infected patients is Highly Active Anti-Retroviral Treatment (HAART); however, the optimal time to start HAART has not yet been identified. The new class of SNMMs allows estimation of the dependence of the effect of 1 year of HAART on the time between the estimated date of infection and treatment initiation, and on patient characteristics. Results of fitting this model imply that early use of HAART substantially improves immune reconstitution in the early and acute phase of HIV infection.

20.
In biomedical or public health research, it is common for both survival time and longitudinal categorical outcomes to be collected for a subject, along with the subject’s characteristics or risk factors. Investigators are often interested in finding important variables for predicting both survival time and longitudinal outcomes, which may be correlated within the same subject. Existing approaches for such joint analyses deal with continuous longitudinal outcomes, and new statistical methods need to be developed for categorical longitudinal outcomes. We propose to simultaneously model the survival time with a stratified Cox proportional hazards model and the longitudinal categorical outcomes with a generalized linear mixed model. Random effects are introduced to account for the dependence between survival time and longitudinal outcomes due to unobserved factors. The Expectation-Maximization (EM) algorithm is used to derive the point estimates of the model parameters, and the observed information matrix is adopted to estimate their asymptotic variances. Asymptotic properties of our proposed maximum likelihood estimators are established using the theory of empirical processes. The method is demonstrated to perform well in finite samples via simulation studies. We illustrate our approach with data from the Carolina Head and Neck Cancer Study (CHANCE) and compare the results based on our simultaneous analysis and the separately conducted analyses using the generalized linear mixed model and the Cox proportional hazards model. Our proposed method identifies more predictors than the separate analyses do.
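A generic shared random-effects formulation of such a joint model (illustrative notation, not the exact specification fitted to the CHANCE data) links the two submodels through common subject-level random effects $b_i$:

```latex
g\bigl(E[Y_{ij} \mid b_i]\bigr) = X_{ij}^{\top}\alpha + Z_{ij}^{\top} b_i, \qquad
\lambda_i(t \mid b_i) = \lambda_{0,s_i}(t)\,\exp\bigl(W_i^{\top}\beta + \gamma^{\top} b_i\bigr),
```

where $g$ is the link of the generalized linear mixed model, $\lambda_{0,s_i}$ is the baseline hazard for stratum $s_i$ of the stratified Cox model, and $\gamma$ quantifies the dependence induced by the unobserved factors.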
