Similar Literature
20 similar articles found.
1.
Weibin Zhong, Guoqing Diao. Biometrics 2023; 79(3): 1959-1971.
Two-phase studies such as case-cohort and nested case-control studies are widely used cost-effective sampling strategies. In the first phase, the observed failure/censoring time and inexpensive exposures are collected. In the second phase, a subgroup of subjects is selected for measurements of expensive exposures based on the information from the first phase. One challenging issue is how to utilize all the available information to conduct efficient regression analyses of the two-phase study data. This paper proposes a joint semiparametric modeling of the survival outcome and the expensive exposures. Specifically, we assume a class of semiparametric transformation models and a semiparametric density ratio model for the survival outcome and the expensive exposures, respectively. The class of semiparametric transformation models includes the proportional hazards model and the proportional odds model as special cases. The density ratio model is flexible in modeling multivariate mixed-type data. We develop efficient likelihood-based estimation and inference procedures and establish the large sample properties of the nonparametric maximum likelihood estimators. Extensive numerical studies reveal that the proposed methods perform well under practical settings. The proposed methods also appear to be reasonably robust under various model mis-specifications. An application to the National Wilms Tumor Study is provided.
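For context on the model class referenced above, the semiparametric linear transformation model is commonly written as (a generic formulation, not necessarily the authors' exact specification):

\[
H(T) = -\beta^{\top} Z + \varepsilon,
\]

where H is an unspecified, strictly increasing function, Z denotes the covariates, and the error distribution indexes the family: an extreme-value error yields the proportional hazards model, while a standard logistic error yields the proportional odds model.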

2.
Meta-analysis is a statistical methodology for combining information from diverse sources so that a more reliable and efficient conclusion can be reached. It can be conducted by either synthesizing study-level summary statistics or drawing inference from an overarching model for individual participant data (IPD) if available. The latter is often viewed as the “gold standard.” For random-effects models, however, it remains not fully understood whether the use of IPD indeed gains efficiency over summary statistics. In this paper, we examine the relative efficiency of the two methods under a general likelihood inference setting. We show theoretically and numerically that summary-statistics-based analysis is at most as efficient as IPD analysis, provided that the random effects follow the Gaussian distribution, and maximum likelihood estimation is used to obtain summary statistics. More specifically, (i) the two methods are equivalent in an asymptotic sense; and (ii) summary-statistics-based inference can incur an appreciable loss of efficiency if the sample sizes are not sufficiently large. Our results are established under the assumption that the between-study heterogeneity parameter remains constant regardless of the sample sizes, which is different from a previous study. Our findings are confirmed by the analyses of simulated data sets and a real-world study of alcohol interventions.
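As background for the comparison above, the usual Gaussian random-effects formulation behind summary-statistics meta-analysis (a generic sketch, not the paper's exact likelihood) is

\[
\hat{\theta}_i \mid \theta_i \sim N(\theta_i, s_i^2), \qquad \theta_i \sim N(\theta, \tau^2), \qquad
\hat{\theta} = \frac{\sum_i w_i \hat{\theta}_i}{\sum_i w_i}, \quad w_i = \frac{1}{s_i^2 + \hat{\tau}^2},
\]

where \hat{\theta}_i and s_i are the estimate and standard error from study i, and \tau^2 is the between-study heterogeneity variance that is assumed to stay constant as the study sample sizes grow.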

3.
When novel scientific questions arise after longitudinal binary data have been collected, the subsequent selection of subjects from the cohort for whom further detailed assessment will be undertaken is often necessary to efficiently collect new information. Key examples of additional data collection include retrospective questionnaire data, novel data linkage, or evaluation of stored biological specimens. In such cases, all data required for the new analyses are available except for the new target predictor or exposure. We propose a class of longitudinal outcome-dependent sampling schemes and detail a design-corrected conditional maximum likelihood analysis for highly efficient estimation of time-varying and time-invariant covariate coefficients when resource limitations prohibit exposure ascertainment on all participants. Additionally, we detail an important study planning phase that exploits available cohort data to proactively examine the feasibility of any proposed substudy as well as to inform decisions regarding the most desirable study design. The proposed designs and associated analyses are discussed in the context of a study that seeks to examine the modifying effect of an interleukin-10 cytokine single nucleotide polymorphism on asthma symptom regression in adolescents participating in the Childhood Asthma Management Program Continuation Study. Using this example, we assume that all data necessary to conduct the study are available except subject-specific genotype data. We also assume that these data would be ascertained by analyzing stored blood samples, the cost of which limits the sample size.

4.
Identifying effective and valid surrogate markers to make inference about a treatment effect on long-term outcomes is an important step in improving the efficiency of clinical trials. Replacing a long-term outcome with short-term and/or cheaper surrogate markers can potentially shorten study duration and reduce trial costs. There is a sizable statistical literature on methods to quantify the effectiveness of a single surrogate marker. Both parametric and nonparametric approaches have been well developed for different outcome types. However, when there are multiple markers available, methods for combining markers to construct a composite marker with improved surrogacy remain limited. In this paper, building on the optimal transformation framework of Wang et al. (2020), we propose a novel calibrated model fusion approach to optimally combine multiple markers to improve surrogacy. Specifically, we obtain two initial estimates of optimal composite scores of the markers based on two sets of models, with one set approximating the underlying data distribution and the other directly approximating the optimal transformation function. We then estimate an optimal calibrated combination of the two estimated scores which ensures both validity of the final combined score and optimality with respect to the proportion of treatment effect explained by the final combined score. This approach is unique in that it identifies an optimal combination of the multiple surrogates without strictly relying on parametric assumptions while borrowing modeling strategies to avoid fully nonparametric estimation, which is subject to the curse of dimensionality. Our identified optimal transformation can also be used to directly quantify the surrogacy of this identified combined score. Theoretical properties of the proposed estimators are derived, and the finite sample performance of the proposed method is evaluated through simulation studies. We further illustrate the proposed method using data from the Diabetes Prevention Program study.

5.
Over the last decade, the availability of SNP-trait associations from genome-wide association studies has led to an array of methods for performing Mendelian randomization (MR) studies using only summary statistics. A common feature of these methods, besides their intuitive simplicity, is the ability to combine data from several sources, incorporate multiple variants, and account for biases due to weak instruments and pleiotropy. With the advent of large and accessible fully genotyped cohorts such as UK Biobank, there is now increasing interest in understanding how best to apply these well-developed summary-data methods to individual-level data, and to explore the use of more sophisticated causal methods allowing for non-linearity and effect modification. In this paper, we describe a general procedure for optimally applying any two-sample summary-data method using one-sample data. Our procedure first performs a meta-analysis of summary-data estimates that are intentionally contaminated by collider bias between the genetic instruments and unmeasured confounders, due to conditioning on the observed exposure. These estimates are then used to correct the standard observational association between an exposure and outcome. Simulations are conducted to demonstrate the method's performance against naive applications of two-sample summary-data MR. We apply the approach to the UK Biobank cohort to investigate the causal role of sleep disturbance on HbA1c levels, an important determinant of diabetes. Our approach can be viewed as a generalization of Dudbridge et al. (Nat. Comm. 10: 1561), who developed a technique to adjust for index event bias when uncovering genetic predictors of disease progression based on case-only data. Our work serves to clarify that in any one-sample MR analysis, it can be advantageous to estimate causal relationships by artificially inducing and then correcting for collider bias.
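To fix notation, a minimal sketch of the standard inverse-variance-weighted (IVW) two-sample summary-data estimator follows; it is not the collider-bias-corrected procedure proposed in the paper, and the inputs are purely illustrative.

    import numpy as np

    def ivw_estimate(beta_x, beta_y, se_y):
        # beta_x: per-variant associations with the exposure
        # beta_y: per-variant associations with the outcome
        # se_y:   standard errors of beta_y
        beta_x, beta_y, se_y = map(np.asarray, (beta_x, beta_y, se_y))
        w = beta_x**2 / se_y**2              # IVW weights
        ratio = beta_y / beta_x              # per-variant Wald ratios
        est = np.sum(w * ratio) / np.sum(w)  # weighted average of Wald ratios
        se = 1.0 / np.sqrt(np.sum(w))        # fixed-effect IVW standard error
        return est, se

    # toy example with three genetic instruments
    est, se = ivw_estimate([0.10, 0.08, 0.12], [0.020, 0.018, 0.025], [0.005, 0.006, 0.007])
    print(f"IVW estimate {est:.3f} (SE {se:.3f})")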

6.
In this paper, we investigate K-group comparisons on survival endpoints for observational studies. In clinical databases for observational studies, treatments for patients are chosen with probabilities that vary depending on their baseline characteristics. This often results in noncomparable treatment groups because of imbalance in the baseline characteristics of patients among treatment groups. In order to overcome this issue, we conduct propensity analysis and match the subjects with similar propensity scores across treatment groups, or compare weighted group means (or weighted survival curves for censored outcome variables) using inverse probability weighting (IPW). To this end, multinomial logistic regression has been a popular propensity analysis method to estimate the weights. We propose to use the decision tree method as an alternative propensity analysis due to its simplicity and robustness. We also propose IPW rank statistics, called the Dunnett-type test and the ANOVA-type test, to compare three or more treatment groups on survival endpoints. Using simulations, we evaluate the finite sample performance of the weighted rank statistics combined with these propensity analysis methods. We demonstrate these methods with a real data example. The IPW method also allows for unbiased estimation of the population parameters of each treatment group. In this paper, we limit our discussion to survival outcomes, but all the methods can be easily modified for other types of outcomes, such as binary or continuous variables.
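The propensity-weighting step described above can be sketched as follows; this is an illustrative sketch using scikit-learn on synthetic data (not the authors' code), and it stops at the construction of the inverse probability weights rather than the proposed weighted rank tests.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    n = 500
    X = rng.normal(size=(n, 3))             # baseline characteristics
    group = rng.integers(0, 3, size=n)      # three treatment groups (toy assignment)

    # Propensity model 1: multinomial logistic regression
    ps_logit = LogisticRegression(max_iter=1000).fit(X, group).predict_proba(X)

    # Propensity model 2: a decision tree, the simpler and more robust alternative discussed above
    tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=30).fit(X, group)
    ps_tree = np.clip(tree.predict_proba(X), 1e-3, None)   # clip to avoid division by zero

    # Inverse probability weights: 1 / Pr(received own treatment group | X)
    w_logit = 1.0 / ps_logit[np.arange(n), group]
    w_tree = 1.0 / ps_tree[np.arange(n), group]
    print(w_logit[:5], w_tree[:5])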

7.
Biomedical researchers are often interested in estimating the effect of an environmental exposure in relation to a chronic disease endpoint. However, the exposure variable of interest may be measured with errors. In a subset of the whole cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies an additive measurement error model, but it may not have repeated measurements. The subset in which the surrogate variables are available is called a calibration sample. In addition to the surrogate variables that are available among the subjects in the calibration sample, we consider the situation when there is an instrumental variable available for all study subjects. An instrumental variable is correlated with the unobserved true exposure variable, and hence can be useful in the estimation of the regression coefficients. In this paper, we propose a nonparametric method for Cox regression using the observed data from the whole cohort. The nonparametric estimator is the best linear combination of a nonparametric correction estimator from the calibration sample and the difference of the naive estimators from the calibration sample and the whole cohort. The asymptotic distribution is derived, and the finite sample performance of the proposed estimator is examined via intensive simulation studies. The methods are applied to the Nutritional Biomarkers Study of the Women's Health Initiative.
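As a loose illustration of how a calibration sample and an instrumental variable can be combined for Cox regression, the sketch below uses simple regression calibration on simulated data; it is not the best-linear-combination estimator developed in the paper, and it assumes the lifelines package. All variable names are hypothetical.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(3)
    n, n_cal = 2000, 400
    x = rng.normal(size=n)                        # true exposure (never observed)
    iv = x + rng.normal(scale=1.0, size=n)        # instrumental variable, observed on everyone
    t = rng.exponential(scale=np.exp(-0.5 * x))   # survival time driven by the true exposure
    c = np.quantile(t, 0.7)                       # toy administrative censoring time
    e = (t < c).astype(int)
    t = np.minimum(t, c)

    cal = np.zeros(n, dtype=bool)
    cal[:n_cal] = True                            # calibration sample
    w = np.where(cal, x + rng.normal(scale=0.5, size=n), np.nan)   # error-prone surrogate

    # regression calibration: predict the surrogate from the instrument in the calibration sample
    slope, intercept = np.polyfit(iv[cal], w[cal], 1)
    x_hat = intercept + slope * iv                # imputed exposure for the whole cohort

    df = pd.DataFrame({"T": t, "E": e, "x_hat": x_hat})
    cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
    print(cph.summary)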

8.
In many randomized clinical trials of therapeutics for COVID-19, the primary outcome is an ordinal categorical variable, and interest focuses on the odds ratio (OR; active agent vs control) under the assumption of a proportional odds model. Although at the final analysis the outcome will be determined for all subjects, at an interim analysis, the status of some participants may not yet be determined, for example, because ascertainment of the outcome may not be possible until some prespecified follow-up time. Accordingly, the outcome from these subjects can be viewed as censored. A valid interim analysis can be based on data only from those subjects with full follow-up; however, this approach is inefficient, as it does not exploit additional information that may be available on those for whom the outcome is not yet available at the time of the interim analysis. Appealing to the theory of semiparametrics, we propose an estimator for the OR in a proportional odds model with censored, time-lagged categorical outcome that incorporates additional baseline and time-dependent covariate information and demonstrate that it can result in considerable gains in efficiency relative to simpler approaches. A byproduct of the approach is a covariate-adjusted estimator for the OR based on the full data that would be available at a final analysis.
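For reference, the proportional odds model behind the OR discussed above can be written generically as (this states the model only, not the paper's semiparametric estimator):

\[
\operatorname{logit} P(Y \le j \mid A) = \alpha_j - \theta A, \qquad j = 1, \dots, K - 1,
\]

so that exp(\theta) is the common odds ratio, for the active agent (A = 1) versus control (A = 0), of the ordinal outcome Y falling above rather than at or below any cutpoint j.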

9.
The rapid acceleration of genetic data collection in biomedical settings has recently resulted in the rise of genetic compendia filled with rich longitudinal disease data. One common feature of these data sets is their plethora of interval-censored outcomes. However, very few tools are available for the analysis of genetic data sets with interval-censored outcomes, and in particular, there is a lack of methodology available for set-based inference. Set-based inference is used to associate a gene, biological pathway, or other genetic construct with outcomes and is one of the most popular strategies in genetics research. This work develops three such tests for interval-censored settings, beginning with a variance components test for interval-censored outcomes, the interval-censored sequence kernel association test (ICSKAT). We also provide the interval-censored version of the Burden test, and then we integrate ICSKAT and the Burden test to construct their optimal combination, the ICSKATO test. These tests unlock set-based analysis of interval-censored data sets with analogs of three highly popular set-based tools commonly applied to continuous and binary outcomes. Simulation studies illustrate the advantages of the developed methods over ad hoc alternatives, including protection of the type I error rate at very low levels and increased power. The proposed approaches are applied to the investigation that motivated this study, an examination of the genes associated with bone mineral density deficiency and fracture risk.
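To fix ideas about the two set-based statistics being combined, their familiar non-censored analogs (not the interval-censored score statistics actually derived in the paper) take the forms

\[
Q_{\mathrm{burden}} = \Bigl(\sum_{j} w_j S_j\Bigr)^{2}, \qquad
Q_{\mathrm{SKAT}} = \sum_{j} w_j^{2} S_j^{2},
\]

where S_j is the score statistic for variant j in the set and w_j is a variant weight. The burden test is powerful when most variants act in the same direction, the variance components (SKAT) test when effects differ in direction or magnitude, and the optimal combination searches over mixtures \rho Q_{\mathrm{burden}} + (1 - \rho) Q_{\mathrm{SKAT}}.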

10.
Two-phase designs can reduce the cost of epidemiological studies by limiting the ascertainment of expensive covariates and/or exposures to an efficiently selected subset (phase-II) of a larger (phase-I) study. Efficient analysis of the resulting data set combining disparate information from phase-I and phase-II, however, can be complex. Most of the existing methods, including the semiparametric maximum-likelihood estimator, require the information in phase-I to be summarized into a fixed number of strata. In this paper, we describe a novel method for the analysis of two-phase studies where information from phase-I is summarized by parameters associated with a reduced logistic regression model of the disease outcome on available covariates. We then set up estimating equations for parameters associated with the desired extended logistic regression model, based on information on the reduced model parameters from phase-I and complete data available at phase-II, after accounting for the nonrandom sampling design. We use the generalized method of moments to solve the overidentified estimating equations and develop the resulting asymptotic theory for the proposed estimator. Simulation studies show that the use of reduced parametric models, as opposed to summarizing data into strata, can lead to more efficient utilization of phase-I data. An application of the proposed method is illustrated using data from the U.S. National Wilms Tumor Study.

11.
Multiple imputation (MI) is increasingly popular for handling multivariate missing data. Two general approaches are available in standard computer packages: MI based on the posterior distribution of incomplete variables under a multivariate (joint) model, and fully conditional specification (FCS), which imputes missing values using univariate conditional distributions for each incomplete variable given all the others, cycling iteratively through the univariate imputation models. In the context of longitudinal or clustered data, it is not clear whether these approaches result in consistent estimates of regression coefficient and variance component parameters when the analysis model of interest is a linear mixed effects model (LMM) that includes both random intercepts and slopes and either the covariates, or both the covariates and the outcome, contain missing information. In the current paper, we compare the performance of seven different MI methods for handling missing values in longitudinal and clustered data in the context of fitting LMMs with both random intercepts and slopes. We study the theoretical compatibility between specific imputation models fitted under each of these approaches and the LMM, and also conduct simulation studies in both the longitudinal and clustered data settings. Simulations were motivated by analyses of the association between body mass index (BMI) and quality of life (QoL) in the Longitudinal Study of Australian Children (LSAC). Our findings show that the relative performance of MI methods varies according to whether the incomplete covariate has fixed or random effects and whether there is missingness in the outcome variable. We show via simulation that compatible imputation and analysis models result in consistent estimation of both regression parameters and variance components. We illustrate our findings with the analysis of LSAC data.
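The analysis model targeted above, an LMM with both a random intercept and a random slope, can be fit in standard software roughly as follows; this is a sketch on synthetic data with hypothetical variable names (qol, bmi, time, id) and illustrates only the analysis model, not any of the seven imputation methods.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # toy longitudinal data: repeated QoL measurements per child
    rng = np.random.default_rng(1)
    n_id, n_obs = 200, 4
    ids = np.repeat(np.arange(n_id), n_obs)
    time = np.tile(np.arange(n_obs), n_id)
    bmi = rng.normal(18, 2, size=n_id)[ids]
    u0 = rng.normal(0, 1.0, size=n_id)[ids]        # random intercepts
    u1 = rng.normal(0, 0.3, size=n_id)[ids]        # random slopes
    qol = 50 - 0.5 * bmi + (1.0 + u1) * time + u0 + rng.normal(0, 1, size=n_id * n_obs)
    df = pd.DataFrame({"id": ids, "time": time, "bmi": bmi, "qol": qol})

    # LMM with a random intercept and a random slope for time within each subject
    fit = smf.mixedlm("qol ~ bmi + time", df, groups=df["id"], re_formula="~time").fit()
    print(fit.summary())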

12.
Valid surrogate endpoints S can be used as a substitute for a true outcome of interest T to measure treatment efficacy in a clinical trial. We propose a causal inference approach to validate a surrogate by incorporating longitudinal measurements of the true outcomes using a mixed modeling approach, and we define models and quantities for validation that may vary across the study period using principal surrogacy criteria. We consider a surrogate-dependent treatment efficacy curve that allows us to validate the surrogate at different time points. We extend these methods to accommodate a delayed-start treatment design where all patients eventually receive the treatment. Not all parameters are identified in the general setting. We apply a Bayesian approach for estimation and inference, utilizing more informative prior distributions for selected parameters. We consider the sensitivity of these prior assumptions as well as assumptions of independence among certain counterfactual quantities conditional on pretreatment covariates to improve identifiability. We examine the frequentist properties (bias of point and variance estimates, credible interval coverage) of a Bayesian imputation method. Our work is motivated by a clinical trial of a gene therapy where the functional outcomes are measured repeatedly throughout the trial.

13.
In studies that require long-term and/or costly follow-up of participants to evaluate a treatment, there is often interest in identifying and using a surrogate marker to evaluate the treatment effect. While several statistical methods have been proposed to evaluate potential surrogate markers, available methods generally do not account for or address the potential for a surrogate to vary in utility or strength by patient characteristics. Previous work examining surrogate markers has indicated that there may be such heterogeneity, that is, that a surrogate marker may be useful (with respect to capturing the treatment effect on the primary outcome) for some subgroups, but not for others. This heterogeneity is important to understand, particularly if the surrogate is to be used in a future trial to replace the primary outcome. In this paper, we propose an approach and estimation procedures to measure the surrogate strength as a function of a baseline covariate W and thus examine potential heterogeneity in the utility of the surrogate marker with respect to W. Within a potential outcome framework, we quantify the surrogate strength/utility using the proportion of treatment effect on the primary outcome that is explained by the treatment effect on the surrogate. We propose testing procedures to test for evidence of heterogeneity, examine finite sample performance of these methods via simulation, and illustrate the methods using AIDS clinical trial data.
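The central quantity above, the proportion of the treatment effect explained (PTE) as a function of a baseline covariate, can be summarized generically as follows (the usual PTE construction, stated loosely rather than as the paper's exact estimand):

\[
\mathrm{PTE}(w) = \frac{\Delta(w) - \Delta_S(w)}{\Delta(w)},
\]

where \Delta(w) is the treatment effect on the primary outcome among subjects with W = w and \Delta_S(w) is the residual treatment effect remaining after accounting for the treatment effect on the surrogate. Values of PTE(w) near 1 indicate a strong surrogate in that subgroup; heterogeneity corresponds to PTE(w) varying with w.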

14.
The two-stage case–control design has been widely used in epidemiological studies for its cost-effectiveness and improvement of study efficiency (White, 1982, American Journal of Epidemiology 115, 119–128; Breslow and Cain, 1988, Biometrika 75, 11–20). The evolution of modern biomedical studies has called for cost-effective designs with continuous outcome and exposure variables. In this article, we propose a new two-stage outcome-dependent sampling (ODS) scheme with a continuous outcome variable, where both the first-stage data and the second-stage data are from ODS schemes. We develop a semiparametric empirical likelihood estimation procedure for inference about the regression parameters in the proposed design. Simulation studies were conducted to investigate the small-sample behavior of the proposed estimator. We demonstrate that, for a given statistical power, the proposed design will require a substantially smaller sample size than the alternative designs. The proposed method is illustrated with an environmental health study conducted at the National Institutes of Health.
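The core idea of an ODS scheme with a continuous outcome, selecting the expensive second-stage subsample with probabilities that depend on where the outcome falls, can be sketched as follows; the cut points and sampling fractions are made up for illustration, and the sketch does not implement the two-stage design or the semiparametric empirical likelihood estimator.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 5000
    y = rng.normal(size=n)                  # continuous outcome observed on the full cohort

    # strata defined by outcome quantiles; oversample the informative tails
    lo, hi = np.quantile(y, [0.1, 0.9])
    stratum = np.where(y < lo, "low", np.where(y > hi, "high", "middle"))
    frac = {"low": 0.80, "middle": 0.05, "high": 0.80}    # hypothetical sampling fractions

    p_select = np.array([frac[s] for s in stratum])
    selected = rng.random(n) < p_select     # only these subjects get the expensive exposure measured
    print({s: int(selected[stratum == s].sum()) for s in ("low", "middle", "high")})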

15.
Liang Y, Lu W, Ying Z. Biometrics 2009; 65(2): 377-384.
In analysis of longitudinal data, it is often assumed that observation times are predetermined and are the same across study subjects. Such an assumption, however, is often violated in practice. As a result, the observation times may be highly irregular. It is well known that if the sampling scheme is correlated with the outcome values, the usual statistical analysis may yield bias. In this article, we propose joint modeling and analysis of longitudinal data with possibly informative observation times via latent variables. A two-step estimation procedure is developed for parameter estimation. We show that the resulting estimators are consistent and asymptotically normal, and that the asymptotic variance can be consistently estimated using the bootstrap method. Simulation studies and a real data analysis demonstrate that our method performs well with realistic sample sizes and is appropriate for practical use.

16.

Background

Trials in Alzheimer’s disease are increasingly focusing on prevention in asymptomatic individuals. This poses a challenge in examining treatment effects, since currently available approaches are often unable to detect cognitive and functional changes among asymptomatic individuals. The resulting small effect sizes require large sample sizes, even when biomarkers or secondary measures are used in randomized controlled trials (RCTs). Better assessment approaches and outcomes capable of capturing subtle changes during asymptomatic disease stages are needed.

Objective

We aimed to develop a new approach to track changes in functional outcomes by using individual-specific distributions (as opposed to group norms) of unobtrusive, continuously monitored in-home data. Our objective was to compare the sample sizes required to achieve sufficient power to detect prevention trial effects on trajectories of outcomes in two scenarios: (1) annually assessed neuropsychological test scores (a conventional approach), and (2) the likelihood of falling below subject-specific low-performance thresholds, both modeled as a function of time.

Methods

One hundred nineteen cognitively intact subjects were enrolled and followed over 3 years in the Intelligent Systems for Assessing Aging Change (ISAAC) study. Using the difference in empirically identified time slopes between those who remained cognitively intact during follow-up (normal controls, NC) and those who transitioned to mild cognitive impairment (MCI), we estimated the comparative sample sizes required to achieve up to 80% statistical power over a range of effect sizes for detecting reductions in the NC-MCI difference in time slopes prior to transition.

Results

Sample size estimates indicated that approximately 2000 subjects with a follow-up duration of 4 years would be needed to detect a 30% effect size when the outcome is an annually assessed memory test score. When the outcome is the likelihood of low walking speed, defined using the individual-specific distributions of walking speed collected at baseline, 262 subjects are required; similarly, for computer use, 26 subjects are required.

Conclusions

Individual-specific thresholds of low functional performance based on high-frequency in-home monitoring data distinguish trajectories of MCI from NC and could substantially reduce sample sizes needed in dementia prevention RCTs.

17.
In many clinical studies that involve follow-up, it is common to observe one or more sequences of longitudinal measurements, as well as one or more time-to-event outcomes. A competing risks situation arises when the probability of occurrence of one event is altered or hindered by another time-to-event outcome. Recently, much attention has been paid to the joint analysis of a single longitudinal response and a single time-to-event outcome when the missing data mechanism in the longitudinal process is non-ignorable. In this paper, we propose an extension in which multiple longitudinal responses are jointly modeled with competing risks (multiple time-to-event outcomes). Our shared-parameter joint model consists of a system of multiphase non-linear mixed effects sub-models for the multiple longitudinal responses, and a system of cause-specific non-proportional hazards frailty sub-models for the competing risks, with associations among the multiple longitudinal responses and competing risks modeled using latent parameters. The joint model is applied to a data set of patients who are on mechanical circulatory support and are awaiting heart transplant, using readily available software. While on mechanical circulatory support, patients' liver and renal functions may worsen, and these in turn may influence one of the two possible competing outcomes: (i) death before transplant; (ii) transplant. In one application, we propose a system of multiphase cause-specific non-proportional hazards sub-models in which the frailty can be time-varying. Performance under different scenarios was assessed using simulation studies. By using the proposed joint modeling of the multiphase sub-models, one can (i) identify non-linear trends in multiple longitudinal outcomes; (ii) estimate time-varying hazards and cumulative incidence functions of the competing risks; (iii) identify risk factors for both types of outcomes, where the effects may or may not change with time; and (iv) assess the association between the multiple longitudinal and competing risks outcomes, where the association may or may not change with time.

18.
The accelerated failure time (AFT) model and the Cox proportional hazards (PH) model are broadly used for survival endpoints of primary interest. However, the estimation efficiency of these models can be further enhanced by incorporating information from secondary outcomes that are increasingly available and highly correlated with the primary outcomes. Such secondary outcomes could be longitudinal laboratory measures collected from doctor visits or cross-sectional disease-relevant variables, which are believed to contain extra information related to the primary survival endpoints. In this paper, we develop a two-stage estimation framework that combines a survival model with a secondary model for the secondary outcomes, termed empirical-likelihood-based weighting (ELW), which comprises two weighting schemes adapted to the AFT model (ELW-AFT) and the Cox PH model (ELW-Cox), respectively. This framework is flexibly adaptive to secondary outcomes with complex data features, and it leads to more efficient parameter estimation in the survival model even if the secondary model is misspecified. Extensive simulation studies showcase the efficiency gains of ELW over conventional approaches, and an application to the Atherosclerosis Risk in Communities study also demonstrates the superiority of ELW by successfully detecting risk factors at the time of hospitalization for acute myocardial infarction.

19.
Marginal regression via generalized estimating equations is widely used in biostatistics to model longitudinal data from subjects whose outcomes and covariates are observed at several time points. In this paper we consider two issues that have been raised in the literature concerning the marginal regression approach. The first is that even though the past history may be predictive of outcome, the marginal approach does not use this history. Although marginal regression has the flexibility of allowing between-subject variations in the observation times, it may lose substantial prediction power in comparison with the transitional modeling approach that relates the responses to the covariate and outcome histories. We address this issue by using the concept of “information sets” for prediction to generalize the “partly conditional mean” approach of Pepe and Couper (J. Am. Stat. Assoc. 92:991–998, 1997). This modeling approach strikes a balance between the flexibility of the marginal approach and the predictive power of transitional modeling. Another issue is the problem of excess zeros in the outcomes over what the underlying model for marginal regression implies. We show how our predictive modeling approach based on information sets can be readily modified to handle the excess zeros in the longitudinal time series. By synthesizing the marginal, transitional, and mixed effects modeling approaches in a predictive framework, we also discuss how their respective advantages can be retained while their limitations can be circumvented for modeling longitudinal data.

20.
Motivated by an investigation of the relationship between progesterone and the day in the menstrual cycle in a longitudinal study, we propose a multikink quantile regression model for longitudinal data analysis. It relaxes the linearity condition and assumes different regression forms in different regions of the domain of the threshold covariate. We first propose a multikink quantile regression for longitudinal data. Two estimation procedures are proposed to estimate the regression coefficients and the locations of the kink points: one is a computationally efficient profile estimator under a working-independence framework, while the other accounts for within-subject correlations using an unbiased generalized estimating equation approach. The selection consistency of the number of kink points and the asymptotic normality of the two proposed estimators are established. Second, we construct a rank score test based on partial subgradients for the existence of a kink effect in longitudinal studies. Both the null distribution and the local alternative distribution of the test statistic are derived. Simulation studies show that the proposed methods have excellent finite sample performance. In the application to the longitudinal progesterone data, we identify two kink points in the progesterone curves across different quantiles and observe that the progesterone level remains stable before the day of ovulation, increases quickly over the 5 to 6 days after ovulation, and then stabilizes or drops slightly.
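A generic way to write the multikink quantile regression model described above (the notation here is illustrative rather than the authors') is

\[
Q_{\tau}(Y_{ij} \mid X_{ij}, Z_{ij}) = \alpha_{\tau} + \beta_{0,\tau} X_{ij}
  + \sum_{k=1}^{K} \beta_{k,\tau} (X_{ij} - \delta_{k,\tau})_{+} + \gamma_{\tau}^{\top} Z_{ij},
\]

where X_{ij} is the threshold covariate (here, the day in the menstrual cycle), (u)_{+} = max(u, 0), the unknown kink locations \delta_{1,\tau} < ... < \delta_{K,\tau} mark the points at which the slope of the \tau-th conditional quantile changes, and Z_{ij} collects the additional covariates.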
