Similar Articles
20 similar articles found.
1.
In this paper, we investigate K-group comparisons on survival endpoints for observational studies. In clinical databases for observational studies, treatments for patients are chosen with probabilities that vary depending on their baseline characteristics. This often results in noncomparable treatment groups because of imbalance in patients' baseline characteristics among treatment groups. To overcome this issue, we conduct a propensity analysis and either match subjects with similar propensity scores across treatment groups or compare weighted group means (or weighted survival curves for censored outcome variables) using inverse probability weighting (IPW). To this end, multinomial logistic regression has been a popular propensity analysis method for estimating the weights. We propose the decision tree method as an alternative propensity analysis because of its simplicity and robustness. We also propose IPW rank statistics, called the Dunnett-type test and the ANOVA-type test, to compare three or more treatment groups on survival endpoints. Using simulations, we evaluate the finite-sample performance of the weighted rank statistics combined with these propensity analysis methods. We demonstrate these methods with a real data example. The IPW method also allows unbiased estimation of the population parameters of each treatment group. In this paper, we limit our discussion to survival outcomes, but all the methods can easily be modified for other types of outcomes, such as binary or continuous variables.
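The propensity-plus-IPW recipe described above can be sketched in a few lines. The following is an illustrative reconstruction on simulated data, not the authors' code; the variable names (X, treatment, y) and the shallow-tree settings are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 600
X = rng.normal(size=(n, 3))                     # baseline covariates
treatment = rng.integers(0, 3, size=n)          # K = 3 treatment groups
y = X[:, 0] + rng.normal(size=n)                # outcome (continuous for brevity)

# Propensity model P(treatment = k | X) fitted with a shallow decision tree.
tree = DecisionTreeClassifier(max_depth=3).fit(X, treatment)
prob = tree.predict_proba(X)                    # n x 3 group probabilities
p_received = prob[np.arange(n), treatment]      # P(own treatment | X)

w = 1.0 / p_received                            # IPW weights
for k in range(3):
    in_k = treatment == k
    print(f"group {k}: IPW-weighted mean = {np.average(y[in_k], weights=w[in_k]):.3f}")
```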

2.
A concrete example of the collaborative double-robust targeted likelihood estimator (C-TMLE) introduced in a companion article in this issue is presented, and applied to the estimation of causal effects and variable importance parameters in genomic data. The focus is on non-parametric estimation in a point treatment data structure. Simulations illustrate the performance of C-TMLE relative to current competitors such as the augmented inverse probability of treatment weighted estimator that relies on an external non-collaborative estimator of the treatment mechanism, and inefficient estimation procedures including propensity score matching and standard inverse probability of treatment weighting. C-TMLE is also applied to the estimation of the covariate-adjusted marginal effect of individual HIV mutations on resistance to the anti-retroviral drug lopinavir. The influence curve of the C-TMLE is used to establish asymptotically valid statistical inference. The list of mutations found to have a statistically significant association with resistance is in excellent agreement with mutation scores provided by the Stanford HIVdb mutation scores database.
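For reference, the augmented IPW comparator mentioned above combines an outcome regression with a model of the treatment mechanism and remains consistent if either model is correctly specified. A minimal sketch on simulated point-treatment data (all names and values invented):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 2))
A = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))      # treatment mechanism
Y = 2.0 * A + X[:, 1] + rng.normal(size=n)           # true effect = 2

e = LogisticRegression(C=1e6).fit(X, A).predict_proba(X)[:, 1]   # propensity
m1 = LinearRegression().fit(X[A == 1], Y[A == 1]).predict(X)     # outcome model, treated
m0 = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X)     # outcome model, control

# AIPW: consistent if either the outcome models or the propensity model is correct.
ate = np.mean(A * (Y - m1) / e + m1) - np.mean((1 - A) * (Y - m0) / (1 - e) + m0)
print(f"AIPW estimate of the average treatment effect: {ate:.3f}")   # approx 2
```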

3.
Lu W, Li L. Biometrics 2011; 67(2): 513-523
The methodology of sufficient dimension reduction (SDR) offers an effective means to facilitate regression analysis of high-dimensional data. When the response is censored, however, most existing SDR estimators cannot be applied, or they require restrictive conditions. In this article, we propose a new class of inverse censoring probability weighted SDR estimators for censored regressions. Moreover, regularization is introduced to achieve simultaneous variable selection and dimension reduction. Asymptotic properties and empirical performance of the proposed methods are examined.
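As a rough illustration of how observation weights can enter a classical SDR estimator, the sketch below adds weights to sliced inverse regression (weighted slice means and covariance). This is a generic weighted-SIR illustration on invented data, not the paper's estimator:

```python
import numpy as np

def weighted_sir(X, y, w, n_slices=5, n_dir=1):
    """Sliced inverse regression with observation weights (e.g., inverse-censoring-
    probability weights): slice the response, form weighted slice means of the
    whitened predictors, and eigen-decompose their outer-product matrix."""
    n, p = X.shape
    w = w / w.sum()
    mu = w @ X
    Sigma = (X - mu).T @ ((X - mu) * w[:, None])        # weighted covariance
    R = np.linalg.cholesky(np.linalg.inv(Sigma))        # whitening matrix
    Z = (X - mu) @ R                                    # whitened predictors
    edges = np.quantile(y, np.linspace(0, 1, n_slices + 1))
    slice_id = np.clip(np.searchsorted(edges, y, side="right") - 1, 0, n_slices - 1)
    M = np.zeros((p, p))
    for s in range(n_slices):
        sel = slice_id == s
        ws = w[sel].sum()
        if ws > 0:
            m = (w[sel] @ Z[sel]) / ws                  # weighted slice mean
            M += ws * np.outer(m, m)
    vals, vecs = np.linalg.eigh(M)                      # ascending eigenvalues
    return R @ vecs[:, -n_dir:]                         # top directions, original scale

# Toy usage with uniform weights; the direction should align with the first axis.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))
y = X[:, 0] + 0.5 * rng.normal(size=300)
print(weighted_sir(X, y, np.ones(300)).ravel())
```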

4.
Depressive state has been reported to be significantly associated with higher-level functional capacity among community-dwelling elderly people. However, few studies have investigated this association among people with long-term care requirements. We aimed to investigate the association between depressive state and higher-level functional capacity and to obtain marginal odds ratios using propensity score analyses in people with long-term care requirements. We conducted a cross-sectional study of community-dwelling participants aged ≥65 years (n = 545) who used outpatient care services for long-term preventive care. We measured higher-level functional capacity, depressive state, and possible confounders. We then estimated the marginal odds ratios (i.e., the change in odds of impaired higher-level functional capacity if all versus no participants were exposed to depressive state) using logistic models fitted as generalized linear models with inverse probability of treatment weighting (IPTW) based on the propensity score, with design-based standard errors. Depressive state was the exposure variable and higher-level functional capacity the outcome variable. All absolute standardized differences after IPTW were <10%, indicating negligible differences in the means or prevalences of the covariates between the non-depressive and depressive groups. The marginal odds ratios estimated from the IPTW logistic models were 2.17 (95% CI: 1.13–4.19) for men and 2.57 (95% CI: 1.26–5.26) for women. Prevention of depressive state may therefore benefit not only mood but also higher-level functional capacity.
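A compact sketch of the IPTW marginal odds ratio computation on simulated data; the propensity model, weights, and variable names are illustrative, and the design-based standard errors used in the study are omitted:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 545
X = rng.normal(size=(n, 3))                                   # confounders
depressive = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))      # exposure
impaired = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * depressive + X[:, 1]))))

# Propensity of exposure given confounders, then IPTW weights.
ps = LogisticRegression(C=1e6).fit(X, depressive).predict_proba(X)[:, 1]
w = np.where(depressive == 1, 1 / ps, 1 / (1 - ps))

# Weighted logistic model of outcome on exposure alone gives the marginal OR.
fit = LogisticRegression(C=1e6).fit(depressive.reshape(-1, 1), impaired,
                                    sample_weight=w)
print(f"marginal OR ~ {np.exp(fit.coef_[0, 0]):.2f}")
```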

5.

Background

Quasi-experimental studies of menu labeling have found mixed results for improving diet. Differences between experimental groups can hinder interpretation. Propensity scores are an increasingly common method to improve covariate balance, but multiple methods exist and the improvements associated with each method have rarely been compared. In this re-analysis of the impact of menu labeling, we compare multiple propensity score methods to determine which methods optimize balance between experimental groups.

Methods

Study participants included adult customers who visited full-service restaurants with menu labeling (treatment) and without it (control). We compared the balance between treatment groups obtained by four propensity score methods: 1) 1:1 nearest neighbor matching (NN), 2) augmented 1:1 NN (using a caliper of 0.2 and an exact match on an imbalanced covariate), 3) full matching, and 4) inverse probability weighting (IPW). We then evaluated the treatment effect on differences in nutrients purchased across the different methods.
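The balance metric reported below, the average standardized absolute mean difference (ASAM), can be computed before and after weighting roughly as follows (simulated data, not the study's code):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def std_diff(x, g, w):
    """Absolute standardized mean difference of covariate x between groups g (0/1),
    with weights w; pooled unweighted SDs in the denominator."""
    m1 = np.average(x[g == 1], weights=w[g == 1])
    m0 = np.average(x[g == 0], weights=w[g == 0])
    sd = np.sqrt((x[g == 1].var(ddof=1) + x[g == 0].var(ddof=1)) / 2)
    return abs(m1 - m0) / sd

rng = np.random.default_rng(4)
n, p = 400, 5
X = rng.normal(size=(n, p))
g = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))      # confounded group assignment

ps = LogisticRegression().fit(X, g).predict_proba(X)[:, 1]
w = np.where(g == 1, 1 / ps, 1 / (1 - ps))           # IPW weights
ones = np.ones(n)

# ASAM = mean standardized difference over all covariates.
asam_before = np.mean([std_diff(X[:, j], g, ones) for j in range(p)])
asam_after = np.mean([std_diff(X[:, j], g, w) for j in range(p)])
print(f"ASAM unweighted: {asam_before:.3f}, after IPW: {asam_after:.3f}")
```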

Results

1:1 NN resulted in worse balance than the original unmatched sample (average standardized absolute mean distance [ASAM]: 0.185 compared to 0.171). Augmented 1:1 NN improved balance (ASAM: 0.038) but resulted in a large reduction in sample size. Full matching and IPW improved balance over the unmatched sample without a reduction in sample size (ASAM: 0.049 and 0.031, respectively). Menu labeling was associated with decreased calories, fat, sodium and carbohydrates in the unmatched analysis. Results were qualitatively similar in the propensity score matched/weighted models.

Conclusions

While propensity scores offer an increasingly popular tool to improve causal inference, choosing the correct method can be challenging. Our results emphasize the benefit of examining multiple methods to ensure results are consistent, and considering approaches beyond the most popular method of 1:1 NN matching.

6.
Sensitivity and specificity are common measures of the accuracy of a diagnostic test. The usual estimators of these quantities are unbiased if data on the diagnostic test result and the true disease status are obtained from all subjects in an appropriately selected sample. In some studies, verification of the true disease status is performed only for a subset of subjects, possibly depending on the result of the diagnostic test and other characteristics of the subjects. Estimators of sensitivity and specificity based on this subset of subjects are typically biased; this is known as verification bias. Methods have been proposed to correct verification bias under the assumption that the missing data on disease status are missing at random (MAR), that is, the probability of missingness depends on the true (missing) disease status only through the test result and observed covariate information. When some of the covariates are continuous, or the number of covariates is relatively large, the existing methods require parametric models for the probability of disease or the probability of verification (given the test result and covariates), and hence are subject to model misspecification. We propose a new method for correcting verification bias based on the propensity score, defined as the predicted probability of verification given the test result and observed covariates. This is estimated separately for those with positive and negative test results. The new method classifies the verified sample into several subsamples that have homogeneous propensity scores and allows correction for verification bias. Simulation studies demonstrate that the new estimators are more robust to model misspecification than existing methods, but still perform well when the models for the probability of disease and probability of verification are correctly specified.
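To make the mechanics concrete, the following simplified simulation generates verification bias and corrects it using the verification propensity. The paper stratifies on this propensity; for brevity the sketch weights by its inverse instead, and all data are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 5000
x = rng.normal(size=n)
d = rng.binomial(1, 0.3, size=n)                        # true disease status
t = rng.binomial(1, np.where(d == 1, 0.90, 0.15))       # test: sens 0.90, spec 0.85
v = rng.binomial(1, 1 / (1 + np.exp(-(2 * t + x))))     # verification depends on t, x

# Propensity of verification given test result and covariate.
Z = np.column_stack([t, x])
pi = LogisticRegression(C=1e6).fit(Z, v).predict_proba(Z)[:, 1]

# Weight each verified subject by 1/pi; unverified subjects get weight 0, so the
# (in practice unobserved) d values of unverified subjects never enter.
w = v / pi
sens = np.sum(w * (t == 1) * (d == 1)) / np.sum(w * (d == 1))
spec = np.sum(w * (t == 0) * (d == 0)) / np.sum(w * (d == 0))
print(f"corrected sensitivity ~ {sens:.2f}, specificity ~ {spec:.2f}")
```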

7.
In observational studies of survival time featuring a binary time-dependent treatment, the hazard ratio (an instantaneous measure) is often used to represent the treatment effect. However, investigators are often more interested in the difference in survival functions. We propose semiparametric methods to estimate the causal effect of treatment among the treated with respect to survival probability. The objective is to compare post-treatment survival with the survival function that would have been observed in the absence of treatment. For each patient, we compute a prognostic score (based on the pre-treatment death hazard) and a propensity score (based on the treatment hazard). Each treated patient is then matched with an alive, uncensored and not-yet-treated patient with similar prognostic and/or propensity scores. The experience of each treated and matched patient is weighted using a variant of Inverse Probability of Censoring Weighting to account for the impact of censoring. We propose estimators of the treatment-specific survival functions (and their difference), computed through weighted Nelson–Aalen estimators. Closed-form variance estimators are proposed which take into consideration the potential replication of subjects across matched sets. The proposed methods are evaluated through simulation, then applied to estimate the effect of kidney transplantation on survival among end-stage renal disease patients using data from a national organ failure registry.
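The weighted Nelson–Aalen cumulative hazard, the building block of the proposed survival estimators, can be sketched as follows; ties are ignored and the weights are placeholders rather than the paper's matching/IPCW weights:

```python
import numpy as np

def weighted_nelson_aalen(time, event, w):
    """Weighted Nelson-Aalen cumulative hazard: at each observed event time,
    increment by (weight of the event) / (total weight still at risk).
    Ties are ignored for brevity."""
    order = np.argsort(time)
    time, event, w = time[order], event[order], w[order]
    at_risk = w.sum() - np.concatenate(([0.0], np.cumsum(w)[:-1]))
    increments = np.where(event == 1, w / at_risk, 0.0)
    return time, np.cumsum(increments)

rng = np.random.default_rng(6)
n = 200
t_obs = rng.exponential(1.0, n)
evt = rng.binomial(1, 0.7, n)
wts = rng.uniform(0.5, 2.0, n)        # placeholder weights (IPCW-style in the paper)
times, H = weighted_nelson_aalen(t_obs, evt, wts)
S = np.exp(-H)                        # treatment-specific survival curve estimate
```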

8.
Internal dose metrics, as computed with pharmacokinetic models, are increasingly used as a means of extrapolating animal toxicological data to humans and of extrapolating across routes of administration. These internal dose metrics are thought to provide a more scientific means of comparing toxicological effects across animal species. The use of internal dose metrics rests on the universal assumption that toxic effects are equal across species if and only if the concentration of the toxic moieties in the target tissue is equal across species. Herein it is shown that this assumption is inconsistent with empirical toxicological data: measurement of internal dose metrics in chronological time, as is done for the AUC (area under the concentration curve) and for the rate of metabolite production per kg of target tissue, does not produce equal toxic effects across species. A consequence of this observation is that the application of pharmacokinetics in risk assessments for such important chemicals as trichloroethylene, vinyl chloride, perchloroethylene, and perchlorate may need reassessment.

9.
Standard analyses of data from case-control studies that are nested in a large cohort ignore information available for cohort members not sampled for the sub-study. This paper reviews several methods designed to increase estimation efficiency by using more of the data, treating the case-control sample as a two- or three-phase stratified sample. When applied to a study of coronary heart disease among women in the hormone trials of the Women’s Health Initiative, modest but increasing gains in precision of regression coefficients were observed, depending on the amount of cohort information used in the analysis. The gains were particularly evident for pseudo- or maximum likelihood estimates, whose validity depends on the assumed model being correct. Larger standard errors were obtained for coefficients estimated by inverse probability weighted methods, which are more robust to model misspecification. Such misspecification may have been responsible for an important difference in one key regression coefficient estimated using the weighted compared with the more efficient methods.

10.
Huang Y, Leroux B. Biometrics 2011; 67(3): 843-851
Williamson, Datta, and Satten's (2003, Biometrics 59, 36–42) cluster-weighted generalized estimating equations (CWGEEs) are effective in adjusting for bias due to informative cluster sizes for cluster-level covariates. We show that CWGEE may not perform well, however, for covariates that can take different values within a cluster if the numbers of observations at each covariate level are informative. On the other hand, inverse probability of treatment weighting accounts for informative treatment propensity but not for informative cluster size. Motivated by evaluating the effect of a binary exposure in the presence of both types of informativeness, we propose several weighted GEE estimators, with weights related to the size of a cluster as well as the distribution of the binary exposure within the cluster. The choice of weights depends on the population of interest and the nature of the exposure. Through simulation studies, we demonstrate the superior performance of the new estimators compared to existing estimators such as those from GEE, CWGEE, and inverse probability of treatment-weighted GEE. We demonstrate the use of our method with an example examining covariate effects on the risk of dental caries among small children.

11.
Clinicians are often interested in the effect of covariates on survival probabilities at prespecified study times. Because different factors can be associated with the risk of short- and long-term failure, a flexible modeling strategy is pursued. Given a set of multiple candidate working models, an objective methodology is proposed that aims to construct consistent and asymptotically normal estimators of regression coefficients and of average prediction error for each working model, free from the nuisance censoring variable. It requires the conditional distribution of censoring given covariates to be modeled. The model selection strategy uses step-up or step-down multiple hypothesis testing procedures that control either the proportion of false positives or the generalized familywise error rate when comparing models based on estimates of average prediction error. The problem can in fact be cast as a missing data problem, where augmented inverse probability weighted complete-case estimators of regression coefficients and prediction error can be used (Tsiatis, 2006, Semiparametric Theory and Missing Data). A simulation study and an analysis of a recent AIDS trial are provided.

12.
In observational cohort studies with complex sampling schemes, truncation arises when the time to event of interest is observed only when it falls below or exceeds another random time, that is, the truncation time. In more complex settings, observation may require a particular ordering of event times; we refer to this as sequential truncation. Estimators of the event time distribution have been developed for simple left-truncated or right-truncated data. However, these estimators may be inconsistent under sequential truncation. We propose nonparametric and semiparametric maximum likelihood estimators for the distribution of the event time of interest in the presence of sequential truncation, under two truncation models. We show the equivalence of an inverse probability weighted estimator and a product limit estimator under one of these models. We study the large sample properties of the proposed estimators and derive their asymptotic variance estimators. We evaluate the proposed methods through simulation studies and apply the methods to an Alzheimer's disease study. We have developed an R package, seqTrun, for implementation of our method.
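For intuition, the simple left-truncated case of the product-limit idea adjusts each risk set for delayed entry. The sketch below covers only this simple case; the paper's sequential-truncation estimators generalize the same risk-set logic:

```python
import numpy as np

def left_truncated_product_limit(entry, time):
    """Product-limit estimator under simple left truncation: subject j is in the
    risk set at event time t only if entry_j < t <= time_j. All times are taken
    to be events (no censoring) for brevity."""
    surv = []
    s = 1.0
    for t in np.sort(np.unique(time)):
        n_risk = np.sum((entry < t) & (time >= t))
        d = np.sum(time == t)
        if n_risk > 0:
            s *= 1.0 - d / n_risk
        surv.append((t, s))
    return surv

rng = np.random.default_rng(7)
x = rng.exponential(1.0, 500)
entry = rng.uniform(0.0, 1.0, 500)
keep = x > entry                      # subjects are observed only if x exceeds entry
print(left_truncated_product_limit(entry[keep], x[keep])[-1])
```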

13.
Adaptive finite element models have allowed researchers to test hypothetical relationships between the local mechanical environment and the healing of bone fractures. However, their predictive power has not yet been demonstrated by testing hypotheses ahead of experimental testing. In this study, an established mechano-biological scheme was used in an iterative finite element simulation of sheep tibial osteotomy healing under a hypothetical fixation regime, “inverse dynamisation”. Tissue distributions, interfragmentary movement and stiffness across the fracture site were compared between stiff and flexible fixation conditions and scenarios in which fixation stiffness was increased at a discrete time-point. The modelling work was conducted blind to the experimental study to be published subsequently. The simulations predicted the fastest and most direct healing under constant stiff fixation, and the slowest healing under flexible fixation. Although low fixation stiffness promoted more callus formation prior to bridging, this conferred little additional stiffness to the fracture in the first 5 weeks. Thus, while switching to stiffer fixation facilitated rapid subsequent bridging of the fracture, no advantage of inverse dynamisation could be demonstrated. In vivo data remains necessary to conclusively test this treatment protocol and this will, in turn, provide an evaluation of the model’s performance. The publication of both hypotheses and their computational simulation, prior to experimental testing, offers an appealing means to test the predictive power of mechano-biological models.

14.
This paper applies the inverse probability weighted least-squares method to predict total medical cost in the presence of censored data. Since survival time and medical costs may be subject to right censoring and therefore are not always observable, the ordinary least-squares approach cannot be used to assess the effects of explanatory variables. We demonstrate how inverse probability weighted least-squares estimation provides consistent asymptotic normal coefficients with easily computable standard errors. In addition, to assess the effect of censoring on coefficients, we develop a test comparing ordinary least-squares and inverse probability weighted least-squares estimators. We demonstrate the methods developed by applying them to the estimation of cancer costs using Medicare claims data.
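The estimator weights each complete (uncensored) case by the inverse of a Kaplan–Meier estimate of the censoring survival function. A self-contained sketch on simulated data, with ties and left-limits ignored and all names invented:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])    # design: intercept + covariate
cost = X @ np.array([10.0, 3.0]) + rng.normal(size=n)    # total cost (observed only if uncensored)
T = rng.exponential(1.0, n)                              # survival time
C = rng.uniform(0.5, 3.0, n)                             # censoring time
obs = np.minimum(T, C)
delta = (T <= C).astype(float)                           # 1 = complete (uncensored) case

# Kaplan-Meier estimate of K(t) = P(C > t), evaluated at each subject's time.
order = np.argsort(obs)
t_sorted = obs[order]
K = np.cumprod(1.0 - (1.0 - delta[order]) / (n - np.arange(n)))
K_at_obs = K[np.searchsorted(t_sorted, obs, side="right") - 1]

# Inverse probability weighted least squares on the complete cases
# (censored subjects receive weight 0, so their costs never enter).
w = delta / np.clip(K_at_obs, 1e-8, None)
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(X * sw[:, None], cost * sw, rcond=None)
print(beta)    # roughly [10, 3]
```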

15.
Researchers subject to time and budget constraints may conduct small nested case–control studies with individually matched controls to help optimize statistical power. In this paper, we show how precision can be improved considerably by combining data from a small nested case–control study with data from a larger nested case–control study of a different outcome in the same or an overlapping cohort. Our approach is based on the inverse probability weighting concept, in which the log-likelihood contribution of each individual observation is weighted by the inverse of its probability of inclusion in either study. We illustrate our approach using simulated data and an application where we combine data sets from two nested case–control studies to investigate risk factors for anorexia nervosa in a cohort of young women in Sweden.
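The inclusion-probability weighting can be sketched as follows. Assuming independent sampling into the two studies, a subject's probability of inclusion in either study is 1 − (1 − p1)(1 − p2), and each included subject's likelihood contribution is weighted by its inverse; the sampling fractions below are invented toy values:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n = 5000
x = rng.normal(size=(n, 1))
y = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))     # outcome of study 1

p1 = np.where(y == 1, 1.0, 0.10)   # small study: all cases, 10% of controls
p2 = np.full(n, 0.25)              # larger study of another outcome: toy fraction
p_incl = 1 - (1 - p1) * (1 - p2)   # probability of inclusion in either study
included = rng.binomial(1, p_incl).astype(bool)

# Inverse-inclusion-probability-weighted logistic regression.
fit = LogisticRegression(C=1e6).fit(x[included], y[included],
                                    sample_weight=1 / p_incl[included])
print(fit.coef_)                   # slope close to the true value of 1
```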

16.
17.
A study on the codominant scoring of AFLP markers in association panels without prior knowledge of genotype probabilities is described. Bands are scored codominantly by fitting normal mixture models to band intensities, illustrating and optimizing existing methodology that employs the EM algorithm. We study features that improve the performance of the algorithm, and the unmixing in general, such as parameter initialization, restrictions on parameters, data transformation, and outlier removal. Parameter restrictions include equal component variances, equal or nearly equal distances between component means, and mixing probabilities according to Hardy–Weinberg equilibrium. Histogram visualization of band intensities with superimposed normal densities, together with optional classification scores and other grouping information, further assists the codominant scoring. We find empirical evidence favoring the square root transformation of the band intensity, as was found in segregating populations. Our approach provides posterior genotype probabilities for marker loci. These probabilities can form the basis for association mapping and are more useful than the standard scoring categories A, H, B, C, and D. They can also be used to calculate predictors for additive and dominance effects. Diagnostics for the data quality of AFLP markers are described: preference for a three-component mixture model, good separation between component means, and lack of singletons for the component with the highest mean. Software has been developed in R, containing the models for normal mixtures with the facilitating features and visualizations. The methods are applied to an association panel in tomato comprising 1,175 polymorphic markers on 94 tomato hybrids, as part of a larger study within the Dutch Centre for BioSystems Genomics.
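The core unmixing step, fitting a three-component normal mixture to square-root-transformed band intensities and reading off posterior genotype probabilities, can be sketched with a generic mixture implementation. The study's R software, initialization strategies, and most parameter restrictions are not reproduced; covariance_type="tied" imposes the equal-variance restriction mentioned above:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(10)
# Simulated band intensities for one marker; the three components play the
# roles of the aa, Aa, and AA genotype classes (invented values).
intensity = np.concatenate([
    rng.normal(1.0, 0.3, 40),
    rng.normal(4.0, 0.6, 40),
    rng.normal(9.0, 0.9, 40),
])
z = np.sqrt(np.clip(intensity, 0, None)).reshape(-1, 1)  # square-root transform

# "tied" makes all components share one variance (equal-variance restriction).
gm = GaussianMixture(n_components=3, covariance_type="tied", random_state=0).fit(z)
posterior = gm.predict_proba(z)       # posterior genotype probabilities per sample
print(gm.means_.ravel(), posterior[:3].round(2))
```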

18.
The most commonly used method in evolutionary biology for combining information across multiple tests of the same null hypothesis is Fisher's combined probability test. This note shows that an alternative method called the weighted Z-test has more power and more precision than does Fisher's test. Furthermore, in contrast to some statements in the literature, the weighted Z-method is superior to the unweighted Z-transform approach. The results in this note show that, when combining P-values from multiple tests of the same hypothesis, the weighted Z-method should be preferred.
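Both combination rules are short enough to state in code. A minimal sketch with one-sided P-values and sample-size-based weights (invented numbers):

```python
import numpy as np
from scipy import stats

def fisher_combined(p):
    """Fisher: -2 * sum(log p_i) is chi-squared with 2k df under the null."""
    return stats.chi2.sf(-2 * np.sum(np.log(p)), 2 * len(p))

def weighted_z(p, w):
    """Weighted Z-method (Stouffer): Z = sum(w_i z_i) / sqrt(sum(w_i^2)),
    where z_i = Phi^{-1}(1 - p_i); weights are typically sqrt(sample size)."""
    z = stats.norm.isf(p)
    return stats.norm.sf(np.sum(w * z) / np.sqrt(np.sum(w ** 2)))

p_values = np.array([0.08, 0.12, 0.04])   # one-sided P-values, same hypothesis
sizes = np.array([100.0, 50.0, 200.0])    # per-study sample sizes (invented)
print(fisher_combined(p_values), weighted_z(p_values, np.sqrt(sizes)))
```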

19.
Recent theoretical and empirical work on foraging behaviour suggests that animals may respond to both the means and the variances of the benefits associated with available resources. We attempt to extend this analysis by asking whether reward skew (the third moment about the mean) might influence preference when two options have equal means and equal variances. We examine how minimizing the probability of starvation might induce a response to skew. In the Appendix we develop an expected ‘fitness’ model which follows from economic theory and indicates more general conditions concerning responses to skew. We also report experiments involving foraging white-crowned sparrows (Zonotrichia leucophrys). Under conditions where positive skew should be favoured, the birds' behaviour supports the prediction. However, their response to skew is not as strong as the responses to variance noted in the same individuals.
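A small numerical illustration of the premise: two reward distributions with matched mean and variance but opposite skew, whose shortfall probabilities nevertheless differ (all numbers invented):

```python
import numpy as np

def moments(x, p):
    """Mean, variance, and skewness of a discrete reward distribution."""
    m = np.sum(p * x)
    v = np.sum(p * (x - m) ** 2)
    skew = np.sum(p * (x - m) ** 3) / v ** 1.5
    return m, v, skew

# Two reward lotteries with equal mean and variance but opposite skew.
x_pos, p_pos = np.array([0.0, 3.0]), np.array([0.75, 0.25])    # positively skewed
x_neg, p_neg = np.array([-1.5, 1.5]), np.array([0.25, 0.75])   # mirror image
print(moments(x_pos, p_pos))   # (0.75, 1.6875, +1.15)
print(moments(x_neg, p_neg))   # (0.75, 1.6875, -1.15)

# With a requirement R, the shortfall probabilities differ even though the first
# two moments match, so a starvation-minimizing forager can prefer one option.
R = 1.0
print(np.sum(p_pos[x_pos < R]), np.sum(p_neg[x_neg < R]))      # 0.75 vs 0.25
```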

20.