Similar Literature
20 similar documents found.
1.
Complementary features of randomized controlled trials (RCTs) and observational studies (OSs) can be used jointly to estimate the average treatment effect of a target population. We propose a calibration weighting estimator that enforces covariate balance between the RCT and the OS, thereby improving the trial-based estimator's generalizability. Exploiting semiparametric efficiency theory, we propose a doubly robust augmented calibration weighting estimator that achieves the efficiency bound derived under the identification assumptions. A nonparametric sieve method is provided as an alternative to the parametric approach, which enables robust approximation of the nuisance functions and data-adaptive selection of outcome predictors for calibration. We establish asymptotic results and confirm the finite sample performance of the proposed estimators through simulation experiments and an application estimating the treatment effect of adjuvant chemotherapy for early-stage non-small-cell lung cancer patients after surgery.
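
A concrete illustration of the calibration idea (not the authors' estimator): the Python sketch below computes entropy-tilting weights that force the weighted RCT covariate means to match the covariate means of an observational sample representing the target population. All data and names are hypothetical.

```python
# Minimal calibration-weighting sketch: weights w_i = exp(lambda' x_i) for RCT
# units are chosen so the weighted RCT covariate means match the target
# (OS) means, in the spirit of entropy balancing. Illustrative only.
import numpy as np
from scipy.optimize import root

rng = np.random.default_rng(0)

n_rct, n_os = 300, 2000
x_os = rng.normal(0.0, 1.0, size=(n_os, 2))    # target-population covariates
x_rct = rng.normal(0.5, 1.0, size=(n_rct, 2))  # RCT over-represents high x

target_means = x_os.mean(axis=0)

def moment_gap(lam):
    """Weighted RCT covariate means minus OS means; zero at the solution."""
    w = np.exp(x_rct @ lam)
    w = w / w.sum()
    return x_rct.T @ w - target_means

lam_hat = root(moment_gap, x0=np.zeros(2)).x
w = np.exp(x_rct @ lam_hat)
w = w / w.sum()

# Check balance: weighted RCT means should now match the OS means.
print("OS means:          ", np.round(target_means, 3))
print("weighted RCT means:", np.round(x_rct.T @ w, 3))
# A generalized ATE estimator would then weight the usual trial contrast,
# e.g. sum_i w_i * (A_i/e - (1-A_i)/(1-e)) * Y_i for allocation probability e.
```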

2.
Few studies have examined therapist effects and therapeutic alliance (TA) in treatments for chronic fatigue syndrome (CFS). Therapist effects are the differences in outcomes achieved by different therapists. TA is the quality of the bond and the level of agreement regarding the goals and tasks of therapy. Prior research suffers from the methodological problem that the allocation of therapist was not randomized, meaning therapist effects may be confounded with selection effects. We used data from a randomized controlled treatment trial of 296 people with CFS. The trial compared pragmatic rehabilitation (PR), a nurse-led, home-based self-help treatment, and supportive listening (SL), a counselling-based treatment, with general practitioner treatment as usual. Therapist allocation was randomized. The primary outcome measures, fatigue and physical functioning, were assessed blind to treatment allocation. TA was measured in the PR and SL arms. Regression models allowing for interactions were used to examine relationships between (i) therapist and therapeutic alliance, and (ii) therapist and average treatment effect (the difference in mean outcomes between treatment conditions). We found no therapist effects, and no relationship between TA and the average treatment effect of a therapist. One therapist formed stronger alliances when delivering PR than when delivering SL (effect size 0.76, SE 0.33, 95% CI 0.11 to 1.41). In these therapies for CFS, TA does not influence symptomatic outcome. The lack of significant therapist effects on outcome may result from the trial's rigorous quality control, or from random therapist allocation eliminating selection effects. Further research is needed. Trial Registration: ISRCTN74156610

3.
Cluster randomization trials with relatively few clusters have been widely used in recent years for the evaluation of health-care strategies. On average, randomized treatment assignment achieves balance in both known and unknown confounding factors between treatment groups; however, in practice investigators can only introduce a small amount of stratification and cannot balance all the important variables simultaneously. This limitation arises especially when there are many confounding variables in small studies. Such is the case in the INSTINCT trial, designed to investigate the effectiveness of an education program in enhancing tPA use in stroke patients. In this article, we introduce a new randomization design, the balance match weighted (BMW) design, which applies the optimal matching with constraints technique to a prospective randomized design and aims to minimize the mean squared error (MSE) of the treatment effect estimator. A simulation study shows that, under various confounding scenarios, the BMW design can yield substantial reductions in the MSE of the treatment effect estimator compared to a completely randomized or matched-pair design. The BMW design is also compared with a model-based approach adjusting for the estimated propensity score and with the Robins-Mark-Newey E-estimation procedure in terms of efficiency and robustness of the treatment effect estimator. These investigations suggest that the BMW design is more robust and usually, although not always, more efficient than either approach. The design is also robust against heterogeneous error. We illustrate these methods by proposing a design for the INSTINCT trial.

4.
Rosenbaum PR. Biometrics, 2011, 67(3): 1017-1027
In an observational or nonrandomized study of treatment effects, a sensitivity analysis indicates the magnitude of bias from unmeasured covariates that would need to be present to alter the conclusions of a naïve analysis that presumes adjustments for observed covariates suffice to remove all bias. The power of a sensitivity analysis is the probability that it will reject a false hypothesis about treatment effects allowing for a departure from random assignment of a specified magnitude; in particular, if this specified magnitude is "no departure," then this is the same as the power of a randomization test in a randomized experiment. A new family of u-statistics is proposed that includes Wilcoxon's signed rank statistic but also includes other statistics with substantially higher power when a sensitivity analysis is performed in an observational study. Wilcoxon's statistic has high power to detect small effects in large randomized experiments—that is, it often has good Pitman efficiency—but small effects are invariably sensitive to small unobserved biases. Members of this family of u-statistics that emphasize medium to large effects can have substantially higher power in a sensitivity analysis. For example, in one situation with 250 pair differences that are Normal with expectation 1/2 and variance 1, the power of a sensitivity analysis that uses Wilcoxon's statistic is 0.08, while the power of another member of the family of u-statistics is 0.66. The topic is examined by performing a sensitivity analysis in three observational studies, using an asymptotic measure called the design sensitivity, and by simulating power in finite samples. The three examples are drawn from epidemiology, clinical medicine, and genetic toxicology.
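
The power calculation described here can be sketched directly. For Wilcoxon's signed rank statistic, the worst-case null mean and variance at sensitivity parameter Gamma have simple closed forms, so the power of the sensitivity analysis in the abstract's setting (250 pair differences, Normal with mean 1/2 and variance 1) can be simulated as follows; this is an illustrative reconstruction, not Rosenbaum's code.

```python
# Power of a Rosenbaum-style sensitivity analysis with Wilcoxon's signed rank
# statistic. Gamma = 1 recovers the power of the ordinary randomization test.
import numpy as np
from scipy.stats import norm, rankdata

def sens_power(n=250, gamma=1.0, alpha=0.05, effect=0.5, n_sim=5000, seed=1):
    rng = np.random.default_rng(seed)
    kappa = gamma / (1.0 + gamma)
    ranks = np.arange(1, n + 1, dtype=float)
    # Worst-case null mean and SD of the signed rank statistic at Gamma.
    mu_max = kappa * ranks.sum()
    sd_max = np.sqrt(kappa * (1.0 - kappa) * (ranks ** 2).sum())
    crit = mu_max + norm.ppf(1.0 - alpha) * sd_max
    rejections = 0
    for _ in range(n_sim):
        d = rng.normal(effect, 1.0, size=n)
        t = rankdata(np.abs(d))[d > 0].sum()   # Wilcoxon signed rank statistic
        rejections += t >= crit
    return rejections / n_sim

for g in (1.0, 2.0, 3.0):
    print(f"Gamma = {g}: power ~ {sens_power(gamma=g):.2f}")
```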

5.
We derive the optimal allocation between two treatments in a clinical trial based on the following optimality criterion: for fixed variance of the test statistic, what allocation minimizes the expected number of treatment failures? A sequential design is described that leads asymptotically to the optimal allocation and is compared with the randomized play-the-winner rule, sequential Neyman allocation, and equal allocation at similar power levels. We find that the sequential procedure generally results in fewer treatment failures than the other procedures, particularly when the success probabilities of treatments are smaller.
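
For binary outcomes, the allocation minimizing expected failures subject to a fixed variance of the difference in success proportions is known from the response-adaptive randomization literature to be n_A/n_B = sqrt(p_A/p_B). The following Python sketch implements a simple plug-in sequential version of such a rule; it is illustrative and not necessarily the paper's exact design.

```python
# Sequential allocation targeting n_A/n_B = sqrt(p_A/p_B): each patient is
# randomized to A with the currently estimated target probability.
import numpy as np

def sequential_optimal_allocation(p_a, p_b, n=200, seed=2):
    rng = np.random.default_rng(seed)
    succ = {"A": 1, "B": 1}      # add-one smoothing so early estimates exist
    tot = {"A": 2, "B": 2}
    n_a = failures = 0
    for _ in range(n):
        pa_hat, pb_hat = succ["A"] / tot["A"], succ["B"] / tot["B"]
        target_a = np.sqrt(pa_hat) / (np.sqrt(pa_hat) + np.sqrt(pb_hat))
        arm = "A" if rng.random() < target_a else "B"
        outcome = rng.random() < (p_a if arm == "A" else p_b)
        succ[arm] += outcome
        tot[arm] += 1
        n_a += arm == "A"
        failures += not outcome
    return n_a / n, failures

share_a, fails = sequential_optimal_allocation(p_a=0.3, p_b=0.1)
target = np.sqrt(0.3) / (np.sqrt(0.3) + np.sqrt(0.1))
print(f"proportion on A: {share_a:.2f}  (target {target:.2f})")
print(f"treatment failures: {fails} of 200")
```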

6.
Optimal allocation theory predicts that growth is allocated between the shoot and the roots so that uptake of the most limiting resource is increased. Allocation is dynamic due to resource depletion, interaction with competitors, and the allometry of growth. We assessed the effects of intra- and inter-specific competition on growth and resource allocation of the meadow species Ranunculus acris and Agrostis capillaris, grown in environments with high (+) or low (−) availability of light (L) and nutrients (N). We took samples twice a week over the 7-week experiment to follow changes in root-to-shoot ratios in plants of different sizes, and carried out a larger scale harvest at the end of the experiment. Of all the tested factors, availability of nutrients had the largest effect on growth rate and shoot-to-root allocation in both species, although both competition and light had significant effects as well. The highest root-to-shoot ratios were measured in the L+N− treatment, and the lowest in the L−N+ treatment, as predicted by optimal allocation theory. Competition changed resource allocation, but not always toward acquiring the resource most limiting to growth. We thus conclude that the greatest variation in shoot-to-root allocation was due to resource availability; the effects of competition were small, probably due to the low density of plants in the experiment.

7.
The most common objective of response-adaptive clinical trials is to ensure that patients within a trial have a high chance of receiving the best available treatment, by altering the allocation probabilities on the basis of accumulating data. Approaches that yield good patient-benefit properties suffer from low power, from a frequentist perspective, when testing for a treatment difference at the end of the study, owing to the high imbalance in treatment allocations. In this work we develop an alternative pairwise test for treatment difference based on the allocation probabilities of the covariate-adjusted response-adaptive randomization with forward-looking Gittins Index (CARA-FLGI) Rule for binary responses. The performance of the novel test is evaluated in simulations for two-armed studies, and its application to multiarmed studies is then illustrated. The proposed test has markedly improved power over the traditional Fisher exact test when this class of nonmyopic response adaptation is used. We also find that the test's power is close to that of a Fisher exact test under equal randomization.

8.
Standard sample size calculation formulas for stepped wedge cluster randomized trials (SW-CRTs) assume that cluster sizes are equal. When cluster sizes vary substantially, ignoring this variation may lead to an under-powered study. We investigate the efficiency of a SW-CRT with varying cluster sizes relative to one with equal cluster sizes, and derive variance estimators for the intervention effect that account for this variation under a mixed effects model—a commonly used approach for analyzing data from cluster randomized trials. When cluster sizes vary, the power of a SW-CRT depends on the order in which clusters receive the intervention, which is determined through randomization. We first derive a variance formula that corresponds to any particular realization of the randomized sequence and propose efficient algorithms to identify upper and lower bounds of the power. We then obtain an "expected" power based on a first-order approximation to the variance formula, where the expectation is taken with respect to all possible randomization sequences. Finally, we provide a variance formula for more general settings where only the arithmetic mean and coefficient of variation of the cluster sizes, instead of the exact cluster sizes, are known at the design stage. We evaluate our methods through simulations and illustrate that the average power of a SW-CRT decreases as the variation in cluster sizes increases, and that the impact is largest when the number of clusters is small.
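
Under the standard Hussey-Hughes mixed model (cluster random intercept plus residual error), the variance of the GLS treatment effect estimator for any given cluster ordering has a closed form, which makes the order-dependence of power easy to see. The following Python sketch, with illustrative parameter values rather than the paper's algorithms, evaluates power over random orderings of clusters with highly variable sizes:

```python
# GLS-based power for a stepped wedge CRT with unequal cluster sizes, computed
# from cluster-period means under a random-intercept model.
import numpy as np
from scipy.stats import norm

def theta_variance(order, sizes, n_seq, sigma2=1.0, tau2=0.1):
    T = n_seq + 1                              # periods: all-control, then step-ups
    p = T + 1                                  # period effects + treatment effect
    info = np.zeros((p, p))
    for seq, cluster in enumerate(order):
        m = sizes[cluster]
        X = np.zeros((T, p))
        X[:, :T] = np.eye(T)                   # period fixed effects
        X[seq + 1:, T] = 1.0                   # treated from period seq+1 onward
        V = (sigma2 / m) * np.eye(T) + tau2 * np.ones((T, T))
        info += X.T @ np.linalg.solve(V, X)
    return np.linalg.inv(info)[T, T]

def power(var_theta, delta=0.3, alpha=0.05):
    return norm.cdf(abs(delta) / np.sqrt(var_theta) - norm.ppf(1 - alpha / 2))

rng = np.random.default_rng(3)
sizes = np.array([10, 10, 20, 40, 80, 160])    # highly variable cluster sizes
n_seq = len(sizes)                             # one cluster per sequence

powers = np.array([power(theta_variance(rng.permutation(n_seq), sizes, n_seq))
                   for _ in range(2000)])
print(f"power across randomized orderings: min {powers.min():.2f}, "
      f"mean {powers.mean():.2f}, max {powers.max():.2f}")
```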

9.
We propose a novel response-adaptive randomization procedure for multi-armed trials with continuous outcomes that are assumed to be normally distributed. Our proposed rule is non-myopic and oriented toward a patient-benefit objective, yet maintains computational feasibility. We derive our response-adaptive algorithm based on the Gittins index for the multi-armed bandit problem, as a modification of the method first introduced in Villar et al. (Biometrics, 71, pp. 969-978). The resulting procedure can be implemented under the assumption of either known or unknown variance. We illustrate the proposed procedure by simulations in the context of phase II cancer trials. Our results show that, in a multi-armed setting, there are efficiency and patient-benefit gains from using a response-adaptive allocation procedure with a continuous endpoint instead of a binary one. These gains persist even if an anticipated low rate of missing data due to deaths, dropouts, or complete responses is imputed online through a procedure first introduced in this paper. Additionally, we discuss response-adaptive designs that outperform the traditional equal-randomization design in terms of both efficiency and patient-benefit measures in the multi-armed trial context.

10.
A stepped-wedge cluster randomized trial (CRT) is a unidirectional crossover study in which timings of treatment initiation for clusters are randomized. Because the timing of treatment initiation is different for each cluster, an emerging question is whether the treatment effect depends on the exposure time, namely, the time duration since the initiation of treatment. Existing approaches for assessing exposure-time treatment effect heterogeneity either assume a parametric functional form of exposure time or model the exposure time as a categorical variable, in which case the number of parameters increases with the number of exposure-time periods, leading to a potential loss in efficiency. In this article, we propose a new model formulation for assessing treatment effect heterogeneity over exposure time. Rather than a categorical term for each level of exposure time, the proposed model includes a random effect to represent varying treatment effects by exposure time. This allows for pooling information across exposure-time periods and may result in more precise average and exposure-time-specific treatment effect estimates. In addition, we develop an accompanying permutation test for the variance component of the heterogeneous treatment effect parameters. We conduct simulation studies to compare the proposed model and permutation test to alternative methods to elucidate their finite-sample operating characteristics, and to generate practical guidance on model choices for assessing exposure-time treatment effect heterogeneity in stepped-wedge CRTs.

11.
The use of fossil fuel is predicted to cause an increase of the atmospheric CO2 concentration, which will affect the global pattern of temperature and precipitation. It is therefore essential to incorporate effects of temperature and water supply on carbon partitioning of plants to predict effects of elevated [CO2] on growth and yield of Triticum aestivum. Although earlier papers have emphasized that elevated [CO2] favours investment of biomass in roots relative to that in leaves, it has now become clear that these are indirect effects, due to the more rapid depletion of nutrients in the root environment as a consequence of enhanced growth. Broadly generalized, the effect of temperature on biomass allocation in the vegetative stage is that the relative investment of biomass in roots is lowest at a certain optimum temperature and increases at both higher and lower temperatures. This is found not only when the temperature of the entire plant is varied, but also when only root temperature is changed whilst shoot temperature is kept constant. Effects of temperature on the allocation pattern can be explained largely by the effect of root temperature on the roots' capacity to transport water. Effects of a shortage in water supply on carbon partitioning are unambiguous: roots receive relatively more carbon. The pattern of biomass allocation in the vegetative stage and variation in water-use efficiency are prime factors determining a plant's potential for early growth and yield in different environments. In a comparison of a range of T. aestivum cultivars, a high water-use efficiency at the plant level correlates positively with a large investment in both leaf and root biomass, a low stomatal conductance and a large investment in photosynthetic capacity. We also present evidence that a lower investment of biomass in roots is not only associated with lower respiratory costs for root growth, but also with lower specific costs for ion uptake. We suggest the combination of a number of traits in future wheat cultivars, i.e. a high investment of biomass in leaves, which have a low stomatal conductance and a high photosynthetic capacity, and a low investment of biomass in roots, which have low respiratory costs. Such cultivars are considered highly appropriate in a future world, especially in the drier regions. Although variation for the desired traits already exists among wheat cultivars, it is much larger among wild Aegilops species, which can readily be crossed with T. aestivum. Such wild relatives may be exploited to develop new wheat cultivars well-adapted to changed climatic conditions.

12.
For continuous outcomes in randomized controlled trials, longitudinal analysis of pre- and posttreatment measurements as bivariate responses has recently become one analytical approach for comparing two treatment groups. Under random allocation, means and variances of pretreatment measurements are expected to be equal between groups, but covariances and posttreatment variances are not. Under random allocation with unequal covariances and posttreatment variances, we compared the asymptotic variances of the treatment effect estimators in three longitudinal models. The data-generating model has equal baseline means and variances, and unequal covariances and posttreatment variances. The model with equal baseline means and unequal variance–covariance matrices has a redundant parameter. In large samples, these two models keep a nominal type I error rate and have high efficiency. The model with equal baseline means and equal variance–covariance matrices wrongly assumes equal covariances and posttreatment variances. Only under equal sample sizes does this model keep a nominal type I error rate, and there it has the same high efficiency as the data-generating model. In conclusion, longitudinal analysis with equal baseline means performs well in large samples. We also compared the asymptotic properties of the longitudinal models with those of the analysis of covariance (ANCOVA) and the t-test.
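
The type I error behavior described here is easy to reproduce in a small simulation. The sketch below (illustrative numbers, not the paper's models) generates null data with equal baseline means but unequal covariances and posttreatment variances, analyzes it with a homoscedastic ANCOVA of post on pre and arm as a simple stand-in for an analysis that wrongly assumes a common variance-covariance matrix, and compares equal with unequal sample sizes:

```python
# Type I error of homoscedastic ANCOVA under unequal covariances and
# posttreatment variances, for equal vs unequal arm sizes.
import numpy as np
from scipy import stats

def type1_error(n0, n1, n_sim=10000, alpha=0.05, seed=4):
    rng = np.random.default_rng(seed)
    # Null model: equal baseline and post means; arm 1 has a larger post
    # variance and a different pre-post covariance.
    cov0 = np.array([[1.0, 0.5], [0.5, 1.0]])
    cov1 = np.array([[1.0, 0.9], [0.9, 4.0]])
    rejections = 0
    for _ in range(n_sim):
        y0 = rng.multivariate_normal([0, 0], cov0, size=n0)
        y1 = rng.multivariate_normal([0, 0], cov1, size=n1)
        pre = np.concatenate([y0[:, 0], y1[:, 0]])
        post = np.concatenate([y0[:, 1], y1[:, 1]])
        arm = np.concatenate([np.zeros(n0), np.ones(n1)])
        X = np.column_stack([np.ones(n0 + n1), pre, arm])
        beta, rss, *_ = np.linalg.lstsq(X, post, rcond=None)
        df = n0 + n1 - 3
        se = np.sqrt((rss[0] / df) * np.linalg.inv(X.T @ X)[2, 2])
        pval = 2 * stats.t.sf(abs(beta[2] / se), df)
        rejections += pval < alpha
    return rejections / n_sim

print("equal n (50/50):   type I error ~", round(type1_error(50, 50), 3))
print("unequal n (20/80): type I error ~", round(type1_error(20, 80), 3))
```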

13.
Two-stage randomized experiments have become an increasingly popular experimental design for causal inference when the outcome of one unit may be affected by the treatment assignments of other units in the same cluster. In this paper, we provide a methodological framework for general tools of statistical inference and power analysis for two-stage randomized experiments. Under the randomization-based framework, we consider the estimation of a new direct effect of interest as well as the average direct and spillover effects studied in the literature. We provide unbiased estimators of these causal quantities and their conservative variance estimators in a general setting. Using these results, we then develop hypothesis testing procedures and derive sample size formulas. We theoretically compare the two-stage randomized design with the completely randomized and cluster randomized designs, which represent two limiting designs. Finally, we conduct simulation studies to evaluate the empirical performance of our sample size formulas. For empirical illustration, the proposed methodology is applied to the randomized evaluation of the Indian National Health Insurance Program. An open-source software package is available for implementing the proposed methodology.
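
To make the estimands concrete, the following Python sketch simulates a two-stage randomized experiment and computes simple difference-in-means versions of the average direct and spillover effects. The data are illustrative, the paper's general-setting estimators and variance formulas are not reproduced, and the pooled means used here are unbiased only because all clusters have equal size.

```python
# Two-stage randomization: clusters are randomized to a treatment saturation,
# then individuals within each cluster are randomized at that saturation.
import numpy as np

rng = np.random.default_rng(5)
n_clusters, n_per = 40, 30
saturations = (0.2, 0.8)

# Stage 1: randomize clusters to a saturation.
sat_assign = rng.permutation([saturations[0]] * (n_clusters // 2) +
                             [saturations[1]] * (n_clusters // 2))
rows = []
for sat in sat_assign:
    # Stage 2: randomize individuals to treatment at the cluster's saturation.
    n_treat = int(round(sat * n_per))
    z = rng.permutation([1] * n_treat + [0] * (n_per - n_treat))
    # Toy outcome: direct effect 2.0, spillover 1.5 * saturation.
    y = 0.5 + 2.0 * z + 1.5 * sat + rng.normal(0, 1, n_per)
    rows += [(sat, zi, yi) for zi, yi in zip(z, y)]
data = np.array(rows)                       # columns: saturation, z, y

def mean_y(sat, z):
    mask = (data[:, 0] == sat) & (data[:, 1] == z)
    return data[mask, 2].mean()

for sat in saturations:
    print(f"direct effect at saturation {sat}: "
          f"{mean_y(sat, 1) - mean_y(sat, 0):.2f}  (truth 2.00)")
for z in (0, 1):
    print(f"spillover at z={z} (0.8 vs 0.2 saturation): "
          f"{mean_y(0.8, z) - mean_y(0.2, z):.2f}  (truth 0.90)")
```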

14.
Investigations of sample size for planning case-control studies have usually been limited to detecting a single factor. In this paper, we investigate sample size for multiple risk factors in strata-matched case-control studies. We construct an omnibus statistic for testing M different risk factors based on the jointly sufficient statistics of the parameters associated with the risk factors. The statistic is non-iterative, and it reduces to the Cochran statistic when M = 1. The asymptotic power function of the test is a non-central chi-square with M degrees of freedom, and the sample size required for a specific power can be obtained from the inverse relationship. We find that equal sample allocation is optimal. A Monte Carlo experiment demonstrates that an approximate formula for calculating sample size is satisfactory in typical epidemiologic studies. An approximate sample size obtained using Bonferroni's method for multiple comparisons is much larger than that obtained using the omnibus test. The approximate sample size formulas investigated in this paper, based on the omnibus test as well as the individual tests, can be useful in designing case-control studies for detecting multiple risk factors.
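
Because the test is asymptotically noncentral chi-square with M degrees of freedom, the sample size calculation reduces to inverting the power function. In the hedged Python sketch below, per_subject_ncp, the noncentrality contributed by a single matched set, is an assumed input that would come from the design's parameters:

```python
# Invert the noncentral chi-square power function to get a sample size.
import numpy as np
from scipy.stats import chi2, ncx2
from scipy.optimize import brentq

def required_n(M, per_subject_ncp, power=0.8, alpha=0.05):
    crit = chi2.ppf(1 - alpha, df=M)
    # Total noncentrality lambda* giving the target power, then convert to n.
    f = lambda lam: ncx2.sf(crit, df=M, nc=lam) - power
    lam_star = brentq(f, 1e-6, 1e4)
    return int(np.ceil(lam_star / per_subject_ncp))

for M in (1, 3, 5):
    print(f"M = {M}: n ~ {required_n(M, per_subject_ncp=0.02)}")
```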

15.
Ideally, randomized trials would be used to compare the long-term effectiveness of dynamic treatment regimes on clinically relevant outcomes. However, because randomized trials are not always feasible or timely, we often must rely on observational data to compare dynamic treatment regimes. An example of a dynamic treatment regime is “start combined antiretroviral therapy (cART) within 6 months of CD4 cell count first dropping below x cells/mm³ or diagnosis of an AIDS-defining illness, whichever happens first,” where x can take values between 200 and 500. Recently, Cain et al. (Ann. Intern. Med. 154(8):509–515, 2011) used inverse probability (IP) weighting of dynamic marginal structural models to find the x that minimizes 5-year mortality risk under similar dynamic regimes using observational data. Unlike standard methods, IP weighting can appropriately adjust for measured time-varying confounders (e.g., CD4 cell count, viral load) that are affected by prior treatment. Here we describe an alternative method to IP weighting for comparing the effectiveness of dynamic cART regimes: the parametric g-formula. The parametric g-formula naturally handles dynamic regimes and, like IP weighting, can appropriately adjust for measured time-varying confounders. However, estimators based on the parametric g-formula are more efficient than IP weighted estimators, often at the expense of more parametric assumptions. Here we describe how to use the parametric g-formula to estimate risk by the end of a user-specified follow-up period under dynamic treatment regimes. We describe an application of this method to answer the “when to start” question using data from the HIV-CAUSAL Collaboration.
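
The mechanics of the parametric g-formula can be sketched in a toy two-period setting. The Python code below uses illustrative names, models, and data; real applications such as the one in this abstract involve many periods, censoring, and survival outcomes. It fits parametric models for the time-varying confounder and the outcome, then simulates cohorts forward under each regime "treat once L drops below x" and averages the predicted risks:

```python
# Minimal two-period parametric g-formula sketch for a dynamic regime.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(6)
n = 5000

# "Observational" data with confounding: treatment starts more often at low L.
L0 = rng.normal(350, 100, n)
A0 = rng.binomial(1, 1 / (1 + np.exp((L0 - 300) / 50)))
L1 = 0.9 * L0 + 40 * A0 + rng.normal(0, 30, n)
A1 = np.maximum(A0, rng.binomial(1, 1 / (1 + np.exp((L1 - 300) / 50))))
Y = rng.binomial(1, 1 / (1 + np.exp(0.01 * (L1 - 250) + 1.0 * A1 - 0.2)))

# Parametric models used by the g-formula.
X_L1 = np.column_stack([L0, A0])
mod_L1 = LinearRegression().fit(X_L1, L1)
resid_sd = (L1 - mod_L1.predict(X_L1)).std()
mod_Y = LogisticRegression(C=1e6).fit(np.column_stack([L0, A0, L1, A1]), Y)

def gformula_risk(x, n_mc=20000):
    """Monte Carlo g-formula risk under the regime 'treat once L < x'."""
    l0 = rng.choice(L0, size=n_mc)                  # empirical baseline draw
    a0 = (l0 < x).astype(float)
    l1 = mod_L1.predict(np.column_stack([l0, a0])) + rng.normal(0, resid_sd, n_mc)
    a1 = np.maximum(a0, (l1 < x).astype(float))     # once started, stay on
    return mod_Y.predict_proba(np.column_stack([l0, a0, l1, a1]))[:, 1].mean()

for x in (200, 350, 500):
    print(f"regime 'treat when L < {x}': risk ~ {gformula_risk(x):.3f}")
```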

16.

Introduction

The clinical significance of a treatment effect demonstrated in a randomized trial is typically assessed by reference to differences in event rates at the group level. An alternative is to make individualized predictions for each patient based on a prediction model. This approach is growing in popularity, particularly for cancer. Despite its intuitive advantages, it remains plausible that some prediction models may do more harm than good. Here we present a novel method for determining whether predictions from a model should be used to apply the results of a randomized trial to individual patients, as opposed to using group-level results.

Methods

We propose applying the prediction model to a data set from a randomized trial and examining the results of patients for whom the treatment arm recommended by a prediction model is congruent with allocation. These results are compared with the strategy of treating all patients through use of a net benefit function that incorporates both the number of patients treated and the outcome. We examined models developed using data sets regarding adjuvant chemotherapy for colorectal cancer and Dutasteride for benign prostatic hypertrophy.

Results

For adjuvant chemotherapy, we found that patients who would opt for chemotherapy even for small risk reductions, and, conversely, those who would require a very large risk reduction, would on average be harmed by using a prediction model; those with intermediate preferences would on average benefit from allowing such information to inform their decision making. Use of prediction could, at worst, lead to the equivalent of one additional death or recurrence per 143 patients; at best, it could lead to the equivalent of a 25% reduction in the number of treatments without an increase in event rates. In the Dutasteride case, where the average benefit of treatment is more modest, there is a small benefit of prediction modelling, equivalent to a reduction of one event for every 100 patients given an individualized prediction.

Conclusion

The size of the benefit associated with appropriate clinical implementation of a good prediction model is sufficient to warrant development of further models. However, care is advised in the implementation of prediction modelling, especially for patients who would opt for treatment even if it were of relatively little benefit.
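
A minimal sketch of this style of evaluation, using simulated stand-ins rather than the paper's data or model: net benefit trades the event rate under a treatment strategy against the fraction treated, weighted by w, the smallest risk reduction at which a patient would accept treatment, and the strategy's event rate is estimated from the trial arm congruent with the strategy's recommendation.

```python
# Net benefit of "treat all" vs "treat when the model recommends", evaluated
# on simulated randomized-trial data. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(7)
n = 4000
score = rng.uniform(0, 1, n)                 # model's predicted benefit (toy)
arm = rng.binomial(1, 0.5, n)                # randomized 1:1 allocation
event = rng.binomial(1, 0.3 - 0.25 * score * arm)  # treatment helps more at high score

def net_benefit(treat_rule, w):
    """Higher is better: minus the event rate under the rule (estimated from
    the congruent trial arm), minus w times the fraction the rule treats."""
    congruent = arm == treat_rule
    return -event[congruent].mean() - w * treat_rule.mean()

for w in (0.02, 0.10, 0.25):
    nb_all = net_benefit(np.ones(n, dtype=int), w)
    # Toy recommendation: treat when the predicted risk reduction
    # (0.25 * score in this simulation) exceeds the patient's threshold w.
    nb_model = net_benefit((score > w / 0.25).astype(int), w)
    print(f"w={w:.2f}: treat-all {nb_all:+.3f}  model-guided {nb_model:+.3f}")
```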

17.
Carbon cost of root systems: an architectural approach
Root architecture is an important component of nutrient uptake and may be sensitive to carbon allocational changes brought about by rising CO2. We describe a deformable geometric model of root growth, SimRoot, for the dynamic morphological and physiological simulation of root architectures. Using SimRoot, and measurements of root biomass deposition, respiration and exudation, carbon/phosphorus budgets were developed for three contrasting root architectures. Carbon allocation patterns and phosphorus acquisition efficiencies were estimated for Phaseolus vulgaris seedlings with either a dichotomous, herringbone, or empirically determined bean root architecture. Carbon allocation to biomass, respiration, and exudation varied significantly among architectures. Root systems also varied in the relationship between C expenditure and P acquisition, providing evidence for the importance of architecture in nutrient acquisition efficiency.

18.
Stepped wedge cluster randomized trials (SWCRTs) are increasingly used for the evaluation of complex interventions in health services research. They randomly allocate clusters to the time points at which the clusters switch to the intervention under investigation, without returning to the control condition. The resulting unbalanced allocation over time periods and the uncertainty about the underlying cluster-level correlation structures render designing and analyzing SWCRTs a challenge. Adjusting for time trends is recommended; appropriate parameterizations depend on the particular context. For sample size calculation, the covariance structure and covariance parameters are usually assumed to be known. These assumptions greatly affect the influence single cluster-period cells have on the effect estimate. Thus, it is important to understand how cluster-period cells contribute to the treatment effect estimate. We therefore discuss two measures of cell influence, both functions of the design characteristics and covariance structure only, which can thus be calculated at the planning stage: the coefficient matrix as discussed by Matthews and Forbes, and the information content (IC) as introduced by Kasza and Forbes. The main result is a new formula for the IC that is more general and computationally more efficient. The formula applies to any generalized least squares estimator, in particular to any type of time trend adjustment and to nonblock diagonal covariance matrices. We further show a functional relationship between the IC and the coefficient matrix. We give two examples that tie in with the current literature. All discussed tools and methods are implemented in the R package SteppedPower.

19.
焦德志, 钟露朋, 张艳馥, 潘林, 杨允菲. 生态学报 (Acta Ecologica Sinica), 2022, 42(15): 6103-6110
Plant individuals growing under different environmental conditions can exhibit differentiation in morphological traits and trade-offs and adjustments in resource allocation. Using large-sample field surveys and statistical analysis, we compared the morphological traits and the allometric biomass-allocation relationships of reproductive and vegetative ramets of common reed (Phragmites australis) in different habitats of the Zhalong wetland. The results showed that, at the end of September, reed ramets in saline-alkaline, xeric, wet, and aquatic habitats displayed considerable ecological plasticity: ramet height and ramet weight were both smallest in the saline-alkaline habitat and largest in the aquatic habitat, with maximum-to-minimum ratios of 1.3–3.3 and 1.8–5.1, respectively. Variation in ramet growth was higher among populations than within populations, and reproductive ramets varied less than vegetative ramets. The ratio of support allocation to production allocation of ramets was 1.8–4.2; production allocation was highest in the saline-alkaline habitat and lowest in the aquatic habitat, whereas support allocation and reproductive allocation ranked in the opposite order. Inflorescence length and inflorescence weight of reproductive ramets increased linearly with ramet height, and ramets whose height and weight fell below 20% and 35% of the population means, respectively, did not reproduce sexually. Leaf weight, leaf sheath and stem weight, and ramet weight scaled with ramet height as power functions, that is, allometrically. Plants adapt to different environments by altering individual morphology and adjusting biomass allocation among modules, whereas the genetically controlled growth relationships among modules remain relatively stable.

20.
This article proposes new tests to compare the vaccine and placebo groups in randomized vaccine trials when a small fraction of volunteers become infected. A simple approach that is consistent with the intent-to-treat principle is to assign a score, say W, equal to 0 for the uninfected and some postinfection outcome X > 0 for the infected. One can then test the equality of this skewed distribution of W between the two groups. This burden of illness (BOI) test was introduced by Chang, Guess, and Heyse (1994, Statistics in Medicine 13, 1807–1814). If infections are rare, the massive number of 0s in each group tends to dilute the vaccine effect, and this test can have poor power, particularly if the X's are not close to zero. Comparing X in just the infected is no longer a comparison of randomized groups and can produce misleading conclusions. Gilbert, Bosch, and Hudgens (2003, Biometrics 59, 531–541) and Hudgens, Hoering, and Self (2003, Statistics in Medicine 22, 2281–2298) introduced tests of the equality of X in a subgroup—the principal stratum of those "doomed" to be infected under either randomization assignment. This can be more powerful than the BOI approach, but requires unexaminable assumptions. We suggest new "chop-lump" Wilcoxon and t-tests (CLW and CLT) that can be more powerful than the BOI tests in certain situations. When the number of volunteers in each group is equal, the chop-lump tests remove an equal number of zeros from both groups and then perform a test on the remaining W's, which are mostly > 0. A permutation approach provides a null distribution. We show that under local alternatives, the CLW test is always more powerful than the usual Wilcoxon test provided the true vaccine and placebo infection rates are the same. We also identify the crucial role of the "gap" between 0 and the X's on the power of the t-tests. The chop-lump tests are compared to established tests via simulation for planned HIV and malaria vaccine trials. A reanalysis of the first phase III HIV vaccine trial is used to illustrate the method.
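
Based on the description in this abstract, a chop-lump Wilcoxon test for equal group sizes can be sketched as follows: chop an equal number of zeros from both groups, compute a centered rank-sum on the remaining scores, and calibrate by permutation. This is illustrative code, not the authors' implementation.

```python
# Chop-lump Wilcoxon test with a permutation null distribution.
import numpy as np
from scipy.stats import rankdata

def chop_lump_stat(w_vax, w_plc):
    """Centered rank-sum of the vaccine group after chopping min(#zeros)
    zeros from each arm. Zeros sort first because W >= 0."""
    k = min(np.sum(w_vax == 0), np.sum(w_plc == 0))
    a = np.sort(w_vax)[k:]
    b = np.sort(w_plc)[k:]
    ranks = rankdata(np.concatenate([a, b]))
    return ranks[: len(a)].sum() - len(a) * (len(a) + len(b) + 1) / 2

def chop_lump_test(w_vax, w_plc, n_perm=2000, seed=8):
    rng = np.random.default_rng(seed)
    obs = chop_lump_stat(w_vax, w_plc)
    pooled = np.concatenate([w_vax, w_plc])
    n = len(w_vax)
    perm = np.empty(n_perm)
    for i in range(n_perm):
        rng.shuffle(pooled)
        perm[i] = chop_lump_stat(pooled[:n], pooled[n:])
    return np.mean(perm <= obs)         # one-sided: vaccine lowers burden

rng = np.random.default_rng(9)
n = 1000
infected_v, infected_p = rng.random(n) < 0.02, rng.random(n) < 0.04
w_v = np.where(infected_v, rng.gamma(4, 2, n), 0.0)   # W = 0 if uninfected
w_p = np.where(infected_p, rng.gamma(6, 2, n), 0.0)
print("one-sided permutation p-value:", chop_lump_test(w_v, w_p))
```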
