Similar documents
Retrieved 20 similar documents (search time: 31 ms)
1.
Chang  Ted; Kott  Phillip S. 《Biometrika》2008,95(3):555-571
When we estimate the population total for a survey variable or variables, calibration forces the weighted estimates of certain covariates to match known or alternatively estimated population totals called benchmarks. Calibration can be used to correct for sample-survey nonresponse, or for coverage error resulting from frame undercoverage or unit duplication. The quasi-randomization theory supporting its use in nonresponse adjustment treats response as an additional phase of random sampling. The functional form of a quasi-random response model is assumed to be known, its parameter values estimated implicitly through the creation of calibration weights. Unfortunately, calibration depends upon known benchmark totals, while the covariates in a plausible model for survey response may not be the benchmark covariates. Moreover, it may be prudent to keep the number of covariates in a response model small. We use calibration to adjust for nonresponse when the benchmark and model covariates may differ, provided the number of the former is at least as great as that of the latter. We discuss the estimation of a total for a vector of survey variables that do not include the benchmark covariates, but that may include some of the model covariates. We show how to measure both the additional asymptotic variance due to the nonresponse in a calibration-weighted estimator and the full asymptotic variance of the estimator itself. All variances are determined with respect to the randomization mechanism used to select the sample, the response model generating the subset of sample respondents, or both. Data from the U.S. National Agricultural Statistics Service's 2002 Census of Agriculture and simulations are used to illustrate alternative adjustments for nonresponse. The paper concludes with some remarks about adjustment for coverage error.

2.
Clinical trials with Poisson distributed count data as the primary outcome are common in various medical areas, such as relapse counts in multiple sclerosis trials or the number of attacks in trials for the treatment of migraine. In this article, we present approximate sample size formulae for testing noninferiority using asymptotic tests which are based on restricted or unrestricted maximum likelihood estimators of the Poisson rates. The Poisson outcomes are allowed to be observed for unequal follow‐up schemes, and both the case where the noninferiority margin is expressed in terms of the difference and the case where it is expressed in terms of the ratio are considered. The exact type I error rates and powers of these tests are evaluated and the accuracy of the approximate sample size formulae is examined. The test statistic using the restricted maximum likelihood estimators (for the difference test problem) and the test statistic that is based on the logarithmic transformation and employs the maximum likelihood estimators (for the ratio test problem) show favorable type I error control and can be recommended for practical application. The approximate sample size formulae show high accuracy even for small sample sizes and provide power values identical or close to the aspired ones. The methods are illustrated by a clinical trial example from anesthesia.
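The flavor of such a normal-approximation calculation can be sketched as follows. This is a generic Wald-type per-group sample size formula for the rate-difference margin using unrestricted variance estimates; the rates, follow-up times, and margin below are illustrative assumptions, not values from the article, and the article's restricted-MLE version differs in the variance term.

```python
from math import ceil
from statistics import NormalDist

def n_poisson_noninferiority(lam_e, lam_c, margin, t_e=1.0, t_c=1.0,
                             alpha=0.025, power=0.8):
    """Approximate per-group sample size for noninferiority on the rate
    difference lam_e - lam_c with margin > 0, using a Wald test with
    unrestricted (MLE) variance estimates.  t_e and t_c are the
    per-subject follow-up times in the two groups."""
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(power)
    var = lam_e / t_e + lam_c / t_c      # variance of the rate-difference estimate, per subject
    effect = lam_e - lam_c + margin      # distance from the noninferiority boundary
    return ceil((z_a + z_b) ** 2 * var / effect ** 2)

# Illustrative: annual relapse rate 0.4 in both groups, margin 0.15
n = n_poisson_noninferiority(0.4, 0.4, 0.15)
```

A larger margin shrinks the required sample size quadratically, which is why the choice of margin dominates these designs.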

3.
Seamlessly expanding a randomized phase II trial to phase III
Inoue LY  Thall PF  Berry DA 《Biometrics》2002,58(4):823-831
A sequential Bayesian phase II/III design is proposed for comparative clinical trials. The design is based on both survival time and discrete early events that may be related to survival and assumes a parametric mixture model. Phase II involves a small number of centers. Patients are randomized between treatments throughout, and sequential decisions are based on predictive probabilities of concluding superiority of the experimental treatment. Whether to stop early, continue, or shift into phase III is assessed repeatedly in phase II. Phase III begins when additional institutions are incorporated into the ongoing phase II trial. Simulation studies in the context of a non-small-cell lung cancer trial indicate that the proposed method maintains overall size and power while usually requiring substantially smaller sample size and shorter trial duration when compared with conventional group-sequential phase III designs.

4.
In clinical trials for the comparison of two treatments, it seems reasonable to stop the study if one treatment has proved markedly superior in the main effect, or severely inferior with respect to an adverse side effect. Two-stage sampling plans are considered for simultaneously testing a main and a side effect, assumed to follow a bivariate normal distribution with known variances but unknown correlation. The test procedure keeps the global significance level under the null hypothesis of no differences in main and side effects. The critical values are chosen under the side condition that the probability of ending at the first or second stage with a rejection of the elementary null hypothesis for the main effect is controlled when a particular constellation of differences in means holds; analogously, the probability of ending with a rejection of the null hypothesis for the side effect, given certain treatment differences, is controlled too. Plans “optimal” with respect to sample size are given.

5.
In the management of most chronic conditions characterized by the lack of universally effective treatments, adaptive treatment strategies (ATSs) have grown in popularity as they offer a more individualized approach. As a result, sequential multiple assignment randomized trials (SMARTs) have gained attention as the most suitable clinical trial design to formalize the study of these strategies. While the number of SMARTs has increased in recent years, sample size and design considerations have generally been carried out in frequentist settings. However, standard frequentist formulae require assumptions on interim response rates and variance components. Misspecifying these can lead to incorrect sample size calculations and correspondingly inadequate levels of power. The Bayesian framework offers a straightforward path to alleviate some of these concerns. In this paper, we provide calculations in a Bayesian setting to allow more realistic and robust estimates that account for uncertainty in inputs through the ‘two priors’ approach. Additionally, compared to the standard frequentist formulae, this methodology allows us to rely on fewer assumptions, integrate pre-trial knowledge, and switch the focus from the standardized effect size to the minimum detectable difference (MDD). The proposed methodology is evaluated in a thorough simulation study and is implemented to estimate the sample size for a full-scale SMART of an internet-based adaptive stress management intervention in cardiovascular disease patients, using data from its pilot study conducted in two Canadian provinces.

6.
In an ongoing clinical trial and while the randomized treatment codes remain blinded, it is often desirable to estimate the treatment difference and standard deviation for a normally‐distributed response variable. This is particularly useful for estimating the sample size for future trials or for adjusting the sample size for the ongoing trial. We describe the limitations of an available EM algorithm‐based procedure to reestimate the standard deviation without unblinding the codes for the two treatments. We introduce a new procedure and propose a clinical trial design for estimating both the treatment difference and standard deviation without unblinding. The performance of the proposed procedure is evaluated in a simulation study.
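The core difficulty the abstract alludes to can be seen from a simple moment identity: under 1:1 allocation, the variance of the blinded (lumped) sample overstates the within-group variance by a quarter of the squared treatment difference. The sketch below is that elementary moment adjustment under an assumed difference; it is not the EM procedure the article critiques, nor the authors' new method.

```python
from statistics import pvariance

def blinded_sigma2(pooled, delta_assumed):
    """Adjusted within-group variance estimate from blinded data under
    1:1 allocation.  The lumped variance equals sigma^2 + delta^2/4,
    so subtract the bias for an assumed treatment difference.
    A simple moment adjustment (illustrative assumption), not the
    EM-based or newly proposed procedure from the article."""
    v = pvariance(pooled)
    return max(v - delta_assumed ** 2 / 4, 0.0)
```

For example, pooling two groups centered at 0 and 2 with equal spread and passing `delta_assumed=2` recovers the common within-group variance.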

7.
Two-stage clinical trial stopping rules
J D Elashoff  T J Reedy 《Biometrics》1984,40(3):791-795
Two-stage stopping rules for clinical trials are considered. The nominal significance level needed for the second-stage test, for any choice of first-stage significance level, is derived for rules with overall significance levels of .01 and .05 and for studies with either half or two-thirds of the patients analyzed in the first stage. A graphical demonstration is given of the inherent tradeoff between power and expected sample size (or probability of early termination). A specific rule, intermediate to those advocated by Pocock (1977, Biometrika 64, 191-199) and O'Brien and Fleming (1979, Biometrics 35, 549-556), is recommended.
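The relationship between the stage-wise critical values and the overall significance level follows from the joint distribution of the two test statistics: with an information fraction f at the interim look, the interim and final z-statistics are bivariate normal with correlation sqrt(f). The stdlib sketch below computes the overall two-sided type I error for given critical values by one-dimensional numerical integration; the specific bounds used in the usage line (Pocock's equal bounds, 2.178 for two equally spaced looks at overall level .05) are standard values, not taken from this article.

```python
from math import erf, exp, pi, sqrt

def phi(x):
    """Standard normal density."""
    return exp(-x * x / 2) / sqrt(2 * pi)

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def overall_alpha(c1, c2, frac, steps=4000):
    """Overall two-sided type I error of a two-stage rule that rejects
    if |Z1| > c1 at the interim (information fraction `frac`) or
    |Z2| > c2 at the final analysis, using Corr(Z1, Z2) = sqrt(frac)."""
    rho = sqrt(frac)
    s = sqrt(1 - rho * rho)
    # P(continue at stage 1, then reject at stage 2): integrate over Z1,
    # using Z2 | Z1 = z ~ N(rho * z, 1 - rho^2).  Midpoint rule.
    h = 2 * c1 / steps
    acc = 0.0
    for i in range(steps):
        z = -c1 + (i + 0.5) * h
        p_rej2 = Phi((-c2 - rho * z) / s) + 1 - Phi((c2 - rho * z) / s)
        acc += phi(z) * p_rej2 * h
    return 2 * (1 - Phi(c1)) + acc

# Pocock-style equal bounds, interim at half the data
a = overall_alpha(2.178, 2.178, 0.5)   # close to 0.05
```

Holding the overall level fixed, raising the first-stage bound c1 lets c2 drop toward the fixed-sample critical value, which is exactly the Pocock versus O'Brien-Fleming tradeoff the abstract describes.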

8.
A two-stage design is proposed to choose among several experimental treatments and a standard treatment in clinical trials. The first stage employs a selection procedure to select the best treatment, provided it is better than the standard. The second stage tests the hypothesis between the best treatment selected at the first stage (if any) and the standard treatment. All the treatments are assumed to follow normal distributions, and the best treatment is the one with the largest population mean. The level and the power are defined and used to set up equations to solve for the unknown first-stage sample size, second-stage sample size, and procedure parameters. The optimal design is the one that gives the smallest average sample size. Numerical results are presented to illustrate the improvement of the proposed design over the existing one-stage design.

9.
In clinical trials where several experimental treatments are of interest, the goal may be viewed as identification of the best of these and comparison of that treatment to a standard control therapy. However, it is undesirable to commit patients to a large-scale comparative trial of a new regimen without evidence that its therapeutic success rate is acceptably high. We propose a two-stage design in which patients are first randomized among the experimental treatments, and the single treatment having the highest observed success rate is identified. If this highest rate falls below a fixed cutoff then the trial is terminated. Otherwise, the "best" new treatment is compared to the control at a second stage. Locally optimal values of the cutoff and the stage-1 and stage-2 sample sizes are derived by minimizing expected total sample size. The design has both high power and high probability of terminating early when no experimental treatment is superior to the control. Numerical results for implementing the design are presented, and comparison to Dunnett's (1984, in Design of Experiments: Ranking and Selection, T. J. Santner and A. C. Tamhane (eds), 47-66; New York: Marcel Dekker) optimal one-stage procedure is made.

10.
Heo M  Leon AC 《Biometrics》2008,64(4):1256-1262
Cluster randomized clinical trials (cluster-RCT), where the community entities serve as clusters, often yield data with three hierarchy levels. For example, interventions are randomly assigned to the clusters (level three unit). Health care professionals (level two unit) within the same cluster are trained with the randomly assigned intervention to provide care to subjects (level one unit). In this study, we derived a closed form power function and formulae for sample size determination required to detect an intervention effect on outcomes at the subject's level. In doing so, we used a test statistic based on maximum likelihood estimates from a mixed-effects linear regression model for three level data. A simulation study follows and verifies that theoretical power estimates based on the derived formulae are nearly identical to empirical estimates based on simulated data. Recommendations at the design stage of a cluster-RCT are discussed.
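The shape of such a closed-form power function can be sketched with a generic three-level design effect: one intraclass correlation for subjects sharing a provider and one for subjects sharing a cluster but not a provider. This is a standard normal-approximation sketch under those assumptions, not necessarily the paper's exact mixed-model derivation, and the correlation values below are illustrative.

```python
from math import sqrt
from statistics import NormalDist

def power_three_level(delta, sigma, n_sub, n_prov, n_clus,
                      rho_prov, rho_clus, alpha=0.05):
    """Approximate power to detect a mean difference `delta` in a
    three-level cluster-RCT: n_clus clusters per arm, n_prov providers
    per cluster, n_sub subjects per provider.  rho_prov is the
    within-provider correlation, rho_clus the within-cluster
    (between-provider) correlation.  Generic design-effect sketch."""
    de = 1 + (n_sub - 1) * rho_prov + n_sub * (n_prov - 1) * rho_clus
    se = sigma * sqrt(2 * de / (n_clus * n_prov * n_sub))
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(abs(delta) / se - z_a)

# Illustrative: 10 subjects/provider, 5 providers/cluster, 6 clusters/arm
p = power_three_level(0.5, 1.0, 10, 5, 6, rho_prov=0.05, rho_clus=0.02)
```

Note that adding clusters raises power faster than adding subjects per provider, since the latter is discounted by the design effect.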

11.
The tau and amyloid pathobiological processes underlying Alzheimer disease (AD) progress slowly over periods of decades before clinical manifestation as mild cognitive impairment (MCI), then more rapidly to dementia, and eventually to end-stage organ failure. The failure of clinical trials of candidate disease-modifying therapies to slow disease progression in patients already diagnosed with early AD has led to increased interest in exploring the possibility of early intervention and prevention trials, targeting MCI and cognitively healthy (HC) populations. Here, we stratify MCI individuals based on cerebrospinal fluid (CSF) biomarkers and structural atrophy risk factors for the disease. We also stratify HC individuals into risk groups on the basis of CSF biomarkers for the two hallmark AD pathologies. Results show that the broad category of MCI can be decomposed into subsets of individuals with significantly different average regional atrophy rates. By thus selectively identifying individuals, combinations of these biomarkers and risk factors could enable significant reductions in sample size requirements for clinical trials of investigational AD-modifying therapies, and provide stratification mechanisms to more finely assess response to therapy. Power is sufficiently high that detecting efficacy in MCI cohorts should not be a limiting factor in AD therapeutics research. In contrast, we show that sample size estimates for clinical trials aimed at the preclinical stage of the disorder (HCs with evidence of AD pathology) are prohibitively large. Longer natural history studies are needed to inform design of trials aimed at the presymptomatic stage.

12.
In the precision medicine era, (prespecified) subgroup analyses are an integral part of clinical trials. Incorporating multiple populations and hypotheses in the design and analysis plan, adaptive designs promise flexibility and efficiency in such trials. Adaptations include (unblinded) interim analyses (IAs) or blinded sample size reviews. An IA offers the possibility to select promising subgroups and reallocate sample size in further stages. Trials with these features are known as adaptive enrichment designs. Such complex designs comprise many nuisance parameters, such as prevalences of the subgroups and variances of the outcomes in the subgroups. Additionally, a number of design options including the timepoint of the sample size review and timepoint of the IA have to be selected. Here, for normally distributed endpoints, we propose a strategy combining blinded sample size recalculation and adaptive enrichment at an IA, that is, at an early timepoint nuisance parameters are reestimated and the sample size is adjusted while subgroup selection and enrichment is performed later. We discuss implications of different scenarios concerning the variances as well as the timepoints of blinded review and IA and investigate the design characteristics in simulations. The proposed method maintains the desired power if planning assumptions were inaccurate and reduces the sample size and variability of the final sample size when an enrichment is performed. Having two separate timepoints for blinded sample size review and IA improves the timing of the latter and increases the probability to correctly enrich a subgroup.

13.
We develop sample size formulas for studies aiming to test mean differences between a treatment and control group when all-or-none nonadherence (noncompliance) and selection bias are expected. Recent work by Fay, Halloran, and Follmann (2007, Biometrics 63, 465–474) addressed the increased variances within groups defined by treatment assignment when nonadherence occurs, compared to the scenario of full adherence, under the assumption of no selection bias. In this article, we extend the authors' approach to allow selection bias in the form of systematic differences in means and variances among latent adherence subgroups. We illustrate the approach by performing sample size calculations to plan clinical trials with and without pilot adherence data. Sample size formulas and tests for normally distributed outcomes are also developed in a Web Appendix that account for uncertainty of estimates from external or internal pilot data.

14.
A multiple testing procedure for clinical trials.
A multiple testing procedure is proposed for comparing two treatments when response to treatment is both dichotomous (i.e., success or failure) and immediate. The proposed test statistic for each test is the usual (Pearson) chi-square statistic based on all data collected to that point. The maximum number (N) of tests and the number (m1 + m2) of observations collected between successive tests is fixed in advance. The overall size of the procedure is shown to be controlled with virtually the same accuracy as the single sample chi-square test based on N(m1 + m2) observations. The power is also found to be virtually the same. However, by affording the opportunity to terminate early when one treatment performs markedly better than the other, the multiple testing procedure may eliminate the ethical dilemmas that often accompany clinical trials.
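The mechanics of the procedure, repeatedly computing the Pearson chi-square statistic on all data accumulated so far and stopping at the first exceedance of a critical value, can be sketched directly. The critical value must be calibrated so the overall size is controlled (the abstract's main result); the fixed-sample value 3.84 used in the usage line is purely illustrative.

```python
def pearson_chi2(s1, f1, s2, f2):
    """Pearson chi-square statistic for a 2x2 table of successes and
    failures in two treatment groups."""
    n1, n2 = s1 + f1, s2 + f2
    n = n1 + n2
    s, f = s1 + s2, f1 + f2
    if 0 in (n1, n2, s, f):
        return 0.0
    return n * (s1 * f2 - s2 * f1) ** 2 / (n1 * n2 * s * f)

def sequential_test(batches, crit):
    """Apply the repeated chi-square test after each batch of
    (s1, f1, s2, f2) counts, accumulating all data collected so far;
    stop at the first look whose statistic exceeds `crit`.
    Returns (look_number, rejected)."""
    cum = [0, 0, 0, 0]
    for k, batch in enumerate(batches, 1):
        cum = [c + b for c, b in zip(cum, batch)]
        if pearson_chi2(*cum) > crit:
            return k, True
    return len(batches), False

# One treatment markedly better: 9/10 vs 1/10 successes per batch
result = sequential_test([(9, 1, 1, 9)] * 3, crit=3.84)
```

With a strong treatment difference the procedure stops at the first look, which is precisely the early-termination benefit the abstract emphasizes.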

15.
Omission of relevant covariates can lead to bias when estimating treatment or exposure effects from survival data in both randomized controlled trials and observational studies. This paper presents a general approach to assessing bias when covariates are omitted from the Cox model. The proposed method is applicable to both randomized and non‐randomized studies. We distinguish between the effects of three possible sources of bias: omission of a balanced covariate, data censoring and unmeasured confounding. Asymptotic formulae for determining the bias are derived from the large sample properties of the maximum likelihood estimator. A simulation study is used to demonstrate the validity of the bias formulae and to characterize the influence of the different sources of bias. It is shown that the bias converges to fixed limits as the effect of the omitted covariate increases, irrespective of the degree of confounding. The bias formulae are used as the basis for developing a new method of sensitivity analysis to assess the impact of omitted covariates on estimates of treatment or exposure effects. In simulation studies, the proposed method gave unbiased treatment estimates and confidence intervals with good coverage when the true sensitivity parameters were known. We describe application of the method to a randomized controlled trial and a non‐randomized study.

16.
D A Follmann 《Biometrics》1991,47(2):763-771
The clinical trial design in which the endpoint is measured both at baseline and at the end of the study is used in a variety of situations. For two-group designs, tests such as the t test or analysis of covariance are commonly used to evaluate treatment efficacy. Often such pretest-posttest trials restrict participation to subjects with a baseline measurement of the endpoint in a certain range. A range may define a disease, or it may be thought that subjects with extreme measurements are more responsive to treatment. This paper examines the effect of screening on the analysis of covariance and t-test variances relative to the population (i.e., unscreened) variances. Bivariate normal and bivariate gamma distributions are assumed for the (pretest, posttest) measurements. Because the sample size required to detect a specified difference between treatment and control is proportional to the variance, the results have direct application to setting sample size.

17.
In randomized trials, an analysis of covariance (ANCOVA) is often used to analyze post-treatment measurements with pre-treatment measurements as a covariate to compare two treatment groups. Random allocation guarantees only equal variances of pre-treatment measurements. We hence consider data with unequal covariances and variances of post-treatment measurements without assuming normality. Recently, we showed that the actual type I error rate of the usual ANCOVA assuming equal slopes and equal residual variances is asymptotically at a nominal level under equal sample sizes, and that of the ANCOVA with unequal variances is asymptotically at a nominal level, even under unequal sample sizes. In this paper, we investigated the asymptotic properties of the ANCOVA with unequal slopes for such data. The estimators of the treatment effect at the observed mean are identical between equal and unequal variance assumptions, and these are asymptotically normal estimators for the treatment effect at the true mean. However, the variances of these estimators based on standard formulas are biased, and the actual type I error rates are not at a nominal level, irrespective of variance assumptions. Under equal sample sizes, the efficiency of the usual ANCOVA assuming equal slopes and equal variances is asymptotically the same as that of the ANCOVA with unequal slopes and higher than that of the ANCOVA with equal slopes and unequal variances. Therefore, the use of the usual ANCOVA is appropriate under equal sample sizes.

18.
For continuous variables of randomized controlled trials, longitudinal analysis of pre- and posttreatment measurements as bivariate responses has recently become one analytical method for comparing two treatment groups. Under random allocation, means and variances of pretreatment measurements are expected to be equal between groups, but covariances and posttreatment variances are not. Under random allocation with unequal covariances and posttreatment variances, we compared asymptotic variances of the treatment effect estimators in three longitudinal models. The data-generating model has equal baseline means and variances, and unequal covariances and posttreatment variances. The model with equal baseline means and unequal variance–covariance matrices has a redundant parameter. In large sample sizes, these two models keep a nominal type I error rate and have high efficiency. The model with equal baseline means and equal variance–covariance matrices wrongly assumes equal covariances and posttreatment variances. Only under equal sample sizes does this model keep a nominal type I error rate. This model has the same high efficiency as the data-generating model under equal sample sizes. In conclusion, longitudinal analysis with equal baseline means performed well in large sample sizes. We also compared asymptotic properties of longitudinal models with those of the analysis of covariance (ANCOVA) and t-test.

19.
Cluster randomized studies are common in community trials. The standard method for estimating sample size for cluster randomized studies assumes a common cluster size. However often in cluster randomized studies, size of the clusters vary. In this paper, we derive sample size estimation for continuous outcomes for cluster randomized studies while accounting for the variability due to cluster size. It is shown that the proposed formula for estimating total cluster size can be obtained by adding a correction term to the traditional formula which uses the average cluster size. Application of these results to the design of a health promotion educational intervention study is discussed.
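The practical effect of such a correction can be sketched with a commonly published adjustment that inflates the usual design effect by the coefficient of variation (CV) of cluster sizes; the paper's own correction term may differ in form, and all input values below are illustrative assumptions.

```python
from math import ceil
from statistics import NormalDist

def clusters_needed(delta, sigma, m_bar, cv, rho, alpha=0.05, power=0.8):
    """Clusters per arm for a continuous outcome, with the standard
    design effect inflated for cluster-size variation through the
    coefficient of variation `cv` of cluster sizes (cv = 0 recovers
    the equal-cluster-size formula).  m_bar is the mean cluster size,
    rho the intraclass correlation."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    de = 1 + ((cv ** 2 + 1) * m_bar - 1) * rho
    n_subjects = 2 * (z * sigma / delta) ** 2 * de   # subjects per arm
    return ceil(n_subjects / m_bar)

k_equal = clusters_needed(0.3, 1.0, m_bar=20, cv=0.0, rho=0.05)
k_var = clusters_needed(0.3, 1.0, m_bar=20, cv=0.6, rho=0.05)
```

Even a moderate CV of 0.6 adds several clusters per arm, which is the cost of ignoring cluster-size variability that the abstract's correction term quantifies.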

20.
Taylor L  Zhou XH 《Biometrics》2009,65(1):88-95
Randomized clinical trials are a powerful tool for investigating causal treatment effects, but in human trials there are oftentimes problems of noncompliance which standard analyses, such as the intention-to-treat or as-treated analysis, either ignore or incorporate in such a way that the resulting estimand is no longer a causal effect. One alternative to these analyses is the complier average causal effect (CACE) which estimates the average causal treatment effect among a subpopulation that would comply under any treatment assigned. We focus on the setting of a randomized clinical trial with crossover treatment noncompliance (e.g., control subjects could receive the intervention and intervention subjects could receive the control) and outcome nonresponse. In this article, we develop estimators for the CACE using multiple imputation methods, which have been successfully applied to a wide variety of missing data problems, but have not yet been applied to the potential outcomes setting of causal inference. Using simulated data we investigate the finite sample properties of these estimators as well as of competing procedures in a simple setting. Finally we illustrate our methods using a real randomized encouragement design study on the effectiveness of the influenza vaccine.
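For intuition about the estimand, under the standard identifying assumptions (randomization, exclusion restriction, monotonicity) the CACE equals the ITT effect on the outcome divided by the ITT effect on treatment receipt, the classic instrumental-variable moment estimator. The sketch below is that simple estimator with complete data and toy numbers; the article's multiple-imputation estimators, which additionally handle outcome nonresponse, are considerably more involved.

```python
def cace_moment(y_t, d_t, y_c, d_c):
    """Moment (instrumental-variable) estimate of the complier average
    causal effect: the ITT effect on the outcome divided by the ITT
    effect on treatment receipt.  y_* are outcomes and d_* are 0/1
    indicators of actually receiving the intervention, by randomized
    arm (t = assigned intervention, c = assigned control)."""
    mean = lambda xs: sum(xs) / len(xs)
    itt_y = mean(y_t) - mean(y_c)       # ITT effect on the outcome
    itt_d = mean(d_t) - mean(d_c)       # net shift in treatment receipt
    return itt_y / itt_d

# Toy data with crossover in both directions (illustrative values)
effect = cace_moment([3, 4, 5, 4], [1, 1, 1, 0], [2, 2, 3, 3], [0, 1, 0, 0])
```

Here the ITT effect of 1.5 is diluted over a 0.5 net compliance difference, so the estimated effect among compliers is 3.0, twice the ITT estimate.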
