Similar Articles
20 similar articles found.
1.
Zhang L, Rosenberger WF. Biometrics 2006;62(2):562-569
We provide an explicit asymptotic method for evaluating the performance of different response-adaptive randomization procedures in clinical trials with continuous outcomes. We use this method to investigate four response-adaptive randomization procedures, evaluating their performance theoretically, especially the power and the skewing of treatment assignments toward the better treatment. These results are then verified by simulation. Our analysis concludes that the doubly adaptive biased coin design targeting optimal allocation is the best choice for practical use. We also consider the effect of delayed responses and of nonstandard responses, for example Cauchy-distributed responses. We illustrate our procedure by redesigning a real clinical trial.
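As a rough sketch of the procedure family evaluated above, the Python snippet below implements the Hu-Zhang allocation function commonly used in doubly adaptive biased coin designs. The target proportion rho is held fixed here (in the actual design it would be the estimated optimal allocation, re-estimated as responses accrue), and the tuning parameter gamma is an assumed value.

```python
import numpy as np

def dbcd_prob(x, rho, gamma=2.0):
    """Hu-Zhang allocation function g(x, rho): probability that the next
    patient is assigned to treatment 1, given the current proportion x on
    treatment 1 and the target proportion rho."""
    if x <= 0.0 or x >= 1.0:      # guard the boundary
        return rho
    num = rho * (rho / x) ** gamma
    den = num + (1 - rho) * ((1 - rho) / (1 - x)) ** gamma
    return num / den

rng = np.random.default_rng(0)
rho = 0.65          # hypothetical target: 65% of patients to the better arm
n1 = n = 0
for _ in range(200):
    x = n1 / n if n > 0 else 0.5
    if rng.random() < dbcd_prob(x, rho):
        n1 += 1
    n += 1
print(f"final proportion on treatment 1: {n1 / n:.3f}")
```

Larger gamma forces the realized allocation toward the target more aggressively, at the cost of more deterministic assignments.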

2.
Ke Zhu, Hanzhong Liu. Biometrics 2023;79(3):2127-2142
Rerandomization discards assignments whose covariates are unbalanced between the treatment and control groups in order to improve estimation and inference efficiency. However, the acceptance-rejection sampling used in rerandomization is computationally inefficient, so it is time-consuming to draw the many independent assignments needed for Fisher randomization tests and randomization-based confidence intervals. To address this problem, we propose a pair-switching rerandomization (PSRR) method that draws balanced assignments efficiently. We establish the unbiasedness and variance reduction of the difference-in-means estimator and show that Fisher randomization tests are valid under PSRR. Moreover, we propose an exact approach for inverting Fisher randomization tests into confidence intervals that is faster than existing methods. Our method applies to both nonsequentially and sequentially randomized experiments. Comprehensive simulation studies comparing the finite-sample performance of the proposed method with that of classical rerandomization indicate that PSRR attains comparable power in Fisher randomization tests while running 3-23 times faster. Finally, we apply the PSRR method to two clinical trial datasets, both of which demonstrate the advantages of our method.
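A schematic of the pair-switching idea, not the authors' exact algorithm: start from a balanced random assignment, then greedily switch treated/control pairs whenever the switch lowers the Mahalanobis imbalance, stopping once it falls below a threshold. The threshold, the random pair-selection rule, and the improvement criterion are all assumptions of this sketch.

```python
import numpy as np

def mahalanobis_imbalance(X, z):
    """Mahalanobis distance between the covariate means of the two arms."""
    d = X[z == 1].mean(axis=0) - X[z == 0].mean(axis=0)
    n1, n0 = (z == 1).sum(), (z == 0).sum()
    cov = np.cov(X, rowvar=False) * (1 / n1 + 1 / n0)
    return float(d @ np.linalg.solve(cov, d))

def pair_switching(X, threshold, rng, max_switches=1000):
    """Swap one treated unit with one control unit at a time, keeping
    only swaps that reduce the imbalance, until it drops below
    `threshold` or the switch budget is exhausted."""
    n = len(X)
    z = np.zeros(n, dtype=int)
    z[rng.choice(n, n // 2, replace=False)] = 1   # balanced start
    for _ in range(max_switches):
        m = mahalanobis_imbalance(X, z)
        if m <= threshold:
            break
        i = rng.choice(np.where(z == 1)[0])       # random treated unit
        j = rng.choice(np.where(z == 0)[0])       # random control unit
        z[i], z[j] = 0, 1                         # try the switch
        if mahalanobis_imbalance(X, z) >= m:      # revert non-improving swaps
            z[i], z[j] = 1, 0
    return z

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
z = pair_switching(X, threshold=2.0, rng=rng)
print("imbalance:", mahalanobis_imbalance(X, z))
```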

3.
In long-term clinical studies, recurrent event data are sometimes collected and used to contrast the efficacies of two treatments. Event recurrence rates can be compared using the popular negative binomial model, which incorporates patient heterogeneity into the analysis. For treatment allocation, a balanced approach with equal sample sizes in both arms is predominantly adopted. However, if one treatment is superior, it may be desirable to allocate fewer subjects to the less effective treatment. To accommodate this objective, we derive a sequential response-adaptive treatment allocation procedure based on the doubly adaptive biased coin design. The proposed allocation schemes reduce the number of subjects receiving the inferior treatment while retaining a test power comparable to that of a balanced design. The redesign of a clinical study illustrates the advantages of the procedure.

4.
We propose a novel response-adaptive randomization procedure for multi-armed trials with continuous outcomes that are assumed to be normally distributed. Our proposed rule is non-myopic and oriented toward a patient-benefit objective, yet remains computationally feasible. We derive our response-adaptive algorithm from the Gittins index for the multi-armed bandit problem, as a modification of the method first introduced in Villar et al. (Biometrics, 71, pp. 969-978). The resulting procedure can be implemented with either known or unknown variance. We illustrate the proposed procedure by simulations in the context of phase II cancer trials. Our results show that, in a multi-armed setting, using a response-adaptive allocation procedure with a continuous endpoint instead of a binary one yields efficiency and patient-benefit gains. These gains persist even if an anticipated low rate of missing data due to deaths, dropouts, or complete responses is imputed online through a procedure first introduced in this paper. Additionally, we discuss response-adaptive designs that outperform the traditional equal randomization design in both efficiency and patient-benefit measures in the multi-armed trial context.

5.
Mancuso JY, Ahn H, Chen JJ, Mancuso JP. Biometrics 2002;58(2):403-412
Preclinical animal carcinogenicity studies are usually concerned with testing the statistical significance of a dose-response relationship. When the response is a rare event, such as the development of a certain type of tumor, exact statistical methods are often employed. The exact randomization trend test based on the multivariate hypergeometric distribution loses power in the presence of treatment-related risks other than the specified response; the loss becomes more pronounced when competing risks cause progressively higher mortality rates with increasing dose, as is usual in practice. An age-adjusted form of the randomization test is proposed to adjust for this effect. The permutational distribution of Peto's cause-of-death (COD) test is also explored and compared with its asymptotic counterpart by simulation. The use of COD information has been controversial because of the subjectivity of pathologists' determinations as well as for economic reasons. The proposed age-adjusted exact test does not require COD information and is shown, via an extensive Monte Carlo simulation, to compare favorably with the COD tests. Applications of the methods to two real data sets are included.
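For intuition, here is a minimal Monte Carlo version of the unadjusted randomization trend test; the paper's age adjustment, which corrects for dose-related mortality, is omitted, and the data below are hypothetical.

```python
import numpy as np

def perm_trend_test(tumor, dose, n_perm=10000, seed=0):
    """Monte Carlo permutation trend test: the statistic is the
    dose-score/tumor-indicator cross-product, and the p-value is the
    fraction of label permutations with a statistic at least as large as
    the observed one (one-sided, increasing trend)."""
    rng = np.random.default_rng(seed)
    tumor = np.asarray(tumor, dtype=float)
    dose = np.asarray(dose, dtype=float)
    obs = float(tumor @ dose)
    hits = sum(float(rng.permutation(tumor) @ dose) >= obs
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)   # add-one finite-sample correction

# hypothetical data: 4 dose groups of 50 animals each, rare tumor
dose = np.repeat([0.0, 1.0, 2.0, 3.0], 50)
tumor = np.zeros(200)
tumor[[30, 120, 140, 160, 170, 190]] = 1   # six tumor-bearing animals
print("one-sided p:", perm_trend_test(tumor, dose))
```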

6.
When planning a two-arm group sequential clinical trial with a binary primary outcome that has severe implications for quality of life (e.g., mortality), investigators may strive to find the design that maximizes in-trial patient benefit. In such cases, Bayesian response-adaptive randomization (BRAR) is often considered because it can shift the allocation ratio throughout the trial in favor of the treatment that is currently performing better. Although previous studies have recommended fixed randomization over BRAR based on patient-benefit metrics calculated from the realized trial sample size, these comparisons have been limited by failing to hold type I and II error rates constant across designs or to consider the impact on all individuals directly affected by the design choice. In this paper, we propose a metric for comparing designs with the same type I and II error rates that reflects expected outcomes among individuals who would participate in the trial if enrollment were open when they became eligible. We demonstrate how to use the proposed metric to guide the choice of design in the context of two recent trials in persons suffering out-of-hospital cardiac arrest. Using computer simulation, we demonstrate that various implementations of group sequential BRAR offer modest improvements with respect to the proposed metric relative to conventional group sequential monitoring alone.
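A minimal sketch of the BRAR mechanism for a binary endpoint, without the group-sequential monitoring or the error-rate calibration discussed above: allocation probabilities track the posterior probability that the treatment arm is better, with an assumed damping exponent c.

```python
import numpy as np

def brar_trial(p_ctrl, p_trt, n_patients, rng, c=0.5):
    """Bayesian response-adaptive randomization with Beta(1,1) priors:
    each patient is assigned to the treatment arm with probability
    pr^c / (pr^c + (1-pr)^c), where pr is the posterior probability
    (estimated by sampling) that the treatment arm is better."""
    s, f = np.ones(2), np.ones(2)             # Beta posterior parameters
    n_alloc = np.zeros(2, dtype=int)
    for _ in range(n_patients):
        draws = rng.beta(s, f, size=(1000, 2))
        pr = (draws[:, 1] > draws[:, 0]).mean()
        p_alloc = pr ** c / (pr ** c + (1 - pr) ** c)
        arm = int(rng.random() < p_alloc)     # 1 = treatment, 0 = control
        y = rng.random() < (p_trt if arm else p_ctrl)
        s[arm] += y
        f[arm] += 1 - y
        n_alloc[arm] += 1
    return n_alloc

rng = np.random.default_rng(2)
print("allocations (control, treatment):", brar_trial(0.3, 0.5, 200, rng))
```

The damping exponent c < 1 slows the drift toward the leading arm, a common device for stabilizing early-trial behavior.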

7.
In clinical trials, sample size reestimation is a useful strategy for mitigating the risk of uncertainty in design assumptions and ensuring sufficient power for the final analysis. In particular, sample size reestimation based on the unblinded interim effect size often leads to a sample size increase, and statistical adjustment is usually needed in the final analysis to ensure that the type I error rate is appropriately controlled. In the current literature, sample size reestimation and the corresponding type I error control are discussed in the context of maintaining the original randomization ratio across treatment groups, which we refer to as a “proportional increase.” In practice, not all studies are designed with an optimal randomization ratio, for practical reasons. In such cases, when the sample size is to be increased, it is more efficient to allocate the additional subjects so that the randomization ratio is brought closer to an optimal ratio. In this research, we propose an adaptive randomization-ratio change when a sample size increase is warranted. We refer to this strategy as a “nonproportional increase,” because the number of subjects added to each treatment group is no longer proportional to the original randomization ratio. The proposed method boosts power not only through the increased sample size but also through efficient allocation of the additional subjects. Control of the type I error rate is shown analytically, and simulations illustrate the theoretical results.
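The efficiency argument is easy to see with a normal-approximation power calculation: with hypothetical numbers, adding the same 60 subjects “nonproportionally” (moving a 2:1 trial toward 1:1) buys more power than a proportional increase. This sketch uses a two-sample z-test with equal known variances and ignores the type I error adjustment the paper develops.

```python
from math import sqrt
from scipy.stats import norm

def power_two_sample(n1, n2, delta, sigma=1.0, alpha=0.05):
    """Power of a two-sided two-sample z-test for a mean difference delta."""
    se = sigma * sqrt(1 / n1 + 1 / n2)
    z = norm.ppf(1 - alpha / 2)
    return norm.sf(z - delta / se) + norm.cdf(-z - delta / se)

delta = 0.4
# original design: 2:1 randomization, 100 vs 50 subjects
print("original       :", round(power_two_sample(100, 50, delta), 3))
# proportional increase: 60 extra subjects keep the 2:1 ratio
print("proportional   :", round(power_two_sample(140, 70, delta), 3))
# nonproportional increase: same 60 subjects move toward the 1:1 optimum
print("nonproportional:", round(power_two_sample(110, 100, delta), 3))
```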

8.
There is increasing evidence of an association between mitochondrial DNA (mtDNA) and aerobic capacity; however, whether mtDNA haplogroups are associated with elite endurance athlete status is more controversial. We compared the frequency distribution of mtDNA haplogroups among the following groups of Spanish (Caucasian) men: 102 elite endurance athletes (professional road cyclists, endurance runners), 51 elite power athletes (jumpers, throwers, and sprinters), and 478 non-athletic controls. We observed a significant difference between endurance athletes and controls (Fisher exact test = 17.89, P = 0.015; Bonferroni-corrected significance threshold = 0.017), but not between power athletes and controls (Fisher exact test = 47.99, P = 0.381) or between endurance and power athletes (Fisher exact test = 5.53, P = 0.597). The V haplogroup was overrepresented in endurance athletes (15.7%) compared with controls (7.5%) (odds ratio: 2.284; 95% confidence interval: 1.237, 4.322). In conclusion, our findings overall support the idea that mtDNA variation could be among the numerous contributors to elite endurance athlete status, whereas no association was found with elite power athlete status.
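The haplogroup V comparison can be reproduced approximately from the figures quoted in the abstract; the counts are back-calculated from the reported percentages, so treat them as an assumption.

```python
from scipy.stats import fisher_exact

# 16/102 endurance athletes (15.7%) vs. 36/478 controls (7.5%) carry
# haplogroup V; counts reconstructed from the reported percentages
table = [[16, 102 - 16],
         [36, 478 - 36]]
odds_ratio, p = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.3f}, p = {p:.4f}")   # OR matches the reported 2.284
```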

9.
We propose a dynamic allocation procedure that increases power and efficiency when measuring an average treatment effect in sequential randomized trials by exploiting some subjects' previously assessed responses. Subjects arrive sequentially and are either randomized or paired with a previously randomized subject and administered the alternate treatment. The pairing is made via a dynamic matching criterion that iteratively learns which covariates are important to the response. We develop estimators for the average treatment effect as well as an exact test. We illustrate our method's increase in efficiency and power over other allocation procedures in both simulated scenarios and a clinical trial dataset. An R package, “SeqExpMatch”, for use by practitioners is available on CRAN.
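A simplified sketch of the sequential matched-allocation idea: each arrival is either paired with the nearest unmatched earlier subject (receiving the alternate treatment) or randomized and added to the pool. The caliper rule and plain Euclidean distance are stand-ins for the paper's dynamic, response-informed matching criterion.

```python
import numpy as np

def sequential_match(X_stream, caliper, rng):
    """Allocate subjects arriving in sequence: match to the closest
    earlier unmatched subject within `caliper` (assigning the alternate
    treatment), otherwise randomize and join the unmatched pool."""
    pool, z = [], []                      # pool holds unmatched indices
    for t, x in enumerate(X_stream):
        if pool:
            dists = [np.linalg.norm(x - X_stream[i]) for i in pool]
            k = int(np.argmin(dists))
            if dists[k] <= caliper:
                i = pool.pop(k)
                z.append(1 - z[i])        # alternate treatment of the match
                continue
        z.append(int(rng.integers(2)))    # randomize; enter the pool
        pool.append(t)
    return np.array(z)

rng = np.random.default_rng(5)
X = rng.normal(size=(30, 3))              # 30 arrivals, 3 covariates
print(sequential_match(X, caliper=1.5, rng=rng))
```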

10.
The aim of the present paper is to provide optimal allocations for comparative clinical trials with survival outcomes. The suggested targets are derived by adopting a compound optimization strategy based on a subjective weighting of the relative importance of inferential demands and ethical concerns. The resulting compound optimal targets are continuous functions of the treatment effects, so we provide the conditions under which they can be approached by standard response-adaptive randomization procedures, also guaranteeing the applicability of classical asymptotic inference. The operating characteristics of the suggested methodology are verified both theoretically and by simulation, including robustness to model misspecification. Compared with other available proposals, our strategy always assigns more patients to the best treatment without compromising inference, taking estimation efficiency and power into account as well. We illustrate our procedure by redesigning two real oncological trials.

11.
In animal vaccination experiments with a binary outcome (diseased/non-diseased), the comparison of the vaccinated and control groups is often based on the Fisher exact test. We propose a tool for evaluating different designs based on the expected power of the Fisher exact test. The expected power can sometimes, unexpectedly, increase with decreasing sample size and/or increasing imbalance. The reasons for these peculiar results are explained and compared with the results of two other types of tests: the unconditional test and the randomisation test. In a vaccination experiment with a restricted number of animals, it is shown to be important to consider expected power in order to choose the most appropriate design.
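The inner calculation behind expected power can be sketched as follows: for fixed disease probabilities, the exact power of the Fisher exact test is obtained by enumerating all outcome tables and weighting rejections by their binomial probabilities. The paper's expected power then averages this quantity over a prior on the rates, which is omitted here, and the rates and sample sizes below are hypothetical.

```python
from scipy.stats import binom, fisher_exact

def fisher_power(n_vac, n_ctl, p_vac, p_ctl, alpha=0.05):
    """Exact power of the two-sided Fisher exact test: enumerate every
    (x_vac, x_ctl) pair and weight each rejection by its probability
    under independent binomial sampling with the assumed disease rates."""
    power = 0.0
    for x1 in range(n_vac + 1):
        for x0 in range(n_ctl + 1):
            _, p = fisher_exact([[x1, n_vac - x1], [x0, n_ctl - x0]])
            if p <= alpha:
                power += (binom.pmf(x1, n_vac, p_vac)
                          * binom.pmf(x0, n_ctl, p_ctl))
    return power

# compare a balanced design with two unbalanced ones of the same total size
for n_vac, n_ctl in [(10, 10), (12, 8), (8, 12)]:
    print(n_vac, n_ctl, round(fisher_power(n_vac, n_ctl, 0.1, 0.8), 3))
```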

12.
Zhang K, Traskin M, Small DS. Biometrics 2012;68(1):75-84
For group-randomized trials, randomization inference based on rank statistics provides robust, exact inference against nonnormal distributions. However, in a matched-pair design, the currently available rank-based statistics lose significant power compared with normal linear mixed model (LMM) test statistics when the LMM is true. In this article, we develop, under certain assumptions, an optimal test statistic over all statistics of the form of a weighted sum of signed Mann-Whitney-Wilcoxon statistics. This test is almost as powerful as the LMM test when the LMM is true, but it is much more powerful for heavy-tailed distributions. A simulation study is conducted to examine the power.

13.
In the statistical evaluation of data from a dose-response experiment, it is frequently of interest to test for a dose-related trend: an increasing trend in response with increasing dose. The randomization trend test, a generalization of Fisher's exact test, has been recommended for animal tumorigenicity testing when the numbers of tumor occurrences are small. This paper examines the type I error of the randomization trend test and of the Cochran-Armitage and Mantel-Haenszel tests. Simulation results show that when tumor incidence rates are less than 10%, the randomization test is conservative, and it becomes very conservative when the incidence rate is less than 5%. The Cochran-Armitage and Mantel-Haenszel tests are slightly anti-conservative (liberal) when the incidence rates are larger than 3%. Further, we propose a less conservative method of calculating the p-value of the randomization trend test by excluding some permutations whose probabilities of occurrence are greater than the probability of the observed outcome.
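A small sketch of both p-values for the exact conditional trend test, conditioning on the total number of responders. The exclusion rule for the modified p-value follows one plausible reading of the abstract and should be treated as an assumption, as should the example data.

```python
from itertools import product
from math import comb

def exact_trend_pvalues(x_obs, n, scores):
    """Exact conditional (randomization) trend test for k dose groups with
    sizes n, responder counts x_obs, and dose scores. Conditioning on the
    total number of responders makes the outcome probabilities multivariate
    hypergeometric. Returns the usual one-sided p-value and a modified
    p-value that drops outcomes more probable than the observed one."""
    m, N = sum(x_obs), sum(n)
    t_obs = sum(s * x for s, x in zip(scores, x_obs))

    def prob(x):
        num = 1
        for ni, xi in zip(n, x):
            num *= comb(ni, xi)
        return num / comb(N, m)

    p_obs = prob(x_obs)
    p_std = p_mod = 0.0
    for x in product(*(range(min(ni, m) + 1) for ni in n)):
        if sum(x) != m:
            continue
        t, p = sum(s * xi for s, xi in zip(scores, x)), prob(x)
        if t >= t_obs:
            p_std += p
            if p <= p_obs:   # exclusion rule (assumed interpretation)
                p_mod += p
    return p_std, p_mod

# hypothetical tumor counts in 4 dose groups of 50 animals each
print(exact_trend_pvalues([0, 1, 2, 4], [50, 50, 50, 50], [0, 1, 2, 3]))
```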

14.
Cheng Y, Shen Y. Biometrics 2004;60(4):910-918
For confirmatory trials used in regulatory decision making, it is important that adaptive designs provide inference at the correct nominal level, as well as unbiased estimates and confidence intervals for the treatment comparisons in the actual trials. However, the naive point estimate and its confidence interval are often biased in adaptive sequential designs. We develop a new procedure for estimation following a test in a sample size reestimation design. The method for obtaining an exact confidence interval and point estimate is based on a general distributional property of a pivot function of the self-designing group sequential clinical trial of Shen and Fisher (1999, Biometrics 55, 190-197). A modified estimate is proposed that explicitly accounts for the futility stopping boundary, with reduced bias when block sizes are small. The proposed estimates are shown to be consistent, and their computation is straightforward. We also provide a modified weight function to improve the power of the test. Extensive simulation studies show that the exact confidence intervals attain accurate nominal coverage probability and that the proposed point estimates are nearly unbiased with practical sample sizes.

15.

Background

In conventional epidemiology confounding of the exposure of interest with lifestyle or socioeconomic factors, and reverse causation whereby disease status influences exposure rather than vice versa, may invalidate causal interpretations of observed associations. Conversely, genetic variants should not be related to the confounding factors that distort associations in conventional observational epidemiological studies. Furthermore, disease onset will not influence genotype. Therefore, it has been suggested that genetic variants that are known to be associated with a modifiable (nongenetic) risk factor can be used to help determine the causal effect of this modifiable risk factor on disease outcomes. This approach, mendelian randomization, is increasingly being applied within epidemiological studies. However, there is debate about the underlying premise that associations between genotypes and disease outcomes are not confounded by other risk factors. We examined the extent to which genetic variants, on the one hand, and nongenetic environmental exposures or phenotypic characteristics on the other, tend to be associated with each other, to assess the degree of confounding that would exist in conventional epidemiological studies compared with mendelian randomization studies.

Methods and Findings

We estimated pairwise correlations between nongenetic baseline variables and genetic variables in a cross-sectional study comparing the number of correlations that were statistically significant at the 5%, 1%, and 0.01% level (α = 0.05, 0.01, and 0.0001, respectively) with the number expected by chance if all variables were in fact uncorrelated, using a two-sided binomial exact test. We demonstrate that behavioural, socioeconomic, and physiological factors are strongly interrelated, with 45% of all possible pairwise associations between 96 nongenetic characteristics (n = 4,560 correlations) being significant at the p < 0.01 level (the ratio of observed to expected significant associations was 45; p-value for difference between observed and expected < 0.000001). Similar findings were observed for other levels of significance. In contrast, genetic variants showed no greater association with each other, or with the 96 behavioural, socioeconomic, and physiological factors, than would be expected by chance.
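The headline observed-versus-expected comparison can be checked directly with a binomial exact test of the kind described; the counts are reconstructed from the reported percentages, so they are approximate.

```python
from scipy.stats import binomtest

# 4,560 pairwise correlations among 96 nongenetic variables, with 45%
# significant at p < 0.01 versus the 1% expected under independence
n_pairs = 96 * 95 // 2                    # = 4,560
observed = round(0.45 * n_pairs)          # ~2,052 significant correlations
result = binomtest(observed, n_pairs, p=0.01, alternative="greater")
print(observed, "significant; p =", result.pvalue)
print("observed/expected ratio:", round(observed / (0.01 * n_pairs)))  # ~45
```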

Conclusions

These data illustrate why observational studies have produced misleading claims regarding potentially causal factors for disease. The findings demonstrate the potential power of a methodology that utilizes genetic variants as indicators of exposure level when studying environmentally modifiable risk factors.

16.
Optimal multivariate matching before randomization
Although blocking or pairing before randomization is a basic principle of experimental design, the principle is almost invariably applied to at most one or two blocking variables. Here, we discuss the use of optimal multivariate matching prior to randomization to improve covariate balance for many variables at the same time, presenting an algorithm and a case study of its performance. The method is useful when all subjects, or large groups of subjects, are randomized at the same time. Optimal matching divides a single group of 2n subjects into n pairs to minimize covariate differences within pairs (the so-called nonbipartite matching problem); one subject in each pair is then picked at random for treatment, the other being assigned to control. Using the baseline covariate data for 132 patients from an actual, unmatched, randomized experiment, we construct 66 pairs matched on 14 covariates. We then create 10,000 unmatched and 10,000 matched randomized experiments by repeatedly randomizing the 132 patients, and compare the covariate balance with and without matching. By every measure, every one of the 14 covariates was substantially better balanced when randomization was performed within matched pairs. Even after covariance adjustment for chance imbalances in the 14 covariates, matched randomizations provided more accurate estimates than unmatched randomizations, the increase in accuracy being equivalent, on average, to a 7% increase in sample size. In randomization tests of no treatment effect, matched randomizations using the signed rank test had substantially higher power than unmatched randomizations using the rank sum test, even when only 2 of the 14 covariates were relevant to a simulated response. Unmatched randomizations experienced rare disasters that were consistently avoided by matched randomizations.
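A minimal sketch of nonbipartite matching followed by within-pair randomization, using networkx's general matching solver rather than the authors' algorithm; the covariates are simulated, and the Mahalanobis distance is one common choice.

```python
import numpy as np
import networkx as nx
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 5))                 # 2n subjects, 5 covariates
D = squareform(pdist(X, metric="mahalanobis"))

# networkx maximizes total edge weight, so negate the distances to obtain
# the minimum-distance nonbipartite pairing; maxcardinality forces n pairs
G = nx.Graph()
for i in range(len(X)):
    for j in range(i + 1, len(X)):
        G.add_edge(i, j, weight=-D[i, j])
pairs = nx.max_weight_matching(G, maxcardinality=True)

# randomize within each matched pair: one to treatment, one to control
z = np.empty(len(X), dtype=int)
for i, j in pairs:
    z[i] = rng.integers(2)
    z[j] = 1 - z[i]
print(sorted(pairs))
print("assignments:", z)
```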

17.
Minimization is a valuable method for allocating participants between the control and experimental arms of clinical studies. Its use reduces differences that might otherwise arise by chance between the study arms in the distribution of patient characteristics such as gender, ethnicity, and age. However, unlike randomization, minimization requires real-time assessment of each new participant with respect to the existing distribution of relevant participant characteristics across the arms of the study. For multi-site studies, this necessitates a centralized computational service shared by all study locations; unfortunately, no suitable free or open-source software has been available for this purpose. OxMaR was developed to enable researchers in any location to use minimization for patient allocation and to access the minimization algorithm from any device that can connect to the internet, such as a desktop computer, tablet, or mobile phone. The software is complete in itself and requires no special packages or libraries to be installed. It is simple to set up and run over the internet using online facilities that are very low cost or even free to the user. Importantly, it provides real-time allocation information to the study lead or administrator and generates real-time distributed backups with each allocation. OxMaR can readily be modified and customised and can also be used for standard randomization. It has been extensively tested and used successfully in a low-budget multi-centre study. Hitherto, the logistical difficulties of minimization have precluded its use in many small studies, and this software should allow more widespread adoption of minimization, leading to studies with better-matched control and experimental arms. OxMaR should be particularly valuable in low-resource settings.
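For orientation, here is a minimal Pocock-Simon style minimization routine of the kind OxMaR implements; it is not OxMaR's code, and the biased assignment probability p_best is an assumed value.

```python
import random
from collections import defaultdict

def minimization_assign(patient, counts, factors, p_best=0.8, rng=random):
    """Pocock-Simon style minimization: for each arm, total the number of
    enrolled patients sharing the new patient's level on each factor,
    then assign the less-loaded arm with probability p_best (ties are
    broken by pure randomization)."""
    totals = [sum(counts[arm][f][patient[f]] for f in factors)
              for arm in (0, 1)]
    if totals[0] != totals[1]:
        best = int(totals[1] < totals[0])     # arm with smaller total
    else:
        best = rng.randrange(2)               # tie: fair coin
    arm = best if rng.random() < p_best else 1 - best
    for f in factors:                         # update the running counts
        counts[arm][f][patient[f]] += 1
    return arm

random.seed(6)
factors = ["sex", "age_group"]
counts = [{f: defaultdict(int) for f in factors} for _ in (0, 1)]
for patient in ({"sex": "F", "age_group": "<65"},
                {"sex": "M", "age_group": "<65"},
                {"sex": "F", "age_group": "65+"}):
    print(patient, "-> arm", minimization_assign(patient, counts, factors))
```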

18.
Randomization in a comparative experiment has, as one aim, the control of bias in the initial selection of experimental units. When the experiment is a clinical trial accruing patients, two additional aims are the control of admission bias and of chronologic bias. This can be accomplished by using a method of randomization, such as Efron's “biased coin design,” that sequentially forces balance. As an extension of Efron's design, this paper develops a class of conditional Markov chain designs in which the sequential imbalances in the treatment allocation serve as the states of a Markov process. Through the choice of appropriate transition probabilities, a range of possible designs can be attained. An additional objective of physical randomization is to provide a model for data analysis, and such a randomization-theoretic analysis is presented for the current designs. In addition, Monte Carlo sampling results are given to support the proposed normal-theory approximation to the exact randomization distribution.
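Efron's biased coin design, the starting point of the Markov chain class described above, is easy to state in code; the paper's general conditional transition probabilities are not reproduced here, only the original p = 2/3 rule.

```python
import random

def efron_biased_coin(n_patients, p=2/3, rng=random):
    """Efron's biased coin: when the arms are unbalanced, assign the next
    patient to the lagging arm with probability p; when balanced, toss a
    fair coin. The imbalance n_A - n_B then evolves as a Markov chain on
    the integers, the viewpoint the conditional Markov chain designs extend."""
    n_a = n_b = 0
    seq = []
    for _ in range(n_patients):
        if n_a == n_b:
            take_a = rng.random() < 0.5
        else:
            take_a = rng.random() < (p if n_a < n_b else 1 - p)
        seq.append("A" if take_a else "B")
        n_a += take_a
        n_b += not take_a
    return seq, n_a - n_b

random.seed(7)
seq, imbalance = efron_biased_coin(30)
print("".join(seq), "| final imbalance:", imbalance)
```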

19.
Rosenbaum PR. Biometrics 2011;67(3):1017-1027
In an observational or nonrandomized study of treatment effects, a sensitivity analysis indicates the magnitude of bias from unmeasured covariates that would need to be present to alter the conclusions of a naïve analysis that presumes adjustments for observed covariates suffice to remove all bias. The power of a sensitivity analysis is the probability that it will reject a false hypothesis about treatment effects while allowing for a departure from random assignment of a specified magnitude; in particular, if this specified magnitude is “no departure,” this coincides with the power of a randomization test in a randomized experiment. A new family of u-statistics is proposed that includes Wilcoxon's signed rank statistic but also includes other statistics with substantially higher power when a sensitivity analysis is performed in an observational study. Wilcoxon's statistic has high power to detect small effects in large randomized experiments (it often has good Pitman efficiency), but small effects are invariably sensitive to small unobserved biases. Members of this family of u-statistics that emphasize medium to large effects can have substantially higher power in a sensitivity analysis. For example, in one situation with 250 pair differences that are Normal with expectation 1/2 and variance 1, the power of a sensitivity analysis using Wilcoxon's statistic is 0.08, while the power of another member of the family is 0.66. The topic is examined by performing sensitivity analyses in three observational studies, using an asymptotic measure called the design sensitivity, and by simulating power in finite samples. The three examples are drawn from epidemiology, clinical medicine, and genetic toxicology.

20.
The use of midazolam for dental care in patients with intellectual disability is poorly documented. This study aimed to evaluate the effectiveness and safety of conscious sedation with intravenous midazolam in adults and children with intellectual disability (ID) compared with dentally anxious patients (DA). Ninety-eight patients with ID and 44 patients with DA scheduled for intravenous midazolam participated in the study over 187 and 133 sessions, respectively. Evaluation criteria were the success of dental treatment, cooperation level (modified Venham scale), and the occurrence of adverse effects. The mean intravenous dose administered was 8.8±4.9 mg in ID sessions and 9.8±4.1 mg in DA sessions (t-test, NS). 50% N2O/O2 was administered during cannulation in 51% of ID sessions and 61% of DA sessions (NS, Fisher exact test). Oral or rectal midazolam premedication was administered for cannulation in 31% of ID sessions and 3% of DA sessions (p<0.001, Fisher exact test). Dental treatment was successful in 9 out of 10 sessions for both groups. Minor adverse effects occurred in 16.6% of ID and 6.8% of DA sessions (p = 0.01, Fisher exact test). Patients with ID were more often very disturbed during cannulation (25.4% of ID vs. 3.9% of DA sessions) and were less often relaxed after induction (58.9% ID vs. 90.3% DA) and during dental treatment (39.5% ID vs. 59.7% DA) (p<0.001, Fisher exact test) than patients with DA. When midazolam sedation was repeated, cooperation improved in both groups. Conscious sedation with intravenous midazolam, with or without premedication and/or inhalation sedation (50% N2O/O2), was shown to be safe and effective in patients with intellectual disability when administered by dentists.
