Similar Articles
20 similar articles retrieved (search time: 31 ms)
1.
Randomization in a comparative experiment has, as one aim, the control of bias in the initial selection of experimental units. When the experiment is a clinical trial employing the accrual of patients, two additional aims are the control of admission bias and control of chronologic bias. This can be accomplished by using a method of randomization, such as the “biased coin design” of Efron, which sequentially forces balance. As an extension of Efron's design, this paper develops a class of conditional Markov chain designs. The detailed randomization employed utilizes the sequential imbalances in the treatment allocation as states in a Markov process. Through the use of appropriate transition probabilities, a range of possible designs can be attained. An additional objective of physical randomization is to provide a model for data analysis. Such a randomization theoretic analysis is presented for the current designs. In addition, Monte Carlo sampling results are given to support the proposed normal theory approximation to the exact randomization distribution.
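As a concrete illustration of the building block being extended, here is a minimal sketch of Efron's biased coin rule, under which the imbalance n_A - n_B evolves as a Markov chain on the integers. The bias p = 2/3 is Efron's classic choice; the conditional transition probabilities of the paper's own designs are not reproduced here.

```python
import random

def biased_coin_assign(n_a, n_b, p=2/3):
    """Efron's biased coin: favor the under-represented arm with probability p."""
    if n_a == n_b:                          # balanced state: toss a fair coin
        return "A" if random.random() < 0.5 else "B"
    lagging = "A" if n_a < n_b else "B"
    leading = "B" if lagging == "A" else "A"
    return lagging if random.random() < p else leading

# Sequentially assign 20 patients; the imbalance n_A - n_B is the state of
# the underlying Markov chain that the conditional designs generalize.
n_a = n_b = 0
for _ in range(20):
    arm = biased_coin_assign(n_a, n_b)
    n_a += arm == "A"
    n_b += arm == "B"
print("counts:", n_a, n_b, "imbalance:", n_a - n_b)
```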

2.
S J Pocock. Biometrics, 1979, 35(1): 183-197.
This article is intended as a practical guide to the various methods of patient assignment in clinical trials. Topics discussed include a critical appraisal of non-randomized studies, methods of restricted randomization such as random permuted blocks and the biased coin technique, the extent to which stratification is necessary and the methods available, the possible benefits of randomization with a greater proportion of patients on a new treatment, factorial designs, crossover designs, randomized consent designs and adaptive assignment procedures. With all this diversity of approach it needs to be remembered that the effective implementation and reliability of a relatively straightforward randomization scheme may be more important than attempting theoretical optimality with more complex designs.
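Of the restricted-randomization methods surveyed, random permuted blocks is the easiest to state concretely. A minimal sketch (the block size of 4 and the two-arm setting are arbitrary choices here):

```python
import random

def permuted_block_sequence(n_patients, block_size=4, arms=("A", "B")):
    """Random permuted blocks: each block contains every arm equally often,
    in random order, so imbalance never exceeds block_size / 2."""
    assert block_size % len(arms) == 0
    schedule = []
    while len(schedule) < n_patients:
        block = list(arms) * (block_size // len(arms))
        random.shuffle(block)
        schedule.extend(block)
    return schedule[:n_patients]

print(permuted_block_sequence(10))  # e.g. ['B', 'A', 'A', 'B', 'A', ...]
```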

3.
Zhang L, Rosenberger WF. Biometrics, 2006, 62(2): 562-569.
We provide an explicit asymptotic method to evaluate the performance of different response-adaptive randomization procedures in clinical trials with continuous outcomes. We use this method to investigate four different response-adaptive randomization procedures. Their performance, especially in power and in skewing treatment assignment toward the better treatment, is thoroughly evaluated theoretically. These results are then verified by simulation. Our analysis concludes that the doubly adaptive biased coin design procedure targeting optimal allocation is the best one for practical use. We also consider the effect of delay in responses and of nonstandard responses, for example, Cauchy-distributed responses. We illustrate our procedure by redesigning a real clinical trial.
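The recommended procedure allocates according to Hu and Zhang's doubly adaptive biased coin function. The sketch below is a simplified two-arm version targeting Neyman allocation estimated from accruing responses; the tuning constant gamma = 2, the burn-in size, the response distributions, and the Neyman target itself (the paper's optimal-allocation target differs) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def dbcd_prob(x, rho, gamma=2.0):
    """Hu-Zhang allocation function g(x, rho): probability of assigning arm A
    when a fraction x of patients is already on arm A and the target is rho."""
    x = min(max(x, 1e-6), 1 - 1e-6)
    a = rho * (rho / x) ** gamma
    b = (1 - rho) * ((1 - rho) / (1 - x)) ** gamma
    return a / (a + b)

mu, sigma = (0.0, 1.0), (1.0, 2.0)      # hypothetical response distributions
arms, resp = [], []
for i in range(200):
    if i < 20:                           # burn-in: equal randomization
        p_a = 0.5
    else:
        ya = [r for a, r in zip(arms, resp) if a == 0]
        yb = [r for a, r in zip(arms, resp) if a == 1]
        sa = np.std(ya) if len(ya) > 1 else 1.0
        sb = np.std(yb) if len(yb) > 1 else 1.0
        rho = sa / (sa + sb)             # estimated Neyman target for arm A
        p_a = dbcd_prob(np.mean([a == 0 for a in arms]), rho)
    arm = 0 if rng.random() < p_a else 1
    arms.append(arm)
    resp.append(rng.normal(mu[arm], sigma[arm]))
print("proportion on arm A:", round(np.mean([a == 0 for a in arms]), 3))
```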

4.
Cluster randomization trials with relatively few clusters have been widely used in recent years for the evaluation of health-care strategies. On average, randomized treatment assignment achieves balance in both known and unknown confounding factors between treatment groups; in practice, however, investigators can only introduce a small amount of stratification and cannot balance on all the important variables simultaneously. This limitation arises especially when there are many confounding variables in small studies. Such is the case in the INSTINCT trial, designed to investigate the effectiveness of an education program in enhancing tPA use in stroke patients. In this article, we introduce a new randomization design, the balance match weighted (BMW) design, which applies the technique of optimal matching with constraints to a prospective randomized design and aims to minimize the mean squared error (MSE) of the treatment effect estimator. A simulation study shows that, under various confounding scenarios, the BMW design can yield substantial reductions in the MSE of the treatment effect estimator compared to a completely randomized or matched-pair design. The BMW design is also compared with a model-based approach adjusting for the estimated propensity score and with the Robins-Mark-Newey E-estimation procedure in terms of efficiency and robustness of the treatment effect estimator. These investigations suggest that the BMW design is more robust and usually, although not always, more efficient than either of these approaches. The design is also seen to be robust against heterogeneous error. We illustrate these methods by proposing a design for the INSTINCT trial.

5.
Restricted randomization designs in clinical trials.
R Simon. Biometrics, 1979, 35(2): 503-512.
Though therapeutic clinical trials are often categorized as using either "randomization" or "historical controls" as a basis for treatment evaluation, pure random assignment of treatments is rarely employed. Instead various restricted randomization designs are used. The restrictions include the balancing of treatment assignments over time and the stratification of the assignment with regard to covariates that may affect response. Restricted randomization designs for clinical trials differ from those of other experimental areas because patients arrive sequentially and a balanced design cannot be ensured. The major restricted randomization designs and arguments concerning the proper role of stratification are reviewed here. The effect of randomization restrictions on the validity of significance tests is discussed.

6.
On constrained balance randomization for clinical trials
D M Titterington. Biometrics, 1983, 39(4): 1083-1086.
A method is proposed for calculating the probabilities of assignment of a patient to treatments; it involves minimizing a quadratic criterion subject to a balance constraint. The optimal probabilities are very easy to compute. Numerical illustration is given and comparisons are drawn with the entropy-based methods of Klotz (1978, Biometrics 34, 283-287).
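Titterington's exact criterion is not reproduced here, but the flavor of the computation can be sketched: choose assignment probabilities as close as possible (in squared error) to equal allocation, subject to a constraint pulling the expected counts toward balance. The specific criterion, constraint strength, and example counts below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

def balanced_probs(counts, pull=0.5):
    """Minimize ||p - 1/K||^2 over assignment probabilities p, subject to
    sum(p) = 1, p >= 0, and a balance constraint: the expected deviation of
    the chosen arm must be negative enough that under-filled arms catch up."""
    counts = np.asarray(counts, float)
    K = len(counts)
    dev = counts - counts.mean()            # current imbalance of each arm
    cons = [{"type": "eq", "fun": lambda p: p.sum() - 1},
            {"type": "ineq",
             "fun": lambda p: -(p @ dev) - pull * np.abs(dev).mean()}]
    res = minimize(lambda p: ((p - 1.0 / K) ** 2).sum(), np.full(K, 1.0 / K),
                   bounds=[(0, 1)] * K, constraints=cons, method="SLSQP")
    return res.x

print(balanced_probs([7, 5, 3]).round(3))   # under-filled arms get larger p
```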

7.

Background

The toss of a coin has been a method used to determine random outcomes for centuries. It is still used in some research studies as a method of randomization, although it has largely been discredited for that purpose. We sought to provide evidence that the toss of a coin can be manipulated.

Methods

We performed a prospective experiment involving otolaryngology residents in Vancouver, Canada. The main outcome was the proportion of “heads” coin tosses achieved (out of 300 attempts) by each participant. Each of the participants attempted to flip the coin so as to achieve a heads result.

Results

All participants achieved more heads than tails results, with 7 of the 13 participants having significantly more heads results (p ≤ 0.05). The highest proportion of heads achieved was 0.68 (95% confidence interval 0.62–0.73, p < 0.001).
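The reported figures can be checked with a simple exact binomial test; the sketch below assumes the best performer's 0.68 proportion corresponds to 204 heads in 300 tosses.

```python
from scipy.stats import binomtest

# 204 heads out of 300 tosses against a fair-coin null of p = 0.5.
result = binomtest(204, n=300, p=0.5)             # two-sided exact test
ci = result.proportion_ci(confidence_level=0.95)  # Clopper-Pearson interval
print(f"p-value: {result.pvalue:.1e}")            # well below 0.001
print(f"95% CI: ({ci.low:.2f}, {ci.high:.2f})")   # about (0.62, 0.73)
```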

Interpretation

Certain people are able to successfully manipulate the toss of a coin. This throws into doubt the validity of using a coin toss to determine a chance result.

The toss or flip of a coin to randomly assign a decision traditionally involves throwing a coin into the air and seeing which side lands facing up. This method may be used to resolve a dispute, see who goes first in a game or determine which type of treatment a patient receives in a clinical trial. There are only 2 possible outcomes, “heads” or “tails,” although, in theory, landing on an edge is possible. (Research suggests that when the coin is allowed to fall onto a hard surface, the chance of this happening is on the order of 1 in 6000 tosses.1)

When a coin is flipped into the air, it is supposedly made to rotate about an axis parallel to its flat surfaces. The coin is initially placed on a bent forefinger, and the thumb is released from under the coin surface, where it has been held under tension. The thumbnail strikes the part of the coin unsupported by the index finger, sending it rotating upward. All this is done with an upward movement of the hand and forearm. The coin may be allowed to fall to the floor or other surface, or it may be caught by the “tosser” and sometimes turned onto the back of the opposite hand and then revealed. The catching method should not matter, provided it is consistent for each toss. The opponent often calls the toss while the coin is airborne, although in the case of randomization for clinical trials this is unnecessary because one is simply looking for an outcome.

The appeal of the coin toss is that it is a simple, seemingly unbiased, method of deciding between 2 options. Although the outcome of a coin toss should be at even odds, the outcome may well not be. Historically, the toss of a coin before a duel reputedly decided which person had his back to the sun, an obvious advantage when taking aim! In medical trials, a simple statistical manipulation can have a dramatic effect on the treatment a patient receives. Our hypothesis is that with minimal training, the outcome of the toss can be weighted heavily to the call of the tosser, thus abolishing the expected 50:50 chance result and allowing for manipulation of an apparently random event.

8.
Shepherd BE, Gilbert PB, Dupont CT. Biometrics, 2011, 67(3): 1100-1110.
In randomized studies researchers may be interested in the effect of treatment assignment on a time-to-event outcome that only exists in a subset selected after randomization. For example, in preventative HIV vaccine trials, it is of interest to determine whether randomization to vaccine affects the time from infection diagnosis until initiation of antiretroviral therapy. Earlier work assessed the effect of treatment on outcome among the principal stratum of individuals who would have been selected regardless of treatment assignment. These studies assumed monotonicity, that one of the principal strata was empty (e.g., every person infected in the vaccine arm would have been infected if randomized to placebo). Here, we present a sensitivity analysis approach for relaxing monotonicity with a time-to-event outcome. We also consider scenarios where selection is unknown for some subjects because of noninformative censoring (e.g., infection status k years after randomization is unknown for some because of staggered study entry). We illustrate our method using data from an HIV vaccine trial.

9.
In this paper, we describe a new restricted randomization method called run-reversal equilibrium (RRE), which is a Nash equilibrium of a game in which (1) the clinical trial statistician chooses a sequence of medical treatments, and (2) clinical investigators make treatment predictions. RRE randomization counteracts the way each investigator could observe treatment histories in order to forecast upcoming treatments. Computation of a run-reversal equilibrium reflects the fact that the treatment history at a particular site is imperfectly correlated with the treatment imbalance for the overall trial. An attractive feature of RRE randomization is that treatment imbalance follows a random walk at each site, while treatment balance is tightly constrained and regularly restored for the overall trial. Run-reversal equilibrium can thus facilitate less predictable, and therefore more scientifically valid, experiments in multi-site clinical trials.

10.
The hot hand phenomenon refers to the expectation of “streaks” in sequences of hits and misses whose probabilities are, in fact, independent (e.g., coin tosses, basketball shots). Here we propose that the hot hand phenomenon reflects an evolved psychological assumption that items in the world come in clumps, and that hot hand, not randomness, is our evolved psychological default. In two experiments, American undergraduates and Shuar hunter–horticulturalists participated in computer tasks in which they predicted hits and misses in foraging for fruits, coin tosses, and several other kinds of resources whose distributions were generated randomly. Subjects in both populations exhibited the hot hand assumption across all the resource types. The only exception was for American students predicting coin tosses where hot hand was reduced. These data suggest that hot hand is our evolved psychological default, which can be reduced (though not eliminated) by experience with genuinely independent random phenomena like coin tosses.

11.
Randomized trials with continuous outcomes are often analyzed using analysis of covariance (ANCOVA), with adjustment for prognostic baseline covariates. The ANCOVA estimator of the treatment effect is consistent under arbitrary model misspecification. In an article recently published in the journal, Wang et al. proved that the model-based variance estimator for the treatment effect is also consistent under outcome model misspecification, assuming the probability of randomization to each treatment is 1/2. In this reader reaction, we derive explicit expressions which show that when randomization is unequal, the model-based variance estimator can be biased upwards or downwards. In contrast, robust sandwich variance estimators can provide asymptotically valid inferences under arbitrary misspecification, even when randomization probabilities are not equal.
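A quick simulation sketch of the phenomenon (the data-generating model, the 20:80 randomization, and the HC3 flavor of sandwich estimator are all arbitrary choices here): with a misspecified working model and unequal allocation, the classical model-based standard error for the treatment coefficient no longer agrees with the robust sandwich standard error.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
treat = rng.binomial(1, 0.2, n)                   # unequal (20:80) randomization
x = rng.normal(size=n)
# True model has a treatment-covariate interaction that the ANCOVA omits.
y = treat + 2 * x + 0.5 * treat * x + rng.normal(size=n)

X = sm.add_constant(np.column_stack([treat, x]))  # misspecified working model
fit_mb = sm.OLS(y, X).fit()                       # classical (model-based) SEs
fit_hc = sm.OLS(y, X).fit(cov_type="HC3")         # robust sandwich SEs
print("model-based SE:", round(fit_mb.bse[1], 4))
print("sandwich SE:   ", round(fit_hc.bse[1], 4))
```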

12.
We present an outcome-adaptive randomization (AR) scheme for comparative clinical trials in which the primary endpoint is a joint efficacy/toxicity outcome. Under the proposed scheme, the randomization probabilities are unbalanced adaptively in favor of treatments with superior joint outcomes characterized by higher efficacy and lower toxicity. This type of scheme is advantageous from the patients' perspective because, on average, more patients are randomized to superior treatments. We extend the approximate Bayesian time-to-event model in Cheung and Thall (2002, Biometrics 58, 89-97) to model the joint efficacy/toxicity outcomes and perform posterior computation based on a latent variable approach. Consequently, this allows us to incorporate essential information about patients with incomplete follow-up. Based on the computed posterior probabilities, we propose an AR scheme that favors the treatments with larger joint probabilities of efficacy and no toxicity. We illustrate our methodology with a leukemia trial that compares three treatments in terms of their 52-week molecular remission rates and 52-week toxicity rates.
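A much-simplified binary-outcome sketch of this allocation idea: treat "success" as efficacy with no toxicity, give each arm an independent Beta posterior, and randomize in proportion to the posterior probability of being the best arm. The Beta-binomial model, the priors, and the true joint rates are illustrative stand-ins for the paper's Bayesian time-to-event model.

```python
import numpy as np

rng = np.random.default_rng(7)
true_joint = [0.30, 0.45, 0.55]   # hypothetical P(efficacy and no toxicity)
succ, n = np.zeros(3), np.zeros(3)

def alloc_probs(succ, n, draws=4000):
    """Randomization probabilities proportional to the posterior probability
    that each arm has the largest joint success rate, under independent
    Beta(0.5 + s, 0.5 + n - s) posteriors."""
    samples = rng.beta(0.5 + succ, 0.5 + n - succ, size=(draws, 3))
    best = (samples.argmax(axis=1)[:, None] == np.arange(3)).mean(axis=0)
    return best / best.sum()

for patient in range(150):
    p = alloc_probs(succ, n)
    arm = rng.choice(3, p=p)
    n[arm] += 1
    succ[arm] += rng.random() < true_joint[arm]

print("patients per arm:", n.astype(int))
print("final allocation probabilities:", alloc_probs(succ, n).round(2))
```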

13.
The most common objective for response-adaptive clinical trials is to ensure that patients within a trial have a high chance of receiving the best available treatment, by altering the allocation probabilities on the basis of accumulating data. Approaches that yield good patient-benefit properties suffer from low power from a frequentist perspective when testing for a treatment difference at the end of the study, owing to the high imbalance in treatment allocations. In this work we develop an alternative pairwise test for treatment difference based on the allocation probabilities of the covariate-adjusted response-adaptive randomization with forward-looking Gittins index (CARA-FLGI) rule for binary responses. The performance of the novel test is evaluated in simulations for two-armed studies, and its application to multiarmed studies is then illustrated. The proposed test has markedly improved power over the traditional Fisher exact test when this class of nonmyopic response adaptation is used. We also find that the test's power is close to that of a Fisher exact test under equal randomization.

14.
In many experiments, researchers would like to compare treatments with respect to an outcome that only exists in a subset of participants selected after randomization. For example, in preventive HIV vaccine efficacy trials it is of interest to determine whether randomization to vaccine causes lower HIV viral load, a quantity that only exists in participants who acquire HIV. To make a causal comparison and account for potential selection bias, we propose a sensitivity analysis following the principal stratification framework set forth by Frangakis and Rubin (2002, Biometrics 58, 21-29). Our goal is to assess the average causal effect of treatment assignment on viral load at a given baseline covariate level in the always-infected principal stratum (those who would have been infected whether they had been assigned to vaccine or placebo). We assume stable unit treatment values (SUTVA), randomization, and that subjects randomized to the vaccine arm who became infected would also have become infected if randomized to the placebo arm (monotonicity). It is not known which of the subjects infected in the placebo arm are in the always-infected principal stratum, but this can be modeled conditional on covariates, the observed viral load, and a specified sensitivity parameter. Under parametric regression models for viral load, we obtain maximum likelihood estimates of the average causal effect conditional on covariates and the sensitivity parameter. We apply our methods to the world's first phase III HIV vaccine trial.

15.
16.
Optimal multivariate matching before randomization
Although blocking or pairing before randomization is a basic principle of experimental design, the principle is almost invariably applied to at most one or two blocking variables. Here, we discuss the use of optimal multivariate matching prior to randomization to improve covariate balance for many variables at the same time, presenting an algorithm and a case study of its performance. The method is useful when all subjects, or large groups of subjects, are randomized at the same time. Optimal matching divides a single group of 2n subjects into n pairs to minimize covariate differences within pairs (the so-called nonbipartite matching problem); then one subject in each pair is picked at random for treatment, the other being assigned to control. Using the baseline covariate data for 132 patients from an actual, unmatched, randomized experiment, we construct 66 pairs matching on 14 covariates. We then create 10000 unmatched and 10000 matched randomized experiments by repeatedly randomizing the 132 patients, and compare the covariate balance with and without matching. By every measure, every one of the 14 covariates was substantially better balanced when randomization was performed within matched pairs. Even after covariance adjustment for chance imbalances in the 14 covariates, matched randomizations provided more accurate estimates than unmatched randomizations, the increase in accuracy being equivalent to, on average, a 7% increase in sample size. In randomization tests of no treatment effect, matched randomizations using the signed rank test had substantially higher power than unmatched randomizations using the rank sum test, even when only 2 of 14 covariates were relevant to a simulated response. Unmatched randomizations experienced rare disasters which were consistently avoided by matched randomizations.
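A small sketch of this pipeline, using Euclidean covariate distance as a stand-in for whatever distance is used in practice; networkx's blossom-based matcher solves the nonbipartite problem once the distances are negated into weights to be maximized.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
X = rng.normal(size=(12, 4))             # 2n = 12 subjects, 4 covariates

# Complete graph weighted by (negated) covariate distance: a maximum-weight
# perfect matching on negated distances is a minimum-distance pairing.
G = nx.Graph()
for i in range(len(X)):
    for j in range(i + 1, len(X)):
        G.add_edge(i, j, weight=-np.linalg.norm(X[i] - X[j]))
pairs = nx.max_weight_matching(G, maxcardinality=True)

# Randomize within each matched pair: one subject to treatment, one to control.
assignment = {}
for i, j in pairs:
    t, c = (i, j) if rng.random() < 0.5 else (j, i)
    assignment[t], assignment[c] = "treatment", "control"
print(assignment)
```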

17.
In an observational study, the treatment received and the outcome exhibited may be associated in the absence of an effect caused by the treatment, even after controlling for observed covariates. Two tactics are common: (i) a test for unmeasured bias may be obtained using a secondary outcome for which the effect is known, and (ii) a sensitivity analysis may explore the magnitude of unmeasured bias that would need to be present to explain the observed association as something other than an effect caused by the treatment. Can such a test for unmeasured bias inform the sensitivity analysis? If the test for bias does not discover evidence of unmeasured bias, then ask: are conclusions therefore insensitive to larger unmeasured biases? Conversely, if the test for bias does find evidence of bias, then ask: what does that imply about sensitivity to biases? This problem is formulated in a new way as a convex quadratically constrained quadratic program and solved on a large scale by interior point methods in a modern solver. That is, a convex quadratic function of N variables is minimized subject to constraints on linear and convex quadratic functions of these variables. The quadratic function that is minimized is a statistic for the primary outcome that is a function of the unknown treatment assignment probabilities. The quadratic function that constrains this minimization is a statistic for a subsidiary outcome that is also a function of these same unknown treatment assignment probabilities. In effect, the first statistic is minimized over a confidence set for the unknown treatment assignment probabilities supplied by the unaffected outcome. This process avoids the mistake of interpreting the failure to reject a hypothesis as support for the truth of that hypothesis. The method is illustrated by a study of the effects of light daily alcohol consumption on high-density lipoprotein (HDL) cholesterol levels. In this study, the method quickly optimizes a nonlinear function of N = 800 variables subject to linear and quadratic constraints. In the example, strong evidence of unmeasured bias is found using the subsidiary outcome, but, perhaps surprisingly, this finding makes the primary comparison insensitive to larger biases.

18.
In the statistical evaluation of data from a dose-response experiment, it is frequently of interest to test for dose-related trend: an increasing trend in response with increasing dose. The randomization trend test, a generalization of Fisher's exact test, has been recommended for animal tumorigenicity testing when the numbers of tumor occurrences are small. This paper examines the type I error of the randomization trend test and of the Cochran-Armitage and Mantel-Haenszel tests. Simulation results show that when the tumor incidence rates are less than 10%, the randomization test is conservative; the test becomes very conservative when the incidence rate is less than 5%. The Cochran-Armitage and Mantel-Haenszel tests are slightly anti-conservative (liberal) when the incidence rates are larger than 3%. Further, we propose a less conservative method of calculating the p-value of the randomization trend test by excluding some permutations whose probabilities of occurrence are greater than the probability of the observed outcome.
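A Monte Carlo sketch of a randomization (permutation) trend test in this setting: pool the 0/1 tumor responses, permute them across dose groups, and compare the linear trend statistic with its permutation distribution. The dose scores, the incidence data, and the one-sided statistic are illustrative assumptions; the paper's exact-permutation refinement is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(11)

def perm_trend_test(events, totals, doses, n_perm=20_000):
    """One-sided Monte Carlo randomization trend test: the statistic is the
    linear trend T = sum(d_i * x_i) in tumor counts x_i with dose scores d_i."""
    events, totals, doses = map(np.asarray, (events, totals, doses))
    pooled = np.repeat([0.0, 1.0], [totals.sum() - events.sum(), events.sum()])
    group = np.repeat(np.arange(len(totals)), totals)
    t_obs = doses @ events
    hits = 0
    for _ in range(n_perm):
        x = np.bincount(group, weights=rng.permutation(pooled),
                        minlength=len(totals))
        hits += doses @ x >= t_obs
    return (hits + 1) / (n_perm + 1)    # include the observed table

# Hypothetical low-incidence data: 0/50, 1/50, 2/50, 4/50 tumors by dose.
print(perm_trend_test([0, 1, 2, 4], [50, 50, 50, 50], [0, 1, 2, 3]))
```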

19.
It is well established that sea turtles return to natal rookeries to mate and lay their eggs, and that individual females are faithful to particular nesting sites within the rookery. Less certain is whether females are precisely returning to their natal beach. Attempts to demonstrate such precise natal philopatry with genetic data have had mixed success. Here we focused on the green turtles of three nesting sites in the Ascension Island rookery, separated by 5-15 km. Our approach differed from previous work in two key areas. First, we used male microsatellite data (five loci) reconstructed from samples collected from their offspring (N = 17) in addition to data for samples taken directly from females (N = 139). Second, we employed assignment methods in addition to the more traditional F-statistics. No significant genetic structure could be demonstrated with F_ST. However, when average assignment probabilities of females were examined, those for the nesting populations in which they were sampled were indeed significantly higher than their probabilities for other populations (Mann-Whitney U-test: P < 0.001). Further evidence was provided by a significant result for the mAIc test (P < 0.001), supporting greater natal philopatry for females compared with males. The results suggest that female natal site fidelity was not sufficient for significant genetic differentiation among the nesting populations within the rookery, but was detectable with assignment tests.

20.
We developed new criteria for determining the library size in a saturation mutagenesis experiment. When the number of all possible distinct variants is large, any of the top-performing variants (e.g., any of the top three) is likely to meet the design requirements, so the probability that the library contains at least one of them is a sensible criterion for determining the library size. By using a criterion of this type, one may significantly reduce the library size and thus save costs and labor while minimally compromising the quality of the best variant discovered. We present the probabilistic tools underlying these criteria and use them to compare the efficiencies of four randomization schemes: NNN, which uses all 64 codons; NNB, which uses 48 codons; NNK, which uses 32 codons; and MAX, which assigns equal probabilities to each of the 20 amino acids. MAX was found to be the most efficient randomization scheme and NNN the least efficient. TopLib, a computer program for carrying out the related calculations, is available through a user-friendly Web server.
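The core calculation can be sketched under the simplest idealization: clones sampled i.i.d. with equal probability over all V distinct variants (roughly the MAX scheme at the amino acid level; NNN/NNB/NNK give unequal probabilities and need the paper's fuller machinery). The smallest library size n with probability at least `confidence` of containing one of the top t variants then solves 1 - (1 - t/V)^n >= confidence.

```python
import math

def library_size(n_variants, top_t=3, confidence=0.95):
    """Smallest library size n with P(at least one of the top_t variants
    present) >= confidence, assuming i.i.d. sampling with equal variant
    probabilities (an idealization of the MAX scheme)."""
    miss = 1 - top_t / n_variants            # per-clone miss probability
    return math.ceil(math.log(1 - confidence) / math.log(miss))

# Example: 3 fully randomized positions -> 20**3 = 8000 protein variants.
# Accepting any of the top 3 variants shrinks the library roughly 3-fold.
for t in (1, 3):
    print(f"top-{t}: n = {library_size(20**3, top_t=t)}")
```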
