Similar Documents
20 similar documents found.
1.
The concept of adaptive two-stage designs is applied to the problem of testing the equality of several normal means against an ordered (monotone) alternative. The likelihood ratio test proposed by Bartholomew is known to have favorable power properties when testing against a monotonic trend. Tests based on contrasts provide a flexible way to incorporate available information about the pattern of the unknown true means through appropriate specification of the scores. The basic idea of the presented concept is the combination of Bartholomew's test (first stage) with an "adaptive score test" (second stage) that utilizes the information resulting from isotonic regression estimation at the first stage. In a Monte Carlo simulation study, the adaptive scoring procedure is compared with a non-adaptive two-stage procedure that uses Bartholomew's test at both stages. We found that adaptive scoring may improve the power of the two-stage design, in particular when the sample size at the first stage is considerably larger than at the second stage.
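As a rough illustration of the adaptive-scoring idea, the sketch below derives contrast scores from an isotonic (pool-adjacent-violators) fit to the stage-1 group means and applies them as a contrast test to the stage-2 data. It assumes a known common standard deviation (sigma) and a one-sided test for an upward trend; it is a reconstruction of the idea, not the authors' procedure.

```python
import numpy as np
from scipy.stats import norm

def pava(y, w):
    """Weighted pool-adjacent-violators algorithm (nondecreasing fit)."""
    blocks = []                          # each block: [mean, weight, count]
    for yi, wi in zip(map(float, y), map(float, w)):
        blocks.append([yi, wi, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(w1 * m1 + w2 * m2) / wt, wt, n1 + n2])
    fit = []
    for mean, _, n in blocks:
        fit.extend([mean] * n)
    return np.array(fit)

def adaptive_score_test(stage1, stage2, sigma=1.0):
    """stage1/stage2: lists of 1-D arrays, one per ordered dose group."""
    m1 = np.array([g.mean() for g in stage1])
    n1 = np.array([len(g) for g in stage1], dtype=float)
    scores = pava(m1, n1)
    scores = scores - scores.mean()      # centred contrast coefficients
    if np.allclose(scores, 0.0):         # flat isotonic fit: linear fallback
        scores = np.arange(len(stage1)) - (len(stage1) - 1) / 2.0
    m2 = np.array([g.mean() for g in stage2])
    n2 = np.array([len(g) for g in stage2], dtype=float)
    z = scores @ m2 / (sigma * np.sqrt(np.sum(scores**2 / n2)))
    return z, norm.sf(z)                 # one-sided p-value, upward trend
```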

2.
Although linear rank statistics for the two-sample problem are distribution-free tests, their power depends on the distribution of the data. In the planning phase of an experiment, researchers are often uncertain about the shape of this distribution, so the choice of test statistic for the analysis and the determination of the required sample size are based on vague information. Adaptive designs with an interim analysis can potentially overcome both problems, and adaptive tests based on a selector statistic in particular address the first. We investigate whether adaptive tests can be usefully implemented in flexible two-stage designs to gain power. In a simulation study, we compare several methods for choosing a second-stage test statistic based on interim data with a procedure that applies adaptive tests in both stages. We find that the latter is a sensible approach that leads to the best results in most situations considered here. The different methods are illustrated using a clinical trial example.
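A minimal sketch of a selector statistic, assuming a Hogg-type tailweight measure computed on the pooled interim sample that chooses between the Wilcoxon rank-sum test and Mood's median test for stage 2; the paper's actual selector and test family may differ, and the threshold 2.0 is an illustrative placeholder.

```python
import numpy as np
from scipy.stats import mannwhitneyu, median_test

def tailweight(x):
    """Crude tailweight measure: ratio of outer to inner spread."""
    q = np.quantile(x, [0.025, 0.25, 0.75, 0.975])
    return (q[3] - q[0]) / (q[2] - q[1])

def select_and_test(interim_pooled, x2, y2):
    """Pick the stage-2 test from the pooled interim sample, then apply it
    to the stage-2 data x2, y2."""
    if tailweight(interim_pooled) > 2.0:     # heavy tails: median test
        stat, p, _, _ = median_test(x2, y2)
        return "median", p
    _, p = mannwhitneyu(x2, y2, alternative="two-sided")
    return "wilcoxon", p
```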

3.
Polley MY, Cheung YK. Biometrics. 2008;64(1):232-241.
We deal with the design of early-phase dose-finding clinical trials with monotone biologic endpoints, such as biological measurements, laboratory values of serum levels, and gene expression. A specific objective of this type of trial is to identify the minimum dose that exhibits adequate drug activity, that is, shifts the mean of the endpoint away from its value at zero dose: the so-called minimum effective dose. Stepwise test procedures for dose finding have been well studied in the context of nonhuman studies where the sampling plan is done in one stage. In this article, we extend the notion of stepwise testing to a two-stage enrollment plan in an attempt to reduce the potential sample size requirement by shutting down unpromising doses at a futility interim analysis. In particular, we examine four two-stage designs and apply them to design a statin trial with four doses and a placebo in patients with Hodgkin's disease. We discuss the calibration of the design parameters and the implementation of the proposed methods. In the context of the statin trial, a calibrated two-stage design can reduce the average total sample size by up to 38% (from 125 to 78) relative to a one-stage step-down test, while maintaining comparable error rates and probability of correct selection. The price for the reduction in average sample size is a slight increase in the maximum total sample size, from 125 to 130.

4.
Adaptive designs were originally developed for independent and uniformly distributed p-values. There are trial settings where independence is not satisfied, or where it may not be possible to check whether it is satisfied. In these cases, the test statistics and p-values of each stage may be dependent. Since the probability of a type I error for a fixed adaptive design depends on the true dependence structure between the p-values of the stages, control of the type I error rate might be endangered if the dependence structure is not taken into account adequately. In this paper, we address the problem of controlling the type I error rate in two-stage adaptive designs when any dependence structure between the test statistics of the stages is admitted (worst-case scenario). For this purpose, we pursue a copula approach to adaptive designs. For two-stage adaptive designs without a futility stop, we derive the probability of a type I error in the worst case, that is, for the most adverse dependence structure between the p-values of the stages. Explicit analytical results are given for the class of inverse normal designs, and a comparison with the significance level for independent and uniformly distributed p-values is performed. For inverse normal designs without a futility stop and with equally weighted stages, it turns out that correcting for the worst case is too conservative compared to a simple Bonferroni design.
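For reference, the inverse normal design mentioned above combines the stage-wise p-values as (standard formulation, with prespecified weights)

\[
Z \;=\; w_1\,\Phi^{-1}(1-p_1) \;+\; w_2\,\Phi^{-1}(1-p_2),
\qquad w_1^2 + w_2^2 = 1,
\]

and rejects when \(Z \ge \Phi^{-1}(1-\alpha)\). This attains level \(\alpha\) exactly when \(p_1\) and \(p_2\) are independent and uniform under the null; the paper quantifies how far the rejection probability can exceed \(\alpha\) under the most adverse dependence between \(p_1\) and \(p_2\).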

5.
We present an adaptive percentile-modified Wilcoxon rank sum test for the two-sample problem. The test is essentially a Wilcoxon rank sum test applied to a fraction of the sample observations, with the fraction determined adaptively from the observations themselves. Most of the theory is developed under a location-shift model, but we demonstrate that the test is also meaningful for testing against more general alternatives. The test may be particularly useful for the analysis of massive datasets in which quasi-automatic hypothesis testing is required. We investigate the power characteristics of the new test in a simulation study and apply it to a microarray experiment on colorectal cancer. These empirical studies demonstrate that the new test has good overall power and succeeds better than other popular tests in finding differentially expressed genes. We conclude that the new nonparametric test is widely applicable and that its power is comparable to that of the Baumgartner-Weiß-Schindler test.
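A rough sketch of the percentile-modified idea: apply the rank sum test only to pooled observations above an empirical percentile. In the actual procedure the retained fraction r is chosen adaptively and the null distribution accounts for the data-dependent cut, so the p-value below is illustrative only.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def percentile_modified_wilcoxon(x, y, r=0.5):
    """Wilcoxon rank-sum on the top fraction r of the pooled sample
    (illustrative; r fixed here, adaptive in the actual procedure)."""
    pooled = np.concatenate([x, y])
    cut = np.quantile(pooled, 1.0 - r)        # keep the top fraction r
    x_top, y_top = x[x > cut], y[y > cut]
    if len(x_top) == 0 or len(y_top) == 0:    # degenerate split: use all data
        return mannwhitneyu(x, y, alternative="two-sided").pvalue
    return mannwhitneyu(x_top, y_top, alternative="two-sided").pvalue
```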

6.
We propose a method to construct adaptive tests based on a bootstrap technique. The procedure leads to a nearly exact adaptive test, depending on the sample size. Using the estimated Pitman relative efficacy as the selector statistic, we show that the adaptive test has power asymptotically equal to that of its best component. We apply the idea to construct an adaptive test for the two-way analysis of variance model. Finally, we use simulations to study the behaviour of the method for small sample sizes.
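The sketch below shows the bootstrap-calibration step only: groups are re-centred to impose the null, resampled, and the critical value of the (here placeholder) adaptive statistic is estimated from the bootstrap distribution. The statistic and the selector would be the paper's efficacy-based components in actual use.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_stat(groups):
    """Placeholder statistic (F-type ratio); swap in the real adaptive
    statistic with its Pitman-efficacy selector for actual use."""
    grand = np.concatenate(groups)
    between = sum(len(g) * (g.mean() - grand.mean())**2 for g in groups)
    within = sum(((g - g.mean())**2).sum() for g in groups)
    return between / within

def bootstrap_critical_value(groups, alpha=0.05, B=2000):
    centred = [g - g.mean() for g in groups]   # impose H0: equal means
    boot = np.empty(B)
    for b in range(B):
        resampled = [rng.choice(g, size=len(g), replace=True)
                     for g in centred]
        boot[b] = adaptive_stat(resampled)
    return np.quantile(boot, 1 - alpha)        # estimated critical value
```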

7.
Zheng G, Song K, Elston RC. Human Heredity. 2007;63(3-4):175-186.
We study a two-stage analysis of genetic association for case-control studies. In the first stage, we compare Hardy-Weinberg disequilibrium coefficients between cases and controls; in the second stage, we apply the Cochran-Armitage trend test. The two analyses are statistically independent when Hardy-Weinberg equilibrium holds in the population, so all the samples are used in both stages. The significance level in the first stage is adaptively determined based on its conditional power. Given the level of the first stage, the level for the second-stage analysis is determined so that the overall Type I error is asymptotically controlled. For finite sample sizes, a parametric bootstrap method is used to control the overall Type I error rate. This two-stage analysis is often more powerful than the Cochran-Armitage trend test alone for a large association study. The new approach is applied to SNPs from a real study.
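For reference, a sketch of the second-stage Cochran-Armitage trend test for a 2 x 3 genotype table, using the additive scores (0, 1, 2), one of several common score choices.

```python
import numpy as np
from scipy.stats import norm

def cochran_armitage(cases, controls, scores=(0, 1, 2)):
    """cases/controls: counts per genotype, e.g. (aa, aA, AA)."""
    r = np.asarray(cases, dtype=float)
    s = np.asarray(controls, dtype=float)
    x = np.asarray(scores, dtype=float)
    n = r + s                                  # genotype column totals
    N, R = n.sum(), r.sum()
    num = (x * (r - R / N * n)).sum()          # score-weighted deviation
    p_bar = R / N
    var = p_bar * (1 - p_bar) * ((x**2 * n).sum() - (x * n).sum()**2 / N)
    z = num / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))              # two-sided p-value

print(cochran_armitage(cases=(20, 40, 40), controls=(35, 45, 20)))
```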

8.
We propose a hierarchical model for the probability of dose-limiting toxicity (DLT) for combinations of doses of two therapeutic agents. We apply this model to an adaptive Bayesian trial algorithm whose goal is to identify combinations with DLT rates close to a prespecified target rate. We describe methods for generating prior distributions for the parameters in our model from a basic set of information elicited from clinical investigators. We survey the performance of our algorithm in a series of simulations of a hypothetical trial that examines combinations of four doses of two agents. We also compare the performance of our approach to two existing methods and assess the sensitivity of our approach to the chosen prior distribution.

9.
Yin G, Yuan Y. Biometrics. 2009;65(3):866-875.
Two-agent combination trials have recently attracted enormous attention in oncology research. There are several strong motivations for combining different agents in a treatment: to induce a synergistic treatment effect, to increase dose intensity with nonoverlapping toxicities, and to target different tumor cell susceptibilities. To accommodate this growing trend in clinical trials, we propose a Bayesian adaptive design for dose finding based on latent 2 × 2 tables. In the search for the maximum tolerated dose combination, we continuously update the posterior estimates of the unknown parameters associated with the marginal probabilities and the correlation parameter based on the data from successive patients. By reordering the dose toxicity probabilities in the two-dimensional space, we assign each incoming cohort of patients to the most appropriate dose combination. We conduct extensive simulation studies to examine the operating characteristics of the proposed method under various practical scenarios. Finally, we illustrate our dose-finding procedure with a clinical trial of agent combinations at M. D. Anderson Cancer Center.
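One standard way to parameterize a 2 × 2 table of binary toxicity indicators with given margins and correlation (the paper's latent-variable formulation may differ in detail) is

\[
\Pr(Y_A = 1,\, Y_B = 1) \;=\; p_A p_B \;+\; \rho\,\sqrt{p_A(1-p_A)\,p_B(1-p_B)},
\]

with the remaining three cell probabilities determined by the margins \(p_A\) and \(p_B\); the admissible range of \(\rho\) is constrained by those margins.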

10.
Stylianou M, Flournoy N. Biometrics. 2002;58(1):171-177.
We are interested in finding a dose that has a prespecified toxicity rate in the target population. In this article, we investigate five estimators of the target dose to be used with the up-and-down biased coin design (BCD) introduced by Durham and Flournoy (1994, Statistical Decision Theory and Related Topics). These estimators are derived using maximum likelihood, weighted least squares, sample averages, and isotonic regression. A linearly interpolated isotonic regression estimate is shown to be simple to derive and to perform as well as or better than the other target dose estimators in terms of mean square error and the average number of subjects needed for convergence in most scenarios studied.
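A simulation sketch of the biased coin design for a target toxicity rate Gamma < 1/2 (step down after a toxicity; step up with probability b = Gamma/(1 - Gamma) after a non-toxicity), followed by a linearly interpolated isotonic regression estimate of the target dose. This is a reconstruction of the design's logic, not the paper's code; np.interp clips outside the fitted range and handles flat isotonic segments only approximately.

```python
import numpy as np

rng = np.random.default_rng(1)

def pava(y, w):
    """Weighted pool-adjacent-violators algorithm (nondecreasing fit)."""
    blocks = []                          # each block: [mean, weight, count]
    for yi, wi in zip(map(float, y), map(float, w)):
        blocks.append([yi, wi, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(w1 * m1 + w2 * m2) / wt, wt, n1 + n2])
    fit = []
    for mean, _, n in blocks:
        fit.extend([mean] * n)
    return np.array(fit)

def run_bcd(true_probs, doses, gamma=0.2, n=60):
    b = gamma / (1 - gamma)              # biased-coin step-up probability
    level = 0
    tox = np.zeros(len(doses))
    cnt = np.zeros(len(doses))
    for _ in range(n):
        y = rng.random() < true_probs[level]
        tox[level] += y
        cnt[level] += 1
        if y:                            # toxicity observed: step down
            level = max(level - 1, 0)
        elif rng.random() < b:           # no toxicity: step up w.p. b
            level = min(level + 1, len(doses) - 1)
    used = cnt > 0
    q = pava(tox[used] / cnt[used], cnt[used])      # isotonic toxicity rates
    d = np.asarray(doses, dtype=float)[used]
    return float(np.interp(gamma, q, d))            # interpolate at Gamma

# Example: five doses with increasing true toxicity probabilities.
print(run_bcd([0.05, 0.15, 0.25, 0.45, 0.60], [1, 2, 3, 4, 5]))
```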

11.
In 1976, Crump, Hoel, Langley, and Peto described how almost any dose-response relationship for carcinogens becomes linear at low doses when background cancers are taken into account. This has been used by the U.S. Environmental Protection Agency (USEPA) as partial justification for a regulatory posture that assumes low-dose linearity, as illustrated by a discussion of the regulation of benzene as a carcinogen. The argument depends critically on the assumption that the pollutant and the background proceed by the same biological mechanism. In this paper we show that the same argument also applies to noncancer endpoints. We discuss the application to a number of situations: reduction in lung function, and a consequent increase in death rate, due to (particulate) air pollution; reduction in IQ, and hence (in extreme cases) mental deficiency, due to radiation in utero; and reduction of sperm count, and hence an increase in male infertility, due to DBCP exposure. We conclude that, although the biological basis for the health effect differs, in each case low-dose linearity might arise from the same mathematical effect discussed by Crump et al. (1976). We then examine other situations and toxic endpoints where low-dose linearity might apply by the same argument. We urge biologists and chemists to concentrate efforts on comparing the biological and pharmacokinetic processes that apply to the pollutant and the background. Finally, we discuss some public policy implications of the possibility that low-dose linearity may be the rule rather than the exception for environmental exposures.
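The mathematical effect referred to is a first-order Taylor expansion around the background level: if the pollutant adds to a background \(b\) acting through the same mechanism, so that the response is \(R(d) = f(b + d)\), then for small doses

\[
R(d) \;\approx\; f(b) + f'(b)\,d,
\]

so the incremental response \(R(d) - R(0)\) is linear in \(d\) whenever \(f'(b) > 0\), regardless of the shape of \(f\).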

12.
As an approach to combining the phase II dose-finding trial and phase III pivotal trials, we propose a two-stage adaptive design that selects the best among several treatments in the first stage and tests the significance of the selected treatment in the second stage. The approach controls the type I error, defined as the probability of selecting a treatment and claiming its significance when the selected treatment is no different from placebo, as considered in Bischoff and Miller (2005). Our approach uses the conditional error function and allows the conditional type I error function for the second stage to be determined from information observed at the first stage, in a way similar to an ordinary adaptive design without treatment selection. We examine properties such as expected sample size and stage-2 power of this design for a given type I error and a maximum stage-2 sample size under different hypothesis configurations. We also propose a method to find the optimal conditional error function of a simple parametric form to improve the performance of the design, and we derive optimal designs under some hypothesis configurations. Application of this approach is illustrated by a hypothetical example.

13.
We consider the bivariate situation of a quantitative, ordinal, binary, or censored response variable and a quantitative or ordinal exposure variable (dose) with a hypothetical effect on the response. The data can be the outcome either of a planned dose-response experiment with only a few dose levels or of an observational study where, for example, both exposure and response variables are observed within each individual. We are interested in testing the null hypothesis of no effect of the dose variable against a dose-response function depending on an unknown 'threshold' parameter. The variety of dose-response functions considered ranges from no-observed-effect-level (NOEL) models to umbrella alternatives. Here we discuss generalizations of the method of Lausen & Schumacher (Biometrics, 1992, 48, 73-85) that are based on combinations of two-sample rank statistics and rank statistics for trend. Our approach may be seen as a generalization of a proposal for change-point problems. Using the approach of Davies (Biometrika, 1987, 74, 33-43), we derive and approximate the asymptotic null distribution when a large number of thresholds is considered, and we use an improved Bonferroni inequality as an approximation when a small number of thresholds is considered. Moreover, we analyse the small-sample behaviour by means of a Monte Carlo study. The paper is illustrated by examples from clinical research and epidemiology.

14.
We propose a Bayesian dose-finding design that accounts for two important factors: the severity of toxicity and heterogeneity in patients' susceptibility to toxicity. We consider toxicity outcomes with various levels of severity and define appropriate scores for these severity levels. We then use a multinomial likelihood and a Dirichlet prior to model the probabilities of these toxicity scores at each dose, and characterize the overall toxicity using an average toxicity score (ATS) parameter. To address heterogeneity in patients' susceptibility to toxicity, we categorize patients into different risk groups based on their susceptibility. A Bayesian isotonic transformation is applied to induce an order-restricted posterior inference on the ATS. We demonstrate the performance of the proposed dose-finding design using simulations based on a clinical trial in multiple myeloma.
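A sketch of the multinomial-Dirichlet piece of this construction: severity counts at one dose are combined with a Dirichlet prior, and posterior draws of the ATS are obtained as score-weighted probability vectors. The severity scores and flat prior below are illustrative placeholders, and the block omits the risk-group and isotonic-transformation steps.

```python
import numpy as np

rng = np.random.default_rng(2)

def ats_posterior(counts, scores, prior=None, ndraws=10_000):
    """Posterior draws of the average toxicity score at one dose level."""
    counts = np.asarray(counts, dtype=float)
    prior = np.ones_like(counts) if prior is None else np.asarray(prior, float)
    draws = rng.dirichlet(prior + counts, size=ndraws)  # conjugate update
    return draws @ np.asarray(scores, dtype=float)

# Example: severity grades 0-3 with hypothetical scores 0, 0.5, 1, 1.5.
samples = ats_posterior(counts=[6, 3, 2, 1], scores=[0, 0.5, 1.0, 1.5])
print(samples.mean(), np.quantile(samples, [0.025, 0.975]))
```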

15.
Proschan and Hunsberger (1995) suggest the use of a conditional error function to construct a two-stage test that meets the α level and allows a very flexible reassessment of the sample size after the interim analysis. In this note we show that several adaptive designs can be formulated in terms of such an error function. The conditional power function, defined analogously, provides a simple method for sample size reassessment in adaptive two-stage designs.
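In the standard formulation, with stage-1 p-value \(p_1\), early-rejection boundary \(\alpha_1\), futility boundary \(\alpha_0\), and conditional error function \(A(\cdot)\), the second stage rejects if and only if \(p_2 \le A(p_1)\), and the overall level is

\[
\alpha \;=\; \alpha_1 \;+\; \int_{\alpha_1}^{\alpha_0} A(p)\,dp,
\]

assuming \(p_1\) and \(p_2\) are independent and uniform under \(H_0\).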

16.
We propose a multiple comparison procedure to identify the minimum effective dose level by sequentially comparing each dose level with the zero-dose level in a dose-finding test. If the minimum effective dose level can be found at an early stage of the sequential test, the procedure can be terminated after only a few group observations up to that dose level. The procedure is therefore attractive from an economic point of view when obtaining observations is costly. Within the procedure, we present an integral formula for determining the critical values that satisfy a predefined familywise type I error rate, and we show how to determine the sample size required to guarantee the power of the test. In simulation studies, we compare the power of the test and the required sample size for various configurations of the population means, and we apply our sequential procedure to a dose-response test in a case study.
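A sketch of the sequential idea only: compare doses with the zero dose in increasing order and stop at the first significant one. This stand-in uses a single unadjusted z critical value with known sigma; the paper instead derives exact critical values from an integral formula to control the familywise error rate.

```python
import numpy as np
from scipy.stats import norm

def sequential_med(control, dose_groups, alpha=0.05, sigma=1.0):
    """Return the index of the estimated minimum effective dose, or None
    if no dose separates from the zero dose (illustrative critical value)."""
    c = norm.ppf(1 - alpha)
    for j, g in enumerate(dose_groups):
        se = sigma * np.sqrt(1 / len(g) + 1 / len(control))
        z = (np.mean(g) - np.mean(control)) / se
        if z > c:
            return j                     # first dose exceeding the boundary
    return None
```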

17.
18.
Inference after two-stage single-arm designs with a binary endpoint is challenging due to the nonunique ordering of the sampling space in multistage designs. We illustrate the problem of specifying test-compatible confidence intervals for designs with a nonconstant second-stage sample size and present two approaches that guarantee confidence intervals consistent with the test decision. First, we extend the well-known Clopper-Pearson approach of inverting a family of two-sided hypothesis tests from the group-sequential case to designs with fully adaptive sample size. Test compatibility is achieved by using a sample-space ordering derived from a test-compatible estimator. The resulting confidence intervals tend to be conservative but assure the nominal coverage probability. To assess the possibility of further improving these confidence intervals, we pursue a direct optimization approach that minimizes the mean width of the confidence intervals. While the latter approach produces more stable coverage probabilities, it is also slightly anti-conservative and yields only negligible improvements in mean width. We conclude that the Clopper-Pearson-type confidence intervals based on a test-compatible estimator are the best choice if the nominal coverage probability is not to be undershot and compatibility of the test decision and confidence interval is to be preserved.
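For context, the single-stage Clopper-Pearson interval that this approach generalizes inverts two one-sided binomial tests, computable via beta quantiles. The two-stage extension additionally requires the sample-space ordering described above, which this block omits.

```python
from scipy.stats import beta

def clopper_pearson(x, n, alpha=0.05):
    """Exact (conservative) two-sided interval for a binomial proportion."""
    lo = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lo, hi

print(clopper_pearson(7, 20))   # e.g. 7 responses in 20 patients
```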

19.
Drug development is traditionally divided into three phases with different aims and objectives. Recently, so-called adaptive seamless designs, which allow the objectives of different development phases to be combined in a single trial, have gained much interest. Adaptive trials combining the treatment selection typical of Phase II with the confirmation of efficacy typical of Phase III are referred to as adaptive seamless Phase II/III designs and are considered in this paper. We compare four methods for adaptive treatment selection: the classical Dunnett test, an adaptive version of the Dunnett test based on the conditional error approach, the combination test approach, and an approach within the classical group-sequential framework. The latter two approaches have only recently been published. In a simulation study we found that no single method dominates the others in terms of power, apart from the adaptive Dunnett test, which dominates the classical Dunnett test by construction. Furthermore, scenarios under which one approach outperforms the others are described.
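A sketch of the combination-test approach to seamless Phase II/III designs: select the best of k arms at stage 1, adjust its stage-1 p-value for the selection (a simple Bonferroni adjustment here; closed testing is typical in practice), and combine the stages with the inverse normal rule using prespecified weights.

```python
import numpy as np
from scipy.stats import norm

def combination_test(p1_selected, k, p2, w1=np.sqrt(0.5), alpha=0.025):
    """p1_selected: stage-1 p-value of the selected arm versus control;
    k: number of stage-1 treatment arms; p2: stage-2 p-value of that arm."""
    p1_adj = min(1.0, k * p1_selected)    # Bonferroni adjustment for selection
    w2 = np.sqrt(1 - w1**2)               # weights satisfy w1^2 + w2^2 = 1
    z = w1 * norm.isf(p1_adj) + w2 * norm.isf(p2)
    return z >= norm.isf(alpha), z

print(combination_test(p1_selected=0.01, k=3, p2=0.04))
```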

20.
Delayed dose-limiting toxicities (DLTs; i.e., toxicities occurring beyond the first cycle of treatment) are a challenge for phase I trials. The time-to-event continual reassessment method (TITE-CRM) is a Bayesian dose-finding design that addresses long observation times and early patient drop-out. It uses a weighted binomial likelihood, with weights assigned to observations by the unknown time-to-toxicity distribution, and remains open to accrual continuously. To avoid dosing at overly toxic levels while retaining accuracy and efficiency in DLT evaluation over multiple cycles, we propose an adaptive weight function that incorporates cyclical data on the experimental treatment, with parameters updated continually. This provides a reasonable estimate of the time-to-toxicity distribution by accounting for inter-cycle variability, and it maintains the statistical properties of consistency and coherence. A case study of a first-in-human cancer trial of an experimental biologic is presented using the proposed design. The clinical and statistical design parameters are calibrated to ensure good operating characteristics. Simulation results show that the proposed TITE-CRM design with an adaptive weight function yields significantly shorter trial durations, does not expose patients to additional risk, is competitive with existing weighting methods, and possesses several desirable properties.
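For orientation, the TITE-CRM's weighted binomial likelihood for \(n\) enrolled patients is

\[
L(\beta) \;=\; \prod_{i=1}^{n} \bigl[w_i\,\pi(d_i,\beta)\bigr]^{y_i}\,\bigl[1 - w_i\,\pi(d_i,\beta)\bigr]^{1-y_i},
\]

where \(y_i\) indicates a DLT for patient \(i\), \(\pi(d,\beta)\) is the dose-toxicity model, and \(w_i \in [0,1]\) is the weight. The simplest choice is linear in follow-up, \(w_i = \min(u_i/T,\,1)\), with follow-up time \(u_i\) and full observation window \(T\); the proposal above replaces this fixed weight with an adaptive weight function estimated from cyclical data.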

