Similar Literature
20 similar documents found.
1.
Cheung YK. Biometrics 2002, 58(1): 237-240.
Gasparini and Eisele (2000, Biometrics 56, 609-615) propose a design for phase I clinical trials during which dose allocation is governed by a Bayesian nonparametric estimate of the dose-response curve. The authors also suggest an elicitation algorithm to establish vague priors. However, in situations where a low percentile is targeted, priors thus obtained can lead to undesirable rigidity given certain trial outcomes that can occur with a nonnegligible probability. Interestingly, improvement can be achieved by prescribing slightly more informative priors. Some guidelines for prior elicitation are established using a connection between this curve-free method and the continual reassessment method.

2.
Cheung YK, Chappell R. Biometrics 2000, 56(4): 1177-1182.
Traditional designs for phase I clinical trials require each patient (or small group of patients) to be completely followed before the next patient or group is assigned. In situations such as when evaluating late-onset effects of radiation or toxicities from chemopreventive agents, this may result in trials of impractically long duration. We propose a new method, called the time-to-event continual reassessment method (TITE-CRM), that allows patients to be entered in a staggered fashion. It is an extension of the continual reassessment method (CRM; O'Quigley, Pepe, and Fisher, 1990, Biometrics 46, 33-48). We also note that this time-to-toxicity approach can be applied to extend other designs for studies of short-term toxicities. We prove that the recommended dose given by the TITE-CRM converges to the correct level under certain conditions. A simulation study shows that our method's accuracy and safety are comparable with the CRM's while requiring a much shorter trial duration: a trial that would take up to 12 years to complete under the CRM could be reduced to 2-4 years by our method.
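To make the weighting idea concrete, here is a minimal sketch (ours, not the authors' code) of a TITE-CRM dose-recommendation step under the one-parameter power model; the skeleton, target rate, and patient data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def tite_crm_recommend(skeleton, doses_given, y, weights, target=0.25):
    """Recommend the dose whose estimated toxicity rate is closest to target."""
    skeleton = np.asarray(skeleton)
    y = np.asarray(y, dtype=float)
    w = np.asarray(weights, dtype=float)   # w_i = min(follow-up / window, 1)
    p = skeleton[np.asarray(doses_given)]  # skeleton value for each patient's dose

    def neg_log_lik(a):
        # One-parameter power ("empiric") model: P(tox at dose d) = skeleton_d ** exp(a).
        prob = p ** np.exp(a)
        pw = w * prob                      # weighted likelihood for partial follow-up
        return -np.sum(y * np.log(pw) + (1 - y) * np.log(1 - pw))

    a_hat = minimize_scalar(neg_log_lik, bounds=(-3, 3), method="bounded").x
    est = skeleton ** np.exp(a_hat)        # estimated toxicity rate at each level
    return int(np.argmin(np.abs(est - target)))

# Hypothetical data: 5 dose levels; patient 4 is only halfway through follow-up.
skeleton = [0.05, 0.10, 0.20, 0.35, 0.50]
print(tite_crm_recommend(skeleton, doses_given=[0, 1, 1, 2],
                         y=[0, 0, 1, 0], weights=[1.0, 1.0, 1.0, 0.5]))
```

The only change from a fully-followed CRM fit is the weight multiplying each incompletely followed patient's toxicity probability, which is what permits staggered entry.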

3.
O'Quigley J. Biometrics 1992, 48(3): 853-862.
The problem of point and interval estimation following a Phase I trial, carried out according to the scheme outlined by O'Quigley, Pepe, and Fisher (1990, Biometrics 46, 33-48), is investigated. A reparametrization of the model suggested in this earlier work can be seen to be advantageous in some circumstances. Maximum likelihood estimators, Bayesian estimators, and one-step estimators are considered. The continual reassessment method imposes restrictions on the sample space such that it is not possible for confidence intervals to achieve exact coverage properties, however large a sample is taken. Nonetheless, our simulations, based on a small finite sample of 20, not atypical in studies of this type, indicate that the calculated intervals are useful in most practical cases and achieve coverage very close to nominal levels in a very wide range of situations. The relative merits of the different estimators and their associated confidence intervals, viewed from a frequentist perspective, are discussed.

4.
The continual reassessment method (CRM) is an increasingly popular approach for estimating the maximum tolerated dose (MTD) in phase I dose-finding studies. In its original formulation, the scheme is based on a fixed sample size. Many experimenters feel that, whenever possible, it may be advantageous to bring these trials to an early halt and thus reduce the average sample size required to complete the study. To address this issue, a stopping rule has been proposed (O'Quigley and Reiner, 1998) based on the idea that continuing the study would not, with high probability, lead to a change in recommendation. The rule, based on precise probabilistic calculation, is quite involved and not straightforward to implement. A much simpler rule can be constructed based on the idea of having settled at some level. In this work we investigate more deeply the essential ingredients behind these rules and examine their operating characteristics more closely.
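The "settled at some level" rule lends itself to a very short sketch (ours; the run-length k below is an arbitrary design choice, not a value from the paper):

```python
def settled(recommendations, k=6):
    """True once the last k dose recommendations are identical."""
    return len(recommendations) >= k and len(set(recommendations[-k:])) == 1

# Recommended dose level after each successive patient (hypothetical):
history = [1, 2, 2, 3, 3, 3, 3, 3, 3]
print(settled(history))   # True: the trial has settled at level 3, so stop early
```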

5.
Gasparini M, Eisele J. Biometrics 2000, 56(2): 609-615.
Consider the problem of finding the dose that is as high as possible subject to having a controlled rate of toxicity. The problem is commonplace in oncology Phase I clinical trials. Such a dose is often called the maximum tolerated dose (MTD) since it represents a necessary trade-off between efficacy and toxicity. The continual reassessment method (CRM) is an improvement over traditional up-and-down schemes for estimating the MTD. It is based on a Bayesian approach and on the assumption that the dose-toxicity relationship follows a specific response curve, e.g., the logistic or power curve. The purpose of this paper is to illustrate how the assumption of a specific curve used in the CRM is not necessary and can actually hinder the efficient use of prior inputs. An alternative curve-free method in which the probabilities of toxicity are modeled directly as an unknown multidimensional parameter is presented. To this end, a product-of-beta prior (PBP) is introduced and shown to bring about logical improvements. Practical improvements are illustrated by simulation results.
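A minimal simulation sketch (ours) of how a product-of-beta prior models the toxicity probabilities directly while guaranteeing monotonicity in dose; the Beta parameters are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Model P(no toxicity through dose j) as a product of independent Beta draws,
#   q_j = theta_1 * ... * theta_j,  theta_k ~ Beta(a_k, b_k),
# so p_j = 1 - q_j is nondecreasing in dose with no parametric curve assumed.
a = np.array([9.0, 9.0, 8.0, 8.0, 7.0])   # hypothetical prior parameters
b = np.ones(5)

theta = rng.beta(a, b, size=(10_000, 5))  # prior draws of the step ratios
p = 1 - np.cumprod(theta, axis=1)         # toxicity probabilities, monotone by construction

print("prior mean toxicity by dose:", p.mean(axis=0).round(3))
assert np.all(np.diff(p, axis=1) >= 0)    # monotonicity holds draw by draw
```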

6.
O'Quigley J. Biometrics 2005, 61(3): 749-756.
The continual reassessment method (CRM) is a dose-finding design using a dynamic sequential updating scheme. In common with other dynamic schemes, the method estimates a current dose level corresponding to some target percentile for experimentation. The estimate is based on all included subjects. This continual reevaluation is made possible by the use of a simple model. As it stands, neither the CRM nor any of the other dynamic schemes allows for the correct estimation of some target percentile based on retrospective data, apart from the exceptional situation in which the simplified model exactly generates the observations. In this article we focus on the very specific issue of retrospective analysis of data generated by some arbitrary mechanism and subsequently analyzed via the continual reassessment method. We show how this can be done consistently. The proposed methodology is not restricted to that particular design and is applicable to any sequential updating scheme in which dose levels are associated with percentiles via model inversion.

7.
Although there are several new designs for phase I cancer clinical trials including the continual reassessment method and accelerated titration design, the traditional algorithm-based designs, like the '3 + 3' design, are still widely used because of their practical simplicity. In this paper, we study some key statistical properties of the traditional algorithm-based designs in a general framework and derive the exact formulae for the corresponding statistical quantities. These quantities are important for the investigator to gain insights regarding the design of the trial, and are (i) the probability of a dose being chosen as the maximum tolerated dose (MTD); (ii) the expected number of patients treated at each dose level; (iii) target toxicity level (i.e. the expected dose-limiting toxicity (DLT) incidences at the MTD); (iv) expected DLT incidences at each dose level and (v) expected overall DLT incidences in the trial. Real examples of clinical trials are given, and a computer program to do the calculation can be found at the authors' website http://www2.umdnj.edu/~linyo.
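As an illustration of the kind of exact quantity involved, here is a sketch (ours, not the paper's program) of the per-dose escalation probability under the standard '3 + 3' rule, from which quantity (i) can be assembled; the toxicity scenario is hypothetical.

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def prob_escalate(p):
    # Escalate after 0/3 DLTs, or after 1/3 DLTs followed by 0/3 in the expansion cohort.
    return binom_pmf(0, 3, p) + binom_pmf(1, 3, p) * binom_pmf(0, 3, p)

# Hypothetical true DLT probabilities by dose; the running product of the
# escalation probabilities gives the chance that each level is even reached.
scenario = [0.05, 0.10, 0.20, 0.35, 0.50]
reach = 1.0
for j, p in enumerate(scenario, start=1):
    print(f"dose {j}: P(reached) = {reach:.3f}, P(escalate | reached) = {prob_escalate(p):.3f}")
    reach *= prob_escalate(p)
```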

8.
Stallard N. Biometrics 2003, 59(2): 402-409.
This article describes an approach to optimal design of phase II clinical trials using Bayesian decision theory. The method proposed extends that suggested by Stallard (1998, Biometrics 54, 279-294) in which designs were obtained to maximize a gain function including the cost of drug development and the benefit from a successful therapy. Here, the approach is extended by the consideration of other potential therapies, the development of which is competing for the same limited resources. The resulting optimal designs are shown to have frequentist properties much more similar to those traditionally used in phase II trials.

9.
Yuan Z, Chappell R, Bailey H. Biometrics 2007, 63(1): 173-179.
We consider the case of phase I trials for treatment of cancer or other severe diseases in which grade information is available about the severity of toxicity. Most dose allocation procedures dichotomize toxicity grades based on being dose limiting, which may not work well for severe and possibly irreversible toxicities such as renal, liver, and neurological toxicities, or toxicities with long duration. We propose a simple extension to the continual reassessment method (CRM), called the Quasi-CRM, to incorporate grade information. Toxicity grades are first converted to numeric scores that reflect their impacts on the dose allocation procedure, and then incorporated into the CRM using the quasi-Bernoulli likelihood. A simulation study demonstrates that the Quasi-CRM is superior to the standard CRM and comparable to a univariate version of the Bekele and Thall method (2004, Journal of the American Statistical Association 99, 26-35). We also present sensitivity analysis of the new method with respect to toxicity scores, and discuss practical issues such as extending the simple algorithmic up-and-down designs.
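A minimal sketch (ours) of the quasi-Bernoulli likelihood idea: toxicity grades are mapped to fractional scores that replace the usual 0/1 indicator in a CRM power model. The grade-to-score mapping and target score below are hypothetical, not the paper's values.

```python
import numpy as np
from scipy.optimize import minimize_scalar

GRADE_SCORE = {0: 0.0, 1: 0.0, 2: 0.25, 3: 0.5, 4: 1.0}   # hypothetical mapping

def quasi_crm_recommend(skeleton, doses_given, grades, target_score=0.25):
    """Fit the power model by quasi-Bernoulli likelihood with fractional scores."""
    skeleton = np.asarray(skeleton)
    s = np.array([GRADE_SCORE[g] for g in grades])          # scores in [0, 1]
    p = skeleton[np.asarray(doses_given)]

    def neg_quasi_log_lik(a):
        prob = p ** np.exp(a)
        # Same form as the Bernoulli log likelihood, with s in place of a 0/1 outcome.
        return -np.sum(s * np.log(prob) + (1 - s) * np.log(1 - prob))

    a_hat = minimize_scalar(neg_quasi_log_lik, bounds=(-3, 3), method="bounded").x
    est = skeleton ** np.exp(a_hat)
    return int(np.argmin(np.abs(est - target_score)))

print(quasi_crm_recommend([0.05, 0.10, 0.20, 0.35, 0.50],
                          doses_given=[0, 1, 2, 2], grades=[0, 2, 3, 1]))
```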

10.
Pittelkow Y, Wilson SR. Biometrics 2005, 61(2): 630-632; discussion 632-634.
This note is in response to Wouters et al. (2003, Biometrics 59, 1131-1139) who compared three methods for exploring gene expression data. Contrary to their summary that principal component analysis is not very informative, we show that it is possible to determine principal component analyses that are useful for exploratory analysis of microarray data. We also present another biplot representation, the GE-biplot (Gene Expression biplot), that is a useful method for exploring gene expression data with the major advantage of being able to aid interpretation of both the samples and the genes relative to each other.
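For readers unfamiliar with biplots, the following sketch (ours, showing the generic SVD construction rather than the specific GE-biplot scaling) plots genes and samples of a toy expression matrix on common axes.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))        # toy expression matrix: 100 genes x 6 samples
X -= X.mean(axis=0)                  # centre columns before the SVD

U, s, Vt = np.linalg.svd(X, full_matrices=False)
genes = U[:, :2] * s[:2]             # row (gene) coordinates on the first two axes
samples = Vt[:2].T                   # column (sample) coordinates on the same axes

fig, ax = plt.subplots()
ax.scatter(genes[:, 0], genes[:, 1], s=8, alpha=0.4, label="genes")
for j, (x, y) in enumerate(samples): # samples drawn as labelled arrows from the origin
    ax.arrow(0, 0, x, y, head_width=0.02, length_includes_head=True)
    ax.annotate(f"sample {j}", (x, y))
ax.set_xlabel("axis 1"); ax.set_ylabel("axis 2"); ax.legend()
plt.show()
```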

11.

Background

For a clinical trials unit to run its first model-based phase I trial, the statistician, chief investigator, and trial manager must all acquire a new set of skills. These trials also require a different approach to funding and data collection.

Challenges and discussion

From the statisticians' viewpoint, we highlight what is needed to move from running rule-based, early-phase trials to running a model-based phase I study, as we experienced it in our trials unit located in the United Kingdom. Our example is CHARIOT, a dose-finding trial using the time-to-event continual reassessment method. It consists of three stages and aims to discover the maximum tolerated dose of the combination of radiotherapy, chemotherapy, and the ataxia telangiectasia and Rad3-related (ATR) inhibitor M6620 (previously known as VX-970) in patients with oesophageal cancer. We present the challenges we faced in designing this trial and how we overcame them, as a way of demystifying the conduct of a model-based trial in a grant-funded clinical trials unit.

Conclusions

Although we appreciate that undertaking model-based trials requires additional time and effort, they are feasible to implement and, once suitable tools such as guiding publications and document templates become available, the design and set-up process will be easier and more efficient.

12.
Jiang Q, Snapinn S, Iglewicz B. Biometrics 2004, 60(3): 800-806.
Sample size calculations for survival trials typically include an adjustment to account for the expected rate of noncompliance, or discontinuation from study medication. Existing sample size methods assume that when patients discontinue, they do so independently of their risk of an endpoint; that is, that noncompliance is noninformative. However, this assumption is not always true, as we illustrate using results from a published clinical trial database. In this article, we introduce a modified version of the method proposed by Lakatos (1988, Biometrics 44, 229-241) that can be used to calculate sample size under informative noncompliance. This method is based on the concept of two subpopulations: one with high rates of endpoint and discontinuation and another with low rates. Using this new method, we show that failure to consider the impact of informative noncompliance can lead to a considerably underpowered study.
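A toy calculation (ours, simplified to binary one-period outcomes rather than the paper's survival setting) showing how the two-subpopulation mechanism inflates the required sample size when discontinuation is informative; all rates below are hypothetical.

```python
from scipy.stats import norm

alpha, power = 0.05, 0.9
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)

frac_high = 0.3
p_event = {"high": 0.30, "low": 0.10}   # control-arm event probabilities
p_drop = {"high": 0.40, "low": 0.10}    # discontinuation probabilities
effect = 0.7                            # relative risk while on treatment
drop_avg = frac_high * p_drop["high"] + (1 - frac_high) * p_drop["low"]

def treated_event_prob(informative):
    """Treated-arm event probability, mixing the two subpopulations.

    Dropouts are assumed to lose the treatment benefit; under the
    noninformative assumption both groups share the average dropout rate.
    """
    total = 0.0
    for grp, w in (("high", frac_high), ("low", 1 - frac_high)):
        drop = p_drop[grp] if informative else drop_avg
        total += w * p_event[grp] * (drop + (1 - drop) * effect)
    return total

p_c = frac_high * p_event["high"] + (1 - frac_high) * p_event["low"]
for informative in (False, True):
    p_t = treated_event_prob(informative)
    pbar = (p_c + p_t) / 2
    n = 2 * pbar * (1 - pbar) * z**2 / (p_c - p_t) ** 2   # per-arm, two-proportion test
    print(f"informative={informative}: treated event prob {p_t:.4f}, n per arm ~ {n:.0f}")
```

With these numbers the informative scenario attenuates the observed treatment effect more, so a study sized under the noninformative assumption is noticeably underpowered.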

13.
King J, Wong WK. Biometrics 2000, 56(4): 1263-1267.
We propose an algorithm for constructing minimax D-optimal designs for the logistic model when only the ranges of the values for both parameters are assumed known. Properties of these designs are studied and compared with optimal Bayesian designs and Sitter's (1992, Biometrics 48, 1145-1155) minimax D-optimal kk-designs. Examples of minimax D-optimal designs are presented for the logistic and power logistic models, including a dose-response design for rheumatoid arthritis patients.
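A rough sketch (ours, not the authors' algorithm) of the minimax D-optimal idea: over a toy family of symmetric two-point designs, maximize the worst-case log-determinant of the Fisher information across the assumed parameter ranges. The grids and design family are hypothetical simplifications.

```python
import numpy as np
from itertools import product

def log_det_info(xs, a, b):
    """Log-determinant of the Fisher information of an equal-weight 2-point design."""
    p = 1 / (1 + np.exp(-(a + b * np.asarray(xs))))
    M = np.zeros((2, 2))
    for x, w in zip(xs, p * (1 - p)):      # per-point logistic information weight
        M += 0.5 * w * np.array([[1.0, x], [x, x * x]])
    sign, logdet = np.linalg.slogdet(M)
    return logdet if sign > 0 else -np.inf

a_grid = np.linspace(-1.0, 1.0, 5)         # assumed range for the intercept
b_grid = np.linspace(0.5, 2.0, 5)          # assumed range for the slope
candidates = [(-x, x) for x in np.linspace(0.2, 4.0, 40)]   # symmetric 2-point designs

best = max(candidates,
           key=lambda xs: min(log_det_info(xs, a, b)
                              for a, b in product(a_grid, b_grid)))
print("best symmetric two-point design over the grid:", best)
```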

14.
Proschan and Hunsberger (1995) suggest the use of a conditional error function to construct a two-stage test that meets the α level and allows a very flexible reassessment of the sample size after the interim analysis. In this note we show that several adaptive designs can be formulated in terms of such an error function. The conditional power function, defined similarly, provides a simple method for sample size reassessment in adaptive two-stage designs.
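A minimal sketch (ours) of conditional-power-based sample size reassessment for a two-stage z-test. For simplicity it holds the final critical value fixed; in practice the conditional error function of Proschan and Hunsberger is what keeps the type I error at α under such reassessment. All interim values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def conditional_power(z1, n1, n2, delta, alpha=0.025):
    """Probability of final rejection given interim z1, effect delta (in sigma units)."""
    n = n1 + n2
    z_crit = norm.ppf(1 - alpha)
    num = z_crit * np.sqrt(n) - z1 * np.sqrt(n1) - delta * n2
    return 1 - norm.cdf(num / np.sqrt(n2))

def reassess_n2(z1, n1, delta, target=0.9, n2_max=1000):
    """Smallest second-stage size reaching the target conditional power."""
    for n2 in range(1, n2_max + 1):
        if conditional_power(z1, n1, n2, delta) >= target:
            return n2
    return n2_max

z1, n1, delta = 1.2, 50, 0.25              # hypothetical interim results
print("conditional power with planned n2=50:",
      round(conditional_power(z1, n1, 50, delta), 3))
print("n2 needed for 90% conditional power:", reassess_n2(z1, n1, delta))
```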

15.
Decady and Thomas (2000, Biometrics 56, 893-896) propose a first-order corrected Umesh-Loughin-Scherer statistic to test for association in an r × c contingency table with multiple column responses. Agresti and Liu (1999, Biometrics 55, 936-943) point out that such statistics are not invariant to the arbitrary designation of a zero or one to a positive response. This paper shows that, in addition, the proposed testing procedure does not hold the correct size when there are strong pairwise associations between responses.

16.
Clinical trials with adaptive sample size reassessment based on an unblinded analysis of interim results are perhaps the most popular class of adaptive designs (see Elsäßer et al., 2007). Such trials are typically designed by prespecifying a zone for the interim test statistic, termed the promising zone, along with a decision rule for increasing the sample size within that zone. Mehta and Pocock (2011) provided some examples of promising zone designs and discussed several procedures for controlling their type-1 error. They did not, however, address how to choose the promising zone or the corresponding sample size reassessment rule, and proposed instead that the operating characteristics of alternative promising zone designs could be compared by simulation. Jennison and Turnbull (2015) developed an approach based on maximizing expected utility whereby one could evaluate alternative promising zone designs relative to a gold-standard optimal design. In this paper, we show how, by eliciting a few preferences from the trial sponsor, one can construct promising zone designs that are both intuitive and achieve the Jennison and Turnbull (2015) gold standard for optimality.
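A schematic sketch (ours) of a three-zone decision rule in the spirit of Mehta and Pocock (2011); the zone boundaries and sample sizes are hypothetical, and in the actual designs the promising-zone increase is chosen to reach a target conditional power rather than jumping straight to the cap.

```python
def second_stage_n(cp, n2_planned, n2_max,
                   futility=0.10, promising=0.365, favorable=0.80):
    """Second-stage sample size as a function of interim conditional power cp."""
    if cp < futility:
        return 0                 # unpromising enough to stop for futility
    if cp < promising:
        return n2_planned        # unfavorable zone: no increase
    if cp < favorable:
        return n2_max            # promising zone: invest in a larger stage 2
    return n2_planned            # favorable zone: planned size already suffices

for cp in (0.05, 0.20, 0.50, 0.90):
    print(f"cp = {cp:.2f} -> second-stage n = {second_stage_n(cp, 50, 200)}")
```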

17.
O'Quigley J, Paoletti X. Biometrics 2003, 59(2): 430-440.
We investigate the two-group continual reassessment method for a dose-finding study in which we anticipate some ordering between the groups. This is a situation in which, for either group, we have little or almost no knowledge about which of the available dose levels will correspond to the maximum tolerated dose (MTD), but we may have quite strong knowledge concerning which of the two groups will have the higher MTD, if indeed they do not have the same MTD. The motivation for studying this problem came from an investigation into a new therapy for acute leukemia in children. The background to this study is discussed. There were two groups of patients: one group had already received heavy prior therapy, while the second group had received much lighter prior therapy. It was therefore anticipated that the second group would have an MTD higher than, or at least as high as, that of the first. Generally, likelihood methods or, equivalently, the use of noninformative Bayes priors, can be used to model the main aspects of the study, i.e., the MTD for one of the groups, reserving more informative Bayes modeling for the secondary features of the study. These secondary features may simply be the direction of the difference between the MTD levels for the two groups or, possibly, information on the potential gap between the two MTDs.

18.
A common and important problem in clustered sampling designs is that the effect of within-cluster exposures (i.e., exposures that vary within clusters) on outcome may be confounded by both measured and unmeasured cluster-level factors (i.e., measurements that do not vary within clusters). When some of these are inadequately or not accounted for, estimation of this effect through population-averaged models or random-effects models may introduce bias. We accommodate this by developing a general theory for the analysis of clustered data, which enables consistent and asymptotically normal estimation of the effects of within-cluster exposures in the presence of cluster-level confounders. Semiparametric efficient estimators are obtained by solving so-called conditional generalized estimating equations. We compare this approach with a popular proposal by Neuhaus and Kalbfleisch (1998, Biometrics 54, 638-645), who separate the exposure effect into a within- and a between-cluster component within a random intercept model. We find that the latter approach yields consistent and efficient estimators when the model is linear, but is less flexible in terms of model specification. Under nonlinear models, this approach may yield inconsistent and inefficient estimators, though with little bias in most practical settings.
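For the linear case, the Neuhaus and Kalbfleisch decomposition that the comparison refers to is easy to demonstrate (our sketch, using simulated data with a known within-cluster effect of 2):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
clusters = np.repeat(np.arange(50), 4)            # 50 clusters of 4 subjects
u = rng.normal(size=50)[clusters]                 # unobserved cluster-level confounder
x = u + rng.normal(size=200)                      # exposure correlated with u
y = 2.0 * x + 3.0 * u + rng.normal(size=200)      # true within-cluster effect = 2

df = pd.DataFrame({"y": y, "x": x, "cluster": clusters})
df["x_mean"] = df.groupby("cluster")["x"].transform("mean")   # between component
df["x_dev"] = df["x"] - df["x_mean"]                          # within component

# Random-intercept model with the exposure split into its two components:
fit = smf.mixedlm("y ~ x_dev + x_mean", df, groups=df["cluster"]).fit()
print(fit.params[["x_dev", "x_mean"]])            # x_dev recovers ~2; x_mean absorbs u
```

A naive fit of y on x alone would be biased here, since the cluster-level confounder is correlated with the exposure; the deviation term isolates the within-cluster effect.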

19.
Notch signalling acts in virtually every tissue during the lifetime of metazoans. Recent studies have pointed to multiple roles for Notch in stem cells during quiescence, proliferation, temporal specification, and maintenance of the niche architecture. Skeletal muscle has served as an excellent paradigm to examine these diverse roles, as embryonic, foetal, and adult skeletal muscle stem cells have different molecular signatures and functional properties, reflecting their developmental specification during ontogeny. Notably, Notch signalling has emerged as a major regulator of all muscle stem cells. This review will provide an overview of Notch signalling during myogenic development and postnatally, and underscore the seemingly opposing contextual activities of Notch that have led to a reassessment of its role in myogenesis.

20.
Linkage estimation and genetic map construction with genotyped DNA markers in plants preferentially employ a few maximally informative early-generation or recombinant-inbred mating designs. Fitting their recombination models to unconventional designs adapted to cultivar development (series of backcrossing, selfing, haploid-doubling, random-intercrossing, and sib-mating steps) distorts single- and multipoint linkage estimates even with dense marker coverage. Two methods are provided for correct linkage estimation in unconventional designs: fitting a correct multigeneration model, or correcting the estimates produced by fitting a one-generation model with any conventional software. These methods also support calculation of multilocus genotype frequencies and QTL-genotype distributions and are available in software.
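As a flavour of the correction approach (our sketch, using a classical special case rather than the paper's general multigeneration machinery): for recombinant inbred lines produced by repeated selfing, the observed recombinant fraction R relates to the per-meiosis recombination fraction r by Haldane and Waddington's R = 2r/(1 + 2r), which a one-generation estimate can invert.

```python
def r_from_ril_selfing(R):
    """Invert R = 2r / (1 + 2r): per-meiosis r from the observed RIL fraction R."""
    return R / (2 * (1 - R))

for R in (0.10, 0.20, 0.30):
    print(f"observed R = {R:.2f}  ->  corrected r = {r_from_ril_selfing(R):.3f}")
```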
