Similar articles (20 results found)
1.
The determination of the sample size required by a crossover trial typically depends on the specification of one or more variance components. Uncertainty about the value of these parameters at the design stage means that there is often a risk that a trial may be under- or overpowered. For many study designs, this problem has been addressed by considering adaptive design methodology that allows for the re-estimation of the required sample size during a trial. Here, we propose and compare several approaches for this in multitreatment crossover trials. Importantly, regulators favor re-estimation procedures that maintain the blinding of the treatment allocations. We therefore develop blinded estimators for the within- and between-person variances, following simple or block randomization. We demonstrate that, provided an equal number of patients are allocated to sequences that are balanced for period, the proposed estimators following block randomization are unbiased. We further provide a formula for the bias of the estimators following simple randomization. The performance of these procedures, along with that of an unblinded approach, is then examined using three motivating examples, including one based on a recently completed four-treatment four-period crossover trial. Simulation results show that the performance of the proposed blinded procedures is in many cases similar to that of the unblinded approach, and thus they are an attractive alternative.
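As a rough, hedged illustration of the general idea behind blinded re-estimation (not the estimators derived in the paper), the sketch below fits subject and period effects to interim crossover data while ignoring treatment labels, and plugs the resulting blinded within-person variance estimate into a standard two-treatment crossover sample-size formula. The column names, simulated data, and design values are all hypothetical.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from scipy.stats import norm

    def blinded_within_person_var(interim):
        # Blinded ANOVA: subject and period effects only; treatment labels are never used.
        fit = smf.ols("y ~ C(subject) + C(period)", data=interim).fit()
        return fit.mse_resid

    def crossover_total_n(sigma_w2, delta, alpha=0.05, power=0.90):
        # Approximate total subjects for a two-treatment crossover (paired contrast).
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return int(np.ceil(2 * sigma_w2 * (z / delta) ** 2))

    # Hypothetical interim data: 24 subjects, 4 periods, treatment labels unknown.
    rng = np.random.default_rng(1)
    n_subj, n_per = 24, 4
    interim = pd.DataFrame({
        "subject": np.repeat(np.arange(n_subj), n_per),
        "period": np.tile(np.arange(n_per), n_subj),
    })
    interim["y"] = 10 + rng.normal(0, 2, n_subj)[interim["subject"]] + rng.normal(0, 1, len(interim))

    sigma_w2 = blinded_within_person_var(interim)
    print("blinded within-person variance:", round(sigma_w2, 2))
    print("re-estimated total sample size:", crossover_total_n(sigma_w2, delta=0.8))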

2.
There has been much development in Bayesian adaptive designs in clinical trials. In the Bayesian paradigm, the posterior predictive distribution characterizes the future possible outcomes given the currently observed data. Based on the interim time-to-event data, we develop a new phase II trial design by combining the strengths of Bayesian adaptive randomization and the predictive probability approach. By comparing the mean survival times between patients assigned to two treatment arms, more patients are assigned to the better treatment on the basis of adaptive randomization. We continuously monitor the trial using the predictive probability for early termination in the case of superiority or futility. We conduct extensive simulation studies to examine the operating characteristics of four designs: the proposed predictive probability adaptive randomization design, the predictive probability equal randomization design, the posterior probability adaptive randomization design, and the group sequential design. Adaptive randomization designs using predictive probability and posterior probability yield a longer overall median survival time than the group sequential design, but at the cost of a slightly larger sample size. The average sample size using the predictive probability method is generally smaller than that of the posterior probability design.
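A minimal sketch of the two ingredients described above, under simplifying assumptions the paper does not necessarily make: exponential survival times with conjugate Gamma priors on the hazard rates and no censoring. Posterior draws give (i) an adaptive randomization probability that favors the arm with the longer posterior mean survival, and (ii) a crude predictive probability of ultimately declaring superiority; all cutoffs and sample sizes are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    def posterior_rate_draws(times, a=1.0, b=1.0, size=4000):
        # Gamma posterior for an exponential hazard: Gamma(a + events, b + total time).
        return rng.gamma(a + len(times), 1.0 / (b + np.sum(times)), size)

    def randomization_prob(times_a, times_b, tuning=0.5):
        # Tempered P(mean survival A > mean survival B), used as the AR probability for arm A.
        p = np.mean(1.0 / posterior_rate_draws(times_a) > 1.0 / posterior_rate_draws(times_b))
        return p ** tuning / (p ** tuning + (1 - p) ** tuning)

    def predictive_prob_superiority(times_a, times_b, n_future=20, cut=0.95, sims=500):
        # Simulate future patients from the posterior and count eventual "superiority" calls.
        wins = 0
        for _ in range(sims):
            la = posterior_rate_draws(times_a, size=1)[0]
            lb = posterior_rate_draws(times_b, size=1)[0]
            fa = np.concatenate([times_a, rng.exponential(1 / la, n_future)])
            fb = np.concatenate([times_b, rng.exponential(1 / lb, n_future)])
            wins += np.mean(1 / posterior_rate_draws(fa) > 1 / posterior_rate_draws(fb)) > cut
        return wins / sims

    ta = rng.exponential(12.0, 15)   # interim survival times, arm A (months)
    tb = rng.exponential(8.0, 15)    # interim survival times, arm B
    print("P(randomize next patient to A):", round(randomization_prob(ta, tb), 2))
    print("predictive probability of superiority:", predictive_prob_superiority(ta, tb))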

3.
The use of drug combinations in clinical trials has become increasingly common in recent years, since a more favorable therapeutic response may be obtained by combining drugs. In phase I clinical trials, most of the existing methodology recommends one unique dose combination as "optimal," which may result in a subsequent failed phase II clinical trial since other dose combinations may present higher treatment efficacy for the same level of toxicity. We are particularly interested in the setting where it is necessary to wait a few cycles of therapy to observe an efficacy outcome, and where the phase I and phase II patient populations differ with respect to treatment efficacy. Under these circumstances, it is common practice to implement two-stage designs in which a set of maximum tolerated dose combinations is selected in a first stage and then studied in a second stage for treatment efficacy. In this article we present a new two-stage design for early phase clinical trials with drug combinations. In the first stage, binary toxicity data are used to guide the dose escalation and set the maximum tolerated dose combinations. In the second stage, we take the set of maximum tolerated dose combinations recommended from the first stage, which remains fixed throughout the entire second stage, and through adaptive randomization we allocate subsequent cohorts of patients to dose combinations that are likely to have a high posterior median time to progression. The methodology is assessed with extensive simulations and exemplified with a real trial.

4.
Double-digested RADseq (ddRADseq) is an NGS methodology that generates reads from thousands of loci targeted by restriction enzyme cut sites, across multiple individuals. To be statistically sound and economically optimal, a ddRADseq experiment has a preliminary design stage that needs to consider issues related to the selection of enzymes, particular features of the genome of the focal species, possible modifications to the library construction protocol, the coverage needed to minimize missing data, and the potential sources of error that may affect coverage. We present ddradseqtools, a software package to assist ddRADseq experimental design by (i) generating in silico double-digested fragments; (ii) constructing modified ddRADseq libraries using adapters with either one or two indexes and degenerate base regions (DBRs) to quantify PCR duplicates; and (iii) performing the initial steps of the bioinformatics preprocessing of reads. ddradseqtools generates single-end (SE) or paired-end (PE) reads that may bear SNPs and/or indels. The effect of allele dropout and PCR duplicates on coverage is also simulated. The resulting output files can be submitted to alignment and variant-calling pipelines to allow the fine-tuning of parameters. The software was validated with specific tests for the correct operability of the program. The correspondence between in silico settings and parameters from ddRADseq in vitro experiments was assessed to provide guidelines for the reliable performance of the software. ddradseqtools is cost-efficient in terms of execution time, and can be run on computers with a standard CPU and RAM configuration.
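An illustrative sketch (not ddradseqtools itself) of the first task it automates: an in silico double digest that keeps only fragments flanked by cut sites from two different enzymes and falling inside a size-selection window. The recognition motifs, genome length, and size window below are arbitrary placeholders.

    import random
    import re

    def double_digest(seq, site1, site2, size_min=250, size_max=600):
        # Locate all cut sites for both enzymes, then keep adjacent-site fragments
        # that have one end from each enzyme and fall inside the size window.
        cuts = sorted([(m.start(), "E1") for m in re.finditer(site1, seq)] +
                      [(m.start(), "E2") for m in re.finditer(site2, seq)])
        kept = []
        for (p1, e1), (p2, e2) in zip(cuts, cuts[1:]):
            if e1 != e2 and size_min <= p2 - p1 <= size_max:
                kept.append((p1, p2))
        return kept

    random.seed(0)
    genome = "".join(random.choice("ACGT") for _ in range(200_000))   # toy 200 kb sequence
    loci = double_digest(genome, "GAATTC", "CCGG")                    # e.g. EcoRI / MspI motifs
    print(len(loci), "candidate ddRAD loci in the size-selection window")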

5.
This paper proposes a two-stage phase I-II clinical trial design to optimize dose-schedule regimes of an experimental agent within ordered disease subgroups in terms of the toxicity-efficacy trade-off. The design is motivated by settings where prior biological information indicates it is certain that efficacy will improve with ordinal subgroup level. We formulate a flexible Bayesian hierarchical model to account for associations among subgroups and regimes, and to characterize ordered subgroup effects. Sequentially adaptive decision-making is complicated by the problem, arising from the motivating application, that efficacy is scored on day 90 and toxicity is evaluated within 30 days from the start of therapy, while the patient accrual rate is fast relative to these outcome evaluation intervals. To deal with this in a practical manner, we take a likelihood-based approach that treats unobserved toxicity and efficacy outcomes as missing values, and use elicited utilities that quantify the efficacy-toxicity trade-off as a decision criterion. Adaptive randomization is used to assign patients to regimes while accounting for subgroups, with randomization probabilities depending on the posterior predictive distributions of utilities. A simulation study is presented to evaluate the design's performance under a variety of scenarios, and to assess its sensitivity to the amount of missing data, the prior, and model misspecification.

6.
Cluster randomization trials with relatively few clusters have been widely used in recent years for the evaluation of health-care strategies. On average, randomized treatment assignment achieves balance in both known and unknown confounding factors between treatment groups; however, in practice investigators can only introduce a small amount of stratification and cannot balance on all the important variables simultaneously. This limitation arises especially when there are many confounding variables in small studies. Such is the case in the INSTINCT trial, designed to investigate the effectiveness of an education program in enhancing tPA use in stroke patients. In this article, we introduce a new randomization design, the balance match weighted (BMW) design, which applies the optimal matching with constraints technique to a prospective randomized design and aims to minimize the mean squared error (MSE) of the treatment effect estimator. A simulation study shows that, under various confounding scenarios, the BMW design can yield substantial reductions in the MSE of the treatment effect estimator compared to a completely randomized or matched-pair design. The BMW design is also compared with a model-based approach adjusting for the estimated propensity score and with the Robins-Mark-Newey E-estimation procedure in terms of efficiency and robustness of the treatment effect estimator. These investigations suggest that the BMW design is more robust and usually, although not always, more efficient than either of these approaches. The design is also seen to be robust against heterogeneous error. We illustrate these methods by proposing a design for the INSTINCT trial.

7.
Zhang L, Rosenberger WF. Biometrics 2006;62(2):562-569.
We provide an explicit asymptotic method to evaluate the performance of different response-adaptive randomization procedures in clinical trials with continuous outcomes. We use this method to investigate four different response-adaptive randomization procedures. Their performance, especially in power and in skewing treatment assignment toward the better treatment, is thoroughly evaluated theoretically. These results are then verified by simulation. Our analysis concludes that the doubly adaptive biased coin design procedure targeting optimal allocation is the best one for practical use. We also consider the effects of delayed responses and nonstandard responses, for example, Cauchy-distributed responses. We illustrate our procedure by redesigning a real clinical trial.
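The sketch below illustrates one of the procedures named above, the doubly adaptive biased coin design (DBCD), for a two-arm trial with continuous outcomes. Here the target is Neyman allocation rho = sA/(sA + sB), used purely as an example target; the paper also studies other optimal allocations, and all simulation settings below are hypothetical.

    import numpy as np

    def dbcd_prob(n_a, n_total, rho, gamma=2.0):
        # Hu-Zhang allocation function: pulls the observed proportion x toward the target rho.
        x = n_a / n_total
        num = rho * (rho / x) ** gamma
        return num / (num + (1 - rho) * ((1 - rho) / (1 - x)) ** gamma)

    rng = np.random.default_rng(2)
    mu, sd = {"A": 1.0, "B": 0.5}, {"A": 2.0, "B": 1.0}
    arm, y = [], []
    for i in range(200):
        if i < 20:
            a = "A" if i % 2 == 0 else "B"          # burn-in: alternate assignment
        else:
            s = {g: np.std([v for t, v in zip(arm, y) if t == g], ddof=1) for g in ("A", "B")}
            rho = s["A"] / (s["A"] + s["B"])        # Neyman target allocation to arm A
            p_a = dbcd_prob(arm.count("A"), len(arm), rho)
            a = "A" if rng.random() < p_a else "B"
        arm.append(a)
        y.append(rng.normal(mu[a], sd[a]))

    print("final proportion assigned to A:", arm.count("A") / len(arm))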

8.
For genome-wide association studies in family-based designs, a new, universally applicable approach is proposed. Using a modified Liptak's method, we combine the p-value of the family-based association test (FBAT) statistic with the p-value for the Van Steen statistic. The Van Steen statistic is independent of the FBAT statistic and utilizes information that is ignored by traditional FBAT approaches. The new test statistic takes advantage of all available information about the genetic association, while, by virtue of its design, it achieves complete robustness against confounding due to population stratification. The approach is suitable for the analysis of almost any trait type for which FBATs are available, e.g. binary, continuous, time-to-onset, multivariate, etc. The efficiency and the validity of the new approach depend on the specification of a nuisance/tuning parameter and the weight parameters in the modified Liptak's method. For different trait types and ascertainment conditions, we discuss general guidelines for the optimal specification of the tuning parameter and the weight parameters. Our simulation experiments and an application to an Alzheimer study show the validity and the efficiency of the new method, which achieves power levels that are comparable to those of population-based approaches.
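A small sketch of the weighted Liptak (Stouffer) combination that the approach builds on: two independent one-sided p-values are converted to Z-scores and combined with weights. The example weights and the placeholder FBAT / Van Steen p-values are hypothetical; the paper's guidelines tie the weights and the tuning parameter to trait type and ascertainment.

    from scipy.stats import norm

    def liptak_combine(p_fbat, p_screen, w_fbat=1.0, w_screen=1.0):
        # Weighted Z (Liptak/Stouffer) combination of two independent one-sided p-values.
        z = w_fbat * norm.isf(p_fbat) + w_screen * norm.isf(p_screen)
        z /= (w_fbat ** 2 + w_screen ** 2) ** 0.5
        return norm.sf(z)

    print(liptak_combine(0.01, 0.20))            # equal weights
    print(liptak_combine(0.01, 0.20, 2.0, 1.0))  # up-weight the FBAT component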

9.
File and Object Replication in Data Grids
Data replication is a key issue in a Data Grid and can be managed in different ways and at different levels of granularity: for example, at the file level or object level. In the High Energy Physics community, Data Grids are being developed to support the distributed analysis of experimental data. We have produced a prototype data replication tool, the Grid Data Mirroring Package (GDMP), which is in production use in one physics experiment, with middleware provided by the Globus Toolkit used for authentication, data movement, and other purposes. We present here a new, enhanced GDMP architecture and prototype implementation that uses Globus Data Grid tools for efficient file replication. We also explain how this architecture can address object replication issues in an object-oriented database management system. File transfer over wide-area networks requires specific performance tuning in order to achieve optimal data transfer rates. We present performance results obtained with GridFTP, an enhanced version of FTP, and discuss tuning parameters.

10.
We propose a Bayesian two-stage biomarker-based adaptive randomization (AR) design for the development of targeted agents. The design has three main goals: (1) to test the treatment efficacy, (2) to identify prognostic and predictive markers for the targeted agents, and (3) to provide better treatment for patients enrolled in the trial. To treat patients better, both stages are guided by the Bayesian AR based on the individual patient's biomarker profiles. The AR in the first stage is based on a known marker. A Go/No-Go decision can be made in the first stage by testing the overall treatment effects. If a Go decision is made at the end of the first stage, a two-step Bayesian lasso strategy will be implemented to select additional prognostic or predictive biomarkers to refine the AR in the second stage. We use simulations to demonstrate the good operating characteristics of the design, including the control of per-comparison type I and type II errors, a high probability of selecting important markers, and treating more patients with more effective treatments. Bayesian adaptive designs allow for continuous learning. The designs are particularly suitable for the development of multiple targeted agents in the quest for personalized medicine. By estimating treatment effects and identifying relevant biomarkers, the information acquired from the interim data can be used to guide the choice of treatment for each individual patient enrolled in the trial in real time to achieve a better outcome. The design is being implemented in the BATTLE-2 trial in lung cancer at the MD Anderson Cancer Center.

11.
Zhao Y, Wang S. Human Heredity 2009;67(1):46-56.
Study cost remains the major limiting factor for genome-wide association studies, owing to the necessity of genotyping a large number of SNPs in a large number of subjects. Both DNA pooling strategies and two-stage designs have been proposed to reduce genotyping costs. In this study, we propose a cost-effective two-stage approach with a DNA pooling strategy. During stage I, all markers are evaluated on a subset of individuals using DNA pooling. The most promising set of markers is then evaluated with individual genotyping for all individuals during stage II. The goal is to determine the optimal parameters (π_p^sample, the proportion of samples used during stage I with DNA pooling, and π_p^marker, the proportion of markers evaluated during stage II with individual genotyping) that minimize the cost of a two-stage DNA pooling design while maintaining a desired overall significance level and achieving a level of power similar to that of a one-stage individual genotyping design. We considered the effects of three factors on optimal two-stage DNA pooling designs. Our results suggest that, under most scenarios considered, the optimal two-stage DNA pooling design may be much more cost-effective than the optimal two-stage individual genotyping design, which uses individual genotyping during both stages.
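As a rough sketch of the cost structure being optimized, the snippet below compares the genotyping cost of a two-stage DNA pooling design, as a function of π_p^sample and π_p^marker, against one-stage individual genotyping. All unit costs, pool sizes, and study dimensions are hypothetical, and the power and significance constraints that drive the actual optimization in the paper are not modeled here.

    import numpy as np

    def two_stage_pooling_cost(pi_sample, pi_marker, n_subjects=2000, n_markers=500_000,
                               pool_size=50, cost_pool_assay=1.0, cost_ind_genotype=0.10):
        # Stage I: every marker assayed on pools formed from a fraction pi_sample of subjects.
        n_pools = np.ceil(pi_sample * n_subjects / pool_size)
        stage1 = cost_pool_assay * n_markers * n_pools
        # Stage II: the top pi_marker fraction of markers individually genotyped in everyone.
        stage2 = cost_ind_genotype * pi_marker * n_markers * n_subjects
        return stage1 + stage2

    one_stage = 0.10 * 500_000 * 2000    # individual genotyping of all subjects at all markers
    for pi_s, pi_m in [(1.0, 0.001), (0.75, 0.005), (0.50, 0.01)]:
        frac = two_stage_pooling_cost(pi_s, pi_m) / one_stage
        print(f"pi_sample={pi_s:.2f}, pi_marker={pi_m:.3f}: {frac:.1%} of one-stage cost")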

12.
In a typical comparative clinical trial the randomization scheme is fixed at the beginning of the study, and maintained throughout the course of the trial. A number of researchers have championed a randomized trial design referred to as ‘outcome-adaptive randomization.’ In this type of trial, the likelihood of a patient being enrolled to a particular arm of the study increases or decreases as preliminary information becomes available suggesting that treatment may be superior or inferior. While the design merits of outcome-adaptive trials have been debated, little attention has been paid to significant ethical concerns that arise in the conduct of such studies. These include loss of equipoise, lack of processes for adequate informed consent, and inequalities inherent in the research design which could lead to perceptions of injustice that may have negative implications for patients and the research enterprise. This article examines the ethical difficulties inherent in outcome-adaptive trials.

13.
Optimal multivariate matching before randomization
Although blocking or pairing before randomization is a basic principle of experimental design, the principle is almost invariably applied to at most one or two blocking variables. Here, we discuss the use of optimal multivariate matching prior to randomization to improve covariate balance for many variables at the same time, presenting an algorithm and a case study of its performance. The method is useful when all subjects, or large groups of subjects, are randomized at the same time. Optimal matching divides a single group of 2n subjects into n pairs to minimize covariate differences within pairs (the so-called nonbipartite matching problem); then one subject in each pair is picked at random for treatment, the other being assigned to control. Using the baseline covariate data for 132 patients from an actual, unmatched, randomized experiment, we construct 66 pairs matching for 14 covariates. We then create 10,000 unmatched and 10,000 matched randomized experiments by repeatedly randomizing the 132 patients, and compare the covariate balance with and without matching. By every measure, every one of the 14 covariates was substantially better balanced when randomization was performed within matched pairs. Even after covariance adjustment for chance imbalances in the 14 covariates, matched randomizations provided more accurate estimates than unmatched randomizations, the increase in accuracy being equivalent to, on average, a 7% increase in sample size. In randomization tests of no treatment effect, matched randomizations using the signed rank test had substantially higher power than unmatched randomizations using the rank sum test, even when only 2 of 14 covariates were relevant to a simulated response. Unmatched randomizations experienced rare disasters which were consistently avoided by matched randomizations.
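The sketch below reproduces the mechanics on a toy dataset: build a complete graph on subjects weighted by pairwise Mahalanobis distance over the baseline covariates, find a minimum-distance nonbipartite (perfect) matching, then flip a fair coin within each pair. The covariates, distance choice, and sample size are illustrative stand-ins for the case study's 132 patients and 14 covariates.

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(3)
    X = rng.normal(size=(20, 4))                      # 20 subjects, 4 baseline covariates
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))

    G = nx.Graph()
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            d = X[i] - X[j]
            dist = float(np.sqrt(d @ cov_inv @ d))
            G.add_edge(i, j, weight=-dist)            # negate: max-weight == min-distance

    pairs = nx.max_weight_matching(G, maxcardinality=True)   # nonbipartite perfect matching
    assignment = {}
    for i, j in pairs:
        if rng.random() < 0.5:
            assignment[i], assignment[j] = "treatment", "control"
        else:
            assignment[i], assignment[j] = "control", "treatment"
    print(sorted(assignment.items()))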

14.
In clinical trials, sample size reestimation is a useful strategy for mitigating the risk of uncertainty in design assumptions and ensuring sufficient power for the final analysis. In particular, sample size reestimation based on the unblinded interim effect size can often lead to a sample size increase, and statistical adjustment is usually needed for the final analysis to ensure that the type I error rate is appropriately controlled. In the current literature, sample size reestimation and the corresponding type I error control are discussed in the context of maintaining the original randomization ratio across treatment groups, which we refer to as "proportional increase." In practice, not all studies are designed based on an optimal randomization ratio, due to practical reasons. In such cases, when the sample size is to be increased, it is more efficient to allocate the additional subjects such that the randomization ratio is brought closer to an optimal ratio. In this research, we propose an adaptive randomization ratio change when a sample size increase is warranted. We refer to this strategy as "nonproportional increase," as the number of subjects added to each treatment group is no longer proportional to the original randomization ratio. The proposed method boosts power not only through the increase of the sample size, but also via efficient allocation of the additional subjects. The control of the type I error rate is shown analytically. Simulations are performed to illustrate the theoretical results.
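A numerical sketch of why a nonproportional increase can help, using a two-sample z-test with equal variances and a hypothetical starting allocation: beginning from 150/50 subjects on a 3:1 ratio, adding 80 subjects proportionally (60/20) is compared with adding them so the totals move toward 1:1 (10/70). The effect size, variance, and the type I error adjustment needed after an unblinded reestimation are not taken from the paper.

    from scipy.stats import norm

    def power_two_sample(n1, n2, delta=0.5, sigma=1.0, alpha=0.05):
        # Approximate power of a two-sided z-test comparing two means.
        se = sigma * (1 / n1 + 1 / n2) ** 0.5
        return norm.sf(norm.ppf(1 - alpha / 2) - delta / se)

    print("original        150/50 :", round(power_two_sample(150, 50), 3))
    print("proportional   +60/+20 :", round(power_two_sample(210, 70), 3))
    print("nonproportional +10/+70:", round(power_two_sample(160, 120), 3))

With the same 80 additional subjects, moving the allocation toward the optimal 1:1 ratio yields more power than scaling both arms proportionally.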

15.
Wisz MS, Hellinga HW. Proteins 2003;51(3):360-377.
Here we introduce an electrostatic model that treats the complexity of electrostatic interactions in a heterogeneous protein environment by using multiple parameters that take into account variations in protein geometry, local structure, and the type of interacting residues. The optimal values for these parameters were obtained by fitting the model to a large dataset of 260 experimentally determined pKa values distributed over 41 proteins. We obtain fits between the calculated and observed values that are significantly better than the null model. The model performs well on the groups that exhibit large pKa shifts from solution values in response to the protein environment and compares favorably with other, successful continuum models. The empirically determined values of the parameters correlate well with experimentally observed contributions of hydrogen bonds and ion pairs as well as theoretically predicted magnitudes of charge-charge and charge-polar interactions. The magnitudes of the dielectric constants assigned to different regions of the protein rank according to the strength of the relaxation effects expected for the core, boundary, and surface. The electrostatic interactions in this model are pairwise decomposable and can be calculated rapidly. This model is therefore well suited for the large computations required for simulating protein properties and especially for prediction of mutations for protein design.

16.
We use optimal control theory to design a methodology to find locally optimal stimuli for desynchronization of a model of neurons with extracellular stimulation. This methodology yields stimuli which lead to positive Lyapunov exponents, and hence desynchronize a neural population. We analyze this methodology in the presence of interneuron coupling to make predictions about the strength of stimulation required to overcome the synchronizing effects of coupling. This methodology suggests a powerful alternative to pulsatile stimuli for deep brain stimulation, as it uses less energy than pulsatile stimuli and could eliminate the time-consuming tuning process.

17.
Following complete remission of non-Hodgkin's lymphoma by chemotherapy, irradiation or both, 44 patients were studied to assess the value of bacille Calmette-Guérin (BCG) as maintenance therapy. Patients with stage LI, EI or EII disease were allocated at random to receive BCG or no further maintenance therapy, and those with stage LII, LIII, EIII or IV disease received BCG therapy or orally administered cyclophosphamide. BCG had no effect on the duration of remission or the overall survival from the time of randomization. However, after the first recurrence there was a significant improvement in survival in the patients who had received BCG maintenance therapy.

18.
Yin G, Li Y, Ji Y. Biometrics 2006;62(3):777-787.
A Bayesian adaptive design is proposed for dose-finding in phase I/II clinical trials to incorporate the bivariate outcomes, toxicity and efficacy, of a new treatment. Without specifying any parametric functional form for the drug dose-response curve, we jointly model the bivariate binary data to account for the correlation between toxicity and efficacy. After observing all the responses of each cohort of patients, the dosage for the next cohort is escalated, deescalated, or unchanged according to the proposed odds ratio criteria constructed from the posterior toxicity and efficacy probabilities. A novel class of prior distributions is proposed through logit transformations which implicitly imposes a monotonic constraint on dose toxicity probabilities and correlates the probabilities of the bivariate outcomes. We conduct simulation studies to evaluate the operating characteristics of the proposed method. Under various scenarios, the new Bayesian design based on the toxicity-efficacy odds ratio trade-offs exhibits good properties and treats most patients at the desirable dose levels. The method is illustrated with a real trial design for a breast medical oncology study.

19.
In many areas of the world, Potato virus Y (PVY) is one of the most economically important disease problems in seed potatoes. In Taiwan, generation 2 (G2) class certified seed potatoes are required by law to be free of detectable levels of PVY. To meet this standard, it is necessary to perform accurate tests at a reasonable cost. We used a two-stage testing design involving group testing, performed at Taiwan's Seed Improvement and Propagation Station, to identify plants infected with PVY. At the first stage of this two-stage testing design, plants are tested in groups. The second stage involves no retesting for negative test groups and exhaustive testing of all constituent individual samples from positive test groups. In order to minimise costs while meeting government standards, it is imperative to estimate the optimal group size. However, because of limited test accuracy, classification errors for diagnostic tests are inevitable; to get a more accurate estimate, it is necessary to adjust for these errors. Therefore, this paper describes an analysis of diagnostic test data in which specimens are grouped for batched testing to offset costs. The optimal batch size is determined by various cost parameters as well as test sensitivity, specificity and disease prevalence. Here, the Bayesian method is employed to deal with uncertainty in these parameters. Moreover, we developed a computer program to determine the optimal group size for PVY tests such that the expected cost is minimised even when using imperfect diagnostic tests of pooled samples. Results from this research show that, compared with error-free testing, when the presence of diagnostic testing errors is taken into account, the optimal group size becomes smaller. Higher diagnostic testing costs, lower costs of false negatives or smaller prevalence can all lead to a larger optimal group size. Regarding the effects of sensitivity and specificity, optimal group size increases as sensitivity increases; however, specificity has little effect on determining optimal group size. From our simulated study, it is apparent that the Bayesian method can truly update the prior information to more closely approximate the intrinsic characteristics of the parameters of interest. We believe that the results of this study will be useful in the implementation of seed potato certification programmes, particularly those which require zero tolerance for quarantine diseases in certified tubers.
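A back-of-the-envelope sketch of the group-testing arithmetic involved: the expected number of tests per plant under a two-stage (Dorfman-type) scheme with an imperfect assay, minimized over group size for a fixed prevalence, sensitivity, and specificity. The numbers are hypothetical, and the paper's Bayesian treatment additionally prices false negatives and integrates over uncertainty in these parameters, which is omitted here.

    def expected_tests_per_plant(k, prev, se, sp):
        # Stage 1: one pooled test per k plants; stage 2: k individual tests if the pool is positive.
        p_pool_pos = se * (1 - (1 - prev) ** k) + (1 - sp) * (1 - prev) ** k
        return 1 / k + p_pool_pos

    prev, se, sp = 0.02, 0.95, 0.99
    costs = {k: expected_tests_per_plant(k, prev, se, sp) for k in range(2, 41)}
    best_k = min(costs, key=costs.get)
    print("optimal group size:", best_k, "tests per plant:", round(costs[best_k], 3))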

20.
Huang X, Biswas S, Oki Y, Issa JP, Berry DA. Biometrics 2007;63(2):429-436.
The use of multiple drugs in a single clinical trial or as a therapeutic strategy has become common, particularly in the treatment of cancer. Because traditional trials are designed to evaluate one agent at a time, the evaluation of therapies in combination requires specialized trial designs. In place of the traditional separate phase I and II trials, we propose using a parallel phase I/II clinical trial to evaluate simultaneously the safety and efficacy of combination dose levels, and select the optimal combination dose. The trial is started with an initial period of dose escalation; then patients are randomly assigned to admissible dose levels. These dose levels are compared with each other. Bayesian posterior probabilities are used in the randomization to adaptively assign more patients to doses with higher efficacy levels. Combination doses with lower efficacy are temporarily closed and those with intolerable toxicity are eliminated from the trial. The trial is stopped if the posterior probability for safety, efficacy, or futility crosses a prespecified boundary. For illustration, we apply the design to a combination chemotherapy trial for leukemia. We use simulation studies to assess the operating characteristics of the parallel phase I/II trial design, and compare it to a conventional design for a standard phase I and phase II trial. The simulations show that the proposed design saves sample size, has better power, and efficiently assigns more patients to doses with higher efficacy levels.
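A condensed sketch of the decision logic described above, using independent Beta-Binomial posteriors for toxicity and efficacy at each admissible combination dose; the paper's actual model, monitoring boundaries, and trade-off rules may differ, and every count and threshold below is hypothetical.

    import numpy as np
    from scipy.stats import beta

    doses = {                      # dose: (n treated, toxicities, responses)
        "d1": (12, 1, 3),
        "d2": (12, 3, 6),
        "d3": (12, 6, 7),
    }
    TOX_LIMIT, P_ELIM, P_CLOSE = 0.33, 0.90, 0.10

    open_doses, rand_weights = [], []
    for name, (n, tox, eff) in doses.items():
        p_too_toxic = beta.sf(TOX_LIMIT, 1 + tox, 1 + n - tox)     # P(tox rate > limit | data)
        p_eff_beats_null = beta.sf(0.30, 1 + eff, 1 + n - eff)     # P(eff rate > 0.30 | data)
        if p_too_toxic > P_ELIM:
            continue                                               # eliminate for toxicity
        if p_eff_beats_null < P_CLOSE:
            continue                                               # temporarily close for futility
        open_doses.append(name)
        rand_weights.append(p_eff_beats_null)

    probs = np.array(rand_weights) / np.sum(rand_weights)          # adaptive randomization probabilities
    print(dict(zip(open_doses, np.round(probs, 2))))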
