Similar articles (20 results)
1.
Recent statistical methodology for precision medicine has focused on either identification of subgroups with enhanced treatment effects or estimation of optimal treatment decision rules so that treatment is allocated in a way that maximizes, on average, predefined patient outcomes. Less attention has been given to subgroup testing, which involves evaluating whether at least a subgroup of the population benefits from an investigative treatment, compared to some control or standard of care. In this work, we propose a general framework for testing for the existence of a subgroup with enhanced treatment effects based on the difference of the estimated value functions under an estimated optimal treatment regime and a fixed regime that assigns everyone to the same treatment. Our proposed test does not require specification of the parametric form of the subgroup and allows heterogeneous treatment effects within the subgroup. The test applies to cases when the outcome of interest is either a time-to-event or an (uncensored) scalar, and is valid at the exceptional law. To demonstrate the empirical performance of the proposed test, we study the type I error and power of the test statistics in simulations and also apply our test to data from a Phase III trial in patients with hematological malignancies.

2.
Xinyang Huang, Jin Xu. Biometrics, 2020, 76(4): 1310-1318.
Individualized treatment rules (ITRs) recommend treatments based on patient-specific characteristics in order to maximize the expected clinical outcome. At the same time, the risks caused by various adverse events cannot be ignored. In this paper, we propose a method to estimate an optimal ITR that maximizes clinical benefit while having the overall risk controlled at a desired level. Our method works for a general setting of multi-category treatment. The proposed procedure employs two shifted ramp losses to approximate the 0-1 loss in the objective function and constraint, respectively, and transforms the estimation problem into a difference of convex functions (DC) programming problem. A relaxed DC algorithm is used to solve the nonconvex constrained optimization problem. Simulations and a real data example are used to demonstrate the finite sample performance of the proposed method.

3.
It is well known that optimal designs are strongly model dependent. In this article, we apply the Lagrange multiplier approach to the optimal design problem, using a recently proposed model for carryover effects. Generally, crossover designs are not recommended when carryover effects are present and when the primary goal is to obtain an unbiased estimate of the treatment effect. In some cases, baseline measurements are believed to improve design efficiency. This article examines the impact of baselines on optimal designs using two different assumptions about carryover effects during baseline periods and employing a nontraditional crossover design model. As anticipated, baseline observations improve design efficiency considerably for two-period designs, which use the data in the first period only to obtain unbiased estimates of treatment effects, while the improvement is rather modest for three- or four-period designs. Further, we find little additional benefit in measuring baselines at each treatment period as compared to measuring baselines only in the first period. Although our study of baselines did not change the results on optimal designs reported in the literature, the problem of strong model dependency is generally recognized. The advantage of using multiperiod designs is rather evident, as we found that extending two-period designs to three- or four-period designs significantly reduced the variability in estimating the direct treatment effect contrast.

4.
5.
Complete case analyses of complete crossover designs provide an opportunity to make comparisons based on patients who can tolerate all treatments. It is argued that this provides a means of estimating a principal stratum strategy estimand, something which is difficult to do in parallel group trials. While some trial users will consider this a relevant aim, others may be interested in hypothetical strategy estimands, that is, the effect that would be found if all patients completed the trial. Whether these estimands differ importantly is a question of interest to the different users of the trial results. This paper derives the difference between principal stratum strategy and hypothetical strategy estimands, where the former is estimated by a complete-case analysis of the crossover design, and a model for the dropout process is assumed. Complete crossover designs, that is, those where all treatments appear in all sequences, and which compare t treatments over p periods with respect to a continuous outcome are considered. Numerical results are presented for Williams designs with four and six periods. Results from a trial of obstructive sleep apnoea-hypopnoea (TOMADO) are also used for illustration. The results demonstrate that the percentage difference between the estimands is modest, exceeding 5% only when the trial has been severely affected by dropouts or if the within-subject correlation is low.

6.
Individualized treatment regimes (ITRs) aim to recommend treatments based on patient-specific characteristics in order to maximize the expected clinical outcome. Outcome weighted learning approaches have been proposed for this optimization problem, with a primary focus on the binary treatment case. Many require assumptions on the outcome value or the randomization mechanism. In this paper, we propose a general framework for multicategory ITRs using a generic surrogate risk. The proposed method accommodates situations where the outcome takes negative values and/or the propensity score is unknown. Theoretical results on Fisher consistency, excess risk, and risk consistency are established. In practice, we recommend using a differentiable convex loss for computational optimization. We demonstrate the superiority of the proposed method under the multinomial deviance risk over some existing methods by simulation and by application to data from a clinical trial.
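As a rough illustration of the outcome-weighted, surrogate-risk idea described above (not the authors' estimator), the sketch below fits a weighted multiclass logistic regression, whose loss is a multinomial deviance. The simulated data, the known propensities, and the nonnegativity shift of the outcome are all simplifying assumptions of the sketch; the paper's method is designed precisely to avoid the last two.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p, k = 500, 5, 3                          # patients, covariates, treatment arms
X = rng.normal(size=(n, p))                  # patient covariates
A = rng.integers(0, k, size=n)               # observed treatments, randomized 1:1:1
propensity = np.full(n, 1.0 / k)             # known randomization probabilities
Y = 1.0 + X[:, 0] * (A == 0) + X[:, 1] * (A == 1) + rng.normal(scale=0.5, size=n)

# Outcome-weighted learning: weight each observation by outcome / propensity,
# shifting outcomes to be nonnegative (an assumption of this sketch only).
w = (Y - Y.min()) / propensity

# Multiclass logistic regression minimizes a weighted multinomial deviance,
# one example of the surrogate risks discussed in the abstract.
clf = LogisticRegression(max_iter=1000).fit(X, A, sample_weight=w)
recommended = clf.predict(X)                 # estimated ITR: recommended arm per patient
```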

7.
A treatment regime is a rule that assigns a treatment, among a set of possible treatments, to a patient as a function of his/her observed characteristics, hence "personalizing" treatment to the patient. The goal is to identify the optimal treatment regime that, if followed by the entire population of patients, would lead to the best outcome on average. Given data from a clinical trial or observational study, for a single treatment decision, the optimal regime can be found by assuming a regression model for the expected outcome conditional on treatment and covariates, where, for a given set of covariates, the optimal treatment is the one that yields the most favorable expected outcome. However, treatment assignment via such a regime is suspect if the regression model is incorrectly specified. Recognizing that, even if misspecified, such a regression model defines a class of regimes, we instead consider finding the optimal regime within such a class by finding the regime that optimizes an estimator of overall population mean outcome. To take into account possible confounding in an observational study and to increase precision, we use a doubly robust augmented inverse probability weighted estimator for this purpose. Simulations and application to data from a breast cancer clinical trial demonstrate the performance of the method.
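A minimal sketch of the doubly robust (augmented inverse probability weighted) value estimator at the heart of this approach, for a single binary decision and a fixed candidate regime. The simulated data, working models, and example regime are illustrative only; the paper goes further by maximizing this value over a parametric class of regimes.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 2))
A = rng.binomial(1, 0.5, size=n)                          # observed treatment
Y = X[:, 0] + A * (1.0 - 2.0 * X[:, 1]) + rng.normal(size=n)

def value_aipw(regime, X, A, Y):
    """AIPW estimate of the mean outcome if everyone followed `regime`."""
    d = regime(X).astype(int)
    # Propensity model (known to be 0.5 in a randomized trial; modeled for generality)
    ps = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]
    ps_d = np.where(d == 1, ps, 1.0 - ps)
    # Outcome regressions fitted separately in each arm, evaluated at the regime's choice
    mu = np.empty(len(Y))
    for a in (0, 1):
        fit = LinearRegression().fit(X[A == a], Y[A == a])
        mu[d == a] = fit.predict(X[d == a])
    follows = (A == d).astype(float)
    return np.mean(follows * Y / ps_d - (follows - ps_d) / ps_d * mu)

# Example candidate regime: treat only when the second covariate is negative
print(value_aipw(lambda x: x[:, 1] < 0, X, A, Y))
```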

8.
Yin G, Shen Y. Biometrics, 2005, 61(2): 362-369.
Clinical trial designs involving correlated data often arise in biomedical research. The intracluster correlation needs to be taken into account to ensure the validity of sample size and power calculations. In contrast to fixed-sample designs, we propose a flexible trial design with adaptive monitoring and inference procedures. The total sample size is not predetermined, but is adaptively re-estimated using observed data via a systematic mechanism. The final inference is based on a weighted average of the block-wise test statistics using generalized estimating equations, where the weight for each block depends on the cumulated data from the ongoing trial. When there are no significant treatment effects, the devised stopping rule allows for early termination of the trial and acceptance of the null hypothesis. The proposed design updates information regarding both the effect size and the within-cluster correlation based on the cumulated data in order to achieve the desired power. Estimation of the parameter of interest and construction of its confidence interval are proposed. We conduct simulation studies to examine the operating characteristics and illustrate the proposed method with an example.

9.
Cook RJ, Wei W, Yi GY. Biometrics, 2005, 61(3): 692-701.
We derive semiparametric methods for estimating and testing treatment effects when censored recurrent event data are available over multiple periods. These methods are based on estimating functions motivated by a working "mixed-Poisson" assumption under which conditioning can eliminate subject-specific random effects. Robust pseudoscore test statistics are obtained via "sandwich" variance estimation. The relative efficiency of conditional versus marginal analyses is assessed analytically under a mixed time-homogeneous Poisson model. The robustness and empirical power of the semiparametric approach are assessed through simulation. Adaptations to handle recurrent events arising in crossover trials are described and these methods are applied to data from a two-period crossover trial of patients with bronchial asthma.

10.
Randomized crossover trials are clinical experiments in which participants are assigned randomly to a sequence of treatments and each participant serves as his/her own control in estimating the treatment effect. We need a better understanding of the validity of their results to enable recommendations as to which crossover trials can be included in meta-analyses and to inform the development of reporting guidelines.

Objective

To evaluate the characteristics of the design, analysis, and reporting of crossover trials for inclusion in a meta-analysis of treatment for primary open-angle glaucoma and to provide empirical evidence to inform the development of tools to assess the validity of the results from crossover trials and reporting guidelines.

Methods

We searched MEDLINE, EMBASE, and Cochrane’s CENTRAL register for randomized crossover trials for a systematic review and network meta-analysis we are conducting. Two individuals independently screened the search results for eligibility and abstracted data from each included report.

Results

We identified 83 crossover trials eligible for inclusion. Issues affecting the risk of bias in crossover trials, such as carryover, period effects and missing data, were often ignored. Some trials failed to accommodate the within-individual differences in the analysis. For a large proportion of the trials, the authors tabulated the results as if they arose from a parallel design. Precision estimates properly accounting for the paired nature of the design were often unavailable from the study reports; consequently, to include trial findings in a meta-analysis would require further manipulation and assumptions.

Conclusions

The high proportion of poorly reported analyses and results has the potential to affect whether crossover data should or can be included in a meta-analysis. There is a pressing need for reporting guidelines for crossover trials.

11.
We propose a Bayesian two-stage biomarker-based adaptive randomization (AR) design for the development of targeted agents. The design has three main goals: (1) to test treatment efficacy, (2) to identify prognostic and predictive markers for the targeted agents, and (3) to provide better treatment for patients enrolled in the trial. To treat patients better, both stages are guided by Bayesian AR based on each individual patient's biomarker profile. The AR in the first stage is based on a known marker. A Go/No-Go decision can be made in the first stage by testing the overall treatment effects. If a Go decision is made at the end of the first stage, a two-step Bayesian lasso strategy is implemented to select additional prognostic or predictive biomarkers to refine the AR in the second stage. We use simulations to demonstrate the good operating characteristics of the design, including the control of per-comparison type I and type II errors, a high probability of selecting important markers, and treating more patients with more effective treatments. Bayesian adaptive designs allow for continuous learning and are particularly suitable for the development of multiple targeted agents in the quest for personalized medicine. By estimating treatment effects and identifying relevant biomarkers, the information acquired from the interim data can be used to guide the choice of treatment for each individual patient enrolled in the trial in real time to achieve a better outcome. The design is being implemented in the BATTLE-2 trial in lung cancer at the MD Anderson Cancer Center.
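As a toy illustration of the adaptive randomization ingredient only (the two-stage structure, the Go/No-Go test, and the Bayesian lasso marker selection are not reproduced), the sketch below randomizes each new patient with probability proportional to the posterior probability that an arm is best, under Beta-Bernoulli posteriors and made-up response rates.

```python
import numpy as np

rng = np.random.default_rng(5)
true_response = {"control": 0.30, "targeted": 0.50}     # hypothetical response rates
successes = {arm: 0 for arm in true_response}
failures = {arm: 0 for arm in true_response}

for patient in range(200):
    # Posterior draws from Beta(1 + successes, 1 + failures) for each arm,
    # and the posterior probability that the targeted arm is best.
    draws = {arm: rng.beta(1 + successes[arm], 1 + failures[arm], 2000)
             for arm in true_response}
    p_best = np.mean(draws["targeted"] > draws["control"])
    arm = rng.choice(["control", "targeted"], p=[1 - p_best, p_best])
    if rng.random() < true_response[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

print(successes, failures)   # more patients should end up on the better arm
```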

12.
Frangakis CE, Baker SG. Biometrics, 2001, 57(3): 899-908.
For studies with treatment noncompliance, analyses have recently been developed to better estimate treatment efficacy. However, the advantage and cost of measuring compliance data have implications for the study design that have not been as systematically explored. In order to estimate treatment efficacy better at lower cost, we propose a new class of compliance subsampling (CSS) designs in which, after subjects are assigned treatment, compliance behavior is measured for only subgroups of subjects. The sizes of the subsamples are allowed to depend on the treatment assignment, the assignment probability, the total sample size, the anticipated distributions of outcome and compliance, and the cost parameters of the study. The CSS design methods relate to prior work (i) on two-phase designs in which a covariate is subsampled and (ii) on causal inference, because the subsampled postrandomization compliance behavior is not the true covariate of interest. For each CSS design, we develop efficient estimation of treatment efficacy under a binary outcome and all-or-none observed compliance. We then derive a minimal-cost CSS design that achieves a required precision for estimating treatment efficacy. We compare the properties of the CSS design to those of conventional protocols in a study of patient choices for medical care at the end of life.

13.
Recently, personalized medicine has received great attention as a way to improve safety and effectiveness in drug development. Personalized medicine aims to provide medical treatment that is tailored to the patient's characteristics, such as genomic biomarkers and disease history, so that the benefit of treatment can be optimized. Subpopulation identification divides patients into several subgroups, where each subgroup corresponds to an optimal treatment. For two subgroups, the traditional approach when the outcome is a survival endpoint is to fit a multivariate Cox proportional hazards model and use it to calculate a risk score, with the median commonly chosen as the cutoff value to separate patients. However, using the median as the cutoff is quite subjective and can be inappropriate when the data are imbalanced. Here, we propose a novel tree-based method that adopts the algorithm of relative risk trees to identify patient subgroups. After growing a relative risk tree, we apply k-means clustering to group the terminal nodes based on their averaged covariates. We adopt an ensemble bagging method to improve on a single tree, since the performance of a single tree is well known to be unstable. A simulation study is conducted to compare the performance of our proposed method with the multivariate Cox model. Applications of our proposed method to two public cancer data sets are also presented for illustration.
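A rough sketch of the two approaches compared above, on simulated data. The authors' relative risk tree is not available in scikit-learn, so an ordinary regression tree fitted to Cox risk scores stands in for it here; the median-cutoff comparator and the k-means grouping of terminal nodes follow the description in the abstract, and the bagging step is omitted.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.tree import DecisionTreeRegressor
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame(rng.normal(size=(n, 3)), columns=["x1", "x2", "x3"])
df["time"] = rng.exponential(scale=np.exp(-df["x1"]))      # survival times
df["event"] = rng.binomial(1, 0.8, size=n)                 # 1 = event observed, 0 = censored

# Traditional comparator described in the abstract: Cox risk score split at the median.
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
risk = np.asarray(cph.predict_partial_hazard(df)).ravel()
median_subgroup = (risk > np.median(risk)).astype(int)

# Tree-based alternative (with a regression tree as a stand-in for the relative risk
# tree): grow the tree, then cluster its terminal nodes by k-means on the averaged
# covariates within each node.
X = df[["x1", "x2", "x3"]].to_numpy()
tree = DecisionTreeRegressor(max_leaf_nodes=8, min_samples_leaf=20).fit(X, risk)
leaf_id = tree.apply(X)                                    # terminal node of each patient
leaves = np.unique(leaf_id)
leaf_means = np.vstack([X[leaf_id == leaf].mean(axis=0) for leaf in leaves])
leaf_cluster = KMeans(n_clusters=2, n_init=10).fit_predict(leaf_means)
mapping = dict(zip(leaves, leaf_cluster))
tree_subgroup = np.array([mapping[leaf] for leaf in leaf_id])
```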

14.
Traditional Chinese medicine (TCM) embodies the core element of precision medicine: individualized diagnosis and treatment. Genomic technologies have already been applied widely in TCM, including constitution-based precision prevention, individualized diagnosis, herbal and acupuncture treatment, the complexity of herbal medicines, and TCM interventions on the human microbial environment. This matches the approach and goals of precision medicine, which analyzes population genomic information, lifestyle, and environmental factors to understand the mechanisms of disease onset and progression and to develop drugs against relevant targets. In the future, integrating multi-omics technologies with the distinctive features of TCM will support precision-medicine research on TCM and help achieve the goals of TCM modernization and precision treatment.

15.
Common bile duct stones are a common and frequently occurring condition in biliary surgery. With the arrival of the era of "precision medicine", precise diagnosis and precise individualized treatment of common bile duct stones, tailored to the individual patient and disease, are of great importance. Medical imaging serves as the "navigation" for precision treatment of common bile duct stones; the available examination methods are diverse, each with its own advantages and limitations, and precision treatment places special demands on them. This article therefore reviews the advantages and limitations of the various imaging and endoscopic examination methods for common bile duct stones and proposes an imaging diagnostic strategy for the era of precision medicine, in the hope of providing a reference for the precise diagnosis of common bile duct stones.

16.
Personalized medicine optimizes patient outcome by tailoring treatments to patient-level characteristics. This approach is formalized by dynamic treatment regimes (DTRs): decision rules that take patient information as input and output recommended treatment decisions. The DTR literature has seen the development of increasingly sophisticated causal inference techniques that attempt to address the limitations of our typically observational datasets. Often overlooked, however, is that in practice most patients may be expected to receive optimal or near-optimal treatment, and so the outcome used as part of a typical DTR analysis may provide limited information. In light of this, we propose considering a more standard analysis: ignore the outcome and elicit an optimal DTR by modeling the observed treatment as a function of relevant covariates. This offers a far simpler analysis and, in some settings, improved optimal treatment identification. To distinguish this approach from more traditional DTR analyses, we term it reward ignorant modeling, and also introduce the concept of multimethod analysis, whereby different analysis methods are used in settings with multiple treatment decisions. We demonstrate this concept through a variety of simulation studies, and through analysis of data from the International Warfarin Pharmacogenetics Consortium, which also serve as motivation for this work.
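A minimal sketch of the core idea on simulated data: the outcome is never used, and the observed treatment is modeled as a function of covariates, on the premise that most patients already receive near-optimal care. The data-generating mechanism and the logistic model are assumptions of this sketch, not of the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 800
X = rng.normal(size=(n, 4))                 # e.g., clinical and genetic covariates
# Suppose physicians give the truly best treatment to 90% of patients.
best = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
A = np.where(rng.random(n) < 0.9, best, 1 - best)

model = LogisticRegression().fit(X, A)      # no outcome is used anywhere
recommended = model.predict(X)              # elicited treatment rule
print("agreement with the truly optimal rule:", np.mean(recommended == best))
```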

17.
Zhao Y, Zeng D, Socinski MA, Kosorok MR. Biometrics, 2011, 67(4): 1422-1433.
Typical regimens for advanced metastatic stage IIIB/IV non-small cell lung cancer (NSCLC) consist of multiple lines of treatment. We present an adaptive reinforcement learning approach to discover optimal individualized treatment regimens from a specially designed clinical trial (a "clinical reinforcement trial") of an experimental treatment for patients with advanced NSCLC who have not previously been treated with systemic therapy. In addition to the complexity of the problem of selecting optimal compounds for first- and second-line treatments based on prognostic factors, another primary goal is to determine the optimal time to initiate second-line therapy, either immediately or delayed after induction therapy, yielding the longest overall survival time. A reinforcement learning method called Q-learning is utilized, which involves learning an optimal regimen from patient data generated by the clinical reinforcement trial. The Q-function is approximated with time-indexed parameters using a modification of support vector regression that can utilize censored data. Within this framework, a simulation study shows that the procedure can extract optimal regimens for two lines of treatment directly from clinical data without prior knowledge of the treatment effect mechanism. In addition, we demonstrate that the design reliably selects the best initial time for second-line therapy while taking into account the heterogeneity of NSCLC across patients.
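A minimal two-stage Q-learning sketch on simulated data, using ordinary linear regression for the Q-functions and working backward from the last decision. The trial's actual analysis approximates the Q-function with a censoring-aware modification of support vector regression and also optimizes the timing of second-line therapy, neither of which is attempted here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 1000
X1 = rng.normal(size=(n, 2)); A1 = rng.binomial(1, 0.5, n)      # stage-1 state, action
X2 = X1 + rng.normal(scale=0.5, size=(n, 2)); A2 = rng.binomial(1, 0.5, n)
R1 = 0.2 * A1 * X1[:, 0] + rng.normal(scale=0.1, size=n)        # stage-1 reward
R2 = A2 * X2[:, 1] + rng.normal(scale=0.1, size=n)              # stage-2 reward

def design(X, A):
    """Features for a Q-function with treatment-by-covariate interactions."""
    return np.column_stack([X, A[:, None] * X, A])

# Stage 2: regress the stage-2 reward on state and action, then maximize over actions.
q2 = LinearRegression().fit(design(X2, A2), R2)
v2 = np.maximum(q2.predict(design(X2, np.zeros(n))), q2.predict(design(X2, np.ones(n))))

# Stage 1: regress the reward-to-go (stage-1 reward plus optimal stage-2 value).
q1 = LinearRegression().fit(design(X1, A1), R1 + v2)

# Estimated optimal decisions at each stage
a1_opt = (q1.predict(design(X1, np.ones(n))) > q1.predict(design(X1, np.zeros(n)))).astype(int)
a2_opt = (q2.predict(design(X2, np.ones(n))) > q2.predict(design(X2, np.zeros(n)))).astype(int)
```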

18.
The crossover is a popular and efficient trial design used in the context of patient heterogeneity to assess the effect of treatments that act relatively quickly and whose benefit disappears with discontinuation. Each patient can serve as her own control as within-individual treatment and placebo responses are compared. Conventional wisdom is that these designs are not appropriate for absorbing binary endpoints, such as death or HIV infection. We explore the use of crossover designs in the context of these absorbing binary endpoints and show that they can be more efficient than the standard parallel group design when there is heterogeneity in individuals' risks. We also introduce a new two-period design where first period "survivors" are rerandomized for the second period. This design combines the crossover design with the parallel design and achieves some of the efficiency advantages of the crossover design while ensuring that the second period groups are comparable by randomization. We discuss the validity of the new designs and evaluate both a mixture model and a modified Mantel-Haenszel test for inference. The mixture model assumes no carryover or period effects while the Mantel-Haenszel approach conditions out period effects. Simulations are used to compare the different designs and an example is provided to explore practical issues in implementation.

19.
Willan AR. Biometrics, 1988, 44(1): 211-218.
In a two-period crossover trial where residual carryover is suspected, it is often advised that first-period data only be used in an analysis appropriate for a parallel design. However, it has been shown (Willan and Pater, 1986, Biometrics 42, 593-599) that the crossover analysis is more powerful than the parallel analysis if the residual carryover, expressed as a proportion of treatment effect, is less than 2 − √(2(1 − ρ)), where ρ is the intrasubject correlation coefficient. Choosing between the analyses based on the empirical evaluation of this condition is equivalent to choosing the analysis with the larger corresponding test statistic. Approximate nominal significance levels are presented that maintain the desired level when basing the analysis on the maximum test statistic. Furthermore, the power and precision of the analysis based on the maximum test statistic are compared to the crossover and parallel analyses.
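A small numeric illustration of the decision rule quoted above: for a given intrasubject correlation ρ, the crossover analysis is preferred whenever the relative carryover falls below 2 − √(2(1 − ρ)).

```python
# Threshold from Willan and Pater (1986) as quoted in the abstract: prefer the
# crossover analysis when carryover, as a proportion of the treatment effect,
# is below 2 - sqrt(2 * (1 - rho)).
import math

def carryover_threshold(rho):
    return 2.0 - math.sqrt(2.0 * (1.0 - rho))

for rho in (0.3, 0.5, 0.7, 0.9):
    print(f"rho = {rho:.1f}: crossover preferred if relative carryover < "
          f"{carryover_threshold(rho):.3f}")
```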

20.
In recent years, the number of studies using a cluster-randomized design has grown dramatically. In addition, the cluster-randomized crossover design has been touted as a methodological advance that can increase the efficiency of cluster-randomized studies in certain situations. While the cluster-randomized crossover trial has become a popular tool, standards of design, analysis, reporting, and implementation have not been established for this emergent design. We address one particular aspect of cluster-randomized and cluster-randomized crossover trial design: estimating statistical power. We present a general framework for estimating power via simulation in cluster-randomized studies with or without one or more crossover periods. We have implemented this framework in the clusterPower software package for R, freely available online from the Comprehensive R Archive Network. Our simulation framework is easy to implement, and users may customize the methods used for data analysis. We give four examples of using the software in practice. The clusterPower package could play an important role in the design of future cluster-randomized and cluster-randomized crossover studies. This work is the first to establish a universal method for calculating power for both cluster-randomized and cluster-randomized crossover clinical trials. More research is needed to develop standardized and recommended methodology for cluster-randomized crossover studies.
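In the same spirit as the simulation framework described above (but not using the clusterPower package itself), a minimal simulation-based power calculation for a two-arm parallel cluster-randomized design might look like the sketch below. The design parameters are illustrative, and the analysis is a simple t-test on cluster-level means.

```python
import numpy as np
from scipy import stats

def simulated_power(n_clusters_per_arm=12, cluster_size=20, effect=0.3,
                    icc=0.05, sigma2=1.0, alpha=0.05, n_sims=2000, seed=0):
    """Estimate power by simulating clustered outcomes and testing cluster means."""
    rng = np.random.default_rng(seed)
    tau2 = icc * sigma2                      # between-cluster variance implied by the ICC
    within = sigma2 - tau2                   # within-cluster (residual) variance
    rejections = 0
    for _ in range(n_sims):
        means = []
        for delta in (0.0, effect):          # control arm, intervention arm
            b = rng.normal(0.0, np.sqrt(tau2), n_clusters_per_arm)   # cluster effects
            y = delta + b[:, None] + rng.normal(0.0, np.sqrt(within),
                                                (n_clusters_per_arm, cluster_size))
            means.append(y.mean(axis=1))     # analyze cluster-level means
        _, p = stats.ttest_ind(means[1], means[0])
        rejections += p < alpha
    return rejections / n_sims

print(simulated_power())   # estimated power under the assumed design
```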
