Similar Documents
20 similar documents retrieved.
1.
Rosner GL. Biometrics 2005, 61(1): 239-245.
This article presents an aid for monitoring clinical trials with failure-time endpoints based on Bayesian nonparametric analyses of the data. If one assumes a Dirichlet process prior for the survival distribution, the posterior distribution in the presence of censoring is a mixture of Dirichlet processes. Using Gibbs sampling, one can generate random samples from the posterior distribution. With samples from the posterior distributions of treatment-specific survival curves, one can evaluate the current evidence in favor of stopping or continuing the trial based on summary statistics of these curves. Because the method is nonparametric, it can easily be used, for example, when hazards cross or are suspected to cross, and when relevant clinical decisions may hinge on estimating when the integral of the difference between the curves is expected to become positive in favor of the new but more toxic therapy. An example based on an actual trial illustrates the method.
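A minimal sketch of the idea, not the paper's implementation: discretize time onto a grid, place a Dirichlet prior on the bin probabilities of each arm, impute right-censored observations in a Gibbs step, and summarize posterior draws of the two survival curves by the probability that the integrated difference favors the experimental arm. The grid, the prior mass `alpha`, the function `posterior_survival_draws`, and the simulated data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_survival_draws(times, events, grid, alpha=2.0, n_draws=500, burn=100):
    """Gibbs sampler for a discretized Dirichlet-prior survival model with censoring.

    times  : observed event or censoring times
    events : 1 = event observed, 0 = right-censored
    grid   : right edges of time bins; a final open-ended bin is added
    Returns an array of posterior survival-curve draws evaluated at `grid`.
    """
    n_bins = len(grid) + 1                      # last bin catches times beyond the grid
    base = np.full(n_bins, alpha / n_bins)      # uniform base measure with total mass alpha
    bins = np.minimum(np.searchsorted(grid, times), n_bins - 1)
    cens = events == 0
    imputed = bins.copy()                       # start censored subjects in their censoring bin

    draws = []
    for it in range(n_draws + burn):
        counts = np.bincount(imputed, minlength=n_bins)
        p = rng.dirichlet(base + counts)        # draw bin probabilities given complete data
        # impute each censored subject's event bin from bins at or after its censoring bin
        for i in np.where(cens)[0]:
            tail = p[bins[i]:].copy()
            tail /= tail.sum()
            imputed[i] = bins[i] + rng.choice(len(tail), p=tail)
        if it >= burn:
            surv = 1.0 - np.cumsum(p)[:len(grid)]   # S(t) at each grid point
            draws.append(surv)
    return np.array(draws)

# toy two-arm monitoring example (simulated data, purely illustrative)
grid = np.linspace(0.5, 10, 20)
t_ctrl = rng.exponential(4.0, 60);  e_ctrl = (rng.uniform(size=60) < 0.8).astype(int)
t_exp  = rng.exponential(6.0, 60);  e_exp  = (rng.uniform(size=60) < 0.8).astype(int)

S_ctrl = posterior_survival_draws(t_ctrl, e_ctrl, grid)
S_exp  = posterior_survival_draws(t_exp,  e_exp,  grid)

# posterior probability that the integrated difference S_exp - S_ctrl is positive
diff = S_exp - S_ctrl
area = ((diff[:, :-1] + diff[:, 1:]) / 2 * np.diff(grid)).sum(axis=1)  # trapezoid rule
print("P(integrated survival difference favors experimental arm) ≈", (area > 0).mean())
```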

2.
Bayesian design and analysis of active control clinical trials
Simon R. Biometrics 1999, 55(2): 484-487.
We consider the design and analysis of active control clinical trials, i.e., clinical trials comparing an experimental treatment E to a control treatment C considered to be effective. Direct comparison of E to placebo P, or to no treatment, is sometimes ethically unacceptable. Much discussion of the design and analysis of such trials has focused on whether the comparison of E to C should be based on a test of the null hypothesis of equivalence, on a test of a nonnull hypothesis that the difference is of some minimally medically important size delta, or on one- or two-sided confidence intervals. These approaches are essentially the same for study planning, and they all suffer from arbitrariness in specifying the size of the difference delta that must be excluded. We propose an alternative Bayesian approach to the design and analysis of active control trials. We derive the posterior probability that E is superior to P, or that E is at least k% as good as C and C is more effective than P. We also derive approximations for use with logistic and proportional hazards models. Selection of prior distributions is discussed, and results are illustrated using data from an active control trial of a drug for the treatment of unstable angina.
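A hedged sketch of how such a posterior probability might be computed for binary endpoints, using conjugate Beta models and Monte Carlo sampling; the paper's own derivations (including the logistic and proportional hazards approximations) are not reproduced. The uniform priors, the function `retention_probability`, and the example counts are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def retention_probability(y_e, n_e, y_c, n_c, y_p_hist, n_p_hist,
                          k=0.5, n_draws=100_000):
    """Monte Carlo estimate of P(p_C > p_P and p_E - p_P >= k * (p_C - p_P)).

    y_*, n_* are responder counts and sample sizes; the placebo information
    comes from historical data only. Uniform Beta(1, 1) priors throughout
    (an illustrative choice, not the paper's recommendation).
    """
    p_e = rng.beta(1 + y_e, 1 + n_e - y_e, n_draws)
    p_c = rng.beta(1 + y_c, 1 + n_c - y_c, n_draws)
    p_p = rng.beta(1 + y_p_hist, 1 + n_p_hist - y_p_hist, n_draws)
    ok = (p_c > p_p) & (p_e - p_p >= k * (p_c - p_p))
    return ok.mean()

# hypothetical numbers: E and C from the active control trial, P from history
print(retention_probability(y_e=120, n_e=200, y_c=130, n_c=200,
                            y_p_hist=80, n_p_hist=200, k=0.5))
```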

3.
4.
Li Z, Li Y. Biometrics 2000, 56(1): 134-138.
It is known that using statistical stopping rules in clinical trials can create artificial heterogeneity of treatment effects in overviews of related trials (Hughes, Freedman, and Pocock, 1992, Biometrics 48, 41-53). If the true treatment effect being tested is small, as is often the case, the homogeneity test of DerSimonian and Laird (1986, Controlled Clinical Trials 7, 177-188) violates the nominal size of the test severely. This paper provides a new homogeneity test that preserves the size of the test more accurately. The operating characteristics of the new test are examined through simulations.
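For reference, a short implementation of the standard Q homogeneity statistic used in the DerSimonian-Laird framework, which is the test the paper critiques; the paper's new test is not shown. The function name and example effect estimates are illustrative.

```python
import numpy as np
from scipy import stats

def cochran_q_test(effects, variances):
    """Standard Q homogeneity test used in the DerSimonian-Laird framework.

    effects   : per-trial treatment effect estimates (e.g., log hazard ratios)
    variances : their estimated sampling variances
    Returns (Q, p-value); Q ~ chi-square on k-1 df under homogeneity.
    """
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)      # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)          # fixed-effect pooled estimate
    q = np.sum(w * (effects - pooled) ** 2)
    df = len(effects) - 1
    return q, stats.chi2.sf(q, df)

# toy overview of five trials (illustrative numbers only)
q, p = cochran_q_test([0.10, -0.05, 0.30, 0.02, 0.15],
                      [0.04, 0.05, 0.06, 0.03, 0.05])
print(f"Q = {q:.2f}, p = {p:.3f}")
```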

5.
Bayesian clinical trial designs offer the possibility of a substantially reduced sample size, increased statistical power, and reductions in cost and ethical hazard. However, when prior and current information conflict, Bayesian methods can lead to higher than expected type I error, as well as to a costlier and lengthier trial. This motivates an investigation of the feasibility of hierarchical Bayesian methods for incorporating historical data that are adaptively robust to prior information that reveals itself to be inconsistent with the accumulating experimental data. In this article, we present several models that allow the commensurability of the information in the historical and current data to determine how much historical information is used. A primary tool is an elaboration of the traditional power prior approach based upon a measure of commensurability for Gaussian data. We compare the frequentist performance of several methods using simulations, and close with an example of a colon cancer trial that illustrates a linear models extension of our adaptive borrowing approach. Our proposed methods produce more precise estimates of the model parameters and, in particular, confer statistical significance to the observed reduction in tumor size for the experimental regimen as compared with the control regimen.
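A simplified sketch of power-prior borrowing for a Gaussian mean with known variances, with a crude heuristic weight standing in for the paper's full commensurate-prior hierarchy; `power_prior_posterior`, `heuristic_commensurability_weight`, and the example summaries are assumptions.

```python
import numpy as np

def power_prior_posterior(ybar, n, sigma2, ybar0, n0, sigma2_0, a0):
    """Posterior for a Gaussian mean under a power prior with weight a0 in [0, 1].

    The historical likelihood (mean ybar0, size n0, variance sigma2_0) is raised
    to the power a0, so a0 = 0 discards history and a0 = 1 pools it fully.
    Returns the posterior mean and variance under a flat initial prior.
    """
    prec = n / sigma2 + a0 * n0 / sigma2_0          # combined precision
    mean = (n / sigma2 * ybar + a0 * n0 / sigma2_0 * ybar0) / prec
    return mean, 1.0 / prec

def heuristic_commensurability_weight(ybar, n, sigma2, ybar0, n0, sigma2_0):
    """A crude, illustrative stand-in for adaptive borrowing: downweight history
    as the standardized difference between current and historical means grows.
    (The paper's commensurate prior is a full hierarchical model, not this rule.)
    """
    z = abs(ybar - ybar0) / np.sqrt(sigma2 / n + sigma2_0 / n0)
    return float(np.exp(-z ** 2 / 2.0))

# hypothetical current and historical summaries
a0 = heuristic_commensurability_weight(1.2, 50, 4.0, 0.4, 80, 4.0)
mean, var = power_prior_posterior(1.2, 50, 4.0, 0.4, 80, 4.0, a0)
print(f"a0 = {a0:.2f}, posterior mean = {mean:.3f}, posterior sd = {var**0.5:.3f}")
```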

6.
The likelihood ratio summarizes the strength of statistical evidence for one simple pre-determined hypothesis versus another. However, it does not directly address the multiple comparisons problem. In this paper, we discuss some concerns related to the application of likelihood ratio methods to several multiple comparisons issues in clinical trials, in particular subgroup analysis, multiple variables, interim monitoring, and data-driven choice of hypotheses.
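As a small illustration of the likelihood ratio as a measure of evidence for one simple pre-specified hypothesis against another (not of the multiplicity issues discussed above), consider binomial response data; the function and the numbers are hypothetical.

```python
def binomial_likelihood_ratio(successes, n, p1, p0):
    """Likelihood ratio for H1: response rate = p1 versus H0: response rate = p0,
    given `successes` responses in n patients. The binomial coefficient cancels."""
    l1 = p1 ** successes * (1 - p1) ** (n - successes)
    l0 = p0 ** successes * (1 - p0) ** (n - successes)
    return l1 / l0

# evidence for a 40% response rate versus 20%, after 14 responses in 40 patients
print(binomial_likelihood_ratio(14, 40, p1=0.40, p0=0.20))
```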

7.
When making Bayesian inferences, we need to elicit an expert's opinion to set up the prior distribution. For applications in clinical trials, we study this problem for binary variables. A critical and often ignored issue in the process of eliciting priors in clinical trials is that medical investigators can seldom specify the prior quantities with precision. In this paper, we discuss several methods of eliciting beta priors from clinical information, and we use simulations to conduct sensitivity analyses of the effect of imprecise assessment of the prior information. These results provide useful guidance for choosing methods of eliciting prior information in practice.
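One common elicitation strategy, sketched under illustrative assumptions: take the expert's best guess as the prior mean together with an upper value the expert considers 95% likely not to be exceeded, then solve numerically for the Beta parameters. The function `elicit_beta` and the example inputs are not necessarily the paper's specific methods.

```python
from scipy.optimize import brentq
from scipy.stats import beta

def elicit_beta(prior_mean, upper, upper_prob=0.95):
    """Find Beta(a, b) matching an expert's best guess (prior mean) and an upper
    bound the expert believes holds with probability `upper_prob`.
    Solves for the effective prior sample size m = a + b by root finding."""
    def gap(m):
        a, b = prior_mean * m, (1 - prior_mean) * m
        return beta.cdf(upper, a, b) - upper_prob
    m = brentq(gap, 0.01, 1e4)          # larger m means a tighter prior around the mean
    return prior_mean * m, (1 - prior_mean) * m

# expert: response rate around 0.30, 95% sure it is below 0.45
a, b = elicit_beta(0.30, 0.45)
print(f"Beta({a:.1f}, {b:.1f}),  prior sd = {beta.std(a, b):.3f}")
```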

8.
Modification of sample size in group sequential clinical trials
Cui L, Hung HM, Wang SJ. Biometrics 1999, 55(3): 853-857.
In group sequential clinical trials, sample size reestimation can be complicated when the change in sample size is allowed to depend on the observed sample path. Our simulation studies show that increasing the sample size based on an interim estimate of the treatment difference can substantially inflate the probability of type I error in most practical situations. A new group sequential test procedure is developed by modifying the weights used in the traditional repeated significance two-sample mean test. The new test preserves the type I error probability at the target level and can provide a substantial gain in power with the increased sample size. Generalization of the new procedure is discussed.
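A rough simulation sketch of the weighting idea in a simplified one-sample, one-look setting (not the paper's two-sample repeated significance framework): the final statistic keeps the originally planned stage weights, so a data-driven increase of the second-stage size does not inflate the type I error, whereas the naive pooled statistic does. The adaptation rule, sample sizes, and the function `simulate` are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def simulate(delta=0.0, n1=50, n2_planned=50, sims=20_000, alpha=0.025):
    """Compare the naive pooled z-test with a fixed-weight (CHW-style) statistic
    when the second-stage size is inflated after an unpromising interim estimate.
    One-sample normal mean test of H0: delta = 0, unit variance, one-sided."""
    w1 = np.sqrt(n1 / (n1 + n2_planned))
    w2 = np.sqrt(n2_planned / (n1 + n2_planned))
    crit = norm.ppf(1 - alpha)
    reject_naive = reject_weighted = 0
    for _ in range(sims):
        x1 = rng.normal(delta, 1.0, n1)
        z1 = np.sqrt(n1) * x1.mean()
        # data-driven rule (illustrative): quadruple stage 2 if the interim looks weak
        n2 = n2_planned * 4 if z1 < 1.0 else n2_planned
        x2 = rng.normal(delta, 1.0, n2)
        z2 = np.sqrt(n2) * x2.mean()                        # stage-2 statistic alone
        z_naive = (x1.sum() + x2.sum()) / np.sqrt(n1 + n2)  # pooled, ignores adaptation
        z_weighted = w1 * z1 + w2 * z2                      # fixed pre-planned weights
        reject_naive += z_naive > crit
        reject_weighted += z_weighted > crit
    return reject_naive / sims, reject_weighted / sims

print("type I error (naive, weighted):", simulate(delta=0.0))
```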

9.
Müller HH, Schäfer H. Biometrics 2001, 57(3): 886-891.
A general method is presented for integrating the concept of adaptive interim analyses into classical group sequential testing. This allows the researcher to represent every group sequential plan as an adaptive trial design and to make design changes during the course of the trial, after every interim analysis, in the same way as with adaptive designs. The concept of adaptive trial design is thereby generalized to a large variety of possible sequential plans.
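A minimal sketch of the conditional-rejection-probability idea for a two-stage design: after an interim analysis, compute the probability, under the null, of rejecting with the remainder of the original plan; any redesigned continuation may then be tested at that conditional level without inflating the overall type I error. The boundary values and the function name are illustrative, not taken from the paper.

```python
from math import sqrt
from scipy.stats import norm

def conditional_rejection_probability(z1, t, c1, c2):
    """Conditional probability, under H0, of rejecting in a two-stage design
    given the interim statistic z1 at information fraction t.

    Stage-1 boundary c1 (stop and reject if z1 >= c1); otherwise the final
    statistic is sqrt(t)*z1 + sqrt(1-t)*Z2 with Z2 ~ N(0,1), rejected at c2.
    A redesigned remainder of the trial can be run as a level-CRP test.
    """
    if z1 >= c1:
        return 1.0
    return float(norm.sf((c2 - sqrt(t) * z1) / sqrt(1.0 - t)))

# e.g. O'Brien-Fleming-like boundaries at t = 0.5 (illustrative values)
print(conditional_rejection_probability(z1=1.2, t=0.5, c1=2.80, c2=1.98))
```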

10.
11.
Did the impartiality of clinical trials play any role in their acceptance as regulatory standards for the safety and efficacy of drugs? According to the standard account of early British trials in the 1930s and 1940s, their impartiality was just rhetorical: the public demanded fair tests and statistical devices such as randomization created an appearance of neutrality. In fact, the design of the experiment was difficult to understand and the British authorities took advantage of it to promote their own particular interests. I claim that this account is based on a poorly defined concept of experimental fairness (derived from T. Porter's ideas). I present an alternative approach in which a test would be impartial if it incorporates warrants of non-manipulability. With this concept, I reconstruct the history of British trials showing that they were indeed fair and this fairness played a role in their acceptance as regulatory yardsticks.

12.
13.
14.
15.
Immunotherapy has revolutionized the landscape of cancer treatment and become a standard pillar of care. The two main drivers, immune checkpoint inhibitors and chimeric antigen receptor T cells, contributed to this unprecedented success. However, despite the striking clinical improvements, most patients still suffer from disease progression because of the evolution of primary or acquired resistance. This mini-review summarizes new treatment options, including novel targets and promising combination approaches, to increase our understanding of the mechanisms of action of, and resistance to, immunotherapy; to expand our knowledge of advances in biomarker and therapeutic development; and to help find the most appropriate option, or a way of overcoming resistance, for cancer patients.

16.
In clinical research and practice, landmark models are commonly used to predict the risk of an adverse future event using patients' longitudinal biomarker data as predictors. However, these data are often observable only at intermittent visits, making their measurement times irregularly spaced and unsynchronized across subjects. This poses challenges to conducting dynamic prediction at any post-baseline time. A simple solution is the last-value-carried-forward method, but this may bias the risk model estimation and prediction. Another option is to jointly model the longitudinal and survival processes with a shared random effects model. However, when dealing with multiple biomarkers, this approach often results in high-dimensional integrals without a closed-form solution, and the computational burden limits its software development and practical use. In this article, we propose to process the longitudinal data by functional principal component analysis techniques and then use the processed information as predictors in a class of flexible linear transformation models to predict the distribution of residual time to event occurrence. The measurement schemes for multiple biomarkers are allowed to differ within and across subjects. Dynamic prediction can be performed in real time. The advantages of the proposed method are demonstrated by simulation studies. We apply our approach to the African American Study of Kidney Disease and Hypertension, predicting patients' risk of kidney failure or death using four important longitudinal biomarkers of renal function.
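A heavily simplified sketch of the FPCA step only, assuming (unlike the paper's irregular-visit setting) that all subjects share a common dense measurement grid; the resulting scores would then enter a transformation or other survival model as predictors. All names and the toy data are assumptions.

```python
import numpy as np

def fpca_scores(curves, n_components=2):
    """Functional PCA on biomarker trajectories observed on a common grid.

    curves : (n_subjects, n_times) array. This dense-grid version is a
    simplification; sparse or irregular designs need conditional-expectation
    (PACE-type) scores instead.
    Returns (mean curve, eigenfunctions, per-subject scores).
    """
    mean = curves.mean(axis=0)
    centered = curves - mean
    cov = np.cov(centered, rowvar=False)             # estimated covariance surface
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1][:n_components]  # leading eigenfunctions
    phi = eigvec[:, order]
    scores = centered @ phi                          # per-subject FPC scores
    return mean, phi, scores

# toy data: 100 subjects, 12 visit times, one biomarker (illustrative only)
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 12)
subj_slope = rng.normal(0, 1, (100, 1))
curves = 1.0 + subj_slope * t + rng.normal(0, 0.2, (100, 12))
_, _, scores = fpca_scores(curves)
print(scores.shape)   # scores would enter a transformation/survival model as predictors
```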

17.
18.

Background

Acute exacerbation of idiopathic pulmonary fibrosis has become an important outcome measure in clinical trials. This study aimed to explore the concept of suspected acute exacerbation as an outcome measure.

Methods

Three investigators retrospectively reviewed subjects enrolled in the Sildenafil Trial of Exercise Performance in IPF who experienced a respiratory serious adverse event during the course of the study. Events were classified as definite acute exacerbation, suspected acute exacerbation, or other, according to established criteria.

Results

Thirty-five events were identified. Four were classified as definite acute exacerbation, fourteen as suspected acute exacerbation, and seventeen as other. Definite and suspected acute exacerbations were clinically indistinguishable. Both were most common in the winter and spring months and were associated with a high risk of disease progression and short-term mortality.

Conclusions

In this study, approximately half of the respiratory serious adverse events were attributed to definite or suspected acute exacerbations. Suspected acute exacerbations are clinically indistinguishable from definite acute exacerbations and represent clinically meaningful events. Clinical trialists should consider capturing both definite and suspected acute exacerbations as outcome measures.

19.
20.
Vittinghoff E, Bauer DC. Biometrics 2006, 62(3): 769-776.
Differential effectiveness of treatments across subgroups defined by pretreatment variables is of increasing interest, particularly in the expanding research field of pharmacogenomics. When the pretreatment variable is difficult to obtain or expensive to measure, but can be assessed at the end of the study using stored samples, nested case-control and case-cohort methods can be used to reduce costs in large efficacy trials with rare outcomes. Case-only methods are even more efficient and are reliable under a range of circumstances.
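An illustrative sketch of the case-only idea for a randomized trial with a rare outcome: among cases only, the odds ratio relating treatment to a pretreatment marker estimates the treatment-by-marker interaction, provided randomization makes treatment independent of the marker. The simulated data and function name are assumptions.

```python
import numpy as np

def case_only_interaction(treated, marker_pos, case):
    """Case-only estimate of a treatment-by-marker interaction in a randomized
    trial with a rare outcome and randomization independent of the marker.

    Among cases only, the odds ratio relating treatment to marker status
    estimates the ratio of treatment relative risks in marker-positive versus
    marker-negative subgroups (assuming 1:1 randomization in both subgroups).
    """
    t = treated[case].astype(bool)
    m = marker_pos[case].astype(bool)
    a = np.sum(t & m); b = np.sum(t & ~m)
    c = np.sum(~t & m); d = np.sum(~t & ~m)
    or_caseonly = (a * d) / (b * c)
    se_log = np.sqrt(1/a + 1/b + 1/c + 1/d)          # Woolf standard error on the log scale
    return or_caseonly, se_log

# simulated trial: treatment benefits marker-negative patients more (illustrative)
rng = np.random.default_rng(4)
n = 20_000
treated = rng.integers(0, 2, n).astype(bool)
marker = rng.uniform(size=n) < 0.3
risk = 0.02 * np.where(treated, np.where(marker, 0.9, 0.5), 1.0)
case = rng.uniform(size=n) < risk
est, se = case_only_interaction(treated, marker, case)
print(f"interaction RR ratio ≈ {est:.2f} (true 0.9/0.5 = 1.8), SE(log OR) = {se:.2f}")
```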

