Similar Literature
20 similar documents found (search time: 31 ms)
1.
In many clinical trials, the primary endpoint is time to an event of interest, for example, time to heart attack or tumor progression, and the statistical power of these trials is primarily driven by the number of events observed during the trials. In such trials, the number of events observed is impacted not only by the number of subjects enrolled but also by other factors, including the event rate and the follow-up duration. Consequently, it is important for investigators to be able to monitor and accurately predict patient accrual and event times, so as to predict the times of interim and final analyses and enable efficient allocation of research resources; these have long been recognized as important aspects of trial design and conduct. The existing methods for prediction of event times all assume that patient accrual follows a Poisson process with a constant Poisson rate over time; however, it is fairly common in real-life clinical trials that the Poisson rate changes over time. In this paper, we propose a Bayesian joint modeling approach for monitoring and prediction of accrual and event times in clinical trials. We employ a nonhomogeneous Poisson process to model patient accrual and a parametric or nonparametric model for the event and loss-to-follow-up processes. Compared to existing methods, our proposed methods are more flexible and robust in that we model accrual and event/loss-to-follow-up times jointly and allow the underlying accrual rates to change over time. We evaluate the performance of the proposed methods through simulation studies and illustrate the methods using data from a real oncology trial.
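The nonhomogeneous accrual model described in the abstract can be illustrated with a minimal simulation sketch (this is not the authors' Bayesian procedure): arrival times under a time-varying accrual rate can be generated by thinning a homogeneous Poisson process. The ramp-up rate function and all numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_nhpp(rate_fn, t_max, rate_max):
    """Simulate arrival times of a nonhomogeneous Poisson process on
    [0, t_max] by thinning a homogeneous process with rate rate_max.
    Requires rate_fn(t) <= rate_max for all t."""
    times = []
    t = 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t > t_max:
            break
        # Accept the candidate arrival with probability rate(t) / rate_max.
        if rng.random() < rate_fn(t) / rate_max:
            times.append(t)
    return np.array(times)

# Hypothetical accrual rate: ramps up over the first 6 months, then plateaus
# at 10 patients per month -- expected accrual over 24 months is about 210.
rate = lambda t: 10.0 * min(t / 6.0, 1.0)
arrivals = simulate_nhpp(rate, t_max=24.0, rate_max=10.0)
print(len(arrivals), "patients accrued over 24 months")
```

Replacing the hard-coded `rate` with a function whose parameters carry a prior distribution is the direction the paper's Bayesian joint model takes; the sketch only shows the accrual process itself.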

2.
Burman CF  Sonesson C 《Biometrics》2006,62(3):664-669
Flexible designs allow large modifications of a design during an experiment. In particular, the sample size can be modified in response to interim data or external information. A standard flexible methodology combines such design modifications with a weighted test, which guarantees the type I error level. However, this inference violates basic inference principles. In an example with independent N(μ, 1) observations, the test rejects the null hypothesis μ ≤ 0 even though the average of the observations is negative. We conclude that flexible design in its most general form with the corresponding weighted test is not valid. Several possible modifications of the flexible design methodology are discussed, with a focus on alternative hypothesis tests.
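The inferential paradox the abstract points to can be reproduced numerically. The sketch below, with made-up numbers, uses the standard weighted (inverse-normal) combination statistic with preplanned weights: after a drastic mid-trial sample size reduction, the weighted test rejects μ ≤ 0 even though the overall sample mean is negative.

```python
import math

# Stage 1: n1 = 100 observations with (hypothetical) sample mean -0.05,
# giving a stage-1 Z statistic of mean * sqrt(n) = -0.5.
n1, mean1 = 100, -0.05
z1 = mean1 * math.sqrt(n1)

# After the interim look the stage-2 sample size is cut to a single
# observation, but the preplanned weights are kept (the key ingredient
# of the weighted test that preserves the type I error level).
n2, x2 = 1, 3.3
z2 = x2 * math.sqrt(n2)

w1 = w2 = math.sqrt(0.5)  # preplanned equal weights
z_weighted = (w1 * z1 + w2 * z2) / math.sqrt(w1**2 + w2**2)

overall_mean = (n1 * mean1 + n2 * x2) / (n1 + n2)
print(f"weighted Z = {z_weighted:.3f}  (rejects at 1.96: {z_weighted > 1.96})")
print(f"overall sample mean = {overall_mean:.4f}")
```

The single late observation carries the same weight as the 100 stage-1 observations, so the combined statistic clears the 1.96 boundary while the pooled mean stays below zero, which is the incoherence Burman and Sonesson criticize.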

3.
Song R  Kosorok MR  Cai J 《Biometrics》2008,64(3):741-750
Recurrent events data are frequently encountered in clinical trials. This article develops robust covariate-adjusted log-rank statistics for recurrent events data with arbitrary numbers of events under independent censoring, together with the corresponding sample size formula. The proposed log-rank tests are robust with respect to different data-generating processes and are adjusted for predictive covariates. The tests reduce to the Kong and Slud (1997, Biometrika 84, 847–862) setting in the case of a single event. The sample size formula is derived from the asymptotic normality of the covariate-adjusted log-rank statistics under certain local alternatives and a working model for baseline covariates in the recurrent event data context. When the effect size is small and the baseline covariates do not contain significant information about event times, the formula reduces to the same form as that of Schoenfeld (1983, Biometrics 39, 499–503) for cases of a single event or independent event times within a subject. We carry out simulations to study the control of type I error and to compare the power of several methods in finite samples. The proposed sample size formula is illustrated using data from an rhDNase study.
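As a companion to the single-event special case the abstract mentions, here is a hedged sketch of Schoenfeld's (1983) required-events formula for a two-arm log-rank test; the hazard ratio and design parameters below are illustrative, not taken from the paper.

```python
import math
from statistics import NormalDist

def schoenfeld_events(log_hr, p=0.5, alpha=0.05, power=0.80):
    """Schoenfeld's required number of events for a two-sided log-rank test:
    d = (z_{1-alpha/2} + z_{power})^2 / (p * (1 - p) * log_hr^2),
    where p is the fraction of subjects allocated to one arm."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) ** 2 / (p * (1 - p) * log_hr ** 2)

# Illustrative design: hazard ratio 0.7, 1:1 allocation,
# two-sided alpha = 0.05, 80% power.
d = schoenfeld_events(math.log(0.7))
print(f"required events: {math.ceil(d)}")
```

Note this counts *events*, not subjects, which is why the abstract above emphasizes that trial power is driven by the number of observed events rather than enrollment alone.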

4.
Englert S  Kieser M 《Biometrics》2012,68(3):886-892
Phase II trials in oncology are usually conducted as single-arm two-stage designs with binary endpoints. Currently available adaptive design methods are tailored to comparative studies with continuous test statistics. Direct transfer of these methods to discrete test statistics results in conservative procedures and, therefore, in a loss of power. We propose a method based on the conditional error function principle that directly accounts for the discreteness of the outcome. We show how the method can be used to construct new phase II designs that are more efficient than currently applied designs and that allow flexible mid-course design modifications. The proposed method is illustrated with a variety of frequently used phase II designs.

5.
In this article, we develop methods for quantifying center effects with respect to recurrent event data. In the models of interest, center effects are assumed to act multiplicatively on the recurrent event rate function. When the number of centers is large, traditional estimation methods that treat centers as categorical variables have many parameters and are sometimes not feasible to implement, especially with large numbers of distinct recurrent event times. We propose a new estimation method for center effects that avoids including indicator variables for centers. We then show that center effects can be consistently estimated by the center-specific ratio of observed to expected cumulative numbers of events. We also consider the case where the recurrent event sequence can be stopped permanently by a terminating event. Large-sample results are developed for the proposed estimators, and their finite-sample properties are assessed through simulation studies. The methods are then applied to national hospital admissions data for end-stage renal disease patients.
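The observed-to-expected (O/E) estimator described above is simple to compute once expected event counts are available. A toy sketch follows; the counts below are invented, and in the paper the expected counts come from a fitted covariate-adjusted rate model rather than being supplied directly.

```python
import numpy as np

# Hypothetical data: for each center, the observed cumulative event count
# and the expected count under the reference (covariate-adjusted) rate.
observed = np.array([30, 12, 45, 8], dtype=float)
expected = np.array([25.0, 15.0, 40.0, 10.0])

# Center-effect estimate: the center-specific ratio of observed to
# expected cumulative numbers of events. Values > 1 indicate centers
# with higher-than-expected recurrent event rates.
center_effect = observed / expected
for k, theta in enumerate(center_effect):
    print(f"center {k}: O/E = {theta:.3f}")
```

The appeal of this estimator, per the abstract, is that it sidesteps fitting one indicator-variable coefficient per center, which becomes infeasible when centers number in the hundreds.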

6.
We develop methods for competing risks analysis when individual event times are correlated within clusters. Clustering arises naturally in clinical genetic studies and other settings. We develop a nonparametric estimator of cumulative incidence, and obtain robust pointwise standard errors that account for within-cluster correlation. We modify the two-sample Gray and Pepe–Mori tests for correlated competing risks data, and propose a simple two-sample test of the difference in cumulative incidence at a landmark time. In simulation studies, our estimators are asymptotically unbiased, and the modified test statistics control the type I error. The power of the respective two-sample tests is differentially sensitive to the degree of correlation; the optimal test depends on the alternative hypothesis of interest and the within-cluster correlation. For purposes of illustration, we apply our methods to a family-based prospective cohort study of hereditary breast/ovarian cancer families. For women with BRCA1 mutations, we estimate the cumulative incidence of breast cancer in the presence of competing mortality from ovarian cancer, accounting for significant within-family correlation.

7.
S Wacholder  M Gail  D Pee 《Biometrics》1991,47(1):63-76
We develop approximate methods to compare the efficiencies and to compute the power of alternative potential designs for sampling from a cohort before beginning to collect exposure data. Our methods require only that the cohort be assembled, meaning that the numbers of individuals N_kj at risk at pairs of event times t_k and t_j ≥ t_k are available. To compute N_kj, one needs to know the entry, follow-up, censoring, and event history, but not the exposure, for each individual. Our methods apply to any "unbiased control sampling design," in which cases are compared to a random sample of noncases at risk at the time of an event. We apply our methods to approximate the efficiencies of the nested case-control design, the case-cohort design, and an augmented case-cohort design, compared to the full cohort design, in an assembled cohort of 17,633 members of an insurance cooperative who were followed for mortality from prostatic cancer. The assumptions underlying the approximation are that exposure is unrelated both to the hazard of an event and to the hazard for censoring. The approximations performed well in simulations when both assumptions held and when the exposure was moderately related to censoring.
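Computing N_kj from an assembled cohort requires only entry and observation-end times, as the abstract notes. A minimal sketch with hypothetical data (glossing over the paper's bookkeeping for censoring versus events):

```python
import numpy as np

# Hypothetical assembled cohort: entry time and end of observation
# (censoring or event) for each of five subjects.
entry = np.array([0.0, 0.0, 1.0, 2.0, 0.5])
exit_ = np.array([5.0, 3.0, 6.0, 4.0, 2.5])

def n_at_risk_pairs(entry, exit_, t_k, t_j):
    """N_kj: number of subjects at risk at both event times t_k and
    t_j >= t_k. A subject qualifies if it entered by t_k and was still
    under observation at t_j."""
    assert t_j >= t_k
    return int(np.sum((entry <= t_k) & (exit_ >= t_j)))

print(n_at_risk_pairs(entry, exit_, 1.0, 3.0))
```

No exposure data appear anywhere in this computation, which is exactly why the efficiency comparisons can be run before exposure collection begins.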

8.
Sternberg MR  Satten GA 《Biometrics》1999,55(2):514-522
Chain-of-events data are longitudinal observations on a succession of events that can only occur in a prescribed order. One goal in an analysis of this type of data is to determine the distribution of times between the successive events. This is difficult when individuals are observed periodically rather than continuously because the event times are then interval censored. Chain-of-events data may also be subject to truncation when individuals can only be observed if a certain event in the chain (e.g., the final event) has occurred. We provide a nonparametric approach to estimate the distributions of times between successive events in discrete time for data such as these under the semi-Markov assumption that the times between events are independent. This method uses a self-consistency algorithm that extends Turnbull's algorithm (1976, Journal of the Royal Statistical Society, Series B 38, 290-295). The quantities required to carry out the algorithm can be calculated recursively for improved computational efficiency. Two examples using data from studies involving HIV disease are used to illustrate our methods.

9.
We conducted a simulation study to compare two methods that have been recently used in clinical literature for the dynamic prediction of time to pregnancy. The first is landmarking, a semi-parametric method where predictions are updated as time progresses using the patient subset still at risk at that time point. The second is the beta-geometric model that updates predictions over time from a parametric model estimated on all data and is specific to applications with a discrete time to event outcome. The beta-geometric model introduces unobserved heterogeneity by modelling the chance of an event per discrete time unit according to a beta distribution. Due to selection of patients with lower chances as time progresses, the predicted probability of an event decreases over time. Both methods were recently used to develop models predicting the chance to conceive naturally. The advantages, disadvantages and accuracy of these two methods are unknown. We simulated time-to-pregnancy data according to different scenarios. We then compared the two methods by the following out-of-sample metrics: bias and root mean squared error in the average prediction, root mean squared error in individual predictions, Brier score and c statistic. We consider different scenarios including data-generating mechanisms for which the models are misspecified. We applied the two methods on a clinical dataset comprising 4999 couples. Finally, we discuss the pros and cons of the two methods based on our results and present recommendations for use of either of the methods in different settings and (effective) sample sizes.
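The selection effect the abstract describes has a closed form in the beta-geometric model: if the per-cycle conception probability p follows a Beta(a, b) distribution across couples, the conditional chance of conceiving in cycle t, given no pregnancy in the first t−1 cycles, is a/(a+b+t−1), which declines as the more fertile couples conceive and leave the risk set. A sketch with illustrative parameters (not estimates from the paper):

```python
# Beta-geometric dynamic prediction. Hypothetical heterogeneity
# parameters: p ~ Beta(a, b) across couples, mean fecundability a/(a+b).
a, b = 3.0, 7.0

def hazard(t):
    """Conditional per-cycle conception probability at cycle t (t >= 1),
    given no conception in cycles 1..t-1: E[p | T >= t] = a/(a+b+t-1)."""
    return a / (a + b + t - 1)

for t in [1, 6, 12]:
    print(f"cycle {t:2d}: P(conceive this cycle | still trying) = {hazard(t):.3f}")
```

This is the mechanism behind the abstract's statement that predicted probabilities decrease over time even though each individual couple's chance is constant.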

10.
Multivariate recurrent event data are usually encountered in many clinical and longitudinal studies in which each study subject may experience multiple recurrent events. For the analysis of such data, most existing approaches have been proposed under the assumption that the censoring times are noninformative, which may not be true especially when the observation of recurrent events is terminated by a failure event. In this article, we consider regression analysis of multivariate recurrent event data with both time-dependent and time-independent covariates where the censoring times and the recurrent event process are allowed to be correlated via a frailty. The proposed joint model is flexible where both the distributions of censoring and frailty variables are left unspecified. We propose a pairwise pseudolikelihood approach and an estimating equation-based approach for estimating coefficients of time-dependent and time-independent covariates, respectively. The large sample properties of the proposed estimates are established, while the finite-sample properties are demonstrated by simulation studies. The proposed methods are applied to the analysis of a set of bivariate recurrent event data from a study of platelet transfusion reactions.

11.
The discovery that microsatellite repeat expansions can cause clinical disease has fostered renewed interest in testing for age-at-onset anticipation (AOA). A commonly used procedure is to sample affected parent-child pairs (APCPs) from available data sets and to test for a difference in mean age at onset between the parents and the children. However, standard statistical methods fail to take into account the right truncation of both the parent and child age-at-onset distributions under this design, with the result that type I error rates can be inflated substantially. Previously, we had introduced a new test, based on the correct, bivariate right-truncated, age-at-onset distribution. We showed that this test has the correct type I error rate for random APCPs, even for quite small samples. However, in that paper, we did not consider two key statistical complications that arise when the test is applied to realistic data. First, affected pairs usually are sampled from pedigrees preferentially selected for the presence of multiple affected individuals. In this paper, we show that this will tend to inflate the type I error rate of the test. Second, we consider the appropriate probability model under the alternative hypothesis of true AOA due to an expanding microsatellite mechanism, and we show that there is good reason to believe that the power to detect AOA may be quite small, even for substantial effect sizes. When the type I error rate of the test is high relative to the power, interpretation of test results becomes problematic. We conclude that, in many applications, AOA tests based on APCPs may not yield meaningful results.

12.
Efron-type measures of prediction error for survival analysis (cited by: 3)
Gerds TA  Schumacher M 《Biometrics》2007,63(4):1283-1287
Estimates of the prediction error play an important role in the development of statistical methods and models and in their applications. We adapt the resampling tools of Efron and Tibshirani (1997, Journal of the American Statistical Association 92, 548–560) to survival analysis with right-censored event times. We find that flexible rules, like artificial neural nets, classification and regression trees, or regression splines, can be assessed and compared to less flexible rules on the same data where they were developed. The methods are illustrated with data from a breast cancer trial.

13.
In recent years, the use of adaptive design methods in clinical research and development based on accrued data has become very popular due to its flexibility and efficiency. Based on the adaptations applied, adaptive designs can be classified into three categories: prospective, concurrent (ad hoc), and retrospective adaptive designs. An adaptive design allows modifications to be made to the trial and/or statistical procedures of ongoing clinical trials. However, there is a concern that the actual patient population after the adaptations could deviate from the originally targeted patient population, and consequently the overall type I error rate (the probability of erroneously claiming efficacy for an ineffective drug) may not be controlled. In addition, major adaptations of the trial and/or statistical procedures of ongoing trials may result in a totally different trial that is unable to address the scientific/medical questions the trial was intended to answer. In this article, several commonly considered adaptive designs in clinical trials are reviewed. Impacts of ad hoc adaptations (protocol amendments), challenges in by-design (prospective) adaptations, and obstacles to retrospective adaptations are described. Strategies for the use of adaptive designs in the clinical development of rare diseases are discussed. Some examples concerning the development of Velcade, intended for multiple myeloma and non-Hodgkin's lymphoma, are given. Practical issues commonly encountered when implementing adaptive design methods in clinical trials are also discussed.

14.
An important issue arising in therapeutic studies of hepatitis C and HIV is the identification of and adjustment for covariates associated with viral eradication and resistance. Analyses of such data are complicated by the fact that eradication is an occult event that is not directly observable, resulting in unique types of censored observations that do not arise in other competing risks settings. This paper proposes a semiparametric regression model to assess the association between multiple covariates and the eradication/resistance processes. The proposed methods are based on a piecewise proportional hazards model that allows parameters to vary between observation times. We illustrate the methods with data from recent hepatitis C clinical trials.

15.
Dunson DB  Dinse GE 《Biometrics》2002,58(1):79-88
Multivariate current status data consist of indicators of whether each of several events occurs by the time of a single examination. Our interest focuses on inferences about the joint distribution of the event times. Conventional methods for the analysis of multiple event-time data cannot be used because all of the event times are censored and censoring may be informative. Within a given subject, we account for correlated event times through a subject-specific latent variable, conditional upon which the various events are assumed to occur independently. We also assume that each event contributes independently to the hazard of censoring. Nonparametric step functions are used to characterize the baseline distributions of the different event times and of the examination times. Covariate and subject-specific effects are incorporated through generalized linear models. A Markov chain Monte Carlo algorithm is described for estimation of the posterior distributions of the unknowns. The methods are illustrated through application to multiple tumor site data from an animal carcinogenicity study.

16.
The simplicity and flexibility of Markov models make them appealing for investigations of the acquisition of HIV drug-resistance mutations, whose presence can define specific Markov states. Because the exact time of acquiring a mutation is not observed during clinical research studies on HIV infection, it is important that methods for fitting such models accommodate interval-censored transition times. Furthermore, many such studies include patients with extensive treatment experience prior to the onset of the studies. Therefore, the virus in these patients may have already acquired resistance mutations by study entry. Retrospective data regarding the time on treatment, which is often known or known with error, provide information about the acquisition rates before the start of a study. Finally, variability in the genetic sequences of circulating HIV creates uncertainty in the Markov states. This paper considers approaches to fitting Markov models to data with interval-censored transition times when the time origin and the Markov states are known with error. The methods were applied to AIDS Clinical Trial Group protocol 398, a randomized comparison of mono- versus dual-protease inhibitor use in heavily pretreated patients. We found that the estimated rates of acquiring certain classes of mutations depended on the presence of others, and that the precision of these estimates can be considerably improved by inclusion of retrospective data.

17.
Ossification sequences of the skull in extant Urodela and in Permo-Carboniferous Branchiosauridae have already been used to study the origin of lissamphibians. But most of these studies did not consider recent methods developed to analyze developmental sequences within a phylogenetic framework. Here, we analyze the ossification sequences of 24 cranial bones in 23 extant species of salamanders using the event-pairing method. This reveals new developmental synapomorphies for several extant salamander taxa and ancestral sequences for Urodela under four alternative reference phylogenies. An analysis with the 12 bones for which ossification sequence data are available in urodeles and in the branchiosaurid Apateon is also performed in order to compare the ancestral condition of the crown group of Urodela to the sequence of Apateon. This reveals far more incompatibilities than previously suggested. The similarities observed between some extant salamanders and branchiosaurids may result from extensive homoplasy, as the extreme variation observed in extant Urodela suggests, or be plesiomorphic, as the conservation of some ossification patterns observed in other remotely related vertebrates such as actinopterygians suggests. We propose a new, simpler method based on squared-change optimization to estimate the relative timing of ossification of various bones in hypothetical ancestors, and use independent-contrasts analysis to estimate the confidence intervals around these times. Our results show that the uncertainty of the ancestral ossification sequence of Urodela is much greater than event-pairing suggests. The developmental data do not allow us to conclude that branchiosaurids are closely related to salamanders, and their limited taxonomic distribution in Paleozoic taxa precludes testing hypotheses about lissamphibian origins. This is true regardless of the analytical method used (event-pairing or our new method based on squared-change parsimony).
Simulations show that the new analytical method is generally more powerful for detecting evolutionary shifts in developmental timing and has a lower type I error rate than event-pairing. It also makes fewer errors in ancestral character value or state assignment than event-pairing.

18.
Recurrent event outcomes are adopted increasingly often as a basis for evaluating experimental interventions. In clinical trials involving recurrent events, patients are frequently observed for a baseline period while under standard care, and then randomised to receive either an experimental treatment or continue on standard care. When events are generated according to a mixed Poisson model, having baseline data permits a conditional analysis which can eliminate the subject-specific random effect and yield a more efficient analysis regarding treatment effect. When studies are expected to recruit a large number of patients over an extended period of accrual, or if the period of follow-up is long, sequential testing is desirable to ensure the study is stopped as soon as sufficient data have been collected to establish treatment benefits. We describe methods which facilitate sequential analysis of data arising from trials with recurrent event responses observed over two treatment periods where one is a baseline period of observation. Formulae to help schedule analyses at approximately equal increments of information are given. Simulation studies show that the sequential testing procedures have rejection rates compatible with the nominal error rates under the null and alternative hypotheses. Data from a trial of patients with herpes simplex virus infection are analysed to illustrate the utility of these methods.

19.
Chan IS  Tang NS  Tang ML  Chan PS 《Biometrics》2003,59(4):1170-1177
Testing of noninferiority has become increasingly important in modern medicine as a means of comparing a new test procedure to a currently available one. Asymptotic methods have recently been developed for analyzing noninferiority trials using rate ratios under the matched-pair design. In small samples, however, the performance of these asymptotic methods may not be reliable, and they are not recommended. In this article, we investigate alternative methods that are desirable for assessing noninferiority trials using the rate ratio measure under small-sample matched-pair designs. In particular, we propose an exact and an approximate exact unconditional test, along with the corresponding confidence intervals based on the score statistic. The exact unconditional method guarantees that the type I error rate will not exceed the nominal level. It is recommended when strict control of the type I error (protection against any inflated risk of accepting inferior treatments) is required. However, the exact method tends to be overly conservative (thus less powerful) and computationally demanding. Via empirical studies, we demonstrate that the approximate exact score method, which is computationally simple to implement, controls the type I error rate reasonably well and has high power for hypothesis testing. On balance, the approximate exact method offers a very good alternative for analyzing correlated binary data from matched-pair designs with small sample sizes. We illustrate these methods using two real examples taken from a crossover study of soft lenses and a Pneumocystis carinii pneumonia study. We contrast the methods with a hypothetical example.

20.
DiRienzo AG 《Biometrics》2003,59(3):497-504
When testing the null hypothesis that treatment arm-specific survival-time distributions are equal, the log-rank test is asymptotically valid when the distribution of time to censoring is conditionally independent of randomized treatment group given survival time. We introduce a test of the null hypothesis for use when the distribution of time to censoring depends on treatment group and survival time. This test does not make any assumptions regarding independence of censoring time and survival time. Asymptotic validity of this test only requires a consistent estimate of the conditional probability that the survival event is observed given both treatment group and that the survival event occurred before the time of analysis. However, by not making unverifiable assumptions about the data-generating mechanism, there exists a set of possible values of corresponding sample-mean estimates of these probabilities that are consistent with the observed data. Over this subset of the unit square, the proposed test can be calculated and a rejection region identified. A decision on the null that considers uncertainty because of censoring that may depend on treatment group and survival time can then be directly made. We also present a generalized log-rank test that enables us to provide conditions under which the ordinary log-rank test is asymptotically valid. This generalized test can also be used for testing the null hypothesis when the distribution of censoring depends on treatment group and survival time. However, use of this test requires semiparametric modeling assumptions. A simulation study and an example using a recent AIDS clinical trial are provided.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号