Similar Articles
20 similar articles found.
1.
In many clinical trials, multiple time-to-event endpoints including the primary endpoint (e.g., time to death) and secondary endpoints (e.g., progression-related endpoints) are commonly used to determine treatment efficacy. These endpoints are often biologically related. This work is motivated by a study of bone marrow transplant (BMT) for leukemia patients, who may experience acute graft-versus-host disease (GVHD), relapse of leukemia, and death after an allogeneic BMT. Acute GVHD is associated with relapse-free survival, and both acute GVHD and relapse of leukemia are intermediate nonterminal events subject to dependent censoring by the informative terminal event death, but not vice versa, giving rise to survival data that are subject to two sets of semi-competing risks. It is important to assess the impacts of prognostic factors on these three time-to-event endpoints. We propose a novel statistical approach that jointly models such data via a pair of copulas to account for multiple dependence structures, while the marginal distribution of each endpoint is formulated by a Cox proportional hazards model. We develop an estimation procedure based on pseudo-likelihood and carry out simulation studies to examine the performance of the proposed method in finite samples. The practical utility of the proposed method is further illustrated with data from the motivating example.
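The copula construction described above can be sketched numerically. Below is a minimal simulation of semi-competing risks under a Clayton copula with exponential margins; the copula family, hazard rates, and dependence parameter are illustrative assumptions, not the paper's actual specification:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0   # Clayton dependence parameter (assumed value; Kendall's tau = theta/(theta+2))
n = 10000

# Sample (u, v) from a Clayton copula via the conditional-inverse method.
u = rng.uniform(size=n)
w = rng.uniform(size=n)
v = ((w ** (-theta / (1.0 + theta)) - 1.0) * u ** (-theta) + 1.0) ** (-1.0 / theta)

# Exponential margins for the nonterminal and terminal event times
# (hazards 0.5 and 0.2 are arbitrary illustrative values).
t_nonterminal = -np.log(1.0 - u) / 0.5
t_terminal = -np.log(1.0 - v) / 0.2

# Semi-competing risks: the nonterminal event is observed
# only if it precedes the terminal event, not vice versa.
observed = t_nonterminal < t_terminal
```

In the full method each margin would follow a Cox model with covariates and two such copulas would link the three endpoints; this sketch only shows the dependence mechanism for one pair.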

2.
We consider the impact of a possible intermediate event on a terminal event in an illness-death model with states 'initial', 'intermediate' and 'terminal'. One aim is to unambiguously describe the occurrence of the intermediate event in terms of the observable data, the problem being that the intermediate event may not occur. We propose to consider a random time interval, whose length is the time spent in the intermediate state. We derive an estimator of the joint distribution of the left and right limit of the random time interval from the Aalen-Johansen estimator of the matrix of transition probabilities and study its asymptotic properties. We apply our approach to hospital infection data. Estimating the distribution of the random time interval will usually be only a first step of an analysis. We illustrate this by analysing change in length of hospital stay following an infection and derive the large sample properties of the respective estimator.
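The Aalen-Johansen estimator mentioned above is a product integral of increments of the Nelson-Aalen estimator over observed transition times. A toy sketch for a three-state illness-death model follows; the event history and initial state occupancy are invented for illustration:

```python
import numpy as np

# Hypothetical event history: (time, from_state, to_state);
# states 0 = initial, 1 = intermediate, 2 = terminal (absorbing).
events = [(1, 0, 1), (2, 0, 2), (3, 1, 2), (4, 0, 1), (5, 1, 2)]
n_state = np.array([4.0, 1.0, 0.0])  # subjects occupying each state at time 0

P = np.eye(3)  # Aalen-Johansen estimate of P(0, t) as a product integral
for t, i, j in events:
    dA = np.zeros((3, 3))
    dA[i, j] = 1.0 / n_state[i]   # Nelson-Aalen increment: 1 / number at risk in state i
    dA[i, i] = -dA[i, j]          # rows of dA sum to zero
    P = P @ (np.eye(3) + dA)
    n_state[i] -= 1               # move the subject to its new state
    n_state[j] += 1
```

Each factor `I + dA` is a row-stochastic matrix, so `P` remains a valid transition probability matrix; in the paper this building block is then used to estimate the joint distribution of the limits of the random time interval.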

3.
Regression modeling of semicompeting risks data
Peng L, Fine JP. Biometrics 2007;63(1):96-108.
Semicompeting risks data are often encountered in clinical trials with intermediate endpoints subject to dependent censoring from informative dropout. Unlike with competing risks data, dropout may not be dependently censored by the intermediate event. There has recently been increased attention to these data, in particular inferences about the marginal distribution of the intermediate event without covariates. In this article, we incorporate covariates and formulate their effects on the survival function of the intermediate event via a functional regression model. To accommodate informative censoring, a time-dependent copula model is proposed in the observable region of the data which is more flexible than standard parametric copula models for the dependence between the events. The model permits estimation of the marginal distribution under weaker assumptions than in previous work on competing risks data. New nonparametric estimators for the marginal and dependence models are derived from nonlinear estimating equations and are shown to be uniformly consistent and to converge weakly to Gaussian processes. Graphical model checking techniques are presented for the assumed models. Nonparametric tests are developed accordingly, as are inferences for parametric submodels for the time-varying covariate effects and copula parameters. A novel time-varying sensitivity analysis is developed using the estimation procedures. Simulations and an AIDS data analysis demonstrate the practical utility of the methodology.

4.
Recurrent event data arise in longitudinal follow-up studies, where each subject may experience the same type of events repeatedly. The work in this article is motivated by the data from a study of repeated peritonitis for patients on peritoneal dialysis. For medical and cost reasons, the peritonitis cases were classified into two types: Gram-positive and non-Gram-positive peritonitis. Further, since death and hemodialysis therapy preclude the occurrence of recurrent events, we face multivariate recurrent event data with a dependent terminal event. We propose a flexible marginal model with three characteristics: first, we assume marginal proportional hazards and proportional rates models for the terminal event time and the recurrent event processes, respectively; second, the inter-recurrence dependence and the correlation between the multivariate recurrent event processes and the terminal event time are modeled through three multiplicative frailties corresponding to the specified marginal models; third, the rate model with frailties for recurrent events is specified only on the time before the terminal event. We propose a two-stage estimation procedure for estimating unknown parameters. We also establish the consistency of the two-stage estimator. Simulation studies show that the proposed approach is appropriate for practical use. The methodology is applied to the peritonitis cohort data that motivated this study.

5.
In longitudinal studies where time to a final event is the ultimate outcome, information is often available about intermediate events that individuals may experience during the observation period. Even though many extensions of the Cox proportional hazards model have been proposed to model such multivariate time-to-event data, these approaches are still rarely applied to real datasets. The aim of this paper is to illustrate the application of extended Cox models for multiple time-to-event data and to show their implementation in popular statistical software packages. We demonstrate a systematic way of jointly modelling similar or repeated transitions in follow-up data by analysing an event-history dataset consisting of 270 breast cancer patients who were followed up for different clinical events during treatment of metastatic disease. First, we show how this methodology can also be applied to non-Markovian stochastic processes by representing these processes as "conditional" Markov processes. Second, we compare the application of different Cox-related approaches to the breast cancer data by varying their key model components (i.e., analysis time scale, risk set and baseline hazard function). Our study showed that extended Cox models are a powerful tool for analysing complex event-history datasets, since the approach can address many dynamic data features such as multiple time scales, dynamic risk sets, time-varying covariates, transition-by-covariate interactions, autoregressive dependence and intra-subject correlation.

6.
Clinical trials involve multi-site, heterogeneous data generation with complex data input formats and forms. The data should be captured and queried in an integrated fashion to facilitate further analysis. Electronic case-report forms (eCRFs) are gaining popularity since they allow clinical information to be captured rapidly. We have designed and developed an XML-based flexible clinical trials data management framework in the .NET environment that can be used for efficient design and deployment of eCRFs to collate data and analyze information from multi-site clinical trials. The main components of our system include an XML form designer, a patient registration eForm, reusable eForms, multiple-visit data capture and consolidated reports. A unique id is used for tracking the trial, site of occurrence, the patient and the year of recruitment.



7.
An important aim in clinical studies in oncology is to study how treatment and prognostic factors influence the course of disease of a patient. Typically in these trials, besides overall survival, also other endpoints such as locoregional recurrence or distant metastasis are of interest. Most commonly in these situations, Cox regression models are applied for each of these endpoints separately or to composite endpoints such as disease-free survival. These approaches however fail to give insight into what happens to a patient after a first event. We re-analyzed data of 2795 patients from a breast cancer trial (EORTC 10854) by applying a multi-state model, with local recurrence, distant metastasis, and both local recurrence and distant metastasis as transient states and death as absorbing state. We used an approach where the clock is reset on entry of a new state. The influence of prognostic factors on each of the transition rates is studied, as well as the influence of the time at which intermediate events occur. The estimated transition rates between the states in the model are used to obtain predictions for patients with a given history. Formulas are developed and illustrated for these prediction probabilities for the clock reset approach.

8.
In the presence of competing causes of event occurrence (e.g., death), the interest might not only be in the overall survival but also in the so-called net survival, that is, the hypothetical survival that would be observed if the disease under study were the only possible cause of death. Net survival estimation is commonly based on the excess hazard approach, in which the hazard rate of individuals is assumed to be the sum of a disease-specific and an expected hazard rate, the latter supposed to be correctly approximated by the mortality rates obtained from general population life tables. However, this assumption might not be realistic if the study participants are not comparable with the general population. Also, the hierarchical structure of the data can induce a correlation between the outcomes of individuals coming from the same clusters (e.g., hospital, registry). We propose an excess hazard model that corrects simultaneously for these two sources of bias, instead of dealing with them independently as before. We assessed the performance of this new model and compared it with three similar models, using an extensive simulation study as well as an application to breast cancer data from a multicenter clinical trial. The new model performed better than the others in terms of bias, root mean square error, and empirical coverage rate. The proposed approach might be useful to account simultaneously for the hierarchical structure of the data and the non-comparability bias in studies such as long-term multicenter clinical trials, when there is interest in the estimation of net survival.

9.
French B, Heagerty PJ. Biometrics 2009;65(2):415-422.
Longitudinal studies typically collect information on the timing of key clinical events and on specific characteristics that describe those events. Random variables that measure qualitative or quantitative aspects associated with the occurrence of an event are known as marks. Recurrent marked point process data consist of possibly recurrent events, with the mark (and possibly exposure) measured if and only if an event occurs. Analysis choices depend on which aspect of the data is of primary scientific interest. First, factors that influence the occurrence or timing of the event may be characterized using recurrent event analysis methods. Second, if there is more than one event per subject, then the association between exposure and the mark may be quantified using repeated measures regression methods. We detail assumptions required of any time-dependent exposure process and the event time process to ensure that linear or generalized linear mixed models and generalized estimating equations provide valid estimates. We provide theoretical and empirical evidence that if these conditions are not satisfied, then an independence estimating equation should be used for consistent estimation of association. We conclude with the recommendation that analysts carefully explore both the exposure and event time processes prior to implementing a repeated measures analysis of recurrent marked point process data.

10.
Mind-wandering is the occasional distraction we experience while performing a cognitive task. It arises without any external precedent, varies over time, and interferes with the processing of sensory information. Here, we asked whether the transition from the on-task state to mind-wandering is a gradual process or an abrupt event. We developed a new experimental approach, based on the continuous, online assessment of individual psychophysical performance. Probe questions were asked whenever response times (RTs) exceeded 2 standard deviations from the participant's average RT. Results showed that mind-wandering reports were generally preceded by slower RTs, as compared to trials preceding on-task reports. Mind-wandering episodes could be reliably predicted from the response time difference between the last and the second-to-last trials. Thus, mind-wandering reports follow an abrupt increase in behavioral variability, lasting between 2.5 and 10 seconds.
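The probe criterion above (flag trials whose RT exceeds the participant's average by 2 SD) is straightforward to simulate. The sketch below uses a fixed baseline estimate rather than the paper's online running estimate, and the RT distributions are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated response times in seconds: 200 on-task trials,
# then an abrupt 5-trial slowdown (hypothetical parameters).
rts = np.concatenate([rng.normal(0.5, 0.05, 200),
                      rng.normal(0.8, 0.05, 5)])

# Baseline mean and SD from the on-task trials (the real protocol
# updates these estimates continuously during the session).
mean, sd = rts[:200].mean(), rts[:200].std()

# Probe whenever an RT exceeds mean + 2 SD.
probes = np.where(rts > mean + 2 * sd)[0]
```

With a one-tailed 2-SD threshold a few baseline trials are also flagged by chance (roughly 2 to 3 percent), which is why the paper pairs the probes with self-reports rather than treating every slow trial as mind-wandering.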

11.
Hazard rate models with covariates.
Many problems, particularly in medical research, concern the relationship between certain covariates and the time to occurrence of an event. The hazard or failure rate function provides a conceptually simple representation of time to occurrence data that readily adapts to include such generalizations as competing risks and covariates that vary with time. Two partially parametric models for the hazard function are considered. These are the proportional hazards model of Cox (1972) and the class of log-linear or accelerated failure time models. A synthesis of the literature on estimation from these models under prospective sampling indicates that, although important advances have occurred during the past decade, further effort is warranted on such topics as distribution theory, tests of fit, robustness, and the full utilization of a methodology that permits non-standard features. It is further argued that a good deal of fruitful research could be done on applying the same models under a variety of other sampling schemes. A discussion of estimation from case-control studies illustrates this point.
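The Cox (1972) proportional hazards model referenced here is fit by maximizing a partial likelihood. A minimal sketch of the Breslow-form partial log-likelihood for a single covariate, on hypothetical right-censored data:

```python
import numpy as np

# Hypothetical data: follow-up time, event indicator (1 = event,
# 0 = censored), and a scalar covariate.
time = np.array([2.0, 3.0, 5.0, 7.0, 11.0])
event = np.array([1, 0, 1, 1, 0])
x = np.array([0.5, -1.0, 1.5, 0.0, 2.0])

def cox_partial_loglik(beta):
    # Sum over event times of x_i*beta - log(sum over the risk set
    # of exp(x_j*beta)); the risk set is everyone with time >= t_i.
    ll = 0.0
    for i in np.where(event == 1)[0]:
        risk = time >= time[i]
        ll += x[i] * beta - np.log(np.exp(x[risk] * beta).sum())
    return ll
```

At beta = 0 each event contributes -log(size of its risk set), which gives a quick sanity check; a real fit would maximize this function over beta (e.g., by Newton's method) and, as the abstract notes, extends naturally to time-varying covariates by letting the risk-set covariate values depend on t_i.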

12.
Multistate models can be successfully used for describing complex event history data, for example, describing stages in the disease progression of a patient. The so-called "illness-death" model plays a central role in the theory and practice of these models. Many time-to-event datasets from medical studies with multiple end points can be reduced to this generic structure. In these models one important goal is the modeling of transition rates, but biomedical researchers are also interested in reporting interpretable results in a simple and summarized manner. These include estimates of predictive probabilities, such as the transition probabilities, occupation probabilities, cumulative incidence functions, and the sojourn time distributions. We give a review of some of the available methods for estimating such quantities in the progressive illness-death model, conditionally (or not) on covariate measures. For some of these quantities estimators based on subsampling are employed. Subsampling, also referred to as landmarking, leads to small sample sizes and usually to heavily censored data, leading to estimators with higher variability. To overcome this issue, estimators based on a preliminary estimation (presmoothing) of the probability of censoring may be used. Among these, the presmoothed estimators for the cumulative incidences are new. We also introduce feasible estimation methods for the cumulative incidence function conditionally on covariate measures. The proposed methods are illustrated using real data. A comparative simulation study of several estimation approaches is performed and existing software in the form of R packages is discussed.

13.
Insertions and deletions in a profile hidden Markov model (HMM) are modeled by transition probabilities between insert, delete and match states. These are estimated by combining observed data and prior probabilities. The transition prior probabilities can be defined either ad hoc or by maximum likelihood (ML) estimation. We show that the choice of transition prior greatly affects the HMM's ability to discriminate between true and false hits. HMM discrimination was measured using the HMMER 2.2 package applied to 373 families from Pfam. We measured the discrimination between true members and noise sequences employing various ML transition priors and also systematically scanned the parameter space of ad hoc transition priors. Our results indicate that ML priors produce far from optimal discrimination, and we present an empirically derived prior that considerably decreases the number of misclassifications compared to ML. Most of the difference stems from the probabilities for exiting a delete state. The ML prior, which is unaware of noise sequences, estimates a delete-to-delete probability that is relatively high and does not penalize noise sequences enough for optimal discrimination.
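Combining observed transition counts with a Dirichlet prior, as described above, amounts to adding pseudocounts before normalizing. A sketch for the transitions out of a delete state; the counts and pseudocount values are invented for illustration (a near-zero prior approaches the bare ML estimate):

```python
import numpy as np

# Hypothetical observed transition counts out of a delete state:
# delete->match, delete->insert, delete->delete.
counts = np.array([3.0, 1.0, 6.0])

# Dirichlet prior pseudocounts (assumed values). An empirically tuned
# prior can down-weight delete->delete relative to what ML alone
# would estimate, penalizing noise sequences more strongly.
prior = np.array([2.0, 0.5, 0.5])

# Posterior mean estimate of the transition distribution.
probs = (counts + prior) / (counts + prior).sum()
```

Here the estimated delete-to-delete probability (6.5/13) is lower than the raw ML estimate (6/10), illustrating how the choice of prior shifts exactly the probabilities the paper identifies as critical for discrimination.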

14.
Analysis of adverse events (AEs) for drug safety assessment presents challenges to statisticians in observational studies as well as in clinical trials, since AEs are typically recurrent with varying duration and severity. Routine analyses often concentrate on the number of patients who had at least one occurrence of a specific AE or a group of AEs, or on the time to occurrence of the first event. We argue that other information in AE data, particularly the cumulative duration of events, is also important for benefit-risk assessment. We propose a nonparametric method to estimate the mean cumulative duration (MCD) based on the nonparametric cumulative mean function estimate, together with a robust estimate of its variance, as in Lawless and Nadeau (1995). This approach can easily be used to analyze multiple, overlapping and severity-weighted AE durations. The method can also be used for estimating the difference between two MCDs. Estimation in the presence of censoring due to informative dropout and/or a terminal event is also considered. The method can be implemented in standard software such as SAS. We illustrate the use of the method with a numerical example. Small-sample properties of this approach are examined via simulation.
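The cumulative mean function estimate underlying the MCD accumulates, at each AE occurrence, that event's duration divided by the number of subjects still under observation. A simplified sketch on invented records (single AE type, no severity weights, censoring handled only through the at-risk count):

```python
# Hypothetical AE records: (subject id, AE start day, AE duration in days).
records = [(1, 3, 2.0), (1, 10, 4.0), (2, 5, 1.0), (3, 8, 3.0)]
followup = {1: 30, 2: 30, 3: 20}  # end of follow-up per subject (assumed)

def mcd(t):
    # Lawless-Nadeau-style estimate of the mean cumulative duration
    # by time t: each event contributes duration / number at risk
    # at the event's start time.
    total = 0.0
    for subj, start, dur in records:
        if start <= t:
            at_risk = sum(1 for f in followup.values() if f >= start)
            total += dur / at_risk
    return total
```

Severity weighting would simply multiply each duration by a weight, and the robust variance estimate of the paper would be built from the same per-subject increments.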

15.
In clinical trials with time-to-event outcomes, it is of interest to predict when a prespecified number of events will be reached. Interim analysis is conducted to estimate the underlying survival function. When another correlated time-to-event endpoint is available, both outcome variables can be used to improve estimation efficiency. In this paper, we propose to use the convolution of two time-to-event variables to estimate the survival function of interest. Propositions and examples are provided based on exponential models that accommodate possible change points. We further propose a new estimating equation for the expected time that exploits the relationship between the two endpoints. Simulations and the analysis of real data show that the proposed methods with bivariate information yield significant improvement in prediction over the univariate method.

16.
The maximum likelihood (ML) method of phylogenetic tree construction is not as widely used as other tree construction methods (e.g., parsimony, neighbor-joining) because of the prohibitive amount of time required to find the ML tree when the number of sequences under consideration is large. To overcome this difficulty, we propose a stochastic search strategy for estimation of the ML tree that is based on a simulated annealing algorithm. The algorithm works by moving through tree space by way of a "local rearrangement" strategy so that topologies that improve the likelihood are always accepted, whereas those that decrease the likelihood are accepted with a probability that is related to the proportionate decrease in likelihood. Besides greatly reducing the time required to estimate the ML tree, the stochastic search strategy is less likely to become trapped in local optima than are existing algorithms for ML tree estimation. We demonstrate the success of the modified simulated annealing algorithm by comparing it with two existing algorithms (Swofford's PAUP* and Felsenstein's DNAMLK) for several theoretical and real data examples.  
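The acceptance rule described above (always accept improvements; accept worse moves with a probability tied to the likelihood decrease and a cooling temperature) can be illustrated on a toy objective. In a real implementation the proposal would be a local tree rearrangement and the objective a tree log-likelihood; here both are stand-ins, with the "state" just an integer and the objective a quadratic:

```python
import math
import random

random.seed(0)

def loglik(x):
    # Toy stand-in for a tree log-likelihood, peaked at x = 7.
    return -(x - 7) ** 2

state, temp = 0, 10.0
for step in range(2000):
    proposal = state + random.choice([-1, 1])  # "local rearrangement"
    delta = loglik(proposal) - loglik(state)
    # Metropolis-style rule: accept improvements always; accept
    # worse moves with probability exp(delta / temperature).
    if delta >= 0 or random.random() < math.exp(delta / temp):
        state = proposal
    temp = max(0.01, temp * 0.995)  # geometric cooling with a floor
```

Early on, the high temperature lets the search escape local optima; as the temperature falls, the chain settles near the maximum, which is the property the paper exploits to avoid the traps that greedy hill-climbing falls into.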

17.
Cardiovascular disease accounts for significant morbidity and mortality in the elderly. The clinical trial data available to guide therapy in this growing population subset are relatively limited. This review will focus on treatment approaches and recommendations obtained from subgroup analyses of elderly patients from major clinical trials for the management of chronic stable angina, acute coronary syndromes (unstable angina and non-ST-segment elevation myocardial infarction), and coronary revascularization. Recent advances in the treatment of stable angina have shown that use of angiotensin-converting enzyme inhibitors and lipid-lowering therapy as adjunctive measures show benefit in the elderly by reducing the occurrence of death, nonfatal myocardial infarction, and unstable angina. However, if patients experience disabling or unstable anginal symptoms despite effective medical therapy, coronary revascularization must be considered. Several clinical trials have shown a significant reduction in major adverse cardiac events when using intravenous glycoprotein receptor antagonists periprocedurally during percutaneous revascularization approaches in elderly patients with unstable angina or non-ST-segment elevation myocardial infarction, especially when these measures are performed as soon as possible. However, the success of myocardial revascularization by a percutaneous or surgical approach is highly dependent on the patient's associated comorbidities, especially in patients over age 80 years.

18.
Competing risks data are commonly encountered in randomized clinical trials and observational studies. This paper considers the situation where the ending statuses of competing events have different clinical interpretations and/or are of simultaneous interest. In clinical trials, often more than one competing event has meaningful clinical interpretations even though the trial effects of different events could be different or even opposite to each other. In this paper, we develop estimation procedures and inferential properties for the joint use of multiple cumulative incidence functions (CIFs). Additionally, by incorporating longitudinal marker information, we develop estimation and inference procedures for weighted CIFs and related metrics. The proposed methods are applied to a COVID-19 in-patient treatment clinical trial, where the outcomes of COVID-19 hospitalization are either death or discharge from the hospital, two competing events with completely different clinical implications.

19.
This paper develops methodology for estimation of the effect of a binary time-varying covariate on failure times when the change time of the covariate is interval censored. The motivating example is a study of cytomegalovirus (CMV) disease in patients with human immunodeficiency virus (HIV) disease. We are interested in determining whether CMV shedding predicts an increased hazard for developing active CMV disease. Since a clinical screening test is needed to detect CMV shedding, the time that shedding begins is only known to lie in an interval bounded by the patient's last negative and first positive tests. In a Cox proportional hazards model with a time-varying covariate for CMV shedding, the partial likelihood depends on the covariate status of every individual in the risk set at each failure time. Due to interval censoring, this is not always known. To solve this problem, we use a Monte Carlo EM algorithm with a Gibbs sampler embedded in the E-step. We generate multiple completed data sets by drawing imputed exact shedding times based on the joint likelihood of the shedding times and event times under the Cox model. The method is evaluated using a simulation study and is applied to the data set described above.

20.

Background:

Statins were initially used to improve cardiovascular outcomes in people with established coronary artery disease, but recently their use has become more common in people at low cardiovascular risk. We did a systematic review of randomized trials to assess the efficacy and harms of statins in these individuals.

Methods:

We searched MEDLINE and EMBASE (to Jan. 28, 2011), registries of health technology assessments and clinical trials, and reference lists of relevant reviews. We included trials that randomly assigned participants at low cardiovascular risk to receive a statin versus a placebo or no statin. We defined low risk as an observed 10-year risk of less than 20% for cardiovascular-related death or nonfatal myocardial infarction, but we explored other definitions in sensitivity analyses.

Results:

We identified 29 eligible trials involving a total of 80 711 participants. All-cause mortality was significantly lower among patients receiving a statin than among controls (relative risk [RR] 0.90, 95% confidence interval [CI] 0.84–0.97, for trials with a 10-year risk of cardiovascular disease < 20% [primary analysis], and RR 0.83, 95% CI 0.73–0.94, for trials with 10-year risk < 10% [sensitivity analysis]). Patients in the statin group were also significantly less likely than controls to have nonfatal myocardial infarction (RR 0.64, 95% CI 0.49–0.84) or nonfatal stroke (RR 0.81, 95% CI 0.68–0.96). Neither metaregression nor stratified analyses suggested statistically significant differences in efficacy between high- and low-potency statins, or with larger reductions in cholesterol.
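Relative risks and confidence intervals of the kind reported above are computed from 2x2 counts with the standard delta-method interval on the log scale. A sketch with hypothetical counts (not taken from the review):

```python
import math

def rr_ci(a, n1, c, n2):
    """Relative risk of event in group 1 vs group 2, with 95% CI.

    a events among n1 in group 1; c events among n2 in group 2.
    The CI uses the delta-method standard error of log(RR).
    """
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical example: 10/100 events on statin vs 20/100 on control.
rr, lo, hi = rr_ci(10, 100, 20, 100)
```

A meta-analysis would then pool such log-RRs across trials with inverse-variance weights; this sketch only shows the per-trial calculation.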

Interpretation:

Statins were found to be efficacious in preventing death and cardiovascular morbidity in people at low cardiovascular risk. Reductions in relative risk were similar to those seen in patients with a history of coronary artery disease.

Although statins are known to improve survival and relevant clinical outcomes in high-risk populations,1 evidence of their clinical benefit in lower-risk populations is more equivocal. Initially, low-risk populations were defined by the absence of known coronary artery disease (and their treatment was termed "primary prevention"). However, it was subsequently recognized that these populations included both patients at very high risk of coronary artery disease (e.g., those with severe peripheral vascular disease) and those at very low risk (e.g., those aged < 40 years who have no diabetes or hypertension and have a low-density lipoprotein cholesterol level of less than 1.8 mmol/L). Accordingly, current guidelines for the use of statins are based on the projected risk of an atherosclerotic event rather than solely on the presence or absence of known coronary artery disease.2,3

Results of the recent JUPITER study (Justification for the Use of Statins in Prevention: an Intervention Trial Evaluating Rosuvastatin)4 have renewed enthusiasm for the use of statins in people without a history of coronary artery disease and have generated further controversy as to whether high-potency statins such as rosuvastatin and atorvastatin lead to better clinical outcomes than low-potency statins such as pravastatin, simvastatin, fluvastatin and lovastatin. We did a systematic review of randomized trials to assess the efficacy and harms of statins in people at low cardiovascular risk, including indirect comparisons of high-potency and low-potency statins.

