Similar articles
20 similar articles found (search time: 31 ms)
1.
The restricted mean survival time (RMST) evaluates the expectation of survival time truncated at a prespecified time point, because the mean survival time in the presence of censoring is typically not estimable. The frequentist inference procedure for the RMST has been widely advocated for the comparison of two survival curves, while research from the Bayesian perspective is rather limited. For the RMST of both right- and interval-censored data, we propose Bayesian nonparametric estimation and inference procedures. By assigning a mixture of Dirichlet processes (MDP) prior to the distribution function, we can estimate the posterior distribution of the RMST. We also explore another Bayesian nonparametric approach using the Dirichlet process mixture model and make comparisons with the frequentist nonparametric method. Simulation studies demonstrate that the Bayesian nonparametric RMST under diffuse MDP priors leads to robust estimation, while under informative priors it can incorporate prior knowledge into the nonparametric estimator. Analysis of real trial examples demonstrates the flexibility and interpretability of the Bayesian nonparametric RMST for both right- and interval-censored data.

2.
The t-year mean survival or restricted mean survival time (RMST) has been used as an appealing summary of the survival distribution within a time window [0, t]. RMST is the patient's life expectancy until time t and can be estimated nonparametrically by the area under the Kaplan-Meier curve up to t. In a comparative study, the difference or ratio of two RMSTs has been utilized to quantify the between-group difference as a clinically interpretable alternative summary to the hazard ratio. The choice of the time window [0, t] may be prespecified at the design stage of the study based on clinical considerations. On the other hand, after the survival data have been collected, the choice of time point t could be data-dependent. The standard inferential procedures for the corresponding RMST, which is also data-dependent, ignore this subtle yet important issue. In this paper, we clarify how to make inference about a random “parameter.” Moreover, we demonstrate that under a rather mild condition on the censoring distribution, one can make inference about the RMST up to t, where t is less than or even equal to the largest follow-up time (either observed or censored) in the study. This finding empirically reduces the subjectivity of the choice of t. The proposal is illustrated with survival data from a primary biliary cirrhosis study, and its finite-sample properties are investigated via an extensive simulation study.
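As the abstract notes, the RMST is simply the area under the Kaplan-Meier curve up to t. A minimal sketch of that estimator (function and variable names are our own; plain NumPy, no variance estimation):

```python
import numpy as np

def km_rmst(time, event, tau):
    """RMST on [0, tau]: area under the Kaplan-Meier curve up to tau."""
    time = np.asarray(time, float)
    event = np.asarray(event, int)
    order = np.argsort(time)
    time, event = time[order], event[order]
    surv, area, last = 1.0, 0.0, 0.0
    for t in np.unique(time[(event == 1) & (time <= tau)]):
        at_risk = np.sum(time >= t)
        deaths = np.sum((time == t) & (event == 1))
        area += surv * (t - last)          # rectangle up to this event time
        surv *= 1.0 - deaths / at_risk     # Kaplan-Meier drop at t
        last = t
    return area + surv * (tau - last)      # final rectangle up to tau
```

With no censoring, the result reduces to the sample mean of min(T, tau), which is a convenient sanity check.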

3.
For multicenter randomized trials or multilevel observational studies, the Cox regression model has long been the primary approach to study the effects of covariates on time-to-event outcomes. A critical assumption of the Cox model is the proportionality of the hazard functions for modeled covariates, violations of which can result in ambiguous interpretations of the hazard ratio estimates. To address this issue, the restricted mean survival time (RMST), defined as the mean survival time up to a fixed time in a target population, has been recommended as a model-free target parameter. In this article, we generalize the RMST regression model to clustered data by directly modeling the RMST as a continuous function of restriction times with covariates, while properly accounting for within-cluster correlations to achieve valid inference. The proposed method estimates regression coefficients via weighted generalized estimating equations, coupled with a cluster-robust sandwich variance estimator to achieve asymptotically valid inference with a sufficient number of clusters. In small-sample scenarios where a limited number of clusters are available, however, the proposed sandwich variance estimator can exhibit negative bias in capturing the variability of regression coefficient estimates. To overcome this limitation, we further propose and examine bias-corrected sandwich variance estimators to reduce the negative bias of the cluster-robust sandwich variance estimator. We study the finite-sample operating characteristics of the proposed methods through simulations and reanalyze two multicenter randomized trials.
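The cluster-robust sandwich structure referred to above can be illustrated with ordinary least squares standing in for the paper's weighted GEE; this is a simplified sketch under that substitution (Liang-Zeger form, our own function names, no weighting or bias correction):

```python
import numpy as np

def cluster_robust_ols(X, y, cluster):
    """OLS with a cluster-robust (Liang-Zeger) sandwich variance:
    bread @ meat @ bread, where the meat sums outer products of
    within-cluster score contributions."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    cluster = np.asarray(cluster)
    bread = np.linalg.inv(X.T @ X)
    beta = bread @ X.T @ y
    resid = y - X @ beta
    p = X.shape[1]
    meat = np.zeros((p, p))
    for g in np.unique(cluster):
        idx = cluster == g
        score = X[idx].T @ resid[idx]   # summed score within cluster g
        meat += np.outer(score, score)
    V = bread @ meat @ bread            # cluster-robust covariance of beta
    return beta, V
```

A common small-sample adjustment, in the spirit of the bias-corrected estimators the article studies, is to scale the meat by G/(G-1) for G clusters; the corrections examined in the paper are more refined.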

4.
In the context of right-censored and interval-censored data, we develop asymptotic formulas to compute pseudo-observations for the survival function and the restricted mean survival time (RMST). These formulas are based on the original estimators and do not involve computation of the jackknife estimators. For right-censored data, von Mises expansions of the Kaplan–Meier estimator are used to derive the pseudo-observations. For interval-censored data, a general class of parametric models for the survival function is studied. An asymptotic representation of the pseudo-observations is derived involving the Hessian matrix and the score vector. Theoretical results that justify the use of pseudo-observations in regression are also derived. The formula is illustrated on the piecewise-constant-hazard model for the RMST. The proposed approximations are extremely accurate, even for small sample sizes, as illustrated by Monte Carlo simulations and real data. We also study the gain in terms of computation time, as compared to the original jackknife method, which can be substantial for a large dataset.
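For contrast with the asymptotic formulas, the brute-force jackknife pseudo-observations are easy to write down. A sketch (names are ours), using the uncensored case as a sanity check: there the RMST estimator reduces to the mean of min(T, tau), and its pseudo-observations reduce exactly to min(T_i, tau):

```python
import numpy as np

def jackknife_pseudo(values, estimator):
    """Jackknife pseudo-observations: n*theta_hat - (n-1)*theta_hat(-i)."""
    values = np.asarray(values, float)
    n = len(values)
    theta = estimator(values)
    return np.array([n * theta - (n - 1) * estimator(np.delete(values, i))
                     for i in range(n)])

# Uncensored sanity check: RMST estimator = mean of min(T, tau),
# whose pseudo-observations are exactly min(T_i, tau).
tau = 3.0
t = np.array([1.0, 2.5, 4.0, 6.0])
pseudo = jackknife_pseudo(np.minimum(t, tau), np.mean)
```

With censoring, `estimator` would be the Kaplan-Meier-based RMST, and the n leave-one-out refits are exactly the cost that the paper's closed-form approximations avoid.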

5.
Background

High-grade gliomas (HGGs) are a heterogeneous disease group with variable prognosis, inevitably causing deterioration of quality of life. The estimated 2-year overall survival is 20%, despite the best trimodality treatment consisting of surgery, chemotherapy, and radiotherapy.

Aim

To evaluate long-term survival outcomes and factors influencing the survival of patients with high-grade gliomas treated with radiotherapy.

Materials and methods

Data from 47 patients diagnosed with high-grade gliomas between 2009 and 2014 and treated with three-dimensional radiotherapy (3DRT) or intensity-modulated radiotherapy (IMRT) were analyzed retrospectively.

Results

Median survival was 16.6 months; 29 patients (62%) had died by the time of analysis. IMRT was employed in 68% of cases. The mean duration of radiotherapy was 56 days, and the mean delay to the start of radiotherapy was 61.7 days (range, 27–123 days). There were no statistically significant effects of the duration of radiotherapy or the delay to its start on patient outcomes.

Conclusions

Age, extent of gross resection, histological type, and use of adjuvant temozolomide influenced survival (p < 0.05). The estimated overall survival was 18 months (Kaplan–Meier estimator). Our results corroborate those reported in the literature.

6.
When the individual outcomes within a composite outcome appear to have different treatment effects, either in magnitude or direction, researchers may question the validity or appropriateness of using this composite outcome as a basis for measuring overall treatment effect in a randomized controlled trial. The question remains as to how to distinguish random variation in estimated treatment effects from important heterogeneity within a composite outcome. This paper suggests there may be some utility in directly testing the assumption of homogeneity of treatment effect across the individual outcomes within a composite outcome. We describe a treatment heterogeneity test for composite outcomes based on a class of models used for the analysis of correlated data arising from the measurement of multiple outcomes for the same individuals. Such a test may be useful in planning a trial with a primary composite outcome and, at trial end, in the final analysis and presentation. We demonstrate how to determine the statistical power to detect composite outcome treatment heterogeneity using the POISE Trial data. Then we describe how this test may be incorporated into a presentation of trial results with composite outcomes. We conclude that it may be informative for trialists to assess the consistency of treatment effects across the individual outcomes within a composite outcome using a formalized methodology, and the suggested test represents one option.
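A simplified stand-in for such a homogeneity test is Cochran's Q computed over the component-specific treatment effects. Unlike the correlated-data model in the paper, this sketch ignores the within-patient correlation between components, so it is illustrative only (function names are ours):

```python
import numpy as np

def heterogeneity_Q(effects, se):
    """Cochran's Q statistic for homogeneity of component-specific
    treatment effects (e.g., log odds ratios with standard errors).
    Compare Q to a chi-square distribution on k-1 degrees of freedom.
    Ignores between-component correlation: illustration only."""
    effects = np.asarray(effects, float)
    w = 1.0 / np.asarray(se, float) ** 2          # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)      # common-effect estimate
    Q = float(np.sum(w * (effects - pooled) ** 2))
    return Q, len(effects) - 1
```

Identical component effects give Q = 0; large Q relative to the chi-square reference suggests the composite is masking heterogeneous effects.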

7.
In the presence of competing causes of event occurrence (e.g., death), the interest might lie not only in the overall survival but also in the so-called net survival, that is, the hypothetical survival that would be observed if the disease under study were the only possible cause of death. Net survival estimation is commonly based on the excess hazard approach, in which the hazard rate of individuals is assumed to be the sum of a disease-specific hazard rate and an expected hazard rate, the latter assumed to be well approximated by the mortality rates obtained from general population life tables. However, this assumption might not be realistic if the study participants are not comparable with the general population. Also, the hierarchical structure of the data can induce a correlation between the outcomes of individuals coming from the same clusters (e.g., hospital, registry). We propose an excess hazard model that corrects simultaneously for these two sources of bias, instead of dealing with them independently as done previously. We assessed the performance of this new model and compared it with three similar models, using an extensive simulation study as well as an application to breast cancer data from a multicenter clinical trial. The new model performed better than the others in terms of bias, root mean square error, and empirical coverage rate. The proposed approach might be useful to account simultaneously for the hierarchical structure of the data and the non-comparability bias in studies such as long-term multicenter clinical trials, when there is interest in the estimation of net survival.

8.
Demographic estimation methods for plants with unobservable life-states
Demographic estimation of vital parameters in plants with an unobservable dormant state is complicated because the time of death is not known. Conventional methods assume that death occurs at a particular time after a plant has last been seen aboveground, but the consequences of assuming a particular duration of dormancy have never been tested. Capture–recapture methods do not make assumptions about the time of death; however, problems with parameter estimability have not yet been resolved. To date, a critical comparative assessment of these methods is lacking. We analysed data from a 10-year study of Cleistes bifaria, a terrestrial orchid with frequent dormancy, and compared demographic estimates obtained by five varieties of the conventional methods and two capture–recapture methods. All conventional methods produced spurious unity survival estimates for some years or some states, and estimates of demographic rates sensitive to the time-of-death assumption. In contrast, capture–recapture methods are more parsimonious in terms of assumptions, are based on well-founded theory, and did not produce spurious estimates. In Cleistes, dormant episodes lasted for 1–4 years (mean 1.4, SD 0.74). The capture–recapture models estimated ramet survival rate at 0.86 (SE ≈ 0.01), ranging from 0.77 to 0.94 (SEs ≤ 0.1) in any one year. The average fraction dormant was estimated at 30% (SE 1.5), ranging from 16% to 47% (SEs ≤ 5.1) in any one year. Multistate capture–recapture models showed that survival rates were positively related to precipitation in the current year, but transition rates were more strongly related to precipitation in the previous year than in the current year, with more ramets going dormant following dry years. Not all capture–recapture models of interest have estimable parameters; for instance, without excavating plants in years when they do not appear aboveground, it is not possible to obtain independent time-specific survival estimates for dormant plants. We introduce rigorous computer algebra methods to identify the parameters that are estimable in principle. As life-states are a prominent feature of plant life cycles, multistate capture–recapture models are a natural framework for analysing population dynamics of plants with dormancy.

9.
Using multiple historical trials with surrogate and true endpoints, we consider various models to predict the effect of treatment on a true endpoint in a target trial in which only a surrogate endpoint is observed. This predicted result is computed using (1) a prediction model (mixture, linear, or principal stratification) estimated from historical trials and the surrogate endpoint of the target trial and (2) a random extrapolation error estimated from successively leaving out each trial among the historical trials. The method applies to either binary outcomes or survival to a particular time that is computed from censored survival data. We compute a 95% confidence interval for the predicted result and validate its coverage using simulation. To summarize the additional uncertainty from using a predicted instead of a true result for the estimated treatment effect, we compute its multiplier of standard error. Software is available for download.
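The leave-one-trial-out step in (2) can be sketched as follows, assuming for illustration a simple linear prediction model from surrogate to true effect (the paper also considers mixture and principal-stratification models; names are ours):

```python
import numpy as np

def loo_extrapolation_errors(surrogate, true_eff):
    """Leave-one-trial-out: refit the linear prediction model without each
    historical trial and record the prediction error on the held-out trial.
    The spread of these errors estimates the extrapolation error."""
    s = np.asarray(surrogate, float)
    y = np.asarray(true_eff, float)
    errs = []
    for i in range(len(s)):
        keep = np.arange(len(s)) != i
        slope, intercept = np.polyfit(s[keep], y[keep], 1)  # refit without trial i
        errs.append(y[i] - (intercept + slope * s[i]))      # held-out error
    return np.array(errs)
```

If the surrogate relationship were perfectly linear across trials, every held-out error would be zero; in practice the errors quantify how far a new trial may drift from the historical model.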

10.

Objective

In economic evaluation, a commonly used outcome measure for the treatment effect is the between-arm difference in restricted mean survival time (rmstD). This study illustrates how different survival analysis methods can be used to estimate the rmstD for economic evaluation using individual patient data (IPD) meta-analysis. Our aim was to study whether, and how, the choice of method impacts cost-effectiveness results.

Methods

We used IPD from the Meta-Analysis of Radiotherapy in Lung Cancer concerning 2,000 patients with locally advanced non-small cell lung cancer, included in ten trials. We considered methods either used in the field of meta-analysis or in economic evaluation but never applied to assess the rmstD for economic evaluation using IPD meta-analysis. Methods were classified into two approaches. With the first approach, the rmstD is estimated directly as the area between the two pooled survival curves. With the second approach, the rmstD is based on the aggregation of the rmstDs estimated in each trial.

Results

The average incremental cost-effectiveness ratio (ICER) and acceptability curves were sensitive to the method used to estimate the rmstD. The estimated rmstDs ranged from 1.7 months to 2.5 months, and mean ICERs ranged from €24,299 to €34,934 per life-year gained, depending on the chosen method. At a ceiling ratio of €25,000 per life-year gained, the probability of the experimental treatment being cost-effective ranged from 31% to 68%.

Conclusions

This case study suggests that the method chosen to estimate the rmstD from an IPD meta-analysis is likely to influence the results of cost-effectiveness analyses.
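The second approach described in the Methods (aggregating trial-level rmstDs) can be sketched as fixed-effect inverse-variance pooling; this is an illustrative assumption, since the study compares several pooling methods (names are ours):

```python
import numpy as np

def pooled_rmstd(rmstd, se):
    """Fixed-effect inverse-variance pooling of per-trial rmstD estimates:
    weight each trial by 1/SE^2 and combine."""
    rmstd = np.asarray(rmstd, float)
    w = 1.0 / np.asarray(se, float) ** 2
    est = float(np.sum(w * rmstd) / np.sum(w))
    se_pooled = float(np.sqrt(1.0 / np.sum(w)))
    return est, se_pooled
```

The first approach (area between the two pooled survival curves) would instead estimate the curves on the stacked IPD and integrate their difference, which generally yields a different answer, which is the point the case study makes.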

11.
G-estimation of structural nested models (SNMs) plays an important role in estimating the effects of time-varying treatments with appropriate adjustment for time-dependent confounding. As SNMs for a failure time outcome, structural nested accelerated failure time models (SNAFTMs) and structural nested cumulative failure time models have been developed. The latter models belong to the class of structural nested mean models (SNMMs) and do not involve artificial censoring, which induces several difficulties in g-estimation of SNAFTMs. Recently, the restricted mean time lost (RMTL), which corresponds to the area under a distribution function up to a restriction time, has been attracting attention in the clinical trials community as an appropriate summary measure of a failure time outcome. In this study, we propose another SNMM for a failure time outcome, called the structural nested RMTL model (SNRMTLM), and describe randomized and observational g-estimation procedures that use different assumptions for the treatment mechanism in a randomized trial setting. We also provide methods to estimate marginal RMTLs under static treatment regimes using estimated SNRMTLMs. A simulation study evaluates the finite-sample performance of the proposed methods compared with conventional intention-to-treat and per-protocol analyses. We illustrate the proposed methods using data from a randomized controlled trial for cardiovascular disease with treatment changes. G-estimation of SNRMTLMs is a useful tool to estimate the effects of time-varying treatments on a failure time outcome.
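The RMTL mentioned above is the complement of the RMST on [0, tau]: RMTL = tau - RMST. A sketch for the uncensored case, where the RMST estimator is just the mean of min(T, tau) (function name is ours):

```python
import numpy as np

def rmtl_uncensored(time, tau):
    """Restricted mean time lost on [0, tau] without censoring:
    tau minus the RMST, where the RMST estimator is mean(min(T, tau)).
    Equivalently, the area under the cumulative distribution function."""
    t = np.asarray(time, float)
    return tau - float(np.minimum(t, tau).mean())
```

With censoring, one would replace the sample mean by the area under a Kaplan-Meier curve; the identity RMST + RMTL = tau holds either way.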

12.
Some clinical trials follow a design where patients are randomized to a primary therapy at entry, followed by another randomization to maintenance therapy contingent upon disease remission. Ideally, an analysis would allow different treatment policies (i.e., combinations of primary and maintenance therapy specified up front) to be compared. Standard practice is to conduct separate analyses for the primary and follow-up treatments, which does not address this issue directly. We propose consistent estimators for the survival distribution and mean restricted survival time for each treatment policy in such two-stage studies and derive large-sample properties. The methods are demonstrated on a leukemia clinical trial data set and through simulation.

13.
Objective

To compare the cost effectiveness of sildenafil and papaverine-phentolamine injections for treating erectile dysfunction.

Design

Cost utility analysis comparing treatment with sildenafil (allowing a switch to injection therapy) and treatment with papaverine-phentolamine (no switch allowed). Costs and effects were estimated from the societal perspective. Using time trade-off, a sample of the general public (n=169) valued health states relating to erectile dysfunction. These values were used to estimate health-related quality of life by converting the clinical outcomes of a trial into quality-adjusted life years (QALYs).

Participants

169 residents of Rotterdam.

Results

Participants thought that erectile dysfunction limits quality of life considerably: the mean utility gain attributable to sildenafil was 0.11. Overall, treatment with sildenafil gained more QALYs, but the total costs were higher. The incremental cost-effectiveness ratio for the introduction of sildenafil was £3639 in the first year and fell in following years. Doubling the frequency of use of sildenafil almost doubled the cost per additional QALY.

Conclusions

Treatment with sildenafil is cost effective. When considering funding sildenafil, healthcare systems should take into account that the frequency of use affects cost effectiveness.

14.
Variance estimators are derived, via influence functions, for estimators of the average lead time and average benefit time due to screening in a randomized screening trial. The influence functions demonstrate that these estimators are asymptotically equivalent to the mean difference, between the study and control case groups, in the appropriate survival times. For estimating benefit time, the survival time is measured from the start of the study; for estimating lead time, the survival time is measured from the time of diagnosis. Asymptotic variances of these estimators can be calculated in a straightforward manner from the influence functions, and these variances can be estimated from actual trial data. The performance of the variance estimators is assessed via a simulated screening trial. The situation involving censored data is also discussed.

15.
Lu Mao. Biometrics, 2023, 79(3): 1749–1760
Measuring the treatment effect on recurrent events like hospitalization in the presence of death has long challenged statisticians and clinicians alike. Traditional inference on the cumulative frequency unjustly penalizes survivorship, as longer survivors also tend to experience more adverse events. Expanding a recently suggested idea of the “while-alive” event rate, we consider a general class of such estimands that adjust for the length of survival without losing causal interpretation. Given a user-specified loss function that allows for arbitrary weighting, we define as estimand the average loss experienced per unit time alive within a target period and use the ratio of this loss rate to measure the effect size. Scaling the loss rate by the width of the corresponding time window gives us an alternative, and sometimes more photogenic, way of showing the data. To make inferences, we construct a nonparametric estimator for the loss rate through the cumulative loss and the restricted mean survival time and derive its influence function in closed form for variance estimation and testing. As simulations and analysis of real data from a heart failure trial both show, the while-alive approach corrects for the false attenuation of treatment effect due to patients living longer under treatment, with increased statistical power as a result. The proposed methods are implemented in the R package WA, which is publicly available from the Comprehensive R Archive Network (CRAN).
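The loss-rate estimand can be sketched in the simplest setting: no censoring, with each patient's cumulative loss on [0, tau] already computed. The estimator is then the average cumulative loss divided by the restricted mean survival time (this is our own simplified illustration, not the WA package's estimator):

```python
import numpy as np

def while_alive_loss_rate(total_loss, followup, tau):
    """'While-alive' loss rate sketch without censoring: average cumulative
    loss on [0, tau] divided by the average time alive on [0, tau] (the RMST).
    total_loss[i] is patient i's loss (e.g., weighted event count) by tau."""
    t = np.minimum(np.asarray(followup, float), tau)
    return float(np.mean(total_loss)) / float(np.mean(t))
```

The treatment effect would then be summarized by the ratio of the two arms' loss rates, which is the adjustment for survival length that the abstract describes.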

16.

Background

Randomized controlled trials almost invariably use the hazard ratio (HR), calculated with a Cox proportional hazards model, as a treatment efficacy measure. Despite the widespread adoption of HRs, they provide a limited understanding of the treatment effect and may even provide a biased estimate when the assumption of proportional hazards in the Cox model is not supported by the trial data. Additional treatment effect measures on the survival probability or the time scale may be used to supplement HRs, but a framework for the simultaneous generation of these measures is lacking.

Methods

By splitting follow-up time at the nodes of a Gauss–Lobatto numerical quadrature rule, techniques for Poisson generalized additive models (PGAMs) can be adopted for flexible hazard modeling. Straightforward post-estimation simulation transforms PGAM estimates of the log hazard into estimates of the survival function. These in turn were used to calculate relative and absolute risks, and even differences in restricted mean survival time between treatment arms. We illustrate our approach with extensive simulations and in two trials: IPASS (in which the proportionality of hazards was violated) and HEMO, a long-duration study conducted under evolving standards of care in a heterogeneous patient population.

Findings

PGAMs can generate estimates of the survival function and the hazard ratio that are essentially identical to those obtained by Kaplan–Meier curve analysis and the Cox model. PGAMs can simultaneously provide multiple measures of treatment efficacy after a single data pass. Furthermore, they supported not only unadjusted (overall treatment effect) analyses but also subgroup and adjusted analyses, while incorporating multiple time scales and accounting for non-proportional hazards in the survival data.

Conclusions

By augmenting the HR conventionally reported, PGAMs have the potential to support the inferential goals of multiple stakeholders involved in the evaluation and appraisal of clinical trial results under proportional and non-proportional hazards.
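The key mechanical step of the PGAM approach, splitting each subject's follow-up at quadrature nodes to build a Poisson dataset with interval exposures, can be sketched as follows (pure Python, our own names; the actual method then fits a penalized spline to these rows with log-exposure as offset):

```python
def split_followup(time, event, nodes):
    """Split each subject's follow-up at the given sorted time nodes,
    producing (interval_exposure, event_indicator) rows suitable for a
    Poisson hazard model with log(exposure) as the offset."""
    rows = []
    for t, d in zip(time, event):
        last = 0.0
        for node in nodes:
            if node >= t:
                rows.append((t - last, d))   # final (possibly partial) interval
                break
            rows.append((node - last, 0))    # fully survived interval
            last = node
        else:
            rows.append((t - last, d))       # follow-up extends past all nodes
    return rows
```

Each subject contributes total exposure equal to their follow-up time and at most one event, which is what makes the Poisson likelihood equivalent to the piecewise-hazard survival likelihood.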

17.
Transcranial direct current stimulation (tDCS) has shown potential for providing tinnitus relief, although positive effects have usually been observed only during a short time period after treatment. In recent studies the focus has turned from one-session experiments towards multi-session treatment studies investigating long-term outcomes with double-blinded, sham-controlled study designs. Traditionally, tDCS has been administered in a clinical setting by a healthcare professional, but in studies involving multiple treatment sessions, a trade-off often has to be made between sample size and the amount of labor needed to run the trial. Also, as the number of required visits to the clinic increases, the dropout rate is likely to rise proportionally. The aim of the current study was to find out whether tDCS treatment for tinnitus could be patient-administered in a domiciliary setting and whether the results would be comparable to those from in-hospital treatment studies. Forty-three patients with chronic (>6 months) tinnitus were involved in the study, and data on 35 of these patients were included in the final analysis. Patients received 20 minutes of left temporal area anodal (LTA) or bifrontal tDCS stimulation (2 mA) or sham stimulation (0.3 mA) for ten consecutive days. An overall reduction in the main outcome measure, the Tinnitus Handicap Inventory (THI), was found (mean change 5.0 points, p < 0.05), but there was no significant difference between active and sham treatment outcomes. Patients found the tDCS treatment easy to administer, and they all tolerated it well. In conclusion, self-administered domiciliary tDCS treatment for tinnitus was found to be safe and feasible and gave outcome results similar to recent randomized controlled long-term treatment trials. The results suggest a better overall treatment response, as measured by the THI, with domiciliary treatment than with in-hospital treatment, but this advantage is not related to the tDCS variant. The study protocol demonstrated here is not restricted to tinnitus.

18.
This study examines the impact that pharmaceutical innovation, which accounts for most private biomedical research expenditure, has had on longevity. We perform two types of two-way fixed-effects analyses, which control for the effects of many potentially confounding variables. First, we analyze long-run (2006–2018) changes in longevity associated with different diseases in a single country: the U.S. Then, we analyze relative longevity levels associated with different diseases in 26 high-income countries during a single time period (2006–2016). The measure of longevity we analyze, mean age at time of death, is strongly positively correlated across countries with life expectancy at birth. The measure of pharmaceutical innovation we use is the mean vintage (year of initial world launch) of the drugs used to treat each disease in each country. Changes in the vintage distribution of drugs are due to both entry of new drugs and exit of old drugs. Our analysis of U.S. data indicates that the diseases for which there were larger increases in drug vintage tended to have larger increases in the longevity of Americans of all races and both sexes. In other words, the lower the mean age of the drugs, the higher the mean age at death. We test, and are unable to reject, the “parallel trends” hypothesis. We estimate that the 2006–2018 increase in drug vintage increased the mean age at death of Americans by about 6 months (66% of the observed increase). Controlling for sex, race, and education has only a small effect on the estimate of the vintage coefficient. The estimates indicate that drug vintage did not have a significant effect on the mean age at death of decedents with less than 9 years of education. Drug vintage had a positive and significant effect on the mean age at death of decedents with at least 9 years of education, and a larger effect on the mean age at death of decedents with at least 13 years of education. 
The finding that pharmaceutical innovation has a larger effect on the longevity of people with more education is consistent with previous evidence that more educated people are more likely to use newer drugs. Our analysis of data on 26 high-income countries indicates that the higher the vintage of drugs available to treat a disease in a country, the higher mean age at death was, controlling for fixed disease and country effects. The increase in drug vintage is estimated to have increased mean age at death in the 26 countries by 1.23 years between 2006 and 2016, or 73% of the observed increase. We obtain estimates of the cost of pharmaceutical innovation (its impact on drug expenditure), as well as estimates of an important benefit of pharmaceutical innovation (the number of life-years gained from it), and of their ratio, i.e., the incremental cost-effectiveness ratio. Estimates of the cost per life-year gained for the U.S. and the 26 countries are $35,817 and $13,904, respectively. Both figures are well below per capita GDP in the respective regions, suggesting that, overall, pharmaceutical innovation was highly cost-effective.

19.
Thach CT, Fisher LD. Biometrics, 2002, 58(2): 432–438
In the design of clinical trials, the sample size for the trial is traditionally calculated from estimates of parameters of interest, such as the mean treatment effect, which can often be inaccurate. However, recalculation of the sample size based on an estimate of the parameter of interest that uses accumulating data from the trial can lead to inflation of the overall Type I error rate of the trial. The self-designing method of Fisher, also known as the variance-spending method, allows the use of all accumulating data in a sequential trial (including the estimated treatment effect) in determining the sample size for the next stage of the trial without inflating the Type I error rate. We propose a self-designing group sequential procedure to minimize the expected total cost of a trial. Cost is an important parameter to consider in the statistical design of clinical trials due to limited financial resources. Using Bayesian decision theory on the accumulating data, the design specifies sequentially the optimal sample size and the proportion of the test statistic's variance needed for each stage of a trial to minimize the expected cost of the trial. The optimality is with respect to a prior distribution on the parameter of interest. Results are presented for a simple two-stage trial. This method can be extended to nonmonetary costs, such as ethical costs or quality-adjusted life years.

20.
There has been much development in Bayesian adaptive designs in clinical trials. In the Bayesian paradigm, the posterior predictive distribution characterizes the future possible outcomes given the currently observed data. Based on interim time-to-event data, we develop a new phase II trial design by combining the strengths of Bayesian adaptive randomization and the predictive probability. By comparing the mean survival times between patients assigned to the two treatment arms, more patients are assigned to the better treatment on the basis of adaptive randomization. We continuously monitor the trial using the predictive probability for early termination in the case of superiority or futility. We conduct extensive simulation studies to examine the operating characteristics of four designs: the proposed predictive probability adaptive randomization design, the predictive probability equal randomization design, the posterior probability adaptive randomization design, and the group sequential design. Adaptive randomization designs using predictive probability and posterior probability yield a longer overall median survival time than the group sequential design, but at the cost of a slightly larger sample size. The average sample size using the predictive probability method is generally smaller than that of the posterior probability design.
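The predictive-probability monitoring rule can be illustrated with a binary endpoint and a Beta prior; the design in the abstract compares mean survival times, so this is an analogy rather than the paper's method, and the function names and thresholds are our own:

```python
import numpy as np
from math import lgamma, comb, log

def beta_tail(a, b, theta0, grid=20001):
    """P(theta > theta0) under Beta(a, b), by trapezoidal integration
    of the density (assumes a, b >= 1 so the density is bounded)."""
    x = np.linspace(theta0, 1.0, grid)
    logc = lgamma(a + b) - lgamma(a) - lgamma(b)
    with np.errstate(divide="ignore", invalid="ignore"):
        dens = np.exp(logc + (a - 1) * np.log(x) + (b - 1) * np.log(1 - x))
    dens[~np.isfinite(dens)] = 0.0        # endpoint artifacts
    return float(np.sum((dens[1:] + dens[:-1]) / 2 * np.diff(x)))

def predictive_probability(x, n, n_max, theta0=0.5, success_cut=0.95,
                           a=1.0, b=1.0):
    """Predictive probability of trial success: the chance, given x
    responses in n patients so far, that after all n_max patients the
    final posterior satisfies P(theta > theta0) > success_cut.
    Future responses y follow a Beta-binomial under the current posterior."""
    m = n_max - n
    ap, bp = a + x, b + n - x
    log_beta_post = lgamma(ap) + lgamma(bp) - lgamma(ap + bp)
    pp = 0.0
    for y in range(m + 1):                # enumerate future response counts
        logpmf = (log(comb(m, y)) + lgamma(ap + y) + lgamma(bp + m - y)
                  - lgamma(ap + bp + m) - log_beta_post)
        if beta_tail(a + x + y, b + n_max - (x + y), theta0) > success_cut:
            pp += np.exp(logpmf)
    return pp
```

Monitoring then stops early for futility when the predictive probability falls below a low bar, or for superiority when it exceeds a high one; a promising interim record (e.g., 9/10) yields a high value, a poor one (e.g., 1/10) a value near zero.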


Copyright©北京勤云科技发展有限公司  京ICP备09084417号