Similar Documents
20 similar documents retrieved.
1.
In this article, we develop methods for quantifying center effects with respect to recurrent event data. In the models of interest, center effects are assumed to act multiplicatively on the recurrent event rate function. When the number of centers is large, traditional estimation methods that treat centers as categorical variables have many parameters and are sometimes not feasible to implement, especially with large numbers of distinct recurrent event times. We propose a new estimation method for center effects which avoids including indicator variables for centers. We then show that center effects can be consistently estimated by the center-specific ratio of observed to expected cumulative numbers of events. We also consider the case where the recurrent event sequence can be stopped permanently by a terminating event. Large-sample results are developed for the proposed estimators. We assess the finite-sample properties of the proposed estimators through simulation studies. The methods are then applied to national hospital admissions data for end-stage renal disease patients.
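A minimal numerical sketch of the observed-to-expected idea described above. All counts and follow-up times are invented, and the pooled-rate "expected" count is a simplified stand-in for the article's model-based expectation:

```python
# Hypothetical event counts and follow-up (patient-years) for three centers.
events = {"A": 30, "B": 12, "C": 45}
time = {"A": 100.0, "B": 80.0, "C": 150.0}

# Pooled event rate across all centers (events per patient-year),
# a simplified stand-in for the model-based expected rate.
pooled = sum(events.values()) / sum(time.values())

def center_effect(observed, followup, rate):
    """Center effect as observed / expected cumulative event count."""
    return observed / (rate * followup)

effects = {c: center_effect(events[c], time[c], pooled) for c in events}
# effects[c] > 1 flags a center with more events than expected.
```

By construction, the expectation-weighted average of these ratios is 1, so values above or below 1 directly indicate centers with excess or deficit event counts.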

2.
Injury is rapidly becoming the leading cause of death worldwide, and uncontrolled hemorrhage is the leading cause of potentially preventable death. In addition to crystalloid and/or colloid based resuscitation, severely injured trauma patients are routinely transfused RBCs, plasma, platelets, and in some centers either cryoprecipitate or fibrinogen concentrates or whole blood. The optimal timing and quantity of these products in the treatment of hypothermic, coagulopathic and acidotic trauma patients are unclear. The immediate availability of these components is important, as most hemorrhagic deaths occur within the first 3–6 h of patient arrival. While there are strongly held opinions and longstanding traditions in their use, there are few data with which to guide resuscitation therapy. Many current recommendations are based on euvolemic elective surgery patients and incorporate laboratory data parameters not widely available in the first few minutes after patient arrival. Finally, blood components themselves have evolved over the last 30 years, with great attention paid to product safety and inventory management, yet there are surprisingly limited clinical outcome data describing the long term effects of these changes, or how the components have improved clinical outcomes compared to whole blood therapy. When focused on survival of the rapidly bleeding trauma patient, it is unclear if current component therapy is equivalent to whole blood transfusion. In fact, data from the current wars in Iraq and Afghanistan suggest otherwise. All of these factors have contributed to the current situation, whereby blood component therapy is highly variable and not driven by long term patient outcomes. This review will address the issues raised above and describe recent trauma patient outcome data utilizing predetermined plasma:platelet:RBC transfusion ratios and an ongoing prospective observational trauma transfusion study.

3.
Lu Mao 《Biometrics》2023,79(1):61-72
The restricted mean time in favor (RMT-IF) of treatment is a nonparametric effect size for complex life history data. It is defined as the net average time the treated spend in a more favorable state than the untreated over a prespecified time window. It generalizes the familiar restricted mean survival time (RMST) from the two-state life–death model to account for intermediate stages in disease progression. The overall estimand can be additively decomposed into stage-wise effects, with the standard RMST as a component. Alternate expressions of the overall and stage-wise estimands as integrals of the marginal survival functions for a sequence of landmark transitioning events allow them to be easily estimated by plug-in Kaplan–Meier estimators. The dynamic profile of the estimated treatment effects as a function of follow-up time can be visualized using a multilayer, cone-shaped “bouquet plot.” Simulation studies under realistic settings show that the RMT-IF meaningfully and accurately quantifies the treatment effect and outperforms traditional tests on time to the first event in statistical efficiency thanks to its fuller utilization of patient data. The new methods are illustrated on a colon cancer trial with relapse and death as outcomes and a cardiovascular trial with recurrent hospitalizations and death as outcomes. The R-package rmt implements the proposed methodology and is publicly available from the Comprehensive R Archive Network (CRAN).
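The RMST component mentioned above has a simple plug-in form: the area under the Kaplan–Meier curve up to a horizon tau. A self-contained sketch with toy data (times and event indicators are illustrative, not from the article):

```python
def km_curve(times, events):
    """Kaplan-Meier curve as a list of (event_time, survival) steps.
    events[i] is 1 for an observed event, 0 for censoring."""
    data = sorted(zip(times, events))
    at_risk, surv, pts, i = len(data), 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        d = c = 0
        while i < len(data) and data[i][0] == t:
            d += data[i][1]
            c += 1 - data[i][1]
            i += 1
        if d:
            surv *= 1 - d / at_risk
            pts.append((t, surv))
        at_risk -= d + c
    return pts

def rmst(times, events, tau):
    """Restricted mean survival time: area under the KM curve on [0, tau]."""
    area, prev_t, prev_s = 0.0, 0.0, 1.0
    for t, s in km_curve(times, events):
        if t >= tau:
            break
        area += prev_s * (t - prev_t)
        prev_t, prev_s = t, s
    return area + prev_s * (tau - prev_t)
```

With no censoring and tau beyond the last event, rmst reduces to the sample mean of the event times, which is a quick sanity check on the implementation.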

4.
Zhang M  Tsiatis AA  Davidian M 《Biometrics》2008,64(3):707-715
The primary goal of a randomized clinical trial is to make comparisons among two or more treatments. For example, in a two-arm trial with continuous response, the focus may be on the difference in treatment means; with more than two treatments, the comparison may be based on pairwise differences. With binary outcomes, pairwise odds ratios or log odds ratios may be used. In general, comparisons may be based on meaningful parameters in a relevant statistical model. Standard analyses for estimation and testing in this context typically are based on the data collected on response and treatment assignment only. In many trials, auxiliary baseline covariate information may also be available, and it is of interest to exploit these data to improve the efficiency of inferences. Taking a semiparametric theory perspective, we propose a broadly applicable approach to adjustment for auxiliary covariates to achieve more efficient estimators and tests for treatment parameters in the analysis of randomized clinical trials. Simulations and applications demonstrate the performance of the methods.

5.
Multistate models can be successfully used for describing complex event history data, for example, describing stages in the disease progression of a patient. The so‐called “illness‐death” model plays a central role in the theory and practice of these models. Many time‐to‐event datasets from medical studies with multiple end points can be reduced to this generic structure. In these models, one important goal is the modeling of transition rates, but biomedical researchers are also interested in reporting interpretable results in a simple and summarized manner. These include estimates of predictive probabilities, such as the transition probabilities, occupation probabilities, cumulative incidence functions, and the sojourn time distributions. We will give a review of some of the available methods for estimating such quantities in the progressive illness‐death model conditionally (or not) on covariate measures. For some of these quantities estimators based on subsampling are employed. Subsampling, also referred to as landmarking, leads to small sample sizes and usually to heavily censored data leading to estimators with higher variability. To overcome this issue, estimators based on a preliminary estimation (presmoothing) of the probability of censoring may be used. Among these, the presmoothed estimators for the cumulative incidences are new. We also introduce feasible estimation methods for the cumulative incidence function conditionally on covariate measures. The proposed methods are illustrated using real data. A comparative simulation study of several estimation approaches is performed and existing software in the form of R packages is discussed.

6.
In clinical settings, the necessity of treatment is often measured in terms of the patient’s prognosis in the absence of treatment. Along these lines, it is often of interest to compare subgroups of patients (e.g., based on underlying diagnosis) with respect to pre-treatment survival. Such comparisons may be complicated by at least two important issues. First, mortality contrasts by subgroup may differ over follow-up time, as opposed to being constant, and may follow a form that is difficult to model parametrically. Moreover, in settings where the proportional hazards assumption fails, investigators tend to be more interested in cumulative (as opposed to instantaneous) effects on mortality. Second, pre-treatment death is censored by the receipt of treatment and in settings where treatment assignment depends on time-dependent factors that also affect mortality, such censoring is likely to be informative. We propose semiparametric methods for contrasting subgroup-specific cumulative mortality in the presence of dependent censoring. The proposed estimators are based on the cumulative hazard function, with pre-treatment mortality assumed to follow a stratified Cox model. No functional form is assumed for the nature of the non-proportionality. Asymptotic properties of the proposed estimators are derived, and simulation studies show that the proposed methods are applicable to practical sample sizes. The methods are then applied to contrast pre-transplant mortality for acute versus chronic End-Stage Liver Disease patients.

7.
Semiparametric models for cumulative incidence functions
Bryant J  Dignam JJ 《Biometrics》2004,60(1):182-190
In analyses of time-to-failure data with competing risks, cumulative incidence functions may be used to estimate the time-dependent cumulative probability of failure due to specific causes. These functions are commonly estimated using nonparametric methods, but in cases where events due to the cause of primary interest are infrequent relative to other modes of failure, nonparametric methods may result in rather imprecise estimates for the corresponding subdistribution. In such cases, it may be possible to model the cause-specific hazard of primary interest parametrically, while accounting for the other modes of failure using nonparametric estimators. The cumulative incidence estimators so obtained are simple to compute and are considerably more efficient than the usual nonparametric estimator, particularly with regard to interpolation of cumulative incidence at early or intermediate time points within the range of data used to fit the function. More surprisingly, they are often nearly as efficient as fully parametric estimators. We illustrate the utility of this approach in the analysis of patients treated for early stage breast cancer.
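For reference, the "usual nonparametric estimator" discussed above accumulates, at each event time, the overall event-free survival just before that time multiplied by the cause-specific hazard increment. A toy sketch (data invented; cause codes: 0 censored, 1 cause of interest, 2 competing cause):

```python
def cumulative_incidence(times, cause, t_eval, of_cause=1):
    """Nonparametric cumulative incidence for one cause at time t_eval,
    in the presence of a competing cause."""
    data = sorted(zip(times, cause))
    surv = 1.0      # overall event-free survival just before t
    cif = 0.0
    at_risk = len(data)
    i = 0
    while i < len(data) and data[i][0] <= t_eval:
        t = data[i][0]
        d1 = d2 = c = 0
        while i < len(data) and data[i][0] == t:
            k = data[i][1]
            d1 += k == of_cause
            d2 += k not in (0, of_cause)
            c += k == 0
            i += 1
        cif += surv * d1 / at_risk       # S(t-) * cause-specific hazard
        surv *= 1 - (d1 + d2) / at_risk  # all causes deplete the risk set
        at_risk -= d1 + d2 + c
    return cif
```

A useful invariant: with complete follow-up, the cause-specific cumulative incidences sum to one minus the overall survival.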

8.

Background

Haemorrhage is a common cause of death in trauma patients. Although transfusions are extensively used in the care of bleeding trauma patients, there is uncertainty about the balance of risks and benefits and how this balance depends on the baseline risk of death. Our objective was to evaluate the association of red blood cell (RBC) transfusion with mortality according to the predicted risk of death.

Methods and Findings

A secondary analysis of the CRASH-2 trial (which originally evaluated the effect of tranexamic acid on mortality in trauma patients) was conducted. The trial included 20,127 trauma patients with significant bleeding from 274 hospitals in 40 countries. We evaluated the association of RBC transfusion with mortality in four strata of predicted risk of death: <6%, 6%–20%, 21%–50%, and >50%. For this analysis the exposure considered was RBC transfusion, and the main outcome was death from all causes at 28 days. A total of 10,227 patients (50.8%) received at least one transfusion. We found strong evidence that the association of transfusion with all-cause mortality varied according to the predicted risk of death (p-value for interaction <0.0001). Transfusion was associated with an increase in all-cause mortality among patients with <6% and 6%–20% predicted risk of death (odds ratio [OR] 5.40, 95% CI 4.08–7.13, p<0.0001, and OR 2.31, 95% CI 1.96–2.73, p<0.0001, respectively), but with a decrease in all-cause mortality in patients with >50% predicted risk of death (OR 0.59, 95% CI 0.47–0.74, p<0.0001). Transfusion was associated with an increase in fatal and non-fatal vascular events (OR 2.58, 95% CI 2.05–3.24, p<0.0001). The risk associated with RBC transfusion was significantly increased for all the predicted risk of death categories, but the relative increase was higher for those with the lowest (<6%) predicted risk of death (p-value for interaction <0.0001). As this was an observational study, the results could have been affected by different types of confounding. In addition, we could not consider haemoglobin in our analysis. Sensitivity analyses (excluding patients who died early; conducting a propensity score analysis adjusted for use of platelets, fresh frozen plasma, and cryoprecipitate; and adjusting for country) produced similar results.
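The odds ratios with 95% confidence intervals reported above come from a standard 2x2-table construction. A minimal sketch with invented counts (not the CRASH-2 numbers), using the Wald interval on the log-odds scale:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table.
    a, b: deaths/survivors among transfused; c, d: among untransfused."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Invented counts: 20/80 deaths/survivors transfused, 10/90 untransfused.
or_, lo, hi = odds_ratio_ci(20, 80, 10, 90)
```

The interval is symmetric on the log scale, which is why published ORs such as 5.40 (4.08–7.13) look asymmetric around the point estimate.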

Conclusions

The association of transfusion with all-cause mortality appears to vary according to the predicted risk of death. Transfusion may reduce mortality in patients at high risk of death but increase mortality in those at low risk. The effect of transfusion in low-risk patients should be further tested in a randomised trial.

Trial registration

www.ClinicalTrials.gov NCT01746953

9.
10.
Chiang CT  Huang SY 《Biometrics》2009,65(1):152-158
In the time-dependent receiver operating characteristic curve analysis with several baseline markers, research interest focuses on seeking appropriate composite markers to enhance the accuracy in predicting the vital status of individuals over time. Based on censored survival data, we propose a more flexible estimation procedure for the optimal combination of markers under the validity of a time-varying coefficient generalized linear model for the event time without restrictive assumptions on the censoring pattern. The consistency of the proposed estimators is also established in this article. In contrast, the inverse probability weighting (IPW) approach might introduce a bias when the selection probabilities are misspecified in the estimating equations. The performance of both estimation procedures is examined and compared through a series of simulations. It is found from the simulation study that the proposed estimators are far superior to the IPW ones. Applying these methods to an angiography cohort, our estimation procedure is shown to be useful in predicting the time to all-cause and coronary artery disease related death.

11.
Objectives: A model is proposed to estimate and compare cervical cancer screening test properties for third world populations when only subjects with a positive screen receive the gold standard test. Two fallible screening tests are compared, VIA and VILI. Methods: We extend the model of Berry et al. [1] to the multi-site case in order to pool information across sites and form better estimates for prevalences of cervical cancer, the true positive rates (TPRs), and false positive rates (FPRs). For 10 centers in five African countries and India involving more than 52,000 women, Bayesian methods were applied when gold standard results for subjects who screened negative on both tests were treated as missing. The Bayesian methods employed suitably correct for the missing screen negative subjects. The study included gold standard verification for all cases, making it possible to validate model-based estimation of accuracy using only outcomes of women with positive VIA or VILI result (ignoring verification of double negative screening test results) with the observed full data outcomes. Results: Across the sites, estimates for the sensitivity of VIA ranged from 0.792 to 0.917 while for VILI sensitivities ranged from 0.929 to 0.977. False positive estimates ranged from 0.056 to 0.256 for VIA and 0.085 to 0.269 for VILI. The pooled estimates for the TPR of VIA and VILI are 0.871 and 0.968, respectively, compared to the full data values of 0.816 and 0.918. Similarly, the pooled estimates for the FPR of VIA and VILI are 0.134 and 0.146, respectively, compared to the full data values of 0.144 and 0.146. Globally, we found VILI had a statistically significant higher sensitivity but no statistical difference for the false positive rates could be determined. 
Conclusion: Hierarchical Bayesian methods provide a straightforward approach to estimating screening test properties and prevalences, and to performing comparisons, for screening studies where screen-negative subjects do not receive the gold standard test. The hierarchical model with random effects, used to analyze the sites simultaneously, yielded improved TPR estimates relative to the single-site analyses and FPR estimates nearly identical to the full data outcomes. Furthermore, higher TPRs but similar FPRs were observed for VILI compared to VIA.
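To illustrate the basic Bayesian machinery only (the article's hierarchical multi-site model with missing gold-standard results is far richer): with a Beta(a, b) prior and x positives among n verified diseased subjects, the posterior for a test's true positive rate is Beta(a + x, b + n − x), whose mean is a simple shrinkage estimate. Counts below are invented:

```python
def posterior_mean_tpr(x, n, a=1.0, b=1.0):
    """Posterior mean of a test's true positive rate under a Beta(a, b)
    prior, given x positives among n verified diseased subjects."""
    return (a + x) / (a + b + n)

# Invented verification counts for two tests on the same diseased group.
tpr_via = posterior_mean_tpr(85, 100)
tpr_vili = posterior_mean_tpr(95, 100)
```

The prior pseudo-counts (a, b) pull the estimate away from the raw proportion x/n, which matters most when n is small, as in small sites of a multi-site study.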

12.

Objectives

To compare 6 month and 12 month health status and functional outcomes between regional major trauma registries in Hong Kong and Victoria, Australia.

Summary Background Data

Multiple centres contributing to trauma registries in Hong Kong and to the Victorian State Trauma Registry (VSTR).

Methods

Multicentre, prospective cohort study. Major trauma patients aged ≥18 years were included. The main outcome measures were Extended Glasgow Outcome Scale (GOSE) functional outcome and risk-adjusted Short-Form 12 (SF-12) health status at 6 and 12 months after injury.

Results

261 cases from Hong Kong and 1955 cases from VSTR were included. Adjusting for age, sex, ISS, comorbid status, injury mechanism and GCS group, the odds of a better functional outcome for Hong Kong patients relative to Victorian patients at six months was 0.88 (95% CI: 0.66, 1.17), and at 12 months was 0.83 (95% CI: 0.60, 1.12). Adjusting for age, gender, ISS, GCS, injury mechanism and comorbid status, Hong Kong patients demonstrated comparable mean PCS-12 scores at 6-months (adjusted mean difference: 1.2, 95% CI: −1.2, 3.6) and 12-months (adjusted mean difference: −0.4, 95% CI: −3.2, 2.4) compared to Victorian patients. Adjusting for age, gender, ISS, GCS, injury mechanism and comorbid status, there was no difference in the MCS-12 scores of Hong Kong patients compared to Victorian patients at 6-months (adjusted mean difference: 0.4, 95% CI: −2.1, 2.8) or 12-months (adjusted mean difference: 1.8, 95% CI: −0.8, 4.5).

Conclusion

The unadjusted analyses showed better outcomes for Victorian cases compared to Hong Kong cases, but after adjusting for key confounders, there was no difference in 6-month or 12-month functional outcomes between the jurisdictions.

13.
In observational studies of survival time featuring a binary time-dependent treatment, the hazard ratio (an instantaneous measure) is often used to represent the treatment effect. However, investigators are often more interested in the difference in survival functions. We propose semiparametric methods to estimate the causal effect of treatment among the treated with respect to survival probability. The objective is to compare post-treatment survival with the survival function that would have been observed in the absence of treatment. For each patient, we compute a prognostic score (based on the pre-treatment death hazard) and a propensity score (based on the treatment hazard). Each treated patient is then matched with an alive, uncensored and not-yet-treated patient with similar prognostic and/or propensity scores. The experience of each treated and matched patient is weighted using a variant of Inverse Probability of Censoring Weighting to account for the impact of censoring. We propose estimators of the treatment-specific survival functions (and their difference), computed through weighted Nelson–Aalen estimators. Closed-form variance estimators are proposed which take into consideration the potential replication of subjects across matched sets. The proposed methods are evaluated through simulation, then applied to estimate the effect of kidney transplantation on survival among end-stage renal disease patients using data from a national organ failure registry.
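The building block of the weighted estimators described above is the Nelson–Aalen estimator of the cumulative hazard. A bare sketch with the weights omitted for clarity (toy data; events[i] is 1 for an event, 0 for censoring):

```python
def nelson_aalen(times, events):
    """Nelson-Aalen estimator: cumulative hazard H(t) as the running
    sum of d_j / n_j over distinct event times."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    H, out, i = 0.0, [], 0
    while i < len(data):
        t = data[i][0]
        d = c = 0
        while i < len(data) and data[i][0] == t:
            d += data[i][1]
            c += 1 - data[i][1]
            i += 1
        if d:
            H += d / at_risk
            out.append((t, H))
        at_risk -= d + c
    return out
```

A survival curve can then be obtained as exp(−H(t)); the article's matched, IPCW-weighted version replaces the raw counts d and n with weighted sums.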

14.
Molecular marker data collected from natural populations allow information on genetic relationships to be established without reference to an exact pedigree. Numerous methods have been developed to exploit the marker data. These fall into two main categories: method-of-moments estimators and likelihood estimators. Method-of-moments estimators are essentially unbiased, but utilise weighting schemes that are only optimal if the analysed pair is unrelated. Thus, they differ in their efficiency at estimating parameters for different relationship categories. Likelihood estimators show smaller mean squared errors but are much more biased. Both types of estimator have been used in variance component analysis to estimate heritability. All marker-based heritability estimators require that adequate levels of the true relationship be present in the population of interest and that adequate amounts of informative marker data are available. I review the different approaches to relationship estimation, with particular attention to optimizing the use of this relationship information in subsequent variance component estimation.

15.
Many late-phase clinical trials recruit subjects at multiple study sites. This introduces a hierarchical structure into the data that can result in a power loss compared to a more homogeneous single-center trial. Building on a recently proposed approach to sample size determination, we suggest a sample size recalculation procedure for multicenter trials with continuous endpoints. The procedure estimates nuisance parameters at interim from noncomparative data and recalculates the sample size required based on these estimates. In contrast to other sample size calculation methods for multicenter trials, our approach assumes a mixed effects model and does not rely on balanced data within centers. It is therefore advantageous, especially for sample size recalculation at interim. We illustrate the proposed methodology by a study evaluating a diabetes management system. Monte Carlo simulations are carried out to evaluate operating characteristics of the sample size recalculation procedure using comparative as well as noncomparative data, assessing their dependence on parameters such as between-center heterogeneity, residual variance of observations, treatment effect size and number of centers. We compare two different estimators for between-center heterogeneity, an unadjusted and a bias-adjusted estimator, both based on quadratic forms. The type 1 error probability as well as statistical power are close to their nominal levels for all parameter combinations considered in our simulation study for the proposed unadjusted estimator, whereas the adjusted estimator exhibits some type 1 error rate inflation. Overall, the sample size recalculation procedure can be recommended to mitigate risks arising from misspecified nuisance parameters at the planning stage.
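For intuition on what such a recalculation reacts to, consider the classical per-arm sample size for a two-sample z-test, n = 2σ²(z₁₋α/₂ + z₁₋β)²/δ². An interim procedure would re-evaluate this after replacing σ² with its noncomparative interim estimate (which, under the mixed model, is inflated by between-center heterogeneity). This is a simplified stand-in for the article's method, with illustrative numbers:

```python
import math
from statistics import NormalDist

def per_arm_n(sigma2, delta, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sample z-test:
    n = 2 * sigma^2 * (z_{1-alpha/2} + z_{1-beta})^2 / delta^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(2 * sigma2 * (z_a + z_b) ** 2 / delta ** 2)

n0 = per_arm_n(1.0, 0.5)    # planning-stage value
n1 = per_arm_n(1.44, 0.5)   # recalculated with a larger interim variance estimate
```

A variance underestimated at the planning stage directly translates into an underpowered trial, which is the risk the recalculation procedure mitigates.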

16.
Talmor M  Fahey TJ  Wise J  Hoffman LA  Barie PS 《Plastic and reconstructive surgery》2000,105(6):2244-8; discussion 2249-50
Large-volume liposuction can rarely be associated with major medical complications and death. A case of exsanguinating retroperitoneal hemorrhage that led to cardiopulmonary arrest in an obese 47-year-old woman who underwent large-volume liposuction is described. Extensive liposuction is not a minor procedure. Performance in an ambulatory setting should be monitored carefully, if it is performed at all. Reporting of adverse events associated with outpatient procedures performed by plastic surgeons should be mandated. Hemodynamic instability in the early postoperative period in an otherwise healthy patient may be due to fluid overload, lidocaine toxicity, or hemorrhagic shock and must be recognized and treated aggressively. Guidelines for the safe practice of large-volume liposuction need to be established.

17.
This article considers unbiased estimation of the mean, variance and sensitivity level of a sensitive variable via scrambled response modeling. In particular, we focus on estimation of the mean. The idea of using additive and subtractive scrambling has been suggested under a recent scrambled response model. Whether it is estimation of the mean, variance or sensitivity level, the proposed scheme of estimation is shown to be more efficient than that recent model. As far as the estimation of the mean is concerned, the proposed estimators perform better than the estimators based on recent additive scrambling models. Relative efficiency comparisons are also made in order to highlight the performance of the proposed estimators under the suggested scrambling technique.
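The core idea of additive scrambling can be shown in a few lines: each respondent reports X + S, where S is drawn from a scrambling distribution with known mean, so subtracting E[S] from the sample mean of the reports is unbiased for E[X]. A simulation sketch with made-up parameters (not the article's specific models):

```python
import random

random.seed(1)

# Invented parameters: sensitive variable X ~ N(50, 10); additive
# scrambling noise S ~ N(30, 5), with E[S] = 30 known to the analyst.
scramble_mean = 30.0
true_values = [random.gauss(50, 10) for _ in range(20000)]
reported = [x + random.gauss(scramble_mean, 5) for x in true_values]

# Unbiased estimator of E[X]: mean of reported values minus E[S].
est_mean = sum(reported) / len(reported) - scramble_mean
```

The privacy protection comes from the analyst never seeing an individual X; the price is extra variance from S, which is exactly what the article's efficiency comparisons quantify.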

18.
G. Asteris  S. Sarkar 《Genetics》1996,142(1):313-326
Bayesian procedures are developed for estimating mutation rates from fluctuation experiments. Three Bayesian point estimators are compared with four traditional ones using the results of 10,000 simulated experiments. The Bayesian estimators were found to be at least as efficient as the best of the previously known estimators. The best Bayesian estimator is one that uses 1/m^2 as the prior probability density function and a quadratic loss function. The advantage of using these estimators is most pronounced when the number of fluctuation test tubes is small. Bayesian estimation allows the incorporation of prior knowledge about the estimated parameter, in which case the resulting estimators are the most efficient. It enables the straightforward construction of confidence intervals for the estimated parameter. The increase in efficiency with prior information and the narrowing of the confidence intervals with additional experimental results are investigated. The results of the simulations show that any potential inaccuracy of estimation arising from lumping together all cultures with more than n mutants (the jackpots) almost disappears at n = 70 (provided that the number of mutations in a culture is low). These methods are applied to a set of experimental data to illustrate their use.
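Under quadratic loss, the Bayes point estimate is the posterior mean, which can be computed on a grid. The sketch below uses the 1/m^2 prior mentioned above but substitutes a Poisson likelihood for the mutant counts as a deliberately simplified stand-in for the Luria–Delbrück distribution used in real fluctuation analysis (an assumption for illustration only; counts are invented):

```python
import math

def posterior_mean(counts, grid):
    """Posterior mean of m on a grid: prior 1/m^2, Poisson likelihood
    (a simplified stand-in for the Luria-Delbruck distribution)."""
    weights = []
    for m in grid:
        prior = 1.0 / m ** 2
        like = math.prod(
            math.exp(-m) * m ** k / math.factorial(k) for k in counts
        )
        weights.append(prior * like)
    total = sum(weights)
    return sum(m * w for m, w in zip(grid, weights)) / total

grid = [0.1 * i for i in range(1, 201)]  # m on (0, 20]
counts = [3, 5, 2, 4, 6]                 # mutant counts per culture (invented)
m_hat = posterior_mean(counts, grid)
```

With this likelihood the posterior is a Gamma density, so the grid answer can be checked analytically; the same grid recipe works unchanged once the Poisson term is swapped for the true mutant-count distribution.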

19.
Greaves S  Sanson B  White P  Vincent JP 《Genetics》1999,152(4):1753-1766
Applications of quantitative genetics and conservation genetics often require measures of pairwise relationships between individuals, which, in the absence of known pedigree structure, can be estimated only by use of molecular markers. Here we introduce methods for the joint estimation of the two-gene and four-gene coefficients of relationship from data on codominant molecular markers in randomly mating populations. In a comparison with other published estimators of pairwise relatedness, we find these new "regression" estimators to be computationally simpler and to yield similar or lower sampling variances, particularly when many loci are used or when loci are hypervariable. Two examples are given in which the new estimators are applied to natural populations, one that reveals isolation-by-distance in an annual plant and the other that suggests a genetic basis for a coat color polymorphism in bears.

20.
Quantiles, especially the medians, of survival times are often used as summary statistics to compare the survival experiences between different groups. Quantiles are robust against outliers and are preferred over the mean. Multivariate failure time data often arise in biomedical research. For example, in clinical trials, each patient in the study may experience multiple events, which may be of the same type or distinct types, while in family studies of genetic diseases or litter-matched mice studies, failure times for subjects in the same cluster may be correlated. In this article, we propose nonparametric procedures for the estimation of quantiles with multivariate failure time data. We show that the proposed estimators asymptotically follow a multivariate normal distribution. The asymptotic variance‐covariance matrix of the estimated quantiles is estimated based on the kernel smoothing and bootstrap techniques. Simulation results show that the proposed estimators perform well in finite samples. The methods are illustrated with the burn‐wound infection data and the Diabetic Retinopathy Study (DRS) data.
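In the single-sample, independent-observations case, a survival quantile is read off the Kaplan–Meier curve: the p-th quantile is the earliest time at which estimated survival drops to 1 − p or below. The sketch below shows just this building block (toy data; it ignores the within-cluster correlation the article's procedures handle):

```python
def km_quantile(times, events, p=0.5):
    """p-th quantile of survival from a Kaplan-Meier curve: the earliest
    event time with estimated survival <= 1 - p. events[i] is 1 for an
    observed event, 0 for censoring. Returns None if not reached."""
    data = sorted(zip(times, events))
    at_risk, surv, i = len(data), 1.0, 0
    while i < len(data):
        t = data[i][0]
        d = c = 0
        while i < len(data) and data[i][0] == t:
            d += data[i][1]
            c += 1 - data[i][1]
            i += 1
        if d:
            surv *= 1 - d / at_risk
            if surv <= 1 - p:
                return t
        at_risk -= d + c
    return None  # quantile not reached within follow-up
```

Returning None when the curve never falls to 1 − p reflects a real feature of censored data: with heavy censoring, the median may simply not be estimable, which is one reason variance estimation for quantiles needs the smoothing and bootstrap machinery described above.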


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号