Similar articles
 20 similar articles found (search time: 15 ms)
1.
Paired survival times with potential censoring are often observed from two treatment groups in clinical trials and other types of clinical studies. The ratio of marginal hazard rates may be used to quantify the treatment effect in these studies. In this paper, a recently proposed nonparametric kernel method is used to estimate the marginal hazard rate, and the method of variance estimates recovery (MOVER) is used for the construction of the confidence intervals of a time‐dependent hazard ratio based on the confidence limits of a single marginal hazard rate. Two methods are proposed: one uses the delta method and another adopts the transformation method to construct confidence limits for the marginal hazard rate. Simulations are performed to evaluate the performance of the proposed methods. Real data from two clinical trials are analyzed using the proposed methods.
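The MOVER step described above is straightforward to sketch: given a point estimate and confidence limits for each marginal hazard rate (however they were obtained), limits for the hazard ratio are assembled on the log scale. The rates and limits below are hypothetical; this illustrates only the generic MOVER recipe, not the paper's kernel estimator or its delta/transformation limits.

```python
import math

def mover_ratio_ci(est1, l1, u1, est2, l2, u2):
    """MOVER confidence interval for the ratio est1/est2, built on the
    log scale from individual confidence limits (l, u) for each rate.
    All inputs must be positive."""
    # log(ratio) = log(est1) - log(est2); recover limits from each margin
    t1, a1, b1 = math.log(est1), math.log(l1), math.log(u1)
    t2, a2, b2 = math.log(est2), math.log(l2), math.log(u2)
    d = t1 - t2
    lo = d - math.sqrt((t1 - a1) ** 2 + (b2 - t2) ** 2)
    hi = d + math.sqrt((b1 - t1) ** 2 + (t2 - a2) ** 2)
    return math.exp(lo), math.exp(hi)

# hypothetical marginal hazard-rate estimates with 95% limits
lo, hi = mover_ratio_ci(0.08, 0.05, 0.12, 0.10, 0.07, 0.14)
print(round(lo, 3), round(hi, 3))
```

Because the combination happens on the log scale, the resulting interval for the ratio is guaranteed positive and need not be symmetric around the point estimate.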

2.
This work is motivated by clinical trials in chronic heart failure disease, where treatment has effects both on morbidity (assessed as recurrent non‐fatal hospitalisations) and on mortality (assessed as cardiovascular death, CV death). Recently, a joint frailty proportional hazards model has been proposed for this kind of efficacy outcome to account for a potential association between the risk rates for hospital admissions and CV death. However, more often clinical trial results are presented by treatment effect estimates that have been derived from marginal proportional hazards models, that is, a Cox model for mortality and an Andersen–Gill model for recurrent hospitalisations. We show how these marginal hazard ratios and their estimates depend on the association between the risk processes, when these are actually linked by shared or dependent frailty terms. First, we derive the marginal hazard ratios as a function of time. Then, applying least false parameter theory, we show that the marginal hazard ratio estimate for the hospitalisation rate depends on study duration and on parameters of the underlying joint frailty model. In particular, we identify parameters, for example the treatment effect on mortality, that determine whether the marginal hazard ratio estimate for hospitalisations is smaller than, equal to, or larger than the conditional one. How this affects rejection probabilities is further investigated in simulation studies. Our findings can be used to interpret marginal hazard ratio estimates in heart failure trials and are illustrated by the results of the CHARM‐Preserved trial (where CHARM is the ‘Candesartan in Heart failure Assessment of Reduction in Mortality and morbidity’ programme).

3.
Person‐time incidence rates are frequently used in medical research. However, standard estimation theory for this measure of event occurrence is based on the assumption of independent and identically distributed (iid) exponential event times, which implies that the hazard function remains constant over time. Under this assumption and assuming independent censoring, the observed person‐time incidence rate is the maximum‐likelihood estimator of the constant hazard, and the asymptotic variance of the log rate can be estimated consistently by the inverse of the number of events. However, in many practical applications, the assumption of constant hazard is not very plausible. In the present paper, an average rate parameter is defined as the ratio of expected event count to the expected total time at risk. This rate parameter is equal to the hazard function under constant hazard. For inference about the average rate parameter, an asymptotically robust variance estimator of the log rate is proposed. Given some very general conditions, the robust variance estimator is consistent under arbitrary iid event times, and is also consistent or asymptotically conservative when event times are independent but nonidentically distributed. In contrast, the standard maximum‐likelihood estimator may become anticonservative under nonconstant hazard, producing confidence intervals with less‐than‐nominal asymptotic coverage. These results are derived analytically and illustrated with simulations. The two estimators are also compared in five datasets from oncology studies.
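The contrast drawn above can be sketched numerically: the average rate is events divided by person-time, the ML variance of the log rate is 1/D, and a robust alternative can be built from per-subject residuals. The sandwich-type form below is an assumption for illustration, not necessarily the paper's exact estimator, and the data are hypothetical.

```python
import math

def person_time_rate_cis(events, times, z=1.96):
    """Person-time rate D/T with two variance estimates for log(rate):
    the ML one (1/D, valid under constant hazard) and a sandwich-type
    robust one from per-subject residuals (an assumed form)."""
    D, T = sum(events), sum(times)
    rate = D / T
    var_ml = 1.0 / D
    var_rob = sum((d - rate * t) ** 2 for d, t in zip(events, times)) / D ** 2
    def ci(v):
        half = z * math.sqrt(v)          # symmetric on the log scale
        return rate * math.exp(-half), rate * math.exp(half)
    return rate, ci(var_ml), ci(var_rob)

events = [1, 0, 1, 0, 0, 1, 0, 1]        # 0/1 event indicator per subject
times  = [2.0, 3.5, 1.0, 4.0, 2.5, 0.5, 3.0, 1.5]  # time at risk per subject
rate, ci_ml, ci_rob = person_time_rate_cis(events, times)
print(round(rate, 3))
```

Under a strongly nonconstant hazard the two intervals can differ noticeably, which is the anticonservativeness the abstract warns about.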

4.
The approach to early termination for efficacy in a trial where events occur over time but the primary question of interest relates to a long-term binary endpoint is not straightforward. This article considers comparison of treatment groups with Kaplan-Meier (KM) proportions evaluated at increasing times from randomization, at increasing calendar testing times. This strategy is employed to improve the ability to detect important treatment effects and provide critical treatments to patients in a timely manner. This dynamic Kaplan-Meier (DKM) approach is shown to be robust; that is, it produces high power and early termination time across a wide range of circumstances. In contrast, a fixed time KM comparison and the log-rank test are both shown to be more variable in performance. Practical considerations of implementing the DKM method are discussed.

5.
Variability in blood pressure predicts cardiovascular disease in young- and middle-aged subjects, but relevant data for older individuals are sparse. We analysed data from the PROspective Study of Pravastatin in the Elderly at Risk (PROSPER) study of 5804 participants aged 70–82 years with a history of, or risk factors for cardiovascular disease. Visit-to-visit variability in blood pressure (standard deviation) was determined using a minimum of five measurements over 1 year; an inception cohort of 4819 subjects had subsequent in-trial 3 years follow-up; longer-term follow-up (mean 7.1 years) was available for 1808 subjects. Higher systolic blood pressure variability independently predicted long-term follow-up vascular and total mortality (hazard ratio per 5 mmHg increase in standard deviation of systolic blood pressure = 1.2, 95% confidence interval 1.1–1.4; hazard ratio 1.1, 95% confidence interval 1.1–1.2, respectively). Variability in diastolic blood pressure associated with increased risk for coronary events (hazard ratio 1.5, 95% confidence interval 1.2–1.8 for each 5 mmHg increase), heart failure hospitalisation (hazard ratio 1.4, 95% confidence interval 1.1–1.8) and vascular (hazard ratio 1.4, 95% confidence interval 1.1–1.7) and total mortality (hazard ratio 1.3, 95% confidence interval 1.1–1.5), all in long-term follow-up. Pulse pressure variability was associated with increased stroke risk (hazard ratio 1.2, 95% confidence interval 1.0–1.4 for each 5 mmHg increase), vascular mortality (hazard ratio 1.2, 95% confidence interval 1.0–1.3) and total mortality (hazard ratio 1.1, 95% confidence interval 1.0–1.2), all in long-term follow-up. All associations were independent of respective mean blood pressure levels, age, gender, in-trial treatment group (pravastatin or placebo) and prior vascular disease and cardiovascular disease risk factors. 
Our observations suggest variability in diastolic blood pressure is more strongly associated with vascular or total mortality than is systolic pressure variability in older high-risk subjects.

6.
In cancer clinical trials, it is often of interest to estimate the ratios of hazard rates at some specific time points during the study from two independent populations. In this paper, we consider nonparametric confidence interval procedures for the hazard ratio based on kernel estimates for the hazard rates with under-smoothing bandwidths. Two methods are used to derive the confidence intervals: one based on the asymptotic normality of the ratio of the kernel estimates for the hazard rates in two populations and another through Fieller's Theorem. The performances of the proposed confidence intervals are evaluated through Monte-Carlo simulations and applied to the analysis of data from a clinical trial on early breast cancer.
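The Fieller construction mentioned above yields a confidence set for a ratio of two independent, approximately normal estimates by solving a quadratic in the ratio. A sketch with hypothetical hazard-rate estimates and variances (standing in for the kernel estimates of the paper):

```python
import math

def fieller_ratio_ci(x, vx, y, vy, z=1.96):
    """Fieller confidence interval for x/y when x and y are independent
    and approximately normal with variances vx and vy.
    Solves (y^2 - z^2 vy) R^2 - 2xy R + (x^2 - z^2 vx) = 0."""
    a = y * y - z * z * vy
    b = -2.0 * x * y
    c = x * x - z * z * vx
    disc = b * b - 4 * a * c
    if a <= 0 or disc < 0:
        return None  # interval is unbounded or the whole line
    r = math.sqrt(disc)
    return ((-b - r) / (2 * a), (-b + r) / (2 * a))

# hypothetical hazard-rate estimates at a fixed time point
ci = fieller_ratio_ci(x=0.08, vx=0.0004, y=0.10, vy=0.0005)
print(ci)
```

When the denominator estimate is not clearly bounded away from zero (a <= 0), Fieller's set is unbounded, which the function signals by returning None.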

7.
E Laska, M Meisner, H B Kushner. Biometrics, 1983, 39(4):1087-1091
Under either the random patient-effect model with sequence effects or the fixed patient-effect model, the usual two-period, two-treatment crossover design, AB,BA, cannot be used to estimate the contrast between direct treatment effects when unequal carryover effects are present. If baseline observations are available, the design AB,BA can validly be used to estimate a treatment contrast. However, the design AB,BA,AA,BB with baseline observations is more efficient. In fact, we show that this design is optimal whether or not baseline observations are available. For experiments with more than two periods, universally optimal designs are found for both models, with and without carryover effects. It is shown that uncertainty about the presence of carryover effects is of little or no consequence, and the addition of baseline observations is of little or no added value for designs with three or more periods; however, if the experiment is limited to only two periods the investigator pays a heavy penalty.

8.
The study was conducted to determine whether patients with rheumatoid arthritis (RA) are at increased risk of acute pancreatitis compared with those without RA and to determine if the risk of acute pancreatitis varied by anti-RA drug use. We used the large population-based dataset from the National Health Insurance (NHI) program in Taiwan to conduct a retrospective cohort study. Patients newly diagnosed with RA between 2000 and 2011 were referred to as the RA group. The comparator non-RA group was matched with propensity score, using age and sex, in the same time period. We presented the incidence density per 100,000 person-years. The propensity score and all variables were analyzed in fully adjusted Cox proportional hazard regression. The cumulative incidence of acute pancreatitis was assessed by Kaplan-Meier analysis, with significance based on the log-rank test. From claims data of one million enrollees randomly sampled from the Taiwan NHI database, 29,755 adults with RA were identified and 119,020 non-RA persons were matched as a comparison group. The RA cohort had a higher incidence density of acute pancreatitis (185.7 versus 119.0 per 100,000 person-years) than the non-RA cohort. The adjusted hazard ratio (HR) was 1.62 (95% CI [confidence interval] 1.43–1.83) for patients with RA to develop acute pancreatitis. Oral corticosteroid use decreased the risk of acute pancreatitis (adjusted HR 0.83, 95% CI 0.73–0.94) but without a dose-dependent effect. Current use of disease modifying anti-rheumatic drugs or tumor necrosis factor blockers did not decrease the risk of acute pancreatitis. In conclusion, patients with RA are at an elevated risk of acute pancreatitis. Use of oral corticosteroids may reduce the risk of acute pancreatitis.

9.
We investigate the optimal behaviour of an organism that is unable to obtain a reliable estimate of its mortality risk. In this case, natural selection will shape behaviour to be approximately optimal given the probability distribution of mortality risks in possible environments that the organism and its ancestors encountered. The mean of this distribution is the average mortality risk experienced by a randomly selected member of the species. We show that if an organism does not know the exact mortality risk, it should act as if the risk is less than the mean risk. This can be viewed as being optimistic. We argue that this effect is likely to be general.

10.
OBJECTIVE--To examine the influence that being female has on the outcome of acute myocardial infarction. DESIGN--Observational follow up study. SETTING--London district general hospital. PATIENTS--216 women and 607 men with acute myocardial infarction admitted to a coronary care unit from 1 January 1988 to 31 December 1992. MAIN OUTCOME MEASURES--All cause mortality and recurrent ischaemic events in the first six months. RESULTS--Event free survival (95% confidence interval) at six months was 63.3% (56.3% to 69.4%) in women and 76.1% (72.4% to 79.4%) in men, P < 0.001. The difference was confined to the first 30 days but thereafter the hazard plots for women and men converged, with reduction of the hazard ratio from 2.36 (1.70 to 3.27) to 0.81 (0.44 to 1.48). Women were older, but their excess risk persisted after adjustment for age, other baseline variables, and indices of severity of infarction (hazard ratio 1.53 (1.09 to 2.15), P = 0.015). Women tended to be treated with thrombolysis less commonly than men but the difference was small. Substantially fewer women than men, however, were discharged taking beta blockers (23.3% v 41.4%, P < 0.001), and although additional adjustment for discharge treatment did not further reduce the point estimate of the hazard ratio (1.84 (0.89-3.83)), the 95% confidence interval was wide and statistical significance was lost. CONCLUSIONS--Women with acute myocardial infarction have a worse prognosis than men but the excess risk is confined to the first 30 days and is only partly explained by age and other baseline variables. The tendency for women to receive less vigorous treatment than men must be remedied before gender can be considered to be an independent determinant of risk.

11.
In literature-based meta-analyses of time-to-event data, the number of events in the treated and control groups together with the total number of patients randomized to the two treatment arms are often used as summary statistics. If interest is in mortality at a specified moment in time, the number of events can, in most cases, only be obtained from the Kaplan-Meier curve. The estimated number of events, however, is typically larger than the true number of events. The effect of this overestimation on the Mantel-Haenszel test and the odds ratio is studied in this paper. From these results, it can be concluded that the number of events should not be estimated from the Kaplan-Meier curves for meta-analytic purposes unless virtually no patients are lost to follow-up or censored and there are still many patients at risk in the two groups at the time at which the number of events is to be determined.
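The overestimation described above can be illustrated directly: reading the event count off the curve as N·(1 − S(t)) ignores censoring, while the true count is the number of events actually observed by time t. A minimal sketch on hypothetical data with heavy censoring:

```python
def km_survival(data, t):
    """Kaplan-Meier survival probability at time t from (time, event)
    pairs, where event=1 is an observed event and event=0 is censoring."""
    s = 1.0
    for time, event in sorted(data):
        if time > t:
            break
        if event:
            at_risk = sum(1 for u, _ in data if u >= time)
            s *= 1.0 - 1.0 / at_risk
    return s

# hypothetical arm with heavy censoring
data = [(1, 1), (2, 0), (3, 1), (4, 0), (5, 0), (6, 1), (7, 0), (8, 0)]
n, t = len(data), 6
true_events = sum(e for u, e in data if u <= t)
km_events = n * (1 - km_survival(data, t))  # events "read off" the curve
print(true_events, round(km_events, 2))
```

With three observed events by t = 6 but interleaved censoring, the curve-based count exceeds the true count, matching the abstract's warning that the bias vanishes only when virtually no one is censored.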

12.
Wilks I. Bioethics, 1997, 11(5):413-426
This discussion paper continues the debate over risk-related standards of mental competence which appears in Bioethics 5. Dan Brock there defends an approach to mental competence in patients which defines it as being relative to differing standards, more or less rigorous depending on the degree of risk involved in proposed treatments. But Mark Wicclair raises a problem for this approach: if significantly different levels of risk attach, respectively, to accepting and refusing the same treatment, then it is possible, on this account, for a patient to be considered competent to accept, but not refuse, the treatment, or vice versa. I argue that this puzzle does not constitute a genuine problem for a risk-related standard.
To this end I focus on the situation where, of two mutually exclusive options, one is riskier, but offering more pronounced benefit, while the other is safer, but offering less benefit. I argue for this proposition: it can take far less insight to know that the safe option is good than to know that the risky option is better. Now say one is actually informed enough to know that the safe option is good, but not enough to know whether the risky option is better; in such a case one is competent to say yes to that first option (the safe one), but not to say yes to the other. (I argue in passing that Pascal's Wager can be interpreted as having precisely this deliberative structure.)
I thus conclude that cases do indeed exist where one can be competent to say yes but not no, or vice versa; and that it is thus not an anomaly in the risk-related standard that it entails the existence of such cases.

13.
Wei G, Schaubel DE. Biometrics, 2008, 64(3):724-732
Often in medical studies of time to an event, the treatment effect is not constant over time. In the context of Cox regression modeling, the most frequent solution is to apply a model that assumes the treatment effect is either piecewise constant or varies smoothly over time, i.e., the Cox nonproportional hazards model. This approach has at least two major limitations. First, it is generally difficult to assess whether the parametric form chosen for the treatment effect is correct. Second, in the presence of nonproportional hazards, investigators are usually more interested in the cumulative than the instantaneous treatment effect (e.g., determining if and when the survival functions cross). Therefore, we propose an estimator for the aggregate treatment effect in the presence of nonproportional hazards. Our estimator is based on the treatment-specific baseline cumulative hazards estimated under a stratified Cox model. No functional form for the nonproportionality need be assumed. Asymptotic properties of the proposed estimators are derived, and the finite-sample properties are assessed in simulation studies. Pointwise and simultaneous confidence bands of the estimator can be computed. The proposed method is applied to data from a national organ failure registry.
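The building block of such an approach — a treatment-specific cumulative hazard — can be estimated nonparametrically with the Nelson-Aalen estimator. A minimal sketch on hypothetical data; the paper's stratified-Cox estimator, confidence bands, and asymptotics are not reproduced here.

```python
def nelson_aalen(data):
    """Nelson-Aalen cumulative hazard from (time, event) pairs
    (event=1 for an observed event, 0 for censoring).
    Returns the (time, cumulative hazard) jump points."""
    jumps = []
    H = 0.0
    for time, event in sorted(data):
        if event:
            at_risk = sum(1 for u, _ in data if u >= time)
            H += 1.0 / at_risk          # jump of size 1/(number at risk)
            jumps.append((time, H))
    return jumps

# hypothetical treatment arm
arm = [(2, 1), (3, 0), (5, 1), (7, 1), (9, 0)]
res = nelson_aalen(arm)
print(res)
```

Applying this per treatment stratum and taking the difference of the two cumulative hazards over time gives one simple notion of an aggregate, time-cumulative treatment effect that needs no proportionality assumption.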

14.
15.
Transgenic technology is developing rapidly; however, consumers and environmentalists remain wary of its safety for use in agriculture. Research is needed to ensure the safe use of transgenic technology and thus increase consumer confidence. This goal is best accomplished by using a thorough, unbiased examination of risks associated with agricultural biotechnology. In this paper, we review discussion on risk and extend our approach to predict risk. We also distinguish between the risk and hazard of transgenic organisms in natural environments. We define transgene risk as the probability a transgene will spread into natural conspecific populations and define hazard as the probability of species extinction, displacement, or ecosystem disruption given that the transgene has spread. Our methods primarily address risk relative to two types of hazards: extinction which has a high hazard, and invasion which has an unknown level of hazard, similar to that of an introduced exotic species. Our method of risk assessment is unique in that we concentrate on the six major fitness components of an organism's life cycle to determine if transgenic individuals differ in survival or reproductive capacity from wild type. Our approach then combines estimates of the net fitness parameters into a mathematical model to determine the fate of the transgene and the affected wild population. We also review aspects of fish ecology and behavior that contribute to risk and examine combinations of net fitness parameters which can lead to invasion and extinction hazards. We describe three new ways that a transgene could result in an extinction hazard: (1) when the transgene increases male mating success but reduces daily adult viability, (2) when the transgene increases adult viability but reduces male fertility, and (3) when the transgene increases both male mating success and adult viability but reduces male fertility. 
The last scenario is predicted to cause rapid extinction, thus it poses an extreme risk. Although we limit our discussion to aquacultural applications, our methods can easily be adapted to other sexually reproducing organisms with suitable adjustments of terminology.

16.
Environmental threats, such as habitat size reduction or environmental pollution, may not cause immediate extinction of a population but shorten the expected time to extinction. We develop a method to estimate the mean time to extinction for a density-dependent population with environmental fluctuation. We first derive a formula for a stochastic differential equation model (canonical model) of a population with logistic growth with environmental and demographic stochasticities. We then study an approximate maximum likelihood (AML) estimate of three parameters (intrinsic growth rate r, carrying capacity K, and environmental stochasticity σ_e^2) from a time series of population size. The AML estimate of r has a significant bias, but by adopting the Monte Carlo method, we can remove the bias very effectively (bias-corrected estimate). We can also determine the confidence interval of the parameter based on the Monte Carlo method. If the length of the time series is moderately long (with 40-50 data points), parameter estimation with the Monte Carlo sampling bias correction has a relatively small variance. However, if the time series is short (less than or equal to 10 data points), the estimate has a large variance and is not reliable. If we know the intrinsic growth rate r, however, the estimates of K and σ_e^2 and the mean extinction time T are reliable even if only a short time series is available. We illustrate the method using data for a freshwater fish, Japanese crucian carp (Carassius auratus subsp.) in Lake Biwa, in which the growth rate and environmental noise of crucian carp are estimated using fishery records.
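A canonical model of this form can be simulated directly, which also gives a Monte Carlo estimate of the mean extinction time. The Euler-Maruyama sketch below uses hypothetical parameters and an assumed form of the two noise terms (environmental noise scaling with N, demographic noise with sqrt(N)); it is not the paper's AML estimation procedure.

```python
import math
import random

def mean_extinction_time(r, K, sig_e, sig_d, n0, dt=0.05, t_max=200.0,
                         reps=100, seed=1):
    """Monte Carlo mean time to extinction for a logistic model
    dN = rN(1 - N/K) dt + sig_e*N dW_e + sig_d*sqrt(N) dW_d,
    simulated by Euler-Maruyama; extinction when N drops below 1.
    Runs surviving past t_max contribute t_max, so the result is a
    lower bound when censoring occurs."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        n, t = float(n0), 0.0
        while n >= 1.0 and t < t_max:
            dwe = rng.gauss(0.0, math.sqrt(dt))   # environmental increment
            dwd = rng.gauss(0.0, math.sqrt(dt))   # demographic increment
            n += (r * n * (1 - n / K) * dt
                  + sig_e * n * dwe
                  + sig_d * math.sqrt(n) * dwd)
            t += dt
        total += t
    return total / reps

t_bar = mean_extinction_time(r=0.5, K=20, sig_e=0.5, sig_d=1.0, n0=10)
print(round(t_bar, 1))
```

Shrinking K or inflating sig_e in this sketch shortens the estimated mean time to extinction, which is exactly the kind of threat effect the abstract describes.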

17.
J. Feifel, D. Dobler. Biometrics, 2021, 77(1):175-185
Nested case‐control designs are attractive in studies with a time‐to‐event endpoint if the outcome is rare or if interest lies in evaluating expensive covariates. The appeal is that these designs restrict to small subsets of all patients at risk just prior to the observed event times. Only these small subsets need to be evaluated. Typically, the controls are selected at random and methods for time‐simultaneous inference have been proposed in the literature. However, the martingale structure behind nested case‐control designs allows for more powerful and flexible non‐standard sampling designs. We exploit that structure to find simultaneous confidence bands based on wild bootstrap resampling procedures within this general class of designs. We show in a simulation study that the intended coverage probability is obtained for confidence bands for cumulative baseline hazard functions. We apply our methods to observational data about hospital‐acquired infections.

18.
Flandre P. PLoS ONE, 2011, 6(9):e22871

Background

In recent years the “noninferiority” trial has emerged as the new standard design for HIV drug development among antiretroviral patients often with a primary endpoint based on the difference in success rates between the two treatment groups. Different statistical methods have been introduced to provide confidence intervals for that difference. The main objective is to investigate whether the choice of the statistical method changes the conclusion of the trials.

Methods

We presented 11 trials published in 2010 using a difference in proportions as the primary endpoint. In these trials, 5 different statistical methods have been used to estimate such confidence intervals. The five methods are described and applied to data from the 11 trials. The noninferiority of the new treatment is not demonstrated if the confidence interval of the treatment difference includes the prespecified noninferiority margin.

Results

Results indicated that confidence intervals can be quite different according to the method used. In many situations, however, conclusions of the trials are not altered because point estimates of the treatment difference were too far from the prespecified noninferiority margins. Nevertheless, in a few trials the use of different statistical methods led to different conclusions. In particular, the use of “exact” methods can be very confusing.

Conclusion

Statistical methods used to estimate confidence intervals in noninferiority trials have a strong impact on the conclusion of such trials.
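As a concrete instance of one of the simpler interval methods such trials use, here is a Wald interval for the difference in success rates, with the noninferiority check against a margin. The arm sizes, success counts, and the -0.12 margin are hypothetical.

```python
import math

def wald_diff_ci(x1, n1, x2, n2, z=1.96):
    """Wald confidence interval for p1 - p2, the difference in success
    rates between two arms (one of several possible methods)."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = p1 - p2
    return d - z * se, d + z * se

# hypothetical arms: 160/200 successes (new) vs 155/200 (reference)
lo, hi = wald_diff_ci(160, 200, 155, 200)
margin = -0.12
noninferior = lo > margin   # lower limit must clear the margin
print(round(lo, 3), round(hi, 3), noninferior)
```

Because the conclusion hinges on whether the lower limit clears the margin, methods that shift that limit by even a few percentage points (exact, score, Wald with continuity correction) can flip the verdict, which is the abstract's point.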

19.

Background

Monitoring the effectiveness of global antiretroviral therapy scale-up efforts in resource-limited settings is a global health priority, but is complicated by high rates of losses to follow-up after treatment initiation. Determining definitive outcomes of these lost patients, and the effects of losses to follow-up on estimates of survival and risk factors for death after HAART, are key to monitoring the effectiveness of global HAART scale-up efforts.

Methodology/Principal Findings

A cohort study comparing clinical outcomes and risk factors for death after HAART initiation as reported before and after tracing of patients lost to follow-up was conducted in Botswana's National Antiretroviral Therapy Program. 410 HIV-infected adults consecutively presenting for HAART were evaluated. The main outcome measures were death or loss to follow-up within the first year after HAART initiation. Of 68 patients initially categorized as lost, over half (58.8%) were confirmed dead after tracing. Patient tracing resulted in reporting of significantly lower survival rates when death was used as the outcome and losses to follow-up were censored [1-year Kaplan-Meier survival estimate 0.92 (95% confidence interval, 0.88–0.94) before tracing and 0.83 (95% confidence interval, 0.79–0.86) after tracing, log rank P<0.001]. In addition, a significantly increased risk of death after HAART among men [adjusted hazard ratio 1.74 (95% confidence interval, 1.05–2.87)] would have been missed had patients not been traced [adjusted hazard ratio 1.41 (95% confidence interval, 0.65–3.05)].

Conclusions/Significance

Due to high rates of death among patients lost to follow-up after HAART, survival rates may be inaccurate and important risk factors for death may be missed if patients are not actively traced. Patient tracing and uniform reporting of outcomes after HAART are needed to enable accurate monitoring of global HAART scale-up efforts.

20.
Fully Bayesian methods for Cox models specify a model for the baseline hazard function. Parametric approaches generally provide monotone estimations. Semi‐parametric choices allow for more flexible patterns but they can suffer from overfitting and instability. Regularization methods through prior distributions with correlated structures usually give reasonable answers to these types of situations. We discuss Bayesian regularization for Cox survival models defined via flexible baseline hazards specified by a mixture of piecewise constant functions and by a cubic B‐spline function. For those “semi‐parametric” proposals, different prior scenarios ranging from prior independence to particular correlated structures are discussed in a real study with microvirulence data and in an extensive simulation scenario that includes different data sample and time axis partition sizes in order to capture risk variations. The posterior distribution of the parameters was approximated using Markov chain Monte Carlo methods. Model selection was performed in accordance with the deviance information criteria and the log pseudo‐marginal likelihood. The results obtained reveal that, in general, Cox models present great robustness in covariate effects and survival estimates independent of the baseline hazard specification. In relation to the “semi‐parametric” baseline hazard specification, the B‐splines hazard function is less dependent on the regularization process than the piecewise specification because it demands a smaller time axis partition to estimate a similar behavior of the risk.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号