Similar Literature
20 similar records retrieved.
1.
Paired survival times with potential censoring are often observed from two treatment groups in clinical trials and other types of clinical studies. The ratio of marginal hazard rates may be used to quantify the treatment effect in these studies. In this paper, a recently proposed nonparametric kernel method is used to estimate the marginal hazard rate, and the method of variance estimates recovery (MOVER) is used to construct confidence intervals for a time-dependent hazard ratio from the confidence limits of each single marginal hazard rate. Two methods are proposed for those single-rate limits: one uses the delta method and the other a transformation method. Simulations are performed to evaluate the proposed methods, and real data from two clinical trials are analyzed with them.
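As a rough illustration of the MOVER step (a sketch only, not the paper's kernel-based construction), the snippet below combines separate confidence limits for two marginal hazard rates into an interval for their ratio by applying the MOVER recipe to the difference of log rates; all inputs are hypothetical.

```python
import numpy as np

def mover_ratio_ci(h1, l1, u1, h2, l2, u2):
    """MOVER-style CI for a hazard ratio h1/h2, built on the log scale.

    (h1, l1, u1) and (h2, l2, u2) are point estimates and confidence limits
    for the two marginal hazard rates at a given time point; all must be > 0.
    The two single-rate intervals are combined by the MOVER recipe for a
    difference of log rates and then exponentiated; this is a simplified
    illustration, not the exact kernel-based construction of the paper.
    """
    d = np.log(h1) - np.log(h2)                       # log hazard ratio
    lo = d - np.sqrt((np.log(h1) - np.log(l1)) ** 2 + (np.log(u2) - np.log(h2)) ** 2)
    hi = d + np.sqrt((np.log(u1) - np.log(h1)) ** 2 + (np.log(h2) - np.log(l2)) ** 2)
    return np.exp(d), np.exp(lo), np.exp(hi)

# hypothetical kernel estimates of the two marginal hazards at a fixed time point
hr, lo, hi = mover_ratio_ci(0.08, 0.05, 0.12, 0.05, 0.03, 0.08)
print(f"HR = {hr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```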

2.
The present paper reports the results of a Monte Carlo simulation study examining the performance of several approximate confidence intervals for the relative risk ratio (RRR) parameter in an epidemiologic study involving two groups of individuals. The first group consists of n1 individuals, the experimental group, who are exposed to some carcinogen, say radiation, whose effect on the incidence of some form of cancer, say skin cancer, is being investigated. The second group consists of n2 individuals (the control group) who are not exposed to the carcinogen. Two cases are considered, in which the life times (or times to cancer) in the two groups follow (i) exponential and (ii) Weibull distributions; the case of a Rayleigh distribution follows as a particular case. A general random censorship model is considered in which the life times are censored on the right by random censoring times following (i) exponential and (ii) Weibull distributions. The relative risk ratio parameter is defined as the ratio of the hazard rates of the two distributions of times to cancer. Approximate confidence intervals are constructed for the RRR parameter using its maximum likelihood estimator (MLE) and several other methods, including a method due to Fieller; suggestions of Sprott (1973) and Cox (1953), as well as the Box-Cox (1964) transformation, are also utilized. The small-sample performance of these confidence intervals is investigated by means of Monte Carlo simulations based on 500 random samples. The simulation study indicates that many of these intervals perform quite well in samples of size 10 and 15, in terms of coverage probability and expected interval length.
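For the exponential case, the simplest large-sample interval of this kind treats the log relative risk ratio as approximately normal with variance 1/d1 + 1/d2, where d1 and d2 are the observed event counts; the sketch below implements only that baseline interval (not the Fieller, Sprott, Cox, or Box-Cox refinements compared in the paper), with hypothetical counts and total times at risk.

```python
import numpy as np
from scipy import stats

def exp_hazard_ratio_ci(d1, t1, d2, t2, level=0.95):
    """Wald CI for the ratio of exponential hazard rates under right censoring.

    d1, d2: observed event counts; t1, t2: total time at risk in each group.
    With exponential event times, lambda_hat = d / T and Var(log lambda_hat)
    is approximately 1/d, so the log relative risk ratio is treated as normal.
    This is only a baseline large-sample interval.
    """
    lam1, lam2 = d1 / t1, d2 / t2
    log_rrr = np.log(lam1 / lam2)
    se = np.sqrt(1.0 / d1 + 1.0 / d2)
    z = stats.norm.ppf(0.5 + level / 2)
    return np.exp(log_rrr), np.exp(log_rrr - z * se), np.exp(log_rrr + z * se)

# hypothetical counts and person-times for exposed vs control groups
print(exp_hazard_ratio_ci(d1=14, t1=120.0, d2=8, t2=150.0))
```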

3.
The routine assignment of error rates (confidence intervals) to Poisson-distribution estimates of plankton abundance should be rejected. In addition to the interval estimation procedure being pseudoreplicative, it is not robust to common violations of its assumptions. Because the spatial dispersion of organisms in sampling units, from the counting chamber to the field, is rarely random, and because counting protocols are usually terminated once a count threshold has been equalled or exceeded, Poisson-based estimates are usually derived from sampling non-Poisson distributions. Computer simulation was used to investigate the quantitative consequences of such estimates. The expected mean error rate of 95% confidence intervals is inflated from 5% to 15% as contagion increases, i.e. as the parametric variance-to-mean ratio rises from 1 to 2. In addition, terminating the counting protocol at a count threshold both biases the estimate of the parametric mean (or total) and alters expected mean error rates, especially when the total count is low (< 100 organisms) and the mean density in the sampling unit is low.
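A minimal version of this kind of simulation, under assumed settings (mean count 50, variance-to-mean ratio 2, negative-binomial contagion), checks how often a nominal 95% Poisson interval computed from a single count misses the true mean:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mean_count, vm_ratio, nsim = 50, 2.0, 10_000      # hypothetical settings

# negative binomial parameterized to match the requested variance-to-mean ratio
p = 1.0 / vm_ratio
n = mean_count * p / (1.0 - p)
counts = rng.negative_binomial(n, p, size=nsim)

# exact (Garwood) 95% Poisson interval for a single observed count
lo = np.where(counts > 0,
              0.5 * stats.chi2.ppf(0.025, 2 * np.maximum(counts, 1)),
              0.0)
hi = 0.5 * stats.chi2.ppf(0.975, 2 * counts + 2)

miss = np.mean((mean_count < lo) | (mean_count > hi))
print(f"realized error rate of nominal 5% intervals: {miss:.3f}")
```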

4.
Molecular estimates of the age of angiosperms have varied widely, and many greatly predate the Early Cretaceous appearance of angiosperms in the fossil record, but there have been few attempts to assess confidence limits on the ages. Experiments with rbcL and 18S data using maximum likelihood suggest that previous angiosperm age estimates were too old because they assumed equal rates across sites (using a gamma distribution of rates to correct for site-to-site variation gives ages 10-30 my (million years) younger) and relied on herbaceous angiosperm taxa with high rates of molecular evolution. Ages based on first and second codon positions of rbcL are markedly older than those based on third positions, which conflict with the fossil record in being too young, but all examined data partitions of rbcL and 18S depart substantially from a molecular clock. Age estimates are surprisingly insensitive to different views on seed-plant relationships. Randomization schemes were used to quantify confidence intervals due to phylogenetic uncertainty, substitutional noise, and lineage effects (deviations from a molecular clock). Estimates of the age of crown-group angiosperms range from 68 to 281 mya (million years ago), depending on data, tree, and assumptions, with most around 140-190 mya (Early Jurassic to earliest Cretaceous). Approximate 95% confidence intervals on ages are wider for rbcL than for 18S, ranging up to 160 my for phylogenetic uncertainty, 90 my for substitutional noise, and 70 my for lineage effects. These intervals overlap the oldest occurrences of angiosperms in the fossil record, as well as some estimates from previous molecular studies.

5.
Every statistical model is based on explicitly or implicitly formulated assumptions. In this study we address new techniques for calculating variances and confidence intervals, analyse some statistical methods applied to modelling twinning rates, and investigate whether the improvements give more reliable results. For an observed relative frequency, the commonly used variance formula holds exactly under the assumptions that the repetitions are independent and that the probability of success is constant. The probability of a twin maternity depends not only on genetic predisposition but also on several demographic factors, particularly ethnicity, maternal age and parity; the assumption of constancy is therefore questionable. The effect of grouping on the analysis of regression models for twinning rates is also considered. Our results indicate that grouping influences the efficiency of the estimates but not the estimates themselves. Recently, confidence intervals for proportions of low-incidence events have attracted renewed interest, and we present the new alternatives. These confidence intervals are slightly wider and their midpoints do not coincide with the maximum-likelihood estimate of the twinning rate, but their actual coverage is closer to the nominal level than that of the traditional confidence interval. In general, our findings indicate that the traditional methods are mostly satisfactorily robust and give reliable results. However, we propose that the new formulae for the confidence intervals should be used. Our results are applied to twin-maternity data from Finland and Denmark.
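The behaviour described, intervals that are slightly wider, not centred on the maximum-likelihood estimate, and closer to nominal coverage, is characteristic of score-type intervals such as Wilson's; the sketch below compares the traditional Wald interval with the Wilson interval for a hypothetical low twinning rate (the paper's exact alternative formulae may differ).

```python
import numpy as np
from scipy import stats

def wald_ci(x, n, level=0.95):
    """Traditional interval: MLE plus/minus z * sqrt(p(1-p)/n)."""
    z = stats.norm.ppf(0.5 + level / 2)
    p = x / n
    half = z * np.sqrt(p * (1 - p) / n)
    return p - half, p + half

def wilson_ci(x, n, level=0.95):
    """Wilson score interval; its midpoint is pulled away from the MLE."""
    z = stats.norm.ppf(0.5 + level / 2)
    p = x / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

# hypothetical: 142 twin maternities out of 10,000
x, n = 142, 10_000
print("Wald  :", wald_ci(x, n))
print("Wilson:", wilson_ci(x, n))
```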

6.
The epidemiologic concept of the adjusted attributable risk is a useful approach for quantitatively describing the importance of risk factors at the population level. It measures the proportional reduction in disease probability when a risk factor is eliminated from the population, accounting for confounding and effect modification by nuisance variables. Asymptotic variance estimates for estimates of the adjusted attributable risk are often computed by applying the delta method. Investigations of the delta method have shown, however, that it generally tends to underestimate the standard error, leading to biased confidence intervals. We compare confidence intervals for the adjusted attributable risk derived from computer-intensive methods such as the bootstrap or jackknife with confidence intervals based on asymptotic variance estimates, using an extensive Monte Carlo simulation and a real-data example from a cohort study in cardiovascular disease epidemiology. Our results show that confidence intervals based on bootstrap and jackknife methods outperform intervals based on asymptotic theory. The best variants of the computer-intensive confidence intervals are indicated for different situations.
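A stripped-down sketch of the bootstrap step for a crude (unadjusted) attributable risk; the adjusted estimator of the paper would replace the crude risk among the unexposed with a confounder-standardised one, but the resampling logic is the same. All data below are simulated and hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

def attributable_risk(exposed, disease):
    """Crude attributable risk: (P(D) - P(D | unexposed)) / P(D)."""
    p_d = disease.mean()
    p_d_unexp = disease[exposed == 0].mean()
    return (p_d - p_d_unexp) / p_d

# hypothetical cohort data: 30% exposed, higher disease risk when exposed
n = 2_000
exposed = rng.binomial(1, 0.3, n)
disease = rng.binomial(1, np.where(exposed == 1, 0.15, 0.08))

est = attributable_risk(exposed, disease)
boot = np.array([
    attributable_risk(exposed[idx], disease[idx])
    for idx in (rng.integers(0, n, n) for _ in range(2_000))
])
lo, hi = np.percentile(boot, [2.5, 97.5])     # percentile bootstrap CI
print(f"AR = {est:.3f}, 95% bootstrap CI ({lo:.3f}, {hi:.3f})")
```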

7.
Person-time incidence rates are frequently used in medical research. However, standard estimation theory for this measure of event occurrence is based on the assumption of independent and identically distributed (iid) exponential event times, which implies that the hazard function remains constant over time. Under this assumption, and assuming independent censoring, the observed person-time incidence rate is the maximum-likelihood estimator of the constant hazard, and the asymptotic variance of the log rate can be estimated consistently by the inverse of the number of events. In many practical applications, however, the assumption of constant hazard is not very plausible. In the present paper, an average rate parameter is defined as the ratio of the expected event count to the expected total time at risk; this rate parameter equals the hazard function under constant hazard. For inference about the average rate parameter, an asymptotically robust variance estimator of the log rate is proposed. Under some very general conditions, the robust variance estimator is consistent for arbitrary iid event times, and is consistent or asymptotically conservative when event times are independent but nonidentically distributed. In contrast, the standard maximum-likelihood estimator may become anticonservative under nonconstant hazard, producing confidence intervals with less-than-nominal asymptotic coverage. These results are derived analytically and illustrated with simulations. The two estimators are also compared in five datasets from oncology studies.
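One plausible sandwich-type construction of such a robust variance, treating each subject's event count and time at risk as a cluster (not necessarily the estimator proposed in the paper), compared with the model-based variance 1/D on data simulated under a nonconstant hazard:

```python
import numpy as np
from scipy import stats

def rate_ci(events, times, level=0.95, robust=True):
    """CI for the average event rate sum(d_i)/sum(t_i), built on the log scale.

    events, times: per-subject event counts and times at risk.
    robust=True uses a sandwich-type variance treating each subject as a
    cluster (one plausible construction, not necessarily the paper's exact
    estimator); robust=False uses the constant-hazard variance 1/D.
    """
    events, times = np.asarray(events, float), np.asarray(times, float)
    D, T = events.sum(), times.sum()
    rate = D / T
    if robust:
        var_log = np.sum((events - rate * times) ** 2) / D**2
    else:
        var_log = 1.0 / D
    z = stats.norm.ppf(0.5 + level / 2)
    half = z * np.sqrt(var_log)
    return rate, rate * np.exp(-half), rate * np.exp(half)

# hypothetical per-subject data with a hazard that increases over time
rng = np.random.default_rng(3)
t = rng.uniform(0.5, 3.0, 200)
d = rng.poisson(0.2 * t**2)
print("model-based:", rate_ci(d, t, robust=False))
print("robust     :", rate_ci(d, t, robust=True))
```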

8.
Wang L, Du P, Liang H. Biometrics 2012, 68(3): 726-735.
In some survival analyses in medical studies, there are often long-term survivors who can be considered permanently cured. The goals in these studies are to estimate the noncured probability for the whole population and the hazard rate of the susceptible subpopulation. When covariates are present, as often happens in practice, understanding covariate effects on the noncured probability and the hazard rate is of equal importance. Existing methods are limited to parametric and semiparametric models. We propose a two-component mixture cure rate model with nonparametric forms for both the cure probability and the hazard rate function. Identifiability of the model is guaranteed by an additive assumption that allows no time-covariate interactions in the logarithm of the hazard rate. Estimation is carried out by an expectation-maximization algorithm maximizing a penalized likelihood. For inference, we apply the Louis formula to obtain pointwise confidence intervals for the noncured probability and hazard rate. Asymptotic convergence rates of our function estimates are established. We evaluate the proposed method by extensive simulations, and analyze the survival data from a melanoma study, finding interesting patterns for this study.

9.
Chan IS, Zhang Z. Biometrics 1999, 55(4): 1202-1209.
Confidence intervals are often provided to estimate a treatment difference. When the sample size is small, as is typical in early phases of clinical trials, confidence intervals based on large-sample approximations may not be reliable. In this report, we propose test-based methods for constructing exact confidence intervals for the difference of two binomial proportions. These exact confidence intervals are obtained from the unconditional distribution of the two binomial responses, and they guarantee the level of coverage. We compare the performance of these confidence intervals with ones based on the observed difference alone, and show that a large improvement can be achieved by using the standardized Z test with a constrained maximum-likelihood estimate of the variance.

10.
Bennewitz J, Reinsch N, Kalm E. Genetics 2002, 160(4): 1673-1686.
The nonparametric bootstrap approach is known to be suitable for calculating central confidence intervals for the locations of quantitative trait loci (QTL). However, the distribution of the bootstrap QTL position estimates along the chromosome is peaked at the positions of the markers and is not equally tailed, which makes the confidence intervals conservative and wide. In this study three modified methods are proposed for calculating nonparametric bootstrap confidence intervals for QTL locations, which compute noncentral confidence intervals (uncorrected method I), correct for the impact of the markers (weighted method I), or both (weighted method II). Noncentral confidence intervals were computed with an analog of the highest-posterior-density method. The correction for the markers is based on the distribution of QTL estimates along the chromosome when the QTL is not linked with any marker, and it can be obtained with a permutation approach. In a simulation study the three methods were compared with the original bootstrap method. The results showed that it is useful, first, to compute noncentral confidence intervals and, second, to correct the bootstrap distribution of the QTL estimates for the impact of the markers. Weighted method II, combining these two properties, produced the shortest and least biased confidence intervals over a large number of simulated configurations.
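The noncentral, HPD-style part of the construction can be illustrated as the shortest interval containing 95% of the bootstrap position estimates; the marker-impact weighting of the paper is omitted here, and the bootstrap sample below is hypothetical, piled up at a marker assumed to lie at 20 cM.

```python
import numpy as np

def shortest_interval(samples, level=0.95):
    """Shortest interval containing `level` of the bootstrap estimates,
    an HPD-style, possibly noncentral interval (illustrative sketch only)."""
    x = np.sort(np.asarray(samples))
    k = int(np.ceil(level * len(x)))
    widths = x[k - 1:] - x[:len(x) - k + 1]       # width of every candidate interval
    i = int(np.argmin(widths))
    return x[i], x[i + k - 1]

# hypothetical bootstrap estimates of QTL position (cM), peaked at a marker at 20 cM
rng = np.random.default_rng(5)
boot_pos = np.concatenate([rng.normal(22, 4, 700), np.full(300, 20.0)])
print("central 95%  :", tuple(np.percentile(boot_pos, [2.5, 97.5])))
print("shortest 95% :", shortest_interval(boot_pos))
```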

11.
Cosmic radiation is an occupational risk factor for commercial aircrews. In this large European cohort study (ESCAPE), its association with cancer mortality was investigated on the basis of individual effective dose estimates for 19,184 male pilots. Mean annual doses were in the range of 2–5 mSv and cumulative lifetime doses did not exceed 80 mSv. All-cause and all-cancer mortality was low for all exposure categories. A significant negative risk trend for all-cause mortality was seen with increasing dose. Neither external and internal comparisons nor nested case-control analyses showed any substantially increased risks of cancer mortality due to ionizing radiation. However, the numbers of deaths for specific types of cancer were low and the confidence intervals of the risk estimates were rather wide. Difficulties in interpreting mortality risk estimates for time-dependent exposures are discussed.
Abbreviations: CI, confidence interval; CLL, chronic lymphatic leukemia; RRC, radiation-related cancers; NRRC, non-radiation-related cancers; RR, relative risk; SMR, standardized mortality ratio.

12.
This work is motivated by clinical trials in chronic heart failure, where treatment has effects both on morbidity (assessed as recurrent non-fatal hospitalisations) and on mortality (assessed as cardiovascular death, CV death). Recently, a joint frailty proportional hazards model has been proposed for this kind of efficacy outcome to account for a potential association between the risk rates for hospital admissions and CV death. More often, however, clinical trial results are presented as treatment-effect estimates derived from marginal proportional hazards models, that is, a Cox model for mortality and an Andersen–Gill model for recurrent hospitalisations. We show how these marginal hazard ratios and their estimates depend on the association between the risk processes when these are actually linked by shared or dependent frailty terms. First we derive the marginal hazard ratios as a function of time. Then, applying least false parameter theory, we show that the marginal hazard ratio estimate for the hospitalisation rate depends on study duration and on parameters of the underlying joint frailty model. In particular, we identify parameters, for example the treatment effect on mortality, that determine whether the marginal hazard ratio estimate for hospitalisations is smaller than, equal to, or larger than the conditional one. How this affects rejection probabilities is further investigated in simulation studies. Our findings can be used to interpret marginal hazard ratio estimates in heart failure trials and are illustrated with results from the CHARM-Preserved trial (CHARM: the 'Candesartan in Heart failure Assessment of Reduction in Mortality and morbidity' programme).

13.
The coverage probabilities of several confidence-limit estimators of genetic parameters obtained from North Carolina I designs were assessed by means of Monte Carlo simulations, and the reliability of the estimators was compared under three different parental sample sizes. The coverage of confidence intervals based on the normal distribution, using standard errors computed either by the "delta" method or from an approximation for the variance of a variance component estimated as a linear combination of mean squares, was affected by the number of males and females included in the experiment. The "delta" method was found to provide reliable standard errors of the genetic parameters only when at least 48 males were each mated to six different females randomly selected from the reference population. Formulae are provided for obtaining "delta"-method standard errors, and appropriate statistical software procedures are discussed. The error rates of confidence limits based on the normal distribution with standard errors obtained from the variance-component approximation varied widely. The coverage of F-distribution confidence intervals for heritability estimates was not significantly affected by parental sample size and was consistently close to the stated level. For small parental sample sizes, confidence intervals for heritability estimates should be based on the F-distribution.
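As a generic illustration of the delta-method step only (the paper's formulae are specific to the North Carolina I mean squares), the sketch below propagates a hypothetical covariance matrix of two variance-component estimates into a standard error for a heritability-type ratio, using a numerical gradient.

```python
import numpy as np

def delta_se(g, theta, cov, eps=1e-6):
    """Delta-method SE of g(theta): sqrt(grad' Cov grad), numerical gradient."""
    theta = np.asarray(theta, float)
    grad = np.array([
        (g(theta + eps * e) - g(theta - eps * e)) / (2 * eps)
        for e in np.eye(len(theta))
    ])
    return float(np.sqrt(grad @ cov @ grad))

# hypothetical variance-component estimates (additive, residual) and their covariance
theta = np.array([0.8, 2.4])
cov = np.array([[0.040, -0.010],
                [-0.010, 0.090]])

h2 = lambda t: t[0] / (t[0] + t[1])          # heritability-type ratio of components
print(f"h^2 = {h2(theta):.3f}, delta-method SE = {delta_se(h2, theta, cov):.3f}")
```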

14.
The aim of this paper is to develop a procedure for estimating an analytical form of the hazard function for cancer patients. A deterministic approach based on cancer-cell population dynamics yields the analytical expression, but it depends on several parameters that must be estimated. A kernel estimate, on the other hand, is an effective nonparametric method for estimating hazard functions, providing a pointwise estimate of the hazard function. Our procedure consists of two steps: in the first step we find the kernel estimate of the hazard function, and in the second step the parameters of the deterministic model are obtained by the least-squares method. A simulation study with different types of censorship is carried out, and the developed procedure is applied to real data.
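A condensed version of the two-step idea under stated assumptions: the kernel estimate smooths Nelson-Aalen increments with a Gaussian kernel, and a Weibull hazard stands in for the paper's tumour-dynamics model in the least-squares step; the censored data are simulated.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def kernel_hazard(times, events, grid, bandwidth):
    """Gaussian-kernel smoothing of the Nelson-Aalen increments dN_i / Y(t_i)."""
    order = np.argsort(times)
    t, d = np.asarray(times)[order], np.asarray(events)[order]
    at_risk = len(t) - np.arange(len(t))              # Y(t_i) for the sorted sample
    incr = d / at_risk                                # jumps of the Nelson-Aalen estimator
    return np.array([np.sum(norm.pdf((g - t) / bandwidth) / bandwidth * incr)
                     for g in grid])

# step 1: kernel estimate from hypothetical right-censored survival data
rng = np.random.default_rng(11)
x = rng.weibull(1.6, 400) * 10.0
c = rng.uniform(2, 15, 400)
times, events = np.minimum(x, c), (x <= c).astype(float)
grid = np.linspace(1, 10, 50)
h_hat = kernel_hazard(times, events, grid, bandwidth=1.0)

# step 2: least-squares fit of a parametric hazard to the kernel estimate
# (a Weibull form stands in for the paper's deterministic tumour model)
weib_hazard = lambda t, k, lam: (k / lam) * (t / lam) ** (k - 1)
(k_hat, lam_hat), _ = curve_fit(weib_hazard, grid, h_hat, p0=[1.5, 10.0],
                                bounds=([0.1, 1.0], [5.0, 50.0]))
print(f"fitted shape k = {k_hat:.2f}, scale lambda = {lam_hat:.2f}")
```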

15.
The methods of Manly (1973), Manly (1975) and Manly (1977) for estimating survival rates and relative survival rates from recapture data have been compared by computer simulation. In the simulations, batches of two types of animal were "released" at one point in "time" and recapture samples were taken at "daily" intervals from then on. The various methods of estimation were then used to estimate the daily survival rates of type 1 and type 2 animals, and also the survival rate of the type 2 animals relative to the type 1 animals. The simulation experiments were designed to examine (a) the bias in estimates, (b) the relative precision of different methods of estimation, (c) the validity of confidence intervals for true parameter values, and (d) the effect on estimates of the failure of certain assumptions.

16.
Recent concern with the survival of endangered species has renewed interest in estimating the growth rates of natural populations. Estimates of population growth rate are subject to uncertainty arising from both sampling and experimental errors incurred when estimating rates of fecundity and survivorship. In recent years, a variety of methods have been proposed for placing confidence limits on estimated growth rates. The commonly used analytical approximation assumes that errors are relatively small. There are several computer-intensive methods, including methods based on jackknife and bootstrap procedures, that test the robustness of that approximation. In addition, several computer simulations of hypothetical populations have led to some generalizations about the performance of the different methods. In general, it is possible to find confidence intervals for estimates of population growth rates, but the appropriate method for doing so depends on the kind of data available and on the magnitude and correlation structure of the errors.

17.
Slud EV, Byar DP, Green SB. Biometrics 1984, 40(3): 587-600.
The small-sample performance of some recently proposed nonparametric methods for constructing confidence intervals for the median survival time, based on randomly right-censored data, is compared with that of two new methods. Most of these methods are equivalent for large samples. All proposed intervals are either 'test-based' or 'reflected' intervals, in the sense defined in the paper. Coverage probabilities for the interval estimates were obtained by exact calculation for uncensored data, and by simulation for three life distributions and four censoring patterns. In the range of situations studied, 'test-based' methods often have less than nominal coverage, while the coverage of the new 'reflected' confidence intervals is closer to nominal (although somewhat conservative), and these intervals are easy to compute.
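A minimal sketch of a test-based interval for the median survival time: the Kaplan-Meier estimate with Greenwood's variance is computed, and the interval collects every observed time at which S(t) = 0.5 is not rejected at the 5% level (the paper's 'reflected' intervals are constructed differently); the data are simulated.

```python
import numpy as np
from scipy import stats

def km_median_ci(times, events, level=0.95):
    """Kaplan-Meier median with a simple test-based CI on the plain scale."""
    order = np.argsort(times)
    t, d = np.asarray(times)[order], np.asarray(events)[order]
    n = len(t)
    at_risk = n - np.arange(n)
    surv, var_term, s_list, se_list = 1.0, 0.0, [], []
    for i in range(n):
        if d[i] == 1:
            surv *= 1.0 - 1.0 / at_risk[i]
            if at_risk[i] > 1:
                var_term += 1.0 / (at_risk[i] * (at_risk[i] - 1))
        s_list.append(surv)
        se_list.append(surv * np.sqrt(var_term))        # Greenwood standard error
    s, se = np.array(s_list), np.array(se_list)
    z = stats.norm.ppf(0.5 + level / 2)
    median = t[np.argmax(s <= 0.5)] if np.any(s <= 0.5) else np.nan
    ok = np.abs(s - 0.5) <= z * se                       # times where S(t)=0.5 not rejected
    lo, hi = (t[ok].min(), t[ok].max()) if np.any(ok) else (np.nan, np.nan)
    return median, lo, hi

# hypothetical right-censored sample
rng = np.random.default_rng(2)
x = rng.exponential(10, 80)
c = rng.uniform(5, 25, 80)
print(km_median_ci(np.minimum(x, c), (x <= c).astype(int)))
```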

18.
Propensity-score matching is frequently used in the medical literature to reduce or eliminate the effect of treatment-selection bias when estimating the effect of treatments or exposures on outcomes using observational data. In propensity-score matching, pairs of treated and untreated subjects with similar propensity scores are formed. Recent systematic reviews found that the large majority of researchers ignore the matched nature of the propensity-score-matched sample when estimating the statistical significance of the treatment effect. We conducted a series of Monte Carlo simulations to examine the impact of ignoring the matched nature of the sample on type I error rates, coverage of confidence intervals, and variance estimation of the treatment effect. We examined estimation of differences in means, relative risks, odds ratios, rate ratios from Poisson models, and hazard ratios from Cox regression models. We demonstrated that accounting for the matched nature of the propensity-score-matched sample tended to result in type I error rates closer to the advertised level than when matching was not incorporated into the analyses. Similarly, accounting for the matched nature of the sample tended to result in confidence intervals with coverage rates closer to the nominal level than when matching was not taken into account. Finally, accounting for the matched nature of the sample resulted in estimates of standard error that more closely reflected the sampling variability of the treatment effect.
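The variance issue can be seen in a toy example with a continuous outcome: matched pairs share a latent component standing in for the matched propensity score, so the naive independent-samples standard error overstates the sampling variability relative to the paired analysis; all numbers below are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)
n_pairs, effect, rho = 500, 0.3, 0.4             # hypothetical matched-pair setup

# matched pairs share a latent component, inducing within-pair correlation rho
shared = rng.normal(0, np.sqrt(rho), n_pairs)
y_treated = effect + shared + rng.normal(0, np.sqrt(1 - rho), n_pairs)
y_control = shared + rng.normal(0, np.sqrt(1 - rho), n_pairs)

diff = y_treated.mean() - y_control.mean()

# naive analysis: treat the two matched groups as independent samples
se_indep = np.sqrt(y_treated.var(ddof=1) / n_pairs + y_control.var(ddof=1) / n_pairs)

# matched analysis: work with the paired differences
d = y_treated - y_control
se_paired = d.std(ddof=1) / np.sqrt(n_pairs)

z = stats.norm.ppf(0.975)
print(f"difference = {diff:.3f}")
print(f"independent-samples 95% CI: ({diff - z*se_indep:.3f}, {diff + z*se_indep:.3f})")
print(f"paired-analysis     95% CI: ({diff - z*se_paired:.3f}, {diff + z*se_paired:.3f})")
```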

19.
Cheng Y, Shen Y. Biometrics 2004, 60(4): 910-918.
For confirmatory trials used in regulatory decision making, it is important that adaptive designs under consideration provide inference at the correct nominal level, as well as unbiased estimates and confidence intervals for the treatment comparisons in the actual trials. However, the naive point estimate and its confidence interval are often biased in adaptive sequential designs. We develop a new procedure for estimation following a test from a sample-size reestimation design. The method for obtaining an exact confidence interval and point estimate is based on a general distributional property of a pivot function of the self-designing group sequential clinical trial of Shen and Fisher (1999, Biometrics 55, 190-197). A modified estimate is proposed that explicitly accounts for the futility stopping boundary, with reduced bias when block sizes are small. The proposed estimates are shown to be consistent, and their computation is straightforward. We also provide a modified weight function to improve the power of the test. Extensive simulation studies show that the exact confidence intervals have accurate nominal coverage probability, and the proposed point estimates are nearly unbiased for practical sample sizes.

20.
Problems involving thousands of null hypotheses have been addressed by estimating the local false discovery rate (LFDR). A previous LFDR approach to reporting point and interval estimates of an effect-size parameter uses an estimate of the prior distribution of the parameter conditional on the alternative hypothesis. That estimated prior is often unreliable, and yet strongly influences the posterior intervals and point estimates, causing the posterior intervals to differ from fixed-parameter confidence intervals, even for arbitrarily small estimates of the LFDR. That influence of the estimated prior manifests the failure of the conditional posterior intervals, given the truth of the alternative hypothesis, to match the confidence intervals. Those problems are overcome by changing the posterior distribution conditional on the alternative hypothesis from a Bayesian posterior to a confidence posterior. Unlike the Bayesian posterior, the confidence posterior equates the posterior probability that the parameter lies in a fixed interval with the coverage rate of the coinciding confidence interval. The resulting confidence-Bayes hybrid posterior supplies interval and point estimates that shrink toward the null hypothesis value. The confidence intervals tend to be much shorter than their fixed-parameter counterparts, as illustrated with gene expression data. Simulations nonetheless confirm that the shrunken confidence intervals cover the parameter more frequently than stated. Generally applicable sufficient conditions for correct coverage are given. In addition to having those frequentist properties, the hybrid posterior can also be motivated from an objective Bayesian perspective by requiring coherence with some default prior conditional on the alternative hypothesis. That requirement generates a new class of approximate posteriors that supplement Bayes factors modified for improper priors and that dampen the influence of proper priors on the credibility intervals. While that class of posteriors intersects the class of confidence-Bayes posteriors, neither class is a subset of the other. In short, two first principles generate both classes of posteriors: a coherence principle and a relevance principle. The coherence principle requires that all effect size estimates comply with the same probability distribution. The relevance principle means effect size estimates given the truth of an alternative hypothesis cannot depend on whether that truth was known prior to observing the data or whether it was learned from the data.
