Similar Literature (20 results)
1.
Many confidence intervals calculated in practice are potentially not exact, either because the requirements for the interval estimator to be exact are known to be violated, or because the (exact) distribution of the data is unknown. If a confidence interval is approximate, the crucial question is how well its true coverage probability approximates its intended coverage probability. In this paper we propose to use the bootstrap to calculate an empirical estimate for the (true) coverage probability of a confidence interval. In the first instance, the empirical coverage can be used to assess whether a given type of confidence interval is adequate for the data at hand. More generally, when planning the statistical analysis of future trials based on existing data pools, the empirical coverage can be used to study the coverage properties of confidence intervals as a function of type of data, sample size, and analysis scale, and thus inform the statistical analysis plan for the future trial. In this sense, the paper proposes an alternative to the problematic pretest of the data for normality, followed by selection of the analysis method based on the results of the pretest. We apply the methodology to a data pool of bioequivalence studies, and in the selection of covariance patterns for repeated measures data.

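The bootstrap coverage check proposed in the abstract above can be sketched as follows. This is an illustrative Python sketch, not the authors' code; the fixed critical value of roughly 2.0 (instead of the exact Student-t quantile) and the resample count are simplifying assumptions.

```python
import math
import random
import statistics


def t_interval(sample, t_crit=2.0):
    """Normal-theory interval mean +/- t * s / sqrt(n); the critical value
    is fixed near 2.0 for illustration instead of the exact t quantile."""
    n = len(sample)
    m = statistics.fmean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    return m - t_crit * se, m + t_crit * se


def empirical_coverage(data, n_boot=2000, seed=1):
    """Bootstrap estimate of the true coverage probability of the interval:
    the observed sample plays the role of the population, its mean is the
    'true' parameter, and each resample contributes one hit-or-miss trial."""
    rng = random.Random(seed)
    target = statistics.fmean(data)  # treated as the true mean
    hits = 0
    for _ in range(n_boot):
        lo, hi = t_interval(rng.choices(data, k=len(data)))
        hits += lo <= target <= hi
    return hits / n_boot


rng = random.Random(0)
data = [math.exp(rng.gauss(0.0, 1.0)) for _ in range(30)]  # skewed sample
cov = empirical_coverage(data)
print(round(cov, 3))  # for skewed data this may fall short of nominal
```

The same loop, run over candidate interval types and sample sizes from an existing data pool, is what would feed the statistical analysis plan the abstract describes.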
2.
3.
J Benichou  M H Gail 《Biometrics》1990,46(4):991-1003
The attributable risk (AR), defined as AR = [Pr(disease) - Pr(disease | no exposure)]/Pr(disease), measures the proportion of disease risk that is attributable to an exposure. Recently Bruzzi et al. (1985, American Journal of Epidemiology 122, 904-914) presented point estimates of AR based on logistic models for case-control data to allow for confounding factors and secondary exposures. To produce confidence intervals, we derived variance estimates for AR under the logistic model and for various designs for sampling controls. Calculations for discrete exposure and confounding factors require covariances between estimates of the risk parameters of the logistic model and the proportions of cases with given levels of exposure and confounding factors. These covariances are estimated from Taylor series expansions applied to implicit functions. Similar calculations for continuous exposures are derived using influence functions. Simulations indicate that these asymptotic procedures yield reliable variance estimates and confidence intervals with near-nominal coverage. An example illustrates the usefulness of variance calculations in selecting a logistic model that is neither so simplified as to exhibit systematic lack of fit nor so complicated as to inflate the variance of the estimate of AR.

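The point estimate underlying the abstract above can be illustrated for a single binary exposure with no confounders, using the case-control formula AR = 1 - sum_j rho_j / RR_j with the odds ratio standing in for the relative risk. The cell counts below are made up, and the paper's actual contribution (the variance derivation) is not reproduced.

```python
def bruzzi_ar(cases_exposed, cases_unexposed, controls_exposed, controls_unexposed):
    """Attributable risk from a 2x2 case-control table via
    AR = 1 - sum_j rho_j / RR_j, where rho_j is the proportion of cases at
    exposure level j and the odds ratio approximates the relative risk."""
    odds_ratio = (cases_exposed * controls_unexposed) / (cases_unexposed * controls_exposed)
    n_cases = cases_exposed + cases_unexposed
    rho_exposed = cases_exposed / n_cases
    rho_unexposed = cases_unexposed / n_cases
    # RR is 1 by definition for the unexposed reference level.
    return 1.0 - (rho_unexposed / 1.0 + rho_exposed / odds_ratio)


ar = bruzzi_ar(30, 70, 10, 90)  # hypothetical cell counts
print(round(ar, 4))
```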
4.
Confidence intervals for the mean of one sample, and for the difference in means of two independent samples, based on the ordinary t-statistic suffer deficiencies when samples come from skewed families. In this article we evaluate several existing techniques and propose new methods to improve coverage accuracy. The methods examined include the ordinary-t, the bootstrap-t, the bias-corrected and accelerated (BCa) bootstrap, and three new intervals based on transformations of the t-statistic. Our study shows that the new transformation intervals and the bootstrap-t intervals give the best coverage accuracy for a variety of skewed distributions, and that the new transformation intervals are also shorter.

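A minimal sketch of the bootstrap-t interval evaluated in the abstract above (illustrative Python, one-sample mean only; the resample count and seed are arbitrary choices):

```python
import math
import random
import statistics


def bootstrap_t_interval(data, alpha=0.05, n_boot=2000, seed=1):
    """Bootstrap-t interval for a one-sample mean: resample the studentized
    statistic t* = (mean* - mean) / se* and use its empirical quantiles in
    place of the Student-t quantiles, letting the interval adapt to skewness."""
    rng = random.Random(seed)
    n = len(data)
    m = statistics.fmean(data)
    se = statistics.stdev(data) / math.sqrt(n)
    t_stars = []
    for _ in range(n_boot):
        rs = rng.choices(data, k=n)
        se_rs = statistics.stdev(rs) / math.sqrt(n)
        if se_rs > 0:  # skip degenerate resamples
            t_stars.append((statistics.fmean(rs) - m) / se_rs)
    t_stars.sort()
    k = len(t_stars)
    lo_q = t_stars[int(alpha / 2 * k)]
    hi_q = t_stars[int((1 - alpha / 2) * k) - 1]
    # Note the quantiles swap sides: the upper quantile sets the lower limit.
    return m - hi_q * se, m - lo_q * se


rng = random.Random(0)
skewed = [math.exp(rng.gauss(0.0, 1.0)) for _ in range(40)]  # lognormal data
lo, hi = bootstrap_t_interval(skewed)
print(round(lo, 3), round(hi, 3))
```

Because the bootstrap distribution of t* is typically asymmetric for skewed data, the resulting interval is asymmetric about the mean, which is the source of its improved coverage.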
5.
In health policy and economics studies, the incremental cost-effectiveness ratio (ICER) has long been used to compare the economic consequences relative to the health benefits of therapies. Due to the skewed distributions of the costs and ICERs, much research has been done on how to obtain confidence intervals of ICERs, using either parametric or nonparametric methods, with or without the presence of censoring. In this paper, we will examine and compare the finite sample performance of many approaches via simulation studies. For the special situation when the health effect of the treatment is not statistically significant, we will propose a new bootstrapping approach to improve upon the bootstrap percentile method that is currently available. The most efficient way of constructing confidence intervals will be identified and extended to the censored data case. Finally, a data example from a cardiovascular clinical trial is used to demonstrate the application of these methods.

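The basic bootstrap percentile method that the abstract above sets out to improve can be sketched as follows (illustrative Python; the synthetic data are made up, costs and effects are resampled independently within each arm for brevity, where resampling patients jointly would be more standard). Note the known weakness: when the effect difference is near zero, the very case the paper targets, the resampled ratios blow up; near-zero denominators are simply skipped here.

```python
import random
import statistics


def icer_percentile_ci(cost_trt, eff_trt, cost_ctl, eff_ctl,
                       alpha=0.05, n_boot=2000, seed=1):
    """Percentile-bootstrap interval for the ICER, i.e. (mean cost
    difference) / (mean effect difference): resample each arm and take
    empirical quantiles of the resampled ratios."""
    rng = random.Random(seed)
    ratios = []
    for _ in range(n_boot):
        d_cost = (statistics.fmean(rng.choices(cost_trt, k=len(cost_trt)))
                  - statistics.fmean(rng.choices(cost_ctl, k=len(cost_ctl))))
        d_eff = (statistics.fmean(rng.choices(eff_trt, k=len(eff_trt)))
                 - statistics.fmean(rng.choices(eff_ctl, k=len(eff_ctl))))
        if abs(d_eff) > 1e-12:  # skip degenerate denominators
            ratios.append(d_cost / d_eff)
    ratios.sort()
    k = len(ratios)
    return ratios[int(alpha / 2 * k)], ratios[int((1 - alpha / 2) * k) - 1]


rng = random.Random(0)
cost_trt = [1000 + 300 * rng.random() for _ in range(50)]  # synthetic data
cost_ctl = [800 + 300 * rng.random() for _ in range(50)]
eff_trt = [rng.gauss(2.0, 0.5) for _ in range(50)]
eff_ctl = [rng.gauss(1.0, 0.5) for _ in range(50)]
lo, hi = icer_percentile_ci(cost_trt, eff_trt, cost_ctl, eff_ctl)
print(round(lo, 1), round(hi, 1))
```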
6.
A method, based on the bootstrap procedure, is proposed for the estimation of branch-length errors and confidence intervals in a phylogenetic tree for which equal rates of substitution among lineages do not necessarily hold. The method can be used to test whether an estimated internodal distance is significantly greater than zero. In the application of the method, any estimator of genetic distances, as well as any tree reconstruction procedure (based on distance matrices), can be used. The method is also not limited by the number of species in the phylogenetic tree. An example is shown of applying the method to reconstruct the phylogenetic tree for four hominoid species: human, chimpanzee, gorilla, and orangutan.

7.
Efron B 《Biometrika》1981,68(3):589-599

8.
Greenland S 《Biometrics》2001,57(1):182-188
Standard presentations of epidemiological results focus on incidence-ratio estimates derived from regression models fit to specialized study data. These data are often highly nonrepresentative of populations for which public-health impacts must be evaluated. Basic methods are provided for interval estimation of attributable fractions from model-based incidence-ratio estimates combined with independent survey estimates of the exposure distribution in the target population of interest. These methods are illustrated in estimation of the potential impact of magnetic-field exposures on childhood leukemia in the United States, based on pooled data from 11 case-control studies and a U.S. sample survey of magnetic-field exposures.

9.
Chen SX 《Biometrika》1996,83(2):329-341

10.
Barber S  Jennison C 《Biometrics》1999,55(2):430-436
We describe existing tests and introduce two new tests concerning the value of a survival function. These tests may be used to construct a confidence interval for the survival probability at a given time or for a quantile of the survival distribution. Simulation studies show that error rates can differ substantially from their nominal values, particularly at survival probabilities close to zero or one. We recommend our new constrained bootstrap test for its good overall performance.

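For context, a plain percentile-bootstrap interval for a survival probability S(t0), the kind of baseline the abstract above improves on with its constrained bootstrap test, might look like this (illustrative Python; the Kaplan-Meier implementation and the data are assumptions, not the authors'):

```python
import random


def km_survival(times, events, t0):
    """Kaplan-Meier estimate of S(t0); events[i] is 1 for a death, 0 for
    censoring. Tied times are processed in input order (stable sort)."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    for i in order:
        if times[i] > t0:
            break
        if events[i] == 1:
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1
    return surv


def km_bootstrap_ci(times, events, t0, alpha=0.05, n_boot=1000, seed=1):
    """Percentile-bootstrap interval for S(t0): resample (time, event)
    pairs with replacement and take quantiles of the KM estimates."""
    rng = random.Random(seed)
    n = len(times)
    stats = []
    for _ in range(n_boot):
        pick = [rng.randrange(n) for _ in range(n)]
        stats.append(km_survival([times[i] for i in pick],
                                 [events[i] for i in pick], t0))
    stats.sort()
    return stats[int(alpha / 2 * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]


times = [2, 3, 3, 5, 6, 7, 9, 11, 12, 14, 15, 18, 20, 22, 25, 30]  # made up
events = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0]
s10 = km_survival(times, events, 10)  # KM estimate of S(10)
lo, hi = km_bootstrap_ci(times, events, 10)
```

As the abstract warns, this naive resampling degrades near survival probabilities of zero or one, which motivates the constrained test.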
11.
12.
Agresti A 《Biometrics》1999,55(2):597-602
Unless the true association is very strong, simple large-sample confidence intervals for the odds ratio based on the delta method perform well even for small samples. Such intervals include the Woolf logit interval and the related Gart interval based on adding .5 before computing the log odds ratio estimate and its standard error. The Gart interval smooths the observed counts toward the model of equiprobability, but one obtains better coverage probabilities by smoothing toward the independence model and by extending the interval in the appropriate direction when a cell count is zero.

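The Woolf logit and Gart intervals discussed above are simple to state; a sketch (illustrative Python with made-up cell counts; add=0.5 gives the Gart variant):

```python
import math


def woolf_interval(a, b, c, d, z=1.959964, add=0.0):
    """Delta-method (Woolf logit) interval for the odds ratio of the 2x2
    table [[a, b], [c, d]]. With add=0.5, every cell is smoothed before
    computing log(OR) and its standard error, giving the Gart interval."""
    a, b, c, d = (x + add for x in (a, b, c, d))
    log_or = math.log(a * d / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return math.exp(log_or - z * se), math.exp(log_or + z * se)


lo, hi = woolf_interval(10, 20, 5, 25, add=0.5)  # hypothetical counts
print(round(lo, 3), round(hi, 3))
```

Smoothing toward independence rather than equiprobability, the abstract's recommendation, would replace the flat +0.5 with cell-specific increments proportional to the fitted independence counts.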
13.
Zhou XH  Tu W 《Biometrics》2000,56(4):1118-1125
In this paper, we consider the problem of interval estimation for the mean of diagnostic test charges. Diagnostic test charge data may contain zero values, and the nonzero values can often be modeled by a log-normal distribution. Under such a model, we propose three different interval estimation procedures: a percentile-t bootstrap interval based on sufficient statistics and two likelihood-based confidence intervals. For theoretical properties, we show that the two likelihood-based one-sided confidence intervals are only first-order accurate and that the bootstrap-based one-sided confidence interval is second-order accurate. For two-sided confidence intervals, all three proposed methods are second-order accurate. A finite-sample simulation study suggests that all three proposed intervals outperform a widely used interval based on the minimum variance unbiased estimator (MVUE), except for one-sided lower end-point intervals when the skewness is very small. Among the proposed one-sided intervals, the bootstrap interval has the best coverage accuracy. For the two-sided intervals, when the sample size is small, the bootstrap method still yields the best coverage accuracy unless the skewness is very small, in which case the bias-corrected ML method has the best accuracy. When the sample size is large, all three proposed intervals have similar coverage accuracy. Finally, we apply the proposed methods to a real example assessing diagnostic test charges among older adults with depression.

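The model underlying the abstract above admits a simple ML point estimate of the mean; a sketch (illustrative Python with made-up charges; the paper's interval constructions, its actual contribution, are not reproduced):

```python
import math
import statistics


def zero_lognormal_mean(charges):
    """ML point estimate of the mean under the delta-lognormal model:
    P(X = 0) = p0 and X | X > 0 ~ lognormal(mu, sigma^2), so
    E[X] = (1 - p0) * exp(mu + sigma^2 / 2)."""
    nonzero = [x for x in charges if x > 0]
    p0 = 1.0 - len(nonzero) / len(charges)
    logs = [math.log(x) for x in nonzero]
    mu = statistics.fmean(logs)
    sigma2 = statistics.pvariance(logs)  # ML variance uses 1/n
    return (1.0 - p0) * math.exp(mu + sigma2 / 2.0)


charges = [0.0, 0.0, 100.0, 150.0, 200.0]  # hypothetical test charges
est = zero_lognormal_mean(charges)
print(round(est, 2))
```

The skewness-driven behavior the abstract describes comes from the exp(mu + sigma^2/2) term: small errors in sigma^2 propagate multiplicatively into the mean, which is why naive symmetric intervals struggle here.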
14.
Paired survival times with potential censoring are often observed from two treatment groups in clinical trials and other types of clinical studies. The ratio of marginal hazard rates may be used to quantify the treatment effect in these studies. In this paper, a recently proposed nonparametric kernel method is used to estimate the marginal hazard rate, and the method of variance estimates recovery (MOVER) is used to construct confidence intervals for a time-dependent hazard ratio from the confidence limits of the single marginal hazard rates. Two methods are proposed: one uses the delta method and the other adopts the transformation method to construct confidence limits for the marginal hazard rate. Simulations are performed to evaluate the performance of the proposed methods. Real data from two clinical trials are analyzed using the proposed methods.

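The MOVER step in the abstract above combines the point estimates and individual confidence limits of numerator and denominator into limits for their ratio. A sketch of the general recipe (Zou and Donner's MOVER-R form, written from memory and assuming positive estimates and limits; how the marginal hazard-rate limits themselves are obtained is the paper's kernel method and is not shown):

```python
import math


def mover_ratio_ci(theta1, l1, u1, theta2, l2, u2):
    """MOVER confidence limits for the ratio theta1/theta2, built from the
    point estimates and individual confidence limits (l, u) of numerator
    and denominator; assumes positive estimates and limits."""
    lo = ((theta1 * theta2
           - math.sqrt((theta1 * theta2) ** 2
                       - l1 * u2 * (2 * theta1 - l1) * (2 * theta2 - u2)))
          / (u2 * (2 * theta2 - u2)))
    hi = ((theta1 * theta2
           + math.sqrt((theta1 * theta2) ** 2
                       - u1 * l2 * (2 * theta1 - u1) * (2 * theta2 - l2)))
          / (l2 * (2 * theta2 - l2)))
    return lo, hi


theta1, ci1 = 2.0, (1.0, 3.0)  # hypothetical marginal hazard rate and CI
theta2, ci2 = 1.0, (0.5, 1.5)
lo, hi = mover_ratio_ci(theta1, *ci1, theta2, *ci2)
print(round(lo, 3), round(hi, 3))
```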
15.
16.
17.
18.
We obtain the asymptotic sample variance of the intraclass kappa statistic for multinomial outcome data. A modified Wald-type procedure based on this theory is then used for confidence interval construction. The results of a simulation study show that the proposed non-iterative approach performs very well in terms of confidence interval coverage and width for samples as small as 50. The procedure is illustrated with two examples from previously published medical studies.

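A sketch of the intraclass kappa point estimate, with a percentile bootstrap as a simple stand-in for the paper's Wald-type interval based on the asymptotic variance (illustrative Python; the pooled-marginal definition and the made-up ratings are assumptions):

```python
import random


def intraclass_kappa(pairs, categories):
    """Intraclass kappa for paired multinomial outcomes: chance-corrected
    agreement, with a common marginal distribution pooled over both members
    of each pair."""
    n = len(pairs)
    p_obs = sum(x == y for x, y in pairs) / n
    p_cat = {c: sum((x == c) + (y == c) for x, y in pairs) / (2 * n)
             for c in categories}
    p_exp = sum(p * p for p in p_cat.values())
    return (p_obs - p_exp) / (1 - p_exp)


def kappa_bootstrap_ci(pairs, categories, alpha=0.05, n_boot=2000, seed=1):
    """Percentile bootstrap over resampled pairs; a stand-in for the
    paper's non-iterative Wald interval, not the paper's method."""
    rng = random.Random(seed)
    stats = sorted(intraclass_kappa(rng.choices(pairs, k=len(pairs)), categories)
                   for _ in range(n_boot))
    return stats[int(alpha / 2 * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]


pairs = ([("a", "a")] * 20 + [("b", "b")] * 15 + [("c", "c")] * 5
         + [("a", "b")] * 5 + [("b", "a")] * 5)  # made-up paired ratings
k = intraclass_kappa(pairs, ["a", "b", "c"])
lo, hi = kappa_bootstrap_ci(pairs, ["a", "b", "c"])
print(round(k, 3))
```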
19.
20.
Estimating the richness of a community accurately despite differences in sampling effort is key to monitoring highly diverse ecosystems. We compiled a worldwide multitaxa database comprising 185 communities in order to study the relationship between the percentage of species represented by a single individual (singletons) and the intensity of sampling (number of individuals divided by the number of species sampled). The database was used to empirically fit a correction factor that improves the performance of non-parametric estimators under conditions of low sampling effort. The correction factor was tested on seven estimators (Chao1, Chao2, Jack1, Jack2, ACE, ICE, and Bootstrap). It reduced the bias of all estimators tested under undersampling, while converging to the original uncorrected values at higher intensities. Our findings lead us to recommend a threshold of 20 individuals/species, or less than 21% singletons, as the minimum sampling effort needed to produce reliable richness estimates of highly diverse ecosystems using corrected non-parametric estimators. This threshold rises to 50 individuals/species if uncorrected estimators are used, which implies a saving of 60% of the sampling effort when the correction factor is applied.

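The quantities the abstract above works with (the Chao1 estimator, sampling intensity, singleton percentage) can be sketched as follows; the paper's empirically fitted correction factor itself is not reproduced, only the uncorrected estimator and the two diagnostics (illustrative Python, made-up abundances):

```python
def chao1(abundances):
    """Chao1 lower-bound richness estimator from per-species abundances:
    S_obs + f1^2 / (2 * f2), falling back to the bias-corrected
    f1 * (f1 - 1) / (2 * (f2 + 1)) term when no doubletons are observed."""
    s_obs = len(abundances)
    f1 = sum(1 for a in abundances if a == 1)  # singletons
    f2 = sum(1 for a in abundances if a == 2)  # doubletons
    if f2 > 0:
        return s_obs + f1 * f1 / (2 * f2)
    return s_obs + f1 * (f1 - 1) / 2


def sampling_intensity(abundances):
    """Individuals per species: the paper's measure of sampling effort."""
    return sum(abundances) / len(abundances)


def singleton_pct(abundances):
    """Percentage of observed species represented by a single individual."""
    return 100.0 * sum(1 for a in abundances if a == 1) / len(abundances)


abund = [5, 4, 3, 2, 2, 1, 1, 1]  # made-up per-species abundances
print(chao1(abund), sampling_intensity(abund), singleton_pct(abund))
```

Under the paper's recommendation, this community (2.375 individuals/species, 37.5% singletons) would be flagged as undersampled, so any richness estimate from it should be treated with caution.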

Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.), 京ICP备09084417号