Similar Articles
20 similar articles found.
1.
Dinh P, Zhou XH. Biometrics 2006, 62(2): 576-588
Two measures often used in cost-effectiveness analysis are the incremental cost-effectiveness ratio (ICER) and the net health benefit (NHB). Inference on these two quantities is often hindered by highly skewed cost data. In this article, we derive Edgeworth expansions for the studentized t-statistics of the two measures and show how they can be used to guide inference. In particular, we use the expansions to study the theoretical performance of existing confidence intervals based on normal theory and to derive new confidence intervals for the ICER and the NHB. We conduct a simulation study to compare our new intervals with several existing methods. The methods evaluated include Taylor's interval, Fieller's interval, the bootstrap percentile interval, and the bootstrap bias-corrected and accelerated (BCa) interval. We found that our new intervals give good coverage accuracy and are narrower than the currently recommended intervals.
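For reference, these are the standard definitions of the two measures, which the abstract does not restate (ΔC and ΔE denote the incremental cost and incremental effectiveness of the new treatment, and λ the willingness-to-pay threshold):

```latex
\mathrm{ICER} = \frac{\Delta C}{\Delta E}, \qquad
\mathrm{NHB}(\lambda) = \Delta E - \frac{\Delta C}{\lambda}
```

When ΔE > 0, the new treatment is cost-effective at threshold λ exactly when ICER < λ, or equivalently when NHB(λ) > 0; because the NHB avoids a ratio whose denominator may be near zero, the two measures can behave quite differently under skewed cost data, which is why their interval estimators are studied separately.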

2.
Wang J, Basu S. Biometrics 1999, 55(1): 111-116
Interval estimates of the concentration of target entities from a serial dilution assay are usually based on the maximum likelihood estimator. This estimator is positively biased, and its distribution is skewed to the right. The bias results in interval estimates that either provide inadequate coverage relative to the nominal level or yield excessively long intervals. Confidence intervals based on both a log transformation and bias reduction are proposed and are shown through simulations to provide appropriate coverage with shorter widths than the commonly used intervals across a variety of designs. An application to feline AIDS research, which motivated this work, is also presented.

3.
E V Slud, D P Byar, S B Green. Biometrics 1984, 40(3): 587-600
The small-sample performance of some recently proposed nonparametric methods for constructing confidence intervals for the median survival time, based on randomly right-censored data, is compared with that of two new methods. Most of these methods are equivalent for large samples. All proposed intervals are either 'test-based' or 'reflected' intervals, in the sense defined in the paper. Coverage probabilities for the interval estimates were obtained by exact calculation for uncensored data, and by simulation for three life distributions and four censoring patterns. In the range of situations studied, 'test-based' methods often have less than nominal coverage, while the coverage of the new 'reflected' confidence intervals is closer to nominal (although somewhat conservative), and these intervals are easy to compute.

4.
Bootstrap confidence intervals for adaptive cluster sampling
Consider a collection of spatially clustered objects where the clusters are geographically rare. Of interest is estimation of the total number of objects on the site from a sample of plots of equal size. Under these spatial conditions, adaptive cluster sampling of plots is generally useful in improving the efficiency of estimation over simple random sampling without replacement (SRSWOR). In adaptive cluster sampling, when a sampled plot meets some predefined condition, neighboring plots are added to the sample. When populations are rare and clustered, the usual unbiased estimators based on small samples are often highly skewed and discrete in distribution, so confidence intervals based on asymptotic normal theory may not be appropriate. We investigated several nonparametric bootstrap methods for constructing confidence intervals under adaptive cluster sampling. To perform bootstrapping, we transformed the initial sample so as to include the information from the adaptive portion of the sample while maintaining a fixed sample size. In general, coverages of the bootstrap percentile methods were closer to nominal coverage than those of the normal approximation.
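As a rough illustration of the bootstrap percentile idea described above: one common transformation replaces each initially sampled plot by the mean of its network (a modified Hansen-Hurwitz-type value), giving a fixed-size set of transformed values that can be resampled. The sketch below is a minimal version of that scheme, not the authors' exact procedure; the function and variable names, and the choice of network means as the transformed values, are assumptions made for illustration.

```python
import numpy as np

def percentile_ci_total(network_means, n_plots_total, n_boot=2000, alpha=0.05, seed=0):
    """Bootstrap percentile CI for the population total under adaptive cluster
    sampling, applied to the transformed initial sample (one network mean per
    initially selected plot).  Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    w = np.asarray(network_means, dtype=float)
    n = len(w)
    # Point estimate of the total: N times the mean of the transformed values.
    t_hat = n_plots_total * w.mean()
    boot = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(w, size=n, replace=True)  # resample transformed plots
        boot[b] = n_plots_total * resample.mean()
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return t_hat, (lo, hi)
```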

5.
Evaluation of the overall accuracy of a biomarker can be based on average measures of the sensitivity over all possible specificities (and vice versa), or equivalently on the area under the receiver operating characteristic (ROC) curve that is typically used in such settings. In practice, after establishing the utility of a continuous biomarker, clinicians need a cutoff point to determine whether intervention is required. The Youden index can serve both purposes: it is an overall index of a biomarker's accuracy, and the cutoff point that maximizes it can in turn be used for decision making. In this paper, we provide new methods for constructing confidence intervals for both the Youden index and its corresponding cutoff point. We explore approaches based on the delta approximation under the normality assumption, as well as power transformations to normality and nonparametric kernel- and spline-based approaches. We compare our methods to existing techniques through simulations in terms of coverage and width. We then apply the proposed methods to serum-based markers from a prospective observational study on the diagnosis of late-onset sepsis in neonates.
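For concreteness, the Youden index and its cutoff can be estimated empirically as sketched below. This is the plain empirical estimator only, not the delta-method, transformation, kernel, or spline procedures the paper develops; `controls` and `cases` are hypothetical arrays of marker values, and larger values are assumed to indicate disease.

```python
import numpy as np

def empirical_youden(controls, cases):
    """Empirical Youden index J = max_c {sens(c) + spec(c) - 1} and the
    maximizing cutoff, assuming larger marker values indicate disease."""
    x0 = np.asarray(controls, dtype=float)   # non-diseased subjects
    x1 = np.asarray(cases, dtype=float)      # diseased subjects
    cutoffs = np.unique(np.concatenate([x0, x1]))
    sens = np.array([(x1 >= c).mean() for c in cutoffs])
    spec = np.array([(x0 < c).mean() for c in cutoffs])
    j = sens + spec - 1.0
    best = int(np.argmax(j))
    return j[best], cutoffs[best]
```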

6.
Zhou XH, Tu W. Biometrics 2000, 56(4): 1118-1125
In this paper, we consider the problem of interval estimation for the mean of diagnostic test charges. Diagnostic test charge data may contain zero values, and the nonzero values can often be modeled by a log-normal distribution. Under such a model, we propose three different interval estimation procedures: a percentile-t bootstrap interval based on sufficient statistics and two likelihood-based confidence intervals. Theoretically, we show that the two likelihood-based one-sided confidence intervals are only first-order accurate, whereas the bootstrap-based one-sided confidence interval is second-order accurate. For two-sided confidence intervals, all three proposed methods are second-order accurate. A simulation study at finite sample sizes suggests that all three proposed intervals outperform a widely used interval based on the minimum variance unbiased estimator (MVUE), except for one-sided lower end-point intervals when the skewness is very small. Among the proposed one-sided intervals, the bootstrap interval has the best coverage accuracy. For the two-sided intervals, when the sample size is small, the bootstrap method still yields the best coverage accuracy unless the skewness is very small, in which case the bias-corrected ML method has the best accuracy. When the sample size is large, all three proposed intervals have similar coverage accuracy. Finally, we apply the proposed methods to a real example assessing diagnostic test charges among older adults with depression.
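The target parameter here can be written explicitly: if a charge is nonzero with probability p and, when nonzero, log-normal with parameters μ and σ², the mean is E[Y] = p·exp(μ + σ²/2). Below is a minimal sketch of the ML point estimate of that mean, around which interval procedures of this kind are built; the function and variable names are illustrative, and the paper's percentile-t bootstrap and likelihood intervals are not reproduced.

```python
import numpy as np

def zi_lognormal_mean_mle(charges):
    """ML estimate of E[Y] = p * exp(mu + sigma^2 / 2) for data that are a
    mixture of zeros and log-normally distributed positive values."""
    y = np.asarray(charges, dtype=float)
    positive = y[y > 0]
    p_hat = len(positive) / len(y)      # estimated probability of a nonzero charge
    logs = np.log(positive)
    mu_hat = logs.mean()
    sigma2_hat = logs.var()             # ML variance of log(charges), divides by n1
    return p_hat * np.exp(mu_hat + sigma2_hat / 2.0)
```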

7.
In applied work, distributions are often highly skewed with heavy tails, and this can have disastrous consequences for power when comparing groups based on means. One solution to this problem in the one-sample case is to use the Tukey and McLaughlin (1963) method for trimmed means, while in the two-group case Yuen's (1974) method can be used. Published simulations indicate that they yield accurate confidence intervals when distributions are symmetric. Using a Cornish-Fisher expansion, this paper extends these results by describing general circumstances under which methods based on trimmed means can be expected to give more accurate confidence intervals than those based on means. The results cover both symmetric and asymmetric distributions. Simulations are also used to illustrate the accuracy of confidence intervals based on trimmed means versus means.
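A minimal sketch of the one-sample Tukey-McLaughlin interval referred to above, in its standard textbook form: the standard error of the γ-trimmed mean is taken as the winsorized standard deviation divided by (1 − 2γ)√n, with Student-t degrees of freedom equal to the number of retained observations minus one. Variable names are illustrative.

```python
import numpy as np
from scipy import stats

def trimmed_mean_ci(x, gamma=0.2, conf=0.95):
    """Tukey-McLaughlin confidence interval for the population trimmed mean:
    trimmed mean +/- t_{h-1} * s_w / ((1 - 2*gamma) * sqrt(n)), where s_w is
    the winsorized standard deviation and h = n - 2*floor(gamma*n)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    g = int(np.floor(gamma * n))
    h = n - 2 * g                                    # observations left after trimming
    tmean = stats.trim_mean(x, gamma)                # gamma trimmed from each tail
    xw = np.asarray(stats.mstats.winsorize(x, limits=(gamma, gamma)))
    s_w = xw.std(ddof=1)                             # winsorized standard deviation
    se = s_w / ((1 - 2 * gamma) * np.sqrt(n))
    tcrit = stats.t.ppf(0.5 + conf / 2, df=h - 1)
    return tmean - tcrit * se, tmean + tcrit * se
```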

8.
We consider profile-likelihood inference based on the multinomial distribution for assessing the accuracy of a diagnostic test. The methods apply to ordinal rating data when accuracy is assessed using the area under the receiver operating characteristic (ROC) curve. Simulation results suggest that the derived confidence intervals have acceptable coverage probabilities, even when sample sizes are small and the diagnostic tests have high accuracies. The methods extend to stratified settings and situations in which the ratings are correlated. We illustrate the methods using data from a clinical trial on the detection of ovarian cancer.

9.
Every statistical model is based on explicitly or implicitly formulated assumptions. In this study we address new techniques for calculating variances and confidence intervals, analyse some statistical methods applied to modelling twinning rates, and investigate whether the improvements give more reliable results. For an observed relative frequency, the commonly used variance formula holds exactly under the assumptions that the repetitions are independent and that the probability of success is constant. The probability of a twin maternity depends not only on genetic predisposition, but also on several demographic factors, particularly ethnicity, maternal age and parity, so the assumption of constancy is questionable. The effect of grouping on the analysis of regression models for twinning rates is also considered. Our results indicate that grouping influences the efficiency of the estimates but not the estimates themselves. Recently, confidence intervals for proportions of low-incidence events have been the target of renewed interest, and we present new alternatives. These confidence intervals are slightly wider and their midpoints do not coincide with the maximum-likelihood estimate of the twinning rate, but their actual coverage is closer to the nominal one than that of the traditional confidence interval. In general, our findings indicate that the traditional methods are satisfactorily robust and give reliable results. However, we propose that the new formulae for the confidence intervals should be used. Our results are applied to twin-maternity data from Finland and Denmark.
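The described behaviour (slightly wider intervals whose midpoint is pulled away from the MLE, with coverage closer to nominal for low-incidence proportions) matches score-type intervals such as Wilson's. The abstract does not give the exact formula, so the sketch below shows the Wilson interval only as a representative alternative to the traditional Wald interval; treating it as the authors' formula would be an assumption.

```python
from scipy.stats import norm

def wilson_interval(successes, n, conf=0.95):
    """Wilson score interval for a binomial proportion (e.g. a twinning rate).
    Its midpoint is pulled toward 1/2, so it does not coincide with the MLE
    successes/n, and it is slightly wider than the Wald interval for small rates."""
    z = norm.ppf(0.5 + conf / 2)
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = z * ((p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) ** 0.5) / denom
    return centre - half, centre + half
```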

10.
Multivariate meta-analysis is gaining prominence in evidence synthesis research because it enables the simultaneous synthesis of multiple correlated outcomes, and random-effects models have generally been used to address between-study heterogeneity. However, coverage probabilities of confidence regions or intervals from standard inference methods for random-effects models (e.g., restricted maximum likelihood estimation) cannot in general retain their nominal confidence levels, especially when the number of synthesized studies is small, because their validity depends on large-sample approximations. In this article, we provide permutation-based inference methods that enable exact joint inferences for average outcome measures without large-sample approximations. We also provide accurate marginal inference methods under general settings of multivariate meta-analysis. We propose effective approaches for permutation inference using optimal weighting based on the efficient score statistic. The effectiveness of the proposed methods is illustrated via applications to bivariate meta-analyses of diagnostic accuracy studies for airway eosinophilia in asthma and a network meta-analysis of antihypertensive drugs on incident diabetes, as well as through simulation experiments. In the numerical evaluations performed via simulation, our methods generally provided accurate confidence regions or intervals under a broad range of settings, whereas the current standard inference methods exhibited serious undercoverage.

11.
Two new methods for computing confidence intervals for the difference δ = p1 − p2 between two binomial proportions (p1, p2) are proposed. Both the Mid-P and Max-P likelihood-weighted intervals are constructed by mapping the tail probabilities from the two-dimensional (p1, p2)-space into a one-dimensional function of δ based on the likelihood weights. This procedure may be regarded as a natural extension of the Clopper-Pearson (1934) interval to the two-sample case, where the weighted tail probability is α/2 at each end on the δ scale. The probability computation is based on the exact distribution rather than a large-sample approximation. Extensive computation was carried out to evaluate the coverage probability and expected width of the likelihood-weighted intervals and of several other methods. The likelihood-weighted intervals compare very favorably with the standard asymptotic interval and with intervals proposed by Hauck and Anderson (1986), Cox and Snell (1989), Santner and Snell (1980), Santner and Yamagami (1993), and Peskun (1993). In particular, the Mid-P likelihood-weighted interval provides a good balance between accurate coverage probability and short interval width in both small and large samples. The Mid-P interval is also comparable to Coe and Tamhane's (1993) interval, which has the best performance in small samples.
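For comparison, the "standard asymptotic interval" that the likelihood-weighted intervals are benchmarked against is the familiar Wald interval sketched below. The Mid-P and Max-P intervals themselves require the exact two-binomial tail computation described above and are not reproduced here; the function name is illustrative.

```python
from scipy.stats import norm

def wald_interval_diff(x1, n1, x2, n2, conf=0.95):
    """Standard asymptotic (Wald) interval for delta = p1 - p2 from two
    independent binomial samples; the baseline that exact and
    likelihood-weighted intervals are usually compared against."""
    z = norm.ppf(0.5 + conf / 2)
    p1, p2 = x1 / n1, x2 / n2
    d = p1 - p2
    se = (p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2) ** 0.5
    return d - z * se, d + z * se
```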

12.
Sequencing family DNA samples provides an attractive alternative to population-based designs for identifying rare variants associated with human disease, owing to the enrichment of causal variants in pedigrees. Previous studies showed that, compared with standard calling algorithms, genotype calling accuracy can be improved by modeling family relatedness. Current family-based variant calling methods use sequencing data on single variants and ignore identity-by-descent (IBD) sharing along the genome. In this study we describe a new computational framework to accurately estimate IBD sharing from the sequencing data and to utilize the inferred IBD among family members to jointly call genotypes in pedigrees. Through simulations and application to real data, we show that IBD can be reliably estimated across the genome, even at very low coverage (e.g., 2X), and that genotype accuracy can be dramatically improved. Moreover, the improvement is more pronounced for variants with low frequencies, especially at low to intermediate coverage (e.g., 10X to 20X), making our approach effective for studying rare variants in cost-effective whole-genome sequencing of pedigrees. We hope that our tool will be useful to the research community for identifying rare variants underlying human disease through family-based sequencing.

13.
Clegg LX, Gail MH, Feuer EJ. Biometrics 2002, 58(3): 684-688
We propose a new Poisson method to estimate the variance for prevalence estimates obtained by the counting method described by Gail et al. (1999, Biometrics 55, 1137-1144) and to construct a confidence interval for the prevalence. We evaluate both the Poisson procedure and the procedure based on the bootstrap proposed by Gail et al. in simulated samples generated by resampling real data. These studies show that both variance estimators usually perform well and yield coverages of confidence intervals at nominal levels. When the number of disease survivors is very small, however, confidence intervals based on the Poisson method have supranominal coverage, whereas those based on the procedure of Gail et al. tend to have below-nominal coverage. For these reasons, we recommend the Poisson method, which also reduces the computational burden considerably.

14.
Cross-validation-based point estimates of prediction accuracy are frequently reported in microarray class prediction problems. However, these point estimates can be highly variable, particularly for small sample sizes, so it would be useful to provide confidence intervals for prediction accuracy. We performed an extensive study of existing confidence interval methods and compared their performance in terms of empirical coverage and width. We developed a bootstrap case cross-validation (BCCV) resampling scheme and defined several confidence interval methods using BCCV with and without bias correction. The widely used approach of basing confidence intervals on an independent binomial assumption for the leave-one-out cross-validation errors results in serious undercoverage of the true prediction error. Two split-sample-based methods previously proposed in the literature tend to give overly conservative confidence intervals. Using BCCV resampling, the percentile confidence interval method was also found to be overly conservative without bias correction, while Efron's bias-corrected and accelerated (BCa) interval method returns substantially anti-conservative confidence intervals. We propose a simple bias reduction on the BCCV percentile interval. The method provides mildly conservative inference under all circumstances studied and outperforms the other methods in microarray applications with small to moderate sample sizes.

15.
This article derives generalized prediction intervals for random effects in linear random-effects models. For balanced and unbalanced data in two-way layouts, models are considered with and without interaction. Coverage of the proposed generalized prediction intervals was estimated in a simulation study based on an agricultural field experiment. Generalized prediction intervals were compared with prediction intervals based on the restricted maximum likelihood (REML) procedure and on the approximate methods of Satterthwaite and of Kenward and Roger. The simulation study showed that coverage of the generalized prediction intervals was closer to the nominal level of 0.95 than that of the prediction intervals based on the REML procedure.

16.
Construction of confidence intervals or regions is an important part of statistical inference. The usual approach to constructing a confidence interval for a single parameter, or a confidence region for two or more parameters, requires that the distribution of the estimated parameters is known or can be assumed. In reality, the sampling distributions of parameters of biological importance are often unknown or difficult to characterize. Distribution-free nonparametric resampling methods such as bootstrapping and permutation have been widely used to construct confidence intervals for a single parameter. There are also several parametric (ellipse) and nonparametric (convex hull peeling, bagplot and HPDregionplot) methods available for constructing confidence regions for two or more parameters. However, these methods have some key deficiencies, including biased estimation of the true coverage rate, failure to account for the shape of the distribution inherent in the data, and difficulty of implementation. The purpose of this paper is to develop a new distribution-free method for constructing confidence regions that is based only on a few basic geometrical principles and accounts for the actual shape of the distribution inherent in the real data. The new method is implemented in an R package, distfree.cr/R. The statistical properties of the new method are evaluated and compared with those of the other methods through Monte Carlo simulation. Our new method outperforms the other methods regardless of whether the samples are taken from normal or non-normal bivariate distributions. In addition, the superiority of our method is consistent across different sample sizes and different levels of correlation between the two variables. We also analyze three biological data sets to illustrate the use of our new method for genomics and other biological research.
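Of the existing nonparametric approaches mentioned (convex hull peeling, bagplot, HPDregionplot), convex hull peeling is the simplest to sketch. The code below is an illustrative implementation of that existing technique, not of the authors' new geometric method in distfree.cr/R; the stopping rule is a common heuristic and the function name is hypothetical.

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_peel_region(points, alpha=0.05):
    """Approximate (1 - alpha) bivariate region by convex hull peeling:
    repeatedly strip the outermost hull vertices until roughly a fraction
    alpha of the points would be removed, then return the hull of the rest.
    Peeling removes whole layers at once, so the achieved coverage is only
    approximate, one of the deficiencies noted in the abstract."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    keep = pts
    while len(keep) > 3:
        hull = ConvexHull(keep)
        removed_if_peeled = (n - len(keep)) + len(hull.vertices)
        if removed_if_peeled / n > alpha:
            break                       # another layer would peel too many points
        mask = np.ones(len(keep), dtype=bool)
        mask[hull.vertices] = False
        keep = keep[mask]
    return ConvexHull(keep)             # .vertices index the region's boundary points
```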

17.
宋丽丽, 白中科, 樊翔, 孙鹏旸, 卫怡. 生态学报 (Acta Ecologica Sinica) 2018, 38(4): 1272-1283
The accuracy of vegetation coverage measurement largely determines whether research conclusions are scientifically sound. In arid and semi-arid degraded grassland areas, and especially in mining areas severely disturbed by excavation, well-developed biological soil crust (BSC) affects the measurement of vegetation coverage because its color and spectra resemble those of green vegetation. Taking the Yimin open-pit mining area as the study area, four groups of quadrat photographs each were collected for moss crust, lichen crust and algal crust at the west dump and the inner dump (each group containing one photograph of the quadrat taken before and one after spraying with water), along with one group of crust-free quadrat photographs as a control. Vegetation coverage was extracted from the digital photographs using different data processing methods (maximum likelihood classification and an RGB threshold method), and comparative experiments were set up to determine whether BSC affects vegetation coverage measurement, how large the effect is, whether the degree of influence depends on BSC water content, how the conventional processing methods compare, and whether combining texture features with color information can remove the influence of BSC on the extracted vegetation coverage values. The conclusions are as follows. 1) When vegetation coverage is extracted with conventional photo-based processing methods, the presence of BSC inflates the measured vegetation coverage; for moss and lichen crusts the effect is stronger after water absorption than before, whereas for algal crusts the opposite holds. 2) Among the three successional stages of BSC, overestimation is most pronounced in quadrats containing moss crust, followed by lichen crust, while quadrats containing algal crust show no clear pattern. 3) The higher the BSC coverage and the lower the vegetation coverage within a quadrat, the less accurate the vegetation coverage measurement; the influence of BSC should therefore be considered when studying areas such as grassland mining areas with low herbaceous cover and well-developed crusts. 4) When an improved extraction method using texture information was tested, classification based on texture alone had very low accuracy, whereas classification combining texture information with RGB color information achieved higher accuracy. 5) Comparing the accuracy of the two conventional classification methods, the RGB threshold method was less accurate than maximum likelihood classification, overestimating vegetation coverage by nearly twice as much; comparing the two improved extraction methods, both effectively increased measurement accuracy, with the band-composite texture classification method performing best. In decreasing order of accuracy, the four methods rank: texture combined with RGB > maximum likelihood classification accounting for biological soil crust > ordinary maximum likelihood classification > RGB threshold method.
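The abstract does not specify the exact "RGB threshold method" used; a generic variant often applied to quadrat photographs is thresholding the excess-green index, sketched below. It also illustrates why BSC inflates the estimate: crust pixels whose greenness exceeds the threshold are counted as vegetation. The threshold value, array layout, and function name here are assumptions for illustration.

```python
import numpy as np

def vegetation_cover_exg(rgb_image, threshold=20.0):
    """Fractional vegetation cover from a quadrat photo (H x W x 3 uint8 array)
    by thresholding the excess-green index ExG = 2G - R - B; pixels above the
    threshold are classified as green vegetation.  Biological soil crust with a
    green-shifted color can pass the threshold and bias the estimate upward."""
    img = rgb_image.astype(float)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    exg = 2.0 * g - r - b
    green = exg > threshold
    return float(green.mean())          # proportion of pixels classified as vegetation
```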

18.
We develop a new Bayesian approach to interval estimation for both the risk difference and the risk ratio for a 2 x 2 table with a structural zero using Markov chain Monte Carlo (MCMC) methods. We also derive a normal approximation for the risk difference and a gamma approximation for the risk ratio. We then compare the coverage and interval width of our new intervals to the score-based intervals over various parameter and sample-size configurations. Finally, we consider a Bayesian method for sample-size determination.

19.
In this article, we compare Wald-type, logarithmic transformation, and Fieller-type statistics for the classical 2-sided equivalence testing of the rate ratio under matched-pair designs with a binary end point. These statistics can be implemented through sample-based, constrained least squares estimation and constrained maximum likelihood (CML) estimation methods. Sample size formulae based on the CML estimation method are developed. We consider formulae that control a prespecified power or confidence width. Our simulation studies show that statistics based on the CML estimation method generally outperform other statistics and methods with respect to actual type I error rate and average width of confidence intervals. Also, the corresponding sample size formulae are valid asymptotically in the sense that the exact power and actual coverage probability for the estimated sample size are generally close to their prespecified values. The methods are illustrated with a real example from a clinical laboratory study.

20.
A common measure of relative toxicity is the ratio of the median lethal doses estimated in two bioassays. Robertson and Preisler previously proposed a method for constructing a confidence interval for this ratio. The applicability of the technique in common experimental situations, especially those involving small samples, may be questionable because the sampling distribution of the ratio estimator may be highly skewed. To examine this possibility, we conducted a computer simulation experiment to evaluate the coverage properties of the Robertson and Preisler method. The simulation showed that the method provided confidence intervals that performed at the nominal confidence level for the range of responses often observed in pesticide bioassays. The results of this study provide empirical support for the continued use of this technique.
