Similar Articles (20 results)
1.
A nonparametric discrete delta method for estimating standard errors of percentile estimators in quantal bioassay is described. A simulation study of confidence intervals for EDx in probit analysis shows that the discrete delta method compares favorably with intervals based on maximum likelihood and with some parametric bootstrap methods.
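As a toy illustration of the classical (continuous) delta method in this setting, the sketch below fits a probit model to simulated dose-response data with statsmodels and propagates the coefficient covariance to an ED50 standard error. The paper's nonparametric discrete delta variant is not reproduced, and all data and values here are invented.

```python
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

rng = np.random.default_rng(1)
dose = np.repeat([0.5, 1.0, 2.0, 4.0, 8.0], 40)     # hypothetical doses
p_true = norm.cdf(-1.5 + 1.2 * np.log(dose))        # true probit model
resp = rng.binomial(1, p_true)                      # simulated 0/1 responses

X = sm.add_constant(np.log(dose))
fit = sm.Probit(resp, X).fit(disp=0)
b0, b1 = fit.params
V = np.asarray(fit.cov_params())

# ED50 on the log-dose scale solves b0 + b1*x = 0, so g(b) = -b0/b1.
x50 = -b0 / b1
grad = np.array([-1.0 / b1, b0 / b1**2])            # gradient of g
se_x50 = np.sqrt(grad @ V @ grad)                   # delta-method standard error

lo, hi = x50 - 1.96 * se_x50, x50 + 1.96 * se_x50
print(f"ED50 = {np.exp(x50):.2f}, 95% CI ({np.exp(lo):.2f}, {np.exp(hi):.2f})")
```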

2.
The epidemiologic concept of the adjusted attributable risk is a useful approach to quantitatively describing the importance of risk factors at the population level. It measures the proportional reduction in disease probability when a risk factor is eliminated from the population, accounting for confounding and effect modification by nuisance variables. Asymptotic variance estimates for estimates of the adjusted attributable risk are often computed by applying the delta method. Investigations have shown, however, that the delta method generally tends to underestimate the standard error, leading to biased confidence intervals. We compare confidence intervals for the adjusted attributable risk derived by computer-intensive methods such as the bootstrap and the jackknife with confidence intervals based on asymptotic variance estimates, using an extensive Monte Carlo simulation and a real data example from a cohort study in cardiovascular disease epidemiology. Our results show that confidence intervals based on bootstrap and jackknife methods outperform intervals based on asymptotic theory. The best variants of the computer-intensive confidence intervals are indicated for different situations.
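For intuition, here is a minimal sketch contrasting a percentile bootstrap interval with a jackknife normal-theory interval for a crude, unadjusted attributable risk on simulated data; the paper's adjusted estimator, which accounts for confounders, is not implemented.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
expo = rng.binomial(1, 0.4, n)                          # hypothetical exposure
dis = rng.binomial(1, np.where(expo == 1, 0.20, 0.08))  # hypothetical disease

def ar(d, e):
    # Crude attributable risk: (P(D) - P(D | unexposed)) / P(D).
    return (d.mean() - d[e == 0].mean()) / d.mean()

est = ar(dis, expo)

# Nonparametric bootstrap, percentile interval.
idx = rng.integers(0, n, (2000, n))
boot = np.array([ar(dis[i], expo[i]) for i in idx])
lo, hi = np.percentile(boot, [2.5, 97.5])

# Jackknife standard error, then a normal-theory interval.
keep = np.arange(n)
jack = np.array([ar(dis[keep != i], expo[keep != i]) for i in range(n)])
se_j = np.sqrt((n - 1) / n * np.sum((jack - jack.mean()) ** 2))

print(f"AR = {est:.3f}; bootstrap 95% CI ({lo:.3f}, {hi:.3f}); "
      f"jackknife CI ({est - 1.96 * se_j:.3f}, {est + 1.96 * se_j:.3f})")
```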

3.
In this paper we consider the competing risks model where the risks may not be independent. We assume both fixed and random censoring; the random censoring mechanism may have either a parametric or a non-parametric form. The life distributions and the parametric censoring distribution considered are exponential or Weibull. Expressions for the asymptotic confidence intervals for various parameters of interest under the different models are derived using the estimated Fisher information matrix and parametric bootstrap techniques. Monte Carlo simulation studies for some of these cases have been carried out.

4.
Diagnostic studies in ophthalmology frequently involve binocular data, where pairs of eyes are evaluated, through some diagnostic procedure, for the presence of certain diseases or pathologies. The simplest approach to estimating measures of diagnostic accuracy, such as sensitivity and specificity, treats eyes as independent, consequently yielding incorrect estimates, especially of the standard errors. Approaches that account for the inter-eye correlation include regression methods using generalized estimating equations and likelihood techniques based on various correlated binomial models. This paper proposes a simple alternative statistical methodology for jointly estimating measures of diagnostic accuracy for binocular tests based on a flexible model for correlated binary data. Moment estimation of the model parameters is outlined and asymptotic inference is discussed. The resulting estimates are straightforward and easy to obtain, requiring no special statistical software, only elementary calculations. Simulation results indicate that large-sample and bootstrap confidence intervals based on the estimates have relatively good coverage properties when the model is correctly specified. The computation of the estimates and their standard errors is illustrated with data from a study on diabetic retinopathy.
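A minimal sketch of the moment-estimation idea, assuming a generic clustered-binomial setup rather than the paper's specific model: estimate sensitivity from simulated eye pairs, estimate the inter-eye correlation from the joint-positive frequency, and inflate the naive standard error by the resulting design effect.

```python
import numpy as np

rng = np.random.default_rng(3)
K = 200                                   # diseased subjects, two eyes each
latent = rng.binomial(1, 0.5, (K, 1))     # shared subject effect -> correlation
prob = np.where(latent == 1, 0.95, 0.65)  # eye-level positivity probability
eyes = rng.binomial(1, np.broadcast_to(prob, (K, 2)))   # 1 = test positive

p = eyes.mean()                                # estimated sensitivity
p11 = (eyes[:, 0] * eyes[:, 1]).mean()         # both eyes test positive
rho = (p11 - p ** 2) / (p * (1 - p))           # moment estimate of inter-eye correlation

se_naive = np.sqrt(p * (1 - p) / (2 * K))      # treats all 2K eyes as independent
se_adj = se_naive * np.sqrt(1 + rho)           # design-effect (clustered) SE

print(f"sensitivity = {p:.3f}, inter-eye correlation = {rho:.2f}")
print(f"naive 95% half-width:    {1.96 * se_naive:.3f}")
print(f"adjusted 95% half-width: {1.96 * se_adj:.3f}")
```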

5.
When predicting population dynamics, a point prediction is not enough; it should be accompanied by a confidence interval that integrates the whole chain of errors, from observations to predictions via the estimates of the model's parameters. Matrix models, whose parameters are vital rates, are often used to predict the dynamics of age- or size-structured populations. This study aims (1) to assess how the variability of observations propagates to vital rates and then to model predictions, and (2) to compare three methods for computing confidence intervals for values predicted from such models. The first method is the bootstrap. The second is analytic, approximating the standard error of predictions by their asymptotic variance as the sample size tends to infinity. The third is a hybrid, using the bootstrap to estimate the standard errors of the vital rates and the analytic method to propagate those errors to the model predictions. Computations are done for an Usher matrix model that predicts the asymptotic (as time goes to infinity) stock recovery rate for three timber species in French Guiana. Little difference is found between the hybrid and the analytic method. Their estimates of bias and standard error converge towards the bootstrap estimates when the error on vital rates becomes small enough, which corresponds here to more than 5000 observed trees.
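The sketch below illustrates the bootstrap leg of such a comparison: vital rates of an invented three-stage Usher-type matrix are resampled from their binomial sampling distributions and propagated to the asymptotic growth rate. The paper's quantity of interest is a stock recovery rate, and its model and data are not used here.

```python
import numpy as np

rng = np.random.default_rng(11)
n_obs = np.array([800, 500, 300])      # trees observed per stage (hypothetical)
p_up = np.array([0.10, 0.06])          # observed upgrowth probabilities
p_stay = np.array([0.85, 0.88, 0.93])  # observed stasis probabilities
fec = 0.40                             # recruits per stage-3 tree (hypothetical)

def usher(p_stay, p_up, fec):
    A = np.diag(p_stay)
    A[1, 0], A[2, 1] = p_up            # transitions to the next stage
    A[0, 2] = fec                      # recruitment into stage 1
    return A

def growth_rate(A):
    # Dominant eigenvalue of the projection matrix.
    return np.max(np.real(np.linalg.eigvals(A)))

lams = []
for _ in range(2000):
    ps = rng.binomial(n_obs, p_stay) / n_obs          # resampled stasis rates
    pu = rng.binomial(n_obs[:2], p_up) / n_obs[:2]    # resampled upgrowth rates
    lams.append(growth_rate(usher(ps, pu, fec)))

print(f"lambda = {growth_rate(usher(p_stay, p_up, fec)):.3f}, "
      f"bootstrap 95% CI {np.percentile(lams, [2.5, 97.5]).round(3)}")
```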

6.
You N, Xuan Mao C (2008). Biometrics 64(2):371–376.
Capture–recapture methods are widely adopted to estimate the sizes of populations of public health interest using information from surveillance systems. For a two-list surveillance system with a discrete covariate, a population is divided into several subpopulations. A unified framework is proposed in which the logits of the presence probabilities are decomposed into case effects and list effects. The estimators of the whole-population and subpopulation sizes, their adjusted versions, and their asymptotic standard errors admit closed-form expressions. Asymptotic and bootstrap individual and simultaneous confidence intervals are easily constructed. Conditional likelihood ratio tests are used to select among three possible models. Real examples are investigated.
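For orientation, the simplest two-list calculation (without the paper's covariates, case and list effects, or model selection) is the Chapman estimator with its closed-form asymptotic standard error; the counts below are invented.

```python
import numpy as np

n1, n2, m = 420, 310, 95          # list sizes and overlap (hypothetical counts)

# Chapman's nearly unbiased estimator and its asymptotic variance.
N = (n1 + 1) * (n2 + 1) / (m + 1) - 1
var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))
se = np.sqrt(var)

print(f"N_hat = {N:.0f}, 95% CI ({N - 1.96 * se:.0f}, {N + 1.96 * se:.0f})")
```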

7.
The molecular clock theory has greatly advanced our understanding of macroevolutionary events. Maximum likelihood (ML) estimation of divergence times involves the adoption of fixed calibration points, and the confidence intervals associated with the estimates are generally very narrow. The credibility intervals are inferred assuming that the estimates are normally distributed, which may not be the case. Moreover, calculation of standard errors is usually carried out by the curvature method and is complicated by the difficulty of approximating second derivatives of the likelihood function. In this study, a standard primate phylogeny was used to examine the standard errors of ML estimates via the bootstrap method. Confidence intervals were also assessed from the posterior distribution of divergence times inferred via Bayesian Markov chain Monte Carlo. For the primate topology under evaluation, no significant differences were found between the bootstrap and the curvature methods. Also, Bayesian confidence intervals were always wider than those obtained by ML.

8.
Bootstrap confidence intervals for adaptive cluster sampling
Consider a collection of spatially clustered objects where the clusters are geographically rare. Of interest is estimating the total number of objects on the site from a sample of plots of equal size. Under these spatial conditions, adaptive cluster sampling of plots generally improves estimation efficiency over simple random sampling without replacement (SRSWOR). In adaptive cluster sampling, when a sampled plot meets some predefined condition, neighboring plots are added to the sample. When populations are rare and clustered, the usual unbiased estimators based on small samples are often highly skewed and discrete in distribution, so confidence intervals based on asymptotic normal theory may not be appropriate. We investigated several nonparametric bootstrap methods for constructing confidence intervals under adaptive cluster sampling. To perform the bootstrapping, we transformed the initial sample so as to include the information from the adaptive portion of the sample while maintaining a fixed sample size. In general, the coverage of bootstrap percentile methods was closer to nominal coverage than that of the normal approximation.
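The following toy Monte Carlo, which uses ordinary i.i.d. sampling rather than adaptive cluster sampling, shows how one can check the empirical coverage of normal-approximation versus bootstrap percentile intervals for a skewed estimator (here the exponential rate 1/mean); all settings are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
rate, n, B, reps = 1.0, 25, 500, 400
cover_norm = cover_perc = 0

for _ in range(reps):
    x = rng.exponential(1.0 / rate, n)
    est = 1.0 / x.mean()                                # skewed rate estimator
    se = x.std(ddof=1) / (np.sqrt(n) * x.mean() ** 2)   # delta-method SE of 1/mean
    cover_norm += (est - 1.96 * se <= rate <= est + 1.96 * se)
    boots = 1.0 / rng.choice(x, (B, n)).mean(axis=1)    # bootstrap rate estimates
    lo, hi = np.percentile(boots, [2.5, 97.5])
    cover_perc += (lo <= rate <= hi)

print(f"normal-approximation coverage: {cover_norm / reps:.2f}")
print(f"bootstrap percentile coverage: {cover_perc / reps:.2f}")
```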

9.
10.
Species distribution models have been widely used to predict species distributions for various purposes, including conservation planning and climate change impact assessment. The success of these applications relies heavily on the accuracy of the models, and various measures have been proposed to assess it. Rigorous statistical analysis should be incorporated in model accuracy assessment, but because relevant information about the statistical properties of accuracy measures is scattered across various disciplines, ecologists find it difficult to select the most appropriate ones for their research. In this paper, we review accuracy measures that are currently used in species distribution modelling (SDM) and introduce additional measures that have potential applications in SDM. For the commonly used measures (which are also intensively studied by statisticians), including overall accuracy, sensitivity, specificity, kappa, and the area and partial area under the ROC curve, promising methods to construct confidence intervals and to statistically compare the accuracy of two models are given. For other accuracy measures, methods to estimate standard errors are given, which can be used to construct approximate confidence intervals. We also suggest that, as general tools, computer-intensive methods, especially bootstrap and randomization methods, can be used to construct confidence intervals and statistical tests when suitable analytic methods cannot be found; these computer-intensive methods usually provide robust results.
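As one concrete instance of the analytic intervals discussed, the sketch below computes Wilson score intervals for sensitivity and specificity from an invented confusion matrix using statsmodels.

```python
from statsmodels.stats.proportion import proportion_confint

tp, fn, tn, fp = 78, 12, 130, 20               # hypothetical confusion matrix

sens = tp / (tp + fn)
spec = tn / (tn + fp)
sens_ci = proportion_confint(tp, tp + fn, alpha=0.05, method="wilson")
spec_ci = proportion_confint(tn, tn + fp, alpha=0.05, method="wilson")

print(f"sensitivity = {sens:.3f}, 95% Wilson CI "
      f"({sens_ci[0]:.3f}, {sens_ci[1]:.3f})")
print(f"specificity = {spec:.3f}, 95% Wilson CI "
      f"({spec_ci[0]:.3f}, {spec_ci[1]:.3f})")
```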

11.
Publication bias is a major concern in conducting systematic reviews and meta-analyses. Various sensitivity analysis and bias-correction methods have been developed based on selection models, and they have some advantages over the widely used trim-and-fill bias-correction method. However, likelihood methods based on selection models may have difficulty obtaining precise estimates and reasonable confidence intervals, or may require a rather complicated sensitivity analysis process. Herein, we develop a simple publication bias adjustment method that utilizes information on conducted but still unpublished trials from clinical trial registries. We introduce an estimating equation for parameter estimation in the selection function by regarding the publication bias issue as a missing data problem under the missing-not-at-random assumption. With the estimated selection function, we introduce the inverse probability weighting (IPW) method to estimate the overall mean across studies. Furthermore, IPW versions of heterogeneity measures such as the between-study variance and the I² measure are proposed. We propose methods to construct confidence intervals based on asymptotic normal approximation as well as on the parametric bootstrap. Through numerical experiments, we observed that the estimators successfully eliminated bias and that the confidence intervals had empirical coverage probabilities close to the nominal level. The confidence interval based on the asymptotic normal approximation, however, is much wider in some scenarios than the bootstrap confidence interval; the latter is therefore recommended for practical use.
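A highly simplified sketch of the IPW step, assuming the study-level selection probabilities have already been estimated (the paper obtains them from registry data via an estimating equation, which is not reproduced, and its exact weighting and variance formulas may differ).

```python
import numpy as np

# Hypothetical published effect estimates and their selection probabilities.
y = np.array([0.42, 0.31, 0.55, 0.12, 0.38, 0.25])
p_pub = np.array([0.95, 0.90, 0.97, 0.55, 0.92, 0.80])   # assumed known here

w = 1.0 / p_pub                        # inverse probability weights
mu = np.sum(w * y) / np.sum(w)         # IPW estimate of the overall mean

# Crude variance for the weighted mean, ignoring the estimation of p_pub.
se = np.sqrt(np.sum(w ** 2 * (y - mu) ** 2)) / np.sum(w)
print(f"IPW mean = {mu:.3f}, 95% CI ({mu - 1.96 * se:.3f}, {mu + 1.96 * se:.3f})")
```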

12.
13.
Recent work on Bayesian inference for disease mapping models discusses the advantages of the fully Bayesian (FB) approach over its empirical Bayes (EB) counterpart, suggesting that FB posterior standard deviations of small-area relative risks are more reflective of the uncertainty associated with relative risk estimation than counterparts based on EB inference, since the latter fail to account for the variability in the estimation of the hyperparameters. In this article, an EB bootstrap methodology for relative risk inference with accurate parametric EB confidence intervals is developed, illustrated, and contrasted with hyperprior Bayes. We elucidate the close connection between the EB bootstrap methodology and hyperprior Bayes, present a comparison between FB inference via hybrid Markov chain Monte Carlo and EB inference via penalized quasi-likelihood, and illustrate the ability of parametric bootstrap procedures to adjust for the undercoverage of the "naive" EB interval estimates. We discuss the important roles that FB and EB methods play in risk inference, map interpretation, and real-life applications. The work is motivated by a recent analysis of small-area infant mortality rates in the province of British Columbia, Canada.

14.
We introduce a nearly automatic procedure to locate and count the quantum dots in images of kinesin motor assays. Our procedure employs an approximate likelihood estimator based on a two-component mixture model for the image data: the first component has a normal distribution, and the other is distributed as a normal random variable plus an exponential random variable. The normal component has an unknown variance, which we model as a function of the mean. We use B-splines to estimate the variance function during a training run on a suitable image, and the estimate is then used to process subsequent images. Parameter estimates are generated for each image along with estimates of standard errors, and the number of dots in the image is determined using an information criterion and likelihood ratio tests. Realistic simulations show that our procedure is robust and leads to accurate estimates, both of parameters and of standard errors.
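A toy version of the mixture likelihood can be written with scipy, using exponnorm for the normal-plus-exponential component. The sketch below assumes a constant variance, whereas the paper models the variance as a B-spline function of the mean, and all data and starting values are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import exponnorm, norm

rng = np.random.default_rng(2)
bg = rng.normal(100.0, 5.0, 1500)                               # background pixels
dots = rng.normal(100.0, 5.0, 500) + rng.exponential(30.0, 500)  # dot pixels
x = np.concatenate([bg, dots])

def nll(theta):
    mu, log_sd, log_tau, logit_w = theta
    sd, tau = np.exp(log_sd), np.exp(log_tau)       # keep scales positive
    w = 1.0 / (1.0 + np.exp(-logit_w))              # mixing weight in (0, 1)
    # exponnorm with shape K = tau/sd is Normal(mu, sd^2) + Exp(mean tau).
    f = (w * norm.pdf(x, mu, sd)
         + (1 - w) * exponnorm.pdf(x, tau / sd, loc=mu, scale=sd))
    return -np.sum(np.log(f + 1e-300))

res = minimize(nll, x0=[np.median(x), np.log(x.std() / 2.0), np.log(20.0), 0.0],
               method="Nelder-Mead", options={"maxiter": 4000})
mu, sd, tau, w = (res.x[0], np.exp(res.x[1]), np.exp(res.x[2]),
                  1.0 / (1.0 + np.exp(-res.x[3])))
print(f"mu={mu:.1f} sd={sd:.2f} tau={tau:.1f} background weight={w:.2f}")
```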

15.
Bennewitz J, Reinsch N, Kalm E (2002). Genetics 160(4):1673–1686.
The nonparametric bootstrap approach is known to be suitable for calculating central confidence intervals for the locations of quantitative trait loci (QTL). However, the distribution of the bootstrap QTL position estimates along the chromosome is peaked at the positions of the markers and is not equally tailed, which makes the confidence intervals conservative and wide. In this study three modified methods are proposed to calculate nonparametric bootstrap confidence intervals for QTL locations, which compute noncentral confidence intervals (uncorrected method I), correct for the impact of the markers (weighted method I), or both (weighted method II). Noncentral confidence intervals were computed with an analog of the highest posterior density method. The correction for the markers is based on the distribution of QTL estimates along the chromosome when the QTL is not linked with any marker, and it can be obtained with a permutation approach. In a simulation study the three methods were compared with the original bootstrap method. The results showed that it is useful, first, to compute noncentral confidence intervals and, second, to correct the bootstrap distribution of the QTL estimates for the impact of the markers. Weighted method II, which combines these two properties, produced the shortest and least biased confidence intervals over a large number of simulated configurations.
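The noncentral-interval idea can be mimicked with a shortest-window rule on the bootstrap draws, an analog of a highest-density interval. The sketch below applies it to fabricated bootstrap QTL positions and omits the paper's permutation-based marker correction.

```python
import numpy as np

rng = np.random.default_rng(9)
# Fake bootstrap positions (cM), skewed and piled up near a marker at 20 cM.
pos = np.concatenate([rng.normal(20, 0.5, 400), rng.gamma(2.0, 4.0, 600) + 14])

def shortest_interval(draws, level=0.95):
    # Shortest contiguous window containing `level` of the sorted draws.
    s = np.sort(draws)
    k = int(np.ceil(level * len(s)))
    widths = s[k - 1:] - s[:len(s) - k + 1]
    i = np.argmin(widths)
    return s[i], s[i + k - 1]

print("equal-tailed 95%:", np.percentile(pos, [2.5, 97.5]).round(1))
lo, hi = shortest_interval(pos)
print(f"shortest 95%:     ({lo:.1f}, {hi:.1f})")
```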

16.
Feifel J, Dobler D (2021). Biometrics 77(1):175–185.
Nested case-control designs are attractive in studies with a time-to-event endpoint when the outcome is rare or when interest lies in evaluating expensive covariates. Their appeal is that they restrict attention to small subsets of all patients at risk just prior to the observed event times, and only these small subsets need to be evaluated. Typically, the controls are selected at random, and methods for time-simultaneous inference have been proposed in the literature. However, the martingale structure behind nested case-control designs allows for more powerful and flexible non-standard sampling designs. We exploit that structure to find simultaneous confidence bands based on wild bootstrap resampling procedures within this general class of designs. We show in a simulation study that the intended coverage probability is obtained for confidence bands for cumulative baseline hazard functions. We apply our methods to observational data on hospital-acquired infections.
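As a loose illustration of wild (multiplier) bootstrap bands, the sketch below constructs a constant-width simultaneous band for a Nelson-Aalen cumulative hazard from a fully observed simulated cohort, a much simpler setting than the paper's nested case-control designs.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
t_event = rng.exponential(10.0, n)
t_cens = rng.exponential(15.0, n)
time = np.minimum(t_event, t_cens)
delta = (t_event <= t_cens).astype(int)       # 1 = event observed

order = np.argsort(time)
time, delta = time[order], delta[order]
at_risk = n - np.arange(n)                    # Y(t_i) at the sorted times
inc = delta / at_risk                         # Nelson-Aalen increments
na = np.cumsum(inc)                           # cumulative hazard estimate

# Wild bootstrap: multiply each increment by an independent N(0,1) draw,
# then take the supremum of the perturbed process over time.
B = 1000
G = rng.standard_normal((B, n))
sup = np.max(np.abs(np.cumsum(G * inc, axis=1)), axis=1)
q = np.quantile(sup, 0.95)

print(f"95% simultaneous band: NA(t) +/- {q:.3f} (constant half-width)")
```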

17.
It has long been known that insufficient consideration of spatial autocorrelation leads to unreliable hypothesis tests and inaccurate parameter estimates. Yet ecologists are confronted with a confusing array of methods to account for spatial autocorrelation. Although Beale et al. (2010) provided guidance for continuous data on regular grids, researchers still need advice for other types of data in more flexible spatial contexts. In this paper, we extend Beale et al.'s (2010) work to count data on both regularly and irregularly spaced plots, the latter being commonly encountered in ecological studies. Through a simulation-based approach, we assessed the accuracy and the type I errors of two frequentist and two Bayesian ready-to-use methods in the family of generalized mixed models, with distance-based or neighbourhood-based correlated random effects. In addition, we tested whether the methods are robust to spatial non-stationarity and to over- and under-dispersion, both typical features of species distribution count data that violate standard regression assumptions. In the simplest of our simulated datasets, the two frequentist methods gave inflated type I errors, while the two Bayesian methods provided satisfactory results. When facing real-world complexities, the distance-based Bayesian method (MCMC with Langevin–Hastings updates) performed best of all. We hope that, in light of our results, ecological researchers will feel more comfortable including spatial autocorrelation in their analyses of count data.

18.
Cross-validation-based point estimates of prediction accuracy are frequently reported in microarray class prediction problems. However, these point estimates can be highly variable, particularly for small sample sizes, and it would be useful to provide confidence intervals of prediction accuracy. We performed an extensive study of existing confidence interval methods and compared their performance in terms of empirical coverage and width. We developed a bootstrap case cross-validation (BCCV) resampling scheme and defined several confidence interval methods using BCCV with and without bias correction. The widely used approach of basing confidence intervals on an independent binomial assumption for the leave-one-out cross-validation errors results in serious under-coverage of the true prediction error. Two split-sample-based methods previously proposed in the literature tend to give overly conservative confidence intervals. Using BCCV resampling, the percentile confidence interval method was also found to be overly conservative without bias correction, while Efron's bias-corrected accelerated (BCa) interval method returns substantially anti-conservative confidence intervals. We propose a simple bias reduction on the BCCV percentile interval. The method provides mildly conservative inference under all circumstances studied and outperforms the other methods in microarray applications with small to moderate sample sizes.
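A rough sketch of the BCCV idea using scikit-learn: resample cases, run cross-validation within each resample, and form a percentile interval. The additive shift used below is only a stand-in for the paper's bias reduction, whose exact form is not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X, y = make_classification(n_samples=60, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000)

acc_full = cross_val_score(clf, X, y, cv=5).mean()     # point estimate

boot_acc = []
for _ in range(200):
    idx = rng.integers(0, len(y), len(y))              # resample cases
    if len(np.unique(y[idx])) < 2:                     # need both classes
        continue
    boot_acc.append(cross_val_score(clf, X[idx], y[idx], cv=5).mean())

boot_acc = np.array(boot_acc)
shift = boot_acc.mean() - acc_full                     # crude bias estimate
lo, hi = np.percentile(boot_acc - shift, [2.5, 97.5])
print(f"CV accuracy = {acc_full:.2f}, BCCV-style 95% CI ({lo:.2f}, {hi:.2f})")
```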

19.
The coverage probabilities of several confidence limit estimators of genetic parameters obtained from North Carolina I designs were assessed by means of Monte Carlo simulations, and the reliability of the estimators was compared under three different parental sample sizes. The coverage of confidence intervals based on the Normal distribution, using standard errors either computed by the delta method or derived from an approximation for the variance of a variance component estimated as a linear combination of mean squares, was affected by the number of males and females included in the experiment. The delta method was found to provide reliable standard errors of the genetic parameters only when at least 48 males were each mated to six different females randomly selected from the reference population. Formulae are provided for obtaining delta-method standard errors, and appropriate statistical software procedures are discussed. The error rates of confidence limits based on the Normal distribution with standard errors obtained from the variance-component approximation varied widely. By contrast, the coverage of F-distribution confidence intervals for heritability estimates was not significantly affected by parental sample size and was consistently near the stated level. For small parental sample sizes, confidence intervals for heritability estimates should therefore be based on the F-distribution.
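The F-distribution interval can be illustrated for the simpler balanced one-way (sire-only) layout; the North Carolina I design adds a dam stratum that is omitted here, and all mean squares below are invented. For a paternal half-sib analysis the heritability is roughly four times the sire intraclass correlation.

```python
from scipy.stats import f as fdist

s, n = 48, 6                 # sires, progeny per sire (hypothetical)
msb, msw = 2.80, 1.10        # between- and within-sire mean squares (invented)
df1, df2 = s - 1, s * (n - 1)

F = msb / msw
FL = F / fdist.ppf(0.975, df1, df2)          # exact F-based bounds on the
FU = F * fdist.ppf(0.975, df2, df1)          # variance ratio

def icc(Fv):
    # Intraclass correlation from a one-way F statistic (balanced design).
    return (Fv - 1) / (Fv + n - 1)

t_lo, t_hi = icc(FL), icc(FU)
print(f"sire ICC = {icc(F):.3f}, 95% CI ({t_lo:.3f}, {t_hi:.3f})")
print(f"h2 approx = {4 * icc(F):.2f}, CI ({4 * t_lo:.2f}, {4 * t_hi:.2f})")
```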

20.
Bailer AJ, Liu S, Smith ML, Isaacson LG (2000). Biometrics 56(3):936–939.
Treatments designed to increase neurochemical levels may also increase the numbers of axons that produce the neurochemicals of interest. A natural research question is how to compare the average neurochemical production per axon between two (or more) experimental groups. Two statistical methods are proposed for this problem: the first utilizes a delta-method approximation to the variance of a function of random variables, while the second is based on the bootstrap. These methods are illustrated with data on perivascular norepinephrine following intracerebroventricular infusion of the neurotrophin nerve growth factor in adult rats and are studied in a small simulation experiment. The delta-method confidence intervals exhibited better coverage properties than the bootstrap alternative.
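A minimal sketch of both proposals, assuming made-up paired animal-level data: a delta-method interval for the ratio of means (including the covariance term) next to a paired bootstrap percentile interval.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 12
axons = rng.poisson(40, n).astype(float)       # axon counts per animal
chem = axons * rng.normal(2.0, 0.3, n)         # neurochemical level per animal

r = chem.mean() / axons.mean()                 # ratio of means

# Delta method for g(mx, my) = mx / my, using (co)variances of the means.
mx, my = chem.mean(), axons.mean()
vx, vy = chem.var(ddof=1) / n, axons.var(ddof=1) / n
cxy = np.cov(chem, axons, ddof=1)[0, 1] / n
se = np.sqrt(vx / my**2 + mx**2 * vy / my**4 - 2 * mx * cxy / my**3)

# Paired bootstrap: resample animals, keeping each (chem, axons) pair intact.
idx = rng.integers(0, n, (2000, n))
boot = chem[idx].mean(axis=1) / axons[idx].mean(axis=1)

print(f"delta 95% CI:     ({r - 1.96 * se:.3f}, {r + 1.96 * se:.3f})")
print(f"bootstrap 95% CI: {np.percentile(boot, [2.5, 97.5]).round(3)}")
```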
