Similar articles
Found 20 similar articles (search time: 9 ms)
1.
The epidemiologic concept of the adjusted attributable risk is a useful approach to quantitatively describe the importance of risk factors at the population level. It measures the proportional reduction in disease probability when a risk factor is eliminated from the population, accounting for effects of confounding and effect modification by nuisance variables. The computation of asymptotic variance estimates for estimates of the adjusted attributable risk is often done by applying the delta method. Investigations of the delta method have shown, however, that it generally tends to underestimate the standard error, leading to biased confidence intervals. We compare confidence intervals for the adjusted attributable risk derived by applying computer-intensive methods such as the bootstrap or jackknife to confidence intervals based on asymptotic variance estimates, using an extensive Monte Carlo simulation and a real data example from a cohort study in cardiovascular disease epidemiology. Our results show that confidence intervals based on bootstrap and jackknife methods outperform intervals based on asymptotic theory. The best variants of computer-intensive confidence intervals are indicated for different situations.
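A minimal sketch of the bootstrap idea in the abstract above, applied to a crude (unadjusted) attributable risk on simulated cohort data; the adjusted estimator in the paper additionally handles confounders, and all numbers here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def attributable_risk(exposed, disease):
    # AR = (P(D) - P(D | unexposed)) / P(D)
    p_d = disease.mean()
    p_d_unexp = disease[exposed == 0].mean()
    return (p_d - p_d_unexp) / p_d

# toy cohort: exposure raises disease probability from 0.1 to 0.3
n = 2000
exposed = rng.integers(0, 2, n)
disease = rng.random(n) < np.where(exposed == 1, 0.3, 0.1)

point = attributable_risk(exposed, disease)

# percentile bootstrap: resample subjects with replacement
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(attributable_risk(exposed[idx], disease[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
```

A jackknife variant would instead recompute the estimate leaving out one subject at a time and build the interval from the resulting pseudo-values.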

2.
Many confidence intervals calculated in practice are potentially not exact, either because the requirements for the interval estimator to be exact are known to be violated, or because the (exact) distribution of the data is unknown. If a confidence interval is approximate, the crucial question is how well its true coverage probability approximates its intended coverage probability. In this paper we propose to use the bootstrap to calculate an empirical estimate for the (true) coverage probability of a confidence interval. In the first instance, the empirical coverage can be used to assess whether a given type of confidence interval is adequate for the data at hand. More generally, when planning the statistical analysis of future trials based on existing data pools, the empirical coverage can be used to study the coverage properties of confidence intervals as a function of type of data, sample size, and analysis scale, and thus inform the statistical analysis plan for the future trial. In this sense, the paper proposes an alternative to the problematic pretest of the data for normality, followed by selection of the analysis method based on the results of the pretest.  We apply the methodology to a data pool of bioequivalence studies, and in the selection of covariance patterns for repeated measures data.
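A hedged sketch of the empirical-coverage idea: treat an existing data pool as the population, draw bootstrap "trials" of the planned size, and count how often the nominal 95% t-interval covers the pool mean. The lognormal pool and sample size are illustrative stand-ins, not the paper's bioequivalence data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# "data pool": skewed observations standing in for historical trial data
pool = rng.lognormal(mean=0.0, sigma=1.0, size=5000)
truth = pool.mean()            # estimand: the pool mean

n = 20                         # planned sample size for a future trial
reps = 2000
tcrit = stats.t.ppf(0.975, df=n - 1)

covered = 0
for _ in range(reps):
    x = rng.choice(pool, size=n, replace=True)   # bootstrap "trial"
    m = x.mean()
    se = x.std(ddof=1) / np.sqrt(n)
    if m - tcrit * se <= truth <= m + tcrit * se:
        covered += 1

coverage = covered / reps      # empirical coverage of the nominal 95% interval
```

For skewed pools the empirical coverage typically falls below the nominal 95%, which is exactly the signal that a different analysis scale (e.g. log) may be warranted.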

3.
4.
BACKGROUND: Predicting a system's behavior based on a mathematical model is a primary task in Systems Biology. If the model parameters are estimated from experimental data, the parameter uncertainty has to be translated into confidence intervals for model predictions. For dynamic models of biochemical networks, the nonlinearity in combination with the large number of parameters hampers the calculation of prediction confidence intervals and renders classical approaches hardly feasible. RESULTS: In this article, reliable confidence intervals are calculated based on the prediction profile likelihood. Such prediction confidence intervals of the dynamic states can be utilized for a data-based observability analysis. The method is also applicable if there are non-identifiable parameters, yielding insufficiently specified model predictions that can be interpreted as non-observability. Moreover, a validation profile likelihood is introduced that should be applied when noisy validation experiments are to be interpreted. CONCLUSIONS: The presented methodology allows the propagation of uncertainty from experimental data to model predictions. Although presented in the context of ordinary differential equations, the concept is general and also applicable to other types of models. Matlab code which can be used as a template to implement the method is provided at http://www.fdmold.uni-freiburg.de/~ckreutz/PPL .
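The prediction profile likelihood can be sketched for a one-parameter toy model (not the ODE networks of the article): scan the parameter, map each value to the prediction of interest, and keep the predictions whose chi-square lies within the 95% threshold of the minimum. Model, noise level, and grid below are all assumptions for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# toy exponential decay y = exp(-k t), observed with Gaussian noise
t = np.array([0.5, 1.0, 2.0, 3.0])
sigma = 0.05
k_true = 0.7
y = np.exp(-k_true * t) + rng.normal(0, sigma, t.size)

def chi2_of_k(k):
    return np.sum((y - np.exp(-k * t)) ** 2) / sigma ** 2

# profile over a parameter grid; prediction of interest: y at t_pred
ks = np.linspace(0.1, 2.0, 2000)
chi2 = np.array([chi2_of_k(k) for k in ks])
t_pred = 1.5
z = np.exp(-ks * t_pred)                     # prediction implied by each k

# 95% prediction confidence interval from the profile threshold chi2_{1,0.95}
ok = chi2 - chi2.min() <= stats.chi2.ppf(0.95, df=1)
lo, hi = z[ok].min(), z[ok].max()
```

For multi-parameter ODE models the inner step becomes a constrained re-optimization over the remaining parameters rather than a one-dimensional scan, which is what the article's method and the linked Matlab template address.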

5.
6.
7.
8.
In cancer clinical trials, it is often of interest to estimate the ratio of hazard rates at specific time points during the study from two independent populations. In this paper, we consider nonparametric confidence interval procedures for the hazard ratio based on kernel estimates of the hazard rates with under-smoothing bandwidths. Two methods are used to derive the confidence intervals: one based on the asymptotic normality of the ratio of the kernel estimates of the hazard rates in the two populations, and another based on Fieller's theorem. The performance of the proposed confidence intervals is evaluated through Monte Carlo simulations, and the methods are applied to the analysis of data from a clinical trial on early breast cancer.
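Fieller's theorem, the second method above, gives an interval for a ratio of two approximately normal estimates by solving a quadratic; a generic sketch (the hazard-rate application would feed in the kernel estimates and their variances) might look like:

```python
import math

def fieller_ci(a, va, b, vb, z=1.96):
    """Fieller interval for the ratio a/b of two independent,
    approximately normal estimates with variances va, vb.
    Solves (a - r*b)^2 = z^2 * (va + r^2 * vb) for r."""
    A = b * b - z * z * vb
    B = -2.0 * a * b
    C = a * a - z * z * va
    disc = B * B - 4 * A * C
    if A <= 0 or disc < 0:
        return None   # unbounded interval: denominator not clearly nonzero
    r = math.sqrt(disc)
    return ((-B - r) / (2 * A), (-B + r) / (2 * A))

# illustrative numbers: numerator 2.0 (var 0.1), denominator 1.0 (var 0.05)
lo, hi = fieller_ci(2.0, 0.1, 1.0, 0.05)
```

Unlike the delta-method interval, the Fieller interval is not forced to be symmetric about the point estimate, which helps when the denominator estimate is noisy.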

9.
In current genetic counseling practice, a single risk estimate is often quoted to a family rather than a range of risks. Such point estimates are predicated on knowing basic parameters like recombination fractions exactly when, in fact, there may be considerable uncertainty about them. Using the large-sample theory of statistics, it is possible to derive approximate risk intervals that incorporate known statistical imprecision. The necessary theory is briefly discussed and illustrated by an application to family counseling for Duchenne muscular dystrophy in the presence of two flanking markers. Some of the problems of the theory are mentioned. These include lack of adequate sample size to justify the conclusions of large-sample theory, pronounced nonlinearity in the risk function, and failure to take genetic interference properly into account. Except in trivial cases, sophisticated computer software is needed to carry out the computations of risk intervals.
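The large-sample machinery above typically propagates uncertainty in an estimated parameter through the risk function via a first-order (delta-method) approximation. A hedged sketch with a hypothetical risk curve (not the DMD likelihood of the article):

```python
import math

def delta_risk_interval(theta_hat, se_theta, risk, drisk, z=1.96):
    """First-order delta method: Var[risk(theta_hat)] ~ risk'(theta)^2 * Var[theta_hat].
    Clipped to [0, 1] since the estimand is a probability."""
    r = risk(theta_hat)
    se_r = abs(drisk(theta_hat)) * se_theta
    return max(0.0, r - z * se_r), min(1.0, r + z * se_r)

# hypothetical carrier-risk curve in the recombination fraction theta
risk = lambda th: 0.5 * th / (th + 0.05)
drisk = lambda th: 0.5 * 0.05 / (th + 0.05) ** 2

lo, hi = delta_risk_interval(0.1, 0.03, risk, drisk)
```

The abstract's caveat applies directly: when the risk function is strongly nonlinear in the region of uncertainty, this linearization can be poor and likelihood-based intervals are safer.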

10.
Scherag et al. [Hum Hered 2002;54:210-217] recently proposed point estimates and asymptotic as well as exact confidence intervals for genotype relative risks (GRRs) and the attributable risk (AR) in case-parent trio designs using single nucleotide polymorphism (SNP) data. The aim of this study was the investigation of coverage probabilities and bias in estimates if the marker locus is not identical to the disease locus. Using a variety of parameter constellations, including marker allele frequencies identical to and different from the SNP at the disease locus, we performed an analytical study to quantify the bias and a Monte Carlo simulation study to quantify both bias and coverage probabilities. No bias was observed if marker and trait locus coincided. Two parameters had a strong impact on coverage probabilities of confidence intervals and bias in point estimates if they did not coincide: the linkage disequilibrium (LD) parameter delta and the allele frequency at the marker SNP. If marker allele frequencies were different from the allele frequencies at the functional SNP, substantial biases occurred. Further, if delta between the marker and the disease locus was lower than the maximum possible delta, estimates were also biased. In general, biases were towards the null hypothesis for both GRRs and AR. If one GRR was not increased, as, for example, in a recessive genetic model, biases away from the null could be observed. If both GRRs were in the same direction and both were substantially larger than 1, the bias was always towards the null. When applying point estimates and confidence intervals for GRRs and AR in candidate gene studies, great care is needed. Effect estimates are substantially biased towards the null if either the allele frequencies at the marker SNP and the true disease locus are different or if the LD between the marker SNP and the disease locus is not at its maximum. A bias away from the null occurs only in uncommon study situations; it is small and can therefore be ignored in applications.

11.
The distributions of genetic variance components and their ratios (heritability and type-B genetic correlation) from 105 pairs of six-parent disconnected half-diallels of a breeding population of loblolly pine (Pinus taeda L.) were examined. A series of simulations based on these estimates was carried out to study the coverage accuracy of confidence intervals based on the usual t-method and several alternative methods. Genetic variance estimates fluctuated greatly from one experiment to another. Both the general combining ability variance (σ²g) and the specific combining ability variance (σ²s) had a large positive skewness. For σ²g and σ²s, a skewness-adjusted t-method proposed by Boos and Hughes-Oliver (Am Stat 54:121–128, 2000) provided better upper-endpoint confidence intervals than t-intervals, whereas the two were similar for the lower endpoint. Bootstrap BCa intervals (Efron and Tibshirani, An Introduction to the Bootstrap. Chapman & Hall, London, 436 p, 1993) and Hall's transformation methods (Zhou and Gao, Am Stat 54:100–104, 2000) had poor coverage. The coverage accuracy of Fieller's interval endpoint (J R Stat Soc Ser B 16:175–185, 1954) and the t-interval endpoint was similar for both h² and rB for sample sizes n ≤ 10, but for n = 30 Fieller's method was much better.
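One skewness-adjusted t-method of the kind compared by Boos and Hughes-Oliver shifts the interval's centre by an estimate of mu3/(6 s² n), where mu3 is the third central moment; a sketch under that assumption (a Johnson-type adjustment, not necessarily the exact variant used for the diallel data):

```python
import math

def skew_adjusted_t_interval(x, tcrit):
    """t interval with a skewness correction of the centre:
    shift = mu3_hat / (6 * s^2 * n)."""
    n = len(x)
    m = sum(x) / n
    s2 = sum((v - m) ** 2 for v in x) / (n - 1)
    mu3 = sum((v - m) ** 3 for v in x) / n
    shift = mu3 / (6 * s2 * n)
    half = tcrit * math.sqrt(s2 / n)
    return m + shift - half, m + shift + half

# symmetric data: mu3 = 0, so the usual t interval is recovered
lo, hi = skew_adjusted_t_interval([1.0, 2.0, 3.0, 4.0, 5.0], tcrit=2.776)
```

For right-skewed variance-component estimates the shift is positive, which is what improves the upper endpoint relative to the plain t interval.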

12.
Albert, A.; Anderson, J. A. (1984). On the existence of maximum likelihood estimates in logistic regression models. Biometrika 71(1):1–10.

13.
14.
Confidence intervals for the mean of one sample and for the difference in means of two independent samples based on the ordinary t-statistic suffer deficiencies when samples come from skewed families. In this article we evaluate several existing techniques and propose new methods to improve coverage accuracy. The methods examined include the ordinary-t, the bootstrap-t, the bias-corrected and accelerated (BCa) method, and three new intervals based on transformations of the t-statistic. Our study shows that our new transformation intervals and the bootstrap-t intervals give the best coverage accuracy for a variety of skewed distributions, and that our new transformation intervals have shorter interval lengths.
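The bootstrap-t interval mentioned above studentizes each resample and uses the empirical quantiles of those t-statistics in place of the Student-t quantiles; a one-sample sketch on illustrative skewed data:

```python
import numpy as np

rng = np.random.default_rng(2)

def bootstrap_t_ci(x, B=2000, alpha=0.05):
    n = len(x)
    m = x.mean()
    se = x.std(ddof=1) / np.sqrt(n)
    tstats = []
    for _ in range(B):
        xb = x[rng.integers(0, n, n)]            # resample with replacement
        seb = xb.std(ddof=1) / np.sqrt(n)
        if seb > 0:
            tstats.append((xb.mean() - m) / seb)  # studentized resample
    qlo, qhi = np.quantile(tstats, [alpha / 2, 1 - alpha / 2])
    # note the reversed quantiles: the interval is (m - qhi*se, m - qlo*se)
    return m - qhi * se, m - qlo * se

x = rng.exponential(scale=1.0, size=30)          # skewed toy sample
lo, hi = bootstrap_t_ci(x)
```

Because the bootstrap t-distribution inherits the data's skewness, the resulting interval is asymmetric about the mean, which is the source of its improved coverage for skewed families.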

15.
16.
17.
The traditional approach to 'exact' small-sample interval estimation of the odds ratio for binomial, Poisson, or multinomial samples uses the conditional distribution to eliminate nuisance parameters. This approach can be very conservative. For two independent binomial samples, we study an unconditional approach with the overall confidence level guaranteed to equal at least the nominal level. With small samples this interval tends to be shorter and to have coverage probabilities nearer the nominal level.
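The "guaranteed at least nominal" property above can be checked by exact enumeration: sum the probabilities of all possible 2x2 tables whose interval covers the true odds ratio. The sketch below does this for an approximate (Haldane-corrected Woolf) interval; a guaranteed-level unconditional method would additionally minimize this coverage over the nuisance parameters:

```python
import math
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def woolf_ci(x1, n1, x2, n2, z=1.96):
    # Haldane-corrected Woolf logit interval for the odds ratio (approximate)
    a, b = x1 + 0.5, n1 - x1 + 0.5
    c, d = x2 + 0.5, n2 - x2 + 0.5
    lor = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return math.exp(lor - z * se), math.exp(lor + z * se)

def exact_coverage(ci_fn, n1, n2, p1, p2):
    """Exact coverage at (p1, p2): total probability of all outcome
    tables whose interval covers the true odds ratio."""
    true_or = (p1 / (1 - p1)) / (p2 / (1 - p2))
    cov = 0.0
    for x1 in range(n1 + 1):
        for x2 in range(n2 + 1):
            lo, hi = ci_fn(x1, n1, x2, n2)
            if lo <= true_or <= hi:
                cov += binom_pmf(x1, n1, p1) * binom_pmf(x2, n2, p2)
    return cov

cov = exact_coverage(woolf_ci, 10, 10, 0.3, 0.5)
```

Scanning this coverage over a grid of (p1, p2) is how one verifies, or enforces, the at-least-nominal guarantee of an unconditional exact interval.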

18.

Background

Demographic models are widely used in conservation and management, and their parameterisation often relies on data collected for other purposes. When underlying data lack clear indications of associated uncertainty, modellers often fail to account for that uncertainty in model outputs, such as estimates of population growth.

Methodology/Principal Findings

We applied a likelihood approach to infer uncertainty retrospectively from point estimates of vital rates. Combining this with resampling techniques and projection modelling, we show that confidence intervals for population growth estimates are easy to derive. We used similar techniques to examine the effects of sample size on uncertainty. Our approach is illustrated using data on the red fox, Vulpes vulpes, a predator of ecological and cultural importance, and the most widespread extant terrestrial mammal. We show that uncertainty surrounding estimated population growth rates can be high, even for relatively well-studied populations. Halving that uncertainty typically requires a quadrupling of sampling effort.

Conclusions/Significance

Our results compel caution when comparing demographic trends between populations without accounting for uncertainty. Our methods will be widely applicable to demographic studies of many species.
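The resampling-plus-projection recipe above can be sketched with a two-stage matrix model. The vital rates, sample sizes, and matrix structure below are hypothetical placeholders, not the red fox estimates from the study:

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical vital rates with assumed sampling uncertainty:
# juvenile survival (binomial sample of 50), adult survival (sample of 60),
# fecundity (point estimate with standard error)
s_j, n_j = 0.4, 50
s_a, n_a = 0.7, 60
fec_mean, fec_se = 1.5, 0.2

def growth_rate(sj, sa, f):
    # dominant eigenvalue of a two-stage projection matrix
    A = np.array([[0.0, f], [sj, sa]])
    return max(abs(np.linalg.eigvals(A)))

lams = []
for _ in range(5000):
    sj = rng.binomial(n_j, s_j) / n_j            # resample survival estimates
    sa = rng.binomial(n_a, s_a) / n_a
    f = max(0.0, rng.normal(fec_mean, fec_se))   # resample fecundity
    lams.append(growth_rate(sj, sa, f))

lo, hi = np.percentile(lams, [2.5, 97.5])        # CI for population growth
```

The abstract's "quadrupling" rule of thumb follows from the 1/sqrt(n) scaling of standard errors: halving the width of this interval requires roughly four times the sampling effort behind n_j and n_a.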

19.
In risk assessment, it is often desired to make inferences on the low dose levels at which a specific benchmark risk is attained. Applications of simultaneous hyperbolic confidence bands for low-dose risk estimation with quantal data under different dose-response models (multistage, Abbott-adjusted Weibull, and Abbott-adjusted log-logistic models) have appeared in the literature. The use of simultaneous three-segment bands under the multistage model has also been proposed recently. In this article, we present explicit formulas for constructing asymptotic one-sided simultaneous hyperbolic and three-segment bands for the simple log-logistic regression model. We use the simultaneous construction to estimate upper hyperbolic and three-segment confidence bands on extra risk and to obtain lower limits on the benchmark dose by inverting the upper bands on risk under the Abbott-adjusted log-logistic model. Monte Carlo simulations evaluate the characteristics of the simultaneous limits. An example is given to illustrate the use of the proposed methods and to compare the two types of simultaneous limits at very low dose levels.
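As a rough illustration of a hyperbolic simultaneous band (a two-sided Scheffe-type construction, simpler than the one-sided bands derived in the article), for a two-parameter linear predictor eta(x) = b0 + b1*x with illustrative estimates and covariance:

```python
import numpy as np
from scipy import stats

# illustrative estimates for eta(x) = b0 + b1 * x, e.g. a log-logistic
# linear predictor on log dose (values are assumptions, not from the article)
beta = np.array([-2.0, 0.8])
cov = np.array([[0.040, -0.010],
                [-0.010, 0.010]])

# Scheffe-type critical value: valid simultaneously over all x (two-sided)
crit = np.sqrt(stats.chi2.ppf(0.95, df=2))

def band(x):
    g = np.array([1.0, x])
    eta = float(g @ beta)
    se = float(np.sqrt(g @ cov @ g))
    return eta - crit * se, eta + crit * se   # width is hyperbolic in x
```

The critical value exceeds the pointwise 1.96, which is the price of simultaneity; the article's one-sided and three-segment constructions sharpen this at the low-dose end where benchmark-dose inversion happens.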

20.
Measurement error in a continuous test variable may bias estimates of the summary properties of receiver operating characteristic (ROC) curves. Typically, unbiased measurement error will reduce the diagnostic potential of a continuous test variable. This paper explores the effects of possibly heterogeneous measurement error on estimated ROC curves for binormal test variables. Corrected estimators for specific points on the curve are derived under the assumption of known or estimated measurement variances for individual test results. These estimators and associated confidence intervals do not depend on normality assumptions for the distribution of the measurement error and are shown to be approximately unbiased for moderate-sized samples in a simulation study. An application from a study of emerging imaging modalities in breast cancer is used to demonstrate the new techniques.
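The attenuation effect described above can be illustrated for the binormal AUC: measurement error inflates the within-group variances, shrinking the standardized separation of cases and controls. The numbers below are illustrative, and the moment-style correction (subtracting a known error variance) is a simplified stand-in for the paper's estimators:

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def binormal_auc(mu0, mu1, var0, var1):
    # AUC = Phi((mu1 - mu0) / sqrt(var0 + var1)) under the binormal model
    return norm_cdf((mu1 - mu0) / math.sqrt(var0 + var1))

# true test variable: means 0 and 1, unit variances; measurement error
# adds var_e to each observed variance
var_e = 0.5
auc_naive = binormal_auc(0.0, 1.0, 1.0 + var_e, 1.0 + var_e)   # attenuated
auc_corrected = binormal_auc(0.0, 1.0, 1.0, 1.0)               # error removed
```

With heterogeneous error, var_e differs across results, which is why the paper works with individual measurement variances rather than a single common correction.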


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号