Similar articles
20 similar articles found (search time: 31 ms)
1.
Cheng Y, Shen Y. Biometrics 2004, 60(4):910-918
For confirmatory trials used in regulatory decision making, it is important that the adaptive designs under consideration provide inference at the correct nominal level, as well as unbiased estimates and confidence intervals for the treatment comparisons in the actual trials. However, the naive point estimate and its confidence interval are often biased in adaptive sequential designs. We develop a new procedure for estimation following a test from a sample size reestimation design. The method for obtaining an exact confidence interval and point estimate is based on a general distribution property of a pivot function of the self-designing group sequential clinical trial of Shen and Fisher (1999, Biometrics 55, 190-197). A modified estimate is proposed that explicitly accounts for the futility stopping boundary, with reduced bias when block sizes are small. The proposed estimates are shown to be consistent, and their computation is straightforward. We also provide a modified weight function to improve the power of the test. Extensive simulation studies show that the exact confidence intervals have accurate nominal coverage probability and that the proposed point estimates are nearly unbiased at practical sample sizes.

2.
Environmental threats, such as habitat size reduction or environmental pollution, may not cause immediate extinction of a population but may shorten the expected time to extinction. We developed a method to estimate the mean time to extinction for a density-dependent population with environmental fluctuation and to compare the impacts of different risk factors. We first derived a formula for the mean extinction time of a population with logistic growth and environmental and demographic stochasticities, expressed as a stochastic differential equation model (the canonical model). The relative importance of different risk factors is evaluated by the decrease in the mean extinction time. We then derived an approximate formula for the reduction in habitat size that enhances extinction risk by the same magnitude as a given decrease in survivorship caused by toxic chemical exposure. In a large population (large K) or in a slowly growing population (small r), a small decrease in survivorship can increase the extinction risk as much as a significant reduction in habitat size. Finally, we studied an approximate maximum likelihood estimate of three parameters (intrinsic growth rate r, carrying capacity K, and environmental stochasticity σ²e) from time series data. By Monte Carlo sampling, we can remove the bias very effectively and determine the confidence interval. We discuss how the reliability of the estimate changes with the length of the time series. If the intrinsic rate of population growth r is known, the mean extinction time can be estimated quite accurately even when only a short time series is available for parameter estimation. Received: March 31, 1999 / Accepted: November 9, 1999
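The canonical model described above is a stochastic differential equation; a minimal Euler-Maruyama sketch of the mean-extinction-time calculation follows. It keeps only the environmental noise term, omits demographic stochasticity, and uses illustrative parameter values that are not taken from the paper:

```python
import math
import random

def mean_extinction_time(r=0.5, K=100.0, sigma_e=0.3, n0=50.0,
                         dt=0.01, t_max=500.0, n_reps=200, seed=1):
    """Euler-Maruyama simulation of dN = r*N*(1 - N/K)*dt + sigma_e*N*dW.

    Returns the mean time at which the population first drops below one
    individual, censored at t_max. Demographic noise is omitted for brevity.
    """
    rng = random.Random(seed)
    times = []
    for _ in range(n_reps):
        n, t = n0, 0.0
        while n >= 1.0 and t < t_max:      # extinction when N falls below 1
            dw = rng.gauss(0.0, math.sqrt(dt))
            n += r * n * (1.0 - n / K) * dt + sigma_e * n * dw
            t += dt
        times.append(t)
    return sum(times) / len(times)
```

A full implementation would add the demographic noise term and average over many more replicates; this sketch only shows the structure of the simulation and how stronger environmental fluctuation shortens the mean extinction time.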

3.
4.
Problems involving thousands of null hypotheses have been addressed by estimating the local false discovery rate (LFDR). A previous LFDR approach to reporting point and interval estimates of an effect-size parameter uses an estimate of the prior distribution of the parameter conditional on the alternative hypothesis. That estimated prior is often unreliable, and yet strongly influences the posterior intervals and point estimates, causing the posterior intervals to differ from fixed-parameter confidence intervals, even for arbitrarily small estimates of the LFDR. That influence of the estimated prior manifests the failure of the conditional posterior intervals, given the truth of the alternative hypothesis, to match the confidence intervals. Those problems are overcome by changing the posterior distribution conditional on the alternative hypothesis from a Bayesian posterior to a confidence posterior. Unlike the Bayesian posterior, the confidence posterior equates the posterior probability that the parameter lies in a fixed interval with the coverage rate of the coinciding confidence interval. The resulting confidence-Bayes hybrid posterior supplies interval and point estimates that shrink toward the null hypothesis value. The confidence intervals tend to be much shorter than their fixed-parameter counterparts, as illustrated with gene expression data. Simulations nonetheless confirm that the shrunken confidence intervals cover the parameter more frequently than stated. Generally applicable sufficient conditions for correct coverage are given. In addition to having those frequentist properties, the hybrid posterior can also be motivated from an objective Bayesian perspective by requiring coherence with some default prior conditional on the alternative hypothesis. That requirement generates a new class of approximate posteriors that supplement Bayes factors modified for improper priors and that dampen the influence of proper priors on the credibility intervals. While that class of posteriors intersects the class of confidence-Bayes posteriors, neither class is a subset of the other. In short, two first principles generate both classes of posteriors: a coherence principle and a relevance principle. The coherence principle requires that all effect size estimates comply with the same probability distribution. The relevance principle means effect size estimates given the truth of an alternative hypothesis cannot depend on whether that truth was known prior to observing the data or whether it was learned from the data.

5.
ESTIMATED POPULATION SIZE OF THE CALIFORNIA GRAY WHALE   Cited by: 1 (self-citations: 0, citations by others: 1)
Abstract: The 1987-1988 counts of gray whales passing Monterey are reanalyzed to provide a revised population size estimate. The double count data are modeled using iterative logistic regression to allow for the effects of various covariates on probability of detection, and a correction factor is introduced for night rate of travel. The revised absolute population size estimate is 20,869 animals, with CV = 4.37% and 95% confidence interval (19,200, 22,700). In addition, the series of relative population size estimates from 1967-1968 to 1987-1988 is scaled to pass through this estimate and modeled to provide variance estimates from interannual variation in population size estimates. This method yields an alternative population size estimate for 1987-1988 of 21,296 animals, with CV = 6.05% and 95% confidence interval (18,900, 24,000). The average annual rate of increase between 1967-1968 and 1987-1988 was estimated to be 3.29% with standard error 0.44%.

6.
Benichou J, Gail MH. Biometrics 1990, 46(4):991-1003
The attributable risk (AR), defined as AR = [Pr(disease) - Pr(disease | no exposure)]/Pr(disease), measures the proportion of disease risk that is attributable to an exposure. Recently, Bruzzi et al. (1985, American Journal of Epidemiology 122, 904-914) presented point estimates of AR based on logistic models for case-control data to allow for confounding factors and secondary exposures. To produce confidence intervals, we derived variance estimates for AR under the logistic model and for various designs for sampling controls. Calculations for discrete exposure and confounding factors require covariances between estimates of the risk parameters of the logistic model and the proportions of cases with given levels of exposure and confounding factors. These covariances are estimated from Taylor series expansions applied to implicit functions. Similar calculations for continuous exposures are derived using influence functions. Simulations indicate that these asymptotic procedures yield reliable variance estimates and confidence intervals with near-nominal coverage. An example illustrates the usefulness of the variance calculations in selecting a logistic model that is neither so simplified as to exhibit systematic lack of fit nor so complicated as to inflate the variance of the estimate of AR.
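The AR definition above is directly computable from the two probabilities; a small sketch with made-up risks (not data from the paper):

```python
def attributable_risk(p_disease, p_disease_unexposed):
    """AR = [Pr(disease) - Pr(disease | no exposure)] / Pr(disease)."""
    return (p_disease - p_disease_unexposed) / p_disease

# Illustrative numbers: overall disease risk 5%, risk among the
# unexposed 2%, so 60% of the disease risk is attributable to exposure.
ar = attributable_risk(0.05, 0.02)   # ~0.6
```

The paper's contribution is not this point estimate but its variance under the logistic model, which requires the covariance calculations described in the abstract.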

7.
Statistical models support medical research by facilitating individualized outcome prognostication conditional on independent variables or by estimating effects of risk factors adjusted for covariates. The theory of statistical models is well established if the set of independent variables to consider is fixed and small. Hence, we can assume that effect estimates are unbiased and the usual methods for confidence interval estimation are valid. In routine work, however, it is not known a priori which covariates should be included in a model, and often we are confronted with 10-30 candidate variables. This number is often too large to be considered in a statistical model. We provide an overview of various available variable selection methods that are based on significance or information criteria, penalized likelihood, the change-in-estimate criterion, background knowledge, or combinations thereof. These methods were usually developed in the context of a linear regression model and then transferred to more generalized linear models or models for censored survival data. Variable selection, in particular if used in explanatory modeling where effect estimates are of central interest, can compromise the stability of a final model, the unbiasedness of regression coefficients, and the validity of p-values or confidence intervals. Therefore, we give pragmatic recommendations for the practicing statistician on the application of variable selection methods in general (low-dimensional) modeling problems and on performing stability investigations and inference. We also propose some quantities, based on resampling the entire variable selection process, to be routinely reported by software packages offering automated variable selection algorithms.

8.
Knapp M. Human Heredity 2008, 66(2):111-121
Two approaches are described for estimating relative risks from significant family-based association studies. They can be used to obtain either point estimates or confidence regions. The approaches are evaluated in a simulation study and illustrated by application to a real data set. It is shown that both approaches largely reduce the bias in relative risk estimates that can occur when the significant outcome of the study from which the relative risks are estimated is ignored.

9.
Multipoint linkage analysis is a powerful method for mapping a rare disease gene on the human gene map despite limited genotype and pedigree data. However, there is no standard procedure for determining a confidence interval for gene location by using multipoint linkage analysis. A genetic counselor needs to know the confidence interval for gene location in order to determine the uncertainty of risk estimates provided to a consultant on the basis of DNA studies. We describe a resampling, or "bootstrap," method for deriving an approximate confidence interval for gene location on the basis of data from a single pedigree. This method was used to define an approximate confidence interval for the location of a gene causing nonsyndromal X-linked mental retardation in a single pedigree. The approach seemed robust in that similar confidence intervals were derived by using different resampling protocols. Quantitative bounds for the confidence interval were dependent on the genetic map chosen. Once an approximate confidence interval for gene location was determined for this pedigree, it was possible to use multipoint risk analysis to estimate risk intervals for women of unknown carrier status. Despite the limited genotype data, the combination of the resampling method and multipoint risk analysis had a dramatic impact on the genetic advice available to consultants.
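The resampling idea in this abstract is the bootstrap. Applied to gene mapping it resamples pedigree data, but the generic percentile form can be sketched as follows; this is a sketch under the usual i.i.d.-resampling assumption, not the authors' pedigree-specific protocol:

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic.

    Resamples the data with replacement, recomputes the statistic for
    each resample, and takes the alpha/2 and 1 - alpha/2 empirical
    quantiles of the bootstrap distribution.
    """
    rng = random.Random(seed)
    reps = sorted(
        stat([rng.choice(data) for _ in range(len(data))])
        for _ in range(n_boot)
    )
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

The abstract's observation that different resampling protocols gave similar intervals is a practical robustness check on exactly this kind of procedure.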

10.
In this article, we provide a method of estimation for the treatment effect in the adaptive design for censored survival data, with or without adjusting for risk factors other than the treatment indicator. Within the semiparametric Cox proportional hazards model, we propose a bias-adjusted parameter estimator for the treatment coefficient and its asymptotic confidence interval at the end of the trial. The method for obtaining an asymptotic confidence interval and point estimator is based on a general distribution property of the final test statistic formed from the weighted linear rank statistics at the interim analyses, with or without considering the nuisance covariates. The computation of the estimates is straightforward. Extensive simulation studies show that the asymptotic confidence intervals have coverage close to the nominal probability, and the proposed point estimators are nearly unbiased at practical sample sizes.

11.
Abstract: Sightability models have been used to estimate population size of many wildlife species; however, a limitation of these models is the assumption that groups of animals observed and counted during aerial surveys are enumerated completely. Replacing these unknown counts with maximum observed counts, as is typically done, produces population size estimates that are negatively biased. This bias can be substantial, depending on the degree of undercounting. We first investigated a method-of-moments estimator of group sizes. We then defined a population size estimator that uses the method-of-moments estimator of group sizes in place of maximum counts in the traditional sightability models, thereby correcting for the bias associated with undercounting group size. We also provide the associated equations for calculating the variance of our estimator. This estimator is an improvement over existing sightability model techniques because it significantly reduces bias, and its variance estimates provide near-nominal confidence interval coverage. The data needed for this estimator can be easily collected and implemented by wildlife managers with a field crew of only three individuals and little additional flight or personnel time beyond the normal requirements for developing sightability models.

12.
13.
Precise measures of population abundance and trend are needed for species conservation; these are most difficult to obtain for rare and rapidly changing populations. We compare uncertainty in densities estimated from spatio-temporal models with that from standard design-based methods. Spatio-temporal models allow us to target priority areas where, and at times when, a population may most benefit. Generalised additive models were fitted to a 31-year time series of point-transect surveys of an endangered Hawaiian forest bird, the Hawai‘i ‘ākepa Loxops coccineus. This allowed us to estimate bird densities over space and time. We used two methods to quantify uncertainty in density estimates from the spatio-temporal model: the delta method (which assumes independence between detection and distribution parameters) and a variance propagation method. With the delta method we observed a 52% decrease in the width of the design-based 95% confidence interval (CI), while we observed a 37% decrease in CI width when propagating the variance. We mapped bird densities as they changed across space and time, allowing managers to evaluate management actions. Integrating detection function modelling with spatio-temporal modelling exploits survey data more efficiently by producing finer-grained abundance estimates than are possible with design-based methods, as well as producing more precise abundance estimates. Model-based approaches require switching from making assumptions about the survey design to assumptions about bird distribution. Such a switch warrants consideration. In this case the model-based approach benefits conservation planning through improved management efficiency and reduced costs by taking into account both spatial shifts and temporal changes in population abundance and distribution.

14.
Recombination is a fundamental evolutionary force. The population recombination rate ρ therefore plays an important role in the analysis of population genetic data; however, it is notoriously difficult to estimate. This difficulty applies both to the accuracy of commonly used estimates and to the computational effort required to obtain them. Some particularly popular methods are based on approximations to the likelihood. They require considerably less computational effort than full-likelihood methods, with little loss of accuracy. Nevertheless, computing these approximate estimates can still be very time consuming, particularly when the sample size is large. Although auxiliary quantities for composite likelihood estimates can be computed in advance and stored in tables, these tables need to be recomputed if either the sample size or the mutation rate θ changes. Here we introduce a new method based on regression combined with boosting as a model selection technique. For large samples, it requires much less computational effort than other approximate methods while providing similar levels of accuracy. Notably, for a sample of hundreds or thousands of individuals, the estimate of ρ using regression can be obtained on a single personal computer within a couple of minutes, while other methods may need days or months (or even years). When the sample size is smaller (n ≤ 50), our new method remains computationally efficient but produces biased estimates. We expect the new estimates to be helpful when analyzing large samples and/or many loci with possibly different mutation rates.

15.
Abstract: Mountain Plover (Charadrius montanus) populations are inefficiently sampled by Breeding Bird Surveys. As a result, targeted sampling of select populations of this species (with an estimated global population of 11,000–14,000 birds) can be valuable. Our objectives were to determine the breeding distribution and estimate the size of the Mountain Plover population in Oklahoma. We conducted a randomized point count survey in an area where Mountain Plovers were previously known to breed and conducted additional surveys over a larger area to better delimit the distribution. We used a removal model to estimate detection probability for raw counts obtained from 1104 point counts in 2004 and 2005, and derived a state-level population estimate using the detection-adjusted counts. Mountain Plovers used flat, bare, cultivated fields for nesting, and their distribution was closely tied to the presence of clay loam soils. We estimated that at least 68–91 Mountain Plovers bred in Oklahoma in 2004–2005. The low breeding density we observed may be due to the location of our study area near the southeastern edge of the breeding range of these plovers, the low-quality habitat provided by cultivated landscapes, or a combination of factors. Because the number of birds is small, the status of the Oklahoma population is not likely to have a large effect on the global population. However, additional information is needed to help determine whether cultivated landscapes represent population sources or sinks.
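The removal model mentioned above infers detection probability from how the number of first detections declines across successive intervals of a point count. A minimal two-occasion (Zippin-type) sketch with hypothetical counts, not the study's actual data or model:

```python
def removal_estimate(n1, n2):
    """Two-occasion removal (Zippin-type) estimator.

    n1, n2: counts of first detections in two equal, consecutive
    count intervals. Returns (N_hat, p_hat): the estimated number of
    birds present and the per-interval detection probability.
    Requires n1 > n2 (detections must decline).
    """
    if n1 <= n2:
        raise ValueError("removal estimator needs n1 > n2")
    p_hat = (n1 - n2) / n1            # fraction removed in interval 1
    n_hat = n1 * n1 / (n1 - n2)       # detection-adjusted abundance
    return n_hat, p_hat
```

For example, 60 first detections in the first interval and 20 in the second imply a per-interval detection probability of 2/3 and about 90 birds present, illustrating how a detection-adjusted count exceeds the raw count of 80.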

16.
Schweder T. Biometrics 2003, 59(4):974-983
Maximum likelihood estimates of abundance are obtained from repeated photographic surveys of a closed stratified population with naturally marked and unmarked individuals. Capture intensities are assumed log-linear in stratum, year, and season. In the chosen model, an approximate confidence distribution for total abundance of bowhead whales, with an accompanying likelihood reduced of nuisance parameters, is found from a parametric bootstrap experiment. The confidence distribution depends on the assumed study protocol. A confidence distribution that is exact (except for the effect of discreteness) is found by conditioning in the unstratified case without unmarked individuals.

17.
Ro S, Rannala B. Genetics 2007, 177(1):9-16
A new method is developed for estimating rates of somatic mutation in vivo. The stop-enhanced green fluorescent protein (EGFP) transgenic mouse carries multiple copies of an EGFP gene with a premature stop codon. The gene can revert to a functional form via point mutations. Mice treated with a potent mutagen, N-ethyl-N-nitrosourea (ENU), and mice treated with a vehicle alone are assayed for mutations in liver cells. A stochastic model is developed for the mutation and gene expression processes, and maximum-likelihood estimators of the model parameters are derived. A likelihood-ratio test (LRT) is developed for detecting mutagenicity. Parametric bootstrap simulations are used to obtain confidence intervals for the parameter estimates and to estimate the significance of the LRT. The LRT is highly significant (α < 0.01), and the 95% confidence interval for the relative effect of the mutagen (the ratio of the mutation rate during the interval of mutagen exposure to the background mutation rate) ranges from a minimum 200-fold effect to a maximum 2000-fold effect.
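Estimating the significance of an LRT by parametric bootstrap, as the abstract describes, has a simple generic form: simulate data sets under the fitted null model, recompute the LRT statistic on each, and compare with the observed statistic. This sketch assumes user-supplied `simulate_null` and `fit_lrt` callables and is not the authors' stop-EGFP-specific code:

```python
import random

def bootstrap_lrt_pvalue(lrt_obs, simulate_null, fit_lrt, n_boot=500, seed=0):
    """Parametric-bootstrap p-value for a likelihood-ratio test.

    simulate_null(rng) draws one data set under the null model;
    fit_lrt(data) returns the LRT statistic for that data set.
    """
    rng = random.Random(seed)
    exceed = sum(fit_lrt(simulate_null(rng)) >= lrt_obs
                 for _ in range(n_boot))
    return (exceed + 1) / (n_boot + 1)   # add-one correction avoids p = 0
```

As a toy check, for a normal-mean test with known unit variance the LRT statistic is n times the squared sample mean, and a large observed statistic yields a small bootstrap p-value.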

18.
Mandal S, Qin J, Pfeiffer RM. Biometrics 2023, 79(3):1701-1712
We propose and study a simple and innovative non-parametric approach to estimating the age-of-onset distribution of a disease from a cross-sectional sample of the population that includes individuals with prevalent disease. First, we estimate the joint distribution of two event times: the age of disease onset and the survival time after disease onset. We account for the requirement that individuals be alive at the time of the study by conditioning on their survival until the age at sampling. We propose a computationally efficient expectation-maximization (EM) algorithm and derive the asymptotic properties of the resulting estimates. From these joint probabilities we then obtain non-parametric estimates of the age-at-onset distribution by marginalizing over the survival time from disease onset to death. The method accommodates categorical covariates and can be used to obtain unbiased estimates of the covariate distribution in the source population. We show in simulations that our method performs well in finite samples, even under large amounts of truncation for prevalent cases. We apply the proposed method to data from female participants in the Washington Ashkenazi Study to estimate the age-at-onset distribution of breast cancer associated with carrying BRCA1 or BRCA2 mutations.

19.
Researchers are often interested in predicting outcomes, detecting distinct subgroups of their data, or estimating causal treatment effects. Pathological data distributions that exhibit skewness and zero-inflation complicate these tasks, requiring highly flexible, data-adaptive modeling. In this paper, we present a multipurpose Bayesian nonparametric model for continuous, zero-inflated outcomes that simultaneously predicts structural zeros, captures skewness, and clusters patients with similar joint data distributions. The flexibility of our approach yields predictions that capture the joint data distribution better than commonly used zero-inflated methods. Moreover, we demonstrate that our model can be coherently incorporated into a standardization procedure for computing causal effect estimates that are robust to such data pathologies. Uncertainty at all levels of this model flows through to the causal effect estimates of interest, allowing easy point estimation, interval estimation, and posterior predictive checks verifying positivity, a required causal identification assumption. Our simulation results show point estimates to have low bias and interval estimates to have close to nominal coverage under complicated data settings. Under simpler settings, these results hold while incurring lower efficiency loss than comparator methods. We use our proposed method to analyze zero-inflated inpatient medical costs among endometrial cancer patients receiving either chemotherapy or radiation therapy in the SEER-Medicare database.

20.
The laboratory reports test results as information needed to make clinical decisions. Traditional statistical quality control measures, which assign reference ranges based on 95 percent confidence intervals, are insufficient for diagnostic tests that assign risk. We construct a basis for risk assignment by a method that builds on the 2 x 2 contingency table used to calculate the χ² goodness-of-fit statistic and Bayesian estimates. The widely used logistic regression is a special case of this regression method, as it considers only dichotomous outcome choices. We use examples of multivalued predictor(s) and both multivalued and dichotomous outcomes. Outcome analyses are quite easy using the ordinal logit regression model.
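The 2 x 2 contingency table underlying the goodness-of-fit calculation can be summarised by the familiar Pearson chi-square statistic; a minimal sketch with illustrative counts, not the article's data:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2 x 2 contingency table
    [[a, b], [c, d]] (e.g. test result by outcome), without continuity
    correction: chi2 = n*(a*d - b*c)^2 / ((a+b)*(c+d)*(a+c)*(b+d)).
    """
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return n * (a * d - b * c) ** 2 / denom
```

For the table [[30, 10], [10, 30]] this gives a statistic of 20, well beyond the 3.84 critical value of χ² with one degree of freedom at the 5% level, while a table with identical rows gives 0.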
