Similar Literature
20 similar documents found (search time: 15 ms)
1.
2.
Ronald A. Fisher, the founder of maximum likelihood (ML) estimation, criticized Bayes estimation with a uniform prior distribution, because Bayes estimates can be made arbitrary by changing the transformation applied before the analysis. Thus, Bayes estimates lack scientific objectivity, especially when the amount of data is small. However, Bayes estimates can serve as an approximation to the objective ML estimates if we use an appropriate transformation that makes the posterior distribution close to a normal distribution. A one-to-one correspondence exists between a uniform prior distribution on a transformed scale and a non-uniform prior distribution on the original scale. For this reason, Bayes approximation of the ML estimates is essentially identical to estimation using the Jeffreys prior.
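That correspondence between a uniform prior on a transformed scale and the Jeffreys prior can be checked numerically for the simplest case, a Bernoulli parameter p with the variance-stabilizing transform arcsin(sqrt(p)); the sketch below is illustrative, not code from the paper:

```python
import math

def jeffreys_density(p):
    # Jeffreys prior for a Bernoulli parameter, normalized:
    # (1/pi) * p^(-1/2) * (1-p)^(-1/2)
    return 1.0 / (math.pi * math.sqrt(p * (1.0 - p)))

def induced_density(p):
    # Density on p induced by a uniform prior on phi = arcsin(sqrt(p)),
    # phi in [0, pi/2]: f(p) = (2/pi) * |d phi / d p|
    dphi_dp = 1.0 / (2.0 * math.sqrt(p * (1.0 - p)))
    return (2.0 / math.pi) * dphi_dp

for p in (0.1, 0.3, 0.5, 0.9):
    assert abs(jeffreys_density(p) - induced_density(p)) < 1e-12
```

The change-of-variables factor dphi/dp is exactly proportional to the Jeffreys density, so the two expressions agree at every p.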

3.
We investigated the utility of adaptive management (AM) in wildlife management, reviewing our experiences in applying AM to overabundant sika deer (Cervus nippon) populations in Hokkaido, Japan. The management goals of our program were: (1) to maintain the population at moderate density levels preventing population irruption, (2) to reduce damage to crops and forests, and (3) to sustain a moderate yield of hunting without endangering the population. Because of significant uncertainty in biological and environmental parameters, we designed a “feedback” management program based on controlling hunting pressure. Three threshold levels of relative population size and four levels of hunting pressure were configured, with a choice of four corresponding management actions. Under this program, the Hokkaido Government has been promoting aggressive female culling to reduce the sika deer population since 1998. We devised a harvest-based estimation for population size using relative population size and the number of deer harvested, and found that the 1993 population size (originally estimated by extrapolation of aerial surveys) had been underestimated. To reduce observation errors, a harvest-based Bayesian estimation was developed and the 1993 population estimate was again revised. Analyses of population trends and harvest data demonstrate that hunting is an important large-scale experiment to obtain reliable estimation of population size. A serious side effect of hunting on sika deer was inadvertent lead poisoning of large birds of prey. The prohibition of the use of lead bullets by the Hokkaido Government was successful in reducing the lead poisoning, but the problem still remains. Two case studies on sika population irruption show that the densities set by maximum sustainable yield may be too high to prevent damage to agriculture, forestry, and/or ecosystems. Threshold management based on feedback control is better for ecosystem management. 
Since volunteer hunters favor higher hunting efficiency in resource management (e.g., venison), it is necessary to support the development of professional hunters for culling operations for ecosystem management, where lower densities of deer should be set for target areas. Hunting as resource management and culling for ecosystem management should be synergistically combined under AM.

4.
This article describes the construction of a general computer program for the iterative calculation of maximum likelihood estimators. The program is general in the sense that it allows the maximization of any given likelihood function. The user only has to write a subroutine, LKLHD, in which the specific likelihood function and its first and second derivatives are calculated. This subroutine is an input parameter of the optimization program, which enables the user to employ one main program for the maximization of various likelihood functions. This advantage is demonstrated for the evaluation of qualitative dose-response relationships (quantal assays: probit and logit analysis).
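The same design can be sketched in Python (the paper's program uses a compiled-language subroutine; all names and numbers here are illustrative): a generic Newton-Raphson maximizer takes the log-likelihood and its derivatives as arguments, so one driver serves any one-parameter likelihood. The example applies it to a Poisson likelihood, whose closed-form MLE is the sample mean.

```python
import math

def maximize(loglik, score, hessian, theta0, tol=1e-10, max_iter=100):
    """Generic Newton-Raphson maximizer: the caller supplies the
    log-likelihood and its first and second derivatives, analogous
    to the user-written LKLHD subroutine in the abstract."""
    theta = theta0
    for _ in range(max_iter):
        step = score(theta) / hessian(theta)
        theta -= step
        if abs(step) < tol:
            break
    return theta

# Poisson MLE for rate lam given counts x; the closed-form answer is
# the sample mean, which the iteration should recover.
x = [2, 3, 1, 4, 0, 2]
n, s = len(x), sum(x)
loglik = lambda lam: s * math.log(lam) - n * lam
score = lambda lam: s / lam - n
hessian = lambda lam: -s / lam ** 2
lam_hat = maximize(loglik, score, hessian, theta0=1.0)
assert abs(lam_hat - s / n) < 1e-8
```

Swapping in a probit or logit likelihood only requires replacing the three supplied functions, which is the modularity the abstract describes.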

5.
6.
This paper provides asymptotic simultaneous confidence intervals for a success probability and intraclass correlation of the beta-binomial model, based on the maximum likelihood estimator approach. The coverage probabilities of those intervals are evaluated. An application to screening mammography is presented as an example. The individual and simultaneous confidence intervals for sensitivity and specificity and the corresponding intraclass correlations are investigated. Two additional examples using influenza data and sex ratio data among sibships are also considered, where the individual and simultaneous confidence intervals are provided.
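As a simplified sketch of the underlying idea (not the paper's asymptotic construction; the function name and all numbers are illustrative): under the beta-binomial model, the binomial variance of a success probability is inflated by the design effect 1 + (m - 1) * rho implied by the intraclass correlation, which a Wald-type interval must account for.

```python
import math

def beta_binomial_wald_ci(p_hat, rho_hat, m, n_clusters, z=1.96):
    """Wald-type CI for a success probability under the beta-binomial
    model: the binomial variance p(1-p)/(n*m) is inflated by the
    design effect 1 + (m-1)*rho for clusters of size m.
    Illustrative sketch only."""
    deff = 1.0 + (m - 1) * rho_hat
    se = math.sqrt(p_hat * (1.0 - p_hat) * deff / (n_clusters * m))
    return p_hat - z * se, p_hat + z * se

lo, hi = beta_binomial_wald_ci(0.8, 0.1, m=5, n_clusters=40)
assert lo < 0.8 < hi
```

With rho = 0 the design effect is 1 and the interval collapses to the ordinary binomial Wald interval.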

7.
Coral reef species are frequently the focus of bio-prospecting, and when promising bioactive compounds are identified there is often a need for the development of responsible harvesting based on relatively limited data. The Caribbean gorgonian Pseudopterogorgia elisabethae has been harvested in the Bahamas for over a decade. Data on population age structure and growth rates in conjunction with harvest data provide an opportunity to compare fishery practices and outcomes to those suggested by a Beverton-Holt fishery model. The model suggests a minimum colony size limit of 7–9 years of age (21–28 cm height), which would allow each colony 2–4 years of reproduction prior to harvesting. The Beverton-Holt model assumes that colonies at or above the minimum size limit are completely removed. In the P. elisabethae fishery, colonies are partially clipped and can be repeatedly harvested. Linear growth of surviving colonies was up to 3 times that predicted for colonies that were not harvested and biomass increase was up to 9 times greater than that predicted for undisturbed colonies. The survival of harvested colonies and compensatory growth increases yield, and yields at sites that had previously been harvested were generally greater than predicted by the Beverton-Holt model. The model also assumes recruitment is independent of fishing intensity, but lower numbers of young colonies in the fished populations, compared to unfished populations, suggest possible negative effects of the harvest on reproduction. This suggests the need for longer intervals between harvests. Because it can be developed from data that can be collected at a single time, the Beverton-Holt model provides a rational starting point for regulating new fisheries where long-term characterizations of population dynamics are rarely available. However, an adaptive approach to the fishery requires the incorporation of reproductive data.
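How a Beverton-Holt-style analysis yields a minimum harvest age can be sketched with a toy yield-per-recruit calculation (all parameter values are made up, not from the paper): a cohort suffers natural mortality at every age and fishing mortality only above the minimum harvest age, while individual weight grows along a cubed von Bertalanffy curve; a grid search then picks the harvest age that maximizes yield.

```python
import math

def yield_per_recruit(t_c, M=0.2, F=0.5, t_max=15, L_inf=30.0, k=0.3):
    """Toy yield-per-recruit in the Beverton-Holt spirit: natural
    mortality M at all ages, fishing mortality F only at ages >= t_c,
    weight proportional to a cubed von Bertalanffy length curve.
    Illustrative parameters, not the paper's."""
    y = 0.0
    n = 1.0  # cohort size, starting from one recruit
    for t in range(t_max):
        w = (L_inf * (1.0 - math.exp(-k * (t + 0.5)))) ** 3  # weight proxy
        f = F if t >= t_c else 0.0
        z = M + f
        # catch over the year (Baranov catch equation)
        y += (f / z) * n * (1.0 - math.exp(-z)) * w if f > 0 else 0.0
        n *= math.exp(-z)
    return y

# minimum harvest age that maximizes yield per recruit, by grid search
best_tc = max(range(16), key=yield_per_recruit)
```

Harvesting too early removes small individuals before they grow; too late, and natural mortality claims them first, which is the trade-off behind the 7–9 year size limit in the abstract.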

8.
A sensitivity analysis of general stoichiometric networks is considered. The results are presented as a generalization of Metabolic Control Analysis, which has been concerned primarily with system sensitivities at steady state. An expression for time-varying sensitivity coefficients is given and the Summation and Connectivity Theorems are generalized. The results are compared to previous treatments. The analysis is accompanied by a discussion of the computation of the sensitivity coefficients and an application to a model of phototransduction.
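Time-varying sensitivity coefficients of the kind discussed here can be approximated by finite differences on the solution trajectory; a toy sketch (not the paper's method) for the one-reaction system dS/dt = -k*S, whose sensitivity dS(t)/dk is known in closed form:

```python
import math

def s_traj(k, s0=1.0, t=2.0):
    # closed-form solution of dS/dt = -k*S
    return s0 * math.exp(-k * t)

def sensitivity_fd(k, h=1e-6, **kw):
    # central finite-difference approximation of dS(t)/dk
    return (s_traj(k + h, **kw) - s_traj(k - h, **kw)) / (2.0 * h)

k, t, s0 = 0.7, 2.0, 1.0
analytic = -t * s0 * math.exp(-k * t)  # d/dk [s0 * exp(-k*t)]
assert abs(sensitivity_fd(k) - analytic) < 1e-6
```

For realistic networks the trajectory comes from a numerical ODE solver rather than a closed form, but the time-varying coefficient has the same meaning: the response of a concentration at time t to a parameter perturbation.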

9.
In many studies, the aim is to learn about the direct exposure effect, that is, the effect not mediated through an intermediate variable. For example, in circulation disease studies it may be of interest to assess whether a suitable level of physical activity can prevent disease, even if it fails to prevent obesity. It is well known that stratification on the intermediate may introduce a so-called posttreatment selection bias. To handle this problem, we use the framework of principal stratification (Frangakis and Rubin, 2002, Biometrics 58, 21–29) to define a causally relevant estimand: the principal stratum direct effect (PSDE). The PSDE is not identified in our setting. We propose a method of sensitivity analysis that yields a range of plausible values for the causal estimand. We compare our work to similar methods proposed in the literature for handling the related problem of "truncation by death."

10.
Quantifying the influence of weather on yield variability is decisive for agricultural management under current and future climate anomalies. We extended an existing semiempirical modeling scheme that allows for such quantification. Yield anomalies, measured as interannual differences, were modeled for maize, soybeans, and wheat in the United States and 32 other main producer countries. We used two yield data sets, one derived from reported yields and the other from a global yield data set deduced from remote sensing. We assessed the capacity of the model to forecast yields within the growing season. In the United States, our model can explain at least two‐thirds (63%–81%) of observed yield anomalies. Its out‐of‐sample performance (34%–55%) suggests a robust yield projection capacity when applied to unknown weather. Out‐of‐sample performance is lower when using remote sensing‐derived yield data. The share of weather‐driven yield fluctuation varies spatially, and estimated coefficients agree with expectations. Globally, the explained variance in yield anomalies based on the remote sensing data set is similar to the United States (71%–84%). But the out‐of‐sample performance is lower (15%–42%). The performance discrepancy is likely due to shortcomings of the remote sensing yield data as it diminishes when using reported yield anomalies instead. Our model allows for robust forecasting of yields up to 2 months before harvest for several main producer countries. An additional experiment suggests moderate yield losses under mean warming, assuming no major changes in temperature extremes. We conclude that our model can detect weather influences on yield anomalies and project yields with unknown weather. It requires only monthly input data and has a low computational demand. Its within‐season yield forecasting capacity provides a basis for practical applications like local adaptation planning. 
Our study underlines high-quality yield monitoring and statistics as critical prerequisites to guide adaptation under climate change.

11.
We study a system of partial differential equations which models the disease transmission dynamics of schistosomiasis. The model incorporates both the definitive human hosts and the intermediate snail hosts. The human hosts have an age-dependent infection rate and the snail hosts have an infection-age-dependent cercaria releasing rate. The parasite reproduction number R is computed and is shown to determine the disease dynamics. Stability results are obtained via both analytic and numerical studies. Results of the model are used to discuss age-targeted drug treatment strategies for humans. Sensitivity and uncertainty analysis is conducted to determine the role of various parameters on the variation of R. The effects of various drug treatment programs on disease control are compared in terms of both R and the mean parasite load within the human hosts.

12.
Hoff PD. Biometrics 2005; 61(4): 1027–1036.
This article develops a model-based approach to clustering multivariate binary data, in which the attributes that distinguish a cluster from the rest of the population may depend on the cluster being considered. The clustering approach is based on a multivariate Dirichlet process mixture model, which allows for the estimation of the number of clusters, the cluster memberships, and the cluster-specific parameters in a unified way. Such a clustering approach has applications in the analysis of genomic abnormality data, in which the development of different types of tumors may depend on the presence of certain abnormalities at subsets of locations along the genome. Additionally, such a mixture model provides a nonparametric estimation scheme for dependent sequences of binary data.
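A Dirichlet process mixture does not fix the number of clusters in advance; its prior over partitions is the Chinese restaurant process, sketched below (an illustrative stdlib-only draw from the prior, not the paper's posterior inference):

```python
import random

def crp_partition(n, alpha, seed=0):
    """Draw a partition of n items from the Chinese restaurant process,
    the prior over clusterings implied by a Dirichlet process mixture
    with concentration parameter alpha."""
    rng = random.Random(seed)
    sizes = []        # current cluster sizes
    assignments = []  # cluster index for each item
    for i in range(n):
        # item i joins existing cluster c with prob size_c / (i + alpha),
        # or opens a new cluster with prob alpha / (i + alpha)
        r = rng.uniform(0.0, i + alpha)
        acc = 0.0
        for c, size in enumerate(sizes):
            acc += size
            if r < acc:
                sizes[c] += 1
                assignments.append(c)
                break
        else:
            sizes.append(1)
            assignments.append(len(sizes) - 1)
    return assignments, sizes

assign, sizes = crp_partition(100, alpha=2.0)
assert sum(sizes) == 100 and len(assign) == 100
```

Larger alpha makes new clusters more likely, so the expected number of clusters grows roughly like alpha * log(n), which is what lets the model estimate the number of clusters from the data.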

13.
An efficient method is presented to compute the probability of selection of a specified subset from the set of all subsets of a fixed size, where the subsets are taken from a population whose units have varying individual probabilities of selection. The problem is motivated by the computation of the exact marginal likelihood for the Cox proportional hazards model.

14.
In this paper the situation of extra population heterogeneity is discussed from an analysis of variance point of view. We first provide a non-iterative way of estimating the variance of the heterogeneity distribution, without estimating the heterogeneity distribution itself, for Poisson and binomial counts. The consequences of the presence of heterogeneity for estimation of the mean are discussed. We show that if the homogeneity assumption holds, the pooled mean is optimal, while in the presence of strong heterogeneity the simple (arithmetic) mean is an optimal estimator of the mean SMR or mean proportion. These results lead to the problem of finding an optimal estimator for situations not represented by these two extreme cases. We propose an iterative solution to this problem. Illustrations of the application of these findings are provided with examples from various areas.
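The two extreme estimators are easy to contrast on toy binomial counts (the numbers below are made up for illustration): the pooled mean weights each unit by its denominator, which is optimal under homogeneity, while the simple arithmetic mean of the per-unit proportions is preferred under strong heterogeneity.

```python
# counts x_i out of n_i trials for three units (toy data)
x = [1, 30, 3]
n = [10, 60, 5]

# pooled mean: total events over total trials,
# optimal if every unit shares one underlying proportion
pooled = sum(x) / sum(n)

# simple arithmetic mean of the per-unit proportions,
# optimal under strong heterogeneity between units
simple = sum(xi / ni for xi, ni in zip(x, n)) / len(x)

# the two generally disagree: 34/75 vs. (0.1 + 0.5 + 0.6)/3
assert abs(pooled - 34 / 75) < 1e-12
assert abs(simple - 0.4) < 1e-12
```

The gap between the two is what motivates the paper's iterative estimator for the intermediate cases.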

15.
Objective: To fit an excess hazard regression model with a random effect associated with a geographical level, the Département in France, and compare its parameter estimates with those obtained using a "fixed-effect" excess hazard regression model. Methods: An excess hazard regression model with a piecewise constant baseline hazard was used and a normal distribution was assumed for the random effect. Likelihood maximization was performed using a numerical integration technique, Gauss-Hermite quadrature. Results were obtained with colon-rectum and thyroid cancer data from the French network of cancer registries. Results: The results were in agreement with what was theoretically expected. We showed a greater heterogeneity of the excess hazard in thyroid cancers than in colon-rectum cancers. The hazard ratios for the covariates as estimated with the mixed-effect model were close to those obtained with the fixed-effect model. However, unlike the fixed-effect model, the mixed-effect model allowed the analysis of data with a large number of clusters. The shrinkage estimator associated with Département is an optimal measure of Département-specific excess risk of death, and the variance of the random effect gave information on the within-cluster correlation. Conclusion: An excess hazard regression model with random effect can be used for estimating variation in the risk of death due to cancer between many clusters of small sizes.
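The numerical integration step can be illustrated with the simplest member of the quadrature family (a toy sketch, not the paper's implementation, which integrates a normal random effect out of the likelihood): the two-point Gauss-Hermite rule is exact for e^(-x^2) times any polynomial of degree at most 3.

```python
import math

# Two-point Gauss-Hermite rule: nodes +-1/sqrt(2), weights sqrt(pi)/2.
nodes = (-1.0 / math.sqrt(2.0), 1.0 / math.sqrt(2.0))
weights = (math.sqrt(math.pi) / 2.0, math.sqrt(math.pi) / 2.0)

def gh2(f):
    """Approximate the integral of f(x) * exp(-x^2) over the real line;
    exact when f is a polynomial of degree <= 3."""
    return sum(w * f(x) for w, x in zip(weights, nodes))

# integral of x^2 * exp(-x^2) over the real line is sqrt(pi)/2
assert abs(gh2(lambda x: x * x) - math.sqrt(math.pi) / 2.0) < 1e-12
```

In a mixed-effect likelihood the integrand is the conditional likelihood evaluated at scaled quadrature nodes for the random effect; more nodes are used in practice, but the weighted-sum structure is the same.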

16.
Some grouping is necessary when constructing a Leslie matrix model because it involves discretizing a continuous process of births and deaths. The level of grouping is determined by the number of age classes and frequency of sampling. It is largely unknown what is lost or gained by using fewer age classes, and I address this question using aggregation theory. I derive an aggregator for a Leslie matrix model using weighted least squares, determine what properties an aggregated matrix inherits from the original matrix, evaluate aggregation error, and measure the influence of aggregation on asymptotic and transient behaviors. To gauge transient dynamics, I employ reactivity of the standardized Leslie matrix. I apply the aggregator to 10 Leslie models developed for animal populations drawn from a diverse set of species. Several properties are inherited by the aggregated matrix: (a) it is a Leslie matrix; (b) it is irreducible whenever the original matrix is irreducible; (c) it is primitive whenever the original matrix is primitive; and (d) its stable population growth rate and stable age distribution are consistent with those of the original matrix if the least squares weights are equal to the original stable age distribution. In the application, depending on the population modeled, when the least squares weights do not follow the stable age distribution, the stable population growth rate of the aggregated matrix may or may not be approximately consistent with that of the original matrix. Transient behavior is lost with high aggregation.
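The two quantities whose preservation under aggregation the abstract discusses, the stable population growth rate and the stable age distribution, can be computed by power iteration on a toy Leslie matrix (fecundities in the first row, survival probabilities on the subdiagonal; the numbers are illustrative, not from the paper's 10 models):

```python
def leslie_growth(fecundity, survival, iters=2000):
    """Power iteration on a Leslie matrix to obtain the stable
    population growth rate (dominant eigenvalue) and stable age
    distribution (dominant right eigenvector, normalized to sum 1)."""
    n = len(fecundity)
    v = [1.0 / n] * n
    lam = 1.0
    for _ in range(iters):
        # w = A @ v for the Leslie structure
        w = [sum(f * x for f, x in zip(fecundity, v))]
        w += [survival[i] * v[i] for i in range(n - 1)]
        lam = sum(w)       # L1-normalized power iteration
        v = [x / lam for x in w]
    return lam, v

# three age classes: reproduction in classes 2 and 3 (toy values)
lam, stable = leslie_growth([0.0, 1.2, 1.5], [0.6, 0.4])
assert lam > 0 and abs(sum(stable) - 1.0) < 1e-9
```

Property (d) in the abstract says that if the least-squares weights equal this stable age distribution, the aggregated matrix reproduces the same lam; a weighted sum of age classes with other weights generally will not.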

17.
BACKGROUND AND AIMS: Prediction of phenotypic traits from new genotypes under untested environmental conditions is crucial to build simulations of breeding strategies to improve target traits. Although the plant response to environmental stresses is characterized by both architectural and functional plasticity, recent attempts to integrate biological knowledge into genetics models have mainly concerned specific physiological processes or crop models without architecture, and thus may prove limited when studying genotype × environment interactions. Consequently, this paper presents a simulation study introducing genetics into a functional-structural growth model, which gives access to more fundamental traits for quantitative trait loci (QTL) detection and thus to promising tools for yield optimization. METHODS: The GREENLAB model was selected as a reasonable choice to link growth model parameters to QTL. Virtual genes and virtual chromosomes were defined to build a simple genetic model that drove the settings of the species-specific parameters of the model. The QTL Cartographer software was used to study QTL detection of simulated plant traits. A genetic algorithm was implemented to define the ideotype for yield maximization based on the model parameters and the associated allelic combination. KEY RESULTS AND CONCLUSIONS: By keeping the environmental factors constant and using a virtual population with a large number of individuals generated by a Mendelian genetic model, results for an ideal case could be simulated. Virtual QTL detection was compared for phenotypic traits, such as cob weight, and for traits that were model parameters, and was found to be more accurate in the latter case. The practical interest of this approach is illustrated by calculating the parameters (and the corresponding genotype) associated with yield optimization of a GREENLAB maize model.
The paper discusses the potential of GREENLAB to represent environment × genotype interactions, in particular through its main state variable, the ratio of biomass supply to demand.

18.
Greenland S. Biometrics 2001; 57(1): 182–188.
Standard presentations of epidemiological results focus on incidence-ratio estimates derived from regression models fit to specialized study data. These data are often highly nonrepresentative of populations for which public-health impacts must be evaluated. Basic methods are provided for interval estimation of attributable fractions from model-based incidence-ratio estimates combined with independent survey estimates of the exposure distribution in the target population of interest. These methods are illustrated in estimation of the potential impact of magnetic-field exposures on childhood leukemia in the United States, based on pooled data from 11 case-control studies and a U.S. sample survey of magnetic-field exposures.
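For orientation, the point estimate that such interval methods build on is the standard population attributable fraction combining an exposure prevalence with a rate ratio (Levin's formula; the sketch below is a textbook identity, not the paper's interval estimator, and the numbers are illustrative):

```python
def attributable_fraction(p_exposed, rr):
    """Levin's population attributable fraction: the share of disease
    incidence in the target population attributable to the exposure,
    given exposure prevalence p_exposed and rate ratio rr."""
    excess = p_exposed * (rr - 1.0)
    return excess / (1.0 + excess)

# toy numbers: 4% prevalence, rate ratio 1.7
af = attributable_fraction(0.04, 1.7)
assert 0.0 < af < 1.0
```

The paper's contribution is propagating the sampling uncertainty of both inputs, the model-based rate ratio and the survey-based prevalence, into an interval for this quantity.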

19.
This paper presents a Bayesian analysis of a time series of counts to assess its dependence on an explanatory variable. The time series represented is the incidence of the infectious disease ESBL-producing Klebsiella pneumoniae in an Australian hospital and the explanatory variable is the number of grams of antibiotic (third generation) cephalosporin used during that time. We demonstrate that there is a statistically significant relationship between disease occurrence and use of the antibiotic, lagged by three months. The model used is a parameter-driven model in the form of a generalized linear mixed model. Comparison of models is made in terms of mean square error.

20.
We construct Bayesian methods for semiparametric modeling of a monotonic regression function when the predictors are measured with classical error, Berkson error, or a mixture of the two. Such methods require a distribution for the unobserved (latent) predictor, a distribution we also model semiparametrically. Such combinations of semiparametric methods for the dose response as well as the latent variable distribution have not been considered in the measurement error literature for any form of measurement error. In addition, our methods represent a new approach to those problems where the measurement error combines Berkson and classical components. While the methods are general, we develop them around a specific application, namely, the study of thyroid disease in relation to radiation fallout from the Nevada test site. We use these data to illustrate our methods, which suggest a point estimate (posterior mean) of relative risk at high doses nearly double that of previous analyses, but also much greater uncertainty in the relative risk.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号