Similar Articles
20 similar articles found.
1.
Land managers must balance the needs of a variety of species when manipulating habitats. Structured decision making provides a systematic means of defining choices and choosing among alternative management options; implementing a structured decision requires quantitative approaches to predicting the consequences of management for the relevant species. Multi-species occupancy models provide a convenient framework for making structured decisions when the management objective is focused on a collection of species. These models use replicate survey data that are often collected on managed lands. Occupancy can be modeled for each species as a function of habitat and other environmental features, and Bayesian methods allow for estimation and prediction of the collective responses of groups of species to alternative habitat-management scenarios. We provide an example of this approach using data from breeding bird surveys conducted in 2008 at the Patuxent Research Refuge in Laurel, Maryland, evaluating the effects of eliminating meadow and wetland habitats on scrub-successional and woodland-breeding bird species, using summed total occupancy of species as an objective function. Removal of meadows and wetlands decreased the value of an objective function based on scrub-successional species by 23.3% (95% CI: 20.3–26.5) but caused only a 2% (0.5, 3.5) increase in the value of an objective function based on woodland species, documenting the differential effects of eliminating meadows and wetlands on these groups of breeding birds. This approach provides a useful quantitative tool for managers interested in structured decision making. © 2012 The Wildlife Society.
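As a minimal illustration of a summed-occupancy objective function of the kind described above (not the fitted Patuxent model), the following sketch scores two habitat scenarios under a single-covariate logistic occupancy model; the species coefficients and habitat values are invented. In a fully Bayesian version, the same objective would be averaged over posterior draws of the coefficients.

```python
import numpy as np

def occupancy(beta0, beta1, meadow_frac):
    # Logistic occupancy probability for one species.
    return 1.0 / (1.0 + np.exp(-(beta0 + beta1 * meadow_frac)))

# Hypothetical (intercept, meadow-effect) pairs for three scrub species.
scrub_species = [(-0.5, 2.0), (0.0, 1.5), (-1.0, 2.5)]

def objective(species, meadow_frac):
    # Summed total occupancy across the species group.
    return sum(occupancy(b0, b1, meadow_frac) for b0, b1 in species)

baseline = objective(scrub_species, meadow_frac=0.3)  # current habitat
removed = objective(scrub_species, meadow_frac=0.0)   # meadows eliminated
print(f"objective change: {100 * (removed - baseline) / baseline:.1f}%")
```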

2.
A multistage single-arm phase II trial with a binary endpoint is considered. Bayesian posterior probabilities are used to monitor futility in interim analyses and efficacy in the final analysis. For a beta-binomial model, decision rules based on Bayesian posterior probabilities are converted to “traditional” decision rules expressed in terms of the number of responders among the patients observed so far. Analytical derivations are given for the probability of stopping for futility and for the probability of declaring efficacy. A workflow is presented for selecting the parameters that specify the Bayesian design, and the operating characteristics of the design are investigated. It is outlined how the presented approach can be transferred to statistical models other than the beta-binomial model.
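A minimal sketch of the conversion the abstract describes, assuming a Beta(a, b) prior, a null response rate p0, and an efficacy threshold gamma chosen purely for illustration: the Bayesian rule Pr(p > p0 | data) >= gamma is translated into the smallest number of responders out of n.

```python
from scipy.stats import beta

def min_responders(n, a=1.0, b=1.0, p0=0.2, gamma=0.9):
    """Smallest responder count r (out of n) declaring efficacy."""
    for r in range(n + 1):
        # Posterior is Beta(a + r, b + n - r); sf(p0) = Pr(p > p0 | r, n).
        if beta.sf(p0, a + r, b + n - r) >= gamma:
            return r
    return None  # efficacy cannot be declared at this sample size

print(min_responders(n=40))
```

Because the posterior probability is monotone in r, scanning upward from zero responders yields the threshold directly; the same construction, applied at each interim look, gives the futility boundaries.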

3.
Owing to the exponential growth of genome databases, phylogenetic trees are now widely used to test a variety of evolutionary hypotheses. Nevertheless, the computational burden limits the application of methods such as maximum likelihood nonparametric bootstrapping to assess the reliability of evolutionary trees. As an alternative, the much faster Bayesian inference of phylogeny, which expresses branch support as posterior probabilities, has been introduced. However, marked discrepancies exist between nonparametric bootstrap proportions and Bayesian posterior probabilities, leading to difficulties in the interpretation of sometimes strongly conflicting results. As an attempt to reconcile these two indices of node reliability, we apply the nonparametric bootstrap resampling procedure to the Bayesian approach. The correlation between posterior probabilities, bootstrap maximum likelihood percentages, and bootstrapped posterior probabilities was studied for eight highly diverse empirical data sets and was also investigated using simulation. Our results show that the relation between posterior probabilities and bootstrapped maximum likelihood percentages is highly variable, but that very strong correlations always exist when Bayesian node support is estimated on bootstrapped character matrices. Moreover, simulations corroborate the empirical observations in suggesting that, being more conservative, the bootstrap approach might be less prone to strongly supporting a false phylogenetic hypothesis. Thus, apparent conflicts in the topology recovered by the Bayesian approach were reduced after bootstrapping. Both posterior probabilities and bootstrap supports are of great interest to phylogenetics as potential upper and lower bounds of node reliability, but they are surely not interchangeable and cannot be directly compared.

4.
Overabundant populations of ungulates have caused environmental degradation and loss of biological diversity in ecosystems throughout the world. Culling or regulated harvest is often used to control overabundant species. These methods are difficult to implement in national parks, other types of conservation reserves, or residential areas where public hunting may be forbidden by policy. As a result, fertility control has been recommended as a non-lethal alternative for regulating ungulate populations. We evaluate this alternative using white-tailed deer in national parks in the vicinity of Washington, D.C., USA as a model system. Managers seek to reduce densities of white-tailed deer from the current average (50 deer per km²) to decrease harm to native plant communities caused by deer. We present a Bayesian hierarchical model using 13 years of population estimates from 8 national parks in the National Capital Region Network. We offer a novel way to evaluate management actions relative to goals using short-term forecasts. Our approach confirms past analyses showing that fertility control is incapable of rapidly reducing deer abundance. Fertility control can, however, be combined with culling to maintain a population below carrying capacity with a high probability of success. This gives managers confronted with problematic overabundance a framework for implementing management actions with a realistic assessment of uncertainty.
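The hierarchical model itself is not reproduced here, but the forecasting logic can be sketched with a stochastic logistic projection in which fertility control scales recruitment and culling removes a fixed fraction each year. The growth rate, carrying capacity, treatment levels, and density target below are all hypothetical, not the fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)
n_draws, years = 1000, 5
N0, K, target = 50.0, 60.0, 20.0       # deer per km^2
r = rng.normal(0.25, 0.05, n_draws)    # uncertain intrinsic growth rate
cull, fert = 0.25, 0.30                # fraction culled; fraction sterilized

N = np.full(n_draws, N0)
for _ in range(years):
    N = N + r * (1.0 - fert) * N * (1.0 - N / K)  # reduced recruitment
    N = N * (1.0 - cull)                          # annual removal
    N = np.clip(N, 0.0, None)

print(f"P(density <= {target:g} deer/km^2 within {years} yr): "
      f"{np.mean(N <= target):.2f}")
```

Propagating the parameter draws through the projection is what turns a point forecast into the "realistic assessment of uncertainty" the abstract refers to.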

5.
Human activities have severely disrupted the Lake Erie ecosystem. Recent changes in the structure of the lower trophic level, associated with exotic species invasions and reduced nutrient loading, have created ecological uncertainties for fisheries management. Decisions that naïvely assume certainty may be different from, and suboptimal compared to, choices that consider uncertainty. Here we illustrate how multiobjective Bayesian decision analysis can recognize the multiple goals of management when evaluating the effect of ecological uncertainties on management and the value of information from ecological research. The value judgments and subjective probabilities required by the decision analysis were provided by six Lake Erie fishery agency biologists. The Lake Erie Ecological Model was used to project the impacts of each combination of management actions and lower-trophic-level parameter values. The analysis shows that explicitly considering lower-trophic-level uncertainties can alter decisions concerning Lake Erie fishery harvests. Of the research projects considered, investigation of goby predation on zebra mussels (Dreissena sp.) and lakewide estimation of secondary production appear to have the greatest expected value for fisheries management. We also find that changes in the weights assigned to management goals affect decisions and the value of information more than changes in probability judgments do.
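The value-of-information ranking mentioned above rests on standard expected-value calculations. A toy version with two invented lower-trophic-level states and two harvest policies (none of the numbers come from the Lake Erie analysis):

```python
import numpy as np

p = np.array([0.6, 0.4])                 # P(low goby predation), P(high)
# Rows: harvest policies; columns: lower-trophic-level states.
payoff = np.array([[10.0, 2.0],          # aggressive harvest
                   [ 7.0, 6.0]])         # conservative harvest

ev_no_info = (payoff @ p).max()          # best action under uncertainty
ev_perfect = payoff.max(axis=0) @ p      # best action in each state
print(f"EVPI = {ev_perfect - ev_no_info:.2f}")
```

The expected value of perfect information (EVPI) is the expected payoff when the state is learned before acting, minus the expected payoff of the best action chosen under uncertainty; research projects can then be ranked by how much of this gap they are expected to close.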

6.
Global change issues are complex and the consequences of decisions are often highly uncertain. The large spatial and temporal scales and stakes involved make it important to take account of present and potential consequences in decision-making. Standard approaches to decision-making under uncertainty require information about the likelihood of alternative states, how states and actions combine to form outcomes and the net benefits of different outcomes. For global change issues, however, the set of potential states is often unknown, much less the probabilities, effect of actions or their net benefits. Decision theory, thresholds, scenarios and resilience thinking can expand awareness of the potential states and outcomes, as well as of the probabilities and consequences of outcomes under alternative decisions.

7.
Fisheries assessment scientists can learn at least three lessons from the collapse of the northern cod off Newfoundland: (1) assessment errors can contribute to overfishing, through optimistic long-term forecasts that lead to the build-up of overcapacity or through optimistic assessments that lead to TACs being set higher than they should be; (2) stock size overestimation is a major risk when commercial catch per effort is used as an abundance trend index, so there is a continued need to invest in survey indices of abundance trend no matter what assessment methodology is used; and (3) the risk of recruitment overfishing exists and may be high even for very fecund species like cod. This implies that harvest rate targets should be lower than has often been assumed, especially when stock size assessments are uncertain. In the end, the high cost of information for accurate stock assessment may call for an alternative approach to management, involving regulation of exploitation rate via measures such as large-scale closures (refuges) that directly restrict the proportion of fish available to harvest. Development of predictive models for such regulatory options is a major challenge for fisheries assessment science.

8.
This article presents a framework to evaluate emerging systems in life cycle assessment (LCA). Current LCA methods are effective for established systems; however, lack of data often inhibits robust analysis of future products or processes that may benefit the most from life cycle information. In many cases the life cycle inventory (LCI) of a system can change depending on its development pathway. Modeling emerging systems allows insights into probable trends and a greater understanding of the effect of future scenarios on LCA results. The proposed framework uses Bayesian probabilities to model technology adoption. The method presents a unique approach to modeling system evolution and can be used independently or within the context of an agent‐based model (ABM). LCA can be made more robust and dynamic by using this framework to couple scenario modeling with life cycle data, analyzing the effect of decision‐making patterns over time. Potential uses include examining the changing urban metabolism of growing cities, understanding the development of renewable energy technologies, identifying transformations in material flows over space and time, and forecasting industrial networks for developing products. A switchgrass‐to‐energy case demonstrates the approach.
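One way to read "Bayesian probabilities to model technology adoption" is conjugate updating of an adoption rate that then weights alternative life cycle inventories. The sketch below is an assumption-laden toy, not the article's framework: the Beta prior, adoption record, and two-entry LCI vectors are invented.

```python
import numpy as np

a, b = 2.0, 8.0                  # Beta prior: adoption thought unlikely (~20%)
observed = [1, 0, 1, 1, 0, 1]    # 1 = adopter, 0 = non-adopter

for x in observed:
    a, b = a + x, b + (1 - x)    # conjugate Beta-Bernoulli update

p_adopt = a / (a + b)
print(f"estimated adoption probability: {p_adopt:.2f}")

# Weight two invented inventories (e.g., kg CO2-eq, kg NOx per unit output).
lci_incumbent = np.array([5.0, 1.2])
lci_emerging = np.array([2.0, 0.4])
print("expected LCI:", p_adopt * lci_emerging + (1 - p_adopt) * lci_incumbent)
```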

9.
Many empirical studies have revealed considerable differences between nonparametric bootstrap proportions and Bayesian posterior probabilities in terms of the support values for branches, despite predictions of their approximate equivalence. We investigated this problem by simulating data, which were then analyzed by maximum likelihood bootstrapping and Bayesian phylogenetic analysis using identical models and reoptimization of parameter values. We show that Bayesian posterior probabilities are significantly higher than the corresponding nonparametric bootstrap frequencies for true clades, but also that erroneous conclusions will be drawn more often. These errors are strongly accentuated when the models used for the analyses are underparameterized. When data are analyzed under the correct model, nonparametric bootstrapping is conservative. Bayesian posterior probabilities are also conservative in this respect, but less so.

10.
Understanding the mechanisms underlying the observed dynamics of complex biological systems requires the statistical assessment and comparison of multiple alternative models. Although this has traditionally been done using maximum likelihood-based methods such as Akaike's Information Criterion (AIC), Bayesian methods have gained in popularity because they provide more informative output in the form of posterior probability distributions. However, comparison between multiple models in a Bayesian framework is made difficult by the computational cost of numerical integration over large parameter spaces. A new, efficient method for the computation of posterior probabilities has recently been proposed and applied to complex problems from the physical sciences. Here we demonstrate how nested sampling can be used for inference and model comparison in the biological sciences. We present a reanalysis of data from experimental infection of mice with Salmonella enterica showing the distribution of bacteria in liver cells. In addition to confirming the main finding of the original analysis, which relied on AIC, our approach provides: (a) integration across the parameter space, (b) estimation of the posterior parameter distributions (with visualisations of parameter correlations), and (c) estimation of the posterior predictive distributions for goodness-of-fit assessments of the models. The goodness-of-fit results suggest that alternative mechanistic models and a relaxation of the quasi-stationary assumption should be considered.
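For readers unfamiliar with the method, a bare-bones nested-sampling loop for a one-dimensional toy problem is sketched below (uniform prior, Gaussian likelihood bump). The deterministic shrinkage exp(-i/N) and brute-force rejection replacement follow the textbook algorithm; real implementations use far smarter constrained sampling, and the leftover live-point contribution to the evidence is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def loglike(theta):
    # Unnormalised Gaussian likelihood bump centred at 0.3, width 0.05.
    return -0.5 * ((theta - 0.3) / 0.05) ** 2

n_live, n_iter = 100, 600
live = rng.uniform(size=n_live)          # live points drawn from the prior
live_logL = loglike(live)
logZ = -np.inf
log_shell = np.log(1.0 - np.exp(-1.0 / n_live))  # log prior-mass shell factor

for i in range(n_iter):
    worst = np.argmin(live_logL)
    # Accumulate evidence: shell weight times the lowest live likelihood.
    logZ = np.logaddexp(logZ, -i / n_live + log_shell + live_logL[worst])
    # Replace the worst point by a prior draw with higher likelihood.
    while True:
        cand = rng.uniform()
        if loglike(cand) > live_logL[worst]:
            live[worst], live_logL[worst] = cand, loglike(cand)
            break

print(f"log-evidence estimate: {logZ:.2f}  (analytic: about -2.08)")
```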

11.
The Bayesian method for estimating species phylogenies from molecular sequence data provides an attractive alternative to maximum likelihood with the nonparametric bootstrap, owing to the easy interpretation of posterior probabilities for trees and to the availability of efficient computational algorithms. However, for many data sets it produces extremely high posterior probabilities, sometimes for apparently incorrect clades. Here we use both computer simulation and empirical data analysis to examine the effect of the prior model for internal branch lengths. We found that posterior probabilities for trees and clades are sensitive to the prior for internal branch lengths, and that priors assuming long internal branches cause high posterior probabilities for trees. In particular, uniform priors with high upper bounds bias Bayesian clade probabilities in favor of extreme values. We discuss possible remedies to the problem, including empirical and full Bayesian methods and subjective procedures suggested in Bayesian hypothesis testing. Our results also suggest that the bootstrap proportion and the Bayesian posterior probability are different measures of accuracy, and that the bootstrap proportion, if interpreted as the probability that the clade is true, can be either too liberal or too conservative.

12.
Fair-balance paradox, star-tree paradox, and Bayesian phylogenetics
The star-tree paradox refers to the conjecture that the posterior probabilities for the three unrooted trees for four species (or the three rooted trees for three species if the molecular clock is assumed) do not approach 1/3 when the data are generated using the star tree and the amount of data approaches infinity. It reflects the more general phenomenon of high and presumably spurious posterior probabilities for trees or clades produced by the Bayesian method of phylogenetic reconstruction, and it is perceived to be a manifestation of the deeper problem of the extreme sensitivity of Bayesian model selection to the prior on parameters. Analysis of the star-tree paradox has been hampered by the intractability of the integrals involved. In this article, I use Laplacian expansion to approximate the posterior probabilities for the three rooted trees for three species using binary characters evolving at a constant rate. The approximation enables calculation of posterior tree probabilities for arbitrarily large data sets. Both theoretical analysis of the analogous fair-coin and fair-balance problems and computer simulation for the tree problem confirmed the existence of the star-tree paradox. As the data size n → ∞, the posterior tree probabilities do not converge to 1/3 each; instead they vary among data sets according to a statistical distribution, which is characterized. Two strategies for resolving the star-tree paradox are explored: (1) a nonzero prior probability for the degenerate star tree and (2) an increasingly informative prior forcing the internal branch length toward zero. Both appear to be effective in resolving the paradox, but the latter is simpler to implement. The posterior tree probabilities are found to be very sensitive to the prior.
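The fair-coin analogue mentioned above is easy to simulate: with data generated at theta = 1/2 and a Beta(1, 1) prior, Pr(theta > 1/2 | data) does not settle at 1/2 as n grows but remains spread over (0, 1), approximately uniformly, so a non-negligible fraction of data sets shows strong one-sided posterior support. This minimal simulation is illustrative only, not the article's Laplacian analysis.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)
n, reps = 100_000, 2000
heads = rng.binomial(n, 0.5, size=reps)        # data from a fair coin
# Posterior Beta(1 + heads, 1 + tails); sf(0.5) = Pr(theta > 0.5 | data).
post = beta.sf(0.5, 1 + heads, 1 + n - heads)
print("fraction of data sets with Pr(theta > 1/2) > 0.95:",
      np.mean(post > 0.95))   # ~0.05 under the uniform limit, not ~0
```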

13.
Hans C, Dunson DB. Biometrics 2005;61(4):1018–1026.
In regression applications with categorical predictors, interest often focuses on comparing the null hypothesis of homogeneity to an ordered alternative. This article proposes a Bayesian approach for addressing this problem in the setting of normal linear and probit regression models. The regression coefficients are assigned a conditionally conjugate prior density consisting of mixtures of point masses at 0 and truncated normal densities, with a (possibly unknown) changepoint parameter included to accommodate umbrella ordering. Two strategies of prior elicitation are considered: (1) a Bayesian Bonferroni approach in which the probability of the global null hypothesis is specified and local hypotheses are considered independent; and (2) an approach which treats these probabilities as random. A single Gibbs sampling chain can be used to obtain posterior probabilities for the different hypotheses and to estimate regression coefficients and predictive quantities either by model averaging or under the preferred hypothesis. The methods are applied to data from a carcinogenesis study.

14.
Abstract

Quantitative risk assessment (QRA) approaches systematically evaluate the likelihood, impacts, and risk of adverse events. QRA using fault tree analysis (FTA) is based on the assumptions that failure events have crisp probabilities and that they are statistically independent. The crisp probabilities of the events are often absent, which leads to data uncertainty, while the independence assumption leads to model uncertainty. Experts' knowledge can be utilized to obtain unknown failure data, but this process is itself subject to issues such as imprecision, incompleteness, and lack of consensus. For this reason, to minimize the overall uncertainty in QRA, in addition to addressing the uncertainties in the knowledge, it is equally important to combine the opinions of multiple experts and to update prior beliefs as new evidence becomes available. In this article, a novel methodology is proposed for QRA that combines fuzzy set theory and evidence theory with Bayesian networks to describe the uncertainties, aggregate experts' opinions, and update prior probabilities when new evidence becomes available. Additionally, sensitivity analysis is performed to identify the most critical events in the fault tree. The effectiveness of the proposed approach is demonstrated via application to a practical system.
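For contrast with the proposed methodology, the baseline it relaxes — crisp probabilities combined through independent AND/OR gates — plus a simple conjugate update of one basic event can be sketched as follows. The tree structure and all probabilities are invented, and none of the fuzzy or evidence-theory machinery is shown.

```python
def or_gate(*p):
    # P(at least one input event occurs), assuming independence.
    out = 1.0
    for pi in p:
        out *= (1.0 - pi)
    return 1.0 - out

def and_gate(*p):
    # P(all input events occur), assuming independence.
    out = 1.0
    for pi in p:
        out *= pi
    return out

p_pump, p_valve, p_sensor = 0.01, 0.02, 0.05
top = and_gate(or_gate(p_pump, p_valve), p_sensor)  # hypothetical tree
print(f"P(top event) = {top:.5f}")

# New evidence: 2 sensor failures in 30 demands; update a Beta(1, 19) prior.
a, b = 1.0, 19.0                  # prior mean 0.05
a, b = a + 2, b + 30 - 2
print(f"updated P(sensor fails) = {a / (a + b):.3f}")
```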

15.
The assessment of the effectiveness of a treatment in a clinical trial depends on calculating p-values. However, p-values are only indirect and partial indicators of a genuine effect. Particularly in situations where publication bias is very likely, assessment using a p-value of 0.05 may not be sufficiently cautious; in other situations it seems reasonable to believe that assessment based on p-values may be unduly conservative. Assessments could be improved by using prior information, which implies a Bayesian approach that takes account of prior probability. However, prior information in the form of expert opinion can introduce bias. A method is given here that applies to assessments already included, or likely to be included, in the Cochrane Collaboration, excluding reviews concerning new drugs. This method uses prior information and a Bayesian approach, but the prior information comes not from expert opinion but simply from the distribution of effectiveness apparent in a random sample of summary statistics in the Cochrane Collaboration. The method takes certain types of summary statistics and their confidence intervals and, with the help of a graph, translates these into probabilities that the treatments being trialled are effective.
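A minimal sketch of the underlying conjugate calculation, assuming the empirical prior and the trial summary are both treated as normal on the log-odds-ratio scale; all numbers are illustrative, not drawn from the Cochrane sample.

```python
import numpy as np
from scipy.stats import norm

mu0, sd0 = 0.0, 0.4                # empirical prior on log(OR), no-effect centre
est, lo, hi = -0.35, -0.75, 0.05   # trial estimate with 95% CI
se = (hi - lo) / (2 * 1.96)        # recover the standard error from the CI

w = (1 / se**2) / (1 / se**2 + 1 / sd0**2)      # precision weighting
post_mean = w * est + (1 - w) * mu0
post_sd = np.sqrt(1 / (1 / se**2 + 1 / sd0**2))
# Probability the treatment is effective (log OR below zero).
print(f"Pr(log OR < 0 | data, prior) = {norm.cdf(0, post_mean, post_sd):.3f}")
```

The prior shrinks the trial estimate toward no effect by an amount governed by the spread of effects seen across past reviews, which is exactly why the resulting probability is more cautious than the trial's p-value alone.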

16.
Grieve AP. Biometrics 1985;41(4):979–990.
Statisticians have been critical of the use of two-period crossover designs for clinical trials because the estimate of the treatment difference is biased when the carryover effects of the two treatments are not equal. In the standard approach, if the null hypothesis of equal carryover effects is not rejected, data from both periods are used to estimate and test for treatment differences; if the null hypothesis is rejected, data from the first period alone are used. A Bayesian analysis based on the Bayes factor against unequal carryover effects is given. Although this Bayesian approach avoids the "all-or-nothing" decision inherent in the standard approach, it recognizes that with small trials it is difficult to provide unequivocal evidence that the carryover effects of the two treatments are equal, and thus that the interpretation of the difference between treatment effects depends heavily on a subjective assessment of whether the carryover effects really are equal.

17.
Time-varying individual covariates are problematic in experiments with marked animals because the covariate can typically be observed only when each animal is captured. We examine three methods of incorporating time-varying individual covariates of the survival probabilities into the analysis of data from mark-recapture-recovery experiments: deterministic imputation, a Bayesian imputation approach based on modeling the joint distribution of the covariate and the capture history, and a conditional approach considering only the events for which the associated covariate data are completely observed (the trinomial model). After describing the three methods, we compare results from their application to an analysis of the effect of body mass on the survival of Soay sheep (Ovis aries) on the Isle of Hirta, Scotland. Simulations based on these results are then used to make further comparisons. We conclude that the trinomial model and the Bayesian imputation method each perform best in different situations. If the capture and recovery probabilities are all high, the trinomial model produces precise, unbiased estimators that do not depend on any assumptions regarding the distribution of the covariate. In contrast, the Bayesian imputation method performs substantially better when capture and recovery probabilities are low, provided that the specified model for the covariate is a good approximation to the true data-generating mechanism.

18.
Economic analysis can guide the level of actions taken to reduce nitrogen (N) losses and environmental risk in a cost-effective manner, while also allowing consideration of the relative costs of controls to various groups. The biophysical science of N control, especially from nonpoint sources such as agriculture, is not certain. Widespread, precise data that couple management practices and other actions to reduce nonpoint N losses with specific delivery from a river basin (or often even a watershed) do not exist. The causal relationships are clouded by other factors influencing N flows, such as weather, temperature, and soil characteristics. Even when the science is certain, economic analysis has its own sets of uncertainties and simplifying assumptions. The economic analysis of the National Hypoxia Assessment provides an example of economic analysis based on less than complete scientific information that can still provide guidance to policy makers about the economic consequences of alternative approaches. One critical value to policy makers comes from bounding the economic magnitude of the consequences of alternative actions. Another is the identification of impacts outside the sphere of initial concerns. Such analysis can successfully assess the relative impacts of different degrees of control of N losses within the basin as well as outside it. It can also demonstrate the extent to which the costs of any one control action increase with the intensity of its application.

19.
Individual perception of vaccine safety is an important factor in determining a person's adherence to a vaccination program, with consequences for disease control. This perception, or belief, about the safety of a given vaccine is not a static parameter but a variable subject to environmental influence. To complicate matters, perception of risk (or safety) does not correspond to actual risk. In this paper we propose a way to include the dynamics of such beliefs in a realistic epidemiological model, yielding a more complete depiction of the mechanisms underlying the unraveling of vaccination campaigns. The proposed methodology is based on Bayesian inference and can be extended to model more complex belief systems associated with decision models. We found that the method is able to produce behaviors that approximate what has been observed in real vaccine- and disease-scare situations. The framework presented comprises a set of useful tools for an adequate quantitative representation of a common yet complex public-health issue. These tools include the representation of beliefs as Bayesian probabilities, the use of logarithmic pooling to combine probability distributions representing opinions, and the use of natural conjugate priors to efficiently compute the Bayesian posterior. This approach allowed a comprehensive treatment of the uncertainty regarding vaccination behavior in a realistic epidemiological model.
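Two of the tools named above have convenient closed forms: the logarithmic pool of Beta densities with weights summing to one is again a Beta with weight-averaged parameters, and the conjugate Beta-binomial update then absorbs new evidence. A toy sketch with invented expert opinions and data:

```python
def log_pool_beta(opinions, weights):
    """Log-pool Beta opinions: list of (a, b) pairs, weights summing to 1.

    The pooled density is proportional to the product of the densities
    raised to their weights, which for Betas is Beta(sum w*a, sum w*b).
    """
    a = sum(w * ai for (ai, _), w in zip(opinions, weights))
    b = sum(w * bi for (_, bi), w in zip(opinions, weights))
    return a, b

# Two experts' beliefs about vaccine safety, pooled with weights 0.7/0.3.
a, b = log_pool_beta([(8.0, 2.0), (3.0, 7.0)], [0.7, 0.3])

# New evidence: 95 uneventful vaccinations out of 100 -> conjugate update.
a, b = a + 95, b + 5
print(f"posterior mean belief in safety: {a / (a + b):.3f}")
```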

20.

Background

The Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach is widely implemented in systematic reviews, health technology assessment, and guideline development organisations throughout the world. A key advantage of this approach is that it aids transparency regarding judgments on the quality of evidence. However, the intricacies of making judgments about research methodology and evidence make the GRADE system complex and challenging to apply without training.

Methods

We have developed a semi-automated quality assessment tool (SAQAT) based on GRADE. It is informed by reviewers' responses to checklist questions regarding characteristics that may lead to unreliability. These responses are entered into a Bayesian network to ascertain the probabilities of risk of bias, inconsistency, indirectness, imprecision, and publication bias conditional on review characteristics. The model then combines these probabilities to provide a probability for each of the GRADE overall quality categories. We tested the model using a range of plausible scenarios that guideline developers or review authors could encounter.

Results

Overall, the model reproduced GRADE judgements for a range of scenarios. Potential advantages over standard assessment include the use of explicit and consistent weightings for different review characteristics, the forced consideration of important but sometimes neglected characteristics, and principled downgrading where small but important probabilities of downgrading accrue across domains.

Conclusions

Bayesian networks have considerable potential for use as tools to assess the validity of research evidence. The key strength of such networks lies in providing a statistically coherent method for combining probabilities across a complex framework based on both belief and evidence. In addition to giving less experienced users a tool for implementing reliability assessment, the potential for sensitivity analyses and automation may benefit both the application and the methodological development of reliability tools.
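As a toy illustration of the "principled downgrading" point in the Results, and emphatically not the SAQAT network itself, the sketch below combines independent per-domain downgrade probabilities (invented) into a distribution over the total number of downgrade levels and maps totals onto GRADE categories.

```python
import numpy as np
from itertools import product

domains = {"risk of bias": 0.30, "inconsistency": 0.20,
           "indirectness": 0.10, "imprecision": 0.25,
           "publication bias": 0.15}
levels = ["high", "moderate", "low", "very low"]

# Distribution over the total number of one-level downgrades.
dist = np.zeros(len(domains) + 1)
for outcome in product([0, 1], repeat=len(domains)):
    pr = np.prod([p if d else 1 - p
                  for d, p in zip(outcome, domains.values())])
    dist[sum(outcome)] += pr

for k, pr in enumerate(dist):
    print(f"{levels[min(k, 3)]:>9s} (-{k}): {pr:.3f}")
```

Even when no single domain is likely to trigger a downgrade on its own, the combined probability of at least one downgrade is substantial here, which is the behavior the abstract calls principled downgrading.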

