Similar Documents
Found 20 similar documents (search time: 42 ms)
1.
In the risk assessment methods for new and existing chemicals in the European Union (EU), environmental “risk” is characterized by the deterministic quotient of exposure and effects (PEC/PNEC). From a scientific viewpoint, the uncertainty in the risk quotient should be accounted for explicitly in the decision making, which can be done in a probabilistic risk framework. To demonstrate the feasibility and benefits of such a framework, a sample risk assessment for an existing chemical (dibutylphthalate, DBP) is presented in this paper. The example shows a probabilistic framework to be feasible with relatively little extra effort; such a framework also provides more relevant information. The deterministic risk quotients turned out to be worst-case values, generally higher than the 95th percentile of the probability distributions. Sensitivity analysis proves to be a powerful tool in identifying the main sources of uncertainty and thus will be effective for efficient further testing. The distributions assigned to the assessment factors (derivation of the PNEC) dominate the total uncertainty in the risk assessment; uncertainties in the release estimates come second. Large uncertainties are an inherent part of risk assessment and must be dealt with quantitatively. However, the most appropriate way to characterise effects and risks requires further attention. Recommendations for further study are identified.
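The probabilistic risk quotient described above can be sketched with a short Monte Carlo simulation. The lognormal shapes and all numeric values below are illustrative assumptions, not figures from the DBP assessment:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical lognormal uncertainty distributions (illustrative values only):
# PEC = predicted environmental concentration, PNEC = predicted no-effect concentration.
pec = rng.lognormal(mean=np.log(0.8), sigma=0.5, size=n)    # ug/L
pnec = rng.lognormal(mean=np.log(2.0), sigma=0.8, size=n)   # ug/L

risk_quotient = pec / pnec

# Deterministic worst case: high-end exposure divided by low-end effects.
rq_deterministic = np.percentile(pec, 95) / np.percentile(pnec, 5)

p95 = np.percentile(risk_quotient, 95)
prob_exceed = np.mean(risk_quotient > 1.0)   # P(RQ > 1)

print(f"95th percentile of RQ: {p95:.2f}")
print(f"Deterministic RQ:      {rq_deterministic:.2f}")
print(f"P(RQ > 1):             {prob_exceed:.3f}")
```

With these (assumed) inputs the deterministic quotient lands well above the 95th percentile of the simulated distribution, which is the "worst case" pattern the paper reports.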

2.
In recent years, risk assessors have increasingly been moving away from deterministic risk assessment approaches and are applying probabilistic approaches that incorporate distributions of possible values for each input parameter. This paper reviews several approaches that are being used or that could potentially be used to develop distributions for carcinogenic slope factors (CSFs). Based on the primary tool or framework that is applied, these approaches have been divided into the following three categories: the statistical framework, the decision analysis framework, and the biological framework. Work that has been done on each approach is summarized, and the aspects of variability and uncertainty that are incorporated into each approach are examined. The implications of the resulting distributional information for calculating risk estimates or risk-based concentrations are explored. The approaches differ in their stage of development, the degree to which they diverge from the U.S. Environmental Protection Agency's (EPA) current practice in establishing CSF values, their flexibility to accommodate varying data sets or theories of carcinogenicity, and their complexity of application. In some cases, wide ranges of potential potency estimates are indicated by these approaches. Such findings suggest widely divergent risk assessment implications and the need for additional evaluation of the goals of developing CSF distributions for use in risk assessment applications and the types of information that should be reflected in such distributions. Some combination of the features offered by these approaches may best support risk assessment and risk management decisions.

3.
Climate change impact assessments are plagued with uncertainties from many sources, such as climate projections or the inadequacies in structure and parameters of the impact model. Previous studies tried to account for the uncertainty from one or two of these. Here, we developed a triple‐ensemble probabilistic assessment using seven crop models, multiple sets of model parameters and eight contrasting climate projections together to comprehensively account for uncertainties from these three important sources. We demonstrated the approach in assessing climate change impact on barley growth and yield at Jokioinen, Finland in the Boreal climatic zone and Lleida, Spain in the Mediterranean climatic zone, for the 2050s. We further quantified and compared the contribution of crop model structure, crop model parameters and climate projections to the total variance of ensemble output using Analysis of Variance (ANOVA). Based on the triple‐ensemble probabilistic assessment, the median of simulated yield change was −4% and +16%, and the probability of decreasing yield was 63% and 31% in the 2050s, at Jokioinen and Lleida, respectively, relative to 1981–2010. The contribution of crop model structure to the total variance of ensemble output was larger than that from downscaled climate projections and model parameters. The relative contribution of crop model parameters and downscaled climate projections to the total variance of ensemble output varied greatly among the seven crop models and between the two sites. The contribution of downscaled climate projections was on average larger than that of crop model parameters. This information on the uncertainty from different sources can be quite useful for model users to decide where to put the most effort when preparing or choosing models or parameters for impact analyses.
We concluded that the triple‐ensemble probabilistic approach that accounts for the uncertainties from multiple important sources provides more comprehensive information for quantifying uncertainties in climate change impact assessments as compared to the conventional approaches that are deterministic or only account for the uncertainties from one or two of the uncertainty sources.
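The ANOVA-based variance partitioning used above can be illustrated on a synthetic ensemble. The factor sizes and effect magnitudes below are assumptions, chosen only so that model structure dominates, mirroring the paper's qualitative finding:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: 7 crop models x 3 parameter sets x 8 climate projections.
M, P, C = 7, 3, 8
# Synthetic yield changes (%); model-structure effects are given the largest
# spread on purpose (illustrative numbers only).
model_eff = rng.normal(0, 8, M)[:, None, None]
param_eff = rng.normal(0, 2, P)[None, :, None]
clim_eff = rng.normal(0, 4, C)[None, None, :]
yields = model_eff + param_eff + clim_eff + rng.normal(0, 1, (M, P, C))

grand = yields.mean()
# Main-effect sums of squares from marginal means (balanced fixed-effects ANOVA).
ss_model = P * C * ((yields.mean(axis=(1, 2)) - grand) ** 2).sum()
ss_param = M * C * ((yields.mean(axis=(0, 2)) - grand) ** 2).sum()
ss_clim  = M * P * ((yields.mean(axis=(0, 1)) - grand) ** 2).sum()
ss_total = ((yields - grand) ** 2).sum()

for name, ss in [("crop model", ss_model), ("parameters", ss_param),
                 ("climate", ss_clim)]:
    print(f"{name:>10}: {100 * ss / ss_total:5.1f}% of total variance")
```

The remainder of `ss_total` not captured by the three main effects corresponds to interactions and residual noise.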

4.
In the European Union, Directive 92/32/EC and EC Council Regulation (EC) 793/93 require the risk assessment of industrial chemicals. In this framework, it is agreed to characterise the level of “risk” by means of the deterministic quotient of exposure and effects parameters. Decision makers require that the uncertainty in the risk assessment be accounted for as explicitly as possible. Therefore, this paper intends to show the advantages and possibilities of a probabilistic human health risk assessment of an industrial chemical, dibutylphthalate (DBP). The risk assessment is based on non-cancer endpoints assumed to have a threshold for toxicity. This example risk assessment shows that a probabilistic risk assessment in the EU framework covering both the exposure and the effects assessment is feasible with currently available techniques. It shows the possibility of comparing the various uncertainties involved in a typical risk assessment, including the uncertainty in the exposure estimate, the uncertainty in the effect parameter, and the uncertainty in assessment factors used in the extrapolation from experimental animals to sensitive human beings. The analysis done did not confirm the reasonable worst-case character of the deterministic EU-assessment of DBP. Sensitivity analysis revealed the extrapolation procedure in the human effects assessment to be the main source of uncertainty. Since the probabilistic approach allows determination of the range of possible outcomes and their likelihood, it better informs both risk assessors and risk managers.

5.
Population variability and uncertainty are important features of biological systems that must be considered when developing mathematical models for these systems. In this paper we present probability-based parameter estimation methods that account for such variability and uncertainty. Theoretical results that establish well-posedness and stability for these methods are discussed. A probabilistic parameter estimation technique is then applied to a toxicokinetic model for trichloroethylene using several types of simulated data. Comparison with results obtained using a standard, deterministic parameter estimation method suggests that the probabilistic methods are better able to capture population variability and uncertainty in model parameters.
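The contrast drawn above between a single deterministic fit and a population distribution of parameters can be sketched on a toy one-compartment elimination model. The model, the rate values, and the noise levels are all assumptions for illustration; real trichloroethylene toxicokinetics are far richer:

```python
import numpy as np

rng = np.random.default_rng(1)

# One-compartment elimination: C(t) = C0 * exp(-k t).  Population variability:
# each subject's elimination rate k is drawn from a lognormal (hypothetical values).
times = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # hours
c0, k_med, k_cv = 10.0, 0.35, 0.3
n_subj = 200
k_true = rng.lognormal(np.log(k_med), k_cv, n_subj)

# Simulated noisy concentration measurements for each subject.
conc = c0 * np.exp(-np.outer(k_true, times))
conc *= rng.lognormal(0.0, 0.05, conc.shape)   # multiplicative assay noise

# Deterministic approach: one k fitted to the pooled data (log-linear regression).
k_pooled = -np.polyfit(np.tile(times, n_subj), np.log(conc).ravel(), 1)[0]

# Probabilistic approach: fit k per subject, then summarize the distribution.
k_est = np.array([-np.polyfit(times, np.log(c), 1)[0] for c in conc])
print(f"pooled k = {k_pooled:.3f}")
print(f"k distribution: median {np.median(k_est):.3f}, "
      f"5th-95th pct [{np.percentile(k_est, 5):.3f}, {np.percentile(k_est, 95):.3f}]")
```

The pooled fit collapses the population to one number, while the per-subject distribution retains the spread a deterministic method discards.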

6.
7.
8.
There has been a trend in recent years toward the use of probabilistic methods for the analysis of uncertainty and variability in risk assessment. By developing a plausible distribution of risk, it is possible to obtain a more complete characterization of risk than is provided by either best estimates or upper limits. We describe in this paper a general framework for evaluating uncertainty and variability in risk estimation and outline how this framework can be used in the establishment of drinking water quality objectives. In addition to characterizing uncertainty and variability in risk, this framework also facilitates the identification of specific factors that contribute most to uncertainty and variability. The application of these probabilistic risk assessment methods is illustrated using tetrachloroethylene and trihalomethanes as examples.
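Identifying which factors contribute most to uncertainty, as described above, is commonly done with rank correlations between each input and the simulated risk. The risk equation and every input distribution below are simplified assumptions, not values from the tetrachloroethylene or trihalomethane assessments:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Hypothetical inputs to a drinking-water risk model (illustrative only):
conc = rng.lognormal(np.log(5.0), 0.7, n)     # contaminant concentration, ug/L
intake = rng.lognormal(np.log(1.5), 0.3, n)   # daily water intake, L/day
bw = rng.normal(70.0, 12.0, n).clip(30)       # body weight, kg
slope = rng.lognormal(np.log(2e-2), 0.9, n)   # potency (slope factor), per mg/kg-day

risk = conc * 1e-3 * intake / bw * slope      # simplified lifetime excess risk

# Rank (Spearman-type) correlations flag the inputs driving output uncertainty.
def rank_corr(a, b):
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

for name, v in [("concentration", conc), ("intake", intake),
                ("body weight", bw), ("slope factor", slope)]:
    print(f"{name:>13}: rank corr with risk = {rank_corr(v, risk):+.2f}")
```

Inputs with the largest absolute rank correlation are the ones whose uncertainty most merits further data collection.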

9.
Life‐cycle assessment (LCA) practitioners build models to quantify resource consumption, environmental releases, and potential environmental and human health impacts of product systems. Most often, practitioners define a model structure, assign a single value to each parameter, and build deterministic models to approximate environmental outcomes. This approach fails to capture the variability and uncertainty inherent in LCA. To make good decisions, decision makers need to understand the uncertainty in and divergence between LCA outcomes for different product systems. Several approaches for conducting LCA under uncertainty have been proposed and implemented. For example, Monte Carlo simulation and fuzzy set theory have been applied in a limited number of LCA studies. These approaches are well understood and are generally accepted in quantitative decision analysis. But they do not guarantee reliable outcomes. A survey of approaches used to incorporate quantitative uncertainty analysis into LCA is presented. The suitability of each approach for providing reliable outcomes and enabling better decisions is discussed. Approaches that may lead to overconfident or unreliable results are discussed and guidance for improving uncertainty analysis in LCA is provided.

10.
Life cycle assessment (LCA) will always involve some subjectivity and uncertainty. This reality is especially true when the analysis concerns new technologies. Dealing with uncertainty can generate richer information and minimize some of the result mismatches currently encountered in the literature. As a way of analyzing future fuel cell vehicles and their potential new fuels, the Fuel Upstream Energy and Emission Model (FUEEM), developed at the University of California, Davis, pioneered two different ways to incorporate uncertainty into the analysis. First, the model works with probabilistic curves as inputs and with Monte Carlo simulation techniques to propagate the uncertainties. Second, the project involved the interested parties in the entire process, not only in the critical review phase. The objective of this paper is to present, as a case study, the tools and the methodologies developed to acquire most of the knowledge held by interested parties and to deal with their (sometimes conflicting) interests. The analysis calculation methodology, the scenarios, and all assumed probabilistic curves were derived from a consensus of an international expert network discussion, using existing data in the literature along with new information collected from companies. The main part of the expert discussion process uses a variant of the Delphi technique, focusing on the group learning process through the information feedback feature. A qualitative analysis indicates that a higher level of credibility and a higher quality of information can be achieved through a more participatory process. The FUEEM method works well within technical information and also in establishing a reasonable set of simple scenarios. However, for a complex combination of scenarios, it will require some improvement. The time spent in the process was the major drawback of the method and some alternatives to share this time cost are suggested.

11.
Lin WY, Lee WC. PLoS ONE. 2012;7(2):e32022
Quantifying exposure-disease associations is a central issue in epidemiology. Researchers of a study often present an odds ratio (or a logarithm of odds ratio, logOR) estimate together with its confidence interval (CI), for each exposure they examined. Here the authors advocate using the empirical-Bayes-based 'prediction intervals' (PIs) to bound the uncertainty of logORs. The PI approach is applicable to a panel of factors believed to be exchangeable (no extra information, other than the data itself, is available to distinguish some logORs from the others). The authors demonstrate its use in a genetic epidemiological study on age-related macular degeneration (AMD). The proposed PIs can enjoy straightforward probabilistic interpretations: a 95% PI has a probability of 0.95 to encompass the true value, and the expected number of true values that are being encompassed is 0.95m for a total of m 95% PIs. The PI approach is theoretically more efficient (producing shorter intervals) than the traditional CI approach. In the AMD data, the average efficiency gain is 51.2%. The PI approach is advocated to present the uncertainties of many logORs in a study, for its straightforward probabilistic interpretations and higher efficiency while maintaining the nominal coverage probability.
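The shrinkage idea behind such intervals can be sketched with a simple normal-normal empirical-Bayes scheme; this is a generic method-of-moments illustration, not necessarily the authors' exact formulation, and all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical panel of m exchangeable logOR estimates with known SEs.
m = 30
tau_true, mu_true = 0.3, 0.1
theta = rng.normal(mu_true, tau_true, m)        # true logORs
se = np.full(m, 0.25)
est = rng.normal(theta, se)                     # observed logORs

# Method-of-moments empirical-Bayes estimates under a normal-normal model.
mu_hat = est.mean()
tau2_hat = max(est.var(ddof=1) - np.mean(se**2), 0.0)

# Shrinkage: posterior mean and sd for each true logOR.
shrink = se**2 / (se**2 + tau2_hat)
post_mean = shrink * mu_hat + (1 - shrink) * est
post_sd = np.sqrt(se**2 * tau2_hat / (se**2 + tau2_hat))

z = 1.96
ci_width = 2 * z * se                 # traditional confidence intervals
pi_width = 2 * z * post_sd            # empirical-Bayes intervals (shorter)
print(f"mean CI width: {ci_width.mean():.3f}")
print(f"mean PI width: {pi_width.mean():.3f}")
print(f"width reduction: {100 * (1 - pi_width.mean() / ci_width.mean()):.1f}%")
```

Because the posterior sd is always below the raw SE, the shrunk intervals are uniformly shorter than the CIs, which is the efficiency gain the abstract refers to.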

12.
Management of endangered species requires methods to assess the effects of strategies, providing a basis for deciding on a best course of action. An important component of assessment is population viability analysis (PVA). The latter may be formally implemented through decision analysis (DA). These methods are most useful for conservation when used in conjunction. In this paper we outline the objectives and the potential of both frameworks and their overlaps. Both are particularly helpful when dealing with uncertainty. A major problem for conservation decision-making is the interpretation of observations and scientific measurements. This paper considers probabilistic and non-probabilistic approaches to assessment and decision-making and recommends appropriate contexts for alternative approaches.

13.
14.
This article describes simple games which can be used as aids to teaching the mathematics of exponential and logistic variation of population size. The approach is novel in that the games are based on the laws of chance, that is, they are essentially stochastic (probabilistic) rather than deterministic. Typical results and a mathematical analysis of each game are presented.
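A dice-style game of this kind is easy to simulate. The rules and probabilities below are assumptions for illustration, not the article's actual games:

```python
import random

random.seed(3)

def stochastic_growth(n0, births=0.4, deaths=0.2, steps=25, capacity=None):
    """Each round, every individual reproduces with probability `births`
    and dies with probability `deaths`.  With `capacity` set, the death
    probability rises as the population approaches it (a stochastic
    logistic variant); without it, growth is roughly exponential."""
    n, history = n0, [n0]
    for _ in range(steps):
        d = deaths if capacity is None else deaths + (1 - deaths) * n / capacity
        born = sum(random.random() < births for _ in range(n))
        died = sum(random.random() < d for _ in range(n))
        n = max(n + born - died, 0)
        history.append(n)
    return history

exp_run = stochastic_growth(10)                  # roughly exponential trajectory
log_run = stochastic_growth(10, capacity=100)    # levels off near an equilibrium
print("exponential-style run:", exp_run)
print("logistic-style run:   ", log_run)
```

With these assumed rates the logistic variant fluctuates around the level where births balance deaths (here about 25 individuals), while the unconstrained run keeps growing on average by a factor of 1.2 per step.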

15.
The long-term behavior of the stem-cement interface is one of the most frequent topics of discussion in the design of cemented total hip replacements, especially with regard to the process of damage accumulation in the cement layer. This effect is analyzed here comparing two different situations of the interface: completely bonded and debonded with friction. This comparative analysis is performed using a probabilistic computational approach that considers the variability and uncertainty of determinant factors that directly compromise the damage accumulation in the cement mantle. This stochastic technique is based on the combination of probabilistic finite elements (PFEM) and a cumulative damage approach known as B-model. Three random variables were considered: muscle and joint contact forces at the hip (both for walking and stair climbing), cement damage and fatigue properties of the cement. The results predicted that the regions with higher failure probability in the bulk cement are completely different depending on the stem-cement interface characteristics. In a bonded interface, critical sites appeared at the distal and medial parts of the cement, while for debonded interfaces, the critical regions were found distally and proximally. In bonded interfaces, the failure probability was higher than in debonded ones. The same conclusion may be established for stair climbing in comparison with walking activity.

16.
17.
The importance of fitting distributions to data for risk analysis continues to grow as regulatory agencies, like the Environmental Protection Agency (EPA), continue to shift from deterministic to probabilistic risk assessment techniques. The use of Monte Carlo simulation as a tool for propagating variability and uncertainty in risk requires specification of the risk model's inputs in the form of distributions or tables of data. Several software tools exist to support risk assessors in their efforts to develop distributions. However, users must keep in mind that these tools do not replace clear thought about judgments that must be made in characterizing the information from data. This overview introduces risk assessors to the statistical concepts and physical reasons that support important judgments about appropriate types of parametric distributions and goodness-of-fit. In the context of using data to improve risk assessment and ultimately risk management, this paper discusses issues related to the nature of the data (representativeness; quantity and quality; correlation with space and time; and distinguishing between variability and uncertainty for a set of data), and matching data and distributions appropriately. All data analysis (whether “Frequentist” or “Bayesian” or oblivious to the distinction) requires the use of subjective judgment. The paper offers an iterative process for developing distributions using data to characterize variability and uncertainty for inputs to risk models that provides incentives for collecting better information when the value of information exceeds its cost. Risk analysts need to focus attention on characterizing the information appropriately for purposes of the risk assessment (and risk management questions at hand), not on characterization for its own sake.
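A minimal version of the fit-and-check workflow described above: fit candidate parametric distributions by maximum likelihood and compare goodness of fit. The data are simulated and the lognormal-vs-normal comparison is an assumed example, not one from the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical exposure measurements (ug/L); concentrations are often
# right-skewed, so a lognormal is a physically motivated first candidate.
data = rng.lognormal(mean=1.0, sigma=0.8, size=150)

# Fit candidate parametric distributions by maximum likelihood.
ln_shape, ln_loc, ln_scale = stats.lognorm.fit(data, floc=0)
n_mean, n_sd = data.mean(), data.std(ddof=1)

# Goodness of fit: Kolmogorov-Smirnov statistic for each candidate.
# Note: KS p-values are only approximate when the parameters were
# estimated from the same data being tested.
ks_ln = stats.kstest(data, 'lognorm', args=(ln_shape, ln_loc, ln_scale))
ks_n = stats.kstest(data, 'norm', args=(n_mean, n_sd))
print(f"lognormal: KS stat {ks_ln.statistic:.3f}, p = {ks_ln.pvalue:.3f}")
print(f"normal:    KS stat {ks_n.statistic:.3f}, p = {ks_n.pvalue:.3f}")
```

As the paper stresses, the numerical fit statistic is only one input to the judgment; the physical plausibility of the distribution family matters at least as much.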

18.
Ecologists and biogeographers usually rely on a single phylogenetic tree to study evolutionary processes that affect macroecological patterns. This approach ignores the fact that each phylogenetic tree is a hypothesis about the evolutionary history of a clade, and cannot be directly observed in nature. Also, trees often leave out many extant species, or include missing species as polytomies because of a lack of information on the relationship among taxa. Still, researchers usually do not quantify the effects of phylogenetic uncertainty in ecological analyses. We propose here a novel analytical strategy to maximize the use of incomplete phylogenetic information, while simultaneously accounting for several sources of phylogenetic uncertainty that may distort statistical inferences about evolutionary processes. We illustrate the approach using a clade‐wide analysis of the hummingbirds, evaluating how different sources of uncertainty affect several phylogenetic comparative analyses of trait evolution and biogeographic patterns. Although no statistical approximation can fully substitute for a complete and robust phylogeny, the method we describe and illustrate enables researchers to broaden the number of clades for which studies informed by evolutionary relationships are possible, while allowing the estimation and control of statistical error that arises from phylogenetic uncertainty. Software tools to carry out the necessary computations are offered.

19.
MOTIVATION: The study of carbohydrate sugar chains, or glycans, has been one of slow progress mainly due to the difficulty in establishing standard methods for analyzing their structures and biosynthesis. Glycans are generally tree structures that are more complex than linear DNA or protein sequences, and evidence shows that patterns in glycans may be present that spread across siblings and into further regions that are not limited by the edges in the actual tree structure itself. Current models were not able to capture such patterns. RESULTS: We have applied a new probabilistic model, called probabilistic sibling-dependent tree Markov model (PSTMM), which is able to inherently capture such complex patterns of glycans. Not only is the ability to capture such patterns important in itself, but this also implies that PSTMM is capable of performing multiple tree structure alignments efficiently. We prove through experimentation on actual glycan data that this new model is extremely useful for gaining insight into the hidden, complex patterns of glycans, which are so crucial for the development and functioning of higher level organisms. Furthermore, we also show that this model can be additionally utilized as an innovative approach to multiple tree alignment, which has not been applied to glycan chains before. This extension on the usage of PSTMM may be a major step forward for not only the structural analysis of glycans, but it may consequently prove useful for discovering clues into their function.

20.
Clustering of multivariate data is a commonly used technique in ecology, and many approaches to clustering are available. The results from a clustering algorithm are uncertain, but few clustering approaches explicitly acknowledge this uncertainty. One exception is Bayesian mixture modelling, which treats all results probabilistically, and allows comparison of multiple plausible classifications of the same data set. We used this method, implemented in the AutoClass program, to classify catchments (watersheds) in the Murray Darling Basin (MDB), Australia, based on their physiographic characteristics (e.g. slope, rainfall, lithology). The most likely classification found nine classes of catchments. Members of each class were aggregated geographically within the MDB. Rainfall and slope were the two most important variables that defined classes. The second-most likely classification was very similar to the first, but had one fewer class. Increasing the nominal uncertainty of continuous data resulted in a most likely classification with five classes, which were again aggregated geographically. Membership probabilities suggested that a small number of cases could be members of either of two classes. Such cases were located on the edges of groups of catchments that belonged to one class, with a group belonging to the second-most likely class adjacent. A comparison of the Bayesian approach to a distance-based deterministic method showed that the Bayesian mixture model produced solutions that were more spatially cohesive and intuitively appealing. The probabilistic presentation of results from the Bayesian classification allows richer interpretation, including decisions on how to treat cases that are intermediate between two or more classes, and whether to consider more than one classification. The explicit consideration and presentation of uncertainty makes this approach useful for ecological investigations, where both data and expectations are often highly uncertain.  
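AutoClass performs fully Bayesian mixture modelling; a maximum-likelihood analogue (EM for a two-component Gaussian mixture) already illustrates its key output, a membership probability for every case. The one-dimensional descriptor and all values below are simulated assumptions, not the MDB catchment data:

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical catchment descriptor (e.g. a standardized rainfall index)
# drawn from two overlapping groups; values only illustrate the method.
x = np.concatenate([rng.normal(-2, 1, 60), rng.normal(2, 1, 60)])

# EM for a two-component 1-D Gaussian mixture: unlike a hard (deterministic)
# clustering, it yields a membership probability for every case.
w, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: responsibility of each component for each point.
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and standard deviations.
    nk = resp.sum(axis=0)
    w, mu = nk / len(x), (resp * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

# Cases with membership probabilities near 0.5 sit between two classes,
# like the edge catchments discussed in the abstract.
ambiguous = int(np.sum(np.abs(resp[:, 0] - 0.5) < 0.3))
print(f"estimated component means: {np.sort(mu).round(2)}")
print(f"cases with ambiguous membership: {ambiguous} of {len(x)}")
```

The probabilistic memberships, rather than hard labels, are what let an analyst decide explicitly how to treat cases intermediate between classes.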
