Similar Articles
20 similar articles found
1.
The results of quantitative risk assessments are key factors in a risk manager's decision about whether actions to reduce risk are necessary. The extent of uncertainty in the assessment largely determines the degree of confidence a risk manager can place in the reported significance and probability of a given risk. The two main sources of uncertainty in such risk assessments are variability and incertitude. In this paper we use two methods, a second-order (two-dimensional) Monte Carlo analysis and probability bounds analysis, to investigate the impact of both types of uncertainty on the results of a food-web exposure model. We demonstrate how the full extent of uncertainty in a risk estimate can be portrayed in a way that is useful to risk managers. We show that probability bounds analysis is a useful tool for identifying the parameters that contribute most to uncertainty in a risk estimate and how it can complement established practices in risk assessment. We conclude by promoting the use of probability bounds analysis in conjunction with Monte Carlo analysis as a method for checking how plausible Monte Carlo results are in the full context of uncertainty.
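To make the distinction concrete, the following minimal Python sketch runs a second-order (two-dimensional) Monte Carlo analysis: the outer loop samples incertitude about distribution parameters, the inner loop samples variability across individuals, and the envelope of the simulated CDFs gives a rough, sampling-based analogue of the bounds a probability bounds analysis would construct rigorously. The one-line dose model, parameter names, and all distributions are illustrative assumptions, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(42)

    N_UNCERTAINTY = 200   # outer loop: incertitude about distribution parameters
    N_VARIABILITY = 1000  # inner loop: variability across individuals/organisms

    def dose(conc, intake, body_weight):
        # Hypothetical one-line exposure model (mg/kg/day); stands in for the food-web model.
        return conc * intake / body_weight

    cdfs = []
    for _ in range(N_UNCERTAINTY):
        # Incertitude: the true mean/sd of concentration are known only within ranges.
        mu_conc = rng.uniform(0.8, 1.2)      # assumed range for mean concentration (mg/kg)
        sd_conc = rng.uniform(0.1, 0.4)      # assumed range for its log-scale spread

        # Variability: individual-to-individual differences, given those parameters.
        conc = rng.lognormal(np.log(mu_conc), sd_conc, N_VARIABILITY)
        intake = rng.normal(0.05, 0.01, N_VARIABILITY).clip(min=1e-6)  # kg/day, assumed
        bw = rng.normal(1.0, 0.2, N_VARIABILITY).clip(min=0.1)         # kg, assumed

        cdfs.append(np.sort(dose(conc, intake, bw)))

    cdfs = np.array(cdfs)
    p95 = cdfs[:, int(0.95 * N_VARIABILITY)]   # 95th percentile of variability, per outer draw
    print("95th-percentile dose: median %.3f, 90%% uncertainty interval [%.3f, %.3f]"
          % (np.median(p95), np.percentile(p95, 5), np.percentile(p95, 95)))

    # Pointwise envelope over the simulated CDFs: a rough, sampling-based analogue of the
    # bounds that a probability bounds analysis would construct rigorously.
    lower_env, upper_env = cdfs.min(axis=0), cdfs.max(axis=0)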

2.
Drugs, sex and HIV: a mathematical model for New York City.
A data-based mathematical model was formulated to assess the epidemiological consequences of heterosexual, intravenous drug use (IVDU), and perinatal transmission in New York City (NYC). The model was analysed to clarify the relationship between heterosexual and IVDU transmission and to provide qualitative and quantitative insights into the HIV epidemic in NYC. The results demonstrated the significance of the dynamic interaction of heterosexual and IVDU transmission. Scenario analysis of the model suggested a new explanation for the stabilization of the seroprevalence level observed in the NYC IVDU community; the proposed explanation does not rely on any IVDU or sexual behavioural changes. Gender-specific risks of heterosexual transmission in IVDUs were also explored by scenario analysis. The results showed that the effect of the heterosexual transmission risk factor on increasing the risk of HIV infection depends upon the level of IVDU. The model was used to predict future numbers of adult and pediatric AIDS cases; a sensitivity analysis of the model showed that the confidence intervals on these predictions were extremely wide. This prediction variability was due to the uncertainty in estimating the values of the model's thirty variables (twenty biological-behavioural transmission parameters and the initial sizes of ten subgroups). However, the sensitivity analysis revealed that only a few key variables contributed significantly to the AIDS case prediction variability; partial rank correlation coefficients were calculated and used to identify and rank the importance of these key variables. The results suggest that precise long-term estimates of the future number of AIDS cases will only be possible once the values of these key variables have been evaluated accurately.
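The partial rank correlation coefficients (PRCC) mentioned above can be computed by rank-transforming the sampled inputs and outputs and correlating the regression residuals. The sketch below assumes a hypothetical three-parameter toy model in place of the actual NYC transmission model; the parameter names and ranges are invented for illustration.

    import numpy as np
    from scipy.stats import rankdata

    rng = np.random.default_rng(0)

    def toy_model(x):
        # Stand-in for the HIV transmission model: case count from 3 hypothetical inputs.
        beta_sex, beta_needle, n0 = x.T
        return n0 * np.exp(2.0 * beta_needle + 0.5 * beta_sex)

    # Uncertain inputs sampled over assumed ranges (plain uniform sampling for brevity).
    n = 500
    X = np.column_stack([
        rng.uniform(0.01, 0.10, n),   # per-partnership sexual transmission prob. (assumed)
        rng.uniform(0.05, 0.30, n),   # per-sharing needle transmission prob. (assumed)
        rng.uniform(1e4, 5e4, n),     # initial infected population (assumed)
    ])
    y = toy_model(X)

    def prcc(X, y):
        """Partial rank correlation of each column of X with y."""
        R = np.column_stack([rankdata(c) for c in X.T])
        ry = rankdata(y)
        out = []
        for j in range(R.shape[1]):
            others = np.delete(R, j, axis=1)
            A = np.column_stack([np.ones(len(ry)), others])
            # Residuals of X_j and y after removing the linear effect of the other ranked inputs.
            res_x = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
            res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
            out.append(np.corrcoef(res_x, res_y)[0, 1])
        return out

    for name, c in zip(["beta_sex", "beta_needle", "N0"], prcc(X, y)):
        print(f"PRCC({name}) = {c:+.2f}")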

3.
The importance of fitting distributions to data for risk analysis continues to grow as regulatory agencies, such as the Environmental Protection Agency (EPA), shift from deterministic to probabilistic risk assessment techniques. The use of Monte Carlo simulation as a tool for propagating variability and uncertainty in risk requires that the risk model's inputs be specified as distributions or tables of data. Several software tools exist to support risk assessors in their efforts to develop distributions. However, users must keep in mind that these tools do not replace clear thought about the judgments that must be made in characterizing the information from data. This overview introduces risk assessors to the statistical concepts and physical reasons that support important judgments about appropriate types of parametric distributions and goodness-of-fit. In the context of using data to improve risk assessment and ultimately risk management, this paper discusses issues related to the nature of the data (representativeness, quantity and quality, correlation with space and time, and distinguishing between variability and uncertainty for a set of data) and to matching data and distributions appropriately. All data analysis (whether “Frequentist” or “Bayesian” or oblivious to the distinction) requires the use of subjective judgment. The paper offers an iterative process for developing distributions from data to characterize variability and uncertainty in inputs to risk models, with incentives for collecting better information when the value of information exceeds its cost. Risk analysts need to focus attention on characterizing the information appropriately for the purposes of the risk assessment (and the risk management questions at hand), not on characterization for its own sake.

4.
Ecohydrologic models are a key tool in understanding plant–water interactions and their vulnerability to environmental change. Although the implications of uncertainty in these models are often assessed within a strictly hydrologic context (for example, runoff modeling), the implications of uncertainty for estimating vegetation water use are less frequently considered. We assess the influence of commonly used model parameters and inputs on predictions of catchment-scale evapotranspiration (ET) and runoff. By clarifying the implications of uncertainty, we identify strategies for ensuring that the quality of the data used to drive models is considered when interpreting model predictions. Our assessment also provides insight into unique features of semi-arid, urbanizing watersheds that shape ET patterns. We consider four sources of uncertainty (soil parameters, irrigation inputs, and spatial extrapolation of both point precipitation and air temperature) for an urbanizing, semi-arid coastal catchment in Santa Barbara, CA. Our results highlight a seasonal transition from soil parameters to irrigation inputs as key controls on ET. Both ET and runoff show substantial sensitivity to uncertainty in soil parameters, even after parameters have been calibrated against observed streamflow. Sensitivity to uncertainty in precipitation manifested primarily in winter runoff predictions, whereas sensitivity to irrigation manifested exclusively in modeled summer ET. Neither ET nor runoff was highly sensitive to uncertainty in spatial interpolation of temperature. These results argue that efforts to improve ecohydrologic modeling of vegetation water use and associated water-limited ecological processes in semi-arid regions should focus on improving estimates of anthropogenic outdoor water use and on explicit accounting of soil parameter uncertainty.

5.
A guideline is presented for the selection of sensitivity analysis methods applied to microbial food safety process risk (MFSPR) models. The guideline provides useful boundaries and principles for selecting sensitivity analysis methods for MFSPR models. Although the guideline is predicated on a specific branch of risk assessment models related to food-borne diseases, the principles and recommendations provided are generally applicable to other types of risk models. Applicable situations include: prioritizing potential critical control points; identifying key sources of variability and uncertainty; and refinement, verification, and validation of a model. Based on the objective of the analysis, the characteristics of the model under study, the amount of detail expected from the sensitivity analysis, and the characteristics of the sensitivity analysis method, recommendations for selecting sensitivity analysis methods are provided. A decision framework for method selection is introduced, which can substantially facilitate the process of selecting a sensitivity analysis method.

6.
Purpose

Objective uncertainty quantification (UQ) of a product life cycle assessment (LCA) is a critical step for decision-making. Environmental impacts can be measured directly or estimated using models. A model is described by underlying mathematical functions that approximate the environmental impacts during the various LCA stages. In this study, three possible uncertainty sources of a mathematical model were investigated: input variability, model parameter uncertainty (distinguished from inputs in this study), and model-form uncertainty. A simple, easy-to-implement method is proposed to quantify each source.

Methods

Various data analytics methods were used to conduct a thorough model uncertainty analysis: (1) interval analysis was used for input uncertainty quantification, with direct sampling by Monte Carlo (MC) simulation compared against indirect nonlinear optimization as an alternative approach, and a machine learning surrogate model was developed to perform both the direct MC sampling and the indirect nonlinear optimization; (2) Bayesian inference was adopted to quantify parameter uncertainty; (3) a recently introduced model correction method based on orthogonal polynomial basis functions was used to evaluate the model-form uncertainty. The methods were applied to a pavement LCA to propagate uncertainties through an energy and global warming potential (GWP) estimation model, using the case of a pavement section in the Chicago metropolitan area.

Results and discussion

Results indicate that each uncertainty source contributes to the overall energy and GWP output of the LCA. Input uncertainty was shown to have a significant impact on the overall GWP output; for the example case study, the GWP interval was around 50%. Parameter uncertainty results showed that an assumption of ±10% uniform variation in the model parameter priors resulted in 28% variation in the GWP output. Model-form uncertainty had the lowest impact (less than 10% variation in the GWP), because the original energy model is relatively accurate in estimating the energy. However, a sensitivity analysis of the model-form uncertainty showed that variation of up to 180% in the results can occur when the original model is less accurate.

Conclusions

Investigating each uncertainty source of the model demonstrated the importance of accurate characterization, propagation, and quantification of uncertainty. This study proposes independent, relatively easy-to-implement methods that provide robust grounds for objective model uncertainty analysis in LCA applications. Assumptions on inputs, parameter distributions, and model form need to be justified. Input uncertainty plays a key role in the overall pavement LCA output. The proposed model correction method, as well as the interval analysis, was relatively easy to implement. Research is still needed to develop a more generic and simplified MCMC simulation procedure that is fast to implement.

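As a rough illustration of the input (interval) uncertainty propagation described under Methods, the sketch below pushes assumed input intervals through a hypothetical pavement GWP function by direct Monte Carlo sampling and cross-checks the result against the corner points of the input box (a stand-in for the indirect optimization step). The function, coefficients, and bounds are all assumptions, not the study's model.

    import numpy as np

    rng = np.random.default_rng(1)

    def gwp_model(asphalt_t, haul_km, mix_temp_c):
        # Hypothetical stand-in for the pavement energy/GWP model (kg CO2-eq);
        # coefficients are illustrative only.
        energy_mj = 275.0 * asphalt_t + 1.1 * asphalt_t * haul_km + 0.9 * asphalt_t * mix_temp_c
        return 0.07 * energy_mj

    # Input uncertainty expressed as intervals (assumed bounds, not values from the paper).
    bounds = {"asphalt_t": (900.0, 1100.0),
              "haul_km": (20.0, 60.0),
              "mix_temp_c": (140.0, 170.0)}

    # Direct Monte Carlo sampling over the input box to approximate the output interval.
    n = 100_000
    samples = {k: rng.uniform(lo, hi, n) for k, (lo, hi) in bounds.items()}
    gwp = gwp_model(samples["asphalt_t"], samples["haul_km"], samples["mix_temp_c"])
    mc_lo, mc_hi = gwp.min(), gwp.max()

    # For this monotone model the exact interval comes from the corner points, which serves
    # as the cross-check that indirect optimization would provide more generally.
    corners = [gwp_model(a, h, t) for a in bounds["asphalt_t"]
               for h in bounds["haul_km"] for t in bounds["mix_temp_c"]]
    print(f"MC interval:     [{mc_lo:,.0f}, {mc_hi:,.0f}] kg CO2-eq")
    print(f"Corner interval: [{min(corners):,.0f}, {max(corners):,.0f}] kg CO2-eq")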

7.
Jager, Henriette I.; King, Anthony W. Ecosystems 2004, 7(8): 841–847
Applied ecological models that are used to understand and manage natural systems often rely on spatial data as input. Spatial uncertainty in these data can propagate into model predictions. Uncertainty analysis, sensitivity analysis, error analysis, error budget analysis, spatial decision analysis, and hypothesis testing using neutral models are all techniques designed to explore the relationship between variation in model inputs and variation in model predictions. These approaches address different questions, although similar methods can be used to answer them. They differ in (a) whether the focus is forward or backward (forward to evaluate the magnitude of variation propagated into model predictions, or backward to rank input parameters by their influence); (b) whether the question involves model robustness to large variations in spatial pattern or to small deviations from a reference map; and (c) whether the processes that generate input uncertainty (for example, cartographic error) are of interest. In this commentary, we propose a taxonomy of approaches, all of which clarify the relationship between spatial uncertainty and the predictions of ecological models. We describe existing techniques and indicate a few areas where research is needed.

8.

Purpose

Identification of key inputs and their effect on results from Life Cycle Assessment (LCA) models is fundamental. Because parameter importance varies greatly between cases due to the interaction of sensitivity and uncertainty, these features should never be defined a priori. However, exhaustive parametric uncertainty analyses can be complicated and demanding with both analytical and sampling methods. Therefore, we propose a systematic method for the selection of critical parameters based on a simplified analytical formulation that unifies the concepts of sensitivity and uncertainty in a Global Sensitivity Analysis (GSA) framework.

Methods

The proposed analytical method, based on the calculation of sensitivity coefficients (SC), is evaluated against Monte Carlo sampling within traditional uncertainty assessment procedures, both for individual parameters and for full parameter sets. Three full-scale waste management scenarios are modelled with the dedicated waste LCA model EASETECH and a full range of ILCD-recommended impact categories. Common uncertainty ranges of 10% are used for all parameters, which we assume to be normally distributed. The applicability of the concepts of additivity of variances and GSA is tested on results from both uncertainty propagation methods. We then examine the differences in discernibility analysis results obtained with varying numbers of sampling points and parameters.

Results and discussion

The proposed analytical method agrees with the Monte Carlo results for all scenarios and impact categories, but offers a substantially simpler mathematical formulation and shorter computation times. The coefficients of variation obtained with the analytical method and with Monte Carlo differ by only 1%, indicating that the analytical method provides a reliable representation of uncertainties and allows determination of whether a discernibility analysis is required. The additivity of variances and the GSA approach show that the uncertainty in results is determined by a limited set of important parameters. The results of the discernibility analysis based on these critical parameters vary by only 1% from discernibility analyses based on the full parameter set, but require significantly fewer Monte Carlo runs.

Conclusions

The proposed method and GSA framework provide a fast and valuable approximation for uncertainty quantification. Uncertainty can be represented sparsely by contextually identifying important parameters in a systematic manner. The proposed method integrates with existing step-wise approaches for uncertainty analysis by introducing a global importance analysis before uncertainty propagation.
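A minimal sketch of the analytical route described above, under the assumption of a simple hypothetical LCA characterization function: first-order sensitivity coefficients are obtained by finite differences at the nominal point, combined with 10% input uncertainties through additivity of variances, and compared against a Monte Carlo estimate. Neither the function nor the parameter values come from the EASETECH scenarios.

    import numpy as np

    rng = np.random.default_rng(2)

    def impact(x):
        # Hypothetical LCA result as a function of three inventory parameters (illustrative only).
        return 4.0 * x[0] + 1.5 * x[0] * x[1] + 0.2 * x[2] ** 2

    mu = np.array([10.0, 2.0, 5.0])        # nominal parameter values (assumed)
    sigma = 0.10 * mu                      # 10% normal uncertainty, as in the abstract

    # Analytical route: first-order sensitivity coefficients SC_i = df/dx_i at the nominal point,
    # then additivity of variances  Var(y) ~ sum_i (SC_i * sigma_i)^2.
    eps = 1e-6
    sc = np.array([(impact(mu + eps * np.eye(3)[i]) - impact(mu)) / eps for i in range(3)])
    var_analytical = np.sum((sc * sigma) ** 2)
    contributions = (sc * sigma) ** 2 / var_analytical   # normalized importance of each parameter

    # Monte Carlo route for comparison.
    X = rng.normal(mu, sigma, size=(50_000, 3))
    var_mc = impact(X.T).var()

    print("sensitivity coefficients:", np.round(sc, 3))
    print("contribution to variance:", np.round(contributions, 3))
    print(f"std dev: analytical {var_analytical**0.5:.3f}, Monte Carlo {var_mc**0.5:.3f}")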

9.
Quantification of the uncertainty associated with risk estimates is an important part of risk assessment. In recent years, the use of second-order distributions and two-dimensional simulations has been suggested for quantifying both variability and uncertainty. These approaches are better interpreted within the Bayesian framework. To help practitioners better use such methods and interpret the results, this article describes the propagation and interpretation of uncertainty in the Bayesian paradigm. We consider both the estimation problem, where some summary measures of the risk distribution (e.g., mean, variance, or selected percentiles) are to be estimated, and the prediction problem, where the risk values for specific individuals are to be predicted. We discuss some connections and differences between uncertainties in estimation and prediction problems, and present an interpretation of the decomposition of total variability/uncertainty into variability and uncertainty in terms of the expected squared error of prediction and its reduction under perfect information. We also discuss the role of Monte Carlo methods in characterizing uncertainty. We explain the basic ideas using a simple example, and demonstrate Monte Carlo calculations using another example from the literature.
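The decomposition discussed above corresponds to the law of total variance; the sketch below illustrates it with a two-dimensional simulation in which an uncertain (epistemic) parameter sets the distribution of a variable (aleatory) risk quantity. All distributions are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(3)

    # Two-dimensional simulation of a risk quantity: theta is uncertain (epistemic),
    # individual risk given theta is variable (aleatory).
    n_outer, n_inner = 2000, 2000
    theta = rng.normal(1.0, 0.25, n_outer)               # uncertain mean of log-risk (assumed)
    risk = rng.lognormal(theta[:, None], 0.5, (n_outer, n_inner))

    total_var = risk.var()
    var_uncertainty = risk.mean(axis=1).var()            # Var_theta[ E(risk | theta) ]
    var_variability = risk.var(axis=1).mean()            # E_theta[ Var(risk | theta) ]

    # Law of total variance: total = variability + uncertainty. The uncertainty term is the
    # reduction in expected squared prediction error that perfect knowledge of theta would buy.
    print(f"total variance        {total_var:.3f}")
    print(f"variability component {var_variability:.3f}")
    print(f"uncertainty component {var_uncertainty:.3f} (sum = {var_variability + var_uncertainty:.3f})")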

10.
We use bootstrap simulation to characterize uncertainty in parametric distributions, including the Normal, Lognormal, Gamma, Weibull, and Beta, commonly used to represent variability in probabilistic assessments. Bootstrap simulation enables one to estimate sampling distributions for sample statistics, such as distribution parameters, even when analytical solutions are not available. Using a two-dimensional framework for both uncertainty and variability, uncertainties in cumulative distribution functions were simulated. The mathematical properties of uncertain frequency distributions were evaluated in a series of case studies in which the parameters of each type of distribution were varied for sample sizes of 5, 10, and 20. For positively skewed distributions such as the Lognormal, Weibull, and Gamma, the range of uncertainty is widest at the upper tail of the distribution. For symmetric unbounded distributions, such as the Normal, the uncertainties are widest at both tails of the distribution. For bounded distributions, such as the Beta, the uncertainties are typically widest in the central portion of the distribution. Bootstrap simulation enables complex dependencies between sampling distributions to be captured. The effects of uncertainty, variability, and parameter dependencies were studied for several generic functional forms of models, including models in which two-dimensional random variables are added, multiplied, and divided, to show the sensitivity of model results to different assumptions regarding model input distributions, ranges of variability, and ranges of uncertainty, and to show the types of errors that may result from mis-specification of parameter dependence. A total of 1,098 case studies were simulated. In some cases, counter-intuitive results were obtained. For example, the point value of the 95th percentile of uncertainty for the 95th percentile of variability of the product of four Gamma or Weibull distributions decreases as the coefficient of variation of each model input increases, and therefore may not provide a conservative estimate. Failure to properly characterize parameter uncertainties and their dependencies can lead to orders-of-magnitude mis-estimates of both variability and uncertainty. In many cases, the numerical stability of two-dimensional simulation results was found to decrease as the coefficient of variation of the inputs increases. We discuss the strengths and limitations of bootstrap simulation as a method for quantifying uncertainty due to random sampling error.
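A minimal parametric bootstrap in the spirit of the approach described above, assuming a small synthetic Lognormal sample: parameters are re-estimated from replicate samples of the same size, yielding uncertainty intervals around selected percentiles of the variability distribution. Sample size, parameters, and percentiles are illustrative only.

    import numpy as np

    rng = np.random.default_rng(4)

    # Small observed sample (n = 10) assumed to come from a Lognormal distribution.
    data = rng.lognormal(mean=0.5, sigma=0.8, size=10)        # synthetic "observations"
    mu_hat, sd_hat = np.log(data).mean(), np.log(data).std(ddof=1)

    # Parametric bootstrap: re-estimate the parameters from replicate samples of the same size.
    B = 2000
    boot_params = []
    for _ in range(B):
        resample = rng.lognormal(mu_hat, sd_hat, size=len(data))
        logs = np.log(resample)
        boot_params.append((logs.mean(), logs.std(ddof=1)))

    # Uncertainty band around the fitted CDF at a few percentiles of variability.
    probs = np.array([0.05, 0.50, 0.95])
    z = np.array([-1.645, 0.0, 1.645])                        # standard normal quantiles
    quantile_draws = np.array([np.exp(m + s * z) for m, s in boot_params])
    for i, p in enumerate(probs):
        lo, hi = np.percentile(quantile_draws[:, i], [2.5, 97.5])
        print(f"{int(p*100)}th percentile of variability: 95% uncertainty interval [{lo:.2f}, {hi:.2f}]")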

11.
The objective of a long-term stability experiment is to confirm analyte stability in a given biological matrix over the period from sample collection to sample analysis for a clinical or preclinical study. While long-term analyte stability has been identified as a key component of bioanalytical method validation, current regulatory guidance provides no specific recommendations regarding the design and analysis of such experiments. This paper reviews and evaluates various experimental designs, data analysis methods, and acceptance criteria for the assessment of long-term analyte stability. Statistical equivalence tests based on linear regression techniques are advocated. Both a nested-errors and a bivariate mixed-model regression approach are suitable for long-term stability assessment and control the risk of falsely concluding stability.

12.
Goal, Scope and Background

Decision-makers demand information about the range of possible outcomes of their actions. Therefore, for developing Life Cycle Assessment (LCA) as a decision-making tool, Life Cycle Inventory (LCI) databases should provide uncertainty information. Approaches for incorporating uncertainty should be selected contingent upon the characteristics of the LCI database. For example, in industry-based LCI databases where large amounts of up-to-date process data are collected, statistical methods may be useful for quantifying the uncertainties. In practice, however, there is still a lack of knowledge as to which statistical methods are most effective for obtaining the required parameters. Another concern from the industry's perspective is the confidentiality of the process data. The aim of this paper is to propose a procedure for incorporating uncertainty information with statistical methods in industry-based LCI databases, which at the same time preserves the confidentiality of individual data.

Methods

The proposed procedure for taking uncertainty in industry-based databases into account has two components: continuous probability distributions fitted to scattered unit process data, and rank order correlation coefficients between inventory flows. The type of probability distribution is selected using statistical methods such as goodness-of-fit statistics or experience-based approaches. Parameters of probability distributions are estimated using maximum likelihood estimation. Rank order correlation coefficients are calculated for inventory items in order to preserve data interdependencies. Such probability distributions and rank order correlation coefficients may be used in Monte Carlo simulations to quantify the uncertainty in LCA results as a probability distribution.

Results and Discussion

A case study is performed on the technology selection of polyethylene terephthalate (PET) chemical recycling systems. Three processes are evaluated based on CO2 reduction relative to the conventional incineration technology. To illustrate the application of the proposed procedure, assumptions were made about the uncertainty of LCI flows. The application of the probability distributions and the rank order correlation coefficients is shown, and a sensitivity analysis is performed. A potential use of the results of the hypothetical case study is discussed.

Conclusion and Outlook

The case study illustrates how the uncertainty information in LCI databases may be used in LCA. Since actual scattered unit process data were not available for the case study, the uncertainty distribution of the LCA result is hypothetical. However, the merit of adopting the proposed procedure has been illustrated: more informed decision-making becomes possible by basing decisions on the significance of the LCA results. With this illustration, the authors hope to encourage both database developers and data suppliers to incorporate uncertainty information in LCI databases.

13.
In the risk assessment methods for new and existing chemicals in the European Union (EU), environmental “risk” is characterized by the deterministic quotient of exposure and effects (PEC/PNEC). From a scientific viewpoint, the uncertainty in the risk quotient should be accounted for explicitly in the decision making, which can be done in a probabilistic risk framework. To demonstrate the feasibility and benefits of such a framework, a sample risk assessment for an existing chemical (dibutyl phthalate, DBP) is presented in this paper. The example shows a probabilistic framework to be feasible with relatively little extra effort; such a framework also provides more relevant information. The deterministic risk quotients turned out to be worst cases, generally lying above the 95th percentile of the probability distributions. Sensitivity analysis proves to be a powerful tool for identifying the main sources of uncertainty and thus will be effective for directing further testing efficiently. The distributions assigned to the assessment factors (derivation of the PNEC) dominate the total uncertainty in the risk assessment; uncertainties in the release estimates come second. Large uncertainties are an inherent part of risk assessment and must be dealt with quantitatively. However, the most appropriate way to characterise effects and risks requires further attention. Recommendations for further study are identified.

14.
This study aims to quantitatively assess the risk that pesticides used in Irish agriculture and their degradation products pose to groundwater and human health. The assessment uses a human health Monte Carlo risk-based approach that combines the leached quantity with an exposure estimate and the No Observed Adverse Effect Level (NOAEL) as a toxicity ranking endpoint, resulting in a chemical intake toxicity ratio statistic (R) for each pesticide. A total of 34 active substances and their metabolites registered and used in agriculture were evaluated. MCPA obtained the highest rank (i.e., in order of decreasing human health risk), followed by desethyl-terbuthylazine and deethylatrazine (with risk ratio values of 1.1 × 10⁻⁵, 9.5 × 10⁻⁶, and 5.8 × 10⁻⁶, respectively). A sensitivity analysis revealed that the soil organic carbon content and the soil sorption coefficient were the most important parameters affecting model predictions (correlation coefficients of −0.60 and −0.58, respectively), highlighting the importance of soil and pesticide properties in influencing risk estimates. The analysis highlights the importance of taking a risk-based approach when assessing pesticide risk. The model can help to prioritize pesticides with potentially negative human health effects for monitoring programs, as opposed to traditional approaches based on pesticide leaching potential.

15.
Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to design biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both the inputs and the structure of the HSI models to model outputs (uncertainty analysis: UA) and the relative importance of uncertain model inputs and their interactions for model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but the distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. The ranking of input parameter sensitivities also varied spatially for both species, with high-quality habitat sites showing sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing a means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust decisions.

16.

Purpose

Input parameters required to quantify environmental impact in life cycle assessment (LCA) can be uncertain due to, for example, temporal variability or unknowns about the true value of emission factors. The uncertainty of environmental impact can be analysed by means of a global sensitivity analysis to gain more insight into the output variance. This study aimed to (1) give insight into and (2) compare methods for global sensitivity analysis in life cycle assessment, with a focus on the inventory stage.

Methods

Five methods that quantify the contribution to output variance were evaluated: squared standardized regression coefficients, squared Spearman correlation coefficients, key issue analysis, Sobol' indices, and random balance design. To compare the performance of these global sensitivity methods, two case studies were constructed: a small hypothetical case study describing electricity production that is sensitive to a small change in the input parameters, and a large case study describing the production system of a northeast Atlantic fishery. Input parameters with relatively small and relatively large input uncertainties were constructed. The comparison of the sensitivity methods was based on four aspects: (I) sampling design, (II) output variance, (III) explained variance, and (IV) contribution to output variance of individual input parameters.

Results and discussion

The evaluation of the sampling design (I) relates to the computational effort of a sensitivity method. Key issue analysis does not make use of sampling and was fastest, whereas the Sobol' method had to generate two sampling matrices and was therefore slowest. Each method produced approximately the same total output variance (II), except key issue analysis, which underestimated the variance, especially for large input uncertainties. The explained variance (III) and the contribution to variance (IV) for small input uncertainties were best quantified by the squared standardized regression coefficients and the main Sobol' index. For large input uncertainties, the Spearman correlation coefficients and the Sobol' indices performed best. The comparison, however, was based on two case studies only.

Conclusions

Most methods for global sensitivity analysis performed equally well, especially for relatively small input uncertainties. Under the assumption that quantification of environmental impact in LCAs behaves linearly, squared standardized regression coefficients, squared Spearman correlation coefficients, Sobol' indices, or key issue analysis can be used for global sensitivity analysis. The choice among these methods depends on the available data, the magnitude of the data uncertainties, and the aim of the study.
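For reference, the two regression-based measures compared above can be computed in a few lines; the sketch below uses a hypothetical, nearly linear three-input inventory model and assumed input uncertainties, so the numerical values have no connection to the case studies.

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(5)

    def lca_output(X):
        # Hypothetical, nearly linear inventory model: impact = sum of emission * factor terms.
        return 2.0 * X[:, 0] + 5.0 * X[:, 1] + 0.5 * X[:, 2]

    n = 5000
    X = np.column_stack([rng.normal(1.0, 0.05, n),    # input 1, small uncertainty (assumed)
                         rng.normal(1.0, 0.10, n),    # input 2, larger uncertainty (assumed)
                         rng.normal(1.0, 0.20, n)])   # input 3 (assumed)
    y = lca_output(X)

    # Squared standardized regression coefficients (SRC^2): fit a linear model on the samples,
    # scale each coefficient by sigma_x / sigma_y, and square. For a linear model these sum
    # to roughly 1 and give each input's contribution to output variance.
    A = np.column_stack([np.ones(n), X])
    coef = np.linalg.lstsq(A, y, rcond=None)[0][1:]
    src2 = (coef * X.std(axis=0) / y.std()) ** 2

    # Squared Spearman correlation coefficients as the rank-based alternative.
    rho2 = np.array([spearmanr(X[:, j], y)[0] ** 2 for j in range(3)])

    print("SRC^2          :", np.round(src2, 3), " sum =", round(src2.sum(), 3))
    print("Spearman rho^2 :", np.round(rho2, 3))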

17.
Four different probabilistic risk assessment methods were compared using the data from the Sangamo Weston/Lake Hartwell Superfund site. These were one-dimensional Monte Carlo, two-dimensional Monte Carlo considering uncertainty in the concentration term, two-dimensional Monte Carlo considering uncertainty in ingestion rate, and microexposure event analysis. Estimated high-end risks ranged from 2.0 × 10⁻⁴ to 3.3 × 10⁻³. Microexposure event analysis produced a lower risk estimate than any of the other methods due to incorporation of time-dependent changes in the concentration term.

18.
Aim  Although parameter estimates are not as affected by spatial autocorrelation as Type I errors are, the change from classical null hypothesis significance testing to model selection under an information-theoretic approach does not completely avoid the problems caused by spatial autocorrelation. Here we briefly review the model selection approach based on the Akaike information criterion (AIC) and present a new routine for the Spatial Analysis in Macroecology (SAM) software that helps establish minimum adequate models in the presence of spatial autocorrelation.
Innovation  We illustrate how a model selection approach based on the AIC can be used with geographical data by modelling patterns of mammal species richness in South America, represented in a grid system (n = 383) with 2° resolution, as a function of five environmental explanatory variables, performing an exhaustive search for minimum adequate models under three regression methods: non-spatial ordinary least squares (OLS), spatial eigenvector mapping, and the autoregressive (lagged-response) model. The models selected by the spatial methods included a smaller number of explanatory variables than the one selected by OLS, and the minimum adequate models contain different explanatory variables, although model averaging revealed a similar ranking of explanatory variables.
Main conclusions  We stress that the AIC is sensitive to the presence of spatial autocorrelation, generating unstable and overfitted minimum adequate models for describing macroecological data when based on non-spatial OLS regression. Alternative regression techniques provided different minimum adequate models and have different uncertainty levels. Despite this, the averaged model based on Akaike weights generates consistent and robust results across the different methods and may be the best approach for understanding macroecological patterns.
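The non-spatial part of such an exhaustive AIC-based model search can be sketched as follows, using synthetic data in place of the South American mammal data and plain OLS only (the spatial eigenvector and autoregressive variants would additionally need a spatial weights matrix); AIC differences and Akaike weights are then computed over all candidate predictor subsets. All data and coefficients are invented for illustration.

    import itertools
    import numpy as np

    rng = np.random.default_rng(6)

    # Synthetic "macroecological" data: richness driven by 2 of 5 candidate predictors.
    n = 383
    env = rng.normal(size=(n, 5))                         # 5 standardized environmental variables
    richness = 50 + 8 * env[:, 0] + 4 * env[:, 1] + rng.normal(0, 5, n)

    def aic_ols(y, X):
        """AIC of an ordinary least squares fit with Gaussian errors."""
        A = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss = ((y - A @ beta) ** 2).sum()
        k = A.shape[1] + 1                                # coefficients + error variance
        return len(y) * np.log(rss / len(y)) + 2 * k

    # Exhaustive search over all non-empty predictor subsets (the OLS branch only).
    models = []
    for subset in itertools.chain.from_iterable(
            itertools.combinations(range(5), r) for r in range(1, 6)):
        models.append((subset, aic_ols(richness, env[:, list(subset)])))

    aics = np.array([a for _, a in models])
    delta = aics - aics.min()
    weights = np.exp(-0.5 * delta) / np.exp(-0.5 * delta).sum()   # Akaike weights
    for i in np.argsort(aics)[:3]:
        print(f"predictors {models[i][0]}  dAIC = {delta[i]:.2f}  weight = {weights[i]:.2f}")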

19.
Statistical methods for the validation of toxicological in vitro test assays are developed and applied. Validation is performed either in comparison with in vivo assays or in comparison with other in vitro assays of established validity. Biostatistical methods are presented that are of potential use and benefit for the validation of alternative methods for the risk assessment of chemicals, providing through in vitro toxicity testing at least an equivalent level of protection to that obtained through the use of current in vivo methods. Characteristic indices are developed and determined. Qualitative outcomes are characterised by the rates of false-positive and false-negative predictions, sensitivity and specificity, and predictive values. Quantitative outcomes are characterised by regression coefficients derived from predictive models. The receiver operating characteristic (ROC) technique, applicable when a continuum of cut-off values is considered, is discussed in detail in relation to its use for statistical modelling and statistical inference. The methods presented are examined for their use in the proof of safety and in toxicity detection and testing. We emphasise that the ultimate benchmark for toxicity testing is human toxicity, and that the in vivo test itself is only a predictor with an inherent uncertainty. Therefore, the validation of the in vitro test has to account for the vagueness and uncertainty of the "gold standard" in vivo test. We address model selection and model validation, and a four-step scheme is proposed for the conduct of validation studies. Gaps and research needs are identified to improve the validation of alternative methods for in vitro toxicity testing.

20.
We revisit the assumptions associated with the derivation and application of species sensitivity distributions (SSDs). Our questions are: (1) Do SSDs clarify or obscure the setting of ecological effects thresholds for risk assessment? and (2) Do SSDs reduce or introduce uncertainty in risk assessment? Our conclusion is that if we could determine a community sensitivity distribution, this would provide a better estimate of an ecologically relevant effects threshold and would therefore be an improvement for risk assessment. However, the distributions generated are typically based on haphazard collections of species and endpoints; by adjusting these to reflect more realistic trophic structures, we show that effects thresholds can be shifted, but in a direction and to an extent that is not predictable. Despite claims that the SSD approach uses all available data to assess effects, we demonstrate that in certain frequently used applications only a small fraction of the species going into the SSD determines the effects threshold. If the SSD approach is to lead to better risk assessments, improvements are needed in how the theory is put into practice. This requires careful definition of the risk assessment targets and of the species and endpoints selected for use in generating SSDs.
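To illustrate the mechanics being questioned above, the sketch below fits a log-normal SSD to a handful of hypothetical toxicity endpoints, derives the HC5 (the concentration expected to exceed the tolerance of 5% of species), and shows how much the HC5 moves when the species set changes; the endpoint values are invented, not from any real data set.

    import numpy as np
    from scipy.stats import norm

    # Hypothetical chronic toxicity endpoints (e.g., NOECs in ug/L) for a handful of species;
    # values are illustrative only.
    noec = np.array([3.2, 7.5, 12.0, 18.0, 25.0, 40.0, 66.0, 110.0])

    # Fit a log-normal species sensitivity distribution and take its 5th percentile (HC5).
    log_noec = np.log10(noec)
    mu, sd = log_noec.mean(), log_noec.std(ddof=1)
    hc5 = 10 ** (mu + norm.ppf(0.05) * sd)
    print(f"log10 mean = {mu:.2f}, log10 sd = {sd:.2f}, HC5 = {hc5:.1f} ug/L")

    # The point about haphazard species selection can be probed by refitting the SSD on
    # random subsets and watching how the HC5 shifts.
    rng = np.random.default_rng(7)
    subset_hc5 = []
    for _ in range(1000):
        sub = np.log10(rng.choice(noec, size=5, replace=False))
        subset_hc5.append(10 ** (sub.mean() + norm.ppf(0.05) * sub.std(ddof=1)))
    print("HC5 across random 5-species subsets: %.1f to %.1f ug/L"
          % (min(subset_hc5), max(subset_hc5)))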
