Similar Documents
1.
Panel discussion     

Risk assessment is part of the risk analysis process as it is used in veterinary medicine to estimate risks related to international trade and food safety. Data from monitoring and surveillance systems (MO&SS) are used throughout the risk assessment process for hazard identification, release assessment, exposure assessment and consequence assessment. Because the quality of risk assessments depends to a large extent on the availability and quality of input data, there is a close relationship between MO&SS and risk assessment. To improve the quality of risk assessments, first, MO&SS should be designed according to minimum quality standards. Second, recent scientific developments in state-of-the-art survey design and analysis need to be translated into field applications and legislation. Finally, knowledge of the risk assessment process among MO&SS planners and managers should be promoted in order to assure high-quality data.


2.
Definitions of epidemiological concepts regarding disease monitoring and surveillance can be found in textbooks on veterinary epidemiology. This paper reviews how the concepts of monitoring, surveillance, and disease control strategies are defined. Monitoring and surveillance systems (MO&SS) involve measurements of disease occurrence, and the design of the monitoring determines which measures of disease occurrence can be applied. However, knowledge of the performance of diagnostic tests (sensitivity and specificity) is essential to estimate the true occurrence of the disease. The terms disease control programme (DCP) and disease eradication programme (DEP) are defined, and the steps of a DCP/DEP are described to illustrate that they are a process rather than a static MO&SS.
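As a concrete illustration of how test sensitivity and specificity enter prevalence estimation, the classical Rogan-Gladen adjustment can be sketched as follows; the survey figures used in the example are hypothetical:

```python
def true_prevalence(apparent_prev, sensitivity, specificity):
    """Rogan-Gladen estimator: adjust the apparent (test-based) prevalence
    for imperfect diagnostic test performance."""
    if sensitivity + specificity <= 1.0:
        raise ValueError("test must be informative (Se + Sp > 1)")
    est = (apparent_prev + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(max(est, 0.0), 1.0)  # clamp to the valid [0, 1] range

# Hypothetical survey: 12% of animals test positive with a test of
# 90% sensitivity and 95% specificity.
print(round(true_prevalence(0.12, 0.90, 0.95), 4))
```

Note how the estimated true prevalence (about 8.2%) is lower than the apparent 12%, because part of the apparent positives are false positives from the 5% specificity gap.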

4.
The 1983 book, Risk Assessment in the Federal Government: Managing the Process, recommended developing consistent inference guidelines for cancer risk assessment. Over the last 15 years, extensive guidance has been provided for hazard assessment for cancer and other endpoints. However, as noted in several recent reports, much less progress has occurred in developing consistent guidelines for quantitative dose-response assessment methodologies. This paper proposes an approach for dose-response assessment guided by consideration of mode of action (pharmacodynamics) and tissue dosimetry (pharmacokinetics). As articulated here, this systematic process involves eight steps in which available information is integrated, leading first to quantitative analyses of dose-response behaviors in the test species, followed by quantitative analyses of relevant human exposures. The process should be equally appropriate for both cancer and noncancer endpoints. The eight steps describe the necessary procedures for incorporating mechanistic data and provide multiple options based upon the mode of action by which the chemical causes the toxicity. Given the range of issues involved in developing such a procedure, we have simply sketched the process, focusing on major approaches for using toxicological data and on major options; many details remain to be filled in. However, consistent with the revised carcinogen risk assessment guidance (USEPA, 1996c), we propose a process that would ultimately utilize biologically based or chemical-specific pharmacokinetic and pharmacodynamic models as the backbone of these analyses. In the nearer term, these approaches will be combined with analysis of data using more empirical models, including options intended for use in the absence of detailed information. A major emphasis in developing any harmonized process is distinguishing policy decisions from those decisions that are affected by the quality and quantity of toxicological data. Identification of data limitations also identifies areas where further study should reduce uncertainty in the final risk evaluations. A flexible dose-response assessment procedure is needed to ensure that sound toxicological study results are appropriately used to influence risk management decision-making and to encourage the conduct of toxicological studies oriented toward application in dose-response assessments.

5.
Risk assessment tools for listing invasive alien species need to incorporate all available evidence and expertise. Beyond the wealth of protocols developed to date, we argue that the current way of performing risk analysis has several shortcomings. In particular, the lack of data on ecological impacts, the transparency and repeatability of assessments, and the incorporation of uncertainty should all be explicitly considered. We recommend improved quality control of risk assessments through formalized peer review with clear feedback between assessors and reviewers. Alternatively, a consensus-building process can be applied to better capture the opinions of different experts, thereby maximizing the evidential basis. Elaborating on the manageability of invasive species is further needed to fully answer all risk analysis requirements. Tackling the issue of invasive species demands better handling of the acquired information on risk and the exploration of improved methods for decision making on biodiversity management. This is crucial for efficient allocation of conservation resources and for uptake by stakeholders and the public.

6.
Human and ecological health risk assessments and the decisions that stem from them require the acquisition and analysis of data. In agencies that are responsible for health risk decision-making, data (and/or opinions/judgments) are obtained from sources such as scientific literature, analytical and process measurements, expert elicitation, inspection findings, and public and private research institutions. Although the particulars of conducting health risk assessments in different disciplines may be dramatically different, a common concern is the subjective nature of judging data utility. Often risk assessors are limited to available data that may not be completely appropriate to address the question being asked. Data utility refers to the ability of available data to support a risk-based decision for a particular risk assessment. This article familiarizes the audience with the concept of data utility and is intended to raise the awareness of data collectors (e.g., researchers), risk assessors, and risk managers to data utility issues in health risk assessments so that data collection and use will be improved. In order to emphasize the cross-cutting nature of data utility, the discussion has not been organized into a classical partitioning of risk assessment concerns as being either human health- or ecological health-oriented, as per the U.S. Environmental Protection Agency's Superfund Program.

7.
Genotoxicity risk assessment: a proposed classification strategy
Recent advances in genetic toxicity (mutagenicity) testing methods and in approaches to performing risk assessment are prompting a renewed effort to harmonize genotoxicity risk assessment across the world. The US Environmental Protection Agency (EPA) first published Guidelines for Mutagenicity Risk Assessment in 1986 that focused mainly on transmissible germ cell genetic risk. Somatic cell genetic risk has also been a risk consideration, usually in support of carcinogenicity assessments. EPA and other international regulatory bodies have published mutagenicity testing requirements for agents (pesticides, pharmaceuticals, etc.) to generate data for use in genotoxicity risk assessments. The scheme that follows provides a proposed harmonization approach in which genotoxicity assessments are fully developed within the risk assessment paradigm used by EPA, and sets out a process that integrates newer thinking in testing battery design with the risk assessment process. A classification strategy for agents based on inherent genotoxicity, dose-responses observed in the data, and an exposure analysis is proposed. The classification leads to an initial level of concern for genotoxic risk to humans. A total risk characterization is performed using all relevant toxicity data and a comprehensive exposure evaluation in association with the genotoxicity data. The result of this characterization is ultimately used to generate a final level of concern for genotoxic risk to humans. The final level of concern and characterized genotoxicity risk assessment are communicated to decision makers for possible regulatory action(s) and to the public.

9.
This paper provides recommendations on experimental design for early-tier laboratory studies used in risk assessments to evaluate potential adverse impacts of arthropod-resistant genetically engineered (GE) plants on non-target arthropods (NTAs). While we rely heavily on the currently used proteins from Bacillus thuringiensis (Bt) in this discussion, the concepts apply to other arthropod-active proteins. A risk may exist if the newly acquired trait of the GE plant has adverse effects on NTAs when they are exposed to the arthropod-active protein. Typically, the risk assessment follows a tiered approach that starts with laboratory studies under worst-case exposure conditions; such studies have a high ability to detect adverse effects on non-target species. Clear guidance on how such data are produced in laboratory studies assists the product developers and risk assessors. The studies should be reproducible and test clearly defined risk hypotheses. These properties contribute to the robustness of, and confidence in, environmental risk assessments for GE plants. Data from NTA studies, collected during the analysis phase of an environmental risk assessment, are critical to the outcome of the assessment and ultimately the decision taken by regulatory authorities on the release of a GE plant. Confidence in the results of early-tier laboratory studies is a precondition for the acceptance of data across regulatory jurisdictions and should encourage agencies to share useful information and thus avoid redundant testing.

10.
Qualitative risk assessment methods are often used as the first step to determining design space boundaries; however, quantitative assessments of risk with respect to the design space, i.e., calculating the probability of failure for a given severity, are needed to fully characterize design space boundaries. Quantitative risk assessment methods in design and operational spaces are a significant aid to evaluating proposed design space boundaries. The goal of this paper is to demonstrate a relatively simple strategy for design space definition using a simplified Bayesian Monte Carlo simulation. This paper builds on a previous paper that used failure mode and effects analysis (FMEA) qualitative risk assessment and Plackett-Burman design of experiments to identify the critical quality attributes. The results show that the sequential use of qualitative and quantitative risk assessments can focus the design of experiments on a reduced set of critical material and process parameters that determine a robust design space under conditions of limited laboratory experimentation. This approach provides a strategy by which the degree of risk associated with each known parameter can be calculated and allocates resources in a manner that manages risk to an acceptable level.
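The quantitative step described above, estimating the probability of failing a specification at a candidate operating point by Monte Carlo simulation, can be sketched as follows. The process model, parameter distributions, and acceptance limit are all hypothetical stand-ins, not the paper's actual case study:

```python
import random

def failure_probability(temp_mean, ph_mean, n=50_000, seed=1):
    """Estimate P(quality attribute out of spec) at a candidate operating
    point, propagating hypothetical batch-to-batch parameter noise."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        temp = rng.gauss(temp_mean, 1.0)   # process temperature, +/- 1 degree
        ph = rng.gauss(ph_mean, 0.1)       # batch pH, +/- 0.1
        # hypothetical response surface for the critical quality attribute
        impurity = 0.1 + 0.02 * (temp - 25.0) ** 2 + 0.5 * abs(ph - 7.0)
        if impurity > 1.0:                 # illustrative acceptance limit
            failures += 1
    return failures / n

# A point well inside a putative design space fails far less often
# than one near its edge:
print(failure_probability(25.0, 7.0) < failure_probability(30.0, 7.4))
```

Mapping this failure probability over a grid of candidate operating points is what turns a qualitative boundary sketch into a probabilistically defined design space.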

11.
As materials intended to be brought into contact with food, food contact materials (FCMs) – including plastics, paper or inks – can transfer their constituents to food under normal or foreseeable use, including direct or indirect food contact. The safety of FCMs in the EU is evaluated by the European Food Safety Authority (EFSA) using risk assessment rules. Results of independent, health-based chemical risk assessments are crucial for the decision-making process to authorize the use of substances in FCMs. However, the risk assessment approach used in the EU has several shortcomings that need to be improved in order to ensure consumer health protection from exposure arising from FCMs. This article presents the use of meta-analysis as a useful tool in chronic risk assessment for substances migrating from FCMs. Meta-analysis can be used for the review and summary of research on FCM safety in order to provide a more accurate assessment of the impact of exposure with increased statistical power, thus providing more reliable data for risk assessment. The article explains a common methodology for conducting a meta-analysis, illustrated by a meta-analysis of the dose-effect relationship of cadmium used in the benchmark dose evaluations performed by EFSA.
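The pooling step at the heart of such a meta-analysis can be sketched with a standard inverse-variance, fixed-effect combination. The study estimates below are invented for illustration; they are not EFSA's cadmium data:

```python
import math

def fixed_effect_pool(estimates, std_errors):
    """Inverse-variance (fixed-effect) pooled estimate and standard error:
    each study is weighted by 1/SE^2, so precise studies count for more."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical dose-effect slope estimates with their standard errors:
est, se = fixed_effect_pool([1.2, 0.9, 1.5], [0.3, 0.2, 0.5])
print(round(est, 3), round(se, 3))
```

The pooled standard error is smaller than any single study's, which is the "increased statistical power" the abstract refers to; a random-effects model would be used instead when between-study heterogeneity is substantial.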

12.
It is difficult to overstate the complexity of assessing risks from chemical mixtures. For every valid reason to assess risks from mixtures, there appears an equally valid question as to whether it is possible to do so in a scientifically rigorous and relevant manner. Because so few data exist for mixtures, current mixture assessment methods must rely on untested assumptions and simplifications. That the accuracy of risk estimates improves with the number of chemicals assessed together as mixtures is a valid assumption only if assessment methods for mixtures are better than those based on individual chemicals. On the other hand, arbitrarily truncating a mixture assessment to make it manageable may lead to irrelevant risk estimates. Ideally, mixture assessments should be as broad as necessary to improve accuracy and reduce uncertainty over assessments that only use toxicity data for single chemicals. Further broadening the scope may be ill advised because of the tendency to increase rather than decrease uncertainty. Risk assessment methods that seek to be comprehensive at the expense of increased uncertainty can hardly be viewed as improvements. It would be prudent to verify that uncertainty can be reduced before burdening the risk assessment process with more complexity.
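One widely used simplification for non-cancer mixture screening, the hazard index under the dose-additivity assumption, illustrates exactly the kind of untested simplification the passage describes. The exposure levels and reference doses below are hypothetical:

```python
def hazard_index(exposures, reference_doses):
    """Hazard index under the dose-additivity assumption: the sum of each
    chemical's hazard quotient (exposure / reference dose).
    HI > 1 flags the mixture for further evaluation."""
    return sum(e / rfd for e, rfd in zip(exposures, reference_doses))

# Hypothetical three-chemical mixture, doses in mg/kg-day:
hi = hazard_index([0.01, 0.002, 0.05], [0.05, 0.004, 0.5])
print(round(hi, 2))  # below the screening threshold of 1
```

The additivity assumption ignores interactions (synergism, antagonism) among components, which is precisely why the abstract cautions that accuracy does not automatically improve as more chemicals are folded into one assessment.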

13.
In 1966, Levins presented a philosophical discussion on making inference about populations using clusters of models. In this article we provide an overview of model inference in ecological risk assessment, discuss the benefits and trade-offs of increasing model realism, show the similarities and differences between Levins' model clusters and those used in ecological risk assessment, and present how risk assessment models can incorporate Levins' ideas of truth through independent lies. Two aspects of Levins' philosophy are directly relevant to risk assessment. First, confidence in our interpretation of risk is increased when multiple risk assessments yield similar qualitative results. Second, model clusters should be evaluated to determine whether they maximize precision, generality, or realism, or a mix of the three. In the latter case, the evaluation of each model will differ depending on whether it is more general, precise, or realistic relative to the other models used. We conclude that risk assessments can be strengthened using Levins' idea, but that Levins' caution that model outcome should not be mistaken for truth is still applicable.

14.
We give a mini-review of existing European risk assessment procedures and present a newly developed and tested risk assessment tool for invasive alien species (IAS) in Germany and Austria, the “German–Austrian Black List Information System” (GABLIS). Based on the analysis of existing European national risk assessment systems, we analyse and discuss: the assessment criteria used; which impacts of IAS (biodiversity, economy) have been considered; for which taxonomic groups the assessment has been designed and tested; how many and which list categories have been used; and the status of the assessment, i.e. legally binding or advisory. We found that the application of risk assessment systems in Europe started belatedly; recently, however, a considerable number of assessment systems have been developed and tested. These systems encompass a wide range of purposes and approaches, and so far no common standard on the aspects mentioned above has emerged. GABLIS has been developed as a trans-national and taxonomically universal risk assessment system, which takes into account solely the detrimental effects of alien species on biodiversity. We describe which kinds of impacts are considered and how the thresholds have been scaled. We present the structure of the list categories, and we discuss the necessary underlying data for assessment, the assessment criteria and their scaling, and the assessment procedure. Five basic and six complementary criteria are used to assess an alien species' impact. GABLIS includes three main list categories (White List, Grey List, and Black List). We discuss the practicability of GABLIS by presenting the assessment results for a model taxon (fish) and the assessment protocol for a vascular plant species. We discuss the data quality necessary for assessments and the factors that account for differences in the assessments between the two countries. We also report on experience gained in conducting assessments (e.g., the average time an assessment requires). The lessons learnt are discussed in the national and European political context of IAS management. Finally, we explore the strengths and caveats of this approach in the context of national policy on IAS in Germany and Austria and the ongoing European political initiatives. GABLIS is intended to serve as a comprehensive, flexible, but robust risk assessment tool for Central Europe. Being a trans-national risk assessment tool, GABLIS also tests principles which might contribute valuable insights to a future overall strategy against IAS in Europe.
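The mapping from criterion scores to list categories that such a system performs might be sketched as below. The scoring scale, aggregation rule, and thresholds are hypothetical stand-ins for illustration only, not the actual GABLIS protocol:

```python
def list_category(basic_scores):
    """Assign a list category from five basic impact criteria, each scored
    0 (no evidence of impact) to 2 (documented major impact).
    Thresholds and the aggregation rule are illustrative, not GABLIS's."""
    if len(basic_scores) != 5:
        raise ValueError("expected scores for five basic criteria")
    total = sum(basic_scores)
    if max(basic_scores) == 2 or total >= 6:
        return "Black List"   # documented threat to biodiversity
    if total >= 2:
        return "Grey List"    # potential threat, evidence inconclusive
    return "White List"       # no known threat

print(list_category([0, 1, 0, 0, 0]))
```

A rule of this shape makes the assessment transparent and repeatable: two assessors who agree on the criterion scores necessarily agree on the list category, which is one way trans-national consistency between the two countries can be checked.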

15.
Definition of the term bioavailability varies in the environmental sciences. In human health risk assessment, bioavailability is defined as the fraction of the dose of chemical delivered that is absorbed into the systemic circulation. Bioavailability can be expressed as either absolute or relative bioavailability, and both are important in calculating risks from contaminants in soils. Bioavailability of chemicals is addressed in all risk assessments, although not always in a transparent manner. Because data on bioavailability are limited, approximations and assumptions regarding chemical uptake are extensively used. The risk assessment process could benefit from new information on the bioavailability of chemicals, but there are important questions about the best means to develop this information and how it should be used. To foster discussion on these issues, three articles are presented in this issue of the journal offering different perspectives on bioavailability method development, validation, and use.
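The distinction between absolute and relative bioavailability can be made concrete with the standard area-under-the-curve (AUC) definitions; the doses and AUC values below are invented for illustration:

```python
def absolute_bioavailability(auc_oral, dose_oral, auc_iv, dose_iv):
    """Fraction of an oral dose reaching systemic circulation, referenced
    against an intravenous dose (AUC = area under the blood
    concentration/time curve, dose-normalized)."""
    return (auc_oral / dose_oral) / (auc_iv / dose_iv)

def relative_bioavailability(auc_soil, auc_reference):
    """Soil-borne vs. reference-vehicle bioavailability at equal doses;
    usable directly as a multiplier on the soil exposure estimate."""
    return auc_soil / auc_reference

# Hypothetical study: at equal doses, the soil matrix halves absorption
# relative to the dosing vehicle used in the toxicity study.
print(relative_bioavailability(auc_soil=4.0, auc_reference=8.0))
```

A relative bioavailability of 0.5 in this sketch would halve the soil ingestion term of the exposure estimate, which is how such data "can be used directly to modify exposure estimates" as the abstract states.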

16.
Soil microbial toxicity tests are seldom used in ecological risk assessments or in the development of regulatory criteria in the U.S. The primary reason is the lack of an explicit connection between these tests and assessment endpoints. Soil microorganisms have three potential roles with respect to ecological assessment endpoints: properties of microbial communities may be endpoints; microbial responses may be used to estimate effects on plant production; and microbial responses may be used as surrogates for responses of higher organisms. Rates of microbial processes are important to ecosystem function, and thus should be valued by regulatory agencies. However, the definition of the microbial assessment endpoint is often an impediment to its use in risk assessment. Decreases in rates are not always undesirable. Processes in a nutrient cycle are particularly difficult to define as endpoints, because what constitutes an adverse effect on one process depends on the rates of the others. Microbial tests may be used as evidence in an assessment of plant production, but the dependence of plants on microbial processes is rarely considered. As assessment endpoints become better defined in the future, microbial ecologists and toxicologists should be given more direction for developing appropriate microbial tests.

17.
Chemicals present in contaminated soils generally exhibit altered bioavailability compared to other vehicles used in studies of chemical toxicity. Methods used to assess the bioavailability of soil-borne chemicals have generally been modified versions of methods that are widely used in biomedical research. Oral and dermal bioavailability of semivolatile organic chemicals and metals in soil has been assessed by a variety of in vivo and in vitro methods. Due to variations in metabolism and excretion of different chemicals, approaches to measuring bioavailability must be selected with an understanding of disposition of the chemical being studied. Standard methods need to be modified due to constraints associated with doses relevant to environmental concentrations, the need to reflect weathering behavior in soils over time, and the need to generate data applicable to human health risk assessments. Estimates of relative bioavailability for chemicals in soil can be used directly to modify exposure estimates. Application of bioavailability data in a site-specific risk assessment requires regulatory acceptance of the data. Acceptance of the data will generally be dependent on either the use of a validated test method or a careful scientific review of the test method employed. A process for validating newly developed alternative toxicity methods for routine use developed by the Interagency Coordinating Committee on the Validation of Alternative Methods provides relevant guidance for assessing in vitro methods, but method validation should not be the only litmus test for inclusion of bioavailability data in risk assessments.

19.
Clinical guidelines recommend that violence risk be assessed in schizophrenia. Current approaches are resource-intensive, as they employ detailed clinical assessments of dangerousness for most patients. An alternative approach would be to first screen out patients at very low risk of future violence prior to more costly and time-consuming assessments. In order to implement such a stepped strategy, we developed a simple tool to screen out individuals with schizophrenia at very low risk of violent offending. We merged high-quality Swedish national registers containing information on psychiatric diagnoses, socio-demographic factors, and violent crime. A cohort of 13,806 individuals with hospital discharge diagnoses of schizophrenia was identified and followed for up to 33 years for violent crime. Cox regression was used to determine risk factors for violent crime and construct the screening tool, the predictive validity of which was measured using four outcome statistics. The instrument was calibrated on 6,903 participants and cross-validated using three independent replication samples of 2,301 participants each. Regression analyses resulted in a tool composed of five items: male sex, previous criminal conviction, young age at assessment, comorbid alcohol abuse, and comorbid drug abuse. At 5 years after discharge, the instrument had a negative predictive value of 0.99 (95% CI = 0.98–0.99), meaning that very few of the individuals whom the tool screened out (n = 2,359 out of an original sample of 6,903) were subsequently convicted of a violent offence. Screening out patients who are at very low risk of violence prior to more detailed clinical assessment may assist the risk assessment process in schizophrenia.
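The headline statistic of such a screen, the negative predictive value, is computed directly from the screening outcomes, as this sketch shows; the counts below are illustrative, not the Swedish cohort data:

```python
def negative_predictive_value(true_neg, false_neg):
    """Proportion of screened-out individuals who truly did not have the
    outcome (here: no violent conviction during follow-up). A value near 1
    means screening out is safe for the vast majority."""
    return true_neg / (true_neg + false_neg)

# Illustrative counts: of 2,400 screened out, 24 later offended.
npv = negative_predictive_value(true_neg=2376, false_neg=24)
print(round(npv, 2))
```

Note that NPV depends on the outcome's base rate as well as on the tool, which is why cross-validation in independent replication samples, as described above, is needed before relying on the figure in a new population.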

20.
An “expansive” risk assessment approach is illustrated, characterizing dose–response relationships for salmonellosis in light of the full body of evidence for human and murine superorganisms. Risk assessments often require analysis of costs and benefits to support public health decisions. Decision-makers and the public need to understand the uncertainty in such analyses for two reasons. First, uncertainty analyses provide a range of possibilities within a framework of present scientific knowledge, helping to avoid undesirable consequences associated with the selected policies. Second, they encourage risk assessors to scrutinize all available data and models, helping to avoid subjective or systematic errors. Without a full analysis of uncertainty, decisions could be biased by judgments based solely on default assumptions, beliefs, and statistical analyses of selected correlative data. Alternative data and theories that incorporate variability and heterogeneity for the human and murine superorganisms, particularly colonization resistance, are emerging as major influences on microbial risk assessment. Salmonellosis risk assessments are often based on conservative default models derived from selected sets of outbreak data that overestimate illness. Consequently, the full extent of uncertainty in estimates of the annual number of illnesses is not incorporated in risk assessments, and the presently used models may be incorrect.
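A common default form for salmonellosis dose-response, the approximate beta-Poisson model, is one example of the "conservative default models" the passage questions. It can be sketched as follows; the parameter values are illustrative, not fitted to outbreak data:

```python
def beta_poisson(dose, alpha, beta):
    """Approximate beta-Poisson dose-response model: probability of illness
    after ingesting `dose` organisms, with host/pathogen heterogeneity
    captured by the shape parameters alpha and beta."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

# Illustrative parameters; predicted risk rises monotonically with dose.
low = beta_poisson(10, alpha=0.3, beta=2000)
high = beta_poisson(100_000, alpha=0.3, beta=2000)
print(low < high)
```

Because alpha and beta are typically estimated from selected outbreak data, propagating their uncertainty (rather than using point estimates) is exactly the kind of full uncertainty analysis the abstract argues for.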


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)