Similar literature
20 similar records found
1.
Multiple lines of evidence (LOE) are often considered when examining the potential impact of contaminated sediment. Three strategies are explored for combining information within and/or among different LOE. One technique uses a multivariate strategy for clustering sites into groups of similar impact. A second method employs meta-analysis to pool empirically derived P-values. The third method uses a quantitative estimation of probability derived from odds ratios. These three strategies are compared with respect to a set of data describing reference conditions and a contaminated area in the Great Lakes. Common themes in these three strategies include the critical issue of defining an appropriate set of reference/control conditions, the definition of impact as a significant departure from the normal variation observed in the reference conditions, and the use of distance from the reference distribution to define any of the effect measures. Reasons for differences in results between the three approaches are explored and strategies for improving the approaches are suggested.
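As a rough illustration of the second strategy (pooling empirically derived P-values), the following hypothetical Python sketch applies Fisher's method to made-up endpoint P-values; the specific pooling rule and the numbers are assumptions, not details taken from the study.

    import math
    from scipy import stats

    p_values = [0.04, 0.20, 0.008, 0.55]  # hypothetical per-endpoint P-values
    # Fisher's method: -2 * sum(ln p) follows a chi-square distribution with 2k df under the null
    chi2_stat = -2.0 * sum(math.log(p) for p in p_values)
    pooled_p = stats.chi2.sf(chi2_stat, df=2 * len(p_values))
    print(f"pooled P-value = {pooled_p:.4f}")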

2.
A broad range of perspectives exists regarding the interpretation of potentially adverse ecological changes in ecological risk assessments conducted under Superfund and RCRA. While USEPA's Proposed Guidelines for Ecological Risk Assessment recommend determining whether predicted changes are adverse based on the nature of effects, intensity of effects, spatial scale, temporal scale, and potential for recovery, the guidelines do not provide specific standards for judging adversity. Hence, implementation of the proposed guidelines varies with each risk manager's subjective judgments regarding the relative importance of each of these five criteria. In an effort to increase consistency in the scientific interpretation of ecological risk assessments, the following practices are recommended. First, measures of effects should focus on levels of ecological organization that are more complex than the individual organism. Second, multiple lines of evidence should be evaluated for each assessment endpoint. Third, bioequivalence testing should be used in place of traditional statistical testing (e.g., Student t-test) because the goal of bioequivalence testing is to answer the biologically relevant question of whether measurements differ by, at most, a biologically small amount. Fourth, in defining biologically small differences, site-specific and species-specific conditions should be considered to the greatest extent possible. Fifth, where the outcomes of multiple lines of evidence contradict one another, the risk assessor should employ a quantitative approach to weighing the evidence based on the scientific defensibility of each measure of effect.
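A minimal sketch of the bioequivalence testing recommended here, written as two one-sided tests (TOST); the data, the equivalence margin, and the choice of a t-based TOST are all assumptions for illustration (the alternative keyword requires a recent SciPy).

    import numpy as np
    from scipy import stats

    site = np.array([18.1, 17.4, 19.0, 16.8, 18.5, 17.9])       # hypothetical site measurements
    reference = np.array([18.6, 19.2, 17.8, 18.9, 18.3, 19.5])  # hypothetical reference measurements
    delta = 2.0  # assumed "biologically small" difference

    # Two one-sided tests: equivalence is concluded only if both one-sided
    # nulls (difference <= -delta and difference >= +delta) are rejected.
    p_lower = stats.ttest_ind(site + delta, reference, alternative="greater").pvalue
    p_upper = stats.ttest_ind(site - delta, reference, alternative="less").pvalue
    tost_p = max(p_lower, p_upper)
    print(f"TOST P-value = {tost_p:.3f}")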

3.
Ecological risk assessments often include mechanistic food chain models based on toxicity reference values (TRVs) and a hazard quotient approach. TRVs intended for screening purposes or as part of a larger weight-of-evidence (WOE) assessment are readily available. However, our experience suggests that food chain models using screening-level TRVs often form the primary basis for risk management at smaller industrial sites being redeveloped for residential or urban parkland uses. Iterative improvement of a food chain model or the incorporation of multiple lines of evidence for these sites is often impractical from a cost-benefit perspective when compared to remedial alternatives. We recommend that risk assessors examine the assumptions and factors in the TRV derivation process and, where appropriate, modify the TRVs to improve their ecological relevance. Five areas where uncertainty likely contributes to excessively conservative hazard quotients are identified for consideration.
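For orientation, a hazard quotient of the kind such food chain models produce can be sketched as below; every exposure parameter and the TRV are hypothetical placeholders, not values from the paper.

    # Simple wildlife exposure estimate and hazard quotient (all values assumed)
    body_weight_kg = 0.35          # receptor body weight
    intake_rate_kg_per_day = 0.05  # food ingestion rate (dry weight)
    diet_conc_mg_per_kg = 120.0    # contaminant concentration in the diet
    area_use_factor = 0.6          # fraction of the foraging range on site
    trv_mg_per_kg_bw_day = 8.0     # toxicity reference value (assumed)

    daily_dose = intake_rate_kg_per_day * diet_conc_mg_per_kg * area_use_factor / body_weight_kg
    hazard_quotient = daily_dose / trv_mg_per_kg_bw_day
    print(f"dose = {daily_dose:.2f} mg/kg-bw/day, HQ = {hazard_quotient:.2f}")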

4.
The impacts of sediment contaminants can be evaluated by different lines of evidence, including toxicity tests and ecological community studies. Responses from 10 different toxicity assays/tests were combined to arrive at a “site score.” We employed a relatively simple summary measure, pooled P-values, in which we quantify a potential decrement in response at a contaminated site relative to nominally clean reference sites. The response-specific P-values were defined relative to a “null” distribution of responses in reference sites, and were then pooled using standard meta-analytic methods. Ecological community data were also evaluated using an analogous strategy. A distribution of distances of the reference sites from the centroid of the reference sites was obtained. The distance of each test site from the centroid of the reference sites was then calculated, and the proportion of reference distances that exceed the test site distance was used to define an empirical P-value for that test site. A plot of the toxicity P-value versus the community P-value was used to identify sites based on both alteration in community structure and toxicity, that is, by weight-of-evidence. This approach provides a useful strategy for examining multiple lines of evidence that should be accessible to the broader scientific community. The use of a large collection of reference sites to empirically define P-values is appealing in that parametric distribution assumptions are avoided, although this does come at the cost of assuming the reference sites provide an appropriate comparison group for test sites.
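The empirical P-value defined above (the proportion of reference-site distances from the reference centroid that exceed a test site's distance) can be written directly; in this sketch the community data are simulated rather than taken from the study.

    import numpy as np

    rng = np.random.default_rng(0)
    reference = rng.normal(size=(50, 4))         # 50 reference sites x 4 metrics (simulated)
    test_site = np.array([1.8, -0.4, 2.1, 0.9])  # one hypothetical test site

    centroid = reference.mean(axis=0)
    ref_distances = np.linalg.norm(reference - centroid, axis=1)
    test_distance = np.linalg.norm(test_site - centroid)
    empirical_p = (ref_distances >= test_distance).mean()  # proportion of reference distances >= test distance
    print(f"empirical P-value = {empirical_p:.3f}")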

5.
Past weight-of-evidence frameworks for adverse ecological effects have provided soft-scoring procedures for judgments based on the quality and measured attributes of evidence. Here, we provide a flexible probabilistic structure for weighing and integrating lines of evidence for ecological risk determinations. Probabilistic approaches can provide both a quantitative weighing of lines of evidence and methods for evaluating risk and uncertainty. The current modeling structure was developed for propagating uncertainties in measured endpoints and their influence on the plausibility of adverse effects. To illustrate the approach, we apply the model framework to the sediment quality triad using example lines of evidence for sediment chemistry measurements, bioassay results, and in situ infauna diversity of benthic communities in a simplified hypothetical case study. We then combine the three lines of evidence, evaluate sensitivity to the input parameters, and show how uncertainties are propagated and how additional information can be incorporated to rapidly update the probability of impacts. The developed network model can be expanded to accommodate additional lines of evidence, variables and states of importance, and different types of uncertainties in the lines of evidence, including spatial and temporal as well as measurement errors.
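A highly simplified stand-in for this kind of probabilistic integration: three lines of evidence update a prior probability of impact through sequential likelihood ratios. The conditional probabilities are invented, and the independence assumption is a simplification the full network model does not require.

    prior_impact = 0.5
    # (P(observation | impacted), P(observation | not impacted)) for each line of evidence
    lines_of_evidence = {
        "chemistry exceeds guideline": (0.80, 0.20),
        "bioassay shows toxicity":     (0.70, 0.25),
        "infauna diversity reduced":   (0.65, 0.30),
    }

    odds = prior_impact / (1.0 - prior_impact)
    for name, (p_impact, p_clean) in lines_of_evidence.items():
        odds *= p_impact / p_clean          # multiply in each likelihood ratio
    posterior = odds / (1.0 + odds)
    print(f"posterior probability of impact = {posterior:.2f}")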

6.
Universal mouse reference RNA derived from neonatal mice
He XR, Zhang C, Patterson C. BioTechniques 2004, 37(3):464-468

7.
8.
Contaminated sediment has been identified as one of the major impediments to ecosystem restoration, but there has been little progress made in the management of sediment contaminants. Four primary lines of evidence are generally required for informed assessments, yet the integration of these various lines of evidence is problematic. Using data from 220 reference sites located in the nearshore zone of the Laurentian Great Lakes, the normal response of four species of laboratory organisms to sediments representing a wide range of sediment characteristics was examined. The toxicity data from the reference sites were used to establish categories of responses to test sediments. The delineations for the three categories were developed from the standard statistical parameters of population mean and standard deviation (mean ± SD) of an endpoint measured in all reference sediments. Three approaches for integrating information were examined; the first two are score-based, and the third uses a multivariate statistical method to integrate the responses. The methods were examined using both artificial and real test site data, and from this it was concluded that ordination is the superior of the three. It is the least subjective within the context of the integration of the endpoints, is quantitative, and also provides appropriate weighting based on the variation observed within reference sites.
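In the spirit of the reference-based response categories described above, a test endpoint can be banded against the reference mean ± SD; the band multipliers and the survival data below are assumptions for illustration, not the study's delineations.

    import numpy as np

    reference_survival = np.array([88, 92, 85, 90, 95, 87, 91, 89])  # % survival at reference sites (assumed)
    test_survival = 71.0                                             # hypothetical test sediment

    mean, sd = reference_survival.mean(), reference_survival.std(ddof=1)
    if test_survival >= mean - 1 * sd:
        category = "non-toxic"
    elif test_survival >= mean - 2 * sd:
        category = "potentially toxic"
    else:
        category = "toxic"
    print(f"reference mean = {mean:.1f}, SD = {sd:.1f}, category = {category}")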

9.
This article introduces a new approach for the construction of a risk model for the prediction of Traumatic Brain Injury (TBI) as a result of a car crash. The probability of TBI is assessed through the fusion of an experiment-based logistic regression risk model and a finite element (FE) simulation-based risk model. The proposed approach uses a multilevel framework which includes FE simulations of vehicle crashes with a dummy and FE simulations of the human brain. The loading conditions derived from the crash simulations are transferred to the brain model, thus allowing the calculation of injury metrics such as the Cumulative Strain Damage Measure (CSDM). The framework is used to propagate uncertainties and obtain probabilities of TBI based on the CSDM injury metric. The risk model from FE simulations is constructed from a support vector machine classifier, adaptive sampling, and Monte Carlo simulations. An approach to compute the total probability of TBI, which combines the FE-based risk assessment with the risk prediction from the experiment-based logistic regression model, is proposed. In contrast to previously published work, the proposed methodology includes the uncertainty of explicit parameters such as impact conditions (e.g., velocity, impact angle) and material properties of the brain model. This risk model can provide, for instance, the probability of TBI for a given assumed crash impact velocity.
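One way to picture the uncertainty propagation is a Monte Carlo pass of an assumed impact-velocity distribution through a logistic injury-risk curve; the coefficients and the velocity distribution below are illustrative, not the fitted models of the article.

    import numpy as np

    rng = np.random.default_rng(1)
    velocity = rng.normal(loc=50.0, scale=5.0, size=100_000)  # impact velocity in km/h (assumed)

    def p_tbi_given_velocity(v, b0=-10.0, b1=0.18):
        """Hypothetical logistic risk curve for TBI versus impact velocity."""
        return 1.0 / (1.0 + np.exp(-(b0 + b1 * v)))

    total_p_tbi = p_tbi_given_velocity(velocity).mean()  # average risk over the velocity uncertainty
    print(f"estimated probability of TBI = {total_p_tbi:.3f}")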

10.
An important and common problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. As this problem concerns the selection of significant genes from a large pool of candidate genes, it needs to be carried out within the framework of multiple hypothesis testing. In this paper, we focus on the use of mixture models to handle the multiplicity issue. With this approach, a measure of the local FDR (false discovery rate) is provided for each gene. An attractive feature of the mixture model approach is that it provides a framework for the estimation of the prior probability that a gene is not differentially expressed, and this probability can subsequently be used in forming a decision rule. The rule can also be formed to take the false negative rate into account. We apply this approach to a well-known publicly available data set on breast cancer, and discuss our findings with reference to other approaches.
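The local FDR of such a mixture-model approach can be sketched with a two-component normal mixture whose parameters are simply assumed here (in practice they are estimated from the data).

    import numpy as np
    from scipy import stats

    pi0, mu1 = 0.90, 2.5                # assumed null proportion and alternative mean
    z = np.array([0.3, 1.8, 2.9, 4.1])  # hypothetical per-gene z statistics

    f0 = stats.norm.pdf(z, 0.0, 1.0)    # null density
    f1 = stats.norm.pdf(z, mu1, 1.0)    # alternative density
    local_fdr = pi0 * f0 / (pi0 * f0 + (1 - pi0) * f1)
    for zi, fdr in zip(z, local_fdr):
        print(f"z = {zi:4.1f}  local FDR = {fdr:.3f}")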

11.
The goal of this paper is to illustrate the value and importance of the “weight of evidence” approach (use of multiple lines of evidence from field and laboratory data) to assess the occurrence or absence of ecological impairment in the aquatic environment. Single species toxicity tests, microcosms, and community metric approaches such as the Index of Biotic Integrity (IBI) are discussed. Single species toxicity tests or other single lines of evidence are valuable first-tier assessments that should be used as screening tools to identify potentially toxic conditions in an effluent or the ambient environment, but these tests should not be used as the final quantitative indicator of absolute ecological impairment that may result in regulatory action. Both false positive and false negative predictions of ecological effects can occur due to the inherent variability of measurement endpoints such as survival, growth and reproduction used in single species toxicity tests. A comparison of single species ambient toxicity test results with field data showed that false positives are common and likely related to experimental variability or to toxicity to selected test species without measurable effects on the ecosystem. Results from microcosm studies have consistently demonstrated that chemical exposures exceeding the acute or chronic toxicity concentrations for highly sensitive species may cause little or no ecologically significant damage to an aquatic ecosystem. Sources of uncertainty identified when extrapolating from single species tests to ecological effects were: variability in individual response to pesticide exposure; variation among species in sensitivity to pesticides; effects of time-varying and repeated exposures; and extrapolation from individual to population-level endpoints. Data sets from the Chesapeake Bay area (Maryland) were used to show the importance of using “multiple lines of evidence” when assessing biological impact, due to conflicting results reported from ambient water column and sediment toxicity tests and biological indices (benthic and fish IBIs). Results from water column and sediment toxicity tests with multiple species in tidal areas showed that no single species was consistently the most sensitive. There was also a high degree of disagreement between benthic and fish IBI data for the various stations. The lack of agreement for these biological community indices is not surprising due to the differences in exposure among habitats occupied by these different taxonomic assemblages. Data from a fish IBI, benthic IBI and Maryland Physical Habitat Index (MPHI) were compared for approximately 1100 first- through third-order Maryland non-tidal streams to show the complexity of data interpretation and the incidence of conflicting lines of evidence. A key finding from this non-tidal data set was the need to use more than one biological indicator to increase the discriminatory power of identifying impaired streams and reduce the possibility of “false negative results”. Based on historical data, temporal variability associated with an IBI in undisturbed areas was reported to be lower than the variability associated with single species toxicity tests.

12.
Weight-of-evidence is the process by which multiple measurement endpoints are related to an assessment endpoint to evaluate whether significant risk of harm is posed to the environment. In this paper, a methodology is offered for reconciling or balancing multiple lines of evidence pertaining to an assessment endpoint. Weight-of-evidence is reflected in three characteristics of measurement endpoints: (a) the weight assigned to each measurement endpoint; (b) the magnitude of response observed in the measurement endpoint; and (c) the concurrence among outcomes of multiple measurement endpoints. First, weights are assigned to measurement endpoints based on attributes related to: (a) strength of association between assessment and measurement endpoints; (b) data quality; and (c) study design and execution. Second, the magnitude of response in the measurement endpoint is evaluated with respect to whether the measurement endpoint indicates the presence or absence of harm, as well as the magnitude of that response. Third, concurrence among measurement endpoints is evaluated by plotting the findings of the two preceding steps on a matrix for each measurement endpoint evaluated. The matrix allows easy visual examination of agreements or divergences among measurement endpoints, facilitating interpretation of the collection of measurement endpoints with respect to the assessment endpoint. A qualitative adaptation of the weight-of-evidence approach is also presented.

13.
Our review of existing approaches and regulatory uses of weight-of-evidence (WOE) methods suggested the need for a practical strategy for deploying WOE within a predictive ecological risk assessment (ERA). WOE is the process of considering strengths and weaknesses of various pieces of information in order to inform a decision being made among competing alternatives. A predictive ERA uses existing information relating cause and effect to estimate the probability that today's action X will lead to tomorrow's adverse outcome Y. There appears to be no practical guidance for use of WOE in predictive assessments. We therefore propose a strategy for using a WOE approach, within an ERA framework, to weigh and integrate outcomes from various lines of evidence to estimate the probability of an adverse outcome in an assessment endpoint. An ERA framework is necessary to connect the results of an assessment to the management goals of concern to decision-makers and stakeholders. Within that framework, a WOE approach provides a consistent and transparent means of interpreting the myriad types of data and information gathered during a complex ecological assessment. Impediments to application of WOE are discussed, including limited regulatory guidance, limited prior regulatory use, and persistent reliance on threshold-based decision-making.

14.
Internationally recognised guidelines for the assessment of risk posed by non-native organisms generally suggest that the assessment is disaggregated into a series of components, each being scored, and then the scores added or averaged to give the final result. Assigning odds instead of scores allows a more rigorous probabilistic treatment of the data, which can offer more effective discrimination between organisms. For each component of the assessment, the odds express how likely the evidence is if the organism poses a risk as a quarantine pest. According to Bayesian theory, the odds for all components are multiplied together and the product divided by itself plus one to give the probability that the organism poses a risk as a quarantine pest given the evidence available (assuming that the prior probability is neutral). A general illustration of the different distributions of outcomes obtained from score averaging and probability is provided. The approach is then applied to a set of risk assessments for 256 potential quarantine pests compiled for Tanzania in 1997. The greater discrimination between cases may help improve communication between risk assessment scientists and regulatory decision makers.
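The odds-combination rule stated above translates directly into code; the component odds below are hypothetical.

    component_odds = [3.0, 0.5, 8.0, 2.0, 1.5]  # hypothetical odds for each assessment component

    product = 1.0
    for odds in component_odds:
        product *= odds
    probability = product / (product + 1.0)     # P(quarantine pest | evidence), neutral prior
    print(f"combined odds = {product:.1f}, probability = {probability:.3f}")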

15.
The recent controversy over the increased risk of venous thrombosis with third generation oral contraceptives illustrates the public policy dilemma that can be created by relying on conventional statistical tests and estimates: case-control studies showed a significant increase in risk and forced a decision either to warn or not to warn. Conventional statistical tests are an improper basis for such decisions because they dichotomise results according to whether they are or are not significant and do not allow decision makers to take explicit account of additional evidence, for example, of biological plausibility or of biases in the studies. A Bayesian approach overcomes both these problems. A Bayesian analysis starts with a "prior" probability distribution for the value of interest (for example, a true relative risk), based on previous knowledge, and adds the new evidence (via a model) to produce a "posterior" probability distribution. Because different experts will have different prior beliefs, sensitivity analyses are important to assess the effects on the posterior distributions of these differences. Sensitivity analyses should also examine the effects of different assumptions about biases and about the model which links the data with the value of interest. One advantage of this method is that it allows such assumptions to be handled openly and explicitly. Data presented as a series of posterior probability distributions would be a much better guide to policy, reflecting the reality that degrees of belief are often continuous, not dichotomous, and often vary from one person to another in the face of inconclusive evidence.
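A small numerical sketch of this kind of Bayesian analysis, using a normal approximation on the log relative risk; the sceptical prior and the study estimate are invented numbers, not those from the oral contraceptive studies.

    import math

    prior_mean, prior_sd = 0.0, 0.20            # sceptical prior centred on RR = 1 (assumed)
    data_log_rr, data_se = math.log(2.0), 0.25  # hypothetical study estimate: RR = 2.0

    w_prior, w_data = 1 / prior_sd**2, 1 / data_se**2
    post_mean = (w_prior * prior_mean + w_data * data_log_rr) / (w_prior + w_data)
    post_sd = (w_prior + w_data) ** -0.5
    lo, hi = math.exp(post_mean - 1.96 * post_sd), math.exp(post_mean + 1.96 * post_sd)
    print(f"posterior RR = {math.exp(post_mean):.2f} (95% interval {lo:.2f} to {hi:.2f})")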

16.
Yu ZF, Catalano PJ. Biometrics 2005, 61(3):757-766
The neurotoxic effects of chemical agents are often investigated in controlled studies on rodents, with multiple binary and continuous endpoints routinely collected. One goal is to conduct quantitative risk assessment to determine safe dose levels. Such studies face two major challenges for continuous outcomes. First, characterizing risk and defining a benchmark dose are difficult. Usually associated with an adverse binary event, risk is clearly definable in quantal settings as presence or absence of an event; finding a similar probability scale for continuous outcomes is less clear. Often, an adverse event is defined for continuous outcomes as any value below a specified cutoff level in a distribution assumed normal or log-normal. Second, while continuous outcomes are traditionally analyzed separately for such studies, recent literature advocates also using multiple outcomes to assess risk. We propose a method for modeling and quantitative risk assessment for bivariate continuous outcomes that addresses both difficulties by extending existing percentile regression methods. The model is likelihood based; it allows separate dose-response models for each outcome while accounting for the bivariate correlation and overall characterization of risk. The approach to estimation of a benchmark dose is analogous to that for quantal data without the need to specify arbitrary cutoff values. We illustrate our methods with data from a neurotoxicity study of triethyl tin exposure in rats.

17.
Contamination within sediments of Sydney Harbour (once a major industrial port) was evaluated using a multiple lines-of-evidence (LOE) ecological risk assessment (ERA) approach prior to divestiture of the harbor. The multiple LOE approach included: (1) measurement of polycyclic aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs), metals, metalloids, petroleum hydrocarbons (PHCs), and total organic carbon (TOC) concentrations in surface sediments from multiple Sydney Harbour locations; (2) identification and application of sediment quality guidelines (SQGs) from various jurisdictions; (3) comparisons of harbor sediment chemistry against background/reference sediment chemistry; (4) determining the number and frequency of exceedances over SQGs; (5) calculating mean probable effect level quotients (PEL-Qs); (6) PAH forensic source evaluation; (7) review of previous sediment chemistry and biota tissue data; and (8) characterizing benthic habitat at harbor stations. The ERA determined that current sediments exhibited mostly low probability of adverse effects. Furthermore, contaminated sediments exhibiting a high probability of adverse effects were localized to only a few stations within the harbor. Ongoing natural recovery of harbor sediments is likely responsible for attenuating contaminant concentrations that historically were higher than those measured in this study and were previously distributed over much wider areas of the harbor. Results suggest that legacy industrial activities and current urban sewage effluents are the major sources of contamination in Sydney Harbour sediments.
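For reference, a mean PEL-quotient of the kind used in step (5) is simply the average of measured concentrations divided by their probable effect levels; both sets of numbers below are illustrative placeholders.

    measured = {"Cd": 1.8, "Cu": 140.0, "Pb": 95.0, "Zn": 310.0}   # mg/kg dry weight (assumed)
    pel      = {"Cd": 4.2, "Cu": 108.0, "Pb": 112.0, "Zn": 271.0}  # probable effect levels (assumed)

    quotients = [measured[m] / pel[m] for m in measured]
    mean_pel_q = sum(quotients) / len(quotients)
    print(f"mean PEL-Q = {mean_pel_q:.2f}")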

18.
Studies have argued that genetic testing will provide limited information for predicting the probability of common diseases, because of the incomplete penetrance of genotypes and the low magnitude of associated risks for the general population. Such studies, however, have usually examined the effect of one gene at a time. We argue that disease prediction for common multifactorial diseases is greatly improved by considering multiple predisposing genetic and environmental factors concurrently, provided that the model correctly reflects the underlying disease etiology. We show how likelihood ratios can be used to combine information from several genetic tests to compute the probability of developing a multifactorial disease. To show how concurrent use of multiple genetic tests improves the prediction of a multifactorial disease, we compute likelihood ratios by logistic regression with simulated case-control data for a hypothetical disease influenced by multiple genetic and environmental risk factors. As a practical example, we also apply this approach to venous thrombosis, a multifactorial disease influenced by multiple genetic and nongenetic risk factors. Under reasonable conditions, the concurrent use of multiple genetic tests markedly improves prediction of disease. For example, the concurrent use of a panel of three genetic tests (factor V Leiden, prothrombin variant G20210A, and protein C deficiency) increases the positive predictive value of testing for venous thrombosis at least eightfold. Multiplex genetic testing has the potential to improve the clinical validity of predictive testing for common multifactorial diseases.
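The likelihood-ratio combination described here can be sketched as prior odds multiplied by per-test likelihood ratios; the baseline risk and the likelihood ratios below are assumed, not the article's estimates.

    prior = 0.001                        # assumed baseline probability of disease
    likelihood_ratios = [7.0, 3.5, 6.0]  # assumed LRs for three positive genetic tests

    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr                       # each positive test multiplies the odds by its LR
    posterior = odds / (1.0 + odds)
    print(f"posterior probability of disease = {posterior:.4f} (prior = {prior})")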

19.
In this work we propose the adoption of a statistical framework used in the evaluation of forensic evidence as a tool for evaluating and presenting circumstantial “evidence” of a disease outbreak from syndromic surveillance. The basic idea is to exploit the predicted distributions of reported cases to calculate the ratio of the likelihood of observing n cases given an ongoing outbreak over the likelihood of observing n cases given no outbreak. The likelihood ratio defines the Value of Evidence (V). Using Bayes' rule, the prior odds for an ongoing outbreak are multiplied by V to obtain the posterior odds. This approach was applied to time series on the number of horses showing clinical respiratory symptoms or neurological symptoms. The separation between prior beliefs about the probability of an outbreak and the strength of evidence from syndromic surveillance offers a transparent reasoning process suitable for supporting decision makers. The value of evidence can be translated into a verbal statement, as often done in forensics or used for the production of risk maps. Furthermore, a Bayesian approach offers seamless integration of data from syndromic surveillance with results from predictive modeling and with information from other sources such as disease introduction risk assessments.
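A toy version of the value of evidence, using Poisson predicted case counts whose means are assumed purely for illustration.

    from scipy import stats

    n_observed = 9                              # cases reported today (hypothetical)
    mean_no_outbreak, mean_outbreak = 3.0, 8.0  # assumed predicted means under each hypothesis

    v = stats.poisson.pmf(n_observed, mean_outbreak) / stats.poisson.pmf(n_observed, mean_no_outbreak)
    prior_odds = 0.01 / 0.99                    # assumed prior odds of an ongoing outbreak
    posterior_odds = prior_odds * v
    posterior_prob = posterior_odds / (1.0 + posterior_odds)
    print(f"V = {v:.1f}, posterior odds = {posterior_odds:.3f}, posterior probability = {posterior_prob:.3f}")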

20.
Receptor-based QSAR approaches can enumerate the energetic contributions of amino acid residues toward ligand binding only when experimental binding affinity data are available. The structural data of protein-ligand complexes in the Protein Data Bank are witnessing tremendous growth, yet only a few entries are deposited with binding affinity data. We present here a new approach to compute the Energetic CONTributions of Amino acid residues and its possible Cross-Talk (ECONTACT) to study ligand binding using per-residue energy decomposition, molecular dynamics simulations, and a rescoring method without the need for experimental binding affinity. This approach recognizes potential cross-talks among amino acid residues imparting a nonadditive effect to the binding affinity, with evidence of correlative motions in the dynamics simulations. The protein-ligand interaction energies deduced from multiple structures are decomposed into per-residue energy terms, which are employed as variables in principal component analysis to generate cross-terms. Out of 16 cross-talks derived from eight datasets of protein-ligand systems, the ECONTACT approach is able to strongly associate 10 potential cross-talks with site-directed mutagenesis, free energy, and dynamics simulation data. We modeled these key determinants of ligand binding using a joint probability density function (jPDF) to identify cross-talks in protein structures. The top two cross-talks identified by the ECONTACT approach corroborated the experimental findings. Furthermore, a virtual screening exercise using ECONTACT models better discriminated known inhibitors from decoy molecules. This approach proposes the jPDF metric to estimate the probability of observing cross-talks in any protein-ligand complex. The source code and related resources to perform ECONTACT modeling are available freely at https://www.gujaratuniversity.ac.in/econtact/.
