Similar articles
20 similar articles found (search time: 78 ms)
1.
A new method is proposed to derive the size of the interspecies uncertainty factor (UF) that is toxicologically and statistically based. The method rests on the biological/evolutionary assumption that similarity in susceptibility to toxic substances is a function of phylogenetic relatedness. This assumption is assessed via a large and highly structured aquatic database with over 500 agents tested in specific binary toxicity comparisons (i.e., two species tested with the same chemical under identical conditions) for dozens of species of wide phylogenetic relatedness. The methodology takes into account the generic need to estimate a response in any species (not just humans) and the need to predict responses to new chemical agents. The method quantifies interspecies variation in susceptibility to numerous toxic substances via binary interspecies comparisons that are converted to a 95% UF. This interspecies UF represents an estimate of the upper 95% of the population of 95% prediction intervals (PI) for binary interspecies comparisons within four categories of phylogenetic relatedness (species-within-genus, genera-within-family, families-within-order, orders-within-class). The 95% interspecies UFs range from a low of 10 for species-within-genus up to 65 for orders-within-class. Most mammalian toxicology studies involving mice, rats, cats, dogs, gerbils, and rabbits fall into the orders-within-class category for human risk assessment and would be assigned a 65-fold UF. Larger or smaller interspecies UF values could be selected based on the level of protection desired. The procedures described apply to both human and ecological risk assessment.
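The UF construction described above can be sketched numerically: from a set of log10 toxicity ratios for binary species comparisons, a one-sided 95% prediction bound is converted to a fold-difference. Below is a minimal sketch under a normal approximation, with entirely hypothetical ratio data; the actual database and prediction-interval computation are more elaborate.

```python
import math
import statistics

def interspecies_uf(log10_ratios, z=1.645):
    """Approximate one-sided 95% prediction bound on the fold-difference in
    susceptibility, from log10 toxicity ratios of binary species comparisons
    (hypothetical data, normal approximation)."""
    mu = statistics.mean(log10_ratios)
    sd = statistics.stdev(log10_ratios)
    n = len(log10_ratios)
    # prediction-interval half-width for one new binary comparison
    half = z * sd * math.sqrt(1 + 1 / n)
    return 10 ** (abs(mu) + half)

# hypothetical log10(LC50_species_A / LC50_species_B) values
ratios = [0.1, -0.3, 0.5, 0.2, -0.1, 0.4, 0.0, 0.3]
uf = interspecies_uf(ratios)
```

The wider the spread of the log-ratios within a phylogenetic category, the larger the resulting UF, which is why the fold-values grow from species-within-genus to orders-within-class.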

2.
Because metals can produce health risks, standards for regulating metal exposure are necessary. The purpose of this chapter is to review the application of uncertainty factors to mercury, arsenic, and cadmium. By the conventional method, uncertainty factors are often applied to animal studies to establish the reference dose (RfD) in humans. However, with the availability of a better database from improved study designs, it was demonstrated that uncertainty factors can be decreased. Incorporation of more pharmacokinetic and mechanistic data into the risk assessment process, as well as discussions between risk assessors and the research community to identify research needs, are essential in reducing uncertainty factors.

3.
Generalised absolute risk models were fitted to the latest Japanese atomic bomb survivor cancer incidence data using Bayesian Markov chain Monte Carlo (MCMC) methods, taking account of random errors in the DS86 dose estimates. The resulting uncertainty distributions in the relative risk model parameters were used to derive uncertainties in population cancer risks for a current UK population. Because of evidence for irregularities in the low-dose dose response, flexible dose-response models were used, consisting of a linear-quadratic-exponential model for the high-dose part of the dose response, together with piecewise-linear adjustments for the two lowest dose groups. Following an assumed administered dose of 0.001 Sv, lifetime radiation-induced incidence risks estimated with this model were 1.11 × 10⁻² Sv⁻¹ (95% Bayesian CI −0.61, 2.38) for leukaemia and 7.28 × 10⁻² Sv⁻¹ (95% Bayesian CI −10.63, 22.10) for solid cancers. Overall, cancer incidence risks predicted by Bayesian MCMC methods are similar to those derived by classical likelihood-based methods, which form the basis of established estimates of radiation-induced cancer risk.

4.
The current guideline for risk assessment of chemicals with a toxic end point routinely uses the reference dose (RfD) approach based on uncertainty factors of 10. With this method the quality of individual risk assessments varies among chemicals, often resulting in either an over- or under-estimation of adverse health risk. The purpose of this investigation is to evaluate whether the magnitude of the 10X uncertainty factors has scientific merit when compared against data from published experimental studies. A compilation and comparison of ratios between LOAEL/NOAEL (Lowest Observed Adverse Effect Level/No Observed Adverse Effect Level), subchronic/chronic, and animal/human values were made. The results of the present investigation revealed that the use of default factors can be either over-conservative or unprotective. More reasonable estimates of the risk to human health would reduce unnecessary and expensive over-regulation. In addition to the LOAEL to NOAEL and subchronic to chronic ratios, the adequacy of uncertainty factors for animal to human extrapolation was examined. Although a 10-fold uncertainty factor (UF) is most commonly used in the risk assessment process, an examination of the literature for the compounds presented here suggests that the use of different values is scientifically justifiable.
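The conventional RfD arithmetic that this study re-examines is a simple division of the NOAEL by the product of the applied uncertainty factors. A minimal illustration, using a hypothetical NOAEL:

```python
def reference_dose(noael_mg_kg_day, ufs):
    """Conventional RfD: divide the NOAEL by the product of the
    applied uncertainty factors."""
    total = 1.0
    for uf in ufs:
        total *= uf
    return noael_mg_kg_day / total

# hypothetical NOAEL of 50 mg/kg-day with 10x interspecies
# and 10x intraspecies default factors
rfd = reference_dose(50.0, [10.0, 10.0])  # 0.5 mg/kg-day
```

Replacing a default 10 with a smaller data-derived factor raises the RfD proportionally, which is the practical consequence of the ratio analyses described above.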

5.
The traditional “safety factor” method has been used for years to establish occupational exposure limits (OELs) for active ingredients used in drugs. In the past, a single safety factor was used to address all sources of uncertainty in the limit-setting process. The traditional 100-fold safety factor commonly used to derive an acceptable daily intake incorporates a default factor of 10 each to account for interindividual variability and interspecies extrapolation. Use of these defaults can lead to overly conservative health-based limits, especially when they are combined with other (up to 10-fold) factors to adjust for inadequacies in the available database. In recent years, attempts have been made to quantify individual sources of uncertainty and variability to improve the scientific basis for OELs. In this paper we discuss the science supporting reductions in the traditional default uncertainty factors. A number of workplace-specific factors also support reductions in these factors. Recently proposed alternative methodologies provide a framework for making maximum use of preclinical and clinical information, e.g., toxicokinetic and toxicodynamic data, to reduce uncertainties when establishing OELs for pharmaceutical active ingredients.

6.
7.
8.
Researchers usually estimate benchmark dose (BMD) for dichotomous experimental data using a binomial model with a single response function. Several forms of response function have been proposed to fit dose–response models to estimate the BMD and the corresponding benchmark dose lower bound (BMDL). However, if the assumed response function is not correct, then the estimated BMD and BMDL from the fitted model may not be accurate. To account for model uncertainty, model averaging (MA) methods are proposed to estimate BMD averaging over a model space containing a finite number of standard models. Usual model averaging focuses on a pre-specified list of parametric models leading to pitfalls when none of the models in the list is the correct model. Here, an alternative which augments an initial list of parametric models with an infinite number of additional models having varying response functions has been proposed to estimate BMD for dichotomous response data. In addition, different methods for estimating BMDL based on the family of response functions are derived. The proposed approach is compared with MA in a simulation study and applied to a real dataset. Simulation studies are also conducted to compare the four methods of estimating BMDL.
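Classical model averaging over a fixed model list, which the proposed approach extends, weights each fitted model's BMD by its information-criterion support. A minimal sketch using Akaike weights with hypothetical BMD estimates and AIC scores (the paper's augmented, infinite model space goes well beyond this):

```python
import math

def akaike_weights(aics):
    """Akaike weights: relative likelihood of each model, normalized to sum to 1."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    s = sum(rel)
    return [r / s for r in rel]

def model_averaged_bmd(bmds, aics):
    """Weighted average of per-model BMD estimates."""
    return sum(w * b for w, b in zip(akaike_weights(aics), bmds))

# hypothetical BMD estimates and AIC scores from three fitted dose-response models
bmds = [1.2, 1.5, 2.0]
aics = [100.0, 101.0, 105.0]
bmd_ma = model_averaged_bmd(bmds, aics)
```

The averaged estimate always lies between the smallest and largest per-model BMD, so if every model in the list is badly wrong, so is the average, which is the pitfall motivating the augmented model space.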

9.
10.
Aim Temporally replicated observations are essential for the calibration and validation of species distribution models (SDMs) aiming at making temporal extrapolations. We study here the usefulness of a general-purpose monitoring programme for the calibration of hybrid SDMs. As a benchmark case, we take calibration with data from a monitoring programme that specifically surveys those areas where environmental changes expected to be relevant occur. Location Catalonia, north-east Spain. Methods We modelled the distribution changes of twelve open-habitat bird species in landscapes whose dynamics are driven by fire and forest regeneration. We developed hybrid SDMs combining correlative habitat suitability with mechanistic occupancy models. We used observations from two monitoring programmes to provide maximum-likelihood estimates for spread parameters: a common breeding bird survey (CBS) and a programme specifically designed to monitor bird communities within areas affected by wildfires (DINDIS). Results Calibration with both CBS and DINDIS data yielded sound spread parameter estimates and range dynamics that suggested dispersal limitations. However, compared to calibration with DINDIS data, calibration with CBS data led to biased estimates of spread distance for seven species and to a higher degree of uncertainty in predicted range dynamics for six species. Main conclusions We have shown that available monitoring data can be used in the calibration of the mechanistic component of hybrid SDMs. However, if the dynamics of the target species occur within areas not well covered, general-purpose monitoring data can lead to biased and inaccurate parameter estimates. To determine the potential usefulness of a given monitoring data set for the calibration of the mechanistic component of a hybrid SDM, we recommend quantifying the number of surveyed sites that are predicted to undergo habitat suitability changes.

11.
We compared the effect of uncertainty in dose‐response model form on health risk estimates to the effect of uncertainty and variability in exposure. We used three different dose‐response models to characterize neurological effects in children exposed in utero to methylmercury, and applied these models to calculate risks to a native population exposed to potentially contaminated fish from a reservoir in British Columbia. Uncertainty in model form was explicitly incorporated into the risk estimates. The selection of dose‐response model strongly influenced both mean risk estimates and distributions of risk, and had a much greater impact than altering exposure distributions. We conclude that incorporating uncertainty in dose‐response model form is at least as important as accounting for variability and uncertainty in exposure parameters in probabilistic risk assessment.
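The core point, that model form drives the spread of risk estimates, can be illustrated by evaluating several dose-response shapes at a single dose. The three functions below are hypothetical stand-ins, not the study's fitted methylmercury models:

```python
import math

# three hypothetical dose-response forms, for illustration only
def linear(d):
    return min(1.0, 0.05 * d)

def loglogistic(d):
    return 1.0 / (1.0 + math.exp(-(math.log(d + 1e-9) - 1.0)))

def threshold(d):
    return 0.0 if d < 2.0 else min(1.0, 0.05 * (d - 2.0))

models = [linear, loglogistic, threshold]
dose = 3.0
risks = [m(dose) for m in models]
spread = max(risks) - min(risks)  # model-form uncertainty at this dose
```

Even with all three curves anchored to plausible behavior, the risk at a fixed dose varies by several fold across forms, which is why the study finds model choice dominating exposure uncertainty.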

12.
Boron, which is ubiquitous in the environment, causes developmental and reproductive effects in experimental animals. This observation has led to efforts to establish a Tolerable Intake value for boron. Although risk assessors agree on the use of fetal weight decreases observed in rats as an appropriate critical effect, consensus on the adequacy of toxicokinetic data as a basis for replacement of default uncertainty factors remains to be reached. A critical analysis of the existing data on boron toxicokinetics was conducted to clarify the appropriateness of replacing default uncertainty factors (10-fold for interspecies differences and 10-fold for intraspecies differences) with data-derived values. Retention of the default 10-fold uncertainty factor for animal-to-human variability in response (default subfactors of 4-fold for kinetics and 2.5-fold for dynamics) was recommended, since clearance of boron is 3- to 4-fold higher in rats than in humans and the data on dynamic differences that would be needed to modify the default value are unavailable. A data-derived adjustment of 6-fold (1.8 for kinetics and 3.1 for dynamics) rather than the default uncertainty factor of 10-fold was considered appropriate for intrahuman variability, based on variability in glomerular filtration rate during pregnancy in humans and the lack of available data on dynamic differences. Additional studies of the toxicokinetics of boron in rats would provide a stronger basis for replacement of default uncertainty factors for interspecies variation.
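The data-derived adjustment follows the standard subdivision of each 10-fold factor into toxicokinetic and toxicodynamic subfactors, which multiply back together. The arithmetic from the abstract can be sketched directly:

```python
def composite_uf(kinetic, dynamic):
    """Composite uncertainty factor as the product of toxicokinetic
    and toxicodynamic subfactors."""
    return kinetic * dynamic

# default interspecies split: 4.0 (kinetics) x 2.5 (dynamics) = 10-fold
default_interspecies = composite_uf(4.0, 2.5)

# boron intraspecies, data-derived: 1.8 (kinetics) x 3.1 (dynamics),
# about 5.6, cited in the abstract as a ~6-fold adjustment
boron_intraspecies = composite_uf(1.8, 3.1)
```

Keeping the two subfactors separate makes it explicit which half of each default is actually supported by data: here the kinetic half was reduced while the dynamic half retained a conservative value.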

13.
14.
Based on imperfect data and theory, agencies such as the United States Environmental Protection Agency (USEPA) currently derive “reference doses” (RfDs) to guide risk managers charged with ensuring that human exposures to chemicals are below population thresholds. The RfD for a chemical is typically reported as a single number, even though it is widely acknowledged that there are significant uncertainties inherent in the derivation of this number.

In this article, the authors propose a probabilistic alternative to the EPA's method that expresses the human population threshold as a probability distribution of values (rather than a single RfD value), taking into account the major sources of scientific uncertainty in such estimates. The approach is illustrated using much of the same data that USEPA uses to justify their current RfD procedure.

Like the EPA's approach, the proposed approach recognizes the four key extrapolations that are necessary to define the human population threshold based on animal data: animal to human, human heterogeneity, LOAEL to NOAEL, and subchronic to chronic. Rather than using available data to define point estimates of “uncertainty factors” for these extrapolations, the approach uses the data to define a probability distribution of adjustment factors. These initial characterizations of uncertainty can then be refined when more robust or specific data become available for a particular chemical or class of chemicals.

Quantitative characterization of uncertainty in noncancer risk assessment will be useful to risk managers who face complex trade-offs between control costs and protection of public health. The new approach can help decision-makers understand how much extra control cost must be expended to achieve a specified increase in confidence that the human population threshold is not being exceeded.
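The probabilistic alternative can be sketched by Monte Carlo: divide an animal NOAEL by sampled adjustment factors for each extrapolation rather than by fixed 10s. All numbers below (the NOAEL and the lognormal spreads) are hypothetical placeholders, not the paper's fitted distributions:

```python
import math
import random

random.seed(0)

def sample_thresholds(noael, gsds, n=10000):
    """Sample a distribution of human population thresholds by dividing an
    animal NOAEL by lognormally distributed adjustment factors, one per
    extrapolation (hypothetical geometric standard deviations, median 1)."""
    out = []
    for _ in range(n):
        adj = 1.0
        for gsd in gsds:
            adj *= math.exp(random.gauss(0.0, math.log(gsd)))
        out.append(noael / adj)
    return out

# hypothetical GSDs for the four extrapolations:
# animal->human, heterogeneity, LOAEL->NOAEL, subchronic->chronic
samples = sorted(sample_thresholds(50.0, [3.0, 3.0, 2.0, 2.0]))
lower_5th = samples[int(0.05 * len(samples))]  # conservative threshold estimate
```

A risk manager can then read off any desired confidence level from the sampled distribution instead of relying on a single RfD point value.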


15.
Uncertainty and sensitivity analysis were evaluated for their usefulness as part of model building within Process Analytical Technology (PAT) applications. A mechanistic model describing a batch cultivation of Streptomyces coelicolor for antibiotic production was used as a case study. The input uncertainty resulting from the model's assumptions was propagated using a Monte Carlo procedure to estimate the output uncertainty. The results showed that significant uncertainty exists in the model outputs. The uncertainty in the biomass, glucose, ammonium and base-consumption predictions was low compared to the large uncertainty observed in the antibiotic and off-gas CO2 predictions. The output uncertainty was lower during the exponential growth phase and higher in the stationary and death phases, meaning the model describes some periods better than others. To understand which input parameters are responsible for the output uncertainty, three sensitivity methods (standardized regression coefficients, Morris screening and differential analysis) were evaluated and compared. The results from these methods were mostly in agreement with each other and revealed that only a few parameters (about 10 of a total of 56) were mainly responsible for the output uncertainty. Among these significant parameters are those related to fermentation characteristics such as biomass metabolism, chemical equilibria and mass transfer. Overall, uncertainty and sensitivity analysis are promising tools for building reliable mechanistic models and interpreting model outputs properly. They are part of good modeling practice, which can contribute to successful PAT applications for increased process understanding, operation and control. © 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2009
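Of the three sensitivity methods named above, standardized regression coefficients (SRCs) are the simplest to sketch: after Monte Carlo sampling, each input's regression slope on the output is scaled by the ratio of standard deviations; for a single input and an additive model this reduces to the Pearson correlation. A toy two-parameter model stands in for the actual fermentation model:

```python
import random
import statistics

random.seed(1)

def model(k1, k2):
    # toy stand-in for the mechanistic model output
    # (not the actual S. coelicolor model)
    return 2.0 * k1 + 0.1 * k2

# Monte Carlo sampling of two uncertain inputs
k1s = [random.uniform(0.5, 1.5) for _ in range(2000)]
k2s = [random.uniform(0.5, 1.5) for _ in range(2000)]
ys = [model(a, b) for a, b in zip(k1s, k2s)]

def src(xs, ys):
    """Standardized regression coefficient of y on a single input x:
    regression slope scaled by sd(x)/sd(y)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    slope = cov / statistics.variance(xs)
    return slope * statistics.stdev(xs) / statistics.stdev(ys)

src_k1, src_k2 = src(k1s, ys), src(k2s, ys)  # k1 dominates the output uncertainty
```

Ranking inputs by |SRC| is how a screening like the one in the abstract identifies the roughly 10 of 56 parameters that matter.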

16.
The dose of toxicant reaching the embryo is a critical determinant of developmental toxicity, and is likely to be a key factor responsible for interspecies variability in response to many test agents. This review compares the mechanisms regulating disposition of toxicants from the maternal circulation to the embryo during organogenesis in humans and the two species used predominantly in regulatory developmental toxicity testing, rats and rabbits. These three species utilize fundamentally different strategies for maternal-embryonic exchange during early pregnancy. Early postimplantation rat embryos rely on the inverted visceral yolk sac placenta, which is in intimate contact with the uterine epithelium and is equipped with an extensive repertoire of transport mechanisms, such as pinocytosis, endocytosis, and specific transporter proteins. Also, the rat yolk sac completely surrounds the embryo, such that the fluid-filled exocoelom survives through most of the period of organogenesis, and can concentrate compounds such as certain weak acids due to pH differences between maternal blood and exocoelomic fluid. The early postimplantation rabbit conceptus differs from the rat in that the yolk sac is not closely apposed to the uterus during early organogenesis and does not completely enclose the embryo until relatively later in development (approximately GD13). This suggests that the early rabbit yolk sac might be a relatively inefficient transporter, a conclusion supported by limited data with ethylene glycol and one of its predominant metabolites, glycolic acid, given to GD9 rabbits. In humans, maternal-embryo exchange is thought to occur via the chorioallantoic placenta, although it has recently been conjectured that a supplemental route of transfer could occur via absorption into the yolk sac.
Knowledge of the mechanisms underlying species-specific embryonic disposition, factored together with other pharmacokinetic characteristics of the test compound and knowledge of critical periods of susceptibility, can be used on a case-by-case basis to make more accurate extrapolations of test animal data to the human.

17.
A workshop convened to define research needs in toxicology identified several deficiencies in data and methods currently applied in risk assessment. The workshop panel noted that improving the link between chemical exposure and toxicological response requires a better understanding of the biological basis for inter- and intra-human variability and susceptibility. This understanding will not be complete unless all life stages are taken into consideration. Because animal studies serve as a foundation for toxicological assessment, proper accounting for cross-species extrapolation is essential. To achieve this, adjustments for dose-rate effects must be improved, which will aid in extrapolating toxicological responses to low doses and from short-term exposures. Success depends on greater use of validated biologically based dose-response models that include pharmacokinetic and pharmacodynamic data. Research in these areas will help define uncertainty factors and reduce reliance on underlying default assumptions. Throughout the workshop the panel recognized that biomedical science, and toxicology in particular, is on the verge of a revolution because of advances in genomics and proteomics. Data from these high-output technologies are anticipated to greatly improve risk assessment by enabling scientists to better define and model the elements of the relationship between exposure to biological hazards and health risks in populations with differing susceptibilities.

18.
Halley (2003) proposed that parameter drift decreases the uncertainty in long-range extinction risk estimates, because drift mitigates the extreme sensitivity of estimated risk to estimated mean growth rate. However, parameter drift has a second, opposing effect: it increases the uncertainty in parameter estimates from a given data set. When both effects are taken into account, parameter drift can increase, sometimes substantially, the uncertainty in risk estimates. The net effect depends sensitively on the type of drift and on which model parameters must be estimated from observational data on the population at risk. In general, unless many parameters are estimated from independent data, parameter drift increases the uncertainty in extinction risk. These findings suggest that more mechanistic population viability analysis (PVA) models, using long-term data on key environmental variables and experiments to quantify their demographic impacts, offer the best prospects for escaping the high data requirements when extinction risk is estimated from observational data.

19.
For the risk to human health posed by chemicals that show threshold toxicity there is an increasing need to move away from using the default approaches, which inherently incorporate uncertainty, towards more biologically defensible risk assessments. However, most chemical databases do not contain data of sufficient quantity or quality that can be used to replace either the interspecies or interindividual aspects of toxicokinetic and toxicodynamic uncertainty. The purpose of the current analysis was to evaluate the use of alternative, species-specific, pathway-related, “categorical” default values to replace the current interspecies toxicokinetic default uncertainty factor of 4.0. The extent of the difference in the internal dose of a compound, for each test species, could then be related to the specific route of metabolism in humans. This refinement would allow for different categories of defaults to be used, providing that the metabolic fate of a toxicant was known in humans. Interspecies differences in metabolism, excretion, and bioavailability have been compared for probe substrates for four different human xenobiotic-metabolizing enzymes: CYP1A2 (caffeine, paraxanthine, theobromine, and theophylline), CYP3A4 (lidocaine), UDP-glucuronyltransferase (AZT), and esterases (aspirin). The results of this analysis showed that there are significant differences between humans and the four test species in the metabolic fate of the probe compounds, the enzymes involved, the route of excretion and oral bioavailability — all of which are factors that can influence the extent of the difference between humans and a test species in the internal dose of a toxicant. The wide variability between both compounds and the individual species suggests that the categorical approach for species differences may be of limited use in refining the current default approach. 
However, future work should incorporate a wider database of compounds that are metabolized extensively by each pathway in humans, to provide more information on the extent to which the different test species are not covered by the default of 4.0. Ultimately this work supports the need to remove uncertainty from the risk assessment process through the generation and use of compound-specific data.

20.
Hodgson DR, Suga H. Biopolymers, 2004, 73(1):130-150
In vitro selection has allowed the isolation of many new ribozymes that are able to catalyze an ever-widening array of chemical transformations. Mechanistic studies on these selected ribozymes have provided valuable insight into the methods that RNA can invoke to overcome different catalytic tasks. We focus on the methods employed in these mechanistic studies using the acyl-transferase family of selected ribozymes as well-studied reference systems. Chemical and biochemical techniques have been used in tandem in order to draw conclusions on the various modes of catalysis employed by the different family members. In turn, this type of mechanistic information may provide a means for the redesign and optimization of existing ribozymes or the basis for new selection systems for more powerful RNA catalysts.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号