Similar literature
A total of 20 similar documents were found.
1.
Benchmark analysis is a widely used tool in biomedical and environmental risk assessment. Therein, estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a prespecified benchmark response (BMR) is well understood for the case of an adverse response to a single stimulus. For cases where two agents are studied in tandem, however, the benchmark approach is far less developed. This paper demonstrates how the benchmark modeling paradigm can be expanded from the single‐agent setting to joint‐action, two‐agent studies. Focus is on continuous response outcomes. Extending the single‐exposure setting, representations of risk are based on a joint‐action dose–response model involving both agents. Based on such a model, the concept of a benchmark profile—a two‐dimensional analog of the single‐dose BMD at which both agents achieve the specified BMR—is defined for use in quantitative risk characterization and assessment.
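
One way to picture the benchmark-profile idea is as the set of two-agent dose combinations that just reach the BMR on an assumed joint-action response surface. The sketch below is only an illustration of that concept, not the authors' model: it uses a hypothetical linear-with-interaction mean function mu(d1, d2) = b0 + b1*d1 + b2*d2 + b12*d1*d2 with invented coefficients, and defines the BMR as a fixed relative change from the control mean.

    # Illustrative sketch only: a hypothetical joint-action response surface and
    # the curve of (d1, d2) dose pairs at which a specified BMR is attained.
    import numpy as np

    b0, b1, b2, b12 = 10.0, 0.8, 1.2, 0.05   # invented coefficients
    BMR = 0.10                               # 10% relative change from the control mean

    def mean_response(d1, d2):
        return b0 + b1 * d1 + b2 * d2 + b12 * d1 * d2

    def benchmark_profile(d1_grid):
        """For each dose of agent 1, solve for the dose of agent 2 at which the
        relative change in the mean equals the BMR (linear in d2, so closed form)."""
        target = BMR * mean_response(0.0, 0.0)   # required absolute increase over control
        profile = []
        for d1 in d1_grid:
            remaining = target - b1 * d1
            d2 = remaining / (b2 + b12 * d1) if remaining > 0 else 0.0
            profile.append((d1, d2))
        return profile

    for d1, d2 in benchmark_profile(np.linspace(0.0, 1.2, 7)):
        print(f"d1 = {d1:5.2f}  ->  d2 at BMR = {d2:5.2f}")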

2.
In risk assessment, it is often desired to make inferences on the low dose levels at which a specific benchmark risk is attained. Applications of simultaneous hyperbolic confidence bands for low‐dose risk estimation with quantal data under different dose‐response models (multistage, Abbott‐adjusted Weibull, and Abbott‐adjusted log‐logistic models) have appeared in the literature. The use of simultaneous three‐segment bands under the multistage model has also been proposed recently. In this article, we present explicit formulas for constructing asymptotic one‐sided simultaneous hyperbolic and three‐segment bands for the simple log‐logistic regression model. We use the simultaneous construction to estimate upper hyperbolic and three‐segment confidence bands on extra risk and to obtain lower limits on the benchmark dose by inverting the upper bands on risk under the Abbott‐adjusted log‐logistic model. Monte Carlo simulations evaluate the characteristics of the simultaneous limits. An example is given to illustrate the use of the proposed methods and to compare the two types of simultaneous limits at very low dose levels.
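
As a rough numeric companion to the band-inversion idea (not the paper's explicit formulas), the sketch below fits a simple two-parameter log-logistic model to invented quantal data with statsmodels, forms a Scheffé-style one-sided simultaneous upper band on extra risk from the Wald covariance, and takes the smallest dose whose upper-banded extra risk reaches the BMR as a simultaneous lower limit on the BMD.

    # Illustrative sketch (invented data, Scheffe-style critical value): upper
    # simultaneous band on extra risk under a simple log-logistic model, inverted
    # to a lower limit on the benchmark dose.
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import chi2

    dose = np.array([0.25, 0.5, 1.0, 2.0, 4.0])   # hypothetical dose groups
    resp = np.array([2, 4, 9, 16, 24])            # number responding
    n    = np.array([30, 30, 30, 30, 30])         # group sizes

    X = sm.add_constant(np.log(dose))             # design: [1, log d]
    fit = sm.GLM(np.column_stack([resp, n - resp]), X,
                 family=sm.families.Binomial()).fit()
    beta, V = fit.params, fit.cov_params()

    crit = np.sqrt(chi2.ppf(0.95, df=2))          # Scheffe-type simultaneous critical value

    def upper_extra_risk(d):
        x = np.array([1.0, np.log(d)])
        eta = x @ beta
        se = np.sqrt(x @ V @ x)
        # With zero background, extra risk equals the response probability P(d).
        return 1.0 / (1.0 + np.exp(-(eta + crit * se)))

    BMR = 0.10
    grid = np.logspace(-3, np.log10(dose.max()), 2000)
    bmdl = grid[np.argmax([upper_extra_risk(d) >= BMR for d in grid])]
    print(f"Simultaneous lower limit on the BMD (BMR = {BMR:.0%}): {bmdl:.3f}")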

3.
4.
We compared the effect of uncertainty in dose‐response model form on health risk estimates to the effect of uncertainty and variability in exposure. We used three different dose‐response models to characterize neurological effects in children exposed in utero to methylmercury, and applied these models to calculate risks to a native population exposed to potentially contaminated fish from a reservoir in British Columbia. Uncertainty in model form was explicitly incorporated into the risk estimates. The selection of dose‐response model strongly influenced both mean risk estimates and distributions of risk, and had a much greater impact than altering exposure distributions. We conclude that incorporating uncertainty in dose‐response model form is at least as important as accounting for variability and uncertainty in exposure parameters in probabilistic risk assessment.  相似文献   

5.
Human exposure to endocrine disrupters (EDs) is widespread and is considered to pose a growing threat to human health. Recent advances in molecular and genetic research and a better understanding of the mechanisms of blastic cell transformation have led to efforts to improve cancer risk assessment for populations exposed to this family of xenobiotics. In risk assessment, low-dose extrapolation of cancer incidence data from both experimental animals and epidemiology studies has largely been based on models assuming a linear correlation at low doses, despite the existence of evidence showing otherwise. Another weakness of ED risk assessment is poor exposure data in ecological studies; these are frequently rough estimates derived from contaminated items in local food basket surveys. Polyhalogenated hydrocarbons are treated as examples. There is a growing sense of urgency to develop a biologically based dose-response model of cancer risk, integrating emerging data from molecular biology and epidemiology to provide more realistic data for risk assessors, the public, public health managers and administrators of environmental issues.

6.
While epidemiological data typically contain a multivariate response and often also multiple exposure parameters, current methods for safe dose calculations, including the widely used benchmark approach, rely on standard regression techniques. In practice, dose-response modeling and calculation of the exposure limit are often based on the seemingly most sensitive outcome. However, this procedure ignores other available data, is inefficient, and fails to account for multiple testing. Instead, risk assessment could be based on structural equation models, which can accommodate both a multivariate exposure and a multivariate response function. Furthermore, such models will allow for measurement error in the observed variables, which is a requirement for unbiased estimation of the benchmark dose. This methodology is illustrated with the data on neurobehavioral effects in children prenatally exposed to methylmercury, where results based on standard regression models cause an underestimation of the true risk.

7.
Food contact materials (FCMs) – including plastics, paper and inks – are intended to be brought into contact with food and can transfer their constituents to food under normal or foreseeable use, including direct or indirect food contact. The safety of FCMs in the EU is evaluated by the European Food Safety Authority (EFSA) using risk assessment rules. Results of independent, health-based chemical risk assessments are crucial for the decision-making process to authorize the use of substances in FCMs. However, the risk assessment approach used in the EU has several shortcomings that need to be addressed in order to ensure that consumer health is protected from exposure arising from FCMs. This article presents meta-analysis as a useful tool in chronic risk assessment for substances migrating from FCMs. Meta-analysis can be used to review and summarize research on FCM safety and thereby provide a more accurate assessment of the impact of exposure with increased statistical power, yielding more reliable data for risk assessment. The article explains a common methodology for conducting a meta-analysis, using as an example the meta-analysis of the dose-effect relationship of cadmium performed by EFSA for benchmark dose evaluations.
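
For context, the pooling step in such a meta-analysis often amounts to inverse-variance weighting with an allowance for between-study heterogeneity. The sketch below implements the standard DerSimonian-Laird random-effects estimator on invented per-study dose-effect slopes; it is not EFSA's cadmium analysis.

    # Minimal DerSimonian-Laird random-effects meta-analysis on invented effect sizes.
    import numpy as np

    # Hypothetical per-study dose-effect slopes and their standard errors
    effect = np.array([0.042, 0.055, 0.031, 0.060, 0.048])
    se     = np.array([0.010, 0.015, 0.012, 0.020, 0.011])

    w_fixed = 1.0 / se**2                               # inverse-variance weights
    mu_fixed = np.sum(w_fixed * effect) / np.sum(w_fixed)

    # Between-study heterogeneity (DerSimonian-Laird tau^2 estimate)
    Q = np.sum(w_fixed * (effect - mu_fixed) ** 2)
    df = len(effect) - 1
    c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
    tau2 = max(0.0, (Q - df) / c)

    w_re = 1.0 / (se**2 + tau2)                         # random-effects weights
    mu_re = np.sum(w_re * effect) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))

    print(f"Pooled effect: {mu_re:.4f} (SE {se_re:.4f}), tau^2 = {tau2:.5f}, Q = {Q:.2f}")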

8.
A primary objective in quantitative risk or safety assessment is characterization of the severity and likelihood of an adverse effect caused by a chemical toxin or pharmaceutical agent. In many cases data are not available at low doses or low exposures to the agent, and inferences at those doses must be based on the high-dose data. A modern method for making low-dose inferences is known as benchmark analysis, where attention centers on the dose at which a fixed benchmark level of risk is achieved. Both upper confidence limits on the risk and lower confidence limits on the "benchmark dose" are of interest. In practice, a number of possible benchmark risks may be under study; if so, corrections must be applied to adjust the limits for multiplicity. In this short note, we discuss approaches for doing so with quantal response data.
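
One simple way to adjust several benchmark limits for multiplicity is a Bonferroni split of the error rate across the BMRs under study; the note's own corrections may be sharper. The sketch below applies that idea to invented quantal data under a hypothetical one-hit model R(d) = 1 - exp(-b*d) with zero background.

    # Sketch of a Bonferroni-style multiplicity adjustment for several BMDLs,
    # using a hypothetical one-hit model and invented quantal data.
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import norm

    dose = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
    resp = np.array([0, 3, 6, 11, 19])
    n    = np.array([25, 25, 25, 25, 25])

    def neg_loglik(b):
        p = np.clip(1.0 - np.exp(-b * dose), 1e-12, 1.0 - 1e-12)
        return -np.sum(resp * np.log(p) + (n - resp) * np.log(1.0 - p))

    b_hat = minimize_scalar(neg_loglik, bounds=(1e-6, 10.0), method="bounded").x

    h = 1e-4                # observed information by central finite differences
    info = (neg_loglik(b_hat + h) - 2 * neg_loglik(b_hat) + neg_loglik(b_hat - h)) / h**2
    se_b = 1.0 / np.sqrt(info)

    bmrs, alpha = [0.01, 0.05, 0.10], 0.05
    z = norm.ppf(1.0 - alpha / len(bmrs))       # Bonferroni: split alpha across the BMRs
    b_up = b_hat + z * se_b                     # one-sided upper limit on the potency b
    for q in bmrs:
        bmd  = -np.log(1 - q) / b_hat           # BMD under the one-hit model
        bmdl = -np.log(1 - q) / b_up            # larger potency bound -> smaller dose bound
        print(f"BMR {q:4.0%}: BMD = {bmd:.3f}, adjusted BMDL = {bmdl:.3f}")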

9.
Fryer HR, McLean AR. PLoS ONE, 2011, 6(8): e23664
Understanding the circumstances under which exposure to transmissible spongiform encephalopathies (TSEs) leads to infection is important for managing risks to public health. Based upon ideas in toxicology and radiology, it is plausible that exposure to harmful agents, including TSEs, is completely safe if the dose is low enough. However, the existence of a threshold, below which infection probability is zero, has never been demonstrated experimentally. Here we explore this question by combining data and mathematical models that describe scrapie infections in mice following experimental challenge over a broad range of doses. We analyse data from 4338 mice inoculated at doses ranging over ten orders of magnitude. These data are compared to results from a within-host model in which prions accumulate according to a stochastic birth-death process. Crucially, this model assumes no threshold on the dose required for infection. Our data reveal that infection is possible at the very low dose of a 1000-fold dilution of the dose that infects half the challenged animals (ID50). Furthermore, the dose response curve closely matches that predicted by the model. These findings imply that there is no safe dose of prions and that assessments of the risk from low-dose exposure are right to assume a linear relationship between dose and probability of infection. We also refine two common perceptions about TSE incubation periods: that their mean values decrease linearly with logarithmic decreases in dose and that they are highly reproducible between hosts. The model and data both show that the linear decrease in incubation period holds only for doses above the ID50. Furthermore, variability in incubation periods is greater than predicted by the model, not smaller. This result poses new questions about the sources of variability in prion incubation periods. It also provides insight into the limitations of the incubation period assay.
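
The no-threshold argument can be illustrated with a toy calculation (parameters invented, not fitted to the mouse data): if each inoculated prion founds a lineage that escapes extinction with probability 1 - q under a birth-death process, and the number of founding prions is Poisson with mean proportional to dose, then the infection probability is 1 - exp(-c*dose*(1 - q)), which is positive at every dose and approximately linear at low doses.

    # Toy birth-death illustration of why a no-threshold model is linear at low dose.
    import numpy as np

    rng = np.random.default_rng(1)
    birth, death = 1.0, 0.8            # hypothetical per-prion rates (birth > death)
    q = death / birth                  # extinction probability of a single lineage
    c = 2.0                            # hypothetical founding prions per unit dose

    def p_infect_analytic(dose):
        # Poisson number of founding prions, each escaping extinction with prob (1 - q)
        return 1.0 - np.exp(-c * dose * (1.0 - q))

    def p_infect_simulated(dose, trials=200000):
        k = rng.poisson(c * dose, size=trials)           # founding prions per exposure
        return np.mean(rng.random(trials) < 1.0 - q**k)  # >= 1 lineage escapes extinction

    for d in [1e-3, 1e-2, 1e-1, 1.0]:
        print(f"dose {d:7.3f}: analytic {p_infect_analytic(d):.5f}, "
              f"simulated {p_infect_simulated(d):.5f}")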

10.
Benchmark dose calculation from epidemiological data
A threshold for dose-dependent toxicity is crucial for standards setting but may not be possible to specify from empirical studies. Crump (1984) instead proposed calculating the lower statistical confidence bound of the benchmark dose, which he defined as the dose that causes a small excess risk. This concept has several advantages and has been adopted by regulatory agencies for establishing safe exposure limits for toxic substances such as mercury. We have examined the validity of this method as applied to an epidemiological study of continuous response data associated with mercury exposure. For models that are linear in the parameters, we derived an approximate expression for the lower confidence bound of the benchmark dose. We find that the benchmark calculations are highly dependent on the choice of the dose-effect function and the definition of the benchmark dose. We therefore recommend that several sets of biologically relevant default settings be used to illustrate the effect on the benchmark results and to stimulate research that will guide an a priori choice of proper default settings.
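
For a dose-effect model that is linear in the parameters, the benchmark calculation reduces to simple Wald-type algebra. The sketch below is a generic Crump-style illustration on invented continuous data, not the authors' derived expression: the BMD is taken as the dose producing a fixed shift delta in the mean, and an approximate one-sided lower bound follows either from the delta method or from the upper confidence limit on the slope.

    # Sketch: approximate lower confidence bound on the BMD for a linear dose-effect
    # model y = a + b*dose + error, with the BMD defined by a fixed mean shift delta.
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import norm

    rng = np.random.default_rng(7)
    dose = np.repeat([0.0, 1.0, 2.0, 5.0, 10.0], 20)          # invented exposure levels
    y = 50.0 + 0.9 * dose + rng.normal(0.0, 4.0, dose.size)   # invented continuous response

    fit = sm.OLS(y, sm.add_constant(dose)).fit()
    b, se_b = fit.params[1], fit.bse[1]

    delta = 2.0                     # benchmark response: a 2-unit shift in the mean
    bmd = delta / b                 # BMD for the linear model

    # Delta-method standard error of BMD = delta/b, then a one-sided 95% lower bound.
    se_bmd = delta * se_b / b**2
    bmdl_delta = bmd - norm.ppf(0.95) * se_bmd

    # Alternative: plug in the one-sided upper limit on the slope.
    bmdl_slope = delta / (b + norm.ppf(0.95) * se_b)

    print(f"BMD = {bmd:.2f}, delta-method BMDL = {bmdl_delta:.2f}, "
          f"slope-inversion BMDL = {bmdl_slope:.2f}")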

11.
12.
Basal metabolic rate (BMR) constitutes the minimal metabolic rate in the zone of thermo‐neutrality, where heat production is not elevated for temperature regulation. BMR thus constitutes the minimum metabolic rate that is required for maintenance. Interspecific variation in BMR in birds is correlated with food habits, climate, habitat, flight activity, torpor, altitude, and migration, although the selective forces involved in the evolution of these presumed adaptations are not always obvious. I suggest that BMR constitutes the minimum level required for maintenance, and that variation in this minimum level reflects the fitness costs and benefits in terms of ability to respond to selective agents like predators, implying that an elevated level of BMR is a cost of wariness towards predators. This hypothesis predicts a positive relationship between BMR and measures of risk taking such as flight initiation distance (FID) of individuals approached by a potential predator. Consistent with this suggestion, I show in a comparative analysis of 76 bird species that species with higher BMR for their body mass have longer FID when approached by a potential predator. This effect was independent of potentially confounding variables and similarity among species due to common phylogenetic descent. These results imply that BMR is positively related to risk‐taking behaviour, and that predation constitutes a neglected factor in the evolution of BMR.

13.
We study the use of simultaneous confidence bands for low-dose risk estimation with quantal response data, and derive methods for estimating simultaneous upper confidence limits on predicted extra risk under a multistage model. By inverting the upper bands on extra risk, we obtain simultaneous lower bounds on the benchmark dose (BMD). Monte Carlo evaluations explore characteristics of the simultaneous limits under this setting, and a suite of real data sets is used to compare existing methods for placing lower limits on the BMD.

14.
Typical enteric pathogens, including enteroviruses, Salmonella typhi, Shigella spp., and Escherichia coli, were selected and monitored over a 1-year period in urban surface waters using the real-time polymerase chain reaction (PCR) method. By considering two routes of human exposure to urban surface waters (i.e., drinking water and involuntary intake), and supposing that the dose–response relation may follow either an exponential model or the Beta-Poisson model, health risk assessment was conducted to estimate the safety of exposure to each water sample under a given acceptable risk level and to evaluate the level of pathogen inactivation required to safeguard human health. It was found that the human health risk due to enteroviruses is often greater than that due to bacterial pathogens, and that greater removal of enteroviruses would be required to provide protection at the same acceptable risk level.
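
The two dose–response forms mentioned are the standard choices in quantitative microbial risk assessment. The sketch below shows how per-event risk and the required pathogen reduction would be computed under each; all parameter values are illustrative placeholders rather than values from this study.

    # Sketch of QMRA risk calculation with exponential and approximate Beta-Poisson
    # dose-response models; every parameter here is an illustrative placeholder.
    import numpy as np

    def risk_exponential(dose, r):
        return 1.0 - np.exp(-r * dose)

    def risk_beta_poisson(dose, alpha, N50):
        # Common approximate Beta-Poisson form parameterised by alpha and N50
        return 1.0 - (1.0 + dose * (2.0**(1.0 / alpha) - 1.0) / N50) ** (-alpha)

    conc = 5.0          # hypothetical pathogens per litre in the source water
    intake = 0.05       # hypothetical litres ingested per exposure event
    dose = conc * intake

    p_exp = risk_exponential(dose, r=0.01)
    p_bp  = risk_beta_poisson(dose, alpha=0.25, N50=100.0)
    print(f"Per-event risk: exponential {p_exp:.2e}, Beta-Poisson {p_bp:.2e}")

    # Required inactivation (log10 reduction) so that the exponential-model risk
    # meets a hypothetical acceptable level of 1e-4 per event.
    target = 1e-4
    dose_max = -np.log(1.0 - target) / 0.01
    log_reduction = max(0.0, np.log10(dose / dose_max))
    print(f"Required log10 reduction (exponential model): {log_reduction:.2f}")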

15.
The central challenge from the Precautionary Principle to statistical methodology is to help delineate (preferably quantitatively) the possibility that some exposure is hazardous, even in cases where this is not established beyond reasonable doubt. The classical approach to hypothesis testing is unhelpful, because lack of significance can be due either to uninformative data or to a genuine lack of effect (the Type II error problem). Its inversion, bioequivalence testing, might sometimes be a model for the Precautionary Principle in its ability to ‘prove the null hypothesis.’ Current procedures for setting safe exposure levels are essentially derived from these classical statistical ideas, and we outline how uncertainties in the exposure and response measurements affect the No Observed Adverse Effect Level (NOAEL), the Benchmark approach and the "Hockey Stick" model. A particular problem concerns model uncertainty: these procedures usually assume that the class of models describing the dose/response relationship is known with certainty; however, this assumption is often violated, perhaps particularly often when epidemiological data form the source of the risk assessment, and regulatory authorities have occasionally resorted to some average based on competing models. The recent methodology of Bayesian model averaging might be a systematic version of this, but is this an arena for the Precautionary Principle to come into play?
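
A crude but systematic version of "averaging over competing models" is to weight each fitted dose-response form by an information criterion. The sketch below uses BIC weights over three binomial-link models fitted to invented quantal data as a rough stand-in for full Bayesian model averaging; it is not a method endorsed by this paper.

    # Rough stand-in for Bayesian model averaging over dose-response forms:
    # BIC weights across a few binomial-link models fitted to invented quantal data.
    import numpy as np
    import statsmodels.api as sm

    dose = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
    resp = np.array([1, 2, 5, 9, 15, 22])
    n    = np.full(6, 25)

    X = sm.add_constant(dose)
    endog = np.column_stack([resp, n - resp])
    links = {"logit":   sm.families.links.Logit(),
             "probit":  sm.families.links.Probit(),
             "cloglog": sm.families.links.CLogLog()}

    fits, bic = {}, {}
    for name, link in links.items():
        f = sm.GLM(endog, X, family=sm.families.Binomial(link=link)).fit()
        fits[name] = f
        # BIC on the binomial likelihood; the penalty cancels here because every
        # model has the same two parameters, so the weights are likelihood-driven.
        bic[name] = -2.0 * f.llf + len(f.params) * np.log(n.sum())

    b = np.array(list(bic.values()))
    w = np.exp(-0.5 * (b - b.min()))
    w /= w.sum()                                # approximate posterior model weights

    d_low = 0.1                                 # a low dose of regulatory interest
    p_avg = sum(wi * f.predict(np.array([[1.0, d_low]]))[0]
                for wi, f in zip(w, fits.values()))
    print("BIC weights:", {k: round(float(wi), 3) for k, wi in zip(bic, w)})
    print(f"Model-averaged predicted risk at dose {d_low}: {p_avg:.4f}")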

16.
Toxaphene is a liver tumor promoter in B6C3F1 mice but not in F344 rats or hamsters. Recent studies demonstrate that key events leading to the mouse liver tumor response for toxaphene are mediated by activation of the constitutive androstane receptor (CAR). Benchmark dose modeling was conducted on available data for five endpoints in B6C3F1 mouse liver tissue or cultured liver cells (tumor response, cytotoxicity, proliferation, gap junction intercellular communication inhibition, and CAR-mediated CYP2B10 induction) and for CAR activation in human HepG2 cells, all reported in previous studies. The available evidence supports a nonlinear CAR-mediated mode of action (MOA) for toxaphene-induced mouse tumors, including demonstration of a J-shaped dose-response pattern for human CAR activation, indicating that linear risk extrapolation at low doses is not supported for this MOA. Based on analysis of benchmark dose lower confidence limits at 10% response (BMDL10) and no observed effect levels (NOELs) for potential key events in the mouse liver tumor MOA for toxaphene, an RfD of 0.13 mg/kg-d is proposed, derived from the BMDL10 for human CAR activation in human HepG2 cells. This value is below candidate RfD values based on BMDL10 estimates for both mouse liver tumors and mouse hepatocyte proliferation and therefore can be considered protective against human risk of liver tumor promotion and other CAR-mediated adverse health effects based on available data.
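
The final step of such an assessment is simple arithmetic: the point of departure (here a BMDL10) is divided by a composite uncertainty factor. The numbers below are placeholders chosen purely for illustration, not the point of departure or factors used in this study.

    # Placeholder arithmetic for deriving a reference dose from a BMDL10.
    bmdl10 = 20.0            # hypothetical BMDL10 in mg/kg-d for the most sensitive key event
    uf_interspecies = 10.0   # hypothetical animal-to-human uncertainty factor
    uf_intraspecies = 10.0   # hypothetical human-variability uncertainty factor
    rfd = bmdl10 / (uf_interspecies * uf_intraspecies)
    print(f"RfD = {rfd:.2f} mg/kg-d")   # 0.20 mg/kg-d with these placeholder inputs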

17.
Immune responses are highly dynamic. The magnitude and efficiency of an immune response to a pathogen can change markedly across individuals, and such changes may be influenced by variance in a range of intrinsic (e.g. age, genotype, sex) and external (e.g. abiotic stress, pathogen identity, strain) factors. Life history theory predicts that up‐regulation of the immune system will come at a physiological cost, and studies have confirmed that increased investment in immunity can reduce reproductive output and survival. Furthermore, males and females often have divergent reproductive strategies, and this might drive the evolution of sex‐specific life history trade‐offs involving immunity, and sexual dimorphism in immune responses per se. Here, we employ an experimental design to elucidate dose‐dependent and sex‐specific responses to exposure to a nonpathogenic immune elicitor at two scales – the ‘ultimate’ life history level and the underlying ‘proximate’ immune level – in Drosophila melanogaster. We found dose‐dependent effects of immune challenges on both male and female components of reproductive success, but not on survival, as well as a response in antimicrobial activity. These results indicate that even in the absence of the direct pathogenic effects that are associated with actual disease, individual life histories respond to a perceived immune challenge – but with the magnitude of this response being contingent on the initial dose of exposure. Furthermore, the results indicate that immune responses at the ultimate life history level may indeed reflect underlying processes that occur at the proximate level.

18.
Aim Ixodes scapularis is the most important vector of human tick‐borne pathogens in the United States, which include the agents of Lyme disease, human babesiosis and human anaplasmosis, among others. The density of host‐seeking I. scapularis nymphs is an important component of human risk for acquiring Borrelia burgdorferi, the aetiological agent of Lyme disease. In this study we used climate and field sampling data to generate a predictive map of the density of host‐seeking I. scapularis nymphs that can be used by the public, physicians and public health agencies to assist with the diagnosis and reporting of disease, and to better target disease prevention and control efforts. Location Eastern United States of America. Methods We sampled host‐seeking I. scapularis nymphs in 304 locations uniformly distributed east of the 100th meridian between 2004 and 2006. Between May and September, 1000 m² were drag‐sampled three to six times per site. We developed a zero‐inflated negative binomial model to predict the density of host‐seeking I. scapularis nymphs based on altitude, interpolated weather station and remotely sensed data. Results Variables that had the strongest relationship with nymphal density were altitude, monthly mean vapour pressure deficit and spatial autocorrelation. Forest fragmentation and soil texture were not predictive. The best‐fit model identified two main foci – the north‐east and upper Midwest – and predicted the presence and absence of I. scapularis nymphs with 82% accuracy, 89% sensitivity and 82% specificity. Areas of concordance and discordance with previous studies are discussed. Areas with high predicted but low observed densities of host‐seeking nymphs were identified as potential expansion fronts. Main conclusions This model is unique in its extensive and unbiased field sampling effort, allowing for an accurate delineation of the density of host‐seeking I. scapularis nymphs, an important component of human risk of infection with B. burgdorferi and other I. scapularis‐borne pathogens.
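
For readers unfamiliar with the model class, a zero-inflated negative binomial mixes a logistic model for structural zeros with a negative binomial count model. The sketch below fits one to simulated tick-count data with statsmodels (hypothetical covariates, not the study's predictors or spatial terms); on real data the starting values and optimizer options may need tuning.

    # Illustrative zero-inflated negative binomial fit on simulated count data;
    # requires statsmodels >= 0.9.
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

    rng = np.random.default_rng(42)
    n = 500
    altitude = rng.uniform(0.0, 1.0, n)     # scaled hypothetical covariate
    vpd = rng.uniform(0.0, 1.0, n)          # scaled mean vapour pressure deficit

    # Simulate counts: a logistic "structural zero" process plus a negative binomial count.
    p_zero = 1.0 / (1.0 + np.exp(-(-0.5 + 2.0 * altitude)))   # zeros more likely at altitude
    mu = np.exp(1.0 + 1.5 * vpd - 1.0 * altitude)             # expected nymph count
    disp = 1.2                                                # NB dispersion parameter
    counts = rng.negative_binomial(disp, disp / (disp + mu))
    counts[rng.random(n) < p_zero] = 0

    X = sm.add_constant(np.column_stack([altitude, vpd]))     # count-model design
    X_infl = sm.add_constant(altitude)                        # zero-inflation design

    model = ZeroInflatedNegativeBinomialP(counts, X, exog_infl=X_infl, p=2)
    res = model.fit(method="bfgs", maxiter=500, disp=False)
    print(res.summary())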

19.
Excess mortality in persons with severe mental disorders (SMD) is a major public health challenge that warrants action. The number and scope of truly tested interventions in this area remain limited, and strategies for implementation and scaling up of programmes with a strong evidence base are scarce. Furthermore, the majority of available interventions focus on a single or an otherwise limited number of risk factors. Here we present a multilevel model highlighting risk factors for excess mortality in persons with SMD at the individual, health system and socio‐environmental levels. Informed by that model, we describe a comprehensive framework that may be useful for designing, implementing and evaluating interventions and programmes to reduce excess mortality in persons with SMD. This framework includes individual‐focused, health system‐focused, and community level and policy‐focused interventions. Incorporating lessons learned from the multilevel model of risk and the comprehensive intervention framework, we identify priorities for clinical practice, policy and research agendas.

20.
Aims: To develop a predictive dose–response model describing the survival of animals exposed to Bacillus anthracis, in order to support risk management options. Methods and Results: Dose–response curves were generated from a large dose–mortality data set (>11 000 data points) consisting of guinea pigs exposed via the inhalation route to 76 different product preparations of B. anthracis. Because of the predictive nature of the Bayesian hierarchical approach (BHA), this method was used. The utility of this method in planning for a variety of scenarios, from best case to worst case, was demonstrated. Conclusions: A wide range of expected virulence was observed across products. Median estimates of virulence match well with previously published statistical estimates, but upper-bound values of virulence are much greater than previous statistical estimates. Significance and Impact of the Study: This study is the first meta‐analysis in the open literature to estimate the dose–response relationship for B. anthracis from a very large data set, generally a rare occurrence for highly infectious pathogens. The results are also the first to suggest the extent of variability contributed by product preparation and/or dissemination methods, information that is needed for health‐based risk management decisions in response to a deliberate release. A set of possible benchmark values produced through this analysis can be tied to the risk tolerance of the decision‐maker or the available intelligence. Further, the substantial size of the data set made it possible to assess the appropriateness of the assumed distributional form of the prior, a common limitation in Bayesian analysis.
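
To convey the shape of such a hierarchical analysis without the underlying data, the sketch below simply draws product-level potencies from a shared hyperdistribution under an exponential dose-response model and summarises the resulting spread in predicted ID50s and mortalities. It is a generative illustration of the idea, not the paper's fitted model, and every number is invented.

    # Generative sketch of a Bayesian-hierarchical style dose-response analysis:
    # product-specific exponential potencies drawn from a shared hyperdistribution.
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical hyperparameters for log10 potency (per-spore infection probability r)
    mu_log_r, sd_log_r = -6.0, 0.8
    n_products = 76

    log_r = rng.normal(mu_log_r, sd_log_r, n_products)   # one potency per product preparation
    r = 10.0 ** log_r

    id50 = np.log(2.0) / r                                # exponential model: ID50 = ln 2 / r
    print(f"median ID50: {np.median(id50):.2e} spores")
    print(f"95% interval across products: {np.percentile(id50, 2.5):.2e} "
          f"to {np.percentile(id50, 97.5):.2e} spores")

    # Predicted mortality at a fixed challenge dose, from typical to worst-case products
    dose = 1e4
    p = 1.0 - np.exp(-r * dose)
    print(f"mortality at {dose:.0e} spores: median {np.median(p):.3f}, "
          f"upper bound {np.max(p):.3f}")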
