Similar Articles
10 similar articles retrieved.
1.
Summary.  This article introduces an original methodology based on empirical likelihood, which aims at combining different food contamination and consumption surveys to provide risk managers with a risk measure that takes into account all the available information. This risk index is defined as the probability that exposure to a contaminant exceeds a safe dose. It is naturally expressed as a nonlinear functional of the different consumption and contamination distributions, more precisely as a generalized U-statistic. This nonlinearity and the huge size of the data sets make direct computation of the problem infeasible. Using linearization techniques and incomplete versions of the U-statistic, a tractable "approximated" empirical likelihood program is solved, yielding asymptotic confidence intervals for the risk index. An alternative "Euclidean likelihood program" is also considered, replacing the Kullback–Leibler distance involved in the empirical likelihood by the Euclidean distance. Both methodologies are tested on simulated data and applied to assess the risk due to the presence of methyl mercury in fish and other seafood.
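
As a rough illustration of how such a risk index can be approximated when full pairwise computation is out of reach, the sketch below draws random consumption-contamination pairs and averages an exceedance indicator, i.e. an incomplete U-statistic. It is only a minimal Python sketch under invented lognormal distributions and an invented safe dose; it does not reproduce the paper's empirical-likelihood or Euclidean-likelihood confidence intervals.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical survey data (placeholders, not the paper's data):
    # fish consumption (kg/week) from one survey, methyl-mercury
    # contamination (mg/kg) from another.
    consumption = rng.lognormal(mean=-1.0, sigma=0.8, size=5000)
    contamination = rng.lognormal(mean=-2.5, sigma=0.6, size=3000)
    safe_dose = 0.05          # illustrative "safe dose" in mg/week

    # The complete U-statistic averages 1{c_i * q_j > d} over all pairs (i, j).
    # For huge surveys this is infeasible, so draw an incomplete version:
    B = 200_000               # number of randomly sampled pairs
    i = rng.integers(0, consumption.size, size=B)
    j = rng.integers(0, contamination.size, size=B)
    exposure = consumption[i] * contamination[j]
    risk_index = np.mean(exposure > safe_dose)

    # Crude standard error treating the sampled pairs as i.i.d., ignoring the
    # U-statistic dependence that the empirical-likelihood machinery handles.
    se = np.sqrt(risk_index * (1 - risk_index) / B)
    print(f"estimated P(exposure > d) = {risk_index:.4f} (naive se = {se:.4f})")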

2.
The epidemiologic concept of the adjusted attributable risk is a useful approach to quantitatively describe the importance of risk factors at the population level. It measures the proportional reduction in disease probability when a risk factor is eliminated from the population, accounting for confounding and effect modification by nuisance variables. Asymptotic variance estimates for estimates of the adjusted attributable risk are often computed by applying the delta method. Investigations of the delta method have shown, however, that it generally tends to underestimate the standard error, leading to biased confidence intervals. We compare confidence intervals for the adjusted attributable risk derived from computer-intensive methods such as the bootstrap and the jackknife with confidence intervals based on asymptotic variance estimates, using an extensive Monte Carlo simulation and a real data example from a cohort study in cardiovascular disease epidemiology. Our results show that confidence intervals based on bootstrap and jackknife methods outperform intervals based on asymptotic theory. The best-performing variants of the computer-intensive confidence intervals are indicated for different situations.
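
The comparison the abstract describes can be sketched as follows: an adjusted attributable risk obtained by standardizing over a confounder, with a percentile bootstrap confidence interval in place of a delta-method interval. The cohort, the single binary confounder, and all probabilities below are invented for illustration; the paper's jackknife variants and real cardiovascular data are not reproduced.

    import numpy as np

    rng = np.random.default_rng(1)

    # Simulated cohort (illustrative only): binary exposure E, binary
    # confounder Z, disease D with risk depending on both.
    n = 5000
    Z = rng.binomial(1, 0.4, n)
    E = rng.binomial(1, 0.2 + 0.3 * Z)
    D = rng.binomial(1, 0.05 + 0.10 * E + 0.05 * Z)

    def adjusted_ar(E, Z, D):
        """Adjusted attributable risk by standardization over Z:
        AR = (P(D) - sum_z P(D | E=0, Z=z) P(Z=z)) / P(D)."""
        p_d = D.mean()
        p_d_unexposed = 0.0
        for z in np.unique(Z):
            mask = (E == 0) & (Z == z)
            p_d_unexposed += D[mask].mean() * (Z == z).mean()
        return (p_d - p_d_unexposed) / p_d

    ar_hat = adjusted_ar(E, Z, D)

    # Nonparametric bootstrap confidence interval (percentile method).
    boot = []
    for _ in range(2000):
        idx = rng.integers(0, n, n)
        boot.append(adjusted_ar(E[idx], Z[idx], D[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"adjusted AR = {ar_hat:.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")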

3.
As materials intended to come into contact with food, food contact materials (FCMs), including plastics, paper, and inks, can transfer their constituents to food under normal or foreseeable use, including direct or indirect food contact. The safety of FCMs in the EU is evaluated by the European Food Safety Authority (EFSA) using risk assessment rules. Results of independent, health-based chemical risk assessments are crucial for the decision-making process that authorizes the use of substances in FCMs. However, the risk assessment approach used in the EU has several shortcomings that need to be addressed in order to protect consumer health from exposure arising from FCMs. This article presents meta-analysis as a useful tool in chronic risk assessment for substances migrating from FCMs. Meta-analysis can be used to review and summarize research on FCM safety, providing a more accurate assessment of the impact of exposure with increased statistical power and thus more reliable data for risk assessment. The article explains a common methodology for conducting such an analysis, based on the meta-analysis of the dose-effect relationship of cadmium that EFSA performed for its benchmark dose evaluations.
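
For readers unfamiliar with the pooling step, the following minimal sketch applies a standard DerSimonian-Laird random-effects meta-analysis to a handful of hypothetical study-level dose-effect estimates. The effect sizes and standard errors are invented placeholders, not EFSA's cadmium data, and the benchmark-dose modelling that would follow the pooled estimate is omitted.

    import numpy as np

    # Hypothetical study-level dose-effect estimates (e.g., slopes of a
    # biomarker response per unit cadmium exposure) with standard errors.
    # Values are invented for illustration.
    effects = np.array([0.12, 0.08, 0.15, 0.10, 0.05])
    se      = np.array([0.03, 0.04, 0.05, 0.02, 0.03])

    # DerSimonian-Laird random-effects pooling.
    w_fixed = 1.0 / se**2
    mu_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)
    Q = np.sum(w_fixed * (effects - mu_fixed) ** 2)        # heterogeneity statistic
    df = effects.size - 1
    c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
    tau2 = max(0.0, (Q - df) / c)                          # between-study variance
    w_rand = 1.0 / (se**2 + tau2)
    mu_rand = np.sum(w_rand * effects) / np.sum(w_rand)
    se_rand = np.sqrt(1.0 / np.sum(w_rand))
    print(f"pooled effect = {mu_rand:.3f} +/- {1.96 * se_rand:.3f} (tau^2 = {tau2:.4f})")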

4.
Humans experience chronic, cumulative, trace-level exposure to mixtures of volatile, semi-volatile, and non-volatile polycyclic aromatic hydrocarbons (PAHs) present in the environment as by-products of combustion processes. Certain PAHs are known or suspected human carcinogens, and so we have developed methodology for measuring their circulating (blood-borne) concentrations as a tool to assess internal dose and health risk. We use liquid/liquid extraction and gas chromatography–mass spectrometry and present analytical parameters including dynamic range (0–250 ng/ml), linearity (>0.99 for all compounds), and instrument sensitivity (range 2–22 pg/ml) for a series of 22 PAHs with 2–6 rings. The method is shown to be sufficiently sensitive for estimating baseline PAH levels (typical median range 1–1000 pg/ml) in groups of normal control subjects using 1-ml aliquots of human plasma, but we note that some individuals have very low background concentrations of 5- and 6-ring compounds that fall below robust quantitation levels.
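
A small worked example helps connect the quoted dynamic range, linearity, and sensitivity figures to how a plasma concentration is actually read off a calibration curve. The calibration points, noise level, quantitation rule, and "unknown" response below are all invented for illustration; they are not the paper's data.

    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical calibration of a single PAH by GC-MS: spiked plasma
    # concentrations (ng/ml, spanning a 0-250 ng/ml dynamic range) versus
    # instrument response (peak-area ratio to an internal standard).
    conc = np.array([0.0, 0.5, 1.0, 5.0, 10.0, 50.0, 100.0, 250.0])   # ng/ml
    resp = 0.004 * conc + rng.normal(0.0, 0.002, conc.size)           # simulated response

    slope, intercept = np.polyfit(conc, resp, 1)
    r = np.corrcoef(conc, resp)[0, 1]
    print(f"calibration: slope = {slope:.5f}, r^2 = {r**2:.4f}")      # linearity check

    # Back-calculate an unknown sample and flag it against a crude quantitation
    # limit, taken here as 10x the residual standard deviation of the fit.
    resid_sd = np.std(resp - (slope * conc + intercept))
    loq = 10 * resid_sd / slope                                       # ng/ml
    measured_response = 0.0009
    estimate = (measured_response - intercept) / slope
    status = "below LOQ" if estimate < loq else "quantifiable"
    print(f"estimated concentration = {estimate * 1000:.0f} pg/ml ({status})")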

5.
Yu ZF, Catalano PJ. Biometrics 2005, 61(3): 757-766.
The neurotoxic effects of chemical agents are often investigated in controlled studies on rodents, with multiple binary and continuous endpoints routinely collected. One goal is to conduct quantitative risk assessment to determine safe dose levels. Such studies face two major challenges for continuous outcomes. First, characterizing risk and defining a benchmark dose are difficult. Risk is clearly definable in quantal settings as the presence or absence of an adverse binary event; finding a similar probability scale for continuous outcomes is less clear. Often, an adverse event is defined for continuous outcomes as any value below a specified cutoff level in a distribution assumed normal or log-normal. Second, while continuous outcomes are traditionally analyzed separately in such studies, recent literature advocates using multiple outcomes jointly to assess risk. We propose a method for modeling and quantitative risk assessment for bivariate continuous outcomes that addresses both difficulties by extending existing percentile regression methods. The model is likelihood based; it allows separate dose-response models for each outcome while accounting for the bivariate correlation and an overall characterization of risk. The approach to estimating a benchmark dose is analogous to that for quantal data, without the need to specify arbitrary cutoff values. We illustrate our methods with data from a neurotoxicity study of triethyl tin exposure in rats.
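
The idea of defining risk for a continuous outcome relative to the control distribution, rather than an arbitrary fixed cutoff, can be sketched in the univariate case as follows; the paper's actual contribution extends this to correlated bivariate outcomes. The normal dose-response model, its parameter values, and the 5th-percentile adverse region below are assumptions made only for the illustration.

    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import brentq

    # Univariate analog of percentile-based benchmark-dose estimation for a
    # continuous outcome: Y | dose d ~ Normal(beta0 + beta1*d, sigma), with
    # lower responses being adverse. All values are invented.
    beta0, beta1, sigma = 100.0, -2.5, 8.0

    # "Adverse" is defined relative to the control distribution rather than a
    # fixed cutoff: here, falling below the control 5th percentile.
    y_cut = norm.ppf(0.05, loc=beta0, scale=sigma)

    def risk(d):
        """P(Y < y_cut) at dose d."""
        return norm.cdf(y_cut, loc=beta0 + beta1 * d, scale=sigma)

    def extra_risk(d):
        p0 = risk(0.0)
        return (risk(d) - p0) / (1.0 - p0)

    bmr = 0.10  # benchmark response
    bmd = brentq(lambda d: extra_risk(d) - bmr, 0.0, 50.0)
    print(f"BMD at BMR = {bmr:.0%}: {bmd:.2f} dose units")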

6.
Data requirements are not harmonized globally for the regulation of food and feed derived from stacked genetically modified (GM) events, produced by combining individual GM events through conventional breeding. The data required by some regulatory agencies have increased despite the absence of substantiated adverse effects on animals or humans from the consumption of GM crops. Data from studies conducted over a 15-year period for several stacked GM event maize (Zea mays L.) products (Bt11 × GA21, Bt11 × MIR604, MIR604 × GA21, Bt11 × MIR604 × GA21, Bt11 × MIR162 × GA21 and Bt11 × MIR604 × MIR162 × GA21), together with their component single events, are presented. These data provide evidence that no substantial changes in composition, protein expression or insert stability have occurred after combining the single events through conventional breeding. An alternative food and feed risk assessment strategy for stacked GM events is suggested based on a problem formulation approach that utilizes (i) the outcome of the single event risk assessments, and (ii) the potential for interactions in the stack, based on an understanding of the mode of action of the transgenes and their products.

7.
Benchmark analysis is a widely used tool in biomedical and environmental risk assessment. Therein, estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a prespecified benchmark response (BMR) is well understood for the case of an adverse response to a single stimulus. For cases where two agents are studied in tandem, however, the benchmark approach is far less developed. This paper demonstrates how the benchmark modeling paradigm can be expanded from the single-agent setting to joint-action, two-agent studies. The focus is on continuous response outcomes. Extending the single-exposure setting, representations of risk are based on a joint-action dose-response model involving both agents. Based on such a model, the concept of a benchmark profile, a two-dimensional analog of the single-dose BMD at which both agents achieve the specified BMR, is defined for use in quantitative risk characterization and assessment.
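
To make the benchmark-profile idea concrete, the sketch below traces, for a grid of doses of one agent, the dose of the second agent at which a joint-action model reaches a 10% extra risk. The linear-with-interaction mean model, the normal error, and every parameter value are invented for the example and are not the dose-response form used in the paper.

    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import brentq

    # Illustrative joint-action model for a continuous outcome: mean response
    # linear in both doses with an interaction term, normal errors.
    b0, b1, b2, b12, sigma = 50.0, -1.2, -0.8, -0.05, 5.0

    def mean_resp(d1, d2):
        return b0 + b1 * d1 + b2 * d2 + b12 * d1 * d2

    y_cut = norm.ppf(0.05, loc=b0, scale=sigma)   # adverse: below control 5th percentile

    def extra_risk(d1, d2):
        p = norm.cdf(y_cut, loc=mean_resp(d1, d2), scale=sigma)
        p0 = norm.cdf(y_cut, loc=b0, scale=sigma)
        return (p - p0) / (1.0 - p0)

    # Benchmark profile: for each dose of agent 1, find the dose of agent 2 at
    # which the combination reaches the benchmark response (BMR = 10% extra risk).
    bmr = 0.10
    for d1 in [0.0, 1.0, 2.0, 3.0]:
        if extra_risk(d1, 0.0) >= bmr:
            print(f"d1 = {d1:.1f}: BMR already reached with no agent 2")
            continue
        d2 = brentq(lambda x: extra_risk(d1, x) - bmr, 0.0, 100.0)
        print(f"d1 = {d1:.1f}  ->  d2 = {d2:.2f} on the benchmark profile")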

8.
9.
In risk assessment, it is often desired to make inferences on the low dose levels at which a specific benchmark risk is attained. Applications of simultaneous hyperbolic confidence bands for low-dose risk estimation with quantal data under different dose-response models (multistage, Abbott-adjusted Weibull, and Abbott-adjusted log-logistic models) have appeared in the literature. The use of simultaneous three-segment bands under the multistage model has also been proposed recently. In this article, we present explicit formulas for constructing asymptotic one-sided simultaneous hyperbolic and three-segment bands for the simple log-logistic regression model. We use the simultaneous construction to estimate upper hyperbolic and three-segment confidence bands on extra risk and to obtain lower limits on the benchmark dose by inverting the upper bands on risk under the Abbott-adjusted log-logistic model. Monte Carlo simulations evaluate the characteristics of the simultaneous limits. An example is given to illustrate the use of the proposed methods and to compare the two types of simultaneous limits at very low dose levels.
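
The extra-risk and inversion steps can be illustrated with a simple log-logistic fit to invented quantal data, as in the sketch below. Note that the lower limit computed here is only a crude pointwise parametric-bootstrap bound; it stands in for, and is not equivalent to, the simultaneous hyperbolic and three-segment bands derived in the article.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit, logit

    # Simple log-logistic quantal model, R(d) = expit(b0 + b1*log d), fitted to
    # invented dose-response data (positive doses only, no background term).
    dose   = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
    n_per  = np.array([50, 50, 50, 50, 50])
    events = np.array([2, 5, 11, 22, 38])

    def negloglik(theta, events, n_per):
        b0, b1 = theta
        p = expit(b0 + b1 * np.log(dose))
        p = np.clip(p, 1e-10, 1 - 1e-10)
        return -np.sum(events * np.log(p) + (n_per - events) * np.log(1 - p))

    def fit_bmd(events, n_per, bmr=0.10):
        res = minimize(negloglik, x0=np.array([0.0, 1.0]), args=(events, n_per))
        b0, b1 = res.x
        # With no background response the extra risk equals R(d), so the BMD
        # solves b0 + b1*log(BMD) = logit(BMR).
        return np.exp((logit(bmr) - b0) / b1)

    bmd_hat = fit_bmd(events, n_per)

    # Parametric bootstrap lower limit (pointwise, NOT the simultaneous bands).
    rng = np.random.default_rng(3)
    res0 = minimize(negloglik, x0=np.array([0.0, 1.0]), args=(events, n_per))
    p_hat = expit(res0.x[0] + res0.x[1] * np.log(dose))
    boot = [fit_bmd(rng.binomial(n_per, p_hat), n_per) for _ in range(500)]
    bmdl = np.percentile(boot, 5)
    print(f"BMD = {bmd_hat:.3f}, 95% lower limit (pointwise bootstrap) = {bmdl:.3f}")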

10.
This commentary focuses on the potential added value of, and need for, (sub)chronic testing of whole genetically modified (GM) foods in rodents to assess their safety. Such routine testing should not be required since, due to apparent weaknesses in the approach, it does not add to the current risk assessment of GM foods. Moreover, the demand for routine testing using animals conflicts with the European Union (EU) Commission's efforts to reduce animal experimentation. Regulating agencies in the EU are invited to respect the sound scientific principles applied to the risk assessment of foods derived from GM plants and not to interfere in the risk assessment by introducing extra requirements based on pseudo-scientific or political considerations.
