Similar Literature (20 results)
1.
Yu ZF, Catalano PJ. Biometrics 2005, 61(3):757–766
The neurotoxic effects of chemical agents are often investigated in controlled studies on rodents, with multiple binary and continuous endpoints routinely collected. One goal is to conduct quantitative risk assessment to determine safe dose levels. Such studies face two major challenges for continuous outcomes. First, characterizing risk and defining a benchmark dose are difficult. In quantal settings, risk is clearly definable as the presence or absence of an adverse binary event; finding a similar probability scale for continuous outcomes is less clear. Often, an adverse event is defined for continuous outcomes as any value below a specified cutoff level in a distribution assumed normal or log normal. Second, while continuous outcomes are traditionally analyzed separately in such studies, recent literature advocates also using multiple outcomes to assess risk. We propose a method for modeling and quantitative risk assessment for bivariate continuous outcomes that addresses both difficulties by extending existing percentile regression methods. The model is likelihood based; it allows separate dose-response models for each outcome while accounting for the bivariate correlation and an overall characterization of risk. The approach to estimation of a benchmark dose is analogous to that for quantal data, without the need to specify arbitrary cutoff values. We illustrate our methods with data from a neurotoxicity study of triethyltin exposure in rats.
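
For context, the conventional cutoff-based definition of risk that this approach seeks to avoid can be written, for an outcome Y assumed normal with dose-dependent mean μ(d), constant σ, and a fixed cutoff c (generic notation, not the paper's), as:

```latex
R(d) = P(Y < c \mid d) = \Phi\!\left(\frac{c - \mu(d)}{\sigma}\right),
\qquad
\text{extra risk: } \frac{R(d) - R(0)}{1 - R(0)} .
```

The percentile-regression extension is what lets the benchmark dose be estimated without fixing c in advance.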

2.
In risk assessment, it is often desired to make inferences on the low dose levels at which a specific benchmark risk is attained. Applications of simultaneous hyperbolic confidence bands for low-dose risk estimation with quantal data under different dose-response models (multistage, Abbott-adjusted Weibull, and Abbott-adjusted log-logistic models) have appeared in the literature. The use of simultaneous three-segment bands under the multistage model has also been proposed recently. In this article, we present explicit formulas for constructing asymptotic one-sided simultaneous hyperbolic and three-segment bands for the simple log-logistic regression model. We use the simultaneous construction to estimate upper hyperbolic and three-segment confidence bands on extra risk and to obtain lower limits on the benchmark dose by inverting the upper bands on risk under the Abbott-adjusted log-logistic model. Monte Carlo simulations evaluate the characteristics of the simultaneous limits. An example is given to illustrate the use of the proposed methods and to compare the two types of simultaneous limits at very low dose levels.
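
For orientation, the simple log-logistic model and the generic shape of a one-sided upper hyperbolic band on its linear predictor can be sketched as follows (illustrative notation only; the article supplies the exact simultaneous critical values):

```latex
R(d) = \frac{1}{1 + \exp\{-(\beta_0 + \beta_1 \log d)\}},
\qquad
U(x) = \hat\beta_0 + \hat\beta_1 x
+ c_\alpha \sqrt{\widehat{\operatorname{Var}}\big(\hat\beta_0 + \hat\beta_1 x\big)}
\quad \text{for all } x = \log d .
```

Inverting an upper band on extra risk at the chosen benchmark response then yields a simultaneous lower limit on the benchmark dose.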

3.
Benchmark dose calculation from epidemiological data
A threshold for dose-dependent toxicity is crucial for standards setting but may not be possible to specify from empirical studies. Crump (1984) instead proposed calculating the lower statistical confidence bound of the benchmark dose, which he defined as the dose that causes a small excess risk. This concept has several advantages and has been adopted by regulatory agencies for establishing safe exposure limits for toxic substances such as mercury. We have examined the validity of this method as applied to an epidemiological study of continuous response data associated with mercury exposure. For models that are linear in the parameters, we derived an approximate expression for the lower confidence bound of the benchmark dose. We find that the benchmark calculations are highly dependent on the choice of the dose-effect function and the definition of the benchmark dose. We therefore recommend that several sets of biologically relevant default settings be used to illustrate the effect on the benchmark results and to stimulate research that will guide an a priori choice of proper default settings.
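
In its standard quantal form (shown here for orientation; the paper adapts the idea to continuous responses), Crump's construction is:

```latex
\mathrm{ER}(d) = \frac{P(d) - P(0)}{1 - P(0)}, \qquad
\mathrm{BMD}: \ \mathrm{ER}(\mathrm{BMD}) = \mathrm{BMR}, \qquad
\mathrm{BMDL} = \text{lower } 100(1-\alpha)\% \text{ confidence bound on } \mathrm{BMD},
```

where the benchmark response (BMR) is a small excess risk such as 0.05 or 0.10.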

4.
Benchmark analysis is a widely used tool in biomedical and environmental risk assessment. Therein, estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a prespecified benchmark response (BMR) is well understood for the case of an adverse response to a single stimulus. For cases where two agents are studied in tandem, however, the benchmark approach is far less developed. This paper demonstrates how the benchmark modeling paradigm can be expanded from the single-agent setting to joint-action, two-agent studies. Focus is on continuous response outcomes. Extending the single-exposure setting, representations of risk are based on a joint-action dose-response model involving both agents. Based on such a model, the concept of a benchmark profile—a two-dimensional analog of the single-dose BMD at which both agents achieve the specified BMR—is defined for use in quantitative risk characterization and assessment.
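
Schematically (our notation, not the paper's), the benchmark profile is the level curve of the joint dose-response surface at which the chosen risk measure equals the BMR:

```latex
\mathrm{BMP} = \left\{ (d_1, d_2) : \ \mathrm{ER}(d_1, d_2) = \mathrm{BMR} \right\},
\qquad
\mathrm{ER}(d_1, d_2) = \frac{R(d_1, d_2) - R(0, 0)}{1 - R(0, 0)} ,
```

so that every dose pair on the curve, rather than a single scalar dose, attains the specified benchmark response.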

5.
Benchmark analysis is a widely used tool in public health risk analysis. Therein, estimation of minimum exposure levels, called Benchmark Doses (BMDs), that induce a prespecified Benchmark Response (BMR) is well understood for the case of an adverse response to a single stimulus. For cases where two agents are studied in tandem, however, the benchmark approach is far less developed. This article demonstrates how the benchmark modeling paradigm can be expanded from the single-dose setting to joint-action, two-agent studies. Focus is on response outcomes expressed as proportions. Extending the single-exposure setting, representations of risk are based on a joint-action dose-response model involving both agents. Based on such a model, the concept of a benchmark profile (BMP) – a two-dimensional analog of the single-dose BMD at which both agents achieve the specified BMR – is defined for use in quantitative risk characterization and assessment. The resulting, joint, low-dose guidelines can improve public health planning and risk regulation when dealing with low-level exposures to combinations of hazardous agents.

6.
Motivated by a clinical prediction problem, a simulation study was performed to compare different approaches for building risk prediction models. Robust prediction models for hospital survival in patients with acute heart failure were to be derived from three highly correlated blood parameters measured up to four times, with predictive ability having explicit priority over interpretability. Methods that relied only on the original predictors were compared with methods using an expanded predictor space including transformations and interactions. Predictors were simulated as transformations and combinations of multivariate normal variables, which were fitted to the partly skewed and bimodally distributed original data in such a way that the simulated data mimicked the original covariate structure. Different penalized versions of logistic regression as well as random forests and generalized additive models were investigated, using classical logistic regression as a benchmark. Their performance was assessed based on measures of predictive accuracy, model discrimination, and model calibration. Three different scenarios using different subsets of the original data with different numbers of observations and events per variable were investigated. In the investigated setting, where a risk prediction model should be based on a small set of highly correlated and interconnected predictors, Elastic Net and Ridge logistic regression showed good performance compared to their competitors, while other methods did not lead to substantial improvements or even performed worse than standard logistic regression. Our work demonstrates how simulation studies that mimic relevant features of a specific data set can support the choice of a good modeling strategy.
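
A minimal sketch of the kind of comparison described above, using scikit-learn; the simulated predictors, sample size, and tuning grids are illustrative stand-ins, not the study's data or settings.

```python
# Sketch: plain logistic regression as benchmark vs. ridge and elastic net,
# fitted to three highly correlated simulated predictors (stand-ins for the
# blood parameters). All names and settings here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=n)  # shared latent factor induces high correlation
X = np.column_stack([z + 0.3 * rng.normal(size=n) for _ in range(3)])
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ np.array([1.0, 0.5, -0.5])))))

models = {
    "standard":    LogisticRegression(penalty=None),
    "ridge":       LogisticRegressionCV(penalty="l2", Cs=10, cv=5),
    "elastic net": LogisticRegressionCV(penalty="elasticnet", solver="saga",
                                        l1_ratios=[0.2, 0.5, 0.8], Cs=10,
                                        cv=5, max_iter=5000),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean cross-validated AUC = {auc:.3f}")
```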

7.
Phosgene has been a long-term subject of toxicological research due to its widespread use, high toxicity, and status as a model of chemically induced lung injury. To take advantage of the abundant data set for the acute inhalation toxicity of phosgene, methods for exposure-response analysis that use more data than the traditional no-observed-adverse-effect-level approach were used to perform an exposure-response assessment for phosgene. Categorical regression is particularly useful for acute exposures because it can combine studies of various exposure durations, and thus provide estimates of effect severity for a range of both exposure concentrations and durations. Results from the categorical regression approach were compared to those from parametric curve-fitting models (i.e., benchmark concentration models) that make use of information from an entire dose-response curve, but only for one exposure duration. While categorical regression analysis provided results that were comparable to benchmark concentration results, it improves on that technique by accounting for the effects of both exposure concentration and duration on response. The other major advantage afforded by categorical regression is the ability to combine studies, allowing the quantitative use of a larger data set, which increases confidence in the final result.
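
A hedged sketch of the core idea, ordinal (categorical) regression of severity scores on both log concentration and log duration, using statsmodels; all data below are simulated placeholders, not the phosgene data set.

```python
# Sketch: categorical regression pooling exposures of different durations.
# Severity categories, coefficients, and cutpoints are invented.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(7)
n = 120
log_conc = rng.uniform(np.log(0.1), np.log(10.0), n)   # ppm, hypothetical
log_dur = rng.uniform(np.log(10), np.log(240), n)      # minutes, hypothetical
latent = 1.2 * log_conc + 0.8 * log_dur + rng.logistic(size=n)
severity = pd.Series(pd.cut(latent, bins=[-np.inf, 3.0, 5.0, np.inf],
                            labels=[0, 1, 2]))          # ordered severity score

X = pd.DataFrame({"log_conc": log_conc, "log_dur": log_dur})
res = OrderedModel(severity, X, distr="logit").fit(method="bfgs", disp=False)
print(res.summary())
```

Because both concentration and duration enter as predictors, one fitted model yields severity estimates across the whole concentration-by-duration plane, which is the advantage over a single-duration benchmark concentration fit.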

8.
Researchers usually estimate the benchmark dose (BMD) for dichotomous experimental data using a binomial model with a single response function. Several forms of response function have been proposed for fitting dose-response models to estimate the BMD and the corresponding benchmark dose lower bound (BMDL). However, if the assumed response function is not correct, the estimated BMD and BMDL from the fitted model may not be accurate. To account for model uncertainty, model averaging (MA) methods have been proposed that estimate the BMD by averaging over a model space containing a finite number of standard models. Usual model averaging focuses on a pre-specified list of parametric models, leading to pitfalls when none of the models in the list is correct. Here, an alternative is proposed that augments an initial list of parametric models with an infinite number of additional models having varying response functions, to estimate the BMD for dichotomous response data. In addition, different methods for estimating the BMDL based on the family of response functions are derived. The proposed approach is compared with MA in a simulation study and applied to a real dataset. Simulation studies are also conducted to compare the four methods of estimating the BMDL.
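
A minimal sketch of conventional finite model averaging, the baseline the article improves on: BMDs from each candidate model are combined with AIC weights. The dose groups, counts, and two-model space below are invented for illustration.

```python
# Sketch: AIC-weighted model averaging of the BMD over two candidate
# dose-response models (extra-risk definition of the BMD).
import numpy as np
from scipy.optimize import brentq, minimize
from scipy.special import expit
from scipy.stats import binom

dose = np.array([0.0, 0.25, 0.5, 1.0, 2.0])
n = np.full(5, 50)
x = np.array([2, 4, 8, 15, 30])   # responders per group (made-up counts)
BMR = 0.10                        # benchmark response on the extra-risk scale

# Candidate response functions, parameterized for unconstrained optimization.
models = {
    "logistic":       lambda d, t: expit(t[0] + np.exp(t[1]) * d),
    "quantal-linear": lambda d, t: expit(t[0])
                      + (1 - expit(t[0])) * (1 - np.exp(-np.exp(t[1]) * d)),
}

def neg_loglik(t, f):
    p = np.clip(f(dose, t), 1e-10, 1 - 1e-10)
    return -binom.logpmf(x, n, p).sum()

def extra_risk(f, t, d):
    p0 = f(0.0, t)
    return (f(d, t) - p0) / (1 - p0)

aics, bmds = [], []
for f in models.values():
    fit = minimize(neg_loglik, x0=[-2.0, 0.0], args=(f,), method="Nelder-Mead")
    aics.append(2 * len(fit.x) + 2 * fit.fun)
    bmds.append(brentq(lambda d: extra_risk(f, fit.x, d) - BMR, 1e-6, 10.0))

aics = np.array(aics)
w = np.exp(-0.5 * (aics - aics.min()))
w /= w.sum()                              # Akaike weights
print("weights:", dict(zip(models, w.round(3))))
print("model-averaged BMD:", float(np.sum(w * np.array(bmds))))
```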

9.
A primary objective in quantitative risk or safety assessment is characterization of the severity and likelihood of an adverse effect caused by a chemical toxin or pharmaceutical agent. In many cases data are not available at low doses or low exposures to the agent, and inferences at those doses must be based on the high-dose data. A modern method for making low-dose inferences is known as benchmark analysis, where attention centers on the dose at which a fixed benchmark level of risk is achieved. Both upper confidence limits on the risk and lower confidence limits on the "benchmark dose" are of interest. In practice, a number of possible benchmark risks may be under study; if so, corrections must be applied to adjust the limits for multiplicity. In this short note, we discuss approaches for doing so with quantal response data.
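
One simple correction consistent with this setting (though not necessarily the one the note develops) is Bonferroni: with m benchmark risks under simultaneous study, each one-sided limit is computed at level 1 − α/m.

```python
# Sketch: Bonferroni-adjusted one-sided critical values for m simultaneous
# benchmark-risk limits; m and alpha are illustrative.
from scipy.stats import norm

alpha, m = 0.05, 4
z_single = norm.ppf(1 - alpha)        # 1.645, unadjusted one-sided value
z_adjusted = norm.ppf(1 - alpha / m)  # 2.241, adjusted for m simultaneous limits
print(f"unadjusted z = {z_single:.3f}, Bonferroni-adjusted z = {z_adjusted:.3f}")
```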

10.
In this paper, a method for quantitative risk assessment in epidemiological studies investigating threshold effects is proposed. The simple logistic regression model is used to describe the association between a binary response variable and a continuous risk factor. By defining acceptable levels for the absolute risk and the risk gradient, the corresponding benchmark values of the risk factor can be calculated as nonlinear functions of the logistic regression coefficients. Standard errors and confidence intervals of the benchmark values are derived by means of the multivariate delta method. The proposed approach is compared with the threshold model of Ulm (1991) for assessing threshold values in epidemiological studies.
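
A minimal sketch of the calculation for the absolute-risk criterion: the benchmark value at which fitted risk reaches an acceptable level p*, with a delta-method standard error. The data, the target level, and variable names are illustrative, not from the paper.

```python
# Sketch: benchmark value of a continuous risk factor from simple logistic
# regression, with a delta-method standard error. All inputs are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
xrisk = rng.uniform(0, 10, 400)                 # continuous risk factor
y = rng.binomial(1, 1 / (1 + np.exp(-(-3.0 + 0.4 * xrisk))))

X = sm.add_constant(xrisk)
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
b0, b1 = fit.params
V = fit.cov_params()

p_star = 0.10                                   # acceptable absolute risk
logit_p = np.log(p_star / (1 - p_star))
bench = (logit_p - b0) / b1                     # benchmark value: logit(p*) solved for x

# Delta method: gradient of the benchmark value w.r.t. (b0, b1).
grad = np.array([-1 / b1, -(logit_p - b0) / b1 ** 2])
se = np.sqrt(grad @ V @ grad)
print(f"benchmark value {bench:.2f}, "
      f"95% CI ({bench - 1.96 * se:.2f}, {bench + 1.96 * se:.2f})")
```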

11.
Large-scale surveys, such as national forest inventories and vegetation monitoring programs, usually have complex sampling designs that include geographical stratification and units organized in clusters. When models are developed using data from such programs, a key question is whether or not to utilize design information when analyzing the relationship between a response variable and a set of covariates. Standard statistical regression methods often fail to account for complex sampling designs, which may lead to severely biased estimators of model coefficients. Furthermore, ignoring that data are spatially correlated within clusters may underestimate the standard errors of regression coefficient estimates, with a risk of drawing wrong conclusions. We first review general approaches that account for complex sampling designs, e.g. methods using probability weighting, and stress the need to explore the effects of the sampling design when applying logistic regression models. We then use Monte Carlo simulation to compare the performance of the standard logistic regression model with two approaches to modeling correlated binary responses, i.e. cluster-specific and population-averaged logistic regression models. As an example, we analyze the occurrence of epiphytic hair lichens in the genus Bryoria, an indicator of forest ecosystem integrity. Based on data from the National Forest Inventory (NFI) for the period 1993–2014, we generated a data set on hair lichen occurrence on more than 100,000 Picea abies trees distributed throughout Sweden. The NFI data included ten covariates representing forest structure and climate variables potentially affecting lichen occurrence. Our analyses show the importance of taking complex sampling designs and correlated binary responses into account in logistic regression modeling to avoid the risk of obtaining notably biased parameter estimators and standard errors, and erroneous interpretations about the factors affecting, for example, hair lichen occurrence. We recommend comparisons of unweighted and weighted logistic regression analyses as an essential step in the development of models based on data from large-scale surveys.
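
A minimal sketch of the recommended unweighted-versus-weighted comparison, using statsmodels variance weights as a simple stand-in for full design-based estimation; the data, weights, and variable names are hypothetical, and proper design-based standard errors would require more (e.g., replicate weights or a sandwich estimator over clusters).

```python
# Sketch: compare unweighted and design-weighted logistic regression point
# estimates. Everything here is simulated; 'w' plays the role of
# inverse-inclusion-probability design weights.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "lichen": rng.binomial(1, 0.3, 200),     # occurrence on a tree (0/1)
    "stand_age": rng.uniform(20, 140, 200),  # years
    "w": rng.uniform(0.5, 3.0, 200),         # hypothetical design weights
})

unweighted = smf.glm("lichen ~ stand_age", data=df,
                     family=sm.families.Binomial()).fit()
# var_weights yields the weighted point estimates; its standard errors are
# naive, not design-based.
weighted = smf.glm("lichen ~ stand_age", data=df,
                   family=sm.families.Binomial(),
                   var_weights=df["w"]).fit()
print(pd.DataFrame({"unweighted": unweighted.params,
                    "weighted": weighted.params}))
```

A large gap between the two sets of coefficients is the warning sign the authors describe: the sampling design is informative and cannot safely be ignored.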

12.
The case-crossover design was introduced in epidemiology 15 years ago as a method for studying the effects of a risk factor on a health event using only cases. The idea is to compare a case's exposure immediately prior to or during the case-defining event with that same person's exposure at otherwise similar "reference" times. An alternative approach to the analysis of daily exposure and case-only data is time series analysis. Here, log-linear regression models express the expected total number of events on each day as a function of the exposure level and potential confounding variables. In time series analyses of air pollution, smooth functions of time and weather are the main confounders. Time series and case-crossover methods are often viewed as competing methods. In this paper, we show that case-crossover using conditional logistic regression is a special case of time series analysis when there is a common exposure such as in air pollution studies. This equivalence provides computational convenience for case-crossover analyses and a better understanding of time series models. Time series log-linear regression accounts for overdispersion of the Poisson variance, while case-crossover analyses typically do not. This equivalence also permits model checking for case-crossover data using standard log-linear model diagnostics.
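
A hedged sketch of the time-series side of the equivalence: a log-linear model for daily event counts regressed on a shared daily exposure and confounders, with the Poisson scale estimated from the Pearson chi-square to allow the overdispersion that a case-crossover conditional logistic analysis would typically not capture. All data below are simulated.

```python
# Sketch: quasi-Poisson time-series regression for daily counts and a common
# daily exposure; 'pm10' and 'temp' are invented stand-ins for pollution and
# weather series. Real analyses would replace the linear term in t with
# smooth functions of time (e.g., splines).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
t = np.arange(365)
df = pd.DataFrame({
    "t": t,
    "pm10": 20 + 10 * rng.random(365),
    "temp": 10 + 15 * np.sin(2 * np.pi * t / 365),
})
df["count"] = rng.poisson(np.exp(0.5 + 0.01 * df["pm10"] + 0.02 * df["temp"]))

fit = smf.glm("count ~ pm10 + temp + t", data=df,
              family=sm.families.Poisson()).fit(scale="X2")  # quasi-Poisson scale
print(fit.summary())
```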

13.
Data analytic methods for matched case-control studies
D Pregibon. Biometrics 1984, 40(3):639–651
The recent introduction of complex multivariate statistical models in matched case-control studies is a mixed blessing. Their use can lead to a better understanding of the way in which many variables contribute to the risk of disease. On the other hand, these powerful methods can obscure salient features in the data that might have been detected by other, less sophisticated methods. This shortcoming is due to a lack of support methodology for the routine use of these models. Satisfactory computation of estimated relative risks and their standard errors is not sufficient justification for the fitted model. Goodness of fit must be examined if inferences are to be trusted. This paper is concerned with the analysis of matched case-control studies with logistic models. Analogies of these models to linear regression models are emphasized. In particular, basic concepts such as analysis of variance, multiple correlation coefficient, one-degree-of-freedom tests, and residual analysis are discussed. The fairly new field of regression diagnostics is also introduced. All procedures are illustrated on a study of bladder cancer in males.

14.
The central challenge from the Precautionary Principle to statistical methodology is to help delineate (preferably quantitatively) the possibility that some exposure is hazardous, even in cases where this is not established beyond reasonable doubt. The classical approach to hypothesis testing is unhelpful, because lack of significance can be due either to uninformative data or to genuine lack of effect (the Type II error problem). Its inversion, bioequivalence testing, might sometimes be a model for the Precautionary Principle in its ability to 'prove the null hypothesis.' Current procedures for setting safe exposure levels are essentially derived from these classical statistical ideas, and we outline how uncertainties in the exposure and response measurements affect the No Observed Adverse Effect Level (NOAEL), the Benchmark approach, and the "Hockey Stick" model. A particular problem concerns model uncertainty: usually these procedures assume that the class of models describing dose/response is known with certainty; this assumption is however often violated, perhaps particularly often when epidemiological data form the source of the risk assessment, and regulatory authorities have occasionally resorted to some average based on competing models. The recent methodology of Bayesian model averaging might be a systematic version of this, but is this an arena for the Precautionary Principle to come into play?

15.
The Epidemiology Work Group at the Workshop on Future Research for Improving Risk Assessment Methods, Of Mice, Men, and Models, held August 16 to 18, 2000, at Snowmass Village, Aspen, Colorado, concluded that in order to improve the utility of epidemiologic studies for risk assessment, methodologic research is needed in the following areas: (1) aspects of epidemiologic study designs that affect dose-response estimation; (2) alternative methods for estimating dose in human studies; and (3) refined methods for dose-response modeling for epidemiologic data. Needed research in aspects of epidemiologic study design includes recognition and control of study biases, identification of susceptible subpopulations, choice of exposure metrics, and choice of epidemiologic risk parameters. Much of this research can be done with existing data. Research needed to improve determinants of dose in human studies includes additional individual-level data (e.g., diet, co-morbidity), development of more extensive human data for physiologically based pharmacokinetic (PBPK) dose modeling, tissue registries to increase the availability of tissue for studies of exposure/dose and susceptibility biomarkers, and biomarker data to assess exposures in humans and animals. Research needed on dose-response modeling of human studies includes more widespread application of flexible statistical methods (e.g., general additive models), development of methods to compensate for epidemiologic bias in dose-response models, improved biological models using human data, and evaluation of the benchmark dose using human data. There was consensus among the Work Group that, whereas most prior risk assessments have focused on cancer, there is a growing need for applications to other health outcomes. Developmental and reproductive effects, injuries, respiratory disease, and cardiovascular disease were identified as especially high priorities for research. It was also a consensus view that epidemiologists, industrial hygienists, and other scientists focusing on human data need to play a stronger role throughout the risk assessment process. Finally, the group agreed that there was a need to improve risk communication, particularly on the uncertainty inherent in risk assessments that use epidemiologic data.

16.
Human exposure to endocrine disrupters (EDs) is widespread and is considered to pose a growing threat to human health. Recent advances in molecular and genetic research and a better understanding of the mechanisms of blastic cell transformation have led to efforts to improve cancer risk assessment for populations exposed to this family of xenobiotics. In risk assessment, low-dose extrapolation of cancer incidence data from both experimental animals and epidemiology studies has been largely based on models assuming a linear correlation at low doses, despite evidence to the contrary. Another weakness of ED risk assessment is poor exposure data in ecological studies; these are frequently rough estimates derived from contaminated items of local food basket surveys. Polyhalogenated hydrocarbons are treated as examples. There is a growing sense of urgency to develop a biologically based dose-response model of cancer risk, integrating emerging data from molecular biology and epidemiology to provide more realistic inputs for risk assessors, the public, public health managers, and environmental administrators.

17.
This paper investigates image processing and pattern recognition techniques to estimate atmospheric visibility based on the visual content of images from off-the-shelf cameras. We propose a prediction model that first relates image contrast, measured through standard image processing techniques, to atmospheric transmission. This is then related to the most common measure of atmospheric visibility, the coefficient of light extinction. The regression model is learned using a training set of images and corresponding light extinction values measured using a transmissometer. The major contributions of this paper are twofold. First, we propose two predictive models that incorporate multiple scene regions into the estimation: regression trees and multivariate linear regression. Incorporating multiple regions is important since regions at different distances are effective for estimating light extinction under different visibility regimes. The second major contribution is a semi-supervised learning framework, which incorporates unlabeled training samples to improve the learned models. Leveraging unlabeled data for learning is important since in many applications it is easier to obtain observations than to label them. We evaluate our models using a dataset of images and ground-truth light extinction values from a visibility camera system in Phoenix, Arizona.
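
A minimal sketch of the multi-region idea: contrast features from several scene regions at different distances feed a regression tree that predicts the light-extinction coefficient. The exponential contrast-versus-extinction relation and all values below are illustrative, not the paper's pipeline.

```python
# Sketch: regression tree on per-region image contrast to predict light
# extinction; contrast is simulated to decay with extinction times distance.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(6)
n = 300
extinction = rng.uniform(0.05, 0.5, n)     # 1/km, target variable
distances = (1.0, 5.0, 20.0)               # km, one scene region per distance
contrast = np.column_stack(
    [np.exp(-extinction * d) + 0.02 * rng.normal(size=n) for d in distances]
)

tree = DecisionTreeRegressor(max_depth=4)
r2 = cross_val_score(tree, contrast, extinction, cv=5, scoring="r2").mean()
print(f"mean cross-validated R^2 = {r2:.3f}")
```

Near regions stay informative in poor visibility while distant regions discriminate best in clear conditions, which is why pooling regions helps across visibility regimes.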

18.
Ornithologists interested in the drivers of nest success and brood parasitism benefit from the development of new analytical approaches. One example is the development of so-called "log exposure" models for analyzing nest success. However, analyses of brood parasitism data have not kept pace with developments in nest success analyses. The standard approach uses logistic regression, which does not account for multiple parasitism events, nor does it prevent bias from using observed proportions of parasitized nests. Likewise, logistic regression analyses do not capture fine-scale temporal variation in parasitism. At first glance, it might be tempting to apply log exposure models to parasitism data, but the process of parasitism is inherently different from the process of nest predation. We modeled daily parasitism rate as a Poisson process, which allowed us to correct potential biases in parasitism rate. We were also able to use our estimated parasitism rate to model parasitism risk as the probability of one or more parasitism events. We applied this model to red-winged blackbird Agelaius phoeniceus nesting colonies subject to parasitism by brown-headed cowbirds Molothrus ater. Our approach allowed us to model parasitism using a wider range of covariates, especially functions of time. We found strong support for models combining temporal fluctuations in parasitism rate and nest-site characteristics. Similarly, we found that our annual predicted parasitism risk was lower on average than the risk estimated from observed parasitism levels. Our approach improves upon traditional logistic regression analyses and opens the door for more mechanistic modeling of the process of parasitism.
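
The key quantity implied above can be sketched directly: if daily parasitism events follow a Poisson process with daily rate λ_t, the risk of at least one parasitism event over the exposure period is 1 − exp(−Σ λ_t). The daily rates below are invented for illustration.

```python
# Sketch: parasitism risk as the probability of one or more events under a
# Poisson process with time-varying daily rates.
import numpy as np

daily_rate = np.array([0.02, 0.05, 0.08, 0.06, 0.03, 0.01])  # events per nest-day
risk = 1 - np.exp(-daily_rate.sum())
print(f"P(at least one parasitism event) = {risk:.3f}")
```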

19.
Many existing cohort studies initially designed to investigate disease risk as a function of environmental exposures have collected genomic data in recent years with the objective of testing for gene-environment interaction (G × E) effects. In environmental epidemiology, interest in G × E arises primarily after a significant effect of the environmental exposure has been documented. Cohort studies often collect rich exposure data; as a result, assessing G × E effects in the presence of multiple exposure markers further increases the burden of multiple testing, an issue already present in both genetic and environmental health studies. Latent variable (LV) models have been used in environmental epidemiology to reduce the dimensionality of the exposure data, gain power by reducing multiplicity issues via condensing exposure data, and avoid collinearity problems due to the presence of multiple correlated exposures. We extend the LV framework to characterize gene-environment interaction in the presence of multiple correlated exposures and genotype categories. Further, similar to what has been done in case-control G × E studies, we use the assumption of gene-environment (G-E) independence to boost the power of tests for interaction. The consequences of making this assumption, and the issue of how to explicitly model the G-E association, have not previously been investigated in LV models. We postulate a hierarchy of assumptions about the LV model regarding the different forms of G-E dependence and show that making such assumptions may influence inferential results on the G, E, and G × E parameters. We implement a class of shrinkage estimators to data-adaptively trade off between the most restrictive and the most flexible forms of the G-E dependence assumption, and note that such a class of compromise estimators can serve as a benchmark of model adequacy in LV models. We demonstrate the methods with an example from the Early Life Exposures in Mexico City to Neuro-Toxicants Study of lead exposure, iron metabolism genes, and birth weight.

20.
This paper demonstrates the advantages of sharing information about unknown features of covariates across multiple model components in various nonparametric regression problems, including multivariate, heteroscedastic, and semicontinuous responses. We present a methodology that allows information to be shared nonparametrically across various model components using Bayesian sum-of-trees models. Our simulation results demonstrate that sharing information across related model components is often very beneficial, particularly in sparse high-dimensional problems in which variable selection must be conducted. We illustrate our methodology by analyzing medical expenditure data from the Medical Expenditure Panel Survey (MEPS). To facilitate the Bayesian nonparametric regression analysis, we develop two novel models for analyzing the MEPS data using Bayesian additive regression trees: a heteroskedastic log-normal hurdle model with a "shrink-toward-homoskedasticity" prior, and a gamma hurdle model.
