Similar Documents
20 similar documents found.
1.
Li Y, Guolo A, Hoffman FO, Carroll RJ. Biometrics 2007, 63(4): 1226-1236
In radiation epidemiology, it is often necessary to use mathematical models in the absence of direct measurements of individual doses. When complex models are used as surrogates for direct measurements to estimate individual doses that occurred almost 50 years ago, dose estimates will be associated with considerable error, this error being a mixture of (a) classical measurement error due to individual data such as diet histories and (b) Berkson measurement error associated with various aspects of the dosimetry system. In the Nevada Test Site (NTS) Thyroid Disease Study, the Berkson measurement errors are correlated within strata. This article concerns the development of statistical methods for inference about the risk of thyroid disease as a function of radiation dose, methods that account for the complex error structure inherent in the problem. Bayesian methods using Markov chain Monte Carlo and Monte Carlo expectation-maximization methods are described, with both sharing a key Metropolis-Hastings step. Regression calibration is also considered, but we show that regression calibration does not use the correlation structure of the Berkson errors. Our methods are applied to the NTS study, where we find a strong dose-response relationship between dose and thyroiditis. We conclude that full consideration of mixtures of Berkson and classical uncertainties in reconstructed individual doses is important for quantifying the dose response and its credibility/confidence interval. Using regression calibration and expectation values for individual doses can lead to substantial underestimation of the excess relative risk per gray and its 95% confidence intervals.
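The classical/Berkson distinction this abstract turns on can be illustrated with a small simulation; the slope and error variances below are illustrative assumptions, not values from the NTS study. In a linear model, classical error in the measured dose attenuates a naive regression slope by the factor var(X)/(var(X)+var(U)), while pure Berkson error leaves the slope approximately unbiased:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
beta = 2.0                        # assumed true linear dose-response slope

# Classical error: observed W = X + U; regressing Y on W attenuates the slope
# by var(X)/(var(X)+var(U)) = 0.25/0.50 = 0.5 here.
x = rng.normal(1.0, 0.5, n)       # true doses
w = x + rng.normal(0.0, 0.5, n)   # doses measured with classical error
y = beta * x + rng.normal(0.0, 0.1, n)
slope_classical = np.cov(w, y)[0, 1] / np.var(w)

# Berkson error: the true dose X = W + U scatters around the assigned dose W;
# regressing Y on W remains (approximately) unbiased in a linear model.
w2 = rng.normal(1.0, 0.5, n)      # assigned (dosimetry-system) doses
x2 = w2 + rng.normal(0.0, 0.5, n)
y2 = beta * x2 + rng.normal(0.0, 0.1, n)
slope_berkson = np.cov(w2, y2)[0, 1] / np.var(w2)
```

With these variances the classical-error slope shrinks toward 1.0 while the Berkson-error slope stays near the true value 2.0, which is why a mixture of the two error types needs the more careful treatment the article develops.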

2.
The methodological peculiarities of the experimental construction of regression "dose-effect" relationships used for dose reconstruction are discussed. Computer simulation is applied to study the efficiency of different statistical procedures for plotting regression curves, as well as the dependence of errors in dose prediction on the volume of examined material and on the choice of doses for a calibration curve. The causes of the substantial variability among calibrations obtained by different teams of researchers are discussed, and a number of methodological recommendations are given for the statistical processing of cytogenetic data. The procedure of constructing a calibration dose dependence of the frequency of dicentrics, on the basis of experiments with in vitro gamma irradiation of lymphocytes from blood samples of 5 donors, is considered in detail. Expressions for the statistical errors occurring in dose reconstruction based on the frequency of aberrations were derived and checked by computer experiment.

3.
Generalized relative and absolute risk models are fitted to the latest Japanese atomic bomb survivor solid cancer and leukemia mortality data (through 2000), with the latest (DS02) dosimetry, by classical (regression calibration) and Bayesian techniques, taking account of errors in dose estimates and other uncertainties. Linear-quadratic and linear-quadratic-exponential models are fitted and used to assess risks for contemporary populations of China, Japan, Puerto Rico, the U.S., and the UK. Many of these models are the same as or very similar to models used in the UNSCEAR 2006 report. For a test dose of 0.1 Sv, the solid cancer mortality for a UK population using the generalized linear-quadratic relative risk model is estimated as 5.4% per Sv [90% Bayesian credible interval (BCI) 3.1, 8.0]. At 0.1 Sv, leukemia mortality for a UK population using the generalized linear-quadratic relative risk model is estimated as 0.50% per Sv (90% BCI 0.11, 0.97). Risk estimates varied little between populations; at 0.1 Sv the central estimates ranged from 3.7 to 5.4% per Sv for solid cancers and from 0.4 to 0.6% per Sv for leukemia. Analyses using regression calibration techniques yield central estimates of risk very similar to those from the Bayesian approach. The central estimates of population risk were similar for the generalized absolute risk model and the relative risk model. Linear-quadratic-exponential models predict lower risks (at least at low test doses) and appear to fit as well, although for other (theoretical) reasons we favor the simpler linear-quadratic models.

4.
Estimating data transformations in nonlinear mixed effects models (total citations: 1; self-citations: 0; citations by others: 1)
Oberg A, Davidian M. Biometrics 2000, 56(1): 65-72
A routine practice in the analysis of repeated measurement data is to represent individual responses by a mixed effects model on some transformed scale. For example, for pharmacokinetic, growth, and other data, both the response and the regression model are typically transformed to achieve approximate within-individual normality and constant variance on the new scale; however, the choice of transformation is often made subjectively or by default, with adoption of a standard choice such as the log. We propose a mixed effects framework based on the transform-both-sides model, where the transformation is represented by a monotone parametric function and is estimated from the data. For this model, we describe a practical fitting strategy based on approximation of the marginal likelihood. Inference is complicated by the fact that estimation of the transformation requires modification of the usual standard errors for estimators of fixed effects; however, we show that, under conditions relevant to common applications, this complication is asymptotically negligible, allowing straightforward implementation via standard software.
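A minimal single-subject sketch of the transform-both-sides idea (without the mixed-effects layer): both the response and an assumed exponential mean function are passed through a Box-Cox transformation whose parameter λ is estimated jointly with the regression parameters, by maximizing the profiled normal log-likelihood plus the Box-Cox Jacobian. The simulated errors are lognormal, for which the true transformation is the log (λ = 0); all numbers are made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def bc(y, lam):
    """Box-Cox transformation, with the log as the lam -> 0 limit."""
    return np.log(y) if abs(lam) < 1e-8 else (y**lam - 1.0) / lam

rng = np.random.default_rng(2)
x = np.linspace(0.5, 8.0, 120)
# Simulated concentration-time-like data with multiplicative (lognormal) error.
y = 10.0 * np.exp(-0.4 * x) * rng.lognormal(0.0, 0.15, x.size)

def negloglik(params):
    b0, b1, lam = params
    if b0 <= 0:
        return np.inf
    mu = b0 * np.exp(-b1 * x)           # mean function, transformed below
    r = bc(y, lam) - bc(mu, lam)        # residuals on the transformed scale
    n = y.size
    # Profiled normal log-likelihood plus the Jacobian term (lam-1)*sum(log y).
    return 0.5 * n * np.log(np.mean(r**2)) - (lam - 1.0) * np.sum(np.log(y))

fit = minimize(negloglik, x0=[8.0, 0.3, 0.5], method="Nelder-Mead",
               options={"maxiter": 5000})
b0_hat, b1_hat, lam_hat = fit.x
```

Because the error is multiplicative, the estimated λ should land near 0 (the log transform), recovering the "standard choice" from the data instead of assuming it.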

5.
In some clinical trials or clinical practice, the therapeutic agent is administered repeatedly, and doses are adjusted in each patient based on repeatedly measured continuous responses, to maintain the response levels in a target range. Because a lower dose tends to be selected for patients with a better outcome, simple summarizations may wrongly show a better outcome for the lower dose, producing an incorrect dose-response relationship. In this study, we consider the dose-response relationship under these situations. We show that maximum-likelihood estimates are consistent without modeling the dose-modification mechanisms when the selection of the dose as a time-dependent covariate is based only on observed, but not on unobserved, responses, and measurements are generated based on administered doses. We confirmed this property by performing simulation studies under several dose-modification mechanisms. We examined an autoregressive linear mixed effects model. The model represents profiles approaching each patient's asymptote when identical doses are repeatedly administered. The model takes into account the previous dose history and provides a dose-response relationship of the asymptote as a summary measure. We also examined a linear mixed effects model assuming all responses are measured at steady state. In the simulation studies, the estimates of both models were unbiased under dose modification based on observed responses, but biased under dose modification based on unobserved responses. In conclusion, the maximum-likelihood estimates of the dose-response relationship are consistent under dose modification based only on observed responses.

6.
This interlaboratory comparison validates the dicentric chromosome assay for assessing radiation dose in mass casualty accidents and identifies the advantages and limitations of an international biodosimetry network. The assay's validity and accuracy were determined among five laboratories following the International Organization for Standardization guidelines. Blood samples irradiated at the Armed Forces Radiobiology Research Institute were shipped to all laboratories, which constructed individual radiation calibration curves and assessed the dose to dose-blinded samples. Each laboratory constructed a dose-effect calibration curve for the yield of dicentrics for Co-60 gamma rays over the 0 to 5 Gy range, using the maximum likelihood linear-quadratic model Y = c + αD + βD². For all laboratories, the estimated coefficients of the fitted curves were within the 99.7% confidence intervals (CIs), but the observed dicentric yields differed. When each laboratory assessed radiation doses to four dose-blinded blood samples by comparing the observed dicentric yield with the laboratory's own calibration curve, the estimates were accurate in all laboratories at all doses. For all laboratories, actual doses were within the 99.75% CI for the assessed dose. Across the dose range, the error in the estimated doses, compared to the physical doses, ranged from 15% underestimation to 15% overestimation.
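The calibration-then-inversion workflow described above can be sketched by maximum likelihood under Poisson-distributed dicentric counts: fit Y = c + αD + βD² to counts at known doses, then invert the fitted curve at an observed yield to estimate the dose of a blinded sample. The dose points, cell numbers, and coefficients below are invented illustration values, not the interlaboratory data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical calibration experiment: dicentric yields at known doses (Gy),
# with an assumed true curve Y = 0.001 + 0.03*D + 0.06*D^2 per cell.
doses = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0])
cells = np.full(doses.shape, 2000)            # cells scored per dose point
true_yield = 0.001 + 0.03 * doses + 0.06 * doses**2
counts = rng.poisson(cells * true_yield)      # observed dicentric counts

def negloglik(params):
    c, a, b = params
    lam = cells * (c + a * doses + b * doses**2)  # expected counts
    if np.any(lam <= 0):
        return np.inf
    return -np.sum(counts * np.log(lam) - lam)    # Poisson kernel

fit = minimize(negloglik, x0=[0.001, 0.02, 0.05], method="Nelder-Mead",
               options={"maxiter": 2000})
c_hat, a_hat, b_hat = fit.x

def estimate_dose(observed_yield):
    # Invert Y = c + a*D + b*D^2, taking the positive root.
    return (-a_hat + np.sqrt(a_hat**2 + 4 * b_hat * (observed_yield - c_hat))) \
        / (2 * b_hat)

# Dose-blinded sample: 25 dicentrics in 500 scored cells -> yield 0.05/cell.
d_hat = estimate_dose(25 / 500)
```

Under the assumed true curve, a yield of 0.05 dicentrics per cell corresponds to roughly 0.7 Gy, so the inversion should return an estimate in that neighborhood.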

7.
We consider the statistical modeling and analysis of replicated multi-type point process data with covariates. Such data arise when heterogeneous subjects experience repeated events or failures which may be of several distinct types. The underlying processes are modeled as nonhomogeneous mixed Poisson processes with random (subject) and fixed (covariate) effects. The method of maximum likelihood is used to obtain estimates and standard errors of the failure rate parameters and regression coefficients. Score tests and likelihood ratio statistics are used for covariate selection. A graphical test of goodness of fit of the selected model is based on generalized residuals. Measures for determining the influence of an individual observation on the estimated regression coefficients and on the score test statistic are developed. An application is described to a large ongoing randomized controlled clinical trial for the efficacy of nutritional supplements of selenium for the prevention of two types of skin cancer.

8.
Generalized least squares regression with variance function estimation was used to derive the calibration function for measurement of methotrexate plasma concentration, and its results were compared with weighted least squares regression using the usual weight factors and with the ordinary least squares method. Over the calibration range of 0.05 to 100 microM, both heteroscedasticity and nonlinearity were present; therefore, ordinary least squares linear regression could produce large errors in the calculated methotrexate concentration. Generalized least squares regression with variance function estimation worked better than both weighted regression with the usual weight factors and ordinary least squares regression, and gave better estimates of methotrexate concentration.
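A sketch of the variance-function idea on simulated calibration data with constant coefficient of variation (sd ∝ mean, so var ∝ mean²): the variance exponent θ is estimated from replicate standard deviations, then used in one reweighted (generalized) least squares step. All concentrations and coefficients are illustrative assumptions, not the methotrexate assay values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical calibration standards over 0.05-100 uM, 5 replicates each,
# with noise whose standard deviation is proportional to the mean response.
conc = np.repeat([0.05, 0.1, 0.5, 1.0, 5.0, 10.0, 50.0, 100.0], 5)
true_intercept, true_slope = 0.01, 2.0
mean_signal = true_intercept + true_slope * conc
y = mean_signal + rng.normal(0.0, 0.05 * mean_signal)

X = np.column_stack([np.ones_like(conc), conc])

# Ordinary least squares: dominated by the noisy high-concentration standards.
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Variance function estimation: fit sd ~ mean^theta from replicate summaries.
levels = np.unique(conc)
rep_sd = np.array([y[conc == c].std(ddof=1) for c in levels])
rep_mean = np.array([y[conc == c].mean() for c in levels])
theta = np.polyfit(np.log(rep_mean), np.log(rep_sd), 1)[0]

# One generalized least squares step with weights 1/mu^(2*theta).
mu_hat = np.clip(X @ beta_ols, 1e-3, None)
sw = np.sqrt(mu_hat ** (-2.0 * theta))
beta_gls = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
```

Because sd ∝ mean here, θ should come out near 1, and the reweighted fit recovers the low end of the curve far better than equal-weight OLS, which is the point the abstract makes.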

9.
Thomas DC, Blettner M, Day NE. Biometrics 1992, 48(3): 781-794
A method is proposed for analysis of nested case-control studies that combines the matched comparison of covariate values between cases and controls and a comparison of the observed numbers of cases in the nesting cohort with expected numbers based on external rates and average relative risks estimated from the controls. The former comparison is based on the conditional likelihood for matched case-control studies and the latter on the unconditional likelihood for Poisson regression. It is shown that the two likelihoods are orthogonal and that their product is an estimator of the full survival likelihood that would have been obtained on the total cohort, had complete covariate data been available. Parameter estimation and significance tests follow in the usual way by maximizing this product likelihood. The method is illustrated using data on leukemia following irradiation for cervical cancer. In this study, the original cohort study showed a clear excess of leukemia in the first 15 years after exposure, but it was not feasible to obtain dose estimates on the entire cohort. However, the subsequent nested case-control study failed to demonstrate significant differences between alternative dose-response relations and effects of time-related modifiers. The combined analysis allows much clearer discrimination between alternative dose-time-response models.

10.
Pulmonary carcinomas were recorded in a life-span experiment of male Sprague-Dawley rats exposed to fission neutrons. Mortality-corrected prevalences are obtained by the method of isotonic regression. In a second part of the paper a comparison is made with data obtained earlier for radon-daughter inhalations in the same strain of rats. A simultaneous maximum likelihood analysis is applied jointly to all experimental groups from the radon inhalation and the fission neutron study. The dependence of the resulting coefficients for the different groups on absorbed dose or inhalation dose permits a derivation of equivalence ratios. At low doses the equivalence ratio is 3 WLM (working level months) of radon-daughter exposure to 1 mGy of fission neutrons. At higher doses the equivalence ratio decreases. The neutron data are also utilized to derive mortality-corrected lifetime incidences of pulmonary carcinomas in the exposed animals. At low doses the relation is consistent with linearity, but sublinearity (dose exponent less than 1) cannot be excluded.

11.
It has become increasingly common in epidemiological studies to pool specimens across subjects to achieve accurate quantitation of biomarkers and certain environmental chemicals. In this article, we consider the problem of fitting a binary regression model when an important exposure is subject to pooling. We take a regression calibration approach and derive several methods, including plug-in methods that use a pooled measurement and other covariate information to predict the exposure level of an individual subject, and normality-based methods that make further adjustments by assuming normality of calibration errors. Within each class we propose two ways to perform the calibration (covariate augmentation and imputation). These methods are shown in simulation experiments to effectively reduce the bias associated with the naive method that simply substitutes a pooled measurement for all individual measurements in the pool. In particular, the normality-based imputation method performs reasonably well in a variety of settings, even under skewed distributions of calibration errors. The methods are illustrated using data from the Collaborative Perinatal Project.

12.
Follmann D, Nason M. Biometrics 2011, 67(3): 1127-1134
Quantal bioassay experiments relate the amount or potency of some compound (for example, a poison, antibody, or drug) to a binary outcome such as death or infection in animals. For infectious diseases, probit regression is commonly used for inference, and a key measure of potency is the ID_P, the dose that results in P% of the animals being infected. In some experiments, a validation set may be used where both direct and proxy measures of the dose are available on a subset of animals, with the proxy being available on all. The proxy variable can be viewed as a messy reflection of the direct variable, leading to an errors-in-variables problem. We develop a model for the validation set and use a constrained seemingly unrelated regression (SUR) model to obtain the distribution of the direct measure conditional on the proxy. We use the conditional distribution to derive a pseudo-likelihood based on probit regression and use the parametric bootstrap for statistical inference. We re-evaluate an old experiment in 21 monkeys where neutralizing antibodies (nAbs) to HIV were measured using an old (proxy) assay in all monkeys and with a new (direct) assay in a validation set of 11 who had sufficient stored plasma. Using our methods, we obtain an estimate of the ID1 for the new assay, an important target for HIV vaccine candidates. In simulations, we compare the pseudo-likelihood estimates with regression calibration and a full joint likelihood approach.
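The probit/ID_P machinery (without the SUR validation-set step) can be sketched as follows: fit P(infected) = Φ(b0 + b1·log10 dose) by maximum likelihood, then invert the fitted curve for any ID_P. The dose grid and coefficients are invented for illustration and do not correspond to the monkey nAb data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)

# Hypothetical quantal bioassay: 40 animals at each of 9 log10-dose levels.
log_dose = np.repeat(np.linspace(-2.0, 2.0, 9), 40)
b0_true, b1_true = 0.5, 1.2
infected = rng.random(log_dose.size) < norm.cdf(b0_true + b1_true * log_dose)

def negloglik(beta):
    p = norm.cdf(beta[0] + beta[1] * log_dose)
    p = np.clip(p, 1e-12, 1 - 1e-12)          # guard the log at the extremes
    return -np.sum(np.where(infected, np.log(p), np.log(1 - p)))

fit = minimize(negloglik, x0=[0.0, 1.0], method="Nelder-Mead")
b0_hat, b1_hat = fit.x

def id_p(p):
    # Dose at which a fraction p of animals is infected: Phi(b0 + b1*z) = p.
    return 10 ** ((norm.ppf(p) - b0_hat) / b1_hat)

id50 = id_p(0.50)
id1 = id_p(0.01)   # the low-tail target highlighted in the abstract
```

The ID1 sits far out in the lower tail of the fitted curve, which is why dose-measurement error (the paper's errors-in-variables problem) matters so much for it.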

13.
In many settings, including oncology, increasing the dose of treatment increases both efficacy and toxicity. With the increasing availability of validated biomarkers and prediction models, there is potential for individualized dosing based on patient-specific factors. We consider the setting in which there is an existing dataset of patients treated with heterogeneous doses, including binary efficacy and toxicity outcomes and patient factors such as clinical features and biomarkers. The goal is to analyze the data to estimate an optimal dose for each (future) patient based on their clinical features and biomarkers. We propose an optimal individualized dose-finding rule that maximizes utility functions for individual patients while limiting the rate of toxicity. The utility is defined as a weighted combination of efficacy and toxicity probabilities. This approach maximizes overall efficacy at a prespecified constraint on overall toxicity. We model the binary efficacy and toxicity outcomes using logistic regression with dose, biomarkers, and dose-biomarker interactions. To accommodate the large number of potential parameters, we use the LASSO method. We additionally constrain the dose effect to be non-negative for both efficacy and toxicity for all patients. Simulation studies show that the utility approach combined with any of the modeling methods can improve efficacy without increasing toxicity relative to fixed dosing. The proposed methods are illustrated using a dataset of patients with lung cancer treated with radiation therapy.

14.
It has generally been assumed that the neutron and γ-ray absorbed doses in the data from the life span study (LSS) of the Japanese A-bomb survivors are too highly correlated for an independent separation of the all solid cancer risks due to neutrons and due to γ-rays. However, with the release of the most recent data for all solid cancer incidence and the increased statistical power over previous datasets, it is instructive to consider alternatives to the usual approaches. Simple excess relative risk (ERR) models for radiation-induced solid cancer incidence fitted to the LSS epidemiological data have been applied with neutron and γ-ray absorbed doses as separate explanatory covariables. A simple evaluation of the degree of independent effects of γ-ray and neutron absorbed doses on the all solid cancer risk with the hierarchical partitioning (HP) technique is presented here. The degree of multicollinearity between the γ-ray and neutron absorbed doses has also been considered. The results show that, whereas the partial correlation between the neutron and γ-ray colon absorbed doses may be considered high at 0.74, this value is just below the level beyond which remedial action, such as adding the doses together, is usually recommended. The resulting variance inflation factor is 2.2. Applying HP indicates that just under half of the drop in deviance resulting from adding the γ-ray and neutron absorbed doses to the baseline risk model comes from the joint effects of the neutrons and γ-rays, leaving a substantial proportion of this deviance drop accounted for by individual effects of the neutrons and γ-rays. The average ERR/Gy γ-ray absorbed dose and the ERR/Gy neutron absorbed dose obtained here directly for the first time agree well with previous indirect estimates. The average relative biological effectiveness (RBE) of neutrons relative to γ-rays, calculated directly from fit parameters of the all solid cancer ERR model with both colon absorbed dose covariables, is 65 (95% CI: 11, 170). Therefore, although the 95% CI is quite wide, reference to the colon doses with a neutron weighting of 10 may not be optimal as the basis for determining all solid cancer risks. Further investigations into the neutron RBE are required, ideally based on the LSS data with organ-specific neutron and γ-ray absorbed doses for all organs rather than the RBE-weighted absorbed doses currently provided. The HP method is also suggested for use in other epidemiological cohort analyses involving correlated explanatory covariables.

15.
We develop a Bayesian approach to a calibration problem with one covariate of interest subject to multiplicative measurement errors. Our work is motivated by a stem cell study with the objective of establishing the recommended minimum doses for stem cell engraftment after a blood transplant. When a safe stem cell dose is determined from the prefreeze samples, the post-cryopreservation recovery rate enters the model as a multiplicative measurement error term. We examine the impact of ignoring measurement errors in terms of the asymptotic bias in the regression coefficient. Given the general structure of data available in practice, we propose a two-stage Bayesian method to perform model estimation via R2WinBUGS (Sturtz, Ligges, and Gelman, 2005, Journal of Statistical Software 12, 1-16). We illustrate this method with the aforementioned motivating example. The results of this study allow routine peripheral blood stem cell processing laboratories to establish recommended minimum stem cell doses for transplant and to develop a systematic approach for deciding whether a post-thaw analysis is warranted.

16.
We present a graphical measure for assessing the explanatory power of regression models with a binary response. The binary regression quantile plot and an area defined by it are used for the visual comparison and ordering of nested binary response regression models. The plot shows how well various models explain the data. Two data sets are analyzed, and the area representing the fit of a model is shown to agree with the usual likelihood ratio test.

17.
Errors in the estimation of exposures or doses are a major source of uncertainty in epidemiological studies of cancer among nuclear workers. This paper presents a Monte Carlo maximum likelihood method that can be used for estimating a confidence interval that reflects both statistical sampling error and uncertainty in the measurement of exposures. The method is illustrated by application to an analysis of all cancer (excluding leukemia) mortality in a study of nuclear workers at the Oak Ridge National Laboratory (ORNL). Monte Carlo methods were used to generate 10,000 data sets with a simulated corrected dose estimate for each member of the cohort based on the estimated distribution of errors in doses. A Cox proportional hazards model was applied to each of these simulated data sets. A partial likelihood, averaged over all of the simulations, was generated; the central risk estimate and confidence interval were estimated from this partial likelihood. The conventional unsimulated analysis of the ORNL study yielded an excess relative risk (ERR) of 5.38 per Sv (90% confidence interval 0.54-12.58). The Monte Carlo maximum likelihood method yielded a slightly lower ERR (4.82 per Sv) and wider confidence interval (0.41-13.31).
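The averaged-likelihood idea can be sketched on toy data: draw many plausible "corrected" dose vectors from an assumed multiplicative error model, average the likelihood over the draws, and read a central estimate and an approximate 90% interval off the averaged likelihood. This toy version swaps the Cox partial likelihood for a simple Poisson excess-relative-risk model and uses a crude back-correction for the dose error; none of the numbers correspond to the ORNL cohort.

```python
import numpy as np

rng = np.random.default_rng(5)
n, err_true, baseline = 500, 3.0, 0.05        # cohort size, true ERR/dose, base rate
x = rng.lognormal(-1.0, 0.8, n)               # true doses (unknown in practice)
w = x * rng.lognormal(0.0, 0.4, n)            # recorded doses, multiplicative error
deaths = rng.poisson(baseline * (1 + err_true * x))

# Monte Carlo step: simulate M dose vectors consistent with the assumed error
# distribution (a crude back-correction of the recorded doses).
M = 200
sim_doses = w[None, :] / rng.lognormal(0.0, 0.4, (M, n))

def loglik(beta):
    lam = baseline * (1 + beta * sim_doses)   # (M, n) expected counts
    return (deaths * np.log(lam) - lam).sum(axis=1)  # per-simulation log-lik

# Average the likelihood (not the log-likelihood) over simulations.
betas = np.linspace(0.1, 10.0, 300)
avg_ll = np.array([np.logaddexp.reduce(loglik(b)) for b in betas])
beta_hat = betas[np.argmax(avg_ll)]
# Approximate 90% interval from a chi-square(1) likelihood-ratio cutoff.
inside = betas[avg_ll >= avg_ll.max() - 2.706 / 2]
ci = (inside.min(), inside.max())
```

Averaging likelihoods across simulated dose sets (via `logaddexp` for numerical stability) is what lets the interval reflect dose uncertainty on top of sampling error, mirroring the wider interval reported in the abstract.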

18.
Dietary assessment of episodically consumed foods gives rise to nonnegative data that have excess zeros and measurement error. Tooze et al. (2006, Journal of the American Dietetic Association 106, 1575-1587) describe a general statistical approach (the National Cancer Institute method) for modeling such food intakes reported on two or more 24-hour recalls (24HRs) and demonstrate its use to estimate the distribution of a food's usual intake in the general population. In this article, we propose an extension of this method to predict individual usual intake of such foods and to evaluate the relationships of usual intakes with health outcomes. Following the regression calibration approach for measurement error correction, individual usual intake is generally predicted as the conditional mean intake given 24HR-reported intake and other covariates in the health model. One feature of the proposed method is that additional covariates potentially related to usual intake may be used to increase the precision of estimates of usual intake and of diet-health outcome associations. Applying the method to data from the Eating at America's Table Study, we quantify the increased precision obtained from including reported frequency of intake on a food frequency questionnaire (FFQ) as a covariate in the calibration model. We then demonstrate the method by evaluating the linear relationship between log blood mercury levels and fish intake in women using data from the National Health and Nutrition Examination Survey, and show increased precision when including the FFQ information. Finally, we present simulation results evaluating the performance of the proposed method in this context.
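The core regression calibration step, predicting usual intake as the conditional mean given the error-prone recall, reduces under joint normality with known variance components to shrinking the recall toward its mean by λ = var(X)/(var(X)+var(U)). The sketch below uses assumed variances; in practice the variance components are estimated from replicate 24HRs, and the NCI method adds the excess-zero layer on top.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 50_000

# Hypothetical setup: true usual (log) intake X, one 24-hour recall W = X + U,
# and a health outcome Y depending linearly on X. Variances are assumptions.
sx2, su2 = 1.0, 1.5             # within-person recall error exceeds between-person variance
x = rng.normal(0.0, np.sqrt(sx2), n)
w = x + rng.normal(0.0, np.sqrt(su2), n)
y = 0.8 * x + rng.normal(0.0, 0.5, n)

# Naive analysis: regress Y on the single recall W.
# The slope attenuates to 0.8 * sx2/(sx2+su2) = 0.8 * 0.4 = 0.32.
slope_naive = np.cov(w, y)[0, 1] / np.var(w)

# Regression calibration: replace W by E[X | W] = mu + lambda*(W - mu).
lam = sx2 / (sx2 + su2)
x_hat = lam * w                 # mu = 0 in this simulation
slope_rc = np.cov(x_hat, y)[0, 1] / np.var(x_hat)
```

Regressing on the calibrated predictor recovers the diet-health slope (0.8 here), which is exactly the role the calibration model plays in the mercury/fish-intake analysis; adding FFQ frequency as a covariate sharpens E[X | W, FFQ] and hence the precision.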

19.
Ryu D, Li E, Mallick BK. Biometrics 2011, 67(2): 454-466
We consider nonparametric regression analysis in a generalized linear model (GLM) framework for data with covariates that are the subject-specific random effects of longitudinal measurements. The usual assumption that the effects of the longitudinal covariate processes are linear in the GLM may be unrealistic and, if so, can cast doubt on inference about the observed covariate effects. Allowing the regression functions to be unknown, we propose to apply Bayesian nonparametric methods, including cubic smoothing splines or P-splines, for the possible nonlinearity, and we use an additive model in this complex setting. To improve computational efficiency, we propose the use of data-augmentation schemes. The approach allows flexible covariance structures for the random effects and within-subject measurement errors of the longitudinal processes. The posterior model space is explored through a Markov chain Monte Carlo (MCMC) sampler. The proposed methods are illustrated and compared to other approaches, the "naive" approach and regression calibration, via simulations and by an application that investigates the relationship between obesity in adulthood and childhood growth curves.

20.
Müller HG, Schmitt T. Biometrics 1990, 46(1): 117-129
We address the question of how to choose the number of doses when estimating the median effective dose (ED50) of a symmetric dose-response curve by the maximum likelihood method. One criterion for this choice is the asymptotic mean squared error (determined by the asymptotic variance) of the estimated ED50 of a dose-response relationship with qualitative responses. The choice is based on an analysis of the inverse of the information matrix. We find that in many cases, assuming various symmetric dose-response curves and various design densities, the choice of as many doses as possible, i.e., the allocation of one subject per dose, is optimal. The theoretical and numerical results are supported by simulations and by an example concerning the choice of design in an adolescence study.
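A sketch of the one-subject-per-dose design the analysis favors: a symmetric logistic dose-response is observed as a single binary response at each of many distinct doses, and the ED50 and slope are estimated by maximum likelihood. All values (dose range, ED50, slope) are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# One subject per dose across a fine grid, symmetric logistic response curve.
ed50_true, slope_true = 2.0, 1.5
doses = np.linspace(0.0, 4.0, 400)            # 400 doses, one subject each
p = 1.0 / (1.0 + np.exp(-slope_true * (doses - ed50_true)))
resp = rng.random(doses.size) < p             # binary (quantal) responses

def negloglik(theta):
    ed50, s = theta
    eta = s * (doses - ed50)
    # Bernoulli log-likelihood with sigmoid(eta); logaddexp(0, eta) is a
    # numerically stable log(1 + exp(eta)).
    return np.sum(np.logaddexp(0.0, eta)) - np.sum(eta[resp])

fit = minimize(negloglik, x0=[1.5, 1.0], method="Nelder-Mead")
ed50_hat, slope_hat = fit.x
```

With 400 subjects spread over 400 doses, the ML estimate of the ED50 is typically within about 0.1 of the true value, illustrating that spreading subjects across many doses loses nothing for ED50 estimation, consistent with the paper's design result.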


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号