Similar Articles
20 similar articles found (search time: 15 ms)
1.
Kottas A, Branco MD, Gelfand AE. Biometrics, 2002, 58(3): 593-600.
In cytogenetic dosimetry, samples of cell cultures are exposed to a range of doses of a given agent. In each sample at each dose level, some measure of cell disability is recorded. The objective is to develop models that explain cell response to dose. Such models can be used to predict response at unobserved doses. More importantly, such models can provide inference for unknown exposure doses given the observed responses. Typically, cell disability is viewed as a Poisson count, but in the present work, a more appropriate response is a categorical classification. In the literature, modeling in this case is very limited, and what exists is purely parametric. We propose a fully Bayesian nonparametric approach to this problem. We offer comparison with a parametric model through a simulation study and the analysis of a real dataset on blood cultures exposed to radiation, where cells are classified by the number of micronuclei per cell.

2.
In the 1940s and 1950s, over 20,000 children in Israel were treated for tinea capitis (scalp ringworm) by irradiation to induce epilation. Follow-up studies showed that the radiation exposure was associated with the development of malignant thyroid neoplasms. Despite this clear evidence of an effect, the magnitude of the dose-response relationship is much less clear because of probable errors in individual estimates of dose to the thyroid gland. Such errors have the potential to bias dose-response estimation, a potential that was not widely appreciated at the time of the original analyses. We revisit this issue, describing in detail how errors in dosimetry might occur, and we develop a new dose-response model that takes the uncertainties of the dosimetry into account. Our model for the uncertainty in dosimetry is a complex and new variant of the classical multiplicative Berkson error model, having components of classical multiplicative measurement error as well as missing data. Analysis of the tinea capitis data suggests that measurement error in the dosimetry has only a negligible effect on dose-response estimation and inference as well as on the modifying effect of age at exposure.
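Classical and Berkson errors bias dose-response estimation differently, which is why the error structure matters in studies like this one. A minimal stdlib-Python sketch of the contrast, using a hypothetical linear dose-response and invented error variances (not the authors' model):

```python
import random

random.seed(1)

def ols_slope(xs, ys):
    # simple least-squares slope of ys on xs
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

n = 20000
beta0, beta1 = 1.0, 2.0       # hypothetical true intercept and slope
sigma_v = 0.5                 # SD of log multiplicative error, E[V] = 1

true_x = [random.lognormvariate(0.0, 0.6) for _ in range(n)]
y = [beta0 + beta1 * x + random.gauss(0, 0.5) for x in true_x]

# classical multiplicative error: observed dose W = X * V
w_classical = [x * random.lognormvariate(-sigma_v ** 2 / 2, sigma_v) for x in true_x]
slope_classical = ols_slope(w_classical, y)

# Berkson multiplicative error: assigned dose W, true dose X = W * V
w_assigned = [random.lognormvariate(0.0, 0.6) for _ in range(n)]
x_berkson = [w * random.lognormvariate(-sigma_v ** 2 / 2, sigma_v) for w in w_assigned]
y_berkson = [beta0 + beta1 * x + random.gauss(0, 0.5) for x in x_berkson]
slope_berkson = ols_slope(w_assigned, y_berkson)
```

In this linear setting, classical error attenuates the fitted slope toward zero, while Berkson error with E[V] = 1 leaves it unbiased, since E[X | W] = W.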

3.
We introduce a correction for covariate measurement error in nonparametric regression applied to longitudinal binary data arising from a study on human sleep. The data were collected to investigate the association of certain hormonal levels with the probability of being asleep. The hormonal effect is modeled flexibly while we account for the error-prone measurement of its concentration in the blood and the longitudinal character of the data. We present a fully Bayesian treatment utilizing Markov chain Monte Carlo inference techniques, and also introduce block updating to improve sampling and computational performance in the binary case. Our model is partly inspired by the relevance vector machine with radial basis functions, where usually very few basis functions are automatically selected for fitting the data. In the proposed approach, we implement such data-driven complexity regulation by adopting the idea of Bayesian model averaging. Besides the general theory and the detailed sampling scheme, we also provide a simulation study for the Gaussian and the binary cases, comparing our method to the naive analysis ignoring measurement error. The results demonstrate a clear gain when using the proposed correction method, particularly for the Gaussian case with medium and large measurement error variances, even if the covariate model is misspecified.
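As a rough illustration of regression on radial basis functions, the following stdlib-Python sketch fits a noisy curve with a small ridge penalty standing in for the paper's Bayesian model-averaging; the centers, basis width, penalty, and test function are all hypothetical choices:

```python
import math
import random

random.seed(2)

def rbf(x, c, s=0.15):
    # Gaussian radial basis function centered at c with width s
    return math.exp(-(x - c) ** 2 / (2 * s * s))

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# noisy observations of a smooth function
xs = [i / 100 for i in range(101)]
ys = [math.sin(2 * math.pi * x) + random.gauss(0, 0.1) for x in xs]

centers = [i / 10 for i in range(11)]   # 11 radial basis functions
lam = 1e-3                              # ridge penalty (crude stand-in for Bayesian shrinkage)

Phi = [[rbf(x, c) for c in centers] for x in xs]
k = len(centers)
A = [[sum(Phi[r][i] * Phi[r][j] for r in range(len(xs))) + (lam if i == j else 0.0)
      for j in range(k)] for i in range(k)]
b = [sum(Phi[r][i] * ys[r] for r in range(len(xs))) for i in range(k)]
w = solve(A, b)

fit = [sum(wi * p for wi, p in zip(w, row)) for row in Phi]
rmse = math.sqrt(sum((f - math.sin(2 * math.pi * x)) ** 2
                     for f, x in zip(fit, xs)) / len(xs))
```

The paper instead selects and averages over basis functions within the posterior; the ridge penalty here only mimics the resulting shrinkage.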

4.
Commonly accepted intensity-dependent normalization in spotted microarray studies takes account of measurement errors in the differential expression ratio but ignores measurement errors in the total intensity, although the definitions imply the same measurement error components are involved in both statistics. Furthermore, identification of differentially expressed genes is usually considered separately following normalization, which is statistically problematic. By incorporating the measurement errors in both total intensities and differential expression ratios, we propose a measurement-error model for intensity-dependent normalization and identification of differentially expressed genes. This model is also flexible enough to incorporate intra-array and inter-array effects. A Bayesian framework is proposed for the analysis of the proposed measurement-error model to avoid the potential risk of using the common two-step procedure. We also propose a Bayesian identification of differentially expressed genes to control the false discovery rate instead of the ad hoc thresholding of the posterior odds ratio. The simulation study and an application to real microarray data demonstrate promising results.

5.
Doubling time has been widely used to represent the growth pattern of cells. A traditional method for finding the doubling time is to use gray-scaled cell measurements on a logarithmically transformed scale. As an alternative statistical method, the log-linear model was recently proposed, for which actual cell numbers are used instead of the transformed gray-scaled cells. In this paper, I extend the log-linear model and propose the extended log-linear model. This model is designed for extra-Poisson variation, where the log-linear model produces a less appropriate estimate of the doubling time. Moreover, I compare the statistical properties of the gray-scaled method, the log-linear model, and the extended log-linear model. For this purpose, I perform a Monte Carlo simulation study with three data-generating models: the additive error model, the multiplicative error model, and the overdispersed Poisson model. From the simulation study, I found that the gray-scaled method depends strongly on the normality assumption of the gray-scaled cells; hence, this method is appropriate when the error model is multiplicative with log-normally distributed errors. However, it is less efficient for other types of error distributions, especially when the error model is additive or the errors follow the Poisson distribution, and the estimated standard error for the doubling time is not accurate in those cases. The log-linear model was found to be efficient when the errors follow the Poisson distribution or a nearly Poisson distribution, but its efficiency decreased as the overdispersion increased, compared to the extended log-linear model. When the error model is additive or multiplicative with Gamma-distributed errors, the log-linear model is more efficient than the gray-scaled method. The extended log-linear model performs well overall for all three data-generating models. A loss of efficiency of the extended log-linear model is observed only when the error model is multiplicative with log-normally distributed errors, where the gray-scaled method is appropriate; however, even in this case the extended log-linear model is more efficient than the log-linear model.
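A log-linear (Poisson regression) fit of cell counts against time yields the doubling time as ln 2 divided by the growth-rate coefficient. A stdlib-Python sketch with hypothetical counts (Poisson observation approximated by a normal draw, which is adequate for these large means; this is the plain log-linear model, not the paper's extension):

```python
import math
import random

random.seed(3)

# hypothetical growth: N(t) = N0 * 2**(t / T2), observed as roughly Poisson counts
N0, T2 = 50.0, 12.0
times = [0, 6, 12, 18, 24, 30, 36, 48]
counts = []
for t in times:
    lam = N0 * 2 ** (t / T2)
    counts.append(max(1, round(random.gauss(lam, math.sqrt(lam)))))

# initialize (b0, b1) from OLS on log counts, then Newton steps for the Poisson GLM
logy = [math.log(y) for y in counts]
tbar, lbar = sum(times) / len(times), sum(logy) / len(logy)
b1 = sum((t - tbar) * (l - lbar) for t, l in zip(times, logy)) / \
     sum((t - tbar) ** 2 for t in times)
b0 = lbar - b1 * tbar
for _ in range(25):
    mu = [math.exp(b0 + b1 * t) for t in times]
    g0 = sum(y - m for y, m in zip(counts, mu))                 # score for b0
    g1 = sum(t * (y - m) for t, y, m in zip(times, counts, mu)) # score for b1
    h00 = sum(mu)
    h01 = sum(t * m for t, m in zip(times, mu))
    h11 = sum(t * t * m for t, m in zip(times, mu))
    det = h00 * h11 - h01 * h01
    b0 += (h11 * g0 - h01 * g1) / det
    b1 += (h00 * g1 - h01 * g0) / det

doubling_time = math.log(2) / b1   # should recover roughly T2 = 12
```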

6.
Gelman A, Chew GL, Shnaidman M. Biometrics, 2004, 60(2): 407-417.
In a serial dilution assay, the concentration of a compound is estimated by combining measurements of several different dilutions of an unknown sample. The relation between concentration and measurement is nonlinear and heteroscedastic, and so it is not appropriate to weight these measurements equally. In the standard existing approach for analysis of these data, a large proportion of the measurements are discarded as being above or below detection limits. We present a Bayesian method for jointly estimating the calibration curve and the unknown concentrations using all the data. Compared to the existing method, our estimates have much lower standard errors and give estimates even when all the measurements are outside the "detection limits." We evaluate our method empirically using laboratory data on cockroach allergens measured in house dust samples. Our estimates are much more accurate than those obtained using the usual approach. In addition, we develop a method for determining the "effective weight" attached to each measurement, based on a local linearization of the estimated model. The effective weight can give insight into the information conveyed by each data point and suggests potential improvements in design of serial dilution experiments.

7.
We construct Bayesian methods for semiparametric modeling of a monotonic regression function when the predictors are measured with classical error, Berkson error, or a mixture of the two. Such methods require a distribution for the unobserved (latent) predictor, a distribution we also model semiparametrically. Such combinations of semiparametric methods for the dose response as well as the latent variable distribution have not been considered in the measurement error literature for any form of measurement error. In addition, our methods represent a new approach to those problems where the measurement error combines Berkson and classical components. While the methods are general, we develop them around a specific application, namely, the study of thyroid disease in relation to radiation fallout from the Nevada test site. We use these data to illustrate our methods, which suggest a point estimate (posterior mean) of relative risk at high doses nearly double that of previous analyses, but also much greater uncertainty in the relative risk.

8.
Pan W, Lin X, Zeng D. Biometrics, 2006, 62(2): 402-412.
We propose a new class of models, transition measurement error models, to study the effects of covariates and the past responses on the current response in longitudinal studies when one of the covariates is measured with error. We show that the response variable conditional on the error-prone covariate follows a complex transition mixed effects model. The naive model obtained by ignoring the measurement error correctly specifies the transition part of the model, but misspecifies the covariate effect structure and ignores the random effects. We next study the asymptotic bias in the naive estimator obtained by ignoring the measurement error, for both continuous and discrete outcomes. We show that the naive estimator of the regression coefficient of the error-prone covariate is attenuated, while the naive estimators of the regression coefficients of the past responses are generally inflated. We then develop a structural modeling approach for parameter estimation by maximum likelihood. In view of the multidimensional integration required by full maximum likelihood estimation, an EM algorithm is developed to calculate maximum likelihood estimators, in which Monte Carlo simulations are used to evaluate the conditional expectations in the E-step. We evaluate the performance of the proposed method through a simulation study and apply it to a longitudinal social support study for elderly women with heart disease. An additional simulation study shows that the Bayesian information criterion (BIC) performs well in choosing the correct transition orders of the models.
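The attenuation/inflation pattern described above is easy to reproduce in a toy transition model. This stdlib-Python sketch (hypothetical parameter values, a simple autoregressive covariate, and plain OLS rather than the authors' likelihood machinery) shows the error-prone covariate's coefficient shrinking while the lagged-response coefficient grows under naive estimation:

```python
import random

random.seed(4)

n = 50000
alpha, beta = 0.3, 1.0                 # true transition and covariate effects
x, y = [0.0], [0.0]
for _ in range(1, n):
    x.append(0.8 * x[-1] + random.gauss(0, 0.6))            # AR(1) covariate, var ~ 1
    y.append(alpha * y[-1] + beta * x[-1] + random.gauss(0, 0.5))

w = [xi + random.gauss(0, 0.7) for xi in x]                  # classical measurement error

def two_var_ols(resp, p1, p2):
    # no-intercept OLS of resp on (p1, p2); all series have mean ~ 0 by construction
    s11 = sum(a * a for a in p1)
    s12 = sum(a * b for a, b in zip(p1, p2))
    s22 = sum(b * b for b in p2)
    r1 = sum(a * c for a, c in zip(p1, resp))
    r2 = sum(b * c for b, c in zip(p2, resp))
    det = s11 * s22 - s12 * s12
    return (s22 * r1 - s12 * r2) / det, (s11 * r2 - s12 * r1) / det

true_alpha, true_beta = two_var_ols(y[1:], y[:-1], x[1:])    # error-free covariate
naive_alpha, naive_beta = two_var_ols(y[1:], y[:-1], w[1:])  # error-prone covariate
```

With the covariate observed without error the fit recovers (0.3, 1.0); with the noisy covariate, the covariate coefficient is attenuated and the lagged-response coefficient inflated, because the lagged response partly proxies for the mismeasured covariate.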

9.
We study a linear mixed effects model for longitudinal data, where the response variable and covariates with fixed effects are subject to measurement error. We propose a method of moments estimation that does not require any assumption on the functional forms of the distributions of random effects and other random errors in the model. For a classical measurement error model we apply the instrumental variable approach to ensure identifiability of the parameters. Our methodology, without instrumental variables, can be applied to Berkson measurement errors. Using simulation studies, we investigate the finite sample performances of the estimators and show the impact of measurement error in the covariates and the response on the estimation procedure. The results show that our method performs quite satisfactorily, especially for the fixed effects with measurement error (even under misspecification of the measurement error model). This method is applied to a real data example from a large birth and child cohort study.

10.
We present a method for parameter estimation in a two-compartment hidden Markov model of the first two stages of hematopoiesis. Hematopoiesis is the specialization of stem cells into mature blood cells. As stem cells are not distinguishable in bone marrow, little is known about their behavior, although it is known that they have the ability to self-renew or to differentiate to more specialized (progenitor) cells. We observe progenitor cells in samples of bone marrow taken from hybrid cats whose cells contain a natural binary marker. With data consisting of the changing proportions of this binary marker over time from several cats, estimates for stem cell self-renewal and differentiation parameters are obtained using an estimating equations approach.

11.
Non-myeloablative allogeneic haematopoietic stem cell transplantation (HSCT) is rarely achievable clinically, except where donor cells have selective advantages. Murine non-myeloablative conditioning regimens have limited clinical success, partly through use of clinically unachievable cell doses or strain combinations permitting allograft acceptance using immunosuppression alone. We found that reducing busulfan conditioning in murine syngeneic HSCT, increases bone marrow (BM):blood SDF-1 ratio and total donor cells homing to BM, but reduces the proportion of donor cells engrafting. Despite this, syngeneic engraftment is achievable with non-myeloablative busulfan (25 mg/kg) and higher cell doses induce increased chimerism. Therefore we investigated regimens promoting initial donor cell engraftment in the major histocompatibility complex barrier mismatched CBA to C57BL/6 allo-transplant model. This requires full myeloablation and immunosuppression with non-depleting anti-CD4/CD8 blocking antibodies to achieve engraftment of low cell doses, and rejects with reduced intensity conditioning (≤75 mg/kg busulfan). We compared increased antibody treatment, G-CSF, niche disruption and high cell dose, using reduced intensity busulfan and CD4/8 blockade in this model. Most treatments increased initial donor engraftment, but only addition of co-stimulatory blockade permitted long-term engraftment with reduced intensity or non-myeloablative conditioning, suggesting that signal 1 and 2 T-cell blockade is more important than early BM niche engraftment for transplant success.

12.
Huang YH, Hwang WH, Chen FY. Biometrics, 2011, 67(4): 1471-1480.
Measurement errors in covariates may result in biased estimates in regression analysis. Most methods to correct this bias assume nondifferential measurement errors, i.e., that the measurement errors are independent of the response variable. However, in regression models for zero-truncated count data, the number of error-prone covariate measurements for a given observational unit can equal its response count, implying a situation of differential measurement errors. To address this challenge, we develop a modified conditional score approach to achieve consistent estimation. The proposed method represents a novel technique, with efficiency gains achieved by augmenting random errors, and performs well in a simulation study. The method is demonstrated in an ecology application.

13.
Using diameter at breast height (DBH) and tree height as predictors, additive biomass model systems for natural Mongolian oak (Quercus mongolica) in Heilongjiang Province were developed based on multivariate likelihood analysis and seemingly unrelated regression. The results showed that tree height significantly improved the stem biomass model, raising the coefficient of determination (R2) from 0.953 to 0.988 and reducing the root mean square error (RMSE) by 14 kg, but had no significant effect on branch, foliage, or root biomass. The error structures of both the one-predictor (DBH only) and two-predictor (DBH and height) power-function model systems were multiplicative, indicating that the log-transformed linear model form is more suitable. R2 for the stem, branch, foliage, and root biomass models were 0.953-0.988, 0.982-0.983, 0.916-0.917, and 0.951-0.952, respectively, with RMSE of 13.42-27.03, 6.84-7.00, 1.95-1.97, and 9.71-9.84 kg. Compared with feasible generalized least squares (FGLS), Bayesian estimation produced similar model fits but parameter estimates with different variability. The standard errors of the FGLS parameters were 0.054-0.211; the two Bayesian methods using the Jeffreys invariant prior (DMC and Gibbs1) yielded similar parameter variability (standard deviations of 0.055-0.221); a multivariate normal prior with zero mean vector, variances of 1000, and zero covariances (Gibbs2) and a prior summarized from previous studies of Quercus biomass models (Gibbs3) produced greater variability (standard deviations of 0.080-0.278); and a prior obtained from the data themselves (Gibbs4) gave smaller parameter variability than the other methods (standard deviations of 0.004-0.013). When models were built with the Gibbs4 method, both model classes not only provided the narrowest 95% prediction intervals but also produced smaller prediction biases: the mean absolute percentage errors (MAPE) for stem, branch, foliage, root, and total biomass in the one-predictor models were 19.8%, 24.7%, 24.6%, 29.0%, and 13.1%, respectively, and the MAPE for stem and total biomass in the two-predictor models decreased to 10.5% and 9.8%, with the other components unchanged, indicating that Gibbs4 provides more accurate biomass predictions. Compared with traditional regression, accurate prior information gives Bayesian statistics advantages in estimation stability and uncertainty reduction.
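The core of such biomass models is an allometric power function, usually fit on the log scale because the error structure is multiplicative. A stdlib-Python sketch with invented stems (the coefficients, error SD, and the log-normal back-transformation correction are illustrative, not the paper's fitted values):

```python
import math
import random

random.seed(5)

# simulated stems: B = a * DBH^b * eps, with multiplicative log-normal error
a_true, b_true, sigma = 0.08, 2.4, 0.2
dbh = [random.uniform(8, 40) for _ in range(400)]          # DBH in cm (hypothetical)
biomass = [a_true * d ** b_true * random.lognormvariate(0, sigma) for d in dbh]

# OLS on the log scale: ln(B) = ln(a) + b * ln(DBH) + error
lx = [math.log(d) for d in dbh]
ly = [math.log(w) for w in biomass]
mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
b_hat = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / \
        sum((u - mx) ** 2 for u in lx)
a_log = my - b_hat * mx
resid_var = sum((v - (a_log + b_hat * u)) ** 2
                for u, v in zip(lx, ly)) / (len(lx) - 2)
a_hat = math.exp(a_log + resid_var / 2)   # log-normal bias correction on back-transform
```

The exp(residual variance / 2) factor corrects the downward bias that would otherwise arise from back-transforming the log-scale intercept.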

14.
Thall PF, Inoue LY, Martin TG. Biometrics, 2002, 58(3): 560-568.
We describe an adaptive Bayesian design for a clinical trial of an experimental treatment for patients with hematologic malignancies who initially received an allogeneic bone marrow transplant but subsequently suffered a disease recurrence. Treatment consists of up to two courses of targeted immunotherapy followed by allogeneic donor lymphocyte infusion. The immunotherapy is a necessary precursor to the lymphocyte infusion, but it may cause severe liver toxicity and is certain to cause a low white blood cell count and low platelets. The primary scientific goal is to determine the infusion time that has the highest probability of treatment success, defined as the event that the patient does not suffer severe toxicity and is alive with recovered white blood cell count 50 days from the start of therapy. The method is based on a parametric model accounting for toxicity, time to white blood cell recovery, and survival time. The design includes an algorithm for between-patient immunotherapy dose de-escalation based on the toxicity data and an adaptive randomization among five possible infusion times according to their most recent posterior success probabilities. A simulation study shows that the design reliably selects the best infusion time while randomizing greater proportions of patients to superior infusion times.
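Adaptive randomization of this general kind can be sketched with conjugate Beta posteriors: each patient is assigned with probability proportional to each arm's current posterior probability of being best. The five success probabilities below are invented, and a simple Bernoulli outcome stands in for the paper's joint toxicity/recovery/survival model:

```python
import random

random.seed(6)

true_p = [0.20, 0.25, 0.30, 0.60, 0.35]   # hypothetical success probs for 5 infusion times
succ, fail = [0] * 5, [0] * 5

def prob_best(n_draws=500):
    # Monte Carlo estimate of P(arm k is best) under Beta(1+s, 1+f) posteriors
    wins = [0] * 5
    for _ in range(n_draws):
        draws = [random.betavariate(1 + succ[k], 1 + fail[k]) for k in range(5)]
        wins[draws.index(max(draws))] += 1
    return [w / n_draws for w in wins]

for _ in range(300):                       # 300 simulated patients
    p = prob_best()
    # adaptive randomization: sample an arm with probability prob_best
    r, c, arm = random.random(), 0.0, 4
    for k in range(5):
        c += p[k]
        if r < c:
            arm = k
            break
    if random.random() < true_p[arm]:
        succ[arm] += 1
    else:
        fail[arm] += 1

alloc = [succ[k] + fail[k] for k in range(5)]   # patients per arm; arm 3 should dominate
```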

15.
We present Bayesian hierarchical models for the analysis of Affymetrix GeneChip data. The approach we take differs from other available approaches in two fundamental aspects. Firstly, we aim to integrate all processing steps of the raw data in a common statistically coherent framework, allowing all components and thus associated errors to be considered simultaneously. Secondly, inference is based on the full posterior distribution of gene expression indices and derived quantities, such as fold changes or ranks, rather than on single point estimates. Measures of uncertainty on these quantities are thus available. The models presented represent the first building block for integrated Bayesian Analysis of Affymetrix GeneChip data: the models take into account additive as well as multiplicative error, gene expression levels are estimated using perfect match and a fraction of mismatch probes and are modeled on the log scale. Background correction is incorporated by modeling true signal and cross-hybridization explicitly, and a need for further normalization is considerably reduced by allowing for array-specific distributions of nonspecific hybridization. When replicate arrays are available for a condition, posterior distributions of condition-specific gene expression indices are estimated directly, by a simultaneous consideration of replicate probe sets, avoiding averaging over estimates obtained from individual replicate arrays. The performance of the Bayesian model is compared to that of standard available point estimate methods on subsets of the well known GeneLogic and Affymetrix spike-in data. The Bayesian model is found to perform well and the integrated procedure presented appears to hold considerable promise for further development.

16.
Biophysical Journal, 2020, 118(7): 1749-1768.
Epithelial-mesenchymal transition (EMT) is a fundamental biological process that plays a central role in embryonic development, tissue regeneration, and cancer metastasis. Transforming growth factor-β (TGFβ) is a potent inducer of this cellular transition, which is composed of transitions from an epithelial state to intermediate or partial EMT state(s) to a mesenchymal state. Using computational models to predict cell state transitions in a specific experiment is inherently difficult for reasons including model parameter uncertainty and error associated with experimental observations. In this study, we demonstrate that a data-assimilation approach using an ensemble Kalman filter, which combines limited noisy observations with predictions from a computational model of TGFβ-induced EMT, can reconstruct the cell state and predict the timing of state transitions. We used our approach in proof-of-concept "synthetic" in silico experiments, in which experimental observations were produced from a known computational model with the addition of noise. We mimic parameter uncertainty in in vitro experiments by incorporating model error that shifts the TGFβ doses associated with the state transitions and reproduces experimentally observed variability in cell state by either shifting a single parameter or generating "populations" of model parameters. We performed synthetic experiments for a wide range of TGFβ doses, investigating different cell steady-state conditions, and conducted parameter studies varying properties of the data-assimilation approach including the time interval between observations and incorporating multiplicative inflation, a technique to compensate for underestimation of the model uncertainty and mitigate the influence of model error. We find that cell state can be successfully reconstructed and the future cell state predicted in synthetic experiments, even in the setting of model error, when experimental observations are performed at a sufficiently short time interval and incorporate multiplicative inflation. Our study demonstrates the feasibility and utility of a data-assimilation approach to forecasting the fate of cells undergoing EMT.
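A minimal ensemble Kalman filter for a scalar linear state illustrates the forecast/analysis cycle used in such data-assimilation studies; the dynamics, noise levels, and ensemble size below are all hypothetical, far simpler than the EMT model:

```python
import random
import statistics

random.seed(7)

a, q_sd, r_sd = 0.95, 0.3, 0.5       # dynamics coefficient, process and observation noise
T, m = 60, 50                        # time steps, ensemble size

# truth trajectory and noisy observations
x_true = [5.0]
for _ in range(T - 1):
    x_true.append(a * x_true[-1] + random.gauss(0, q_sd))
obs = [x + random.gauss(0, r_sd) for x in x_true]

ens = [random.gauss(0.0, 2.0) for _ in range(m)]   # deliberately poor initial ensemble
est = []
for t in range(T):
    # forecast step: propagate each member through the model with process noise
    ens = [a * x + random.gauss(0, q_sd) for x in ens]
    # analysis step: Kalman update with perturbed observations
    P = statistics.pvariance(ens)
    K = P / (P + r_sd ** 2)
    ens = [x + K * (obs[t] + random.gauss(0, r_sd) - x) for x in ens]
    est.append(sum(ens) / m)

# mean absolute error after a short burn-in
err = sum(abs(e - x) for e, x in zip(est[10:], x_true[10:])) / (T - 10)
```

Multiplicative inflation, mentioned above, would simply scale each member's deviation from the ensemble mean by a factor slightly greater than 1 before the analysis step.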

17.
Since the first successful cord blood transplant was performed in 1988, there has been a gradual increase in the use of cord blood for hemopoietic stem cell transplantation. Worldwide, over 8,000 unrelated cord blood transplants have been performed, the majority for children with hemopoietic malignancies. Transplantation for adults has increased but is limited by the low number of nucleated cells and CD34(+) cells within a single cord blood collection. Cord blood hemopoietic stem cells are more primitive than their adult counterparts and have high proliferative potential. Cord blood ex vivo expansion is designed to improve transplant outcomes by increasing the number of hemopoietic stem cells with long-term repopulating potential and their differentiated progeny. However, despite a large amount of research activity during the last decade, this aim has not been realized. Herein we discuss the rationale for this approach, culture methods for ex vivo expansion, ways to assess the functional capacity of ex vivo generated hemopoietic stem cells, and clinical outcomes following transplantation with ex vivo expanded cord blood.

18.
With a binary response Y, the dose-response model under consideration is logistic in flavor, with pr(Y=1 | D) = R/(1+R), where R = λ0 + EAR·D, λ0 is the baseline incidence rate, and EAR is the excess absolute risk per gray. The calculated thyroid dose of person i is D_i^mes = f_i Q_i^mes / M_i^mes. Here, Q_i^mes is the measured content of radioiodine in the thyroid gland of person i at time t^mes, M_i^mes is the estimate of the thyroid mass, and f_i is a normalizing multiplier. The quantities Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^mes = Q_i^tr V_i^Q (a classical measurement error model) and M_i^tr = M_i^mes V_i^M (a Berkson measurement error model), where Q_i^tr is the true content of radioactivity in the thyroid gland and M_i^tr is the true thyroid mass. The error in f_i is much smaller than the errors in (Q_i^mes, M_i^mes) and is ignored in the analysis. By means of parametric full maximum likelihood and regression calibration (under the assumption that the true doses are lognormally distributed), nonparametric full maximum likelihood, nonparametric regression calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ0 and EAR. A simulation study is presented, based on a real sample from the epidemiological studies: the doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were set to values from earlier epidemiological studies, and the binary response was then simulated according to the dose-response model.
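The SIMEX idea, deliberately adding extra measurement error at several levels and then extrapolating back to the no-error case, can be sketched in a few lines of stdlib Python; the linear model, error SDs, and quadratic extrapolant below are illustrative choices, not the paper's logistic setup:

```python
import random

random.seed(8)

n = 4000
beta, su = 2.0, 0.6                    # hypothetical true slope; known error SD
x = [random.gauss(0, 1.0) for _ in range(n)]
w = [xi + random.gauss(0, su) for xi in x]          # classical additive error
y = [beta * xi + random.gauss(0, 0.5) for xi in x]

def slope(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / \
           sum((a - mu) ** 2 for a in u)

naive_beta = slope(w, y)               # attenuated by the measurement error

# SIMEX: add extra error with variance lam * su^2, average over B replicates
lambdas = [0.0, 0.5, 1.0, 1.5, 2.0]
B = 40
avg = []
for lam in lambdas:
    ests = [slope([wi + random.gauss(0, su * lam ** 0.5) for wi in w], y)
            for _ in range(B)]
    avg.append(sum(ests) / B)

def solve3(A, b):
    # Gaussian elimination for the 3x3 normal equations of the quadratic fit
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(3):
        p = max(range(k, 3), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, 3):
            f = M[i][k] / M[k][k]
            for j in range(k, 4):
                M[i][j] -= f * M[k][j]
    c = [0.0] * 3
    for i in range(2, -1, -1):
        c[i] = (M[i][3] - sum(M[i][j] * c[j] for j in range(i + 1, 3))) / M[i][i]
    return c

# fit c0 + c1*lam + c2*lam^2, then extrapolate to lam = -1 (no measurement error)
S = [[sum(l ** (i + j) for l in lambdas) for j in range(3)] for i in range(3)]
r = [sum(a * l ** i for l, a in zip(lambdas, avg)) for i in range(3)]
c0, c1, c2 = solve3(S, r)
simex_beta = c0 - c1 + c2
```

Quadratic extrapolation typically under-corrects slightly (the exact attenuation curve is a hyperbola), but it recovers most of the bias that the naive estimator leaves in place.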

19.
We develop a new Bayesian approach to interval estimation for both the risk difference and the risk ratio for a 2 x 2 table with a structural zero using Markov chain Monte Carlo (MCMC) methods. We also derive a normal approximation for the risk difference and a gamma approximation for the risk ratio. We then compare the coverage and interval width of our new intervals to the score-based intervals over various parameter and sample-size configurations. Finally, we consider a Bayesian method for sample-size determination.
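For an ordinary two-sample 2 x 2 table (without the paper's structural zero, which requires a constrained trinomial model instead), a Bayesian credible interval for the risk difference follows directly from independent Beta posteriors, with no MCMC needed; the counts below are invented:

```python
import random

random.seed(9)

# hypothetical 2x2 counts: events / trials in each group
x1, n1 = 18, 60
x2, n2 = 35, 60

# under uniform Beta(1, 1) priors, the posteriors are Beta(1 + x, 1 + n - x);
# sample the posterior of the risk difference p1 - p2 directly
draws = sorted(
    random.betavariate(1 + x1, 1 + n1 - x1) - random.betavariate(1 + x2, 1 + n2 - x2)
    for _ in range(20000)
)
lo, hi = draws[499], draws[19499]   # equal-tailed 95% credible interval
```

With these counts the interval excludes zero, so the two groups' risks differ credibly; the structural-zero setting of the paper changes the likelihood but not this basic simulate-and-take-quantiles recipe.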

20.
The specialized microenvironment or niche where stem cells reside provides regulatory input governing stem cell function. We tested the hypothesis that targeting the niche might improve stem cell-based therapies using three mouse models that are relevant to clinical uses of hematopoietic stem (HS) cells. We and others previously identified the osteoblast as a component of the adult HS cell niche and established that activation of the parathyroid hormone (PTH) receptor on osteoblasts increases stem cell number. Here we show that pharmacologic use of PTH increases the number of HS cells mobilized into the peripheral blood for stem cell harvests, protects stem cells from repeated exposure to cytotoxic chemotherapy and expands stem cells in transplant recipients. These data provide evidence that the niche may be an attractive target for drug-based stem cell therapeutics.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)