Similar Articles
20 similar articles found (search time: 0 ms)
1.
Summary The rapid development of new biotechnologies allows us to understand biomedical dynamic systems in greater detail, down to the cellular level. Many subject-specific biomedical systems can be described by a set of differential or difference equations similar to those used for engineering dynamic systems. In this article, motivated by HIV dynamic studies, we propose a class of mixed-effects state-space models based on the longitudinal feature of dynamic systems. State-space models with mixed-effects components are very flexible in modeling the serial correlation of within-subject observations and between-subject variations. The Bayesian approach and the maximum likelihood method for standard mixed-effects models and state-space models are modified and investigated for estimating unknown parameters in the proposed models. In the Bayesian approach, full conditional distributions are derived and a Gibbs sampler is constructed to explore the posterior distributions. For the maximum likelihood method, we develop a Monte Carlo EM algorithm with a Gibbs sampler step to approximate the conditional expectations in the E-step. Simulation studies are conducted to compare the two proposed methods. We apply the mixed-effects state-space model to a data set from an AIDS clinical trial to illustrate the proposed methodologies. The proposed models and methods may also have potential applications in other biomedical system analyses, such as tumor dynamics in cancer research and genetic regulatory network modeling.
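A minimal runnable sketch of the state-space building block this entry relies on (the article's mixed-effects extension and its Gibbs/Monte Carlo EM machinery are far more involved; the scalar model and all parameter values below are illustrative assumptions):

```python
import numpy as np

# Scalar linear Gaussian state-space model
#   x_t = a * x_{t-1} + w_t,  w_t ~ N(0, q)
#   y_t = x_t + v_t,          v_t ~ N(0, r)
# filtered with the standard Kalman recursions.
def kalman_filter(y, a=0.9, q=0.1, r=0.5, x0=0.0, p0=1.0):
    """Return filtered state means and variances for observations y."""
    xs, ps = [], []
    x, p = x0, p0
    for obs in y:
        # predict step
        x_pred = a * x
        p_pred = a * a * p + q
        # update step
        k = p_pred / (p_pred + r)        # Kalman gain
        x = x_pred + k * (obs - x_pred)
        p = (1.0 - k) * p_pred
        xs.append(x)
        ps.append(p)
    return np.array(xs), np.array(ps)

rng = np.random.default_rng(0)
true_x = np.cumsum(rng.normal(0, 0.3, 50)) * 0.5   # a wandering latent state
y = true_x + rng.normal(0, 0.7, 50)                # noisy observations
means, variances = kalman_filter(y)
```

In the article's setting, parameters such as the transition coefficient `a` would carry subject-specific random effects and be sampled (Gibbs) or maximized (Monte Carlo EM) rather than fixed.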

2.
Na Cai, Wenbin Lu & Hao Helen Zhang, Biometrics (2012), 68(4): 1093-1102
Summary In the analysis of longitudinal data, it is not uncommon that the observation times of repeated measurements are subject-specific and correlated with the underlying longitudinal outcomes. In such situations, accounting for the dependence between observation times and longitudinal outcomes is critical to the validity of statistical inference. In this article, we propose a flexible joint model for longitudinal data analysis in the presence of informative observation times. In particular, the new procedure considers the shared random-effect model and assumes a time-varying coefficient for the latent variable, allowing a flexible way of modeling longitudinal outcomes while adjusting for their association with observation times. Estimating equations are developed for parameter estimation. We show that the resulting estimators are consistent and asymptotically normal, with a variance-covariance matrix that has a closed form and can be consistently estimated by the usual plug-in method. An additional advantage of the procedure is that it provides a unified framework for testing whether the effect of the latent variable is zero, constant, or time-varying. Simulation studies show that the proposed approach is appropriate for practical use. An application to bladder cancer data is also given to illustrate the methodology.
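As a toy illustration of the time-varying-coefficient idea only (not the paper's estimating-equation procedure; the latent effect is treated as observed here purely for demonstration, and all values are invented):

```python
import numpy as np

# Suppose the latent effect b enters the outcome with coefficient
# gamma(t) = g0 + g1 * t. With b known, (g0, g1) can be recovered by
# least squares on the regressors (b, t*b).
rng = np.random.default_rng(6)
n = 5000
t = rng.uniform(0, 1, n)            # observation times
b = rng.standard_normal(n)          # latent subject effect (known here)
gamma_true = 1.0 + 2.0 * t          # true time-varying coefficient
y = gamma_true * b + rng.normal(0, 0.1, n)

X = np.column_stack([b, t * b])
g0, g1 = np.linalg.lstsq(X, y, rcond=None)[0]
```

The paper's contribution is precisely that it handles the case where `b` is unobserved and observation times are informative; this sketch only shows what "time-varying coefficient for the latent variable" means.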

3.
Summary In 2001, the U.S. Office of Personnel Management required all health plans participating in the Federal Employees Health Benefits Program to offer mental health and substance abuse benefits on par with general medical benefits. The initial evaluation found that, on average, parity did not result in either large spending increases or increased service use over the four-year observational period. However, some groups of enrollees may have benefited from parity more than others. To address this question, we propose a Bayesian two-part latent class model to characterize the effect of parity on mental health use and expenditures. Within each class, we fit a two-part random effects model to separately model the probability of mental health or substance abuse use and mean spending trajectories among those having used services. The regression coefficients and random effect covariances vary across classes, thus permitting class-varying correlation structures between the two components of the model. Our analysis identified three classes of subjects: a group of low spenders that tended to be male, had relatively rare use of services, and decreased their spending pattern over time; a group of moderate spenders, primarily female, that had an increase in both use and mean spending after the introduction of parity; and a group of high spenders that tended to have chronic service use and constant spending patterns. By examining the joint 95% highest probability density regions of expected changes in use and spending for each class, we confirmed that parity had an impact only on the moderate spender class.
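The two-part decomposition within a class can be sketched as follows (all coefficients are made up for illustration; the paper additionally fits latent classes, random effects, and trajectories over time):

```python
import math

# Two-part model: part 1 is a logistic model for any service use,
# part 2 a lognormal model for spending among users. Overall expected
# spending is the product P(use) * E[spend | use].
def expected_spending(x, beta_use, beta_spend, sigma=1.0):
    """E[spend] for covariate vector x under a two-part model."""
    eta_use = sum(b * xi for b, xi in zip(beta_use, x))
    p_use = 1.0 / (1.0 + math.exp(-eta_use))             # logistic part
    eta_spend = sum(b * xi for b, xi in zip(beta_spend, x))
    mean_given_use = math.exp(eta_spend + sigma**2 / 2)  # lognormal mean
    return p_use * mean_given_use

# x = (intercept, post-parity indicator); illustrative coefficients
pre = expected_spending((1.0, 0.0), beta_use=(-2.0, 0.5), beta_spend=(4.0, 0.3))
post = expected_spending((1.0, 1.0), beta_use=(-2.0, 0.5), beta_spend=(4.0, 0.3))
```

The class-specific versions of `beta_use` and `beta_spend` (and their random-effect covariances) are what distinguish the low-, moderate-, and high-spender classes in the paper.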

4.
Summary In recent years, nonlinear mixed-effects (NLME) models have been proposed for modeling complex longitudinal data. Covariates are usually introduced in the models to partially explain intersubject variation. However, one often assumes that both the model random error and the random effects are normally distributed, which may not always give reliable results if the data exhibit skewness. Moreover, some covariates, such as CD4 cell count, are often measured with substantial error. In this article, we address these issues simultaneously by jointly modeling the response and covariate processes using a Bayesian approach to NLME models with covariate measurement errors and a skew-normal distribution. A real data example is offered to illustrate the methodologies by comparing various potential models with different distribution specifications. It is shown that models with a skew-normality assumption may provide more reasonable results if the data exhibit skewness, and the results may be important for HIV/AIDS studies in providing quantitative guidance to better understand the virologic response to antiretroviral treatment.
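The skew-normal ingredient is easy to demonstrate on its own: a skew-normal(0, 1, α) draw can be generated from two independent standard normals via its standard stochastic representation. This sketch only illustrates the distribution, not the authors' joint NLME measurement-error model:

```python
import numpy as np

def rskewnorm(alpha, size, rng):
    """Draw skew-normal(0, 1, alpha) variates using
    Z = delta*|U0| + sqrt(1 - delta^2)*U1, delta = alpha/sqrt(1+alpha^2)."""
    delta = alpha / np.sqrt(1.0 + alpha**2)
    u0 = np.abs(rng.standard_normal(size))
    u1 = rng.standard_normal(size)
    return delta * u0 + np.sqrt(1.0 - delta**2) * u1

rng = np.random.default_rng(1)
z = rskewnorm(alpha=4.0, size=200_000, rng=rng)
# theoretical mean of skew-normal(0, 1, alpha): delta * sqrt(2/pi)
theory = (4.0 / np.sqrt(17.0)) * np.sqrt(2.0 / np.pi)
```

A positive α shifts mass to the right (the sample mean is positive even though the location parameter is zero), which is exactly the asymmetry a normal error model cannot capture.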

5.
6.
7.
8.
Lei Xu & Jun Shao, Biometrics (2009), 65(4): 1175-1183
Summary In studies with longitudinal or panel data, missing responses often depend on the values of responses through a subject-level unobserved random effect. Besides the likelihood approach based on parametric models, there exists a semiparametric method, the approximate conditional model (ACM) approach, which relies on the availability of a summary statistic and a linear or polynomial approximation to some random effects. However, two important issues must be addressed in applying ACM: first, how to find a summary statistic, and second, how to estimate the parameters in the original model using estimates of parameters in ACM. This study addresses both issues. For the first, we derive summary statistics under various situations. For the second, we propose a grouping method instead of a linear or polynomial approximation to random effects. Because the grouping method is a moment-based approach, the conditions assumed in deriving the summary statistics are weaker than existing ones in the literature. When the derived summary statistic is continuous, we propose a classification tree method to obtain an approximate summary statistic for grouping. Simulation results are presented to study the finite-sample performance of the proposed method. An application is illustrated using data from the study of Modification of Diet in Renal Disease.
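A hedged sketch of the grouping step for a continuous summary statistic (here plain quantile grouping of a subject-level mean; the paper instead derives the summary statistic case by case and uses a classification tree to form groups):

```python
import numpy as np

def group_by_summary(summaries, n_groups=4):
    """Assign each subject to a group by the quantile of its summary statistic."""
    # interior quantile edges, e.g. quartiles for n_groups = 4
    edges = np.quantile(summaries, np.linspace(0, 1, n_groups + 1)[1:-1])
    return np.searchsorted(edges, summaries, side="right")

rng = np.random.default_rng(2)
subject_means = rng.normal(size=100)      # toy subject-level summary statistic
groups = group_by_summary(subject_means, n_groups=4)
```

Moment conditions are then formed within each group, which is why the approach needs weaker conditions than a global linear or polynomial approximation to the random effects.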

9.
10.
The potency of antiretroviral agents in AIDS clinical trials can be assessed on the basis of an early viral response such as viral decay rate or change in viral load (number of copies of HIV RNA) of the plasma. Linear, parametric nonlinear, and semiparametric nonlinear mixed-effects models have been proposed to estimate viral decay rates in viral dynamic models. However, before applying these models to clinical data, a critical question that remains to be addressed is whether these models produce coherent estimates of viral decay rates, and if not, which model is appropriate and should be used in practice. In this paper, we applied these models to data from an AIDS clinical trial of potent antiviral treatments and found significant incongruity in the estimated rates of reduction in viral load. Simulation studies indicated that reliable estimates of viral decay rate were obtained by using the parametric and semiparametric nonlinear mixed-effects models. Our analysis also indicated that the decay rates estimated by using linear mixed-effects models should be interpreted differently from those estimated by using nonlinear mixed-effects models. The semiparametric nonlinear mixed-effects model is preferred to other models because arbitrary data truncation is not needed. Based on real data analysis and simulation studies, we provide guidelines for estimating viral decay rates from clinical data. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
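The first-phase decay-rate idea can be illustrated with a toy biexponential viral-load curve (all parameter values are invented; the paper's linear and nonlinear mixed-effects fits across subjects are much richer):

```python
import numpy as np

# Biexponential viral dynamic model V(t) = P1*exp(-d1*t) + P2*exp(-d2*t).
# Over the first week the fast phase dominates, so the first-phase decay
# rate d1 is approximately the slope of a log-linear least-squares fit.
t = np.linspace(0, 7, 8)                     # days 0..7
P1, d1, P2, d2 = 1e5, 0.5, 1e2, 0.03         # illustrative values
V = P1 * np.exp(-d1 * t) + P2 * np.exp(-d2 * t)

slope, intercept = np.polyfit(t, np.log(V), 1)
d1_hat = -slope
```

This log-linear shortcut is essentially the linear-mixed-effects view of the decay rate; the abstract's point is that such estimates should be interpreted differently from those of nonlinear mixed-effects fits of the full curve.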

11.
Summary We propose a Bayesian chi-squared model diagnostic for analysis of data subject to censoring. The test statistic has the form of Pearson's chi-squared test statistic and is easy to calculate from standard output of Markov chain Monte Carlo algorithms. The key innovation of this diagnostic is that it is based only on observed failure times. Because it does not rely on the imputation of failure times for observations that have been censored, we show that under heavy censoring it can have higher power for detecting model departures than a comparable test based on the complete data. In a simulation study, we show that tests based on this diagnostic exhibit comparable power and better nominal Type I error rates than a commonly used alternative test proposed by Akritas (1988, Journal of the American Statistical Association 83, 222-230). An important advantage of the proposed diagnostic is that it can be applied to a broad class of censored data models, including generalized linear models and other models with nonidentically distributed and nonadditive error structures. We illustrate the proposed model diagnostic for testing the adequacy of two parametric survival models for Space Shuttle main engine failures.
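A stripped-down, non-Bayesian sketch of the Pearson-type statistic on observed failure times (the actual diagnostic averages over posterior draws from MCMC output; the exponential model, bin count, and absence of censoring below are assumptions for illustration):

```python
import numpy as np

def pearson_stat(times, rate, n_bins=5):
    """Pearson X^2 for failure times against an Exp(rate) model,
    using equal-model-probability bins."""
    probs = np.linspace(0, 1, n_bins + 1)
    # quantiles of Exp(rate) at the interior probabilities
    edges = -np.log(1.0 - probs[1:-1]) / rate
    observed = np.bincount(np.searchsorted(edges, times), minlength=n_bins)
    expected = len(times) / n_bins
    return float(((observed - expected) ** 2 / expected).sum())

rng = np.random.default_rng(3)
# data with mean 2 (rate 0.5): correct model gives a small statistic,
# a badly misspecified rate gives a large one
good = pearson_stat(rng.exponential(scale=2.0, size=1000), rate=0.5)
bad = pearson_stat(rng.exponential(scale=2.0, size=1000), rate=2.0)
```

In the Bayesian version, the cell probabilities come from the posterior draws of the model parameters, and only uncensored times contribute, which is what gives the diagnostic its power under heavy censoring.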

12.
Summary Randomized experiments are the gold standard for evaluating proposed treatments. The intent to treat estimand measures the effect of treatment assignment, but not the effect of treatment if subjects take treatments to which they are not assigned. The desire to estimate the efficacy of the treatment in this case has been the impetus for a substantial literature on compliance over the last 15 years. In papers dealing with this issue, it is typically assumed there are different types of subjects, for example, those who will follow treatment assignment (compliers), and those who will always take a particular treatment irrespective of treatment assignment. The estimands of primary interest are the complier proportion and the complier average treatment effect (CACE). To estimate CACE, researchers have used various methods, for example, instrumental variables and parametric mixture models, treating compliers as a single class. However, it is often unreasonable to believe all compliers will be affected. This article therefore treats compliers as a mixture of two types, those belonging to a zero-effect class, others to an effect class. Second, in most experiments, some subjects drop out or simply do not report the value of the outcome variable, and the failure to take into account missing data can lead to biased estimates of treatment effects. Recent work on compliance in randomized experiments has addressed this issue by assuming missing data are missing at random or latently ignorable. We extend this work to the case where compliers are a mixture of types and also examine alternative types of nonignorable missing data assumptions.
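The single-class CACE estimand that this article generalizes reduces, under the usual instrumental-variable assumptions, to a ratio of intent-to-treat effects. A toy sketch with invented numbers:

```python
# CACE = (ITT effect on the outcome) / (ITT effect on treatment uptake).
# This is the textbook IV estimator, not the article's mixture model,
# and it ignores the missing-data issues the article addresses.
def cace(y_treat_arm, y_control_arm, uptake_treat_arm, uptake_control_arm):
    itt_outcome = y_treat_arm - y_control_arm          # effect of assignment
    itt_uptake = uptake_treat_arm - uptake_control_arm # compliance rate
    return itt_outcome / itt_uptake

# illustrative arm means: outcome up by 2 units, 80% uptake when assigned
est = cace(y_treat_arm=12.0, y_control_arm=10.0,
           uptake_treat_arm=0.8, uptake_control_arm=0.0)
```

The article's point is that this single number can mask a mixture of zero-effect and effect compliers, which the ratio alone cannot separate.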

13.
Analysis of longitudinal data with excessive zeros has gained increasing attention in recent years; however, current approaches have primarily focused on balanced data. Dropouts are common in longitudinal studies, so the analysis of the resulting unbalanced data is complicated by the missing-data mechanism. Our study is motivated by the analysis of longitudinal skin cancer count data presented by Greenberg, Baron, Stukel, Stevens, Mandel, Spencer, Elias, Lowe, Nierenberg, Bayrd, Vance, Freeman, Clendenning, Kwan, and the Skin Cancer Prevention Study Group [New England Journal of Medicine 323, 789-795]. The data consist of a large number of zero responses (83% of the observations) as well as a substantial amount of dropout (about 52% of the observations). To account for both the excessive zeros and the dropout patterns, we propose a pattern-mixture zero-inflated model with compound Poisson random effects for the unbalanced longitudinal skin cancer data. We also incorporate a first-order autoregressive (AR(1)) correlation structure in the model to capture the longitudinal correlation of the count responses. A quasi-likelihood approach is developed for estimating the model. We illustrate the method with an analysis of the longitudinal skin cancer data.
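The zero-inflated building block can be sketched in isolation (the paper's model adds compound Poisson random effects, AR(1) correlation, and pattern-mixture dropout handling on top of this; the parameter values here are illustrative):

```python
import numpy as np

# Zero-inflated Poisson: Y = 0 with probability pi (structural zeros),
# otherwise Y ~ Poisson(lam). Hence
#   E[Y] = (1 - pi) * lam
#   P(Y = 0) = pi + (1 - pi) * exp(-lam)   (structural + sampling zeros)
rng = np.random.default_rng(4)
pi_zero, lam, n = 0.7, 1.5, 200_000
is_structural_zero = rng.random(n) < pi_zero
y = np.where(is_structural_zero, 0, rng.poisson(lam, n))

mean_theory = (1 - pi_zero) * lam
zero_frac_theory = pi_zero + (1 - pi_zero) * np.exp(-lam)
```

With these values, roughly three quarters of the observations are zero even though the Poisson mean among the at-risk subjects is 1.5, mirroring the 83% zeros in the skin cancer data.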

14.
Summary Absence of a perfect reference test is an acknowledged source of bias in diagnostic studies. In the case of tuberculous pleuritis, standard reference tests such as smear microscopy, culture and biopsy have poor sensitivity. Yet meta-analyses of new tests for this disease have always assumed the reference standard is perfect, leading to biased estimates of the new test's accuracy. We describe a method for joint meta-analysis of sensitivity and specificity of the diagnostic test under evaluation, while considering the imperfect nature of the reference standard. We use a Bayesian hierarchical model that takes into account within- and between-study variability. We show how to obtain pooled estimates of sensitivity and specificity, and how to plot a hierarchical summary receiver operating characteristic curve. We describe extensions of the model to situations where multiple reference tests are used, and where index and reference tests are conditionally dependent. The performance of the model is evaluated using simulations and illustrated using data from a meta-analysis of nucleic acid amplification tests (NAATs) for tuberculous pleuritis. The estimate of NAAT specificity was higher and the sensitivity lower compared to a model that assumed that the reference test was perfect.
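A small algebraic sketch of why an imperfect reference biases accuracy estimates, assuming conditional independence of index and reference tests given disease (the values are illustrative; the paper's Bayesian hierarchical model additionally handles within- and between-study variability):

```python
# "Apparent" sensitivity is measured against the reference result, not
# the true disease state, so it mixes true positives with reference errors.
def apparent_sensitivity(se_index, sp_index, se_ref, sp_ref, prev):
    """P(index + | reference +) in a population with prevalence `prev`."""
    # P(index +, ref +) and P(ref +), mixing diseased and non-diseased strata
    both_pos = (prev * se_index * se_ref
                + (1 - prev) * (1 - sp_index) * (1 - sp_ref))
    ref_pos = prev * se_ref + (1 - prev) * (1 - sp_ref)
    return both_pos / ref_pos

true_se = 0.9
apparent = apparent_sensitivity(se_index=true_se, sp_index=0.95,
                                se_ref=0.6, sp_ref=0.98, prev=0.3)
```

With a low-sensitivity reference (as for culture in tuberculous pleuritis), the apparent sensitivity of a good index test falls below its true value, which is the direction of bias the abstract reports for NAATs.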

15.
Summary Neuroimaging data collected at repeated occasions are gaining increasing attention in the neuroimaging community, due to their potential for answering questions regarding brain development, aging, and neurodegeneration. These datasets are large and complicated, characterized by the intricate spatial dependence structure of each response image, multiple response images per subject, and covariates that may vary with time. We propose a multiscale adaptive generalized method of moments (MA-GMM) approach to estimate marginal regression models for imaging datasets that contain time-varying, spatially related responses and some time-varying covariates. Our method categorizes covariates into types to determine the valid moment conditions to combine during estimation. Further, instead of assuming independence of voxels (the components that make up each subject's response image at each time point), as many current neuroimaging analysis techniques do, this method "adaptively smoothes" neuroimaging response data, computing parameter estimates by iteratively building spheres around each voxel and combining the observations within each sphere with weights. MA-GMM's development adds to the few available modeling approaches intended for longitudinal imaging data analysis. Simulation studies and an analysis of a real longitudinal imaging dataset from the Alzheimer's Disease Neuroimaging Initiative are used to assess the performance of MA-GMM. (Martha Skup, Hongtu Zhu, and Heping Zhang, for the Alzheimer's Disease Neuroimaging Initiative.)
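A one-dimensional toy version of the sphere-and-weights smoothing idea (MA-GMM's actual adaptive, iterative scheme on 3-D images is far more elaborate; the triangular kernel and radius below are assumptions):

```python
import numpy as np

def sphere_smooth(values, h=2):
    """Average each voxel with its neighbors within radius h,
    giving closer voxels larger weights."""
    n = len(values)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - h), min(n, i + h + 1)
        idx = np.arange(lo, hi)
        w = 1.0 - np.abs(idx - i) / (h + 1)   # closer voxels get more weight
        out[i] = np.sum(w * values[idx]) / np.sum(w)
    return out

rng = np.random.default_rng(5)
signal = np.sin(np.linspace(0, np.pi, 200))     # smooth spatial signal
noisy = signal + rng.normal(0, 0.5, 200)        # noisy "image"
smoothed = sphere_smooth(noisy, h=4)
```

Borrowing strength from neighboring voxels reduces noise in the estimates; MA-GMM does this adaptively, growing the spheres and reweighting as the estimates stabilize.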

16.
17.
18.
The application of stabilized multivariate tests is demonstrated in the analysis of a two-stage adaptive clinical trial with three treatment arms. Due to the clinical problem, the multiple comparisons include tests of superiority as well as a test for non-inferiority, where non-inferiority is (because of missing absolute tolerance limits) expressed as a linear contrast of the three treatments. Special emphasis is placed on the combination of the three sources of multiplicity: multiple endpoints, multiple treatments, and the two stages of the adaptive design. In particular, the adaptation after the first stage comprises a change of the a priori order of hypotheses.
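One standard ingredient of such two-stage adaptive designs is a combination rule for stage-wise p-values. A sketch of the weighted inverse-normal method with illustrative inputs (not this trial's actual analysis, which layers the multiplicity adjustments on top):

```python
from math import sqrt
from statistics import NormalDist

# Weighted inverse-normal combination: transform each stage-wise p-value
# to a z-score, combine with prespecified weights (w1^2 + w2^2 = 1),
# and transform back to a combined p-value.
def inverse_normal_combination(p1, p2, w1=sqrt(0.5), w2=sqrt(0.5)):
    nd = NormalDist()
    z = w1 * nd.inv_cdf(1 - p1) + w2 * nd.inv_cdf(1 - p2)
    return 1 - nd.cdf(z)

p_combined = inverse_normal_combination(0.04, 0.03)
```

Because the weights are fixed in advance, the combined test keeps its level even when the second stage (here, the a priori order of hypotheses) is adapted after seeing first-stage data.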

19.
Organic solar cells (OSCs) containing non-fullerene acceptors have realized power conversion efficiencies (PCEs) of up to 14%. However, most of these high-performance non-fullerene OSCs have been reported with an optimal active-layer thickness of about 100 nm, mainly due to the low electron mobility (≈10⁻⁴-10⁻⁵ cm² V⁻¹ s⁻¹) of non-fullerene acceptors; such thin active layers are not suitable for roll-to-roll large-scale processing. In this work, an efficient non-fullerene OSC based on poly[(5,6-difluoro-2,1,3-benzothiadiazol-4,7-diyl)-alt-(3,3′″-di(2-octyldodecyl)-2,2′;5′,2″;5″,2′″-quaterthiophen-5,5′′′-diyl)] (PffBT4T-2OD):EH-IDTBR (consisting of an electron-rich indaceno[1,2-b:5,6-b′]dithiophene central unit and an electron-deficient 5,6-benzo[c][1,2,5]thiadiazole unit flanked with rhodanine as the peripheral group) with thickness-independent PCE (maintaining a PCE of 9.1% at an active-layer thickness of 300 nm) is presented, achieved by optimizing the device architecture to overcome space-charge effects. Optical modeling reveals that most of the incident light is absorbed near the transparent-electrode side in thick-film devices. The transport distance of the lower-mobility electrons is therefore shortened by using an inverted device architecture, in which most of the excitons are generated close to the cathode side, substantially reducing the accumulation of electrons in the device. As a result, an efficient thick-film non-fullerene OSC is realized. These results provide important guidelines for the development of more efficient thick-film non-fullerene OSCs.
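The optical intuition can be caricatured with a single-pass Beer-Lambert absorption profile (the paper uses full transfer-matrix optical modeling with interference; the absorption coefficient below is an assumed round number):

```python
import numpy as np

# Generation rate falls off exponentially from the transparent electrode:
# G(x) ∝ exp(-alpha * x). For a 300 nm film with alpha ~ 1e5 1/cm, most
# excitons form in the half of the film nearest that electrode.
alpha = 1e5                       # absorption coefficient, 1/cm (illustrative)
thickness = 300e-7                # 300 nm active layer, in cm
x = np.linspace(0.0, thickness, 1001)
generation = np.exp(-alpha * x)   # relative generation profile

front = generation[x <= thickness / 2].sum()
front_fraction = front / generation.sum()
```

If the cathode sits on the transparent-electrode side (inverted architecture), the slow electrons generated in that front region only need to travel a short distance, which is the mechanism the abstract invokes for reduced charge accumulation.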

20.

Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号