Similar articles
Found 20 similar articles (search time: 31 ms)
1.
The combination of population pharmacokinetic studies   (Cited by: 4; self-citations: 0; citations by others: 4)
Wakefield J, Rahman N. Biometrics 2000, 56(1): 263-270
Pharmacokinetic data consist of drug concentrations with associated known sampling times and are collected following the administration of known dosage regimens. Population pharmacokinetic data consist of such data on a number of individuals, possibly along with individual-specific characteristics. During drug development, a number of population pharmacokinetic studies are typically carried out and the combination of such studies is of great importance for characterizing the drug and, in particular, for the design of future studies. In this paper, we describe a model that may be used to combine population pharmacokinetic data. The model is illustrated using six phase I studies of the antiasthmatic drug fluticasone propionate. Our approach is Bayesian and computation is carried out using Markov chain Monte Carlo. We provide a number of simplifications to the model that may be made in order to ease simulation from the posterior distribution.
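As a minimal sketch of the combination idea (not the authors' full fluticasone model), the code below pools hypothetical study-level log-clearance estimates from six studies with a normal-normal hierarchical model fitted by Gibbs sampling. All numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical study-level log-clearance estimates and standard errors
# from six phase I studies (illustrative numbers only).
y = np.array([2.10, 2.25, 1.95, 2.40, 2.05, 2.20])
se = np.array([0.10, 0.15, 0.08, 0.20, 0.12, 0.10])

# Normal-normal hierarchical model: y_i ~ N(theta_i, se_i^2),
# theta_i ~ N(mu, tau^2).  Gibbs sampler over theta, mu, tau^2.
n = len(y)
theta = y.copy()
mu, tau2 = y.mean(), 0.05
mu_draws, tau_draws = [], []
for it in range(5000):
    # theta_i | rest: conjugate normal update combining data and prior
    prec = 1 / se**2 + 1 / tau2
    mean = (y / se**2 + mu / tau2) / prec
    theta = rng.normal(mean, np.sqrt(1 / prec))
    # mu | rest under a flat prior
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / n))
    # tau^2 | rest under an inverse-gamma(1, 0.1) prior
    a = 1 + n / 2
    b = 0.1 + 0.5 * np.sum((theta - mu) ** 2)
    tau2 = b / rng.gamma(a)
    if it >= 1000:
        mu_draws.append(mu)
        tau_draws.append(np.sqrt(tau2))

print("posterior mean of mu:", round(float(np.mean(mu_draws)), 3))
print("posterior mean of tau:", round(float(np.mean(tau_draws)), 3))
```

The between-study standard deviation tau plays the role the paper's between-study variability terms play in the full model; the real model would also carry within-study population PK structure.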

2.
This article provides a fully Bayesian approach for modeling of single-dose and complete pharmacokinetic data in a population pharmacokinetic (PK) model. To overcome the impact of outliers and the difficulty of computation, a generalized linear model is chosen with the hypothesis that the errors follow a multivariate Student t distribution, which is a heavy-tailed distribution. The aim of this study is to investigate and implement the performance of the multivariate t distribution to analyze population pharmacokinetic data. Bayesian predictive inferences and Metropolis-Hastings algorithm schemes are used to handle the intractable posterior integration. The precision and accuracy of the proposed model are illustrated using simulated data and a real example involving theophylline data.
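A small sketch of why a Student t error model resists outliers, using a univariate location parameter rather than the paper's multivariate PK setting (all data here are simulated): a random-walk Metropolis sampler is run once with a t(4) likelihood and once with a normal likelihood on data containing one gross outlier.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data with one gross outlier.
data = np.concatenate([rng.normal(5.0, 1.0, 40), [25.0]])

def log_lik_t(mu, x, nu=4.0):
    # Student-t log-likelihood (up to a constant); heavy tails downweight outliers
    return -0.5 * (nu + 1) * np.sum(np.log1p((x - mu) ** 2 / nu))

def log_lik_norm(mu, x):
    # Normal log-likelihood with unit variance (up to a constant)
    return -0.5 * np.sum((x - mu) ** 2)

def metropolis(log_lik, x, n_iter=20000, step=0.3):
    mu, draws = x.mean(), []
    ll = log_lik(mu, x)
    for _ in range(n_iter):
        prop = mu + step * rng.normal()
        ll_prop = log_lik(prop, x)
        if np.log(rng.uniform()) < ll_prop - ll:
            mu, ll = prop, ll_prop
        draws.append(mu)
    return float(np.mean(draws[5000:]))

est_t = metropolis(log_lik_t, data)
est_norm = metropolis(log_lik_norm, data)
print("t-likelihood estimate:", round(est_t, 2))       # stays near 5
print("normal-likelihood estimate:", round(est_norm, 2))  # pulled toward the outlier
```

The t posterior stays near the bulk of the data while the normal posterior is dragged toward the outlier, which is the behavior the article exploits for population PK data.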

3.
Stoklosa J, Hwang WH, Wu SH, Huggins R. Biometrics 2011, 67(4): 1659-1665
In practice, when analyzing data from a capture-recapture experiment it is tempting to apply modern advanced statistical methods to the observed capture histories. However, unless the analysis takes into account that the data have only been collected from individuals who have been captured at least once, the results may be biased. Without the development of new software packages, methods such as generalized additive models, generalized linear mixed models, and simulation-extrapolation cannot be readily implemented. In contrast, the partial likelihood approach allows the analysis of a capture-recapture experiment to be conducted using commonly available software. Here we examine the efficiency of this approach and apply it to several data sets.
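The conditioning issue the abstract warns about can be seen in a toy example (simulated data, constant capture probability; this is the standard conditional-likelihood idea, not the paper's partial likelihood itself): ignoring that only ever-captured animals are observed biases the naive estimate of the capture probability upward, while conditioning on at least one capture removes the bias.

```python
import numpy as np

rng = np.random.default_rng(2)

T, p_true, N = 5, 0.25, 2000          # occasions, capture prob, population size
caps = rng.binomial(T, p_true, N)
observed = caps[caps > 0]             # animals never caught are never observed

# Naive estimator ignores the conditioning on >= 1 capture and is biased up.
p_naive = observed.mean() / T

# Conditional likelihood: P(k | k >= 1) = Bin(k; T, p) / (1 - (1-p)^T).
def cond_loglik(p):
    ll = np.sum(observed * np.log(p) + (T - observed) * np.log(1 - p))
    return ll - len(observed) * np.log(1 - (1 - p) ** T)

grid = np.linspace(0.01, 0.99, 981)
p_cond = grid[np.argmax([cond_loglik(p) for p in grid])]
print("naive:", round(float(p_naive), 3), " conditional:", round(float(p_cond), 3))
```

The grid-search maximizer is used only to keep the sketch dependency-free; any optimizer would do.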

4.
The choice of an appropriate family of linear models for the analysis of longitudinal data is often a matter of concern for practitioners. To attenuate such difficulties, we discuss some issues that emerge when analyzing this type of data via a practical example involving pretest–posttest longitudinal data. In particular, we consider log‐normal linear mixed models (LNLMM), generalized linear mixed models (GLMM), and models based on generalized estimating equations (GEE). We show how some special features of the data, like a nonconstant coefficient of variation, may be handled in the three approaches and evaluate their performance with respect to the magnitude of standard errors of interpretable and comparable parameters. We also show how different diagnostic tools may be employed to identify outliers and comment on available software. We conclude by noting that the results are similar, but that GEE‐based models may be preferable when the goal is to compare the marginal expected responses.
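One of the data features mentioned, a constant coefficient of variation (so a nonconstant variance on the raw scale), is exactly the situation a log-normal model handles naturally. A quick simulated check (not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Log-normal responses: the coefficient of variation is the same at every
# mean level, so the raw-scale SD grows with the mean but the log-scale SD
# is constant.
means = np.array([10.0, 50.0, 250.0])
sigma = 0.2
samples = [rng.lognormal(np.log(m), sigma, 100_000) for m in means]

sd_raw = [float(s.std()) for s in samples]
sd_log = [float(np.log(s).std()) for s in samples]
print("raw-scale SDs:", np.round(sd_raw, 2))   # grow roughly in proportion to the mean
print("log-scale SDs:", np.round(sd_log, 3))   # all approximately equal to sigma
```

This is why an LNLMM (or a GLMM/GEE with an appropriate variance function) can fit such data where a constant-variance Gaussian model cannot.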

5.
Variable Selection for Semiparametric Mixed Models in Longitudinal Studies   (Cited by: 2; self-citations: 0; citations by others: 2)
Summary. We propose a double-penalized likelihood approach for simultaneous model selection and estimation in semiparametric mixed models for longitudinal data. Two types of penalties are jointly imposed on the ordinary log-likelihood: a roughness penalty on the nonparametric baseline function and a nonconcave shrinkage penalty on the linear coefficients to achieve model sparsity. Compared to existing estimating-equation-based approaches, our procedure provides valid inference for data that are missing at random, and will be more efficient if the specified model is correct. Another advantage of the new procedure is its easy computation for both regression components and variance parameters. We show that the double-penalized problem can be conveniently reformulated into a linear mixed model framework, so that existing software can be directly used to implement our method. For the purpose of model inference, we derive both frequentist and Bayesian variance estimation for the estimated parametric and nonparametric components. Simulation is used to evaluate and compare the performance of our method to existing ones. We then apply the new method to a real data set from a lactation study.

6.
He Z, Sun D. Biometrics 2000, 56(2): 360-367
A Bayesian hierarchical generalized linear model is used to estimate hunting success rates at the subarea level for postseason harvest surveys. The model includes fixed week effects, random geographic effects, and spatial correlations between neighboring subareas. The computation is done by Gibbs sampling and adaptive rejection sampling techniques. The method is illustrated using data from the Missouri Turkey Hunting Survey in the spring of 1996. Bayesian model selection methods are used to demonstrate that there are significant week differences and spatial correlations of hunting success rates among counties. The Bayesian estimates are also shown to be quite robust in terms of changes of hyperparameters.

7.
Summary. A common and important problem in clustered sampling designs is that the effect of within-cluster exposures (i.e., exposures that vary within clusters) on outcome may be confounded by both measured and unmeasured cluster-level factors (i.e., measurements that do not vary within clusters). When some of these are unmeasured or not accounted for, estimation of this effect through population-averaged models or random-effects models may introduce bias. We accommodate this by developing a general theory for the analysis of clustered data, which enables consistent and asymptotically normal estimation of the effects of within-cluster exposures in the presence of cluster-level confounders. Semiparametric efficient estimators are obtained by solving so-called conditional generalized estimating equations. We compare this approach with a popular proposal by Neuhaus and Kalbfleisch (1998, Biometrics 54, 638–645), who separate the exposure effect into a within- and a between-cluster component within a random intercept model. We find that the latter approach yields consistent and efficient estimators when the model is linear, but is less flexible in terms of model specification. Under nonlinear models, this approach may yield inconsistent and inefficient estimators, though with little bias in most practical settings.
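The within/between decomposition referenced above can be illustrated in the linear case with a short simulation (invented data; this shows the Neuhaus-Kalbfleisch centering idea, not the conditional GEE machinery): an unmeasured cluster-level confounder biases the pooled estimate, while using only within-cluster variation recovers the true exposure effect.

```python
import numpy as np

rng = np.random.default_rng(4)

n_clusters, m = 200, 5
u = rng.normal(0, 1, n_clusters)                         # unmeasured cluster confounder
x = rng.normal(0, 1, (n_clusters, m)) + 2 * u[:, None]   # exposure correlated with u
beta = 1.0                                               # true within-cluster effect
y = beta * x + 3 * u[:, None] + rng.normal(0, 1, (n_clusters, m))

def slope(xv, yv):
    # Simple least-squares slope of y on x
    xv = xv - xv.mean()
    return float(np.sum(xv * (yv - yv.mean())) / np.sum(xv ** 2))

naive = slope(x.ravel(), y.ravel())              # pooled fit, confounded by u

xw = x - x.mean(axis=1, keepdims=True)           # within-cluster centering
yw = y - y.mean(axis=1, keepdims=True)
within = float(np.sum(xw * yw) / np.sum(xw ** 2))  # uses within-cluster variation only
print("naive:", round(naive, 2), " within-cluster:", round(within, 2))
```

The naive slope absorbs the cluster-level path through u; centering removes everything that is constant within a cluster, including u.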

8.
We are studying variable selection in multiple regression models in which molecular markers and/or gene-expression measurements, as well as intensity measurements from protein spectra, serve as predictors for the outcome variable (i.e., trait or disease state). Finding genetic biomarkers and screening genetic–epidemiological factors can be formulated as a statistical problem of variable selection, in which, from a large set of candidates, a small number of trait-associated predictors are identified. We illustrate our approach by analyzing the data available for chronic fatigue syndrome (CFS). CFS is a complex disease in several respects: for example, it is difficult to diagnose and difficult to quantify. To identify biomarkers we used microarray data and SELDI-TOF-based proteomics data. We also analyzed genetic marker information for a large number of SNPs for an overlapping set of individuals. The objectives of the analyses were to identify markers specific to fatigue that are also possibly exclusive to CFS. The use of such models can be motivated, for example, by the search for new biomarkers for the diagnosis and prognosis of cancer and measures of response to therapy. Generally, for this we use Bayesian hierarchical modeling and Markov chain Monte Carlo computation.

9.
A Bayesian model for sparse functional data   (Cited by: 1; self-citations: 0; citations by others: 1)
Thompson WK, Rosen O. Biometrics 2008, 64(1): 54-63
Summary. We propose a method for analyzing data that consist of curves on multiple individuals, i.e., longitudinal or functional data. We use a Bayesian model where curves are expressed as linear combinations of B-splines with random coefficients. The curves are estimated as posterior means obtained via Markov chain Monte Carlo (MCMC) methods, which automatically select the local level of smoothing. The method is applicable to situations where curves are sampled sparsely and/or at irregular time points. We construct posterior credible intervals for the mean curve and for the individual curves. This methodology provides a unified, efficient, and flexible means for smoothing functional data.

10.
Scientists may wish to analyze correlated outcome data with constraints among the responses. For example, piecewise linear regression in a longitudinal data analysis can require use of a general linear mixed model combined with linear parameter constraints. Although such methods are well developed for standard univariate models, there are no general results that allow a data analyst to specify a mixed model equation in conjunction with a set of constraints on the parameters. We resolve the difficulty by precisely describing conditions that allow specifying linear parameter constraints that ensure the validity of estimates and tests in a general linear mixed model. The recommended approach requires only straightforward and noniterative calculations to implement. We illustrate the convenience and advantages of the methods with a comparison of cognitive developmental patterns in a study of individuals from infancy to early adulthood for children from low-income families.
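The piecewise-linear example with a continuity constraint can be sketched in the fixed-effects case (simulated data, ordinary least squares rather than a mixed model): the constraint Aβ = 0 is imposed noniteratively by reparameterizing β = Cγ, so the constrained fit is just least squares in the reduced design XC.

```python
import numpy as np

rng = np.random.default_rng(6)

# Piecewise linear data with a slope change at the knot k; the truth is continuous.
k = 5.0
x = np.linspace(0, 10, 60)
truth = np.where(x < k, 1.0 + 0.5 * x, 1.0 + 0.5 * k - 1.0 * (x - k))
y = truth + rng.normal(0, 0.2, x.size)

# Unconstrained parameters: beta = (a1, b1, a2, b2), one line per segment.
left = (x < k).astype(float)
X = np.column_stack([left, left * x, 1 - left, (1 - left) * x])

# Continuity constraint a1 + b1*k = a2 + b2*k, imposed via beta = C @ gamma
# with gamma = (a1, b1, b2) and a2 = a1 + (b1 - b2)*k.
C = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0,   k,  -k],
              [0.0, 0.0, 1.0]])
gamma, *_ = np.linalg.lstsq(X @ C, y, rcond=None)
a1, b1, b2 = gamma
a2 = a1 + (b1 - b2) * k
gap = abs((a1 + b1 * k) - (a2 + b2 * k))     # zero by construction
print("slopes:", round(float(b1), 2), round(float(b2), 2))
print("gap at the knot:", gap)
```

The paper's contribution is showing when this kind of constraint specification remains valid for mixed models with random effects, which the sketch above does not include.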

11.
Wang L, Dunson DB. Biometrics 2011, 67(3): 1111-1118
Current status data are a type of interval-censored event time data in which all the individuals are either left or right censored. For example, our motivation is drawn from a cross-sectional study, which measured whether or not fibroid onset had occurred by the age of an ultrasound exam for each woman. We propose a semiparametric Bayesian proportional odds model in which the baseline event time distribution is estimated nonparametrically by using adaptive monotone splines in a logistic regression model and the potential risk factors are included in the parametric part of the mean structure. The proposed approach has the advantage of being straightforward to implement using a simple and efficient Gibbs sampler, whereas alternative semiparametric Bayes' event time models encounter problems for current status data. The model is generalized to allow systematic underreporting in a subset of the data, and the methods are applied to an epidemiologic study of uterine fibroids.

12.
Marginalized models (Heagerty, 1999, Biometrics 55, 688-698) permit likelihood-based inference when interest lies in marginal regression models for longitudinal binary response data. Two such models are the marginalized transition and marginalized latent variable models. The former captures within-subject serial dependence among repeated measurements with transition model terms while the latter assumes exchangeable or nondiminishing response dependence using random intercepts. In this article, we extend the class of marginalized models by proposing a single unifying model that describes both serial and long-range dependence. This model will be particularly useful in longitudinal analyses with a moderate to large number of repeated measurements per subject, where both serial and exchangeable forms of response correlation can be identified. We describe maximum likelihood and Bayesian approaches toward parameter estimation and inference, and we study the large sample operating characteristics under two types of dependence model misspecification. Data from the Madras Longitudinal Schizophrenia Study (Thara et al., 1994, Acta Psychiatrica Scandinavica 90, 329-336) are analyzed.

13.
A General Monte Carlo Method for Mapping Multiple Quantitative Trait Loci   (Cited by: 2; self-citations: 0; citations by others: 2)
R. C. Jansen. Genetics 1996, 142(1): 305-311
In this paper we address the mapping of multiple quantitative trait loci (QTLs) in line crosses for which the genetic data are highly incomplete. Such complicated situations occur, for instance, when dominant markers are used or when unequally informative markers are used in experiments with outbred populations. We describe a general and flexible Monte Carlo expectation-maximization (Monte Carlo EM) algorithm for fitting multiple-QTL models to such data. Implementation of this algorithm is straightforward in standard statistical software, but computation may take much time. The method may be generalized to cope with more complex models for animal and human pedigrees. A practical example is presented, where a three-QTL model is adopted in an outbreeding situation with dominant markers. The example is concerned with the linkage between randomly amplified polymorphic DNA (RAPD) markers and QTLs for partial resistance to Fusarium oxysporum in lily.

14.
A class of discrete-time models of infectious disease spread, referred to as individual-level models (ILMs), are typically fitted in a Bayesian Markov chain Monte Carlo (MCMC) framework. These models quantify probabilistic outcomes regarding the risk of infection of susceptible individuals due to various susceptibility and transmissibility factors, including their spatial distance from infectious individuals. The infectious pressure from infected individuals exerted on susceptible individuals is intrinsic to these ILMs. Unfortunately, quantifying this infectious pressure for data sets containing many individuals can be computationally burdensome, leading to a time-consuming likelihood calculation and, thus, computationally prohibitive MCMC-based analysis. This problem worsens when using data augmentation to allow for uncertainty in infection times. In this paper, we develop sampling methods that can be used to calculate a fast, approximate likelihood when fitting such disease models. A simple random sampling approach is initially considered followed by various spatially-stratified schemes. We test and compare the performance of our methods with both simulated data and data from the 2001 foot-and-mouth disease (FMD) epidemic in the U.K. Our results indicate that substantial computation savings can be obtained, albeit with some information loss, suggesting that such techniques may be of use in the analysis of very large epidemic data sets.
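The simple-random-sampling idea can be shown on a toy infectious-pressure calculation (all locations and the kernel are invented; the paper's stratified schemes and full likelihood are not reproduced): the kernel sum over all infectious individuals is approximated by a scaled sum over a random subsample.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical snapshot: the infectious pressure on a susceptible at location s
# is a sum of a distance kernel over all infectious individuals.
infectious = rng.uniform(0, 10, (20_000, 2))   # many infectious locations
s = np.array([5.0, 5.0])

def kernel(d, alpha=2.0):
    # Hypothetical power-law distance kernel
    return (d + 1.0) ** (-alpha)

d_all = np.linalg.norm(infectious - s, axis=1)
full_pressure = float(kernel(d_all).sum())

# Simple-random-sampling approximation: sample m infectives, scale up.
m = 1000
idx = rng.choice(infectious.shape[0], m, replace=False)
approx = float(kernel(d_all[idx]).sum()) * infectious.shape[0] / m

rel_err = abs(approx - full_pressure) / full_pressure
print("relative error of sampled pressure:", round(rel_err, 4))
```

With m = 1000 of 20,000 individuals the sum is computed at 5% of the cost for a small relative error; spatial stratification further reduces the variance of the approximation.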

15.
Summary. Biometrical genetic modeling of twin or other family data can be used to decompose the variance of an observed response or 'phenotype' into genetic and environmental components. Convenient parameterizations requiring few random effects are proposed, which allow such models to be estimated using widely available software for linear mixed models (continuous phenotypes) or generalized linear mixed models (categorical phenotypes). We illustrate the proposed approach by modeling family data on the continuous phenotype birth weight and twin data on the dichotomous phenotype depression. The example data sets and commands for Stata and R/S-PLUS are available at the Biometrics website.
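The variance decomposition itself can be previewed with the classical Falconer moment estimates from MZ and DZ twin correlations (simulated data; this is a textbook shortcut, not the mixed-model parameterization the paper proposes): a2 = 2(rMZ - rDZ) and c2 = 2 rDZ - rMZ.

```python
import numpy as np

rng = np.random.default_rng(8)

# Simulate twin phenotypes with additive genetic (A), shared-environment (C),
# and unique-environment (E) variance components, a2 + c2 + e2 = 1.
a2, c2, e2 = 0.5, 0.2, 0.3
n = 50_000

def twin_correlation(r_genetic):
    g1 = rng.normal(0, 1, n)
    g2 = r_genetic * g1 + np.sqrt(1 - r_genetic ** 2) * rng.normal(0, 1, n)
    c = rng.normal(0, 1, n)                       # shared by both twins
    y1 = np.sqrt(a2) * g1 + np.sqrt(c2) * c + np.sqrt(e2) * rng.normal(0, 1, n)
    y2 = np.sqrt(a2) * g2 + np.sqrt(c2) * c + np.sqrt(e2) * rng.normal(0, 1, n)
    return float(np.corrcoef(y1, y2)[0, 1])

r_mz = twin_correlation(1.0)   # MZ twins share all additive genetic effects
r_dz = twin_correlation(0.5)   # DZ twins share half on average

a2_hat = 2 * (r_mz - r_dz)     # Falconer-style moment estimates
c2_hat = 2 * r_dz - r_mz
print("estimated a2:", round(a2_hat, 2), " c2:", round(c2_hat, 2))
```

The mixed-model formulation in the paper recovers the same components with proper standard errors and extends to categorical phenotypes.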

16.
In this article we describe a recursive Bayesian estimator for the identification of diffusing fluorophores using photon arrival-time data from a single spectral channel. We present derivations for all relevant diffusion and fluorescence models, and we use simulated diffusion trajectories and photon streams to evaluate the estimator's performance. We consider simplified estimation schemes that bin the photon counts within time intervals of fixed duration, and show that they can perform well in realistic parameter regimes. The latter results indicate the feasibility of performing identification experiments in real time. It will be straightforward to generalize our approach for use in more complicated scenarios, e.g., with multiple spectral channels or fast photophysical dynamics.
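The binned-count variant amounts to a recursive Bayesian update over candidate species. A minimal sketch with two hypothetical fluorophore brightnesses and Poisson counts per bin (the paper's diffusion models and arrival-time likelihoods are far richer):

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(9)

# Two candidate species with different mean photon counts per time bin
# (hypothetical rates).  We update P(species | counts) one bin at a time.
rates = {"A": 3.0, "B": 6.0}
counts = rng.poisson(rates["B"], 50)    # data generated by species B

def pois_loglik(k, lam):
    # Poisson log-pmf
    return k * np.log(lam) - lam - lgamma(k + 1)

log_post = {s: np.log(0.5) for s in rates}   # uniform prior over species
for k in counts:                             # recursive Bayesian update
    for s, lam in rates.items():
        log_post[s] += pois_loglik(int(k), lam)

# Normalize on the log scale for numerical stability.
m = max(log_post.values())
z = sum(np.exp(v - m) for v in log_post.values())
post = {s: float(np.exp(v - m) / z) for s, v in log_post.items()}
print("P(B | data) =", round(post["B"], 4))
```

Because the posterior is updated bin by bin, identification can be declared as soon as the posterior crosses a threshold, which is what makes real-time operation plausible.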

17.
Summary. Natural tags based on DNA fingerprints or natural features of animals are now very widely used in wildlife population biology. However, classic capture–recapture models do not allow for misidentification of animals, which is a potentially serious problem with natural tags. Statistical analysis of misidentification processes is extremely difficult using traditional likelihood methods but is easily handled using Bayesian methods. We present a general framework for Bayesian analysis of categorical data arising from a latent multinomial distribution. Although our work is motivated by a specific model for misidentification in closed-population capture–recapture analyses, with crucial assumptions that may not always be appropriate, the methods we develop extend naturally to a variety of other models with similar structure. Suppose that observed frequencies f are a known linear transformation f = A′x of a latent multinomial variable x with cell probability vector π = π(θ). Given that the full conditional distributions [θ | x] can be sampled, implementation of Gibbs sampling requires only that we can sample from the full conditional distribution [x | f, θ], which is made possible by knowledge of the null space of A′. We illustrate the approach using two data sets with individual misidentification, one simulated, the other summarizing recapture data for salamanders based on natural marks.

18.
French B, Heagerty PJ. Biometrics 2009, 65(2): 415-422
Summary. Longitudinal studies typically collect information on the timing of key clinical events and on specific characteristics that describe those events. Random variables that measure qualitative or quantitative aspects associated with the occurrence of an event are known as marks. Recurrent marked point process data consist of possibly recurrent events, with the mark (and possibly exposure) measured if and only if an event occurs. Analysis choices depend on which aspect of the data is of primary scientific interest. First, factors that influence the occurrence or timing of the event may be characterized using recurrent event analysis methods. Second, if there is more than one event per subject, then the association between exposure and the mark may be quantified using repeated measures regression methods. We detail assumptions required of any time-dependent exposure process and the event time process to ensure that linear or generalized linear mixed models and generalized estimating equations provide valid estimates. We provide theoretical and empirical evidence that if these conditions are not satisfied, then an independence estimating equation should be used for consistent estimation of association. We conclude with the recommendation that analysts carefully explore both the exposure and event time processes prior to implementing a repeated measures analysis of recurrent marked point process data.

19.
Several approaches have been developed to calculate the relative contributions of parental populations in single admixture event scenarios, including Bayesian methods. In many breeds and populations, it may be more realistic to consider multiple admixture events. However, no approach has been developed to date to estimate admixture in such cases. This report describes a program application, 2BAD (for 2-event Bayesian ADmixture), which allows the consideration of up to two independent admixture events involving two or three parental populations and a single admixed population, depending on the number of populations sampled. For each of these models, it is possible to estimate several parameters (admixture proportions, effective sizes, etc.) using an approximate Bayesian computation approach. In addition, the program allows comparing pairs of admixture models, determining which is more likely given the data. The application was tested through simulations and was found to provide good estimates for the contribution of the populations at the two admixture events. We were also able to determine whether an admixture model was more likely than a simple split model.
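The approximate Bayesian computation (ABC) engine behind such programs can be sketched for the simplest single-event case (all frequencies and sample sizes are invented, and this is plain ABC rejection, not 2BAD itself): draw the admixture proportion from its prior, simulate allele frequencies, and keep draws whose simulated data are close to the observed data.

```python
import numpy as np

rng = np.random.default_rng(10)

# Toy single-admixture scenario: two parental populations with known allele
# frequencies at L loci; the admixed population has frequency a*p1 + (1-a)*p2.
L, n_sample = 100, 200
p1 = rng.uniform(0.1, 0.9, L)
p2 = rng.uniform(0.1, 0.9, L)
a_true = 0.7
obs = rng.binomial(n_sample, a_true * p1 + (1 - a_true) * p2) / n_sample

accepted = []
for _ in range(20_000):
    a = rng.uniform(0, 1)                        # prior on the admixture proportion
    sim = rng.binomial(n_sample, a * p1 + (1 - a) * p2) / n_sample
    # ABC rejection: accept a when simulated and observed frequencies are close.
    if np.sqrt(np.mean((sim - obs) ** 2)) < 0.06:
        accepted.append(a)

post_mean = float(np.mean(accepted))
print("accepted draws:", len(accepted))
print("posterior mean of a:", round(post_mean, 2))
```

2BAD extends this scheme to two admixture events, three parental populations, and model comparison between competing admixture histories.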

20.
Zhang T, Lin G. Biometrics 2009, 65(2): 353-360
Summary. Spatial clustering is commonly modeled by a Bayesian method under the framework of generalized linear mixed effect models (GLMMs). Spatial clusters are commonly detected by a frequentist method through hypothesis testing. In this article, we provide a frequentist method for assessing spatial properties of GLMMs. We propose a strategy that detects spatial clusters through parameter estimates of spatial associations, and assesses spatial aspects of model improvement through iterated residuals. Simulations and a case study show that the proposed method is able to consistently and efficiently detect the locations and magnitudes of spatial clusters.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)