Similar Articles (20 results)
1.
Summary. Many hormones are secreted in pulses. The pulsatile relationship between hormones regulates many biological processes. To understand endocrine system regulation, time series of hormone concentrations are collected. The goal is to characterize pulsatile patterns and associations between hormones. Currently each hormone on each subject is fitted univariately. This leads to estimates of the number of pulses and of the amount of hormone secreted; however, when the signal-to-noise ratio is small, pulse detection and parameter estimation remain difficult with existing approaches. In this article, we present a bivariate deconvolution model of pulsatile hormone data focusing on incorporating pulsatile associations. Through simulation, we show that using the underlying pulsatile association between two hormones improves the estimation of the number of pulses and of the other parameters defining each hormone. We develop the one-to-one, driver–response case and show how birth–death Markov chain Monte Carlo can be used for estimation. We demonstrate these features through a simulation study and apply the method to luteinizing and follicle-stimulating hormones.
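A minimal sketch of the kind of single-hormone deconvolution forward model this entry builds on, assuming exponential elimination and log-normal measurement error (the half-life, pulse count, and all other numbers are hypothetical; the paper's bivariate model additionally links the pulse times of the driver and response hormones):

```python
import numpy as np

rng = np.random.default_rng(0)

def hormone_series(t, pulse_times, pulse_masses, baseline=1.0, half_life=45.0):
    """Concentration = baseline plus each pulse eliminated exponentially."""
    decay = np.log(2.0) / half_life              # elimination rate from half-life
    conc = np.full(t.shape, baseline)
    for tau, a in zip(pulse_times, pulse_masses):
        after = t >= tau
        conc[after] += a * np.exp(-decay * (t[after] - tau))
    return conc

t = np.arange(0.0, 24 * 60, 10.0)                # 24 h sampled every 10 min
tau = np.sort(rng.uniform(0, 24 * 60, 8))        # 8 pulse onset times
mass = rng.lognormal(mean=1.0, sigma=0.5, size=8)
y = hormone_series(t, tau, mass) * rng.lognormal(0.0, 0.1, t.size)  # noisy series
```

Birth–death MCMC would treat the number of pulses itself as unknown, proposing to add ("birth") or delete ("death") a pulse at each step.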

2.
Summary. Time course microarray data consist of mRNA expression from a common set of genes collected at different time points. Such data are thought to reflect underlying biological processes developing over time. In this article, we propose a model that allows us to examine differential expression and gene network relationships using time course microarray data. We model each gene-expression profile as a random functional transformation of the scale, amplitude, and phase of a common curve. Inferences about the gene-specific amplitude parameters allow us to examine differential gene expression. Inferences about measures of functional similarity based on estimated time-transformation functions allow us to examine gene networks while accounting for features of the gene-expression profiles. We discuss applications to simulated data as well as to microarray data on prostate cancer progression.
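A toy version of the shape-invariant idea: each profile is a scale, amplitude, and phase transformation of one common curve (the curve, grid, and parameters below are invented, and the time warp is a simple linear one; the paper estimates the time transformations flexibly):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 25)                    # common time grid

def f(s):
    return np.sin(2 * np.pi * s)                 # common shape curve

def profile(t, scale_c, amp_a, shift_d, stretch_b):
    """Gene profile = scale + amplitude * f(transformed time)."""
    u = np.clip(stretch_b * (t - shift_d), 0.0, 1.0)   # linear time transformation
    return scale_c + amp_a * f(u)

gene1 = profile(t, 0.5, 2.0, 0.00, 1.0)
gene2 = profile(t, 0.2, 1.5, 0.10, 1.1)          # same shape, shifted and stretched
# Functional dissimilarity between the two (here known) time transformations:
u1 = np.clip(1.0 * (t - 0.00), 0.0, 1.0)
u2 = np.clip(1.1 * (t - 0.10), 0.0, 1.0)
dissimilarity = np.sqrt(np.mean((u1 - u2) ** 2))
```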

3.
Disease incidence or mortality data are typically available as rates or counts for specified regions, collected over time. We propose Bayesian nonparametric spatial modeling approaches to analyze such data. We develop a hierarchical specification using spatial random effects modeled with a Dirichlet process prior. The Dirichlet process is centered around a multivariate normal distribution. This latter distribution arises from a log-Gaussian process model that provides a latent incidence rate surface, followed by block averaging to the areal units determined by the regions in the study. With regard to the resulting posterior predictive inference, the modeling approach is shown to be equivalent to an approach based on block averaging of a spatial Dirichlet process to obtain a prior probability model for the finite dimensional distribution of the spatial random effects. We introduce a dynamic formulation for the spatial random effects to extend the model to spatio-temporal settings. Posterior inference is implemented through Gibbs sampling. We illustrate the methodology with simulated data as well as with a data set on lung cancer incidences for all 88 counties in the state of Ohio over an observation period of 21 years.
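A sketch of the block-averaging step, assuming a latent Gaussian field on a fine grid with an exponential covariance and a toy 3x3 partition into regions (all sizes and covariance choices are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 30
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x)
pts = np.column_stack([X.ravel(), Y.ravel()])

# One draw of a latent Gaussian field with exponential covariance,
# exponentiated to give a positive incidence-rate surface.
d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
L = np.linalg.cholesky(np.exp(-d / 0.2) + 1e-8 * np.eye(n * n))
rate = np.exp(0.5 * (L @ rng.normal(size=n * n))).reshape(n, n)

# Block averaging: each areal unit's rate = mean of the latent surface
# over the grid cells falling inside it (here a regular 3x3 partition).
areal_rates = rate.reshape(3, n // 3, 3, n // 3).mean(axis=(1, 3))
```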

4.
Xie B, Pan W, Shen X. Biometrics, 2008, 64(3): 921-930.
Summary. Penalized model-based clustering has been proposed for high-dimensional but small sample-sized data, such as data arising from genomic studies; in particular, it can be used for variable selection. A new regularization scheme is proposed to group together multiple parameters of the same variable across clusters, which is shown both analytically and numerically to be more effective than the conventional L1 penalty for variable selection. In addition, we develop a strategy to combine this grouping scheme with grouping structured variables. Simulation studies and applications to microarray gene expression data for cancer subtype discovery demonstrate the advantage of the new proposal over several existing approaches.
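The contrast between the two penalties, written in a simplified group-lasso form (grouping each variable's parameters over clusters is the scheme's key feature; the paper's exact penalty may be scaled differently):

```python
import numpy as np

def l1_penalty(means, lam):
    """Conventional L1: shrinks each cluster-mean parameter separately."""
    return lam * np.abs(means).sum()

def grouped_penalty(means, lam):
    """Group penalty: for each variable j, the L2 norm of its mean
    parameters across all clusters, so a variable is dropped from
    every cluster at once rather than cluster by cluster."""
    return lam * np.sqrt((means ** 2).sum(axis=0)).sum()

means = np.array([[0.0, 1.2, -0.4],      # cluster 1 means for 3 variables
                  [0.0, 0.8,  0.6]])     # cluster 2
print(l1_penalty(means, 0.5), grouped_penalty(means, 0.5))
```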

5.
Albert PS. Biometrics, 2000, 56(2): 602-608.
Binary longitudinal data are often collected in clinical trials when interest is in assessing the effect of a treatment over time. Our application is a recent study of opiate addiction that examined the effect of a new treatment on repeated urine tests to assess opiate use over an extended follow-up. Drug addiction is episodic, and a new treatment may affect various features of the opiate-use process such as the proportion of positive urine tests over follow-up and the time to the first occurrence of a positive test. Complications in this trial were the large amounts of dropout and intermittent missing data and the large number of observations on each subject. We develop a transitional model for longitudinal binary data subject to nonignorable missing data and propose an EM algorithm for parameter estimation. We use the transitional model to derive summary measures of the opiate-use process that can be compared across treatment groups to assess treatment effect. Through analyses and simulations, we show the importance of properly accounting for the missing data mechanism when assessing the treatment effect in our example.
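A complete-data sketch of a first-order transitional model for a binary series, with a logistic transition probability (the data and starting values are toy; the paper's EM algorithm extends this likelihood to handle nonignorable dropout and intermittent missingness):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def neg_loglik(theta, y):
    """logit P(Y_t = 1 | Y_{t-1}) = alpha + gamma * y_{t-1}."""
    alpha, gamma = theta
    p = expit(alpha + gamma * y[:-1])            # transition probabilities
    ll = y[1:] * np.log(p) + (1 - y[1:]) * np.log1p(-p)
    return -ll.sum()

rng = np.random.default_rng(2)
y = (rng.uniform(size=50) < 0.3).astype(float)   # toy binary urine-test series
fit = minimize(neg_loglik, x0=[0.0, 0.0], args=(y,))
print(fit.x)                                     # alpha_hat, gamma_hat
```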

6.
A stepped-wedge cluster randomized trial (CRT) is a unidirectional crossover study in which timings of treatment initiation for clusters are randomized. Because the timing of treatment initiation is different for each cluster, an emerging question is whether the treatment effect depends on the exposure time, namely, the time duration since the initiation of treatment. Existing approaches for assessing exposure-time treatment effect heterogeneity either assume a parametric functional form of exposure time or model the exposure time as a categorical variable, in which case the number of parameters increases with the number of exposure-time periods, leading to a potential loss in efficiency. In this article, we propose a new model formulation for assessing treatment effect heterogeneity over exposure time. Rather than a categorical term for each level of exposure time, the proposed model includes a random effect to represent varying treatment effects by exposure time. This allows for pooling information across exposure-time periods and may result in more precise average and exposure-time-specific treatment effect estimates. In addition, we develop an accompanying permutation test for the variance component of the heterogeneous treatment effect parameters. We conduct simulation studies to compare the proposed model and permutation test to alternative methods to elucidate their finite-sample operating characteristics, and to generate practical guidance on model choices for assessing exposure-time treatment effect heterogeneity in stepped-wedge CRTs.
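A sketch of the permutation idea: re-randomize the cluster start times and recompute a heterogeneity statistic. Here the statistic is simply the variance of exposure-time-specific effect estimates; the paper's test targets the variance component of the fitted random effects, so treat this as an illustration of the mechanics only:

```python
import numpy as np

rng = np.random.default_rng(3)

def het_stat(y, starts):
    """Variance of exposure-time-specific effects (treated minus control mean)."""
    n_clus, n_per = y.shape
    periods = np.arange(1, n_per + 1)
    expo = periods[None, :] - starts[:, None] + 1   # exposure time; <= 0 is control
    ctrl = y[expo <= 0].mean()
    effs = [y[expo == e].mean() - ctrl for e in range(1, n_per + 1) if (expo == e).any()]
    return np.var(effs)

y = rng.normal(size=(12, 6))                        # 12 clusters x 6 periods, toy outcomes
starts = np.repeat(np.arange(1, 7), 2)              # randomized treatment start periods
obs = het_stat(y, starts)
perm = [het_stat(y, rng.permutation(starts)) for _ in range(999)]
p_value = (1 + sum(s >= obs for s in perm)) / (1 + len(perm))
```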

7.
The aim of dose finding studies is sometimes to estimate parameters in a fitted model. The precision of the parameter estimates should be as high as possible. This can be achieved by increasing the number of subjects in the study, N, by choosing a good and efficient estimation approach, and by designing the dose finding study in an optimal way. Increasing the number of subjects is not always feasible because of increasing cost, time limitations, etc. In this paper, we assume fixed N and consider estimation approaches and study designs for multiresponse dose finding studies. We work with diabetes dose-response data and compare a system estimation approach that fits a multiresponse Emax model to the data with equation-by-equation estimation that fits uniresponse Emax models to the data. We then derive some optimal designs for estimating the parameters in the multi- and uniresponse Emax models and study the efficiency of these designs.
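A uniresponse (equation-by-equation) version of the Emax fit, the baseline against which the entry compares joint system estimation; doses, true parameters, and noise level are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def emax_model(dose, e0, emax, ed50):
    """Three-parameter Emax dose-response curve."""
    return e0 + emax * dose / (ed50 + dose)

rng = np.random.default_rng(4)
dose = np.array([0.0, 5.0, 25.0, 50.0, 100.0, 150.0])    # hypothetical dose grid
resp = emax_model(dose, 1.2, 8.0, 30.0) + rng.normal(0, 0.4, dose.size)

theta, cov = curve_fit(emax_model, dose, resp, p0=[1.0, 5.0, 20.0])
se = np.sqrt(np.diag(cov))      # precision of the parameter estimates
```

Optimal design then asks which dose grid minimizes a criterion on `cov`, for example its determinant (D-optimality), for fixed N.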

8.
Survival data are often modelled by the Cox proportional hazards model, which assumes that covariate effects are constant over time. In recent years, however, several new approaches have been suggested that allow covariate effects to vary with time. Non-proportional hazard functions, with covariate effects changing dynamically, can be fitted using penalised spline (P-spline) smoothing. By utilising the link between P-spline smoothing and generalised linear mixed models, the smoothing parameters controlling the amount of smoothing can be selected. A hybrid routine, combining the mixed model approach with a classical Akaike criterion, is suggested. This approach is evaluated with simulations and applied to data from the West of Scotland Coronary Prevention Study.
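A penalized-least-squares sketch of P-spline smoothing: a B-spline basis plus a second-order difference penalty. The smoothing parameter is fixed by hand here; the entry's point is to select it via the mixed-model/AIC hybrid:

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(5)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(3 * np.pi * x) + rng.normal(0, 0.3, x.size)

k, n_basis = 3, 20
# Open uniform knot vector; evaluating the identity coefficient matrix
# yields the B-spline basis as columns.
knots = np.concatenate([[0.0] * k, np.linspace(0, 1, n_basis - k + 1), [1.0] * k])
B = BSpline(knots, np.eye(n_basis), k)(x)
D = np.diff(np.eye(n_basis), 2, axis=0)          # second-order difference penalty
lam = 1.0                                        # fixed here; selected in the paper
beta = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
fit = B @ beta
```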

9.
10.
Qin LX, Self SG. Biometrics, 2006, 62(2): 526-533.
Identification of differentially expressed genes and clustering of genes are two important and complementary objectives addressed with gene expression data. For the differential expression question, many "per-gene" analytic methods have been proposed. These methods can generally be characterized as using a regression function to independently model the observations for each gene; various adjustments for multiplicity are then used to interpret the statistical significance of these per-gene regression models over the collection of genes analyzed. Motivated by this common structure of per-gene models, we propose a new model-based clustering method, the clustering of regression models method, which groups genes that share a similar relationship to the covariate(s). This method provides a unified approach for a family of clustering procedures and can be applied to data collected with various experimental designs. In addition, when combined with per-gene methods for assessing differential expression that employ the same regression modeling structure, an integrated framework for the analysis of microarray data is obtained. The proposed methodology was applied to two microarray data sets, one from a breast cancer study and the other from a yeast cell cycle study.
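A minimal EM sketch of clustering regression models: genes in the same cluster share regression coefficients. The design matrix, cluster count, and lack of a convergence check are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

def cluster_regressions(X, Y, n_clus, n_iter=50):
    """EM for a mixture of regressions: gene g in cluster c has
    Y_g ~ N(X @ beta_c, sigma_c^2 I)."""
    G, T = Y.shape
    z = rng.dirichlet(np.ones(n_clus), size=G)            # soft memberships
    Xs, ys = np.tile(X, (G, 1)), Y.ravel()
    for _ in range(n_iter):
        pi = z.mean(axis=0) + 1e-12
        betas, sig2 = [], []
        for c in range(n_clus):
            w = np.repeat(z[:, c], T) + 1e-12             # weighted least squares
            b = np.linalg.lstsq(Xs * np.sqrt(w)[:, None], ys * np.sqrt(w), rcond=None)[0]
            betas.append(b)
            sig2.append(max(np.average((ys - Xs @ b) ** 2, weights=w), 1e-6))
        # E-step: per-gene Gaussian log-density under each cluster's model.
        ll = np.stack([np.log(pi[c]) - 0.5 * (T * np.log(2 * np.pi * sig2[c])
                       + ((Y - X @ betas[c]) ** 2).sum(axis=1) / sig2[c])
                       for c in range(n_clus)], axis=1)
        z = np.exp(ll - ll.max(axis=1, keepdims=True))
        z /= z.sum(axis=1, keepdims=True)
    return z, betas

t = np.linspace(0.0, 1.0, 8)
X = np.column_stack([np.ones_like(t), t])                 # linear trend in the covariate
Y = rng.normal(size=(100, 8))                             # toy expression matrix
z, betas = cluster_regressions(X, Y, n_clus=3)
```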

11.
Houseman EA, Marsit C, Karagas M, Ryan LM. Biometrics, 2007, 63(4): 1269-1277.
Increasingly used in health-related applications, latent variable models provide an appealing framework for handling high-dimensional exposure and response data. Item response theory (IRT) models, which have gained widespread popularity, were originally developed for use in the context of educational testing, where extremely large sample sizes permitted the estimation of a moderate-to-large number of parameters. In the context of public health applications, smaller sample sizes preclude large parameter spaces. Therefore, we propose a penalized likelihood approach to reduce mean square error and improve numerical stability. We present a continuous family of models, indexed by a tuning parameter, that range between the Rasch model and the IRT model. The tuning parameter is selected by cross validation or by approximations such as the Akaike information criterion. While our approach can be placed easily in a Bayesian context, we find the frequentist approach more computationally efficient. We demonstrate our methodology on a study of methylation silencing of gene expression in bladder tumors, obtaining similar results with both frequentist and Bayesian approaches. In particular, we find high correlation of methylation silencing among 16 loci in bladder tumors and that methylation is associated with smoking and with patient survival.
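A sketch in the same spirit as the tuning-parameter-indexed family, though not necessarily the paper's exact penalty: a two-parameter IRT likelihood with a ridge term pulling the item discriminations toward 1, so small tau forces a Rasch-type model and large tau recovers the unpenalized IRT model:

```python
import numpy as np
from scipy.special import expit

def penalized_nll(a, b, theta, y, tau):
    """Negative penalized log-likelihood for a 2-parameter IRT model.
    a: item discriminations, b: item difficulties, theta: subject traits,
    y: subjects x items binary responses, tau: tuning parameter."""
    p = expit(a[None, :] * (theta[:, None] - b[None, :]))   # subjects x items
    ll = (y * np.log(p) + (1 - y) * np.log1p(-p)).sum()
    return -ll + ((a - 1.0) ** 2).sum() / max(tau, 1e-12)
```

The tuning parameter tau would then be chosen by cross validation or an AIC-type approximation, as the entry describes.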

12.
Disease mapping and spatial regression with count data
In this paper, we provide critical reviews of methods suggested for the analysis of aggregate count data in the context of disease mapping and spatial regression. We introduce a new method for picking prior distributions, and propose a number of refinements of previously used models. We also consider ecological bias, mutual standardization, and choice of both spatial model and prior specification. We analyze male lip cancer incidence data collected in Scotland over the period 1975-1980, and outline a number of problems with previous analyses of these data. In disease mapping studies, hierarchical models can provide robust estimation of area-level risk parameters, though care is required in the choice of covariate model, and it is important to assess the sensitivity of estimates to the spatial model chosen, and to the prior specifications on the variance parameters. Spatial ecological regression is a far more hazardous enterprise for two reasons. First, there is always the possibility of ecological bias, and this can only be alleviated by the inclusion of individual-level data. For the Scottish data, we show that the previously used mean model has limited interpretation from an individual perspective. Second, when residual spatial dependence is modeled, and if the exposure has spatial structure, then estimates of exposure association parameters will change when compared with those obtained from the independence across space model, and the data alone cannot choose the form and extent of spatial correlation that is appropriate.
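A nonspatial empirical-Bayes simplification of the hierarchical smoothing discussed here: gamma-Poisson shrinkage of raw standardized ratios toward the overall level (the paper's models add spatial random effects and full prior specifications; the data below are simulated):

```python
import numpy as np

rng = np.random.default_rng(7)
E = rng.uniform(5, 50, size=56)                   # expected cases per area
y = rng.poisson(E * rng.gamma(10, 0.1, size=56))  # observed cases
smr = y / E                                       # raw standardized ratios

# Match gamma prior moments to the raw SMRs; the posterior mean then
# shrinks unstable small-area ratios toward the overall level.
m, v = smr.mean(), smr.var()
alpha, beta = m ** 2 / v, m / v
shrunk = (y + alpha) / (E + beta)                 # smoothed relative risks
```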

13.
Forecasting population decline to a certain critical threshold (the quasi-extinction risk) is one of the central objectives of population viability analysis (PVA), and such predictions figure prominently in the decisions of major conservation organizations. In this paper, we argue that accurate forecasting of a population's quasi-extinction risk does not necessarily require knowledge of the underlying biological mechanisms. Because of the stochastic and multiplicative nature of population growth, the ensemble behaviour of population trajectories converges to common statistical forms across a wide variety of stochastic population processes. This paper provides a theoretical basis for this argument. We show that the quasi-extinction surfaces of a variety of complex stochastic population processes (including age-structured, density-dependent and spatially structured populations) can be modelled by a simple stochastic approximation: the stochastic exponential growth process overlaid with Gaussian errors. Using simulated and real data, we show that this model can be estimated with 20-30 years of data and can provide relatively unbiased quasi-extinction risk estimates with confidence intervals considerably narrower than (0,1). This was found to be true even for simulated data derived from some of the noisiest population processes (density-dependent feedback, species interactions and strong age-structure cycling). A key advantage of statistical models is that their parameters and the uncertainty of those parameters can be estimated from time series data using standard statistical methods. In contrast, for most species of conservation concern, biologically realistic models must often be specified rather than estimated because of the limited data available for all the various parameters. Biologically realistic models will always have a prominent place in PVA for evaluating specific management options which affect a single segment of a population, a single demographic rate, or different geographic areas. However, for forecasting quasi-extinction risk, statistical models that are based on the convergent statistical properties of population processes offer many advantages over biologically realistic models.
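A sketch of the stochastic-exponential-growth approximation the entry advocates, using the standard first-passage formula for Brownian motion with drift on the log scale (the census series below is invented):

```python
import numpy as np
from scipy.stats import norm

def quasi_extinction_prob(counts, n_quasi, horizon):
    """Estimate mean and variance of log growth from a census series, then
    return the probability of falling to the threshold n_quasi within
    `horizon` time steps under stochastic exponential growth."""
    r = np.diff(np.log(counts))                 # log growth increments
    mu, s2 = r.mean(), r.var(ddof=1)
    d = np.log(counts[-1] / n_quasi)            # log distance to the threshold
    sd = np.sqrt(s2 * horizon)
    return (norm.cdf((-d - mu * horizon) / sd)
            + np.exp(-2 * mu * d / s2) * norm.cdf((-d + mu * horizon) / sd))

counts = np.array([120, 110, 131, 98, 105, 92, 88, 95, 80, 84], float)
print(quasi_extinction_prob(counts, n_quasi=20, horizon=30))
```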

14.
Understanding and characterising biochemical processes inside single cells requires experimental platforms that allow one to perturb and observe the dynamics of such processes as well as computational methods to build and parameterise models from the collected data. Recent progress with experimental platforms and optogenetics has made it possible to expose each cell in an experiment to an individualised input and automatically record cellular responses over days with fine time resolution. However, methods to infer parameters of stochastic kinetic models from single-cell longitudinal data have generally been developed under the assumption that experimental data is sparse and that responses of cells to at most a few different input perturbations can be observed. Here, we investigate and compare different approaches for calculating parameter likelihoods of single-cell longitudinal data based on approximations of the chemical master equation (CME) with a particular focus on coupling the linear noise approximation (LNA) or moment closure methods to a Kalman filter. We show that, as long as cells are measured sufficiently frequently, coupling the LNA to a Kalman filter allows one to accurately approximate likelihoods and to infer model parameters from data even in cases where the LNA provides poor approximations of the CME. Furthermore, the computational cost of filtering-based iterative likelihood evaluation scales advantageously in the number of measurement times and different input perturbations and is thus ideally suited for data obtained from modern experimental platforms. To demonstrate the practical usefulness of these results, we perform an experiment in which single cells, equipped with an optogenetic gene expression system, are exposed to various different light-input sequences and measured at several hundred time points and use parameter inference based on iterative likelihood evaluation to parameterise a stochastic model of the system.
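A generic Kalman-filter likelihood for a linear-Gaussian state-space model, the ingredient this entry couples with the LNA. In that setting A and Q would come from the locally linearized reaction dynamics between measurement times; here they are left abstract:

```python
import numpy as np

def kalman_loglik(y, A, C, Q, R, m0, P0):
    """Log-likelihood of observations y_1..y_T (rows of y) under
    x_t = A x_{t-1} + noise(Q), y_t = C x_t + noise(R)."""
    m, P, ll = m0, P0, 0.0
    for yt in y:
        m, P = A @ m, A @ P @ A.T + Q                    # predict
        S = C @ P @ C.T + R                              # innovation covariance
        K = P @ C.T @ np.linalg.inv(S)                   # Kalman gain
        r = yt - C @ m
        ll += -0.5 * (r @ np.linalg.solve(S, r) + np.linalg.slogdet(S)[1]
                      + len(yt) * np.log(2 * np.pi))
        m, P = m + K @ r, (np.eye(len(m)) - K @ C) @ P   # update
    return ll
```

Maximizing this log-likelihood over the kinetic parameters that generate A and Q gives the filtering-based iterative likelihood evaluation the entry describes.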

15.
Conventional approaches to layout design of block stacked warehouses assume perfect staggering of product inflow, leading to perfect sharing of space among products. Since such an assumption is seldom true, we argue that warehouses designed using the conventional approach may result in choking situations, with large quantities of inventory waiting outside the storage areas. On the other hand, liberal space allocation, such as under a dedicated policy, might lead to under-utilization of space. In this paper, we take a fresh look at the block stacked layout problem, modeling the effect of imperfectly staggered product arrivals using queuing theory. Analytical expressions are derived for arrival time and processing time coefficients of variation using warehouse parameters and design variables. Further, we develop a bi-objective optimization model to minimize both the space cost and the waiting time. Our approach provides design options over a total space cost-waiting time trade-off frontier, as opposed to the singular design points given by conventional approaches. Computational experiments are conducted to derive further insights into the design of block stacked warehouses.
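For intuition, the standard G/G/1 (Kingman) approximation shows how imperfect staggering, i.e. arrival-time variability, inflates waiting. The paper derives its own coefficient-of-variation expressions from warehouse parameters; the formula and numbers below are the textbook approximation, used here only as an illustration:

```python
def kingman_wait(ca2, cs2, rho, ts):
    """G/G/1 Kingman approximation to the expected wait in queue, given
    squared coefficients of variation of inter-arrival (ca2) and service
    (cs2) times, utilization rho, and mean service time ts."""
    return ((ca2 + cs2) / 2.0) * (rho / (1.0 - rho)) * ts

# Perfect staggering would correspond to ca2 -> 0; imperfect staggering
# (ca2 > 0) makes inbound lots queue for storage space.
print(kingman_wait(ca2=1.0, cs2=0.5, rho=0.85, ts=2.0))   # toy numbers
```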

16.
Environmental impacts of wind energy facilities increasingly cause concern, a central issue being bats and birds killed by rotor blades. Two approaches have been employed to assess collision rates: carcass searches and surveys of animals prone to collisions. Carcass searches can provide an estimate of the actual number of animals being killed, but they offer little information on the relation between collision rates and, for example, weather parameters, because the time of death is not precisely known. In contrast, a density index of animals exposed to collision is sufficient to analyse the parameters influencing the collision rate. However, quantification of the collision rate from animal density indices (e.g. acoustic bat activity or bird migration traffic rates) remains difficult. We combine carcass search data with animal density indices in a mixture model to investigate collision rates. In a simulation study we show that the collision rates estimated by our model were at least as precise as conventional estimates based solely on carcass search data. Furthermore, if certain conditions are met, the model can be used to predict the collision rate from density indices alone, without data from carcass searches. This can reduce the time and effort required to estimate collision rates. We applied the model to bat carcass search data obtained at 30 wind turbines in 15 wind facilities in Germany. We used acoustic bat activity and wind speed as predictors for the collision rate. The model estimates correlated well with conventional estimators. Our model can be used to predict the average collision rate. It enables an analysis of the effect of parameters such as rotor diameter or turbine type on the collision rate. The model can also be used in turbine-specific curtailment algorithms that predict the collision rate and reduce this rate with a minimal loss of energy production.
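A simplified likelihood sketch of the combination idea: nightly collisions driven by the density index and wind, thinned by the probability that a killed bat is found. The detection probability is taken as known here and every coefficient is invented; the paper's mixture model estimates the link jointly from carcass and activity data:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(8)
nights = 120
act = rng.lognormal(1.0, 0.8, nights)            # acoustic bat activity index
wind = rng.uniform(2, 10, nights)                # nightly mean wind speed
lam = np.exp(-2.0 + 0.8 * np.log(act) - 0.15 * wind)
found = rng.poisson(0.4 * lam)                   # carcasses, detection prob 0.4

def nll(beta, p_detect=0.4):
    """Poisson likelihood of carcass counts: collision rate from the
    density index and wind, thinned by the search detection probability."""
    mu = p_detect * np.exp(beta[0] + beta[1] * np.log(act) + beta[2] * wind)
    return (mu - found * np.log(mu) + gammaln(found + 1)).sum()

fit = minimize(nll, x0=np.zeros(3))
print(fit.x)    # estimated intercept, activity, and wind coefficients
```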

17.
18.
In this paper, we present a general selection-mutation model of evolution on a one-dimensional continuous fitness space. The formulation of our model includes both the classical diffusion approach to the mutation process and an alternative approach based on an integral operator with a mutation kernel. We show that both approaches produce fundamentally equivalent results. To illustrate the suitability of our model, we focus its analytical study on its application to recent experimental studies of in vitro viral evolution. More specifically, these experiments were designed to test previous theoretical predictions regarding the effects of multiple infection dynamics (i.e., coinfection and superinfection) on the virulence of evolving viral populations. The results of these experiments, however, did not match previous theory. By contrast, the model we present here helps to understand the underlying viral dynamics in these experiments and makes new testable predictions about the role of parameters such as the time between successive infections and the growth rates of resident and invading populations.
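A discretized toy version of kernel-based selection-mutation (replicator-mutator) dynamics; the grid, kernel width, growth-rate function, and mutation rate are all hypothetical:

```python
import numpy as np

# dp/dt = p(x)(r(x) - rbar) + mu * (∫K(x,y)p(y)dy - p(x)) on a 1-D grid.
x = np.linspace(0.0, 1.0, 200)
dx = x[1] - x[0]
r = 1.0 + 2.0 * x                                 # growth rate over fitness space
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.02 ** 2))
K /= K.sum(axis=0, keepdims=True) * dx            # column-normalized mutation kernel

p = np.exp(-((x - 0.2) ** 2) / 0.005)
p /= p.sum() * dx                                 # initial trait distribution
dt, mu = 0.01, 0.2
for _ in range(2000):                             # forward Euler time stepping
    rbar = (r * p).sum() * dx                     # mean fitness
    p = p + dt * (p * (r - rbar) + mu * ((K @ p) * dx - p))
    p = np.clip(p, 0.0, None)
```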

19.
Model-based geostatistical design involves the selection of locations at which to collect data so as to minimize an expected loss function over the set of all possible locations. The loss function is specified to reflect the aim of data collection, which, for geostatistical studies, could be to minimize the prediction uncertainty at unobserved locations. In this paper, we propose a new approach to designing such studies via a loss function derived by considering the entropy of the model predictions and of the model parameters. The approach includes a multivariate extension to generalized linear spatial models, and thus can be used to design experiments with more than one response. Because evaluating our proposed loss function is computationally expensive, we provide an approximation so that our approach can be adopted to design realistically sized geostatistical studies. This is demonstrated through a simulated study and through designing an air quality monitoring program in Queensland, Australia. The results show that our designs remain highly efficient in achieving each experimental objective individually, providing an ideal compromise between the two objectives. Accordingly, we advocate that our approach could be adopted more generally in model-based geostatistical design.
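A single-objective stand-in for the entropy criterion: for a Gaussian process, predictive entropy is monotone in predictive variance, so a greedy design repeatedly adds the currently most uncertain candidate site. The covariance, grid, and design size are illustrative; the paper's loss additionally covers parameter uncertainty and multivariate responses:

```python
import numpy as np

def gp_cov(a, b, ell=0.15):
    """Squared-exponential covariance between two point sets."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell ** 2))

# Candidate monitoring locations on a unit-square grid.
cand = np.column_stack([g.ravel() for g in np.meshgrid(*[np.linspace(0, 1, 15)] * 2)])
chosen = [0]
for _ in range(9):                                # greedily grow a 10-site design
    S = gp_cov(cand[chosen], cand[chosen]) + 1e-6 * np.eye(len(chosen))
    k = gp_cov(cand[chosen], cand)
    var = 1.0 - np.einsum('ij,ij->j', k, np.linalg.solve(S, k))
    chosen.append(int(np.argmax(var)))            # most uncertain candidate
```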

20.
A spatial open-population capture-recapture model is described that extends both the non-spatial open-population model of Schwarz and Arnason and the spatially explicit closed-population model of Borchers and Efford. The superpopulation of animals available for detection at some time during a study is conceived as a two-dimensional Poisson point process. Individual probabilities of birth and death follow the conventional open-population model. Movement between sampling times may be modeled with a dispersal kernel using a recursive Markovian algorithm. Observations arise from distance-dependent sampling at an array of detectors. As in the closed-population spatial model, the observed data likelihood relies on integration over the unknown animal locations; maximization of this likelihood yields estimates of the birth, death, movement, and detection parameters. The models were fitted to data from a live-trapping study of brushtail possums (Trichosurus vulpecula) in New Zealand. Simulations confirmed that spatial modeling can greatly reduce the bias of capture-recapture survival estimates and that there is a degree of robustness to misspecification of the dispersal kernel. An R package is available that includes various extensions.
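The distance-dependent detection ingredient, in its common half-normal form (g0 and sigma below are hypothetical; the full open-population likelihood integrates terms like these over unknown activity centres and adds birth, death, and movement components):

```python
import numpy as np

def detect_prob(animal_xy, detector_xy, g0=0.2, sigma=30.0):
    """Half-normal detection: probability that an animal centred at
    animal_xy is caught at each detector on one occasion."""
    d = np.hypot(*(animal_xy - detector_xy).T)
    return g0 * np.exp(-d ** 2 / (2 * sigma ** 2))

detectors = np.column_stack([g.ravel() for g in np.meshgrid(*[np.arange(0, 200, 50.0)] * 2)])
home_centre = np.array([90.0, 110.0])
p = detect_prob(home_centre, detectors)          # per-detector capture probabilities
```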
