Similar Articles
20 similar articles found.
1.
A new Gompertz-type diffusion process with application to random growth
Stochastic models describing growth kinetics are very important for predicting many biological phenomena. In this paper, a new Gompertz-type diffusion process is introduced, by means of which bounded sigmoidal growth patterns can be modeled with time-continuous variables. The main innovation of the process is that the bound can depend on the initial value, a situation not provided for by the models considered to date. After building the model, a comprehensive study is presented, including its main characteristics and a simulation of sample paths. With the aim of applying this model to real-life situations, and given its forecasting possibilities via the mean function, inference based on discrete sampling is developed. The likelihood equations are not directly solvable, and because of difficulties that arise with the usual numerical methods employed to solve them, an iterative procedure is proposed. The possibilities of the new process are illustrated by an application to real data, specifically growth in rabbits.
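To make the growth dynamics concrete, here is a minimal Euler-Maruyama sketch of a generic Gompertz-type diffusion; the parameterization dX = X(a - b log X) dt + sigma X dW and all parameter values are illustrative assumptions, not the paper's exact process (whose bound depends on the initial value).

```python
import numpy as np

def simulate_gompertz_diffusion(x0, a, b, sigma, T=10.0, n_steps=1000, seed=0):
    """Euler-Maruyama sample path of dX = X(a - b*log X) dt + sigma*X dW."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        x[i + 1] = x[i] + x[i] * (a - b * np.log(x[i])) * dt + sigma * x[i] * dw
        x[i + 1] = max(x[i + 1], 1e-12)  # keep the path strictly positive
    return x

path = simulate_gompertz_diffusion(x0=1.0, a=1.0, b=0.5, sigma=0.1)
print(path[-1])  # for small noise, ends near the deterministic bound exp(a/b)
```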

2.
Because soil carbon flux is highly heterogeneous in its spatial distribution, traditional sampling methods struggle to estimate regional soil carbon flux accurately, so choosing an appropriate sampling strategy is important for such estimation. This paper proposes a region-decomposition deployment strategy with point-by-point incremental sampling (RDPG): starting from an initial set of sampling points, a Delaunay triangulation is constructed with an improved convex-hull interpolation algorithm; the dispersion at the intersection points of the perpendicular bisectors of each triangle's edges is interpolated from neighboring known sampling points; and the point with the largest dispersion is selected as the next sampling point. The method was tested repeatedly on simulated regions with coefficients of variation between 0.42 and 0.59. The results show that, under the same experimental conditions, the RDPG deployment strategy achieves higher accuracy in estimating regional soil carbon flux than random or uniform sampling. By accounting for the spatial heterogeneity of regional soil carbon flux, RDPG improves the precision of the fitted regional flux.
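The sketch below illustrates one plausible reading of the RDPG selection step, assuming scipy is available: triangulate the current sampling points, take each triangle's circumcenter (the intersection of the perpendicular bisectors of its edges) as a candidate, score candidates by the dispersion of the flux values at neighboring vertices, and pick the highest-scoring one. The dispersion measure (here, a plain standard deviation) is a stand-in for the paper's improved convex-hull interpolation.

```python
import numpy as np
from scipy.spatial import Delaunay

def circumcenter(p):
    """Circumcenter of a 2-D triangle given as a (3, 2) array."""
    a, b, c = p
    d = 2.0 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    ux = ((a @ a) * (b[1] - c[1]) + (b @ b) * (c[1] - a[1]) + (c @ c) * (a[1] - b[1])) / d
    uy = ((a @ a) * (c[0] - b[0]) + (b @ b) * (a[0] - c[0]) + (c @ c) * (b[0] - a[0])) / d
    return np.array([ux, uy])

def next_sample_point(points, flux):
    """Pick the candidate (triangle circumcenter) whose neighboring flux
    values are most dispersed -- a simple proxy for the dispersion
    criterion described in the abstract."""
    tri = Delaunay(points)
    best, best_score = None, -np.inf
    for simplex in tri.simplices:
        cand = circumcenter(points[simplex])
        score = np.std(flux[simplex])  # dispersion among neighboring vertices
        if score > best_score:
            best, best_score = cand, score
    return best

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.4]])
obs = np.array([1.0, 2.5, 0.8, 3.1, 1.7])  # invented flux measurements
print(next_sample_point(pts, obs))
```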

3.
We present a Bayesian approach to analyze matched "case-control" data with multiple disease states. The probability of disease development is described by a multinomial logistic regression model. The exposure distribution depends on the disease state and could vary across strata. In such a model, the number of stratum effect parameters grows in direct proportion to the sample size, leading to inconsistent MLEs for the parameters of interest even when one uses a retrospective conditional likelihood. We adopt a semiparametric Bayesian framework instead, assuming a Dirichlet process prior with a mixing normal distribution on the distribution of the stratum effects. We also account for possible missingness in the exposure variable in our model. The actual estimation is carried out through a Markov chain Monte Carlo numerical integration scheme. The proposed methodology is illustrated through simulation and an example of a matched study on low birth weight of newborns (Hosmer and Lemeshow, 2000, Applied Logistic Regression) with two possible disease groups matched with a control group.

4.
Methods for the analysis of unmatched case-control data based on a finite population sampling model are developed. Under this model, and the prospective logistic model for disease probabilities, a likelihood for case-control data that accommodates very general sampling of controls is derived. This likelihood has the form of a weighted conditional logistic likelihood. The flexibility of the methods is illustrated by providing a number of control sampling designs and a general scheme for their analyses. These include frequency matching, counter-matching, case-base, randomized recruitment, and quota sampling. A study of risk factors for childhood asthma illustrates an application of the counter-matching design. Some asymptotic efficiency results are presented and computational methods discussed. Further, it is shown that a 'marginal' likelihood provides a link to unconditional logistic methods. The methods are examined in a simulation study that compares frequency and counter-matching using conditional and unconditional logistic analyses; the results indicate that the conditional logistic likelihood has superior efficiency. Extensions that accommodate sampling of cases and multistage designs are presented. Finally, we compare the analysis methods presented here to other approaches, compare counter-matching and two-stage designs, and suggest areas for further research.
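As a sketch of the central object, a weighted conditional logistic likelihood contributes, for each matched set, log(w_i exp(x_i'beta) / sum_j w_j exp(x_j'beta)), with the sum over the case and its sampled controls. The function below evaluates it; the weights depend on the control-sampling design, and the uniform weights in the example are an assumption.

```python
import numpy as np

def weighted_clogit_loglik(beta, matched_sets):
    """Weighted conditional logistic log-likelihood.

    Each matched set is (X, case_idx, w): X is an (m, p) covariate matrix
    for the case and its sampled controls, case_idx indexes the case row,
    and w holds the design-based sampling weights."""
    ll = 0.0
    for X, case_idx, w in matched_sets:
        eta = X @ beta + np.log(w)          # weights enter on the log scale
        ll += eta[case_idx] - np.logaddexp.reduce(eta)
    return ll

# One matched set: row 0 is the case, rows 1-2 are sampled controls.
X = np.array([[1.0, 0.2], [0.0, 1.1], [1.0, -0.3]])
print(weighted_clogit_loglik(np.array([0.5, -0.2]), [(X, 0, np.ones(3))]))
```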

5.
Evaluation of the likelihood in mixed models for non-normal data, e.g. dependent binary data, involves high-dimensional integration, which poses severe numerical problems. Penalized quasi-likelihood, iterative re-weighted restricted maximum likelihood, and adjusted profile h-likelihood estimation are methods that avoid numerical integration; they are derived here by approximating the maximum likelihood equations. For binary data, these estimation procedures may yield seriously biased estimates of components of variance, intra-class correlation, or heritability. An analytical evaluation of a simple example illustrates how critical the approximations can be for the performance of the variance component estimators.

6.

Background

Estimation of genetic covariance matrices for multivariate problems comprising more than a few traits is inherently problematic, since sampling variation increases dramatically with the number of traits. This paper investigates the efficacy of regularized estimation of covariance components in a maximum likelihood framework, imposing a penalty on the likelihood designed to reduce sampling variation. In particular, penalties that "borrow strength" from the phenotypic covariance matrix are considered.

Methods

An extensive simulation study was carried out to investigate the reduction in average "loss", i.e. the deviation of estimated matrices from the population values, and the accompanying bias for a range of parameter values and sample sizes. A number of penalties are examined, penalizing either the canonical eigenvalues or the genetic covariance or correlation matrices. In addition, several strategies to determine the amount of penalization to be applied, i.e. to estimate the appropriate tuning factor, are explored.

Results

It is shown that substantial reductions in loss for estimates of genetic covariance can be achieved for small to moderate sample sizes. While no penalty performed best overall, penalizing the variance among the estimated canonical eigenvalues on the logarithmic scale or shrinking the genetic correlation matrix towards the phenotypic one appeared most advantageous. Estimating the tuning factor using cross-validation resulted in a loss reduction 10 to 15% less than that obtained if population values were known. Applying a mild penalty, chosen so that the deviation in likelihood from the maximum was non-significant, performed as well as, if not better than, cross-validation and can be recommended as a pragmatic strategy.

Conclusions

Penalized maximum likelihood estimation provides the means to "make the most" of limited and precious data and facilitates more stable estimation for multi-dimensional analyses. It should become part of our everyday toolkit for multivariate estimation in quantitative genetics.
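As a minimal illustration of the shrink-towards-phenotypic idea, the sketch below blends the estimated genetic correlation matrix with the phenotypic one by a fixed tuning factor, keeping the genetic variances fixed. This is a plain linear-shrinkage stand-in, not the penalized-REML machinery the paper evaluates, and the matrices are invented.

```python
import numpy as np

def shrink_genetic_correlation(G_hat, P_hat, lam):
    """Shrink the estimated genetic correlation matrix towards the
    phenotypic correlation matrix; lam in [0, 1] plays the role of
    the tuning factor."""
    def to_corr(M):
        s = np.sqrt(np.diag(M))
        return M / np.outer(s, s), s
    Rg, sg = to_corr(G_hat)
    Rp, _ = to_corr(P_hat)
    Rg_shrunk = (1.0 - lam) * Rg + lam * Rp
    return Rg_shrunk * np.outer(sg, sg)  # back to a covariance matrix

G = np.array([[1.0, 0.9], [0.9, 1.2]])   # noisy genetic covariance estimate
P = np.array([[2.0, 0.6], [0.6, 2.5]])   # stabler phenotypic covariance
print(shrink_genetic_correlation(G, P, lam=0.3))
```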

7.
A recursive method for obtaining the maximum likelihood estimates of the parameters of the quadratic logistic discriminant function is presented. This method extends the Walker and Duncan (1967) procedure proposed for the linear logistic discriminant function in the dichotomous case. A generalization of the method to discrimination between several populations is also given; it works for both linear and quadratic logistic discriminant functions. Once the parameters of the logistic function have been estimated, classification can be performed. An example applying the method to automatic diagnosis of some respiratory diseases is presented, and a short simulation study compares it with the standard estimation procedures.
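A quadratic logistic discriminant is a logistic regression on features augmented with squares and cross-products, so its MLE can be sketched with standard iteratively reweighted least squares. The batch IRLS below is a stand-in for the recursive Walker-Duncan style updates the paper extends; the simulated data are invented.

```python
import numpy as np

def quadratic_features(X):
    """Augment raw features with squares and pairwise products, turning
    the quadratic discriminant into a linear logistic fit."""
    n, p = X.shape
    cols = [np.ones(n), *X.T]
    for i in range(p):
        for j in range(i, p):
            cols.append(X[:, i] * X[:, j])
    return np.column_stack(cols)

def irls_logistic(X, y, n_iter=25):
    """Newton-Raphson / IRLS maximum likelihood for logistic regression."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
        W = mu * (1.0 - mu)
        H = X.T @ (X * W[:, None]) + 1e-8 * np.eye(X.shape[1])  # small ridge for stability
        beta += np.linalg.solve(H, X.T @ (y - mu))
    return beta

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] ** 2 - X[:, 1] + rng.normal(size=200) > 0).astype(float)
print(irls_logistic(quadratic_features(X), y).round(2))
```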

8.
Birth-and-death processes are widely used to model the development of biological populations. Although they are relatively simple models, their parameters can be challenging to estimate, as the likelihood can become numerically unstable when data arise from the most common sampling schemes, such as annual population censuses. A further difficulty arises when the discrete observations are not equally spaced, for example, when census data are unavailable for some years. We present two approaches to estimating the birth, death, and growth rates of a discretely observed linear birth-and-death process: via an embedded Galton-Watson process and by maximizing a saddlepoint approximation to the likelihood. We study asymptotic properties of the estimators, compare them on numerical examples, and apply the methodology to data on monitored populations.
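As a hedged illustration of the embedded-process idea: for a linear birth-and-death process with growth rate r = birth rate - death rate, E[X_{t+d} | X_t] = X_t e^{rd} holds for any gap d, so unequally spaced censuses still identify r. The moment-style estimator below exploits this; it is a simple sketch, not the paper's Galton-Watson MLE or saddlepoint approximation, and the census numbers are invented.

```python
import numpy as np

def growth_rate_moment_estimate(times, counts):
    """Moment-style estimate of r for a linear birth-and-death process
    observed at (possibly unequally spaced) census times, based on
    E[X_{t+d} | X_t] = X_t * exp(r * d)."""
    times, counts = np.asarray(times, float), np.asarray(counts, float)
    dt = np.diff(times)
    log_ratios = np.log(counts[1:] / counts[:-1])
    # least squares through the origin: log ratio ~ r * dt
    return np.sum(dt * log_ratios) / np.sum(dt ** 2)

# Census data with a missing year (1992), so the gaps are unequal.
t = [1990, 1991, 1993, 1994, 1995]
x = [100, 112, 140, 155, 176]
print(growth_rate_moment_estimate(t, x))
```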

9.
A fundamental research goal in clinical studies of progressive, multi-stage disease is to understand its natural history and its relationship with prognostic factors. Our current understanding of this topic is based on two-stage methods for event-time analysis that neglect intermediate transition information. In contrast, a multi-stage model utilizes all available data and provides more accurate insight into disease progression. We specify a forward-flowing multi-stage Markov model based on the discrete clinical stages of disease. By assuming the process to be Markovian, we avoid unnecessary complications to our numerical estimation procedure. Because patient monitoring is not continuous and progressive disease is chronic, the transition data are subject to heavy right- and interval-censoring. We develop a modified ECM algorithm to carry out the otherwise complicated parameter estimation for this process. We also identify significant prognostic factors relevant to each transition, along with the relative importance of each. The numerical estimation is stable, and the parameter estimates are maximum likelihood estimates (Meng, 1990). In general, our forward-flowing multi-stage models provide a flexible framework for studying the effects of prognostic factors on progression among several stages. We apply our Markov model to a dataset of malignant melanoma patients and present an inferential discussion. Results from our multi-stage Markov model provide an improved understanding of melanoma progression.
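To fix ideas: a forward-flowing Markov model on discrete stages has an upper-bidiagonal intensity matrix Q, and an interval-censored observation of stage i at one visit and stage j a time d later contributes log [e^{Qd}]_{ij} to the likelihood. The sketch below evaluates that likelihood directly, ignoring covariates and the ECM machinery; the stage rates and records are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

def progressive_Q(rates):
    """Generator of a forward-flowing (progressive) Markov chain:
    rates[i] is the intensity of moving from stage i to stage i+1."""
    k = len(rates) + 1
    Q = np.zeros((k, k))
    for i, r in enumerate(rates):
        Q[i, i] = -r
        Q[i, i + 1] = r
    return Q

def interval_censored_loglik(rates, records):
    """Log-likelihood of interval-censored stage observations; each
    record is (stage_at_visit, stage_at_next_visit, elapsed_time)."""
    Q = progressive_Q(rates)
    ll = 0.0
    for i, j, dt in records:
        ll += np.log(expm(Q * dt)[i, j])  # transition probability over the gap
    return ll

obs = [(0, 1, 2.0), (0, 0, 1.5), (1, 2, 3.0)]  # invented visit records
print(interval_censored_loglik([0.4, 0.2], obs))
```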

10.
Zhu B, Song PX, Taylor JM. Biometrics 2011, 67(4):1295-1304.
This article presents a new modeling strategy in functional data analysis. We consider the problem of estimating an unknown smooth function given functional data with noise. The unknown function is treated as the realization of a stochastic process, which is incorporated into a diffusion model. The method of smoothing spline estimation is connected to a special case of this approach. The resulting models offer great flexibility to capture the dynamic features of functional data, and allow straightforward and meaningful interpretation. The likelihood of the models is derived with Euler approximation and data augmentation. A unified Bayesian inference method is carried out via a Markov chain Monte Carlo algorithm including a simulation smoother. The proposed models and methods are illustrated on some prostate-specific antigen data, where we also show how the models can be used for forecasting.

11.
A predictive continuous-time model is developed for continuous panel data to assess the effect of time-varying covariates on the general direction of movement of a continuous response that fluctuates over time. This is accomplished by reparameterizing the infinitesimal mean of an Ornstein–Uhlenbeck process in terms of its equilibrium mean and a drift parameter that measures the rate at which the process reverts to its equilibrium mean. The equilibrium mean is modeled as a linear predictor of covariates. This model can be viewed as a continuous-time first-order autoregressive regression model with time-varying lag effects of covariates and the response, which is more appropriate for unequally spaced panel data than its discrete-time analog. Both maximum likelihood and quasi-likelihood approaches are considered for estimating the model parameters, and their performances are compared through simulation studies. The simpler quasi-likelihood approach is recommended because it yields an estimator of high efficiency relative to the maximum likelihood estimator and a variance estimator that is robust to the diffusion assumption of the model. To illustrate the proposed model, an application to diastolic blood pressure data from a follow-up study on cardiovascular diseases is presented. Missing observations are handled naturally within this model.
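For reference, an Ornstein-Uhlenbeck process with reversion rate alpha and equilibrium mean mu has the Gaussian transition X_{t+d} | X_t ~ N(mu + e^{-alpha d}(X_t - mu), sigma^2 (1 - e^{-2 alpha d}) / (2 alpha)), which handles unequal gaps naturally. The sketch below codes the resulting exact negative log-likelihood, treating the equilibrium mean Z @ beta as constant within each interval; variable names and the log-parameterization are assumptions, and this is the maximum likelihood route rather than the paper's quasi-likelihood.

```python
import numpy as np

def ou_neg_loglik(params, times, y, Z):
    """Exact Gaussian transition negative log-likelihood of an
    Ornstein-Uhlenbeck process whose equilibrium mean is Z @ beta.
    params = (log alpha, log sigma, *beta); alpha is the reversion rate."""
    alpha, sigma = np.exp(params[0]), np.exp(params[1])
    beta = params[2:]
    mu = Z @ beta                       # equilibrium mean at each time point
    dt = np.diff(times)                 # gaps may be unequal
    rho = np.exp(-alpha * dt)
    mean = mu[1:] + rho * (y[:-1] - mu[:-1])
    var = sigma ** 2 * (1.0 - rho ** 2) / (2.0 * alpha)
    resid = y[1:] - mean
    return 0.5 * np.sum(np.log(2.0 * np.pi * var) + resid ** 2 / var)

times = np.array([0.0, 1.0, 2.5, 4.0])           # unequally spaced visits
y = np.array([1.2, 0.8, 1.0, 1.1])
Z = np.column_stack([np.ones(4), times])         # intercept + one covariate
print(ou_neg_loglik(np.array([0.0, 0.0, 1.0, 0.0]), times, y, Z))
# fitting sketch: scipy.optimize.minimize(ou_neg_loglik, start, args=(times, y, Z))
```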

12.
Ranked set sampling (RSS) is a sampling procedure that can be considerably more efficient than simple random sampling (SRS). When the variable of interest is binary, ranking of the sample observations can be implemented using the estimated probabilities of success obtained from a logistic regression model developed for the binary variable. The main objective of this study is to use substantial data sets to investigate the application of RSS to estimating a proportion for a population different from the one that provided the logistic regression. Our results indicate that using logistic regression to carry out the RSS ranking improves the precision of the estimated population proportion and, hence, reduces the sample size required to achieve a desired precision. Further, the performance of a balanced RSS procedure is not overly sensitive to the choice and distribution of covariates in the logistic regression model.
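A minimal sketch of balanced RSS for a proportion, assuming the ranking scores come from a logistic model fitted to a different population (mimicked here by perturbed coefficients): in each cycle, draw k sets of k units, rank each set by its scores, and measure only the unit of the designated rank.

```python
import numpy as np

def rss_proportion(y_pool, p_hat_pool, k, n_cycles, rng):
    """Balanced ranked set sampling estimate of a proportion, ranking
    each set of k units by logistic-regression scores p_hat."""
    picks = []
    for _ in range(n_cycles):
        for rank in range(k):
            idx = rng.choice(len(y_pool), size=k, replace=False)
            order = idx[np.argsort(p_hat_pool[idx])]
            picks.append(y_pool[order[rank]])   # measure only the rank-th unit
    return np.mean(picks)

rng = np.random.default_rng(7)
x = rng.normal(size=5000)
p_true = 1 / (1 + np.exp(-(0.8 * x - 0.3)))
y = rng.binomial(1, p_true)
p_hat = 1 / (1 + np.exp(-(0.75 * x - 0.25)))    # scores from another study's model
print(rss_proportion(y, p_hat, k=3, n_cycles=50, rng=rng))
```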

13.
When the underlying responses are discrete, interval estimators of the intraclass correlation derived under the normality assumption are not strictly valid. This paper focuses on interval estimation of the intraclass correlation under the negative binomial distribution, which is commonly applied in studies of epidemiology and consumer purchasing behaviour. Two simple asymptotic interval estimation procedures in closed form are developed for the intraclass correlation. To evaluate their performance, a Monte Carlo simulation is carried out for a variety of situations. An example concerning consumer purchasing behaviour is included to illustrate the use of the two proposed procedures.

14.
Yau KK. Biometrics 2001, 57(1):96-102.
A method for modeling survival data with multilevel clustering is described. The Cox partial likelihood is incorporated into the generalized linear mixed model (GLMM) methodology. Parameter estimation proceeds by maximizing, at the initial step, a log likelihood analogous to that associated with best linear unbiased prediction (BLUP), and is extended to obtain residual maximum likelihood (REML) estimators of the variance components. Estimating equations for a three-level hierarchical survival model are developed in detail, and such a model is applied, as an illustration, to a set of chronic granulomatous disease (CGD) data on recurrent infections, with both hospital and patient effects considered random; only the latter makes a significant contribution. A simulation study is carried out to evaluate the performance of the REML estimators. Further extension of the estimation procedure to models with an arbitrary number of levels is also discussed.

15.
Parameter estimation in a Gompertzian stochastic model for tumor growth
Ferrante L, Bompadre S, Possati L, Leone L. Biometrics 2000, 56(4):1076-1081.
Estimating parameters in the drift coefficient of a continuously observed diffusion process requires some specific assumptions. In this paper, we consider a stochastic version of the Gompertzian model that describes in vivo tumor growth and its sensitivity to treatment with antiangiogenic drugs. An explicit likelihood function is obtained, and we discuss some properties of the maximum likelihood estimator of the intrinsic growth rate of the stochastic Gompertzian model. Furthermore, we show some simulation results on the behavior of the corresponding discrete estimator. Finally, an application illustrates estimation of the model parameters from real data.

16.
Latent class regression on latent factors
In public health, psychology, and social science research, many questions concern the relationship between a categorical outcome variable and continuous predictor variables. The focus of this paper is to develop a model for this relationship when both the categorical outcome and the predictor variables are latent (i.e., not directly observable). The model extends latent class regression to include regression on latent predictors. Maximum likelihood estimation is used, and two numerical methods for performing it are described: the Monte Carlo expectation-maximization algorithm, and Gaussian quadrature followed by a quasi-Newton algorithm. A simulation study examines the behavior of the model under different scenarios. A data example involving adolescent health is used for demonstration, in which latent classes of eating-disorder risk are predicted by the latent factor body satisfaction.

17.
A generalized case-control (GCC) study, like the standard case-control study, leverages outcome-dependent sampling (ODS), extending the design to nonbinary responses. We develop a novel, unifying approach for analyzing GCC study data using the recently developed semiparametric extension of the generalized linear model (GLM), which is substantially more robust to model misspecification than existing approaches based on parametric GLMs. For valid estimation and inference, we use a conditional likelihood to account for the biased sampling design. We describe analysis procedures for estimation and inference for the semiparametric GLM under a conditional likelihood, and we discuss problems that arise when the response distribution is misspecified. We demonstrate the flexibility of our approach over existing ones through extensive simulation studies, and we apply the methodology to an analysis of the Asset and Health Dynamics Among the Oldest Old study, which motivates our research. The proposed approach yields a simple yet versatile solution for handling ODS in a wide variety of response distributions and sampling schemes encountered in practice.

18.
Though stochastic models are widely used to describe single ion channel behaviour, statistical inference based on them has received little consideration. This paper describes techniques of statistical inference, in particular likelihood methods, suitable for Markov models incorporating limited time resolution by means of a discrete detection limit. To simplify the analysis, attention is restricted to two-state models, although the methods have more general applicability. Non-uniqueness of the mean open-time and mean closed-time estimators obtained by moment methods based on single exponential approximations to the apparent open-time and apparent closed-time distributions has been reported. The present study clarifies and extends this previous work by proving that, for such approximations, the likelihood equations as well as the moment equations (usually) have multiple solutions. Such non-uniqueness corresponds to non-identifiability of the statistical model for the apparent quantities. By contrast, higher-order approximations yield theoretically identifiable models. Likelihood-based estimation procedures are developed for both single exponential and bi-exponential approximations. The methods and results are illustrated by numerical examples based on literature and simulated data, with consideration given to empirical distributions and model control, likelihood plots, and point estimation and confidence regions.
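One very simplified facet of a discrete detection limit can be sketched directly: if dwell times are exponential and events shorter than the limit xi are simply missed, the memoryless property gives the MLE of the mean dwell time as mean(t) - xi. This truncation-only sketch deliberately ignores the concatenation of unresolved sojourns that the paper's single- and bi-exponential approximations actually address.

```python
import numpy as np

def truncated_exp_mle(dwell_times, xi):
    """ML estimate of the mean dwell time when events shorter than the
    detection limit xi are missed: under a left-truncated exponential,
    memorylessness gives tau_hat = mean(t - xi)."""
    t = np.asarray(dwell_times, float)
    return np.mean(t - xi)

rng = np.random.default_rng(3)
t_all = rng.exponential(scale=2.0, size=10000)   # true mean open time = 2.0
xi = 0.5
t_obs = t_all[t_all > xi]                        # only resolvable events survive
print(truncated_exp_mle(t_obs, xi))              # close to 2.0
```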

19.
The ascertainment problem arises when families are sampled by a nonrandom process and some assumption about this sampling process must be made in order to estimate genetic parameters. Under classical ascertainment assumptions, estimation of genetic parameters cannot be separated from estimation of the parameters of the ascertainment process, so any misspecification of the ascertainment process biases estimation of the genetic parameters. Ewens and Shute proposed a resolution to this problem: conditioning the likelihood of the sample on the part of the data that is "relevant to ascertainment." The usefulness of this approach can only be assessed by examining the properties (in particular, bias and standard error) of the estimates that arise from using it, for a wide range of parameter values and family size distributions, and then comparing these biases and standard errors with those arising under classical ascertainment procedures. These comparisons are carried out in the present paper, and we also compare the proposed method with procedures that condition on, or ignore, parts of the data.

20.
The standard Cox model is perhaps the most commonly used model for regression analysis of failure time data, but it has some limitations, such as the assumption of linear covariate effects. To relax this, the nonparametric additive Cox model, which allows for nonlinear covariate effects, is often employed, and this paper discusses variable selection and structure estimation for this general model. We propose a penalized sieve maximum likelihood approach using Bernstein polynomial approximation and group penalization. To implement the proposed method, an efficient group coordinate descent algorithm is developed that can easily be carried out for both low- and high-dimensional scenarios. A simulation study assesses the performance of the presented approach and suggests that it works well in practice. The proposed method is applied to an Alzheimer's disease study for identifying important and relevant genetic factors.
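As a small sketch of the sieve ingredient: the Bernstein polynomial basis of a given degree on [0, 1] is easy to construct, and each additive covariate effect would be expanded in such a basis with its coefficients treated as a group for the penalty. The function name and the scipy dependency are assumptions.

```python
import numpy as np
from scipy.special import comb

def bernstein_basis(t, degree):
    """Bernstein polynomial basis on [0, 1], used as the sieve for a
    nonparametric additive covariate effect."""
    t = np.asarray(t, float)
    k = np.arange(degree + 1)
    return comb(degree, k) * t[:, None] ** k * (1.0 - t[:, None]) ** (degree - k)

t = np.linspace(0, 1, 5)
B = bernstein_basis(t, degree=3)
print(B.sum(axis=1))  # partition of unity: each row sums to 1
```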

