Similar Articles (20 results)
1.
This paper introduces a simple stochastic model for waterfowl movement. After outlining the properties of the model, we focus on parameter estimation. We compare three standard least squares estimation procedures with maximum likelihood (ML) estimates using Monte Carlo simulations. For our model, little is gained by incorporating information about the covariance structure of the process into least squares estimation. In fact, misspecifying the covariance produces worse estimates than ignoring heteroscedasticity and autocorrelation. We also develop a modified least squares procedure that performs as well as ML. We then apply the five estimators to field data and show that differences in the statistical properties of the estimators can greatly affect our interpretation of the data. We conclude by highlighting the effects of density on per capita movement rates.
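The estimator comparison described above can be illustrated with a small Monte Carlo sketch. Everything concrete here (the AR(1) error process, the single density-like covariate, and all parameter values) is a hypothetical stand-in, not the authors' waterfowl model:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=200, beta=2.0, rho=0.6):
    """Hypothetical response with an AR(1) error process."""
    x = rng.normal(size=n)                      # covariate (e.g., local density)
    e = np.zeros(n)
    for t in range(1, n):
        e[t] = rho * e[t - 1] + rng.normal()
    return x, beta * x + e

def slope(x, y):
    # Least squares slope through the origin
    return np.sum(x * y) / np.sum(x * x)

def gls_slope(x, y, rho):
    # Prewhiten with an assumed rho; with the true rho this matches ML
    xs, ys = x[1:] - rho * x[:-1], y[1:] - rho * y[:-1]
    return slope(xs, ys)

reps = 500
ols = np.empty(reps); gls = np.empty(reps); mis = np.empty(reps)
for i in range(reps):
    x, y = simulate()
    ols[i] = slope(x, y)                        # ignores autocorrelation
    gls[i] = gls_slope(x, y, rho=0.6)           # correct covariance structure
    mis[i] = gls_slope(x, y, rho=-0.6)          # misspecified covariance

# All three are roughly unbiased, but the misspecified-covariance estimator
# is noisier than plain OLS, echoing the paper's warning.
print(ols.var(), gls.var(), mis.var())
```

With the true ρ, prewhitening recovers the efficiency of ML; with a badly wrong ρ, the "covariance-aware" estimator does worse than simply ignoring the autocorrelation.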

2.
We present a new modification of nonlinear regression models for repeated measures data with heteroscedastic error structures by combining the transform-both-sides and weighting model from Carroll and Ruppert (1988) with the nonlinear random effects model from Lindstrom and Bates (1990). The proposed parameter estimators are a combination of pseudo maximum likelihood estimators for the transform-both-sides and weighting model and maximum likelihood (ML) or restricted maximum likelihood (REML) estimators for linear mixed effects models. The new method is investigated by analyzing simulated enzyme kinetic data published by Jones (1993).

3.
Shrinkage Estimators for Covariance Matrices
Estimation of covariance matrices in small samples has been studied by many authors. Standard estimators, like the unstructured maximum likelihood estimator (ML) or restricted maximum likelihood (REML) estimator, can be very unstable with the smallest estimated eigenvalues being too small and the largest too big. A standard approach to more stably estimating the matrix in small samples is to compute the ML or REML estimator under some simple structure that involves estimation of fewer parameters, such as compound symmetry or independence. However, these estimators will not be consistent unless the hypothesized structure is correct. If interest focuses on estimation of regression coefficients with correlated (or longitudinal) data, a sandwich estimator of the covariance matrix may be used to provide standard errors for the estimated coefficients that are robust in the sense that they remain consistent under misspecification of the covariance structure. With large matrices, however, the inefficiency of the sandwich estimator becomes worrisome. We consider here two general shrinkage approaches to estimating the covariance matrix and regression coefficients. The first involves shrinking the eigenvalues of the unstructured ML or REML estimator. The second involves shrinking an unstructured estimator toward a structured estimator. For both cases, the data determine the amount of shrinkage. These estimators are consistent and give consistent and asymptotically efficient estimates for regression coefficients. Simulations show the improved operating characteristics of the shrinkage estimators of the covariance matrix and the regression coefficients in finite samples. The final estimator chosen includes a combination of both shrinkage approaches, i.e., shrinking the eigenvalues and then shrinking toward structure. 
We illustrate our approach on a sleep EEG study that requires estimation of a 24 x 24 covariance matrix and for which inferences on mean parameters critically depend on the covariance estimator chosen. We recommend making inference using a particular shrinkage estimator that provides a reasonable compromise between structured and unstructured estimators.
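A minimal numerical sketch of the second shrinkage idea (shrinking an unstructured estimator toward a structured target) is below. The scaled-identity target and the p/(p+n) shrinkage weight are crude placeholders, not the data-driven rule of the paper:

```python
import numpy as np

def shrink_toward_structure(S, n, target=None, lam=None):
    """Shrink an unstructured covariance estimate S toward a structured target.
    If lam is None, use a crude heuristic that grows with p/n (illustration only)."""
    p = S.shape[0]
    if target is None:
        target = np.eye(p) * np.trace(S) / p      # scaled-identity target
    if lam is None:
        lam = min(1.0, p / (p + n))               # placeholder shrinkage weight
    return (1 - lam) * S + lam * target

rng = np.random.default_rng(1)
p, n = 24, 30                                     # small n relative to p, as in the EEG example
X = rng.normal(size=(n, p))
S = np.cov(X, rowvar=False)                       # unstructured sample estimator
S_shrunk = shrink_toward_structure(S, n)

# Shrinkage pulls the extreme eigenvalues toward the center, stabilizing
# the smallest (too small) and largest (too big) sample eigenvalues.
ev_raw = np.linalg.eigvalsh(S)
ev_shr = np.linalg.eigvalsh(S_shrunk)
print(ev_raw.min(), ev_raw.max(), ev_shr.min(), ev_shr.max())
```

Because the shrunk eigenvalues are convex combinations of each sample eigenvalue and their mean, the spectrum is contracted deterministically, which is exactly the instability fix the abstract describes.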

4.
Adhesion flow assays are commonly employed to characterize the kinetics and force-dependence of receptor-ligand interactions. Because transient cellular adhesion events are often mediated by a small number of receptor-ligand complexes (tether bonds), their durations are highly variable, which presents obstacles to standard methods of analysis. In this paper, we employ the stochastic approach to chemical kinetics to construct the pause time distribution. Using this distribution, we develop a maximum likelihood (ML) approach for robust estimation of the rate constants associated with receptor-mediated transient adhesion, together with their confidence intervals. We then formulate robust estimators of the parameters of models for the force-dependence of the off-rate, and develop a method for elucidating that force-dependence using Akaike's information criterion (AIC). Our findings demonstrate that ML estimators of adhesion kinetics substantially improve on more conventional approaches and, when combined with Fisher information, may be used to objectively and reproducibly distinguish the kinetics of different receptor-ligand complexes. Software implementing these methods with experimental data is publicly available for download at http://www.laurenzi.net.
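The ML-plus-Fisher-information and AIC machinery can be sketched for the simplest case of a single tether bond, where first-order dissociation makes pause times exponential. The rate value, sample size, and the half-normal "competitor" model are invented for illustration; the paper's force-dependence models are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical pause-time data: exponential with off-rate k_off = 2 (made up).
k_true = 2.0
t = rng.exponential(1.0 / k_true, size=500)

# ML estimate of k_off with a Fisher-information standard error:
# for Exp(k), I(k) = n / k^2, so se(k_hat) = k_hat / sqrt(n).
k_hat = 1.0 / t.mean()
se = k_hat / np.sqrt(t.size)
ci = (k_hat - 1.96 * se, k_hat + 1.96 * se)

# AIC comparison of the exponential model against a deliberately wrong
# half-normal model for the same pause times (both have one parameter).
def aic_exponential(t):
    k = 1.0 / t.mean()
    return 2 - 2 * (t.size * np.log(k) - k * t.sum())

def aic_halfnormal(t):
    s2 = np.mean(t ** 2)                        # ML scale parameter
    return 2 - 2 * (t.size * 0.5 * np.log(2.0 / (np.pi * s2)) - t.size / 2.0)

print(k_hat, ci, aic_exponential(t), aic_halfnormal(t))
```

With data actually generated by first-order dissociation, the exponential model attains the lower AIC, which is the model-discrimination logic the abstract applies to competing off-rate models.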

5.
Semiparametric smoothing methods are widely used to model longitudinal data, where the interest is in improving estimation efficiency for the regression coefficients. This paper is concerned with estimation in semiparametric varying-coefficient models (SVCMs) for longitudinal data. By the orthogonal projection method, local linear technique, quasi-score estimation, and quasi-maximum likelihood estimation, we propose a two-stage orthogonality-based method to estimate the parameter vector, coefficient function vector, and covariance function. The developed procedures can be implemented separately, and the resulting estimators do not affect each other. Under some mild conditions, asymptotic properties of the resulting estimators are established explicitly. In particular, the asymptotic behavior of the estimator of the coefficient function vector at the boundaries is examined. Further, the finite sample performance of the proposed procedures is assessed by Monte Carlo simulation experiments. Finally, the proposed methodology is illustrated with an analysis of an acquired immune deficiency syndrome (AIDS) dataset.

6.
Recurrent event data are commonly encountered in medical studies. In many applications, only the number of events during the follow-up period, rather than the recurrent event times, is available. Two important challenges arise in such studies: (a) a substantial portion of subjects may not experience the event, and (b) we may not observe the event count for the entire study period due to informative dropout. To address the first challenge, we assume that the underlying population consists of two subpopulations: one nonsusceptible and one susceptible to the event of interest. In the susceptible subpopulation, the event count is assumed to follow a Poisson distribution given the follow-up time and the subject-specific characteristics. We then introduce a frailty to account for informative dropout. The proposed semiparametric frailty models consist of three submodels: (a) a logistic regression model for the probability that a subject belongs to the nonsusceptible subpopulation; (b) a nonhomogeneous Poisson process model with an unspecified baseline rate function; and (c) a Cox model for the informative dropout time. We develop likelihood-based estimation and inference procedures. The maximum likelihood estimators are shown to be consistent. Additionally, the proposed estimators of the finite-dimensional parameters are asymptotically normal, and the covariance matrix attains the semiparametric efficiency bound. Simulation studies demonstrate that the proposed methodologies perform well in practical situations. We apply the proposed methods to a clinical trial on patients with myelodysplastic syndromes.
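The cure-fraction idea in (a) can be sketched in isolation as a zero-inflated Poisson model fitted by EM. This caricature ignores follow-up times, the frailty, and the Cox dropout submodel, and all parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two latent subpopulations: a nonsusceptible fraction pi with zero events,
# and susceptibles with Poisson event counts (hypothetical parameters).
pi_true, lam_true, n = 0.3, 2.5, 2000
susceptible = rng.random(n) > pi_true
y = np.where(susceptible, rng.poisson(lam_true, n), 0)

# EM for the zero-inflated Poisson mixture
pi, lam = 0.5, 1.0                               # crude starting values
for _ in range(200):
    # E-step: probability that an observed zero came from the nonsusceptible class
    p0 = pi + (1 - pi) * np.exp(-lam)
    z = np.where(y == 0, pi / p0, 0.0)
    # M-step: update the mixing probability and the Poisson mean
    pi = z.mean()
    lam = y.sum() / (n - z.sum())

print(pi, lam)
```

The E-step splits each observed zero between "never susceptible" and "susceptible but unlucky", which is how the full model separates the cure fraction from low event rates.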

7.
T. R. Fears, C. C. Brown. Biometrics 1986, 42(4): 955-960
There are a number of possible designs for case-control studies. The simplest uses two separate simple random samples, but an actual study may use more complex sampling procedures. Typically, stratification is used to control for the effects of one or more risk factors in which we are interested. It has been shown (Anderson, 1972, Biometrika 59, 19-35; Prentice and Pyke, 1979, Biometrika 66, 403-411) that the unconditional logistic regression estimators apply under stratified sampling, so long as the logistic model includes a term for each stratum. We consider the case-control problem with stratified samples and assume a logistic model that does not include terms for strata, i.e., for fixed covariates the (prospective) probability of disease does not depend on stratum. We assume knowledge of the proportion sampled in each stratum as well as the total number in the stratum. We use this knowledge to obtain the maximum likelihood estimators for all parameters in the logistic model including those for variables completely associated with strata. The approach may also be applied to obtain estimators under probability sampling.

8.
The problem of assessing the relative calibrations and relative accuracies of a set of p instruments, each designed to measure the same characteristic on a common group of individuals, is considered using the EM algorithm. As shown, the EM algorithm provides a general solution for this problem. Its implementation is simple, and in its most general form it requires no extra iterative procedures within the M step. One important feature of the algorithm in this setup is that the error variance estimates are always positive; thus, it can be seen as a kind of restricted maximization procedure. The expected information matrix for the maximum likelihood estimators is derived, from which the large-sample estimated covariance matrix for the maximum likelihood estimators can be computed. The problem of testing hypotheses about the calibration lines can be approached using Wald statistics. The approach is illustrated by re-analysing two data sets from the literature.

9.
Bivariate samples may be subject to censoring of both random variables. For example, for two toxins measured in batches of wheat grain, there may be specific detection limits. Alternatively, censoring may be incomplete over a certain domain, with the probability of detection depending on the toxin level. In either case, data are not missing at random, and the missing data pattern bears some information on the parameters of the underlying model (informative missingness), which can be exploited for a fully efficient analysis. Estimation (after suitable data transformation) of the correlation in such samples is the subject of the present paper. We consider several estimators. The first is based on the tetrachoric correlation. It is simple to compute, but does not exploit the full information. The other two estimators exploit all information and use full maximum likelihood, but involve heavier computations. The one assumes fixed detection limits, while the other involves a logistic model for the probability of detection. For a real data set, a logistic model for the probability of detection fitted markedly better than a model with fixed detection limits, suggesting that censoring is not complete.

10.
Genetic correlations are frequently estimated from natural and experimental populations, yet many of the statistical properties of estimators of the genetic correlation are not known, and accurate methods have not been described for estimating the precision of such estimates. Our objective was to assess the statistical properties of multivariate analysis of variance (MANOVA), restricted maximum likelihood (REML), and maximum likelihood (ML) estimators of the genetic correlation by simulating bivariate normal samples for the one-way balanced linear model. We estimated probabilities of non-positive definite MANOVA estimates of genetic variance-covariance matrices and the biases and variances of the MANOVA, REML, and ML estimators, and assessed the accuracy of parametric, jackknife, and bootstrap variance and confidence interval estimators. MANOVA estimates of the genetic correlation were normally distributed. REML and ML estimates were normally distributed for some parameter values but skewed for large genetic correlations (e.g., 0.9). All of the estimators were biased. The MANOVA estimator was less biased than the REML and ML estimators when heritability (H), the number of genotypes (n), and the number of replications (r) were low. The biases were otherwise nearly equal for the different estimators and could not be reduced by jackknifing or bootstrapping. The variance of the MANOVA estimator was greater than the variance of the REML or ML estimator for most H, n, and r. Bootstrapping produced estimates of the variance close to the known variance, especially for REML and ML. The observed coverages of the REML and ML bootstrap interval estimators were consistently close to stated coverages, whereas the observed coverage of the MANOVA bootstrap interval estimator was unsatisfactory for some H, n, and r. The other interval estimators produced unsatisfactory coverages. REML and ML bootstrap interval estimates were narrower than MANOVA bootstrap interval estimates for most H, n, and r. Received: 6 July 1995 / Accepted: 8 March 1996

11.
In order to have confidence in model-based phylogenetic analysis, the model of nucleotide substitution adopted must be selected in a statistically rigorous manner. Several model-selection methods are applicable to maximum likelihood (ML) analysis, including the hierarchical likelihood-ratio test (hLRT), Akaike information criterion (AIC), Bayesian information criterion (BIC), and decision theory (DT), but their performance relative to empirical data has not been investigated thoroughly. In this study, we use 250 phylogenetic data sets obtained from TreeBASE to examine the effects that choice in model selection has on ML estimation of phylogeny, with an emphasis on optimal topology, bootstrap support, and hypothesis testing. We show that the use of different methods leads to the selection of two or more models for approximately 80% of the data sets and that the AIC typically selects more complex models than alternative approaches. Although ML estimation with different best-fit models results in incongruent tree topologies approximately 50% of the time, these differences are primarily attributable to alternative resolutions of poorly supported nodes. Furthermore, topologies and bootstrap values estimated with ML using alternative statistically supported models are more similar to each other than to topologies and bootstrap values estimated with ML under the Kimura two-parameter (K2P) model or maximum parsimony (MP). In addition, Swofford-Olsen-Waddell-Hillis (SOWH) tests indicate that ML trees estimated with alternative best-fit models are usually not significantly different from each other when evaluated with the same model. However, ML trees estimated with statistically supported models are often significantly suboptimal to ML trees made with the K2P model when both are evaluated with K2P, indicating that not all models perform in an equivalent manner. 
Nevertheless, the use of alternative statistically supported models generally does not affect tests of monophyletic relationships under either the Shimodaira-Hasegawa (S-H) or SOWH methods. Our results suggest that although choice in model selection has a strong impact on optimal tree topology, it rarely affects evolutionary inferences drawn from the data because differences are mainly confined to poorly supported nodes. Moreover, since ML with alternative best-fit models tends to produce more similar estimates of phylogeny than ML under the K2P model or MP, the use of any statistically based model-selection method is vastly preferable to forgoing the model-selection process altogether.

12.
Comparison of the performance and accuracy of different inference methods, such as maximum likelihood (ML) and Bayesian inference, is difficult because the methods are implemented in different programs, often written by different authors. Both methods were implemented in the program MIGRATE, which estimates population genetic parameters, such as population sizes and migration rates, using coalescence theory. The two inference methods use the same Markov chain Monte Carlo algorithm and differ in only two aspects: the parameter proposal distribution and the maximization of the likelihood function. On simulated datasets, the Bayesian method generally fares better than the ML approach in accuracy and coverage, although for some values the two approaches perform equally well. MOTIVATION: The Markov chain Monte Carlo-based ML framework can fail on sparse data and can deliver non-conservative support intervals. A Bayesian framework with an appropriate prior distribution is able to remedy some of these problems. RESULTS: The program MIGRATE was extended to allow not only maximum likelihood (ML) estimation of population genetic parameters but also estimation within a Bayesian framework. Comparisons between the Bayesian and ML approaches are facilitated because both modes estimate the same parameters under the same population model and assumptions.

13.
We consider the question: In a segregation analysis, can knowledge of the family-size distribution (FSD) in the population from which a sample is drawn improve the estimators of genetic parameters? In other words, should one incorporate the population FSD into a segregation analysis if one knows it? If so, then under what circumstances? And how much improvement may result? We examine the variance and bias of the maximum likelihood estimators both asymptotically and in finite samples. We consider Poisson and geometric FSDs, as well as a simple two-valued FSD in which all families in the population have either one or two children. We limit our study to a simple genetic model with truncate selection. We find that if the FSD is completely specified, then the asymptotic variance of the estimator may be reduced by as much as 5%-10%, especially when the FSD is heavily skewed toward small families. Results in small samples are less clear-cut. For some of the simple two-valued FSDs, the variance of the estimator in small samples of one- and two-child families may actually be increased slightly when the FSD is included in the analysis. If one knows only the statistical form of the FSD, but not its parameter, then the estimator is improved only minutely. Our study also underlines the fact that results derived from asymptotic maximum likelihood theory do not necessarily hold in small samples. We conclude that in most practical applications it is not worth incorporating the FSD into a segregation analysis. However, this practice may be justified under special circumstances where the FSD is completely specified, without error, and the population consists overwhelmingly of small families.

14.
The structure of dependence between neighboring genetic loci is intractable under some models that treat each locus as a single data-point. Composite likelihood-based methods present a simple approach under such models by treating the data as if they are independent. A maximum composite likelihood estimator (MCLE) is not easy to find numerically, as in most cases we do not have a way of knowing if a maximum is global. We study the local maxima of the composite likelihood (ECLE, the efficient composite likelihood estimators), which is straightforward to compute. We establish desirable properties of the ECLE and provide an estimator of the variance of MCLE and ECLE. We also modify two proper likelihood-based tests to be used with composite likelihood. We modify our methods to make them applicable to datasets where some loci are excluded.
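The independence-composite-likelihood idea can be sketched on a toy dependent sequence: treat autocorrelated "sites" as independent when estimating a mean, then correct the variance for the ignored dependence. The AR(1) process and the truncated autocovariance correction are our illustrative choices, not the authors' estimators:

```python
import numpy as np

rng = np.random.default_rng(5)

# Dependent "loci": an AR(1) Gaussian chain with marginal N(mu, 1).
rho, mu_true, L = 0.7, 1.0, 5000
x = np.empty(L)
x[0] = rng.normal(mu_true, 1.0)
for i in range(1, L):
    x[i] = mu_true + rho * (x[i - 1] - mu_true) + rng.normal(0.0, np.sqrt(1 - rho ** 2))

# Independence-composite-likelihood estimate of mu: the sample mean,
# i.e., the maximizer obtained by pretending sites are independent.
mu_hat = x.mean()

# Naive variance (full independence assumed) vs a sandwich-type variance
# that accounts for dependence via empirical autocovariances (crude
# HAC-style truncation at lag 49, illustration only).
naive_var = x.var(ddof=1) / L
c = [np.mean((x[:L - k] - mu_hat) * (x[k:] - mu_hat)) for k in range(1, 50)]
sandwich_var = (x.var(ddof=1) + 2 * sum(c)) / L

print(mu_hat, naive_var, sandwich_var)
```

The point estimate is fine, but the naive variance is badly optimistic under positive dependence; composite-likelihood inference needs a corrected (Godambe-type) variance, which is what the abstract's variance estimator provides in its setting.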

15.
Imputation, weighting, direct likelihood, and direct Bayesian inference (Rubin, 1976) are important approaches for missing data regression. Many useful semiparametric estimators have been developed for regression analysis of data with missing covariates or outcomes. It has been established that some semiparametric estimators are asymptotically equivalent, but it has not been shown that many are numerically the same. We applied some existing methods to a bladder cancer case-control study and noted that they were the same numerically when the observed covariates and outcomes are categorical. To understand the analytical background of this finding, we further show that when observed covariates and outcomes are categorical, some estimators are not only asymptotically equivalent but also actually numerically identical. That is, although their estimating equations are different, they lead numerically to exactly the same root. This includes a simple weighted estimator, an augmented weighted estimator, and a mean-score estimator. The numerical equivalence may elucidate the relationship between imputing scores and weighted estimation procedures.

16.
Survival estimation using splines
A nonparametric maximum likelihood procedure is given for estimating the survivor function from right-censored data. It approximates the hazard rate by a simple function such as a spline, with different approximations yielding different estimators. A special case is that proposed by Nelson (1969, Journal of Quality Technology 1, 27-52) and Altshuler (1970, Mathematical Biosciences 6, 1-11). The estimators are uniformly consistent and have the same asymptotic weak convergence properties as the Kaplan-Meier (1958, Journal of the American Statistical Association 53, 457-481) estimator. However, in small and in heavily censored samples, the simplest spline estimators have uniformly smaller mean squared error than do the Kaplan-Meier and Nelson-Altshuler estimators. The procedure is extended to estimate the baseline hazard rate and regression coefficients in the Cox (1972, Journal of the Royal Statistical Society, Series B 34, 187-220) proportional hazards model and is illustrated using experimental carcinogenesis data.
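For orientation, here is a sketch of the two classical estimators the spline approach is benchmarked against (the Nelson-Altshuler cumulative-hazard estimator and the Kaplan-Meier product-limit estimator), on synthetic right-censored exponential data; the rates are arbitrary, and the spline estimator itself is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(4)

# Right-censored survival data: event times Exp(1), censoring times Exp(0.5)
n = 1000
t_event = rng.exponential(1.0, n)
t_cens = rng.exponential(2.0, n)
time = np.minimum(t_event, t_cens)
event = t_event <= t_cens

order = np.argsort(time)
time, event = time[order], event[order]
at_risk = n - np.arange(n)                  # number still at risk at each ordered time

# Nelson-Altshuler(-Aalen): cumulative hazard steps up by 1/at_risk at events
na_cumhaz = np.cumsum(np.where(event, 1.0 / at_risk, 0.0))
s_na = np.exp(-na_cumhaz)                   # implied survivor function

# Kaplan-Meier product-limit estimator
km = np.cumprod(np.where(event, 1.0 - 1.0 / at_risk, 1.0))

# Both track the true survivor function exp(-t); compare at t = 0.5
idx = np.searchsorted(time, 0.5)
print(s_na[idx], km[idx], np.exp(-0.5))
```

Since 1 - x <= exp(-x) factor by factor, the Kaplan-Meier curve never exceeds the exponentiated Nelson-Altshuler curve; the two agree closely wherever the risk set is large.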

17.
This paper considers the use of ante-dependence models in problems with repeated measures through time. These are conditional regression models which reflect the dependence of a measure on some of the previous observations from the same subject. We present maximum likelihood estimators of the covariance matrix and procedures for selecting the order of ante-dependence based on penalized likelihoods. Extensions to missing-data situations are discussed. We propose Wald-type test statistics and apply them in two situations common in experiments with repeated measures: one with pre-study observations and another with small sample size relative to the number of time periods. In these examples, tests assuming ante-dependence find effects which are not detected using competing procedures.

18.
Latent class analysis (LCA) and latent class regression (LCR) are widely used for modeling multivariate categorical outcomes in social science and biomedical studies. Standard analyses assume data of different respondents to be mutually independent, excluding application of the methods to familial and other designs in which participants are clustered. In this article, we consider multilevel latent class models, in which subpopulation mixing probabilities are treated as random effects that vary among clusters according to a common Dirichlet distribution. We apply the expectation-maximization (EM) algorithm for model fitting by maximum likelihood (ML). This approach works well, but is computationally intensive when either the number of classes or the cluster size is large. We propose a maximum pairwise likelihood (MPL) approach via a modified EM algorithm for this case. We also show that a simple latent class analysis, combined with robust standard errors, provides another consistent, robust, but less-efficient inferential procedure. Simulation studies suggest that the three methods work well in finite samples, and that the MPL estimates often enjoy comparable precision to the ML estimates. We apply our methods to the analysis of comorbid symptoms in the obsessive compulsive disorder study. Our models' random effects structure has a more straightforward interpretation than those of competing methods, and thus should usefully augment tools available for LCA of multilevel data.

19.
Modeling compositional heterogeneity
Compositional heterogeneity among lineages can compromise phylogenetic analyses, because models in common use assume compositionally homogeneous data. Models that can accommodate compositional heterogeneity with few extra parameters are described here, and used in two examples where the true tree is known with confidence. It is shown using likelihood ratio tests that adequate modeling of compositional heterogeneity can be achieved with few composition parameters, and that the data may not need to be modeled with separate composition parameters for each branch in the tree. Tree searching and placement of composition vectors on the tree are done in a Bayesian framework using Markov chain Monte Carlo (MCMC) methods. Assessment of fit of the model to the data is made in both maximum likelihood (ML) and Bayesian frameworks. In an ML framework, overall model fit is assessed using the Goldman-Cox test, and the fit of the composition implied by a (possibly heterogeneous) model to the composition of the data is assessed using a novel tree- and model-based composition fit test. In a Bayesian framework, overall model fit and composition fit are assessed using posterior predictive simulation. It is shown that when composition is not accommodated, the model does not fit and incorrect trees are found; but when composition is accommodated, the model fits and the known correct phylogenies are obtained.

20.
Sequence data often have competing signals that are detected by network programs or Lento plots. Such data can be formed by generating sequences on more than one tree, and combining the results, a mixture model. We report that with such mixture models, the estimates of edge (branch) lengths from maximum likelihood (ML) methods that assume a single tree are biased. Based on the observed number of competing signals in real data, such a bias of ML is expected to occur frequently. Because network methods can recover competing signals more accurately, there is a need for ML methods allowing a network. A fundamental problem is that mixture models can have more parameters than can be recovered from the data, so that some mixtures are not, in principle, identifiable. We recommend that network programs be incorporated into best practice analysis, along with ML and Bayesian trees.
