Similar Articles (20 results)
1.
The log-det estimator is a measure of divergence (evolutionary distance) between sequences of biological characters, DNA or amino acids, for example, and has been shown to be robust to biases in composition that can cause problems for other estimators. We provide a statistical framework to construct high-accuracy confidence intervals for log-det estimates and compare the efficiency of the estimator to that of maximum likelihood using time-reversible Markov models. The log-det estimator is found to have good statistical properties under such general models.
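A minimal sketch (not from the paper) of the basic log-det/paralinear form of this distance, assuming the standard definition with the 4×4 joint divergence matrix F and its marginal base frequencies; the function name and alphabet handling are illustrative only:

```python
import numpy as np

def logdet_distance(seq1, seq2, alphabet="ACGT"):
    """Basic log-det (paralinear) divergence between two aligned sequences.

    Builds the 4x4 joint divergence matrix F and evaluates
        d = -(1/4) * [ln det(F) - 0.5 * (sum ln fx + sum ln fy)],
    where fx, fy are the row/column marginal base frequencies.
    Identical sequences give d = 0.
    """
    idx = {c: i for i, c in enumerate(alphabet)}
    F = np.zeros((4, 4))
    for a, b in zip(seq1, seq2):
        F[idx[a], idx[b]] += 1
    F /= F.sum()
    fx = F.sum(axis=1)  # base frequencies in seq1
    fy = F.sum(axis=0)  # base frequencies in seq2
    sign, logdet = np.linalg.slogdet(F)
    return -0.25 * (logdet - 0.5 * (np.log(fx).sum() + np.log(fy).sum()))
```

The marginal-frequency correction is what makes the estimator robust to compositional bias: it removes the contribution of unequal base frequencies before taking the log-determinant.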

2.
Estimation of a common effect parameter from sparse follow-up data
Breslow (1981, Biometrika 68, 73-84) has shown that the Mantel-Haenszel odds ratio is a consistent estimator of a common odds ratio in sparse stratifications. For cohort studies, however, estimation of a common risk ratio or risk difference can be of greater interest. Under a binomial sparse-data model, the Mantel-Haenszel risk ratio and risk difference estimators are consistent in sparse stratifications, while the maximum likelihood and weighted least squares estimators are biased. Under Poisson sparse-data models, the Mantel-Haenszel and maximum likelihood rate ratio estimators have equal asymptotic variances under the null hypothesis and are consistent, while the weighted least squares estimators are again biased; similarly, of the common rate difference estimators the weighted least squares estimators are biased, while the estimator employing "Mantel-Haenszel" weights is consistent in sparse data. Variance estimators that are consistent in both sparse data and large strata can be derived for all the Mantel-Haenszel estimators.
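A short sketch of the Mantel-Haenszel common risk ratio the abstract refers to, for stratified 2×2 tables; the tuple layout is an assumption made for the example:

```python
def mh_risk_ratio(strata):
    """Mantel-Haenszel common risk ratio across 2x2 strata.

    Each stratum is (a, n1, c, n0): exposed cases / exposed total,
    unexposed cases / unexposed total, with N_i = n1 + n0.
        RR_MH = sum_i a_i*n0_i/N_i  /  sum_i c_i*n1_i/N_i
    """
    num = sum(a * n0 / (n1 + n0) for a, n1, c, n0 in strata)
    den = sum(c * n1 / (n1 + n0) for a, n1, c, n0 in strata)
    return num / den
```

Because each stratum contributes only a ratio of weighted counts, the estimator stays stable when every stratum is small ("sparse"), which is the setting the abstract contrasts with maximum likelihood and weighted least squares.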

3.
Y. X. Fu 《Genetics》1994,138(4):1375-1386
Mutations resulting in segregating sites of a sample of DNA sequences can be classified by size and type and the frequencies of mutations of different sizes and types can be inferred from the sample. A framework for estimating the essential parameter θ = 4Nμ utilizing the frequencies of mutations of various sizes and types is developed in this paper, where N is the effective size of a population and μ is the mutation rate per sequence per generation. The framework is a combination of coalescent theory, general linear model and Monte-Carlo integration, which leads to two new estimators θ(ξ) and θ(η) as well as a general Watterson's estimator θ(K) and a general Tajima's estimator θ(π). The greatest strength of the framework is that it can be used under a variety of population models. The properties of the framework and the four estimators θ(K), θ(π), θ(ξ) and θ(η) are investigated under three important population models: the neutral Wright-Fisher model, the neutral model with recombination and the neutral Wright's finite-islands model. Under all these models, it is shown that θ(ξ) is the best estimator among the four even when recombination rate or migration rate has to be estimated. Under the neutral Wright-Fisher model, it is shown that the new estimator θ(ξ) has a variance close to a lower bound of variances of all unbiased estimators of θ, which suggests that θ(ξ) is a very efficient estimator.
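For reference, the two classical special cases the paper generalizes, Watterson's θ(K) from the number of segregating sites and Tajima's θ(π) from mean pairwise differences, can be sketched as follows (toy implementation, not the paper's generalized linear-model framework):

```python
from itertools import combinations

def theta_watterson(seqs):
    """Watterson's estimator: S / a_n, with a_n = sum_{i=1}^{n-1} 1/i,
    where S is the number of segregating (polymorphic) sites."""
    n, L = len(seqs), len(seqs[0])
    S = sum(1 for j in range(L) if len({s[j] for s in seqs}) > 1)
    a_n = sum(1.0 / i for i in range(1, n))
    return S / a_n

def theta_pi(seqs):
    """Tajima's estimator: mean number of pairwise differences."""
    pairs = list(combinations(seqs, 2))
    diffs = [sum(x != y for x, y in zip(s1, s2)) for s1, s2 in pairs]
    return sum(diffs) / len(pairs)
```

Both estimate the same θ = 4Nμ under the neutral Wright-Fisher model; the paper's θ(ξ) and θ(η) instead weight mutation frequency classes to reduce variance.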

4.
A general statistical framework is proposed for comparing linear models of spatial process and pattern. A spatial linear model for nested analysis of variance can be based on either fixed effects or random effects. Greig-Smith (1952) originally used a fixed effects model, but there are also examples of random effects models in the soil science literature. Assuming intrinsic stationarity for a linear model, the expectations of a spatial nested ANOVA and two term local variance (TTLV, Hill 1973) are functions of the variogram, and several examples are given. Paired quadrat variance (PQV, Ludwig & Goodall 1978) is a variogram estimator which can be used to approximate TTLV, and we provide an example from ecological data. Both nested ANOVA and TTLV can be seen as weighted lag-1 variogram estimators that are functions of support, rather than distance. We show that there are two unbiased estimators for the variogram under aggregation, and computer simulation shows that the estimator with smaller variance depends on the process autocorrelation.
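A minimal sketch of the paired-quadrat-style semivariance estimator on a one-dimensional transect, assuming the standard method-of-moments variogram formula (the function name is illustrative):

```python
def paired_quadrat_variance(z, lag):
    """Semivariance at a given lag along a transect of quadrat values:
        gamma(h) = 1 / (2 * N(h)) * sum_i (z[i] - z[i+h])^2,
    where N(h) is the number of pairs separated by lag h.
    """
    diffs = [(z[i] - z[i + lag]) ** 2 for i in range(len(z) - lag)]
    return sum(diffs) / (2.0 * len(diffs))
```

Evaluating this over a range of lags gives the empirical variogram that, per the abstract, nested ANOVA and TTLV approximate as weighted lag-1 versions indexed by support.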

5.
Copt S  Heritier S 《Biometrics》2007,63(4):1045-1052
Mixed linear models are commonly used to analyze data in many settings. These models are generally fitted by means of (restricted) maximum likelihood techniques relying heavily on normality. The sensitivity of the resulting estimators and related tests to this underlying assumption has been identified as a weakness that can even lead to wrong interpretations. Very recently a highly robust estimator based on a scale estimate, that is, an S-estimator, has been proposed for general mixed linear models. It has the advantage of being easy to compute and allows the computation of a robust score test. However, this proposal cannot be used to define a likelihood ratio type test that is certainly the most direct route to robustify an F-test. As the latter is usually a key tool of hypothesis testing in mixed linear models, we propose two new robust estimators that allow the desired extension. They also lead to resistant Wald-type tests useful for testing contrasts and covariate effects. We study their properties theoretically and by means of simulations. The analysis of a real data set illustrates the advantage of the new approach in the presence of outlying observations.

6.
Logistic regression in capture-recapture models
J M Alho 《Biometrics》1990,46(3):623-635
The effect of population heterogeneity in capture-recapture, or dual registration, models is discussed. An estimator of the unknown population size based on a logistic regression model is introduced. The model allows different capture probabilities across individuals and across capture times. The probabilities are estimated from the observed data using conditional maximum likelihood. The resulting population estimator is shown to be consistent and asymptotically normal. A variance estimator under population heterogeneity is derived. The finite-sample properties of the estimators are studied via simulation. An application to Finnish occupational disease registration data is presented.  
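Once per-individual, per-occasion capture probabilities have been fitted (e.g. by such a logistic model), a Horvitz-Thompson-style population size estimate can be formed; this sketch assumes the probabilities are given and is not the paper's conditional-likelihood derivation:

```python
import math

def population_size(capture_probs):
    """Population size from per-individual, per-occasion capture
    probabilities (one list per OBSERVED individual).

    p = P(captured at least once) = 1 - prod_t (1 - p_t);
    N_hat = sum over observed individuals of 1 / p.
    """
    n_hat = 0.0
    for probs in capture_probs:
        p_any = 1.0 - math.prod(1.0 - p for p in probs)
        n_hat += 1.0 / p_any
    return n_hat
```

Intuitively, an individual seen despite a low capture probability stands in for many unseen ones, which is how heterogeneous capture probabilities enter the size estimate.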

7.
Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting "expected log-likelihood" can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina.

8.
Otto SP  Jones CD 《Genetics》2000,156(4):2093-2107
Recent studies have begun to reveal the genes underlying quantitative trait differences between closely related populations. Not all quantitative trait loci (QTL) are, however, equally likely to be detected. QTL studies involve a limited number of crosses, individuals, and genetic markers and, as a result, often have little power to detect genetic factors of small to moderate effects. In this article, we develop an estimator for the total number of fixed genetic differences between two parental lines. Like the Castle-Wright estimator, which is based on the observed segregation variance in classical crossbreeding experiments, our QTL-based estimator requires that a distribution be specified for the expected effect sizes of the underlying loci. We use this expected distribution and the observed mean and minimum effect size of the detected QTL in a likelihood model to estimate the total number of loci underlying the trait difference. We then test the QTL-based estimator and the Castle-Wright estimator in Monte Carlo simulations. When the assumptions of the simulations match those of the model, both estimators perform well on average. The 95% confidence limits of the Castle-Wright estimator, however, often excluded the true number of underlying loci, while the confidence limits for the QTL-based estimator typically included the true value approximately 95% of the time. Furthermore, we found that the QTL-based estimator was less sensitive to dominance and to allelic effects of opposite sign than the Castle-Wright estimator. We therefore suggest that the QTL-based estimator be used to assess how many loci may have been missed in QTL studies.
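For context, the classical Castle-Wright benchmark mentioned above reduces to a one-line formula; this sketch uses the standard form n_e = D² / (8·σ²_seg) under its usual assumptions (additive, equal-effect loci), not the paper's QTL-based likelihood estimator:

```python
def castle_wright(parent_mean_1, parent_mean_2, seg_variance):
    """Castle-Wright estimate of the effective number of loci:
        n_e = D^2 / (8 * sigma_seg^2),
    where D is the difference in parental means and sigma_seg^2 is the
    segregation variance observed in the cross.
    """
    d = parent_mean_1 - parent_mean_2
    return d * d / (8.0 * seg_variance)
```

Violations of the equal-effect and additivity assumptions are exactly what the abstract reports as making this estimator's confidence limits unreliable relative to the QTL-based alternative.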

9.
Pan W  Lin X  Zeng D 《Biometrics》2006,62(2):402-412
We propose a new class of models, transition measurement error models, to study the effects of covariates and the past responses on the current response in longitudinal studies when one of the covariates is measured with error. We show that the response variable conditional on the error-prone covariate follows a complex transition mixed effects model. The naive model obtained by ignoring the measurement error correctly specifies the transition part of the model, but misspecifies the covariate effect structure and ignores the random effects. We next study the asymptotic bias in the naive estimator obtained by ignoring the measurement error for both continuous and discrete outcomes. We show that the naive estimator of the regression coefficient of the error-prone covariate is attenuated, while the naive estimators of the regression coefficients of the past responses are generally inflated. We then develop a structural modeling approach for parameter estimation using the maximum likelihood estimation method. In view of the multidimensional integration required by full maximum likelihood estimation, an EM algorithm is developed to calculate maximum likelihood estimators, in which Monte Carlo simulations are used to evaluate the conditional expectations in the E-step. We evaluate the performance of the proposed method through a simulation study and apply it to a longitudinal social support study for elderly women with heart disease. An additional simulation study shows that the Bayesian information criterion (BIC) performs well in choosing the correct transition orders of the models.

10.
M C Wu  K R Bailey 《Biometrics》1989,45(3):939-955
A general linear regression model for the usual least squares estimated rate of change (slope) on censoring time is described as an approximation to account for informative right censoring in estimating and comparing changes of a continuous variable in two groups. Two noniterative estimators for the group slope means, the linear minimum variance unbiased (LMVUB) estimator and the linear minimum mean squared error (LMMSE) estimator, are proposed under this conditional model. In realistic situations, we illustrate that the LMVUB and LMMSE estimators, derived under a simple linear regression model, are quite competitive compared to the pseudo maximum likelihood estimator (PMLE) derived by modeling the censoring probabilities. Generalizations to polynomial response curves and general linear models are also described.

11.
Bayesian phylogenetic methods are generating noticeable enthusiasm in the field of molecular systematics. Many phylogenetic models are often at stake, and different approaches are used to compare them within a Bayesian framework. The Bayes factor, defined as the ratio of the marginal likelihoods of two competing models, plays a key role in Bayesian model selection. We focus on estimation of the marginal likelihood, whose computation is still a challenging problem. Several computational solutions have been proposed, none of which simultaneously outperforms the others in simplicity of implementation, computational burden, and precision of the estimates. Practitioners and researchers, often led by available software, have so far privileged the simplicity of the harmonic mean (HM) estimator. However, it is known that the resulting estimates of the Bayesian evidence in favor of one model are biased and often inaccurate, and the estimator can even have infinite variance, so that the reliability of the corresponding conclusions is doubtful. We consider possible improvements of the generalized harmonic mean (GHM) idea that recycle Markov chain Monte Carlo (MCMC) simulations from the posterior, share the computational simplicity of the original HM estimator, but, unlike it, overcome the infinite variance issue. We show reliability and comparative performance of the improved harmonic mean estimators, comparing them to approximation techniques relying on improved variants of thermodynamic integration.
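The basic HM estimator the abstract criticizes is simple to state: the marginal likelihood is estimated by the harmonic mean of the likelihood over posterior samples. A numerically careful sketch in log space (logsumexp written out by hand; this is the plain HM estimator, not the paper's stabilized variants):

```python
import math

def log_marginal_hm(log_likelihoods):
    """Harmonic-mean estimate of the marginal likelihood, in log space:
        log m_hat = log n - logsumexp_i(-logL_i)
    over n posterior samples.  Numerically stable to evaluate, but the
    estimator itself can still have infinite variance, as the text notes.
    """
    neg = [-ll for ll in log_likelihoods]
    m = max(neg)
    lse = m + math.log(sum(math.exp(v - m) for v in neg))
    return math.log(len(log_likelihoods)) - lse
```

The instability comes from rare posterior samples with very small likelihood dominating the sum of reciprocals, which is precisely what GHM-style corrections are designed to suppress.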

12.
In this article, we propose a two-stage approach to modeling multilevel clustered non-Gaussian data with sufficiently large numbers of continuous measures per cluster. Such data are common in biological and medical studies utilizing monitoring or image-processing equipment. We consider a general class of hierarchical models that generalizes the model in the global two-stage (GTS) method for nonlinear mixed effects models by using any square-root-n-consistent and asymptotically normal estimators from stage 1 as pseudodata in the stage 2 model, and by extending the stage 2 model to accommodate random effects from multiple levels of clustering. The second-stage model is a standard linear mixed effects model with normal random effects, but the cluster-specific distributions, conditional on random effects, can be non-Gaussian. This methodology provides a flexible framework for modeling not only a location parameter but also other characteristics of conditional distributions that may be of specific interest. For estimation of the population parameters, we propose a conditional restricted maximum likelihood (CREML) approach and establish the asymptotic properties of the CREML estimators. The proposed general approach is illustrated using quartiles as cluster-specific parameters estimated in the first stage, and applied to the data example from a collagen fibril development study. We demonstrate using simulations that in samples with small numbers of independent clusters, the CREML estimators may perform better than conditional maximum likelihood estimators, which are a direct extension of the estimators from the GTS method.

13.
In this article, we estimate heritability or intraclass correlation in a mixed linear model having two sources of variation. In most applications, the commonly used restricted maximum likelihood (REML) estimator can only be obtained via an iterative approach. In some cases, the algorithm used to compute REML estimates may be slow or may even fail to converge. We develop a set of closed-form approximations to the REML estimator, and the performance of these estimators is compared with that of the REML estimator. We provide guidelines regarding how to choose the estimator that best approximates the REML estimator. Examples presented in the article suggest that the closed-form estimators compete with and, in some cases, outperform the REML estimator.
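One familiar closed-form estimator in this setting is the one-way ANOVA intraclass correlation for balanced data; this sketch shows that classical formula, not the paper's specific approximations:

```python
def icc_oneway(groups):
    """Closed-form one-way ANOVA intraclass correlation for balanced data
    (g groups, k observations each):
        ICC = (MSB - MSW) / (MSB + (k - 1) * MSW),
    where MSB/MSW are the between- and within-group mean squares.
    """
    g, k = len(groups), len(groups[0])
    grand = sum(sum(grp) for grp in groups) / (g * k)
    msb = k * sum((sum(grp) / k - grand) ** 2 for grp in groups) / (g - 1)
    msw = sum((x - sum(grp) / k) ** 2
              for grp in groups for x in grp) / (g * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Unlike REML, this requires no iteration, which is the practical appeal of closed-form approximations when iterative algorithms are slow or fail to converge.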

14.
Doubly robust estimation in missing data and causal inference models
Bang H  Robins JM 《Biometrics》2005,61(4):962-973
The goal of this article is to construct doubly robust (DR) estimators in ignorable missing data and causal inference models. In a missing data model, an estimator is DR if it remains consistent when either (but not necessarily both) a model for the missingness mechanism or a model for the distribution of the complete data is correctly specified. Because with observational data one can never be sure that either a missingness model or a complete data model is correct, perhaps the best that can be hoped for is to find a DR estimator. DR estimators, in contrast to standard likelihood-based or (nonaugmented) inverse probability-weighted estimators, give the analyst two chances, instead of only one, to make a valid inference. In a causal inference model, an estimator is DR if it remains consistent when either a model for the treatment assignment mechanism or a model for the distribution of the counterfactual data is correctly specified. Because with observational data one can never be sure that a model for the treatment assignment mechanism or a model for the counterfactual data is correct, inference based on DR estimators should improve upon previous approaches. Indeed, we present the results of simulation studies which demonstrate that the finite sample performance of DR estimators is as impressive as theory would predict. The proposed method is applied to a cardiovascular clinical trial.
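In the missing-data setting, the canonical DR construction is the augmented inverse-probability-weighted (AIPW) mean; a minimal sketch assuming the response probabilities and outcome-model predictions have already been fitted:

```python
def aipw_mean(y, observed, pi_hat, m_hat):
    """Doubly robust (AIPW) estimate of E[Y] with outcomes missing at random.

    y[i] is used only when observed[i] is True; pi_hat[i] is the estimated
    probability of being observed, m_hat[i] the outcome-model prediction:
        mu_hat = mean_i( m_i + R_i * (y_i - m_i) / pi_i ).
    Consistent if EITHER the pi-model or the m-model is correct.
    """
    total = 0.0
    for i in range(len(y)):
        total += m_hat[i]
        if observed[i]:
            total += (y[i] - m_hat[i]) / pi_hat[i]
    return total / len(y)
```

The augmentation term has mean zero whenever one of the two working models is right, which is the "two chances" the abstract describes.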

15.
Maximum-likelihood estimation of relatedness
Milligan BG 《Genetics》2003,163(3):1153-1167
Relatedness between individuals is central to many studies in genetics and population biology. A variety of estimators have been developed to enable molecular marker data to quantify relatedness. Despite this, no effort has been given to characterize the traditional maximum-likelihood estimator in relation to the remainder. This article quantifies its statistical performance under a range of biologically relevant sampling conditions. Under the same range of conditions, the statistical performance of five other commonly used estimators of relatedness is quantified. Comparison among these estimators indicates that the traditional maximum-likelihood estimator exhibits a lower standard error under essentially all conditions. Only for very large amounts of genetic information do most of the other estimators approach the likelihood estimator. However, the likelihood estimator is more biased than any of the others, especially when the amount of genetic information is low or the actual relationship being estimated is near the boundary of the parameter space. Even under these conditions, the amount of bias can be greatly reduced, potentially to biologically irrelevant levels, with suitable genetic sampling. Additionally, the likelihood estimator generally exhibits the lowest root mean-square error, an indication that the bias in fact is quite small. Alternative estimators restricted to yield only biologically interpretable estimates exhibit lower standard errors and greater bias than do unrestricted ones, but generally do not improve over the maximum-likelihood estimator and in some cases exhibit even greater bias. Although some nonlikelihood estimators exhibit better performance with respect to specific metrics under some conditions, none approach the high level of performance exhibited by the likelihood estimator across all conditions and all metrics of performance.

16.
The structure of dependence between neighboring genetic loci is intractable under some models that treat each locus as a single data-point. Composite likelihood-based methods present a simple approach under such models by treating the data as if they are independent. A maximum composite likelihood estimator (MCLE) is not easy to find numerically, as in most cases we do not have a way of knowing if a maximum is global. We study the local maxima of the composite likelihood (ECLE, the efficient composite likelihood estimators), which is straightforward to compute. We establish desirable properties of the ECLE and provide an estimator of the variance of MCLE and ECLE. We also modify two proper likelihood-based tests to be used with composite likelihood. We modify our methods to make them applicable to datasets where some loci are excluded.

17.
Y Hochberg  I Marom  R Keret  S Peleg 《Biometrics》1983,39(1):97-107
Two new estimators for calibrating unknowns from dose-response curves, in a system of quality-controlled assays, are examined. In contrast with the conventional estimator which uses only the results of the one assay in which the response of the unknown dose is measured, the new estimators also utilize the results of all other assays through the replications of the control samples in the system. The first estimator is based on maximizing the likelihood of the given system (with respect to the different dose-response parameters, the levels of the control samples and the levels of the unknowns) when response errors are normally distributed. The second estimator is a regression-like estimator obtained by subtracting from the conventional estimator its estimated regression on the deviation of the calibrated control levels in the given assay from their average values in the system. Evaluations of the reductions in bias and variance attained by the new estimators show when substantial reductions in mean square error can be expected. The new estimators are illustrated with a system of 22 hFSH radioimmunoassays.

18.
We derive regression estimators that can compare longitudinal treatments using only the longitudinal propensity scores as regressors. These estimators, which assume knowledge of the variables used in the treatment assignment, are important for reducing the large dimension of covariates for two reasons. First, if the regression models on the longitudinal propensity scores are correct, then our estimators share advantages of correctly specified model-based estimators, a benefit not shared by estimators based on weights alone. Second, if the models are incorrect, the misspecification can be more easily limited through model checking than with models based on the full covariates. Thus, our estimators can also be better when used in place of the regression on the full covariates. We use our methods to compare longitudinal treatments for type II diabetes mellitus.

19.
Zhang C  Jiang Y  Chai Y 《Biometrika》2010,97(3):551-566
Regularization methods are characterized by loss functions measuring data fits and penalty terms constraining model parameters. The commonly used quadratic loss is not suitable for classification with binary responses, whereas the loglikelihood function is not readily applicable to models where the exact distribution of observations is unknown or not fully specified. We introduce the penalized Bregman divergence by replacing the negative loglikelihood in the conventional penalized likelihood with Bregman divergence, which encompasses many commonly used loss functions in the regression analysis, classification procedures and machine learning literature. We investigate new statistical properties of the resulting class of estimators with the number p(n) of parameters either diverging with the sample size n or even nearly comparable with n, and develop statistical inference tools. It is shown that the resulting penalized estimator, combined with appropriate penalties, achieves the same oracle property as the penalized likelihood estimator, but asymptotically does not rely on the complete specification of the underlying distribution. Furthermore, the choice of loss function in the penalized classifiers has an asymptotically relatively negligible impact on classification performance. We illustrate the proposed method for quasilikelihood regression and binary classification with simulation evaluation and real-data application.
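The Bregman divergence generated by a convex function φ has a one-line definition, and choosing φ recovers familiar losses; a minimal sketch (the quadratic case below recovers squared error, which is one standard example of the family):

```python
def bregman(phi, dphi, y, mu):
    """Bregman divergence generated by a convex function phi with
    derivative dphi:
        D_phi(y, mu) = phi(y) - phi(mu) - dphi(mu) * (y - mu).
    With phi(t) = t^2 this is exactly the squared-error loss (y - mu)^2.
    """
    return phi(y) - phi(mu) - dphi(mu) * (y - mu)
```

Other choices of φ yield, e.g., deviance-type losses, which is how one family of divergences can stand in for the negative loglikelihood across regression and classification settings.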

20.
Shrinkage Estimators for Covariance Matrices
Estimation of covariance matrices in small samples has been studied by many authors. Standard estimators, like the unstructured maximum likelihood estimator (ML) or restricted maximum likelihood (REML) estimator, can be very unstable with the smallest estimated eigenvalues being too small and the largest too big. A standard approach to more stably estimating the matrix in small samples is to compute the ML or REML estimator under some simple structure that involves estimation of fewer parameters, such as compound symmetry or independence. However, these estimators will not be consistent unless the hypothesized structure is correct. If interest focuses on estimation of regression coefficients with correlated (or longitudinal) data, a sandwich estimator of the covariance matrix may be used to provide standard errors for the estimated coefficients that are robust in the sense that they remain consistent under misspecification of the covariance structure. With large matrices, however, the inefficiency of the sandwich estimator becomes worrisome. We consider here two general shrinkage approaches to estimating the covariance matrix and regression coefficients. The first involves shrinking the eigenvalues of the unstructured ML or REML estimator. The second involves shrinking an unstructured estimator toward a structured estimator. For both cases, the data determine the amount of shrinkage. These estimators are consistent and give consistent and asymptotically efficient estimates for regression coefficients. Simulations show the improved operating characteristics of the shrinkage estimators of the covariance matrix and the regression coefficients in finite samples. The final estimator chosen includes a combination of both shrinkage approaches, i.e., shrinking the eigenvalues and then shrinking toward structure. 
We illustrate our approach on a sleep EEG study that requires estimation of a 24 x 24 covariance matrix and for which inferences on mean parameters critically depend on the covariance estimator chosen. We recommend making inference using a particular shrinkage estimator that provides a reasonable compromise between structured and unstructured estimators.
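The second shrinkage approach described above, pulling an unstructured estimate toward a structured target, can be sketched in a few lines; here the target is the diagonal (an independence structure) and the shrinkage weight is fixed by hand, whereas in the paper the data determine it:

```python
import numpy as np

def shrink_toward_diagonal(S, lam):
    """Shrink an unstructured covariance estimate S toward a structured
    target, here diag(S):
        S_shrunk = (1 - lam) * S + lam * diag(S),  lam in [0, 1].
    lam = 0 returns S unchanged; lam = 1 returns the independence target.
    """
    target = np.diag(np.diag(S))
    return (1.0 - lam) * S + lam * target
```

Convex combination with a well-conditioned target raises the smallest eigenvalues and lowers the largest, which is exactly the instability of small-sample ML/REML estimates that the abstract describes.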
