Similar Documents (20 results)
1.
We review three methods for estimating the frequency of null alleles at codominant loci (such as microsatellite loci) and present a new maximum likelihood approach. Computer simulations show that the maximum likelihood estimator has a smaller root mean squared error than previous estimators.
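
The abstract does not give the estimator explicitly, so as a minimal illustration of the general idea, here is a hypothetical toy version: a codominant locus with two visible alleles (A, B) plus a null allele, where heterozygotes carrying the null allele are misread as homozygotes, and the null-allele frequency is recovered by numerically maximizing a multinomial likelihood. The two-visible-allele setup and all counts are assumptions for illustration, not the paper's method.

```python
# Toy ML estimation of a null-allele frequency at a codominant locus with
# two visible alleles (A, B) and a null allele N. Under Hardy-Weinberg,
# A/N genotypes are misread as A/A, B/N as B/B, and N/N is a blank.
import numpy as np
from scipy.optimize import minimize

# Observed phenotype counts: apparent AA, apparent BB, AB, blank (N/N)
counts = np.array([120, 45, 60, 5])

def negloglik(theta):
    pA, pB = theta
    pN = 1.0 - pA - pB
    if pA <= 0 or pB <= 0 or pN <= 0:
        return np.inf                       # outside the simplex
    probs = np.array([pA**2 + 2*pA*pN,      # A/A plus hidden A/N
                      pB**2 + 2*pB*pN,      # B/B plus hidden B/N
                      2*pA*pB,              # observable heterozygote
                      pN**2])               # blank
    return -np.sum(counts * np.log(probs))  # multinomial log-likelihood

fit = minimize(negloglik, x0=[0.5, 0.3], method="Nelder-Mead")
pA, pB = fit.x
print(f"p_A={pA:.3f}, p_B={pB:.3f}, null frequency={1 - pA - pB:.3f}")
```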

2.
Composite likelihood methods have become very popular for the analysis of large-scale genomic data sets because of the computational intractability of the basic coalescent process and its generalizations: it is virtually impossible to calculate the likelihood of an observed data set spanning a large chromosomal region without using approximate or heuristic methods. Composite likelihood methods are approximate methods that, in the present article, assume the likelihood can be written as a product of likelihoods, one for each of a number of smaller regions that together make up the whole region from which data are collected. A very general framework for neutral coalescent models is presented and discussed. The framework comprises many of the most popular coalescent models currently used for analysis of genetic data. Assume data are collected from a series of consecutive regions of equal size. It is then shown that the observed data form a stationary, ergodic process. General conditions are given under which the maximum composite likelihood estimator of the parameters describing the model (e.g. mutation rates, demographic parameters and the recombination rate) is a consistent estimator as the number of regions tends to infinity.
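
As a sketch of the composite-likelihood construction described above, the following example splits the data into regions, approximates each region's likelihood independently (here with a Poisson approximation to the number of segregating sites, purely for tractability), and maximizes the product. The data and the Poisson approximation are illustrative assumptions, not the paper's model.

```python
# Composite likelihood over R consecutive regions: the composite
# log-likelihood is the SUM of per-region log-likelihoods, maximized
# jointly over the shared parameter theta.
import numpy as np
from scipy.optimize import minimize_scalar

S = np.array([7, 11, 4, 9, 6, 8])        # segregating sites per region (toy data)
n = 10                                    # number of sampled sequences
a = sum(1.0 / i for i in range(1, n))     # Watterson's constant a_{n-1}

def neg_composite_loglik(theta):
    lam = theta * a                       # expected segregating sites per region
    return -np.sum(S * np.log(lam) - lam) # Poisson log-likelihood, up to a constant

fit = minimize_scalar(neg_composite_loglik, bounds=(1e-6, 100), method="bounded")
print(f"composite-ML estimate of theta per region: {fit.x:.3f}")
```

Under this crude Poisson approximation the composite MLE reduces to mean(S)/a, i.e., Watterson's estimator averaged over regions; richer per-region likelihoods plug into the same sum.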

3.
Thoresen M, Laake P. Biometrics 2000, 56(3): 868-872.
Measurement error models in logistic regression have received considerable theoretical interest over the past 10-15 years. In this paper, we present the results of a simulation study that compares four estimation methods: the so-called regression calibration method, probit maximum likelihood as an approximation to the logistic maximum likelihood, the exact maximum likelihood method based on a logistic model, and the naive estimator, which is the result of simply ignoring the fact that some of the explanatory variables are measured with error. We have compared the behavior of these methods in a simple, additive measurement error model. We show that, in this situation, the regression calibration method is a very good alternative to more mathematically sophisticated methods.
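
A minimal sketch of the regression calibration method mentioned above, under the simplifying assumptions of joint normality and a known measurement-error variance: the error-prone covariate W is replaced by the best linear predictor E[X | W], and an ordinary logistic model is then fitted. All data and parameter values below are simulated for illustration.

```python
# Regression calibration for logistic regression with a mismeasured covariate.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(0.0, 1.0, n)             # true covariate (unobserved)
w = x + rng.normal(0.0, 0.7, n)         # observed surrogate, known error sd 0.7
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-0.5 + 1.0 * x))))

# E[X | W] under joint normality: shrink W toward its mean by the
# reliability ratio lambda = var(X) / (var(X) + var(U)).
lam = 1.0 / (1.0 + 0.7**2)
x_hat = w.mean() + lam * (w - w.mean())

naive = sm.Logit(y, sm.add_constant(w)).fit(disp=0)
calib = sm.Logit(y, sm.add_constant(x_hat)).fit(disp=0)
print("naive slope:     ", round(naive.params[1], 3))  # attenuated toward 0
print("calibrated slope:", round(calib.params[1], 3))  # closer to the true 1.0
```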

4.
The additive hazards model specifies the effect of covariates on the hazard in an additive way, in contrast to the popular Cox model, in which it is multiplicative. In its non-parametric form, the additive hazards model offers a very flexible way of modeling time-varying covariate effects. It is most commonly estimated by ordinary least squares. In this paper, we consider the case where covariates are bounded, and derive the maximum likelihood estimator under the constraint that the hazard is non-negative for all covariate values in their domain. We show that the maximum likelihood estimator may be obtained by separately maximizing the log-likelihood contribution of each event time point, and that the maximization problem is equivalent to fitting a series of Poisson regression models with an identity link under non-negativity constraints. We derive an analytic solution to the maximum likelihood estimator. We contrast the maximum likelihood estimator with the ordinary least-squares estimator in a simulation study and show that the maximum likelihood estimator has smaller mean squared error than the ordinary least-squares estimator. An illustration with data on patients with carcinoma of the oropharynx is provided.
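
To make the Poisson-regression equivalence concrete, here is a hedged single-event-time sketch: among the subjects at risk, the event indicator is treated as Poisson with identity-link mean x_i'beta, with the hazard constrained to be non-negative for every at-risk covariate vector. The data and starting values are toy assumptions, not the paper's implementation.

```python
# Identity-link Poisson view of the additive hazards ML problem at one
# event time, solved as a constrained optimization.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
m = 50                                           # subjects at risk at this event time
X = np.column_stack([np.ones(m), rng.uniform(0, 1, m)])  # baseline + one covariate
y = np.zeros(m)
y[0] = 1.0                                       # exactly one subject fails here

def negloglik(beta):
    mu = np.clip(X @ beta, 1e-10, None)          # clip only to keep the log finite
    return -np.sum(y * np.log(mu) - mu)          # Poisson log-likelihood, identity link

# Non-negativity of the hazard for every at-risk covariate vector
cons = [{"type": "ineq", "fun": lambda b, i=i: X[i] @ b} for i in range(m)]
fit = minimize(negloglik, x0=[0.05, 0.01], method="SLSQP", constraints=cons)
print("constrained ML hazard increments:", np.round(fit.x, 4))
```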

5.
The reliabilities of parsimony-based and likelihood-based methods for inferring positive selection at single amino acid sites were studied using the nucleotide sequences of human leukocyte antigen (HLA) genes, in which positive selection is known to be operating at the antigen recognition site. The results indicate that the inference by parsimony-based methods is robust to the use of different evolutionary models and generally more reliable than that by likelihood-based methods. In contrast, the results obtained by likelihood-based methods depend on the models and on the initial parameter values used. It is sometimes difficult to obtain the maximum likelihood estimates of parameters for a given model, and the results obtained may be false negatives or false positives depending on the initial parameter values. It is therefore preferable to use parsimony-based methods as long as the number of sequences is relatively large and the branch lengths of the phylogenetic tree are relatively small.

6.
We consider the question: In a segregation analysis, can knowledge of the family-size distribution (FSD) in the population from which a sample is drawn improve the estimators of genetic parameters? In other words, should one incorporate the population FSD into a segregation analysis if one knows it? If so, then under what circumstances? And how much improvement may result? We examine the variance and bias of the maximum likelihood estimators both asymptotically and in finite samples. We consider Poisson and geometric FSDs, as well as a simple two-valued FSD in which all families in the population have either one or two children. We limit our study to a simple genetic model with truncate selection. We find that if the FSD is completely specified, then the asymptotic variance of the estimator may be reduced by as much as 5%-10%, especially when the FSD is heavily skewed toward small families. Results in small samples are less clear-cut. For some of the simple two-valued FSDs, the variance of the estimator in small samples of one- and two-child families may actually be increased slightly when the FSD is included in the analysis. If one knows only the statistical form of the FSD, but not its parameter, then the estimator is improved only minutely. Our study also underlines the fact that results derived from asymptotic maximum likelihood theory do not necessarily hold in small samples. We conclude that in most practical applications it is not worth incorporating the FSD into a segregation analysis. However, this practice may be justified under special circumstances where the FSD is completely specified, without error, and the population consists overwhelmingly of small families.

7.
The models of nucleotide substitution used by most maximum likelihood-based methods assume that the evolutionary process is stationary, reversible, and homogeneous. We present an extension of the Barry and Hartigan model, which can be used to estimate parameters by maximum likelihood (ML) when the data contain invariant sites and there are violations of the assumptions of stationarity, reversibility, and homogeneity. Unlike most ML methods for estimating invariant sites, we estimate the nucleotide composition of invariant sites separately from that of variable sites. We analyze a bacterial data set where problems due to lack of stationarity and homogeneity have been previously well noted and use the parametric bootstrap to show that the data are consistent with our general Markov model. We also show that estimates of invariant sites obtained using our method are fairly accurate when applied to data simulated under the general Markov model.

8.
Leveraging information in aggregate data from external sources to improve estimation efficiency and prediction accuracy with smaller scale studies has drawn a great deal of attention in recent years. Yet, conventional methods often either ignore uncertainty in the external information or fail to account for the heterogeneity between internal and external studies. This article proposes an empirical likelihood-based framework to improve the estimation of the semiparametric transformation models by incorporating information about the t-year subgroup survival probability from external sources. The proposed estimation procedure incorporates an additional likelihood component to account for uncertainty in the external information and employs a density ratio model to characterize population heterogeneity. We establish the consistency and asymptotic normality of the proposed estimator and show that it is more efficient than the conventional pseudopartial likelihood estimator without combining information. Simulation studies show that the proposed estimator yields little bias and outperforms the conventional approach even in the presence of information uncertainty and heterogeneity. The proposed methodologies are illustrated with an analysis of a pancreatic cancer study.

9.
A new genetic estimator of the effective population size (N(e)) is introduced. This likelihood-based (LB) estimator uses two temporally spaced genetic samples of individuals from a population. We compared its performance to that of the classical F-statistic-based N(e) estimator (N(eFk)) by using data from simulated populations with known N(e) and real populations. The new likelihood-based estimator (N(eLB)) showed narrower credible intervals and greater accuracy than N(eFk) when genetic drift was strong, but performed only slightly better when genetic drift was relatively weak. When drift was strong (e.g., N(e) = 20 for five generations), as few as approximately 10 loci (heterozygosity of 0.6; samples of 30 individuals) are sufficient to consistently achieve credible intervals with an upper limit <50 using the LB method. In contrast, approximately 20 loci are required for the same precision when using the classical F-statistic approach. The N(eLB) estimator is much improved over the classical method when there are many rare alleles. It will be especially useful in conservation biology because it less often overestimates N(e) than does N(eFk) and thus is less likely to erroneously suggest that a population is large and has a low extinction risk.
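
For context, here is a hedged sketch of the classical F-statistic (temporal) approach that N(eLB) is compared against: drift over t generations inflates the standardized variance in allele frequencies, which is corrected for sampling noise and inverted to give an Ne estimate. The Nei-Tajima Fc variant and all data values below are illustrative assumptions.

```python
# Classical temporal-method Ne estimate from two allele-frequency samples.
import numpy as np

x = np.array([0.60, 0.25, 0.15])   # allele frequencies, sample at generation 0
y = np.array([0.48, 0.33, 0.19])   # allele frequencies, t generations later
S0, St, t = 50, 50, 5              # individuals sampled each time; generations elapsed

# Nei-Tajima standardized variance in allele frequency, averaged over alleles
Fc = np.mean((x - y)**2 / ((x + y) / 2 - x * y))
# Subtract the sampling-noise contribution of each sample (2S genes per sample)
Ne_hat = t / (2 * (Fc - 1 / (2 * S0) - 1 / (2 * St)))
print(f"Fc = {Fc:.4f}, Ne estimate = {Ne_hat:.1f}")
```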

10.
Tallmon DA, Luikart G, Beaumont MA. Genetics 2004, 167(2): 977-988.
We describe and evaluate a new estimator of the effective population size (N(e)), a critical parameter in evolutionary and conservation biology. This new "SummStat" N(e) estimator is based upon the use of summary statistics in an approximate Bayesian computation framework to infer N(e). Simulations of a Wright-Fisher population with known N(e) show that the SummStat estimator is useful across a realistic range of individuals and loci sampled, generations between samples, and N(e) values. We also address the paucity of information about the relative performance of N(e) estimators by comparing the SummStat estimator to two recently developed likelihood-based estimators and a traditional moment-based estimator. The SummStat estimator is the least biased of the four estimators compared. In 32 of 36 parameter combinations investigated using initial allele frequencies drawn from a Dirichlet distribution, it has the lowest bias. The relative mean square error (RMSE) of the SummStat estimator was generally intermediate to the others. All of the estimators had RMSE > 1 when small samples (n = 20, five loci) were collected a generation apart. In contrast, when samples were separated by three or more generations and N(e) ≤ 50, the SummStat and likelihood-based estimators all had greatly reduced RMSE. Under the conditions simulated, SummStat confidence intervals were more conservative than the likelihood-based estimators and more likely to include true N(e). The greatest strength of the SummStat estimator is its flexible structure. This flexibility allows it to incorporate any potentially informative summary statistic from population genetic data.
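
A toy rejection-ABC sketch in the spirit of a SummStat-type estimator, reduced to a single biallelic locus and a single summary statistic (the temporal allele-frequency shift): candidate Ne values are drawn from a prior, Wright-Fisher drift is simulated, and candidates whose simulated statistic matches the observed one are kept. The prior, tolerance, and data are assumptions for illustration, not the authors' implementation.

```python
# Rejection ABC for Ne with one summary statistic.
import numpy as np

rng = np.random.default_rng(3)
p0, t = 0.5, 5                       # starting frequency; generations between samples
obs_shift = 0.12                     # observed |p_t - p_0| (hypothetical data)

def simulate_shift(Ne):
    p = p0
    for _ in range(t):               # binomial sampling of 2Ne gene copies per generation
        p = rng.binomial(2 * Ne, p) / (2 * Ne)
    return abs(p - p0)

accepted = []
for _ in range(20000):
    Ne = int(rng.uniform(10, 500))   # uniform prior on Ne
    if abs(simulate_shift(Ne) - obs_shift) < 0.01:   # acceptance tolerance
        accepted.append(Ne)

accepted = np.array(accepted)
print(f"posterior mean Ne ~ {accepted.mean():.0f}, "
      f"95% interval ~ ({np.percentile(accepted, 2.5):.0f}, "
      f"{np.percentile(accepted, 97.5):.0f})")
```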

11.
Food frequency questionnaires (FFQs) are commonly used to assess dietary intake in epidemiologic research. To evaluate the FFQ reliability, the commonly used approach is to estimate the correlation coefficient between the data given in FFQ and those in food records (for example, 4-day food records [4DFR]) for nutrients of interest. However, in a dietary intervention study, a criterion for eligibility may be to select participants who have baseline FFQ-measured dietary intake of percent energy from fat above a prespecified quantity. Other instruments, such as the 4DFR, may subsequently be administered only to eligible participants. Under these circumstances, analysis without adjusting for the restricted population will usually lead to biased estimation of correlation coefficients and other parameters of interest. In this paper, we apply likelihood-based and multiple imputation (MI) methods to accommodate such incomplete data obtained as a result of the study design. A simulation study is conducted to examine finite sample performance of various estimators. We note that both the MI estimate and the maximum likelihood (ML) estimate based on a bivariate-normal model are not sensitive to departures from this normality assumption. This led us to investigate robustness properties of the ML estimator analytically. We present some data analyses from a dietary assessment study from the Women's Health Initiative to illustrate the methods.
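
A hedged sketch of the likelihood-based adjustment for the restricted population: if (FFQ, 4DFR) pairs are recorded only when the FFQ value exceeds a cutoff c, each bivariate-normal density can be renormalized by P(FFQ > c) before maximizing. The cutoff, parameterization, and simulated data below are assumptions for illustration.

```python
# ML estimation of a bivariate-normal correlation under selection on the
# first coordinate (observed only when it exceeds c).
import numpy as np
from scipy.stats import multivariate_normal, norm
from scipy.optimize import minimize

rng = np.random.default_rng(4)
c = 0.0
full = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=4000)
data = full[full[:, 0] > c]          # only the restricted sample is observed

def negloglik(theta):
    mu1, mu2, ls1, ls2, zr = theta
    s1, s2 = np.exp(ls1), np.exp(ls2)           # log scale keeps sds positive
    rho = np.tanh(zr)                           # keeps |rho| < 1
    cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
    dens = multivariate_normal.logpdf(data, mean=[mu1, mu2], cov=cov)
    # Each pair was only observable if its first coordinate exceeded c
    return -(np.sum(dens) - len(data) * norm.logsf(c, loc=mu1, scale=s1))

fit = minimize(negloglik, x0=[0.5, 0.0, 0.0, 0.0, 0.5], method="Nelder-Mead",
               options={"maxiter": 5000})
print("ML correlation (truncation-adjusted):", round(np.tanh(fit.x[4]), 3))
print("naive sample correlation:            ", round(np.corrcoef(data.T)[0, 1], 3))
```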

12.
Studies in genetics and ecology often require estimates of relatedness coefficients based on genetic marker data. Many diploid estimators have been developed using either method-of-moments or maximum-likelihood estimates. However, there are no relatedness estimators for polyploids. The development of a moment estimator for polyploids with polysomic inheritance, which simultaneously incorporates the two-gene relatedness coefficient and various 'higher-order' coefficients, is described here. The performance of the estimator is compared to other estimators under a variety of conditions. When using a small number of loci, the estimator is biased because ill-conditioned matrices arise more often. However, the estimator becomes asymptotically unbiased with large numbers of loci. The ambiguity of polyploid heterozygotes (when balanced heterozygotes cannot be distinguished from unbalanced heterozygotes) is also considered; as with low numbers of loci, genotype ambiguity leads to bias. Software implementing this method, PolyRelatedness, which supports a maximum ploidy of 8, is provided.

13.
Generalized linear mixed models (GLMMs) have become a frequently used tool for the analysis of non-Gaussian longitudinal data. Estimation is based on maximum likelihood theory, which assumes that the underlying probability model is correctly specified. Recent research shows that the results obtained from these models are not always robust against departures from the assumptions on which these models are based. In the present work we have used simulations with a logistic random-intercept model to study the impact of misspecifying the random-effects distribution on the type I and II errors of the tests for the mean structure in GLMMs. We found that the misspecification can either increase or decrease the power of the tests, depending on the shape of the underlying random-effects distribution, and it can considerably inflate the type I error rate. Additionally, we have found a theoretical result which states that whenever a subset of fixed-effects parameters not included in the random-effects structure equals zero, the corresponding maximum likelihood estimator will consistently estimate zero. This implies that under certain conditions a significant effect could be considered a reliable result, even if the random-effects distribution is misspecified.
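
A minimal sketch of this kind of simulation: binary responses are generated from a logistic random-intercept model with a skewed (centered exponential) random effect and a null fixed effect, then fitted by ML under the usual normal random-effects assumption via Gauss-Hermite quadrature; repeating this and counting Wald rejections of beta1 = 0 estimates the type I error. All settings are illustrative assumptions, not the authors' design.

```python
# One replicate of a misspecified-random-effects simulation for a
# logistic random-intercept GLMM.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(5)
n_clus, m = 100, 10
x = rng.normal(size=(n_clus, m))
b = rng.exponential(1.0, n_clus) - 1.0            # skewed random intercepts, mean 0
y = rng.binomial(1, expit(b[:, None] + 0.0 * x))  # true beta0 = beta1 = 0

nodes, weights = np.polynomial.hermite.hermgauss(15)

def negloglik(theta):
    b0, b1, log_sig = theta
    sig = np.exp(log_sig)
    ll = 0.0
    for i in range(n_clus):                       # integrate over b ~ N(0, sig^2)
        p = expit((b0 + b1 * x[i])[:, None] + np.sqrt(2) * sig * nodes[None, :])
        lik = np.prod(np.where(y[i][:, None] == 1, p, 1 - p), axis=0)
        ll += np.log(np.dot(weights, lik) / np.sqrt(np.pi))
    return -ll

fit = minimize(negloglik, x0=[0.0, 0.0, 0.0], method="BFGS")
se = np.sqrt(np.diag(fit.hess_inv))               # approximate SEs from BFGS
print(f"beta1 = {fit.x[1]:.3f}, Wald z = {fit.x[1] / se[1]:.2f}")
```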

14.
The genetic length of a genome, in units of Morgans or centimorgans, is a fundamental characteristic of an organism. We propose a maximum likelihood method for estimating this quantity from counts of recombinants and nonrecombinants between marker locus pairs studied from a backcross linkage experiment, assuming no interference and equal chromosome lengths. This method allows the calculation of the standard deviation of the estimate and a confidence interval containing the estimate. Computer simulations have been performed to evaluate and compare the accuracy of the maximum likelihood method and a previously suggested method-of-moments estimator. Specifically, we have investigated the effects of the number of meioses, the number of marker loci, and variation in the genetic lengths of individual chromosomes on the estimate. The effect of missing data, obtained when the results of two separate linkage studies with a fraction of marker loci in common are pooled, is also investigated. The maximum likelihood estimator, in contrast to the method-of-moments estimator, is relatively insensitive to violation of the assumptions made during analysis and is the method of choice. The various methods are compared by application to partial linkage data from Xiphophorus.
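
One building block of such genome-length estimation can be made concrete: in a backcross, the ML estimate of the recombination fraction between a marker pair is simply the observed recombinant proportion, which Haldane's no-interference map function converts to map distance. The counts below are hypothetical, and the paper's full estimator pools many such pairs.

```python
# Pairwise recombination fraction and Haldane map distance from backcross counts.
import numpy as np

k, n = 23, 200                         # recombinants among n backcross progeny
r_hat = k / n                          # binomial MLE of the recombination fraction
d_hat = -0.5 * np.log(1 - 2 * r_hat)   # Haldane: d = -(1/2) ln(1 - 2r), in Morgans

# Delta-method standard error of d_hat from the binomial variance of r_hat
se_r = np.sqrt(r_hat * (1 - r_hat) / n)
se_d = se_r / (1 - 2 * r_hat)          # |dd/dr| = 1 / (1 - 2r)
print(f"r = {r_hat:.3f}, map distance = {100*d_hat:.1f} cM (SE {100*se_d:.1f} cM)")
```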

15.
Estimates of relatedness coefficients, based on genetic marker data, are often necessary for studies of genetics and ecology. Whilst many estimators based on method-of-moments or maximum-likelihood approaches exist for diploid organisms, no such estimators exist for organisms with multiple ploidy levels, which occur in some insect and plant species. Here, we extend five estimators to account for different levels of ploidy: one relatedness coefficient estimator, three coancestry-coefficient estimators and one maximum-likelihood estimator. We use arrhenotoky (when unfertilized eggs develop into haploid males) as an example in evaluations of estimator performance by Monte Carlo simulation. Also, three virtual sex-determination systems are simulated to evaluate performance for higher levels of ploidy. Additionally, we used two real data sets to test the robustness of these estimators under actual conditions. We make available a software package, PolyRelatedness, for other researchers to apply to organisms that have various levels of ploidy.

16.
We consider (approximate) likelihood methods for estimating the population-scaled recombination rate from population genetic data. We show that the dependence between the data from two regions of a chromosome decays inversely with the amount of recombination between the two regions. We use this result to show that the maximum likelihood estimator (MLE) for the recombination rate, based on the composite likelihood of Fearnhead and Donnelly, is consistent. We also consider inference based on the pairwise likelihood of Hudson. We consider two approximations to this likelihood, and prove that the MLE based on one of these approximations is consistent, while the MLE based on the other approximation (which is used by McVean, Awadalla and Fearnhead) is not.

17.
The field of phylogenetic tree estimation has been dominated by three broad classes of methods: distance-based approaches, parsimony and likelihood-based methods (including maximum likelihood (ML) and Bayesian approaches). Here we introduce two new approaches to tree inference: pairwise likelihood estimation and a distance-based method that estimates the number of substitutions along the paths through the tree. Our results include the derivation of the formulae for the probability that two leaves will be identical at a site given a number of substitutions along the path connecting them. We also derive the posterior probability of the number of substitutions along a path between two sequences. The calculations for the posterior probabilities are exact for group-based, symmetric models of character evolution, but are only approximate for more general models.
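
The paper's exact formulae are not reproduced in the abstract, but the flavor of such a result can be checked numerically: under a symmetric 4-state model in which each substitution moves to one of the other three states uniformly (our assumption, in the spirit of the group-based models mentioned), the probability that the two endpoints agree after k substitutions has the closed form P(same | k) = 1/4 + (3/4)(-1/3)^k. The sketch verifies this against matrix powers.

```python
# Verify P(same | k substitutions) for a symmetric 4-state substitution model.
import numpy as np

M = (np.ones((4, 4)) - np.eye(4)) / 3.0      # one-substitution transition matrix
for k in range(6):
    exact = np.linalg.matrix_power(M, k)[0, 0]
    closed = 0.25 + 0.75 * (-1.0 / 3.0)**k
    print(k, round(exact, 6), round(closed, 6))
```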

18.
Yu KF. Biometrics 1992, 48(3): 961-963; discussion 963-964.
An estimator proposed by Greenland and Holland (1991, Biometrics 47, 319-322) for a standardized risk difference parameter is shown to be a maximum likelihood estimator if the consistent estimator of the common odds ratio is appropriately chosen. The statistical problem under consideration is reparameterized. Likelihood equations are derived.

19.
MOTIVATION: In recent years there has been increased interest in producing large and accurate phylogenetic trees using statistical approaches. However, for a large number of taxa it is not feasible to construct large and accurate trees using only a single processor. A number of specialized parallel programs have been produced in an attempt to address the huge computational requirements of maximum likelihood. We express a number of concerns about the current set of parallel phylogenetic programs, which severely limit the widespread availability and use of parallel computing in maximum likelihood-based phylogenetic analysis. RESULTS: We have identified the suitability of phylogenetic analysis to large-scale heterogeneous distributed computing. We have completed a distributed and fully cross-platform phylogenetic tree building program called distributed phylogeny reconstruction by maximum likelihood. It uses an already proven maximum likelihood-based tree building algorithm and a popular phylogenetic analysis library for all its likelihood calculations. It offers one of the most extensive sets of DNA substitution models currently available. We are the first, to our knowledge, to report the completion of a distributed phylogenetic tree building program that can achieve near-linear speedup while using only the idle clock cycles of machines. For those in an academic or corporate environment with hundreds of idle desktop machines, we have shown how distributed computing can deliver a 'free' ML supercomputer.

20.
The log-det estimator is a measure of divergence (evolutionary distance) between sequences of biological characters, DNA or amino acids, for example, and has been shown to be robust to biases in composition that can cause problems for other estimators. We provide a statistical framework to construct high-accuracy confidence intervals for log-det estimates and compare the efficiency of the estimator to that of maximum likelihood using time-reversible Markov models. The log-det estimator is found to have good statistical properties under such general models.
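
As an illustration of a log-det style calculation, the hedged sketch below computes one common variant (Lake's paralinear distance) from a 4x4 divergence matrix F of joint site-pattern proportions. The matrix values are hypothetical, and the paper's own estimator and confidence-interval construction may differ in detail.

```python
# Paralinear / log-det distance from a pairwise divergence matrix.
import numpy as np

# F[i, j]: proportion of sites with state i in sequence 1 and j in sequence 2
F = np.array([[0.21, 0.02, 0.03, 0.01],
              [0.02, 0.22, 0.01, 0.02],
              [0.03, 0.01, 0.20, 0.02],
              [0.01, 0.02, 0.02, 0.15]])

fx = F.sum(axis=1)       # marginal base composition, sequence 1
fy = F.sum(axis=0)       # marginal base composition, sequence 2

# d = -(1/4) * ln[ det(F) / sqrt(det(Dx) * det(Dy)) ], Dx = diag(fx), Dy = diag(fy)
d = -0.25 * (np.log(np.linalg.det(F))
             - 0.5 * (np.sum(np.log(fx)) + np.sum(np.log(fy))))
print(f"paralinear / log-det distance: {d:.4f}")
```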
