Similar articles
20 similar articles found.
1.
The comparison of the efficiency of two binary diagnostic tests requires knowing the disease status of all patients in the sample, obtained by applying a gold standard. In two-phase studies the gold standard is not applied to all patients in the sample, and the problem of partial verification of disease status arises. At present, one of the approaches most widely used for comparing two binary diagnostic tests is the likelihood ratio. In this study, the maximum likelihood estimators of the likelihood ratios are obtained. Hypothesis tests to compare the likelihood ratios of two binary diagnostic tests, when both are applied to the same random sample in the presence of verification bias, are derived, and simulation experiments are performed to investigate the asymptotic behaviour of the tests. The results obtained have been applied to the study of Alzheimer's disease.
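The likelihood ratios being compared can be illustrated with a minimal sketch for the fully verified case (all counts below are hypothetical; this does not implement the article's bias-corrected estimators for two-phase designs):

```python
# Positive and negative likelihood ratios of a binary diagnostic test
# from a fully verified 2x2 table (counts here are hypothetical).
# Full-verification case only; NOT the article's verification-bias-corrected
# estimators.

def likelihood_ratios(tp, fp, fn, tn):
    """Return (LR+, LR-) given true/false positives and negatives."""
    sens = tp / (tp + fn)                      # sensitivity, P(T+ | diseased)
    spec = tn / (tn + fp)                      # specificity, P(T- | healthy)
    return sens / (1 - spec), (1 - sens) / spec

lr_pos, lr_neg = likelihood_ratios(tp=90, fp=20, fn=10, tn=80)
# sens = 0.9, spec = 0.8  ->  LR+ = 4.5, LR- = 0.125
```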

2.
Estimation for an island model where mutation maintains a k-allele neutral polymorphism at a single locus on each island is considered. The likelihood of an observed sample type configuration is obtained by applying a computational algorithm analogous to Griffiths and Tavaré (Theor. Popul. Biol. 46 (1994), 131–159). This allows the computation of sampling distributions in an island model and investigation of their properties. Given a sample type configuration, the maximum likelihood estimate of the migration parameter is obtained by simulating the likelihood independently at a grid of points and, also, by using a surface simulation method. The latter method generates the whole likelihood trajectory in a single application of the simulation program. An estimate of the variance of the estimate of the migration parameter is obtained using the likelihood trajectory. The maximum likelihood estimates of gene flow between subpopulations are compared with those obtained using Wright's FST statistic.

3.
Small sample properties of the maximum likelihood estimator for the rate constant of a stochastic first order reaction are investigated. The approximate bias and variance of the maximum likelihood estimator are derived and tabulated. If observations of the system are made at times iτ, i = 1, 2, …, N; τ > 0, the observational spacing τ which minimizes the approximate variance of the maximum likelihood estimator is found. The non-applicability of large sample theory to confidence interval derivation is demonstrated by examination of the relative likelihood. Bartlett's method is employed to derive approximate confidence limits, and is illustrated using simulated kinetic runs.
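For intuition, in the simpler setting where each molecule's lifetime is observed directly (rather than the equally spaced observation scheme above), the rate-constant MLE has a closed form. A simplified sketch under that assumption:

```python
# MLE of a first-order rate constant k when individual molecule lifetimes
# are observed completely: lifetimes are exponential(k), so k_hat = n / sum(t_i).
# A simplified sketch only -- the abstract's equally spaced sampling scheme
# leads to a different estimator with small-sample bias.

def mle_rate_constant(lifetimes):
    """Closed-form MLE for the exponential rate from complete lifetimes."""
    return len(lifetimes) / sum(lifetimes)

k_hat = mle_rate_constant([1.0, 2.0, 3.0, 4.0])   # n=4, sum=10 -> 0.4
```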

4.
Maximum likelihood estimation of the model parameters for a spatial population based on data collected from a survey sample is usually straightforward when sampling and non-response are both non-informative, since the model can then usually be fitted using the available sample data, and no allowance is necessary for the fact that only a part of the population has been observed. Although for many regression models this naive strategy yields consistent estimates, this is not the case for some models, such as spatial auto-regressive models. In this paper, we show that for a broad class of such models, a maximum marginal likelihood approach that uses both sample and population data leads to more efficient estimates since it uses spatial information from sampled as well as non-sampled units. Extensive simulation experiments based on two well-known data sets are used to assess the impact of the spatial sampling design, the auto-correlation parameter and the sample size on the performance of this approach. When compared to some widely used methods that use only sample data, the results from these experiments show that the maximum marginal likelihood approach is much more precise.

5.
We report a theory that gives the sampling distribution of two-marker haplotypes that are linked to a rare disease mutation. The sampling distribution is generated with successive Monte Carlo realizations of the coalescence of the disease mutation having recombination and marker mutation events placed along the lineage. Given a sample of mutation-bearing, two-marker haplotypes, the maximum likelihood estimate of the location of the disease mutation can be calculated from the generated sampling distribution, provided that one knows enough about the population history in order to model it. The two-marker likelihood method is compared to a single-marker likelihood and a composite likelihood. The two-marker maximum likelihood gives smaller confidence intervals for the location of the disease locus than a comparable single-marker maximum likelihood. The composite likelihood can give biased results and the bias increases as the extent of linkage disequilibrium on mutation-bearing chromosomes decreases. Haplotype configurations exist for which the composite likelihood will fail to place the disease locus in the correct marker interval.

6.
M. K. Kuhner, J. Yamato & J. Felsenstein, Genetics 1995, 140(4):1421–1430
We present a new way to make a maximum likelihood estimate of the parameter 4N(e)μ (effective population size times mutation rate per site, or θ) based on a population sample of molecular sequences. We use a Metropolis-Hastings Markov chain Monte Carlo method to sample genealogies in proportion to the product of their likelihood with respect to the data and their prior probability with respect to a coalescent distribution. A specific value of θ must be chosen to generate the coalescent distribution, but the resulting trees can be used to evaluate the likelihood at other values of θ, generating a likelihood curve. This procedure concentrates sampling on those genealogies that contribute most of the likelihood, allowing estimation of meaningful likelihood curves based on relatively small samples. The method can potentially be extended to cases involving varying population size, recombination, and migration.

7.
W. W. Hauck, Biometrics 1984, 40(4):1117–1123
The finite-sample properties of various point estimators of a common odds ratio from multiple 2 × 2 tables have been considered in a number of simulation studies. However, the conditional maximum likelihood estimator has received only limited attention. That omission is partially rectified here for cases of relatively small numbers of tables and moderate to large within-table sample sizes. The conditional maximum likelihood estimator is found to be superior to the unconditional maximum likelihood estimator, and equal or superior to the Mantel-Haenszel estimator in both bias and precision.
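The Mantel-Haenszel estimator referred to above has a simple closed form; a minimal sketch (table counts hypothetical):

```python
# Mantel-Haenszel common odds ratio across K 2x2 tables.
# Each table is (a, b, c, d): a = exposed cases, b = exposed controls,
# c = unexposed cases, d = unexposed controls.

def mantel_haenszel_or(tables):
    """OR_MH = sum_k(a_k d_k / n_k) / sum_k(b_k c_k / n_k)."""
    num = sum(a * d / (a + b + c + d) for (a, b, c, d) in tables)
    den = sum(b * c / (a + b + c + d) for (a, b, c, d) in tables)
    return num / den

# With a single table the estimator reduces to the crude odds ratio ad/bc:
or_mh = mantel_haenszel_or([(10, 5, 4, 8)])   # (10*8)/(5*4) = 4.0
```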

8.
Analysis of variance and principal components methods have been suggested for estimating repeatability. In this study, six estimation procedures are compared: ANOVA, principal components based on the sample covariance matrix and also on the sample correlation matrix, a related multivariate method (structural analysis) based on the sample covariance matrix and also on the sample correlation matrix, and maximum likelihood estimation. A simulation study indicates that when the standard linear model assumptions are met, the estimators are quite similar except when the repeatability is small. Overall, maximum likelihood appears the preferred method. If the assumption of equal variance is relaxed, the methods based on the sample correlation matrix perform better, although the others are surprisingly robust. The structural analysis method (with sample correlation matrix) appears to be best. (Paper number 776 from the Department of Meat and Animal Science, University of Wisconsin-Madison.)

9.
Clinical trials with Poisson distributed count data as the primary outcome are common in various medical areas such as relapse counts in multiple sclerosis trials or the number of attacks in trials for the treatment of migraine. In this article, we present approximate sample size formulae for testing noninferiority using asymptotic tests which are based on restricted or unrestricted maximum likelihood estimators of the Poisson rates. The Poisson outcomes are allowed to be observed for unequal follow-up schemes, and both the situations that the noninferiority margin is expressed in terms of the difference and the ratio are considered. The exact type I error rates and powers of these tests are evaluated and the accuracy of the approximate sample size formulae is examined. The test statistic using the restricted maximum likelihood estimators (for the difference test problem) and the test statistic that is based on the logarithmic transformation and employs the maximum likelihood estimators (for the ratio test problem) show favorable type I error control and can be recommended for practical application. The approximate sample size formulae show high accuracy even for small sample sizes and provide power values identical or close to the aspired ones. The methods are illustrated by a clinical trial example from anesthesia.
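A generic Wald-type version of such a sample-size formula for the rate-difference case (unit follow-up, equal allocation) can be sketched as follows. This is a textbook-style approximation with an unrestricted variance estimate, not the article's restricted-ML formulae; the rates and margin in the example are hypothetical:

```python
# Approximate per-group sample size for a Wald-type noninferiority test of
# a Poisson rate difference, H0: lam_t - lam_c <= -margin (margin > 0),
# unit follow-up, equal allocation. Textbook-style sketch, not the
# article's restricted-ML formulae.
from math import ceil
from statistics import NormalDist

def n_per_group(lam_t, lam_c, margin, alpha=0.025, power=0.8):
    z = NormalDist().inv_cdf
    return ceil((z(1 - alpha) + z(power)) ** 2 * (lam_t + lam_c)
                / (lam_t - lam_c + margin) ** 2)

# Example: equal true rates of 2 events per unit time, margin 0.5:
n = n_per_group(lam_t=2.0, lam_c=2.0, margin=0.5)
```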

10.
This article investigates maximum likelihood estimation with saturated and unsaturated models for correlated exchangeable binary data, when a sample of independent clusters of varying sizes is available. We discuss various parameterizations of these models, and propose using the EM algorithm to obtain maximum likelihood estimates. The methodology is illustrated by applications to a study of familial disease aggregation and to the design of a proposed group randomized cancer prevention trial.

11.
Outcome-dependent sampling (ODS) schemes can be a cost-effective way to enhance study efficiency. The case-control design has been widely used in epidemiologic studies. However, when the outcome is measured on a continuous scale, dichotomizing the outcome can lead to a loss of efficiency. Recent epidemiologic studies have used ODS schemes where, in addition to an overall random sample, a number of supplemental samples are collected based on a continuous outcome variable. We consider a semiparametric empirical likelihood inference procedure in which the underlying distribution of covariates is treated as a nuisance parameter and is left unspecified. The proposed estimator has asymptotic normality properties. The likelihood ratio statistic using the semiparametric empirical likelihood function has Wilks-type properties in that, under the null, it follows a chi-square distribution asymptotically and is independent of the nuisance parameters. Our simulation results indicate that, for data obtained using an ODS design, the semiparametric empirical likelihood estimator is more efficient than conditional likelihood and probability-weighted pseudolikelihood estimators, and that ODS designs (along with the proposed estimator) can produce more efficient estimates than simple random sample designs of the same size. We apply the proposed method to analyze a data set from the Collaborative Perinatal Project (CPP), an ongoing environmental epidemiologic study, to assess the relationship between maternal polychlorinated biphenyl (PCB) level and children's IQ test performance.

12.
Stephens and Donnelly have introduced a simple yet powerful importance sampling scheme for computing the likelihood in population genetic models. Fundamental to the method is an approximation to the conditional probability of the allelic type of an additional gene, given those currently in the sample. As noted by Li and Stephens, the product of these conditional probabilities for a sequence of draws that gives the frequency of allelic types in a sample is an approximation to the likelihood, and can be used directly in inference. The aim of this note is to demonstrate the high accuracy of the "product of approximate conditionals" (PAC) likelihood when used with microsatellite data. Results obtained on simulated microsatellite data show that this strategy leads to a negligible bias over a wide range of the scaled mutation parameter theta. Furthermore, the sampling variance of the likelihood estimates as well as the computation time are lower than those obtained with importance sampling over the whole range of theta. It follows that this approach represents an efficient substitute for importance sampling algorithms in computer-intensive (e.g. MCMC) inference methods in population genetics.
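The "product of conditionals" idea can be illustrated in its simplest setting. Under the infinite-alleles model, the sequential conditionals of the Hoppe urn make the product exact (it recovers the Ewens sampling formula); Stephens and Donnelly's contribution is an analogous approximate conditional for sequence and microsatellite data. A toy sketch of the exact infinite-alleles case:

```python
# Sequential-conditionals likelihood: multiply the conditional probability
# of each gene's allelic type given the genes drawn before it.
# Under the infinite-alleles model these conditionals (the Hoppe urn) make
# the product exact; for microsatellites the Stephens-Donnelly conditionals
# are approximate, giving the PAC likelihood.

def sequential_likelihood(ordered_types, theta):
    counts, lik = {}, 1.0
    for j, t in enumerate(ordered_types):   # j genes already seen
        if t in counts:
            lik *= counts[t] / (j + theta)  # copy an existing type
        else:
            lik *= theta / (j + theta)      # novel type (mutation)
        counts[t] = counts.get(t, 0) + 1
    return lik

# Two genes of the same type with theta = 1: 1 * 1/(1+1) = 0.5
p = sequential_likelihood(["a", "a"], theta=1.0)
```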

13.
This paper goes one step beyond the age determination of the individual skeleton. It presents two methods for the reconstruction of the distribution of the age at death in a sample of skeletons. It is shown that the maximum likelihood method, in spite of certain weaknesses, is superior to the traditionally used proportional method, especially in situations when the analyzed sample consists of skeletons with very different conditions of preservation and thus age intervals of differing lengths. The maximum likelihood method eliminates a bias towards increased average age at death which is introduced by the proportional method. An analysis of an empirical example indicates that the difference between results obtained using the two methods increases with increasing age.

14.
This paper studies two methods for estimating the unknown parameters of hierarchical generalized linear models (HGLMs): the marginal likelihood method and the L-N method proposed by Lee and Nelder. For a class of typical Poisson-Gamma models with two random effects, under certain regularity conditions, the strong consistency and asymptotic normality of the L-N estimator of the fixed effect β have already been proved, together with its rate of convergence to the true value. For this class of models, this paper further derives an analytic expression for the marginal likelihood function and, through Monte Carlo simulation, compares the marginal likelihood estimator and the L-N estimator of the fixed effect β. The simulations show that the L-N estimator performs better than the marginal likelihood estimator in the Poisson-Gamma model, with higher precision.

15.
Little attention has been paid to the use of multi-sample batch-marking studies, as it is generally assumed that an individual's capture history is necessary for fully efficient estimates. However, Huggins et al. (2010) recently presented a pseudo-likelihood for a multi-sample batch-marking study in which they used estimating equations to solve for survival and capture probabilities and then derived abundance estimates using a Horvitz–Thompson-type estimator. We have developed and maximized the likelihood for batch-marking studies. We use data simulated from a Jolly–Seber-type study and convert this to what would have been obtained from an extended batch-marking study. We compare our abundance estimates obtained from the Crosbie–Manly–Arnason–Schwarz (CMAS) model with those of the extended batch-marking model to determine the efficiency of collecting and analyzing batch-marking data. We found that estimates of abundance were similar for all three estimators: CMAS, Huggins, and our likelihood. Gains in precision are made when using unique identifiers and employing the CMAS model; however, the likelihood typically had lower mean square error than the pseudo-likelihood method of Huggins et al. (2010). When faced with designing a batch-marking study, researchers can be confident of obtaining unbiased abundance estimators. Furthermore, they can design studies to reduce mean square error by manipulating capture probabilities and sample size.
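For orientation, the simplest mark-recapture abundance estimator (two-occasion Lincoln-Petersen) can be sketched as follows; the counts are hypothetical, and this is only background for the abstract above, not its multi-sample batch-marking likelihood:

```python
# Two-occasion Lincoln-Petersen abundance estimate:
# N_hat = n1 * n2 / m2, where n1 animals are marked and released on
# occasion 1, n2 are caught on occasion 2, and m2 of those carry marks.
# A minimal orientation sketch, not the batch-marking likelihood above.

def lincoln_petersen(n1, n2, m2):
    """Estimate population size from a two-occasion mark-recapture study."""
    return n1 * n2 / m2

n_hat = lincoln_petersen(n1=100, n2=60, m2=20)   # 100*60/20 = 300.0
```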

16.
A maximum likelihood procedure is developed to estimate the dependence relations between plants at equal distances along a row, by fitting simultaneous bilateral models to the observations. Where there is more than one characteristic measured on each plant, a simultaneous bilateral vector model can be fitted by maximum likelihood procedures. The latter model also applies when one characteristic is measured on each plant in a two-dimensional planting array where interplant distances within each row are equal but interrow spacing varies. The estimation is particularly suited to the small sample situation.

17.
K. Meyer, Heredity 2008, 101(3):212–221
Mixed model analyses via restricted maximum likelihood, fitting the so-called animal model, have become standard methodology for the estimation of genetic variances. Models involving multiple genetic variance components, due to different modes of gene action, are readily fitted. It is shown that likelihood-based calculations may provide insight into the quality of the resulting parameter estimates, and are directly applicable to the validation of experimental designs. This is illustrated for the example of a design suggested recently to estimate X-linked genetic variances. In particular, large sample variances and sampling correlations are demonstrated to provide an indication of 'problem' scenarios. Using simulation, it is shown that the profile likelihood function provides more appropriate estimates of confidence intervals than large sample variances. Examination of the likelihood function and its derivatives are recommended as part of the design stage of quantitative genetic experiments.

18.
Three methods of estimating the parameters of the Johnson SB distribution were tested by simulation: the maximum likelihood method, a method based on sample percentiles, and a method based on moments of a transformed random variable. Many sets of samples were generated, differing in size and in the actual values of the parameters, whereupon the parameters were estimated by the three methods. It was shown that if the sample is small or the skewness of the distribution is considerable, the maximum likelihood estimates can assume preposterous values. The method based on moments is recommended due to its simplicity and to the fact that its estimates, though usually biased, never assume absurd values.

19.
The error variance is obtained from an analysis of variance of replicated trials, in order to explore a method for detecting the presence of polygenes using only the B1:2 and B2:2 or F2:3 family generations. The parameters of the mixture distributions are estimated with the IECM algorithm. The method is illustrated using plant height data from B1:2 and B2:2 family means and 1000-seed weight data from F2:3 family means in rapeseed.

20.
We would like to use maximum likelihood to estimate parameters such as the effective population size N(e) or, if we do not know mutation rates, the product 4N(e)μ of mutation rate per site and effective population size. To compute the likelihood for a sample of unrecombined nucleotide sequences taken from a random-mating population, it is necessary to sum over all genealogies that could have led to the sequences, computing for each one the probability that it would have yielded the sequences, and weighting each one by its prior probability. The genealogies vary in tree topology and in branch lengths. Although the likelihood and the prior are straightforward to compute, the summation over all genealogies seems at first sight hopelessly difficult. This paper reports that it is possible to carry out a Monte Carlo integration to evaluate the likelihoods approximately. The method uses bootstrap sampling of sites to create data sets, for each of which a maximum likelihood tree is estimated. The resulting trees are assumed to be sampled from a distribution whose height is proportional to the likelihood surface for the full data. That this holds depends on a theorem which is not proven, but which seems likely to be true if the sequences are not short. One can use the resulting estimated likelihood curve to make a maximum likelihood estimate of the parameter of interest, N(e) or 4N(e)μ. The method requires at least 100 times the computational effort required for estimating a phylogeny by maximum likelihood, but is practical on today's workstations. The method does not at present have any way of dealing with recombination.
