Similar Literature
20 similar documents retrieved.
1.
Ryman N, Jorde PE (2001) Molecular Ecology 10(10): 2361–2373
A variety of statistical procedures are commonly employed when testing for genetic differentiation. In a typical situation two or more samples of individuals have been genotyped at several gene loci by molecular or biochemical means, and in a first step a statistical test for allele frequency homogeneity is performed at each locus separately, using, e.g., the contingency chi-square test, Fisher's exact test, or some modification thereof. In a second step the results from the separate tests are combined for evaluation of the joint null hypothesis that there is no allele frequency difference at any locus, corresponding to the important case where the samples would be regarded as drawn from the same statistical and, hence, biological population. Presently, there are two conceptually different strategies in use for testing the joint null hypothesis of no difference at any locus. One approach is based on the summation of chi-square statistics over loci. Another method is employed by investigators applying the Bonferroni technique (adjusting the P-value required for rejection to account for the elevated alpha errors when performing multiple tests simultaneously) to test if the heterogeneity observed at any particular locus can be regarded as significant when considered separately. Under this approach the joint null hypothesis is rejected if one or more of the component single-locus tests is considered significant under the Bonferroni criterion. We used computer simulations to evaluate the statistical power and realized alpha errors of these strategies when evaluating the joint hypothesis after scoring multiple loci. We find that the 'extended' Bonferroni approach generally is associated with low statistical power and should not be applied in the current setting. Further, and contrary to what might be expected, we find that 'exact' tests typically behave poorly when combined in existing procedures for joint hypothesis testing. Thus, while exact tests are generally to be preferred over approximate ones when testing each particular locus, approximate tests such as the traditional chi-square seem preferable when addressing the joint hypothesis.
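A minimal sketch of the two combining strategies discussed above, assuming Python with NumPy and SciPy; the per-locus allele-count tables are invented for illustration and the single-locus tests are ordinary contingency chi-square tests:

import numpy as np
from scipy import stats

# per-locus allele-count tables: rows = the two samples, columns = alleles
tables = [np.array([[30, 20], [22, 28]]),
          np.array([[15, 25, 10], [20, 18, 12]])]

chi2_sum, df_sum, pvals = 0.0, 0, []
for tab in tables:
    chi2, p, df, _ = stats.chi2_contingency(tab, correction=False)
    chi2_sum += chi2
    df_sum += df
    pvals.append(p)

# strategy 1: sum the chi-square statistics (and degrees of freedom) over loci
p_joint_sum = stats.chi2.sf(chi2_sum, df_sum)

# strategy 2: the 'extended' Bonferroni rule -- reject the joint null
# if any single-locus p-value falls below alpha / number of loci
alpha = 0.05
reject_bonferroni = min(pvals) < alpha / len(tables)
print(p_joint_sum, reject_bonferroni)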

2.
A modified chi-square test for the equality of two multinomial populations against an order-restricted alternative is constructed for the one-sample and two-sample cases. The relation between a concept of dependence called dependence by chi-square and stochastic ordering is established. A tabulation of the asymptotic distribution of the test statistic under the null hypothesis is given. Simulations are used to compare the power of this test with the power of the likelihood ratio test of stochastic ordering of the two multinomial populations.

3.
Al-Shiha and Yang (1999) proposed a multistage procedure for analysing unreplicated factorial experiments, based on a statistic derived from the generalised likelihood ratio test statistic under the assumption of normality. Their simulation study showed that the method is quite competitive with Lenth's (1989) method. Because of the difficulty of determining the null distribution analytically, the quantiles of the null distribution were simulated empirically in their paper. In this paper, we give the exact null distribution of their test statistic, which makes it possible to calculate the critical values of the test.

4.
We consider a two-factor experiment in which the factors have the same number of levels with a natural ordering among levels. We test the hypothesis that the effects of the two treatments are symmetric against a one-sided alternative using the likelihood ratio criterion. Testing the one-sided alternative as a null hypothesis against an unrestricted alternative is also studied. The exact distribution theory under the null hypothesis is derived and shown to be a weighted mixture of chi-square distributions. An example is used to illustrate the procedure.
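A small sketch of how a p-value is obtained from a weighted mixture of chi-square distributions (a chi-bar-square law) of the kind derived above, assuming SciPy; the weights and degrees of freedom shown are purely hypothetical, not the ones derived in the paper:

from scipy import stats

def chi_bar_square_pvalue(t_obs, weights, dfs):
    # P(T >= t_obs) when T is a mixture of chi-square variables;
    # a component with 0 degrees of freedom is a point mass at zero
    p = 0.0
    for w, df in zip(weights, dfs):
        if df == 0:
            p += w * (t_obs <= 0)
        else:
            p += w * stats.chi2.sf(t_obs, df)
    return p

# hypothetical mixing weights for a two-dimensional order restriction
print(chi_bar_square_pvalue(5.3, weights=[0.25, 0.5, 0.25], dfs=[0, 1, 2]))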

5.
There are a number of nonparametric procedures known for testing goodness-of-fit in the univariate case. Similar procedures can be derived for testing goodness-of-fit in the multivariate case through an application of the theory of statistically equivalent blocks (SEB). The SEB transforms the data into coverages which are distributed as spacings from a uniform distribution on [0,1], under the null hypothesis. In this paper, we present a multivariate nonparametric test of goodness-of-fit based on the SEB when the multivariate distributions under the null hypothesis and the alternative hypothesis are “weakly” ordered. Empirical results are given on the performance of the proposed test in an application to the problem of assessing the reliability of a p-component system.

6.
The purpose of this work is the development of a family-based association test that allows for random genotyping errors and missing data and makes use of information on affected and unaffected pedigree members. We derive the conditional likelihood functions of the general nuclear family for the following scenarios: complete parental genotype data and no genotyping errors; only one genotyped parent and no genotyping errors; no parental genotype data and no genotyping errors; and no parental genotype data with genotyping errors. We find maximum likelihood estimates of the marker locus parameters, including the penetrances and population genotype frequencies, under the null hypothesis that all penetrance values are equal and under the alternative hypothesis. We then compute the likelihood ratio test. We perform simulations to assess the adequacy of the central chi-square distribution approximation when the null hypothesis is true. We also perform simulations to compare the power of the TDT and this likelihood-based method. Finally, we apply our method to 23 SNPs genotyped in nuclear families from a recently published study of idiopathic scoliosis (IS). Our simulations suggest that this likelihood ratio test statistic follows a central chi-square distribution with 1 degree of freedom under the null hypothesis, even in the presence of missing data and genotyping errors. The power comparison shows that this likelihood ratio test is more powerful than the original TDT for the simulations considered. For the IS data, the marker rs7843033 shows the most significant evidence under our method (p = 0.0003), consistent with a previous report that found rs7843033 to have the second most significant TDTae p-value among the same set of 23 SNPs.

7.
A method of analysis for comparing the variability of two samples drawn from two populations has been developed. The method is also suitable for nonnumeric data. A test based on ordered observations for the null hypothesis of equality of two variances is given. The test statistic is a function of the sum of ranks assigned to the smaller sample. The ranking procedure has been modified so that the sum of ranks reflects the variability in the data. The null distribution of the test statistic has been worked out for small samples and turns out to be a chi-square distribution for large samples. The analytical procedure is illustrated with a numerical example on the productivity and production of rice and wheat in India from 1950–51 to 1983–84.

8.
In this paper, the detection of associations between rare variants and continuous phenotypes of interest is investigated via a likelihood-ratio-based variance component test under the framework of linear mixed models. The hypothesis testing is challenging and nonstandard, since under the null hypothesis the variance component lies on the boundary of its parameter space. In this situation the usual asymptotic chi-square distribution of the likelihood ratio statistic does not necessarily hold. To circumvent the derivation of the null distribution, we resort to the bootstrap method because of its generic applicability and ease of implementation. Both parametric and nonparametric bootstrap likelihood ratio tests are studied. Numerical studies are implemented to evaluate the performance of the proposed bootstrap likelihood ratio test and to compare it with some existing methods for the identification of rare variants. To reduce the computational time of the bootstrap likelihood ratio test, we propose an effective mixture approximation to the bootstrap null distribution. The GAW17 data are used to illustrate the proposed test.
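A schematic parametric bootstrap likelihood ratio test for a single variance component, in the spirit of the approach described above but using a generic random-intercept model (fitted with statsmodels) as a stand-in for the paper's rare-variant model; the data and all variable names are simulated and illustrative only:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# toy data: phenotype y, one covariate x, and a grouping factor
n_groups, n_per = 30, 5
df = pd.DataFrame({"group": np.repeat(np.arange(n_groups), n_per),
                   "x": rng.normal(size=n_groups * n_per)})
df["y"] = 1.0 + 0.5 * df["x"] + rng.normal(size=len(df))

def lrt(data):
    null_fit = smf.ols("y ~ x", data=data).fit()   # no random effect under the null
    alt_fit = smf.mixedlm("y ~ x", data=data, groups=data["group"]).fit(reml=False)
    return 2 * (alt_fit.llf - null_fit.llf), null_fit

obs, null_fit = lrt(df)

# parametric bootstrap: simulate responses from the fitted null model and refit
sigma = np.sqrt(null_fit.scale)          # OLS residual scale as an approximation
boot = []
for _ in range(200):                     # use many more replicates in practice
    sim = df.copy()
    sim["y"] = null_fit.fittedvalues.to_numpy() + rng.normal(0, sigma, len(df))
    boot.append(lrt(sim)[0])

p_boot = (1 + sum(b >= obs for b in boot)) / (1 + len(boot))
print(obs, p_boot)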

9.
Mehrotra DV, Chan IS, Berger RL (2003) Biometrics 59(2): 441–450
Fisher's exact test for comparing response proportions in a randomized experiment can be overly conservative when the group sizes are small or when the response proportions are close to zero or one. This is primarily because the null distribution of the test statistic becomes too discrete, a partial consequence of the inference being conditional on the total number of responders. Accordingly, exact unconditional procedures have gained in popularity, on the premise that power will increase because the null distribution of the test statistic will presumably be less discrete. However, we caution researchers that a poor choice of test statistic for exact unconditional inference can actually result in a substantially less powerful analysis than Fisher's conditional test. To illustrate, we study a real example and provide exact test size and power results for several competing tests, for both balanced and unbalanced designs. Our results reveal that Fisher's test generally outperforms exact unconditional tests based on using as the test statistic either the observed difference in proportions, or the observed difference divided by its estimated standard error under the alternative hypothesis, the latter for unbalanced designs only. On the other hand, the exact unconditional test based on the observed difference divided by its estimated standard error under the null hypothesis (score statistic) outperforms Fisher's test, and is recommended. Boschloo's test, in which the p-value from Fisher's test is used as the test statistic in an exact unconditional test, is uniformly more powerful than Fisher's test, and is also recommended.
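For a quick comparison of Fisher's conditional test with Boschloo's exact unconditional test, assuming SciPy >= 1.7 (which provides boschloo_exact); the 2 x 2 counts below are invented:

from scipy import stats

# responders / non-responders by randomized arm (rows = arms)
table = [[7, 12], [1, 18]]

odds_ratio, p_fisher = stats.fisher_exact(table, alternative="two-sided")
boschloo = stats.boschloo_exact(table, alternative="two-sided")

print(f"Fisher exact p = {p_fisher:.4f}")
print(f"Boschloo exact unconditional p = {boschloo.pvalue:.4f}")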

10.
A multi-sample slippage test based on ordered observations is given. The test statistic is based on the sum of ranks of the sample. The probability distribution of the test statistic has been worked out for small samples and turns out to be a chi-square distribution for large samples. The analytical procedure is explained with a numerical example.

11.
The problem of testing the separability of a covariance matrix against an unstructured variance-covariance matrix is studied in the context of multivariate repeated measures data using Rao's score test (RST). The RST statistic is developed with the first component of the separable structure as a first-order autoregressive (AR(1)) correlation matrix or an unstructured (UN) covariance matrix, under the assumption of multivariate normality. It is shown that the distribution of the RST statistic under the null hypothesis of separability does not depend on the true values of the mean or the unstructured components of the separable structure. A significant advantage of the RST is that it can be performed for small samples, even smaller than the dimension of the data, where the likelihood ratio test (LRT) cannot be used, and it outperforms the standard LRT in a number of contexts. Monte Carlo simulations are then used to study the comparative behavior of the null distribution of the RST statistic, as well as that of the LRT statistic, in terms of sample size considerations and for the estimation of the empirical percentiles. Our findings are compared with existing results where the first component of the separable structure is a compound symmetry (CS) correlation matrix. It is also shown by simulations that the empirical null distribution of the RST statistic converges faster than the empirical null distribution of the LRT statistic to the limiting χ2 distribution. The tests are implemented on a real dataset from medical studies.

12.
In biostatistics, more and more complex models are being developed. This is particularly the case in systems biology. Fitting complex models can be very time-consuming, since many models often have to be explored. Among the possibilities are the introduction of explanatory variables and the determination of random effects. The score test is attractive here because it requires fitting the model only under the null hypothesis. The particularity of this use of the score test is that the null hypothesis is not itself very simple; typically, some random effects may be present under the null hypothesis. Moreover, the information matrix cannot be computed, and only an approximation based on the score is available. This article examines this situation with the specific example of HIV dynamics models. We examine score test statistics for testing the effect of explanatory variables and the variance of a random effect in this complex situation. We study the type I error and statistical power of these score test statistics, and we apply the score test approach to a real data set of HIV-infected patients.

13.
For clinical trials with interim analyses, conditional rejection probabilities play an important role when stochastic curtailment or design adaptations are performed. The conditional rejection probability gives the conditional probability of finally rejecting the null hypothesis given the interim data. It is computed either under the null or the alternative hypothesis. We investigate the properties of the conditional rejection probability for the one-sided, one-sample t-test and show that it can be non-monotone in the interim mean of the data and non-monotone in the non-centrality parameter of the alternative. We give several proposals for how to implement design adaptations (that are based on the conditional rejection probability) for the t-test and give a numerical example. Additionally, the conditional rejection probability given the interim t-statistic is investigated. It does not depend on the unknown σ and can be used in stochastic curtailment procedures.
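As a simplified illustration of a conditional rejection probability, the sketch below uses a one-sided, one-sample z-test with known sigma as a normal-approximation stand-in for the t-test studied above; all numbers are arbitrary:

import numpy as np
from scipy import stats

def conditional_rejection_probability(n1, mean1, n, sigma, delta, alpha=0.025):
    # probability of rejecting H0: mu <= 0 at the final analysis of a one-sided,
    # one-sample z-test, given the mean of the first n1 observations, when the
    # remaining n - n1 observations have true mean delta (delta = 0 gives the
    # conditional rejection probability under the null)
    z_alpha = stats.norm.ppf(1 - alpha)
    threshold = np.sqrt(n) * sigma * z_alpha - n1 * mean1   # required sum of the rest
    return stats.norm.sf((threshold - (n - n1) * delta) / (sigma * np.sqrt(n - n1)))

print(conditional_rejection_probability(n1=20, mean1=0.4, n=50, sigma=1.0, delta=0.0))
print(conditional_rejection_probability(n1=20, mean1=0.4, n=50, sigma=1.0, delta=0.3))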

14.
We propose a general likelihood-based approach to the linkage analysis of qualitative and quantitative traits using identity by descent (IBD) data from sib-pairs. We consider the likelihood of IBD data conditional on phenotypes and test the null hypothesis of no linkage between a marker locus and a gene influencing the trait using a score test in the recombination fraction theta between the two loci. This method unifies the linkage analysis of qualitative and quantitative traits into a single inferential framework, yielding a simple and intuitive test statistic. Conditioning on phenotypes avoids unrealistic random sampling assumptions and allows sib-pairs from differing ascertainment mechanisms to be incorporated into a single likelihood analysis. In particular, it allows the selection of sib-pairs based on their trait values and the analysis of only those pairs having the most informative phenotypes. The score test is based on the full likelihood, i.e. the likelihood based on all phenotype data rather than just differences of sib-pair phenotypes. Considering only phenotype differences, as in Haseman and Elston (1972) and Kruglyak and Lander (1995), may result in important losses in power. The linkage score test is derived under general genetic models for the trait, which may include multiple unlinked genes. Population genetic assumptions, such as random mating or linkage equilibrium at the trait loci, are not required. This score test is thus particularly promising for the analysis of complex human traits. The score statistic readily extends to accommodate incomplete IBD data at the test locus, by using the hidden Markov model implemented in the programs MAPMAKER/SIBS and GENEHUNTER (Kruglyak and Lander, 1995; Kruglyak et al., 1996). Preliminary simulation studies indicate that the linkage score test generally matches or outperforms the Haseman-Elston test, the largest gains in power being for selected samples of sib-pairs with extreme phenotypes.

15.
Statistically nonsignificant (p > .05) results from a null hypothesis significance test (NHST) are often mistakenly interpreted as evidence that the null hypothesis is true—that there is “no effect” or “no difference.” However, many of these results occur because the study had low statistical power to detect an effect. Power below 50% is common, in which case a result of no statistical significance is more likely to be incorrect than correct. The inference of “no effect” is not valid even if power is high. NHST assumes that the null hypothesis is true; p is the probability of the data under the assumption that there is no effect. A statistical test cannot confirm what it assumes. These incorrect statistical inferences could be eliminated if decisions based on p values were replaced by a biological evaluation of effect sizes and their confidence intervals. For a single study, the observed effect size is the best estimate of the population effect size, regardless of the p value. Unlike p values, confidence intervals provide information about the precision of the observed effect. In the biomedical and pharmacology literature, methods have been developed to evaluate whether effects are “equivalent,” rather than zero, as tested with NHST. These methods could be used by biological anthropologists to evaluate the presence or absence of meaningful biological effects. Most of what appears to be known about no difference or no effect between sexes, between populations, between treatments, and other circumstances in the biological anthropology literature is based on invalid statistical inference.
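A minimal sketch of the kind of equivalence analysis mentioned above, using two one-sided t-tests (TOST) built directly on SciPy; the data and the equivalence bounds of ±0.5 are invented for illustration:

import numpy as np
from scipy import stats

def tost_ind(x1, x2, low, upp, alpha=0.05):
    # two one-sided t-tests for equivalence of two independent means (equal variances):
    # conclude equivalence if the difference is significantly above `low`
    # and significantly below `upp`
    n1, n2 = len(x1), len(x2)
    diff = np.mean(x1) - np.mean(x2)
    sp2 = ((n1 - 1) * np.var(x1, ddof=1) + (n2 - 1) * np.var(x2, ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
    df = n1 + n2 - 2
    p_lower = stats.t.sf((diff - low) / se, df)   # H0: diff <= low
    p_upper = stats.t.cdf((diff - upp) / se, df)  # H0: diff >= upp
    ci = diff + se * stats.t.ppf([alpha, 1 - alpha], df)  # (1 - 2*alpha) confidence interval
    return max(p_lower, p_upper), ci

rng = np.random.default_rng(2)
x1, x2 = rng.normal(0.0, 1.0, 40), rng.normal(0.1, 1.0, 40)
p, ci = tost_ind(x1, x2, low=-0.5, upp=0.5)
print(f"TOST p = {p:.3f}, 90% CI for the difference: ({ci[0]:.2f}, {ci[1]:.2f})")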

16.
Meirmans PG (2012) Molecular Ecology 21(12): 2839–2846
The genetic population structure of many species is characterised by a pattern of isolation by distance (IBD): due to limited dispersal, individuals that are geographically close tend to be genetically more similar than individuals that are far apart. Despite the ubiquity of IBD in nature, many commonly used statistical tests are based on a null model that is completely non-spatial, the Island model. Here, I argue that patterns of spatial autocorrelation deriving from IBD present a problem for such tests as it can severely bias their outcome. I use simulated data to illustrate this problem for two widely used types of tests: tests of hierarchical population structure and the detection of loci under selection. My results show that for both types of tests the presence of IBD can indeed lead to a large number of false positives. I therefore argue that all analyses in a study should take the spatial dependence in the data into account, unless it can be shown that there is no spatial autocorrelation in the allele frequency distribution that is under investigation. Thus, it is urgent to develop additional statistical approaches that are based on a spatially explicit null model instead of the non-spatial Island model.

17.
As a useful tool for the geographical cluster detection of events, the spatial scan statistic is widely applied in many fields and plays an increasingly important role. The classic version of the spatial scan statistic for binary outcomes was developed by Kulldorff, based on the Bernoulli or the Poisson probability model. In this paper, we apply the hypergeometric probability model to construct the likelihood function under the null hypothesis. Compared with existing methods, this likelihood function under the null hypothesis offers an alternative, indirect way to identify the potential cluster, and the test statistic is the extreme value of the likelihood function. As in Kulldorff's methods, we adopt a Monte Carlo test for the test of significance. Both methods are applied to detecting spatial clusters of Japanese encephalitis in Sichuan province, China, in 2009, and the detected clusters are identical. In a simulation with independent benchmark data, the test statistic based on the hypergeometric model outperforms Kulldorff's statistics for clusters of high population density or large size; otherwise Kulldorff's statistics are superior.
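A rough sketch of a hypergeometric scan with a Monte Carlo significance test, following the description above as read here (the statistic is taken to be the most extreme null hypergeometric likelihood over candidate circular windows); the toy regions, the window construction, and all parameters are invented, and this is not the authors' implementation:

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# toy data: region centroids, populations and observed case counts
n_regions = 40
coords = rng.uniform(0, 10, size=(n_regions, 2))
pop = rng.integers(200, 1000, size=n_regions)
cases = rng.binomial(pop, 0.01)
N, C = pop.sum(), cases.sum()

# candidate windows: each region together with its nearest neighbours,
# growing up to roughly half of all regions (a crude circular-scan window set)
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
windows = [np.argsort(dist[i])[:k]
           for i in range(n_regions) for k in range(1, n_regions // 2)]

def scan_stat(case_vec):
    # most extreme (smallest) null hypergeometric likelihood over the windows
    logpmf = [stats.hypergeom.logpmf(case_vec[w].sum(), N, C, pop[w].sum())
              for w in windows]
    return -min(logpmf)

obs = scan_stat(cases)

# Monte Carlo test: redistribute the C cases over regions by population size
# (sampling without replacement, i.e. multivariate hypergeometric under the null)
sims = [scan_stat(rng.multivariate_hypergeometric(pop, C)) for _ in range(99)]
p_value = (1 + sum(s >= obs for s in sims)) / (1 + len(sims))
print(obs, p_value)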

18.
DiRienzo AG (2003) Biometrics 59(3): 497–504
When testing the null hypothesis that treatment arm-specific survival-time distributions are equal, the log-rank test is asymptotically valid when the distribution of time to censoring is conditionally independent of randomized treatment group given survival time. We introduce a test of the null hypothesis for use when the distribution of time to censoring depends on treatment group and survival time. This test does not make any assumptions regarding independence of censoring time and survival time. Asymptotic validity of this test only requires a consistent estimate of the conditional probability that the survival event is observed given both treatment group and that the survival event occurred before the time of analysis. However, by not making unverifiable assumptions about the data-generating mechanism, there exists a set of possible values of corresponding sample-mean estimates of these probabilities that are consistent with the observed data. Over this subset of the unit square, the proposed test can be calculated and a rejection region identified. A decision on the null that considers uncertainty because of censoring that may depend on treatment group and survival time can then be directly made. We also present a generalized log-rank test that enables us to provide conditions under which the ordinary log-rank test is asymptotically valid. This generalized test can also be used for testing the null hypothesis when the distribution of censoring depends on treatment group and survival time. However, use of this test requires semiparametric modeling assumptions. A simulation study and an example using a recent AIDS clinical trial are provided.
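For reference, the ordinary log-rank test discussed above (not the authors' proposed generalization) can be run with the lifelines package; the survival times below are simulated with simple administrative censoring at t = 2:

import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(4)

# exponential survival times for two arms, censored at t = 2
t0, t1 = rng.exponential(1.0, 100), rng.exponential(1.3, 100)
d0, d1 = (t0 <= 2.0).astype(int), (t1 <= 2.0).astype(int)   # event indicators
t0, t1 = np.minimum(t0, 2.0), np.minimum(t1, 2.0)

result = logrank_test(t0, t1, event_observed_A=d0, event_observed_B=d1)
print(result.test_statistic, result.p_value)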

19.
Lee OE, Braun TM (2012) Biometrics 68(2): 486–493
Inference regarding the inclusion or exclusion of random effects in linear mixed models is challenging because the variance components are located on the boundary of their parameter space under the usual null hypothesis. As a result, the asymptotic null distribution of the Wald, score, and likelihood ratio tests will not have the typical χ2 distribution. Although it has been proved that the correct asymptotic distribution is a mixture of χ2 distributions, the appropriate mixture distribution is rather cumbersome and nonintuitive when the null and alternative hypotheses differ by more than one random effect. As alternatives, we present two permutation tests, one that is based on the best linear unbiased predictors and one that is based on the restricted likelihood ratio test statistic. Both methods involve weighted residuals, with the weights determined by the among- and within-subject variance components. The null permutation distributions of our statistics are computed by permuting the residuals both within and among subjects and are valid both asymptotically and in small samples. We examine the size and power of our tests via simulation under a variety of settings and apply our test to a published data set of chronic myelogenous leukemia patients.
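A much-simplified permutation sketch in the spirit of the residual-permutation idea above: under the null of no random intercept (and i.i.d. errors), ordinary least-squares residuals are approximately exchangeable, so clustering by subject can be broken by permuting them. The among-subject statistic used here is an ad hoc stand-in for the authors' BLUP- and restricted-likelihood-ratio-based statistics, and the data are simulated:

import numpy as np

rng = np.random.default_rng(5)

# toy longitudinal data in which the null (no random intercept) is true
n_subj, n_obs = 25, 4
subj = np.repeat(np.arange(n_subj), n_obs)
x = rng.normal(size=n_subj * n_obs)
y = 2.0 + 0.8 * x + rng.normal(size=n_subj * n_obs)

# residuals from the fixed-effects-only (null) fit
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

def among_subject_stat(r):
    # size-weighted sum of squared subject means: large values indicate
    # subject-level clustering of the residuals
    return sum(len(r[subj == s]) * r[subj == s].mean() ** 2 for s in range(n_subj))

obs = among_subject_stat(resid)
perm = [among_subject_stat(rng.permutation(resid)) for _ in range(2000)]
p_value = (1 + sum(s >= obs for s in perm)) / (1 + len(perm))
print(obs, p_value)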

20.
In survivorship modelling using the proportional hazards model of Cox (1972, Journal of the Royal Statistical Society, Series B, 34, 187–220), it is often desired to test a subset of the vector of unknown regression parameters β in the expression for the hazard rate at time t. The likelihood ratio test statistic is well behaved in most situations but may be expensive to calculate. The Wald (1943, Transactions of the American Mathematical Society 54, 426–482) test statistic is easier to calculate, but has some drawbacks. In testing a single parameter in a binomial logit model, Hauck and Donner (1977, Journal of the American Statistical Association 72, 851–853) show that the Wald statistic decreases to zero the further the parameter estimate is from the null and that the asymptotic power of the test decreases to the significance level. The Wald statistic is extensively used in statistical software packages for survivorship modelling and it is therefore important to understand its behavior. The present work examines empirically the behavior of the Wald statistic under various departures from the null hypothesis and under the presence of Type I censoring and covariates in the model. It is shown via examples that the Wald statistic's behavior is not as aberrant as found for the logistic model. For the single parameter case, the asymptotic non-null distribution of the Wald statistic is examined.
