Similar articles
20 similar articles found (search time: 15 ms)
1.
K J Worsley 《Biometrics》1988,44(1):259-263
We wish to test that the hazard rate of survival or failure-time data is constant against the alternative of a change in hazard after an unspecified time. The likelihood ratio is unbounded but the exact null distribution of a restricted likelihood-ratio test statistic is found. This distribution is not affected by Type II censoring but it does depend very strongly on the interval in which the unknown change-point is assumed to lie. Some exact percentage points are given which are much larger than simulated points that have been reported in the literature.

2.
NOETHER (1987) proposed a method of sample size determination for the Wilcoxon-Mann-Whitney test. To obtain a sample size formula, he restricted himself to alternatives that differ only slightly from the null hypothesis, so that the unknown variance σ² of the Mann-Whitney statistic can be approximated by the known variance under the null hypothesis, which depends only on n. This fact is frequently forgotten in statistical practice. In this paper, we compare Noether's large-sample solution against an alternative approach based on upper bounds of σ² that is valid for any alternative. This comparison shows that Noether's approximation is sufficiently reliable under both small and large deviations from the null hypothesis.
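Noether's approximation leads to a closed-form sample size. A minimal sketch (the function name and the equal-allocation default `c = 0.5` are mine; `p` denotes the assumed P(Y > X) under the alternative, and one-sided normal quantiles are used):

```python
from math import ceil

from scipy.stats import norm


def noether_wmw_n(p, alpha=0.05, power=0.80, c=0.5):
    """Total sample size for the Wilcoxon-Mann-Whitney test via
    Noether's (1987) null-variance approximation.

    p : assumed P(Y > X) under the alternative (p != 0.5)
    c : fraction of the total sample allocated to the first group
    Returns the total N over both groups, rounded up.
    """
    z_a = norm.ppf(1 - alpha)  # one-sided test
    z_b = norm.ppf(power)
    n = (z_a + z_b) ** 2 / (12 * c * (1 - c) * (p - 0.5) ** 2)
    return ceil(n)
```

For example, `noether_wmw_n(0.7)` gives a total of 52 subjects; alternatives closer to the null (p nearer 0.5) require sharply larger samples.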

3.
We consider a two-factor experiment in which the factors have the same number of levels with a natural ordering among levels. We test the hypothesis that the effects of the two treatments are symmetric against a one-sided alternative using the likelihood ratio criterion. The test of the one-sided alternative as a null hypothesis against the unrestricted alternative is also studied. Exact distribution theory under the null hypothesis is derived and is shown to be a weighted mixture of chi-square distributions. An example is used to illustrate the procedure.

4.
Lee OE  Braun TM 《Biometrics》2012,68(2):486-493
Inference regarding the inclusion or exclusion of random effects in linear mixed models is challenging because the variance components are located on the boundary of their parameter space under the usual null hypothesis. As a result, the asymptotic null distribution of the Wald, score, and likelihood ratio tests will not have the typical χ² distribution. Although it has been proved that the correct asymptotic distribution is a mixture of χ² distributions, the appropriate mixture distribution is rather cumbersome and nonintuitive when the null and alternative hypotheses differ by more than one random effect. As alternatives, we present two permutation tests, one that is based on the best linear unbiased predictors and one that is based on the restricted likelihood ratio test statistic. Both methods involve weighted residuals, with the weights determined by the among- and within-subject variance components. The null permutation distributions of our statistics are computed by permuting the residuals both within and among subjects and are valid both asymptotically and in small samples. We examine the size and power of our tests via simulation under a variety of settings and apply our test to a published data set of chronic myelogenous leukemia patients.

5.
Moming Li  Guoqing Diao  Jing Qin 《Biometrics》2020,76(4):1216-1228
We consider a two-sample problem where data come from symmetric distributions. Usual two-sample data with only magnitudes recorded, arising from case-control studies or logistic discriminant analyses, may constitute a symmetric two-sample problem. We propose a semiparametric model such that, in addition to symmetry, the log ratio of two unknown density functions is modeled in a known parametric form. The new semiparametric model, tailor-made for symmetric two-sample data, can also be viewed as a biased sampling model subject to symmetric constraint. A maximum empirical likelihood estimation approach is adopted to estimate the unknown model parameters, and the corresponding profile empirical likelihood ratio test is utilized to perform hypothesis testing regarding the two population distributions. Symmetry, however, comes with irregularity. It is shown that, under the null hypothesis of equal symmetric distributions, the maximum empirical likelihood estimator has degenerate Fisher information, and the test statistic has a mixture of χ²-type asymptotic distribution. Extensive simulation studies have been conducted to demonstrate promising statistical powers under correct and misspecified models. We apply the proposed methods to two real examples.

6.
Liang KY  Rathouz PJ 《Biometrics》1999,55(1):65-74
In this paper we propose a new class of statistics to test a simple hypothesis against a family of alternatives characterized by a mixture model. Unlike the likelihood ratio statistic, whose large-sample distribution is still unknown in this situation, these new statistics have a simple asymptotic reference distribution under the null hypothesis. Simulation results suggest that they have adequate power in detecting the alternatives. The application to genetic linkage analysis in the presence of the genetic heterogeneity that motivated this work is emphasized.

7.
A modified chi-square test for testing the equality of two multinomial populations against an order-restricted alternative in the one-sample and two-sample cases is constructed. The relation between a concept of dependence called dependence by chi-square and stochastic ordering is established. A tabulation of the asymptotic distribution of the test statistic under the null hypothesis is given. Simulations are used to compare the power of this test with that of the likelihood ratio test of stochastic ordering of the two multinomial populations.

8.
The classical normal-theory tests for testing the null hypothesis of common variance and the classical estimates of scale have long been known to be quite nonrobust to even mild deviations from normality assumptions for moderate sample sizes. Levene (1960) suggested a one-way ANOVA type statistic as a robust test. Brown and Forsythe (1974) considered a modified version of Levene's test by replacing the sample means with sample medians as estimates of population locations, and their test is computationally the simplest among the three tests recommended by Conover, Johnson, and Johnson (1981) in terms of robustness and power. In this paper a new robust and powerful test for homogeneity of variances is proposed based on a modification of Levene's test using the weighted likelihood estimates (Markatou, Basu, and Lindsay, 1996) of the population means. For two and three populations the proposed test using the Hellinger distance based weighted likelihood estimates is observed to achieve better empirical level and power than Brown-Forsythe's test in symmetric distributions having a thicker tail than the normal, and higher empirical power in skew distributions under the use of F distribution critical values.
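Both classical variants discussed above are available through SciPy's `levene`, whose `center` argument selects Levene's original mean-centering or the Brown-Forsythe median-centering (the weighted-likelihood variant proposed in the abstract is not in SciPy). A minimal sketch on simulated heavy-tailed data where the null of equal variances holds:

```python
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(0)
# Two heavy-tailed samples with equal scale (the null is true).
x = rng.standard_t(df=3, size=100)
y = rng.standard_t(df=3, size=100)

# Levene's original test centers each group at its sample mean ...
stat_mean, p_mean = levene(x, y, center='mean')
# ... while Brown-Forsythe replaces the mean with the median.
stat_med, p_med = levene(x, y, center='median')
```

With thick tails, the median-centered version is the usual recommendation, in line with Conover, Johnson, and Johnson (1981).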

9.
Tamhane AC  Logan BR 《Biometrics》2002,58(3):650-656
Tang, Gnecco, and Geller (1989, Biometrika 76, 577-583) proposed an approximate likelihood ratio (ALR) test of the null hypothesis that a normal mean vector equals a null vector against the alternative that all of its components are nonnegative with at least one strictly positive. This test is useful for comparing a treatment group with a control group on multiple endpoints, and the data from the two groups are assumed to follow multivariate normal distributions with different mean vectors and a common covariance matrix (the homoscedastic case). Tang et al. derived the test statistic and its null distribution assuming a known covariance matrix. In practice, when the covariance matrix is estimated, the critical constants tabulated by Tang et al. result in a highly liberal test. To deal with this problem, we derive an accurate small-sample approximation to the null distribution of the ALR test statistic by using the moment matching method. The proposed approximation is then extended to the heteroscedastic case. The accuracy of both the approximations is verified by simulations. A real data example is given to illustrate the use of the approximations.

10.
The purpose of this work is the development of a family-based association test that allows for random genotyping errors and missing data and makes use of information on affected and unaffected pedigree members. We derive the conditional likelihood functions of the general nuclear family for the following scenarios: complete parental genotype data and no genotyping errors; only one genotyped parent and no genotyping errors; no parental genotype data and no genotyping errors; and no parental genotype data with genotyping errors. We find maximum likelihood estimates of the marker locus parameters, including the penetrances and population genotype frequencies under the null hypothesis that all penetrance values are equal and under the alternative hypothesis. We then compute the likelihood ratio test. We perform simulations to assess the adequacy of the central chi-square distribution approximation when the null hypothesis is true. We also perform simulations to compare the power of the TDT and this likelihood-based method. Finally, we apply our method to 23 SNPs genotyped in nuclear families from a recently published study of idiopathic scoliosis (IS). Our simulations suggest that this likelihood ratio test statistic follows a central chi-square distribution with 1 degree of freedom under the null hypothesis, even in the presence of missing data and genotyping errors. The power comparison shows that this likelihood ratio test is more powerful than the original TDT for the simulations considered. For the IS data, the marker rs7843033 shows the most significant evidence with our method (p = 0.0003), consistent with a previous report that found rs7843033 to have the 2nd most significant TDTae p-value among the set of 23 SNPs.

11.
The nonparametric Behrens‐Fisher hypothesis is the most appropriate null hypothesis for the two‐sample comparison when one does not wish to make restrictive assumptions about possible distributions. In this paper, a numerical approach is described by which the likelihood ratio test can be calculated for the nonparametric Behrens‐Fisher problem. The approach taken here effectively reduces the number of parameters in the score equations to one by using a recursive formula for the remaining parameters. The resulting single dimensional problem can be solved numerically. The power of the likelihood ratio test is compared by simulation to that of a generalized Wilcoxon test of Brunner and Munzel. The tests have similar power for all alternatives considered when a simulated null distribution is used to generate cutoff values for the tests. The methods are illustrated on data on shoulder pain from a clinical trial.
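The generalized Wilcoxon test of Brunner and Munzel used as the comparator is implemented as `scipy.stats.brunnermunzel`. A sketch on small ordinal pain-score samples (the values below are illustrative, patterned after SciPy's documentation example rather than taken from the trial data):

```python
import numpy as np
from scipy.stats import brunnermunzel

# Ordinal pain scores for two treatment arms (illustrative values).
arm_a = np.array([1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 2, 4, 1, 1])
arm_b = np.array([3, 3, 4, 3, 1, 2, 3, 1, 1, 5, 4])

# Two-sided test of the nonparametric Behrens-Fisher hypothesis
# P(X < Y) + 0.5 * P(X = Y) = 0.5; ties are handled via midranks.
stat, pval = brunnermunzel(arm_a, arm_b)
```

By default the statistic is referred to a t distribution with Satterthwaite-type degrees of freedom, which is the small-sample recommendation of Brunner and Munzel.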

12.
In this paper the detection of associations between rare variants and continuous phenotypes of interest is investigated via the likelihood-ratio-based variance component test under the framework of linear mixed models. The hypothesis testing is challenging and nonstandard since, under the null, the variance component is located on the boundary of its parameter space. In this situation the usual asymptotic chi-square distribution of the likelihood ratio statistic does not necessarily hold. To circumvent the derivation of the null distribution we resort to the bootstrap method, owing to its generic applicability and ease of implementation. Both parametric and nonparametric bootstrap likelihood ratio tests are studied. Numerical studies are implemented to evaluate the performance of the proposed bootstrap likelihood ratio test and to compare it with some existing methods for the identification of rare variants. To reduce the computational time of the bootstrap likelihood ratio test we propose an effective mixture approximation to the bootstrap null distribution. The GAW17 data are used to illustrate the proposed test.
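For a single variance component on the boundary, the classical (non-bootstrap) reference distribution is the 50:50 mixture of χ²₀ and χ²₁ due to Self and Liang (1987); the abstract's mixture approximation to the bootstrap null plays an analogous role. A hedged sketch of the mixture p-value, contrasted with the naive χ²₁ reference (the function name is mine):

```python
from scipy.stats import chi2

def boundary_lrt_pvalue(lrt):
    """p-value for a likelihood ratio statistic when a single variance
    component lies on the boundary under H0, using the classical
    50:50 mixture of chi2(0) and chi2(1)."""
    if lrt <= 0:
        return 1.0  # the point mass at zero absorbs non-positive values
    return 0.5 * chi2.sf(lrt, df=1)

naive = chi2.sf(3.84, df=1)        # naive chi2(1) reference, ~0.05
mixed = boundary_lrt_pvalue(3.84)  # mixture reference, half as large
```

The naive χ²₁ reference is thus conservative by a factor of two in this one-component case; with several variance components the mixture weights are no longer 50:50, which is one motivation for the bootstrap approach above.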

13.
Several tests of molecular phylogenies have been proposed over the past few decades, but most of them lead to strikingly different P-values. I propose that such discrepancies are principally due to different forms of null hypotheses. To support this hypothesis, two new tests are described. Both consider the composite null hypothesis that all the topologies are equidistant from the true but unknown topology. This composite hypothesis can either be reduced to the simple hypothesis at the least favorable distribution (frequentist significance test [FST]) or to the maximum likelihood topology (frequentist hypothesis test [FHT]). In both cases, the reduced null hypothesis is tested against each topology included in the analysis. The tests proposed have an information-theoretic justification, and the distribution of their test statistic is estimated by a nonparametric bootstrap, adjusting P-values for multiple comparisons. I applied the new tests to the reanalysis of two chloroplast genes, psaA and psbB, and compared the results with those of previously described tests. As expected, the FST and the FHT behaved approximately like the Shimodaira-Hasegawa test and the bootstrap, respectively. Although the tests give overconfidence in a wrong tree when an overly simple nucleotide substitution model is assumed, more complex models incorporating heterogeneity among codon positions resolve some conflicts. To further investigate the influence of the null hypothesis, a power study was conducted. Simulations showed that the FST and the Shimodaira-Hasegawa test are the least powerful and the FHT is the most powerful across the parameter space. Although the size of all the tests is affected by misspecification, the two new tests appear more robust against misspecification of the model of evolution and consistently supported the hypothesis that the Gnetales are nested within gymnosperms.

14.
The problem of testing the separability of a covariance matrix against an unstructured variance‐covariance matrix is studied in the context of multivariate repeated measures data using Rao's score test (RST). The RST statistic is developed with the first component of the separable structure as a first‐order autoregressive (AR(1)) correlation matrix or an unstructured (UN) covariance matrix under the assumption of multivariate normality. It is shown that the distribution of the RST statistic under the null hypothesis of separability does not depend on the true values of the mean or the unstructured components of the separable structure. A significant advantage of the RST is that it can be performed for small samples, even smaller than the dimension of the data, where the likelihood ratio test (LRT) cannot be used, and it outperforms the standard LRT in a number of contexts. Monte Carlo simulations are then used to study the comparative behavior of the null distribution of the RST statistic, as well as that of the LRT statistic, in terms of sample size considerations, and for the estimation of the empirical percentiles. Our findings are compared with existing results where the first component of the separable structure is a compound symmetry (CS) correlation matrix. It is also shown by simulations that the empirical null distribution of the RST statistic converges faster than the empirical null distribution of the LRT statistic to the limiting χ² distribution. The tests are implemented on a real dataset from medical studies.

15.
The analysis of the haplotype-phenotype relationship has become increasingly important. We have developed an algorithm, using individual genotypes at linked loci as well as their quantitative phenotypes, to estimate the parameters of the distribution of the phenotypes for subjects with and without a particular haplotype by an expectation-maximization (EM) algorithm. We assumed that the phenotype for a diplotype configuration follows a normal distribution. The algorithm simultaneously calculates the maximum likelihood (L0max) under the null hypothesis (i.e., nonassociation between the haplotype and phenotype) and the maximum likelihood (Lmax) under the alternative hypothesis (i.e., association between the haplotype and phenotype). We then tested the association between the haplotype and the phenotype using the test statistic -2 log(L0max/Lmax). The above algorithm, along with some extensions for different modes of inheritance, was implemented as a computer program, QTLHAPLO. Simulation studies using single-nucleotide polymorphism (SNP) genotypes have clarified that the estimation was very accurate when the linkage disequilibrium between linked loci was rather high. Empirical power using the simulated data was sufficiently high. We applied QTLHAPLO to the analysis of real genotype data at the calpain 10 gene obtained from diabetic and control subjects in various laboratories.

16.
17.
Two statistics are proposed for testing the hypothesis of equality of the means of a bivariate normal distribution with unknown common variance and correlation coefficient when observations are missing on both variates. One of the statistics reduces to the one proposed by Bhoj (1978, 1984) when the unpaired observations on the variates are equal. The distributions of the statistics are approximated by well known distributions under the null hypothesis. The empirical powers of the tests are computed and compared with those of some known statistics. The comparison supports the use of one of the statistics proposed in this paper.

18.
Segregation analysis, employing nuclear families, is the most frequently used method to evaluate the mode of inheritance of a trait. To our knowledge, there exists no tabular information regarding the sample sizes required of individuals and families needed to perform a significance test of a specific segregation ratio for a predetermined power and significance level. To fill this gap, we have developed sample-size tables based on the asymptotic variance of the maximum likelihood estimate of the segregation ratio and on the normal approximation for two-sided hypothesis testing. Assuming homogeneous sibship size, minimum sample sizes were determined for testing the null hypothesis for the segregation ratio of 1/4 or 1/2 vs. alternative values of .05-.80, for the significance level of .05 and power of .8, for ascertainment probabilities of nearly 0 to 1.0, and sibship sizes 2-7. The results of these calculations indicate a complex interaction of the null and the alternate hypotheses, ascertainment probability, and sibship size in determining the sample size required for simple segregation analysis. The accompanying tables should aid in the appropriate design and cost assessment of future genetic epidemiologic studies.
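The core of such a calculation is the standard two-sided normal-approximation sample size for a binomial proportion. A simplified sketch that ignores the ascertainment-probability and sibship-size corrections tabulated in the paper (function name is mine):

```python
from math import ceil, sqrt

from scipy.stats import norm


def binomial_n(p0, p1, alpha=0.05, power=0.80):
    """Minimum number of offspring to test H0: segregation ratio = p0
    against the alternative p1 via the two-sided normal approximation.
    No correction for ascertainment or sibship structure is applied,
    so this understates the tabulated requirements."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    num = z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1))
    return ceil((num / (p1 - p0)) ** 2)
```

For instance, testing a 1/4 ratio against 1/2 at α = .05 and power .8 needs 26 offspring under this simplification, and the requirement grows rapidly as the alternative approaches the null.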

19.
It is natural to want to relax the assumption of homoscedasticity and Gaussian error in ANOVA models. For a two-way ANOVA model with 2 × k cells, one can derive tests of main effect for the factor with two levels (referred to as group) without assuming homoscedasticity or Gaussian error. Empirical likelihood can be used to derive testing procedures. An approximate empirical likelihood ratio test (AELRT) is derived for the test of group main effect. To approximate the distributions of the test statistics under the null hypothesis, simulation from the approximate empirical maximum likelihood estimate (AEMLE) restricted by the null hypothesis is used. The homoscedastic ANOVA F-test and a Box-type approximation to the distribution of the heteroscedastic ANOVA F-test are compared to the AELRT in level and power. The AELRT procedure is shown by simulation to have appropriate type I error control (although possibly conservative) when the distribution of the test statistic is approximated by simulation from the constrained AEMLE. The methodology is motivated and illustrated by an analysis of folate levels in the blood between two alcohol-intake groups while accounting for gender.

20.
J Herson 《Biometrics》1979,35(4):775-783
A phase II clinical trial is designed to gather data to help decide whether an experimental treatment has sufficient effectiveness to justify further study. In a one-arm trial with dichotomous outcome, we wish to test a simple null hypothesis on the Bernoulli parameter against a one-sided alternative in a sample of N patients. It is advisable to have a rule to terminate the trial early when evidence accumulates that the treatment is ineffective. Predictive probabilities based on the binomial distribution and beta and uniform prior distributions for the binomial parameter are found to be useful as the basis of group sequential designs. Size, power and average sample size for these designs are discussed. A process for the specification of an early termination plan, advice on the quantification of prior beliefs, and illustrative examples are included.
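The quantity driving such early-termination rules is the beta-binomial predictive probability of reaching the final success criterion. A minimal sketch (function name and the illustrative numbers are mine; a = b = 1 gives the uniform prior mentioned in the abstract):

```python
from scipy.stats import betabinom

def predictive_prob_success(s, m, N, r, a=1.0, b=1.0):
    """Predictive probability that the final number of responses
    reaches r, given s responses among the first m of N patients and
    a Beta(a, b) prior on the Bernoulli parameter.

    The posterior after (s, m) is Beta(a + s, b + m - s), so the
    responses among the remaining N - m patients are beta-binomial;
    we need at least r - s of them."""
    need = r - s
    if need <= 0:
        return 1.0          # criterion already met
    remaining = N - m
    if need > remaining:
        return 0.0          # criterion unreachable
    dist = betabinom(remaining, a + s, b + m - s)
    return float(dist.sf(need - 1))  # P(future responses >= need)

# e.g. 2 responses in the first 10 of N = 25 patients, needing 9 total:
pp = predictive_prob_success(s=2, m=10, N=25, r=9)
```

A design would stop the trial early once this probability falls below a prespecified futility threshold.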
