Similar Articles
20 similar articles found (search time: 78 ms)
1.
Donner A  Klar N  Zou G 《Biometrics》2004,60(4):919-925
Split-cluster designs are frequently used in the health sciences when naturally occurring clusters such as multiple sites or organs in the same subject are assigned to different treatments. However, statistical methods for the analysis of binary data arising from such designs are not well developed. The purpose of this article is to propose and evaluate a new procedure for testing the equality of event rates in a design dividing each of k clusters into two segments having multiple sites (e.g., teeth, lesions). The test statistic proposed is a generalization of a previously published procedure based on adjusting the standard Pearson chi-square statistic, but can also be derived as a score test using the approach of generalized estimating equations.
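The adjustment idea behind this kind of test can be sketched as follows: compute the ordinary Pearson chi-square for two pooled proportions, then deflate it by a design effect reflecting within-cluster correlation. This is a generic sketch, not the exact statistic of Donner, Klar, and Zou; the ICC-based inflation factor, function name, and interface are illustrative assumptions.

```python
from scipy.stats import chi2

def adjusted_chisq(successes, totals, icc, mean_cluster_size):
    """Design-effect-deflated Pearson chi-square for two proportions.

    successes/totals: per-treatment-group event counts and site counts,
    pooled over clusters. icc: assumed within-cluster correlation.
    """
    # Pooled success probability across the two treatment groups.
    p = (successes[0] + successes[1]) / (totals[0] + totals[1])
    # Standard Pearson chi-square for comparing two proportions.
    x2 = sum((s - t * p) ** 2 / (t * p * (1 - p))
             for s, t in zip(successes, totals))
    # Deflate by a variance-inflation (design-effect) factor.
    deff = 1 + (mean_cluster_size - 1) * icc
    x2_adj = x2 / deff
    return x2_adj, chi2.sf(x2_adj, df=1)
```

With icc = 0 the adjustment vanishes and the ordinary Pearson statistic is recovered; positive correlation shrinks the statistic and enlarges the p-value, guarding against the anti-conservatism of ignoring clustering.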

2.
A distribution-free test is considered for testing treatment effects in block designs with unequal cell frequencies. A test statistic that is a function of the treatment ranks is proposed and is asymptotically distributed as chi-square. The null distribution of the test statistic is obtained, and the entire procedure is illustrated by a numerical example.
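This abstract treats unequal cell frequencies; in the balanced special case, rank-based block-design tests of this kind reduce to the classical Friedman test, whose large-sample chi-square behavior SciPy implements directly. A hypothetical balanced example:

```python
from scipy.stats import friedmanchisquare

# Three treatments measured in each of five blocks (hypothetical data,
# chosen so treatment 1 always ranks highest and treatment 3 lowest).
t1 = [8.1, 7.9, 9.0, 8.5, 8.8]
t2 = [6.2, 6.8, 7.1, 6.5, 7.0]
t3 = [5.0, 5.5, 5.2, 5.8, 5.1]

# Ranks within each block feed a statistic referred to chi-square
# with (k - 1) degrees of freedom for k treatments.
stat, p = friedmanchisquare(t1, t2, t3)
```

With perfectly consistent within-block orderings as above, the statistic attains its maximum of 12/(nk(k+1)) ∑Rj² − 3n(k+1) = 10 for n = 5 blocks and k = 3 treatments.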

3.
A modified chi-squared statistic Z is proposed for testing hypotheses about category occupancy rates for individuals distributed by clusters, when the cluster sizes are observed. This statistic is the Pearson chi-square statistic based on the individuals' counts divided by 1 + M* where M* is the mean number of other individuals per cluster per individual. The kind of alternative hypothesis for which the Z-based test compares favourably in power with the Pearson chi-square test based on the cluster frequencies is given. However, we prove that this latter test is more powerful than the former one as long as the equidistribution of the random choice vectors is assumed.
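A minimal sketch of the Z statistic described above, assuming M* is computed as the average number of co-cluster members per individual, i.e. ∑ s(s−1) over clusters divided by the total count n (the function name and interface are illustrative):

```python
from scipy.stats import chi2

def cluster_adjusted_chisq(observed, expected_probs, cluster_sizes):
    """Pearson chi-square on individual counts, deflated by 1 + M*.

    observed: category counts of individuals.
    expected_probs: null occupancy probabilities per category.
    cluster_sizes: observed size of each cluster.
    """
    n = sum(cluster_sizes)
    expected = [n * p for p in expected_probs]
    x2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    # M*: for each individual in a cluster of size s there are s - 1
    # others; average that over all n individuals.
    m_star = sum(s * (s - 1) for s in cluster_sizes) / n
    z = x2 / (1 + m_star)
    return z, chi2.sf(z, df=len(observed) - 1)
```

When every cluster has size 1, M* = 0 and Z coincides with the ordinary Pearson statistic; larger clusters deflate the statistic to account for the dependence among cluster members.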

4.
Tango T 《Biometrics》2007,63(1):119-127
A class of tests with quadratic forms for detecting spatial clustering of health events based on case-control point data is proposed. It includes Cuzick and Edwards's test statistic (1990, Journal of the Royal Statistical Society, Series B 52, 73-104). Although they used the property of asymptotic normality of the test statistic, we show that such an approximation is generally poor for moderately large sample sizes. Instead, we suggest a central chi-square distribution as a better approximation to the asymptotic distribution of the test statistic. Furthermore, both to estimate the optimal value of the unknown cluster-scale parameter and to adjust for the multiple testing incurred by repeating the procedure over different parameter values, we propose the minimum of the profile p-value of the test statistic over the parameter as an integrated test statistic. We also provide a statistic to identify the areas or cases that contribute most to significant clustering. The proposed methods are illustrated with a data set concerning the locations of cases of childhood leukemia and lymphoma and another on early medieval grave site locations consisting of affected and nonaffected grave sites.

5.
The use of the Pearson chi-square statistic for testing hypotheses on biological populations is not appropriate when the individuals are distributed by clusters. In the case where the clusters are distributed independently of each other, we propose an asymptotically chi-square distributed test statistic taking into account the cluster size distribution. An example provided by European Corn Borer eggs data is used to illustrate the test procedure.

6.
Permutation tests are amongst the most commonly used statistical tools in modern genomic research, a process by which p-values are attached to a test statistic by randomly permuting the sample or gene labels. Yet permutation p-values published in the genomic literature are often computed incorrectly, understated by about 1/m, where m is the number of permutations. The same is often true in the more general situation when Monte Carlo simulation is used to assign p-values. Although the p-value understatement is usually small in absolute terms, the implications can be serious in a multiple testing context. The understatement arises from the intuitive but mistaken idea of using permutation to estimate the tail probability of the test statistic. We argue instead that permutation should be viewed as generating an exact discrete null distribution. The relevant literature, some of which is likely to have been relatively inaccessible to the genomic community, is reviewed and summarized. A computation strategy is developed for exact p-values when permutations are randomly drawn. The strategy is valid for any number of permutations and samples. Some simple recommendations are made for the implementation of permutation tests in practice.
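The recommended correction can be illustrated with a two-sample permutation test: counting the observed statistic itself among the permutations gives p = (b + 1)/(m + 1), which can never be zero and does not understate the p-value the way the naive b/m does. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_pvalue(x, y, m=9999):
    """Two-sample permutation test on the absolute difference in means.

    Returns (b + 1) / (m + 1), treating the observed labeling as one
    draw from the exact discrete null distribution.
    """
    observed = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    b = 0
    for _ in range(m):
        perm = rng.permutation(pooled)
        stat = abs(perm[:len(x)].mean() - perm[len(x):].mean())
        if stat >= observed:
            b += 1
    return (b + 1) / (m + 1)
```

Note that the smallest attainable p-value is 1/(m + 1), consistent with viewing the m random permutations plus the observed data as m + 1 exchangeable draws under the null.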

7.
Bilder CR  Loughin TM 《Biometrics》2004,60(1):241-248
Questions that ask respondents to "choose all that apply" from a set of items occur frequently in surveys. Categorical variables that summarize this type of survey data are called both pick any/c variables and multiple-response categorical variables. It is often of interest to test for independence between two categorical variables. When both categorical variables can have multiple responses, traditional Pearson chi-square tests for independence should not be used because of the within-subject dependence among responses. An intuitively constructed version of the Pearson statistic is proposed to perform the test using bootstrap procedures to approximate its sampling distribution. First- and second-order adjustments to the proposed statistic are given in order to use a chi-square distribution approximation. A Bonferroni adjustment is proposed to perform the test when the joint set of responses for individual subjects is unavailable. Simulations show that the bootstrap procedures hold the correct size more consistently than the other procedures.

8.
A previous paper proposed a new method of QTL mapping called joint mapping (JM). Some problems have been found in model fitting and model testing due to the neglect of the correlations among different observations of the dependent variable in this model. The present paper reports a method of solving the problems. The coefficient of correlation between two observations of the dependent variable is derived. A generalized least square (GLS) approach is developed for model fitting and a strategy and procedure of model testing based on a chi-square test is suggested. A simulated example is given. The example shows that the JM method is quite efficient in mapping multiple linked QTLs.

9.
It is a common practice to use resampling methods such as the bootstrap for calculating the p-value for each test when performing large scale multiple testing. The precision of the bootstrap p-values and that of the false discovery rate (FDR) relies on the number of bootstraps used for testing each hypothesis. Clearly, the larger the number of bootstraps the better the precision. However, the required number of bootstraps can be computationally burdensome, and it multiplies the number of tests to be performed. Further adding to the computational challenge is that in some applications the calculation of the test statistic itself may require considerable computation time. As technology improves one can expect the dimension of the problem to increase as well. For instance, during the early days of microarray technology, the number of probes on a cDNA chip was less than 10,000. Now the Affymetrix chips come with over 50,000 probes per chip. Motivated by this important need, we developed a simple adaptive bootstrap methodology for large scale multiple testing, which reduces the total number of bootstrap calculations while ensuring the control of the FDR. The proposed algorithm results in a substantial reduction in the number of bootstrap samples. Based on a simulation study we found that, relative to the number of bootstraps required by the standard Benjamini-Hochberg (BH) FDR procedure, the proposed methodology achieved a very substantial reduction in the number of bootstraps. In some cases the new algorithm required as little as 1/6th the number of bootstraps as the conventional BH procedure. Thus, if the conventional BH procedure used 1,000 bootstraps, then the proposed method required only 160 bootstraps. This methodology has been implemented for time-course/dose-response data in our software, ORIOGEN, which is available from the authors upon request.

10.
A noniterative procedure based upon the minimum modified χ2 approach is employed to test the model of homogeneity of one-dimensional margins in square tables. Such tables may arise from matched pairs with k outcomes. The special case of double dichotomy (i.e., matched pairs with two outcomes) reduces to the McNemar test statistic. The case of multiple matched controls is also dealt with. Cochran's Q test is used to test marginal homogeneity when comparing m distinct matched samples, in addition to testing trends in proportions. Reference is made to the equivalence between these tests and the approach of hierarchical log-linear models for testing marginal homogeneity of square tables.
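The double-dichotomy special case mentioned here is the familiar McNemar statistic, which depends only on the two discordant cell counts b and c of the paired 2x2 table:

```python
from scipy.stats import chi2

def mcnemar_statistic(b, c):
    """McNemar chi-square for paired binary outcomes.

    b, c: counts of the two discordant cells (pairs whose two members
    gave different outcomes). Concordant cells do not enter the test.
    """
    stat = (b - c) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)
```

For example, with 20 pairs discordant in one direction and 10 in the other, the statistic is (20 − 10)²/30 ≈ 3.33 on 1 df.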

11.
The central theme in case-control genetic association studies is to efficiently identify genetic markers associated with trait status. Powerful statistical methods are critical to accomplishing this goal. A popular method is the omnibus Pearson's chi-square test applied to genotype counts. To achieve increased power, tests based on an assumed trait model have been proposed. However, they are not robust to model misspecification. Much research has been carried out on enhancing robustness of such model-based tests. An analysis framework that tests the equality of allele frequency while allowing for different deviation from Hardy-Weinberg equilibrium (HWE) between cases and controls is proposed. The proposed method does not require specification of trait models or HWE. It involves only 1 degree of freedom. The likelihood ratio statistic, score statistic, and Wald statistic associated with this framework are introduced. Their performance is evaluated by extensive computer simulation in comparison with existing methods.

12.
Ryman N  Jorde PE 《Molecular ecology》2001,10(10):2361-2373
A variety of statistical procedures are commonly employed when testing for genetic differentiation. In a typical situation two or more samples of individuals have been genotyped at several gene loci by molecular or biochemical means, and in a first step a statistical test for allele frequency homogeneity is performed at each locus separately, using, e.g. the contingency chi-square test, Fisher's exact test, or some modification thereof. In a second step the results from the separate tests are combined for evaluation of the joint null hypothesis that there is no allele frequency difference at any locus, corresponding to the important case where the samples would be regarded as drawn from the same statistical and, hence, biological population. Presently, there are two conceptually different strategies in use for testing the joint null hypothesis of no difference at any locus. One approach is based on the summation of chi-square statistics over loci. Another method is employed by investigators applying the Bonferroni technique (adjusting the P-value required for rejection to account for the elevated alpha errors when performing multiple tests simultaneously) to test if the heterogeneity observed at any particular locus can be regarded significant when considered separately. Under this approach the joint null hypothesis is rejected if one or more of the component single locus tests is considered significant under the Bonferroni criterion. We used computer simulations to evaluate the statistical power and realized alpha errors of these strategies when evaluating the joint hypothesis after scoring multiple loci. We find that the 'extended' Bonferroni approach generally is associated with low statistical power and should not be applied in the current setting. Further, and contrary to what might be expected, we find that 'exact' tests typically behave poorly when combined in existing procedures for joint hypothesis testing. 
Thus, while exact tests are generally to be preferred over approximate ones when testing each particular locus, approximate tests such as the traditional chi-square seem preferable when addressing the joint hypothesis.
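The two strategies compared in this study can be sketched as follows, assuming independent single-locus chi-square statistics (function names are illustrative):

```python
from scipy.stats import chi2

def combined_chisq(stats, dfs):
    """Summation over loci: the sum of independent chi-square
    statistics is chi-square with the summed degrees of freedom."""
    return chi2.sf(sum(stats), sum(dfs))

def bonferroni_joint(pvalues, alpha=0.05):
    """'Extended' Bonferroni: reject the joint null if any single-locus
    p-value falls below alpha divided by the number of loci."""
    return min(pvalues) < alpha / len(pvalues)
```

With three loci each yielding X² = 3 on 1 df, the summed test gives X² = 9 on 3 df (p ≈ 0.029) and rejects at the 5% level, whereas no single-locus p-value (about 0.083 each) survives the Bonferroni threshold of 0.05/3, illustrating the power gap the simulations report.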

13.
A multi-sample slippage test based on ordered observations is given. The test statistic is based on the sum of the ranks of the sample. The probability distribution of the test statistic is worked out for small samples and shown to approach a chi-square distribution for large samples. The analytical procedure is explained by a numerical example.

14.
Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients.

15.
Many recently developed nonparametric jump tests can be viewed as multiple hypothesis testing problems. For such multiple hypothesis tests, it is well known that controlling the type I error often yields a large proportion of erroneous rejections, and the situation becomes even worse when jump occurrence is a rare event. To obtain more reliable results, we aim to control the false discovery rate (FDR), an efficient compound error measure for erroneous rejections in multiple testing problems. We perform the test via the Barndorff-Nielsen and Shephard (BNS) test statistic, and control the FDR with the Benjamini and Hochberg (BH) procedure. We provide asymptotic results for the FDR control. From simulations, we examine relevant theoretical results and demonstrate the advantages of controlling the FDR. The hybrid approach is then applied to empirical analysis of two benchmark stock indices with high frequency data.
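The BH step-up procedure used here can be sketched in a few lines: sort the p-values, compare the i-th smallest against iq/m, and reject every hypothesis up to the largest i that passes.

```python
import numpy as np

def benjamini_hochberg(pvalues, q=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns a boolean array marking which hypotheses are rejected
    while controlling the FDR at level q (independent or positively
    dependent p-values assumed).
    """
    p = np.asarray(pvalues)
    m = len(p)
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    # Step-up: find the LARGEST i with p_(i) <= i*q/m, then reject
    # all hypotheses with the i smallest p-values.
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected
```

On the 15 p-values of Benjamini and Hochberg's original worked example, the procedure at q = 0.05 rejects exactly the four smallest.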

16.
The resampling-based test, which often relies on permutation or bootstrap procedures, has been widely used for statistical hypothesis testing when the asymptotic distribution of the test statistic is unavailable or unreliable. It requires repeated calculations of the test statistic on a large number of simulated data sets for its significance level assessment, and thus it can become very computationally intensive. Here, we propose an efficient p-value evaluation procedure by adapting the stochastic approximation Markov chain Monte Carlo algorithm. The new procedure can be used easily to estimate the p-value for any resampling-based test. We show through numeric simulations that the proposed procedure can be 100 to 500,000 times as efficient (in terms of computing time) as the standard resampling-based procedure when evaluating a test statistic with a small p-value (e.g., less than 10^-6). With its computational burden reduced by the proposed procedure, the versatile resampling-based test becomes computationally feasible for a much wider range of applications. We demonstrate the application of the new method by applying it to a large-scale genetic association study of prostate cancer.
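A much simpler sequential scheme than the SAMC algorithm proposed here conveys the same computational intuition: stop resampling as soon as enough exceedances accumulate, so that large p-values terminate early and only genuinely small p-values consume many simulations (a Besag-Clifford-style stopping rule; names and interface are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def sequential_mc_pvalue(test_stat, simulate_stat, h=10, max_sims=10_000):
    """Sequential Monte Carlo p-value with early stopping.

    simulate_stat(rng) draws one statistic from the null. Stop as soon
    as h simulated values reach test_stat; otherwise exhaust max_sims
    and return the usual (b + 1) / (m + 1) estimate.
    """
    exceed = 0
    for n in range(1, max_sims + 1):
        if simulate_stat(rng) >= test_stat:
            exceed += 1
            if exceed == h:
                return h / n  # stopped early: unbiased-style estimate
    return (exceed + 1) / (max_sims + 1)
```

A non-significant test (p near 0.5) stops after a few dozen draws, while a highly significant one runs to max_sims, which is where the bulk of the computing time is actually informative.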

17.
A statistical procedure for comparing the performance of a new product with existing products on the basis of sensory characters has been developed. A test statistic was derived for testing the null hypothesis of equality of treatment effects in the case of fractional triad comparisons. The null distribution of the test statistic has been obtained, and it is found to follow a χ2-distribution for a large number of observations. The procedure is quite simple and is based on a distribution-free test requiring only ordinal scale measurement. The method of analysis is explained by a numerical example.

18.
Sensitivity and specificity have traditionally been used to assess the performance of a diagnostic procedure. Diagnostic procedures with both high sensitivity and high specificity are desirable, but these procedures are frequently too expensive, hazardous, and/or difficult to operate. A less sophisticated procedure may be preferred, if the loss of the sensitivity or specificity is determined to be clinically acceptable. This paper addresses the problem of simultaneous testing of sensitivity and specificity for an alternative test procedure with a reference test procedure when a gold standard is present. The hypothesis is formulated as a compound hypothesis of two non‐inferiority (one‐sided equivalence) tests. We present an asymptotic test statistic based on the restricted maximum likelihood estimate in the framework of comparing two correlated proportions under the prospective and retrospective sampling designs. The sample size and power of an asymptotic test statistic are derived. The actual type I error and power are calculated by enumerating the exact probabilities in the rejection region. For applications that require high sensitivity as well as high specificity, a large number of positive subjects and a large number of negative subjects are needed. We also propose a weighted sum statistic as an alternative test by comparing a combined measure of sensitivity and specificity of the two procedures. The sample size determination is independent of the sampling plan for the two tests.

19.
Investigations of sample size for planning case-control studies have usually been limited to detecting a single factor. In this paper, we investigate sample size for multiple risk factors in strata-matched case-control studies. We construct an omnibus statistic for testing M different risk factors based on the jointly sufficient statistics of parameters associated with the risk factors. The statistic is non-iterative, and it reduces to the Cochran statistic when M = 1. The asymptotic power function of the test is a non-central chi-square with M degrees of freedom and the sample size required for a specific power can be obtained by the inverse relationship. We find that the equal sample allocation is optimum. A Monte Carlo experiment demonstrates that an approximate formula for calculating sample size is satisfactory in typical epidemiologic studies. An approximate sample size obtained using Bonferroni's method for multiple comparisons is much larger than that obtained using the omnibus test. Approximate sample size formulas investigated in this paper using the omnibus test, as well as the individual tests, can be useful in designing case-control studies for detecting multiple risk factors.
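The inverse power relationship described here can be sketched with a noncentral chi-square: under a local alternative the M-df omnibus statistic is approximately noncentral chi-square with a noncentrality growing linearly in n, so the required sample size can be found by direct search. A generic sketch under that assumption, not the authors' exact formula (per_subject_ncp, an assumed per-matched-set noncentrality contribution, is illustrative):

```python
from scipy.stats import chi2, ncx2

def power_omnibus(n, per_subject_ncp, m, alpha=0.05):
    """Power of an M-df omnibus chi-square test whose noncentrality
    is n * per_subject_ncp under the alternative."""
    crit = chi2.ppf(1 - alpha, df=m)
    return ncx2.sf(crit, df=m, nc=n * per_subject_ncp)

def required_n(target_power, per_subject_ncp, m, alpha=0.05):
    """Smallest n achieving the target power, by direct search
    (power is monotone increasing in n)."""
    n = 1
    while power_omnibus(n, per_subject_ncp, m, alpha) < target_power:
        n += 1
    return n
```

The same machinery reproduces the qualitative finding above: demanding higher power (or splitting alpha across M Bonferroni-corrected individual tests) drives the required n up relative to the single omnibus test.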

20.
The three‐arm design with a test treatment, an active control and a placebo group is the gold standard design for non‐inferiority trials if it is ethically justifiable to expose patients to placebo. In this paper, we first use the closed testing principle to establish the hierarchical testing procedure for the multiple comparisons involved in the three‐arm design. For the effect preservation test we derive the explicit formula for the optimal allocation ratios. We propose a group sequential type design, which naturally accommodates the hierarchical testing procedure. Under this proposed design, Monte Carlo simulations are conducted to evaluate the performance of the sequential effect preservation test when the variance of the test statistic is estimated based on the restricted maximum likelihood estimators of the response rates under the null hypothesis. When there are uncertainties for the placebo response rate, the proposed design demonstrates better operating characteristics than the fixed sample design.
