Similar References
 20 similar references found.
1.
Tamhane AC, Logan BR. Biometrics 2002, 58(3): 650-656
Tang, Gnecco, and Geller (1989, Biometrika 76, 577-583) proposed an approximate likelihood ratio (ALR) test of the null hypothesis that a normal mean vector equals a null vector against the alternative that all of its components are nonnegative with at least one strictly positive. This test is useful for comparing a treatment group with a control group on multiple endpoints, where the data from the two groups are assumed to follow multivariate normal distributions with different mean vectors and a common covariance matrix (the homoscedastic case). Tang et al. derived the test statistic and its null distribution assuming a known covariance matrix. In practice, when the covariance matrix is estimated, the critical constants tabulated by Tang et al. result in a highly liberal test. To deal with this problem, we derive an accurate small-sample approximation to the null distribution of the ALR test statistic by using the moment matching method. The proposed approximation is then extended to the heteroscedastic case. The accuracy of both approximations is verified by simulations. A real data example is given to illustrate the use of the approximations.

2.
Benjamini Y, Heller R. Biometrics 2008, 64(4): 1215-1222
We consider the problem of testing for partial conjunction of hypotheses, which asserts that at least u out of n tested hypotheses are false. It offers a middle ground between testing the conjunction of null hypotheses against the alternative that at least one is non-null, and testing the disjunction of null hypotheses against the alternative that all hypotheses are non-null. We suggest powerful test statistics for testing such a partial conjunction hypothesis that are valid under dependence between the test statistics as well as under independence. We then address the problem of testing many partial conjunction hypotheses simultaneously using the false discovery rate (FDR) approach. We prove that if the FDR controlling procedure in Benjamini and Hochberg (1995, Journal of the Royal Statistical Society, Series B 57, 289-300) is used for this purpose, the FDR is controlled under various dependency structures. Moreover, we can screen at all levels simultaneously in order to display the findings on a superimposed map and still control an appropriate FDR measure. We apply the method to examples from microarray analysis and functional magnetic resonance imaging (fMRI), two application areas where the need for partial conjunction analysis has been identified.
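To make the partial conjunction idea concrete, here is a minimal Python sketch (not the authors' exact procedure): it combines the n−u+1 largest p-values with a Simes-type rule to obtain a partial conjunction p-value for "at least u of n nulls are false," then applies the Benjamini-Hochberg step-up procedure across many such p-values. The function names, the choice of the Simes combination, and the toy data are illustrative assumptions.

```python
import numpy as np

def partial_conjunction_pvalue(pvals, u):
    """Simes-type p-value for the partial conjunction null
    'fewer than u of the n hypotheses are false'."""
    p = np.sort(np.asarray(pvals, dtype=float))
    q = p[u - 1:]                      # the n - u + 1 largest p-values, ascending
    m = len(q)
    i = np.arange(1, m + 1)
    return float(np.min(m * q / i))    # Simes combination of the retained p-values

def benjamini_hochberg(pvals, q=0.05):
    """Boolean rejection vector for the BH step-up procedure at level q."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()         # largest index meeting the BH bound
        reject[order[:k + 1]] = True
    return reject

# Toy example: p-values from n = 4 studies of the same location, asking whether
# at least u = 2 of them show a real effect (a replicability-style question).
study_p = [0.001, 0.02, 0.30, 0.77]
print(partial_conjunction_pvalue(study_p, u=2))

# Screening many locations at once, controlling FDR over the partial conjunction tests.
rng = np.random.default_rng(0)
many_p = rng.uniform(size=(1000, 4))
pc = np.array([partial_conjunction_pvalue(row, u=2) for row in many_p])
print(int(benjamini_hochberg(pc, q=0.05).sum()), "locations declared replicated")
```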

3.
Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients.
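As a rough illustration of the standardized-residual resampling idea (without the robust moment estimation the paper proposes), the sketch below computes a Bartlett/Box-type statistic for equal covariance matrices and calibrates it by resampling whitened residuals. All names and simulation settings are hypothetical.

```python
import numpy as np

def bartlett_stat(groups):
    """Bartlett/Box-type statistic for H0: equal covariance matrices across groups."""
    ns = np.array([len(g) for g in groups])
    covs = [np.cov(g, rowvar=False) for g in groups]
    pooled = sum((n - 1) * S for n, S in zip(ns, covs)) / (ns.sum() - len(groups))
    stat = (ns.sum() - len(groups)) * np.linalg.slogdet(pooled)[1]
    for n, S in zip(ns, covs):
        stat -= (n - 1) * np.linalg.slogdet(S)[1]
    return stat

def standardize(group):
    """Center by the group mean and whiten by the group sample covariance."""
    x = group - group.mean(axis=0)
    L = np.linalg.cholesky(np.linalg.inv(np.cov(group, rowvar=False)))
    return x @ L

def resampling_pvalue(groups, n_boot=2000, seed=0):
    """Resample pooled standardized residuals to approximate the null distribution."""
    rng = np.random.default_rng(seed)
    observed = bartlett_stat(groups)
    pool = np.vstack([standardize(g) for g in groups])
    sizes = [len(g) for g in groups]
    exceed = 0
    for _ in range(n_boot):
        boot = [pool[rng.integers(0, len(pool), size=n)] for n in sizes]
        exceed += bartlett_stat(boot) >= observed
    return (exceed + 1) / (n_boot + 1)

rng = np.random.default_rng(1)
g1 = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], size=40)
g2 = rng.multivariate_normal([1, 1], [[1.0, 0.3], [0.3, 1.0]], size=50)
print(resampling_pvalue([g1, g2]))   # covariances are equal here, so p should be large
```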

4.
The conventional nonparametric tests in survival analysis, such as the log‐rank test, assess the null hypothesis that the hazards are equal at all times. However, hazards are hard to interpret causally, and other null hypotheses are more relevant in many scenarios with survival outcomes. To allow for a wider range of null hypotheses, we present a generic approach to define test statistics. This approach utilizes the fact that a wide range of common parameters in survival analysis can be expressed as solutions of differential equations. Thereby, we can test hypotheses based on survival parameters that solve differential equations driven by cumulative hazards, and it is easy to implement the tests on a computer. We present simulations, suggesting that our tests perform well for several hypotheses in a range of scenarios. As an illustration, we apply our tests to evaluate the effect of adjuvant chemotherapies in patients with colon cancer, using data from a randomized controlled trial.

5.
6.
Dunson DB, Herring AH. Biometrics 2003, 59(4): 916-923
In studying the relationship between an ordered categorical predictor and an event time, it is standard practice to include dichotomous indicators of the different levels of the predictor in a Cox model. One can then use a multiple degree-of-freedom score or partial likelihood ratio test for hypothesis testing. Often, interest focuses on comparing the null hypothesis of no difference to an order-restricted alternative, such as a monotone increase across levels of a predictor. This article proposes a Bayesian approach for addressing hypotheses of this type. We reparameterize the Cox model in terms of a cumulative product of parameters having conjugate prior densities, consisting of mixtures of point masses at one, and truncated gamma densities. Due to the structure of the model, posterior computation can proceed via a simple and efficient Gibbs sampling algorithm. Posterior probabilities for the global null hypothesis and subhypotheses, comparing the hazards for specific groups, can be calculated directly from the output of a single Gibbs chain. The approach allows for level sets across which a predictor has no effect. Generalizations to multiple predictors are described, and the method is applied to a study of emergency medical treatment for stroke.

7.
Solow AR. Biometrics 2000, 56(4): 1272-1273
A common problem in ecology is to determine whether the diversity of a biological community has changed over time or in response to a disturbance. This involves testing the null hypothesis that diversity is constant against the alternative hypothesis that it has changed. As the power of this test may be low, Fritsch and Hsu (1999, Biometrics 55, 1300-1305) proposed reversing the null and alternative hypotheses. Here, I return to the original formulation and point out that power can be improved by adopting a parametric model for relative abundances and by testing against an ordered alternative.

8.
We consider a two-factor experiment in which the factors have the same number of levels with a natural ordering among levels. We test the hypothesis that the effects of the two treatments are symmetric against a one-sided alternative using the likelihood ratio criterion. We also study the test of the one-sided alternative, taken as the null hypothesis, against the unrestricted alternative. The exact null distribution is derived and shown to be a weighted mixture of chi-square distributions. An example is used to illustrate the procedure.

9.
Both theoretical calculations and simulation studies have been used to compare and contrast the statistical power of methods for mapping quantitative trait loci (QTLs) in simple and complex pedigrees. A widely used approach in such studies is to derive or simulate the expected mean test statistic under the alternative hypothesis of a segregating QTL and to equate a larger mean test statistic with larger power. In the present study, we show that, even when the test statistic under the null hypothesis of no linkage follows a known asymptotic distribution (the standard being chi-square), it cannot be assumed that the distribution under the alternative hypothesis is noncentral chi-square. Hence, mean test statistics cannot be used to indicate power differences, and a comparison between methods that is based on simulated average test statistics may lead to the wrong conclusion. We illustrate this important finding, through simulations and analytical derivations, for a recently proposed new regression method for the analysis of general pedigrees to map quantitative trait loci. We show that this regression method is neither necessarily more powerful nor computationally more efficient than a maximum-likelihood variance-component approach. We advocate the use of empirical power to compare trait-mapping methods.
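The recommendation to compare methods by empirical power rather than by mean test statistics can be illustrated with a generic Monte Carlo sketch: estimate the null critical value by simulation, then estimate power as the rejection rate under the alternative. The one-sample statistic and the distributions below are stand-ins, not the linkage methods discussed in the paper.

```python
import numpy as np

def empirical_power(test_stat, simulate_null, simulate_alt, alpha=0.05, n_sim=5000, seed=0):
    """Estimate power from the simulated null and alternative distributions of the
    statistic, rather than from its mean under the alternative."""
    rng = np.random.default_rng(seed)
    null_stats = np.array([test_stat(simulate_null(rng)) for _ in range(n_sim)])
    crit = np.quantile(null_stats, 1 - alpha)            # empirical critical value
    alt_stats = np.array([test_stat(simulate_alt(rng)) for _ in range(n_sim)])
    return float(np.mean(alt_stats > crit))

# Stand-in example: a one-sided one-sample t-type statistic whose distribution under
# this heavy-tailed alternative is not a noncentral version of its null distribution.
n = 30
stat = lambda x: np.sqrt(n) * x.mean() / x.std(ddof=1)
null_model = lambda rng: rng.normal(0.0, 1.0, size=n)
alt_model = lambda rng: rng.standard_t(df=3, size=n) + 0.4

print("empirical power:", empirical_power(stat, null_model, alt_model))
```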

10.
D Zelterman, C T Le. Biometrics 1991, 47(2): 751-755
We examine several tests of homogeneity of the odds ratio in the analysis of 2 x 2 tables arising from epidemiologic 1:R matched case-control studies. The T4 and T5 statistics proposed by Liang and Self (1985, Biometrika 72, 353-358) are unable to detect obvious inhomogeneity in two numerical examples and in simulation studies. The null hypothesis is rejected by the chi-square statistic of Ejigou and McHugh (1984, Biometrika 71, 408-411) and by a newly proposed method whose significance level must be simulated.

11.
There are a number of nonparametric procedures known for testing goodness-of-fit in the univariate case. Similar procedures can be derived for testing goodness-of-fit in the multivariate case through an application of the theory of statistically equivalent blocks (SEB). The SEB transforms the data into coverages which, under the null hypothesis, are distributed as spacings from a uniform distribution on [0,1]. In this paper, we present a multivariate nonparametric test of goodness-of-fit based on the SEB when the multivariate distributions under the null hypothesis and the alternative hypothesis are "weakly" ordered. Empirical results are given on the performance of the proposed test in an application to the problem of assessing the reliability of a p-component system.

12.
The Cox hazards model (Cox, 1972, Journal of the Royal Statistical Society, Series B 34, 187-220) for survival data is routinely used in many applied fields, sometimes, however, with too little emphasis on the fit of the model. A useful alternative to the Cox model is the Aalen additive hazards model (Aalen, 1980, in Lecture Notes in Statistics-2, 1-25), which can easily accommodate time-changing covariate effects. It is of interest to decide which of the two models is more appropriate in a given application. This is a nontrivial problem, as these two classes of models are nonnested except in special cases. In this article we explore the Mizon-Richard encompassing test for this particular problem. It turns out that it corresponds to fitting the Aalen model to the martingale residuals obtained from the Cox regression analysis. We also consider a variant of this method, which relates to the proportional excess model (Martinussen and Scheike, 2002, Biometrika 89, 283-298). Large sample properties of the suggested methods under the two rival models are derived. The finite-sample properties of the proposed procedures are assessed through a simulation study. The methods are further applied to the well-known primary biliary cirrhosis data set.

13.
The development of molecular typing techniques applied to the study of population genetic diversity produces data of increasing precision, but at the cost of some ambiguities. As distinct techniques may produce distinct kinds of ambiguities, a crucial issue is to assess the differences between frequency distributions estimated from data produced by alternative techniques for the same sample. To that end, we developed a resampling scheme that allows evaluating, by statistical means, the significance of the difference between two frequency distributions. The same approach is then shown to be applicable to testing selective neutrality when only sample frequencies are known. The use of these original methods is presented here through an application to the genetic study of a Munda human population sample, where three different HLA loci were typed using two different molecular methods (reverse PCR-SSO typing on microbead arrays based on Luminex technology, and PCR-SSP typing), as described in detail in the companion article by Riccio et al. [The Austroasiatic Munda population from India and its enigmatic origin: An HLA diversity study. Hum. Biol. 83:405-435 (2011)]. The differences between the frequency estimates of the two typing techniques were found to be smaller than those resulting from sampling. Overall, we show that using a resampling scheme to validate frequency estimates is effective when alternative frequency estimates are available. Moreover, resampling appears to be the only way to test selective neutrality when only frequency data are available to describe the genetic structure of populations.
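A minimal sketch of one way such a resampling comparison could look (not the authors' exact scheme): the distance between the two frequency estimates is compared with the distances produced by repeatedly drawing two samples of the same size from a common reference distribution. The distance measure, the multinomial resampling, and the toy HLA-like frequencies are all assumptions made for illustration.

```python
import numpy as np

def freq_distance(f1, f2):
    """A simple distance between two frequency vectors (illustrative choice)."""
    return 0.5 * float(np.abs(np.asarray(f1) - np.asarray(f2)).sum())

def resampling_distance_test(freq_a, freq_b, n_genes, n_boot=5000, seed=0):
    """Is the distance between two frequency estimates of the same sample larger
    than resampling variability alone would produce?"""
    rng = np.random.default_rng(seed)
    observed = freq_distance(freq_a, freq_b)
    reference = (np.asarray(freq_a) + np.asarray(freq_b)) / 2   # common distribution under H0
    null_dist = np.empty(n_boot)
    for b in range(n_boot):
        # two independent re-estimates of the same sample size from the reference
        c1 = rng.multinomial(n_genes, reference) / n_genes
        c2 = rng.multinomial(n_genes, reference) / n_genes
        null_dist[b] = freq_distance(c1, c2)
    return observed, float(np.mean(null_dist >= observed))

# Toy allele frequencies for one locus typed by two methods on the same 2N = 200 genes
# (all numbers invented).
f_method_1 = [0.40, 0.25, 0.20, 0.10, 0.05]
f_method_2 = [0.38, 0.27, 0.19, 0.11, 0.05]
print(resampling_distance_test(f_method_1, f_method_2, n_genes=200))
```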

14.
Datta S, Satten GA. Biometrics 2008, 64(2): 501-507
We consider the problem of comparing two outcome measures when the pairs are clustered. Using the general principle of within-cluster resampling, we obtain a novel signed-rank test for clustered paired data. Using a simple simulation model with informative cluster size, we show that, compared with four existing signed-rank tests, only our test maintains the correct size under a null hypothesis of marginal symmetry; furthermore, our test has adequate power when cluster size is noninformative. In general, cluster size is informative if the distribution of pair-wise differences within a cluster depends on the cluster size. An application of our method to testing for a trend in radiation toxicity is presented.
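To illustrate the within-cluster resampling principle (not the exact Datta-Satten statistic, whose variance is derived in closed form in the paper), the sketch below averages the Wilcoxon signed-rank statistic over resamples that keep one pairwise difference per cluster, and calibrates it by a cluster-level sign-flipping scheme assumed valid under symmetry. All names and data are invented.

```python
import numpy as np

def wcr_signed_rank(diffs_by_cluster, n_resample=100, rng=None):
    """Average of the Wilcoxon signed-rank statistic over resamples that keep
    one pairwise difference per cluster (ties in |d| handled crudely)."""
    rng = rng if rng is not None else np.random.default_rng()
    stats = np.empty(n_resample)
    for b in range(n_resample):
        d = np.array([rng.choice(c) for c in diffs_by_cluster])
        ranks = np.argsort(np.argsort(np.abs(d))) + 1.0
        stats[b] = ranks[d > 0].sum()          # W+ for this resample
    return stats.mean()

def signflip_pvalue(diffs_by_cluster, n_perm=200, seed=0):
    """Calibrate the WCR-averaged statistic by flipping the sign of each cluster's
    differences (a randomization scheme assumed valid under symmetry; the published
    test instead uses a closed-form variance)."""
    rng = np.random.default_rng(seed)
    observed = wcr_signed_rank(diffs_by_cluster, rng=rng)
    null = np.empty(n_perm)
    for b in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=len(diffs_by_cluster))
        flipped = [s * np.asarray(c) for s, c in zip(signs, diffs_by_cluster)]
        null[b] = wcr_signed_rank(flipped, rng=rng)
    center = np.median(null)
    return float(np.mean(np.abs(null - center) >= abs(observed - center)))

# Toy clustered paired data: each cluster holds the paired differences of its members,
# with a positive shift of 0.3 as the signal.
rng = np.random.default_rng(2)
clusters = [rng.normal(0.3, 1.0, size=rng.integers(1, 6)) for _ in range(25)]
print(signflip_pvalue(clusters))
```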

15.
Lack of persistence, or erosion, of the regression effect is an alternative to proportional hazards of particular interest in many medical applications. Such a departure from proportional hazards is often the most likely direction in which the model may be inadequate. Questions such as whether the effect of treatment is only transitory, or to what extent an initially measured prognostic variable maintains its impact, frequently arise. In the context of a simple changepoint model, we propose a test of the null hypothesis of proportional hazards against the specific alternative of erosion of the regression effect. The particular changepoint model used can be viewed as a first approximation to a more complex reality, an approximation that enables us to avoid specifically modeling the functional form that any erosion might take. Practical guidelines for carrying out the test are provided. The approach is illustrated in the context of a study on risk factors for breast cancer survival.

16.
Multipoint (MP) linkage analysis represents a valuable tool for whole-genome studies but suffers from the disadvantage that its probability distribution is unknown and varies as a function of marker information and density, genetic model, number and structure of pedigrees, and the affection status distribution [Xing and Elston: Genet Epidemiol 2006;30:447-458; Hodge et al.: Genet Epidemiol 2008;32:800-815]. This implies that the MP significance criterion can differ for each marker and each dataset, and this fact makes planning and evaluation of MP linkage studies difficult. One way to circumvent this difficulty is to use simulations or permutation testing. Another approach is to use an alternative statistical paradigm to assess the statistical evidence for linkage, one that does not require computation of a p value. Here we show how to use the evidential statistical paradigm for planning, conducting, and interpreting MP linkage studies when the disease model is known (lod analysis) or unknown (mod analysis). As a key feature, the evidential paradigm decouples uncertainty (i.e. error probabilities) from statistical evidence. In the planning stage, the user calculates error probabilities, as functions of one's design choices (sample size, choice of alternative hypothesis, choice of likelihood ratio (LR) criterion k) in order to ensure a reliable study design. In the data analysis stage one no longer pays attention to those error probabilities. In this stage, one calculates the LR for two simple hypotheses (i.e. trait locus is unlinked vs. trait locus is located at a particular position) as a function of the parameter of interest (position). The LR directly measures the strength of evidence for linkage in a given data set and remains completely divorced from the error probabilities calculated in the planning stage. An important consequence of this procedure is that one can use the same criterion k for all analyses. This contrasts with the situation described above, in which the value one uses to conclude significance may differ for each marker and each dataset in order to accommodate a fixed test size, α. In this study we accomplish two goals that lead to a general algorithm for conducting evidential MP linkage studies. (1) We provide two theoretical results that translate into guidelines for investigators conducting evidential MP linkage: (a) Comparing mods to lods, error rates (including probabilities of weak evidence) are generally higher for mods when the null hypothesis is true, but lower for mods in the presence of true linkage. Royall [J Am Stat Assoc 2000;95:760-780] has shown that errors based on lods are bounded and generally small. Therefore when the true disease model is unknown and one chooses to use mods, one needs to control misleading evidence rates only under the null hypothesis; (b) for any given pair of contiguous marker loci, error rates under the null are greatest at the midpoint between the markers spaced furthest apart, which provides an obvious simple alternative hypothesis to specify for planning MP linkage studies. (2) We demonstrate through extensive simulation that this evidential approach can yield low error rates under the null and alternative hypotheses for both lods and mods, despite the fact that mod scores are not true LRs. Using these results we provide a coherent approach to implement a MP linkage study using the evidential paradigm.

17.
Zheng G, Chen Z. Biometrics 2005, 61(1): 254-258
In many practical problems, a hypothesis test involves a nuisance parameter which appears only under the alternative hypothesis. Davies (1977, Biometrika 64, 247-254) proposed the maximum of the score statistics over the whole range of the nuisance parameter as a test statistic for this type of hypothesis testing. Freidlin, Podgor, and Gastwirth (1999, Biometrics 55, 883-886) studied two other simpler maximum test statistics: the maximum of the score statistics at two extreme points of the nuisance parameter, and the maximum of the score statistics at three points of the nuisance parameter including the two extreme points. In this article, we compare the powers of these three maximum-type statistics in the context of three genetic problems.
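A common genetic instance of this maximum-type construction takes the maximum of trend statistics over candidate genetic models, with the nuisance parameter indexing the genotype scores. The sketch below maximizes Cochran-Armitage trend statistics under recessive, additive, and dominant scores and obtains a p-value by simulating case genotype counts with the table margins fixed; it illustrates the idea rather than the specific score statistics studied in the article, and all counts are invented.

```python
import numpy as np

def trend_z(cases, controls, scores):
    """Cochran-Armitage trend statistic in its standard large-sample form."""
    r, s, t = (np.asarray(v, dtype=float) for v in (cases, controls, scores))
    n = r + s
    R, S, N = r.sum(), s.sum(), (r + s).sum()
    T = np.sum(t * (r * S - s * R))
    V = (R * S / N) * (N * np.sum(t * t * n) - np.sum(t * n) ** 2)
    return T / np.sqrt(V)

def max_test(cases, controls, n_sim=10000, seed=0):
    """Maximum of trend statistics under recessive, additive, and dominant scores,
    with a p-value from simulating case counts conditional on the margins."""
    score_sets = [(0, 0, 1), (0, 1, 2), (0, 1, 1)]
    rng = np.random.default_rng(seed)
    n = np.asarray(cases) + np.asarray(controls)
    n_cases = int(np.sum(cases))
    observed = max(abs(trend_z(cases, controls, t)) for t in score_sets)
    exceed = 0
    for _ in range(n_sim):
        r_sim = rng.multivariate_hypergeometric(n, n_cases)   # null case counts
        exceed += max(abs(trend_z(r_sim, n - r_sim, t)) for t in score_sets) >= observed
    return observed, (exceed + 1) / (n_sim + 1)

# Genotype counts (aa, Aa, AA) in cases and controls -- invented numbers.
print(max_test(cases=[60, 90, 50], controls=[90, 85, 25]))
```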

18.
Tests for monotone mean residual life, using randomly censored data
At any age the mean residual life function gives the expected remaining life at that age. Reliabilists and biometricians have found it useful to categorize failure distributions by the monotonicity properties of the mean residual life function. Hollander and Proschan (1975, Biometrika 62, 585-593) have derived tests of the null hypothesis that the underlying failure distribution is exponential, versus the alternative that it has a monotone mean residual life function. These tests are based on a complete sample. Often, however, data are incomplete because of withdrawals from the study and because of survivors at the time the data are analyzed. In this paper we generalize the Hollander-Proschan tests to accommodate randomly censored data. The efficiency loss due to the presence of censoring is also investigated.

19.
We propose a new approach to fitting marginal models to clustered data when cluster size is informative. This approach uses a generalized estimating equation (GEE) that is weighted inversely with the cluster size. We show that our approach is asymptotically equivalent to within-cluster resampling (WCR; Hoffman, Sen, and Weinberg, 2001, Biometrika 88, 1121-1134), a computationally intensive approach in which replicate data sets containing a randomly selected observation from each cluster are analyzed, and the resulting estimates averaged. Using simulated data and an example involving dental health, we show the superior performance of our approach compared to unweighted GEE, the equivalence of our approach with WCR for large sample sizes, and the superior performance of our approach compared with WCR when sample sizes are small.
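For an identity-link marginal mean model with an independence working structure, weighting the estimating equation inversely by cluster size amounts to giving each cluster, rather than each observation, equal weight. The sketch below shows this special case only (not a full GEE implementation); the data-generating model, in which cluster size depends on a cluster effect, is an invented example of informative cluster size.

```python
import numpy as np

def cluster_weighted_fit(X_by_cluster, y_by_cluster):
    """Marginal linear-model fit from an estimating equation weighted by 1/n_i,
    so every cluster contributes equally regardless of its size."""
    XtWX, XtWy = 0.0, 0.0
    for X, y in zip(X_by_cluster, y_by_cluster):
        w = 1.0 / len(y)                        # inverse cluster-size weight
        XtWX = XtWX + w * X.T @ X
        XtWy = XtWy + w * X.T @ y
    return np.linalg.solve(XtWX, XtWy)

# Toy data with informative cluster size: clusters with a positive random effect are larger.
rng = np.random.default_rng(3)
X_list, y_list = [], []
for _ in range(300):
    b = rng.normal()                            # cluster-level effect
    n_i = 2 + rng.poisson(4 * (b > 0))          # size depends on the cluster effect
    x = rng.normal(size=(n_i, 1))
    X = np.hstack([np.ones((n_i, 1)), x])
    y = 1.0 + 0.5 * x[:, 0] + b + rng.normal(size=n_i)
    X_list.append(X)
    y_list.append(y)

print("1/n_i-weighted:", cluster_weighted_fit(X_list, y_list))
X_all, y_all = np.vstack(X_list), np.concatenate(y_list)
print("unweighted:    ", np.linalg.lstsq(X_all, y_all, rcond=None)[0])
```

In this toy example the unweighted fit is pulled toward the larger clusters, while the 1/n_i-weighted fit stays near the cluster-level intercept of 1, which is the kind of discrepancy the weighting is meant to remove.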

20.
M S Pepe, T R Fleming. Biometrics 1989, 45(2): 497-507
A class of statistics based on the integrated weighted difference in Kaplan-Meier estimators is introduced for the two-sample censored data problem. With positive weight functions these statistics are intuitive for and sensitive against the alternative of stochastic ordering. The standard weighted log-rank statistics are not always sensitive against this alternative, particularly if the hazard functions cross. Qualitative comparisons are made between the weighted log-rank statistics and these weighted Kaplan-Meier (WKM) statistics. A statement of null asymptotic distribution theory is given and the choice of weight function is discussed in some detail. Results from small-sample simulation studies indicate that these statistics compare favorably with the log-rank procedure even under the proportional hazards alternative, and may perform better than it under the crossing hazards alternative.
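A bare-bones sketch of a weighted Kaplan-Meier-type statistic with unit weight: estimate the two Kaplan-Meier curves, integrate their difference up to the shorter follow-up, and (as a stand-in for the asymptotic theory given in the paper) calibrate by permuting group labels, which is only appropriate when the censoring distributions are also equal. Function names, the weight function, and the simulated data are assumptions.

```python
import numpy as np

def km_curve(time, event, grid):
    """Kaplan-Meier survival estimate evaluated at each point of `grid`."""
    time, event = np.asarray(time, dtype=float), np.asarray(event, dtype=int)
    order = np.lexsort((-event, time))          # at tied times, process events first
    t, d = time[order], event[order]
    surv, s, n = np.ones_like(grid, dtype=float), 1.0, len(t)
    for i in range(n):
        if d[i]:
            s *= 1.0 - 1.0 / (n - i)            # n - i subjects still at risk
            surv[grid >= t[i]] = s
    return surv

def wkm_statistic(t1, e1, t2, e2):
    """Integrated (unit-weight) difference of the two KM curves up to the shorter follow-up."""
    tau = min(np.max(t1), np.max(t2))
    grid = np.unique(np.concatenate([[0.0], t1, t2]))
    grid = grid[grid <= tau]
    s1, s2 = km_curve(t1, e1, grid), km_curve(t2, e2, grid)
    dt = np.diff(np.append(grid, tau))          # width of each step interval
    n1, n2 = len(t1), len(t2)
    return np.sqrt(n1 * n2 / (n1 + n2)) * float(np.sum((s1 - s2) * dt))

def permutation_pvalue(t1, e1, t2, e2, n_perm=500, seed=0):
    """Label-permutation calibration; presumes equal censoring in both groups."""
    rng = np.random.default_rng(seed)
    observed = wkm_statistic(t1, e1, t2, e2)
    t, e, n1 = np.concatenate([t1, t2]), np.concatenate([e1, e2]), len(t1)
    null = np.empty(n_perm)
    for b in range(n_perm):
        idx = rng.permutation(len(t))
        null[b] = wkm_statistic(t[idx[:n1]], e[idx[:n1]], t[idx[n1:]], e[idx[n1:]])
    return float(np.mean(np.abs(null) >= abs(observed)))

# Simulated two-sample censored data: group 2 has longer survival, common censoring.
rng = np.random.default_rng(4)
x1, x2 = rng.exponential(1.0, 80), rng.exponential(1.5, 80)
c1, c2 = rng.exponential(2.0, 80), rng.exponential(2.0, 80)
t1, e1 = np.minimum(x1, c1), (x1 <= c1).astype(int)
t2, e2 = np.minimum(x2, c2), (x2 <= c2).astype(int)
print(wkm_statistic(t1, e1, t2, e2), permutation_pvalue(t1, e1, t2, e2))
```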
