Similar Literature
 A total of 20 similar documents were retrieved.
1.
Typical animal carcinogenicity studies involve the comparison of several dose groups to a negative control. The uncorrected asymptotic Cochran-Armitage trend test with equally spaced dose scores is the most frequently used test in such set-ups. However, this test is based on a weighted linear regression on proportions, and it is well known that the Cochran-Armitage test lacks power for dose-response shapes other than the assumed linear one. Therefore, dichotomous multiple contrast tests are introduced. These take the maximum over several single contrasts, each of which is chosen to cover a specific dose-response shape. An extensive power study has been conducted to compare multiple contrast tests with the approaches used so far; the crucial results are presented in this paper. Moreover, exact tests and continuity-corrected versions are introduced and compared to the traditional uncorrected approaches with respect to size and power. A trend test for any shape of the dose-response relationship, applicable to either crude tumour rates or mortality-adjusted rates based on the simple Poly-3 transformation, is proposed for the evaluation of carcinogenicity studies.
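The uncorrected asymptotic Cochran-Armitage statistic referred to above can be written down in a few lines. The following Python sketch is only an illustration with hypothetical dose-group counts and equally spaced scores, not the authors' implementation, and it ignores the Poly-3 mortality adjustment:

```python
import numpy as np
from scipy.stats import norm

def cochran_armitage_trend(events, totals, scores):
    """Uncorrected asymptotic Cochran-Armitage trend test (one-sided, increasing trend)."""
    events, totals, scores = (np.asarray(a, dtype=float) for a in (events, totals, scores))
    N, R = totals.sum(), events.sum()
    p_bar = R / N                                    # pooled tumour rate
    u = np.sum(scores * (events - totals * p_bar))   # linear trend score statistic
    var_u = p_bar * (1 - p_bar) * (np.sum(scores**2 * totals)
                                   - np.sum(scores * totals)**2 / N)
    z = u / np.sqrt(var_u)
    return z, norm.sf(z)

# hypothetical counts of tumour-bearing animals in a control and three dose groups
z, p = cochran_armitage_trend(events=[2, 4, 7, 12], totals=[50, 50, 50, 50],
                              scores=[0, 1, 2, 3])
print(f"Z = {z:.3f}, one-sided p = {p:.4f}")
```

A multiple contrast test in the spirit of the paper would replace the single score vector by several contrast vectors and take the maximum of the resulting standardized statistics.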

2.
The goal of this paper is to illustrate the value and importance of the "weight of evidence" approach (use of multiple lines of evidence from field and laboratory data) to assess the occurrence or absence of ecological impairment in the aquatic environment. Single species toxicity tests, microcosms, and community metric approaches such as the Index of Biotic Integrity (IBI) are discussed. Single species toxicity tests or other single lines of evidence are valuable first-tier assessments that should be used as screening tools to identify potentially toxic conditions in an effluent or the ambient environment, but these tests should not be used as the final quantitative indicator of absolute ecological impairment that may result in regulatory action. Both false positive and false negative predictions of ecological effects can occur due to the inherent variability of measurement endpoints such as survival, growth and reproduction used in single species toxicity tests. A comparison of single species ambient toxicity test results with field data showed that false positives are common and likely related to experimental variability or to toxicity to selected test species without measurable effects on the ecosystem. Results from microcosm studies have consistently demonstrated that chemical exposures exceeding the acute or chronic toxicity concentrations for highly sensitive species may cause little or no ecologically significant damage to an aquatic ecosystem. Sources of uncertainty identified when extrapolating from single species tests to ecological effects were: variability in individual response to pesticide exposure; variation among species in sensitivity to pesticides; effects of time-varying and repeated exposures; and extrapolation from individual to population-level endpoints. Data sets from the Chesapeake Bay area (Maryland) were used to show the importance of using "multiple lines of evidence" when assessing biological impact, owing to conflicting results reported from ambient water column and sediment toxicity tests and biological indices (benthic and fish IBIs). Results from water column and sediment toxicity tests with multiple species in tidal areas showed that no single species was consistently the most sensitive. There was also a high degree of disagreement between benthic and fish IBI data for the various stations. The lack of agreement for these biological community indices is not surprising given the differences in exposure among habitats occupied by these different taxonomic assemblages. Data from a fish IBI, benthic IBI and Maryland Physical Habitat Index (MPHI) were compared for approximately 1100 first- through third-order Maryland non-tidal streams to show the complexity of data interpretation and the incidence of conflicting lines of evidence. A key finding from this non-tidal data set was the need for using more than one biological indicator to increase the discriminatory power of identifying impaired streams and to reduce the possibility of "false negative results". Based on historical data, temporal variability associated with an IBI in undisturbed areas was reported to be lower than the variability associated with single species toxicity tests.

3.
Since the seminal work of Prentice and Pyke, the prospective logistic likelihood has become the standard method of analysis for retrospectively collected case-control data, in particular for testing the association between a single genetic marker and a disease outcome in genetic case-control studies. In the study of multiple genetic markers with relatively small effects, especially those with rare variants, various aggregated approaches based on the same prospective likelihood have been developed to integrate subtle association evidence among all the markers considered. Many of the commonly used tests are derived from the prospective likelihood under a common-random-effect assumption, which assumes a common random effect for all subjects. We develop the locally most powerful aggregation test based on the retrospective likelihood under an independent-random-effect assumption, which allows the genetic effect to vary among subjects. In contrast to the fact that disease prevalence information cannot be used to improve efficiency for the estimation of odds ratio parameters in logistic regression models, we show that it can be utilized to enhance the testing power in genetic association studies. Extensive simulations demonstrate the advantages of the proposed method over the existing ones. A real genome-wide association study is analyzed for illustration.

4.
Ryman N, Jorde PE. Molecular Ecology, 2001, 10(10): 2361-2373
A variety of statistical procedures are commonly employed when testing for genetic differentiation. In a typical situation two or more samples of individuals have been genotyped at several gene loci by molecular or biochemical means, and in a first step a statistical test for allele frequency homogeneity is performed at each locus separately, using, e.g., the contingency chi-square test, Fisher's exact test, or some modification thereof. In a second step the results from the separate tests are combined for evaluation of the joint null hypothesis that there is no allele frequency difference at any locus, corresponding to the important case where the samples would be regarded as drawn from the same statistical and, hence, biological population. Presently, there are two conceptually different strategies in use for testing the joint null hypothesis of no difference at any locus. One approach is based on the summation of chi-square statistics over loci. Another method is employed by investigators applying the Bonferroni technique (adjusting the P-value required for rejection to account for the elevated alpha errors when performing multiple tests simultaneously) to test if the heterogeneity observed at any particular locus can be regarded as significant when considered separately. Under this approach the joint null hypothesis is rejected if one or more of the component single-locus tests is considered significant under the Bonferroni criterion. We used computer simulations to evaluate the statistical power and realized alpha errors of these strategies when evaluating the joint hypothesis after scoring multiple loci. We find that the 'extended' Bonferroni approach generally is associated with low statistical power and should not be applied in the current setting. Further, and contrary to what might be expected, we find that 'exact' tests typically behave poorly when combined in existing procedures for joint hypothesis testing. Thus, while exact tests are generally to be preferred over approximate ones when testing each particular locus, approximate tests such as the traditional chi-square seem preferable when addressing the joint hypothesis.
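As a toy illustration of the two combination strategies compared in this study, the sketch below sums contingency chi-square statistics over loci and, alternatively, applies the 'extended' Bonferroni criterion to the single-locus P-values; the allele-count tables are hypothetical and this is not the authors' simulation code:

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

# hypothetical allele-count tables (2 samples x 2 alleles) for three loci
loci = [np.array([[40, 60], [52, 48]]),
        np.array([[30, 70], [35, 65]]),
        np.array([[55, 45], [50, 50]])]

stats, dfs, pvals = [], [], []
for table in loci:
    stat, p, df, _ = chi2_contingency(table, correction=False)
    stats.append(stat); dfs.append(df); pvals.append(p)

# strategy 1: sum chi-square statistics over loci; degrees of freedom add up
sum_stat = sum(stats)
p_sum = chi2.sf(sum_stat, df=sum(dfs))

# strategy 2: 'extended' Bonferroni -- reject the joint null hypothesis if any
# single-locus P-value falls below alpha / number of loci
alpha = 0.05
reject_bonferroni = min(pvals) < alpha / len(loci)

print(f"summed chi-square: {sum_stat:.2f}, joint p = {p_sum:.4f}")
print(f"Bonferroni rejects joint null: {reject_bonferroni}")
```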

5.
Tests for a monotonic trend between an ordered categorical exposure and disease status are routinely carried out from case-control data using the Mantel-extension trend test or the asymptotically equivalent Cochran-Armitage test. In this study, we considered two alternative tests based on isotonic regression, namely an order-restricted likelihood ratio test and an isotonic modification of the Mantel-extension test extending the recent proposal by Mancuso, Ahn and Chen (2001) to case-control data. Furthermore, we considered three tests based on contrasts, namely a single contrast (SC) test based on Schaafsma's coefficients, the Dosemeci and Benichou (DB) test and a multiple contrast (MC) test based on the Helmert, reverse-Helmert and linear contrasts, and we derived their case-control versions. Using simulations, we compared the statistical properties of these five alternative tests to those of the Mantel-extension test under various patterns including no relationship, as well as monotonic and non-monotonic relationships between exposure and disease status. In the case of no relationship, all tests had close to nominal type I error except in situations combining a very unbalanced exposure distribution and a small sample size, where the asymptotic versions of the three tests based on contrasts were highly anticonservative. The use of bootstrap instead of asymptotic versions corrected this anticonservatism. For monotonic patterns, all tests had similar power. For non-monotonic patterns, the DB-test showed the most favourable results as it was the least powerful test. The two tests based on isotonic regression were the most powerful tests, and the Mantel-extension test, the SC-test and the MC-test had intermediate power. The six tests were applied to data from a case-control study investigating the relationship between alcohol consumption and risk of laryngeal cancer in Turkey. In situations with no evidence of a monotonic relationship between exposure and disease status, the three tests based on contrasts did not conclude in favour of a significant trend whereas all the other tests did. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

6.
O'Brien (1984, Biometrics 40, 1079-1087) introduced a simple nonparametric test procedure for testing whether multiple outcomes in one treatment group have consistently larger values than outcomes in the other treatment group. We first explore the theoretical properties of O'Brien's test. We then extend it to the general nonparametric Behrens-Fisher hypothesis problem when no assumption is made regarding the shape of the distributions. We provide conditions when O'Brien's test controls its error probability asymptotically and when it fails. We also provide adjusted tests when the conditions do not hold. Throughout this article, we do not assume that all outcomes are continuous. Simulations are performed to compare the adjusted tests to O'Brien's test. The difference is also illustrated using data from a Parkinson's disease clinical trial.
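O'Brien's rank-sum procedure itself is straightforward to sketch: rank each outcome over the pooled sample, sum the ranks across outcomes for each subject, and compare the per-subject rank sums between the two groups. The Python illustration below uses simulated data and shows only this basic test, not the adjusted tests derived in the article:

```python
import numpy as np
from scipy.stats import rankdata, ttest_ind

rng = np.random.default_rng(0)
n0, n1, n_outcomes = 30, 30, 4

# hypothetical data: the second group is shifted upwards on every outcome
y0 = rng.normal(0.0, 1.0, size=(n0, n_outcomes))
y1 = rng.normal(0.4, 1.0, size=(n1, n_outcomes))
y = np.vstack([y0, y1])
group = np.r_[np.zeros(n0, dtype=int), np.ones(n1, dtype=int)]

# rank each outcome over the pooled sample, then sum the ranks per subject
ranks = np.apply_along_axis(rankdata, 0, y)
rank_sums = ranks.sum(axis=1)

# O'Brien's rank-sum test: two-sample comparison of the per-subject rank sums
stat, p = ttest_ind(rank_sums[group == 1], rank_sums[group == 0])
print(f"t = {stat:.3f}, p = {p:.4f}")
```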

7.
The Jonckheere test is a widely used test for trend in the nonparametric location model. We present an analogue of Jonckheere's test which can be performed for both normally and binomially distributed endpoints. This test is a contrast test; therefore, we can also construct a reverse test. It is shown that in several situations the proposed tests are superior to the Helmert and the reverse-Helmert contrast tests in terms of size and power, especially for finite dichotomous data. The tests are applied to data from two preclinical studies.
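For orientation, the classical Jonckheere statistic on which the proposed analogue builds is the sum of pairwise Mann-Whitney counts over ordered group pairs. The sketch below computes it with a permutation P-value for made-up dose-group data; it is not the contrast-test formulation proposed in the paper:

```python
import numpy as np

def jonckheere_statistic(groups):
    """Sum of Mann-Whitney counts over all ordered pairs of groups (ties count 1/2)."""
    jt = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            xi, xj = np.asarray(groups[i]), np.asarray(groups[j])
            diff = xj[None, :] - xi[:, None]         # compare every pair across the two groups
            jt += np.sum(diff > 0) + 0.5 * np.sum(diff == 0)
    return jt

def jonckheere_permutation_test(groups, n_perm=5000, seed=1):
    rng = np.random.default_rng(seed)
    sizes = [len(g) for g in groups]
    pooled = np.concatenate(groups)
    observed = jonckheere_statistic(groups)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        split = np.split(perm, np.cumsum(sizes)[:-1])
        count += jonckheere_statistic(split) >= observed
    return observed, (count + 1) / (n_perm + 1)      # one-sided: increasing trend

# hypothetical endpoints from three increasing dose groups
groups = [[1.1, 0.9, 1.3, 1.0], [1.2, 1.5, 1.1, 1.4], [1.6, 1.8, 1.4, 2.0]]
jt, p = jonckheere_permutation_test(groups)
print(f"JT = {jt:.1f}, permutation p = {p:.4f}")
```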

8.
Han F, Pan W. Biometrics, 2012, 68(1): 307-315
Many statistical tests have been proposed for case-control data to detect disease association with multiple single nucleotide polymorphisms (SNPs) in linkage disequilibrium. The main reason for the existence of so many tests is that each test aims to detect one or two aspects of many possible distributional differences between cases and controls, largely due to the lack of a general and yet simple model for discrete genotype data. Here we propose a latent variable model to represent SNP data: the observed SNP data are assumed to be obtained by discretizing a latent multivariate Gaussian variate. Because the latent variate is multivariate Gaussian, its distribution is completely characterized by its mean vector and covariance matrix, in contrast to much more complex forms of a general distribution for discrete multivariate SNP data. We propose a composite likelihood approach for parameter estimation. A direct application of this latent variable model is to association testing with multiple SNPs in a candidate gene or region. In contrast to many existing tests that aim to detect only one or two aspects of many possible distributional differences of discrete SNP data, we can exclusively focus on testing the mean and covariance parameters of the latent Gaussian distributions for cases and controls. Our simulation results demonstrate potential power gains of the proposed approach over some existing methods.
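The latent-variable idea can be made concrete with a small data-generating sketch: a latent multivariate Gaussian with an assumed correlation matrix (playing the role of linkage disequilibrium) is cut at thresholds chosen to match assumed genotype frequencies. The allele frequencies, correlations and Hardy-Weinberg cut-points below are illustrative assumptions, and the sketch does not implement the composite-likelihood estimation or the association test of the paper:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n_subjects, n_snps = 500, 3
maf = np.array([0.2, 0.35, 0.5])                     # assumed minor-allele frequencies

# latent correlation matrix plays the role of linkage disequilibrium
R = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])

# cut-points chosen so genotype frequencies follow Hardy-Weinberg proportions
p0 = (1 - maf) ** 2                                  # P(genotype = 0)
p1 = 2 * maf * (1 - maf)                             # P(genotype = 1)
cut1, cut2 = norm.ppf(p0), norm.ppf(p0 + p1)

latent = rng.multivariate_normal(np.zeros(n_snps), R, size=n_subjects)
genotypes = (latent > cut1).astype(int) + (latent > cut2).astype(int)

print("genotype frequencies per SNP:")
print(np.stack([(genotypes == g).mean(axis=0) for g in (0, 1, 2)]))
```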

9.
The identification of population bottlenecks is critical in conservation because populations that have experienced significant reductions in abundance are subject to a variety of genetic and demographic processes that can hasten extinction. Genetic bottleneck tests constitute an appealing and popular approach for determining if a population decline has occurred because they only require sampling at a single point in time, yet reflect demographic history over multiple generations. However, a review of the published literature indicates that, as typically applied, microsatellite-based bottleneck tests often do not detect bottlenecks in vertebrate populations known to have experienced declines. This observation was supported by simulations that revealed that bottleneck tests can have limited statistical power to detect bottlenecks largely as a result of limited sample sizes typically used in published studies. Moreover, commonly assumed values for mutation model parameters do not appear to encompass variation in microsatellite evolution observed in vertebrates and, on average, the proportion of multi-step mutations is underestimated by a factor of approximately two. As a result, bottleneck tests can have a higher probability of 'detecting' bottlenecks in stable populations than expected based on the nominal significance level. We provide recommendations that could add rigor to inferences drawn from future bottleneck tests and highlight new directions for the characterization of demographic history.

10.
Case-parent trio studies considering genotype data from children affected by a disease and their parents are frequently used to detect single nucleotide polymorphisms (SNPs) associated with disease. The most popular statistical tests for this study design are transmission/disequilibrium tests (TDTs). Several types of these tests have been developed, for example, procedures based on alleles or genotypes. Therefore, it is of great interest to examine which of these tests have the highest statistical power to detect SNPs associated with disease. Comparisons of the allelic and the genotypic TDT for individual SNPs have so far been conducted based on simulation studies, since the test statistic of the genotypic TDT was determined numerically. Recently, however, it has been shown that this test statistic can be presented in closed form. In this article, we employ this analytic solution to derive equations for calculating the statistical power and the required sample size for different types of the genotypic TDT. The power of this test is then compared with that of the corresponding score test assuming the same mode of inheritance, as well as with the allelic TDT based on a multiplicative mode of inheritance, which is equivalent to the score test assuming an additive mode of inheritance. This is thus the first time the power of these tests is compared based on equations, yielding instant results and omitting the need for time-consuming simulation studies. This comparison reveals that these tests have almost the same power, with the score test being slightly more powerful.
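For orientation, the allelic TDT for a single SNP is a McNemar-type statistic on transmissions from heterozygous parents, and power calculations of the kind described above rest on a non-central chi-square distribution. The transmission counts and noncentrality in the sketch below are hypothetical; the closed-form genotypic-TDT power formulas derived in the article are not reproduced here:

```python
from scipy.stats import chi2, ncx2

# allelic TDT: b = risk alleles transmitted by heterozygous parents,
#              c = risk alleles not transmitted
b, c = 180, 140                                   # hypothetical transmission counts
tdt_stat = (b - c) ** 2 / (b + c)                 # ~ chi-square with 1 df under H0
p_value = chi2.sf(tdt_stat, df=1)

# power at significance level alpha for an assumed noncentrality parameter;
# the noncentrality grows with the number of informative trios
alpha, ncp = 5e-8, 35.0                           # genome-wide alpha, assumed noncentrality
critical = chi2.ppf(1 - alpha, df=1)
power = ncx2.sf(critical, df=1, nc=ncp)

print(f"TDT statistic = {tdt_stat:.2f}, p = {p_value:.2e}, power = {power:.3f}")
```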

11.
As the nonparametric generalization of the one-way analysis of variance model, the Kruskal-Wallis test applies when the goal is to test the difference between multiple samples and the underlying population distributions are nonnormal or unknown. Although the Kruskal-Wallis test has been widely used for data analysis, power and sample size methods for this test have been investigated to a much lesser extent. This article proposes new power and sample size calculation methods for the Kruskal-Wallis test based on the pilot study in either a completely nonparametric model or a semiparametric location model. No assumption is made on the shape of the underlying population distributions. Simulation results show that, in terms of sample size calculation for the Kruskal-Wallis test, the proposed methods are more reliable and preferable to some more traditional methods. A mouse peritoneal cavity study is used to demonstrate the application of the methods.
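A brute-force complement to such analytic methods is to estimate the power of the Kruskal-Wallis test by Monte Carlo simulation under an assumed location model. The sketch below uses arbitrary normal location shifts rather than the pilot-study-based calculations or the mouse peritoneal cavity data of the article:

```python
import numpy as np
from scipy.stats import kruskal

def kw_power(sample_size, shifts, n_sim=2000, alpha=0.05, seed=7):
    """Monte Carlo power of the Kruskal-Wallis test for k groups of equal size."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        groups = [rng.normal(loc=shift, scale=1.0, size=sample_size) for shift in shifts]
        _, p = kruskal(*groups)
        rejections += p < alpha
    return rejections / n_sim

# hypothetical semiparametric location model with three groups
for n in (10, 20, 30, 40):
    print(f"n per group = {n:2d}, estimated power = {kw_power(n, shifts=[0.0, 0.5, 1.0]):.3f}")
```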

12.
The Cochran-Armitage (CA) linear trend test for proportions is often used for genotype-based analysis of candidate gene association. Depending on the underlying genetic mode of inheritance, the use of model-specific scores maximises the power. Commonly, the underlying genetic model, i.e. additive, dominant or recessive mode of inheritance, is a priori unknown. Association studies are commonly analysed using permutation tests, where both inference and identification of the underlying mode of inheritance are important. Especially interesting are tests for case-control studies, defined by a maximum over a series of standardised CA tests, because such a procedure has power under all three genetic models. We reformulate the test problem and propose a conditional maximum test of score-specific linear-by-linear association tests. For maximum-type, sum and quadratic test statistics the asymptotic expectation and covariance can be derived in closed form and the limiting distribution is known. Both the limiting distribution and approximations of the exact conditional distribution can easily be computed using standard software packages. In addition to these technical advances, we extend the area of application to stratified designs, studies involving more than two groups and the simultaneous analysis of multiple loci by means of multiplicity-adjusted p-values for the underlying multiple CA trend tests. The new test is applied to reanalyse a study investigating genetic components of different subtypes of psoriasis. A new and flexible inference tool for association studies is available both theoretically and practically, since already available software packages can easily be used to implement the suggested test procedures.
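The maximum-type idea can be illustrated by standardizing the CA trend statistic under additive, dominant and recessive score vectors and taking the maximum, with the null distribution approximated here by simple permutation rather than the conditional asymptotic results derived in the paper; the genotype counts are made up:

```python
import numpy as np

SCORES = {"additive":  np.array([0, 1, 2]),
          "dominant":  np.array([0, 1, 1]),
          "recessive": np.array([0, 0, 1])}

def ca_z(cases, controls, scores):
    """Standardized Cochran-Armitage trend statistic for a 2 x 3 genotype table."""
    r = np.asarray(cases, dtype=float)
    n = r + np.asarray(controls, dtype=float)
    N, p = n.sum(), r.sum() / n.sum()
    u = np.sum(scores * (r - n * p))
    var = p * (1 - p) * (np.sum(scores**2 * n) - np.sum(scores * n)**2 / N)
    return u / np.sqrt(var)

def max3_test(cases, controls, n_perm=5000, seed=3):
    """Maximum of |CA statistics| over the three score sets, with a permutation p-value."""
    rng = np.random.default_rng(seed)
    cases, controls = np.asarray(cases), np.asarray(controls)
    genotypes = np.repeat([0, 1, 2], cases + controls)           # one genotype per subject
    status = np.concatenate([np.r_[np.ones(c, int), np.zeros(k, int)]
                             for c, k in zip(cases, controls)])  # matching case/control labels
    totals = cases + controls
    observed = max(abs(ca_z(cases, controls, s)) for s in SCORES.values())
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(status)
        perm_cases = np.array([perm[genotypes == g].sum() for g in (0, 1, 2)])
        perm_max = max(abs(ca_z(perm_cases, totals - perm_cases, s)) for s in SCORES.values())
        count += perm_max >= observed
    return observed, (count + 1) / (n_perm + 1)

# hypothetical genotype counts (aa, aA, AA) for cases and controls
stat, p = max3_test(cases=[60, 90, 50], controls=[80, 95, 25])
print(f"MAX3 = {stat:.3f}, permutation p = {p:.4f}")
```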

13.
Investigations of sample size for planning case-control studies have usually been limited to detecting a single factor. In this paper, we investigate sample size for multiple risk factors in strata-matched case-control studies. We construct an omnibus statistic for testing M different risk factors based on the jointly sufficient statistics of parameters associated with the risk factors. The statistic is non-iterative, and it reduces to the Cochran statistic when M = 1. The asymptotic power function of the test is a non-central chi-square with M degrees of freedom and the sample size required for a specific power can be obtained by the inverse relationship. We find that the equal sample allocation is optimum. A Monte Carlo experiment demonstrates that an approximate formula for calculating sample size is satisfactory in typical epidemiologic studies. An approximate sample size obtained using Bonferroni's method for multiple comparisons is much larger than that obtained using the omnibus test. Approximate sample size formulas investigated in this paper using the omnibus test, as well as the individual tests, can be useful in designing case-control studies for detecting multiple risk factors.
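The inverse relationship between power and sample size mentioned above is easy to exploit numerically once a per-subject noncentrality is assumed: the power of an M-degree-of-freedom chi-square test is a non-central chi-square tail probability, and the smallest n reaching a target power can be found by a simple search. The per-subject noncentrality in the sketch below is an arbitrary assumption, and the code does not reproduce the omnibus statistic itself:

```python
from scipy.stats import chi2, ncx2

def power_chi2(n, ncp_per_subject, df, alpha=0.05):
    """Power of a chi-square test whose noncentrality grows linearly with sample size."""
    critical = chi2.ppf(1 - alpha, df)
    return ncx2.sf(critical, df, n * ncp_per_subject)

def required_n(ncp_per_subject, df, target_power=0.8, alpha=0.05):
    n = 1
    while power_chi2(n, ncp_per_subject, df, alpha) < target_power:
        n += 1
    return n

# hypothetical per-subject noncentrality for M = 3 risk factors
M, ncp1 = 3, 0.02
n = required_n(ncp1, df=M)
print(f"required sample size ~ {n}, achieved power = {power_chi2(n, ncp1, M):.3f}")
```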

14.
In this study, we used the phenotype simulation package naturalgwas to test the performance of Zhao's Random Forest method in comparison to an uncorrected Random Forest test, latent factor mixed models (LFMM), genome-wide efficient mixed models (GEMMA), and confounder-adjusted linear regression (CATE). We created 400 sets of phenotypes, corresponding to five effect sizes and two, five, 15, or 30 causal loci, simulated from two empirical data sets containing SNPs from Striped Bass representing three and 13 populations. All association methods were evaluated for their ability to detect genotype-phenotype associations based on power, false discovery rates, and number of false positives. Genomic inflation was highest for the uncorrected Random Forest and LFMM tests and lowest for GEMMA and Zhao's Random Forest. All association tests had similar power to detect causal loci, and Zhao's Random Forest had the lowest false discovery rate in all scenarios. To measure the performance of association tests in small data sets with few loci surrounding a causal gene, we also reran the analyses after removing causal loci from each data set. All association tests were only able to find true positives, defined as loci located within 30 kbp of a causal locus, in 3%-18% of simulations. In contrast, at least one false positive was found in 17%-44% of simulations. Zhao's Random Forest again identified the fewest false positives of all association tests studied. The ability to test the power of association tests for individual empirical data sets can be an extremely useful first step when designing a GWAS study.

15.
16.
Marginal tests based on individual SNPs are routinely used in genetic association studies. Studies have shown that haplotype-based methods may provide more power in disease mapping than methods based on single markers when, for example, multiple disease-susceptibility variants occur within the same gene. A limitation of haplotype-based methods is that the number of parameters increases exponentially with the number of SNPs, inducing a commensurate increase in the degrees of freedom and weakening the power to detect associations. To address this limitation, we introduce a hierarchical linkage disequilibrium model for disease mapping, based on a reparametrization of the multinomial haplotype distribution, where every parameter corresponds to the cumulant of each possible subset of a set of loci. The hierarchy present in the parameters enables us to employ flexible testing strategies over a range of parameter sets: from standard single-SNP analyses through to tests of the full haplotype distribution, reducing degrees of freedom and increasing the power to detect associations. We show via extensive simulations that our approach maintains the type I error at nominal level and has increased power under many realistic scenarios, as compared to single-SNP and standard haplotype-based studies. To evaluate the performance of our proposed methodology on real data, we analyze genome-wide data from the Wellcome Trust Case-Control Consortium.

17.
Although linear rank statistics for the two-sample problem are distribution-free tests, their power depends on the distribution of the data. In the planning phase of an experiment, researchers are often uncertain about the shape of this distribution, and so the choice of test statistic for the analysis and the determination of the required sample size are based on vague information. Adaptive designs with interim analysis can potentially overcome both problems, and in particular, adaptive tests based on a selector statistic are a solution to the first. We investigate whether adaptive tests can be usefully implemented in flexible two-stage designs to gain power. In a simulation study, we compare several methods for choosing a test statistic for the second stage of an adaptive design based on interim data with the procedure that applies adaptive tests in both stages. We find that the latter is a sensible approach that leads to the best results in most situations considered here. The different methods are illustrated using a clinical trial example.

18.
In this paper, we develop a new method for testing a portion of a tree (called a clade) based on multiple tests of many 4-taxon trees. This is particularly useful when a phylogenetic tree constructed by other methods has a clade that is difficult to explain from a biological point of view. The statement about the test of the clade can be made through the multiple P values from these individual tests. Four different tree test methods are evaluated through simulation, controlling either the familywise error rate or the false discovery rate (FDR). The simulations show that the combination of the approximately unbiased (AU) test and the FDR-controlling procedure provides strong power along with a reasonable type I error rate and a comparatively light computational burden.
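The FDR-controlling step can be sketched independently of the tree tests themselves: given the P values from the individual 4-taxon tests, a Benjamini-Hochberg procedure decides which of them are significant. The P values below are made up, and the AU-test machinery is not reproduced:

```python
import numpy as np

def benjamini_hochberg(pvalues, q=0.05):
    """Return a boolean rejection mask controlling the false discovery rate at level q."""
    p = np.asarray(pvalues)
    m = len(p)
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.flatnonzero(below).max()        # largest i with p_(i) <= q * i / m
        reject[order[:k + 1]] = True
    return reject

# hypothetical P values from individual 4-taxon tree tests involving the clade
pvals = [0.001, 0.004, 0.020, 0.030, 0.250, 0.410, 0.600]
print(benjamini_hochberg(pvals))
```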

19.
The most commonly used method in evolutionary biology for combining information across multiple tests of the same null hypothesis is Fisher's combined probability test. This note shows that an alternative method called the weighted Z-test has more power and more precision than does Fisher's test. Furthermore, in contrast to some statements in the literature, the weighted Z-method is superior to the unweighted Z-transform approach. The results in this note show that, when combining P-values from multiple tests of the same hypothesis, the weighted Z-method should be preferred.
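Both combination methods discussed in this note are available in scipy.stats.combine_pvalues; the sketch below compares Fisher's combined probability test with the unweighted and weighted Z (Stouffer) methods, using illustrative P values and weights taken, as one common choice, to be the square roots of assumed per-study sample sizes:

```python
import numpy as np
from scipy.stats import combine_pvalues

# hypothetical P values from k independent tests of the same null hypothesis,
# with weights taken as the square roots of assumed per-study sample sizes
pvalues = np.array([0.08, 0.15, 0.30, 0.02])
sample_sizes = np.array([120, 60, 35, 200])
weights = np.sqrt(sample_sizes)

fisher_stat, fisher_p = combine_pvalues(pvalues, method="fisher")
z_stat, z_p = combine_pvalues(pvalues, method="stouffer")                      # unweighted Z
wz_stat, wz_p = combine_pvalues(pvalues, method="stouffer", weights=weights)   # weighted Z

print(f"Fisher:      p = {fisher_p:.4f}")
print(f"Z-transform: p = {z_p:.4f}")
print(f"weighted Z:  p = {wz_p:.4f}")
```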

20.
A multiple comparison procedure (MCP) is proposed for the comparison of all pairs of several independent samples. This MCP is essentially the closed procedure with union-intersection tests based on given single tests Q_ij for the minimal hypotheses H_ij. In cases where the α-levels of the nominal tests associated with the MCP can be exhausted, this MCP has a uniformly higher all-pairs power than any refined Bonferroni test using the same Q_ij. Two different general algorithms are described in Section 3. A probability inequality for ranges of i.i.d. random variables, useful for some of the algorithms, is proved in Section 4. Section 5 contains the application to independent normally distributed estimates and Section 6 the comparison of polynomial distributions by multivariate ranges. Further applications are possible. Tables of the 0.05-bounds for the tests of Sections 5 and 6 are included.
