Similar Articles
 20 similar articles retrieved.
1.
Fine mapping versus replication in whole-genome association studies
Association replication studies have a poor track record and, even when successful, often claim association with different markers, alleles, and phenotypes than those reported in the primary study. It is unknown whether these outcomes reflect genuine associations or false-positive results. A greater understanding of these observations is essential for genomewide association (GWA) studies, since they have the potential to identify multiple new associations that will require external validation. Theoretically, a repeat association with precisely the same variant in an independent sample is the gold standard for replication, but testing additional variants is commonplace in replication studies. Finding different associated SNPs within the same gene or region as that originally identified is often reported as confirmatory evidence. Here, we compare the probability of replicating a gene or region under two commonly used marker-selection strategies: an "exact" approach that involves only the originally significant markers and a "local" approach that involves both the originally significant markers and others in the same region. When a region of high intermarker linkage disequilibrium is tested to replicate an initial finding of only weak association with disease, the local approach is a good strategy. Otherwise, the most powerful and efficient strategy for replication involves testing only the initially identified variants. Association with a marker other than that originally identified can occur frequently, even in the presence of real effects in a low-powered replication study, and instances of such association increase as the number of included variants increases. Our results provide a basis for the design and interpretation of GWA replication studies and point to the importance of a clear distinction between fine mapping and replication after GWA.
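
To make the trade-off concrete, the following minimal Monte Carlo sketch (illustrative only, not code from the paper) compares an "exact" strategy that retests only the originally associated SNP with a "local" strategy that Bonferroni-tests the original SNP plus one neighbouring marker in linkage disequilibrium with it. The effect size, LD level, sample size and significance level are assumed values.

```python
# Hypothetical sketch: exact vs. local replication under a bivariate-normal
# approximation for the two markers' test statistics.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_rep = 2000        # replication sample size (illustrative)
beta  = 0.05        # standardized effect at the true SNP (illustrative)
ld_r  = 0.8         # LD (correlation) between true SNP and neighbour
alpha = 0.05
n_sim = 100_000

mean = np.array([np.sqrt(n_rep) * beta, np.sqrt(n_rep) * beta * ld_r])
cov  = np.array([[1.0, ld_r], [ld_r, 1.0]])
z = rng.multivariate_normal(mean, cov, size=n_sim)

exact_hit = z[:, 0] > norm.ppf(1 - alpha)               # original SNP only
local_hit = (z > norm.ppf(1 - alpha / 2)).any(axis=1)   # either SNP, Bonferroni
wrong_marker = (z[:, 1] > norm.ppf(1 - alpha / 2)) & ~(z[:, 0] > norm.ppf(1 - alpha / 2))

print(f"P(replicate, exact strategy): {exact_hit.mean():.3f}")
print(f"P(replicate, local strategy): {local_hit.mean():.3f}")
print(f"P(only the non-original marker is significant): {wrong_marker.mean():.3f}")
```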

2.
Switching between testing for superiority and non-inferiority has been an important statistical issue in the design and analysis of active controlled clinical trials. In practice, it is often conducted with a two-stage testing procedure. It has been assumed that no type I error rate adjustment is required when either switching to test for non-inferiority once the data fail to support the superiority claim, or switching to test for superiority once the null hypothesis of non-inferiority is rejected with a pre-specified non-inferiority margin in a generalized historical control approach. However, when a cross-trial comparison approach is used for non-inferiority testing, controlling the type I error rate sometimes becomes an issue with the conventional two-stage procedure. We propose adopting the single-stage simultaneous testing concept of Ng (2003) to test both the non-inferiority and superiority hypotheses. The proposed procedure is based on Fieller's confidence interval procedure as described by Hauschke et al. (1999).
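
As a rough illustration of the underlying idea (not the Fieller-based procedure of Ng or Hauschke et al.), the sketch below reads both non-inferiority and superiority off a single confidence interval for a difference of proportions; the margin and counts are invented for the example.

```python
# Simplified, hypothetical sketch: non-inferiority is claimed if the lower
# confidence bound exceeds -margin, superiority if it also exceeds 0.
import numpy as np
from scipy.stats import norm

def ci_difference(x_t, n_t, x_c, n_c, level=0.95):
    """Wald confidence interval for p_treatment - p_control."""
    p_t, p_c = x_t / n_t, x_c / n_c
    se = np.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    z = norm.ppf(1 - (1 - level) / 2)
    return p_t - p_c - z * se, p_t - p_c + z * se

lower, upper = ci_difference(x_t=81, n_t=100, x_c=75, n_c=100)
margin = 0.10                      # pre-specified non-inferiority margin (assumed)
non_inferior = lower > -margin     # one decision rule ...
superior     = lower > 0.0         # ... read from the same interval

print(f"95% CI for difference: ({lower:.3f}, {upper:.3f})")
print(f"non-inferiority claimed: {non_inferior}, superiority claimed: {superior}")
```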

3.
Elliott Sober (1987, 1993) and Orzack and Sober (forthcoming) argue that adaptationism is a very general hypothesis that can be tested by testing various particular hypotheses that invoke natural selection to explain the presence of traits in populations of organisms. In this paper, I challenge Sober's claim that adaptationism is a hypothesis and argue that it is best viewed as a heuristic (or research strategy). Biologists would still have good reasons for employing this research strategy even if it turns out that natural selection is not the most important cause of evolution.

4.
Aspects of the statistical modeling and assessment of hypotheses concerning quantitative traits in genetics research are discussed. It is suggested that a traditional approach to such modeling and hypothesis testing, whereby competing models are "nested" in an effort to simplify their probabilistic assessment, can be complemented by an alternative statistical paradigm: the separate-families-of-hypotheses approach to segregation analysis. Two bootstrap-based methods are described that allow testing of any two, possibly non-nested, parametric genetic hypotheses. These procedures use a strategy in which the unknown distribution of a likelihood ratio-based test statistic is simulated, thereby allowing estimation of critical values for the test statistic. Although the focus of this paper is on quantitative traits, the strategies described can be applied to qualitative traits as well. The conceptual advantages and computational ease of these strategies are discussed, and their significance levels and power are examined through Monte Carlo experimentation. It is concluded that the separate-families-of-hypotheses approach, when carried out with the methods described in this paper, not only possesses favorable statistical properties but also is well suited for genetic segregation analysis.
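
A minimal sketch of the simulation strategy described above, with the genetic models replaced by two simple non-nested stand-ins (normal versus log-normal); the data and the number of bootstrap replicates are illustrative.

```python
# Hypothetical sketch: parametric-bootstrap null distribution of a
# likelihood-ratio statistic for two non-nested models.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
trait = rng.lognormal(mean=1.0, sigma=0.4, size=200)   # illustrative "trait" data

def loglik_ratio(x):
    """log L(normal) - log L(log-normal), each at its MLE."""
    mu, sd = np.mean(x), np.std(x)
    ll_norm = stats.norm.logpdf(x, mu, sd).sum()
    s, loc, scale = stats.lognorm.fit(x, floc=0)
    ll_lnorm = stats.lognorm.logpdf(x, s, loc, scale).sum()
    return ll_norm - ll_lnorm

t_obs = loglik_ratio(trait)

# Simulate the statistic's null distribution under H0 (data are normal),
# using parameters fitted to the observed sample.
mu0, sd0 = np.mean(trait), np.std(trait)
t_null = np.array([loglik_ratio(rng.normal(mu0, sd0, size=trait.size))
                   for _ in range(500)])

p_value = np.mean(t_null <= t_obs)   # small values favour the log-normal model
print(f"observed statistic: {t_obs:.2f}, simulated p-value: {p_value:.3f}")
```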

5.
This paper proposes a research strategy for examining laypeople's thoughts and reflections on innovations in the science of race and genetics. While some sociologists have shown a reluctance to engage in such discussions, this paper argues that social scientists need to take such views seriously. To do this, the paper brings together an anthropological approach to the study of scientific literacy and recent scholarship in the field of Whiteness studies. The combining of these literatures raises a set of interesting and sometimes uncomfortable questions about the ways in which social scientists and research participants contribute to the reproduction of White power and dominance in Western societies.  相似文献   

6.
The confirmatory analysis of pre-specified multiple hypotheses has become common in pivotal clinical trials. In the recent past multiple test procedures have been developed that reflect the relative importance of different study objectives, such as fixed sequence, fallback, and gatekeeping procedures. In addition, graphical approaches have been proposed that facilitate the visualization and communication of Bonferroni-based closed test procedures for common multiple test problems, such as comparing several treatments with a control, assessing the benefit of a new drug for more than one endpoint, combined non-inferiority and superiority testing, or testing a treatment at different dose levels in an overall and a subpopulation. In this paper, we focus on extended graphical approaches by dissociating the underlying weighting strategy from the employed test procedure. This allows one to first derive suitable weighting strategies that reflect the given study objectives and subsequently apply appropriate test procedures, such as weighted Bonferroni tests, weighted parametric tests accounting for the correlation between the test statistics, or weighted Simes tests. We illustrate the extended graphical approaches with several examples. In addition, we describe briefly the gMCP package in R, which implements some of the methods described in this paper.  相似文献   
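
The sketch below implements the core sequentially rejective weighted-Bonferroni algorithm on which such graphical procedures rest (the kind of update implemented, e.g., in the gMCP package); the weights, transition matrix and p-values are invented for illustration and do not come from the paper.

```python
# Minimal sketch of a graphical (sequentially rejective) weighted Bonferroni test.
import numpy as np

def graphical_bonferroni(p, w, G, alpha=0.025):
    """p: raw p-values, w: initial weights (sum <= 1),
    G: transition matrix (rows sum <= 1, zero diagonal)."""
    p, w, G = np.asarray(p, float), np.asarray(w, float), np.asarray(G, float)
    active, rejected = list(range(len(p))), set()
    while True:
        cand = [i for i in active if p[i] <= w[i] * alpha]   # local-level tests
        if not cand:
            return rejected
        i = cand[0]
        rejected.add(i)
        active.remove(i)
        # propagate the rejected hypothesis' weight and rewire the graph
        w_new, G_new = w.copy(), G.copy()
        for j in active:
            w_new[j] = w[j] + w[i] * G[i, j]
            for k in active:
                if j == k:
                    G_new[j, k] = 0.0
                    continue
                denom = 1.0 - G[j, i] * G[i, j]
                G_new[j, k] = ((G[j, k] + G[j, i] * G[i, k]) / denom
                               if denom > 0 else 0.0)
        w_new[i] = 0.0
        w, G = w_new, G_new

# Illustrative graph: two primary and two secondary hypotheses
p = [0.010, 0.030, 0.004, 0.200]
w = [0.5, 0.5, 0.0, 0.0]
G = [[0.0, 0.5, 0.5, 0.0],
     [0.5, 0.0, 0.0, 0.5],
     [0.0, 1.0, 0.0, 0.0],
     [1.0, 0.0, 0.0, 0.0]]
print("rejected hypotheses:", graphical_bonferroni(p, w, G, alpha=0.025))
```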

7.
ABSTRACT The controversy over the use of null hypothesis statistical testing (NHST) has persisted for decades, yet NHST remains the most widely used statistical approach in wildlife sciences and ecology. A disconnect exists between those opposing NHST and many wildlife scientists and ecologists who conduct and publish research. This disconnect causes confusion and frustration on the part of students. We, as students, offer our perspective on how this issue may be addressed. Our objective is to encourage academic institutions and advisors of undergraduate and graduate students to introduce students to various statistical approaches so we can make well-informed decisions on the appropriate use of statistical tools in wildlife and ecological research projects. We propose an academic course that introduces students to various statistical approaches (e.g., Bayesian, frequentist, Fisherian, information theory) to build a foundation for critical thinking in applying statistics. We encourage academic advisors to become familiar with the statistical approaches available to wildlife scientists and ecologists and thus decrease bias towards one approach. Null hypothesis statistical testing is likely to persist as the most common statistical analysis tool in wildlife science until academic institutions and student advisors change their approach and emphasize a wider range of statistical methods.  相似文献   

8.
For genetic association studies with multiple phenotypes, we propose a new strategy for multiple testing with family-based association tests (FBATs). The strategy increases the power by both using all available family data and reducing the number of hypotheses tested while being robust against population admixture and stratification. By use of conditional power calculations, the approach screens all possible null hypotheses without biasing the nominal significance level, and it identifies the subset of phenotypes that has optimal power when tested for association by either univariate or multivariate FBATs. An application of our strategy to an asthma study shows the practical relevance of the proposed methodology. In simulation studies, we compare our testing strategy with standard methodology for family studies. Furthermore, the proposed principle of using all data without biasing the nominal significance in an analysis prior to the computation of the test statistic has broad and powerful applications in many areas of family-based association studies.  相似文献   

9.
Signal detection in functional magnetic resonance imaging (fMRI) inherently involves the problem of testing a large number of hypotheses. A popular strategy to address this multiplicity is control of the false discovery rate (FDR). In this work we consider the case where prior knowledge is available to partition the set of all hypotheses into disjoint subsets or families, e.g., by a priori knowledge of the functionality of certain regions of interest. If the proportion of true null hypotheses differs between families, this structural information can be used to increase statistical power. We propose a two-stage multiple test procedure that first excludes from the analysis those families for which there is no strong evidence of containing true alternatives. We show control of the family-wise error rate at this first stage of testing. At the second stage, we then test the hypotheses within each non-excluded family and obtain asymptotic control of the FDR within each family. Our main mathematical result is that this two-stage strategy implies asymptotic control of the FDR with respect to all hypotheses. In simulations we demonstrate the increased power of this new procedure in comparison with established procedures in situations with highly unbalanced families. Finally, we apply the proposed method to simulated and to real fMRI data.
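
A simplified, hypothetical two-stage sketch in the spirit of the procedure described above (not the authors' exact method): families with no evidence against their global null are screened out with family-wise error control, and Benjamini-Hochberg is then applied within each retained family. The simulated p-values mimic two "active" and two "null" families.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def benjamini_hochberg(p, q=0.05):
    """Boolean rejection mask of the Benjamini-Hochberg step-up procedure."""
    p = np.asarray(p)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    k = below.nonzero()[0].max() + 1 if below.any() else 0
    reject = np.zeros(m, bool)
    reject[order[:k]] = True
    return reject

# Four families of 200 voxel-level p-values; only families 0 and 1 carry signal.
families = []
for f in range(4):
    p = rng.uniform(size=200)
    if f < 2:
        p[:30] = norm.sf(rng.normal(3.0, 1.0, size=30))   # 30 true signals
    families.append(p)

# Stage 1: exclude families whose Bonferroni-type global null is not rejected,
# controlling the family-wise error rate across the 4 family-level tests.
alpha_fwer = 0.05
keep = [f for f, p in enumerate(families)
        if p.min() * len(p) <= alpha_fwer / len(families)]

# Stage 2: Benjamini-Hochberg within each retained family.
for f in keep:
    rej = benjamini_hochberg(families[f], q=0.05)
    print(f"family {f}: {rej.sum()} voxels declared active")
```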

10.
The aim of this contribution is to give an overview of approaches to testing for non-inferiority of one out of two binomial distributions as compared to the other in settings involving independent samples (the paired-samples case is not considered here, but the major conclusions and recommendations can be shown to hold for both sampling schemes). In principle, there is an infinite number of different ways of defining (one-sided) equivalence in any multiparameter setting. In the binomial two-sample problem, the following three choices of a measure of dissimilarity between the underlying distributions are of major importance for real applications: the odds ratio (OR), the relative risk (RR), and the difference (DEL) between the two binomial parameters. It is shown that for all three possibilities of formulating the hypotheses of a non-inferiority problem concerning two binomial proportions, reasonable testing procedures providing exact control over the type-I error risk are available. As a particularly useful and versatile way of handling mathematically nonnatural parametrizations like RR and DEL, the approach through Bayesian posterior probabilities of hypotheses with respect to some non-informative reference prior has much to recommend it. In order to ensure that the corresponding testing procedure is valid in the classical, i.e. frequentist, sense, it suffices to use straightforward computational techniques yielding suitably corrected nominal significance levels. In view of the availability of testing procedures with satisfactory properties for all parametrizations of main practical interest, the discussion of the pros and cons of these methods has to focus on the question of which of the underlying measures of dissimilarity should be preferred on grounds of logic and intuition. It is argued that the OR merits preference with respect to these criteria as well, since the non-inferiority hypotheses defined in terms of the other parametric functions are bounded by lines that cross the boundaries of the parameter space. From this fact, we conclude that the exact Fisher-type test for one-sided equivalence provides the most reasonable approach to the confirmatory analysis of non-inferiority trials involving two independent samples of binary data. The marked conservatism of the nonrandomized version of this test can largely be removed by using a suitably increased nominal significance level (depending, in addition to the target level, on the sample sizes and the equivalence margin), or by replacing it with a Bayesian test for non-inferiority with respect to the odds ratio.
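
As a sketch of the Bayesian-posterior route mentioned above, the following code estimates, under independent Jeffreys priors, the posterior probability that the odds ratio exceeds a non-inferiority margin; the data, margin and (uncorrected) nominal level are illustrative, and the level-correction step discussed in the abstract is not implemented.

```python
# Hypothetical sketch: Monte Carlo posterior probability for an OR-based
# non-inferiority claim under independent Jeffreys (Beta(0.5, 0.5)) priors.
import numpy as np

rng = np.random.default_rng(42)

def posterior_prob_or_above(x_t, n_t, x_r, n_r, or_margin, draws=200_000):
    """P(odds-ratio(treatment vs. reference) > or_margin | data)."""
    p_t = rng.beta(x_t + 0.5, n_t - x_t + 0.5, size=draws)
    p_r = rng.beta(x_r + 0.5, n_r - x_r + 0.5, size=draws)
    odds_ratio = (p_t / (1 - p_t)) / (p_r / (1 - p_r))
    return np.mean(odds_ratio > or_margin)

post = posterior_prob_or_above(x_t=82, n_t=100, x_r=85, n_r=100, or_margin=0.5)
print(f"posterior P(OR > 0.5) = {post:.4f}")
print("non-inferiority claimed:", post >= 0.975)   # uncorrected one-sided 2.5% level
```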

11.
To understand why population growth rate is sometimes positive and sometimes negative, ecologists have adopted two main approaches. The most common approach is through the density paradigm by plotting population growth rate against population density. The second approach is through the mechanistic paradigm by plotting population growth rate against the relevant ecological processes affecting the population. The density paradigm is applied a posteriori, works sometimes but not always and is remarkably useless in solving management problems or in providing an understanding of why populations change in size. The mechanistic paradigm investigates the factors that supposedly drive density changes and is identical to Caughley's declining population paradigm of conservation biology. The assumption that we can uncover invariant relationships between population growth rate and some other variables is an article of faith. Numerous commercial fishery applications have failed to find the invariant relationships between stock and recruitment that are predicted by the density paradigm. Environmental variation is the rule, and non-equilibrial dynamics should force us to look for the mechanisms of population change. If multiple factors determine changes in population density, there can be no predictability in either of these paradigms and we will become environmental historians rather than scientists with useful generalizations for the population problems of this century. Defining our questions clearly and adopting an experimental approach with crisp alternative hypotheses and adequate controls will be essential to building useful generalizations for solving the practical problems of population management in fisheries, wildlife and conservation.  相似文献   

12.
Hoskins SG, Stevens LM, Nehm RH. Genetics. 2007;176(3):1381-1389.
CREATE (consider, read, elucidate hypotheses, analyze and interpret the data, and think of the next experiment) is a new method for teaching science and the nature of science through primary literature. CREATE uses a unique combination of novel pedagogical tools to guide undergraduates through analysis of journal articles, highlighting the evolution of scientific ideas by focusing on a module of four articles from the same laboratory. Students become fluent in the universal language of data analysis as they decipher the figures, interpret the findings, and propose and defend further experiments to test their own hypotheses about the system under study. At the end of the course students gain insight into the individual experiences of article authors by reading authors' responses to an e-mail questionnaire generated by CREATE students. Assessment data indicate that CREATE students gain in ability to read and critically analyze scientific data, as well as in their understanding of, and interest in, research and researchers. The CREATE approach demystifies the process of reading a scientific article and at the same time humanizes scientists. The positive response of students to this method suggests that it could make a significant contribution to retaining undergraduates as science majors.  相似文献   

13.
Constraints are factors that limit evolutionary change. A subset of constraints is developmental, and acts during embryonic development. There is some uncertainty about how to define developmental constraints, and how to formulate them as testable hypotheses. Furthermore, concepts such as constraint-breaking, universal constraints, and forbidden morphologies present some conceptual difficulties. One of our aims is to clarify these issues. After briefly reviewing current classifications of constraint, we define developmental constraints as those affecting morphogenetic processes in ontogeny. They may be generative or selective, although a clear distinction cannot always be drawn. We support the idea that statements about constraints are in fact statements about the relative frequency of particular transformations (where 'transformation' indicates a change from the ancestral condition). An important consequence of this is that the same transformation may be constrained in one developmental or phylogenetic context, but evolutionarily plastic in another. In this paper, we analyse developmental constraints within a phylogenetic framework, building on similar work by previous authors. Our approach is based on the following assumptions from the literature: (1) constraints are identified when there is a discrepancy between the observed frequency of a transformation, and its expected frequency; (2) the 'expected' distribution is derived by examining the phylogenetic distribution of the transformation and its associated selection pressures. Thus, by looking for congruence between these various phylogenetic distribution patterns, we can test hypotheses about constraint. We critically examine this approach using a test case: variation in phalanx-number in the amniote limb.  相似文献   

14.
Determining how ecological and evolutionary processes produce spatial variation in local species richness remains an unresolved challenge. Using mountains as a model system, we outline an integrative research approach to evaluate the influence of ecological and evolutionary mechanisms on the generation and maintenance of patterns of species richness along and among elevational gradients. Biodiversity scientists interested in patterns of species richness typically start by documenting patterns of species richness at regional and local scales, and based on their knowledge of the taxon, and the environmental and historical characteristics of a mountain region, they then ask whether diversity–environment relationships, if they exist, are explained mostly by ecological or evolutionary hypotheses. The final step, and perhaps most challenging one, is to tease apart the relative influence of ecological and evolutionary mechanisms. We propose that elucidating the relative influence of ecological and evolutionary mechanisms can be achieved by taking advantage of the replicated settings afforded by mountains, combined with targeted experiments along elevational gradients. This approach will not only identify potential mechanisms that drive patterns of species richness, but also allow scientists to generate more robust hypotheses about which factors generate and maintain local diversity.  相似文献   

15.
Scientific research progresses by the dialectic dialogue between hypothesis building and the experimental testing of these hypotheses. Microbiologists, like biologists in general, can rely on an increasing set of sophisticated experimental methods for hypothesis testing, such that many scientists maintain that progress in biology essentially comes with new experimental tools. While this is certainly true, the importance of hypothesis building in science should not be neglected. Some scientists rely on intuition for hypothesis building. However, there is also a large body of philosophical thinking on hypothesis building, knowledge of which may be of use to young scientists. The present essay offers a primer on philosophical thought about hypothesis building and illustrates it with two hypotheses that played a major role in the history of science (the parallel axiom and the fifth-element hypothesis). It continues with philosophical concepts on hypotheses as a calculus that fits observations (Copernicus), the need for plausibility (Descartes and Gilbert) and for explanatory power imposing a strong selection on theories (Darwin, James and Dewey). Galilei introduced, and James and Poincaré later justified, the reductionist principle in hypothesis building. Waddington stressed the feed-forward aspect of fruitful hypothesis building, while Poincaré called for a dialogue between experiment and hypothesis and distinguished false, true, fruitful and dangerous hypotheses. Theoretical biology plays a much lesser role than theoretical physics because physical thinking strives for unifying principles across the universe, while biology is confronted with a breathtaking diversity of life forms and their historical development on a single planet. Knowledge of the philosophical foundations of hypothesis building might stimulate more hypothesis-driven experimentation than simple observation-oriented "fishing expeditions" in biological research.

16.
In this paper we will outline several empirical approaches to developing and testing hypotheses about the determinants of species borders. We highlight environmental change as an important opportunity – arguing that these unplanned, large-scale manipulations can be used to study mechanisms which limit species distributions. Our discussion will emphasize three main ideas. First, we review the traditional biogeographic approach. We show how modern analytical and computer techniques have improved this approach and generated important new hypotheses concerning species' range determinants. However, abilities to test those hypotheses continue to be limited. Next we look at how the additions of temporal data, field and lab experimentation, biological details and replication, when applied to systems that have been the subject of classical biogeographic studies, have been used to support or refute hypotheses on range determinants. Such a multi-faceted approach adds rigor, consistency and plausible mechanisms to the study of species ranges, and has been especially fruitful in the study of climate and species' ranges. Lastly, we present an alternative avenue for exploration of range-limiting mechanisms which has been under-utilized. We argue that carefully designed comparisons and contrasts between groups of species or systems provide a powerful tool for examining hypotheses on species' borders. The seasonality hypothesis as an explanation for Rapoport's rule serves as a model of this approach. A test is constructed by comparing patterns of seasonality and range size among marine and terrestrial systems. The seasonality hypothesis is not supported.  相似文献   

17.
18.
ABSTRACT In spite of the wide use and acceptance of information theoretic approaches in the wildlife sciences, debate continues on the correct use and interpretation of Akaike's Information Criterion as compared to frequentist methods. Misunderstandings as to the fundamental nature of such comparisons continue. Here we agree with Steidl's argument about situation-specific use of each approach. However, Steidl did not make clear the distinction between statistical and biological hypotheses. Certainly model selection is not statistical, or null, hypothesis testing; importantly, it represents a more effective means to test among competing biological, or research, hypotheses. Employed correctly, it leads to superior strength of inference and reduces the risk that favorite hypotheses are uncritically accepted.  相似文献   
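
For readers unfamiliar with the information-theoretic alternative, here is a minimal, invented example of ranking two competing research hypotheses, expressed as regression models, by AIC and Akaike weights; it is not tied to any analysis in the paper.

```python
# Hypothetical sketch: compare "abundance ~ cover" vs. "abundance ~ cover + disturbance"
# by AIC; the data are simulated and the covariate names are made up.
import numpy as np

rng = np.random.default_rng(3)
n = 60
cover = rng.uniform(0, 1, n)
disturb = rng.uniform(0, 1, n)
abundance = 2.0 + 3.0 * cover + rng.normal(0, 1.0, n)   # disturbance has no real effect

def aic_ols(y, X):
    """AIC (up to an additive constant) of an OLS fit with Gaussian errors."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, rss = np.linalg.lstsq(X, y, rcond=None)[:2]
    k = X.shape[1] + 1                  # coefficients + error variance
    return len(y) * np.log(rss[0] / len(y)) + 2 * k

aic1 = aic_ols(abundance, cover)                               # H1: cover only
aic2 = aic_ols(abundance, np.column_stack([cover, disturb]))   # H2: cover + disturbance
delta = np.array([aic1, aic2]) - min(aic1, aic2)
weights = np.exp(-0.5 * delta) / np.exp(-0.5 * delta).sum()
print(f"AIC: {aic1:.1f} vs {aic2:.1f};  Akaike weights: {weights.round(2)}")
```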

19.
20.
Evolution should render individuals resistant to stress and particularly to stress experienced by ancestors. However, many studies report negative effects of stress experienced by one generation on the performance of subsequent generations. To assess the strength of such transgenerational effects we propose a strategy aimed at overcoming the problem of type I errors when testing multiple proxies of stress in multiple ancestors against multiple offspring performance traits, and we apply it to a large observational dataset on captive zebra finches (Taeniopygia guttata). We combine clear one-tailed hypotheses with steps of validation, meta-analytic summary of mean effect sizes, and independent confirmatory testing. We find that drastic differences in early growth conditions (nestling body mass 8 days after hatching varied sevenfold between 1.7 and 12.4 g) had only moderate direct effects on adult morphology (95% confidence interval [CI]: r = 0.19–0.27) and small direct effects on adult fitness traits (r = 0.02–0.12). In contrast, we found no indirect effects of parental or grandparental condition (r = −0.017 to 0.002; meta-analytic summary of 138 effect sizes), and mixed evidence for small benefits of matching environments between parents and offspring, as the latter was not robust to confirmatory testing in independent datasets. This study shows that evolution has led to a remarkable robustness of zebra finches against undernourishment. Our study suggests that transgenerational effects are absent in this species, because CIs exclude all biologically relevant effect sizes.  相似文献   
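
A small sketch of the meta-analytic summary step described above, assuming the standard Fisher-z, inverse-variance fixed-effect combination of correlation coefficients; the r values and sample sizes are invented, not the study's 138 effect sizes.

```python
# Hypothetical sketch: fixed-effect meta-analytic summary of correlation effect sizes.
import numpy as np
from scipy.stats import norm

r = np.array([0.02, -0.01, 0.05, 0.00, -0.03, 0.04])   # per-trait effect sizes (illustrative)
n = np.array([220, 180, 260, 240, 200, 210])            # sample size behind each estimate

z = np.arctanh(r)                 # Fisher z transform
var = 1.0 / (n - 3)               # approximate sampling variance of z
w = 1.0 / var

z_mean = np.sum(w * z) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
ci_z = z_mean + np.array([-1, 1]) * norm.ppf(0.975) * se

r_mean, ci_r = np.tanh(z_mean), np.tanh(ci_z)
print(f"summary r = {r_mean:.3f}, 95% CI [{ci_r[0]:.3f}, {ci_r[1]:.3f}]")
```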
