Similar Documents
20 similar documents found (search time: 15 ms)
1.
Abstract: Statistics is one of the most important yet difficult subjects for many ecology and wildlife graduate students to learn. Insufficient knowledge about how to conduct quality science and the ongoing debate about the relative value of competing statistical ideologies contribute to uncertainties among graduate students regarding which statistical tests are most appropriate. Herein, we argue that increased education about the available statistical tests alone is unlikely to ameliorate the problem. Instead, we suggest that statistical uncertainties among graduate students are a secondary symptom of a larger problem. We believe the root cause lies in the lack of education on how to conduct science as an integrated process, from hypothesis creation through statistical analysis. We argue that if students are taught to think about how each step of the process will affect all other steps, many statistical uncertainties will be avoided.

2.
ABSTRACT In spite of the wide use and acceptance of information theoretic approaches in the wildlife sciences, debate continues on the correct use and interpretation of Akaike's Information Criterion as compared to frequentist methods. Misunderstandings as to the fundamental nature of such comparisons continue. Here we agree with Steidl's argument about situation-specific use of each approach. However, Steidl did not make clear the distinction between statistical and biological hypotheses. Certainly model selection is not statistical, or null, hypothesis testing; importantly, it represents a more effective means to test among competing biological, or research, hypotheses. Employed correctly, it leads to superior strength of inference and reduces the risk that favorite hypotheses are uncritically accepted.

3.
In their ambitious Evolutionary Anthropology paper, Winterhalder and Smith [1] review the history, theory, and methods of human behavioral ecology (HBE). In establishing how HBE differs from traditional approaches within sociocultural anthropology, they and others laud its hypothetico-deductive research method [1-3]. Our aim is to critically examine how human behavioral ecologists conduct their research, specifically how they analyze and interpret data as evidence for scientific hypotheses. Through computer simulations and a review of empirical studies of human sex ratios, we consider some limitations of the status quo and present alternatives that could strengthen the field. In particular, we suggest that because human behavioral ecologists often consider multiple hypotheses, they should use statistical approaches that can quantify the evidence in empirical data for competing hypotheses. Although we focus on HBE, the principles of this paper apply broadly within biological anthropology.
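One common way to quantify the relative evidence for competing hypotheses is information-theoretic model weighting. The Python sketch below is illustrative only; the log-likelihoods and parameter counts are hypothetical placeholders, not values from the paper:

```python
import numpy as np

# Hypothetical maximized log-likelihoods and parameter counts
# for three competing models fit to the same data.
log_liks = np.array([-212.4, -210.1, -209.8])
n_params = np.array([2, 3, 5])

aic = -2 * log_liks + 2 * n_params   # Akaike's Information Criterion
delta = aic - aic.min()              # differences from the best model
weights = np.exp(-0.5 * delta)
weights /= weights.sum()             # Akaike weights: relative evidence

for i, (d, w) in enumerate(zip(delta, weights), 1):
    print(f"Model {i}: dAIC = {d:5.2f}, weight = {w:.3f}")
```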

4.
Nathan P. Lemoine, Oikos (2019) 128(7): 912-928
Throughout the last two decades, Bayesian statistical methods have proliferated throughout ecology and evolution. Numerous previous references established both philosophical and computational guidelines for implementing Bayesian methods. However, protocols for incorporating prior information, the defining characteristic of Bayesian philosophy, are nearly nonexistent in the ecological literature. Here, I hope to encourage the use of weakly informative priors in ecology and evolution by providing a ‘consumer's guide’ to weakly informative priors. The first section outlines three reasons why ecologists should abandon noninformative priors: 1) common flat priors are not always noninformative, 2) noninformative priors provide the same result as simpler frequentist methods, and 3) noninformative priors suffer from the same high type I and type M error rates as frequentist methods. The second section provides a guide for implementing informative priors, wherein I detail convenient ‘reference’ prior distributions for common statistical models (i.e. regression, ANOVA, hierarchical models). I then use simulations to visually demonstrate how informative priors influence posterior parameter estimates. With the guidelines provided here, I hope to encourage the use of weakly informative priors for Bayesian analyses in ecology. Ecologists can and should debate the appropriate form of prior information, but should consider weakly informative priors as the new ‘default’ prior for any Bayesian model.
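To make the effect of a weakly informative prior concrete, here is a minimal conjugate-Gaussian regression sketch in Python; the data are simulated, and the Normal(0, tau^2) prior on coefficients is an illustrative choice rather than the paper's prescription:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 20, 3                                  # small sample: priors matter most here
X = rng.normal(size=(n, p))                   # standardized predictors
beta_true = np.array([0.5, -0.3, 0.0])
sigma = 1.0
y = X @ beta_true + rng.normal(scale=sigma, size=n)

def posterior_mean(tau):
    """Posterior mean of beta under beta_j ~ Normal(0, tau^2), known sigma."""
    A = X.T @ X / sigma**2 + np.eye(p) / tau**2
    return np.linalg.solve(A, X.T @ y / sigma**2)

print("OLS (flat prior):       ", np.linalg.lstsq(X, y, rcond=None)[0])
print("Weakly informative (tau=1):", posterior_mean(1.0))
print("Strong prior (tau=0.1):    ", posterior_mean(0.1))
```

With a weakly informative Normal(0, 1) prior, estimates shrink only slightly toward zero relative to ordinary least squares, while the much tighter tau = 0.1 prior dominates the likelihood in a small sample.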

5.
6.
Ecological risk assessment (ERA) is concerned with making decisions about the natural environment under uncertainty. Statistical methodology provides a natural framework for risk characterization and manipulation with many quantitative ERAs relying heavily on Neyman-Pearson hypothesis testing and other frequentist modes of inference. Bayesian statistical methods are becoming increasingly popular in ERA as they are seen to provide legitimate ways of incorporating subjective belief or expert opinion in the form of prior probability distributions. This article explores some of the concepts, strengths and weaknesses, and difficulties associated with both paradigms. The main points are illustrated with an example of setting a risk-based “trigger” level for uranium concentrations in the Magela Creek catchment of the Northern Territory of Australia.
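As a hedged illustration of the Bayesian side of this comparison, the sketch below derives a trigger level as an upper percentile of a posterior predictive distribution for log-normal concentrations; all prior parameters and data values are invented for the example and are not those used for Magela Creek:

```python
import numpy as np
from scipy import stats

# Hypothetical baseline uranium concentrations (ug/L); log-normal model.
obs = np.array([0.21, 0.35, 0.28, 0.44, 0.19, 0.31, 0.26])
logs = np.log(obs)
sigma = 0.4                          # assumed known log-scale sd (illustrative)

# Expert-opinion prior on the log-mean (illustrative values).
mu0, tau0 = np.log(0.3), 0.5

# Conjugate normal-normal update for the log-mean.
n = len(logs)
tau_n2 = 1.0 / (1.0 / tau0**2 + n / sigma**2)
mu_n = tau_n2 * (mu0 / tau0**2 + logs.sum() / sigma**2)

# Posterior predictive for a new observation is normal on the log scale.
pred_sd = np.sqrt(tau_n2 + sigma**2)
trigger = np.exp(stats.norm.ppf(0.95, loc=mu_n, scale=pred_sd))
print(f"Risk-based trigger (95th predictive percentile): {trigger:.2f} ug/L")
```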

7.
Nested clade phylogeographical analysis (NCPA) has become a common tool in intraspecific phylogeography. To evaluate the validity of its inferences, NCPA was applied to actual data sets with 150 strong a priori expectations, the majority of which had not been analysed previously by NCPA. NCPA did well overall, but it sometimes failed to detect an expected event and less commonly resulted in a false positive. An examination of these errors suggested some alterations in the NCPA inference key, and these modifications reduce the incidence of false positives at the cost of a slight reduction in power. Moreover, NCPA does equally well in inferring events regardless of the presence or absence of other, unrelated events. A reanalysis of some recent computer simulations that are seemingly discordant with these results revealed that NCPA performed appropriately in these simulated samples and was not prone to a high rate of false positives under sampling assumptions that typify real data sets. NCPA makes a posteriori use of an explicit inference key for biological interpretation after statistical hypothesis testing. Alternatives to NCPA that claim that biological inference emerges directly from statistical testing are shown in fact to use an a priori inference key, albeit implicitly. It is argued that the a priori and a posteriori approaches to intraspecific phylogeography are complementary, not contradictory. Finally, cross-validation using multiple DNA regions is shown to be a powerful method of minimizing inference errors. A likelihood ratio hypothesis testing framework has been developed that allows testing of phylogeographical hypotheses, extends NCPA to testing specific hypotheses not within the formal inference key (such as the out-of-Africa replacement hypothesis of recent human evolution) and integrates intra- and interspecific phylogeographical inference.

8.
Abstract The impact of the ongoing rapid climate change on natural systems is a major issue for human societies. An important challenge for ecologists is to identify the climatic factors that drive temporal variation in demographic parameters, and, ultimately, the dynamics of natural populations. The analysis of long-term monitoring data at the individual scale is often the only available approach to estimate reliably demographic parameters of vertebrate populations. We review statistical procedures used in these analyses to study links between climatic factors and survival variation in vertebrate populations. We evaluated the efficiency of various statistical procedures from an analysis of survival in a population of white stork, Ciconia ciconia, a simulation study and a critical review of 78 papers published in the ecological literature. We identified six potential methodological problems: (i) the use of statistical models that are not well-suited to the analysis of long-term monitoring data collected at the individual scale; (ii) low ratios of number of statistical units to number of candidate climatic covariates; (iii) collinearity among candidate climatic covariates; (iv) the use of statistics, to assess statistical support for climatic covariate effects, that deal poorly with unexplained variation in survival; (v) spurious detection of effects due to the co-occurrence of trends in survival and the climatic covariate time series; and (vi) assessment of the magnitude of climatic effects on survival using measures that cannot be compared across case studies. The critical review of the ecological literature revealed that five of these six methodological problems were often poorly tackled. As a consequence we concluded that many of these studies generated hypotheses but only few provided solid evidence for impacts of climatic factors on survival or reliable measures of the magnitude of such impacts. We provide practical advice to solve efficiently most of the methodological problems identified. The only frequent issue that still lacks a straightforward solution was the low ratio of the number of statistical units to the number of candidate climatic covariates. With a view to increasing this ratio and therefore producing more robust analyses of the links between climate and demography, we suggest avenues for improving the procedures used to design field protocols and to select a set of candidate climatic covariates. Finally, we present recent statistical methods with potential interest for assessing the impact of climatic factors on demographic parameters.
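Problem (iii), collinearity among candidate climatic covariates, can be screened before model fitting with variance inflation factors. A minimal Python sketch follows; the covariates are simulated, and this diagnostic is one common option rather than the review's prescribed procedure:

```python
import numpy as np

rng = np.random.default_rng(7)
years = 30
nao = rng.normal(size=years)                          # hypothetical winter NAO index
temp = 0.8 * nao + rng.normal(scale=0.5, size=years)  # temperature correlated with NAO
rain = rng.normal(size=years)                         # independent rainfall
Z = np.column_stack([nao, temp, rain])

def vif(Z, j):
    """Variance inflation factor: regress covariate j on the others."""
    others = np.delete(Z, j, axis=1)
    X = np.column_stack([np.ones(len(Z)), others])
    resid = Z[:, j] - X @ np.linalg.lstsq(X, Z[:, j], rcond=None)[0]
    r2 = 1 - resid.var() / Z[:, j].var()
    return 1.0 / (1.0 - r2)

for j, name in enumerate(["NAO", "temperature", "rainfall"]):
    print(f"VIF({name}) = {vif(Z, j):.2f}")   # values above ~5-10 flag collinearity
```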

9.
10.
With the rapid development of mass spectrometry, proteomics has become a major research focus following genomics and transcriptomics, and identifying reliable differentially expressed proteins is crucial for biomarker discovery. Accordingly, how to screen differential proteins accurately and sensitively has become one of the central problems in mass-spectrometry-based quantitative proteomics. Many methods address this problem, but their ranges of applicability differ. Broadly, the statistical strategies for screening differential proteins from mass spectrometry data fall into three classes: strategies based on classical (frequentist) statistics, Bayesian statistical testing strategies, and other strategies; each class has its own scope of application, characteristics, and shortcomings. In addition, the screening process inevitably produces some false positives, so complementary methods can be used to control the quality of the identified differentially expressed proteins and improve the reliability of the statistical test results.

11.
A fundamental methodology in neurophysiology involves recording the electrical signals associated with individual neurons within brains of awake behaving animals. Traditional statistical analyses have relied mainly on mean firing rates over some epoch (often several hundred milliseconds) that are compared across experimental conditions by analysis of variance. Often, however, the time course of the neuronal firing patterns is of interest, and a more refined procedure can produce substantial additional information. In this paper we compare neuronal firing in the supplementary eye field of a macaque monkey across two experimental conditions. We take the electrical discharges, or 'spikes', to be arrivals in an inhomogeneous Poisson process and then model the firing intensity function using both a simple parametric form and more flexible splines. Our main interest is in making inferences about certain characteristics of the intensity, including the timing of the maximal firing rate. We examine data from 84 neurons individually and also combine results into a hierarchical model. We use Bayesian estimation methods and frequentist significance tests based on a nonparametric bootstrap procedure. We are thereby able to conclude that a substantial fraction of the neurons exhibit important temporal differences in firing intensity across the two conditions, and we quantify the effect across the population of neurons.
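The core modeling idea can be sketched in a few lines of Python: simulate spikes from an inhomogeneous Poisson process by thinning, then locate the time of maximal firing from a kernel intensity estimate. The intensity function and bandwidth below are invented for illustration; the paper itself fits parametric and spline intensity models with Bayesian and bootstrap inference:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1.0                                    # trial length (s)

def lam(t):
    """True firing intensity (Hz): baseline plus a Gaussian bump near 0.35 s."""
    return 40 + 60 * np.exp(-((t - 0.35) / 0.08) ** 2)

lam_max = 100.0                            # upper bound on lam over [0, T]

# Lewis-Shedler thinning: simulate homogeneous candidates at rate lam_max,
# then keep each with probability lam(t) / lam_max.
n_cand = rng.poisson(lam_max * T)
cand = np.sort(rng.uniform(0, T, n_cand))
spikes = cand[rng.uniform(size=n_cand) < lam(cand) / lam_max]

# Crude Gaussian-kernel intensity estimate on a grid.
grid = np.linspace(0, T, 200)
bw = 0.02
rate = np.array([np.sum(np.exp(-0.5 * ((t - spikes) / bw) ** 2)) for t in grid])
rate /= bw * np.sqrt(2 * np.pi)            # convert kernel sums to spikes/s

print(f"{len(spikes)} spikes; estimated peak firing at t = {grid[rate.argmax()]:.3f} s")
```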

12.
Multiple lines of evidence (LOE) are often considered when examining the potential impact of contaminated sediment. Three strategies are explored for combining information within and/or among different LOE. One technique uses a multivariate strategy for clustering sites into groups of similar impact. A second method employs meta-analysis to pool empirically derived P-values. The third method uses a quantitative estimation of probability derived from odds ratios. These three strategies are compared with respect to a set of data describing reference conditions and a contaminated area in the Great Lakes. Common themes in these three strategies include the critical issue of defining an appropriate set of reference/control conditions, the definition of impact as a significant departure from the normal variation observed in the reference conditions, and the use of distance from the reference distribution to define any of the effect measures. Reasons for differences in results between the three approaches are explored and strategies for improving the approaches are suggested.
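The second strategy, pooling empirically derived P-values across lines of evidence, is often implemented with Fisher's combined-probability test. A minimal Python sketch with hypothetical P-values:

```python
import numpy as np
from scipy import stats

# Hypothetical one-sided P-values from three lines of evidence at one site.
p = np.array([0.08, 0.12, 0.03])

# Fisher's method: -2 * sum(log p) ~ chi-squared with 2k df under the null.
chi2 = -2 * np.log(p).sum()
p_combined = stats.chi2.sf(chi2, df=2 * len(p))
print(f"chi2 = {chi2:.2f}, combined P = {p_combined:.4f}")
```

No single line of evidence is individually significant at 0.05 here, yet the pooled test can be, which is exactly the behaviour that makes combining evidence attractive and worth scrutinizing.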

13.
14.
Highlights
• Statistical approach for differential abundance analysis for proteomic experiments with TMT labeling.
• Applicable to large-scale experiments with complex or unbalanced design.
• An open-source R/Bioconductor package compatible with popular data processing tools.

15.
Many metabolomics, and other high-content or high-throughput, experiments are set up such that the primary aim is the discovery of biomarker metabolites that can discriminate, with a certain level of certainty, between nominally matched ‘case’ and ‘control’ samples. However, it is unfortunately very easy to find markers that are apparently persuasive but that are in fact entirely spurious, and there are well-known examples in the proteomics literature. The main types of danger are not entirely independent of each other, but include bias, inadequate sample size (especially relative to the number of metabolite variables and to the required statistical power to prove that a biomarker is discriminant), excessive false discovery rate due to multiple hypothesis testing, inappropriate choice of particular numerical methods, and overfitting (generally caused by the failure to perform adequate validation and cross-validation). Many studies fail to take these into account, and thereby fail to discover anything of true significance (despite their claims). We summarise these problems, and provide pointers to a substantial existing literature that should assist in the improved design and evaluation of metabolomics experiments, thereby allowing robust scientific conclusions to be drawn from the available data. We provide a list of some of the simpler checks that might improve one’s confidence that a candidate biomarker is not simply a statistical artefact, and suggest a series of preferred tests and visualisation tools that can assist readers and authors in assessing papers. These tools can be applied to individual metabolites by using multiple univariate tests performed in parallel across all metabolite peaks. They may also be applied to the validation of multivariate models. We stress in particular that classical p-values such as “p < 0.05”, that are often used in biomedicine, are far too optimistic when multiple tests are done simultaneously (as in metabolomics). Ultimately it is desirable that all data and metadata are available electronically, as this allows the entire community to assess conclusions drawn from them. These analyses apply to all high-dimensional ‘omics’ datasets.
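The multiple-testing point can be made concrete: with hundreds of metabolite peaks tested in parallel, unadjusted "p < 0.05" is guaranteed to produce spurious hits. Below is a minimal sketch of the Benjamini-Hochberg false discovery rate procedure on simulated data in which only a handful of peaks truly differ; it illustrates the class of correction the authors recommend rather than any specific pipeline:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_peaks, n_per_group = 500, 12
case = rng.normal(size=(n_peaks, n_per_group))
ctrl = rng.normal(size=(n_peaks, n_per_group))
case[:10] += 1.5                       # only 10 genuinely different peaks

# Parallel univariate t-tests, one per metabolite peak.
p = stats.ttest_ind(case, ctrl, axis=1).pvalue

def benjamini_hochberg(p, q=0.05):
    """Return indices of discoveries controlling the FDR at level q."""
    order = np.argsort(p)
    m = len(p)
    below = p[order] <= q * (np.arange(1, m + 1) / m)
    if not below.any():
        return np.array([], dtype=int)
    k = np.max(np.nonzero(below)[0])
    return order[: k + 1]

print(f"raw p < 0.05: {(p < 0.05).sum()} 'hits'; "
      f"BH at q = 0.05: {len(benjamini_hochberg(p))} discoveries")
```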

16.
Statistical hypothesis testing is commonly used inappropriately to analyze data, determine causality, and make decisions about significance in ecological risk assessment. Hypothesis testing is conceptually inappropriate in that it is designed to test scientific hypotheses rather than to estimate risks. It is inappropriate for analysis of field studies because it requires replication and random assignment of treatments. It discourages good toxicity testing and field studies, it provides less protection to ecosystems or their components that are difficult to sample or replicate, and it provides less protection when more treatments or responses are used. It provides a poor basis for decision‐making because it does not generate a conclusion of no effect, it does not indicate the nature or magnitude of effects, it does not address effects at untested exposure levels, and it confounds effects and uncertainty. Attempts to make hypothesis testing less problematical cannot solve these problems. Rather, risk assessors should focus on analyzing the relationship between exposure and effects, on presenting a clear estimate of expected or observed effects and associated uncertainties, and on providing the information in a manner that is useful to decision‐makers and the public.

17.
The Rhynchocinetidae (‘hinge‐beak’ shrimps) is a family of marine caridean decapods with considerable variation in sexual dimorphism, male weaponry, mating tactics, and sexual systems. Thus, this group is an excellent model with which to analyse the evolution of these important characteristics, which are of interest not only in shrimps specifically but also in animal taxa in general. Yet, there exists no phylogenetic hypothesis, either molecular or morphological, for this taxon against which to test either the evolution of behavioural traits within the Rhynchocinetidae or its genealogical relationships with other caridean taxa. In this study, we tested (1) hypotheses on the phylogenetic relationships of rhynchocinetid shrimps, and (2) the efficacy of different (one‐, two‐, and three‐phase) methods to generate a reliable phylogeny. Total genomic DNA was extracted from tissue samples taken from 17 species of Rhynchocinetidae and five other species currently or previously assigned to the same superfamily (Nematocarcinoidea); six species from other superfamilies were used as outgroups. Sequences from two nuclear genes (H3 and Enolase) and one mitochondrial gene (12S) were used to construct phylogenies. One‐phase phylogenetic analyses (SATé‐II) and classical two‐ and three‐phase phylogenetic analyses were employed, using both maximum likelihood and Bayesian inference methods. Both a two‐gene data set (H3 and Enolase) and a three‐gene data set (H3, Enolase, 12S) were utilized to explore the relationships amongst the targeted species. These analyses showed that the superfamily Nematocarcinoidea, as currently accepted, is polyphyletic. Furthermore, the two major clades recognized by the SATé‐II analysis are clearly concordant with the genera Rhynchocinetes and Cinetorhynchus, which are currently recognized in the morphological‐based classification (implicit phylogeny) as composing the family Rhynchocinetidae. The SATé‐II method is considered superior to the other phylogenetic analyses employed, which failed to recognize these two major clades. Studies using more genes and a more complete species data set are needed to test yet unresolved inter‐ and intrafamilial systematic and evolutionary questions about this remarkable clade of caridean shrimps.

18.
Nested clade phylogeographical analysis (NCPA) and approximate Bayesian computation (ABC) have been used to test phylogeographical hypotheses. Multilocus NCPA tests null hypotheses, whereas ABC discriminates among a finite set of alternatives. The interpretive criteria of NCPA are explicit and allow complex models to be built from simple components. The interpretive criteria of ABC are ad hoc and require the specification of a complete phylogeographical model. The conclusions from ABC are often influenced by implicit assumptions arising from the many parameters needed to specify a complex model. These complex models confound many assumptions so that biological interpretations are difficult. Sampling error is accounted for in NCPA, but ABC ignores important sources of sampling error that creates pseudo-statistical power. NCPA generates the full sampling distribution of its statistics, but ABC only yields local probabilities, which in turn make it impossible to distinguish between a good fitting model, a non-informative model, and an over-determined model. Both NCPA and ABC use approximations, but convergences of the approximations used in NCPA are well defined whereas those in ABC are not. NCPA can analyse a large number of locations, but ABC cannot. Finally, the dimensionality of tested hypothesis is known in NCPA, but not for ABC. As a consequence, the 'probabilities' generated by ABC are not true probabilities and are statistically non-interpretable. Accordingly, ABC should not be used for hypothesis testing, but simulation approaches are valuable when used in conjunction with NCPA or other methods that do not rely on highly parameterized models.

19.
Null hypothesis significance testing (NHST) is the dominant statistical approach in biology, although it has many, frequently unappreciated, problems. Most importantly, NHST does not provide us with two crucial pieces of information: (1) the magnitude of an effect of interest, and (2) the precision of the estimate of the magnitude of that effect. All biologists should be ultimately interested in biological importance, which may be assessed using the magnitude of an effect, but not its statistical significance. Therefore, we advocate presentation of measures of the magnitude of effects (i.e. effect size statistics) and their confidence intervals (CIs) in all biological journals. Combined use of an effect size and its CIs enables one to assess the relationships within data more effectively than the use of p values, regardless of statistical significance. In addition, routine presentation of effect sizes will encourage researchers to view their results in the context of previous research and facilitate the incorporation of results into future meta-analysis, which has been increasingly used as the standard method of quantitative review in biology. In this article, we extensively discuss two dimensionless (and thus standardised) classes of effect size statistics: d statistics (standardised mean difference) and r statistics (correlation coefficient), because these can be calculated from almost all study designs and also because their calculations are essential for meta-analysis. However, our focus on these standardised effect size statistics does not mean unstandardised effect size statistics (e.g. mean difference and regression coefficient) are less important. We provide potential solutions for four main technical problems researchers may encounter when calculating effect size and CIs: (1) when covariates exist, (2) when bias in estimating effect size is possible, (3) when data have non-normal error structure and/or variances, and (4) when data are non-independent. Although interpretations of effect sizes are often difficult, we provide some pointers to help researchers. This paper serves both as a beginner's instruction manual and a stimulus for changing statistical practice for the better in the biological sciences.
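A minimal worked example of the practice advocated here, reporting a standardized effect size with its confidence interval instead of a bare p-value (data simulated; the normal-approximation CI for d shown below is the simplest of several variants discussed in this literature):

```python
import numpy as np

rng = np.random.default_rng(3)
a = rng.normal(loc=0.5, scale=1.0, size=25)   # treatment group
b = rng.normal(loc=0.0, scale=1.0, size=25)   # control group
n1, n2 = len(a), len(b)

# Cohen's d: standardized mean difference with pooled SD.
sp = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2))
d = (a.mean() - b.mean()) / sp

# Large-sample standard error of d and a normal-approximation 95% CI.
se = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
print(f"d = {d:.2f}, 95% CI [{d - 1.96 * se:.2f}, {d + 1.96 * se:.2f}]")
```

Reporting "d = 0.5, 95% CI [0.0, 1.0]" conveys both the magnitude and the precision of an effect, which a bare "p < 0.05" does not.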

20.
A well‐written application for funding in support of basic biological or biomedical research or individual training fellowship requires that the author perform several functions well. They must (i) identify an important topic, (ii) provide a brief but persuasive introduction to highlight its significance, (iii) identify one or two key questions that if answered would impact the field, (iv) present a series of logical experiments and convince the reader that the approaches are feasible, doable within a certain period of time and have the potential to answer the questions posed, and (v) include citations that demonstrate both scholarship and an appropriate command of the relevant literature and techniques involved in the proposed research study. In addition, preparation of any compelling application requires formal scientific writing and editing skills that are invaluable in any career. These are also all key components in a doctoral dissertation and encompass many of the skills that we expect graduate students to master. Almost 20 years ago, we began a grant writing course as a mechanism to train students in these specific skills. Here, we describe the use of this course in training of our graduate students as well as our experiences and lessons learned.
