Similar Articles
20 similar articles found.
1.
In epidemiology, the fatality rate is an important indicator of disease severity and has been used to evaluate the effects of new treatments. During an emerging epidemic with limited resources, monitoring changes in the fatality rate can also provide signals on the effects of government policies and healthcare quality, which helps guide public health decisions. A statistical test is developed in this paper to detect changes in the fatality rate over time during the course of an emerging infectious disease. A major advantage of the proposed test is that it requires only the regularly reported numbers of deaths and recoveries, which meets practical needs, since detailed surveillance data are hard to collect during an emerging epidemic, especially for deadly, large-scale infectious diseases. In addition, with the sequential testing procedure, effective measures can be detected at the earliest possible time to provide guidance to policymakers for swift action. Simulation studies show that the proposed test performs well and is sensitive in picking up changes in the fatality rate. The test is applied to the 2014–2016 Ebola outbreak in Sierra Leone for illustration.

2.
Although a large body of work exists on tests of correlated evolution of two continuous characters, hypotheses such as character displacement are really tests of whether substantial evolutionary change has occurred on a particular branch or branches of the phylogenetic tree. In this study, we present a methodology for testing such a hypothesis using ancestral character state reconstruction and simulation. Furthermore, we suggest how to investigate the robustness of the hypothesis test by varying the reconstruction methods or simulation parameters. As a case study, we tested a hypothesis of character displacement in body size of Caribbean Anolis lizards. We compared squared-change, weighted squared-change, and linear parsimony reconstruction methods; gradual Brownian motion and speciational models of evolution; and several resolution methods for linear parsimony. We used ancestor reconstruction methods to infer the amount of body size evolution and tested whether evolutionary change in body size was greater on branches of the phylogenetic tree on which a transition from occupying a single-species island to a two-species island occurred. Simulations were used to generate null distributions of reconstructed body size change. The hypothesis of character displacement was tested using Wilcoxon rank-sum tests. When tested against simulated null distributions, all of the reconstruction methods resulted in more significant P-values than when standard statistical tables were used. These results confirm that P-values for tests using ancestor reconstruction methods should be assessed via simulation rather than from standard statistical tables. Linear parsimony can produce an infinite number of most parsimonious reconstructions for continuous characters. We present an example of assessing the robustness of our statistical test by exploring the sample space of possible resolutions. We compare ACCTRAN and DELTRAN resolutions of ambiguous character reconstructions in linear parsimony to the most and least conservative resolutions for our particular hypothesis.

3.
Solow AR, Biometrics 2000, 56(4):1272-1273
A common problem in ecology is to determine whether the diversity of a biological community has changed over time or in response to a disturbance. This involves testing the null hypothesis that diversity is constant against the alternative hypothesis that it has changed. As the power of this test may be low, Fritsch and Hsu (1999, Biometrics 55, 1300-1305) proposed reversing the null and alternative hypotheses. Here, I return to the original formulation and point out that power can be improved by adopting a parametric model for relative abundances and by testing against an ordered alternative.

4.
DiRienzo AG, Biometrics 2003, 59(3):497-504
When testing the null hypothesis that treatment arm-specific survival-time distributions are equal, the log-rank test is asymptotically valid when the distribution of time to censoring is conditionally independent of randomized treatment group given survival time. We introduce a test of the null hypothesis for use when the distribution of time to censoring depends on treatment group and survival time. This test does not make any assumptions regarding independence of censoring time and survival time. Asymptotic validity of this test only requires a consistent estimate of the conditional probability that the survival event is observed given both treatment group and that the survival event occurred before the time of analysis. However, by not making unverifiable assumptions about the data-generating mechanism, there exists a set of possible values of corresponding sample-mean estimates of these probabilities that are consistent with the observed data. Over this subset of the unit square, the proposed test can be calculated and a rejection region identified. A decision on the null hypothesis that accounts for uncertainty due to censoring that may depend on treatment group and survival time can then be made directly. We also present a generalized log-rank test that enables us to provide conditions under which the ordinary log-rank test is asymptotically valid. This generalized test can also be used for testing the null hypothesis when the distribution of censoring depends on treatment group and survival time; however, its use requires semiparametric modeling assumptions. A simulation study and an example using a recent AIDS clinical trial are provided.

5.
The null hypothesis dilemma in the experimental verification of ecological hypotheses
Li Ji, Chinese Journal of Ecology 2016, 27(6):2031-2038
Experimental testing is one of the main ways of verifying ecological hypotheses, but it also faces challenges arising from the null hypothesis. Analyzing Platt's (1964) hypothetico-deductive model, Quinn and Dunham (1983) argued that ecology cannot have null hypotheses that are strictly verifiable by experiment. Fisher's falsificationism and the non-decisive character of the Neyman-Pearson (N-P) framework mean that a statistical null hypothesis cannot be strictly verified; moreover, ecological processes, unlike those of classical physics, involve a null hypothesis H0 (α=1, β=0) together with different alternative hypotheses H1′ (α′=1, β′=0), so ecological null hypotheses are also difficult to verify strictly by experiment. Lowering the P value, choosing the null hypothesis carefully, and applying non-centralized, two-sided testing to non-null hypotheses can each ease this dilemma. Nevertheless, statistical null hypothesis significance testing (NHST) should not be equated with a method of logical proof of causal relationships in ecological hypotheses. Consequently, the results and conclusions of the large body of existing methodological studies and experimental verifications of ecological hypotheses based on NHST are not absolutely logically reliable.

6.
OBJECTIVE: To present an alternative linkage test to the transmission/disequilibrium test (TDT) which is conservative under the null hypothesis and generally more powerful under alternatives. METHODS: The exact distribution of the TDT is examined under both the null hypothesis and relevant alternatives. The TDT is rewritten in an alternate form based on the contributions from each of the three relevant parental mating types. This makes it possible to show that a particular term in the estimate is an exact tie and thus to rewrite the estimate without this term and to replace the multinomial 'variance estimate' of Spielman et al. [Am J Hum Genet 1993;52:506-516] by the binomial variance. RESULTS: The resulting test is shown to be a stratified McNemar test (SMN). The significance level attained by the SMN is shown to be conservative when compared to the asymptotic chi(2) distribution, while the TDT often exceeds the nominal level alpha. Under alternatives, the proposed test is shown to be typically more powerful than the TDT. CONCLUSION: The properties of the TDT as a statistical test have never been fully investigated. The proposed test replaces the heuristically motivated TDT by a formally derived test, which is also computationally simple.
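For context, the classical TDT being replaced here is easy to compute. The following is an illustrative sketch of the standard Spielman et al. statistic (the McNemar chi-square), not the stratified test proposed in this abstract; the example counts are made up:

```python
from scipy.stats import chi2

def tdt(b, c):
    """Classical TDT: among heterozygous parents, b = transmissions of
    the candidate allele to affected offspring, c = non-transmissions.
    The statistic (b - c)^2 / (b + c) is a McNemar chi-square referred
    to a chi-square distribution with 1 df."""
    stat = (b - c) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)

stat, p = tdt(60, 40)  # hypothetical counts: 60 transmissions, 40 non-transmissions
```

With 60 transmissions versus 40 non-transmissions the statistic is (60 − 40)²/100 = 4.0, just past the nominal 0.05 threshold of the asymptotic chi-square reference; the abstract's point is that this asymptotic reference can exceed the nominal level.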

7.
Guan Y, Biometrics 2008, 64(3):800-806
We propose a formal method to test stationarity for spatial point processes. The proposed test statistic is based on the integrated squared deviations of observed counts of events from their means estimated under stationarity. We show that the resulting test statistic converges in distribution to a functional of a two-dimensional Brownian motion. To conduct the test, we compare the calculated statistic with the upper tail critical values of this functional. Our method requires only a weak dependence condition on the process but does not assume any parametric model for it. As a result, it can be applied to a wide class of spatial point process models. We study the efficacy of the test through both simulations and applications to two real data examples that were previously suspected to be nonstationary based on graphical evidence. Our test formally confirmed the suspected nonstationarity for both datasets.

8.
Pop‐Inference is an educational tool designed to help teach hypothesis testing using populations. The application allows for the statistical comparison of demographic parameters among populations. Input data are projection matrices or raw demographic records. Randomization tests are used to compare populations. The tests evaluate the hypothesis that demographic parameters differ among groups of individuals more than would be expected from random allocation of individuals to populations. Confidence intervals for demographic parameters are obtained using the bootstrap. Tests may be global or pairwise. In addition to tests on differences, one‐way life table response experiments (LTRE) are available for random and fixed factors. Planned (a priori) comparisons are possible. Power of comparison tests is evaluated by constructing the distribution of the test statistic when the null hypothesis is true and when it is false. The relationship between power and sample size is explored by evaluating differences among populations at increasing population sizes, while keeping vital rates constant.
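The randomization tests described here follow a generic recipe: recompute the group difference after repeatedly reallocating individuals to populations at random. A minimal sketch of that recipe, using mean individual survival as a stand-in demographic parameter (this is an illustration of the idea, not Pop‐Inference's actual interface):

```python
import numpy as np

def randomization_test(x, y, n_perm=2000, seed=0):
    """Randomization test: is the observed difference in a demographic
    parameter (here, mean survival of individuals) larger than expected
    from random allocation of individuals to two populations?"""
    rng = np.random.default_rng(seed)
    observed = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random reallocation of individuals
        diff = abs(pooled[:x.size].mean() - pooled[x.size:].mean())
        if diff >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # permutation p-value

surv_a = np.array([1] * 18 + [0] * 2)   # hypothetical data: 90% survival
surv_b = np.array([1] * 5 + [0] * 15)   # hypothetical data: 25% survival
p = randomization_test(surv_a, surv_b)
```

The same loop, with the observed statistic fixed and the permuted difference computed under a specified alternative, is also how the power evaluation described above can be constructed.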

9.
A pseudolikelihood method for analyzing interval censored data
We introduce a method based on a pseudolikelihood ratio for estimating the distribution function of the survival time in a mixed-case interval censoring model. In a mixed-case model, an individual is observed a random number of times, and at each time it is recorded whether an event has happened or not. One seeks to estimate the distribution of time to event. We use a Poisson process as the basis of a likelihood function to construct a pseudolikelihood ratio statistic for testing the value of the distribution function at a fixed point, and show that this converges under the null hypothesis to a known limit distribution that can be expressed as a functional of different convex minorants of a two-sided Brownian motion process with parabolic drift. Construction of confidence sets then proceeds by standard inversion. The computation of the confidence sets is simple, requiring the use of the pool-adjacent-violators algorithm or a standard isotonic regression algorithm. We also illustrate the superiority of the proposed method over competitors based on resampling techniques or on the limit distribution of the maximum pseudolikelihood estimator, through simulation studies, and illustrate the different methods on a dataset involving time to HIV seroconversion in a group of haemophiliacs.

10.
An important unresolved problem associated with actomyosin motors is the role of Brownian motion in the process of force generation. On the basis of structural observations of myosins and actins, the widely held lever-arm hypothesis has been proposed, in which proteins are assumed to show sequential structural changes among observed and hypothesized structures to exert mechanical force. An alternative hypothesis, the Brownian motion hypothesis, has been supported by single-molecule experiments and places more emphasis on the role of fluctuating protein motion. In this study, we address the long-standing controversy between the lever-arm hypothesis and the Brownian motion hypothesis through in silico observations of an actomyosin system. We study a system composed of myosin II and actin filament by calculating free-energy landscapes of actin-myosin interactions using the molecular dynamics method and by simulating transitions among dynamically changing free-energy landscapes using the Monte Carlo method. The results obtained by this combined multi-scale calculation show that myosin with inorganic phosphate (Pi) and ADP weakly binds to actin and that after releasing Pi and ADP, myosin moves along the actin filament toward the strong-binding site by exhibiting biased Brownian motion, a behavior consistent with the observed single-molecule behavior of myosin. Conformational flexibility of loops at the actin interface of myosin and the N-terminus of the actin subunit is necessary for the distinct bias in the Brownian motion. Both the 5.5–11 nm displacement due to the biased Brownian motion and the 3–5 nm displacement due to lever-arm swing contribute to the net displacement of myosin. The calculated results further suggest that the recovery stroke of the lever arm plays an important role in enhancing the displacement of myosin through multiple cycles of ATP hydrolysis, suggesting a unified movement mechanism for various members of the myosin family.

11.
Null hypothesis significance testing has been under attack in recent years, partly owing to the arbitrary nature of setting α (the decision-making threshold and probability of Type I error) at a constant value, usually 0.05. If the goal of null hypothesis testing is to present conclusions in which we have the highest possible confidence, then the only logical decision-making threshold is the value that minimizes the probability (or occasionally, cost) of making errors. Setting α to minimize the combination of Type I and Type II error at a critical effect size can easily be accomplished for traditional statistical tests by calculating the α associated with the minimum average of α and β at the critical effect size. This technique also has the flexibility to incorporate prior probabilities of null and alternate hypotheses and/or relative costs of Type I and Type II errors, if known. Using an optimal α results in stronger scientific inferences because it estimates and minimizes both Type I errors and relevant Type II errors for a test. It also results in greater transparency concerning assumptions about relevant effect size(s) and the relative costs of Type I and II errors. By contrast, the use of α = 0.05 results in arbitrary decisions about what effect sizes will likely be considered significant, if real, and results in arbitrary amounts of Type II error for meaningful potential effect sizes. We cannot identify a rationale for continuing to arbitrarily use α = 0.05 for null hypothesis significance tests in any field, when it is possible to determine an optimal α.
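The recipe above, choosing the α that minimizes the average of α and β at a critical effect size, can be sketched for a one-sided one-sample z-test (an illustrative setting with equal weights on the two errors, not the paper's own code):

```python
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def avg_error(alpha, effect, n):
    """Average of Type I and Type II error probabilities for a one-sided
    z-test at critical effect size `effect` (in SD units), sample size n."""
    beta = norm.cdf(norm.ppf(1 - alpha) - effect * n ** 0.5)
    return (alpha + beta) / 2

def optimal_alpha(effect, n):
    """The alpha minimizing the average of alpha and beta, assuming equal
    prior probabilities and equal costs of the two error types."""
    res = minimize_scalar(avg_error, bounds=(1e-6, 0.5),
                          args=(effect, n), method="bounded")
    return res.x

a = optimal_alpha(effect=0.5, n=30)
```

For this setting the optimum lands near α ≈ 0.09 rather than 0.05, because at n = 30 a stricter threshold buys little Type I protection at a large cost in Type II error; weights for unequal priors or costs could be added inside `avg_error`.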

12.
In many scientific problems the purpose of comparing two linear regression models is to demonstrate that they have only negligible differences and so can be regarded as being practically equivalent. The frequently used statistical approach of testing the homogeneity null hypothesis of the two models by using a partial F test is not appropriate for this purpose. In this paper, a simultaneous confidence band is proposed which provides an upper bound on the largest possible difference between the two models, in units of the standard error of the observations, over a given region of the covariates. This is demonstrated to be a more practical method for assessing the equivalence of the two regression models.

13.
Taking into account the effect of constant convective thermal and mass boundary conditions, we present a numerical solution of the 2-D laminar g-jitter mixed convective boundary layer flow of water-based nanofluids. The governing transport equations are converted into non-similar equations using suitable transformations, before being solved numerically by an implicit finite difference method with a quasi-linearization technique. The skin friction decreases with time, buoyancy ratio, and thermophoresis parameters, while it increases with frequency, mixed convection, and Brownian motion parameters. The heat transfer rate decreases with time, Brownian motion, thermophoresis, and diffusion-convection parameters, while it increases with the Reynolds number, frequency, mixed convection, buoyancy ratio, and conduction-convection parameters. The mass transfer rate decreases with time, frequency, thermophoresis, and conduction-convection parameters, while it increases with mixed convection, buoyancy ratio, diffusion-convection, and Brownian motion parameters. To the best of our knowledge, this is the first paper on this topic and hence the results are new. We believe that the results will be useful in designing and operating thermal fluids systems for space materials processing. Special cases of the results have been compared with published results, and an excellent agreement is found.

14.
Noether (1987) proposed a method of sample size determination for the Wilcoxon-Mann-Whitney test. To obtain a sample size formula, he restricted himself to alternatives that differ only slightly from the null hypothesis, so that the unknown variance σ2 of the Mann-Whitney statistic can be approximated by the known variance under the null hypothesis, which depends only on n. This fact is frequently forgotten in statistical practice. In this paper, we compare Noether's large sample solution against an alternative approach based on upper bounds of σ2 which is valid for any alternative. This comparison shows that Noether's approximation is sufficiently reliable for both small and large deviations from the null hypothesis.
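Noether's approximation is commonly stated as N = (z_α + z_β)² / (12·c·(1−c)·(p − ½)²), where p = P(X < Y) under the alternative and c is the fraction of observations in the first sample. A sketch under that commonly cited form (treat the exact formula as an assumption to check against the original paper):

```python
import math
from scipy.stats import norm

def noether_sample_size(p, alpha=0.05, beta=0.20, frac=0.5):
    """Total sample size for a two-sided Wilcoxon-Mann-Whitney test via
    Noether's (1987) approximation, which replaces the unknown variance
    of the Mann-Whitney statistic by its value under the null.
    p = P(X < Y) under the alternative; frac = share in the first group."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(1 - beta)
    n = (z_a + z_b) ** 2 / (12 * frac * (1 - frac) * (p - 0.5) ** 2)
    return math.ceil(n)

n = noether_sample_size(p=0.7)  # about 66 in total for 80% power
```

The formula's reliance on the null variance is exactly the point the abstract makes: for p far from ½ the true variance differs, which is why the upper-bound approach is compared against it.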

15.
A statistical method for comparing matrices of genetic variation and covariation between groups (e.g., species, populations, a single population grown in distinct environments) is proposed. This maximum-likelihood method provides a test of the overall null hypothesis that two covariance component matrices are identical. Moreover, when the overall null hypothesis is rejected, the method provides a framework for isolating the particular components that differ significantly between the groups. Simulation studies reveal that discouragingly large experiments are necessary to obtain acceptable power for comparing genetic covariance component matrices. For example, even in cases of a single trait measured on 900 individuals in a nested design of 100 sires and three dams per sire in each population, the power was only about 0.5 when additive genetic variance differed by a factor of 2.5. Nevertheless, this flexible method makes valid comparison of covariance component matrices possible.

16.
It has been proposed that intron and genome sizes in birds are reduced in comparison with mammals because of the metabolic demands of flight. To test this hypothesis, we examined the sizes of 14 introns in a nonflying relative of birds, the American alligator (Alligator mississippiensis), and in 19 flighted and flightless birds in 12 taxonomic orders. Our results indicate that a substantial fraction (66%) of the reduction in intron size as well as in genome size had already occurred in nonflying archosaurs. Using phylogenetically independent contrasts, we found that the proposed inverse correlation of genome size and basal metabolic rate (BMR) is significant among amniotes and archosaurs, whereas intron and genome size variation within birds showed no significant correlation with BMR. We show statistically that the distribution of genome sizes in birds and mammals is underdispersed compared with the Brownian motion model and consistent with strong stabilizing selection; that genome size differences between vertebrate clades are overdispersed and punctuational; and that evolution of BMR and avian intron size is consistent with Brownian motion. These results suggest that the contrast between genome size/BMR and intron size/BMR correlations may be a consequence of different intensities of selection for these traits and that we should not expect changes in intron size to be significantly associated with metabolically costly behaviors such as flight.

17.
The statistical analysis of neuronal spike trains by models of point processes often relies on the assumption of constant process parameters. However, it is a well-known problem that the parameters of empirical spike trains can be highly variable, such as for example the firing rate. In order to test the null hypothesis of a constant rate and to estimate the change points, a Multiple Filter Test (MFT) and a corresponding algorithm (MFA) have been proposed that can be applied under the assumption of independent inter-spike intervals (ISIs). As empirical spike trains often show weak dependencies in the correlation structure of ISIs, we extend the MFT here to point processes associated with short-range dependencies. By specifically estimating serial dependencies in the test statistic, we show that the new MFT can be applied to a variety of empirical firing patterns, including positive and negative serial correlations as well as tonic and bursty firing. The new MFT is applied to a dataset of empirical spike trains with serial correlations, and simulations show improved performance against methods that assume independence. In the case of positive correlations, our new MFT is necessary to reduce the number of false positives, which can be highly inflated when independence is falsely assumed. For the frequent case of negative correlations, the new MFT shows an improved detection probability of change points and thus a higher potential for signal extraction from noisy spike trains.

18.
Gillen DL, Emerson SS, Biometrics 2005, 61(2):546-551
Group sequential designs are often used for periodically assessing treatment efficacy during the course of a clinical trial. Following a group sequential test, P-values computed under the assumption that the data were gathered according to a fixed sample design are no longer uniformly distributed under the null hypothesis of no treatment effect. Various sample space orderings have been proposed for computing proper P-values following a group sequential test. Although many of the proposed orderings have been compared in the setting of time-invariant treatment effects, little attention has been given to their performance when the effect of treatment within an individual varies over time. Our interest here is to compare two of the most commonly used methods for computing proper P-values following a group sequential test, based upon the analysis time (AT) and Z-statistic orderings, with respect to resulting power functions when treatment effects on survival are delayed. Power under the AT ordering is shown to be heavily influenced by the presence of a delayed treatment effect, while power functions corresponding to the Z-statistic ordering remain robust under time-varying treatment effects.

19.
Statistically nonsignificant (p > .05) results from a null hypothesis significance test (NHST) are often mistakenly interpreted as evidence that the null hypothesis is true, that is, that there is "no effect" or "no difference." However, many of these results occur because the study had low statistical power to detect an effect. Power below 50% is common, in which case a result of no statistical significance is more likely to be incorrect than correct. The inference of "no effect" is not valid even if power is high. NHST assumes that the null hypothesis is true; p is the probability of the data under the assumption that there is no effect. A statistical test cannot confirm what it assumes. These incorrect statistical inferences could be eliminated if decisions based on p values were replaced by a biological evaluation of effect sizes and their confidence intervals. For a single study, the observed effect size is the best estimate of the population effect size, regardless of the p value. Unlike p values, confidence intervals provide information about the precision of the observed effect. In the biomedical and pharmacology literature, methods have been developed to evaluate whether effects are "equivalent," rather than zero, as tested with NHST. These methods could be used by biological anthropologists to evaluate the presence or absence of meaningful biological effects. Most of what appears to be known about no difference or no effect between sexes, between populations, between treatments, and other circumstances in the biological anthropology literature is based on invalid statistical inference.
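The equivalence methods alluded to in the biomedical literature are usually two one-sided tests (TOST): conclude equivalence only if the difference is shown to be both above −δ and below +δ for a prespecified margin δ. A minimal sketch for two independent means, using a simple pooled degrees-of-freedom approximation (the margin and data below are made up for illustration):

```python
import numpy as np
from scipy import stats

def tost_p(x, y, margin):
    """Two one-sided tests (TOST) for equivalence of two means within
    +/- margin.  The reported p is the larger of the two one-sided
    p-values; p < alpha means both tests reject, i.e. equivalence."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    diff = x.mean() - y.mean()
    se = np.sqrt(x.var(ddof=1) / x.size + y.var(ddof=1) / y.size)
    df = x.size + y.size - 2            # simple approximation to the df
    p_low = stats.t.sf((diff + margin) / se, df)    # H0: diff <= -margin
    p_high = stats.t.cdf((diff - margin) / se, df)  # H0: diff >= +margin
    return max(p_low, p_high)

x = [1.0, 1.1, 0.9, 1.05, 0.95] * 4   # hypothetical measurements
y = [1.01, 1.11, 0.91, 1.06, 0.96] * 4
p_equiv = tost_p(x, y, margin=0.5)    # wide margin: groups are equivalent
```

Note the reversal relative to NHST: here rejecting the null supports "no meaningful difference," and the margin must be justified biologically, which forces the explicit effect-size reasoning the passage calls for.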

20.
Pleurocarpous mosses, characterized by lateral female gametangia and highly branched, interwoven stems, comprise three orders and some 5000 species, or almost half of all moss diversity. Recent phylogenetic analyses resolve the Ptychomniales as sister to the Hypnales plus Hookeriales. Species richness is highly asymmetric with approximately 100 Ptychomniales, 750 Hookeriales, and 4400 Hypnales. Chloroplast DNA (cpDNA) sequences were obtained to compare partitioning of molecular diversity among the orders with estimates of species richness, and to test the hypothesis that either the Hookeriales or Hypnales underwent a period (or periods) of exceptionally rapid diversification. Levels of biodiversity were quantified using explicitly historical "phylogenetic diversity" and non-historical estimates of standing sequence diversity. Diversification rates were visualized using lineage-through-time (LTT) plots, and statistical tests of alternative diversification models were performed using the methods of Paradis (1997). The effects of incomplete sampling on the shape of LTT plots and performance of statistical tests were investigated using simulated phylogenies with incomplete sampling. Despite a much larger number of accepted species, the Hypnales contain lower levels of (cpDNA) biodiversity than their sister group, the Hookeriales, based on all molecular measures. Simulations confirm previous results that incomplete sampling yields diversification patterns that appear to reflect a decreasing rate through time, even when the true phylogenies were simulated with constant rates. Comparisons between simulated results and empirical data indicate that a constant rate of diversification cannot be rejected for the Hookeriales. The Hypnales, however, appear to have undergone a period of exceptionally rapid diversification for the earliest 20% of their history.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号