Similar Articles
20 similar articles found (search time: 0 ms)
1.
    
In vertebrates, large body size is often a key diagnostic feature of species threatened with extinction. However, in amphibians the link between body size and extinction risk is highly uncertain, with previous studies suggesting positive, negative, U-shaped, or no relationship. Part of the reason for this uncertainty is ‘researcher degrees of freedom’: the subjectivity and selectivity in choices associated with specifying and fitting models. Here, I clarify the size–threat association in amphibians using Specification Curve Analysis, an analytical approach from the social sciences that attempts to minimize this problem by complete mapping of model space. I find strong support for prevailing negative associations between body size and threat status, the opposite of patterns typical in other vertebrates. This pattern is largely explained by smaller species having smaller geographic ranges, but smaller amphibian species also appear to lack some of the life-history advantages (e.g. higher reproductive output) that are often assumed to ‘protect’ small species in other taxa. These results highlight the need for a renewed conservation focus on the smallest species of the world's most threatened class of vertebrates, as aquatic habitats become increasingly degraded by human activity.
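As a rough illustration of the specification-curve idea described above (not the author's code or data), the sketch below fits every combination of a small set of control covariates in a logistic threat-status model and summarizes the distribution of the body-size coefficient; the variable names and simulated data are hypothetical placeholders.

```python
# Minimal specification-curve sketch: refit the body-size effect under every
# combination of control covariates and summarize the resulting coefficients.
from itertools import combinations

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "log_body_size": rng.normal(size=n),
    "log_range_size": rng.normal(size=n),
    "log_clutch_size": rng.normal(size=n),
    "aquatic": rng.integers(0, 2, size=n),
})
# Simulated threat status: smaller species are more likely to be threatened.
logit = -0.8 * df["log_body_size"] - 0.5 * df["log_range_size"]
df["threatened"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

controls = ["log_range_size", "log_clutch_size", "aquatic"]
coefs = []
for k in range(len(controls) + 1):
    for subset in combinations(controls, k):
        formula = "threatened ~ log_body_size"
        if subset:
            formula += " + " + " + ".join(subset)
        fit = smf.logit(formula, data=df).fit(disp=False)
        coefs.append(fit.params["log_body_size"])

print(f"{len(coefs)} specifications; "
      f"median body-size coefficient = {np.median(coefs):.2f}, "
      f"share negative = {np.mean(np.array(coefs) < 0):.2f}")
```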

2.
    
This paper proposes a novel approach to confidence interval estimation and hypothesis testing for the common mean of several log-normal populations using the concept of the generalized variable. Simulation studies demonstrate that the proposed approach can provide confidence intervals with satisfactory coverage probabilities and can perform hypothesis testing with satisfactory type I error control even at small sample sizes. Overall, it is superior to the large-sample approach. The proposed method is illustrated using two examples.
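To make the generalized-variable (generalized pivotal quantity) idea concrete, here is a minimal sketch for a single log-normal population; the paper's method extends this construction to the common mean of several populations. The simulated data and sample size are illustrative only.

```python
# Generalized confidence interval for the mean of one log-normal population,
# built from generalized pivotal quantities for mu and sigma^2.
import numpy as np

rng = np.random.default_rng(1)
x = rng.lognormal(mean=1.0, sigma=0.6, size=15)   # small sample
y = np.log(x)
n, ybar, s2 = len(y), y.mean(), y.var(ddof=1)

B = 20_000
z = rng.standard_normal(B)
chi2 = rng.chisquare(n - 1, size=B)

# Generalized pivotal quantities for sigma^2, mu and the log-normal mean
T_sigma2 = (n - 1) * s2 / chi2
T_mu = ybar - z * np.sqrt(T_sigma2 / n)
T_mean = np.exp(T_mu + T_sigma2 / 2)

lo, hi = np.percentile(T_mean, [2.5, 97.5])
print(f"95% generalized CI for E[X]: ({lo:.2f}, {hi:.2f}); "
      f"true mean = {np.exp(1.0 + 0.6**2 / 2):.2f}")
```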

3.
    
The accelerated failure time model is presented as an alternative to the proportional hazards model in the analysis of survival data. We investigate the effect of covariate omission when a Weibull accelerated failure time model is applied. In an uncensored setting, the asymptotic bias of the treatment effect is theoretically zero when important covariates are omitted; however, the asymptotic variance estimator of the treatment effect can be biased, and the size of the Wald test for the treatment effect is then likely to exceed the nominal level. In some cases, the test size can be more than twice the nominal level. In a simulation study, in both censored and uncensored settings, the Type I error of the test for the treatment effect was inflated when prognostic covariates were omitted. This work cautions against careless use of the accelerated failure time model. We recommend using the robust sandwich variance estimator to avoid inflation of the Type I error in the accelerated failure time model, although the robust variance is not commonly used in survival data analyses.

4.
Evolutionary biologists seek to explain the origin and maintenance of phenotypes, and a substantial portion of this research is accomplished by thorough study of individual species. For instance, many researchers study individual species to understand the evolution of ornamental traits which appear to be products of sexual selection. I explored our understanding of sexual ornaments in a well-studied vertebrate species that may serve as a case study for research programs in evolutionary biology. I attempted to locate all published papers examining plumage colour and variables related to sexual selection hypotheses in a common European songbird, the blue tit (Cyanistes caeruleus). Researchers have estimated over 1200 statistical relationships with plumage colour of blue tits in 52 studies. However, of the approximately 1000 main-effect relationships from the 48 studies that are candidates for inclusion in this meta-analysis, more than 400 were reported without details of strength and direction. Circumstantial evidence suggests that an unknown number of other estimated effects remain unpublished. Missing information is a substantial barrier to interpretation of these papers and to meta-analytic synthesis. Examination and analysis of funnel plots indicated that unpublished effects may be a biased sample of all effects, especially for comparisons of plumage colour to age and individual quality, and possibly also to measures of mate choice. Further, type I error was likely elevated by the large number of statistical comparisons evaluated, the frequent use of iterative model-building procedures, and a willingness to interpret a wide variety of results as support for a hypothesis. Type I errors were made more problematic because blue tit plumage researchers have only rarely attempted to replicate important findings in their own work or that of others. Replication is essential to drawing robust scientific conclusions, especially in probabilistic systems with moderate to weak effects or a likelihood of bias. Last, researchers studying blue tit plumage have often developed ad hoc explanations for deviations of results from their predictions. Revising hypotheses in light of data is appropriate, but these revised hypotheses were rarely tested with new data. The only highly robust conclusion supported by meta-analysis is that male blue tits have plumage that reflects more light in the ultraviolet and yellow wavelengths than the plumage of females. Various other effects, including condition-dependence of plumage colour expression and a tendency for females to adjust the sex ratio of their offspring in response to male colour, remain uncertain. These obstacles to progress in the blue tit plumage literature are likely common in evolutionary biology, and so I recommend changes to incentive structures which may improve progress towards scientific understanding in this discipline.

5.
Environmental management decisions are prone to expensive mistakes if they are triggered by hypothesis tests using the conventional Type I error rate (α) of 0.05. We derive optimal α-levels for decision-making by minimizing a cost function that specifies the overall cost of monitoring and management. For an economically valuable koala population, we show that a decision based on α = 0.05 carries an expected cost over $5 million greater than the optimal decision. For a species of such value, there is never any benefit in guarding against the spurious detection of declines, and therefore management should proceed directly to recovery action. This result holds in most circumstances where the species' value substantially exceeds its recovery costs. For species of lower economic value, we show that the conventional α-level of 0.05 rarely approximates the optimal decision-making threshold. This analysis supports calls for reversing the statistical ‘burden of proof’ in environmental decision-making when the cost of Type II errors is relatively high.
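The cost-minimizing choice of α can be illustrated with a toy calculation: for a one-sided z-test, compute the expected cost of false alarms and missed declines at each α and pick the minimum. The costs, prior probability, effect size, and sample size below are invented for illustration and are not the koala case-study values.

```python
# Choose alpha by minimizing total expected cost instead of fixing alpha = 0.05.
import numpy as np
from scipy import stats

effect = 0.5          # standardized decline if the population really is declining
n = 30                # monitoring sample size
p_decline = 0.5       # prior probability that a decline is occurring
cost_type_I = 1e5     # cost of intervening when there is no decline (false alarm)
cost_type_II = 5e6    # cost of missing a real decline

alphas = np.linspace(1e-4, 0.999, 2000)
# Power of a one-sided z-test at each alpha
power = stats.norm.sf(stats.norm.isf(alphas) - effect * np.sqrt(n))
expected_cost = ((1 - p_decline) * alphas * cost_type_I
                 + p_decline * (1 - power) * cost_type_II)

best = alphas[np.argmin(expected_cost)]
print(f"cost-minimizing alpha ~ {best:.3f} (vs. the conventional 0.05)")
```

When the cost of a missed decline dwarfs the cost of a false alarm, the minimizing α is pushed toward 1, which is the abstract's point that management should proceed directly to recovery action.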

6.
7.
Survival data consisting of independent sets of correlated failure times may arise in many situations. For example, we may take repeated observations of the failure time of interest from each patient, observe failure times on siblings, or consider the failure times of littermates in toxicological experiments. Because failure times taken on the same patient, on related family members, or from the same litter are likely to be correlated, use of the classical log-rank test in these situations can be quite misleading with respect to Type I error. To address this concern, this paper develops two closed-form asymptotic summary tests that account for the intraclass correlation between the failure times within patients or units. In fact, one of these two tests includes the classical log-rank test as a special case when the intraclass correlation equals 0. Furthermore, to evaluate the finite-sample performance of the two tests developed here, this paper applies Monte Carlo simulation and notes that they can perform quite well in a variety of situations considered here.
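A quick simulation (not the paper's tests) shows the problem the authors address: when group membership is assigned at the cluster level and failure times share a gamma frailty within clusters, the naive log-rank test rejects a true null well above the nominal 5%. The `lifelines` package is assumed to be available; all parameter values are illustrative.

```python
# Empirical Type I error of the classical log-rank test with clustered data.
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
n_clusters, cluster_size, n_sim, alpha = 40, 5, 500, 0.05

rejections = 0
for _ in range(n_sim):
    # Shared gamma frailty per cluster induces within-cluster correlation.
    frailty = rng.gamma(shape=2.0, scale=0.5, size=n_clusters)   # mean 1
    # "Treatment" assigned at the cluster level, with no real effect.
    cluster_group = rng.integers(0, 2, size=n_clusters)
    durations = np.concatenate(
        [rng.exponential(1.0 / frailty[c], size=cluster_size)
         for c in range(n_clusters)])
    groups = np.repeat(cluster_group, cluster_size)
    res = logrank_test(durations[groups == 0], durations[groups == 1])
    rejections += res.p_value < alpha

print(f"empirical Type I error of the naive log-rank test: "
      f"{rejections / n_sim:.3f} (nominal 0.05)")
```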

8.
    
In a randomized clinical trial (RCT), noncompliance with an assigned treatment can occur because of serious side effects, while outcomes may be missing because patients withdraw or are lost to follow-up. To avoid a loss of power to detect a given risk difference (RD) of interest between two treatments, it is essential to incorporate the information on noncompliance and missing outcomes into the sample size calculation. Under the compound exclusion restriction model proposed elsewhere, we first derive the maximum likelihood estimator (MLE) of the RD among compliers between two treatments for an RCT with noncompliance and missing outcomes, together with its asymptotic variance in closed form. Based on the MLE with the tanh⁻¹(x) transformation, we develop an asymptotic test procedure for testing equality of two treatment effects among compliers. We further derive a sample size calculation formula accounting for both noncompliance and missing outcomes for a desired power 1 − β at a nominal α-level. To evaluate the performance of the test procedure and the accuracy of the sample size calculation formula, we employ Monte Carlo simulation to calculate the estimated Type I error and power of the proposed test procedure at the resulting sample size in a variety of situations. We find that both the test procedure and the sample size formula developed here perform well. Finally, we include a discussion of the effects of various parameters, including the proportion of compliers, the probability of non-missing outcomes, and the ratio of sample size allocation, on the minimum required sample size.
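As a back-of-envelope sketch only (not the paper's MLE-based formula), one can start from the usual sample size for detecting a difference between two proportions and inflate it for noncompliance, which dilutes the complier effect, and for missing outcomes. All input values below are illustrative.

```python
# Rough sample-size inflation for noncompliance and missing outcomes.
import numpy as np
from scipy import stats

p1, p2 = 0.30, 0.45          # outcome probabilities among compliers
alpha, power = 0.05, 0.80
prop_compliers = 0.80        # assumed proportion of compliers
prob_observed = 0.90         # assumed probability the outcome is not missing

z_a = stats.norm.isf(alpha / 2)
z_b = stats.norm.isf(1 - power)
rd = p2 - p1
n_ideal = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / rd ** 2

# Noncompliance shrinks the intention-to-treat risk difference roughly by the
# complier proportion, so n scales with 1/prop_compliers^2; missing outcomes
# scale it further by 1/prob_observed.
n_adjusted = n_ideal / (prop_compliers ** 2 * prob_observed)
print(f"per-arm n without adjustment: {np.ceil(n_ideal):.0f}; "
      f"with adjustment: {np.ceil(n_adjusted):.0f}")
```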

9.
10.
When a new diagnostic procedure is developed, it is important to assess whether the diagnostic accuracy of the new procedure is different from that of the standard procedure. For paired-sample ordinal data, this paper develops two test statistics for testing equality of the diagnostic accuracy between two procedures without assuming any parametric models. One is derived on the basis of the probability of correctly identifying the case for a randomly selected pair of a case and a non-case over all possible cutoff points, and the other is derived on the basis of the sensitivity and specificity directly. To illustrate the practical use of the proposed test procedures, this paper includes an example regarding the use of digitized and plain films for screening breast cancer. This paper also applies Monte Carlo simulation to evaluate the finite sample performance of the two statistics developed here and notes that they can perform well in a variety of situations.
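As a generic stand-in for such a nonparametric comparison (not the paper's test statistics), the sketch below estimates each procedure's area under the ROC curve from paired ordinal ratings and bootstraps subjects, preserving the pairing, to obtain an interval for the difference. The simulated ratings and rating scale are purely illustrative.

```python
# Paired bootstrap comparison of two AUCs estimated from ordinal ratings.
import numpy as np

def auc(ratings, truth):
    """Probability a random case outranks a random non-case (ties count 1/2)."""
    cases, controls = ratings[truth == 1], ratings[truth == 0]
    diff = cases[:, None] - controls[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

rng = np.random.default_rng(5)
n = 200
truth = rng.integers(0, 2, size=n)
# Ordinal 1-5 ratings from two procedures applied to the same subjects
latent = truth + rng.normal(0, 1, size=n)
proc_a = np.clip(np.round(latent + rng.normal(0, 0.5, n)) + 3, 1, 5)
proc_b = np.clip(np.round(0.8 * latent + rng.normal(0, 0.5, n)) + 3, 1, 5)

obs_diff = auc(proc_a, truth) - auc(proc_b, truth)
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)   # resample subjects, keeping the pairing
    boot.append(auc(proc_a[idx], truth[idx]) - auc(proc_b[idx], truth[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC difference = {obs_diff:.3f}, 95% bootstrap CI ({lo:.3f}, {hi:.3f})")
```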

11.
    
When establishing a treatment in clinical trials, it is important to evaluate both effectiveness and toxicity. In phase II clinical trials, multinomial data are collected in m-stage designs, especially the two-stage (m = 2) design. Exact tests on two proportions — the response rate and the nontoxicity rate — should be employed because of the limited sample sizes. However, existing tests use certain parameter configurations at the boundary of the null hypothesis space to determine rejection regions without showing that the maximum Type I error rate is achieved at the boundary of the null hypothesis. In this paper, we show that the power function for each test in a large family of tests is nondecreasing in both the response rate and the nontoxicity rate; identify the parameter configurations at which the maximum Type I error rate and the minimum power are achieved and derive level-α tests; provide optimal two-stage designs with the least expected total sample size and the optimization algorithm; and extend the results to designs with more than two stages. Some R code is given in the Supporting Information.

12.
Conditional exact tests of the homogeneity of two binomial proportions are often used in small samples, because exact tests are guaranteed to keep the size at or below the nominal level. Fisher's exact test, the exact chi-squared test, and the exact likelihood ratio test are popular and can be implemented in the software StatXact. In this paper we investigate which test is best in small samples in terms of unconditional exact power. When the sample sizes are equal, it is proved that the three tests produce the same unconditional exact power. A symmetry of the unconditional exact power is also found. When the sample sizes are unequal, the unconditional exact powers of the three tests are computed and compared. In most cases Fisher's exact test turns out to be best, but we characterize some cases in which the exact likelihood ratio test has the highest unconditional exact power.
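Unconditional exact power is straightforward to compute by enumeration for small samples: evaluate the test on every possible 2×2 table for the fixed group sizes and weight each table by its binomial probability under the assumed proportions. The helper below does this for Fisher's exact test via SciPy; the sample sizes and proportions are just examples.

```python
# Unconditional exact size/power of Fisher's exact test by full enumeration.
from scipy import stats

def fisher_unconditional_power(n1, n2, p1, p2, alpha=0.05):
    power = 0.0
    for x1 in range(n1 + 1):
        for x2 in range(n2 + 1):
            table = [[x1, n1 - x1], [x2, n2 - x2]]
            _, pval = stats.fisher_exact(table, alternative="two-sided")
            if pval <= alpha:
                power += (stats.binom.pmf(x1, n1, p1)
                          * stats.binom.pmf(x2, n2, p2))
    return power

# Size under the null (p1 = p2) and power under an alternative
print(f"size:  {fisher_unconditional_power(15, 15, 0.3, 0.3):.4f}")
print(f"power: {fisher_unconditional_power(15, 15, 0.2, 0.6):.4f}")
```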

13.
    
The Mantel test, based on comparisons of distance matrices, is commonly employed in comparative biology, but its statistical properties in this context are unknown. Here, we evaluate the performance of the Mantel test for two applications in comparative biology: testing for phylogenetic signal, and testing for an evolutionary correlation between two characters. We find that the Mantel test has poor performance compared to alternative methods, including low power and, under some circumstances, inflated type I error. We identify a remedy for the inflated type I error of three-way Mantel tests using phylogenetic permutations; however, this test still has considerably lower power than independent contrasts. We recommend that use of the Mantel test should be restricted to cases in which data can only be expressed as pairwise distances among taxa.
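For reference, a bare-bones two-matrix Mantel test looks like the sketch below: correlate the off-diagonal entries of the two distance matrices and compare against correlations obtained by permuting the taxon labels of one matrix. This is the generic version, not the three-way phylogenetic-permutation variant the authors examine; the toy matrices are random.

```python
# Simple Mantel test with a label-permutation p-value.
import numpy as np

def mantel_test(D1, D2, n_perm=999, seed=0):
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(D1, k=1)
    r_obs = np.corrcoef(D1[iu], D2[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(D1.shape[0])
        r_perm = np.corrcoef(D1[perm][:, perm][iu], D2[iu])[0, 1]
        count += abs(r_perm) >= abs(r_obs)
    return r_obs, (count + 1) / (n_perm + 1)

# Toy example: two random symmetric distance matrices for 12 taxa
rng = np.random.default_rng(3)
A = rng.random((12, 12)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)
B = rng.random((12, 12)); B = (B + B.T) / 2; np.fill_diagonal(B, 0)
r, p = mantel_test(A, B)
print(f"Mantel r = {r:.3f}, permutation p = {p:.3f}")
```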

14.
15.
16.
    
We propose a multiple comparison procedure to identify the minimum effective dose level by sequentially comparing each dose level with the zero-dose level in a dose-finding test. If we can find the minimum effective dose level at an early stage of the sequential test, the procedure can be terminated after only a few group observations up to that dose level. The procedure is therefore attractive from an economic point of view when obtaining observations is costly. In the procedure, we present an integral formula to determine the critical values that satisfy a predefined Type I familywise error rate. Furthermore, we show how to determine the sample size required to guarantee the power of the test in the procedure. We compare the power of the test and the required sample size for various configurations of the population means in simulation studies and apply our sequential procedure to a dose-response test in a case study.
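A much-simplified stand-in for such a procedure is sketched below: dose groups are compared with the zero-dose control in increasing order, stopping at the first significant dose, with Bonferroni-adjusted one-sided t-tests used in place of the paper's integral-based critical values. The group means, sample size, and significance level are all illustrative.

```python
# Step-up search for the minimum effective dose (MED) against a zero-dose control.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
true_means = [0.0, 0.1, 0.8, 1.2, 1.5]      # dose 0 (control) .. dose 4
groups = [rng.normal(m, 1.0, size=20) for m in true_means]
control = groups[0]
alpha = 0.05 / (len(groups) - 1)             # Bonferroni over 4 comparisons

med = None
for dose in range(1, len(groups)):
    t, p = stats.ttest_ind(groups[dose], control, alternative="greater")
    print(f"dose {dose}: one-sided p = {p:.4f}")
    if p <= alpha:
        med = dose
        break                                 # stop sampling higher doses
print(f"estimated minimum effective dose: {med}")
```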

17.
Question: When multiple observers record the same spatial units of alpine vegetation, how much variation is there in the records, and what are the consequences of this variation for monitoring schemes intended to detect changes? Location: One test summit in Switzerland (Alps) and one test summit in Scotland (Cairngorm Mountains). Method: Eight observers used the GLORIA protocols for species composition and visual percentage cover estimates on large summit sections (>100 m²) and for species composition and frequency in nested quadrats (1 m²). Results: The multiple records from the same spatial unit for species composition and species cover showed considerable variation in the two countries. Estimates of pseudo-turnover of composition and coefficients of variation of cover estimates for vascular plant species in 1 m × 1 m quadrats showed less variation than in previously published reports, whereas our results for larger sections were broadly in line with previous reports. In Scotland, estimates for bryophytes and lichens were more variable than for vascular plants. Conclusions: Statistical power calculations indicated that unless large numbers of plots were used, changes in cover or frequency were only likely to be detected for abundant species (exceeding 10% cover) or if relative changes were large (50% or more). Lower variation could be achieved with the point method and with larger numbers of small plots. However, as summits often differ strongly from each other, supplementary summits cannot be considered a way of increasing statistical power without introducing a supplementary component of variance into the analysis and hence into the power calculations.
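The kind of power calculation mentioned in the conclusions can be approximated with a standard two-sample computation: translate the relative change to be detected into a standardized effect size given the between-plot and between-observer variability, then solve for the number of plots. The coefficient of variation and cover values below are placeholders, not the study's estimates.

```python
# Rough plot-number calculation for detecting a relative change in cover.
import numpy as np
from statsmodels.stats.power import TTestIndPower

mean_cover = 10.0            # % cover of the species
relative_change = 0.5        # 50% relative change to detect
cv = 0.6                     # assumed coefficient of variation of cover estimates
sd = cv * mean_cover
effect_size = (relative_change * mean_cover) / sd   # Cohen's d

n = TTestIndPower().solve_power(effect_size=effect_size, alpha=0.05, power=0.8)
print(f"plots needed per survey: about {int(np.ceil(n))}")
```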

18.
    
Intraclass correlation (ICC) is an established tool to assess inter-rater reliability. In a seminal paper published in 1979, Shrout and Fleiss considered three statistical models for inter-rater reliability data with a balanced design. In their first two models, an infinite population of raters was considered, whereas in their third model, the raters in the sample were considered to be the whole population of raters. In the present paper, we show that the two distinct estimates of ICC developed for the first two models can both be applied to the third model and we discuss their different interpretations in this context.
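For concreteness, the single-rating estimators usually written ICC(2,1) and ICC(3,1) — raters treated as a random sample versus as the whole population — can both be computed from the same two-way ANOVA mean squares, as in the sketch below. The 6 × 4 rating matrix is a small illustrative example, not data from the paper.

```python
# ICC(2,1) and ICC(3,1) from two-way ANOVA mean squares (Shrout-Fleiss style).
import numpy as np

def icc_2_and_3(ratings):
    """ratings: (n subjects) x (k raters) array with no missing values."""
    n, k = ratings.shape
    grand = ratings.mean()
    subj_means = ratings.mean(axis=1)
    rater_means = ratings.mean(axis=0)

    ss_subj = k * np.sum((subj_means - grand) ** 2)
    ss_rater = n * np.sum((rater_means - grand) ** 2)
    ss_total = np.sum((ratings - grand) ** 2)
    ss_err = ss_total - ss_subj - ss_rater

    msr = ss_subj / (n - 1)              # between-subject mean square
    msc = ss_rater / (k - 1)             # between-rater mean square
    mse = ss_err / ((n - 1) * (k - 1))   # residual mean square

    icc2 = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    icc3 = (msr - mse) / (msr + (k - 1) * mse)
    return icc2, icc3

data = np.array([[9, 2, 5, 8],
                 [6, 1, 3, 2],
                 [8, 4, 6, 8],
                 [7, 1, 2, 6],
                 [10, 5, 6, 9],
                 [6, 2, 4, 7]], dtype=float)
print("ICC(2,1), ICC(3,1):", icc_2_and_3(data))
```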

19.
    
Meta-analysis seeks to combine the results of several experiments in order to improve the accuracy of decisions. It is common to use a test for homogeneity to determine if the results of the several experiments are sufficiently similar to warrant their combination into an overall result. Cochran's Q statistic is frequently used for this homogeneity test. It is often assumed that Q follows a chi-square distribution under the null hypothesis of homogeneity, but it has long been known that this asymptotic distribution for Q is not accurate for moderate sample sizes. Here, we present an expansion for the mean of Q under the null hypothesis that is valid when the effect and the weight for each study depend on a single parameter, but for which neither normality nor independence of the effect and weight estimators is needed. This expansion represents an order O(1/n) correction to the usual chi-square moment in the one-parameter case. We apply the result to the homogeneity test for meta-analyses in which the effects are measured by the standardized mean difference (Cohen's d-statistic). In this situation, we recommend approximating the null distribution of Q by a chi-square distribution with fractional degrees of freedom that are estimated from the data using our expansion for the mean of Q. The resulting homogeneity test is substantially more accurate than the currently used test. We provide a program available at the Paper Information link at the Biometrics website http://www.biometrics.tibs.org for making the necessary calculations.
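For orientation, here is how Cochran's Q is computed for standardized mean differences with inverse-variance weights and compared against the conventional chi-square reference; the paper's fractional-degrees-of-freedom correction is not reproduced here, and the per-study values are made up for illustration.

```python
# Cochran's Q for standardized mean differences, with the usual chi-square p-value.
import numpy as np
from scipy import stats

# Illustrative per-study values: Cohen's d and per-group sample sizes
d = np.array([0.35, 0.10, 0.48, 0.22, 0.60])
n1 = np.array([20, 35, 18, 40, 25])
n2 = np.array([22, 30, 20, 38, 25])

# Large-sample variance of d and inverse-variance weights
var_d = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))
w = 1.0 / var_d

d_bar = np.sum(w * d) / np.sum(w)            # fixed-effect pooled estimate
Q = np.sum(w * (d - d_bar) ** 2)             # Cochran's Q
k = len(d)
p_naive = stats.chi2.sf(Q, df=k - 1)         # conventional chi-square reference
print(f"Q = {Q:.2f}, naive chi-square({k - 1}) p = {p_naive:.3f}")
```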

20.
    
HeLa cells were synchronized at late G1, early S, and late S phase of the cell cycle by nocodazole treatment. The cells were permeabilized with Triton X-100, digested with DNase I, and extracted with 0.2 M ammonium sulfate to remove the digested chromatin. DNA was isolated from the residual chromatin attached to the nuclear matrix, digested with HindIII, and subjected to hybridization with a [32P]-labeled probe located upstream of the core region of the human beta-globin replication origin. The hybridization pattern revealed the existence of a DNase I-sensitive site in the core region of the beta-globin replicator. The results suggest that association with the nuclear matrix induces alterations in the chromatin structure of the origin of replication, representing a more open chromatin configuration.
