Found 20 similar records; search took 8 ms
1.
Increasingly, environmental managers attempt to incorporate precautionary principles into decision making. In any quantitative analysis of impacts, precaution is closely related to the power of the analysis to detect an impact. Sampling designs to detect impacts are, however, complex because of natural spatial and temporal variability and the intrinsic nature of the statistical interactions that define impacts. Here, pulse and press responses and impacts that affect time courses (temporal variance) were modelled to determine the influence of increasing temporal replication: sampling more times in each of several longer periods before and again after an impact. The influence of spatial replication on power was investigated by increasing the number of control or reference locations and the number of replicate sample units at each time and place of sampling. From numerous scenarios of impacts, with or without natural spatial and temporal interactions (i.e. those not caused by an impact), general recommendations are possible. Detecting press impacts requires maximal numbers of control locations. Shorter-term pulse impacts are best detected when the number of periods sampled is maximized. Impacts causing changes in temporal variance are most likely to be detected by sampling with the greatest possible number of periods or times within periods. To allow precautionary decision making, the type of predicted impact should be specified along with its magnitude and duration. Only then can sampling be designed to be powerful, thereby allowing precautionary concepts to be invoked.
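A minimal Monte Carlo sketch of the kind of power calculation the abstract describes, assuming a simple normal-error model in which the impact location is contrasted with the mean of the control locations (all function names and parameter values are illustrative, not taken from the paper):

```python
import math
import random
import statistics

def welch_p(a, b):
    """Two-sided p-value for a difference in means, using a normal
    approximation to the Welch statistic (adequate for a quick power sketch)."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def power_press_impact(n_controls, n_times, effect, sd=1.0, alpha=0.05,
                       reps=2000, seed=1):
    """Monte Carlo power to detect a press impact of size `effect`: at each
    sampling time the impact location is compared with the mean of the
    control locations, before versus after the impact."""
    rng = random.Random(seed)
    # sd of (impact value - mean of control values) at one sampling time
    s = sd * math.sqrt(1 + 1 / n_controls)
    hits = 0
    for _ in range(reps):
        before = [rng.gauss(0, s) for _ in range(n_times)]
        after = [rng.gauss(effect, s) for _ in range(n_times)]
        if welch_p(before, after) < alpha:
            hits += 1
    return hits / reps
```

Adding control locations shrinks the noise in the impact-minus-controls contrast, so estimated power for a press impact rises with `n_controls`, consistent with the recommendation above.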
2.
We consider the problematic relationship between publication success and statistical significance in the light of analyses in which we examine the distribution of published probability (P) values across the statistical 'significance' range, below the 5% probability threshold. P-values are often judged according to whether they lie beneath traditionally accepted thresholds (< 0.05, < 0.01, < 0.001, < 0.0001); we examine how these thresholds influence the distribution of reported absolute P-values in published scientific papers, the majority in the biological sciences. We collected published P-values from three leading journals and summarized their distribution using the frequencies falling across and within these four threshold values between 0.05 and 0. These published frequencies were then fitted to three complementary null models which allowed us to predict the expected proportions of P-values in the top and bottom half of each inter-threshold interval (i.e. those lying below, as opposed to above, each P-value threshold). Statistical comparison of these predicted proportions against those actually observed provides the first empirical evidence for a remarkable excess of probability values being cited on, or just below, each threshold relative to the smoothed theoretical distributions. The pattern is consistent across thresholds and journals, and for whichever theoretical approach is used to generate our expected proportions. We discuss this novel finding and its implications for solving the problems of publication bias and selective reporting in evolutionary biology.
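The core comparison, observed versus expected frequencies just below a threshold, can be sketched as follows; the locally uniform null (expected proportion 0.5 within a narrow interval straddling the threshold) is a simplification of the paper's three null models:

```python
import math

def excess_below_threshold(n_below, n_above, p_expected=0.5):
    """One-sided normal-approximation binomial test for an excess of reported
    P-values in the lower half of a narrow interval straddling a threshold.
    p_expected = 0.5 corresponds to a locally uniform (smooth) null density.
    Returns (observed proportion below, one-sided p-value for the excess)."""
    n = n_below + n_above
    phat = n_below / n
    se = math.sqrt(p_expected * (1 - p_expected) / n)
    z = (phat - p_expected) / se
    p_one = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return phat, p_one
```

For example, 70 P-values just below a threshold against 30 just above it gives a pronounced excess relative to the 50:50 null.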
3.
The presentation of quantitative results from ELISAs is very variable and, in particular, the positive-negative threshold or critical value is poorly defined. A simple statistical model is presented which demonstrates how this value may be determined. Depending on what assumptions are made, it is possible to estimate the necessary parameters in several different ways. These methods generally form a hierarchy, and greater precision can be achieved by combining variances from several plates. More problematic is the determination of the negative control, and indeed its definition. The results of a series of ELISAs taken from a study of polyphagous predators of the cereal aphid S. avenae are used to illustrate the methods. The ELISAs were done over a period of three months using 86 plates. Recommendations are made for the numbers and levels of controls to be used in future studies.
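One common way to define the positive-negative threshold, pooling variance across plates as the abstract suggests, is the mean of the negative controls plus k standard deviations; the choice k = 3 is a conventional assumption here, not necessarily the paper's:

```python
import math
import statistics

def elisa_threshold(neg_controls_by_plate, k=3.0):
    """Positive-negative threshold: grand mean of the negative controls plus
    k pooled standard deviations, with the within-plate variances pooled
    across plates (weighted by degrees of freedom) for greater precision."""
    all_vals = [v for plate in neg_controls_by_plate for v in plate]
    grand_mean = statistics.mean(all_vals)
    num = sum((len(p) - 1) * statistics.variance(p) for p in neg_controls_by_plate)
    den = sum(len(p) - 1 for p in neg_controls_by_plate)
    pooled_sd = math.sqrt(num / den)
    return grand_mean + k * pooled_sd
```

Samples with absorbance above the returned value would be scored positive.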
4.
Huan Yin Weizhen Wang Zhongzhan Zhang 《Biometrical journal. Biometrische Zeitschrift》2019,61(6):1462-1476
When establishing a treatment in clinical trials, it is important to evaluate both effectiveness and toxicity. In phase II clinical trials, multinomial data are collected in m-stage designs, especially in two-stage designs. Exact tests on two proportions, one for the response rate and one for the nontoxicity rate, should be employed because of the limited sample sizes. However, existing tests use certain parameter configurations at the boundary of the null hypothesis space to determine rejection regions, without showing that the maximum Type I error rate is achieved at that boundary. In this paper, we show that the power function for each test in a large family of tests is nondecreasing in both proportions; identify the parameter configurations at which the maximum Type I error rate and the minimum power are achieved, and derive level-α tests; provide optimal two-stage designs with the least expected total sample size, together with the optimization algorithm; and extend the results to the general m-stage case. Some R code is given in the Supporting Information.
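The monotonicity argument can be illustrated for a single binomial proportion: because the tail probability P(X ≥ c) is nondecreasing in p, the Type I error of an exact test is maximized on the boundary of the null space. A sketch (single-stage, one proportion, for illustration only):

```python
from math import comb

def binom_tail(n, c, p):
    """P(X >= c) for X ~ Binomial(n, p): the power of a single-arm exact test
    that rejects when at least c successes are seen among n patients. Because
    this is nondecreasing in p, the maximum Type I error over p <= p0 is
    attained on the boundary p = p0."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(c, n + 1))
```

Scanning p over a grid confirms the power function never decreases, which is the property the paper establishes for its two-proportion family of tests.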
5.
Joseph F. Mudge Faith M. Penny Jeff E. Houlahan 《BioEssays : news and reviews in molecular, cellular and developmental biology》2012,34(12):1045-1049
Setting optimal significance levels that minimize Type I and Type II errors allows for more transparent and well-considered statistical decision making compared to the traditional α = 0.05 significance level. We use the optimal α approach to re-assess conclusions reached by three recently published tests of the pace-of-life syndrome hypothesis, which attempts to unify occurrences of different physiological, behavioral, and life history characteristics under one theory across different scales of biological organization. While some of the conclusions reached using optimal α were consistent with those previously reported using the traditional α = 0.05 threshold, opposing conclusions were also frequently reached. The optimal α approach reduced the probabilities of Type I and Type II errors and ensured that statistical significance was associated with biological relevance. Biologists should seriously consider their choice of α when conducting null hypothesis significance tests, as there are serious disadvantages to consistent reliance on the traditional but arbitrary α = 0.05 significance level.
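A sketch of the optimal-α idea for the simplest case, a one-sided z-test of a standardized effect; the z-test framing and the equal default weights are illustrative assumptions:

```python
from statistics import NormalDist

def optimal_alpha(effect, n, w1=1.0, w2=1.0, grid=10000):
    """Grid search for the significance level minimizing
    w1*(Type I error) + w2*(Type II error) for a one-sided z-test of a
    standardized effect size `effect` with n observations."""
    nd = NormalDist()
    shift = effect * n ** 0.5  # noncentrality of the test statistic
    best_a, best_cost = None, float("inf")
    for i in range(1, grid):
        a = i / grid
        beta = nd.cdf(nd.inv_cdf(1 - a) - shift)  # Type II error at this alpha
        cost = w1 * a + w2 * beta
        if cost < best_cost:
            best_a, best_cost = a, cost
    return best_a
```

For equal weights this grid search recovers the closed-form solution α = 1 − Φ(δ√n / 2), and larger or better-sampled effects justify stricter significance levels than 0.05.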
6.
Background: Statistical validation of predicted complexes is a fundamental issue in proteomics and bioinformatics. The target is to measure the statistical significance of each predicted complex in terms of a p-value. Surprisingly, this issue has not received much attention in the literature; to our knowledge, only a few research efforts have been made in this direction. Methods: In this article, we propose a novel method for calculating the p-value of a predicted complex. The null hypothesis is that there is no difference between the number of edges in the target protein complex and that expected under the random null model. In addition, we assume that a true protein complex must be a connected subgraph. Based on this null hypothesis, we present an algorithm to compute the p-value of a given predicted complex. Results: We test our method on five benchmark data sets to evaluate its effectiveness. Conclusions: The experimental results show that our method is superior to state-of-the-art algorithms for assessing the statistical significance of candidate protein complexes.
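A stripped-down version of such a p-value, using a plain Erdős–Rényi null in which each possible edge within the complex is present at the background network density (the paper's null additionally conditions on the complex being connected):

```python
from math import comb

def complex_p_value(k, observed_edges, network_density):
    """P-value for a predicted complex of k proteins: the probability that a
    random subgraph on k nodes, with each of the C(k, 2) possible edges
    present independently at the background network density, contains at
    least as many edges as observed."""
    m = comb(k, 2)
    p = network_density
    return sum(comb(m, e) * p ** e * (1 - p) ** (m - e)
               for e in range(observed_edges, m + 1))
```

A fully connected 4-protein complex in a sparse network (density 0.1) gets p = 0.1^6 = 1e-6, i.e. it is very unlikely to be that dense by chance.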
7.
Julio Arrontes 《Population Ecology》2021,63(1):123-132
This report explores how heterogeneity of variances affects randomization tests used to evaluate differences in the asymptotic population growth rate, λ. The probability of Type I error was calculated in four scenarios for populations with identical λ but different variance of λ: (1) the populations have different projection matrices: the same λ may be obtained from different sets of vital rates, which leaves room for different variances of λ; (2) the populations have identical projection matrices but different reproductive schemes, and fecundity in one of the populations has a larger associated variance. The two other scenarios evaluate a sampling artifact as the source of the heterogeneity of variances: the same population is sampled twice, (3) with the same sampling design, or (4) with different sampling effort for different stages. Randomization tests were done with increasing differences in sample size between the two populations, which implies additional differences in the variance of λ. The probability of Type I error stays at the nominal significance level (α = .05) in Scenario 3 and, in the other scenarios, when sample sizes are identical. Tests were too liberal, or too conservative, under a combination of variance heterogeneity and different sample sizes. Increasing the difference in sample size exacerbated the gap between the observed Type I error and the nominal significance level. The Type I error increases or decreases depending on which population has the larger sample size, the one with the smallest or the largest variance. On its own, however, sample size is not responsible for changes in Type I error.
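The liberal behaviour can be reproduced with a small simulation: a permutation test of a mean difference applied to two samples that share a mean but differ in variance and sample size (the normal model, the raw mean-difference statistic, and all parameter values are illustrative, not the paper's matrix-model setting):

```python
import random

def mean(v):
    return sum(v) / len(v)

def perm_p(x, y, rng, n_perm=200):
    """Two-sided permutation p-value for a difference in means."""
    obs = abs(mean(x) - mean(y))
    pooled = x + y
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a, b = pooled[:len(x)], pooled[len(x):]
        if abs(mean(a) - mean(b)) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)

def type1_rate(n1, n2, sd1, sd2, sims=300, alpha=0.05, seed=7):
    """Estimated Type I error of the permutation test when the two samples
    share a mean (zero) but differ in variance and in sample size."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(sims):
        x = [rng.gauss(0, sd1) for _ in range(n1)]
        y = [rng.gauss(0, sd2) for _ in range(n2)]
        if perm_p(x, y, rng) < alpha:
            rejections += 1
    return rejections / sims
```

When the smaller sample also has the larger variance, the permutation distribution understates the sampling variance of the observed difference, and the estimated Type I error climbs well above the nominal level.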
8.
A statistical simulation model for field testing of non‐target organisms in environmental risk assessment of genetically modified plants
Paul W. Goedhart Hilko van der Voet Ferdinando Baldacchino Salvatore Arpaia 《Ecology and evolution》2014,4(8):1267-1283
Genetic modification of plants may result in unintended effects causing potentially adverse effects on the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effect on non-target organisms is compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these, for example the power to detect environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions, possibly with excess zeros. In addition, the model supports completely randomized and randomized block experiments, can simulate single or multiple trials across environments, enables genotype-by-environment interaction through random variety effects, and includes repeated measures in time following a constant, linear, or quadratic pattern, possibly with some form of autocorrelation. The model also allows a set of reference varieties to be added to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail, and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided.
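A minimal generator for one ingredient of such a framework, zero-inflated Poisson counts, is easy to sketch; the real model supports many more distributions and design features:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's algorithm for Poisson sampling (fine for small lambda)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def zip_counts(n, lam, p_zero, seed=42):
    """Zero-inflated Poisson counts: with probability p_zero emit a
    structural zero, otherwise draw from Poisson(lam). A minimal stand-in
    for the excess-zero count distributions the framework generates."""
    rng = random.Random(seed)
    return [0 if rng.random() < p_zero else poisson(lam, rng) for _ in range(n)]
```

The expected zero fraction is p_zero + (1 − p_zero)·exp(−λ), which for λ = 3 and p_zero = 0.3 is about 0.335, well above the plain Poisson value of about 0.05.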
9.
10.
11.
Lipsitz et al. (1998, Biometrics 54, 148-160) discussed testing the homogeneity of the risk difference for a series of 2 × 2 tables. They proposed and evaluated several weighted test statistics, including the commonly used weighted least squares test statistic. Here we suggest several important improvements on these test statistics. First, we propose using the one-sided analogues of the test procedures proposed by Lipsitz et al., because we should only reject the null hypothesis of homogeneity when the variation of the estimated risk differences between centers is large. Second, we generalize their study by redesigning the simulations to include the situations considered by Lipsitz et al. (1998) as special cases. Third, we consider a logarithmic transformation of the weighted least squares test statistic to improve the normal approximation of its sampling distribution. On the basis of Monte Carlo simulations, we note that, as long as the mean treatment group size per table is moderate or large (≥ 16), this simple test statistic, in conjunction with the commonly used adjustment procedure for sparse data, can be useful when the number of 2 × 2 tables is small or moderate (≤ 32). In these situations, in fact, we find that our proposed method generally outperforms all the statistics considered by Lipsitz et al. Finally, we include a general guideline about which test statistic should be used in a variety of situations.
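The weighted least squares statistic at the centre of this comparison can be sketched as follows, with a 0.5-cell sparse-data adjustment folded in as an assumption (the exact adjustment used by the authors may differ):

```python
def wls_homogeneity_stat(tables):
    """Weighted least squares statistic for homogeneity of the risk
    difference across K 2x2 tables, each given as (x1, n1, x2, n2).
    Referred to a chi-square distribution with K-1 degrees of freedom.
    A 0.5 continuity adjustment keeps the variance weights finite for
    sparse tables."""
    ds, ws = [], []
    for x1, n1, x2, n2 in tables:
        p1 = (x1 + 0.5) / (n1 + 1)  # sparse-data adjustment
        p2 = (x2 + 0.5) / (n2 + 1)
        var = p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2
        ds.append(p1 - p2)           # per-table risk difference
        ws.append(1 / var)           # inverse-variance weight
    dbar = sum(w * d for w, d in zip(ws, ds)) / sum(ws)
    return sum(w * (d - dbar) ** 2 for w, d in zip(ws, ds))
```

Identical tables give a statistic of zero; tables whose risk differences point in opposite directions push it far past the chi-square critical value.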
12.
Terumasa Komuro 《Cell and tissue research》1970,105(3):317-324
Summary The neuromuscular junctions in the crayfish heart were studied with the electron microscope and classified into two types based on the characteristics of the post-synaptic side. Type I junctions were characterized by a mazy post-synaptic apparatus, referred to in this work as the junctional envelope, consisting of the cytoplasmic processes and/or lamellae of the muscle cell. Type II junctions, on the other hand, lacked the junctional envelope. The nerve terminals in both junction types contained two types of synaptic vesicles, large granular and small agranular, about 1000 Å and 450 Å in diameter respectively. The physiological significance of these neuromuscular junctions and the nature of their synaptic vesicles are discussed. The presence of this unusual neuromuscular junction, coupled with the histological characteristics of the heart muscles themselves (Komuro, 1968), may be involved in the different physiological properties of the crustacean heart. This subject will be discussed in a later publication by the author.
13.
Summary The abdominal vagal paraganglia of the rat consist of small groups of cells, interspersed by blood vessels and nerve bundles and lying close to, or within, the vagus nerve or its branches. Each cell group consists of 2–10 Type I cells incompletely invested by 1–3 satellite cells. Type I cells are characterised by the presence of numerous dense-cored vesicles in their cytoplasm and may exhibit synaptic-like contacts with each other. Small efferent nerve endings make synaptic contacts with Type I cells. Larger cup-shaped afferent nerve endings also make synaptic contacts of two kinds with Type I cells. Nerve-nerve synapses are often seen within or close to paraganglia. Attention is drawn to the close similarity in fine structure of the abdominal vagal paraganglia, the carotid body, and the small intensely fluorescent cells of the superior cervical ganglion in rats. Possible functional implications of this morphological similarity are discussed.
14.
Advances in the environmental monitoring of genetically modified plants
Over the past two decades, the commercial use of genetically modified (GM) plants has grown steadily, yet biosafety concerns remain the principal constraint on the further development of the GM plant industry. Although GM plants undergo risk assessment before commercialization, risk-management measures, including environmental monitoring, are a necessary means of ensuring their safe use. After nearly twenty years of large-scale cultivation of GM crops, environmental risks have gradually become apparent in areas such as resistance evolution in target organisms, effects on biodiversity, gene flow, and long-term persistence in ecosystems, indicating that risk assessment alone cannot provide a sufficient safety guarantee for the use of GM plants; systematic, long-term environmental monitoring is also required to clarify their actual environmental effects after commercial release. The United Nations Environment Programme and the European Union, among others, have issued regulations and technical guidelines for the environmental monitoring of GM plants, and some countries have implemented systematic monitoring programmes. This article reviews the environmental risks posed by GM plants and the components that environmental monitoring should include.
15.
Hayley M. Geyle Gurutzeta Guillera‐Arroita Hugh F. Davies Ronald S. C. Firth Brett P. Murphy Dale G. Nimmo Euan G. Ritchie John C. Z. Woinarski Emily Nicholson 《Austral ecology》2019,44(2):223-236
Detecting trends in species' distribution and abundance is essential for conserving threatened species and depends upon effective monitoring programmes. Despite this, monitoring programmes are often designed without explicit consideration of their ability to deliver the information required by managers, such as their power to detect population changes. Here, we demonstrate the use of existing data to support the design of monitoring programmes aimed at detecting declines in species occupancy. We used single-season occupancy models and baseline data to gain information on variables affecting the occupancy and detectability of the threatened brush-tailed rabbit-rat Conilurus penicillatus (Gould 1842) on the Tiwi Islands, Australia. This information was then used to estimate the survey effort required to achieve sufficient power to detect changes in occupancy of different magnitudes. We found that occupancy varied spatially, driven primarily by habitat (canopy height and cover, distance to water) and fire history across the landscape. Detectability varied strongly among seasons and was three times higher in the late dry season (July–September) than in the early dry season (April–June). Evaluation of three monitoring scenarios showed that conducting surveys at times when detectability is highest can lead to a substantial improvement in our ability to detect declines, thus reducing survey effort and costs. Our study highlights the need for careful consideration of survey design in relation to the ecology of a species, as it can lead to substantial cost savings and improved insight into species population change via monitoring.
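The link between detectability and survey effort can be sketched with the standard occupancy identity, P(at least one detection in K surveys of an occupied site) = 1 − (1 − p)^K:

```python
import math

def surveys_needed(p_detect, target=0.95):
    """Number of repeat surveys of an occupied site needed so that the
    probability of at least one detection, 1 - (1 - p)^K, reaches `target`."""
    return math.ceil(math.log(1 - target) / math.log(1 - p_detect))
```

With a threefold difference in per-survey detectability like that reported between seasons (say p = 0.3 in the late dry season versus p = 0.1 in the early dry season, values chosen for illustration), the surveys required for 95% detection drop from 29 to 9.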
16.
This paper proposes a novel approach for confidence interval estimation and hypothesis testing for the common mean of several log-normal populations using the concept of the generalized variable. Simulation studies demonstrate that the proposed approach provides confidence intervals with satisfactory coverage probabilities and performs hypothesis testing with satisfactory Type I error control even at small sample sizes. Overall, it is superior to the large-sample approach. The proposed method is illustrated using two examples.
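A one-sample sketch of the generalized-variable idea; the paper addresses the harder problem of the common mean across several populations:

```python
import math
import random
import statistics

def lognormal_mean_ci(data, conf=0.95, draws=5000, seed=3):
    """Generalized-pivotal-quantity confidence interval for the mean of a
    single log-normal population. For Y = log X ~ N(mu, sigma^2), the mean
    is exp(mu + sigma^2/2); pivotal draws substitute chi-square and normal
    variates for the unknown parameters."""
    rng = random.Random(seed)
    y = [math.log(x) for x in data]
    n, ybar, s2 = len(y), statistics.mean(y), statistics.variance(y)
    pivots = []
    for _ in range(draws):
        # chi-square with n-1 df, via a sum of squared standard normals
        u = sum(rng.gauss(0, 1) ** 2 for _ in range(n - 1))
        g_sigma2 = (n - 1) * s2 / u
        g_mu = ybar - rng.gauss(0, 1) * math.sqrt(g_sigma2 / n)
        pivots.append(math.exp(g_mu + g_sigma2 / 2))
    pivots.sort()
    lo = pivots[int((1 - conf) / 2 * draws)]
    hi = pivots[int((1 + conf) / 2 * draws)]
    return lo, hi
```

The quantiles of the simulated pivots give an interval whose coverage holds up well at small n, which is the behaviour the simulation studies above examine.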
17.
18.
Carlos Martorell 《Population Ecology》2007,49(2):115-125
Echeveria longissima, a threatened herb whose habitat has been severely overgrazed and eroded, was studied for three years in a currently grazed and a fenced area. Matrix population models were used to assess whether livestock elimination provides a proper management strategy. The merits of retrospective perturbation analyses for management planning have been debated. Nevertheless, they may prove useful when applied in combination with exclosures, because they can detect the effects of anthropogenic disturbance on population dynamics. Thus, the results of retrospective and prospective methods were compared. A rapid decrease in population size was projected in both areas, although it was faster in the exposed one. The demographic processes that were favourable or detrimental in a given year were magnified outside the fence but buffered in the exclosure, showing a strong drought-disturbance synergism. Thus, the largest difference in the population growth rate λ between areas was observed in the driest year. Higher nurse-plant density inside the fence seems to alleviate drought effects. The use of prospective analysis alone may lead to erroneous management decisions, since the highest elasticities corresponded to transitions that were favoured by human activities. While allowing for an increased λ in the short term, intervention aimed at increasing these transitions further, without attending to others that are lessened by disturbance, may introduce large changes in the population dynamics, with negative long-term consequences. Retrospective methods can detect which processes have been altered by disturbance and its synergisms, so that healthy population dynamics may be restored more efficiently.
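The growth rate λ compared between areas is the dominant eigenvalue of the stage-based projection matrix; a minimal power-iteration sketch (the matrix below is a made-up two-stage example, not the study's):

```python
def dominant_lambda(matrix, iters=500):
    """Asymptotic population growth rate: the dominant eigenvalue of a
    (primitive, nonnegative) projection matrix, by power iteration with
    max-component normalization."""
    n = len(matrix)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)   # converges to the dominant eigenvalue
        v = [x / lam for x in w]       # normalized stage-structure vector
    return lam
```

For the 2×2 example [[0.5, 2.0], [0.5, 0.0]] (adult survival-plus-fecundity in the first row, juvenile transition in the second), λ solves λ² − 0.5λ − 1 = 0, i.e. λ ≈ 1.281; λ < 1 would project the kind of population decline reported above.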
19.
Genetic assignment methods use genotype likelihoods to draw inference about where individuals were or were not born, potentially allowing direct, real-time estimates of dispersal. We used simulated data sets to test the power and accuracy of Monte Carlo resampling methods in generating statistical thresholds for identifying F0 immigrants in populations with ongoing gene flow, and hence for providing direct, real-time estimates of migration rates. The identification of accurate critical values required that resampling methods preserved the linkage disequilibrium deriving from recent generations of immigrants and reflected the sampling variance present in the data set being analysed. A novel Monte Carlo resampling method taking into account these aspects was proposed and its efficiency was evaluated. Power and error were relatively insensitive to the frequency assumed for missing alleles. Power to identify F0 immigrants was improved by using large sample size (up to about 50 individuals) and by sampling all populations from which migrants may have originated. A combination of plotting genotype likelihoods and calculating mean genotype likelihood ratios (DLR) appeared to be an effective way to predict whether F0 immigrants could be identified for a particular pair of populations using a given set of markers.
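A basic version of the Monte Carlo thresholding step: simulate genotypes from a population's own allele frequencies and take a low quantile of their log-likelihoods as the critical value. This simple resampler does not preserve the immigrant-driven linkage disequilibrium that the paper shows is essential, so it illustrates the baseline the novel method improves on:

```python
import math
import random

def genotype_loglik(genotype, freqs):
    """Log-likelihood of a multilocus genotype under Hardy-Weinberg
    equilibrium. genotype: list of (allele_a, allele_b) pairs per locus;
    freqs: list of dicts mapping allele -> frequency in the population."""
    ll = 0.0
    for (a, b), f in zip(genotype, freqs):
        p, q = f[a], f[b]
        ll += math.log(p * q * (2 if a != b else 1))
    return ll

def assignment_threshold(freqs, alpha=0.01, draws=10000, seed=11):
    """Monte Carlo critical value: the alpha-quantile of log-likelihoods of
    genotypes simulated from the population's own allele frequencies.
    Residents scoring below it are flagged as putative F0 immigrants."""
    rng = random.Random(seed)
    sims = []
    for _ in range(draws):
        g = []
        for f in freqs:
            alleles, probs = zip(*f.items())
            g.append((rng.choices(alleles, probs)[0],
                      rng.choices(alleles, probs)[0]))
        sims.append(genotype_loglik(g, freqs))
    sims.sort()
    return sims[int(alpha * draws)]
```

With five biallelic loci at frequency 0.5, the lowest possible log-likelihood (all homozygous) is 5·log(0.25) ≈ −6.93 and occurs with probability 1/32, so the 1% quantile sits at that floor.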