Similar articles
A total of 20 similar articles were retrieved (search time: 15 ms).
1.
2.
Logic of experiments in ecology: is pseudoreplication a pseudoissue?
Lauri Oksanen. Oikos 2001, 94(1): 27–38
Hurlbert divides experimental ecologists into ‘those who do not see any need for dispersion (of replicated treatments and controls), and those who do recognize its importance and take whatever measures are necessary to achieve a good dose of it’. Experimental ecologists could also be divided into those who do not see any problems with sacrificing spatial and temporal scales in order to obtain replication, and those who understand that appropriate scale must always have priority over replication. If an experiment is conducted at a spatial or temporal scale where the predictions of contesting hypotheses are convergent or ambiguous, no amount of technical impeccability can make the work instructive. Conversely, replication can always be obtained afterwards, by conducting more experiments with basically similar design in different areas and by using meta‐analysis. This approach even reduces the sampling bias obtained if resources are allocated to a small number of well‐replicated experiments. For a strict advocate of the hypothetico‐deductive method, replication is unnecessary even as a matter of principle, unless the predicted response is so weak that random background noise is a plausible excuse for a discrepancy between predictions and results. By definition, a prediction is an ‘all‐statement’, referring to all systems within a well‐defined category. What applies to all must apply to any. Hence, choosing two systems and assigning them randomly to a treatment and a control is normally an adequate design for a deductive experiment. The strength of such experiments depends on the firmness of the predictions and their a priori probability of corroboration. Replication is but one of many ways of reducing this probability. Whether the experiment is replicated or not, inferential statistics should always be used, to enable the reader to judge how well the apparent patterns in samples reflect real patterns in statistical populations. The concept ‘pseudoreplication’ amounts to entirely unwarranted stigmatization of a reasonable way to test predictions referring to large‐scale systems.

3.
The use of inferential statistics to test for treatment effects with data from experiments in which either treatments were not replicated (though samples may be) or replicates are not statistically independent leads to a serious methodological problem. This problem, identified by Hurlbert (1984), is called pseudoreplication. For unknown reasons, the pseudoreplication issue was completely overlooked by Russian ecologists, despite the fact that the international scientific community has been aware of pseudoreplication for almost twenty years. As a result, up to 47% of the experimental ecological papers published in six Russian academic journals (Botanicheskij zhurnal, Ekologia, Izvestija RAN Ser. Biol., Lesovedenie, Zhurnal Obshchei Biologii, Zoologicheskij zhurnal) in 1998–2001 are pseudoreplicated; this proportion is nearly twice as high as the proportion of pseudoreplicated studies in international journals during 1960–1980, i.e. before the problem was identified by Hurlbert (1984). This situation is alarming, especially because a substantial part of the pseudoreplication arises from incorrect use of statistics, not from incorrect experimental design. Using several examples from recent papers by Russian ecologists, I briefly review the situations in which pseudoreplication may occur and discuss some aspects of experimental design that are critical for correct processing and interpretation of ecological data.
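The cost of this error can be illustrated with a minimal simulation (all numbers hypothetical, not taken from the surveyed papers): when subsamples taken within a single treated and a single control plot are analysed as if they were independent replicates, the false-positive rate of a t-test is grossly inflated, whereas analysing one mean per plot keeps it near the nominal level.

```python
# Minimal simulation of pseudoreplication: subsamples within plots are not
# independent replicates of the treatment. All numbers are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_plots, n_sub = 2000, 3, 10   # plots per treatment, subsamples per plot
false_pos_sub, false_pos_mean = 0, 0

for _ in range(n_sims):
    # No true treatment effect; plots differ only through random plot effects.
    plot_eff_t = rng.normal(0, 1, n_plots)
    plot_eff_c = rng.normal(0, 1, n_plots)
    treat = np.concatenate([rng.normal(pe, 0.5, n_sub) for pe in plot_eff_t])
    ctrl = np.concatenate([rng.normal(pe, 0.5, n_sub) for pe in plot_eff_c])

    # Pseudoreplicated test: every subsample counted as a replicate.
    if stats.ttest_ind(treat, ctrl).pvalue < 0.05:
        false_pos_sub += 1
    # Correct test: one mean per plot, so the plot is the experimental unit.
    t_means = treat.reshape(n_plots, n_sub).mean(axis=1)
    c_means = ctrl.reshape(n_plots, n_sub).mean(axis=1)
    if stats.ttest_ind(t_means, c_means).pvalue < 0.05:
        false_pos_mean += 1

print("false-positive rate, subsamples as replicates:", false_pos_sub / n_sims)
print("false-positive rate, plot means as replicates:", false_pos_mean / n_sims)
```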

4.
To fulfill existing guidelines, applicants who aim to place their genetically modified (GM) insect‐resistant crop plants on the market are required to provide data from field experiments that address the potential impacts of the GM plants on nontarget organisms (NTOs). Such data may be based on varied experimental designs. The recent EFSA guidance document for environmental risk assessment (2010) does not provide clear and structured suggestions that address the statistics of field trials on effects on NTOs. This review examines existing practices in GM plant field testing, such as the approach to randomization, replication, and pseudoreplication. Emphasis is placed on the importance of design features used for the field trials in which effects on NTOs are assessed. The importance of statistical power and the positive and negative aspects of various statistical models are discussed. Equivalence and difference testing are compared, and the importance of checking the distribution of experimental data is stressed for deciding on the selection of the proper statistical model. While for continuous data (e.g., pH and temperature) classical statistical approaches – for example, analysis of variance (ANOVA) – are appropriate, for discontinuous data (counts) only generalized linear models (GLMs) are shown to be efficient. There is no golden rule as to which statistical test is the most appropriate for any experimental situation. In particular, in experiments in which block designs are used and covariates play a role, GLMs should be used. Generic advice is offered that will help both in setting up field trials and in analysing and interpreting the data obtained from them. The combination of decision trees and a checklist for field trials, which are provided, will help in interpreting the statistical analyses of field trials and in assessing whether such analyses were correctly applied.
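As a rough illustration of the recommendation for count data, the sketch below fits a Poisson GLM with a block factor using statsmodels; the data frame, column names, and counts are hypothetical placeholders, not data from any field trial.

```python
# Sketch of a Poisson GLM for nontarget-organism counts, as recommended for
# discontinuous (count) data. Data frame and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

counts = pd.DataFrame({
    "count":     [12, 8, 15, 30, 25, 28, 11, 9, 14, 27, 31, 26],
    "treatment": ["GM", "GM", "GM", "conv", "conv", "conv"] * 2,
    "block":     ["B1"] * 6 + ["B2"] * 6,
})

# Block enters as a factor; a negative-binomial family could replace Poisson
# if the counts are overdispersed.
model = smf.glm("count ~ C(treatment) + C(block)",
                data=counts,
                family=sm.families.Poisson()).fit()
print(model.summary())
```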

5.
Several temperature-sensitive initiation mutants of Escherichia coli were examined for the ability to initiate more than one round of replication after being held at nonpermissive temperature for approximately 1.5 generation equivalents. The capacity for initiation was measured by residual synthesis experiments and rate experiments under conditions where protein synthesis and ribonucleic acid synthesis were inhibited. Results of the rate and density transfer experiments suggest that the cells may initiate more than one round of replication in the absence of protein or ribonucleic acid synthesis. This contrasts with the results of the residual synthesis experiments which suggest that, under these conditions, only one round of synthesis is achieved. These findings suggest that the total amount of residual synthesis achieved in the presence of an inhibitor may be both a function of the number of initiation events which occur and the effect of the inhibitor of protein or ribonucleic acid synthesis on chain elongation.

6.
Properly designed (randomized and/or balanced) experiments are standard in ecological research. Molecular methods are increasingly used in ecology, but studies generally do not report the detailed design of sample processing in the laboratory. This may strongly influence the interpretability of results if the laboratory procedures do not account for the confounding effects of unexpected laboratory events. We demonstrate this with a simple experiment in which unexpected differences in the laboratory processing of samples would have biased the results had randomization in the DNA extraction and PCR steps not provided safeguards. We emphasize the need for proper experimental design and reporting of the laboratory phase of molecular ecology research to ensure the reliability and interpretability of results.
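A minimal sketch of the kind of laboratory-phase randomization the authors call for is given below; the sample identifiers and batch sizes are hypothetical, and the point is simply that extraction batch and PCR order are assigned at random rather than following the field or collection order.

```python
# Minimal sketch of randomizing laboratory processing order so that extraction
# batch and PCR plate are not confounded with the field treatment. Sample IDs
# and batch sizes are hypothetical.
import random

random.seed(42)
samples = [f"S{i:02d}_{grp}" for grp in ("ctrl", "trt") for i in range(1, 13)]

random.shuffle(samples)                      # break any field/collection order
batch_size = 8
extraction_batches = [samples[i:i + batch_size]
                      for i in range(0, len(samples), batch_size)]

for b, batch in enumerate(extraction_batches, start=1):
    print(f"extraction batch {b}: {batch}")

pcr_order = samples[:]                       # re-randomize for the PCR step
random.shuffle(pcr_order)
print("PCR plate order:", pcr_order)
```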

7.
Post-stratification in the randomized clinical trial
R. McHugh, J. Matts. Biometrics 1983, 39(1): 217–225
A topic of current biometric discussion is whether stratification should be used in randomized clinical trials and, if so, which kind. An approach based upon randomization theory is used to evaluate pre- versus post-stratification. The results obtained relate specifically to the effect of the size of the clinical trial on the bias and precision of estimated treatment contrasts.

8.
Plant breeders frequently evaluate large numbers of entries in field trials for selection. Generally, the tested entries are related by pedigree. The simplest case is a nested treatment structure, where entries fall into groups or families such that entries within groups are more closely related than between groups. We found that some plant breeders prefer to plant close relatives next to each other in the field. This contrasts with common experimental designs such as the α-design, where entries are fully randomized. A third design option is to randomize in such a way that entries of the same group are separated as much as possible. The present paper compares these design options by simulation. Another important consideration is the type of model used for analysis. Most of the common experimental designs were optimized assuming that the model used for analysis has fixed treatment effects. With many entries that are related by pedigree, analysis based on a model with random treatment effects becomes a competitive alternative. In simulations, we therefore study the properties of best linear unbiased predictions (BLUP) of genetic effects based on a nested treatment structure under these design options for a range of genetic parameters. It is concluded that BLUP provides efficient estimates of genetic effects and that resolvable incomplete block designs such as the α-design with restricted or unrestricted randomization can be recommended.

9.
Two widely‐recognized hypotheses propose that increases in fish abundance at artificial reefs are caused by (a) the attraction and redistribution of existing individuals, with no net increase in overall abundance, and (b) the addition of new individuals by production, leading to a net increase in overall abundance. Inappropriate experimental designs have prevented many studies from discriminating between the two processes. Eight of 11 experiments comparing fish abundances on artificial reefs with those on adjacent soft-bottom habitats were compromised by a lack of replication or spatial interspersion in the design itself. Only three studies featured proper controls and replicated designs with interspersion of reef and control sites. Goodness-of-fit tests of abundance data for 67 species from these studies indicated that more fishes occur on reefs than on controls, particularly for species that typically occur over hard substrata. Conversely, seagrass specialists favour controls over reefs. Changes in the appearance of fish abundance trajectories driven by manipulation of sampling intervals highlight the need for adequate temporal sampling to encompass key life-history events, particularly juvenile settlement. Ultimately determining whether attraction or production is responsible for increased abundances on reefs requires two experimental features: (1) control sites, both interspersed among artificial reefs and at reef and non‐reef locations outside the test area, and (2) incorporation of fish age and length data over time. Techniques such as otolith microchemistry, telemetry and stable isotope analysis can be used to help resolve the feeding and movement mechanisms driving attraction and production.

10.
Investigating the drivers of diet quality is a key issue in wildlife ecology and conservation. Fecal near-infrared reflectance spectroscopy (f‐NIRS) is widely used to assess dietary quality, since it allows noninvasive, rapid, and low‐cost analysis of nutrients. Samples for f‐NIRS can be collected and analyzed with or without knowledge of animal identities. While anonymous sampling reduces the costs of individual identification, as it requires neither physical captures nor DNA genotyping, it neglects the potential effects of individual variation. As a consequence, regression models fitted to investigate the drivers of dietary quality may suffer severe issues of pseudoreplication. I investigated the relationship between crude protein and ecological predictors at different time periods to assess the level of individual heterogeneity in diet quality of 22 marked chamois Rupicapra rupicapra monitored over 2 years. Models with and without an individual grouping effect were fitted to simulate identifiable and anonymous fecal sampling, and model estimates were compared to evaluate the consequences of anonymizing data collection and analysis. The variance explained by the individual random effect and the value of diet repeatability varied with season and peaked in winter. Despite the occurrence of individual variation in dietary quality, ecological parameter estimates under identifiable or anonymous sampling were consistently similar. This study suggests that anonymous fecal sampling may provide robust estimates of the relationship between dietary quality and ecological correlates. However, since the level of individual heterogeneity in dietary quality may vary with species‐ or study‐specific features, inconsequential pseudoreplication should not be assumed in other taxa. When individual differences are known to be inconsequential, anonymous sampling allows the trade‐off between sampling intensity and representativeness to be optimized. When pseudoreplication is consequential, however, no conclusive remedy exists to effectively resolve nonindependence.
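The comparison of identifiable and anonymous sampling amounts to fitting the same fixed-effect structure with and without an individual grouping effect. The sketch below shows one way to set this up in statsmodels on simulated data; the variable names (crude_protein, ndvi, animal) are assumptions for illustration, not the variables of the original study.

```python
# Sketch comparing "identifiable" (individual random effect) and "anonymous"
# (no grouping) models of fecal crude protein. Data and column names are
# hypothetical placeholders, not the chamois data set.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_ind, n_obs = 22, 12
df = pd.DataFrame({
    "animal": np.repeat([f"ch{i:02d}" for i in range(n_ind)], n_obs),
    "ndvi": rng.uniform(0.2, 0.8, n_ind * n_obs),
})
ind_eff = rng.normal(0, 0.5, n_ind)            # individual heterogeneity
df["crude_protein"] = (10 + 4 * df["ndvi"]
                       + np.repeat(ind_eff, n_obs)
                       + rng.normal(0, 0.8, len(df)))

# Anonymous sampling: individuals ignored, observations treated as independent.
anonymous = smf.ols("crude_protein ~ ndvi", data=df).fit()
# Identifiable sampling: animal identity enters as a random intercept.
identifiable = smf.mixedlm("crude_protein ~ ndvi", data=df,
                           groups=df["animal"]).fit()

print("anonymous slope:    ", anonymous.params["ndvi"])
print("identifiable slope: ", identifiable.params["ndvi"])
```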

11.
The meaning of the concept of pseudoreplication in ecological experiments is discussed. Replicates are pseudoreplicates if the factors causing random variation act on them as on a single unit; for example, pseudoreplication could arise if a single flipped coin were split up. The absence of a random spatial distribution of experimental and control fields could be considered a mistake because the experimental results could be confounded with spatial heterogeneity. However, there is no pseudoreplication in this case, because the variations of ecological systems are produced by factors that differ from place to place, and each organism responds to these factors individually. Pseudoreplication is instead created by multiple determinations of the response of an individual organism. Recommendations are also proposed for applying randomized block designs and analysis of covariance to field trials in plant communities. Randomized blocks unite experimental and control fields that occupy the same position in the gradient of spatial heterogeneity; this approach excludes the influence of such heterogeneity on the random variation of the measured parameters, with the heterogeneity considered only at the scale of the blocks used. Since control and experimental fields belong to a single set, the series of heterogeneities spanning the range of their parameters is identical. Randomly sorting a large number of experimental and control samples consisting of the same number of replicates will place replicates at the same position in the overall heterogeneity gradient of the investigated set. Replicates occupying the same positions in the sorted series form a "virtual block". The population of such blocks captures most of the random variation and allows even very weak experimental effects to be revealed by analysis of variance.
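A minimal sketch of the recommended randomized-block analysis is shown below, assuming hypothetical biomass data from paired experimental and control fields; the block factor absorbs the spatial heterogeneity shared within each pair before the treatment effect is tested.

```python
# Minimal sketch of a randomized complete block analysis: the block factor
# absorbs spatial heterogeneity shared by paired experimental and control
# fields. Values and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

trial = pd.DataFrame({
    "biomass":   [4.1, 5.0, 3.2, 4.1, 5.5, 6.3, 2.8, 3.5],
    "treatment": ["control", "fertilized"] * 4,
    "block":     ["B1", "B1", "B2", "B2", "B3", "B3", "B4", "B4"],
})

fit = smf.ols("biomass ~ C(treatment) + C(block)", data=trial).fit()
# Treatment is tested against the within-block error term.
print(sm.stats.anova_lm(fit, typ=2))
```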

12.
Zoo and aquarium research presents many logistic challenges, including extremely small sample sizes and lack of independent data points, which lend themselves to the misuse of statistics. Pseudoreplication and pooling of data are two statistical problems common in research in the biological sciences. Although the prevalence of these and other statistical miscues has been documented in other fields, little attention has been paid to the practice of statistics in the field of zoo biology. A review of articles published in the journal Zoo Biology between 1999 and 2004 showed that approximately 40% of the 146 articles utilizing inferential statistics during that span contained some evidence of pseudoreplication or pooling of data. Nearly 75% of studies did not provide degrees of freedom for all statistics and approximately 20% did not report test statistic values. Although the level of pseudoreplication in this dataset is not outside the levels found in other branches of biology, it does indicate the challenges of dealing with appropriate data analysis in zoo and aquarium studies. The standardization of statistical techniques to deal with the methodological challenges of zoo and aquarium populations can help advance zoo research by guiding the production and analysis of applied studies. This study recommends techniques for dealing with these issues, including complete disclosure of data manipulation and reporting of statistical values, checking and controlling for institutional effects in statistical models, and avoidance of pseudoreplicated observations. Additionally, zoo biologists should seek out other models, such as hierarchical or factorial models or randomization tests, to supplement their repertoire of t‐tests and ANOVA. These suggestions are intended to stimulate conversation and examination of the current use of statistics in zoo biology in an effort to develop more consistent requirements for publication.

13.
When replicating DNA is labeled sequentially with radioactive and density tracers and analyzed by equilibrium centrifugation, the fraction banding at heavier than normal density is inversely proportional to the rate of replication fork movement if there is a sharp transition from one tracer to the other on the newly synthesized chains (Painter and Schaefer, '69). Primate CV-1 DNA labeled for 5 to 30 minutes with 3H-dThd and then for three hours with BrdUrd in the presence of FdUrd bands in a bimodal distribution in alkaline CsCl, rather than in a continuous distribution with a skew toward heavier density seen when FdUrd is omitted and centrifugation is in neutral CsCl. The heavy density peak represents interspersion of both tracers in the DNA and is caused by slow transition from dThd to BrdUrd incorporation when the tracers are switched in the labeling medium. This may result from preferential uptake and incorporation of dThd over BrdUrd. Because of the interspersion, calculation of the rate of replication fork movement is inaccurate. Reversal of the labeling sequence with administration of the long density pulse before the radioactive pulse reduces the problem of interspersion. Using this sequence of labeling, estimates of the rate of fork movement of 0.36-0.38 micrometer/min are obtained when the 3H pulse time is long enough to allow accurate measurement of the fraction of heavy DNA. Analysis by fiber autoradiography yields a rate of 0.56 micrometer/min in the same cell line. If appropriate precautions are taken to minimize mixing of the two tracers in the precursor pool and to ensure that the fraction of heavy DNA is measured accurately, the hydrodynamic technique provides an objective method of measuring rate of fork movement that gives values only slightly lower than those obtained by autoradiography.

14.
Vegetative cells carrying the new temperature-sensitive mutation cdc40 arrest at the restrictive temperature with a medial nuclear division phenotype. DNA replication is observed under these conditions, but most cells remain sensitive to hydroxyurea and do not complete the ongoing cell cycle if the drug is present during release from the temperature block. It is suggested that the cdc40 lesion affects an essential function in DNA synthesis. Normal meiosis is observed at the permissive temperature in cdc40 homozygotes. At the restrictive temperature, a full round of premeiotic DNA replication is observed, but neither commitment to recombination nor later meiotic events occur. Meiotic cells that are already committed to the recombination process at the permissive temperature do not complete it if transferred to the restrictive temperature before recombination is realized. These temperature shift-up experiments demonstrate that the CDC40 function is required for the completion of recombination events, as well as for the earlier stage of recombination commitment. Temperature shift-down experiments with cdc40 homozygotes suggest that meiotic segregation depends on the final events of recombination rather than on commitment to recombination.

15.
High‐throughput sequencing is a powerful tool, but it suffers from biases and errors that must be accounted for to prevent false biological conclusions. Such errors include batch effects: technical errors present only in subsets of the data because of procedural changes within a study. If these are overlooked and multiple batches of data are combined, spurious biological signals can arise, particularly if batches of data are correlated with biological variables. Batch effects can be minimized through randomization of sample groups across batches. However, in long‐term or multiyear studies where data are added incrementally, full randomization is impossible, and batch effects may be a common feature. Here, we present a case study in which false signals of selection were detected due to a batch effect in a multiyear study of Alpine ibex (Capra ibex). The batch effect arose because sequencing read length changed over the course of the project and populations were added incrementally to the study, resulting in nonrandom distributions of populations across read lengths. The differences in read length caused small misalignments in a subset of the data, leading to false variant alleles and thus false SNPs. Pronounced allele frequency differences between populations arose at these SNPs because of the correlation between read length and population. This created highly statistically significant, but biologically spurious, signals of selection and false associations between allele frequencies and the environment. We highlight the risk of batch effects and discuss strategies to reduce their impact in multiyear high‐throughput sequencing studies.

16.
Rosenbaum PR. Biometrics 2011, 67(3): 1017–1027
In an observational or nonrandomized study of treatment effects, a sensitivity analysis indicates the magnitude of bias from unmeasured covariates that would need to be present to alter the conclusions of a naïve analysis that presumes adjustments for observed covariates suffice to remove all bias. The power of sensitivity analysis is the probability that it will reject a false hypothesis about treatment effects allowing for a departure from random assignment of a specified magnitude; in particular, if this specified magnitude is “no departure” then this is the same as the power of a randomization test in a randomized experiment. A new family of u‐statistics is proposed that includes Wilcoxon's signed rank statistic but also includes other statistics with substantially higher power when a sensitivity analysis is performed in an observational study. Wilcoxon's statistic has high power to detect small effects in large randomized experiments—that is, it often has good Pitman efficiency—but small effects are invariably sensitive to small unobserved biases. Members of this family of u‐statistics that emphasize medium to large effects can have substantially higher power in a sensitivity analysis. For example, in one situation with 250 pair differences that are Normal with expectation 1/2 and variance 1, the power of a sensitivity analysis that uses Wilcoxon's statistic is 0.08 while the power of another member of the family of u‐statistics is 0.66. The topic is examined by performing a sensitivity analysis in three observational studies, using an asymptotic measure called the design sensitivity, and by simulating power in finite samples. The three examples are drawn from epidemiology, clinical medicine, and genetic toxicology.
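The randomized-experiment benchmark mentioned in the abstract (the "no departure" case) can be checked with a short simulation of 250 paired differences drawn from a Normal distribution with mean 1/2 and variance 1. The sketch below estimates only this no-bias power of Wilcoxon's signed-rank test; the sensitivity-analysis powers quoted in the abstract (0.08 versus 0.66) require Rosenbaum's Γ-bounds, which are not implemented here.

```python
# Power of Wilcoxon's signed-rank test for 250 paired differences that are
# Normal(mean=0.5, sd=1). This reproduces only the randomized (no-bias) case;
# the sensitivity-analysis powers quoted in the abstract require Rosenbaum's
# Gamma bounds, which are not implemented in this sketch.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_pairs, n_sims, alpha = 250, 1000, 0.05

rejections = 0
for _ in range(n_sims):
    d = rng.normal(loc=0.5, scale=1.0, size=n_pairs)
    if stats.wilcoxon(d, alternative="greater").pvalue < alpha:
        rejections += 1

print("estimated power with no departure from randomization:",
      rejections / n_sims)
```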

17.
In this paper, we describe a new restricted randomization method called run-reversal equilibrium (RRE), which is a Nash equilibrium of a game in which (1) the clinical trial statistician chooses a sequence of medical treatments, and (2) clinical investigators make treatment predictions. RRE randomization counteracts investigators' ability to observe treatment histories in order to forecast upcoming treatments. Computation of a run-reversal equilibrium reflects how the treatment history at a particular site is imperfectly correlated with the treatment imbalance for the overall trial. An attractive feature of RRE randomization is that treatment imbalance follows a random walk at each site, while treatment balance is tightly constrained and regularly restored for the overall trial. Less predictable, and therefore more scientifically valid, experiments can be facilitated by run-reversal equilibrium for multi-site clinical trials.

18.
Random permuted blocks can result in treatment imbalance if entry to the trial stops in mid‐block. This paper presents a restriction of this method of randomization. The restriction avoids severe treatment imbalance but gives unbiased estimators of the treatment difference and its variance.
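For context, a sketch of the standard (unrestricted) random permuted blocks scheme is given below; stopping enrolment in mid-block, as the truncation in the example allows, is exactly what can leave the arms unbalanced. The paper's restricted variant, which avoids severe imbalance, is not implemented here.

```python
# Standard random permuted blocks (block size 4, two arms). Stopping enrolment
# mid-block can leave the arms unbalanced, which is the problem the restricted
# method addresses. The restriction itself is not implemented in this sketch.
import random

def permuted_blocks(n_patients, block_size=4, arms=("A", "B"), seed=7):
    random.seed(seed)
    per_arm = block_size // len(arms)
    assignments = []
    while len(assignments) < n_patients:
        block = list(arms) * per_arm
        random.shuffle(block)               # each block is a random permutation
        assignments.extend(block)
    return assignments[:n_patients]         # trial may stop in mid-block

alloc = permuted_blocks(n_patients=10)
print(alloc)
print("imbalance:", alloc.count("A") - alloc.count("B"))
```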

19.
I. Introduction. The use of restriction fragment length polymorphism (RFLP) markers has greatly simplified the genetic analysis of quantitative traits, providing a reliable and extensive framework of quantitative markers to which quantitative trait loci (QTL) can be linked [1]. To detect the linkage between RFLP markers and the phenotypic variations observed, the general linear model of analysis of variance (ANOVA) has been extensively used [2]. By using F2 populations, the complete genetic information, that is, the three genotypes of a genetic fact…

20.
Pseudofactorialism is defined as ‘the invalid statistical analysis that results from the misidentification of two or more response variables as representing different levels of an experimental variable or treatment factor. Most often the invalid analysis consists of use of an (n + 1)‐way ANOVA in a situation where two or more n‐way ANOVAs would be the appropriate approach’. My students and I examined a total of 1362 papers published from the 1960s to 2009 reporting manipulative experiments, primarily in the field of ecology. The error was present in 7% of these, including 9% of 80 experimental papers examined in 2009 issues of Ecology and the Journal of Animal Ecology. Key features of 60 cases of pseudofactorialism are tabulated as a basis for discussion of the varied ways and circumstances in which the error can occur. As co‐authors, colleagues, and the editors and anonymous referees who approved them for publication, a total of 459 persons other than the senior authors shared responsibility for these 60 papers. Pseudofactorialism may sometimes be motivated by a desire to test whether different response variables respond in the same way to treatment factors; proper procedures for doing that are briefly reviewed. A major cause of pseudofactorialism is the widespread failure in statistics texts, the primary literature, and documentation for statistics software packages to distinguish the three major components of experimental design – treatment structure, design structure, response structure – and to clearly define key terms such as experimental unit, evaluation unit, split unit, factorial, and repeated measures. A quick way to check for the possible presence of pseudofactorialism is to determine whether the number of valid experimental units in a study is smaller than (i) the error degrees of freedom in a multi‐way ANOVA, or (ii) the total number of tallies (N) in a multi‐way contingency table. Such situations can also indicate the commission of pseudoreplication, however.
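The quick check described at the end of the abstract can be expressed in a few lines of code; the helper function and the numbers below are illustrative assumptions, not taken from the paper.

```python
# Quick screen for possible pseudofactorialism: compare the number of valid
# experimental units with the error degrees of freedom of the multi-way ANOVA.
# Function and numbers are illustrative, not from the paper.
def anova_error_df(n_observations, factor_levels):
    """Error df of a fully crossed fixed-effects ANOVA (all interactions
    included): N minus the number of treatment cells."""
    cells = 1
    for levels in factor_levels:
        cells *= levels
    return n_observations - cells

n_experimental_units = 12            # e.g. 12 tanks in the study
error_df = anova_error_df(n_observations=48, factor_levels=(2, 3))
if n_experimental_units < error_df:
    print("warning: fewer experimental units than error df "
          f"({n_experimental_units} < {error_df}); "
          "check for pseudofactorialism or pseudoreplication")
```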
