Similar Articles
20 similar articles found.
1.
In vitro experiments need to be well designed and correctly analysed if they are to achieve their full potential to replace the use of animals in research. An "experiment" is a procedure for collecting scientific data in order to test a hypothesis, or to provide material for generating new hypotheses, and differs from a survey because the scientist has control over the treatments that can be applied. Most experiments can be classified into one of a few formal designs, the most common being completely randomised and randomised block designs. These are quite common with in vitro experiments, which are often replicated in time. Some experiments involve a single independent (treatment) variable, while other "factorial" designs simultaneously vary two or more independent variables, such as drug treatment and cell line. Factorial designs often provide additional information at little extra cost. Experiments need to be carefully planned to avoid bias, be powerful yet simple, provide for a valid statistical analysis and, in some cases, have a wide range of applicability. Virtually all experiments need some sort of statistical analysis in order to take account of biological variation among the experimental subjects. Parametric methods using the t test or analysis of variance are usually more powerful than non-parametric methods, provided the underlying assumptions of normality of the residuals and equal variances are approximately valid. The statistical analyses of data from a completely randomised design and from a randomised block design are demonstrated in Appendices 1 and 2, and methods of determining sample size are discussed in Appendix 3. Appendix 4 gives a checklist for authors submitting papers to ATLA.
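The parametric analysis described above can be sketched in a few lines. This is an illustrative example with invented data, not the analysis from the paper's appendices: a completely randomised design with three treatment groups, analysed by one-way ANOVA, with informal checks of the normality and equal-variance assumptions.

```python
# Hypothetical sketch of a completely randomised design with one
# treatment factor, analysed by parametric one-way ANOVA. All data
# values are invented for illustration.
from scipy import stats

control = [4.1, 3.8, 4.4, 4.0, 3.9]
dose_low = [4.6, 4.9, 4.3, 4.7, 4.5]
dose_high = [5.8, 6.1, 5.5, 5.9, 6.0]

# One-way ANOVA tests whether any group mean differs.
f_stat, p_value = stats.f_oneway(control, dose_low, dose_high)

# Informal checks of the parametric assumptions: normality of the
# residuals (Shapiro-Wilk) and equal variances (Levene's test).
residuals = [x - sum(g) / len(g) for g in (control, dose_low, dose_high) for x in g]
_, p_normal = stats.shapiro(residuals)
_, p_equal_var = stats.levene(control, dose_low, dose_high)
```

If the assumption checks failed badly, a non-parametric alternative such as the Kruskal-Wallis test would be the usual fallback, at some cost in power.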

2.
Although there have been several papers recommending appropriate experimental designs for ancient-DNA studies, there have been few attempts at statistical analysis. We assume that we cannot decide whether a result is authentic simply by examining the sequence (e.g., when working with humans and domestic animals). We use a maximum-likelihood approach to estimate the probability that a positive result from a sample is (either partly or entirely) an amplification of DNA that was present in the sample before the experiment began. Our method is useful in two situations. First, we can decide in advance how many samples will be needed to achieve a given level of confidence. For example, to be almost certain (95% confidence interval 0.96-1.00, maximum-likelihood estimate 1.00) that a positive result comes, at least in part, from DNA present before the experiment began, we need to analyze at least five samples and controls, even if all samples and no negative controls yield positive results. Second, we can decide how much confidence to place in results that have been obtained already, whether or not there are positive results from some controls. For example, the risk that at least one negative control yields a positive result increases with the size of the experiment, but the effects of occasional contamination are less severe in large experiments.
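The point about negative controls can be illustrated with elementary probability (this is not the authors' maximum-likelihood method): if each reaction carries some fixed contamination probability, the chance that at least one of n negative controls turns positive grows with n.

```python
# Hypothetical sketch, not the authors' maximum-likelihood method:
# with a fixed per-reaction contamination probability p, the chance
# that at least one of n negative controls yields a positive result
# is 1 - (1 - p)**n, which grows with the size of the experiment.
def p_any_control_positive(p_contam: float, n_controls: int) -> float:
    """Probability that at least one negative control is positive."""
    return 1.0 - (1.0 - p_contam) ** n_controls

small = p_any_control_positive(0.05, 5)    # small experiment
large = p_any_control_positive(0.05, 50)   # large experiment
```

The per-reaction probability of 0.05 is an assumed number for illustration only.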

3.
Experimental Design and Statistical Analysis in Biomedical Animal Experiments   (Total citations: 1; self-citations: 0; citations by others: 1)
Experimental design and statistical analysis play a key role in the initiation, conduct, and evaluation of animal experiments. We review the factors, principles, and types of experimental design, explain the importance of statistical analysis at every stage of a study, and highlight statistical issues that are easily overlooked in biomedical animal experiments.

4.
Bayesian clinical trial designs offer the possibility of a substantially reduced sample size, increased statistical power, and reductions in cost and ethical hazard. However, when prior and current information conflict, Bayesian methods can lead to higher-than-expected type I error, as well as the possibility of a costlier and lengthier trial. This motivates an investigation of the feasibility of hierarchical Bayesian methods for incorporating historical data that are adaptively robust to prior information that reveals itself to be inconsistent with the accumulating experimental data. In this article, we present several models that allow the commensurability of the information in the historical and current data to determine how much historical information is used. A primary tool is elaborating the traditional power prior approach based upon a measure of commensurability for Gaussian data. We compare the frequentist performance of several methods using simulations, and close with an example of a colon cancer trial that illustrates a linear-models extension of our adaptive borrowing approach. Our proposed methods produce more precise estimates of the model parameters, in particular conferring statistical significance to the observed reduction in tumor size for the experimental regimen as compared to the control regimen.
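A minimal sketch of the traditional power prior idea that the article elaborates, for a Gaussian mean with known variance. The numbers and the flat initial prior are assumptions for illustration; this is not the article's commensurate-prior model.

```python
# Sketch of a power prior for a Gaussian mean with known variance:
# historical data are discounted by a weight a0 in [0, 1] before being
# combined with the current data. All inputs are invented.
def power_prior_posterior(y_bar, n, y0_bar, n0, sigma2, a0):
    """Posterior mean and variance for a Gaussian mean with known
    variance sigma2, given current data (y_bar, n) and historical data
    (y0_bar, n0) downweighted by a0, under a flat initial prior."""
    prec = n / sigma2 + a0 * n0 / sigma2
    mean = (n * y_bar + a0 * n0 * y0_bar) / (n + a0 * n0)
    return mean, 1.0 / prec

# a0 = 1 pools the historical data fully; a0 = 0 ignores them. The
# commensurate methods in the abstract effectively let the degree of
# prior-data conflict choose a0 adaptively.
m_full, v_full = power_prior_posterior(1.0, 20, 2.0, 100, 4.0, 1.0)
m_none, v_none = power_prior_posterior(1.0, 20, 2.0, 100, 4.0, 0.0)
```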

5.
INTRODUCTION: Microarray experiments often have complex designs that include sample pooling, biological and technical replication, sample pairing and dye-swapping. This article demonstrates how statistical modelling can illuminate issues in the design and analysis of microarray experiments, and this information can then be used to plan effective studies. METHODS: A very detailed statistical model for microarray data is introduced, to show the possible sources of variation that are present in even the simplest microarray experiments. Based on this model, the efficacy of common experimental designs, normalisation methodologies and analyses is determined. RESULTS: When the cost of the arrays is high compared with the cost of samples, sample pooling and spot replication are shown to be efficient variance reduction methods, whereas technical replication of whole arrays is demonstrated to be very inefficient. Dye-swap designs can use biological replicates rather than technical replicates to improve efficiency and simplify analysis. When the cost of samples is high and technical variation is a major portion of the error, technical replication can be cost-effective. Normalisation by centring on a small number of spots may reduce array effects, but can introduce considerable variation in the results. Centring using the bulk of spots on the array is less variable. Similarly, normalisation methods based on regression can introduce variability. Except for normalisation methods based on spiking controls, all normalisation requires that most genes are not differentially expressed. Methods based on spatial location and/or intensity also require that the non-differentially expressed genes are distributed at random with respect to location and intensity. Spotting designs should be planned carefully, so that spot replicates are widely spaced on the array and genes with similar expression patterns are not clustered.
DISCUSSION: The tools for statistical design of experiments can be applied to microarray experiments to improve both the efficiency and the validity of the studies. Given the high cost of microarray experiments, the benefits of statistical input prior to running the experiment cannot be over-emphasised.
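The claim that centring on the bulk of spots is less variable than centring on a few spots can be illustrated with a small simulation (the array effects and spot counts are invented, not the paper's model):

```python
# Simulated log-ratios for 1000 spots on each of 4 arrays, with
# array-specific offsets (array effects) added. Invented numbers.
import numpy as np

rng = np.random.default_rng(0)
offsets = np.array([0.5, -0.3, 0.2, -0.4])
data = rng.normal(0.0, 1.0, size=(4, 1000)) + offsets[:, None]

# Centre each array on the median of all its spots (the "bulk"
# strategy the abstract recommends as less variable).
bulk_centred = data - np.median(data, axis=1, keepdims=True)

# Centring on only a handful of spots leaves a noisier residual
# array effect.
few_centred = data - np.median(data[:, :5], axis=1, keepdims=True)
```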

6.
MOTIVATION: Many biomedical experiments are carried out by pooling individual biological samples. However, pooling samples can potentially hide biological variance and give false confidence concerning the data significance. In the context of microarray experiments for detecting differentially expressed genes, recent publications have addressed the problem of the efficiency of sample pooling, and some approximate formulas were provided for the power and sample size calculations. It is desirable to have exact formulas for these calculations and have the approximate results checked against the exact ones. We show that the difference between the approximate and the exact results can be large. RESULTS: In this study, we have characterized quantitatively the effect of pooling samples on the efficiency of microarray experiments for the detection of differential gene expression between two classes. We present exact formulas for calculating the power of microarray experimental designs involving sample pooling and technical replications. The formulas can be used to determine the total number of arrays and biological subjects required in an experiment to achieve the desired power at a given significance level. The conditions under which a pooled design becomes preferable to a non-pooled design can then be derived, given the unit cost associated with a microarray and that with a biological subject. This paper thus serves to provide guidance on sample pooling and cost-effectiveness. The formulation in this paper is outlined in the context of performing microarray comparative studies, but its applicability is not limited to microarray experiments. It is also applicable to a wide range of biomedical comparative studies where sample pooling may be involved.
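A hedged sketch of the kind of calculation involved, using a normal approximation rather than the paper's exact formulas: pooling k biological subjects per array divides the biological variance component by k, which raises power for a fixed number of arrays. All variance components and costs below are invented.

```python
# Normal-approximation power for a two-class microarray comparison
# with pooling. This is an illustrative approximation, NOT the exact
# formulas of the paper; all parameter values are invented.
from math import sqrt
from statistics import NormalDist

def pooled_power(delta, sigma_b2, sigma_e2, n_arrays, pool_size, alpha=0.05):
    """Approximate power with n_arrays arrays per class, each array
    hybridised with a pool of pool_size biological subjects."""
    var = sigma_b2 / pool_size + sigma_e2   # variance per array measurement
    se = sqrt(2.0 * var / n_arrays)          # s.e. of the class difference
    z_alpha = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    return NormalDist().cdf(delta / se - z_alpha)

no_pool = pooled_power(1.0, sigma_b2=1.0, sigma_e2=0.2, n_arrays=6, pool_size=1)
pooled = pooled_power(1.0, sigma_b2=1.0, sigma_e2=0.2, n_arrays=6, pool_size=5)
```

When biological variance dominates technical variance, pooling helps most; comparing such power curves against per-unit costs is how the pooled-versus-non-pooled trade-off in the abstract is resolved.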

7.
Bone Morphogenetic Proteins (BMPs) are critical for pattern formation in many animals. In numerous tissues, BMPs become distributed in spatially non-uniform profiles. The gradients of signaling activity can be detected by a number of biological assays involving fluorescence microscopy. Quantitative analyses of BMP gradients are powerful tools to investigate the regulation of BMP signaling pathways during development. These approaches rely heavily on images as spatial representations of BMP activity levels, using them to infer signaling distributions that inform on regulatory mechanisms. In this perspective, we discuss current imaging assays and normalization methods used to quantify BMP activity profiles with a focus on the Drosophila wing primordium. We find that normalization tends to lower the number of samples required to establish statistical significance between profiles in controls and experiments, but the increased resolvability comes with a cost. Each normalization strategy makes implicit assumptions about the biology that impacts our interpretation of the data. We examine the tradeoffs for normalizing versus not normalizing, and discuss their impacts on experimental design and the interpretation of resultant data.

8.
Optimization of experiments, such as those used in drug discovery, can lead to useful savings of scientific resources. Factors such as sex, strain, and age of the animals and protocol-specific factors such as timing and methods of administering treatments can have an important influence on the response of animals to experimental treatments. Factorial experimental designs can be used to explore which factors and what levels of these factors will maximize the difference between a vehicle control and a known positive control treatment. This information can then be used to design more efficient experiments, either by reducing the numbers of animals used or by increasing the sensitivity so that smaller biological effects can be detected. A factorial experimental design approach is more effective and efficient than the older approach of varying one factor at a time. Two examples of real factorial experiments reveal how using this approach can potentially lead to a reduction in animal use and savings in financial and scientific resources without loss of scientific validity.
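The efficiency argument can be illustrated with a simulated 2x2 factorial (the factors and effect sizes are invented): every animal contributes to the estimate of both main effects, whereas varying one factor at a time would need separate groups for each question.

```python
# Hypothetical 2x2 factorial: factor A (e.g., sex, coded 0/1) and
# factor B (e.g., strain, coded 0/1), 5 animals per cell, with true
# main effects of 2.0 and 1.0 on an invented response.
import numpy as np

rng = np.random.default_rng(1)
a = np.repeat([0, 0, 1, 1], 5)
b = np.repeat([0, 1, 0, 1], 5)
y = 10.0 + 2.0 * a + 1.0 * b + rng.normal(0, 0.5, size=20)

# In a factorial design every animal informs BOTH main-effect
# estimates at once:
effect_a = y[a == 1].mean() - y[a == 0].mean()
effect_b = y[b == 1].mean() - y[b == 0].mean()
```

A one-factor-at-a-time study of the same two questions would need two separate experiments of comparable size, which is the source of the animal savings the abstract describes.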

9.
ABSTRACT: Adaptive designs allow planned modifications based on data accumulating within a study. The promise of greater flexibility and efficiency stimulates increasing interest in adaptive designs from clinical, academic, and regulatory parties. When adaptive designs are used properly, efficiencies can include a smaller sample size, a more efficient treatment development process, and an increased chance of correctly answering the clinical question of interest. However, improper adaptations can lead to biased studies. A broad definition of adaptive designs allows for countless variations, which creates confusion as to the statistical validity and practical feasibility of many designs. Determining the properties of a particular adaptive design requires careful consideration of the scientific context and statistical assumptions. We first review several adaptive designs that garner the most current interest. We focus on the design principles and research issues that lead to particular designs being appealing or unappealing in particular applications. We separately discuss exploratory- and confirmatory-stage designs in order to account for the differences in regulatory concerns. We include adaptive seamless designs, which combine stages in a unified approach. We also highlight a number of applied areas, such as comparative effectiveness research, that would benefit from the use of adaptive designs. Finally, we describe a number of current barriers and provide initial suggestions for overcoming them in order to promote wider use of appropriate adaptive designs. Given the breadth of the coverage, all mathematical and most implementation details are omitted for the sake of brevity. However, the interested reader will find that we provide current references to focused reviews and original theoretical sources that lead to details of the current state of the art in theory and practice.

10.
An increasing number of studies are using landscape genomics to investigate local adaptation in wild and domestic populations. Implementation of this approach requires the sampling phase to consider the complexity of environmental settings and the burden of logistical constraints. These important aspects are often underestimated in the literature dedicated to sampling strategies. In this study, we computed simulated genomic data sets to run against actual environmental data in order to trial landscape genomics experiments under distinct sampling strategies. These strategies differed by design approach (to enhance environmental and/or geographical representativeness at study sites), number of sampling locations and sample sizes. We then evaluated how these elements affected statistical performances (power and false discoveries) under two antithetical demographic scenarios. Our results highlight the importance of selecting an appropriate sample size, which should be modified based on the demographic characteristics of the studied population. For species with limited dispersal, sample sizes above 200 units are generally sufficient to detect most adaptive signals, while in random-mating populations this threshold should be increased to 400 units. Furthermore, we describe a design approach that maximizes both environmental and geographical representativeness of sampling sites and show how it systematically outperforms random or regular sampling schemes. Finally, we show that although having more sampling locations (between 40 and 50 sites) increases statistical power and reduces the false discovery rate, similar results can be achieved with a moderate number of sites (20 sites). Overall, this study provides valuable guidelines for optimizing sampling strategies for landscape genomics experiments.

11.
For ethical and economic reasons, it is important to design animal experiments well, to analyze the data correctly, and to use the minimum number of animals necessary to achieve the scientific objectives, but not so few as to miss biologically important effects or require unnecessary repetition of experiments. Investigators are urged to consult a statistician at the design stage and are reminded that no experiment should ever be started without a clear idea of how the resulting data are to be analyzed. These guidelines are provided to help biomedical research workers perform their experiments efficiently and analyze their results so that they can extract all useful information from the resulting data. Among the topics discussed are the varying purposes of experiments (e.g., exploratory vs. confirmatory); the experimental unit; the necessity of recording full experimental details (e.g., species, sex, age, microbiological status, strain and source of animals, and husbandry conditions); assigning experimental units to treatments using randomization; other aspects of the experiment (e.g., timing of measurements); using formal experimental designs (e.g., completely randomized and randomized block); estimating the size of the experiment using power and sample size calculations; screening raw data for obvious errors; using the t-test or analysis of variance for parametric analysis; and effective design of graphical data.
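Randomised allocation within blocks, one of the practices these guidelines recommend, can be sketched as follows (the cage and treatment names are invented for illustration):

```python
# Hypothetical sketch: assigning experimental units to treatments by
# randomisation within blocks (a randomised block design). Here the
# cage is the block, and each cage receives every treatment once,
# in an independently randomised order.
import random

treatments = ["control", "low_dose", "high_dose"]
random.seed(42)  # fixed seed so the allocation is reproducible

design = {}
for cage in range(1, 5):            # 4 cages = 4 blocks
    order = treatments[:]
    random.shuffle(order)           # independent randomisation per block
    design[f"cage_{cage}"] = order
```

Recording the seed alongside the allocation is a cheap way to make the randomisation auditable, in the spirit of recording full experimental details.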

12.
One of the most fundamental challenges in genome-wide RNA interference (RNAi) screens is to glean biological significance from mounds of data, which relies on the development and adoption of appropriate analytic methods and designs for quality control (QC) and hit selection. Currently, a Z-factor-based QC criterion is widely used to evaluate data quality. However, this criterion cannot take into account the fact that different positive controls may have different effect sizes, and it leads to inconsistent QC results in experiments with two or more positive controls with different effect sizes. In this study, based on a recently proposed parameter, the strictly standardized mean difference (SSMD), novel QC criteria are constructed for evaluating data quality in genome-wide RNAi screens. These novel criteria have two good features: (1) SSMD has both clear original and probability meanings for evaluating the differentiation between positive and negative controls, and hence the SSMD-based QC criteria have a solid probabilistic and statistical basis; and (2) these QC criteria obtain consistent QC results for multiple positive controls with different effect sizes. In addition, I propose multiple plate designs and guidelines for using them in genome-wide RNAi screens. Finally, I provide strategies for using the SSMD-based QC criteria and effective plate design together to improve data quality. The novel SSMD-based QC criteria, effective plate designs, and related guidelines and strategies may greatly help to obtain high-quality data in genome-wide RNAi screens.
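SSMD itself has a simple form: the difference between the positive- and negative-control means divided by the standard deviation of their difference. A sketch with invented plate readings shows how controls of different strengths get different SSMD values, which is what lets the criteria set a separate QC threshold per control:

```python
# Strictly standardized mean difference (SSMD) between a positive and
# a negative control on a plate. Control readings are invented.
from statistics import mean, variance

def ssmd(positive, negative):
    """Difference of means divided by the s.d. of the difference,
    assuming independent controls."""
    return (mean(positive) - mean(negative)) / (
        variance(positive) + variance(negative)
    ) ** 0.5

strong_pos = [95, 92, 97, 94, 96]   # strong positive control readings
weak_pos = [60, 55, 65, 58, 62]     # weaker positive control
neg = [10, 12, 9, 11, 13]           # negative control

# A stricter QC threshold can be applied to the strong control than to
# the weak one, which is how SSMD accommodates different effect sizes.
ssmd_strong = ssmd(strong_pos, neg)
ssmd_weak = ssmd(weak_pos, neg)
```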

13.
The determination of a list of differentially expressed genes is a basic objective in many cDNA microarray experiments. We present a statistical approach that allows direct control over the percentage of false positives in such a list and, under certain reasonable assumptions, improves on existing methods with respect to the percentage of false negatives. The method accommodates a wide variety of experimental designs and can simultaneously assess significant differences between multiple types of biological samples. Two interconnected mixed linear models are central to the method and provide a flexible means to properly account for variability both across and within genes. The mixed model also provides a convenient framework for evaluating the statistical power of any particular experimental design and thus enables a researcher to a priori select an appropriate number of replicates. We also suggest some basic graphics for visualizing lists of significant genes. Analyses of published experiments studying human cancer and yeast cells illustrate the results.

14.
Determining sample sizes for microarray experiments is important, but the complexity of these experiments, and the large amounts of data they produce, can make the sample size issue seem daunting and tempt researchers to use rules of thumb in place of formal calculations based on the goals of the experiment. Here we present formulae for determining sample sizes to achieve a variety of experimental goals, including class comparison and the development of prognostic markers. Results are derived which describe the impact of pooling, technical replicates and dye-swap arrays on sample size requirements. These results are shown to depend on the relative sizes of different sources of variability. A variety of common types of experimental situations and designs used with single-label and dual-label microarrays are considered. We discuss procedures for controlling the false discovery rate. Our calculations are based on relatively simple yet realistic statistical models for the data, and provide straightforward sample size calculation formulae.

15.
Data processing systems for the management of animal houses are available as individually tailored solutions to meet the specific requirements of different institutions in various sectors. After four years of use, a “proprietary system” (originally used by Boehringer/Mannheim), which was based on UNIFACE and ORACLE database software and consisted of a VAX station with eight terminals, was replaced by a PC-based multi-user system for administrative employees and animal technicians. The new system runs under the operating systems of Microsoft Windows 98 and Microsoft Windows NT and uses Microsoft Access 97 as the database software. A “multipurpose program system” was created in order to fulfil the following main functions: documentation and control of experimental plans and animal usage; registering changes in the number of animals available; controlling room use and animal storage areas; maintenance cost calculations for determining the cost of experiments and an evaluation of how animals are used and consumed.

CAMS (Computerised Animal House Management System) has a centralised/decentralised structure and is a user-friendly, interactive system that can be accessed only by authorised groups of users. No advanced computer skills are needed to master the system. It can generate daily and monthly reports and any necessary documentation. Furthermore, it complies with the revised Animal Protection Act (1998; annual report on the number of animals used), facilitates a detailed analysis of housing and the associated costs, and provides options for documenting different types of experimental data.

For this reason, the system has become an increasingly important tool for the management of our own animal facilities and animal experiments as well as for external ones.


16.
Scientists who use animals in research must justify the number of animals to be used, and committees that review proposals to use animals in research must review this justification to ensure the appropriateness of the number of animals to be used. This article discusses when the number of animals to be used can best be estimated from previous experience and when a simple power and sample size calculation should be performed. Even complicated experimental designs requiring sophisticated statistical models for analysis can usually be simplified to a single key or critical question, so that simple formulae can be used to estimate the required sample size. Approaches to sample size estimation for various types of hypotheses are described, and equations are provided in the Appendix. Several web sites are cited for more information and for performing actual calculations.
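For the simplest case, comparing two group means, such "simple formulae" reduce to the standard normal-approximation expression. A sketch (the default alpha and power, and the specific effect sizes, are assumptions for illustration, not taken from the article's Appendix):

```python
# Sample size per group for a two-sided, two-sample comparison of
# means, using the usual normal-approximation formula:
#   n = 2 * (z_{1-alpha/2} + z_{power})**2 * (sigma / delta)**2
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.9):
    """Animals per group to detect a mean difference delta with
    common standard deviation sigma."""
    z = NormalDist().inv_cdf
    n = 2.0 * (z(1.0 - alpha / 2.0) + z(power)) ** 2 * (sigma / delta) ** 2
    return ceil(n)

# Halving the detectable effect roughly quadruples the requirement.
n_large_effect = n_per_group(delta=1.0, sigma=1.0)   # 1 s.d. difference
n_small_effect = n_per_group(delta=0.5, sigma=1.0)   # 0.5 s.d. difference
```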

17.
ABSTRACT

Nonhuman animal welfare science is the scientific study of the welfare state of animals that attempts to make inferences about how animals feel from their behavior, endocrine function, and/or signs of physical health. These welfare measurements are applicable within zoos yet inherently more complex than in farms and laboratories. This complexity is due to the vast number of species housed, lack of fundamental biological information, and relatively lower sample sizes and levels of experimental control. This article summarizes the invited presentations on the topic of "Advances in Applied Animal Welfare Science," given at the Fourth Global Animal Welfare Congress held jointly by the Detroit Zoological Society and the World Association of Zoos and Aquariums in 2017. The article focuses on current trends in research on zoo animal welfare under the following themes: (a) human–animal interactions and relationships, (b) anticipatory behavior, (c) cognitive enrichment, (d) behavioral biology, and (e) reproductive and population management. It highlights areas in which further advancements in zoo animal welfare science are needed and the challenges that may be faced in doing so.

18.
19.
The crossover design is often used in biomedical trials, since it eliminates between-subject variability. This paper is concerned with the statistical analysis of data arising from such trials when assumptions like normality do not necessarily apply. Nonparametric analysis of the two-period, two-treatment design was first described by Koch in a 1972 paper. The purpose of this paper is to study nonparametric methods in crossover designs with three or more treatments and an equal number of periods. The proposed test for direct treatment effects is based on within-subject comparisons after removing a possible period effect. With only two treatments, this test reduces to the two-sided Wilcoxon signed rank test. Simulation experiments confirm the validity of the significance level of the test when the asymptotic distribution of the test statistic is used, and illustrate its power against different alternatives. A test for first-order carryover effects can be constructed by a straightforward generalization of the test proposed by Koch in 1972. However, since this test is based on between-subject comparisons, its power will be low. Our recommendation is to consider the crossover design rather than the parallel-group design if the carryover effects are assumed to be negligible, or positive and smaller than the direct treatment effects.
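For the two-period, two-treatment case, the reduction to the two-sided Wilcoxon signed rank test can be sketched directly (the measurements are invented, and period effects are ignored for simplicity):

```python
# Hypothetical two-period, two-treatment crossover data: each subject
# receives both treatments, so the test runs on within-subject
# differences, reducing to the two-sided Wilcoxon signed rank test.
from scipy import stats

on_treatment_a = [5.1, 6.0, 5.5, 6.2, 5.8, 6.4, 5.9, 6.1]
on_treatment_b = [4.2, 5.2, 4.9, 5.0, 4.8, 5.45, 5.2, 4.7]

diffs = [a - b for a, b in zip(on_treatment_a, on_treatment_b)]
stat, p_value = stats.wilcoxon(diffs)  # two-sided by default
```

Because the comparison is within subjects, between-subject variability cancels out of the differences, which is the advantage of the crossover design the abstract leads with.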

20.
Investigating differences between means of more than two groups or experimental conditions is a routine research question addressed in biology. In order to assess differences statistically, multiple comparison procedures are applied. The most prominent procedures of this type, the Dunnett and Tukey-Kramer tests, control the probability of reporting at least one false positive result when the data are normally distributed and when the sample sizes and variances do not differ between groups. All three assumptions are unrealistic in biological research, and any violation leads to an increased number of reported false positive results. Based on a general statistical framework for simultaneous inference and robust covariance estimators, we propose a new multiple comparison procedure for assessing multiple means. In contrast to the Dunnett or Tukey-Kramer tests, no assumptions regarding the distribution, sample sizes or variance homogeneity are necessary. The performance of the new procedure is assessed by means of its familywise error rate and power under different distributions. The practical merits are demonstrated by a reanalysis of fatty acid phenotypes of the bacterium Bacillus simplex from the "Evolution Canyons" I and II in Israel. The simulation results show that, even under severely varying variances, the procedure controls the number of false positive findings very well. Thus, the procedure presented here works well under biologically realistic scenarios of unbalanced group sizes, non-normality and heteroscedasticity.
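The paper's robust simultaneous procedure is not reproduced here, but its motivation can be illustrated with a simpler stand-in: pairwise Welch t-tests (which drop the equal-variance assumption) combined with a Holm step-down correction to control the familywise error rate. The groups and data are invented.

```python
# Simpler robust stand-in for heteroscedastic multiple comparisons:
# pairwise Welch t-tests plus a Holm step-down adjustment. Not the
# paper's procedure; data are invented.
from itertools import combinations
from scipy import stats

groups = {
    "A": [10.1, 9.8, 10.4, 10.0, 9.9, 10.2],
    "B": [10.0, 10.3, 9.7, 10.1, 9.9, 10.2],
    "C": [12.1, 13.0, 11.5, 12.6, 11.9, 12.8],  # shifted, larger variance
}

# Welch t-test (equal_var=False) makes no variance-homogeneity assumption.
raw = {
    (g1, g2): stats.ttest_ind(groups[g1], groups[g2], equal_var=False).pvalue
    for g1, g2 in combinations(groups, 2)
}

# Holm step-down adjustment of the raw p-values, smallest first.
ordered = sorted(raw, key=raw.get)
m = len(ordered)
adjusted, running_max = {}, 0.0
for i, pair in enumerate(ordered):
    running_max = max(running_max, (m - i) * raw[pair])
    adjusted[pair] = min(1.0, running_max)
```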
