Similar Literature
20 similar records found
1.
2.
Two issues that arise in the design and statistical analysis of in vivo sister chromatid exchange (SCE) and similar experiments are considered. First, with regard to analysis, the merits of various methods of data transformation are explored in depth; the conclusion drawn is that common transformations of the type studied here offer little advantage in assessing whether a test agent induces SCE in a dose-related manner. Second, a method is proposed to determine, subject to budgetary constraints, the desired number of animals per dose group and of cells scored per animal. The approach also lends itself to weighing the gains and losses from possible reductions in the number of animals below the 'desired' levels.
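The abstract does not give the allocation method itself. The sketch below is a standard components-of-variance allocation under a linear cost constraint, which captures the trade-off described: for Var(group mean) = var_animal/n + var_cell/(n*m) and budget n*(cost_animal + m*cost_cell), the optimal number of cells per animal is m = sqrt(cost_animal*var_cell / (cost_cell*var_animal)). All costs and variance components are illustrative, not from the paper.

```python
import math

def optimal_allocation(budget, cost_animal, cost_cell, var_animal, var_cell):
    """Animals per dose group (n) and cells scored per animal (m) that
    minimize Var(group mean) = var_animal/n + var_cell/(n*m)
    subject to n * (cost_animal + m * cost_cell) <= budget."""
    m = max(1, round(math.sqrt(cost_animal * var_cell / (cost_cell * var_animal))))
    n = int(budget // (cost_animal + m * cost_cell))
    variance = var_animal / n + var_cell / (n * m)
    return n, m, variance

# Illustrative numbers: an animal costs 50 units, scoring one cell costs 1 unit.
n, m, v = optimal_allocation(budget=1000, cost_animal=50.0, cost_cell=1.0,
                             var_animal=4.0, var_cell=25.0)
print(f"animals/dose group: {n}, cells/animal: {m}, Var(mean) = {v:.3f}")
```

Rerunning the function with a reduced budget quantifies the variance penalty of scoring fewer animals, which is the kind of gains-and-losses discussion the abstract mentions.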

3.
This editorial is an annex to, and essentially an amplification of, earlier Animal Feed Science and Technology (AFST) editorials that focused on analytical and statistical issues in papers submitted for publication in AFST. The amplification is needed because, since those editorials were published, there has been a sharp increase in submissions to AFST (particularly in vitro gas, ruminal in sacco, continuous-culture fermenter and mini-silo studies) in which authors, reviewers and editors have disagreed substantively over what, exactly, constitutes an acceptable statistical replicate as opposed to a pseudo-replicate. In this editorial, the Co-Editors in Chief (CEIC) of AFST clarify their view on the use of analytical observations in statistical analyses in papers submitted for consideration for publication in AFST. If the objective is to compare feeds and, from the results, make inferences to populations, only multiple samples of each feed constitute an acceptable basis. This means that means comparisons based on repeated assays of the same sample, whether by chemical, physical or microbiological methods, will almost certainly be rejected by the CEIC.
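The statistical point at issue, that repeated assays of one sample are not replicates, can be shown in a short simulation. This is a generic illustration of pseudo-replication, not material from the editorial; all numbers are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sigma_sample, sigma_assay = 1.0, 0.3   # between-sample SD vs within-sample assay SD
n_sim, alpha, false_pos = 2000, 0.05, [0, 0]

for _ in range(n_sim):
    # Two feeds with IDENTICAL true means: any "significant" result is a false positive.
    # Correct design: 5 independent samples per feed, one assay per sample.
    a = rng.normal(0, sigma_sample, 5) + rng.normal(0, sigma_assay, 5)
    b = rng.normal(0, sigma_sample, 5) + rng.normal(0, sigma_assay, 5)
    false_pos[0] += stats.ttest_ind(a, b).pvalue < alpha
    # Pseudo-replication: ONE sample per feed, assayed 5 times.
    a = rng.normal(0, sigma_sample) + rng.normal(0, sigma_assay, 5)
    b = rng.normal(0, sigma_sample) + rng.normal(0, sigma_assay, 5)
    false_pos[1] += stats.ttest_ind(a, b).pvalue < alpha

print(f"false-positive rate, true replicates:   {false_pos[0] / n_sim:.3f}")
print(f"false-positive rate, pseudo-replicates: {false_pos[1] / n_sim:.3f}")
```

With true replicates the error rate sits near the nominal 5%; with pseudo-replicates the sample-to-sample variation is never estimated, so the test declares spurious feed differences most of the time.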

4.
5.
Contemporary small-molecule drug discovery frequently involves the screening of large compound files as a core activity, so cost, speed, and safety become critical issues. To meet this need, numerous technologies have been developed to allow mix-and-measure approaches, facilitate miniaturization, increase speed, and minimize the use of potentially hazardous reagents such as radioactive materials. Despite the on-paper advantages of these new technologies, however, risks can remain undefined: for example, will a novel method identify active chemical series in a way that is comparable with conventional methods? To address this question, we have carried out experiments that directly compare the output of high-throughput screens run with a given novel approach and with a traditional method. The concordance between the screening methods can then be determined by comparing the numbers and structures of the active molecules identified. This article describes the approach taken in our laboratory to minimize variability in such experiments and shows data exemplifying the general result of lower-than-expected concordance. Statistical modeling was subsequently used to aid interpretation. The model used a beta-distribution function to generate a real-activity frequency relationship, with added normal random error and occasional outliers to represent assay variability; the effects of assay parameters such as the threshold, the number of real actives, the number of outliers, and the standard deviation could then readily be explored. The model described the data reasonably well and, moreover, proved of great utility when planning further optimal experiments. A key conclusion from the model was that concordance between screening methods can appear poor even when one approach is compared with itself, simply because the result is a function of the assay threshold, the standard deviation, and the true compound % activity. In response to this finding we have adopted alternative experimental designs that more reliably measure the concordance between screening methods.
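A minimal re-creation of the described model: true activities drawn from a beta distribution, normal measurement error, and occasional outliers, with hit calling at a fixed threshold. The parameter values are illustrative, and the comparison is of one assay against itself, mirroring the paper's key conclusion.

```python
import numpy as np

rng = np.random.default_rng(7)
n_compounds, threshold, sd = 100_000, 50.0, 10.0   # % inhibition scale

# "Real" activity: most compounds inactive, a few active (beta-shaped frequency).
true_activity = 100 * rng.beta(0.2, 4.0, n_compounds)

def screen(truth):
    """One run of the assay: truth + normal error + occasional gross outliers."""
    measured = truth + rng.normal(0, sd, truth.size)
    outliers = rng.random(truth.size) < 0.001
    measured[outliers] = 100 * rng.random(outliers.sum())
    return measured >= threshold

# Two independent runs of the SAME assay.
hits1, hits2 = screen(true_activity), screen(true_activity)
concordance = (hits1 & hits2).sum() / hits1.sum()
print(f"hits in run 1: {hits1.sum()}, also hits in run 2: {(hits1 & hits2).sum()}")
print(f"concordance of an assay with itself: {concordance:.2f}")
```

Compounds whose true activity sits near the threshold flip between runs, so the concordance is well below 1 even with no systematic difference between "methods", which is exactly the effect the authors report.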

6.
On the statistical analysis of capture experiments. Huggins, R. M. Biometrika, 1989, 76(1): 133–140. [Total citations: 19 (0 self-citations, 19 by others).]

7.
To isolate the primary variables influencing acetabular cup and interface stresses, we evaluated cup loading and cup support variables using a Statistical Design of Experiments (SDOE) approach, with three-dimensional finite element models of the pelvis and adjacent bone. Cup support variables included fixation mechanism (cemented or noncemented), amount of bone support, and presence of metal backing. Cup loading variables included head size and cup thickness, cup/head friction, and conformity between the cup and head. Interactions between and among variables were determined using SDOE techniques. Of the variables tested, conformity, head size, and backing emerged as significant influences on stresses. Since initially nonconforming surfaces would be expected to wear into conforming surfaces, conformity is not expected to be a clinically significant variable. This indicates that head size should be tightly toleranced during manufacturing, since small changes in head size can have a disproportionate influence on the stress environment. Attention should also be paid to the use of non-metal-backed cups to limit cup/bone interface stresses. No combination of secondary variables could compensate for, or override the effect of, the primary variables. Based on the results of the SDOE approach, adaptive finite element models simulating the wear process may be able to limit their parameters to head size and cup backing.
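The study's actual design matrix is not given in the abstract. As a sketch of the SDOE idea, the code below builds a two-level full factorial in three of the named variables and separates main effects from two-way interactions by least squares; the stress responses are invented.

```python
import numpy as np
from itertools import product

# Two-level full factorial in three coded variables (-1/+1):
# conformity, head size, metal backing. Stress responses are hypothetical.
factors = np.array(list(product([-1, 1], repeat=3)), dtype=float)
stress = np.array([12.0, 18.5, 11.8, 17.9, 20.1, 31.0, 19.7, 30.4])  # MPa, invented

# Model matrix: intercept, 3 main effects, 3 two-way interactions.
A, B, C = factors.T
X = np.column_stack([np.ones(8), A, B, C, A * B, A * C, B * C])
coef, *_ = np.linalg.lstsq(X, stress, rcond=None)

names = ["mean", "conformity", "head size", "backing",
         "conf x head", "conf x backing", "head x backing"]
for name, c in zip(names, coef):
    print(f"{name:15s} {c:+.2f}")
```

The factorial structure is what lets a single small experiment rank main effects against their interactions, which is how the paper can conclude that no combination of secondary variables overrides the primary ones.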

8.
9.
Lockwood III  John R. 《Oecologia》1998,116(4):475-481
A stopping rule for an experiment defines when (under what conditions) the experiment is terminated. I investigated the stopping rules used in numerous multiple-choice feeding-preference experiments and also examined a recently proposed method for analyzing the data arising from such experiments. All of the surveyed experiments imposed stopping rules that result in a random total food consumption. If an acceptable quantification of preference is the relative consumption of different food types, then the proposed analysis will likely misstate the information about preference conveyed by the data, because the method may confound differences in preference among food types with differences in total consumption across trials. I discuss this issue in detail and present an alternative procedure that is appropriate under all stopping regimes when preference is quantified through relative consumption. The suggested procedure uses an index that is a multivariate generalization of the preference index of Kogan and Goeden (Ann Entomol Soc 1970; 63: 1175–1180) and Kogan (Ann Entomol Soc 1972; 65: 675–683), and that is analogous to the selection index for discrete food units proposed by Manly (Biometrics 1974; 30: 281–294). Received: 29 November 1997 / Accepted: 20 April 1998
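Lockwood's multivariate index is not reproduced in the abstract. The sketch below illustrates only the underlying relative-consumption idea: normalize each trial by its total so that a random total intake (induced by the stopping rule) does not masquerade as preference. The data and the one-food-at-a-time test are illustrative simplifications, not the paper's procedure.

```python
import numpy as np
from scipy import stats

# consumption[i, j]: amount of food type j eaten in trial i (hypothetical data).
consumption = np.array([
    [3.1, 1.2, 0.9],
    [5.0, 2.2, 1.1],
    [1.8, 0.7, 0.6],
    [4.4, 1.9, 1.5],
])

# Normalize each trial by its total so that trial-to-trial differences in
# total intake do not contaminate the preference comparison.
proportions = consumption / consumption.sum(axis=1, keepdims=True)

# Test whether each mean proportion deviates from the no-preference value 1/k.
k = consumption.shape[1]
for j in range(k):
    t, p = stats.ttest_1samp(proportions[:, j], 1.0 / k)
    print(f"food {j}: mean proportion {proportions[:, j].mean():.2f}, p = {p:.3f}")
```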

10.
MOTIVATION: Microarray experiments generate a high data volume, yet often, due to financial or experimental constraints (e.g. lack of sample), there is little or no replication of the experiments or hybridizations. These factors, combined with the intrinsic variability of gene-expression measurements, can result in an unsatisfactory detection rate of differential gene expression (DGE). Our motivation was to provide an easy-to-use measure of the success rate of DGE detection that could find routine use in the design of microarray experiments or in post-experiment assessment. RESULTS: In this study, we address both random errors and systematic biases in microarray experimentation. We propose a mathematical model for the measured data in microarray experiments and, on the basis of this model, present a t-based statistical procedure to determine DGE. We have derived a formula for the success rate of DGE detection that takes into account the number of microarrays, the number of genes, the magnitude of DGE, and the variance from biological and technical sources. The formula, and look-up tables based on it, can be used to assist in the design of microarray experiments. We also propose an ad hoc method for estimating the fraction of non-differentially expressed genes within a set of genes being tested, which helps to increase the power of DGE detection. AVAILABILITY: The functions to calculate the success rate of DGE detection have been implemented as a Java application, accessible at http://www.le.ac.uk/mrctox/microarray_lab/Microarray_Softwares/Microarray_Softwares.htm
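The authors' formula is not given in the abstract. The sketch below is a standard power calculation in the same spirit: a two-sample t-test per gene with a Bonferroni-adjusted threshold, evaluated with the noncentral t distribution. The function and its arguments are an assumed form, not the paper's.

```python
from scipy import stats

def dge_success_rate(n_arrays, n_genes, delta, sigma, alpha=0.05):
    """Probability of detecting a gene with true log2 fold change `delta`,
    using a two-sample t-test with a Bonferroni threshold over `n_genes`
    genes and `n_arrays` arrays per condition; `sigma` pools biological
    and technical SD on the log2 scale."""
    df = 2 * n_arrays - 2
    t_crit = stats.t.ppf(1 - (alpha / n_genes) / 2, df)
    ncp = delta / (sigma * (2 / n_arrays) ** 0.5)   # noncentrality parameter
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

# e.g. a 4-fold change (delta = 2 on the log2 scale), SD 0.5, 10,000 genes:
for n in (3, 5, 8):
    print(f"{n} arrays/condition: success rate {dge_success_rate(n, 10_000, 2.0, 0.5):.2f}")
```

Sweeping `n_arrays` in this way is how such a formula supports experiment design: it shows directly how many arrays are needed before the detection rate becomes acceptable.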

11.
The scientific value of the outcome of an experiment is closely related to its design and analysis. This article deals with the design issues of pseudoreplication (whether the experimental design has the statistical features needed to answer the question as posed) and execution errors (problems arising from how the experiment was conducted). Three issues of analysis are also addressed: the number and type of response measures to record; how measures should, and should not, be combined into a single response measure; and how to interpret an apparent lack of response. Interactive playback is considered separately because it raises its own specific design and analysis issues. Although the examples generally refer to video playback, these issues are common to all experiments in behaviour. Received: 23 September 1999 / Received in revised form: 24 February 2000 / Accepted: 25 February 2000
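On combining measures: one common (though not universally valid) approach is to standardize each measure and take the first principal component as the composite response; whether such a combination is appropriate is exactly the kind of question the article examines. The sketch below is illustrative only, with random data standing in for real behavioural measures.

```python
import numpy as np

# responses[i, j]: trial i, response measure j (e.g. latency, rate, duration).
rng = np.random.default_rng(3)
responses = rng.normal(size=(30, 4))   # hypothetical data

# Standardize each measure, then take the first principal component as the
# single composite response, rather than an arbitrary sum of raw measures.
Z = (responses - responses.mean(0)) / responses.std(0, ddof=1)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
composite = Z @ Vt[0]
print("loadings of each measure on the composite:", np.round(Vt[0], 2))
```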

12.
13.
Selection and optimization of the concentrations of chemically defined medium components affecting recombinant factor VIII:C production by CHO cells were performed with statistically designed experiments. Components influencing factor VIII expression were screened using the Hadamard method, while the concentrations of two components were precisely optimized with the Doehlert method. Increased specific activity of factor VIII and a reduced medium cost were obtained with fewer experiments than a less systematic approach would have required.
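As a sketch of the screening step: an 8-run Hadamard two-level design can screen up to seven components, and each main effect is estimated as the difference between the mean responses at the high and low levels. The design matrix comes from scipy; the component assignments and activity values are invented, and the Doehlert optimization step is not shown.

```python
import numpy as np
from scipy.linalg import hadamard

# 8-run Hadamard screen for up to 7 medium components. Column j of H
# (excluding the all-ones first column) codes component j high (+1) / low (-1).
H = hadamard(8)
design = H[:, 1:]                      # 8 runs x 7 two-level factors
fviii_activity = np.array([1.2, 0.8, 1.9, 1.1, 2.3, 0.9, 1.7, 1.4])  # hypothetical IU/mL

# Main effect of each component: mean response at +1 minus mean at -1.
effects = design.T @ fviii_activity / (len(fviii_activity) / 2)
for j, e in enumerate(effects, 1):
    print(f"component {j}: effect {e:+.2f}")
```

The orthogonal columns are what allow seven factors to be screened with only eight cultures, which is the economy the abstract highlights.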

14.
15.
16.
Proteomics is an emerging field that uses many types of proteomic platforms but has few standardized procedures. Deciding which platform to use for a large-scale proteomic study is typically based either on personal preference or on so-called "figures of merit" such as dynamic range, resolution, and limit of detection; these factors are often insufficient to predict the outcome of an experiment, because the detection of peptides correlates with the chemical properties of each peptide. There is a need for a novel figure of merit that describes the overall performance of a platform based on measured output, which in proteomics is often a list of identified peptides. We report the development of such a figure of merit based on a predictive genetic algorithm that takes into account properties of the observed peptides such as length, hydrophobicity, and pI. Several large-scale studies differing in sample type or platform were used to demonstrate the usefulness of the algorithm for improved experimental design. The resulting figures were clustered to find platforms that are biased in similar ways: even platforms that differ can lead to the identification of similar peptide types and are thus redundant. The algorithm can therefore be used as an exploratory tool to suggest a minimal number of complementary experiments that maximize experimental efficiency.
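The genetic algorithm itself is not specified in the abstract. The sketch below computes two of the peptide descriptors it is said to use, length and hydrophobicity (GRAVY on the Kyte-Doolittle scale), for hypothetical identification lists from two platforms; pI is omitted for brevity.

```python
# Kyte-Doolittle hydropathy scale.
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def gravy(peptide: str) -> float:
    """Grand average of hydropathy: mean Kyte-Doolittle value per residue."""
    return sum(KD[aa] for aa in peptide) / len(peptide)

# Hypothetical identification lists from two platforms.
platform_a = ["LGEYGFQNALIVR", "AEFVEVTK", "YLYEIAR"]
platform_b = ["VLSAADKGNVK", "HGTVVLTALGGILK"]

for name, peps in [("A", platform_a), ("B", platform_b)]:
    lengths = [len(p) for p in peps]
    print(f"platform {name}: mean length {sum(lengths) / len(lengths):.1f}, "
          f"mean GRAVY {sum(map(gravy, peps)) / len(peps):+.2f}")
```

Comparing such descriptor distributions across platforms is the raw material for the clustering step the abstract describes: platforms whose identified peptides share similar property profiles are candidates for redundancy.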

17.
A major bottleneck in drug discovery is the production of soluble human recombinant protein in quantities sufficient for analysis. This problem is compounded by the complex relationship between protein yield and the large number of variables that affect it. Here, we describe a generic framework for the rapid identification and optimization of factors affecting soluble protein yield in microwell-plate fermentations, as a prelude to the predictive and reliable scale-up of optimized culture conditions. Recombinant expression of firefly luciferase in Escherichia coli was used as a model system. Two rounds of statistical design of experiments (DoE) were employed: first to screen (D-optimal design) and then to optimize (central composite face design) the yield of soluble protein. Biological variables in the initial screening experiments included medium type and growth and induction conditions. To capture the impact of the engineering environment on cell growth and expression, plate geometry, shaking speed, and liquid fill volume were included as factors, since these strongly influence oxygen transfer into the wells. Compared with standard reference conditions, the screening and optimization designs each gave up to 3-fold increases in soluble protein yield, i.e., a 9-fold increase overall. In general, the highest protein yields were obtained when cells were induced at a relatively low biomass concentration and then allowed to grow slowly to a high final biomass concentration (>8 g L-1). Analysis of the model results showed 6 of the original 10 variables to be important at the screening stage and 3 after optimization; the latter included the microwell-plate shaking speeds pre- and post-induction, indicating the importance of oxygen transfer into the microwells and identifying it as a critical parameter for subsequent scale-translation studies. The optimization step, also known as response surface methodology (RSM), predicted a distinct optimum set of conditions for protein expression, which was verified experimentally. This work provides a generic approach to protein-expression optimization in which both biological and engineering variables are investigated from the initial screening stage. Applying DoE reduces the total number of experiments that need to be performed, while experimentation at the microwell scale increases throughput and reduces cost.
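A sketch of the optimization round: a face-centred central composite (CCF) design in three coded factors, followed by a least-squares fit of the full quadratic response surface (RSM). The factor labels echo the abstract; the yield values are simulated, not the paper's data.

```python
import numpy as np
from itertools import product

# Face-centred central composite design in three coded factors:
# 8 factorial corners, 6 face centres, 3 centre replicates = 17 runs.
corners = np.array(list(product([-1, 1], repeat=3)), dtype=float)
faces = np.vstack([v * np.eye(3)[i] for i in range(3) for v in (-1.0, 1.0)])
centre = np.zeros((3, 3))
D = np.vstack([corners, faces, centre])

# Hypothetical soluble-protein yields (mg/L) for each run.
rng = np.random.default_rng(0)
x1, x2, x3 = D.T   # shaking pre-induction, shaking post-induction, fill volume
y = 50 + 8*x1 + 5*x2 - 6*x3 - 4*x1**2 - 3*x3**2 + rng.normal(0, 1, len(D))

# Fit the full quadratic response surface by least squares.
X = np.column_stack([np.ones(len(D)), x1, x2, x3, x1*x2, x1*x3, x2*x3,
                     x1**2, x2**2, x3**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("quadratic RSM coefficients:", np.round(coef, 2))
```

The quadratic terms are what let RSM locate a distinct interior optimum rather than merely ranking factors, which is why the CCF round follows the D-optimal screen.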

18.
19.
20.
This paper describes a computational algorithm (STADEERS: STAtistical Design of Experiments in Enzyme ReactorS) for the statistical design of biochemical engineering experiments. The type of experiment that qualifies for this package involves a batch reaction catalyzed by a soluble enzyme whose activity decays with time. Assuming that both the catalytic action and the deactivation of the enzyme obey known rate expressions, the code assists in estimating the kinetic parameters by providing as output the times at which samples should be withdrawn from the reacting mixture. Starting D-optimal design is used as the basis for the statistical approach. This BASIC code is a powerful tool when fitting a rate expression to data, because it increases the effectiveness of experimentation by helping the biochemical kineticist obtain data points with the largest possible informational content.
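STADEERS itself is BASIC code and is not reproduced here. The sketch below shows the underlying idea in Python: for a batch Michaelis-Menten reaction with first-order enzyme decay, choose the sampling times that maximize the determinant of the Fisher information matrix (D-optimality), built from numerically computed parameter sensitivities. The model, nominal parameter values, and time grid are illustrative.

```python
import numpy as np
from itertools import combinations
from scipy.integrate import solve_ivp

def substrate(theta, times, s0=10.0):
    """Substrate concentration for dS/dt = -Vmax * exp(-kd*t) * S / (Km + S)."""
    vmax, km, kd = theta
    sol = solve_ivp(lambda t, s: -vmax * np.exp(-kd * t) * s / (km + s),
                    (0.0, times[-1]), [s0], t_eval=times, rtol=1e-8)
    return sol.y[0]

def sensitivities(theta, times, rel_step=1e-4):
    """Central finite-difference dS/dtheta_i at each candidate sampling time."""
    cols = []
    for i in range(len(theta)):
        hi, lo = np.array(theta, float), np.array(theta, float)
        hi[i] *= 1 + rel_step
        lo[i] *= 1 - rel_step
        cols.append((substrate(hi, times) - substrate(lo, times)) / (hi[i] - lo[i]))
    return np.column_stack(cols)

theta0 = (1.0, 2.0, 0.05)             # nominal Vmax, Km, kd
grid = np.linspace(0.5, 30.0, 40)     # candidate sampling times
X = sensitivities(theta0, grid)

# D-optimal 3-point design: maximize det(F'F) over all candidate triples.
best = max(combinations(range(len(grid)), 3),
           key=lambda idx: np.linalg.det(X[list(idx)].T @ X[list(idx)]))
print("D-optimal sampling times:", np.round(grid[list(best)], 2))
```

As in STADEERS, the design depends on nominal parameter guesses, which is why such procedures are typically run as a starting design and refined once preliminary estimates are available.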
