Similar Articles
20 similar articles found.
1.
Chemical genetic screening and DNA and protein microarrays are among a number of increasingly important and widely used biological research tools that involve large numbers of parallel experiments arranged in a spatial array. It is often difficult to ensure that uniform experimental conditions are present throughout the entire array, and as a result, one often observes systematic spatially correlated errors, especially when array experiments are performed using robots. Here, the authors apply techniques based on the discrete Fourier transform to identify and quantify spatially correlated errors superimposed on a spatially random background. They demonstrate that these techniques are effective in identifying common spatially systematic errors in high-throughput 384-well microplate assay data. In addition, the authors employ a statistical test to allow for automatic detection of such errors. Software tools for using this approach are provided.
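As a rough illustration of the idea (not the authors' released software), the sketch below computes a 2-D discrete Fourier power spectrum of a normalized 16 × 24 plate and flags coefficients that are too large to be consistent with spatially random noise; the threshold rule and function names are assumptions.

```python
import numpy as np

def spatial_error_spectrum(plate, quantile=0.999):
    """Flag periodic (spatially correlated) error in a plate data array.

    For spatially random Gaussian noise, the squared magnitudes of the
    non-DC 2-D DFT coefficients are approximately exponentially
    distributed; unusually large peaks therefore indicate systematic,
    spatially correlated error patterns.
    """
    z = (plate - plate.mean()) / plate.std(ddof=1)  # normalize the plate
    power = np.abs(np.fft.fft2(z)) ** 2             # 2-D power spectrum
    power[0, 0] = 0.0                               # discard the DC term
    mean_power = power.sum() / (power.size - 1)
    threshold = -mean_power * np.log(1 - quantile)  # exponential tail cutoff
    return power, np.argwhere(power > threshold)

rng = np.random.default_rng(0)
plate = rng.normal(size=(16, 24))                      # 384-well plate, random noise
plate += 0.8 * np.sin(2 * np.pi * np.arange(24) / 4)   # stripe with a 4-column period
power, peaks = spatial_error_spectrum(plate)
print("suspicious frequency components:", peaks)       # expect hits at column freq 6
```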

2.
1. The effect of systematic error (loss of ligand, complex or macromolecule) on three of the experimental designs by which equilibrium dialysis may be used to quantify the interaction of ligand and macromolecule is examined theoretically, and the design that is least sensitive to systematic error is identified. 2. Thirteen methods for fitting the binding isotherm to experimental data are compared by using them to analyse simulated data containing random error, and the most reliable method is identified.

3.
High-throughput screening (HTS) is an efficient technology for drug discovery. It allows for screening of more than 100,000 compounds a day per screen and requires effective procedures for quality control. The authors have developed a method for evaluating a background surface of an HTS assay; it can be used to correct raw HTS data. This correction is necessary to take into account systematic errors that may affect the procedure of hit selection. The described method allows one to analyze experimental HTS data and determine trends and local fluctuations of the corresponding background surfaces. For an assay with a large number of plates, the deviations of the background surface from a plane are caused by systematic errors. Their influence can be minimized by the subtraction of the systematic background from the raw data. Two experimental HTS assays from the ChemBank database are examined in this article. The systematic error present in these data was estimated and removed from them. It enabled the authors to correct the hit selection procedure for both assays.
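A minimal sketch of one common variant of this correction, assuming the background surface is estimated as the per-well median over many plates of the same assay; the authors' actual method fits and evaluates a background surface, so treat this only as an illustration of the subtraction step.

```python
import numpy as np

def estimate_background(plates):
    """plates: shape (n_plates, n_rows, n_cols) array of raw signals.

    With many plates, the per-well median approximates the systematic
    background surface shared by the assay; its deviation from a flat
    plane is the systematic-error component to remove.
    """
    surface = np.median(plates, axis=0)
    return surface - surface.mean()      # zero-mean systematic component

def correct_plates(plates):
    return plates - estimate_background(plates)

rng = np.random.default_rng(1)
raw = rng.normal(100, 5, size=(50, 16, 24))
raw[:, :, 0] += 12.0                     # simulated edge effect in the first column
corrected = correct_plates(raw)
print(corrected[:, :, 0].mean(), corrected[:, :, 1].mean())  # now comparable
```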

4.
In the degradation of chlorophyll, chlorophyllase catalyzes the initial hydrolysis of the phytol moiety from the pigment. Since chlorophyll degradation is a defining feature of plant senescence, compounds inhibiting chlorophyllase activity may delay senescence, thereby improving shelf life and appearance of plant products. Here we describe the development of a 96-well plate-based purification and assay system for measuring chlorophyllase activity. Integrated lysis and immobilized metal affinity chromatography plates were used for purifying recombinant hexahistidine-tagged Triticum aestivum (wheat) chlorophyllase from Escherichia coli. Chlorophyllase assays using chlorophyll as a substrate showed that the immobilized fusion protein displayed kinetic parameters similar to those of recombinant enzyme purified by affinity chromatography; however, the need to extract reaction products from a multiwell plate limits the value of this assay for high-throughput screening applications. Replacing chlorophyll with p-nitrophenyl-ester substrates eliminates the extraction step and allows for continuous measurement of chlorophyllase activity in a multiwell plate format. Determination of steady state kinetic constants, pH rate profile, the inhibitory effects of metal ions and esterase inhibitors, and the effect of functional group-modifying reagents validated the utility of the plate-based system. The combined purification and assay system provides a convenient and rapid method for the assessment of chlorophyllase activity.

5.
RNA interference (RNAi) not only plays an important role in drug discovery but can also be developed directly into drugs. RNAi high-throughput screening (HTS) biotechnology allows us to conduct genome-wide RNAi research. A central challenge in genome-wide RNAi research is to integrate both experimental and computational approaches to obtain high quality RNAi HTS assays. Based on our daily practice in RNAi HTS experiments, we propose the implementation of 3 experimental and analytic processes to improve the quality of data from RNAi HTS biotechnology: (1) select effective biological controls; (2) adopt appropriate plate designs to display and/or adjust for systematic errors of measurement; and (3) use effective analytic metrics to assess data quality. The applications in 5 real RNAi HTS experiments demonstrate the effectiveness of integrating these processes to improve data quality. Due to the effectiveness in improving data quality in RNAi HTS experiments, the methods and guidelines contained in the 3 experimental and analytic processes are likely to have broad utility in genome-wide RNAi research.
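Process (3), effective analytic metrics, is often instantiated with a control-based quality score; below is a minimal sketch of the widely used Z'-factor computed from positive and negative control wells (the specific metrics favored in the paper may differ).

```python
import numpy as np

def z_prime_factor(pos, neg):
    """Z'-factor assay-quality metric from control wells.

    Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|; values above
    roughly 0.5 are conventionally taken to indicate a good assay window.
    """
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    return 1.0 - 3.0 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

rng = np.random.default_rng(2)
print(z_prime_factor(rng.normal(100, 5, 32), rng.normal(20, 5, 32)))
```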

6.
Comparison of microarray designs for class comparison and class discovery
MOTIVATION: Two-color microarray experiments in which an aliquot derived from a common RNA sample is placed on each array are called reference designs. Traditionally, microarray experiments have used reference designs, but designs without a reference have recently been proposed as alternatives. RESULTS: We develop a statistical model that distinguishes the different levels of variation typically present in cancer data, including biological variation among RNA samples, experimental error and variation attributable to phenotype. Within the context of this model, we examine the reference design and two designs which do not use a reference, the balanced block design and the loop design, focusing particularly on efficiency of estimates and the performance of cluster analysis. We calculate the relative efficiency of designs when there are a fixed number of arrays available, and when there are a fixed number of samples available. Monte Carlo simulation is used to compare the designs when the objective is class discovery based on cluster analysis of the samples. The number of discrepancies between the estimated clusters and the true clusters was significantly smaller for the reference design than for the loop design. The efficiency of the reference design relative to the loop and block designs depends on the relation between inter- and intra-sample variance. These results suggest that if cluster analysis is a major goal of the experiment, then a reference design is preferable. If identification of differentially expressed genes is the main concern, then design selection may involve a consideration of several factors.

7.
The standard (STD) 5 × 5 hybrid median filter (HMF) was previously described as a nonparametric local background estimator for spatially arrayed microtiter plate (MTP) data. As such, the HMF is a useful tool for mitigating global and sporadic systematic error in MTP data arrays. Presented here is the first known HMF correction of a primary screen suffering from systematic error best described as gradient vectors. Application of the STD 5 × 5 HMF to the primary screen raw data reduced background signal deviation, thereby improving the assay dynamic range and hit confirmation rate. While this HMF can correct gradient vectors, it does not properly correct periodic patterns that may be present in other screening campaigns. To address this issue, a 1 × 7 median filter and a row/column 5 × 5 hybrid median filter kernel (1 × 7 MF and RC 5 × 5 HMF) were designed ad hoc to better fit periodic error patterns. The correction data show that periodic error in simulated MTP data arrays is reduced by these alternative filter designs and that multiple corrective filters can be combined in serial operations for progressive reduction of complex error patterns in an MTP data array.
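A minimal numpy sketch of a 5 × 5 hybrid median filter used as a local background estimator, following the common definition of the kernel (median of the '+'-arm median, the '×'-arm median, and the center value); the exact kernels in the paper may differ in detail.

```python
import numpy as np

def hybrid_median_5x5(plate):
    """5 x 5 hybrid median filter as a local background estimator.

    For each well: (a) median of the 9 values on the horizontal/vertical
    arms of the window, (b) median of the 9 values on the diagonal arms,
    then (c) median of those two medians and the center value.
    """
    padded = np.pad(np.asarray(plate, float), 2, mode="reflect")
    out = np.empty(plate.shape, float)
    plus = [(-2, 0), (-1, 0), (0, 0), (1, 0), (2, 0),
            (0, -2), (0, -1), (0, 1), (0, 2)]
    cross = [(-2, -2), (-1, -1), (0, 0), (1, 1), (2, 2),
             (-2, 2), (-1, 1), (1, -1), (2, -2)]
    for r in range(plate.shape[0]):
        for c in range(plate.shape[1]):
            pr, pc = r + 2, c + 2
            m_plus = np.median([padded[pr + dr, pc + dc] for dr, dc in plus])
            m_cross = np.median([padded[pr + dr, pc + dc] for dr, dc in cross])
            out[r, c] = np.median([m_plus, m_cross, padded[pr, pc]])
    return out

rng = np.random.default_rng(3)
plate = rng.normal(100, 3, (16, 24)) + np.linspace(0, 10, 24)  # gradient vector
background = hybrid_median_5x5(plate)
corrected = plate - background + background.mean()  # remove trend, keep scale
```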

8.
The problem of evaluating the kinetic parameters associated with an enzyme-catalysed reaction is examined. If the errors associated with velocity measurements are unknown, or are known to deviate from a normal distribution, then methods based on minimization of the sum of squares are inappropriate. It is shown that, with a proper experimental design, the kinetic parameters may be estimated unambiguously while making only minimal assumptions about the error structure of the data. The design consists of several replicate measurements of the velocity at as many experimental conditions as there are parameters to be estimated. Formulae are presented for choosing the experimental conditions for selected kinetic equations, and a computer program is described for designing experiments for any kinetic model. The designs formulated are optimal in the sense that they minimize the overall variance of the parameter estimates.
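To illustrate the design principle, the sketch below numerically searches for a two-point D-optimal design for the Michaelis-Menten equation v = Vmax·S/(Km + S): with two parameters, replicates are placed at two conditions chosen to maximize the determinant of the information matrix, which minimizes the generalized variance of the estimates. This is an illustration under assumed parameter guesses, not the paper's program.

```python
import numpy as np
from itertools import combinations

def mm_gradient(s, vmax, km):
    # Partial derivatives of v = vmax*s/(km + s) with respect to (vmax, km)
    return np.array([s / (km + s), -vmax * s / (km + s) ** 2])

def two_point_d_optimal(vmax=1.0, km=2.0, s_max=10.0, n_grid=200):
    """Grid-search the two substrate concentrations maximizing det(X'X)."""
    grid = np.linspace(s_max / n_grid, s_max, n_grid)
    best, best_det = None, -np.inf
    for s1, s2 in combinations(grid, 2):
        X = np.vstack([mm_gradient(s1, vmax, km), mm_gradient(s2, vmax, km)])
        d = abs(np.linalg.det(X.T @ X))   # D-criterion (generalized information)
        if d > best_det:
            best, best_det = (s1, s2), d
    return best

# One design point lands at s_max, the other at a low concentration near
# km*s_max/(2*km + s_max); replicate measurements are split between the two.
print(two_point_d_optimal())
```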

9.
A statistical model is proposed for the analysis of errors in microarray experiments and is employed in the analysis and development of a combined normalisation regime. Through analysis of the model and two-dye microarray data sets, this study found the following. The systematic error introduced by microarray experiments mainly involves spot intensity-dependent, feature-specific and spot position-dependent contributions. It is difficult to remove all these errors effectively without a suitable combined normalisation operation. Adaptive normalisation using a suitable regression technique is more effective in removing spot intensity-related dye bias than self-normalisation, while regional normalisation (block normalisation) is an effective way to correct spot position-dependent errors. However, dye-flip replicates are necessary to remove feature-specific errors, and also allow the analyst to identify the experimentally introduced dye bias contained in non-self-self data sets. In this case, the bias present in the data sets may include both experimentally introduced dye bias and the biological difference between two samples. Self-normalisation is capable of removing dye bias without identifying the nature of that bias. The performance of adaptive normalisation, on the other hand, depends on its ability to correctly identify the dye bias. If adaptive normalisation is combined with an effective dye bias identification method then there is no systematic difference between the outcomes of the two methods.
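Adaptive, intensity-dependent normalisation with a regression technique is commonly implemented as a lowess fit on the MA representation of two-dye data; a minimal sketch under that assumption (not necessarily the paper's exact regime):

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def adaptive_normalize(log_r, log_g, frac=0.4):
    """Remove intensity-dependent dye bias with a lowess fit on the MA plot."""
    M = log_r - log_g                  # log-ratio: where dye bias shows up
    A = 0.5 * (log_r + log_g)          # average log-intensity
    bias_curve = lowess(M, A, frac=frac, return_sorted=False)
    return M - bias_curve              # normalized log-ratios

rng = np.random.default_rng(4)
A_true = rng.uniform(6, 14, 2000)
bias = 0.025 * (A_true - 10) ** 2      # simulated intensity-dependent dye bias
log_g = A_true - 0.5 * bias + rng.normal(0, 0.2, 2000)
log_r = A_true + 0.5 * bias + rng.normal(0, 0.2, 2000)
print(abs(adaptive_normalize(log_r, log_g).mean()))  # near zero after correction
```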

10.
The development and characterization of a one-step homogeneous immunoassay-based multiwell ImmunoChip is reported for the simultaneous detection and quantitation of antiepileptic drugs (AEDs). The assay platform uses a cloned enzyme donor immunoassay (CEDIA) and a Beta-Glo assay system for generation of bioluminescent signal. Results of the one-step CEDIA for three AEDs (carbamazepine, phenytoin, and valproic acid), in the presence of serum, correlate well with the values determined by fluorescence polarization immunoassay. CEDIA intra- and interassay coefficients of variation are less than 10%. A microfabrication process, xurography, was used to produce the multiwell ImmunoChip. Assay reagents were dispensed and lyophilized in a three-layer pattern. The multiwell ImmunoChip prototype was used to detect and quantify AEDs in serum samples containing all three drugs. Luminescent signals generated from each well were recorded with a charge-coupled device (CCD) camera. The assays performed on an ImmunoChip were fast (5 min), requiring only small volumes of both the reagents (<1 μl/well) and the serum sample. The ImmunoChip assay platform described in this article may be well suited for therapeutic monitoring of drugs and metabolites in point-of-care settings.

11.
The application of single-cell RNA sequencing (scRNAseq) for the evaluation of chemicals, drugs, and food contaminants presents the opportunity to consider cellular heterogeneity in pharmacological and toxicological responses. Current differential gene expression analysis (DGEA) methods focus primarily on two-group comparisons, not the multi-group dose–response study designs used in safety assessments. To benchmark DGEA methods for dose–response scRNAseq experiments, we proposed a multiplicity-corrected Bayesian testing approach and compared it against 8 other methods, including two frequentist fit-for-purpose tests, using simulated and experimental data. Our Bayesian test method outperformed all other tests for a broad range of accuracy metrics, including control of false positive error rates. Most notably, the fit-for-purpose and standard multiple-group DGEA methods were superior to the two-group scRNAseq methods for dose–response study designs. Collectively, our benchmarking of DGEA methods demonstrates the importance of considering study design when determining the most appropriate test methods.

12.
One of the most fundamental challenges in genome-wide RNA interference (RNAi) screens is to glean biological significance from mounds of data, which relies on the development and adoption of appropriate analytic methods and designs for quality control (QC) and hit selection. Currently, a Z-factor-based QC criterion is widely used to evaluate data quality. However, this criterion cannot take into account the fact that different positive controls may have different effect sizes and leads to inconsistent QC results in experiments with 2 or more positive controls with different effect sizes. In this study, based on a recently proposed parameter, strictly standardized mean difference (SSMD), novel QC criteria are constructed for evaluating data quality in genome-wide RNAi screens. Two good features of these novel criteria are: (1) SSMD has both clear original and probability meanings for evaluating the differentiation between positive and negative controls and hence the SSMD-based QC criteria have a solid probabilistic and statistical basis, and (2) these QC criteria obtain consistent QC results for multiple positive controls with different effect sizes. In addition, I propose multiple plate designs and the guidelines for using them in genome-wide RNAi screens. Finally, I provide strategies for using the SSMD-based QC criteria and effective plate design together to improve data quality. The novel SSMD-based QC criteria, effective plate designs, and related guidelines and strategies may greatly help to obtain high quality of data in genome-wide RNAi screens.
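For two independent groups of control wells, SSMD is the mean difference scaled by the square root of the summed variances; a minimal sketch (the paper's QC decision rules additionally depend on the effect-size class of each positive control):

```python
import numpy as np

def ssmd(pos, neg):
    """Strictly standardized mean difference for independent control groups."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    return (pos.mean() - neg.mean()) / np.sqrt(pos.var(ddof=1) + neg.var(ddof=1))

rng = np.random.default_rng(5)
positive = rng.normal(100, 8, 24)    # positive control wells on one plate
negative = rng.normal(40, 8, 24)     # negative control wells
print(f"SSMD = {ssmd(positive, negative):.2f}")  # compared against a QC threshold
```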

13.
In recent years, biostatistical research has begun to apply linear models and design theory to develop efficient experimental designs and analysis tools for gene expression microarray data. With two-colour microarrays, direct comparisons of RNA targets are possible and lead to incomplete block designs. In this setting, efficient designs for simple and factorial microarray experiments have mainly been proposed for technical replicates. But for biological replicates, which are crucial to obtain inference that can be generalised to a biological population, this question has only been discussed recently and is not yet fully solved. In this paper, we propose efficient designs for independent two-sample experiments using two-colour microarrays, enabling biologists to measure their biological random samples in an efficient manner and draw generalisable conclusions. We give advice for experimental situations with differing group sizes and show the impact of different designs on the variance and degrees of freedom of the test statistics. The designs proposed in this paper can be evaluated using SAS PROC MIXED or S+/R lme.
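As an illustration of the kind of mixed-model evaluation the abstract points to (SAS PROC MIXED / S+/R lme), here is a hedged Python sketch using statsmodels MixedLM on simulated data for one gene from a dye-swapped two-colour design; the design, effect sizes, and model are assumptions, not the paper's setup.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
rows = []
for array in range(8):                        # 8 arrays = incomplete blocks of size 2
    block = rng.normal(0, 0.5)                # random array (block) effect
    swap = array % 2 == 1                     # dye-swap half of the arrays
    pairing = [("Cy3", "B"), ("Cy5", "A")] if swap else [("Cy3", "A"), ("Cy5", "B")]
    for dye, group in pairing:
        y = block + (0.8 if group == "B" else 0.0)   # true group difference
        y += 0.3 if dye == "Cy5" else 0.0            # dye bias
        rows.append(dict(array=array, dye=dye, group=group,
                         y=y + rng.normal(0, 0.2)))
df = pd.DataFrame(rows)

# Fixed dye and group effects plus a random array effect, mirroring the
# mixed-model analyses suggested for evaluating such designs.
fit = smf.mixedlm("y ~ dye + group", df, groups=df["array"]).fit()
print(fit.summary())
```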

14.
A multistage procedure, based on the likelihood principle, is proposed to identify active effects in unreplicated factorial designs and their fractions. The proposed procedure controls the experimental error rate (EER) at any prespecified level in industrial and biomedical experiments. An extensive comparison with Lenth's (1989) test is discussed.
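For context, Lenth's (1989) test estimates a pseudo standard error (PSE) from the unreplicated effect estimates themselves; a minimal sketch of that baseline (the cutoff below is an illustrative margin, not an exact critical value):

```python
import numpy as np

def lenth_pse(effects):
    """Lenth's pseudo standard error for unreplicated factorial effects."""
    abs_effects = np.abs(np.asarray(effects, float))
    s0 = 1.5 * np.median(abs_effects)              # initial robust scale
    trimmed = abs_effects[abs_effects < 2.5 * s0]  # drop likely-active effects
    return 1.5 * np.median(trimmed)

effects = np.array([0.1, -0.3, 0.2, 4.5, -0.2, 0.15, 3.8])  # two active effects
pse = lenth_pse(effects)
print(np.abs(effects) / pse > 2.3)   # illustrative margin-of-error screen
```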

15.
Roark DE. Biophysical Chemistry 2004;108(1-3):121-126.
Biophysical chemistry experiments, such as sedimentation-equilibrium analyses, require computational techniques to reduce the effects of random errors of the measurement process. Existing approaches have primarily relied on the assumption of polynomial models and least-squares approximation. Such models, by constraining the data to remove random fluctuations, may distort the data and cause loss of information; the better the removal of random errors, the greater the likely introduction of systematic errors through the constraining fit itself. An alternative technique, reverse smoothing, is suggested that makes use of a more model-free approach: exponential smoothing of the first derivative. Exponential smoothing approaches have generally been unsatisfactory because they introduce significant data lag. The approach given here compensates for the lag defect and appears promising for the smoothing of many experimental data sequences, including the macromolecular concentration data generated by sedimentation-equilibrium experiments. Test results on simulated sedimentation-equilibrium data indicate that a 4-fold reduction in error may be typical relative to standard analysis techniques.
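Plain exponential smoothing introduces the data lag the abstract mentions. One generic way to compensate, shown below, is to run the smoother forward and backward and average the two passes so the lags cancel; this is a stand-in illustration, not necessarily Roark's exact reverse-smoothing algorithm.

```python
import numpy as np

def ema(x, alpha):
    """Ordinary exponential smoothing (introduces lag in the pass direction)."""
    out = np.empty(len(x))
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

def lag_compensated_smooth(x, alpha=0.15):
    """Average a forward and a backward pass so the opposite lags cancel."""
    x = np.asarray(x, float)
    return 0.5 * (ema(x, alpha) + ema(x[::-1], alpha)[::-1])

t = np.linspace(0, 1, 400)
signal = np.exp(2.0 * t)                      # concentration-like profile
noisy = signal + np.random.default_rng(7).normal(0, 0.05, t.size)
print(np.abs(lag_compensated_smooth(noisy) - signal).mean())  # reduced error, no lag
```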

16.
Systems biology requires mathematical tools not only to analyse large genomic datasets, but also to explore large experimental spaces in a systematic yet economical way. We demonstrate that two-factor combinatorial design (CD), shown to be useful in software testing, can be used to design a small set of experiments that would allow biologists to explore larger experimental spaces. Moreover, the results of an initial set of experiments can be used to seed subsequent 'Adaptive' CD experimental designs. As a proof of principle, we demonstrate the usefulness of this Adaptive CD approach by analysing data from the effects of six binary inputs on the regulation of genes in the N-assimilation pathway of Arabidopsis. This CD approach identified the more important regulatory signals previously discovered by traditional experiments using far fewer experiments, and also identified examples of input interactions previously unknown. Tests using simulated data show that Adaptive CD suffers from fewer false positives than traditional experimental designs in determining decisive inputs, and succeeds far more often than traditional or random experimental designs in determining when genes are regulated by input interactions. We conclude that Adaptive CD offers an economical framework for discovering dominant inputs and interactions that affect different aspects of genomic outputs and organismal responses.
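A minimal greedy sketch of two-factor (pairwise) combinatorial design for six binary inputs: runs are added one at a time, each chosen to cover the most factor-pair settings not yet covered, until every pair of factors has seen all four level combinations. A handful of runs then stands in for the 2^6 = 64-run full factorial; the adaptive seeding step is not shown.

```python
from itertools import product, combinations

def pairwise_design(n_factors=6, levels=(0, 1)):
    """Greedy covering array: every pair of factors sees every level combo."""
    uncovered = {(i, j, a, b)
                 for i, j in combinations(range(n_factors), 2)
                 for a in levels for b in levels}
    candidates = list(product(levels, repeat=n_factors))
    runs = []
    while uncovered:
        # Pick the candidate run covering the most still-uncovered pairs
        best = max(candidates,
                   key=lambda r: sum((i, j, r[i], r[j]) in uncovered
                                     for i, j in combinations(range(n_factors), 2)))
        runs.append(best)
        uncovered -= {(i, j, best[i], best[j])
                      for i, j in combinations(range(n_factors), 2)}
    return runs

design = pairwise_design()
print(len(design), "runs instead of 64")   # typically 6-8 runs
for run in design:
    print(run)
```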

17.
Human observations during behavioral studies are expensive, time-consuming, and error-prone. For this reason, automatization of experiments is highly desirable, as it reduces the risk of human errors and workload. The robotic system we developed is simple and cheap to build and handles feeding and data collection automatically. The system was built using mostly off-the-shelf components and has a novel feeding mechanism that uses servos to perform refill operations. We used the robotic system in two separate behavioral studies with bumblebees (Bombus terrestris): the system was used both for training of the bees and for the experimental data collection. The robotic system was reliable, with no flight in our studies failing due to a technical malfunction. The data recorded were easy to apply to further analysis. The software and the hardware design are open source. The development of cheap open-source prototyping platforms during recent years has opened up many possibilities in the design of experiments. Automatization not only reduces workload, but also potentially allows experimental designs never attempted before, such as dynamic experiments, where the system responds to, for example, the learning of the animal. We present a complete system with hardware and software, and it can be used as such in various experiments requiring feeders and collection of visitation data. Use of the system is not limited to any particular experimental setup or even species.

18.
This article presents a method to test for the presence of relatively small systematic measurement errors, e.g., those caused by inaccurate calibration or sensor drift. To do this, primary measurements (flow rates and concentrations) are first translated into observed conversions, which should satisfy several constraints, such as the laws of conservation of chemical elements. The study considers three objectives: (1) modification of the commonly used balancing technique, applied sequentially in time, to improve error sensitivity so that small systematic errors can be detected; (2) extension of the method to enable direct diagnosis of errors in the primary measurements, rather than in the observed conversions, achieved by analyzing how individual errors in the primary measurements are expressed in the residual vector; and (3) derivation of a new systematic method to quantitatively determine the error sensitivity, i.e., the error size at which the expected value of the chi-square test function equals its critical value. The method is applied to industrial data, demonstrating the effectiveness of the approach. For most possible error sources, systematic errors of 2% to 5% could be detected. In the given application, variation in the N-content of the biomass was identified as the cause of the errors. (c) 1994 John Wiley & Sons, Inc.
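A minimal sketch of the balancing test at the core of such methods: with linear conservation constraints A·x = 0 on the true quantities and measurement covariance Σ, the residual r = A·y has covariance A·Σ·Aᵀ under the no-error hypothesis, and h = rᵀ(A·Σ·Aᵀ)⁻¹r is compared with a chi-square critical value. The matrices below are toy values, not the paper's industrial data.

```python
import numpy as np
from scipy.stats import chi2

def balance_test(y, A, sigma, alpha=0.05):
    """Chi-square test of conservation constraints A @ x = 0 on measurements y."""
    r = A @ y                                  # constraint residuals
    cov = A @ sigma @ A.T                      # residual covariance under H0
    h = float(r @ np.linalg.solve(cov, r))     # test statistic
    critical = chi2.ppf(1 - alpha, A.shape[0])
    return h, critical, h > critical

# Toy mass balance: stream 3 should equal stream 1 plus stream 2.
A = np.array([[1.0, 1.0, -1.0]])
sigma = np.diag([1e-4, 4e-4, 9e-4])            # ~1% measurement sd (as variances)
y = np.array([1.00, 2.00, 3.12])               # ~4% systematic error in stream 3
print(balance_test(y, A, sigma))               # statistic exceeds the critical value
```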

19.
Snee RD, Irr JD. Mutation Research 1984;128(2):115-125.
Ames Salmonella test data collected in our laboratory and 3 National Cancer Institute contract laboratories were analyzed to study the distribution of experimental errors associated with the test. It is shown that the Poisson distribution is not appropriate, and that the power transformation model Y = (revertants/plate)^λ, with λ = 0.2 as estimated by the methods of Box and Cox, produced a measurement scale on which the experimental errors could be adequately described by a normal (Gaussian) distribution with a constant variance. The modeling procedure enables one to properly use analysis of variance, regression analysis, and Student's t test to analyze Ames Salmonella test results, and well-known statistical quality control procedures to monitor laboratory performance. The method detects weak mutagenic activity and measures the amount and uncertainty of the increase in revertants/plate. The development of the power transformation model is discussed and examples of its use in the interpretation of Ames Salmonella assay results are included.
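The λ-estimation step can be reproduced with a standard maximum-likelihood Box-Cox fit; a minimal sketch on simulated overdispersed counts (the simulated data and the resulting λ are illustrative; the paper reports λ = 0.2 for Ames data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
# Simulated revertants/plate: variance grows faster than the mean (non-Poisson)
means = np.repeat([25, 50, 100, 200], 50)
counts = rng.negative_binomial(10, 10 / (10 + means))
counts = np.maximum(counts, 1)            # boxcox requires strictly positive data

transformed, lam = stats.boxcox(counts.astype(float))
print(f"estimated lambda = {lam:.2f}")
# On the transformed scale, ANOVA, regression, and t tests assuming constant
# normal errors become appropriate.
```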

20.
A statistical model for doubled haploids and backcrosses based on the interval-mapping methodology has been used to carry out power studies investigating the effects of different experimental designs, heritabilities of the quantitative trait, and types of gene action, using two test statistics, the F of Fisher-Snedecor and the LOD score. The doubled haploid experimental design is more powerful than backcrosses while keeping actual type I errors similar to nominal ones. For the doubled haploid design, individual QTLs with heritabilities as low as 5% were detected in about 90% of the cases using only 250 individuals. The power to detect a given QTL is related to its contribution to the heritability of the trait. For a given nominal type I error, tests using F values are more powerful than those using LOD scores. It seems that more conservative levels should be used for the LOD score in order to increase the power and obtain type I errors similar to nominal ones.
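For a single-position regression test of the kind interval mapping performs, both statistics derive from the same residual-sum-of-squares ratio: LOD = (n/2)·log10(RSS0/RSS1), while F rescales the same quantity per degree of freedom, so their powers differ mainly through the critical values chosen. A minimal single-marker sketch for a doubled-haploid-like coding (two genotype classes):

```python
import numpy as np

def qtl_test(genotype, phenotype):
    """LOD and F statistics for a single-marker QTL regression test."""
    n = len(phenotype)
    rss0 = np.sum((phenotype - phenotype.mean()) ** 2)   # null model
    X = np.column_stack([np.ones(n), genotype])          # mean + QTL effect
    _, rss1, *_ = np.linalg.lstsq(X, phenotype, rcond=None)
    rss1 = float(rss1[0])
    lod = (n / 2) * np.log10(rss0 / rss1)
    F = (rss0 - rss1) / (rss1 / (n - 2))                 # 1 numerator df
    return lod, F

rng = np.random.default_rng(9)
geno = rng.integers(0, 2, 250)                 # doubled haploids: two genotype classes
pheno = 0.5 * geno + rng.normal(0, 1.0, 250)   # QTL explaining roughly 5% of variance
print(qtl_test(geno, pheno))
```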
