20 similar records found (search time: 0 ms)
1.
2.
High-throughput screening (HTS) is an efficient technology for drug discovery. It allows for screening of more than 100,000 compounds a day per screen and requires effective procedures for quality control. The authors have developed a method for evaluating a background surface of an HTS assay; it can be used to correct raw HTS data. This correction is necessary to take into account systematic errors that may affect the procedure of hit selection. The described method allows one to analyze experimental HTS data and determine trends and local fluctuations of the corresponding background surfaces. For an assay with a large number of plates, the deviations of the background surface from a plane are caused by systematic errors. Their influence can be minimized by the subtraction of the systematic background from the raw data. Two experimental HTS assays from the ChemBank database are examined in this article. The systematic error present in these data was estimated and removed from them. This enabled the authors to correct the hit selection procedure for both assays.
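A minimal sketch of this kind of background correction, assuming the background surface is estimated as the per-well median across plates (the authors' actual estimator may differ). Simulated plates share a systematic column gradient; subtracting the centred median surface removes most of the trend:

```python
import random
import statistics

random.seed(0)

ROWS, COLS, PLATES = 8, 12, 50

# Simulate raw HTS plates: true signal ~ N(100, 5) plus a systematic
# column gradient (an "edge effect") shared by every plate.
bias = [[0.5 * c for c in range(COLS)] for _ in range(ROWS)]
plates = [[[random.gauss(100, 5) + bias[r][c] for c in range(COLS)]
           for r in range(ROWS)] for _ in range(PLATES)]

# Estimate the background surface as the per-well median across plates,
# centred so the correction leaves the overall mean unchanged.
surface = [[statistics.median(p[r][c] for p in plates) for c in range(COLS)]
           for r in range(ROWS)]
grand = statistics.mean(v for row in surface for v in row)
corrected = [[[p[r][c] - (surface[r][c] - grand) for c in range(COLS)]
              for r in range(ROWS)] for p in plates]

def col_spread(data):
    """Range of per-column means: a crude measure of systematic trend."""
    means = [statistics.mean(p[r][c] for p in data for r in range(ROWS))
             for c in range(COLS)]
    return max(means) - min(means)

print(col_spread(plates), col_spread(corrected))
```

Before correction the per-column spread reflects the injected gradient; after correction it collapses to noise level, which is the point of subtracting the systematic background before hit selection.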
3.
Competition as a source of errors in RAPD analysis
C. Halldén M. Hansen N. -O. Nilsson A. Hjerdin T. Säll 《TAG. Theoretical and applied genetics. Theoretische und angewandte Genetik》1996,93(8):1185-1192
We have used artificial 1:1 DNA mixtures of all pairwise combinations of four doubled haploid Brassica napus lines to test the ability of RAPDs to function as reliable dominant genetic markers. In situations where a specific RAPD band is present in one homozygous line but absent in the other, the band is expected in the artificial heterozygote, i.e. in the 1:1 DNA mixture. In 84 of the 613 heterozygous situations analysed, the expected band failed to amplify in the RAPD reaction. Thus, RAPD markers will lead to an erroneous genetic interpretation in 14% of all cases. In contrast, the formation of non-parental heteroduplex bands was found at a frequency of only 0.2%. Analysis of 1:1 mixtures using (1) a different set of optimized reaction conditions and (2) a material with low genomic complexity (Bacillus cereus) gave identical results. Serial dilutions of one genome into another, in steps of 10%, showed that all of the polymorphic bands decreased in intensity as a linear function of their respective proportion in the mixture. In dilutions with water, no differences in band intensity were detected. Thus, competition occurs in the amplification of all RAPD fragments and is a major source of genotyping errors in RAPD analysis.
4.
G Bogányi 《Acta physiologica Hungarica》1987,70(2-3):283-287
Nowadays, lung function parameters are determined by digital data-processing algorithms. The minimal sampling frequency for analogue-to-digital conversion is given by Shannon's sampling theorem. Integration is a typical operation in data processing, and the dynamic errors of commonly used integration algorithms are strongly influenced by the sampling frequency. Theoretical examination shows unambiguously that, when Simpson's rule is used, the condition 6·f_signal ≤ f_sample must be fulfilled to keep the amplitude error of integration below 1%. This means that the sampling frequency is determined by both the signal's spectrum and the structure of the data-processing system.
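The stated condition (at least six samples per signal cycle) can be checked numerically. A sketch, with the caveat that composite Simpson's rule needs an even number of subintervals over a half cycle, so we bracket the threshold with four and eight samples per cycle; the sinusoidal test signal is an illustration, not the paper's lung-function data:

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

def half_cycle_error(samples_per_cycle):
    """Relative amplitude error when integrating sin over half a cycle
    (exact value 2), i.e. the error in the recovered volume signal."""
    n = samples_per_cycle // 2          # subintervals in half a cycle
    approx = simpson(math.sin, 0.0, math.pi, n)
    return abs(approx - 2.0) / 2.0

for spc in (4, 8, 16):
    print(spc, half_cycle_error(spc))
```

Four samples per cycle gives roughly a 5% amplitude error while eight samples per cycle is already well under 1%, consistent with the 6·f_signal ≤ f_sample rule sitting between them.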
5.
Heuristic approaches were used to quantify the influence that sequencing errors have on estimates of nucleotide diversity, substitution rate, and the construction of genealogies. Error rates of less than 1 nucleotide/kb probably have little effect on conclusions about the evolutionary history of highly polymorphic organisms such as Drosophila and Escherichia coli, but organisms with very low nucleotide diversity, such as humans, require greater sequencing accuracy. A scan of GenBank for corrections of previous errors reveals that sequencing errors are highly nonrandom.
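The reasoning can be made concrete with a back-of-the-envelope calculation: independent errors at rate e per bp in each of two sequences add roughly 2e apparent differences per site, so the relative inflation of pairwise diversity is about 2e/π. The diversity values below are rough textbook figures chosen for illustration, not numbers from the abstract:

```python
# Rough expected inflation of pairwise nucleotide diversity by sequencing
# error: two compared sequences each carry errors at rate e per bp, so
# ~2e extra apparent differences per site (ignoring coincident errors).
ERROR_RATE = 1e-3          # 1 error per kb, the abstract's benchmark

# Illustrative nucleotide diversities (assumed values, not from the paper):
PI = {"Drosophila": 0.01, "E. coli": 0.05, "human": 0.001}

for species, pi in PI.items():
    relative_bias = 2 * ERROR_RATE / pi
    print(f"{species}: pi={pi}, relative inflation ~ {relative_bias:.0%}")
```

At 1 error/kb the inflation is a minor fraction of Drosophila or E. coli diversity but exceeds the true human diversity itself, which is why low-diversity organisms demand higher sequencing accuracy.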
6.
In this paper we consider the problem where there is a randomized experimental design with several successive time measurements on each experimental unit. One approach to the analysis of such data is to treat time as the subplot treatment and to use a split-plot analysis of variance. Alternatively, the problem may be considered in a more general multivariate framework. Here we recognize the time-induced correlations and apply an autoregressive time series modelling approach. Estimation and testing are addressed. Two examples are presented to illustrate the practicality of our procedure. Some extensions are also considered briefly.
7.
Linkage analysis in the presence of errors II: marker-locus genotyping errors modeled with hypercomplex recombination fractions
It is well known that genotyping errors lead to loss of power in gene-mapping studies and underestimation of the strength of correlations between trait- and marker-locus genotypes. In two-point linkage analysis, these errors can be absorbed in an inflated recombination-fraction estimate, leaving the test statistic quite robust. In multipoint analysis, however, genotyping errors can easily result in false exclusion of the true location of a disease-predisposing gene. In a companion article, we described a "complex-valued" extension of the recombination fraction to accommodate errors in the assignment of trait-locus genotypes, leading to a multipoint LOD score with the same robustness to errors in trait-locus genotypes that is seen with the conventional two-point LOD score. Here, a further extension of this model to "hypercomplex-valued" recombination fractions (hereafter referred to as "hypercomplex recombination fractions") is presented, to handle random and systematic sources of marker-locus genotyping errors. This leads to a multipoint method (either "model-based" or "model-free") with the same robustness to marker-locus genotyping errors that is seen with conventional two-point analysis but with the advantage that multiple marker loci can be used jointly to increase meiotic informativeness. The cost of this increased robustness is a decrease in fine-scale resolution of the estimated map location of the trait locus, in comparison with traditional multipoint analysis. This probability model further leads to algorithms for the estimation of the lower bounds for the error rates for genomewide and locus-specific genotyping, based on the null-hypothesis distribution of the LOD-score statistic in the presence of such errors. It is argued that those genome scans in which the LOD score is 0 for >50% of the genome are likely to be characterized by high rates of genotyping errors in general.
8.
Wang J 《Molecular ecology》2010,19(22):5061-5078
Genetic markers are widely used to determine the parentage of individuals in studies of mating systems, reproductive success, dispersal and quantitative genetic parameters, and in the management of conservation populations. These markers are, however, imperfect for parentage analyses because of the presence of genotyping errors and undetectable alleles, which may cause incompatible genotypes (mismatches) between parents and offspring and thus result in false exclusions of true parentage. Highly polymorphic markers widely used in parentage analyses, such as microsatellites, are especially prone to genotyping errors. In this investigation, I derived the probabilities of excluding a random (related) individual from parentage and the probabilities of Mendelian-inconsistent errors (mismatches) and Mendelian-consistent errors (which do not cause mismatches) in parent-offspring dyads, when a marker having null alleles, allelic dropouts and false alleles is used in a parentage analysis. These probabilities are useful in evaluating the impact of various types of genotyping errors on the information content of a set of markers in, and thus the power of, a parentage analysis; in determining the threshold number of genetic mismatches that is appropriate for a parentage exclusion analysis; and in estimating the rates of genotyping errors and frequencies of null alleles from observed mismatches between known parent-offspring dyads. These applications are demonstrated by numerical examples using both hypothetical and empirical data sets and discussed in the context of practical parentage exclusion analyses.
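The exclusion probabilities derived analytically in the paper can be approximated by simulation. A Monte Carlo sketch for the simplest, error-free case (one codominant locus, mother's genotype known, random unrelated candidate male); this is not Wang's actual formulae, and the allele frequencies are chosen purely for illustration:

```python
import random

random.seed(1)

def exclusion_rate(freqs, trials=20000):
    """Monte Carlo probability that a random unrelated male is excluded
    from paternity by a single codominant locus with the mother's
    genotype known, assuming error-free genotypes."""
    alleles = list(range(len(freqs)))
    def draw():
        return random.choices(alleles, weights=freqs, k=1)[0]
    excluded = 0
    for _ in range(trials):
        mother = (draw(), draw())
        father = (draw(), draw())
        child = (random.choice(mother), random.choice(father))
        # Alleles the true father could have contributed: child allele a
        # is possibly paternal if the other allele could be maternal.
        paternal = {a for i, a in enumerate(child)
                    if child[1 - i] in mother}
        candidate = (draw(), draw())
        if not paternal & set(candidate):
            excluded += 1
    return excluded / trials

few = exclusion_rate([0.5, 0.5])      # biallelic marker
many = exclusion_rate([0.1] * 10)     # microsatellite-like marker
print(few, many)
```

The biallelic locus excludes a random male only about 19% of the time, while a ten-allele microsatellite-like locus is far more powerful, which is why highly polymorphic markers dominate parentage work despite their higher genotyping-error rates.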
9.
Hsiao LL Jensen RV Yoshida T Clark KE Blumenstock JE Gullans SR 《BioTechniques》2002,32(2):330-2, 334, 336
10.
Background: Recent work on long-term potentiation in brain slices shows that Hebb's rule is not completely synapse-specific, probably due to intersynapse diffusion of calcium or other factors. We previously suggested that such errors in Hebbian learning might be analogous to mutations in evolution.

Methods and findings: We examine this proposal quantitatively, extending the classical Oja unsupervised model of learning by a single linear neuron to include Hebbian inspecificity. We introduce an error matrix E, which expresses possible crosstalk between updating at different connections. When there is no inspecificity, this gives the classical result of convergence to the first principal component of the input distribution (PC1). We show the modified algorithm converges to the leading eigenvector of the matrix EC, where C is the input covariance matrix. In the most biologically plausible case, when there are no intrinsically privileged connections, E has diagonal elements Q and off-diagonal elements (1-Q)/(n-1), where Q, the quality, is expected to decrease with the number of inputs n and with a synaptic parameter b that reflects synapse density, calcium diffusion, etc. We study the dependence of the learning accuracy on b, n and the amount of input activity or correlation, both analytically and computationally. We find that inaccuracy increases (learning becomes gradually less useful) with increases in b, particularly for intermediate (i.e., biologically realistic) correlation strength, although some useful learning always occurs up to the trivial limit Q=1/n.

Conclusions and significance: We discuss the relation of our results to Hebbian unsupervised learning in the brain. When the mechanism lacks specificity, the network fails to learn the expected, and typically most useful, result, especially when the input correlation is weak. Hebbian crosstalk would reflect the very high density of synapses along dendrites, and inevitably degrades learning.
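The stated convergence target (the leading eigenvector of EC rather than of C) is easy to check numerically. A small sketch using only the E structure given in the abstract; the covariance matrix C here is synthetic, not data from the paper:

```python
import numpy as np

n = 5
rng = np.random.default_rng(0)
A = rng.normal(size=(n, n))
C = A @ A.T                      # a synthetic input covariance matrix

def error_matrix(Q, n):
    """E with diagonal Q and off-diagonal (1-Q)/(n-1), as in the abstract."""
    b = (1.0 - Q) / (n - 1)
    return (Q - b) * np.eye(n) + b * np.ones((n, n))

def lead_eigvec(M):
    """Unit-norm eigenvector for the largest (real part of the) eigenvalue."""
    w, v = np.linalg.eig(M)
    vec = np.real(v[:, np.argmax(np.real(w))])
    return vec / np.linalg.norm(vec)

pc1 = lead_eigvec(C)

def alignment(Q):
    """|cosine| between PC1 of C and the leading eigenvector of EC."""
    return abs(lead_eigvec(error_matrix(Q, n) @ C) @ pc1)

print(alignment(1.0), alignment(0.4))
```

With perfect specificity (Q = 1, so E = I) the learned direction coincides with PC1; lowering Q rotates the fixed point away from PC1, i.e. crosstalk degrades what the neuron learns.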
11.
Rie Hagihara Rhondda E. Jones Amanda J. Hodgson Helene Marsh 《Journal of experimental marine biology and ecology》2011,399(2):173-181
Knowledge of the diving behaviour of aquatic animals expanded considerably with the invention of time-depth recorders (TDRs) in the 1960s. The large volume of data acquired from TDRs can be analyzed using dive analysis software; however, the application of such software has received relatively little attention. We present an empirical procedure for selecting optimum values of two parameters that are critical to obtaining reliable results: the zero-offset correction (ZOC) and the dive threshold. We used dive data from shallow-diving coastal dugongs (Dugong dugon) and visual observations from an independent study to develop and test a procedure that minimizes errors in characterizing dives. We initially corrected the surface level using custom software. We then determined the optimum values for each parameter by classifying dives identified by an open-source dive analysis software into Plausible and Implausible dives based on the duration of dives. The Plausible dives were further classified as Unrecognized dives if they were not identified by the software but were of realistic dive duration. The comparison of these dive types indicated that a ZOC of 1 m and a dive threshold of 0.75 m were the optimum values for our dugong data, as they gave the largest number of Plausible dives and smaller numbers of the other dive types. Frequency distributions of dive durations from TDRs and independent visual observations supported this selection. Our procedure could be applied to other shallow-diving animals such as coastal dolphins and turtles.
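A toy version of the dive-characterization step, assuming a median-based zero-offset correction and a minimum-duration rule for rejecting implausible dives; the paper's actual procedure uses dedicated dive-analysis software, and the threshold and duration values below are illustrative:

```python
import statistics

def detect_dives(depths, dt=1.0, threshold=0.75, min_duration=10.0):
    """Detect dives in a depth trace (metres, one sample per dt seconds).

    Zero-offset correction: subtract the median of the shallowest quarter
    of readings so that 'surface' sits at 0 m. Returns (start, duration)
    pairs for excursions deeper than `threshold` lasting at least
    `min_duration` seconds; shorter excursions are rejected as implausible.
    """
    shallow = sorted(depths)[: max(1, len(depths) // 4)]
    offset = statistics.median(shallow)          # crude ZOC
    corrected = [d - offset for d in depths]
    dives, start = [], None
    for i, d in enumerate(corrected + [0.0]):    # sentinel closes open dive
        if d > threshold and start is None:
            start = i
        elif d <= threshold and start is not None:
            duration = (i - start) * dt
            if duration >= min_duration:
                dives.append((start * dt, duration))
            start = None
    return dives

# Synthetic trace: a 0.4 m surface offset, one 60 s dive to 3 m, and one
# 2 s spike that should be rejected as implausibly short.
trace = [0.4] * 30 + [3.0] * 60 + [0.4] * 30 + [2.0] * 2 + [0.4] * 30
print(detect_dives(trace))
```

The drifting surface offset is removed before thresholding, so the genuine dive is recovered while the brief spike is discarded, mirroring the Plausible/Implausible classification in the paper.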
12.
In movement analysis of the horse, large errors result from movements of the skin with respect to the underlying bones. A generally applicable, two-dimensional, method for correction of these skin-movement errors in kinematic data has been developed. It was tested on a kinematic analysis of the hindlimb in a walking pony. The results indicate that without correction for skin-movement errors, misreadings of up to 15 degrees in the knee angle and 30% in the moment arm of the gastrocnemius muscle can be expected.
13.
14.
P Rother W Jahn G Fitzl T Wallmann U Walter 《Gegenbaurs morphologisches Jahrbuch》1986,132(6):839-845
Statistical techniques known as the analysis of variance make it possible for the morphologist to plan work in such a way as to obtain quantitative data with the greatest possible economy of effort. This paper explains how to decide how many measurements to make per micrograph, how many micrographs to take per tissue block or organ, and how many organs or individuals are needed to achieve sufficient precision in the results. The examples are taken from measurements of the volume density of mitochondria in heart muscle cells and from cell counting in lymph nodes. Finally, we show how to determine sample sizes when the aim is to demonstrate significant differences between mean values.
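The "measurements per micrograph" decision is a classic two-stage allocation problem. A sketch using the standard Cochran optimum-allocation rule, which minimises variance for a fixed cost; this is the textbook result, not necessarily the paper's exact procedure, and the variance components and costs below are hypothetical:

```python
import math

def optimal_subsamples(var_between, var_within, cost_between, cost_within):
    """Cochran's optimum-allocation rule for two-stage sampling: the
    number of measurements per micrograph that minimises the variance of
    the overall mean for a fixed total cost."""
    return math.sqrt((cost_between / cost_within)
                     * (var_within / var_between))

# Hypothetical variance components for mitochondrial volume density:
# preparing a new micrograph costs 20x as much as one extra measurement.
m = optimal_subsamples(var_between=4.0, var_within=9.0,
                       cost_between=20.0, cost_within=1.0)
print(round(m))
```

With these assumed numbers, about seven measurements per micrograph are optimal; more within-micrograph variability or cheaper measurements push the optimum higher, fewer micrographs lower.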
15.
16.
S. M. Cummings M. McMullan D. A. Joyce C. van Oosterhout 《Conservation Genetics》2010,11(3):1095-1097
PCR and sequencing artefacts can seriously bias population genetic analyses, particularly of populations with low genetic variation such as endangered vertebrate populations. Here, we estimate the error rates, discuss their population genetics implications, and propose a simple detection method that helps to reduce the risk of accepting such errors. We study the major histocompatibility complex (MHC) class IIB of guppies (Poecilia reticulata) and find that PCR base misincorporations inflate the apparent sequence diversity. When analysing neutral genes, such bias can inflate estimates of effective population size. Previously suggested protocols for identifying genuine alleles are unlikely to exclude all sequencing errors, or they ignore genuine sequence diversity. We present a novel and statistically robust method that reduces the likelihood of accepting PCR artefacts as genuine alleles and minimises the need for repeated genotyping. Our method identifies sequences that are unlikely to be PCR artefacts, as well as those that need to be independently confirmed through additional PCR of the same template DNA. The proposed methods are recommended particularly for population genetic studies that involve multi-template DNA and for studies of genes with low genetic diversity.
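A simplified stand-in for this kind of screening, assuming the common heuristic that a rare variant one base away from an abundant sequence is a likely polymerase misincorporation; the paper's statistically robust method is more sophisticated than this sketch:

```python
from collections import Counter

def hamming(a, b):
    """Number of mismatching positions between equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def flag_putative_artefacts(reads):
    """Flag sequences that look like PCR artefacts: singletons lying
    within one base of a strictly more frequent sequence. Flagged
    variants would need independent confirmation (a further PCR of the
    same template DNA) before being accepted as genuine alleles."""
    counts = Counter(reads)
    flagged = set()
    for seq, n in counts.items():
        if n == 1 and any(hamming(seq, other) == 1 and m > n
                          for other, m in counts.items() if other != seq):
            flagged.add(seq)
    return flagged

# Toy data: one abundant allele, one 1-bp singleton variant of it, and a
# distinct allele that should not be flagged.
reads = ["ACGTAC"] * 8 + ["ACGTAT"] + ["TTTTTT"] * 3
print(flag_putative_artefacts(reads))
```

Only the singleton neighbour of the abundant allele is flagged for re-PCR; the distinct, repeatedly observed sequence is accepted, which is the trade-off the paper formalises between rejecting errors and discarding genuine diversity.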
17.
Emily B. Kramer Haritha Vallabhaneni Lauren M. Mayer Philip J. Farabaugh 《RNA (New York, N.Y.)》2010,16(9):1797-1808
The process of protein synthesis must be sufficiently rapid and sufficiently accurate to support continued cellular growth. Failure in speed or accuracy can have dire consequences, including disease in humans. Most estimates of the accuracy come from studies of bacterial systems, principally Escherichia coli, and have involved incomplete analysis of possible errors. We recently used a highly quantitative system to measure the frequency of all types of misreading errors by a single tRNA in E. coli. That study found a wide variation in error frequencies among codons; a major factor causing that variation is competition between the correct (cognate) and incorrect (near-cognate) aminoacyl-tRNAs for the mutant codon. Here we extend that analysis to measure the frequency of missense errors by two tRNAs in a eukaryote, the yeast Saccharomyces cerevisiae. The data show that in yeast errors vary by codon from a low of 4 × 10−5 to a high of 6.9 × 10−4 per codon and that error frequency is in general about threefold lower than in E. coli, which may suggest that yeast has additional mechanisms that reduce missense errors. Error rate again is strongly influenced by tRNA competition. Surprisingly, missense errors involving wobble position mispairing were much less frequent in S. cerevisiae than in E. coli. Furthermore, the error-inducing aminoglycoside antibiotic, paromomycin, which stimulates errors on all error-prone codons in E. coli, has a more codon-specific effect in yeast.
18.
Based on the well-known mechanism describing Michaelis-Menten kinetics, three rate expressions may be developed: the exact solution (Model 1), a rate equation resulting from the pseudo-steady-state assumption (Model 2), and Model 2 with the additional assumption that the amount of free substrate is approximately equal to the total amount of substrate (Model 3). Although Model 1 is the most precise, it must be integrated numerically and it requires three experimentally determined parameters. Models 2 and 3, however, are simpler and require only two parameters. Using dimensionless forms of the three models, we have evaluated the errors in the two simplified models relative to the exact solution over a wide range of parameter values. The choice of model for reactor design depends on the initial substrate-to-enzyme ratio (α0) and on the ratio of the Michaelis-Menten constant to the enzyme concentration (σ). Based on a 2% model-error criterion: when α0 > 15 or σ ≥ 100, Model 3 is adequate; if 5 < α0 < 15 or σ ≥ 10, Model 2 may be used; and if α0 < 5 and σ < 10, the exact solution (Model 1) is required.
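The gap between Models 2 and 3 can be illustrated directly at the rate level: Model 2 solves a quadratic for the free substrate (from S_tot = S + E_tot·S/(Km + S)), while Model 3 plugs total substrate into the classical rate law. A sketch with arbitrary illustrative parameters (Km = E_tot = kcat = 1); the paper's full analysis integrates the models over time:

```python
import math

KM, E_TOT, KCAT = 1.0, 1.0, 1.0   # arbitrary illustrative units

def rate_model2(s_tot):
    """Pseudo-steady-state rate using the free substrate concentration,
    the positive root of S^2 + S*(Km + E_tot - S_tot) - Km*S_tot = 0."""
    b = KM + E_TOT - s_tot
    s_free = (-b + math.sqrt(b * b + 4.0 * KM * s_tot)) / 2.0
    return KCAT * E_TOT * s_free / (KM + s_free)

def rate_model3(s_tot):
    """Classical Michaelis-Menten rate with total substrate standing in
    for free substrate."""
    return KCAT * E_TOT * s_tot / (KM + s_tot)

for alpha0 in (2.0, 20.0):        # alpha0 = S_tot / E_tot (E_tot = 1 here)
    s_tot = alpha0 * E_TOT
    v2, v3 = rate_model2(s_tot), rate_model3(s_tot)
    print(alpha0, abs(v3 - v2) / v2)
```

At α0 = 20 the Model 3 rate is within a fraction of a percent of Model 2, while at α0 = 2 it overestimates the rate by over 10%, consistent with the abstract's α0 > 15 criterion for Model 3.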
19.
20.