Similar Documents
20 similar documents found.
1.
High-throughput screening (HTS) is an efficient technology for drug discovery. It allows for screening of more than 100,000 compounds a day per screen and requires effective procedures for quality control. The authors have developed a method for evaluating the background surface of an HTS assay; it can be used to correct raw HTS data. This correction is necessary to take into account systematic errors that may affect the procedure of hit selection. The described method allows one to analyze experimental HTS data and determine trends and local fluctuations of the corresponding background surfaces. For an assay with a large number of plates, the deviations of the background surface from a plane are caused by systematic errors. Their influence can be minimized by subtracting the systematic background from the raw data. Two experimental HTS assays from the ChemBank database are examined in this article. The systematic error present in these data was estimated and removed, which made it possible to correct the hit selection procedure for both assays.
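To make the correction concrete, here is a minimal sketch (not the authors' exact algorithm): the background surface is estimated as the per-well median across all plates and subtracted from the raw data. The array layout and the median estimator are assumptions for illustration.

    import numpy as np

    def correct_background(raw):
        """raw: (n_plates, n_rows, n_cols) array of raw HTS readouts.
        Estimate the systematic background surface as the per-well median
        across all plates and subtract it from every plate."""
        background = np.median(raw, axis=0)
        background -= background.mean()   # keep the assay-wide level unchanged
        return raw - background

    # Example: 50 plates of 96 wells with an artificial column trend.
    rng = np.random.default_rng(0)
    raw = rng.normal(100, 10, (50, 8, 12)) + np.linspace(0, 5, 12)
    corrected = correct_background(raw)
    # Hits are then selected from the corrected values, e.g. wells more than
    # 3 SD below the plate mean in an inhibition assay.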

2.
For surface fluxes of carbon dioxide, the net daily flux is the sum of daytime and nighttime fluxes of approximately the same magnitude and opposite direction. The net flux is therefore significantly smaller than the individual flux measurements, and error assessment is critical in determining whether a surface is a net source or sink of carbon dioxide. It is an occasional misconception that the net flux is measured as the difference between net upward and downward fluxes (i.e. a small difference between large terms). This is not the case: the net flux is the sum of individual (half-hourly or hourly) flux measurements, each with an associated error term. The question of errors and uncertainties in long-term flux measurements of carbon and water is addressed by first considering the potential for errors in flux measuring systems in general, and thus errors which are relevant to a wide range of measurement timescales. We focus exclusively on flux measurements made by the micrometeorological method of eddy covariance. Errors can loosely be divided into random errors and systematic errors, although in reality any particular error may be a combination of both types. Systematic errors can be fully systematic (applying to all of the daily cycle) or selectively systematic (applying to only part of the daily cycle); the two have very different effects. Random errors may also be full or selective, but these do not differ substantially in their properties. We describe an error analysis in which these three different types of error are applied to a long-term dataset to discover how errors may propagate through long-term data, and which can be used to estimate the range of uncertainty in the reported sink strength of the particular ecosystem studied.
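The distinction between the error types can be illustrated with a toy annual sum of half-hourly fluxes; the square-wave fluxes and error magnitudes below are invented, not values from the study:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 365 * 48                                   # one year of half-hours
    hour = (np.arange(n) % 48) / 2.0
    day = (hour > 6) & (hour < 18)
    flux = np.where(day, -10.0, 5.0)               # umol CO2 m-2 s-1 (toy values)
    to_gC = 1800 * 1e-6 * 12                       # per half-hour -> g C m-2

    true_sink = flux.sum() * to_gC
    random_err = (flux + rng.normal(0, 2.0, n)).sum() * to_gC
    fully_sys = (flux * 1.05).sum() * to_gC        # +5% applied to all fluxes
    selective = np.where(day, flux, 0.8 * flux).sum() * to_gC  # night-only bias

    # Random errors nearly cancel in the annual sum; a fully systematic error
    # scales the sum; a selectively systematic (night-only) error changes the
    # apparent sink strength the most.
    print(true_sink, random_err, fully_sys, selective)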

3.
This article presents a method to test for the presence of relatively small systematic measurement errors, e.g., those caused by inaccurate calibration or sensor drift. To do this, primary measurements (flow rates and concentrations) are first translated into observed conversions, which should satisfy several constraints, such as the laws of conservation of chemical elements. This study considers three objectives: (1) modification of the commonly used balancing technique, applied sequentially in time, to improve its error sensitivity so that small systematic errors can be detected; (2) extension of the method to enable direct diagnosis of errors in the primary measurements rather than in the observed conversions, achieved by analyzing how individual errors in the primary measurements are expressed in the residual vector; and (3) derivation of a new systematic method to quantify the sensitivity of the test, defined as the error size at which the expected value of the chi-square test function equals its critical value. The method is applied to industrial data, demonstrating the effectiveness of the approach. It was shown that, for most possible error sources, systematic errors of 2% to 5% could be detected. In the given application, the variation of the N-content of the biomass was identified as the cause of the errors. (c) 1994 John Wiley & Sons, Inc.
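A minimal sketch of the balancing test at the core of this approach (the constraint matrix, covariance, and data below are placeholders, not the paper's industrial data): conservation constraints make E @ y vanish for error-free conversions, so the weighted squared residual can be compared against a chi-square critical value, and the detectable error size is the one whose noncentrality raises the expected test statistic to that critical value.

    import numpy as np
    from scipy import stats

    def balance_test(E, y, cov, alpha=0.05):
        """E @ y_true = 0 expresses elemental conservation (E full row rank);
        large residuals in E @ y indicate systematic measurement errors."""
        r = E @ y
        V = E @ cov @ E.T
        h = float(r @ np.linalg.solve(V, r))        # chi-square test statistic
        crit = stats.chi2.ppf(1 - alpha, df=E.shape[0])
        return h, crit, h > crit

    # Placeholder: one carbon balance over three observed conversions that
    # should sum to zero; applying the test per time window (sequentially)
    # raises its sensitivity to small persistent errors.
    E = np.array([[1.0, 1.0, 1.0]])
    cov = np.diag([1e-4, 1e-4, 1e-4])
    y = np.array([0.60, -0.45, -0.12])              # imbalance of 0.03
    print(balance_test(E, y=y, cov=cov))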

4.
In well-known methods of estimating rates of irreversible disposal (utilization) in vivo, the rates are calculated from the areas to infinity under specific radioactivity-time (S-t) or quantity-of-label-time (q-t) curves obtained by measurements on samples of plasma after intravenous injection of labelled substrate. The errors in the calculated rates are mostly those of the estimates of the areas. These errors are of two kinds: random, caused by the variances of the values of S or q, and systematic, caused by differences between the curves used to interpolate between these values and the true curves. A rigorous method is given for calculating the random errors from the variances of the values of S or q, and is applied to choosing the best times to sample the plasma from small animals from which few plasma samples can be taken. A procedure for estimating systematic errors is also given. Programs in BASIC to carry out the calculations are deposited as Supplementary Publication SUP 50058 (5 pages) at the British Library (Lending Division), Boston Spa, Wetherby, West Yorkshire LS23 7BQ, U.K., from whom copies can be obtained on the terms given in Biochem. J. (1975) 145, 5.
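As a sketch of the random-error part of the calculation (the mono-exponential tail and first-order error propagation are standard choices assumed here, not taken from the deposited BASIC programs): once the area to infinity is written as a weighted sum of the measured values, its variance follows directly from their variances.

    import numpy as np

    def area_to_infinity(t, S, var_S):
        """Trapezoidal area under the measured curve plus a mono-exponential
        tail beyond the last sample; the random error follows from the
        variances of S because the area is a weighted sum of the S values.
        (The segment from time zero to the first sample is ignored here and
        the uncertainty of the tail rate constant is neglected.)"""
        w = np.zeros_like(S, dtype=float)
        dt = np.diff(t).astype(float)
        w[:-1] += dt / 2
        w[1:] += dt / 2                               # trapezoid weights
        k = np.log(S[-2] / S[-1]) / (t[-1] - t[-2])   # terminal decay rate
        w[-1] += 1.0 / k                              # exponential-tail integral
        area = w @ S
        return area, np.sqrt(w ** 2 @ var_S)          # delta-method standard error

    t = np.array([2.0, 5, 10, 20, 40, 60])            # sampling times, min
    S = np.array([8.0, 5.5, 3.4, 1.6, 0.5, 0.2])      # specific radioactivity
    area, sd = area_to_infinity(t, S, (0.05 * S) ** 2)  # assume 5% SD per point
    # rate of irreversible disposal = dose / area for an S-t curve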

5.
Several systematic errors may occur during the analysis of uninhibited enzyme kinetic data using commercially available multiwell plate reader software. A MATLAB program is developed to remove these systematic errors from the data analysis process for a single substrate-enzyme system conforming to Michaelis-Menten kinetics. Three experimental designs that may be used to validate a new enzyme preparation or assay methodology and to characterize an enzyme-substrate system, while capitalizing on the ability of multiwell plate readers to perform multiple reactions simultaneously, are also proposed. These experimental designs are used to (i) test for enzyme inactivation and the quality of data obtained from an enzyme assay using Selwyn's test, (ii) calculate the limit of detection of the enzyme assay, and (iii) calculate Km and Vm values. If replicates that reflect the overall error in performing a measurement are used, the latter two experiments may be performed with internal estimation of the error structure. The need to correct for the systematic errors discussed and the utility of the proposed experimental designs were confirmed by numerical simulation. The proposed experiments were conducted using recombinant inducible nitric oxide synthase preparations and the oxyhemoglobin assay.
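A generic version of the Km/Vm experiment (invented data; this is not the authors' MATLAB program) is a direct nonlinear fit of the Michaelis-Menten equation, whose parameter covariance supplies internal error estimates:

    import numpy as np
    from scipy.optimize import curve_fit

    def mm(S, Vm, Km):
        # Michaelis-Menten initial-rate equation
        return Vm * S / (Km + S)

    S = np.array([1.0, 2, 5, 10, 20, 50])         # substrate concentrations
    v = np.array([0.9, 1.6, 2.9, 3.9, 4.6, 5.1])  # measured initial rates
    popt, pcov = curve_fit(mm, S, v, p0=[v.max(), np.median(S)])
    perr = np.sqrt(np.diag(pcov))                 # internal error estimates
    print("Vm = %.2f +/- %.2f, Km = %.2f +/- %.2f"
          % (popt[0], perr[0], popt[1], perr[1]))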

6.
A theory based on a Langevin equation along the reaction coordinate is developed to explain and calculate systematic and statistical errors in free energy perturbation simulations. The errors are calculated exactly when both the perturbation potential and the mean potential from the surrounding degrees of freedom are harmonic in the reaction coordinate. The effect of the mean potential is small as long as its force constant is small compared to that of the perturbation potential. This indicates that results obtained with zero mean force may still be valid as long as the second derivative of the mean potential is small compared to that of the perturbation potential. The theory is applied to conversion between L and D amino acids by changing the position of the minimum of the harmonic improper dihedral potential between ±35.264 degrees. For phenylalanine bound in the active site of a protein (thermolysin), 20 ps simulations give statistical errors and hysteresis that are both about 2.5 kJ/mol, in agreement with the theoretical predictions. The statistical errors are proportional to the square root of the coupling to the heat bath and inversely proportional to the square root of the integration time, while the (positive) hysteresis, caused by the reaction coordinate lagging behind, is linear in the same quantities. This shows that systematic errors will dominate in short simulations while statistical ones will dominate in long simulations. The treatment assumes that the systematic influence of the surroundings can be represented by a mean force on the reaction coordinate; if the relaxation processes of the environment are slow, this may not hold and additional errors have to be considered.
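The predicted scaling of the two error types can be reproduced with a toy one-dimensional overdamped Langevin model of the reaction coordinate (reduced units; all parameter values are invented):

    import numpy as np

    rng = np.random.default_rng(1)

    def drag_work(T, kp=20.0, gamma=1.0, kT=1.0, dt=1e-2):
        """Overdamped Langevin dynamics of a coordinate x pulled by a harmonic
        perturbation potential U = kp/2 (x - x0(t))^2, with x0 moved linearly
        from -1 to +1 over total time T; returns the work performed."""
        n = int(T / dt)
        x, W = -1.0, 0.0
        for i in range(n):
            x0 = -1.0 + 2.0 * i / n
            x0_next = -1.0 + 2.0 * (i + 1) / n
            x += -kp * (x - x0) / gamma * dt \
                 + np.sqrt(2 * kT / gamma * dt) * rng.normal()
            W += -kp * (x - x0_next) * (x0_next - x0)   # dU/dx0 * dx0
        return W

    # The free-energy change is zero by symmetry, so the mean work is pure
    # hysteresis (shrinking roughly as 1/T) while the spread of the work is
    # the statistical error (shrinking roughly as 1/sqrt(T)).
    for T in (1.0, 10.0, 100.0):
        W = [drag_work(T) for _ in range(20)]
        print(T, np.mean(W), np.std(W))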

7.
Semiconductor nanocrystals or quantum dots (QDs) are becoming widely used as fluorescent labels for biological applications. Here we demonstrate that fluorescence fluctuation analysis of their diffusional mobility using temporal image correlation spectroscopy is highly susceptible to systematic errors caused by fluorescence blinking of the nanoparticles. Temporal correlation analysis of fluorescence microscopy image time series of streptavidin-functionalized (CdSe)ZnS QDs freely diffusing in two dimensions shows that the correlation functions are fit well by a commonly used diffusion decay model, but the transport coefficients can have significant systematic errors due to blinking. Image correlation measurements of the diffusing QD samples measured at different laser excitation powers and analysis of computer simulated image time series verified that the effect we observe is caused by fluorescence intermittency. We show that reciprocal space image correlation analysis can be used for mobility measurements in the presence of blinking emission because it separates the contributions of fluctuations due to photophysics from those due to transport. We also demonstrate application of the image correlation methods for measurement of the diffusion coefficient of glycosyl phosphatidylinositol-anchored proteins tagged with QDs as imaged on living fibroblasts.
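In outline, the temporal analysis reduces to fitting the intensity-fluctuation autocorrelation with a two-dimensional diffusion decay model; the sketch below uses a generic estimator and invented numbers, not the paper's data:

    import numpy as np
    from scipy.optimize import curve_fit

    def temporal_acf(stack):
        """Mean temporal autocorrelation of intensity fluctuations for an
        image series of shape (T, X, Y)."""
        d = stack - stack.mean(axis=0)
        T = stack.shape[0]
        g = np.array([(d[:T - k] * d[k:]).mean() for k in range(1, T // 2)])
        return g / stack.mean() ** 2

    def diffusion_decay(tau, g0, tau_d, ginf):
        # 2-D diffusion model; tau_d = w0^2 / (4 D) for beam radius w0.
        # Blinking multiplies in an extra decay, so fitting this model to
        # blinking emitters biases tau_d and hence D.
        return g0 / (1.0 + tau / tau_d) + ginf

    tau = np.arange(1, 200) * 0.1                 # lag times, s
    g = diffusion_decay(tau, 0.05, 2.0, 0.0)
    g += np.random.default_rng(2).normal(0, 2e-4, tau.size)
    (g0, tau_d, ginf), _ = curve_fit(diffusion_decay, tau, g, p0=[0.04, 1.0, 0.0])
    print(tau_d)                                  # recovers ~2 s without blinking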

8.
While extracting dynamics parameters from backbone (15)N relaxation measurements in proteins has become routine over the past two decades, it is increasingly recognized that accurate quantitative analysis can remain limited by the potential presence of systematic errors associated with the measurement of (15)N R(1) and R(2) or R(1ρ) relaxation rates as well as heteronuclear (15)N-{(1)H} NOE values. We show that systematic errors in such measurements can be far larger than the statistical error derived from either the observed signal-to-noise ratio, or from the reproducibility of the measurement. Unless special precautions are taken, the problem of systematic errors is shown to be particularly acute in perdeuterated systems, and even more so when TROSY instead of HSQC elements are used to read out the (15)N magnetization through the NMR-sensitive (1)H nucleus. A discussion of the most common sources of systematic errors is presented, as well as TROSY-based pulse schemes that appear free of systematic errors to the level of <1 %. Application to the small perdeuterated protein GB3, which yields exceptionally high S/N and therefore is an ideal test molecule for detection of systematic errors, yields relaxation rates that show considerably less residue by residue variation than previous measurements. Measured R(2)'/R(1)' ratios fit an axially symmetric diffusion tensor with a Pearson's correlation coefficient of 0.97, comparable to fits obtained for backbone amide RDCs to the Saupe matrix.

9.
Attenuation correction is necessary for quantification in micro-single-photon emission computed tomography (micro-SPECT). In general, this is done based on micro-computed tomographic (micro-CT) images. Derivation of the attenuation map from magnetic resonance (MR) images is difficult because bone and lung are invisible in conventional MR images and hence indistinguishable from air. An ultrashort echo time (UTE) sequence yields signal in bone and lungs. Micro-SPECT, micro-CT, and MR images of 18 rats were acquired. Different tracers were used: hexamethylpropyleneamine oxime (brain), dimercaptosuccinic acid (kidney), colloids (liver and spleen), and macroaggregated albumin (lung). The micro-SPECT images were reconstructed without attenuation correction, with micro-CT-based attenuation maps, and with three MR-based attenuation maps: uniform, non-UTE-MR based (air, soft tissue), and UTE-MR based (air, lung, soft tissue, bone). The average difference with the micro-CT-based reconstruction was calculated. The UTE-MR-based attenuation correction performed best, with average errors ≤ 8% in the brain scans and ≤ 3% in the body scans. It yields nonsignificant differences for the body scans. The uniform map yields errors of ≤ 6% in the body scans. No attenuation correction yields errors ≥ 15% in the brain scans and ≥ 25% in the body scans. Attenuation correction should always be performed for quantification. The feasibility of MR-based attenuation correction was shown. When accurate quantification is necessary, a UTE-MR-based attenuation correction should be used.

10.
BACKGROUND AND AIMS: Two previous papers in this series evaluated model fit of eight thermal-germination models parameterized from constant-temperature germination data. The previous studies determined that model formulations with the fewest shape assumptions provided the best estimates of both germination rate and germination time. The purpose of this latest study was to evaluate the accuracy and efficiency of these same models in predicting germination time and relative seedlot performance under field-variable temperature scenarios. METHODS: The seeds of four rangeland grass species were germinated under 104 variable-temperature treatments simulating six planting dates at three field sites in south-western Idaho. Measured and estimated germination times for all subpopulations were compared for all models, species and temperature treatments. KEY RESULTS: All models showed similar, and relatively high, predictive accuracy for field-temperature simulations except for the iterative-probit-optimization (IPO) model, which exhibited systematic errors as a function of subpopulation. Highest efficiency was obtained with the statistical-gridding (SG) model, which could be directly parameterized by measured subpopulation rate data. Relative seedlot response predicted by thermal time coefficients was somewhat different from that estimated from mean field-variable temperature response as a function of subpopulation. CONCLUSIONS: All germination response models tested performed relatively well in estimating field-variable temperature response. IPO caused systematic errors in predictions of germination time, and may have degraded the physiological relevance of resultant cardinal-temperature parameters. Comparative indices based on expected field performance may be more ecologically relevant than indices derived from a broader range of potential thermal conditions.

11.
A new dynamic model of left ventricular (LV) pressure-volume relationships in beating heart was developed by mathematically linking chamber pressure-volume dynamics with cardiac muscle force-length dynamics. The dynamic LV model accounted for >80% of the measured variation in pressure caused by small-amplitude volume perturbation in an otherwise isovolumically beating, isolated rat heart. The dynamic LV model produced good fits to pressure responses to volume perturbations, but there existed some systematic features in the residual errors of the fits. The issue was whether these residual errors would be damaging to an application where the dynamic LV model was used with LV pressure and volume measurements to estimate myocardial contractile parameters. Good agreement among myocardial parameters responsible for response magnitude was found between those derived by geometric transformations of parameters of the dynamic LV model estimated in beating heart and those found by direct measurement in constantly activated, isolated muscle fibers. Good agreement was also found among myocardial kinetic parameters estimated in each of the two preparations. Thus the small systematic residual errors from fitting the LV model to the dynamic pressure-volume measurements do not interfere with use of the dynamic LV model to estimate contractile parameters of myocardium. Dynamic contractile behavior of cardiac muscle can now be obtained from a beating heart by judicious application of the dynamic LV model to information-rich pressure and volume signals. This provides for the first time a bridge between the dynamics of cardiac muscle function and the dynamics of heart function and allows a beating heart to be used in studies where the relevance of myofilament contractile behavior to cardiovascular system function may be investigated.

12.
This paper deals with hazard regression models for survival data with time-dependent covariates consisting of updated quantitative measurements. The main emphasis is on the Cox proportional hazards model, but additive hazard models are also discussed. Attenuation of regression coefficients caused by infrequent updating of covariates is evaluated using simulated data mimicking our main example, the CSL1 liver cirrhosis trial. We conclude that the degree of attenuation depends on the type of stochastic process describing the time-dependent covariate, and that attenuation may be substantial for an Ornstein-Uhlenbeck process. Trends in the covariate combined with non-synchronous updating may also cause attenuation. Simple methods to adjust for infrequent updating of covariates are proposed and compared to existing techniques using both simulations and the CSL1 data. The comparison shows that while existing, more complicated methods may work well with frequent updating of covariates, the simpler techniques may have advantages in larger data sets with infrequent updating.

13.
Several methods currently in use for measuring mean corpuscular volume include centrifuged packed cell volume, electronic impedance, and light scattering. Although these techniques are widely used and accepted, there are problems inherent to each method which may produce systematic errors that are difficult to estimate. This paper describes a new flow cytometric method of cell volume determination, based on the principle of volume exclusion, which may overcome the systematic errors of the methods currently in use. This method requires that the cells be suspended in a fluorescent dye which is unable to penetrate the cell membrane. The level of fluorescence produced when a narrow stream of the cell suspension is excited by a focused laser beam remains constant until a cell arrives in the illuminated region, causing a decrease in fluorescence that is directly proportional to the cell's volume. The volume exclusion method is shown to give an estimate of mean red cell volume which correlates well with existing methods.

14.
The Ion Torrent Personal Genome Machine (PGM) is a new sequencing platform that substantially differs from other sequencing technologies by measuring pH rather than light to detect polymerisation events. Using re-sequencing datasets, we comprehensively characterise the biases and errors introduced by the PGM at both the base and flow level, across a combination of factors, including chip density, sequencing kit, template species and machine. We found two distinct insertion/deletion (indel) error types that accounted for the majority of errors introduced by the PGM. The main error source was inaccurate flow-calls, which introduced indels at a raw rate of 2.84% (1.38% after quality clipping) using the OneTouch 200 bp kit. Inaccurate flow-calls typically resulted in over-called short-homopolymers and under-called long-homopolymers. Flow-call accuracy decreased with consecutive flow cycles, but we also found significant periodic fluctuations in the flow error-rate, corresponding to specific positions within the flow-cycle pattern. Another less common PGM error, high frequency indel (HFI) errors, are indels that occur at very high frequency in the reads relative to a given base position in the reference genome, but in the majority of instances were not replicated consistently across separate runs. HFI errors occur approximately once every thousand bases in the reference, and correspond to 0.06% of bases in reads. Currently, the PGM does not achieve the accuracy of competing light-based technologies. However, flow-call inaccuracy is systematic and the statistical models of flow-values developed here will enable PGM-specific bioinformatics approaches to be developed, which will account for these errors. HFI errors may prove more challenging to address, especially for polymorphism and amplicon applications, but may be overcome by sequencing the same DNA template across multiple chips.

15.
In microarray experiments, correlations between gene expression levels play a very important role in inferring relationships among genes. In unnormalized microarray data, genes often appear strongly correlated; part of this high correlation is caused by genuine changes in gene expression levels, while another part is caused by systematic bias. One purpose of normalizing microarray data is to remove the high correlations induced by systematic bias while preserving those arising from true biological causes. Although there have been many comparative studies of normalization methods, few have examined how normalization affects the correlation coefficients between genes, or which method best recovers the correlation structure among genes. By simulating gene expression data, this study compares the performance of several commonly used normalization methods and identifies the one most favorable for recovering the correlation structure among genes.
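A toy version of such a simulation (invented dimensions, with quantile normalization standing in for the methods compared in the study) shows how a shared per-array bias inflates inter-gene correlations and how normalization removes it:

    import numpy as np

    rng = np.random.default_rng(3)
    genes, arrays = 200, 30
    true = rng.normal(8, 1, (genes, arrays))   # independent genes
    bias = rng.normal(0, 1, arrays)            # per-array systematic shift
    raw = true + bias                          # shared bias -> spurious correlation

    def quantile_normalize(x):
        """Force every array (column) onto the same empirical distribution."""
        rank = np.argsort(np.argsort(x, axis=0), axis=0)
        mean_sorted = np.sort(x, axis=0).mean(axis=1)
        return mean_sorted[rank]

    def mean_abs_corr(x):
        c = np.corrcoef(x)                     # gene-by-gene correlation matrix
        return np.abs(c[np.triu_indices_from(c, k=1)]).mean()

    print("raw:", mean_abs_corr(raw))          # inflated (~0.5) by the bias
    print("normalized:", mean_abs_corr(quantile_normalize(raw)))  # near zero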

16.
The present study deals with kinetic modeling of enzyme-catalyzed reactions by integral progress curve analysis, and shows how to apply this technique to the kinetic resolution of enantiomers. It is shown that kinetic parameters for both enantiomers and the enantioselectivity of the enzyme may be obtained from the progress curve measurement of a racemate only. A parameter estimation procedure has been established, and it is shown that the covariance matrix of the obtained parameters is a useful statistical tool in the selection and verification of the model structure. Standard deviations calculated from this matrix have shown that progress curve analysis yields parameter values with high accuracy. Potential sources of systematic errors in (multiple) progress curve analysis are addressed in this article. Amongst these, the following needed to be dealt with: (1) the true initial substrate concentrations were obtained from the final amount of product experimentally measured (mass balancing); (2) systematic errors in the initial enzyme concentration were corrected by incorporating this variable in the fitting procedure as an extra parameter per curve; and (3) enzyme inactivation is included in the model and a first-order inactivation constant is determined. Experimental verification was carried out by continuous monitoring of the hydrolysis of ethyl 2-chloropropionate by carboxylesterase NP and the alpha-chymotrypsin-catalyzed hydrolysis of benzoylalanine methyl ester in a pH-stat system. Kinetic parameter values were obtained with high accuracy, and model predictions were in good agreement with independent measurements of enantiomeric excess values or literature data. (c) 1994 John Wiley & Sons, Inc.
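A reduced sketch of integral progress curve fitting (a single-substrate system with first-order enzyme inactivation; the two-enantiomer model adds a second, competing Michaelis-Menten term; all numbers are invented):

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import curve_fit

    def progress(t, Vm, Km, kd, S0):
        """Product vs time for Michaelis-Menten kinetics with first-order
        enzyme inactivation at rate kd."""
        def rhs(tt, S):
            return -Vm * np.exp(-kd * tt) * S / (Km + S)
        sol = solve_ivp(rhs, (0, t[-1]), [S0], t_eval=t, rtol=1e-8)
        return S0 - sol.y[0]                  # product formed

    t = np.linspace(0, 60, 40)                # min
    data = progress(t, 0.5, 2.0, 0.01, 10.0)
    data = data + np.random.default_rng(4).normal(0, 0.05, t.size)
    popt, pcov = curve_fit(lambda tt, Vm, Km, kd: progress(tt, Vm, Km, kd, 10.0),
                           t, data, p0=[0.3, 1.0, 0.0])
    print(popt)                               # estimates of Vm, Km, kd
    print(np.sqrt(np.diag(pcov)))             # SDs from the covariance matrix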

17.
G Torres, C Rivier. Life Sciences 1992, 51(13): 1041-1048
The role of multiple intravenous (iv) injections of cocaine on the rat hypothalamic-pituitary-adrenal (HPA) axis was examined using four different temporal regimens of drug exposure. In intact rats, cocaine (5 mg/kg) consistently stimulated the secretion of adrenocorticotropin hormone (ACTH) and corticosterone over a 6 hr interval regimen. In all experimental groups, administration of the vehicle alone failed to measurably alter the secretion of these hormones. When rats were exposed to the drug over a 4 hr interval regimen, a modest attenuation of ACTH, but not corticosterone, secretion was observed following the third and last cocaine injection. To test whether the attenuation of ACTH secretion in response to cocaine administration was caused by corticosterone-mediated negative feedback, the responses of intact and adrenalectomized (ADX) rats over 2 hr and 1 hr interval regimens were compared. In intact rats, both drug interval regimens resulted in a significant attenuation of ACTH secretion following the second and third injections of the drug. ADX rats, on the other hand, exhibited significant increases in ACTH levels following either interval regimen, though we observed a modest blunting of pituitary responsiveness to the 1 hr regimen. From these results we conclude that in intact rats the activity of the HPA axis is significantly attenuated in response to multiple acute cocaine injections, and that this decreased response may be at least in part caused by a negative corticoid feedback mechanism.

18.
This paper proposes a variation of the instantaneous helical pivot technique for locating centers of rotation. The point of optimal kinematic error (POKE), which minimizes the velocity at the center of rotation, may be obtained by just adding a weighting factor equal to the square of angular velocity in Woltring's equation of the pivot of instantaneous helical axes (PIHA). Calculations are simplified with respect to the original method, since it is not necessary to make explicit calculations of the helical axis, and the effect of accidental errors is reduced. The improved performance of this method was validated by simulations based on a functional calibration task for the gleno-humeral joint center. Noisy data caused a systematic dislocation of the calculated center of rotation towards the center of the arm marker cluster. This error in PIHA could even exceed the effect of soft tissue artifacts associated with small and medium deformations, but it was successfully reduced by the POKE estimation.
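One way to read the POKE criterion as a computation (a sketch under the assumption that each frame's angular velocity, a reference point, and its velocity have already been derived from the marker data; this is not the paper's code) is as a weighted linear least-squares problem for the fixed point:

    import numpy as np

    def skew(w):
        return np.array([[0, -w[2], w[1]],
                         [w[2], 0, -w[0]],
                         [-w[1], w[0], 0.0]])

    def poke(omegas, v_refs, r_refs):
        """Point of optimal kinematic error: the point c minimising
        sum_i |omega_i|^2 * |v_i + omega_i x (c - r_i)|^2. Each frame
        contributes one 3x3 block, weighted by |omega_i|, to a linear
        least-squares system in c."""
        A, b = [], []
        for w, v, r in zip(omegas, v_refs, r_refs):
            Wx = skew(w)
            s = np.linalg.norm(w)          # row weight: sqrt(|w|^2)
            A.append(s * Wx)
            b.append(s * (Wx @ r - v))
        A = np.vstack(A)
        b = np.concatenate(b)
        c, *_ = np.linalg.lstsq(A, b, rcond=None)
        return c

    # Synthetic check: pure rotations about c_true are recovered exactly.
    rng = np.random.default_rng(7)
    c_true = np.array([0.1, 0.2, 0.3])
    omegas = rng.normal(size=(50, 3))
    r_refs = rng.normal(size=(50, 3))
    v_refs = np.cross(omegas, r_refs - c_true)
    print(poke(omegas, v_refs, r_refs))    # ~c_true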

19.
A common question in movement studies is how the results should be interpreted with respect to systematic and random errors. In this study, simulations are made in order to see how a rigid body's orientation in space (i.e. the helical angle between two orientations) is affected by (1) a systematic error added to a single marker, and (2) a combination of this systematic error and Gaussian white noise. The orientation was estimated after adding a systematic error to one marker within the rigid body; this procedure was then repeated with Gaussian noise added to each marker. In conclusion, the results show that the systematic error's effect on the estimated orientation depends on the number of markers in the rigid body and on the direction in which the systematic error is added. The systematic error has no effect if it is added along the radial axis (i.e. the line connecting the centre of mass and the affected marker).
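The radial-axis result can be checked with a small simulation (generic SVD-based orientation estimate and invented marker geometry):

    import numpy as np

    def kabsch(P, Q):
        """Least-squares rotation mapping centred marker set P onto Q (SVD)."""
        H = (P - P.mean(0)).T @ (Q - Q.mean(0))
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

    rng = np.random.default_rng(5)
    markers = rng.normal(0.0, 0.1, (4, 3))       # a 4-marker cluster, metres
    com = markers.mean(0)
    radial = markers[0] - com
    radial /= np.linalg.norm(radial)
    perp = np.cross(radial, [0.0, 0.0, 1.0])
    perp /= np.linalg.norm(perp)

    for label, e in (("radial", radial), ("perpendicular", perp)):
        bad = markers.copy()
        bad[0] += 0.005 * e                      # 5 mm systematic error
        R = kabsch(markers, bad)
        helical = np.degrees(np.arccos(np.clip((np.trace(R) - 1) / 2, -1, 1)))
        print(label, helical)   # ~0 deg along the radial axis, >0 otherwise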

20.
Accurate estimation of the in vivo locations of skeletal landmarks plays an integral role in several biomechanical research techniques. Because of rounding errors caused by instruments or skin movement, the data obtained through cinematography are usually not accurate and give rise to a distance matrix which, because of the data errors, may not be Euclidean. The aim of this paper is to find the best Euclidean distance matrix (EDM) that approximates the measured distance matrix and, from it, an accurate estimate of the locations of the skeletal landmarks. A useful scheme for parametrizing an orthogonal matrix is also described.
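One standard route to a nearby Euclidean distance matrix is classical multidimensional scaling, sketched below; the paper's exact best-EDM procedure may differ, and the geometry here is invented:

    import numpy as np

    def embed_from_distances(D, dim=3):
        """Classical MDS: double-centre the squared distances and keep the
        top non-negative eigenvalues; negative eigenvalues are exactly the
        non-Euclidean part contributed by measurement error."""
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        B = -0.5 * J @ (D ** 2) @ J             # Gram matrix of centred points
        vals, vecs = np.linalg.eigh(B)
        idx = np.argsort(vals)[::-1][:dim]
        vals = np.clip(vals[idx], 0.0, None)
        return vecs[:, idx] * np.sqrt(vals)     # rows = landmark estimates

    # Example: perturb the distances among four landmarks and re-embed.
    pts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
    D = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
    D_noisy = D + np.random.default_rng(6).normal(0, 0.01, D.shape)
    D_noisy = (D_noisy + D_noisy.T) / 2
    np.fill_diagonal(D_noisy, 0.0)
    X = embed_from_distances(D_noisy)           # distances of X form an EDM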
