Similar Documents (20 results)
1.
The method of generalized least squares (GLS) is used to assess the variance function for isothermal titration calorimetry (ITC) data collected for the 1:1 complexation of Ba²⁺ with 18-crown-6 ether. In the GLS method, the least squares (LS) residuals from the data fit are themselves fitted to a variance function, with iterative adjustment of the weighting function in the data analysis to produce consistency. The data are treated in a pooled fashion, providing 321 fitted residuals from 35 data sets in the final analysis. Heteroscedasticity (nonconstant variance) is clearly indicated. Data error terms proportional to q_i and q_i/v are well defined statistically, where q_i is the heat from the ith injection of titrant and v is the injected volume. The statistical significance of the variance function parameters is confirmed through Monte Carlo calculations that mimic the actual data set. For the data in question, which fall mostly in the range q_i = 100-2000 μcal, the contributions to the data variance from the terms in q_i² typically exceed the background constant term for q_i > 300 μcal and v < 10 μL. Conversely, this means that in reactions with q_i much less than this, heteroscedasticity is not a significant problem. Accordingly, in such cases the standard unweighted fitting procedures provide reliable results for the key parameters, K and ΔH°, and their statistical errors. These results also support an important earlier finding: in most ITC work on 1:1 binding processes, the optimal number of injections is 7-10, a factor of 3 smaller than the current norm. For high-q reactions, where weighting is needed for optimal LS analysis, tips are given for using the weighting option in the commercial software commonly employed to process ITC data.
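The iterative GLS loop described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the heat model is a simple exponential stand-in for the full 1:1 binding isotherm, and the noise parameters (s0, alpha) are assumed values.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Stand-in for the injection heats q_i: a decaying exponential rather
# than the full 1:1 binding isotherm (illustrative only).
def heat_model(i, q0, k):
    return q0 * np.exp(-k * i)

inj = np.arange(1, 36)                            # 35 injections
true_q = heat_model(inj, 2000.0, 0.1)
sd_true = np.sqrt(4.0**2 + (0.01 * true_q)**2)    # sigma^2 = s0^2 + (a*q)^2
q_obs = true_q + rng.normal(0.0, sd_true)

# Iterative GLS: alternate a weighted fit of the data with a fit of the
# squared residuals to the variance function until the weights settle.
w = np.ones_like(q_obs)                           # start unweighted
for _ in range(5):
    popt, _ = curve_fit(heat_model, inj, q_obs,
                        p0=[1500.0, 0.08], sigma=1.0 / np.sqrt(w))
    q_fit = heat_model(inj, *popt)
    resid2 = (q_obs - q_fit) ** 2
    vpar, _ = curve_fit(lambda q, s0, a: s0**2 + (a * q)**2,
                        q_fit, resid2, p0=[5.0, 0.02])
    w = 1.0 / np.maximum(vpar[0]**2 + (vpar[1] * q_fit)**2, 1e-9)

q0_hat, k_hat = popt
```

With a single simulated data set the variance-function parameters are poorly determined (the paper pools 321 residuals from 35 sets for exactly this reason), but the weighting loop itself settles in a few passes.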

2.
Fluorescence decay deconvolution analysis to fit a multiexponential function by the nonlinear least squares method requires numerical calculation of a convolution integral. A linear approximation of the successive data of the instrument response function is proposed for the computation of the convolution integral. Deconvolution analysis of simulated fluorescence data was carried out to show that the linear approximation method is generally better when one of the lifetimes is comparable to the time interval between data points.
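The difference between approximating the sampled instrument response as a staircase versus piecewise-linearly can be shown numerically. A sketch under assumed values (Gaussian IRF, lifetime comparable to the sampling step), not the paper's algorithm:

```python
import numpy as np

# Convolution integral F(t) = integral_0^t R(s) exp(-(t-s)/tau) ds,
# comparing a staircase (zero-order) approximation of the sampled IRF
# with a linear (trapezoidal) approximation between samples.
tau = 2.0                      # lifetime comparable to the data spacing
dt = 1.0                       # sampling interval (illustrative)

def irf(x):
    return np.exp(-0.5 * ((x - 5.0) / 1.5) ** 2)   # assumed Gaussian IRF

def conv(tgrid, step, linear=True):
    out = np.empty(tgrid.size)
    for n, ti in enumerate(tgrid):
        s = np.arange(0.0, ti + 0.5 * step, step)
        g = irf(s) * np.exp(-(ti - s) / tau)
        if linear and s.size > 1:                  # trapezoidal rule
            out[n] = step * (g.sum() - 0.5 * (g[0] + g[-1]))
        else:                                      # left-rectangle rule
            out[n] = step * g[:-1].sum() if s.size > 1 else 0.0
    return out

t = np.arange(0.0, 30.0 + 0.5 * dt, dt)
reference = conv(t, dt / 200.0)                    # near-exact, fine grid
err_linear = np.max(np.abs(conv(t, dt) - reference))
err_staircase = np.max(np.abs(conv(t, dt, linear=False) - reference))
```

The linear version has error of order dt², versus order dt for the staircase, which matters most precisely when the lifetime is comparable to dt.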

3.
Hydrodynamic properties as well as structural dynamics of proteins can be investigated by the well-established experimental method of fluorescence anisotropy decay. Successful use of this method depends on determination of the correct kinetic model, the extent of cross-correlation between parameters in the fitting function, and differences between the timescales of the depolarizing motions and the fluorophore's fluorescence lifetime. We have tested the utility of an independently measured steady-state anisotropy value as a constraint during data analysis to reduce parameter cross-correlation and to increase the timescales over which anisotropy decay parameters can be recovered accurately for two calcium-binding proteins. Mutant rat F102W parvalbumin was used as a model system because its single tryptophan residue exhibits monoexponential fluorescence intensity and anisotropy decay kinetics. Cod parvalbumin, a protein with a single tryptophan residue that exhibits multiexponential fluorescence decay kinetics, was also examined as a more complex model. Anisotropy decays were measured for both proteins as a function of solution viscosity to vary hydrodynamic parameters. The use of the steady-state anisotropy as a constraint significantly improved the precision and accuracy of recovered parameters for both proteins, particularly for viscosities at which the protein's rotational correlation time was much longer than the fluorescence lifetime. Thus, basic hydrodynamic properties of larger biomolecules can now be determined with more precision and accuracy by fluorescence anisotropy decay.

4.
Conventional analyses of fluorescence lifetime measurements resolve the fluorescence decay profile in terms of discrete exponential components with distinct lifetimes. In complex, heterogeneous biological samples such as tissue, multi-exponential decay functions can appear to provide a better fit to fluorescence decay data than the assumption of a mono-exponential decay, but the assumption of multiple discrete components is essentially arbitrary and is often erroneous. Moreover, interactions, both between fluorophores and with their environment, can result in complex fluorescence decay profiles that represent a continuous distribution of lifetimes. Such continuous distributions have been reported for tryptophan, which is one of the main fluorophores in tissue. This situation is better represented by the stretched-exponential function (StrEF). In this work, we have applied, for the first time to our knowledge, the StrEF to time-domain whole-field fluorescence lifetime imaging (FLIM), yielding both excellent tissue contrast and goodness of fit using data from rat tissue. We note that for many biological samples for which there is no a priori knowledge of multiple discrete exponential fluorescence decay profiles, the StrEF is likely to provide a truer representation of the underlying fluorescence dynamics. Furthermore, fitting to a StrEF significantly decreases the required processing time compared with a multi-exponential component fit, and typically provides improved contrast and signal-to-noise ratio in the resulting FLIM images. In addition, the stretched-exponential decay model can provide a direct measure of the heterogeneity of the sample, and the resulting heterogeneity map can reveal subtle tissue differences that other models fail to show.
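Fitting a stretched-exponential decay is straightforward with a standard nonlinear least-squares routine. A minimal sketch on synthetic data, ignoring IRF convolution and the whole-field imaging machinery (all parameter values are assumed):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def stretched_exp(t, a, tau, beta):
    """I(t) = a * exp(-(t/tau)**beta); beta < 1 reflects a continuous
    lifetime distribution, i.e. sample heterogeneity."""
    return a * np.exp(-((t / tau) ** beta))

t = np.linspace(0.05, 10.0, 200)                 # time bins, illustrative
true = stretched_exp(t, 1000.0, 2.0, 0.7)
counts = true + rng.normal(0.0, np.sqrt(true))   # Poisson-like noise

popt, _ = curve_fit(stretched_exp, t, counts, p0=[800.0, 1.0, 0.9])
a_hat, tau_hat, beta_hat = popt
```

The fitted beta is itself the heterogeneity measure mentioned above; a heterogeneity map is just beta fitted pixel by pixel.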

5.
The interpretation of fluorescence intensity decay times in terms of protein structure and dynamics depends on the accuracy and sensitivity of the methods used for data analysis. There are many methods available for the analysis of fluorescence decay data, but the justification for choosing any one of them is often unclear. In this paper we generalize the recently proposed Padé-Laplace method [45] to include deconvolution with respect to the instrument response function. In this form the method can be readily applied to the analysis of time-correlated single photon counting data. By extensive simulations we have shown that the Padé-Laplace method provides more accurate results than the standard least squares method with iterative reconvolution when the lifetimes are closely spaced. The application of the Padé-Laplace method to several experimental data sets yielded results consistent with those obtained by use of the least squares analysis.

6.
1. The normalization of biochemical data to weight them appropriately for parameter estimation is considered, with reference particularly to data from tracer kinetics and enzyme kinetics. If the data are in replicate, it is recommended that the sum of squared deviations for each experimental variable at each time or concentration point is divided by the local variance at that point. 2. If there is only one observation for each variable at each sampling point, normalization may still be required if the observations cover more than one order of magnitude, but there is no absolute criterion for judging the effect of the weighting that is produced. The goodness of fit that is produced by minimizing the weighted sum of squares of deviations must be judged subjectively. It is suggested that the goodness of fit may be regarded as satisfactory if the data points are distributed uniformly on either side of the fitted curve. A chi-square test may be used to decide whether the distribution is abnormal. The proportion of the residual variance associated with points on one or other side of the fitted curve may also be taken into account, because this gives an indication of the sensitivity of the residual variance to movement of the curve away from particular data points. These criteria for judging the effect of weighting are only valid if the model equation may reasonably be expected to apply to all the data points. 3. On this basis, normalizing by dividing the deviation for each data point by the experimental observation or by the equivalent value calculated by the model equation may both be shown to produce a consistent bias for numerically small observations, the former biasing the curve towards the smallest observations, the latter tending to produce a curve that is above the numerically smaller data points. 
It was found that dividing each deviation by the mean of the observed and calculated values appropriate to it produces a weighting that is fairly free from bias as judged by the criteria mentioned above. This normalization factor was tested on published data from both tracer kinetics and enzyme kinetics.
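The recommended normalization (dividing each deviation by the mean of the observed and calculated values) drops straight into a residual function for nonlinear least squares. A sketch with an assumed single-exponential tracer curve:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)

# Tracer-kinetics stand-in: an exponential washout spanning about two
# orders of magnitude, with roughly proportional error (assumed values).
t = np.linspace(0.0, 6.0, 25)
y_true = 100.0 * np.exp(-0.8 * t)
y_obs = y_true * (1.0 + rng.normal(0.0, 0.05, t.size))

def resid(p):
    y_calc = p[0] * np.exp(-p[1] * t)
    # Divide each deviation by the mean of the observed and calculated
    # values: the low-bias weighting recommended in the text.
    return (y_obs - y_calc) / (0.5 * (y_obs + y_calc))

fit = least_squares(resid, x0=[50.0, 0.5])
A_hat, k_hat = fit.x
```

Dividing by the observation alone would pull the curve toward the smallest observations; dividing by the calculated value alone would push it above them; the mean splits the difference.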

7.
Klevanik, A. V. Biophysics 2018, 63(6): 909-914
The least squares method is commonly used to find the parameters and sum of exponentials that form molecular fluorescence decay kinetics. However, the method usually fails to lead to a...

8.
The quality of fit of sedimentation velocity data is critical to judge the veracity of the sedimentation model and accuracy of the derived macromolecular parameters. Absolute statistical measures are usually complicated by the presence of characteristic systematic errors and run-to-run variation in the stochastic noise of data acquisition. We present a new graphical approach to visualize systematic deviations between data and model in the form of a histogram of residuals. In comparison with the ideally expected Gaussian distribution, it can provide a robust measure of fit quality and be used to flag poor models.
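The residual-histogram diagnostic is easy to reproduce: bin the residuals of fit and compare the result with the Gaussian having the same mean and standard deviation. A sketch with synthetic residuals (the sinusoidal systematic error and all magnitudes are assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(3)

n = 5000
good = rng.normal(0.0, 0.01, n)                        # pure acquisition noise
bad = good + 0.02 * np.sin(np.linspace(0.0, 40.0, n))  # plus systematic error

def histogram_vs_gaussian(res, nbins=41):
    """Maximum deviation between the residual histogram (as a density)
    and the matched Gaussian, scaled by the s.d. to be unit-free."""
    dens, edges = np.histogram(res, bins=nbins, density=True)
    mid = 0.5 * (edges[:-1] + edges[1:])
    mu, sd = res.mean(), res.std()
    gauss = np.exp(-0.5 * ((mid - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    return sd * np.max(np.abs(dens - gauss))

dev_good = histogram_vs_gaussian(good)
dev_bad = histogram_vs_gaussian(bad)
```

A well-behaved fit leaves a near-Gaussian histogram; a systematic misfit flattens or splits it, and the deviation measure flags this even when run-to-run noise levels differ.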

9.
Experimental data from continuous enzyme assays or protein folding experiments often contain hundreds, or even thousands, of densely spaced data points. When the sampling interval is extremely short, the experimental data points might not be statistically independent. The resulting neighborhood correlation invalidates important theoretical assumptions of nonlinear regression analysis. As a consequence, certain goodness-of-fit criteria, such as the runs-of-signs test and the autocorrelation function, might indicate a systematic lack of fit even if the experiment does agree very well with the underlying theoretical model. A solution to this problem is to analyze only a subset of the residuals of fit, such that any excessive neighborhood correlation is eliminated. Substrate kinetics of the HIV protease and the unfolding kinetics of UMP/CMP kinase, a globular protein from Dictyostelium discoideum, serve as two illustrative examples. A suitable data-reduction algorithm has been incorporated into the software DYNAFIT [P. Kuzmič, Anal. Biochem. 237 (1996) 260-273], freely available to all academic researchers from http://www.biokin.com.
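The data-reduction idea (keep only every k-th residual so the retained subset is effectively independent) can be demonstrated with serially correlated noise. This is a sketch of the principle, not the DYNAFIT algorithm; the AR(1) coefficient and thinning factor are assumed values.

```python
import numpy as np

rng = np.random.default_rng(4)

# Densely sampled progress-curve residuals with strong neighborhood
# correlation, modelled here as an AR(1) process.
n = 8000
eps = np.empty(n)
eps[0] = rng.normal()
for i in range(1, n):
    eps[i] = 0.9 * eps[i - 1] + rng.normal()

def lag1_autocorr(r):
    r = r - r.mean()
    return float(np.dot(r[:-1], r[1:]) / np.dot(r, r))

k = 40                       # keep every 40th residual only
thinned = eps[::k]
rho_full = lag1_autocorr(eps)
rho_thin = lag1_autocorr(thinned)
```

After thinning, runs-of-signs and autocorrelation tests on the retained residuals again behave as their theory assumes.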

10.
Steady-state and time-resolved fluorescence properties of the single tyrosyl residue in oxytocin and two oxytocin derivatives at pH 3 are presented. The decay kinetics of the tyrosyl residue are complex for each compound. By use of a linked-function analysis, the fluorescence kinetics can be explained by a ground-state rotamer model. The linked function assumes that the preexponential weighting factors (amplitudes) of the fluorescence decay constants have the same relative relationship as the ¹H NMR-determined phenol side-chain rotamer populations. According to this model, the static quenching of the oxytocin fluorescence can be attributed to an interaction between one specific rotamer population of the tyrosine ring and the internal disulfide bridge.

11.
Fluorescence lifetime imaging (FLIM) is widely applied to obtain quantitative information from fluorescence signals, particularly using Förster Resonant Energy Transfer (FRET) measurements to map, for example, protein-protein interactions. Extracting FRET efficiencies or population fractions typically entails fitting data to complex fluorescence decay models but such experiments are frequently photon constrained, particularly for live cell or in vivo imaging, and this leads to unacceptable errors when analysing data on a pixel-wise basis. Lifetimes and population fractions may, however, be more robustly extracted using global analysis to simultaneously fit the fluorescence decay data of all pixels in an image or dataset to a multi-exponential model under the assumption that the lifetime components are invariant across the image (dataset). This approach is often considered to be prohibitively slow and/or computationally expensive but we present here a computationally efficient global analysis algorithm for the analysis of time-correlated single photon counting (TCSPC) or time-gated FLIM data based on variable projection. It makes efficient use of both computer processor and memory resources, requiring less than a minute to analyse time series and multiwell plate datasets with hundreds of FLIM images on standard personal computers. This lifetime analysis takes account of repetitive excitation, including fluorescence photons excited by earlier pulses contributing to the fit, and is able to accommodate time-varying backgrounds and instrument response functions. 
We demonstrate that this global approach allows us to readily fit time-resolved fluorescence data to complex models including a four-exponential model of a FRET system, for which the FRET efficiencies of the two species of a bi-exponential donor are linked, and polarisation-resolved lifetime data, where a fluorescence intensity and bi-exponential anisotropy decay model is applied to the analysis of live cell homo-FRET data. A software package implementing this algorithm, FLIMfit, is available under an open source licence through the Open Microscopy Environment.
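The core of the variable-projection trick is that, for any trial set of shared lifetimes, the per-pixel amplitudes solve a linear least-squares problem, so only the lifetimes need nonlinear optimisation. A toy sketch (no IRF, no repetitive-excitation correction, 100 "pixels", assumed values), not the FLIMfit implementation:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)

# Lifetimes tau are shared across all pixels; amplitudes vary per pixel.
t = np.linspace(0.0, 10.0, 64)                     # time bins, illustrative
tau_true = np.array([0.8, 3.0])
amps_true = rng.uniform(50.0, 200.0, (100, 2))     # per-pixel amplitudes
data = amps_true @ np.exp(-t[None, :] / tau_true[:, None])
data += rng.normal(0.0, 1.0, data.shape)

def cost(log_tau):
    # For fixed lifetimes, all amplitudes come from one linear solve.
    basis = np.exp(-t[None, :] / np.exp(log_tau)[:, None])   # (2, 64)
    amps, *_ = np.linalg.lstsq(basis.T, data.T, rcond=None)  # (2, 100)
    return np.sum((data - amps.T @ basis) ** 2)

res = minimize(cost, x0=np.log([0.5, 5.0]), method="Nelder-Mead")
tau_hat = np.sort(np.exp(res.x))
```

Because the inner solve is linear, adding pixels barely increases the nonlinear search dimension, which is why global analysis of hundreds of images stays tractable.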

12.
The time-resolved fluorescence properties of phenol and straight-chained phenol derivatives and of tyrosine and simple tyrosine derivatives are reported for the pH range below neutrality. Phenol and straight-chained phenol derivatives exhibit single exponential fluorescence decay kinetics in this pH range unless they have a titratable carboxyl group. If a carboxyl group is present, the data follow a two-state, ground-state, Henderson-Hasselbalch relationship. Tyrosine and its derivatives with a free carboxyl group display complex fluorescence decay behavior as a function of pH. The complex kinetics cannot be fully explained by titration of a carboxyl group; other ground-state processes are evident, especially since tyrosine analogues with a blocked carboxyl group are also multiexponential. The fluorescence kinetics can be explained by a ground-state rotamer model. Comparison of the preexponential weighting factors (amplitudes) of the fluorescence decay constants with the ¹H NMR-determined phenol side-chain rotamer populations shows that tyrosine derivatives with a blocked or protonated carboxyl group have at least one rotamer exchanging more slowly than the radiative and nonradiative rates; that the fluorescence data are consistent with a slow-exchange model for all three rotamers; that the shortest fluorescence decay constant is associated with a rotamer where the carbonyl group can contact the phenol ring; and that in the tyrosine zwitterion, either rotamer interconversion is fast and an average lifetime is seen, or rotamer interconversion is slow and the individual fluorescence decay constants are similar.

13.
A statistical analysis of a weighted averaging procedure for the estimation of small signals buried in noise (Hoke et al. 1984a) is given. The weighting factor used by this method is inversely proportional to the variance estimated for the noise. It is shown that, compared to conventional averaging, weighted averaging can improve the signal-to-noise ratio substantially if the variance of the noise changes as a function of time. On the other hand, uncritical application of the method involves the danger that the signal amplitude is underestimated. How serious this effect is depends on the number of degrees of freedom available for the estimation of the weighting factor. The effect can be neglected if this number is sufficiently increased by means of appropriate preprocessing.
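The weighting scheme (each sweep weighted inversely to its estimated noise variance) can be compared with conventional averaging directly. A sketch with an assumed evoked-response waveform and noise levels:

```python
import numpy as np

rng = np.random.default_rng(5)

t = np.linspace(0.0, 1.0, 200)
signal = np.sin(2 * np.pi * 3 * t) * np.exp(-3 * t)   # assumed waveform

sweeps = []
for _ in range(100):
    sd = rng.uniform(0.5, 5.0)        # noise variance changes over time
    sweeps.append(signal + rng.normal(0.0, sd, t.size))
sweeps = np.array(sweeps)

# Weight each sweep by the inverse of its *estimated* variance.  The
# estimate is contaminated by the signal itself, which is the source of
# the amplitude-underestimation danger discussed in the text.
w = 1.0 / sweeps.var(axis=1)
weighted = np.average(sweeps, axis=0, weights=w)
plain = sweeps.mean(axis=0)

def rmse(x):
    return float(np.sqrt(np.mean((x - signal) ** 2)))
```

The more degrees of freedom behind each variance estimate (for example, from a longer signal-free baseline), the smaller the underestimation bias.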

14.
The unidirectional transport of metabolic substrates from blood to brain may be defined in terms of Michaelis-Menten saturable (K_m, V_max) and non-saturable (K_d) components of influx. Various computation procedures have previously been reported to estimate the kinetic parameters when an intracarotid injection technique is used. Transformations of the influx data that allow linear plots for obtaining estimates were compared with estimates obtained directly from a best fit on a least-squares criterion for both experimental and simulated data. Large discrepancies were apparent between the various estimates of the kinetic parameters when equal weight was given to transformed data. For pyruvate (21-day-old rats), K_m values varied between 1.02 and 6.25 mM and V_max varied between 0.68 and 2.30 μmol g⁻¹ min⁻¹. The estimates were almost equivalent when the pyruvate data were re-analysed using a weighting scheme based on the finding that the absolute value of the S.D. of influx increased in proportion to influx. It is recommended that estimates of kinetic parameters be obtained by an iterative, non-linear least squares method to fit appropriately weighted data directly.
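The recommended procedure (an iterative nonlinear fit to appropriately weighted influx data) looks like the following sketch. All parameter values are illustrative, not the pyruvate results; the finding that the S.D. is proportional to influx motivates the 1/influx weighting.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(6)

# Influx model: saturable Michaelis-Menten term plus non-saturable term.
S = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0, 20.0])  # mM
Vmax, Km, Kd = 2.0, 1.5, 0.02                    # assumed true values
v_true = Vmax * S / (Km + S) + Kd * S
v_obs = v_true * (1.0 + rng.normal(0.0, 0.05, S.size))   # S.D. ~ influx

def resid(p):
    vm, km, kd = p
    v_calc = vm * S / (km + S) + kd * S
    return (v_obs - v_calc) / v_calc             # weight each point by 1/influx

fit = least_squares(resid, x0=[1.0, 1.0, 0.01], bounds=(1e-9, np.inf))
Vmax_hat, Km_hat, Kd_hat = fit.x
```

Fitting linearized transforms with equal weights is what produces the large discrepancies the abstract reports; the direct weighted fit avoids them.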

15.
Brooks, S. P. BioTechniques 1992, 13(6): 906-911
A simple computer program that calculates the kinetic parameters of enzyme reactions is described. Parameters are determined by nonlinear, least-squares regression using either Marquardt-Levenberg or Gauss-Newton algorithms to find the minimum sum of squares. Three types of enzyme reactions can be analyzed: single substrate reactions (Michaelis-Menten and sigmoidal kinetics), enzyme activation at a fixed substrate value or enzyme inhibition at a fixed substrate value. The user can monitor goodness of fit through nonparametric statistical tests (performed automatically by the computer) and through visual examination of the pattern of residuals. The program is unique in providing equations for activator and inhibition analysis as well as in enabling the user to fix some of the parameters before regression analysis. The simplicity of the program makes it extremely useful for quickly determining kinetic parameters during the data-gathering process.

16.
1. A general purpose digital computer program is described for application to biological experiments that require a non-linear regression analysis. The mathematical function, or model, to be fitted to a given set of experimental data is written as a section within the program. Given initial estimates for the parameters of the function, the program uses an iterative procedure to adjust the parameters until the sum of squares of residuals has converged to a minimum.

17.
Stocks of commercial fish are often modelled using sampling data of various types, of unknown precision, and from various sources assumed independent. We want each set to contribute to estimates of the parameters in relation to its precision and goodness of fit with the model. Iterative re-weighting of the sets is proposed for linear models until the weight of each set is found to be proportional to (relative weighting) or equal to (absolute weighting) the set-specific residual variances resulting from a generalised least squares fit. Formulae for the residual variances are put forward involving fractional allocation of degrees of freedom depending on the numbers of independent observations in each set, the numbers of sets contributing to the estimate of each parameter, and the number of weights estimated. To illustrate the procedure, numbers of the 1984 year-class of North Sea cod (a) landed commercially each year, and (b) caught per unit of trawling time by an annual groundfish survey are modelled as a function of age to estimate total mortality, Z, the relative catching power of the two fishing methods, and the relative precision of the two sets of observations as indices of stock abundance. It was found that the survey abundance indices displayed residual variance about 29 times higher than that of the annual landings.
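The iterative re-weighting of independent data sets can be sketched for a linear catch-curve model, log(index) = log(q_j) - Z*age, with one catchability intercept per set and a shared mortality slope. Everything numeric here is illustrative, not the North Sea cod data, and the fractional degrees-of-freedom bookkeeping in the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(7)

age = np.arange(1.0, 9.0)
Z = 0.9                                    # assumed total mortality
y1 = 10.0 - Z * age + rng.normal(0.0, 0.05, age.size)  # precise landings set
y2 = 7.0 - Z * age + rng.normal(0.0, 0.40, age.size)   # noisy survey set

x = np.concatenate([age, age])
y = np.concatenate([y1, y2])
setid = np.repeat([0, 1], age.size)
# Design matrix: one intercept per set, shared slope -Z.
A = np.column_stack([setid == 0, setid == 1, -x]).astype(float)

w = np.ones_like(y)                        # start with equal weights
for _ in range(10):
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    r = y - A @ coef
    for j in (0, 1):                       # weight = 1 / set residual variance
        m = setid == j
        w[m] = 1.0 / max(float(r[m].var()), 1e-12)

Z_hat = coef[2]
var_ratio = float(r[setid == 1].var() / r[setid == 0].var())
```

The converged var_ratio plays the role of the paper's relative precision of the two abundance indices (29-fold for the cod data); here it simply recovers the ratio of the simulated noise variances.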

18.
Kügler, P. PLoS ONE 2012, 7(8): e43001
The inference of reaction rate parameters in biochemical network models from time series concentration data is a central task in computational systems biology. Under the assumption of well-mixed conditions the network dynamics are typically described by the chemical master equation, the Fokker-Planck equation, the linear noise approximation or the macroscopic rate equation. The inverse problem of estimating the parameters of the underlying network model can be approached in deterministic and stochastic ways, and available methods often compare individual or mean concentration traces obtained from experiments with theoretical model predictions when maximizing likelihoods, minimizing regularized least squares functionals, approximating posterior distributions or sequentially processing the data. In this article we assume that the biological reaction network can be observed at least partially and repeatedly over time such that sample moments of species molecule numbers for various time points can be calculated from the data. Based on the chemical master equation we furthermore derive closed systems of parameter dependent nonlinear ordinary differential equations that predict the time evolution of the statistical moments. For inferring the reaction rate parameters we suggest not only comparing the sample mean with the theoretical mean prediction but also taking the residuals of higher order moments explicitly into account. Cost functions that involve residuals of higher order moments may form landscapes in the parameter space that have more pronounced curvatures at the minimizer and hence may weaken or even overcome parameter sloppiness and uncertainty. As a consequence both deterministic and stochastic parameter inference algorithms may be improved with respect to accuracy and efficiency.
We demonstrate the potential of moment fitting for parameter inference by means of illustrative stochastic biological models from the literature and address topics for future research.

19.
A sensitivity analysis based on weighted least-squares regression is presented to evaluate alternative methods for fitting lumped-parameter models to respiratory impedance data. The goal is to maintain parameter accuracy simultaneously with practical experiment design. The analysis focuses on predicting parameter uncertainties using a linearized approximation for joint confidence regions. Applications are given for four-element parallel and viscoelastic models fitted to 0.125- to 4-Hz data and for a six-element model with separate tissue and airway properties fitted to input and transfer impedance data from 2 to 64 Hz. The form of the criterion function was evaluated by comparing parameter uncertainties when data are fit as magnitude and phase, dynamic resistance and compliance, or real and imaginary parts of input impedance. With the proper choice of weighting, all three criterion variables can be made comparable. For the six-element model, parameter uncertainties were predicted when both input impedance and transfer impedance are acquired and fit simultaneously. A fit to both data sets from 4 to 64 Hz could reduce parameter estimate uncertainties considerably below those achievable by fitting either alone. For the four-element models, the use of an independent, but noisy, measure of static compliance was assessed as a constraint on the model parameters. This may allow acceptable parameter uncertainties for a minimum frequency of 0.275-0.375 Hz rather than 0.125 Hz, reducing the required breath-holding period from 16 s to 5.33-8 s. These results are approximations, and the impact of using the linearized approximation for the confidence regions is discussed.

20.
Recently it has become possible to measure fluorescence phase-shift and modulation data over a wide range of modulation frequencies. In this paper we describe the analysis of these data by the method of nonlinear least squares to determine the values of the lifetimes and fractional intensities for a mixture of exponentially decaying fluorophores. Analyzing simulated data allowed us to determine the experimental factors that are most critical for successfully resolving the emissions from mixtures of fluorophores. The most critical factors are the accuracy of the experimental data, the relative difference of the individual decay times, and the inclusion of data measured at multiple emission wavelengths. After measuring at eight widely spaced modulation frequencies, additional measurements yielded only a modest increase in resolution. In particular, the uncertainty in the parameters decreased approximately as the reciprocal of the square root of the number of modulation frequencies. Our simulations showed that with presently available precision and data for one emission bandpass, two decay times could be accurately determined if their ratio was greater than or equal to 1.4. Three exponential decays could also be resolved, but only if the range of the lifetimes was fivefold or greater. To reliably determine closely spaced decay times, the data were measured at multiple emission wavelengths so that the fractional intensities of the components could be varied. Also, independent knowledge of any of the parameters substantially increased the accuracy with which the remaining parameters could be determined. In the subsequent paper we present experimental results that broadly confirm the predicted resolving potential of variable-frequency phase-modulation fluorometry.
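For a single-exponential decay, phase and modulation at each frequency are simple closed-form functions of the lifetime, and each can be inverted to an apparent lifetime; for mixtures the two apparent lifetimes disagree, which is what multi-frequency fitting exploits. A worked sketch (lifetime and frequencies are illustrative):

```python
import numpy as np

tau = 4e-9                                      # 4 ns lifetime (assumed)
freqs = np.array([2e6, 10e6, 50e6, 200e6])      # modulation frequencies, Hz
w = 2 * np.pi * freqs

# Single-exponential relations: tan(phi) = w*tau, m = 1/sqrt(1+(w*tau)^2).
phi = np.arctan(w * tau)
m = 1.0 / np.sqrt(1.0 + (w * tau) ** 2)

# Invert each observable back to an apparent lifetime.
tau_phase = np.tan(phi) / w
tau_mod = np.sqrt(1.0 / m**2 - 1.0) / w
```

For a heterogeneous emitter the apparent phase lifetime falls below the apparent modulation lifetime at every frequency, and the nonlinear least-squares analysis described above fits the full frequency sweep instead of inverting point by point.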
