Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
Moment index displacement automatically corrects a number of significant nonrandom instrumental errors in fluorescence time-decay measurements. Three-component data, obtained by measuring the fluorescence decay of three different species mixed in the same solution, were used as a test sample. It was shown, as predicted by theory, that moment index displacement corrects three nonrandom instrumental errors: (1) the presence of scatter in the data; (2) time origin shifts between lamp and fluorescence data; and (3) lamp drift, or time-dependent changes in the shape of the excitation curve. The data clearly show that the use of the method of moments with moment index displacement to analyze fluorescence decay data is not a curve-fitting procedure. This procedure will accurately obtain decay parameters for multiple-exponential decays from certain badly distorted data, yielding a calculated curve very different from the actual data.

2.
A method of moments is presented for the analysis of convoluted fluorescence decay data when the impulse response function is given by f(t) = α exp(−At − Bt^(1/2)). Examples of this method are given using both simulated and measured fluorescence decays. It is also shown that this method, used with moment index displacements, will correct for light-scatter leakage, zero-point time shifts, and slow lamp drift.

3.
The nonparametric analysis of the stathmokinetic experiment presented in this paper is an extension of procedures by Jagers and Staudte. The method allows one to estimate, under very general assumptions, the first two moments of the residence time in successive cell cycle phases. Approximate formulae for the mean square errors of the estimates are derived. Applications include experimental stathmokinetic data for various cell lines, both previously analyzed and new. Comparisons show that the nonparametric method is very accurate whenever it can be applied. Results of the analysis of the stathmokinetic data are also discussed from the viewpoint of the variability of the cell cycle generation time.

4.
Wang CY. Biometrics 2000;56(1):106-112
Consider the problem of estimating the correlation between two nutrient measurements, such as the percent energy from fat obtained from a food frequency questionnaire (FFQ) and that from repeated food records or 24-hour recalls. Under a classical additive model for repeated food records, it is known that there is an attenuation effect on the correlation estimation if the sample average of repeated food records for each subject is used to estimate the underlying long-term average. This paper considers the case in which the selection probability of a subject for participation in the calibration study, in which repeated food records are measured, depends on the corresponding FFQ value, and the repeated longitudinal measurement errors have an autoregressive structure. This paper investigates a normality-based estimator and compares it with a simple method of moments. Both methods are consistent if the first two moments of nutrient measurements exist. Furthermore, joint estimating equations are applied to estimate the correlation coefficient and related nuisance parameters simultaneously. This approach provides a simple sandwich formula for the covariance estimation of the estimator. Finite sample performance is examined via a simulation study, and the proposed weighted normality-based estimator performs well under various distributional assumptions. The methods are applied to real data from a dietary assessment study.

5.
We present a method for characterizing the free-energy and affinity distributions of a heterogeneous population of molecules interacting with a homogeneous population of ligands, by deriving expressions for the moments as functions of experimental binding curve characteristics, and then constructing the distribution as an expansion over a Gaussian basis set. Although the method provides the complete distribution in principle, in practice it is restricted by experimental noise, inaccuracies in data fitting, and the severity with which the distribution deviates from a Gaussian. Limitations imposed by experimental inaccuracies and the requirement of an appropriate analytic function for data fitting were evaluated by Monte Carlo simulations of binding experiments with various degrees of error in the data. Thus a distribution was assumed, binding curves with random errors were generated, and the technique was applied in order to determine the extent to which the characteristics of the assumed distribution could be recovered. Typical inaccuracies in the first two moments fell within experimental error, whereas inaccuracies in the third and fourth were generally larger than standard deviations in the data. The accuracy of these higher-order moments was invariant for experimental errors ranging from 2 to 10% and may thus be limited, within this range, primarily by the curve-fitting procedure. The other aspect of the problem, accurate inference of the distribution, is limited in part by inaccuracies in the moments but more importantly by the extent to which the distribution deviates from a Gaussian. The extensive statistical literature on the problem of inference enables the delineation of specific criteria for estimating the efficiency of construction, as well as for deciding whether certain features of the inferred distribution, such as bimodality, are artifacts of the procedure.
In spite of the limitations of the method, the results indicate that the mean and standard deviation are obtainable with greater accuracy than by a Sipsian analysis. This difference is particularly important when the distribution is narrow and width detection is beyond the sensitivity of the Sips plot. The method should be more accurate than the latter as an assay for homogeneity as well as for characterizing the moments, though equally easy to apply.

6.
A new method is described for estimating initial velocities of enzyme-catalysed reactions. It is simple to apply either graphically or numerically, and is particularly appropriate for experiments in which the initial straight part of the progress curve is very short or non-existent. Unlike some other methods of analysing progress curves, which are often invalidated by small errors in their defining assumptions, it requires no more knowledge of the system than is readily available, such as the extent of reaction at equilibrium, the rate of enzyme inactivation, or the nature of product inhibition.

7.
A new method is proposed for estimating the parameters of ball (spherical) joints and of hinge (revolute) joints with a fixed axis of rotation. The method requires no manual tuning of optimisation parameters and yields closed-form solutions. It is a least-squares solution that uses the whole 3D motion data set. Strict rigidity is not assumed, only that the markers maintain a constant distance from the centre or axis of rotation. The method is compared with other methods based on similar assumptions under random measurement errors, systematic skin movements, and skin movements combined with random measurement noise. Simulation results indicate that the new method is superior in terms of its closed-form solution, consistency, and the absence of manual parameter adjustment. The method can also be adapted to joints with translational movements.
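The constant-distance assumption lends itself to a linear least-squares formulation. As a minimal illustrative sketch (not the authors' exact algorithm, which also covers fixed-axis hinge joints and combines multiple markers), the centre of a ball joint can be fitted algebraically from one marker's positions; all names here are illustrative:

```python
import numpy as np

def fit_sphere_center(points):
    """Algebraic least-squares sphere fit.

    Each marker position p satisfies |p - c|^2 = r^2, i.e.
    2 p.c + (r^2 - |c|^2) = |p|^2, which is linear in the
    unknowns (c, d) with d = r^2 - |c|^2.
    """
    P = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])  # design matrix [2p, 1]
    b = np.sum(P**2, axis=1)                        # right-hand side |p|^2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c, d = sol[:3], sol[3]
    r = np.sqrt(d + c @ c)                          # recover the radius
    return c, r
```

Because the model is linear in (c, d), the whole trajectory is used in a single closed-form solve, with no iteration or starting guess.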

8.
The Analysis of Fluorescence Decay by a Method of Moments
The fluorescence decay of the excited state of most biopolymers, and biopolymer conjugates and complexes, is not, in general, a simple exponential. The method of moments is used to establish a means of analyzing such multi-exponential decays. The method is tested by the use of computer simulated data, assuming that the limiting error is determined by noise generated by a pseudorandom number generator. Multi-exponential systems with relatively closely spaced decay constants may be successfully analyzed. The analyses show the requirements, in terms of precision, that data must meet. The results may be used both as an aid in the design of equipment and in the analysis of data subsequently obtained.
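As an illustrative sketch of the moment idea for an unconvolved two-component decay (the paper's full procedure also handles convolution with the excitation pulse and moment index displacement; all names here are hypothetical): with G_k = m_k/k!, the reduced moments satisfy a Prony-type recurrence whose characteristic roots are the decay times.

```python
import math
import numpy as np

def _trapz(y, t):
    """Trapezoidal integration (kept local to avoid NumPy version issues)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

def moments_two_exp(t, f):
    """Recover (alpha_i, tau_i) of f(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2)
    from the moments m_k = integral of t^k f(t) dt, k = 0..3.

    With G_k = m_k / k!, one has G_k = sum_i s_i tau_i^k, s_i = a_i*tau_i,
    so the tau_i are the roots of x^2 + c1*x + c0 where the coefficients
    satisfy G_{k+2} + c1*G_{k+1} + c0*G_k = 0 for k = 0, 1.
    """
    m = np.array([_trapz(t**k * f, t) for k in range(4)])
    G = m / np.array([math.factorial(k) for k in range(4)], dtype=float)
    A = np.array([[G[0], G[1]],
                  [G[1], G[2]]])
    c0, c1 = np.linalg.solve(A, -G[2:4])     # Prony-type step
    taus = np.sort(np.roots([1.0, c1, c0]))  # decay times, ascending
    V = np.array([[1.0, 1.0], taus])         # rows: taus^0, taus^1
    s = np.linalg.solve(V, G[:2])            # weights s_i = a_i * tau_i
    return s / taus, taus
```

With noiseless data the moments, and hence the parameters, are recovered essentially exactly; the paper's point is quantifying how noise in the data degrades this recovery.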

9.
A Fourier method for the analysis of exponential decay curves.
A method based on the Fourier convolution theorem is developed for the analysis of data composed of random noise, plus an unknown constant "base line," plus a sum of (or an integral over a continuous spectrum of) exponential decay functions. The Fourier method's usual serious practical limitation of needing high accuracy data over a very wide range is eliminated by the introduction of convergence parameters and a Gaussian taper window. A computer program is described for the analysis of discrete spectra, where the data involves only a sum of exponentials. The program is completely automatic in that the only necessary inputs are the raw data (not necessarily in equal intervals of time); no potentially biased initial guesses concerning either the number or the values of the components are needed. The outputs include the number of components, the amplitudes and time constants together with their estimated errors, and a spectral plot of the solution. The limiting resolving power of the method is studied by analyzing a wide range of simulated two-, three-, and four-component data. The results seem to indicate that the method is applicable over a considerably wider range of conditions than nonlinear least squares or the method of moments.

10.
Flash spectroscopy of purple membrane.
Flash spectroscopy data were obtained for purple membrane fragments at pH 5, 7, and 9; at seven temperatures from 5 to 35°C; at the magic angle for actinic versus measuring beam polarizations; at fifteen wavelengths from 380 to 700 nm; and over about five decades of time, from 1 μs to completion of the photocycle. Signal-to-noise ratios are as high as 500. Systematic errors involving beam geometries, light scattering, absorption flattening, photoselection, temperature fluctuations, partial dark adaptation of the sample, unwanted actinic effects, and cooperativity were eliminated, compensated for, or shown to be irrelevant for the conclusions. Using nonlinear least squares techniques, all data at one temperature and one pH were fitted to sums of exponential decays, which is the form required if the system obeys conventional first-order kinetics. The rate constants obtained have well-behaved Arrhenius plots. Analysis of the residual errors of the fitting shows that seven exponentials are required to fit the data to the accuracy of the noise level.
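A minimal sketch of the core fitting step: fitting a sum of exponential decays plus a constant base line to a noisy trace by nonlinear least squares. This uses simulated data, not the purple-membrane measurements; `two_exp`, the parameter values, and the noise level are all illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp(t, a1, k1, a2, k2, c):
    """Sum of two exponential decays plus a constant base line."""
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t) + c

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 500)
true = (1.0, 2.0, 0.5, 0.3, 0.1)
y = two_exp(t, *true) + rng.normal(0.0, 0.002, t.size)  # high S/N data

# Levenberg-Marquardt fit from a rough starting guess.
popt, pcov = curve_fit(two_exp, t, y, p0=(1.0, 1.0, 1.0, 0.5, 0.0))
residuals = y - two_exp(t, *popt)
```

In the paper's analysis, the number of components is chosen by inspecting exactly these residuals: more exponentials are added until the residual errors are consistent with the noise level.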

11.
P Kügler. PLoS ONE 2012;7(8):e43001
The inference of reaction rate parameters in biochemical network models from time series concentration data is a central task in computational systems biology. Under the assumption of well mixed conditions the network dynamics are typically described by the chemical master equation, the Fokker-Planck equation, the linear noise approximation or the macroscopic rate equation. The inverse problem of estimating the parameters of the underlying network model can be approached in deterministic and stochastic ways, and available methods often compare individual or mean concentration traces obtained from experiments with theoretical model predictions when maximizing likelihoods, minimizing regularized least squares functionals, approximating posterior distributions or sequentially processing the data. In this article we assume that the biological reaction network can be observed at least partially and repeatedly over time, such that sample moments of species molecule numbers for various time points can be calculated from the data. Based on the chemical master equation we furthermore derive closed systems of parameter-dependent nonlinear ordinary differential equations that predict the time evolution of the statistical moments. For inferring the reaction rate parameters we suggest not only comparing the sample mean with the theoretical mean prediction but also explicitly taking the residuals of higher-order moments into account. Cost functions that involve residuals of higher-order moments may form landscapes in the parameter space that have more pronounced curvatures at the minimizer and hence may weaken or even overcome parameter sloppiness and uncertainty. As a consequence, both deterministic and stochastic parameter inference algorithms may be improved with respect to accuracy and efficiency.
We demonstrate the potential of moment fitting for parameter inference by means of illustrative stochastic biological models from the literature and address topics for future research.
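A toy sketch of the moment-fitting idea for the simplest birth-death network (production rate k, first-order degradation g), where the chemical master equation yields closed ODEs for the mean and variance. The grid search below stands in for the gradient-based optimizers one would use in practice, and all names and values are illustrative:

```python
import numpy as np

def moment_ode(k, g, t_grid):
    """Integrate the closed moment equations of a birth-death process,
    d(mean)/dt = k - g*mean and d(var)/dt = k + g*mean - 2*g*var,
    from (0, 0) with simple forward-Euler steps."""
    mu = np.zeros_like(t_grid)
    v = np.zeros_like(t_grid)
    for i in range(1, len(t_grid)):
        dt = t_grid[i] - t_grid[i - 1]
        mu[i] = mu[i - 1] + dt * (k - g * mu[i - 1])
        v[i] = v[i - 1] + dt * (k + g * mu[i - 1] - 2.0 * g * v[i - 1])
    return mu, v

def fit_moments(t_grid, mu_obs, v_obs, k_grid, g_grid):
    """Pick (k, g) minimizing the summed squared residuals of BOTH the
    first and second sample moments, not just the mean."""
    best = None
    for k in k_grid:
        for g in g_grid:
            mu, v = moment_ode(k, g, t_grid)
            cost = np.sum((mu - mu_obs) ** 2) + np.sum((v - v_obs) ** 2)
            if best is None or cost < best[0]:
                best = (cost, k, g)
    return best[1], best[2]
```

Including the variance residual is what distinguishes moment fitting from mean-trace fitting; with real sample moments it sharpens the cost landscape around the true parameters.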

12.
The present article discusses the use of computational methods based on generalized estimating equations (GEE), as a potential alternative to full maximum-likelihood methods, for performing segregation analysis of continuous phenotypes using randomly selected family data. The method that we propose can estimate the effect and degree of dominance of a major gene in the presence of additional nongenetic or polygenic familial associations, by relating sample moments to their expectations calculated under the genetic model. It is known that not all parameters in basic major-gene models can be identified, for estimation purposes, solely in terms of the first two sample moments of data from randomly selected families. Thus, we propose the use of higher (third-order) sample moments to resolve this identifiability problem, in a pseudo-profile likelihood estimation scheme. In principle, our methods may be applied to fitting genetic models using complex pedigrees and to estimation in the presence of missing phenotype data for family members. In order to assess its statistical efficiency we compare several variants of the method with each other and with maximum-likelihood estimates provided by the SAGE computer package in a simulation study.

13.
14.
EMT-6 tumor cell killing by decays from 3H and 125I incorporated by adduct formation of radiolabeled sensitizers was studied in vitro. Hypoxic radiosensitizers become covalently bound to cellular molecules after metabolic reduction, and EMT-6 tumor cells can tolerate over 10^9 adducts/cell of misonidazole without loss of colony-forming ability. Cells were incubated under hypoxic conditions in the presence of [3H]misonidazole or [125I]iodoazomycinriboside for various times and the amounts of bound 3H and 125I were determined. Cells were stored as monolayers at 22°C, in suspension culture at 4°C, and frozen in complete medium plus 8% DMSO at -196°C for various times to facilitate the accumulation of radioactive decays before plating in vitro for colony-forming assays at 37°C. At 22°C in monolayer culture, EMT-6 tumor cells tolerated 950 and 1720 decays/cell of 3H and 125I, respectively, without evidence of radiotoxicity. This number of decays/cell over the exposure times used represents 1.54 × 10^6 3H atoms/cell and 8.4 × 10^4 125I atoms/cell, respectively. Significant cell killing was detected after similar amounts of isotope decay when cells were held at 4°C. When cells were frozen in the presence of 8% DMSO, they were more resistant to inactivation by isotope decays or by gamma rays than cells in the liquid phase at 4°C. These data suggest that selective hypoxic tumor cell suicide by 3H or 125I decays from bound sensitizer at 37°C will be an inefficient process, at least for drugs with the specific activities tested. These data are consistent with data on cell inactivation by isotopes incorporated into cells by other procedures.

15.
In this article, we construct an approximate EM algorithm to estimate the parameters of a nonlinear mixed effects model. The iterative procedure can be viewed as an iterative method of moments for estimating the variance components combined with iteratively reweighted least squares for estimating the fixed effects. Therefore, it is valid without normality assumptions on the random components. Computationally simple method-of-moments estimates of the model parameters are used as starting values for the iterative procedure. A simulation study was conducted to compare the performance of the proposed procedure with that of the procedure proposed by Lindstrom and Bates (1990) for some normal and nonnormal models.
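For the starting values, a minimal sketch of ANOVA-type method-of-moments variance-component estimation in the balanced one-way random-effects case (a much simpler setting than the article's nonlinear model; names and parameter values are illustrative):

```python
import numpy as np

def mom_variance_components(y):
    """Method-of-moments variance components for the balanced one-way
    random-effects model y_ij = mu + b_i + e_ij, with y of shape
    (groups, n_per_group):
        sigma_e^2 = MSW,   sigma_b^2 = (MSB - MSW) / n  (truncated at 0).
    No normality is assumed; only the first two moments are matched."""
    m, n = y.shape
    group_means = y.mean(axis=1)
    grand = y.mean()
    msw = np.sum((y - group_means[:, None]) ** 2) / (m * (n - 1))
    msb = n * np.sum((group_means - grand) ** 2) / (m - 1)
    return max((msb - msw) / n, 0.0), msw
```

Such closed-form estimates are cheap and distribution-free, which is why they serve well as starting points for an iterative procedure.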

16.
Contact forces and moments act on orthopaedic implants such as joint replacements. The three force and three moment components can be measured by six internal strain gauges and wireless telemetric data transmission. The accuracy of instrumented implants is restricted by their small size, varying modes of load transfer, and the accuracy of calibration. The aims of this study were to use finite element studies to test design features that improve accuracy, to develop simple but accurate calibration arrangements, and to select the best mathematical method for calculating the calibration constants. Several instrumented implants, and commercial and test transducers, were calibrated using different loading setups and mathematical methods. It was found that the arrangement of flexible elements such as bellows or notches between the areas of load transfer and the central sensor locations is most effective in improving accuracy. Increasing the rigidity of the implant areas which are fixed in bones or articulate against joint surfaces is less effective. Simple but accurate calibration of the six force and moment components can be achieved by applying eccentric forces instead of central forces and pure moments. Three different methods for calculating the measuring constants proved to be equally well suited. Employing these improvements makes it possible to keep the average measuring errors of many instrumented implants below 1-2% of the calibration ranges, including cross talk. Additional errors caused by noise of the transmitted signals can be reduced by filtering if this is permitted by the sampling rate and the required frequency content of the loads.

17.
Several recent reports have addressed the problem of estimating the response slope from repeated measurements of paired data when both stimulus and response variables are subject to biological variability. These earlier approaches suffer from several drawbacks: useful information about the relationships between the error components in a closed-loop system is not fully utilized; the response intercept cannot be directly estimated; and the normalization procedure required in some methods may fail under certain circumstances. This paper proposes a new, general method of simultaneously estimating the response slope and intercept from corrupted stimulus-response data when the errors in both variables are specifically related by the system structure. A direct extension of the least-squares approach, this method [directed least squares (DLS)] reduces to ordinary least squares when either of the measured variables is error free, and to the reduced-major-axis (RMA) method of Kermack and Haldane (Biometrika 37: 30-41, 1950) when the magnitudes of the normalized errors are equal. The DLS estimators are scale invariant, statistically unbiased, and attain the minimum variance. With simple modifications, the method is also applicable to paired data. If, however, the relation between error components is uncertain, then the RMA method is optimal, i.e., it has the least possible asymptotic bias and variance. These results are illustrated using various types of closed-loop respiratory response data.
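The RMA special case has a closed form: the slope is the sign of the covariance times the ratio of standard deviations, with the intercept taken through the point of means. A minimal sketch (the full DLS estimator additionally uses the known relation between the error components, which is not modeled here):

```python
import numpy as np

def rma_fit(x, y):
    """Reduced-major-axis (geometric mean) regression:
    slope = sign(cov(x, y)) * sd(y) / sd(x),
    intercept chosen so the line passes through (mean x, mean y)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    slope = np.sign(np.cov(x, y)[0, 1]) * y.std() / x.std()
    intercept = y.mean() - slope * x.mean()
    return slope, intercept
```

Unlike ordinary least squares of y on x, this estimator is symmetric in the two variables up to inversion of the slope, which is what makes it appropriate when both variables carry comparable error.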

18.
Multivariate meta-analysis is becoming more commonly used. Methods for fitting the multivariate random effects model include maximum likelihood, restricted maximum likelihood, Bayesian estimation and multivariate generalisations of the standard univariate method of moments. Here, we provide a new multivariate method of moments for estimating the between-study covariance matrix with the properties that (1) it allows for either complete or incomplete outcomes and (2) it allows for covariates through meta-regression. Further, for complete data, it is invariant to linear transformations. Our method reduces to the usual univariate method of moments, proposed by DerSimonian and Laird, in a single dimension. We illustrate our method and compare it with some of the alternatives using a simulation study and a real example.
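In a single dimension the estimator reduces to the DerSimonian-Laird moment estimator of the between-study variance, which can be sketched as follows (univariate, no covariates):

```python
import numpy as np

def dersimonian_laird_tau2(effects, variances):
    """Univariate DerSimonian-Laird moment estimator of the
    between-study variance tau^2, truncated at zero."""
    y = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)  # fixed-effect weights
    mu_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fixed) ** 2)           # Cochran's Q statistic
    k = len(y)
    denom = np.sum(w) - np.sum(w**2) / np.sum(w)
    return max(0.0, (Q - (k - 1)) / denom)
```

The estimator matches Q to its expectation under the random-effects model; the truncation at zero is the same one the multivariate generalisation must handle when keeping the covariance matrix positive semi-definite.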

19.
The conductance, number, and mean open time of ion channels can be estimated from fluctuations in membrane current. To examine potential errors associated with fluctuation analysis, we simulated ensemble currents and estimated single channel properties. The number (N) and amplitude (i) of the underlying single channels were estimated using nonstationary fluctuation analysis, while mean open time was estimated using covariance and spectral analysis. Both excessive filtering and the analysis of segments of current that were too brief led to underestimates of i and overestimates of N. Setting the low-pass cut-off frequency of the filter to greater than five times the inverse of the effective mean channel open time (burst duration) and analyzing segments of current that were at least 80 times the effective mean channel open time reduced the errors to < 2%. With excessive filtering, Butterworth filtering gave up to 10% less error in estimating i and N than Bessel filtering. Estimates of mean open time obtained from the time constant of decay of the covariance, tau_obs, at low open probabilities (Po) were much less sensitive to filtering than estimates of i and N. Extrapolating plots of tau_obs versus mean current to the ordinate provided a method to estimate mean open time from data obtained at higher Po, where tau_obs no longer represents mean open time. Bessel filtering gave the least error when estimating tau_obs from the decay of the covariance function, and Butterworth filtering gave the least error when estimating tau_obs from spectral density functions.
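Nonstationary fluctuation analysis rests on the binomial relation var = i·I − I²/N between the mean current I and its variance, which is linear in the unknowns (i, 1/N). A minimal sketch with idealized, noise-free moments (names are illustrative; real analyses fit noisy ensemble variances):

```python
import numpy as np

def fit_noise_parabola(mean_I, var_I):
    """Nonstationary fluctuation analysis: fit var = i*I - (1/N)*I^2
    to (mean current, variance) pairs by linear least squares in the
    unknowns (i, 1/N), then return (i, N)."""
    I = np.asarray(mean_I, dtype=float)
    A = np.column_stack([I, -I**2])        # design matrix for (i, 1/N)
    coef, *_ = np.linalg.lstsq(A, np.asarray(var_I, dtype=float), rcond=None)
    i_unit, invN = coef
    return i_unit, 1.0 / invN
```

The parabola peaks at open probability 0.5; the curvature near the peak carries the information about N, which is why excessive filtering (which flattens the variance) biases N upward and i downward, as the abstract reports.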

20.
Force plates for human movement analysis provide accurate measurements when mounted rigidly on an inertial reference frame. Large measurement errors occur, however, when the force plate is accelerated, or tilted relative to gravity. This prohibits the use of force plates in human perturbation studies with controlled surface movements, or in conditions where the foundation is moving or not sufficiently rigid. Here we present a linear model to predict the inertial and gravitational artifacts using accelerometer signals. The model is first calibrated with data collected from random movements of the unloaded system and then used to compensate for the errors in another trial. The method was tested experimentally on an instrumented force treadmill capable of dynamic mediolateral translation and sagittal pitch. The compensation was evaluated in five experimental conditions, including platform motions induced by actuators, by motor vibration, and by human ground reaction forces. In the test that included all sources of platform motion, the root-mean-square (RMS) errors were 39.0 N and 15.3 N·m in force and moment before compensation, and 1.6 N and 1.1 N·m after compensation. A sensitivity analysis was performed to determine the effect on estimating joint moments during human gait. Joint moment errors in hip, knee, and ankle were initially 53.80 N·m, 32.69 N·m, and 19.10 N·m, and reduced to 1.67 N·m, 1.37 N·m, and 1.13 N·m with our method. It was concluded that the compensation method can reduce the inertial and gravitational artifacts to an acceptable level for human gait analysis.
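A minimal sketch of the calibrate-then-compensate idea with a static linear artifact model: fit the mapping from accelerometer channels to measured force on an unloaded trial, then subtract the predicted artifact in a loaded trial. The actual model also covers moments, tilt, and all six force-plate channels; every name and number below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed (unknown to the method) linear mapping from 3 accelerometer
# channels to one force channel, plus a gravitational offset.
C_true = np.array([4.0, -1.5, 2.5])
offset_true = 9.0

# Unloaded calibration trial: random platform motion, force = artifact only.
acc_cal = rng.normal(size=(500, 3))
f_cal = acc_cal @ C_true + offset_true

# Calibrate: least-squares fit of measured force on [acc, 1].
X = np.column_stack([acc_cal, np.ones(len(acc_cal))])
coef, *_ = np.linalg.lstsq(X, f_cal, rcond=None)

# Loaded trial: true ground reaction force plus motion artifact.
acc = rng.normal(size=(200, 3))
f_true = 600.0 + 50.0 * np.sin(np.linspace(0.0, 6.0, 200))
f_meas = f_true + acc @ C_true + offset_true

# Compensate by subtracting the predicted artifact.
f_comp = f_meas - np.column_stack([acc, np.ones(len(acc))]) @ coef
```

Calibrating on the unloaded system is the key design choice: during calibration the entire measured signal is artifact, so the regression cannot confuse artifact with ground reaction force.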
