Similar Literature
20 similar documents found.
1.
A method for fitting experimental sedimentation velocity data to finite-element solutions of various models based on the Lamm equation is presented. The method provides initial parameter estimates and guides the user in choosing an appropriate model for the analysis by preprocessing the data with the G(s) method by van Holde and Weischet. For a mixture of multiple solutes in a sample, the method returns the concentrations, the sedimentation (s) and diffusion coefficients (D), and thus the molecular weights (MW) for all solutes, provided the partial specific volumes (v) are known. For nonideal samples displaying concentration-dependent solution behavior, concentration dependency parameters for s(sigma) and D(delta) can be determined. The finite-element solution of the Lamm equation used for this study provides a numerical solution to the differential equation, and does not require empirically adjusted correction terms or any assumptions such as infinitely long cells. Consequently, experimental data from samples that neither clear the meniscus nor exhibit clearly defined plateau absorbances, as well as data from approach-to-equilibrium experiments, can be analyzed with this method with enhanced accuracy when compared to other available methods. The nonlinear least-squares fitting process was accomplished by the use of an adapted version of the "Doesn't Use Derivatives" nonlinear least-squares fitting routine. The effectiveness of the approach is illustrated with experimental data obtained from protein and DNA samples. Where applicable, results are compared to methods utilizing analytical solutions of approximated Lamm equations.

2.
A convenient method for evaluation of biochemical reaction rate coefficients and their uncertainties is described. The motivation for developing this method was the complexity of existing statistical methods for analysis of biochemical rate equations, as well as the shortcomings of linear approaches, such as Lineweaver-Burk plots. The nonlinear least-squares method provides accurate estimates of the rate coefficients and their uncertainties from experimental data. Linearized methods that involve inversion of data are unreliable since several important assumptions of linear regression are violated. Furthermore, when linearized methods are used, there is no basis for calculation of the uncertainties in the rate coefficients. Uncertainty estimates are crucial to studies involving comparisons of rates for different organisms or environmental conditions. The spreadsheet method uses weighted least-squares analysis to determine the best-fit values of the rate coefficients for the integrated Monod equation. Although the integrated Monod equation is an implicit expression of substrate concentration, weighted least-squares analysis can be employed to calculate approximate differences in substrate concentration between model predictions and data. An iterative search routine in a spreadsheet program is utilized to search for the best-fit values of the coefficients by minimizing the sum of squared weighted errors. The uncertainties in the best-fit values of the rate coefficients are calculated by an approximate method that can also be implemented in a spreadsheet. The uncertainty method can be used to calculate single-parameter (coefficient) confidence intervals, degrees of correlation between parameters, and joint confidence regions for two or more parameters. Example sets of calculations are presented for acetate utilization by a methanogenic mixed culture and trichloroethylene cometabolism by a methane-oxidizing mixed culture. 
An additional advantage of applying this method to the integrated Monod equation, compared with linearized methods, is the economy of obtaining rate coefficients from a single batch experiment or a few batch experiments rather than from large numbers of initial rate measurements. However, when initial rate measurements are used, this method is still more reliable than linearized approaches.
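The weighted nonlinear fit the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's spreadsheet implementation: it uses the simpler explicit Monod rate form v = Vmax·S/(Ks + S) rather than the implicit integrated equation, and all parameter names and values are hypothetical.

```python
# Minimal sketch: weighted nonlinear least-squares estimation of Monod
# rate coefficients. Synthetic data stand in for batch measurements.
import numpy as np
from scipy.optimize import curve_fit

def monod(S, Vmax, Ks):
    # Monod rate form: v = Vmax * S / (Ks + S)
    return Vmax * S / (Ks + S)

rng = np.random.default_rng(0)
S = np.linspace(0.5, 50, 20)                  # substrate concentrations
true_Vmax, true_Ks = 5.0, 8.0
sigma = 0.05 * monod(S, true_Vmax, true_Ks)   # ~5% relative measurement error
v = monod(S, true_Vmax, true_Ks) + rng.normal(0, sigma)

# sigma weights each residual by its uncertainty, i.e. a weighted fit;
# absolute_sigma=True makes the covariance matrix directly interpretable.
popt, pcov = curve_fit(monod, S, v, p0=[1.0, 1.0],
                       sigma=sigma, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))  # one-standard-deviation parameter uncertainties
```

The square roots of the covariance diagonal give the single-parameter uncertainties the abstract discusses; the off-diagonal terms quantify the correlation between Vmax and Ks.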

3.
A modification of a method of Gardner, which employs Fourier-transform techniques, is used to obtain initial estimates for the number of terms and values of the parameters for data which are represented by a sum of exponential terms. New experimental methods have increased both the amount and accuracy of data from radiopharmaceutical experiments. This in turn allows one to devise specific numerical methods that utilize the better data. The inherent difficulties of fitting exponentials to data, which is an ill-posed problem, cannot be overcome by any method. However, we show that the present accuracy of Fourier methods may be extended by our numerical methods applied to the improved data sets. In many cases the method yields accurate estimates for the parameters; these estimates then are to be used as initial estimates for a nonlinear least-squares analysis of the problem.
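The two-stage strategy in the abstract — rough initial estimates, then nonlinear least-squares refinement of a sum of exponentials — can be sketched as follows. Hand-picked starting values stand in here for the Fourier-transform (Gardner) preprocessing step, and all names and values are hypothetical.

```python
# Minimal sketch: fitting a two-term sum of exponentials with nonlinear
# least squares, starting from crude initial estimates.
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, k1, a2, k2):
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 200)
y = biexp(t, 3.0, 2.0, 1.0, 0.3) + rng.normal(0, 0.01, t.size)

# Crude initial estimates replace the Fourier step; good starting values
# matter because exponential fitting is an ill-posed problem.
p0 = [2.0, 1.0, 1.0, 0.1]
popt, pcov = curve_fit(biexp, t, y, p0=p0)
```

With poorly chosen starting values the same fit can converge to a wrong local minimum or swap the two terms, which is why a preprocessing step for initial estimates is emphasized.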

4.
Experimental designs involving repeated measurements on experimental units are widely used in physiological research. Often, relatively many consecutive observations on each experimental unit are involved and the data may be quite nonlinear. Yet one of the most commonly used statistical methods for dealing with such data sets in physiological research is the repeated-measurements ANOVA model. The problem is that this model is not well suited for data sets with many consecutive measurements: it does not capture nonlinear features of the data, and its interpretability may be low. The use of inappropriate statistical models increases the likelihood of drawing wrong conclusions. The aim of this article is to illustrate, for a reasonably typical repeated-measurements data set, how fundamental assumptions of the repeated-measurements ANOVA model are inappropriate and how researchers may benefit from adopting different modeling approaches using a variety of different kinds of models. We emphasize intuitive ideas rather than mathematical rigor. We illustrate how such models represent alternatives that 1) can have much higher interpretability, 2) are more likely to meet underlying assumptions, 3) provide better fitted models, and 4) are readily implemented in widely distributed software products.

5.
A variety of analytical methods is available for branch testing in distance-based phylogenies. However, these methods are rarely used, possibly because the estimation of some of their statistics, especially the covariances, is not always feasible. We show that these difficulties can be overcome if some simplifying assumptions are made, namely distance independence. The weighted least-squares likelihood ratio test (WLS-LRT) we propose is easy to perform, using only the distances and some of their associated variances. If no variances are known, the use of the Felsenstein F-test, also based on weighted least squares, is discussed. Using simulated data and a data set of 43 mammalian mitochondrial sequences, we demonstrate that the WLS-LRT performs as well as the generalized least-squares test, and indeed better for data sets with large numbers of taxa. We thus show that the assumption of independence does not negatively affect the reliability or the accuracy of the least-squares approach. The results of the WLS-LRT are no worse than the results of bootstrap methods such as the Felsenstein bootstrap selection probability test and the Dopazo test. We also show that the WLS-LRT can be applied in instances where other analytical methods are inappropriate. This point is illustrated by analyzing the relationships between human immunodeficiency virus type 1 (HIV-1) sequences isolated from various organs of different individuals.

6.
The method of phylogenetically independent contrasts is commonly used for exploring cross-taxon relationships between traits. Here we show that this phylogenetic comparative method (PCM) can fail to detect correlated evolution when the underlying relationship between traits is nonlinear. Simulations indicate that statistical power can be dramatically reduced when independent contrasts analysis is used on nonlinear relationships. We also reanalyze a published data set and demonstrate that ignoring nonlinearity can affect biological inferences. We suggest that researchers consider the shape of the relationship between traits when using independent contrasts analysis. Alternative PCMs may be more appropriate if data cannot be transformed to meet assumptions of linearity.

7.
A set of programs for analysis of kinetic and equilibrium data
A program package that can be used for analysis of a wide range of kinetic and equilibrium data is described. The four programs were written in Turbo Pascal and run on PC, XT, AT and compatibles. The first of the programs allows the user to fit data with 16 predefined and one user-defined function, using two different non-linear least-squares procedures. Two additional programs are used to test both the evaluation of model functions and the least-squares fits. One of these programs uses two simple procedures to generate a Gaussian-distributed random variable that is used to simulate the experimental error of measurements. The last program simulates kinetics described by differential equations that cannot be solved analytically, using numerical integration. This program helps the user to judge the validity of steady-state assumptions or treatment of kinetic measurements as relaxations. Received on September 19, 1989; accepted on March 16, 1990

8.
Two problems that are often overlooked in studies employing nonlinear least-squares techniques for parameter estimation are confidence-interval estimation and propagation. When the parameters are correlated, the variance space and consequently the confidence intervals are nonlinear and asymmetrical. The presented mathematical method for the evaluation of confidence intervals and error propagation addresses these problems. The examples employed to demonstrate these methods include linear least-squares and the nonlinear least-squares analysis of ligand-binding problems, such as hormone receptor interactions and oxygen binding to human hemoglobin. The mathematical procedures have proven very useful for analyzing the molecular mechanism of cooperativity in human hemoglobin (Johnson, M. L., and G. K. Ackers, 1982. Biochemistry 21:201-211).
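For contrast with the abstract's point, the common asymptotic approach can be sketched: confidence intervals taken from the fit covariance matrix are symmetric by construction, which is exactly the approximation that misleads when parameters are correlated. The one-site binding model and parameter names (Bmax, Kd) below are hypothetical stand-ins for the ligand-binding examples.

```python
# Minimal sketch: symmetric asymptotic confidence intervals from a
# nonlinear least-squares fit of a one-site binding curve.
import numpy as np
from scipy.optimize import curve_fit

def binding(x, Bmax, Kd):
    # fractional saturation times capacity: y = Bmax * x / (Kd + x)
    return Bmax * x / (Kd + x)

rng = np.random.default_rng(2)
x = np.linspace(0.1, 20, 25)
y = binding(x, 10.0, 2.0) + rng.normal(0, 0.1, x.size)

popt, pcov = curve_fit(binding, x, y, p0=[5.0, 1.0])
perr = np.sqrt(np.diag(pcov))
# Symmetric 95% intervals: the linear approximation the abstract warns about.
ci = [(p - 1.96 * e, p + 1.96 * e) for p, e in zip(popt, perr)]

# The off-diagonal covariance term reveals the Bmax-Kd correlation that
# makes the true variance space asymmetrical.
corr = pcov[0, 1] / (perr[0] * perr[1])
```

When `corr` is far from zero, these symmetric intervals understate the true, asymmetric uncertainty, which motivates the variance-space search methods of the paper.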

9.
We present ways to test the assumptions of the Petersen and removal methods of population size estimation, and ways to adjust the estimates if violations of the assumptions are found. We were motivated by the facts that (1) results of using both methods are commonly reported without any reference to the testing of assumptions, (2) violations of the assumptions are more likely to occur than not to occur in natural populations, and (3) the estimates can be grossly in error if assumptions are violated. We recognize that in many cases two days in the field is the most time fish biologists can spend in obtaining a population estimate, so the use of alternative models of population estimation that require fewer assumptions is precluded. Hence, for biologists operating with these constraints, and only these biologists, we describe and recommend a two-day technique that combines aspects of both capture-recapture and removal methods. We indicate how to test most of the assumptions of both methods and how to adjust the population estimates obtained if violations of the assumptions occur. We also illustrate the use of this combined method with data from a field study. The results of this application further emphasize the importance of testing the assumptions of whatever method is used and making appropriate adjustments to the population size estimates for any violations identified.
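For readers unfamiliar with the Petersen method, a minimal sketch of the two-sample estimator (and Chapman's bias-adjusted variant) is shown below; these are the estimates whose assumptions — closed population, equal catchability, no mark loss — the paper's tests are designed to check. The numbers are purely illustrative, not from the study.

```python
# Minimal sketch: Lincoln-Petersen capture-recapture population estimates.
def petersen(marked, caught, recaptured):
    """Classic Petersen estimate: N = M * C / R."""
    return marked * caught / recaptured

def chapman(marked, caught, recaptured):
    """Chapman's adjustment, less biased when recapture counts are small."""
    return (marked + 1) * (caught + 1) / (recaptured + 1) - 1

# Day 1: mark 100 fish; Day 2: catch 80, of which 20 carry marks.
N_hat = petersen(100, 80, 20)   # classic estimate
N_adj = chapman(100, 80, 20)    # bias-adjusted estimate
```

Violations of the assumptions (e.g. trap-shy marked fish lowering R) bias N_hat directly, which is why the paper stresses testing them and adjusting the estimate.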

10.
Schuck P, Rossmanith P. Biopolymers 2000;54(5):328-341.
A new method is presented for the calculation of apparent sedimentation coefficient distributions g*(s) for the size-distribution analysis of polymers in sedimentation velocity experiments. Direct linear least-squares boundary modeling by a superposition of sedimentation profiles of ideal nondiffusing particles is employed. It can be combined with algebraic noise decomposition techniques for application to interference optical ultracentrifuge data at low loading concentrations with significant systematic noise components. Because of the use of direct boundary modeling, residuals are available for assessment of the quality of the fits and the consistency of the g*(s) distribution with the experimental data. The method can be combined with regularization techniques based on F statistics, such as those used in the program CONTIN; alternatively, the increment of s values can be adjusted empirically. The method is simple, has advantageous statistical properties, and yields precise sedimentation coefficients. The new least-squares ls-g*(s) method exhibits very high robustness and resolution if data acquired over a large time interval are analyzed. This can result in a high resolution for large particles, and for samples with a high degree of heterogeneity. Because the method does not require a high frequency of scans, it can also be easily used in experiments with the absorbance optical scanning system. Published 2000 John Wiley & Sons, Inc.

11.
An adaptation of some methods used in obesity research is presented as a teaching example to illustrate the use of 'animal models' in medical research. From sixth-form level upwards, it serves as a theoretical exercise in the analysis and interpretation of methods and data. For undergraduates it is also suitable as a laboratory exercise in dissection and measurement techniques.

Four simple means of altering fat levels in laboratory mice are described, contrasting invasive injection techniques with non-invasive dietary and behavioural means. Several measures of body fatness can be evaluated, using either the experimental mice or, more simply, untreated mice from outbred stocks. Guidelines are given for practicals in which objective and subjective measures of fatness in human subjects can be collected. Sample data for mice are given that, alone or together with class-collected data, allow graphical and statistical analysis. Throughout, the exercise lends itself to discussion of the assumptions both of the methods used on mice and men, and of the relation of such investigations to the problems of overweight in man. Obesity has many manifestations: the study of a variety of ‘animal models’ is one approach in the search for relevant physiological knowledge.

12.
Analysis of data in terms of the sum of two rectangular hyperbolas is frequently required in solute uptake studies. Four methods for such analysis have been compared. Three are based on least-squares fitting whereas the fourth (partition method I) is an extension of a single hyperbola fitting procedure based on non-parametric statistics. The four methods were tested using data sets which had been generated with two primary types of random, normal error in the dependent variable: one of constant error variance and the other of constant coefficient of variation. The methods were tested on further data sets which were obtained by incorporating single 10% bias errors at different positions in the original two sets. Partition method I consistently gave good estimates for the four parameters defining the double hyperbola and was highly insensitive to the bias errors. The least-squares procedures performed well under conditions satisfying the least-squares assumptions regarding error distribution, but frequently gave poor estimates when these assumptions did not hold. Our conclusion is that in view of the errors inherent in many solute uptake experiments it would usually be preferable to analyse data by a method such as partition method I rather than to rely on a least-squares procedure.
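The double-hyperbola model under comparison can be sketched as a straightforward least-squares fit; this illustrates the model form only — partition method I itself, being non-parametric, is not reproduced here, and all parameter names and values are hypothetical.

```python
# Minimal sketch: least-squares fit of a sum of two rectangular hyperbolas,
# the four-parameter model used in dual-phase solute uptake analysis.
import numpy as np
from scipy.optimize import curve_fit

def double_hyperbola(c, v1, k1, v2, k2):
    # Two saturable phases: v = v1*c/(k1+c) + v2*c/(k2+c)
    return v1 * c / (k1 + c) + v2 * c / (k2 + c)

rng = np.random.default_rng(4)
c = np.geomspace(0.01, 10.0, 40)   # concentrations spanning both phases
y = double_hyperbola(c, 2.0, 0.05, 5.0, 2.0) + rng.normal(0, 0.02, c.size)

# Widely separated starting half-saturation values keep the two phases
# from collapsing onto each other during the fit.
popt, _ = curve_fit(double_hyperbola, c, y, p0=[1.0, 0.01, 3.0, 1.0])
```

Under the constant-error-variance conditions simulated here the fit behaves well; the paper's point is that it degrades badly once the error assumptions are violated.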

13.
A model of substrate inhibition for enzyme catalysis was extended to describe the kinetics of photosynthetic production of ethylene by a recombinant cyanobacterium, which exhibits light-inhibition behavior similar to the substrate-inhibition behavior in enzyme reactions. To check the validity of the model against the experimental data, the model equation, which contains three kinetic parameters, was transformed so that a linear plot of the data could be made. The plot yielded reasonable linearity, and the parameter values could be estimated from the plot. The linear-plot approach was then applied to other inhibition kinetics including substrate inhibition of enzyme reactions and inhibitory growth of bacteria, whose analyses would otherwise require nonlinear least-squares fits or data measured in constrained ranges. Plots for three totally different systems all showed reasonable linearity, which enabled visual validation of the assumed kinetics. Parameter values evaluated from the plots were compared with results of nonlinear least-squares fits. A normalized linear plot for all the results discussed in this work is also presented, where dimensionless rates as a function of dimensionless concentration lie in a straight line. The linear-plot approach is expected to be complementary to nonlinear least-squares fits and other currently used methods in analyses of substrate-inhibition kinetics. Copyright 1999 John Wiley & Sons, Inc.

14.
Vasco DA. Genetics 2008;179(2):951-963.
The estimation of ancestral and current effective population sizes in expanding populations is a fundamental problem in population genetics. Recently it has become possible to scan entire genomes of several individuals within a population. These genomic data sets can be used to estimate basic population parameters such as the effective population size and population growth rate. Full-data-likelihood methods potentially offer a powerful statistical framework for inferring population genetic parameters. However, for large data sets, computationally intensive methods based upon full-likelihood estimates may encounter difficulties. First, the computational method may be prohibitively slow or difficult to implement for large data. Second, estimation bias may markedly affect the accuracy and reliability of parameter estimates, as suggested from past work on coalescent methods. To address these problems, a fast and computationally efficient least-squares method for estimating population parameters from genomic data is presented here. Instead of modeling genomic data using a full likelihood, this new approach uses an analogous function, in which the full data are replaced with a vector of summary statistics. Furthermore, these least-squares estimators may show significantly less estimation bias for growth rate and genetic diversity than a corresponding maximum-likelihood estimator for the same coalescent process. The least-squares statistics also scale up to genome-sized data sets with many nucleotides and loci. These results demonstrate that least-squares statistics will likely prove useful for nonlinear parameter estimation when the underlying population genomic processes have complex evolutionary dynamics involving interactions between mutation, selection, demography, and recombination.

15.
Developmental models that account for the metabolic effect of temperature variability on poikilotherms, such as degree-day models, have been widely used to study organism emergence, range and development, particularly in agricultural and vector-borne disease contexts. Though simple and easy to use, structural and parametric issues can influence the outputs of such models, often substantially. Because the underlying assumptions and limitations of these models have rarely been considered, this paper reviews the structural, parametric, and experimental issues that arise when using degree-day models, including the implications of particular structural or parametric choices, as well as assumptions that underlie commonly used models. Linear and non-linear developmental functions are compared, as are common methods used to incorporate temperature thresholds and calculate daily degree-days. Substantial differences in predicted emergence time arose when using linear versus non-linear developmental functions to model the emergence time in a model organism. The optimal method for calculating degree-days depends upon where key temperature threshold parameters fall relative to the daily minimum and maximum temperatures, as well as the shape of the daily temperature curve. No method is shown to be universally superior, though one commonly used method, the daily average method, consistently provides accurate results. The sensitivity of model projections to these methodological issues highlights the need to make structural and parametric selections based on a careful consideration of the specific biological response of the organism under study, and the specific temperature conditions of the geographic regions of interest. When degree-day model limitations are considered and model assumptions met, the models can be a powerful tool for studying temperature-dependent development.
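The daily average method singled out by the review can be sketched in a few lines: each day contributes the amount by which the mean of the daily minimum and maximum exceeds the base temperature threshold. The threshold and temperatures below are illustrative, not taken from the review.

```python
# Minimal sketch: degree-day accumulation by the daily average method.
def daily_average_dd(t_min, t_max, base):
    """Degree-days for one day: mean of min/max temperature above the base."""
    return max(0.0, (t_min + t_max) / 2.0 - base)

def accumulate_dd(daily_min_max, base):
    """Sum daily degree-days over a sequence of (min, max) temperature pairs."""
    return sum(daily_average_dd(lo, hi, base) for lo, hi in daily_min_max)

# Five days of (min, max) temperatures in deg C, base threshold 10 deg C.
week = [(8, 20), (10, 24), (5, 15), (2, 10), (12, 26)]
total = accumulate_dd(week, base=10.0)  # 4 + 7 + 0 + 0 + 9 = 20.0
```

Note how the method handles thresholds crudely: a day whose mean sits at or below the base contributes zero even if the afternoon maximum exceeded it, which is one of the structural choices the review examines.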

16.
Computer programs for the analysis of data from techniques frequently used in nucleic acids research are described. In addition to calculating non-linear, least-squares solutions to equations describing these systems, the programs allow for data editing, normalization, plotting and storage, and are flexible and simple to use. Typical applications of the programs are described.

17.
18.
The phenomenological principles of information theory are used in the analysis of ligand-binding phenomena in biological macromolecules. Information maps are constructed to visualize regions of ligand chemical potential with maximum amount of information and to devise suitable experimental strategies therefrom. Extensive simulation studies and analysis of experimental data also point out the properties of information used as a weighting procedure in nonlinear least-squares analyses.

19.
Accurate prediction of tumor progression is key for adaptive therapy and precision medicine. Cancer progression models (CPMs) can be used to infer dependencies in mutation accumulation from cross-sectional data and provide predictions of tumor progression paths. However, their performance when predicting complete evolutionary trajectories is limited by violations of assumptions and the size of available data sets. Instead of predicting full tumor progression paths, here we focus on short-term predictions, more relevant for diagnostic and therapeutic purposes. We examine whether five distinct CPMs can be used to answer the question “Given that a genotype with n mutations has been observed, what genotype with n + 1 mutations is next in the path of tumor progression?” or, shortly, “What genotype comes next?”. Using simulated data we find that under specific combinations of genotype and fitness landscape characteristics CPMs can provide predictions of short-term evolution that closely match the true probabilities, and that some genotype characteristics can be much more relevant than global features. Application of these methods to 25 cancer data sets shows that their use is hampered by a lack of information needed to make principled decisions about method choice. Fruitful use of these methods for short-term predictions requires adapting method’s use to local genotype characteristics and obtaining reliable indicators of performance; it will also be necessary to clarify the interpretation of the method’s results when key assumptions do not hold.

20.
The theory of photon count histogram (PCH) analysis describes the distribution of fluorescence fluctuation amplitudes due to populations of fluorophores diffusing through a focused laser beam and provides a rigorous framework through which the brightnesses and concentrations of the fluorophores can be determined. In practice, however, the brightnesses and concentrations of only a few components can be identified. Brightnesses and concentrations are determined by a nonlinear least-squares fit of a theoretical model to the experimental PCH derived from a record of fluorescence intensity fluctuations. The χ² hypersurface in the neighborhood of the optimum parameter set can have varying degrees of curvature, due to the intrinsic curvature of the model, the specific parameter values of the system under study, and the relative noise in the data. Because of this varying curvature, parameters estimated from the least-squares analysis have varying degrees of uncertainty associated with them. There are several methods for assigning confidence intervals to the parameters, but these methods have different efficacies for PCH data. Here, we evaluate several approaches to confidence interval estimation for PCH data, including asymptotic standard error, likelihood joint-confidence region, likelihood confidence intervals, bias-corrected and accelerated bootstrap (BCa), and Monte Carlo residual resampling methods. We study these with a model two-dimensional membrane system for simplicity, but the principles are applicable as well to fluorophores diffusing in three-dimensional solution. Using simulated fluorescence fluctuation data, we find the BCa method to be particularly well-suited for estimating confidence intervals in PCH analysis, and several other methods to be less so. Using the BCa method and additional simulated fluctuation data, we find that confidence intervals can be reduced dramatically for a specific non-Gaussian beam profile.
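The BCa interval favored by this study is available off the shelf; as a minimal sketch (applied here to a simple statistic, the mean of skewed data, rather than to PCH fit parameters), assuming SciPy's `scipy.stats.bootstrap` with `method='BCa'`:

```python
# Minimal sketch: bias-corrected and accelerated (BCa) bootstrap confidence
# interval for the mean of a skewed sample.
import numpy as np
from scipy.stats import bootstrap

rng = np.random.default_rng(3)
sample = rng.exponential(scale=2.0, size=200)  # deliberately skewed data

# BCa corrects the percentile interval for bias and skew (acceleration),
# which is what makes it well suited to asymmetric parameter distributions.
res = bootstrap((sample,), np.mean, confidence_level=0.95, method='BCa')
lo, hi = res.confidence_interval.low, res.confidence_interval.high
```

In the PCH setting the statistic would instead be a brightness or concentration re-estimated on each resampled fluctuation record, but the interval construction is the same.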

