Similar Articles
20 similar articles found (search time: 31 ms)
1.
A nonlinear regression technique for estimating the Monod parameters describing biodegradation kinetics is presented and analyzed. Two model data sets were taken from a study of aerobic biodegradation of the polycyclic aromatic hydrocarbons (PAHs) naphthalene (NPH) and 2-methylnaphthalene (2MN) as the growth-limiting substrates, where substrate and biomass concentrations were measured over time. For each PAH, the parameters estimated were: q_max, the maximum substrate utilization rate per unit biomass; K_S, the half-saturation coefficient; and Y, the stoichiometric yield coefficient. Estimating parameters when measurements have been made for two variables with different error structures requires a technique more rigorous than least squares regression. An optimization function is derived from the maximum-likelihood equation assuming an unknown, nondiagonal covariance matrix for the measured variables. Because the derivation is based on an assumption of normally distributed errors in the observations, the error structures of the regression variables were examined. Through residual analysis, the errors in the substrate concentration data were found to be distributed log-normally, demonstrating a need for log transformation of this variable. The covariance between ln C and X was found to be small but significantly nonzero at the 67% confidence level for NPH and at the 94% confidence level for 2MN. The nonlinear parameter estimation yielded unique values of q_max, K_S, and Y for naphthalene. Thus, despite the low concentrations of this sparingly soluble compound, the data contained sufficient information for parameter estimation. For 2-methylnaphthalene, the values of q_max and K_S could not be estimated uniquely; however, the ratio q_max/K_S was estimated. To assess the value of including the relatively imprecise biomass concentration data, the results from the bivariate method were compared with those of a univariate method using only the substrate concentration data. The results demonstrated that the bivariate data yielded greater confidence in the estimates and provided additional information about the model fit and model adequacy. The value of the bivariate data set, combined with its nonzero covariance, justifies maximum likelihood estimation over the simpler nonlinear least squares regression.
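As an illustration of the kind of bivariate fit this abstract describes, here is a minimal Python sketch that fits q_max, K_S, and Y to simulated substrate/biomass time courses. It simplifies the paper's method: the substrate enters on a log scale (reflecting the log-normal errors), but the full nondiagonal covariance is replaced by independent per-variable variances, and all data are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def monod_rhs(t, y, q_max, K_S, Y):
    C, X = y
    r = q_max * C / (K_S + C) * X          # substrate utilization rate
    return [-r, Y * r]

def simulate(theta, t, y0):
    sol = solve_ivp(monod_rhs, (t[0], t[-1]), y0, args=tuple(theta),
                    t_eval=t, rtol=1e-8)
    return sol.y                            # rows: C(t), X(t)

def neg_log_lik(theta, t, lnC_obs, X_obs, y0, s_lnC, s_X):
    # Simplified likelihood: independent Gaussian errors on ln C and on X.
    C, X = simulate(theta, t, y0)
    C = np.clip(C, 1e-12, None)
    return (np.sum((np.log(C) - lnC_obs) ** 2) / (2 * s_lnC ** 2)
            + np.sum((X - X_obs) ** 2) / (2 * s_X ** 2))

# Hypothetical data: simulate with known parameters, add noise, refit.
rng = np.random.default_rng(0)
true = (0.5, 2.0, 0.4)                      # q_max, K_S, Y
t = np.linspace(0, 30, 15)
y0 = [10.0, 0.2]                            # initial C, X
C, X = simulate(np.array(true), t, y0)
lnC_obs = np.log(C) + rng.normal(0, 0.05, t.size)  # log-normal substrate error
X_obs = X + rng.normal(0, 0.05, t.size)

fit = minimize(neg_log_lik, x0=[0.3, 1.0, 0.3],
               args=(t, lnC_obs, X_obs, y0, 0.05, 0.05),
               method="L-BFGS-B",
               bounds=[(1e-3, 5.0), (1e-3, 20.0), (1e-3, 1.0)])
print("estimated q_max, K_S, Y:", fit.x)
```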

2.

Background

Ordinary differential equations (ODEs) are often used to understand biological processes. Since ODE-based models usually contain many unknown parameters, parameter estimation is an important step toward a deeper understanding of the process. Parameter estimation is often formulated as a least squares optimization problem in which all experimental data points are treated as equally important. However, this equal-weight formulation ignores the possibility that different data points have different relative importance, and may lead to misleading parameter estimation results. We therefore propose to introduce weights that account for the relative importance of different data points when formulating the least squares optimization problem. Each weight is defined by the uncertainty of one data point given the other data points. If a data point can be accurately inferred from the other data, its uncertainty is low and so is its importance. Conversely, if a data point can hardly be inferred from the other data, its uncertainty is high and it carries more information for estimating parameters.

Results

A G1/S transition model with 6 and with 12 parameters, and a MAPK module with 14 parameters, were used to test the weighted formulation. In each case, evenly spaced experimental data points were used. The weights calculated in these models showed similar patterns: high weights for data points in dynamic regions and low weights for data points in flat regions. We developed a sampling algorithm to evaluate the weighted formulation and demonstrated that it reduced the redundancy in the data. For the G1/S transition model with 12 parameters, we examined unevenly spaced experimental data points, strategically sampled to place more measurements where the weights were relatively high and fewer where the weights were relatively low. This analysis showed that the proposed weights can be used for designing measurement time points.

Conclusions

Weighting each data point according to its importance relative to the other data points is an effective way to improve the robustness of parameter estimation by reducing redundancy in the experimental data.
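A minimal sketch of the weighted formulation. The leave-one-out interpolation residual used below is only an illustrative proxy for the paper's uncertainty-based weight definition; the sigmoidal model and all data are hypothetical.

```python
import numpy as np

def loo_interpolation_weights(t, y):
    # Proxy weight: how poorly point i is predicted from the other points.
    w = np.empty_like(y)
    for i in range(len(y)):
        mask = np.ones(len(y), bool)
        mask[i] = False
        y_hat = np.interp(t[i], t[mask], y[mask])  # predict point i from the rest
        w[i] = abs(y[i] - y_hat)
    return w / w.sum()                             # normalize

def weighted_sse(params, t, y_obs, model, w):
    # Weighted least squares objective in place of the equal-weight version.
    return np.sum(w * (model(t, params) - y_obs) ** 2)

# Example with a hypothetical sigmoidal time course:
model = lambda t, p: p[0] / (1 + np.exp(-p[1] * (t - p[2])))
t = np.linspace(0, 10, 21)
y = model(t, [1.0, 2.0, 5.0])
w = loo_interpolation_weights(t, y)
print(np.round(w, 3))   # large near the transition, near zero on the plateaus
```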

3.
This paper introduces an automatic, robust nonlinear identification algorithm using the leave-one-out test score, also known as the PRESS (Predicted REsidual Sums of Squares) statistic, and regularised orthogonal least squares. The proposed algorithm aims to maximise model robustness via two effective and complementary approaches: parameter regularisation via ridge regression and selection of a model structure with optimal generalisation. The major contributions are to derive the PRESS error in a regularised orthogonal weight model, to develop an efficient recursive formula for computing PRESS errors in the regularised orthogonal least squares forward regression framework, and hence to construct a model with good generalisation properties. Based on the properties of the PRESS statistic, the proposed algorithm achieves a fully automated model construction procedure without resorting to a separate validation data set for model evaluation.
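A compact sketch of the PRESS idea for a ridge-regularised linear model: for a linear smoother with hat matrix H, the leave-one-out residuals are e_i/(1 - H_ii), so PRESS comes from a single fit. The paper derives the analogous recursion inside its regularised orthogonal least squares framework; this is only the generic version on hypothetical data.

```python
import numpy as np

def ridge_press(X, y, lam):
    # Hat matrix of ridge regression: H = X (X'X + lam*I)^{-1} X'.
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    e = y - H @ y                        # ordinary residuals
    loo = e / (1.0 - np.diag(H))         # leave-one-out (PRESS) residuals
    return np.sum(loo ** 2)

# Choose the regularisation parameter by minimising PRESS:
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 8))
y = X @ rng.normal(size=8) + rng.normal(0, 0.5, 50)
lams = np.logspace(-4, 2, 25)
best = min(lams, key=lambda lam: ridge_press(X, y, lam))
print("lambda minimising PRESS:", best)
```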

4.
Cardiac muscle tissue during relaxation is commonly modeled as a hyperelastic material with a strongly nonlinear and anisotropic stress response. Adapting such a model to experimental or patient data gives rise to a parameter estimation problem involving a significant number of parameters. Gradient-based optimization algorithms can solve such nonlinear parameter estimation problems in relatively few iterations, but require the gradient of the objective functional with respect to the model parameters. This gradient has traditionally been obtained using finite differences, whose cost scales linearly with the number of model parameters and which introduces a differencing error. By using an automatically derived adjoint equation, we are able to calculate this gradient more efficiently and with minimal implementation effort. We test this adjoint framework on a least squares fitting problem involving data from simple shear tests on cardiac tissue samples. A second challenge in gradient-based optimization is the dependence of the algorithm on a suitable initial guess. We show how a multi-start procedure can alleviate this dependence. Finally, we provide estimates for the material parameters of the Holzapfel-Ogden strain energy law using finite element models together with experimental shear data.
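A minimal multi-start sketch, with a synthetic non-convex objective standing in for the least-squares misfit between simulated and measured shear stresses (the adjoint gradient itself is not reproduced here; L-BFGS-B uses numerical gradients in this sketch).

```python
import numpy as np
from scipy.optimize import minimize

def objective(p):
    # Hypothetical non-convex misfit with several local minima.
    return np.sum((p - 1.0) ** 2) + np.sum(np.sin(3.0 * p) ** 2)

def multi_start(fun, bounds, n_starts=20, seed=0):
    # Run a local optimiser from random starting points; keep the best run.
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)
        res = minimize(fun, x0, method="L-BFGS-B", bounds=bounds)
        if best is None or res.fun < best.fun:
            best = res
    return best

bounds = [(-2.0, 4.0)] * 4              # four material parameters (illustrative)
res = multi_start(objective, bounds)
print("best objective:", res.fun, "at", res.x)
```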

5.
Generalized least squares regression with variance function estimation was used to derive the calibration function for measuring methotrexate plasma concentration, and its results were compared with weighted least squares regression using the usual weight factors and with ordinary least squares. Over the calibration range of 0.05 to 100 microM, both heteroscedasticity and nonlinearity were present; ordinary least squares linear regression could therefore produce large errors in the calculated methotrexate concentration. Generalized least squares regression with variance function estimation outperformed both weighted regression with the usual weight factors and ordinary least squares regression, and gave better estimates of methotrexate concentration.
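A simplified sketch of variance-function-weighted calibration: the response standard deviation is modelled as sigma_i = a + b*yhat_i, estimated from absolute residuals, and the weighted fit is iterated. The straight-line model and data are hypothetical, and the scheme is a stand-in for the paper's full generalized least squares procedure (the nonlinearity it reports is not modelled here).

```python
import numpy as np

def wls_line(x, y, w):
    # Weighted least squares fit of y = b0 + b1*x.
    W = np.diag(w)
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

def gls_variance_function(x, y, n_iter=5):
    w = np.ones_like(y)
    for _ in range(n_iter):
        b0, b1 = wls_line(x, y, w)
        yhat = b0 + b1 * x
        resid = np.abs(y - yhat)
        # Estimate sigma_i = a + b*yhat_i by regressing |residuals| on fits.
        a, b = wls_line(yhat, resid, np.ones_like(y))
        sigma = np.clip(a + b * yhat, 1e-9, None)
        w = 1.0 / sigma ** 2
    return b0, b1

# Hypothetical heteroscedastic calibration data (error grows with level):
rng = np.random.default_rng(2)
conc = np.repeat([0.05, 0.5, 5.0, 50.0, 100.0], 4)
signal = 2.0 * conc + 0.1 + rng.normal(0, 0.02 + 0.05 * conc)
print("intercept, slope:", gls_variance_function(conc, signal))
```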

6.
Huang J, Ma S, Xie H. Biometrics 2006, 62(3):813-820
We consider two regularization approaches, the LASSO and the threshold-gradient-directed regularization, for estimation and variable selection in the accelerated failure time model with multiple covariates based on Stute's weighted least squares method. The Stute estimator uses Kaplan-Meier weights to account for censoring in the least squares criterion. The weighted least squares objective function makes the adaptation of this approach to multiple covariate settings computationally feasible. We use V-fold cross-validation and a modified Akaike's Information Criterion for tuning parameter selection, and a bootstrap approach for variance estimation. The proposed method is evaluated using simulations and demonstrated on a real data example.
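A sketch of the Kaplan-Meier (Stute) weights underlying the weighted least squares criterion: after sorting by observed time, censored observations get weight zero and the mass they would have carried is redistributed to later uncensored observations. The formula and toy data are a generic illustration, not the paper's implementation.

```python
import numpy as np

def stute_km_weights(time, event):
    """time: observed times; event: 1 if uncensored, 0 if censored."""
    order = np.argsort(time, kind="stable")
    d = np.asarray(event, float)[order]
    n = len(d)
    w = np.zeros(n)
    surv = 1.0                           # running Kaplan-Meier survival
    for i in range(n):
        w[i] = surv * d[i] / (n - i)     # mass of the KM jump at this time
        surv *= 1.0 - d[i] / (n - i)
    out = np.zeros(n)
    out[order] = w                       # return in the original order
    return out

time = np.array([2.0, 5.0, 3.0, 7.0, 4.0])
event = np.array([1, 0, 1, 1, 0])
w = stute_km_weights(time, event)
print(w, w.sum())  # sums to 1 here; it is < 1 if the largest time is censored
```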

7.
Methods for robust logistic modeling of batch and fed-batch mammalian cell cultures are presented in this study. Linearized forms of the logistic growth, logistic decline, and generalized logistic equations were derived to obtain initial parameter estimates by linear least squares. These initial estimates facilitated the subsequent determination of refined values by nonlinear optimization using three different algorithms. Data from BHK, CHO, and hybridoma cells in batch or fed-batch cultures at volumes ranging from 100 mL to 300 L were tested with this approach, and solution convergence was obtained with all three nonlinear optimization algorithms for all data sets. Despite the sensitivity of logistic equations to parameter variation due to their exponential nature, this result demonstrated that robust estimation of logistic parameters is possible with this combination of linearization followed by nonlinear optimization. The approach is relatively simple and can be implemented in a spreadsheet to robustly model mammalian cell culture batch or fed-batch data.
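A sketch of the two-stage fit for plain logistic growth: linearise ln(K/X - 1) = a - r*t to get initial estimates by linear least squares, then refine by nonlinear optimization. Seeding K just above the largest observation is an assumption of this sketch, not necessarily the paper's initialisation, and the data are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, a):
    # Logistic growth: X(t) = K / (1 + exp(a - r*t)).
    return K / (1.0 + np.exp(a - r * t))

# Hypothetical batch-culture cell density data:
rng = np.random.default_rng(3)
t = np.linspace(0, 10, 12)
X = logistic(t, 5.0, 0.9, 4.0) * (1 + rng.normal(0, 0.03, t.size))

# Stage 1: linear least squares on the transformed data z = a - r*t.
K0 = 1.05 * X.max()                       # assumed seed for the capacity
z = np.log(K0 / X - 1.0)
slope, intercept = np.polyfit(t, z, 1)
r0, a0 = -slope, intercept

# Stage 2: nonlinear refinement starting from (K0, r0, a0).
popt, _ = curve_fit(logistic, t, X, p0=[K0, r0, a0])
print("K, r, a:", popt)
```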

8.
This paper applies the inverse probability weighted least-squares method to predict total medical cost in the presence of censored data. Since survival time and medical costs may be subject to right censoring and are therefore not always observable, the ordinary least-squares approach cannot be used to assess the effects of explanatory variables. We demonstrate how inverse probability weighted least-squares estimation provides consistent, asymptotically normal coefficients with easily computable standard errors. In addition, to assess the effect of censoring on the coefficients, we develop a test comparing the ordinary least-squares and inverse probability weighted least-squares estimators. We illustrate the methods by applying them to the estimation of cancer costs using Medicare claims data.
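A sketch of the inverse-probability-of-censoring idea on simulated data: uncensored observations are weighted by 1/G(T_i), where G is the Kaplan-Meier estimate of the censoring survival function. The data-generating model is hypothetical, not the Medicare analysis.

```python
import numpy as np

def km_survival(times, events):
    """Kaplan-Meier survival, evaluated just before each observation's time."""
    order = np.argsort(times, kind="stable")
    n = len(times)
    surv_sorted = np.ones(n)
    s = 1.0
    for i, idx in enumerate(order):
        surv_sorted[i] = s                       # S(t-) at this time
        s *= 1.0 - events[idx] / (n - i)
    out = np.empty(n)
    out[order] = surv_sorted
    return out

rng = np.random.default_rng(4)
n = 200
x = rng.normal(size=n)                           # covariate
cost = 10.0 + 3.0 * x + rng.normal(0, 1, n)      # total cost (observed if uncensored)
censor_time = rng.exponential(2.0, n)
follow_up = rng.exponential(1.5, n)
event = (follow_up <= censor_time).astype(float) # 1 = cost fully observed
T = np.minimum(follow_up, censor_time)

G = km_survival(T, 1.0 - event)                  # KM of the *censoring* times
w = event / np.clip(G, 1e-6, None)               # IPW weights (0 if censored)
X = np.column_stack([np.ones(n), x])
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * cost))
print("intercept, slope:", beta)
```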

9.
10.
When we apply ecological models in environmental management, we must assess the accuracy of parameter estimation and its impact on model predictions. Parameters estimated by conventional techniques tend to be nonrobust and require excessive computational resources. However, optimization algorithms are highly robust and generally achieve convergent parameter estimates by inversion with nonlinear models; they can simultaneously generate a large number of parameter estimates using an entire data set. In this study, we tested four inversion algorithms (simulated annealing, shuffled complex evolution, particle swarm optimization, and the genetic algorithm) to optimize parameters in photosynthesis models with different temperature dependencies. We investigated whether parameter boundary values and control variables influenced the accuracy and efficiency of the various algorithms and models. We obtained optimal solutions with all of the inversion algorithms tested when the parameter bounds and control variables were properly constrained, although processing-time efficiency varied with the control variables chosen. In addition, we investigated whether the formalization of temperature dependence affected the parameter estimation, and found that the model with a peaked temperature response provided the best fit to the data.
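A sketch of bounded global inversion. Differential evolution stands in here for the four stochastic algorithms tested, and a Gaussian peak stands in for the photosynthesis model's peaked temperature response; the bounds and data are hypothetical.

```python
import numpy as np
from scipy.optimize import differential_evolution

def peaked(T, p_opt, T_opt, width):
    # Simple peaked temperature response (stand-in model).
    return p_opt * np.exp(-((T - T_opt) / width) ** 2)

rng = np.random.default_rng(5)
T = np.linspace(5, 40, 30)
obs = peaked(T, 20.0, 25.0, 8.0) + rng.normal(0, 0.5, T.size)

def sse(theta):
    return np.sum((peaked(T, *theta) - obs) ** 2)

# Tight, physically sensible bounds matter: overly wide bounds slow
# convergence and can trap stochastic searches in poor regions.
bounds = [(1.0, 50.0), (10.0, 35.0), (1.0, 20.0)]
res = differential_evolution(sse, bounds, seed=5, tol=1e-8)
print("p_opt, T_opt, width:", res.x)
```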

11.
It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and the related decision-making are not a simple one-way sequence but a complex iterative cognitive process. However, the underlying functional mechanisms remain unclear. Based on an optimality approach, a quantitative computational model of one such mechanism is developed in this study. The model assumes that significant uncertainty about task-related parameters of the environment results in parameter estimation errors, and that an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of the parameter estimate and optimization of the control action cannot be performed separately under parameter uncertainty combined with asymmetric estimation error cost, making the certainty equivalence principle inapplicable under those conditions. The hypothesis that not only the action but also perception itself is biased by this deviation of the parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and action planning under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.
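A short numerical sketch of the central point: under a Gaussian belief about a parameter, an asymmetric error cost shifts the cost-optimal estimate away from the maximum-likelihood value. The 4:1 cost asymmetry is an arbitrary choice for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
belief = rng.normal(loc=0.0, scale=1.0, size=200_000)  # parameter uncertainty

def expected_cost(estimate, samples, c_over=4.0, c_under=1.0):
    # Overestimation costs c_over per unit, underestimation c_under per unit.
    err = estimate - samples
    return np.mean(np.where(err > 0, c_over * err, -c_under * err))

grid = np.linspace(-2, 2, 801)
costs = [expected_cost(g, belief) for g in grid]
best = grid[int(np.argmin(costs))]
print("ML estimate: 0.0, cost-optimal estimate: %.3f" % best)
# The optimum sits at the c_under/(c_under + c_over) = 0.2 quantile of the
# belief, about -0.84: systematically biased below the maximum-likelihood value.
```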

12.
Nonlinear (systems of) ordinary differential equations (ODEs) are common tools in the analysis of complex one-dimensional dynamic systems. We propose a smoothing approach regularized by a quasilinearized ODE-based penalty. Within the quasilinearized spline-based framework, the estimation reduces to a conditionally linear problem for the optimization of the spline coefficients. Furthermore, standard criteria for selecting the ODE compliance parameter(s) are applicable. We evaluate the performance of the proposed strategy through simulated and real data examples. Simulation studies suggest that the proposed procedure yields more accurate estimates than standard nonlinear least squares approaches when the state (initial and/or boundary) conditions are not known.

13.
In isothermal titration calorimetry (ITC), the two main sources of random (statistical) error are associated with the extraction of the heat q from the measured temperature changes and with the delivery of metered volumes of titrant. The former leads to uncertainty that is approximately constant and the latter to uncertainty that is proportional to q. The role of these errors in the analysis of ITC data by nonlinear least squares is examined for the case of 1:1 binding, M + X ⇌ MX. The standard errors in the key parameters, the equilibrium constant K° and the enthalpy ΔH°, are assessed from the variance-covariance matrix computed for exactly fitting data. Monte Carlo calculations confirm that these "exact" estimates will normally suffice, and show further that neglect of weights in the nonlinear fitting can result in a significant loss of efficiency. The effects of the titrant volume error depend strongly on assumptions about the nature of this error: if it is random in the integral volume rather than the differential volume, correlated least squares is required for proper analysis, and the parameter standard errors decrease with increasing number of titration steps rather than increase.

14.
Breathing has inherent irregularities that produce breath-to-breath fluctuations ("noise") in pulmonary gas exchange. These impair the precision with which non-steady-state gas exchange kinetics during exercise can be characterized. We quantified the effects of this noise on the confidence of estimating kinetic parameters of the underlying physiological responses, and hence of model discrimination. Five subjects each performed eight transitions from 0 to 100 W on a cycle ergometer. Ventilation, CO2 output, and O2 uptake were computed breath by breath. The eight responses were interpolated uniformly, time aligned, and averaged for each subject, and the kinetic parameters of a first-order model (i.e., the time constant and time delay) were then estimated using three methods: (a) linear least squares, (b) nonlinear least squares, and (c) maximum likelihood. The breath-by-breath noise approximated an uncorrelated Gaussian stochastic process with a standard deviation that was largely independent of metabolic rate. An expression has therefore been derived for the number of square-wave repetitions required for a specified parameter confidence using methods (b) and (c), method (a) being less appropriate for parameter estimation from noisy gas exchange kinetics.
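A back-of-envelope sketch of the repetition calculation: averaging n square-wave transitions shrinks the effective breath-by-breath noise by sqrt(n), so n follows from a single-transition standard error and a target confidence. The single-run values below are hypothetical, not the paper's derived expression.

```python
import numpy as np

se_tau_single = 8.0   # s, std. error of the time constant from one transition
target_se = 2.0       # s, desired precision for tau
n = int(np.ceil((se_tau_single / target_se) ** 2))  # SE scales as 1/sqrt(n)
print("repetitions required:", n)                   # 16
```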

15.
This paper introduces a simple stochastic model for waterfowl movement. After outlining the properties of the model, we focus on parameter estimation. We compare three standard least squares estimation procedures with maximum likelihood (ML) estimates using Monte Carlo simulations. For our model, little is gained by incorporating information about the covariance structure of the process into least squares estimation. In fact, misspecifying the covariance produces worse estimates than ignoring heteroscedasticity and autocorrelation. We also develop a modified least squares procedure that performs as well as ML. We then apply the five estimators to field data and show that differences in the statistical properties of the estimators can greatly affect our interpretation of the data. We conclude by highlighting the effects of density on per capita movement rates.

16.
Nonlinear mixed effects models for repeated measures data
We propose a general, nonlinear mixed effects model for repeated measures data and define estimators for its parameters. The proposed estimators are a natural combination of least squares estimators for nonlinear fixed effects models and maximum likelihood (or restricted maximum likelihood) estimators for linear mixed effects models. We implement Newton-Raphson estimation using previously developed computational methods for nonlinear fixed effects models and for linear mixed effects models. Two examples are presented and the connections between this work and recent work on generalized linear mixed effects models are discussed.

17.
Online estimation of unknown state variables is a key component in the accurate modelling of biological wastewater treatment processes, owing to the lack of reliable online measurement systems. The extended Kalman filter (EKF) algorithm has been widely applied to wastewater treatment processes. However, the series approximations in the EKF algorithm are not valid, because biological wastewater treatment processes are highly nonlinear and time-varying. This work proposes an alternative online estimation approach using sequential Monte Carlo (SMC) methods for recursive online state estimation of a biological sequencing batch reactor for wastewater treatment. SMC is an algorithm that recursively constructs the posterior probability density of the state variables, with respect to all available measurements, through a random exploration of the states by entities called 'particles'. In this work, the simplified and modified Activated Sludge Model No. 3 with nonlinear biological kinetics is used as the process model, formulated as a dynamic state-space model, and applied to the SMC method. The performance of the SMC method for online state estimation of a biological sequencing batch reactor with online and offline measured data is encouraging. The results indicate that the SMC method could emerge as a powerful tool for solving online state and parameter estimation problems without any model linearization or restrictive assumptions about the type of nonlinear models for biological wastewater treatment processes.
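A minimal bootstrap particle filter, the basic SMC scheme the abstract refers to, on a scalar nonlinear state-space model. The logistic-type dynamics and noise levels are illustrative; they are not Activated Sludge Model No. 3.

```python
import numpy as np

rng = np.random.default_rng(7)
n_steps, n_particles = 50, 1000
q, r = 0.05, 0.1                     # process / measurement noise std

def f(x):                            # nonlinear state transition
    return x + 0.1 * x * (1.0 - x)   # logistic-type growth

# Simulate a "true" trajectory and noisy measurements.
x_true = np.empty(n_steps)
x_true[0] = 0.1
for k in range(1, n_steps):
    x_true[k] = f(x_true[k - 1]) + rng.normal(0, q)
y = x_true + rng.normal(0, r, n_steps)

# Bootstrap filter: propagate, weight by likelihood, resample.
particles = rng.normal(0.1, 0.2, n_particles)
est = np.empty(n_steps)
for k in range(n_steps):
    particles = f(particles) + rng.normal(0, q, n_particles)
    logw = -0.5 * ((y[k] - particles) / r) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[k] = np.sum(w * particles)                   # posterior mean
    idx = rng.choice(n_particles, n_particles, p=w)  # multinomial resampling
    particles = particles[idx]

print("RMSE of SMC state estimate:", np.sqrt(np.mean((est - x_true) ** 2)))
```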

18.
19.
Efficient measurement error correction with spatially misaligned data
Association studies in environmental statistics often involve exposure and outcome data that are misaligned in space. A common strategy is to employ a spatial model such as universal kriging to predict exposures at locations with outcome data and then estimate a regression parameter of interest using the predicted exposures. This results in measurement error because the predicted exposures do not correspond exactly to the true values. We characterize the measurement error by decomposing it into Berkson-like and classical-like components. One correction approach is the parametric bootstrap, which is effective but computationally intensive since it requires solving a nonlinear optimization problem for the exposure model parameters in each bootstrap sample. We propose a less computationally intensive alternative termed the "parameter bootstrap" that only requires solving one nonlinear optimization problem, and we also compare bootstrap methods to other recently proposed methods. We illustrate our methodology in simulations and with publicly available data from the Environmental Protection Agency.

20.
We prove that the slope parameter of the ordinary least squares regression of phylogenetically independent contrasts (PICs) conducted through the origin is identical to the slope parameter of generalized least squares (GLS) regression under a Brownian motion model of evolution. This equivalence has several implications: 1. Understanding the structure of the linear model for GLS regression provides insight into when and why phylogeny is important in comparative studies. 2. The limitations of the PIC regression analysis are the same as the limitations of the GLS model. In particular, phylogenetic covariance applies only to the response variable in the regression, and the explanatory variable should be regarded as fixed. Calculation of PICs for explanatory variables should be treated as a mathematical idiosyncrasy of the PIC regression algorithm. 3. Since the GLS estimator is the best linear unbiased estimator (BLUE), the slope parameter estimated using PICs is also BLUE. 4. If the slope is estimated using different branch lengths for the explanatory and response variables in the PIC algorithm, the estimator is no longer the BLUE, so this is not recommended. Finally, we discuss whether and how to accommodate phylogenetic covariance in regression analyses, particularly in relation to the problem of phylogenetic uncertainty. This discussion is presented from both frequentist and Bayesian perspectives.
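A numerical check of the stated equivalence on a tiny three-taxon tree ((A:1,B:1):1,C:2): the slope of the PIC regression through the origin matches the GLS slope computed with the Brownian-motion covariance (shared branch lengths). The trait values are arbitrary random numbers.

```python
import numpy as np

V = np.array([[2.0, 1.0, 0.0],    # BM covariance: shared path lengths to root
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])

def pics(z):
    """Felsenstein's contrasts for the tree ((A:1,B:1):1,C:2), z = (A, B, C)."""
    c1 = (z[0] - z[1]) / np.sqrt(1.0 + 1.0)       # A vs B
    anc = (z[0] + z[1]) / 2.0                     # ancestral value of (A,B)
    c2 = (anc - z[2]) / np.sqrt(1.0 + 0.5 + 2.0)  # branch lengthened by 1*1/(1+1)
    return np.array([c1, c2])

rng = np.random.default_rng(8)
x, y = rng.normal(size=3), rng.normal(size=3)

u, v = pics(x), pics(y)
slope_pic = np.sum(u * v) / np.sum(u * u)         # regression through the origin

X = np.column_stack([np.ones(3), x])
Vi = np.linalg.inv(V)
beta = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)  # GLS with intercept
print("PIC slope:", slope_pic, " GLS slope:", beta[1])  # identical
```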
