Similar Articles
20 similar articles found (search time: 31 ms)
1.
A decentralized feedback control scheme is proposed to synchronize linearly coupled identical neural networks with time-varying delay and parameter uncertainties. A sufficient condition for synchronization is developed in this article by carefully investigating the uncertain nonlinear synchronization error dynamics. A procedure for designing a decentralized synchronization controller is proposed using the linear matrix inequality (LMI) technique. The designed controller drives the synchronization error to zero and overcomes disruption caused by system uncertainty and external disturbance.
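The article's controller design rests on an LMI feasibility problem. A minimal related check, under assumed closed-loop error dynamics (the matrix A below is illustrative, not taken from the article), solves the continuous Lyapunov equation A^T P + P A = -Q and verifies that P is positive definite, which certifies that the synchronization error decays to zero:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical closed-loop error dynamics e' = A e; A is an assumption,
# not the networks from the article.
A = np.array([[-2.0, 1.0],
              [0.5, -3.0]])
Q = np.eye(2)

# Solve A^T P + P A = -Q; a positive definite P proves the error system
# is globally asymptotically stable (i.e., the networks synchronize).
P = solve_continuous_lyapunov(A.T, -Q)
stable = bool(np.all(np.linalg.eigvalsh(P) > 0.0))
print(stable)
```

The full LMI design (with delay and uncertainty terms) would be posed as a semidefinite program; this sketch only shows the underlying stability certificate.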

2.
This article considers parameter estimation of multi-fiber family models for the biaxial mechanical behavior of passive arteries in the presence of measurement errors. First, the uncertainty propagation due to errors in variables is carefully characterized using the constitutive model. Then, the parameter estimation of the artery model is formulated as a nonlinear least squares optimization with an appropriately chosen weight derived from the uncertainty model. The proposed technique is evaluated using multiple sets of synthesized data with fictitious measurement noise. The estimation results are compared with those of conventional nonlinear least squares optimization without a proper weight factor. The proposed method significantly improves the quality of parameter estimation as the amplitude of the errors in variables becomes larger. We also investigate model selection criteria for deciding the optimal number of fiber families in the multi-fiber family model with respect to the experimental data, balancing variance and bias errors.
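The core idea of weighted nonlinear least squares can be sketched in a few lines. The exponential model, data, and per-point error levels below are illustrative assumptions (the article's artery model is not reproduced here); the point is that each residual is scaled by the inverse of its assumed measurement uncertainty:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy model y = a*exp(b*x); names, data and noise model are assumptions.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
a_true, b_true = 2.0, 1.5
sigma = 0.05 * (1.0 + x)            # assumed per-point measurement error
y = a_true * np.exp(b_true * x) + rng.normal(0.0, sigma)

def residuals(theta):
    a, b = theta
    # Weighted residuals: each point is scaled by 1/sigma_i, so noisier
    # points contribute less to the fit.
    return (a * np.exp(b * x) - y) / sigma

fit = least_squares(residuals, x0=[1.0, 1.0])
print(fit.x)   # estimates of (a, b)
```

An unweighted fit would use the raw residuals; the abstract's claim is that the gap between the two grows with the error amplitude.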

3.
The investigation of enzyme kinetics is increasingly important, especially for finding active substances and understanding intracellular behavior. The determination of an enzyme's kinetic parameters is therefore crucial, and a systematic experimental design procedure is necessary to avoid wasting time and resources. The parameter estimation error of a Michaelis-Menten enzyme kinetic process is analysed analytically to reduce the search area, and numerically to locate the optimum for parameter estimation. Analytical analysis of the Fisher information matrix shows that an enzyme feed will not improve the estimation process, whereas substrate feeding with a small volume flow is favorable. Unconstrained and constrained process conditions are considered. If a substrate fed-batch design is used instead of pure batch experiments, the Cramer-Rao lower bound on the variance of the parameter estimation error is reduced, on average, to 82% of the batch value for mu_max and to 60% for K_m.
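The Fisher-information machinery behind such statements can be illustrated for the Michaelis-Menten rate v = mu_max*S/(Km + S) with i.i.d. Gaussian noise: the information matrix is built from the parameter sensitivities at the design points, and its inverse bounds the estimator covariance (the parameter values and substrate design below are illustrative):

```python
import numpy as np

# Michaelis-Menten rate v = mu_max * S / (Km + S); values are assumed.
mu_max, Km, sigma = 1.0, 0.5, 0.05
S = np.linspace(0.05, 2.0, 15)        # assumed substrate design points

# Sensitivities of v with respect to (mu_max, Km).
J = np.column_stack([S / (Km + S),
                     -mu_max * S / (Km + S) ** 2])

# Fisher information for i.i.d. Gaussian noise, and the Cramer-Rao
# lower bound on the parameter variances (diagonal of FIM^-1).
FIM = J.T @ J / sigma ** 2
crlb = np.diag(np.linalg.inv(FIM))
print(np.sqrt(crlb))   # lower bounds on the std of (mu_max, Km) estimates
```

Comparing these bounds across candidate feeding profiles is what drives the batch-versus-fed-batch conclusion in the abstract.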

4.
B Steiert  A Raue  J Timmer  C Kreutz 《PloS one》2012,7(7):e40052
Systems biology aims to build quantitative models that address unresolved issues in molecular biology. In order to describe the behavior of biological cells adequately, gene regulatory networks (GRNs) are intensively investigated. As the validity of models built for GRNs depends crucially on the kinetic rates, various methods have been developed to estimate these parameters from experimental data. For this purpose, it is favorable to choose the experimental conditions yielding maximal information. However, existing experimental design principles often rely on unfulfilled mathematical assumptions or become computationally demanding with growing model complexity. To solve this problem, we combined advanced methods for parameter and uncertainty estimation with experimental design considerations. As a showcase, we optimized three simulated GRNs in one of the challenges from the Dialogue for Reverse Engineering Assessment and Methods (DREAM). This article presents our approach, which was awarded best performing procedure at the DREAM6 Estimation of Model Parameters challenge. For fast and reliable parameter estimation, local deterministic optimization of the likelihood was applied. We analyzed identifiability and precision of the estimates by calculating the profile likelihood. Furthermore, the profiles provided a way to uncover a selection of the most informative experiments, from which the optimal one was chosen using additional criteria at every step of the design process. In conclusion, we provide a strategy for optimal experimental design and show its successful application to three highly nonlinear dynamic models. Although presented in the context of the GRNs to be inferred for the DREAM6 challenge, the approach is generic and applicable to most types of quantitative models in systems biology and other disciplines.
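A profile likelihood is computed by fixing one parameter on a grid and re-optimizing all others at each grid point; the curvature and shape of the resulting profile reveal identifiability. A minimal sketch with a toy two-parameter decay model (the model, data, and grid are assumptions, not the DREAM6 networks):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy model y = a*exp(-b*t) + noise; all values here are illustrative.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 25)
y = 2.0 * np.exp(-1.0 * t) + rng.normal(0.0, 0.05, t.size)

def sse(a, b):
    return np.sum((a * np.exp(-b * t) - y) ** 2)

# Profile over b: for each fixed b, re-optimize the nuisance parameter a.
b_grid = np.linspace(0.5, 1.5, 41)
profile = [minimize_scalar(lambda a: sse(a, b), bounds=(0.1, 5.0),
                           method="bounded").fun for b in b_grid]
b_hat = b_grid[int(np.argmin(profile))]
print(b_hat)   # near the true value 1.0
```

A flat profile would indicate a practically non-identifiable parameter; in the article, the flat directions also point to which new experiment would be most informative.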

5.
Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.

6.
We present a stochastic programming framework for finding the optimal vaccination policy for controlling infectious disease epidemics under parameter uncertainty. Stochastic programming is a popular framework for including the effects of parameter uncertainty in a mathematical optimization model. The problem is initially formulated to find the minimum cost vaccination policy under a chance-constraint. The chance-constraint requires that the probability that R(*)
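Although the abstract is truncated, the chance-constraint idea is standard: find the cheapest vaccination coverage such that the effective reproduction number stays below 1 with high probability under an uncertain R0. A minimal simulation-based sketch (the lognormal R0 distribution, the 0.95 confidence level, and the all-or-nothing vaccine model are all our assumptions):

```python
import numpy as np

# Choose the smallest coverage v with P(R_eff < 1) >= 0.95 under an
# uncertain R0. Distribution and vaccine model are illustrative.
rng = np.random.default_rng(2)
R0 = rng.lognormal(mean=np.log(1.8), sigma=0.2, size=10_000)

def violation_prob(v):
    # R_eff = R0 * (1 - v) for a perfect all-or-nothing vaccine.
    return np.mean(R0 * (1.0 - v) >= 1.0)

coverages = np.linspace(0.0, 1.0, 201)
v_min = next(v for v in coverages if violation_prob(v) <= 0.05)
print(v_min)   # minimum coverage meeting the chance constraint
```

A full stochastic program would also weigh vaccination cost against outbreak cost; this sketch shows only the chance-constraint mechanics.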

7.
Halley (2003) proposed that parameter drift decreases the uncertainty in long-range extinction risk estimates, because drift mitigates the extreme sensitivity of estimated risk to estimated mean growth rate. However, parameter drift has a second, opposing effect: it increases the uncertainty in parameter estimates from a given data set. When both effects are taken into account, parameter drift can increase, sometimes substantially, the uncertainty in risk estimates. The net effect depends sensitively on the type of drift and on which model parameters must be estimated from observational data on the population at risk. In general, unless many parameters are estimated from independent data, parameter drift increases the uncertainty in extinction risk. These findings suggest that more mechanistic PVA models, using long-term data on key environmental variables and experiments to quantify their demographic impacts, offer the best prospects for escaping the high data requirements when extinction risk is estimated from observational data.

8.
Results of product assessments are often criticised for their handling of uncertainty. It is therefore necessary to develop a comprehensive methodology reflecting parameter uncertainty in combination with uncertainty due to choices in the outcome of LCAs. This paper operationalises the effect of combined parameter uncertainties in the inventory and in the characterisation factors for global warming and acidification for the comparison of two exemplary types of roof gutters. For this purpose, Latin Hypercube sampling is used in the matrix (inventory) method. To illustrate the influence of choices, the effect of two different allocation procedures in open-loop recycling and of three time horizons for global warming potentials on LCA outcomes is shown. Furthermore, an uncertainty importance analysis is performed to show which parameter uncertainties contribute most to uncertainties in the comparison and in the separate environmental profiles of the product systems. These results can be used to prioritise further data research.
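Latin Hypercube sampling stratifies each input dimension so that far fewer samples are needed than with plain Monte Carlo. A minimal propagation sketch (the two parameters, their ranges, and the toy characterization model are illustrative, not the roof-gutter inventory):

```python
import numpy as np
from scipy.stats import qmc

# Latin Hypercube sample of two uncertain inventory parameters; names
# and ranges are assumptions for illustration.
sampler = qmc.LatinHypercube(d=2, seed=0)
unit = sampler.random(n=1000)
# Scale to assumed ranges: emission factor and energy use per unit.
lo, hi = [0.8, 10.0], [1.2, 14.0]
samples = qmc.scale(unit, lo, hi)

# Propagate through a toy characterization: GWP = factor * energy.
gwp = samples[:, 0] * samples[:, 1]
print(np.percentile(gwp, [2.5, 97.5]))   # uncertainty interval for GWP
```

Repeating this under each allocation choice or time horizon separates choice-driven spread from parameter-driven spread, as the paper does.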

9.

Background

The dynamics of biochemical networks can be modelled by systems of ordinary differential equations. However, these networks are typically large and contain many parameters. Therefore model reduction procedures, such as lumping, sensitivity analysis and time-scale separation, are used to simplify models. Although there are many different model reduction procedures, the evaluation of reduced models is difficult and depends on the parameter values of the full model. There is a lack of criteria for evaluating reduced models when the model parameters are uncertain.

Results

We developed a method to compare reduced models and select the model that results in dynamics and uncertainty similar to those of the original model. We simulated different parameter sets from the assumed parameter distributions. Then, we compared all reduced models for all parameter sets using cluster analysis. The clusters revealed which of the reduced models were similar to the original model in dynamics and variability. This allowed us to select the smallest reduced model that best approximated the full model. Through examples we showed that when parameter uncertainty is large, the model can be reduced further, and when parameter uncertainty is small, models should not be reduced as much.

Conclusions

A method to compare different models under parameter uncertainty is developed. It can be applied to any model reduction method. We also showed that the amount of parameter uncertainty influences the choice of reduced models.
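The comparison step can be sketched with toy models: simulate the "full" model and two reduced candidates over sampled parameter sets, stack all trajectories, and cluster them; a good reduction lands in the same cluster as the full model. The exponential models and sampling ranges below are our illustrative assumptions:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy "full" model and two reduced candidates, simulated over sampled
# parameters; all models here are illustrative stand-ins.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 5.0, 50)
ks = rng.uniform(0.8, 1.2, 20)                 # sampled parameter sets

full     = np.array([np.exp(-k * t) for k in ks])
reduced1 = np.array([np.exp(-k * t) * (1 + 0.01 * t) for k in ks])  # close
reduced2 = np.array([np.exp(-0.5 * k * t) for k in ks])             # poor

# Cluster all trajectories; similarity in dynamics and variability shows
# up as co-membership with the full model's trajectories.
X = np.vstack([full, reduced1, reduced2])
labels = fcluster(linkage(X, method="ward"), t=2, criterion="maxclust")
same_cluster = np.mean(labels[:20] == labels[20:40])
print(same_cluster)   # fraction of reduced1 runs grouped with the full model
```

Here the faithful reduction clusters with the full model while the over-reduced one separates, which is the selection signal the method exploits.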

10.

Background

Ordinary differential equations (ODEs) are often used to understand biological processes. Since ODE-based models usually contain many unknown parameters, parameter estimation is an important step toward deeper understanding of the process. Parameter estimation is often formulated as a least squares optimization problem in which all experimental data points are considered equally important. However, this equal-weight formulation ignores the possible existence of relative importance among different data points and may lead to misleading parameter estimation results. Therefore, we propose to introduce weights that account for the relative importance of different data points when formulating the least squares optimization problem. Each weight is defined by the uncertainty of one data point given the other data points. If one data point can be accurately inferred given the other data, its uncertainty is low and its importance is low. In contrast, if one data point can hardly be inferred from the other data, it contains a large uncertainty and carries more information for estimating parameters.

Results

A G1/S transition model with 6 parameters and with 12 parameters, and a MAPK module with 14 parameters, were used to test the weighted formulation. In each case, evenly spaced experimental data points were used. Weights calculated in these models showed similar patterns: high weights for data points in dynamic regions and low weights for data points in flat regions. We developed a sampling algorithm to evaluate the weighted formulation and demonstrated that it reduced the redundancy in the data. For the G1/S transition model with 12 parameters, we examined unevenly spaced experimental data points, strategically sampled to have more measurement points where the weights were relatively high and fewer where the weights were relatively low. This analysis showed that the proposed weights can be used for designing measurement time points.

Conclusions

Giving a different weight to each data point according to its relative importance compared to other data points is an effective method for improving robustness of parameter estimation by reducing the redundancy in the experimental data.
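The weighting idea can be sketched with a crude stand-in for the article's uncertainty model: predict each point from its neighbors by linear interpolation, and use the residual as a proxy for how much unique information the point carries. The sigmoid curve and grid below are illustrative assumptions:

```python
import numpy as np

# A point that is well predicted by its neighbors (low leave-one-out
# uncertainty) gets a low weight; the data and the interpolation-based
# uncertainty proxy are illustrative.
t = np.linspace(0.0, 10.0, 21)
y = 1.0 / (1.0 + np.exp(-(t - 5.0)))      # sigmoid: flat ends, steep middle

w = np.zeros_like(y)
for i in range(1, len(y) - 1):
    # Predict point i from its two neighbors; the residual stands in
    # for the point's conditional uncertainty.
    y_pred = np.interp(t[i], [t[i - 1], t[i + 1]], [y[i - 1], y[i + 1]])
    w[i] = abs(y[i] - y_pred)
w /= w.max()

# Weights are high in the steep transition region and low in the flat
# tails, matching the pattern reported in the abstract.
print(t[np.argmax(w)])
```

Placing more measurement points where these weights are high is exactly the design heuristic the Results section describes.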

11.
M L Doyle  J H Simmons  S J Gill 《Biopolymers》1990,29(8-9):1129-1135
Examination of binding information in the form of derivative (or finite difference) measurements is explored (1) experimentally by a thin-layer optical procedure (Dolman, D. & Gill, S. J. (1978) Anal. Biochem. 87, 127-134) and (2) theoretically by simulation, in order to determine the influence of the number of data points and their standard error upon the resolvability of binding parameters in cooperative and non-cooperative systems. The data are described by the difference in optical absorbance divided by the change in the logarithm of the ligand activity, and each data point is assumed to be influenced by a random error with a given variance. It is found that increasing the number of data points, which in turn reduces the magnitude of the observed absorbance changes, increases the uncertainty of the resolved parameters of the system. The effect is verified by both experimental and simulation studies. One is thus led to conclude that fewer measurements of larger absorbance changes produce the most favorable situation for parameter resolution when the data are in the form of finite difference measurements.
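The mechanism is easy to demonstrate numerically: splitting the same titration into more finite-difference steps shrinks each absorbance change, while the error of each reading stays fixed, so the signal-to-noise ratio of the difference data falls. The binding curve and error level below are illustrative:

```python
import numpy as np

# Toy binding curve and an assumed per-reading error; slicing the curve
# into more steps degrades the SNR of the finite differences.
sigma = 0.002                                # assumed std of one reading
log_a = np.linspace(-2.0, 2.0, 201)
absorbance = 1.0 / (1.0 + np.exp(-log_a))    # toy binding curve

snrs = []
for n_points in (5, 20, 80):
    idx = np.linspace(0, log_a.size - 1, n_points).astype(int)
    dA = np.diff(absorbance[idx])
    # A difference of two noisy readings has std sqrt(2)*sigma.
    snrs.append(np.mean(np.abs(dA)) / (np.sqrt(2) * sigma))
print([round(s, 1) for s in snrs])   # SNR falls as points are added
```

This is the quantitative content behind the abstract's recommendation of fewer, larger difference measurements.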

12.
Mathematical models in biology are powerful tools for the study and exploration of complex dynamics. Nevertheless, bringing theoretical results to an agreement with experimental observations involves acknowledging a great deal of uncertainty intrinsic to our theoretical representation of a real system. Proper handling of such uncertainties is key to the successful usage of models to predict experimental or field observations. This problem has been addressed over the years by many tools for model calibration and parameter estimation. In this article we present a general framework for uncertainty analysis and parameter estimation that is designed to handle uncertainties associated with the modeling of dynamic biological systems while remaining agnostic as to the type of model used. We apply the framework to fit an SIR-like influenza transmission model to 7 years of incidence data in three European countries: Belgium, the Netherlands and Portugal.
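The basic calibration loop for an SIR-type model can be sketched as an ODE solve inside a least squares fit. The parameter values and synthetic data below are our assumptions (and prevalence stands in for the incidence data used in the article):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Minimal SIR calibration sketch; beta/gamma and the synthetic data are
# illustrative, not the influenza data from the article.
def sir(t, y, beta, gamma):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

t_obs = np.linspace(0.0, 30.0, 31)
sol = solve_ivp(sir, (0.0, 30.0), [0.99, 0.01, 0.0],
                t_eval=t_obs, args=(0.5, 0.2), rtol=1e-6)
rng = np.random.default_rng(4)
data = sol.y[1] + rng.normal(0.0, 0.002, t_obs.size)  # noisy prevalence

def residuals(theta):
    beta, gamma = theta
    s = solve_ivp(sir, (0.0, 30.0), [0.99, 0.01, 0.0],
                  t_eval=t_obs, args=(beta, gamma), rtol=1e-6)
    return s.y[1] - data

fit = least_squares(residuals, x0=[0.4, 0.1])
print(fit.x)   # estimates of (beta, gamma)
```

The article's framework goes further, wrapping such a fit with uncertainty analysis that is agnostic to the underlying model.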

13.
In this study, an ADM1-based distributed parameter model was validated using experimental results obtained in a laboratory-scale 10 L UASB reactor. Sensitivity analysis of the model parameters was used to select four parameters for estimation by a numerical procedure while other parameters were accepted from ADM1 benchmark simulations. The parameter estimation procedure used measurements of liquid phase components obtained at different sampling points in the reactor and under different operating conditions. Model verification used real time fluorescence-based measurements of chemical oxygen demand and volatile fatty acids at four sampling locations in the reactor. Overall, the distributed parameter model was able to describe the distribution of liquid phase components in the reactor and adequately simulated the effect of external recirculation on degradation efficiency. The model can be used in the design, analysis and optimization of UASB reactors.

14.
An experimental design to estimate the parameters in a Monod-type equation from batch culture data was examined. Consideration was given to the design of experiments to estimate accurate values of the parameters. Sequential experimental design with the information index was used for this purpose. With this approach the standard deviation of the parameter values was reduced using simulated batch culture data.

15.
Summary
An experiment is described whose results indicate that impulse rate is the information-carrying parameter of impulse trains. The demodulation of such information occurs by low-pass filtering. This conceptual model is compatible with all the features of synaptic transmission. The implications of this model for the transmission of information in integrative neurons are discussed.

16.
Purpose

Objective uncertainty quantification (UQ) of a product life-cycle assessment (LCA) is a critical step for decision-making. Environmental impacts can be measured directly or estimated with models. Underlying mathematical functions describe a model that approximates the environmental impacts during various LCA stages. In this study, three possible uncertainty sources of a mathematical model, i.e., input variability, model parameter (distinguished from input in this study), and model-form uncertainties, were investigated. A simple and easy-to-implement method is proposed to quantify each source.

Methods

Various data analytics methods were used to conduct a thorough model uncertainty analysis: (1) interval analysis was used for input uncertainty quantification. Direct sampling using Monte Carlo (MC) simulation was used for the interval analysis, and results were compared with those of indirect nonlinear optimization as an alternative approach. A machine learning surrogate model was developed to perform both the direct MC sampling and the indirect nonlinear optimization. (2) Bayesian inference was adopted to quantify parameter uncertainty. (3) A recently introduced model correction method based on orthogonal polynomial basis functions was used to evaluate the model-form uncertainty. The methods were applied to a pavement LCA to propagate uncertainties through an energy and global warming potential (GWP) estimation model, using the case of a pavement section in the Chicago metropolitan area.

Results and discussion

Results indicate that each uncertainty source contributes to the overall energy and GWP output of the LCA. Input uncertainty was shown to have a significant impact on the overall GWP output; for the example case study, the GWP interval was around 50%. Parameter uncertainty results showed that an assumed ±10% uniform variation in the model parameter priors resulted in 28% variation in the GWP output. Model-form uncertainty had the lowest impact (less than 10% variation in the GWP), because the original energy model is relatively accurate in estimating the energy. However, a sensitivity study of the model-form uncertainty showed that up to 180% variation in the results can arise from lower original model accuracies.

Conclusions

Investigating each uncertainty source of the model indicated the importance of the accurate characterization, propagation, and quantification of uncertainty. The outcome of this study proposed independent and relatively easy to implement methods that provide robust grounds for objective model uncertainty analysis for LCA applications. Assumptions on inputs, parameter distributions, and model form need to be justified. Input uncertainty plays a key role in overall pavement LCA output. The proposed model correction method as well as interval analysis were relatively easy to implement. Research is still needed to develop a more generic and simplified MCMC simulation procedure that is fast to implement.

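Method (1), interval analysis by direct Monte Carlo sampling, amounts to sampling the input box and recording the output range. A minimal sketch with a toy energy model (the model E = mass * factor, the bounds, and the units are our assumptions, not the pavement case study):

```python
import numpy as np

# Direct MC sampling for interval analysis; toy model and bounds are
# illustrative assumptions.
rng = np.random.default_rng(5)
n = 50_000
mass   = rng.uniform(900.0, 1100.0, n)   # material quantity (assumed)
factor = rng.uniform(0.04, 0.06, n)      # energy factor (assumed)

energy = mass * factor
# Sampled output interval; for this monotone model the exact bounds are
# (900*0.04, 1100*0.06) = (36, 66), which the samples approach.
print(energy.min(), energy.max())
```

The indirect alternative mentioned in the abstract would obtain the same bounds by optimizing the model over the input box rather than sampling.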

17.
The jackknife procedure is introduced as a means of making comparisons among Michaelis-Menten parameter estimates for six different experimental conditions. In addition to providing a solution to the general inter-experimental comparison problem, the jackknife procedure will provide valid parameter estimates even when some of the assumptions usually required for statistical analysis are violated, e.g., the random errors are not normally distributed and the variances are not homogeneous. Other recent variations of the jackknife have also been introduced and briefly investigated: (i) the linear jackknife, which is more efficient computationally, and (ii) the weighted jackknife, which reduces the influence of design points (substrate concentrations) that have an excessive influence on the precision of parameter estimates.
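The jackknife applied to Michaelis-Menten estimates works by refitting with each observation deleted in turn and forming pseudo-values. The substrate design, noise level, and true parameters below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

# Jackknife pseudo-values for Michaelis-Menten parameters; design and
# noise are illustrative.
def mm(S, Vmax, Km):
    return Vmax * S / (Km + S)

rng = np.random.default_rng(6)
S = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0])
v = mm(S, 1.0, 0.5) + rng.normal(0.0, 0.01, S.size)

full, _ = curve_fit(mm, S, v, p0=[1.0, 1.0])
n = S.size
pseudo = []
for i in range(n):
    keep = np.arange(n) != i
    loo, _ = curve_fit(mm, S[keep], v[keep], p0=[1.0, 1.0])
    # i-th pseudo-value: n*theta_hat - (n-1)*theta_hat_(-i)
    pseudo.append(n * full - (n - 1) * loo)
pseudo = np.array(pseudo)

jack = pseudo.mean(axis=0)                     # jackknife estimate
se = pseudo.std(axis=0, ddof=1) / np.sqrt(n)   # jackknife standard error
print(jack, se)
```

The pseudo-value standard errors give distribution-free comparisons across experimental conditions, which is what motivates the procedure in the abstract.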

18.
MOTIVATION: Finding differentially expressed genes is a fundamental objective of a microarray experiment. Numerous methods have been proposed to perform this task. Existing methods are based on point estimates of gene expression level obtained from each microarray experiment. This approach discards potentially useful information about measurement error that can be obtained from an appropriate probe-level analysis. Probabilistic probe-level models can be used to measure gene expression and also provide a level of uncertainty in this measurement. This probe-level measurement error provides useful information which can help in the identification of differentially expressed genes. RESULTS: We propose a Bayesian method to include probe-level measurement error in the detection of differentially expressed genes from replicated experiments. A variational approximation is used for efficient parameter estimation. We compare this approximation with MAP and MCMC parameter estimation in terms of computational efficiency and accuracy. The method is used to calculate the probability of positive log-ratio (PPLR) of expression levels between conditions. Using the measurements from a recently developed Affymetrix probe-level model, multi-mgMOS, we test PPLR on a spike-in dataset and a mouse time-course dataset. Results show that the inclusion of probe-level measurement error improves accuracy in detecting differential gene expression. AVAILABILITY: The MAP approximation and variational inference described in this paper have been implemented in an R package, pplr. The MCMC method is implemented in Matlab. Both implementations are available from http://umber.sbs.man.ac.uk/resources/puma.
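Under a Gaussian approximation, a PPLR-style quantity is simple to compute: given a posterior mean and variance for each condition's log expression level (the variances carrying the probe-level measurement error), the probability that the log-ratio is positive is the normal CDF of the standardized mean difference. The numbers below are illustrative, and this is our simplified reading, not the pplr package's exact computation:

```python
import numpy as np
from scipy.stats import norm

# Illustrative posterior summaries of log expression per condition.
mu1, var1 = 6.2, 0.09    # condition 1: posterior mean, variance
mu2, var2 = 5.8, 0.16    # condition 2

# P(log-ratio > 0) = Phi((mu1 - mu2) / sqrt(var1 + var2)).
pplr = norm.cdf((mu1 - mu2) / np.sqrt(var1 + var2))
print(round(pplr, 3))   # -> 0.788
```

Genes with PPLR near 1 or 0 are confidently up- or down-regulated; values near 0.5 indicate that the measurement error swamps the observed difference.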

19.
Goal, Scope and Background
Decision-makers demand information about the range of possible outcomes of their actions. Therefore, for developing Life Cycle Assessment (LCA) as a decision-making tool, Life Cycle Inventory (LCI) databases should provide uncertainty information. Approaches for incorporating uncertainty should be selected properly, contingent upon the characteristics of the LCI database. For example, in industry-based LCI databases where large amounts of up-to-date process data are collected, statistical methods might be useful for quantifying the uncertainties. However, in practice, there is still a lack of knowledge as to which statistical methods are most effective for obtaining the required parameters. Another concern from the industry's perspective is the confidentiality of the process data. The aim of this paper is to propose a procedure for incorporating uncertainty information with statistical methods in industry-based LCI databases, which at the same time preserves the confidentiality of individual data.
Methods
The proposed procedure for taking uncertainty in industry-based databases into account has two components: continuous probability distributions fitted to scattering unit process data, and rank order correlation coefficients between inventory flows. The type of probability distribution is selected using statistical methods such as goodness-of-fit statistics or experience-based approaches. Parameters of probability distributions are estimated using maximum likelihood estimation. Rank order correlation coefficients are calculated for inventory items in order to preserve data interdependencies. Such probability distributions and rank order correlation coefficients may be used in Monte Carlo simulations in order to quantify uncertainties in LCA results as probability distributions.
Results and Discussion
A case study is performed on the technology selection of polyethylene terephthalate (PET) chemical recycling systems. Three processes are evaluated based on CO2 reduction compared to the conventional incineration technology. To illustrate the application of the proposed procedure, assumptions were made about the uncertainty of LCI flows. The application of the probability distributions and the rank order correlation coefficients is shown, and a sensitivity analysis is performed. A potential use of the results of the hypothetical case study is discussed.
Conclusion and Outlook
The case study illustrates how the uncertainty information in LCI databases may be used in LCA. Since the actual scattering unit process data were not available for the case study, the uncertainty distribution of the LCA result is hypothetical. However, the merit of adopting the proposed procedure has been illustrated: more informed decision-making becomes possible, basing decisions on the significance of the LCA results. With this illustration, the authors hope to encourage both database developers and data suppliers to incorporate uncertainty information in LCI databases.
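Sampling inventory flows with a prescribed rank correlation, as the Methods describe, can be done with a Gaussian copula: draw correlated normals, map them to uniforms, and push the uniforms through the desired marginals. The two lognormal flows, their parameters, and the 0.7 target coefficient below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm, spearmanr

# Two inventory flows sampled with a target rank correlation via a
# Gaussian copula; marginals and the 0.7 coefficient are illustrative.
rng = np.random.default_rng(7)
rho = 0.7
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], cov, size=20_000)
u = norm.cdf(z)                        # correlated uniforms

# Transform to the desired marginals (here: two lognormal flows).
co2 = np.exp(0.5 * norm.ppf(u[:, 0]) + 2.0)
nox = np.exp(0.3 * norm.ppf(u[:, 1]) - 1.0)

r, _ = spearmanr(co2, nox)
print(round(r, 2))   # close to the target rank correlation
```

Because rank correlation is invariant under the monotone marginal transforms, the dependence between flows survives into the Monte Carlo LCA simulation while each flow keeps its own fitted distribution.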

20.
A simple design procedure is proposed that can be used to enhance the performance of a biological cultivation process. The model-supported method starts with a simple model, which is used to design the first experiment, i.e., to calculate a control profile that improves the process performance. The results of the experiment are then used to update the model and subsequently the control profiles. The procedure was first tested on simulated Saccharomyces cerevisiae and Penicillium chrysogenum cultivation processes and then applied in practice to optimize an E. coli cultivation.
