Similar Articles
1.
A key factor contributing to the variability in the microbial kinetic parameters reported from batch assays is parameter identifiability, i.e., the ability of the mathematical routine used for parameter estimation to provide unique estimates of the individual parameter values. This work encompassed a three-part evaluation of the parameter identifiability of intrinsic kinetic parameters describing the Andrews growth model that are obtained from batch assays. First, a parameter identifiability analysis was conducted by visually inspecting the sensitivity equations for the Andrews growth model. Second, the practical retrievability of the parameters in the presence of experimental error was evaluated for the parameter estimation routine used. Third, the results of these analyses were tested using an example data set from the literature for a self-inhibitory substrate. The general trends from these analyses were consistent and indicated that it is very difficult, if not impossible, to simultaneously obtain a unique set of estimates of intrinsic kinetic parameters for the Andrews growth model using data from a single batch experiment.
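
To make the identifiability problem concrete, here is a minimal sketch (Python, with a hypothetical Andrews/Haldane parameter set and simulated noisy data, not the study's data or estimation routine) that fits the three intrinsic parameters to a single batch substrate-depletion curve from two different starting guesses:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def batch_ode(t, y, mu_max, Ks, Ki, Y):
    # y = [S, X]: substrate and biomass in a batch reactor
    S, X = y
    mu = mu_max * S / (Ks + S + S**2 / Ki)    # Andrews (Haldane) growth rate
    return [-mu * X / Y, mu * X]

def simulate(params, t_eval, y0=(100.0, 5.0), Y=0.5):
    mu_max, Ks, Ki = params
    sol = solve_ivp(batch_ode, (0, t_eval[-1]), y0, t_eval=t_eval,
                    args=(mu_max, Ks, Ki, Y), rtol=1e-8)
    return sol.y[0]                           # observed substrate concentration

rng = np.random.default_rng(0)
t = np.linspace(0, 30, 16)
true = (0.5, 2.0, 50.0)                       # hypothetical "intrinsic" values
data = simulate(true, t) + rng.normal(0, 1.0, t.size)

# Fit from two different starting guesses
for guess in [(0.3, 1.0, 100.0), (1.0, 10.0, 20.0)]:
    fit = least_squares(lambda p: simulate(p, t) - data, guess,
                        bounds=([0.01, 0.01, 1.0], [5.0, 50.0, 500.0]))
    print(guess, "->", np.round(fit.x, 2), "  SSE", round(np.sum(fit.fun**2), 2))
```

Widely separated estimates with near-identical residuals are the practical symptom of the identifiability problem described above.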

2.

Background

Translating a known metabolic network into a dynamic model requires reasonable guesses of all enzyme parameters. In Bayesian parameter estimation, model parameters are described by a posterior probability distribution, which scores the potential parameter sets, showing how well each of them agrees with the data and with the prior assumptions made.

Results

We compute posterior distributions of kinetic parameters within a Bayesian framework, based on integration of kinetic, thermodynamic, metabolic, and proteomic data. The structure of the metabolic system (i.e., stoichiometries and enzyme regulation) needs to be known, and the reactions are modelled by convenience kinetics with thermodynamically independent parameters. The parameter posterior is computed in two separate steps: a first posterior summarises the available data on enzyme kinetic parameters; an improved second posterior is obtained by integrating metabolic fluxes, concentrations, and enzyme concentrations for one or more steady states. The data can be heterogeneous, incomplete, and uncertain, and the posterior is approximated by a multivariate log-normal distribution. We apply the method to a model of the threonine synthesis pathway: the integration of metabolic data has little effect on the marginal posterior distributions of individual model parameters. Nevertheless, it leads to strong correlations between the parameters in the joint posterior distribution, which greatly improve the model predictions obtained by subsequent Monte-Carlo simulations.

Conclusion

We present a standardised method to translate metabolic networks into dynamic models. To determine the model parameters, evidence from various experimental data is combined and weighted using Bayesian parameter estimation. The resulting posterior parameter distribution describes a statistical ensemble of parameter sets; the parameter variances and correlations can account for missing knowledge, measurement uncertainties, or biological variability. The posterior distribution can be used to sample model instances and to obtain probabilistic statements about the model's dynamic behaviour.
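
A sketch of this final step, assuming a hypothetical two-parameter enzyme model and a posterior already approximated as a multivariate log-normal (the means, variances, and correlation below are invented for illustration): sample a parameter ensemble and turn it into probabilistic statements about a model output.

```python
import numpy as np

# Posterior of log-parameters (hypothetical values): mean and covariance for
# log(kcat) and log(Km). The positive correlation is the kind of compensating
# structure that integrating steady-state data can introduce.
mu_log  = np.array([np.log(50.0), np.log(0.2)])
cov_log = np.array([[0.30, 0.25],
                    [0.25, 0.30]])

rng = np.random.default_rng(1)
log_samples = rng.multivariate_normal(mu_log, cov_log, size=5000)
kcat, Km = np.exp(log_samples).T              # log-normal parameter ensemble

# Monte-Carlo prediction of a model output, e.g. a Michaelis-Menten rate
S = 0.1                                       # substrate concentration (mM)
E = 1e-3                                      # enzyme concentration (mM)
v = kcat * E * S / (Km + S)

print("median rate:", np.median(v))
print("95% interval:", np.percentile(v, [2.5, 97.5]))
```

Because the parameters are sampled jointly, the predictive interval reflects the posterior correlations rather than the marginals alone, which is the point made in the Results above.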

3.
Models of membrane systems containing immobilized glucose oxidase and catalase operating together or independently have been developed. A rotated disk electrode apparatus was employed with novel electrochemical operating conditions to experimentally determine mass transfer and intrinsic kinetic parameters of enzyme-containing membranes. The value of a mass transfer parameter that describes internal and external diffusion was first determined under conditions that do not permit the enzyme reactions. In a subsequent experiment with the reaction allowed, kinetic parameters corresponding to the intrinsic maximal velocity and Michaelis constants of the immobilized enzymes were estimated by regression analysis of data based on an appropriate two- or three-parameter model. It was found that immobilization reduced the maximal intrinsic velocity but had no detectable effect on the Michaelis constants. In all but one case, these methods for membrane characterization are nondestructive and can be used repeatedly on a given membrane. These techniques provide the means for quantitative comparisons of immobilization methods and make possible temporal studies of immobilized enzyme inactivation.

4.
The modelling of biochemical networks becomes delicate if kinetic parameters are varying, uncertain or unknown. Facing this situation, we quantify uncertain knowledge or beliefs about parameters by probability distributions. We show how parameter distributions can be used to infer probabilistic statements about dynamic network properties, such as steady-state fluxes and concentrations, signal characteristics or control coefficients. The parameter distributions can also serve as priors in Bayesian statistical analysis. We propose a graphical scheme, the 'dependence graph', to bring out known dependencies between parameters, for instance, due to the equilibrium constants. If a parameter distribution is narrow, the resulting distribution of the variables can be computed by expanding them around a set of mean parameter values. We compute the distributions of concentrations, fluxes and probabilities for qualitative variables such as flux directions. The probabilistic framework allows the study of metabolic correlations, and it provides simple measures of variability and stochastic sensitivity. It also shows clearly how the variability of biological systems is related to the metabolic response coefficients.

5.
We use bootstrap simulation to characterize uncertainty in parametric distributions, including Normal, Lognormal, Gamma, Weibull, and Beta, commonly used to represent variability in probabilistic assessments. Bootstrap simulation enables one to estimate sampling distributions for sample statistics, such as distribution parameters, even when analytical solutions are not available. Using a two-dimensional framework for both uncertainty and variability, uncertainties in cumulative distribution functions were simulated. The mathematical properties of uncertain frequency distributions were evaluated in a series of case studies during which the parameters of each type of distribution were varied for sample sizes of 5, 10, and 20. For positively skewed distributions such as Lognormal, Weibull, and Gamma, the range of uncertainty is widest at the upper tail of the distribution. For symmetric unbounded distributions, such as Normal, the uncertainties are widest at both tails of the distribution. For bounded distributions, such as Beta, the uncertainties are typically widest in the central portions of the distribution. Bootstrap simulation enables complex dependencies between sampling distributions to be captured. The effects of uncertainty, variability, and parameter dependencies were studied for several generic functional forms of models, including models in which two-dimensional random variables are added, multiplied, and divided, to show the sensitivity of model results to different assumptions regarding model input distributions, ranges of variability, and ranges of uncertainty and to show the types of errors that may be obtained from mis-specification of parameter dependence. A total of 1,098 case studies were simulated. In some cases, counter-intuitive results were obtained. For example, the point value of the 95th percentile of uncertainty for the 95th percentile of variability of the product of four Gamma or Weibull distributions decreases as the coefficient of variation of each model input increases and, therefore, may not provide a conservative estimate. Failure to properly characterize parameter uncertainties and their dependencies can lead to orders-of-magnitude mis-estimates of both variability and uncertainty. In many cases, the numerical stability of two-dimensional simulation results was found to decrease as the coefficient of variation of the inputs increases. We discuss the strengths and limitations of bootstrap simulation as a method for quantifying uncertainty due to random sampling error.
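
A minimal sketch in the spirit of this approach (hypothetical data; Lognormal case only): bootstrap a small sample, re-fit the distribution to each resample, and read off the uncertainty in an upper percentile.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.lognormal(mean=1.0, sigma=0.6, size=10)    # small data set (n = 10)

B = 2000
p95 = np.empty(B)
for b in range(B):
    resample = rng.choice(sample, size=sample.size, replace=True)
    # Re-fit the Lognormal to each bootstrap resample (log-mean and log-sd)
    mu_hat, sigma_hat = np.log(resample).mean(), np.log(resample).std(ddof=1)
    p95[b] = stats.lognorm.ppf(0.95, s=sigma_hat, scale=np.exp(mu_hat))

point = stats.lognorm.ppf(0.95, s=np.log(sample).std(ddof=1),
                          scale=np.exp(np.log(sample).mean()))
print("95th percentile of variability (point estimate):", round(point, 2))
print("90% uncertainty interval from the bootstrap:", np.round(np.percentile(p95, [5, 95]), 2))
```

The wide interval around an upper percentile estimated from only ten observations illustrates the tail behaviour the study quantifies.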

6.
This paper uses the analysis of a data set to examine a number of issues in Bayesian statistics and the application of MCMC methods. The data concern the selectivity of fishing nets, and logistic regression is used to relate the size of a fish to the probability that it will be retained in or escape from a trawl net. Hierarchical models relate information from different trawls, and posterior distributions are determined using MCMC. Centring the data is shown to radically reduce autocorrelation in the chains, whereas Rao-Blackwellisation and chain-thinning are found to have little effect on parameter estimates. The results of four convergence diagnostics are compared, and the sensitivity of the posterior distribution to the prior distribution is examined using a novel method. Nested models are fitted to the data and compared using intrinsic Bayes factors, pseudo-Bayes factors and credible intervals.
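
To illustrate the centring point, the following toy example (simulated selectivity data and an untuned random-walk Metropolis sampler, not the paper's hierarchical model) compares the lag-1 autocorrelation of the slope chain when the length covariate is used raw versus centred:

```python
import numpy as np

rng = np.random.default_rng(2)
length = rng.uniform(20, 40, size=300)                  # fish length (cm), simulated
p_true = 1 / (1 + np.exp(-(-12.0 + 0.4 * length)))      # hypothetical retention curve
y = rng.binomial(1, p_true)

def log_post(a, b, x):
    # Bernoulli log-likelihood for logit(p) = a + b*x, with a flat prior
    eta = a + b * x
    return np.sum(y * eta - np.log1p(np.exp(eta)))

def metropolis(x, n_iter=5000, step=(0.3, 0.03)):
    a, b, lp = 0.0, 0.0, log_post(0.0, 0.0, x)
    chain = np.empty((n_iter, 2))
    for i in range(n_iter):
        a_new, b_new = a + step[0] * rng.normal(), b + step[1] * rng.normal()
        lp_new = log_post(a_new, b_new, x)
        if np.log(rng.uniform()) < lp_new - lp:         # Metropolis accept/reject
            a, b, lp = a_new, b_new, lp_new
        chain[i] = a, b
    return chain

def lag1_autocorr(z):
    z = z - z.mean()
    return float(np.dot(z[:-1], z[1:]) / np.dot(z, z))

raw     = metropolis(length)[1000:, 1]                  # slope chain, raw covariate
centred = metropolis(length - length.mean())[1000:, 1]  # slope chain, centred covariate
print("lag-1 autocorrelation, raw covariate:    ", round(lag1_autocorr(raw), 3))
print("lag-1 autocorrelation, centred covariate:", round(lag1_autocorr(centred), 3))
```

Centring removes most of the posterior correlation between intercept and slope, which is why the same untuned proposal mixes so much better on the centred parameterisation.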

7.
The classical approach of musculoskeletal modeling is to predict muscle forces and joint torques with a deterministic model constructed from parameters of an average subject. However, this type of model does not perform well for outliers, and does not model the effects of parameter variability. In this study, a Monte-Carlo model was used to stochastically simulate the effects of variability in musculoskeletal parameters on elbow flexion strength in healthy normals, and in subjects with long head biceps (LHB) rupture. The goal was to determine if variability in elbow flexion strength could be quantifiably explained with variability in musculoskeletal parameters. Parameter distributions were constructed from data in the literature. Parameters were sampled from these distributions and used to predict muscle forces and joint torques. The median and distribution of measured joint torque were predicted with small errors (<5%). Muscle forces for both cases were predicted and compared. In order to predict measured torques for the case of LHB rupture, the median force and mean cross-sectional area in the remaining elbow flexor muscles must be greater than in healthy normals. The probabilities that muscle forces for the Tear case exceed median muscle forces for the No-Tear case are 0.98, 0.99 and 0.79 for the SH biceps, brachialis and brachioradialis, respectively. Differences in variability of measured torques for the two cases are explained by differences in parameter variability.
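
A toy Monte-Carlo sketch of this approach; all means, standard deviations, moment arms, and the torque threshold below are placeholders rather than the study's values:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Hypothetical parameter distributions (shared specific tension couples the muscles)
specific_tension = rng.normal(35.0, 5.0, n)               # N/cm^2
pcsa = {                                                   # physiological CSA, cm^2
    "biceps":          rng.normal(4.5, 0.8, n),
    "brachialis":      rng.normal(7.0, 1.2, n),
    "brachioradialis": rng.normal(1.5, 0.3, n),
}
moment_arm = {                                             # m, at 90 deg flexion
    "biceps":          rng.normal(0.040, 0.004, n),
    "brachialis":      rng.normal(0.025, 0.003, n),
    "brachioradialis": rng.normal(0.055, 0.006, n),
}

# Propagate the sampled parameters to an elbow-flexion torque distribution
torque = sum(specific_tension * pcsa[m] * moment_arm[m] for m in pcsa)   # N*m
print("median torque:", round(np.median(torque), 1), "N*m")
print("5th-95th percentile:", np.round(np.percentile(torque, [5, 95]), 1))
print("P(torque > 20 N*m):", round(np.mean(torque > 20), 3))
```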

8.
Turbidity methods offer possibilities for generating data required for addressing microorganism variability in risk modeling given that the results of these methods correspond to those of viable count methods. The objectives of this study were to identify the best approach for determining growth parameters based on turbidity data and use of a Bioscreen instrument and to characterize variability in growth parameters of 34 Staphylococcus aureus strains of different biotypes isolated from broiler carcasses. Growth parameters were estimated by fitting primary growth models to turbidity growth curves or to detection times of serially diluted cultures either directly or by using an analysis of variance (ANOVA) approach. The maximum specific growth rates in chicken broth at 17°C estimated by time-to-detection methods were in good agreement with viable count estimates, whereas growth models (exponential and Richards) underestimated growth rates. Time-to-detection methods were selected for strain characterization. The variation of growth parameters among strains was best described by either the logistic or lognormal distribution, but definitive conclusions require a larger data set. The physiological state parameter ranged from 0.01 to 0.92, and its distribution was not significantly different from a normal distribution. Strain variability was important, and the coefficient of variation of growth parameters was up to six times larger among strains than within strains. It is suggested that a time-to-detection (ANOVA) approach using turbidity measurements be applied for convenient and accurate estimation of growth parameters. The results emphasize the need to consider implications of strain variability for predictive modeling and risk assessment.
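
A sketch of the time-to-detection idea (hypothetical detection threshold, growth rate, and noise level): for exponentially growing serial dilutions, the regression of ln(initial count) on detection time has slope −μmax.

```python
import numpy as np

rng = np.random.default_rng(4)
mu_max = 0.25                  # true maximum specific growth rate (1/h), hypothetical
N_det = 1e7                    # counts/ml at the turbidity detection threshold
N0 = 1e5 / 10.0 ** np.arange(6)            # ten-fold serial dilutions of the inoculum

# Exponential growth: N_det = N0 * exp(mu_max * TTD)  =>  TTD = ln(N_det/N0) / mu_max
ttd = np.log(N_det / N0) / mu_max + rng.normal(0, 0.2, N0.size)   # measurement noise

slope, intercept = np.polyfit(ttd, np.log(N0), 1)
print("estimated mu_max (1/h):", round(-slope, 3))
```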

9.
The intrinsic population growth rate (r) of the surplus production function used in the biomass dynamic model and the steepness (h) of the stock-recruitment relationship used in age-structured population dynamics models are two key parameters in fish stock assessment. There is generally insufficient information in the data to estimate these parameters, which therefore have to be constrained. We developed methods to directly estimate the probability distributions of r and h for the Atlantic bluefin tuna (Thunnus thynnus, Scombridae), using all available biological and ecological information. We examined the existing literature to define appropriate probability distributions of key life history parameters associated with intrinsic growth rate and steepness, paying particular attention to the natural mortality for early life history stages. The estimated probability distribution of the population intrinsic growth rate was weakly informative, with an estimated mean r = 0.77 (±0.53) and an interquartile range of (0.34, 1.12). The estimated distribution of h was more informative, but also strongly asymmetric, with an estimated mean h = 0.89 (±0.20) and a median of 0.99. We note that these two key demographic parameters strongly depend on the distribution of early life history mortality rate (M0), which is known to exhibit high year-to-year variations. This variability results in a widely spread distribution of M0 that affects the distribution of the intrinsic population growth rate and further makes the spawning stock biomass an inadequate proxy to predict recruitment levels.
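
A heavily simplified Monte-Carlo sketch of how a distribution of early-life mortality (M0) propagates into a distribution of r; every life-history number below is a placeholder, not a bluefin tuna estimate, and the age schedule is deliberately crude:

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(5)
ages = np.arange(1, 21)                     # ages 1..20
M_adult, age_mat, fecundity = 0.2, 4, 5e6   # placeholder adult mortality, maturity, eggs

def intrinsic_r(M0):
    # Survivorship to age x: egg-to-age-1 survival exp(-M0), then exp(-M_adult) per year
    lx = np.exp(-M0) * np.exp(-M_adult * (ages - 1))
    mx = np.where(ages >= age_mat, fecundity, 0.0)
    euler_lotka = lambda r: np.sum(np.exp(-r * ages) * lx * mx) - 1.0
    return brentq(euler_lotka, -2.0, 3.0)   # solve the Euler-Lotka equation for r

M0_samples = rng.normal(14.0, 1.0, 2000)    # placeholder distribution of M0 (1/yr)
r_samples = np.array([intrinsic_r(m0) for m0 in M0_samples])
print("mean r:", round(r_samples.mean(), 2), "+/-", round(r_samples.std(), 2))
print("interquartile range:", np.round(np.percentile(r_samples, [25, 75]), 2))
```

Even a modest spread in M0 translates into a broad distribution of r, which is the mechanism the abstract highlights.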

10.
A continuous distribution approach for determining quencher concentration in a heterogeneous system has been developed as an alternative to the traditional mono- and multiexponential analysis. A mathematical model of phosphorescence decay inside a volume with homogeneous concentration of phosphor and heterogeneous concentration of quencher was formulated to obtain pulse-response fitting functions for four different distributions of quencher concentration: rectangular, normal (Gaussian), gamma, and multimodal. The analysis was applied to estimate the parameters of a heterogeneous distribution of oxygen tension (PO2) within a volume. Simulated phosphorescence decay data were randomly generated for different distributions and heterogeneity of PO2 inside the excitation/emission volume, consisting of 200 domains, and then fit with equations developed for the four models. Analysis using a monoexponential fit yielded a systematic error (underestimate) in mean PO2 that increased with the degree of heterogeneity. The fitting procedures based on the continuous distribution approach returned more accurate values for parameters of the generated PO2 distribution than did the monoexponential fit. The parameters of the fit (M = mean; sigma = standard deviation) were investigated as a function of signal-to-noise ratio (SNR = maximum signal amplitude/peak-to-peak noise). The best-fit parameter values were stable when SNR ≥ 20. All four fitting models returned accurate values of M and sigma for different PO2 distributions. The ability of our procedures to resolve two different heterogeneous compartments was also demonstrated using a bimodal fitting model. An approximate scheme was formulated to allow calculation of the first moments of a spatial distribution of quencher without specifying the distribution. In addition, a procedure for the recovery of a histogram, representing the quencher concentration distribution, was developed and successfully tested.
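
A sketch of the normal (Gaussian) case, assuming Stern-Volmer quenching k(PO2) = k0 + kq·PO2 with hypothetical rate constants: averaging the exponential decay over a normal PO2 distribution gives a closed-form fitting function from which M and sigma can be recovered by least squares.

```python
import numpy as np
from scipy.optimize import curve_fit

k0, kq = 1667.0, 300.0         # 1/s and 1/(mmHg*s); hypothetical rate constants
t = np.linspace(0, 300e-6, 200)             # 0-300 microseconds
rng = np.random.default_rng(6)

# Simulate a heterogeneous volume: 200 domains with PO2 ~ Normal(40, 10) mmHg
po2 = rng.normal(40.0, 10.0, 200)
decay = np.exp(-(k0 + kq * po2)[:, None] * t[None, :]).mean(axis=0)
decay += rng.normal(0, 0.005, t.size)       # measurement noise

def gaussian_model(t, A, M, sigma):
    # Closed-form average of exp(-(k0 + kq*P)*t) over P ~ Normal(M, sigma^2)
    return A * np.exp(-(k0 + kq * M) * t + 0.5 * (kq * sigma * t) ** 2)

(A, M, sigma), _ = curve_fit(gaussian_model, t, decay, p0=(1.0, 60.0, 5.0))
print("fitted mean PO2:", round(M, 1), "mmHg;  fitted sigma:", round(abs(sigma), 1), "mmHg")
```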

11.
Simulated neural impulse trains were generated by a digital realization of the integrate-and-fire model. The variability in these impulse trains had as its origin a random noise of specified distribution. Three different distributions were used: the normal (Gaussian) distribution (no skew, normokurtic), a first-order gamma distribution (positive skew, leptokurtic), and a uniform distribution (no skew, platykurtic). Despite these differences in the distribution of the variability, the distributions of the intervals between impulses were nearly indistinguishable. These inter-impulse distributions were better fit with a hyperbolic gamma distribution than a hyperbolic normal distribution, although one might expect a better approximation for normally distributed inverse intervals. Consideration of why the inter-impulse distribution is independent of the distribution of the causative noise suggests two putative interval distributions that do not depend on the assumed noise distribution: the log normal distribution, which is predicated on the assumption that long intervals occur with the joint probability of small input values, and the random walk equation, which is the diffusion equation applied to a random walk model of the impulse generating process. Either of these equations provides a more satisfactory fit to the simulated impulse trains than the hyperbolic normal or hyperbolic gamma distributions. These equations also provide better fits to impulse trains derived from the maintained discharges of ganglion cells in the retinae of cats or goldfish. It is noted that both equations are free from the constraint that the coefficient of variation (CV) have a maximum of unity. The concluding discussion argues against the random walk equation because it embodies a constraint that is not valid, and because it implies specific parameters that may be spurious.
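
A small simulation in the same spirit (hypothetical threshold and input statistics): a discrete-time integrate-and-fire generator driven by normal, first-order gamma, and uniform noise with matched mean and variance, compared through their inter-impulse interval statistics.

```python
import numpy as np

rng = np.random.default_rng(7)
mean_in, sd_in, threshold, n_steps = 1.0, 1.0, 20.0, 200_000

noises = {
    "normal":  lambda n: rng.normal(mean_in, sd_in, n),
    "gamma":   lambda n: rng.gamma(1.0, 1.0, n),             # first-order gamma: mean 1, sd 1
    "uniform": lambda n: rng.uniform(mean_in - np.sqrt(3) * sd_in,
                                     mean_in + np.sqrt(3) * sd_in, n),
}

for name, draw in noises.items():
    x = draw(n_steps)
    v, intervals, last_spike = 0.0, [], 0
    for i, xi in enumerate(x):
        v += xi                                # integrate the noisy input
        if v >= threshold:                     # fire and reset
            intervals.append(i - last_spike)
            last_spike, v = i, 0.0
    intervals = np.array(intervals)
    print(f"{name:7s}  mean ISI {intervals.mean():5.1f}   CV {intervals.std()/intervals.mean():.3f}")
```

The near-identical interval statistics across the three noise distributions mirror the paper's central observation.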

12.
Models of neocortical networks are increasingly including the diversity of excitatory and inhibitory neuronal classes. Significant variability in cellular properties is also seen within a nominal neuronal class, and this heterogeneity can be expected to influence the population response and information processing in networks. Recent studies have examined the population and network effects of variability in a particular neuronal parameter with some plausibly chosen distribution. However, the empirical variability and covariance seen across multiple parameters are rarely included, partly due to the lack of data on parameter correlations in forms convenient for model construction. To address this, we quantify the heterogeneity within and between the neocortical pyramidal-cell classes in layers 2/3, 4, and the slender-tufted and thick-tufted pyramidal cells of layer 5 using a combination of intracellular recordings, single-neuron modelling and statistical analyses. From the response to both square-pulse and naturalistic fluctuating stimuli, we examined the class-dependent variance and covariance of electrophysiological parameters and identified the role of the h current in generating parameter correlations. A byproduct of the dynamic I-V method we employed is the straightforward extraction of reduced neuron models from experiment. Empirically, these models took the refractory exponential integrate-and-fire form and provided an accurate fit to the perisomatic voltage responses of the diverse pyramidal-cell populations when the class-dependent statistics of the model parameters were respected. By quantifying the parameter statistics we obtained an algorithm which generates populations of model neurons, for each of the four pyramidal-cell classes, that adhere to experimentally observed marginal distributions and parameter correlations. As well as providing this tool, which we hope will be of use for exploring the effects of heterogeneity in neocortical networks, we also provide the code for the dynamic I-V method and make the full electrophysiological data set available.
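
A sketch of the generator idea (hypothetical parameter names, means, spreads, and correlations, with parameters handled in log-space so the marginals are log-normal): draw a population of model-neuron parameters that respects both the marginal distributions and the inter-parameter correlations.

```python
import numpy as np

# Hypothetical class statistics for three reduced-model parameters:
# capacitance C (nF), leak conductance gL (nS), refractory period tau_ref (ms).
names = ["C", "gL", "tau_ref"]
mean_log = np.log([0.25, 10.0, 4.0])
sd_log = np.array([0.3, 0.4, 0.2])
corr = np.array([[1.0, 0.6, 0.1],
                 [0.6, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])            # hypothetical parameter correlations
cov_log = corr * np.outer(sd_log, sd_log)

rng = np.random.default_rng(8)
population = np.exp(rng.multivariate_normal(mean_log, cov_log, size=1000))

# Check that the generated population reproduces the target statistics
print("sample medians:", dict(zip(names, np.round(np.median(population, axis=0), 2))))
print("sample C-gL log-correlation:",
      round(np.corrcoef(np.log(population[:, 0]), np.log(population[:, 1]))[0, 1], 2))
```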

13.
The ability to correct parameters of voltage-gated conductances measured under poor spatial control by point voltage clamp could rescue much flawed experimental data. We explore a strategy for correcting errors in experiments that employs a full-trace approach to parameter determination. Simulated soma voltage-clamp runs are made on a model neuron with a single voltage-gated, Hodgkin-Huxley channel type distributed uniformly along an elongate process. Estimates for both kinetic and I(V) parameters are obtained by fitting a form of the Hodgkin-Huxley equations to the complete time course of leak-subtracted current curves. The fitted parameters are used to determine how much correction in each parameter is needed to regenerate the set actually belonging to the channel. Corrections are generated for a range of neurite lengths, conductance densities, and channel characteristics.

14.
For the application of immobilized enzymes, the influence of immobilization on the activity of the enzyme should be known. This influence can be assessed by determining the intrinsic kinetic parameters of the immobilized enzyme, and by comparing them with the kinetic parameters of the suspended enzyme. This article deals with the determination of the intrinsic kinetic parameters of an oxygen-consuming enzyme immobilized in agarose-gel beads: L-lactate 2-monooxygenase. The reaction rate of the enzyme can be described by Michaelis-Menten kinetics. Batch conversion experiments using a biological oxygen monitor, as well as steady-state profile measurements within the biocatalyst particles using an oxygen microsensor, were performed. Two different mathematical methods were used for the batch conversion experiments, both assuming a pseudosteady-state situation with respect to the shape of the profile inside the bead. One of the methods used an approximate relation for the effectiveness factor for Michaelis-Menten kinetics which interpolates between the analytical solutions for zero- and first-order kinetics. The other mathematical method was based on a numerical solution and combined a mass balance over the reactor with a mass balance over the bead. The main difference in the application of the two methods is the computer calculation time; the completely numerical calculation procedure was about 20 times slower than the other calculation procedure. The intrinsic kinetic parameters resulting from both experimental methods were compared to check the reliability of the methods. There was no significant difference in the intrinsic kinetic parameters obtained from the two experimental methods. By comparison of the kinetic parameters for the suspended enzyme with the intrinsic kinetic parameters for the immobilized enzyme, it appeared that immobilization caused a decrease in the value of Vm by a factor of 2, but there was no significant difference in the values obtained for Km.
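
For the first-order limit mentioned above, the effectiveness factor of a spherical bead has the classical closed form η = (3/φ²)(φ·coth φ − 1), with Thiele modulus φ = R·√(k/De). A small sketch (hypothetical bead radius, rate constant, and diffusivity) of how internal diffusion depresses the observed rate relative to the intrinsic one:

```python
import numpy as np

def effectiveness_factor_sphere(R, k, De):
    """First-order effectiveness factor for a spherical particle."""
    phi = R * np.sqrt(k / De)                 # Thiele modulus
    return (3.0 / phi**2) * (phi / np.tanh(phi) - 1.0)

R  = 1.5e-3        # bead radius (m), hypothetical
De = 2.0e-9        # effective oxygen diffusivity in the gel (m^2/s), hypothetical
for k in [0.01, 0.1, 1.0, 10.0]:              # intrinsic first-order rate constant (1/s)
    eta = effectiveness_factor_sphere(R, k, De)
    print(f"k = {k:5.2f} 1/s   phi = {R * np.sqrt(k / De):6.2f}   eta = {eta:.3f}")
```

Interpolating between this first-order limit and the zero-order limit is the kind of approximation the abstract refers to; without such a correction, the apparent Vm of an immobilized enzyme underestimates the intrinsic value.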

15.
Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope poorly with the high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations: a computational interpolation scheme that adaptively identifies the most significant expansion coefficients. We demonstrate its performance on kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte-Carlo sampling, it is “non-intrusive” and well-suited for massively parallel implementation, but it affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling many applications, including parameter estimation, uncertainty quantification, and systems design, to scale.

16.
Parameter inference and model selection are very important for mathematical modeling in systems biology. Bayesian statistics can be used to conduct both parameter inference and model selection. In particular, the framework known as approximate Bayesian computation is often used for parameter inference and model selection in systems biology. However, Monte Carlo methods need to be used to compute Bayesian posterior distributions. In addition, the posterior distributions of parameters are sometimes almost uniform or very similar to their prior distributions. In such cases, it is difficult to choose one specific parameter value with high credibility as the representative value of the distribution. To overcome these problems, we introduced one of the population Monte Carlo algorithms, population annealing. Although population annealing is usually used in statistical mechanics, we showed that population annealing can be used to compute Bayesian posterior distributions in the approximate Bayesian computation framework. To deal with the non-identifiability of representative parameter values, we proposed running the simulations with the parameter ensemble sampled from the posterior distribution, named the “posterior parameter ensemble”. We showed that population annealing is an efficient and convenient algorithm to generate the posterior parameter ensemble. We also showed that the simulations with the posterior parameter ensemble can not only reproduce the data used for parameter inference but also capture and predict data that were not used for parameter inference. Lastly, we introduced the marginal likelihood in the approximate Bayesian computation framework for Bayesian model selection. We showed that population annealing enables us to compute the marginal likelihood in the approximate Bayesian computation framework and conduct model selection based on the Bayes factor.
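
A much-simplified rejection-ABC sketch of the "posterior parameter ensemble" idea (toy one-parameter decay model, uniform prior, and an arbitrary tolerance; the paper's population-annealing sampler is considerably more sophisticated): accept parameter draws whose simulations lie close to the data, then reuse the accepted ensemble for prediction.

```python
import numpy as np

rng = np.random.default_rng(9)
t_obs = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
k_true = 0.35
data = np.exp(-k_true * t_obs) + rng.normal(0, 0.03, t_obs.size)

def simulate(k, t):
    return np.exp(-k * t)                        # toy one-parameter decay model

# Rejection ABC: sample from the prior, keep draws whose simulations are close enough
prior_draws = rng.uniform(0.0, 2.0, 100_000)
distance = np.array([np.sqrt(np.mean((simulate(k, t_obs) - data) ** 2))
                     for k in prior_draws])
epsilon = 0.05
ensemble = prior_draws[distance < epsilon]       # posterior parameter ensemble
print("accepted:", ensemble.size,
      "  k =", round(ensemble.mean(), 3), "+/-", round(ensemble.std(), 3))

# Prediction with the ensemble at a time point not used for inference
t_new = 6.0
pred = np.array([simulate(k, t_new) for k in ensemble])
print("prediction at t = 6:", round(pred.mean(), 3),
      "  90% band:", np.round(np.percentile(pred, [5, 95]), 3))
```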

17.
Population variability and uncertainty are important features of biological systems that must be considered when developing mathematical models for these systems. In this paper we present probability-based parameter estimation methods that account for such variability and uncertainty. Theoretical results that establish well-posedness and stability for these methods are discussed. A probabilistic parameter estimation technique is then applied to a toxicokinetic model for trichloroethylene using several types of simulated data. Comparison with results obtained using a standard, deterministic parameter estimation method suggests that the probabilistic methods are better able to capture population variability and uncertainty in model parameters.

18.
Many image processing methods applied to magnetic resonance (MR) images directly or indirectly rely on prior knowledge of the statistical data distribution that characterizes the MR data. Also, data distributions are key in many parameter estimation problems and strongly relate to the accuracy and precision with which parameters can be estimated. This review paper provides an overview of the various distributions that occur when dealing with MR data, considering both single-coil and multiple-coil acquisition systems. The paper also summarizes how knowledge of the MR data distributions can be used to construct optimal parameter estimators and answers the question of what precision may ultimately be achieved from a particular MR image.
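
As a concrete instance of the single-coil case, the magnitude of a complex signal corrupted by Gaussian channel noise follows a Rician distribution; a small sketch (hypothetical signal amplitude and noise level) comparing simulated magnitude data against scipy's Rice distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
A, sigma = 3.0, 1.0                          # underlying signal amplitude and noise sd

# Single-coil magnitude data: |A + n_re + i*n_im| with Gaussian noise in each channel
real = A + rng.normal(0, sigma, 100_000)
imag = rng.normal(0, sigma, 100_000)
magnitude = np.hypot(real, imag)

# scipy parameterisation of the Rice distribution: shape b = A/sigma, scale = sigma
b = A / sigma
print("sample mean:", round(magnitude.mean(), 3),
      "  Rician mean:", round(stats.rice.mean(b, scale=sigma), 3))
print("sample 95th pct:", round(np.percentile(magnitude, 95), 3),
      "  Rician 95th pct:", round(stats.rice.ppf(0.95, b, scale=sigma), 3))
```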
