Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
ABSTRACT: BACKGROUND: A common approach to the application of epidemiological models is to determine a single (point estimate) parameterisation using the information available in the literature. However, in many cases there is considerable uncertainty about parameter values, reflecting both the incomplete nature of current knowledge and natural variation, for example between farms. Furthermore model outcomes may be highly sensitive to different parameter values. Paratuberculosis is an infection for which many of the key parameter values are poorly understood and highly variable, and for such infections there is a need to develop and apply statistical techniques which make maximal use of available data. RESULTS: A technique based on Latin hypercube sampling combined with a novel reweighting method was developed which enables parameter uncertainty and variability to be incorporated into a model-based framework for estimation of prevalence. The method was evaluated by applying it to a simulation of paratuberculosis in dairy herds which combines a continuous time stochastic algorithm with model features such as within herd variability in disease development and shedding, which have not been previously explored in paratuberculosis models. Generated sample parameter combinations were assigned a weight, determined by quantifying the model's resultant ability to reproduce prevalence data. Once these weights are generated the model can be used to evaluate other scenarios such as control options. To illustrate the utility of this approach these reweighted model outputs were used to compare standard test and cull control strategies both individually and in combination with simple husbandry practices that aim to reduce infection rates. CONCLUSIONS: The technique developed has been shown to be applicable to a complex model incorporating realistic control options. 
For models where parameters are not well known or subject to significant variability, the reweighting scheme allowed estimated distributions of parameter values to be combined with additional sources of information, such as that available from prevalence distributions, resulting in outputs which implicitly handle variation and uncertainty. This methodology allows for more robust predictions from modelling approaches by allowing for parameter uncertainty and combining different sources of information, and is thus expected to be useful in application to a large number of disease systems.  相似文献   
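The core of the scheme — Latin hypercube sampling of uncertain parameters followed by reweighting against prevalence data — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the two-parameter stand-in "model", the bounds, and the Gaussian matching kernel are hypothetical assumptions.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng):
    """Stratified Latin hypercube sample over a box of parameter bounds:
    one stratum per sample in each dimension, jittered, then shuffled."""
    d = len(bounds)
    u = (rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
         + rng.random((n_samples, d))) / n_samples
    lo, hi = np.array(bounds).T
    return lo + u * (hi - lo)

def reweight(model_prevalences, observed_prevalence, kernel_sd):
    """Weight each sampled parameter set by how well its simulated
    prevalence reproduces the observed value (Gaussian kernel)."""
    w = np.exp(-0.5 * ((model_prevalences - observed_prevalence) / kernel_sd) ** 2)
    return w / w.sum()

rng = np.random.default_rng(0)
params = latin_hypercube(1000, [(0.0, 1.0), (0.0, 0.5)], rng)
# stand-in "model": prevalence rises with transmission p0, falls with culling p1
prev = params[:, 0] * (1 - params[:, 1])
w = reweight(prev, observed_prevalence=0.3, kernel_sd=0.05)
posterior_mean = (w[:, None] * params).sum(axis=0)
```

Once the weights exist, any other model output (e.g. a control scenario) can be summarized with the same weights, which is the reuse the abstract describes.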

2.
Prediction of gene dynamic behavior is a challenging and important problem in genomic research, and estimating temporal correlations and handling non-stationarity are the keys to this process. Unfortunately, most existing techniques used for the inclusion of temporal correlations treat the time course as evenly distributed time intervals and use stationary models with time-invariant settings. This assumption is often violated in microarray time course data, since the expression data are collected at unequal time points, where the difference in sampling times varies from minutes to days. Furthermore, the unevenly spaced short time courses with sudden changes make the prediction of genetic dynamics difficult. In this paper, we develop two types of Bayesian state space models to tackle this challenge for inferring and predicting the gene expression profiles associated with diseases. In the univariate time-varying Bayesian state space model we treat both the stochastic transition matrix and the observation matrix as time-variant with a linear setting, and point out that this can easily be extended to a nonlinear setting. In the multivariate Bayesian state space model we include temporal correlation structures in the covariance matrix estimation. In both models, the unevenly spaced short time courses with unseen time points are treated as hidden state variables. Bayesian approaches with various prior and hyper-prior models, fitted with MCMC algorithms, are used to estimate the model parameters and hidden variables. We apply our models to multiple-tissue polygenic Affymetrix data sets. Results show that genomic dynamic behavior can be well captured by the predictions of the proposed models.
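A minimal sketch of the key ingredient — a state space model whose transition depends on the uneven time gap between observations — is below, using a scalar Kalman filter rather than the authors' MCMC machinery. The OU-style transition, the noise levels, and the test series are illustrative assumptions.

```python
import numpy as np

def kalman_uneven(times, y, theta, q, r, m0=0.0, p0=1.0):
    """Kalman filter for an OU-like latent state observed at uneven times.
    Transition over a gap dt: x' = exp(-theta*dt) * x + noise(var = q*dt),
    so the transition coefficient is time-varying with the sampling gap."""
    m, p = m0, p0
    means = []
    for i, t in enumerate(times):
        if i > 0:
            dt = t - times[i - 1]
            a = np.exp(-theta * dt)      # gap-dependent transition coefficient
            m, p = a * m, a * a * p + q * dt
        k = p / (p + r)                  # Kalman gain
        m = m + k * (y[i] - m)
        p = (1 - k) * p
        means.append(m)
    return np.array(means)

rng = np.random.default_rng(1)
times = np.array([0.0, 0.1, 0.15, 1.0, 3.0, 3.2])   # unevenly spaced samples
truth = np.sin(times)
y = truth + rng.normal(0, 0.1, size=times.size)
est = kalman_uneven(times, y, theta=0.1, q=0.5, r=0.01)
```

Because the process noise variance scales with dt, long gaps automatically inflate the state uncertainty, which is how uneven spacing is absorbed.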

3.
Most biological models of intermediate size, and probably all large models, need to cope with the fact that many of their parameter values are unknown. In addition, it may not be possible to identify these values unambiguously on the basis of experimental data. This poses the question how reliable predictions made using such models are. Sensitivity analysis is commonly used to measure the impact of each model parameter on its variables. However, the results of such analyses can be dependent on an exact set of parameter values due to nonlinearity. To mitigate this problem, global sensitivity analysis techniques are used to calculate parameter sensitivities in a wider parameter space. We applied global sensitivity analysis to a selection of five signalling and metabolic models, several of which incorporate experimentally well-determined parameters. Assuming these models represent physiological reality, we explored how the results could change under increasing amounts of parameter uncertainty. Our results show that parameter sensitivities calculated with the physiological parameter values are not necessarily the most frequently observed under random sampling, even in a small interval around the physiological values. Often multimodal distributions were observed. Unsurprisingly, the range of possible sensitivity coefficient values increased with the level of parameter uncertainty, though the amount of parameter uncertainty at which the pattern of control was able to change differed among the models analysed. We suggest that this level of uncertainty can be used as a global measure of model robustness. Finally a comparison of different global sensitivity analysis techniques shows that, if high-throughput computing resources are available, then random sampling may actually be the most suitable technique.  相似文献   
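The basic operation — sampling parameter sets around "physiological" values and examining the resulting distribution of scaled sensitivity coefficients — can be sketched with a toy Michaelis-Menten flux. The flux expression, base values, and 10-fold sampling interval are assumed stand-ins, not one of the five models studied.

```python
import numpy as np

def scaled_sensitivity(f, p, i, eps=1e-6):
    """Normalized sensitivity coefficient d ln f / d ln p_i by central difference."""
    p_hi, p_lo = p.copy(), p.copy()
    p_hi[i] *= 1 + eps
    p_lo[i] *= 1 - eps
    return (np.log(f(p_hi)) - np.log(f(p_lo))) / (2 * eps)

# toy Michaelis-Menten flux; params p = [Vmax, Km], fixed substrate S
S = 2.0
flux = lambda p: p[0] * S / (p[1] + S)

rng = np.random.default_rng(2)
base = np.array([1.0, 0.5])                  # "physiological" values
spread = 10 ** rng.uniform(-1, 1, (500, 2))  # up to 10-fold uncertainty
samples = base * spread
sens_km = np.array([scaled_sensitivity(flux, p, 1) for p in samples])
# analytic check for this toy flux: d ln v / d ln Km = -Km / (Km + S)
```

Plotting a histogram of `sens_km` is the single-model analogue of the sensitivity distributions the abstract describes; for genuinely nonlinear models such histograms can be multimodal.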

4.
In controlling animal behavior the nervous system has to perform within the operational limits set by the requirements of each specific behavior. The implications for the corresponding range of suitable network, single neuron, and ion channel properties have remained elusive. In this article we approach the question of how tightly constrained the properties of neuronal systems may be at the level of single neurons. We used large data sets of the activity of isolated invertebrate identified cells and built an accurate conductance-based model for this cell type using customized automated parameter estimation techniques. By direct inspection of the data we found that the variability of the neurons is larger when they are isolated from the circuit than when in the intact system. Furthermore, the responses of the neurons to perturbations appear to be more consistent than their autonomous behavior under stationary conditions. In the developed model, the constraints on different parameters that enforce appropriate model dynamics vary widely, from some very tightly controlled parameters to others that are almost arbitrary. The model also allows us to predict the effect of blocking selected ionic currents and to prove that the origin of irregular dynamics in the neuron model is proper chaoticity, and that this chaoticity is typical in an appropriate sense. Our results indicate that data-driven models are useful tools for the in-depth analysis of neuronal dynamics. The better consistency of responses to perturbations, in the real neurons as well as in the model, suggests a paradigm shift away from measuring autonomous dynamics alone towards protocols of controlled perturbations. Our predictions for the impact of channel blockers on the neuronal dynamics and the proof of chaoticity underscore the wide scope of our approach.

5.
Models of electrical activity in excitable cells involve nonlinear interactions between many ionic currents. Changing parameters in these models can produce a variety of activity patterns with sometimes unexpected effects. Furthermore, introducing new currents will have different effects depending on the initial parameter set. In this study we combined global sampling of parameter space and local analysis of representative parameter sets in a pituitary cell model to understand the effects of adding K+ conductances, which mediate some effects of hormone action on these cells. Global sampling ensured that the effects of introducing K+ conductances were captured across a wide variety of model parameter contexts. For each type of K+ conductance we determined the types of behavioral transition that it evoked. Some transitions were counterintuitive, and may have been missed without the use of global sampling. In general, the wide range of transitions that occurred when the same current was applied to the model cell at different locations in parameter space highlights the challenge of making accurate model predictions in light of cell-to-cell heterogeneity. Finally, we used bifurcation analysis and fast/slow analysis to investigate why specific transitions occur in representative individual models. This approach relies on the use of a graphics processing unit (GPU) to quickly map parameter space to model behavior and identify parameter sets for further analysis. Acceleration with modern low-cost GPUs is particularly well suited to exploring the moderate-sized (5-20 parameter) spaces of excitable cell and signaling models.
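Mapping parameter space to qualitative behavior can be illustrated on a CPU, without the GPU acceleration described above, with a small grid scan of the FitzHugh-Nagumo model — a standard excitable-cell caricature used here purely as a stand-in for the pituitary cell model. The grid values and the amplitude-based classifier are assumptions.

```python
import numpy as np

def classify(I, b, t_end=400.0, dt=0.05):
    """Euler-integrate the FitzHugh-Nagumo model and label the steady
    behavior as 'spiking' or 'quiescent' from the late-time voltage range."""
    n = int(t_end / dt)
    v, w = -1.0, 1.0
    vs = np.empty(n)
    for k in range(n):
        dv = v - v ** 3 / 3 - w + I          # fast voltage variable
        dw = 0.08 * (v + 0.7 - b * w)        # slow recovery variable
        v, w = v + dt * dv, w + dt * dw
        vs[k] = v
    late = vs[n // 2:]                       # discard the transient
    return "spiking" if late.max() - late.min() > 1.0 else "quiescent"

# coarse scan over applied current I and recovery parameter b
grid = {(I, b): classify(I, b)
        for I in (0.0, 0.5, 1.0)
        for b in (0.6, 0.8)}
```

A real scan would use a much finer grid (or random sampling) and vectorize or parallelize the integration, which is where GPU acceleration pays off.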

6.
When predicting population dynamics, the value of the prediction is not enough: it should be accompanied by a confidence interval that integrates the whole chain of errors, from observations to predictions via the estimates of the parameters of the model. Matrix models are often used to predict the dynamics of age- or size-structured populations. Their parameters are vital rates. This study aims (1) at assessing the impact of the variability of observations on vital rates, and then on the model's predictions, and (2) at comparing three methods for computing confidence intervals for values predicted from the models. The first method is the bootstrap. The second method is analytic and approximates the standard error of predictions by their asymptotic variance as the sample size tends to infinity. The third method combines use of the bootstrap to estimate the standard errors of vital rates with the analytic method to then estimate the errors of predictions from the model. Computations are done for an Usher matrix model that predicts the asymptotic (as time goes to infinity) stock recovery rate for three timber species in French Guiana. Little difference is found between the hybrid and the analytic method. Their estimates of bias and standard error converge towards the bootstrap estimates when the error on vital rates becomes small enough, which corresponds in the present case to a number of observations greater than 5000 trees.
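The bootstrap approach (method 1) can be sketched for a small stage-structured matrix: vital rates are resampled from simulated individual fates, and the asymptotic growth rate is recomputed for each replicate. The three-stage matrix, fecundities, and sample sizes below are hypothetical, not the French Guiana data.

```python
import numpy as np

def growth_rate(A):
    """Asymptotic population multiplication rate: dominant eigenvalue modulus."""
    return np.max(np.abs(np.linalg.eigvals(A)))

rng = np.random.default_rng(3)

# hypothetical 3-stage matrix: (stay, advance) probabilities per stage + fecundity
true_p = np.array([[0.6, 0.3], [0.7, 0.2], [0.9, 0.0]])
fec = np.array([0.0, 0.4, 0.8])
n_per_stage = 200

def build_matrix(p):
    A = np.diag(p[:, 0])                 # staying in the same stage
    A[1, 0], A[2, 1] = p[0, 1], p[1, 1]  # advancing to the next stage
    A[0, :] += fec                       # reproduction into stage 1
    return A

# simulate observed fates per stage, then bootstrap the vital rates
fates = [rng.multinomial(n_per_stage, [p0, p1, 1 - p0 - p1]) for p0, p1 in true_p]
boot = []
for _ in range(500):
    p_hat = np.array([rng.multinomial(n_per_stage, f / n_per_stage)[:2] / n_per_stage
                      for f in fates])
    boot.append(growth_rate(build_matrix(p_hat)))
ci = np.percentile(boot, [2.5, 97.5])
```

The same loop applied to any other model output (e.g. a recovery rate) yields the corresponding bootstrap confidence interval.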

7.
In order to predict extinction risk in the presence of reddened, or correlated, environmental variability, fluctuating parameters may be represented by the family of 1/f noises, a series of stochastic models with different levels of variation acting on different timescales. We compare the process of parameter estimation for three 1/f models (white, pink and brown noise) with each other, and with autoregressive noise models (which are not 1/f noises), using data from a model time series (length T) of population size. We then calculate the expected increase in variance and the expected extinction risk for each model, and we use these to explore the implications of assuming an incorrect noise model. When parameterising these models, it is necessary to do so in terms of the measured ("sample") parameters rather than fundamental ("population") parameters. This is because these models are non-stationary: their parameters need not stabilize on measurement over long periods of time and are uniquely defined only over a specified "window" of timescales defined by a measurement process. We find that extinction forecasts can differ greatly between models, depending on the length, T, and the coefficient of variation, CV, of the time series used to parameterise the models, and on the length of time into the future which is to be projected. For the simplest possible models, in which population size itself is the 1/f noise process, it is possible to predict the extinction risk based on the CV of the observed time series.
Our predictions, based on explicit formulae and on simulations, indicate that (a) for very short projection times relative to T, brown and pink noise models are usually optimistic relative to the equivalent white noise model; (b) for projection timescales equal to and substantially greater than T, an equivalent brown or pink noise model usually predicts a greater extinction risk, unless CV is very large; and (c) except for very small values of CV, for timescales very much greater than T, the brown and pink models present a more optimistic picture than the white noise model. In most cases, the pink noise model is intermediate between the white and brown models. Thus, while reddening of environmental noise may increase the long-term extinction probability for stationary processes, this is not generally true for non-stationary processes, such as pink or brown noises.
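The 1/f noise family itself is straightforward to generate by shaping the Fourier spectrum; the sketch below produces white (beta = 0), pink (beta = 1) and brown (beta = 2) series. It is an illustrative construction, not the paper's estimation procedure, and the window-dependence of the sample variance is exactly the non-stationarity the abstract discusses.

```python
import numpy as np

def one_over_f_noise(n, beta, rng):
    """Gaussian-like noise with power spectrum ~ 1/f^beta, built by
    imposing the spectral amplitudes and randomizing the phases.
    beta = 0: white, 1: pink, 2: brown (red)."""
    freqs = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2)          # shape the spectrum
    phases = rng.uniform(0, 2 * np.pi, freqs.size)
    spec = amp * np.exp(1j * phases)
    x = np.fft.irfft(spec, n)
    return x / x.std()                          # unit sample variance

rng = np.random.default_rng(4)
n = 4096
series = {b: one_over_f_noise(n, b, rng) for b in (0.0, 1.0, 2.0)}

def windowed_var(x, T):
    """Mean sample variance over non-overlapping windows of length T:
    grows with T for reddened (non-stationary) noise."""
    k = len(x) // T
    return x[:k * T].reshape(k, T).var(axis=1).mean()
```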

8.
Muscle fatigue models (MFMs) have broad potential application if they can accurately predict muscle capacity and/or endurance time during the execution of diverse tasks. As an initial step toward facilitating improved MFMs, we assessed the sensitivity of selected existing models to their inherent parameters, specifically those that model the fatigue and recovery processes, and the accuracy of model predictions. These evaluations were completed for both prolonged and intermittent isometric contractions, and were based on model predictions of endurance times. Based on a recent review of the literature, four MFMs were initially chosen, from which a preliminary assessment led to two being considered for more comprehensive evaluation. Both models had a higher sensitivity to their fatigue parameter. Predictions of both models were also more sensitive to the alteration of their parameters in conditions involving lower to moderate levels of effort, though such conditions may be of most practical, contemporary interest or relevance. Although both models yielded accurate predictions of endurance times during prolonged contractions, their predictive ability was inferior for more complex (intermittent) conditions. When optimizing model parameters for different loading conditions, the recovery parameter showed considerably larger variability, which might be related to the inability of these MFMs to simulate the recovery process under different loading conditions. It is argued that such models may benefit in future work from an improved representation of the recovery process, particularly how this process differs across loading conditions.
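The qualitative finding — endurance-time predictions more sensitive to the fatigue parameter than to the recovery parameter — can be reproduced in a deliberately simplified sketch in the spirit of compartment-style MFMs. The single fatigued-capacity state, the parameter values, and the elasticity-style sensitivity measure are all assumptions, not any of the reviewed models.

```python
def endurance_time(target, F, R, dt=0.01, t_max=10000.0):
    """Simplified fatigue model for a sustained isometric contraction:
    fatigued capacity m_f (% MVC) accumulates at rate F * target and
    recovers at rate R * m_f; the task fails once the remaining
    capacity (100 - m_f) drops below the target load."""
    m_f, t = 0.0, 0.0
    while t < t_max:
        if 100.0 - m_f < target:
            return t
        m_f += dt * (F * target - R * m_f)   # fatigue minus recovery
        t += dt
    return float("inf")

# relative sensitivity of endurance time to a 10% change in F vs R
base = endurance_time(30.0, F=0.01, R=0.002)
dF = (endurance_time(30.0, 0.011, 0.002) - base) / base / 0.1
dR = (endurance_time(30.0, 0.01, 0.0022) - base) / base / 0.1
```

With these (assumed) values, a 30% MVC hold fails after roughly five minutes, and the magnitude of the fatigue-parameter sensitivity exceeds the recovery-parameter sensitivity, mirroring the abstract's observation for sustained contractions.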

9.
Enhancing the predictive power of models in biology is a challenging issue. Among the major difficulties impeding model development and implementation are the sensitivity of outcomes to variations in model parameters, the problem of choosing particular expressions for the parametrization of functional relations, and difficulties in validating models using laboratory data and/or field observations. In this paper, we revisit the phenomenon referred to as structural sensitivity of a model. Structural sensitivity arises as a result of the interplay between sensitivity of model outcomes to variations in parameters and sensitivity to the choice of model functions, and this can be somewhat of a bottleneck in improving the model's predictive power. We provide a rigorous definition of structural sensitivity and we show how we can quantify the degree of sensitivity of a model based on the Hausdorff distance concept. We propose a simple semi-analytical test of structural sensitivity in an ODE modeling framework. Furthermore, we emphasize the importance of directly linking the variability of field/experimental data and model predictions, and we demonstrate a way of assessing the robustness of modeling predictions with respect to data sampling variability. As an insightful illustrative example, we test our sensitivity analysis methods on a chemostat predator-prey model, where we use laboratory data on the feeding of protozoa to parameterize the predator functional response.
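Quantifying the disagreement between two candidate model functions via the Hausdorff distance can be sketched as follows. The Holling type II and Ivlev functional responses, and their parameter values (chosen so both saturate at the same level), are illustrative stand-ins, not the chemostat model of the paper.

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets of shape (n, d)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# two functional-response parametrisations with the same saturation level
prey = np.linspace(0.0, 10.0, 200)
holling = 1.0 * prey / (1.0 + 1.0 * prey)     # a x / (1 + a h x)
ivlev = 1.0 * (1 - np.exp(-1.0 * prey))       # c (1 - exp(-d x))

curve_a = np.column_stack([prey, holling])
curve_b = np.column_stack([prey, ivlev])
structural_gap = hausdorff(curve_a, curve_b)
```

A large `structural_gap` between equally well-fitting parametrisations is the signature of structural sensitivity: the data cannot distinguish functional forms whose dynamical consequences differ.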

10.
The size and complexity of cellular systems make building predictive models an extremely difficult task. In principle dynamical time-course data can be used to elucidate the structure of the underlying molecular mechanisms, but a central and recurring problem is that many and very different models can be fitted to experimental data, especially when the latter are limited and subject to noise. Even given a model, estimating its parameters remains challenging in real-world systems. Here we present a comprehensive analysis of 180 systems biology models, which allows us to classify the parameters with respect to their contribution to the overall dynamical behaviour of the different systems. Our results reveal candidate elements of control in biochemical pathways that differentially contribute to dynamics. We introduce sensitivity profiles that concisely characterize parameter sensitivity and demonstrate how this can be connected to variability in data. Systematically linking data and model sloppiness allows us to extract features of dynamical systems that determine how well parameters can be estimated from time-course measurements, and associates the extent of data required for parameter inference with the model structure, and also with the global dynamical state of the system. The comprehensive analysis of so many systems biology models reaffirms the inability to estimate precisely most model or kinetic parameters as a generic feature of dynamical systems, and provides safe guidelines for performing better inferences and model predictions in the context of reverse engineering of mathematical models for biological systems.  相似文献   

11.
We use bootstrap simulation to characterize uncertainty in parametric distributions, including Normal, Lognormal, Gamma, Weibull, and Beta, commonly used to represent variability in probabilistic assessments. Bootstrap simulation enables one to estimate sampling distributions for sample statistics, such as distribution parameters, even when analytical solutions are not available. Using a two-dimensional framework for both uncertainty and variability, uncertainties in cumulative distribution functions were simulated. The mathematical properties of uncertain frequency distributions were evaluated in a series of case studies during which the parameters of each type of distribution were varied for sample sizes of 5, 10, and 20. For positively skewed distributions such as Lognormal, Weibull, and Gamma, the range of uncertainty is widest at the upper tail of the distribution. For symmetric unbounded distributions, such as Normal, the uncertainties are widest at both tails of the distribution. For bounded distributions, such as Beta, the uncertainties are typically widest in the central portions of the distribution. Bootstrap simulation enables complex dependencies between sampling distributions to be captured. The effects of uncertainty, variability, and parameter dependencies were studied for several generic functional forms of models, including models in which two-dimensional random variables are added, multiplied, and divided, to show the sensitivity of model results to different assumptions regarding model input distributions, ranges of variability, and ranges of uncertainty and to show the types of errors that may be obtained from mis-specification of parameter dependence. A total of 1,098 case studies were simulated. In some cases, counter-intuitive results were obtained. 
For example, the point value of the 95th percentile of uncertainty for the 95th percentile of variability of the product of four Gamma or Weibull distributions decreases as the coefficient of variation of each model input increases and, therefore, may not provide a conservative estimate. Failure to properly characterize parameter uncertainties and their dependencies can lead to orders-of-magnitude mis-estimates of both variability and uncertainty. In many cases, the numerical stability of two-dimensional simulation results was found to decrease as the coefficient of variation of the inputs increases. We discuss the strengths and limitations of bootstrap simulation as a method for quantifying uncertainty due to random sampling error.  相似文献   
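The two-dimensional (uncertainty-over-variability) bootstrap can be sketched for a single Lognormal case: resample a small data set, refit the distribution parameters, and collect the implied 95th percentile of variability. The sample size and replicate count below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)

# small sample from a Lognormal "variability" distribution
n = 10
data = rng.lognormal(mean=0.0, sigma=1.0, size=n)

# bootstrap the fitted parameters, then the 95th percentile of variability
p95 = []
for _ in range(2000):
    resample = rng.choice(data, size=n, replace=True)
    mu, sigma = np.log(resample).mean(), np.log(resample).std(ddof=1)
    p95.append(np.exp(mu + 1.645 * sigma))   # parametric 95th percentile
p95 = np.array(p95)

# sampling (uncertainty) interval for the upper tail of variability
u_interval = np.percentile(p95, [5, 95])
```

The wide interval for the upper tail at small n illustrates the abstract's point that, for positively skewed distributions, uncertainty is widest in the upper tail.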

12.
Spatial capture–recapture (SCR) models are a relatively recent development in quantitative ecology, and they are becoming widely used to model density in studies of animal populations using camera traps, DNA sampling and other methods which produce spatially explicit individual encounter information. One of the core assumptions of SCR models is that individuals possess home ranges that are spatially stationary during the sampling period. For many species, this assumption is unlikely to be met and, even for species that are typically territorial, individuals may disperse or exhibit transience at some life stages. In this paper we first conduct a simulation study to evaluate the robustness of estimators of density under ordinary SCR models when dispersal or transience is present in the population. Then, using both simulated and real data, we demonstrate that such models can easily be described in the BUGS language providing a practical framework for their analysis, which allows us to evaluate movement dynamics of species using capture–recapture data. We find that while estimators of density are extremely robust, even to pathological levels of movement (e.g., complete transience), the estimator of the spatial scale parameter of the encounter probability model is confounded with the dispersal/transience scale parameter. Thus, use of ordinary SCR models to make inferences about density is feasible, but interpretation of SCR model parameters in relation to movement should be avoided. Instead, when movement dynamics are of interest, such dynamics should be parameterized explicitly in the model.  相似文献   

13.
Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a “sloppy” spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.  相似文献   
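A sloppy spectrum is easy to exhibit: for a sum-of-exponentials model, the eigenvalues of the Gauss-Newton approximation to the Fisher information (in log-parameters) span many decades. The rate constants and time grid below are illustrative, not drawn from the model collection studied.

```python
import numpy as np

def fim(params, times, eps=1e-6):
    """Gauss-Newton approximation J^T J of the Fisher information for
    y(t) = sum_i exp(-k_i t), with derivatives taken in log-parameters
    (multiplicative perturbations give d y / d ln k_i)."""
    def model(p):
        return np.exp(-np.outer(times, p)).sum(axis=1)
    J = np.empty((times.size, params.size))
    for i in range(params.size):
        up, dn = params.copy(), params.copy()
        up[i] *= 1 + eps
        dn[i] *= 1 - eps
        J[:, i] = (model(up) - model(dn)) / (2 * eps)
    return J.T @ J

times = np.linspace(0.1, 5.0, 50)
rates = np.array([0.3, 0.5, 0.8, 1.4, 2.2])
eigs = np.sort(np.linalg.eigvalsh(fim(rates, times)))[::-1]
decades = np.log10(eigs[0] / eigs[-1])
```

The stiff (large-eigenvalue) directions are well constrained by data while the sloppy directions are not, which is why collective fits can predict well even with poorly determined individual parameters.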

14.
Computer fitting of binding data is discussed and it is concluded that the main problem is the choice of starting estimates and internal scaling parameters, not the optimization software. Solving linear overdetermined systems of equations for starting estimates is investigated. A function, Q, is introduced to study model discrimination with binding isotherms, and the behaviour of Q as a function of model parameters is calculated for the case of 2 and 3 sites. The power function of the F test is estimated for models with 2 to 5 binding sites, and necessary constraints on parameters for correct model discrimination are given. The sampling distribution of F test statistics is compared to an exact F distribution using the Chi-squared and Kolmogorov-Smirnov tests. For low order models (n less than 3) the F test statistics are approximately F distributed, but for higher order models the test statistics are skewed to the left of the F distribution. The parameter covariance matrix obtained by inverting the Hessian matrix of the objective function is shown to be a good approximation to the estimate obtained by Monte Carlo sampling for low order models (n less than 3). It is concluded that analysis of up to 2 or 3 binding sites presents few problems, and linear, normal statistical results are valid. To identify correctly 4 sites is much more difficult, requiring very precise data and extreme parameter values. Discrimination of 5 from 4 sites is an upper limit to the usefulness of the F test.
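The extra-sum-of-squares F test for discriminating one from two binding sites can be sketched as follows. Site amplitudes enter the isotherm linearly, so for each candidate set of dissociation constants they are solved exactly by linear least squares and only the Kd values are gridded. The simulated data, the grid, and the quoted critical value (F(2, 26) at the 5% level is about 3.37) are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(6)
L = np.logspace(-2, 2, 30)                    # ligand concentrations
truth = 1.0 * L / (0.1 + L) + 1.0 * L / (10.0 + L)   # true 2-site isotherm
y = truth + rng.normal(0, 0.02, L.size)

kds = np.logspace(-3, 3, 61)                  # candidate dissociation constants

def fit_rss(kd_sets):
    """Best residual sum of squares over candidate Kd combinations;
    site amplitudes are solved by linear least squares for each one."""
    best = np.inf
    for kset in kd_sets:
        X = np.column_stack([L / (k + L) for k in kset])
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        best = min(best, ((y - X @ b) ** 2).sum())
    return best

rss1 = fit_rss([[k] for k in kds])
rss2 = fit_rss([[k1, k2] for i, k1 in enumerate(kds) for k2 in kds[i + 1:]])

# extra-sum-of-squares F statistic, df: (30 - 2) vs (30 - 4)
F = ((rss1 - rss2) / 2) / (rss2 / 26)
significant = F > 3.37                        # approx. F(2, 26) 95% critical value
```

With well-separated Kd values and precise data the test is decisive; as the abstract notes, discriminating 4 or 5 sites is far harder.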

15.
Matrix population models in which individuals are classified by both age and stage can be constructed using the vec-permutation matrix. The resulting age-stage models can be used to derive the age-specific consequences of a stage-specific life history or to describe populations in which the vital rates respond to both age and stage. I derive a general formula for the sensitivity of any output (scalar, vector, or matrix-valued) of the model, to any vector of parameters, using matrix calculus. The matrices describing age-stage dynamics are almost always reducible; I present results giving conditions under which population growth is ergodic from any initial condition. As an example, I analyze a published stage-specific model of Scotch broom (Cytisus scoparius), an invasive perennial shrub. Sensitivity analysis of the population growth rate finds that the selection gradients on adult survival do not always decrease with age but may increase over a range of ages. This may have implications for the evolution of senescence in stage-classified populations. I also derive and analyze the joint distribution of age and stage at death and present a sensitivity analysis of this distribution and of the marginal distribution of age at death.  相似文献   
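The vec-permutation matrix itself is simple to construct and check: it is the permutation K satisfying K vec(A) = vec(A.T), which is what allows the age-within-stage and stage-within-age orderings of the joint state vector to be interchanged. A minimal sketch:

```python
import numpy as np

def vec_permutation(m, n):
    """K such that K @ vec(A) = vec(A.T) for an m x n matrix A,
    where vec() stacks columns (column-major order)."""
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            # entry A[i, j] sits at position i + j*m in vec(A)
            # and at position j + i*n in vec(A.T)
            K[j + i * n, i + j * m] = 1.0
    return K

def vec(A):
    return A.flatten(order="F")   # column-stacking vec operator

A = np.arange(12.0).reshape(3, 4)
K = vec_permutation(3, 4)
ok = np.allclose(K @ vec(A), vec(A.T))
```

In the age-stage construction, the projection alternates stage transitions (block-diagonal in one ordering) with age advancement (block-diagonal in the other), with K converting between the two orderings.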

16.
The traditional method for estimating a linear function of the fixed parameters in a mixed linear model is a two-stage procedure. In the first stage the variance components estimators are calculated, and in the second stage these estimators are taken as the true values of the variance components to estimate the linear function of fixed parameters by generalized least squares. In this paper the general mixed linear model is considered, in which a matrix related to the fixed parameters and/or the dispersion matrix of the observation vector may be deficient in rank. It is shown that the estimators of a set of functions of the fixed parameters obtained in the second stage are unbiased provided that the observation vector is symmetrically distributed about its expected value and the estimators of variance components from the first stage are translation-invariant and even functions of the observation vector.

17.
Modeling vital rates improves estimation of population projection matrices
Population projection matrices are commonly used by ecologists and managers to analyze the dynamics of stage-structured populations. Building projection matrices from data requires estimating transition rates among stages, a task that often entails estimating many parameters with few data. Consequently, large sampling variability in the estimated transition rates increases the uncertainty in the estimated matrix and quantities derived from it, such as the population multiplication rate and sensitivities of matrix elements. Here, we propose a strategy to avoid overparameterized matrix models. This strategy involves fitting models to the vital rates that determine matrix elements, evaluating both these models and ones that estimate matrix elements individually with model selection via information criteria, and averaging competing models with multimodel averaging. We illustrate this idea with data from a population of Silene acaulis (Caryophyllaceae), and conduct a simulation to investigate the statistical properties of the matrices estimated in this way. The simulation shows that compared with estimating matrix elements individually, building population projection matrices by fitting and averaging models of vital-rate estimates can reduce the statistical error in the population projection matrix and quantities derived from it.  相似文献   
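The multimodel-averaging step can be sketched with Akaike weights; the AIC values and per-model estimates below are hypothetical placeholders, not the Silene acaulis results.

```python
import numpy as np

def aic_weights(aics):
    """Akaike weights: relative support for each candidate model,
    computed from AIC differences to the best model."""
    d = np.asarray(aics, dtype=float)
    d = d - d.min()
    w = np.exp(-d / 2.0)
    return w / w.sum()

# hypothetical AICs: element-wise matrix model, vital-rate model, intermediate
w = aic_weights([104.2, 100.0, 101.7])

# model-averaged estimate of one matrix element from the three candidates
estimates = np.array([0.52, 0.48, 0.50])   # hypothetical per-model estimates
averaged = float(w @ estimates)
```

Averaging over competing parameterizations in this way is what trades a little bias for the reduction in sampling variance reported in the abstract.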

18.
Dynamic compartmentalized metabolic models are identified by a large number of parameters, several of which are either non-physical or extremely difficult to measure. Typically, the available data and prior information are insufficient to fully identify the system. Since the models are used to predict the behavior of unobserved quantities, it is important to understand how sensitive the output of the system is to perturbations in the poorly identifiable parameters. Classically, it is the goal of sensitivity analysis to assess how much the output changes as a function of the parameters. In the case of dynamic models, the output is a function of time and therefore its sensitivity is a time-dependent function. If the output is a differentiable function of the parameters, the sensitivity at one time instance can be computed from its partial derivatives with respect to the parameters. The time course of these partial derivatives describes how the sensitivity varies in time. When the model is not uniquely identifiable, or if the solution of the parameter identification problem is known only approximately, we may have not one, but a distribution of possible parameter values. This is always the case when the parameter identification problem is solved in a statistical framework. In that setting, the proper way to perform sensitivity analysis is to not rely on the values of the sensitivity functions corresponding to a single model, but to consider the distributed nature of the sensitivity functions, inherited from the distribution of the vector of the model parameters. In this paper we propose a methodology for analyzing the sensitivity of dynamic metabolic models which takes into account the variability of the sensitivity over time and across a sample. More specifically, we draw a representative sample from the posterior density of the vector of model parameters, viewed as a random variable.
To interpret the output of this doubly varying sensitivity analysis, we propose visualization modalities particularly effective at displaying simultaneously variations over time and across a sample. We perform an analysis of the sensitivity of the concentrations of lactate and glycogen in cytosol, and of ATP, ADP, NAD+ and NADH in cytosol and mitochondria, to the parameters identifying a three compartment model for myocardial metabolism during ischemia.  相似文献   

19.
The integration of large quantities of biological information into mathematical models of cell metabolism provides a way for quantitatively evaluating the effect of parameter changes on simultaneous, coupled, and, often, counteracting processes. From a practical point of view, the validity of the model's predictions would critically depend on its quality. Among others, one of the critical steps that may compromise this quality is to decide which are the boundaries of the model. That is, we must decide which metabolites are assumed to be constants, and which fluxes are considered to be the inputs and outputs of the system. In this article, we analyze the effect of the experimental uncertainty on these variables on the system's characterization. Using a previously defined model of glucose fermentation in Saccharomyces cerevisiae, we characterize the effect of the uncertainty on some key variables commonly considered to be constants in many models of glucose metabolism, i.e., the intracellular pH and the pool of nucleotides. Without considering if this variability corresponds to a possible true physiological phenomenon, the goal of this article is to illustrate how this uncertainty may result in an important variability in the systemic responses predicted by the model. To characterize this variability, we analyze the utility and limitations of computing the sensitivities of logarithmic-gains (control coefficients) to the boundary parameters. With the exception of some special cases, our analysis shows that these sensitivities are good indicators of the dependence of the model systemic behavior on the parameters of interest.  相似文献   

20.