Similar Literature
A total of 20 similar documents were retrieved (search time: 62 ms).
1.
Evaluation of insulin sensitivity is of prime importance in the clinical investigation of glucose-related diseases. This paper deals with a novel model-based technique for the evaluation of an index of insulin sensitivity. A set of nonlinear autoregressive models are identified from the clinical test data of normal subjects. The two-stage identification procedure involves proper structure selection for approximating the input–output data, followed by estimation of the parameters of the polynomial model. The models obtained are analyzed to derive an index of insulin sensitivity by determining the effect of insulin on glucose utilization. A median bootstrapped correlation (sampling with replacement) of 0.97, with a 90% confidence interval of [0.92, 0.98], is obtained between the indices of the proposed model and those of the widely used minimal model. The proposed model is able to achieve a good fitting performance on the validation dataset. The results also suggest that, for representing the dynamics of insulin action on glucose disposal, the proposed model overcomes some of the well-known limitations of the minimal model, and thus gives a better representation of insulin sensitivity.
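As a rough illustration of the bootstrapped correlation described in this abstract (not the authors' code; the per-subject index values below are invented), a minimal Python sketch:

```python
# Bootstrap (sampling with replacement) of the correlation between two sets of
# insulin-sensitivity indices; all data here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_subjects = 30
si_minimal = rng.normal(5.0, 1.5, n_subjects)                 # minimal-model indices (made up)
si_proposed = si_minimal + rng.normal(0.0, 0.4, n_subjects)   # proposed-model indices (made up)

boot = []
for _ in range(2000):
    idx = rng.integers(0, n_subjects, n_subjects)             # resample subjects with replacement
    boot.append(np.corrcoef(si_minimal[idx], si_proposed[idx])[0, 1])

print("median r:", np.median(boot))
print("90% CI:", np.percentile(boot, [5, 95]))
```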

2.
Using 'Fuji' apple (Malus domestica 'Fuji') fruit as the test material, the dynamic changes of several growth indices were measured during fruit growth and development. Five theoretical growth equations were fitted to longitudinal diameter, transverse diameter, single-fruit weight, volume and dry weight, and suitable equations were selected from the fitting results to build mathematical models for each growth index; a polynomial fit was used to model the change in fruit shape index, and correlations among the growth indices were also analysed. The results showed that the Logistic equation was suitable for the growth of longitudinal and transverse diameter, the Gompertz equation was suitable for single-fruit weight, volume and dry weight, and a polynomial fit was suitable for the change in fruit shape index. Longitudinal diameter, transverse diameter, single-fruit weight, volume and dry weight were all significantly positively correlated with one another; fruit shape index and relative dry matter content were significantly negatively correlated with longitudinal diameter, transverse diameter, single-fruit weight, volume and dry weight; fruit density was significantly negatively correlated with longitudinal diameter, transverse diameter, single-fruit weight, volume and dry weight, but significantly positively correlated with fruit shape index and relative dry matter content.
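A minimal sketch of fitting the Logistic and Gompertz equations mentioned above with scipy; the time points and diameters are synthetic placeholders, not the measured data:

```python
# Fit Logistic and Gompertz growth equations to a fruit-growth time series.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, a, b):
    return K / (1.0 + a * np.exp(-b * t))

def gompertz(t, K, a, b):
    return K * np.exp(-a * np.exp(-b * t))

t = np.array([10, 30, 50, 70, 90, 110, 130, 150], dtype=float)       # days after bloom (hypothetical)
diameter = np.array([12, 25, 42, 55, 64, 70, 73, 75], dtype=float)   # transverse diameter, mm (hypothetical)

p_log, _ = curve_fit(logistic, t, diameter, p0=[80, 10, 0.05], maxfev=10000)
p_gom, _ = curve_fit(gompertz, t, diameter, p0=[80, 3, 0.03], maxfev=10000)
print("Logistic K, a, b:", p_log)
print("Gompertz K, a, b:", p_gom)
```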

3.
The semiparametric Cox proportional hazards model is routinely adopted to model time-to-event data. Proportionality is a strong assumption, especially when follow-up time, or study duration, is long. Zeng and Lin (J. R. Stat. Soc., Ser. B, 69:1–30, 2007) proposed a useful generalisation through a family of transformation models which allow hazard ratios to vary over time. In this paper we explore a variety of tests for the need for transformation, arguing that the Cox model is so ubiquitous that it should be considered as the default model, to be discarded only if there is good evidence against the model assumptions. Since fitting an alternative transformation model is more complicated than fitting the Cox model, especially as procedures are not yet incorporated in standard software, we focus mainly on tests which require a Cox fit only. A score test is derived, and we also consider performance of omnibus goodness-of-fit tests based on Schoenfeld residuals. These tests can be extended to compare different transformation models. In addition we explore the consequences of fitting a misspecified Cox model to data generated under a true transformation model. Data on survival of 1043 leukaemia patients are used for illustration.

4.
This paper reports the development and application of three powerful algorithms for the analysis and simulation of mathematical models consisting of ordinary differential equations. First, we describe an extended parameter sensitivity analysis: we measure the relative sensitivities of many dynamical behaviors of the model to perturbations of each parameter. We check sensitivities to parameter variation over both small and large ranges. These two extensions of a common technique have applications in parameter estimation and in experimental design. Second, we compute sensitivity functions, using an efficient algorithm requiring just one model simulation to obtain all sensitivities of state variables to all parameters as functions of time. We extend the analysis to a behavior which is not a state variable. Third, we present an unconstrained global optimization algorithm, and apply it in a novel way: we determine the input to the model, given an optimality criterion and typical outputs. The algorithm itself is an efficient one for high-order problems, and does not get stuck at local extrema. We apply the sensitivity analysis, sensitivity functions, and optimization algorithm to a sixth-order nonlinear ordinary differential equation model for human eye movements. This application shows that the algorithms are not only practicable for high-order models, but also useful as conceptual tools.
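The paper computes sensitivity functions from a single simulation; the sketch below instead uses simple finite-difference perturbations on a generic logistic ODE (a stand-in, not the sixth-order eye-movement model) to illustrate the idea of relative sensitivities over time:

```python
# Relative sensitivity of a state variable to each parameter, by 1% perturbation.
import numpy as np
from scipy.integrate import solve_ivp

def model(t, x, r, K):
    return [r * x[0] * (1.0 - x[0] / K)]

params = {"r": 0.5, "K": 10.0}          # assumed parameter values
t_eval = np.linspace(0.0, 20.0, 50)

def simulate(p):
    sol = solve_ivp(model, (0.0, 20.0), [0.1], args=(p["r"], p["K"]), t_eval=t_eval)
    return sol.y[0]

base = simulate(params)
for name in params:
    p = dict(params)
    p[name] *= 1.01                                  # 1% perturbation
    rel_sens = (simulate(p) - base) / (0.01 * base)  # relative sensitivity over time
    print(name, "max |S|:", np.abs(rel_sens).max())
```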

5.
6.
The identification of metabolic regulation is a major concern in metabolic engineering. Metabolic regulation phenomena depend on intracellular compounds such as enzymes, metabolites and cofactors. A complete understanding of metabolic regulation requires quantitative information about these compounds under in vivo conditions. This quantitative knowledge in combination with the known network of metabolic pathways allows the construction of mathematical models that describe the dynamic changes in metabolite concentrations over time. Rapid sampling combined with pulse experiments is a useful tool for the identification of metabolic regulation owing to the transient data such experiments provide. Enzymatic tests in combination with ESI-LC-MS (Electrospray Ionization Liquid Chromatographic Tandem Mass Spectrometry) and HPLC measurements have been used to identify up to 30 metabolites and nucleotides from rapid sampling experiments. A metabolic modeling tool (MMT) built on a relational database was developed specifically for the analysis of rapid sampling experiments. The tool allows complex pathway models to be constructed from the information stored in the relational database. Parameter fitting and simulation algorithms for the resulting system of Ordinary Differential Equations (ODEs) are part of MMT. Additionally, explicit sensitivity functions are calculated. The integration of all necessary algorithms in one tool allows fast model analysis and comparison. Complex models have been developed to describe the central metabolic pathways of Escherichia coli during a glucose pulse experiment.

7.
Quantitative understanding of the kinetics of lymphocyte proliferation and death upon activation with an antigen is crucial for elucidating factors determining the magnitude, duration and efficiency of the immune response. Recent advances in quantitative experimental techniques, in particular intracellular labeling and multi-channel flow cytometry, allow one to measure the population structure of proliferating and dying lymphocytes for several generations with high precision. These new experimental techniques require novel quantitative methods of analysis. We review several recent mathematical approaches used to describe and analyze cell proliferation data. Using a rigorous mathematical framework, we show that two commonly used models that are based on the theories of age-structured cell populations and of branching processes, are mathematically identical. We provide several simple analytical solutions for a model in which the distribution of inter-division times follows a gamma distribution and show that this model can fit both simulated and experimental data. We also show that the estimates of some critical kinetic parameters, such as the average inter-division time, obtained by fitting models to data may depend on the assumed distribution of inter-division times, highlighting the challenges in quantitative understanding of cell kinetics.
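A minimal sketch of estimating gamma-distributed inter-division times, as in the model discussed above; the "data" are simulated, and the shape and scale values are assumptions:

```python
# Fit a gamma distribution to simulated inter-division times and report the mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_shape, true_scale = 4.0, 2.5                      # hours (assumed values)
times = rng.gamma(true_shape, true_scale, size=500)    # simulated inter-division times

shape, loc, scale = stats.gamma.fit(times, floc=0)     # fix location at zero
print("estimated shape:", shape, "scale:", scale)
print("estimated mean inter-division time:", shape * scale, "h")
```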

8.
The analysis of metabolic pathways with mathematical models contributes to a better understanding of the behavior of metabolic processes. This paper presents the analysis of a mathematical model for carbohydrate uptake and metabolism in Escherichia coli. It is shown that the dynamic processes cover a broad time span, from a few milliseconds to several hours. Based on this analysis, the fast processes could be described with steady-state characteristic curves. A subsequent robustness analysis of the model parameters shows that the fast part of the system may act as a filter for the slow part of the system; the sensitivities of the fast system are conserved. From these findings it is concluded that the slow part of the system shows some robustness against changes in parameters of the fast subsystem, i.e. if a parameter shows no sensitivity for the fast part of the system, it will also show no sensitivity for the slow part of the system.

9.
Growth modelling is an essential prerequisite for evaluating the consequences of a particular management action on the future development of a forest ecosystem. Mathematical growth models are not available for many tree species in India. The objectives of this study were to estimate potential stand density and to model actual tree density and basal area development in pure even-aged stands of Eucalyptus camaldulensis. A relationship between quadratic mean diameter and stems ha^-1 was developed, and parameter values of this relationship were used to establish the limiting density line. Two different models were compared to describe the natural decrease in stem number. The model including site index as one of the variables performed slightly better than the model without site index. Seven different stand-level models were also compared for predicting basal area in the stands. The models tested in this paper belong to the path-invariant algebraic difference form of a nonlinear model. They can be used to predict future basal area as a function of stand variables such as initial basal area, age or dominant height, and stems ha^-1, and are crucial for evaluating different silvicultural treatment options. The performance of the basal area models was evaluated using different quantitative criteria. Among the seven models tested, the two models proposed by Pienaar and Shiver and by Forss et al. had the best performance. The equations proposed to predict future basal area and stem number are related and, therefore, a simultaneous regression technique has also been used to investigate the differences between parameter coefficients obtained by fitting the equations separately and jointly.
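A sketch of establishing a limiting density line as a log-log regression of stems per hectare on quadratic mean diameter; the stand values below are hypothetical:

```python
# Linear fit of ln(stems per ha) against ln(quadratic mean diameter).
import numpy as np

dq = np.array([8, 10, 12, 15, 18, 22, 26], dtype=float)       # quadratic mean diameter, cm (hypothetical)
stems = np.array([3200, 2300, 1700, 1150, 820, 570, 420.0])   # stems per ha (hypothetical)

slope, intercept = np.polyfit(np.log(dq), np.log(stems), 1)
print("limiting density line: ln(N) = %.2f %+.2f * ln(Dq)" % (intercept, slope))
```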

10.
Although many mathematical models exist predicting the dynamics of transposable elements (TEs), there is a lack of available empirical data to validate these models and inherent assumptions. Genomes can provide a snapshot of several TE families in a single organism, and these could have their demographics inferred by coalescent analysis, allowing for the testing of theories on TE amplification dynamics. Using the available genomes of the mosquitoes Aedes aegypti and Anopheles gambiae, we indicate that such an approach is feasible. Our analysis follows four steps: (1) mining the two mosquito genomes currently available in search of TE families; (2) fitting, to selected families found in (1), a phylogeny tree under the general time-reversible (GTR) nucleotide substitution model with an uncorrelated lognormal (UCLN) relaxed clock and a nonparametric demographic model; (3) fitting a nonparametric coalescent model to the tree generated in (2); and (4) fitting parametric models motivated by ecological theories to the curve generated in (3).

11.
Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we found that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and by analysis of the glucose production levels when the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment are discussed.
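A small illustration of fitting the cumulative Weibull form y(t) = ymax * (1 - exp(-(t/λ)^n)) to a saccharification time course; the data points and starting values are invented and this is not the authors' dataset:

```python
# Fit a Weibull saccharification curve; lam is the characteristic time.
import numpy as np
from scipy.optimize import curve_fit

def weibull(t, ymax, lam, n):
    return ymax * (1.0 - np.exp(-(t / lam) ** n))

t = np.array([2, 6, 12, 24, 48, 72, 96], dtype=float)        # hours
glucose = np.array([3, 9, 16, 26, 36, 41, 43], dtype=float)  # glucose released, g/L (hypothetical)

(ymax, lam, n), _ = curve_fit(weibull, t, glucose, p0=[45, 30, 1.0])
print("ymax=%.1f g/L, characteristic time lam=%.1f h, shape n=%.2f" % (ymax, lam, n))
```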

12.
The widely used “Maxent” software for modeling species distributions from presence-only data (Phillips et al., Ecological Modelling, 190, 2006, 231) tends to produce models with high predictive performance but low ecological interpretability, and implications of Maxent's statistical approach to variable transformation, model fitting, and model selection remain underappreciated. In particular, Maxent's approach to model selection through lasso regularization has been shown to give less parsimonious distribution models (that is, models which are more complex but not necessarily predictively better) than subset selection. In this paper, we introduce the MIAmaxent R package, which provides a statistical approach to modeling species distributions similar to Maxent's, but with subset selection instead of lasso regularization. The simpler models typically produced by subset selection are ecologically more interpretable, and making distribution models more grounded in ecological theory is a fundamental motivation for using MIAmaxent. To that end, the package executes variable transformation based on expected occurrence–environment relationships and contains tools for exploring data and interrogating models in light of knowledge of the modeled system. Additionally, MIAmaxent implements two different kinds of model fitting: maximum entropy fitting for presence-only data and logistic regression (GLM) for presence–absence data. Unlike Maxent, MIAmaxent decouples variable transformation, model fitting, and model selection, which facilitates methodological comparisons and gives the modeler greater flexibility when choosing a statistical approach to a given distribution modeling problem.

13.
Selecting the best-fit model of nucleotide substitution
Despite the relevant role of models of nucleotide substitution in phylogenetics, choosing among different models remains a problem. Several statistical methods for selecting the model that best fits the data at hand have been proposed, but their absolute and relative performance has not yet been characterized. In this study, we compare under various conditions the performance of different hierarchical and dynamic likelihood ratio tests, and of Akaike and Bayesian information methods, for selecting best-fit models of nucleotide substitution. We specifically examine the role of the topology used to estimate the likelihood of the different models and the importance of the order in which hypotheses are tested. We do this by simulating DNA sequences under a known model of nucleotide substitution and recording how often this true model is recovered by the different methods. Our results suggest that model selection is reasonably accurate and indicate that some likelihood ratio test methods perform overall better than the Akaike or Bayesian information criteria. The tree used to estimate the likelihood scores does not influence model selection unless it is a randomly chosen tree. The order in which hypotheses are tested, and the complexity of the initial model in the sequence of tests, influence model selection in some cases. Model fitting in phylogenetics has been suggested for many years, yet many authors still arbitrarily choose their models, often using the default models implemented in standard computer programs for phylogenetic estimation. We show here that a best-fit model can be readily identified. Consequently, given the relevance of models, model fitting should be routine in any phylogenetic analysis that uses models of evolution.
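A toy example of ranking substitution models by AIC and BIC from maximised log-likelihoods; the likelihood values, parameter counts and alignment length are made up for illustration:

```python
# AIC = 2k - 2 lnL; BIC = k ln(n) - 2 lnL, with n taken as the alignment length.
import math

models = {            # name: (maximised log-likelihood, number of free parameters)
    "JC69":  (-5230.4, 0),
    "HKY85": (-5101.7, 4),
    "GTR+G": (-5087.2, 9),
}
n_sites = 1200        # alignment length (hypothetical)

for name, (lnL, k) in models.items():
    aic = 2 * k - 2 * lnL
    bic = k * math.log(n_sites) - 2 * lnL
    print(f"{name:7s} AIC={aic:9.1f}  BIC={bic:9.1f}")
```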

14.
A novel mathematical model of the actin dynamics in living cells under steady-state conditions has been developed for fluorescence recovery after photobleaching (FRAP) experiments. As opposed to other FRAP fitting models, which use the average lifetime of actins in filaments and the actin turnover rate as fitting parameters, our model operates with unbiased actin association/dissociation rate constants and accounts for the filament length. The mathematical formalism is based on a system of stochastic differential equations. The derived equations were validated on synthetic theoretical data generated by a stochastic simulation algorithm adapted for the simulation of FRAP experiments. Consistent with experimental findings, the results of this work showed that (1) fluorescence recovery is a function of the average filament length, (2) the F-actin turnover and the FRAP are accelerated in the presence of actin nucleating proteins, (3) the FRAP curves may exhibit both linear and non-linear behaviour depending on the parameters of actin polymerisation, and (4) our model resulted in more accurate parameter estimates of actin dynamics as compared with other FRAP fitting models. Additionally, we provide a computational tool that integrates the model and that can be used for interpretation of FRAP data on the actin cytoskeleton.

15.
A sixth order nonlinear model for horizontal head rotations in humans is analyzed using an extended parameter sensitivity analysis and a global optimization algorithm. The sensitivity analysis is used in both the direct sense, as a model fitting tool, and in the indirect sense, as a guide to experimental design. Resolution is defined in terms of the sensitivity table, and is used to interpret the sensitivity results. Using sensitivity analyses, the head and eye movement systems are compared and contrasted. Controller signal parameters are the most influential. Their variations and effects on head movement trajectories and accelerations are investigated, and the conclusions are compared with clinical neurological findings. The global optimization algorithm, in addition to automating the fitting of various types of data, is combined with time optimality theory to give theoretical time-optimal inputs to the model.

16.
With the increasing flow of biological data there is a growing demand for mathematical tools whereby essential aspects of complex causal dynamic models can be captured and detected by simpler mathematical models without sacrificing too much of the realism provided by the original ones. Given the presence of a time scale hierarchy, singular perturbation techniques represent an elegant method for making such minimised mathematical representations. Any reduction of a complex model by singular perturbation methods is a targeted reduction by the fact that one has to pick certain mechanisms, processes or aspects thought to be essential in a given explanatory context. Here we illustrate how such a targeted reduction of a complex model of melanogenesis in mammals recently developed by the authors provides a way to improve the understanding of how the melanogenic system may behave in a switch-like manner between production of the two major types of melanins. The reduced model is shown by numerical means to be in good quantitative agreement with the original model. Furthermore, it is shown how the reduced model discloses hidden robustness features of the full model, and how the making of a reduced model represents an efficient analytical sensitivity analysis. In addition to yielding new insights concerning the melanogenic system, the paper provides an illustration of a protocol that could be followed to make validated simplifications of complex biological models possessing time scale hierarchies.

17.
A wide diversity of models have been proposed to account for the spiking response of central neurons, from the integrate-and-fire (IF) model and its quadratic and exponential variants, to multiple-variable models such as the Izhikevich (IZ) model and the well-known Hodgkin–Huxley (HH) type models. Such models can capture different aspects of the spiking response of neurons, but there are few objective comparisons of their performance. In this article, we provide such a comparison in the context of well-defined stimulation protocols, including, for each cell, DC stimulation and a series of excitatory conductance injections in the presence of synaptic background activity. We use the dynamic-clamp technique to characterize the response of regular-spiking neurons from guinea-pig visual cortex by computing families of post-stimulus time histograms (PSTHs), for different stimulus intensities and for two different background activities (low- and high-conductance states). The data obtained are then used to fit different classes of models such as the IF, IZ, or HH types, which are constrained by the whole data set. This analysis shows that HH models are generally more accurate in fitting the series of experimental PSTHs, but their performance is almost equaled by much simpler models, such as the exponential or pulse-based IF models. Similar conclusions were also reached by performing partial fitting of the data and examining the ability of different models to predict responses that were not used for the fitting. Although such results must be qualified by using more sophisticated stimulation protocols, they suggest that nonlinear IF models can capture the response of cortical regular-spiking neurons surprisingly well and appear as useful candidates for network simulations with conductance-based synaptic interactions.
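A minimal leaky integrate-and-fire simulation under DC input, as a generic stand-in for the simplest model class compared in this study; all parameter values are assumed, not taken from the recordings:

```python
# Leaky integrate-and-fire neuron driven by a constant current.
import numpy as np

dt, T = 0.1e-3, 0.5                  # time step and duration, s
tau_m, R, v_rest = 20e-3, 100e6, -65e-3
v_thresh, v_reset = -50e-3, -65e-3
i_dc = 0.2e-9                        # DC input current, A (assumed)

v = v_rest
spike_times = []
for step in range(int(T / dt)):
    v += dt / tau_m * (-(v - v_rest) + R * i_dc)   # membrane equation (Euler step)
    if v >= v_thresh:
        spike_times.append(step * dt)
        v = v_reset                                 # reset after a spike

print("firing rate: %.1f Hz" % (len(spike_times) / T))
```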

18.
19.
20.
From the perspective of mechanistic analysis, a mathematical model of inhalation anaesthetics during the induction period is studied. Using compartmental analysis, a pharmacokinetic model of blood drug concentration in the human body and a model of the inhaled dose are established, and their exact solutions are derived. Numerical calculations are then performed on cumulative dosing records of sevoflurane; the correlation coefficients between the pharmacokinetic model and the inhaled-dose model and the measured data are 0.9865 and 0.8874, respectively. Finally, the fitted model curves are presented.
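A much-simplified one-compartment sketch of blood concentration rising under a constant inhaled input; the rate constants and volume are invented and this is not the compartmental model of the cited study:

```python
# One-compartment pharmacokinetic model with constant input during induction.
import numpy as np
from scipy.integrate import solve_ivp

k_in, k_el, V = 2.0, 0.25, 40.0      # uptake rate (mg/min), elimination rate (1/min), volume (L); assumed

def dcdt(t, c):
    return [k_in / V - k_el * c[0]]  # dC/dt = input per volume minus first-order elimination

sol = solve_ivp(dcdt, (0.0, 30.0), [0.0], t_eval=np.linspace(0, 30, 7))
for t, c in zip(sol.t, sol.y[0]):
    print("t=%4.1f min  C=%.3f mg/L" % (t, c))
```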

