Similar Articles
1.
There is a need for efficient modeling strategies which quickly lead to reliable mathematical models that can be applied for design and optimization of (bio)chemical processes. The serial gray box modeling strategy is potentially very efficient because no detailed knowledge is needed to construct the white box part of the model and because convenient black box modeling techniques like neural networks can be used for the black box part of the model. This paper shows for a typical biochemical conversion how the serial gray box modeling strategy can be applied efficiently to obtain a model with good frequency extrapolation properties. Models with good frequency extrapolation properties can be applied under dynamic conditions that were not present during the identification experiments. For a given application domain of a model, this property can be used to considerably reduce the number of identification experiments. The serial gray box modeling strategy is demonstrated to be successful for the modeling of the enzymatic conversion of penicillin G in the concentration range of 10-100 mM and temperature range of 298-335 K. Frequency extrapolation is shown by using only constant temperatures in the (batch) identification experiments, while the model can be used reliably with varying temperatures during the (batch) validation experiments. No reliable frequency extrapolation properties could be obtained for a black box model, and for a more knowledge-driven white box model reliable frequency extrapolation properties could only be obtained by incorporating more knowledge in the model. Copyright 1999 John Wiley & Sons, Inc.
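The serial gray-box idea can be sketched in a few lines: a black-box regressor supplies an unknown kinetic parameter that is fed into a white-box mass balance. Everything below is illustrative only, not the paper's actual model: the Michaelis-Menten kinetics, the quadratic stand-in for the neural network, and all parameter values are assumptions.

```python
import numpy as np

def black_box_vmax(T):
    """Stand-in for a trained neural network mapping temperature (K) to a
    maximum conversion rate (mM/min). Hypothetical fitted quadratic."""
    coeffs = np.array([-3.1e-3, 2.1, -340.0])   # assumed coefficients
    return max(np.polyval(coeffs, T), 0.0)

def simulate_batch(S0, T_profile, Km=25.0, dt=0.1):
    """White-box batch mass balance dS/dt = -vmax(T) * S / (Km + S),
    integrated with explicit Euler over a (possibly varying) T profile."""
    S = [S0]
    for T in T_profile:
        vmax = black_box_vmax(T)                # serial coupling: BB feeds WB
        dS = -vmax * S[-1] / (Km + S[-1])
        S.append(max(S[-1] + dt * dS, 0.0))
    return np.array(S)

profile = np.full(200, 310.0)                   # isothermal identification run
S = simulate_batch(100.0, profile)              # 100 mM initial substrate
```

Because the temperature dependence lives entirely in the black-box part, the same simulator can be rerun with a time-varying `T_profile`, which is the frequency-extrapolation setting the abstract describes.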

2.
For single channel recordings, the maximum likelihood estimation (MLE) of kinetic rates and conductance is well established. A direct extrapolation of this method to macroscopic currents is computationally prohibitive: it scales as a power of the number of channels. An approximated MLE that ignored the local time correlation of the data has been shown to provide estimates of the kinetic parameters. In this article, an improved approximated MLE that takes into account the local time correlation is proposed. This method estimates the channel kinetics using both the time course and the random fluctuations of the macroscopic current generated by a homogeneous population of ion channels under white noise. It allows arbitrary kinetic models and stimulation protocols. The application of the proposed algorithm to simulated data from a simple three-state model under nonstationary conditions showed reliable estimates of all the kinetic constants, the conductance and the number of channels, and reliable values for the standard error of those estimates. Compared to the previous approximated MLE, it reduces by a factor of 10 the amount of data needed to secure a given accuracy, and it can even determine the kinetic rates under macroscopic stationary conditions.
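The flavor of an approximated macroscopic likelihood can be sketched for a three-state scheme. This is my simplification corresponding to the *earlier* approximation the abstract improves on (Gaussian marginals, time correlations ignored); the C1 ↔ C2 ↔ O scheme, rate names, and numbers are all assumptions.

```python
import numpy as np
from scipy.linalg import expm

def q_matrix(k12, k21, k23, k32):
    """Generator matrix of a hypothetical C1 <-> C2 <-> O scheme (ms^-1)."""
    return np.array([[-k12,        k12,   0.0],
                     [ k21, -(k21+k23),   k23],
                     [ 0.0,        k32,  -k32]])

def neg_log_lik(rates, current, n_channels=1000, g=1.0, dt=0.1):
    """Approximate NLL of a current trace: each sample treated as Gaussian
    with mean N*g*p_open and binomial variance N*g^2*p_open*(1-p_open)."""
    P = expm(q_matrix(*rates) * dt)      # transition matrix per sample
    p = np.array([1.0, 0.0, 0.0])        # all channels start in C1
    nll = 0.0
    for y in current:
        p = p @ P
        mu = n_channels * g * p[2]
        var = n_channels * g**2 * p[2] * (1.0 - p[2]) + 1e-9
        nll += 0.5 * (np.log(2.0 * np.pi * var) + (y - mu) ** 2 / var)
    return nll

# Synthetic "observation": the model's own noise-free mean current.
true_rates = (0.5, 0.2, 0.8, 0.4)
P = expm(q_matrix(*true_rates) * 0.1)
p, trace = np.array([1.0, 0.0, 0.0]), []
for _ in range(200):
    p = p @ P
    trace.append(1000.0 * p[2])
nll = neg_log_lik(true_rates, np.array(trace))
```

The improved estimator in the abstract additionally models the local time correlation of the fluctuations, which this per-sample factorization deliberately omits.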

3.
Identification of models of gene regulatory networks is sensitive to the amount of data used as input. Considering the substantial costs in conducting experiments, it is of value to have an estimate of the amount of data required to infer the network structure. To minimize wasted resources, it is also beneficial to know which data are necessary to identify the network. Knowledge of the data and knowledge of the terms in polynomial models are often required a priori in model identification. In applications, it is unlikely that the structure of a polynomial model will be known, which may force data sets to be unnecessarily large in order to identify a model. Furthermore, none of the known results provides any strategy for constructing data sets to uniquely identify a model. We provide a specialization of an existing criterion for deciding when a set of data points identifies a minimal polynomial model when its monomial terms have been specified. Then, we relax the requirement of the knowledge of the monomials and present results for model identification given only the data. Finally, we present a method for constructing data sets that identify minimal polynomial models.
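The known-monomials case has a simple linear-algebra reading that can be sketched directly (this is my construction, not the paper's criterion): a data set uniquely determines the coefficients of a polynomial model with fixed monomial support exactly when the monomial-evaluation matrix has full column rank.

```python
import numpy as np

def evaluation_matrix(points, exponents):
    """Rows: data points; columns: monomials given as exponent tuples."""
    return np.array([[np.prod(np.asarray(p, float) ** np.asarray(e))
                      for e in exponents] for p in points])

def identifies_model(points, exponents):
    """True iff the points pin down a unique coefficient vector."""
    M = evaluation_matrix(points, exponents)
    return np.linalg.matrix_rank(M) == len(exponents)

# Example over two variables with monomial support {1, x, y, xy}:
monos = [(0, 0), (1, 0), (0, 1), (1, 1)]
pts_good = [(0, 0), (1, 0), (0, 1), (1, 1)]
# Points on the line y = x make the x and y columns identical -> rank deficient:
pts_bad = [(0, 0), (1, 1), (2, 2), (3, 3)]
```

The rank-deficient case illustrates why unstructured data sets may need to be unnecessarily large: extra points on the same line never separate the x and y terms.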

4.
Summary. A reliable extrapolation of neurochemical alterations from a mouse model to human metabolic brain disease requires knowledge of neurotransmitter levels and related compounds in control mouse brain. C57BL/6 is a widely used background strain for knockout and transgenic mouse models. A prerequisite for reliable extrapolation from mouse brain to the human condition is the existence of analogous distribution patterns of neurotransmitters and related compounds in control mouse and human brain. We analysed regional distribution patterns of biogenic amines, neurotransmitter and non-neurotransmitter amino acids, and cholinergic markers. Distribution patterns were compared with known neurotransmitter pathways in human brain. The present study provides a reference work for future analyses of neurotransmitters and related compounds in mouse models bred in a C57BL/6 background strain.

5.
In recent years, hybrid neural network approaches, which combine mechanistic and neural network models, have received considerable attention. These approaches are potentially very efficient for obtaining more accurate predictions of process dynamics by combining mechanistic and neural network models in such a way that the neural network model properly accounts for unknown and nonlinear parts of the mechanistic model. In this work, a full-scale coke-plant wastewater treatment process was chosen as a model system. Initially, a process data analysis was performed on the actual operational data by using principal component analysis. Next, a simplified mechanistic model and a neural network model were developed based on the specific process knowledge and the operational data of the coke-plant wastewater treatment process, respectively. Finally, the neural network was incorporated into the mechanistic model in both parallel and serial configurations. Simulation results showed that the parallel hybrid modeling approach achieved much more accurate predictions with good extrapolation properties as compared with the other modeling approaches even in the case of process upset caused by, for example, shock loading of toxic compounds. These results indicate that the parallel hybrid neural modeling approach is a useful tool for accurate and cost-effective modeling of biochemical processes, in the absence of other reasonably accurate process models.
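The parallel configuration can be sketched as a residual model: the data-driven part is fit to the mechanistic model's errors and its output is added to the mechanistic prediction. Everything below is illustrative (a Monod-like toy plant, a least-squares polynomial standing in for the ANN), not the paper's wastewater model.

```python
import numpy as np

def mechanistic(load):
    """Hypothetical first-principles prediction of removal vs. load."""
    return load / (1.0 + 0.8 * load)

rng = np.random.default_rng(0)
load = rng.uniform(0.5, 5.0, 100)
truth = load / (1.0 + 0.8 * load) + 0.05 * load**2   # unmodeled nonlinearity
resid = truth - mechanistic(load)

# Fit the residual r(load) ~ a*load^2 + b*load + c (ANN stand-in):
A = np.vstack([load**2, load, np.ones_like(load)]).T
coef, *_ = np.linalg.lstsq(A, resid, rcond=None)

def hybrid(x):
    """Parallel hybrid: mechanistic prediction plus learned residual."""
    return mechanistic(x) + np.polyval(coef, x)

err_mech = np.abs(truth - mechanistic(load)).mean()
err_hyb = np.abs(truth - hybrid(load)).mean()
```

The serial alternative (as in item 1 above) would instead feed the data-driven estimate *into* the mechanistic equations rather than adding it to their output.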

6.
This paper reports on the comparison of three modeling approaches that were applied to a fed-batch evaporative sugar crystallization process. They are termed white box, black box, and grey box modeling strategies, which reflects the level of physical transparency and understanding of the model. White box models represent the traditional modeling approach, based on modeling by first principles. Black box models rely on recorded process data and knowledge collected during normal process operation. Among the various tools in this group, an artificial neural network (ANN) approach is adopted in this paper. The grey box model is obtained from a combination of first-principles modeling, based on mass, energy and population balances, with an ANN that approximates three kinetic parameters: the crystal growth rate, the nucleation rate and the agglomeration kernel. The results have shown that the hybrid modeling approach outperformed the other aforementioned modeling strategies.

7.
Biomechanical macroscopic models of the muscle organ as a whole are conceptually limited in explaining muscle function in relation to structure. Examples are Hill-type and rheological muscle models, in which the elastic properties of the muscle's contractile element are approximated by a spring arranged in series or in parallel, respectively. A new scaling model of the activated muscle powering a particular function is proposed. This model is based on a suggested physical similarity between the action-production muscle force and the resulting reactive elastic muscle forces. Considered at a macroscopic scale, this force similarity provides four patterns of constraints on the development of muscle architecture in different-sized animals. As a result, the analytical modeling predicts the primary motor, brake, strut and spring functions of individual muscles revealed earlier in work-loop experiments, now provided in terms of the scaling exponents for muscle cross-sectional area and fiber length. The model's reliability is tested against muscle allometric data available in the literature. The conceptual outcome of the study is that the architectural design of skeletal muscles is likely affected by the powering contractions of fast fibers, which are known to have higher myofibril volume than slow fibers.

8.
At the 2011 Yale Chemical Biology Symposium, Jason Gestwicki presented a novel yet intuitive approach to drug screening. This method, which he termed "gray box" screening, targets protein complexes that have been reconstituted in vitro. Therefore, the gray box screen can achieve greater phenotypic complexity than biochemical assays but avoids the need for target identification that follows cell-based assays. Dr. Gestwicki's research group was able to use the gray box screen to identify myricetin as an inhibitor of the DnaK-DnaJ chaperone complex. This review will discuss Dr. Gestwicki's approach to identifying DnaK-DnaJ inhibitors as well as where the gray box screen fits among traditional techniques in drug discovery.

9.
The migration of chemotactic bacteria in liquid media has previously been characterized in terms of two fundamental transport coefficients: the random motility coefficient and the chemotactic sensitivity coefficient. For modeling migration in porous media, we have shown that these coefficients, which appear in macroscopic balance equations, can be replaced by effective values that reflect the impact of the porous medium on the swimming behavior of individual bacteria. Explicit relationships between values of the coefficients in porous and liquid media were derived. This type of quantitative analysis of bacterial migration is necessary for predicting bacterial population distributions in subsurface environments for applications such as in situ bioremediation, in which bacteria respond chemotactically to the pollutants that they degrade. We analyzed bacterial penetration times through sand columns from two different experimental studies reported in the literature within the context of our mathematical model to evaluate the effective transport coefficients. Our results indicated that the presence of the porous medium reduced the random motility of the bacterial population by a factor comparable to the theoretical prediction. We were unable to determine the effect of the porous medium on the chemotactic sensitivity coefficient because no chemotactic response was observed in the experimental studies. However, the mathematical model was instrumental in developing a plausible explanation for why no chemotactic response was observed: the chemical gradients may have been too shallow over most of the sand core to elicit a measurable response. © 1997 John Wiley & Sons, Inc. Biotechnol Bioeng 53: 487-496, 1997.
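A back-of-envelope version of extracting an effective motility from a penetration time can be sketched under a pure-diffusion simplification (not the paper's full balance-equation model; every number below is hypothetical): if random motility acts like a diffusion coefficient, the front-penetration time for a column of length L scales as t ~ L²/(2μ).

```python
def effective_motility(L_cm, t_penetration_s):
    """Effective random motility (cm^2/s) from a diffusive penetration-time
    scaling t ~ L^2 / (2*mu). A deliberate simplification."""
    return L_cm**2 / (2.0 * t_penetration_s)

# Hypothetical observation: a 1 cm sand core penetrated in ~24 h.
mu_eff = effective_motility(1.0, 24 * 3600.0)

# Assumed liquid-medium motility for the comparison (illustrative value):
mu_liquid = 2.5e-5                      # cm^2/s
retardation = mu_liquid / mu_eff        # factor by which the medium slows motility
```

A retardation factor greater than one is the qualitative outcome the abstract reports; comparing it against the theoretically predicted factor is the paper's actual test.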

10.
11.
12.
A computational strategy for homology modeling based on the comparison of several protein structures is described. This strategy involves a formalized definition of structural blocks common to several protein structures, a new program to compare these structures simultaneously, and the use of consensus matrices to improve the sequence alignment between the structurally known and target proteins. Applying this method to cytochromes P450 led to the definition of 15 substructures common to P450cam, P450BM3, and P450terp, and to a proposed 3D model of P450eryF. Proteins 28:388–404, 1997 © 1997 Wiley-Liss, Inc.

13.
Primarily used for metabolic engineering and synthetic biology, genome-scale metabolic modeling shows tremendous potential as a tool for fundamental research and curation of metabolism. Through a novel integration of flux balance analysis and genetic algorithms, a strategy to curate metabolic networks and facilitate identification of metabolic pathways that may not be directly inferable solely from genome annotation was developed. Specifically, metabolites involved in unknown reactions can be determined, and potentially erroneous pathways can be identified. The procedure developed allows for new fundamental insight into metabolism, as well as acting as a semi-automated curation methodology for genome-scale metabolic modeling. To validate the methodology, a genome-scale metabolic model for the bacterium Mycoplasma gallisepticum was created. Several reactions not predicted by the genome annotation were postulated and validated via the literature. The model predicted an average growth rate of 0.358±0.12, closely matching the experimentally determined growth rate of M. gallisepticum of 0.244±0.03. This work presents a powerful algorithm for facilitating the identification and curation of previously known and new metabolic pathways, as well as presenting the first genome-scale reconstruction of M. gallisepticum.
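The flux balance analysis core of such a pipeline is a linear program and can be sketched on a toy network (the stoichiometry below is invented for illustration, not the M. gallisepticum reconstruction): maximize a biomass flux subject to steady-state mass balance S·v = 0 and flux bounds.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network. Reactions: R1: -> A,  R2: A -> B,  R3: A -> C,  R4: B + C -> (biomass)
S = np.array([
    #  R1  R2  R3  R4
    [  1, -1, -1,  0],   # metabolite A
    [  0,  1,  0, -1],   # metabolite B
    [  0,  0,  1, -1],   # metabolite C
])

c = [0, 0, 0, -1]                        # linprog minimizes, so negate biomass flux
bounds = [(0, 10), (0, None), (0, None), (0, None)]   # uptake R1 capped at 10

res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
v = res.x                                # optimal steady-state flux distribution
```

Here the uptake cap of 10 splits evenly into the B and C branches, so the biomass flux v4 settles at 5. In the curation strategy the abstract describes, a genetic algorithm would additionally search over candidate reactions to add or remove around such an LP core.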

14.
Although much is known about how humans psychologically perform data-driven scientific discovery, less is known about its brain mechanism. Number series completion is a typical data-driven scientific discovery task and has been demonstrated to show a priming effect, attributed to regularity identification and its subsequent extrapolation. In order to reduce heterogeneity and make the experimental task suitable for a brain imaging study, the number magnitudes and arithmetic operations involved in the number series completion tasks were further restricted. Behavioral performance in Experiment 1 shows the expected reliable priming effect for targets. A factorial design (priming effect: prime vs. target; period length: simple vs. complex) of event-related functional magnetic resonance imaging (fMRI) was then used in Experiment 2 to examine the neural basis of data-driven scientific discovery. The fMRI results reveal a double dissociation of the left DLPFC (dorsolateral prefrontal cortex) and the left APFC (anterior prefrontal cortex) between the simple (period length = 1) and the complex (period length = 2) number series completion task. The priming effect in the left DLPFC is more significant for the simple task than for the complex task, while the priming effect in the left APFC is more significant for the complex task than for the simple task. This reliable double dissociation suggests different roles for the left DLPFC and left APFC in data-driven scientific discovery: the left DLPFC (BA 46) may play a crucial role in rule identification, while the left APFC (BA 10) may be related to the mental set maintenance needed during rule identification and extrapolation.

15.
Anatomical, physiological, biochemical and molecular factors that contribute to chemical-induced nasal carcinogenesis either differ greatly between test species and humans or are poorly understood. These factors, let alone the uncertainty associated with our knowledge gaps, present a risk assessor with the formidable task of making judgments about risks to human health from exposure to chemicals that rodent studies have identified as nasal carcinogens. This paper summarizes some of the critical attributes of the hazard identification and dose–response aspects of risk assessments for nasal carcinogens that risk assessors must account for in order to make informed decisions. Data on two example compounds, dimethyl sulfate and hexamethylphosphoramide, are discussed to illustrate the diversity of information that can be used to develop informed hypotheses about mode of action and decisions on appropriate dosimeters for interspecies extrapolation. Default approaches to interspecies dosimetry extrapolation are described briefly, followed by a discussion of a generalized physiologically based pharmacokinetic model that, unlike the default approaches, is flexible and capable of incorporating many of the critical species-specific factors. Recent advancements in interspecies nasal dosimetry modeling are remarkable. However, it is concluded that without research programs aimed at understanding carcinogenic susceptibility factors in human and rodent nasal tissues, the development of plausible modes of action will lag behind the advancements made in dosimetry modeling.
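One widely used default for interspecies dosimetry extrapolation is allometric scaling by body weight to the 3/4 power; a minimal sketch follows (a generic default, not specific to the nasal carcinogens or the PBPK model discussed; the example body weights are assumptions).

```python
def human_equivalent_dose(animal_dose_mg_kg, bw_animal_kg, bw_human_kg=70.0):
    """Scale a mg/kg/day animal dose to a human-equivalent dose via
    BW^(3/4) allometry: HED = dose * (BW_animal / BW_human)^(1/4)."""
    return animal_dose_mg_kg * (bw_animal_kg / bw_human_kg) ** 0.25

# E.g., 10 mg/kg/day in a 250 g rat scaled to a 70 kg human:
hed = human_equivalent_dose(10.0, 0.25)
```

The PBPK approach the paper favors replaces this single exponent with species-specific airflow, tissue and metabolism parameters, which is precisely why it can incorporate the critical factors the default cannot.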

16.
17.
Ecological niche modeling (ENM) is used widely to study species’ geographic distributions. ENM applications frequently involve transferring models calibrated with environmental data from one region to other regions or times that may include novel environmental conditions. When novel conditions are present, transferability implies extrapolation, whereas, in the absence of such conditions, transferability is an interpolation step only. We evaluated the transferability of models produced using 11 ENM algorithms from the perspective of interpolation and extrapolation in a virtual species framework. We defined fundamental niches and potential distributions of 16 virtual species distributed across Eurasia. To simulate real situations of incomplete understanding of a species’ distribution or existing fundamental niche (the environmental conditions suitable for the species contained in the study area; N*F), we divided Eurasia into six regions and used 1–5 regions for model calibration and the rest for model evaluation. The models produced with the 11 ENM algorithms were evaluated in environmental space, to complement the traditional geographic evaluation of models. None of the algorithms accurately estimated the existing fundamental niche (N*F) given one region in calibration, and model evaluation scores decreased as the novelty of the environments in the evaluation regions increased. Thus, we recommend quantifying environmental similarity between calibration and transfer regions prior to model transfer, providing an avenue for assessing the uncertainty of model transferability. Different algorithms had different sensitivity to the completeness of knowledge of N*F, with implications for algorithm selection. If the goal is to reconstruct fundamental niches, users should choose algorithms with limited extrapolation when N*F is well known, or algorithms with increased extrapolation when N*F is poorly known. Our assessment can inform applications of ecological niche model transfer to anticipate species invasions into novel areas, disease emergence in new regions, and forecasts of species distributions under future climate conditions.
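The recommended pre-transfer check, quantifying environmental similarity between calibration and transfer regions, can be sketched with a simplified MESS-style score (my min/max simplification of the usual percentile-based surface; the example data are invented): for each transfer point and variable, score how far the value falls inside or outside the calibration range, and flag negative minima as extrapolation.

```python
import numpy as np

def similarity(calib, transfer):
    """calib, transfer: arrays of shape (n_points, n_vars). Returns one
    score per transfer point; a value < 0 means at least one variable
    falls outside the calibration range (novel conditions)."""
    lo = calib.min(axis=0)
    hi = calib.max(axis=0)
    rng = np.where(hi > lo, hi - lo, 1.0)
    below = (transfer - lo) / rng        # negative where below the range
    above = (hi - transfer) / rng        # negative where above the range
    return np.minimum(below, above).min(axis=1) * 100.0

# Two environmental variables (e.g., temperature, precipitation):
calib = np.array([[10.0, 200.0], [20.0, 400.0], [15.0, 300.0]])
transfer = np.array([[12.0, 250.0],    # inside calibration ranges
                     [30.0, 300.0]])   # temperature above calibration range
scores = similarity(calib, transfer)
```

Transfer points with negative scores are the ones for which model output rests on extrapolation, so algorithm choice (clamped vs. unconstrained response curves) matters most there.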

18.
刘洋洋, 崔恒宓. 《遗传》 (Hereditas), 2015, 37(9): 939–944
To establish an effective method for assessing the cytosine conversion efficiency of bisulfite-treated DNA samples, two sets of TaqMan qPCR assays were applied to serial dilutions of bisulfite-converted and unconverted DNA standards, yielding standard curves relating Ct values to copy numbers for converted and unconverted DNA. The same probes were then used to quantify bisulfite-treated DNA samples and estimate their conversion efficiency. The results show that, using the two probe sets and their corresponding standard curves, the method accurately assesses the bisulfite conversion efficiency of a sample. Its reliability was confirmed using mixed DNA templates with known numbers of converted and unconverted copies. The conversion efficiencies achieved by different bisulfite treatment kits were also compared. The results show that this method effectively evaluates the bisulfite conversion efficiency of DNA samples, providing a reliable and rapid tool for accurate DNA methylation analysis.
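The two-standard-curve arithmetic described here is straightforward to sketch (generic qPCR math; the slopes, intercepts and Ct values below are hypothetical): each curve inverts Ct to a copy number, and the efficiency is the converted fraction.

```python
def copies_from_ct(ct, slope, intercept):
    """Invert a qPCR standard curve Ct = slope * log10(copies) + intercept."""
    return 10 ** ((ct - intercept) / slope)

def conversion_efficiency(ct_conv, ct_unconv, curve_conv, curve_unconv):
    """Fraction of template that was bisulfite-converted, from the Ct of
    the converted-specific and unconverted-specific assays."""
    n_conv = copies_from_ct(ct_conv, *curve_conv)
    n_unconv = copies_from_ct(ct_unconv, *curve_unconv)
    return n_conv / (n_conv + n_unconv)

# Hypothetical curves (slope near -3.32 corresponds to ~100% PCR efficiency):
eff = conversion_efficiency(22.0, 30.0, (-3.32, 38.0), (-3.32, 38.0))
```

With these assumed numbers the converted template outnumbers the unconverted by roughly two orders of magnitude, i.e. a conversion efficiency above 99%.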

19.
Compartmental models of infectious diseases readily represent known biological and epidemiological processes, are easily understood in flow-chart form by administrators, are simple to adjust to new information, and lend themselves to routine statistical analysis such as parameter estimation and model fitting. Technical results are immediately interpretable in epidemiological and public health terms. Deterministic models are easily stochasticized where this is important for practical purposes. With HIV/AIDS, serial data on both HIV prevalence and AIDS morbidity have been available from San Francisco. Assuming the distribution of the incubation period to be biologically stable, statistical analysis is quite feasible in other regions, even those with no reliable HIV data. Transmission rates must be estimated locally. It is also often possible to estimate the effective size of a population subgroup at risk, from population data on AIDS morbidity only. Computer simulation provides estimates of the evolving pattern of both HIV prevalence and AIDS morbidity. Some public health questions can be answered only by appropriately formulated stochastic models.
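A minimal deterministic compartmental simulation in this spirit can be sketched as follows (generic parameters chosen for illustration, not the San Francisco fit): susceptibles acquire HIV at a rate proportional to infectious contact, and infecteds progress to AIDS after a mean incubation of 1/gamma years.

```python
import numpy as np

def simulate(S0=10000.0, I0=10.0, beta=0.3, gamma=0.1, years=30, dt=0.01):
    """Euler-stepped S -> I(HIV) -> A(AIDS) compartmental model.
    beta: transmission rate /year; gamma: progression rate /year."""
    S, I, A = S0, I0, 0.0
    hiv_prev, aids_cum = [], []
    for _ in range(int(years / dt)):
        N = S + I                        # sexually active population
        new_inf = beta * S * I / N * dt  # new HIV infections this step
        prog = gamma * I * dt            # progressions to AIDS this step
        S -= new_inf
        I += new_inf - prog
        A += prog
        hiv_prev.append(I)
        aids_cum.append(A)
    return np.array(hiv_prev), np.array(aids_cum)

hiv, aids = simulate()
```

Fitting such a model jointly to serial HIV-prevalence and AIDS-morbidity data, with the incubation distribution held fixed, is the estimation setting the abstract describes; a stochasticized variant would replace the deterministic increments with random draws.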

20.
A strategy for processing metabolomic GC/MS data is presented. By considering the relationship between the quantity and quality of detected profiles, representative data suitable for multiple-sample comparisons and metabolite identification were generated. Design of experiments (DOE) and multivariate analysis were used to relate changes in the settings of the hierarchical multivariate curve resolution (H-MCR) method to quantitative and qualitative characteristics of the output data. These characteristics included the number of resolved profiles, chromatographic quality in terms of reproducibility between analytical replicates, and spectral quality defined by purity and the number of spectra containing structural information. The strategy was exemplified on two datasets: one containing 119 common metabolites, 18 of which were varied according to a DOE protocol, and one consisting of urine samples from control rats and rats exposed to a liver toxin. It was shown that the performance of the data processing could be optimized to produce metabolite data of high quality that allowed reliable sample comparisons and metabolite identification. This is a general approach applicable to any type of data processing where the important processing parameters are known and relevant output data characteristics can be defined. The results imply that this type of data quality optimization should be carried out as an integral step of data processing to ensure high-quality data for further modeling and biological evaluation. Within metabolomics, this degree of optimization will be of high importance for generating models and extracting biomarkers or biomarker patterns of biological or clinical relevance.
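The idea of treating processing settings as DOE factors and quality measures as responses can be sketched with a toy full-factorial evaluation (the settings, levels and objective below are invented stand-ins, not the actual H-MCR parameters or responses):

```python
from itertools import product

def quality(smoothing, threshold):
    """Hypothetical combined quality score: more resolved profiles at a low
    detection threshold, better spectral purity at moderate smoothing."""
    n_profiles = 120 - 40 * threshold
    purity = 1.0 - (smoothing - 0.5) ** 2
    return n_profiles * purity

# Full-factorial design over two processing settings, three levels each:
design = list(product([0.2, 0.5, 0.8],    # smoothing levels
                      [0.5, 1.0, 2.0]))   # threshold levels
best = max(design, key=lambda s: quality(*s))
```

In practice the responses (profile counts, replicate reproducibility, spectral purity) would be measured from actual processed runs, and multivariate analysis of the design results would point to the setting region worth refining.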
