Similar Documents
 20 similar documents found (search time: 562 ms)
1.
A universal feature of the biochemistry of any living system is that all the molecules and catalysts that are required for reactions of the system can be built up from an available food source by repeated application of reactions from within that system. RAF (reflexively autocatalytic and food-generated) theory provides a formal way to study such processes. Beginning with Kauffman’s notion of “collectively autocatalytic sets,” this theory has been further developed over the last decade with the discovery of efficient algorithms and new mathematical analysis. In this paper, we study how the behaviour of a simple binary polymer model can be extended to models where the pattern of catalysis more precisely reflects the ligation and cleavage reactions involved. We find that certain properties of these models are similar to, and can be accurately predicted from, the simple binary polymer model; however, other properties lead to slightly different estimates. We also establish a number of new results concerning the structure of RAFs in these systems.
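To make the "efficient algorithms" concrete, the following Python sketch implements the standard maxRAF-style reduction: iteratively discard reactions that are uncatalyzed, or whose reactants cannot be generated from the food set by the remaining reactions. The data structures and the toy example are illustrative assumptions, not the authors' implementation.

```python
def closure(food, reactions):
    """Molecules producible from the food set by repeatedly applying
    reactions whose reactants are all already available."""
    avail = set(food)
    changed = True
    while changed:
        changed = False
        for reactants, products, _ in reactions.values():
            if reactants <= avail and not products <= avail:
                avail |= products
                changed = True
    return avail

def max_raf(food, reactions):
    """Remove unsupported or uncatalyzed reactions until a fixed point;
    the result (if nonempty) is the maximal RAF subset."""
    current = dict(reactions)
    while True:
        avail = closure(food, current)
        kept = {name: (re, pr, ca) for name, (re, pr, ca) in current.items()
                if re <= avail and ca & avail}
        if len(kept) == len(current):
            return kept
        current = kept

# Toy binary-polymer example: 'ab' is formed by ligation of the food
# monomers and catalyzes its own formation, so {r1} is a RAF.
food = {"a", "b"}
reactions = {"r1": ({"a", "b"}, {"ab"}, {"ab"})}  # (reactants, products, catalysts)
print(max_raf(food, reactions))
```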

2.
Large sample theory of semiparametric models based on maximum likelihood estimation (MLE) with shape constraint on the nonparametric component is well studied. Relatively less attention has been paid to the computational aspect of semiparametric MLE. The computation of semiparametric MLE based on existing approaches such as the expectation‐maximization (EM) algorithm can be computationally prohibitive when the missing rate is high. In this paper, we propose a computational framework for semiparametric MLE based on an inexact block coordinate ascent (BCA) algorithm. We show theoretically that the proposed algorithm converges. This computational framework can be applied to a wide range of data with different structures, such as panel count data, interval‐censored data, and degradation data, among others. Simulation studies demonstrate favorable performance compared with existing algorithms in terms of accuracy and speed. Two data sets are used to illustrate the proposed computational method. We further implement the proposed computational method in R package BCA1SG, available at CRAN.
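The following Python sketch shows the generic shape of an inexact BCA loop of the kind the abstract describes: alternate a few inexpensive ascent steps on each block instead of solving each block subproblem exactly. The objective, the gradient callables, and the crude monotone projection are illustrative placeholders, not the BCA1SG implementation.

```python
import numpy as np

def inexact_bca(loglik, grad_theta, grad_lam, theta, lam,
                inner=5, lr=1e-2, tol=1e-9, max_iter=2000):
    prev = loglik(theta, lam)
    for _ in range(max_iter):
        for _ in range(inner):                # inexact update, parametric block
            theta = theta + lr * grad_theta(theta, lam)
        for _ in range(inner):                # inexact update, nonparametric block
            lam = lam + lr * grad_lam(theta, lam)
            lam = np.maximum.accumulate(lam)  # crude projection onto nondecreasing
        cur = loglik(theta, lam)              # sequences (the shape constraint)
        if abs(cur - prev) < tol:
            break
        prev = cur
    return theta, lam

# Toy usage: a separable concave objective with a monotone 'lam' block.
f  = lambda t, l: -np.sum((t - 1) ** 2) - np.sum((l - np.arange(3)) ** 2)
gt = lambda t, l: -2 * (t - 1)
gl = lambda t, l: -2 * (l - np.arange(3))
print(inexact_bca(f, gt, gl, np.zeros(2), np.zeros(3)))
```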

3.
Traditional (genome-scale) metabolic models of cellular growth involve an approximate biomass “reaction”, which specifies biomass composition in terms of precursor metabolites (such as amino acids and nucleotides). On the one hand, biomass composition is often not known exactly and may vary drastically between conditions and strains. On the other hand, the predictions of computational models crucially depend on biomass. Elementary flux modes (EFMs), which generate the flux cone, also depend on the biomass reaction. To better understand cellular phenotypes across growth conditions, we introduce and analyze new classes of elementary vectors for comprehensive (next-generation) metabolic models, involving explicit synthesis reactions for all macromolecules. Elementary growth modes (EGMs) are given by stoichiometry and generate the growth cone. Unlike EFMs, they are not support-minimal, in general, but cannot be decomposed “without cancellations”. In models with additional (capacity) constraints, elementary growth vectors (EGVs) generate a growth polyhedron and depend also on growth rate. However, EGMs/EGVs do not depend on the biomass composition. In fact, they cover all possible biomass compositions and can be seen as unbiased versions of elementary flux modes/vectors (EFMs/EFVs) used in traditional models. To relate the new concepts to other branches of theory, we consider autocatalytic sets of reactions. Further, we illustrate our results in a small model of a self-fabricating cell, involving glucose and ammonium uptake, amino acid and lipid synthesis, and the expression of all enzymes and the ribosome itself. In particular, we study the variation of biomass composition as a function of growth rate. In agreement with experimental data, low nitrogen uptake correlates with high carbon (lipid) storage.
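To make the distinction concrete, here is one standard formalization (notation follows common metabolic-network conventions and may differ in detail from the paper's). With stoichiometric matrix N, flux vector v, and growth rate mu:

```latex
C_{\mathrm{flux}}   = \{\, v : Nv = 0,\; v_{\mathrm{irr}} \ge 0 \,\}
\qquad\text{(flux cone, biomass reaction built into } N\text{)}
\\[4pt]
C_{\mathrm{growth}} = \{\, v : Nv \ge 0,\; v_{\mathrm{irr}} \ge 0 \,\},
\qquad c = \frac{Nv}{\mu} \ge 0
\qquad\text{(growth cone, all macromolecules explicit)}
```

EFMs are the support-minimal rays of the flux cone, whereas EGMs are the conformally non-decomposable elements of the growth cone, which is why they "cannot be decomposed without cancellations"; the recovered composition vector c shows why no biomass composition needs to be fixed in advance.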

4.
This paper presents new results from a detailed study of the structure of autocatalytic sets. We show how autocatalytic sets can be decomposed into smaller autocatalytic subsets, and how these subsets can be identified and classified. We then argue how this has important consequences for the evolvability, enablement, and emergence of autocatalytic sets. We end with some speculation on how all this might lead to a generalized theory of autocatalytic sets, which could possibly be applied to entire ecologies or even economies.

5.
Ecological data sets often record the abundance of species, together with a set of explanatory variables. Multivariate statistical methods are optimal to analyze such data and are thus frequently used in ecology for exploration, visualization, and inference. Most approaches are based on pairwise distance matrices instead of the sites‐by‐species matrix, which stands in stark contrast to univariate statistics, where data models, assuming specific distributions, are the norm. However, through advances in statistical theory and computational power, models for multivariate data have gained traction. Systematic simulation‐based performance evaluations of these methods are important as guides for practitioners but still lacking. Here, we compare two model‐based methods, multivariate generalized linear models (MvGLMs) and constrained quadratic ordination (CQO), with two distance‐based methods, distance‐based redundancy analysis (dbRDA) and canonical correspondence analysis (CCA). We studied the performance of the methods to discriminate between causal variables and noise variables for 190 simulated data sets covering different sample sizes and data distributions. MvGLM and dbRDA differentiated accurately between causal and noise variables. The former had the lowest false‐positive rate (0.008), while the latter had the lowest false‐negative rate (0.027). CQO and CCA had the highest false‐negative rate (0.291) and false‐positive rate (0.256), respectively, where these error rates were typically high for data sets with linear responses. Our study shows that both model‐ and distance‐based methods have their place in the ecologist's statistical toolbox. MvGLM and dbRDA are reliable for analyzing species–environment relations, whereas both CQO and CCA exhibited considerable flaws, especially with linear environmental gradients.
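As a tangible illustration of the model-based (MvGLM) idea, the Python sketch below fits one GLM per species against the predictors and aggregates the evidence per variable, in the spirit of mvabund's manyglm. The simulated data, the choice of statsmodels, and the aggregation by median p-value are illustrative assumptions, not the study's protocol.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_sites, n_species = 100, 12
causal = rng.normal(size=n_sites)      # variable that drives abundance
noise = rng.normal(size=n_sites)       # variable that does not
X = sm.add_constant(np.column_stack([causal, noise]))
lam = np.exp(0.5 + 0.8 * causal)       # Poisson mean depends only on 'causal'
Y = rng.poisson(lam[:, None], size=(n_sites, n_species))

# Per-species Poisson GLMs; collect Wald p-values for each predictor.
pvals = np.array([sm.GLM(Y[:, j], X, family=sm.families.Poisson())
                    .fit().pvalues[1:] for j in range(n_species)])
print("median p-value (causal, noise):", np.median(pvals, axis=0))
```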

6.
7.
Non-isothermal thermogravimetric analysis (TGA) data of biomasses and pulps originating from non-wood and alternative materials (e.g., Tagasaste or rice straw) have been fitted with refined models, which include autocatalytic kinetics. Data sets were obtained for different experimental conditions, such as variations of heating rate and atmosphere, i.e., inert (pyrolysis) versus oxidative atmosphere (combustion). Besides providing access to classical kinetic parameters (pre-exponential factor, activation energy, and reaction order), the improved data analysis enabled the determination of the chemical composition of the samples (cellulose, hemicellulose, extractives, lignin). The latter compared very well with those obtained by conventional methods (chemical analysis, HPLC). Given the reduced environmental impact and rapidity of the method, potential applications for research related to new biomasses and industrial processes can be foreseen.
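A typical autocatalytic rate law for fitting such non-isothermal TGA curves is the extended Prout–Tompkins / Šesták–Berggren form, shown here as a representative example (the paper's refined models may differ):

```latex
\frac{d\alpha}{dt} = A\, e^{-E_a/(RT)}\, (1-\alpha)^{n}\, \alpha^{m},
\qquad T = T_0 + \beta t
```

where alpha is the conversion, A the pre-exponential factor, E_a the activation energy, n the reaction order, m the autocatalytic exponent, and beta the constant heating rate; setting m = 0 recovers the classical non-autocatalytic n-th-order model.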

8.
Gaussian processes for machine learning   (Cited by: 13; self-citations: 0; citations by others: 13)
Gaussian processes (GPs) are natural generalisations of multivariate Gaussian random variables to infinite (countably or continuous) index sets. GPs have been applied in a large number of fields to a diverse range of ends, and very many deep theoretical analyses of various properties are available. This paper gives an introduction to Gaussian processes on a fairly elementary level with special emphasis on characteristics relevant in machine learning. It draws explicit connections to branches such as spline smoothing models and support vector machines in which similar ideas have been investigated. Gaussian process models are routinely used to solve hard machine learning problems. They are attractive because of their flexible non-parametric nature and computational simplicity. Treated within a Bayesian framework, very powerful statistical methods can be implemented which offer valid estimates of uncertainties in our predictions and generic model selection procedures cast as nonlinear optimization problems. Their main drawback of heavy computational scaling has recently been alleviated by the introduction of generic sparse approximations [13, 78, 31]. The mathematical literature on GPs is large and often uses deep concepts which are not required to fully understand most machine learning applications. In this tutorial paper, we aim to present characteristics of GPs relevant to machine learning and to point out precise connections to other "kernel machines" popular in the community. Our focus is on a simple presentation, but references to more detailed sources are provided.
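Since the abstract turns on the GP posterior and its computational cost, here is a minimal numpy sketch of exact GP regression with an RBF kernel and fixed hyperparameters; this is a generic textbook construction, not the paper's code.

```python
import numpy as np

def rbf(a, b, ls=1.0, var=1.0):
    """Squared-exponential kernel on 1-D inputs."""
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 20)                    # training inputs
y = np.sin(X) + 0.1 * rng.normal(size=X.size) # noisy observations
Xs = np.linspace(-3, 3, 100)                  # test inputs

noise = 0.1 ** 2
K = rbf(X, X) + noise * np.eye(X.size)
L = np.linalg.cholesky(K)                     # the O(n^3) bottleneck
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

Ks = rbf(X, Xs)
mean = Ks.T @ alpha                           # posterior mean at Xs
v = np.linalg.solve(L, Ks)
var = rbf(Xs, Xs).diagonal() - np.einsum('ij,ij->j', v, v)  # posterior variance
```

The Cholesky factorization is the cubic-cost step that the sparse approximations cited above are designed to avoid.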

9.
It is quite difficult to construct circuits of spiking neurons that can carry out complex computational tasks. On the other hand, even randomly connected circuits of spiking neurons can in principle be used for complex computational tasks such as time-warp invariant speech recognition. This is possible because such circuits have an inherent tendency to integrate incoming information in such a way that simple linear readouts can be trained to transform the current circuit activity into the target output for a very large number of computational tasks. Consequently, we propose to analyze circuits of spiking neurons in terms of their roles as analog fading memory and non-linear kernels, rather than as implementations of specific computational operations and algorithms. This article is a sequel to [W. Maass, T. Natschläger, H. Markram, Real-time computing without stable states: a new framework for neural computation based on perturbations, Neural Comput. 14 (11) (2002) 2531-2560, Online available as #130 from: ], and contains new results about the performance of generic neural microcircuit models for the recognition of speech that is subject to linear and non-linear time-warps, as well as for computations on time-varying firing rates. These computations rely, apart from general properties of generic neural microcircuit models, just on capabilities of simple linear readouts trained by linear regression. This article also provides detailed data on the fading memory property of generic neural microcircuit models, and a quick review of other new results on the computational power of such circuits of spiking neurons.
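The core idea — a fixed random recurrent circuit acting as fading memory, with only a linear readout trained by regression — can be sketched compactly. For brevity the sketch below uses a rate-based echo-state-style network rather than spiking neurons; the architecture, sizes, and memory probe are illustrative assumptions, not the authors' microcircuit model.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 2000
W = rng.normal(0, 1 / np.sqrt(N), (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep the dynamics fading
w_in = rng.normal(size=N)

u = rng.uniform(-1, 1, T)                        # input stream
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):                               # untrained recurrent circuit
    x = np.tanh(W @ x + w_in * u[t])
    states[t] = x

# Train a linear readout (least squares) to recall the input from 10 steps
# ago -- a direct probe of the circuit's fading memory.
delay = 10
target = u[:-delay]
w_out, *_ = np.linalg.lstsq(states[delay:], target, rcond=None)
pred = states[delay:] @ w_out
print("memory-recall correlation:", np.corrcoef(pred, target)[0, 1])
```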

10.
A duplication growth model of gene expression networks   (Cited by: 8; self-citations: 0; citations by others: 0)

11.
Rapid advances in molecular genetics push the need for efficient data analysis. Advanced algorithms are necessary for extracting all possible information from large experimental data sets. We present a general linear algebra framework for quantitative trait loci (QTL) mapping, using both linear regression and maximum likelihood estimation. The formulation simplifies future comparisons between the methods and their theoretical analysis. We show how the common structure of QTL analysis models can be used to improve the kernel algorithms, drastically reducing the computational effort while retaining the original analysis results. We have evaluated our new algorithms on data sets originating from two large F(2) populations of domestic animals. Using an updating approach, we show that 1-3 orders of magnitude reduction in computational demand can be achieved for matrix factorizations. For interval-mapping/composite-interval-mapping settings using a maximum likelihood model, we also show how to use the original EM algorithm instead of the ECM approximation, significantly improving the convergence and further reducing the computational time. The algorithmic improvements make it feasible to perform analyses which have previously been deemed impractical or even impossible. For example, using the new algorithms, it is reasonable to perform permutation testing using exhaustive search on populations of 200 individuals using an epistatic two-QTL model.
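The flavor of the reuse-the-factorization idea can be shown in a few lines: in regression-based scans, the design matrix is fixed across permutations, so one factorization can score thousands of permuted phenotype vectors cheaply. This Python sketch is illustrative only, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, n_perm = 200, 5, 10000
X = rng.normal(size=(n, p))                 # fixed genotype/design matrix
y = X @ rng.normal(size=p) + rng.normal(size=n)

Q, _ = np.linalg.qr(X)                      # one O(n p^2) factorization, reused

def rss(yv):
    fitted = Q @ (Q.T @ yv)                 # each permutation costs only O(n p)
    return np.sum((yv - fitted) ** 2)

obs = rss(y)
perm = np.array([rss(rng.permutation(y)) for _ in range(n_perm)])
print("permutation p-value:", np.mean(perm <= obs))  # smaller RSS = stronger signal
```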

12.
We reviewed articles using computational RF dosimetry to compare the Specific Anthropomorphic Mannequin (SAM) to anatomically correct models of the human head. Published conclusions based on such comparisons have varied widely. We looked for reasons that might cause apparently similar comparisons to produce dissimilar results. We also looked at the information needed to adequately compare the results of computational RF dosimetry studies. We concluded that the studies were not comparable because of differences in definitions, models, and methodology. We therefore propose a protocol, developed by an IEEE standards group, as an initial step in alleviating this problem. The protocol calls for a benchmark validation study comparing the SAM phantom to two anatomically correct models of the human head. It also establishes common definitions and reporting requirements that will increase the comparability of all computational RF dosimetry studies of the human head.

13.
Molecular models of the transmembrane domain of the phospholamban pentamer have been generated by a computational method that uses the experimentally measured effects of systematic single-site mutations as a guiding force in the modeling procedure. This method makes the assumptions that 1) the phospholamban transmembrane domain is a parallel five-helix bundle, and 2) nondisruptive mutation positions are lipid exposed, whereas 3) disruptive or partially disruptive mutations are not. Our procedure requires substantially less computer time than systematic search methods, allowing rapid assessment of the effects of different experimental results on the helix arrangement. The effectiveness of the approach is investigated in test calculations on two helix-dimer systems of known structure. Two independently derived sets of mutagenesis data were used to define the restraints for generating models of phospholamban. Both resulting models are left-handed, highly symmetrical pentamers. Although the overall bundle geometry is very similar in the two models, the orientation of individual helices differs by approximately 50 degrees, resulting in different sets of residues facing the pore. This demonstrates how differences in restraints can have an effect on the model structures generated, and how the violation of these restraints can identify inconsistent experimental data.
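A toy Python sketch of the restraint idea: score a helix rotation by how few mutation-sensitive (interface) residues end up facing the lipid. The 100°-per-residue helical periodicity is standard alpha-helix geometry; the scoring form and the data are hypothetical illustrations, not the paper's method.

```python
import numpy as np

disruptive = {3, 7, 10, 14}   # hypothetical mutation-sensitive positions
n_res = 20

def penalty(rotation_deg):
    """Count disruptive residues that face the lipid for a given helix rotation."""
    angles = (rotation_deg + 100.0 * np.arange(n_res)) % 360.0
    lipid_facing = (angles < 90) | (angles > 270)   # outward half of the helix
    return sum(lipid_facing[i] for i in disruptive)

best = min(range(0, 360, 5), key=penalty)
print("best helix rotation:", best, "deg; penalty:", penalty(best))
```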

14.
15.
A proposed unified framework for biological invasions   (Cited by: 1; self-citations: 0; citations by others: 0)
There has been a dramatic growth in research on biological invasions over the past 20 years, but a mature understanding of the field has been hampered because invasion biologists concerned with different taxa and different environments have largely adopted different model frameworks for the invasion process, resulting in a confusing range of concepts, terms and definitions. In this review, we propose a unified framework for biological invasions that reconciles and integrates the key features of the most commonly used invasion frameworks into a single conceptual model that can be applied to all human-mediated invasions. The unified framework combines previous stage-based and barrier models, and provides a terminology and categorisation for populations at different points in the invasion process.

16.
Two definitions of persistence despite perturbations in deterministic models are presented. The first definition, persistence despite frequent small perturbations, is shown to be equivalent to the existence of a positive attractor, i.e. an attractor bounded away from extinction. The second definition, persistence despite rare large perturbations, is shown to be equivalent to permanence, i.e. a positive attractor whose basin of attraction includes all positive states. Both definitions set up a natural dichotomy for classifying models of interacting populations: a model is either persistent despite perturbations or not. When it is not persistent, it follows that all initial conditions are prone to extinction due to perturbations of the appropriate type. For frequent small perturbations, this method of classification is shown to be generically robust: there is a dense set of models, each of which lies within an open set of persistent (respectively, extinction-prone) models. For rare large perturbations, this method of classification is shown not to be generically robust: work of Josef Hofbauer and the author has shown that there are open sets of ecological models containing both a dense set of permanent models and a dense set of extinction-prone models. The merits and drawbacks of these different definitions are discussed.
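In symbols, the second notion (permanence) is commonly formalized as follows (a standard statement, not necessarily the paper's exact definition): there exists a delta > 0 such that

```latex
\liminf_{t \to \infty} \; \min_i x_i(t) \;\ge\; \delta
\qquad \text{for every initial state } x(0) \text{ with all } x_i(0) > 0,
```

i.e. all positive trajectories are eventually uniformly bounded away from the extinction boundary, matching the description of a positive attractor whose basin contains all positive states.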

17.
Computer science and biology have enjoyed a long and fruitful relationship for decades. Biologists rely on computational methods to analyze and integrate large data sets, while several computational methods were inspired by the high‐level design principles of biological systems. Recently, these two directions have been converging. In this review, we argue that thinking computationally about biological processes may lead to more accurate models, which in turn can be used to improve the design of algorithms. We discuss the similar mechanisms and requirements shared by computational and biological processes and then present several recent studies that apply this joint analysis strategy to problems related to coordination, network analysis, and tracking and vision. We also discuss additional biological processes that can be studied in a similar manner and link them to potential computational problems. With the rapid accumulation of data detailing the inner workings of biological systems, we expect this direction of coupling biological and computational studies to greatly expand in the future.

18.
In recent years, halogen bonding has become an important design tool in crystal engineering, supramolecular chemistry and biosciences. The fundamentals of halogen bonding have been studied extensively with high-accuracy computational methods. Due to its non-covalency, the use of triple-zeta (or larger) basis sets is often recommended when studying halogen bonding. However, in the large systems often encountered in supramolecular chemistry and biosciences, large basis sets can make the calculations far too slow. Therefore, small basis sets, which would combine high computational speed and high accuracy, are in great demand. This study focuses on comparing how well density functional theory (DFT) methods employing small, double-zeta basis sets can estimate halogen-bond strengths. Several methods with triple-zeta basis sets are included for comparison. Altogether, 46 DFT methods were tested using two data sets of 18 and 33 halogen-bonded complexes for which the complexation energies have been previously calculated with the high-accuracy CCSD(T)/CBS method. The DGDZVP basis set performed far better than other double-zeta basis sets, and it even outperformed the triple-zeta basis sets. Due to its small size, it is well-suited to studying halogen bonding in large systems.
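For reference, complexation energies of the kind benchmarked here are supermolecular energy differences, usually counterpoise-corrected because basis-set superposition error is most severe precisely for small basis sets (a standard definition, not specific to this study):

```latex
\Delta E = E_{AB} - E_{A} - E_{B},
\qquad
\Delta E^{\mathrm{CP}} = E_{AB}^{ab} - E_{A}^{ab} - E_{B}^{ab},
```

where the superscript ab indicates that each monomer is evaluated in the full dimer basis (the Boys–Bernardi counterpoise scheme).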

19.
Big brown bats, Eptesicus fuscus, emit ultrasonic signals and analyze the returning echoes in multiple parameter domains to extract target features. The variation of different pulse parameters during hunting predicts that a bat's analysis of one echo parameter is inevitably affected by other co-varying echo parameters. In this study, we present data showing that bat inferior collicular (IC) neurons have maximal amplitude sensitivity at the best duration (BD). A family of rate-amplitude functions (RAFs) was plotted for each IC neuron using BD and non-BD sound pulses. The RAF plotted with BD pulses has a sharper slope (SL) and smaller dynamic range (DR) than the RAF plotted with non-BD pulses. All RAFs can be described as monotonic, saturated, or non-monotonic. IC neurons with monotonic RAFs are mostly recorded in the deeper IC and have the largest average BD, best amplitude (BA), and DR. Conversely, IC neurons with non-monotonic RAFs are mostly recorded in the upper IC and have the smallest average BD, BA, and DR. Low best-frequency (BF) neurons in the upper IC have shorter BDs and smaller BAs and DRs than high-BF neurons in the deeper IC. These data suggest that IC neurons tuned to an echo duration also have the greatest sensitivity to echo amplitude, and that sensitivity in frequency, duration, and amplitude is represented in an orderly fashion along the dorso-ventral axis of the IC.

20.
Computational methods have been part of neuroscience for many years. For example, models developed with these methods have provided a theory that helps explain the action potential. More recently, as experimental patch-electrode techniques have revealed new biophysics related to dendritic function and synaptic integration, computational models of dendrites have been developed to explain and further illuminate these results, and to predict possible additional behavior. Here, a collection of computational models of dendrites is reviewed. The goal is to help explain how such computational techniques work, some of their limitations, and what one can hope to learn about dendrites by modeling them.
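Most of the models reviewed build on Rall's passive cable equation, shown here in its standard form (active dendrite models add voltage-dependent conductances on top of it):

```latex
\lambda^{2} \frac{\partial^{2} V}{\partial x^{2}}
= \tau_{m} \frac{\partial V}{\partial t} + V,
\qquad
\lambda = \sqrt{r_{m}/r_{i}}, \quad \tau_{m} = r_{m} c_{m},
```

where V is the membrane potential relative to rest, lambda the space constant, and tau_m the membrane time constant set by the membrane resistance r_m, axial resistance r_i, and membrane capacitance c_m.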
