Similar Literature
20 similar records found (search time: 31 ms).
1.
Three new approximations are suggested for the standardized selection intensity, i. Two are simple functions of powers of b, the fraction selected. These improve on previous approximations by covering a broader range of selection intensities. A third approximation is developed using a rational polynomial; it is accurate, but simplicity is lost. Communicated by E. J. Eisen
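The abstract gives no formulas. For reference, the exact standardized selection intensity under truncation of a normally distributed trait is i = φ(x)/p, where p is the selected fraction, φ is the standard normal density, and x = Φ⁻¹(1 − p) is the truncation point. A minimal Python sketch compares this with a commonly quoted closed-form approximation; the coefficients below are the widely cited ones, not necessarily this paper's:

```python
# Exact truncation-selection intensity vs. a classical log approximation.
# The paper's own approximations are not reproduced in the abstract; the
# 0.8 + 0.41*ln(1/p - 1) form below is an illustrative stand-in.
import math
from scipy.stats import norm

def selection_intensity_exact(p):
    """Mean of the selected tail of a standard normal; p = fraction selected."""
    x = norm.ppf(1.0 - p)        # truncation point
    return norm.pdf(x) / p       # i = phi(x) / p

def selection_intensity_approx(p):
    """Simple closed-form approximation (illustrative coefficients)."""
    return 0.8 + 0.41 * math.log(1.0 / p - 1.0)

for p in (0.01, 0.05, 0.10, 0.50):
    print(f"p={p:.2f}  exact i={selection_intensity_exact(p):.3f}  "
          f"approx i={selection_intensity_approx(p):.3f}")
```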

2.
A fundamental question in biology is the following: what is the time scale needed for evolutionary innovations? Many results characterize single steps in terms of the fixation time of new mutants arising in populations of a given size and structure. But here we ask a different question, concerned with the much longer time scale of evolutionary trajectories: how long does it take for a population exploring a fitness landscape to find target sequences that encode new biological functions? Our key variable is the length, L, of the genetic sequence that undergoes adaptation. In computer science there is a crucial distinction between problems that require algorithms taking polynomial versus exponential time; the latter are considered intractable. Here we develop a theoretical approach that allows us to estimate the time of evolution as a function of L. We show that adaptation on many fitness landscapes takes time that is exponential in L, even if there are broad selection gradients and many targets uniformly distributed in sequence space. These negative results lead us to search for specific mechanisms that allow evolution to work on polynomial time scales. We study a regeneration process and show that it enables evolution to work in polynomial time.
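A toy rendering of the scaling contrast, not the paper's population model: an exhaustive search over binary sequences needs on the order of 2^L steps, while a process that resolves one site per round (the flavour of the regeneration mechanism) needs time linear in L:

```python
# Illustrative only: the paper analyzes a population process, not this
# simple step count. The constants are arbitrary placeholders.
def exponential_steps(L):
    return 2 ** L                      # order of the binary sequence space

def polynomial_steps(L, rounds_per_site=10):
    return rounds_per_site * L         # one site resolved per round (toy model)

for L in (10, 20, 40, 80):
    print(f"L={L:3d}  exhaustive ~{exponential_steps(L):.3e}  "
          f"sequential ~{polynomial_steps(L)}")
```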

3.
4.
Using discrete competition models in which the density-dependent growth functions are either all exponential or all rational, we establish an exclusion principle notwithstanding the complex interactions of the species. Moreover, in a 2-species discrete competition model in which one growth function is exponential and the other rational, an example is given illustrating coexistence when our conditions are satisfied. We also obtain an exclusion principle for this 2-species model for some choices of parameters. Research partially supported by funds provided by a Science and Education Grant to the USDA-Forest Service, Southeastern Forest Experiment Station, Population Genetics of Forest Trees Research Unit, Raleigh, North Carolina

5.
A compartment-like model is developed for tracers in which bone-volume diffusion plays an important role in body distribution (e.g. bone-volume-seeking metals). The model requires the solution of an infinite eigensystem. Approximations are presented for cylindrical diffusion in canalicular territory. An application to lead metabolism in beagle dogs suggests that finite truncation of the system of equations provides an adequate approximation for routine use in computer programs for compartmental-parameter estimation. The model is consistent with both power-law and exponential-mixture retention functions.

6.
A parameterized algorithm for protein structure alignment. (Total citations: 2; self-citations: 0; citations by others: 2)
This paper proposes a parameterized polynomial time approximation scheme (PTAS) for aligning two protein structures, in the case where one protein structure is represented by a contact map graph and the other by a contact map graph or a distance matrix. If the sequential order of alignment is not required, the time complexity is polynomial in the protein size and exponential with respect to the two parameters D_u/D_l and D_c/D_l, which usually can be treated as constants. In particular, D_u is the distance threshold determining whether two residues are in contact, D_c is the maximum allowed distance between two matched residues after the two proteins are superimposed, and D_l is the minimum inter-residue distance in a typical protein. This result demonstrates that the computational hardness of contact-map-based protein structure alignment is related not to protein size but to several parameters modeling the problem. The result is achieved by decomposing the protein structure using tree decomposition and discretizing the rigid-body transformation space. Preliminary experimental results indicate that on a Linux PC, it takes from ten minutes to one hour to align two proteins with approximately 100 residues.
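The alignment algorithm itself combines tree decomposition with a discretized rigid-body search, but its input is easy to illustrate: a contact map graph built by thresholding inter-residue distances at D_u. A sketch with an assumed threshold of 8 Å, a common choice that is not necessarily the paper's:

```python
# Build a boolean contact map from C-alpha coordinates; residues i, j are
# in contact if their distance is below d_u. Coordinates here are random
# stand-ins for a real structure.
import numpy as np

def contact_map(ca_coords, d_u=8.0):
    """Contact matrix under threshold d_u (angstroms)."""
    diff = ca_coords[:, None, :] - ca_coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    contacts = dist < d_u
    np.fill_diagonal(contacts, False)   # a residue is not its own contact
    return contacts

coords = np.random.rand(100, 3) * 30.0  # stand-in for real C-alpha coordinates
print(contact_map(coords).sum() // 2, "contacts")
```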

7.
A challenging task in computational biology is the reconstruction of genomic sequences of extinct ancestors, given the phylogenetic tree and the sequences at the leaves. This task is best solved by calculating the most likely estimate of the ancestral sequences, along with the most likely edge lengths. We deal with this problem and also with the variant in which the phylogenetic tree must be estimated in addition to the ancestral sequences. The latter problem is known to be NP-hard, while the computational complexity of the former is unknown. Currently, all algorithms for solving these problems are heuristics without performance guarantees. The biological importance of these problems calls for developing better algorithms with guarantees of finding either optimal or approximate solutions. We develop approximation, fixed-parameter tractable (FPT), and fast heuristic algorithms for two variants of the problem: when the phylogenetic tree is known and when it is unknown. The approximation algorithm guarantees a solution with a log-likelihood ratio of 2 relative to the optimal solution. The FPT algorithm has a running time which is polynomial in the length of the sequences and exponential in the number of taxa, which makes it useful for calculating the optimal solution for small trees. Moreover, we combine the approximation algorithm and the FPT algorithm into an algorithm with an arbitrarily good approximation guarantee (PTAS). We tested our algorithms on both synthetic and biological data. In particular, we used the FPT algorithm to compute the most likely ancestral mitochondrial genomes of the hominidae (the great apes), thereby answering an interesting biological question. Moreover, we show how the approximation algorithms find good solutions for reconstructing the ancestral genomes of a set of lentiviruses (relatives of HIV). Supplementary material of this work is available at www.nada.kth.se/~isaac/publications/aml/aml.html.
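The abstract does not state the substitution model, but any likelihood computation of this kind rests on Felsenstein's pruning algorithm. A minimal one-site sketch under the Jukes-Cantor model, with a made-up three-leaf tree:

```python
# Felsenstein pruning for one alignment column under Jukes-Cantor.
# Background illustration only; the paper's model and trees are not given
# in the abstract.
import math

def jc_prob(t):
    """4x4 Jukes-Cantor transition matrix for branch length t."""
    same = 0.25 + 0.75 * math.exp(-4.0 * t / 3.0)
    diff = 0.25 - 0.25 * math.exp(-4.0 * t / 3.0)
    return [[same if i == j else diff for j in range(4)] for i in range(4)]

def prune(node, leaf_states):
    """Conditional likelihoods L[s] at `node`. A node is a leaf name or a
    tuple (left_child, left_len, right_child, right_len)."""
    if isinstance(node, str):      # leaf: point mass on the observed base
        return [1.0 if s == leaf_states[node] else 0.0 for s in range(4)]
    left, tl, right, tr = node
    Ll, Lr = prune(left, leaf_states), prune(right, leaf_states)
    Pl, Pr = jc_prob(tl), jc_prob(tr)
    return [sum(Pl[s][t] * Ll[t] for t in range(4)) *
            sum(Pr[s][t] * Lr[t] for t in range(4)) for s in range(4)]

# ((x:0.1, y:0.1):0.2, z:0.3), bases coded A=0, C=1, G=2, T=3
tree = (("x", 0.1, "y", 0.1), 0.2, "z", 0.3)
L_root = prune(tree, {"x": 0, "y": 1, "z": 0})
print(sum(0.25 * l for l in L_root))   # site likelihood, uniform root prior
```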

8.
Currently used joint-surface models require the measurements to be structured according to a grid. With the currently available tracking devices, a large quantity of unstructured surface points can be measured in a relatively short time. In this paper a method is presented to fit polynomial functions to three-dimensional unstructured data points. To test the method, spherical, cylindrical, parabolic, hyperbolic, exponential, logarithmic, and sellar surfaces with different undulations were used. The resulting polynomials were compared with the original shapes. The results show that even complex joint surfaces can be modelled with polynomial functions. In addition, the influence of noise and of the number of data points was analyzed. From a surface (diameter: 20 mm) measured with a precision of 0.2 mm, a model can be constructed with a precision of 0.02 mm.
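A minimal sketch of the core step: fitting z = f(x, y) as a bivariate polynomial to unstructured points by linear least squares. The paraboloid test surface and 0.2 mm noise level echo the abstract; the degree is an arbitrary choice:

```python
# Least-squares polynomial fit to unstructured 3-D surface points.
import numpy as np

def fit_poly_surface(x, y, z, degree=2):
    """Coefficients of a bivariate polynomial of total degree <= `degree`."""
    cols = [x**i * y**j for i in range(degree + 1)
                        for j in range(degree + 1 - i)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

# Synthetic unstructured samples from a paraboloid with 0.2 mm noise
rng = np.random.default_rng(0)
x, y = rng.uniform(-10, 10, 500), rng.uniform(-10, 10, 500)
z = 0.05 * (x**2 + y**2) + rng.normal(0.0, 0.2, 500)
coeffs = fit_poly_surface(x, y, z)
print(coeffs.round(3))   # the quadratic terms should recover ~0.05
```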

9.
Theoretical models of single-file transport in a homogeneous channel are considered. Three levels of channel population were specified for which different approximations could be used. The results of these approximations are in good agreement with the results of a computer experiment (Aityan and Portnov 1986). At low populations, the pair correlation functions were negligibly small, which allowed the use of a linear approximation for unidirectional fluxes and populations. The value of the pair correlation function and the corresponding approximation for fluxes were obtained by a two-particle random-walk technique. At extremely high populations, the "divider" technique was proposed to describe single-file transport; it explains the exponential profile of the pair correlation function F^{AB}_{n,n+1} in this regime. At medium populations, the finite-difference superposition approximation was valid.

10.
This paper introduces a theoretical framework for characterizing and classifying simple parallel algorithms and systems with many inputs, for example an array of photoreceptors. The polynomial representation (Taylor series expansion) of a large class of operators is introduced and its range of validity discussed. The problems involved in the polynomial approximation of systems are also briefly reviewed. Symmetry properties of the input-output map and their implications for the system structure (i.e. its kernels) are studied. Finally, the computational properties of polynomial mappings are characterized.

11.
In the cell-cycle-with-control model (CCC model), cells have to satisfy a condition before they are allowed to pass a control point during G1. Different cycle durations within a cell population are explained by the individual time spans needed to satisfy the passing condition. If the distribution of cycle durations is time-invariant, the population will grow exponentially. However, if the average cycle duration becomes longer while the population grows, non-exponential population growth results. Simple functions for the lengthening of the average cycle duration, such as linear or exponential ones, yield the well-known growth laws found in the biological literature. The same functions can be represented by an "S-system" differential equation that was derived earlier as an approximation for biochemical systems with many fast reactions (metabolism) and one slow process (e.g. ageing).
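A toy integration of the growth law the abstract describes: a constant mean cycle duration gives exponential growth, while a linearly lengthening one (one of the "simple functions" mentioned) gives sub-exponential, power-law-like growth. The rates are illustrative, not the paper's:

```python
# Euler integration of dN/dt = N * ln(2) / tau(t), with tau(t) either
# constant or lengthening linearly in time. Parameters are placeholders.
import math

def grow(T=100.0, dt=0.01, lengthen=0.0):
    N, t, tau0 = 1.0, 0.0, 1.0
    while t < T:
        tau = tau0 * (1.0 + lengthen * t)     # mean cycle duration at time t
        N += dt * N * math.log(2.0) / tau
        t += dt
    return N

print(f"constant tau : N(100) = {grow(lengthen=0.0):.3e}")  # exponential (~1e30)
print(f"linear tau   : N(100) = {grow(lengthen=0.1):.3e}")  # power-law-like (~1e7)
```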

12.
The nonlinear two-layer arterial wall model introduced by von Maltzahn et al. [11] is subjected to a rigorous parameter-sensitivity and range-of-validity analysis. The model is based on the assumption that in large muscular conduit arteries the two mechanically significant layers are the media and the adventitia. Using curve-fitting techniques, the media is determined to be isotropic and the adventitia to be anisotropic. As a result of the range-of-validity analysis, the polynomial relationship for the energy density function of the media is changed to an exponential relationship. This leads to new coefficients for the polynomial of the adventitia. All coefficients have specific mechanical meanings. The parameter sensitivity analysis demonstrates convincingly that all model parameters are significant.

13.
This paper studies the $L^p$ approximation capabilities of sum-of-product (SOPNN) and sigma-pi-sigma (SPSNN) neural networks. It is proved that the set of functions generated by the SOPNN with its activation function in $L^p_{loc}(\mathcal{R})$ is dense in $L^p(\mathcal{K})$ for any compact set $\mathcal{K}\subset \mathcal{R}^N$, if and only if the activation function is not a polynomial almost everywhere. It is also shown that if the activation function of the SPSNN is in $L^\infty_{loc}(\mathcal{R})$, then the functions generated by the SPSNN are dense in $L^p(\mathcal{K})$ if and only if the activation function is not a constant (a.e.).

14.
Predictive species distribution models (SDMs) are becoming increasingly important in ecology in the light of rapid environmental change. However, the predictions of most current SDMs are specific to the habitat composition of the environments in which they were fitted. This may limit SDM predictive power because species may respond differently to a given habitat depending on the availability of all habitats in their environment, a phenomenon known as a functional response in resource selection. The Generalised Functional Response (GFR) framework captures this dependence by formulating the SDM coefficients as functions of habitat availability. The original GFR implementation used global polynomial functions of habitat availability to describe the functional responses. In this study, we develop several refinements of this approach and compare their predictive performance using two simulated and two real datasets. We first use local radial basis functions (RBFs), a more flexible approach than global polynomials, to represent the habitat selection coefficients, and balance bias with precision via regularization to prevent overfitting. Second, we use the RBF-GFR and GFR models in combination with classification and regression trees (CART), which have more flexibility and better predictive power for non-linear modelling. As further extensions, we use random forests (RFs) and extreme gradient boosting (XGBoost), ensemble approaches that consistently reduce generalization error. We find that the different methods are ranked consistently across the datasets for out-of-sample prediction. The traditional stationary approach to SDMs and the polynomial GFR model consistently perform at the bottom of the ranking (simple SDMs underfit, and polynomial GFRs overfit the data). The best methods in our list provide non-negligible improvements in predictive performance, in some cases taking the out-of-sample R2 from 0.3 up to 0.7 across datasets. At times of rapid environmental change and spatial non-stationarity, ignoring the effects of functional responses in SDMs results in two different types of prediction bias: under-prediction or mis-positioning of distribution hotspots. However, not all functional response models perform equally well; the more volatile polynomial GFR models can generate biases through over-prediction. Our results indicate that there are consistently robust GFR approaches that achieve impressive gains in transferability across very different datasets.
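A minimal sketch of the RBF refinement on synthetic data: a selection coefficient is expanded in local Gaussian basis functions of habitat availability, and the weights are fitted with ridge regularization — the bias/precision balance the abstract refers to. The data-generating functional response, centers, and bandwidth are invented for illustration:

```python
# RBF expansion of a habitat-selection coefficient over availability,
# fitted with ridge regularization. Synthetic data; not the paper's model.
import numpy as np
from sklearn.linear_model import Ridge

def rbf_features(avail, centers, sigma=0.15):
    """One Gaussian bump per center, evaluated at each availability value."""
    return np.exp(-(avail[:, None] - centers[None, :]) ** 2 / (2.0 * sigma**2))

rng = np.random.default_rng(1)
avail = rng.uniform(0, 1, 300)                # habitat availability per site
habitat = rng.uniform(0, 1, 300)              # habitat covariate
true_beta = 2.0 / (1.0 + 5.0 * avail)         # an assumed functional response
y = true_beta * habitat + rng.normal(0.0, 0.05, 300)

centers = np.linspace(0.0, 1.0, 12)
X = rbf_features(avail, centers) * habitat[:, None]   # beta(avail) * habitat
model = Ridge(alpha=1.0, fit_intercept=False).fit(X, y)
beta_hat = rbf_features(avail, centers) @ model.coef_
print(f"corr(true beta, fitted beta) = {np.corrcoef(true_beta, beta_hat)[0, 1]:.3f}")
```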

15.
We present a framework for designing cheap control architectures of embodied agents. Our derivation is guided by the classical problem of universal approximation, whereby we explore the possibility of exploiting the agent’s embodiment for a new and more efficient universal approximation of behaviors generated by sensorimotor control. This embodied universal approximation is compared with the classical non-embodied universal approximation. To exemplify our approach, we present a detailed quantitative case study for policy models defined in terms of conditional restricted Boltzmann machines. In contrast to non-embodied universal approximation, which requires an exponential number of parameters, in the embodied setting we are able to generate all possible behaviors with a drastically smaller model, thus obtaining cheap universal approximation. We test and corroborate the theory experimentally with a six-legged walking machine. The experiments indicate that the controller complexity predicted by our theory is close to the minimal sufficient value, which means that the theory has direct practical implications.

16.
Roark DE. Biophysical Chemistry 2004, 108(1-3): 121-126.
Biophysical chemistry experiments, such as sedimentation-equilibrium analyses, require computational techniques to reduce the effects of random errors of the measurement process. Existing approaches have primarily relied on the assumption of polynomial models and least-squares approximation. By constraining the data to remove random fluctuations, such models may distort the data and cause a loss of information: the better the removal of random errors, the greater the likely introduction of systematic errors through the constraining fit itself. An alternative technique, reverse smoothing, is suggested that makes use of a more model-free approach: exponential smoothing of the first derivative. Exponential smoothing approaches have generally been unsatisfactory because they introduce significant data lag. The approach given here compensates for the lag defect and appears promising for the smoothing of many experimental data sequences, including the macromolecular concentration data generated by sedimentation-equilibrium experiments. Test results on simulated sedimentation-equilibrium data indicate that a 4-fold reduction in error may be typical relative to standard analysis techniques.
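The abstract does not spell out how the filter lag is compensated. One standard zero-phase variant, sketched here on a simulated sedimentation-equilibrium gradient, averages a forward and a time-reversed exponential pass so the two lags cancel; treat this as an assumption, not Roark's exact procedure:

```python
# Zero-phase exponential smoothing of a noisy first derivative.
import numpy as np

def ema(x, alpha=0.1):
    """One-sided exponential moving average (introduces lag)."""
    y = np.empty_like(x)
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = alpha * x[i] + (1.0 - alpha) * y[i - 1]
    return y

def zero_phase_ema(x, alpha=0.1):
    """Average of a forward and a reversed pass; the two lags cancel."""
    return 0.5 * (ema(x, alpha) + ema(x[::-1], alpha)[::-1])

r = np.linspace(6.0, 7.2, 400)                 # radius (cm)
conc = np.exp(0.5 * (r**2 - r[0]**2))          # ideal equilibrium gradient
rng = np.random.default_rng(2)
noisy = conc + rng.normal(0.0, 0.02 * conc.max(), r.size)
true_deriv = np.gradient(conc, r)
smoothed = zero_phase_ema(np.gradient(noisy, r), alpha=0.05)
raw_err = np.std(np.gradient(noisy, r) - true_deriv)
smooth_err = np.std(smoothed - true_deriv)
print(f"derivative error reduced {raw_err / smooth_err:.1f}-fold")
```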

17.
Local analysis of trajectories of dynamical systems near an attractive periodic orbit gives rise to the notions of asymptotic phase and isochrons. These notions are quite useful in applications in the biosciences. In this note, we give an expression for the first approximation of the equations of the isochrons in the setting of perturbations of polynomial Hamiltonian systems. This method can be generalized to perturbations of systems that have a polynomial integrating factor (like the Lotka-Volterra equation).

18.
19.
BACKGROUND: The ancestries of genes form gene trees, which do not necessarily have the same topology as the species tree due to incomplete lineage sorting. Available algorithms for determining the probability of a gene tree given a species tree require exponential runtime. RESULTS: In this paper, we provide a polynomial time algorithm to calculate the probability of a ranked gene tree topology for a given species tree, where a ranked tree topology is a tree topology in which the internal vertices are ordered. The probability of a gene tree topology can thus be calculated in polynomial time if the number of orderings of its internal vertices is polynomial. However, the complexity of calculating the probability of a gene tree topology with an exponential number of rankings for a given species tree remains unknown. CONCLUSIONS: Polynomial algorithms for calculating ranked gene tree probabilities may become useful in developing methodology to infer species trees from a collection of gene trees, leading to a more accurate reconstruction of ancestral species relationships.
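The number of rankings the conclusions hinge on can be counted directly: for a rooted binary topology with k internal vertices it equals k! divided by the product, over internal vertices v, of the number of internal vertices in the subtree rooted at v (the classical forest hook-length formula for linear extensions). A small sketch with an invented tuple encoding of trees:

```python
# Count the rankings (orderings of internal vertices) of a rooted binary
# tree topology via the forest hook-length formula. The tuple encoding
# of trees is ours, not the paper's.
from math import factorial, prod

def internal_sizes(node, sizes):
    """Number of internal vertices in the subtree at `node`; each internal
    node's count is appended to `sizes`."""
    if not isinstance(node, tuple):   # leaf
        return 0
    n = 1 + sum(internal_sizes(child, sizes) for child in node)
    sizes.append(n)
    return n

def num_rankings(tree):
    sizes = []
    k = internal_sizes(tree, sizes)
    return factorial(k) // prod(sizes)

print(num_rankings((((("a", "b"), "c"), "d"), "e")))  # caterpillar: 1 ranking
print(num_rankings((("a", "b"), ("c", "d"))))         # balanced: 2 rankings
```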

20.
Biophysical Journal 2021, 120(15): 2952-2968.
In total internal reflection fluorescence (TIRF) microscopy, the sample resides near a surface in an evanescent optical field that, ideally, decreases in intensity with distance from the surface in a purely exponential fashion. In practice, multiple surfaces and imperfections in the optical system and refractive index (RI) inhomogeneities in the sample (often living cells) produce propagating scattered light that degrades the exponential purity. RI inhomogeneities cannot easily be avoided. How severe is the consequent optical degradation? Starting from Maxwell’s equations, we derive a first-order perturbative approximation of the electric field strength of light scattered by sample RI inhomogeneities of several types under coherent evanescent field illumination. The approximation provides an expression for the scattering field of any arbitrary RI inhomogeneity pattern. The scattering is not all propagating; some is evanescent and remains near the scattering centers. The results presented here are only a first-order approximation; they ignore multiple scattering and reflections off the total internal reflection (TIR) surface. For simplicity, we assume that the RI variations in the z direction are insignificant within the depth of the evanescent field and consider only scattering of the excitation light, not of the fluorescence emission light. The most significant general conclusion of this study is that TIR scattering from a sample with RI variations typical of a cell culture increases the effective thickness of the illumination by only ∼50% relative to what it would be without scattering. The qualitative surface selectivity of TIR fluorescence is largely retained even in the presence of scattering. Quantitatively, however, scattering causes a deviation from the incident exponential decay at shorter distances, adding a slower-decaying background. Calculations that assume a pure exponential decay will be approximations, and scattering should be taken into account. TIR scattering is only slightly dependent on polarization but is strongly reduced for the highest accessible incidence angles.
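For reference, the ideal (scatter-free) evanescent intensity named in the abstract decays as I(z) = I(0)·exp(−z/d), with penetration depth d = λ / (4π·sqrt(n1² sin²θ − n2²)). This is standard TIRF optics; the wavelength and refractive indices below (488 nm, glass over a cytoplasm-like sample) are illustrative:

```python
# Evanescent-field penetration depth above a TIR interface.
import math

def penetration_depth(wavelength_nm, n1, n2, theta_deg):
    theta = math.radians(theta_deg)
    arg = (n1 * math.sin(theta)) ** 2 - n2 ** 2
    if arg <= 0:
        raise ValueError("below the critical angle: no total internal reflection")
    return wavelength_nm / (4.0 * math.pi * math.sqrt(arg))

n1, n2 = 1.515, 1.37          # glass / cytoplasm-like sample
print(f"critical angle = {math.degrees(math.asin(n2 / n1)):.1f} deg")
for theta in (65.0, 70.0, 75.0):
    d = penetration_depth(488.0, n1, n2, theta)
    print(f"theta = {theta:.0f} deg -> d = {d:.0f} nm")   # ~100 nm at 70 deg
```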
