20 similar documents retrieved (search time: 15 ms)
1.
Evolutionary graph theory studies the evolutionary dynamics of populations structured on graphs. A central problem is determining the probability that a small number of mutants overtake a population. Currently, Monte Carlo simulations are used for estimating such fixation probabilities on general directed graphs, since no good analytical methods exist. In this paper, we introduce a novel deterministic framework for computing fixation probabilities for strongly connected, directed, weighted evolutionary graphs under neutral drift. We show how this framework can also be used to calculate the expected number of mutants at a given time step (even if we relax the assumption that the graph is strongly connected), how it can extend to other related models (e.g. voter model), how our framework can provide non-trivial bounds for fixation probability in the case of an advantageous mutant, and how it can be used to find a non-trivial lower bound on the mean time to fixation. We provide various experimental results determining fixation probabilities and expected number of mutants on different graphs. Among these, we show that our method consistently outperforms Monte Carlo simulations in speed by several orders of magnitude. Finally, we show how our approach can provide insight into synaptic competition in neurology.
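The Monte Carlo baseline that the paper's deterministic framework is compared against can be sketched in a few lines. The snippet below is a minimal, illustrative simulation of the neutral-drift Moran (birth-death) process on a directed graph; the function and variable names are hypothetical, not from the paper. On a complete graph of N nodes the fixation probability of a single neutral mutant is exactly 1/N, which the estimate should approach.

```python
import random

def moran_fixation(adj, start, trials=2000, rng=None):
    """Estimate the fixation probability of a single neutral mutant placed
    at node `start` of a directed graph given as an adjacency list `adj`."""
    rng = rng or random.Random(0)
    n = len(adj)
    fixed = 0
    for _ in range(trials):
        mutant = {start}
        while 0 < len(mutant) < n:
            parent = rng.randrange(n)        # uniform choice: neutral drift
            child = rng.choice(adj[parent])  # offspring replaces an out-neighbour
            if parent in mutant:
                mutant.add(child)
            else:
                mutant.discard(child)
        fixed += (len(mutant) == n)
    return fixed / trials

# Complete graph on 5 nodes: the neutral fixation probability is exactly 1/5.
K5 = [[j for j in range(5) if j != i] for i in range(5)]
p = moran_fixation(K5, start=0, trials=4000)
```

The estimate `p` should land near 0.2, illustrating why many trials are needed for tight estimates and why a deterministic method can be orders of magnitude faster.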
2.
C Bundesen. Philosophical transactions of the Royal Society of London. Series B, Biological sciences, 1998, 353(1373):1271
A computational theory of visual attention is presented. The basic theory (TVA) combines the biased-choice model for single-stimulus recognition with the fixed-capacity independent race model (FIRM) for selection from multi-element displays. TVA organizes a large body of experimental findings on performance in visual recognition and attention tasks. A recent development (CTVA) combines TVA with a theory of perceptual grouping by proximity. CTVA explains effects of perceptual grouping and spatial distance between items in multi-element displays. A new account of spatial focusing is proposed in this paper. The account provides a framework for understanding visual search as an interplay between serial and parallel processes.
3.
A computational method for NMR-constrained protein threading.
Protein threading provides an effective method for fold recognition and backbone structure prediction. But its application is currently limited due to its level of prediction accuracy and scope of applicability. One way to significantly improve its usefulness is through the incorporation of underconstrained (or partial) NMR data. It is well known that the NMR method for protein structure determination applies only to small proteins and that its effectiveness decreases rapidly as the protein mass increases beyond about 30 kD. We present, in this paper, a computational framework for applying underconstrained NMR data (that alone are insufficient for structure determination) as constraints in protein threading and also in all-atom model construction. In this study, we consider both secondary structure assignments from chemical shifts and NOE distance restraints. Our results have shown that both secondary structure assignments and a small number of long-range NOEs can significantly improve the threading quality in both fold recognition and threading-alignment accuracy, and can possibly extend threading's scope of applicability from homologs to analogs. An accurate backbone structure generated by NMR-constrained threading can then provide a great amount of structural information, equivalent to that provided by many NMR data, and hence can help reduce the number of NMR data typically required for an accurate structure determination. This new technique can potentially accelerate current NMR structure determination processes and possibly expand NMR's capability to larger proteins.
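NOE restraints enter such a scheme as distance upper bounds. As a hedged illustration (this is a generic flat-bottom violation penalty, not the paper's actual scoring function; all names are hypothetical), a restraint term can be scored as:

```python
def noe_penalty(coords, restraints):
    """Sum of squared violations of NOE upper-bound distance restraints.

    coords: dict atom_id -> (x, y, z); restraints: list of (i, j, d_max).
    Flat-bottom form: zero penalty while the distance stays below d_max.
    """
    total = 0.0
    for i, j, d_max in restraints:
        xi, yi, zi = coords[i]
        xj, yj, zj = coords[j]
        d = ((xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2) ** 0.5
        total += max(0.0, d - d_max) ** 2
    return total

coords = {"A": (0.0, 0.0, 0.0), "B": (5.0, 0.0, 0.0)}
ok = noe_penalty(coords, [("A", "B", 6.0)])   # satisfied: distance 5 <= 6
bad = noe_penalty(coords, [("A", "B", 4.0)])  # violated by 1 Angstrom
```

Adding such a term to a threading score rewards alignments whose implied backbone geometry is consistent with the measured NOEs.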
4.
A computational theory of human stereo vision.
D Marr, T Poggio. Proceedings of the Royal Society of London. Series B, Containing papers of a Biological character. Royal Society (Great Britain), 1979, 204(1156):301-328
An algorithm is proposed for solving the stereoscopic matching problem. The algorithm consists of five steps: (1) Each image is filtered at different orientations with bar masks of four sizes that increase with eccentricity; the equivalent filters are one or two octaves wide. (2) Zero-crossings in the filtered images, which roughly correspond to edges, are localized. Positions of the ends of lines and edges are also found. (3) For each mask orientation and size, matching takes place between pairs of zero-crossings or terminations of the same sign in the two images, for a range of disparities up to about the width of the mask's central region. (4) Wide masks can control vergence movements, thus causing small masks to come into correspondence. (5) When a correspondence is achieved, it is stored in a dynamic buffer, called the 2 1/2-D sketch. It is shown that this proposal provides a theoretical framework for most existing psychophysical and neurophysiological data about stereopsis. Several critical experimental predictions are also made, for instance about the size of Panum's area under various conditions. The results of such experiments would tell us whether, for example, cooperativity is necessary for the matching process.
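Steps (1)-(2) can be illustrated in one dimension: filter the signal with a difference-of-Gaussians (used here as a stand-in for the bar masks; the filter widths are illustrative assumptions) and mark sign changes in the output. A step edge should produce a zero-crossing at its location.

```python
import math

def dog_kernel(sigma, width):
    """1-D difference-of-Gaussians (sigma vs 1.6*sigma), a common
    approximation to centre-surround / second-derivative edge filters."""
    g = lambda s, x: math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
    return [g(sigma, x) - g(1.6 * sigma, x) for x in range(-width, width + 1)]

def convolve(signal, kernel):
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - half, 0), len(signal) - 1)  # clamp at borders
            acc += w * signal[j]
        out.append(acc)
    return out

def zero_crossings(filtered):
    """Indices where the filtered signal changes sign (candidate edges)."""
    return [i for i in range(1, len(filtered))
            if filtered[i - 1] * filtered[i] < 0]

# A step edge starting at index 20 should yield a zero-crossing there.
signal = [0.0] * 20 + [1.0] * 20
edges = zero_crossings(convolve(signal, dog_kernel(2.0, 8)))
```

Running the same filter at several widths, as step (3) requires, gives the coarse-to-fine disparity ranges that the matching stage exploits.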
5.
In recent years, the increase in the amounts of available genomic data has made it easier to appreciate the extent to which organisms increase their genetic diversity through horizontally transferred genetic material. Such transfers have the potential to give rise to extremely dynamic genomes where a significant proportion of their coding DNA has been contributed by external sources. Because of the impact of these horizontal transfers on the ecological and pathogenic character of the recipient organisms, methods are continuously sought that are able to computationally determine which of the genes of a given genome are products of transfer events. In this paper, we introduce and discuss a novel computational method for identifying horizontal transfers that relies on a gene's nucleotide composition and obviates the need for knowledge of codon boundaries. In addition to being applicable to individual genes, the method can be easily extended to the case of clusters of horizontally transferred genes. With the help of an extensive and carefully designed set of experiments on 123 archaeal and bacterial genomes, we demonstrate that the new method exhibits significant improvement in sensitivity when compared to previously published approaches. In fact, it achieves an average relative improvement across genomes of between 11 and 41% compared to the Codon Adaptation Index method in distinguishing native from foreign genes. Our method's horizontal gene transfer predictions for 123 microbial genomes are available online at http://cbcsrv.watson.ibm.com/HGT/.
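In the same spirit (though this is not the paper's actual scoring function, and all names are illustrative), a codon-frame-free compositional test can compare a gene's overlapping k-mer spectrum against the genome background; genes far from the background are flagged as candidate transfers.

```python
from collections import Counter

def kmer_freqs(seq, k=3):
    """Overlapping k-mer frequencies; no codon frame is assumed."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

def composition_distance(gene, background, k=3):
    """L1 distance between a gene's k-mer spectrum and the genome-wide
    background; larger values flag the gene as a compositional outlier."""
    f, g = kmer_freqs(gene, k), kmer_freqs(background, k)
    return sum(abs(f.get(m, 0.0) - g.get(m, 0.0)) for m in set(f) | set(g))

background = "ATGC" * 250        # balanced genome-wide composition
native = "ATGC" * 60             # matches the background spectrum
foreign = "AAAATTTTAAAA" * 20    # AT-rich compositional outlier
```

Here `composition_distance(foreign, background)` greatly exceeds the native gene's distance, which is the basic signal composition-based HGT detectors exploit.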
6.
GeneSplicer is a new, flexible system for detecting splice sites in the genomic DNA of various eukaryotes. The system has been tested successfully using DNA from two reference organisms: the model plant Arabidopsis thaliana and human. It was compared to six programs representing the leading splice site detectors for each of these species: NetPlantGene, NetGene2, HSPL, NNSplice, GENIO and SpliceView. In each case GeneSplicer performed comparably to the best alternative, in terms of both accuracy and computational efficiency.
7.
We have refined entropy theory to more conveniently explore the meaning of the growing body of sequence data on nucleic acids and proteins. The concept of selection constraint was not introduced; only the analyzed sequences themselves were considered. The refined theory serves as a basis for deriving a method to analyze non-coding regions (NCRs) as well as coding regions. Positions with maximal entropy might play the most important role in genome functions, as opposed to positions with minimal entropy. This method was tested on the well-characterized coding regions of 12 strains of Classical Swine Fever Virus (CSFV) and the non-coding regions of 20 strains of CSFV. It is suitable for analyzing the nucleic acid sequence of a complete genome and for detecting positions sensitive to mutagenesis. As such, the method serves as a basis for elucidating the functional mechanism.
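The core computation behind such position ranking is per-column Shannon entropy over a set of aligned strain sequences. A minimal sketch (the exact weighting in the refined theory may differ; this is the textbook formula):

```python
import math

def positional_entropy(sequences):
    """Shannon entropy (bits) at each position of equal-length sequences.

    High-entropy positions are the variable ones; fully conserved
    positions score zero.
    """
    length = len(sequences[0])
    entropies = []
    for pos in range(length):
        column = [s[pos] for s in sequences]
        probs = [column.count(b) / len(column) for b in set(column)]
        entropies.append(-sum(p * math.log2(p) for p in probs))
    return entropies

seqs = ["ACGT", "ACGA", "ACCT", "ACCA"]
H = positional_entropy(seqs)  # positions 0,1 conserved; 2,3 variable
```

For the toy alignment, the two conserved columns score 0 bits and the two evenly split columns score 1 bit each, showing how maximal-entropy positions stand out.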
8.
A software suite, SABER (Selection of Active/Binding sites for Enzyme Redesign), has been developed for the analysis of atomic geometries in protein structures, using a geometric hashing algorithm (Barker and Thornton, Bioinformatics 2003;19:1644-1649). SABER is used to explore the Protein Data Bank (PDB) to locate proteins with a specific 3D arrangement of catalytic groups, in order to identify active sites that might be redesigned to catalyze new reactions. As a proof-of-principle test, SABER was used to identify enzymes that have the same catalytic group arrangement present in o-succinyl benzoate synthase (OSBS). Among the highest-scoring scaffolds identified by this search were L-Ala D/L-Glu epimerase (AEE) and muconate lactonizing enzyme II (MLE), both of which have been redesigned into effective OSBS catalysts, as demonstrated by experiment. Next, we used SABER to search for naturally occurring active sites in the PDB with catalytic groups similar to those present in the designed Kemp elimination enzyme KE07. From over 2000 geometric matches to the KE07 active site, SABER identified 23 matches that corresponded to residues from known active sites. The best of these matches, with a 0.28 Å catalytic atom RMSD to KE07, was then redesigned to be compatible with the Kemp elimination using RosettaDesign. We also used SABER to search for potential Kemp eliminases using a theozyme predicted to provide a greater rate acceleration than the active site of KE07, and used Rosetta to create a design based on the proteins identified.
9.
Y. Wang, H. Zhang, R. A. Scott. Protein science : a publication of the Protein Society, 1995, 4(7):1402-1411
A new model for calculating the solvation energy of proteins is developed and tested for its ability to identify the native conformation as the global energy minimum among a group of thousands of computationally generated compact non-native conformations for a series of globular proteins. In the model (called the WZS model), solvation preferences for a set of 17 chemically derived molecular fragments of the 20 amino acids are learned by a training algorithm based on maximizing the solvation energy difference between native and non-native conformations for a training set of proteins. The performance of the WZS model confirms the success of this learning approach; the WZS model misrecognizes (as more stable than native) only 7 of 8,200 non-native structures. Possible applications of this model to the prediction of protein structure from sequence are discussed.
10.
W E Grimson. Philosophical transactions of the Royal Society of London. Series B, Biological sciences, 1982, 298(1092):395-427
Computational theories of structure-from-motion and stereo vision only specify the computation of three-dimensional surface information at special points in the image. Yet the visual perception is clearly of complete surfaces. To account for this a computational theory of the interpolation of surfaces from visual information is presented. The problem is constrained by the fact that the surface must agree with the information from stereo or motion correspondence, and not vary radically between these points. Using the image irradiance equation, an explicit form of this surface consistency constraint can be derived. To determine which of two possible surfaces is more consistent with the surface consistency constraint, one must be able to compare the two surfaces. To do this, a functional from the space of possible functions to the real numbers is required. In this way, the surface most consistent with the visual information will be that which minimizes the functional. To ensure that the functional has a unique minimal surface, conditions on the form of the functional are derived. In particular, if the functional is a complete semi-norm that satisfies the parallelogram law, or the space of functions is a semi-Hilbert space and the functional is a semi-inner product, then there is a unique (to within possibly an element of the null space of the functional) surface that is most consistent with the visual information. It can be shown, based on the above conditions plus a condition of rotational symmetry, that there is a vector space of possible functionals that measure surface consistency, this vector space being spanned by the functional of quadratic variation and the functional of square Laplacian. Arguments based on the null spaces of the respective functionals are used to justify the choice of the quadratic variation as the optimal functional. 
Possible refinements to the theory, concerning the role of discontinuities in depth and the effects of applying the interpolation process to scenes containing more than one object, are discussed.
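In standard notation (the paper's own symbols may differ), the two functionals said to span this vector space are the quadratic variation and the square Laplacian:

```latex
Q(f) = \iint \left( f_{xx}^{2} + 2 f_{xy}^{2} + f_{yy}^{2} \right) \, dx \, dy,
\qquad
S(f) = \iint \left( f_{xx} + f_{yy} \right)^{2} \, dx \, dy
```

The two functionals differ in their null spaces, which is the basis of the argument sketched above for preferring the quadratic variation as the measure of surface consistency.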
11.
A new method for producing particles and membranes containing immobilized bacteria is presented. These immobilized bacteria display good stability over time making them well suited for use in a packed-bed reactor. Such a reactor is tested as a function of the different parameters of the system. The results are qualitatively similar to those obtained with purified enzyme reactors, but some discrepancies with the plug-flow model are noted. It is necessary to use a more sophisticated model in order to fit the experimental data.
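For reference, the plug-flow model against which such packed-bed data are usually compared predicts, for a first-order reaction (a textbook relation, not taken from this paper), a conversion of

```latex
X \;=\; 1 - e^{-k\tau}, \qquad \tau = V/Q
```

where k is the first-order rate constant, V the bed volume, and Q the volumetric flow rate; systematic deviations from this curve are the kind of discrepancy that motivates a more sophisticated reactor model.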
12.
Wynand S Verwoerd. BMC systems biology, 2011, 5(1):25
Background
Compared to more general networks, biochemical networks have some special features: while generally sparse, there are a small number of highly connected metabolite nodes; and metabolite nodes can also be divided into two classes: internal nodes with associated mass balance constraints and external ones without. Based on these features, reclassifying selected internal nodes (separators) to external ones can be used to divide a large complex metabolic network into simpler subnetworks. Selection of separators based on node connectivity is commonly used but affords little detailed control and tends to produce excessive fragmentation.
13.
Models in computational biology, such as those used in binding, docking, and folding, are often empirical and have adjustable parameters. Because few of these models are yet fully predictive, the problem may be nonoptimal choices of parameters. We describe an algorithm called ENPOP (energy function parameter optimization) that improves, and sometimes optimizes, the parameters for any given model and for any given search strategy that identifies the stable state of that model. ENPOP iteratively adjusts the parameters simultaneously to move the model global minimum energy conformation for each of m different molecules as close as possible to the true native conformations, based on some appropriate measure of structural error. A proof of principle is given for two very different test problems. The first involves three different two-dimensional model protein molecules having 12 to 37 monomers and four parameters in common. The parameters converge to the values used to design the model native structures. The second problem involves nine bumpy landscapes, each having between 4 and 12 degrees of freedom. For the three adjustable parameters, the globally optimal values are known in advance. ENPOP converges quickly to the correct parameter set.
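The iterative loop can be caricatured on a deliberately trivial model, where the "energy" is E(x; a) = (x - a)^2, so the global-minimum conformation is simply x = a. This toy (all names hypothetical, not ENPOP's actual code) adjusts the single parameter to shrink the mean structural error against a set of native conformations:

```python
def global_min_conformation(a):
    """Toy model: energy E(x; a) = (x - a)^2, so the minimum-energy
    'conformation' is x = a."""
    return a

def enpop_like_fit(natives, a=0.0, lr=0.5, steps=100):
    """Iteratively adjust the parameter so each model's minimum-energy
    conformation moves toward the corresponding native conformation
    (a caricature of the ENPOP loop, using mean signed error)."""
    for _ in range(steps):
        err = sum(global_min_conformation(a) - x for x in natives) / len(natives)
        a -= lr * err
    return a

natives = [1.0, 2.0, 3.0]   # best single parameter centres the minimum at 2.0
a_fit = enpop_like_fit(natives)
```

The parameter converges to the value (here the mean of the natives) that minimizes the aggregate structural error, mirroring how ENPOP recovers the parameters used to design its model native structures.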
14.
Tony Lindeberg. Biological cybernetics, 2013, 107(6):589-635
A receptive field constitutes a region in the visual field where a visual cell or a visual operator responds to visual stimuli. This paper presents a theory for what types of receptive field profiles can be regarded as natural for an idealized vision system, given a set of structural requirements on the first stages of visual processing that reflect symmetry properties of the surrounding world. These symmetry properties include (i) covariance properties under scale changes, affine image deformations, and Galilean transformations of space–time as occur for real-world image data, as well as specific requirements of (ii) temporal causality, implying that the future cannot be accessed, and (iii) a time-recursive updating mechanism of a limited temporal buffer of the past, as is necessary for a genuine real-time system. Fundamental structural requirements are also imposed to ensure (iv) mutual consistency and a proper handling of internal representations at different spatial and temporal scales. It is shown how a set of families of idealized receptive field profiles can be derived by necessity regarding spatial, spatio-chromatic, and spatio-temporal receptive fields in terms of Gaussian kernels, Gaussian derivatives, or closely related operators. Such image filters have been successfully used as a basis for expressing a large number of visual operations in computer vision, regarding feature detection, feature classification, motion estimation, object recognition, spatio-temporal recognition, and shape estimation. Hence, the associated so-called scale-space theory constitutes a theoretically well-founded and general framework for expressing visual operations. There are very close similarities between receptive field profiles predicted from this scale-space theory and receptive field profiles found by cell recordings in biological vision.
Among the family of receptive field profiles derived by necessity from the assumptions, idealized models with very good qualitative agreement are obtained for (i) spatial on-center/off-surround and off-center/on-surround receptive fields in the fovea and the LGN, (ii) simple cells with spatial directional preference in V1, (iii) spatio-chromatic double-opponent neurons in V1, (iv) space–time separable spatio-temporal receptive fields in the LGN and V1, and (v) non-separable space–time tilted receptive fields in V1, all within the same unified theory. In addition, the paper presents a more general framework for relating and interpreting these receptive fields conceptually and possibly predicting new receptive field profiles, as well as for pre-wiring covariance under scaling, affine, and Galilean transformations into the representations of visual stimuli. The paper describes the basic structure of the necessity results concerning receptive field profiles, together with the mathematical foundation of the theory, and outlines how the proposed theory could be used in further studies and modelling of biological vision. It is also shown how receptive field responses can be interpreted physically, as the superposition of relative variations of surface structure and illumination variations, given a logarithmic brightness scale, and how receptive field measurements will be invariant under multiplicative illumination variations and exposure control mechanisms.
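The Gaussian-derivative receptive-field models at the core of the theory are easy to sample numerically. The sketch below (parameter choices are illustrative, not from the paper) generates the zeroth- and first-order 1-D profiles; the smoothing kernel integrates to 1 and the derivative kernel is odd-symmetric and integrates to 0:

```python
import math

def gaussian_kernel(sigma, order=0, radius=None):
    """Sampled 1-D Gaussian (order 0) or its first derivative (order 1),
    the kind of receptive-field profile scale-space theory derives."""
    radius = radius or int(4 * sigma)
    xs = range(-radius, radius + 1)
    g = [math.exp(-x * x / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
         for x in xs]
    if order == 0:
        return g
    # d/dx of the Gaussian: -x / sigma^2 * g(x)
    return [-x / sigma ** 2 * gx for x, gx in zip(xs, g)]

smooth = gaussian_kernel(2.0)          # sums to ~1: a smoothing filter
deriv = gaussian_kernel(2.0, order=1)  # odd-symmetric, sums to ~0
```

Higher orders and outer products of such 1-D profiles give the oriented spatial models that resemble, for example, simple-cell receptive fields.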
15.
The combinatorial distance geometry method for the calculation of molecular conformation. II. Sample problems and computational statistics
The performance of a branch and bound algorithm for molecular energy minimization is evaluated on a variety of test problems. Although the algorithm is not at present efficient enough for use in most practical situations, we show that it has distinct advantages over more conventional methods of global minimization. In addition, this study illustrates the technique on which the present algorithm is based, and the problems which must be overcome in developing an efficient algorithm based on similar principles.
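The algorithm class can be illustrated with a generic 1-D Lipschitz branch and bound (an assumption-laden sketch of the technique, not the paper's distance-geometry implementation): each interval gets a lower bound from a Lipschitz constant, intervals that cannot beat the incumbent are pruned, and the rest are bisected.

```python
import heapq

def branch_and_bound(f, lo, hi, lipschitz, tol=1e-4):
    """Global minimum of f on [lo, hi] given a Lipschitz constant.

    Each interval's lower bound is f(mid) - L * width / 2; intervals whose
    bound cannot beat the best value found so far are pruned, the rest split.
    """
    best_x, best_f = lo, f(lo)
    heap = [(f((lo + hi) / 2) - lipschitz * (hi - lo) / 2, lo, hi)]
    while heap:
        bound, a, b = heapq.heappop(heap)
        if bound >= best_f - tol:
            continue                      # cannot beat the incumbent: prune
        mid = (a + b) / 2
        if f(mid) < best_f:
            best_x, best_f = mid, f(mid)
        for aa, bb in ((a, mid), (mid, b)):
            m = (aa + bb) / 2
            heapq.heappush(heap, (f(m) - lipschitz * (bb - aa) / 2, aa, bb))
    return best_x, best_f

# f(x) = (x^2 - 1)^2 has two global minima (x = +/-1, value 0);
# |f'| <= 24 on [-2, 2] serves as the Lipschitz constant.
x, fx = branch_and_bound(lambda x: (x * x - 1) ** 2, -2.0, 2.0, lipschitz=24.0)
```

Unlike a local descent started from one point, the pruning bound certifies that the returned value is within `tol` of the true global minimum, which is the kind of guarantee that distinguishes branch and bound from conventional minimizers.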
16.
A new computational approach for real protein folding prediction
An effective and fast minimization approach is proposed for the prediction of protein folding, in which the 'relative entropy' is used as a minimization function and the off-lattice model is used. In this approach, we only use the information of distances between the consecutive Calpha atoms along the peptide chain and a generalized form of the contact potential for 20 types of amino acids. Tests of the algorithm are performed on real proteins. The root mean square deviations of the structures of eight folded target proteins versus the native structures are in a reasonable range. In principle, this method is an improvement on the energy minimization approach.
17.
Louis G. Zachos. Journal of theoretical biology, 2009, 259(3):646-125
A new computational model has been developed to simulate growth of regular sea urchin skeletons. The model incorporates the processes of plate addition and individual plate growth into a composite model of whole-body (somatic) growth. A simple developmental model based on hypothetical morphogens underlies the assumptions used to define the simulated growth processes. The data model is based on a Delaunay triangulation of plate growth center points, using the dual Voronoi polygons to define plate topologies. A spherical frame of reference is used for growth calculations, with affine deformation of the sphere (based on a Young-Laplace membrane model) to result in an urchin-like three-dimensional form. The model verifies that the patterns of coronal plates in general meet the criteria of Voronoi polygonalization, that a morphogen/threshold inhibition model for plate addition results in the alternating plate addition pattern characteristic of sea urchins, and that application of the Bertalanffy growth model to individual plates results in simulated somatic growth that approximates that seen in living urchins. The model suggests avenues of research that could explain some of the distinctions between modern sea urchins and the much more disparate groups of forms that characterized the Paleozoic Era.
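The Bertalanffy growth model applied to individual plates takes the standard form below (parameter names are the conventional ones, assumed rather than taken from the paper): size rises toward an asymptote L_inf at rate k.

```python
import math

def bertalanffy(t, L_inf, k, t0=0.0):
    """von Bertalanffy growth: size approaches the asymptote L_inf
    at rate k, with t0 the (extrapolated) age of zero size."""
    return L_inf * (1.0 - math.exp(-k * (t - t0)))

# Plate size creeps monotonically toward the asymptote as the
# simulated urchin ages (illustrative parameter values).
sizes = [bertalanffy(t, L_inf=10.0, k=0.5) for t in (0, 1, 5, 20)]
```

Summing such saturating plate-growth curves over an expanding plate population is what yields the simulated somatic growth compared against living urchins.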
18.
A computational combinatorial approach is proposed for the design of a peptide inhibitor of Ras protein. The procedure involves three steps. First, a 'Multiple Copy Simultaneous Search' identifies the location of specific functional groups on the Ras surface. This search method allowed us to identify an important binding surface consisting of two beta strands (residues 5-8 and 52-56), in addition to the well known Ras effector loop and switch II region. The two beta strands had not previously been reported to be involved in Ras-Raf interaction. Second, after constructing the peptide inhibitor chain based on the location of N-methylacetamide (NMA) minima, functional groups are selected and connected to the main chain Calpha atom. This step generates a number of possible peptides with different sequences on the Ras surface. Third, potential inhibitors are designed based on a sequence alignment of the peptides generated in the second step. This computational approach reproduces the conserved pattern of hydrophobic, hydrophilic and charged amino acids identified from the Ras effectors. The advantages and limitations of this approach are discussed.
19.
A computational method for calculating the dynamic distensibility of the vessel wall in vivo, developed on the basis of pressure pulse transmission, is proposed. Distensibilities of the descending thoracic aorta, abdominal aorta, and femoral artery in normal dogs, and of the femoral artery of a typical dog under the action of vasoactive drugs, have been calculated. For the femoral artery, the computed distensibility is compared with values of diameter change/pressure change. Comparison of the results clearly indicates the feasibility of the proposed method. The order of distensibility found is: descending thoracic aorta > abdominal aorta > femoral artery.
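As background (a standard hemodynamic relation, not necessarily the paper's exact formulation), pulse wave velocity c and distensibility D are linked by the Bramwell-Hill equation:

```latex
c \;=\; \sqrt{\frac{1}{\rho\, D}}
\quad\Longleftrightarrow\quad
D \;=\; \frac{1}{\rho\, c^{2}},
\qquad
D \;\equiv\; \frac{1}{A}\,\frac{\mathrm{d}A}{\mathrm{d}P}
```

where rho is the blood density and A the lumen cross-sectional area; measuring how fast the pressure pulse travels along a segment thus yields D without a direct diameter measurement, which is why stiffer distal arteries (higher c) show the lower distensibilities reported above.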
20.