1.
Eric C. Dykeman 《Nucleic acids research》2015,43(12):5708-5715
In this paper I outline a fast method called KFOLD for implementing the Gillespie algorithm to stochastically sample the folding kinetics of an RNA molecule at single base-pair resolution. In the same fashion as the KINFOLD algorithm, which also uses the Gillespie algorithm to predict folding kinetics, KFOLD stochastically chooses a new RNA secondary structure state that is accessible from the current state by a single base-pair addition/deletion following the Gillespie procedure. However, unlike KINFOLD, the KFOLD algorithm exploits the fact that many of the base-pair addition/deletion reactions and their corresponding rates do not change between steps of the algorithm. This allows KFOLD to achieve a substantial speed-up in the time required to compute a prediction of the folding pathway and, for a fixed number of base-pair moves, the computation scales logarithmically with sequence size. This increase in speed opens up the possibility of studying the kinetics of much longer RNA sequences at single base-pair resolution, while also allowing the RNA folding statistics of smaller RNA sequences to be computed much more quickly.
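As a rough illustration of the Gillespie step that both KINFOLD and KFOLD rely on, the sketch below draws a waiting time and the next move from a table of move rates. It is a generic, minimal SSA loop in Python; the rate table, move names and the comment about rate updates are hypothetical placeholders, not the KFOLD implementation itself.

```python
import math
import random

def gillespie_step(rates):
    """One Gillespie step: given a dict {move: rate}, draw an exponential
    waiting time and pick a move with probability rate / total."""
    total = sum(rates.values())
    dt = -math.log(random.random()) / total
    r = random.random() * total
    acc = 0.0
    for move, rate in rates.items():
        acc += rate
        if r <= acc:
            return move, dt
    return move, dt  # numerical fallback for rounding at the upper edge

# Toy usage: three hypothetical base-pair moves with made-up rates (1/s).
rates = {"add (3,10)": 2.0, "add (4,9)": 1.5, "delete (5,8)": 0.3}
t = 0.0
for _ in range(5):
    move, dt = gillespie_step(rates)
    t += dt
    print(f"t = {t:.3f} s: apply {move}")
    # A KFOLD-like scheme would now update only the rates affected by `move`
    # rather than rebuilding the whole table, which is where the speed-up comes from.
```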
2.
Many populations live in ‘advective’ media, such as rivers, where flow is biased in one direction. In these environments, populations face the possibility of extinction by being washed out of the system, even if the net reproductive rate (R) is greater than one. We propose a formal condition for population persistence in advective systems: a population can persist at any location in a homogeneous habitat if and only if it can invade upstream. This leads to a remarkably simple recipe for calculating the minimal value of the net reproductive rate required for population persistence. We apply this criterion to discrete-time models of a semelparous population in which dispersal is characterized by a mechanistically derived kernel. We demonstrate that persistence depends strongly on the form of the kernel’s ‘tail’, a result consistent with previous literature on the speed of spread of invasions. We apply our theory to models of stream invertebrates with a biphasic life cycle, and relate our results to the ‘colonization cycle’ hypothesis, in which bias in downstream drift is offset by upstream bias in adult dispersal. In the absence of bias in adult dispersal, variability in the duration of the larval stage and in oviposition sites has a large effect on the persistence condition. The minimization calculations required in our approach are very straightforward, indicating the feasibility of future applications to life-history theory.
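The persistence criterion above (persist if and only if the population can invade upstream) can be probed numerically with a discrete-time integrodifference model, n_{t+1}(x) = R ∫ k(x−y) n_t(y) dy, using a downstream-biased kernel. The sketch below is a generic simulation of that kind of model in Python; the Gaussian kernel with a drift term and all parameter values are assumptions for illustration, not the mechanistic kernel of the paper. It simply checks whether a small inoculum spreads upstream or washes out.

```python
import numpy as np

# Spatial grid; x < 0 is "upstream" of the release point.
L, dx = 100.0, 0.25
x = np.arange(-L, L + dx, dx)

R = 1.6          # net reproductive rate (assumed value)
advection = 1.0  # mean downstream displacement per generation (assumed)
sigma = 2.0      # dispersal spread per generation (assumed)

# Downstream-biased Gaussian dispersal kernel k(x - y); rows index the
# destination x, columns the source y. Mass dispersing beyond the grid is
# lost, which plays the role of washout at the domain boundaries.
offsets = x[:, None] - x[None, :]
kernel = np.exp(-((offsets - advection) ** 2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

n = np.where(np.abs(x) < 1.0, 1.0, 0.0)  # small inoculum at the origin

for _ in range(60):
    n = R * (kernel @ n) * dx  # reproduction followed by dispersal (quadrature of the integral)

print("density 10 units upstream of the release point:", n[np.argmin(np.abs(x + 10))])
# If this upstream density grows from generation to generation, the population
# can invade upstream and, by the criterion above, persists; if it decays
# toward zero, the population is eventually washed downstream.
```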
3.
In this paper, we present a mathematical foundation, including a convergence analysis, for the cascade architecture neural network. Our analysis shows that convergence of the cascade architecture network is assured because it satisfies Liapunov criteria in an added-hidden-unit domain rather than in the time domain. From this analysis, a mathematical foundation for the cascade correlation learning algorithm can be derived. Furthermore, the cascade correlation scheme emerges as a special case of the analysis, from which we propose an efficient hardware learning algorithm called Cascade Error Projection (CEP). CEP provides efficient learning in hardware and is faster to train, because part of the weights are obtained deterministically, and the learning of the remaining weights from the inputs to the hidden unit is performed as single-layer perceptron learning with the previously determined weights kept frozen. In addition, one can start with zero weight values (rather than random finite weight values) when the learning of each layer commences. Further, unlike the cascade correlation algorithm (where a pool of candidate hidden units is added), only a single hidden unit is added at a time, so simplicity in hardware implementation is also achieved. Finally, 5- to 8-bit parity and chaotic time-series prediction problems are investigated; the simulation results demonstrate that 4-bit or greater weight quantization is sufficient for learning a neural network using CEP. It is also demonstrated that the technique can compensate for lower-bit weight resolution by incorporating additional hidden units, although generalization results may suffer somewhat with lower-bit weight quantization.
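The constructive idea described above, adding one hidden unit at a time, keeping earlier weights frozen, and obtaining part of the weights deterministically, can be sketched as follows. This is a generic cascade-style toy in Python/NumPy fit to 3-bit parity, not the CEP algorithm from the paper: the delta-rule training of each new unit and the least-squares solve for the output weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# 3-bit parity: inputs in {-1, +1}, target is the product of the bits.
X = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)], float)
y = X.prod(axis=1)

def add_unit(features, residual, epochs=2000, lr=0.05):
    """Train one new tanh hidden unit (delta rule) to track the current
    residual; its incoming weights are then frozen for good."""
    w = rng.normal(scale=0.1, size=features.shape[1] + 1)
    inp = np.hstack([features, np.ones((len(features), 1))])   # bias column
    for _ in range(epochs):
        h = np.tanh(inp @ w)
        err = residual - h
        w += lr * inp.T @ (err * (1 - h**2)) / len(inp)
    return np.tanh(inp @ w)                                     # frozen unit activations

features = X.copy()
outputs = np.hstack([X, np.ones((len(X), 1))])                  # inputs + bias feed the output
pred = np.zeros_like(y)

for unit in range(4):                                           # add hidden units one at a time
    h = add_unit(features, y - pred)
    features = np.hstack([features, h[:, None]])                # cascade: new unit sees old units
    outputs = np.hstack([outputs, h[:, None]])
    # Output weights are obtained deterministically by least squares.
    w_out, *_ = np.linalg.lstsq(outputs, y, rcond=None)
    pred = outputs @ w_out
    print(f"{unit + 1} hidden unit(s): mean squared error = {np.mean((y - pred)**2):.4f}")
```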
4.
Pilsung Kang 《Cluster computing》2012,15(3):321-332
We present a modular approach to implementing dynamic algorithm switching for parallel scientific software. By using a compositional framework based on function call interception techniques, our method transparently integrates algorithm-switching code with a given program without directly modifying the original code structure. Through fine-grained control of the algorithmic behavior of an application at the level of functions, our approach supports the design and implementation of application-specific switching scenarios in a modular way. Switching is performed dynamically at loop boundaries of a parallel simulation, where cooperating processes in concurrent execution typically synchronize and intermediate computation results are consistent. In this way, newly added switching operations do not cause race conditions that could produce unreliable computation results in parallel simulations. By applying our method to a real-world scientific application and adapting its algorithmic behavior to the properties of input problems, we demonstrate the applicability and effectiveness of our approach to constructing efficient parallel simulations.
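Function-call interception of the kind described above can be sketched in Python with a wrapper that swaps the algorithm used by an unmodified call site at chosen loop steps. The switching rule, the two solver variants and the driver loop below are hypothetical stand-ins, not the paper's framework (which targets compiled parallel codes).

```python
import functools

def switchable(variants, choose):
    """Intercept calls to a function and dispatch to one of several algorithm
    variants; `choose` inspects the call and picks a variant by name."""
    def decorator(default):
        table = {"default": default, **variants}
        @functools.wraps(default)
        def wrapper(*args, **kwargs):
            return table[choose(*args, **kwargs)](*args, **kwargs)
        return wrapper
    return decorator

# Two hypothetical solver variants for illustration.
def solve_dense(step, data):
    return [x * 0.5 for x in data]          # pretend alternative algorithm

def choose(step, data):
    # Switch only at a "loop boundary" condition, e.g. every 10th step,
    # mimicking switching when cooperating processes have synchronized.
    return "dense" if step % 10 == 0 else "default"

@switchable({"dense": solve_dense}, choose)
def solve(step, data):
    return [x - 1.0 for x in data]          # original (default) algorithm

data = [float(i) for i in range(4)]
for step in range(1, 21):
    data = solve(step, data)                # the call site never changes
print(data)
```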
5.
A simple vector implementation of the Laplace-transformed cable equations in passive dendritic trees
Jaap van Pelt 《Biological cybernetics》1992,68(1):15-21
Transient potentials in dendritic trees can be calculated by approximating the dendrite by a set of connected cylinders. The profiles for the currents and potentials in the whole system can then be obtained by imposing the proper boundary conditions and calculating these profiles along each individual cylinder. An elegant implementation of this method has been described by Holmes (1986), and is based on the Laplace transform of the cable equation. By calculating the currents and potentials only at the ends of the cylinders, the whole system of connected cylinders can be described by a set of n equations, where n denotes the number of internal and external nodes (points of connection and endpoints of the cylinders). The present study shows that the set of equations can be formulated as a simple vector equation which is essentially a generalization of Ohm's law for the whole system. The current and potential n-vectors are coupled by an n × n conductance matrix whose structure immediately reflects the connectivity pattern of the connected cylinders. The vector equation accounts for conductances, associated with driving potentials, which may be local or distributed over the membrane. It is shown that the vector equation can easily be adapted for the calculation of transients over a period in which stepwise changes in system parameters have occurred. In this adaptation it is assumed that the initial conditions for the potential profiles at the start of a new period after a stepwise change can be approximated by steady-state solutions. The vector representation of the Laplace-transformed equations is attractive because of its simplicity and because the structure of the conductance matrix directly corresponds to the connectivity pattern of the dendritic tree. It will therefore facilitate the automatic generation of the equations once the geometry of the branching structure is known.
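The generalized Ohm's law described above has the form G·V = I, with the sparsity pattern of the conductance matrix G mirroring how the cylinders are connected. The NumPy sketch below assembles such a matrix for a tiny branched structure from per-cylinder axial conductances and solves for the node potentials; the three-cylinder geometry, the conductance values and the injected current are made-up illustrations, and the frequency-dependent (Laplace-domain) terms and membrane conductances are omitted for brevity.

```python
import numpy as np

# Nodes: 0 = soma end, 1 = branch point, 2 and 3 = tips of two daughter cylinders.
# Each cylinder contributes an axial conductance between its two end nodes.
cylinders = [(0, 1, 2.0e-8),   # (node_a, node_b, axial conductance in S), values assumed
             (1, 2, 1.0e-8),
             (1, 3, 1.0e-8)]

n = 4
G = np.zeros((n, n))
for a, b, g in cylinders:
    # Standard nodal assembly: the pattern of nonzeros follows the connectivity.
    G[a, a] += g
    G[b, b] += g
    G[a, b] -= g
    G[b, a] -= g

# Small leak to ground at each node keeps the system non-singular (a stand-in
# for the membrane terms of the full Laplace-domain formulation).
G += np.eye(n) * 1.0e-9

I = np.zeros(n)
I[0] = 1.0e-10             # current injected at the soma end (A), assumed

V = np.linalg.solve(G, I)  # generalized Ohm's law: G V = I
print("node potentials (V):", V)
```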
6.
Joshua D. Evans Bruce R. Whiting David G. Politte Joseph A. O'Sullivan Paul F. Klahr Jeffrey F. Williamson 《Physica medica : PM : an international journal devoted to the applications of physics to medicine and biology : official journal of the Italian Association of Biomedical Physics (AIFB)》2013,29(5):500-512
Purpose: To present a framework for characterizing the data needed to implement a polyenergetic model-based statistical reconstruction algorithm, Alternating Minimization (AM), on a commercial fan-beam CT scanner, and a novel method for assessing the accuracy of the commissioned data model. Methods: The X-ray spectra for three tube potentials on the Philips Brilliance CT scanner were estimated by fitting a semi-empirical X-ray spectrum model to transmission measurements. Spectral variations due to the bowtie filter were modeled computationally. Eight homogeneous cylinders of PMMA, Teflon and water with varying diameters were scanned at each energy. Central-axis scatter was measured for each cylinder using a beam-stop technique. AM reconstruction with a single-basis object model matched to the scanned cylinder's composition allows assessment of the accuracy of the AM algorithm's polyenergetic data model. Filtered back-projection (FBP) was also performed to compare consistency metrics such as uniformity and object-size dependence. Results: The spectrum model fit the measured transmission curves with a residual root-mean-square error of 1.20%–1.34% for the three scanning energies. The estimated spectrum and scatter data supported polyenergetic AM reconstruction of the test cylinders to within 0.5% of the expected values in the matched object-model reconstruction test. In comparison to FBP, polyenergetic AM exhibited better uniformity and less object-size dependence. Conclusions: Reconstruction using a matched object model illustrates that the polyenergetic AM algorithm's data model was commissioned to within 0.5% of an expected ground truth. These results support ongoing and future research with polyenergetic AM reconstruction of commercial fan-beam CT data for quantitative CT applications.
7.
MOTIVATION: The antigen receptors of adaptive immunity, T-cell receptors and immunoglobulins, are encoded by genes assembled stochastically from combinatorial libraries of gene segments. Immunoglobulin genes then experience further diversification through hypermutation. Analysis of the somatic genetics of the immune response depends explicitly on inference of the details of the recombinatorial process giving rise to each of the participating antigen receptor genes. We have developed a dynamic programming algorithm to perform this reconstruction and have implemented it as web-accessible software called SoDA (Somatic Diversification Analysis). RESULTS: We tested SoDA against a set of 120 artificial immunoglobulin sequences generated by simulation of recombination and compared the results with two other widely used programs. SoDA inferred the correct gene segments more frequently than the other two programs. We further tested these programs using 30 human immunoglobulin genes from GenBank and highlight instances where the recombinations inferred by the three programs differ. SoDA appears generally to find more likely recombinations.
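Reconstructing which germline segments produced an observed receptor gene is essentially an alignment problem, and the dynamic programming involved can be illustrated with a tiny global-alignment scorer: align the observed sequence against each candidate segment and keep the best-scoring one. The scoring scheme, the toy segment library and the sequences below are invented for illustration and do not reproduce SoDA's actual model.

```python
def align_score(a, b, match=2, mismatch=-1, gap=-2):
    """Needleman-Wunsch-style global alignment score between strings a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, cols):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[-1][-1]

# Hypothetical mini-library of germline segments and an observed (mutated) read.
segments = {"V1": "CAGGTGCAGCTG", "V2": "GAGGTGAAGCTG", "V3": "CAGGTCCAGCTT"}
observed = "CAGGTGCTGCTG"

best = max(segments, key=lambda name: align_score(observed, segments[name]))
print("best-matching segment:", best)
```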
8.
Development and implementation of an algorithm for detection of protein complexes in large interaction networks
Md Altaf-Ul-Amin Yoko Shinbo Kenji Mihara Ken Kurokawa Shigehiko Kanaya 《BMC bioinformatics》2006,7(1):207-13
Background
After complete sequencing of a number of genomes, the focus has now turned to proteomics. Advanced proteomics technologies such as two-hybrid assays and mass spectrometry are producing huge data sets of protein-protein interactions, which can be portrayed as networks, and one of the burning issues is to find protein complexes in such networks. The enormous size of protein-protein interaction (PPI) networks warrants the development of efficient computational methods for extracting significant complexes.
9.
The massively parallel genetic algorithm for RNA folding: MIMD implementation and population variation
A massively parallel Genetic Algorithm (GA) has been applied to RNA sequence folding on three different computer architectures. The GA, an evolution-like algorithm that is applied to a large population of RNA structures based on a pool of helical stems derived from an RNA sequence, evolves this population in parallel. The algorithm was originally designed and developed for a 16,384-processor SIMD (Single Instruction Multiple Data) MasPar MP-2. More recently it has been adapted to a 64-processor MIMD (Multiple Instruction Multiple Data) SGI ORIGIN 2000 and a 512-processor MIMD CRAY T3E. The MIMD version of the algorithm raises issues concerning RNA structure data layout and processor communication. In addition, the effects of population variation on the predicted results are discussed. Also presented are the scaling properties of the algorithm with respect to the number of physical processors utilized and the number of virtual processors (RNA structures) operated upon.
10.
BACKGROUND: The phenomena that emerge from the interaction of the stochastic opening and closing of ion channels (channel noise) with non-linear neural dynamics are essential to our understanding of the operation of the nervous system. The effects that channel noise can have on neural dynamics are generally studied using numerical simulations of stochastic models. Algorithms based on discrete Markov Chains (MC) seem to be the most reliable and trustworthy, but even optimized algorithms come with a non-negligible computational cost. Diffusion Approximation (DA) methods use Stochastic Differential Equations (SDE) to approximate the behavior of a number of MCs, considerably speeding up simulation times. However, model comparisons have suggested that DA methods did not lead to the same results as MC modeling in terms of channel noise statistics and effects on excitability. Recently, it was shown that the difference arose because the MCs were modeled with coupled gating particles, while the DA was modeled using uncoupled gating particles. Implementations of DA with coupled particles, in the context of a specific kinetic scheme, yielded results similar to MC. However, it remained unclear how to generalize these implementations to different kinetic schemes, or whether they were faster than MC algorithms. Additionally, a steady-state approximation was used for the stochastic terms, which, as we show here, can introduce significant inaccuracies. MAIN CONTRIBUTIONS: We derived the SDE explicitly for any given ion channel kinetic scheme. The resulting generic equations were surprisingly simple and interpretable, allowing an easy, transparent and efficient DA implementation that avoids unnecessary approximations. The algorithm was tested in a voltage-clamp simulation and in two different current-clamp simulations, yielding the same results as MC modeling. The simulation efficiency of this DA method also demonstrated considerable superiority over MC methods, except when short time steps or low channel numbers were used.
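For a single two-state (closed/open) channel population, a commonly used diffusion approximation evolves the open fraction n with an SDE of the form dn = (α(1−n) − βn) dt + sqrt((α(1−n) + βn)/N) dW, which the Euler–Maruyama sketch below integrates under voltage clamp. The rate constants, channel count and time step are made-up values, and this generic two-state example is not the paper's general derivation for arbitrary kinetic schemes.

```python
import numpy as np

rng = np.random.default_rng(1)

alpha, beta = 0.5, 0.2   # opening / closing rates (1/ms), assumed values
N = 200                  # number of channels, assumed
dt = 0.01                # time step (ms)
steps = 5000

n = np.zeros(steps)      # open fraction under voltage clamp
for k in range(1, steps):
    drift = alpha * (1.0 - n[k - 1]) - beta * n[k - 1]
    # Chemical-Langevin-style noise term; its amplitude shrinks as N grows.
    diffusion = np.sqrt(max(alpha * (1.0 - n[k - 1]) + beta * n[k - 1], 0.0) / N)
    n[k] = n[k - 1] + drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()
    n[k] = min(max(n[k], 0.0), 1.0)   # keep the fraction in [0, 1]

print("steady-state open fraction (simulated):    ", n[steps // 2:].mean())
print("steady-state open fraction (deterministic):", alpha / (alpha + beta))
```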
11.
The large amount of image data necessary for high-resolution 3D reconstruction of macromolecular assemblies leads to significant increases in computational time. One of the most time-consuming operations is 3D density map reconstruction, and software optimization can greatly reduce the time required for any given structural study. The majority of algorithms proposed for improving the computational effectiveness of a 3D reconstruction are based on a ray-by-ray projection of each image into the reconstructed volume. In this paper, we propose a novel fast implementation of the "filtered back-projection" algorithm based on a voxel-by-voxel principle. Our implementation has been exhaustively tested using both model and real data. We compared 3D reconstructions obtained by the new approach with results obtained by the filtered back-projection algorithm and the Fourier–Bessel algorithm commonly used for reconstructing icosahedral viruses. These computational experiments demonstrate the robustness, reliability, and efficiency of this approach.
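The voxel-by-voxel (as opposed to ray-by-ray) principle can be illustrated in 2D: each pixel gathers the projection value it maps to under every view angle, rather than each ray scattering its value into the volume. The minimal NumPy sketch below back-projects a small parallel-beam sinogram this way; filtering of the projections, interpolation and the full 3D geometry are omitted, and the phantom and angles are made-up.

```python
import numpy as np

size = 64
angles = np.deg2rad(np.arange(0, 180, 3))           # projection angles, assumed

# Toy phantom (a bright square) and its parallel-beam sinogram by brute force.
phantom = np.zeros((size, size))
phantom[24:40, 24:40] = 1.0
ys, xs = np.mgrid[0:size, 0:size]
xc, yc = xs - size / 2 + 0.5, ys - size / 2 + 0.5   # pixel centres
nbins = int(np.ceil(size * np.sqrt(2)))
sino = np.zeros((len(angles), nbins))
for a, th in enumerate(angles):
    t = (xc * np.cos(th) + yc * np.sin(th) + nbins / 2).astype(int)
    np.add.at(sino[a], t.ravel(), phantom.ravel())

# Pixel-by-pixel back-projection: every pixel looks up and accumulates the
# sinogram value at the detector bin it projects onto in each view.
recon = np.zeros_like(phantom)
for a, th in enumerate(angles):
    t = (xc * np.cos(th) + yc * np.sin(th) + nbins / 2).astype(int)
    recon += sino[a][t]

print("brightest region recovered at:", np.unravel_index(recon.argmax(), recon.shape))
```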
12.
13.
Mulvihill BM Prendergast PJ 《Computer methods in biomechanics and biomedical engineering》2008,11(5):443-451
The rate of bone loss is subject to considerable variation between individuals. With the 'mechanostat' model of Frost, genetic variations in bone mechanoresponsiveness are modelled by different mechanostat 'setpoints', which may also change with age or disease. In this paper, the following setpoints are used: εmin (the strain below which resorption is triggered); εmax (the strain above which deposition occurs); and ωcrit (the microdamage level above which damage-stimulated resorption occurs). To simulate decreased mechanosensitivity, εmax is increased. Analyses carried out on a simplified model of a trabecula show that εmax is a critical parameter: if it is higher in an individual (genetics) or increases (with age), the mass deficit in each remodelling cycle increases. Furthermore, there is a value of εmax above which trabecular perforation occurs, leading to rapid loss of bone mass. Maintaining bone cell mechanosensitivity could therefore be a therapeutic target for the prevention of osteoporosis.
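The role of the deposition setpoint εmax can be caricatured with a toy remodelling cycle at a single trabecular site, sketched below in Python: a resorption cavity is always excavated, and it is refilled only if the strain sensed afterwards exceeds εmax. The numbers, the strain proxy (load / thickness) and the cycle structure are illustrative assumptions, not the paper's finite-element trabecula model; the loop only reproduces the qualitative claim that raising εmax increases the per-cycle mass deficit and can end in perforation.

```python
def remodelling_cycle(thickness, load=1.0, eps_max=2.5, cavity=0.05, refill=0.05):
    """One remodelling cycle at a single trabecular site (toy model).
    A resorption cavity is always excavated; it is refilled only if the strain
    sensed afterwards exceeds the deposition setpoint eps_max. Strain is crudely
    taken as load / thickness; all numbers are illustrative, not physiological."""
    thickness -= cavity                      # resorption phase
    if thickness <= 0:
        return 0.0                           # trabecular perforation
    if load / thickness > eps_max:           # deposition setpoint exceeded?
        thickness += refill                  # formation phase refills the cavity
    return thickness

# Raising eps_max (decreased mechanosensitivity) means the cavity is refilled
# less often, so a net mass deficit accumulates and the strut may perforate.
for eps_max in (2.0, 4.0, 30.0):
    t = 0.4
    for cycle in range(100):
        t = remodelling_cycle(t, eps_max=eps_max)
        if t == 0.0:
            break
    print(f"eps_max = {eps_max}: thickness after {cycle + 1} cycles = {t:.2f}")
```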
14.
15.
We describe an algorithm (IRSA) for identification of common regulatory signals in samples of unaligned DNA sequences. The algorithm was tested on randomly generated sequences of fixed length with an implanted signal of length 15 carrying 4 mutations, and on natural upstream regions of bacterial genes regulated by PurR, ArgR and CRP. It was then applied to upstream regions of orthologous genes from Escherichia coli and related genomes. Several new palindromic binding signals and direct repeats were identified. Finally, we present a parallel version suitable for computers supporting the MPI protocol. This implementation is not strictly bounded by the number of available processors, and the computation speed depends linearly on the number of processors.
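A crude way to see what "identification of common signals in unaligned sequences" involves is a brute-force consensus scan: take every window of the target length in one sequence as a candidate signal and score it by how closely the best-matching window in every other sequence resembles it. The Python sketch below does exactly that for toy sequences; it is far simpler (and weaker) than IRSA, and the sequences and signal length are invented.

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def best_match(candidate, seq):
    """Smallest Hamming distance between the candidate and any window of seq."""
    k = len(candidate)
    return min(hamming(candidate, seq[i:i + k]) for i in range(len(seq) - k + 1))

def find_common_signal(seqs, k):
    """Score every k-mer of the first sequence against all the others and
    return the candidate with the lowest total distance."""
    best = None
    for i in range(len(seqs[0]) - k + 1):
        candidate = seqs[0][i:i + k]
        score = sum(best_match(candidate, s) for s in seqs[1:])
        if best is None or score < best[0]:
            best = (score, candidate)
    return best

# Toy upstream regions carrying a weakly conserved planted signal.
seqs = [
    "AACCGTTGTGATATCACAGGCTA",
    "GGTTGTGACCTCACATTACGATC",
    "CATTCGTGTGAGGTCACAAGTCC",
]
print(find_common_signal(seqs, k=12))
```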
16.
We described advection and diffusion of water isotopologues in leaves in the non-steady state, applied specifically to amphistomatous leaves. This explains the isotopic enrichment of leaf water from the xylem to the mesophyll, and we showed how it relates to earlier models of leaf water enrichment in non-steady state. The effective length or tortuosity factor of isotopologue movement in leaves is unknown and, therefore, is a fitted parameter in the model. We compared the advection-diffusion model to previously published data sets for Lupinus angustifolius and Eucalyptus globulus. Night-time stomatal conductance was not measured in either data set and is therefore another fitted parameter. The model compared very well with the observations of bulk mesophyll water during the whole diel cycle. It compared well with the enrichment at the evaporative sites during the day but showed some deviations at night for E. globulus. It became clear from our analysis that night-time stomatal conductance should be measured in the future and that the temperature dependence of the tracer diffusivities should be accounted for. However, varying mesophyll water volume did not seem critical for obtaining a good prediction of leaf water enrichment, at least in our data sets. In addition, observations of single diurnal cycles do not seem to constrain the effective length that relates to the tortuosity of the water path in the mesophyll. Finally, we showed when simpler models of leaf water enrichment were suitable for applications of leaf water isotopes once weighted with the appropriate gas exchange flux. We showed that taking an unsuitable leaf water enrichment model could lead to large biases when cumulated over only 1 day.
17.
Series of six repeated vertical zooplankton hauls, 5 min apart, were made on an hourly basis during 7 h at an anchor station in the upper St. Lawrence Estuary, Québec. A hierarchical analysis of variance showed that hour-to-hour variations in the numbers of most zooplankton components were of greater magnitude than those found within 30 min or caused by counting errors. The increase of variance with increasing length of the sampling period was investigated using a 175 h time series of zooplankton samples taken 30 min apart at the same location. The results show that the confidence interval of a single observation at the anchor station increases as the scale of the experiment approaches that of the main advective processes (semidiurnal tidal currents), after which it remains relatively stable. For a given sampling scale, the statistical dispersion of zooplankton is not permanent but varies in time and space under the effects of tidal advection and mixing. These results show that, in tidal estuaries, advection phenomena are more easily recognizable than turbulence effects.
18.
Vigmond EJ Weber dos Santos R Prassl AJ Deo M Plank G 《Progress in biophysics and molecular biology》2008,96(1-3):3-18
The bidomain equations are widely used for the simulation of electrical activity in cardiac tissue. They are especially important for accurately modeling extracellular stimulation, as evidenced by their prediction of virtual electrode polarization before experimental verification. However, solution of the equations is computationally expensive due to the fine spatial and temporal discretization needed. This limits the size and duration of the problems that can be modeled. Regardless of the specific form into which they are cast, the computational bottleneck becomes the repeated solution of a large linear system. The purpose of this review is to give an overview of the equations and the methods by which they have been solved. Of particular note are recent developments in multigrid methods, which have proven to be the most efficient.
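For reference, one standard parabolic-elliptic form in which the bidomain equations are commonly written is sketched below; the review itself discusses several equivalent formulations, and sign and stimulus conventions vary between authors, so this is a representative presentation rather than the paper's specific one.

```latex
% Transmembrane potential V_m, extracellular potential \phi_e,
% intra-/extracellular conductivity tensors \sigma_i, \sigma_e,
% surface-to-volume ratio \beta, membrane capacitance C_m,
% ionic current I_{ion} (with gating states \mathbf{s}), stimulus I_{stim}.
\begin{aligned}
\nabla \cdot \bigl(\sigma_i \nabla V_m\bigr) + \nabla \cdot \bigl(\sigma_i \nabla \phi_e\bigr)
  &= \beta \left( C_m \frac{\partial V_m}{\partial t} + I_{ion}(V_m, \mathbf{s}) \right), \\
\nabla \cdot \bigl((\sigma_i + \sigma_e) \nabla \phi_e\bigr)
  &= -\,\nabla \cdot \bigl(\sigma_i \nabla V_m\bigr) - I_{stim}.
\end{aligned}
```

Each time step couples the elliptic solve for the extracellular potential with the parabolic update of the transmembrane potential, and it is the repeated solution of the resulting large linear system that methods such as multigrid aim to accelerate.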
19.
The Gompertz law of mortality quantitatively describes the mortality rate of humans and of almost all multicellular animals. However, its underlying kinetic mechanism is unclear. The Gompertz law cannot explain the mortality plateau at advanced ages and cannot give an explicit relationship between temperature and mortality. In this study, a reaction-kinetics model with a time-dependent rate coefficient is proposed to describe the survival and senescence processes. A temperature-dependent mortality function was derived. The new mortality function reduces to the Gompertz mortality function, with the same relationship of parameters prescribed by the Strehler–Mildvan correlation, when age is smaller than a characteristic value δ, and reaches the mortality plateau when age is greater than δ. A closed-form analytical expression describing the relationship of average lifespan with temperature, together with other equations, is derived from the new mortality function. The derived equations can be used to estimate the limit of average lifespan, predict maximal longevity, calculate the temperature coefficient of lifespan, and explain the tendency of the survival curve. The predictions are consistent with the most recently reported mortality trajectories for single-year birth cohorts. This study suggests that the senescence process results from an imbalance between damaging energy and protecting energy for the critical chemical substance in the body. The rate of senescence of the organism increases as the protecting energy decreases, and the mortality plateau is reached when the protecting energy decreases to its minimal level. The decreasing rate of the protecting energy is temperature dependent. This study explores the connection between biochemical mechanism and demography.
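For context, the classical Gompertz law referenced above writes the mortality rate (hazard) as an exponential in age, which fixes the survival function shown below; the plateau-capable, temperature-dependent mortality function derived in the paper generalizes this form and is not reproduced here.

```latex
% Classical Gompertz hazard and the survival function it implies.
\mu(t) = A\, e^{\gamma t},
\qquad
S(t) = \exp\!\left[-\int_0^t \mu(\tau)\, d\tau\right]
     = \exp\!\left[-\frac{A}{\gamma}\left(e^{\gamma t} - 1\right)\right].
```

Here A is the initial mortality rate and γ the rate of increase of mortality with age; the Strehler–Mildvan correlation referred to in the abstract is the empirical negative linear relation between ln A and γ.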
20.
Constitutive equations for the lung tissue
Y Lanir 《Journal of biomechanical engineering》1983,105(4):374-380
The mechanical behavior of the lung tissue (expressed by its constitutive equations) has considerable influence on the normal and pathological function of the lung. It determines the stress field in the tissue, thus affecting the impedance and energy consumption during breathing as well as the localization of certain lung diseases. The lung tissue has a complex mechanical response, which arises from the tissue's structure: a cluster of a very large number of closely packed air sacs (alveoli) and air ducts. Each alveolus has the shape of an irregular polyhedron and is bounded by the alveolar wall membrane. In the present study, a stochastic approach to the tissue's structure is employed. The density distribution function of the membrane's orientation in space is considered the predominant structural parameter. Based on this model, the present theory relates the behavior of both the alveolar membrane and that of its liquid interface to the tissue's general constitutive properties. The resulting equations allow for anisotropic and visco-elastic effects. A protocol for material characterization based on the present model is proposed as well. The methodology of the present theory is quite general and can similarly be used with other structural models of the lung tissue (e.g., models in which the effect of the alveolar ducts is included).