Similar Literature
20 similar documents found (search time: 31 ms)
1.

Background  

The general problem of RNA secondary structure prediction under the widely used thermodynamic model is known to be NP-complete when the structures considered include arbitrary pseudoknots. For restricted classes of pseudoknots, several polynomial-time algorithms have been designed; among these, the O(n^6)-time, O(n^4)-space algorithm by Rivas and Eddy is currently the best available program.

2.

Background  

The problem of computationally predicting the secondary structure (or folding) of RNA molecules was first introduced more than thirty years ago and yet continues to be an area of active research and development. The basic RNA-folding problem of finding a maximum-cardinality, non-crossing matching of complementary nucleotides in an RNA sequence of length n has an O(n^3)-time dynamic programming solution that is widely applied. It is known that an o(n^3) worst-case time solution is possible, but the published and suggested methods are complex and have not been established to be practical. Significant practical improvements to the original dynamic programming method have been introduced, but they retain the O(n^3) worst-case time bound when n is the only problem parameter used in the bound. Surprisingly, the most widely used general technique for achieving a worst-case (and often practical) speedup of dynamic programming, the Four-Russians technique, has not previously been applied to the RNA-folding problem. This is perhaps due to technical issues in adapting the technique to RNA folding.
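The O(n^3) dynamic program mentioned above (the basic maximum non-crossing matching) can be sketched in a few lines; the Watson-Crick-only pairing rule and the absence of a minimum loop length are simplifications for illustration:

```python
def max_pairs(seq):
    # Maximum number of non-crossing complementary pairs (Nussinov-style DP).
    comp = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]          # dp[i][j]: best for seq[i..j]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]               # case: base j left unpaired
            for k in range(i, j):             # case: base j paired with base k
                if (seq[k], seq[j]) in comp:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + dp[k + 1][j - 1] + 1)
            dp[i][j] = best
    return dp[0][n - 1] if n else 0
```

The three nested loops over (i, j, k) are exactly where the O(n^3) bound comes from, and where the Four-Russians technique would look for savings.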

3.
Solving the N-Queens problem with a binary Hopfield-type network
The application of a discrete Hopfield-type neural network to solving the NP-hard optimization problem, the N-Queens Problem (NQP), is presented. The applied network is binary, and at every moment each neuron potential equals either 0 or 1. The network can be implemented in asynchronous mode as well as in synchronous mode with n parallel running processors. In both cases the convergence rate is up to 100%, and the experimental estimate of the average computational complexity is polynomial. Based on the computer simulation results and the theoretical analysis, the proper network parameters are established, and the behaviour of the network is explained.
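As an illustration of the binary-network formulation (not the paper's exact energy function or update rule), one can score a 0/1 neuron grid by its constraint violations and run a greedy asynchronous sweep that keeps only energy-lowering flips:

```python
def conflicts(v):
    # Energy-like score: wrong queen count plus number of attacking pairs.
    n = len(v)
    queens = [(i, j) for i in range(n) for j in range(n) if v[i][j]]
    e = abs(len(queens) - n)
    for a in range(len(queens)):
        for b in range(a + 1, len(queens)):
            (r1, c1), (r2, c2) = queens[a], queens[b]
            if r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2):
                e += 1
    return e

def async_sweep(v):
    # One asynchronous pass: flip each binary neuron, keeping the flip only
    # if it strictly lowers the conflict count (a greedy caricature of the
    # network's asynchronous mode).
    for i in range(len(v)):
        for j in range(len(v)):
            before = conflicts(v)
            v[i][j] ^= 1
            if conflicts(v) >= before:
                v[i][j] ^= 1          # revert non-improving flips
    return v
```

A valid placement has zero conflicts; the sweep can never increase the score, mirroring the energy-descent property of Hopfield dynamics.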

4.
Optimal filtering of noisy voltage signals on dendritic trees is a key problem in computational cellular neuroscience. However, the state variable in this problem, the vector of voltages at every compartment, is very high-dimensional: realistic multicompartmental models often have on the order of N = 10^4 compartments. Standard implementations of the Kalman filter require O(N^3) time and O(N^2) space, and are therefore impractical. Here we take advantage of three special features of the dendritic filtering problem to construct an efficient filter: (1) dendritic dynamics are governed by a cable equation on a tree, which may be solved using sparse matrix methods in O(N) time; and current methods for observing dendritic voltage (2) provide low-SNR observations and (3) only image a relatively small number of compartments at a time. The idea is to approximate the Kalman equations in terms of a low-rank perturbation of the steady-state (zero-SNR) solution, which may be obtained in O(N) time using methods that exploit the sparse tree structure of dendritic dynamics. The resulting methods give a very good approximation to the exact Kalman solution, but require only O(N) time and space. We illustrate the method with applications to real and simulated dendritic branching structures, and describe how to extend the techniques to incorporate spatially subsampled, temporally filtered, and nonlinearly transformed observations.
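The O(N) sparse solve the filter relies on is easiest to see for an unbranched cable, where the spatially discretized cable equation yields a tridiagonal system; a minimal sketch using the standard Thomas algorithm (an illustration of the general idea, not code from the paper):

```python
def solve_tridiagonal(sub, diag, sup, rhs):
    # Thomas algorithm: O(N) time and space for a tridiagonal system, the
    # kind produced by discretizing a cable equation on an unbranched
    # dendrite. sub[0] and sup[-1] are unused.
    n = len(diag)
    c_, d_ = [0.0] * n, [0.0] * n
    c_[0] = sup[0] / diag[0]
    d_[0] = rhs[0] / diag[0]
    for i in range(1, n):                     # forward elimination
        m = diag[i] - sub[i] * c_[i - 1]
        if i < n - 1:
            c_[i] = sup[i] / m
        d_[i] = (rhs[i] - sub[i] * d_[i - 1]) / m
    x = [0.0] * n
    x[-1] = d_[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = d_[i] - c_[i] * x[i + 1]
    return x
```

On a branching tree the matrix is no longer strictly tridiagonal, but an analogous sparse elimination ordered from the leaf compartments to the root still runs in O(N).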

5.
We describe an average-case O(n^2) algorithm to list all reversals on a signed permutation π that, when applied to π, produce a permutation that is closer to the identity. This algorithm is optimal in the sense that the time it takes to write the list is Ω(n^2) in the worst case.
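A naive way to see what such a list looks like (a hypothetical sketch: it uses the breakpoint count as a simple proxy for distance to the identity, and brute-force enumeration rather than the paper's O(n^2) average-case method over true reversal distance):

```python
def breakpoints(p):
    # Signed breakpoints: with sentinels 0 and n+1, count positions where
    # consecutive entries do not differ by exactly +1. The identity has none.
    s = [0] + list(p) + [len(p) + 1]
    return sum(1 for i in range(len(s) - 1) if s[i + 1] - s[i] != 1)

def reverse(p, i, j):
    # A reversal flips both the order and the signs of the segment p[i..j].
    return p[:i] + [-x for x in reversed(p[i:j + 1])] + p[j + 1:]

def improving_reversals(p):
    # Brute force: try all O(n^2) reversals, keep those reducing breakpoints.
    base = breakpoints(p)
    return [(i, j) for i in range(len(p)) for j in range(i, len(p))
            if breakpoints(reverse(p, i, j)) < base]
```

Since there are Θ(n^2) candidate reversals, merely writing the answer can take Ω(n^2), which is the sense in which the paper's algorithm is optimal.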

6.
Abstract

This paper describes the implementation of a method for computing the Coulombic interaction in a periodic system. If the basic cell contains n charges, the CPU time required to compute all forces and the total energy is O(n·log n), in contrast to Ewald's method with O(n^(3/2)).
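For scale, the quantity being approximated is the periodic Coulomb energy; below is a minimal O(n^2) direct-sum baseline under the minimum-image convention (an illustrative simplification in Gaussian units; Ewald summation and the O(n·log n) method handle the full sum over periodic images):

```python
import math

def coulomb_energy_mi(charges, positions, box):
    # O(n^2) pairwise Coulomb energy in a cubic periodic box, keeping only
    # the nearest periodic image of each pair (minimum-image convention).
    n = len(charges)
    e = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for a in range(3):
                d = positions[i][a] - positions[j][a]
                d -= box * round(d / box)     # wrap to nearest image
                d2 += d * d
            e += charges[i] * charges[j] / math.sqrt(d2)
    return e
```

Hierarchical (tree) or FFT-based methods reduce the pair sum to O(n·log n) by grouping distant charges, at the cost of a controlled approximation error.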

7.
This study presents a real-time, biologically plausible neural network approach to purposive behavior and cognitive mapping. The system is composed of (a) an action system, consisting of a goal-seeking neural mechanism controlled by a motivational system; and (b) a cognitive system, involving a neural cognitive map. The goal-seeking mechanism displays exploratory behavior until either (a) the goal is found or (b) an adequate prediction of the goal is generated. The cognitive map built by the network is a topological map, i.e., it represents only the adjacency between places, not distances or directions. The network has recurrent and non-recurrent properties that allow the cognitive map to be read without modifying it. Two types of predictions are introduced: fast-time and real-time predictions. Fast-time predictions are produced in advance of what occurs in real time, when the information stored in the cognitive map is used to predict the remote future. Real-time predictions are generated simultaneously with the occurrence of environmental events, when the information stored in the cognitive map is being updated. Computer simulations show that the network successfully describes latent learning and detour behavior in rats. In addition, simulations demonstrate that the network can be applied to problem-solving paradigms such as the Tower of Hanoi puzzle.
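Because the cognitive map is purely topological, a fast-time prediction amounts to searching the adjacency structure for a place sequence that reaches the goal; a minimal sketch (breadth-first search over a hypothetical adjacency dictionary, not the paper's neural implementation):

```python
from collections import deque

def fast_time_path(adjacency, start, goal):
    # Plan over a topological map: only adjacency between places is stored,
    # so reaching the goal reduces to graph search (here, BFS).
    prev = {start: None}
    queue = deque([start])
    while queue:
        place = queue.popleft()
        if place == goal:
            path = []
            while place is not None:          # walk predecessors back to start
                path.append(place)
                place = prev[place]
            return path[::-1]
        for nxt in adjacency.get(place, ()):
            if nxt not in prev:
                prev[nxt] = place
                queue.append(nxt)
    return None                               # goal unreachable: keep exploring
```

Detour behavior falls out naturally: if an edge is removed from the adjacency map, the search simply returns an alternative place sequence.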

8.

Background  

Assembling genomic sequences from a set of overlapping reads is one of the most fundamental problems in computational biology. Algorithms addressing the assembly problem fall into two broad categories based on the data structures they employ: the first class uses an overlap/string graph and the second uses a de Bruijn graph. With the recent advances in short-read sequencing technology, however, de Bruijn graph based algorithms seem to play a vital role in practice, and efficient algorithms for building these massive de Bruijn graphs are essential in large sequencing projects based on short reads. In an earlier work, an O(n/p)-time parallel algorithm was given for this problem, where n is the size of the input and p is the number of processors. That algorithm enumerates all possible bi-directed edges which can overlap with a node and ends up generating Θ(nΣ) messages (Σ being the size of the alphabet).
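The underlying data structure is easy to sketch sequentially (the paper's contribution is the parallel construction; this sketch ignores bi-directed edges and reverse complements for brevity):

```python
from collections import defaultdict

def de_bruijn(reads, k):
    # Build a de Bruijn graph: nodes are (k-1)-mers, and every k-mer that
    # occurs in a read contributes an edge from its prefix to its suffix.
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])
    return dict(graph)
```

In a distributed setting each k-mer can be hashed to a processor, which is where the message complexity Θ(nΣ) of the earlier algorithm comes into play.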

9.
The accurate reconstruction of phylogenies from short molecular sequences is an important problem in computational biology. Recent work has highlighted deep connections between sequence-length requirements for high-probability phylogeny reconstruction and the related problem of the estimation of ancestral sequences. In Daskalakis et al. (Probab. Theory Relat. Fields, 2010), building on the work of Mossel (Trans. Am. Math. Soc. 356(6):2379–2404, 2004), a tight sequence-length requirement was obtained for the simple CFN model of substitution, that is, the case of a two-state symmetric rate matrix Q. In particular, the required sequence length for high-probability reconstruction was shown to undergo a sharp transition (from O(log n) to poly(n), where n is the number of leaves) at the "critical" branch length g_ML(Q) (if it exists) of the ancestral reconstruction problem, defined roughly as follows: below g_ML(Q) the sequence at the root can be accurately estimated from sequences at the leaves on deep trees, whereas above g_ML(Q) information decays exponentially quickly down the tree.

10.
Three kinds of trisaccharides were prepared by digesting fucoidan from the brown alga Kjellmaniella crassifolia with the extracellular enzymes of the marine bacterium Fucobacter marina. Their structures were determined as Δ4,5GlcpUA1-2(L-Fucp(3-O-sulfate)α1-3)D-Manp, Δ4,5GlcpUA1-2(L-Fucp(3-O-sulfate)α1-3)D-Manp(6-O-sulfate), and Δ4,5GlcpUA1-2(L-Fucp(2,4-O-disulfate)α1-3)D-Manp(6-O-sulfate), which indicated the existence of a novel polysaccharide in the fucoidan and a novel glycosidase among the extracellular enzymes. In order to determine the complete structure of the polysaccharide and the reaction mechanism of the glycosidase, the fucoidan was partially hydrolyzed to obtain glucuronomannan, the putative backbone of the polysaccharide, and its sugar sequence was determined as (-4-D-GlcpUAβ1-2D-Manpα1-)n, which disclosed that the main structure of the polysaccharide is (-4-D-GlcpUAβ1-2(L-Fucp(3-O-sulfate)α1-3)D-Manpα1-)n. Consequently, the glycosidase was deduced to be an endo-α-D-mannosidase that eliminatively cleaves the α-D-mannosyl linkage between D-Manp and D-GlcpUA residues in the polysaccharide and produces the above trisaccharides. The novel polysaccharide and glycosidase were tentatively named sulfated fucoglucuronomannan (SFGM) and SFGM lyase, respectively.

11.
Real-time systems increasingly appear in complex and dynamic environments such as cruise controllers, life-support systems and nuclear reactors. These systems contain components that sense, control and stabilize the environment towards achieving the mission or target. These consociate components synchronize, compute and control themselves locally, or rely on a centralized component, to achieve their mission. Distributed computing techniques improve the overall performance and reliability of large real-time systems with spread components. Partially Clairvoyant scheduling was introduced in Saksena, M., PhD thesis (1994) to determine the schedulability of hard real-time jobs with variable execution times. The problem of deciding the Partially Clairvoyant schedulability of a constrained set of jobs was well studied in Gerber, R., et al., IEEE Trans. Comput. 44(3), 471–479 (1995), Choi, S. and Agrawala, A.K., Real-Time Syst. 19(1), 5–40 (2000), and Subramani, K., J. Math. Model. Algorithms 2(2), 97–119 (2003). These algorithms determine the schedulability of a job-set offline and produce a set of dispatch functions for each job in the job-set. The dispatch functions of a job depend on the start and execution times of the jobs sequenced before it. The dispatching problem is concerned with the online computation of the start-time interval of a job such that none of the constraints are violated. In certain situations, the dispatcher fails to dispatch a job because computing the interval within which the job must be dispatched takes too long; this phenomenon is called Loss of Dispatchability. Existing approaches to this problem have been sequential, using stored function lists, and for a job-set of size n they suffer from two major drawbacks: Ω(n) dispatching time and the Loss of Dispatchability phenomenon.
In this paper, we propose and evaluate three distributed dispatching algorithms for Partially Clairvoyant schedules. For a job-set of size n, the algorithms have dispatch times of O(1) per job. In the first algorithm, one processor executes all the jobs while the other processors compute the dispatch functions; this scenario simplifies the design and suits situations where one processor controls all other devices. In the other algorithms, all the processors execute jobs pre-assigned to them and compute the dispatch functions, which is a plausible scenario in distributed control.
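The idea of O(1)-per-job dispatching can be caricatured as precomputing, offline, one closed-form interval function per job; the constraint form below (earliest start equals the previous job's finish plus a gap, latest start equals the previous start plus a window) is purely an illustrative assumption, not the papers' constraint language:

```python
def make_dispatchers(jobs):
    # Offline phase: for each job, build a function mapping the previous
    # job's (start, execution) times to this job's feasible start interval.
    # Online, the dispatcher evaluates one closure per job in O(1).
    funcs = []
    for gap, window in jobs:
        funcs.append(lambda s_prev, e_prev, g=gap, w=window:
                     (s_prev + e_prev + g, s_prev + w))
    return funcs
```

Loss of Dispatchability corresponds to the online evaluation taking longer than the slack before the interval's lower endpoint, which is why constant-time evaluation matters.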

12.
For any essentially nonlinear system of reaction-diffusion equations of the generic form ∂c_i/∂t = D_i∇²c_i + Q_i(c, x, t), supplemented with Robin-type boundary conditions over the surface of a closed bounded three-dimensional region, it is demonstrated that all solutions for the concentration-distribution n-tuple c = (c_1(x,t), ..., c_n(x,t)) satisfy a differential variational condition. Approximate solutions to the reaction-diffusion initial-value boundary-value problem are obtainable by employing this variational condition in conjunction with a Galerkin-Ritz procedure. It is shown that the dynamical evolution from a prescribed initial concentration n-tuple to a final steady-state solution can be determined to desired accuracy by such an approximation method. The variational condition also admits a systematic Galerkin-Ritz procedure for obtaining approximate solutions to the multi-equation elliptic boundary-value problem for steady-state distributions c = c(x). Other systems of phenomenological (non-Lagrangian) field equations can be treated by Galerkin-Ritz procedures based on analogues of the differential variational condition presented here. The method is applied to derive approximate nonconstant steady-state solutions for an n-species symbiosis model.

13.
In this work, the effects of iron-ion intercalation on lead–tellurate glasses were investigated via FTIR, Raman and UV-Vis spectroscopies. This homogeneous glass system has compositions xFe2O3·(100−x)[4TeO2·PbO2], where x = 0–60 mol%. The presented observations show that the lead ions have a pronounced affinity towards [TeO3] structural units, resulting in the deformation of the Te–O–Te linkages and leading to the intercalation of [PbOn] (n = 3, 4) and [FeOn] (n = 4, 6) entities in the [TeO4] chain network. The formation of negatively charged [FeO4]1− structural units implies the attraction of Pb2+ ions in order to compensate for this electrical charge. Upon increasing the Fe2O3 content to 60 mol%, the network can accommodate an excess of oxygen through the formation of [FeO6] structural units and the conversion of [TeO4] into [TeO3] structural units. For even higher Fe2O3 contents, Raman spectra indicate a greater degree of depolymerization of the vitreous network than FTIR spectra do. The bands due to the Pb–O bond vibrations are very strongly polarized, and the [TeO4] structural units convert into [TeO3] units via an intermediate coordination stage termed "[TeO3+1]" structural units. Our UV-Vis spectroscopic data show two mechanisms: (i) the conversion of Fe3+ to Fe2+ at the same time as the oxidation of Pb2+ to Pb4+ ions for samples with low Fe2O3 contents; (ii) when the Fe2O3 content is high (x ≥ 50 mol%), the Fe2+ ions capture positive holes and are transferred to Fe3+ ions through a photochemical reaction, while the Pb2+ ions are formed by the reduction of Pb4+ ions. DFT calculations show that the addition of Fe2O3 to lead–tellurate glasses seems to break the axial Te–O bonds, and the [TeO4] structural units are gradually transformed into [TeO3+1]- and [TeO3]-type polyhedra. Analyzing these data further indicates a gradual conversion of the lead ions from a covalent to an ionic environment. There is then a charge transfer between the tri- and tetracoordinated tellurium atoms, due to the capacity of the lead–tellurate network to form the appropriate coordination environments containing structural units of opposite charge, such as the iron-containing [FeO4]1− units.

14.
The self-organizing map (SOM) is an unsupervised learning method based on neural computation that has found wide application. However, the learning process sometimes reaches one of multiple stable states in which the map is trapped in an undesirable disordered configuration containing topological defects, and these defects critically degrade the performance of the SOM. To overcome this problem, we propose introducing an asymmetric neighborhood function into the SOM algorithm. Compared with the conventional symmetric one, the asymmetric neighborhood function accelerates the ordering process even in the presence of a defect. However, this asymmetry tends to generate a distorted map, which can be suppressed by an improved version of the asymmetric neighborhood function. For the one-dimensional SOM, the number of steps required for perfect ordering is numerically shown to be reduced from O(N^3) to O(N^2). We also discuss the ordering process of a twisted state in the two-dimensional SOM, which cannot be rectified by the ordinary symmetric neighborhood function.
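A minimal sketch of a 1D SOM with an asymmetric Gaussian neighborhood (the asymmetry factor, learning-rate schedule and parameter values here are illustrative choices, not the paper's):

```python
import math
import random

def train_som_1d(data, n_units, steps, sigma=2.0, asym=0.5, seed=0):
    # 1D SOM on scalar data. The neighborhood is a Gaussian whose width on
    # the right of the winner is stretched by (1 + asym) and on the left
    # shrunk by 1/(1 + asym), a simple form of the asymmetry the paper
    # proposes to accelerate ordering.
    rng = random.Random(seed)
    w = [rng.random() for _ in range(n_units)]        # initial codebook
    for step in range(steps):
        x = rng.choice(data)
        win = min(range(n_units), key=lambda k: abs(w[k] - x))
        lr = 0.5 * (1 - step / steps)                 # decaying learning rate
        for k in range(n_units):
            s = sigma * (1 + asym) if k >= win else sigma / (1 + asym)
            h = math.exp(-((k - win) ** 2) / (2 * s * s))
            w[k] += lr * h * (x - w[k])
    return w
```

With a symmetric neighborhood (asym = 0) this reduces to the conventional SOM update; the asymmetric variant biases updates toward one side of the winner, which is what pushes topological defects out of the map.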

15.
Determining molecular structure from interatomic distances is an important and challenging problem. Given a molecule with n atoms, lower and upper bounds on interatomic distances can usually be obtained, using NMR, only for a small subset of the atom pairs. Given the bounds so obtained on the distances between some of the atom pairs, it is often useful to compute tighter bounds on all the pairwise distances; this process is referred to as bound smoothing. The initial lower and upper bounds for the pairwise distances not measured are usually assumed to be 0 and ∞. One method for bound smoothing is to use the limits imposed by the triangle inequality. The distance bounds so obtained can often be tightened further by applying the tetrangle inequality: the limits imposed on the six pairwise distances among a set of four atoms (instead of three for the triangle inequalities). The tetrangle inequality is expressed by the Cayley–Menger determinants. For every quadruple of atoms, each pass of the tetrangle-inequality bound smoothing procedure finds upper and lower limits on each of the six distances in the quadruple. Applying the tetrangle inequalities to each of the n-choose-4 quadruples requires O(n^4) time. Here, we propose a parallel algorithm for bound smoothing employing the tetrangle inequality. Each pass of our algorithm requires O(n^3 log n) time on a CREW PRAM (Concurrent Read Exclusive Write Parallel Random Access Machine) with multiple processors. An implementation of this parallel algorithm on the Intel Paragon XP/S and its performance are also discussed.
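Triangle-inequality bound smoothing, the simpler step that precedes the tetrangle pass, is exactly all-pairs shortest paths on the upper-bound matrix; a minimal O(n^3) sketch:

```python
def triangle_smooth(upper):
    # Tighten upper bounds by enforcing u[i][j] <= u[i][k] + u[k][j] for all
    # triples, which is the Floyd-Warshall all-pairs shortest-path recursion.
    # Unmeasured pairs enter as float("inf"), matching the 0/infinity
    # initialization described in the abstract.
    n = len(upper)
    u = [row[:] for row in upper]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if u[i][k] + u[k][j] < u[i][j]:
                    u[i][j] = u[i][k] + u[k][j]
    return u
```

The tetrangle pass plays the same role one order higher: instead of triples it sweeps quadruples, using Cayley–Menger determinants to bound each of the six distances.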

16.
A group of n susceptible individuals exposed to a contagious disease is considered. It is assumed that at each point in time one or more susceptible individuals can contract the disease. The progress of this simple batch epidemic is modeled by a stochastic process Xn(t), t ∈ [0, ∞), representing the number of infective individuals at time t. In this paper our analysis is restricted to simple batch epidemics with transition rates given by [α²Xn(t){n − Xn(t) + Xn(0)}]^(1/2), t ∈ [0, ∞), α ∈ (0, ∞). This class of simple batch epidemics generalizes a model used and motivated by McNeil (1972) to describe simple epidemic situations. It is shown for this class of simple batch epidemics that Xn(t), with suitable standardization, converges in distribution as n → ∞ to a normal random variable for all t ∈ (0, t0), where t0 is evaluated.

17.
A method for measuring the gas temperature in an oxygen plasma by spectroscopy of the electronic transition from the O2(b¹Σg⁺, v = 0) metastable state of molecular oxygen into the O2(X³Σg⁻, v = 0) ground state is considered in detail. The method is verified experimentally for the plasma of a dc glow discharge in pure oxygen. It is shown that the gas temperature can be determined by analyzing high-resolution spectra of the P branch of this transition, no matter whether its fine structure (the PP and PQ branches) is resolved or masked, provided that the rotational structure of the spectrum is resolved. The feasibility of the method proposed in 1999 by P. Maco and P. Veis for determining the gas temperature from the ratio between the intensity maxima of the R and P branches of the O2(b¹Σg⁺, v = 0) → O2(X³Σg⁻, v = 0) transition in a poorly resolved spectrum was studied experimentally. It is shown that, in order to use this method, it is necessary to know the spectrograph instrumental function. The effect of the spatial inhomogeneity of the temperature and concentration of O2(b¹Σg⁺) molecules on the accuracy of integral (over the plasma volume) measurements of the gas temperature is investigated using spatially resolved spectroscopy of the O2(b¹Σg⁺, v = 0) → O2(X³Σg⁻, v = 0) transition. It is shown that precise measurements of the temperature require that the optical measurement system be thoroughly adjusted so that the temperature and concentration of the emitting particles vary insignificantly over the optically selected volume. Original Russian Text © S.M. Zyryanov, D.V. Lopaev, 2007, published in Fizika Plazmy, 2007, Vol. 33, No. 6, pp. 563–574.

18.

Background  

Parsimony methods are widely used in molecular evolution to estimate the most plausible phylogeny for a set of characters. Sankoff parsimony determines the minimum number of changes required in a given phylogeny when a cost is associated with transitions between character states. Although optimizations exist to reduce the computations in the number of taxa, the original algorithm takes O(n^2) time in the number of states, making it impractical for large values of n.
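A minimal sketch of Sankoff's recursion for a single character on a binary tree (the input encoding here is a hypothetical choice; cost[s][t] is the transition cost, and the min over states at each node is where the O(n^2)-in-the-number-of-states work of the original algorithm appears):

```python
def sankoff(tree, leaf_state, cost, n_states):
    # Trees are nested pairs, e.g. (("a", "b"), "c"); leaves are names with
    # observed states in leaf_state. Returns the minimum total transition
    # cost over all assignments of states to internal nodes.
    INF = float("inf")

    def vec(node):
        if isinstance(node, str):             # leaf: 0 for the observed state
            return [0 if s == leaf_state[node] else INF
                    for s in range(n_states)]
        left, right = (vec(child) for child in node)
        # For each parent state s, each child independently picks its
        # cheapest state t: the O(n_states^2) inner minimization.
        return [sum(min(child[t] + cost[s][t] for t in range(n_states))
                    for child in (left, right))
                for s in range(n_states)]

    return min(vec(tree))
```

With unit costs this reduces to Fitch parsimony; the generality of arbitrary cost matrices is exactly what forces the quadratic dependence on the number of states.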

19.
In order to accelerate computing the convex hull on a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, which also contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in O(n) time; second, no explicit sorting of the data is required; and third, the reduced set of s points forms a simple polygonal chain and thus can be pipelined directly into an O(n)-time convex hull algorithm. This paper empirically evaluates and quantifies the speedup gained by preconditioning a set of points with a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found in experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n in the dataset, the greater the speedup factor achieved.
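One way to see why such preconditioning is safe: every hull vertex must be the lowest or highest point of its x-column, since otherwise it would lie strictly inside a vertical segment between two other points. Below is a simplified per-column min/max filter based on that observation (an illustrative stand-in, not the paper's heuristic; the hull routine sorts, and is used here only to verify that filtering preserves the hull):

```python
def column_filter(points):
    # Keep only the lowest and highest point of each x-column: a linear-time
    # reduction that provably retains all convex hull vertices.
    lo, hi = {}, {}
    for x, y in points:
        if x not in lo or y < lo[x]:
            lo[x] = y
        if x not in hi or y > hi[x]:
            hi[x] = y
    return sorted({(x, lo[x]) for x in lo} | {(x, hi[x]) for x in hi})

def convex_hull(points):
    # Andrew's monotone chain (O(n log n)); used only to check the filter.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h

    return half(pts)[:-1] + half(pts[::-1])[:-1]
```

The paper's stronger claims (no sorting at all, and an output that is already a simple polygonal chain) come from exploiting the bounded integer grid, which this sketch does not attempt.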

20.

Background  

Distance methods are among the most commonly used methods for reconstructing phylogenetic trees from sequence data. The input to a distance method is a distance matrix containing estimated pairwise distances between all pairs of taxa. Distance methods themselves are often fast; e.g., the famous and popular Neighbor Joining (NJ) algorithm reconstructs a phylogeny of n taxa in O(n^3) time. Unfortunately, the fastest practical algorithms known for computing the distance matrix, from n sequences of length l, take time proportional to l·n^2. Since the sequence length is typically much larger than the number of taxa, distance estimation is the bottleneck in phylogeny reconstruction. This bottleneck is especially apparent in the reconstruction of large phylogenies, or in applications where many trees have to be reconstructed, e.g., bootstrapping and genome-wide applications.
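The l·n^2 bottleneck step is simply a doubly nested loop over sequence pairs, each pair costing O(l); a sketch using the uncorrected p-distance (the abstract does not name a distance estimator, so this choice is illustrative):

```python
def p_distance_matrix(seqs):
    # Estimate pairwise distances as the fraction of mismatching sites
    # between each pair of aligned, equal-length sequences. With n sequences
    # of length l, this is the O(l * n^2) step that dominates NJ's O(n^3).
    n = len(seqs)
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            mism = sum(a != b for a, b in zip(seqs[i], seqs[j]))
            d[i][j] = d[j][i] = mism / len(seqs[i])
    return d
```

In practice a model-based correction (e.g., Jukes-Cantor) is applied to each entry, but that changes only the per-pair constant, not the l·n^2 shape of the computation.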


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号