Similar Documents
20 similar documents found.
1.
Using a simultaneous dual-mapping technique, we studied how excitation arising at the dominant pacemaker site of the sinoatrial node (SN) is conducted to the atrium. In 14 anesthetized dogs, an electrode plate carrying 32 electrodes was sutured over the sulcus terminalis of the heart to map the dominant pacemaker site (point P), the site of the earliest initial negative potential (point N), the atrial activation sequence, and the earliest activation site (point O). After shifting point P with isoproterenol, propranolol, or vagal stimulation, we examined the relationship between the position of P and the locations of O and N relative to P (P-O, P-N). In a total of 60 trials: (1) P-O and P-N mostly did not coincide (46 and 40 trials, respectively); (2) as P shifted from the head toward the tail of the SN, the proportion of O (N) points appearing on the head side of P decreased while the proportion on the tail side increased; (3) two O points were observed in 23 trials and two N points in 11 trials, whereas there was always only a single P point. We propose a model of sinoatrial conduction in which SN excitation first spreads preferentially toward the head and tail ends within the node and is then conducted to the atrial muscle. The multicentric origin of atrial activation, preferential rapid conduction along the sulcus terminalis, and Boineau's "atrial pacemaker complex" hypothesis are also discussed.

2.
An algorithm for approximate tandem repeats.
A perfect single tandem repeat is defined as a nonempty string that can be divided into two identical substrings, e.g., abcabc. An approximate single tandem repeat is one in which the substrings are similar, but not identical, e.g., abcdaacd. In this paper we consider two criteria of similarity: the Hamming distance (k mismatches) and the edit distance (k differences). For a string S of length n and an integer k, our algorithm reports all locally optimal approximate repeats, r = uu', for which the Hamming distance of u and u' is at most k, in O(nk log(n/k)) time, or all those for which the edit distance of u and u' is at most k, in O(nk log k log(n/k)) time. This paper concentrates on a more general type of repeat called multiple tandem repeats. A multiple tandem repeat in a sequence S is a (periodic) substring r of S of the form r = u^a u', where u is a prefix of r and u' is a prefix of u. An approximate multiple tandem repeat is a multiple repeat with errors; the repeated subsequences are similar but not identical. We precisely define approximate multiple repeats, and present an algorithm that finds all repeats that conform to our definition. The time complexity of the algorithm, when searching for repeats with up to k errors in a string S of length n, is O(nka log(n/k)), where a is the maximum number of periods in any reported repeat. We present some experimental results concerning the performance and sensitivity of our algorithm. The problem of finding repeats within a string is a computational problem with important applications in the field of molecular biology. Both exact and inexact repeats occur frequently in the genome, and certain repeats occurring in the genome are known to be related to human diseases.
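To make the Hamming-distance variant of the definition concrete, here is a small brute-force sketch (not the paper's O(nk log(n/k)) algorithm) that reports single tandem repeats uu' whose halves differ in at most k positions; the function names are illustrative.

```python
def hamming(a, b):
    """Number of mismatching positions between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def approximate_single_tandem_repeats(s, k):
    """Brute-force scan: yield (start, period, mismatches) for every substring
    u.u' of s whose two halves differ in at most k positions.
    Runs in O(n^3) time -- an illustration of the definition only."""
    n = len(s)
    for period in range(1, n // 2 + 1):
        for start in range(0, n - 2 * period + 1):
            u = s[start:start + period]
            v = s[start + period:start + 2 * period]
            d = hamming(u, v)
            if d <= k:
                yield start, period, d

# Example from the abstract: "abcdaacd" is an approximate repeat for k = 2.
print(list(approximate_single_tandem_repeats("abcdaacd", 2)))
```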

3.
A gene team is a set of genes that appear in two or more species, possibly in a different order, but with the distance between adjacent genes of the team on each chromosome never exceeding a certain threshold δ. A gene team tree is a succinct way to represent all gene teams for every possible value of δ. In this paper, improved algorithms are presented for the problem of finding the gene teams of two chromosomes and the problem of constructing a gene team tree of two chromosomes. For the problem of finding gene teams, Beal et al. had an O(n lg² n)-time algorithm. Our improved algorithm requires O(n lg t) time, where t ≤ n is the number of gene teams. For the problem of constructing a gene team tree, Zhang and Leong had an O(n lg² n)-time algorithm. Our improved algorithm requires O(n lg n lg lg n) time. Similar to Beal et al.'s gene team algorithm and Zhang and Leong's gene team tree algorithm, our improved algorithms can be extended to k chromosomes with the time complexities increased only by a factor of k.
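The δ-team condition itself is easy to state in code. The sketch below is only a verifier for the definition on two chromosomes (each modeled, for illustration, as a list of (gene, position) pairs), not the O(n lg t) algorithm of the paper; all names are illustrative.

```python
def is_gene_team(genes, chrom1, chrom2, delta):
    """Check a candidate gene set against the delta-team condition:
    on each chromosome, every gene of the set must occur, and consecutive
    occurrences (ordered by position) must be at most delta apart."""
    for chrom in (chrom1, chrom2):
        present = {gene for gene, _ in chrom if gene in genes}
        if present != set(genes):
            return False
        positions = sorted(pos for gene, pos in chrom if gene in genes)
        if any(b - a > delta for a, b in zip(positions, positions[1:])):
            return False
    return True

chrom1 = [("a", 1), ("b", 2), ("c", 5), ("d", 9)]
chrom2 = [("c", 1), ("a", 2), ("b", 3), ("d", 20)]
print(is_gene_team({"a", "b", "c"}, chrom1, chrom2, delta=3))       # True
print(is_gene_team({"a", "b", "c", "d"}, chrom1, chrom2, delta=3))  # False
```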

4.
Fast, optimal alignment of three sequences using linear gap costs
Alignment algorithms can be used to infer a relationship between sequences when the true relationship is unknown. Simple alignment algorithms use a cost function that gives a fixed cost to each possible point mutation (mismatch, deletion, insertion). These algorithms tend to find optimal alignments that have many small gaps. It is more biologically plausible to have fewer, longer gaps rather than many small gaps in an alignment. To address this issue, linear gap cost algorithms are in common use for aligning biological sequence data. More reliable inferences are obtained by aligning more than two sequences at a time. The obvious dynamic programming algorithm for optimally aligning k sequences of length n runs in O(n^k) time. This is impractical if k ≥ 3 and n is of any reasonable length. Thus there are many heuristics for aligning k sequences; however, they are not guaranteed to find an optimal alignment. In this paper, we present a new algorithm guaranteed to find the optimal alignment for three sequences using linear gap costs. This gives the same results as the dynamic programming algorithm for three sequences, but typically does so much more quickly. It is particularly fast when the (three-way) edit distance is small. Our algorithm uses a speed-up technique based on Ukkonen's greedy algorithm (Ukkonen, 1983), which he presented for two sequences and simple costs.
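To illustrate what a linear gap cost g(len) = open + extend·len means in practice, here is a standard two-sequence Gotoh-style sketch; it does not implement the paper's three-sequence algorithm or the Ukkonen-style speed-up, and the cost values are arbitrary.

```python
def affine_gap_alignment_cost(a, b, mismatch=1, gap_open=2, gap_extend=1):
    """Minimum alignment cost under a linear gap penalty
    g(len) = gap_open + gap_extend * len, via Gotoh's O(mn) recurrences."""
    INF = float("inf")
    m, n = len(a), len(b)
    # M: best cost ending in a match/mismatch; X: gap in b; Y: gap in a.
    M = [[INF] * (n + 1) for _ in range(m + 1)]
    X = [[INF] * (n + 1) for _ in range(m + 1)]
    Y = [[INF] * (n + 1) for _ in range(m + 1)]
    M[0][0] = 0
    for i in range(1, m + 1):
        X[i][0] = gap_open + gap_extend * i
    for j in range(1, n + 1):
        Y[0][j] = gap_open + gap_extend * j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if a[i - 1] == b[j - 1] else mismatch
            M[i][j] = sub + min(M[i - 1][j - 1], X[i - 1][j - 1], Y[i - 1][j - 1])
            X[i][j] = min(M[i - 1][j] + gap_open + gap_extend,
                          X[i - 1][j] + gap_extend)
            Y[i][j] = min(M[i][j - 1] + gap_open + gap_extend,
                          Y[i][j - 1] + gap_extend)
    return min(M[m][n], X[m][n], Y[m][n])

print(affine_gap_alignment_cost("ACGTACGT", "ACGACGT"))
```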

5.
We present two parameterized algorithms for the closest string problem. The first runs in O(nL + nd · 17.97^d) time for DNA strings and in O(nL + nd · 61.86^d) time for protein strings, where n is the number of input strings, L is the length of each input string, and d is the given upper bound on the number of mismatches between the center string and each input string. The second runs in O(nL + nd · 13.92^d) time for DNA strings and in O(nL + nd · 47.21^d) time for protein strings. We then extend the first algorithm to a new parameterized algorithm for the closest substring problem that runs in O((n − 1)m²(L + d · 17.97^d · m^⌈log₂(d+1)⌉)) time for DNA strings and in O((n − 1)m²(L + d · 61.86^d · m^⌈log₂(d+1)⌉)) time for protein strings, where n is the number of input strings, L is the length of the center substring, L − 1 + m is the maximum length of a single input string, and d is the given upper bound on the number of mismatches between the center substring and at least one substring of each input string. All the algorithms significantly improve the previous bests. To verify experimentally the theoretical improvements in the time complexity, we implement our algorithm in C and apply the resulting program to the planted (L, d)-motif problem proposed by Pevzner and Sze in 2000. We compare our program with the previously best exact program for the problem, namely PMSPrune (designed by Davila et al. in 2007). Our experimental data show that our program runs faster for practical cases and also for several challenging cases. Our algorithm uses less memory too.
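The problem statement itself is compact: find a center string within Hamming distance d of every input string. The sketch below only illustrates the definition with a verifier and an exhaustive search that is exponential in L; it is not the paper's parameterized algorithm, and the function names are illustrative.

```python
from itertools import product

def hamming(a, b):
    """Number of mismatching positions between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def is_center(candidate, strings, d):
    """True if `candidate` is within Hamming distance d of every input string."""
    return all(hamming(candidate, s) <= d for s in strings)

def closest_string_brute_force(strings, d, alphabet="ACGT"):
    """Exhaustive search over all length-L strings -- exponential in L and
    shown only to clarify the problem; the paper's algorithms are
    parameterized in d and far faster."""
    L = len(strings[0])
    for letters in product(alphabet, repeat=L):
        candidate = "".join(letters)
        if is_center(candidate, strings, d):
            return candidate
    return None

print(closest_string_brute_force(["ACGT", "AGGT", "ACGA"], d=1))
```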

6.

Background

Ordinary differential equations (ODEs) are often used to understand biological processes. Since ODE-based models usually contain many unknown parameters, parameter estimation is an important step toward a deeper understanding of the process. Parameter estimation is often formulated as a least squares optimization problem in which all experimental data points are treated as equally important. However, this equal-weight formulation ignores the possibility that different data points have different relative importance, and it may lead to misleading parameter estimates. We therefore propose to introduce weights that account for the relative importance of different data points when formulating the least squares optimization problem. Each weight is defined by the uncertainty of one data point given the other data points. If one data point can be accurately inferred from the other data, its uncertainty is low and its importance is low. In contrast, if one data point can hardly be inferred from the other data, its uncertainty is high and it carries more information for estimating the parameters.

Results

A G1/S transition model with 6 parameters, the same model with 12 parameters, and a MAPK module with 14 parameters were used to test the weighted formulation. In each case, evenly spaced experimental data points were used. The weights calculated in these models showed similar patterns: high weights for data points in dynamic regions and low weights for data points in flat regions. We developed a sampling algorithm to evaluate the weighted formulation, and demonstrated that the weighted formulation reduced the redundancy in the data. For the G1/S transition model with 12 parameters, we also examined unevenly spaced experimental data points, strategically sampled so that there were more measurement points where the weights were relatively high and fewer where the weights were relatively low. This analysis showed that the proposed weights can be used for designing measurement time points.

Conclusions

Giving a different weight to each data point according to its relative importance compared to other data points is an effective method for improving robustness of parameter estimation by reducing the redundancy in the experimental data.
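A minimal sketch of the weighted least-squares formulation is shown below. The toy decay ODE and the gradient-based weights are placeholders standing in for the paper's G1/S and MAPK models and its uncertainty-derived weights; only the structure of the objective (each residual scaled by the square root of its weight) reflects the abstract.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def model(t, y, k):
    # Toy one-parameter decay ODE standing in for the G1/S or MAPK systems.
    return -k * y

def weighted_residuals(params, t_obs, y_obs, weights):
    # Weighted residuals: each data point contributes sqrt(w_i) * (model - data).
    k, y0 = params
    sol = solve_ivp(model, (t_obs[0], t_obs[-1]), [y0], t_eval=t_obs, args=(k,))
    return np.sqrt(weights) * (sol.y[0] - y_obs)

rng = np.random.default_rng(0)
t_obs = np.linspace(0.0, 5.0, 11)
y_obs = 2.0 * np.exp(-0.8 * t_obs) + 0.02 * rng.standard_normal(t_obs.size)

# Crude stand-in for the uncertainty-based weights of the abstract:
# emphasize points in dynamic regions (large local slope), not flat regions.
weights = np.abs(np.gradient(y_obs, t_obs))
weights /= weights.sum()

fit = least_squares(weighted_residuals, x0=[0.5, 1.0], args=(t_obs, y_obs, weights))
print("estimated (k, y0):", fit.x)
```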

7.
The hexagonal (H) and the cubic (Q223) phases of the systems dodecyltrimethylammonium chloride-water and palmitoyllysophosphatidylcholine-water have been studied by X-ray scattering techniques. The signs of the reflections of phase H were determined by a systematic study as a function of the water content; those of phase Q223 were assessed using a pattern recognition approach based upon the axiom that the histograms of the electron density maps of phases Q223 and H, extrapolated to the same concentration and properly normalized in scale and shape, are very similar to each other. In the case of phase Q223, all the sign combinations (the phi-sets) compatible with the observed reflections were generated, and each of the corresponding histograms was compared with the histogram of the map of phase H. One novelty of this work is the use of a highly sensitive criterion to estimate the similarity of the histograms, namely the distance in the six-dimensional space of the moments ⟨(Δρ)^n⟩^(1/n), for 3 ≤ n ≤ 8. In the two systems, the use of this criterion has led to the unambiguous choice of one electron density map. The maps show that the structure of phase Q223 consists of disjointed micelles (of type I), belonging to two different classes: those of one class are quasi-spherical in shape and are centered at the points a, while those of the other class are disc-shaped and are centered at the points c. The results of this work rule out a structure formed by a cage-like distribution of rods enclosing a set of quasi-spherical micelles, and are consistent with previous proposals. This is the second example, after that of phase Q227, of a micellar cubic phase in lipid-containing systems; all the known examples of phase Q223 are of type I, those of phase Q227 of type II.
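The moment-distance criterion can be written in a few lines of numpy. This is only a schematic illustration of the formula ⟨(Δρ)^n⟩^(1/n) for n = 3..8 applied to arbitrary density arrays; it does not reproduce the authors' normalization or phasing procedure, and all names are illustrative.

```python
import numpy as np

def moment_vector(density_map):
    """Moments <(delta rho)^n>^(1/n) for n = 3..8, where delta rho is the
    deviation of the electron density from its mean value."""
    delta = np.asarray(density_map, dtype=float).ravel()
    delta = delta - delta.mean()
    moments = []
    for n in range(3, 9):
        m = np.mean(delta ** n)
        # Real n-th root, preserving the sign of odd-order moments.
        moments.append(np.sign(m) * np.abs(m) ** (1.0 / n))
    return np.array(moments)

def moment_distance(map_a, map_b):
    """Euclidean distance between two six-dimensional moment vectors --
    the histogram-similarity criterion described in the abstract."""
    return float(np.linalg.norm(moment_vector(map_a) - moment_vector(map_b)))

rng = np.random.default_rng(1)
rho = rng.random((16, 16, 16))
print(moment_distance(rho, 1.05 * rho))
```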

8.
Contact maps are a model to capture the core information in the structure of biological molecules, e.g., proteins. A contact map consists of an ordered set S of elements (representing a protein's sequence of amino acids), and a set A of element pairs of S, called arcs (representing amino acids which are closely neighbored in the structure). Given two contact maps (S, A) and (Sp, Ap) with |A| ≥ |Ap|, the contact map pattern matching (CMPM) problem asks whether the "pattern" (Sp, Ap) "occurs" in (S, A), i.e., informally stated, whether there is a subset of |Ap| arcs in A whose arc structure coincides with Ap. CMPM captures the biological question of finding structural motifs in protein structures. In general, CMPM is NP-hard. In this paper, we show that CMPM is solvable in O(|A|^6 |Ap|) time when the pattern is {<, ≬}-structured, i.e., when every two arcs in the pattern are disjoint or crossing. Our algorithm extends to other closely related models. In particular, it answers an open question raised by Vialette that, rephrased in terms of contact maps, asked whether CMPM for {<, ≬}-structured patterns is NP-hard or solvable in polynomial time. Our result stands in sharp contrast to the NP-hardness of closely related problems. We provide experimental results which show that contact maps derived from real protein structures can be processed efficiently.
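The structural restriction is that no two pattern arcs nest. A small predicate makes the three possible relations between two arcs (preceding, crossing, nesting) explicit; this is only an illustration of the pattern class, not the matching algorithm, and all names are illustrative.

```python
from itertools import combinations

def arc_relation(arc1, arc2):
    """Classify two arcs (pairs of positions i < j) over an ordered sequence
    as 'preceding' (disjoint), 'crossing', or 'nesting'.
    Assumes the four endpoints are distinct."""
    (a, b), (c, d) = sorted([tuple(sorted(arc1)), tuple(sorted(arc2))])
    if b < c:
        return "preceding"
    if a < c < b < d:
        return "crossing"
    return "nesting"

def is_lt_cross_structured(arcs):
    """True if every pair of arcs is disjoint or crossing (no nesting),
    i.e. the pattern belongs to the class handled by the algorithm above."""
    return all(arc_relation(x, y) != "nesting" for x, y in combinations(arcs, 2))

print(is_lt_cross_structured([(1, 4), (2, 6), (5, 8)]))  # True
print(is_lt_cross_structured([(1, 8), (2, 6)]))          # False (nested arcs)
```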

9.
The Minimum Evolution (ME) approach to phylogeny estimation has been shown to be statistically consistent when it is used in conjunction with ordinary least-squares (OLS) fitting of a metric to a tree structure. The traditional approach to using ME has been to start with the Neighbor Joining (NJ) topology for a given matrix and then do a topological search from that starting point. The first stage requires O(n³) time, where n is the number of taxa, while the current implementations of the second are in O(pn³) or more, where p is the number of swaps performed by the program. In this paper, we examine a greedy approach to minimum evolution which produces a starting topology in O(n²) time. Moreover, we provide an algorithm that searches for the best topology using nearest neighbor interchanges (NNIs), where the cost of doing p NNIs is O(n² + pn), i.e., O(n²) in practice because p is always much smaller than n. The Greedy Minimum Evolution (GME) algorithm, when used in combination with NNIs, produces trees which are fairly close to NJ trees in terms of topological accuracy. We also examine ME under a balanced weighting scheme, where sibling subtrees have equal weight, as opposed to the standard "unweighted" OLS, where all taxa have the same weight so that the weight of a subtree is equal to the number of its taxa. The balanced minimum evolution scheme (BME) runs slower than the OLS version, requiring O(n² × diam(T)) operations to build the starting tree and O(pn × diam(T)) to perform the NNIs, where diam(T) is the topological diameter of the output tree. In the usual Yule-Harding distribution on phylogenetic trees, the diameter expectation is in log(n), so our algorithms are in practice faster than NJ. Moreover, this BME scheme yields a very significant improvement over NJ and other distance-based algorithms, especially with large trees, in terms of topological accuracy.

10.
MOTIVATION: The problem of reconstructing full sibling groups from DNA marker data remains a significant challenge for computational biology. A recently published heuristic algorithm based on Mendelian exclusion rules and the Simpson index was successfully applied to the full sibship reconstruction (FSR) problem. However, the so-called SIMPSON algorithm has an unknown complexity measure, questioning its applicability range. RESULTS: We present a modified version of the SIMPSON (MS) algorithm that behaves as O(n³) and achieves the same or better accuracy when compared with the original algorithm. Performance of the MS algorithm was tested on a variety of simulated diploid population samples to verify its complexity measure and the significant improvement in efficiency (e.g. 100 times faster than SIMPSON in some cases). It has been shown that, in theory, the SIMPSON algorithm runs in non-polynomial time, significantly limiting its usefulness. It has also been verified via simulation experiments that SIMPSON could run in O(n^a), where a > 3. AVAILABILITY: Computer code written in Java is available upon request from the first author. CONTACT: Dmitry.Konovalov@jcu.edu.au.

11.
Identifying conserved gene clusters is an important step toward understanding the evolution of genomes and predicting the functions of genes. A famous model to capture the essential biological features of a conserved gene cluster is called the gene-team model. The problem of finding the gene teams of two general sequences is the focus of this paper. For this problem, He and Goldwasser had an efficient algorithm that requires O(mn) time using O(m + n) working space, where m and n are, respectively, the numbers of genes in the two given sequences. In this paper, a new efficient algorithm is presented. Assume m ≤ n. Let C = Σ_{α∈Σ} o1(α)·o2(α), where Σ is the set of distinct genes, and o1(α) and o2(α) are, respectively, the numbers of copies of α in the two given sequences. Our new algorithm requires O(min{C lg n, mn}) time using O(m + n) working space. As compared with He and Goldwasser's algorithm, our new algorithm is more practical, as C is likely to be much smaller than mn in practice. In addition, our new algorithm is output sensitive. Its running time is O(lg n) times the size of the output. Moreover, our new algorithm can be efficiently extended to find the gene teams of k general sequences in O(kC lg(n1·n2···nk)) time, where ni is the number of genes in the ith input sequence.
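The parameter C that governs the running time is simply the sum, over distinct genes, of the product of their copy numbers in the two sequences. A two-function sketch with collections.Counter shows the computation and why C is usually far smaller than mn when most genes occur only a few times; names are illustrative.

```python
from collections import Counter

def c_value(seq1, seq2):
    """C = sum over distinct genes alpha of o1(alpha) * o2(alpha),
    the output-sensitivity parameter quoted in the abstract."""
    o1, o2 = Counter(seq1), Counter(seq2)
    return sum(o1[g] * o2[g] for g in o1.keys() & o2.keys())

s1 = ["a", "b", "c", "a", "d"]
s2 = ["b", "a", "e", "c"]
print(c_value(s1, s2), "vs m*n =", len(s1) * len(s2))  # 4 vs 20
```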

12.
MOTIVATION: Dynamic programming is the core algorithm of sequence comparison, alignment and linear hidden Markov model (HMM) training. For a pair of sequence lengths m and n, the problem can be solved readily in O(mn) time and O(mn) space. The checkpoint algorithm introduced by Grice et al. (CABIOS, 13, 45-53, 1997) runs in O(Lmn) time and O(Lm·n^(1/L)) space, where L is a positive integer determined by m, n, and the amount of available workspace. The algorithm is appropriate for many string comparison problems, including all-paths and single-best-path hidden Markov model training, and is readily parallelizable. The checkpoint algorithm has a diagonal version that can solve the single-best-path alignment problem in O(mn) time and O(m + n) space. RESULTS: In this work, we improve performance by analyzing optimal checkpoint placement. The improved row checkpoint algorithm performs up to one half the computation of the original algorithm. The improved diagonal checkpoint algorithm performs up to 35% fewer computational steps than the original. We modified the SAM hidden Markov modeling package to use the improved row checkpoint algorithm. For a fixed sequence length, the new version is up to 33% faster for all-paths and 56% faster for single-best-path HMM training, depending on sequence length and allocated memory. Over a typical set of protein sequence lengths, the improvement is approximately 10%.
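The row-checkpoint idea trades recomputation for space: keep only every L-th DP row during the forward pass, then rebuild any intermediate row from the checkpoint above it when it is needed (e.g. for traceback). The sketch below applies the idea to plain edit distance as an illustration of the space/time trade-off; it is not the SAM implementation, and the names are illustrative.

```python
def edit_rows_with_checkpoints(a, b, L):
    """Forward pass of the edit-distance DP over a (rows) and b (columns),
    storing only every L-th row.  Space drops from O(mn) to O((m/L) * n)."""
    n = len(b)
    row = list(range(n + 1))
    checkpoints = {0: row[:]}
    for i in range(1, len(a) + 1):
        new = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            new[j] = min(row[j] + 1, new[j - 1] + 1, row[j - 1] + cost)
        row = new
        if i % L == 0:
            checkpoints[i] = row[:]
    return row[n], checkpoints

def rebuild_row(a, b, i, checkpoints, L):
    """Recompute DP row i starting from the nearest stored checkpoint above it."""
    start = (i // L) * L
    row = checkpoints[start][:]
    for r in range(start + 1, i + 1):
        new = [r] + [0] * len(b)
        for j in range(1, len(b) + 1):
            cost = 0 if a[r - 1] == b[j - 1] else 1
            new[j] = min(row[j] + 1, new[j - 1] + 1, row[j - 1] + cost)
        row = new
    return row

dist, cps = edit_rows_with_checkpoints("kitten", "sitting", L=3)
print(dist)                                      # 3
print(rebuild_row("kitten", "sitting", 4, cps, L=3))
```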

13.
An algorithm to enumerate sorting reversals for signed permutations.
The rearrangement distance between single-chromosome genomes can be estimated as the minimum number of inversions required to transform the gene ordering observed in one into that observed in the other. This measure, known as "inversion distance," can be computed as the reversal distance between signed permutations. During the past decade, much progress has been made both on the problem of computing reversal distance and on the related problem of finding a minimum-length sequence of reversals, which is known as "sorting by reversals." For most problem instances, however, many minimum-length sequences of reversals exist, and in the absence of auxiliary information, none is of greater value than the others. The problem of finding all minimum-length sequences of reversals is thus a natural generalization of sorting by reversals, yet it has received little attention. This problem reduces easily to the problem of finding all "sorting reversals" of one permutation with respect to another - that is, all reversals ρ such that, if ρ is applied to one permutation, then the reversal distance of that permutation from the other is decreased. In this paper, an efficient algorithm is derived to solve the problem of finding all sorting reversals, and experimental results are presented indicating that, while the new algorithm does not represent a significant improvement in asymptotic terms (it takes O(n³) time, for permutations of size n; the problem can now be solved by brute force in Θ(n³) time), it performs dramatically better in practice than the best known alternative. An implementation of the algorithm is available at www.cse.ucsc.edu/~acs.

14.
In conservation biology it is a central problem to measure, predict, and preserve biodiversity as species face extinction. In 1992 Faith proposed measuring the diversity of a collection of species in terms of their relationships on a phylogenetic tree, and to use this information to identify collections of species with high diversity. Here we are interested in some variants of the resulting optimization problem that arise when considering species whose evolution is better represented by a network rather than a tree. More specifically, we consider the problem of computing phylogenetic diversity relative to a split system on a collection of species of size n. We show that for general split systems this problem is NP-hard. In addition we provide some efficient algorithms for some special classes of split systems, in particular presenting an optimal O(n) time algorithm for phylogenetic trees and an O(n log n + nk) time algorithm for choosing an optimal subset of size k relative to a circular split system.
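Faith's phylogenetic diversity (PD) of a taxon set is the total branch length of the subtree connecting those taxa. The minimal sketch below evaluates PD on a rooted tree stored as parent pointers with branch lengths (using the rooted convention that paths run up to the root); it only scores a given subset, whereas the abstract's O(n) result additionally chooses an optimal subset. All names are illustrative.

```python
def phylogenetic_diversity(parent, length, taxa):
    """Faith's PD of a set of taxa on a rooted tree.
    parent[v] is the parent of node v (the root maps to None) and
    length[v] is the length of the branch above v.  PD is the total length
    of all branches on the union of root paths of the chosen taxa."""
    covered = set()
    for v in taxa:
        while v is not None and v not in covered:
            covered.add(v)
            v = parent[v]
    return sum(length[v] for v in covered if parent[v] is not None)

#        root
#       /    \
#      u      v
#     / \    / \
#    A   B  C   D
parent = {"root": None, "u": "root", "v": "root",
          "A": "u", "B": "u", "C": "v", "D": "v"}
length = {"root": 0.0, "u": 1.0, "v": 2.0,
          "A": 0.5, "B": 0.5, "C": 1.5, "D": 1.5}
print(phylogenetic_diversity(parent, length, {"A", "C"}))  # 0.5 + 1.0 + 1.5 + 2.0 = 5.0
```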

15.
We study three classical problems of genome rearrangement--sorting, halving, and the median problem--in a restricted double cut and join (DCJ) model. In the DCJ model, introduced by Yancopoulos et al., we can represent rearrangement events that happen in multichromosomal genomes, such as inversions, translocations, fusions, and fissions. Two DCJ operations can mimic transpositions or block interchanges by first extracting an appropriate segment of a chromosome, creating a temporary circular chromosome, and then reinserting it in its proper place. In the restricted model, we are concerned with multichromosomal linear genomes and we require that each circular excision is immediately followed by its reincorporation. Existing linear-time DCJ sorting and halving algorithms ignore this reincorporation constraint. In this article, we propose a new algorithm for the restricted sorting problem running in O(n log n) time, thus improving on the known quadratic time algorithm. We solve the restricted halving problem and give an algorithm that computes a multilinear halved genome in linear time. Finally, we show that the restricted median problem is NP-hard as conjectured.

16.
This study evaluated dexmedetomidine hydrochloride combined with remifentanil (observation group, n = 68) versus propofol combined with remifentanil (control group, n = 68) in 136 orthopedic surgery patients at our hospital. Sedation was assessed with the Observer's Assessment of Alertness/Sedation (OAA/S) scale before drug administration (T0), 10 min (T1), 20 min (T2), and 30 min (T3) after administration, and after awakening (T4), and the two groups were compared at each time point with respect to respiratory rate (RR), oxygen saturation (SpO2), heart rate (HR), and mean arterial pressure (MAP). The two groups did not differ in OAA/S scores at T0, T1, or T4 (p > 0.05), whereas the observation group had significantly lower OAA/S scores than the control group at T2 and T3 (p < 0.05). RR did not differ between the groups at T0, T1, or T4 (p > 0.05), but was significantly higher in the observation group at T2 and T3 (p < 0.05). SpO2 was significantly higher in the observation group at T2 (p < 0.05) and did not differ between the groups at the other time points (p > 0.05). HR and MAP showed no significant differences between the groups (p > 0.05). The incidence of adverse reactions was 2.94% in the observation group, significantly lower than the 11.76% in the control group (p = 0.049). These preliminary results suggest that, compared with propofol plus remifentanil, dexmedetomidine hydrochloride plus remifentanil provides better sedation in orthopedic surgery, effectively preserves respiratory function, and offers good hemodynamic stability and safety.

17.
Hud NV, Feigon J. Biochemistry 2002, 41(31):9900-9910
The localization of Mn(2+) in A-tract DNA has been studied by (1)H NMR spectroscopy using a series of self-complementary dodecamer oligonucleotides that contain the sequence motifs A(n)T(n) and T(n)A(n), where n = 2, 3, or 4. Mn(2+) localization in the minor groove is observed for all the sequences that have been studied, with the position and degree of localization being highly sequence-dependent. The site most favored for Mn(2+) localization in the minor groove is near the 5'-most ApA step for both the T(n)A(n) and the A(n)T(n) series. For the T(n)A(n) series, this results in two closely spaced symmetry-related Mn(2+) localization sites near the center of each duplex, while for the A(n)T(n) series, the two symmetry-related sites are separated by as much as one half-helical turn. The degree of Mn(2+) localization in the minor groove of the T(n)A(n) series decreases substantially as the AT sequence element is shortened from T(4)A(4) to T(2)A(2). The A(n)T(n) series also exhibits length-dependent Mn(2+) localization; however, the degree of minor groove occupancy by Mn(2+) is significantly less than that observed for the T(n)A(n) series. For both A(n)T(n) and T(n)A(n) sequences, the 3'-most AH2 resonance is the least broadened of the AH2 resonances. This is consistent with the observation that the minor groove of A-tract DNA narrows in the 5' to 3' direction, apparently becoming too narrow after two base pairs for the entry of a fully hydrated divalent cation. The results that are reported illustrate the delicate interplay that exists between DNA nucleotide sequence, minor groove width, and divalent cation localization. The proposed role of cation localization in helical axis bending by A-tracts is also discussed.

18.
In this paper, we study the problem of computing the similarity of two protein structures by measuring their contact-map overlap. Contact-map overlap abstracts the problem of computing the similarity of two polygonal chains as a graph-theoretic problem. In R³, we present the first polynomial time algorithm with any guarantee on the approximation ratio for the 3-dimensional problem. More precisely, we give an algorithm for the contact-map overlap problem with an approximation ratio of σ, where σ = min{σ(P1), σ(P2)} 0, is hard.

19.
There are a few algorithms designed to solve the problem of the optimal alignment of one sequence, the pattern, of length m, with another, longer sequence, the text, of length n. These algorithms allow mismatches, deletions and insertions. Algorithms to date run in O(mn) time. Let us define an integer, k, which is the maximal number of differences allowed. We present a simple algorithm showing that sequences can be optimally aligned in O(k²n) time. For long sequences the gain factor over the currently used algorithms is very large.
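For reference, the plain dynamic-programming solution that the k-differences algorithms improve upon is Sellers-style approximate matching: the first DP row is zero so a match may start anywhere in the text, and every end position with cost at most k is reported. The sketch below is this O(mn) baseline, not the O(k²n) diagonal-processing algorithm of the abstract; names are illustrative.

```python
def k_differences_matches(pattern, text, k):
    """Report every end position in `text` (1-based) where `pattern` matches
    with at most k differences (mismatches, insertions, deletions)."""
    m = len(pattern)
    col = list(range(m + 1))      # DP column for the empty text prefix
    hits = []
    for j, t in enumerate(text, start=1):
        new = [0] * (m + 1)       # first row is 0: a match may start anywhere
        for i in range(1, m + 1):
            cost = 0 if pattern[i - 1] == t else 1
            new[i] = min(col[i] + 1, new[i - 1] + 1, col[i - 1] + cost)
        col = new
        if col[m] <= k:
            hits.append(j)
    return hits

print(k_differences_matches("survey", "surgery", k=2))
```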

20.
MOTIVATION: The reconstruction of genetic networks is the holy grail of functional genomics. Its core task is to identify the causal structure of a gene network, that is, to distinguish direct from indirect regulatory interactions among gene products. In other words, to reconstruct a genetic network is to identify, for each network gene, which other genes and their activity the gene influences directly. Crucial to this task are perturbations of gene activity. Genomic technology permits large-scale experiments perturbing the activity of many genes and assessing the effect of each perturbation on all other genes in a genome. However, such experiments cannot distinguish between direct and indirect effects of a genetic perturbation. RESULTS: I present an algorithm to reconstruct direct regulatory interactions in gene networks from the results of gene perturbation experiments. The algorithm is based on a graph representation of genetic networks and applies to networks of arbitrary size and complexity. Algorithmic complexity in both storage and time is low, less than O(n²). In practice, the algorithm can reconstruct networks of several thousand genes in mere CPU seconds on a desktop workstation. AVAILABILITY: A Perl implementation of the algorithm is given in the Appendix. CONTACT: wagnera@unm.edu
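As a hedged illustration of the core idea (not the paper's algorithm), if the perturbation-effect graph is acyclic, removing every edge that can be explained by a longer path, i.e. taking a transitive reduction, leaves the candidate direct regulatory interactions; networkx provides this operation directly. The gene names and effect lists below are invented for the example.

```python
import networkx as nx

# Perturbation results: perturbing gene u changed the activity of the listed
# genes -- a mix of direct and indirect effects.
effects = {
    "A": ["B", "C", "D"],
    "B": ["C", "D"],
    "C": ["D"],
}

edges = [(u, v) for u, targets in effects.items() for v in targets]
g = nx.DiGraph(edges)

# For an acyclic effects graph, the transitive reduction keeps only edges
# that no longer path explains -- a simplified stand-in for the pruning of
# indirect effects described in the abstract.
direct = nx.transitive_reduction(g)
print(sorted(direct.edges()))   # [('A', 'B'), ('B', 'C'), ('C', 'D')]
```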
