Similar documents
 20 similar documents found (search time: 15 ms)
1.
The Maximum Likelihood (ML) method performs excellently for Direction-Of-Arrival (DOA) estimation, but it requires a multidimensional nonlinear search, which complicates the computation and hinders practical use. To reduce the high computational burden of the ML method and make it more suitable for engineering applications, we apply the Artificial Bee Colony (ABC) algorithm to maximize the likelihood function for DOA estimation. As a recently proposed bio-inspired computing algorithm, the ABC algorithm was originally designed to optimize multivariable functions by imitating the behavior of a bee colony locating good nectar sources in the natural environment. It offers an excellent alternative to conventional methods for ML-DOA estimation. The performance of the ABC-based ML method and other popular metaheuristic-based ML methods for DOA estimation is compared across various scenarios of convergence, Signal-to-Noise Ratio (SNR), and number of iterations. The computational loads of the ABC-based ML method and conventional ML methods for DOA estimation are also investigated. Simulation results demonstrate that the proposed ABC-based method is more efficient in computation and statistical performance than other ML-based DOA estimation methods.
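The abstract does not give the authors' implementation; as a hedged illustration of the technique it names, the following is a minimal one-dimensional ABC sketch (employed-bee, onlooker, and scout phases) maximizing an arbitrary objective, which in the paper's setting would be the DOA likelihood surface. All parameter names and defaults here are assumptions, not the paper's.

```python
import random

def abc_maximize(f, lo, hi, n_food=10, limit=20, iters=200, seed=0):
    """Minimal one-dimensional Artificial Bee Colony maximizer (a sketch)."""
    rng = random.Random(seed)
    foods = [rng.uniform(lo, hi) for _ in range(n_food)]
    vals = [f(x) for x in foods]
    trials = [0] * n_food
    best_x, best_v = max(zip(foods, vals), key=lambda p: p[1])

    def neighbour(i):
        # perturb source i along its difference with a random partner
        k = rng.randrange(n_food)
        x = foods[i] + rng.uniform(-1.0, 1.0) * (foods[i] - foods[k])
        return min(hi, max(lo, x))

    def greedy(i, x):
        # keep whichever of the old and candidate source is better
        nonlocal best_x, best_v
        fx = f(x)
        if fx > vals[i]:
            foods[i], vals[i], trials[i] = x, fx, 0
            if fx > best_v:
                best_x, best_v = x, fx
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                    # employed-bee phase
            greedy(i, neighbour(i))
        base = min(vals)
        weights = [v - base + 1e-12 for v in vals]
        for i in rng.choices(range(n_food), weights=weights, k=n_food):
            greedy(i, neighbour(i))                # onlooker phase
        for i in range(n_food):                    # scout phase: restart stagnant sources
            if trials[i] > limit:
                foods[i] = rng.uniform(lo, hi)
                vals[i], trials[i] = f(foods[i]), 0

    return best_x, best_v
```

A true ML-DOA application would search over multiple angles jointly; this sketch only shows why ABC avoids the gradient computations that make the exact ML search expensive.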

2.
This paper proposes a modified BFGS formula within a trust-region model for solving nonsmooth convex minimization problems, using the Moreau-Yosida regularization (smoothing) approach and a new secant equation with a BFGS update formula. The algorithm uses function-value and gradient-value information to approximate the Hessian. The Hessian matrix is updated by the BFGS formula rather than computed from second-order information about the function, which decreases the workload and computation time. Under suitable conditions, the algorithm converges globally to an optimal solution. Numerical results show that the algorithm can successfully solve nonsmooth unconstrained convex problems.
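The paper's modified formula is not reproduced in the abstract; as a hedged sketch of the mechanism it builds on, this is the standard BFGS Hessian-approximation update, which estimates curvature from gradient differences alone and satisfies the secant equation exactly.

```python
import numpy as np

def bfgs_update(B, s, y):
    """One standard BFGS update of a Hessian approximation B.

    s = x_{k+1} - x_k (step), y = g_{k+1} - g_k (gradient change).
    The updated matrix satisfies the secant equation B_new @ s = y,
    so no second derivatives of the objective are ever computed.
    Requires the curvature condition y @ s > 0.
    """
    Bs = B @ s
    return (B
            - np.outer(Bs, Bs) / (s @ Bs)
            + np.outer(y, y) / (y @ s))
```

In the paper's setting the gradients would come from the Moreau-Yosida regularization of the nonsmooth objective; that smoothing step is not shown here.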

3.
Hannenhalli and Pevzner gave the first polynomial-time algorithm for computing the inversion distance between two signed permutations, as part of the larger task of determining the shortest sequence of inversions needed to transform one permutation into the other. Their algorithm (restricted to distance calculation) proceeds in two stages: in the first stage, the overlap graph induced by the permutation is decomposed into connected components; then, in the second stage, certain graph structures (hurdles and others) are identified. Berman and Hannenhalli avoided the explicit computation of the overlap graph and gave an O(n alpha(n)) algorithm, based on a Union-Find structure, to find its connected components, where alpha is the inverse Ackermann function. Since for all practical purposes alpha(n) is a constant no larger than four, that algorithm had been the fastest practical one to date. In this paper, we present a new linear-time algorithm for computing the connected components that is more efficient than that of Berman and Hannenhalli in both theory and practice. Our algorithm uses only a stack and is very easy to implement. We give the results of computational experiments over a large range of permutation pairs produced through simulated evolution; our experiments show a speed-up by a factor of 2 to 5 in the computation of the connected components and by a factor of 1.3 to 2 in the overall distance computation.
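Neither paper's code appears in the abstract; as a hedged, generic sketch of the O(n alpha(n)) approach it attributes to Berman and Hannenhalli, here is connected-component counting with a Union-Find structure (path compression plus union by rank), applied to an arbitrary edge list rather than to a permutation's overlap graph.

```python
def connected_components(n, edges):
    """Count connected components of an n-vertex graph via Union-Find.

    Path compression + union by rank give near-linear O(n * alpha(n))
    total time, where alpha is the inverse Ackermann function.
    """
    parent = list(range(n))
    rank = [0] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression (halving)
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra == rb:
            return
        if rank[ra] < rank[rb]:
            ra, rb = rb, ra
        parent[rb] = ra                    # attach shorter tree under taller
        if rank[ra] == rank[rb]:
            rank[ra] += 1

    for a, b in edges:
        union(a, b)
    return len({find(x) for x in range(n)})
```

The paper's own contribution, the stack-based linear-time algorithm, exploits structure specific to permutation overlap graphs and is not reproduced here.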

4.
Toward a theory of evolutionary computation
Eberbach E. Bio Systems, 2005, 82(1): 1-19
We outline a theory of evolutionary computation using a formal model, the Evolutionary Turing Machine, introduced as an extension of the Turing Machine model. Evolutionary Turing Machines provide a better and more complete model for evolutionary computing than conventional Turing Machines, algorithms, and Markov chains. Convergence and the convergence rate are defined and investigated in terms of this new model. The sufficient conditions needed for the completeness and optimality of evolutionary search are investigated. In particular, the notion of total optimality as an instance of the multiobjective optimization of the Universal Evolutionary Turing Machine is introduced. This provides an automatic way to deal with the intractability of evolutionary search by optimizing the quality of solutions and search costs simultaneously. Based on the new model, a very flexible classification of optimization-problem hardness for evolutionary techniques is proposed. The expressiveness of evolutionary computation is investigated. We show that the problem of the best evolutionary algorithm is undecidable, independently of whether the fitness function is time-dependent or fixed. It is demonstrated that the evolutionary computation paradigm is more expressive than Turing Machines, and thus than the conventional computer science based on them. We show that an Evolutionary Turing Machine is able to solve nonalgorithmically the halting problem of the Universal Turing Machine and, asymptotically, the best-evolutionary-algorithm problem. In other words, the best evolutionary algorithm does not exist, but it can be indefinitely approximated using evolutionary techniques.

5.
MOTIVATION: Multiple sequence alignment is an important tool in computational biology. In order to compute multiple alignments in affordable time, the most commonly used multiple alignment methods have to rely on heuristics. Nevertheless, the computation of optimal multiple alignments is important in its own right: it provides a means of evaluating heuristic approaches and can serve as a subprocedure of heuristic alignment methods. RESULTS: We present an algorithm that uses the divide-and-conquer alignment approach together with recent results on search space reduction to speed up the computation of multiple sequence alignments. The method is adaptive: depending on the time one wants to spend on the alignment, a better, up to optimal, alignment can be obtained. To speed up the computation in the optimal alignment step, we apply the A* algorithm, which leads to a procedure provably more efficient than previous exact algorithms. We also describe our implementation of the algorithm and present results showing the effectiveness and limitations of the procedure.

6.
Multiple alignment is an important problem in computational biology. It is well known that it can be solved exactly by a dynamic programming algorithm which in turn can be interpreted as a shortest path computation in a directed acyclic graph. The A* algorithm (or goal-directed unidirectional search) is a technique that speeds up the computation of a shortest path by transforming the edge lengths without losing the optimality of the shortest path. We implemented the A* algorithm in a computer program similar to MSA (Gupta et al., 1995) and FMA (Shibuya and Imai, 1997). We incorporated in this program new bounding strategies for both lower and upper bounds and show that the A* algorithm, together with our improvements, can speed up computations considerably. Additionally, we show that the A* algorithm together with a standard bounding technique is superior to the well-known Carrillo-Lipman bounding since it excludes more nodes from consideration.
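The edge-length transformation the abstract describes can be sketched generically. Assuming an admissible heuristic h (a lower bound on the remaining distance), A* is equivalent to Dijkstra's algorithm on transformed lengths w' = w - h(u) + h(v), which steers the search toward the goal without losing optimality. The graph encoding is an assumption for illustration, not the MSA lattice used in the paper.

```python
import heapq

def astar(graph, start, goal, h):
    """A* shortest path. graph: node -> list of (neighbor, weight).

    h(node) must be an admissible lower bound on the distance to goal
    (h = 0 everywhere reduces this to Dijkstra's algorithm).
    Returns the shortest-path length, or None if goal is unreachable.
    """
    dist = {start: 0}
    pq = [(h(start), start)]          # priority = dist so far + heuristic
    while pq:
        f, u = heapq.heappop(pq)
        if u == goal:
            return dist[u]
        if f > dist[u] + h(u):
            continue                  # stale queue entry, skip
        for v, w in graph.get(u, []):
            nd = dist[u] + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd + h(v), v))
    return None
```

In the multiple-alignment setting, nodes are positions in the alignment lattice and h typically comes from sums of pairwise alignment costs; those bounds are the paper's contribution and are not reproduced here.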

7.
We have developed an algorithm for the estimation of cardiac motion from medical images. The algorithm exploits monogenic signal theory, recently introduced as an N-dimensional generalization of the analytic signal. The displacement is computed locally by assuming conservation of the monogenic phase over time. A local affine displacement model replaces the standard translation model to account for more complex motions such as contraction/expansion and shear. A coarse-to-fine B-spline scheme allows robust and effective computation of the model's parameters, and a pyramidal refinement scheme helps handle large motions. Robustness against noise is increased by replacing the standard pointwise computation of the monogenic orientation with a more robust least-squares orientation estimate. This paper reviews the results obtained on simulated cardiac images from different modalities, namely 2D and 3D cardiac ultrasound and tagged magnetic resonance, and shows how the proposed algorithm represents a valuable alternative to state-of-the-art algorithms in the respective fields.

8.
The calculation of multipoint likelihoods of pedigree data is crucial for extracting the full available information for both parametric and nonparametric linkage analysis. Recent mathematical advances in both the Elston-Stewart and Lander-Green algorithms for computing exact multipoint likelihoods of pedigree data have enabled researchers to analyze data sets containing more markers and more individuals faster and more efficiently. This paper presents novel algorithms that further extend the computational boundary of the Elston-Stewart algorithm. They have been implemented in the software package VITESSE v. 2 and are shown to be several orders of magnitude faster than the original implementation of the Elston-Stewart algorithm in VITESSE v. 1 on a variety of real pedigree data. VITESSE v. 2 was faster by a factor ranging from 168 to over 1,700 on these data sets, making a qualitative difference in the analysis. The main algorithm is based on faster computation of the conditional probability of a component nuclear family within the pedigree, by summing over the joint genotypes of the children instead of the parents as is done in VITESSE v. 1. This change in summation allows the parent-child transmission part of the calculation to be computed not only for each parent separately but also for each locus separately, using inheritance vectors as in the Lander-Green algorithm. Computing both of these separately can lead to substantial computational savings. The use of inheritance vectors in the nuclear family calculation represents a partial synthesis of the techniques of the Lander-Green algorithm into the Elston-Stewart algorithm. In addition, the technique of local set recoding is introduced to further reduce the complexity of the nuclear family computation. These new algorithms, however, are not universally faster on all types of pedigree data than the method of summing over the parents implemented in VITESSE v. 1.
Therefore, a hybrid algorithm is introduced that combines the strengths of both summation methods, using a numerical heuristic to decide which of the two to use for a given nuclear family within the pedigree; it is shown to be faster than either method on its own. Finally, this paper discusses various complexity issues regarding both the Elston-Stewart and Lander-Green algorithms and possible directions for further synthesis.

9.
Based on the theory of measuring superoxide dismutase (SOD) activity with the polarographic oxygen-electrode method, and applying computer technology, an automatic SOD activity analyzer was developed. The instrument combines a sensitive oxygen electrode and an amplifier circuit around a single-chip microcomputer, making SOD activity measurement faster, micro-scale, and more accurate, and essentially automating the test.

10.
A method for computing the retention volumes of linear peptides of known composition containing no more than 25 amino acid residues in gradient reversed-phase HPLC was developed. The method is suitable for various acetonitrile gradient profiles. The calculations were carried out on the basis of a statistical model whose parameters were the experimental dependences of the retention of individual amino acids on the acetonitrile concentration. The method was used to predict the chromatographic behavior of 34 peptides in four different acetonitrile gradients. The correlation coefficients between the predicted and experimental retention volumes were more than 0.9, and the average relative error of prediction was less than 15%.

11.
A method for computing the retention volumes of linear peptides of known composition containing no more than 25 amino acid residues in gradient reversed-phase HPLC was developed. The method is suitable for various acetonitrile gradient profiles. The calculations were carried out on the basis of a statistical model whose parameters were the experimental dependences of the retention of individual amino acids on the acetonitrile concentration. The method was used to predict the chromatographic behavior of 34 peptides in four different acetonitrile gradients. The correlation coefficients between the predicted and experimental retention volumes were more than 0.9, and the average relative error of prediction was less than 15%. The English version of the paper: Russian Journal of Bioorganic Chemistry, 2008, vol. 34, no. 2; see also http://www.maik.ru.

12.
Sequence analysis is the basis of bioinformatics, and sequence alignment is a fundamental task in sequence analysis. The widely used alignment algorithm, dynamic programming, though generating optimal alignments, takes too much time because of its high computational complexity of O(N^2). In order to reduce the computational complexity without sacrificing too much accuracy, we have developed a new approach to aligning two homologous sequences. The new approach presented here, adopting our novel algorithm that combines probabilistic and combinatorial analysis, reduces the computational complexity to as low as O(N). Our program is at least 15 times faster than traditional pairwise alignment algorithms without much loss of accuracy. We have hence named the algorithm Super Pairwise Alignment (SPA). The pairwise alignment program based on SPA and the detailed results for the aligned sequences discussed in this article are available upon request.

13.
The simplest dynamic programming algorithm for planar RNA folding searches for the maximum number of base pairs. The algorithm uses O(n^3) steps. The more general case, where different weights (energies) are assigned to stacked base pairs and to the various types of single-stranded region topologies, requires a considerably longer computation time because of the partial backtracking involved. Limiting the loop size reduces the running time back to O(n^3). Reduction in the number of steps in the calculations of the various RNA topologies has recently been suggested, thereby improving the time behavior. Here we show how a "jumping" procedure can be used to speed up the computation, not only for the maximum-number-of-base-pairs algorithm, but for the minimal-energy algorithm as well.
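The O(n^3) base-pair-maximization recursion the abstract starts from (commonly attributed to Nussinov) can be sketched directly; the "jumping" speed-up itself is the paper's contribution and is not reproduced. The pairing rules and minimum loop size below are conventional assumptions.

```python
def max_base_pairs(seq, min_loop=3):
    """O(n^3) dynamic program: maximum number of nested base pairs
    in an RNA sequence, with hairpin loops of at least min_loop
    unpaired bases (Watson-Crick plus G-U wobble pairs allowed)."""
    ok = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
          ("G", "U"), ("U", "G")}
    n = len(seq)
    M = [[0] * n for _ in range(n)]          # M[i][j]: best count on seq[i..j]
    for span in range(min_loop + 1, n):      # increasing subsequence length
        for i in range(n - span):
            j = i + span
            best = M[i][j - 1]               # case 1: position j is unpaired
            for k in range(i, j - min_loop): # case 2: j pairs with some k
                if (seq[k], seq[j]) in ok:
                    left = M[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + M[k + 1][j - 1])
            M[i][j] = best
    return M[0][n - 1] if n else 0
```

The energy-minimization variant replaces the "+1" with stacking and loop energies, which is where the partial backtracking and the longer running time discussed above come in.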

14.
Compartment-model analysis of medical imaging data is a well-established but extremely time-consuming technique for quantifying changes in the microvascular physiology of targeted organs in patients after antivascular therapies. In this paper, we present a first graphics processing unit (GPU)-accelerated method for compartmental modeling of medical imaging data. Using this approach, we performed the analysis of dynamic contrast-enhanced magnetic resonance imaging data from bevacizumab-treated glioblastoma patients in less than one minute per slice without losing accuracy. This approach reduced the computation time by more than 120-fold compared to a central processing unit (CPU)-based method that performed the analogous analysis steps serially, and by more than 17-fold compared to an algorithm optimized for CPU computation. The method developed in this study could be of significant utility in reducing the computational time required to assess tumor physiology from dynamic contrast-enhanced magnetic resonance imaging data in the preclinical and clinical development of antivascular therapies and related fields.

15.
A BASIC program for the computation of the Wilcoxon test statistic is introduced. It is simpler than the usual ones because it is based on a simpler method of computation.
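The BASIC listing itself is not in the abstract; as a hedged modern sketch, this computes the Wilcoxon rank-sum statistic W (the sum of the first sample's ranks in the pooled data), handling ties with midranks.

```python
def wilcoxon_rank_sum(x, y):
    """Wilcoxon rank-sum statistic W: sum of the ranks of sample x
    after jointly ranking the pooled data (ties receive midranks)."""
    pooled = sorted((v, 0 if i < len(x) else 1)
                    for i, v in enumerate(list(x) + list(y)))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1                        # [i, j) is a block of tied values
        mid = (i + 1 + j) / 2             # average of ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = mid
        i = j
    return sum(r for r, (_, grp) in zip(ranks, pooled) if grp == 0)
```

Note the check W_x + W_y = n(n+1)/2 for the pooled size n, which makes a convenient sanity test.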

16.
In studies designed to compare different methods of measurement where more than two methods are compared or replicate measurements by each method are available, standard statistical approaches such as computation of limits of agreement are not directly applicable. A model is presented for comparing several methods of measurement in the situation where replicate measurements by each method are available. Measurements are viewed as classified by method, subject and replicate. Models assuming exchangeable as well as non-exchangeable replicates are considered. A fitting algorithm is presented that allows the estimation of linear relationships between methods as well as relevant variance components. The algorithm only uses methods already implemented in most statistical software.

17.
We describe an algorithm (IRSA) for the identification of common regulatory signals in samples of unaligned DNA sequences. The algorithm was tested on randomly generated sequences of fixed length with an implanted signal of length 15 containing 4 mutations, and on natural upstream regions of bacterial genes regulated by PurR, ArgR and CRP. It was then applied to upstream regions of orthologous genes from Escherichia coli and related genomes, and some new palindromic binding signals and direct-repeat signals were identified. Finally, we present a parallel version suitable for computers supporting the MPI protocol. This implementation is not strictly bounded by the number of available processors, and the computation speed depends linearly on the number of processors.

18.
An algorithm and the corresponding FORTRAN program for the computation of the Mann-Whitney U-test statistic are introduced; they are simpler than the familiar procedure using ranks.
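The FORTRAN listing is not given in the abstract; as a hedged sketch of a rank-free computation in the same spirit, U can be obtained by direct pair counting, with ties contributing one half.

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U by direct pair counting, with no ranking step:
    U = #{(xi, yj) : xi > yj}, counting ties as 1/2."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u
```

A useful identity for checking: U(x, y) + U(y, x) = len(x) * len(y).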

19.
Microarray data contain a large number of genes (usually more than 1000) and a relatively small number of samples (usually fewer than 100). This poses problems for discriminant analysis of microarray data. One way to alleviate the problem is to reduce the dimensionality of the data by selecting genes important to the discriminant problem. Gene selection can be cast as a feature selection problem in the context of pattern classification. Feature selection approaches fall broadly into filter methods and wrapper methods. Wrapper methods outperform filter methods, but at the cost of more intensive computation. In the present study, we propose a wrapper-like gene selection algorithm based on the Regularization Network. Compared with the classical wrapper method, the computational cost of our gene selection algorithm is significantly reduced, because the evaluation criterion we propose does not require repeated training in the leave-one-out procedure.

20.
The affected-pedigree-member (APM) method of linkage analysis is designed to detect departures from independent segregation of disease and marker phenotypes. The underlying statistic of the APM method operates on the identity-by-state relations implied by the marker phenotypes of the affected individuals within a pedigree. Here we generalize the APM statistic to multiple linked markers. This generalization relies on recursive computation of two-locus kinship coefficients by an algorithm of Thompson. The distributional properties of the extended APM statistic are investigated theoretically and by simulation in the context of one real and one artificial data set. In both examples, the multilocus statistic tends to reject, more strongly than the single-locus statistics do, the null hypothesis of independent segregation between the disease locus and the marker loci.
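As a hedged illustration of the recursive machinery the abstract invokes, here is the classical single-locus kinship recursion on which Thompson's two-locus extension builds (the two-locus version itself is not reproduced). The pedigree encoding, with ids numbered so that parents precede children, is an assumption for illustration.

```python
from functools import lru_cache

def kinship(ped):
    """Single-locus kinship coefficients by recursion on the pedigree.

    ped maps individual id -> (father, mother); founders have (None, None).
    Assumes ids are numbered so that parents precede their children.
    Returns a memoized function phi(a, b).
    """
    @lru_cache(maxsize=None)
    def phi(a, b):
        if a is None or b is None:
            return 0.0                      # missing parent: treated as unrelated
        if a == b:
            f, m = ped[a]
            return 0.5 * (1.0 + phi(f, m))  # self-kinship = (1 + inbreeding)/2
        if a < b:                           # recurse on the later-born individual
            a, b = b, a
        f, m = ped[a]
        return 0.5 * (phi(f, b) + phi(m, b))
    return phi
```

For two unrelated parents and their children, this yields the familiar values: parent-offspring and full-sib kinship are both 1/4.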


Copyright©北京勤云科技发展有限公司  京ICP备09084417号