Similar Literature
20 similar records found (search time: 24 ms)
1.
There is a pressing need to align the growing set of expressed sequence tags (ESTs) with the newly sequenced human genome. However, the problem is complicated by the exon/intron structure of eukaryotic genes, misread nucleotides in ESTs, and the millions of repetitive sequences in genomic sequences. To solve this problem, algorithms that use dynamic programming have been proposed; in practice, however, these algorithms require an enormous amount of processing time. In an effort to improve the computational efficiency of these classical DP algorithms, we developed software that fully utilizes lookup tables to efficiently detect the start and end points of an EST within a given DNA sequence, and subsequently to identify exons and introns promptly. In addition, the locations of all splice sites must be calculated correctly with high sensitivity and accuracy, while retaining high computational efficiency. This goal is hard to accomplish in practice because of misread nucleotides in ESTs and repetitive sequences in the genome; nevertheless, we present two heuristics that effectively settle this issue. Experimental results confirm that our technique improves the overall computation time by orders of magnitude compared with common tools such as SIM4 and BLAT, while attaining high sensitivity and accuracy on a clean dataset of documented genes.
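The lookup-table idea in this abstract can be illustrated with a minimal sketch: index every k-mer of the genomic sequence once, then find exact seed matches of the EST in constant time per k-mer. This is a generic seeding sketch under assumed toy sequences, not the paper's actual software.

```python
def build_kmer_index(genome, k=8):
    """Map every k-mer in the genomic sequence to its start positions."""
    index = {}
    for i in range(len(genome) - k + 1):
        index.setdefault(genome[i:i + k], []).append(i)
    return index

def seed_hits(est, index, k=8):
    """Locate exact k-mer matches of the EST in the genome.

    Each hit is a candidate anchor for an exon; a full aligner would
    chain such anchors and run DP only between them."""
    hits = []
    for j in range(len(est) - k + 1):
        for pos in index.get(est[j:j + k], []):
            hits.append((j, pos))  # (offset in EST, offset in genome)
    return hits

genome = "AAAACCCCGGGGTTTTACGTACGTAAAACCCC"
est = "CCCCGGGG"
idx = build_kmer_index(genome, k=8)
print(seed_hits(est, idx, k=8))  # the EST's single 8-mer occurs at genome offset 4
```

Building the index is linear in genome length, which is what lets seed detection avoid the quadratic cost of running DP over the whole genome.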

2.
Common track fusion algorithms in multi-sensor systems have several defects: a serious imbalance between accuracy and computational cost, identical treatment of all sensor information regardless of its quality, and high fusion errors at inflection points. To address these defects, a track fusion algorithm based on reliability (TFR) is presented for multi-sensor, multi-target environments. To improve information quality, outliers in the local tracks are eliminated first. The reliability of each local track is then calculated, and the local tracks with high reliability are chosen for state estimation fusion. In contrast to existing methods, TFR reduces the high fusion errors at the inflection points of system tracks and obtains high accuracy at a lower computational cost. Simulation results verify the effectiveness and superiority of the algorithm in dense sensor environments.
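The two steps the abstract describes, gating out outliers and then weighting the surviving local tracks by reliability, can be sketched in one dimension. This is a toy illustration (using inverse variance as the reliability weight and a median gate), not the paper's TFR algorithm.

```python
from statistics import median

def fuse_tracks(estimates, variances, gate=3.0):
    """Fuse local track estimates of one target state.

    Step 1: discard estimates far from the median (crude outlier gate).
    Step 2: average the survivors, weighting each by its reliability,
    here taken as inverse variance (an assumed proxy for the paper's
    reliability measure)."""
    m = median(estimates)
    kept = [(x, v) for x, v in zip(estimates, variances)
            if abs(x - m) <= gate * (v ** 0.5)]
    weights = [1.0 / v for _, v in kept]
    return sum(x * w for (x, _), w in zip(kept, weights)) / sum(weights)

# Three sensors report the same target; the third is a gross outlier.
print(fuse_tracks([10.1, 9.9, 25.0], [1.0, 1.0, 1.0]))  # outlier gated out -> 10.0
```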

3.
4.
A new dynamical layout algorithm for complex biochemical reaction networks   (Cited by: 1; self-citations: 0; citations by others: 1)

Background  

To study complex biochemical reaction networks in living cells, researchers increasingly rely on databases and computational methods. To facilitate computational approaches, visualisation techniques are highly important. Biochemical reaction networks, e.g. metabolic pathways, are often depicted as graphs, and these graphs should be drawn dynamically to provide flexibility in the context of different data. Conventional layout algorithms are not sufficient for every kind of pathway in biochemical research, mainly because of certain drawing conventions that biochemists and biologists are used to and that conventional layout algorithms do not respect. A number of approaches have been developed to improve this situation. Some of these are used in the context of biochemical databases and make more or less use of the information in those databases to aid the layout process. However, visualisation is also becoming increasingly important in modelling and simulation tools, which mostly do not offer connections to databases; layout algorithms used in these tools therefore have to work independently of any database. In addition, all of the existing algorithms face limitations with respect to the number of edge crossings in larger biochemical systems, owing to their high interconnectivity. Last but not least, in some cases biochemical conventions are not met properly.

5.
In many manufacturing systems, parts must be fed to automatic machines in a specific orientation. This is often accomplished by what are known as part-orienting systems (POSs). A POS consists of one or more separate devices that orient parts, usually with multiple orientations. A theoretical analysis, along with the development of some heuristic algorithms, is carried out to treat the problem of efficiently selecting and ordering the part-orienting devices that make up a POS. It is indicated that the size of such problems can be quite large in the context of flexible manufacturing systems, which may require what are known as flexible part-orienting systems for their efficient operation. Computational experiments are performed to evaluate the relative performance of the heuristic algorithms.

6.
Endocrine Practice, 2023, 29(3): 179-184
Objectives: Diabetes management presents a substantial burden to individuals living with the condition and their families, health care professionals, and health care systems. Although an increasing number of digital tools are available to assist with tasks such as blood glucose monitoring and insulin dose calculation, multiple persistent barriers continue to prevent their optimal use.
Methods: As a guide to creating an equitable connected digital diabetes ecosystem, we propose a roadmap with key milestones that need to be achieved along the way.
Results: During the COVID-19 pandemic, there was an increased use of digital tools to support diabetes care, but the pandemic also highlighted inequities in access to and use of these same technologies. Based on these observations, a connected diabetes ecosystem should incorporate and optimize the use of existing treatments and technologies; integrate tasks such as glucose monitoring, data analysis, and insulin dose calculation; and lead to improved and equitable health outcomes.
Conclusions: Development of this ecosystem will require overcoming multiple obstacles, including interoperability and data security concerns. However, an integrated system would optimize existing devices, technologies, and treatments to improve outcomes.

7.

Background  

R is the preferred tool for statistical analysis among many bioinformaticians, due in part to the increasing number of freely available analytical methods. Such methods can be quickly reused and adapted to each particular experiment. However, in experiments that generate large amounts of data, for example using high-throughput screening devices, the processing time required to analyze the data is often quite long. One solution for reducing processing time is the use of parallel computing technologies. Because R does not natively support parallel computation, several tools have been developed to enable it. However, these tools require multiple modifications to the way R programs are usually written or run. Although they can ultimately speed up the calculations, the time, skills, and additional resources required to use them are an obstacle for most bioinformaticians.

8.
Identification of fusion proteins has contributed significantly to our understanding of cancer progression, yielding important predictive markers and therapeutic targets. While fusion proteins can potentially be identified by mass spectrometry, all previously found fusion proteins were identified using genomic (rather than mass spectrometry) technologies. This lack of MS/MS applications in studies of fusion proteins is caused by the lack of computational tools that can interpret mass spectra from peptides covering unknown fusion breakpoints (fusion peptides). Indeed, the number of potential fusion peptides is so large that the existing MS/MS database search tools become impractical even in the case of small genomes. We explore computational approaches to identifying fusion peptides, propose an algorithm for solving the fusion peptide identification problem, and analyze the performance of this algorithm on simulated data. We further illustrate how this approach can be modified for human exon prediction.
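The combinatorial blow-up the abstract mentions is easy to see by enumeration: a fusion peptide can join any breakpoint in protein A to any breakpoint in protein B. The sketch below, using made-up toy sequences, enumerates all junction-spanning peptides of a fixed length; it is an illustration of the search space, not the paper's algorithm.

```python
def junction_peptides(prot_a, prot_b, pep_len):
    """All length-pep_len peptides that span the junction of some
    fusion protein A[:p] + B[q:], for every breakpoint pair (p, q).

    The size of this set grows roughly with len(A) * len(B), which is
    why naive MS/MS database search over all candidates is impractical."""
    peps = set()
    for p in range(1, len(prot_a) + 1):        # breakpoint after residue p of A
        for q in range(len(prot_b)):           # breakpoint before residue q of B
            fusion = prot_a[:p] + prot_b[q:]
            # keep only windows containing at least one residue from each side
            for s in range(max(0, p - pep_len + 1), p):
                if s + pep_len <= len(fusion):
                    peps.add(fusion[s:s + pep_len])
    return peps

peps = junction_peptides("MKLVA", "QRSTP", 4)
print(len(peps), sorted(peps)[:3])
```

Even for these two 5-residue toy proteins the candidate set is many times larger than either protein's own tryptic peptide set, which is the scaling problem the paper addresses.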

9.
10.
Multi-species comparisons of DNA sequences are more powerful for discovering functional sequences than pairwise DNA sequence comparisons. Most current computational tools have been designed for pairwise comparisons, and efficient extension of these tools to multiple species will require knowledge of the ideal evolutionary distance to choose and the development of new algorithms for alignment, analysis of conservation, and visualization of results.

11.
12.
The exact resolution of large instances of combinatorial optimization problems, such as the three-dimensional quadratic assignment problem (Q3AP), is a real challenge for grid computing. Indeed, it is necessary to reconsider the resolution algorithms to take into account the characteristics of such environments, especially the large scale and dynamic availability of resources and their multi-domain administration. In this paper, we revisit the design and implementation of the branch-and-bound algorithm for solving large combinatorial optimization problems such as Q3AP on computational grids. This gridification is based on new ways to deal efficiently with several crucial issues, mainly dynamic adaptive load balancing and fault tolerance. Our new approach allowed the exact resolution, on a nation-wide grid, of a difficult Q3AP instance. Solving this instance took an average of 1,123 computing cores over less than 12 days, with a peak of around 3,427 computing cores.
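The sequential core that such gridified solvers parallelize is a branch-and-bound search: explore partial solutions, compute a lower bound, and prune any subtree whose bound cannot beat the best complete solution found so far. Below is a minimal serial sketch on a tiny 2D assignment instance (not Q3AP, and with an invented cost matrix), using the sum of row minima as an admissible bound.

```python
def branch_and_bound_assignment(cost):
    """Exact branch and bound for a small assignment problem:
    assign each row i to a distinct column, minimizing total cost.

    Lower bound for a partial assignment = cost accumulated so far
    plus the minimum available cost in each unassigned row."""
    n = len(cost)
    best = [float("inf"), None]        # [best cost, best assignment]

    def bound(row, used, acc):
        lb = acc
        for r in range(row, n):
            lb += min(c for j, c in enumerate(cost[r]) if j not in used)
        return lb

    def search(row, used, acc, assign):
        if row == n:
            if acc < best[0]:
                best[0], best[1] = acc, assign[:]
            return
        if bound(row, used, acc) >= best[0]:
            return                     # prune: this subtree cannot improve on best
        for j in range(n):
            if j not in used:
                assign.append(j)
                search(row + 1, used | {j}, acc + cost[row][j], assign)
                assign.pop()

    search(0, frozenset(), 0, [])
    return best[0], best[1]

cost = [[9, 2, 7], [6, 4, 3], [5, 8, 1]]
print(branch_and_bound_assignment(cost))  # -> (9, [1, 0, 2])
```

In the gridified setting described by the paper, the subtrees generated at the `for j` branching step become independent work units that can be distributed, rebalanced, and recomputed after failures.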

13.
14.
Biomedical research relies increasingly on large collections of data sets and knowledge whose generation, representation and analysis often require large collaborative and interdisciplinary efforts. This dimension of 'big data' research calls for the development of computational tools to manage such a vast amount of data, as well as tools that can improve communication and access to information from collaborating researchers and from the wider community. Whenever research projects have a defined temporal scope, an additional issue of data management arises, namely how the knowledge generated within the project can be made available beyond its boundaries and life-time. DC-THERA is a European 'Network of Excellence' (NoE) that spawned a very large collaborative and interdisciplinary research community, focusing on the development of novel immunotherapies derived from fundamental research in dendritic cell immunobiology. In this article we introduce the DC-THERA Directory, which is an information system designed to support knowledge management for this research community and beyond. We present how the use of metadata and Semantic Web technologies can effectively help to organize the knowledge generated by modern collaborative research, how these technologies can enable effective data management solutions during and beyond the project lifecycle, and how resources such as the DC-THERA Directory fit into the larger context of e-science.

15.
The vast number of microbial sequences resulting from sequencing efforts using new technologies requires us to re-assess currently available analysis methodologies and tools. Here we describe trends in the development and distribution of software for analyzing microbial sequence data. We then focus on one widely used set of methods, dimensionality reduction techniques, which allow users to summarize and compare these vast datasets. We conclude by emphasizing the utility of formal software engineering methods for the development of computational biology tools, and the need for new algorithms for comparing microbial communities. Such large-scale comparisons will allow us to fulfill the dream of rapid integration and comparison of microbial sequence data sets, in a replicable analytical environment, in order to describe the microbial world we inhabit.
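Dimensionality reduction for community comparison typically starts from a pairwise dissimilarity matrix, which an ordination method (e.g. PCoA) then projects into a few axes. A common choice for microbial abundance data is the Bray-Curtis dissimilarity; the sketch below computes it from scratch on made-up sample vectors.

```python
def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two community abundance vectors:
    sum(|u_i - v_i|) / sum(u_i + v_i).  0 = identical communities,
    1 = no shared taxa."""
    num = sum(abs(a - b) for a, b in zip(u, v))
    den = sum(a + b for a, b in zip(u, v))
    return num / den if den else 0.0

# Hypothetical taxon abundance vectors (rows = samples, columns = taxa).
samples = {"gut_1": [10, 0, 5], "gut_2": [8, 2, 5], "soil": [0, 20, 0]}
names = list(samples)
matrix = [[bray_curtis(samples[a], samples[b]) for b in names] for a in names]
print(matrix[0][2])  # gut_1 vs soil share no taxa -> 1.0
```

The full matrix is symmetric with a zero diagonal; dimensionality reduction then summarizes it so that thousands of samples can be compared visually.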

16.
Lagana AA, Lohe MA, von Smekal L. PLoS ONE, 2011, 6(12): e29417
We present a scheme for using external quantum devices with the universal quantum computer previously constructed. We thereby show how the universal quantum computer can utilize networked quantum information resources to carry out local computations. Such information may come from specialized quantum devices or even from remote universal quantum computers. We accomplish this by devising universal quantum computer programs that implement well-known oracle-based quantum algorithms, namely the Deutsch, Deutsch-Jozsa, and Grover algorithms, using external black-box quantum oracle devices. In the process, we demonstrate a method for mapping existing quantum algorithms onto the universal quantum computer.
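The simplest of the oracle-based algorithms named here is Deutsch's: with a single query to a black-box oracle for f: {0,1} -> {0,1}, it decides whether f is constant or balanced. The following is a plain state-vector simulation of that circuit (not the paper's universal-quantum-computer program), with the 2-qubit basis indexed as 2*q0 + q1.

```python
import math

H_AMP = 1 / math.sqrt(2)

def apply_h(state, qubit):
    """Apply a Hadamard gate to one qubit of a 2-qubit real state vector."""
    out = [0.0] * 4
    shift = 1 - qubit                  # qubit 0 is the high bit of the index
    for i, amp in enumerate(state):
        b = (i >> shift) & 1
        i0 = i & ~(1 << shift)         # same basis state with this qubit = 0
        i1 = i | (1 << shift)          # ... with this qubit = 1
        out[i0] += H_AMP * amp
        out[i1] += H_AMP * amp * (-1 if b else 1)
    return out

def apply_oracle(state, f):
    """Black-box oracle U_f: |x, y> -> |x, y XOR f(x)>."""
    out = [0.0] * 4
    for i, amp in enumerate(state):
        x, y = i >> 1, i & 1
        out[(x << 1) | (y ^ f(x))] += amp
    return out

def deutsch(f):
    """One oracle query decides: returns 0 if f is constant, 1 if balanced."""
    state = [0.0] * 4
    state[0b01] = 1.0                  # start in |0>|1>
    state = apply_h(state, 0)
    state = apply_h(state, 1)
    state = apply_oracle(state, f)
    state = apply_h(state, 0)
    prob_x1 = state[0b10] ** 2 + state[0b11] ** 2  # P(qubit 0 measures 1)
    return 1 if prob_x1 > 0.5 else 0

print(deutsch(lambda x: 0))  # constant -> 0
print(deutsch(lambda x: x))  # balanced -> 1
```

The oracle is passed in as an opaque function, mirroring the paper's point that the main computer need not know the oracle's internals, only how to query it.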

17.
Algorithms using a 4-pixel Feistel structure and chaotic systems have been shown to resolve the security problems caused by the large data capacity of, and high correlation among pixels in, color images. In this paper, a fast color image encryption algorithm based on a modified 4-pixel Feistel structure and multiple chaotic maps is proposed to improve the efficiency of this type of algorithm. Two methods are used. First, a simple round function based on a piecewise linear function and the tent map reduces the computational cost of each iteration. Second, the modified 4-pixel Feistel structure reduces the number of rounds by changing the twist direction securely, helping the algorithm proceed efficiently. A large number of simulation experiments prove its security performance, and additional analysis with a corresponding speed simulation shows that these two methods double the speed of the proposed algorithm (0.15 s for a 256×256 color image) relative to an algorithm with a similar structure (0.37 s for the same size image). The method is also faster than other recently proposed algorithms.
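The two ingredients named in the abstract, a tent-map keystream and a Feistel round over a 4-pixel block, can be sketched as follows. The round function here is an invented toy (modular addition with a key byte, then XOR), not the paper's piecewise-linear construction; the point is the Feistel property that the round is invertible even though the round function itself is not.

```python
def tent_map(x, mu=1.99):
    """One iteration of the tent map; its orbit serves as a chaotic keystream."""
    return mu * x if x < 0.5 else mu * (1.0 - x)

def keystream(seed, n):
    """Quantize n tent-map iterates into key bytes."""
    x, ks = seed, []
    for _ in range(n):
        x = tent_map(x)
        ks.append(int(x * 255) & 0xFF)
    return ks

def feistel_round(block, key_byte):
    """One Feistel-style round on a 4-pixel block (two 2-pixel halves):
    mix the left half with F(right half, key), then swap halves."""
    l, r = block[:2], block[2:]
    f = [(p + key_byte) % 256 for p in r]      # toy keyed round function F
    new_l = [a ^ b for a, b in zip(l, f)]
    return r + new_l

def feistel_round_inv(block, key_byte):
    """Invert one round: recompute F from the (unmodified) right half."""
    r, new_l = block[:2], block[2:]
    f = [(p + key_byte) % 256 for p in r]
    l = [a ^ b for a, b in zip(new_l, f)]
    return l + r

block = [10, 20, 30, 40]                       # four pixel values
key = keystream(0.37, 1)[0]
enc = feistel_round(block, key)
print(enc, feistel_round_inv(enc, key))        # second value round-trips to the input
```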

18.
Despite the increasing number of recently solved membrane protein structures, coverage of membrane protein fold space remains relatively sparse. This necessitates the use of computational strategies to investigate membrane protein structure, allowing us to further our understanding of how membrane proteins carry out their diverse range of functions, while aiding the development of novel predictive tools with which to probe uncharacterised folds. Analysis of known structures, the application of machine learning techniques, molecular dynamics simulations and protein structure prediction have enabled significant advances to be made in the field of membrane protein research. In this communication, the key bioinformatic methods that allow the characterisation of membrane proteins are reviewed, the tools available for the structural analysis of membrane proteins are presented, and the contribution these tools have made to expanding our understanding of membrane protein structure, function and stability is discussed.

19.
Computer science and biology have enjoyed a long and fruitful relationship for decades. Biologists rely on computational methods to analyze and integrate large data sets, while several computational methods were inspired by the high-level design principles of biological systems. Recently, these two directions have been converging. In this review, we argue that thinking computationally about biological processes may lead to more accurate models, which in turn can be used to improve the design of algorithms. We discuss the similar mechanisms and requirements shared by computational and biological processes and then present several recent studies that apply this joint analysis strategy to problems related to coordination, network analysis, and tracking and vision. We also discuss additional biological processes that can be studied in a similar manner and link them to potential computational problems. With the rapid accumulation of data detailing the inner workings of biological systems, we expect this direction of coupling biological and computational studies to greatly expand in the future.

20.
A multi-algorithm, multi-timescale method for cell simulation   (Cited by: 3; self-citations: 0; citations by others: 3)
Motivation: Many important problems in cell biology require the dense nonlinear interactions between functional modules to be considered. The importance of computer simulation in understanding cellular processes is now widely accepted, and a variety of simulation algorithms useful for studying certain subsystems have been designed. Many of these are already widely used, and a large number of models built on these existing formalisms are available. A significant computational challenge is how to integrate such sub-cellular models, running on different types of algorithms, to construct higher-order models.
Results: A modular, object-oriented simulation meta-algorithm based on a discrete-event scheduler and Hermite polynomial interpolation has been developed and implemented. It is shown that this new method can efficiently handle many components driven by different algorithms on different timescales. The utility of the simulation framework is demonstrated further with a 'composite' heat-shock response model that combines the Gillespie-Gibson stochastic algorithm with deterministic differential equations. Dramatic improvements in performance were obtained without significant loss of accuracy. A multi-timescale demonstration of coupled harmonic oscillators is also shown.
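The discrete-event scheduling idea at the core of such a meta-algorithm can be sketched with a priority queue: each sub-model advances with its own timestep (standing in for its own algorithm), and the scheduler always steps the module whose next event time is earliest. This is a minimal sketch of the multi-timescale coordination only; it omits the Hermite interpolation the paper uses to exchange state between modules.

```python
import heapq

def run_composite(models, t_end):
    """Step heterogeneous sub-models in global time order.

    models: name -> (timestep, state list).  Each pop advances one
    sub-model by its own timestep; the heap keeps the modules'
    different timescales globally ordered."""
    queue = [(0.0, name) for name in models]
    heapq.heapify(queue)
    log = []
    while queue:
        t, name = heapq.heappop(queue)
        if t >= t_end:
            continue                        # past the horizon: retire this module
        step, state = models[name]
        state.append(t)                     # a real sub-model would update here
        heapq.heappush(queue, (t + step, name))
        log.append((round(t, 10), name))
    return log

models = {
    "stochastic": (0.5, []),                # coarse-timescale stochastic module
    "ode":        (0.2, []),                # fine-timescale deterministic module
}
print(run_composite(models, 1.0))
```

Note that the fine-timescale module fires more often between the coarse module's events, which is exactly the interleaving the meta-algorithm must manage.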
