Similar Articles
A total of 20 similar articles were found.
1.

Background  

Recent advances in experimental and computational technologies have fueled the development of many sophisticated bioinformatics programs. The correctness of such programs is crucial, as incorrectly computed results may lead to wrong biological conclusions or misguide downstream experimentation. Common software testing procedures involve executing the target program with a set of test inputs and then verifying the correctness of the test outputs. However, due to the complexity of many bioinformatics programs, it is often difficult to verify the correctness of the test outputs. Therefore, our ability to perform systematic software testing is greatly hindered.
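As a minimal illustration of the conventional procedure described above (a hypothetical example, not taken from the paper), the sketch below tests a toy reverse-complement routine against a hand-verified expected output; the difficulty the authors point to arises precisely when no such trusted expected output is available.

```python
# Minimal sketch of conventional output verification (hypothetical example,
# not from the paper): the test works only because a trusted "oracle"
# output is known in advance.

def reverse_complement(seq: str) -> str:
    """Toy bioinformatics routine under test."""
    table = str.maketrans("ACGT", "TGCA")
    return seq.translate(table)[::-1]

def test_reverse_complement():
    # Hand-verified input/output pair acts as the test oracle.
    assert reverse_complement("ATGCC") == "GGCAT"

if __name__ == "__main__":
    test_reverse_complement()
    print("test passed")
```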

2.
Two programs have been written in BASIC as a teaching aid for instruction of HNC, HND, and degree students in a strategy of experimentation, taking enzyme kinetics as a particular example. Little prior knowledge is required to use the programs. One of the programs (ENZY) simulates the action of a non-allosteric enzyme as (a) a simple case with or without inhibitors, including substrate inhibition, and (b) a two-substrate case with either random-order or ping-pong kinetics. The other program (ALLO) simulates the action of an allosteric enzyme with or without activators, inhibitors, or both. A detailed example of an investigation using ALLO is given.
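The original programs are in BASIC and are not reproduced here; the following Python sketch only illustrates the kind of simple-case simulation ENZY performs, using the standard Michaelis-Menten rate law with an optional competitive inhibitor (all parameter values are arbitrary).

```python
import numpy as np

def michaelis_menten_rate(s, vmax, km, i=0.0, ki=1.0):
    """Initial rate for a non-allosteric enzyme with an optional
    competitive inhibitor: v = Vmax*S / (Km*(1 + I/Ki) + S)."""
    return vmax * s / (km * (1.0 + i / ki) + s)

substrate = np.linspace(0.1, 10.0, 8)   # arbitrary substrate concentrations
for s in substrate:
    v0 = michaelis_menten_rate(s, vmax=1.0, km=2.0)
    v1 = michaelis_menten_rate(s, vmax=1.0, km=2.0, i=1.0, ki=0.5)
    print(f"S={s:5.2f}  v(no inhibitor)={v0:.3f}  v(competitive I)={v1:.3f}")
```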

3.
GMAP: a genomic mapping and alignment program for mRNA and EST sequences
MOTIVATION: We introduce GMAP, a standalone program for mapping and aligning cDNA sequences to a genome. The program maps and aligns a single sequence with minimal startup time and memory requirements, and provides fast batch processing of large sequence sets. The program generates accurate gene structures, even in the presence of substantial polymorphisms and sequence errors, without using probabilistic splice site models. Methodology underlying the program includes a minimal sampling strategy for genomic mapping, oligomer chaining for approximate alignment, sandwich DP for splice site detection, and microexon identification with statistical significance testing. RESULTS: On a set of human messenger RNAs with random mutations at a 1 and 3% rate, GMAP identified all splice sites accurately in over 99.3% of the sequences, which was one-tenth the error rate of existing programs. On a large set of human expressed sequence tags, GMAP provided higher-quality alignments more often than blat did. On a set of Arabidopsis cDNAs, GMAP performed comparably with GeneSeqer. In these experiments, GMAP demonstrated a several-fold increase in speed over existing programs. AVAILABILITY: Source code for gmap and associated programs is available at http://www.gene.com/share/gmap SUPPLEMENTARY INFORMATION: http://www.gene.com/share/gmap.
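GMAP's oligomer chaining and sandwich DP are considerably more elaborate than can be shown here; the sketch below illustrates only the basic idea of seeding a cDNA-to-genome mapping with an oligomer (k-mer) index, using toy sequences and hypothetical helper names.

```python
from collections import defaultdict

def build_oligomer_index(genome: str, k: int = 8):
    """Map every k-mer (oligomer) in the genome to its start positions."""
    index = defaultdict(list)
    for pos in range(len(genome) - k + 1):
        index[genome[pos:pos + k]].append(pos)
    return index

def seed_hits(cdna: str, index, k: int = 8):
    """Return (cdna_offset, genome_position) seed pairs for later chaining."""
    hits = []
    for off in range(len(cdna) - k + 1):
        for pos in index.get(cdna[off:off + k], []):
            hits.append((off, pos))
    return hits

genome = "ACGTACGTTTGACCGTAGCTAGCTAACGGT"   # toy genome
cdna = "GACCGTAGCTAGC"                      # toy cDNA fragment
print(seed_hits(cdna, build_oligomer_index(genome)))
```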

4.
Computational models complement laboratory experimentation for efficient identification of MHC-binding peptides and T-cell epitopes. Methods for prediction of MHC-binding peptides include binding motifs, quantitative matrices, artificial neural networks, hidden Markov models, and molecular modelling. Models derived by these methods have been successfully used for prediction of T-cell epitopes in cancer, autoimmunity, infectious disease, and allergy. For maximum benefit, the use of computer models must be treated as experiments analogous to standard laboratory procedures and performed according to strict standards. This requires careful selection of data for model building, and adequate testing and validation. A range of web-based databases and MHC-binding prediction programs are available. Although some available prediction programs for particular MHC alleles have reasonable accuracy, there is no guarantee that all models produce good quality predictions. In this article, we present and discuss a framework for modelling, testing, and applications of computational methods used in predictions of T-cell epitopes.
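As a minimal illustration of the quantitative-matrix approach mentioned above (not any published matrix), the sketch below scores a 9-mer peptide by summing position-specific weights; the weights are arbitrary placeholders.

```python
# Minimal quantitative-matrix (position-specific scoring) sketch.
# The weights below are arbitrary placeholders, not a real MHC matrix.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
PEPTIDE_LENGTH = 9

# matrix[position][amino acid] -> contribution to the binding score
matrix = [{aa: 0.0 for aa in AMINO_ACIDS} for _ in range(PEPTIDE_LENGTH)]
matrix[1]["L"] = 2.0   # hypothetical anchor preference at peptide position 2
matrix[8]["V"] = 1.5   # hypothetical anchor preference at peptide position 9

def score_peptide(peptide: str) -> float:
    """Sum the position-specific weights over the peptide."""
    assert len(peptide) == PEPTIDE_LENGTH
    return sum(matrix[i][aa] for i, aa in enumerate(peptide))

print(score_peptide("ALAKAAAAV"))   # toy 9-mer -> 3.5 with the weights above
```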

5.
Structural alignment of proteins is widely used in various fields of structural biology. To further improve alignment quality, we describe an algorithm for structural alignment based on text modelling techniques. The technique first superimposes the secondary structure elements of two proteins and then encodes each 3D structure as a sequence over a structural alphabet. These sequences are used by a step-by-step sequence alignment procedure to align the two protein structures. A benchmark test was run on a set of 200 non-homologous proteins to evaluate the program and compare it to state-of-the-art programs, e.g. CE, SAL, TM-align and 3D-BLAST. On average, the results of all-against-all structure comparison by the program are competitive in accuracy with CE and TM-align, while the algorithm runs at a speed comparable to 3D-BLAST.
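The algorithm itself is more involved; the sketch below illustrates only the final step of aligning two structures once each has been encoded as a string over a structural alphabet, using a plain Needleman-Wunsch global alignment (the encodings shown are hypothetical).

```python
def global_align(a: str, b: str, match=1, mismatch=-1, gap=-2):
    """Needleman-Wunsch alignment score of two structural-alphabet strings."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,   # substitution
                           dp[i - 1][j] + gap,       # gap in b
                           dp[i][j - 1] + gap)       # gap in a
    return dp[n][m]

# Hypothetical structural-alphabet encodings of two protein fragments.
print(global_align("HHHEECCH", "HHEECCCH"))
```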

6.
Macroevolutionary and biogeographical studies commonly apply multiple models to test state-dependent diversification. These models track the association between states of interest along a phylogeny, although many of them do not consider whether different clades might be evolving under different evolutionary drivers. Yet, they are still commonly applied to empirical studies without careful consideration of possible lineage diversification heterogeneity along the phylogenetic tree. A recent biogeographic study has suggested that orogenic uplift of the southern Andes has acted as a species pump, driving diversification of the lizard family Liolaemidae (307 described species), native to temperate southern South America. Here, we argue against the Andean uplift as the main driver of evolution in this group. We show that there is a clear pattern of heterogeneous diversification in the Liolaemidae, which biases state- and environment-dependent analyses in, respectively, the GeoSSE and RPANDA programs. We show that there are two shifts to accelerated speciation rates involving two clades that have both been classified as having “Andean” distributions. We incorporated the Geographic Hidden-State Speciation and Extinction model (GeoHiSSE) to accommodate unrelated diversification shifts, and also re-analyzed the data in the RPANDA program after splitting biologically distinct clades for separate analyses, as well as including a more appropriate set of models. We demonstrate that the “Andean uplift” hypothesis is not supported when the heterogeneous diversification histories among these lizards are considered. We use the Liolaemidae as an ideal system to demonstrate the potential risks of ignoring clade-specific differences in diversification patterns in macroevolutionary studies. We also implemented simulations to show that, in agreement with previous findings, the HiSSE approach can effectively and substantially reduce the proportion of distribution-dependent models receiving the highest AIC weights in such scenarios. However, we still find a relatively high rate (15%) of distribution-dependent models receiving the highest AIC weights, and provide recommendations related to the set of models included in the analyses that reduce these rates by half. Finally, we demonstrate that trees including clades following different dependent drivers affect RPANDA analyses by producing different outcomes, ranging from partially correct models to completely misleading results. We provide recommendations for the implementation of both programs.
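Model comparison in such analyses typically rests on AIC weights; the sketch below shows the standard computation (the AIC values are placeholders, not results from the study).

```python
import math

def aic_weights(aics):
    """Akaike weights: w_i = exp(-dAIC_i / 2) / sum_j exp(-dAIC_j / 2)."""
    best = min(aics)
    rel = [math.exp(-(a - best) / 2.0) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Placeholder AIC values for, e.g., a distribution-dependent model and a
# hidden-state (GeoHiSSE-style) model; not taken from the paper.
models = {"GeoSSE": 1012.4, "GeoHiSSE": 1004.9}
for name, w in zip(models, aic_weights(list(models.values()))):
    print(f"{name}: AIC weight = {w:.3f}")
```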

7.
Biological Control, 2006, 36(3): 348-357
Economic analyses are a valuable input into the decision-making process for biological control programs. The challenge, though, is how to incorporate qualitative risk assessments of biological control programs, or the risk of nontarget effects, into mathematical economic models. A technique known as threshold cost/benefit analysis is presented, and its application is illustrated using the yellow starthistle biological control program. The results show that incorporating uncertainty into the analysis can have a significant impact on the decision to undertake a biological control program.
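As a minimal illustration of a threshold cost/benefit calculation (not the paper's model or figures), the sketch below solves for the probability of program success at which the expected present value of benefits just equals program cost.

```python
def breakeven_success_probability(annual_benefit, program_cost,
                                  discount_rate=0.05, years=20):
    """Probability of success at which the expected present value of
    benefits equals the program cost (placeholder inputs, not the
    yellow starthistle figures)."""
    pv_benefit = sum(annual_benefit / (1 + discount_rate) ** t
                     for t in range(1, years + 1))
    return program_cost / pv_benefit

p_star = breakeven_success_probability(annual_benefit=1.0e6,
                                       program_cost=4.0e6)
print(f"program breaks even if success probability exceeds {p_star:.2f}")
```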

8.
Economic framework for decision making in biological control
Economic analyses are a valuable input into the decision-making process for biological control programs. The challenge, though, is how to incorporate qualitative risk assessments of biological control programs, or the risk of nontarget effects, into mathematical economic models. A technique known as threshold cost/benefit analysis is presented, and its application is illustrated using the yellow starthistle biological control program. The results show that incorporating uncertainty into the analysis can have a significant impact on the decision to undertake a biological control program.

9.
When researchers build high-quality models of protein structure from sequence homology, it is now common to use several alternative target-template alignments. Several methods can, at least in theory, utilize information from multiple templates, and many examples of improved model quality have been reported. However, to our knowledge, thus far no study has shown that automatic inclusion of multiple alignments is guaranteed to improve models without artifacts. Here, we have carried out a systematic investigation of the potential of multiple templates for improving homology model quality. We have used test sets consisting of targets from both recent CASP experiments and a larger reference set. In addition to Modeller and Nest, a new method (Pfrag) for multiple template-based modeling is used, based on the segment-matching algorithm from Levitt's SegMod program. Our results show that all programs can produce multi-template models better than any of the single-template models, but a large part of the improvement is simply due to extension of the models. Most of the remaining improved cases were produced by Modeller. The most important factor is the existence of high-quality single-sequence input alignments. Because of the existence of models that are worse than any of the top single-template models, the average model quality does not improve significantly. However, by ranking models with a model quality assessment program such as ProQ, the average quality is improved by approximately 5% in the CASP7 test set.
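The effect of ranking described above can be illustrated with a trivial selection step; the scores below are hypothetical placeholders, not ProQ output.

```python
# Hypothetical per-target model scores (stand-ins for quality-assessment
# output such as ProQ); values are placeholders, not from the study.
candidates = {
    "target1": {"single_template": 0.62, "multi_template": 0.58},
    "target2": {"single_template": 0.55, "multi_template": 0.66},
}

def pick_best(models: dict) -> float:
    """Keep the model ranked highest by the assessment score."""
    return max(models.values())

ranked_mean = sum(pick_best(m) for m in candidates.values()) / len(candidates)
print(f"mean model quality after ranking: {ranked_mean:.2f}")
```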

10.
Abstract

The Louisiana and Texas Rigs-to-Reefs programs enjoy widespread public, industry, and government support and have become models for similar programs around the world. Louisiana’s Rigs-to-Reefs program is the largest in the world, and since its inception in 1986 about 363 oil and gas platforms have been donated, or on average about 12 structures per year. Texas’s Rigs-to-Reefs program started in 1990, and since then about 154 structures have been donated, or about six structures per year. A summary update of the Louisiana and Texas reef programs is provided, along with recent changes in legislative activity. Donation trends and statistics are reviewed. The Rigs-to-Reefs programs are unlikely to see donation activity above historic levels, and both programs should start planning for a future in which the income generated from new projects diminishes.

11.
12.
FFGenerAtor 2.0 is a tool to customize the MM3 force field. It consists of two main programs: one that determines the missing parameters in the chosen structures, and one that optimizes the parameter set using a genetic algorithm. The C++ program was developed on a Linux system; all necessary software is available free of charge. The best parameter set for the chosen structures is determined without changing the original MM3 parameters. Several switches allow the properties and composition of the genetic algorithm to be changed.
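FFGenerAtor itself is a C++ package; the Python sketch below only illustrates the general shape of a genetic algorithm for fitting a small parameter set to reference values (the objective and settings are placeholders, not MM3 data).

```python
import random

# Toy objective: squared error between candidate parameters and some
# reference values (a stand-in for force-field fitting data).
REFERENCE = [1.54, 110.7, 0.25]

def fitness(params):
    return -sum((p - r) ** 2 for p, r in zip(params, REFERENCE))

def genetic_optimize(pop_size=40, generations=200, mutation=0.1):
    pop = [[random.uniform(0, 200) for _ in REFERENCE] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                 # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(REFERENCE))  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + random.gauss(0, mutation) for g in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(genetic_optimize())
```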

13.
Automatic display of RNA secondary structures

14.
Numerous coastal and estuarine management programs around the world are developing strategies for climate change and priorities for climate change adaptation. A multi-state work group collaborated with scientists, researchers, resource managers and non-governmental organizations to develop a monitoring program that would provide warning of climate change impacts to the Long Island Sound estuarine and coastal ecosystems. The goal of this program was to facilitate timely management decisions and adaptation responses to climate change impacts. A novel approach is described for strategic planning that combines available regional-scale predictions and climate drivers (top down) with local monitoring information (bottom up) to identify candidate sentinels of climate change. Using this approach, 37 candidate sentinels of climate change were identified, as well as a suite of core abiotic parameters that are drivers of environmental change. A process for prioritizing sentinels was developed; it identified six high-priority sentinels for inclusion in pilot-scale monitoring programs. A monitoring strategy and an online sentinel data clearinghouse were developed. The work and processes presented here are meant to serve as a guide to other coastal and estuarine management programs seeking to establish a targeted monitoring program for climate change and to provide a set of “lessons learned.”

15.
16.
Microcosm studies of ecological processes have been criticized for being unrealistic. However, since lack of realism is inherent to all experimental science, if lack of realism invalidates microcosm models of ecological processes, then such lack of realism must either also invalidate much of the rest of experimental ecology, or its force with respect to microcosm studies must derive from some other limitation of microcosm apparatus. We believe that the logic of the microcosm program for ecological research has been misunderstood. Here, we respond to the criticism that microcosm studies play at most a heuristic role in ecology with a new account of scientific experimentation developed specifically with ecology and other environmental sciences in mind. Central to our account are the concepts of model-based reasoning and analogical inference. We find that microcosm studies are sound when they serve as models for nature and when certain properties, referred to as the essential properties, are in positive analogy. By extension, our account also justifies numerous other kinds of ecological experimentation. These results are important because reliable causal accounts of ecological processes are necessary for sound application of ecological theory to conservation and environmental science. A severe sensitivity to reliable representation of causes is the chief virtue of the microcosm approach.

17.
Plains Anthropologist, 2013, 58(49): 219-228
Abstract

Drawing table methods of mapping artifacts prohibit extensive experimentation with artifact distribution patterns. A digital computer may be used to map efficiently and economically. A program for plotting artifacts by geological levels is presented. Preparation of the data and use of the program are explained.
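The original program targeted a mainframe plotter; the sketch below shows the equivalent idea with modern tools, plotting artifact coordinates grouped by geological level (the coordinates are made up).

```python
import matplotlib.pyplot as plt

# Made-up artifact coordinates keyed by geological level.
levels = {
    "Level 1": [(2.1, 3.4), (2.8, 3.0), (3.5, 4.2)],
    "Level 2": [(1.0, 1.2), (1.6, 2.1), (2.2, 1.8)],
}

for level, points in levels.items():
    xs, ys = zip(*points)
    plt.scatter(xs, ys, label=level)   # one symbol per geological level

plt.xlabel("East (m)")
plt.ylabel("North (m)")
plt.legend()
plt.savefig("artifact_map.png")
```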

18.
A study was made to test and compare the behavior of a standard non-linear regression program (BMDP3R) in fitting data from six classical least-squares problems. The use of three program control parameters is discussed, and four measures of regression failure are used to provide a quantitative reference for success. Recommendations are given to aid the user of packaged programs in the parameter estimation of non-linear regression models.
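BMDP3R is a legacy statistical package; the sketch below shows the same kind of fit with scipy.optimize.curve_fit on a classic exponential-decay model, using synthetic data and an explicit starting-value vector in place of the program's control parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    """Classic non-linear test model: a * exp(-b * x) + c."""
    return a * np.exp(-b * x) + c

# Synthetic data standing in for one of the classical test problems.
rng = np.random.default_rng(0)
x = np.linspace(0, 4, 30)
y = model(x, 2.5, 1.3, 0.5) + 0.05 * rng.standard_normal(x.size)

# p0 plays the role of the starting-value control parameter.
popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0, 1.0])
print("fitted parameters:", popt)
print("standard errors:", np.sqrt(np.diag(pcov)))
```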

19.
Drug Guru (drug generation using rules) is a new web-based computer software program for medicinal chemists that applies a set of transformations, that is, rules, to an input structure. The transformations correspond to medicinal chemistry design rules of thumb taken from the historical lore of drug discovery programs. The output of the program is a list of target analogs that can be evaluated for possible future synthesis. A discussion of the features of the program is followed by an example of the software applied to sildenafil (Viagra) to generate ideas for target analogs for phosphodiesterase inhibition. Comparison with other computer-assisted drug design software is given.
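Drug Guru's rule set is not reproduced here; the sketch below only illustrates the general mechanism of applying a transformation rule to an input structure, using RDKit reaction SMARTS with a hypothetical "aryl methyl ether to aryl methylamine" rule on a toy molecule rather than sildenafil.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Hypothetical transformation rule: aromatic methyl ether -> methylamine.
# This is an illustrative rule, not one of Drug Guru's actual transformations.
rule = AllChem.ReactionFromSmarts("[c:1][OX2][CH3]>>[c:1][NH][CH3]")

parent = Chem.MolFromSmiles("COc1ccc(C(=O)N)cc1")   # toy input structure
for products in rule.RunReactants((parent,)):
    analog = products[0]
    Chem.SanitizeMol(analog)
    print(Chem.MolToSmiles(analog))                  # candidate analog
```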

20.
MOTIVATION: In recent years there has been increased interest in producing large and accurate phylogenetic trees using statistical approaches. However, for a large number of taxa, it is not feasible to construct large and accurate trees using only a single processor. A number of specialized parallel programs have been produced in an attempt to address the huge computational requirements of maximum likelihood. We express a number of concerns about the current set of parallel phylogenetic programs, which are currently severely limiting the widespread availability and use of parallel computing in maximum likelihood-based phylogenetic analysis. RESULTS: We have identified the suitability of phylogenetic analysis for large-scale heterogeneous distributed computing. We have completed a distributed and fully cross-platform phylogenetic tree building program called distributed phylogeny reconstruction by maximum likelihood. It uses an already proven maximum likelihood-based tree building algorithm and a popular phylogenetic analysis library for all its likelihood calculations. It offers one of the most extensive sets of DNA substitution models currently available. We are the first, to our knowledge, to report the completion of a distributed phylogenetic tree building program that can achieve near-linear speedup while only using the idle clock cycles of machines. For those in an academic or corporate environment with hundreds of idle desktop machines, we have shown how distributed computing can deliver a 'free' ML supercomputer.
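The program's own distribution layer is far more sophisticated; the sketch below only illustrates the embarrassingly parallel pattern it exploits, farming independent likelihood evaluations out to worker processes with Python's standard library (the likelihood function is a placeholder).

```python
from concurrent.futures import ProcessPoolExecutor
import math
import random

def tree_log_likelihood(tree_id: int) -> float:
    """Placeholder for an expensive per-tree likelihood evaluation."""
    random.seed(tree_id)
    return sum(math.log(random.uniform(0.1, 1.0)) for _ in range(100_000))

if __name__ == "__main__":
    candidate_trees = range(32)             # stand-ins for candidate topologies
    with ProcessPoolExecutor() as pool:     # uses idle local cores
        scores = list(pool.map(tree_log_likelihood, candidate_trees))
    best = max(range(len(scores)), key=scores.__getitem__)
    print(f"best candidate tree: {best} (logL proxy {scores[best]:.1f})")
```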
