Similar Literature

20 similar documents found.
1.

Background

Cartoon-style illustrative renderings of proteins can help clarify structural features that are obscured by space filling or balls and sticks style models, and recent advances in programmable graphics cards offer many new opportunities for improving illustrative renderings.

Results

The ProteinShader program, a new tool for macromolecular visualization, uses information from Protein Data Bank files to produce illustrative renderings of proteins that approximate what an artist might create by hand using pen and ink. A combination of Hermite and spherical linear interpolation is used to draw smooth, gradually rotating three-dimensional tubes and ribbons with a repeating pattern of texture coordinates, which allows the application of texture mapping, real-time halftoning, and smooth edge lines. This free platform-independent open-source program is written primarily in Java, but also makes extensive use of the OpenGL Shading Language to modify the graphics pipeline.
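The combination of Hermite and spherical linear interpolation described above can be sketched as follows. This is a hypothetical illustration in Python, not ProteinShader's actual Java/GLSL code: positions along the tube are blended with a cubic Hermite curve, while cross-section orientations (as unit quaternions) are blended with slerp so the ribbon rotates gradually.

```python
import math

def hermite(p0, p1, m0, m1, t):
    """Cubic Hermite interpolation of 3-D points p0, p1 with tangents m0, m1."""
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return tuple(h00 * a + h10 * b + h01 * c + h11 * d
                 for a, b, c, d in zip(p0, m0, p1, m1))

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0:                      # take the shorter arc
        q1, dot = tuple(-c for c in q1), -dot
    if dot > 0.9995:                 # nearly parallel: lerp and renormalize
        q = tuple(a + t * (b - a) for a, b in zip(q0, q1))
        n = math.sqrt(sum(c * c for c in q))
        return tuple(c / n for c in q)
    theta = math.acos(dot)
    w0 = math.sin((1 - t) * theta) / math.sin(theta)
    w1 = math.sin(t * theta) / math.sin(theta)
    return tuple(w0 * a + w1 * b for a, b in zip(q0, q1))
```

Sampling both functions at the same parameter t yields a smooth frame (position plus orientation) at each step along the backbone, which is the geometric basis for laying down repeating texture coordinates.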

Conclusion

By programming the graphics processing unit, ProteinShader is able to produce high-quality images and illustrative rendering effects in real time. The main feature that distinguishes ProteinShader from other free molecular visualization tools is its use of texture mapping techniques that allow two-dimensional images to be mapped onto the curved three-dimensional surfaces of ribbons and tubes with minimal distortion of the images.

2.

Background

We present a way to compute the minimal semi-positive invariants of a Petri net representing a biological reaction system as the resolution of a constraint satisfaction problem. The use of Petri nets to manipulate Systems Biology models and to make a variety of tools available is long established, and analyses based on invariant computation for biological models have recently become more and more frequent, for instance in the context of module decomposition.

Results

In our case, this analysis brings both qualitative and quantitative information on the models, in the form of conservation laws, consistency checking, etc. thanks to finite domain constraint programming. It is noticeable that some of the most recent optimizations of standard invariant computation techniques in Petri nets correspond to well-known techniques in constraint solving, like symmetry-breaking. Moreover, we show that the simple and natural encoding proposed is not only efficient but also flexible enough to encompass sub/sur-invariants, siphons/traps, etc., i.e., other Petri net structural properties that lead to supplementary insight on the dynamics of the biochemical system under study.
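The core object here can be sketched concretely. The following is a minimal finite-domain illustration, not the BIOCHAM/Nicotine implementation: a semi-positive P-invariant is a nonzero vector x ≥ 0 of place weights with x·C = 0, where C is the place-by-transition incidence matrix; a brute-force search over a small bounded domain plays the role of the constraint solver, followed by a minimality filter.

```python
from itertools import product

def semi_positive_invariants(C, bound=2):
    """Return all nonzero x in {0..bound}^places with x.C == 0 (C: places x transitions)."""
    places, transitions = len(C), len(C[0])
    sols = []
    for x in product(range(bound + 1), repeat=places):
        if any(x) and all(sum(x[p] * C[p][t] for p in range(places)) == 0
                          for t in range(transitions)):
            sols.append(x)
    return sols

def minimal(sols):
    """Keep only invariants not dominated componentwise by another solution."""
    return [x for x in sols
            if not any(y != x and all(a <= b for a, b in zip(y, x)) for y in sols)]

# Toy reversible binding A + B <-> AB: rows = places (A, B, AB), cols = (bind, unbind)
C = [[-1,  1],
     [-1,  1],
     [ 1, -1]]
```

For this toy network, the minimal semi-positive invariants are (1, 0, 1) and (0, 1, 1), i.e., the conservation laws for total A and total B; real solvers obtain the same answer by constraint propagation rather than enumeration.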

Conclusions

A simple implementation based on GNU-Prolog's finite domain solver, and including symmetry detection and breaking, was incorporated into the BIOCHAM modelling environment and in the independent tool Nicotine. Some illustrative examples and benchmarks are provided.

3.

Purpose

Simulation plays a critical role in the design of products, materials, and manufacturing processes. However, there are gaps in the simulation tools used by industry to provide reliable results from which effective decisions can be made about environmental impacts at different stages of product life cycle. A holistic and systems approach to predicting impacts via sustainable manufacturing planning and simulation (SMPS) is presented in an effort to incorporate sustainability aspects across a product life cycle.

Methods

Increasingly, simulation is replacing physical tests to ensure product reliability and quality, thereby facilitating steady reductions in design and manufacturing cycle times. For SMPS, we propose to extend an earlier framework developed in the Systems Integration for Manufacturing Applications (SIMA) program at the National Institute of Standards and Technology. The SMPS framework has four phases: design product, engineer manufacturing, engineer production system, and produce products. Each phase has its own inputs, outputs, phase-level activities, and sustainability-related data, metrics, and tools.

Results and discussion

An automotive manufacturing scenario is presented that highlights the potential of the SMPS framework to facilitate decision making across the different phases of the product life cycle. Various research opportunities for the SMPS framework and the corresponding information models are discussed.

Conclusions

The SMPS framework built on the SIMA model shows potential for aiding sustainable product development.

4.

Background

The creation and modification of genome-scale metabolic models is a task that requires specialized software tools. While these are available, subsequently running or visualizing a model often relies on disjoint code, which adds additional actions to the analysis routine and, in our experience, renders these applications suboptimal for routine use by (systems) biologists.

Results

The Flux Analysis and Modeling Environment (FAME) is the first web-based modeling tool that combines the tasks of creating, editing, running, and analyzing/visualizing stoichiometric models into a single program. Analysis results can be automatically superimposed on familiar KEGG-like maps. FAME is written in PHP and uses the Python-based PySCeS-CBM for its linear solving capabilities. It comes with a comprehensive manual and a quick-start tutorial, and can be accessed online at http://f-a-m-e.org/.

Conclusions

With FAME, we present the community with an open source, user-friendly, web-based "one stop shop" for stoichiometric modeling. We expect the application will be of substantial use to investigators and educators alike.

5.

Background

A collection of over 20,000 Salmonella typhimurium LT2 mutants, sealed for four decades in agar stabs, is a unique resource for study of genetic and evolutionary changes. Previously, we reported extensive diversity among descendants including diversity in RpoS and catalase synthesis, diversity in genome size, protein content, and reversion from auxotrophy to prototrophy.

Results

Using a standardized phenotype microarray method, 95 catabolic reactions were scored in each of three plates, in wells containing specific carbon and nitrogen substrates. Extensive and variable losses, and a few gains, of catabolic functions were observed.

Conclusion

While the phenotype microarray did not reveal a distinct pattern of mutation among the archival isolates, the data did confirm that various isolates have used multiple strategies to survive in the archival environment. Data from the MacConkey plates verified the changes in carbohydrate metabolism observed in the Biolog® system.

6.

Background

Synthetic biology brings together concepts and techniques from engineering and biology. In this field, computer-aided design (CAD) is necessary in order to bridge the gap between computational modeling and biological data. Using a CAD application, it would be possible to construct models using available biological "parts" and directly generate the DNA sequence that represents the model, thus increasing the efficiency of design and construction of synthetic networks.

Results

An application named TinkerCell has been developed in order to serve as a CAD tool for synthetic biology. TinkerCell is a visual modeling tool that supports a hierarchy of biological parts. Each part in this hierarchy consists of a set of attributes that define the part, such as sequence or rate constants. Models that are constructed using these parts can be analyzed using various third-party C and Python programs that are hosted by TinkerCell via an extensive C and Python application programming interface (API). TinkerCell supports the notion of modules, which are networks with interfaces. Such modules can be connected to each other, forming larger modular networks. TinkerCell is a free and open-source project under the Berkeley Software Distribution license. Downloads, documentation, and tutorials are available at http://www.tinkercell.com.

Conclusion

An ideal CAD application for engineering biological systems would provide features such as: building and simulating networks, analyzing robustness of networks, and searching databases for components that meet the design criteria. At the current state of synthetic biology, there are no established methods for measuring robustness or identifying components that fit a design. The same is true for databases of biological parts. TinkerCell's flexible modeling framework allows it to cope with changes in the field. Such changes may involve the way parts are characterized or the way synthetic networks are modeled and analyzed computationally. TinkerCell can readily accept third-party algorithms, allowing it to serve as a platform for testing different methods relevant to synthetic biology.

7.
8.
Xia F, Dou Y, Lei G, Tan Y. BMC Bioinformatics. 2011;12(1):1-9.

Background

Orthology analysis is an important part of data analysis in many areas of bioinformatics such as comparative genomics and molecular phylogenetics. The ever-increasing flood of sequence data, and hence the rapidly increasing number of genomes that can be compared simultaneously, calls for efficient software tools, as brute-force approaches with quadratic memory requirements become infeasible in practice. The rapid pace at which new data become available, furthermore, makes it desirable to compute genome-wide orthology relations for a given dataset rather than relying on relations listed in databases.

Results

The program Proteinortho described here is a stand-alone tool that is geared towards large datasets and makes use of distributed computing techniques when run on multi-core hardware. It implements an extended version of the reciprocal best alignment heuristic. We apply Proteinortho to compute orthologous proteins in the complete set of all 717 eubacterial genomes available at NCBI at the beginning of 2009. We identified thirty proteins present in 99% of all bacterial proteomes.
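The basic reciprocal best alignment (best-hit) heuristic that Proteinortho extends can be sketched as follows. This is a hedged toy illustration, not Proteinortho's code: given pairwise alignment scores between genes of two genomes (hypothetical data, not real BLAST output), a pair is called orthologous when each gene is the other's best-scoring hit.

```python
def reciprocal_best_hits(scores):
    """Return (a, b) pairs that are mutually each other's best-scoring hit.

    scores: dict mapping (gene_in_genome_A, gene_in_genome_B) -> alignment score.
    """
    best_ab, best_ba = {}, {}
    for (a, b), s in scores.items():
        if s > best_ab.get(a, (None, float("-inf")))[1]:
            best_ab[a] = (b, s)          # best hit of a in genome B
        if s > best_ba.get(b, (None, float("-inf")))[1]:
            best_ba[b] = (a, s)          # best hit of b in genome A
    return {(a, b) for a, (b, _) in best_ab.items()
            if best_ba.get(b, (None,))[0] == a}
```

Extending this to hundreds of genomes is what motivates the distributed, memory-frugal design the abstract describes: the all-against-all score table, trivial here, is the part that explodes quadratically.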

Conclusions

Proteinortho significantly reduces the required amount of memory for orthology analysis compared to existing tools, allowing such computations to be performed on off-the-shelf hardware.

9.

Background

The goal of DNA barcoding is to develop a species-specific sequence library for all eukaryotes. A 650 bp fragment of the cytochrome c oxidase 1 (CO1) gene has been used successfully for species-level identification in several animal groups. It may be difficult in practice, however, to retrieve a 650 bp fragment from archival specimens (because of DNA degradation) or from environmental samples (where universal primers are needed).

Results

We performed a bioinformatics analysis of all CO1 barcode sequences from GenBank and calculated the probability of obtaining species-specific barcodes for fragments of varied sizes. This analysis established the potential of much smaller fragments, mini-barcodes, for identifying unknown specimens. We then developed a universal primer set for the amplification of mini-barcodes and successfully tested its utility on a comprehensive set of taxa from all major eukaryotic groups, as well as on archival specimens.
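The fragment-length analysis can be sketched in miniature. This is a simplified, hypothetical illustration (toy sequences, not the GenBank CO1 data): for a given fragment length, compute the fraction of species whose sub-barcode is not shared with any other species.

```python
def species_specific_fraction(barcodes, frag_len, start=0):
    """Fraction of species with a unique fragment of length frag_len.

    barcodes: dict mapping species name -> full barcode sequence.
    """
    frags = {sp: seq[start:start + frag_len] for sp, seq in barcodes.items()}
    counts = {}
    for f in frags.values():
        counts[f] = counts.get(f, 0) + 1     # how many species share each fragment
    return sum(1 for f in frags.values() if counts[f] == 1) / len(frags)
```

Sweeping frag_len from short to long shows the trade-off the abstract exploits: specificity rises with length, and the question is how short a mini-barcode can be while remaining diagnostic.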

Conclusion

In this study we address the important issue of the minimum amount of sequence information required for identifying species in DNA barcoding. We establish a novel approach based on a much shorter barcode sequence and demonstrate its effectiveness on archival specimens. This approach will significantly broaden the application of DNA barcoding in biodiversity studies.

10.

Background

As the demands for competency-based education grow, the need for standards-based tools to allow for publishing and discovery of competency-based learning content is more pressing. This project focused on developing federated discovery services for competency-based medical e-learning content.

Methods

We built a tool suite for authoring and discovery of medical e-learning metadata. The end-user usability of the tool suite was evaluated through a web-based survey.

Results

The suite, implemented as an open-source system, was evaluated to identify areas for improvement.

Conclusion

The MERG suite is a starting point for organizations implementing competency-based e-learning resources.

11.

Purpose

Using a long-run refinery simulation model, Bredeson et al. recently concluded that light transportation fuels have roughly the same CO2 footprint, and that any allocation scheme showing a substantial difference between gasoline and diesel CO2 intensities must be viewed with caution. The purpose of this paper is to highlight the modeling assumptions that make these conclusions inapplicable in the current oil refining context.

Methods

From an economic point of view, optimization models are more suitable than simulation tools for providing decision policies. Therefore, we used a calibrated refinery linear programming model to evaluate the impact of varying the gasoline-to-diesel production ratio on the refinery's CO2 emissions and the marginal CO2 intensity of the automotive fuels.
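To make the idea of a marginal CO2 intensity concrete, here is a toy optimization sketch (an illustrative two-unit refinery with made-up yields and emission factors, not the paper's calibrated model): total CO2 is minimized subject to gasoline and diesel demands via vertex enumeration of a tiny two-variable linear program, and gasoline's marginal CO2 intensity is estimated by a finite difference on the demand.

```python
def min_emissions(gas_demand, diesel_demand):
    """Minimize 1.0*x1 + 0.8*x2 (CO2) s.t. 0.6*x1 + 0.3*x2 >= gas_demand,
    0.3*x1 + 0.6*x2 >= diesel_demand, x1, x2 >= 0 (barrels run in units 1, 2).
    Solved by enumerating vertices (pairwise constraint intersections)."""
    cons = [(0.6, 0.3, gas_demand), (0.3, 0.6, diesel_demand),
            (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]            # a*x1 + b*x2 >= c
    best = None
    for i in range(len(cons)):
        for j in range(i + 1, len(cons)):
            (a1, b1, c1), (a2, b2, c2) = cons[i], cons[j]
            det = a1 * b2 - a2 * b1
            if abs(det) < 1e-12:
                continue
            x1 = (c1 * b2 - c2 * b1) / det
            x2 = (a1 * c2 - a2 * c1) / det
            if all(a * x1 + b * x2 >= c - 1e-9 for a, b, c in cons):
                val = 1.0 * x1 + 0.8 * x2                # total CO2 at this vertex
                best = val if best is None else min(best, val)
    return best

eps = 1e-3
marginal_gasoline = (min_emissions(60 + eps, 30) - min_emissions(60, 30)) / eps
```

Even in this toy, the marginal intensity of gasoline (about 1.67 CO2 units per barrel, the emission factor of the gasoline-efficient unit divided by its yield) differs from the average intensity, which is the distinction between marginal and average signals that the paper's argument rests on.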

Results and discussion

Contrary to Bredeson et al.'s conclusions, our results reveal that, within a calibrated optimization framework, total and per-product CO2 emissions can be affected by the gasoline-to-diesel production ratio. More precisely, in a gasoline-oriented market the marginal CO2 footprint of gasoline is significantly higher than that of diesel, while the opposite is observed in a diesel-oriented market. These two scenarios reflect, to some extent, the American and European oil refining industries, for which policy makers should adopt different per-product taxation policies.

Conclusions

Any relevant, economically grounded CO2 policy for automotive fuels should be sensitive to the environmental consequences associated with their marginal production. This is especially true in disequilibrium markets, where average and marginal reactions can differ significantly. Optimization models, whose optimal solutions are fully driven by marginal signals, show that a refinery's global and/or per-product CO2 emissions can be affected by the gasoline-to-diesel production ratio.

12.
13.
14.

Background

Although genome sequences are available for an ever-increasing number of bacterial species, the availability of facile genetic tools for physiological analysis has generally lagged substantially behind that of traditional genetic models.

Results

Here I describe the development of an improved, broad-host-range "in-out" allelic exchange vector, pCM433, which permits the generation of clean, marker-free genetic manipulations. Wild-type and mutant alleles were reciprocally exchanged at three loci in Methylobacterium extorquens AM1 in order to demonstrate the utility of pCM433.

Conclusion

The broad-host-range vector for marker-free allelic exchange described here, pCM433, has the advantages of a high-copy, general Escherichia coli replicon for easy cloning, an IncP oriT enabling conjugal transfer, an extensive set of restriction sites in its polylinker, three antibiotic markers, and sacB (encoding levansucrase) for negative selection on sucrose plates. These traits should permit pCM433 to be broadly applied across many bacterial taxa for marker-free allelic exchange, which is particularly important when multiple manipulations or more subtle genetic changes such as point mutations are desired.

15.

Background

The advent of pyrophosphate sequencing makes large volumes of sequencing data available at a lower cost than previously possible. However, the short read lengths are difficult to assemble and the large dataset is difficult to handle. During the sequencing of a virus from the tsetse fly, Glossina pallidipes, we found the need for tools to quickly search a set of reads for near-exact text matches.

Methods

A set of tools is provided to search a large data set of pyrophosphate sequence reads under a "live" CD version of Linux on a standard PC that can be used by anyone without prior knowledge of Linux and without having to install a Linux setup on the computer. The tools permit short lengths of de novo assembly, checking of existing assembled sequences, selection and display of reads from the data set and gathering counts of sequences in the reads.
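The kind of near-exact search these tools provide can be sketched in a few lines. This is a hypothetical toy scan, not the published tool set: count occurrences of a query sequence across a collection of reads, allowing a bounded number of substitutions.

```python
def count_near_matches(reads, query, max_mismatch=1):
    """Count windows across all reads that match query with <= max_mismatch
    substitutions (naive sliding-window scan; real tools would index the reads)."""
    n, hits = len(query), 0
    for read in reads:
        for i in range(len(read) - n + 1):
            if sum(a != b for a, b in zip(read[i:i + n], query)) <= max_mismatch:
                hits += 1
    return hits
```

Counts like these are what let a user check an assembly against the raw fragment data, e.g. asking how often a suspect homopolymer-adjacent motif actually occurs in the reads.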

Results

Demonstrations are given of the use of the tools to help with checking an assembly against the fragment data set; investigating homopolymer lengths, repeat regions and polymorphisms; and resolving inserted bases caused by incomplete chain extension.

Conclusion

The additional information contained in a pyrophosphate sequencing data set beyond a basic assembly is difficult to access due to a lack of tools. The set of simple tools presented here would allow anyone with basic computer skills and a standard PC to access this information.

16.

Background and aims

The rhizosphere, the soil immediately surrounding roots, provides a critical bridge for water and nutrient uptake. The rhizosphere is influenced by various forms of root–soil interactions of which mechanical deformation due to root growth and its effects on the hydraulics of the rhizosphere are the least studied. In this work, we focus on developing new experimental and numerical tools to assess these changes.

Methods

This study combines X-ray micro-tomography (XMT) with coupled numerical simulation of fluid and soil deformation in the rhizosphere. The study provides a new set of tools to mechanistically investigate root-induced rhizosphere compaction and its effect on root water uptake. The numerical simulator was tested on highly deformable soil to document its ability to handle a large degree of strain.

Results

Our experimental results indicate that measured rhizosphere compaction by roots via localized soil compaction increased the simulated water flow to the roots by 27 % as compared to an uncompacted fine-textured soil of low bulk density characteristic of seed beds or forest topsoils. This increased water flow primarily occurred due to local deformation of the soil aggregates as seen in the XMT images, which increased hydraulic conductivity of the soil. Further simulated root growth and deformation beyond that observed in the XMT images led to water uptake enhancement of ~50 % beyond that due to root diameter increase alone and demonstrated the positive benefits of root compaction in low density soils.

Conclusions

The development of numerical models to quantify the coupling of root driven compaction and fluid flow provides new tools to improve the understanding of plant water uptake, nutrient availability and agricultural efficiency. This study demonstrated that plants, particularly during early growth in highly deformable low density soils, are involved in active mechanical management of their surroundings. These modeling approaches may now be used to quantify compaction and root growth impacts in a wide range of soils.

17.

Background

High throughput techniques have generated a huge set of biological data, which are deposited in various databases. Efficient exploitation of these databases is often hampered by a lack of appropriate tools that allow easy and reliable identification of genes that lack functional characterization but are correlated with specific biological conditions (e.g. organotypic expression).

Results

We have developed a simple algorithm (DGSA = Database-dependent Gene Selection and Analysis) to identify genes with unknown functions involved in organ development concentrating on the heart. Using our approach, we identified a large number of yet uncharacterized genes, which are expressed during heart development. An initial functional characterization of genes by loss-of-function analysis employing morpholino injections into zebrafish embryos disclosed severe developmental defects indicating a decisive function of selected genes for developmental processes.

Conclusion

We conclude that DGSA is a versatile tool for database mining allowing efficient selection of uncharacterized genes for functional analysis.

18.

Purpose

Life cycle assessment (LCA) methodology is a well-established analytical method to quantify environmental impacts, which has mainly been applied to products. However, recent literature suggests that it also has potential as an analysis and design tool for processes, and stresses that one of the biggest challenges of this decade in the field of process systems engineering (PSE) is the development of tools for environmental considerations.

Method

This article attempts to give an overview of the integration of LCA methodology in the context of industrial ecology, and focuses on the use of this methodology for environmental considerations concerning process design and optimization.

Results

The review finds that LCA is often used within multi-objective optimization of processes: practitioners use LCA to obtain the inventory and inject the results into the optimization model. It also shows that most LCA studies of process analysis treat the unit processes as black boxes and build the inventory analysis on fixed operating conditions.

Conclusions

The article highlights the value of integrating PSE tools more tightly with LCA methodology in order to produce more detailed analyses. This would make it possible to optimize process operating conditions with respect to environmental impacts and to bring detailed environmental results into the process industry.

19.

Background/aim

Desmoid fibromatoses are rare, benign but locally aggressive tumors, characterized by infiltrative growth and a tendency towards local recurrence, but an inability to metastasize. Morphological diagnosis may be difficult, requiring immunohistochemistry. The aim of our study was to determine the immunohistochemical phenotypes of these tumors, to evaluate whether they are diagnostically helpful, and to define a diagnostic strategy.

Methods

Immunohistochemistry was used to examine the expression of β-catenin and APC protein in archival material derived from fourteen cases of extra-abdominal desmoid tumors. The desmoid specimens were assembled into a clinical data-linked tissue microarray.

Results

Nuclear β-catenin expression was observed in 100% of the specimens. Positive cytoplasmic staining for APC protein was found in 11 of 14 cases (78.6%), but all samples were negative for oestrogen and progesterone receptors, c-KIT and WT1. Our results regarding β-catenin and APC confirm previous findings that these proteins play a crucial role in the pathogenesis of sporadic aggressive fibromatosis.

20.

Background

Sepsis is one of the main causes of mortality and morbidity. The rapid detection of pathogens in blood of septic patients is essential for adequate antimicrobial therapy and better prognosis. This study aimed to accelerate the detection and discrimination of Gram-positive (GP) and Gram-negative (GN) bacteria and Candida species in blood culture samples by molecular methods.

Methods

The Real-GP®, -GN®, and -CAN® real-time PCR kits (M&D, Wonju, Republic of Korea) use TaqMan probes for detecting pan-GP, pan-GN, and pan-Candida species, respectively. The diagnostic performance of the real-time PCR kits was evaluated with 115 clinical isolates and 256 positive and 200 negative blood culture bottle samples, and the data were compared with results obtained from conventional blood culture.

Results

Eighty-seven reference strains and 115 clinical isolates were correctly identified with the specific probes corresponding to GP bacteria, GN bacteria, and Candida. The overall sensitivity and specificity of the real-time PCR kits with blood culture samples were 99.6% and 89.5%, respectively.
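The reported figures follow from the standard definitions, which can be stated as a one-line computation. The counts below are illustrative assumptions consistent with the stated sample sizes, not the study's actual confusion matrix (which the abstract does not give):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 255 of 256 culture-positive samples PCR-positive,
# 179 of 200 culture-negative samples PCR-negative.
sens, spec = sensitivity_specificity(tp=255, fn=1, tn=179, fp=21)
```

With these assumed counts, sensitivity rounds to 99.6% and specificity to 89.5%, matching the abstract's figures; other nearby confusion matrices are not compatible with both the percentages and the sample sizes at the same time.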

Conclusions

The Real-GP®, -GN®, and -CAN® real-time PCR kits could be useful tools for the rapid and accurate screening of bloodstream infections (BSIs).
