Similar Documents
A total of 20 similar documents were found.
1.
2.
3.
BIAS: Bioinformatics Integrated Application Software

4.
The application of computer and telecommunication technology poses serious challenges in routine diagnostic pathology. An integrated information system must provide complete data integration, fast access to patients' data, a diagnosis thesaurus labeled with standardized codes and free-text supplements, complex querying of the data content, data exchange via teleconsultation, and multilevel data protection. The growing demand for teleconsultation, which transfers large amounts of multimedia data among different pathology information systems, raises new questions in telepathology. Creating complex telematic systems in pathology requires efficient methods of software engineering and implementation. Object-oriented modeling, client-server architecture, and relational database management systems enable more compatible systems in the field of telepathology. The aim of this paper is to present a practical example of how to unify a text-based database, an image archive, and teleconsultation within an integrated telematic system, and to discuss the main conceptual questions of the information technology of telepathology.

5.
Two-dimensional electrophoresis computerized processing
This paper describes various methods suitable for the implementation of two-dimensional gel processing software. The steps leading to complete processing are described, from digitization of the image to processing of the resulting data. The characteristics of a suitable digitization system are discussed. Software devoted to spot detection is reviewed with respect to whether a spot model is used and, if so, its characteristics. The major techniques for gel matching are compared, as are designs for database structures suitable for tabulating measurements. Finally, the need for a sophisticated data-processing system is stressed and its main requirements are described.
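To make the spot-detection step concrete, here is a minimal Python sketch (not the paper's own software) that thresholds a synthetic gel image and extracts spot centroids and integrated intensities; the threshold value and the Gaussian spot shapes are assumptions chosen for the demo.

```python
# Minimal spot-detection sketch for a 2-D gel image (illustrative only;
# the threshold and synthetic spots are assumptions, not from the paper).
import numpy as np
from scipy import ndimage

# Synthetic "gel": low background noise plus two Gaussian spots.
y, x = np.mgrid[0:100, 0:100]
gel = (np.exp(-((x - 30) ** 2 + (y - 40) ** 2) / 20.0) +
       0.6 * np.exp(-((x - 70) ** 2 + (y - 60) ** 2) / 30.0))
gel += 0.02 * np.random.default_rng(0).random(gel.shape)

# Segment: threshold, then label connected regions as candidate spots.
mask = gel > 0.2
labels, n_spots = ndimage.label(mask)

# Quantify each spot: centroid position and integrated intensity ("volume").
centroids = ndimage.center_of_mass(gel, labels, range(1, n_spots + 1))
volumes = ndimage.sum(gel, labels, range(1, n_spots + 1))
for i, (c, v) in enumerate(zip(centroids, volumes), start=1):
    print(f"spot {i}: centroid=({c[0]:.1f}, {c[1]:.1f}), volume={v:.2f}")
```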

6.
Multicast (group) communication has been widely recognized by current research and industry. Multicast is very useful for network applications such as distributed (replicated) databases, video/audio conferencing, information distribution, and server location. But the design and implementation of such multicast communication systems is a complicated task, especially when quality-of-service (QoS) properties such as real-time delivery and reliability are desired. To design and implement multicast communication quickly, good tools are crucial. This paper presents a novel object-oriented (O-O) QoS-driven approach for the quick design and prototyping of multicast communication systems under QoS requirements for multicast message transmission and reception, such as real-time delivery, total ordering, atomicity, and fault tolerance.
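The abstract does not give implementation details, but one classic way to obtain the total ordering it mentions is a sequencer. Below is a minimal Python sketch; the class and method names are invented for illustration and are not from the paper.

```python
# Sequencer-based total ordering for multicast delivery (illustrative sketch).
# A central sequencer stamps each message with a global sequence number;
# receivers hold back out-of-order messages and deliver in sequence order,
# so every receiver observes the same delivery order.
import heapq

class Sequencer:
    def __init__(self):
        self._next = 0
    def stamp(self, msg):
        seq, self._next = self._next, self._next + 1
        return (seq, msg)

class Receiver:
    def __init__(self):
        self._expected = 0
        self._holdback = []          # min-heap of (seq, msg)
        self.delivered = []
    def receive(self, stamped):
        heapq.heappush(self._holdback, stamped)
        # Deliver every held-back message whose turn has come.
        while self._holdback and self._holdback[0][0] == self._expected:
            _, msg = heapq.heappop(self._holdback)
            self.delivered.append(msg)
            self._expected += 1

seq = Sequencer()
m1, m2, m3 = seq.stamp("a"), seq.stamp("b"), seq.stamp("c")
r = Receiver()
for stamped in (m3, m1, m2):     # arrival order differs from send order
    r.receive(stamped)
print(r.delivered)               # ['a', 'b', 'c'] at every receiver
```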

7.
An object-oriented database system has been developed and is being used to store protein structure data. The database can be queried using the logic programming language Prolog or the query language Daplex. Queries retrieve information by navigating through a network of objects that represent the primary, secondary, and tertiary structures of proteins. Routines written in both Prolog and Daplex can integrate complex calculations with the retrieval of data from the database, and can also be stored in the database for sharing among users. Object-oriented databases are thus better suited than relational databases to prototyping applications and answering complex queries about protein structure. The system has been used to find loops of varying length and anchor positions when modelling homologous protein structures.
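The system itself is queried in Prolog or Daplex; the following Python sketch only illustrates the navigation style described, finding loops of a given length together with their anchoring secondary-structure elements. All class names and the toy protein are invented.

```python
# Navigating an object network of protein structure (illustrative sketch;
# the real system used Prolog/Daplex, and these classes are invented).
from dataclasses import dataclass, field

@dataclass
class Segment:                      # a secondary-structure element or loop
    kind: str                       # 'helix', 'strand', or 'loop'
    start: int                      # first and last residue numbers
    end: int

@dataclass
class Protein:
    name: str
    segments: list = field(default_factory=list)

    def loops_between(self, min_len, max_len):
        """Yield loops in the length range, with their anchor segments."""
        for prev, seg, nxt in zip(self.segments,
                                  self.segments[1:], self.segments[2:]):
            length = seg.end - seg.start + 1
            if seg.kind == "loop" and min_len <= length <= max_len:
                yield prev, seg, nxt

p = Protein("demo", [
    Segment("helix", 1, 12), Segment("loop", 13, 17),
    Segment("strand", 18, 25), Segment("loop", 26, 34),
    Segment("helix", 35, 50),
])
for a, loop, b in p.loops_between(3, 6):
    print(f"loop {loop.start}-{loop.end} anchored by {a.kind} and {b.kind}")
```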

8.
9.
The evolution of advanced manufacturing technologies and new manufacturing paradigms has enriched the computer-integrated manufacturing (CIM) methodology. These advances place greater demands on CIM integration technology and its supporting tools. One such demand is to provide CIM systems with better software architecture, more flexible integration mechanisms, and powerful support platforms. In this paper, we present an integrating infrastructure for CIM implementation in manufacturing enterprises, forming an integrated automation system. A research prototype of the integrating infrastructure has been developed for the development, integration, and operation of integrated CIM systems. It is based on a client/server structure and employs object-oriented and agent technology. System openness, scalability, and maintainability are ensured by conforming to international standards and by using effective system design software and management tools.

10.
The managerial and organizational practices required by an increasingly dynamic and competitive manufacturing, business, and industrial environment include the formation of “virtual enterprises.” A major concern in the management of virtual enterprises is the integration and coordination of business processes contributed by partner enterprises. The traditional methods of process modeling currently used for the design of business processes do not fully support the needs of the virtual enterprise, and the design of virtual enterprises imposes requirements that make it more complex than conventional intraorganizational business process design. This paper first describes an architecture that assists in the design of the virtual enterprise. It then discusses business process reengineering (BPR) as a methodology for modeling and designing virtual organizations. While BPR offers many useful tools, the approach itself and the modeling tools commonly used for redesign have fundamental shortcomings when dealing with the virtual enterprise. However, several innovative modeling approaches show promise for this problem. The paper discusses some of these approaches, such as object-oriented modeling of business processes, agent modeling of organizational players, and the use of ontological modeling to capture and manipulate knowledge about the players and processes. The paper concludes with a conceptual modeling methodology that combines these approaches under the enterprise architecture for the design of virtual enterprises.
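As a rough illustration of the agent modeling of organizational players mentioned above, here is a minimal Python sketch in which partner agents bid for a business-process task, contract-net style. The agents, tasks, and bidding rule are invented for illustration and are not taken from the paper.

```python
# Agent modeling of organizational players in a virtual enterprise
# (minimal illustrative sketch; all names and the bidding rule are invented).
from dataclasses import dataclass

@dataclass
class PartnerAgent:
    name: str
    capabilities: set
    cost: float

    def bid(self, task):
        """Return a cost bid if this partner can perform the task."""
        return self.cost if task in self.capabilities else None

def assign(task, agents):
    """Give the task to the cheapest capable partner."""
    bids = [(a.bid(task), a) for a in agents if a.bid(task) is not None]
    return min(bids, key=lambda b: b[0])[1] if bids else None

partners = [
    PartnerAgent("FabCo", {"machining", "assembly"}, 120.0),
    PartnerAgent("LogiCorp", {"logistics"}, 40.0),
    PartnerAgent("AssemblyInc", {"assembly"}, 90.0),
]
for task in ("assembly", "logistics"):
    winner = assign(task, partners)
    print(task, "->", winner.name if winner else "no capable partner")
```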

11.
Manufacturing enterprises face intensive competitive pressures, and many firms are forced to redesign processes just to stay even with the competition. But process redesign is an expensive, time-consuming, and labor-intensive activity, and first-generation computer-based tools are inadequate for redesign today. Alternatively, knowledge-based systems and intelligent tools have the ability to address the key intellectual activities required for effective process redesign. The research described in this article addresses an intelligent redesign tool called KOPeR. The article describes the KOPeR design and implementation and highlights its use and mechanics in the context of a manufacturing supply-chain example. It then turns to application of KOPeR as a redesign tool in the field, through an “industrial-strength” reengineering engagement, to redesign major supply-chain processes. The field results reveal insights into the use, utility, and potential of this tool in procurement, manufacturing, and beyond. The article closes with a number of promising future directions for related research.

12.
PaVESy: Pathway Visualization and Editing System
A data management system for editing and visualizing biological pathways is presented. The main component of PaVESy (Pathway Visualization and Editing System) is a relational SQL database system. The database design allows storage of biological objects, such as metabolites, proteins, and genes, together with the relations among them, which are required to assemble metabolic and regulatory biological interactions. The database model accommodates highly flexible annotation of biological objects through user-defined attributes. In addition, specific roles of objects are derived from these attributes in the context of user-defined interactions, e.g. during pathway generation or editing of the database content. Furthermore, the user may organize the database content within a folder structure and is free to group and annotate database objects of interest within customizable subsets, allowing an individualized view of the database content and facilitating user customization. A Java-based class library was developed, which serves as the database programming interface to PaVESy. This API provides classes that implement the concepts of object persistence in SQL databases, such as entries, interactions, annotations, folders, and subsets. We created editing and visualization tools for navigating and visualizing the database content. User-approved pathway assemblies are stored and may be retrieved for continued modification, annotation, and export. Data export is interfaced with a range of network visualization programs, such as Pajek, and other software allowing import of the SBML or GML data formats. AVAILABILITY: http://pavsey.mpimp-golm.mpg.de
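PaVESy itself is an SQL database with a Java API; the following is a minimal Python/sqlite3 sketch of the kind of schema the abstract describes, with biological objects, user-defined attributes, and typed interactions. The table and column names are assumptions for illustration, not PaVESy's own.

```python
# Sketch of a PaVESy-style schema: objects with flexible attributes and
# role-typed interactions (illustrative; names are not from PaVESy).
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE object      (id INTEGER PRIMARY KEY, kind TEXT, name TEXT);
CREATE TABLE attribute   (object_id INTEGER REFERENCES object(id),
                          key TEXT, value TEXT);
CREATE TABLE interaction (source_id INTEGER REFERENCES object(id),
                          target_id INTEGER REFERENCES object(id),
                          role TEXT);
""")
db.executemany("INSERT INTO object VALUES (?, ?, ?)",
               [(1, "metabolite", "glucose"),
                (2, "protein", "hexokinase"),
                (3, "metabolite", "glucose-6-phosphate")])
db.executemany("INSERT INTO interaction VALUES (?, ?, ?)",
               [(1, 2, "substrate"), (2, 3, "product")])

# Query: which metabolites does hexokinase produce?
rows = db.execute("""
SELECT o2.name FROM interaction i
JOIN object o1 ON o1.id = i.source_id
JOIN object o2 ON o2.id = i.target_id
WHERE o1.name = 'hexokinase' AND i.role = 'product'
""").fetchall()
print(rows)   # [('glucose-6-phosphate',)]
```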

13.
The ecoinformatics community recognizes that ecological synthesis across studies, space, and time will require new informatics tools and infrastructure. Recent advances have been encouraging, but many problems still face ecologists who manage their own datasets, prepare data for archiving, and search data stores for synthetic research. In this paper, we describe how work by the Canopy Database Project (CDP) might enable use of database technology by field ecologists: increasing the quality of database design, improving data validation, and providing structural and semantic metadata — all of which might improve the quality of data archives and thereby help drive ecological synthesis. The CDP has experimented with conceptual components for database design, called templates, to address the information technology issues facing ecologists. Templates represent forest structures and observational measurements on these structures. Using our software, researchers select templates to represent their study's data and can generate normalized relational databases. Information hidden in those databases is used by ancillary tools, including data intake forms, simple data validation, data visualization, and metadata export. The primary question we address in this paper is which templates are the right ones. We argue for defining simple templates (with relatively few attributes) that describe the domain's major entities, and for coupling those with focused and flexible observation templates. We present a conceptual model for the observation data type, and show how we have implemented the model as an observation entity in the DataBank database designer and generator. We show how our visualization tool CanopyView exploits metadata made explicit by DataBank to help scientists with analysis and synthesis. We conclude by presenting future plans for tools to conduct statistical calculations common to forest ecology and to enhance data mining with DataBank databases. DataBank could be extended to another domain by replacing our forest-ecology-specific templates with those for the new domain. This work extends the basic computer science idea of abstract data types and user-defined types to ecology-specific database design tools for individual users, and applies to ecoinformatics the software engineering innovations of domain-specific languages, software patterns, components, refactoring, and end-user programming.
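A minimal Python sketch of the template-plus-observation pattern argued for above: a simple entity template with few attributes, coupled with a flexible observation record. The class and field names are invented for illustration and are not DataBank's own.

```python
# Sketch of the template/observation pattern: simple entity templates plus a
# generic observation record (illustrative; names are not from DataBank).
from dataclasses import dataclass

@dataclass
class Tree:                      # a domain-entity template with few attributes
    tag: str
    species: str

@dataclass
class Observation:               # a measurement on any entity, with unit/date
    entity: object
    attribute: str
    value: float
    unit: str
    date: str

trees = [Tree("T001", "Pseudotsuga menziesii"),
         Tree("T002", "Tsuga heterophylla")]
obs = [Observation(trees[0], "dbh", 87.3, "cm", "2004-07-12"),
       Observation(trees[0], "height", 61.0, "m", "2004-07-12"),
       Observation(trees[1], "dbh", 42.1, "cm", "2004-07-13")]

# Query: all diameter-at-breast-height measurements, whatever the entity.
for o in obs:
    if o.attribute == "dbh":
        print(o.entity.tag, o.value, o.unit)
```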

14.
Recent improvements in mass spectrometry instruments and new analytical methods are increasing the intersection between proteomics and big data science. In addition, bioinformatics analysis is becoming increasingly complex, involving multiple algorithms and tools. A wide variety of methods and software tools have been developed for computational proteomics and metabolomics in recent years, and this trend is likely to continue. However, most computational proteomics and metabolomics tools are designed as single-tiered software applications in which the analytics tasks cannot be distributed, limiting the scalability and reproducibility of the data analysis. In this paper the key steps of metabolomics and proteomics data processing, including the main tools and software used to perform the data analysis, are summarized. The combination of software containers with workflow environments for large-scale metabolomics and proteomics analysis is discussed. Finally, a new approach for reproducible and large-scale data analysis based on BioContainers and two of the most popular workflow environments, Galaxy and Nextflow, is introduced to the proteomics and metabolomics communities.
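To give a flavor of the container-based approach, here is a minimal Python sketch that runs one analysis step inside a Docker container. This is not Galaxy or Nextflow, and the container image, tool invocation, and file paths are assumptions for illustration only.

```python
# Minimal sketch of one containerized analysis step, in the spirit of the
# BioContainers approach (illustrative; not Galaxy/Nextflow, and the image
# name and command in the example are hypothetical).
import subprocess
from pathlib import Path

def run_step(image, command, workdir):
    """Run one tool inside a container, mounting workdir for input/output."""
    workdir = Path(workdir).resolve()
    return subprocess.run(
        ["docker", "run", "--rm",
         "-v", f"{workdir}:/data", "-w", "/data",
         image] + command,
        check=True, capture_output=True, text=True)

# Hypothetical usage for a containerized search step:
# result = run_step("biocontainers/some-search-engine:latest",
#                   ["search-tool", "-Pparams.txt", "sample.mzML"],
#                   "./work")
```

Because the tool version is pinned by the container image, rerunning the same step elsewhere reproduces the same software environment, which is the reproducibility argument the paper makes.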

15.
In many studies, particularly in the field of systems biology, it is essential that identical protein sets are precisely quantified in multiple samples, such as those representing differentially perturbed cell states. The high degree of reproducibility required for such experiments has not been achieved by classical mass spectrometry-based proteomics methods. In this study we describe the implementation of a targeted quantitative approach by which predetermined protein sets are first identified and subsequently quantified reliably and at high sensitivity in multiple samples. This approach consists of three steps. First, the proteome is extensively mapped out by multidimensional fractionation and tandem mass spectrometry, and the data generated are assembled in the PeptideAtlas database. Second, based on this proteome map, peptides uniquely identifying the proteins of interest, proteotypic peptides, are selected, and multiple reaction monitoring (MRM) transitions are established and validated by MS2 spectrum acquisition. This process of peptide selection, transition selection, and validation is supported by TIQAM (Targeted Identification for Quantitative Analysis by MRM), a suite of software tools described in this study. Third, the selected target protein set is quantified in multiple samples by MRM. Applying this approach, we were able to reliably quantify low-abundance virulence factors from cultures of the human pathogen Streptococcus pyogenes exposed to increasing amounts of plasma. The resulting quantitative protein patterns enabled us to clearly define the subset of virulence proteins that is regulated upon plasma exposure.
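A minimal Python sketch of the second step described above: digest proteins in silico, keep peptides that map to exactly one protein (proteotypic), and compute a doubly charged precursor m/z. The toy sequences are invented, the cleavage rule is simplified, and real transition selection uses many more criteria than this; only the monoisotopic mass table is standard.

```python
# Sketch of proteotypic-peptide selection for MRM (illustrative; the toy
# proteome is invented and the selection rule is deliberately simplified).
import re

# Monoisotopic residue masses (Da); water and proton for precursor m/z.
MASS = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
        "V": 99.06841, "T": 101.04768, "C": 103.00919, "L": 113.08406,
        "I": 113.08406, "N": 114.04293, "D": 115.02694, "Q": 128.05858,
        "K": 128.09496, "E": 129.04259, "M": 131.04049, "H": 137.05891,
        "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931}
WATER, PROTON = 18.01056, 1.00728

def tryptic(seq):
    """Cleave after K/R (ignoring the proline rule, for brevity)."""
    return [p for p in re.split(r"(?<=[KR])", seq) if p]

proteome = {"virA": "MKTAYIAKQRQISFVK", "virB": "MSTNPKQISFVKAYHR"}

# A peptide is proteotypic if it maps to exactly one protein in the proteome.
seen = {}
for prot, seq in proteome.items():
    for pep in tryptic(seq):
        seen.setdefault(pep, set()).add(prot)

for pep, prots in seen.items():
    if len(prots) == 1 and len(pep) >= 6:
        mz = (sum(MASS[a] for a in pep) + WATER + 2 * PROTON) / 2
        print(f"{pep:>8}  unique to {next(iter(prots))}  [M+2H]2+ = {mz:.3f}")
```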

16.
Kebing Yu, Arthur R. Salomon. Proteomics, 2010, 10(11): 2113-2122
Recent advances in the speed and sensitivity of mass spectrometers and in analytical methods, the exponential acceleration of computer processing speeds, and the availability of genomic databases from an array of species and protein information databases have led to a deluge of proteomic data. The development of a lab-based automated proteomic software platform for the automated collection, processing, storage, and visualization of expansive proteomic data sets is critically important. The high-throughput autonomous proteomic pipeline described here is designed from the ground up to provide critically important flexibility for diverse proteomic workflows and to streamline the total analysis of a complex proteomic sample. This tool is composed of software that controls the acquisition of mass spectral data along with automation of post-acquisition tasks such as peptide quantification, clustered MS/MS spectral database searching, statistical validation, and data exploration within a user-configurable lab-based relational database. The software design of the high-throughput autonomous proteomic pipeline focuses on accommodating diverse workflows and providing missing software functionality to a wide range of proteomic researchers, to accelerate the extraction of biological meaning from immense proteomic data sets. Although individual software modules in our integrated technology platform may have some similarities to existing tools, the true novelty of the approach described here is in the synergistic and flexible combination of these tools to provide an integrated and efficient analysis of proteomic samples.

17.
Cyclone aims at facilitating the use of BioCyc, a collection of Pathway/Genome Databases (PGDBs). Cyclone provides a fully extensible Java Object API to analyze and visualize these data. Cyclone can read and write PGDBs, and can write its own data in the CycloneML format. This format is automatically generated from the BioCyc ontology by Cyclone itself, ensuring continued compatibility. Cyclone objects can also be stored in a relational database, CycloneDB. Queries can be written in SQL, and in an intuitive and concise object-oriented query language, Hibernate Query Language (HQL). In addition, Cyclone interfaces easily with Java software including the Eclipse IDE for HQL edition, the Jung API for graph algorithms or Cytoscape for graph visualization. AVAILABILITY: Cyclone is freely available under an open source license at: http://sourceforge.net/projects/nemo-cyclone. SUPPLEMENTARY INFORMATION: For download and installation instructions, tutorials, use cases and examples, see http://nemo-cyclone.sourceforge.net.

18.
Predicting the distribution of metabolic fluxes in biochemical networks is of major interest in systems biology. Several databases provide metabolic reconstructions for different organisms, and software to analyze flux distributions exists, among others, for the proprietary MATLAB environment. Given the large user community for the R computing environment, a simple implementation of flux analysis in R appears desirable and will facilitate easy interaction with computational tools that handle gene expression data. We extended BiGGR, an implementation of metabolic flux analysis in R. BiGGR makes use of public metabolic reconstruction databases, and contains the BiGG database and the Recon2 reconstruction of human metabolism as Systems Biology Markup Language (SBML) objects. Models can be assembled by querying the databases for pathways, genes or reactions of interest. Fluxes can then be estimated by maximization or minimization of an objective function using linear inverse modeling algorithms. Furthermore, BiGGR provides functionality to quantify the uncertainty in flux estimates by sampling the constrained multidimensional flux space. As a result, ensembles of possible flux configurations are constructed that agree with measured data within precision limits. BiGGR also features automatic visualization of selected parts of metabolic networks using hypergraphs, with hyperedge widths proportional to estimated flux values. BiGGR supports import and export of models encoded in SBML and is therefore interoperable with different modeling and analysis tools. As an application example, we calculated the flux distribution in the healthy human brain using a model of central carbon metabolism. We introduce a new algorithm, termed Least-squares with equalities and inequalities Flux Balance Analysis (Lsei-FBA), to predict flux changes from gene expression changes, for instance during disease. Our estimates of the brain metabolic flux pattern with Lsei-FBA for Alzheimer's disease agree with independent measurements of cerebral metabolism in patients. This second version of BiGGR is available from Bioconductor.
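BiGGR itself is an R package; the following Python sketch only shows the underlying linear program of flux balance analysis on a toy three-reaction network invented for the demo: steady state (S·v = 0) plus flux bounds, maximizing an objective flux.

```python
# Minimal flux balance analysis on a toy network (illustrative only).
# Reactions: v1: -> A (uptake), v2: A -> B, v3: B -> (biomass/export).
# Steady state requires S @ v = 0 for the internal metabolites A and B.
import numpy as np
from scipy.optimize import linprog

S = np.array([[1, -1,  0],    # metabolite A: made by v1, consumed by v2
              [0,  1, -1]])   # metabolite B: made by v2, consumed by v3
bounds = [(0, 10), (0, 100), (0, 100)]   # uptake v1 capped at 10 units

# Maximize v3 (the objective flux); linprog minimizes, so negate it.
c = np.array([0.0, 0.0, -1.0])
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)   # [10. 10. 10.] -- uptake limits the objective
```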

19.
In this paper, we discuss the properties of biological data and the challenges they pose for data management, and argue that, in order to meet the data management requirements of 'digital biology', careful integration of existing technologies and the development of new data management techniques for biological data are needed. Based on this premise, we present PathCase: the Case Pathways Database System. PathCase is an integrated set of software tools for modelling, storing, analysing, visualizing and querying biological pathways data at different levels of genetic, molecular, biochemical and organismal detail. The novel features of the system include: (i) genomic information integrated with other biological data and presented starting from pathways; (ii) design for biologists who are possibly unfamiliar with genomics, but whose research is essential for annotating gene and genome sequences with biological functions; (iii) database design, implementation and graphical tools which enable users to visualize pathways data at multiple abstraction levels and to pose exploratory queries; (iv) a wide range of query types, including 'path' and 'neighbourhood' queries, with graphical visualization of query outputs; and (v) an implementation that allows for web (XML)-based dissemination of query outputs (i.e. pathways data in BIOPAX format) to researchers in the community, giving them control over the use of pathways data.
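To make the 'path' and 'neighbourhood' query types concrete, here is a minimal Python sketch of both on a toy pathway graph; the node names and graph are invented, not PathCase data.

```python
# Sketch of 'path' and 'neighbourhood' queries on a toy pathway graph
# (illustrative only; the graph is invented, not from PathCase).
from collections import deque

graph = {"glucose": ["g6p"], "g6p": ["f6p", "6pg"],
         "f6p": ["fbp"], "fbp": ["pyruvate"], "6pg": [], "pyruvate": []}

def neighbourhood(start, radius):
    """All nodes reachable within `radius` reaction steps (BFS by layers)."""
    seen, frontier = {start}, {start}
    for _ in range(radius):
        frontier = {n for u in frontier
                    for n in graph.get(u, []) if n not in seen}
        seen |= frontier
    return seen

def path(src, dst):
    """One shortest path from src to dst, or None if unreachable."""
    parents, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            out = []
            while u is not None:
                out.append(u)
                u = parents[u]
            return out[::-1]
        for v in graph.get(u, []):
            if v not in parents:
                parents[v] = u
                queue.append(v)
    return None

print(neighbourhood("glucose", 2))   # nodes within two reactions of glucose
print(path("glucose", "pyruvate"))   # ['glucose', 'g6p', 'f6p', 'fbp', 'pyruvate']
```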

20.
File and Object Replication in Data Grids
Data replication is a key issue in a Data Grid and can be managed in different ways and at different levels of granularity: for example, at the file level or the object level. In the High Energy Physics community, Data Grids are being developed to support the distributed analysis of experimental data. We have produced a prototype data replication tool, the Grid Data Mirroring Package (GDMP), which is in production use in one physics experiment, with middleware provided by the Globus Toolkit used for authentication, data movement, and other purposes. We present here a new, enhanced GDMP architecture and prototype implementation that uses Globus Data Grid tools for efficient file replication. We also explain how this architecture can address object replication issues in an object-oriented database management system. File transfer over wide-area networks requires specific performance tuning in order to achieve optimal data transfer rates. We present performance results obtained with GridFTP, an enhanced version of FTP, and discuss tuning parameters.
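At its core, mirroring a file means copying it and verifying that the replica is intact. Here is a minimal Python sketch of that step with checksum verification; GDMP itself uses Globus/GridFTP rather than this code, and the example paths are hypothetical.

```python
# Sketch of a file-mirroring step: copy, then verify by checksum
# (illustrative only; GDMP uses Globus/GridFTP, not this code).
import hashlib
import shutil

def sha256(path, chunk=1 << 20):
    """Stream the file in 1 MiB chunks so large files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def replicate(src, dst):
    """Copy src to dst and confirm the replica is byte-identical."""
    shutil.copyfile(src, dst)
    if sha256(src) != sha256(dst):
        raise IOError(f"replica verification failed for {dst}")
    return dst

# Hypothetical usage:
# replicate("/data/run042/events.root", "/mirror/run042/events.root")
```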
