Similar Documents
20 similar documents found (search time: 562 ms)
1.
MOTIVATION: The human genome project and the development of new high-throughput technologies have created unparalleled opportunities to study the mechanisms of disease, monitor disease progression and evaluate effective therapies. Gene expression profiling is a critical tool to accomplish these goals. The use of nucleic acid microarrays to assess the expression of thousands of genes simultaneously has seen phenomenal growth over the past five years. Although commercial sources of microarrays exist, investigators wanting more flexibility in the genes represented on the array will turn to in-house production. The creation and use of cDNA microarrays is a complicated process that generates an enormous amount of information. Effective management of this information is essential to efficiently access, analyze, troubleshoot and evaluate the microarray experiments. RESULTS: We have developed a distributable software package designed to track and store the various pieces of data generated by a cDNA microarray facility, including clone collection storage data, annotation data, workflow queues, microarray data, data repositories, sample submission information, and project/investigator information. The application was designed using a three-tier client-server model. The data access layer (first tier) contains the relational database system, tuned to support a large number of transactions. The data services layer (second tier) is a distributed COM server with full database transaction support. The application layer (third tier) is an Internet-based user interface that contains both client- and server-side code for dynamic interaction with the user. AVAILABILITY: This software is freely available to academic institutions and non-profit organizations at http://www.genomics.mcg.edu/niddkbtc.

2.
MOTIVATION: A large number of computational programs are freely available to bioinformaticians via a client/server, web-based environment. However, the client interface to these tools (typically an HTML form page) cannot be customized from the client side, as it is created by the service provider. The form page is usually generic enough to cater for a wide range of users, which means a user cannot set advanced program parameters as defaults on the form, or customize the interface to his or her specific requirements or preferences. Currently, there is a lack of end-user interface environments that can be modified by the user when accessing programs available on a remote server, whether on an intranet or over the Internet. RESULTS: We have implemented a client/server system called ORBIT (Online Researcher's Bioinformatics Interface Tools) in which individual clients can have interfaces created and customized to command-line-driven, server-side programs. Thus, Internet-based interfaces can be tailored to a user's specific bioinformatics needs. As interfaces are created on the client machine independently of the server, there can be different interfaces to the same server-side program to cater for different parameter settings. Interface customization is relatively quick (between 10 and 60 minutes), and all client interfaces are integrated into a single modular environment that will run on any computer platform supporting Java. The system has been designed to allow for a number of future enhancements and features. ORBIT represents an important advance in the way researchers gain access to bioinformatics tools on the Internet.
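As a rough illustration of the idea (not the actual ORBIT API), the sketch below shows how client-side, per-user defaults could be merged with form values into a command line that a remote server would execute; the program name, flags and class names are invented.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch (not the actual ORBIT API): a client-side interface
// definition whose user-specific defaults are merged into a command line
// that a remote server would execute for a command-line-driven program.
public class ClientInterfaceSketch {

    // Per-user defaults stored on the client, independent of the server.
    private final Map<String, String> defaults = new LinkedHashMap<>();

    public void setDefault(String flag, String value) {
        defaults.put(flag, value);
    }

    // Build the server-side command; form values override stored defaults.
    public String buildCommand(String program, Map<String, String> formValues) {
        Map<String, String> merged = new LinkedHashMap<>(defaults);
        merged.putAll(formValues);
        StringBuilder cmd = new StringBuilder(program);
        merged.forEach((flag, value) -> cmd.append(' ').append(flag).append(' ').append(value));
        return cmd.toString();
    }

    public static void main(String[] args) {
        ClientInterfaceSketch ui = new ClientInterfaceSketch();
        ui.setDefault("-matrix", "BLOSUM80");            // user's preferred default
        Map<String, String> form = new LinkedHashMap<>();
        form.put("-query", "my_sequence.fasta");          // value entered in the form
        // The resulting string would be sent to the server for execution.
        System.out.println(ui.buildCommand("blastp", form));
    }
}
```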

3.
Lo SL, You T, Lin Q, Joshi SB, Chung MC, Hew CL. Proteomics 2006, 6(6): 1758-1769
In the field of proteomics, the increasing difficulty of unifying data formats, owing to the different platforms/instrumentation and laboratory documentation systems, greatly hinders the verification, exchange, and comparison of experimental data. It is therefore essential to establish standard formats for every necessary aspect of proteomics data. One recently published data model is the proteomics experiment data repository (PEDRo) [Taylor, C. F., Paton, N. W., Garwood, K. L., Kirby, P. D. et al., Nat. Biotechnol. 2003, 21, 247-254]. Compliant with this format, we developed the systematic proteomics laboratory analysis and storage hub (SPLASH) database system as an informatics infrastructure to support proteomics studies. It consists of three modules and provides proteomics researchers with a common platform to store, manage, search, analyze, and exchange their data. (i) Data maintenance includes experimental data entry and update, uploading of experimental results in batch mode, and data exchange in the original PEDRo format. (ii) The data search module provides several means to search the database and to view either the protein information or the differential expression display by clicking on a gel image. (iii) The data mining module contains tools that perform biochemical pathway, statistics-associated gene ontology, and other comparative analyses for all the sample sets to interpret their biological meaning. These features make SPLASH a practical and powerful tool for the proteomics community.

4.
The design of Jemboss: a graphical user interface to EMBOSS
DESIGN: Jemboss is a graphical user interface (GUI) for the European Molecular Biology Open Software Suite (EMBOSS). It is being developed at the MRC UK HGMP-RC as part of the EMBOSS project. This paper explains the technical aspects of the Jemboss client-server design. The client-server model optionally allows a Jemboss user to have an account on the remote server. The Jemboss client is written in Java and is downloaded automatically to a user's workstation via Java Web Start over HTTP. The client then communicates with the remote server using SOAP (Simple Object Access Protocol). A Tomcat server listens on the remote machine and passes the SOAP requests to a Jemboss server, also written in Java. This Java server interprets the client requests and executes them through Java Native Interface (JNI) code written in C. Another C program with setuid privilege, jembossctl, is called by the JNI code to perform the client requests under the user's account on the server. The commands include execution of EMBOSS applications, file management and project management tasks. Jemboss allows the use of JSSE for encryption of communication between the client and server. The GUI parses the EMBOSS AJAX Command Definition (ACD) language for form generation and maximum input flexibility. Jemboss interacts directly with the EMBOSS libraries to allow dynamic generation of application default settings. RESULTS: This interface is part of the EMBOSS distribution and has attracted much interest. It has been set up at many other sites globally, as well as being used at the HGMP-RC for registered users. AVAILABILITY: The software, EMBOSS and Jemboss, is freely available to academic and commercial users under the GPL licence. It can be downloaded from the EMBOSS ftp server: http://www.uk.embnet.org/Software/EMBOSS/, ftp://ftp.uk.embnet.org/pub/EMBOSS/. Registered HGMP-RC users can access an installed server from: http://www.uk.embnet.org/Software/EMBOSS/Jemboss/
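The following sketch illustrates the client-to-Tomcat SOAP round trip described above, using the SAAJ API bundled with Java SE 8. The endpoint URL, XML namespace and operation name are placeholders, not the actual Jemboss service definition.

```java
import javax.xml.namespace.QName;
import javax.xml.soap.*;
import java.net.URL;

// Minimal sketch of a client sending a SOAP request to a Tomcat endpoint.
// Namespace, operation name and URL are hypothetical, not the real Jemboss API.
public class SoapClientSketch {
    public static void main(String[] args) throws Exception {
        SOAPConnectionFactory connFactory = SOAPConnectionFactory.newInstance();
        SOAPConnection connection = connFactory.createConnection();

        // Build a request asking the server to run an EMBOSS program.
        SOAPMessage request = MessageFactory.newInstance().createMessage();
        SOAPBody body = request.getSOAPBody();
        SOAPElement run = body.addChildElement(
                new QName("urn:example:jemboss", "runProgram"));   // assumed namespace/operation
        run.addChildElement("program").addTextNode("seqret");
        run.addChildElement("options").addTextNode("-sequence embl:xlrhodop -outseq out.fasta");
        request.saveChanges();

        // The Tomcat server would forward this to the Java server that executes
        // the command under the user's account via JNI and jembossctl.
        URL endpoint = new URL("http://localhost:8080/axis/services/JembossServer"); // placeholder
        SOAPMessage response = connection.call(request, endpoint);
        System.out.println(response.getSOAPBody().getTextContent());
        connection.close();
    }
}
```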

5.
The field of proteomics is advancing rapidly as a result of powerful new technologies, and proteomics experiments yield a vast and increasing amount of information. Data regarding protein occurrence, abundance, identity, sequence, structure, properties, and interactions need to be stored. A common standard has not yet been established, and open access to results is needed for further development of robust analysis algorithms. Databases for proteomics will evolve from pure storage into knowledge resources, providing a repository for information (metadata) that is largely not stored in simple flat files. This review sheds light on recent steps towards the generation of a common standard in proteomics data storage and integration, but is not meant to be a comprehensive overview of all available databases and tools in the proteomics community.

6.
This paper presents our experience in developing and implementing an Internet telerobotics system, that is, a robot system controlled and monitored remotely through the Internet. A robot manipulator with five degrees of freedom, called Mentor, is employed, and a client-server architecture serves as the platform for the system. Three generations of telerobotics systems have evolved in this research. The first generation was based on CGI and a two-tier architecture, in which a client presents a graphical user interface to the user and uses the user's data entry and actions to issue requests to a robot server running on a different machine. The second generation was developed in Java. We also employ Java 3D to create and manipulate the 3D geometry of the manipulator links and to construct the structures used in rendering that geometry, resulting in a 3D simulation of robot movement presented to the users (clients) through their web browsers. Recent developments in our Internet telerobotics include object recognition from images captured by a camera, which poses a challenging problem given the nondeterministic latency of the Internet. The third generation is centered on CORBA as the development platform for a distributed Internet telerobotics system, aimed at distributing the tasks of the telerobotics system.
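To make the client-server pattern concrete, here is a minimal sketch of a client sending a joint-angle command to a robot server and waiting for an acknowledgement. The host, port and line-based protocol are invented for illustration and do not correspond to the Mentor system.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

// Minimal sketch of a telerobotics client: send one command for a
// five-degree-of-freedom arm and read the server's reply. Host, port
// and message format are placeholders.
public class TelerobotClientSketch {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("robot-server.example.org", 5000);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {

            // One angle per joint (degrees); the server moves the arm and replies.
            out.println("MOVE 10 45 30 0 90");
            String ack = in.readLine();
            System.out.println("Server response: " + ack);
        }
    }
}
```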

7.
The Munich ENU Mouse Mutagenesis Screen is a large-scale mutant production, phenotyping, and mapping project. It encompasses two animal breeding facilities and a number of screening groups located in the general area of Munich. A central database is required to manage and process the immense amount of data generated by the mutagenesis project. This database, which we named MouseNet, runs on a Sybase platform and will ultimately store and process all data from the entire project. In addition, the system comprises a portfolio of functions needed to support the workflow management of the core facility and the screening groups. MouseNet will make all of the data available to the participating screening groups, and later to the international scientific community. MouseNet will consist of three major software components:
• Animal Management System (AMS)
• Sample Tracking System (STS)
• Result Documentation System (RDS)
MouseNet provides the following major advantages:
• it is accessible from different client platforms via the Internet
• it is a full-featured multi-user system (including access restriction and data locking mechanisms)
• it relies on a professional RDBMS (relational database management system) running on a UNIX server platform
• it supplies workflow functions and a variety of plausibility checks.

8.
The use of mobile computers is gaining popularity, and the number of users with laptops, PDAs, and smart phones is increasing. Access to information repositories in the future will be dominated by mobile clients rather than traditional “fixed” clients. These mobile clients download information by periodically connecting to repositories of data stored in either databases or file systems. Such mobile clients constitute a new and different kind of workload and exhibit a different access pattern than is seen in traditional client-server systems. Though file systems have been modified to handle clients that can download information, disconnect, and later reintegrate, databases have not been redesigned to accommodate mobile clients, so there is a need to support them in the context of client-server databases. This paper is about organizing the database server to take into consideration the access patterns of mobile clients. We propose the concept of hoard attributes, which capture these access patterns. Three different techniques for organizing data on the server based on the hoard attribute are presented. We argue that each technique is suited to a particular workload, where the workload is a combination of requests from mobile clients and traditional clients. This reorganization also allows us to address issues of concurrency control, disconnection and replica control in mobile databases. We present simulation results that show the performance of server reorganization using hoard attributes, and we provide an elaborate discussion of the issues arising from this reorganization in the new paradigm, taking into account both mobile and traditional clients.
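A toy sketch of the underlying idea, assuming invented record fields and hoard keys: records that a mobile client tends to download together are clustered on the server under one hoard attribute, so a brief connection can fetch the whole hoard in a single batch. This is an illustration of the concept, not any of the three techniques from the paper.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Minimal sketch: cluster server records by hoard attribute so a mobile
// client can download its hoard in one batch before disconnecting.
public class HoardSketch {

    record DataRecord(String hoardKey, String id, String payload) {}

    public static void main(String[] args) {
        List<DataRecord> table = List.of(
                new DataRecord("sales-west", "r1", "..."),
                new DataRecord("sales-west", "r2", "..."),
                new DataRecord("sales-east", "r3", "..."));

        // Reorganize the table by hoard attribute (one cluster per mobile profile).
        Map<String, List<DataRecord>> clusters = table.stream()
                .collect(Collectors.groupingBy(DataRecord::hoardKey));

        // A client that hoards "sales-west" downloads its cluster for offline use.
        List<DataRecord> download = clusters.getOrDefault("sales-west", List.of());
        System.out.println("Downloading " + download.size() + " records for offline use");
    }
}
```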

9.
In this study, we present two freely available and complementary Distributed Annotation System (DAS) resources: a DAS reference server that provides up-to-date sequence and annotation from UniProt, with additional feature links and database cross-references from InterPro, and a DAS client, implemented in Java and Macromedia Flash, that is optimized for the display of protein features.
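Since DAS serves annotations as XML over plain HTTP, a features query is simply an HTTP GET on a URL of the form <server>/das/<source>/features?segment=<id>. The sketch below issues such a request; the server host is a placeholder, not the actual UniProt DAS server described above.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

// Minimal DAS client sketch: fetch the features XML for one protein segment.
// The host below is a placeholder.
public class DasClientSketch {
    public static void main(String[] args) throws Exception {
        URL url = new URL(
                "http://das.example.org/das/uniprot/features?segment=P05067"); // placeholder host
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                // A real client would parse the DASGFF XML; here we just print it.
                System.out.println(line);
            }
        }
    }
}
```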

10.
Increasing the amount of carbon stored in harvested wood products (HWPs) is an internationally recognized measure to mitigate climate change. Several approaches and tiers of methods may be used to analyze the contribution of HWPs in terms of greenhouse gas emissions and removals at the regional and national level. The Intergovernmental Panel on Climate Change (IPCC) provides guidelines on three tiers of methods for estimating annual carbon stock changes in the HWP carbon pool; these tiers differ mainly in the availability of input data and the level of HWP aggregation. In this case study for the Czech Republic, we applied the production approach and alternative tiers of accounting methods described in the IPCC guidelines, including the default method (tier 2) and the most advanced method (tier 3). We used country-specific data and material flow analysis to trace the carbon flow over the entire forest-based sector, including only the domestic harvest and the primary and secondary wood products manufactured within the country. The results show that the carbon stored in the HWP pool could be underestimated if simpler methods and default values not specific to the country are applied. At the national level, applying the tier 3 method resulted in a 15.8% higher annual carbon inflow into the HWP pool compared to the tier 2 IPCC default method, meaning that the advanced method reveals an apparently higher carbon sink in HWPs. A documented increase in carbon storage might bring additional credits to reporting countries and, more importantly, could promote the use of long-life HWPs to mitigate climate change.
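For readers unfamiliar with the accounting, the sketch below walks through the first-order decay (FOD) calculation on which the IPCC tier methods are built: the HWP pool decays with a product half-life while each year's carbon inflow is added, and the annual stock change is what gets reported. The inflow values are invented, and the 35-year half-life is only the commonly cited sawnwood default; values should be checked against the guidelines.

```java
// Minimal sketch of the IPCC first-order decay (FOD) estimate for an HWP pool.
// All numbers are illustrative assumptions, not Czech national data.
public class HwpFodSketch {
    public static void main(String[] args) {
        double halfLifeYears = 35.0;                  // sawnwood default half-life (assumption)
        double k = Math.log(2.0) / halfLifeYears;     // decay constant
        double stock = 1000.0;                        // initial carbon stock, Gg C (invented)
        double[] inflow = {50.0, 52.0, 55.0};         // annual carbon inflow, Gg C/yr (invented)

        for (int year = 0; year < inflow.length; year++) {
            // C(i+1) = e^-k * C(i) + ((1 - e^-k)/k) * Inflow(i)
            double next = Math.exp(-k) * stock + ((1 - Math.exp(-k)) / k) * inflow[year];
            double deltaC = next - stock;             // annual stock change reported for this year
            System.out.printf("year %d: stock change = %.2f Gg C%n", year + 1, deltaC);
            stock = next;
        }
    }
}
```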

11.
Since its launch in 1993, the ExPASy server has been, and still is, a reference in the proteomics world. ExPASy users access various databases, many dedicated tools, and lists of resources, among other services. A significant part of the available resources is devoted to two-dimensional electrophoresis data. Our latest contribution to the expansion of the pool of on-line proteomics data is the World-2DPAGE Constellation, accessible at http://world-2dpage.expasy.org/. It is composed of the established WORLD-2DPAGE List of 2-D PAGE database servers, the World-2DPAGE Portal that simultaneously queries proteomics databases world-wide, and the recently created World-2DPAGE Repository. The latter is a public, standards-compliant repository for gel-based proteomics data linked to protein identifications published in the literature. It has been set up using the Make2D-DB package, a software tool that helps build SWISS-2DPAGE-like databases on one's own web site. The lack of the informatics infrastructure needed to build and run a dedicated website is therefore no longer an obstacle to making proteomics data publicly accessible on the Internet.

12.
The Biology of Addictive Diseases-Database (BiolAD-DB) system is a research bioinformatics system for archiving, analyzing, and processing complex clinical and genetic data. The database schema employs design principles for handling complex clinical information, such as response items in genetic questionnaires. Data access and validation are provided by the BiolAD-DB client application, which features a data validation engine tightly coupled to a graphical user interface. Data integrity is provided by the password-protected, SQL-compliant BiolAD-DB server and database. BiolAD-DB tools further provide functionality for generating customized reports and views. The BiolAD-DB system schema, client, and installation instructions are freely available at http://www.rockefeller.edu/biolad-db/.

13.
14.
15.
The structure and ultrastructure of the adhesive organ (AO) in the catfish, Pseudocheneis sulcatus (Sisoridae), an inhabitant of the sub‐Himalayan streams of India, are described. The surface of the AO is thrown into folds, the ridges of which bear curved spines. The AO epidermis consists of 10–12 tiers of filament‐rich cells, of which the outer tier cells project spines lined with a thick plasma membrane and bear bundles of tonofilaments (TF). Their cytoplasm contains TF and large mucus‐like granules, but no obvious organelles. A second tier of living cells with spines is present beneath the outer tier and seems to replace the latter when its spines are damaged or shed. The outer tier cells react positively with antibody to cytokeratin. Actin labelling is clearly absent from the outer tier, indicating that keratinization of the outer tier occurs in the absence of actin filaments. In the cells of the third to fifth tiers, the cytoplasm possesses abundant small mucous granules (0.1–0.3 µm) and fewer TF compared to the cytoplasm in the spines. The cells of the innermost tiers and the basal layer possess few TF bundles, but no mucous granules. The potential of AO filament cells to produce both mucous granules and keratin filaments is noteworthy. The observations provide evidence that specific regions of fish epidermis can actually undergo a true process of keratinization.

16.
Battye F. Cytometry 2001, 43(2): 143-149
BACKGROUND: The obvious benefits of centralized data storage notwithstanding, the size of modern flow cytometry data files discourages their transmission over commonly used telephone modem connections. The proposed solution is to install at the central location a web servlet that can extract compact data arrays, of a form dependent on the requested display type, from the stored files and transmit them to a remote client program for display. METHODS: A client program and a web servlet, both written in the Java programming language, were designed to communicate over standard network connections. The client program creates familiar numerical and graphical display types and allows the creation of gates from combinations of user-defined regions. Data compression techniques further reduce transmission times for data arrays that are already much smaller than the data file itself. RESULTS: For typical data files, network transmission times were reduced more than 700-fold for extraction of one-dimensional (1-D) histograms, between 18- and 120-fold for 2-D histograms, and 6-fold for color-coded dot plots. Numerous display formats are possible without further access to the data file. CONCLUSIONS: This scheme enables telephone modem access to centrally stored data without restricting the flexibility of display formats or preventing comparisons with locally stored files.
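A small sketch of the server-side idea, assuming synthetic event data and an arbitrary bin count: rather than shipping the full listmode file, one parameter is binned into a fixed-size 1-D histogram and the resulting array is gzip-compressed before transmission to the client.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.util.zip.GZIPOutputStream;

// Minimal sketch: bin one parameter of every event into a 256-bin histogram
// and compress the compact array for transmission. Event values are synthetic.
public class HistogramExtractSketch {
    public static void main(String[] args) throws Exception {
        int bins = 256;
        double max = 1024.0;                      // assumed full scale of the parameter
        int[] histogram = new int[bins];

        // Synthetic stand-in for one parameter of every event in the file.
        for (int i = 0; i < 100_000; i++) {
            double value = Math.random() * max;
            int bin = Math.min(bins - 1, (int) (value / max * bins));
            histogram[bin]++;
        }

        // Serialize and compress the compact array for transmission.
        ByteArrayOutputStream raw = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(new GZIPOutputStream(raw))) {
            for (int count : histogram) {
                out.writeInt(count);
            }
        }
        System.out.println("compressed histogram: " + raw.size() + " bytes"
                + " (vs. " + bins * 4 + " bytes uncompressed)");
    }
}
```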

17.
Searching protein mass spectrometry data against a sequence database is a key step in proteomics research. Because the protein information in public databases is incomplete, some mass spectrometry data cannot be identified effectively. Building a dedicated mass spectrometry search database from EST sequences of related species can increase the chance of identifying unknown proteins. This paper describes the methods and steps for building a local Mascot database from EST sequences, which extends the range of protein mass spectrometry data that the Mascot search engine can identify, improves the probability of identifying unknown proteins at the database level, and provides a practical bioinformatics analysis technique for proteomics research.
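As a rough illustration only, the sketch below writes EST sequences into a FASTA file of the kind that can be registered with a local Mascot installation as a custom sequence database (Mascot can translate nucleotide databases on the fly). The accessions, sequences and file name are invented; a real workflow would read ESTs from a download or assembly.

```java
import java.io.PrintWriter;
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch: format EST entries as FASTA so they can be indexed as a
// custom local Mascot database. Accessions and sequences are placeholders.
public class EstFastaSketch {
    public static void main(String[] args) throws Exception {
        Map<String, String> ests = new LinkedHashMap<>();
        ests.put("EST000001", "ATGGCTAGCTAGGATCCGATTACA");   // placeholder sequences
        ests.put("EST000002", "ATGCCCGGGAAATTTCCCGGGATG");

        try (PrintWriter out = new PrintWriter("est_database.fasta")) {
            for (Map.Entry<String, String> e : ests.entrySet()) {
                // Mascot parses the accession from the FASTA title line.
                out.println(">" + e.getKey() + " EST sequence");
                out.println(e.getValue());
            }
        }
        System.out.println("Wrote " + ests.size() + " EST entries to est_database.fasta");
    }
}
```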

18.
Iwasaki W, Yamamoto Y, Takagi T. PLoS ONE 2010, 5(12): e15305
In this paper, we describe a server/client literature management system specialized for the life science domain, the TogoDoc system (Togo, pronounced Toe-Go, is a romanization of a Japanese word for integration). The server and the client program cooperate closely over the Internet to provide life scientists with an effective literature recommendation service and efficient literature management. The content-based and personalized literature recommendation helps researchers to isolate interesting papers from the "tsunami" of literature, in which, on average, more than one biomedical paper is added to MEDLINE every minute. Because researchers these days need to cover updates of much wider topics to generate hypotheses using massive datasets obtained from public databases or omics experiments, the importance of having an effective literature recommendation service is rising. The automatic recommendation is based on the content of personal literature libraries of electronic PDF papers. The client program automatically analyzes these files, which are sometimes deeply buried in storage disks of researchers' personal computers. Just saving PDF papers to the designated folders makes the client program automatically analyze and retrieve metadata, rename file names, synchronize the data to the server, and receive the recommendation lists of newly published papers, thus accomplishing effortless literature management. In addition, the tag suggestion and associative search functions are provided for easy classification of and access to past papers (researchers who read many papers sometimes only vaguely remember or completely forget what they read in the past). The TogoDoc system is available for both Windows and Mac OS X and is free. The TogoDoc Client software is available at http://tdc.cb.k.u-tokyo.ac.jp/, and the TogoDoc server is available at https://docman.dbcls.jp/pubmed_recom.
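In the spirit of the content-based recommendation described above (but not the TogoDoc algorithm itself), the sketch below builds a term profile from a user's library and ranks new papers by cosine similarity to it. Tokenization and weighting are deliberately naive, and the texts are invented.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of content-based recommendation: bag-of-words profile from
// the user's library, cosine similarity against candidate new papers.
public class RecommendSketch {

    static Map<String, Double> termVector(String text) {
        Map<String, Double> vector = new HashMap<>();
        for (String token : text.toLowerCase().split("\\W+")) {
            if (!token.isEmpty()) {
                vector.merge(token, 1.0, Double::sum);
            }
        }
        return vector;
    }

    static double cosine(Map<String, Double> a, Map<String, Double> b) {
        double dot = 0, normA = 0, normB = 0;
        for (Map.Entry<String, Double> e : a.entrySet()) {
            dot += e.getValue() * b.getOrDefault(e.getKey(), 0.0);
            normA += e.getValue() * e.getValue();
        }
        for (double v : b.values()) {
            normB += v * v;
        }
        return (normA == 0 || normB == 0) ? 0 : dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        // Profile built from the texts of papers already in the user's library (invented).
        Map<String, Double> profile = termVector(
                "proteomics mass spectrometry protein identification database search");

        Map<String, String> newPapers = Map.of(
                "paper A", "deep learning for image segmentation",
                "paper B", "improved database search engine for mass spectrometry proteomics");

        // Rank candidate papers by similarity to the library profile.
        newPapers.forEach((title, text) -> System.out.printf("%s  score=%.3f%n",
                title, cosine(profile, termVector(text))));
    }
}
```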

19.
Content-Aware Dispatching Algorithms for Cluster-Based Web Servers
Cluster-based Web servers are the leading architecture for highly accessed Web sites. The most common Web cluster architecture consists of replicated server nodes and a Web switch that routes client requests among the nodes. In this paper, we consider content-aware Web switches that can use application-level information to assign client requests. We evaluate the performance of representative state-of-the-art dispatching algorithms for Web switches operating at layer 7 of the OSI protocol stack. Specifically, we consider dispatching algorithms that use only client information, as well as combinations of client and server information, for load sharing, reference locality or service partitioning. We demonstrate through a wide set of simulation experiments that dispatching policies that aim to improve locality in server caches give the best results for traditional Web publishing sites providing static information and some simple database searches. On the other hand, for more recent Web sites providing dynamic and secure services, dispatching policies that aim to share the load are the most effective.
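The sketch below illustrates the two families of layer-7 policies contrasted above in a deliberately simplified form: static requests are partitioned by URL hash so each node's cache sees a stable subset of the content (locality), while dynamic requests go to the least loaded node (load sharing). It is not one of the exact algorithms evaluated in the paper, and the dynamic-request test is an assumption.

```java
import java.util.Arrays;

// Minimal sketch of a content-aware (layer-7) Web switch dispatching rule.
public class Layer7DispatchSketch {

    private final int[] activeRequests;   // current load per back-end node

    Layer7DispatchSketch(int nodes) {
        this.activeRequests = new int[nodes];
    }

    int dispatch(String url) {
        boolean dynamic = url.contains("/cgi-bin/") || url.contains("?"); // crude heuristic
        int node;
        if (dynamic) {
            // Load sharing: pick the node with the fewest active requests.
            node = 0;
            for (int i = 1; i < activeRequests.length; i++) {
                if (activeRequests[i] < activeRequests[node]) {
                    node = i;
                }
            }
        } else {
            // Cache locality: the same URL always maps to the same node.
            node = Math.floorMod(url.hashCode(), activeRequests.length);
        }
        activeRequests[node]++;
        return node;
    }

    public static void main(String[] args) {
        Layer7DispatchSketch webSwitch = new Layer7DispatchSketch(4);
        for (String url : Arrays.asList("/index.html", "/img/logo.png",
                "/cgi-bin/search?q=genome", "/index.html")) {
            System.out.println(url + " -> node " + webSwitch.dispatch(url));
        }
    }
}
```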

20.
We have built a microarray database, StressDB, for management of microarray data from our studies on stress-modulated genes in Arabidopsis. StressDB provides small user groups with a locally installable, web-based relational microarray database. It has a simple and intuitive architecture and has been designed for users of cDNA microarray technology. StressDB uses Windows™ 2000 as the centralized database server with Oracle™ 8i as the relational database management system. It allows users to manage microarray data and related biological information over the Internet using a web browser. The source code is currently available on request from the authors and will soon be made freely available for download from our website at http://arastressdb.cac.psu.edu.
