Similar Articles (20 results)
1.
Recent advances in electron cryomicroscopy instrumentation and single particle reconstruction have created opportunities for high-throughput and high-resolution three-dimensional (3D) structure determination of macromolecular complexes. However, it has become impractical and inefficient to rely on conventional text-file data management and command-line programs to organize and process the increasing volumes of image data required in high-resolution studies. Here, we present a distributed relational database for managing complex datasets and its integration into our high-resolution software package IMIRS (Image Management and Icosahedral Reconstruction System). IMIRS consists of a complete set of modular programs for icosahedral reconstruction organized under a graphical user interface and provides options for user-friendly, step-by-step data processing as well as automatic reconstruction. We show that the integration of data management with processing in IMIRS automates the tedious tasks of data management, ensures data coherence, and facilitates information sharing in a distributed computer and user environment without significantly increasing program execution time. We demonstrate the applicability of IMIRS in icosahedral reconstruction toward high resolution by using it to obtain an 8-Å 3D structure of an intermediate-sized dsRNA virus.
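The abstract does not publish IMIRS's schema, so the following is only a minimal, hypothetical sketch (Python with SQLite; all table and column names invented) of the kind of relational bookkeeping such a system automates: each particle image is tracked with its source micrograph, defocus estimate and processing status, so that distributed jobs stay coherent instead of relying on scattered text files.

```python
import sqlite3

# Hypothetical schema for single-particle bookkeeping (illustration only).
conn = sqlite3.connect("reconstruction.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS particles (
        id INTEGER PRIMARY KEY,
        micrograph TEXT NOT NULL,      -- source micrograph file
        x REAL, y REAL,                -- particle center coordinates (pixels)
        defocus_um REAL,               -- CTF defocus estimate
        status TEXT DEFAULT 'picked'   -- picked / aligned / included
    )
""")
conn.execute(
    "INSERT INTO particles (micrograph, x, y, defocus_um) VALUES (?, ?, ?, ?)",
    ("mic_0001.mrc", 512.0, 488.5, 2.1),
)
conn.commit()

# A processing node can fetch the next unprocessed particle from the shared store.
row = conn.execute(
    "SELECT id, micrograph, x, y FROM particles WHERE status = 'picked' LIMIT 1"
).fetchone()
print(row)
```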

2.
Knowing which proteins interact with each other is essential for understanding how most biological processes operate at the cellular and organismal level and how their perturbation can cause disease. Continuous technical and methodological advances over the last two decades have led to many genome-wide, systematically generated protein–protein interaction (PPI) maps. To help store, visualize, analyze and disseminate these specialized experimental datasets via the web, we developed the freely available Open-source Protein Interaction Platform (openPIP), a customizable web portal designed to host experimental PPI maps. Such a portal is often required to accompany a paper describing the experimental dataset, in addition to depositing the data in a standard repository. No coding skills are required to set up and customize the database and web portal. OpenPIP has been used to build the databases and web portals of two major protein interactome maps, the Human and Yeast Reference Protein Interactome maps (HuRI and YeRI, respectively). OpenPIP is freely available as a ready-to-use Docker container for hosting and sharing PPI data with the scientific community at http://openpip.baderlab.org/, and the source code can be downloaded from https://github.com/BaderLab/openPIP/.
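Experimental PPI maps of this kind are commonly exchanged in the PSI-MI TAB (MITAB) tab-separated format, whose first two columns hold the identifiers of the two interactors. As a minimal sketch (the file name is a placeholder, and this is generic MITAB handling rather than openPIP's own code), loading such a file into a set of unique interaction pairs looks like this:

```python
import csv

interactions = set()
with open("interactome.mitab", newline="") as fh:   # placeholder file name
    for row in csv.reader(fh, delimiter="\t"):
        if row[0].startswith("#"):                  # skip the MITAB header line
            continue
        a, b = row[0], row[1]                       # interactor A and B identifiers
        interactions.add(tuple(sorted((a, b))))     # store each pair once, orderless

print(f"{len(interactions)} unique protein-protein interactions")
```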

3.
Marla S, Singh VK. In Silico Biology, 2007, 7(4-5): 543-545
The recent sequencing of the genomes of several microorganisms provides access to huge volumes of data stored in various databases. This has resulted in the development of various computational and visualization tools to aid in the retrieval and analysis of these data. User-friendly genome data mapping and visualization tools enable researchers to closely examine various features of genes and to draw inferences from the displayed data efficiently. PGV (Prokaryotic Genome Viewer) is a Java-based web application capable of generating high-quality interactive circular chromosome maps. By simply rolling the mouse over a region of interest on the displayed map, the user gains access to features such as feature labeling, multi-fold zooming, image rotation and hyperlinks to different information resources. The tool is capable of instantaneously generating maps from user-supplied sequence data.
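PGV itself is a Java web application; purely to illustrate the underlying idea of a circular chromosome map (genomic coordinates mapped to angles around a circle), here is a small Python sketch with invented gene features and an assumed genome length:

```python
import numpy as np
import matplotlib.pyplot as plt

genome_length = 4_600_000  # assumed chromosome size in bp (hypothetical)
genes = [("geneA", 10_000, 90_000),
         ("geneB", 1_200_000, 1_260_000),
         ("geneC", 3_000_000, 3_150_000)]  # invented features

ax = plt.subplot(projection="polar")
for name, start, end in genes:
    # Map base-pair coordinates to angles on the circle.
    theta = np.linspace(2 * np.pi * start / genome_length,
                        2 * np.pi * end / genome_length, 50)
    ax.plot(theta, np.full_like(theta, 1.0), linewidth=8)  # gene drawn as an arc
    ax.annotate(name, (theta.mean(), 1.08))
ax.set_ylim(0, 1.2)
ax.set_yticks([])  # only the angular position carries meaning
plt.savefig("circular_map.png", dpi=150)
```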

4.
Cryo-electron microscopy (cryoEM) entails flash-freezing a thin layer of sample on a support and then visualizing the sample in its frozen hydrated state by transmission electron microscopy (TEM). This can be achieved with a very small quantity of protein and in the buffer of choice, without the use of any stain, which is very useful for determining structure-function correlations of macromolecules. When combined with single-particle image processing, the technique has found widespread use for 3D structure determination of purified macromolecules. The protocol presented here explains how to perform cryoEM and examines the causes of the most commonly encountered problems to support rational troubleshooting; following all of these steps should lead to the acquisition of high-quality cryoEM images. The technique requires access to an electron microscope and a vitrification device. Knowledge of 3D reconstruction concepts and software is also needed for computerized image processing. Importantly, high-quality results depend on finding the right purification conditions, leading to a uniform population of structurally intact macromolecules. The ability of cryoEM to visualize macromolecules, combined with the versatility of single-particle image processing, has proven very successful for structure determination of large proteins and macromolecular machines in their near-native state, identification of their multiple components by 3D difference mapping, and creation of pseudo-atomic structures by docking of X-ray structures. The relentless development of cryoEM instrumentation and image processing techniques over the last 30 years has made it possible to generate de novo 3D reconstructions at atomic resolution.

5.
The NIDDK Information Network (dkNET; http://dknet.org) was launched to serve the needs of basic and clinical investigators in metabolic, digestive and kidney disease by facilitating access to research resources that advance the mission of the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK). By research resources, we mean the multitude of data, software tools, materials, services, projects and organizations available to researchers in the public domain. Most of these are accessed via web-accessible databases or web portals, each developed, designed and maintained by numerous different projects, organizations and individuals. While many of the large government-funded databases, maintained by agencies such as the European Bioinformatics Institute and the National Center for Biotechnology Information, are well known to researchers, many more that have been developed by and for the biomedical research community are unknown or underutilized. At least part of the problem is the nature of dynamic databases, which are considered part of the “hidden” web, that is, content that is not easily accessed by search engines. dkNET was created specifically to address the challenge of connecting researchers to research resources via these types of community databases and web portals. dkNET functions as a “search engine for data”, searching across millions of database records contained in hundreds of biomedical databases developed and maintained by independent projects around the world. A primary focus of dkNET is the centers and projects specifically created to provide high-quality data and resources to NIDDK researchers. Through the novel data ingest process used in dkNET, additional data sources can easily be incorporated, allowing it to scale with the growth of digital data and the needs of the dkNET community. Here, we provide an overview of the dkNET portal and its functions. We show how dkNET can be used to address a variety of use cases that involve searching for research resources.

6.
DBToolkit: processing protein databases for peptide-centric proteomics
SUMMARY: DBToolkit is a user-friendly, easily extensible tool for processing protein sequence databases into peptide-centric sequence databases. This processing is primarily aimed at enhancing the useful information content of these databases for use as optimized search spaces for the efficient identification of peptide fragmentation spectra obtained by mass spectrometry. In addition, DBToolkit can be used to reliably solve a range of other typical sequence-database processing tasks. AVAILABILITY: DBToolkit is open source under the GNU GPL. The source code, full user and developer documentation and cross-platform binaries are freely downloadable from the project website at http://genesis.UGent.be/dbtoolkit/. CONTACT: lennart.martens@UGent.be
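The core idea behind peptide-centric database processing is in-silico digestion: expanding each protein into the peptides a protease would produce, which then form the search space for spectrum identification. The sketch below illustrates this concept with the common trypsin rule (cleave after K or R, except before P); it is a generic illustration, not DBToolkit's own implementation, whose digestion rules are configurable.

```python
import re

def tryptic_peptides(sequence, min_len=6, missed_cleavages=0):
    """Return in-silico tryptic peptides of a protein sequence."""
    # Split after K or R when not followed by P (standard trypsin rule).
    fragments = re.split(r"(?<=[KR])(?!P)", sequence)
    peptides = []
    for i in range(len(fragments)):
        # Join up to `missed_cleavages` adjacent fragments.
        for j in range(i, min(i + missed_cleavages + 1, len(fragments))):
            pep = "".join(fragments[i:j + 1])
            if len(pep) >= min_len:
                peptides.append(pep)
    return peptides

print(tryptic_peptides("MKWVTFISLLLLFSSAYSRGVFRRDTHK", min_len=4))
```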

7.
The application of novel techniques in genetic engineering and genomics has resulted in an information explosion. The three major genome databases under the International Nucleotide Sequence Database Collaboration (NCBI, DDBJ and EMBL) provide a convenient platform for the submission of sequences, which they share among themselves. Many institutes in India under the Indian Council of Agricultural Research have scientists working on biotechnology and bioinformatics research. Their studies generate massive amounts of data on the biological information of plants, animals, insects, microbes and fisheries. These scientists depend on NCBI, EMBL, DDBJ and other portals for sequence submission, analysis and other data mining tasks. Various limitations imposed on these sites, together with poor connectivity, prevent them from conducting their studies on these open-domain databases. The valuable information they generate needs to be shared with the scientific community to eliminate duplication of effort and expedite progress toward new findings. A secure common submission portal with user-friendly interfaces, integrated help and error-checking facilities has been developed, with a backend database that consists of a union of the items available in the above-mentioned databases. Standard database management concepts have been employed for systematic storage management. Extensive hardware resources in the form of a high-performance computing facility are being installed for deployment of this portal.

Availability

http://cabindb.iasri.res.in:8080/sequence_portal/

8.
Peña C, Malm T. PLoS ONE, 2012, 7(6): e39071
An ever-growing number of molecular phylogenetic studies is being published, due in part to the advent of new techniques that allow cheap and quick DNA sequencing. Hence, the demand is increasing for relational databases with which to manage and annotate the accumulating DNA sequences, genes, voucher specimens and associated biological data. In addition, a user-friendly interface is necessary for easy integration and management of the data stored in the database back-end. Available databases allow management of a wide variety of biological data; however, most database systems are not specifically constructed to serve as an organizational tool for researchers working in phylogenetic inference. We here report new software facilitating easy management of voucher and sequence data, consisting of a relational database back-end accessed through a graphical user interface in a web browser. The application, VoSeq, includes tools for creating molecular datasets of DNA or amino acid sequences ready to be used in commonly used phylogenetic software such as RAxML, TNT, MrBayes and PAUP, as well as for creating tables ready for publication. It also has built-in BLAST capabilities against all DNA sequences stored in VoSeq as well as sequences in NCBI GenBank. By using mash-ups and calls to web services, VoSeq allows easy integration with public services such as Yahoo! Maps, Flickr, Encyclopedia of Life (EOL) and GBIF (by generating data dumps that can be processed with GBIF's Integrated Publishing Toolkit).
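To make the dataset-building step concrete: the essence is pulling voucher-linked sequences from a relational store and writing them in a format a phylogenetics package accepts, such as FASTA for RAxML. A self-contained sketch follows; the table, column names and records are invented for illustration and are not VoSeq's actual schema.

```python
import sqlite3

# Hypothetical voucher/sequence schema (illustration only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sequences (voucher_code TEXT, gene TEXT, sequence TEXT)")
conn.execute("INSERT INTO sequences VALUES ('CP01-17', 'COI', 'ACGTACGTACGT')")

# Export all COI sequences as a FASTA dataset, one record per voucher.
rows = conn.execute(
    "SELECT voucher_code, gene, sequence FROM sequences WHERE gene = ?", ("COI",)
).fetchall()
with open("COI_dataset.fasta", "w") as out:
    for voucher, gene, seq in rows:
        out.write(f">{voucher}_{gene}\n{seq}\n")
```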

9.
A database was used for data management and inter-program communication in an image processing and three-dimensional reconstruction program suite for biological bundles. The programs were modified from the MRC crystallographic package. The database server works with local and remote programs and datasets, allows simultaneous requests from multiple clients, and maintains multiple databases and data tables within them. It has built-in security for data access. Several graphical user interfaces are available to view and/or edit data tables. In addition, a FORTRAN interface and function libraries were written to communicate with the image processing software. The data management overhead is inexpensive, requiring only narrow network bandwidth. The system easily handles several data tables with over 1000 entries.

10.
Newcomb WW, Homa FL, Brown JC. Journal of Virology, 2005, 79(16): 10540-10546
DNA enters the herpes simplex virus capsid by way of a ring-shaped structure called the portal. Each capsid contains a single portal, located at a unique capsid vertex, that is composed of 12 UL6 protein molecules. The position of the portal requires that capsid formation take place in such a way that a portal is incorporated into one of the 12 capsid vertices and excluded from all other locations, including the remaining 11 vertices. Since initiation or nucleation of capsid formation is a unique step in the overall assembly process, involvement of the portal in initiation has the potential to cause its incorporation into a unique vertex. In such a mode of assembly, the portal would need to be involved in initiation but unable to be inserted during subsequent assembly steps. We used an in vitro capsid assembly system to test whether the portal is involved selectively in initiation. Portal incorporation was compared in capsids assembled from reactions in which (i) portals were present at the beginning of the assembly process and (ii) portals were added after assembly was under way. The results showed that portal-containing capsids were formed only if portals were present at the outset of assembly. A delay caused formation of capsids lacking portals. The findings indicate that if portals are present in reaction mixtures, a portal is incorporated during initiation or another early step in assembly. If no portals are present, assembly is initiated in another, possibly related, way that does not involve a portal.

11.
Objective: A diabetes patient web portal allows patients to access their personal health record and may improve diabetes outcomes; however, patient adoption is slow. We aimed to gain insight into patients' experiences with a web portal to understand how the portal is being used and how patients perceive its content, and to assess whether a redesign of the portal might be needed. Results: 632 patients (42.1%) returned the questionnaire. Their mean age was 59.7 years, 63.1% were male and 81.8% had type 2 diabetes. 413 (65.3%) were persistent users and 34.7% were early quitters. In the multivariable analysis, insulin use (OR 2.07; 95% CI [1.18–3.62]), more frequent hyperglycemic episodes (OR 1.30; 95% CI [1.14–1.49]) and better diabetes knowledge (OR 1.02; 95% CI [1.01–1.03]) increased the odds of being a persistent user. Persistent users perceived the usefulness of the patient portal significantly more favorably. However, they also more decisively declared that the patient portal is not helpful in supporting lifestyle changes. Early quitters found significantly more items not applicable to their situation compared with persistent users. Both persistent users (69.8%) and early quitters (58.8%) would prefer a reminder function for scheduled visits. About 60% of both groups wanted information about medication and side effects in their portal. Conclusions: The diabetes patient web portal might be improved significantly by taking into account patients' experiences and attitudes. We propose creating separate portals for patients on insulin and those not on insulin.
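The adjusted odds ratios reported above are exponentiated coefficients of a multivariable logistic regression (OR = exp(beta)). Purely as an illustration of how such estimates are obtained, and using synthetic data with invented variable names rather than the study's data, a sketch with statsmodels looks like this:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 600
insulin = rng.integers(0, 2, n)        # insulin use (yes/no), synthetic
episodes = rng.poisson(2, n)           # hyperglycemic episode frequency, synthetic
knowledge = rng.normal(60, 15, n)      # diabetes knowledge score, synthetic
logit = -1 + 0.7 * insulin + 0.25 * episodes + 0.02 * knowledge
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # persistent user

X = sm.add_constant(np.column_stack([insulin, episodes, knowledge]))
res = sm.Logit(y, X).fit(disp=False)
print(np.exp(res.params))       # odds ratios = exp(beta)
print(np.exp(res.conf_int()))   # 95% confidence intervals
```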

12.

Background  

Genome-wide data analysis very often requires the interoperation of multiple databases and analytic tools. A large number of genome databases and bioinformatics applications are available through the web, but it is difficult to automate their interoperation because: 1) the platforms on which the applications run are heterogeneous; 2) their web interfaces are not machine-friendly; 3) they use non-standard formats for data input and output; 4) they do not exploit standards to define application interfaces and message exchange; and 5) existing protocols for remote messaging are often not firewall-friendly. To overcome these issues, web services have emerged as a standard XML-based model for message exchange between heterogeneous applications. Web services engines have been developed to manage the configuration and execution of web services workflows.
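As a concrete example of the machine-friendly database access this abstract motivates, NCBI's public E-utilities REST interface lets a program fetch records directly, with no HTML screen-scraping. The sketch below retrieves a GenBank record as FASTA using only the Python standard library (the accession is just an example):

```python
from urllib.parse import urlencode
from urllib.request import urlopen

# Query NCBI E-utilities (efetch) for a nucleotide record in FASTA format.
params = urlencode({
    "db": "nucleotide",
    "id": "NM_000546",     # example accession: human TP53 mRNA
    "rettype": "fasta",
    "retmode": "text",
})
url = f"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?{params}"
with urlopen(url) as response:
    print(response.read().decode()[:200])  # first lines of the FASTA record
```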

13.
The need for effective collaboration tools is growing as multidisciplinary proteome-wide projects and distributed research teams become more common. The resulting data are often quite disparate, stored in separate locations, and not contextually related. The Collaborative Molecular Modeling Environment (C-ME) is an interactive community-based collaboration system that allows researchers to organize information, visualize data on a two-dimensional (2-D) or three-dimensional (3-D) basis, and share and manage that information with collaborators in real time. C-ME stores the information in industry-standard databases that are immediately accessible, with appropriate permissions, within the computer network directory service or anonymously across the internet through the C-ME application or a web browser. The system addresses two important aspects of collaboration: context and information management. C-ME allows a researcher to use a 3-D atomic structure model or a 2-D image as a contextual basis on which to attach and share annotations tied to specific atoms or molecules or to specific regions of a 2-D image. These annotations provide additional information about the atomic structure or image data that can then be evaluated, amended or added to by other project members.

14.
Development of NPACI Grid Application Portals and Portal Web Services
Grid portals and services are emerging as convenient mechanisms for providing the scientific community with familiar and simplified interfaces to the Grid. Our experience in implementing computational grid portals, and the services needed to support them, has led to the creation of GridPort: a unique, integrated, layered software system for building portals and hosting portal services that access Grid services. The usefulness of this system has been demonstrated by the implementation of several application portals. The system has several unique features: the software is portable and runs on most web servers; written in Perl/CGI, it is easy to support and modify; a single API provides access to a host of Grid services; it is flexible and adaptable; it supports single login between multiple portals; and portals built with it may run across multiple sites and organizations. In this paper we summarize our experiences in building this system, including our philosophy and design choices, and we describe the software we are building to support portal development and portal services. Finally, we discuss our experience in developing the GridPort Client Toolkit in support of remote web client portals and Grid web services.

15.
16.
This paper describes a database of cell signaling enzymes. Our web database offers methods to study, interpret and compare cell-signaling enzymes. Searching and retrieving data from this database has been made easy and user-friendly, and it is well integrated with other related databases. We believe end users will benefit from this database. AVAILABILITY: http://www.sastra.edu/dcse/index.html.

17.
In spite of its recent achievements, single particle electron cryomicroscopy (cryoEM) has not been widely used to study proteins smaller than 100 kDa, although this is a highly desirable application of the technique. One fundamental limitation is that images of small proteins embedded in vitreous ice do not contain adequate features for accurate image alignment. We describe a general strategy to overcome this limitation by selecting a fragment antigen-binding (Fab) that forms a stable and rigid complex with a target protein, thus providing a defined feature for accurate image alignment. Using this approach, we determined a three-dimensional structure of an ~65 kDa protein by single particle cryoEM. Because Fabs can be readily generated against a wide range of proteins by phage display, this approach is generally applicable to the study of many small proteins by single particle cryoEM.

18.
SNPper: retrieval and analysis of human SNPs
MOTIVATION: Single nucleotide polymorphisms (SNPs) are an increasingly important tool for the study of the human genome. SNPs can be used as markers to create high-density genetic maps, as causal candidates for diseases, or to reconstruct the history of our genome. SNP-based studies rely on the availability of large numbers of validated, high-frequency SNPs whose positions on the chromosomes are known with precision. Although large collections of SNPs exist in public databases, researchers need tools to effectively retrieve and manipulate them. RESULTS: We describe the implementation and usage of SNPper, a web-based application that automates the tasks of extracting SNPs from public databases, analyzing them and exporting them in formats suitable for subsequent use. Our application is oriented toward the needs of candidate-gene, whole-genome and fine-mapping studies, and provides several flexible ways to present and export the data. The application has been publicly available for over a year, and has received positive user feedback and high usage levels.

19.
In recent years, the deluge of complex molecular and cellular microscopy images has created compelling challenges for the image computing community. There has been an increasing focus on developing novel image processing, data mining, database and visualization techniques to extract, compare, search and manage the biological knowledge in these data-intensive problems. This emerging area of bioinformatics can be called 'bioimage informatics'. This article reviews the advances in this field from several aspects, including applications, key techniques, and available tools and resources. Application examples such as high-throughput/high-content phenotyping and atlas building for model organisms demonstrate the importance of bioimage informatics. The techniques essential to the success of these applications, such as bioimage feature identification, segmentation and tracking, registration, annotation, mining, image data management and visualization, are then summarized, along with a brief overview of the available bioimage databases, analysis tools and other resources.
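To make one of the core techniques named in the review concrete, here is a minimal segmentation example with scikit-image: Otsu thresholding followed by connected-component labeling to count object-like regions in a bundled sample image. This illustrates the general technique, not any specific pipeline from the review.

```python
from skimage import data, filters, measure

image = data.coins()                        # sample grayscale image from scikit-image
threshold = filters.threshold_otsu(image)   # global intensity threshold (Otsu's method)
binary = image > threshold                  # foreground mask
labels = measure.label(binary)              # connected-component labeling
print(f"{labels.max()} segmented objects")
```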

20.
With the establishment of high-throughput (HT) screening methods there is an increasing need for automatic analysis methods. Here we present RReportGenerator, a user-friendly portal for automatic routine analysis using the statistical platform R and Bioconductor. RReportGenerator is designed to analyze data using predefined analysis scenarios via a graphical user interface (GUI). A report in PDF format combining text, figures and tables is automatically generated, and results may be exported. To demonstrate suitable analysis tasks we provide direct web access to a collection of analysis scenarios for summarizing data from transfected cell arrays (TCA), segmentation of CGH data, and microarray quality control and normalization. AVAILABILITY: RReportGenerator, a user manual and a collection of analysis scenarios are available under a GNU public license at http://www-bio3d-igbmc.u-strasbg.fr/~wraff
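RReportGenerator itself is built on R and Bioconductor; as a language-neutral sketch of the same idea, scripted generation of a multi-page PDF report combining figures and summary text, here is a minimal Python analogue using matplotlib (synthetic placeholder data, not a predefined RReportGenerator scenario):

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages

values = np.random.normal(size=200)  # placeholder analysis results

with PdfPages("report.pdf") as pdf:
    fig, ax = plt.subplots()
    ax.hist(values, bins=20)
    ax.set_title("QC summary: value distribution")
    pdf.savefig(fig)                 # page 1: a figure

    fig2 = plt.figure()
    fig2.text(0.1, 0.8, f"n = {values.size}, mean = {values.mean():.2f}")
    pdf.savefig(fig2)                # page 2: a text summary
```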
