Similar Documents (20 results)
1.
Computational Grids [17,25] have become an important asset in large-scale scientific and engineering research. By providing a set of services that allow a widely distributed collection of resources to be tied together into a relatively seamless computing framework, teams of researchers can collaborate to solve problems that they could not have attempted before. Unfortunately, the task of building Grid applications remains extremely difficult because there are few tools available to support developers. To build reliable and re-usable Grid applications, programmers must be equipped with a programming framework that hides the details of most Grid services and provides the developer with a consistent, simple model in which applications can be composed from well-tested, reliable sub-units. This paper describes experiences with using a software component framework for building Grid applications. The framework, which is based on the DOE Common Component Architecture (CCA) [1,2,3,8], allows individual components to export function/service interfaces that can be remotely invoked by other components. The framework also provides a simple messaging/event system for asynchronous notification between application components. The paper also describes how the emerging Web-services [52] model fits with a component-oriented application design philosophy. To illustrate the connection between Web services and Grid application programming, we describe a simple design pattern for application factory services which can be used to simplify the task of building reliable Grid programs. Finally, we address several issues of Grid programming that are better understood from the perspective of Peer-to-Peer (P2P) systems. In particular, we describe how models for collaboration and resource sharing fit well with many Grid application scenarios.
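The component pattern described above — components exporting service interfaces ("provides" ports) that other components invoke through "uses" ports, plus an event system for asynchronous notification — can be illustrated with a minimal sketch. This is plain Python with invented class and port names, not the actual CCA API:

```python
# Minimal sketch of a CCA-style component model: components export
# "provides" ports (service interfaces) that other components invoke
# through "uses" ports, and an event channel gives asynchronous
# notification. All names here are illustrative, not the real CCA API.

class EventChannel:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, event):
        for cb in self._subscribers:
            cb(event)

class SolverComponent:
    """Exports a 'solve' provides-port that other components can invoke."""
    def __init__(self, events):
        self._events = events

    def solve(self, data):
        result = sum(data) / len(data)   # stand-in for a real computation
        self._events.publish(("solve_done", result))
        return result

class DriverComponent:
    """Holds a uses-port that the framework wires to a provides-port."""
    def __init__(self):
        self.solver = None               # uses-port, filled at composition time
        self.notifications = []

    def connect_solver(self, port, events):
        self.solver = port
        events.subscribe(self.notifications.append)

    def run(self, data):
        return self.solver.solve(data)

# Composition: the framework (here, plain code) wires the ports together.
events = EventChannel()
solver = SolverComponent(events)
driver = DriverComponent()
driver.connect_solver(solver, events)
answer = driver.run([1.0, 2.0, 3.0])
```

The key design point mirrored here is that the driver never constructs or names the solver itself; the composition layer injects the connection, which is what lets a framework rewire components dynamically.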

2.
Newcomb WW, Homa FL, Brown JC. Journal of Virology 2005, 79(16):10540-10546
DNA enters the herpes simplex virus capsid by way of a ring-shaped structure called the portal. Each capsid contains a single portal, located at a unique capsid vertex, that is composed of 12 UL6 protein molecules. The position of the portal requires that capsid formation take place in such a way that a portal is incorporated into one of the 12 capsid vertices and excluded from all other locations, including the remaining 11 vertices. Since initiation or nucleation of capsid formation is a unique step in the overall assembly process, involvement of the portal in initiation has the potential to cause its incorporation into a unique vertex. In such a mode of assembly, the portal would need to be involved in initiation but not able to be inserted in subsequent assembly steps. We have used an in vitro capsid assembly system to test whether the portal is involved selectively in initiation. Portal incorporation was compared in capsids assembled from reactions in which (i) portals were present at the beginning of the assembly process and (ii) portals were added after assembly was under way. The results showed that portal-containing capsids were formed only if portals were present at the outset of assembly. A delay caused formation of capsids lacking portals. The findings indicate that if portals are present in reaction mixtures, a portal is incorporated during initiation or another early step in assembly. If no portals are present, assembly is initiated in another, possibly related, way that does not involve a portal.

3.
4.
Several systems have been presented in recent years to manage the complexity of large microarray experiments. Although good results have been achieved, most systems fall short in one or more areas. A Grid-based approach may provide a shared, standardized and reliable solution for the storage and analysis of biological data, in order to maximize the results of experimental efforts. A Grid framework has therefore been adopted, given the need to remotely access large amounts of distributed data as well as to scale computational performance for terabyte datasets. Two different biological studies were planned to highlight the benefits that can emerge from our Grid-based platform. The described environment relies on storage services and computational services provided by the gLite Grid middleware. The Grid environment is also able to exploit the added value of metadata to let users better classify and search experiments. A state-of-the-art Grid portal has been implemented to hide the complexity of the framework from end users and to enable them to easily access the available services and data. The functional architecture of the portal is described. As a first test of system performance, a gene expression analysis was performed on a dataset of Affymetrix GeneChip Rat Expression Array RAE230A, from the ArrayExpress database. The analysis comprises three steps: (i) group opening and image set uploading, (ii) normalization, and (iii) model-based gene expression (based on the PM/MM difference model). Two different Linux versions (sequential and parallel) of the dChip software have been developed to implement the analysis and have been tested on a cluster. The results show that parallelizing the analysis process and executing parallel jobs on distributed computational resources actually improves performance.
Moreover, the Grid environment has been tested both for uploading and accessing distributed datasets through the Grid middleware and for its ability to manage the execution of jobs on distributed computational resources. Results from the Grid test will be discussed in a further paper.
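The PM/MM difference model of step (iii) can be approximated, in greatly simplified form, as averaging perfect-match minus mismatch probe intensities over a probe set. The real dChip model fits per-probe affinities; this sketch on invented toy data only conveys the basic PM-MM idea:

```python
# Greatly simplified PM/MM difference expression summary: for each
# probe set, average (PM - MM) over all probes and arrays. The real
# dChip model-based expression index fits per-probe affinities; this
# only illustrates the PM-MM difference idea on toy data.

def pm_mm_expression(pm, mm):
    """pm, mm: per-probe lists of intensities, one inner list per probe."""
    diffs = [p - m
             for probe_pm, probe_mm in zip(pm, mm)
             for p, m in zip(probe_pm, probe_mm)]
    return sum(diffs) / len(diffs)

# Toy probe set: 3 probes measured on 2 arrays (values invented).
pm = [[120.0, 150.0], [200.0, 210.0], [90.0, 100.0]]
mm = [[100.0, 120.0], [150.0, 160.0], [80.0, 85.0]]
expr = pm_mm_expression(pm, mm)
```

In the parallel dChip version described above, a computation of this kind would be distributed by partitioning probe sets or arrays across worker jobs, since each probe set's summary is independent of the others.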

5.
Software Component Frameworks are well known in the commercial business application world and now this technology is being explored with great interest as a way to build large-scale scientific applications on parallel computers. In the case of Grid systems, the current architectural model is based on the emerging web services framework. In this paper we describe progress that has been made on the Common Component Architecture model (CCA) and discuss its success and limitations when applied to problems in Grid computing. Our primary conclusion is that a component model fits very well with a services-oriented Grid, but the model of composition must allow for a very dynamic (both in space and in time) control of composition. We note that this adds a new dimension to conventional service workflow and it extends the “Inversion of Control” aspects of most component systems. Dennis Gannon is a professor of Computer Science at Indiana University. He received his Ph.D. in Computer Science from the University of Illinois in 1980 and his Ph.D. in Mathematics from the University of California in 1974. From 1980 to 1985, he was on the faculty at Purdue University. His research interests include software tools for high performance distributed systems and problem solving environments for scientific computation. Sriram Krishnan received his Ph.D. in Computer Science from Indiana University in 2004. He is currently in the Grid Development Group at the San Diego Supercomputer Center where he is working on designing a Web services based architecture for biomedical applications that is secure and scalable, and is conducive to the creation of complex workflows. He received my undergraduate degree in Computer Engineering from the University of Mumbai, India. Liang Fang is a Ph.D. student in Computer Science at Indiana University. His research interests include Grid computing, Web services, portals, their security and scalability issues. 
He is a Research Assistant in Computer Science at Indiana University, currently responsible for investigating authorization and other security solutions to the project of Linked Environments for Atmospheric Discovery (LEAD). Gopi Kandaswamy is a Ph.D. student in the Computer Science Department at Indiana University where he is current a Research Assistant. His research interests include Web services and workflow systems for the Grid. Yogesh Simmhan received his B.E. degree in Computer Science from Madras University, India in 2000, and is a doctoral candidate in Computer Science at Indiana University. He is currently working as a Research Assistant at Indiana University, investigating data management issues in the LEAD project. His interests lie in data provenance for workflow systems and its use in data quality estimation. Aleksander Slominski is a Ph.D. student in the Computer Science at Indiana University. His research interests include Grid and Web Services, streaming XML Pull Parsing and performance, Grid security, asynchronous messaging, events, and notifications brokers, component technologies, and workflow composition. He is currently working as a Research Assistant investigating creation and execution of dynamic workflows using Grid Process Execution Language (GPEL) based on WS-BPEL.  相似文献   

6.
The exponential increase of image data in high-resolution reconstructions by electron cryomicroscopy (cryoEM) has posed a need for efficient data management solutions in addition to powerful data processing procedures. Although relational databases and web portals are commonly used to manage sequences and structures in biological research, their application in cryoEM has been limited by the complexity of accomplishing the dual tasks of interacting with proprietary software and simultaneously providing data access to users without database knowledge. Here, we report our results in developing a web portal to the SQL image databases used by the Image Management and Icosahedral Reconstruction System (IMIRS) to manage cryoEM images for subnanometer-resolution reconstructions. Fundamental issues related to the design and deployment of web portals to image databases are described. A web browser-based user interface was designed to accomplish data reporting and other database-related services, including user authentication, data entry, graph-based data mining, and various query and reporting tasks with interactive image manipulation capabilities. With an integrated web portal, IMIRS represents the first cryoEM application that incorporates both web-based data reporting tools and a complete set of data processing modules. Our examples should thus provide general guidelines applicable to other cryoEM technology development efforts.
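The query and reporting tasks such a portal exposes reduce to SQL over an image-metadata schema. The sketch below uses sqlite3 as a stand-in for the production SQL server, with an invented micrograph table that is not the actual IMIRS schema:

```python
import sqlite3

# Hypothetical image-metadata table standing in for an IMIRS-style SQL
# database; sqlite3 replaces the production server for illustration,
# and the schema and values are invented.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE micrograph (
    id INTEGER PRIMARY KEY,
    defocus_um REAL,
    magnification INTEGER,
    status TEXT)""")
rows = [(1, 1.2, 60000, "boxed"),
        (2, 2.5, 60000, "raw"),
        (3, 1.8, 80000, "boxed")]
conn.executemany("INSERT INTO micrograph VALUES (?, ?, ?, ?)", rows)

# A typical portal query: micrographs ready for reconstruction,
# filtered by defocus range.
ready = conn.execute(
    "SELECT id FROM micrograph "
    "WHERE status = 'boxed' AND defocus_um BETWEEN 1.0 AND 2.0 "
    "ORDER BY id").fetchall()
```

A web portal layer would render such result sets as report tables and wire the filter values to form fields, so users query the database without writing SQL themselves.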

7.
The LTER Grid Pilot Study was conducted by the National Center for Supercomputing Applications, the University of New Mexico, and Michigan State University, to design and build a prototype grid for the ecological community. The featured grid application, the Biophony Grid Portal, manages acoustic data from field sensors and allows researchers to conduct real-time digital signal processing analysis on high-performance systems via a web-based portal. Important characteristics addressed during the study include the management, access, and analysis of a large set of field collected acoustic observations from microphone sensors, single signon, and data provenance. During the development phase of this project, new features were added to standard grid middleware software and have already been successfully leveraged by other, unrelated grid projects. This paper provides an overview of the Biophony Grid Portal application and requirements, discusses considerations regarding grid architecture and design, details the technical implementation, and summarizes key experiences and lessons learned that are generally applicable to all developers and administrators in a grid environment.

8.
In pervasive computing, information providers usually cope with the environment's heterogeneity by publishing Web services that give access to their data. Therefore, to support applications that need to combine data from a diverse range of sources, pervasive computing requires middleware to query multiple Web services. Previous work has investigated the generation of optimal query plans. In this paper, however, we propose a query execution model, called PQModel, to optimize the process of query execution over Web services. In other words, we attempt to improve query efficiency by optimizing the execution of query plans.
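One general way execution (as opposed to planning) can be optimized is by invoking service calls that have no data dependency concurrently rather than one after another. The sketch below uses hypothetical service stubs and is not the PQModel algorithm itself, only an illustration of that idea:

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch: execute independent steps of a query plan over Web services
# concurrently instead of sequentially. fetch_weather / fetch_traffic
# are hypothetical stubs for real service calls; this illustrates the
# general execution-level optimization, not PQModel itself.

def fetch_weather(city):
    return {"city": city, "temp_c": 21}      # stub for a remote call

def fetch_traffic(city):
    return {"city": city, "delay_min": 7}    # stub for a remote call

def execute_plan(city):
    # The two calls have no data dependency, so the plan runs them in
    # parallel and joins the results; total latency is then the max of
    # the two call latencies rather than their sum.
    with ThreadPoolExecutor(max_workers=2) as pool:
        weather = pool.submit(fetch_weather, city)
        traffic = pool.submit(fetch_traffic, city)
        return {**weather.result(), **traffic.result()}

result = execute_plan("Leiden")
```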

9.
Icosahedral double-stranded DNA viruses use a single portal for genome delivery and packaging. The extensive structural similarity revealed by such portals in diverse viruses, as well as their invariable positioning at a unique icosahedral vertex, led to the consensus that a particular, highly conserved vertex-portal architecture is essential for viral DNA translocations. Here we present an exception to this paradigm by demonstrating that genome delivery and packaging in the virus Acanthamoeba polyphaga mimivirus occur through two distinct portals. By using high-resolution techniques, including electron tomography and cryo-scanning electron microscopy, we show that Mimivirus genome delivery entails a large-scale conformational change of the capsid, whereby five icosahedral faces open up. This opening, which occurs at a unique vertex of the capsid that we coined the "stargate", allows for the formation of a massive membrane conduit through which the viral DNA is released. A transient aperture centered at an icosahedral face distal to the DNA delivery site acts as a non-vertex DNA packaging portal. In conjunction with comparative genomic studies, our observations imply a viral packaging pathway akin to bacterial DNA segregation, which might be shared by diverse internal membrane-containing viruses.

10.

11.
EMBnet is a consortium of collaborating bioinformatics groups located mainly within Europe (http://www.embnet.org). Each member country is represented by a 'node', a group responsible for the maintenance of local services for their users (e.g. education, training, software, database distribution, technical support, helpdesk). Among these services a web portal with links and access to locally developed and maintained software is essential and different for each node. Our web portal targets biomedical scientists in Switzerland and elsewhere, offering them access to a collection of important sequence analysis tools mirrored from other sites or developed locally. We describe here the Swiss EMBnet node web site (http://www.ch.embnet.org), which presents a number of original services not available anywhere else.

12.
MOTIVATION: The (my)Grid project aims to exploit Grid technology, with an emphasis on the Information Grid, and to provide middleware layers that make it appropriate for the needs of bioinformatics. (my)Grid is building high-level services for data and application integration such as resource discovery, workflow enactment and distributed query processing. Additional services are provided to support the scientific method and best practice found at the bench but often neglected at the workstation, notably provenance management, change notification and personalisation. RESULTS: We give an overview of these services and their metadata. In particular, we describe the semantically rich metadata, expressed using ontologies, that is necessary to discover, select and compose services into dynamic workflows.

13.
The herpes simplex virus type 1 (HSV-1) portal complex is a ring-shaped structure located at a single vertex in the viral capsid. Composed of 12 U(L)6 protein molecules, the portal functions as a channel through which DNA passes as it enters the capsid. The studies described here were undertaken to clarify how the portal becomes incorporated as the capsid is assembled. We tested the idea that an intact portal may be donated to the growing capsid by way of a complex with the major scaffolding protein, U(L)26.5. Soluble U(L)26.5-portal complexes were found to assemble when purified portals were mixed in vitro with U(L)26.5. The complexes, called scaffold-portal particles, were stable during purification by agarose gel electrophoresis or sucrose density gradient ultracentrifugation. Examination of the scaffold-portal particles by electron microscopy showed that they resemble the 50- to 60-nm-diameter "scaffold particles" formed from purified U(L)26.5. They differed, however, in that intact portals were observed on the surface. Analysis of the protein composition by sodium dodecyl sulfate-polyacrylamide gel electrophoresis demonstrated that portals and U(L)26.5 combine in various proportions, with the highest observed U(L)6 content corresponding to two or three portals per scaffold particle. Association between the portal and U(L)26.5 was antagonized by WAY-150138, a small-molecule inhibitor of HSV-1 replication. Soluble scaffold-portal particles were found to function in an in vitro capsid assembly system that also contained the major capsid (VP5) and triplex (VP19C and VP23) proteins. Capsids that formed in this system had the structure and protein composition expected of mature HSV-1 capsids, including U(L)6, at a level corresponding to approximately 1 portal complex per capsid. 
The results support the view that U(L)6 becomes incorporated into nascent HSV-1 capsids by way of a complex with U(L)26.5 and suggest further that U(L)6 may be introduced into the growing capsid as an intact portal.

14.
Structural Genomics has been successful in determining the structures of many unique proteins in a high throughput manner. Still, the number of known protein sequences is much larger than the number of experimentally solved protein structures. Homology (or comparative) modeling methods make use of experimental protein structures to build models for evolutionarily related proteins. Thereby, experimental structure determination efforts and homology modeling complement each other in the exploration of the protein structure space. One of the challenges in using model information effectively has been to access all models available for a specific protein, which are held in heterogeneous formats at different sites using various incompatible accession code systems. Often, structure models for hundreds of proteins can be derived from a given experimentally determined structure, using a variety of established methods. This has been done by all of the PSI centers, and by various independent modeling groups. The goal of the Protein Model Portal (PMP) is to provide a single portal which gives access to the various models that can be leveraged from PSI targets and other experimental protein structures. A single interface allows all existing pre-computed models across these various sites to be queried simultaneously, and provides links to interactive services for template selection, target-template alignment, model building, and quality assessment. The current release of the portal consists of 7.6 million model structures provided by different partner resources (CSMP, JCSG, MCSG, NESG, NYSGXRC, JCMM, ModBase, SWISS-MODEL Repository). The PMP is available from the PSI Structural Genomics Knowledgebase.

15.
Pise is interface construction software for bioinformatics applications that run by command-line operations. It creates common, easy-to-use interfaces to these applications for the Web, or other uses. It is adaptable to new bioinformatics tools, and offers program chaining, Unix system batch and other controls, making it an attractive method for building and using your own bioinformatics web services.
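The core of an interface generator of this kind is a declarative description of a tool's options that is translated into a command line from user-supplied form values. The sketch below uses an invented Python parameter spec purely for illustration (Pise itself uses XML interface definitions):

```python
# Sketch of the parameter-to-command-line mapping at the heart of a
# CLI interface generator like Pise: a declarative spec describes each
# option, and user-supplied values are translated into an argv list.
# The spec format here is invented; Pise uses XML interface definitions.

SPEC = {
    "program": "clustalw",
    "params": [
        {"name": "infile", "flag": "-infile=", "required": True},
        {"name": "type",   "flag": "-type=",   "default": "protein"},
        {"name": "quiet",  "flag": "-quiet",   "is_switch": True},
    ],
}

def build_command(spec, values):
    argv = [spec["program"]]
    for p in spec["params"]:
        val = values.get(p["name"], p.get("default"))
        if p.get("is_switch"):
            if val:
                argv.append(p["flag"])        # boolean switch: flag only
        elif val is not None:
            argv.append(p["flag"] + str(val))  # option with a value
        elif p.get("required"):
            raise ValueError(f"missing required parameter {p['name']}")
    return argv

cmd = build_command(SPEC, {"infile": "seqs.fasta", "quiet": True})
# cmd could then be executed with subprocess.run(cmd)
```

The same spec can drive both the generated HTML form (one widget per parameter) and the server-side validation, which is what makes the approach adaptable to new tools without writing per-tool code.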

16.
Geotemporal information, information associated with geographical space and time, has always been critical to climate and environmental science. However, this information is certainly not universally or easily accessible. In fact, obtaining and using geotemporal information often comes with considerable technical overhead, impeding research progress. To address this, we introduce FetchClimate: a cloud service designed to provide easy, universal access to geotemporal information. FetchClimate enables and accelerates the use of geotemporal information by allowing it to be accessed programmatically through a Web service (for example, from the statistical software R) or non-programmatically using a Web browser. We intend the service to accelerate the pace of ecological and environmental research by eliminating the technical overhead currently needed to obtain geotemporal information. The software, online manual, and user support are freely available at <http://www.fetchclimate.com>.

17.
Genetic and biochemical studies have suggested the existence of a bacteriophage-like, DNA-packaging/ejecting portal complex in herpesvirus capsids, but its arrangement remained unknown. Here, we report the first visualization of a unique vertex in the Kaposi's sarcoma-associated herpesvirus (KSHV) capsid by cryoelectron tomography, thus providing direct structural evidence for the existence of a portal complex in a gammaherpesvirus. This putative KSHV portal is an internally localized, umbilicated structure and lacks all of the external machineries characteristic of portals in DNA bacteriophages.

18.
Objective: A diabetes patient web portal allows patients to access their personal health record and may improve diabetes outcomes; however, patient adoption is slow. We aimed to gain insight into patients' experiences with a web portal to understand how the portal is being used and how patients perceive its content, and to assess whether a redesign of the portal might be needed.
Results: 632 patients (42.1%) returned the questionnaire. Their mean age was 59.7 years, 63.1% were male and 81.8% had type 2 diabetes. 413 (65.3%) were persistent users and 34.7% were early quitters. In the multivariable analysis, insulin use (OR 2.07; 95% CI [1.18-3.62]), more frequent hyperglycemic episodes (OR 1.30; 95% CI [1.14-1.49]) and better diabetes knowledge (OR 1.02; 95% CI [1.01-1.03]) increased the odds of being a persistent user. Persistent users perceived the usefulness of the patient portal significantly more favorably. However, they also declared more decisively that the patient portal is not helpful in supporting lifestyle changes. Early quitters considered significantly more items not applicable to their situation than persistent users did. Both persistent users (69.8%) and early quitters (58.8%) would prefer a reminder function for scheduled visits. About 60% of both groups wanted information about medication and side effects in their portal.
Conclusions: The diabetes patient web portal might be improved significantly by taking the patients' experiences and attitudes into account. We propose creating separate portals for patients on insulin and those not on insulin.
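The odds ratios and confidence intervals reported above come from a multivariable logistic regression. As a reminder of the arithmetic, an OR is the exponentiated model coefficient and its 95% CI is exp(beta ± 1.96·SE); the coefficient and standard error below are back-calculated for illustration, not taken from the study:

```python
import math

# How an odds ratio and its 95% confidence interval are derived from a
# logistic regression coefficient: OR = exp(beta), CI = exp(beta +/- z*SE).
# beta and se below are illustrative values chosen to reproduce an OR of
# about 2.07 with CI [1.18, 3.62]; they are not the study's actual data.

def odds_ratio_ci(beta, se, z=1.96):
    return (math.exp(beta),          # point estimate
            math.exp(beta - z * se), # lower 95% bound
            math.exp(beta + z * se)) # upper 95% bound

or_point, or_lo, or_hi = odds_ratio_ci(beta=0.728, se=0.285)
```

Note that a CI whose lower bound stays above 1.0, as here, is what makes a predictor such as insulin use statistically significant at the 5% level.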

19.
In this paper, we propose bounding models, which provide upper and lower bounds on response time in composite Web service models, to alleviate the state explosion problem. The considered models have heterogeneous servers, and the number of elementary Web services can be very large. More precisely, we study two types of composite Web services. First, we investigate the performance of a single composite Web service execution instance. Second, this assumption is relaxed (i.e. multiple composite Web service execution instances are considered). These models allow a trade-off to be found between the accuracy of the bounds and the computational complexity.
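A flavour of the bounding idea: if each stage of a composite service has known best-case and worst-case response times, end-to-end lower and upper bounds follow from simple composition rules (sum for sequential stages, max for parallel fan-out), with no need to enumerate the full state space. The sketch below is a deterministic toy with invented numbers; the paper's actual models are stochastic:

```python
# Toy sketch of bounding the end-to-end response time of a composite
# Web service: per-stage (min, max) response times compose into global
# lower/upper bounds without enumerating the full state space.
# Sequential stages add; parallel branches that must all complete take
# the max. The paper's models are stochastic; this is a deterministic
# illustration with invented numbers.

def sequential_bounds(stages):
    """stages: list of (lo, hi) per-stage response-time bounds."""
    return (sum(lo for lo, _ in stages), sum(hi for _, hi in stages))

def parallel_bounds(stages):
    # All branches must complete, so completion time is the slowest branch.
    return (max(lo for lo, _ in stages), max(hi for _, hi in stages))

# Composite service: stage A, then B and C in parallel, then stage D.
a, d = (5, 9), (2, 4)
bc_lo, bc_hi = parallel_bounds([(3, 7), (4, 6)])
total_lo, total_hi = sequential_bounds([a, (bc_lo, bc_hi), d])
```

The trade-off mentioned in the abstract appears even here: tighter per-stage bounds cost more analysis effort but narrow the end-to-end interval, while loose bounds are cheap but less informative.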

20.