Similar Articles
Found 20 similar articles (search time: 15 ms)
1.
In this paper, we discuss strategies for providing World Wide Web service users with adequate Quality of Service (QoS). We argue that QoS can be provided by distributing the service request processing load among replicated Web servers (WSs), which can be geographically distributed across the Internet. In order to support our argument, we compare and contrast several load distribution strategies, and assess their effectiveness when deployed within the context of a geographically replicated Web service; the principal figure of merit we use in this assessment is the response time experienced by the users of that service. As a result of this comparison, we propose a specific strategy, named QoS-based, that implements load distribution among WS replicas by binding a user to the replica that provides the shortest user response time. We examine several architectures that exploit our QoS-based strategy. Two of these architectures, named, respectively, Browser-based and Load Distribution-based, are described in detail as they are particularly appropriate for implementing our strategy.
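The QoS-based binding rule described above can be sketched in a few lines. A minimal illustration, assuming hypothetical replica names and probed response times (not from the paper):

```python
# Hypothetical sketch of the QoS-based binding rule: bind each user to the
# replica with the shortest measured response time. Replica ids and probe
# values are illustrative assumptions.

def bind_user(measured_rtts):
    """Return the replica whose measured response time is lowest.

    measured_rtts: dict mapping replica id -> response time in ms.
    """
    return min(measured_rtts, key=measured_rtts.get)

rtts = {"replica-eu": 120.0, "replica-us": 85.0, "replica-asia": 210.0}
print(bind_user(rtts))  # the US replica has the shortest response time
```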

2.
MOTIVATION: There are a large number of computational programs freely available to bioinformaticians via a client/server, web-based environment. However, the client interface to these tools (typically an html form page) cannot be customized from the client side as it is created by the service provider. The form page is usually generic enough to cater for a wide range of users. However, this implies that a user cannot set as 'default' advanced program parameters on the form or even customize the interface to his/her specific requirements or preferences. Currently, there is a lack of end-user interface environments that can be modified by the user when accessing computer programs available on a remote server running on an intranet or over the Internet. RESULTS: We have implemented a client/server system called ORBIT (Online Researcher's Bioinformatics Interface Tools) where individual clients can have interfaces created and customized to command-line-driven, server-side programs. Thus, Internet-based interfaces can be tailored to a user's specific bioinformatic needs. As interfaces are created on the client machine independent of the server, there can be different interfaces to the same server-side program to cater for different parameter settings. The interface customization is relatively quick (between 10 and 60 min) and all client interfaces are integrated into a single modular environment which will run on any computer platform supporting Java. The system has been developed to allow for a number of future enhancements and features. ORBIT represents an important advance in the way researchers gain access to bioinformatics tools on the Internet.

3.
Content-Aware Dispatching Algorithms for Cluster-Based Web Servers
Cluster-based Web servers are leading architectures for highly accessed Web sites. The most common Web cluster architecture consists of replicated server nodes and a Web switch that routes client requests among the nodes. In this paper, we consider content-aware Web switches that can use application-level information to assign client requests. We evaluate the performance of some representative state-of-the-art dispatching algorithms for Web switches operating at layer 7 of the OSI protocol stack. Specifically, we consider dispatching algorithms that use only client information as well as the combination of client and server information for load sharing, reference locality or service partitioning. We demonstrate through a wide set of simulation experiments that dispatching policies aiming to improve locality in server caches give the best results for traditional Web publishing sites providing static information and some simple database searches. On the other hand, when we consider more recent Web sites providing dynamic and secure services, dispatching policies that aim to share the load are the most effective.
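The two dispatching families compared above can be contrasted in a minimal sketch; the hashing scheme and node names are illustrative assumptions, not the paper's exact algorithms:

```python
# Illustrative sketch contrasting two layer-7 dispatching families: a
# locality-aware policy that hashes the requested URL so repeated requests
# for the same content hit the same server cache, and a load-sharing policy
# that picks the currently least-loaded node. Not the paper's exact policies.
import hashlib

def dispatch_locality(url, nodes):
    """Map a URL to a fixed node so its content stays hot in that node's cache."""
    h = int(hashlib.md5(url.encode()).hexdigest(), 16)
    return nodes[h % len(nodes)]

def dispatch_load_sharing(loads):
    """Pick the node with the fewest active requests (loads: node -> count)."""
    return min(loads, key=loads.get)

nodes = ["n0", "n1", "n2"]
print(dispatch_locality("/index.html", nodes))       # always the same node
print(dispatch_load_sharing({"n0": 5, "n1": 2, "n2": 7}))  # least-loaded node
```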

4.
This paper proposes a system named AWSCS (Automatic Web Service Composition System) to evaluate different approaches for automatic composition of Web services, based on QoS parameters that are measured at execution time. The AWSCS is a system that implements different approaches for automatic composition of Web services and also executes the resulting flows from these approaches. To demonstrate the results of this paper, a scenario was developed in which empirical flows were built to show the operation of AWSCS, since algorithms for automatic composition are not readily available for testing. The results allow us to study the behaviour of running composite Web services when flows with the same functionality but different problem-solving strategies are compared. Furthermore, we observed that the load applied to the running system, as well as the type of load submitted, is an important factor in defining which approach to Web service composition can achieve the best performance in production.

5.
Server scalability is more important than ever in today's client/server dominated network environments. Recently, researchers have begun to consider cluster-based computers using commodity hardware as an alternative to expensive specialized hardware for building scalable Web servers. In this paper, we present performance results comparing two cluster-based Web servers based on different server architectures: OSI layer two dispatching (LSMAC) and OSI layer three dispatching (LSNAT). Both cluster-based server systems were implemented as application-space programs running on commodity hardware in contrast to other, similar, solutions which require specialized hardware/software. We point out the advantages and disadvantages of both systems. We also identify when servers should be clustered and when clustering will not improve performance. This revised version was published online in July 2006 with corrections to the Cover Date.

6.
GABAagent: a system for integrating data on GABA receptors

7.

Background

The Distributed Annotation System (DAS) offers a standard protocol for sharing and integrating annotations on biological sequences. There are more than 1000 DAS sources available and the number is steadily increasing. Clients are an essential part of the DAS system and integrate data from several independent sources in order to create a useful representation to the user. While web-based DAS clients exist, most of them do not have direct interaction capabilities such as dragging and zooming with the mouse.

Results

Here we present GenExp, a web-based and fully interactive visual DAS client. GenExp is a genome-oriented DAS client capable of creating informative representations of genomic data, zooming from base level out to complete chromosomes. It proposes a novel approach to genomic data rendering and uses the latest HTML5 web technologies to create the data representation inside the client browser. Thanks to client-side rendering, most position changes do not need a network request to the server, so responses to zooming and panning are almost immediate. In GenExp it is possible to explore the genome intuitively, moving it with the mouse just like in geographical map applications. Additionally, GenExp supports more than one data viewer at the same time and can save the current state of the application to revisit it later on.

Conclusions

GenExp is a new interactive web-based client for DAS and addresses some of the shortcomings of the existing clients. It uses client-side data rendering techniques, resulting in easier genome browsing and exploration. GenExp is open source under the GPL license and it is freely available at http://gralggen.lsi.upc.edu/recerca/genexp.

8.
In a world where many users rely on the Web for up-to-date personal and business information and transactions, it is fundamental to build Web systems that allow service providers to differentiate user expectations with multi-class Service Level Agreements (SLAs). In this paper we focus on the server components of the Web, by implementing QoS principles in a Web-server cluster, that is, an architecture composed of multiple servers and one front-end node called the Web switch. We first propose a methodology to determine a set of confident SLAs in a real Web cluster for multiple classes of users and services. We then implement at the Web switch level all mechanisms that transform a best-effort Web cluster into a QoS-enhanced system. We also compare three QoS-aware policies through experimental results in a real test-bed system. We show that the policy implementing all QoS principles allows a Web content provider to guarantee the contractual SLA targets even in severe load conditions. Other algorithms lacking some QoS principles cannot be used to respect SLA constraints, although they provide acceptable performance for some load and system conditions.
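A switch-level SLA check of the kind discussed above could look roughly like this; the class names, SLA budgets, and delay estimate are hypothetical simplifications, not the paper's policies:

```python
# Hypothetical sketch of SLA-aware admission control at the Web switch:
# a request is admitted only if its class's response-time target can still
# be met given the current backlog. Classes and budgets are illustrative.

def admit(req_class, queued_jobs, service_ms, sla_ms):
    """Admit a request only if the estimated response time, assuming
    queued_jobs pending requests at service_ms each, fits its class SLA."""
    expected_response = (queued_jobs + 1) * service_ms
    return expected_response <= sla_ms[req_class]

sla = {"gold": 500.0, "bronze": 3000.0}
# With 20 queued jobs at ~30 ms each, gold requests are refused, bronze admitted.
print(admit("gold", 20, 30.0, sla), admit("bronze", 20, 30.0, sla))
```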

9.
Developers base their selection of a User Interface (UI) development approach on functionality, development and maintenance costs, usability, responsiveness, etc. User expectations continue to grow for greater functionality and continuous interactivity, extending demands on computational resources. To facilitate scaling, recent approaches push more UI computation to clients. Such client-side delegation of functionality, continuous usage, and localized computation create ever-growing energy demands, which may negatively impact battery life on mobile platforms. Nonetheless, developers have given little attention to the power-demand aspects of UI framework selection. We evaluate the impact of contemporary UI framework selection on resource utilization and energy consumption. We suggest an alternative delivery approach designed to preserve low energy demands on clients while still allowing offloading of computation from server to client. Our work focuses on web-based mobile applications; however, we believe our approach to energy-demand reduction and framework evaluation is generally applicable.

10.

Background  

Traditional HTML interfaces for input to and output from Bioinformatics analysis on the Web are highly variable in style, content and data formats. Combining multiple analyses can therefore be an onerous task for biologists. Semantic Web Services allow automated discovery of conceptual links between remote data analysis servers. A shared data ontology and service discovery/execution framework is particularly attractive in Bioinformatics, where data and services are often both disparate and distributed. Instead of biologists copying, pasting and reformatting data between various Web sites, Semantic Web Service protocols such as MOBY-S hold out the promise of seamlessly integrating multi-step analysis.

11.
Ojha KK, Swati D. Bioinformation 2010, 5(5):213-218
Genome replication is a crucial and essential process for the continuity of life. In all organisms it starts at a specific region of the genome known as the origin of replication (Ori) site. The number of Ori sites varies between prokaryotes and eukaryotes. Replication starts at a single Ori site in bacteria, but in eukaryotes multiple Ori sites are used for fast copying across all chromosomes. The situation becomes complex in archaea, where some groups have single and others have multiple origins of replication. The Thermococcales are a hyperthermophilic order of archaea. They are anaerobes and heterotrophs; peptide fermenters, sulphate reducers and methanogens are some examples of their metabolic types. In this paper we have applied a combination of multiple in silico approaches - the Z curve, the location of the cell division cycle (cdc6) gene, and the location of consensus origin recognition box (ORB) sequences - to locate the origin of replication in Thermococcus onnurineus, Thermococcus gammatolerans and other Thermococcales, and compared the results to the well-documented case of Pyrococcus abyssi. The motivation behind this study is to find the number of Ori sites based on the data available for members of this order. Results from this in silico analysis show that the Thermococcales have a single origin of replication.
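The Z-curve approach mentioned above can be sketched as cumulative base-composition disparity curves whose extrema mark candidate Ori sites; the toy sequence below is an illustrative assumption, not Thermococcus data:

```python
# Minimal sketch of Z-curve analysis: three cumulative disparity curves
# computed along a DNA sequence. Extrema in these curves (especially the
# purine/pyrimidine and amino/keto components) are candidate Ori sites.
# The input sequence here is a toy example, not real genome data.

def z_curve(seq):
    """Return cumulative (x, y, z) Z-curve components for a DNA sequence:
    x = purine-pyrimidine, y = amino-keto, z = weak-strong disparity."""
    x = y = z = 0
    xs, ys, zs = [], [], []
    for base in seq.upper():
        x += (base in "AG") - (base in "CT")
        y += (base in "AC") - (base in "GT")
        z += (base in "AT") - (base in "GC")
        xs.append(x)
        ys.append(y)
        zs.append(z)
    return xs, ys, zs

xs, ys, zs = z_curve("ATGCGCGCATATAT")
```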

12.
In the last decade, directed evolution has become a routine approach for engineering proteins with novel or altered properties. Concurrently, a trend away from purely 'blind' randomization strategies and towards more 'semi-rational' approaches has also become apparent. In this review, we discuss ways in which structural information and predictive computational tools are playing an increasingly important role in guiding the design of randomized libraries: web servers such as ConSurf-HSSP and SCHEMA allow the prediction of sites to target for producing functional variants, while algorithms such as GLUE, PEDEL and DRIVeR are useful for estimating library completeness and diversity. In addition, we review recent methodological developments that facilitate the construction of unbiased libraries, which are inherently more diverse than biased libraries and therefore more likely to yield improved variants.
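Tools like PEDEL build on sampling statistics of this general kind; a simplified sketch (not the actual PEDEL or GLUE algorithms) of expected library completeness under uniform sampling:

```python
# Simplified sketch of library-completeness statistics: the expected fraction
# of a variant space seen at least once when clones are sampled uniformly.
# This is a textbook occupancy formula, not the published PEDEL algorithm.

def expected_completeness(library_size, variant_space):
    """Fraction of variant_space variants expected to appear at least once
    in a library of library_size clones sampled uniformly at random."""
    return 1.0 - (1.0 - 1.0 / variant_space) ** library_size

# Sampling 3x as many clones as there are variants covers roughly 95%
# of the variant space.
print(expected_completeness(3_000, 1_000))
```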

13.
In order to understand the mechanisms leading to the complete duplication of linear eukaryotic chromosomes, the temporal order of the events involved in replication of a 7.5-kb Saccharomyces cerevisiae linear plasmid called YLpFAT10 was determined. Two-dimensional agarose gel electrophoresis was used to map the position of the replication origin and the direction of replication fork movement through the plasmid. Replication began near the center of YLpFAT10 at the site in the 2 microns sequences that corresponds to the 2 microns origin of DNA replication. Replication forks proceeded bidirectionally from the origin to the ends of YLpFAT10. Thus, yeast telomeres do not themselves act as origins of DNA replication. The time of origin utilization on YLpFAT10 and on circular 2 microns DNA in the same cells was determined both by two-dimensional gel electrophoresis and by density transfer experiments. As expected, 2 microns DNA replicated in early S phase. However, replication of YLpFAT10 occurred in late S phase. Thus, the time of activation of the 2 microns origin depended upon its physical context. Density transfer experiments established that the acquisition of telomeric TG1-3 single-strand tails, a predicted intermediate in telomere replication, occurred immediately after the replication forks approached the ends of YLpFAT10. Thus, telomere replication may be the very last step in S phase.

14.
Eukaryotic genomes are replicated from multiple DNA replication origins. We present complementary deep sequencing approaches to measure origin location and activity in Saccharomyces cerevisiae. Measuring the increase in DNA copy number during a synchronous S-phase allowed the precise determination of genome replication. To map origin locations, replication forks were stalled close to their initiation sites; therefore, copy number enrichment was limited to origins. Replication timing profiles were generated from asynchronous cultures using fluorescence-activated cell sorting. Applying this technique we show that the replication profiles of haploid and diploid cells are indistinguishable, indicating that both cell types use the same cohort of origins with the same activities. Finally, increasing sequencing depth allowed the direct measure of replication dynamics from an exponentially growing culture. This is the first time this approach, called marker frequency analysis, has been successfully applied to a eukaryote. These data provide a high-resolution resource and methodological framework for studying genome biology.

15.
Branzei D, Foiani M. DNA Repair 2007, 6(7):994-1003
DNA replication is an essential process that occurs in all growing cells and needs to be tightly regulated in order to preserve genetic integrity. Eukaryotic cells have developed multiple mechanisms to ensure the fidelity of replication and to coordinate the progression of replication forks. Replication is often impeded by DNA damage or replication blocks, and the resulting stalled replication forks are sensed and protected by specialized surveillance mechanisms called checkpoints. The replication checkpoint plays an essential role in preventing the breakdown of stalled replication forks and the accumulation of DNA structures that enhance recombination and chromosomal rearrangements that ultimately lead to genomic instability and cancer development. In addition, the replication checkpoint is thought to assist and coordinate replication fork restart processes by controlling DNA repair pathways, regulating chromatin structure, promoting the recruitment of proteins to sites of damage, and controlling cell cycle progression. In this review we focus mainly on results obtained in budding yeast to discuss the multiple roles of checkpoints in maintaining fork integrity and the enzymatic activities that cooperate with the checkpoint pathway to promote fork resumption and repair of DNA lesions, thereby contributing to genome integrity.

16.
Given the existence of powerful multiprocessor client workstations in many client-server object database applications, the performance bottleneck is the delay in transferring pages from the server to the client. We present a prefetching technique that can avoid this delay, especially where the client application requests pages from several database servers. This technique has been added to the EXODUS storage manager. Part of the novelty of this approach lies in the way that multithreading on the client workstation is exploited, in particular for activities such as prefetching and flushing dirty pages to the server. Using our own complex object benchmark, we analyze the performance of the prefetching technique with multiple clients and multiple servers. The technique is also tested under a variety of client host workload levels. This revised version was published online in July 2006 with corrections to the Cover Date.
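The client-side prefetching idea, overlapping server latency with client computation via a background thread, might be sketched as follows; the fetch function and request format are stand-ins, not EXODUS code:

```python
# Hypothetical sketch of client-side prefetching: a background worker thread
# fetches pages predicted to be needed soon into a local cache, so the
# application finds them already resident. The fetch callable is a stand-in
# for a real page request to a database server.
import queue
import threading

def prefetcher(fetch, requests, cache):
    """Fetch each (server, page) request from the queue into cache."""
    while True:
        req = requests.get()
        if req is None:          # sentinel: shut down the worker
            break
        cache[req] = fetch(*req)

cache = {}
requests = queue.Queue()
worker = threading.Thread(
    target=prefetcher,
    args=(lambda server, page: f"data:{server}/{page}", requests, cache))
worker.start()
requests.put(("srv1", 42))       # hint: page 42 of srv1 will be needed soon
requests.put(None)
worker.join()
print(cache[("srv1", 42)])       # page already resident when the app asks
```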

17.
Replication of eukaryotic DNA is driven by a protein complex, in which the central part is played by DNA polymerases. Synthesis with eukaryotic DNA polymerases alpha, delta, and epsilon involves various replication factors, including the replication protein A, replication factor C, proliferating cell nuclear antigen, etc. Replication enzymes and factors also participate in DNA repair, which is in an interplay with DNA replication. The function of the entire multicomponent system is regulated by protein--nucleic acid and protein--protein interactions. The eukaryotic replication complex was not isolated as a stable supramolecular structure, suggesting its dynamic organization. Hence X-ray analysis and other instrumental techniques are hardly suitable for studying this system. An alternative approach is affinity modification. Its most promising version involves in situ generation of photoreactive DNA replication intermediates. The review considers the recent progress in photoaffinity modification studies of DNA polymerases, eukaryotic replication factors, and their interactions with DNA replication intermediates.

18.
Cloud computing took a step forward in the efficient use of hardware through virtualization technology, and as a result the cloud brings evident benefits for both users and providers. While users can acquire computational resources on demand elastically, cloud vendors can maximally utilize their investment in data-center infrastructure. In the Internet era, the number of appliances and services migrated to the cloud environment increases exponentially. This leads to the expansion of data centers, which become bigger and bigger; moreover, these data centers must have a highly elastic architecture in order to serve the huge upsurge of tasks and balance energy consumption. Although many recent research works have dealt with finite capacity for a single job queue in data centers, the multiple finite-capacity queues architecture has received less attention. In reality, the multiple-queues architecture is widely used in large data centers. In this paper, we propose a novel three-state model for cloud servers. The model is deployed in both single and multiple finite-capacity queues. We also bring forward several strategies to control multiple queues at the same time. This approach reduces service waiting time for jobs and manages the service capability of the whole system elastically. We use CloudSim to simulate the cloud environment and carry out experiments to demonstrate the operability and effectiveness of the proposed method and strategies. Power consumption is also evaluated to provide insights into system performance with respect to the performance-energy trade-off.
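One reason finite queue capacity matters is job blocking; a simplified sketch (a standard M/M/1/K formula, not the paper's three-state model) of the fraction of jobs rejected when a finite buffer fills:

```python
# Simplified illustration of finite-capacity queueing: the steady-state
# blocking probability of an M/M/1/K queue, i.e. the fraction of arriving
# jobs rejected because the buffer of size K is full. This is a textbook
# formula, not the paper's three-state cloud-server model.

def mm1k_blocking(rho, K):
    """Blocking probability of an M/M/1/K queue with utilization rho != 1."""
    return (1.0 - rho) * rho ** K / (1.0 - rho ** (K + 1))

# A nearly saturated server (rho = 0.9) with room for 10 jobs still
# rejects about 5% of arriving jobs.
print(round(mm1k_blocking(0.9, 10), 3))
```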

19.
Molecular Probe Data Base (MPDB)
In this paper, the current status of the Molecular Probe Data Base (http://www.biotech.ist.unige.it/interlab/mpdb.html) is briefly presented together with a short analysis of its activity during 1997. This has been performed by statistically evaluating the 'logs' of the Internet servers that are used for its distribution, with reference to the geographical origin of the requests, the words that were utilized to carry out the searches and the oligonucleotides that were retrieved. Planned enhancements of this database are also described. They include a revision of its data structure and, even more relevant, of its data management procedures.

20.
The emergent needs of the bioinformatics community challenge current information systems. The pace of biological data generation far outstrips Moore's Law. Therefore, a gap continues to widen between the capability to produce biological (molecular and cell) data sets and the capability to manage and analyze these data sets. As a result, Federal investments in large data set generation produce diminishing returns in terms of the community's capability to understand biology and leverage that understanding to make scientific and technological advances that improve society. We are building an open framework to address various data management issues including data and tool interoperability, nomenclature and data communication standardization, and database integration. PathPort, short for Pathogen Portal, employs a generic, web-services-based framework to deal with some of the problems identified by the bioinformatics community. The motivating research goal of a scalable system to provide data management and analysis for key pathosystems, especially relating to molecular data, has resulted in a generic framework using two major components. On the server-side, we employ web services. On the client-side, a Java application called ToolBus acts as a client-side "bus" for contacting data and tools and viewing results through a single, consistent user interface.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号