Similar articles
20 similar records found (search time: 31 ms)
1.
In a world where many users rely on the Web for up-to-date personal and business information and transactions, it is fundamental to build Web systems that allow service providers to differentiate user expectations through multi-class Service Level Agreements (SLAs). In this paper we focus on the server components of the Web by implementing QoS principles in a Web-server cluster, that is, an architecture composed of multiple servers and one front-end node called a Web switch. We first propose a methodology to determine a set of confident SLAs in a real Web cluster for multiple classes of users and services. We then implement at the Web switch level all the mechanisms that transform a best-effort Web cluster into a QoS-enhanced system. We also compare three QoS-aware policies through experimental results in a real test-bed system. We show that the policy implementing all QoS principles allows a Web content provider to guarantee the contractual SLA targets even under severe load conditions. Other algorithms lacking some QoS principles cannot be used to respect SLA constraints, although they provide acceptable performance under some load and system conditions.
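The paper itself gives no code; the following is a minimal sketch of the core idea (class-based admission control at the Web switch, so that contracted classes keep their SLA headroom while best-effort traffic is shed first). Class names, the capacity figure, and the 70% utilization threshold are invented for illustration.

```python
# Sketch of multi-class SLA admission at a Web switch (illustrative only:
# class names, capacity and thresholds are invented, not from the paper).

class WebSwitch:
    def __init__(self, capacity):
        self.capacity = capacity      # max requests/s the cluster can serve
        self.load = 0.0               # current offered load (requests/s)

    def admit(self, sla_class, demand):
        """Admit a request stream only if contracted classes stay on target.

        'premium' traffic is admitted up to raw capacity; 'best_effort'
        is shed first as the cluster approaches saturation.
        """
        headroom = self.capacity - self.load
        if sla_class == "premium":
            ok = demand <= headroom
        else:  # best effort admitted only while utilization < 70%
            ok = demand <= headroom and self.load / self.capacity < 0.7
        if ok:
            self.load += demand
        return ok

switch = WebSwitch(capacity=100)
assert switch.admit("premium", 60)          # admitted, utilization 60%
assert switch.admit("best_effort", 20)      # still under the 70% threshold
assert not switch.admit("best_effort", 10)  # shed: utilization now 80%
assert switch.admit("premium", 15)          # premium still fits the headroom
```

The point of the sketch is the asymmetry: under severe load, only the policy that reserves headroom for contracted classes can keep SLA targets, mirroring the paper's experimental conclusion.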

2.
Content-Aware Dispatching Algorithms for Cluster-Based Web Servers   Total citations: 1 (self-citations: 0, cited by others: 1)
Cluster-based Web servers are leading architectures for highly accessed Web sites. The most common Web cluster architecture consists of replicated server nodes and a Web switch that routes client requests among the nodes. In this paper, we consider content-aware Web switches that can use application-level information to assign client requests. We evaluate the performance of some representative state-of-the-art dispatching algorithms for Web switches operating at layer 7 of the OSI protocol stack. Specifically, we consider dispatching algorithms that use only client information as well as the combination of client and server information for load sharing, reference locality or service partitioning. We demonstrate through a wide set of simulation experiments that dispatching policies aiming to improve locality in server caches give the best results for traditional Web publishing sites providing static information and some simple database searches. On the other hand, for more recent Web sites providing dynamic and secure services, dispatching policies that aim to share the load are the most effective.
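One well-known representative of the locality-improving policies mentioned above is LARD (locality-aware request distribution), which pins each URL target to one server to raise cache hit rates, falling back to the least-loaded server when the pinned one is overloaded. The sketch below is a simplified illustration; the load metric and threshold are invented.

```python
# Sketch of a LARD-like layer-7 dispatcher: each URL is pinned to one
# server for cache locality, but reassigned to the least-loaded server
# when its current server exceeds an overload threshold.

class LardDispatcher:
    def __init__(self, servers, high=10):
        self.load = {s: 0 for s in servers}   # active requests per server
        self.assigned = {}                    # url -> pinned server
        self.high = high                      # overload threshold

    def least_loaded(self):
        return min(self.load, key=self.load.get)

    def dispatch(self, url):
        server = self.assigned.get(url)
        if server is None or self.load[server] >= self.high:
            server = self.least_loaded()      # (re)assign for load sharing
            self.assigned[url] = server
        self.load[server] += 1
        return server

    def done(self, server):
        self.load[server] -= 1

d = LardDispatcher(["s1", "s2"])
first = d.dispatch("/index.html")
assert d.dispatch("/index.html") == first   # same URL -> same server (locality)
```

The trade-off the abstract describes falls out directly: for static content the pinning dominates (cache hits), while for dynamic services the overload fallback, i.e. pure load sharing, does the useful work.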

3.
4.
We have conducted a study on the long-term availability of bioinformatics Web services: an observation of 927 Web services published in the annual Nucleic Acids Research Web Server Issues between 2003 and 2009. We found that 72% of Web sites are still available at the published addresses, while only 9% of services are completely unavailable. Older addresses often redirect to new pages. We checked the functionality of all available services: for 33%, we could not test functionality because there was no example data or a related problem; 13% were truly no longer working as expected; we could positively confirm functionality for only 45% of all services. Additionally, we conducted a survey among 872 Web Server Issue corresponding authors; 274 replied. 78% of all respondents indicated that their services were developed solely by students and researchers without a permanent position. Consequently, these services are in danger of falling into disrepair after the original developers move to another institution, and indeed, for 24% of services, there is no plan for maintenance, according to the respondents. We introduce a Web service quality scoring system that correlates with the number of citations: services with a high score are cited 1.8 times more often than low-scoring services. We have identified key characteristics that are predictive of a service's survival, providing reviewers, editors, and Web service developers with the means to assess or improve Web services. A Web service conforming to these criteria receives more citations and provides more reliable service for its users. The most effective way of ensuring continued access to a service is a persistent Web address, offered either by the publishing journal or created on the authors' own initiative, for example at http://bioweb.me. The community would benefit the most from a policy requiring any source code needed to reproduce results to be deposited in a public repository.
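The paper does not publish its probing code; the sketch below illustrates the kind of check such a survey performs, classifying each probed address as available, redirected (old address forwarding to a new page), or unavailable, and aggregating the survey-style percentages. The outcome categories are a simplification of the paper's scoring.

```python
# Sketch of a service-availability probe and survey-style aggregation.
# The probe uses only the standard library; categories are a simplification.
from urllib.request import urlopen
from urllib.error import URLError

def check_service(url, timeout=10):
    """Return 'available', 'redirected', or 'unavailable' for a service URL."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            if resp.geturl().rstrip("/") != url.rstrip("/"):
                return "redirected"      # old address forwards to a new page
            return "available" if resp.status < 400 else "unavailable"
    except URLError:
        return "unavailable"

def survival_summary(results):
    """Aggregate per-URL outcomes into rounded percentages."""
    n = len(results)
    return {k: round(100 * sum(1 for r in results if r == k) / n)
            for k in ("available", "redirected", "unavailable")}

# synthetic outcomes, not the paper's data
summary = survival_summary(["available"] * 7 + ["redirected"] + ["unavailable"] * 2)
assert summary == {"available": 70, "redirected": 10, "unavailable": 20}
```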

5.
Research progress on ecosystem service flows   Total citations: 15 (self-citations: 6, cited by others: 9)
王嘉丽, 周伟奇. 《生态学报》(Acta Ecologica Sinica), 2019, 39(12): 4213-4222
Ecosystem service flows are the key bridge coupling ecosystem service supply with human demand, and a current focus and frontier of ecosystem services research. A scientific understanding of the full process by which ecosystem services are generated, delivered, and used, and a clear picture of the balance between regional supply and demand, are of great significance for achieving regional sustainable development and improving human well-being. Based on a synthesis of domestic and international research, this paper explains the concept of ecosystem service flows, summarizes quantitative methods for studying them, and systematically reviews progress in both theoretical exploration and applied case studies. It then discusses the problems and shortcomings of current research: the spatial transfer patterns by which services move from supply areas to demand areas remain unclear; methods for quantitatively assessing service flows are immature; application-oriented studies are scarce; and the research framework for service flows within coupled human-natural systems needs further development. Future research should focus on cross-scale, cross-regional transfer processes in complex systems, strengthen the quantitative analysis and simulation of the spatial flow processes and delivery paths of ecosystem services, emphasize applied research (especially applications in ecological restoration, ecological compensation, and urban planning), and refine the research framework for ecosystem service flows in coupled human-natural systems, so as to promote the development of this field.

6.
吴舒尧, 黄姣, 李双成. 《生态学报》(Acta Ecologica Sinica), 2017, 37(20): 6986-6999
The worldwide decline of key ecosystem services poses a serious threat to human society, and biodiversity underpins the capacity of ecosystems to provide products and services. Ecological restoration projects repair degraded ecosystem services and biodiversity and are therefore very important for relieving environmental pressure on humanity. Long-term theory and practice have produced several classes of restoration measures: (1) purely natural recovery based on the ecosystem's self-design; (2) human intervention in environmental conditions, which feeds back into the ecosystem's self-design; and (3) direct human intervention in, and reconstruction of, target populations and ecosystems. These three approaches steer the recovery process to different degrees, reflecting low, medium, and high levels of human involvement. Which approach and level of involvement achieves better restoration outcomes is a key question in restoration ecology, but so far it has been widely debated without quantitative analysis or conclusions. To address this gap, we conducted a meta-analysis of the ecological restoration literature in the ISI Web of Knowledge database, using statistical methods to quantitatively compare the restoration effects of low involvement (natural recovery), medium involvement (environmental intervention), and high involvement (direct intervention) on ecosystem services and biodiversity under different conditions. The study covers four aspects: (1) classifying restoration approaches into low, medium, and high involvement; (2) comparing differences in restoration outcomes for ecosystem services and biodiversity across the three approaches; (3) the influence of background factors such as climate, ecosystem type, and recovery time; and (4) the relationship between biodiversity recovery and ecosystem service recovery. The results reveal the conditions under which each restoration approach is applicable and how the approaches affect the relationship between biodiversity and ecosystem recovery, providing guidance for choosing restoration approaches in practice. They also suggest directions for future research, such as further exploring the patterns and mechanisms of low-, medium-, and high-involvement restoration for specific ecosystem services or research questions, and incorporating factors such as regional socio-economic conditions and the degree of ecosystem degradation into the assessment of restoration approaches so as to optimize restoration cost-efficiency.
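The abstract describes a statistical meta-analysis but gives no formulas. The log response ratio (lnRR = ln(restored / reference)) is a common effect-size metric in restoration meta-analyses and is used below purely as an illustration of how per-study outcomes are pooled; the study values are hypothetical.

```python
# Illustration of pooling restoration outcomes with the log response ratio.
# The metric choice and all numbers are assumptions, not from the paper.
import math

def log_response_ratio(restored, reference):
    """Effect size of restoration relative to an undisturbed reference site."""
    return math.log(restored / reference)

def mean_effect(pairs):
    """Unweighted mean lnRR across studies (real analyses weight by variance)."""
    effects = [log_response_ratio(r, ref) for r, ref in pairs]
    return sum(effects) / len(effects)

# Hypothetical studies: (service level after restoration, reference level).
# lnRR = 0 means full recovery to the reference; < 0 means incomplete recovery.
studies = [(8.0, 10.0), (12.0, 10.0), (10.0, 10.0)]
effect = mean_effect(studies)
assert abs(effect - (math.log(0.8) + math.log(1.2)) / 3) < 1e-12
```

Grouping the study pairs by involvement level (low/medium/high) and comparing the pooled effects is the comparison the paper performs at scale.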

7.
Many sequenced genes are mainly annotated through automatic transfer of annotation from similar sequences. Manual comparison of results or intermediate results from different tools can help avoid wrong annotations and give hints about the function of a gene even if none of the automated tools returns any result. AFAWE simplifies the task of manual functional annotation by running different tools and workflows for automatic function prediction and displaying the results in a way that facilitates comparison. Because all programs are executed as web services, AFAWE is easily extensible and can directly query primary databases, thereby always using the most up-to-date data sources. Visual filters help to distinguish trustworthy results from non-significant ones. Furthermore, an interface is provided for adding detailed manual annotation to each gene, which can be displayed to other users.

8.
In high-performance computing (HPC), extensive experiments are frequently executed, and HPC resources (e.g. computing machines and switches) should be able to handle several experiments running in parallel. Typically, HPC exploits parallelism in programs, processing, and data; the underlying network is the only HPC component that is not parallelized (i.e. there is no dynamic virtual slicing based on HPC jobs). In this scope, we present an approach that utilizes software-defined networking (SDN) to parallelize HPC clusters among the different running experiments. We accomplish this through two major components: a passive module (network mapper/remapper) that selects, for each experiment as soon as it starts, the least busy resources in the network, and an SDN-HPC active load balancer that performs more complex and intelligent operations. The active load balancer can logically divide the network based on experiments' host files. The goal is to reduce traffic to unnecessary hosts or ports: an HPC experiment should multicast only to the cluster nodes it uses, rather than broadcast. We use virtual tenant network modules in the OpenDaylight controller to create VLANs based on HPC experiments. On each HPC host, virtual interfaces are created to isolate traffic from the different experiments. Traffic between physical hosts that belong to the same experiment can be distinguished by the VLAN ID assigned to that experiment. We evaluate the new approach using several public HPC benchmarks. Results show a significant enhancement in experiment performance, especially when the HPC cluster runs several heavy-load experiments simultaneously. They also show that this multicasting approach can significantly reduce the casting overhead caused by using a single cast for all resources in the HPC cluster.
In comparison with InfiniBand networks, which offer interconnect services with low latency and high bandwidth, HPC services based on SDN can provide two distinct benefits that may not be possible with InfiniBand: first, the integration of HPC with Ethernet enterprise networks, expanding HPC usage to much wider domains; second, the ability to let users and their applications customize HPC services with different QoS requirements that fit the needs of those applications and optimize the usage of HPC clusters.
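The per-experiment VLAN assignment described above can be sketched as bookkeeping over host files. The controller-side calls (e.g. OpenDaylight virtual tenant network REST operations) are omitted; host names, VLAN numbering, and the hostfile format below are illustrative assumptions.

```python
# Sketch of mapping HPC experiments' host files to isolating VLAN IDs, so
# multicast traffic stays within each experiment's nodes. Controller API
# calls are omitted; this shows only the allocation logic.

class VlanAllocator:
    def __init__(self, first_vlan=100):
        self.next_vlan = first_vlan
        self.experiments = {}        # experiment id -> (vlan, set of hosts)

    def start_experiment(self, exp_id, hostfile_lines):
        """Parse an MPI-style host file and allocate a fresh VLAN."""
        hosts = {line.split()[0] for line in hostfile_lines if line.strip()}
        vlan = self.next_vlan
        self.next_vlan += 1
        self.experiments[exp_id] = (vlan, hosts)
        return vlan

    def vlan_for_traffic(self, exp_id, src, dst):
        """Tag traffic with the experiment VLAN only if both ends belong to it."""
        vlan, hosts = self.experiments[exp_id]
        return vlan if src in hosts and dst in hosts else None

alloc = VlanAllocator()
v1 = alloc.start_experiment("exp1", ["node01 slots=4", "node02 slots=4"])
v2 = alloc.start_experiment("exp2", ["node03 slots=4"])
assert v1 != v2                                            # isolated slices
assert alloc.vlan_for_traffic("exp1", "node01", "node02") == v1
assert alloc.vlan_for_traffic("exp1", "node01", "node03") is None  # blocked
```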

9.
Acoustic analysis is a useful tool for diagnosing voice diseases. It also offers several advantages: it is non-invasive, provides an objective diagnosis, and can be used to evaluate surgical and pharmacological treatments and rehabilitation processes. Most approaches in the literature address the automatic detection of voice impairments from the sustained phonation of vowels. In this paper we propose a new scheme for detecting voice impairments from text-dependent running speech. The proposed methodology is based on segmenting speech into voiced and non-voiced frames and parameterizing each voiced frame with mel-frequency cepstral coefficients. Classification is carried out using a discriminative approach based on a multilayer perceptron neural network. The data used to train the system were taken from the voice disorders database distributed by Kay Elemetrics. The material used for training and testing contains the running speech corresponding to the well-known "rainbow passage" read by 140 patients (23 normal and 117 pathological). The results are compared with those obtained using sustained vowels: text-dependent running speech showed a slight improvement in detection accuracy.
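The first stage of the proposed pipeline, splitting running speech into frames and keeping only the voiced ones, can be sketched with short-time energy and zero-crossing rate. A real system would then compute mel-frequency cepstral coefficients per voiced frame and feed them to the MLP; the thresholds and frame sizes below are invented for illustration.

```python
# Sketch of voiced/unvoiced frame segmentation for running speech.
# Thresholds are illustrative; real systems tune them per database.
import math

def frames(signal, size=160, hop=80):
    """Overlapping analysis frames (e.g. 20 ms / 10 ms hop at 8 kHz)."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, hop)]

def is_voiced(frame, energy_thr=0.01, zcr_thr=0.25):
    """Voiced frames: high short-time energy, low zero-crossing rate."""
    energy = sum(x * x for x in frame) / len(frame)
    zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / len(frame)
    return energy > energy_thr and zcr < zcr_thr

# synthetic signal: a vowel-like 150 Hz tone followed by silence (8 kHz)
sr = 8000
voiced = [0.5 * math.sin(2 * math.pi * 150 * t / sr) for t in range(800)]
silence = [0.0] * 800
labels = [is_voiced(f) for f in frames(voiced + silence)]
assert any(labels) and not all(labels)   # some frames voiced, some not
```

Only the frames labelled voiced would be parameterized and classified; non-voiced frames are discarded, which is what makes the scheme text-dependent rather than vowel-based.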

10.
The complexity and requirements of web applications are increasing in order to meet more sophisticated business models (web services and cloud computing, for instance). For this reason, characteristics such as performance, scalability and security are addressed in web server cluster design. Due to rising energy costs and environmental concerns, energy consumption in this type of system has become a main issue. This paper presents energy consumption reduction techniques that use a load forecasting method, combined with DVFS (Dynamic Voltage and Frequency Scaling) and dynamic configuration techniques (turning servers on and off), in a soft real-time web server clustered environment. Our system reduces energy consumption while maintaining users' satisfaction with respect to request deadlines being met. The results obtained show that prediction capabilities increase the QoS (Quality of Service) of the system while maintaining or improving the energy savings over state-of-the-art power management mechanisms. To validate this predictive policy, a web application running a real workload profile was deployed on an Apache server cluster testbed running Linux.

11.
Some classes of real-time systems function in environments that cannot be modeled with static approaches. In such environments, the arrival rates of events that drive transient computations may be unknown. Periodic computations may also be required to process varying numbers of data elements per period, where the number of elements to be processed in an arbitrary period cannot be known at system engineering time, nor can an upper bound be determined; thus, a worst-case execution time cannot be obtained for such periodics. This paper presents middleware services that support such dynamic real-time systems through load balancing. The middleware services have been implemented and employed for (1) the DynBench dynamic real-time benchmark suite and (2) an experimental Navy system. Experimental results show the effectiveness of our load balancing techniques in consistently delivering real-time quality of service, even in highly dynamic environments. This revised version was published online in July 2006 with corrections to the Cover Date.

12.
13.
Enabling deft data integration from numerous, voluminous and heterogeneous data sources is a major bioinformatic challenge. Several approaches have been proposed to address this challenge, including data warehousing and federated databasing. Yet despite the rise of these approaches, integration of data from multiple sources remains problematic and toilsome. These two approaches follow a user-to-computer communication model for data exchange, and do not facilitate a broader concept of data sharing or collaboration among users. In this report, we discuss the potential of Web 2.0 technologies to transcend this model and enhance bioinformatics research. We propose a Web 2.0-based Scientific Social Community (SSC) model for the implementation of these technologies. By establishing a social, collective and collaborative platform for data creation, sharing and integration, we promote a web services-based pipeline featuring web services for computer-to-computer data exchange as users add value. This pipeline aims to simplify data integration and creation, to realize automatic analysis, and to facilitate reuse and sharing of data. SSC can foster collaboration and harness collective intelligence to create and discover new knowledge. In addition to its research potential, we also describe its potential role as an e-learning platform in education. We discuss lessons from information technology, predict the next generation of the Web (Web 3.0), and describe its potential impact on the future of bioinformatics studies.

14.
MOTIVATION: Computationally, in silico experiments in biology are workflows describing the collaboration of people, data and methods. The Grid and Web services have been proposed as the next-generation infrastructure supporting the deployment of bioinformatics workflows. But the growing number of autonomous and heterogeneous services poses challenges to the middleware with respect to composition, i.e. discovery and interoperability of the services required within in silico experiments. In the IRIS project, we handle the problem of service interoperability with a semi-automatic procedure for identifying and placing customizable adapters in workflows built by service composition. RESULTS: We show the effectiveness and robustness of the software-aided composition procedure through a case study in the field of life science, in which we combine different database services with different analysis services with the objective of discovering the required adapters. Our experiments show that we can identify relevant adapters with high precision and recall.
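The adapter-placement idea can be sketched as type matching along a service chain: when one service's output type does not match the next service's input type, an adapter bridging the two is looked up and inserted. The type names and the adapter registry below are invented for illustration; the IRIS procedure itself is semi-automatic and far richer.

```python
# Sketch of adapter placement in a composed workflow: insert a bridging
# adapter wherever adjacent services' output/input types mismatch.
# Type names and the adapter registry are hypothetical.

ADAPTERS = {
    ("fasta", "genbank"): "fasta2genbank",
    ("genbank", "alignment_input"): "gb2aln",
}

def compose(services):
    """services: list of (name, input_type, output_type) tuples.

    Returns the pipeline with adapter names inserted where types mismatch;
    raises ValueError if no suitable adapter is known.
    """
    pipeline = [services[0][0]]
    for (_, _, out_t), (name, in_t, _) in zip(services, services[1:]):
        if out_t != in_t:
            adapter = ADAPTERS.get((out_t, in_t))
            if adapter is None:
                raise ValueError(f"no adapter from {out_t} to {in_t}")
            pipeline.append(adapter)
        pipeline.append(name)
    return pipeline

workflow = compose([
    ("fetch_sequence", "id", "fasta"),
    ("annotate", "genbank", "genbank"),
])
assert workflow == ["fetch_sequence", "fasta2genbank", "annotate"]
```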

15.
Results are presented from an analysis of 6 different design approaches for stabilization ponds (plus 5 sub-approaches) under the influence of country-specific conditions. The investigation included facultative aerated ponds, facultative ponds and anaerobic ponds. Two different approaches were used to investigate sensitivity: a Monte Carlo method running several thousand automated simulations, and an analysis focusing especially on temperature effects. The results showed strong temperature dependencies as well as structural differences between the approaches. Temperature increases of only 5 °C caused the calculated areas to decrease by up to 15% (aerated facultative ponds), around 40% (facultative ponds) and as much as 50% (anaerobic ponds). The calculated efficiencies, on the other hand, were usually less dependent on temperature or were not part of the approach at all. Significant differences between the design approaches for a given treatment system occurred (e.g. more than 80% with respect to areas). The results suggest that the applicability of design approaches may be restricted and that these approaches should be analysed carefully for every specific situation. A design based on stochastic simulations is recommended, especially if combined systems are to be designed.
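A Monte Carlo temperature-sensitivity analysis of this kind can be sketched with a textbook-style design relation: a completely mixed pond sized by first-order BOD removal, with the rate corrected for temperature as kT = k20 · θ^(T−20). The parameter values below are generic textbook-style illustrations, not those of the study, and the specific design equation is an assumption; the paper compares several different approaches.

```python
# Sketch of Monte Carlo temperature sensitivity for pond sizing.
# Design equation and all parameter values are illustrative assumptions.
import random
import statistics

def pond_area(T, Q=500.0, Ci=300.0, Ce=60.0, depth=1.5, k20=0.3, theta=1.05):
    """Required area (m^2) for a completely mixed pond,
    A = Q * (Ci/Ce - 1) / (kT * depth), with kT = k20 * theta**(T - 20)."""
    kT = k20 * theta ** (T - 20)      # first-order removal rate at T (1/d)
    return Q * (Ci / Ce - 1) / (kT * depth)

random.seed(1)
# sample the design temperature, as a Monte Carlo run over climate would
areas = [pond_area(random.gauss(20, 3)) for _ in range(5000)]
mean_area = statistics.mean(areas)

# warmer design temperatures need markedly less area, as the text reports
assert pond_area(25) < pond_area(20) < pond_area(15)
assert mean_area > 0
```

Sampling the other inputs (flow, influent strength, rate constant) the same way yields the full sensitivity picture the study builds with several thousand automated simulations.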

16.
Distributed systems based on clusters of workstations are more and more difficult to manage due to the increasing number of processors involved and the complexity of the associated applications. Such systems need efficient and flexible monitoring mechanisms to fulfill administration service requirements. In this paper, we present PHOENIX, a distributed platform supporting both application and operating system monitoring with a variable granularity. The granularity is defined using logical expressions that specify complex monitoring conditions; these conditions can be dynamically modified during application execution. Observation is based on automatic probe insertion combined with a system agent to minimize PHOENIX's execution-time overhead. The platform's extensibility offers a suitable environment for designing distributed value-added services (performance monitoring, load balancing, accounting, cluster management, etc.).
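Monitoring granularity defined by logical expressions, as described above, can be sketched as conditions over observed metrics that are compiled once and swapped at runtime. The metric names and thresholds are invented, and a production system would parse the expression safely rather than use `eval` as this toy sketch does.

```python
# Sketch of dynamically replaceable monitoring conditions expressed as
# logical expressions over metrics. Metric names/thresholds are invented;
# a real system would use a proper parser instead of eval.

def make_condition(expr):
    """Compile a condition like 'cpu > 0.9 and queue > 10' into a predicate."""
    code = compile(expr, "<condition>", "eval")
    return lambda metrics: bool(eval(code, {"__builtins__": {}}, metrics))

condition = make_condition("cpu > 0.9 and queue > 10")
assert condition({"cpu": 0.95, "queue": 12})
assert not condition({"cpu": 0.95, "queue": 3})

# the condition can be replaced while the application keeps running
condition = make_condition("cpu > 0.5 or errors > 0")
assert condition({"cpu": 0.2, "queue": 0, "errors": 1})
```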

17.
The typology of wetlands provides important information for both water resource managers and conservation planners. One of the most important aims of allocating wetlands to a certain type or class is to provide information about the ecosystem services that the wetland provides. There are two main approaches to wetland classification. First, there are top-down approaches, whereby wetlands are divided into several categories based on a conceptual understanding of how the wetland functions (mostly with regard to water flows). Second, there are bottom-up approaches, whereby the classification of wetlands is based on the collection of data in the wetland that is then subjected to various clustering techniques (mostly with regard to biodiversity). The most widely used top-down classification assigns wetlands to hydrogeomorphic units, which function as single units in terms of hydrology and geomorphology. This type of classification is most useful for water resource planning, as it provides information about how the wetland is connected to the drainage network and what the water inflows, throughflows and outflows of the wetland are. The bottom-up approach typically focuses on classifying wetland habitats rather than complete wetlands, where a wetland habitat represents a spatial unit delineated on the basis of vegetation, embedded within the (complete) hydrogeomorphic unit, and defined as an area of wetland that is homogeneous in terms of opportunities for plant growth. At a broad scale, most ecosystem services can be superficially derived from the hydrogeomorphic unit type and the way water moves through a wetland, but habitat units and the plant species that define them have a specific effect on the delivery of ecosystem services, for example with different assemblages providing different resistance to flow.
Some types of ecosystem services are exclusively linked to specific wetland habitats, especially provisioning services. For this reason, it is proposed that a combined approach of hydrogeomorphic classification together with a vegetation map offers the maximum information value for ecosystem service determination. To account for the potential pitfall of "double counting" when combining the top-down and bottom-up approaches, each service needs to be considered individually with reference to the degree to which it is either: (a) primarily determined by HGM class/attributes and modified by the vegetation class/attributes; or (b) primarily determined by the vegetation class/attributes.

18.
Development of NPACI Grid Application Portals and Portal Web Services   Total citations: 2 (self-citations: 0, cited by others: 2)
Grid portals and services are emerging as convenient mechanisms for providing the scientific community with familiar and simplified interfaces to the Grid. Our experiences in implementing computational grid portals, and the services needed to support them, have led to the creation of GridPort: a unique, integrated, layered software system for building portals and hosting portal services that access Grid services. The usefulness of this system has been successfully demonstrated with the implementation of several application portals. The system has several unique features: the software is portable and runs on most web servers; written in Perl/CGI, it is easy to support and modify; a single API provides access to a host of Grid services; it is flexible and adaptable; it supports single login between multiple portals; and portals built with it may run across multiple sites and organizations. In this paper we summarize our experiences in building this system, including our philosophy and design choices, and describe the software we are building to support portal development and portal services. Finally, we discuss our experiences in developing the GridPort Client Toolkit in support of remote Web client portals and Grid Web services.

19.
Ecosystem services research faces several challenges stemming from the plurality of interpretations of classifications and terminologies. In this paper we identify two main challenges with current ecosystem services classification systems: i) the inconsistency across concepts, terminology and definitions; and ii) the conflation of processes and end-state benefits, or flows and assets. Although different ecosystem service definitions and interpretations can be valuable for enriching the research landscape, the existing ambiguity must be addressed to improve comparability among ecosystem-service-based approaches. Using the cascade framework as a reference, and Systems Ecology as a theoretical underpinning, we aim to address the ambiguity across typologies. The cascade framework links ecological processes with elements of human well-being following a pattern similar to a production chain. Systems Ecology is a long-established discipline which provides insight into complex relationships between people and the environment. We present a refreshed conceptualization of ecosystem services which can support ecosystem service assessment techniques and measurement. We combine the notions of biomass, information and interaction from Systems Ecology with the ecosystem services conceptualization to improve definitions and clarify terminology. We argue that ecosystem services should be defined as the interactions (i.e. processes) of the ecosystem that produce a change in human well-being, while ecosystem components or goods, i.e. countable as biomass units, are only proxies in the assessment of such changes. Furthermore, Systems Ecology can support a re-interpretation of the ecosystem services conceptualization and related applied research, where more emphasis is needed on the underpinning complexity of the ecological system.

20.
SUMMARY: Sequence analysis using Web Resources (SeWeR) is an integrated, Dynamic HTML (DHTML) interface to commonly used bioinformatics services available on the World Wide Web. It is highly customizable, extendable, platform-neutral and completely server-independent; it can be hosted as a web page or used as stand-alone software running within a web browser.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号