Similar literature
20 similar records found
1.

Software-Defined Network (SDN) technology is a network management approach that facilitates a high level of programmability and centralized manageability. By leveraging the separation of the control and data planes, an energy-aware routing model can easily be implemented in such networks. In the present paper, we propose a two-phase SDN-based routing mechanism that aims at minimizing energy consumption while providing a certain level of QoS for the users' flows and achieving link load balancing. To reduce network energy consumption, a minimum graph-based Ant Colony Optimization (ACO) approach is used in the first phase. It prunes and optimizes the network tree by turning unnecessary switches off, providing an energy-minimized sub-graph that carries the network's existing flows. In the second phase, an innovative weighted routing approach is developed that guarantees the QoS requirements of incoming flows and routes them so as to balance the load on the links. We validated the proposed approach through extensive simulations on different traffic patterns and scenarios with different thresholds. The results indicate that the proposed routing method considerably reduces network energy consumption, especially for congested traffic dominated by mice flows, and provides effective link load balancing while satisfying the users' QoS requirements.
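As an illustration of the second-phase idea, the sketch below routes a flow on a weighted graph whose link weights grow with current utilization and hide links that cannot carry the flow's demand. It is a minimal sketch, not the authors' implementation; the attribute names `capacity_mbps`, `load_mbps`, and `demand_mbps` are assumptions.

```python
# Sketch of QoS- and load-aware path selection on an energy-minimized
# sub-graph; assumes networkx and illustrative attribute names.
import networkx as nx

def route_flow(g: nx.Graph, src, dst, demand_mbps: float):
    """Return a path whose links can carry `demand_mbps`, preferring
    lightly loaded links so that load is balanced across the topology."""
    def weight(u, v, data):
        free = data["capacity_mbps"] - data["load_mbps"]
        if free < demand_mbps:          # link cannot satisfy the flow's QoS
            return None                 # networkx treats None as "no edge"
        return 1.0 + data["load_mbps"] / data["capacity_mbps"]  # favor idle links
    return nx.shortest_path(g, src, dst, weight=weight)

g = nx.Graph()
g.add_edge("s1", "s2", capacity_mbps=1000, load_mbps=200)
g.add_edge("s1", "s3", capacity_mbps=1000, load_mbps=700)
g.add_edge("s3", "s2", capacity_mbps=1000, load_mbps=100)
print(route_flow(g, "s1", "s2", demand_mbps=300))   # -> ['s1', 's2']
```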


2.
QoS and Contention-Aware Multi-Resource Reservation   (Cited by: 1; self-citations: 0; external citations: 1)
To provide Quality of Service (QoS) guarantees in distributed services, it is necessary to reserve multiple computing and communication resources for each service session. Meanwhile, techniques have become available for the reservation and enforcement of various types of resources. There is therefore a need for an integrated framework for coordinated multi-resource reservation. One challenge in creating such a framework is the complex relation between the end-to-end application-level QoS and the corresponding end-to-end resource requirements. Furthermore, the goals of (1) providing the best end-to-end QoS for each distributed service session and (2) increasing the overall reservation success rate of all service sessions conflict with each other. In this paper, we present a QoS- and contention-aware framework for end-to-end multi-resource reservation for distributed services. The framework assumes a reservation-enabled environment in which each type of resource can be reserved. It consists of (1) a component-based QoS-Resource Model, (2) a runtime system architecture for coordinated reservation, and (3) a runtime algorithm for computing end-to-end multi-resource reservation plans. The algorithm alleviates the conflict between the QoS of an individual service session and the success rate of all service sessions. More specifically, for each service session, the algorithm computes an end-to-end reservation plan that guarantees the highest possible end-to-end QoS level under the current end-to-end resource availability and requires the lowest percentage of bottleneck resource(s) among all feasible reservation plans. Our simulation results show the excellent performance of this algorithm.
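The plan-selection rule described above can be sketched as follows: walk QoS levels from highest to lowest and, among the feasible plans at a level, pick the one with the smallest bottleneck share. This is an illustrative sketch only; the data structures `plans_by_level` and `available` are assumptions.

```python
# Illustrative sketch of the plan-selection rule: choose the highest
# feasible QoS level, and among its feasible plans the one with the
# smallest bottleneck (largest fractional resource demand).
def select_plan(plans_by_level, available):
    """plans_by_level: {qos_level: [{resource: amount, ...}, ...]},
    available: {resource: amount}."""
    for level in sorted(plans_by_level, reverse=True):     # highest QoS first
        feasible = []
        for plan in plans_by_level[level]:
            fractions = [plan[r] / available[r] for r in plan]
            if all(f <= 1.0 for f in fractions):
                feasible.append((max(fractions), plan))     # bottleneck share
        if feasible:
            bottleneck, plan = min(feasible, key=lambda x: x[0])
            return level, plan, bottleneck
    return None   # no feasible reservation at any QoS level
```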

3.
High-performance and distributed computing systems such as peta-scale, grid, and cloud infrastructures are increasingly used for running scientific models and business services. These systems experience large availability variations due to hardware and software failures. Resource providers need to account for these variations while providing the required QoS at appropriate costs in dynamic resource and application environments. Although the performance and reliability of these systems have been studied separately, there has been little analysis of the Quality of Service (QoS) lost under varying availability levels. In this paper, we present a resource performability model to estimate lost performance and the corresponding costs under varying availability levels. We use the resulting model in a multi-phase planning approach for scheduling a set of deadline-sensitive meteorological workflows atop grid and cloud resources to trade off performance, reliability, and cost. We use simulation results driven by failure data collected over the lifetime of high-performance systems to demonstrate how the proposed scheme better accounts for resource availability.
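A toy performability-style comparison in the spirit of this abstract is sketched below: expected delivered throughput, expected lost work, and cost per delivered unit for each candidate resource. The numbers and field names are illustrative assumptions, not the paper's model.

```python
# Rough sketch of a performability-style comparison between candidate
# resources. All values and field names are illustrative.
resources = [
    {"name": "grid",  "throughput": 100.0, "availability": 0.92,  "cost_per_hour": 1.0},
    {"name": "cloud", "throughput": 80.0,  "availability": 0.999, "cost_per_hour": 2.5},
]

for r in resources:
    delivered = r["throughput"] * r["availability"]   # expected useful work
    lost = r["throughput"] - delivered                # performance lost to downtime
    cost_per_unit = r["cost_per_hour"] / delivered
    print(f'{r["name"]}: delivered={delivered:.1f}, lost={lost:.1f}, '
          f'cost/unit={cost_per_unit:.4f}')
```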

4.
The proliferation of cloud data center applications and network function virtualization (NFV) boosts dynamic and QoS-dependent traffic into the data center network. Many current network routing protocols are requirement-agnostic, while other QoS-aware protocols are computationally complex and inefficient for small flows. In this paper, a computationally efficient congestion avoidance scheme for software-defined cloud data centers, called CECT, is proposed. The proposed algorithm not only minimizes network congestion but also reallocates resources based on flow requirements. To this end, we use a routing architecture that reconfigures the network resources when triggered by one of two events: (1) the elapsing of a predefined time interval, or (2) the occurrence of congestion. Moreover, a forwarding-table entry compression technique is used to reduce the computational complexity of CECT. We mathematically formulate an optimization problem and design a genetic algorithm to solve it. We test the proposed algorithm on real-world network traffic. Our results show that CECT is computationally fast and the solution is feasible in all cases. To evaluate our algorithm in terms of throughput, CECT is compared with ECMP (using the shortest-path algorithm as the cost function). Simulation results confirm that the throughput obtained by running CECT is improved by up to 3× compared to ECMP, while packet loss is decreased by up to 2×.
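The two reconfiguration triggers mentioned above (a timer elapsing or observed congestion) can be sketched as below. The threshold values are illustrative assumptions and the actual replanning step of CECT is not shown.

```python
# Sketch of the two reconfiguration triggers described above: a periodic
# timer and a congestion event. Thresholds are placeholders, not CECT's.
import time

REPLAN_INTERVAL_S = 30.0      # assumed predefined time interval
CONGESTION_UTIL = 0.9         # assumed congestion threshold

def should_reconfigure(last_replan: float, link_utils: dict) -> bool:
    timer_expired = (time.time() - last_replan) >= REPLAN_INTERVAL_S
    congested = any(u >= CONGESTION_UTIL for u in link_utils.values())
    return timer_expired or congested
```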

5.
The performance of mobile devices, including smartphones and laptops, is steadily rising as prices plummet. Mobile devices are thus changing from mere interfaces for requesting services into computing resources for providing and sharing services, owing to their vastly improved performance. With the increasing number of mobile device users, the utilization rate of SNS (Social Networking Service) is also soaring. Applying SNS to the existing computing environment enables members of a social network to share computing services without further authentication. To use mobile devices as computing resources, temporary network disconnections caused by user mobility and the various HW/SW faults that cause service disruption must be considered. These issues must be resolved to support mobile users and to meet their service requirements. Accordingly, we propose fault tolerance and Quality of Service (QoS) scheduling using CAN (Content Addressable Network) in Mobile Social Cloud Computing (MSCC). MSCC is a computing environment that integrates social network-based cloud computing and mobile devices. In this environment, a mobile user can, through mobile devices, become a member of a social network through real-world relationships. Members of a social network share cloud services or data with other members without further authentication by using their mobile devices. We use CAN as the underlying structure of the MSCC to logically manage the locations of mobile devices. The fault-tolerance and QoS scheduling consists of four sub-scheduling algorithms: malicious-user filtering, cloud service delivery, QoS provisioning, and replication and load-balancing. Under the proposed scheduling, a mobile device is used as a resource for providing cloud services, faults caused by user mobility or other reasons are tolerated, and user QoS requirements are considered. We simulate scheduling both with and without CAN. The simulation results show that the proposed scheduling algorithm improves cloud service execution time, finish time, and reliability, and reduces the cloud service error rate.

6.
Quality of service (QoS) is an active research topic in areas such as distributed systems, real-time multimedia applications, and networking. These systems must satisfy reliability, uptime, security, and throughput constraints as well as application-specific requirements. Real-time multimedia applications are commonly distributed over the network and must meet various timing constraints without interfering with control flows. In particular, video compressors produce variable-bit-rate streams that do not match the constant-bit-rate channels typically provided by classical real-time protocols, severely reducing the efficiency of network utilization. Thus, it is necessary to enlarge the communication bandwidth to transfer compressed multimedia streams using the Flexible Time-Triggered Enhanced Switched Ethernet (FTT-ESE) protocol. FTT-ESE automates the calculation of the compression level and the adjustment of the stream's bandwidth. This paper focuses on low-latency multimedia transmission over Ethernet with dynamic quality-of-service (QoS) management. The proposed framework provides dynamic QoS for multimedia transmission over Ethernet with the FTT-ESE protocol. This paper also presents distinct QoS metrics based on both image quality and network features. Experiments with recorded and live video streams show the advantages of the proposed framework. To validate the solution, we designed and implemented a Matlab/Simulink-based simulator that evaluates different network architectures using Simulink blocks.
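A minimal sketch of the adaptation idea described above is shown below: pick the highest video quality (compression level) whose bit rate fits the bandwidth currently granted to the stream. The rate table and the mapping are assumptions for illustration, not part of FTT-ESE.

```python
# Illustrative sketch: choose a compression (quality) level that fits the
# bandwidth currently granted to the stream. The rate table is a
# placeholder, not part of the FTT-ESE standard.
QUALITY_LEVELS = [          # (quality index, approx. bit rate in Mbit/s)
    (5, 8.0), (4, 4.0), (3, 2.0), (2, 1.0), (1, 0.5),
]

def pick_quality(granted_mbps: float) -> int:
    for quality, rate in QUALITY_LEVELS:        # highest quality first
        if rate <= granted_mbps:
            return quality
    return QUALITY_LEVELS[-1][0]                # fall back to lowest quality

print(pick_quality(3.0))   # -> 3
```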

7.
This paper presents a novel Call Admission Control (CAC) scheme which adopts a neural network approach, namely the Minimal Resource Allocation Network (MRAN) and its extended version, EMRAN. Though the current focus is on CAC for Asynchronous Transfer Mode (ATM) networks, the scheme is applicable to most high-speed networks. As accurate estimation of the required bandwidth for different services is needed, the proposed scheme offers a simple design procedure and provides better control in fulfilling Quality of Service (QoS) requirements. MRAN and EMRAN are online learning algorithms that facilitate efficient admission control in different traffic environments. Simulation results show that the proposed CAC schemes are more efficient than two conventional CAC approaches, the Peak Bandwidth Allocation scheme and the Cell Loss Ratio (CLR) upper-bound formula scheme. The prediction precision and computational time of the MRAN and EMRAN algorithms are also investigated. Both algorithms yield similar performance results, but the EMRAN algorithm has a lower computational load.

8.
Boosted by technology advancements and by government and commercial interest, ad-hoc wireless networks are emerging as a serious platform for distributed mission-critical applications. Guaranteeing QoS in this environment is a hard problem because several applications may share the same resources in the network, and mobile ad-hoc wireless networks (MANETs) typically exhibit high variability in network topology and communication quality. In this paper we introduce DYNAMIQUE, a resource management infrastructure for MANETs. We present a resource model for multi-application admission control that optimizes the application admission utility, defined as a combination of the QoS satisfaction ratio. A method based on external adaptation (shrinking QoS for existing applications and later QoS expansion) is introduced as a way to reduce computational complexity by reducing the search space. We designed an application admission protocol that uses a greedy heuristic to improve application utility. For this, the admission control considers network topology information from the routing layer. Specifically, the admission protocol benefits from a clustered network organization, as defined by ad-hoc routing protocols such as CBRP and LANMAR. Information on cluster membership and cluster-head elections allows the admission protocol to minimize control signaling and to improve application quality by localizing task mapping.

9.
Dynamically forecasting network performance using the Network Weather Service   (Cited by: 18; self-citations: 0; external citations: 18)
The Network Weather Service is a generalizable and extensible facility designed to provide dynamic resource performance forecasts in metacomputing environments. In this paper, we outline its design and detail the predictive performance of the forecasts it generates. While the forecasting methods are general, we focus on their ability to predict the TCP/IP end-to-end throughput and latency that are attainable by an application using systems located at different sites. Such network forecasts are needed both to support scheduling (Berman et al., 1996) and, by the metacomputing software infrastructure, to develop quality-of-service guarantees (DeFanti et al., to appear; Grimshaw et al., 1994). This revised version was published online in July 2006 with corrections to the Cover Date.
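In the spirit of this forecasting facility (but not its actual implementation), the sketch below keeps several simple predictors and, at each step, uses the one with the lowest cumulative error on the measurement history. The predictor set and error metric are assumptions.

```python
# Minimal sketch of adaptive forecaster selection: keep several simple
# predictors and use the one with the lowest error so far.
from statistics import mean, median

PREDICTORS = {
    "last":   lambda h: h[-1],
    "mean":   lambda h: mean(h),
    "median": lambda h: median(h),
}

def forecast(history):
    errors = {name: 0.0 for name in PREDICTORS}
    for t in range(1, len(history)):            # replay past measurements
        for name, p in PREDICTORS.items():
            errors[name] += abs(p(history[:t]) - history[t])
    best = min(errors, key=errors.get)          # lowest cumulative error wins
    return PREDICTORS[best](history), best

print(forecast([10.1, 9.8, 10.4, 3.2, 9.9, 10.0]))
```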

10.
With the advances of network function virtualization and cloud computing technologies, a number of network services are implemented across data centers by creating service chains of different virtual network functions (VNFs) running on virtual machines. Due to the complexity of the network infrastructure, creating a service chain incurs high operational costs, especially for carrier-grade network service providers, and supporting users' stringent QoS requirements is also complicated. Various research efforts have addressed these problems, but they focus on only one side of the optimization goal: either the users' side (e.g., latency minimization and QoS-based optimization) or the providers' side (e.g., resource optimization and cost minimization). Efficiently meeting the requirements of both users and service providers remains challenging. This paper proposes a VNF placement algorithm called VNF-EQ that meets users' service latency requirements while minimizing energy consumption. The proposed algorithm is dynamic in the sense that the locations or service chains of VNFs are reconfigured to minimize energy consumption when the traffic passing through the chain falls below a pre-defined threshold. We use a genetic algorithm because the problem is a variation of the multi-constrained path selection problem, which is known to be NP-complete. The benchmarking results show that the proposed approach outperforms other heuristic algorithms by as much as 49% and reduces energy consumption by rearranging VNFs.
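One possible genetic-algorithm formulation of this kind of placement problem is sketched below: a chromosome assigns each VNF in the chain to a server, and the fitness trades off energy (number of active servers) against a latency-violation penalty. All parameters and the fitness model are illustrative assumptions, not those of VNF-EQ.

```python
# Sketch of a GA for VNF placement: chromosome = server index per VNF;
# fitness penalizes energy use and latency-budget violations.
import random

N_VNF, N_SERVERS, LATENCY_BUDGET = 4, 6, 20.0
HOP_LATENCY, SERVER_POWER = 3.0, 200.0

def fitness(chrom):
    energy = SERVER_POWER * len(set(chrom))                    # active servers
    hops = sum(1 for a, b in zip(chrom, chrom[1:]) if a != b)  # inter-server hops
    latency = hops * HOP_LATENCY
    penalty = 1000.0 if latency > LATENCY_BUDGET else 0.0
    return energy + penalty

def evolve(pop_size=30, generations=50):
    pop = [[random.randrange(N_SERVERS) for _ in range(N_VNF)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]                         # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_VNF)                   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                          # mutation
                child[random.randrange(N_VNF)] = random.randrange(N_SERVERS)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

print(evolve())
```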

11.
Sustainable urban resource management depends essentially on a sound understanding of a city's resource flows. One established method for analyzing the urban metabolism (UM) is the Eurostat material flow analysis (MFA). However, for a comprehensive assessment of the UM, this method has its limitations. It does not account for all relevant resource flows, such as locally sourced resources, and it does not differentiate between flows that are associated with the city's resource consumption and resources that only pass through the city. This research sought to gain insights into the UM of Amsterdam by performing an MFA employing the Eurostat method. Modifications to that method were made to enhance its performance for comprehensive UM analyses. A case study of Amsterdam for the year 2012 was conducted and the results of the Eurostat and the modified Eurostat method were compared. The results show that Amsterdam's metabolism is dominated by water flows and by port-related throughput of fossil fuels. The modified Eurostat method provides a deeper understanding of the UM than the urban Eurostat MFA, owing to three major benefits of the proposed modifications. First, the MFA presents a more complete picture of the flows in the UM. Second, the modified resource classification presents findings in more detail. Third, explicating throughput flows yields a much-improved insight into the nature of a city's imports, exports, and stock. Overall, these advancements provide a deeper understanding of the UM and make the MFA method more useful for sustainable urban resource management.

12.
Development of high-performance distributed applications, called metaapplications, is extremely challenging because of their complex runtime environment coupled with their requirements of high performance and Quality of Service (QoS). Such applications typically run on a set of heterogeneous machines with dynamically varying loads, connected by heterogeneous networks possibly supporting a wide variety of communication protocols. In spite of the size and complexity of such applications, they must provide the high performance and QoS mandated by their users. To achieve high performance, they need to adaptively utilize their computational and communication resources. Apart from adaptive resource utilization, such applications have a third kind of requirement related to remote-access QoS. Different clients, although accessing a single server resource, may have differing QoS requirements for their remote connections. A single server resource may also need to provide different QoS for different clients, depending on issues such as the amount of trust between the server and a given client. These QoS requirements can be encapsulated under the abstraction of remote access capabilities. Metaapplications need to address all three of the above requirements in order to achieve high performance and satisfy user expectations of QoS. This paper presents Open HPC++, a programming environment for high-performance applications running in a complex and heterogeneous run-time environment. Open HPC++ provides application-level tools and mechanisms to satisfy application requirements of adaptive resource utilization and remote access capabilities. Open HPC++ is designed along the lines of CORBA and uses an Object Request Broker (ORB) to support seamless communication between distributed application components. To provide adaptive utilization of communication resources, it uses the principle of open implementation to open up the communication mechanisms of its ORB. By virtue of its open architecture, the ORB supports multiple, possibly custom, communication protocols, along with automatic and user-controlled protocol selection at run-time. An extension of the same mechanism is used to support the concept of remote access capabilities. To support adaptive utilization of computational resources, Open HPC++ also provides a flexible yet powerful set of load-balancing mechanisms that can be used to implement custom load-balancing strategies. The paper also presents performance evaluations of Open HPC++ adaptivity and load-balancing mechanisms. This revised version was published online in July 2006 with corrections to the Cover Date.

13.
IRBM, 2014, 35(6): 299-309
Network technologies have facilitated the implementation of health services based on ubiquitous systems, allowing pervasive monitoring of patients in their daily activities without significantly interfering with their lifestyle. This entails the need to ensure adequate management and security of healthcare environment networks. However, traffic monitoring has become an arduous task, requiring autonomic mechanisms that describe the network's normal behavior. Thus, the Digital Signature of Network Segment using Flow analysis (DSNSF) is introduced as a mechanism to assist network management through traffic characterization. For this purpose, three methods belonging to different groups of algorithms are used: the statistical procedure Principal Component Analysis (PCA), the Ant Colony Optimization (ACO) metaheuristic, and the Holt–Winters forecasting method. These methods characterize the traffic at two distinct levels. The first is the network infrastructure, which encompasses the entire network, including non-healthcare data from the different sectors that compose an e-health environment. The second level is defined by profiles of the traffic used to monitor patients' vital and behavioral signs. In addition, an approach for anomaly detection is proposed, which is able to recognize unusual events that may affect the proper operation of the services provided by the network.
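As an illustration of one of the three methods mentioned above, the sketch below builds a Holt–Winters traffic baseline (a DSNSF-like signature) and flags observations that deviate strongly from it. The seasonal period, threshold, and function names are assumptions for illustration.

```python
# Sketch of a Holt-Winters traffic baseline with a simple deviation test
# for anomalies. Seasonal period and threshold are assumptions.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def build_baseline(history, season=288):          # e.g. 5-minute bins per day
    model = ExponentialSmoothing(history, trend="add",
                                 seasonal="add", seasonal_periods=season).fit()
    residual_std = np.std(history - model.fittedvalues)
    return model, residual_std

def is_anomalous(model, residual_std, observed, k=3.0):
    predicted = model.forecast(len(observed))
    return np.abs(observed - predicted) > k * residual_std   # boolean mask
```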

14.
A complexity indicator based on the diversity of energy and resource uses by a system is proposed in this paper. The indicator is an emergy-based index of complexity derived from a modified Shannon information formula that provides a quantitative assessment of the diversity of sources. The emergy approach assigns to each driving input a weight that derives from the environmental work performed by nature to generate that resource. This quality assessment goes far beyond the simple accounting of mass and energy of input flows and takes proper account of their interlinkage with biosphere dynamics. The rationale of the proposed indicator is that complexity cannot be assessed by simply counting individuals, species, and processes, but requires focusing on several aspects of resource flows, namely their amount, frequency, and quality. Different mixes of emergy input flows give rise to different levels of growth and complexity. Systems that rely on only a small set of the many potentially available sources possess a built-in fragility that may lead to their collapse when the resource basis shrinks or changes. For validation purposes, the proposed indicator was applied to the performance of selected national economies (Nicaragua, Latvia, Denmark, and Italy) in selected years and of the urban system of Rome (Italy) over a forty-year (1962–2002) historical series. Results show an increasing complexity of the urban system of Rome over time, while lower complexity was calculated for the investigated national systems as a whole (likely an effect of nationwide averaging), with Italy ranking highest and Latvia lowest. The same assessment performed for the Italian agricultural system over a twenty-year time series (1985–2006) shows a decline of the emergy-adjusted Shannon indicator from about 75% down to 62%, while the decline was from 73% to 63% for the agriculture of the Campania region (southern Italy).
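For clarity, one plausible form of the kind of emergy-weighted Shannon index described here (a reconstruction for illustration, not necessarily the authors' exact definition) is:

```latex
% Emergy-weighted Shannon diversity, with p_i the emergy share of the
% i-th input flow and a percentage form normalized by the maximum ln n.
H = -\sum_{i=1}^{n} p_i \ln p_i,
\qquad p_i = \frac{Em_i}{\sum_{j=1}^{n} Em_j},
\qquad H_{\%} = \frac{H}{\ln n} \times 100\%
```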

15.
HiperLAN/2 (HIgh PErformance Radio Local Area Network) is a new standard from ETSI (European Telecommunications Standards Institute) for high-speed wireless LANs, interconnecting portable devices to each other and to broadband core networks based on different networking technologies such as IP, ATM, IEEE 1394, and others. This paper introduces the basic features of the HiperLAN/2 MAC protocol. It presents performance evaluation results, specifically related to the mechanisms provided by HiperLAN/2 to manage bandwidth resource requests and grants. These results are assessed in terms of their flexibility and efficiency in supporting delay-sensitive traffic, such as voice and Web data traffic, which are expected to be transported by broadband wireless LANs.

16.
In large-scale networks such as IEEE 802.16 (WiMAX), it is important not only to monitor but also to control the amount of traffic injected into the network. This helps decrease congestion and, consequently, guarantee the Quality of Service (QoS) requirements of each traffic class. In this study we propose a traffic policer based on the token-bucket concept for WiMAX networks. The token bucket parameters (token rate and bucket size) are adjusted according to the traffic characteristics of each traffic class individually. Simulation results show that the proposed traffic policing technique greatly enhances network performance. It decreases the average delay of real-time traffic such as rtPS and therefore reduces the probability of data drops due to missed deadlines. It also decreases the data loss probability of non-real-time service classes such as nrtPS.
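A minimal token-bucket policer sketch is shown below: each class gets its own token rate and bucket depth, and a packet is forwarded only if enough tokens are available. The per-class parameter values are illustrative assumptions, not those tuned in the paper.

```python
# Minimal token-bucket policer sketch with per-class parameters.
import time

class TokenBucket:
    def __init__(self, rate_bps: float, bucket_bits: float):
        self.rate = rate_bps          # token refill rate (bits per second)
        self.capacity = bucket_bits   # bucket depth (bits)
        self.tokens = bucket_bits
        self.last = time.monotonic()

    def allow(self, packet_bits: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True          # conforming packet: forward
        return False             # non-conforming: drop or mark

policers = {                      # per-class parameters, chosen per traffic profile
    "rtPS":  TokenBucket(rate_bps=2e6, bucket_bits=4e5),
    "nrtPS": TokenBucket(rate_bps=1e6, bucket_bits=8e5),
}
print(policers["rtPS"].allow(12_000))   # -> True while tokens remain
```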

17.
Micro-grid systems (MGS) are increasingly investigated for green and energy-efficient buildings in order to reduce energy consumption while maintaining occupants' comfort. An MGS includes renewable energy sources for power production, storage devices for storing excess power, and control strategies for orchestrating all components and improving the system's efficiency. MGS can be seen as complex systems composed of heterogeneous entities that interact dynamically and collectively to balance energy efficiency and occupants' comfort. However, the uncertainty and intermittency of energy production and consumption require real-time forecasting methods and predictive control strategies. The State-of-Charge (SoC) of batteries is one of the main parameters used in MGS predictive control algorithms. It indicates how much energy is stored and how long an MGS can rely on the deployed storage devices. Several methods have been developed for SoC estimation, but little work has been dedicated to SoC forecasting in MGS. In this paper, we focus on advancing MGS predictive control through near real-time embedded forecasting of battery SoC. We deployed two forecasting methods, Long Short-Term Memory (LSTM) and Auto-Regressive Integrated Moving Average (ARIMA), on two platforms. Their accuracy and performance have been evaluated in both classical batch mode and streaming mode. Extensive experiments have been conducted for different forecasting horizons, and results are presented using two main metrics: accuracy and computational time. The results show that LSTM outperforms ARIMA for real-time forecasting, offering a better tradeoff between forecasting accuracy and performance.
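The evaluation described above can be sketched as follows: forecast SoC over a horizon and report the two metrics used (accuracy and computational time). The ARIMA order, the synthetic series, and the horizon are assumptions; an LSTM would be timed and scored in the same way.

```python
# Sketch of a SoC forecasting evaluation reporting accuracy (MAE) and
# computational time. Data and model order are illustrative.
import time
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
soc = 50 + 10 * np.sin(np.linspace(0, 12, 300)) + rng.normal(0, 1, 300)
train, test = soc[:288], soc[288:]          # 12-step forecasting horizon

start = time.perf_counter()
model = ARIMA(train, order=(2, 1, 2)).fit()
forecast = model.forecast(steps=len(test))
elapsed = time.perf_counter() - start

mae = np.mean(np.abs(forecast - test))
print(f"ARIMA MAE={mae:.2f} %SoC, fit+forecast time={elapsed:.2f}s")
```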

18.
Spatial-temporal correlations among the data play an important role in traffic flow prediction. Correspondingly, traffic modeling and prediction based on big data analytics emerge due to the city-scale interactions among traffic flows. A new methodology based on sparse representation is proposed to reveal the spatial-temporal dependencies among traffic flows so as to simplify the correlations among traffic data for the prediction task at a given sensor. Three important findings are observed in the experiments: (1) Only traffic flows immediately prior to the present time affect the formation of current traffic flows, which implies that traditional high-order predictors can be reduced to a first-order model. (2) The spatial context relevant to a given prediction task is more complex than what is assumed to exist locally and can spread out to the whole city. (3) The spatial context varies with the target sensor undergoing prediction and enlarges as the prediction time lag increases. Because the scope of human mobility is subject to travel time, identifying how the spatial context varies with time lag is crucial for prediction. Since sparse representation can capture the varying spatial context to adapt to the prediction task, it outperforms traditional methods whose inputs are confined to data from a fixed number of nearby sensors. Because the spatial-temporal context for any prediction task is detected automatically from the traffic data, with no additional information about network topology needed, the method scales well to large networks.
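A hedged sketch of the sparse-representation idea follows: regress the target sensor's next reading on the previous-time-step readings of all sensors with an L1 penalty, so the nonzero coefficients identify the (possibly city-wide) spatial context. The synthetic data, penalty weight, and L1 solver choice are assumptions, not the paper's exact formulation.

```python
# Sketch: L1-penalized regression selects the sensors (spatial context)
# relevant to predicting a target sensor one step ahead (1st-order model).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
flows = rng.random((500, 40))                 # 500 time steps, 40 sensors (synthetic)
target = 12                                    # sensor being predicted

X = flows[:-1, :]                              # all sensors at time t
y = flows[1:, target]                          # target sensor at time t+1

model = Lasso(alpha=0.01).fit(X, y)
context = np.nonzero(model.coef_)[0]           # sensors selected as spatial context
print("relevant sensors:", context)
```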

19.
Bluetooth polling, also referred to as Bluetooth MAC scheduling or intra-piconet scheduling, is the mechanism that schedules the traffic between the participants in a Bluetooth network. Hence, this mechanism largely determines the delay packets experience in a Bluetooth network. In this paper, we present a polling mechanism that provides delay guarantees in an efficient manner, and we evaluate this polling mechanism by means of simulation. It is shown that this polling mechanism is able to provide delay guarantees while saving as many resources as possible, which can then be used for transmitting best-effort traffic or for retransmissions.

Rachid Ait Yaiz (1974) received his BS in Electrical Engineering from the Technische Hogeschool Arnhem, the Netherlands, in 1996 and his MSc in Electrical Engineering from the University of Twente, the Netherlands, in 1999. He received his Ph.D. in Telecommunications from the same university in 2004. Currently, he works for TNO Telecom. His research interests include mobile and wireless networks, and he is particularly interested in quality of service over mobile and wireless networks.

Geert Heijenk (1965) received his MSc in Computer Science from the University of Twente, the Netherlands, in 1988. He worked as a research staff member at the same university and received his Ph.D. in Telecommunications in 1995. He also held a part-time position as a researcher at KPN Research, the Netherlands, from 1989 until 1991. From 1995 until 2003, he was with Ericsson EuroLab Netherlands, first as a senior strategic engineer and, from 1999, as a research department manager. From 1998 until 2003 he was also a part-time senior researcher at the University of Twente. Currently, he is a full-time associate professor at the same university. His research interests include mobile and wireless networks, resource management, and quality of service.

20.
Data centers, as resource providers, are expected to deliver on performance guarantees while optimizing resource utilization to reduce cost. Virtualization techniques provide the opportunity to consolidate multiple separately managed containers of virtual resources on underutilized physical servers. A key challenge that comes with virtualization is the simultaneous on-demand provisioning of shared physical resources to virtual containers and the management of their capacities to meet service-quality targets at the least cost. This paper proposes a two-level resource management system to dynamically allocate resources to individual virtual containers. It uses local controllers at the virtual-container level and a global controller at the resource-pool level. An important advantage of this two-level control architecture is that it allows independent controller designs for separately optimizing the performance of applications and the use of resources. Autonomic resource allocation is realized through the interaction of the local and global controllers. A novelty of the local controller designs is their use of fuzzy-logic-based approaches to efficiently and robustly deal with the complexity and uncertainties of dynamically changing workloads and resource usage. The global controller determines the resource allocation based on a proposed profit model, with the goal of maximizing the total profit of the data center. Experimental results obtained through a prototype implementation demonstrate that, for the scenarios under consideration, the proposed resource management system can significantly reduce resource consumption while still achieving application performance targets.
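A rough sketch of the two-level interaction described above follows: each local controller turns the gap between measured and target utilization into a resource request (a crude proportional stand-in for the paper's fuzzy-logic controllers), and the global controller fits the requests into the shared pool, favoring containers with a higher profit weight. All numbers, names, and the profit weighting are illustrative assumptions.

```python
# Sketch of local request generation plus global, profit-weighted
# allocation under a shared capacity constraint. Illustrative only.
def local_request(current_alloc, utilization, target_util=0.7, gain=0.5):
    # Ask for more when running hot, less when idle; never below a floor.
    return max(0.1, current_alloc * (1 + gain * (utilization - target_util)))

def global_allocate(requests, profits, capacity):
    total = sum(requests.values())
    if total <= capacity:                      # pool can satisfy everyone
        return dict(requests)
    weight = {c: requests[c] * profits[c] for c in requests}
    scale = capacity / sum(weight.values())    # shrink to fit, profit-weighted
    return {c: weight[c] * scale for c in requests}

reqs = {c: local_request(1.0, u) for c, u in {"app1": 0.9, "app2": 0.5}.items()}
print(global_allocate(reqs, {"app1": 2.0, "app2": 1.0}, capacity=1.5))
```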
