Similar Literature
 Found 20 similar documents (search time: 15 ms)
1.

Efficient task processing and data storage remain among the most important challenges in Autonomous Driving (AD). On-board processing units struggle with the workload of AD tasks, especially for Artificial Intelligence (AI) based applications. Cloud and Fog computing offer good opportunities to overcome the limitations of on-board processing capacity. However, communication delays and real-time task constraints are the main issues to consider during task mapping. Moreover, fair resource allocation is an under-explored concept in AD task offloading, where mobility increases its complexity. We propose a task offloading simulation tool and approaches based on intelligent agents. Agents at the edge and the fog communicate and exchange their knowledge and history. We show results and proof-of-concept scenarios that illustrate our multi-agent-based proposition and task offloading simulation tool. We also analyze the impact of communication delays and processing-unit constraints on AD task offloading.


2.
The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity, and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds to mitigate resource limitations in SMDs. A number of computational offloading frameworks have been proposed for MCC wherein the intensive components of an application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework that focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability, and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. Its lightweight nature is validated by comparing computational offloading under the proposed framework and the latest existing frameworks. Analysis shows that with the proposed framework, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81%, and turnaround time of the application is decreased by 83.5% compared to existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and offers a lightweight solution for computational offloading in MCC.

3.

The modeling of complex computational applications as giant computational workflows has been a critically effective means of better understanding the intricacies of applications and of determining the best approach to their realization. Scheduling such workflows in the cloud while also considering users' differing quality-of-service requirements is a challenging task. The present paper introduces a new direction based on a divide-and-conquer approach to scheduling these workflows. The proposed Divide-and-conquer Workflow Scheduling algorithm (DQWS) is designed to minimize the cost of workflow execution while respecting its deadline. The critical path concept is the inspiration behind the divide-and-conquer process. DQWS finds the critical path, schedules it, removes it from the workflow, and thereby divides the leftover into several mini-workflows. The process continues until only chain-structured workflows, called linear graphs, remain. Scheduling the linear graphs is performed in the final phase of the algorithm. Experiments show that DQWS outperforms its competitors, both in meeting deadlines and in minimizing the monetary cost of executing scheduled workflows.

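The repeated peel-off of critical paths described for DQWS can be sketched as follows. This is a minimal illustration, assuming per-task durations stand in for execution costs; communication costs, deadlines, and the cloud pricing model from the paper are omitted.

```python
# Sketch of the DQWS divide step: repeatedly extract the critical (longest)
# path from a DAG until only chains remain. Illustrative only.

def critical_path(tasks, edges):
    """Longest path through a DAG given per-task durations in `tasks`."""
    succ = {t: [] for t in tasks}
    indeg = {t: 0 for t in tasks}
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    # Topological order (Kahn's algorithm).
    order, queue = [], [t for t in tasks if indeg[t] == 0]
    while queue:
        u = queue.pop()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    dist = {t: tasks[t] for t in tasks}      # longest finish time ending at t
    best_pred = {t: None for t in tasks}
    for u in order:
        for v in succ[u]:
            if dist[u] + tasks[v] > dist[v]:
                dist[v] = dist[u] + tasks[v]
                best_pred[v] = u
    # Walk back from the latest-finishing task.
    end = max(dist, key=dist.get)
    path = [end]
    while best_pred[path[-1]] is not None:
        path.append(best_pred[path[-1]])
    return list(reversed(path))

def decompose(tasks, edges):
    """Peel off critical paths until the workflow is fully consumed."""
    paths = []
    tasks, edges = dict(tasks), list(edges)
    while tasks:
        cp = critical_path(tasks, edges)
        paths.append(cp)
        tasks = {t: d for t, d in tasks.items() if t not in cp}
        edges = [(u, v) for u, v in edges if u in tasks and v in tasks]
    return paths

durations = {"A": 2, "B": 4, "C": 3, "D": 1}
deps = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]
print(decompose(durations, deps))  # the first extracted path is A-B-D
```

In the full algorithm each extracted path would be scheduled (and priced) before removal; here the decomposition alone is shown.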

4.

High energy consumption (EC) is one of the leading issues in the cloud environment. Optimizing EC is generally tied to the scheduling problem: an optimal scheduling strategy selects resources or tasks such that system performance is not violated while EC is minimized and resource utilization (RU) is maximized. This paper presents a task scheduling model for scheduling tasks on virtual machines (VMs). The objective of the proposed model is to minimize EC, maximize RU, and minimize workflow makespan while preserving the tasks' deadline and dependency constraints. An energy- and resource-efficient workflow scheduling algorithm (ERES) is proposed to schedule workflow tasks onto VMs and to dynamically deploy/un-deploy VMs based on the workflow tasks' requirements. An energy model is presented to compute the EC of the servers. A double-threshold policy is used to determine each server's status, i.e., overloaded, underloaded, or normal. To balance the workload on overloaded/underloaded servers, a live VM migration strategy is used. To check the effectiveness of the proposed algorithm, exhaustive simulation experiments are conducted. The proposed algorithm is compared with the power-efficient scheduling and VM consolidation (PESVMC) algorithm in terms of RU, energy efficiency, and task makespan. Further, the results are verified in a real cloud environment. The results demonstrate the effectiveness of the proposed ERES algorithm.

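The double-threshold status check that gates live VM migration can be sketched in a few lines. This is a minimal illustration, assuming CPU utilization in [0, 1] is the only load signal; the threshold values 0.2 and 0.8 are illustrative, not taken from the paper.

```python
# Sketch of a double-threshold server-status policy: servers between the two
# thresholds are "normal"; the extremes trigger VM migration/consolidation.

def server_status(utilization, low=0.2, high=0.8):
    if utilization > high:
        return "overloaded"
    if utilization < low:
        return "underloaded"
    return "normal"

def needs_migration(utilization):
    """A migration trigger acts only on the extremes."""
    return server_status(utilization) != "normal"

print(server_status(0.95))  # overloaded -> candidate to migrate VMs away
print(server_status(0.05))  # underloaded -> candidate for consolidation
```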

5.
Cheng Feng, Huang Yifeng, Tanpure Bhavana, Sawalani Pawan, Cheng Long, Liu Cong. Cluster Computing, 2022, 25(1): 619-631

As the services provided by cloud vendors deliver better performance, auto-scaling, load balancing, and optimized performance along with low infrastructure maintenance, more and more companies migrate their services to the cloud. Since cloud workloads are dynamic and complex, scheduling the jobs submitted by users in an effective way is a challenging task. Although many advanced job scheduling approaches have been proposed in recent years, almost all of them are designed to handle batch jobs rather than real-time workloads, in which user requests may be submitted at any time and in any quantity. In this work, we propose a Deep Reinforcement Learning (DRL) based job scheduler that dispatches jobs in real time to tackle this problem. Specifically, we focus on scheduling user requests so as to provide quality of service (QoS) to the end user along with a significant reduction in the cost of executing jobs on virtual instances. We implement our method with a Deep Q-Network (DQN) model, and our experimental results demonstrate that our approach can significantly outperform commonly used real-time scheduling algorithms.

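The reinforcement-learning dispatch idea can be illustrated with a toy tabular Q-learning stand-in for the DQN. Everything here is an assumption for illustration: the state is a coarse queue-length bucket, the action picks one of two hypothetical VM types, and the reward penalizes cost plus waiting time; the paper's actual state, action, and reward design is richer and uses a neural network.

```python
import random

# Toy tabular Q-learning stand-in for a DRL job dispatcher: learn which VM
# type to pick given the current queue length, trading cost against delay.

random.seed(0)
ACTIONS = [0, 1]                      # 0 = cheap/slow VM, 1 = costly/fast VM
Q = {}                                # (state, action) -> learned value

def reward(action, queue_len):
    cost = [1.0, 3.0][action]                 # illustrative price per job
    delay = queue_len / [1.0, 2.5][action]    # fast VM drains queues quicker
    return -(cost + delay)

def choose(state, eps=0.1):
    """Epsilon-greedy action selection."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def train(episodes=2000, alpha=0.1):
    for _ in range(episodes):
        state = random.randint(0, 5)  # current queue-length bucket
        a = choose(state)
        r = reward(a, state)
        old = Q.get((state, a), 0.0)  # one-step (bandit-style) update
        Q[(state, a)] = old + alpha * (r - old)

train()
# With a long queue the learned policy should prefer the fast VM,
# and with an empty queue the cheap VM.
print(max(ACTIONS, key=lambda a: Q.get((5, a), 0.0)))
print(max(ACTIONS, key=lambda a: Q.get((0, a), 0.0)))
```

A DQN replaces the Q table with a neural network so the method scales to large, continuous state spaces.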

6.
As studies on vehicular ad hoc networks have been conducted actively in recent years, convenient and reliable services can be provided to vehicles through traffic information, surrounding information, and file sharing. To serve multiple requests, road side units (RSUs) should receive requests from vehicles and schedule data transfers according to priority. In this paper, we propose a new scheduling scheme in which multiple RSUs are connected through wired networks and data is transferred through the collaboration of RSUs. The proposed scheme transfers safety and non-safety data by employing a collaborative strategy of multiple RSUs, reducing the deadline miss ratio and average response time. When safety data is generated, the data is transferred from the previous RSU in advance, and priority is assigned considering the deadline and reception rate. Since non-safety data is on-demand data processed by user requests, the proposed scheme provides a method that reduces the deadline miss ratio under the loads generated at RSUs. To demonstrate the superiority of the proposed scheme, we perform a performance evaluation in which the number and velocities of vehicles are varied. The evaluation shows that the proposed scheme achieves better deadline miss ratios and faster response times than existing schemes.
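Deadline- and reception-rate-aware ordering at an RSU can be sketched as a priority sort. This is a minimal illustration under stated assumptions: the priority combines slack to the deadline with the fraction of data a vehicle has already received, using illustrative weights, and safety data always precedes non-safety data; the paper's actual priority formula is not reproduced here.

```python
# Sketch of deadline-aware request ordering at an RSU: safety traffic first,
# then non-safety requests sorted by an urgency score.

def priority(req, now):
    slack = req["deadline"] - now
    # Less slack and a lower received fraction both make a request more urgent.
    urgency = slack * (1.0 - 0.5 * req["received_fraction"])
    return (0 if req["safety"] else 1, urgency)

def schedule(requests, now=0.0):
    return [r["id"] for r in sorted(requests, key=lambda r: priority(r, now))]

reqs = [
    {"id": "map_update",  "safety": False, "deadline": 5.0, "received_fraction": 0.2},
    {"id": "crash_alert", "safety": True,  "deadline": 9.0, "received_fraction": 0.0},
    {"id": "video_chunk", "safety": False, "deadline": 2.0, "received_fraction": 0.0},
]
print(schedule(reqs))  # safety data first, then non-safety by urgency
```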

7.
Zhang Degan, Cao Lixiang, Zhu Haoli, Zhang Ting, Du Jinyu, Jiang Kaiwen. Cluster Computing, 2022, 25(2): 1175-1187

Compared with traditional network tasks, the emerging Internet of Vehicles (IoV) technology has higher requirements for network bandwidth and delay. However, due to the limited computing resources and battery capacity of existing mobile devices, it is hard to meet these requirements. How to complete task offloading and computation with lower task delay and lower energy consumption is the central issue. Targeting the task offloading system of the IoV, this paper considers multiple MEC servers in its model and proposes a dynamic task offloading scheme based on deep reinforcement learning. It improves the traditional Q-Learning algorithm and combines deep learning with reinforcement learning to avoid the dimensionality disaster of the Q-Learning algorithm. Simulation results show that the proposed algorithm performs better in delay, energy consumption, and total system overhead under different numbers of tasks and wireless channel bandwidths.


8.
Current work on MapReduce task scheduling with deadline constraints takes into account neither the differences between Map and Reduce tasks nor the cluster's heterogeneity. This paper proposes an extensional MapReduce Task Scheduling algorithm for Deadline constraints on the Hadoop platform: MTSD. It allows a user to specify a job's deadline and tries to finish the job before that deadline. By measuring each node's computing capacity, a node classification algorithm is proposed in MTSD, which classifies the nodes into several levels in heterogeneous clusters. On top of this algorithm, we first introduce a novel data distribution model that distributes data according to each node's capacity level. Experiments show that the node classification algorithm improves data locality observably compared with the default scheduler, and can also improve other schedulers' locality. Second, we calculate a task's average completion time based on the node level, which improves the precision of the task's remaining-time estimation. Finally, MTSD provides a mechanism to decide which job's tasks should be scheduled by calculating the Map and Reduce task slot requirements.
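The node-classification and capacity-aware data placement ideas can be sketched as follows. This is a minimal illustration, assuming "computing capacity" is a single benchmark score per node and that level boundaries are evenly spaced; the paper's actual measurement and distribution model is more involved.

```python
# Sketch of capacity-based node classification and proportional data
# placement for a heterogeneous cluster. Illustrative only.

def classify(nodes, levels=3):
    """Bucket nodes into `levels` classes by measured capacity score."""
    lo = min(nodes.values())
    hi = max(nodes.values())
    width = (hi - lo) / levels or 1.0
    return {n: min(levels - 1, int((c - lo) / width)) for n, c in nodes.items()}

def distribute(nodes, total_blocks):
    """Give each node a share of data blocks proportional to its capacity,
    so faster nodes hold more of the data they will process locally."""
    cap = sum(nodes.values())
    return {n: round(total_blocks * c / cap) for n, c in nodes.items()}

capacities = {"n1": 10.0, "n2": 20.0, "n3": 40.0}
print(classify(capacities))       # slower nodes land in lower levels
print(distribute(capacities, 70))  # n3 receives the largest share
```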

9.
This paper presents a new approach to cost analysis of family planning programmes that focuses on behaviour change of programme clients as the final 'output' rather than units of contraceptive services delivered, as does the familiar couple-years-of-protection index. It is useful to know how much it costs to deliver a unit of contraceptive services, but it would also seem useful to know how much it costs to change a prospective client's behaviour. The proposed approach rests on the familiar 'steps to behaviour change' paradigm and: (1) develops a methodology for applying a client-behaviour-change-centred cost analysis to programme activities; (2) tests the methodology and concepts by applying them retrospectively to a case study of mass media interventions in Egypt; (3) derives cost per unit of behaviour changes for these Egyptian communications campaigns to demonstrate the workability of the approach. This framework offers a new approach to impact evaluation that would seem to be applicable to other components of family planning and reproductive health programmes.

10.

Transmitting electronic medical records (EMR) and other communication in the modern Internet of Things (IoT) healthcare ecosystem is both delay- and integrity-sensitive. Transmitting and computing volumes of EMR data on traditional clouds away from healthcare facilities is a main source of trust deficit in IoT-enabled applications. Reliable IoT-enabled healthcare (IoTH) applications demand careful deployment of computing and communication infrastructure (CnCI). This paper presents a FOG-assisted CnCI model for reliable healthcare facilities. Planning a secure and reliable CnCI for IoTH networks is a challenging optimization task. We propose a novel mathematical model (i.e., integer programming) to plan FOG-assisted CnCI for IoTH networks. It treats a wireless link interfacing gateway as a virtual machine (VM). An IoTH network contains three kinds of wirelessly communicating nodes: VMs, reduced computing power gateways (RCPG), and full computing power gateways (FCPG). The objective is to minimize the weighted sum of infrastructure and operational costs of the IoTH network planning. A swarm intelligence-based evolutionary approach is used to solve IoTH network planning, yielding superior quality solutions in a reasonable time. The discrete fireworks algorithm with three local search methods (DFWA-3-LSM) outperformed the other experimented algorithms in terms of average planning cost for all experimented problem instances. The DFWA-3-LSM lowered the average planning cost by 17.31%, 17.23%, and 18.28% when compared against discrete artificial bee colony with three local search methods (DABC-3-LSM), low-complexity biogeography-based optimization (LC-BBO), and the genetic algorithm, respectively. Statistical analysis demonstrates that the performance of DFWA-3-LSM is better than the other experimented algorithms. The proposed mathematical model is envisioned for secure, reliable, and cost-effective EMR data manipulation and other communication in healthcare.


11.
In large-scale heterogeneous cluster computing systems, processor and network failures are inevitable and can adversely affect applications executing on such systems. One way of taking failures into account is to employ a reliable scheduling algorithm. However, most existing scheduling algorithms for precedence-constrained tasks in heterogeneous systems consider only the schedule length and do not efficiently satisfy tasks' reliability requirements. In recognition of this problem, we build an application reliability analysis model based on the Weibull distribution, which can dynamically measure the reliability of tasks executing on a heterogeneous cluster with arbitrary network architectures. We then propose a reliability-driven earliest finish time with duplication scheduling algorithm (REFTD), which incorporates task reliability overhead into scheduling. Furthermore, to improve system reliability, it duplicates a task if the task's hazard rate exceeds a threshold θ. A comparison study, based on both randomly generated graphs and the graphs of some real applications, shows that our scheduling algorithm can shorten the schedule length and improve system reliability significantly.
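The Weibull-based hazard-rate test that triggers duplication can be sketched directly from the standard Weibull formulas. This is a minimal illustration; the per-processor shape/scale parameters and the threshold θ below are made-up values, and the paper's reliability-overhead accounting is omitted.

```python
import math

# Sketch of a Weibull hazard-rate check for task duplication: duplicate a
# task when the failure hazard during its execution exceeds a threshold.

def weibull_hazard(t, shape, scale):
    """Hazard rate h(t) = (k/lam) * (t/lam)^(k-1) of a Weibull distribution."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def weibull_reliability(t, shape, scale):
    """R(t) = exp(-(t/lam)^k): probability the processor survives until t."""
    return math.exp(-((t / scale) ** shape))

def should_duplicate(exec_time, shape, scale, theta):
    """Duplicate the task if the hazard rate at its finish time exceeds theta."""
    return weibull_hazard(exec_time, shape, scale) > theta

# A wearing-out processor (shape > 1) becomes riskier for long tasks:
print(should_duplicate(exec_time=2.0, shape=2.0, scale=100.0, theta=0.001))
print(should_duplicate(exec_time=80.0, shape=2.0, scale=100.0, theta=0.001))
```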

12.
In this paper, we consider the problem of scheduling and mapping precedence-constrained tasks to a network of heterogeneous processors. In such systems, processors are usually physically distributed, implying that the communication cost is considerably higher than in tightly coupled multiprocessors. Therefore, scheduling and mapping algorithms for such systems must schedule the tasks as well as the communication traffic by treating both the processors and communication links as equally important resources. We propose an algorithm that achieves these objectives and adapts its task scheduling and mapping decisions to the given network topology. Just like tasks, messages are scheduled and mapped to suitable links while minimizing the finish times of tasks. Heterogeneity of processors is exploited by scheduling critical tasks on the fastest processors. Our experimental study demonstrates that the proposed algorithm is efficient and robust, and yields consistent performance over a wide range of scheduling parameters. This revised version was published online in July 2006 with corrections to the Cover Date.
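The core earliest-finish-time idea — placing each task where it finishes soonest once communication delays are counted — can be sketched as below. This is a simplified stand-in, not the paper's algorithm: it assumes tasks arrive in a valid topological order, charges the communication cost only when parent and child run on different processors, and ignores link contention.

```python
# Sketch of earliest-finish-time mapping on heterogeneous processors with
# inter-processor communication costs. Illustrative only.

def eft_schedule(tasks, comp, comm, parents):
    """comp[t][p] = execution time of t on p; comm[(u, v)] = transfer time."""
    ready = {p: 0.0 for p in next(iter(comp.values()))}   # processor free time
    placed, finish = {}, {}
    for t in tasks:                       # tasks given in topological order
        best = None
        for p, free in ready.items():
            # A task starts once the processor is free and all inputs arrived.
            data_ready = max(
                [finish[u] + (comm[(u, t)] if placed[u] != p else 0.0)
                 for u in parents.get(t, [])],
                default=0.0,
            )
            fin = max(free, data_ready) + comp[t][p]
            if best is None or fin < best[0]:
                best = (fin, p)
        finish[t], placed[t] = best
        ready[best[1]] = best[0]
    return placed, finish

comp = {"A": {"p1": 2, "p2": 4}, "B": {"p1": 3, "p2": 2}, "C": {"p1": 4, "p2": 3}}
comm = {("A", "B"): 5, ("A", "C"): 1}
parents = {"B": ["A"], "C": ["A"]}
placed, finish = eft_schedule(["A", "B", "C"], comp, comm, parents)
print(placed)   # B stays with A to dodge the expensive A->B transfer
```

Note how B is kept on A's (slower) processor purely because the A→B message would otherwise dominate its finish time — the sense in which links are "equally important resources."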

13.
With the popularization and development of cloud computing, many scientific computing applications are conducted in cloud environments. However, the application scenarios of scientific computing are becoming increasingly dynamic and complicated, with unpredictable job submission times, different job priorities, and deadline and budget constraints on executing jobs. Thus, how to perform scientific computing efficiently in the cloud has become an urgent problem. To address it, we design an elastic resource provisioning and task scheduling mechanism to run scientific workflow jobs in the cloud. The goal of this mechanism is to complete as many high-priority workflow jobs as possible under budget and deadline constraints. The mechanism consists of four steps: job preprocessing, job admission control, elastic resource provisioning, and task scheduling. We evaluate it with four kinds of real scientific workflow jobs under different budget constraints, also accounting for uncertainties in task runtime estimations, provisioning delays, and failures. The results show that in most cases our mechanism achieves better performance than other mechanisms. In addition, the uncertainties of task runtime estimations, VM provisioning delays, and task failures do not have a major impact on the mechanism's performance.

14.

Fog-cloud computing is a promising distributed model for hosting ever-increasing Internet of Things (IoT) applications. IoT applications have different characteristics, such as deadline, frequency rate, and input file size. Fog nodes are heterogeneous, resource-limited devices and cannot accommodate all IoT applications. Given these difficulties, designing an efficient algorithm to deploy a set of IoT applications in a fog-cloud environment is very important. In this paper, a fuzzy approach is developed to classify applications based on their characteristics, and then an efficient heuristic algorithm is proposed to place applications on virtualized computing resources. The proposed policy aims to provide high quality of service for IoT users while maximizing the profit of fog service providers by minimizing resource wastage. Extensive simulation experiments are conducted to evaluate the performance of the proposed policy. Results show that it outperforms other approaches, improving average response time by up to 13% and the percentage of deadline-satisfied requests by up to 12%, and reducing resource wastage by up to 26%.

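The fuzzy classification step can be sketched with triangular membership functions over one characteristic. This is a minimal illustration under stated assumptions: only deadline tightness is used, and the class names ("fog", "either", "cloud") and membership breakpoints are illustrative; the paper also classifies by request rate and input size.

```python
# Sketch of fuzzy classification of an IoT application by deadline tightness:
# compute membership in each class and pick the strongest. Illustrative only.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify(deadline_ms):
    memberships = {
        "fog":    tri(deadline_ms, -1, 0, 150),       # tight deadline -> fog
        "either": tri(deadline_ms, 50, 150, 400),
        "cloud":  tri(deadline_ms, 200, 500, 10**9),  # loose deadline -> cloud
    }
    return max(memberships, key=memberships.get)

print(classify(30))    # tight deadline stays near the edge
print(classify(900))   # loose deadline can tolerate the cloud round trip
```

A placement heuristic would then try fog nodes first for "fog"-class applications and fall back to cloud VMs for the rest.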

15.
Task switch costs often show an asymmetry, with switch costs being larger when switching from a difficult task to an easier task. This asymmetry has been explained by difficult tasks being represented more strongly and consequently requiring more inhibition prior to switching to the easier task. The present study shows that switch cost asymmetries observed in arithmetic tasks (addition vs. subtraction) do not depend on task difficulty: Switch costs of similar magnitudes were obtained when participants were presented with unsolvable pseudo-equations that did not differ in task difficulty. Further experiments showed that neither task switch costs nor switch cost asymmetries were due to perceptual factors (e.g., perceptual priming effects). These findings suggest that asymmetrical switch costs can be brought about by the association of some tasks with greater difficulty than others. Moreover, the finding that asymmetrical switch costs were observed (1) in the absence of a task switch proper and (2) without differences in task difficulty, suggests that present theories of task switch costs and switch cost asymmetries are in important ways incomplete and need to be modified.

16.

Background

The widespread popularity of genomic applications is threatened by the “bioinformatics bottleneck” resulting from uncertainty about the cost and infrastructure needed to meet increasing demands for next-generation sequence analysis. Cloud computing services have been discussed as potential new bioinformatics support systems but have not been evaluated thoroughly.

Results

We present benchmark costs and runtimes for common microbial genomics applications, including 16S rRNA analysis, microbial whole-genome shotgun (WGS) sequence assembly and annotation, WGS metagenomics and large-scale BLAST. Sequence dataset types and sizes were selected to correspond to outputs typically generated by small- to midsize facilities equipped with 454 and Illumina platforms, except for WGS metagenomics where sampling of Illumina data was used. Automated analysis pipelines, as implemented in the CloVR virtual machine, were used in order to guarantee transparency, reproducibility and portability across different operating systems, including the commercial Amazon Elastic Compute Cloud (EC2), which was used to attach real dollar costs to each analysis type. We found considerable differences in computational requirements, runtimes and costs associated with different microbial genomics applications. While all 16S analyses completed on a single-CPU desktop in under three hours, microbial genome and metagenome analyses utilized multi-CPU support of up to 120 CPUs on Amazon EC2, where each analysis completed in under 24 hours for less than $60. Representative datasets were used to estimate maximum data throughput on different cluster sizes and to compare costs between EC2 and comparable local grid servers.

Conclusions

Although bioinformatics requirements for microbial genomics depend on dataset characteristics and the analysis protocols applied, our results suggest that smaller sequencing facilities (up to three Roche/454 or one Illumina GAIIx sequencer) invested in 16S rRNA amplicon sequencing, microbial single-genome and metagenomics WGS projects can achieve cost-efficient bioinformatics support using CloVR in combination with Amazon EC2 as an alternative to local computing centers.

17.
Scheduling mixed-parallel applications with advance reservations
This paper investigates the scheduling of mixed-parallel applications, which exhibit both task and data parallelism, in advance reservation settings. Both the problem of minimizing application turn-around time and that of meeting a deadline are studied. For each, several scheduling algorithms are proposed, some of which borrow ideas from previously published work in non-reservation settings. The algorithms are compared in simulation over a wide range of application and reservation scenarios. The main finding is that schedules computed using the previously published CPA algorithm can be adapted to advance reservation settings, notably resulting in low resource consumption and thus high efficiency.
Henri Casanova (Corresponding author)

18.
Task scheduling for large-scale computing systems is a challenging problem. From the users' perspective, the main concern is the performance of the submitted tasks, whereas for cloud service providers, reducing operating cost while providing the required service is critical. It is therefore important for task scheduling mechanisms to balance users' performance requirements and energy efficiency, because energy consumption is one of the major operational costs. We present a time-dependent value of service (VoS) metric to be maximized by the scheduling algorithm, which takes into consideration the arrival time of a task while evaluating the value functions for completing the task at a given time and the task's energy consumption. We consider the variation in value for completing a task at different times, such that the value of energy reduction can change significantly between peak and non-peak periods. To determine the value of a task completion, we use completion time and energy consumption with soft and hard thresholds. We define the VoS of a given workload to be the sum of the values of all tasks executed during a given period of time. Our system model is based on virtual machines, where each task is assigned a resource configuration characterized by a number of homogeneous cores and an amount of memory. To schedule each task submitted to our system, we use an estimated-time-to-compute matrix and an estimated energy consumption matrix, both built from historical data. We design, evaluate, and compare our task scheduling methods, showing that a significant improvement in energy consumption can be achieved with time-of-use-dependent scheduling algorithms. The simulation results show improvements in performance and energy values of up to 49% compared to schedulers that do not consider the value functions. Similarly, our experimental results from running our value-based scheduling on an IBM blade server show up to 82% improvement in performance value, 110% improvement in energy value, and up to 77% improvement in VoS compared to schedulers that do not consider the value functions.
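The soft/hard-threshold value function and the workload-level VoS sum can be sketched as below. This is a minimal illustration, assuming full value before the soft threshold and a linear decay to zero at the hard threshold; the actual function shapes and the separate energy value function are paper-specific.

```python
# Sketch of a completion-time value function with soft and hard thresholds,
# and a VoS that sums per-task values over a workload. Illustrative only.

def completion_value(finish_time, soft, hard, max_value=1.0):
    if finish_time <= soft:
        return max_value                # finished early: full value
    if finish_time >= hard:
        return 0.0                      # missed the hard threshold: no value
    return max_value * (hard - finish_time) / (hard - soft)

def value_of_service(tasks):
    """VoS for a workload: sum of per-task completion values."""
    return sum(completion_value(t["finish"], t["soft"], t["hard"]) for t in tasks)

workload = [
    {"finish": 4.0,  "soft": 5.0, "hard": 10.0},   # full value
    {"finish": 7.5,  "soft": 5.0, "hard": 10.0},   # degraded value
    {"finish": 12.0, "soft": 5.0, "hard": 10.0},   # missed hard threshold
]
print(value_of_service(workload))  # 1.0 + 0.5 + 0.0
```

Making `max_value` depend on the task's arrival period (peak vs. non-peak) recovers the time-of-use dependence described above.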

19.
Maternal syphilis results in an estimated 500,000 stillbirths and neonatal deaths annually in Sub-Saharan Africa. Despite the existence of national guidelines for antenatal syphilis screening, syphilis testing is often limited by inadequate laboratory and staff services. Recent availability of inexpensive rapid point-of-care syphilis tests (RST) can improve access to antenatal syphilis screening. A 2010 pilot in Zambia explored the feasibility of integrating RST within prevention of mother-to-child-transmission of HIV services. Following successful demonstration, the Zambian Ministry of Health adopted RSTs into national policy in 2011. Cost data from the pilot and 2012 preliminary national rollout were extracted from project records, antenatal registers, clinic staff interviews, and facility observations, with the aim of assessing the cost and quality implications of scaling up a successful pilot into a national rollout. Start-up, capital, and recurrent cost inputs were collected, including costs of extensive supervision and quality monitoring during the pilot. Costs were analysed from a provider’s perspective, incremental to existing antenatal services. Total and unit costs were calculated and a multivariate sensitivity analysis was performed. Our accompanying qualitative study by Ansbro et al. (2015) elucidated quality assurance and supervisory system challenges experienced during rollout, which helped explain key cost drivers. The average unit cost per woman screened during rollout ($11.16) was more than triple the pilot unit cost ($3.19). While quality assurance costs were much lower during rollout, the increased unit costs can be attributed to several factors, including higher RST prices and lower RST coverage during rollout, which reduced economies of scale. Pilot and rollout cost drivers differed due to implementation decisions related to training, supervision, and quality assurance. 
This study explored the cost of integrating RST into antenatal care in pilot and national rollout settings, and highlighted important differences in costs that may be observed when moving from pilot to scale-up.

20.
Chitooligosaccharides (COSs) have a widespread range of biological functions and an incredible potential for various pharmaceutical and agricultural applications. Although several physical, chemical, and biological techniques have been reported for COSs production, it is still a challenge to obtain structurally defined COSs with defined degrees of polymerization (DP) and acetylation patterns, which hampers the specific characterization and application of COSs. Herein, we achieved the de novo production of structurally defined COSs using combinatorial pathway engineering in Bacillus subtilis. Specifically, the COSs synthase NodC from Azorhizobium caulinodans was overexpressed in B. subtilis, leading to 30 ± 0.86 mg/L of chitin oligosaccharides (CTOSs), the homo-oligomers of N-acetylglucosamine (GlcNAc) with a well-defined DP lower than 6. Then, introduction of a GlcNAc synthesis module to promote the supply of the sugar acceptor GlcNAc reduced CTOSs production, which suggested that the activity of COSs synthase NodC and the supply of the sugar donor UDP-GlcNAc may be the limiting steps for CTOSs synthesis. Therefore, six exogenous COSs synthase candidates were examined, and nodCM from Mesorhizobium loti yielded the highest CTOSs titer of 560 ± 16 mg/L. Finally, both the de novo pathway and the salvage pathway of UDP-GlcNAc were engineered to further promote the biosynthesis of CTOSs. The titer of CTOSs in a 3-L fed-batch bioreactor reached 4.82 ± 0.11 g/L (85.6% CTOS5, 7.5% CTOS4, 5.3% CTOS3 and 1.6% CTOS2), the highest ever reported. This is the first report proving the feasibility of the de novo production of structurally defined CTOSs by synthetic biology, and it provides a good starting point for further engineering toward commercial production.
