Similar literature
20 similar records found.
1.
Network attacks, such as distributed denial of service (DDoS) and Internet worms, are highly distributed and well coordinated offensive assaults on services, hosts, and the infrastructure of the Internet, and can have disastrous effects including financial losses and disruption of essential services. Consequently, effective defensive countermeasures against these attacks must provide equally sophisticated and well coordinated mechanisms for monitoring, analysis, and response. In this paper, we investigate techniques for cooperative attack detection and countermeasures using decentralized information sharing. The key underlying idea is the use of epidemic algorithms to share attack information and achieve quasi-global knowledge about attack behaviors. This paper first presents a conceptual model that defines the relationships between the level of knowledge in the distributed system and the accuracy of attack detection. The design of a cooperative attack detection and defense framework is then presented, and its use for detecting and defending against DDoS attacks and Internet worms is described. Simulation results are presented to demonstrate the feasibility and effectiveness of the framework against these attacks.
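As an illustration of the epidemic-style information sharing the framework relies on, the following Python sketch shows one push-gossip round in which each node forwards its set of suspected attack signatures to a few randomly chosen peers. The node names, signature strings, and message format are hypothetical; this is not the authors' implementation.

import random

def gossip_round(nodes, fanout=3):
    """One push-gossip round: every node sends its suspected-attack set
    to `fanout` random peers; receivers merge what they learn."""
    updates = []  # (receiver, payload) pairs, delivered after the round
    for node_id, suspicions in nodes.items():
        peers = random.sample([n for n in nodes if n != node_id],
                              min(fanout, len(nodes) - 1))
        for peer in peers:
            updates.append((peer, set(suspicions)))
    for peer, payload in updates:
        nodes[peer] |= payload  # merging yields quasi-global knowledge over rounds
    return nodes

# toy run: node "n0" has locally detected two attack signatures
nodes = {f"n{i}": set() for i in range(8)}
nodes["n0"] = {"ddos:udp-flood", "worm:scan-137"}
for _ in range(4):  # a few rounds spread the knowledge epidemically
    nodes = gossip_round(nodes)
print(sum("ddos:udp-flood" in s for s in nodes.values()), "of", len(nodes), "nodes informed")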

2.
HTTP/2 is the second major version of the HTTP protocol, published by the Internet Engineering Steering Group. The protocol is designed to improve reliability and performance. These enhancements, however, have made the protocol more vulnerable to distributed denial-of-service (DDoS) attacks than its predecessor. Recent observations show that legitimate traffic, or flash crowds, can exhibit high-rate flow characteristics similar to those seen in DDoS attacks. In this paper, we demonstrate that legitimate HTTP/2 flash crowd traffic can be launched to cause denial of service. To the best of our knowledge, no previous study has analysed the effect of both DDoS and flash crowd traffic against HTTP/2 services. The results obtained demonstrate the effect of such attacks when tested under four different protocol-dependent attack scenarios.
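To make the "flash crowds look like DDoS" observation concrete, the sketch below bins request timestamps into fixed windows and computes per-source request counts, the kind of coarse rate feature on which the two traffic classes can overlap. It is a generic illustration, not tied to the paper's testbed or attack scenarios.

from collections import defaultdict

def per_window_counts(requests, window=1.0):
    """requests: iterable of (timestamp_seconds, source_ip).
    Returns {window_index: {source_ip: request_count}}."""
    counts = defaultdict(lambda: defaultdict(int))
    for ts, src in requests:
        counts[int(ts // window)][src] += 1
    return counts

# toy trace: many distinct sources each sending a handful of requests --
# aggregate volume is high whether the cause is a flash crowd or an attack
trace = [(0.01 * i, f"10.0.{i % 50}.{i % 200}") for i in range(2000)]
for w, per_src in sorted(per_window_counts(trace).items()):
    print(f"window {w}: {sum(per_src.values())} requests from {len(per_src)} sources")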

3.
Virtual machine (VM) migration can improve the availability, manageability, performance and fault tolerance of systems. Current migration research mainly focuses on improving efficiency through shared storage, priority-based policies, and similar techniques, but the side effects of migration are rarely considered. In fact, once physical servers are overloaded by a distributed denial-of-service (DDoS) attack, a hasty migration operation is not only unable to alleviate the harm of the attack but can actually amplify it. In this paper, a novel DDoS attack, the Cloud-Droplet-Freezing (CDF) attack, is described based on the characteristics of cloud computing clusters. Our experiments show that such an attack can congest the internal network communication of a cloud server cluster while consuming the resources of physical servers. Based on the analysis of the CDF attack, we present a method for evaluating the potential threats hidden behind normal VM migration and analyze the flaws of existing intrusion detection/prevention systems in defending against the CDF attack.

4.
Cluster Computing - Distributed Denial of Service (DDoS) plays a significant role in threatening the cloud-based services. DDoS is a kind of attack which targets the CPU, bandwidth and other...

5.
In multi-server environments, user authentication is a very important issue because it provides the authorization that enables users to access their data and services; furthermore, remote user authentication schemes for multi-server environments have solved the problem that has arisen from user’s management of different identities and passwords. For this reason, numerous user authentication schemes that are designed for multi-server environments have been proposed over recent years. In 2015, Lu et al. improved upon Mishra et al.’s scheme, claiming that their remote user authentication scheme is more secure and practical; however, we found that Lu et al.’s scheme is still insecure and incorrect. In this paper, we demonstrate that Lu et al.’s scheme is vulnerable to outsider attack and user impersonation attack, and we propose a new biometrics-based scheme for authentication and key agreement that can be used in multi-server environments; then, we show that our proposed scheme is more secure and supports the required security properties.
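As background for the kind of property such schemes must provide, the sketch below shows a generic nonce-based challenge-response exchange using an HMAC over a shared secret. It is purely illustrative: it is neither Lu et al.'s scheme nor the scheme proposed in the paper, and it omits the biometric input and multi-server registration steps entirely.

import hmac, hashlib, os

def server_challenge():
    return os.urandom(16)                           # fresh nonce defeats simple replay

def client_response(shared_key: bytes, user_id: bytes, nonce: bytes) -> bytes:
    return hmac.new(shared_key, user_id + nonce, hashlib.sha256).digest()

def server_verify(shared_key, user_id, nonce, response) -> bool:
    expected = hmac.new(shared_key, user_id + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)  # constant-time comparison

key = os.urandom(32)                                # established at registration
nonce = server_challenge()
resp = client_response(key, b"alice", nonce)
print(server_verify(key, b"alice", nonce, resp))    # True
# An outsider without `key` cannot forge `resp`; resistance to this kind of
# impersonation is exactly what the attacks on Lu et al.'s scheme show to be missing.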

6.
Fog computing is a distributed computing paradigm at the edge of the network that requires cooperation among users and sharing of resources. When users in fog computing open their resources, their devices are easily intercepted and attacked because they are accessed through wireless networks and are geographically widely distributed. In this study, a credible third party is introduced to supervise the behavior of users and protect the security of user cooperation. A fog computing security mechanism based on the human nervous system is proposed, and the strategy for a stable system evolution is calculated. MATLAB simulation results show that the proposed mechanism can effectively reduce the number of attack behaviors and motivate users to cooperate in application tasks.

7.
Restoration of deforested and degraded landscapes is a globally recognized strategy to sequester carbon, improve ecological integrity, conserve biodiversity, and provide additional benefits to human health and well‐being. Investment in riparian forest restoration has received relatively little attention, in part due to the relatively small spatial extent of these forests. Yet, riparian forest restoration may be a particularly valuable strategy because riparian forests have the potential for rapid carbon sequestration, are hotspots of biodiversity, and provide numerous valuable ecosystem services. To inform this strategy, we conducted a global synthesis and meta‐analysis to identify general patterns of carbon stock accumulation in riparian forests. We compiled riparian biomass and soil carbon stock data from 117 publications, reports, and unpublished data sets. We then modeled the change in carbon stock as a function of vegetation age, considering effects of climate and whether or not the riparian forest had been actively planted. On average, our models predicted that the establishment of riparian forest will more than triple the baseline, unforested soil carbon stock, and that riparian forests hold on average 68–158 Mg C/ha in biomass at maturity, with the highest values in relatively warm and wet climates. We also found that actively planting riparian forest substantially jump‐starts the biomass carbon accumulation, with initial growth rates more than double those of naturally regenerating riparian forest. Our results demonstrate that carbon sequestration should be considered a strong co‐benefit of riparian restoration, and that increasing the pace and scale of riparian forest restoration may be a valuable investment providing both immediate carbon sequestration value and long‐term ecosystem service returns.
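One common way to express the kind of age-dependent carbon accumulation described above is a saturating growth curve; the form below is only an illustrative example, and the specific functional form and covariate structure used in the synthesis are not reproduced here.

\[
C(t) = C_{\max}\left(1 - e^{-kt}\right)^{p}
\]

where $C(t)$ is the biomass carbon stock (Mg C/ha) at stand age $t$, $C_{\max}$ is the asymptotic stock at maturity, and $k$ and $p$ control the accumulation rate; climate and planting status can then enter as modifiers of $C_{\max}$ and $k$.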

8.
The rapid growth of published cloud services on the Internet makes service selection and recommendation a challenging task for both users and service providers. In cloud environments, software services collaborate with other complementary services to provide complete solutions to end users. Service selection is performed based on QoS requirements submitted by end users. Software providers alone cannot guarantee users’ QoS requirements; these requirements must be end-to-end, representing all collaborating services in a cloud solution. In this paper, we propose a prediction model to compute end-to-end QoS values for vertically composed services which are composed of three types of cloud services: software (SaaS), infrastructure (IaaS) and data (DaaS) services. These values can be used during the service selection and recommendation process. Our model exploits historical QoS values and cloud service and user information to predict unknown end-to-end QoS values of composite services. The experiments demonstrate that our proposed model outperforms other prediction models in terms of prediction accuracy. We also study the impact of different parameters on the prediction results. In the experiments, we used real cloud services’ QoS data collected using our own QoS monitoring and collection system.
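To illustrate what "end-to-end" means for a vertical composition, the sketch below combines per-layer QoS values using the usual sequential-composition rules (response times add, availabilities multiply). The layer names and numbers are made up; the paper's contribution is predicting the unknown per-layer values from historical data, which this toy aggregation does not attempt.

def end_to_end_qos(layers):
    """layers: list of dicts with 'response_time' (ms) and 'availability' (0..1)
    for each service in a vertical composition, e.g. SaaS, IaaS, DaaS."""
    total_rt = sum(s["response_time"] for s in layers)   # latencies add up
    total_avail = 1.0
    for s in layers:
        total_avail *= s["availability"]                 # every layer must be up
    return {"response_time": total_rt, "availability": total_avail}

composition = [
    {"name": "SaaS", "response_time": 120.0, "availability": 0.999},
    {"name": "IaaS", "response_time": 35.0,  "availability": 0.9995},
    {"name": "DaaS", "response_time": 60.0,  "availability": 0.998},
]
print(end_to_end_qos(composition))   # ~215 ms end-to-end, availability ~0.9965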

9.
Li Jingxian, Wang Jun. Acta Ecologica Sinica, 2019, 39(17): 6393-6403
The identification, classification and mapping of coastal ecosystem services are an important basis for the rational use of coastal natural resources and for reconciling the conflict between coastal development and protection. Existing ecosystem service classification methods have certain limitations when applied to coastal zones. Building on previous research, this study took the highly urbanized and industrialized Guangdong-Hong Kong-Macao Greater Bay Area of China as the study area, identified and classified the coastal ecosystem services of the region, and then mapped them using map big data and land-use data interpreted from remote sensing. A total of 35 coastal ecosystem services were identified, of which 31 were mapped. The results show that the established method can systematically present the types and spatial distribution characteristics of ecosystem services in the Greater Bay Area. Specifically, provisioning and cultural services are concentrated in the urban centers, whereas regulating services are mostly distributed around the urban periphery. By overlaying the identified ecosystem services, the study area can be divided into zones dominated by cultural services, provisioning services, and regulating services, respectively. The proposed identification and classification system and the mapping method for coastal ecosystem services are highly operational and can provide a scientific basis for the conservation, restoration and reconstruction of China's coastal ecosystems.

10.
Lalropuia K. C., Khaitan Vandana. Cluster Computing, 2021, 24(3): 2177-2191
Cluster Computing - Economic denial of sustainability (EDoS) attack is a new type of distributed denial of service (DDoS) attack which targets the economic resources of cloud adopters by exploiting...

11.
Many production landscapes are complex human-environment systems operating at various spatio-temporal scales and provide a variety of ecosystem goods and services (EGS) vital to human well-being. EGS change over space and time as a result of changing patterns of land use or changes in the composition and structure of different vegetation types. Spatio-temporal assessment of EGS can provide valuable information on the consequences of changing land use and land cover for EGS and helps to deal with this complexity. We carried out a quantitative and qualitative appraisal of selected EGS (timber production, carbon stock, provision of water, water regulation, biodiversity, and forage production) to understand how these have altered in a complex mosaic of landscape that has undergone significant change over the past 200 years. Land use and land cover types and their associated EGS were assessed and mapped using a wide range of readily available data and tools. We also evaluated the trade-offs among services associated with observed land use change. In contrast to work elsewhere, we found that the recent changes in land use and land cover have had an overall positive impact on various EGS, due mainly to the conversion of pasture to managed plantations which are connected to the larger areas of remnant vegetation. Results also indicate that there was a high level of variation in the distribution of the EGS across the landscape. Relatively intact native vegetation provides mainly regulating services, whereas the modified landscapes provide provisioning services such as timber and forage production at the cost of regulating services. Rapidly changing demand and supply of certain goods and services (e.g., timber, pulp or carbon) may also have positive and negative impacts on other services. For example, increasing plantation rotation has positive impacts on biodiversity and carbon stock but reduces stream flow and water yield.

12.
In order to provide better fisheries management and conservation decisions, there is a need to discern the underlying relationship between the spawning stock and recruitment of marine fishes, a relationship which is influenced by the environmental conditions. Here, we demonstrate how the environmental conditions (temperature and the food availability for fish larvae) influence the stock–recruitment relationship and indeed what kind of stock–recruitment relationship we might see under different environmental conditions. Using unique zooplankton data from the Continuous Plankton Recorder, we find that food availability (i.e. zooplankton) in essence determines which model applies for the once large North Sea cod (Gadus morhua) stock. Further, we show that recruitment is strengthened during cold years and weakened during warm years. Our combined model explained 45 per cent of the total variance in cod recruitment, while the traditional Ricker and Beverton–Holt models only explained about 10 per cent. Specifically, our approach predicts that a full recovery of the North Sea cod stock might not be expected until the environment becomes more favourable.
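For reference, the two classical stock-recruitment forms mentioned above are, in their standard parameterisations, with $R$ recruitment and $S$ spawning stock biomass (the environmental covariates of the paper's combined model act through these parameters; the exact covariate formulation is not reproduced here):

\[
\text{Ricker: } R = \alpha S\, e^{-\beta S}, \qquad
\text{Beverton–Holt: } R = \frac{\alpha S}{1 + \beta S}.
\]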

13.
Anonymity protocols are employed to establish encrypted tunnels to protect the privacy of Internet users from traffic analysis attacks. However, attackers strive to infer characteristics of traffic patterns (e.g. packet directions, packet sizes, inter-packet timing, etc.) in order to expose the identities of Internet users and their activities. A recent and popular traffic analysis attack is website fingerprinting, which reveals the identity of websites visited by target users. Existing work in the literature studied the website fingerprinting attack using a single web browser, namely Firefox. In this paper we propose a unified traffic analysis attack model composed of a sequence of phases that demonstrates the efficiency of the website fingerprinting attack using popular web browsers under Tor (The Onion Router). In addition, we reveal the main factors that affect the accuracy of the website fingerprinting attack over the Tor anonymity system and across different browsers. To the best of our knowledge, no previous study has uncovered such factors by deploying a real-world traffic analysis attack utilizing the top five web browsers. The outcomes of the research are very relevant to Internet users (individuals/companies/governments) since they make it possible to assess the extent to which their privacy is preserved in the presence of traffic analysis attacks, in particular website fingerprinting over different browsers. A recommendation for a future research direction regarding the investigation of website fingerprinting over different scenarios is also provided.
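A minimal sketch of the kind of side-channel features such attacks exploit (packet directions, sizes, and inter-packet timing) is shown below; it is a generic illustration, not the attack pipeline evaluated in the paper, and the example trace is made up.

def trace_features(packets):
    """packets: list of (timestamp_seconds, size_bytes, direction)
    where direction is +1 (outgoing) or -1 (incoming).
    Returns a small feature vector usable by an off-the-shelf classifier."""
    sizes = [s for _, s, _ in packets]
    out = sum(1 for _, _, d in packets if d > 0)
    gaps = [t2 - t1 for (t1, _, _), (t2, _, _) in zip(packets, packets[1:])]
    return {
        "n_packets": len(packets),
        "frac_outgoing": out / len(packets),
        "total_bytes": sum(sizes),
        "mean_size": sum(sizes) / len(sizes),
        "mean_gap": sum(gaps) / len(gaps) if gaps else 0.0,
    }

trace = [(0.00, 512, +1), (0.05, 1460, -1), (0.06, 1460, -1), (0.30, 128, +1)]
print(trace_features(trace))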

14.
Autonomous recovery in componentized Internet applications
In this paper we show how to reduce downtime of J2EE applications by rapidly and automatically recovering from transient and intermittent software failures, without requiring application modifications. Our prototype combines three application-agnostic techniques: macroanalysis for fault detection and localization, microrebooting for rapid recovery, and external management of recovery actions. The individual techniques are autonomous and work across a wide range of componentized Internet applications, making them well-suited to the rapidly changing software of Internet services. The proposed framework has been integrated with JBoss, an open-source J2EE application server. Our prototype provides an execution platform that can automatically recover J2EE applications within seconds of the manifestation of a fault. Our system can provide a subset of a system's active end users with the illusion of continuous uptime, in spite of failures occurring behind the scenes, even when there is no functional redundancy in the system.
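The sketch below illustrates the control loop implied by the three techniques: detect and localize a faulty component, microreboot it, and escalate only if that fails. The component names and the monitoring/reboot hooks are hypothetical placeholders, not the JBoss integration described in the paper.

import time

def detect_faulty_components(metrics):
    """Placeholder for macroanalysis: flag components with anomalous error rates."""
    return [c for c, m in metrics.items() if m["error_rate"] > 0.2]

def microreboot(component):
    print(f"microrebooting {component}")       # restart one component, not the whole server

def recovery_manager(poll_metrics, polls=5, interval=0.1, max_attempts=2):
    attempts = {}
    for _ in range(polls):
        for comp in detect_faulty_components(poll_metrics()):
            attempts[comp] = attempts.get(comp, 0) + 1
            if attempts[comp] <= max_attempts:
                microreboot(comp)               # cheap, localized recovery first
            else:
                print(f"escalating: {comp} needs a full restart")
        time.sleep(interval)

state = {"poll": 0}
def fake_metrics():                             # hypothetical component names and metrics
    state["poll"] += 1
    failing = state["poll"] <= 2
    return {"CartEJB": {"error_rate": 0.5 if failing else 0.0},
            "CatalogEJB": {"error_rate": 0.01}}

recovery_manager(fake_metrics)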

15.
Stock enhancement as a fisheries management tool
Stock enhancement has been viewed as a positive fisheries management tool for over 100 years. However, decisions to undertake such activities in the past have often been technology-based, i.e., driven by the ability to produce fishes, with most stock enhancement projects having limited or no demonstrated success. This has largely been due to an inability to identify and/or control the underlying reasons why a fishery is under-performing or not meeting management objectives. Further, stock enhancement has often been applied in isolation from other fisheries management tools (e.g., effort control). To address these issues and consider stock enhancement in a broader ecosystem perspective, a new approach for stock enhancement is proposed. The proposed model comprises four major steps: a review of all information about an ecosystem/fishery/stock and the setting of clear management targets; a comparison of all relevant fisheries management tools with the potential to meet the management targets; the instigation of a scientifically based, pilot-scale, stock enhancement program with clear objectives, targets, and evaluations; and a full-scale stock enhancement program if the pilot project meets the objectives. The model uses a flow-chart that highlights a broad range of scientific and other information, and the decisions that need to be made in relation to stock enhancement and fisheries management in general. In this way all steps are transparent and all stakeholders (managers, scientists, extractive and non-extractive users, and the general public) can contribute to the information collection and decision making processes. If stock enhancement is subsequently identified as the most appropriate tool, then the stepwise progression will provide the best possible chance of a positive outcome for a stock enhancement project, while minimizing risks and costs. In this way, stock enhancement may advance as a science and develop as a useful fisheries management tool in appropriate situations.

16.

Background  

Understanding protein function from its structure is a challenging problem. Sequence-based approaches for finding homology are widely used for annotating both structure and function. Three-dimensional structural information about protein domains and their interactions provides a view of structure-function relationships that is complementary to sequence information. We have developed a web site and an API of web services that enable users to submit protein structures and identify statistically significant neighbors, together with the underlying structural environments that produce the match, using a suite of sequence and structure analysis tools. To do this, we have integrated S-BLEST, PSI-BLAST and HMMer based superfamily predictions to give an integrated view of the prediction of SCOP superfamilies, EC numbers, and GO terms, as well as identification of the protein structural environments associated with each prediction. Additionally, we have extended UCSF Chimera and PyMOL to support our web services, so that users can characterize their own proteins of interest.

17.
Metagenomic sequencing has produced significant amounts of data in recent years. For example, as of summer 2013, MG-RAST has been used to annotate over 110,000 data sets totaling over 43 Terabases. With metagenomic sequencing finding even wider adoption in the scientific community, the existing web-based analysis tools and infrastructure in MG-RAST provide limited capability for data retrieval and analysis, such as comparative analysis between multiple data sets. Moreover, although the system provides many analysis tools, it is not comprehensive. By opening MG-RAST up via a web services API (application programming interface) we have greatly expanded access to MG-RAST data, as well as provided a mechanism for the use of third-party analysis tools with MG-RAST data. This RESTful API makes all data and data objects created by the MG-RAST pipeline accessible as JSON objects. As part of the DOE Systems Biology Knowledgebase project (KBase, http://kbase.us) we have implemented a web services API for MG-RAST. This API complements the existing MG-RAST web interface and constitutes the basis of KBase's microbial community capabilities. In addition, the API exposes a comprehensive collection of data to programmers. This API, which uses a RESTful (Representational State Transfer) implementation, is compatible with most programming environments and should be easy to use for end users and third parties. It provides comprehensive access to sequence data, quality control results, annotations, and many other data types. Where feasible, we have used standards to expose data and metadata. Code examples are provided in a number of languages both to show the versatility of the API and to provide a starting point for users. We present an API that exposes the data in MG-RAST for consumption by our users, greatly enhancing the utility of the MG-RAST service.
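As a rough illustration of consuming a RESTful, JSON-returning API of this kind, the sketch below issues a single GET request. The base URL, resource path, query parameter, and accession shown are indicative placeholders and should be checked against the current MG-RAST API documentation before use.

import json
import urllib.request

BASE = "https://api.mg-rast.org/1"          # assumed base URL; verify against the docs
metagenome_id = "mgm4440026.3"              # placeholder accession

# The pipeline exposes data objects as JSON, so a plain HTTP GET plus JSON parsing suffices.
with urllib.request.urlopen(f"{BASE}/metagenome/{metagenome_id}?verbosity=minimal") as resp:
    record = json.load(resp)

print(record.get("id"), record.get("name"))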

18.
Plant performance is influenced by both top-down (e.g., herbivores) and bottom-up (e.g., soil nutrients) controls. Research investigating the collective effects of such factors may provide important insight into the success and management of invasive plants. Through a combination of observational and experimental field studies, we examined top-down and bottom-up effects on the growth and reproduction of an invasive plant, Linaria dalmatica. First, we assessed attack levels and impacts of an introduced biocontrol agent, the stem-mining weevil Mecinus janthinus, on L. dalmatica plants across multiple years and sites. Then, we conducted a manipulative experiment to examine the effects of weevil attack, soil nitrogen availability, and interspecific competition on L. dalmatica. We found substantial variation in weevil attack within populations as well as across sites and years. Observational and experimental data showed that increased weevil attack was associated with a reduction in plant biomass and seed production, but only at the highest levels of attack. Nitrogen addition had a strong positive effect on plant performance, with a two-fold increase in biomass and seed production. Clipping neighboring vegetation resulted in no significant effects on L. dalmatica performance, suggesting that plants remained resource limited or continued to experience belowground competitive effects. Overall, our research indicates that M. janthinus can exert top-down effects on L. dalmatica; however, weevil densities and attack rates observed in this study have not reached sufficient levels to yield effective control. Moreover, bottom-up controls, in particular soil nitrogen availability, may have a large influence on the success and spread of this invasive plant.

19.
High performance and distributed computing systems such as peta-scale, grid and cloud infrastructure are increasingly used for running scientific models and business services. These systems experience large availability variations through hardware and software failures. Resource providers need to account for these variations while providing the required QoS at appropriate costs in dynamic resource and application environments. Although the performance and reliability of these systems have been studied separately, there has been little analysis of the lost Quality of Service (QoS) experienced with varying availability levels. In this paper, we present a resource performability model to estimate lost performance and corresponding cost considerations with varying availability levels. We use the resulting model in a multi-phase planning approach for scheduling a set of deadline-sensitive meteorological workflows atop grid and cloud resources to trade-off performance, reliability and cost. We use simulation results driven by failure data collected over the lifetime of high performance systems to demonstrate how the proposed scheme better accounts for resource availability.
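A minimal sketch of the performability idea (expected delivered performance weighted by the probability of each availability state) is shown below; the state probabilities and reward values are illustrative only, not the paper's model or its failure data.

def expected_performance(states):
    """states: list of (probability, delivered_fraction_of_nominal_throughput)."""
    return sum(p * r for p, r in states)

# illustrative availability states for a cluster: all nodes up, one node down, full outage
states = [(0.92, 1.00), (0.07, 0.75), (0.01, 0.00)]
delivered = expected_performance(states)
lost = 1.0 - delivered
print(f"expected delivered performance: {delivered:.3f}, lost QoS: {lost:.3f}")
# the lost fraction can then be priced to trade off cost against reliability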

20.
As applications of mobile and ubiquitous technologies become more widespread, the communication security of those applications is emerging as a critical concern, and various techniques and system applications for individual security elements are being actively studied. In this paper, we propose a new technique that uses voice features to generate mobile one-time passwords (OTPs), producing safe, variable, single-use passwords from biometric voice information, which can optionally be used for strong personal authentication. We also analyze the suitability of the proposed password generation method by examining the homomorphic variability of voice feature points using a dendrogram and the distribution of skip-sampled voice feature points from 15 users. Finally, we describe application cases of the proposed mobile OTP based on skip sampling of the voice signal.
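For illustration only, the sketch below derives a one-time password by combining a quantized feature vector (standing in for the skip-sampled voice features) with a time counter through an HMAC, in the spirit of HOTP/TOTP. It is a generic construction with made-up feature values and key, not the scheme proposed in the paper.

import hmac, hashlib, struct, time

def otp_from_features(features, secret: bytes, step=30, digits=6):
    """features: sequence of floats (e.g. skip-sampled voice feature points).
    Quantize the features, mix in a time counter, and truncate an HMAC to `digits`."""
    quantized = b"".join(struct.pack(">h", int(round(f * 100))) for f in features)
    counter = struct.pack(">Q", int(time.time() // step))
    digest = hmac.new(secret, quantized + counter, hashlib.sha256).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation as in HOTP
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

voice_features = [0.12, -0.87, 1.05, 0.33, -0.41]    # placeholder feature points
print(otp_from_features(voice_features, secret=b"device-enrollment-key"))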
