Similar Articles
 20 similar articles found
1.
High-performance and distributed computing systems such as peta-scale, grid and cloud infrastructures are increasingly used for running scientific models and business services. These systems experience large availability variations through hardware and software failures. Resource providers need to account for these variations while providing the required Quality of Service (QoS) at appropriate cost in dynamic resource and application environments. Although the performance and reliability of these systems have been studied separately, there has been little analysis of the QoS lost at varying availability levels. In this paper, we present a resource performability model to estimate lost performance and the corresponding cost at varying availability levels. We use the resulting model in a multi-phase planning approach for scheduling a set of deadline-sensitive meteorological workflows atop grid and cloud resources, trading off performance, reliability and cost. We use simulation results driven by failure data collected over the lifetime of high-performance systems to demonstrate how the proposed scheme better accounts for resource availability.
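A minimal sketch of the kind of trade-off such a performability model enables, assuming throughput is discounted by availability and each resource class has a per-hour price; the resource names, figures and the linear discount are illustrative assumptions, not the paper's actual model.

```python
# Illustrative sketch: choose the cheapest resource class that still meets a
# workflow deadline once expected downtime (lost performance) is accounted for.
# The availability/price figures and the linear discount are assumptions.

resources = [
    # name, tasks per hour (nominal), availability, price per hour
    {"name": "grid",          "rate": 120.0, "availability": 0.90,  "price": 0.00},
    {"name": "cloud-std",     "rate": 100.0, "availability": 0.99,  "price": 0.34},
    {"name": "cloud-premium", "rate": 100.0, "availability": 0.999, "price": 0.68},
]

def plan(workflow_tasks: int, deadline_hours: float):
    feasible = []
    for r in resources:
        effective_rate = r["rate"] * r["availability"]  # performability-discounted throughput
        hours_needed = workflow_tasks / effective_rate
        if hours_needed <= deadline_hours:
            feasible.append((hours_needed * r["price"], hours_needed, r["name"]))
    return min(feasible, default=None)                   # cheapest feasible choice, or None

print(plan(workflow_tasks=1000, deadline_hours=12))
```

With a loose deadline the cheap but less available grid resource wins; tightening the deadline pushes the plan toward the more expensive, more available cloud classes.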

2.
The demand for cloud computing is increasing dramatically due to the high computational requirements of business, social, web and scientific applications. Nowadays, applications and services are hosted on the cloud in order to reduce the costs of hardware, software and maintenance. To satisfy this high demand, the number of large-scale data centers has increased, which consumes a high volume of electrical power, has a negative impact on the environment, and comes with high operational costs. In this paper, we discuss many ongoing or implemented energy aware resource allocation techniques for cloud environments. We also present a comprehensive review on the different energy aware resource allocation and selection algorithms for virtual machines in the cloud. Finally, we come up with further research issues and challenges for future cloud environments.  相似文献   
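As a concrete illustration of one heuristic family covered by such surveys, the sketch below places each VM on the host with the smallest estimated power increase, which naturally favours consolidation onto already-active machines; the linear power model, capacities and all figures are assumptions for illustration only.

```python
# Illustrative energy-aware VM placement: best fit by estimated power increase.
# Assumed linear power model: P = P_idle + (P_max - P_idle) * utilisation.

P_IDLE, P_MAX, HOST_CAPACITY = 170.0, 250.0, 32  # watts, watts, CPU units (assumed)

def power(used_cpu):
    return P_IDLE + (P_MAX - P_IDLE) * (used_cpu / HOST_CAPACITY)

def place(vms, hosts):
    """vms: list of CPU demands; hosts: list of current CPU loads (mutated in place)."""
    for demand in sorted(vms, reverse=True):
        best, best_delta = None, float("inf")
        for i, load in enumerate(hosts):
            if load + demand <= HOST_CAPACITY:
                # An idle host contributes its full idle power to the delta,
                # so packing onto active hosts is preferred.
                delta = power(load + demand) - (power(load) if load > 0 else 0.0)
                if delta < best_delta:
                    best, best_delta = i, delta
        if best is None:
            hosts.append(demand)   # nothing fits: switch on a new host
        else:
            hosts[best] += demand
    return hosts

print(place(vms=[8, 4, 12, 6, 10], hosts=[0]))
```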

3.

Background  

Complex biological database systems have become key computational tools used daily by scientists and researchers. Many of these systems must be capable of executing on multiple hardware and software configurations and are often made available to users via the Internet. We have used the Java Data Objects (JDO) persistence technology to develop the database layer of one such system, the SigPath information management system. SigPath is an example of a complex biological database that needs to store various types of information connected by many relationships.

4.
Development and characterization of ordered clone collections from human chromosome-specific DNA libraries is proceeding as part of a larger effort to construct a physical map of the entire human genome. The robotics and automation section at Los Alamos has focused on developing the hardware and software tools required to support this objective. These tools are typically integrated systems that combine an intuitive user interface, a database, and the relevant hardware technologies. To date, we have developed a system to automatically grid clones onto nylon filters in high-density arrays. We have also developed a software tool for scoring hybridization autoradiographs that combines image analysis, a database, and a user interface.

5.
Cluster Computing - Cloud computing refers to a set of hardware and software combined to deliver various computing services. The cloud hosts the services for delivery of...

6.
The emergence of ad-hoc pervasive connectivity for devices, based on Bluetooth-like systems, provides a new way to create applications for mobile systems. We seek to realize ubiquitous computing systems based on the cooperation of autonomous, dynamic and adaptive components (hardware as well as software) located in the vicinity of one another. In this paper we present this vision. We also describe a prototype system we have developed that implements parts of this vision – in particular, a system that combines agent-oriented and service-oriented approaches and provides dynamic service discovery. We point out why existing systems such as Jini are not suited for this task, and how our system improves on them.

7.
Plants are important sources of food, and plant products are essential to modern human life. Plants are increasingly gaining importance as drug and fuel resources, bioremediation tools and tools for recombinant technology. Considering these applications, database infrastructure for plant model systems deserves much more attention. The study of plant biological pathways, the interconnections between these pathways, and plant systems biology as a whole has generally lagged behind human systems biology. In this article we review plant pathway databases and the resources that are currently available. We lay out trends and challenges in the ongoing efforts to integrate plant pathway databases and the applications of database integration. We also discuss how progress in non-plant communities can serve as an example for improving the plant pathway database landscape and thereby enable quantitative modeling of plant biosystems. We propose Good Database Practice as a possible model for collaboration and to ease future integration efforts.

8.
Recently much effort has been spent on providing a shared address space abstraction on clusters of small-scale symmetric multiprocessors. However, advances in technology will soon make it possible to construct these clusters with larger-scale cc-NUMA nodes, connected with non-coherent networks that offer latencies and bandwidth comparable to interconnection networks used in hardware cache-coherent systems. The shared memory abstraction can be provided on these systems in software across nodes and hardware within nodes. Recent simulation results have demonstrated that certain features of modern system area networks can be used to greatly reduce shared virtual memory (SVM) overheads [5,19]. In this work we leverage these results and use detailed system emulation to investigate building future software shared memory clusters. We use an existing, large-scale hardware cache-coherent system with 64 processors to emulate a complete future cluster. We port our existing infrastructure (communication layer and shared memory protocol) to this system and study the behavior of a set of real applications. We present results for both 32- and 64-processor system configurations. We find that: (i) System emulation is invaluable in quantifying potential benefits from changes in the technology of commodity components. More importantly, it reveals potential problems in future systems that are easily overlooked in simulation studies. Thus, system emulation should be used along with other modeling techniques (e.g., simulation, implementation) to investigate future trends. (ii) Our work shows that current SVM protocols can only partially take advantage of faster interconnects and wider nodes due to operating system and architectural implications. We quantify the related issues and identify the areas where more research is required for future SVM clusters.

9.
Neuromorphic hardware is the term used to describe full custom-designed integrated circuits, or silicon 'chips', that are the product of neuromorphic engineering--a methodology for the synthesis of biologically inspired elements and systems, such as individual neurons, retinae, cochleas, oculomotor systems and central pattern generators. We focus on the implementation of neurons and networks of neurons, designed to illuminate structure-function relationships. Neuromorphic hardware can be constructed with either digital or analogue circuitry or with mixed-signal circuitry--a hybrid of the two. Currently, most examples of this type of hardware are constructed using analogue circuits, in complementary metal-oxide-semiconductor technology. The correspondence between these circuits and neurons, or networks of neurons, can exist at a number of levels. At the lowest level, this correspondence is between membrane ion channels and field-effect transistors. At higher levels, the correspondence is between whole conductances and firing behaviour, and filters and amplifiers, devices found in conventional integrated circuit design. Similarly, neuromorphic engineers can choose to design Hodgkin-Huxley model neurons, or reduced models, such as integrate-and-fire neurons. In addition to the choice of level, there is also choice within the design technique itself; for example, resistive and capacitive properties of the neuronal membrane can be constructed with extrinsic devices, or using the intrinsic properties of the materials from which the transistors themselves are composed. So, silicon neurons can be built, with dendritic, somatic and axonal structures, and endowed with ionic, synaptic and morphological properties. Examples of the structure-function relationships already explored using neuromorphic hardware include correlation detection and direction selectivity.

Establishing a database for this hardware is valuable for two reasons: first, independently of neuroscientific motivations, the field of neuromorphic engineering would benefit greatly from a resource in which circuit designs could be stored in a form appropriate for reuse and re-fabrication. Analogue designers would benefit particularly from such a database, as there are no equivalents to the algorithmic design methods available to designers of digital circuits. Second, and more importantly for the purpose of this theme issue, is the possibility of a database of silicon neuron designs replicating specific neuronal types and morphologies. In the future, it may be possible to use an automated process to translate morphometric data directly into circuit design compatible formats. The question that needs to be addressed is: what could a neuromorphic hardware database contribute to the wider neuroscientific community that a conventional database could not? One answer is that neuromorphic hardware is expected to provide analogue sensory-motor systems for interfacing the computational power of symbolic, digital systems with the external, analogue environment. It is also expected to contribute to ongoing work in neural-silicon interfaces and prosthetics. Finally, there is a possibility that the use of evolving circuits, using reconfigurable hardware and genetic algorithms, will create an explosion in the number of designs available to the neuroscience community. All this creates the need for a database to be established, and it would be advantageous to set about this while the field is relatively young.

This paper outlines a framework for the construction of a neuromorphic hardware database, for use in the biological exploration of structure-function relationships.

10.
11.
With the development of smartphones and artificial intelligence, plant identification software delivered as mobile apps has gradually entered many aspects of public life, science outreach and scientific research. Identification accuracy is the key factor determining the practical value and user experience of a plant identification app. There are currently many plant identification apps on the Chinese application market; they differ in development goals and intended scope, as well as in focus, database sources, algorithms and hardware requirements. These apps mean different things to different user groups: for researchers, an app with strong identification capability is a major efficiency tool, while for plant enthusiasts an app with reasonable accuracy can serve as an entry-level aid. It is therefore particularly important to analyse and evaluate the identification capability of each app. In this study, eight commonly used apps were each tested on 400 accurately identified plant photographs, with 100 photographs from each of four regions: arid/semi-arid, temperate, tropical and subtropical. The photographs covered 340 species in 164 genera and 122 families, spanning five growth forms (trees, shrubs, herbs, herbaceous vines and woody vines) and including 23 nationally protected plant species. Correct identification at the species, genus and family level scored 4, 2 and 1 points respectively, and the apps were ranked by total score. From highest to lowest accuracy score, the ranking was 花帮主, 百度识图, 花伴侣, 形色, 花卉识别, 植物识别, 发现识花, 微软识花.
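A minimal sketch of the scoring rule used in the evaluation (4 points for a correct species, 2 for a correct genus, 1 for a correct family, summed over all test images); the data structures and the example app results are illustrative placeholders.

```python
# Scoring rule from the study: species = 4, genus = 2, family = 1, otherwise 0.

def score_one(truth, prediction):
    """truth/prediction: dicts with 'species', 'genus' and 'family' keys."""
    if prediction["species"] == truth["species"]:
        return 4
    if prediction["genus"] == truth["genus"]:
        return 2
    if prediction["family"] == truth["family"]:
        return 1
    return 0

def rank_apps(ground_truth, app_predictions):
    """ground_truth: list of truth dicts; app_predictions: {app_name: list of prediction dicts}."""
    totals = {
        app: sum(score_one(t, p) for t, p in zip(ground_truth, preds))
        for app, preds in app_predictions.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```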

12.
The emergence of cloud computing has made it an attractive solution for large-scale data processing and storage applications. Cloud infrastructures provide users remote access to powerful computing capacity, large storage space and high network bandwidth for deploying a variety of applications. With the support of cloud computing, many large-scale applications have been migrated to cloud infrastructures instead of running on in-house local servers. Among these applications, continuous write applications (CWAs), such as online surveillance systems, can benefit significantly from the flexibility and advantages of cloud computing. However, given characteristics such as continuous data writing and processing and a high demand for data availability, cloud service providers prefer sophisticated models for provisioning resources that meet CWAs' demands while minimizing the operational cost of the infrastructure. In this paper, we present a novel architecture of multiple cloud service providers (CSPs), commonly referred to as a Cloud-of-Clouds. Based on this architecture, we propose two operational-cost-aware algorithms for provisioning cloud resources for CWAs, the neighboring optimal resource provisioning algorithm (NORPA) and the global optimal resource provisioning algorithm (GORPA), in order to minimize the operational cost and thereby maximize the revenue of CSPs. We validate the proposed algorithms through comprehensive simulations. The two proposed algorithms are compared against each other to assess their effectiveness, and against a commonly used and practically viable round-robin approach. The results demonstrate that NORPA and GORPA outperform the conventional round-robin algorithm by reducing the operational cost by up to 28% and 57%, respectively. The low complexity of the proposed cost-aware algorithms allows us to apply them to realistic Cloud-of-Clouds environments in industry as well as academia.
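The abstract contrasts cost-aware provisioning with round-robin; the sketch below is an illustrative comparison only (a greedy cheapest-feasible-provider choice versus round-robin), not a reproduction of NORPA or GORPA, and the provider names, prices and capacities are assumptions.

```python
from itertools import cycle

# Assumed per-GB-hour storage prices and free capacity for three CSPs.
providers = {"csp-a": {"price": 0.05, "free_gb": 500},
             "csp-b": {"price": 0.03, "free_gb": 200},
             "csp-c": {"price": 0.08, "free_gb": 800}}

def cost_aware(requests_gb):
    """Greedy: route each request to the cheapest provider with spare capacity."""
    free = {name: p["free_gb"] for name, p in providers.items()}
    total = 0.0
    for gb in requests_gb:
        name = min((n for n in providers if free[n] >= gb),
                   key=lambda n: providers[n]["price"])
        free[name] -= gb
        total += gb * providers[name]["price"]
    return total

def round_robin(requests_gb):
    """Baseline: rotate through providers, ignoring price (and capacity, for simplicity)."""
    total, ring = 0.0, cycle(providers)
    for gb in requests_gb:
        total += gb * providers[next(ring)]["price"]
    return total

demand = [50] * 10
print(cost_aware(demand), round_robin(demand))   # the cost-aware plan is cheaper here
```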

13.
The complexity and diversity of manufacturing software and the need to adapt this software to the frequent changes in the production requirements necessitate the use of a systematic approach to developing this software. The software life-cycle model (Royce, 1970) that consists of specifying the requirements of a software system, designing, implementing, testing, and evolving this software can be followed when developing large portions of manufacturing software. However, the presence of hardware devices in these systems and the high costs of acquiring and operating hardware devices further complicate the manufacturing software development process and require that the functionality of this software be extended to incorporate simulation and prototyping. This paper reviews recent methods for planning, scheduling, simulating, and monitoring the operation of manufacturing systems. A synopsis of the approaches to designing and implementing the real-time control software of these systems is presented. It is concluded that current methodologies support, in a very restricted sense, these planning, scheduling, and monitoring activities, and that enhanced performance can be achieved via an integrated approach.

14.
Progress in analytical ultracentrifugation (AUC) has been hindered by obstructions to hardware innovation and by software incompatibility. In this paper, we announce and outline the Open AUC Project. The goals of the Open AUC Project are to stimulate AUC innovation by improving instrumentation, detectors, acquisition and analysis software, and collaborative tools. These improvements are needed for the next generation of AUC-based research. The Open AUC Project combines ongoing work from several different groups. A new base instrument is described, one that is designed from the ground up to be an analytical ultracentrifuge. This machine offers an open architecture, hardware standards, and application programming interfaces for detector developers. All software will use the GNU General Public License to assure that intellectual property is available in open-source format. The Open AUC strategy facilitates collaborations, encourages sharing, and eliminates the chronic impediments that have plagued AUC innovation for the last 20 years. This ultracentrifuge will be equipped with multiple and interchangeable optical tracks so that state-of-the-art electronics and improved detectors will be available for a variety of optical systems. The instrument will be complemented by a new rotor, enhanced data acquisition and analysis software, as well as collaboration software. Described here are the instrument, the modular software components, and a standardized database that will encourage and ease integration of data analysis and interpretation software.

15.
Cloud computing, an on-demand computation model consisting of large data centers (clouds) managed by cloud providers, offers storage and computation to cloud users based on service level agreements (SLAs). Services in cloud computing are offered at relatively low cost, so the model is a natural target for many applications, such as startup businesses and e-commerce applications. The area of cloud computing has grown rapidly in the last few years; yet it still faces some obstacles. For example, there is a lack of mechanisms that guarantee to cloud users that the quality they are actually getting matches the quality of service specified in SLAs. Another example is the concern over security, privacy and trust, since users lose control over their data and programs once they are sent to cloud providers. In this paper, we introduce a new architecture that aids the design and implementation of attestation services. These services monitor cloud-based applications to ensure software qualities such as security, privacy, trust and usability. Our approach is user-centric, giving users more control over their own data and applications, and it is cloud-based, utilizing the power of the clouds themselves. Simulation results show that many services can be designed based on our architecture with limited performance overhead.

16.
Motivation: Peptide mass fingerprinting (PMF) is a method for protein identification in which a protein is fragmented by a defined cleavage protocol (usually proteolysis with trypsin), and the masses of these products constitute a 'fingerprint' that can be searched against theoretical fingerprints of all known proteins. In the first stage of PMF, the raw mass spectrometric data are processed to generate a peptide mass list. In the second stage this protein fingerprint is used to search a database of known proteins for the best protein match. Although current software solutions can typically deliver a match in a relatively short time, a system that can find a match in real time could change the way in which PMF is deployed and presented. In a paper published earlier we presented a hardware design of a raw mass spectra processor that, when implemented in Field Programmable Gate Array (FPGA) hardware, achieves almost 170-fold speed gain relative to a conventional software implementation running on a dual processor server. In this article we present a complementary hardware realization of a parallel database search engine that, when running on a Xilinx Virtex 2 FPGA at 100 MHz, delivers 1800-fold speed-up compared with an equivalent C software routine, running on a 3.06 GHz Xeon workstation. The inherent scalability of the design means that processing speed can be multiplied by deploying the design on multiple FPGAs. The database search processor and the mass spectra processor, running on a reconfigurable computing platform, provide a complete real-time PMF protein identification solution.
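A minimal sketch of the second-stage search that the FPGA engine parallelises: each protein's theoretical peptide masses are compared against the experimental mass list within a tolerance, and proteins are ranked by the number of matches. The tolerance, data layout and match-count scoring are simplified assumptions; real engines typically use probabilistic scores such as MOWSE.

```python
# Simplified peptide-mass-fingerprint search: rank proteins by how many of the
# experimental peptide masses fall within a tolerance of their theoretical masses.

TOL_PPM = 50.0  # assumed mass tolerance in parts per million

def matches(theoretical, experimental, tol_ppm=TOL_PPM):
    count = 0
    for m_exp in experimental:
        if any(abs(m_exp - m_theo) <= m_theo * tol_ppm * 1e-6 for m_theo in theoretical):
            count += 1
    return count

def search(database, experimental):
    """database: {protein_id: [theoretical peptide masses]}; returns best matches first."""
    scored = [(matches(masses, experimental), protein_id)
              for protein_id, masses in database.items()]
    return sorted(scored, reverse=True)
```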

17.
With the continued growth in software environments on cloud application platforms, self-management at the Platform-as-a-Service (PaaS) level has become a pressing concern, and the run-time monitoring, analysis and detection of critical situations are all fundamental requirements if we are to achieve autonomic behaviour in complex PaaS environments. In this paper we focus on cloud application platforms offering their customers a range of generic built-in re-usable services. By identifying key characteristics of these complex dynamic systems, we compare cloud application platforms to distributed sensor networks, and investigate the viability of exploiting these similarities with a case study. We treat cloud data storage services as “virtual” sensors constantly emitting monitoring data, such as numbers of connections and storage space availability, which are then analysed by the central component of a monitoring framework so as to detect and react to SLA violations. We discuss the potential benefits, as well as some shortcomings, of adopting this approach.
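A minimal sketch of the "virtual sensor" idea described above: the storage service periodically emits readings (connection count, free space), and a central monitor compares them against SLA thresholds and reports violations. The metric names and thresholds are illustrative assumptions, not the paper's actual framework.

```python
# Each storage service acts as a "virtual sensor" emitting readings; the monitor
# checks every reading against SLA thresholds and reports violations.

SLA = {"max_connections": 1000, "min_free_space_gb": 50}   # assumed thresholds

def check(reading):
    """reading: {'service': str, 'connections': int, 'free_space_gb': float}"""
    violations = []
    if reading["connections"] > SLA["max_connections"]:
        violations.append("too many concurrent connections")
    if reading["free_space_gb"] < SLA["min_free_space_gb"]:
        violations.append("free storage below SLA minimum")
    return violations

def monitor(readings):
    for r in readings:
        for v in check(r):
            # A real framework would trigger scaling or alerting here.
            print(f"SLA violation on {r['service']}: {v}")

monitor([{"service": "blobstore-eu", "connections": 1200, "free_space_gb": 30.0}])
```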

18.
Photon imaging is an increasingly important technique for the measurement and analysis of chemiluminescence and bioluminescence. New high-performance, low-light-level imaging systems have recently become available for the life sciences. These systems use advances in camera design and digital image processing and are now being used for a wide range of luminescence applications. They offer good sensitivity for photon detection and a large dynamic range, and are suitable for quantitative analysis. This is achieved using a range of software techniques including image arithmetic, histogramming or summing regions of interest, feature extraction and multiple-image processing for kinetics or assay screening. Improvements in image-processing hardware and software have increased the usefulness of these systems in the biosciences. Low-light imaging is a rapid and non-invasive method for the sensitive detection and analysis of luminescent assays. As such it offers a powerful and sensitive tool for investigating processes both at the cellular level (luc and lux reporter genes, intracellular signalling) and in macro samples (immunoassays, gels and blots, tissue sections).

19.
Software architecture definition for on-demand cloud provisioning
Cloud computing is a promising paradigm for the provisioning of IT services. Cloud computing infrastructures, such as those offered by the RESERVOIR project, aim to facilitate the deployment, management and execution of services across multiple physical locations in a seamless manner. In order for service providers to meet their quality of service objectives, it is important to examine how software architectures can be described to take full advantage of the capabilities introduced by such platforms. When dealing with software systems involving numerous loosely coupled components, architectural constraints need to be made explicit to ensure continuous operation when allocating and migrating services from one host in the Cloud to another. In addition, the need for optimising resources and minimising over-provisioning requires service providers to control the dynamic adjustment of capacity throughout the entire service lifecycle. We discuss the implications for software architecture definitions of distributed applications that are to be deployed on Clouds. In particular, we identify novel primitives to support service elasticity, co-location and other requirements, propose language abstractions for these primitives and define their behavioural semantics precisely by establishing constraints on the relationship between architecture definitions and Cloud management infrastructures, using a model denotational approach in order to derive appropriate service management cycles. Using these primitives and semantic definitions as a basis, we define a service management framework implementation that supports on-demand cloud provisioning and present a novel monitoring framework that meets the demands of Cloud-based applications.
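The primitives discussed (elasticity, co-location) can be pictured as declarative constraints attached to an architecture description, which the cloud management layer then enforces. The sketch below is an illustrative encoding only, under assumed names and metrics; it is not the RESERVOIR project's actual definition language.

```python
from dataclasses import dataclass, field

@dataclass
class Elasticity:
    min_instances: int
    max_instances: int
    scale_metric: str            # e.g. "cpu_load" (assumed metric name)
    scale_up_threshold: float

@dataclass
class Component:
    name: str
    image: str
    elasticity: Elasticity
    colocate_with: list = field(default_factory=list)   # components that must share a host

# An assumed service description that a management layer could enforce at runtime.
web = Component("web", "web:1.4", Elasticity(2, 10, "cpu_load", 0.7))
cache = Component("cache", "cache:3.0", Elasticity(1, 4, "mem_used", 0.8),
                  colocate_with=["web"])
```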

20.
Background: Downstream applications in metabolomics, as well as mathematical modelling, require data in a quantitative format, which may also necessitate the automated and simultaneous quantification of numerous metabolites. Although numerous applications have previously been developed for metabolomics data handling, automated calibration and calculation of concentrations in terms of μmol have not been carried out. Moreover, most metabolomics applications are designed for GC-MS and are not suitable for LC-MS, since in LC the deviation in retention time is not linear, which these applications do not take into account. In addition, only a few are web-based applications, which can improve on stand-alone software in terms of compatibility, sharing capabilities and hardware requirements, although good bandwidth is required. Furthermore, none of them incorporate asynchronous communication to allow real-time interaction with pre-processed results. Findings: Here, we present EasyLCMS (http://www.easylcms.es/), a new application for automated quantification, which was validated using more than 1000 concentration comparisons against manual operation in real samples. The results showed that only 1% of the quantifications presented a relative error higher than 15%. Using clustering analysis, the metabolites with the highest relative-error distributions were identified and studied to resolve recurrent mistakes. Conclusions: EasyLCMS is a new web application designed to quantify numerous metabolites simultaneously, integrating LC distortion handling and asynchronous web technology to present a visual interface with dynamic interaction, which allows checking and correction of LC-MS raw data pre-processing results. Moreover, quantified data obtained with EasyLCMS are fully compatible with numerous downstream applications, as well as with mathematical modelling in the systems biology field.
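A minimal sketch of the automated calibration step described here: fit a linear calibration curve from standards for one metabolite and convert sample peak areas into concentrations. The standards, concentrations and linear model are illustrative assumptions, not EasyLCMS's actual implementation.

```python
import numpy as np

# Assumed calibration standards for one metabolite: concentration (µmol) vs. peak area.
standard_conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0])
standard_area = np.array([980.0, 5100.0, 9900.0, 51200.0, 99500.0])

# Fit area = slope * conc + intercept, then invert the curve for the samples.
slope, intercept = np.polyfit(standard_conc, standard_area, deg=1)

def quantify(peak_areas):
    """Convert integrated peak areas into concentrations (µmol) via the calibration curve."""
    return [(area - intercept) / slope for area in peak_areas]

print(quantify([2500.0, 76000.0]))
```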
