Similar Documents
20 similar documents found.
1.
A key point in the analysis of dynamical models of biological systems is to handle systems of relatively high dimensions. In the present paper we propose a method to hierarchically organize a certain type of piecewise affine (PWA) differential systems. This specific class of systems has been extensively studied for the past few years, as it provides a good framework to model gene regulatory networks. The method, shown on several examples, allows a qualitative analysis of the asymptotic behavior of a PWA system, decomposing it into several smaller subsystems. This technique, based on the well-known strongly connected components decomposition, is not new. However, its adaptation to the non-smooth PWA differential equations turns out to be quite relevant because of the strong discrete structure underlying these equations. Its biological relevance is shown on a 7-dimensional PWA system modeling the gene network responsible for the carbon starvation response in Escherichia coli.
Laurent Tournier (Corresponding author)
Jean-Luc Gouzé
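As a hedged illustration of the decomposition this abstract relies on, the sketch below builds a small, made-up regulatory interaction graph (not the 7-gene E. coli model) and uses the standard strongly-connected-components condensation, here via the networkx library, to order the resulting subsystems for hierarchical analysis.

```python
# Minimal sketch (not the authors' code): hierarchical decomposition of a gene
# regulatory network's interaction graph into strongly connected components.
# The edge list below is an illustrative toy network, NOT the 7-gene E. coli model.
import networkx as nx

# Directed interaction graph: an edge u -> v means gene u regulates gene v.
interactions = [
    ("crp", "fis"), ("fis", "crp"),      # mutual regulation -> one SCC
    ("fis", "gyrAB"), ("gyrAB", "topA"),
    ("topA", "fis"),                     # feedback loop joins these genes
    ("crp", "rrn"),                      # downstream gene, its own SCC
]
G = nx.DiGraph(interactions)

# Condense the graph: each strongly connected component becomes one node.
C = nx.condensation(G)  # a DAG; C.nodes[i]["members"] lists the genes in SCC i

# A topological order of the condensation gives the hierarchy: upstream
# subsystems can be analyzed first, and their asymptotic behavior treated as
# a fixed "input" when analyzing downstream subsystems.
for scc in nx.topological_sort(C):
    print(f"subsystem {scc}: genes {sorted(C.nodes[scc]['members'])}")
```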

2.
The performance skeleton of an application is a short-running program whose performance in any scenario reflects the performance of the application it represents. Specifically, the execution time of the performance skeleton is a small fixed fraction of the execution time of the corresponding application in any execution environment. Such a skeleton can be employed to quickly estimate the performance of a large application under existing network and node sharing. This paper presents a framework for automatic construction of performance skeletons of a specified execution time and evaluates their use in performance prediction with CPU and network sharing. The approach is based on capturing the execution behavior of an application and automatically generating a synthetic skeleton program that reflects that execution behavior. The paper demonstrates that performance skeletons running for a few seconds can predict the application execution time fairly accurately. The relationship of skeleton execution time, application characteristics, and the nature of resource sharing to the accuracy of skeleton-based performance prediction is analyzed in detail. The goal of this research is accurate performance estimation in heterogeneous and shared computational grids.
Jaspal Subhlok (Corresponding author)
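A minimal sketch of the skeleton idea, assuming a hypothetical two-phase compute/communication profile: scale the recorded profile down to a target runtime and replay the same proportions synthetically. This only illustrates the concept, not the paper's construction framework.

```python
# Toy sketch of the performance-skeleton idea (not the authors' framework):
# scale a recorded execution profile down to a short target runtime and
# replay the same compute/communication ratio synthetically.
import time

# Hypothetical profile captured from the full application (seconds).
profile = {"compute": 540.0, "network": 60.0}   # 90% compute, 10% communication
target_runtime = 5.0                            # desired skeleton length

total = sum(profile.values())
scaled = {phase: t * target_runtime / total for phase, t in profile.items()}

def burn_cpu(seconds):
    """Busy-loop for roughly `seconds` to mimic the compute phase."""
    end = time.perf_counter() + seconds
    while time.perf_counter() < end:
        pass

def mimic_network(seconds):
    """Placeholder for communication; a real skeleton would exchange messages."""
    time.sleep(seconds)

burn_cpu(scaled["compute"])
mimic_network(scaled["network"])
# Measured skeleton time * (total / target_runtime) estimates the full runtime.
```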

3.
Efficient and robust data streaming services are a critical requirement of emerging Grid applications, which are based on seamless interactions and coupling between geographically distributed application components. Furthermore, the dynamism of Grid environments and applications requires that these services be able to continually manage and optimize their operation based on system state and application requirements. This paper presents a design and implementation of such a self-managing data-streaming service based on online control strategies. A Grid-based fusion workflow scenario is used to evaluate the service and demonstrate its feasibility and performance.
Sherif Abdelwahed

4.
The integration of multiple predictors promises higher prediction accuracy than the accuracy that can be obtained with a single predictor. The challenge is how to select the best predictor at any given moment. Traditionally, multiple predictors are run in parallel and the one that generates the best result is selected for prediction. In this paper, we propose a novel approach for predictor integration based on the learning of historical predictions. Compared with the traditional approach, it does not require running all the predictors simultaneously. Instead, it uses classification algorithms such as k-Nearest Neighbor (k-NN) and Bayesian classification, together with dimension-reduction techniques such as Principal Component Analysis (PCA), to forecast the best predictor for the workload under study based on the learning of historical predictions. Then only the forecasted best predictor is run for prediction. Our experimental results show that it achieved 20.18% higher best-predictor forecasting accuracy than the cumulative-MSE-based predictor selection approach used in the popular Network Weather Service system. In addition, it outperformed the observed most accurate single predictor in the pool for 44.23% of the performance traces.
Renato J. Figueiredo
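The sketch below illustrates the kind of selection pipeline the abstract describes, under the assumption of synthetic workload features and labels and using scikit-learn's PCA and k-NN; it is not the paper's implementation.

```python
# Hedged sketch of the selection idea (not the paper's implementation): learn,
# from features of recent workload history, which predictor was most accurate,
# then run only the forecasted best predictor. Data here is synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_samples, n_features, n_predictors = 200, 16, 3

# Each row: features summarizing a window of the workload trace.
X = rng.normal(size=(n_samples, n_features))
# Label: index of the predictor that had the lowest error on that window.
y = rng.integers(0, n_predictors, size=n_samples)

# PCA reduces the feature dimension; k-NN forecasts the best predictor.
selector = make_pipeline(PCA(n_components=5), KNeighborsClassifier(n_neighbors=7))
selector.fit(X[:150], y[:150])

best = selector.predict(X[150:151])[0]
print(f"forecasted best predictor for the next window: #{best}")
```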

5.
6.
Sub-Antarctic Marion Island has had a permanent research station for 50 years, and the island's Wandering Albatrosses have been intensively studied for 20 years. The reactions of breeding birds to approaches by a human on foot were recorded. Three response variables were calculated: intensity of vocal reaction (IVR), intensity of non-vocal reaction (INR) and overall response index (ORI). At 5 m from the nest, twice as many birds stood and/or vocalised as at 15 m. Nearest neighbour distance, age and gender did not explain individual variability of responses. Study colony birds had higher IVR scores than non-study colony birds; birds at colonies closest to the station had the highest ORI scores. A better breeding record was associated with lower IVR and ORI scores, but a causative relationship remains to be demonstrated. A minimum viewing distance of 25 m is recommended for breeding Wandering Albatrosses.
Marienne S. de Villiers (Fax: +27-21-6503434)
John Cooper
Peter G. Ryan

7.
We investigate operating system noise, which we identify as one of the main reasons for a lack of synchronicity in parallel applications. Using a microbenchmark, we measure the noise on several contemporary platforms and find that, even with a general-purpose operating system, noise can be limited if certain precautions are taken. We then inject artificially generated noise into a massively parallel system and measure its influence on the performance of collective operations. Our experiments indicate that on extreme-scale platforms, the performance is correlated with the largest interruption to the application, even if the probability of such an interruption on a single process is extremely small. We demonstrate that synchronizing the noise can significantly reduce its negative influence.
Aroon Nataraj
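A toy noise microbenchmark in the spirit of the one described, assuming that any work quantum taking much longer than the fastest observed quantum was interrupted by the OS; the threshold and quantum size are arbitrary choices, not the paper's.

```python
# Minimal noise-microbenchmark sketch (in the spirit of the paper, not its code):
# time many identical short work quanta; quanta that take much longer than the
# minimum were likely interrupted by the operating system (noise).
import time

def quantum(n=20000):
    s = 0
    for i in range(n):
        s += i * i
    return s

samples = []
for _ in range(5000):
    t0 = time.perf_counter()
    quantum()
    samples.append(time.perf_counter() - t0)

base = min(samples)
detours = [t - base for t in samples if t > 2 * base]  # presumed interruptions
print(f"baseline quantum: {base*1e6:.1f} us, "
      f"interruptions: {len(detours)}, largest: {max(detours, default=0)*1e6:.1f} us")
```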

8.
The competitiveness of online algorithms is measured based on the correctness of the results produced and processing-time efficiency. Traditionally, evolutionary algorithms are not favored in online paradigms because of the large number of iterations they involve, which translates directly into processing-time overhead. In this paper we describe the MARS (Management Architecture for Resource Services) online scheduling algorithm, which uses Simulated Annealing and concepts from Tabu Search to drastically decrease the processing time of the algorithm. The paper outlines the concepts behind MARS, the components involved and the scheduling methodology used. In addition, we identify the time-consuming bottlenecks in the performance of the system and show how evolutionary algorithms help us soar past them.
Hesham El-Rewini
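To make the simulated-annealing idea concrete, here is a generic, hedged sketch that anneals a task-to-node assignment to reduce makespan; the task costs are invented and the move and cooling scheme are not MARS's.

```python
# Illustrative simulated-annealing scheduler (a generic sketch, not MARS itself):
# assign tasks to nodes so that the maximum node load (makespan) is minimized.
import math, random

random.seed(1)
task_costs = [4, 7, 2, 9, 5, 3, 8]       # hypothetical task run times
n_nodes = 3

def makespan(assign):
    loads = [0.0] * n_nodes
    for task, node in enumerate(assign):
        loads[node] += task_costs[task]
    return max(loads)

assign = [random.randrange(n_nodes) for _ in task_costs]
best, best_cost = assign[:], makespan(assign)
temp = 10.0
while temp > 0.01:
    cand = assign[:]
    cand[random.randrange(len(cand))] = random.randrange(n_nodes)  # random move
    delta = makespan(cand) - makespan(assign)
    # Always accept improvements; accept worse moves with temperature-dependent probability.
    if delta <= 0 or random.random() < math.exp(-delta / temp):
        assign = cand
        if makespan(assign) < best_cost:
            best, best_cost = assign[:], makespan(assign)
    temp *= 0.99  # geometric cooling schedule

print("best assignment:", best, "makespan:", best_cost)
```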

9.
Predictive performance modelling of parallel component compositions (total citations: 1; self-citations: 0; citations by others: 1)
Large-scale scientific computing applications frequently make use of closely-coupled distributed parallel components. The performance of such applications is therefore dependent on the component parts and their interaction at run-time. This paper describes a methodology for predictive performance modelling and evaluation of parallel applications composed of multiple interacting components. In this paper, the fundamental steps and required operations involved in the modelling and evaluation process are identified—including component decomposition, component model combination, M×N communication modelling, dataflow analysis and overall performance evaluation. A case study is presented to illustrate the modelling process and the methodology is verified through experimental analysis.
Stephen A. Jarvis
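A toy version of combining component models along the dataflow, assuming hypothetical per-component runtimes and transfer costs; the end-to-end estimate is the critical-path finish time. This only gestures at the methodology and is not the paper's model.

```python
# Toy composition model (hedged sketch, not the paper's methodology): combine
# per-component runtime predictions with a simple transfer-cost model along the
# dataflow graph and take the critical-path finish time as the end-to-end estimate.
# All numbers below are hypothetical.
from functools import lru_cache

runtime = {"meshgen": 12.0, "solverA": 40.0, "solverB": 35.0, "viz": 8.0}
transfer = {("meshgen", "solverA"): 3.0, ("meshgen", "solverB"): 3.0,
            ("solverA", "viz"): 5.0, ("solverB", "viz"): 4.0}

@lru_cache(maxsize=None)
def finish(component):
    """Earliest finish time: latest upstream arrival plus the component's own runtime."""
    arrivals = [finish(src) + cost
                for (src, dst), cost in transfer.items() if dst == component]
    return runtime[component] + max(arrivals, default=0.0)

print("predicted end-to-end time:", max(finish(c) for c in runtime))
```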

10.
The capacity needs of online services are mainly determined by the volume of user loads. For large-scale distributed systems running such services, it is quite difficult to match the capacities of various system components. In this paper, a novel and systematic approach is proposed to profile services for resource optimization and capacity planning. We collect resource-consumption-related measurements from various components across distributed systems and further search for constant relationships between these measurements. If such relationships always hold under various workloads over time, we consider them as invariants of the underlying system. After extracting many invariants from the system, given any volume of user loads, we can follow these invariant relationships sequentially to estimate the capacity needs of individual components. By comparing the current resource configurations against the estimated capacity needs, we can discover the weakest points that may deteriorate system performance. Operators can consult such analytical results to optimize resource assignments and remove potential performance bottlenecks. In this paper, we propose several algorithms to support capacity analysis and guide operators' capacity planning tasks. Our algorithms are evaluated with real systems and experimental results are also included to demonstrate the effectiveness of our approach.
Kenji Yoshihira
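The following hedged sketch shows one way such an invariant could be extracted and used, assuming synthetic front-end/back-end measurements and a simple linear fit; the paper's actual algorithms may differ.

```python
# Hedged sketch of the invariant idea (not the paper's algorithms): fit a linear
# relationship between two measurements and keep it as an "invariant" only if it
# holds across all observed workloads. Data below is synthetic.
import numpy as np

rng = np.random.default_rng(2)
user_load = rng.uniform(100, 1000, size=200)            # requests/sec at front end
db_queries = 3.2 * user_load + rng.normal(0, 10, 200)   # back-end measurement

slope, intercept = np.polyfit(user_load, db_queries, 1)
pred = slope * user_load + intercept
r2 = 1 - np.sum((db_queries - pred) ** 2) / np.sum((db_queries - db_queries.mean()) ** 2)

if r2 > 0.95:  # relationship holds under all observed workloads -> treat as invariant
    projected_load = 5000.0
    print(f"invariant kept (R^2={r2:.3f}); "
          f"estimated back-end capacity need at {projected_load:.0f} req/s: "
          f"{slope * projected_load + intercept:.0f} queries/s")
```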

11.
12.
Today Graphics Processing Units (GPUs) are a largely underexploited resource on existing desktops and a possible cost-effective enhancement to high-performance systems. To date, most applications that exploit GPUs are specialized scientific applications. Little attention has been paid to harnessing these highly-parallel devices to support more generic functionality at the operating system or middleware level. This study starts from the hypothesis that generic middleware-level techniques that improve distributed system reliability or performance (such as content addressing, erasure coding, or data similarity detection) can be significantly accelerated using GPU support. We take a first step towards validating this hypothesis and we design StoreGPU, a library that accelerates a number of hashing-based middleware primitives popular in distributed storage system implementations. Our evaluation shows that StoreGPU enables up to twenty-five-fold performance gains on synthetic benchmarks as well as on a high-level application: the online similarity detection between large data files.
Matei Ripeanu
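As a CPU-only illustration of the hashing primitive being accelerated (not StoreGPU's API), the sketch below hashes fixed-size blocks of two synthetic byte strings and estimates their similarity from shared block hashes.

```python
# CPU-only sketch of a hashing-based similarity primitive (not StoreGPU's API):
# hash fixed-size blocks of two data buffers and estimate similarity as the
# Jaccard overlap of their block-hash sets. The data is synthetic.
import hashlib, os

def block_hashes(data: bytes, block_size: int = 4096) -> set:
    """Return the set of SHA-1 digests of consecutive fixed-size blocks."""
    return {hashlib.sha1(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)}

shared = os.urandom(512 * 1024)              # common content between the two "files"
a = shared + os.urandom(256 * 1024)
b = shared + os.urandom(256 * 1024)

ha, hb = block_hashes(a), block_hashes(b)
similarity = len(ha & hb) / max(len(ha | hb), 1)
print(f"shared-block similarity: {similarity:.2%}")
```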

13.
14.
The influences of the operating system and system-specific effects on application performance are increasingly important considerations in high performance computing. OS kernel measurement is key to understanding the performance influences and the interrelationship of system and user-level performance factors. The KTAU (Kernel TAU) methodology and Linux-based framework provides parallel kernel performance measurement from both a kernel-wide and process-centric perspective. The first characterizes overall aggregate kernel performance for the entire system. The second characterizes kernel performance when it runs in the context of a particular process. KTAU extends the TAU performance system with kernel-level monitoring, while leveraging TAU's measurement and analysis capabilities. We explain the rationale and motivations behind our approach, describe the KTAU design and implementation, and show working examples on multiple platforms demonstrating the versatility of KTAU in integrated system/application monitoring.
Alan Morris

15.
The recent contribution by Jarmila Kukalová-Peck on Hennigian phylogenetics and hexapod limb evolution is critically evaluated.
Michael S. Engel (Corresponding author)

16.
We present a technique that controls the peak power consumption of a high-density server by implementing a feedback controller that uses precise, system-level power measurement to periodically select the highest performance state while keeping the system within a fixed power constraint. A control theoretic methodology is applied to systematically design this control loop with analytic assurances of system stability and controller performance, despite unpredictable workloads and running environments. In a real server we are able to control power over a 1 second period to within 1 W and over an 8 second period to within 0.1 W. Conventional servers respond to power supply constraint situations by using simple open-loop policies to set a safe performance level in order to limit peak power consumption. We show that closed-loop control can provide higher performance under these conditions and implement this technique on an IBM BladeCenter HS20 server. Experimental results demonstrate that closed-loop control provides up to 82% higher application performance compared to open-loop control and up to 17% higher performance compared to a widely used ad-hoc technique.
Malcolm Ware
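A minimal sketch of closed-loop power capping, assuming a made-up linear power model and a simple proportional controller; the real system's measurement path, actuation states, and control design are more sophisticated.

```python
# Illustrative closed-loop power capping (a sketch of the idea, not IBM's
# controller): each control period, measure system power and use a proportional
# controller to adjust the performance state so power settles at the budget.
# The power model and gain below are hypothetical.
import random

random.seed(3)
budget_watts = 250.0
perf_state = 1.0            # 0.1 (slowest) .. 1.0 (fastest), e.g. a frequency scale
GAIN = 0.002                # proportional gain; a real design tunes this for stability

def measure_power(state):
    """Stand-in for the precise system-level power measurement."""
    return 120.0 + 160.0 * state + random.uniform(-5, 5)  # idle + load-dependent part

for period in range(20):
    power = measure_power(perf_state)
    error = budget_watts - power                       # positive -> headroom remains
    perf_state = min(1.0, max(0.1, perf_state + GAIN * error))
    print(f"t={period:2d}s power={power:6.1f} W -> perf_state={perf_state:.2f}")
```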

17.
Studies on the effects of a variety of exogenous and anthropogenic environmental factors, including endocrine disruptors, heavy metals, UV light, high temperature, and others, on marine organisms have been presented at the 2nd Bilateral Seminar Italy–Japan held in November 2006. Reports were discussed in order to reveal the current situation of marine ecosystems, aiming at evaluation and prediction of environmental risks.
V. Matranga

18.
Syndromic surveillance uses new ways of gathering data to identify possible disease outbreaks. Because syndromic surveillance can be implemented to detect patterns before diseases are even identified, it poses novel problems for informed consent, patient privacy and confidentiality, and risks of stigmatization. This paper analyzes these ethical issues from the viewpoint of the patient as victim and vector. It concludes by pointing out that the new International Health Regulations fail to take full account of the ethical challenges raised by syndromic surveillance.
Leslie P. Francis

19.
20.
I show that gene regulation networks are qualitatively consistent and therefore sufficiently similar to linearly separable connectionist networks to warrant that the connectionist framework be applied to gene regulation. On this view, natural selection designs gene regulation networks to overcome the difficulty of development. I offer some general lessons about their evolvability that can be learned by examining the generic features of connectionist networks.
Roger Sansom
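To illustrate the linear-separability framing (a reader's sketch, not the paper's model), a gene can be treated as a single threshold unit over its regulators, exactly like one connectionist node; the weights and threshold below are hypothetical.

```python
# Toy illustration of the linear-separability claim (a sketch, not the paper's
# model): a gene's expression treated as a threshold unit over its regulators,
# like a single connectionist (perceptron) node. Weights are hypothetical.
def gene_on(regulator_levels, weights, threshold):
    """Return True if the weighted regulatory input exceeds the activation threshold."""
    return sum(w * x for w, x in zip(weights, regulator_levels)) > threshold

# One activator (weight +1.0) and one repressor (weight -0.8) controlling a target gene.
weights, threshold = [1.0, -0.8], 0.3
for activator in (0.0, 1.0):
    for repressor in (0.0, 1.0):
        print(f"activator={activator}, repressor={repressor} -> "
              f"expressed={gene_on([activator, repressor], weights, threshold)}")
```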
