Similar Articles
20 similar articles found.
1.
A Load Balancing Tool for Distributed Parallel Loops
Large-scale applications typically contain parallel loops with many iterates. The iterates of a parallel loop may have variable execution times, which translate into performance degradation of an application due to load imbalance. This paper describes a tool for load balancing parallel loops on distributed-memory systems. The tool assumes that the data for a parallel loop to be executed are already partitioned among the participating processors. The tool utilizes the MPI library for interprocessor coordination and determines processor workloads by loop scheduling techniques. The tool was designed independently of any application; hence, it must be supplied with a routine that encapsulates the computations for a chunk of loop iterates, as well as routines to transfer data and results between processors. Performance evaluation on a Linux cluster indicates that the tool reduces the cost of executing a simulated irregular loop by up to 81% compared with execution without load balancing. The tool is useful for parallelizing sequential applications with parallel loops, or as an alternative load balancing routine for existing parallel applications.
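The abstract notes that processor workloads are determined by loop scheduling techniques. As an illustration of that family of techniques, here is a minimal sketch of one classic scheme, guided self-scheduling, in which each successive chunk is a fixed fraction of the remaining iterations; the function name and parameters are invented for illustration, and the tool described above may well use a different scheduling rule:

```python
import math

def guided_chunks(total_iters, num_procs, min_chunk=1):
    """Split a loop of `total_iters` iterations into chunks using
    guided self-scheduling: each chunk is 1/num_procs of the
    remaining iterations, so early chunks are large (low scheduling
    overhead) and late chunks are small (good load balance)."""
    chunks = []
    remaining = total_iters
    while remaining > 0:
        size = max(min_chunk, math.ceil(remaining / num_procs))
        size = min(size, remaining)
        chunks.append(size)
        remaining -= size
    return chunks

print(guided_chunks(100, 4))  # chunk sizes shrink from 25 down to 1
```

In a distributed setting, each idle worker would request the next chunk from this sequence, which is how variable iterate costs get absorbed.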

2.
IMpRH server: an RH mapping server available on the Web
SUMMARY: The INRA-Minnesota Porcine Radiation Hybrid (IMpRH) server provides both a mapping tool (the IMpRH mapping tool) and a database (the IMpRH database) of officially submitted results. The mapping tool permits the mapping of a new marker relative to markers previously mapped on the IMpRH panel. The IMpRH database is the official database for the submission of new results and for queries. The database permits the sharing not only of public data but also of semi-private and private data.

3.
Our team developed a metadata editing and management system employing state-of-the-art XML technologies, initially aimed at the environmental sciences but with the potential to be useful across multiple domains. We chose a modular and distributed design for scalability, flexibility, options for customization, and the possibility of adding more functionality at a later stage. The system consists of a desktop design tool that generates code for the actual online editor, a native XML database, and an online user access management application. The design tool, a Java Swing application that reads an XML schema, provides the designer with options to combine input fields into online forms with user-friendly tags and to determine the flow of input forms. Based on design decisions, the tool generates XForms code for the online metadata editor, which is based on the Orbeon XForms engine. The design tool fulfills two requirements: first, data entry forms based on a schema are customized at design time; second, the tool can generate data entry applications for any valid XML schema without relying on custom information in the schema. A configuration file in the design tool saves custom information generated at design time. Future developments will add functionality to the design tool to integrate help text, tool tips, project-specific keyword lists, and thesaurus services. Cascading style sheets customize the look and feel of the finished editor. The editor produces XML files in compliance with the original schema; however, a user may save the input into a native XML database at any time, independent of validity. The system uses the open-source XML database eXist for storage, and a MySQL relational database with a simple JavaServer Faces user interface for file and access management.
We chose three levels of distributed administrative responsibility to handle the common situation of an information manager entering the bulk of the metadata while leaving specifics to the actual data provider.

4.
5.
Globalization of business and competitiveness in manufacturing have forced companies to improve their manufacturing facilities to respond to market requirements. Machine tool evaluation is an essential decision involving imprecise and vague information, and plays a major role in improving productivity and flexibility in manufacturing. The aim of this study is to present an integrated approach for decision-making in machine tool selection. This paper focuses on the integration of a consistent fuzzy AHP (Analytic Hierarchy Process) and a fuzzy COmplex PRoportional ASsessment (COPRAS) for multi-attribute decision-making in selecting the most suitable machine tool. In this method, the fuzzy linguistic preference relation is integrated into the AHP to handle imprecise and vague information and to simplify the data collection for the pair-wise comparison matrix of the AHP, which determines the weights of the attributes. The output of the fuzzy AHP is imported into the fuzzy COPRAS method for ranking alternatives through the closeness coefficient. The application of the proposed model is illustrated by a numerical example based on data collected by questionnaire and from the literature. The results highlight the integration of the improved fuzzy AHP and the fuzzy COPRAS as a precise tool providing effective multi-attribute decision-making for evaluating machine tools in an uncertain environment.
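To make the ranking step concrete, here is a minimal sketch of the crisp (non-fuzzy) COPRAS aggregation that the method above fuzzifies; the alternatives, criteria values, weights, and function name are invented for illustration and do not come from the paper:

```python
def copras_rank(matrix, weights, benefit):
    """Rank alternatives with the (crisp) COPRAS method.
    matrix: rows = alternatives, columns = criteria
    benefit: True for benefit criteria, False for cost criteria."""
    m, n = len(matrix), len(matrix[0])
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    # sum-normalize each column and apply the criterion weights
    norm = [[weights[j] * matrix[i][j] / col_sums[j] for j in range(n)]
            for i in range(m)]
    s_plus = [sum(norm[i][j] for j in range(n) if benefit[j]) for i in range(m)]
    s_minus = [sum(norm[i][j] for j in range(n) if not benefit[j]) for i in range(m)]
    inv_sum = sum(1.0 / s for s in s_minus)
    # relative significance: benefit part plus inverse-weighted cost part
    q = [s_plus[i] + sum(s_minus) / (s_minus[i] * inv_sum) for i in range(m)]
    best = max(q)
    return [qi / best for qi in q]  # utility degree, best alternative = 1.0
```

For example, `copras_rank([[3500, 80], [4200, 95], [3900, 85]], [0.5, 0.5], [False, True])` scores three hypothetical machine tools on a cost criterion (price) and a benefit criterion (capacity), with the best alternative receiving utility 1.0.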

6.
SUMMARY: Data processing, analysis and visualization (datPAV) is an exploratory tool that allows experimentalists to quickly assess the general characteristics of their data. This platform-independent software is designed as a generic tool to process and visualize data matrices. The tool explores the organization of the data, detects errors, and supports basic statistical analyses. Processed data can be reused, so that different step-by-step data processing/analysis workflows can be created to carry out detailed investigations. The visualization option provides publication-ready graphics. Applications of this tool are demonstrated at the web site for three cases of metabolomics, environmental, and hydrodynamic data analysis. AVAILABILITY: datPAV is available free for academic use at http://www.sdwa.nus.edu.sg/datPAV/.

7.
Computational gene regulation models provide a means for scientists to draw biological inferences from time-course gene expression data. Based on the state-space approach, we developed a new modeling tool for inferring gene regulatory networks, called time-delayed Gene Regulatory Networks (tdGRN). tdGRN takes time-delayed regulatory relationships into consideration when developing the model. In addition, a priori biological knowledge from genome-wide location analysis is incorporated into the structure of the gene regulatory network. tdGRN was evaluated on both an artificial dataset and a published gene expression dataset. It not only determines regulatory relationships that are known to exist but also uncovers potentially new ones. The results indicate that the proposed tool is effective in inferring gene regulatory relationships with time delay. tdGRN is complementary to existing methods for inferring gene regulatory networks; its novelty is that it is able to infer time-delayed regulatory relationships.
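The core idea of a time-delayed regulatory relationship can be illustrated independently of the state-space model: score candidate delays between a regulator's and a target's expression series and keep the best one. This is a hedged sketch using plain lagged Pearson correlation, not the tdGRN algorithm itself; the function name and delay range are illustrative:

```python
def best_lag(regulator, target, max_lag=3):
    """Pick the time delay d (1..max_lag) maximizing the absolute
    Pearson correlation between regulator[t] and target[t+d]."""
    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        vx = sum((a - mx) ** 2 for a in x) ** 0.5
        vy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (vx * vy) if vx and vy else 0.0
    # correlate the regulator with the target shifted by each delay d
    scores = {d: pearson(regulator[:-d], target[d:])
              for d in range(1, max_lag + 1)}
    return max(scores, key=lambda d: abs(scores[d])), scores
```

On a synthetic pair where the target simply copies the regulator two time points later, the function recovers the delay of 2 with correlation 1.0.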

8.
Genomic data visualization on the Web
Many types of genomic data can be represented in matrix format, with rows corresponding to genes and columns corresponding to gene features. The heat map is a popular technique for visualizing such data, plotting the data on a two-dimensional grid and using a color scale to represent the magnitude of each matrix entry. Prism is a Web-based software tool for quickly generating annotated heat map visualizations of genome-wide data. The tool provides a selection of genome-specific annotation catalogs as well as a catalog upload capability. The heat maps generated are clickable, allowing the user to drill down to examine specific matrix entries, and gene annotations are linked to relevant genomic databases. AVAILABILITY: http://noble.gs.washington.edu/prism
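The essence of a heat map is the mapping from matrix entry to color. Below is a minimal sketch of one common choice, a blue-white-red diverging scale; the abstract does not specify Prism's actual color scales, so this is illustrative only:

```python
def heat_color(value, vmin, vmax):
    """Map a matrix entry to a blue -> white -> red hex color, a
    common scale for gene-expression heat maps (low = blue, high = red)."""
    t = (value - vmin) / (vmax - vmin)  # normalize to [0, 1]
    t = min(1.0, max(0.0, t))
    if t < 0.5:                         # blue to white
        r = g = int(255 * (t * 2)); b = 255
    else:                               # white to red
        r = 255; g = b = int(255 * (2 - t * 2))
    return f"#{r:02x}{g:02x}{b:02x}"

print(heat_color(-1.8, -2, 2))  # a strongly down-regulated entry: near-pure blue
```

Applied cell-by-cell over the matrix, this yields the familiar two-dimensional colored grid.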

9.
ArrayExpress is a public microarray repository founded on the Minimum Information About a Microarray Experiment (MIAME) principles that stores MIAME-compliant gene expression data. Plant-based data sets represent approximately one-quarter of the experiments in ArrayExpress. The majority are based on Arabidopsis (Arabidopsis thaliana); however, there are other data sets based on Triticum aestivum, Hordeum vulgare, and Populus subsp. AtMIAMExpress is an open-source Web-based software application for the submission of Arabidopsis-based microarray data to ArrayExpress. AtMIAMExpress exports data in MAGE-ML format for upload to any MAGE-ML-compliant application, such as J-Express and ArrayExpress. It was designed as a tool for users with minimal bioinformatics expertise, has comprehensive help and user support, and represents a simple solution to meeting the MIAME guidelines for the Arabidopsis community. Plant data are queryable both in ArrayExpress and in the Data Warehouse databases, which support queries based on gene-centric and sample-centric annotation. The AtMIAMExpress submission tool is available at http://www.ebi.ac.uk/at-miamexpress/. The software is open source and is available from http://sourceforge.net/projects/miamexpress/. For information, contact miamexpress@ebi.ac.uk.

10.
11.
Despite their strategic potential, tool management issues in flexible manufacturing systems (FMSs) have received little attention in the literature. Nonavailability of tools in FMSs cuts at the very root of the strategic goals for which such systems are designed. Specifically, the capability of FMSs to economically produce customized products (flexibility of scope) in varying batch sizes (flexibility of volume) and delivering them on an accelerated schedule (market response time) is seriously hampered when required tools are not available at the time needed. On the other hand, excess inventory of tools in such systems represents a significant cost due to the expensive nature of FMS tool inventory. This article constructs a dynamic tool requirement planning (DTRP) model for an FMS tool planning operation that allows dynamic determination of the optimal tool replenishments at the beginning of each arbitrary, managerially convenient, discrete time period. The analysis presented in the article consists of two distinct phases: In the first phase, tool demand distributions are obtained using information from manufacturing production plans (such as master production schedule (MPS) and material requirement plans (MRP)) and general tool life distributions fitted on actual time-to-failure data. Significant computational reductions are obtained if the tool failure data follow a Weibull or Gamma distribution. In the second phase, results from classical dynamic inventory models are modified to obtain optimal tool replenishment policies that permit compliance with such FMS-specific constraints as limited tool storage capacity and part/tool service levels. An implementation plan is included.
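The first phase described above turns tool-life distributions into tool demand. Here is a Monte-Carlo sketch of that step, assuming Weibull-distributed tool lives; the function, its parameters, and the quantile-based sizing rule are illustrative stand-ins, not the article's analytical derivation:

```python
import random

def tools_needed(machining_hours, shape, scale,
                 trials=2000, service_level=0.95, seed=42):
    """Monte-Carlo sketch of a DTRP-style demand calculation:
    simulate Weibull(shape, scale) tool lives until their sum covers
    the planned machining hours, and size the replenishment at the
    `service_level` quantile of the simulated tool counts."""
    rng = random.Random(seed)
    counts = []
    for _ in range(trials):
        t, n = 0.0, 0
        while t < machining_hours:
            # random.weibullvariate takes (scale, shape) in that order
            t += rng.weibullvariate(scale, shape)
            n += 1
        counts.append(n)
    counts.sort()
    return counts[int(service_level * (trials - 1))]
```

With shape 2 and scale 10 hours (mean tool life about 8.9 h), planning 100 machining hours at a 95% service level yields a replenishment of roughly 14 tools; the point of the sketch is that the quantile, not the mean, drives the stock level.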

12.
13.
Chromatin immunoprecipitation (ChIP) profiling detects in vivo protein-DNA binding, and has revealed a large combinatorial complexity in the binding of chromatin-associated proteins and their post-translational modifications. To fully explore the spatial and combinatorial patterns in ChIP-profiling data and detect potentially meaningful patterns, the areas of enrichment must be aligned and clustered, which is an algorithmically and computationally challenging task. We have developed CATCHprofiles, a novel tool for exhaustive pattern detection in ChIP profiling data. CATCHprofiles is built upon a computationally efficient implementation of the exhaustive alignment and hierarchical clustering of ChIP profiling data. The tool features a graphical interface for examining and browsing the clustering results. CATCHprofiles requires no prior knowledge about functional sites, detects known binding patterns "ab initio", and enables the detection of new patterns from ChIP data at high resolution, exemplified by the detection of asymmetric histone and histone-modification patterns around H2A.Z-enriched sites. CATCHprofiles' capability for exhaustive analysis combined with its ease of use makes it an invaluable tool for explorative research based on ChIP profiling data. CATCHprofiles and the CATCH algorithm run on all platforms and are available for free through the CATCH website: http://catch.cmbi.ru.nl/. User support is available by subscribing to the mailing list catch-users@bioinformatics.org.
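Aligning areas of enrichment before clustering is the algorithmic crux the abstract mentions. The toy sketch below illustrates only the alignment step, as an exhaustive search over shifts scored by mean squared distance; CATCHprofiles' actual scoring and exhaustive clustering are more involved, and the function name and overlap rule here are invented:

```python
def align_profiles(a, b, max_shift=5):
    """Find the shift s that best superimposes profile b onto
    profile a (lists of enrichment values), scoring each candidate
    shift by mean squared distance over the overlapping positions."""
    best = (float("inf"), 0)
    for s in range(-max_shift, max_shift + 1):
        pairs = [(a[i], b[i + s]) for i in range(len(a))
                 if 0 <= i + s < len(b)]
        if len(pairs) < len(a) // 2:   # require a reasonable overlap
            continue
        d = sum((x - y) ** 2 for x, y in pairs) / len(pairs)
        best = min(best, (d, s))
    return best[1]
```

Once every pair of profiles has been aligned this way, the resulting distances can feed a standard hierarchical clustering to group profiles with similar enrichment shapes.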

14.
FEFCO, Groupement Ondulé and the Kraft Institute have integrated the data from their recently published updated "European Database for Corrugated Board Life Cycle Studies" into a software tool developed especially for the corrugated board industry. The tool links input and output data reported in the Database to average European data for upstream and downstream processes from BUWAL 250 [3]. The tool is intended to support the environmental management of companies, since it provides a way to find opportunities for improvement and to take the environment into consideration when designing corrugated board boxes. The entire system of corrugated packaging is the basis for the calculations. It is assumed that the fibres used for the production of the corrugated base papers are produced and recycled only within this system. This simplified, so-called closed-loop approach, which is described in detail in the Database report, avoids the problem of allocating, between primary and recovered fibre based paper grades, the impacts caused by primary fibre production and by the final treatment of corrugated packaging that is not recycled. This means that the software tool cannot be used to compare the production of primary fibre and recovered fibre based materials as such. The tool enables users to vary parameters such as transport, box design, logistics and waste management according to their own circumstances, and in this way to introduce parameters for the alternatives they want to investigate. The LCA results of these alternative cases can then be compared and analysed at the inventory, characterisation, normalisation and weighting levels. The user can change neither the basic data nor the methodology.

15.
One of the most tedious steps in genetic data analyses is reformatting data generated with one program for use with other applications. This conversion is necessary because comprehensive evaluation of the data may be based on different algorithms included in diverse software, each requiring a distinct input format. A platform-independent and freely available program or web-based tool dedicated to such reformatting can save time and effort in data processing. Here, we report widgetcon, a website and a program developed to quickly and easily convert among various molecular data formats commonly used in phylogenetic analysis, population genetics, and other fields. The web-based service is available at https://www.widgetcon.net. The program and the website convert the major data formats in four basic steps in less than a minute. The resource will be a useful tool for the research community and can be updated to include more formats and features in the future.
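As a flavor of what such reformatting involves, here is a minimal sketch of one common conversion a tool like widgetcon automates, FASTA alignment to a PHYLIP-like sequential format; the function is invented for illustration and is not widgetcon's code:

```python
def fasta_to_phylip(fasta_text):
    """Convert a FASTA alignment to sequential PHYLIP-style text.
    Minimal sketch: assumes the sequences are pre-aligned
    (equal length) and that names fit in a 12-character field."""
    seqs, name = {}, None
    for line in fasta_text.strip().splitlines():
        line = line.strip()
        if line.startswith(">"):
            name = line[1:].split()[0]   # keep name up to first space
            seqs[name] = []
        elif name:
            seqs[name].append(line)
    seqs = {k: "".join(v) for k, v in seqs.items()}
    lengths = {len(s) for s in seqs.values()}
    if len(lengths) != 1:
        raise ValueError("sequences are not aligned (unequal lengths)")
    header = f" {len(seqs)} {lengths.pop()}"   # taxa count, site count
    body = "\n".join(f"{k:<12}{s}" for k, s in seqs.items())
    return header + "\n" + body
```

Every such pairwise conversion needs its own quirks handled (name-length limits, interleaving, symbol sets), which is exactly why a dedicated converter saves effort.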

16.
The results of a meta-analysis conducted on organic photovoltaics (OPV) lifetime data reported in the literature are presented through the compilation of an extensive OPV lifetime database based on a large number of articles, followed by analysis of the large body of data. We chart the full progress of reported OPV lifetimes. Furthermore, a generic lifetime marker has been defined, which helps with gauging and comparing the performance of different architectures and materials from the perspective of device stability. Based on the analysis, conclusions are drawn on the bottlenecks for stability of device configurations and packaging techniques, as well as the current level of OPV lifetimes reported under different aging conditions. The work is summarized by discussing the development of a tool for OPV lifetime prediction and the development of more stable technologies. An online platform is introduced that will aid the process of generating statistical data on OPV lifetimes and further refinement of the lifetime prediction tool.

17.

Background

The rapid accumulation of whole-genome data has renewed interest in the study of using gene-order data for phylogenetic analyses and ancestral reconstruction. Current software and web servers typically do not support duplication and loss events along with rearrangements.

Results

MLGO (Maximum Likelihood for Gene-Order Analysis) is a web tool for the reconstruction of phylogeny and/or ancestral genomes from gene-order data. MLGO is based on likelihood computation and shows advantages over existing methods in terms of accuracy, scalability and flexibility.

Conclusions

To the best of our knowledge, it is the first web tool for analysis of large-scale genomic changes including not only rearrangements but also gene insertions, deletions and duplications. The web tool is available from http://www.geneorder.org/server.php.

18.
ArrayCyGHt is a web-based tool for the analysis and visualization of microarray comparative genomic hybridization (array-CGH) data. The full process of array-CGH data analysis, from normalization of raw data to the final visualization of copy number gain or loss, can be carried out on the arrayCyGHt system without the use of any further software. ArrayCyGHt therefore provides an easy and fast tool for the analysis of copy number aberrations in any kind of data format. AVAILABILITY: ArrayCyGHt can be accessed at http://genomics.catholic.ac.kr/arrayCGH/

19.
Eco-efficiency analysis by BASF: the method
Intention, Goal, Scope, Background. BASF has developed the tool of eco-efficiency analysis to address not only strategic issues but also issues posed by the marketplace, politics and research. The goal was a tool for decision-making processes that is useful for many applications in chemistry and other industries.
Objectives. The objective was a common tool that is simple for LCA experts to use and understandable to people without experience in the field; the results should be presented so that even complex studies are comprehensible at a glance.
Methods. The method complies with the rules of ISO 14040 ff. Beyond these life-cycle aspects, cost calculations are added and summarized together with the ecological results to establish an eco-efficiency portfolio.
Results and Discussion. The results of the studies are presented in a simple form, the eco-efficiency portfolio, in which the ecological data are aggregated as described in this paper. The weighting factors used in the method were shown to have a negligible impact on the results; in most cases, the input data have the most important impact on the results of a study.
Conclusions. The newly developed eco-efficiency analysis is a tool applicable to many problems in decision-making processes. It compares different alternatives for a defined customer benefit over the whole life cycle.
Recommendations and Outlook. The new method can be helpful in many areas of product or process evaluation. It can be used in research and development as well as in the optimization of customer processes and products, and serves as an analytical tool for arriving at more sustainable processes and products in the future.

20.
Geometric morphometric methods constitute a powerful and precise tool for the quantification of morphological differences. The use of geometric morphometrics in palaeontology is very often limited by missing data. Landmark-based shape analysis methods are very sensitive to missing data but until now have not been adapted to this kind of dataset. To analyze the prospective utility of this method for fossil taxa, we propose a model based on prosimian cranial morphology in which we test two methods of missing-data reconstruction. These consist of deleting entries from a complete dataset (in increments of five percent) and estimating the missing values using two multivariate methods. The estimates were found to constitute, to a certain extent, a useful tool for the analysis of partial datasets. These results are promising for future studies of morphological variation in fossil taxa.
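Estimating missing data from the variables that are present can be illustrated with the simplest member of that family, bivariate regression imputation; this sketch is invented for illustration and is far simpler than the multivariate methods actually tested in the study:

```python
def regression_impute(x, y):
    """Estimate missing entries of y (marked None) from x by a
    least-squares line fitted on the complete (x, y) pairs --
    regression imputation in its simplest bivariate form."""
    pairs = [(a, b) for a, b in zip(x, y) if b is not None]
    n = len(pairs)
    mx = sum(a for a, _ in pairs) / n
    my = sum(b for _, b in pairs) / n
    sxx = sum((a - mx) ** 2 for a, _ in pairs)
    sxy = sum((a - mx) * (b - my) for a, b in pairs)
    slope = sxy / sxx
    # keep observed values; predict the missing ones from the fit
    return [b if b is not None else my + slope * (a - mx)
            for a, b in zip(x, y)]
```

Deleting known values, imputing them, and comparing the estimates against the originals, as done at five-percent increments in the study, is precisely how the reliability of such estimators is gauged.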
