Similar Articles
Found 10 similar articles (search time: 140 ms)
1.
The issue of CO2 emissions has become a major concern for global economies because of its environmental effects and its impact on climate change. To mitigate these harmful effects, CO2 emissions must be reduced drastically and quickly. In this paper we focus on alternative linkage methodologies for measuring CO2 emissions, which capture the linkages among the productive sectors of an economy. Methods for measuring inter-sectoral carbon linkages fall into two main categories: (a) traditional backward and forward linkages and (b) the hypothetical extraction method (HEM). HEM hypothetically extracts a sector from an economic system and examines the influence of this extraction on the remaining sectors. In this study we extend the environmentally extended input–output model to measure the CO2 emission linkages among the productive sectors in Italy using 2011 data. Using HEM, backward and forward linkage emissions are calculated to characterize the behavior of these sectors. The results enable us to formulate hypotheses about the direction and strength of the relationships between the various linkages, and indicate which key CO2-emitting sectors are most similar and which are most dissimilar. According to the size of the various linkage measures, all sectors of the economy can be grouped into four categories. These measures allow us to identify the sectors that deserve the most consideration when formulating mitigation policies.
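The hypothetical extraction method lends itself to a compact sketch: extract sector j by zeroing its row and column in the technical coefficients matrix (and its final demand), recompute the Leontief solution, and take the drop in economy-wide emissions as that sector's total CO2 linkage. The 3-sector matrix, final demand, and emission intensities below are illustrative placeholders, not the Italian 2011 data.

```python
import numpy as np

# Illustrative 3-sector economy (NOT the paper's data):
A = np.array([[0.2, 0.3, 0.1],      # technical coefficients matrix
              [0.1, 0.1, 0.3],
              [0.2, 0.1, 0.2]])
f = np.array([100.0, 150.0, 80.0])  # final demand per sector
c = np.array([0.5, 1.2, 0.3])       # CO2 emission intensity per unit of output

def total_emissions(A, f, c):
    # Leontief quantity model: x = (I - A)^{-1} f, emissions = c . x
    x = np.linalg.solve(np.eye(len(f)) - A, f)
    return c @ x

def hem_linkage(A, f, c, j):
    """Total CO2 linkage of sector j: emissions lost when j is extracted."""
    A_ext = A.copy()
    A_ext[j, :] = 0.0       # sector j sells nothing to other sectors
    A_ext[:, j] = 0.0       # sector j buys nothing from other sectors
    f_ext = f.copy()
    f_ext[j] = 0.0          # and serves no final demand
    return total_emissions(A, f, c) - total_emissions(A_ext, f_ext, c)

linkages = [hem_linkage(A, f, c, j) for j in range(3)]
```

Ranking sectors by these linkage values is what supports the four-category grouping the abstract describes.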

2.
Heterogeneous Disk Arrays (HDAs) allow multiple RAID levels to share their hardware. RAID1 (mirrored disks) and RAID5 (distributed parity arrays) are the two RAID levels considered in this study. Both are single disk failure tolerant (1DFT), but they differ significantly in their efficiency in processing database workloads. The goal of the study is to maximize the number of Virtual Array (VA) allocations in an HDA. We develop an analysis to estimate the load per VA based on a few parameters: the fraction of accesses to small versus large blocks and the fraction of updates versus reads. A VA is allocated at the RAID level that minimizes the anticipated load given these input parameters. Operation in normal and degraded mode is considered for comparison purposes, but allocations are in fact carried out using the higher load in degraded mode, to ensure that single disk failures do not result in overload. We report on parametric studies to gain insight into the circumstances leading to a RAID1 or RAID5 classification. An allocation experiment with a synthetic workload demonstrates the superiority of HDA over purely RAID1 or RAID5 disk arrays. The analysis can be extended to 2DFT arrays, namely RAID6 versus 3-way replication.
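The allocation rule — estimate each VA's degraded-mode load under both RAID levels from the small/large and update/read fractions, then allocate at the cheaper level — can be sketched as below. The per-access cost weights are hypothetical simplifications (e.g. 4 accesses for a RAID5 small-write read-modify-write), not the paper's analytic model.

```python
def load_per_io(raid, small_frac, update_frac, degraded=False, n_disks=5):
    """Expected disk accesses per logical I/O (simplified, hypothetical weights)."""
    costs = {
        # (small read, small write, large read, large write) per block
        "RAID1": (1.0, 2.0, 1.0, 2.0),                   # writes hit both mirrors
        "RAID5": (1.0, 4.0, 1.0, 1.0 + 1.0 / n_disks),   # RMW vs. full-stripe write
    }
    sr, sw, lr, lw = costs[raid]
    if degraded and raid == "RAID5":
        # reads touching the failed disk require stripe reconstruction
        sr += (n_disks - 1) / n_disks
        lr += 1.0
    read = small_frac * sr + (1 - small_frac) * lr
    write = small_frac * sw + (1 - small_frac) * lw
    return (1 - update_frac) * read + update_frac * write

def allocate(small_frac, update_frac):
    # as in the paper, allocate using the higher degraded-mode load
    return min(("RAID1", "RAID5"),
               key=lambda r: load_per_io(r, small_frac, update_frac, degraded=True))
```

Under this toy model, `allocate(0.9, 0.6)` (small-block, update-heavy) picks RAID1, while `allocate(0.0, 0.9)` (large sequential writes) picks RAID5 — the qualitative split the parametric studies explore.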

3.
I/O-intensive applications pose great challenges to computational scientists. A major problem with these applications is that, in a conventional computing environment, users must sacrifice performance requirements in order to satisfy storage capacity requirements. Even when state-of-the-art I/O optimizations are employed, further performance improvement is impeded by the physical nature of the storage media. In this paper, we present a distributed multi-storage resource architecture that satisfies both performance and capacity requirements by employing multiple storage resources. Compared to a traditional single storage resource architecture, our architecture provides a more flexible and reliable computing environment. It opens new opportunities for high-performance computing while inheriting the state-of-the-art I/O optimization approaches that have already been developed. It gives application users high-performance storage access even when no single large local storage archive is available to them. We also develop an Application Programming Interface (API) that provides transparent management of, and access to, the various storage resources in our computing environment. Since I/O usually dominates the performance of I/O-intensive applications, we establish an I/O performance prediction mechanism, consisting of a performance database and a prediction algorithm, to help users better evaluate and schedule their applications. A tool is also developed to automatically generate the performance data stored in the database. The experiments show that our multi-storage resource architecture is a promising platform for high-performance distributed computing.
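The prediction mechanism pairs a database of past measurements with a simple cost model. A minimal sketch, assuming a latency-plus-bandwidth model and hypothetical resource names and numbers (the paper's actual database schema and algorithm are not reproduced here):

```python
# Performance database: resource -> (startup latency in s, bandwidth in MB/s),
# as would be populated automatically by the measurement tool.
perf_db = {
    "local-disk":   (0.005, 60.0),
    "remote-pvfs":  (0.020, 180.0),
    "tape-archive": (4.000, 25.0),
}

def predict_io_time(resource, size_mb):
    """Predicted transfer time: fixed startup cost plus size over bandwidth."""
    latency, bandwidth = perf_db[resource]
    return latency + size_mb / bandwidth

def best_resource(size_mb):
    """Schedule the request onto the resource with the lowest predicted time."""
    return min(perf_db, key=lambda r: predict_io_time(r, size_mb))
```

With these numbers, small requests favor the low-latency local disk while large transfers favor the higher-bandwidth remote store — exactly the kind of trade-off the prediction mechanism lets a scheduler evaluate.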

4.
Load balancing in workstation-based cluster systems has been investigated extensively, mainly focusing on the effective use of global CPU and memory resources. However, if a significant portion of the applications running in the system is I/O-intensive, traditional load-balancing policies can cause system performance to decrease substantially. In this paper, two I/O-aware load-balancing schemes, referred to as IOCM and WAL-PM, are presented to improve the overall performance of a cluster system under a general and practical workload that includes I/O activity. The proposed schemes dynamically detect I/O load imbalance among the nodes of a cluster and determine whether to migrate some I/O load from overloaded nodes to less-loaded or under-loaded nodes. In WAL-PM, currently running jobs are eligible for migration only if overall performance improves. Besides balancing I/O load, the schemes judiciously take into account both CPU and memory load sharing in the system, thereby maintaining the same level of performance as existing schemes when the I/O load is low or well balanced. Extensive trace-driven simulations of both synthetic and real I/O-intensive applications show that: (1) compared with existing schemes that consider only CPU and memory, the proposed schemes improve mean slowdown by up to a factor of 20; (2) compared with existing approaches that consider only I/O with non-preemptive job migration, the proposed schemes improve mean slowdown by up to a factor of 10; and (3) under CPU- and memory-intensive workloads, our schemes improve performance over existing approaches that consider only I/O by up to 47.5%. Xiao Qin received the BSc and MSc degrees in computer science from Huazhong University of Science and Technology in 1992 and 1999, respectively. He received the PhD degree in computer science from the University of Nebraska-Lincoln in 2004. 
Currently, he is an assistant professor in the Department of Computer Science at the New Mexico Institute of Mining and Technology. His research interests include parallel and distributed systems, storage systems, real-time computing, performance evaluation, and fault tolerance. He has served on the program committees of international conferences such as CLUSTER, ICPP, and IPCCC. During 2000–2001, he was on the editorial board of IEEE Distributed Systems Online. He is a member of the IEEE. Hong Jiang received the B.Sc. degree in Computer Engineering in 1982 from Huazhong University of Science and Technology, Wuhan, China; the M.A.Sc. degree in Computer Engineering in 1987 from the University of Toronto, Toronto, Canada; and the PhD degree in Computer Science in 1991 from Texas A&M University, College Station, Texas, USA. Since August 1991 he has been with the University of Nebraska-Lincoln, Lincoln, Nebraska, USA, where he is Associate Professor and Vice Chair of the Department of Computer Science and Engineering. His present research interests are computer architecture, parallel/distributed computing, computer storage systems and parallel I/O, performance evaluation, middleware, networking, and computational engineering. He has over 70 publications in major journals and international conferences in these areas, and his research has been supported by the NSF, DOD, and the State of Nebraska. Dr. Jiang is a member of the ACM, the IEEE Computer Society, ACM SIGARCH, and ACM SIGCOMM. Yifeng Zhu received the B.E. degree in Electrical Engineering from Huazhong University of Science and Technology in 1998 and the M.S. degree in computer science from the University of Nebraska-Lincoln (UNL) in 2002. He is currently working towards the Ph.D. degree in the Department of Computer Science and Engineering at UNL. His main research interests are parallel I/O, networked storage, parallel scheduling, and cluster computing. He is a student member of the IEEE. 
David Swanson received a Ph.D. in physical (computational) chemistry from the University of Nebraska-Lincoln (UNL) in 1995, after which he worked as an NSF-NATO postdoctoral fellow at the Technical University of Wroclaw, Poland, in 1996, and subsequently as a National Research Council Research Associate at the Naval Research Laboratory in Washington, DC, from 1997 to 1998. In early 1999 he returned to UNL, where he has coordinated the Research Computing Facility and currently serves as an Assistant Research Professor in the Department of Computer Science and Engineering. The Office of Naval Research, the National Science Foundation, and the State of Nebraska have supported his research in areas such as large-scale parallel simulation and distributed systems.
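The migration decision at the core of such I/O-aware schemes — detect imbalance, then migrate only if the gain outweighs the cost — can be sketched as follows. Node names, load numbers, and the cost model are illustrative, not the actual IOCM/WAL-PM policies.

```python
def io_imbalanced(io_loads, threshold=1.5):
    """Flag imbalance when the busiest node exceeds threshold x the mean I/O load."""
    mean = sum(io_loads.values()) / len(io_loads)
    return max(io_loads.values()) > threshold * mean

def pick_migration(io_loads, migration_cost=0.2):
    """Return (src, dst) if shifting one unit of I/O load pays off, else None."""
    src = max(io_loads, key=io_loads.get)   # most overloaded node
    dst = min(io_loads, key=io_loads.get)   # least loaded node
    # migrate only if the load gap saved clearly outweighs the migration cost
    if io_loads[src] - io_loads[dst] > 2.0 + migration_cost:
        return src, dst
    return None

loads = {"n0": 9.0, "n1": 2.0, "n2": 3.0}
assert io_imbalanced(loads)
assert pick_migration(loads) == ("n0", "n1")
```

A fuller model would fold CPU and memory load into the same decision, as the schemes above do, and for WAL-PM would also check that the candidate job's remaining work justifies a preemptive move.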

5.
Saidi  Ahmed  Nouali  Omar  Amira  Abdelouahab 《Cluster computing》2022,25(1):167-185

Attribute-based encryption (ABE) is an access control mechanism that enables efficient data sharing among dynamic groups of users by setting up access structures indicating who can access what. However, ABE suffers from expensive computation and privacy issues in resource-constrained environments such as IoT devices. In this paper, we present SHARE-ABE, a novel collaborative approach for preserving privacy that is built on top of Ciphertext-Policy Attribute-Based Encryption (CP-ABE). Our approach uses Fog computing to outsource the most laborious decryption operations to Fog nodes, which collaborate to partially decrypt the data using an original and efficient chained architecture. Additionally, our approach preserves the privacy of the access policy by introducing false attributes. Furthermore, we introduce a new construction of a collaboration attribute that allows users within the same group to combine their attributes while still satisfying the access policy. Experiments and analyses of the security properties demonstrate that the proposed scheme is secure and efficient, especially for resource-constrained IoT devices.


6.
MOSIX is a cluster management system that supports preemptive process migration. This paper presents the MOSIX Direct File System Access (DFSA) provision, which can improve the performance of cluster file systems by allowing a migrated process to access files directly at its current location. Combined with an appropriate file system, this capability can substantially increase I/O performance and reduce network congestion by migrating an I/O-intensive process to the file server, rather than bringing the file's data to the process in the traditional way. DFSA is suitable for clusters that manage a pool of shared disks among multiple machines. With DFSA, parallel processes can be migrated from a client node to file servers for parallel access to different files. Any consistent file system can be adapted to work with DFSA. To test its performance, we developed the MOSIX File System (MFS), which allows consistent parallel operations on different files. The paper describes DFSA and presents the performance of MFS with and without DFSA.
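A back-of-envelope model shows why moving the process beats moving the data when the process image is much smaller than the file it scans. All costs below are hypothetical round numbers, not measurements from MOSIX/MFS.

```python
NET_MBPS = 12.5    # network throughput, MB/s (e.g. ~100 Mb/s link) -- assumed
DISK_MBPS = 50.0   # disk throughput at the file server, MB/s -- assumed

def cost_ship_data(file_mb):
    """Traditional path: read the file at the server, send it over the network."""
    return file_mb / DISK_MBPS + file_mb / NET_MBPS

def cost_migrate_process(image_mb, file_mb):
    """DFSA-style path: move the process image once, then read the file locally."""
    return image_mb / NET_MBPS + file_mb / DISK_MBPS

# A 4 MB process scanning a 1 GB file: migration avoids pushing 1 GB
# through the network, so it is far cheaper under this model.
assert cost_migrate_process(4, 1024) < cost_ship_data(1024)
```

The crossover depends only on the ratio of process-image size to file size, which is why DFSA targets I/O-intensive processes specifically.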

7.
Reynolds KA  McLaughlin RN  Ranganathan R 《Cell》2011,147(7):1564-1575
Recent work indicates a general architecture for proteins in which sparse networks of physically contiguous and coevolving amino acids underlie basic aspects of structure and function. These networks, termed sectors, are spatially organized such that active sites are linked to many surface sites distributed throughout the structure. Using the metabolic enzyme dihydrofolate reductase as a model system, we show that: (1) the sector is strongly correlated to a network of residues undergoing millisecond conformational fluctuations associated with enzyme catalysis, and (2) sector-connected surface sites are statistically preferred locations for the emergence of allosteric control in vivo. Thus, sectors represent an evolutionarily conserved "wiring" mechanism that can enable perturbations at specific surface positions to rapidly initiate conformational control over protein function. These findings suggest that sectors enable the evolution of intermolecular communication and regulation.

8.
Shifting from the analysis of single nucleotide polymorphisms to the reconstruction of selected haplotypes greatly facilitates the interpretation of evolve-and-resequence (E&R) experiments. Merging highly correlated hitchhiker SNPs into haplotype blocks reduces thousands of candidates to a few selected regions. Current methods of haplotype reconstruction from Pool-seq data require a variety of data-specific parameters that are typically defined ad hoc and need haplotype sequences for validation. Here, we introduce haplovalidate, a tool that detects selected haplotypes in Pool-seq time series data without the need for sequenced haplotypes. Haplovalidate makes data-driven choices of the two key parameters of the clustering procedure: the minimum correlation between SNPs constituting a cluster and the window size. Applied to simulated E&R data, haplovalidate reliably detects selected haplotype blocks with low false discovery rates. Importantly, our analyses identified a limitation of the haplotype-block-based approach in describing the genomic architecture of adaptation: a substantial fraction of haplotypes contained multiple selection targets. These blocks were treated as a single region of selection, which led to an underestimate of the number of selection targets. We demonstrate that separately analysing earlier time points can significantly improve the separation of selection targets into individual haplotype blocks. We conclude that the analysis of selected haplotype blocks has great potential for characterizing the adaptive architecture with E&R experiments.
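The core clustering idea — merge SNPs whose allele-frequency trajectories are highly correlated across time points into one candidate block — can be illustrated with a toy example. The trajectories below are synthetic, and the fixed correlation cutoff stands in for haplovalidate's data-driven choice.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 6)                       # 6 sampled time points
rising = 0.1 + 0.7 * t                         # selected haplotype's frequency
traj = np.vstack([
    rising + rng.normal(0, 0.01, 6),           # hitchhiker SNP 1
    rising + rng.normal(0, 0.01, 6),           # hitchhiker SNP 2
    rng.uniform(0.3, 0.5, 6),                  # unlinked, neutral SNP
])

def cluster_snps(traj, min_corr=0.9):
    """Greedy single-linkage clustering on pairwise trajectory correlation."""
    n = traj.shape[0]
    labels = list(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if np.corrcoef(traj[i], traj[j])[0, 1] >= min_corr:
                merged = labels[j]             # merge j's cluster into i's
                labels = [labels[i] if l == merged else l for l in labels]
    return labels

labels = cluster_snps(traj)
```

The two hitchhikers end up in the same cluster, collapsing correlated candidates into one block — and, as the abstract notes, two genuinely distinct selection targets rising in parallel would be collapsed the same way.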

9.
Allele-specific mismatch amplification mutation assays (MAMA) of anatomically distinct sectors of the upper bronchial tracts of nine nonsmokers revealed many numerically dispersed clusters of the point mutations C742T, G746T, and G747T of the TP53 gene, G35T of the KRAS gene, and G508A of the HPRT1 gene. Assays of these five mutations in six smokers yielded quantitatively similar results. One hundred and eighty-four micro-anatomical sectors of 0.5–6×10^6 tracheal-bronchial epithelial cells represented, in toto, the equivalent of approximately 1.7 human smokers' bronchial trees to the fifth bifurcation. Statistically significant mutant copy numbers above the 95% upper confidence limits of historical background controls were found in 198 of 425 sector assays. No significant differences (P=0.1) in negative sector fractions, mutant fractions, distributions of mutant cluster size, or anatomical positions were observed with regard to smoking status, gender, or age (38–76 years). Based on the modal cluster size of mitochondrial point mutants, the size of the adult bronchial epithelial maintenance turnover unit was estimated to be about 32 cells. When data from all 15 lungs were combined, the log2 of nuclear mutant cluster size plotted against the log2 of the number of clusters of a given size displayed a slope of approximately 1.1 over a range of cluster sizes from approximately 2^6 to 2^15 mutant copies. A parsimonious interpretation of these nuclear data and of previously reported data for lung epithelial mitochondrial point mutant clusters is that they arose from mutations in stem cells at a high but constant rate per stem cell doubling during at least ten stem cell doublings of the later fetal-juvenile period. The upper-to-lower decile range of summed point mutant fractions among lungs was about 7.5-fold, suggesting an important source of stratification in the population with regard to the risk of tumor initiation.
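The log2-log2 fit described above is a straight-line regression of cluster counts on cluster size. A sketch with synthetic counts generated to follow a power law of magnitude 1.1 (counts fall as size grows, so the fitted slope comes out negative here); these are not the paper's measurements.

```python
import numpy as np

sizes = 2.0 ** np.arange(6, 16)        # cluster sizes 2^6 .. 2^15 mutant copies
counts = 1e6 * sizes ** -1.1           # synthetic power-law cluster-size spectrum

# least-squares line through (log2 size, log2 count); slope ~ -1.1 by construction
slope, intercept = np.polyfit(np.log2(sizes), np.log2(counts), 1)
```

A constant mutation rate per stem-cell doubling over successive doublings is one generative model that produces such a power-law cluster-size spectrum, which is the interpretation the abstract offers.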

10.
We present gmblock, a block-level storage sharing system over Myrinet that uses an optimized I/O path to transfer data directly between the storage medium and the network, bypassing the host CPU and the main memory bus of the storage server. It is device-driver independent and retains the protection and isolation features of the OS. We evaluate the performance of a prototype gmblock server and find that: (a) the proposed techniques eliminate memory and peripheral bus contention, increasing remote I/O bandwidth significantly, on the order of 20–200% compared to an RDMA-based approach; (b) the impact of remote I/O on local computation becomes negligible; and (c) the performance characteristics of RAID storage combined with limited NIC resources reduce performance. We introduce synchronized send operations to improve the degree of disk-to-network I/O overlapping. We deploy the OCFS2 shared-disk filesystem over gmblock and show gains for various application benchmarks, provided I/O scheduling can eliminate the disk bottleneck due to concurrent access.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号