This paper aims to demonstrate how LCA can be improved by the use of linear programming (LP) (i) to determine the optimal choice between new technologies, (ii) to identify the optimal region for supplying the feedstock, and (iii) to deal with multifunctional processes without specifying a certain main product. Furthermore, the contribution of LP in the context of consequential LCA and LCC is illustrated.
Methods: We create a mixed integer linear program (MILP) for the environmental and economic assessment of new technologies. The model is applied to analyze two residual beech wood-based biorefinery concepts in Germany. The principle of the program is to find a scaling vector that minimizes the life cycle impact indicator results of the system, reflecting the optimal consequences for the system under study. We further transform the original linear program to extend the assessment by life cycle costing (LCC). Two multi-objective programming methods are used: weighted goal programming and the epsilon-constraint method.
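The epsilon-constraint idea mentioned above can be sketched on a toy bi-objective problem: minimize one objective while bounding the other, then sweep the bound to trace a Pareto front. The objectives, grid, and bounds below are invented stand-ins, not the paper's actual MILP.

```python
def epsilon_constraint(f1, f2, candidates, eps):
    """Minimize f1 over the candidates whose f2 value does not exceed eps."""
    feasible = [x for x in candidates if f2(x) <= eps]
    return min(feasible, key=f1) if feasible else None

# Invented stand-ins for an environmental impact score and a life cycle cost.
impact = lambda x: x
cost = lambda x: (1 - x) ** 2

grid = [i / 100 for i in range(101)]  # discretized decision variable

# Sweeping the cost bound epsilon traces an approximate Pareto front.
pareto = []
for eps in [0.04, 0.16, 0.36, 0.64, 1.0]:
    x = epsilon_constraint(impact, cost, grid, eps)
    pareto.append((round(x, 2), round(cost(x), 4)))
```

Each sweep point trades a looser cost bound for a lower impact value, which is the trade-off structure the multi-objective extension exploits.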
Results and discussion: The consequential case studies demonstrate that optimal locations for newly developed technologies can be determined. A large number of potential system modifications can be studied simultaneously without matrix inversion. The criteria for optimal choices are represented by the objective functions and additional constraints such as the available feedstock in a region. By combining LCA and LCC targets within a multi-objective programming approach, environmental and economic trade-offs in consequential decision-making can be addressed.
Conclusions: This article shows that linear programming can extend standard LCA in the field of technological choices. Additional consequential research questions can be addressed, such as determining the optimal number of new production plants and the optimal regions for supplying the resources. Extending the program with additional profit requirements (LCC) into a goal program and a Pareto optimization problem has been identified as a promising step toward a comprehensive multi-objective LCSA.
Haplotypes, the ordered lists of single nucleotide variations that distinguish chromosomal sequences from their homologous pairs, may reveal an individual’s susceptibility to hereditary and complex diseases and affect how our bodies respond to therapeutic drugs. Reconstructing haplotypes of an individual from short sequencing reads is an NP-hard problem that becomes even more challenging in the case of polyploids. While increasing lengths of sequencing reads and insert sizes helps improve accuracy of reconstruction, it also exacerbates computational complexity of the haplotype assembly task. This has motivated the pursuit of algorithmic frameworks capable of accurate yet efficient assembly of haplotypes from high-throughput sequencing data.
Results: We propose a novel graphical representation of sequencing reads and pose the haplotype assembly problem as an instance of community detection on a spatial random graph. To this end, we construct a graph where each read is a node with an unknown community label associating the read with the haplotype it samples. Haplotype reconstruction can then be thought of as a two-step procedure: first, one recovers the community labels on the nodes (i.e., the reads), and then uses the estimated labels to assemble the haplotypes. Based on this observation, we propose ComHapDet, a novel assembly algorithm for diploid and polyploid haplotypes that handles both biallelic and multi-allelic variants.
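The two-step procedure described above (label the reads, then vote per site) can be sketched for a toy diploid, biallelic case. The greedy clustering below is a deliberately simplified stand-in for ComHapDet's community detection; the reads and haplotypes are invented.

```python
def agreement(r1, r2):
    """+1 per overlapping site that matches, -1 per mismatch."""
    s = 0
    for pos, allele in r1.items():
        if pos in r2:
            s += 1 if r2[pos] == allele else -1
    return s

def cluster_reads(reads):
    """Greedy 2-clustering: assign each read to the cluster it agrees with most."""
    clusters = [[reads[0]], []]
    for read in reads[1:]:
        scores = [sum(agreement(read, r) for r in c) for c in clusters]
        clusters[0 if scores[0] >= scores[1] else 1].append(read)
    return clusters

def assemble(cluster, n_sites):
    """Majority vote per site within a cluster."""
    hap = []
    for pos in range(n_sites):
        votes = [r[pos] for r in cluster if pos in r]
        hap.append(max(set(votes), key=votes.count) if votes else None)
    return hap

# Reads as {site: allele} fragments sampling two haplotypes 0101 and 1010.
reads = [
    {0: 0, 1: 1}, {1: 1, 2: 0}, {2: 0, 3: 1},   # samples of haplotype A
    {0: 1, 1: 0}, {1: 0, 2: 1}, {2: 1, 3: 0},   # samples of haplotype B
]
c0, c1 = cluster_reads(reads)
h0, h1 = assemble(c0, 4), assemble(c1, 4)
```

On this noiseless toy input the two clusters recover the two haplotypes exactly; real data adds sequencing errors and higher ploidies, which is where the graph model earns its keep.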
Conclusions: The performance of the proposed algorithm is benchmarked on simulated as well as experimental data obtained by sequencing chromosome 5 of tetraploid biallelic Solanum tuberosum (potato). The results demonstrate the efficacy of the proposed method and show that it compares favorably with existing techniques.
Land use optimization as a resource allocation problem can be defined as the process of assigning different land uses to a region. Sustainable development also involves the exploitation of environmental resources, investment orientation, technology development, and industrial changes in a coordinated form. This paper studies the multi-objective sustainable land use planning problem and proposes an integrated framework, including simulation, forecasting, and optimization approaches for this problem. Land use optimization, a multifaceted process, requires complex decisions, including selection of land uses, forecasting land use allocation percentages, and assigning locations to land uses. The land use allocation percentage in the selected horizons is simulated and predicted by designing a System Dynamics (SD) model based on socio-economic variables. Furthermore, land use assignment is accomplished with a multi-objective integer programming model that is solved using the augmented ε-constraint and non-dominated sorting genetic algorithm II (NSGA-II) methods. According to the results of the SD model, land use changes depend on the population growth rate and labor productivity variables. Among the possible scenarios, a scenario focusing more on sustainable planning is chosen and the forecasting results of this scenario are used for optimal land use allocation. The computational results show that the augmented ε-constraint method cannot solve this problem even for medium-sized instances. The NSGA-II method not only solves large instances in reasonable time, but also generates good-quality solutions. NSGA-II showed better performance in metrics including the number of non-dominated Pareto solutions (NNPS), mean ideal distance (MID), and dispersion metric (DM). The integrated framework is implemented to allocate four types of land uses, consisting of residential, commercial, industrial, and agricultural, to a given region with 900 cells.
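At the heart of NSGA-II is non-dominated sorting: partitioning candidate solutions into successive Pareto fronts. A minimal sketch on invented (cost, impact) pairs, both minimized, might look like this; it is the sorting step only, not the full algorithm with crowding distance and genetic operators.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_fronts(points):
    """Partition points into successive Pareto fronts (front 0 is best)."""
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Toy land-use solutions scored on (cost, environmental impact).
solutions = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
fronts = non_dominated_fronts(solutions)
```

Solutions in front 0 are the non-dominated trade-offs a planner would choose among; later fronts are successively worse.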
Auction designs have recently been adopted for static and dynamic resource provisioning in IaaS clouds, such as Microsoft Azure and Amazon EC2. However, the existing mechanisms are mostly restricted to simple auctions, single objectives, offline settings, one-sided interactions either among cloud users or cloud service providers (CSPs), and possible misreports of cloud users' private information. This paper proposes a more realistic scenario of online auctioning for IaaS clouds, with the unique characteristics of elasticity for time-varying arrival of cloud user requests under time-based server maintenance in cloud data centers. We propose an online truthful double auction technique for balancing the multi-objective trade-offs between energy, revenue, and performance in IaaS clouds, consisting of a weighted bipartite matching based winning-bid determination algorithm for resource allocation and a Vickrey–Clarke–Groves (VCG) driven algorithm for payment calculation of winning bids. Through rigorous theoretical analysis and extensive trace-driven simulation studies exploiting Google cluster workload traces, we demonstrate that our mechanism significantly improves performance while guaranteeing truthfulness, economic efficiency, and individual rationality, supporting heterogeneity, and running in polynomial time.
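The winner-determination side of a double auction can be sketched with the simplest textbook rule: sort buyers by descending bid, sellers by ascending ask, and match while the bid covers the ask. This is an invented toy, not the paper's weighted bipartite matching, and a truthful payment rule (VCG or similar) would be layered on top and is not shown.

```python
def match_double_auction(bids, asks):
    """Greedy winner determination: match highest bids with lowest asks."""
    buyers = sorted(bids, reverse=True)
    sellers = sorted(asks)
    trades = []
    for bid, ask in zip(buyers, sellers):
        if bid >= ask:
            trades.append((bid, ask))
        else:
            break  # once a bid falls below its matched ask, no later pair trades
    return trades

bids = [10, 7, 5, 3]   # cloud users' bids per VM slot (invented)
asks = [2, 4, 6, 9]    # providers' asks (invented)
trades = match_double_auction(bids, asks)
```

Here two trades clear; the (5, 6) pair fails because the bid does not cover the ask, which caps the traded quantity at the market-clearing point.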
The Artificial Bee Colony (ABC) algorithm is a nature-inspired algorithm that has proven effective for optimization problems. However, the ABC algorithm shows some imbalance between exploration and exploitation. In order to improve exploitation and enhance convergence speed, a multi-population ABC algorithm based on global and local optima (MPGABC) is proposed in this paper. First, in MPGABC, the initial population is generated using both chaotic systems and opposition-based learning methods. The colony in MPGABC is divided into several sub-populations to increase diversity. Moreover, the solution search mechanism is modified by introducing global and local optima into the solution search equations of both employed and onlooker bees. The scout bees in the proposed algorithm are generated similarly to the initial population. Finally, the proposed algorithm is compared with several state-of-the-art ABC algorithm variants on a set of 13 classical benchmark functions. The experimental results show that MPGABC is competitive with, and often outperforms, the other ABC variants.
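One ingredient named above, opposition-based learning for population initialization, is easy to sketch: generate a random population, mirror each individual across the search bounds, and keep the best half of the union. The bounds, dimension, and benchmark are illustrative choices; the full MPGABC (chaotic maps, sub-populations, modified search equations) is not reproduced.

```python
import random

def opposition_init(pop_size, dim, lo, hi, fitness, seed=0):
    """Opposition-based initialization: keep the fitter of each point and its mirror."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    # The opposite of x is lo + hi - x in each dimension.
    opposites = [[lo + hi - x for x in ind] for ind in pop]
    # Keep the best pop_size individuals from the union (minimization).
    return sorted(pop + opposites, key=fitness)[:pop_size]

sphere = lambda ind: sum(x * x for x in ind)   # classical benchmark function
population = opposition_init(10, 5, -5.0, 5.0, sphere)
```

Evaluating both a point and its opposite roughly doubles the chance that the initial population starts near a basin of attraction, at the cost of extra fitness evaluations.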
相似文献Genetically determined variants of NADH-diaphorase
Summary: Using starch gel electrophoresis, a screening for variants of NADH-diaphorase was carried out in a sample of 725 healthy probands. Two kinds of genetically determined variants were observed: a heterozygous phenotype with greater mobility (DIA 2-1) and a heterozygous phenotype with slower mobility (DIA 3-1). The gene frequencies are estimated at 0.0021 (DIA2) and 0.0007 (DIA3).
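The gene-frequency estimates follow from direct gene counting: each heterozygote carries one copy of the variant allele among 2N alleles in the sample. The carrier counts below (3 DIA 2-1 and 1 DIA 3-1) are back-calculated from the reported frequencies and sample size for illustration; the abstract does not state them.

```python
def allele_frequency(heterozygotes, sample_size):
    """Direct gene counting: each heterozygote carries one variant allele."""
    return heterozygotes / (2 * sample_size)

n = 725  # probands screened
freq_dia2 = allele_frequency(3, n)   # assumed 3 DIA 2-1 carriers
freq_dia3 = allele_frequency(1, n)   # assumed 1 DIA 3-1 carrier
```

Rounding these values reproduces the reported frequencies of 0.0021 and 0.0007, which is consistent with the assumed carrier counts.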
Supported by the Deutsche Forschungsgemeinschaft.
Brugada syndrome (BrS) is a rare hereditary arrhythmia syndrome that increases an individual’s risk of sudden cardiac death (SCD) due to ventricular fibrillation. This disorder is regarded as a notable cause of death in individuals under 40 years of age, responsible for up to 40% of sudden deaths in cases without structural heart disease, and is reported to be endemic in Asian countries. Mutations in SCN5A are found in approximately 30% of patients with Brugada syndrome. This study aimed to investigate mutations in the SCN5A gene in a group of Iranian Brugada syndrome patients. Nine probands (n = 9, male, mean age = 39) diagnosed with Brugada syndrome were enrolled in this study. Exons 2 to 29 were amplified by PCR and subjected to direct sequencing. Eight in silico prediction tools were used to predict the effects of non-synonymous variants. Seven known polymorphisms and 2 previously reported disease-causing mutations, including H558R and G1406R, were found in the studied cases. Twenty novel variants were identified: 15 missense, 2 frameshift, 2 synonymous, and 1 nonsense variant. In silico tools predicted 11 non-synonymous variants to have damaging effects, whereas the frameshift and nonsense variants were considered inherently pathogenic. The novel variants identified in this study, alongside previously reported mutations, are highly likely to be the cause of the Brugada syndrome phenotype observed in the patient group. Further analysis is required to understand the physiological effects caused by these variants.
Inferring gene regulatory networks (GRNs) from gene-expression data remains a fundamental and challenging problem in systems biology. Several existing algorithms formulate GRN inference as a regression problem and obtain the network with an ensemble strategy. Recent studies on data-driven dynamic network construction provide a new perspective on solving the regression problem.
Results: In this study, we propose a data-driven dynamic network construction method to infer gene regulatory networks (D3GRN), which transforms the regulatory relationship of each target gene into a functional decomposition problem and solves each subproblem using the Algorithm for Revealing Network Interactions (ARNI). To remedy the limitation of ARNI in constructing networks solely at the unit level, a bootstrapping and area-based scoring method is used to infer the final network. On the DREAM4 and DREAM5 benchmark datasets, D3GRN performs competitively with state-of-the-art algorithms in terms of AUPR.
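The bootstrap-and-score idea can be sketched independently of ARNI: resample the data with replacement, run an edge-inference routine on each resample, and score each edge by how often it recurs. The inference routine below is a toy covariance check standing in for ARNI, and the data are invented.

```python
import random

def bootstrap_edge_scores(samples, infer_edges, n_boot=50, seed=0):
    """Score each edge by the fraction of bootstrap resamples that recover it."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_boot):
        resample = [rng.choice(samples) for _ in samples]
        for edge in infer_edges(resample):
            counts[edge] = counts.get(edge, 0) + 1
    return {e: c / n_boot for e, c in counts.items()}

def toy_infer(sample):
    """Placeholder inference: report g1 -> g2 when the resample covaries positively."""
    xs = [a for a, _ in sample]
    ys = [b for _, b in sample]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in sample)
    return [("g1", "g2")] if cov > 0 else []

# Perfectly correlated toy (g1, g2) expression pairs.
data = [(1, 1), (2, 2), (3, 3), (4, 4)]
scores = bootstrap_edge_scores(data, toy_infer)
```

Edges that survive most resamples get scores near 1 and are kept in the final network; spurious edges that depend on a few samples are filtered out.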
Conclusions: We have proposed a novel data-driven dynamic network construction method by combining ARNI with a bootstrapping and area-based scoring strategy. The proposed method performs well on the benchmark datasets, contributing a competitive method for inferring gene regulatory networks from a new perspective.
Fog-cloud computing is a promising distributed model for hosting ever-increasing Internet of Things (IoT) applications. IoT applications have different characteristics such as deadline, frequency rate, and input file size. Fog nodes are heterogeneous, resource-limited devices and cannot accommodate all IoT applications. Due to these difficulties, designing an efficient algorithm to deploy a set of IoT applications in a fog-cloud environment is very important. In this paper, a fuzzy approach is developed to classify applications based on their characteristics, and then an efficient heuristic algorithm is proposed to place applications on the virtualized computing resources. The proposed policy aims to provide a high quality of service for IoT users while the profit of fog service providers is maximized by minimizing resource wastage. Extensive simulation experiments are conducted to evaluate the performance of the proposed policy. Results show that the proposed policy outperforms other approaches, improving the average response time by up to 13% and the percentage of deadline-satisfied requests by up to 12%, and reducing resource wastage by up to 26%.
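A fuzzy classification of applications can be sketched with triangular membership functions over one characteristic, deadline. The class names, breakpoints, and single-variable rule below are invented for illustration; the paper's actual membership functions and rule base are not given in the abstract.

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b over support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify(deadline_ms):
    """Assign the placement class with the highest membership degree."""
    memberships = {
        "fog":   triangular(deadline_ms, 0, 50, 200),     # latency-critical
        "cloud": triangular(deadline_ms, 100, 500, 1000), # delay-tolerant
    }
    return max(memberships, key=memberships.get)
```

A tight 30 ms deadline maps to the fog class while a relaxed 600 ms deadline maps to the cloud class; a real system would combine several such variables (frequency rate, input size) through fuzzy rules before placement.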
The Internet of Things (IoT) has introduced new applications and environments. The Smart Home provides new ways of communication and service consumption. In addition, Artificial Intelligence (AI) and deep learning have improved different services and tasks by automating them. In this field, reinforcement learning (RL) provides an unsupervised way to learn from the environment. In this paper, a new intelligent system based on RL and deep learning is proposed for Smart Home environments to guarantee good levels of QoE, focused on multimedia services. This system aims to reduce the impact on user experience when the classifying system achieves low accuracy. The experiments performed show that the proposed deep learning model achieves better accuracy than the KNN algorithm and that the RL system increases the user's QoE by up to 3.8 points on a 10-point scale.
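The RL component of such a system rests on the standard value-update loop, which can be sketched with tabular Q-learning on an invented two-state environment; the paper's actual state, action, and reward design for QoE is not specified in the abstract.

```python
import random

def q_learning(n_states, n_actions, step, episodes=500, alpha=0.5,
               gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with an epsilon-greedy policy."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < eps:
                a = rng.randrange(n_actions)          # explore
            else:
                a = max(range(n_actions), key=lambda i: Q[s][i])  # exploit
            s2, r, done = step(s, a)
            target = r + gamma * max(Q[s2]) * (not done)
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

# Invented environment: in state 0, action 1 ends the episode with reward 1
# (the "QoE-preserving" choice); action 0 stays in state 0 with reward 0.
def step(s, a):
    return (1, 1.0, True) if a == 1 else (0, 0.0, False)

Q = q_learning(2, 2, step)
```

After training, the learned values rank the rewarding action above the neutral one, which is exactly the property a QoE-driven agent needs when choosing how to serve a multimedia stream.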