Similar Documents
20 similar documents found (search time: 31 ms)
1.
Three Strategies to Overcome the Limitations of Life-Cycle Assessment   (total citations: 2; self-citations: 0; citations by others: 2)
Many research efforts aim at an extension of life‐cycle assessment (LCA) in order to increase its spatial or temporal detail or to enlarge its scope. This is an important contribution to industrial ecology as a scientific discipline, but from the application viewpoint other options are available to obtain more detailed information, or to obtain information over a broader range of impacts in a life‐cycle perspective. This article discusses three different strategies to reach these aims: (1) extension of LCA—one consistent model; (2) use of a toolbox—separate models used in combination; and (3) hybrid analysis—combination of models with data flows between them. Extension of LCA offers the most consistent solution. Developments in LCA are moving toward greater spatial detail and temporal resolution and the inclusion of social issues. Creating a supertool with too many data and resource requirements is, however, a risk. Moreover, a number of social issues are not easily modeled in relation to a functional unit. The development of a toolbox offers the most flexibility regarding spatial and temporal information and regarding the inclusion of other types of impacts. The rigid structure of LCA no longer sets limits; every aspect can be dealt with according to the logic of the relevant tool. The results lack consistency, however, preventing further formal integration. The third strategy, hybrid analysis, takes up an intermediate position between the other two. This strategy is more flexible than extension of LCA and more consistent than a toolbox. Hybrid analysis thus has the potential to combine the strong points of the other two strategies. It offers an interesting path for further discovery, broader than the already well‐known combination of process‐LCA and input‐output‐LCA. We present a number of examples of hybrid analysis to illustrate the potentials of this strategy. Developments in the field of a toolbox or of hybrid analysis may become fully consistent with LCA, and then in fact become part of the first solution, extension of LCA.
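For readers who want a concrete picture of the third strategy, the sketch below shows one minimal form of hybrid analysis: a process-based foreground inventory linked to an input-output background through the Leontief inverse. All matrices and coefficients are invented placeholders, not data from the article.

```python
# Minimal, illustrative hybrid calculation: process foreground + IO background.
import numpy as np

# Foreground: direct emissions per functional unit from the process model (kg CO2-eq).
process_emissions = 12.0

# Background: purchases from two IO sectors per functional unit (monetary units).
purchases = np.array([3.0, 1.5])

# IO technology matrix A and sectoral emission intensities b (kg CO2-eq per unit output).
A = np.array([[0.10, 0.05],
              [0.20, 0.15]])
b = np.array([0.8, 2.1])

# Leontief inverse gives the total (direct + indirect) output needed to supply the purchases.
total_output = np.linalg.solve(np.eye(2) - A, purchases)
background_emissions = b @ total_output

print("hybrid total:", process_emissions + background_emissions)
```

The same structure extends directly to full process matrices and to many more background sectors.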

2.
Hybrid Framework for Managing Uncertainty in Life Cycle Inventories   (total citations: 1; self-citations: 0; citations by others: 1)
Life cycle assessment (LCA) is increasingly being used to inform decisions related to environmental technologies and policies, such as carbon footprinting and labeling, national emission inventories, and appliance standards. However, LCA studies of the same product or service often yield very different results, affecting the perception of LCA as a reliable decision tool. This does not imply that LCA is intrinsically unreliable; we argue instead that future development of LCA requires that much more attention be paid to assessing and managing uncertainties. In this article we review past efforts to manage uncertainty and propose a hybrid approach combining process and economic input–output (I‐O) approaches to uncertainty analysis of life cycle inventories (LCI). Different categories of uncertainty are sometimes not tractable to analysis within a given model framework but can be estimated from another perspective. For instance, cutoff or truncation error induced by some processes not being included in a bottom‐up process model can be estimated via a top‐down approach such as the economic I‐O model. A categorization of uncertainty types is presented (data, cutoff, aggregation, temporal, geographic) with a quantitative discussion of methods for evaluation, particularly for assessing temporal uncertainty. A long‐term vision for LCI is proposed in which hybrid methods are employed to quantitatively estimate different uncertainty types, which are then reduced through an iterative refinement of the hybrid LCI method.
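As a rough illustration of the cutoff idea described above, the snippet below compares a (truncated) process-based inventory total with an economy-wide IO estimate. The numbers and the simple share-based indicator are assumptions for illustration, not the authors' formulation.

```python
# Gauging cutoff (truncation) error of a bottom-up LCI against a top-down IO estimate.

process_lci_total = 45.0   # kg CO2-eq per functional unit from the (truncated) process model
io_lci_total = 58.0        # kg CO2-eq per functional unit from an economy-wide IO model

# One simple indicator: the share of the IO estimate that the process model fails to capture.
truncation_share = (io_lci_total - process_lci_total) / io_lci_total
print(f"estimated cutoff/truncation share: {truncation_share:.1%}")
```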

3.
In this contribution we investigate the applicability of different methods from the field of independent component analysis (ICA) for the examination of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data from breast cancer research. DCE-MRI has evolved in recent years as a powerful complement to X-ray based mammography for breast cancer diagnosis and monitoring. In DCE-MRI the time-related development of the signal intensity after the administration of a contrast agent can provide valuable information about tissue states and characteristics. To this end, techniques related to ICA offer promising options for data integration and feature extraction at voxel level. In order to evaluate the applicability of ICA, topographic ICA and tree-dependent component analysis (TCA), these methods are applied to twelve clinical cases from breast cancer research with a histopathologically confirmed diagnosis. For ICA these experiments are complemented by a reliability analysis of the estimated components. The outcome of all algorithms is quantitatively evaluated by means of receiver operating characteristics (ROC) statistics, whereas the results for specific data sets are discussed exemplarily in terms of reification, score-plots and score images.
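The following sketch is not the authors' pipeline, but it illustrates the general workflow the abstract describes: ICA applied to voxel-level time courses, followed by an ROC-based evaluation of the resulting component scores. The data are simulated placeholders, and scikit-learn's FastICA and roc_auc_score stand in for the methods compared in the study.

```python
# Plain ICA on synthetic DCE-MRI-like voxel time courses with ROC evaluation.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_voxels, n_timepoints = 2000, 12
t = np.arange(n_timepoints)

# Two archetypal enhancement curves: fast wash-in/wash-out vs. slow persistent uptake.
lesion_curve = np.exp(-((t - 3) ** 2) / 6.0)
benign_curve = t / t.max()

labels = rng.random(n_voxels) < 0.1          # roughly 10% "lesion" voxels
X = np.where(labels[:, None], lesion_curve, benign_curve)
X = X + 0.2 * rng.standard_normal((n_voxels, n_timepoints))

ica = FastICA(n_components=3, random_state=0)
scores = ica.fit_transform(X)                 # per-voxel scores on each component

# Evaluate each component's score image against the ground truth with ROC AUC.
aucs = [max(roc_auc_score(labels, scores[:, k]), roc_auc_score(labels, -scores[:, k]))
        for k in range(scores.shape[1])]
print("best component AUC:", max(aucs))
```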

4.
Abstract: In an article on the role of temporal information in life-cycle assessment in this journal, Field and colleagues argued that frequently it is not the single product but the "fleet" (or cohort) of products that "is the appropriate unit of analysis," and that in focusing on the fleet one "explicitly introduces the notion of time as a critical element of comparative life-cycle assessments. …" Major transitions, such as replacement of one fleet of products by an alternative fleet, correspond to a system in a transient rather than steady state, and explicit consideration of time is central to transient analysis.
One tool increasingly used as part of life-cycle assessment, economic input-output (EIO) analysis, at best deals with time in an implicit fashion. This article illustrates how the sequential interindustry model (SIM), a formulation of the EIO model that explicitly represents time, might be utilized in life-cycle assessment. SIM introduces this temporal component by explicitly accounting for the time required by production activities and the resulting sequencing of the inputs. This can be thought of as engineering rather than accounting information. The data demands of such a model are not likely to be met at present or at any time in the near future. Even so, simulation methods and the use of so-called synthetic data have a history of productive use in a number of fields, including the social sciences.
SIM also utilizes the contribution of Joshi on the application of the EIO model to environmental impact and the inclusion of the use as well as the production phases of a product in EIO analysis. The possibility of accounting for discounting of future events, with its impact on decision making, is also briefly discussed.
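To make the notion of explicit time in an EIO-style calculation more tangible, the toy example below propagates a future final demand backwards through a one-period production lag. It is a drastic simplification with invented coefficients, not the full SIM formulation discussed in the article.

```python
# Toy time-explicit IO recursion: inputs are needed one period before output is delivered.
import numpy as np

A = np.array([[0.1, 0.3],
              [0.2, 0.1]])          # input requirements per unit of output
T = 6                               # planning horizon (periods)
final_demand = np.zeros((T, 2))
final_demand[4] = [1.0, 0.0]        # one unit of sector-1 output demanded in period 4

# Work backwards: output in period t must also cover inputs for production in period t+1.
output = np.zeros((T, 2))
for t in reversed(range(T)):
    induced = A @ output[t + 1] if t + 1 < T else np.zeros(2)
    output[t] = final_demand[t] + induced

print(np.round(output, 3))          # production ramps up in the periods before delivery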

5.
Gong HY, Zhang PM. Acta Physiologica Sinica, 2011, 63(5): 431-441
In neuroscience research, multichannel recording methods are widely used to study the activity characteristics of neuronal populations. By analyzing the activity of multiple neurons, one can learn the rules by which the nervous system cooperatively encodes external information and the mechanisms by which the brain realizes its various functions. To extract the information carried in multichannel neural signals and its potential correlations, suitable computational methods are needed to assist in decoding neuronal firing activity. This article reviews some common methods for the analysis of multichannel neural signals, as well as ...

6.
Eco-efficiency at the product level is defined as product value per unit of environmental impact. In this paper we present a method for quantifying the eco-efficiency using quality function deployment (QFD) and life-cycle impact assessment (LCIA). These well-known tools are widely used in the manufacturing industry.
QFD, which is one of the methods used in product development based on consumer preferences, is introduced to calculate the product value. An index of the product value is calculated as the weighted average of improvement rates of quality characteristics. The importance of customer requirements, derived from the QFD matrix, is applied.
Environmental impacts throughout a product life cycle are calculated based on an LCIA method widely used in Japan. By applying an endpoint-type LCIA method, the endpoint damage caused by the various life-cycle inventories is calculated. Willingness to pay is applied to integrate these damages into a single index.
Eco-design support tools, namely, the life-cycle planning (LCP) tool and the life-cycle assessment (LCA) tool, have already been developed. Using these tools, data required for calculation of the eco-efficiency of products can be collected. The product value is calculated based on QFD data stored in the LCP tool and the environmental impact is calculated using the LCA tool.
Case studies of eco-efficiency are presented, and the adequacy of this method is demonstrated. Several advantages of the method are characterized.
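A minimal numerical sketch of the eco-efficiency calculation described above is given below. The quality characteristics, the QFD-derived weights, and the impact figures are hypothetical placeholders.

```python
# Eco-efficiency = product value index / single environmental impact index.

quality_improvement_rates = {"energy efficiency": 1.30, "image quality": 1.10, "weight": 1.05}
qfd_weights = {"energy efficiency": 0.5, "image quality": 0.3, "weight": 0.2}  # from a QFD matrix

# Product value index: weighted average of the improvement rates of quality characteristics.
product_value = sum(qfd_weights[k] * quality_improvement_rates[k] for k in qfd_weights)

# Single environmental index (e.g., willingness-to-pay-weighted endpoint damage).
environmental_impact_new, environmental_impact_old = 5200.0, 6500.0

eco_efficiency_new = product_value / environmental_impact_new
eco_efficiency_old = 1.0 / environmental_impact_old   # reference product: value index = 1
print("relative eco-efficiency:", eco_efficiency_new / eco_efficiency_old)
```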

7.
This article quantifies and ranks the environmental pressure caused by different product groups consumed in Sweden. This is done using information from economic and environmental statistics. An analysis for the year 1998 is performed for approximately 50 product groups using input-output analysis. This type of analysis has some major advantages for integrated product policy (IPP) purposes: the underlying data are regularly updated, the data systems are being harmonized by international standards, and the connection between environmental goals and IPP goals can be investigated. This article summarizes two Swedish reports, one for the Producer Responsibility Committee and one for the Swedish Environmental Protection Agency. The results show that the volume of consumption, as well as the impact intensity, is an important factor in the environmental pressure from products. The most important product categories for private consumption are petroleum products, electricity, construction, and food and beverages, as well as transport. Possibilities of building indicators for IPP are also discussed.

8.
The amount of glycomics data being generated is rapidly increasing as a result of improvements in analytical and computational methods. Correlation and analysis of this large, distributed data set requires an extensible and flexible representational standard that is also ‘understood’ by a wide range of software applications. An XML-based data representation standard that faithfully captures essential structural details of a glycan moiety, along with additional information (such as data provenance) to aid the interpretation and usage of glycan data, will facilitate the exchange of glycomics data across the scientific community. To meet this need, we introduce the GLYcan Data Exchange (GLYDE) standard as an XML-based representation format to enable interoperability and exchange of glycomics data. An online tool (http://128.192.9.86/stargate/formatIndex.jsp) for the conversion of other representations to GLYDE format has been developed.

9.
Background, aim, and scope
As sustainability improvement becomes an essential business task for industry, a number of companies are adopting IT-based environmental information systems (EIS). Life cycle assessment (LCA), a tool to improve the environmental friendliness of a product, can also be systemized as a part of the EIS. This paper presents a case of an environmental information system that is integrated with an online LCA tool to produce sets of hybrid life cycle inventory and examines its usefulness in the field application of environmental management.
Main features
Samsung SDI Ltd., the producer of display panels, has launched an EIS called the Sustainability Management Initiative System (SMIS). The system comprises modules such as environmental management system (EMS), green procurement (GP), customer relations (e-VOC), eco-design, and LCA. The LCA module adopts a hybrid LCA methodology in the sense that it combines process LCA for the site processes and input–output (IO) LCA for upstream processes to produce cradle-to-gate LCA results. LCA results from the module are compared with results of other LCA studies made by the application of different methodologies. The advantages and applications of the LCA system are also discussed in light of the electronics industry.
Results and discussion
LCA can play a vital role in sustainability management by finding the environmental burden of products over their life cycle. This is especially true in the case of the electronics industry, since electronic products raise critical public concerns in the use and end-of-life phases. SMIS shows a method for hybrid LCA through online data communication with the EMS and GP modules. The integration of IT-based hybrid LCA in the environmental information system was set to begin in January 2006. The advantage of comparing and regularly monitoring the LCA values is that it improves the completeness of the system and increases the reliability of the LCA. Comparing the hybrid LCA and process LCA in the cradle-to-gate stage, the gap between the two methods for the 42-in. standard-definition plasma display panel (PDP) ranges from 1% (acidification impact category) to −282% (abiotic resource depletion impact category), with an average gap of 68.63%. The gaps for the impact categories of acidification (AP), eutrophication (EP), and global warming (GWP) are relatively low (less than 10%). In the comparative analysis, the strength of correlation for these three impact categories (AP, EP, GWP) shows that it is reliable to use the hybrid LCA when assessing the environmental impacts of the PDP module. Hybrid LCA carries its own risk with respect to data accuracy; however, the risk is acceptable for comparative LCA among different models of a similar product line of a company. Over 2 years of monitoring of the 42-in. standard-definition PDP, the hybrid LCA score decreased by 30%. The system also efficiently shortens the man-days required for an LCA study per product. This can facilitate the eco-design of products and enables quick responses to customer inquiries on a product's eco-profile. Even though the currently available process data still need improvement, the hybrid LCA provides insight into the assessment of the eco-efficiency of the manufacturing process and the environmental impacts of a product.
Conclusions and recommendations
As the environmental concerns of industry increase, the need for environmental data management also increases.
LCA should be a core part of the environmental information system by which the environmental performance of products can be controlled. The hybrid type of LCA is effective for controlling the routine eco-profile of the products in a company. For an industry, in particular electronics, that imports a broad range of raw materials and parts, hybrid LCA is more practicable than classic LCA. Continuous efforts are needed to align input data and maintain conformity, which reduces the data uncertainty of the system.
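The snippet below illustrates the kind of category-by-category comparison reported above. The inventory numbers are invented (chosen only to mimic the reported magnitudes), and the gap formula is one plausible definition rather than the article's exact expression.

```python
# Hypothetical comparison of process-based and hybrid cradle-to-gate results per category.

process_lca = {"AP": 0.100, "EP": 0.052, "GWP": 410.0, "ADP": 0.0030}
hybrid_lca  = {"AP": 0.099, "EP": 0.050, "GWP": 385.0, "ADP": 0.01146}

for category in process_lca:
    # Gap expressed relative to the process-based value (only one possible convention).
    gap = (process_lca[category] - hybrid_lca[category]) / process_lca[category] * 100
    print(f"{category}: gap = {gap:+.1f}%")
```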

10.

Background

Genomic prediction is becoming a daily tool for plant breeders. It makes use of genotypic information to make predictions used for selection decisions. The accuracy of the predictions depends on the number of genotypes used in the calibration; hence, there is a need to combine data across years. A proper phenotypic analysis is a crucial prerequisite for accurate calibration of genomic prediction procedures. We compared stage-wise approaches to analyse a real dataset of a multi-environment trial (MET) in rye, which was connected between years only through one check, and used different spatial models to obtain better estimates and, thus, improved predictive abilities for genomic prediction. The aims of this study were to assess the advantage of using spatial models for the predictive abilities of genomic prediction, to identify suitable procedures to analyse a MET weakly connected across years using different stage-wise approaches, and to explore genomic prediction as a tool for selection of models for phenotypic data analysis.

Results

Using complex spatial models did not significantly improve the predictive ability of genomic prediction, but using row and column effects yielded the highest predictive abilities of all models. In the case of a MET poorly connected between years, analysing each year separately and fitting year as a fixed effect in the genomic prediction stage yielded the most realistic predictive abilities. Predictive abilities can also be used to select models for phenotypic data analysis. The trend of the predictive abilities was not the same as that of the traditionally used Akaike information criterion, but in the end both favoured the same models.
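As an illustration of how a predictive ability such as those reported here is typically obtained, the sketch below cross-validates a ridge-regression (RR-BLUP-like) model on simulated marker data and correlates the predictions with stage-one adjusted means. It is a generic stand-in, not the models used in this study.

```python
# Cross-validated predictive ability on simulated genotypes and adjusted means.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(1)
n_lines, n_markers = 300, 1000
markers = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)  # 0/1/2 genotypes
true_effects = rng.normal(0.0, 0.05, size=n_markers)
adjusted_means = markers @ true_effects + rng.normal(0.0, 1.0, size=n_lines)  # stage-one means

model = Ridge(alpha=100.0)
cv = KFold(n_splits=5, shuffle=True, random_state=1)
predicted = cross_val_predict(model, markers, adjusted_means, cv=cv)

predictive_ability = np.corrcoef(adjusted_means, predicted)[0, 1]
print(f"predictive ability: {predictive_ability:.2f}")
```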

Conclusions

Making predictions using weakly linked datasets is of utmost interest for plant breeders. We provide an example with suggestions on how to handle such cases. Rather than relying on checks, we show how to use year means across all entries for integrating data across years. It is further shown that fitting row and column effects captures most of the heterogeneity in the field trials analysed.

Electronic supplementary material

The online version of this article (doi:10.1186/1471-2164-15-646) contains supplementary material, which is available to authorized users.

11.
Evaluation of: Deighton RF, Kerr LE, Short DM et al. Network generation enhances interpretation of proteomics data from induced apoptosis. Proteomics DOI: 10.1002/pmic.200900112 (2010) (Epub ahead of print).

The huge ongoing improvements in proteomics technologies, including the development of high-throughput mass spectrometry, are resulting in ever-increasing information on protein behavior during cellular processes. The exponential accumulation of proteomics data has the promise to advance biomedical sciences by shedding light on the most important events that regulate mammalian cells under normal and pathophysiological conditions. This may provide practical insights that will impact medical practice and therapy, and may permit the development of a new generation of personalized therapeutics. Proteomics, as a powerful tool, creates numerous opportunities as well as challenges. At the different stages, data interpretation requires proteomics analysis, various tools to help deal with large proteomics data banks, and the extraction of more functional information. Network analysis tools facilitate proteomics data interpretation and predict protein functions, functional interactions, and the in silico identification of intracellular pathways. The work reported by Deighton and colleagues illustrates an example of improving proteomics data interpretation by network generation. The authors used Ingenuity Pathway Analysis to generate a protein network predicting direct and indirect interactions between 13 proteins found to be affected by staurosporine treatment. Importantly, the authors highlight the caution required when interpreting the results from a small number of proteins analyzed using network analysis tools.

12.
Abstract: Classical home range analysis is tailored to meet requirements of data with few points per individual with relatively large intervals between observations. The swift rise in Global Positioning System (GPS)-based studies requires the development of new analytical approaches because GPS data allow for more detailed analysis in time and space. The amount of data derived from GPS studies enhances the potential to more accurately separate movement strategies. We present a general, simple, conceptual approach to using large movement datasets to automatically screen and delimit spatial and temporal home ranges of individuals and movement strategies using time series segmentation. We used GPS data for moose (Alces alces) from a boreal Swedish population as an example. We tested predictions that our screening method could separate seasonal migration from dispersal and nomadic strategies by the movement profile, which includes several dimensions. Our analysis showed that broad strategies were detected using our simple analytical approach, which speeds up use of GPS data for management and research because the method can be used to calculate more objective spatial and temporal activity ranges in relation to movement strategies. Our examples illustrate the importance of using the time stamp on location data in describing home ranges and movements.
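The snippet below is a highly simplified stand-in for the screening idea: it segments a one-dimensional movement profile (net squared displacement) at the single breakpoint that minimises the within-segment sum of squares. The authors' actual time series segmentation procedure is richer than this toy version, and the track is simulated.

```python
# Toy breakpoint detection on a movement profile derived from GPS relocations.
import numpy as np

def net_squared_displacement(xy):
    """NSD relative to the first relocation of the track."""
    return np.sum((xy - xy[0]) ** 2, axis=1)

def best_breakpoint(signal, min_size=5):
    """Return the index splitting `signal` into two segments with minimal residual sum of squares."""
    costs = []
    for k in range(min_size, len(signal) - min_size):
        left, right = signal[:k], signal[k:]
        costs.append((np.var(left) * len(left) + np.var(right) * len(right), k))
    return min(costs)[1]

# Toy track: the animal stays near the origin, then shifts to a second range (migration-like).
rng = np.random.default_rng(2)
track = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(10, 1, (50, 2))])

nsd = net_squared_displacement(track)
print("detected breakpoint at relocation", best_breakpoint(nsd))
```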

13.
In recent literature, prospective application of life cycle assessment (LCA) at low technology readiness levels (TRL) has gained immense interest for its potential to enable the development of emerging technologies with improved environmental performance. However, limited data, uncertain functionality, scale-up issues, and uncertainties make it very challenging for the standard LCA guidelines to evaluate emerging technologies and require methodological advances in the current LCA framework. In this paper, we review published literature to identify major methodological challenges and key research efforts to resolve these issues, with a focus on recent developments in five major areas: cross‐study comparability, data availability and quality, scale‐up issues, uncertainty and uncertainty communication, and assessment time. We also provide a number of recommendations for future research to support the evaluation of emerging technologies at low technology readiness levels: (a) the development of a consistent framework and reporting methods for LCA of emerging technologies; (b) the integration of other tools with LCA, such as multicriteria decision analysis, risk analysis, and technoeconomic analysis; and (c) the development of a data repository for emerging materials, processes, and technologies.

14.
15.
For the practical implementation of the assessment of environmental impact, actual procedures and data requirements should be clarified so that industrial decision makers understand them. Researchers should consider local risks related to processes and environmental impact throughout the life cycle of products simultaneously to supervise these adverse effects appropriately. Life cycle assessment (LCA) is a useful tool for quantifying the potential impact associated with a product life cycle. Risk assessment (RA) is a widely used tool for identifying chemical risks in a specific situation. In this study, we integrate LCA and RA for risk‐based decision making by devising a hierarchical activity model using the type‐zero method of integrated definition language (IDEF0). The IDEF0 activity modeling language has been applied to connect activities with information flows. Process generation, evaluation, and decision making are logically defined and visualized in the activity model with the required information. The activities, information flows, and their acquisitions are revealed, with a focus on which data should be collected by on‐site engineers. A case study is conducted on designing a metal cleaning process reducing chemical risks due to the use of a cleansing agent. LCA and RA are executed and applied effectively on the basis of integrated objective settings and interpretation. The proposed activity model can be used as a foundation to incorporate such assessments into actual business models.

16.
A sustainability matrix has been developed at Shell Global Solutions to show the environmental, social, and economic impacts of a product. The approach aims to be quicker and more cost-effective than a conventional life-cycle assessment by focusing on specific areas of concern through the product life cycle and then comparing products by scaling their impacts relative to one another. It provides a way of making qualitative and quantitative assessment that gives a depth to the assessment beyond data analysis. The tool includes subjective judgment, which tends to reflect current thinking in the company. Once the tool has been fully tested on all product types, the indicators that are central to the process will be assessed by external stakeholders. This article describes the development of the sustainability assessment tool and presents an example that compares the sustainability of a biolubricant (an "environmentally acceptable" hydraulic fluid meeting Swedish Standard SS 15 54 34) with that of a conventional mineral-oil-based product. The tool provides a quick decision-making instrument to help Shell decide which products should be marketed for the business to continue on a sustainable path. The tool also provides a more detailed level of information if a more thorough assessment is necessary.

17.
The identification of metabolic regulation is a major concern in metabolic engineering. Metabolic regulation phenomena depend on intracellular compounds such as enzymes, metabolites and cofactors. A complete understanding of metabolic regulation requires quantitative information about these compounds under in vivo conditions. This quantitative knowledge in combination with the known network of metabolic pathways allows the construction of mathematical models that describe the dynamic changes in metabolite concentrations over time. Rapid sampling combined with pulse experiments is a useful tool for the identification of metabolic regulation owing to the transient data they provide. Enzymatic tests in combination with ESI-LC-MS (Electrospray Ionization Liquid Chromatographic Tandem Mass Spectrometry) and HPLC measurements have been used to identify up to 30 metabolites and nucleotides from rapid sampling experiments. A metabolic modeling tool (MMT) that is built on a relational database was developed specifically for the analysis of rapid sampling experiments. The tool allows the construction of complex pathway models from information stored in the relational database. Parameter fitting and simulation algorithms for the resulting system of Ordinary Differential Equations (ODEs) are part of MMT. Additionally, explicit sensitivity functions are calculated. The integration of all necessary algorithms in one tool allows fast model analysis and comparison. Complex models have been developed to describe the central metabolic pathways of Escherichia coli during a glucose pulse experiment.
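As a toy illustration of the kind of ODE model referred to above (not one of the MMT models themselves), the sketch below integrates a two-metabolite Michaelis-Menten system after a simulated glucose pulse with SciPy. All kinetic parameters are invented.

```python
# Toy dynamic model of metabolite concentrations following a glucose pulse.
import numpy as np
from scipy.integrate import solve_ivp

def toy_pathway(t, y, vmax1=2.0, km1=0.5, vmax2=1.5, km2=0.3):
    glucose, g6p = y
    uptake = vmax1 * glucose / (km1 + glucose)      # Michaelis-Menten glucose uptake
    consumption = vmax2 * g6p / (km2 + g6p)         # downstream consumption of G6P
    return [-uptake, uptake - consumption]

y0 = [5.0, 0.1]                                     # concentrations right after the pulse (mM)
sol = solve_ivp(toy_pathway, t_span=(0.0, 30.0), y0=y0, t_eval=np.linspace(0, 30, 61))

print("glucose at t=30 s:", round(sol.y[0, -1], 3))
print("G6P peak:", round(sol.y[1].max(), 3))
```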

18.
Accurate detection of offspring resulting from hybridization between individuals of distinct populations has a range of applications in conservation and population genetics. We assessed the hybrid identification efficiency of two methods (implemented in the STRUCTURE and NEWHYBRIDS programs) which are tailored to identifying hybrid individuals but use different approaches. Simulated first- and second-generation hybrids were used to assess the performance of these two methods in detecting recent hybridization under scenarios with different levels of genetic divergence and varying numbers of loci. Despite the different approaches of the methods, the hybrid detection efficiency was generally similar and neither of the two methods outperformed the other in all scenarios assessed. Interestingly, hybrid detection efficiency was only minimally affected by whether reference population allele frequency information was included or not. In terms of genotyping effort, efficient detection of F1 hybrid individuals requires the use of 12 or 24 loci with pairwise F(ST) between hybridizing parental populations of 0.21 or 0.12, respectively. While achievable, these locus numbers are nevertheless higher than the number of loci currently commonly applied in population genetic studies. The method of STRUCTURE seemed to be less sensitive to the proportion of hybrids included in the sample, while NEWHYBRIDS seemed to perform slightly better when individuals from both backcross and F1 hybrid classes were present in the sample. However, separating backcrosses from purebred parental individuals requires a considerable genotyping effort (at least 48 loci), even when divergence between parental populations is high.

19.
A method for quantitative evaluation of data quality in regional material flow analysis (MFA) is presented. The principal idea is that data quality is a multidimensional problem that cannot be judged by individual characteristics such as the data source, given that data from official statistics may not per se be of good quality and expert estimations may not per se be of bad quality. It appears that MFA data are never totally accurate and may have certain defects that impair the quality of the data in more than one dimension. The concept of MFA information defects is introduced, and these information defects are mathematically formalized as functions of data characteristics. They are quantified on a scale from 0 (no information defect) to 1 (maximum information defect). The proposed method is illustrated in a case study on palladium flows in Austria. A quantitative evaluation of data quality provides opportunities for understanding and assessing MFA results, their a priori information basis, their reliability in decision making, and data uncertainties. It is a formal step toward better reproducibility and more transparency in MFA.
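The sketch below shows one way such 0-to-1 defect scores could be combined into an overall figure. Both the example scores and the aggregation rule (multiplying the complements as if the defects were independent) are illustrative assumptions, not the formalization proposed in the article.

```python
# Combining per-dimension information defects into one overall score (illustrative only).

defects = {            # 0 = no information defect, 1 = maximum information defect
    "source reliability": 0.2,
    "completeness": 0.4,
    "temporal fit": 0.1,
    "geographical fit": 0.3,
}

retained_information = 1.0
for value in defects.values():
    retained_information *= (1.0 - value)

overall_defect = 1.0 - retained_information
print(f"overall information defect: {overall_defect:.2f}")   # 0 (perfect) ... 1 (worst)
```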

20.
SUMMARY: The large amount of data produced by proteomics experiments requires effective bioinformatics tools for the integration of data management and data analysis. Here we introduce a suite of tools developed at Vanderbilt University to support production proteomics. We present the Backup Utility Service tool for automated instrument file backup and the ScanSifter tool for data conversion. We also describe a queuing system to coordinate identification pipelines and the File Collector tool for batch copying analytical results. These tools are individually useful but collectively reinforce each other. They are particularly valuable for proteomics core facilities or research institutions that need to manage multiple mass spectrometers. With minor changes, they could support other types of biomolecular resource facilities.
