Similar Articles
Found 20 similar articles (search time: 15 ms)
1.
Goal, Scope and Background

Decision-makers demand information about the range of possible outcomes of their actions. Therefore, for developing Life Cycle Assessment (LCA) as a decision-making tool, Life Cycle Inventory (LCI) databases should provide uncertainty information. Approaches for incorporating uncertainty should be selected according to the characteristics of the LCI database. For example, in industry-based LCI databases where large amounts of up-to-date process data are collected, statistical methods might be useful for quantifying the uncertainties. In practice, however, there is still a lack of knowledge as to which statistical methods are most effective for obtaining the required parameters. Another concern from the industry's perspective is the confidentiality of the process data. The aim of this paper is to propose a procedure for incorporating uncertainty information into industry-based LCI databases with statistical methods, while preserving the confidentiality of individual data.

Methods

The proposed procedure for taking uncertainty in industry-based databases into account has two components: continuous probability distributions fitted to scattering unit process data, and rank order correlation coefficients between inventory flows. The type of probability distribution is selected using statistical methods such as goodness-of-fit statistics or experience-based approaches. Parameters of the probability distributions are estimated using maximum likelihood estimation. Rank order correlation coefficients are calculated for inventory items in order to preserve data interdependencies. These probability distributions and rank order correlation coefficients can be used in Monte Carlo simulations to quantify the uncertainties in LCA results as probability distributions.

Results and Discussion

A case study is performed on the technology selection of polyethylene terephthalate (PET) chemical recycling systems. Three processes are evaluated on the basis of CO2 reduction relative to the conventional incineration technology. To illustrate the application of the proposed procedure, assumptions were made about the uncertainty of LCI flows. The application of the probability distributions and the rank order correlation coefficients is shown, and a sensitivity analysis is performed. A potential use of the results of the hypothetical case study is discussed.

Conclusion and Outlook

The case study illustrates how the uncertainty information in LCI databases may be used in LCA. Since actual scattering unit process data were not available for the case study, the uncertainty distribution of the LCA result is hypothetical. However, the merit of adopting the proposed procedure has been illustrated: more informed decision-making, based on the significance of the LCA results, becomes possible. With this illustration, the authors hope to encourage both database developers and data suppliers to incorporate uncertainty information in LCI databases.
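A minimal sketch of the three statistical steps described above, assuming hypothetical unit-process data: a lognormal distribution fitted by maximum likelihood, a rank order correlation between two inventory flows, and a Monte Carlo simulation that preserves that correlation via a Gaussian copula. The flow names and the CO2 impact model are invented for illustration, not taken from the paper's case study.

```python
# Sketch: fit a distribution to scattered unit-process data, preserve the
# rank correlation between two flows, and propagate both by Monte Carlo.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical scattered unit-process data for two correlated flows
electricity = rng.lognormal(mean=0.5, sigma=0.2, size=200)
steam = 0.6 * electricity + rng.normal(0, 0.1, size=200)

# 1. Fit a probability distribution by maximum likelihood estimation
shape, loc, scale = stats.lognorm.fit(electricity, floc=0)

# 2. Rank order correlation preserves the interdependency between flows
rho, _ = stats.spearmanr(electricity, steam)

# 3. Monte Carlo: sample correlated flows through a Gaussian copula
n = 10_000
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
u = stats.norm.cdf(z)  # uniform marginals carrying the rank correlation
elec_s = stats.lognorm.ppf(u[:, 0], shape, loc, scale)
steam_s = np.quantile(steam, u[:, 1])  # empirical inverse CDF

co2 = 0.8 * elec_s + 0.3 * steam_s  # illustrative impact model
print(f"CO2: median={np.median(co2):.2f}, "
      f"95% CI=({np.quantile(co2, 0.025):.2f}, {np.quantile(co2, 0.975):.2f})")
```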

2.
It is clear that humans have mental representations of their spatial environments and that these representations are useful, if not essential, in a wide variety of cognitive tasks, such as identifying landmarks and objects, guiding actions and navigation, and directing spatial awareness and attention. Determining the properties of mental representation has long been a contentious issue (see Pinker, 1984). One method of probing the nature of human representation is to study the extent to which representation can surpass, or go beyond, the visual (or sensory) experience from which it derives. From a strictly empiricist standpoint, what is not sensed cannot be represented, except as a combination of things that have been experienced. But perceptual experience is always limited by our view of the world and the properties of our visual system. It is therefore not surprising when human representation is found to be highly dependent on the initial viewpoint of the observer and on any shortcomings thereof. However, representation is not a static entity; it evolves with experience. The debate over whether human representation of objects is view-dependent or view-invariant, which has dominated research journals recently, may simply be a discussion of how much information is available in the retinal image during experimental tests and whether this information is sufficient for the task at hand. Here we review an approach to studying the development of human spatial representation under realistic problem-solving scenarios, facilitated by the use of realistic virtual environments, exploratory learning and redundancy in visual detail.

3.
4.
Multiple sensory-motor maps located in the brainstem and the cortex are involved in spatial orientation. Guiding movements of the eyes, head, neck and arms, they provide an approximately linear relation between target distance and motor response. This involves especially the superior colliculus in the brainstem and the parietal cortex, where the natural frame of reference follows from the retinal representation of the environment. A model of navigation is presented that is based on the modulation of activity in those sensory-motor maps. The mechanism chosen was gain-field modulation, a process of multimodal integration that has been demonstrated in the parietal cortex and superior colliculus, and it was implemented as attraction to visual cues (colour). Depending on the metric of the sensory-motor map, the relative attraction to these cues, implemented as gain-field modulation, and their position define a fixed point attractor on the plane for locomotive behaviour. The implementation used Kohonen networks in a variant of reinforcement learning; these are well suited to generating such topographically organized sensory-motor maps with roughly linear visuo-motor response characteristics. We then investigated how such an implicit coding of target positions by gain-field parameters might be represented in the hippocampal formation, and under what conditions a direction-invariant space representation can arise from such retinotopic representations of multiple cues. Information about orientation in the plane, as could be provided by head direction cells, appeared to be necessary for unambiguous space representation in our model, in agreement with physiological experiments. With this information, Gauss-shaped "place cells" could be generated; however, the representation of the spatial environment was repetitive and clustered, and single cells were always tuned to the gain-field parameters as well.
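A minimal numerical sketch of gain-field modulation as described above, assuming a 1-D retinotopic map and an invented cue-attraction weight: a Gaussian visual response is scaled multiplicatively by a gain depending on a second variable, and the peak of the modulated map drives an approximately linear motor response.

```python
# Sketch of gain-field modulation on a 1-D retinotopic map (toy values).
import numpy as np

positions = np.linspace(-1.0, 1.0, 101)  # retinotopic map coordinates
target = 0.3                              # cue position on the retina

visual = np.exp(-(positions - target) ** 2 / (2 * 0.1 ** 2))  # tuning curve
attraction = 0.5            # hypothetical relative attraction to the cue
gain = 1.0 + 0.8 * attraction
response = gain * visual    # multiplicative gain field

# The peak of the modulated map yields a roughly linear motor command
motor_command = positions[np.argmax(response)]
print(motor_command)  # -> 0.3, i.e. toward the attracting cue
```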

5.
With the increased interest in understanding biological networks, such as protein-protein interaction networks and gene regulatory networks, methods for representing and communicating such networks in both human- and machine-readable form have become increasingly important. Although there has been significant progress in machine-readable representation of networks, as exemplified by the Systems Biology Markup Language (SBML) (http://www.sbml.org), issues in human-readable representation have been largely ignored. This article discusses human-readable diagrammatic representations and proposes a set of notations that enhances the formality and richness of the information represented. The process diagram is a fully state-transition-based diagram that can be translated into machine-readable forms such as SBML in a straightforward way. It is supported by CellDesigner, a diagrammatic network-editing software tool (http://www.celldesigner.org/), and has been used to represent a variety of networks of various sizes (from only a few components to several hundred).
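To make the translation concrete, here is a minimal, hand-written SBML-style document for a single state transition, parsed back in Python. The species and reaction names are hypothetical, and real files would come from a tool such as CellDesigner rather than a string literal.

```python
# Sketch: a one-reaction SBML Level 2 document and a round-trip parse.
import xml.etree.ElementTree as ET

sbml = """<?xml version="1.0" encoding="UTF-8"?>
<sbml xmlns="http://www.sbml.org/sbml/level2" level="2" version="1">
  <model id="phosphorylation">
    <listOfSpecies>
      <species id="A" name="protein A"/>
      <species id="A_P" name="phosphorylated protein A"/>
    </listOfSpecies>
    <listOfReactions>
      <reaction id="r1">  <!-- one state transition: A -> A_P -->
        <listOfReactants><speciesReference species="A"/></listOfReactants>
        <listOfProducts><speciesReference species="A_P"/></listOfProducts>
      </reaction>
    </listOfReactions>
  </model>
</sbml>"""

root = ET.fromstring(sbml)  # parses cleanly: the diagram's semantics survive
ns = {"s": "http://www.sbml.org/sbml/level2"}
for reaction in root.findall(".//s:reaction", ns):
    print(reaction.get("id"))  # -> r1
```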

6.
7.
Proteomic databases and software on the web
In the wake of sequencing projects, protein function analysis is evolving fast, from the careful design of assays that address specific questions to 'large-scale' proteomics technologies that yield proteome-wide maps of protein expression or interaction. As these new technologies depend heavily on information storage, representation and analysis, existing databases and software tools are being adapted, while new resources are emerging. This paper describes the proteomics databases and software available through the World-Wide Web, focusing on their present use and applicability. As the resource situation is highly transitory, trends and probable evolutions are discussed whenever applicable.

8.
9.
Disease networks are increasingly explored as a complement to networks centered around interactions between genes and proteins. The quality of disease networks is heavily dependent on the amount and quality of phenotype information in phenotype databases of human genetic diseases. We explored which aspects of phenotype database architecture and content best reflect the underlying biology of disease. We used the OMIM-based HPO, Orphanet, and POSSUM phenotype databases for this purpose and devised a biological coherence score, based on the sharing of gene ontology annotation, to investigate the degree to which phenotype similarity in these databases reflects related pathobiology. Our analyses support the notion that a fine-grained phenotype ontology enhances the accuracy of phenome representation. In addition, we find that the OMIM database, which is the most used by the human genetics community, is heavily underannotated. We show that this problem can easily be overcome by adding data available in the POSSUM database to improve OMIM phenotype representations in the HPO. We also find that the use of feature frequency estimates, currently implemented only in the Orphanet database, significantly improves the quality of the phenome representation. Our data suggest that there is much to be gained by improving human phenome databases and that some of the measures needed to achieve this are relatively easy to implement. More generally, we propose that curation and more systematic annotation of human phenome databases can greatly improve the power of the phenotype for genetic disease analysis.
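A hedged sketch of what a biological coherence score based on shared gene ontology annotation could look like; the paper's actual score may be defined differently, and the GO term sets below are invented for illustration.

```python
# Sketch: score whether two phenotypically similar diseases also share
# gene ontology (GO) annotation, here as a simple Jaccard overlap.
def go_coherence(terms_a: set[str], terms_b: set[str]) -> float:
    """Jaccard overlap of the GO annotations of two diseases' gene sets."""
    if not terms_a or not terms_b:
        return 0.0
    return len(terms_a & terms_b) / len(terms_a | terms_b)

disease_x = {"GO:0006915", "GO:0008283", "GO:0007165"}  # hypothetical
disease_y = {"GO:0006915", "GO:0007165", "GO:0016049"}  # hypothetical

print(go_coherence(disease_x, disease_y))  # 0.5 -> related pathobiology
```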

10.
In order to process protein data, a numerical representation of each amino acid is often necessary. Many suitable parameters can be derived from experiments or from statistical analysis of databases. To use these sources of information quickly and efficiently, the relevant information must be extracted from these parameters and reduced in dimension. In this approach, established methods such as principal component analysis (PCA) are supplemented by a method based on symmetric neural networks. Two different parameter representations of amino acids are reduced from five and seven dimensions, respectively, to one, two, three or four dimensions, using a symmetric neural network with either one or three hidden layers. It is thus possible to create general reduced parameter representations for amino acids. To demonstrate the ability of this approach, the reduced parameter sets are applied to the ab initio prediction of protein secondary structure from primary structure alone. Artificial neural networks are implemented and trained with a diverse set of 430 proteins from the PDB. Training and prediction are substantially faster for the reduced parameter representations than for the complete set of parameters, with no decrease in accuracy. The method is transferable to other amino acids, or even to other molecular building blocks such as nucleic acids, and therefore represents a general approach. Electronic Supplementary Material available.
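A minimal sketch of the PCA baseline named above: reducing a five-dimensional amino acid parameter set to two dimensions. The descriptor values are random placeholders; real inputs would be physico-chemical parameters from experiments or databases.

```python
# Sketch: PCA via SVD to reduce amino acid descriptors from 5 to 2 dims.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))   # 20 amino acids x 5 parameters (placeholder)

Xc = X - X.mean(axis=0)        # centre each parameter
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
reduced = Xc @ Vt[:2].T        # project onto the first 2 components

explained = (S**2 / (S**2).sum())[:2].sum()
print(reduced.shape, f"variance retained: {explained:.0%}")
```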

11.
We consider a novel 2-D graphical representation of DNA sequences based on the chemical structures of the bases. It reflects the distribution of bases with different chemical structures, preserves information on the sequential adjacency of bases, and allows numerical characterization. The representation avoids the loss of information that accompanies alternative 2-D representations in which the curve standing for the DNA overlaps and intersects itself. Based on this representation, we present a numerical characterization approach using the leading eigenvalues of matrices associated with the DNA sequences. The utility of the approach is illustrated on the coding sequences of the first exon of the human beta-globin gene.
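A hedged sketch of this kind of descriptor. The step directions follow chemical structure (purine/pyrimidine on one axis, amino/keto on the other), and the matrix below (Euclidean distances between curve points, normalised by sequence separation) is one common choice in this literature; the paper's exact matrix may differ.

```python
# Sketch: 2-D DNA walk by base chemistry plus a leading-eigenvalue index.
import numpy as np

STEPS = {"A": (1, 1), "G": (1, -1), "C": (-1, 1), "T": (-1, -1)}

def dna_walk(seq: str) -> np.ndarray:
    """Cumulative 2-D curve: x = purine/pyrimidine, y = amino/keto."""
    return np.cumsum([STEPS[b] for b in seq], axis=0)

def leading_eigenvalue(pts: np.ndarray) -> float:
    """Largest eigenvalue of a distance matrix built from the curve."""
    n = len(pts)
    i, j = np.triu_indices(n, k=1)
    m = np.zeros((n, n))
    d = np.linalg.norm(pts[i] - pts[j], axis=1) / (j - i)  # scaled distances
    m[i, j] = m[j, i] = d
    return float(np.linalg.eigvalsh(m)[-1])

seq = "ATGGTGCACCTGACTCCTGAG"  # start of human beta-globin exon 1
print(leading_eigenvalue(dna_walk(seq)))
```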

12.
The 67th Discussion Forum on Life Cycle Assessment (LCA), organised by partners of the European project RELIEF (RELIability of product Environmental Footprints), focused on methods for better understanding the impacts of land use linked to agricultural value chains. The first session of the forum was dedicated to methods that help in retrospective tracking of land use within complex supply chains; novel approaches were presented for integrating increasingly available spatially located land use data into LCA. The second session focused on forward-looking projections of land use change and included emerging, predictive methods for the modelling of land change. The third session considered impact assessment methods related to the use of land and their application together with land change modelling approaches. Discussions throughout the day centred on the opportunities and challenges arising from integrating spatially located land use information into LCA. Increasing amounts of spatially located land use data are becoming available, and this could potentially increase the robustness and specificity of LCA. However, the use of such data can be computationally expensive and requires the development of skills (e.g. the use of geographical information systems (GIS) and model coding) within the LCA community. Land change modelling and ecosystem service modelling are associated with considerable uncertainty, which must be communicated appropriately to stakeholders and decision-makers when interpreting results from an LCA. The new approaches were found to challenge aspects of the traditional LCA approach, particularly the division between the life cycle inventory and impact assessment, and the assumption of linearity between scale and impacts when deriving characterisation factors. The presentations from the DF-67 are available for download (www.lcaforum.ch), and video recordings can be accessed online (http://www.video.ethz.ch/events/lca/2017/autumn/67th.html).

13.
A striking way in which humans differ from non-human primates is in their ability to represent numerical quantity using abstract symbols and to use these 'mental tools' to perform skills such as exact calculations. How do functional brain circuits for the symbolic representation of numerical magnitude emerge? Do neural representations of numerical magnitude change as a function of development and the learning of mental arithmetic? Current theories suggest that cultural number symbols acquire their meaning by being mapped onto non-symbolic representations of numerical magnitude. This Review provides an evaluation of this contention and proposes hypotheses to guide investigations into the neural mechanisms that constrain the acquisition of cultural representations of numerical magnitude.

14.
Graphs are rapidly becoming a powerful and ubiquitous tool for the analysis of protein structure and for event detection in dynamical protein systems. Despite their rise in popularity, however, the graph representations employed to date have shared certain features and parameters that have not been thoroughly investigated. Here, we examine and compare variations on the construction of graph nodes and graph edges. We propose a graph representation based on chemical groups of similar atoms within a protein, rather than residues or secondary structure, and find that even very simple analyses using this representation form a powerful event detection system with significant advantages over residue-based graph representations. We additionally compare graph edges based on contact probability with edges based on contact strength, and analyses of the entire graph structure with an alternative, more computationally tractable node-based analysis. We develop the simplest useful technique for analyzing protein simulations based on these comparisons and use it to shed light on the speed with which static protein structures adjust to a solvated environment at room temperature in simulation.
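A minimal sketch of the two edge definitions compared above, assuming atoms have already been grouped into chemical groups with 3-D centroids; all coordinates here are synthetic, and the cutoff is an arbitrary placeholder.

```python
# Sketch: contact-probability edges over a synthetic group trajectory,
# plus a cheap node-based summary instead of whole-graph analysis.
import numpy as np

rng = np.random.default_rng(1)
n_groups, n_frames, cutoff = 30, 100, 5.0  # cutoff in angstroms (assumed)

# Synthetic trajectory: centroids of chemical groups over MD frames
traj = rng.uniform(0, 20, size=(n_frames, n_groups, 3))

# Contact-probability edge weight: fraction of frames within the cutoff
diff = traj[:, :, None, :] - traj[:, None, :, :]
dist = np.linalg.norm(diff, axis=-1)          # (frames, n, n) distances
contact_prob = (dist < cutoff).mean(axis=0)   # (n, n) weighted adjacency
np.fill_diagonal(contact_prob, 0.0)

# Node-based analysis: per-group contact degree, cheaper than analysing
# the entire graph structure
degree = contact_prob.sum(axis=1)
print("most connected group:", int(degree.argmax()))
```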

15.
Background, Goal and Scope

A complete life cycle assessment (LCA) always requires several iterations of goal and scope definition, inventory analysis and impact analysis. This requires the retrieval and collection of inventory information on all processes with which a product, or any part of it, comes into either direct or indirect contact. As a result, the data required for LCA are vast, uncertain and therefore complex. Up until now, unfortunately, and as far as the authors are aware, little computer-assisted aid has been available in the systems currently used in either academia or industry to support life cycle (LC) related data representation, other than the traditional methods of tables, xy-graphs, bar charts, pie charts and various 3-D variants of these, which are difficult for humans to interpret.

Main Features

Benefiting from the synergy of the latest developments in both visualization techniques and computer technology, the authors introduce a new information representation approach based on glyphs. These exploit the human perceptual capability for distinguishing spatial structures and shapes presented in different colors and textures. Within this approach, the targeted issues are representing life cycle related information at a glance, filtering out data so as to reduce the information load, and representing data features such as uncertainty and estimated errors.

Results

Advanced information visualization, the process which transforms and maps data to a visual representation, employs the glyphs rendered here to create abstract representations of multi-dimensional data sets. Different parameters describing the spatial, geometrical and retinal properties of such glyphs, defining their position, orientation, shape, color, etc., can be used to encode more information in a comprehensible format, thus allowing multiple values to be encoded in those glyph parameters. The natural function of glyphs, linking (mapped) data within a known context with the attributes that in turn control their visualization, is believed capable of providing sufficient functionality to interactively support designers and LCA experts performing life cycle inventory (LCI) information analysis, so that they can operate faster and more efficiently than at present.

Conclusions

Within this paper, the first of a small series on efficient information visualization in LCA, the motivation for and essential basic principles of the approach are introduced and discussed. With this technique, the essential characteristics of data, relationships, patterns, trends, etc. can be represented in a much better structured and more compact manner, thus rendering them clearer and more meaningful. It is hoped that a continuing interest in this work, combined with improved collaboration with industrial partners, will eventually provide the grounds for translating this novel approach into an efficient and reliable tool enhancing applied LCA in practice on a broader base.

Outlook

More technical details of the approach and its implementation will be introduced and discussed in the following papers, and examples will be offered demonstrating its application and a first experimental translation into practice.

16.
The extensive germplasm resource collections that are now available for major crop plants and their wild relatives will increasingly provide valuable biological and bioinformatics resources for plant physiologists and geneticists to dissect the molecular basis of key traits and to develop highly adapted plant material to sustain future breeding programs. A key to the efficient deployment of these resources is the development of information systems that enable the biological information collected and stored for these plant lines to be integrated with the molecular information now becoming available through high-throughput genomics and post-genomics technologies. The GERMINATE database has been designed to hold a diverse variety of data types, ranging from molecular to phenotypic, and to allow querying across such data for any plant species. Data are stored in GERMINATE in a technology-independent manner, such that new technologies can be accommodated in the database as they emerge, without modification of the underlying schema. Users can access data in GERMINATE databases either via a lightweight Perl-CGI Web interface or via the more complex Genomic Diversity and Phenotype Connection software. GERMINATE is released under the GNU General Public License and is available at http://germinate.scri.sari.ac.uk/germinate/.
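A hedged sketch of one storage pattern that achieves this kind of technology independence: generic entity/attribute/value rows let a new data type be added without changing the schema. The table and column names below are illustrative and are not GERMINATE's actual schema.

```python
# Sketch: an entity-attribute-value layout where molecular and phenotypic
# data share one table, so new data types need no schema change.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE germplasm (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE datum (
    germplasm_id INTEGER REFERENCES germplasm(id),
    attribute    TEXT,   -- e.g. 'plant_height_cm', 'SSR_marker_Bmag0125'
    value        TEXT    -- stored as text; typed on the way out
);
""")
db.execute("INSERT INTO germplasm VALUES (1, 'Hordeum line 42')")
db.executemany("INSERT INTO datum VALUES (?, ?, ?)", [
    (1, "plant_height_cm", "87.5"),       # phenotypic datum
    (1, "SSR_marker_Bmag0125", "182bp"),  # molecular datum, same table
])

for row in db.execute("SELECT attribute, value FROM datum WHERE germplasm_id=1"):
    print(row)
```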

17.
The life cycle environmental profile of energy-consuming products, such as air conditioning, is dominated by the products' use phase. Different user behavior patterns can therefore yield large differences in the results of a cradle-to-grave assessment. Although this variation and uncertainty is increasingly recognized, it often remains poorly characterized in life cycle assessment (LCA) studies. Today, pervasive sensing presents the opportunity to collect rich data sets and improve the profiling of use-phase parameters, in turn facilitating the quantification and reduction of this uncertainty in LCA. This study examined the case of energy use in building cooling systems, focusing on global warming potential (GWP) as the impact category. In Singapore, building cooling systems (air conditioning) consume up to 37% of national electricity demand. Lack of consideration of variation in use-phase interaction leads to oversized designs, wasted energy, and therefore avoidable GWP. Using a high-resolution data set derived from sensor observations, the energy use and behavior patterns of single-office occupants were characterized by probabilistic distributions. The interindividual variability and use-phase variables were propagated in a stochastic model for the life cycle of air-conditioning systems and simulated by way of Monte Carlo analysis. Analysis of the generated uncertainties identified plausible reductions in global warming impact through modifying user interaction. Designers concerned about the environmental profile of their products or systems need better representation of the underlying variability in use-phase data to evaluate the impact. This study suggests that such data can be reliably provided and incorporated into the life cycle through the proliferation of pervasive sensing, which can only continue to benefit future LCA.
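A minimal sketch of use-phase uncertainty propagation of the kind described, assuming sensor-derived distributions for occupant behaviour; every number below is a placeholder, not the study's Singapore data.

```python
# Sketch: Monte Carlo propagation of behavioural variability into
# use-phase global warming potential for an air-conditioning unit.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

hours_per_day = rng.normal(9.0, 2.0, n).clip(0, 24)  # occupancy-driven use
setpoint_factor = rng.triangular(0.85, 1.0, 1.2, n)  # behavioural spread
power_kw = 1.2                                       # rated power (assumed)
grid_ef = 0.41                                       # kg CO2e/kWh (assumed)

annual_kwh = hours_per_day * 260 * power_kw * setpoint_factor
use_phase_gwp = annual_kwh * grid_ef * 10            # 10-year lifetime

print(f"GWP (kg CO2e): p5={np.percentile(use_phase_gwp, 5):,.0f}, "
      f"p95={np.percentile(use_phase_gwp, 95):,.0f}")
```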

18.
Aim, Scope and Background

The data-intensive nature of life cycle assessment (LCA), even for non-complex products, quickly leads to the use of various methods of representing data in forms other than written characters. Up until now, traditional representations of life cycle inventory (LCI) data and environmental impact analysis (EIA) results have usually been based on 2D and 3D variants of simple tables, bar charts, pie charts and x/y graphs. However, these representation methods do not sufficiently address aspects such as representing life cycle inventory information at a glance, filtering out data while summarizing the filtered data (so as to reduce the information load), and representing data errors and uncertainty.

Main Features

This new information representation approach, with its glyph-based visualization method, addresses the specific problems outlined above, which are encountered when analyzing LCA- and EIA-related information. In particular, support for multi-dimensional information representation, reduction of information load, and explicit data feature propagation are provided on an interactive, computer-aided basis.

Results

Three-dimensional, interactive geometric objects, so-called OM-glyphs, were used in the visualization method introduced here to represent LCA-related information in a multi-dimensional information space. This representation is defined by control parameters, which in turn represent the spatial, geometric and retinal properties of glyphs and glyph formations. All valid analysis scenarios can be visualized; these consist of combinations of items for the material and energy inventories, environmental items, life cycle phases and products, or their parts and components. Individual visualization scenarios, once computed and rendered on a computer screen, can then be interactively modified in terms of visual viewpoint, size, spatial location and the detail of the data represented, as needed. This helps to increase the speed, efficiency and quality of the assessment, while at the same time considerably reducing mental load, owing to the more structured manner in which information is presented to the human expert.

Conclusions

The previous paper in this series discussed the motivation for a new approach to efficient information visualization in LCA and introduced the essential basic principles. This second paper offers more insight into and discussion of the technical details and the framework developed. To aid understanding of the visualization method presented, examples are given. The main purpose of the examples, as already indicated, is to demonstrate and make transparent the mapping of LCA-related data and their contexts to glyph parameters. Those glyph parameters, in turn, are used to generate a novel form of sophisticated information representation which is transparent, clear and compact, features which cannot be achieved with any traditional representation scheme.

Outlook

Final technical details of this approach and its framework will be presented and discussed in the next paper, together with theoretical and practical issues related to applying this visualization method to the computed life cycle inventory data of an actual industrial product.
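A minimal 2-D sketch of the glyph idea using matplotlib, under an assumed mapping (x = life cycle phase, y = mass flow, glyph size = impact, glyph colour = relative uncertainty). The data and the mapping are invented for illustration; the paper's OM-glyphs are richer, interactive 3-D objects.

```python
# Sketch: encode four data dimensions per inventory item in one glyph.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(3)
phase = np.repeat([0, 1, 2], 4)        # production, use, end-of-life
mass = rng.uniform(1, 10, 12)          # kg per functional unit
impact = rng.uniform(0.1, 5, 12)       # kg CO2e (drives glyph size)
uncert = rng.uniform(0.05, 0.5, 12)    # coefficient of variation (colour)

fig, ax = plt.subplots()
sc = ax.scatter(phase, mass, s=impact * 60, c=uncert, cmap="viridis")
ax.set_xticks([0, 1, 2])
ax.set_xticklabels(["production", "use", "end-of-life"])
ax.set_ylabel("mass flow (kg)")
fig.colorbar(sc, label="relative uncertainty")
plt.show()
```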

19.

Purpose

Some LCA software tools use precalculated aggregated datasets because they make LCA calculations much quicker. However, these datasets pose problems for uncertainty analysis. Even when aggregated dataset parameters are expressed as probability distributions, each dataset is sampled independently. This paper explores why independent sampling is incorrect and proposes two techniques to account for dependence in uncertainty analysis. The first is based on an analytical approach, while the other uses precalculated results sampled dependently.

Methods

The algorithm for generating arrays of dependently presampled aggregated inventories and their LCA scores is described. These arrays are used to calculate the correlation across all pairs of aggregated datasets in two ecoinvent LCI databases (2.2, 3.3 cutoff). The arrays are also used in the dependently presampled approach. The uncertainty of LCA results is calculated under different assumptions and using four different techniques and compared for two case studies: a simple water bottle LCA and an LCA of burger recipes.

Results and discussion

The meta-analysis of two LCI databases shows that there is no single correct approximation of correlation between aggregated datasets. The case studies show that the uncertainty of single-product LCA using aggregated datasets is usually underestimated when the correlation across datasets is ignored and that the magnitude of the underestimation is dependent on the system being analysed and the LCIA method chosen. Comparative LCA results show that independent sampling of aggregated datasets drastically overestimates the uncertainty of comparative metrics. The approach based on dependently presampled results yields results functionally identical to those obtained by Monte Carlo analysis using unit process datasets with a negligible computation time.

Conclusions

Independent sampling should not be used for comparative LCA. Moreover, the use of a one-size-fits-all correction factor to correct the calculated variability under independent sampling, as proposed elsewhere, is generally inadequate. The proposed approximate analytical approach is useful to estimate the importance of the covariance of aggregated datasets but not for comparative LCA. The approach based on dependently presampled results provides quick and correct results and has been implemented in EcodEX, a streamlined LCA software used by Nestlé. Dependently presampled results can be used for streamlined LCA software tools. Both presampling and analytical solutions require a preliminary one-time calculation of dependent samples for all aggregated datasets, which could be centrally done by database providers. The dependent presampling approach can be applied to other aspects of the LCA calculation chain.
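A hedged sketch of dependent presampling as described above: the shared upstream parameters are sampled once, every aggregated dataset's score array is derived from the same draws, and the stored arrays are reused for comparative metrics. The two "datasets" below are invented stand-ins for precalculated aggregated inventories.

```python
# Sketch: dependently presampled aggregated scores vs independent sampling.
import numpy as np

rng = np.random.default_rng(11)
n = 5_000

# Shared upstream process (e.g. electricity), sampled once
elec_gwp = rng.lognormal(np.log(0.5), 0.2, n)   # kg CO2e/kWh (assumed)

# Dependently presampled aggregated scores: both depend on the same draws
dataset_a = 2.0 * elec_gwp + rng.normal(0.3, 0.05, n)
dataset_b = 1.8 * elec_gwp + rng.normal(0.6, 0.05, n)

# Comparative metric from the paired arrays: correlation is preserved
p_a_better = (dataset_a < dataset_b).mean()
print(f"corr={np.corrcoef(dataset_a, dataset_b)[0, 1]:.2f}, "
      f"P(A < B)={p_a_better:.2f}")

# Independent sampling (shuffling one array) breaks the pairing and
# overstates the spread of the comparative difference A - B
diff_dep = dataset_a - dataset_b
diff_ind = dataset_a - rng.permutation(dataset_b)
print(f"std(A-B) dependent={diff_dep.std():.3f}, "
      f"independent={diff_ind.std():.3f}")
```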

20.