Similar literature: 20 similar records retrieved.
1.
2.
This article evaluates the implications of uncertainty in the life cycle (LC) energy efficiency and greenhouse gas (GHG) emissions of rapeseed oil (RO) as an energy carrier displacing fossil diesel (FD). Uncertainties addressed include parameter uncertainty as well as scenario uncertainty concerning how RO coproduct credits are accounted for (uncertainty due to modeling choices). We have carried out an extensive data collection to build an LC inventory accounting for parameter uncertainty. Different approaches for carbon stock changes associated with converting set-aside land to rapeseed cultivation have been considered, which result in different values: from −0.25 t C/ha·yr (carbon uptake by the soil, in tonnes per hectare per year) to 0.60 t C/ha·yr (carbon emission). Energy renewability efficiency and GHG emissions of RO are presented, which show the influence of parameter versus scenario uncertainty. Primary energy savings and avoided GHG emissions when RO displaces FD have also been calculated: avoided GHG emissions show considerably higher uncertainty than energy savings, mainly due to land use (nitrous oxide emissions from soil) and land use conversion (carbon stock changes). Results demonstrate the relevance of applying uncertainty approaches; emphasize the need to reduce uncertainty in environmental life cycle modeling, particularly in GHG emissions calculation; and show the importance of integrating uncertainty into the interpretation of results.
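The distinction between parameter uncertainty (propagated through probability distributions) and scenario uncertainty (handled as discrete modeling choices, such as the carbon stock change on converted set-aside land) can be illustrated with a minimal Monte Carlo sketch. All distributions and values below are hypothetical placeholders rather than the study's inventory data; only the −0.25 to 0.60 t C/ha·yr carbon stock range is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # Monte Carlo draws

# --- parameter uncertainty (hypothetical, illustrative distributions) ---
yield_ro = rng.normal(1.3, 0.15, n)              # t rapeseed oil per ha·yr
n2o_soil = rng.lognormal(np.log(1.5), 0.5, n)    # t CO2-eq/ha·yr from soil N2O
ghg_fd   = rng.normal(3.8, 0.2, n)               # t CO2-eq per t diesel-equivalent displaced

# --- scenario uncertainty: carbon stock change on converted set-aside land ---
scenarios = {"soil carbon uptake": -0.25 * 44 / 12,   # t C/ha·yr -> t CO2/ha·yr
             "soil carbon loss":    0.60 * 44 / 12}

for name, d_carbon in scenarios.items():
    avoided = yield_ro * ghg_fd - (n2o_soil + d_carbon)  # t CO2-eq avoided per ha·yr
    lo, mid, hi = np.percentile(avoided, [2.5, 50, 97.5])
    print(f"{name:18s}: median {mid:5.2f}, 95% interval [{lo:5.2f}, {hi:5.2f}] t CO2-eq/ha·yr")
```

Running each discrete scenario over the full parameter sample keeps the two kinds of uncertainty visibly separate in the reported intervals.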

3.
4.
It was hypothesized in an earlier work that sensory perception can occur only when the perceiving system is uncertain about the nature of the event being perceived. In the absence of any uncertainty, perception will not take place. The response of the sensory afferent neuron (impulse transmission rate) was calculated using Shannon's measure of uncertainty or entropy. It will now be shown that when the event being perceived is the position and momentum of a particle, Shannon's measure of uncertainty leads to the Heisenberg uncertainty relationship.
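One standard route from Shannon's measure to the Heisenberg relation goes through the entropic uncertainty relation for position and momentum; it is sketched below for reference and is not necessarily the derivation used in the paper.

```latex
H = -\sum_i p_i \log p_i , \qquad
h(x) = -\int f(x)\,\ln f(x)\,dx
\quad \text{(Shannon entropy; differential entropy)}

h(x) + h(p) \;\ge\; \ln(\pi e \hbar)
\quad \text{(entropic uncertainty relation)}

h(x) \;\le\; \tfrac{1}{2}\ln\!\bigl(2\pi e\,\sigma_x^{2}\bigr)
\quad \text{(a Gaussian maximizes entropy for fixed variance)}

\Longrightarrow\; \ln\!\bigl(2\pi e\,\sigma_x\sigma_p\bigr) \ge \ln(\pi e \hbar)
\;\Longrightarrow\; \sigma_x\,\sigma_p \;\ge\; \frac{\hbar}{2}
```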

5.
6.
This paper is a commentary on Hattis' three laws of risk assessment. The first law, that "application of standard statistical techniques to a single data set will nearly always reveal only a trivial proportion of the overall uncertainty in the parameter value," is illustrated both by examining the relevance of animal models to man and by a retrospective view of exposure conditions whose importance has only recently been recognized. The second law, that "any estimate of the uncertainty of a parameter value will always itself be more uncertain than the estimate of the parameter value," is examined in terms of a model addressing multiple levels of uncertainty, e.g., the "uncertainty in the uncertainty." An argument is made that the number of terms needed for convergence of this uncertainty hierarchy depends on how far from the central tendency of the risk distribution one goes: the further out in the tail of the distribution, the more terms in the uncertainty hierarchy are needed for convergence. The third law, that "nearly all parameter distributions look lognormal, as long as you don't look too closely," is illustrated with a number of examples. Several reasons are put forward as to why risk variables appear so frequently to be lognormal. Recognition of the lognormal character of variable distributions can provide insight into the proper form for the associated uncertainty distributions.
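One reason commonly offered for the third law is multiplicative: a risk variable formed as the product of many independent positive factors tends toward lognormality in its body, while its extreme tails can still deviate. The sketch below illustrates this with arbitrary uniform factors; the factor count, ranges, and sample size are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# A risk variable built as a product of independent positive factors
# (e.g. emission rate x dilution x intake x uptake): the multiplicative
# central limit theorem pushes it toward lognormality.
n_factors, n_samples = 8, 20_000
factors = rng.uniform(0.5, 2.0, size=(n_factors, n_samples))
risk = factors.prod(axis=0)

# Fit a lognormal and compare quantiles: the body usually matches well,
# while the far tails - the part that matters most for risk - can deviate.
shape, loc, scale = stats.lognorm.fit(risk, floc=0)
for q in (0.50, 0.95, 0.99, 0.999):
    emp = np.quantile(risk, q)
    fit = stats.lognorm.ppf(q, shape, loc, scale)
    print(f"quantile {q:5.3f}: empirical {emp:6.2f}   lognormal fit {fit:6.2f}")
```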

7.
Emotional satisfaction cannot be increased above "normal" (the same normal as the caveman's) for any length of time, but the wealth and consumption style of modern civilization may systematically reduce some people's satisfaction below normal. Hyperbolic discounting of delayed, expected rewards suggests causes for this reduction in humans, and for how we often respond to it, while conventional exponential discounting does not. Hyperbolic discounting has been well demonstrated by four experimental routes, and there is moderate evidence that it motivates impulse control by an intertemporal bargaining technique, proposed as the mechanism of willpower. A theoretical model is described in which emotion is a reward-dependent behavior rather than a stimulus-bound respondent. Positive emotion is then limited by premature satiation of the appetite for it, a relentless process motivated by the impatience that is described by hyperbolic discount curves. This satiation can be restrained only by using adequately rare and unpredictable occasions as cues for the emotion. Willpower not only is helpless against the urge for premature satiation, but it exacerbates the satiation problem by making anticipation more thorough. The result is an asymmetrical contest between systematic attempts to vouchsafe satisfying events and impetuous attempts to put them at risk. Despite their adversarial relationship, both may to some extent be in the person's long-range interest.
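The contrast between exponential and hyperbolic discounting that drives the argument can be shown numerically: an exponential discounter's preference between a smaller-sooner and a larger-later reward never reverses as the delay shrinks, while a hyperbolic discounter's does. The discount functions are the standard textbook forms; the amounts, delays, and rate k below are illustrative, not parameters from the paper.

```python
import numpy as np

def exponential(amount, delay, k=0.1):
    return amount * np.exp(-k * delay)

def hyperbolic(amount, delay, k=0.1):
    return amount / (1.0 + k * delay)

# Smaller-sooner (SS) vs. larger-later (LL) reward, valued from vantage
# points t days before the SS reward becomes available.
ss_amount, ll_amount, ll_extra_delay = 50.0, 100.0, 20.0  # illustrative numbers

for t in (60, 5, 1):
    for name, discount in (("exponential", exponential), ("hyperbolic", hyperbolic)):
        v_ss = discount(ss_amount, t)
        v_ll = discount(ll_amount, t + ll_extra_delay)
        choice = "SS" if v_ss > v_ll else "LL"
        print(f"t={t:3d}d  {name:11s}: V(SS)={v_ss:6.2f}  V(LL)={v_ll:6.2f} -> {choice}")
```

With these parameters the exponential chooser picks the same option at every vantage point, while the hyperbolic chooser prefers the larger-later reward from a distance but switches to the smaller-sooner reward as it draws near, which is the preference reversal the abstract builds on.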

8.
Uncertainty calculation in life cycle assessments   (total citations: 1; self-citations: 0; citations by others: 1)
Goal and Background: Uncertainty is commonly not taken into account in LCA studies, which downgrades their usability for decision support. One often-stated reason is a lack of method. The aim of this paper is to develop a method for calculating the uncertainty propagation in LCAs in a fast and reliable manner. Approach: The method is developed in a model that reflects the calculation of an LCA. For calculating the uncertainty, the model combines approximation formulas and Monte Carlo simulation. It is based on virtual data that distinguishes true values from random errors or uncertainty, and hence allows one to compare the performance of error propagation formulas against simulation results. The model is developed for a linear chain of processes, but extensions covering branched and looped product systems are also made and described. Results: The paper proposes a combined use of approximation formulas and Monte Carlo simulation for calculating uncertainty in LCAs, developed primarily for the sequential approach. During the calculation, a parameter observation controls the performance of the approximation formulas; quantitative threshold values are given in the paper. The combination thus transcends the drawbacks of simulation and approximation. Conclusions and Outlook: The uncertainty question is a true jigsaw puzzle for LCAs, and the method presented in this paper may serve as one piece in solving it. It may thus foster a sound use of uncertainty assessment in LCAs. Analysing proper management of input uncertainty with suitable sampling and estimation techniques; applying the approach to real case studies; implementing it in LCA software so that the proposed combined uncertainty model is applied automatically; and, on the other hand, investigating how people do decide, and should decide, when their decisions rely on explicitly uncertain LCA outcomes: these are all neighbouring puzzle pieces inviting further work.
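A minimal sketch of the combined strategy the paper describes: use a fast first-order propagation formula (for a product of independent process factors, relative variances add) while the input coefficients of variation stay small, and switch to Monte Carlo simulation when any exceeds a threshold. The threshold value, process factors, and normal input distributions below are illustrative assumptions, not the quantitative thresholds given in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def lca_uncertainty(means, rel_sd, cv_threshold=0.2, n=100_000):
    """Relative SD of a product of independent process factors.

    Uses the first-order approximation (relative variances add) when all
    input CVs are below cv_threshold, and falls back to Monte Carlo
    simulation otherwise.
    """
    means, rel_sd = np.asarray(means), np.asarray(rel_sd)
    if rel_sd.max() < cv_threshold:
        return np.sqrt((rel_sd ** 2).sum()), "approximation"
    samples = rng.normal(means, means * rel_sd, size=(n, len(means)))
    result = samples.prod(axis=1)
    return result.std() / result.mean(), "Monte Carlo"

# A short linear chain of processes (hypothetical inventory factors).
for rel in ([0.05, 0.10, 0.08], [0.05, 0.40, 0.08]):
    sd, method = lca_uncertainty([2.0, 0.8, 5.0], rel)
    print(f"{method:13s}: relative SD = {sd:.1%}")
```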

9.
A grand challenge in the proteomics and structural genomics era is the prediction of protein structure, including identification of those proteins that are partially or wholly unstructured. A number of predictors for identification of intrinsically disordered proteins (IDPs) have been developed over the last decade, but none can be taken as fully reliable on its own. Using a single model for prediction is typically inadequate, because prediction based only on the most accurate model ignores model uncertainty. In this paper, we present an empirical method to specify and measure the uncertainty associated with disorder predictions. In particular, we analyze the uncertainty in the reference model itself and the uncertainty in the data. This is achieved by training a set of models and developing several meta-predictors on top of them. The best meta-predictor achieved comparable or better results than any single model, suggesting that incorporating different aspects of protein disorder prediction is important for the disorder prediction task. In addition, the best meta-predictor had more balanced sensitivity and specificity than any individual model. We also assessed how disorder predictions change as a function of changes in the protein sequence. For collections of homologous sequences, we found that mutations caused many residues predicted as disordered to flip to predicted order, while the reverse was observed much less frequently. These results suggest that disorder tendencies are more sensitive to allowed mutations than structure tendencies, and that the conservation of disorder is indeed less stable than the conservation of structure. Availability: the five meta-predictors and four single models developed for this study will be freely accessible for non-commercial use.
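The stacking idea behind a meta-predictor can be sketched simply: per-residue disorder probabilities from several base models become features of a second-stage classifier. The synthetic data, three-model setup, and logistic-regression combiner below are placeholders for illustration; the study's actual base models and meta-predictors are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(7)

# Synthetic stand-in for per-residue disorder probabilities from three base
# predictors (real inputs would come from the individual models).
n_residues = 5_000
truth = rng.random(n_residues) < 0.3                     # True = disordered residue
base_preds = np.column_stack([
    np.clip(truth + rng.normal(0, s, n_residues), 0, 1)  # models with different noise levels
    for s in (0.45, 0.35, 0.30)
])

X_tr, X_te, y_tr, y_te = train_test_split(base_preds, truth, test_size=0.3, random_state=0)

# Meta-predictor: a logistic regression stacked on top of the base outputs.
meta = LogisticRegression().fit(X_tr, y_tr)

for i in range(base_preds.shape[1]):
    acc = balanced_accuracy_score(y_te, X_te[:, i] > 0.5)
    print(f"base model {i}: balanced accuracy = {acc:.3f}")
print(f"meta-predictor: balanced accuracy = "
      f"{balanced_accuracy_score(y_te, meta.predict(X_te)):.3f}")
```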

10.
Using the Australian weed risk assessment (WRA) model as an example, we applied a combination of bootstrapping and Bayesian techniques as a means of explicitly estimating the posterior probability of weediness as a function of an import risk assessment model screening score. Our approach provides estimates of uncertainty around model predictions, after correcting for verification bias arising from the original training dataset having a higher proportion of weed species than would be the norm, and incorporates uncertainty in current knowledge of the prior (base-rate) probability of weediness. The results confirm the high sensitivity of the posterior probability of weediness to the base-rate probability of weediness of plants proposed for importation, and demonstrate how uncertainty in this base-rate probability manifests itself in uncertainty surrounding predicted probabilities of weediness. This quantitative estimate of the weediness probability posed by taxa classified using the WRA model, including estimates of uncertainty around this probability for a given WRA score, would enable bio-economic modelling to contribute to the decision process, should this avenue be pursued. Regardless of whether or not this avenue is explored, the explicit estimates of uncertainty around weed classifications will enable managers to make better-informed decisions regarding risk. When viewed in terms of the likelihood of weed introduction, the current WRA model outcomes of ‘accept’, ‘further evaluate’, or ‘reject’, whilst not always accurate in terms of weed classification, appear consistent with a high expected cost of mistakenly introducing a weed. The methods presented have wider application to the quantitative prediction of invasive species in situations where the base-rate probability of invasiveness is subject to uncertainty and the accuracy of the screening test is imperfect.
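The core calculation, the posterior probability of weediness for a given screening outcome under an uncertain base rate, follows Bayes' rule, and a small simulation makes the sensitivity to the prior explicit. The sensitivity, specificity, and base-rate distributions below are illustrative Beta priors, not the bootstrapped WRA estimates from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000

# Illustrative screening-test characteristics for a given WRA score threshold
# (the real values would come from the bootstrapped WRA model, not from here).
sensitivity = rng.beta(90, 10, n)      # P(reject | weed)
specificity = rng.beta(70, 30, n)      # P(accept | non-weed)

# Uncertain base-rate (prior) probability that a proposed import is a weed.
base_rate = rng.beta(5, 45, n)         # centred near 10%, but uncertain

# Posterior probability of weediness given a "reject" outcome, corrected for
# the base rate (this is where the training-set verification bias matters).
posterior = (sensitivity * base_rate) / (
    sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
)

lo, med, hi = np.percentile(posterior, [2.5, 50, 97.5])
print(f"P(weed | reject): median {med:.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
```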

11.
Ecosystem nutrient budgets often report values for pools and fluxes without any indication of uncertainty, which makes it difficult to evaluate the significance of findings or make comparisons across systems. We present an example, implemented in Excel, of a Monte Carlo approach to estimating error in calculating the N content of vegetation at the Hubbard Brook Experimental Forest in New Hampshire. The total N content of trees was estimated at 847 kg ha⁻¹ with an uncertainty of 8%, expressed as the standard deviation divided by the mean (the coefficient of variation). The individual sources of uncertainty were as follows: uncertainty in allometric equations (5%), uncertainty in tissue N concentrations (3%), uncertainty due to plot variability (6%, based on a sample of 15 plots of 0.05 ha), and uncertainty due to tree diameter measurement error (0.02%). In addition to allowing estimation of uncertainty in budget estimates, this approach can be used to assess which measurements should be improved to reduce uncertainty in the calculated values. This exercise was possible because the uncertainty in the parameters and equations that we used was made available by previous researchers. It is important to provide the error statistics with regression results if they are to be used in later calculations; archiving the data makes resampling analyses possible for future researchers. When conducted using a Monte Carlo framework, the analysis of uncertainty in complex calculations does not have to be difficult and should be standard practice when constructing ecosystem budgets.
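A simplified version of the calculation, treating each reported source of uncertainty as an independent multiplicative error around the 847 kg ha⁻¹ estimate, reproduces a coefficient of variation of roughly 8%. The original analysis was carried out in Excel with plot-level resampling; the sketch below is a compressed numpy equivalent that uses only the relative uncertainties quoted in the abstract.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

mean_n = 847.0  # kg N per ha, total N content of trees (value from the abstract)

# Relative uncertainties reported for each source (coefficient of variation).
cv = {"allometric equations": 0.05,
      "tissue N concentration": 0.03,
      "plot-to-plot variability": 0.06,
      "diameter measurement": 0.0002}

# Monte Carlo: apply each source as an independent multiplicative error.
total = np.full(n, mean_n)
for c in cv.values():
    total *= rng.normal(1.0, c, n)

print(f"mean = {total.mean():.0f} kg N/ha")
print(f"CV   = {total.std() / total.mean():.1%}  "
      f"(quadrature estimate: {np.sqrt(sum(c ** 2 for c in cv.values())):.1%})")
```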

12.
13.
14.
Recent years have seen increasing interest in life cycle greenhouse gas emissions accounting, also known as carbon footprinting, due to drivers such as transportation fuels policy and climate-related eco-labels, sometimes called carbon labels. However, it remains unclear whether applications of greenhouse gas accounting, such as carbon labels, are supportable given the level of precision that is possible with current methodology and data. The goal of this work is to further the understanding of quantitative uncertainty assessment in carbon footprinting through a case study of a rackmount electronic server. Production phase uncertainty was found to be moderate (±15%), though with a high likelihood of being significantly underestimated given the limitations in available data for assessing uncertainty associated with temporal variability and technological specificity. Individual components or subassemblies showed varying levels of uncertainty due to differences in parameter uncertainty (i.e., agreement between data sets) and variability between production or use regions. The use phase displayed a considerably higher uncertainty (±50%) than production due to uncertainty in the useful lifetime of the server, variability in electricity mixes in different market regions, and use profile uncertainty. Overall model uncertainty was found to be ±35% for the whole life cycle, a substantial amount given that the method is already being used to set policy and make comparative environmental product declarations. Future work should continue to combine the increasing volume of available data to ensure consistency and maximize the credibility of the methods of life cycle assessment (LCA) and carbon footprinting. However, for some energy-using products it may make more sense to increase focus on energy efficiency and use phase emissions reductions rather than attempting to quantify and reduce the uncertainty of the relatively small production phase.
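How phase-level uncertainties combine into a whole-life-cycle figure can be illustrated with a short simulation. The ±15% and ±50% relative uncertainties are the ones quoted above; the absolute phase totals and the normal error model are hypothetical, so the combined figure will not exactly match the reported ±35%.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 50_000

# Hypothetical phase totals (kg CO2-eq) for a rackmount server; the relative
# uncertainties are those quoted in the abstract, the means are illustrative.
production_mean, production_rel = 1_000.0, 0.15
use_mean, use_rel               = 3_000.0, 0.50

production = rng.normal(production_mean, production_mean * production_rel, n)
use        = rng.normal(use_mean, use_mean * use_rel, n)
life_cycle = production + use

print(f"life-cycle mean: {life_cycle.mean():.0f} kg CO2-eq")
print(f"life-cycle relative uncertainty: ±{life_cycle.std() / life_cycle.mean():.0%}")
print(f"share of variance from use phase: "
      f"{use.std() ** 2 / (production.std() ** 2 + use.std() ** 2):.0%}")
```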

15.
Uncertainty in source partitioning using stable isotopes   (total citations: 11; self-citations: 0; citations by others: 11)
Stable isotope analyses are often used to quantify the contribution of multiple sources to a mixture, such as proportions of food sources in an animal's diet, or C3 and C4 plant inputs to soil organic carbon. Linear mixing models can be used to partition two sources with a single isotopic signature (e.g., δ13C) or three sources with a second isotopic signature (e.g., δ15N). Although variability of source and mixture signatures is often reported, confidence interval calculations for source proportions typically use only the mixture variability. We provide examples showing that omission of source variability can lead to underestimation of the variability of source proportion estimates. For both two- and three-source mixing models, we present formulas for calculating variances, standard errors (SE), and confidence intervals for source proportion estimates that account for the observed variability in the isotopic signatures for the sources as well as the mixture. We then performed sensitivity analyses to assess the relative importance of: (1) the isotopic signature difference between the sources, (2) isotopic signature standard deviations (SD) in the source and mixture populations, (3) sample size, (4) analytical SD, and (5) the evenness of the source proportions, for determining the variability (SE) of source proportion estimates. The proportion SEs varied inversely with the signature difference between sources, so doubling the source difference from 2‰ to 4‰ reduced the SEs by half. Source and mixture signature SDs had a substantial linear effect on source proportion SEs. However, the population variability of the sources and the mixture are fixed and the sampling error component can be changed only by increasing sample size. Source proportion SEs varied inversely with the square root of sample size, so an increase from 1 to 4 samples per population cut the SE in half. Analytical SD had little effect over the range examined since it was generally substantially smaller than the population SDs. Proportion SEs were minimized when sources were evenly divided, but increased only slightly as the proportions varied. The variance formulas provided will enable quantification of the precision of source proportion estimates. Graphs are provided to allow rapid assessment of possible combinations of source differences and source and mixture population SDs that will allow source proportion estimates with desired precision. In addition, an Excel spreadsheet to perform the calculations for the source proportions and their variances, SEs, and 95% confidence intervals for the two-source and three-source mixing models can be accessed at http://www.epa.gov/wed/pages/models.htm.
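A compact implementation of the two-source case, using first-order propagation of the source and mixture variability into the source-proportion estimate (in the spirit of the variance formulas described above; the exact published formulas may differ in detail). The δ13C values and standard errors in the example are invented for illustration.

```python
import numpy as np

def two_source_mixing(d_mix, d_a, d_b, se_mix, se_a, se_b):
    """Two-source linear mixing model with first-order error propagation.

    d_* are mean isotopic signatures (e.g. d13C in per mil), se_* their
    standard errors. Returns the proportion of source A and its SE.
    """
    f_a = (d_mix - d_b) / (d_a - d_b)
    var_fa = (se_mix ** 2 + f_a ** 2 * se_a ** 2
              + (1 - f_a) ** 2 * se_b ** 2) / (d_a - d_b) ** 2
    return f_a, np.sqrt(var_fa)

# Illustrative numbers: C3 vs. C4 plant inputs to soil organic carbon.
f_a, se = two_source_mixing(d_mix=-22.0, d_a=-27.0, d_b=-13.0,
                            se_mix=0.3, se_a=0.4, se_b=0.4)
print(f"source A proportion: {f_a:.2f} ± {1.96 * se:.2f} (95% CI half-width)")
```

Because the source signature difference appears squared in the denominator, doubling that difference roughly halves the SE for a given proportion, consistent with the sensitivity result reported above.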

16.
17.
18.
19.
20.
Incorporating DEM Uncertainty in Coastal Inundation Mapping   (total citations: 1; self-citations: 0; citations by others: 1)
Coastal managers require reliable spatial data on the extent and timing of potential coastal inundation, particularly in a changing climate. Most sea level rise (SLR) vulnerability assessments are undertaken using the easily implemented bathtub approach, where areas adjacent to the sea and below a given elevation are mapped using a deterministic line dividing potentially inundated from dry areas. This method only requires elevation data, usually in the form of a digital elevation model (DEM). However, inherent errors in the DEM and in the spatial analysis of the bathtub model propagate into the inundation mapping. The aim of this study was to assess the impacts of spatially variable and spatially correlated elevation errors in high-spatial-resolution DEMs on coastal inundation mapping. Elevation errors were best modelled using regression-kriging. This geostatistical model takes the spatial correlation in elevation errors into account, which has a significant impact on analyses that include spatial interactions, such as inundation modelling. The spatial variability of elevation errors was partially explained by land cover and terrain variables. Elevation errors were simulated using sequential Gaussian simulation, a Monte Carlo probabilistic approach. A total of 1,000 error simulations were added to the original DEM and reclassified using a hydrologically correct bathtub method. The probability of inundation under a scenario combining a 1-in-100-year storm event with a 1 m SLR was calculated by counting the proportion of the 1,000 simulations in which a location was inundated. This probabilistic approach can be used in a risk-averse decision-making process by planning for scenarios with different probabilities of occurrence. For example, results showed that when considering a 1% exceedance probability, the inundated area was approximately 11% larger than that mapped using the deterministic bathtub approach. The probabilistic approach provides visually intuitive maps that convey the uncertainties inherent to spatial data and analysis.
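The probabilistic workflow (simulate correlated elevation error, add it to the DEM, re-apply the bathtub rule, and count inundation frequency per cell) can be sketched on a toy grid. Smoothed white noise stands in for the regression-kriging / sequential Gaussian simulation used in the study, the bathtub rule below ignores hydrological connectivity, and the DEM, water level, and error parameters are invented for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(9)

# Toy DEM (metres): a gentle slope rising away from the coast (column 0 = sea).
ny, nx = 100, 100
dem = np.tile(np.linspace(0.0, 3.0, nx), (ny, 1))

water_level = 1.8   # e.g. SLR plus storm surge for the chosen scenario
error_sd    = 0.25  # DEM vertical error (m); real values would come from validation data
n_sim       = 200

inundated_count = np.zeros_like(dem)
for _ in range(n_sim):
    # Spatially correlated error field: smoothed, rescaled white noise.
    noise = gaussian_filter(rng.normal(0.0, 1.0, dem.shape), sigma=5)
    noise *= error_sd / noise.std()
    inundated_count += (dem + noise) <= water_level

prob = inundated_count / n_sim
deterministic = dem <= water_level
print(f"deterministic bathtub area: {deterministic.mean():.1%} of cells")
print(f"area with >=1% inundation probability: {(prob >= 0.01).mean():.1%} of cells")
```

Mapping `prob` instead of the single deterministic mask gives the visually intuitive probability surface described in the abstract, and thresholding it at different exceedance probabilities supports risk-averse planning.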

