1.
2.
This work studies the impact of systematic uncertainties associated with interaction cross sections on depth dose curves determined by Monte Carlo simulations. The corresponding sensitivity factors are quantified by changing the cross sections by a given amount and determining the resulting variation in dose. The influence of total and partial photon cross sections is addressed; partial cross sections for Compton and Rayleigh scattering, the photoelectric effect, and pair production have been accounted for. The PENELOPE code was used in all simulations. It was found that photon cross section sensitivity factors depend on depth: they are positive below an equilibrium depth, negative above it, and zero at that depth. The equilibrium depths found in this work agree very well with the mean free path at the corresponding incident photon energy. Using the sensitivity factors reported here, it is possible to estimate the impact of photon cross section uncertainties on the uncertainty of Monte Carlo-determined depth dose curves.
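A compact statement of the quantity being reported may help fix ideas; the notation below is illustrative and not taken from the paper.

```latex
S(z) \;=\; \frac{\Delta D(z)/D(z)}{\Delta\sigma/\sigma},
\qquad
\frac{u_D(z)}{D(z)} \;\approx\; \lvert S(z)\rvert\,\frac{u_\sigma}{\sigma},
```

where \(D(z)\) is the dose at depth \(z\) and \(\sigma\) the perturbed (total or partial) photon cross section; \(S(z) > 0\) below the equilibrium depth, \(S(z) < 0\) above it, and \(S(z_{\mathrm{eq}}) = 0\), with \(z_{\mathrm{eq}}\) close to the photon mean free path at the incident energy.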
3.
The results of quantitative risk assessments are key factors in a risk manager's decision about whether actions to reduce risk are necessary. The extent of the uncertainty in the assessment will play a large part in the degree of confidence a risk manager has in the reported significance and probability of a given risk. The two main sources of uncertainty in such risk assessments are variability and incertitude. In this paper we use two methods, a second-order two-dimensional Monte Carlo analysis and probability bounds analysis, to investigate the impact of both types of uncertainty on the results of a food-web exposure model. We demonstrate how the full extent of uncertainty in a risk estimate can be portrayed in a way that is useful to risk managers. We show that probability bounds analysis is a useful tool for identifying the parameters that contribute most to uncertainty in a risk estimate, and how it can be used to complement established practices in risk assessment. We conclude by promoting the use of probability bounds analysis in conjunction with Monte Carlo analyses as a method for checking how plausible Monte Carlo results are in the full context of uncertainty.
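A minimal sketch of a second-order (two-dimensional) Monte Carlo, using a toy exposure model (dose = concentration × intake / body weight); the distributions and parameter values are illustrative assumptions, not those of the food-web model in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_outer, n_inner = 200, 5000   # outer loop: incertitude; inner loop: variability

dose_percentiles = []
for _ in range(n_outer):
    # Outer loop: sample epistemically uncertain distribution parameters (incertitude)
    mu_conc = rng.normal(1.0, 0.2)                 # mean of log-concentration (uncertain)
    sd_conc = rng.uniform(0.3, 0.6)                # sd of log-concentration (uncertain)
    mean_intake = rng.triangular(1.5, 2.0, 2.5)    # mean intake rate (uncertain)

    # Inner loop: sample inter-individual variability given those parameters
    conc = rng.lognormal(mu_conc, sd_conc, n_inner)       # contaminant concentration
    intake = rng.gamma(4.0, mean_intake / 4.0, n_inner)   # intake rate
    bw = rng.normal(70.0, 10.0, n_inner)                  # body weight
    dose = conc * intake / bw

    # Keep selected percentiles of the variability distribution for this outer draw
    dose_percentiles.append(np.percentile(dose, [50, 95, 99]))

dose_percentiles = np.array(dose_percentiles)
# The spread across outer draws shows how incertitude widens each variability percentile
for i, p in enumerate([50, 95, 99]):
    lo, hi = np.percentile(dose_percentiles[:, i], [2.5, 97.5])
    print(f"P{p} of variability: 95% band due to incertitude = [{lo:.3f}, {hi:.3f}]")
```

The nesting is the key design choice: the inner loop describes variability across individuals, while the outer loop shows how much that whole distribution shifts because of incertitude about its parameters.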
4.
Louis Anthony Cox Jr. 《Human and Ecological Risk Assessment》1996,2(1):150-174
The traditional q1* methodology for constructing upper confidence limits (UCLs) for the low-dose slopes of quantal dose-response functions has two limitations: (i) it is based on an asymptotic statistical result that has been shown via Monte Carlo simulation not to hold in practice for small, real bioassay experiments (Portier and Hoel, 1983); and (ii) it assumes that the multistage model (which represents cumulative hazard as a polynomial function of dose) is correct. This paper presents an uncertainty analysis approach for fitting dose-response functions to data that does not require specific parametric assumptions or depend on asymptotic results. It has the advantage that the resulting estimates of the dose-response function (and uncertainties about it) no longer depend on the validity of an assumed parametric family nor on the accuracy of the asymptotic approximation. The method derives posterior densities for the true response rates in the dose groups, rather than deriving posterior densities for model parameters, as in other Bayesian approaches (Sielken, 1991), or resampling the observed data points, as in the bootstrap and other resampling methods. It does so by conditioning constrained maximum-entropy priors on the observed data. Monte Carlo sampling of the posterior (constrained, conditioned) probability distributions generates values of response probabilities that might be observed if the experiment were repeated with very large sample sizes. A dose-response curve is fit to each such simulated dataset. If no parametric model has been specified, then a generalized representation (e.g., a power-series or orthonormal polynomial expansion) of the unknown dose-response function is fit to each simulated dataset using “model-free” methods. The simulation-based frequency distribution of all the dose-response curves fit to the simulated datasets yields a posterior distribution function for the low-dose slope of the dose-response curve. An upper confidence limit on the low-dose slope is obtained directly from this posterior distribution. This “Data Cube” procedure is illustrated with a real dataset for benzene, and is seen to produce more policy-relevant insights than does the traditional q1* methodology. For example, it shows how far apart the 90%, 95%, and 99% limits are, and reveals how uncertainty about total and incremental risk varies with dose level (typically being dominated at low doses by uncertainty about the response of the control group, and at high doses by sampling variability). Strengths and limitations of the Data Cube approach are summarized, and potential decision-analytic applications to making better-informed risk management decisions are briefly discussed.
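A sketch of the overall workflow, under stated simplifications: the bioassay counts are invented (not the benzene data), Beta posteriors per dose group stand in for the paper's constrained maximum-entropy posteriors, and the fitted curve is a non-negative quadratic (multistage-like) rather than a general model-free expansion.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# Illustrative quantal bioassay (NOT the benzene data): doses, group sizes, responders
dose = np.array([0.0, 1.0, 5.0, 25.0])
n = np.array([50, 50, 50, 50])
x = np.array([1, 3, 10, 30])

# Multistage-like form P(d) = 1 - exp(-(q0 + q1*d + q2*d^2)); the transform
# y = -log(1 - p) lets the polynomial coefficients be fit by non-negative least squares.
X = np.column_stack([np.ones_like(dose), dose, dose ** 2])

n_sim = 5000
slopes = np.empty(n_sim)
for k in range(n_sim):
    # Stand-in posterior for the true group response rates (Beta(x+1, n-x+1)),
    # used here instead of the constrained maximum-entropy posteriors of the paper.
    p = rng.beta(x + 1, n - x + 1)
    y = -np.log(np.clip(1.0 - p, 1e-12, None))
    coef, _ = nnls(X, y)
    slopes[k] = coef[1]        # low-dose slope q1 for this simulated dataset

for q in (90, 95, 99):
    print(f"{q}% upper limit on the low-dose slope: {np.percentile(slopes, q):.4g}")
```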
5.
We examine bias in Markov models of diseases, including both chronic and infectious diseases. We consider two common types of Markov disease models: ones where disease progression changes with the severity of disease, and ones where progression changes over time or with age. We find sufficient conditions for bias to exist in models with aggregated transition probabilities when compared to models with state/time-dependent transition probabilities. We also find that when aggregating data to compute transition probabilities, bias increases with the degree of data aggregation. We illustrate by examining bias in Markov models of hepatitis C, Alzheimer's disease, and lung cancer using medical data, and find that the bias can be significant, depending on the method used to aggregate the data. A key implication is that by not incorporating state/time-dependent transition probabilities, studies that use Markov models of diseases may be significantly overestimating or underestimating disease progression. This could potentially result in incorrect recommendations from cost-effectiveness studies and incorrect disease burden forecasts.
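A minimal illustration of the aggregation effect, with invented two-state numbers (alive versus progressed) rather than any of the disease models studied in the paper.

```python
import numpy as np

# Annual progression probabilities that increase with time since diagnosis (illustrative).
p_t = np.array([0.02, 0.04, 0.08, 0.15, 0.25])   # time-dependent transition probabilities
p_agg = p_t.mean()                                # aggregated (time-homogeneous) probability

# Probability of having progressed by the end of year 5 under each model
prog_time_dep = 1.0 - np.prod(1.0 - p_t)
prog_aggregated = 1.0 - (1.0 - p_agg) ** len(p_t)

print(f"time-dependent model : {prog_time_dep:.4f}")
print(f"aggregated model     : {prog_aggregated:.4f}")
print(f"bias (aggregated - time-dependent): {prog_aggregated - prog_time_dep:+.4f}")
```

Averaging the transition probabilities and then compounding them does not reproduce the compounded time-dependent probabilities, which is the source of the bias discussed above.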
6.
H. Passing 《Biometrical journal. Biometrische Zeitschrift》1984,26(6):643-654
Let categorical data from a control group and (r − 1) treated groups be given in an r × c contingency table. A simultaneous test procedure is derived for the (r − 1) hypotheses that the probabilities of all c categories do not differ between the i-th treated group and the control. For small tables and small cell frequencies it is performed exactly by generating all tables with the given marginal sums. If only two categories or two groups are given, the asymptotic distribution of the test statistic is known; otherwise its distribution may be simulated if the computational expenditure of an exact test is too large. By means of a Monte Carlo study it is shown that this method meets its nominal level more reliably and has better power than other methods.
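A sketch of the simulated (rather than exact) version of such a procedure: a permutation approximation that keeps both margins fixed and uses the maximum of the (r − 1) control-versus-treated chi-square statistics as the test statistic. The table, the statistic, and the permutation scheme are my illustrative choices, not the exact conditional enumeration of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# r x c table: row 0 = control, rows 1..r-1 = treated groups (illustrative counts)
table = np.array([[30, 15, 5],
                  [20, 20, 10],
                  [12, 18, 20]])

def chi2_stat(tab):
    """Pearson chi-square statistic for a 2 x c sub-table, skipping empty columns."""
    expected = tab.sum(axis=1, keepdims=True) * tab.sum(axis=0, keepdims=True) / tab.sum()
    mask = expected > 0
    return (((tab - expected) ** 2)[mask] / expected[mask]).sum()

def max_stat(tab):
    """Maximum chi-square over the (r - 1) control-versus-treated comparisons."""
    return max(chi2_stat(np.vstack([tab[0], tab[i]])) for i in range(1, tab.shape[0]))

# Expand the table into individual observations so group labels can be permuted
groups = np.repeat(np.arange(table.shape[0]), table.sum(axis=1))
cats = np.concatenate([np.repeat(np.arange(table.shape[1]), table[i])
                       for i in range(table.shape[0])])

obs = max_stat(table)
n_perm, exceed = 2000, 0
for _ in range(n_perm):
    perm = rng.permutation(groups)      # reshuffle group membership; margins are preserved
    tab = np.zeros_like(table)
    np.add.at(tab, (perm, cats), 1)
    exceed += max_stat(tab) >= obs

print(f"observed max statistic = {obs:.2f}; simultaneous p-value ≈ {(exceed + 1) / (n_perm + 1):.3f}")
```

Using the maximum over the (r − 1) comparisons as a single statistic is what makes the resulting p-value simultaneous rather than per-comparison.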
7.
8.
LANDIS is a spatially explicit landscape model that simulates forest landscape change under natural and anthropogenic disturbances. The model conceptualizes the landscape as a grid of equally sized cells (sites). For each cell, the model requires species and age-cohort information as input. However, because a study area typically comprises millions of cells, it is impossible to obtain species and age-cohort information for every cell through field surveys. Therefore, a stand-based random assignment method was used to derive species and age-cohort information for each cell from forest inventory data. Because this method is probability-based, it introduces uncertainty into the species and age-cohort inputs of LANDIS simulations. To evaluate the effect of the cell-level uncertainty introduced by the stand-based random assignment method on model results, an uncertainty analysis was carried out using Monte Carlo simulation. For each species simulated by LANDIS, the occurrence frequency of the modal age cohort was used to quantify the uncertainty of age-cohort information at individual cells, and the mean occurrence frequency of the modal age cohort over all cells was used to quantify the overall cell-level uncertainty; the higher the mean occurrence frequency, the lower the uncertainty. To evaluate the effect of the stand-based random assignment method on model results at the landscape scale, the percent area and an aggregation index were calculated for each species across the entire study area; the larger the coefficient of variation, the higher the uncertainty. For all species, the uncertainty in age-cohort information was relatively low at the beginning of the simulation (mean occurrence frequency greater than 10). Seed dispersal, establishment, mortality, and fire disturbance caused the uncertainty of model results to increase with simulation time. Eventually, the uncertainty reached a steady state, and the time to reach this equilibrium was close to the species' longevity; at that point, the initial species and age-cohort information no longer affected model results. At the landscape scale, the percent area of species distributions and the spatial pattern quantified by the aggregation index were not affected by the increased cell-level uncertainty. Because the purpose of LANDIS simulation studies is to predict overall landscape pattern change rather than individual events, the stand-based random assignment method can be used to parameterize LANDIS.
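A sketch of the modal-cohort-frequency metric on the initial random assignment only (it does not run the LANDIS dynamics); the stand size, cohorts, and inventory proportions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_cells, n_runs = 1000, 50                      # cells in one stand, Monte Carlo replicates
age_cohorts = np.array([10, 20, 30, 40])        # candidate age cohorts present in the stand
cohort_probs = np.array([0.1, 0.4, 0.3, 0.2])   # inventory-derived cohort proportions

# Each replicate assigns every cell a cohort at random from the stand-level proportions
assignments = rng.choice(age_cohorts, size=(n_runs, n_cells), p=cohort_probs)

# Per-cell uncertainty: occurrence frequency of the modal cohort across replicates
modal_freq = np.empty(n_cells)
for c in range(n_cells):
    counts = np.bincount(assignments[:, c])
    modal_freq[c] = counts.max()

print(f"mean modal-cohort occurrence frequency over all cells: {modal_freq.mean():.1f} of {n_runs} runs")
# Higher mean frequency -> lower cell-level uncertainty in the age-cohort input
```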
9.
10.
Quantitative uncertainty analysis has become a common component of risk assessments. In risk assessment models, the most robust method for propagating uncertainty is Monte Carlo simulation. Many software packages available today offer Monte Carlo capabilities while requiring minimal learning time, computational time, and/or computer memory. This paper presents an evaluation of six software packages in the context of risk assessment: Crystal Ball, @Risk, Analytica, Stella II, PRISM, and Susa-PC. Crystal Ball and @Risk are spreadsheet-based programs; Analytica and Stella II are multi-level, influence-diagram-based programs designed for the construction of complex models; PRISM and Susa-PC are both public-domain programs designed for incorporating uncertainty and sensitivity into any model written in Fortran. Each software package was evaluated on the basis of five criteria, with each criterion having several sub-criteria. A ‘User Preferences Table’ was also developed for an additional comparison of the software packages. The evaluations were based on nine weeks of experimentation with the software packages, including use of the associated user manuals and tests of the software on example problems. The results of these evaluations indicate that Stella II has the most extensive modeling capabilities and can handle linear differential equations. Crystal Ball has the best input scheme for entering uncertain parameters and the best reference materials. @Risk offers a slightly better standard output scheme and requires a little less learning time. Susa-PC has the most options for detailed statistical analysis of the results, such as multiple options for a sensitivity analysis and sophisticated options for inputting correlations. Analytica is a versatile, menu- and graphics-driven package, while PRISM is a more specialized and less user-friendly program. When choosing between software packages for uncertainty and sensitivity analysis, the choice largely depends on the specifics of the problem being modeled. However, for risk assessment problems that can be implemented on a spreadsheet, Crystal Ball is recommended because it offers the best input options, a good output scheme, adequate uncertainty and sensitivity analysis, superior reference materials, and an intuitive spreadsheet basis while requiring very little memory.
11.
Huelsenbeck JP Joyce P Lakner C Ronquist F 《Philosophical transactions of the Royal Society of London. Series B, Biological sciences》2008,363(1512):3941-3953
Models of amino acid substitution present challenges beyond those often faced with the analysis of DNA sequences. The alignments of amino acid sequences are often small, whereas the number of parameters to be estimated is potentially large when compared with the number of free parameters for nucleotide substitution models. Most approaches to the analysis of amino acid alignments have focused on the use of fixed amino acid models in which all of the potentially free parameters are fixed to values estimated from a large number of sequences. Often, these fixed amino acid models are specific to a gene or taxonomic group (e.g. the Mtmam model, which has parameters that are specific to mammalian mitochondrial gene sequences). Although the fixed amino acid models succeed in reducing the number of free parameters to be estimated (indeed, they reduce the number of free parameters from approximately 200 to 0), it is possible that none of the currently available fixed amino acid models is appropriate for a specific alignment. Here, we present four approaches to the analysis of amino acid sequences. First, we explore the use of a general time reversible model of amino acid substitution using a Dirichlet prior probability distribution on the 190 exchangeability parameters. Second, we explore the behaviour of prior probability distributions that are 'centred' on the rates specified by the fixed amino acid model. Third, we consider a mixture of fixed amino acid models. Finally, we consider constraints on the exchangeability parameters as partitions, similar to how nucleotide substitution models are specified, and place a Dirichlet process prior model on all the possible partitioning schemes.
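A schematic of the general time-reversible (GTR) parameterization and the centred Dirichlet prior described above; the symbols (π for stationary amino acid frequencies, r for the 190 exchangeabilities, r̂ for the rates of a chosen fixed model such as Mtmam, α₀ for the prior concentration) are my notation, not the authors'.

```latex
Q_{ij} = r_{ij}\,\pi_j \ (i \neq j), \qquad
Q_{ii} = -\sum_{j \neq i} Q_{ij}, \qquad r_{ij} = r_{ji},
\qquad
(r_{ij})_{i<j} \sim \operatorname{Dirichlet}(\alpha_{12},\dots,\alpha_{19,20}),
\quad \alpha_{ij} = \alpha_0\,\hat r_{ij}.
```

Setting all α_ij equal recovers a flat prior over the exchangeabilities, while increasing α₀ concentrates the prior more tightly around the fixed model's rates, which is the sense in which the prior is "centred" on that model.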
12.
WEILE WANG JENNIFER DUNGAN HIROFUMI HASHIMOTO ANDREW R. MICHAELIS CRISTINA MILESI KAZUHITO ICHII RAMAKRISHNA R. NEMANI 《Global Change Biology》2011,17(3):1350-1366
We conducted an ensemble modeling exercise using the Terrestrial Observation and Prediction System (TOPS) to evaluate sources of uncertainty in carbon flux estimates resulting from structural differences among ecosystem models. The experiment ran public-domain versions of Biome-BGC, LPJ, CASA, and TOPS-BGC over North America at 8 km resolution for the period 1982–2006. We developed the Hierarchical Framework for Diagnosing Ecosystem Models (HFDEM) to separate the simulated biogeochemistry into a cascade of three functional tiers and sequentially examine their characteristics in climate (temperature–precipitation) and other spaces. Analysis of the simulated annual gross primary production (GPP) in the climate domain indicates a general agreement among the models, all showing optimal GPP in regions where the relationship between annual average temperature (T, °C) and annual total precipitation (P, mm) is defined by P = 50T + 500. However, differences in simulated GPP are identified in magnitudes and distribution patterns. For forests, the GPP gradient along P = 50T + 500 ranges from ~50 g C yr−1 m−2 °C−1 (CASA) to ~125 g C yr−1 m−2 °C−1 (Biome-BGC) in cold/temperate regions; for nonforests, the diversity among GPP distributions is even larger. Positive linear relationships are found between annual GPP and annual mean leaf area index (LAI) in all models. For Biome-BGC and LPJ, such relationships lead to a positive feedback from LAI growth to GPP enhancement. Different approaches to constraining this feedback lead to different sensitivity of the models to disturbances such as fire, which contribute significantly to the diversity in GPP stated above. The ratios between independently simulated NPP and GPP are close to 50% on average; however, their distribution patterns vary significantly between models, reflecting the difficulties in estimating autotrophic respiration across various climate regimes. Although these results are drawn from our experiments with the tested model versions, the developed methodology has potential for other model exercises.
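Restated compactly, with the units written out (the superscripts were garbled in the source):

```latex
P \approx 50\,T + 500 \quad (P\ \text{in mm},\ T\ \text{in}\ ^{\circ}\mathrm{C})
\quad \text{(locus of optimal simulated GPP)},
\qquad
\left.\frac{\partial\,\mathrm{GPP}}{\partial T}\right|_{P=50T+500}
\approx 50\ (\text{CASA})\ \text{to}\ 125\ (\text{Biome-BGC})
\ \ \mathrm{g\,C\,yr^{-1}\,m^{-2}\,{}^{\circ}C^{-1}}
```

for cold/temperate forests, with NPP/GPP ≈ 0.5 on average across the models.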
13.
Markov chain Monte Carlo methods for switching diffusion models
14.
WEILE WANG JENNIFER DUNGAN HIROFUMI HASHIMOTO ANDREW R. MICHAELIS CRISTINA MILESI KAZUHITO ICHII RAMAKRISHNA R. NEMANI 《Global Change Biology》2011,17(3):1367-1378
This paper examines carbon stocks and their relative balance in terrestrial ecosystems simulated by Biome-BGC, LPJ, and CASA in an ensemble model experiment conducted using the Terrestrial Observation and Prediction System. We developed the Hierarchical Framework for Diagnosing Ecosystem Models to separate the simulated biogeochemistry into a cascade of functional tiers and examine their characteristics sequentially. The analyses indicate that the simulated biomass is usually two to three times higher in Biome-BGC than in LPJ or CASA. Such a discrepancy is mainly induced by differences in model parameters and algorithms that regulate the rates of biomass turnover. The mean residence time of biomass in Biome-BGC is estimated to be 40–80 years in temperate/moist climate regions, while it mostly varies between 5 and 30 years in CASA and LPJ. A large range of values is also found in the simulated soil carbon. The mean residence time of soil carbon in Biome-BGC and LPJ is ~200 years in cold regions, which decreases rapidly with increasing temperature at a rate of ~10 yr °C−1. Because a long-term soil carbon pool is not simulated in CASA, its corresponding mean residence time is only about 10–20 years and less sensitive to temperature. Another key factor that influences the carbon balance of the simulated ecosystems is disturbance caused by wildfire, for which the algorithms vary among the models. Because fire emissions are balanced by net ecosystem production (NEP) at steady state, magnitudes and spatial patterns of NEP vary significantly as well. A slight carbon imbalance may be left by the spin-up algorithm of the models, which adds uncertainty to the estimated carbon sources or sinks. Although these results are only drawn on the tested model versions, the developed methodology has potential for other model exercises.
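The steady-state relation behind the residence-time comparison, in my notation rather than the papers':

```latex
\tau_{\mathrm{veg}} \;=\; \frac{C_{\mathrm{veg}}}{F_{\mathrm{turnover}}} \;\approx\; \frac{C_{\mathrm{veg}}}{\mathrm{NPP}},
\qquad
\tau_{\mathrm{soil}} \;=\; \frac{C_{\mathrm{soil}}}{R_h},
\qquad
\frac{d\tau_{\mathrm{soil}}}{dT} \approx -10\ \mathrm{yr\,{}^{\circ}C^{-1}}
\ \ \text{(Biome-BGC and LPJ, per the abstract)},
```

where \(C\) is the pool size and the denominator is the steady-state outflux (turnover or heterotrophic respiration \(R_h\)); the two-to-three-fold biomass differences follow directly from the different turnover rates at similar productivity.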
15.
Approximate Bayesian computation (ABC) substitutes simulation for analytic models in Bayesian inference. Simulating evolutionary scenarios under Kimura's stepping stone model (KSS) might therefore allow inference over spatial genetic processes where analytical results are difficult to obtain. ABC first creates a reference set of simulations and would proceed by comparing summary statistics over KSS simulations to summary statistics from localities sampled in the field, but: comparison of which localities and stepping stones? Identical stepping stones can be arranged so two localities fall in the same stepping stone, in nearest or diagonal neighbours, or without contact. None is intrinsically correct, yet some choice must be made, and this affects inference. We explore a Bayesian strategy for mapping field observations onto discrete stepping stones. We make Sundial, for projecting field data onto the plane, available. We generalize KSS over regular tilings of the plane. We show that Bayesian averaging over the mapping between a continuous field area and discrete stepping stones improves the fit between KSS and isolation-by-distance expectations. We make Tiler Durden available for carrying out this Bayesian averaging. We describe a novel parameterization of KSS based on Wright's neighbourhood size, placing an upper bound on the geographic area represented by a stepping stone, and make it available as mVector. We generalize spatial coalescence recursions to continuous and discrete space cases and use these to numerically solve for KSS coalescence previously examined only using simulation. We thus provide applied and analytical resources for comparison of stepping stone simulations with field observations.
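A generic rejection-ABC sketch of the inference step, with a toy simulator standing in for a KSS stepping-stone simulation; the summary statistic, prior, and tolerance are invented, and this is not the Sundial/Tiler Durden/mVector pipeline.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate(theta, n=200):
    """Toy simulator: returns one summary statistic that decreases with the parameter."""
    # e.g. a differentiation-like statistic falling with neighbourhood size (illustrative only)
    return rng.normal(1.0 / theta, 0.05, n).mean()

observed_summary = 0.25        # summary statistic computed from field samples (illustrative)
tolerance = 0.02
n_draws = 20000

# Rejection ABC: draw parameters from the prior, simulate, and keep draws whose
# simulated summary falls within the tolerance of the observed summary.
prior_draws = rng.uniform(1.0, 20.0, n_draws)      # prior on, e.g., Wright's neighbourhood size
accepted = np.array([th for th in prior_draws
                     if abs(simulate(th) - observed_summary) < tolerance])

print(f"accepted {accepted.size} / {n_draws} draws")
print(f"posterior mean ≈ {accepted.mean():.2f}, 95% credible interval ≈ "
      f"[{np.percentile(accepted, 2.5):.2f}, {np.percentile(accepted, 97.5):.2f}]")
```

Bayesian averaging over the locality-to-stepping-stone mapping would add a further layer: the mapping itself is drawn inside the simulation step rather than fixed in advance.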
16.
STEPHEN M. OGLE F. JAY BREIDT MARK EASTER STEVE WILLIAMS KENDRICK KILLIAN KEITH PAUSTIAN 《Global Change Biology》2010,16(2):810-822
Process-based model analyses are often used to estimate changes in soil organic carbon (SOC), particularly at regional to continental scales. However, uncertainties are rarely evaluated, and so it is difficult to determine how much confidence can be placed in the results. Our objective was to quantify uncertainties across multiple scales in a process-based model analysis, and provide 95% confidence intervals for the estimates. Specifically, we used the Century ecosystem model to estimate changes in SOC stocks for US croplands during the 1990s, addressing uncertainties in model inputs, structure, and scaling of results from point locations to regions and the entire country. Overall, SOC stocks increased in US croplands by 14.6 Tg C yr−1 from 1990 to 1995 and 17.5 Tg C yr−1 during 1995 to 2000, and uncertainties were ±22% and ±16% for the two time periods, respectively. Uncertainties were inversely related to spatial scale, with median uncertainties at the regional scale estimated at ±118% and ±114% during the early and latter parts of the 1990s, and even higher at the site scale, with estimates at ±739% and ±674% for the respective time periods. This relationship appeared to be driven by the amount of the SOC stock change; changes in stocks that exceeded 200 Gg C yr−1 represented a threshold above which uncertainties were always lower than ±100%. Consequently, the amount of uncertainty in estimates derived from process-based models will partly depend on the level of SOC accumulation or loss. In general, the majority of uncertainty was associated with model structure in this application, and so attaining higher levels of precision in the estimates will largely depend on improving the model algorithms and parameterization, as well as increasing the number of measurement sites used to evaluate the structural uncertainty.
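A small sketch of why relative uncertainty shrinks with aggregation; the site counts, stock changes, and error model are invented for illustration and are not the Century/US-cropland estimates.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative site-scale SOC stock-change estimates; many are near zero, which is
# what drives very large *relative* uncertainties at the site scale.
n_sites, n_mc = 400, 5000
site_mean = rng.normal(0.05, 0.3, n_sites)          # "true" site-level changes
site_sd = 2.0 * np.abs(site_mean) + 0.05            # site-level standard errors
draws = site_mean + site_sd * rng.standard_normal((n_mc, n_sites))

def rel_halfwidth(samples):
    """Half-width of the 95% interval, relative to the absolute mean, in percent."""
    lo, hi = np.percentile(samples, [2.5, 97.5])
    return 50.0 * (hi - lo) / abs(samples.mean())

site_unc = np.median([rel_halfwidth(draws[:, i]) for i in range(n_sites)])
total_unc = rel_halfwidth(draws.sum(axis=1))

print(f"median site-scale relative uncertainty : ±{site_unc:.0f}%")
print(f"aggregated (all sites) uncertainty     : ±{total_unc:.0f}%")
# Summing over many sites with partially cancelling errors shrinks relative uncertainty,
# mirroring the inverse relation between uncertainty and spatial scale reported above.
```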
17.
The inositol (1,4,5)-trisphosphate receptor (IPR) plays a crucial role in calcium dynamics in a wide range of cell types, and is often a central feature in quantitative models of calcium oscillations and waves. We compare three mathematical models of the IPR, fitting each of them to the same data set to determine ranges for the parameter values. Each of the fits indicates that fast activation of the receptor, followed by slow inactivation, is an important feature of the model, and also that the speed of inositol trisphosphate (IP3) binding cannot necessarily be assumed to be faster than Ca2+ activation. In addition, the model which assumed saturating binding rates of Ca2+ to the IPR demonstrated the best fit. However, lack of convergence in the fitting procedure indicates that responses to step increases of [Ca2+] and [IP3] provide insufficient data to determine the parameters unambiguously in any of the models.
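For orientation, a minimal two-timescale IP3R open-probability form in the spirit of such models (a Li–Rinzel-type reduction; it is not one of the three models actually compared, and the parameter names are mine):

```latex
P_{\mathrm{open}} = \left(\frac{[\mathrm{IP_3}]}{[\mathrm{IP_3}]+K_{\mathrm{IP_3}}}\cdot
\frac{[\mathrm{Ca}^{2+}]}{[\mathrm{Ca}^{2+}]+K_{\mathrm{act}}}\cdot h\right)^{\!3},
\qquad
\tau_h\,\frac{dh}{dt} = \frac{K_{\mathrm{inh}}}{K_{\mathrm{inh}}+[\mathrm{Ca}^{2+}]} - h .
```

Fast Ca2+ activation enters through the quasi-steady activation factor, while slow inactivation is carried by the gating variable h with time constant τ_h, which is exactly the fast-activation/slow-inactivation structure the fits above point to.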
18.
Cristian C. Brunchi Pablo Englebienne Herman J.M. Kramer Sondre K. Schnell 《Molecular simulation》2015,41(15):1234-1244
Despite the wide use of the real adsorbed solution theory to predict multicomponent adsorption equilibrium, the models used for the adsorbed-phase activity coefficients are usually borrowed from gas–liquid phase equilibria. In this work, the accuracy of the Wilson and NRTL models for evaluating adsorbed-phase activity coefficients is tested using a 2D-lattice model. An accurate model for adsorbed-phase activity coefficients should have no problem fitting adsorption data obtained with this simple lattice model. The results, however, show that the commonly used Wilson and NRTL models cannot describe the adsorbed-phase activity coefficients for slightly to strongly non-ideal mixtures. Therefore, until new models for adsorbed-phase activity coefficients are developed, existing models for liquids should be used with care. In the second part of this work, the use of Monte Carlo simulations on a segregated 2D-lattice model for predicting adsorption of mixtures is investigated. The segregated model assumes that the competition for adsorption occurs at isolated adsorption sites, and that the molecules at each adsorption site interact with the bulk phase independently. Two binary mixtures in two adsorbent materials were used as case studies for testing the predictions of the segregated 2D-lattice model: the binary system CO2–N2 in the hypothetical pure silica zeolite PCOD8200029, with isolated adsorption sites and normal preference for adsorption, and the binary system CO2–C3H8 in pure silica mordenite (MOR), with isolated adsorption sites and inverse site preference. The segregated 2D-lattice model provides accurate predictions for the system CO2–N2 in PCOD8200029 but fails to predict the adsorption behaviour of CO2–C3H8 in pure silica MOR. The predictions of the segregated ideal adsorbed solution theory model are superior to those of the 2D-lattice model.
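For reference, the standard binary-mixture form of the Wilson model tested above, where Λ12 and Λ21 are the adjustable interaction parameters and x_i are the (adsorbed-phase) mole fractions when the model is borrowed for the adsorbed phase:

```latex
\ln\gamma_1 = -\ln\!\left(x_1+\Lambda_{12}x_2\right)
 + x_2\!\left(\frac{\Lambda_{12}}{x_1+\Lambda_{12}x_2}-\frac{\Lambda_{21}}{x_2+\Lambda_{21}x_1}\right),
\qquad
\ln\gamma_2 = -\ln\!\left(x_2+\Lambda_{21}x_1\right)
 - x_1\!\left(\frac{\Lambda_{12}}{x_1+\Lambda_{12}x_2}-\frac{\Lambda_{21}}{x_2+\Lambda_{21}x_1}\right).
```

With only two parameters per binary, this functional form constrains the shape of the composition dependence, which is one reason it can fail to reproduce the activity coefficients generated by even a simple lattice model.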
19.
Closed-form likelihoods for Arnason-Schwarz models
20.
Hugues Imbeault‐Tétreault Olivier Jolliet Louise Deschênes Ralph K. Rosenbaum 《Journal of Industrial Ecology》2013,17(4):485-492
Inventory data and characterization factors in life cycle assessment (LCA) contain considerable uncertainty. The most common method of propagating parameter uncertainty to the impact scores is Monte Carlo simulation, which remains a resource-intensive option, probably one of the reasons why uncertainty assessment is not a regular step in LCA. An analytical approach based on Taylor series expansion constitutes an effective means to overcome the drawbacks of the Monte Carlo method. This project aimed to test the approach on a real case study, and the resulting analytical uncertainty was compared with Monte Carlo results. The sensitivity and contribution of input parameters to output uncertainty were also calculated analytically. This article outlines an uncertainty analysis of the comparison between two case study scenarios. We conclude that the analytical method provides a good approximation of the output uncertainty. Moreover, the sensitivity analysis reveals that the uncertainty of the most sensitive input parameters was not initially considered in the case study. The uncertainty analysis of the comparison of two scenarios is a useful means of highlighting the effects of correlation on uncertainty calculation. This article shows the importance of the analytical method in uncertainty calculation, which could lead to a more complete uncertainty analysis in LCA practice.
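A sketch contrasting a first-order (Taylor-type) propagation with a Monte Carlo check on a toy impact score; the score, parameter names, and lognormal inputs are illustrative assumptions, not the case-study data, and the analytical part is approximated here by one-geometric-sigma finite differences rather than exact derivatives.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy impact score S = A*CF1 + B*CF2 (inventory amounts times characterization factors).
# Inputs are lognormal, given as (median, geometric standard deviation), as is common in LCA.
params = {"A": (2.0, 1.2), "B": (0.5, 1.5), "CF1": (1.0, 1.1), "CF2": (3.0, 2.0)}
mu = {k: np.log(m) for k, (m, gsd) in params.items()}
sig = {k: np.log(gsd) for k, (m, gsd) in params.items()}

def score(p):
    return p["A"] * p["CF1"] + p["B"] * p["CF2"]

# First-order propagation, approximated by a one-log-sigma finite difference per parameter
medians = {k: float(np.exp(mu[k])) for k in params}
s0 = score(medians)
var_s, contrib = 0.0, {}
for k in params:
    bumped = dict(medians)
    bumped[k] = medians[k] * float(np.exp(sig[k]))   # multiply by the geometric sd
    dS = score(bumped) - s0
    contrib[k] = dS ** 2
    var_s += dS ** 2
print(f"analytical: score ≈ {s0:.2f} ± {np.sqrt(var_s):.2f} (1 sigma)")
print("variance contributions:", {k: f"{v / var_s:.0%}" for k, v in contrib.items()})

# Monte Carlo check
n = 100_000
samples = {k: rng.lognormal(mu[k], sig[k], n) for k in params}
mc = score(samples)
print(f"Monte Carlo: median {np.median(mc):.2f}, sd {mc.std():.2f}")
```

The per-parameter squared sensitivities double as the contribution analysis mentioned above; correlated parameters (as in a scenario comparison sharing inputs) would require adding the covariance terms rather than summing squared sensitivities alone.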