Similar Articles
A total of 20 similar articles were found.
1.
To model catchment surface water quantity and quality, different model types are available. They range from detailed, physically based models to simplified conceptual and empirical models. The most appropriate model type for a given application depends on the project objectives and on data availability. The detailed models are very useful for short-term simulations of representative events, but they cannot be used to derive long-term statistical information or to serve as a management tool. For those purposes, more simplified (conceptual or meta-) models must be used. In this study, nitrogen dynamics are modeled in a river in Flanders. Nitrogen sources from agricultural leaching and domestic point sources are considered. Based on this input, concentrations of ammonium (NH4-N) and nitrate (NO3-N) in the river water are modeled in MIKE 11, taking into consideration advection and dispersion and the most important biological and chemical processes. Model calibration was carried out on the basis of available measured water quality data. A more simplified model was then calibrated against this detailed model, with the objective of yielding long-term simulation results that can be used in statistical analysis more easily. The results show that the conceptual simplified model is 1800 times faster than the MIKE 11 model, while the two models have almost the same accuracy. The detailed models are recommended for short-term simulations, provided that enough data are available for model inputs and model parameters. The conceptual simplified model is recommended for long-term simulations.
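To illustrate the kind of simplification involved, the sketch below implements a hypothetical lumped (single-reach) nitrogen balance with first-order nitrification. The structure and all parameter values are illustrative assumptions, not the calibrated conceptual model or the MIKE 11 setup from the study.

```python
import numpy as np

def lumped_nitrogen_model(nh4_in, no3_in, point_load_nh4,
                          q=2.0e5, volume=5.0e5, k_nitr=0.3):
    """Daily lumped NH4-N/NO3-N balance for one river reach (illustrative).

    nh4_in, no3_in   upstream + agricultural input concentrations [g/m3], per day
    point_load_nh4   domestic point-source NH4-N load [g/day], per day
    q                discharge [m3/day] (assumed value)
    volume           reach volume [m3] (assumed value)
    k_nitr           first-order nitrification rate [1/day] (assumed value)
    """
    n = len(nh4_in)
    nh4, no3 = np.zeros(n), np.zeros(n)
    nh4[0], no3[0] = nh4_in[0], no3_in[0]
    for t in range(1, n):
        nitrif = k_nitr * nh4[t - 1]                      # NH4-N converted to NO3-N
        nh4[t] = nh4[t - 1] + (q / volume) * (nh4_in[t] - nh4[t - 1]) \
                 + point_load_nh4[t] / volume - nitrif
        no3[t] = no3[t - 1] + (q / volume) * (no3_in[t] - no3[t - 1]) + nitrif
    return nh4, no3

nh4, no3 = lumped_nitrogen_model(np.full(30, 2.0), np.full(30, 6.0), np.full(30, 1.0e6))
```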

2.
Studying the energetics of marine top predators is essential to understand their role within food webs and the mechanisms associated with their survival and population dynamics. Several methods exist to estimate energy expenditure in captive and free-ranging animals. However, most of them are difficult to implement, are restricted to specific periods, and are consequently inappropriate for seabirds. Supplementary and complementary approaches are therefore needed, and modelling appears to be an excellent option, allowing energetic studies when field data collection is challenging. Currently, three main types of energetics model are used, with varying degrees of complexity and accuracy: allometric equations, time–energy-budget analyses and thermodynamic models. However, a comparison of their practicability and accuracy was still lacking. Here, we present an overview of these three model types, their characteristics, advantages and disadvantages, and areas of application in seabirds. We then investigate their accuracy by using them in parallel on the same dataset and comparing their outputs with direct measurements (the doubly labelled water technique). We show that, when detailed data are available, time–energy-budget analysis is the best model for accurately predicting seabird energy expenditure. Conversely, thermodynamic modelling allows reasonably accurate calculations when field data are scarce, and is therefore ideal for studying energetics during the inter-breeding season.
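A time–energy-budget calculation reduces to summing, over activity categories, the time spent in each activity multiplied by an activity-specific metabolic rate. The sketch below uses hypothetical categories and rates; the values bear no relation to those used in the study.

```python
# Daily energy expenditure (DEE) from a time-energy budget:
# DEE = sum over activities of (hours spent * activity-specific metabolic rate).
activity_hours = {"flight": 6.0, "diving": 3.0, "resting_on_water": 9.0, "nest_attendance": 6.0}
rate_kj_per_hour = {"flight": 280.0, "diving": 220.0,            # hypothetical rates
                    "resting_on_water": 90.0, "nest_attendance": 45.0}

dee_kj = sum(activity_hours[a] * rate_kj_per_hour[a] for a in activity_hours)
print(f"Estimated daily energy expenditure: {dee_kj:.0f} kJ/day")
```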

3.
The hydrologic model is the foundation of water resource management and planning, and the conceptual model is an essential component of a groundwater model. Because understanding of natural hydrogeological conditions is limited, the conceptual model is always constructed incompletely, so uncertainty in the model's output is inevitable when a natural groundwater field is simulated by a single groundwater model. In this study, a synthetic groundwater model is built and regarded as the true model, and three alternative conceptual models are constructed by considering incomplete hydrogeological conditions. The outputs of these groundwater models (groundwater budget terms derived from the boundary conditions) are analyzed statistically. The results show that the closer the conceptual model is to the true hydrogeological conditions, the more the distributions of the groundwater model outputs are concentrated around the true outputs. Therefore, the more reliable the structure of the conceptual model is, the more reliable the output of the groundwater model is. Moreover, the uncertainty caused by the conceptual model cannot be compensated for by parameter uncertainty.

4.
Water quality indicators can be used to characterize the status of aquatic ecosystems and to quantify and qualify their change under different disturbance regimes. Although many studies have been done to develop and assess indicators and to discuss interactions among them, few have focused on how to improve the predicted indicators and explore their variations in receiving water bodies. Accurate and effective prediction of ecological indicators is critical to better understanding changes of water quality in aquatic ecosystems, especially for real-time forecasting. Process-based water quality models can predict the spatiotemporal variations of water quality indicators and provide useful information to policy-makers for the sound management of water resources. Given their inherent constraints, however, such process models alone cannot guarantee perfect results, since water quality models generally have a large number of parameters and involve many processes that are too complex to be efficiently calibrated. To overcome these limitations and explore a fast and efficient forecasting method for changes in water quality indicators, we proposed a new framework that combines process-based models with a data assimilation technique. Unlike most traditional approaches, in which only the model parameters or initial conditions are updated or corrected and the models are run online, this framework allows the information extracted from observations and from the outputs of process models to be used directly in a data-driven local/modified local model. The results from the data-driven model are then assimilated into the original process model to further improve its forecasting ability. This approach can be run efficiently offline to directly correct and update the output of water quality models. We applied the framework in a real case study in Singapore. Two water quality indicators, salinity and oxygen, were selected and tested against observations; the results suggest that the framework improves model results while reducing computation time. The approach is simple and efficient, and especially suitable for real-time forecasting systems; it can thus enhance the forecasting of water quality indicators and thereby facilitate the effective management of water resources.
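One minimal way to picture the offline-correction idea is a data-driven error model fitted to past pairs of forecasts and observations and applied to new forecasts without re-running the process model. The sketch below uses a plain linear correction and hypothetical numbers; the actual framework in the study uses a local/modified local model and assimilates the corrected values back into the process model.

```python
import numpy as np

# Hypothetical past pairs: process-model forecasts and matching station observations
model_salinity = np.array([28.1, 28.9, 29.4, 30.2, 30.8])   # model output (psu)
obs_salinity   = np.array([27.5, 28.2, 29.0, 29.6, 30.1])   # observations (psu)

# Fit a simple data-driven error model: obs ~ a * model + b
a, b = np.polyfit(model_salinity, obs_salinity, deg=1)

# Offline correction of a new forecast, without re-running the process model
new_forecast = 31.0
print(f"raw forecast {new_forecast:.1f} psu -> corrected {a * new_forecast + b:.2f} psu")
```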

5.
6.
Goal, Scope and Background: The main focus in OMNIITOX is on characterisation models for toxicological impacts in a life cycle assessment (LCA) context. The OMNIITOX information system (OMNIITOX IS) is being developed primarily to facilitate characterisation modelling and the calculation of characterisation factors, providing users with information necessary for environmental management and the control of industrial systems. The modelling and implementation of operational characterisation models for ecotoxic and human-toxic impacts requires the use of data and modelling approaches that often originate from disciplines related to regulatory chemical risk assessment (RA). Hence, there is a need for a concept model for the data and modelling approaches that can be interchanged between these different contexts of natural-system modelling. Methods: The concept modelling methodology applied in the OMNIITOX project is built on database design principles and ontological principles, in a consensus-based and iterative process involving participants from the LCA, RA and environmental informatics disciplines. Results: The developed OMNIITOX concept model focuses on the core concepts of substance, nature framework, load, indicator and mechanism, with supplementary concepts to support these core concepts. They refer to the modelled cause, the effect, and the relation between them, which are aspects inherent in all models used in the disciplines within the scope of OMNIITOX. This structure makes it possible to compare the models on a fundamental level, provides a language for communicating information between the disciplines, and allows the possibility of transparently reusing data and modelling approaches of various levels of detail and complexity to be assessed. Conclusions: Current experience from applying the concept model shows that it improves the structuring of all information needed to describe characterisation models transparently. From a user perspective, the OMNIITOX concept model aids in understanding the applicability and use of a characterisation model and how to interpret model outputs. Recommendations and Outlook: The concept model provides a tool for structured characterisation modelling, model comparison, model implementation, model quality management and model usage. Moreover, it could be used for structuring any natural-environment cause-effect model concerning impact categories other than toxicity.

7.
Background: A common approach to the application of epidemiological models is to determine a single (point estimate) parameterisation using the information available in the literature. However, in many cases there is considerable uncertainty about parameter values, reflecting both the incomplete nature of current knowledge and natural variation, for example between farms. Furthermore, model outcomes may be highly sensitive to different parameter values. Paratuberculosis is an infection for which many of the key parameter values are poorly understood and highly variable, and for such infections there is a need to develop and apply statistical techniques that make maximal use of the available data. Results: A technique based on Latin hypercube sampling combined with a novel reweighting method was developed, which enables parameter uncertainty and variability to be incorporated into a model-based framework for the estimation of prevalence. The method was evaluated by applying it to a simulation of paratuberculosis in dairy herds that combines a continuous-time stochastic algorithm with model features such as within-herd variability in disease development and shedding, which have not previously been explored in paratuberculosis models. Generated sample parameter combinations were assigned a weight, determined by quantifying the model's resultant ability to reproduce prevalence data. Once these weights are generated, the model can be used to evaluate other scenarios, such as control options. To illustrate the utility of this approach, the reweighted model outputs were used to compare standard test-and-cull control strategies both individually and in combination with simple husbandry practices that aim to reduce infection rates. Conclusions: The technique developed has been shown to be applicable to a complex model incorporating realistic control options. For models whose parameters are not well known or are subject to significant variability, the reweighting scheme allows estimated distributions of parameter values to be combined with additional sources of information, such as that available from prevalence distributions, resulting in outputs which implicitly handle variation and uncertainty. This methodology allows more robust predictions from modelling approaches by accounting for parameter uncertainty and by combining different sources of information, and it is thus expected to be useful in application to a large number of disease systems.
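A bare-bones sketch of the sampling-and-reweighting idea: draw parameter combinations with a Latin hypercube, run a stand-in model for each, and weight each combination by how closely its simulated prevalence matches an observed value. Parameter names, ranges, the stand-in model and the Gaussian weighting kernel are all illustrative assumptions, not the scheme used in the paper.

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(1)

def herd_model(transmission, recovery):
    """Stand-in for the stochastic within-herd model: returns a simulated prevalence."""
    return transmission / (transmission + recovery) + rng.normal(0.0, 0.02)

# 1. Latin hypercube sample over the uncertain parameters (ranges are illustrative)
sample = qmc.LatinHypercube(d=2, seed=1).random(n=500)
params = qmc.scale(sample, l_bounds=[0.01, 0.05], u_bounds=[0.5, 0.5])

# 2. Weight each parameter set by its ability to reproduce the observed prevalence
observed_prev, tolerance = 0.20, 0.05
sim_prev = np.array([herd_model(t, r) for t, r in params])
weights = np.exp(-0.5 * ((sim_prev - observed_prev) / tolerance) ** 2)
weights /= weights.sum()

# 3. Reweighted outputs can then be used to compare control scenarios
print("weighted mean simulated prevalence:", round(float(np.sum(weights * sim_prev)), 3))
```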

8.
Numerical simulation of winter wheat growth in shelterbelt-protected areas
A yield-ecology model for simulating the growth of winter wheat is presented and used to numerically simulate winter wheat growth in shelterbelt-protected areas of the Huang-Huai-Hai Plain. Model output variables include the crop leaf area index; the biomass of above- and below-ground organs such as roots, stems, leaves and grain; and physio-ecological factors closely related to crop growth, such as soil moisture dynamics, crop water-use efficiency and photosynthesis/respiration efficiency. The results show that, owing to the improved microclimate within the shelterbelt network, the agroforestry system has stronger drought resistance than monoculture farmland; in the dry year of 1994, the productivity of the shelterbelt-protected agroforestry system was about 11.6% higher than that of monoculture farmland. The above-ground wheat biomass output by the model agrees closely with growth monitoring data.

9.
The primary plant cell wall (PCW) is a highly organized network whose performance depends on cellulose, hemicellulose and pectic polysaccharides, and on their properties, interactions and assemblies. Their mutual relationships and functions in the cell wall can be better understood by means of conceptual models of their higher-order structures. Knowledge unified in the form of a conceptual model allows predictions to be made about the properties and behaviour of the system under study. Ongoing research in this field has resulted in a number of conceptual models of the cell wall. However, owing to the limitations of current research methods, the community of cell wall researchers has not reached a consensus favouring one model over another. Herein we present yet another research technique, numerical modelling, which is capable of resolving this issue. Even at the current stage of development of numerical techniques, the in silico reconstruction of the PCW remains a challenge for computational simulations because of its complexity. However, some difficulties have been overcome, making it possible to produce advanced approximations of PCW structure and mechanics. This review summarizes results concerning the simulation of polysaccharide interactions in the PCW with regard to network fine structure, supramolecular properties and polysaccharide binding affinity. The in silico mechanical models presented herein incorporate certain physical and biomechanical aspects of cell wall architecture for the purposes of undertaking critical testing to advance our understanding of the mechanisms controlling cells and limiting cell wall expansion.

10.
The mechanisms and consequences of biological invasions are a global issue, yet one of the key aspects, the initial phase of invasion, is rarely observed in detail. Data from aerial photographs covering the spread of Heracleum mantegazzianum (Apiaceae, native to the Caucasus) on a local scale of hectares in the Czech Republic, from the beginning of the invasion, were used as input for an individual-based model (IBM) based on small-scale, short-term data. To capture the population development inferred from the photographs, long-distance seed dispersal, changes in landscape structure and the suitability of landscape elements to invasion by H. mantegazzianum were implemented in the model. The model was used to address (1) the role of long-distance dispersal in regional invasion dynamics, and (2) the effect of land-use changes on the progress of the invasion. Simulations showed that even small fractions of seed subjected to long-distance dispersal, as determined by systematic comparison of field data and modelling results, had a disproportionately large effect on the spread of this species. The effect of land-use changes on the simulated course of invasion depends on the actual level of habitat saturation; it is larger for populations covering a high proportion of the available habitat area than for those in the initial phase of invasion. Our results indicate how empirical field data and model outputs can be linked more closely with each other to improve the understanding of invasion dynamics. The multi-level but nevertheless simple structure of our model suggests that it can be used to study the spread of similar species invading comparable landscapes.
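A common way to express long-distance dispersal in an IBM is a mixed dispersal kernel in which most seeds move locally and a small fraction travels much further. The sketch below is a generic illustration with assumed kernel shapes and parameter values, not the kernel fitted in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def disperse_seeds(parent_xy, n_seeds, p_ldd=0.01, local_sd=5.0, ldd_scale=500.0):
    """Mixed dispersal kernel (illustrative): most seeds disperse locally (half-normal),
    a fraction p_ldd undergoes long-distance dispersal (exponential). Units: metres."""
    is_ldd = rng.random(n_seeds) < p_ldd
    distance = np.where(is_ldd,
                        rng.exponential(ldd_scale, n_seeds),
                        np.abs(rng.normal(0.0, local_sd, n_seeds)))
    angle = rng.uniform(0.0, 2.0 * np.pi, n_seeds)
    return parent_xy + np.column_stack((distance * np.cos(angle),
                                        distance * np.sin(angle)))

offspring_xy = disperse_seeds(np.array([0.0, 0.0]), n_seeds=1000)
```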

11.

Background and Aims

Functional–structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs.

Methods

A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL.

Key Results

Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas.

Conclusions

The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, motivating further research into sensitivity analysis and run-time efficiency for these models.

12.
Uncertainty management for the evaluation of evidence based on linguistic and conceptual data is taking advantage of developments in the Dempster-Shafer (DS) theory of evidence, possibility theory and fuzzy logic. DS theory offers the capability to assess the uncertainty of different subsets of assertions in a domain, and the way in which uncertainty is affected by accumulating evidence. It goes beyond probability theory in its ability to represent ignorance about certain aspects of a situation. However, the theory is very sensitive to the numerical assessments provided by users and can lead to intuitively unexpected and even undesirable results. Certainty factors are widely used in various expert systems; their definition and updating may follow either a probabilistic model or a fuzzy-set-theoretic concept.
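For concreteness, Dempster's rule of combination merges two basic probability assignments m1 and m2 by multiplying the masses of intersecting focal elements and renormalising by the total non-conflicting mass. The sketch below combines two hypothetical mass functions over a two-element frame; small changes to the input masses can shift the result noticeably, which is the sensitivity noted above.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: mass functions are dicts mapping frozenset focal elements to masses."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                      # mass assigned to the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical evidence over the frame {faulty, ok}
m1 = {frozenset({"faulty"}): 0.6, frozenset({"faulty", "ok"}): 0.4}
m2 = {frozenset({"faulty"}): 0.7, frozenset({"ok"}): 0.2, frozenset({"faulty", "ok"}): 0.1}
print(dempster_combine(m1, m2))
```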

13.
Statistical species distribution models (SDMs) are widely used to predict potential changes in species distributions under climate change scenarios. We suggest that we need to revisit the conceptual framework and ecological assumptions on which the relationship between species distributions and environment is based. We present a simple conceptual framework for examining the selection of environmental predictors and data resolution scales. These vary widely in recent papers, with light included in the models only inconsistently. Focusing on light as a necessary component of plant SDMs, we briefly review its dependence on aspect and slope and existing knowledge of its influence on plant distribution. Differences in light regimes between north- and south-facing aspects at temperate latitudes can produce differences in temperature equivalent to a shift of 200 km polewards. Local topography may create refugia that are not recognized in many climate change SDMs using coarse-scale data. We argue that current assumptions about the selection of predictors and data resolution need further testing. Application of these ideas can clarify many issues of scale, extent and choice of predictors, and potentially improve the use of SDMs for climate change modelling of biodiversity.

14.
Accurate prediction of species distributions based on sampling and environmental data is essential for further scientific analysis, such as stock assessment and the detection of abundance fluctuations due to climate change or overexploitation, and to underpin management and legislation processes. The evolution of computer science and statistics has allowed the development of sophisticated and well-established modelling techniques, as well as a variety of promising innovative approaches, for modelling species distribution. The appropriate selection of a modelling approach is crucial to the quality of predictions of species distribution. In this study, modelling techniques based on different approaches are compared and evaluated in relation to their predictive performance, using acoustic fish-density data. Generalized additive models and mixed models are applied and evaluated among the regression models; associative neural networks (ANNs) and an artificial neural network ensemble among the artificial neural networks; and ordinary kriging among the geostatistical techniques. A verification dataset is used to estimate the predictive performance of these models. A combination of the outputs from the different models is applied for prediction optimization, exploiting the ability of each model to explain certain aspects of the variation in species acoustic density. Neural networks, and especially ANNs, appear to provide more accurate results in fitting the training dataset, while generalized additive models appear more flexible in predicting the verification dataset. The efficiency of each technique in relation to certain sampling and output strategies is also discussed.
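One simple way to combine the outputs of several fitted models, shown here as a generic illustration rather than the combination scheme used in the study, is to weight each model's prediction by the inverse of its error on the verification dataset; all numbers below are hypothetical.

```python
import numpy as np

# Hypothetical predictions of acoustic fish density from three fitted models
predictions = {"gam": np.array([2.1, 3.4, 1.2]),
               "ann": np.array([2.4, 3.1, 1.0]),
               "kriging": np.array([1.9, 3.6, 1.4])}
# Hypothetical RMSE of each model on the verification dataset
rmse = {"gam": 0.8, "ann": 0.6, "kriging": 1.0}

# Inverse-error weighting: better-verified models contribute more to the combination
inv = {name: 1.0 / err for name, err in rmse.items()}
total = sum(inv.values())
combined = sum(inv[name] / total * pred for name, pred in predictions.items())
print(combined)
```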

15.
Data from long-term monitoring sites are vital for biogeochemical process understanding and for model development. Implicitly or explicitly, information provided by both monitoring and modelling must be extrapolated in order to have wider scientific and policy utility. In many cases, large-scale modelling utilises little of the data available from long-term monitoring, instead relying on simplified models and limited, often highly uncertain, data for parameterisation. Here, we propose a new approach whereby outputs from model applications to long-term monitoring sites are upscaled to the wider landscape using a simple statistical method. For the 22 lakes and streams of the UK Acid Waters Monitoring Network (AWMN), standardised concentrations (Z scores) for Acid Neutralising Capacity (ANC), dissolved organic carbon, nitrate and sulphate show high temporal coherence among sites. This coherence permits annual mean solute concentrations at a new site to be predicted by back-transforming Z scores derived from observations or model applications at other sites. The approach requires limited observational data for the new site, such as annual mean estimates from two synoptic surveys. Several illustrative applications of the method suggest that it is effective at predicting long-term ANC change in upland surface waters, and may have wider application. Because it is possible to parameterise and constrain more sophisticated models with data from intensively monitored sites, the extrapolation of model outputs to policy relevant scales using this approach could provide a more robust, and less computationally demanding, alternative to the application of simple generalised models using extrapolated input data.
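The back-transformation step can be illustrated in a few lines: standardise the long-term series at a monitored site to Z scores, then rescale those Z scores with the mean and standard deviation estimated for the new site (here, crudely, from just two survey values). The data values are invented, and the estimation of the new site's mean and spread is a deliberate simplification of the published method.

```python
import numpy as np

# Hypothetical annual mean ANC series at an intensively monitored site (ueq/L)
anc_monitored = np.array([12.0, 15.0, 18.0, 22.0, 27.0, 31.0])
z = (anc_monitored - anc_monitored.mean()) / anc_monitored.std()

# New site with limited data: two synoptic survey means (hypothetical values)
surveys_new_site = np.array([40.0, 55.0])
mu_new, sd_new = surveys_new_site.mean(), surveys_new_site.std()

# Back-transform the shared Z-score trajectory onto the new site's scale
anc_predicted_new = mu_new + z * sd_new
print(np.round(anc_predicted_new, 1))
```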

16.
Bioclimate envelope models are often used to predict changes in species distribution arising from changes in climate. These models are typically based on observed correlations between current species distribution and climate data. One limitation of this basic approach is that the relationship modelled is assumed to be constant in space: the analysis is global, with the relationship assumed to be spatially stationary. Here, it is shown that using a local regression analysis, which allows the relationship under study to vary in space, rather than a conventional global regression analysis can increase the accuracy of bioclimate envelope modelling. This is demonstrated for the distribution of Spotted Medick in Great Britain using data relating to three time periods, including predictions for the 2080s based on two climate change scenarios. Species distribution and climate data were available for two of the time periods studied, which allowed comparison of bioclimate envelope model outputs derived from the local and global regression analyses. For both time periods, the area under the receiver operating characteristic curve derived from the analysis based on local statistics was significantly higher than that from the conventional global analysis; the curve comparisons were undertaken with an approach that recognised the dependent nature of the data sets compared. Marked differences between the future distributions of the species predicted from the local and global analyses were evident and highlight the need for further consideration of local issues in modelling ecological variables.
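The contrast between a global and a local fit can be sketched with a kernel-weighted (geographically weighted) logistic regression, in which observations near the prediction location receive larger weights. Everything below (coordinates, covariate, bandwidth) is synthetic and illustrative; it is not the specific local regression used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: site coordinates (km), one climate covariate, presence/absence
coords = rng.uniform(0.0, 100.0, size=(400, 2))
climate = rng.normal(size=(400, 1))
presence = (rng.random(400) < 1.0 / (1.0 + np.exp(-climate[:, 0]))).astype(int)

global_model = LogisticRegression().fit(climate, presence)   # spatially stationary fit

def local_model(target_xy, bandwidth_km=20.0):
    """Locally weighted fit: sites near target_xy dominate the estimated relationship."""
    dist = np.linalg.norm(coords - target_xy, axis=1)
    weights = np.exp(-0.5 * (dist / bandwidth_km) ** 2)
    return LogisticRegression().fit(climate, presence, sample_weight=weights)

m_local = local_model(np.array([50.0, 50.0]))
print(global_model.coef_, m_local.coef_)
```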

17.
If data are available that sufficiently fulfil statistical requirements, and if physiologically plausible hypotheses can be formulated, then it is possible to identify models, by means of multiple regression analysis, that approximately simulate the behaviour of a crop within the range of the data used. A statistical as well as a physiological evaluation of the numerical results is necessary at every step of the calculation. The model consists of two equations describing the net assimilation rate and the distribution of the produced dry matter between leaves and roots. Some modifications of the model are discussed, one of them with time-dependent parameters. The following influencing variables are taken into account: leaf area or leaf mass, global radiation, nitrogen content of the leaves, and water stress, the latter being calculated from available soil moisture and potential evaporation. One version of the model, together with a dynamic optimization algorithm, was used to control the water status of sugar beet in field trials.

18.
One of the most challenging aspects of biomechanical modelling is parameter estimation. Parameter values that define the nonlinear relations within the classic Hill-based muscle model structure have been estimated for a large number of muscles involved in movements of a number of joints. The technique used to estimate these parameters is based on combining information on muscle as a material with geometrical data on muscle-joint anatomy. The resulting relations are compatible with available human experimental data and with past modelling estimates. An estimation of the relative importance of the various synergistic muscle properties during dynamic movement tasks is also provided, aided by examples of muscle load-sharing as a function of optimization criteria including measures of position error, muscle stress and neural effort.
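The nonlinear relations referred to are typically the force-length and force-velocity curves of a Hill-type muscle, combined multiplicatively with activation and maximum isometric force. The sketch below uses generic curve shapes and illustrative parameter values; it is not the specific parameterisation estimated in the study.

```python
import numpy as np

def hill_muscle_force(activation, l_norm, v_norm, f_max=1000.0):
    """Hill-type active muscle force (concentric only, generic curve shapes).

    activation  neural activation in [0, 1]
    l_norm      fibre length / optimal fibre length
    v_norm      shortening velocity / maximum shortening velocity, in [0, 1]
    f_max       maximum isometric force [N] (illustrative value)
    """
    f_length = np.exp(-((l_norm - 1.0) / 0.45) ** 2)       # Gaussian force-length curve
    f_velocity = (1.0 - v_norm) / (1.0 + 4.0 * v_norm)     # Hill hyperbola (a/F0 = 0.25 assumed)
    return activation * f_max * f_length * f_velocity

print(round(hill_muscle_force(activation=0.8, l_norm=1.05, v_norm=0.2), 1))
```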

19.
Habitat models are now broadly used in conservation planning on public lands. If implemented correctly, habitat modelling is a transparent and repeatable technique for describing and mapping biodiversity values, and its application in peri-urban and agricultural landscape planning is likely to expand rapidly. Conservation planning in such landscapes must be robust to the scrutiny that arises when biodiversity constraints are placed on developers and private landholders. A standardized modelling and model evaluation method based on widely accepted techniques will improve the robustness of conservation plans. We review current habitat modelling and model evaluation methods and provide a habitat modelling case study in the New South Wales central coast region that we hope will serve as a methodological template for conservation planners. We make recommendations on modelling methods that are appropriate when presence-absence and presence-only survey data are available, and provide methodological details and a website with data and training material for modellers. Our aim is to provide practical guidelines that preserve methodological rigour and result in defensible habitat models and maps. The case study was undertaken in a rapidly developing area with substantial biodiversity values under urbanization pressure. Habitat maps for seven priority fauna species were developed using logistic regression models of species-habitat relationships, and a bootstrapping methodology was used to evaluate model predictions. The modelled species were the koala, tiger quoll, squirrel glider, yellow-bellied glider, masked owl, powerful owl and sooty owl. The models ranked sites adequately in terms of habitat suitability and provided predictions of sufficient reliability for the purpose of identifying preliminary conservation priority areas. However, they are subject to multiple uncertainties and should not be viewed as a completely accurate representation of the distribution of species habitat. We recommend the use of model predictions in an adaptive framework whereby models are iteratively updated and refined as new data become available.
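A minimal sketch of a presence-absence habitat model with a bootstrap evaluation of its predictions; the covariates, data and out-of-bag AUC scoring below are synthetic illustrations, not the survey data or evaluation protocol from the case study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic survey data: presence/absence and two habitat covariates per site
X = rng.normal(size=(300, 2))          # e.g. hollow-bearing tree density, canopy cover (scaled)
logit = 1.5 * X[:, 0] - 0.8 * X[:, 1]
y = (rng.random(300) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

habitat_model = LogisticRegression().fit(X, y)

# Bootstrap evaluation: refit on resampled sites, score on the out-of-bag sites
aucs = []
for _ in range(200):
    idx = rng.integers(0, len(y), len(y))
    oob = np.setdiff1d(np.arange(len(y)), idx)
    m = LogisticRegression().fit(X[idx], y[idx])
    aucs.append(roc_auc_score(y[oob], m.predict_proba(X[oob])[:, 1]))
print(f"bootstrap AUC: {np.mean(aucs):.2f} +/- {np.std(aucs):.2f}")
```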

20.
We provide methods that can be used to obtain more accurate environmental exposure assessment. In particular, we propose two modeling approaches to combine monitoring data at point level with numerical model output at grid cell level, yielding improved prediction of ambient exposure at point level. Extending our earlier downscaler model (Berrocal, V. J., Gelfand, A. E., and Holland, D. M. (2010b). A spatio-temporal downscaler for outputs from numerical models. Journal of Agricultural, Biological and Environmental Statistics 15, 176-197), these new models are intended to address two potential concerns with the model output. One recognizes that there may be useful information in the outputs for grid cells that are neighbors of the one in which the location lies. The second acknowledges potential spatial misalignment between a station and its putatively associated grid cell. The first model is a Gaussian Markov random field smoothed downscaler that relates monitoring station data and computer model output via the introduction of a latent Gaussian Markov random field linked to both sources of data. The second model is a smoothed downscaler with spatially varying random weights, defined through a latent Gaussian process and an exponential kernel function, that yields, at each site, a new variable on which the monitoring station data are regressed with a spatial linear model. We applied both methods to daily ozone concentration data for the Eastern US during the summer months of June, July and August 2001, obtaining, respectively, a 5% and a 15% predictive gain in overall predictive mean square error over our earlier downscaler model (Berrocal et al., 2010b). Perhaps more importantly, the predictive gain is greater at hold-out sites that are far from monitoring sites.
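The basic downscaler idea underneath both extensions is to regress station observations on the output of the containing grid cell and then predict at point level from that regression. The sketch below shows only this non-spatial baseline with invented numbers; the GMRF smoothing and the spatially varying weights of the two proposed models are not represented.

```python
import numpy as np

# Hypothetical collocated data: grid-cell ozone from a numerical model vs. station ozone (ppb)
grid_cell_o3 = np.array([48.0, 52.0, 61.0, 55.0, 70.0, 66.0])
station_o3   = np.array([45.0, 50.0, 58.0, 51.0, 66.0, 64.0])

# Basic (non-spatial) downscaler: station = beta0 + beta1 * grid-cell output + error
beta1, beta0 = np.polyfit(grid_cell_o3, station_o3, deg=1)

# Point-level prediction given the output of the grid cell containing the point
print(round(beta0 + beta1 * 63.0, 1))
```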

