Similar Articles
A total of 20 similar articles were found (search took 15 ms)
1.
Ecological Informatics, 2007, 2(2): 121-127
Scaling of ecological data can present a challenge, first because of the large amount of information contained in an ecological data set, and second because of the difficulty of fitting the data to the models we want to use to capture structure. We present a measure of similarity between data collected at several scales using the same set of attributes. The measure is based on the concept of Kolmogorov complexity and implemented through minimal message length estimates of information content and cluster analysis (the models). The similarity represents common patterns across scales, within the model class. We thus provide a novel solution to the problem of simultaneously considering data structure, model fit and scale. The methods are illustrated through application to an ecological data set.
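The paper's minimal-message-length machinery is not reproduced here, but the core idea of approximating Kolmogorov complexity with a practical compressor can be sketched via the normalized compression distance. A minimal illustration, with zlib standing in for the MML estimator and hypothetical attribute strings for the two scales:

```python
import zlib

def c(x: bytes) -> int:
    """Compressed length as a crude stand-in for Kolmogorov complexity."""
    return len(zlib.compress(x, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: smaller when x and y share structure."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Hypothetical attribute records collected at a fine and a coarse scale.
fine = b"0101110101011101" * 64
coarse = b"0111011101110111" * 64
print(f"NCD(fine, coarse) = {ncd(fine, coarse):.3f}")
```

A low NCD suggests patterns shared across the two scales, which is the role the MML-based similarity plays in the paper.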

2.
Scale and scaling methods in ecology   (Cited by: 114; self-citations: 19; other citations: 95)
吕一河, 傅伯杰. 生态学报 (Acta Ecologica Sinica), 2001, 21(12): 2096-2105
As an important paradigm of ecology, scale has attracted wide attention, but research on scale issues is not yet mature. Scale is multidimensional, comprising functional, spatial, and temporal scales among others, but ecological research focuses on spatial and temporal scales, which are themselves characterized by complexity and variability. The fundamental purpose of scale research is to reveal and grasp complex ecological laws through appropriate spatial and temporal scales; to this end, scientifically sound methods of scale selection and scaling are indispensable. Common scaling methods include graphical methods, regression analysis, variogram analysis, autocorrelation analysis, spectral analysis, fractals, and wavelet transforms, while remote sensing and GIS techniques also play an important role in scale research. These methods are analyzed and discussed here with examples; each has inherent strengths and weaknesses, so the introduction and application of new methods is important for enriching and improving the methodology of scaling. Research on scale will be further strengthened, with emphasis on scale variability, mechanisms of interaction between scales, and scaling methods.
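Among the scaling methods listed, the variogram is the easiest to show concretely. A sketch of an empirical semivariogram for a hypothetical one-dimensional transect (the data and lags are illustrative only):

```python
import numpy as np

def empirical_semivariogram(z: np.ndarray, max_lag: int) -> np.ndarray:
    """gamma(h) = mean of 0.5 * (z[i+h] - z[i])**2 over all pairs at lag h."""
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
z = np.sin(x) + 0.2 * rng.standard_normal(x.size)  # smooth trend plus noise
gamma = empirical_semivariogram(z, max_lag=30)
print(np.round(gamma[:5], 3))  # semivariance rises with lag up to the range
```

The lag at which the curve levels off (the range) indicates the characteristic spatial scale, which is what makes the variogram useful for scale selection.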

3.
Selecting the best-fit model of nucleotide substitution   (Cited by: 2; self-citations: 0; other citations: 2)
Despite the relevant role of models of nucleotide substitution in phylogenetics, choosing among different models remains a problem. Several statistical methods for selecting the model that best fits the data at hand have been proposed, but their absolute and relative performance has not yet been characterized. In this study, we compare under various conditions the performance of different hierarchical and dynamic likelihood ratio tests, and of Akaike and Bayesian information methods, for selecting best-fit models of nucleotide substitution. We specifically examine the role of the topology used to estimate the likelihood of the different models and the importance of the order in which hypotheses are tested. We do this by simulating DNA sequences under a known model of nucleotide substitution and recording how often this true model is recovered by the different methods. Our results suggest that model selection is reasonably accurate and indicate that some likelihood ratio test methods perform overall better than the Akaike or Bayesian information criteria. The tree used to estimate the likelihood scores does not influence model selection unless it is a randomly chosen tree. The order in which hypotheses are tested, and the complexity of the initial model in the sequence of tests, influence model selection in some cases. Model fitting in phylogenetics has been suggested for many years, yet many authors still arbitrarily choose their models, often using the default models implemented in standard computer programs for phylogenetic estimation. We show here that a best-fit model can be readily identified. Consequently, given the relevance of models, model fitting should be routine in any phylogenetic analysis that uses models of evolution.
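The information criteria compared in the study are simple to compute once each model's maximized log-likelihood is known. A sketch with hypothetical log-likelihoods and conventional free-parameter counts for three substitution models (branch lengths ignored for brevity):

```python
import numpy as np

def aic(loglik: float, k: int) -> float:
    """Akaike information criterion: 2k - 2 ln L."""
    return 2 * k - 2 * loglik

def bic(loglik: float, k: int, n: int) -> float:
    """Bayesian information criterion: k ln n - 2 ln L."""
    return k * np.log(n) - 2 * loglik

# Hypothetical maximized log-likelihoods and free substitution parameters.
models = {"JC69": (-5234.1, 0), "HKY85": (-5190.7, 4), "GTR": (-5188.2, 8)}
n_sites = 1200  # alignment length used in the BIC sample-size term
for name, (ll, k) in models.items():
    print(f"{name:6s} AIC={aic(ll, k):9.1f}  BIC={bic(ll, k, n_sites):9.1f}")
```

The model with the lowest criterion value is selected; BIC's stronger penalty tends to favor the simpler model when the likelihood gain is small.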

4.
In this paper, we identified the best species–area relationship (SAR) models from amongst 28 different models gathered from the literature, using an artificial predator–prey simulation (EcoSim), along with investigating how sampling approaches and sampling scales affect SARs. Further, we attempted to determine a plausible interpretation of SAR model coefficients for the best performing SAR models. This is the most extensive quantitatively based investigation of the species–area relationship so far undertaken in the literature.

We gathered 28 different models from the literature and fitted them to sampling data from EcoSim using non-linear regression, with ΔAICc as the goodness-of-fit criterion. Afterwards, we proposed a machine-learning approach to find plausible relationships between the models' coefficients and the spatial information that likely affects SARs, as a basis for extracting rules that provide an interpretation of SAR coefficients.

We found the power function family to be a reasonable choice, and in particular the Plotkin function based on ΔAICc ranking. The Plotkin function was consistently in the top three best-ranked SAR functions, and the simple power function was the best-ranked model in nested sampling amongst models with two coefficients. We found that the Plotkin, quadratic power, Morgan–Mercer–Flodin and generalized cumulative Weibull functions are the best-ranked models for small, intermediate, large, and very large scales, respectively, in nested sampling, while Plotkin (at small to intermediate scales) and Chapman–Richards (at large to very large scales) are the best-ranked functions in random sampling. Finally, based on rule extraction using machine-learning techniques, we were able to find interpretations of the coefficients of the simple and extended power functions; for instance, coefficients corresponded to sampling scale size, patch number, fractal dimension, average patch size, and spatial complexity.

Our main conclusions are that SAR models are highly dependent on sampling scale and sampling approach, and that the shape of the best-ranked SAR model is convex without an asymptote at smaller scales (small, intermediate) and sigmoid at larger scales (large and very large). For some SAR model coefficients there are clear correlations with spatial information, thereby providing an interpretation of these coefficients. In addition, the slope z, measuring the rate of species increase in the power function family, was found to be directly proportional to beta diversity, which confirms the view that beta diversity and SAR models are, to some extent, both measures of species richness.
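As a concrete sketch of the fitting procedure, the simple power SAR can be fitted by non-linear least squares and scored with AICc. The area–richness data below are hypothetical, and the Gaussian-error form of AICc is one common convention:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_sar(A, c, z):
    """Simple power species-area relationship: S = c * A**z."""
    return c * A ** z

def aicc(rss, n, k):
    """AICc from the residual sum of squares, assuming Gaussian errors."""
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

# Hypothetical nested-sampling data: areas and observed species counts.
A = np.array([1, 2, 4, 8, 16, 32, 64, 128], dtype=float)
S = np.array([5, 8, 12, 19, 28, 43, 64, 95], dtype=float)

(c, z), _ = curve_fit(power_sar, A, S, p0=(5.0, 0.25))
rss = np.sum((S - power_sar(A, c, z)) ** 2)
print(f"c={c:.2f}, z={z:.3f}, AICc={aicc(rss, len(A), k=3):.2f}")  # k: c, z, sigma
```

Repeating the fit for each candidate function and ranking by ΔAICc relative to the best model is the comparison the study performs across sampling schemes and scales.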

5.
In this paper, we provide a brief review of the well-known methods of reducing spatially structured population models to mean-field models. First, we discuss the terminology of mean-field approximation used in the ecological modelling literature and show that the various existing interpretations of the mean-field concept can imply different meanings. Then we classify and compare various methods of reducing spatially explicit models to mean-field models: spatial moment approximation, aggregation techniques and the mean-field limit of individual-based models (IBMs). We emphasize the importance of spatial scales in the reduction of spatially explicit models and briefly consider the inverse problem of scaling up local ecological interactions from microscales to macroscales. Then we discuss the current challenges and limitations in constructing mean-field population models. We emphasize the need to develop mixed methods based on a combination of reduction techniques to cope with the spatio-temporal complexity of real ecosystems, including processes taking place on multiple time and space scales. Finally, we argue that the construction of analytically tractable mean-field models is becoming a key issue in providing insight into the major mechanisms of ecosystem functioning. We complete this review by introducing the contributions to the current special issue of Ecological Complexity.
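To make the mean-field limit concrete, here is a minimal sketch (spatial structure deliberately omitted): a stochastic birth-death individual-based model whose ensemble average is tracked by the corresponding logistic mean-field ODE. All rates are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
b, d, K, T, dt = 1.0, 0.2, 200, 15.0, 0.01
steps = int(T / dt)

def ibm_run(n0=10):
    """Stochastic birth-death process with density-dependent births."""
    n = n0
    traj = np.empty(steps)
    for t in range(steps):
        births = rng.binomial(n, b * max(0.0, 1 - n / K) * dt)
        deaths = rng.binomial(n, d * dt)
        n = max(n + births - deaths, 0)
        traj[t] = n
    return traj

# Mean-field limit: dn/dt = b*n*(1 - n/K) - d*n (forward Euler integration).
n_mf = np.empty(steps)
n_mf[0] = 10.0
for t in range(1, steps):
    n = n_mf[t - 1]
    n_mf[t] = n + (b * n * (1 - n / K) - d * n) * dt

ibm_mean = np.mean([ibm_run() for _ in range(50)], axis=0)
print(f"final IBM ensemble mean: {ibm_mean[-1]:.1f}  vs mean-field: {n_mf[-1]:.1f}")
```

In a spatially explicit IBM the same reduction requires the moment or aggregation machinery the review classifies, because local correlations break the simple averaging used here.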

6.
Understanding and predicting the dynamics of organisms is a central objective in ecology and conservation biology, and modelling provides a way to tackle this problem. However, the complex nature of ecological systems means that a thorough understanding of ecological dynamics at hierarchical scales requires a set of modelling approaches to be adopted. This review illustrates how modelling approaches can be used to understand the dynamics of organisms in applied ecological problems, focussing on mechanistic models at a local scale and statistical models at a broad scale. Mechanistic models incorporate ecological processes explicitly and thus are likely to be robust under novel conditions. Models based on behavioural decisions by individuals are a typical example of the successful application of mechanistic models to applied problems. Given the data-hungry nature of such mechanistic models, model complexity and parameterisation need to be explored further for quick and widespread implementation of this model type. For broad-scale phenomena, statistical models play an important role in dealing with problems that are often inherent in the data. Examples include models for quantifying population trends from long-term, large-scale data and comparative methods for assessing extinction risk. Novel statistical approaches also allow mechanistic models to be parameterised using readily obtained data at a macro scale. In conclusion, the complementary use and improvement of multiple model types, the increased use of novel model parameterisation, the examination of model transferability and the achievement of wider biodiversity information availability are key challenges for the effective use of modelling in applied ecological problems.

7.
Phylogenetic comparative methods may fail to produce meaningful results when either the underlying model is inappropriate or the data contain insufficient information to inform the inference. The ability to measure the statistical power of these methods has become crucial to ensure that data quantity keeps pace with growing model complexity. Through simulations, we show that commonly applied model choice methods based on information criteria can have remarkably high error rates; this can be a problem because methods to estimate the uncertainty or power are not widely known or applied. Furthermore, the power of comparative methods can depend significantly on the structure of the data. We describe a Monte Carlo-based method that addresses both of these challenges, and show how this approach both quantifies and substantially reduces errors relative to information criteria. The method also produces meaningful confidence intervals for model parameters. We illustrate how the power to distinguish different models, such as varying levels of selection, varies with both the number of taxa and the structure of the phylogeny. We provide an open-source implementation in the pmc ("Phylogenetic Monte Carlo") package for the R programming language. We hope such power analysis becomes a routine part of model comparison in comparative methods.
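The Monte Carlo idea behind pmc can be sketched generically: simulate data under the simpler model, refit both models to each replicate, and compare the observed likelihood-ratio statistic against the resulting null distribution. The toy Gaussian models below stand in for the trait-evolution models used in the paper; all numbers are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def fit_loglik(x):
    """Maximized log-likelihoods of two nested models for the same data:
    M0: Normal(0, sigma) versus M1: Normal(mu, sigma)."""
    s0 = np.sqrt(np.mean(x ** 2))            # MLE of sigma under mu = 0
    l0 = stats.norm.logpdf(x, 0, s0).sum()
    mu, s1 = x.mean(), x.std()               # MLEs under free mu
    l1 = stats.norm.logpdf(x, mu, s1).sum()
    return l0, l1

x_obs = rng.normal(0.3, 1.0, size=30)        # hypothetical trait contrasts
l0, l1 = fit_loglik(x_obs)
delta_obs = 2 * (l1 - l0)

# Parametric bootstrap: simulate under the simpler model, refit both,
# and build the null distribution of the statistic (the pmc idea).
null = []
for _ in range(2000):
    x_sim = rng.normal(0.0, np.sqrt(np.mean(x_obs ** 2)), size=x_obs.size)
    a, b = fit_loglik(x_sim)
    null.append(2 * (b - a))
p = np.mean(np.array(null) >= delta_obs)
print(f"observed 2*dlnL = {delta_obs:.2f}, Monte Carlo p = {p:.3f}")
```

Running the same machinery with data simulated under the richer model gives the power of the comparison, which is how the paper quantifies when the data can actually distinguish the models.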

8.
Ecological systems are governed by complex interactions which are mainly nonlinear. In order to capture the inherent complexity and nonlinearity of ecological, and in general biological, systems, empirical models have recently gained popularity. However, although these models, particularly connectionist approaches such as multilayered backpropagation networks, are commonly applied as predictive models in ecology to a wide variety of ecosystems and questions, there are no studies to date assessing their performance, in terms of both data fitting and generalizability, or their applicability. Our aim is hence to provide an overview of the nature of a wide range of data sets and predictive variables, from both aquatic and terrestrial ecosystems with different scales of time-dependent dynamics, and of the applicability and robustness of predictive modeling methods on such data sets, by comparing different empirical modeling approaches. The models used in this study range from predicting the occurrence of submerged plants in shallow lakes to predicting nest occurrence of bird species from environmental variables and satellite images. The methods considered include k-nearest neighbor (k-NN), linear and quadratic discriminant analysis (LDA and QDA), generalized linear models (GLM), feedforward multilayer backpropagation networks and the pseudo-supervised network ARTMAP.

Our results show that the predictive performance of a model on training data can be misleading, and one should consider its performance on an independent test set when assessing its predictive power. Moreover, our results suggest that for ecosystems involving time-dependent dynamics and periodicities whose frequencies are possibly less than the time scale of the data considered, GLM and connectionist neural network models appear to be the most suitable and robust, provided that a predictive variable reflecting these time-dependent dynamics is included in the model either implicitly or explicitly. For spatial data that do not include any time-dependence comparable to the time scale covered by the data, on the other hand, neighborhood-based methods such as k-NN and ARTMAP proved more robust than the other methods considered in this study. In addition, for predictive modeling purposes, a suitable, computationally inexpensive method should first be applied to the problem at hand; a good predictive performance from this method would render the computational cost and effort associated with more complex variants unnecessary.
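The train/test point is easy to demonstrate. A minimal sketch comparing a GLM (logistic regression) with k-NN on a held-out test set; the synthetic data stand in for the presence/absence sets used in the study:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical presence/absence data with environmental predictors.
X, y = make_classification(n_samples=400, n_features=8, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("GLM (logistic)", LogisticRegression(max_iter=1000)),
                    ("k-NN (k=5)", KNeighborsClassifier(n_neighbors=5))]:
    model.fit(X_tr, y_tr)
    # Training accuracy alone can be misleading; report the held-out score too.
    print(f"{name:15s} train={model.score(X_tr, y_tr):.2f} "
          f"test={model.score(X_te, y_te):.2f}")
```

A large gap between the training and test scores is the overfitting signature the abstract warns about.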

9.
Currently available methods for model selection used in phylogenetic analysis are based on an initial fixed tree topology. Once a model is picked based on this topology, a rigorous search of the tree space is run under that model to find the maximum-likelihood estimate of the tree (topology and branch lengths) and the maximum-likelihood estimates of the model parameters. In this paper, we propose two extensions to the decision-theoretic (DT) approach that relax the fixed-topology restriction. We also relax the fixed-topology restriction for the Bayesian information criterion (BIC) and Akaike information criterion (AIC) methods. We compare the performance of the different methods (relaxed, restricted, and the likelihood-ratio test [LRT]) using simulated data. This comparison is done by evaluating the relative complexity of the models resulting from each method and by comparing the performance of the chosen models in estimating the true tree. We also compare the methods to one another by measuring the closeness of the estimated trees corresponding to the different models chosen under these methods. We show that varying the topology does not have a major impact on model choice. We also show that the outcome of the two proposed extensions is identical and comparable to that of the BIC, Extended-BIC, and DT. Hence, using the simpler methods to choose a model for analyzing the data is more computationally feasible, with results comparable to the more computationally intensive methods. Another outcome of this study is that earlier conclusions about the DT approach are reinforced: LRT, Extended-AIC, and AIC result in more complicated models that do not improve the performance of the phylogenetic inference, yet cause a significant increase in the time required for data analysis.

10.

Background

Given the complex mechanisms underlying biochemical processes, systems biology researchers tend to build ever larger computational models. However, dealing with complex systems entails a variety of problems, e.g. difficult intuitive understanding, a variety of time scales, or non-identifiable parameters. Therefore, methods are needed that, at least semi-automatically, help to elucidate how the complexity of a model can be reduced such that important behavior is maintained and the predictive capacity of the model is increased. The results should be easily accessible and interpretable. In the best case, such methods may also provide insight into fundamental biochemical mechanisms.

Results

We have developed a strategy based on the Computational Singular Perturbation (CSP) method which can be used to perform a "biochemically-driven" model reduction of even large and complex kinetic ODE systems. We provide an implementation of the original CSP algorithm in COPASI (a COmplex PAthway SImulator) and apply the strategy to two example models of differing complexity: a simple one-enzyme system and a full-scale model of yeast glycolysis.

Conclusion

The results show the usefulness of the method for model simplification purposes as well as for analyzing fundamental biochemical mechanisms. COPASI is freely available at http://www.copasi.org.
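CSP itself is not reproduced here, but the classical quasi-steady-state approximation illustrates the kind of time-scale-based reduction that CSP automates, on the same one-enzyme system. Rate constants and concentrations below are hypothetical:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Full mass-action kinetics of E + S <-> ES -> E + P.
k1, km1, k2, E0, S0 = 10.0, 1.0, 0.5, 1.0, 100.0  # hypothetical constants

def full(t, y):
    s, c = y                     # substrate and complex concentrations
    ds = -k1 * (E0 - c) * s + km1 * c
    dc = k1 * (E0 - c) * s - (km1 + k2) * c
    return [ds, dc]

def reduced(t, y):
    # Quasi-steady-state (Michaelis-Menten) elimination of the fast complex.
    Km = (km1 + k2) / k1
    return [-k2 * E0 * y[0] / (Km + y[0])]

t_eval = np.linspace(0, 50, 6)
sol_full = solve_ivp(full, (0, 50), [S0, 0.0], t_eval=t_eval,
                     method="LSODA", rtol=1e-8)
sol_red = solve_ivp(reduced, (0, 50), [S0], t_eval=t_eval, rtol=1e-8)
print(np.round(sol_full.y[0], 2))  # substrate, full two-variable model
print(np.round(sol_red.y[0], 2))   # substrate, reduced one-variable model
```

The reduced model discards the fast complex dynamics yet reproduces the slow substrate depletion, which is the behavior a "biochemically-driven" reduction aims to preserve.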

11.
Purpose

Despite the wide use of LCA for environmental profiling, the approach for determining the system boundary within LCA models continues to be subjective and lacking in mathematical rigor. As a result, life cycle models are often developed in an ad hoc manner, and are difficult to compare. Significant environmental impacts may be inadvertently left out. Overcoming this shortcoming can help elicit greater confidence in life cycle models and their use for decision making.

Methods

This paper describes a framework for hybrid life cycle model generation by selecting activities based on their importance, parametric uncertainty, and contribution to network complexity. The importance of activities is determined by structural path analysis, which then guides the construction of life cycle models based on uncertainty and complexity indicators. Information about uncertainty comes from the available life cycle inventory; complexity is quantified by cost or granularity. The life cycle model is developed in a hierarchical manner by adding the most important activities until error requirements are satisfied or network complexity exceeds user-specified constraints.

Results and Discussion

The framework is applied to an illustrative example of building a hybrid LCA model. Since this is a constructed example, the results can be compared with the actual impact to validate the approach. This application demonstrates how the algorithm sequentially develops a life cycle model of acceptable uncertainty and network complexity. Challenges in applying this framework to practical problems are discussed.

Conclusion

The presented algorithm designs system boundaries between scales of hybrid LCA models, includes or omits activities from the system based on path analysis of environmental impact contributions at upstream network nodes, and provides model quality indicators that permit comparison between different LCA models.

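The tiered construction can be sketched with input-output algebra: the Leontief inverse gives the total impact, and the power-series expansion behind structural path analysis attributes it to successively longer upstream paths. The three-activity matrices below are hypothetical:

```python
import numpy as np

# Hypothetical technology matrix A (inputs per unit output), direct impact
# intensities f, and final demand y for a three-activity system.
A = np.array([[0.0, 0.2, 0.1],
              [0.3, 0.0, 0.2],
              [0.1, 0.1, 0.0]])
f = np.array([2.0, 5.0, 1.0])
y = np.array([1.0, 0.0, 0.0])

# Total impact via the Leontief inverse: f (I - A)^-1 y.
total = f @ np.linalg.solve(np.eye(3) - A, y)

# Structural path analysis: expand (I - A)^-1 = I + A + A^2 + ..., so each
# tier isolates the impact of progressively longer upstream paths.
tiers, x = [], y.copy()
for _ in range(6):
    tiers.append(f @ x)
    x = A @ x
print(f"total impact {total:.3f}; cumulative by tier {np.cumsum(tiers).round(3)}")
```

Truncating the expansion once the remaining tiers fall below an error tolerance mirrors the hierarchical addition of activities described in the Methods.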

12.
Conservation planning with insects at three different spatial scales   (Cited by: 1; self-citations: 0; other citations: 1)
Deciding which areas to protect, and where and how to manage them, is no easy task. Many protected areas were established opportunistically under strong political and economic constraints, which may have resulted in inefficient and ineffective conservation. Systematic conservation planning has helped us move from ad hoc decisions to a quantitative and transparent decision-making process, identifying conservation priorities that achieve explicit objectives in a cost-efficient manner. Here we use Finnish butterflies to illustrate modelling approaches that address three types of situation in conservation planning, at three spatial scales. First, we employ species distribution models at the national scale to construct a conservation priority map for 91 species at a resolution of 10×10 km. Species distribution models interpolate sparse occurrence data to infer variation in habitat suitability and to predict species responses to habitat loss, management actions and climate change. Second, at the regional scale, we select the optimal management plan to protect a set of habitat-specialist species. And third, at the landscape scale, we use a metapopulation approach to manage a network of habitat patches for the long-term persistence of a single butterfly species. These different modelling approaches illustrate trade-offs between complexity and tractability and between generality and precision. General correlation-based models are helpful for setting priorities for multiple species at large spatial scales; more specific management questions at smaller scales require further data and more complex models. The vast number of insect species with diverse ecologies provides a source of information that has remained little used in systematic conservation planning.
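A national-scale priority map of the kind described can be caricatured in a few lines: rank grid cells by habitat suitability summed over species, weighting rarer species more heavily. Everything here (suitability values, the weighting rule, the cell count) is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
n_cells, n_species = 100, 91
# Hypothetical habitat-suitability predictions from distribution models,
# one value per species per 10 x 10 km grid cell.
suitability = rng.random((n_cells, n_species))

# A crude priority map: rank cells by summed suitability, with species
# weighted inversely to their total suitability so rarer species count more.
weights = 1.0 / suitability.sum(axis=0)
priority = (suitability * weights).sum(axis=1)
top = np.argsort(priority)[::-1][:10]
print("ten highest-priority cells:", top)
```

Operational planning tools replace this greedy scoring with complementarity-based optimisation, but the input, a species-by-cell suitability matrix from distribution models, is the same.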

13.
We compared four existing process-based stand-level models of varying complexity (physiological principles in predicting growth, photosynthesis and evapotranspiration, biogeochemical cycles, and stand to ecosystem carbon and evapotranspiration simulator) and a new nested model with 4 years of eddy-covariance-measured water vapor (LE) and CO2 (Fc) fluxes at a maturing loblolly pine forest. The nested model resolves the 'fast' CO2 and H2O exchange processes using canopy turbulence theories and radiative transfer principles, whereas slowly evolving processes are resolved using standard carbon allocation methods modified to improve leaf phenology. This model captured most of the intraannual variations in leaf area index (LAI), net ecosystem exchange (NEE), and LE for this stand, in which maximum LAI was not at a steady state. The model comparisons suggest strong linkages between carbon production and LAI variability, especially at seasonal time scales. This linkage necessitates the use of multilayer models to reproduce the seasonal dynamics of LAI, NEE, and LE. However, our findings suggest that increasing model complexity, often justified for resolving faster processes, does not necessarily translate into improved predictive skill at all time scales. Additionally, none of the models tested here adequately captured drought effects on water and CO2 fluxes. Furthermore, the good performance of some models in capturing flux variability on interannual time scales appears to stem from erroneous LAI dynamics and from a sensitivity to droughts that injects unrealistic flux variability at longer time scales.

14.
15.
As a result of the complexity inherent in some natural systems, mathematical models employed in ecology are often governed by a large number of variables. For instance, in the study of population dynamics we often find multiregional models for structured populations in which individuals are classified by their age and their spatial location. Dealing with such structured populations leads to high-dimensional models. Moreover, in many instances the dynamics of the system is controlled by processes whose time scales are very different from each other; in multiregional models, for example, migration is often a fast process in comparison to the growth of the population.

Approximate reduction techniques take advantage of the presence of different time scales in a system to introduce approximations that transform the original system into a simpler, low-dimensional system. In this way, the dynamics of the original system can be approximated in terms of that of the reduced system. This work deals with the study of that approximation. In particular, we work with a non-autonomous discrete-time model previously presented in the literature and obtain different bounds for the error incurred when the dynamics of the original system is described in terms of the reduced one. The results are illustrated by numerical simulations corresponding to the reduction of a Leslie-type model for a population structured in two age classes and living in a two-patch system.
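A minimal sketch of this kind of aggregation, with hypothetical rates: two age classes on two patches, fast (column-stochastic) migration within each age class, and a reduced 2×2 Leslie model built from migration-equilibrium averages. When migration fully equilibrates between demographic steps, the reduced totals track the full model:

```python
import numpy as np

# Migration matrices per age class (column-stochastic) and vital rates.
P1 = np.array([[0.6, 0.3], [0.4, 0.7]])
P2 = np.array([[0.8, 0.2], [0.2, 0.8]])
f = np.array([1.2, 2.0])   # patch-specific fertilities of age class 2
s = np.array([0.5, 0.7])   # patch-specific survival of age class 1

def full_step(n1, n2, k=50):
    """k fast migration steps followed by one demographic step."""
    for _ in range(k):
        n1, n2 = P1 @ n1, P2 @ n2
    return f * n2, s * n1          # births enter age 1, survivors enter age 2

def stable(P):
    """Normalized dominant eigenvector = fast migration equilibrium."""
    w, v = np.linalg.eig(P)
    u = np.abs(v[:, np.argmax(w.real)].real)
    return u / u.sum()

v1, v2 = stable(P1), stable(P2)
# Reduced Leslie model on age-class totals with migration-averaged rates.
L = np.array([[0.0, f @ v2], [s @ v1, 0.0]])
N = np.array([10.0, 10.0])
n1, n2 = N[0] * v1, N[1] * v2
for _ in range(10):
    n1, n2 = full_step(n1, n2)
    N = L @ N
print(f"full totals: {n1.sum():.2f}, {n2.sum():.2f}; "
      f"reduced: {N[0]:.2f}, {N[1]:.2f}")
```

The error bounds studied in the paper quantify exactly what is lost when migration does not fully equilibrate between demographic events.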

16.
In the last two decades, conventional linear methods for biosignal analysis have been substantially extended by non-stationary, non-linear, and complexity approaches. So far, complexity has usually been assessed with regard to a single time scale, disregarding complex physiology organised on different time scales. This shortcoming was overcome, and medically evaluated, by information flow functions developed in our research group in collaboration with several theoretical, experimental, and clinical partners. In the present work, the information flow is introduced and typical information flow characteristics are demonstrated. The prognostic value of autonomic information flow (AIF), which reflects communication in the cardiovascular system, was shown in patients with multiple organ dysfunction syndrome and in patients with heart failure. Gait information flow (GIF), which reflects communication in the motor control system during walking, was introduced to discriminate between controls and elderly patients suffering from low back pain. The applications presented for this theoretically grounded information flow approach confirm its value for the identification of complex physiological systems; its medical relevance has to be confirmed by comprehensive clinical studies. These information flow measures substantially extend the established linear and complexity measures in biosignal analysis.

17.
As a consequence of the complexity of ecosystems and the context-dependence of species interactions, structural uncertainty is pervasive in ecological modeling. This is particularly problematic when ecological models are used to make conservation and management plans whose outcomes may depend strongly on model formulation. Nonlinear time series approaches allow us to circumvent this issue by using the observed dynamics of the system to guide policy development. However, these methods typically require long time series from stationary systems, which are rarely available in ecological settings. Here we present a Bayesian approach to nonlinear forecasting based on Gaussian processes that readily integrates information from several short time series and allows for nonstationary dynamics. We demonstrate the utility of our modeling methods on data simulated from a wide range of ecological scenarios. We expect that these models will extend the range of ecological systems to which nonlinear forecasting methods can be usefully applied.
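A stripped-down version of the idea: delay-embed a short, noisy series and fit a Gaussian process one-step-ahead predictor, which returns predictive uncertainty alongside point forecasts. This sketch uses scikit-learn rather than the authors' implementation, and a hypothetical Ricker-type series:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
# Hypothetical short, noisy abundance series from a Ricker-type map.
x = np.empty(60)
x[0] = 0.5
for t in range(59):
    x[t + 1] = x[t] * np.exp(2.6 * (1 - x[t])) + 0.02 * rng.standard_normal()

# Delay embedding: predict x[t+1] from (x[t], x[t-1]).
X = np.column_stack([x[1:-1], x[:-2]])
y = x[2:]
gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.01),
                              normalize_y=True).fit(X[:-10], y[:-10])
pred, sd = gp.predict(X[-10:], return_std=True)
print(np.round(pred - y[-10:], 3))  # out-of-sample one-step errors
print(np.round(sd, 3))              # posterior predictive uncertainty
```

Because the GP posterior carries its own uncertainty, several short series can be pooled through shared hyperparameters instead of requiring one long stationary record.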

18.
Fluorescence recovery after photobleaching (FRAP) is used to obtain quantitative information about molecular diffusion and binding kinetics at both cell and tissue levels of organization. FRAP models have been proposed to estimate the diffusion coefficients and binding kinetic parameters of species for a variety of biological systems and experimental settings. However, it is not clear what the connection among the diverse parameter estimates from different models of the same system is, whether the assumptions made in the model are appropriate, and what the qualities of the estimates are. Here we propose a new approach to investigate the discrepancies between parameters estimated from different models. We use a theoretical model to simulate the dynamics of a FRAP experiment and generate the data that are used in various recovery models to estimate the corresponding parameters. By postulating a recovery model identical to the theoretical model, we first establish that the appropriate choice of observation time can significantly improve the quality of estimates, especially when the diffusion and binding kinetics are not well balanced, in a sense made precise later. Secondly, we find that changing the balance between diffusion and binding kinetics by changing the size of the bleaching region, which gives rise to different FRAP curves, provides a priori knowledge of diffusion and binding kinetics, which is important for model formulation. We also show that the use of the spatial information in FRAP provides better parameter estimation. By varying the recovery model from a fixed theoretical model, we show that a simplified recovery model can adequately describe the FRAP process in some circumstances and establish the relationship between parameters in the theoretical model and those in the recovery model. We then analyze an example in which the data are generated with a model of intermediate complexity and the parameters are estimated using models of greater or less complexity, and show how sensitivity analysis can be used to improve FRAP model formulation. Lastly, we show how sophisticated global sensitivity analysis can be used to detect over-fitting when using a model that is too complex.
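As a minimal illustration of recovery-model fitting (not the spatial models discussed in the paper), a single-exponential recovery curve can be fitted by least squares, with the parameter covariance indicating estimate quality. Data and rate constants here are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, f_inf, f0, k):
    """Single-exponential FRAP recovery: F(t) = f_inf - (f_inf - f0)*exp(-k*t)."""
    return f_inf - (f_inf - f0) * np.exp(-k * t)

rng = np.random.default_rng(5)
t = np.linspace(0, 30, 120)
# Hypothetical noisy recovery curve (true f_inf=0.9, f0=0.2, k=0.3).
F = recovery(t, 0.9, 0.2, 0.3) + 0.02 * rng.standard_normal(t.size)

popt, pcov = curve_fit(recovery, t, F, p0=(1.0, 0.1, 0.1))
perr = np.sqrt(np.diag(pcov))
# Longer observation windows shrink the uncertainty on f_inf and k.
for name, v, e in zip(("f_inf", "f0", "k"), popt, perr):
    print(f"{name:5s} = {v:.3f} +/- {e:.3f}")
```

Refitting with shorter time windows shows the observation-time effect the paper describes: truncated curves leave f_inf and k poorly constrained.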

19.
Soft tissue modelling has gained a great deal of importance, for a large part due to its application in surgical training simulators for minimally invasive surgery (MIS). This article provides a structured overview of different continuum-mechanical models that have been developed over the years. It aims at facilitating model choice for specific soft tissue modelling applications. According to the complexity of the model, different features of soft biological tissue will be incorporated, i.e. nonlinearity, viscoelasticity, anisotropy, heterogeneity and finally, tissue damage during deformation. A brief summary of experimental methods for material characterisation and an introduction to methods for geometric modelling are also provided. The overview is non-exhaustive, focusing on the most important general models and models with specific biological applications. A trade-off in complexity must be made for enabling real-time simulation, but still maintaining realistic representation of the organ deformation. Depending on the organ and tissue types, different models with emphasis on certain features will prove to be more appropriate, meaning the optimal model choice is organ and tissue-dependent.
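Of the tissue features listed, linear viscoelasticity is the simplest to write down. A sketch of stress relaxation in a standard linear solid under a step strain, with hypothetical moduli; real soft-tissue models add nonlinearity, anisotropy and damage on top of this:

```python
import numpy as np

# Standard linear solid: spring E0 in parallel with (spring E1 + dashpot eta).
# Stress relaxation under a step strain eps0 (1-D, linear simplification).
E0, E1, eta, eps0 = 1.0, 2.0, 5.0, 0.1   # hypothetical moduli and strain
tau = eta / E1                            # relaxation time
t = np.linspace(0, 20, 5)
sigma = eps0 * (E0 + E1 * np.exp(-t / tau))
print(np.round(sigma, 4))  # stress decays from eps0*(E0+E1) to eps0*E0
```

The relaxation time tau is the kind of parameter that must be identified from the material-characterisation experiments the review summarises.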

