Similar Articles
 20 similar articles found
1.
2.
Ring re-encounter data, in particular ring recoveries, have made a large contribution to our understanding of bird movements. However, almost every study based on ring re-encounter data has struggled with the bias caused by unequal observer distribution. Re-encounter probabilities are strongly heterogeneous in space and over time. If this heterogeneity can be measured or at least controlled for, the enormous amount of ring re-encounter data collected can be used effectively to answer many questions. Here, we review four different approaches to account for heterogeneity in observer distribution in spatial analyses of ring re-encounter data. The first approach is to measure re-encounter probability directly. We suggest that variation in ring re-encounter probability could be estimated by combining data whose re-encounter probabilities are close to one (radio or satellite telemetry) with data whose re-encounter probabilities are low (ring re-encounter data). The second approach is to measure the spatial variation in re-encounter probabilities using environmental covariates. It should be possible to identify powerful predictors for ring re-encounter probabilities. A third approach consists of the comparison of the actual observations with all possible observations using randomization techniques. We encourage combining such randomizations with ring re-encounter models, which we discuss as a fourth approach. Ring re-encounter models are based on the comparison of groups with equal re-encounter probabilities. Together, these four approaches could improve our understanding of bird movements considerably. We discuss their advantages and limitations and give directions for future research.
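As a rough illustration of the third approach, the sketch below compares an observed recovery statistic against a null distribution generated by redistributing recoveries in proportion to observer effort. All locations, effort weights, and recovery counts are simulated and purely illustrative, not taken from the review.

```python
# Sketch of a randomization test for ring-recovery data (illustrative only):
# compare an observed statistic against a null distribution generated by
# redrawing recovery locations in proportion to observer effort.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical grid of 100 cells, each with an observer-effort weight
# (e.g. derived from human population density) and a centre latitude.
n_cells = 100
effort = rng.gamma(shape=2.0, scale=1.0, size=n_cells)   # relative re-encounter effort
latitude = np.linspace(35.0, 65.0, n_cells)               # cell centre latitudes

# Hypothetical observed recoveries: 200 ring re-encounters assigned to cells.
observed_cells = rng.choice(n_cells, size=200, p=effort / effort.sum())
observed_stat = latitude[observed_cells].mean()            # e.g. mean recovery latitude

# Null distribution: recoveries placed at random, weighted only by effort,
# i.e. "all possible observations" given the observer distribution.
n_rand = 5000
null_stats = np.empty(n_rand)
for i in range(n_rand):
    cells = rng.choice(n_cells, size=200, p=effort / effort.sum())
    null_stats[i] = latitude[cells].mean()

# Two-sided Monte Carlo p-value: is the observed pattern explainable by
# observer effort alone?
p = (np.abs(null_stats - null_stats.mean())
     >= np.abs(observed_stat - null_stats.mean())).mean()
print(f"observed mean latitude: {observed_stat:.2f}, randomization p = {p:.3f}")
```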

3.
Metabolic system modeling for model-based glycaemic control is becoming increasingly important. Few metabolic system models are clinically validated for both fit to the data and prediction ability. This research introduces a new additional form of pharmaco-dynamic (PD) surface comparison for model analysis and validation. These 3D surfaces are developed for 3 clinically validated models and 1 model with an added saturation dynamic. The models include the well-known Minimal Model. They are fit to two different data sets of clinical PD data from hyperinsulinaemic clamp studies at euglycaemia and/or hyperglycaemia. The models are fit to the first data set to determine an optimal set of population parameters. The second data set is used to test trend prediction of the surface modeling as it represents a lower insulin sensitivity cohort and should thus require only scaling in these (or related) parameters to match this data set. This particular approach clearly highlights differences in modeling methods and in the model dynamics utilized that may not appear as clearly in other fitting or prediction validation methods. Across all models, saturation of insulin action is seen to be an important determinant of prediction and fit quality. In particular, the well-reported under-modeling of insulin sensitivity in the Minimal Model can be seen in this context to be a result of a lack of saturation dynamics, which in turn affects its ability to detect differences between cohorts. The overall approach of examining PD surfaces is seen to be an effective means of analyzing and thus validating a metabolic model's inherent dynamics and basic trend prediction on a population level, but is not a replacement for data-driven, patient-specific fit and prediction validation for clinical use. The overall method presented could be readily generalized to similar PD systems and therapeutics.
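For readers unfamiliar with the models involved, the following sketch shows the Bergman Minimal Model with an optional Michaelis-Menten saturation of insulin action, which is one simple way to introduce the kind of saturation dynamic discussed above. The parameter values and insulin profile are illustrative, not the fitted population values from the study.

```python
# Sketch of the Bergman Minimal Model with an optional saturation of insulin
# action (Michaelis-Menten form). Parameter values are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

p1, p2, p3 = 0.03, 0.02, 1.0e-5   # glucose effectiveness, remote-insulin kinetics
Gb, Ib = 4.5, 10.0                 # basal glucose (mmol/L) and insulin (mU/L)
alpha = 0.05                       # saturation constant; alpha = 0 recovers the Minimal Model

def insulin(t):
    # Hypothetical clamp-like insulin profile (mU/L).
    return Ib + 40.0 * (t > 30.0)

def rhs(t, y):
    G, X = y
    # Saturated insulin action: X / (1 + alpha * X); linear when alpha = 0.
    insulin_action = X / (1.0 + alpha * X)
    dG = -p1 * (G - Gb) - insulin_action * G
    dX = -p2 * X + p3 * (insulin(t) - Ib)
    return [dG, dX]

sol = solve_ivp(rhs, (0.0, 180.0), [Gb, 0.0], max_step=1.0)
print(f"glucose at t=180 min: {sol.y[0, -1]:.2f} mmol/L")
```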

4.
Sightability models are binary logistic-regression models used to estimate and adjust for visibility bias in wildlife-population surveys. Like many models in wildlife and ecology, sightability models are typically developed from small observational datasets with many candidate predictors. Aggressive model-selection methods are often employed to choose a best model for prediction and effect estimation, despite evidence that such methods can lead to overfitting (i.e., selected models may describe random error or noise rather than true predictor–response curves) and poor predictive ability. We used moose (Alces alces) sightability data from northeastern Minnesota (2005–2007) as a case study to illustrate an alternative approach, which we refer to as degrees-of-freedom (df) spending: sample-size guidelines are used to determine an acceptable level of model complexity and then a pre-specified model is fit to the data and used for inference. For comparison, we also constructed sightability models using Akaike's Information Criterion (AIC) step-down procedures and model averaging (based on a small set of models developed using df-spending guidelines). We used bootstrap procedures to mimic the process of model fitting and prediction, and to compute an index of overfitting, expected predictive accuracy, and model-selection uncertainty. The index of overfitting increased 13% when the number of candidate predictors was increased from three to eight and a best model was selected using step-down procedures. Likewise, model-selection uncertainty increased when the number of candidate predictors increased. Model averaging (based on R = 30 models with 1–3 predictors) effectively shrunk regression coefficients toward zero and produced similar estimates of precision to our 3-df pre-specified model. As such, model averaging may help to guard against overfitting when too many predictors are considered (relative to available sample size). The set of candidate models will influence the extent to which coefficients are shrunk toward zero, which has implications for how one might apply model averaging to problems traditionally approached using variable-selection methods. We often recommend the df-spending approach in our consulting work because it is easy to implement and it naturally forces investigators to think carefully about their models and predictors. Nonetheless, similar concepts should apply whether one is fitting 1 model or using multi-model inference. For example, model-building decisions should consider the effective sample size, and potential predictors should be screened (without looking at their relationship to the response) for missing data, narrow distributions, collinearity, potentially overly influential observations, and measurement errors (e.g., via logical error checks). © 2011 The Wildlife Society.  相似文献   
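The bootstrap overfitting index used in the study is not reproduced here, but the sketch below illustrates the underlying idea on simulated data: models fit to bootstrap samples are scored both on the sample they were fit to and on the full data, and the gap (optimism) is compared for a pre-specified 3-predictor model versus a model using all 8 candidate predictors. Sample size, predictors, and effect sizes are invented.

```python
# Sketch of a bootstrap "optimism" (overfitting) check for a sightability-style
# logistic model, using simulated survey data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, n_candidates = 124, 8                        # small survey-sized data set
X = rng.normal(size=(n, n_candidates))          # e.g. group size, cover, snow, activity, ...
logit = -0.5 + 1.0 * X[:, 0] + 0.5 * X[:, 1]    # only two predictors truly matter
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

def auc_optimism(k):
    """Fit on a bootstrap sample using the first k candidate predictors."""
    idx = rng.integers(0, n, size=n)
    model = LogisticRegression(C=1e6, max_iter=1000).fit(X[idx, :k], y[idx])
    auc_boot = roc_auc_score(y[idx], model.predict_proba(X[idx, :k])[:, 1])
    auc_full = roc_auc_score(y, model.predict_proba(X[:, :k])[:, 1])
    return auc_boot - auc_full                  # apparent minus out-of-sample performance

for k in (3, 8):                                # df-spending (3 predictors) vs all 8 candidates
    optimism = np.mean([auc_optimism(k) for _ in range(200)])
    print(f"{k} candidate predictors: mean AUC optimism = {optimism:.3f}")
```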

5.
The idea that individual differences in behavior and physiology can be partly understood by linking them to a fast-slow continuum of life history strategies has become popular in the evolutionary behavioral sciences. I refer to this approach as the “fast-slow paradigm” of individual differences. The paradigm has generated a substantial amount of research, but has also come increasingly under scrutiny for theoretical, empirical, and methodological reasons. I start by reviewing the basic empirical facts about the fast-slow continuum across species and the main theoretical accounts of its existence. I then discuss the move from the level of species and populations to that of individuals, and the theoretical and empirical complications that follow. I argue that the fast-slow continuum can be a productive heuristic for individual differences; however, the field needs to update its theoretical assumptions, rethink some methodological practices, and explore new approaches and ideas in light of the specific features of the human ecology.  相似文献   

6.
The COVID-19 pandemic has highlighted delayed reporting as a significant impediment to effective disease surveillance and decision-making. In the absence of timely data, statistical models which account for delays can be adopted to nowcast and forecast cases or deaths. We discuss the four key sources of systematic and random variability in available data for COVID-19 and other diseases, and critically evaluate current state-of-the-art methods with respect to appropriately separating and capturing this variability. We propose a general hierarchical approach to correcting delayed reporting of COVID-19 and apply this to daily English hospital deaths, resulting in a flexible prediction tool which could be used to better inform pandemic decision-making. We compare this approach to competing models with respect to theoretical flexibility and quantitative metrics from a 15-month rolling prediction experiment imitating a realistic operational scenario. Based on consistent leads in predictive accuracy, bias, and precision, we argue that this approach is an attractive option for correcting delayed reporting of COVID-19 and future epidemics.  相似文献   
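The hierarchical model itself is not reproduced here; the following sketch shows only the basic delay-correction idea, scaling partially reported counts by the fraction of reports expected to have arrived so far. The reporting triangle and delay distribution are simulated and purely illustrative.

```python
# Sketch of a simple delay-correction ("nowcast") using an empirical
# reporting-delay distribution; a simplified stand-in for the hierarchical
# approach described in the abstract.
import numpy as np

rng = np.random.default_rng(42)
T, D = 60, 10                                   # days of data, maximum reporting delay
true_deaths = rng.poisson(30, size=T)
delay_pmf = np.array([0.05, 0.15, 0.25, 0.2, 0.12, 0.08, 0.06, 0.04, 0.03, 0.02])

# Reporting triangle: reports[t, d] = deaths occurring on day t, reported d days later.
reports = np.array([rng.multinomial(true_deaths[t], delay_pmf) for t in range(T)])

today = T - 1
nowcast = np.empty(T)
for t in range(T):
    observed_so_far = reports[t, : min(D, today - t + 1)].sum()
    # Fraction of eventual reports expected to have arrived by "today".
    frac_reported = delay_pmf[: min(D, today - t + 1)].sum()
    nowcast[t] = observed_so_far / frac_reported

print("last 5 days (true vs nowcast):")
for t in range(T - 5, T):
    print(f"  day {t}: true={true_deaths[t]}, nowcast={nowcast[t]:.1f}")
```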

7.
We apply geostatistical modeling techniques to investigate spatial patterns of species richness. Unlike most other statistical modeling techniques that are valid only when observations are independent, geostatistical methods are designed for applications involving spatially dependent observations. When spatial dependencies, which are sometimes called autocorrelations, exist, geostatistical techniques can be applied to produce optimal predictions in areas (typically proximate to observed data) where no observed data exist. Using tiger beetle species (Cicindelidae) data collected in western North America, we investigate the characteristics of spatial relationships in species numbers data. First, we compare the accuracy of spatial predictions of species richness when data from grid squares of two different sizes (scales) are used to form the predictions. Next we examine how prediction accuracy varies as a function of areal extent of the region under investigation. Then we explore the relationship between the number of observations used to build spatial prediction models and prediction accuracy. Our results indicate that, within the taxon of tiger beetles and for the two scales we investigate, the accuracy of spatial predictions is unrelated to scale and that prediction accuracy is not obviously related to the areal extent of the region under investigation. We also provide information about the relationship between sample size and prediction accuracy, and, finally, we show that prediction accuracy may be substantially diminished if spatial correlations in the data are ignored.
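As a sketch of the geostatistical machinery involved, the code below performs ordinary kriging with an exponential covariance model on simulated richness data. The covariance parameters are assumed rather than fit from an empirical variogram, as they would be in practice.

```python
# Sketch of ordinary kriging for species-richness prediction with an
# exponential covariance model; coordinates, richness values, and the
# sill/range/nugget parameters are illustrative.
import numpy as np

rng = np.random.default_rng(7)
coords = rng.uniform(0, 100, size=(40, 2))                   # grid-square centroids (km)
richness = 5 + 0.1 * coords[:, 0] + rng.normal(0, 1, 40)     # observed species counts

sill, corr_range, nugget = 2.0, 30.0, 0.2

def cov(h):
    """Exponential covariance as a function of separation distance h."""
    return sill * np.exp(-h / corr_range)

def krige(x0):
    n = len(coords)
    d0 = np.linalg.norm(coords - x0, axis=1)
    d_obs = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    # Ordinary kriging system with a Lagrange multiplier for the unbiasedness constraint.
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = cov(d_obs) + nugget * np.eye(n)
    A[n, :n] = A[:n, n] = 1.0
    b = np.append(cov(d0), 1.0)
    w = np.linalg.solve(A, b)[:n]
    return w @ richness

print(f"predicted richness at (50, 50): {krige(np.array([50.0, 50.0])):.2f}")
```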

8.
The laboratory rat has long provided plastic surgical investigators with a model to study many aspects of flap physiology. Clinical advances in reconstructive surgery have followed or preceded experimental work, setting the stage for further advances. We have critically reviewed all reports of flap models in the laboratory rat, beginning with simple skin flaps designed on various areas of the body and continuing with free-tissue transfer models. Because of the multitude of as yet unanswered questions, the laboratory rat will invariably continue to be widely used as an investigatory source in this area. This report should allow investigators to more easily select reliable, reproducible experimental models and, one hopes, to streamline their investigative efforts.

9.
Mathematical models of neurobehavioral function are useful both for understanding the underlying physiology and for predicting the effects of rest-activity-work schedules and interventions on neurobehavioral function. In a symposium titled "Modeling Human Neurobehavioral Performance I: Uncovering Physiologic Mechanisms" at the 2006 Society for Industrial and Applied Mathematics/Society for Mathematical Biology (SIAM/SMB) Conference on the Life Sciences, different approaches to modeling the physiology of human circadian rhythms, sleep, and neurobehavioral performance and their usefulness in understanding the underlying physiology were examined. The topics included key elements of the physiology that should be included in mathematical models, a computational model developed within a cognitive architecture that has begun to include the effects of extended wake on information-processing mechanisms that influence neurobehavioral function, how to deal with interindividual differences in the prediction of neurobehavioral function, the applications of systems biology and control theory to the study of circadian rhythms, and comparisons of these methods in approaching the overarching questions of the underlying physiology and mathematical models of circadian rhythms and neurobehavioral function. A unifying theme was that it is important to have strong collaborative ties between experimental investigators and mathematical modelers, both for the design and conduct of experiments and for continued development of the models.  相似文献   

10.
Simulating complex biological and physiological systems and predicting their behaviours under different conditions remains challenging. Breaking systems into smaller and more manageable modules can address this challenge, assisting both model development and simulation. Nevertheless, existing computational models in biology and physiology are often not modular and therefore difficult to assemble into larger models. Even when this is possible, the resulting model may not be useful due to inconsistencies either with the laws of physics or the physiological behaviour of the system. Here, we propose a general methodology for composing models, combining the energy-based bond graph approach with semantics-based annotations. This approach improves model composition and ensures that a composite model is physically plausible. As an example, we demonstrate this approach to automated model composition using a model of human arterial circulation. The major benefit is that modellers can spend more time on understanding the behaviour of complex biological and physiological systems and less time wrangling with model composition.  相似文献   
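The bond graph formalism and semantic annotations are not reproduced here; the sketch below only illustrates the modular-composition idea by hand-coupling two Windkessel-like arterial compartments, with all parameter values invented for illustration.

```python
# Sketch of composing two modular compartment models (Windkessel-like arterial
# segments) by connecting the flow leaving one module to the pressure node of
# the next. A hand-coded illustration of model composition, not the bond graph
# plus semantic-annotation approach described in the abstract.
import numpy as np
from scipy.integrate import solve_ivp

# Each module: compliance C and outflow resistance R (illustrative values).
modules = [{"C": 1.2, "R": 0.9}, {"C": 0.8, "R": 1.4}]

def inflow(t):
    # Hypothetical pulsatile inflow from the heart (mL/s).
    return 80.0 * max(np.sin(2 * np.pi * t), 0.0)

def rhs(t, p):
    # p[i] is the pressure in module i; flows couple neighbouring modules.
    q01 = (p[0] - p[1]) / modules[0]["R"]     # flow from module 0 into module 1
    q_out = p[1] / modules[1]["R"]            # outflow to the venous side (taken as 0 pressure)
    dp0 = (inflow(t) - q01) / modules[0]["C"]
    dp1 = (q01 - q_out) / modules[1]["C"]
    return [dp0, dp1]

sol = solve_ivp(rhs, (0.0, 10.0), [80.0, 75.0], max_step=0.01)
print(f"pressures after 10 s: {sol.y[0, -1]:.1f}, {sol.y[1, -1]:.1f} (illustrative units)")
```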

11.
12.
13.
The two main approaches in theoretical population ecology, the classical approach using differential equations and the approach using individual-based modeling, seem to be incompatible. Linked to these two approaches are two different timescales: population dynamics and behavior or physiology. Thus, the question of the relationship between classical and individual-based approaches is related to the question of the mutual relationship between processes on the population and the behavioral timescales. We present a simple protocol that allows the two different approaches to be reconciled by making explicit use of the fact that processes operating on two different timescales can be treated separately. Using an individual-based model of nomadic birds as an example, we extract the population growth rate by deactivating all demographic processes; in other words, the individuals behave but do not age, die, or reproduce. The growth rate closely matches the logistic growth rate for a wide range of parameters. The implications of this result and the conditions for applying the protocol to other individual-based models are discussed. Since in physics the technique of separating timescales is linked to some concepts of self-organization, we believe that the protocol will also help to develop concepts of self-organization in ecology.
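A minimal sketch of the protocol, assuming a toy patch-selection behaviour rather than the authors' nomadic-bird model: individuals redistribute themselves on the fast behavioural timescale while births and deaths are switched off, and the expected per-capita growth rate is then read off as a function of population size.

```python
# Sketch of the timescale-separation protocol: let individuals behave (move
# among patches) with demography deactivated, then compute the expected
# per-capita growth rate as a function of population size N.
# All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n_patches = 20
resource = rng.uniform(5.0, 15.0, n_patches)   # resource supply per patch
a, m = 0.12, 0.15                               # conversion and mortality rates

def growth_rate(N):
    patch = rng.integers(0, n_patches, size=N)  # initial positions
    for _ in range(30 * N):                     # fast behavioural timescale only:
        i = rng.integers(N)                     # no births or deaths here
        candidate = rng.integers(n_patches)
        counts = np.bincount(patch, minlength=n_patches)
        # Move if the per-capita resource share would improve.
        if resource[candidate] / (counts[candidate] + 1) > resource[patch[i]] / counts[patch[i]]:
            patch[i] = candidate
    counts = np.bincount(patch, minlength=n_patches)
    share = resource[patch] / counts[patch]      # each individual's intake
    return np.mean(a * share - m)                # expected per-capita growth rate

# The resulting r(N) curve, obtained without simulating births or deaths, is the
# population-level growth function implied by the behaviour; it can then be
# compared with a logistic (or other) growth model.
for N in (50, 100, 200, 400):
    print(f"N={N:4d}: per-capita growth rate = {growth_rate(N):+.3f}")
```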

14.
Biophysical models are increasingly used for medical applications at the organ scale. However, model predictions are rarely associated with a confidence measure, although there are important sources of uncertainty in computational physiology methods, for instance the sparsity and noise of the clinical data used to adjust the model parameters (personalization) and the difficulty of accurately modeling soft tissue physiology. Recent theoretical progress in stochastic models makes their use computationally tractable, but there is still a challenge in estimating patient-specific parameters with such models. In this work we propose an efficient Bayesian inference method for model personalization using polynomial chaos and compressed sensing. This method makes Bayesian inference feasible in real 3D modeling problems. We demonstrate our method on cardiac electrophysiology. We first present validation results on synthetic data, then we apply the proposed method to clinical data. We demonstrate how this can help in quantifying the impact of the data characteristics on the personalization (and thus prediction) results. The described method can be beneficial for the clinical use of personalized models as it explicitly takes into account the uncertainties on the data and the model parameters while still enabling simulations that can be used to optimize treatment. Such uncertainty handling can be pivotal for the proper use of modeling as a clinical tool, because there is a crucial requirement to know the confidence one can have in personalized models.
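The sketch below illustrates the polynomial chaos plus compressed sensing ingredient on a stand-in scalar model (not a cardiac electrophysiology simulator): the output is expanded in Legendre polynomials of two normalized parameters, and the sparse coefficients are recovered with L1-regularized regression from fewer model runs than basis terms.

```python
# Sketch of a sparse polynomial chaos surrogate recovered by L1-regularized
# regression (a compressed-sensing-style fit). The "model" is a stand-in function.
import numpy as np
from numpy.polynomial import legendre
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def model(theta):
    # Hypothetical scalar output depending on 2 normalized parameters in [-1, 1].
    return np.exp(0.5 * theta[:, 0]) + 0.3 * theta[:, 0] * theta[:, 1]

# Few expensive model runs (fewer samples than basis terms).
n_samples, degree = 30, 8
theta = rng.uniform(-1, 1, size=(n_samples, 2))
y = model(theta)

def basis(theta):
    # Tensor-product Legendre basis up to total degree `degree` (45 terms here).
    cols = []
    for i in range(degree + 1):
        for j in range(degree + 1 - i):
            Pi = legendre.legval(theta[:, 0], [0] * i + [1])
            Pj = legendre.legval(theta[:, 1], [0] * j + [1])
            cols.append(Pi * Pj)
    return np.column_stack(cols)

surrogate = Lasso(alpha=1e-3, max_iter=50000).fit(basis(theta), y)

# The cheap surrogate can now stand in for the expensive model, e.g. inside
# Bayesian parameter inference.
theta_new = rng.uniform(-1, 1, size=(5, 2))
print(np.c_[model(theta_new), surrogate.predict(basis(theta_new))])
```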

15.
16.
A development of a structural dynamic model, i.e. a model in which the most important parameters are continuously changed according to a goal function, is presented with the aim of explaining the structural changes observed in lakes when the nutrient concentration is increased or decreased. This type of model may be important in lake management, as it may make it possible to qualitatively predict the success or failure of biomanipulation. Answering the crucial question, 'at which phosphorus level will the success of biomanipulation be most probable?', will probably require the development of a model which takes into account site-specific processes and properties, i.e. a more complicated model. The thermodynamic function exergy, defined as the work content of the system (model) compared with the system at thermodynamic equilibrium, is proposed as the goal function. It is shown that the structural dynamic modelling approach is able to explain the shift from large to small zooplankton species at a certain level of phosphorus concentration, accompanied by a shift from dominance of zooplankton and predatory fish to a system dominated by planktivorous fish and phytoplankton. The shift in zooplankton species cannot be explained by application of catastrophe-theoretical models, which have been used to explain the hysteresis reaction. The results show that the shift should be expected at approximately 0.12 mg P l-1 and that a typical hysteresis reaction occurs at this concentration, in accordance with expectations. These results are consistent with many observations but should be interpreted with great caution, as the model is simple and general and does not account for a number of processes which may influence the results significantly in specific lake studies. The structural dynamic approach has previously been used in ten case studies with good agreement with the observations, but more case studies are needed before a general recommendation of the use of this type of model can be given. The results from this study point toward applying this type of model in lake management where biomanipulation is involved, although it is recommended to improve the presented general model by introducing site-specific properties for the lake under consideration.
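A minimal sketch of the structural-dynamic idea, assuming a toy two-state phytoplankton-zooplankton model rather than the lake model described above: at regular intervals a parameter standing in for zooplankton size (here the grazing rate) is re-selected so that an exergy-like index of the form Ex = sum(beta_i * c_i) is maximized. All values are invented for illustration.

```python
# Sketch of a structural dynamic model: a parameter is periodically re-selected
# to maximize an exergy index used as the goal function. Toy model, not the
# lake model from the abstract.
import numpy as np
from scipy.integrate import solve_ivp

beta = np.array([20.0, 135.0])   # exergy weighting factors: phytoplankton, zooplankton
P_load = 0.12                    # phosphorus level controlling phytoplankton growth

def rhs(t, y, graze):
    phyto, zoo = y
    dp = 2.0 * P_load / (P_load + 0.05) * phyto - graze * phyto * zoo - 0.1 * phyto
    dz = 0.6 * graze * phyto * zoo - 0.2 * zoo
    return [dp, dz]

def exergy(y):
    return float(beta @ np.maximum(y, 0.0))

y = np.array([1.0, 0.5])
graze = 0.5
for window in range(10):                          # every window, re-select the parameter
    candidates = graze * np.array([0.8, 1.0, 1.25])
    results = []
    for g in candidates:
        sol = solve_ivp(rhs, (0, 30), y, args=(g,), max_step=0.5)
        results.append((exergy(sol.y[:, -1]), g, sol.y[:, -1]))
    best = max(results, key=lambda r: r[0])       # keep the value giving highest exergy
    _, graze, y = best
    print(f"window {window}: grazing rate = {graze:.2f}, exergy index = {best[0]:.1f}")
```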

17.
Classification of patients based on molecular markers, for example into different risk groups, is a modern field in medical research. The aim of this classification is often a better diagnosis or individualized therapy. The search for molecular markers often utilizes extremely high-dimensional data sets (e.g. gene-expression microarrays). However, in situations where the number of measured markers (genes) is intrinsically higher than the number of available patients, standard methods from statistical learning fail to deal correctly with this so-called "curse of dimensionality". Feature or dimension reduction techniques based on statistical models also promise only limited success. Several recent methods explore ideas of how to quantify and incorporate biological prior knowledge of molecular interactions and known cellular processes into the feature selection process. This article aims to give an overview of such current methods as well as the databases from which this external knowledge can be obtained. For illustration, two recent methods are compared in detail, a feature selection approach for support vector machines as well as a boosting approach for regression models. As a practical example, data on patients with acute lymphoblastic leukemia are considered, for which the binary endpoint "relapse within first year" is to be predicted.
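As a simple illustration of the p >> n setting and of restricting candidates to externally motivated gene sets, the sketch below compares cross-validated performance of an L1-penalized logistic regression with and without such a restriction. The expression matrix, endpoint, and "pathway" gene set are simulated; this is not either of the two methods compared in the article.

```python
# Sketch of p >> n classification and the effect of restricting features to a
# biologically motivated candidate set (e.g. pathway genes from KEGG or GO).
# All data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_genes = 60, 2000
X = rng.normal(size=(n_patients, n_genes))
informative = np.arange(10)                     # 10 genes truly related to relapse
y = (X[:, informative].sum(axis=1) + rng.normal(0, 2, n_patients) > 0).astype(int)

# "Prior knowledge": a pathway-derived candidate set containing the informative
# genes plus some others.
pathway_genes = np.concatenate(
    [informative, rng.choice(np.arange(10, n_genes), 90, replace=False)])

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
auc_all = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
auc_prior = cross_val_score(clf, X[:, pathway_genes], y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC, all {n_genes} genes:        {auc_all:.2f}")
print(f"cross-validated AUC, {len(pathway_genes)} pathway genes: {auc_prior:.2f}")
```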

18.
The size and nature of data collected on gene and protein interactions has led to a rapid growth of interest in graph theory and modern techniques for describing, characterizing and comparing networks. Simultaneously, this is a field of growth within mathematics and theoretical physics, where the global properties, and emergent behavior of networks, as a function of the local properties has long been studied. In this review, a number of approaches for exploiting modern network theory to help describe and analyze different data sets and problems associated with proteomic data are considered. This review aims to help biologists find their way towards useful ideas and references, yet may also help scientists from a mathematics and physics background to understand where they may apply their expertise.  相似文献   
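A small sketch of the kind of descriptive network statistics discussed, computed with networkx on a randomly generated stand-in for a protein-interaction graph.

```python
# Sketch of basic network descriptors on a protein-interaction-style graph;
# the random graph below is a stand-in for real interaction data.
import networkx as nx

# Hypothetical interaction network: 200 proteins, ~600 interactions.
G = nx.gnm_random_graph(200, 600, seed=1)

degrees = [d for _, d in G.degree()]
print(f"mean degree:            {sum(degrees) / len(degrees):.2f}")
print(f"clustering coefficient: {nx.average_clustering(G):.3f}")
print(f"connected components:   {nx.number_connected_components(G)}")

# Hub proteins: highest-degree nodes, often of particular biological interest.
hubs = sorted(G.degree(), key=lambda nd: nd[1], reverse=True)[:5]
print("top-5 hubs (node, degree):", hubs)
```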

19.

20.
Using genomic data and bioinformatics analysis methods to rapidly identify antimicrobial resistance genes and predict resistance phenotypes provides a powerful auxiliary means for monitoring bacterial resistance. At present, dozens of resistance databases and their associated analysis tools provide the data and technical means for identifying bacterial resistance genes and predicting resistance phenotypes. With the continuing growth of bacterial genome data and the accumulation of resistance phenotype data, big data and machine learning can better establish the correlation between resistance phenotypes and genomic information; consequently, building efficient resistance phenotype prediction models has become a research hotspot. Focusing on the identification of bacterial resistance genes and the prediction of resistance phenotypes, this article discusses resistance-related databases, theories and methods for identifying resistance features, and machine learning on resistance data for phenotype prediction, with the aim of providing methods and ideas for research on bacterial antimicrobial resistance.
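A minimal sketch of the phenotype-prediction step, assuming a simulated gene presence/absence matrix and labels; real applications would derive features from assembled genomes and curated databases (e.g. CARD or ResFinder) and validate against measured phenotypes.

```python
# Sketch of resistance-phenotype prediction from a gene presence/absence matrix;
# the matrix and labels are simulated for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_isolates, n_genes = 300, 500
X = rng.binomial(1, 0.1, size=(n_isolates, n_genes))        # gene presence/absence
# Phenotype driven mainly by two (hypothetical) resistance determinants plus noise.
y = ((X[:, 0] | X[:, 1]) & (rng.random(n_isolates) > 0.05)).astype(int)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
acc = cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy").mean()
print(f"cross-validated balanced accuracy: {acc:.2f}")

# Feature importances point back to the genes most associated with the phenotype.
clf.fit(X, y)
top = np.argsort(clf.feature_importances_)[::-1][:5]
print("top-5 genes by importance:", top)
```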
