Similar Documents
20 similar documents retrieved (search time: 312 ms)
1.
The increasing use of computer simulation by theoretical ecologists started a move away from models formulated at the population level towards individual-based models. However, many of the models studied at the individual level are not analysed mathematically and remain defined in terms of a computer algorithm. This is not surprising, given that they are intrinsically stochastic and require tools and techniques for their study that may be unfamiliar to ecologists. Here, we argue that the construction of ecological models at the individual level and their subsequent analysis is, in many cases, straightforward and leads to important insights. We discuss recent work that highlights the importance of stochastic effects for parameter ranges and systems where it was previously thought that such effects would be negligible.
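As an illustration of the kind of individual-level stochastic model this entry has in mind (our own sketch, not taken from the paper), here is a minimal Gillespie-style simulation of a logistic birth-death process; the rate constants and carrying capacity are arbitrary choices for the example.

```python
import random

def gillespie_logistic(b, d, K, n0, t_max, seed=42):
    """Exact stochastic simulation (Gillespie algorithm) of a logistic
    birth-death process: per-capita birth rate b, per-capita death rate
    d + (b - d) * n / K, so the deterministic limit has carrying capacity K.
    Returns the population size at time t_max (or 0 on extinction)."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    while t < t_max and n > 0:
        birth_rate = b * n
        death_rate = (d + (b - d) * n / K) * n
        total = birth_rate + death_rate
        t += rng.expovariate(total)        # exponential waiting time to next event
        if rng.random() < birth_rate / total:
            n += 1                         # birth
        else:
            n -= 1                         # death
    return n
```

Repeating such runs over many seeds exposes the demographic stochasticity (fluctuations around K, extinction at small n) that a deterministic logistic equation cannot show.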

2.
Mathematical models of mimicry typically involve artificial prey species with fixed colorations or appearances; this enables a comparison of predation rates to demonstrate the level of protection a mimic might be afforded. Fruitful theoretical results have been produced using this method, but it is also useful to examine the possible evolutionary consequences of mimicry. To that end, we present individual-based evolutionary simulation models where prey colorations are free to evolve. We use the models to examine the effect of Batesian mimics on Müllerian mimics and mimicry rings. Results show that Batesian mimics can potentially incite Müllerian mimicry relationships and encourage mimicry ring convergence.

3.
As an emergent infectious disease outbreak unfolds, the public health response relies on information about key epidemiological quantities, such as transmission potential and serial interval. Increasingly, transmission models fit to incidence data are used to estimate these parameters and guide policy. Some widely used modelling practices lead to potentially large errors in parameter estimates and, consequently, errors in model-based forecasts. Even more worryingly, in such situations confidence in parameter estimates and forecasts can itself be far overestimated, leading to the potential for large errors that mask their own presence. Fortunately, straightforward and computationally inexpensive alternatives exist that avoid these problems. Here, we first use a simulation study to demonstrate potential pitfalls of the standard practice of fitting deterministic models to cumulative incidence data. Next, we demonstrate an alternative based on stochastic models fit to raw data from the early phase of the 2014 West Africa Ebola virus disease outbreak. We show not only that bias is thereby reduced, but that uncertainty in estimates and forecasts is better quantified and that, critically, lack of model fit is more readily diagnosed. We conclude with a short list of principles to guide the modelling response to future infectious disease outbreaks.
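A toy version of the contrast this entry draws (hypothetical numbers, not the Ebola analysis itself): fitting an exponential-growth rate to raw daily incidence counts with a Poisson likelihood, rather than least squares on cumulative counts.

```python
import math

def poisson_loglik(r, i0, counts):
    """Log-likelihood of raw daily case counts under the mean curve i0 * exp(r * t)."""
    ll = 0.0
    for t, c in enumerate(counts):
        mu = i0 * math.exp(r * t)
        ll += c * math.log(mu) - mu - math.lgamma(c + 1)
    return ll

def fit_growth_rate(counts, i0, grid):
    """Grid-search MLE of the growth rate r (a stand-in for a proper optimizer)."""
    return max(grid, key=lambda r: poisson_loglik(r, i0, counts))
```

Because raw daily counts are (approximately) independent, the Poisson likelihood yields honest uncertainty; cumulative counts are strongly autocorrelated, which is one source of the overconfidence discussed above.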

4.
Maximum likelihood estimation of the model parameters for a spatial population based on data collected from a survey sample is usually straightforward when sampling and non-response are both non-informative, since the model can then usually be fitted using the available sample data, and no allowance is necessary for the fact that only a part of the population has been observed. Although for many regression models this naive strategy yields consistent estimates, this is not the case for some models, such as spatial auto-regressive models. In this paper, we show that for a broad class of such models, a maximum marginal likelihood approach that uses both sample and population data leads to more efficient estimates since it uses spatial information from sampled as well as non-sampled units. Extensive simulation experiments based on two well-known data sets are used to assess the impact of the spatial sampling design, the auto-correlation parameter and the sample size on the performance of this approach. When compared to some widely used methods that use only sample data, the results from these experiments show that the maximum marginal likelihood approach is much more precise.

5.
An equation for the rate of photosynthesis as a function of irradiance introduced by T. T. Bannister included an empirical parameter b to account for observed variations in curvature between the initial slope and the maximum rate of photosynthesis. Yet researchers have generally favored equations with fixed curvature, possibly because b was viewed as having no physiological meaning. We developed an analytic photosynthesis-irradiance equation relating variations in curvature to changes in the degree of connectivity between photosystems, and also considered a recently published alternative based on changes in the size of the plastoquinone pool. When fitted to a set of 185 observed photosynthesis-irradiance curves, the Bannister equation provided the best fit more frequently than either of the analytic equations. While Bannister's curvature parameter engendered negligible improvement in the statistical fit to the study data, we argued that the parameter is nevertheless quite useful because it allows for consistent estimates of initial slope and saturation irradiance for observations exhibiting a range of curvatures, which would otherwise have to be fitted to different fixed-curvature equations. Using theoretical models, we also found that intra- and intercellular self-shading can result in biased estimates of both curvature and the saturation irradiance parameter. We concluded that Bannister's is the best currently available equation accounting for variations in curvature precisely because it does not assign inappropriate physiological meaning to its curvature parameter, and we proposed that b should be thought of as expressing the integration of all factors affecting curvature.

6.
Phylogenetic comparative methods (PCMs) have been used to test evolutionary hypotheses at phenotypic levels. The evolutionary modes commonly included in PCMs are Brownian motion (genetic drift) and the Ornstein–Uhlenbeck process (stabilizing selection), whose likelihood functions are mathematically tractable. More complicated models of evolutionary modes, such as branch-specific directional selection, have not been used because calculations of likelihood and parameter estimates in the maximum-likelihood framework are not straightforward. To solve this problem, we introduced a population genetics framework into a PCM, and here we present a flexible and comprehensive framework for estimating evolutionary parameters through simulation-based likelihood computations. The method does not require analytic likelihood computations, and evolutionary models can be used as long as simulation is possible. Our approach has many advantages: it incorporates different evolutionary modes for phenotypes into phylogeny, it takes intraspecific variation into account, it evaluates the full likelihood instead of using summary statistics, and it can be used to estimate ancestral traits. We present a successful application of the method to the evolution of brain size in primates. Our method can be easily implemented in more computationally efficient frameworks such as approximate Bayesian computation (ABC), which will enhance the use of computationally intensive methods in the study of phenotypic evolution.
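The ABC route mentioned at the end of this entry can be sketched generically. The toy below (our own illustration, not the authors' implementation) estimates the spread parameter of a simulated trait distribution by rejection ABC: draw from the prior, simulate, keep draws whose summary statistic lands close to the observed one.

```python
import random
import statistics

def rejection_abc(obs_summary, simulate, prior_draw, n_draws, eps, seed=0):
    """Generic rejection ABC: keep prior draws whose simulated summary
    statistic falls within eps of the observed summary."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_draw(rng)
        if abs(simulate(theta, rng) - obs_summary) < eps:
            accepted.append(theta)
    return accepted

# Toy model: trait values are Normal(0, sigma); the summary is the SD of n draws.
def simulate_sd(sigma, rng, n=200):
    return statistics.pstdev([rng.gauss(0.0, sigma) for _ in range(n)])
```

The accepted draws approximate the posterior; shrinking eps trades acceptance rate for accuracy, which is exactly the summary-statistic approximation the full-likelihood approach of this entry avoids.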

7.
Fluorescence recovery after photobleaching (FRAP) is used to obtain quantitative information about molecular diffusion and binding kinetics at both the cell and tissue levels of organization. FRAP models have been proposed to estimate the diffusion coefficients and binding kinetic parameters of species for a variety of biological systems and experimental settings. However, it is not clear how the diverse parameter estimates from different models of the same system are connected, whether the assumptions made in a model are appropriate, or how good the estimates are. Here we propose a new approach to investigate the discrepancies between parameters estimated from different models. We use a theoretical model to simulate the dynamics of a FRAP experiment and generate the data that are used in various recovery models to estimate the corresponding parameters. By postulating a recovery model identical to the theoretical model, we first establish that an appropriate choice of observation time can significantly improve the quality of the estimates, especially when diffusion and binding kinetics are not well balanced, in a sense made precise later. Second, we find that changing the balance between diffusion and binding kinetics by changing the size of the bleached region, which gives rise to different FRAP curves, provides a priori knowledge of diffusion and binding kinetics, which is important for model formulation. We also show that the use of spatial information in FRAP provides better parameter estimation. By varying the recovery model relative to a fixed theoretical model, we show that a simplified recovery model can adequately describe the FRAP process in some circumstances, and we establish the relationship between parameters in the theoretical model and those in the recovery model. We then analyze an example in which the data are generated with a model of intermediate complexity and the parameters are estimated using models of greater or lesser complexity, and show how sensitivity analysis can be used to improve FRAP model formulation. Lastly, we show how sophisticated global sensitivity analysis can be used to detect over-fitting when a model that is too complex is used.

8.
Huiping Xu  Bruce A. Craig 《Biometrics》2009,65(4):1145-1155
Summary: Traditional latent class modeling has been widely applied to assess the accuracy of dichotomous diagnostic tests. These models, however, assume that the tests are independent conditional on the true disease status, which is rarely valid in practice. Alternative models using probit analysis have been proposed to incorporate dependence among tests, but these models consider only restricted correlation structures. In this article, we propose a probit latent class (PLC) model that allows a general correlation structure. When combined with some helpful diagnostics, this model provides a more flexible framework from which to evaluate the correlation structure and model fit. Our model encompasses several other PLC models but uses a parameter-expanded Monte Carlo EM algorithm to obtain the maximum-likelihood estimates. The parameter-expanded EM algorithm was designed to accelerate the convergence of the EM algorithm by expanding the complete-data model to include a larger set of parameters, and it yields a simple solution for fitting the PLC model. We demonstrate our estimation and model selection methods using a simulation study and two published medical studies.

9.
We present theoretical explanations and show through simulation that individual admixture proportion estimates obtained using ancestry-informative markers should be seen as an error-contaminated measurement of the underlying individual ancestry proportion. These estimates can be used in structured association tests as a control variable to limit type I error inflation or to reduce the loss of power due to population stratification observed in studies of admixed populations. However, the inclusion of such error-containing variables as covariates in regression models can bias parameter estimates and reduce the ability to control for the confounding effect of admixture in genetic association tests. Measurement error correction methods offer a way to overcome this problem but require an a priori estimate of the measurement error variance. We show how an upper bound on this variance can be obtained, present four measurement error correction methods that are applicable to this problem, and conduct a simulation study to compare their utility in the case where the admixed population results from intermating between two ancestral populations. Our results show that the quadratic measurement error correction (QMEC) method performs better than the other methods and maintains the type I error at its nominal level.

10.
Carboxy-fluorescein diacetate succinimidyl ester (CFSE) labeling is an important experimental tool for measuring cell responses to extracellular signals in biomedical research. However, changes in the cell cycle (e.g., time to division) corresponding to different stimulations cannot be directly characterized from data collected in CFSE-labeling experiments. A number of independent studies have developed mathematical models as well as parameter estimation methods to better understand cell cycle kinetics based on CFSE data. However, when different models are applied to the same data set, notable discrepancies among their parameter estimates have become an issue of great concern. It is therefore important to compare existing models and make recommendations for practical use. For this purpose, we derived the analytic form of an age-dependent multitype branching process model. We then compared the performance of different models, namely the branching process, cyton, and Smith–Martin models and a linear birth–death ordinary differential equation (ODE) model, via simulation studies. For fairness of model comparison, simulated data sets were generated using an agent-based simulation tool that is independent of the four models being compared. The simulation results suggest that the branching process model significantly outperforms the other three models over a wide range of parameter values. This model was then employed to understand the proliferation pattern of CD4+ and CD8+ T cells under polyclonal stimulation.

11.
We illustrate through examples how monotonicity may help in the performance evaluation of networks. We consider two different applications of stochastic monotonicity in performance evaluation. In the first, we assume that a Markov chain of the model depends on a parameter that can be estimated only imprecisely, so that we have only an interval containing the exact value of the parameter. Instead of taking an approximate value for the unknown parameter, we show how the monotonicity properties of the Markov chain can be used to take the error bound from the measurements into account. In the second application, we consider a well-known approximation method: decomposition into Markovian submodels. In this approach, models of complex networks or other systems are decomposed into Markovian submodels whose results are then used as parameters for the next submodel in an iterative computation. One obtains a fixed-point system which is solved numerically. In general, we have neither an existence proof for the solution of the fixed-point system nor a convergence proof for the iterative algorithm. Here we show how stochastic monotonicity can be used to answer these questions and provide, to some extent, theoretical foundations for this approach. Furthermore, monotonicity properties can also help to derive more efficient algorithms for solving fixed-point systems.

12.
The recent development of Bayesian phylogenetic inference using Markov chain Monte Carlo (MCMC) techniques has facilitated the exploration of parameter-rich evolutionary models. At the same time, stochastic models have become more realistic (and complex) and have been extended to new types of data, such as morphology. Based on this foundation, we developed a Bayesian MCMC approach to the analysis of combined data sets and explored its utility in inferring relationships among gall wasps based on data from morphology and four genes (nuclear and mitochondrial, ribosomal and protein coding). Examined models range in complexity from those recognizing only a morphological and a molecular partition to those having complex substitution models with independent parameters for each gene. Bayesian MCMC analysis deals efficiently with complex models: convergence occurs faster and more predictably for complex models, mixing is adequate for all parameters even under very complex models, and the parameter update cycle is virtually unaffected by model partitioning across sites. Morphology contributed only 5% of the characters in the data set but nevertheless influenced the combined-data tree, supporting the utility of morphological data in multigene analyses. We used Bayesian criteria (Bayes factors) to show that process heterogeneity across data partitions is a significant model component, although not as important as among-site rate variation. More complex evolutionary models are associated with more topological uncertainty and less conflict between morphology and molecules. Bayes factors sometimes favor simpler models over considerably more parameter-rich models, but the best model overall is also the most complex and Bayes factors do not support exclusion of apparently weak parameters from this model. Thus, Bayes factors appear to be useful for selecting among complex models, but it is still unclear whether their use strikes a reasonable balance between model complexity and error in parameter estimates.

13.
The spatial dynamics of epidemics are fundamentally affected by patterns of human mobility. Mobile phone call detail records (CDRs) are a rich source of mobility data, and allow semi-mechanistic models of movement to be parameterised even for resource-poor settings. While the gravity model typically reproduces human movement reasonably well at the administrative-level spatial scale, past studies suggest that parameter estimates vary with the level of spatial discretisation at which models are fitted. Given that privacy concerns usually preclude public release of very fine-scale movement data, such variation would be problematic for individual-based simulations of epidemic spread parameterised at a fine spatial scale. We therefore present new methods to fit fine-scale mathematical mobility models (here we implement variants of the gravity and radiation models) to spatially aggregated movement data and investigate how model parameter estimates vary with spatial resolution. We use gridded population data at 1 km resolution to derive population counts at different spatial scales (down to ∼5 km grids) and implement mobility models at each scale. Parameters are estimated from administrative-level flow data between overnight locations in Kenya and Namibia derived from CDRs: where the model spatial resolution exceeds that of the mobility data, we compare the flow data between a particular origin and destination with the sum of all model flows between cells that lie within those origin and destination administrative units. Clear evidence of over-dispersion supports the use of a negative binomial instead of a Poisson likelihood for count data with high values. Radiation models use fewer parameters than the gravity model and better predict trips between overnight locations for both countries considered. Results show that estimates for some parameters change between countries and with spatial resolution, and highlight how imperfect flow data and the spatial population distribution can influence model fit.
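For concreteness, here is a bare-bones version of the two ingredients described in this entry (our own sketch with made-up parameter values, not the fitted models from the paper): a gravity-model mean flow and a negative binomial log-likelihood for an observed trip count.

```python
import math

def gravity_mean_flow(pop_i, pop_j, dist_ij, theta, alpha, beta, gamma):
    """Expected number of trips i -> j under a basic gravity model:
    theta * N_i^alpha * N_j^beta / d_ij^gamma."""
    return theta * pop_i ** alpha * pop_j ** beta / dist_ij ** gamma

def nb_loglik(y, mu, k):
    """Negative binomial log-likelihood of count y with mean mu and
    dispersion k (variance mu + mu**2 / k); small k means heavy over-dispersion,
    and k -> infinity recovers the Poisson model."""
    return (math.lgamma(y + k) - math.lgamma(k) - math.lgamma(y + 1)
            + k * math.log(k / (k + mu))
            + y * math.log(mu / (k + mu)))
```

Summing nb_loglik over all origin-destination pairs, with mu supplied by gravity_mean_flow, gives the objective against which the mobility-model parameters would be fitted.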

14.
We propose methods for Bayesian inference for a new class of semiparametric survival models with a cure fraction. Specifically, we propose a semiparametric cure rate model with a smoothing parameter that controls the degree of parametricity in the right tail of the survival distribution. We show that such a parameter is crucial for these kinds of models and can have an impact on the posterior estimates. Several novel properties of the proposed model are derived. In addition, we propose a class of improper noninformative priors based on this model and examine the properties of the implied posterior. Also, a class of informative priors based on historical data is proposed and its theoretical properties are investigated. A case study involving a melanoma clinical trial is discussed in detail to demonstrate the proposed methodology.

15.
Summary: Approximate standard errors of genetic parameter estimates were obtained using a simulation technique and approximation formulae for a simple statistical model. The similarity of the corresponding estimates of standard errors from the two methods indicated that the simulation technique may be useful for estimating the precision of genetic parameter estimates for complex models or unbalanced population structures where approximation formulae do not apply. The method of generating simulated populations in the computer is outlined, and a technique for setting approximate confidence limits on heritability estimates is described.

16.
A central challenge in the computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time-course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model predictions. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and are taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection for biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds in three steps. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternative models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
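A stripped-down version of the first step only (our own toy model, not the heat-shock example): a joint-state extended Kalman filter that estimates an unknown decay rate k in x' = -k x from noisy measurements of x, by augmenting the state to z = (x, k).

```python
def ekf_decay_rate(ys, dt, x0, k0, p0, q, r):
    """Extended Kalman filter on the augmented state z = (x, k) for the
    Euler-discretised model x_{t+1} = x_t - k * x_t * dt, with k constant.
    q is a tiny process noise on k; r is the measurement noise variance.
    Returns the final estimates (x, k)."""
    x, k = x0, k0
    P = [[p0, 0.0], [0.0, p0]]               # 2x2 state covariance
    for y in ys:
        # --- predict: linearise f(x, k) = x - k*x*dt around the current state
        F = [[1.0 - k * dt, -x * dt],
             [0.0, 1.0]]
        x = x - k * x * dt
        # P <- F P F^T + Q, written out explicitly for the 2x2 case
        FP = [[F[0][0] * P[0][0] + F[0][1] * P[1][0],
               F[0][0] * P[0][1] + F[0][1] * P[1][1]],
              [P[1][0], P[1][1]]]
        P = [[FP[0][0] * F[0][0] + FP[0][1] * F[0][1], FP[0][1]],
             [FP[1][0] * F[0][0] + FP[1][1] * F[0][1], FP[1][1] + q]]
        # --- update with a measurement y of x (observation matrix H = [1, 0])
        S = P[0][0] + r
        K0, K1 = P[0][0] / S, P[1][0] / S    # Kalman gain
        resid = y - x
        x, k = x + K0 * resid, k + K1 * resid
        P = [[(1 - K0) * P[0][0], (1 - K0) * P[0][1]],
             [P[1][0] - K1 * P[0][0], P[1][1] - K1 * P[0][1]]]
    return x, k
```

Running the filter over the measurement sequence drives k towards the value most consistent with the observed decay, which is the "first guess" that the identifiability test and refinement steps would then act on.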

17.
When we apply ecological models in environmental management, we must assess the accuracy of parameter estimation and its impact on model predictions. Parameter estimates obtained by conventional techniques tend to be non-robust and to require excessive computational resources. Optimization algorithms, by contrast, are highly robust and generally achieve convergent parameter estimation by inversion with nonlinear models. They can simultaneously generate a large number of parameter estimates using an entire data set. In this study, we tested four inversion algorithms (simulated annealing, shuffled complex evolution, particle swarm optimization, and the genetic algorithm) to optimize parameters in photosynthetic models with different temperature dependencies. We investigated whether parameter boundary values and control variables influenced the accuracy and efficiency of the various algorithms and models. We obtained optimal solutions with all of the inversion algorithms tested when the parameter bounds and control variables were constrained properly. However, the efficiency of processing time varied with the control variables chosen. In addition, we investigated whether the formulation of temperature dependence affected the parameter estimation process. We found that the model with a peaked temperature response provided the best fit to the data.

18.
Bird ring-recovery data have been widely used to estimate demographic parameters such as survival probabilities since the mid-20th century. However, while the total number of birds ringed each year is usually known, historical information on age at ringing is often not available. A standard ring-recovery model, for which information on age at ringing is required, cannot be used when historical data are incomplete. We develop a new model to estimate age-dependent survival probabilities from such historical data when age at ringing is not recorded; we call this the historical data model. This new model provides an extension to the model of Robinson, 2010, Ibis, 152, 651–795 by estimating the proportion of the ringed birds marked as juveniles as an additional parameter. We conduct a simulation study to examine the performance of the historical data model and compare it with other models including the standard and conditional ring-recovery models. Simulation studies show that the approach of Robinson, 2010, Ibis, 152, 651–795 can cause bias in parameter estimates. In contrast, the historical data model yields similar parameter estimates to the standard model. Parameter redundancy results show that the newly developed historical data model is comparable to the standard ring-recovery model, in terms of which parameters can be estimated, and has fewer identifiability issues than the conditional model. We illustrate the new proposed model using Blackbird and Sandwich Tern data. The new historical data model allows us to make full use of historical data and estimate the same parameters as the standard model with incomplete data, and in doing so, detect potential changes in demographic parameters further back in time.

19.
The joint effects of stabilizing selection, mutation, recombination, and random drift on the genetic variability of a polygenic character in a finite population are investigated. A simulation study is performed to test the validity of various analytical predictions on the equilibrium genetic variance. A new formula for the expected equilibrium variance is derived that approximates the observed equilibrium variance very closely for all parameter combinations we have tested. The computer model simulates the continuum-of-alleles model of Crow and Kimura. However, it is completely stochastic in the sense that it models evolution as a Markov process and does not use any deterministic evolution equations. The theoretical results are compared with heritability estimates from laboratory and natural populations. Heritabilities ranging from 20% to 50%, as observed even in lab populations under a constant environment, can only be explained by a mutation-selection balance if the phenotypic character is neutral or the number of genes contributing to the trait is sufficiently high, typically several hundred, or if there are a few highly variable loci that influence quantitative traits.

20.
Low-dose-rate extrapolation using the multistage model
C Portier  D Hoel 《Biometrics》1983,39(4):897-906
The distribution of the maximum likelihood estimates of virtually safe levels of exposure to environmental chemicals is derived using large-sample theory and Monte Carlo simulation under the Armitage–Doll multistage model. Using historical dose-response data, we develop a set of 33 two-stage models upon which we base our conclusions. The large-sample distributions of the virtually safe dose are normal for cases in which the multistage-model parameters have nonzero expectation, and are skewed in other cases. The large-sample theory does not provide a good approximation of the distribution observed for small bioassays when Monte Carlo simulation is used. The constrained nature of the multistage-model parameters leads to bimodal distributions for small bioassays. The two modes are the direct result of estimating the linear parameter in the multistage model; the lower mode results from estimating this parameter to be nonzero, and the upper mode from estimating it to be zero. The results of this research emphasize the need to incorporate biological theory in the model-selection process.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)