Similar Articles (20 results)
1.
Human settlements in arid environments are becoming widespread due to population growth, and without planning they may alter vegetation and ecosystem processes, compromising sustainability. We hypothesize that in an arid region of the central Monte desert (Mendoza, Argentina), surface- and groundwater availability are the primary factors controlling the establishment of livestock settlements and their success as productive units, which in turn shape patterns of degradation in the landscape. To evaluate this hypothesis we simulated settlement dynamics using a Monte Carlo-based model of Settlement Dynamics in Drylands (SeDD), which calculates probabilities on a gridded region from six environmental factors: groundwater depth, vegetation type, and proximity to rivers, paved roads, old river beds, and existing settlements. A parameter sweep comprising millions of simulations was run to identify the most relevant factors controlling settlements. Results indicate that distance to rivers and the presence of old river beds are critical to explaining the current distribution of settlements, whereas vegetation, paved roads, and water-table depth were less relevant. Far from surface-water sources, most settlements were established at random, suggesting that pressure to settle in unfavorable places controls settlement dynamics in those isolated areas. The simulated vegetation, which accounts for degradation around livestock settlements, generally matched the spatial distribution of remotely sensed vegetation classes, although with a higher cover of extreme vegetation classes. The model could be a useful tool for evaluating the effects of land-use changes, such as water provision or changes in river flows, on settlement distribution and vegetation degradation in arid environments.
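The core Monte Carlo step described above — turning per-cell environmental factors into settlement probabilities and sampling a location — can be sketched as follows. The weighted-product combination rule and the factor names here are illustrative assumptions, not the published SeDD formulation:

```python
import random

def settlement_probabilities(cells, weights):
    """Combine per-cell environmental factor scores (each scaled to (0, 1])
    into a normalised settlement probability surface via a weighted product.
    Factor names and the combination rule are illustrative assumptions."""
    scores = []
    for cell in cells:
        s = 1.0
        for factor, w in weights.items():
            s *= cell[factor] ** w
        scores.append(s)
    total = sum(scores)
    return [s / total for s in scores]

def place_settlement(probs, rng):
    """Monte Carlo draw of the grid cell receiving the next settlement."""
    x, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if x < acc:
            return i
    return len(probs) - 1

# Two-cell toy grid: the cell close to the river should be strongly favoured.
cells = [{"river_proximity": 0.9, "groundwater": 0.5},
         {"river_proximity": 0.1, "groundwater": 0.5}]
probs = settlement_probabilities(cells, {"river_proximity": 1.0, "groundwater": 1.0})
```

Repeating the draw over many iterations, with degradation feedback updating the factor scores around placed settlements, gives the kind of simulation swept over in the paper.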

2.
Evolutionary graph theory studies the evolutionary dynamics of populations structured on graphs. A central problem is determining the probability that a small number of mutants overtakes a population. Currently, Monte Carlo simulations are used to estimate such fixation probabilities on general directed graphs, since no good analytical methods exist. In this paper, we introduce a novel deterministic framework for computing fixation probabilities for strongly connected, directed, weighted evolutionary graphs under neutral drift. We show how this framework can also be used to calculate the expected number of mutants at a given time step (even if we relax the assumption that the graph is strongly connected), how it extends to related models (e.g. the voter model), how it can provide non-trivial bounds on the fixation probability of an advantageous mutant, and how it can be used to find a non-trivial lower bound on the mean time to fixation. We provide various experimental results determining fixation probabilities and expected numbers of mutants on different graphs. Among these, we show that our method consistently outperforms Monte Carlo simulation in speed by several orders of magnitude. Finally, we show how our approach can provide insight into synaptic competition in neurology.
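The Monte Carlo baseline the deterministic framework is compared against can be sketched as a neutral birth-death Moran process on a directed graph; the update rule below is a standard formulation and only an assumption about the exact simulator used:

```python
import random

def moran_fixation_probability(adj, start, trials=20000, seed=1):
    """Estimate the fixation probability of a single neutral mutant placed
    at node `start` of a directed graph, given as an adjacency dict
    {node: [out-neighbours]}. Birth-death Moran updating: a uniformly
    chosen parent (neutral drift) places its offspring on a uniformly
    chosen out-neighbour, replacing the resident there."""
    rng = random.Random(seed)
    nodes = list(adj)
    fixed = 0
    for _ in range(trials):
        mutants = {start}
        while 0 < len(mutants) < len(nodes):
            parent = rng.choice(nodes)
            child = rng.choice(adj[parent])
            if parent in mutants:
                mutants.add(child)
            else:
                mutants.discard(child)
        if mutants:
            fixed += 1
    return fixed / trials

# Complete directed graph on four nodes: neutral theory predicts 1/4.
K4 = {i: [j for j in range(4) if j != i] for i in range(4)}
```

On a symmetric graph such as K4 a single neutral mutant fixes with probability 1/N, which the estimate reproduces; the sampling error of such estimates is what the paper's deterministic method avoids.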

3.
A quantitative trait depends on multiple quantitative trait loci (QTL) and on interactions between two or more QTL, termed epistasis. Several methods to detect multiple QTL in various types of designs have been proposed, but most are based on the assumption that each QTL acts independently, and epistasis has not been explored sufficiently. The objective of this study was to propose an integrated method to detect multiple QTL with epistasis using Bayesian inference via a Markov chain Monte Carlo (MCMC) algorithm. Since a mixed inheritance model is assumed and a deterministic algorithm for calculating the probabilities of QTL genotypes is incorporated, the method can be applied to outbred populations such as livestock. Additionally, we treated a pair of QTL as one variable in the reversible-jump Markov chain Monte Carlo (RJMCMC) algorithm, so that two QTL could be simultaneously added to or deleted from a model. As a result, both QTL can be detected not only when one of the two has main effects and the pair has epistatic effects, but also when neither has main effects yet the pair has epistatic effects. The method will help ascertain the complicated structure of quantitative traits.

4.
A temporal transformation method for fractional vegetation cover based on medium- and high-resolution remote sensing
Zhang Xiwang, Wu Bingfang. Acta Ecologica Sinica (生态学报), 2015, 35(4): 1155-1164
Fractional vegetation cover (FVC) is an important indicator of land-surface vegetation condition and of ecological change, and a key parameter in many disciplines. Traditional field measurement can hardly provide temporally continuous areal data, is time- and labor-intensive, and is difficult to apply over large areas. Remote-sensing estimation overcomes these limitations, but because of cloud cover and other weather conditions it is very difficult to acquire imagery covering a whole study area on a single date, and date differences inevitably introduce errors into the results. Targeting FVC, this study combines the temporal advantage of low-resolution remote sensing with the spatial advantage of medium- and high-resolution remote sensing and proposes a temporal transformation method that converts FVC derived from medium- and high-resolution imagery to the date a study requires. First, a time series of FVC at MODIS scale is computed with the pixel dichotomy model, and FVC on the acquisition dates of the available SPOT images is computed from those images. Second, vegetation-cover types are delineated with a land-use map, and the areal percentage of each type within each MODIS pixel is derived from the spatial correspondence between the MODIS data and the land-use data. Third, pure MODIS pixels of each vegetation-cover type are extracted from the percentage data and combined with the MODIS FVC time series to obtain an FVC time-series curve for the pure pixels of each type. Finally, pixel unmixing extracts the temporal trajectory of FVC for each vegetation component within a MODIS pixel, and this trajectory is applied to the FVC of the SPOT pixels at the corresponding locations, transforming them to the required date. In an experiment in the upper basin of the Miyun Reservoir, FVC computed from ten SPOT5 multispectral scenes covering the study area was uniformly transformed to early July. The transformed result is visually much improved and spatially continuous and consistent; before-and-after FVC statistics accord with vegetation growth patterns; and regression of the transformed FVC against field sampling points gives R2 of about 0.8 for every vegetation-cover type, indicating that the transformed values closely match measurements. The temporal transformation thus works well and can improve the accuracy of related studies.
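The pixel dichotomy model used in the first step has a simple closed form, FVC = (NDVI − NDVI_soil) / (NDVI_veg − NDVI_soil), clipped to [0, 1]. A minimal sketch, with illustrative endmember values rather than those calibrated in the paper:

```python
def fvc_pixel_dichotomy(ndvi, ndvi_soil=0.05, ndvi_veg=0.85):
    """Fractional vegetation cover from NDVI via the pixel dichotomy model:
    linear mixing between a bare-soil endmember and a full-vegetation
    endmember. The endmember NDVI values are illustrative placeholders."""
    f = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return min(1.0, max(0.0, f))
```

Applied per pixel to a MODIS NDVI time series this yields the FVC time series; applied to a SPOT NDVI image it yields the single-date FVC that the method then shifts in time.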

5.
Summary: In National Toxicology Program (NTP) studies, investigators want to assess whether a test agent is carcinogenic, both overall and for specific tumor types, while estimating the dose-response profiles. Because there are potential correlations among the tumors, a joint inference is preferred to separate univariate analyses for each tumor type. In this regard, we propose a random-effect logistic model with a matrix of coefficients representing log-odds ratios for adjacent dose groups for tumors at different sites. We propose appropriate nonparametric priors for these coefficients to characterize the correlations and to allow borrowing of information across different dose groups and tumor types. Global and local hypotheses can be easily evaluated by summarizing the output of a single Markov chain Monte Carlo (MCMC) run. Two multiple-testing procedures are applied for testing local hypotheses based on the posterior probabilities of local alternatives. Simulation studies are conducted, and an NTP tumor data set is analyzed to illustrate the proposed approach.

6.
7.
Bayesian estimation of the risk of a disease around a known point source of exposure is considered. The minimal data requirements are that cases and populations at risk are known for a fixed set of concentric annuli around the point source, and that each annulus has a uniquely defined distance from the source. The conventional Poisson likelihood is assumed for the counts of disease cases in each annular zone, with zone-specific relative-risk parameters, and, conditional on the risks, the counts are considered independent. The prior for the relative-risk parameters is assumed to be piecewise constant in distance, with a known number of components; this prior is the well-known change-point model. Monte Carlo sampling from the posterior yields zone-specific posterior summaries, which can be used to calculate a smooth curve describing the variation in disease risk as a function of distance from the putative source. In addition, the posterior can be used to calculate posterior probabilities for hypotheses of interest. The suggested model is suitable for use in geographical information systems (GIS) aimed at monitoring disease risks. As an application, a case study on the incidence of lung cancer around a former asbestos mine in eastern Finland is presented. Further extensions of the model are discussed.
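A minimal sketch of the likelihood side of this model: annular case counts are Poisson with mean E_i × θ_z(i), where the relative risk θ is piecewise constant in distance. Instead of the paper's Bayesian posterior sampling, the sketch profiles the two risk levels out by maximum likelihood and finds a single change-point by a scan — a deliberate simplification for illustration:

```python
import math

def poisson_loglik(cases, expected, risks, cut):
    """Log-likelihood of annular case counts under a piecewise-constant
    relative-risk (change-point) model: zones before index `cut` share
    risks[0], zones from `cut` onwards share risks[1]."""
    ll = 0.0
    for i, (o, e) in enumerate(zip(cases, expected)):
        lam = e * (risks[0] if i < cut else risks[1])
        ll += o * math.log(lam) - lam - math.lgamma(o + 1)
    return ll

def best_changepoint(cases, expected):
    """Scan every possible change-point; the segment risk MLEs are simply
    ratios of observed to expected totals within each segment."""
    best = None
    for cut in range(1, len(cases)):
        r0 = sum(cases[:cut]) / sum(expected[:cut])
        r1 = sum(cases[cut:]) / sum(expected[cut:])
        ll = poisson_loglik(cases, expected, (r0, r1), cut)
        if best is None or ll > best[1]:
            best = (cut, ll)
    return best[0]

# Toy annuli: the two innermost zones have clearly elevated risk.
cases = [30, 28, 10, 9, 11]
expected = [10.0] * 5
```

In the paper's Bayesian version the change-point and the segment risks get priors and the same likelihood drives Monte Carlo sampling from the posterior.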

8.
Errors in the estimation of exposures or doses are a major source of uncertainty in epidemiological studies of cancer among nuclear workers. This paper presents a Monte Carlo maximum likelihood method for estimating a confidence interval that reflects both statistical sampling error and uncertainty in the measurement of exposures. The method is illustrated by application to an analysis of all-cancer (excluding leukemia) mortality in a study of nuclear workers at the Oak Ridge National Laboratory (ORNL). Monte Carlo methods were used to generate 10,000 data sets with a simulated corrected dose estimate for each member of the cohort, based on the estimated distribution of errors in doses. A Cox proportional hazards model was applied to each of these simulated data sets. A partial likelihood, averaged over all of the simulations, was generated, and the central risk estimate and confidence interval were estimated from this partial likelihood. The conventional, unsimulated analysis of the ORNL study yielded an excess relative risk (ERR) of 5.38 per Sv (90% confidence interval 0.54-12.58). The Monte Carlo maximum likelihood method yielded a slightly lower ERR (4.82 per Sv) and a wider confidence interval (0.41-13.31).

9.
King R, Brooks SP, Coulson T. Biometrics, 2008, 64(4): 1187-1195
Summary: We consider the issue of analyzing complex ecological data in the presence of covariate information and model uncertainty. Several issues can arise when analyzing such data, not least the need to account for missing covariate values; this is most acute in the presence of time-varying covariates. We consider mark-recapture-recovery data, where the corresponding recapture probabilities are less than unity, so that individuals are not always observed at each capture event. This often leads to a large amount of missing time-varying individual covariate information, because the covariate usually cannot be recorded when an individual is not observed. In addition, we address the problem of model selection over these covariates with missing data. We consider a Bayesian approach in which we can deal with large amounts of missing data by essentially treating the missing values as auxiliary variables. This approach also allows a quantitative comparison of different models via posterior model probabilities, obtained via the reversible-jump Markov chain Monte Carlo algorithm. To demonstrate this approach, we analyze data on Soay sheep, which pose several statistical challenges in fully describing the intricacies of the system.

10.
In the last thirty years, there has been considerable interest in finding better models for probabilities of conception. An important early model was proposed by Barrett and Marshall (1969) and extended by Schwartz, MacDonald and Heuchel (1980). Recently, researchers have further extended these models by adding covariates. However, the increasingly complicated models are challenging to analyze with frequentist methods such as the EM algorithm. Bayesian models are more feasible, and the computation can be done via Markov chain Monte Carlo (MCMC). We consider a Bayesian model with an effect for protected intercourse to analyze data from the California Women's Reproductive Health Study and assess the effects of water contaminants and hormones. The paper makes two main contributions. (1) For protected intercourse, we propose modeling the ratios of daily conception probabilities with protected intercourse to the corresponding daily conception probabilities with unprotected intercourse; owing to the small sample size of our data set, we assume the ratios are the same for each day but unknown. (2) We consider Bayesian analysis under a unimodality assumption, in which the probabilities of conception increase before ovulation and decrease after ovulation. Gibbs sampling is used to find the Bayesian estimates. There is some evidence that the two covariates affect fecundability.
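The Barrett-Marshall/Schwartz family of models underlying this analysis computes the probability of conception in a cycle as one minus a product of daily non-conception probabilities. The sketch below adds the paper's protected-intercourse device of a single unknown ratio scaling the daily probabilities; all numeric values are illustrative, not estimates from the study:

```python
def cycle_conception_prob(day_probs, unprotected, protected, ratio=0.1):
    """Barrett-Marshall/Schwartz-style cycle conception probability:
    1 minus the product over fertile days of (1 - p_k) for each
    unprotected act, with protected acts contributing p_k scaled by a
    common `ratio` (the paper's assumption of one shared, unknown ratio).
    `unprotected` and `protected` are 0/1 intercourse indicators per day."""
    no_conception = 1.0
    for p, x_u, x_p in zip(day_probs, unprotected, protected):
        no_conception *= (1.0 - p) ** x_u
        no_conception *= (1.0 - ratio * p) ** x_p
    return 1.0 - no_conception
```

In the Bayesian analysis the day-specific probabilities p_k (constrained to be unimodal around ovulation) and the ratio are the quantities sampled by Gibbs steps.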

11.
Leeyoung Park, Ju H. Kim. Genetics, 2015, 199(4): 1007-1016
Causal models including genetic factors are important for understanding the presentation mechanisms of complex diseases. Familial aggregation and segregation analyses based on polygenic threshold models have been the primary approach to fitting genetic models to family data on complex diseases. In the current study, an advanced approach to obtaining appropriate causal models for complex diseases, based on the sufficient component cause (SCC) model and involving combinations of traditional genetics principles, is proposed. The probabilities for the entire population, i.e., normal-normal, normal-disease, and disease-disease, were considered in each model for the appropriate handling of common complex diseases. The causal model includes genetic effects from single genes involving epistasis, complementary gene interactions, gene-environment interactions, and environmental effects. Bayesian inference using a Markov chain Monte Carlo (MCMC) algorithm was used to assess the proportions of each component for a given population lifetime incidence. The approach is flexible, allowing both common and rare variants within a gene and across multiple genes. An application to schizophrenia data confirmed the complexity of the causal factors. An analysis of diabetes data demonstrated that environmental factors and gene-environment interactions are the main causal factors for type II diabetes. The proposed method is effective and useful for identifying causal models, which can accelerate the development of efficient strategies for identifying the causal factors of complex diseases.

12.
To predict vegetation distribution by integrating a digital elevation model and remote-sensing data in generalized additive models (GAMs), and to test whether coupling environmental variables with remotely sensed variables can effectively improve prediction accuracy, elevation, slope, nearest distance to the Yellow River, nearest distance to the coastline, and spectral variables extracted from SPOT5 imagery were chosen as predictors, and GAMs integrating the environmental and spectral variables were built to predict vegetation distribution. Three modeling scenarios were set up (environmental variables only, spectral variables only, and environmental plus spectral variables) to predict the distribution of the dominant vegetation types of the Yellow River Delta, and the predictions were validated in three ways: deviance analysis, receiver operating characteristic (ROC) curves, and comparison with field sampling points. The results show: (1) The GAM-based approach to predicting vegetation distribution is practical and can predict vegetation distribution fairly accurately; vegetation types with higher cover were predicted more accurately than those with lower cover, with the characteristics of plant community structure being the main reason for these differences; models using both environmental and spectral predictors were more accurate than models using either set alone. (2) Most of the environmental and spectral variables were selected into the models, and both play an important role in predicting vegetation distribution; the contribution of a given predictor differs among the models for different vegetation types, which relates to differences in the spectral and environmental characteristics of the vegetation, and also differs among modeling scenarios, with coupling effects between environmental and spectral variables the likely cause of these changes in contribution.

13.
A sequence-coupled (Markov chain) model is proposed to predict the cleavage sites in proteins by proteases with extended specificity subsites. In addition to the probability of an amino acid occurring at each of these subsites, as observed in a training set of oligopeptides known to be cleavable by HIV protease, the conditional probabilities reflecting the neighbor-coupled effect along the subsite sequence are also taken into account. These conditional probabilities are derived from an expanded training set consisting of a sufficiently large number of peptide sequences generated by Monte Carlo sampling. Very high accuracy was obtained in predicting protein cleavage sites by both HIV-1 and HIV-2 proteases. The new method provides a rapid and accurate means of analyzing the specificity of HIV protease, and hence can be used to help find effective inhibitors of HIV protease as potential drugs against AIDS. The principle of this method can also be used to study the specificity of any multisubsite enzyme.
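A minimal sketch of sequence-coupled scoring: the first subsite contributes a marginal probability and each subsequent subsite contributes the probability of its residue conditioned on its predecessor. The probability tables below are toy placeholders, not the HIV-protease tables trained in the paper:

```python
def cleavage_score(peptide, site_probs, cond_probs):
    """First-order Markov (sequence-coupled) score for a candidate cleavage
    window. `site_probs[0]` maps residue -> marginal probability at the
    first subsite; `cond_probs[i]` maps (prev_residue, residue) -> the
    neighbor-coupled conditional probability at subsite i. Unseen residues
    or pairs get a small floor instead of zeroing the whole product."""
    score = site_probs[0].get(peptide[0], 1e-6)
    for i in range(1, len(peptide)):
        score *= cond_probs[i].get((peptide[i - 1], peptide[i]), 1e-6)
    return score

# Toy two-subsite tables (placeholders for illustration only).
site_probs = [{"A": 0.5, "G": 0.2}]
cond_probs = [None, {("A", "V"): 0.4, ("G", "V"): 0.1}]
```

Classifying a window as cleavable then amounts to comparing this score (or its log) against a threshold fitted on the training set.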

14.
An improved Bayesian method is presented for estimating phylogenetic trees from DNA sequence data. The birth-death process with species sampling is used to specify the prior distribution of phylogenies and ancestral speciation times, and the posterior probabilities of phylogenies are used to estimate the maximum posterior probability (MAP) tree. Monte Carlo integration is used to integrate over the ancestral speciation times for particular trees. A Markov chain Monte Carlo method is used to generate the set of trees with the highest posterior probabilities. Methods are described for an empirical Bayesian analysis, in which estimates of the speciation and extinction rates are used in calculating the posterior probabilities, and for a hierarchical Bayesian analysis, in which these parameters are removed from the model by an additional integration. The Markov chain Monte Carlo method avoids the requirement of our earlier method for calculating MAP trees of summing over all possible topologies (which limited the number of taxa in an analysis to about five). The methods are applied to DNA sequences for nine species of primates; the MAP tree, which is identical to a maximum-likelihood estimate of topology, has a probability of approximately 95%.

15.
Monte Carlo calculation is a widespread and well-established practice for computing the dosimetric parameters of brachytherapy sources. In this study, the recommendations of the AAPM TG-43U1 report have been followed to characterize the Varisource VS2000 192Ir high-dose-rate source provided by Varian Oncology Systems. To obtain the dosimetric parameters of this source, Monte Carlo calculations with the PENELOPE code have been carried out. TG-43 formalism parameters are presented, i.e., air kerma strength, dose rate constant, radial dose function, and anisotropy function. In addition, a 2D Cartesian-coordinate table of dose rate in water has been calculated. These quantities are compared with the reference data for this source, and the results are in good agreement with them. The data in the present study complement published data in the following respects: (i) TG-43U1 recommendations are followed with regard to phantom ambient conditions and to uncertainty analysis, including statistical (type A) and systematic (type B) contributions; (ii) the PENELOPE code is benchmarked for this source; (iii) the Monte Carlo calculation methodology differs from that usually published in the way absorbed dose is estimated, leaving out the track-length estimator; (iv) the results comply with the most recent AAPM and ESTRO physics committee recommendations on Monte Carlo techniques, with regard to dose-rate uncertainty values and the established differences between our results and the reference data. The results stated in this paper provide a complete parameter collection, which can be used for dosimetric calculations as well as a means of comparison with other datasets for this source.
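One building block of the TG-43 formalism mentioned above is the line-source geometry function, G_L(r, θ) = β / (L·r·sin θ), where β is the angle the active length L subtends at the calculation point, with the form 1/(r² − L²/4) on the source axis. A sketch in cm and radians; no VS2000-specific active length is assumed:

```python
import math

def geometry_factor_line(r, theta, L):
    """Line-source geometry function G_L(r, theta) of the TG-43U1
    formalism: beta / (L * r * sin(theta)), where beta is the angle
    subtended at the point by a source of active length L centred at the
    origin along the polar axis; on the axis (sin(theta) = 0) the
    formalism prescribes 1 / (r^2 - L^2/4)."""
    if abs(math.sin(theta)) < 1e-9:
        return 1.0 / (r * r - L * L / 4.0)
    x = r * math.sin(theta)  # perpendicular distance to the source axis
    beta = (math.atan2(r * math.cos(theta) + L / 2.0, x)
            - math.atan2(r * math.cos(theta) - L / 2.0, x))
    return beta / (L * r * math.sin(theta))
```

As L shrinks toward zero the function tends to the point-source form 1/r², a useful sanity check on any implementation.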

16.
Mathematical models of calcium release sites derived from Markov chain models of intracellular calcium channels exhibit collective gating reminiscent of the experimentally observed phenomenon of stochastic calcium excitability (i.e., calcium puffs and sparks). Calcium release site models are stochastic automata networks that involve many functional transitions, that is, the transition probabilities of each channel depend on the local calcium concentration and thus the state of the other channels. We present a Kronecker-structured representation for calcium release site models and perform benchmark stationary distribution calculations using both exact and approximate iterative numerical solution techniques that leverage this structure. When it is possible to obtain an exact solution, response measures such as the number of channels in a particular state converge more quickly using the iterative numerical methods than occupation measures calculated via Monte Carlo simulation. In particular, multi-level methods provide excellent convergence with modest additional memory requirements for the Kronecker representation of calcium release site models. When an exact solution is not feasible, iterative approximate methods based on the power method may be used, with performance similar to Monte Carlo estimates. This suggests approximate methods with multi-level iterative engines as a promising avenue of future research for large-scale calcium release site models.
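The Kronecker structure arises because the transition matrix of independent channels is the Kronecker product of the single-channel matrices. The sketch below forms the product densely and applies the power method for the stationary distribution; real release-site models never build the matrix explicitly, and their channels are coupled through local calcium, so this is only the uncoupled skeleton of the idea:

```python
def kron(A, B):
    """Dense Kronecker product of two matrices given as lists of rows:
    (A ⊗ B)[i*p + k][j*q + l] = A[i][j] * B[k][l]."""
    return [[a * b for a in rowA for b in rowB]
            for rowA in A for rowB in B]

def stationary_power(P, iters=2000):
    """Power-method approximation to the stationary distribution of a
    row-stochastic matrix P: repeatedly apply pi <- pi P starting from
    the uniform distribution."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Two independent two-state channels: the stationary distribution of the
# pair is the outer product of the single-channel stationary laws.
P1 = [[0.9, 0.1], [0.2, 0.8]]
pi = stationary_power(kron(P1, P1))
```

For P1 the single-channel stationary law is (2/3, 1/3), so the four-state pair should converge to (4/9, 2/9, 2/9, 1/9).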

17.
18.
Mathematical Biosciences, 1987, 83(1): 105-125
An environmental process was characterized by a stationary second-order autoregressive process with Gaussian noise. This process was then linked to survivorship and reproductive success by logistic transformations. The sensitivity of extinction probabilities to variations in the parameters of the environmental process was studied by computer experiments in Monte Carlo integration. Against the background of the rather limited number of fertility and mortality levels studied in these experiments, the extinction probabilities were demonstrated to be quite sensitive to variations in the parameters of the environmental process. Although more extensive experiments will need to be carried out, those conducted so far suggest that, in assessing an endangered species' chances for continued existence, concerted efforts should be made to model the environmental factors that are critical to its survivability.
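The scheme described — a stationary AR(2) Gaussian environment, logistic links to survival and reproduction, and Monte Carlo estimation of extinction — can be sketched as follows. All parameter values and the simple branching rule are illustrative assumptions, not the paper's:

```python
import math
import random

def extinction_probability(phi1=0.5, phi2=-0.2, sigma=0.8,
                           n0=10, horizon=50, trials=2000, seed=7):
    """Monte Carlo sketch: each year an AR(2) environmental value
    e_t = phi1*e_{t-1} + phi2*e_{t-2} + Gaussian noise is passed through
    logistic transformations to survival and reproduction probabilities;
    each individual survives with prob. `surv` and each survivor produces
    one offspring with prob. `repr_p`. Returns the fraction of replicate
    populations extinct by `horizon`. Parameter values are illustrative."""
    rng = random.Random(seed)
    extinct = 0
    for _ in range(trials):
        e1 = e2 = 0.0                     # last two environmental values
        n = n0
        for _ in range(horizon):
            e = phi1 * e1 + phi2 * e2 + rng.gauss(0.0, sigma)
            e1, e2 = e, e1
            surv = 1.0 / (1.0 + math.exp(-(0.5 + e)))   # logistic link, survival
            repr_p = 1.0 / (1.0 + math.exp(-e))         # logistic link, reproduction
            survivors = sum(rng.random() < surv for _ in range(n))
            births = sum(rng.random() < repr_p for _ in range(survivors))
            n = survivors + births
            if n == 0:
                break
        extinct += (n == 0)
    return extinct / trials
```

Sweeping phi1, phi2, and sigma while holding the demographic links fixed reproduces, in miniature, the sensitivity experiments the abstract describes.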

19.
Kim SY, Lee J, Lee J. Biophysical Chemistry, 2005, 115(2-3): 195-200
Understanding how a protein folds is a long-standing challenge in modern science. We have used an optimized atomistic model (united-residue force field) to simulate the folding of small proteins of various structures: HP-36 (alpha), protein A (beta), 1fsd (alpha+beta), and betanova (beta). Extensive Monte Carlo folding simulations (ten independent runs of 10^9 Monte Carlo steps at each temperature) starting from non-native conformations are carried out for each protein. In all cases, the proteins fold into their native-like conformations at appropriate temperatures, and glassy transitions occur at low temperatures. To investigate early folding trajectories, 200 independent runs of 10^6 Monte Carlo steps are also performed at a fixed temperature for each protein. There is a variety of possible pathways during the non-equilibrium early processes (fast process, approximately 10^4 Monte Carlo steps). Finally, these pathways converge to a point unique to each protein. The convergence point of the early folding pathways can be determined only by direct folding simulations. The free-energy surface, an equilibrium thermodynamic property, dictates the rest of the folding (slow process, approximately 10^8 Monte Carlo steps).
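The elementary operation iterated ~10^9 times per run is a Metropolis Monte Carlo step. A generic sketch, with a toy one-dimensional quadratic energy standing in for the united-residue force field:

```python
import math
import random

def metropolis_step(state, energy, propose, temperature, rng):
    """One Metropolis Monte Carlo step: propose a move and accept it with
    probability min(1, exp(-dE/T)), so downhill moves are always taken
    and uphill moves are taken with Boltzmann probability."""
    candidate = propose(state, rng)
    dE = energy(candidate) - energy(state)
    if dE <= 0 or rng.random() < math.exp(-dE / temperature):
        return candidate
    return state

# Toy stand-in for a force field: a 1-D quadratic energy well E(x) = x^2.
# Starting far from the minimum, the chain relaxes into the well.
rng = random.Random(42)
x = 5.0
for _ in range(20000):
    x = metropolis_step(x, lambda s: s * s,
                        lambda s, r: s + r.gauss(0.0, 0.5), 0.5, rng)
```

In the folding simulations the state is a full chain conformation and the proposals are conformational moves; the acceptance rule is unchanged.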

20.
The accurate estimation of the probability of identity by descent (IBD) at loci or genome positions of interest is paramount to the genetic study of quantitative and disease-resistance traits. We present a Markov chain Monte Carlo method to compute IBD probabilities between individuals conditional on DNA markers and on pedigree information. The IBD probabilities can be obtained in a completely general pedigree at any genome position of interest, and all available marker and pedigree information is used. The method can be split into two steps at each iteration: first, phases are sampled using the current genotypic configurations of relatives; second, crossover events are simulated conditional on the phases. Internal track is kept of all founder origins and crossovers, so that the IBD probabilities averaged over replicates are rapidly obtained. We illustrate the method with some examples. First, we show that all pedigree information should be used to obtain line-origin probabilities in F2 crosses. Second, the distribution of genetic relationships between half and full sibs is analysed in both simulated data and real data from an F2 cross in pigs.
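The simplest special case of this computation — no marker data, a single locus — is classical gene dropping: unique founder allele labels are propagated through the pedigree and IBD sharing is tallied over replicates. For full sibs the allele-sharing probabilities should come out near (1/4, 1/2, 1/4); the paper's MCMC generalizes this by conditioning the transmissions on observed markers:

```python
import random

def gene_drop_ibd(trials=20000, seed=3):
    """Gene-dropping sketch at one locus for a two-parent, two-offspring
    pedigree: the four founder alleles are labelled 0..3, each sib draws
    one paternal and one maternal allele at random, and the probability
    that the sibs share 0, 1 or 2 alleles identical by descent is
    estimated over replicates."""
    rng = random.Random(seed)
    counts = [0, 0, 0]
    for _ in range(trials):
        dad, mum = (0, 1), (2, 3)                  # distinct founder labels
        sib1 = (rng.choice(dad), rng.choice(mum))  # (paternal, maternal)
        sib2 = (rng.choice(dad), rng.choice(mum))
        shared = (sib1[0] == sib2[0]) + (sib1[1] == sib2[1])
        counts[shared] += 1
    return [c / trials for c in counts]
```

Averaging indicator variables of shared founder origin over replicates is exactly the "internal track of founder origins" device the abstract describes, minus the marker conditioning and crossover simulation.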


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)