1.
Bart Haegeman  Michel Loreau 《Oikos》2009,118(8):1270-1278
Entropy maximization (EM) is becoming an increasingly popular modelling technique in ecology, but its potential and limitations are still poorly understood. In our previous contribution (Haegeman and Loreau 2008), we showed that even a trivial application of EM can yield predictions that provide an excellent fit to empirical data. In his response, Shipley (2009) distinguishes two different versions of the EM procedure, an information-theoretical version and a combinatorial version, to justify a trivial application of EM. Here we first provide a brief user's guide to EM to clarify the various steps involved in the procedure. We then show that the information-theoretical and combinatorial rationales for EM are but complementary views on the same procedure. Lastly, we attempt to identify the conditions that lead to trivial and non-trivial applications of EM. We discuss how non-trivial applications of EM can yield valuable new insights in ecology.
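The core step shared by both versions of the procedure is the same: maximize Shannon entropy subject to moment constraints. A minimal numerical sketch (illustrative only, not the procedure of Haegeman and Loreau), using a single mean-abundance constraint solved by bisection on the Lagrange multiplier:

```python
import math

def maxent_mean(states, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Maximum entropy distribution over `states` subject to a fixed mean:
    p_i is proportional to exp(-lam * x_i), with the multiplier lam
    found by bisection."""
    def mean_for(lam):
        w = [math.exp(-lam * x) for x in states]
        return sum(x * wi for x, wi in zip(states, w)) / sum(w)

    for _ in range(iters):
        mid = (lo + hi) / 2
        if mean_for(mid) > target_mean:  # mean decreases as lam increases
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(-lam * x) for x in states]
    z = sum(w)
    return [wi / z for wi in w]

# With the constraint equal to the unconstrained mean, EM returns the
# uniform distribution, the "trivial" application discussed above.
p = maxent_mean([1, 2, 3, 4], 2.5)
print([round(pi, 3) for pi in p])  # → [0.25, 0.25, 0.25, 0.25]
```

A constraint that deviates from the unconstrained mean yields an exponential (geometric-like) abundance distribution, which is where non-trivial EM predictions come from.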

2.
A randomisation test is described for assessing relative abundance predictions from the maximum entropy approach to biodiversity. The null model underlying the test randomly allocates observed abundances to species, but retains key aspects of the structure of the observed communities: site richness, species composition, and trait covariance. Three test statistics are used to explore different characteristics of the predictions. Two are based on pairwise comparisons between observed and predicted species abundances (RMSE, RMSESqrt). The third statistic is novel and is based on community‐level abundance patterns, using an index calculated from the observed and predicted community entropies (EDiff). Validation of the test to quantify type I and type II error rates showed no evidence of bias or circularity, confirming that the dependencies quantified by Roxburgh and Mokany (2007) and Shipley (2007) have been fully accounted for within the null model. Application of the test to the vineyard data of Shipley et al. (2006) and to an Australian grassland dataset indicated significant departures from the null model, suggesting the integration of species trait information within the maximum entropy framework can successfully predict species abundance patterns. The paper concludes with some general comments on the use of maximum entropy in ecology, including a discussion of the mathematics underlying the Maxent optimisation algorithm and its implementation, the role of absent species in generating biased predictions, and some comments on determining the most appropriate level of data aggregation for Maxent analysis.
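The two pairwise statistics and an entropy-based community index can be sketched as follows (hypothetical function names; the exact EDiff formula of the paper may differ from this simple entropy difference):

```python
import math

def rmse(obs, pred):
    """Root-mean-square error between observed and predicted abundances."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def rmse_sqrt(obs, pred):
    """RMSE on square-root-transformed abundances (down-weights dominants)."""
    return rmse([math.sqrt(o) for o in obs], [math.sqrt(p) for p in pred])

def shannon_entropy(abund):
    """Shannon entropy of a community's relative abundances."""
    total = sum(abund)
    return -sum(a / total * math.log(a / total) for a in abund if a > 0)

def e_diff(obs, pred):
    """Entropy-based community-level index: observed minus predicted entropy."""
    return shannon_entropy(obs) - shannon_entropy(pred)

print(round(rmse([10, 5, 1], [8, 6, 2]), 3),
      round(e_diff([10, 5, 1], [8, 6, 2]), 3))  # → 1.414 -0.144
```

A negative `e_diff` means the predicted community is more even (higher entropy) than the observed one.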

3.
Questions: To what extent can Shipley et al.'s original maximum entropy model of trait‐based community assembly predict relative abundances of species over a large (3000 km2) landscape? How does variation in the species pool affect predictive ability of the model? How might the effects of missing traits be detected? How can non‐trait‐based processes be incorporated into the model? Location: Central England. Material and Methods: Using 10 traits measured on 506 plant species from 1308 1‐m2 plots collected over 3000 km2 in central England, we tested one aspect of Shipley et al.'s original maximum entropy model of “pure” trait‐based community assembly (S1), and modified it to represent both a neutral (S2) and a hybrid (S3) scenario of community assembly at the local level. Predictive ability of the three corresponding models was determined with different species pool sizes (30, 60, 100 and 506 species). Statistical significance was tested using a distribution‐free permutation test. Results: Predictive ability was high and significantly different from random expectations in S1. Predictive ability was low but significant in S2. Highest predictive ability occurred when both neutral and trait‐based processes were included in the model (S3). Increasing the pool size decreased predictive ability, but less so in S3. Incorporating habitat affinity (to indicate missing traits) increased predictive ability. Conclusions: The measured functional traits were significantly related to species relative abundance. Our results both confirm the generality of the original model and highlight the importance of (i) taking into account neutral processes during assembly of a plant community, and (ii) properly defining the species pool.
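The distribution-free permutation test used here can be sketched generically (illustrative names; `sse` stands in for whatever fit statistic is actually used, and the real null model preserves community structure rather than shuffling freely):

```python
import random

def sse(obs, pred):
    """Sum of squared errors between observed and predicted abundances."""
    return sum((o - p) ** 2 for o, p in zip(obs, pred))

def permutation_p_value(observed, predicted, statistic=sse, n_perm=999, seed=1):
    """Distribution-free permutation test: repeatedly shuffle the observed
    abundances among species and recompute the fit statistic to build a
    null distribution; a small p means the model fit beats random allocation."""
    obs_stat = statistic(observed, predicted)
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n_perm)
        if statistic(rng.sample(observed, len(observed)), predicted) <= obs_stat
    )
    return (hits + 1) / (n_perm + 1)

obs = list(range(1, 11))
# A perfect fit beats essentially every random shuffle, so p is near-minimal.
print(permutation_p_value(obs, obs))
```

The `(hits + 1) / (n_perm + 1)` form keeps the p-value strictly positive, which is the standard correction for Monte Carlo permutation tests.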

4.
On the consistency between the maximum information entropy principle and population genetic equilibrium
张宏礼  张鸿雁 《遗传》2006,28(3):324-328
Wang Xiaolong et al. established a unified mathematical model that uses the maximum information entropy principle to derive population genetic equilibrium at a single locus, and gave the maximum-entropy solution of the model; this solution is exactly the set of genotype frequencies given by the Hardy-Weinberg equilibrium law. This shows that when the entropy of the population genotype distribution is maximal, the genotype frequencies no longer change and the population is at equilibrium, thereby proving that the maximum information entropy principle is consistent with the Hardy-Weinberg law. They further claimed that this conclusion can be extended to populations with migration, mutation, selection, genetic drift, and inbreeding, as well as to multiple loci; in short, that the maximum information entropy principle is consistent with population genetic equilibrium in general. However, they only proved the consistency between the maximum information entropy principle and the Hardy-Weinberg law at a single locus. Within that scope, this paper extends the result to multiple loci, each with multiple alleles. As to whether the maximum information entropy principle is consistent with other forms of population genetic equilibrium, their conclusion is merely a conjecture and was not derived rigorously; in fact, extending this consistency to populations with migration, mutation, random drift, or inbreeding is not necessarily correct.
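The single-locus consistency result can be illustrated numerically: with the allele frequency fixed in each gamete, the entropy of the ordered genotype (gamete-pair) distribution is maximal exactly when the two gametes combine independently, i.e., at the Hardy-Weinberg proportions. A sketch under assumed conditions (one biallelic locus; grid search rather than a formal derivation):

```python
import math

def gamete_pair_entropy(p, x):
    """Shannon entropy of the ordered genotype (gamete-pair) distribution
    at a biallelic locus. p: frequency of allele A in each gamete;
    x: frequency of the ordered genotype AA."""
    freqs = [x, p - x, p - x, 1 - 2 * p + x]  # AA, Aa, aA, aa
    return -sum(f * math.log(f) for f in freqs if f > 0)

p = 0.3
# Scan all joint distributions consistent with the allele frequency p.
lo, hi = max(0.0, 2 * p - 1), p
xs = [lo + (hi - lo) * i / 10000 for i in range(10001)]
best = max(xs, key=lambda x: gamete_pair_entropy(p, x))
# Entropy peaks where the gametes pair independently: x = p^2,
# i.e., the Hardy-Weinberg genotype frequencies (p^2, 2pq, q^2).
print(round(best, 4), round(p * p, 4))  # → 0.09 0.09
```

This is the standard fact that, with fixed marginals, the joint distribution of maximal entropy is the product (independent) distribution.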

5.
Jeremy W. Fox  Gisep Rauch 《Oikos》2009,118(10):1507-1514
Genetically diverse parasite infections are common in nature; however, the mechanisms that influence parasite load are still under debate. Rauch et al. found consistently lower parasite loads in genetically-mixed infections compared to uniform infections. Using the additive partition of Loreau and Hector, they demonstrated that this lower parasite load was due to negative complementarity effects, but they only found weak selection effects. Complementarity effects arise from differentiation among genotypes that accrues equally to all genotypes, while selection effects arise from unexpectedly high performance of certain genotypes in mixed infections. However, selection effects might arise either because genotypes with certain traits perform unexpectedly well in mixed infections at the expense of other genotypes ('dominance effects', DEs), or because genotypes with certain traits perform unexpectedly well, but not at the expense of other genotypes ('trait dependent complementarity effects', TDCEs). Here, we reanalyze the data of Rauch et al. using the tripartite partition of Fox to separate DEs, TDCEs and trait-independent complementarity effects (TICEs, corresponding to the complementarity effect of Loreau and Hector). We found significantly negative TDCEs that contribute strongly to the low parasite loads in mixed infections. We suggest novel, testable hypotheses to explain negative TDCEs. Ours is the first study to demonstrate consistently strong TDCEs, which are rare in studies of the productivity of plant mixtures. Our results highlight the importance of testing for TDCEs, rather than assuming them to be small. We discuss the interpretation and value of the tripartite partition as an analytical tool complementary to more mechanistic approaches.

6.
We describe a habitat selection model that predicts the distribution of size-structured groups of fish in a habitat where food availability and water temperature vary spatially. This model is formed by combining a physiological model of fish growth with the logic of ideal free distribution (IFD) theory. In this model we assume that individuals scramble compete for resources, that relative competitive abilities of fish vary with body size, and that individuals select patches that maximize their growth rate. This model overcomes limitations in currently existing physiological and IFD-based models of habitat selection. This is because existing physiological models do not take into account the fact that the amount of food consumed by a fish in a patch will depend on the number of competitors there (something that IFD theory addresses), while traditional IFD models do not take into account the fact that fish are likely to choose patches based on potential growth rate rather than gross food intake (something that physiological models address). Our model takes advantage of the complementary strengths of these two approaches to overcome these weaknesses. Reassuringly, our model reproduces the predictions of its two constituent models under the simple conditions where they apply. When there is no competition for resources it mimics the physiological model of habitat selection, and when there is competition but no temperature variation between patches it mimics either the simple IFD model or the IFD model for unequal competitors. However, when there are both competition and temperature differences between patches our model makes different predictions. It predicts that input-matching between the resource renewal rate and the number of fish (or competitive units) in a patch, the hallmark of IFD models, will be the exception rather than the rule. It also makes the novel prediction that temperature based size-segregation will be common, and that the strength and direction of this segregation will depend on per capita resource renewal rates and the manner in which competitive weight scales with body size. Size-segregation should become more pronounced as per capita resource abundance falls. A larger fish/cooler water pattern is predicted when competitive ability increases more slowly than maximum ration with body size, and a smaller fish/cooler water pattern is predicted when competitive ability increases more rapidly than maximum ration with body size.
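The input-matching hallmark of classical IFD theory (competitor numbers proportional to patch resource renewal rates) can be illustrated with a toy greedy allocation. This is a sketch with equal competitors and no temperature variation, not the authors' model:

```python
def ifd_allocation(renewal_rates, n_fish):
    """Sequentially place equal competitors in the patch currently offering
    the highest per-capita intake; the outcome approximates IFD
    input matching (n_i proportional to r_i)."""
    counts = [0] * len(renewal_rates)
    for _ in range(n_fish):
        best = max(range(len(renewal_rates)),
                   key=lambda i: renewal_rates[i] / (counts[i] + 1))
        counts[best] += 1
    return counts

# Renewal rates in ratio 6:3:1 attract fish in the same ratio,
# equalizing per-capita intake (1.0 in every occupied patch).
print(ifd_allocation([6.0, 3.0, 1.0], 10))  # → [6, 3, 1]
```

The paper's point is that once patches also differ in temperature, and growth rather than intake is maximized, this proportionality generally breaks down.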

7.
Summary Computer technology has acquired an important role in the study of a variety of biological systems. The availability of modern powerful computers has stimulated the development of good and accurate models of biological systems. Biological systems, such as the immune response against cancer, are complex, and it is difficult to experimentally control all the interacting elements constituting the immune response of a host to cancer. Complex biosystems do not always behave or act as expected during experimental investigation. In these cases computer models can be helpful in understanding the behavior of such complex systems. The purpose of this review is to consider the use of mathematical models to study the immune response against cancer. The logic and design of some operable models relevant for tumor immunology will be discussed. Special attention is given to the conceptualization of a model based upon a new hypothesis of tumor rejection presented by De Weger et al. [10]. Technical details concerning the mathematical aspects, differential equations, details on hardware and software packages, etc., are not included in this survey. These details are contained in the original papers.

8.
MOTIVATION: Missing data in genotyping single nucleotide polymorphism (SNP) spots are common. High-throughput genotyping methods usually have a high rate of missing data. For example, the published human chromosome 21 data by Patil et al. contains about 20% missing SNPs. Inferring missing SNPs using the haplotype block structure is promising but difficult because the haplotype block boundaries are not well defined. Here we propose a global algorithm to overcome this difficulty. RESULTS: First, we propose to use entropy as a measure of haplotype diversity. We show that the entropy measure combined with a dynamic programming algorithm produces better haplotype block partitions than other measures. Second, based on the entropy measure, we propose a two-step iterative partition-inference algorithm for the inference of missing SNPs. At the first step, we apply the dynamic programming algorithm to partition haplotypes into blocks. At the second step, we use an iterative process similar to the expectation-maximization algorithm to infer missing SNPs in each haplotype block so as to minimize the block entropy. The algorithm iterates these two steps until the total block entropy is minimized. We test our algorithm on several experimental data sets. The results show that the global approach significantly improves the accuracy of the inference. AVAILABILITY: Upon request.
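The entropy measure of haplotype diversity at the heart of the method can be sketched as follows (a simplified illustration; the paper's dynamic programming partition and EM-style inference steps are not reproduced here):

```python
import math
from collections import Counter

def block_entropy(haplotypes, start, end):
    """Shannon entropy (bits) of the distinct haplotype strings observed
    in the SNP block [start, end); lower entropy = less diverse block."""
    counts = Counter(h[start:end] for h in haplotypes)
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Five haplotypes over four SNP sites, falling into two distinct patterns.
haps = ["0011", "0011", "0110", "0110", "0011"]
print(round(block_entropy(haps, 0, 4), 3))  # → 0.971
```

A partition algorithm would compare the summed entropy of candidate block boundaries; imputing a missing SNP so as to merge a haplotype into an existing pattern lowers the block entropy, which is the quantity the iterative inference step minimizes.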

9.
K Vaesen 《PloS one》2012,7(7):e40989
The idea that demographic change may spur or slow down technological change has become widely accepted among evolutionary archaeologists and anthropologists. Two models have been particularly influential in promoting this idea: a mathematical model by Joseph Henrich, developed to explain the Tasmanian loss of culture during the Holocene; and an agent-based adaptation thereof, devised by Powell et al. to explain the emergence of modern behaviour in the Late Pleistocene. However, the models in question make rather strong assumptions about the distribution of skills among social learners and about the selectivity of social learning strategies. Here I examine the behaviour of these models under more conservative and, on empirical and theoretical grounds, equally reasonable assumptions. I show that, some qualifications notwithstanding, Henrich's model largely withstands my robustness tests. The model of Powell et al., in contrast, does not, a finding that warrants a fair amount of skepticism towards Powell et al.'s explanation of the Upper Paleolithic transition. More generally, my evaluation of the accounts of Henrich and of Powell et al. helpfully clarifies which inferences their popular models do and do not support.

10.
The widely used “Maxent” software for modeling species distributions from presence‐only data (Phillips et al., Ecological Modelling, 190, 2006, 231) tends to produce models with high predictive performance but low ecological interpretability, and implications of Maxent's statistical approach to variable transformation, model fitting, and model selection remain underappreciated. In particular, Maxent's approach to model selection through lasso regularization has been shown to give less parsimonious distribution models—that is, models which are more complex but not necessarily predictively better—than subset selection. In this paper, we introduce the MIAmaxent R package, which provides a statistical approach to modeling species distributions similar to Maxent's, but with subset selection instead of lasso regularization. The simpler models typically produced by subset selection are ecologically more interpretable, and making distribution models more grounded in ecological theory is a fundamental motivation for using MIAmaxent. To that end, the package executes variable transformation based on expected occurrence–environment relationships and contains tools for exploring data and interrogating models in light of knowledge of the modeled system. Additionally, MIAmaxent implements two different kinds of model fitting: maximum entropy fitting for presence‐only data and logistic regression (GLM) for presence–absence data. Unlike Maxent, MIAmaxent decouples variable transformation, model fitting, and model selection, which facilitates methodological comparisons and gives the modeler greater flexibility when choosing a statistical approach to a given distribution modeling problem.

11.
In recent years two different styles of model for homologous recombination have been discussed, depending on whether or not the recombination event occurs in the vicinity of a double-strand break in DNA. The models of Holliday and of Meselson and Radding exemplify those that do not involve a break, whereas the model of Szostak et al. is taken as an example of those that do. Recent advances in understanding a prototypic recombination system thought to promote exchange distant from DNA ends, at Chi sites, suggest a mechanism of initiation neither like Holliday/Meselson-Radding nor like Szostak et al. In those models, only one strand of DNA may invade a homologous DNA molecule. We propose a model for Chi in which exonuclease degrades DNA from a double-strand break to the Chi site; the exonuclease is converted into a helicase upon interaction with Chi; unwinding produces a recombinagenic split-end, and both 3'- and 5'-ending strands at the split-end are capable of invading a homologue. Different genetic consequences are proposed to result from invasion by each. We review evidence supporting the split-end model and suggest its application in at least some cases previously considered to proceed via the Meselson-Radding model and by the double-strand-break repair model of Szostak et al.

12.
Aim Species distribution models are invaluable tools in biogeographical, ecological and applied biological research, but specific concerns have been raised in relation to different modelling techniques in terms of their validity. Here we compare two fundamentally different approaches to species distribution modelling, one based on simple occurrence data where the lack of an ecological framework has been criticized, and the other firmly based in socio‐ecological theory but requiring highly detailed behavioural information that is often limited in availability. Location (Sub‐Saharan) Africa. Methods We used two distinct techniques to predict the realized distribution of a model species, the vervet monkey (Cercopithecus aethiops Linnaeus, 1758). A maximum entropy model was produced taking 13 environmental variables and presence‐only data from 174 sites throughout Africa as input, with an additional 58 sites retained to test the model. A time‐budget model considering the same environmental variables was constructed from detailed behavioural data on 20 groups representing 14 populations, with presence‐only data from the remaining 218 sites reserved to test model predictions on vervet monkey occurrence. Both models were further validated against a reference species distribution map as drawn up by the African Mammals Databank. Results Both models performed well, with the time budget and maximum entropy algorithms correctly predicting vervet monkey presence at 78.4% and 91.4% of their respective test sites. Similarly, the time‐budget model correctly predicted presence and absence at 87.4% of map pixels against the reference distribution map, and the maximum entropy model achieved a success rate of 81.8%. Finally, there was a high level of agreement (81.6%) between the presence–absence maps produced by the two models, and the environmental variables identified as most strongly driving vervet monkey distribution were the same in both models. Main conclusions The time‐budget and maximum entropy models produced accurate and remarkably similar species distribution maps, despite fundamental differences in their conceptual and methodological approaches. Such strong convergence not only provides support for the credibility of current results, but also relieves concerns about the validity of the two modelling approaches.

13.
Recently, Schork et al. found that two-trait-locus, two-marker-locus (parametric) linkage analysis can provide substantially more linkage information than can standard one-trait-locus, one-marker-locus methods. However, because of the increased burden of computation, Schork et al. do not expect that their approach will be applied in an initial genome scan. Further, the specification of a suitable two-locus segregation model can be crucial. Affected-sib-pair tests are computationally simple and do not require an explicit specification of the disease model. In the past, however, these tests mainly have been applied to data with a single marker locus. Here, we consider sib-pair tests that make it possible to analyze two marker loci simultaneously. The power of these tests is investigated for different (epistatic and heterogeneous) two-trait-locus models, each trait locus being linked to one of the marker loci. We compare these tests both with the test that is optimal for a certain model and with the strategy that analyzes each marker locus separately. The results indicate that a straightforward extension of the well-known mean test to two marker loci can be much more powerful than single-marker-locus analysis and that its power is only slightly inferior to the power of the optimal test.

14.
Examining whole-body center of mass (COM) motion is one method used to quantify dynamic balance and energy during gait. One common method for estimating the COM position is to apply an anthropometric model to a marker set and calculate the weighted sum from known segmental COM positions. Several anthropometric models are available to perform such a calculation. However, to date there has been no study of how the anthropometric model affects whole-body COM calculations during gait. This information is pertinent to researchers because the choice of anthropometric model may influence gait research findings, yet the current trend is to use a single model consistently. In this study we analyzed a single stride of gait data from 103 young adult participants. We compared the whole-body COM motion calculated from 4 different anthropometric models (Plagenhoef et al., 1983; Winter, 1990; de Leva, 1996; Pavol et al., 2002). We found that anterior-posterior motion calculations are relatively unaffected by the anthropometric model. However, medial-lateral and vertical motions are significantly affected by the use of different anthropometric models. Our findings suggest that researchers carefully choose an anthropometric model to fit their study populations when interested in medial-lateral or vertical motions of the COM. Our data provide researchers with a priori information for model selection, depending on the variable of interest and how conservative they wish to be in COM comparisons between groups.
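The weighted-sum calculation itself is straightforward; a minimal sketch with hypothetical mass fractions and positions (not taken from any of the four cited anthropometric models):

```python
def whole_body_com(segments):
    """Whole-body COM as the mass-fraction-weighted sum of segment COM
    positions. segments: list of (mass_fraction, (x, y, z)) pairs;
    the mass fractions are assumed to sum to 1."""
    return tuple(
        sum(frac * pos[i] for frac, pos in segments)
        for i in range(3)
    )

# Toy two-segment body (hypothetical fractions, illustration only).
segments = [(0.6, (0.0, 0.0, 1.0)), (0.4, (1.0, 0.0, 0.5))]
print(tuple(round(c, 3) for c in whole_body_com(segments)))  # → (0.4, 0.0, 0.8)
```

Different anthropometric models supply different mass fractions and segmental COM locations, which is exactly why the resulting whole-body trajectory can differ between models.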

15.
Motivation: The topic of this paper is the estimation of alignments and mutation rates based on stochastic sequence-evolution models that allow insertions and deletions of subsequences ('fragments') and not just single bases. The model we propose is a variant of a model introduced by Thorne et al. (J. Mol. Evol., 34, 3-16, 1992). The computational tractability of the model depends on certain restrictions in the insertion/deletion process; we discuss the possible effects. Results: The process of fragment insertion and deletion in the sequence-evolution model induces a hidden Markov structure at the level of alignments and thus makes possible efficient statistical alignment algorithms. As an example we apply a sampling procedure to assess the variability in alignment and mutation parameter estimates for HVR1 sequences of human and orangutan, improving results of previous work. Simulation studies give evidence that estimation methods based on the proposed model also give satisfactory results when applied to data for which the restrictions in the insertion/deletion process do not hold. Availability: The source code of the software for sampling alignments and mutation rates for a pair of DNA sequences according to the fragment insertion and deletion model is freely available from http://www.math.uni-frankfurt.de/~stoch/software/mcmcsalut under the terms of the GNU public license (GPL, 2000).

16.
Mathematical models of cardiac cells have become important tools for investigating the electrophysiological properties and behavior of the heart. As the number of published models increases, it becomes more difficult to choose a model appropriate for the conditions to be studied, especially when multiple models describing the species and region of the heart of interest are available. In this paper, we will review and compare two detailed ionic models of human atrial myocytes, the Nygren et al. model (NM) and the Courtemanche et al. model (CM). Although both models include the same transmembrane currents and are largely based on the same experimental data from human atrial cells, the two models exhibit vastly different properties, especially in their dynamical behavior, including restitution and memory effects. The CM produces pronounced rate adaptation of action potential duration (APD) with limited memory effects, while the NM exhibits strong rate dependence of resting membrane potential (RMP), limited APD restitution, and stronger memory, as well as delayed afterdepolarizations and auto-oscillatory behavior upon cessation of rapid pacing. Channel conductance modifications based on experimentally measured changes during atrial fibrillation modify rate adaptation and memory in both models, but do not change the primary rate-dependent properties of APD and RMP for the CM and NM, respectively. Two sets of proposed changes to the NM that yield a spike-and-dome action potential morphology qualitatively similar to the CM at slow pacing rates similarly do not change the underlying dynamics of the model. Moreover, interchanging the formulations of all transmembrane currents between the two models while leaving calcium handling and ionic concentrations intact indicates that the currents strongly influence memory and the rate adaptation of RMP, while intracellular calcium dynamics primarily determine APD rate adaptation. Our results suggest that differences in intracellular calcium handling between the two human atrial myocyte models are responsible for marked dynamical differences and may prevent reconciliation between the models by straightforward channel conductance modifications.

17.
Shannon entropy H and related measures are increasingly used in molecular ecology and population genetics because (1) unlike measures based on heterozygosity or allele number, these measures weigh alleles in proportion to their population fraction, thus capturing a previously-ignored aspect of allele frequency distributions that may be important in many applications; (2) these measures connect directly to the rich predictive mathematics of information theory; (3) Shannon entropy is completely additive and has an explicitly hierarchical nature; and (4) Shannon entropy-based differentiation measures obey strong monotonicity properties that heterozygosity-based measures lack. We derive simple new expressions for the expected values of the Shannon entropy of the equilibrium allele distribution at a neutral locus in a single isolated population under two models of mutation: the infinite allele model and the stepwise mutation model. Surprisingly, this complex stochastic system for each model has an entropy expressible as a simple combination of well-known mathematical functions. Moreover, entropy- and heterozygosity-based measures for each model are linked by simple relationships that are shown by simulations to be approximately valid even far from equilibrium. We also identify a bridge between the two models of mutation. We apply our approach to subdivided populations which follow the finite island model, obtaining the Shannon entropy of the equilibrium allele distributions of the subpopulations and of the total population. We also derive the expected mutual information and normalized mutual information (“Shannon differentiation”) between subpopulations at equilibrium, and identify the model parameters that determine them. We apply our measures to data from the common starling (Sturnus vulgaris) in Australia. Our measures provide a test for neutrality that is robust to violations of equilibrium assumptions, as verified on real-world data from starlings.
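The basic quantities being contrasted can be computed directly from allele frequencies; a small sketch comparing an even and a skewed distribution with the same allele number (hypothetical frequencies):

```python
import math

def shannon_entropy(freqs):
    """Shannon entropy H of an allele frequency distribution (natural log);
    weighs each allele in proportion to its population fraction."""
    return -sum(p * math.log(p) for p in freqs if p > 0)

def expected_heterozygosity(freqs):
    """Gene diversity (expected heterozygosity): 1 - sum of p_i squared."""
    return 1 - sum(p * p for p in freqs)

# Same allele number (4), different evenness: entropy and heterozygosity
# both drop for the skewed distribution, but by different amounts.
even = [0.25] * 4
skewed = [0.85, 0.05, 0.05, 0.05]
print(round(shannon_entropy(even), 3), round(shannon_entropy(skewed), 3))
print(round(expected_heterozygosity(even), 3),
      round(expected_heterozygosity(skewed), 3))
```

Allele number alone cannot distinguish the two distributions, which is the frequency-weighting point made in item (1) above.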

18.
Mechanism underlying mammalian preimplantation development has long been a subject of controversy and the central question has been if any "determinants" play a key role in a manner comparable to the non-mammalian "model" system. During the last decade, this issue has been revived (Pearson, 2002; Rossant and Tam, 2004) by claims that the axes of the mouse blastocyst are anticipated at the egg ("prepatterning model"; Gardner, 1997; Gardner, 2001; Piotrowska et al., 2001; Piotrowska and Zernicka-Goetz, 2001; Zernicka-Goetz, 2005), suggesting that a mechanism comparable to that operating in non-mammals may be at work. However, recent studies by other laboratories do not support these claims ("regulative model"; Alarcon and Marikawa, 2003; Chroscicka et al., 2004; Hiiragi and Solter, 2004; Alarcon and Marikawa, 2005; Louvet-Vallee et al., 2005; Motosugi et al., 2005) and the issue is currently under hot debate (Vogel, 2005). Deepening our knowledge of this issue will not only provide an essential basis for understanding mammalian development, but also directly apply to ongoing clinical practices such as intracytoplasmic sperm injection (ICSI) and preimplantation genetic diagnosis (PGD). These practices were originally supported by a classical premise that mammalian preimplantation embryos are highly regulative (Tarkowski, 1959; Tarkowski, 1961; Tarkowski and Wroblewska, 1967; Rossant, 1976), in keeping with the "regulative model". However, if the "prepatterning model" is correct, the latter will require critical reassessment.

19.
We propose a framework for modeling sequence motifs based on the maximum entropy principle (MEP). We recommend approximating short sequence motif distributions with the maximum entropy distribution (MED) consistent with low-order marginal constraints estimated from available data, which may include dependencies between nonadjacent as well as adjacent positions. Many maximum entropy models (MEMs) are specified by simply changing the set of constraints. Such models can be utilized to discriminate between signals and decoys. Classification performance using different MEMs gives insight into the relative importance of dependencies between different positions. We apply our framework to large datasets of RNA splicing signals. Our best models outperform previous probabilistic models in the discrimination of human 5' (donor) and 3' (acceptor) splice sites from decoys. Finally, we discuss mechanistically motivated ways of comparing models.
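In the simplest case, with only single-position marginal constraints, the maximum entropy distribution factorizes into independent positions. A sketch under assumed conditions (toy two-base "signals"; real splice-site MEMs use higher-order constraints, and bases absent from the training set would need pseudocounts):

```python
import math
from collections import Counter

def positional_marginals(seqs):
    """Per-position base counts; with only first-order (single-position)
    constraints, the MED factorizes across positions."""
    return [Counter(s[i] for s in seqs) for i in range(len(seqs[0]))]

def log_prob(seq, marginals, n):
    """Log-probability of seq under the factorized first-order MED
    estimated from n training sequences (all bases assumed observed)."""
    return sum(math.log(m[c] / n) for c, m in zip(seq, marginals))

signals = ["AG", "AG", "AC", "TG"]
m = positional_marginals(signals)
# Score a candidate site; a higher log-probability is more signal-like.
print(round(log_prob("AG", m, len(signals)), 3))  # → -0.575
```

Adding pairwise (second-order) marginal constraints breaks this factorization and is where the framework's information about inter-position dependencies enters.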

20.
As Jessani et al. [1] point out, the development of cell and animal models that accurately depict human tumorigenesis remains a major goal of cancer research. Clam cancer offers significant advantages over traditional models for genotoxic and non-genotoxic preclinical analysis of treatments for human cancers with a similar molecular basis. The naturally occurring clam model closely resembles an out-breeding, human clinical population and provides both in vitro and in vivo alternatives to those generated from inbred mouse strains or by intentional exposure to known tumor viruses. Fly and worm in vivo models for adult human somatic cell cancers do not exist because their adult somatic cells do not divide. Clam cancer is the best characterized, naturally occurring malignancy with a known molecular basis remarkably similar to those observed in several unrelated human cancers where both genotoxic and non-genotoxic strategies can restore the function of wild-type p53. To further emphasize this point of view, we here demonstrate a p53-induced, mitochondrial-directed mechanism for promoting apoptosis in the clam cancer model that is similar to one recently identified in mammals. Discerning the molecular basis for naturally occurring diseases in non-traditional models and correlating these with related molecular mechanisms responsible for human diseases is a virtually unexplored aspect of toxico-proteomics and genomics and related drug discovery.
