Similar articles
Found 20 similar articles (search time: 15 ms)
1.
Recently, Lévy walks have been put forward as a new paradigm for animal search, and many cases have been made for their presence in nature. However, it remains debated whether Lévy walks are an inherent behavioural strategy or emerge from the animal reacting to its habitat. Here, we demonstrate signatures of Lévy behaviour in the search movement of mud snails (Hydrobia ulvae) based on a novel, direct assessment of movement properties in an experimental set-up using different food distributions. Our experimental data uncovered clusters of small movement steps alternating with long moves, independent of food encounter and landscape complexity. Moreover, the size distributions of these clusters followed truncated power laws. These two findings are characteristic signatures of mechanisms underlying inherent Lévy-like movement. Thus, our study provides clear experimental evidence that such multi-scale movement is an inherent behaviour rather than resulting from the animal interacting with its environment.
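Fitting truncated power laws to cluster sizes rests on estimating a tail exponent from the data. As a minimal, hypothetical sketch (not the authors' actual fitting procedure), the standard continuous maximum-likelihood estimator α̂ = 1 + n / Σ ln(x_i / x_min) can be implemented in a few lines; all names are illustrative:

```python
import math
import random

def powerlaw_mle(xs, xmin):
    """Continuous MLE for the exponent of P(x) ~ x^-alpha, x >= xmin."""
    tail = [x for x in xs if x >= xmin]
    n = len(tail)
    return 1.0 + n / sum(math.log(x / xmin) for x in tail)

# quick self-check: sample from a pure power law via inverse-CDF sampling
random.seed(0)
alpha_true, xmin = 2.0, 1.0
xs = [xmin * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
      for _ in range(20000)]
alpha_hat = powerlaw_mle(xs, xmin)
```

With 20,000 samples the estimate recovers the true exponent to within a few percent; in practice one would also estimate x_min and the truncation point, which this sketch leaves out.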

2.
3.
Among philosophers of science, there is now widespread agreement that the DN model of explanation is poorly equipped to account for explanations in biology. Rather than identifying laws, so the consensus goes, researchers explain biological capacities by constructing a model of the underlying mechanism. We think that the dichotomy between DN explanations and mechanistic explanations is misleading. In this article, we argue that there are cases in which biological capacities are explained without constructing a model of the underlying mechanism. Although these explanations do not conform to Hempel’s DN model (they do not deduce the explanandum from laws of nature), they do invoke more or less stable generalisations. Because they invoke generalisations and have the form of an argument, we call them inferential explanations. We support this claim by considering two examples of explanations of biological capacities: pigeon navigation and photoperiodism. Next, we argue that these non-mechanistic explanations are crucial to biology in three ways: (i) sometimes, they are the only thing we have (there is no alternative available), (ii) they are heuristically useful, and (iii) they provide genuine understanding and so are interesting in their own right. In the last sections we discuss the relation between types of explanations and types of experiments, and situate our views within some relevant debates on explanatory power and explanatory virtues.

4.
Gene, 1996, 172(1): GC11-GC17
Algorithms inspired by comparative genomics calculate an edit distance between two linear orders based on elementary edit operations such as inversion, transposition and reciprocal translocation. All operations are generally assigned the same weight, simply by default, because no systematic empirical studies exist verifying whether algorithmic outputs involve realistic proportions of each. Nor do we have data on how weights should vary with the length of the inverted or transposed segment of the chromosome. In this paper, we present a rapid algorithm that allows each operation to take on a range of weights, producing a relatively tight bound on the distance between single-chromosome genomes, by means of a greedy search with look-ahead. The efficiency of this algorithm allows us to test random genomes for each parameter setting, to detect gene order similarity and to infer the parameter values most appropriate to the phylogenetic domain under study. We apply this method to genome segments in which the same gene order is conserved in Escherichia coli and Bacillus subtilis, as well as to the gene order in human versus Drosophila mitochondrial genomes. In both cases, we conclude that it is most appropriate to assign somewhat more than twice the weight to transpositions and inverted transpositions than to inversions. We also explore segment-length weighting for fungal mitochondrial gene orders.
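The weighted greedy search with look-ahead cannot be reconstructed from an abstract, but its core bookkeeping — counting breakpoints and greedily choosing the inversion with the best breakpoint reduction per unit weight — can be sketched. The look-ahead is omitted here, and the weight function (base_w + len_w × segment length) is an assumption for illustration only:

```python
def breakpoints(perm):
    """Number of adjacent pairs not consecutive in the identity order."""
    ext = [0] + list(perm) + [len(perm) + 1]
    return sum(1 for a, b in zip(ext, ext[1:]) if abs(b - a) != 1)

def greedy_inversion_sort(perm, base_w=1.0, len_w=0.0):
    """Greedily apply the inversion with the best breakpoint reduction
    per unit weight; returns the final order and total weighted cost."""
    perm = list(perm)
    cost = 0.0
    while breakpoints(perm):
        best = None
        for i in range(len(perm)):
            for j in range(i + 1, len(perm) + 1):
                cand = perm[:i] + perm[i:j][::-1] + perm[j:]
                w = base_w + len_w * (j - i)
                gain = (breakpoints(perm) - breakpoints(cand)) / w
                if best is None or gain > best[0]:
                    best = (gain, cand, w)
        if best[0] <= 0:          # greedy stuck: stop rather than loop forever
            break
        cost += best[2]
        perm = best[1]
    return perm, cost
```

For example, `greedy_inversion_sort([1, 4, 3, 2, 5])` sorts the order with a single inversion of the middle segment. A real implementation would also handle signed genes, transpositions, and translocations.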

5.
Several empirical studies have shown that the animal group size distribution of many species can be well fit by power laws with exponential truncation. A striking empirical result due to Niwa is that the exponent in these power laws is one and the truncation is determined by the average group size experienced by an individual. This distribution is known as the logarithmic distribution. In this paper we provide a first-principles derivation of the logarithmic distribution and other truncated power laws using a site-based merge and split framework. In particular, we investigate two such models. First, we look at a model in which groups merge whenever they meet but split with a constant probability per time step. This generates a distribution similar, but not identical, to the logarithmic distribution. Second, we propose a model, based on preferential attachment, that produces the logarithmic distribution exactly. Our derivation helps explain why logarithmic distributions are so widely observed in nature. It also allows us to link splitting and joining behavior to the exponent and truncation parameters in power laws.
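A toy, non-site-based variant of such merge-split dynamics can be simulated in a few lines to build intuition for the resulting group-size distributions; the split rule and all parameter values below are illustrative assumptions, not the paper's model:

```python
import random

def merge_split(n_individuals=200, p_split=0.3, steps=5000, seed=1):
    """Toy merge-split dynamics: each step, either a random group splits
    uniformly (with prob p_split) or two random groups merge."""
    random.seed(seed)
    groups = [1] * n_individuals          # start with singletons
    for _ in range(steps):
        if random.random() < p_split:
            i = random.randrange(len(groups))
            g = groups.pop(i)
            if g > 1:
                k = random.randint(1, g - 1)   # uniform split point
                groups += [k, g - k]
            else:
                groups.append(g)               # singletons cannot split
        elif len(groups) >= 2:
            i, j = random.sample(range(len(groups)), 2)
            merged = groups[i] + groups[j]
            groups = [g for k, g in enumerate(groups) if k not in (i, j)]
            groups.append(merged)
    return groups
```

Histogramming the returned sizes over many runs shows the heavy-tailed, truncated shape discussed above; total population is conserved by construction.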

6.
McKinney SA, Joo C, Ha T. Biophysical Journal, 2006, 91(5): 1941-1951
The analysis of single-molecule fluorescence resonance energy transfer (FRET) trajectories has become a topic of significant biophysical interest. In deducing the transition rates between the various states of a system from time-binned data, researchers have relied on simple but often arbitrary methods of extracting rates from FRET trajectories. Although these methods have proven satisfactory in cases of well-separated, low-noise, two- or three-state systems, they become less reliable when applied to a system of greater complexity. We have developed an analysis scheme that casts single-molecule time-binned FRET trajectories as hidden Markov processes, allowing one to determine, based on probability alone, the most likely FRET-value distributions of states and their interconversion rates, while simultaneously determining the most likely time sequence of underlying states for each trajectory. Together with a transition density plot and the Bayesian information criterion, we can also determine the number of different states present in a system in addition to the state-to-state transition probabilities. Here we present the algorithm and test its limitations with various simulated data and previously reported Holliday junction data. The algorithm is then applied to the analysis of the binding and dissociation of three RecA monomers on a DNA construct.
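The hidden-Markov treatment of time-binned trajectories rests on computing the most likely state path. A minimal Viterbi decoder with Gaussian emissions is a standard textbook sketch of that step (the authors' full scheme also learns the state parameters and rates, which this omits); all parameter values are illustrative:

```python
import math

def viterbi_gauss(obs, means, sigma, trans, init):
    """Most likely state path for a Gaussian-emission HMM (log domain).
    trans and init are log-probabilities."""
    def logpdf(x, mu):
        return (-0.5 * ((x - mu) / sigma) ** 2
                - math.log(sigma * math.sqrt(2 * math.pi)))
    n_states = len(means)
    V = [[init[s] + logpdf(obs[0], means[s]) for s in range(n_states)]]
    back = []
    for x in obs[1:]:
        row, ptr = [], []
        for s in range(n_states):
            prev = max(range(n_states), key=lambda r: V[-1][r] + trans[r][s])
            row.append(V[-1][prev] + trans[prev][s] + logpdf(x, means[s]))
            ptr.append(prev)
        V.append(row)
        back.append(ptr)
    # backtrack from the best final state
    path = [max(range(n_states), key=lambda s: V[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]
```

On a noiseless two-state FRET trace alternating between 0.2 and 0.8, the decoder recovers the underlying state sequence exactly.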

7.
Directed network motifs are the building blocks of complex networks, such as human brain networks, and capture deep connectivity information that is not contained in standard network measures. In this paper we present the first application of directed network motifs in vivo to human brain networks, utilizing recently developed directed progression networks which are built upon rates of cortical thickness changes between brain regions. This is in contrast to previous studies, which have relied on simulations and in vitro analysis of non-human brains. We show that the frequencies of specific directed network motifs can be used to distinguish between patients with Alzheimer’s disease (AD) and normal control (NC) subjects. Especially interesting from a clinical standpoint, these motif frequencies can also distinguish between subjects with mild cognitive impairment who remained stable over three years (MCI) and those who converted to AD (CONV). Furthermore, we find that the entropy of the distribution of directed network motifs increased from MCI to CONV to AD, implying that the distribution of pathology is more structured in MCI but becomes less so as it progresses to CONV and further to AD. Thus, directed network motif frequencies and distributional properties provide new insights into the progression of Alzheimer’s disease as well as new imaging markers for distinguishing between normal controls, stable mild cognitive impairment, MCI converters and Alzheimer’s disease.
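A naive census of directed 3-node motifs, classifying each connected triple up to isomorphism and reporting the Shannon entropy of the class frequencies, can be sketched as follows. The brute-force canonicalization by relabeling is an illustrative choice that is only practical for small graphs, not the paper's method:

```python
import math
from itertools import combinations, permutations

def motif_entropy(nodes, edges):
    """Census of directed 3-node subgraph classes (up to isomorphism)
    and the Shannon entropy (bits) of their frequency distribution."""
    eset = set(edges)
    counts = {}
    for triple in combinations(nodes, 3):
        # canonical form: lexicographically smallest edge pattern over
        # all relabelings of the three nodes
        best = None
        for p in permutations(triple):
            idx = {v: i for i, v in enumerate(p)}
            pat = tuple(sorted((idx[u], idx[v]) for u, v in eset
                               if u in idx and v in idx))
            if best is None or pat < best:
                best = pat
        if best:                       # skip triples with no internal edges
            counts[best] = counts.get(best, 0) + 1
    total = sum(counts.values())
    ent = -sum(c / total * math.log2(c / total) for c in counts.values())
    return counts, ent
```

On a small example graph containing a 3-cycle, an out-star, a path, and a lone edge, the census finds four distinct motif classes with uniform frequencies, giving entropy log2(4) = 2 bits.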

8.
This paper presents an adaptive statistical test for QRS detection in electrocardiography (ECG) signals. The method is based on an M-ary generalized likelihood ratio test (LRT) defined over a multiple observation window in the Fourier domain. The motivation for proposing another detection algorithm based on maximum a posteriori (MAP) estimation lies in the high complexity of the signal models proposed in previous approaches, which (i) makes them computationally unfeasible or unsuitable for real-time applications such as intensive care monitoring, and (ii) leaves the overall performance dependent on parameter selection. We therefore propose an alternative model based on the independent Gaussian properties of the Discrete Fourier Transform (DFT) coefficients, which allows us to define a simplified MAP probability function. In addition, the proposed approach defines an adaptive MAP statistical test in which a global hypothesis is defined over particular hypotheses of the multiple observation window. The observation interval is modeled as a discontinuous-transmission discrete-time stochastic process, avoiding the inclusion of parameters that constrain the morphology of the QRS complexes.

9.
The clinical serial interval of an infectious disease is the time between date of symptom onset in an index case and the date of symptom onset in one of its secondary cases. It is a quantity which is commonly collected during a pandemic and is of fundamental importance to public health policy and mathematical modelling. In this paper we present a novel method for calculating the serial interval distribution for a Markovian model of household transmission dynamics. This allows the use of Bayesian MCMC methods, with explicit evaluation of the likelihood, to fit to serial interval data and infer parameters of the underlying model. We use simulated and real data to verify the accuracy of our methodology and illustrate the importance of accounting for household size. The output of our approach can be used to produce posterior distributions of population level epidemic characteristics.
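A minimal Gillespie simulation of Markovian SIR transmission in a single household gives a feel for the model class, recording crude intervals from the index case at t = 0 to each secondary infection (ignoring incubation, so these are generation times rather than clinical serial intervals). All parameter values and names are illustrative, not the paper's inference machinery:

```python
import random

def household_infection_times(n=5, beta=2.0, gamma=1.0, seed=3):
    """Gillespie simulation of SIR in a household of size n; returns
    times of secondary infections relative to the index case at t = 0."""
    random.seed(seed)
    s, i = n - 1, 1
    t, times = 0.0, []
    while i > 0:
        rate_inf = beta * s * i / n       # within-household transmission
        rate_rec = gamma * i              # recovery
        total = rate_inf + rate_rec
        t += random.expovariate(total)    # time to next event
        if random.random() < rate_inf / total:
            s -= 1; i += 1
            times.append(t)
        else:
            i -= 1
    return times
```

Repeating this over many households and fitting the resulting interval distribution is the kind of task the paper's explicit-likelihood MCMC approach addresses directly.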

10.
11.
Rohlfs RV, Weir BS. Genetics, 2008, 180(3): 1609-1616
It is well established that test statistics and P-values derived from discrete data, such as genetic markers, are also discrete. In most genetic applications, the null distribution for a discrete test statistic is approximated with a continuous distribution, but this approximation may not be reasonable. In some cases using the continuous approximation for the expected null distribution may cause truly null test statistics to appear nonnull. We explore the implications of using continuous distributions to approximate the discrete distributions of Hardy–Weinberg equilibrium test statistics and P-values. We derive exact P-value distributions under the null and alternative hypotheses, enabling a more accurate analysis than is possible with continuous approximations. We apply these methods to biological data and find that using continuous distribution theory with exact tests may underestimate the extent of Hardy–Weinberg disequilibrium in a sample. The implications may be most important for the widespread use of whole-genome case–control association studies and Hardy–Weinberg equilibrium (HWE) testing for data quality control.
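The exact conditional distribution of heterozygote counts under HWE can be enumerated directly, which is the kind of exact P-value computation the abstract contrasts with continuous approximations. This sketch follows the standard exact-test construction (summing the probabilities of all heterozygote counts no more likely than the observed one); it is not necessarily the authors' implementation:

```python
from math import comb

def hwe_exact_pvalue(n_ab, n_a, n):
    """Exact HWE test for n individuals with n_a copies of one allele
    and n_ab observed heterozygotes (n_ab must be a valid count)."""
    def weight(h):
        n_aa = (n_a - h) // 2
        # unnormalized P(h | n, n_a) ∝ 2^h * n! / (n_aa! h! n_bb!)
        return 2 ** h * comb(n, n_aa) * comb(n - n_aa, h)
    # heterozygote counts share the parity of n_a and fit both alleles
    support = range(n_a % 2, min(n_a, 2 * n - n_a) + 1, 2)
    weights = {h: weight(h) for h in support}
    total = sum(weights.values())
    w_obs = weights[n_ab]
    return sum(w for w in weights.values() if w <= w_obs) / total
```

For two individuals carrying two copies of the A allele, the only configurations are {AA, BB} (probability 1/3) and {AB, AB} (probability 2/3), so observing zero heterozygotes gives an exact P-value of 1/3.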

12.
In this work we propose the adoption of a statistical framework used in the evaluation of forensic evidence as a tool for evaluating and presenting circumstantial “evidence” of a disease outbreak from syndromic surveillance. The basic idea is to exploit the predicted distributions of reported cases to calculate the ratio of the likelihood of observing n cases given an ongoing outbreak over the likelihood of observing n cases given no outbreak. This likelihood ratio defines the Value of Evidence (V). Using Bayes’ rule, the prior odds for an ongoing outbreak are multiplied by V to obtain the posterior odds. This approach was applied to time series of the number of horses showing clinical respiratory symptoms or neurological symptoms. The separation between prior beliefs about the probability of an outbreak and the strength of evidence from syndromic surveillance offers a transparent reasoning process suitable for supporting decision makers. The value of evidence can be translated into a verbal statement, as often done in forensics, or used for the production of risk maps. Furthermore, a Bayesian approach offers seamless integration of data from syndromic surveillance with results from predictive modeling and with information from other sources such as disease introduction risk assessments.
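The evidence calculus described above reduces to a likelihood ratio plus Bayes' rule. A sketch under an assumed Poisson model for daily case counts (the distributional choice and rates are illustrative, not the paper's fitted predictive distributions):

```python
import math

def poisson_pmf(n, lam):
    return math.exp(-lam) * lam ** n / math.factorial(n)

def value_of_evidence(n, lam_outbreak, lam_baseline):
    """V = P(n cases | ongoing outbreak) / P(n cases | no outbreak)."""
    return poisson_pmf(n, lam_outbreak) / poisson_pmf(n, lam_baseline)

def posterior_odds(prior_odds, v):
    """Bayes' rule in odds form: posterior odds = prior odds * V."""
    return prior_odds * v
```

For example, observing 8 syndromic reports when the outbreak model expects 10/day and the baseline expects 2/day yields V far above 1, strengthening whatever prior odds the decision maker holds.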

13.
In this paper we describe and test a new method for characterizing the space use patterns of individual animals on the basis of successive locations of marked individuals. Existing methods either do not describe space use in probabilistic terms, e.g. the maximum distance between locations or the area of the convex hull of all locations, or they assume a priori knowledge of the probabilistic shape of each individual's use pattern, e.g. bivariate or circular normal distributions. We develop a method for calculating a probability of location distribution for an average individual member of a population that requires no assumptions about the shape of the distribution (we call this distribution the population utilization distribution or PUD). Using nine different sets of location data, we demonstrate that these distributions accurately characterize the space use patterns of the populations from which they were derived. The assumption of normality is found to result in a consistent and significant overestimate of the area of use. We then describe a function which relates probability of location to area (termed the MAP index) which has a number of advantages over existing size indices. Finally, we show how any quantities such as the MAP index derived from our average distributions can be subjected to standard statistical tests of significance.
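A minimal grid-based version of a probability-of-location distribution, together with the probability-to-area mapping that underlies a MAP-style index, can be sketched as follows; the gridding and fixed cell size are illustrative simplifications of the paper's non-parametric method:

```python
from collections import Counter

def utilization_distribution(locations, cell=1.0):
    """Grid-based probability-of-location distribution from (x, y) fixes."""
    counts = Counter((int(x // cell), int(y // cell)) for x, y in locations)
    n = len(locations)
    return {c: k / n for c, k in counts.items()}

def area_for_probability(ud, p, cell=1.0):
    """Smallest area whose cells contain total probability >= p,
    filling highest-probability cells first (a probability-to-area curve)."""
    acc, n_cells = 0.0, 0
    for q in sorted(ud.values(), reverse=True):
        if acc >= p:
            break
        acc += q
        n_cells += 1
    return n_cells * cell * cell
```

Because high-use cells are counted first, this area grows slowly with p for concentrated use patterns, which is exactly the property a size index built on the curve exploits.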

14.
The large conductance voltage- and Ca2+-activated K+ channels from the inner mitochondrial membrane (mitoBK) are modulated by a number of factors. Among them, flavanones, including naringenin (Nar), have emerged as a pharmacologically promising group of mitoBK channel regulators. It is well known that in the presence of Nar the open state probability (pop) of mitoBK channels increases significantly. Nevertheless, the molecular mechanism of the mitoBK-Nar interaction remains unresolved. It is also not known whether the effects of naringenin administration on conformational dynamics resemble those exerted by other channel-activating stimuli. To answer this question, we examine whether dwell-time series of mitoBK channels obtained at different voltages and Nar concentrations (chosen to reach comparable pops) are discernible by means of artificial intelligence methods, including k-NN and shapelet learning. The obtained results suggest that the structural complexity of the gating dynamics is shaped both by the interaction of the channel gate with the voltage sensor (VSD) and by the Nar-binding site. For a majority of the data one can observe stimulus-specific patterns of channel gating. The shapelet algorithm achieves better prediction accuracy in most cases, probably because it takes into account the complexity of local features of a given signal. About 30% of the analyzed time series do not differ sufficiently to be unambiguously distinguished from each other, which can be interpreted in terms of common features of mitoBK channel gating regardless of the type of activating stimulus. There also exist long-range mutual interactions between the VSD and the Nar-coordination site that are responsible for the higher levels of Nar activation (Δpop) at deeply depolarized membranes. These intra-sensor interactions are anticipated to have an allosteric nature.

15.
Many marine ecosystems have undergone ‘regime shifts’, i.e. abrupt reorganizations across trophic levels. Establishing whether these constitute shifts between alternative stable states is of key importance for the prospects of ecosystem recovery and for management. We show how mechanisms underlying alternative stable states caused by predator–prey interactions can be revealed in field data, using analyses guided by theory on size-structured community dynamics. This is done by combining data on individual performance (such as growth and fecundity) with information on population size and prey availability. We use Atlantic cod (Gadus morhua) and their prey in the Baltic Sea as an example to discuss and distinguish two types of mechanisms, ‘cultivation-depensation’ and ‘overcompensation’, that can cause alternative stable states preventing the recovery of overexploited piscivorous fish populations. Importantly, the type of mechanism can be inferred already from changes in the predators’ body growth in different life stages. Our approach can thus be readily applied to monitored stocks of piscivorous fish species, for which this information often can be assembled. Using this tool can help resolve the causes of catastrophic collapses in marine predator–prey systems and guide fisheries managers on how to successfully restore collapsed piscivorous fish stocks.

16.
For community ecologists, “neutral or not?” is a fundamental question, and thus, rejecting neutrality is an important first step before investigating the deterministic processes underlying community dynamics. Hubbell's neutral model is an important contribution to the exploration of community dynamics, both technically and philosophically. However, the neutrality tests for this model are limited by a lack of statistical power, partly because the zero-sum assumption of the model is unrealistic. In this study, we developed a neutrality test for local communities that implements non-zero-sum community dynamics and determines the number of new species (Nsp) between observations. For the non-zero-sum neutrality test, the model distributed the expected Nsp, as calculated by extensive simulations, which allowed us to investigate the neutrality of the observed community by comparing the observed Nsp with distributions of the expected Nsp derived from the simulations. For this comparison, we developed a new “non-zero-sum Nsp test,” which we validated by running multiple neutral simulations using different parameter settings. We found that the non-zero-sum Nsp test rejected neutrality at a near-significance level, which justified the validity of our approach. For an empirical test, the non-zero-sum Nsp test was applied to real tropical tree communities in Panama and Malaysia. The non-zero-sum Nsp test rejected neutrality in both communities when the observation interval was long and Nsp was large. Hence, the non-zero-sum Nsp test is an effective way to examine neutrality and has reasonable statistical power to reject the neutral model, especially when the observed Nsp is large. This unique and simple approach is statistically powerful, even though it only employs two temporal sequences of community data. Thus, this test can be easily applied to existing datasets. In addition, application of the test will provide significant benefits for detecting changing biodiversity under climate change and anthropogenic disturbance.
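Comparing an observed new-species count against its simulated neutral distribution amounts to a one-sided Monte Carlo P-value. A minimal sketch of that comparison step (the direction of extremeness and the standard +1 correction are conventional choices, not necessarily the authors' exact test statistic):

```python
def monte_carlo_pvalue(observed, simulated):
    """One-sided Monte Carlo P-value: probability, under the neutral
    simulations, of a count at least as large as the observed one,
    with the standard (x + 1) / (n + 1) correction."""
    exceed = sum(1 for s in simulated if s >= observed)
    return (exceed + 1) / (len(simulated) + 1)
```

With an observed count larger than every neutral simulation, the P-value approaches 1/(n + 1), so power grows with the number of simulations.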

17.
A central question of marine ecology is, how far do larvae disperse? Coupled biophysical models predict that the probability of successful dispersal declines as a function of distance between populations. Estimates of genetic isolation-by-distance and self-recruitment provide indirect support for this prediction. Here, we conduct the first direct test of this prediction, using data from the well-studied system of clown anemonefish (Amphiprion percula) at Kimbe Island, in Papua New Guinea. Amphiprion percula live in small breeding groups that inhabit sea anemones. These groups can be thought of as populations within a metapopulation. We use the x- and y-coordinates of each anemone to determine the expected distribution of dispersal distances (the distribution of distances between each and every population in the metapopulation). We use parentage analyses to trace recruits back to parents and determine the observed distribution of dispersal distances. Then, we employ a logistic model to (i) compare the observed and expected dispersal distance distributions and (ii) determine the relationship between the probability of successful dispersal and the distance between populations. The observed and expected dispersal distance distributions are significantly different (p < 0.0001). Remarkably, the probability of successful dispersal between populations decreases fivefold over 1 km. This study provides a framework for quantitative investigations of larval dispersal that can be applied to other species. Further, the approach facilitates testing biological and physical hypotheses for the factors influencing larval dispersal in unison, which will advance our understanding of marine population connectivity.
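The logistic model relating dispersal success to distance can be sketched with a small maximum-likelihood fit by gradient ascent; the synthetic data and learning settings below are illustrative, not the study's:

```python
import math

def fit_logistic(xs, ys, lr=0.1, steps=5000):
    """Fit P(success | x) = 1 / (1 + exp(-(a + b*x))) by gradient
    ascent on the Bernoulli log-likelihood."""
    a, b = 0.0, 0.0
    for _ in range(steps):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            ga += y - p            # d(loglik)/da
            gb += (y - p) * x      # d(loglik)/db
        a += lr * ga / len(xs)
        b += lr * gb / len(xs)
    return a, b
```

On synthetic data where dispersal succeeds only at short distances, the fitted slope b comes out negative, reproducing the qualitative decline with distance reported above.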

18.
The dynamic behaviour of epithelial cell sheets plays a central role during development, growth, disease and wound healing. These processes occur as a result of cell adhesion, migration, division, differentiation and death, and involve multiple processes acting at the cellular and molecular level. Computational models offer a useful means by which to investigate and test hypotheses about these processes, and have played a key role in the study of cell–cell interactions. However, the necessarily complex nature of such models means that it is difficult to make accurate comparison between different models, since it is often impossible to distinguish between differences in behaviour that are due to the underlying model assumptions, and those due to differences in the in silico implementation of the model. In this work, an approach is described for the implementation of vertex dynamics models, a discrete approach that represents each cell by a polygon (or polyhedron) whose vertices may move in response to forces. The implementation is undertaken in a consistent manner within a single open source computational framework, Chaste, which comprises fully tested, industrial-grade software that has been developed using an agile approach. This framework allows one to easily change assumptions regarding force generation and cell rearrangement processes within these models. The versatility and generality of this framework is illustrated using a number of biological examples. In each case we provide full details of all technical aspects of our model implementations, and in some cases provide extensions to make the models more generally applicable.
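In vertex dynamics models, the forces moving vertices derive from an energy of the cell shape. A minimal sketch for one common term, area elasticity E = (K_A/2)(A − A0)², using the shoelace-formula gradient (a standard textbook construction, not necessarily Chaste's exact formulation):

```python
def polygon_area(verts):
    """Signed area via the shoelace formula (counter-clockwise positive)."""
    n = len(verts)
    return 0.5 * sum(verts[i][0] * verts[(i + 1) % n][1]
                     - verts[(i + 1) % n][0] * verts[i][1] for i in range(n))

def area_forces(verts, a0, ka=1.0):
    """Force on each vertex from the area-elasticity energy
    E = (ka/2) * (A - a0)^2, i.e. F_i = -ka * (A - a0) * dA/dr_i."""
    n = len(verts)
    a = polygon_area(verts)
    forces = []
    for i in range(n):
        xp, yp = verts[(i - 1) % n]
        xn, yn = verts[(i + 1) % n]
        # shoelace gradient: dA/dx_i = (y_next - y_prev)/2,
        #                    dA/dy_i = (x_prev - x_next)/2
        dadx, dady = 0.5 * (yn - yp), 0.5 * (xp - xn)
        forces.append((-ka * (a - a0) * dadx, -ka * (a - a0) * dady))
    return forces
```

A cell at its target area feels no area force; setting a0 above the current area produces outward-pointing forces that expand the polygon, which is the basic restoring behaviour a vertex model time-steps under.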

19.
High-throughput identification of peptides in databases from tandem mass spectrometry data is a key technique in modern proteomics. Common approaches to interpreting large-scale peptide identification results are based on the statistical analysis of average score distributions, which are constructed from the set of best scores produced by large collections of MS/MS spectra using search engines such as SEQUEST. Other approaches calculate individual peptide identification probabilities on the basis of theoretical models or from single-spectrum score distributions constructed from the set of scores produced by each MS/MS spectrum. In this work, we study the mathematical properties of average SEQUEST score distributions by introducing the concept of spectrum quality and expressing these average distributions as compositions of single-spectrum distributions. We predict, and demonstrate in practice, that average score distributions are dominated by the quality distribution in the spectra collection, except in the low-probability region, where it is possible to predict the dependence of average probability on database size. Our analysis leads to a novel indicator, the probability ratio, which optimally takes into account the statistical information provided by the first and second best scores. The probability ratio is a non-parametric and robust indicator that makes spectra classification according to parameters such as charge state unnecessary, and allows a peptide identification performance, on the basis of false discovery rates, that is better than that obtained by other empirical statistical approaches. The probability ratio also compares favorably with statistical probability indicators obtained by the construction of single-spectrum SEQUEST score distributions. These results make the robustness, conceptual simplicity, and ease of automation of the probability ratio algorithm a very attractive alternative for determining peptide identification confidences and error rates in high-throughput experiments.

20.

Copyright©北京勤云科技发展有限公司  京ICP备09084417号