Similar articles
20 similar articles found
1.
From a theoretical viewpoint, nature management basically has two options to prolong metapopulation persistence: decreasing local extinction probabilities and increasing colonization probabilities. This article focuses on those options with a stochastic, single-species metapopulation model. We found that for most combinations of local extinction probabilities and colonization probabilities, decreasing the former increases metapopulation extinction time more than does increasing the latter by the same amount. Only for relatively low colonization probabilities is an effort to increase these probabilities more beneficial, but even then, decreasing extinction probabilities does not seem much less effective. Furthermore, we found the following rules of thumb. First, if one focuses on extinction, one should preferably decrease the lowest local extinction probability. Only if the extinction probabilities are (almost) equal should one prioritize decreases in the local extinction probability of the patch with the best direct connections to and from other patches. Second, if one focuses on colonization, one should preferably increase the colonization probability between the patches with the lowest local extinction probability. Only if the local extinction probabilities are (almost) equal should one instead prioritize increases in the highest colonization probability (unless extinction probabilities and colonization probabilities are very low). The rules of thumb have an important common denominator: the local extinction process has a greater bearing on metapopulation extinction time than colonization.
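The comparison described in this abstract can be reproduced qualitatively with a simple stochastic patch-occupancy simulation. The sketch below is not the authors' model: the patch count, the extinction and colonization probabilities, and the size of the management adjustment (0.05) are illustrative assumptions, and extinction times are censored at t_max. Comparing base, lower_e, and higher_c indicates which intervention buys more persistence for this particular parameter set.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_extinction_time(e, c, n_runs=2000, t_max=5000):
    """Mean time until all patches are empty in a stochastic patch-occupancy model.
    e: local extinction probability per patch, shape (n,)
    c: colonization probability c[j, i] from patch j to patch i, shape (n, n)."""
    n = len(e)
    times = np.empty(n_runs)
    for r in range(n_runs):
        occ = np.ones(n, dtype=bool)                  # start with every patch occupied
        for t in range(1, t_max + 1):
            occ = occ & (rng.random(n) >= e)          # local extinctions
            # colonization pressure on each patch from the patches still occupied
            p_col = 1.0 - np.prod(np.where(occ[:, None], 1.0 - c, 1.0), axis=0)
            occ = occ | ((rng.random(n) < p_col) & ~occ)
            if not occ.any():
                break
        times[r] = t                                  # censored at t_max if still extant
    return times.mean()

# Illustrative 3-patch system; none of these values come from the paper
e = np.array([0.2, 0.3, 0.4])
c = np.full((3, 3), 0.1); np.fill_diagonal(c, 0.0)

base = mean_extinction_time(e, c)
lower_e = mean_extinction_time(e - np.array([0.05, 0.0, 0.0]), c)   # reduce one extinction prob.
c_plus = np.clip(c + 0.05, 0.0, 1.0); np.fill_diagonal(c_plus, 0.0)
higher_c = mean_extinction_time(e, c_plus)                          # raise colonization probs.
print(base, lower_e, higher_c)
```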

2.
This paper aims to identify net and partial-crude probabilities in the competing-risk life table context, by using probabilistic approaches. Five types of lifelength random variables are defined to formulate these nonidentifiable probabilities. General expressions for net and partial-crude probabilities are first derived under independent risks assumptions. Two sets of explicit formulas for estimating the net and partial-crude probabilities are then derived in terms of the identifiable overall and crude probabilities by making the additional assumption of piecewise uniform distribution of the lifelength random variables. A study of the degree to which nonidentifiability can affect the net and partial-crude probabilities in a variety of situations is developed. An example from cross-sectional studies is employed to illustrate the methodology developed.

3.
A growth model for topological trees is formulated as a generalization of the terminal and segmental growth model. For this parameterized growth model, expressions are derived for the partition probabilities (probabilities of subtree pairs of certain degrees). The probabilities of complete trees are easily derived from these partition probabilities.

4.
Susko E. Systematic Biology, 2008, 57(4): 602-612
Several authors have recently noted that when data are generated from a star topology, posterior probabilities can often be very large, even with arbitrarily large sequence lengths. This is counter to intuition, which suggests convergence to the limit of equal probability for each topology. Here the limiting distributions of bootstrap support and posterior probabilities are obtained for a four-taxon star tree. Theoretical results are given, providing confirmation that this counterintuitive phenomenon holds for both posterior probabilities and bootstrap support. For large samples the limiting results for posterior probabilities are the same regardless of the prior. With equal-length terminal edges, the limiting distribution is similar but not the same across different choices for the lengths of the edges. In contrast to previous results, the case of unequal lengths of terminal edges is considered. With two long edges, the posterior probability of the tree with long edges together tends to be much larger. Using the neighbor-joining algorithm, with equal edge lengths, the distribution of bootstrap support tends to be qualitatively comparable to posterior probabilities. As with posterior probabilities, when two of the edges are long, bootstrap support for the tree with long branches together tends to be large. The bias is less pronounced, however, as the distribution of bootstrap support gets close to uniform for this tree, whereas posterior probabilities are much more likely to be large. Our findings for maximum likelihood estimation are based entirely on simulation and in contrast suggest that bootstrap support tends to be fairly constant across edge-length choices.

5.
In a studbook, MULT is used in a parent ID field when the actual parent is unknown but the parent is known to be one of a set of possible parents. Probabilities of being the actual parent are assigned to each possible parent in the MULT group, and that information is used in the calculation of mean kinships (MKs). Parental probabilities are typically assigned based on the species biology and/or what was known about how the animals were being managed at the time of conception. If there is no additional information, the default is to assign each possible parent the same probability. What has not been considered to date is the impact of different MKs among the group of possible parents. Methods are developed that combine parental probabilities and MKs into parental weights. These weights replace parental probabilities in the analysis. One important conclusion is that even when the MKs of possible parents are quite different, the difference between the parental weights and probabilities is typically less than 30%. This highlights the importance of correct estimation of parental probabilities, whenever possible, instead of reliance on a default.

6.
Determining the error rate for peptide and protein identification accurately and reliably is necessary to enable evaluation and cross-comparisons of high-throughput proteomics experiments. Currently, peptide identification is based either on preset scoring thresholds or on probabilistic models trained on datasets that are often dissimilar to experimental results. The false discovery rates (FDR) and peptide identification probabilities for these preset thresholds or models often vary greatly across different experimental treatments, organisms, or instruments used in specific experiments. To overcome these difficulties, randomized databases have been used to estimate the FDR. However, the cumulative FDR may include low probability identifications when there are a large number of peptide identifications and exclude high probability identifications when there are few. To overcome this logical inconsistency, this study expands the use of randomized databases to generate experiment-specific estimates of peptide identification probabilities. These experiment-specific probabilities are generated by logistic and Loess regression models of the peptide scores obtained from original and reshuffled database matches. These experiment-specific probabilities are shown to closely approximate "true" probabilities based on known standard protein mixtures across different experiments. Probabilities generated by the earlier Peptide_Prophet and more recent LIPS models are shown to differ significantly from this study's experiment-specific probabilities, especially for unknown samples. The experiment-specific probabilities reliably estimate the accuracy of peptide identifications and overcome potential logical inconsistencies of the cumulative FDR. This estimation method is demonstrated using a Sequest database search, LIPS model, and a reshuffled database. However, this approach is generally applicable to any search algorithm, peptide scoring, and statistical model when using a randomized database.
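As a rough illustration of the idea of experiment-specific calibration, the sketch below fits a logistic regression to search scores labelled by whether they came from the original (target) or the reshuffled (decoy) database, and uses the fitted curve as a score-to-probability map. The scores are simulated placeholders, scikit-learn stands in for the statistical machinery, and a target-versus-decoy label is only a proxy for correct-versus-incorrect; the paper additionally uses Loess regression and Sequest scores.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Simulated search scores: matches against the original (target) database contain a
# mixture of correct and chance identifications; reshuffled (decoy) matches are chance only.
target_scores = np.concatenate([rng.normal(3.5, 0.8, 400),    # mostly correct matches
                                rng.normal(1.5, 0.6, 200)])   # plus chance matches
decoy_scores = rng.normal(1.5, 0.6, 600)

scores = np.concatenate([target_scores, decoy_scores]).reshape(-1, 1)
labels = np.concatenate([np.ones(len(target_scores)), np.zeros(len(decoy_scores))])

# Logistic regression of "match came from the target search" on the score; the fitted
# curve serves as an experiment-specific score-to-probability calibration.
model = LogisticRegression().fit(scores, labels)

for s in (1.0, 2.0, 3.0, 4.0):
    print(s, round(model.predict_proba([[s]])[0, 1], 3))
```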

7.
In this paper a theory of a class of restricted transition probabilities is developed and applied to a problem in the dynamics of biological populations under the assumption that the underlying stochastic process is a continuous time parameter Markov chain with stationary transition probabilities. The paper is divided into three parts. Part one contains sufficient background from the theory of Markov processes to define restricted transition probabilities in a rigorous manner. In addition, some basic concepts in the theory of stochastic processes are interpreted from the biological point of view. Part two is concerned with the problem of finding representations for restricted transition probabilities. Finally, in part three the theory of restricted transition probabilities is applied to the problem of finding and analyzing some properties of the distribution function of the maximum size attained by the population in a finite time interval for a rather wide class of Markov processes. Some other applications of restricted transition probabilities to other problems in the dynamics of biological populations are also suggested. These applications will be discussed more fully in a companion paper. The research reported in this paper was supported by the United States Atomic Energy Commission, Division of Biology and Medicine Project AT(45-1)-1729.
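The quantity analysed in part three, the distribution of the maximum population size attained in a finite time interval, can also be approximated by brute-force simulation for a concrete chain. The sketch below uses a linear birth-death process with illustrative rates and the Gillespie algorithm; it is a numerical stand-in for the analytical treatment via restricted transition probabilities.

```python
import numpy as np

rng = np.random.default_rng(2)

def max_size_in_window(n0, birth, death, t_end):
    """Simulate a linear birth-death process (rates birth*n and death*n) with the
    Gillespie algorithm; return the maximum population size attained by time t_end."""
    n, t, n_max = n0, 0.0, n0
    while n > 0:
        t += rng.exponential(1.0 / ((birth + death) * n))
        if t > t_end:
            break
        n += 1 if rng.random() < birth / (birth + death) else -1
        n_max = max(n_max, n)
    return n_max

maxima = np.array([max_size_in_window(n0=10, birth=0.9, death=1.0, t_end=5.0)
                   for _ in range(5000)])
# Empirical distribution function of the maximum size over the interval [0, t_end]
for m in (10, 15, 20, 30):
    print(m, np.mean(maxima <= m))
```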

8.
9.
Familiar quantitative reserve-selection techniques are tailored to simple decision problems, where the representation of species is sought at minimum cost. However, conservationists have begun to ask whether representing species in reserve networks is sufficient to avoid local extinctions within selected areas. An attractive, but previously untested idea is to model current species' probabilities of occurrence as an estimate of local persistence in the near future. Using distribution data for passerine birds in Great Britain, we show that (i) species' probabilities of occurrence are negatively related to local probabilities of extinction, at least when a particular 20-year period is considered, and (ii) local extinctions can be reduced if areas are selected to maximize current species' probabilities of occurrence. We suggest that more extinctions could be avoided if even a simple treatment of persistence were to be incorporated within reserve selection methods.
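A minimal version of the selection idea is a greedy algorithm that, given modelled probabilities of occurrence, repeatedly adds the area that most increases the summed probability of each species occurring in at least one selected area. The data, objective, and greedy rule below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

# p_occ[i, j]: modelled probability that species j currently occurs in area i
n_areas, n_species, budget = 50, 12, 5
p_occ = rng.beta(2, 5, size=(n_areas, n_species))

def coverage(selected):
    """Per-species probability of occurring in at least one selected area."""
    if not selected:
        return np.zeros(n_species)
    return 1.0 - np.prod(1.0 - p_occ[selected, :], axis=0)

selected = []
for _ in range(budget):
    candidates = [i for i in range(n_areas) if i not in selected]
    gains = [coverage(selected + [i]).sum() for i in candidates]
    selected.append(candidates[int(np.argmax(gains))])

print(selected)
print(coverage(selected).round(2))   # higher values suggest better prospects of local persistence
```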

10.
In this study, we use the random principle to analyse the distributions of amino acids and amino acid pairs in human tumour necrosis factor precursor (TNF-α) and its eight mutations, to compare the measured distribution probability with the theoretical distribution probability and to rank the measured distribution probability against the theoretical distribution probability. In this way, we can suggest that distributions with a high random rank are unlikely to have been deliberately evolved and conserved, whereas those with a low random rank are likely to have been deliberately evolved and conserved in human TNF-α. An increased distribution probability in a mutation means probabilistically that the mutation is more likely to occur spontaneously, whereas a decreased distribution probability in a mutation means probabilistically that the mutation is less likely to occur spontaneously and perhaps is more related to a certain cause. The results, for example, show that the distributions of 30% of the amino acids are identical to their probabilistic simplest distributions, and the distributions of some of the remaining amino acids are very close to their probabilistic simplest distributions. With respect to probabilities of distributions of amino acids in mutations, the results show that mutations lead to an increase in eight probabilities, which are thus more likely to occur. Eight probabilities decrease and are thus less likely to occur. With respect to the random ranks against the theoretical probabilities of distributions of amino acids, the results show that mutations lead to an increase in seven and a decrease in seven probabilities, with two probabilities unchanged.

11.
Background: Joint alignment and secondary structure prediction of two RNA sequences can significantly improve the accuracy of the structural predictions. Methods addressing this problem, however, are forced to employ constraints that reduce computation by restricting the alignments and/or structures (i.e. folds) that are permissible. In this paper, a new methodology is presented for the purpose of establishing alignment constraints based on nucleotide alignment and insertion posterior probabilities. Using a hidden Markov model, posterior probabilities of alignment and insertion are computed for all possible pairings of nucleotide positions from the two sequences. These alignment and insertion posterior probabilities are additively combined to obtain probabilities of co-incidence for nucleotide position pairs. A suitable alignment constraint is obtained by thresholding the co-incidence probabilities. The constraint is integrated with Dynalign, a free energy minimization algorithm for joint alignment and secondary structure prediction. The resulting method is benchmarked against the previous version of Dynalign and against other programs for pairwise RNA structure prediction.
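The thresholding step can be sketched as follows. The posteriors here are random placeholders rather than output of the pairwise hidden Markov model, and the particular additive combination of alignment and insertion terms is an assumption standing in for the paper's exact definition of co-incidence probability.

```python
import numpy as np

rng = np.random.default_rng(4)
len1, len2 = 60, 65       # lengths of the two RNA sequences (illustrative)

# Placeholder posteriors; in the method these come from a pairwise hidden Markov model.
p_align = rng.dirichlet(np.ones(len2), size=len1)   # P(position i of seq 1 aligns to j of seq 2)
p_ins = rng.random((len1, len2)) * 0.05             # insertion-related posterior terms

# Additive combination into co-incidence probabilities (the exact combination in the
# paper may differ; only the thresholding step is being illustrated here).
p_coincidence = p_align + p_ins

threshold = 0.01
allowed = p_coincidence >= threshold                # boolean alignment constraint for Dynalign
print(allowed.sum(), "of", len1 * len2, "position pairs retained")
```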

12.
Markov chain Monte Carlo (MCMC) methods have been proposed to overcome computational problems in linkage and segregation analyses. This approach involves sampling genotypes at the marker and trait loci. Scalar-Gibbs is easy to implement, and it is widely used in genetics. However, the Markov chain that corresponds to scalar-Gibbs may not be irreducible when the marker locus has more than two alleles, and even when the chain is irreducible, mixing has been observed to be slow. These problems do not arise if the genotypes are sampled jointly from the entire pedigree. This paper proposes a method to jointly sample genotypes. The method combines the Elston-Stewart algorithm and iterative peeling, and is called the ESIP sampler. For a hypothetical pedigree, genotype probabilities are estimated from samples obtained using ESIP and also scalar-Gibbs. Approximate probabilities were also obtained by iterative peeling. Comparisons of these with exact genotypic probabilities obtained by the Elston-Stewart algorithm showed that ESIP and iterative peeling yielded genotypic probabilities that were very close to the exact values. Nevertheless, estimated probabilities from scalar-Gibbs with a chain of length 235 000, including a burn-in of 200 000 steps, were less accurate than probabilities estimated using ESIP with a chain of length 10 000 and a burn-in of 5 000 steps. The effective chain size (ECS) was estimated from the last 25 000 elements of the chain of length 125 000. For one of the ESIP samplers, the ECS ranged from 21 579 to 22 741, while for the scalar-Gibbs sampler, the ECS ranged from 64 to 671. Genotype probabilities were also estimated for a large real pedigree consisting of 3 223 individuals. For this pedigree, it is not feasible to obtain exact genotype probabilities by the Elston-Stewart algorithm. ESIP and iterative peeling yielded very similar results. However, results from scalar-Gibbs were less accurate.
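The effective chain size comparison rests on estimating an effective sample size from autocorrelated MCMC output. A standard estimator (not necessarily the one used in the paper) truncates the sum of positive-lag autocorrelations, as sketched below on a synthetic, strongly autocorrelated chain.

```python
import numpy as np

def effective_sample_size(chain, max_lag=1000):
    """ESS = N / (1 + 2 * sum of autocorrelations), truncating the sum at the first
    non-positive autocorrelation; a standard estimator for MCMC output."""
    x = np.asarray(chain, dtype=float)
    x = x - x.mean()
    n = len(x)
    acov = np.correlate(x, x, mode="full")[n - 1:] / n
    rho = acov / acov[0]
    s = 0.0
    for k in range(1, min(max_lag, n)):
        if rho[k] <= 0:
            break
        s += rho[k]
    return n / (1.0 + 2.0 * s)

# A strongly autocorrelated AR(1) chain has an ESS far below its nominal length,
# much like the scalar-Gibbs chains described above.
rng = np.random.default_rng(5)
ar1 = np.zeros(25_000)
for t in range(1, len(ar1)):
    ar1[t] = 0.95 * ar1[t - 1] + rng.normal()
print(len(ar1), round(effective_sample_size(ar1)))
```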

13.
Multilocus genotype probabilities, estimated using the assumption of independent association of alleles within and across loci, are subject to sampling fluctuation, since allele frequencies used in such computations are derived from samples drawn from a population. We derive exact sampling variances of estimated genotype probabilities and provide simple approximations of sampling variances. Computer simulations conducted using real DNA typing data indicate that, while the sampling distribution of estimated genotype probabilities is not symmetric around the point estimate, the confidence interval of estimated (single-locus or multilocus) genotype probabilities can be obtained from the sampling distribution of a logarithmic transformation of the estimated values. This, in turn, allows an examination of heterogeneity of estimators derived from data on different reference populations. Applications of this theory to DNA typing data at VNTR loci suggest that use of different reference population data may yield significantly different estimates. However, significant differences generally occur with rare (less than 1 in 40,000) genotype probabilities. Conservative estimates of five-locus DNA profile probabilities are always less than 1 in 1 million in an individual from the United States, irrespective of the racial/ethnic origin.
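A simplified version of the log-scale interval construction, for a single two-allele locus under Hardy-Weinberg proportions, is sketched below. The delta-method variance approximation and the illustrative allele frequency are assumptions; the paper derives exact sampling variances and handles multilocus profiles.

```python
import numpy as np

def het_prob_ci(p_hat, n_individuals, z=1.96):
    """Heterozygote probability 2p(1-p) at a two-allele locus under Hardy-Weinberg,
    with an approximate CI built on the log scale via the delta method (the paper
    derives exact sampling variances; this is only an illustration)."""
    two_n = 2 * n_individuals                      # number of sampled alleles
    var_p = p_hat * (1 - p_hat) / two_n            # binomial variance of the allele frequency
    prob = 2 * p_hat * (1 - p_hat)
    grad = 1.0 / p_hat - 1.0 / (1.0 - p_hat)       # d/dp log(2p(1-p))
    se_log = np.sqrt(grad**2 * var_p)
    return prob, (np.exp(np.log(prob) - z * se_log), np.exp(np.log(prob) + z * se_log))

print(het_prob_ci(p_hat=0.1, n_individuals=200))
```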

14.
Estimates of wildlife population sizes are frequently constructed by combining counts of observed animals from a stratified survey of aerial sampling units with an estimated probability of detecting animals. Unlike traditional stratified survey designs, stratum-specific estimates of population size will be correlated if a common detection model is used to adjust counts for undetected animals in all strata. We illustrate this concept in the context of aerial surveys, considering 2 cases: 1) a single-detection parameter is estimated under the assumption of constant detection probabilities, and 2) a logistic-regression model is used to estimate heterogeneous detection probabilities. Naïve estimates of variance formed by summing stratum-specific estimates of variance may result in significant bias, particularly if there are a large number of strata, if detection probabilities are small, or if estimates of detection probabilities are imprecise. (JOURNAL OF WILDLIFE MANAGEMENT 72(3):837–844; 2008)
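The covariance effect can be seen in a small Monte Carlo experiment: every stratum is corrected with the same estimated detection probability, so summing per-stratum variances misses the positive covariances and understates the variance of the total. All counts, the detection probability, and the delta-method variance formulas below are illustrative assumptions, not the authors' derivation.

```python
import numpy as np

rng = np.random.default_rng(6)

true_counts = np.array([400, 250, 150, 100])   # animals present per stratum (illustrative)
p_detect = 0.6                                 # common true detection probability
n_trials = 200                                 # sightability trials used to estimate detection
n_sims = 20_000

totals, naive_vars = [], []
for _ in range(n_sims):
    p_hat = rng.binomial(n_trials, p_detect) / n_trials   # one shared detection estimate
    counts = rng.binomial(true_counts, p_detect)          # observed animals per stratum
    totals.append((counts / p_hat).sum())                 # detection-corrected total
    # Per-stratum delta-method variances that include the uncertainty in p_hat but are
    # simply summed, ignoring the covariances induced by sharing p_hat across strata.
    var_p = p_hat * (1 - p_hat) / n_trials
    var_h = counts * (1 - p_hat) / p_hat**2 + counts**2 * var_p / p_hat**4
    naive_vars.append(var_h.sum())

print("empirical variance of the total:", round(np.var(totals)))
print("mean naive (summed) variance:   ", round(np.mean(naive_vars)))
```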

15.
There are many effective ways to represent a minimum free energy RNA secondary structure that make it easy to locate its helices and loops. It is a greater challenge to visualize the thermal average probabilities of all folds in a partition function sum; dot plot representations are often puzzling. Therefore, we introduce the RNAbows visualization tool for RNA base pair probabilities. RNAbows represent base pair probabilities with line thickness and shading, yielding intuitive diagrams. RNAbows aid in disentangling incompatible structures, allow comparisons between clusters of folds, highlight differences between wild-type and mutant folds, and are also rather beautiful.
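The core visual idea, base-pair arcs whose width and darkness scale with pair probability, can be sketched with matplotlib; the base pairs and probabilities below are made up, and this is not the RNAbows implementation.

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Arc

# (i, j, probability) triples for illustrative base pairs of a 30-nt sequence
pairs = [(1, 28, 0.95), (2, 27, 0.90), (3, 26, 0.85), (8, 20, 0.40), (9, 19, 0.35)]

fig, ax = plt.subplots(figsize=(8, 3))
ax.hlines(0, 0, 30, color="black")
for i, j, p in pairs:
    # arc spanning positions i..j; line width and opacity both scale with pair probability
    ax.add_patch(Arc(((i + j) / 2.0, 0), j - i, j - i, theta1=0, theta2=180,
                     lw=4 * p, alpha=p, color="black"))
ax.set_xlim(0, 30); ax.set_ylim(0, 16)
ax.set_yticks([]); ax.set_xlabel("sequence position")
plt.show()
```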

16.
Consider case control analysis with a dichotomous exposure variable that is subject to misclassification. If the classification probabilities are known, then methods are available to adjust odds-ratio estimates in light of the misclassification. We study the realistic scenario where reasonable guesses, but not exact values, are available for the classification probabilities. If the analysis proceeds by simply treating the guesses as exact, then even small discrepancies between the guesses and the actual probabilities can seriously degrade odds-ratio estimates. We show that this problem is mitigated by a Bayes analysis that incorporates uncertainty about the classification probabilities as prior information.
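The sensitivity of the adjustment to the guessed classification probabilities can be illustrated with the standard matrix-method correction and a Monte Carlo draw of sensitivity and specificity from priors. This is a probabilistic sensitivity analysis in the spirit of the paper's argument, not its Bayes analysis; the counts and priors are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Observed 2x2 table with possibly misclassified exposure (counts are illustrative)
a_obs, n_cases = 120, 300       # exposed cases, total cases
b_obs, n_controls = 90, 300     # exposed controls, total controls

def corrected_or(a, n1, b, n0, se, sp):
    """Matrix-method correction for nondifferential exposure misclassification."""
    a_true = (a - (1 - sp) * n1) / (se + sp - 1)
    b_true = (b - (1 - sp) * n0) / (se + sp - 1)
    return (a_true * (n0 - b_true)) / ((n1 - a_true) * b_true)

# Treating guessed classification probabilities as if they were exact:
print("plug-in OR:", corrected_or(a_obs, n_cases, b_obs, n_controls, se=0.85, sp=0.90))

# Propagating uncertainty about sensitivity and specificity with illustrative priors
se = rng.beta(85, 15, 5000)     # prior centred near 0.85
sp = rng.beta(90, 10, 5000)     # prior centred near 0.90
ors = corrected_or(a_obs, n_cases, b_obs, n_controls, se, sp)
ors = ors[np.isfinite(ors) & (ors > 0)]          # discard draws giving impossible corrections
print("2.5%, 50%, 97.5% quantiles:", np.percentile(ors, [2.5, 50, 97.5]).round(2))
```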

17.
We evaluate statistical models used in two-hypothesis tests for identifying peptides from tandem mass spectrometry data. The null hypothesis H(0), that a peptide matches a spectrum by chance, requires information on the probability of by-chance matches between peptide fragments and peaks in the spectrum. Likewise, the alternate hypothesis H(A), that the spectrum is due to a particular peptide, requires probabilities that the peptide fragments would indeed be observed if it were the causative agent. We compare models for these probabilities by determining the identification rates produced by the models using an independent data set. The initial models use different probabilities depending on fragment ion type, but uniform probabilities for each ion type across all of the labile bonds along the backbone. More sophisticated models for probabilities under both H(A) and H(0) are introduced that do not assume uniform probabilities for each ion type. In addition, the performance of these models using a standard likelihood model is compared to an information theory approach derived from the likelihood model. Also, a simple but effective model for incorporating peak intensities is described. Finally, a support-vector machine is used to discriminate between correct and incorrect identifications based on multiple characteristics of the scoring functions. The results are shown to reduce the misidentification rate significantly when compared to a benchmark cross-correlation based approach.
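A toy version of the two-hypothesis score is a log-likelihood ratio over predicted fragments, with one Bernoulli match probability per ion type under H(A) and under H(0). The probabilities and ion types below are placeholders, and the real scoring models (including the peak-intensity and non-uniform bond models evaluated in the paper) are considerably richer.

```python
import math

# Per-ion-type probabilities that a predicted fragment is matched by a spectrum peak
# (placeholder values) under H(A), the peptide produced the spectrum, and H(0), chance.
P_MATCH_HA = {"b": 0.6, "y": 0.7}
P_MATCH_H0 = {"b": 0.08, "y": 0.08}

def log_likelihood_ratio(fragments):
    """fragments: (ion_type, matched) for every predicted fragment of the peptide.
    Returns log10 P(data | H_A) - log10 P(data | H_0), assuming independent
    Bernoulli matches with one probability per ion type."""
    llr = 0.0
    for ion, matched in fragments:
        pa, p0 = P_MATCH_HA[ion], P_MATCH_H0[ion]
        llr += math.log10(pa / p0) if matched else math.log10((1 - pa) / (1 - p0))
    return llr

# Example: ten predicted b/y ions, seven of them matched by peaks in the spectrum
frags = [("b", True), ("b", True), ("b", False), ("b", True), ("b", False),
         ("y", True), ("y", True), ("y", True), ("y", True), ("y", False)]
print(round(log_likelihood_ratio(frags), 2))
```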

18.
Speciation and extinction probabilities can be estimated from molecular phylogenies of extant species that are complete at the species level. Because only a fraction of published phylogenies is complete at the species level, methods have been developed to estimate speciation and extinction probabilities also from incomplete phylogenies. However, due to different estimation techniques, estimates from complete and incomplete phylogenies are difficult to compare statistically. Here I show with some examples how existing likelihood functions can be used to obtain Bayesian estimates of speciation and extinction probabilities, and how this approach is applied to both complete and incomplete phylogenies.

19.
Ring re-encounter data, in particular ring recoveries, have made a large contribution to our understanding of bird movements. However, almost every study based on ring re-encounter data has struggled with the bias caused by unequal observer distribution. Re-encounter probabilities are strongly heterogeneous in space and over time. If this heterogeneity can be measured or at least controlled for, the enormous amount of ring re-encounter data collected can be used effectively to answer many questions. Here, we review four different approaches to account for heterogeneity in observer distribution in spatial analyses of ring re-encounter data. The first approach is to measure re-encounter probability directly. We suggest that variation in ring re-encounter probability could be estimated by combining data whose re-encounter probabilities are close to one (radio or satellite telemetry) with data whose re-encounter probabilities are low (ring re-encounter data). The second approach is to measure the spatial variation in re-encounter probabilities using environmental covariates. It should be possible to identify powerful predictors for ring re-encounter probabilities. A third approach consists of the comparison of the actual observations with all possible observations using randomization techniques. We encourage combining such randomizations with ring re-encounter models that we discuss as a fourth approach. Ring re-encounter models are based on the comparison of groups with equal re-encounter probabilities. Together these four approaches could improve our understanding of bird movements considerably. We discuss their advantages and limitations and give directions for future research.

20.
This paper examines changes in the parameters of a semi-Markov model and their impact on interval transition probabilities. Computational techniques are used to illustrate that the model is sensitive enough to reflect changes in the interval transition probabilities when the parameters of the model are modified.
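As a simplified stand-in for the semi-Markov calculations, the sketch below computes interval (multi-step) transition probabilities of an ordinary discrete-time Markov chain by matrix powers and perturbs one parameter to show how the interval probabilities respond; the transition matrix and the perturbation are illustrative.

```python
import numpy as np

def interval_probs(P, n_steps):
    """n-step (interval) transition probabilities of a discrete-time Markov chain."""
    return np.linalg.matrix_power(P, n_steps)

# Illustrative 3-state transition matrix (rows sum to 1)
P = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.70, 0.20],
              [0.00, 0.05, 0.95]])

# Perturb one parameter (state 0 -> state 1) and renormalize that row
P2 = P.copy()
P2[0, 1] += 0.05
P2[0] /= P2[0].sum()

for n in (1, 5, 10):
    diff = interval_probs(P2, n) - interval_probs(P, n)
    print(f"n = {n:2d}, largest absolute change in interval probabilities: {np.abs(diff).max():.4f}")
```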
