Similar Literature
20 similar documents found.
1.
Zeh J, Poole D, Miller G, Koski W, Baraff L, Rugh D. Biometrics, 2002, 58(4): 832-840
Annual survival probability of bowhead whales, Balaena mysticetus, was estimated using both Bayesian and maximum likelihood implementations of Cormack and Jolly-Seber (JS) models for capture-recapture estimation in open populations, and reduced-parameter generalizations of these models. Aerial photographs of naturally marked bowheads collected between 1981 and 1998 provided the data. The marked whales first photographed in a particular year provided the initial 'capture' and 'release' of those whales, and photographs in subsequent years provided the 'recaptures'. The Cormack model, often called the Cormack-Jolly-Seber (CJS) model, and the program MARK were used to identify the model with a single survival probability and time-varying capture probabilities as the most appropriate for these data. When survival was constrained to be one or less, the maximum likelihood estimate computed by MARK was one, invalidating confidence interval computations based on the asymptotic standard error or profile likelihood. A Bayesian Markov chain Monte Carlo (MCMC) implementation of the model was used to produce a posterior distribution for annual survival. The corresponding reduced-parameter JS model was also fit via MCMC because it is the more appropriate of the two models for these photoidentification data. Because the CJS model ignores much of the information on capture probabilities provided by the data, its results are less precise and more sensitive to the prior distributions used than results from the JS model. With priors for annual survival and capture probabilities uniform from 0 to 1, the posterior mean for bowhead survival rate from the JS model is 0.984, and 95% of the posterior probability lies between 0.948 and 1. This high estimated survival rate is consistent with other bowhead life history data.
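As a rough illustration of the modelling approach described above, the sketch below evaluates a Cormack-Jolly-Seber likelihood with constant survival and detection probabilities on simulated capture histories and forms the posterior on a grid under Uniform(0, 1) priors. It is a minimal stand-in, not the MARK or MCMC analysis used in the paper; the number of occasions, cohort sizes, and parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- simulate capture histories under a CJS model (constant phi and p) ---
T, cohort_size = 8, 40                     # sampling occasions, new marks per occasion
phi_true, p_true = 0.95, 0.4
histories = []
for f in range(T - 1):                     # cohort first marked at occasion f
    for _ in range(cohort_size):
        h = np.zeros(T, dtype=int)
        h[f] = 1
        alive = True
        for t in range(f + 1, T):
            alive = alive and rng.random() < phi_true
            if alive and rng.random() < p_true:
                h[t] = 1
        histories.append(h)

# --- sufficient statistics for the constant-parameter CJS likelihood ---
seen = miss = 0
last_counts = np.zeros(T, dtype=int)       # number of animals last seen at each occasion
for h in histories:
    idx = np.flatnonzero(h)
    f, l = idx[0], idx[-1]
    s = h[f + 1:l + 1].sum()
    seen += s
    miss += (l - f) - s
    last_counts[l] += 1

def cjs_loglik(phi, p):
    # chi[t] = Pr(never detected after occasion t | alive at t)
    chi = np.ones(T)
    for t in range(T - 2, -1, -1):
        chi[t] = (1 - phi) + phi * (1 - p) * chi[t + 1]
    return (seen * np.log(phi * p) + miss * np.log(phi * (1 - p))
            + np.sum(last_counts * np.log(chi)))

# --- grid posterior under Uniform(0, 1) priors on phi and p ---
grid = np.linspace(0.005, 0.995, 120)
logpost = np.array([[cjs_loglik(phi, p) for p in grid] for phi in grid])
post = np.exp(logpost - logpost.max())
post /= post.sum()
phi_marginal = post.sum(axis=1)
print("posterior mean annual survival:", round(float(np.sum(grid * phi_marginal)), 3))
```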

2.

Background

In quantitative trait mapping and genomic prediction, Bayesian variable selection methods have gained popularity alongside the growth in marker data and computational resources. Although shrinkage-inducing methods are common tools in genomic prediction, rigorous decision making in mapping studies that use such models is not well established, and posterior results can be sensitive to misspecified prior assumptions because biological prior evidence is typically weak.

Methods

Here, we evaluate the impact of prior specifications in a shrinkage-based Bayesian variable selection method, presented in a previous study, that is based on a mixture of uniform priors applied to genetic marker effects. Unlike most other shrinkage approaches, the use of a mixture of uniform priors provides a coherent framework for inference based on Bayes factors. To evaluate the robustness of genetic association under varying prior specifications, Bayes factors are compared as signals of positive marker association, whereas genomic estimated breeding values are considered for genomic selection. The impact of any specific prior specification is reduced by calculating combined estimates from multiple specifications. A Gibbs sampler is used to perform Markov chain Monte Carlo (MCMC) estimation, and a generalized expectation-maximization algorithm serves as a faster alternative for maximum a posteriori point estimation. The performance of the method is evaluated using two publicly available data examples: the simulated QTLMAS XII data set and a real data set from a population of pigs.

Results

Combined estimates of Bayes factors were very successful in identifying quantitative trait loci, and the ranking of Bayes factors was fairly stable among markers with positive signals of association under varying prior assumptions, but their magnitudes varied considerably. Genomic estimated breeding values using the mixture of uniform priors compared well to other approaches for both data sets and loss of accuracy with the generalized expectation-maximization algorithm was small as compared to that with MCMC.

Conclusions

Since no error-free method to specify priors is available for complex biological phenomena, exploring a wide variety of prior specifications and combining results provides some solution to this problem. For this purpose, the mixture of uniform priors approach is especially suitable, because it comprises a wide and flexible family of distributions and computationally intensive estimation can be carried out in a reasonable amount of time.

3.
Dynamic N-mixture models have been recently developed to estimate demographic parameters of unmarked individuals while accounting for imperfect detection. We propose an application of the Dail and Madsen (2011: Biometrics, 67, 577–587) dynamic N-mixture model in a manipulative experiment using a before-after control-impact design (BACI). Specifically, we tested the hypothesis of cavity limitation of a cavity specialist species, the northern flying squirrel, using nest box supplementation on half of 56 trapping sites. Our main purpose was to evaluate the impact of an increase in cavity availability on flying squirrel population dynamics in deciduous stands in northwestern Québec with the dynamic N-mixture model. We compared abundance estimates from this recent approach with those from classic capture–mark–recapture models and generalized linear models. We compared apparent survival estimates with those from Cormack–Jolly–Seber (CJS) models. Average recruitment rate was 6 individuals per site after 4 years. Nevertheless, we found no effect of cavity supplementation on apparent survival and recruitment rates of flying squirrels. Contrary to our expectations, initial abundance was not affected by conifer basal area (food availability) and was negatively affected by snag basal area (cavity availability). Northern flying squirrel population dynamics are not influenced by cavity availability at our deciduous sites. Consequently, we suggest that this species should not be considered an indicator of old forest attributes in our study area, especially in view of apparent wide population fluctuations across years. Abundance estimates from N-mixture models were similar to those from capture–mark–recapture models, although the latter had greater precision. Generalized linear mixed models produced lower abundance estimates, but revealed the same relationship between abundance and snag basal area. Apparent survival estimates from N-mixture models were higher and less precise than those from CJS models. However, N-mixture models can be particularly useful to evaluate management effects on animal populations, especially for species that are difficult to detect in situations where individuals cannot be uniquely identified. They also allow investigating the effects of covariates at the site level, when low recapture rates would require restricting classic CMR analyses to a subset of sites with the most captures.

4.
Monitoring is an essential part of reintroduction programs, but many years of data may be needed to obtain reliable population projections. This duration can potentially be reduced by incorporating prior information on expected vital rates (survival and fecundity) when making inferences from monitoring data. The prior distributions for these parameters can be derived from data for previous reintroductions, but it is important to account for site-to-site variation. We evaluated whether such informative priors improved our ability to estimate the finite rate of increase (λ) of the North Island robin (Petroica longipes) population reintroduced to Tawharanui Regional Park, New Zealand. We assessed how precision improved with each year of postrelease data added, comparing models that used informative or uninformative priors. The population grew from about 22 to 80 individuals from 2007 to 2016, with λ estimated to be 1.23 if density dependence was included in the model and 1.13 otherwise. Under either model, 7 years of data were required before the lower 95% credible limit for λ was > 1, giving confidence that the population would persist. The informative priors did not reduce this requirement. Data-derived priors are useful before reintroduction because they allow λ to be estimated in advance. However, in the case examined here, the value of the priors was overwhelmed once site-specific monitoring data became available. The Bayesian method presented is logical for reintroduced populations. It allows prior information (used to inform prerelease decisions) to be integrated with postrelease monitoring. This makes full use of the data for ongoing management decisions. However, if the priors properly account for site-to-site variation, they may have little predictive value compared with the site-specific data. This value will depend on the degree of site-to-site variation as well as the quality of the data.

5.
Bayesian methods allow borrowing of historical information through prior distributions. The concept of prior effective sample size (prior ESS) facilitates quantification and communication of such prior information by equating it to a sample size. Prior information can arise from historical observations; thus, the traditional approach identifies the ESS with such a historical sample size. However, this measure is independent of newly observed data, and thus would not capture an actual "loss of information" induced by the prior in case of prior-data conflict. We build on a recent work to relate prior impact to the number of (virtual) samples from the current data model and introduce the effective current sample size (ECSS) of a prior, tailored to the application in Bayesian clinical trial designs. Special emphasis is put on robust mixture, power, and commensurate priors. We apply the approach to an adaptive design in which the number of recruited patients is adjusted depending on the effective sample size at an interim analysis. We argue that the ECSS is the appropriate measure in this case, as the aim is to save current (as opposed to historical) patients from recruitment. Furthermore, the ECSS can help overcome lack of consensus in the ESS assessment of mixture priors and can, more broadly, provide further insights into the impact of priors. An R package accompanies the paper.
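To make the traditional prior ESS idea above concrete, the short sketch below computes it for two standard conjugate cases. The hyperparameter values are invented, the mixture summary is only a rough approximation, and the paper's ECSS (defined relative to the current data model) is not reproduced here.

```python
import numpy as np

# Traditional prior effective sample size (ESS), illustrated for two conjugate
# cases. This is the "historical sample size" notion that the abstract contrasts
# with its proposed ECSS.

# 1) Beta(a, b) prior for a binomial response rate: ESS = a + b.
a, b = 12.0, 28.0                       # e.g. a prior built from 40 historical patients
print("Beta prior ESS:", a + b)

# 2) Normal prior N(mu0, tau0^2) for a mean, with known sampling variance
#    sigma^2 per observation: ESS = sigma^2 / tau0^2, i.e. the number of
#    observations whose information matches the prior's precision.
sigma2, tau0_sq = 1.0, 0.05
print("Normal prior ESS:", sigma2 / tau0_sq)

# A robust mixture prior w * informative + (1 - w) * vague carries less
# information; a crude summary (only an approximation) weights the component
# ESS values, here treating the vague component as worth about one observation.
w = 0.8
ess_informative, ess_vague = sigma2 / tau0_sq, 1.0
print("Approximate mixture prior ESS:", w * ess_informative + (1 - w) * ess_vague)
```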

6.
Björn Bornkamp. Biometrics, 2012, 68(3): 893-901
This article considers the topic of finding prior distributions when a major component of the statistical model depends on a nonlinear function. Using results on how to construct uniform distributions in general metric spaces, we propose a prior distribution that is uniform in the space of functional shapes of the underlying nonlinear function and then back-transform it to obtain a prior distribution for the original model parameters. The primary application considered in this article is nonlinear regression, but the idea might be of interest beyond this case. For nonlinear regression, the priors so constructed have the advantage that they are parametrization invariant and do not violate the likelihood principle, as opposed to uniform distributions on the parameters or the Jeffreys prior, respectively. The utility of the proposed priors is demonstrated in the context of design and analysis of nonlinear regression modeling in clinical dose-finding trials, through a real data example and simulation.

7.
Establishing that a set of population-splitting events occurred at the same time can be a potentially persuasive argument that a common process affected the populations. Recently, Oaks et al. (2013) assessed the ability of an approximate-Bayesian model-choice method (msBayes) to estimate such a pattern of simultaneous divergence across taxa, to which Hickerson et al. (2014) responded. Both papers agree that the primary inference enabled by the method is very sensitive to prior assumptions and often erroneously supports shared divergences across taxa when prior uncertainty about divergence times is represented by a uniform distribution. However, the papers differ about the best explanation and solution for this problem. Oaks et al. (2013) suggested the method's behavior was caused by the strong weight of uniformly distributed priors on divergence times leading to smaller marginal likelihoods (and thus smaller posterior probabilities) of models with more divergence-time parameters (Hypothesis 1); they proposed alternative prior probability distributions to avoid such strongly weighted posteriors. Hickerson et al. (2014) suggested numerical-approximation error causes msBayes analyses to be biased toward models of clustered divergences because the method's rejection algorithm is unable to adequately sample the parameter space of richer models within reasonable computational limits when using broad uniform priors on divergence times (Hypothesis 2). As a potential solution, they proposed a model-averaging approach that uses narrow, empirically informed uniform priors. Here, we use analyses of simulated and empirical data to demonstrate that the approach of Hickerson et al. (2014) does not mitigate the method's tendency to erroneously support models of highly clustered divergences, and is dangerous in the sense that the empirically derived uniform priors often exclude from consideration the true values of the divergence-time parameters. Our results also show that the tendency of msBayes analyses to support models of shared divergences is primarily due to Hypothesis 1, whereas Hypothesis 2 is an untenable explanation for the bias. Overall, this series of papers demonstrates that if our prior assumptions place too much weight in unlikely regions of parameter space such that the exact posterior supports the wrong model of evolutionary history, no amount of computation can rescue our inference. Fortunately, as predicted by fundamental principles of Bayesian model choice, more flexible distributions that accommodate prior uncertainty about parameters without placing excessive weight in vast regions of parameter space with low likelihood increase the method's robustness and power to detect temporal variation in divergences.
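Hypothesis 1 above is an instance of a general property of Bayesian model choice: spreading a prior over a vast parameter range shrinks a model's marginal likelihood. The toy computation below (a generic normal-mean example, not the msBayes model; all numbers are invented) shows the Bayes factor in favour of the simpler model growing as the uniform prior on the extra parameter is widened.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

rng = np.random.default_rng(0)
theta_true, n = 0.6, 20
y = rng.normal(theta_true, 1.0, size=n)        # simulated data with a nonzero mean

def log_marglik_simple():
    # M0: theta fixed at 0 (no free parameter)
    return norm.logpdf(y, 0.0, 1.0).sum()

def log_marglik_uniform(width):
    # M1: theta ~ Uniform(-width, width), integrated out numerically.
    # The likelihood is negligible far from the sample mean, so the
    # numerical integral is restricted to that neighbourhood.
    lo, hi = max(-width, y.mean() - 10), min(width, y.mean() + 10)
    def integrand(theta):
        return np.exp(norm.logpdf(y, theta, 1.0).sum())
    val, _ = quad(integrand, lo, hi)
    return np.log(val / (2 * width))

for width in (2, 20, 200, 2000):
    bf0 = np.exp(log_marglik_simple() - log_marglik_uniform(width))
    print(f"prior width {width:4d}: Bayes factor for the simpler model = {bf0:10.2f}")
```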

8.
Bayesian phylogenetic methods require the selection of prior probability distributions for all parameters of the model of evolution. These distributions allow one to incorporate prior information into a Bayesian analysis, but even in the absence of meaningful prior information, a prior distribution must be chosen. In such situations, researchers typically seek to choose a prior that will have little effect on the posterior estimates produced by an analysis, allowing the data to dominate. Sometimes a prior that is uniform (assigning equal prior probability density to all points within some range) is chosen for this purpose. In reality, the appropriate prior depends on the parameterization chosen for the model of evolution, a choice that is largely arbitrary. There is an extensive Bayesian literature on appropriate prior choice, and it has long been appreciated that there are parameterizations for which uniform priors can have a strong influence on posterior estimates. We here discuss the relationship between model parameterization and prior specification, using the general time-reversible model of nucleotide evolution as an example. We present Bayesian analyses of 10 simulated data sets obtained using a variety of prior distributions and parameterizations of the general time-reversible model. Uniform priors can produce biased parameter estimates under realistic conditions, and a variety of alternative priors avoid this bias.
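The point that a "uniform" prior is only uniform in one particular parameterization can be seen with a short Monte Carlo check. The example below uses a generic probability/log-odds pair rather than the GTR rate parameterizations analysed above:

```python
import numpy as np

# A prior that is uniform on a probability p is far from uniform on the
# log-odds scale, and vice versa; the implied prior mass near the boundary
# differs by more than an order of magnitude.
rng = np.random.default_rng(0)
n = 200_000

p_uniform = rng.uniform(0.0, 1.0, n)              # uniform on p
logit_uniform = rng.uniform(-10, 10, n)           # uniform on logit(p)
p_from_logit = 1 / (1 + np.exp(-logit_uniform))   # back-transformed to p

print("P(p < 0.01) under the uniform-on-p prior:     ",
      round(float(np.mean(p_uniform < 0.01)), 3))
print("P(p < 0.01) under the uniform-on-logit prior: ",
      round(float(np.mean(p_from_logit < 0.01)), 3))
```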

9.
The objective Bayesian approach relies on the construction of prior distributions that reflect ignorance. When topologies are considered equally probable a priori, clades cannot be. Shifting justifications have been offered for the use of uniform topological priors in Bayesian inference. These include: (i) topological priors do not inappropriately influence Bayesian inference when they are uniform; (ii) although clade priors are not uniform, their undesirable influence is negated by the likelihood function, even when data sets are small; and (iii) the influence of nonuniform clade priors is an appropriate reflection of knowledge. The first two justifications have been addressed previously: the first is false, and the second was found to be questionable. The third and most recent justification is inconsistent with the first two, and with the objective Bayesian philosophy itself. Thus, there has been no coherent justification for the use of nonflat clade priors in Bayesian phylogenetics. We discuss several solutions: (i) Bayesian inference can be abandoned in favour of other methods of phylogenetic inference; (ii) the objective Bayesian philosophy can be abandoned in favour of a subjective interpretation; (iii) the topology with the greatest posterior probability, which is also the tree of greatest marginal likelihood, can be accepted as optimal, with clade support estimated using other means; or (iv) a Bayes factor, which accounts for differences in priors among competing hypotheses, can be used to assess the weight of evidence in support of clades.
© The Willi Hennig Society 2009
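The incompatibility noted in the abstract above, namely that equal prior probabilities on topologies imply unequal prior probabilities on clades, can be checked by direct counting. The sketch below uses the standard (2k − 3)!! count of rooted binary trees on k labelled tips; the taxon numbers are arbitrary.

```python
from math import prod

# Prior probability of a specific clade when every rooted binary topology is
# equally likely a priori: a clade of size k appears in
# (2k - 3)!! * (2(n - k) - 1)!! of the (2n - 3)!! rooted trees on n tips.
def double_factorial(m):
    return prod(range(m, 0, -2)) if m > 0 else 1

def clade_prior(n_taxa, clade_size):
    trees_total = double_factorial(2 * n_taxa - 3)
    trees_with_clade = (double_factorial(2 * clade_size - 3)
                        * double_factorial(2 * (n_taxa - clade_size) - 1))
    return trees_with_clade / trees_total

for n in (5, 10, 20):
    for k in sorted({2, n // 2}):
        print(f"n = {n:2d} taxa, clade of {k:2d}: prior probability = {clade_prior(n, k):.2e}")
```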

10.
The present paper discusses some probability problems in epidemiology in which a community is divided into two parts by means of the binomial damage model. The community size N is an integer-valued random variable and p denotes the incidence rate of a certain disease, such that conditional on N = n, the number B of infected individuals in the community follows a binomial distribution with parameters n and p. Let C denote the number of susceptibles, where B + C = N. Probabilities are found for events such as: only half the community is hit by the disease; there are n more susceptibles than infected individuals for n = 0, 1, 2, …; the susceptibles exceed the infected; or the susceptibles exceed (m-1) times the number of infected individuals for m = 2, 3, …. These probabilities play a useful role when a public health official wishes to ascertain that only a given proportion 1/m of the community is infected with the disease. Only the cases in which the community size N follows a geometric, negative binomial, or Fisher's logarithmic series distribution are considered, as these keep the mathematics manageable.
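The probabilities described above can also be checked numerically for any of the assumed community-size distributions. The sketch below does this for a geometric community size; the incidence rate and geometric parameter are arbitrary, and the truncated sums only approximate the paper's closed-form results.

```python
import numpy as np
from scipy.stats import geom, binom

# Binomial damage model: N ~ geometric community size, B | N = n ~ Binomial(n, p)
# infected individuals, C = N - B susceptibles.
p = 0.3          # assumed incidence rate
q = 0.2          # geometric parameter, P(N = n) = q (1 - q)^(n - 1), n >= 1
n_max = 2000     # truncation point for the numerical sums

n = np.arange(1, n_max + 1)
pn = geom.pmf(n, q)

# P(exactly half the community is infected): sum over even n of P(N = n) P(B = n/2 | n)
even = n % 2 == 0
p_half = np.sum(pn[even] * binom.pmf(n[even] // 2, n[even], p))

# P(susceptibles >= infected), i.e. P(B <= N/2)
p_susc_ge_inf = np.sum(pn * binom.cdf(n // 2, n, p))

print("P(exactly half infected)    =", round(float(p_half), 4))
print("P(susceptibles >= infected) =", round(float(p_susc_ge_inf), 4))
```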

11.
Estimating correlations among demographic parameters is critical to understanding population dynamics and life-history evolution, where correlations among parameters can inform our understanding of life-history trade-offs, result in effective applied conservation actions, and shed light on evolutionary ecology. The most common approaches rely on the multivariate normal distribution, and its conjugate inverse Wishart prior distribution. However, the inverse Wishart prior for the covariance matrix of multivariate normal distributions has a strong influence on posterior distributions. As an alternative to the inverse Wishart distribution, we individually parameterize the covariance matrix of a multivariate normal distribution to accurately estimate variances (σ²) of, and process correlations (ρ) between, demographic parameters. We evaluate this approach using simulated capture–mark–recapture data. We then use this method to examine process correlations between adult and juvenile survival of black brent geese marked on the Yukon–Kuskokwim River Delta, Alaska (1988–2014). Our parameterization consistently outperformed the conjugate inverse Wishart prior for simulated data, where the means of posterior distributions estimated using an inverse Wishart prior were substantially different from the values used to simulate the data. Brent adult and juvenile annual apparent survival rates were strongly positively correlated (ρ = 0.563, 95% CRI 0.181–0.823), suggesting that habitat conditions have significant effects on both adult and juvenile survival. We provide robust simulation tools, and our methods can readily be expanded for use in other capture–recapture or capture-recovery frameworks. Further, our work reveals limits on the utility of these approaches when study duration or sample sizes are small.
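The strong influence of the conjugate inverse Wishart prior mentioned above stems partly from the dependency it induces between variances and correlations. The short check below illustrates that dependency for a 2 × 2 covariance matrix; the degrees of freedom and scale are arbitrary, and it only motivates, rather than reproduces, the individual parameterization used in the study.

```python
import numpy as np
from scipy.stats import invwishart

# Draws from an inverse Wishart prior on a 2x2 covariance matrix, and the
# correlation implied by each draw. Even with an identity scale, draws with
# large variances tend to carry more extreme correlations, so the "vague"
# prior still encodes a variance-correlation relationship.
draws = invwishart.rvs(df=3, scale=np.eye(2), size=50_000, random_state=1)

var1 = draws[:, 0, 0]
rho = draws[:, 0, 1] / np.sqrt(draws[:, 0, 0] * draws[:, 1, 1])

large_var = var1 > np.quantile(var1, 0.9)   # draws with the largest variances
small_var = var1 < np.quantile(var1, 0.1)   # draws with the smallest variances
print("mean |rho| when the variance is large:", round(float(np.abs(rho[large_var]).mean()), 3))
print("mean |rho| when the variance is small:", round(float(np.abs(rho[small_var]).mean()), 3))
```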

12.
Insights into the ecology and natural history of the neotenic salamander, Eurycea tonkawae, are provided from eight years of capture-recapture data from 10,041 captures of 7,315 individuals at 16 sites. Eurycea tonkawae exhibits seasonal reproduction, with peak gravidity occurring in the fall and winter. Size frequency data indicated recruitment occurred in the spring and summer. Open-population capture-recapture models revealed a similar seasonal pattern at two of three sites, while recruitment was dependent on flow at the third site. Females can reach sexual maturity within one year, and oviposition likely takes place below ground. The asymptotic body length of 1,290 individuals was estimated as 31.73 mm (at ca. two years of age), although there was substantial heterogeneity among growth trajectories. Longevity was approximately eight years, and the median age for a recaptured adult was 2.3 years. Abundance estimated from closed-population and robust-design capture-recapture models varied widely within and among sites (range 41–834), although, surprisingly, dramatic changes in abundance were not observed following prolonged dry periods. Seasonal migration patterns of second-year and older adults may help explain lower ratios of large individuals and higher temporary emigration during the latter half of the year, but further study is required. Low numbers of captures and recaptures precluded the use of open-population models to estimate demographic parameters at several sites; therefore, closed-population (or robust-design) methods are generally recommended. Based on observations of their life history and population demographics, E. tonkawae seems well adapted to conditions where spring flow is variable and surface habitat periodically goes dry.

13.
Yves Bötsch, Lukas Jenni, Marc Kéry. Ibis, 2020, 162(3): 902-910
Assessing and modelling abundance from animal count data is a very common task in ecology and management. Detection is arguably never perfect, but modern hierarchical models can incorporate detection probability and yield abundance estimates that are corrected for imperfect detection. Two variants of these models rely on counts of unmarked individuals, or territories (binomial N-mixture models, or binmix), and on detection histories based on territory-mapping data (multinomial N-mixture models, or multimix). However, calibration studies which evaluate these two N-mixture model approaches are needed. We analysed conventional territory-mapping data (three surveys in 2014 and four in 2015) using both binmix and multimix models to estimate abundance for two common avian cavity-nesting forest species (Great Tit Parus major and Eurasian Blue Tit Cyanistes caeruleus). In the same study area, we used two benchmarks: occupancy data from a dense nestbox scheme and total number of detected territories. To investigate variance in estimates due to the territory assignment, three independent ornithologists conducted territory assignments. Nestbox occupancy yields a minimum number of territories, as some natural cavities may have been used, and binmix model estimates were generally higher than this benchmark. Estimates using the multimix model were slightly more precise than binmix model estimates. Depending on the person assigning the territories, the multimix model estimates became quite different, either overestimating or underestimating the 'truth'. We conclude that N-mixture models estimated abundance reliably, even for our very small sample sizes. Territory-mapping counts depended on territory assignment and this carried over to estimates under the multimix model. This limitation has to be taken into account when abundance estimates are compared between sites or years. Whenever possible, accounting for such hidden heterogeneity in the raw data of bird surveys, via including a 'territory editor' factor, is recommended. Distributing the surveys randomly (in time and space) to editors may also alleviate this problem.
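For readers unfamiliar with the binmix model referred to above, the sketch below shows its core computation for a single site: the latent abundance N is summed out of a Poisson-Binomial hierarchy and the likelihood is profiled on a grid. The counts and grid values are invented and far simpler than the survey data analysed in the study.

```python
import numpy as np
from scipy.stats import poisson, binom
from scipy.special import logsumexp

# Binomial N-mixture likelihood for one site: repeated counts y_t are
# Binomial(N, p) given a latent abundance N ~ Poisson(lambda); N is summed out.
def site_loglik(y, lam, p, n_max=200):
    N = np.arange(max(y), n_max + 1)                 # support of the latent abundance
    log_prior = poisson.logpmf(N, lam)
    log_obs = sum(binom.logpmf(yt, N, p) for yt in y)
    return logsumexp(log_prior + log_obs)

y_counts = [14, 11, 16, 12]                          # four repeated surveys of one site

# Profile the joint likelihood over a small grid of (lambda, p)
lams, ps = np.arange(5, 80), np.linspace(0.05, 0.95, 60)
ll = np.array([[site_loglik(y_counts, lam, p) for p in ps] for lam in lams])
i, j = np.unravel_index(np.argmax(ll), ll.shape)
print(f"grid MLE: lambda ~ {lams[i]}, detection p ~ {ps[j]:.2f}")
```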

14.
Batch marking is common and useful for many capture–recapture studies where individual marks cannot be applied due to various constraints such as timing, cost, or marking difficulty. When batch marks are used, observed data are not individual capture histories but a set of counts including the numbers of individuals first marked, marked individuals that are recaptured, and individuals captured but released without being marked (applicable to some studies) on each capture occasion. Fitting traditional capture–recapture models to such data requires one to identify all possible sets of capture–recapture histories that may lead to the observed data, which is computationally infeasible even for a small number of capture occasions. In this paper, we propose a latent multinomial model to deal with such data, where the observed vector of counts is a non-invertible linear transformation of a latent vector that follows a multinomial distribution depending on model parameters. The latent multinomial model can be fitted efficiently through a saddlepoint approximation based maximum likelihood approach. The model framework is very flexible and can be applied to data collected with different study designs. Simulation studies indicate that reliable estimation results are obtained for all parameters of the proposed model. We apply the model to analysis of golden mantella data collected using batch marks in Central Madagascar.

15.
The aim of dose finding studies is sometimes to estimate parameters in a fitted model. The precision of the parameter estimates should be as high as possible. This can be obtained by increasing the number of subjects in the study, N, choosing a good and efficient estimation approach, and by designing the dose finding study in an optimal way. Increasing the number of subjects is not always feasible because of increasing cost, time limitations, etc. In this paper, we assume fixed N and consider estimation approaches and study designs for multiresponse dose finding studies. We work with diabetes dose–response data and compare a system estimation approach that fits a multiresponse Emax model to the data to equation-by-equation estimation that fits uniresponse Emax models to the data. We then derive some optimal designs for estimating the parameters in the multi- and uniresponse Emax model and study the efficiency of these designs.
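As background for the abstract above, the sketch below fits a single-response Emax model by nonlinear least squares, i.e. the equation-by-equation end of the comparison. The doses, true parameter values, and noise level are invented; the multiresponse system estimation and the optimal designs discussed in the paper are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

# Standard Emax dose-response curve: E(dose) = E0 + Emax * dose / (ED50 + dose)
def emax_model(dose, e0, emax, ed50):
    return e0 + emax * dose / (ed50 + dose)

rng = np.random.default_rng(0)
doses = np.repeat([0, 5, 25, 50, 100, 150], 10)          # 10 subjects per dose level
y = emax_model(doses, e0=1.0, emax=8.0, ed50=30.0) + rng.normal(0, 1.0, doses.size)

# Least-squares fit of the uniresponse model
params, cov = curve_fit(emax_model, doses, y, p0=[0.0, 5.0, 20.0])
se = np.sqrt(np.diag(cov))
for name, est, s in zip(["E0", "Emax", "ED50"], params, se):
    print(f"{name:>4s}: {est:6.2f} (SE {s:4.2f})")
```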

16.
Bayesian LASSO for quantitative trait loci mapping
Yi N, Xu S. Genetics, 2008, 179(2): 1045-1055
The mapping of quantitative trait loci (QTL) aims to identify molecular markers or genomic loci that influence the variation of complex traits. The problem is complicated by the fact that QTL data usually contain a large number of markers across the entire genome, most of which have little or no effect on the phenotype. In this article, we propose several Bayesian hierarchical models for mapping multiple QTL that simultaneously fit and estimate all possible genetic effects associated with all markers. The proposed models use prior distributions for the genetic effects that are scale mixtures of normal distributions with mean zero and variances distributed to give each effect a high probability of being near zero. We consider two types of priors for the variances, exponential and scaled inverse-χ² distributions, which result in a Bayesian version of the popular least absolute shrinkage and selection operator (LASSO) model and the well-known Student's t model, respectively. Unlike most applications where fixed values are preset for hyperparameters in the priors, we treat all hyperparameters as unknowns and estimate them along with other parameters. Markov chain Monte Carlo (MCMC) algorithms are developed to simulate the parameters from the posteriors. The methods are illustrated using well-known barley data.
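The exponential-variance prior mentioned above is the scale-mixture-of-normals representation of a Laplace (double-exponential) prior, which is what makes the model a Bayesian LASSO. A quick Monte Carlo check of that equivalence, with an arbitrary rate parameter, is sketched below; it is not the paper's MCMC sampler for the full hierarchical model.

```python
import numpy as np

# Scale mixture of normals: draw a variance from an Exponential(rate), then an
# effect from N(0, variance). The marginal of the effect is Laplace with
# scale 1 / sqrt(2 * rate); the two sets of quantiles below should agree.
rng = np.random.default_rng(0)
n = 500_000
rate = 2.0                                          # exponential rate for the variances

tau2 = rng.exponential(scale=1.0 / rate, size=n)    # variance of each effect
beta_mixture = rng.normal(0.0, np.sqrt(tau2))       # effect | variance ~ Normal

beta_laplace = rng.laplace(0.0, 1.0 / np.sqrt(2.0 * rate), size=n)

for q in (0.5, 0.9, 0.99):
    print(f"|beta| quantile {q:4.2f}: mixture {np.quantile(np.abs(beta_mixture), q):.3f}"
          f" vs Laplace {np.quantile(np.abs(beta_laplace), q):.3f}")
```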

17.
In tropical dry environments, rainfall periodicity may affect demographic parameters, resulting in fluctuations in bird abundance. We used capture–recapture data for the Grey Pileated Finch from a Neotropical dry forest to evaluate the hypothesis that intra- and inter-annual survival, the entry of new individuals, and population abundance are related to local rainfall. Sampling occurred across 3 years, with individuals captured, tagged, and evaluated for age and presence of a brood patch every 14 days. Using the POPAN formulation, we generated demographic models to evaluate the temporal dynamics of the study population. Best-fit models indicated a low apparent annual survival in the first year (16%) compared to other years (between 47 and 62%), with this low value associated with an extreme drought. The abundance of juveniles at each capture occasion was significantly dependent on the accumulated precipitation in the previous 14 days, and the juvenile covariate was a strong predictor of the intra-annual entrance probability (natality). Entry of individuals during the reproductive period corresponded to 53, 52, and 75% of total ingress in each year, respectively. The trend in sampled population size indicated positive exponential growth (N_initial = 50, N_last = 600), with intra-annual fluctuations becoming progressively more intense. Low survival contributed to the population decline at study onset, while at the end of the study intense entry of individuals promoted rapid population growth. Thus, the indirect effects of rainfall and the combined effect of two demographic rates operated synergistically on the immediate population abundance of the Grey Pileated Finch, an abundant bird in a Neotropical dry forest.

18.
Many long-lived plant and animal species have nondiscrete overlapping generations. Although numerous models have been developed to predict the effective sizes (Ne) of populations with overlapping generations, they are extremely difficult to apply to natural populations because of the large array of unknown and elusive life-table parameters involved. Unfortunately, little work has been done to estimate the Ne of populations with overlapping generations from marker data, in sharp contrast to the situation of populations with discrete generations for which quite a few estimators are available. In this study, we propose an estimator (EPA, estimator by parentage assignments) of the current Ne of populations with overlapping generations, using the sex, age, and multilocus genotype information of a single sample of individuals taken at random from the population. Simulations show that EPA provides unbiased and accurate estimates of Ne under realistic sampling and genotyping effort. Additionally, it yields estimates of other interesting parameters such as generation interval, the variances and covariances of lifetime family size, effective number of breeders of each age class, and life-table variables. Data from wild populations of baboons and hihi (stitchbird) were analyzed by EPA to demonstrate the use of the estimator in practical sampling and genotyping situations.

19.
In this paper, the development of a probabilistic network for the diagnosis of acute cardiopulmonary diseases is presented in detail. A panel of expert physicians collaborated to specify the qualitative part, which is a directed acyclic graph defining a factorization of the joint probability distribution of domain variables into univariate conditional distributions. The quantitative part, which is a set of parametric models defining these univariate conditional distributions, was estimated following the Bayesian paradigm. In particular, we exploited an original reparameterization of Beta and categorical logistic regression models to elicit the joint prior distribution of parameters from medical experts, and updated it by conditioning on a dataset of hospital records via Markov chain Monte Carlo simulation. Refinement was iteratively performed until the probabilistic network provided satisfactory concordance index values for several acute diseases and reasonable diagnoses for six fictitious patient cases. The probabilistic network can be employed to perform medical diagnosis on a total of 63 diseases (38 acute and 25 chronic) on the basis of up to 167 patient findings.

20.
Obtaining inferences on disease dynamics (e.g., host population size, pathogen prevalence, transmission rate, host survival probability) typically requires marking and tracking individuals over time. While multistate mark–recapture models can produce high-quality inference, these techniques are difficult to employ at large spatial and long temporal scales or in small remnant host populations decimated by virulent pathogens, where low recapture rates may preclude the use of mark–recapture techniques. Recently developed N-mixture models offer a statistical framework for estimating wildlife disease dynamics from count data. N-mixture models are a type of state-space model in which observation error is attributed to failing to detect some individuals when they are present (i.e., false negatives). The analysis approach uses repeated surveys of sites over a period of population closure to estimate detection probability. We review the challenges of modeling disease dynamics and describe how N-mixture models can be used to estimate common metrics, including pathogen prevalence, transmission, and recovery rates while accounting for imperfect host and pathogen detection. We also offer a perspective on future research directions at the intersection of quantitative and disease ecology, including the estimation of false positives in pathogen presence, spatially explicit disease-structured N-mixture models, and the integration of other data types with count data to inform disease dynamics. Managers rely on accurate and precise estimates of disease dynamics to develop strategies to mitigate pathogen impacts on host populations. At a time when pathogens pose one of the greatest threats to biodiversity, statistical methods that lead to robust inferences on host populations are critically needed for rapid, rather than incremental, assessments of the impacts of emerging infectious diseases.
