Similar Articles
20 similar articles retrieved (search time: 0 ms)
1.
2.
Huggins RM  Yip PS 《Biometrics》1999,55(2):387-395
A weighted martingale method, akin to a moving average, is proposed to allow the use of modified closed-population methods in the estimation of the size of a smoothly changing open population when there are frequent capture occasions. We concentrate here on modifications to martingale estimating functions for model Mt, but a wide range of closed-population estimators may be modified in this fashion. The method is motivated by and applied to weekly capture-recapture data from the Mai Po bird sanctuary in Hong Kong. Simulations show that the weighted martingale estimator compares well with the Jolly-Seber estimator when the conditions for the latter are met, and that it performs far better when individuals are allowed to leave and re-enter the population. Expressions for the asymptotic bias and variance of the estimator are derived in an appendix.
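The martingale estimating functions themselves are not given in the abstract. As a rough illustration of the moving-average idea, the sketch below applies Gaussian kernel weights centred on an occasion of interest to a Schnabel-type closed-population (model Mt-style) estimator. The weighting scheme, bandwidth, and weekly counts are hypothetical assumptions, not the authors' estimator or the Mai Po data.

```python
import math

def weighted_schnabel(catch, recaps, marked, t, bandwidth=3.0):
    """Kernel-weighted Schnabel-type estimate of population size near occasion t.

    catch[j]  -- animals caught on occasion j
    recaps[j] -- how many of those were already marked
    marked[j] -- marked animals at large just before occasion j
    Weights decay with distance from t, mimicking a moving average over
    the frequent capture occasions.
    """
    num = 0.0
    den = 0.0
    for j in range(len(catch)):
        w = math.exp(-0.5 * ((j - t) / bandwidth) ** 2)  # Gaussian kernel weight
        num += w * catch[j] * marked[j]
        den += w * recaps[j]
    return num / den if den > 0 else float("nan")

# Hypothetical weekly counts (not the Mai Po data).
catch  = [30, 28, 35, 31, 29, 33, 30]
recaps = [ 0,  6,  9, 10, 11, 12, 12]
marked = [ 0, 30, 52, 71, 85, 97, 110]
print(round(weighted_schnabel(catch, recaps, marked, t=3), 1))
```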

3.
Link WA 《Biometrics》2003,59(4):1123-1130
Heterogeneity in detection probabilities has long been recognized as problematic in mark-recapture studies, and numerous models have been developed to accommodate its effects. Individual heterogeneity is especially problematic, in that reasonable alternative models may predict essentially identical observations from populations of substantially different sizes. Thus, even with very large samples, the analyst will not be able to distinguish among reasonable models of heterogeneity, even though these yield quite distinct inferences about population size. The problem is illustrated with models for closed and open populations.
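A minimal numerical sketch of the identifiability problem described here: two beta distributions of detection probability with the same mean but different shapes imply different probabilities of never being detected, and hence quite different population sizes, from the same count of detected animals. The beta parameters, number of occasions, and observed count are illustrative assumptions, not taken from the paper.

```python
def prob_never_detected_beta(a, b, T):
    """P(an individual is missed on all T occasions) when its detection
    probability p is drawn from a Beta(a, b) distribution:
    E[(1-p)^T], computed via the beta-function identity."""
    prob = 1.0
    for k in range(T):
        prob *= (b + k) / (a + b + k)
    return prob

T = 5              # capture occasions (assumed)
n_observed = 400   # animals detected at least once (hypothetical)

for a, b in [(2.0, 2.0), (0.4, 0.4)]:   # same mean p = 0.5, different shapes
    p0 = prob_never_detected_beta(a, b, T)
    n_hat = n_observed / (1.0 - p0)      # implied population size
    print(f"Beta({a},{b}): P(never detected) = {p0:.3f}, implied N = {n_hat:.0f}")
```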

4.
Estimating the size of hidden populations is essential for understanding the magnitude of social and healthcare needs, risk behaviors, and disease burden. However, due to the hidden nature of these populations, they are difficult to survey, and there are no gold standard size estimation methods. Many different methods and variations exist, and diagnostic tools are needed to help researchers assess method-specific assumptions as well as compare between methods. Further, because many necessary mathematical assumptions are unrealistic for real survey implementation, assessment of how robust methods are to deviations from the stated assumptions is essential. We describe diagnostics and assess the performance of a new population size estimation method, capture–recapture with successive sampling population size estimation (CR-SS-PSE), which we apply to data from three years of studies across three cities and three hidden populations in Armenia. CR-SS-PSE relies on data from two sequential respondent-driven sampling surveys and extends the successive sampling population size estimation (SS-PSE) framework by using the number of individuals in the overlap between the two surveys and a model for the successive sampling process to estimate population size. We demonstrate that CR-SS-PSE is more robust to violations of successive sampling assumptions than SS-PSE. Further, we compare the CR-SS-PSE estimates to population size estimations using other common methods, including unique object and service multipliers, wisdom of the crowd, and two-source capture–recapture to illustrate volatility across estimation methods.
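CR-SS-PSE itself rests on the successive-sampling model and is not reproduced here. As a hedged sketch of one of the simpler comparison methods named above, the unique-object multiplier divides the number of objects distributed to the hidden population by the proportion of survey respondents who report having received one. All counts below are made up.

```python
def multiplier_estimate(objects_distributed, prop_reporting_receipt):
    """Unique-object multiplier: if M objects were handed out to members of
    the hidden population and a later survey finds that a proportion p of
    respondents received one, the population size is estimated as M / p."""
    return objects_distributed / prop_reporting_receipt

# Hypothetical inputs, not the Armenian study data.
print(round(multiplier_estimate(300, 0.12)))   # -> 2500
```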

5.
6.
BACKGROUND AND AIMS: In microdensitometry and flow cytometry, estimation of nuclear DNA content in a sample requires a standard with a known nuclear DNA content. It is assumed that dye accessibility to DNA is the same in the sample and standard nuclei. Stoichiometric error arises when dye accessibility is not proportional between the sample and standard. The aim of the present study was to compare the effects of standardization (external vs. internal) on nuclear fluorescence of two Coffea species and petunia as temperature increases, and the consequences for genome size estimation. METHODS: Two coffee tree taxa, C. liberica subsp. dewevrei (DEW) and C. pseudozanguebariae (PSE), and Petunia hybrida were grown in a glasshouse in Montpellier, France. Nuclei were extracted by leaf chopping and, at least 2 h after extraction, were stained with propidium iodide for approximately 3 min immediately before cytometer processing. In the first experiment, effects of heat treatment were observed in mixed (DEW + petunia) and unmixed extracts (petunia and DEW in separate extracts). Nine temperature treatments were applied (21, 45, 55, 60, 65, 70, 75, 80 and 85 °C). In a second experiment, effects of heating on within-species genome size variation were investigated in DEW and PSE. Two temperatures (21 and 70 °C) were selected as representative of the maximal range of chromatin decondensation. KEY RESULTS AND CONCLUSIONS: In coffee trees, sample and standard nuclei reacted differently to temperature according to the type of standardization (pseudo-internal vs. external). Cytosolic compounds released in the filtrate would modify chromatin sensitivity to decondensation. Consequently, the 'genome size' estimate depended on the temperature. Similarly, intraspecific variation in genome size changed between estimations at 21 °C and 70 °C. The consequences are discussed, and methods for detecting stoichiometric error, together with proposals for minimizing it, are presented.
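The calculation underlying both standardization schemes is the usual flow-cytometry proportionality between sample and standard fluorescence; the study's point is precisely that heating can break this proportionality. A minimal sketch follows, with illustrative peak positions and a commonly cited 2C value for Petunia hybrida assumed as the standard (neither is taken from the paper).

```python
def genome_size_pg(sample_peak, standard_peak, standard_2c_pg):
    """Estimate the sample's 2C DNA content (pg), assuming fluorescence is
    strictly proportional to DNA content in both sample and standard nuclei
    -- the very assumption that stoichiometric error violates."""
    return standard_2c_pg * sample_peak / standard_peak

# Illustrative peak channels; 2.85 pg is assumed here as the standard's 2C value.
print(round(genome_size_pg(sample_peak=182.0, standard_peak=355.0,
                           standard_2c_pg=2.85), 2))
```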

7.
The Sage Grouse Centrocercus urophasianus is a species of conservation concern throughout its range in western North America. Since the 1950s, the high count of males at leks has been used as an index for monitoring populations. However, the relationship between this lek-count index and population size is unclear, and its reliability for assessing population trends has been questioned. We used non-invasive genetic mark-recapture analysis of faecal and feather samples to estimate pre-breeding population size for the Parachute-Piceance-Roan, a small, geographically isolated population of Sage Grouse in western Colorado, during two consecutive winters from 2012 to 2014. We estimated total pre-breeding population size as 335 (95% confidence interval (CI): 287–382) in the first winter and 745 (95% CI: 627–864) in the second, an approximate doubling in abundance between years. Although we also observed a large increase in the spring lek-count index between those years, high male count data poorly represented mark-recapture estimates of male abundance in each year. Our data suggest that lek counts are useful for detecting the direction and magnitude of large changes in Sage Grouse abundance over time but they may not reliably reflect small changes in abundance that may be relevant to small populations of conservation concern.
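The abstract does not specify which closed-population estimator was used. As a hedged sketch of a basic two-session genetic mark-recapture calculation, the Chapman estimator with its standard variance and a Wald-type 95% confidence interval is shown below; the genotype counts are hypothetical, not the Colorado data.

```python
import math

def chapman_with_ci(n1, n2, m, z=1.96):
    """Chapman two-sample mark-recapture estimate with a Wald 95% CI.
    n1, n2 = individuals identified in each session, m = matches between them."""
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)
           / ((m + 1) ** 2 * (m + 2)))
    half = z * math.sqrt(var)
    return n_hat, n_hat - half, n_hat + half

# Hypothetical counts of genotyped birds in two winters and matches between them.
print([round(x) for x in chapman_with_ci(150, 140, 60)])
```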

8.
9.
We consider sample size calculations for testing differences in means between two samples and allowing for different variances in the two groups. Typically, the power functions depend on the sample size and a set of parameters assumed known, and the sample size needed to obtain a prespecified power is calculated. Here, we account for two sources of variability: we allow the sample size in the power function to be a stochastic variable, and we consider estimating the parameters from preliminary data. An example of the first source of variability is nonadherence (noncompliance). We assume that the proportion of subjects who will adhere to their treatment regimen is not known before the study, but that the proportion is a stochastic variable with a known distribution. Under this assumption, we develop simple closed form sample size calculations based on asymptotic normality. The second source of variability is in parameter estimates that are estimated from prior data. For example, we account for variability in estimating the variance of the normal response from existing data which are assumed to have the same variance as the study for which we are calculating the sample size. We show that we can account for the variability of the variance estimate by simply using a slightly larger nominal power in the usual sample size calculation, which we call the calibrated power. We show that the calculation of the calibrated power depends only on the sample size of the existing data, and we give a table of calibrated power by sample size. Further, we consider the calculation of the sample size in the rarer situation where we account for the variability in estimating the standardized effect size from some existing data. This latter situation, as well as several of the previous ones, is motivated by sample size calculations for a Phase II trial of a malaria vaccine candidate.
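A minimal sketch of the conventional fixed-parameter calculation that this work builds on: the normal-approximation sample size for comparing two means with unequal variances, with the effect diluted by a fixed adherence proportion. The stochastic-adherence treatment and the calibrated power are the paper's contributions and are not reproduced; all inputs below are assumptions.

```python
from statistics import NormalDist
import math

def sample_size_per_group(delta, sd1, sd2, adherence=1.0,
                          alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample comparison of means with
    unequal variances, based on asymptotic normality.  A fixed adherence
    proportion dilutes the detectable effect to adherence*delta; the paper's
    stochastic adherence and calibrated power go beyond this simple formula."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    effect = adherence * delta
    return math.ceil((z_a + z_b) ** 2 * (sd1 ** 2 + sd2 ** 2) / effect ** 2)

print(sample_size_per_group(delta=0.5, sd1=1.0, sd2=1.2, adherence=0.9))
```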

10.
11.
Bayesian methods for estimation of the size of a closed population (total citations: 4; self-citations: 0; citations by others: 4)

12.
13.
14.
15.
The availability of a large number of high-density markers (SNPs) allows the estimation of historical effective population size (Ne) from linkage disequilibrium between loci. A recent refinement of methods to estimate historical Ne over the recent past has been shown to be rather accurate with simulated data. The method has also been applied to real data for numerous species. However, simulated data cannot encompass all the complexities of real genomes, and the performance of any estimation method with real data is always uncertain, as the true demography of the populations is not known. Here, we carried out an experimental design with Drosophila melanogaster to test the method with real data following a known demographic history. We used a population maintained in the laboratory at a constant census size of about 2800 individuals and subjected it to a drastic decline to a size of 100 individuals. After a few generations, the population was expanded back to the previous size and, after a few further generations, expanded again to twice the initial size. Estimates of historical Ne were obtained with the software GONE, for both autosomes and the X chromosome, from samples of 17 whole-genome-sequenced individuals. The estimates of historical effective size recovered the pattern of changes that occurred in the population, showing generally good performance of the method. We discuss the limitations of the method and the applications of the software carried out so far.
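GONE's binned-recombination machinery is not reproduced here. As a hedged sketch of the basic principle of LD-based Ne estimation, the snippet below uses the classical single-bin approximation for unlinked loci, E[r²] ≈ 1/(3Ne) + 1/S, where S is the sample size. The r² value is invented; the sample size of 17 simply mirrors the study design.

```python
def ne_from_ld(mean_r2, sample_size_S):
    """Classical linkage-disequilibrium estimate of effective size for
    unlinked loci: with E[r^2] ~ 1/(3*Ne) + 1/S, subtract the sampling
    term 1/S and invert.  GONE generalises this by binning loci by
    recombination distance to recover Ne at different times in the past;
    only the single-bin formula is sketched here."""
    r2_adj = mean_r2 - 1.0 / sample_size_S
    return 1.0 / (3.0 * r2_adj) if r2_adj > 0 else float("inf")

# Illustrative values, not output from the Drosophila experiment.
print(round(ne_from_ld(mean_r2=0.062, sample_size_S=17)))
```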

16.
We consider the problem of estimating a population size by removal sampling when the sampling rate is unknown. Bayesian methods are now widespread and make it possible to include prior knowledge in the analysis. However, we show that Bayes estimates based on default improper priors lead to improper posteriors or infinite estimates. Similarly, weakly informative priors give unstable estimators that are sensitive to the choice of hyperparameters. By examining the likelihood, we show that population size estimates can be stabilized by penalizing small values of the sampling rate or large values of the population size. Based on theoretical results and simulation studies, we propose some recommendations on the choice of the prior. Finally, we apply our results to real datasets.
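A minimal sketch of the stabilization idea under stated assumptions: a constant-rate removal (depletion) likelihood is profiled over N, with a Beta-type penalty on the sampling rate to discourage the degenerate small-rate/large-N solutions. The penalty parameters, grid limit, and catch counts are illustrative, not the authors' recommended prior or data.

```python
import math

def penalized_removal_estimate(catches, n_max=5000, a=2.0, b=2.0):
    """Grid-search MAP-style estimate of population size N from removal counts.
    Model: catches[k] ~ Binomial(N - removed_before_k, p) with constant rate p.
    A Beta(a, b)-type penalty on p discourages p near 0 (and hence huge N);
    the specific penalty here is only an illustration."""
    total = sum(catches)
    removed_before = [sum(catches[:k]) for k in range(len(catches))]
    best_n, best_val = None, -math.inf
    for n in range(total, n_max + 1):
        exposure = sum(n - r for r in removed_before)
        p = total / exposure                      # MLE of p for this N
        if not 0 < p < 1:
            continue
        ll = 0.0
        for c, r in zip(catches, removed_before):
            ll += (math.lgamma(n - r + 1) - math.lgamma(c + 1)
                   - math.lgamma(n - r - c + 1)
                   + c * math.log(p) + (n - r - c) * math.log(1 - p))
        ll += (a - 1) * math.log(p) + (b - 1) * math.log(1 - p)  # penalty term
        if ll > best_val:
            best_n, best_val = n, ll
    return best_n

print(penalized_removal_estimate([120, 74, 51, 32]))  # hypothetical catches
```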

17.
Dupuis JA 《Biometrika》1995,82(4):761-772
The Arnason–Schwarz model is usually used for estimating survival and movement probabilities of animal populations from capture-recapture data. The missing data structure of this capture-recapture model is exhibited and summarised via a directed graph representation. Taking advantage of this structure, we implement a Gibbs sampling algorithm from which Bayesian estimates and credible intervals for survival and movement probabilities are derived. Convergence of the algorithm is proved using a duality principle. We illustrate our approach through a real example.
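A hedged sketch of the data structure behind the Arnason–Schwarz model: simulating multi-state capture histories under survival, movement, and detection parameters makes the missing data explicit, since every non-detection hides the latent state that the paper's Gibbs sampler imputes. Parameter values and dimensions below are illustrative only, not the paper's example.

```python
import random

def simulate_arnason_schwarz(n_animals, n_occasions, phi, psi, p, seed=1):
    """Simulate capture histories under survival phi[s], movement matrix
    psi[s][s'], and detection p[s] for states s = 0..S-1.
    'None' marks a non-detection: the true state is then latent, which is
    the missing-data part of the model."""
    rng = random.Random(seed)
    S = len(phi)
    histories = []
    for _ in range(n_animals):
        state = rng.randrange(S)          # release state at first occasion
        hist = [state]                    # detection assumed at release
        for _ in range(n_occasions - 1):
            if state is None or rng.random() > phi[state]:
                state = None              # death is absorbing
                hist.append(None)
                continue
            state = rng.choices(range(S), weights=psi[state])[0]
            hist.append(state if rng.random() < p[state] else None)
        histories.append(hist)
    return histories

phi = [0.8, 0.7]                          # survival probability per site
psi = [[0.9, 0.1], [0.2, 0.8]]            # movement between two sites
p   = [0.6, 0.5]                          # detection probability per site
print(simulate_arnason_schwarz(3, 5, phi, psi, p))
```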

18.
Miller F 《Biometrics》2005,61(2):355-361
We consider clinical studies with a sample size re-estimation based on the unblinded variance estimate at some interim point of the study. Because the sample size is determined in such a flexible way, the usual variance estimator at the end of the trial is biased. We derive sharp bounds for this bias. These bounds have a quite simple form and can help in deciding whether the bias is negligible for the study at hand or whether a correction should be made. An exact formula for the bias is also provided. We discuss possibilities for removing this bias, or at least reducing it substantially. For this purpose, we propose a certain additive correction of the bias. We illustrate with an example that the significance level of the test can be controlled when this additive correction is used.
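The bounds and the additive correction are not reproduced here; the sketch below is only a Monte Carlo illustration of the phenomenon itself, using an invented re-estimation rule: because the final sample size depends on the interim variance estimate, the usual end-of-trial variance estimator comes out slightly below the true variance on average.

```python
import random
import statistics

def naive_variance_bias(n1=20, n_max=200, true_sd=1.0, delta_target=0.5,
                        n_sims=2000, seed=1):
    """Monte Carlo check of the bias discussed above: re-estimate the sample
    size from the first-stage variance, finish the trial, then compute the
    usual variance estimator at the end.  Its average falls slightly below
    the true variance; the re-estimation rule used here is only illustrative."""
    rng = random.Random(seed)
    end_vars = []
    for _ in range(n_sims):
        stage1 = [rng.gauss(0.0, true_sd) for _ in range(n1)]
        s1_sq = statistics.variance(stage1)
        # simple illustrative rule: total n grows with the interim variance
        n_total = min(n_max, max(n1, round(32 * s1_sq / delta_target ** 2)))
        stage2 = [rng.gauss(0.0, true_sd) for _ in range(n_total - n1)]
        end_vars.append(statistics.variance(stage1 + stage2))
    return statistics.mean(end_vars)      # compare with true_sd**2 = 1.0

print(round(naive_variance_bias(), 3))    # typically a bit below 1.0
```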

19.
This paper develops Bayesian sample size formulae for experiments comparing two groups, where relevant preexperimental information from multiple sources can be incorporated in a robust prior to support both the design and analysis. We use commensurate predictive priors for borrowing of information and further place Gamma mixture priors on the precisions to account for preliminary belief about the pairwise (in)commensurability between parameters that underpin the historical and new experiments. Averaged over the probability space of the new experimental data, appropriate sample sizes are found according to criteria that control certain aspects of the posterior distribution, such as the coverage probability or length of a defined density region. Our Bayesian methodology can be applied to circumstances that compare two normal means, proportions, or event times. When nuisance parameters (such as variance) in the new experiment are unknown, a prior distribution can further be specified based on preexperimental data. Exact solutions are available based on most of the criteria considered for Bayesian sample size determination, while a search procedure is described in cases for which there are no closed-form expressions. We illustrate the application of our sample size formulae in the design of clinical trials, where pretrial information is available to be leveraged. Hypothetical data examples, motivated by a rare-disease trial with an elicited expert prior opinion, and a comprehensive performance evaluation of the proposed methodology are presented.
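The commensurate and Gamma-mixture priors are beyond a short sketch. Assuming instead a known variance and a crude "effective prior observations" form of borrowing, the following shows the general mechanics of choosing n to control the length of a posterior credible interval for a difference of two normal means. The discount weight, historical sample size, and target length are all assumptions.

```python
from math import sqrt
from statistics import NormalDist

def bayes_n_for_interval_length(target_len, sigma, n_hist=0, w=1.0,
                                coverage=0.95):
    """Smallest per-group n so that the posterior credible interval for a
    difference of two normal means (known sigma) is no longer than target_len.
    Historical data contribute as if each group gained w*n_hist extra
    observations -- a crude stand-in for commensurate-prior borrowing,
    used here only to show the mechanics."""
    z = NormalDist().inv_cdf(0.5 + coverage / 2)
    n = 1
    while True:
        post_sd = sigma * sqrt(2.0 / (n + w * n_hist))   # posterior sd of the difference
        if 2 * z * post_sd <= target_len:
            return n
        n += 1

print(bayes_n_for_interval_length(target_len=0.4, sigma=1.0,
                                  n_hist=30, w=0.5))
```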

20.
Theoretical models are often applied to population genetic data sets without fully considering the effect of missing data. Researchers can deal with missing data by removing individuals that have failed to yield genotypes and/or by removing loci that have failed to yield allelic determinations, but despite their best efforts, most data sets still contain some missing data. As a consequence, realized sample size differs among loci, and this poses a problem for unbiased methods that must explicitly account for random sampling error. One commonly used solution for the calculation of contemporary effective population size (Ne) is to calculate the effective sample size as an unweighted mean or harmonic mean across loci. This is not ideal because it fails to account for the fact that loci with different numbers of alleles have different information content. Here we consider this problem for genetic estimators of contemporary effective population size (Ne). To evaluate bias and precision of several statistical approaches for dealing with missing data, we simulated populations with known Ne and various degrees of missing data. Across all scenarios, one method of correcting for missing data (fixed‐inverse variance‐weighted harmonic mean) consistently performed the best for both single‐sample and two‐sample (temporal) methods of estimating Ne and outperformed some methods currently in widespread use. The approach adopted here may be a starting point to adjust other population genetics methods that include per‐locus sample size components.
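A minimal sketch of the quantities being compared: the unweighted harmonic mean of per-locus sample sizes versus a weighted harmonic mean in which more informative loci receive larger weights. The allele-count weights below are only a stand-in; the paper's best-performing scheme uses fixed inverse-variance weights rather than the raw allele counts assumed here.

```python
def harmonic_mean(ns):
    """Unweighted harmonic mean of per-locus realized sample sizes."""
    return len(ns) / sum(1.0 / n for n in ns)

def weighted_harmonic_mean(ns, weights):
    """Weighted harmonic mean: loci carrying more information (e.g. more
    alleles, hence lower sampling variance) get larger weights."""
    return sum(weights) / sum(w / n for w, n in zip(weights, ns))

ns      = [48, 50, 44, 50, 39]   # realized sample size at each locus (hypothetical)
alleles = [12,  8, 15,  6, 10]   # allele counts used as crude information weights
print(round(harmonic_mean(ns), 2),
      round(weighted_harmonic_mean(ns, alleles), 2))
```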
