Similar Articles
Found 20 similar articles (search time: 562 ms)
1.
Binomial sampling based on the proportion of samples infested was investigated for estimating mean densities of citrus rust mite, Phyllocoptruta oleivora (Ashmead), and Aculops pelekassi (Keifer) (Acari: Eriophyidae), on oranges, Citrus sinensis (L.) Osbeck. Data for the investigation were obtained by counting the number of motile mites within 600 sample units (each unit a 1-cm² surface area per fruit) across a 4-ha block of trees (32 blocks total): five areas per 4 ha, five trees per area, 12 fruit per tree, and two samples per fruit. A significant (r² = 0.89) linear relationship was found between ln(-ln(1 - P0)) and ln(mean), where P0 is the proportion of samples with more than zero mites. The fitted binomial parameters adequately described a validation data set from a sampling plan consisting of 192 samples. Projections indicated the fitted parameters would apply to sampling plans with as few as 48 samples, but reducing sample size resulted in an increase of bootstrap estimates falling outside expected confidence limits. Although mite count data fit the binomial model, confidence limits for arithmetic mean predictions increased dramatically as the proportion of samples infested increased. Binomial sampling using a tally threshold of 0 therefore has less value when proportions of samples infested are large. Increasing the tally threshold to two mites marginally improved estimates at larger densities. Overall, binomial sampling for a general estimate of mite densities seemed to be a viable alternative to absolute counts of mites per sample for a grower using a low management threshold such as two or three mites per sample.
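The fitted relationship above — a linear regression of ln(-ln(1 - P0)) on ln(mean), inverted to predict mean density from the proportion of samples infested — can be sketched as follows. The calibration numbers here are invented for illustration, not taken from the study.

```python
import numpy as np

# Hypothetical calibration data: mean mites per sample unit and the
# proportion of sample units infested (P0) in each block.
mean_density = np.array([0.5, 1.2, 2.8, 5.4, 9.1, 15.0])
p_infested = np.array([0.18, 0.35, 0.58, 0.75, 0.88, 0.95])

# Empirical binomial model: ln(-ln(1 - P0)) = a + b * ln(mean).
x = np.log(mean_density)
y = np.log(-np.log(1.0 - p_infested))
b, a = np.polyfit(x, y, 1)  # slope b, intercept a

def estimate_mean(p0):
    """Invert the fitted line to predict mean density from P0."""
    return np.exp((np.log(-np.log(1.0 - p0)) - a) / b)

print(f"slope={b:.3f}, intercept={a:.3f}, "
      f"predicted mean at P0=0.5: {estimate_mean(0.5):.2f}")
```

In practice the regression would be refitted for each crop and sampling scheme, with confidence limits attached to the back-transformed means.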

2.
Clegg LX, Gail MH, Feuer EJ. Biometrics 2002;58(3):684-688.
We propose a new Poisson method to estimate the variance for prevalence estimates obtained by the counting method described by Gail et al. (1999, Biometrics 55, 1137-1144) and to construct a confidence interval for the prevalence. We evaluate both the Poisson procedure and the procedure based on the bootstrap proposed by Gail et al. in simulated samples generated by resampling real data. These studies show that both variance estimators usually perform well and yield coverages of confidence intervals at nominal levels. When the number of disease survivors is very small, however, confidence intervals based on the Poisson method have supranominal coverage, whereas those based on the procedure of Gail et al. tend to have below-nominal coverage. For these reasons, we recommend the Poisson method, which also reduces the computational burden considerably.
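A minimal sketch of the Poisson idea: treat the observed number of survivors as a Poisson count, so its variance is estimated by the count itself, and propagate that to a confidence interval for prevalence. The count and population size below are invented for illustration, not taken from Gail et al.

```python
import math

# Hypothetical inputs: survivors counted and the population size.
survivors = 42
population = 100_000

prevalence = survivors / population

# Poisson model: Var(count) is estimated by the count itself,
# so SE(prevalence) = sqrt(count) / N.
se = math.sqrt(survivors) / population

# Approximate 95% confidence interval (normal approximation).
z = 1.96
lo, hi = prevalence - z * se, prevalence + z * se
print(f"prevalence={prevalence:.5f}, 95% CI=({lo:.5f}, {hi:.5f})")
```

As the abstract notes, with very few survivors this normal-approximation interval misbehaves, and exact Poisson limits would be preferable there.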

3.
Mathematical approaches are not well established for calculating the upper confidence limit (UCL) of the mean of a set of concentration values that have been measured using a count-based analytical approach such as is commonly used for asbestos in air. This is because the uncertainty around the sample mean is determined not only by the authentic between-sample variation (sampling error), but also by random Poisson variation that occurs in the measurement of sample concentrations (measurement error). This report describes a computer-based application, referred to as CB-UCL, that supports the estimation of UCL values for asbestos and other count-based samples sets, with special attention to datasets with relatively small numbers of samples and relatively low counts (including datasets with all-zero count samples). Evaluation of the performance of the application with a range of test datasets indicates the application is useful for deriving UCL estimates for datasets of this type.

4.
Samples, such as raw waters, which contain large numbers of organisms need to be pre-diluted before culture in order to count or estimate the numbers present. This introduces a further approximation into the results obtained from routine sampling and laboratory procedures. Computers allow calculation of 95% confidence intervals (c.i.) for the estimated count in the pre-dilution sample. Although such confidence intervals can be wide, the variation in the density of organisms in the water source will be as large as, and probably much larger than, the confidence intervals suggest. Accordingly, it is not surprising that series of results from routine samples are highly variable.

5.
The effect of temperature and sampling interval on the accuracy of food consumption estimates based on stomach contents was studied using simulation. Three temporal patterns of feeding were considered (scattered throughout the day, one 5 h period or two 5 h periods), and gastric evacuation was modelled according to published values. Sampling intervals of 3 h gave reasonable food consumption estimates (2 to 19% error) at all temperatures. Similarly, sampling intervals as large as 12 h gave reasonable estimates of food consumption (1 to 20% error) when temperature was ≤10° C. At temperatures <5° C, even 24 h intervals (equivalent to one daily sampling) provided reasonable estimates of daily food consumption (2 to 19% error) for all but the highest gastric evacuation rate combined with one daily feeding period (47% error). The temperature effect on estimation error resulted from diminishing temporal fluctuations in stomach contents with slower gastric evacuation rates. It follows that sampling effort may be considerably reduced when estimating food consumption from stomach contents during periods of low temperature, such as the winter experienced by temperate fishes.

6.
Propensity-score matching is frequently used in the medical literature to reduce or eliminate the effect of treatment selection bias when estimating the effect of treatments or exposures on outcomes using observational data. In propensity-score matching, pairs of treated and untreated subjects with similar propensity scores are formed. Recent systematic reviews of the use of propensity-score matching found that the large majority of researchers ignore the matched nature of the propensity-score matched sample when estimating the statistical significance of the treatment effect. We conducted a series of Monte Carlo simulations to examine the impact of ignoring the matched nature of the propensity-score matched sample on Type I error rates, coverage of confidence intervals, and variance estimation of the treatment effect. We examined estimating differences in means, relative risks, odds ratios, rate ratios from Poisson models, and hazard ratios from Cox regression models. We demonstrated that accounting for the matched nature of the propensity-score matched sample tended to result in type I error rates that were closer to the advertised level compared to when matching was not incorporated into the analyses. Similarly, accounting for the matched nature of the sample tended to result in confidence intervals with coverage rates that were closer to the nominal level, compared to when matching was not taken into account. Finally, accounting for the matched nature of the sample resulted in estimates of standard error that more closely reflected the sampling variability of the treatment effect compared to when matching was not taken into account.
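The variance point in this abstract can be illustrated with a small simulation: when matched pairs share a common component, the standard error computed from within-pair differences is smaller than the naive two-independent-samples standard error. This is a generic sketch, not the authors' simulation design; all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical matched sample: each pair shares a component (induced by
# matching on the propensity score), plus independent noise.
n = 500
pair_effect = rng.normal(0.0, 1.0, n)
treated = 0.3 + pair_effect + rng.normal(0.0, 1.0, n)  # true effect 0.3
control = pair_effect + rng.normal(0.0, 1.0, n)

d = treated - control  # within-pair differences

# Paired SE: the shared pair component cancels in the differences.
se_paired = d.std(ddof=1) / np.sqrt(n)
# Naive SE: treats the two groups as independent samples.
se_unpaired = np.sqrt(treated.var(ddof=1) / n + control.var(ddof=1) / n)

print(f"paired SE={se_paired:.4f}, unpaired SE={se_unpaired:.4f}")
```

Because the naive standard error overstates the sampling variability here, tests and intervals that ignore matching are miscalibrated, which is the pattern the simulations in the abstract document.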

7.
Multiple regression of observational data is frequently used to infer causal effects. Partial regression coefficients are biased estimates of causal effects if unmeasured confounders are not in the regression model. The sensitivity of partial regression coefficients to omitted confounders is investigated with a Monte Carlo simulation. A subset of causal traits is "measured" and their effects are estimated using ordinary least squares regression and compared to their expected values. Three major results are: (1) the error due to confounding is much larger than that due to sampling, especially with large samples, (2) confounding error shrinks trivially with sample size, and (3) small true effects are frequently estimated as large effects. Consequently, confidence intervals from regression are poor guides to the true intervals, especially with large sample sizes. The addition of a confounder to the model improves estimates only 55% of the time. Results are improved with complete knowledge of the rank order of causal effects but even with this omniscience, measured intervals are poor proxies for true intervals if there are many unmeasured confounders. The results suggest that only under very limited conditions can we have much confidence in the magnitude of partial regression coefficients as estimates of causal effects.

8.
The seemingly straightforward task of analysing faecal egg counts resulting from laboratory procedures such as the McMaster technique has, in reality, a number of complexities. These include Poisson errors in the counting technique which result from eggs being randomly distributed in well mixed faecal samples. In addition, counts between animals in a single experimental or observational group are nearly always over-dispersed. We describe the R package "eggCounts" that we have developed that incorporates both sampling error and over-dispersion between animals to calculate the true egg counts in samples of faeces, the probability distribution of the true counts and summary statistics such as the 95% uncertainty intervals. Based on a hierarchical Bayesian framework, the software will also rigorously estimate the percentage reduction of faecal egg counts and the 95% uncertainty intervals of data generated by a faecal egg count reduction test. We have also developed a user friendly web interface that can be used by those with limited knowledge of the R statistical computing environment. We illustrate the package with three simulated data sets of faecal egg count reduction experiments.
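The two error sources named above can be mimicked without the package itself: gamma-distributed "true" egg density between animals (over-dispersion) with Poisson counting error on top, and a bootstrap percentile interval for the percentage reduction. This is a simplified stand-in for eggCounts' hierarchical Bayesian estimator, not its API, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_counts(n_animals, mean, dispersion):
    """Gamma 'true' egg density per animal (over-dispersion between
    animals), then Poisson counting error within the faecal sample."""
    true_epg = rng.gamma(dispersion, mean / dispersion, n_animals)
    return rng.poisson(true_epg)

# Hypothetical trial: 20 animals, a true reduction of 90%.
before = simulate_counts(20, mean=500.0, dispersion=0.7)
after = simulate_counts(20, mean=50.0, dispersion=0.7)

est = 100.0 * (1.0 - after.mean() / before.mean())

# Bootstrap percentile interval for the percentage reduction.
reductions = []
for _ in range(2000):
    b = rng.choice(before, size=before.size, replace=True)
    a = rng.choice(after, size=after.size, replace=True)
    if b.mean() > 0:
        reductions.append(100.0 * (1.0 - a.mean() / b.mean()))
lo, hi = np.percentile(reductions, [2.5, 97.5])

print(f"estimated reduction {est:.1f}% (bootstrap 95% interval "
      f"{lo:.1f}%-{hi:.1f}%)")
```

The width of the interval reflects both the Poisson counting error and the between-animal over-dispersion, which is exactly why ignoring either source understates the uncertainty of a reduction test.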

9.
Summary: Various methods for the estimation of populations of algae and other small freshwater organisms are described. A method of counting is described in detail. It is basically that of Utermöhl and uses an inverted microscope. If the organisms are randomly distributed, a single count is sufficient to obtain an estimate of their abundance and confidence limits for this estimate, even if pipetting, dilution or concentration are involved. The errors in the actual counting and in converting colony counts to cell numbers are considered and found to be small relative to the random sampling error. Data are also given for a variant of Utermöhl's method using a normal microscope, and for a method of using a haemocytometer for the larger plankton algae.

10.
Recently released data on non-cancer mortality in Japanese atomic bomb survivors are analysed using a variety of generalised relative risk models that take account of errors in estimates of dose to assess the dose-response at low doses. If linear-threshold, quadratic-threshold or linear-quadratic-threshold relative risk models (the dose-response is assumed to be linear, quadratic or linear-quadratic above the threshold, respectively) are fitted to the non-cancer data there are no statistically significant (p>0.10) indications of threshold departures from linearity, quadratic curvature or linear-quadratic curvature. These findings are true irrespective of the assumed magnitude of dosimetric error, between 25%–45% geometric standard deviations. In general, increasing the assumed magnitude of dosimetric error had little effect on the central estimates of the threshold, but somewhat widened the associated confidence intervals. If a power of dose model is fitted, there is little evidence (p>0.10) that the power of dose in the dose-response is statistically significantly different from 1, again irrespective of the assumed magnitude of dosimetric errors in the range 25%–45%. Again, increasing the size of the errors resulted in wider confidence intervals on the power of dose, without marked effect on the central estimates. In general these findings remain true for various non-cancer disease subtypes.

11.
12.
The fractal doubly stochastic Poisson process (FDSPP) model of molecular evolution, like other doubly stochastic Poisson models, agrees with the high estimates for the index of dispersion found from sequence comparisons. Unlike certain previous models, the FDSPP also predicts a positive geometric correlation between the index of dispersion and the mean number of substitutions. Such a relationship is statistically proven herein using comparisons between 49 mammalian genes. There is no characteristic rate associated with molecular evolution according to this model, but there is a scaling relationship in rates according to a fractal dimension of evolution. The FDSPP is a suitable replacement for the homogeneous Poisson process in tests of the lineage dependence of rates and in estimating confidence intervals for divergence times. As opposed to other fractal models, this model can be interpreted in terms of Darwinian selection and drift.

13.
Comparison of techniques used to count single-celled viable phytoplankton (cited: 1; self-citations: 0; citations by others: 1)
Four methods commonly used to count phytoplankton were evaluated based upon the precision of concentration estimates: Sedgewick Rafter and membrane filter direct counts, flow cytometry, and flow-based imaging cytometry (FlowCAM). Counting methods were all able to estimate the cell concentrations, categorize cells into size classes, and determine cell viability using fluorescent probes. These criteria are essential to determine whether discharged ballast water complies with international standards that limit the concentration of viable planktonic organisms based on size class. Samples containing unknown concentrations of live and UV-inactivated phytoflagellates (Tetraselmis impellucida) were formulated to have low concentrations (<100 mL⁻¹) of viable phytoplankton. All count methods used chlorophyll a fluorescence to detect cells and SYTOX fluorescence to detect nonviable cells. With the exception of one sample, the methods generated live and nonviable cell counts that were significantly different from each other, although estimates were generally within 100% of the ensemble mean of all subsamples from all methods. Overall, percent coefficient of variation (CV) among sample replicates was lowest in membrane filtration sample replicates, and CVs for all four counting methods were usually lower than 30% (although instances of ~60% were observed). Since all four methods were generally appropriate for monitoring discharged ballast water, ancillary considerations (e.g., ease of analysis, sample processing rate, sample size, etc.) become critical factors for choosing the optimal phytoplankton counting method.

14.
Background: The prevalence of Schistosoma mansoni infection is usually assessed by the Kato-Katz diagnostic technique. However, Kato-Katz thick smears have low sensitivity, especially for light infections. Egg count models fitted on individual-level data can adjust for the infection intensity-dependent sensitivity and estimate the 'true' prevalence in a population. However, application of these models is complex and there is a need for adjustments that can be done without modeling expertise. This study provides estimates of the 'true' S. mansoni prevalence from population summary measures of observed prevalence and infection intensity, using extensive simulations parametrized with data from different settings in sub-Saharan Africa.
Methodology: An individual-level egg count model was applied to Kato-Katz data to determine the S. mansoni infection intensity-dependent sensitivity for various sampling schemes. Observations in populations with varying forces of transmission were simulated, using standard assumptions about the distribution of worms and their mating behavior. Summary measures such as the geometric mean infection, arithmetic mean infection, and the observed prevalence of the simulations were calculated, and parametric statistical models were fitted to the summary measures for each sampling scheme. For validation, the simulation-based estimates were compared with an observational dataset not used to inform the simulation.
Principal findings: Overall, the sensitivity of Kato-Katz in a population varies according to the mean infection intensity. Using a parametric model that takes into account different sampling schemes, varying from a single Kato-Katz slide to triplicate slides over three days, both geometric and arithmetic mean infection intensities improve estimation of sensitivity. The relation between observed and 'true' prevalence is remarkably linear, and triplicate slides per day on three consecutive days ensure close to perfect sensitivity.
Conclusions/significance: Estimation of 'true' S. mansoni prevalence is improved when taking into account geometric or arithmetic mean infection intensity in a population. We supply parametric functions and corresponding estimates of their parameters to calculate the 'true' prevalence for sampling schemes of up to three days with triplicate Kato-Katz thick smears per day.
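Because the mapping from observed to 'true' prevalence is reported to be remarkably linear, the correction can be applied as a simple linear function once its parameters are known. The intercept and slope below are invented placeholders; the study supplies fitted parametric functions per sampling scheme.

```python
def true_prevalence(observed, intercept=0.02, slope=1.35):
    """Hypothetical linear correction from observed Kato-Katz prevalence
    to estimated 'true' prevalence, clamped to [0, 1]. The coefficients
    are placeholders, not the study's fitted values."""
    return min(1.0, max(0.0, intercept + slope * observed))

print(f"observed 40% -> estimated 'true' {true_prevalence(0.40):.0%}")
```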

15.
Many tests of the lineage dependence of substitution rates, computations of the error of evolutionary distances, and simulations of molecular evolution assume that the rate of evolution is constant in time within each lineage descended from a common ancestor. However, estimates of the index of dispersion of numbers of mammalian substitutions suggest that the rate has time-dependent variations consistent with a fractal-Gaussian-rate Poisson process, which assumes common descent without assuming rate constancy. While this model does not affect certain relative-rate tests, it substantially increases the uncertainty of branch lengths. Thus, fluctuations in the rate of substitution cannot be neglected in calculations that rely on evolutionary distances, such as the confidence intervals of divergence times and certain phylogenetic reconstructions. The fractal-Gaussian-rate Poisson process is compared and contrasted with previous models of molecular evolution, including other Poisson processes, the fractal renewal process, a Lévy-stable process, a fractional-difference process, and a log-Brownian process. The fractal models are more compatible with mammalian data than the nonfractal models considered, and they may also be better supported by Darwinian theory. Although the fractal-Gaussian-rate Poisson process has not been proven to have better agreement with data or theory than the other fractal models, its Gaussian nature simplifies the exploration of its impact on evolutionary distance errors and relative-rate tests. Received: 29 September 1999 / Accepted: 20 January 2000

16.
This is the first study to examine the abundance of naked amoebae in the water column of a mangrove stand. A total of 37 different morphotypes was noted and at least 13 of these are probably new species. Over a one-year sampling interval, amoebae averaged 35,400 cells liter⁻¹ (range 2,000-104,000) by an indirect enrichment cultivation method. Densities in the upper end of this range are the highest ever reported for any planktonic habitat. Variation between samples was related to the quantity of suspended aggregates (flocs) in the water column, emphasizing that amoebae are usually floc-associated. The study also showed that it is essential to disrupt floc material prior to withdrawing sample aliquots for the indirect counting method, since several amoebae can occupy the interstices of aggregates. There is concern that indirect enumeration methods that require organisms to be cultured in the laboratory seriously underestimate the true count. A direct counting method using acridine orange staining and epifluorescence microscopy was attempted to assess the possible magnitude of the error associated with indirect counting. While this direct method had limitations, notably the difficulty of unambiguously differentiating between small amoebae and nanoflagellates, the results suggested that the indirect method gave estimates that were close to the true count (within a factor of two). Mangrove waters are rich in heterotrophic protozoa (approximately 3 x 10 liter⁻¹) and while the heterotrophic flagellates are by far the dominant group, naked amoebae outnumber ciliates some 20-fold. The ecological consequences of high numbers of amoebae, particularly the common small forms less than 10 µm in length, need to be examined for these important coastal sites.

17.
We reconcile the findings of Holmes et al. (Ecology Letters, 10, 2007, 1182) that 95% confidence intervals for quasi-extinction risk were narrow for many vertebrates of conservation concern, with previous theory predicting wide confidence intervals. We extend previous theory, concerning the precision of quasi-extinction estimates as a function of population dynamic parameters, prediction intervals and quasi-extinction thresholds, and provide an approximation that specifies the prediction interval and threshold combinations where quasi-extinction estimates are precise (vs. imprecise). This allows PVA practitioners to define the prediction interval and threshold regions of safety (low risk with high confidence), danger (high risk with high confidence), and uncertainty.

18.
A graphical method of estimating the 90, 95, or 99% confidence intervals of direct microscopic counting data is presented. Construction of the graphs is based on the assumption that the normal distribution approximates the Poisson distribution. The method, useful on data obtained from dried films or counting chambers, eliminates time-consuming computation of precision. The minimal number of fields which must be counted to maintain a certain level of precision can be readily determined.
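The graphical method rests on the normal approximation to the Poisson distribution, so the same limits can be computed directly. A self-contained sketch, which also derives the minimal total count needed for a target relative precision:

```python
import math

def poisson_ci(count, z=1.96):
    """Approximate CI for a total microscopic count, using the normal
    approximation to the Poisson (z = 1.645, 1.96, 2.576 for 90/95/99%)."""
    half = z * math.sqrt(count)
    return count - half, count + half

def min_count(rel_precision, z=1.96):
    """Smallest total count keeping the relative half-width of the CI
    at or below rel_precision: z*sqrt(N)/N <= r  =>  N >= (z/r)**2."""
    return math.ceil((z / rel_precision) ** 2)

lo, hi = poisson_ci(400)  # e.g. 400 cells counted over all fields
print(f"95% CI: {lo:.1f}-{hi:.1f}; count needed for +/-10%: {min_count(0.10)}")
```

Counting fields until the running total reaches min_count is the arithmetic equivalent of reading the required number of fields off the published graphs.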

19.
Aims: In ecology and conservation biology, the number of species counted in a biodiversity study is a key metric but is usually a biased underestimate of total species richness because many rare species are not detected. Moreover, comparing species richness among sites or samples is a statistical challenge because the observed number of species is sensitive to the number of individuals counted or the area sampled. For individual-based data, we treat a single, empirical sample of species abundances from an investigator-defined species assemblage or community as a reference point for two estimation objectives under two sampling models: estimating the expected number of species (and its unconditional variance) in a random sample of (i) a smaller number of individuals (multinomial model) or a smaller area sampled (Poisson model) and (ii) a larger number of individuals or a larger area sampled. For sample-based incidence (presence–absence) data, under a Bernoulli product model, we treat a single set of species incidence frequencies as the reference point to estimate richness for smaller and larger numbers of sampling units.
Methods: The first objective is a problem in interpolation that we address with classical rarefaction (multinomial model) and Coleman rarefaction (Poisson model) for individual-based data and with sample-based rarefaction (Bernoulli product model) for incidence frequencies. The second is a problem in extrapolation that we address with sampling-theoretic predictors for the number of species in a larger sample (multinomial model), a larger area (Poisson model) or a larger number of sampling units (Bernoulli product model), based on an estimate of asymptotic species richness. Although published methods exist for many of these objectives, we bring them together here with some new estimators under a unified statistical and notational framework. This novel integration of mathematically distinct approaches allowed us to link interpolated (rarefaction) curves and extrapolated curves to plot a unified species accumulation curve for empirical examples. We provide new, unconditional variance estimators for classical, individual-based rarefaction and for Coleman rarefaction, long missing from the toolkit of biodiversity measurement. We illustrate these methods with datasets for tropical beetles, tropical trees and tropical ants.
Important findings: Surprisingly, for all datasets we examined, the interpolation (rarefaction) curve and the extrapolation curve meet smoothly at the reference sample, yielding a single curve. Moreover, curves representing 95% confidence intervals for interpolated and extrapolated richness estimates also meet smoothly, allowing rigorous statistical comparison of samples not only for rarefaction but also for extrapolated richness values. The confidence intervals widen as the extrapolation moves further beyond the reference sample, but the method gives reasonable results for extrapolations up to about double or triple the original abundance or area of the reference sample. We found that the multinomial and Poisson models produced indistinguishable results, in units of estimated species, for all estimators and datasets. For sample-based abundance data, which allows the comparison of all three models, the Bernoulli product model generally yields lower richness estimates for rarefied data than either the multinomial or the Poisson models because of the ubiquity of non-random spatial distributions in nature.
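Classical individual-based (multinomial/hypergeometric) rarefaction has a closed form: the expected richness in a subsample of m individuals is the total richness minus the expected number of species entirely missed. A minimal sketch, with an invented abundance vector:

```python
from math import comb

def rarefy(abundances, m):
    """Expected species richness in a random subsample of m individuals
    (classical individual-based rarefaction)."""
    n = sum(abundances)
    denom = comb(n, m)
    # A species with n_i individuals is missed when all m draws avoid it;
    # math.comb(n - ni, m) is 0 whenever m > n - ni.
    missed = sum(comb(n - ni, m) / denom for ni in abundances)
    return len(abundances) - missed

# Hypothetical reference sample: 5 species, 100 individuals in total.
sample = [50, 30, 10, 8, 2]
curve = [rarefy(sample, m) for m in (10, 20, 50, 100)]
print(", ".join(f"{s:.2f}" for s in curve))
```

Extrapolation beyond the reference sample additionally requires an estimate of asymptotic richness (e.g. a Chao-type estimator), which is what links the two halves of the unified curve described in the abstract.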

20.
We explore the estimation of uncertainty in evolutionary parameters using a recently devised approach for resampling entire additive genetic variance–covariance matrices (G). Large-sample theory shows that maximum-likelihood estimates (including restricted maximum likelihood, REML) asymptotically have a multivariate normal distribution, with covariance matrix derived from the inverse of the information matrix, and mean equal to the estimated G. This suggests that sampling estimates of G from this distribution can be used to assess the variability of estimates of G, and of functions of G. We refer to this as the REML-MVN method. This has been implemented in the mixed-model program WOMBAT. Estimates of sampling variances from REML-MVN were compared to those from the parametric bootstrap and from a Bayesian Markov chain Monte Carlo (MCMC) approach (implemented in the R package MCMCglmm). We apply each approach to evolvability statistics previously estimated for a large, 20-dimensional data set for Drosophila wings. REML-MVN and MCMC sampling variances are close to those estimated with the parametric bootstrap. Both slightly underestimate the error in the best-estimated aspects of the G matrix. REML analysis supports the previous conclusion that the G matrix for this population is full rank. REML-MVN is computationally very efficient, making it an attractive alternative to both data resampling and MCMC approaches to assessing confidence in parameters of evolutionary interest.
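The REML-MVN idea — resample parameter vectors of G from a multivariate normal centred on the REML estimate, then propagate each draw through any function of G — can be sketched for a 2 × 2 case. The point estimate and sampling covariance below are invented, and WOMBAT's actual parameter packing is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical REML estimate of a 2x2 G matrix, packed [G11, G12, G22],
# and its sampling covariance (from the inverse information matrix).
g_hat = np.array([1.0, 0.3, 0.5])
S = np.array([[0.020, 0.005, 0.002],
              [0.005, 0.010, 0.004],
              [0.002, 0.004, 0.015]])

def unpack(v):
    return np.array([[v[0], v[1]], [v[1], v[2]]])

# REML-MVN: draw parameter vectors from N(g_hat, S) and propagate to a
# statistic of interest -- here the leading eigenvalue of G (g_max).
draws = rng.multivariate_normal(g_hat, S, size=5000)
gmax = [np.linalg.eigvalsh(unpack(v))[-1] for v in draws]
lo, hi = np.percentile(gmax, [2.5, 97.5])

point = np.linalg.eigvalsh(unpack(g_hat))[-1]
print(f"g_max = {point:.3f}, REML-MVN 95% interval ({lo:.3f}, {hi:.3f})")
```

Because each draw costs only an eigendecomposition, this kind of resampling is far cheaper than refitting the mixed model, which is the efficiency argument the abstract makes for REML-MVN over the parametric bootstrap and MCMC.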


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号