Similar Literature
20 similar records found (search time: 15 ms)
1.
Several statistical methods have been proposed for estimating the infection prevalence based on pooled samples, but these methods generally presume the application of perfect diagnostic tests, which in practice do not exist. To optimize prevalence estimation based on pooled samples, currently available and new statistical models were described and compared. Three groups were tested: (a) frequentist models, (b) Markov chain Monte Carlo (MCMC) Bayesian models, and (c) Exact Bayesian Computation (EBC) models. Simulated data allowed the comparison of the models, including testing the performance under complex situations such as imperfect tests with a sensitivity varying according to the pool weight. In addition, all models were applied to data derived from the literature, to demonstrate the influence of the model on prevalence estimates from real data. All models were implemented in the freely available R and OpenBUGS software and are presented in Appendix S1. Bayesian models can flexibly take into account the imperfect sensitivity and specificity of the diagnostic test (as well as the influence of pool-related or external variables) and are therefore the method of choice for calculating population prevalence based on pooled samples. However, when using such complex models, very precise information on test characteristics is needed, which is often not available.
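The frequentist branch of such a comparison can be illustrated with a closed-form estimator. The sketch below (our own illustration, not one of the models in Appendix S1) inverts the pool-level detection probability P(pool+) = Se·(1−(1−p)^k) + (1−Sp)·(1−p)^k, a Rogan–Gladen-style correction, for pools of fixed size k:

```python
def pooled_prevalence(x_pos, n_pools, k, se=1.0, sp=1.0):
    """Adjusted frequentist estimate of individual-level prevalence
    from pooled test results (Rogan-Gladen-style correction).
    x_pos: positive pools; n_pools: pools tested; k: pool size;
    se/sp: sensitivity and specificity of the test at the pool level."""
    pi_hat = x_pos / n_pools                  # apparent pool-level prevalence
    # True pool-level negative fraction after correcting for the imperfect test
    q_k = (se - pi_hat) / (se + sp - 1.0)     # equals (1 - p)^k
    q_k = min(max(q_k, 0.0), 1.0)             # clamp to the valid range
    return 1.0 - q_k ** (1.0 / k)

# Perfect test: 12 of 50 pools of size 5 positive
p1 = pooled_prevalence(12, 50, 5)
# Same data, imperfect pool-level test (Se = 0.9, Sp = 0.98)
p2 = pooled_prevalence(12, 50, 5, se=0.9, sp=0.98)
```

With a perfect test the estimate is about 5.3%; correcting for the imperfect test raises it slightly, because some true pool positives were missed.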

2.
1. Studies of large carnivore populations and, particularly, reliable estimates of population density are necessary for effective conservation management. However, these animals are difficult to study, and direct methods of assessing population size and density are often expensive and time-consuming.
2. Indirect sampling, by counting spoor, could provide repeatable and inexpensive measures of some population parameters. The relationship between true population density and indirect sampling results has seldom been described in large carnivore studies.
3. In northern Namibia the population densities of leopards, lions and wild dogs were measured through recognition of individuals and groups. Spoor counts were then conducted independently, to assess the relationship between true density and the distribution of spoor.
4. Sampling effort, both in terms of the number of roads and total road distance in a sample zone, and the intensity of sampling, had a marked effect on the accuracy and precision of spoor frequency calculations.
5. In a homogeneous habitat, leopard spoor were evenly spread along different roads and spoor frequency was independent of road length. Allowing for the very small sample sizes, the spoor density of leopards, lions and wild dogs showed a strong linear correlation with true density. The slope of the regression for leopards was different from that for lions and wild dogs.
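The calibration idea in points 3–5 — regressing spoor frequency on independently known density, then inverting the fitted line to estimate density from spoor counts alone — can be sketched with ordinary least squares. The numbers below are hypothetical, not the Namibian field data:

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for a simple
    spoor-frequency vs. true-density calibration."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical survey zones: true density (animals/100 km^2) vs
# spoor frequency (tracks/100 km of road)
true_density = [1.0, 2.0, 3.0, 4.0, 5.0]
spoor_freq   = [3.1, 5.9, 9.2, 11.8, 15.0]
slope, intercept = linear_fit(true_density, spoor_freq)
# Invert the calibration to estimate density from a new spoor count
est_density = (10.0 - intercept) / slope
```

Because the regression slopes differed between species, a separate calibration would be needed for each.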

3.
Clegg LX, Gail MH, Feuer EJ. Biometrics 2002, 58(3):684-688
We propose a new Poisson method to estimate the variance for prevalence estimates obtained by the counting method described by Gail et al. (1999, Biometrics 55, 1137-1144) and to construct a confidence interval for the prevalence. We evaluate both the Poisson procedure and the procedure based on the bootstrap proposed by Gail et al. in simulated samples generated by resampling real data. These studies show that both variance estimators usually perform well and yield coverages of confidence intervals at nominal levels. When the number of disease survivors is very small, however, confidence intervals based on the Poisson method have supranominal coverage, whereas those based on the procedure of Gail et al. tend to have below-nominal coverage. For these reasons, we recommend the Poisson method, which also reduces the computational burden considerably.
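The core of the Poisson approach can be sketched as follows. This shows the general idea — the variance of the prevalent-case count is taken equal to the count itself — and is not necessarily the exact estimator of the paper:

```python
import math

def poisson_prevalence_ci(count, population, z=1.96):
    """Approximate confidence interval for prevalence when the number
    of prevalent cases is modelled as Poisson (Var(count) = count).
    Returns (point estimate, lower bound, upper bound)."""
    p_hat = count / population
    se = math.sqrt(count) / population   # Poisson standard error on the scale of p
    return p_hat, max(p_hat - z * se, 0.0), p_hat + z * se

# Hypothetical registry: 400 prevalent cases in a population of 100,000
p, lo, hi = poisson_prevalence_ci(400, 100_000)
```

The normal approximation used here degrades when the count is very small, which is exactly the regime the abstract flags as producing supranominal coverage.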

4.
We introduce a method of estimating disease prevalence from case–control family study data. Case–control family studies are performed to investigate the familial aggregation of disease; families are sampled via either a case or a control proband, and the resulting data contain information on disease status and covariates for the probands and their relatives. Here, we introduce estimators for overall prevalence and for covariate-stratum-specific (e.g., sex-specific) prevalence. These estimators combine the proportion of affected relatives of control probands with the proportion of affected relatives of case probands and are designed to yield approximately unbiased estimates of their population counterparts under certain commonly made assumptions. We also introduce corresponding confidence intervals designed to have good coverage properties even for small prevalences. Next, we describe simulation experiments where our estimators and intervals were applied to case–control family data sampled from fictional populations with various levels of familial aggregation. At all aggregation levels, the resulting estimates varied closely and symmetrically around their population counterparts, and the resulting intervals had good coverage properties, even for small sample sizes. Finally, we discuss the assumptions required for our estimators to be approximately unbiased, highlighting situations where an alternative estimator based only on relatives of control probands may perform better.

5.
To obtain accurate estimates of activity budget parameters, samples must be unbiased and precise. Many researchers have considered how biased data may affect their ability to draw conclusions and examined ways to decrease bias in sampling efforts, but few have addressed the implications of not considering estimate precision. We propose a method to assess whether the number of instantaneous samples collected is sufficient to obtain precise activity budget parameter estimates. We draw on sampling theory to determine the number of observations per animal required to reach a desired bound on the error of estimation based on a stratified random sample, with individual animals acting as strata. We also discuss the optimal balance between the number of individuals sampled and the number of observations sampled per individual for a variety of sampling conditions. We present an empirical dataset on pronghorn (Antilocapra americana) as an example of the utility of the method. The required number of observations to reach precise estimates for pronghorn varied between common and rare behaviors, but precise estimates were achieved with <255 observations per individual for common behaviors. The two most apparent factors affecting the required number of observations for precise estimates were the number of individuals sampled and the complexity of the activity budget. This technique takes into account variation associated with individual activity budgets and population variation in activity budget parameter estimates, and helps to ensure that estimates are precise. The method can also be used for planning future sampling efforts.
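Under a normal approximation, the per-animal sample size needed to bound the error of a behavioural proportion can be sketched as below. The expected proportions and error bounds are hypothetical, and the stratified-design refinements of the paper are omitted:

```python
import math

def required_observations(p_expected, bound, z=1.96):
    """Number of instantaneous samples needed so the estimated
    proportion of time in a behaviour falls within `bound` of the
    true value with ~95% confidence (normal approximation; each
    animal treated as its own stratum and sized separately)."""
    p = p_expected
    return math.ceil(z**2 * p * (1 - p) / bound**2)

# A common behaviour (~40% of time) estimated to within +/- 7.5 points
n_common = required_observations(0.40, 0.075)
# A rare behaviour (~5% of time) estimated to within +/- 2.5 points
n_rare = required_observations(0.05, 0.025)
```

With these invented inputs the common behaviour needs 164 observations — consistent with the abstract's finding that fewer than 255 per individual sufficed for common behaviours — while the tighter bound on the rare behaviour demands more.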

6.
Accurately estimating infection prevalence is fundamental to the study of population health, disease dynamics, and infection risk factors. Prevalence is estimated as the proportion of infected individuals (“individual‐based estimation”), but is also estimated as the proportion of samples in which evidence of infection is detected (“anonymous estimation”). The latter method is often used when researchers lack information on individual host identity, which can occur during noninvasive sampling of wild populations or when the individual that produced a fecal sample is unknown. The goal of this study was to investigate biases in individual‐based versus anonymous prevalence estimation theoretically and to test whether mathematically derived predictions are evident in a comparative dataset of gastrointestinal helminth infections in nonhuman primates. Using a mathematical model, we predict that anonymous estimates of prevalence will be lower than individual‐based estimates when (a) samples from infected individuals do not always contain evidence of infection and/or (b) when false negatives occur. The mathematical model further predicts that no difference in bias should exist between anonymous estimation and individual‐based estimation when one sample is collected from each individual. Using data on helminth parasites of primates, we find that anonymous estimates of prevalence are significantly and substantially (12.17%) lower than individual‐based estimates of prevalence. We also observed that individual‐based estimates of prevalence from studies employing single sampling are on average 6.4% higher than anonymous estimates, suggesting a bias toward sampling infected individuals. We recommend that researchers use individual‐based study designs with repeated sampling of individuals to obtain the most accurate estimate of infection prevalence. 
Moreover, to ensure accurate interpretation of their results and to allow for prevalence estimates to be compared among studies, it is essential that authors explicitly describe their sampling designs and prevalence calculations in publications.
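The predicted bias can be reproduced in a small Monte-Carlo sketch, with all parameter values hypothetical: when infected hosts shed intermittently (detection probability below 1), the proportion of positive samples falls below the proportion of infected hosts.

```python
import random

def compare_prevalence_estimates(n_hosts=1000, true_prev=0.3,
                                 samples_per_host=3, p_detect=0.6, seed=1):
    """Returns (anonymous estimate, individual-based estimate).
    Anonymous = proportion of positive samples; individual-based =
    proportion of hosts with at least one positive sample."""
    rng = random.Random(seed)
    pos_samples = total_samples = pos_hosts = 0
    for _ in range(n_hosts):
        infected = rng.random() < true_prev
        host_positive = False
        for _ in range(samples_per_host):
            detected = infected and rng.random() < p_detect
            pos_samples += detected
            total_samples += 1
            host_positive = host_positive or detected
        pos_hosts += host_positive
    return pos_samples / total_samples, pos_hosts / n_hosts

anon, indiv = compare_prevalence_estimates()
```

With these settings the anonymous estimate converges to roughly 0.3 × 0.6 = 0.18, while repeated sampling pushes the individual-based estimate toward 0.3 × (1 − 0.4³) ≈ 0.28 — the direction of bias the comparative dataset showed.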

7.
Pharmaceutical pregnancy registries document birth defects and other complications reported in pregnancies exposed to specific medications or diseases. A baseline estimate of birth defect prevalence is necessary for comparison. To identify potential teratogenic signals, the pregnancy registry must have a comparator that most closely matches the exposed population and data collection methodology, which are characteristics that vary among the multiplicity of birth defect surveillance systems. The system that yields the most accurate prevalence data may be different from that most closely matching the pregnancy registry methods. State public health programs have highly accurate and precise statistics, but their populations are broader than those of a pharmaceutical pregnancy registry. Large collaborative databases may have a more useful covered population, but there are secondary problems related to data precision. Health care databases enroll large numbers of patients and have good information about exposures and health problems, but the data can be difficult to access and lack useful detail. Exposure-related databases are closer in population definition and collection methods, though the presence of different diseases and exposures can be problematic. Internal comparators are likely to be most useful in formal statistical analysis, but they add cost and management burden and may require significantly increased registry enrollment. There is no ideal comparator, and this must be taken into account when planning a single-exposure or single-disease pregnancy registry. Birth Defects Research (Part A), 2009. © 2009 Wiley-Liss, Inc.

8.
Five methods to assess percolation rate from alternative earthen final covers (AEFCs) are described in the context of the precision with which the percolation rate can be estimated: trend analysis, tracer methods, the water balance method, Darcy's Law calculations, and lysimetry. Trend evaluation of water content data is the least precise method because it cannot be used alone to assess the percolation rate. The precision of percolation rates estimated using tracer methods depends on the tracer concentration, the percolation rate, and the sensitivity of the chemical extraction and analysis methods. Percolation rates determined using the water balance method have a precision of approximately 100 mm/yr in humid climates and 50 mm/yr in semiarid and drier climates, which is too large to demonstrate that an AEFC is meeting a typical equivalency criterion (30 mm/yr or less). In most cases, the precision will be much poorer. Percolation rates computed using Darcy's Law with measured profiles of water content and matric suction typically have a precision that is about two orders of magnitude (or more) greater than the computed percolation rate. The Darcy's Law method can only be used for performance assessment if the estimated percolation rate is much smaller than the equivalency criterion and preferential flow is not present. Lysimetry provides the most precise estimates of percolation rate, but the precision depends on the method used to measure the collected water. The lysimeter used in the Alternative Cover Assessment Program (ACAP), which is described in this paper, can be used to estimate percolation rates with a precision between 0.00004 and 0.5 mm/yr, depending on the measurement method and the flow rates.

9.
We consider the estimation of the prevalence of a rare disease and of the log-odds ratio for two specified groups of individuals from group testing data. For a low-prevalence disease, the maximum likelihood estimate of the log-odds ratio is severely biased. However, the Firth correction to the score function leads to a considerable improvement of the estimator. Also, for a low-prevalence disease, if the diagnostic test is imperfect, group testing is found to yield a more precise estimate of the log-odds ratio than individual testing.

10.
Whether the aim is to diagnose individuals or estimate prevalence, many epidemiological studies have demonstrated the successful use of tests on pooled sera. These tests detect whether at least one sample in the pool is positive. Although originally designed to reduce diagnostic costs, testing pools also lowers false positive and negative rates in low prevalence settings and yields more precise prevalence estimates. Current methods are aimed at estimating the average population risk from diagnostic tests on pools. In this article, we extend the original class of risk estimators to adjust for covariates recorded on individual pool members. Maximum likelihood theory provides a flexible estimation method that handles different covariate values in the pool, different pool sizes, and errors in test results. In special cases, software for generalized linear models can be used. Pool design has a strong impact on precision and cost efficiency, with covariate-homogeneous pools carrying the largest amount of information. We perform joint pool and sample size calculations using information from individual contributors to the pool and show that a good design can severely reduce cost and yet increase precision. The methods are illustrated using data from a Kenyan surveillance study of HIV. Compared to individual testing, age-homogeneous, optimal-sized pools of average size seven reduce cost to 44% of the original price with virtually no loss in precision.
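For the special case of covariate-homogeneous pools of equal size and a perfect test, the estimator has a closed form, and a delta-method standard error illustrates why pooling loses little precision. The counts below are invented, not the Kenyan surveillance data:

```python
import math

def pooled_estimate_with_se(x_pos, n_pools, k):
    """Covariate-homogeneous pooled design, perfect test assumed:
    P(pool+) = 1 - (1 - p)^k, inverted in closed form, with a
    delta-method standard error. A sketch, not the full maximum
    likelihood machinery of the paper."""
    pi = x_pos / n_pools                           # pool-level prevalence
    p = 1.0 - (1.0 - pi) ** (1.0 / k)
    se_pi = math.sqrt(pi * (1.0 - pi) / n_pools)   # binomial SE on pi
    dp_dpi = (1.0 / k) * (1.0 - pi) ** (1.0 / k - 1.0)
    return p, se_pi * dp_dpi

# Hypothetical age-homogeneous pools of size 7: 60 pools (420 sera), 9 positive
p_hat, se_hat = pooled_estimate_with_se(9, 60, 7)
```

Here 60 tests stand in for 420 individual tests, which is the cost saving the abstract describes; fitting covariates would replace the closed-form inversion with a GLM on the pool-level outcomes.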

11.
Many human diseases are characterized by multiple stages of progression. While the typical sequence of disease progression can be identified, there may be large individual variations among patients. Identifying mean stage durations and their variations is critical for statistical hypothesis testing needed to determine if treatment is having a significant effect on the progression, or if a new therapy is showing a delay of progression through a multistage disease. In this paper we focus on two methods for extracting stage duration statistics from longitudinal datasets: an extension of the linear regression technique, and a counting algorithm. Both are non-iterative, non-parametric and computationally cheap methods, which makes them invaluable tools for studying the epidemiology of diseases, with a goal of identifying different patterns of progression by using bioinformatics methodologies. Here we show that the regression method performs well for calculating the mean stage durations under a wide variety of assumptions; however, its generalization to variance calculations fails under realistic assumptions about the data collection procedure. On the other hand, the counting method yields reliable estimations for both means and variances of stage durations. Applications to Alzheimer disease progression are discussed.
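One way a non-parametric counting approach can work is sketched below: completed sojourns in a stage are read directly off regularly spaced longitudinal records, discarding spells censored at either end of follow-up. This is our own illustration of the idea, not the authors' algorithm:

```python
def stage_durations(visits, stage):
    """Extract completed sojourn times in `stage` from regularly
    spaced longitudinal records (one list of stage labels per
    patient, one visit per time unit). Only sojourns with an
    observed entry AND exit count; spells censored at either end
    of follow-up are dropped."""
    durations = []
    for record in visits:
        run = 0
        for i, s in enumerate(record):
            if s == stage:
                run += 1
            elif run:
                if run < i:          # entry observed (not left-censored)
                    durations.append(run)
                run = 0
        # a run still open at the end of follow-up is right-censored: dropped
    return durations

# Three hypothetical patients, stages coded 0 (mild) .. 2 (severe)
records = [[0, 0, 1, 1, 1, 2, 2],
           [0, 1, 1, 2, 2, 2, 2],
           [1, 1, 1, 1, 2, 2, 2]]
d = stage_durations(records, 1)
mean_d = sum(d) / len(d)
```

The third patient's stage-1 spell is dropped because it began before follow-up, so only the two fully observed sojourns (3 and 2 visits) contribute; from such samples both a mean and a variance can be computed directly.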

12.
Methods for the analysis of individually matched case-control studies with location-specific radiation dose and tumor location information are described. These include likelihood methods for analyses that just use cases with precise location of tumor information and methods that also include cases with imprecise tumor location information. The theory establishes that each of these likelihood based methods estimates the same radiation rate ratio parameters, within the context of the appropriate model for location and subject level covariate effects. The underlying assumptions are characterized and the potential strengths and limitations of each method are described. The methods are illustrated and compared using the WECARE study of radiation and asynchronous contralateral breast cancer.

13.
We present ways to test the assumptions of the Petersen and removal methods of population size estimation and ways to adjust the estimates if violations of the assumptions are found. We were motivated by the facts that (1) results of using both methods are commonly reported without any reference to the testing of assumptions, (2) violations of the assumptions are more likely to occur than not to occur in natural populations, and (3) the estimates can be grossly in error if assumptions are violated. We recognize that in many cases two days in the field is the most time fish biologists can spend in obtaining a population estimate, so the use of alternative models of population estimation that require fewer assumptions is precluded. Hence, for biologists operating with these constraints and only these biologists, we describe and recommend a two-day technique that combines aspects of both capture-recapture and removal methods. We indicate how to test most of the assumptions of both methods and how to adjust the population estimates obtained if violations of the assumptions occur. We also illustrate the use of this combined method with data from a field study. The results of this application further emphasize the importance of testing the assumptions of whatever method is used and making appropriate adjustments to the population size estimates for any violations identified.
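The two estimators being combined can be sketched in their simplest forms — Chapman's bias-corrected version of the Lincoln–Petersen estimator and a two-pass removal estimate. Both forms assume equal catchability, one of the very assumptions the paper shows how to test; the catch numbers below are invented:

```python
def petersen_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimate of
    population size: n1 marked on day 1, n2 examined on day 2,
    m2 of those carrying marks."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

def removal_estimate(c1, c2):
    """Two-pass removal estimate: assumes equal capture probability
    on both passes, giving N = c1^2 / (c1 - c2)."""
    if c1 <= c2:
        raise ValueError("second-pass catch must be smaller than the first")
    return c1 * c1 / (c1 - c2)

n_petersen = petersen_estimate(120, 100, 30)   # marked, caught, recaptured
n_removal = removal_estimate(150, 90)          # first- and second-pass catches
```

When assumption tests fail (e.g. trap-shy fish depress m2 or inflate c2), both formulas err in predictable directions, which is what motivates the adjustments the paper recommends.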

14.
This study presents two efficient algorithms – combinatorial and probabilistic combinatorial methods (CM and PCM) – for estimating the number of precise discharge patterns that occur by chance in records of multiple single-unit spike trains. The confidence limits estimated by these methods are in good agreement with different sets of simulated test data as well as with the ad-hoc method. Both combinatorial methods provided a better accuracy than the bootstrap algorithm, and in most cases of nonstationary data PCM provided better estimations than the ad-hoc method. Introduction of a jitter for searching patterns with a precision of a few milliseconds, and burst filtering, may introduce biases in the estimations. Comparison of a new filtering procedure based upon a filtering frequency with previously described schemes of filtering indicates the possibility of using a simple setting which remains accurate over a wide range of parameters. We aim to implement a combination of PCM for estimating the number of patterns formed by three to seven spikes and CM for higher-order complexities, for estimations during experiments in progress. Received: 12 June 1995 / Accepted in revised form: 5 February 1997

15.
Cryptococcal meningitis (CM), a fungal disease caused by Cryptococcus species, is one of the most common opportunistic infections among persons with HIV/AIDS. The highest burden of disease is in sub-Saharan Africa and Southeast Asia, where limited access to antiretroviral treatment and appropriate antifungal therapy contributes to high mortality rates. Increasing focus has been placed on earlier detection and prevention of disease. Primary prophylaxis and screening may provide a survival benefit and can be cost-effective in settings where CM prevalence is high. The development of a new point-of-care cryptococcal antigen assay has the potential to transform both disease prevention and diagnosis.

16.
Two statistical methods for determining the precision of best-fit model parameters generated from chemical rate of release data are discussed. One method uses the likelihood theory to estimate marginal confidence intervals and joint confidence regions of the release model parameters. The other method uses Monte Carlo simulation to estimate statistical inferences for the release model parameters. Both methods were applied to a set of rate of release data that was generated using a field soil. The results of this evaluation indicate that the precision of F (the fraction of a chemical in a soil that is released quickly) is greater than the precision of k1 (the rate constant describing fast release), which is greater than the precision of k2 (the rate constant describing slow release). This occurs because more data are taken during the time period described by F and k1 than during the time period described by F and k2. In general, estimates of F will be relatively precise when the ratio of k1 to k2 is large, estimates of k1 for soil/chemical matrices with a high F will be relatively precise, and estimates of k2 for soil/chemical matrices with a low F will be relatively precise, provided that sufficient time is allowed for full release.
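The underlying two-compartment model, and the reason k2 is the least precisely estimated parameter, can be illustrated with a crude grid-search fit on synthetic data. All parameter values are invented, and the paper's likelihood and Monte Carlo machinery is not reproduced:

```python
import math
import random

def released(t, F, k1, k2):
    """Two-compartment release model: fraction of chemical released
    by time t from a fast pool F (rate k1) and a slow pool 1-F (rate k2)."""
    return F * (1 - math.exp(-k1 * t)) + (1 - F) * (1 - math.exp(-k2 * t))

def fit_k2(times, obs, F, k1, grid=None):
    """Crude 1-D grid-search fit of the slow rate constant k2,
    holding F and k1 fixed: only the late-time points constrain k2,
    which is why its precision is the poorest."""
    grid = grid or [i / 1000 for i in range(1, 201)]
    return min(grid, key=lambda k2: sum((released(t, F, k1, k2) - y) ** 2
                                        for t, y in zip(times, obs)))

# Synthetic data from F=0.7, k1=0.5, k2=0.02 with small measurement noise
rng = random.Random(0)
ts = [1, 2, 4, 8, 16, 32, 64]
ys = [released(t, 0.7, 0.5, 0.02) + rng.gauss(0, 0.01) for t in ts]
k2_hat = fit_k2(ts, ys, F=0.7, k1=0.5)
```

Repeating the fit over many noise realizations (the Monte Carlo approach) would map out the sampling distribution of k2_hat, which is how the paper's second method derives its inferences.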

17.
Estimation of the survival rate through a gonotrophic cycle is an important factor in determining the vectorial capacity of a population of haematophagous insects in a disease cycle. Most methods used to calculate survival rates make stringent assumptions which may not be valid for all species. Birley and colleagues proposed a time series analysis of samples collected over several consecutive days, the lagged parous rate. Here, we use a simulation model to investigate (i) the length of data series needed and (ii) the consequences of failures in the assumptions of this method for the estimated survival rate. The accuracy of the estimated survival rate per cycle was high with sample periods of 10-100 days. The standard deviation (a measure of precision) decreased with the length of the sample period. When random sampling efficiency was included, the accuracy remained high but the estimates were less precise (larger standard deviations). If the sampling was biased in favour of either nulliparous or parous females, estimates of the survival rate were not accurate. The relationship between estimated survival rate, bias in collection, and true survival rate was non-linear. Thus, correction for the bias requires (i) prior knowledge of the direction and the severity of the bias and (ii) an independent estimate of the survival rate. This method of estimating survival rates is less accurate when the collection method is biased for or against parous females, although robust to other assumptions.
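The lagged parous rate idea can be sketched as a regression through the origin of parous catches on total catches one gonotrophic cycle earlier. The simulation below is a simplified illustration of that relationship, not the published estimator:

```python
import random

def lagged_parous_survival(total, parous, lag):
    """Lagged-parous-rate sketch: regress the number of parous females
    caught on day t against the total catch `lag` days earlier
    (regression through the origin). The slope estimates per-cycle
    survival."""
    pairs = [(total[t - lag], parous[t]) for t in range(lag, len(total))]
    return sum(x * y for x, y in pairs) / sum(x * x for x, _ in pairs)

# Simulate 60 days of catches with true per-cycle survival 0.6 and a
# 3-day cycle: each member of the cohort catchable one cycle ago
# survives (and so is parous today) with probability 0.6
rng = random.Random(42)
true_s, lag = 0.6, 3
total = [rng.randint(80, 120) for _ in range(60)]
parous = [0] * 60
for t in range(lag, 60):
    parous[t] = sum(rng.random() < true_s for _ in range(total[t - lag]))
s_hat = lagged_parous_survival(total, parous, lag)
```

With unbiased sampling the slope recovers the true survival closely; biasing the catches toward parous or nulliparous females would shift the slope, which is the failure mode the abstract reports.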

18.
Cancer Epidemiology 2014, 38(2):193-199
Objectives: We present a new method for determining prevalence estimates, together with estimates of their precision, from incidence and survival data using Monte-Carlo simulation techniques. The algorithm also provides for the incidence process to be marked with the values of subject-level covariates, facilitating calculation of the distribution of these variables in prevalent cases.
Methods: Disease incidence is modelled as a marked stochastic process and simulations are made from this process. For each simulated incident case, the probability of remaining in the prevalent sub-population is calculated from bootstrapped survival curves. This algorithm is used to determine the distribution of prevalence estimates and of the ancillary data associated with the marks of the incidence process. This is then used to determine prevalence estimates and estimates of their precision, together with estimates of the distribution of ancillary variables in the prevalent sub-population. The technique is illustrated by determining the prevalence of acute myeloid leukaemia from data held in the Haematological Malignancy Research Network (HMRN). In addition, the precision of these estimates is determined and the age distribution of prevalent cases diagnosed within twenty years of the prevalence index date is calculated.
Conclusion: Determining prevalence estimates by Monte-Carlo simulation provides a means of calculation more flexible than traditional techniques. In addition to automatically providing precision estimates for the prevalence estimates, the distribution of any measured subject-level variables can be calculated for the prevalent sub-population. Temporal changes in incidence and in survival offer no difficulties for the method.
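The simulation scheme can be sketched as follows: annual incident cohorts are drawn from a Poisson process, each case is retained at the index date according to a survival function, and the spread across replicates gives the precision estimate. The incidence rate, horizon, and survival curve below are invented, and the bootstrapping of survival curves is omitted:

```python
import math
import random

def _poisson(rng, lam):
    """Knuth's inversion sampler for a Poisson draw."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p < l:
            return k
        k += 1

def simulate_prevalence(incidence_per_year, years, surv_prob,
                        n_sims=200, seed=7):
    """Monte-Carlo prevalence sketch: returns (mean prevalent count,
    standard deviation across replicates as a precision estimate).
    surv_prob(a) = probability a case diagnosed a years before the
    index date is still in the prevalent sub-population."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_sims):
        prevalent = 0
        for a in range(years):                      # a = years before index date
            n_inc = _poisson(rng, incidence_per_year)
            prevalent += sum(rng.random() < surv_prob(a) for _ in range(n_inc))
        estimates.append(prevalent)
    mean = sum(estimates) / n_sims
    var = sum((e - mean) ** 2 for e in estimates) / (n_sims - 1)
    return mean, var ** 0.5

# 50 incident cases/year over 20 years, geometric survival S(a) = 0.8^a
mean_prev, sd_prev = simulate_prevalence(50, 20, lambda a: 0.8 ** a)
```

Marking each simulated case with covariates such as age at diagnosis would let the same replicates yield the covariate distribution among prevalent cases, as the abstract describes.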

19.
Applied algal studies typically require enumeration of preserved cells. As applications of algal assessments proliferate, understanding the sources of variability inherent in the methods by which abundance and species composition data are obtained becomes even more important for precision of measurements. We performed replicate counts of diatoms on permanently fixed coverglasses and of all algae in Palmer–Maloney chambers to assess the precision and accuracy of measurements derived from common counting methods. We counted diatoms and all algae with transects and random fields. Variability estimates (precision) of diatom density, species diversity, and species composition on permanent coverglasses were low between replicate subsamples and between replicate transects. However, average density estimates of diatoms settled on coverglasses determined with transect methods were 42–52% greater than density estimates made with random fields. This bias was due to a predictable, nonrandom distribution of diatoms on the coverglass, with few diatoms near edges. Despite the bias in density when counting diatoms along coverglass transects, no bias was observed in estimates of species composition. Estimates of density and taxa richness of all algae in Palmer–Maloney chambers also had low variability among multiple transects and high similarity in species composition between transects. In addition, counting method in Palmer–Maloney chambers did not affect estimates of algal cell density, taxa richness, and species composition, which suggested that counting units were distributed randomly in the chambers. Thus, most sources of variability in sample preparation and analysis are small; however, transect counts should not be used to estimate cell density, and sufficient numbers of random fields must be counted to account for edge effects on cell distribution with material settled on permanently fixed coverglasses.

20.
An accurate and precise method was developed using HPLC-MS/MS to quantify erlotinib (OSI-774) and its O-desmethyl metabolite, OSI-420, in plasma. The advantages of this method include the use of a small sample volume, liquid-liquid extraction with high extraction efficiency and short chromatographic run times. The analytes were extracted from 100 microL plasma volume using hexane:ethyl acetate after midazolam was added to the sample for internal standardization. The compounds were separated on a Phenomenex C-18 Luna analytical column with acetonitrile:5 mM ammonium acetate as the mobile phase. All compounds were monitored by tandem mass spectrometry with electrospray positive ionization. The intra-day accuracy and precision (% coefficient of variation, % CV) estimates for erlotinib at 10 ng/mL were 90% and 9%, respectively. The intra-day accuracy and precision estimates for OSI-420 at 5 ng/mL were 80% and 4%, respectively. This method was used to quantify erlotinib and OSI-420 in plasma of patients (n=21) administered 150 mg erlotinib per day for non-small cell lung cancer.
