Similar Literature
20 similar records found.
1.
To reduce the costs of using the ELITEST-MVV, we explored the possibilities of sample pooling. Straightforward pooling under the manufacturer's test conditions resulted in a significant loss of sensitivity. This was solved by using lower pre-dilutions of the samples than prescribed. Although an increase in background signal was encountered, discrimination between positive and negative samples was even better at pre-dilutions up to 12.5× than at the standard pre-dilution of 100×. This implied that pooling of up to eight samples was feasible. Receiver operating characteristic (ROC) analysis was used to determine the optimal cut-off value for testing pooled serum samples.
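As a hedged illustration of that last step, the short sketch below picks a cut-off for pooled-sample ELISA readings by maximising Youden's J along the ROC curve; the optical densities, labels, and choice of criterion are assumptions for the example, not the study's data or exact procedure.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical optical densities (OD) of pooled ELISA readings and the true
# status of each pool (1 = contains at least one seropositive sample).
od = np.array([0.12, 0.18, 0.95, 0.22, 1.40, 0.30, 0.08, 1.10, 0.25, 0.85])
status = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 1])

fpr, tpr, thresholds = roc_curve(status, od)
youden_j = tpr - fpr                      # sensitivity + specificity - 1
cutoff = thresholds[np.argmax(youden_j)]  # OD giving the best discrimination
print(f"cut-off OD maximising Youden's J: {cutoff:.2f}")
```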

A cost-benefit model for pooling was applied that combines the economics of the technical performance of the modified assay with the additional cost factors connected with pooling, such as hands-on time for composing the pools, the expected seroprevalence in the test population, sample tracing, and testing of the individual samples from positive pools.

We concluded that pooling of samples was feasible only for monitoring SRLV-free accredited flocks, because of their very low prevalence of infection. A pool of five samples turned out to be the economic optimum, although pool sizes of 10 samples were technically permissible.
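A minimal sketch of the kind of cost-benefit trade-off involved is given below: it assumes Dorfman-style retesting of positive pools, a perfect assay, and placeholder per-test and per-pool handling costs, then scans pool sizes for the cheapest expected cost per sample at a low prevalence. The numbers are illustrative only and do not reproduce the paper's model, so the optimum found here need not match the paper's figure of five.

```python
def expected_cost_per_sample(pool_size, prevalence, cost_test=1.0, cost_pooling=0.1):
    """Expected cost per animal under Dorfman pooling with a perfect assay:
    one test per pool, individual retests of every member of a positive pool,
    and a handling cost for composing the pool (all costs are placeholders)."""
    p_pool_positive = 1 - (1 - prevalence) ** pool_size
    tests_per_sample = 1 / pool_size + p_pool_positive
    return cost_test * tests_per_sample + cost_pooling / pool_size

prevalence = 0.005  # illustrative very low prevalence, e.g. accredited flocks
costs = {k: expected_cost_per_sample(k, prevalence) for k in range(1, 11)}
best = min(costs, key=costs.get)
print(f"cheapest pool size: {best}, expected cost per sample: {costs[best]:.3f}")
```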


2.
Brookmeyer R. Biometrics, 1999, 55(2): 608-612.
The testing of pooled samples of biological specimens for the purpose of estimating disease prevalence may be more cost effective than testing individual samples, particularly if the prevalence of disease is low. Multistage pooling studies involve testing pools and then sequentially subdividing and testing the positive pools. A simple estimator of disease prevalence and its variance are derived for general multistage pooling studies and are shown to be natural generalizations of Thompson's (1962) original estimators for single-stage pooling studies. The reduction in variance associated with each additional stage is calibrated. The results are extended to estimating disease incidence rates. The methods are used to estimate HIV incidence rates from a prevalence study of early HIV infection using a PCR assay for HIV RNA.
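For the single-stage case, a small sketch of the Thompson-type estimator and its delta-method variance is shown below; the counts are invented, and the multistage generalization and incidence extension described in the paper are not implemented here.

```python
def pooled_prevalence(positive_pools, total_pools, pool_size):
    """Single-stage pooled-testing estimate of prevalence and its approximate
    (delta-method) variance, assuming a perfect assay."""
    pi_hat = positive_pools / total_pools          # estimated P(pool tests positive)
    p_hat = 1 - (1 - pi_hat) ** (1 / pool_size)    # estimated individual prevalence
    var = (pi_hat * (1 - pi_hat) / total_pools) * \
          (1 - pi_hat) ** (2 / pool_size - 2) / pool_size ** 2
    return p_hat, var

p_hat, var = pooled_prevalence(positive_pools=7, total_pools=200, pool_size=10)
print(f"prevalence estimate {p_hat:.4f}, standard error {var ** 0.5:.4f}")
```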

3.
The introduction of routine testing to detect viral genomes in donated blood was originally driven by requirements for plasma fractionation relating to the exclusion of hepatitis C virus (HCV) RNA. Nevertheless, it was obvious from the outset that a dual standard for fractionated products and individual blood components would be untenable. In many countries, therefore, planning for the introduction of nucleic acid testing (NAT) of blood incorporated progression to the release of HCV RNA-tested components. HCV was singled out because of its long seronegative 'window period', its relatively high prevalence and incidence in blood donors, and its rapid burst time and high genome copy number during seroconversion. The latter properties made HCV particularly suitable for detection in pools of samples. If HCV RNA testing is required for release of labile components such as platelets, rapid provision of NAT results is vital because of the short shelf life of platelets and the delays incurred when resolving the infectious unit in a reactive pool. For NAT release of labile components, smaller sample pool sizes allow faster resolution of RNA-positive units. Smaller pools, however, entail higher test throughput, the likely need for more testing laboratories, and correspondingly increased costs. Single-sample testing is the ultimate extrapolation of reducing sample pool size. With reduced pool sizes or single-sample testing, the option of testing for other viruses (e.g. HIV or HBV), singly or in multiplex, also arises. The cost-benefit and incremental yield of such strategies, in the light of 'combo' assays for HIV Ag/Ab and the recently described HCV Ag assay, will require careful and objective assessment, together with re-appraisal of anti-HBc screening for detection of HBV-infected donors at the 'tail end' of carriage.

4.
In many areas of the world, Potato virus Y (PVY) is one of the most economically important disease problems in seed potatoes. In Taiwan, generation 2 (G2) class certified seed potatoes are required by law to be free of detectable levels of PVY. To meet this standard, it is necessary to perform accurate tests at a reasonable cost. We used a two-stage testing design involving group testing, performed at Taiwan's Seed Improvement and Propagation Station, to identify plants infected with PVY. At the first stage of this two-stage design, plants are tested in groups. The second stage involves no retesting for negative test groups and exhaustive testing of all constituent individual samples from positive test groups. In order to minimise costs while meeting government standards, it is imperative to estimate the optimal group size. However, because of limited test accuracy, classification errors are inevitable; to obtain a more accurate estimate, it is necessary to adjust for these errors. This paper therefore describes an analysis of diagnostic test data in which specimens are grouped for batched testing to offset costs. The optimal batch size is determined by various cost parameters as well as test sensitivity, specificity and disease prevalence. Here, a Bayesian method is employed to deal with uncertainty in these parameters. Moreover, we developed a computer program to determine the optimal group size for PVY tests such that the expected cost is minimised even when imperfect diagnostic tests are applied to pooled samples. Results from this research show that, compared with error-free testing, the optimal group size becomes smaller when diagnostic testing errors are taken into account. Higher diagnostic testing costs, lower costs of false negatives or lower prevalence all lead to a larger optimal group size. Regarding the effects of sensitivity and specificity, the optimal group size increases as sensitivity increases, whereas specificity has little effect. Our simulation study shows that the Bayesian method can genuinely update the prior information to approximate more closely the intrinsic characteristics of the parameters of interest. We believe that the results of this study will be useful in the implementation of seed potato certification programmes, particularly those which require zero tolerance for quarantine diseases in certified tubers.
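The core of such an expected-cost calculation can be sketched as follows: for two-stage (Dorfman) testing with an imperfect assay, compute the expected number of tests per plant for each candidate group size and take the minimiser. The sensitivity, specificity, and prevalence values are placeholders, and the paper's full analysis additionally weighs misclassification costs and places Bayesian priors on the uncertain parameters, which this sketch does not do.

```python
def expected_tests_per_plant(group_size, prevalence, sensitivity, specificity):
    """Expected tests per plant under two-stage (Dorfman) group testing with an
    imperfect assay: one group test shared by the group, plus individual
    retests of every member whenever the group tests positive."""
    k, p, se, sp = group_size, prevalence, sensitivity, specificity
    p_group_positive = se * (1 - (1 - p) ** k) + (1 - sp) * (1 - p) ** k
    return 1 / k + p_group_positive

prevalence, se, sp = 0.01, 0.95, 0.98      # illustrative values only
best_k = min(range(2, 41),
             key=lambda k: expected_tests_per_plant(k, prevalence, se, sp))
print("group size minimising expected tests per plant:", best_k)
```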

5.
Fields such as diagnostic testing, biotherapeutics, drug development, and toxicology center on the premise of searching through many specimens for a rare event. Scientists in the business of "searching for a needle in a haystack" may greatly benefit from group screening design strategies. Group screening, in which specimens are composited into pools and each pool is tested for the presence of the event, can be much more cost-efficient than testing each individual specimen. A number of group screening designs have been proposed in the literature. Incomplete block screening designs are described here and compared with other group screening designs. It is shown that, under certain conditions, incomplete block screening designs can provide nearly a 90% cost saving compared with other group screening designs, for example when screening 3876 specimens at a prevalence of 0.001 with an ICB-sequential design versus a Dorfman design. In other cases, previously proposed group screening designs are shown to be most efficient. Overall, when prevalence is small (≤0.05), group screening designs are quite cost-effective for screening large numbers of specimens, and in general no single design is best in all situations. © 2018 American Institute of Chemical Engineers. Biotechnol Progress, 35: e2770, 2019.

6.
The objective of this study was to identify QTL affecting susceptibility to Mycobacterium paratuberculosis infection in US Holsteins. Twelve paternal half-sib families were selected for the study based on large numbers of daughters in production and limited relationships among sires. Serum and faecal samples from 4350 daughters of these 12 sires were obtained for disease testing. The case definition for an infected cow was an ELISA sample-to-positive ratio ≥0.25, a positive faecal culture, or both. Three families were selected for genotyping based on a high apparent prevalence (6.8-10.4% infected cows), high faecal culture prevalence (46.2-52.9% positive faecal cultures) and large numbers of daughters tested for disease (264-585). DNA pooling was used to genotype cows, with an average of 159 microsatellites within each sire family. Infected cows (the positive pool) were matched with two of their non-infected herdmates in the same lactation (the negative pool) to control for herd and age effects. Eight chromosomal regions putatively linked with susceptibility to M. paratuberculosis infection were identified using a Z-test (P < 0.01). Significant results were tested more rigorously by individually genotyping cows with three to five informative microsatellites within 15 cM of the significant markers identified with the DNA pools. The probability of infection based on both diagnostic tests was estimated for each individual and used as the dependent variable for interval mapping. Based on this analysis, evidence was found for a QTL segregating within families on BTA20 (chromosome-wide P-value = 0.0319).
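A hedged sketch of the kind of pooled comparison behind such a Z-test is given below: allele frequencies estimated from the positive (infected) pool and the matched negative pool are compared with a two-sample Z statistic. The frequencies and pool sizes are hypothetical, and the study's actual test statistic for pooled genotyping may have been constructed differently.

```python
from math import sqrt
from scipy.stats import norm

def pooled_allele_ztest(freq_pos, n_pos, freq_neg, n_neg):
    """Two-sample Z-test comparing a marker allele frequency estimated from a
    pool of infected cows with that from a pool of matched non-infected
    herdmates. n_pos and n_neg are numbers of chromosomes (2 x cows) pooled."""
    pooled = (freq_pos * n_pos + freq_neg * n_neg) / (n_pos + n_neg)
    se = sqrt(pooled * (1 - pooled) * (1 / n_pos + 1 / n_neg))
    z = (freq_pos - freq_neg) / se
    return z, 2 * norm.sf(abs(z))              # two-sided p-value

z, p = pooled_allele_ztest(freq_pos=0.62, n_pos=2 * 300, freq_neg=0.48, n_neg=2 * 600)
print(f"Z = {z:.2f}, P = {p:.4g}")
```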

7.

Background

The group testing method has been proposed for the detection and estimation of genetically modified plants (adventitious presence of unwanted transgenic plants, AP). For binary response variables (presence or absence), group testing is efficient when the prevalence is low, and estimation, detection, and sample size methods have been developed under the binomial model. However, when the event is rare (prevalence < 0.1) and testing occurs sequentially, inverse (negative) binomial pooled sampling may be preferred.

Methodology/Principal Findings

This research proposes three sample size procedures (two computational and one analytic) for estimating prevalence using group testing under inverse (negative) binomial sampling. These methods provide the required number of positive pools, given a pool size (k), for estimating the proportion of AP plants using the Dorfman model and inverse (negative) binomial sampling. We give real and simulated examples to show how to apply these methods and the proposed sample-size formula. The Monte Carlo method was used to study the coverage and level of assurance achieved by the proposed sample sizes. An R program for creating other scenarios is given in Appendix S2.

Conclusions

The three methods ensure precision in the estimated proportion of AP because they guarantee that the width (W) of the confidence interval (CI) will be equal to or narrower than the desired width, with a specified probability. With the Monte Carlo study we found that the computational Wald procedure (method 2) produces the most precise sample size (with coverage and assurance levels very close to nominal values), that the sample size based on the Clopper-Pearson CI (method 1) is conservative (it overestimates the sample size), and that the analytic Wald sample size method we developed (method 3) sometimes underestimated the optimum number of pools.
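As a rough illustration of how such a Monte Carlo check can be set up, the sketch below simulates inverse (negative) binomial pooled sampling for a candidate number of positive pools and reports the coverage and mean width of a delta-method Wald interval. The perfect-assay assumption, the particular variance formula, and all parameter values are assumptions of this sketch, not the procedures or the R program described in the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def simulate_coverage(p_true, pool_size, r_pos, n_sim=10_000, conf=0.95):
    """Coverage and mean width of a Wald-type CI for the proportion of AP
    plants when pools of size `pool_size` are tested until `r_pos` positive
    pools are observed (inverse/negative binomial sampling), perfect assay."""
    z = norm.ppf(0.5 + conf / 2)
    pi_true = 1 - (1 - p_true) ** pool_size       # P(a pool tests positive)
    # Pools tested = r_pos positives + a negative-binomial count of negatives.
    n_pools = r_pos + rng.negative_binomial(r_pos, pi_true, size=n_sim)
    pi_hat = r_pos / n_pools
    p_hat = 1 - (1 - pi_hat) ** (1 / pool_size)
    se = np.sqrt(pi_hat ** 2 * (1 - pi_hat) / r_pos) \
        * (1 - pi_hat) ** (1 / pool_size - 1) / pool_size
    covered = np.abs(p_hat - p_true) <= z * se
    return covered.mean(), (2 * z * se).mean()

coverage, mean_width = simulate_coverage(p_true=0.01, pool_size=10, r_pos=40)
print(f"estimated coverage {coverage:.3f}, mean CI width {mean_width:.4f}")
```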

8.
Nucleic acid tests that detect HIV infection at an early phase are available and have been applied to individual dried blood spots (DBS). The present study was undertaken to evaluate the feasibility of performing PCR for HIV-1 DNA on pools of DBS as an alternative to individual testing. Standardization of PCR on pooled DBS, using a modified Amplicor HIV-1 DNA assay version 1.5 (Roche Molecular Diagnostics, USA), was performed with five confirmed HIV-reactive samples of known low HIV-1 viral load and HIV non-reactive samples in pools of 5, 10 and 20 DBS. After successful standardization of the pooling procedure, a total of 183 pools (of 10 DBS each) were prepared from 1,823 DBS samples collected in a population-based study that tested negative for HIV antibodies and p24 antigen. All these pools were screened for HIV-1 DNA by the Amplicor assay. Standardization of the pooling procedure indicated that pooling of 10 DBS gave optimum results. Of the 183 pools tested, one pool of 10 samples was positive; when these ten DBS were tested individually to identify the positive specimen, one sample was found to be positive for HIV-1 DNA. Our study demonstrates that PCR for HIV-1 DNA can be successfully performed on pools of DBS. However, this may be needed only in specialized studies of HIV and not for routine epidemiological studies, as only a very small fraction of cases would be missed if only antibody/antigen testing were done.
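A back-of-the-envelope calculation, using the figures quoted in the abstract and assuming that every member of a positive pool is retested individually, shows why the pooled design is attractive for this kind of survey:

```python
n_samples = 1823        # DBS specimens available for screening
pool_size = 10
n_pools = 183           # pools actually tested
positive_pools = 1      # pools needing individual resolution

total_pcr_runs = n_pools + positive_pools * pool_size
print(f"{total_pcr_runs} PCR runs with pooling vs {n_samples} if tested individually")
# -> 193 PCR runs with pooling vs 1823 if tested individually
```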

9.
We describe a method for pooling and sequencing DNA from a large number of individual samples while preserving information about sample identity. DNA from 576 individuals was arranged into four 12-row by 12-column matrices and then pooled by row and by column, resulting in 96 pools with 12 individuals in each pool. Pooling of DNA was carried out in a two-dimensional fashion, such that DNA from each individual is present in exactly one row pool and exactly one column pool. By considering the variants observed in the rows and columns of a matrix we are able to trace rare variants back to the specific individuals that carry them. The pooled DNA samples were enriched over a 250 kb region previously identified by GWAS to significantly predispose individuals to lung cancer. All 96 pools (12 row and 12 column pools from each of the 4 matrices) were barcoded and sequenced on an Illumina HiSeq 2000 instrument with an average depth of coverage greater than 4,000×. Verification based on Ion PGM sequencing confirmed the presence of 91.4% of confidently classified SNVs assayed. In this way, each individual sample is sequenced in multiple pools, providing more accurate variant calling than a single pool or a multiplexed approach. This provides a powerful method for rare variant detection in regions of interest at a reduced cost to the researcher.
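A minimal sketch of the row-by-column decoding idea follows: a rare variant observed in one row pool and one column pool of the same matrix is traced to the individual at their intersection. The pool labels and the helper function are hypothetical, illustrating the principle rather than the authors' variant-calling pipeline.

```python
def decode_variant(row_pools_with_variant, col_pools_with_variant):
    """Return candidate individuals for a variant given the pools it was seen in.

    Both arguments are lists of (matrix_id, pool_index) tuples. A variant carried
    by a single individual yields exactly one candidate cell; several carriers in
    the same matrix yield an ambiguous set of intersections."""
    candidates = []
    for m_row, row in row_pools_with_variant:
        for m_col, col in col_pools_with_variant:
            if m_row == m_col:                      # must intersect in the same matrix
                candidates.append((m_row, row, col))
    return candidates

# A variant seen only in row pool 3 and column pool 7 of matrix 2:
print(decode_variant([(2, 3)], [(2, 7)]))           # -> [(2, 3, 7)]
```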

10.
Group testing is frequently used to reduce the costs of screening a large number of individuals for infectious diseases or other binary characteristics in low-prevalence situations. In many applications, the goals include both identifying individuals as positive or negative and estimating the probability of positivity. The identification aspect leads to additional tests, known as "retests", being performed beyond those for the initial groups of individuals. In this paper, we investigate how regression models can be fit to estimate the probability of positivity while also incorporating the extra information from these retests. We present simulation evidence showing that significant gains in efficiency occur by incorporating retesting information, and we further examine which testing protocols are the most efficient to use. Our investigations also demonstrate that some group testing protocols can actually lead to more efficient estimates than individual testing when diagnostic tests are imperfect. The proposed methods are applied retrospectively to chlamydia screening data from the Infertility Prevention Project. We demonstrate that significant cost savings could occur through the use of particular group testing protocols.

11.
Planning studies involving diagnostic tests is complicated by the fact that virtually no test provides perfectly accurate results. The misclassification induced by imperfect sensitivities and specificities of diagnostic tests must be taken into account, whether the primary goal of the study is to estimate the prevalence of a disease in a population or to investigate the properties of a new diagnostic test. Previous work on sample size requirements for estimating the prevalence of disease with a single imperfect test showed very large discrepancies in required sample size compared with methods that assume a perfect test. In this article we extend these methods to two conditionally independent imperfect tests, and apply several different criteria for Bayesian sample size determination to the design of such studies. We consider both disease prevalence studies and studies designed to estimate the sensitivity and specificity of diagnostic tests. As the problem is typically nonidentifiable, we investigate the limits on the accuracy of parameter estimation as the sample size approaches infinity. Through two examples from infectious diseases, we illustrate the changes in sample size that arise when two tests, rather than a single test, are applied to individuals in a study. Although smaller sample sizes are often found in the two-test situation, they can still be prohibitively large unless accurate information is available about the sensitivities and specificities of the tests being used.

12.
AIMS: Monitoring for Salmonella in slaughter pigs is important to enable targeted control measures to be applied on problem farms and at the abattoir. The aim of this study was to determine whether pooled serum and meat juice could be used to identify finishing pig herds with a high prevalence of infection. METHODS AND RESULTS: Samples of meat juice, serum, caecal contents, carcase swabs and pooled faeces from pig pens were taken from 20 commercial pig finishing farms, and comparisons were made between the results of Salmonella culture, individual ELISA tests on serum and meat juice, and ELISA tests on pooled samples of serum and meat juice. Salmonella was isolated from samples from 19 of the 20 farms. None of the ELISA tests showed a statistically significant correlation with caecal carriage of Salmonella or contamination of carcases. Mean serum optical density (O.D.) from pools of five, 10 or 20 sera showed a significant correlation with the Salmonella status of farm pen faeces. All pooled serum O.D. and sample-to-positive-control ratio results correlated significantly with the results of the conventional individual-sample ELISA. There was a statistically significant correlation between the incidence of Salmonella in farm pen pooled faeces and the prevalence of Salmonella in caeca of slaughter pigs. CONCLUSIONS: The results show a generally poor correlation between serological and bacteriological results, but pooled serum or meat juice samples could be used as a cheaper substitute for individual samples in serological screening of farms for Salmonella. SIGNIFICANCE AND IMPACT OF THE STUDY: The availability of a cheaper test should allow the costs of Salmonella monitoring of pig farms to be reduced, or allow more regular testing to enhance the designation of farm Salmonella risk status.

13.
McMahan CS, Tebbs JM, Bilder CR. Biometrics, 2012, 68(1): 287-296.
Since the early 1940s, group testing (pooled testing) has been used to reduce costs in a variety of applications, including infectious disease screening, drug discovery, and genetics. In such applications, the goal is often to classify individuals as positive or negative using initial group testing results and the subsequent decoding of positive pools. Many decoding algorithms have been proposed, but most fail to acknowledge, and to further exploit, the heterogeneous nature of the individuals being screened. In this article, we use individuals' risk probabilities to formulate new informative decoding algorithms that implement Dorfman retesting in a heterogeneous population. We introduce the concept of "thresholding" to classify individuals as "high" or "low" risk, so that separate, risk-specific algorithms may be used, while simultaneously identifying pool sizes that minimize the expected number of tests. When compared to competing algorithms which treat the population as homogeneous, we show that significant gains in testing efficiency can be realized with virtually no loss in screening accuracy. An important additional benefit is that our new procedures are easy to implement. We apply our methods to chlamydia and gonorrhea data collected recently in Nebraska as part of the Infertility Prevention Project.
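The thresholding idea can be sketched compactly: split individuals into low- and high-risk strata at a risk cut-off, then pick a Dorfman pool size for each stratum that minimises the expected number of tests per individual, here using the stratum's mean risk and a perfect-assay approximation. The threshold, the simulated risks, and the use of the stratum mean are assumptions of this sketch and only gesture at the authors' algorithms.

```python
import numpy as np

def dorfman_tests_per_person(pool_size, p):
    """Expected tests per individual for Dorfman retesting (perfect assay)."""
    if pool_size == 1:                       # individual testing: exactly one test
        return 1.0
    return 1 / pool_size + 1 - (1 - p) ** pool_size

def informative_dorfman(risks, threshold=0.05, max_pool=30):
    """Split individuals at a risk threshold and pick a pool size per stratum."""
    risks = np.asarray(risks)
    plan = {}
    for label, stratum in (("low", risks[risks < threshold]),
                           ("high", risks[risks >= threshold])):
        if stratum.size == 0:
            continue
        p_bar = stratum.mean()
        k_best = min(range(1, max_pool + 1),
                     key=lambda k: dorfman_tests_per_person(k, p_bar))
        plan[label] = (stratum.size, k_best, dorfman_tests_per_person(k_best, p_bar))
    return plan

rng = np.random.default_rng(0)
risks = rng.beta(1, 40, size=500)            # hypothetical individual risk probabilities
for label, (n, k, e_tests) in informative_dorfman(risks).items():
    print(f"{label}-risk: n={n}, pool size={k}, expected tests per person={e_tests:.3f}")
```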

14.
Biallelic markers, most commonly single nucleotide polymorphisms (SNPs), are widely used in genetic association analysis, which can be sped up by estimating allele frequencies in pooled DNA instead of genotyping individuals. Several methods have shown high accuracy and precision for allele frequency estimation in pools. Here, we explored PCR restriction fragment length polymorphism (PCR-RFLP) combined with microchip electrophoresis as a possible strategy for allele frequency estimation in DNA pools. We used the commercially available Agilent 2100 microchip electrophoresis system to quantify the enzymatically digested DNA fragments and used the fluorescence intensities to estimate the allele frequencies in the DNA pools. In this study, we estimated the allele frequencies of five SNPs in a DNA pool composed of 141 previously genotyped healthy controls and a DNA pool composed of 96 previously genotyped gastric cancer patients, with variant allele frequencies ranging from 10% to 90%. Our results show that accurate, quantitative data on allele frequencies, suitable for investigating the association of SNPs with complex disorders, can be obtained from pooled DNA samples using this assay. This approach, being independent of the number of samples, promises to drastically reduce the labor and cost of genotyping in the initial association analysis.
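One common way to turn two fragment-peak intensities into an allele frequency is sketched below: a correction factor for unequal per-allele signal is estimated from a heterozygous individual sample and applied before taking the intensity ratio. The peak values are hypothetical and the formula is shown as a generic illustration rather than the authors' exact calibration.

```python
def allele_frequency_from_peaks(intensity_a, intensity_b, het_a, het_b):
    """Estimate the frequency of allele A in a DNA pool from the fluorescence
    intensities of its two restriction fragments. het_a and het_b are the peak
    intensities observed for a heterozygous individual, giving a correction
    factor for unequal signal per allele."""
    correction = het_a / het_b               # expected A:B signal ratio at 50:50
    adjusted_b = intensity_b * correction    # put both peaks on the same scale
    return intensity_a / (intensity_a + adjusted_b)

# Hypothetical microchip electrophoresis peak areas for one SNP in a pool:
freq = allele_frequency_from_peaks(intensity_a=1320.0, intensity_b=880.0,
                                   het_a=1.15, het_b=1.0)
print(f"estimated variant allele frequency: {freq:.2f}")
```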

15.
MOTIVATION: Microarrays can simultaneously measure the expression levels of many genes and are widely applied to study complex biological problems at the genetic level. To contain costs, instead of obtaining a microarray for each individual, mRNA from several subjects can first be pooled and then measured with a single array. mRNA pooling is also necessary when there is not enough mRNA from each subject. Several studies have investigated the impact of pooling mRNA on inferences about gene expression, but they have typically modeled the process of pooling as if it occurred on some transformed scale. This assumption is unrealistic. RESULTS: We propose modeling the gene expression level in a pool as a weighted average of the mRNA expression of all individuals in the pool on the original measurement scale, where the weights correspond to individual sample contributions to the pool. Based on these improved statistical models, we develop the appropriate F statistics to test for differentially expressed genes. We present formulae to calculate the power of various statistical tests under different strategies for pooling mRNA and compare the resulting power estimates to those obtained by following the approach proposed by Kendziorski et al. (2003). We find that the Kendziorski estimate tends to exceed the true power, and that the estimate we propose, while somewhat conservative, is less biased. We argue that it is possible to design a study that includes mRNA pooling at a significantly reduced cost but with little loss of information.
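The modeling point can be seen with a tiny numeric example: the log of a pooled (averaged) signal is not the average of the individual log signals, which is why treating pooling as if it happened on the transformed scale is unrealistic. The expression values below are made up.

```python
import numpy as np

expr = np.array([50.0, 200.0, 800.0])       # hypothetical mRNA abundances of 3 subjects
weights = np.array([1 / 3, 1 / 3, 1 / 3])   # equal contributions to the pool

log_of_pool = np.log2(np.sum(weights * expr))    # what the pooled array actually measures
mean_of_logs = np.sum(weights * np.log2(expr))   # what a "pool on the log scale" model assumes
print(f"log2 of pooled sample:  {log_of_pool:.2f}")   # 8.45
print(f"average of log2 values: {mean_of_logs:.2f}")  # 7.64, smaller by Jensen's inequality
```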

16.
Prospective studies of diagnostic test accuracy have important advantages over retrospective designs. Yet, when the disease being detected by the diagnostic test(s) has a low prevalence rate, a prospective design can require an enormous sample of patients. We consider two strategies to reduce the costs of prospective studies of binary diagnostic tests: stratification and two-phase sampling. Utilizing neither, one, or both of these strategies provides us with four study design options: (1) the conventional design involving a simple random sample (SRS) of patients from the clinical population; (2) a stratified design where patients from higher-prevalence subpopulations are more heavily sampled; (3) a simple two-phase design using a SRS in the first phase and selection for the second phase based on the test results from the first; and (4) a two-phase design with stratification in the first phase. We describe estimators for sensitivity and specificity and their variances for each design, along with sample size estimation. We offer some recommendations for choosing among the various designs. We illustrate the study designs with two examples.
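A hedged sketch of estimation under the simple two-phase design (option 3) is given below: every patient receives the index test in phase 1, a known fraction of each test-result stratum is verified with the reference standard in phase 2, and the verified counts are weighted by the inverse of their sampling fractions. The counts are invented, and the variance formulas and stratified variants discussed in the paper are not reproduced here.

```python
def weighted_accuracy(strata):
    """Sensitivity and specificity corrected for phase-2 (verification) sampling.

    `strata` maps an index-test result ("+" or "-") to a tuple
    (n_phase1, n_verified, n_diseased_among_verified)."""
    tp = fn = fp = tn = 0.0
    for result, (n_phase1, n_verified, n_diseased) in strata.items():
        w = n_phase1 / n_verified            # inverse of the phase-2 sampling fraction
        if result == "+":
            tp += w * n_diseased
            fp += w * (n_verified - n_diseased)
        else:
            fn += w * n_diseased
            tn += w * (n_verified - n_diseased)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screen of 10,000 patients: all 400 test-positives verified,
# but only 960 of the 9,600 test-negatives verified.
sens, spec = weighted_accuracy({"+": (400, 400, 280), "-": (9600, 960, 4)})
print(f"sensitivity {sens:.3f}, specificity {spec:.3f}")
```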

17.
I tested the effects of pool size and spatial position (upstream or downstream) on fish assemblage attributes in isolated and connected pools in an upland Oklahoma stream, United States. I hypothesized that there would be fundamental differences between assemblages in these two pool types due to the presence or absence of colonization opportunities. Analyses were carried out at three ecological scales: (1) the species richness of pool assemblages, (2) the species composition of pool assemblages, and (3) the responses of individual species. There were significant species-volume relationships for isolated and connected pools. However, the relationship was weaker and there were fewer species, on average, in isolated pools. For both pool types, species incidences were significantly nested such that species-poor pools tended to be subsets of species-rich pools, a common pattern that ultimately results from species-specific differences in colonization ability and/or extinction susceptibility. To examine the potential importance of these two processes in nestedness patterns in both pool types, I made the following two assumptions: (1) probability of extinction should decline with increasing pool size, and (2) probability of immigration should decline in an upstream direction (increasing isolation). When ordered by pool volume, only isolated pools were significantly nested, suggesting that these assemblages were extinction-driven. When ordered by spatial position, only connected pools were significantly nested (more species downstream), suggesting that differences in species-specific dispersal abilities were important in structuring these assemblages. At the individual-species level, volume was a significant predictor of occurrence for three species in isolated pools. In connected pools, two species showed significant position effects, one species showed a pool volume effect, and one species showed pool volume and position effects. These results demonstrate that pool size and position within a watershed are important determinants of fish species assemblage structure, but their importance varies with the colonization potential of the pools. Isolated pool assemblages are similar to the presumed relaxed faunas of montane forest fragments and land bridge islands, but at much smaller space and time scales. Received: 6 December 1996 / Accepted: 10 December 1996

18.
A method has been described for testing multiple food samples for Salmonella without loss in sensitivity. The method pools multiple pre-enrichment broth cultures into single enrichment broths. The subsequent stages of the Salmonella analysis are not altered. The method was found applicable to several dry food materials including nonfat dry milk, dried egg albumin, cocoa, cottonseed flour, wheat flour, and shredded coconut. As many as 25 pre-enrichment broth cultures were pooled without apparent loss in the sensitivity of Salmonella detection as compared to individual sample analysis. The procedure offers a simple, yet effective, way to increase sample capacity in the Salmonella testing of foods, particularly where a large proportion of samples ordinarily is negative. It also permits small portions of pre-enrichment broth cultures to be retained for subsequent individual analysis if positive tests are found. Salmonella testing of pooled pre-enrichment broths provides increased consumer protection for a given amount of analytical effort as compared to individual sample analysis.

19.
Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd-level lameness prevalence can be estimated by scoring a sample of animals, with higher levels of accuracy associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used to inform decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Building on the Welfare Quality herd-size-based sampling scheme, the first ('basic') scheme involves two sampling events. At the first sampling event, half the Welfare Quality sample size is drawn; depending on the outcome, sampling either stops or continues, with the same number of animals sampled again. In the second ('cautious') scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only one to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed-size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed-size scheme but with much smaller average sample sizes. For the third scheme, an overall association was found between lameness prevalence and the proportion of lame cows that were severely lame on a farm. However, as this association was not consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme, for which a sampling protocol has also been developed.
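A schematic sketch of the 'basic' scheme is shown below: draw half of the Welfare Quality sample, stop if the observed prevalence is clearly below or above the pass/fail threshold, otherwise draw the second half and decide on the combined sample. The Welfare Quality sample size, the threshold, and the early-stopping margin are placeholders, not the values used in the study.

```python
import random

def basic_two_stage_assessment(herd, wq_sample_size, fail_threshold=0.20,
                               margin=0.05, seed=0):
    """Pass/fail a herd on lameness prevalence with the 'basic' two-stage idea:
    score half the Welfare Quality sample, stop early if the estimate is clearly
    on one side of the threshold, otherwise score the second half and decide."""
    rng = random.Random(seed)
    half = wq_sample_size // 2
    cows = rng.sample(herd, min(wq_sample_size, len(herd)))  # 1 = lame, 0 = not lame
    first = cows[:half]
    prev1 = sum(first) / len(first)
    if prev1 <= fail_threshold - margin:
        return "pass", len(first)
    if prev1 >= fail_threshold + margin:
        return "fail", len(first)
    both = cows[:2 * half]                                    # second sampling event
    prev = sum(both) / len(both)
    return ("fail" if prev >= fail_threshold else "pass"), len(both)

herd = [1] * 30 + [0] * 170            # hypothetical herd of 200 cows, 15% lame
print(basic_two_stage_assessment(herd, wq_sample_size=60))
```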

20.
Hung MC, Swallow WH. Biometrics, 2000, 56(1): 204-212.
In group testing, the test unit consists of a group of individuals. If the group test is positive, then one or more individuals in the group are assumed to be positive. A group observation in binomial group testing can be, for example, the test result (positive or negative) for a pool of blood samples that come from several different individuals. It has been shown that, when the proportion (p) of infected individuals is low, group testing is often preferable to individual testing for identifying infected individuals and for estimating the proportion infected. We extend the potential applications of group testing to hypothesis-testing problems in which one wants to test for a relationship between p and a classification or quantitative covariate. Asymptotic relative efficiencies (AREs) of tests based on group testing versus the usual individual testing are obtained. The Pitman ARE strongly favors group testing in many cases. Small-sample results from simulation studies are given and are consistent with the large-sample (asymptotic) findings. We illustrate the potential advantages of group testing in hypothesis testing using HIV-1 seroprevalence data.
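As a hedged illustration of using pooled results in a hypothesis test, the sketch below estimates the prevalence in two covariate classes from group-test outcomes (a Thompson-type estimator) and compares them with a Wald Z statistic. The counts are hypothetical, and this is a simple stand-in for, not a reproduction of, the tests whose asymptotic relative efficiencies the paper studies.

```python
from math import sqrt
from scipy.stats import norm

def thompson(positive_pools, total_pools, pool_size):
    """Prevalence estimate and delta-method variance from pooled test results."""
    pi = positive_pools / total_pools
    p = 1 - (1 - pi) ** (1 / pool_size)
    var = pi * (1 - pi) / total_pools * (1 - pi) ** (2 / pool_size - 2) / pool_size ** 2
    return p, var

# Hypothetical pooled serosurvey in two covariate classes, 10 sera per pool:
p1, v1 = thompson(positive_pools=9, total_pools=150, pool_size=10)
p2, v2 = thompson(positive_pools=22, total_pools=150, pool_size=10)
z = (p1 - p2) / sqrt(v1 + v2)
print(f"p1 = {p1:.4f}, p2 = {p2:.4f}, Z = {z:.2f}, two-sided P = {2 * norm.sf(abs(z)):.4f}")
```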
