1.
Weinberg CR, Umbach DM. Biometrics 1999;55(3):718-726
Assays can be so expensive that interesting hypotheses become impractical to study epidemiologically. One need not, however, perform an assay for everyone providing a biological specimen. We propose pooling equal-volume aliquots from randomly grouped sets of cases and randomly grouped sets of controls, and then assaying the smaller number of pooled samples. If an effect modifier is of concern, the pooling can be done within strata defined by that variable. For covariates assessed on individuals (e.g., questionnaire data), set-based counterparts are calculated by adding the values for the individuals in each set. The pooling set then becomes the unit of statistical analysis. We show that, with appropriate specification of a set-based logistic model, standard software yields a valid estimated exposure odds ratio, provided the multiplicative formulation is correct. Pooling minimizes the depletion of irreplaceable biological specimens and can enable additional exposures to be studied economically. Statistical power suffers very little compared with the usual, individual-based analysis. In settings where high assay costs constrain the number of people an investigator can afford to study, specimen pooling can make it possible to study more people and hence improve the study's statistical power with no increase in cost.
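The set-construction step described above (randomly grouping subjects, pooling one equal-volume aliquot per subject, and summing individual covariates within each set) can be sketched as follows. The data and the `make_pools` helper are hypothetical, and the set-based logistic regression itself is not shown.

```python
import random

def make_pools(ids, covariates, g, seed=0):
    """Randomly group subject ids into sets of size g and sum each
    set's covariate values (the 'set-based counterparts')."""
    rng = random.Random(seed)
    order = list(ids)
    rng.shuffle(order)
    pools = [order[i:i + g] for i in range(0, len(order), g)]
    pooled_cov = [sum(covariates[j] for j in pool) for pool in pools]
    return pools, pooled_cov

# Hypothetical example: six cases with an age covariate, pooled in sets of 3.
case_ids = [0, 1, 2, 3, 4, 5]
age = {0: 40, 1: 55, 2: 47, 3: 62, 4: 51, 5: 44}
pools, pooled_age = make_pools(case_ids, age, g=3)
# Each set contributes one assayed specimen and one summed covariate,
# and the set becomes the unit of the subsequent logistic analysis.
```

Controls would be grouped the same way within their own stratum; if an effect modifier is of concern, pooling is done separately within each of its levels.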
2.
Chen HH, Jou YS, Lee WJ, Pan WH. Genomics 2008;92(6):429-435
The DNA pooling approach is a cost-saving strategy that is crucial for multiple-SNP association studies, particularly for laboratories with limited budgets. However, the bias in allele frequency estimates cannot be completely removed by κ correction. Using the SNaPshot™ platform, we systematically examined the relation between actual minor allele frequencies (AMiAFs) and the estimates obtained from the pooling process for all six types of SNPs. We applied the polynomial standard-curve method (PSCM) to produce allele frequency estimates in pooled DNA samples and compared it with the κ method. Estimates derived from the PSCM were in general closer to the AMiAFs than those from the κ method, particularly for C/G and G/T polymorphisms in the AMiAF range of 20–40%. We demonstrated that applying the PSCM on the SNaPshot™ platform is suitable for multiple-SNP association studies using a pooling strategy, owing to its cost effectiveness and estimation accuracy.
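The standard-curve idea behind the PSCM can be sketched as follows: fit a polynomial mapping the raw instrument signal from calibration pools of known composition to the true minor allele frequency, then apply the fitted curve to signals from study pools. All numbers here are hypothetical, and this sketch omits the separate per-SNP-type curves the study used.

```python
import numpy as np

# Hypothetical standard curve: calibration pools of known minor allele
# frequency (x) and the raw peak-height fraction the instrument reports (y).
known_freq = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
observed   = np.array([0.02, 0.13, 0.26, 0.35, 0.43, 0.52])

# Fit a polynomial mapping observed signal -> true frequency.
coef = np.polyfit(observed, known_freq, deg=2)
curve = np.poly1d(coef)

def estimate_freq(signal):
    """Correct a raw pooled-DNA signal using the fitted standard curve."""
    return float(curve(signal))
```

In practice one curve would be fitted per SNP type, since the bias pattern differs between, say, C/G and A/T polymorphisms.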
3.
It has become increasingly common in epidemiological studies to pool specimens across subjects to achieve accurate quantitation of biomarkers and certain environmental chemicals. In this article, we consider the problem of fitting a binary regression model when an important exposure is subject to pooling. We take a regression calibration approach and derive several methods, including plug-in methods that use a pooled measurement and other covariate information to predict the exposure level of an individual subject, and normality-based methods that make further adjustments by assuming normality of calibration errors. Within each class we propose two ways to perform the calibration (covariate augmentation and imputation). These methods are shown in simulation experiments to effectively reduce the bias associated with the naive method that simply substitutes a pooled measurement for all individual measurements in the pool. In particular, the normality-based imputation method performs reasonably well in a variety of settings, even under skewed distributions of calibration errors. The methods are illustrated using data from the Collaborative Perinatal Project.
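A minimal sketch of the plug-in idea, assuming a hypothetical pool of three subjects, a single individual covariate, and an externally estimated calibration slope `b`: the naive method gives every pool member the pooled value, while the plug-in prediction shifts it by each subject's covariate deviation within the pool.

```python
# Hypothetical pool of three subjects sharing one pooled exposure measurement.
pooled_x = 5.0                  # exposure measured on the pooled specimen
covariate = [1.2, 2.8, 2.0]     # individual covariate (e.g. questionnaire data)
b = 0.9                         # assumed calibration slope, externally estimated

zbar = sum(covariate) / len(covariate)

# Naive method: substitute the pooled value for every member of the pool.
naive = [pooled_x for _ in covariate]

# Plug-in method: shift the pooled value by each subject's covariate deviation.
plug_in = [pooled_x + b * (z - zbar) for z in covariate]
# The plug-in predictions still average to the pooled measurement,
# but differ across subjects according to their covariates.
```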
4.
Although they require laborious analytical treatment, tree-ring series of nitrogen isotopes (δ15N) have gained popularity among researchers as potential environmental indicators, as anthropogenic emissions increase globally with potential effects on forest N cycles. Previous studies suggested that tree-ring δ15N series correlate with climatic and air-quality parameters, but none discussed the level of replication required to express the population signal of a given tree species. In this investigation, we studied 27 white spruce trees from two sites under distinct environmental conditions to determine an appropriate protocol for preparing consistent tree-ring δ15N series. The resulting series indicate that high-frequency (short-term, <7 years) δ15N changes cannot serve environmental purposes even at a replication level as high as 10 trees. Conversely, the low-frequency (middle-term, 7–15 years, to long-term, >15 years) δ15N trends show coherence between arithmetic means of individual series at replication levels as low as three trees, whereas middle-term pooled trends do not perform as coherently. The low-frequency mean trends of individual series obtained for the two sites suggest that local biogeochemical soil conditions, modified by anthropogenic emissions, modulate the δ15N responses in trees. Hence, we propose that long-term tree-ring δ15N series constitute reliable environmental indicators.
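The separation of a tree-ring series into frequency bands can be sketched with a simple centered running mean (a stand-in for whatever filter the study actually used; window widths of 7 and 15 years would correspond to the middle- and long-term bands):

```python
def running_mean(series, window):
    """Centered running mean; values near the ends use a truncated window."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

# Smoothing an annual series keeps the middle- and long-term variation;
# subtracting the smoothed series from the original leaves the
# short-term (high-frequency) component.
smoothed = running_mean([1, 2, 3, 4, 5], 3)  # toy series, 3-year window
```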
5.
Human immunodeficiency virus (HIV) infection has serious consequences, and HIV must be kept out of blood supplies. Screening to ensure the safety of blood supplies carries a very high cost. The idea of pooling test samples to obtain significant savings was first suggested in 1943. Recently, pooling sera has gained wider interest both as a means to determine the HIV seroprevalence rate in general populations and as a way to weed out all HIV-positive units in blood supplies. We describe a simple method for detecting seropositive samples in mass screening. This method determines the pooling size based on the estimated prevalence rate. Although several repooling stages are allowed, these are kept to a minimum, since the more stages required, the greater the chance of human and technical error. The criteria for ending pooling are based on both the savings rate and the relative cost of sample preparation versus the actual test. Two examples illustrate the application of this method in determining the number of samples to be pooled in successive stages and the resulting savings rate.
6.
Böhning D, Sarol J. Biometrics 2000;56(1):304-308
In this paper, we consider efficient estimation of the risk difference in a multicenter study allowing for baseline heterogeneity. We consider the optimally weighted estimator for the common risk difference and show that this estimator has considerable bias when the true weights (which are inversely proportional to the variances of the center-specific risk difference estimates) are replaced by their sample estimates. We also propose a new estimator of the Mantel-Haenszel type for this situation that is unbiased and, moreover, has a smaller variance when sample sizes within the study centers are small. Simulations illustrate these findings.
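A Mantel-Haenszel-type common risk difference of the kind discussed above can be sketched as follows. The 2x2 counts are hypothetical; the point of the weights n1*n0/(n1+n0) is that they do not depend on estimated variances, which is the source of the bias noted for the optimally weighted estimator.

```python
def mh_risk_difference(strata):
    """Mantel-Haenszel-type common risk difference across centers.
    Each stratum is (events_trt, n_trt, events_ctl, n_ctl); the weight
    n_trt * n_ctl / (n_trt + n_ctl) involves only the group sizes."""
    num = sum((x1 * n0 - x0 * n1) / (n1 + n0) for x1, n1, x0, n0 in strata)
    den = sum(n1 * n0 / (n1 + n0) for x1, n1, x0, n0 in strata)
    return num / den

# Hypothetical two-center study: both centers show a risk difference of 0.1
# (10/50 vs 5/50, and 8/40 vs 4/40), so the pooled estimate is 0.1.
centers = [(10, 50, 5, 50), (8, 40, 4, 40)]
rd = mh_risk_difference(centers)
```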
8.
Results from better-quality studies should in some sense be more valid or more accurate than results from other studies and, as a consequence, should tend to be distributed differently from results of other studies. To date, however, quality scores have been poor predictors of study results. We discuss possible reasons and remedies for this problem. It appears that 'quality' (whatever leads to more valid results) is of fairly high dimension and possibly non-additive and nonlinear, and that quality dimensions are highly application-specific and hard to measure from published information. Unfortunately, quality scores are often used to contrast, model, or modify meta-analysis results without regard to these problems, as when they are used to directly modify the weights or contributions of individual studies in an ad hoc manner. Even if quality could be captured in one dimension, use of quality scores in summarization weights would produce biased estimates of effect; only if this bias were more than offset by variance reduction would such use be justified. From this perspective, quality weighting should be evaluated against formal bias-variance trade-off methods such as hierarchical (random-coefficient) meta-regression. Because it is unlikely that a low-dimensional appraisal will ever be adequate (especially across different applications), we argue that response-surface estimation based on quality items is preferable to quality weighting. Quality scores may be useful in the second stage of a hierarchical response-surface model, but only if the scores are reconstructed to maximize their correlation with bias.
9.
Pooling biospecimens is a well-accepted sampling strategy in biomedical research for reducing the cost of measuring biomarkers, and in the case of normally distributed data it has been shown to yield more efficient estimation. In this paper, we examine the efficiency of pooling, in terms of the information matrix for the estimators of the unknown parameters, when the pooled biospecimens yield incomplete observations due to the instrument's limit of detection. Our investigation of three sampling strategies shows that, for a range of values of the detection limit, pooling is the most efficient sampling procedure; for certain other values of the detection limit, pooling can perform poorly.
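One way to see why pooling interacts with a limit of detection: if equal-volume pooling makes the pooled measurement the average of its members' values, pooled averages are less variable than individual values, so for a normal biomarker with the LOD below the mean, fewer pooled measurements fall below the LOD. A sketch with hypothetical parameters:

```python
import math

def norm_cdf(x, mu, sigma):
    """P(X <= x) for X ~ Normal(mu, sigma**2)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Hypothetical biomarker ~ Normal(1.0, 0.5**2); instrument LOD at 0.5.
mu, sigma, lod = 1.0, 0.5, 0.5
g = 4                                   # specimens per pool

p_individual = norm_cdf(lod, mu, sigma)             # single specimens censored
p_pooled = norm_cdf(lod, mu, sigma / math.sqrt(g))  # pooled averages censored
# Pooling concentrates values around the mean, so a smaller fraction of
# pooled measurements falls below the detection limit.
```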
10.
Pooling biospecimens and limits of detection: effects on ROC curve analysis
Epidemiological studies frequently face two constraints in the evaluation of biomarkers: cost and instrument sensitivity. Costs can hamper the evaluation of the effectiveness of new biomarkers, and many assays are affected by a limit of detection (LOD) determined by instrument sensitivity. Two common cost-cutting strategies are taking a random subset of the available samples and pooling biospecimens. We compare these two sampling strategies when an LOD effect exists, by examining the efficiency of receiver operating characteristic (ROC) curve analysis, specifically the estimation of the area under the ROC curve (AUC) for normally distributed markers. We propose and examine a method to estimate the AUC from pooled and unpooled samples when an LOD is in effect. In conclusion, pooling is the most efficient cost-cutting strategy when the LOD affects less than 50% of the data; however, when much more than 50% of the data are affected, the pooling design is not recommended.
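For normally distributed markers the AUC has the closed binormal form AUC = Φ((μ1 - μ0)/√(σ0² + σ1²)), which makes the effect of pooling easy to sketch. The parameter values below are hypothetical, and the LOD adjustment the paper develops is not shown.

```python
import math

def binormal_auc(mu0, sd0, mu1, sd1):
    """AUC for normal markers: Phi((mu1 - mu0) / sqrt(sd0**2 + sd1**2))."""
    z = (mu1 - mu0) / math.sqrt(sd0 ** 2 + sd1 ** 2)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical marker: controls ~ N(0, 1), cases ~ N(1, 1).
auc_individual = binormal_auc(0.0, 1.0, 1.0, 1.0)

# Pooling g = 4 specimens per group averages the marker, shrinking each
# standard deviation by sqrt(g), so pooled measurements separate more cleanly.
auc_pooled = binormal_auc(0.0, 0.5, 1.0, 0.5)
```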