Similar Documents
20 similar documents found (search time: 15 ms)
1.
Using jackknife methods for estimating the parameter in dilution series
R J Does  L W Strijbosch  W Albers 《Biometrics》1988,44(4):1093-1102
Dilution assays are quantal dose-response assays that detect a positive or negative response in each individual culture within groups of replicate cultures that vary in the dose of cells/organisms tested. We propose three jackknife versions of the maximum likelihood estimator of the unknown parameter, i.e., the frequency of a well-defined cell within the context of limiting dilution assays or the density of organisms within the context of serial dilution assays. The methods have been evaluated with artificial data from extensive Monte Carlo experiments. As a result of these experiments and theoretical considerations, the jackknife version based on deleting one individual culture at a time is proposed as the statistical procedure of choice. The next best method is the jackknife version based on leaving out the same replicate from each of the culture groups at a time.
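The single-hit Poisson model that underlies such assays, together with the delete-one-culture jackknife recommended above, can be sketched as follows. This is a minimal illustration, not the authors' code: the function names, the bisection solver, and the search bounds are my own assumptions.

```python
import numpy as np

def mle_frequency(doses, negatives, totals, lo=1e-12, hi=1.0):
    """MLE of the cell frequency f under the single-hit Poisson model,
    P(negative culture at dose d) = exp(-f * d), found by bisection on
    the score (derivative of the binomial log-likelihood)."""
    doses = np.asarray(doses, float)
    neg = np.asarray(negatives, float)
    pos = np.asarray(totals, float) - neg

    def score(f):
        p = np.exp(-f * doses)
        return np.sum(-neg * doses + pos * doses * p / (1.0 - p))

    for _ in range(200):          # score is decreasing in f, so bisect
        mid = 0.5 * (lo + hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def jackknife_frequency(doses, negatives, totals):
    """Delete-one-culture jackknife of the MLE (the recommended variant).
    Cultures within a dose group are exchangeable, so leaving out one
    negative (or one positive) culture gives the same estimate whichever
    one is removed; each distinct case is weighted by its multiplicity."""
    f_hat = mle_frequency(doses, negatives, totals)
    n = int(np.sum(totals))
    loo_sum = 0.0
    for i in range(len(doses)):
        neg, tot = list(negatives), list(totals)
        if negatives[i] > 0:                  # drop one negative culture
            neg[i] -= 1; tot[i] -= 1
            loo_sum += negatives[i] * mle_frequency(doses, neg, tot)
            neg[i] += 1; tot[i] += 1
        pos_i = totals[i] - negatives[i]
        if pos_i > 0:                         # drop one positive culture
            tot[i] -= 1
            loo_sum += pos_i * mle_frequency(doses, neg, tot)
            tot[i] += 1
    # first-order jackknife bias correction
    return n * f_hat - (n - 1) * (loo_sum / n)
```

For example, with dose groups of 1,000, 5,000, and 25,000 cells and 24 cultures each, the jackknifed estimate differs from the raw MLE only by the bias correction.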

2.
This is part 2 of a pair of papers on antimicrobial assays conducted to estimate the log reduction (LR), in the density of viable microbes, attributable to the germicide. Two alternative definitions of LR were given in part 1: one based on the mean of the log-transformed densities, the other on the logarithm of the mean of the densities. In this paper, we evaluate statistical methods for estimating LR from an antimicrobial assay in which the responses are presence/absence observations at each dilution in a series of dilutions. We provide a model for the presence/absence data and, for each definition of LR, derive the maximum likelihood estimator (mle). Using computer simulation methods, we compare the mle to several alternative estimators, including an estimator based on averaging the log-transformed most probable number (mpn) values. Standard error formulas for the estimators are also derived and evaluated using computer simulations. This investigation results in the following recommendations. If the parameter of interest is based on the mean of log-transformed densities, then the results favor use of the log-transformed mpn method. If, however, the parameter of interest is based on the logarithm of the mean of densities, then the results show that the mle should be used.

3.
In quantitative antimicrobial assays, the responses are counts of viable microbes in two treatment groups. One group is treated with a chemical germicide; the other, a control group, is treated with an inactive chemical. This is part 1 of a pair of papers that pertain to assays that estimate the log reduction (LR), in the density of viable microbes, attributable to the germicide treatment (part 2 is concerned with presence/absence responses). Such assays are used by producers, consumers, and regulatory agencies to assess the efficacy of liquid germicides. We define and compare the two different mathematical formulations for LR that are commonly used in practice when there are replicate density measurements. One LR parameter is based on the mean of the log-transformed densities; the other is based on the logarithm of the mean of densities. We build a statistical model relating microbial count data to the LR parameters, derive maximum likelihood and method of moments estimators for each LR parameter, and compare the estimators according to both their asymptotic characteristics and the results of a simulation study utilizing realistic sample sizes. Standard error formulas for the estimators are derived and evaluated via simulation studies. The results of this investigation lead us to recommend the method of moments estimator, regardless of which definition of LR is chosen.
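The two LR formulations compared above can be written directly against replicate density measurements. This is a hedged sketch of the definitions only; the paper's full statistical model (dilution, plating, and the method of moments machinery) is omitted.

```python
import numpy as np

def log_reductions(control, treated):
    """Two common log-reduction definitions from replicate viable-count
    densities (e.g., CFU/mL): one averages the log10 densities,
    the other takes log10 of the average density."""
    c, t = np.asarray(control, float), np.asarray(treated, float)
    lr_mean_log = np.mean(np.log10(c)) - np.mean(np.log10(t))
    lr_log_mean = np.log10(np.mean(c)) - np.log10(np.mean(t))
    return lr_mean_log, lr_log_mean
```

The two definitions generally disagree unless the replicate densities are identical, because the mean of logs is not the log of the mean.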

4.
This study reports the results of a critical comparison of five statistical methods for estimating the density of viable cells in a limiting dilution assay (LDA). Artificial data were generated using Monte Carlo simulation. The performance of each statistical method was examined with respect to the accuracy of its estimator and, most importantly, the accuracy of its associated estimated standard error (SE). The regression method was found to perform at a level that is unacceptable for scientific research, due primarily to gross underestimation of the SE. The maximum likelihood method exhibited the best overall performance. A corrected version of Taswell's weighted-mean method, which provides the best performance among all noniterative methods examined, is also presented.

5.
Six different statistical methods for comparing limiting dilution assays were evaluated, using both real data and a power analysis of simulated data. Simulated data consisted of a series of 12 dilutions for two treatment groups with 24 cultures per dilution and 1,000 independent replications of each experiment. Data within each replication were generated by Monte Carlo simulation, based on a probability model of the experiment. Analyses of the simulated data revealed that the type I error rates for the six methods differed substantially, with only the likelihood ratio and Taswell's weighted-mean methods approximating the nominal 5% significance level. Of the six methods, the likelihood ratio and Taswell's minimum chi-square methods exhibited the best power (lowest probability of type II errors). Taswell's weighted-mean test yielded acceptable type I and type II error rates, whereas the regression method was judged unacceptable for scientific work.

6.
This work is concerned with statistical methods to estimate yield and maintenance parameters associated with microbial growth. For a given dilution rate, an experimenter typically measures substrate concentration, oxygen utilization rate, the rate of carbon dioxide evolution, and biomass concentration. These correlated response variables each contain information about the maintenance and yield parameters of interest. A maximum likelihood estimator which combines this correlated information for the yield and maintenance parameters is proposed, evaluated, and tested on literature data. Both point and interval estimators are considered.
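A simpler, non-ML illustration of the yield/maintenance idea is Pirt's linear relation at chemostat steady state, q_s = D/Y + m, fitted by ordinary least squares from one response variable. This is a textbook sketch under that assumption, not the combined maximum likelihood estimator the paper proposes.

```python
import numpy as np

# Pirt's relation at steady state in a chemostat:
#   q_s = D / Y + m
# where q_s is the specific substrate-uptake rate, D the dilution rate,
# Y the "true" growth yield, and m the maintenance coefficient.
def pirt_fit(D, q_s):
    """Least-squares fit of q_s = slope * D + intercept;
    returns (yield, maintenance) = (1 / slope, intercept)."""
    slope, intercept = np.polyfit(D, q_s, 1)
    return 1.0 / slope, intercept
```

With measurements at several dilution rates, the reciprocal of the fitted slope estimates the yield and the intercept estimates maintenance.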

7.
K Y Liang  S L Zeger 《Biometrics》1988,44(4):1145-1156
A new estimator of the common odds ratio in one-to-one matched case-control studies is proposed. The connection between this estimator and the James-Stein estimating procedure is highlighted through the argument of estimating functions. Comparisons are made between this estimator, the conditional maximum likelihood estimator, and the estimator ignoring the matching in terms of finite sample bias, mean squared error, coverage probability, and length of confidence interval. In many situations, the new estimator is found to be more efficient than the conditional maximum likelihood estimator without being as biased as the estimator that ignores matching. The extension to multiple risk factors is also outlined.
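For 1:1 matched pairs with a single binary exposure, the conditional maximum likelihood estimator used as the comparison benchmark above reduces to the ratio of discordant pairs. A minimal sketch (function and variable names are my own):

```python
def matched_pairs_or(pairs):
    """Conditional MLE of the common odds ratio for 1:1 matched
    case-control data with a binary exposure: the ratio of discordant
    pairs.  Each pair is (case_exposed, control_exposed) with 0/1
    entries; concordant pairs carry no information and drop out."""
    n10 = sum(1 for c, k in pairs if c == 1 and k == 0)
    n01 = sum(1 for c, k in pairs if c == 0 and k == 1)
    return n10 / n01
```

For instance, 15 pairs with only the case exposed against 5 pairs with only the control exposed give an odds ratio of 3, regardless of how many concordant pairs are present.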

8.
L D Mueller 《Biometrics》1979,35(4):757-763
The delta and jackknife methods can be used to estimate Nei's measure of genetic distance and calculate confidence intervals for this estimate. Computer simulations were used to study the bias and variance of each estimator and the accuracy of the corresponding approximate 95% confidence intervals. The simulations were conducted using 3 sets of data and several sample sizes. The results showed: (1) the jackknife reduced bias; (2) in 8 out of 9 cases the variance and mean square error of the jackknife estimator were smaller; (3) a second-order jackknife reduced the bias the most but suffered a corresponding increase in variance; (4) both the first-order jackknife and delta methods yielded intervals whose confidence levels were approximately equal to each other but less than 95%.
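Nei's standard distance and a first-order delete-one-locus jackknife can be sketched as below, assuming allele frequencies arranged as a (loci × alleles) array. The names and the choice to jackknife over loci are my own illustrative assumptions.

```python
import numpy as np

def nei_distance(px, py):
    """Nei's standard genetic distance D = -ln(Jxy / sqrt(Jx * Jy)),
    with J's averaged over loci of sums of squared (or cross) allele
    frequencies.  px, py: arrays shaped (loci, alleles)."""
    jx = np.mean(np.sum(px * px, axis=1))
    jy = np.mean(np.sum(py * py, axis=1))
    jxy = np.mean(np.sum(px * py, axis=1))
    return -np.log(jxy / np.sqrt(jx * jy))

def jackknife_nei(px, py):
    """First-order jackknife (delete one locus at a time):
    bias-corrected estimate and jackknife standard error."""
    L = px.shape[0]
    d_full = nei_distance(px, py)
    loo = np.array([nei_distance(np.delete(px, i, 0), np.delete(py, i, 0))
                    for i in range(L)])
    d_jack = L * d_full - (L - 1) * loo.mean()
    se = np.sqrt((L - 1) / L * np.sum((loo - loo.mean()) ** 2))
    return d_jack, se
```

Identical populations give a distance (and jackknife SE) of zero; any genuine frequency difference gives a strictly positive distance.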

9.
Wang J  Basu S 《Biometrics》1999,55(1):111-116
Interval estimates of the concentration of target entities from a serial dilution assay are usually based on the maximum likelihood estimator. The distribution of the maximum likelihood estimator is skewed to the right and is positively biased. This bias results in interval estimates that either provide inadequate coverage relative to the nominal level or yield excessively long intervals. Confidence intervals based on both log transformation and bias reduction are proposed and are shown through simulations to provide appropriate coverage with shorter widths than the commonly used intervals in a variety of designs. An application to feline AIDS research, which motivated this work, is also presented.
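The log-transformation idea can be sketched with the delta method: build the interval on the log scale, where the sampling distribution is closer to symmetric, then back-transform. This is a minimal sketch of that one ingredient; the paper's bias-reduction step is omitted, and the function name is my own.

```python
import math

def log_ci(theta_hat, se, z=1.96):
    """Back-transformed log-scale confidence interval for a positive,
    right-skewed estimate.  Delta method: SE(log theta) ~ SE(theta)/theta,
    so the interval is theta * exp(+/- z * SE / theta)."""
    half = z * se / theta_hat
    return theta_hat * math.exp(-half), theta_hat * math.exp(half)
```

The resulting interval is asymmetric around the point estimate, with the longer arm on the right, matching the right skew of the MLE.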

10.
Konopiński (2022) suggests that when averaging nucleotide diversity over a sequence, ignoring per-site sample size variation (i.e., using an unweighted mean) offers an improvement in precision (lower variation) and accuracy (reduced bias). Here, I argue that preserving uncertainty due to variation in sample size is in line with best statistical practices, and that the increase in accuracy observed is not a general feature of the unweighted mean proposed by Konopiński (2022). As such, I conclude that the use of a weighted mean, as employed by Korunes and Samuk (2020), remains the preferred method for averaging nucleotide diversity over multiple sites.
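The contrast between the two averages can be sketched directly, weighting each site by its number of pairwise comparisons. This is an illustrative sketch (the weight n_i(n_i-1)/2 is the natural choice for per-site diversity from n_i sampled sequences); it is not code from either cited paper.

```python
def mean_pi(pi, n):
    """Average per-site nucleotide diversity over sites: the weighted
    mean uses each site's number of pairwise comparisons,
    n_i * (n_i - 1) / 2, as its weight; the unweighted mean treats all
    sites equally regardless of sample size."""
    w = [ni * (ni - 1) / 2 for ni in n]
    weighted = sum(wi * p for wi, p in zip(w, pi)) / sum(w)
    unweighted = sum(pi) / len(pi)
    return weighted, unweighted
```

A site genotyped in only two sequences contributes a single, noisy pairwise comparison; the weighted mean downweights it accordingly, while the unweighted mean gives it full influence.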

11.
MS/MS combined with database search methods can identify the proteins present in complex mixtures. High throughput methods that infer probable peptide sequences from enzymatically digested protein samples create a challenge in how best to aggregate the evidence for candidate proteins. Typically the results of multiple technical and/or biological replicate experiments must be combined to maximize sensitivity. We present a statistical method for estimating probabilities of protein expression that integrates peptide sequence identifications from multiple search algorithms and replicate experimental runs. The method was applied to create a repository of 797 non-homologous zebrafish (Danio rerio) proteins, at an empirically validated false identification rate under 1%, as a resource for the development of targeted quantitative proteomics assays. We have implemented this statistical method as an analytic module that can be integrated with an existing suite of open-source proteomics software.

12.
W W Hauck 《Biometrics》1984,40(4):1117-1123
The finite-sample properties of various point estimators of a common odds ratio from multiple 2 × 2 tables have been considered in a number of simulation studies. However, the conditional maximum likelihood estimator has received only limited attention. That omission is partially rectified here for cases of relatively small numbers of tables and moderate to large within-table sample sizes. The conditional maximum likelihood estimator is found to be superior to the unconditional maximum likelihood estimator, and equal or superior to the Mantel-Haenszel estimator in both bias and precision.
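The Mantel-Haenszel comparator above has a simple closed form across strata; a minimal sketch (table layout and names are my own):

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel common odds ratio across 2 x 2 tables.
    Each table is (a, b, c, d): a, b = exposed/unexposed cases,
    c, d = exposed/unexposed controls; n = a + b + c + d."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den
```

With a single stratum it reduces to the ordinary cross-product ratio ad/bc, and it remains stable when some strata have small counts.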

13.
Selected distributional properties of the maximum likelihood estimator, and of its z-transformation, of three familial correlations (parental, parent-offspring, filial) were investigated numerically for the case of nuclear families with variable sibship size. This investigation was based on six different sets of the three correlations and four different sample sizes, defining 24 sampling conditions, each replicated 1,000 times. It was found that the distributional properties of the correlation estimator are affected by the magnitude of the correlations even in large samples, although approximate normality is achieved locally. Fisher's z-transformation, here used only in its interclass form, achieves reduction of skewness, stabilization of variance, and an approach to normality even in small samples, except for the filial correlation (where it may be deemed inappropriate) in smaller samples. For both the correlation estimator and its z-transformation, the (estimated) relative efficiency was shown to be high (better than 90% in most sampling conditions), suggesting that the estimated minimum variance bound is a satisfactory estimator of the sampling variance. It is concluded that maximum likelihood estimation of familial correlations under variable sibship size is feasible and, when prudently applied, especially in the form of their z-transformations, provides an appropriate method for analyses of family studies.
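Fisher's z-transformation used above has a closed form; a minimal sketch:

```python
import math

def fisher_z(r):
    """Fisher's z-transformation of a correlation coefficient:
    z = atanh(r) = 0.5 * ln((1 + r) / (1 - r)).  It reduces skewness
    and approximately stabilizes the variance (for an interclass
    correlation from n pairs, SD(z) ~ 1 / sqrt(n - 3))."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inverse_fisher_z(z):
    """Back-transform to the correlation scale."""
    return math.tanh(z)
```

Inference is typically done on the z scale (where normality holds well even in small samples) and the interval endpoints are back-transformed with tanh.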

14.
As biological studies become more expensive to conduct, statistical methods that take advantage of existing auxiliary information about an expensive exposure variable are desirable in practice. Such methods should improve study efficiency and increase the statistical power for a given number of assays. In this article, we consider an inference procedure for multivariate failure time with auxiliary covariate information. We propose an estimated pseudopartial likelihood estimator under the marginal hazard model framework and develop the asymptotic properties for the proposed estimator. We conduct simulation studies to evaluate the performance of the proposed method in practical situations and demonstrate the proposed method with a data set from the studies of left ventricular dysfunction (SOLVD Investigators, 1991, New England Journal of Medicine 325, 293–302).

15.
Genetic correlations are frequently estimated from natural and experimental populations, yet many of the statistical properties of their estimators are not known, and accurate methods have not been described for estimating the precision of genetic-correlation estimates. Our objective was to assess the statistical properties of multivariate analysis of variance (MANOVA), restricted maximum likelihood (REML), and maximum likelihood (ML) estimators of the genetic correlation by simulating bivariate normal samples for the one-way balanced linear model. We estimated probabilities of non-positive-definite MANOVA estimates of genetic variance-covariance matrices, as well as the biases and variances of the MANOVA, REML, and ML estimators, and assessed the accuracy of parametric, jackknife, and bootstrap variance and confidence interval estimators. MANOVA estimates were normally distributed. REML and ML estimates were normally distributed for some parameter values but skewed for others, including a genetic correlation of 0.9. All of the estimators were biased. The MANOVA estimator was less biased than the REML and ML estimators when heritability (H), the number of genotypes (n), and the number of replications (r) were low. The biases were otherwise nearly equal across estimators and could not be reduced by jackknifing or bootstrapping. The variance of the MANOVA estimator was greater than the variance of the REML or ML estimator for most H, n, and r. Bootstrapping produced estimates of the variance close to the known variance, especially for REML and ML. The observed coverages of the REML and ML bootstrap interval estimators were consistently close to the stated coverages, whereas the observed coverage of the MANOVA bootstrap interval estimator was unsatisfactory for some H, n, and r. The other interval estimators produced unsatisfactory coverages. REML and ML bootstrap interval estimates were narrower than MANOVA bootstrap interval estimates for most H, n, and r. Received: 6 July 1995 / Accepted: 8 March 1996

16.
Researchers have long appreciated the significant relationship between body size and an animal's overall adaptive strategy and life history. However, much more emphasis has been placed on interpreting body size than on the actual calculation of it. One measure of size that is especially important for human evolutionary studies is stature. Despite a long history of investigation, stature estimation remains plagued by two methodological problems: (1) the choice of the statistical estimator, and (2) the choice of the reference population from which to derive the parameters. This work addresses both of these problems in estimating stature for fossil hominids, with special reference to A.L. 288-1 (Australopithecus afarensis) and WT 15000 (Homo erectus). Three reference samples of known stature with maximum humerus and femur lengths are used in this study: a large (n=2209) human sample from North America, a smaller sample of modern human pygmies (n=19) from Africa, and a sample of wild-collected African great apes (n=85). Five regression techniques are used to estimate stature in the fossil hominids using both univariate and multivariate parameters derived from the reference samples: classical calibration, inverse calibration, major axis, reduced major axis, and the zero-intercept ratio model. We also explore a new diagnostic to test extrapolation and allometric differences with multivariate data, and we calculate 95% confidence intervals to examine the range of variation in estimates for A.L. 288-1, WT 15000, and the new Bouri hominid (contemporary with Australopithecus garhi). Results frequently vary depending on whether the data are univariate or multivariate. Unique limb proportions and fragmented remains complicate the choice of estimator. We are usually left in the end with the classical calibrator as the best choice.
It is the maximum likelihood estimator that performs best overall, especially in scenarios where extrapolation occurs away from the mean of the reference sample. The new diagnostic appears to be a quick and efficient way to determine at the outset whether extrapolation exists in size and/or shape of the long bones between the reference sample and the target specimen.
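The distinction between classical and inverse calibration can be sketched in a few lines: classical calibration regresses bone length on stature and inverts the fit, while inverse calibration regresses stature on bone length directly. This is an illustrative sketch only (names are my own), not the multivariate procedures used in the study.

```python
import numpy as np

def inverse_calibration(stature, bone, bone_new):
    """Regress stature directly on bone length, then predict."""
    b, a = np.polyfit(bone, stature, 1)
    return a + b * bone_new

def classical_calibration(stature, bone, bone_new):
    """Regress bone length on stature (the regression direction that
    treats stature as the controlled variable), then invert the fit."""
    b, a = np.polyfit(stature, bone, 1)
    return (bone_new - a) / b
```

On perfectly linear data the two agree; with scatter they diverge, and the divergence grows when the target bone length lies far from the reference-sample mean, which is exactly the extrapolation setting discussed above.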

17.
The log-det estimator is a measure of divergence (evolutionary distance) between sequences of biological characters, DNA or amino acids, for example, and has been shown to be robust to biases in composition that can cause problems for other estimators. We provide a statistical framework to construct high-accuracy confidence intervals for log-det estimates and compare the efficiency of the estimator to that of maximum likelihood using time-reversible Markov models. The log-det estimator is found to have good statistical properties under such general models.
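One common form of the log-det distance can be computed directly from the joint frequency matrix of aligned character pairs; a hedged sketch for the 4-state (DNA) case, with the marginal-frequency correction term (the paper's exact variant and interval construction may differ):

```python
import numpy as np

def log_det_distance(F):
    """Log-det distance from a k x k joint frequency matrix F of
    aligned character pairs (rows: states in sequence X, columns:
    states in sequence Y):
        d = -(1/k) * [ln det(F) - 0.5 * (sum ln fx + sum ln fy)]
    where fx, fy are the marginal state frequencies."""
    F = np.asarray(F, float)
    fx = F.sum(axis=1)            # marginal frequencies, sequence X
    fy = F.sum(axis=0)            # marginal frequencies, sequence Y
    k = F.shape[0]
    return -(1.0 / k) * (np.log(np.linalg.det(F))
                         - 0.5 * (np.log(fx).sum() + np.log(fy).sum()))
```

For identical sequences with uniform composition, F is diagonal and the distance is zero; any off-diagonal (mismatch) mass makes it positive. Because the formula uses only the determinant and the marginals, compositional bias largely cancels, which is the robustness property noted above.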

18.
Multiple lower limits of quantification (MLOQs) result when several laboratories are involved in the analysis of concentration data and some observations are too low to be quantified. For normally distributed data subject to MLOQs, only Helsel's multiple regression method is available to estimate the mean and variance. We propose a simple imputation method and two new maximum likelihood estimation methods: the multiple truncated sample method and the multiple censored sample method. A simulation study is conducted to compare the performance of the newly introduced methods to Helsel's, using the root mean squared error (RMSE) and bias of the parameter estimates as criteria. Two and four lower limits of quantification (LLOQs), various proportions of unquantifiable observations, and two sample sizes are studied. Furthermore, robustness is investigated under model misspecification. All methods perform with decreasing accuracy as the proportion of unquantified observations increases. Larger sample sizes lead to smaller bias. There is almost no change in performance between two and four LLOQs. The magnitude of the variance impairs the performance of all methods. For a smaller variance, the multiple censored sample method yields superior estimates with respect to both RMSE and bias, whereas Helsel's method is superior with respect to bias for a larger variance. Under model misspecification, Helsel's method is inferior to the other methods. For estimating the mean, the multiple censored sample method performs better, whereas the multiple truncated sample method performs best for estimating the variance. In summary, for large samples of normally distributed data we recommend Helsel's method; otherwise, the multiple censored sample method should be used to obtain estimates of the mean and variance of data subject to MLOQs.
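The censored-sample idea can be sketched as a likelihood in which each unquantified observation contributes the normal CDF at its own laboratory's LLOQ. This is a minimal sketch of that principle under a plain normal model, not the authors' implementation; the function names, parameterization, and optimizer choice are my own.

```python
import math
import numpy as np
from scipy.optimize import minimize

def censored_normal_mle(observed, lloqs):
    """ML estimate of (mean, sd) for normal data with left-censoring at
    multiple limits.  `observed` holds the quantified values; `lloqs`
    holds one limit per unquantified (censored) observation.  Quantified
    values contribute the normal density, censored ones contribute
    Phi((L - mu) / sd)."""
    x = np.asarray(observed, float)
    L = np.asarray(lloqs, float)

    def nll(params):
        mu, log_sd = params
        sd = math.exp(log_sd)                    # keep sd positive
        ll = np.sum(-0.5 * ((x - mu) / sd) ** 2
                    - math.log(sd) - 0.5 * math.log(2 * math.pi))
        z = (L - mu) / sd
        phi = np.clip(0.5 * (1 + np.array(
            [math.erf(v / math.sqrt(2)) for v in z])), 1e-300, 1.0)
        return -(ll + np.sum(np.log(phi)))

    res = minimize(nll, x0=[x.mean(), math.log(x.std() + 1e-6)],
                   method="Nelder-Mead")
    return res.x[0], math.exp(res.x[1])
```

Unlike naive substitution of the LLOQ (or half of it) for censored values, this likelihood uses only the information that each censored value fell below its own limit, so the mean is not biased upward by the quantified-only subsample.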

19.
Liu Q  Chi GY 《Biometrics》2001,57(1):172-177
Proschan and Hunsberger (1995, Biometrics 51, 1315-1324) proposed a two-stage adaptive design that maintains the Type I error rate. For practical applications, a two-stage adaptive design is also required to achieve a desired statistical power while limiting the maximum overall sample size. In our proposal, a two-stage adaptive design is comprised of a main stage and an extension stage, where the main stage has sufficient power to reject the null under the anticipated effect size and the extension stage allows increasing the sample size in case the true effect size is smaller than anticipated. For statistical inference, methods for obtaining the overall adjusted p-value, point estimate and confidence intervals are developed. An exact two-stage test procedure is also outlined for robust inference.

20.
The statistics of estimators used with the endpoint assay for virus titration were investigated. For a standard assay with 10 wells/dilution, the graphical estimator traditionally used was found to produce estimates with significant positive bias and a relatively low accuracy. Furthermore, the graphical estimator was found to be inconsistent. A superior estimator based on the maximum likelihood principle was developed. The results are discussed in relation to the choice between the endpoint titration assay and the plaque assay, and an alternative two-stage assay is presented.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)