20 similar documents were retrieved.
1.
R. A. Parker 《Biometrical journal. Biometrische Zeitschrift》1996,38(3):375-384
I propose an exact confidence interval for the ratio of two proportions when the proportions are not independent. One application is to estimate the population prevalence using a screening test with perfect specificity but imperfect sensitivity. The population prevalence is then the observed prevalence divided by the test's sensitivity. I describe a method for calculating exact confidence intervals for this problem and compare the results with approximate confidence intervals given previously.
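For orientation, here is a minimal Python sketch of the sensitivity-adjusted estimate together with a delta-method style approximate interval that treats the sensitivity as known; it is not the exact interval proposed in the paper, and the function name and inputs are illustrative.

```python
import numpy as np
from scipy import stats

def adjusted_prevalence_ci(x, n, sensitivity, alpha=0.05):
    """Approximate CI for prevalence = observed prevalence / sensitivity.

    Sketch only: treats sensitivity as known and uses a delta-method
    (Wald) interval on the observed proportion; the exact interval in
    the paper accounts for the dependence and small samples.
    """
    p_obs = x / n                      # observed (apparent) prevalence
    p_adj = p_obs / sensitivity        # sensitivity-adjusted prevalence
    se = np.sqrt(p_obs * (1 - p_obs) / n) / sensitivity
    z = stats.norm.ppf(1 - alpha / 2)
    return p_adj, max(0.0, p_adj - z * se), min(1.0, p_adj + z * se)

print(adjusted_prevalence_ci(x=12, n=400, sensitivity=0.8))
```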
2.
Kung-Jong Lui 《Biometrical journal. Biometrische Zeitschrift》1997,39(5):545-558
This paper discusses interval estimation for the ratio of mean failure times on the basis of paired exponential observations. Five interval estimators are considered: the confidence interval using an idea similar to Fieller's theorem (CIFT), the confidence interval using an exact parametric test (CIEP), the confidence interval using the marginal likelihood ratio test (CILR), the confidence interval assuming no matching effect (CINM), and the confidence interval using a locally most powerful test (CIMP). Monte Carlo simulation is applied to evaluate and compare the performance of these five interval estimators. With respect to coverage probability, the CIFT, CILR, and CIMP, although all derived from large-sample theory, can perform well even when the number of pairs n is as small as 10. Compared with the CILR, the CIEP with equal tail probabilities is likely to lose efficiency; this loss can be reduced by choosing the tail probabilities that minimize the average length when n is small (<20). The CIMP is preferable to the CIEP in a variety of the situations considered here; in fact, the average length of the CIMP with optimal tail probabilities can even be shorter than that of the CILR. When the intraclass correlation between failure times within pairs is 0 (i.e., the failure times within the same pair are independent), the CINM, which is derived for two independent samples, is certainly the best of the five interval estimators considered here. When the intraclass correlation is nonzero but small (<0.10), the CIFT is recommended for obtaining a relatively short interval estimate without sacrificing coverage probability. When the intraclass correlation is moderate or large, either the CILR or the CIMP with optimal tail probabilities is preferable to the others. Finally, if the intraclass correlation between failure times within pairs is large, use of the CINM can be misleading, especially when the number of pairs is large.
3.
D. E. Walters 《Biometrical journal. Biometrische Zeitschrift》1986,28(3):337-346
A method is presented for deriving confidence limits for the difference of two proportions using Bayesian theory. The reliability of the method is assessed for a selection of small sample sizes, and a limited table of confidence limits is displayed.
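A minimal simulation sketch of a Bayesian interval for the difference of two proportions, assuming independent Beta posteriors under uniform priors; the prior choice and the simulation approach are assumptions, since the paper derives its limits analytically.

```python
import numpy as np

def bayes_diff_interval(x1, n1, x2, n2, alpha=0.05, draws=100_000, seed=0):
    """Equal-tailed credible interval for p1 - p2 via posterior simulation.

    Sketch only: independent Beta(1 + x, 1 + n - x) posteriors
    (uniform priors); the paper's tabulated limits are not reproduced.
    """
    rng = np.random.default_rng(seed)
    p1 = rng.beta(1 + x1, 1 + n1 - x1, draws)
    p2 = rng.beta(1 + x2, 1 + n2 - x2, draws)
    lo, hi = np.quantile(p1 - p2, [alpha / 2, 1 - alpha / 2])
    return lo, hi

print(bayes_diff_interval(x1=7, n1=20, x2=3, n2=18))
```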
4.
T. Anbupalam, K. N. Ponnuswamy, M. R. Srinivasan 《Biometrical journal. Biometrische Zeitschrift》1994,36(5):549-556
An alternative method is proposed for determining approximate lower confidence limits for a positive linear combination of two variances, based on an approach similar to that of BULMER (1957). The probability coverage of the proposed limits is compared with that of other existing methods.
5.
J. Jack Lee, Dan M. Serachitopol, Barry W. Brown 《Biometrical journal. Biometrische Zeitschrift》1997,39(4):387-407
Two new methods for computing confidence intervals for the difference δ = p1 − p2 between two binomial proportions (p1, p2) are proposed. Both the Mid-P and Max-P likelihood-weighted intervals are constructed by mapping the tail probabilities from the two-dimensional (p1, p2)-space into a one-dimensional function of δ based on likelihood weights. The procedure may be regarded as a natural extension of the CLOPPER-PEARSON (1934) interval to the two-sample case, with a weighted tail probability of α/2 at each end of the δ scale. The probability computation is based on the exact distribution rather than a large-sample approximation. Extensive computation was carried out to evaluate the coverage probability and expected width of the likelihood-weighted intervals and of several other methods. The likelihood-weighted intervals compare very favorably with the standard asymptotic interval and with intervals proposed by HAUCK and ANDERSON (1986), COX and SNELL (1989), SANTNER and SNELL (1980), SANTNER and YAMAGAMI (1993), and PESKUN (1993). In particular, the Mid-P likelihood-weighted interval provides a good balance between accurate coverage probability and short interval width in both small and large samples. The Mid-P interval is also comparable to COE and TAMHANE's (1993) interval, which has the best performance in small samples.
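For context, the single-sample Clopper-Pearson interval that this construction extends can be computed from Beta quantiles; the sketch below shows only that building block, not the likelihood weighting over the (p1, p2)-space.

```python
from scipy import stats

def clopper_pearson(x, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided interval for a binomial proportion.

    Building block only: the paper maps two-sample tail probabilities
    onto the difference scale via likelihood weights.
    """
    lower = 0.0 if x == 0 else stats.beta.ppf(alpha / 2, x, n - x + 1)
    upper = 1.0 if x == n else stats.beta.ppf(1 - alpha / 2, x + 1, n - x)
    return lower, upper

print(clopper_pearson(x=4, n=25))
```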
6.
D. E. Walters 《Biometrical journal. Biometrische Zeitschrift》1985,27(8):851-861
The performance of the standard confidence limits for a proportion is studied over the entire range of proportions and for various prescribed values of the nominal confidence coefficient. In light of the conservative nature of these limits, modifications are suggested that result in limits with average confidence coefficients close to the 95% nominal value. A comparison is made with a solution obtained by applying Bayes's theorem to the problem.
7.
The problem of confidence interval construction for the odds ratio of two independent binomial samples is considered. Two methods of eliminating the nuisance parameter from the exact likelihood, conditioning and maximization, are described. A conditionally exact tail-based interval is obtained by combining upper and lower bounds; a shorter interval can be obtained by simultaneous consideration of both tails. New methods are presented here that extend the tail-based and simultaneous approaches to the maximized likelihood. The methods are unbiased and applicable to case-control data, for which the odds ratio is important. The confidence interval procedures are compared unconditionally for small sample sizes in terms of their expected length and coverage probability. A Bayesian confidence interval method and a large-sample chi-squared procedure are included in the comparisons.
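As a reference point for the comparisons mentioned, here is a sketch of a standard large-sample (Woolf-type) log odds-ratio interval; it is not one of the exact or maximized-likelihood procedures developed in the paper, and the continuity correction used is an assumption.

```python
import numpy as np
from scipy import stats

def woolf_or_ci(a, b, c, d, alpha=0.05):
    """Large-sample CI for the odds ratio of a 2x2 table [[a, b], [c, d]].

    Sketch only: exp(log-OR +/- z * sqrt(1/a + 1/b + 1/c + 1/d)), with a
    0.5 continuity correction when any cell is zero.
    """
    if min(a, b, c, d) == 0:
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    log_or = np.log(a * d / (b * c))
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    z = stats.norm.ppf(1 - alpha / 2)
    return np.exp(log_or - z * se), np.exp(log_or + z * se)

print(woolf_or_ci(a=12, b=5, c=6, d=14))
```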
8.
The model considered is a two-factor cross-classification variance components model with one observation per cell. Letting the two factors be A and B, the problem is to obtain an approximate confidence interval for the ratio of the variance component for A to the variance component for B. In this paper, a method for solving this problem is established, and simulations are performed to check the method.
9.
Lui KJ 《Biometrical journal. Biometrische Zeitschrift》2006,48(1):131-143
The relative risk (RR) is one of the most frequently used indices to measure the strength of association between a disease and a risk factor in etiological studies or the efficacy of an experimental treatment in clinical trials. In this paper, we concentrate attention on interval estimation of RR for sparse data, in which we have only a few patients per stratum, but a moderate or large number of strata. We consider five asymptotic interval estimators for RR, including a weighted least-squares (WLS) interval estimator with an ad hoc adjustment procedure for sparse data, an interval estimator proposed elsewhere for rare events, an interval estimator based on the Mantel-Haenszel (MH) estimator with a logarithmic transformation, an interval estimator calculated from a quadratic equation, and an interval estimator derived from the ratio estimator with a logarithmic transformation. On the basis of Monte Carlo simulations, we evaluate and compare the performance of these five interval estimators in a variety of situations. We note that, except for the cases in which the underlying common RR across strata is around 1, using the WLS interval estimator with the adjustment procedure for sparse data can be misleading. We note further that using the interval estimator suggested elsewhere for rare events tends to be conservative and hence leads to loss of efficiency. We find that the other three interval estimators can consistently perform well even when the mean number of patients for a given treatment is approximately 3 patients per stratum and the number of strata is as small as 20. Finally, we use a mortality data set comparing two chemotherapy treatments in patients with multiple myeloma to illustrate the use of the estimators discussed in this paper.
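A minimal sketch of one commonly used member of this family, the Mantel-Haenszel relative-risk estimator with a log-transformed interval; the Greenland-Robins variance used below is a standard choice but may differ in detail from the MH-based estimator evaluated in the paper, and the example data are purely illustrative.

```python
import numpy as np
from scipy import stats

def mh_relative_risk(strata, alpha=0.05):
    """Mantel-Haenszel RR with a log-transformed CI (Greenland-Robins variance).

    `strata` is a list of (a, n1, b, n0): events/total in the exposed
    group and events/total in the unexposed group, per stratum.
    Sketch only; the paper compares several interval estimators.
    """
    a, n1, b, n0 = map(np.asarray, zip(*strata))
    N = n1 + n0
    R = np.sum(a * n0 / N)             # numerator of RR_MH
    S = np.sum(b * n1 / N)             # denominator of RR_MH
    rr = R / S
    var_log = np.sum((n1 * n0 * (a + b) - a * b * N) / N**2) / (R * S)
    z = stats.norm.ppf(1 - alpha / 2)
    half = z * np.sqrt(var_log)
    return rr, rr * np.exp(-half), rr * np.exp(half)

# Sparse illustrative data: 2 patients per arm in each of 20 strata.
strata = [(1, 2, 0, 2)] * 10 + [(0, 2, 1, 2)] * 5 + [(1, 2, 1, 2)] * 5
print(mh_relative_risk(strata))
```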
10.
Kung-Jong Lui 《Biometrical journal. Biometrische Zeitschrift》1996,38(2):221-229
When the disease of interest is rare or chronic, the case-control study design and the odds ratio are often used to identify possible risk factors. In this paper, three simple closed-form interval estimation procedures for the odds ratio under inverse sampling are proposed. Results of a Monte Carlo simulation evaluating and comparing the performance of these three procedures are presented. A discussion of the limitation of the proposed procedures when exposure is extremely common in both the case and control groups, together with a suggestion for overcoming this limitation, is included as well.
11.
Bhabesh Sen, Franklin A. Graybill, Naitee Ting 《Biometrical journal. Biometrische Zeitschrift》1992,34(3):259-274
The model considered in this article is the two-factor nested unbalanced variance components model Ypqr = μ + Ap + Bpq + Cpqr, for p = 1, 2, …, P; q = 1, 2, …, Qp; and r = 1, 2, …, Rpq. The random variables Ypqr are observable. The constant μ is an unknown parameter, and Ap, Bpq, and Cpqr are (unobservable) normal and independently distributed random variables with zero means and finite variances σ2A, σ2B, and σ2C, respectively. Approximate confidence intervals on two functions of these variance components are derived using unweighted means. The performance of these approximate confidence intervals is evaluated using computer simulation. The simulated results indicate that the proposed confidence intervals perform satisfactorily and can be used in applied problems.
12.
《Journal of molecular biology》2022,434(15):167586
Machine learning or deep learning models have been widely used for taxonomic classification of metagenomic sequences, and many studies report high classification accuracy. Such models are usually trained on sequences from several training classes in the hope of accurately classifying unknown sequences into these classes. However, when the classification models are deployed on real testing data sets, sequences that do not belong to any of the training classes may be present and are falsely assigned to one of the training classes with high confidence. Such sequences are referred to as out-of-distribution (OOD) sequences and are ubiquitous in metagenomic studies. To address this problem, we develop a deep generative model-based method, MLR-OOD, that measures the probability of a test sequence being OOD by the likelihood ratio between the maximum of the in-distribution (ID) class-conditional likelihoods and the Markov chain likelihood of the test sequence, which measures sequence complexity. We compose three microbial data sets consisting of bacterial, viral, and plasmid sequences for comprehensively benchmarking OOD detection methods. We show that MLR-OOD achieves state-of-the-art performance, demonstrating its generality across various types of microbial data sets. MLR-OOD is also shown to be robust to GC content, which is a major confounding effect for OOD detection of genomic sequences. In conclusion, MLR-OOD will greatly reduce false positives caused by OOD sequences in metagenomic sequence classification.
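A minimal sketch of the Markov-chain component of such a score: fitting first-order transition probabilities to in-distribution sequences and scoring a test sequence. The pseudocount smoothing and function names are assumptions, and the class-conditional deep generative likelihoods that MLR-OOD combines with this term are not shown.

```python
import numpy as np

ALPHABET = "ACGT"
IDX = {c: i for i, c in enumerate(ALPHABET)}

def fit_markov(sequences, pseudocount=1.0):
    """Estimate first-order transition probabilities from DNA sequences."""
    counts = np.full((4, 4), pseudocount)
    for seq in sequences:
        for prev, cur in zip(seq, seq[1:]):
            counts[IDX[prev], IDX[cur]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def markov_loglik(seq, trans):
    """Log-likelihood of a sequence under the fitted Markov chain."""
    return sum(np.log(trans[IDX[p], IDX[c]]) for p, c in zip(seq, seq[1:]))

# Illustrative use: a likelihood-ratio-style OOD score would subtract this
# complexity term from the best in-distribution class log-likelihood.
trans = fit_markov(["ACGTACGTACGT", "AACCGGTTACGT"])
print(markov_loglik("ACGTACGA", trans))
```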
13.
Transformation and computer-intensive methods such as the jackknife and bootstrap are applied to construct accurate confidence intervals for the ratio of specific occurrence/exposure rates, which are used to compare the mortality (or survival) experience of individuals in two study populations. Monte Carlo simulations are employed to compare the performance of the proposed confidence intervals when sample sizes are small or moderate.
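A minimal sketch of a plain percentile bootstrap for a ratio of occurrence/exposure rates, resampling individual (event, exposure-time) records; the transformations and jackknife refinements studied in the paper are not reproduced, and the data below are simulated for illustration.

```python
import numpy as np

def rate_ratio_bootstrap_ci(events1, time1, events2, time2,
                            alpha=0.05, reps=5000, seed=0):
    """Percentile bootstrap CI for the ratio of two occurrence/exposure rates.

    Each rate is sum(events) / sum(exposure time); events*, time* are
    per-individual arrays. Sketch only.
    """
    rng = np.random.default_rng(seed)
    e1, t1 = np.asarray(events1), np.asarray(time1)
    e2, t2 = np.asarray(events2), np.asarray(time2)
    ratios = np.empty(reps)
    for r in range(reps):
        i = rng.integers(0, len(e1), len(e1))
        j = rng.integers(0, len(e2), len(e2))
        ratios[r] = (e1[i].sum() / t1[i].sum()) / (e2[j].sum() / t2[j].sum())
    return np.quantile(ratios, [alpha / 2, 1 - alpha / 2])

# Illustrative data: event indicators and exposure times for two cohorts.
rng = np.random.default_rng(1)
print(rate_ratio_bootstrap_ci(rng.binomial(1, 0.3, 80), rng.uniform(1, 5, 80),
                              rng.binomial(1, 0.2, 90), rng.uniform(1, 5, 90)))
```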
14.
In this paper, we consider the problem of interval estimation for the mean of diagnostic test charges. Diagnostic test charge data may contain zero values, and the nonzero values can often be modeled by a log-normal distribution. Under such a model, we propose three different interval estimation procedures: a percentile-t bootstrap interval based on sufficient statistics and two likelihood-based confidence intervals. We show that the two likelihood-based one-sided confidence intervals are only first-order accurate and that the bootstrap-based one-sided confidence interval is second-order accurate. For two-sided confidence intervals, all three proposed methods are second-order accurate. A simulation study in finite samples suggests that all three proposed intervals outperform a widely used interval based on the minimum variance unbiased estimator (MVUE), except for one-sided lower end-point intervals when the skewness is very small. Among the proposed one-sided intervals, the bootstrap interval has the best coverage accuracy. For the two-sided intervals, when the sample size is small, the bootstrap method still yields the best coverage accuracy unless the skewness is very small, in which case the bias-corrected ML method has the best accuracy. When the sample size is large, all three proposed intervals have similar coverage accuracy. Finally, we apply the proposed methods to a real example assessing diagnostic test charges among older adults with depression.
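A minimal sketch of the simplest bootstrap idea for such data, a percentile interval for the mean of zero-inflated, log-normal-like charges; the paper's percentile-t interval based on sufficient statistics and its likelihood-based intervals are more refined than this.

```python
import numpy as np

def mean_charge_bootstrap_ci(charges, alpha=0.05, reps=5000, seed=0):
    """Percentile bootstrap CI for the mean of data with zeros and
    log-normally distributed positive values. Sketch only."""
    rng = np.random.default_rng(seed)
    x = np.asarray(charges, dtype=float)
    means = np.array([rng.choice(x, size=len(x), replace=True).mean()
                      for _ in range(reps)])
    return x.mean(), np.quantile(means, [alpha / 2, 1 - alpha / 2])

# Illustrative data: ~30% zero charges, log-normal positive charges.
rng = np.random.default_rng(2)
charges = np.where(rng.random(100) < 0.3, 0.0, rng.lognormal(5.0, 1.0, 100))
print(mean_charge_bootstrap_ci(charges))
```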
15.
V. Guiard 《Biometrical journal. Biometrische Zeitschrift》1989,31(6):681-697
It is shown that Fieller's theorem yields an exact confidence region derivable from the likelihood ratio test. The statement of Milliken (1982) that this confidence region is conservative is refuted. Bounds are given for the conditional confidence coefficient under the condition that the confidence region is a "meaningful" confidence interval.
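A minimal sketch of the Fieller construction for the ratio of two means, obtained by solving the quadratic inequality implied by the usual pivotal quantity; the inputs (estimates, variances, covariance, degrees of freedom) are generic assumptions rather than the specific setting analyzed in the paper.

```python
import numpy as np
from scipy import stats

def fieller_ci(m1, v1, m2, v2, cov12=0.0, df=30, alpha=0.05):
    """Fieller interval for theta = mu1 / mu2.

    m1, m2: estimated means; v1, v2: their estimated variances;
    cov12: their covariance; df: degrees of freedom for the t quantile.
    Solves (m2^2 - t^2 v2) th^2 - 2 (m1 m2 - t^2 cov12) th + (m1^2 - t^2 v1) <= 0.
    """
    t2 = stats.t.ppf(1 - alpha / 2, df) ** 2
    a = m2**2 - t2 * v2
    b = -2 * (m1 * m2 - t2 * cov12)
    c = m1**2 - t2 * v1
    disc = b**2 - 4 * a * c
    if a <= 0 or disc < 0:
        return None  # the region is not a finite interval (the "non-meaningful" case)
    lo = (-b - np.sqrt(disc)) / (2 * a)
    hi = (-b + np.sqrt(disc)) / (2 * a)
    return lo, hi

print(fieller_ci(m1=10.0, v1=1.2, m2=4.0, v2=0.5, df=28))
```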
16.
Recently, interest has grown greatly in estimating deleterious genomic mutations (DGM) from mutation-accumulation (MA) experiments. Two estimation methods are commonly used for DGM in MA experiments, maximum likelihood (ML) and the method of moments (MM); the two methods are compared here by computer simulation and with software applied to real data. The conclusions are that the ML method often fails to yield maximum likelihood estimates (MLEs) and is therefore less effective than the MM method; that even when MLEs are obtainable, they suffer from serious small-sample error (in terms of bias and sampling variability) and so produce biased estimates; and that the likelihood function is relatively flat, making it difficult to distinguish leptokurtic from platykurtic distributions.
17.
Many major genes have been identified that strongly influence the risk of cancer. However, there are typically many different mutations that can occur in the gene, each of which may or may not confer increased risk. It is critical to identify which specific mutations are harmful, and which ones are harmless, so that individuals who learn from genetic testing that they have a mutation can be appropriately counseled. This is a challenging task, since new mutations are continually being identified, and there is typically relatively little evidence available about each individual mutation. In an earlier article, we employed hierarchical modeling (Capanu et al., 2008, Statistics in Medicine 27, 1973–1992) using the pseudo-likelihood and Gibbs sampling methods to estimate the relative risks of individual rare variants using data from a case-control study, and showed that one can draw strength from the aggregating power of hierarchical models to distinguish the variants that contribute to cancer risk. However, further research is needed to validate the application of asymptotic methods to such sparse data. In this article, we use simulations to study in detail the properties of the pseudo-likelihood method for this purpose. We also explore two alternative approaches: pseudo-likelihood with correction for the variance component estimate as proposed by Lin and Breslow (1996, Journal of the American Statistical Association 91, 1007–1016), and a hybrid pseudo-likelihood approach with Bayesian estimation of the variance component. We investigate the validity of these hierarchical modeling techniques by looking at the bias and coverage properties of the estimators, as well as at the efficiency of the hierarchical modeling estimates relative to that of the maximum likelihood estimates. The results indicate that the estimates of the relative risks of very sparse variants have small bias, and that the estimated 95% confidence intervals are typically anti-conservative, though the actual coverage rates are generally above 90%. The widths of the confidence intervals narrow as the residual variance in the second-stage model is reduced. The results also show that the hierarchical modeling estimates have shorter confidence intervals relative to estimates obtained from conventional logistic regression, and that these relative improvements increase as the variants become more rare.
18.
Notes on Estimation of the General Odds Ratio and the General Risk Difference for Paired‐Sample Data
Kung‐Jong Lui 《Biometrical journal. Biometrische Zeitschrift》2002,44(8):957-968
Under the matched-pair design, this paper discusses estimation of the general odds ratio ORG for ordinal exposure in case-control studies and the general risk difference RDG for ordinal outcomes in cross-sectional or cohort studies. To illustrate the practical usefulness of the interval estimators of ORG and RDG developed here, the paper uses data from a case-control study investigating the effect of the number of beverages drunk at "burning hot" temperature on the risk of esophageal cancer, and data from a cross-sectional study comparing the grade distributions of unaided distance vision between two eyes. Finally, the paper notes that using the commonly used odds-ratio statistics for dichotomous data, obtained by collapsing the ordinal exposure into two categories (exposure versus non-exposure), tends to be less efficient than using the statistics related to ORG proposed herein.
19.
The purpose of this study was to investigate how variability in pharmacokinetic parameters influences the determination of occupational exposure limits (OELs) for pharmaceutical compounds in potentially susceptible subpopulations. A compartmental pharmacokinetic model for quinidine was applied to derive OELs based on target blood concentrations in humans, but relied on pharmacokinetic parameters from subjects with cirrhosis rather than from normal subjects. Quinidine was used as the example compound because it was used in the development of the methodology. The intent was not to set an OEL for quinidine for a particular population, but rather to use the methodology to investigate how factors that may influence susceptibility could be incorporated into the analysis. The model was used to simulate exposure concentrations that would result in blood levels below those that cause undesirable pharmacological effects, taking variability in the parameters into account through Monte Carlo sampling. The results indicate that cirrhotic patients did not require additional protection from occupational exposure to quinidine. These results cannot be extrapolated to other compounds, as the effects of pharmacokinetic variability on systemic exposure are compound specific. However, the methodology does provide a framework for addressing issues related to the contribution of pharmacokinetics to susceptibility from occupational exposure to pharmaceutical compounds.
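A minimal sketch of the general idea, propagating pharmacokinetic variability through a simple one-compartment steady-state calculation by Monte Carlo sampling; every numerical value below is a hypothetical placeholder, not a quinidine or study parameter.

```python
import numpy as np

def simulate_steady_state(conc_air_mg_m3, n=10_000, seed=0):
    """Monte Carlo sketch: steady-state plasma concentration for inhaled
    exposure under a one-compartment model, Css = absorbed dose rate / CL.

    All values below are hypothetical placeholders, not study parameters.
    """
    rng = np.random.default_rng(seed)
    breathing_rate = 1.25          # m3/h, light work (assumed)
    absorbed_fraction = 0.5        # assumed
    cl = rng.lognormal(mean=np.log(15.0), sigma=0.4, size=n)  # clearance, L/h (hypothetical)
    dose_rate = conc_air_mg_m3 * breathing_rate * absorbed_fraction  # mg/h
    css = dose_rate / cl * 1000    # ug/L
    return np.percentile(css, [50, 95])

# The fraction of the simulated population exceeding a target blood
# concentration could then guide the choice of an exposure limit.
print(simulate_steady_state(conc_air_mg_m3=0.1))
```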
20.
For the identification of peptides with tandem mass spectrometry (MS/MS), many software tools rely on the comparison between an experimental spectrum and a theoretically predicted spectrum. Consequently, accurate prediction of the theoretical spectrum from a peptide sequence can potentially improve peptide identification performance and is an important problem for mass spectrometry based proteomics. In this study a new approach, called MS-Simulator, is presented for predicting the y-ion intensities in the spectrum of a given peptide. The new approach focuses on accurately predicting the relative intensity ratio between every two adjacent y-ions; the theoretical spectrum can then be derived from these ratios. The prediction of a ratio is a closed-form equation that involves up to five consecutive amino acids near the two y-ions and the two peptide termini. Compared with an existing spectrum prediction tool, MassAnalyzer, the new approach not only simplifies the computation but also improves the prediction accuracy.
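A minimal sketch of the reconstruction step, deriving relative y-ion intensities from predicted adjacent-ratio values by cumulative multiplication; the closed-form ratio equation itself, which uses up to five neighboring residues, is not reproduced here, and the example ratios are illustrative.

```python
import numpy as np

def intensities_from_adjacent_ratios(ratios, normalize=True):
    """Recover relative y-ion intensities from adjacent intensity ratios.

    ratios[i] is the predicted ratio I[i+1] / I[i]; the first intensity
    is set to 1 and the rest follow by cumulative products. Sketch only;
    MS-Simulator predicts the ratios themselves from the peptide sequence.
    """
    intensities = np.concatenate(([1.0], np.cumprod(ratios)))
    if normalize:
        intensities /= intensities.max()
    return intensities

# Illustrative ratios for a peptide with 5 y-ions.
print(intensities_from_adjacent_ratios([1.8, 0.6, 1.2, 0.4]))
```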