Similar Articles
1.
Problems of establishing equivalence or noninferiority between two medical diagnostic procedures involve comparisons of the response rates between correlated proportions. When the sample size is small, the asymptotic tests may not be reliable. This article proposes an unconditional exact test procedure to assess equivalence or noninferiority. Two statistics, a sample-based test statistic and a restricted maximum likelihood estimation (RMLE)-based test statistic, are considered for defining the rejection region of the exact test. We show that the p-value of the proposed unconditional exact tests can be attained at the boundary point of the null hypothesis. Assessment of equivalence is often based on a comparison of the confidence limits with the equivalence limits. We also derive the unconditional exact confidence intervals on the difference of the two proportions for the two test statistics. A typical data set comparing two diagnostic procedures is analyzed using the proposed unconditional exact and asymptotic methods. The p-value from the unconditional exact tests is generally larger than that from the asymptotic tests; correspondingly, an exact confidence interval is generally wider than the interval obtained from an asymptotic test.
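The boundary-supremum construction described above can be sketched numerically. The block below is a minimal illustration, not the paper's exact procedure: it uses a Wald-type sample-based statistic (the RMLE-based variant would replace the variance estimate), enumerates the trinomial distribution of the discordant-pair counts, and takes the supremum of the tail probability over a grid for the nuisance parameter. All names and the grid resolution are illustrative.

```python
from math import comb, sqrt

def wald_stat(b, c, n, delta):
    # Sample-based statistic for the noninferiority null H0: p1 - p2 <= -delta,
    # where b and c are the two discordant-pair counts among n pairs.
    d = (b - c) / n
    var = (b + c) / n - d * d
    if var <= 0:
        var = 1e-8  # guard for the no-discordant-pairs corner case
    return (d + delta) / sqrt(var / n)

def unconditional_exact_pvalue(b_obs, c_obs, n, delta, grid=50):
    # Supremum over the nuisance parameter psi = p10 + p01, with the
    # difference p10 - p01 fixed at -delta (the boundary of the null).
    t_obs = wald_stat(b_obs, c_obs, n, delta)
    p_max = 0.0
    for g in range(1, grid):
        psi = delta + (1 - delta) * g / grid        # psi in (delta, 1)
        p10, p01 = (psi - delta) / 2, (psi + delta) / 2
        total = 0.0
        for b in range(n + 1):
            for c in range(n - b + 1):
                if wald_stat(b, c, n, delta) >= t_obs:
                    total += (comb(n, b) * comb(n - b, c)
                              * p10 ** b * p01 ** c * (1 - psi) ** (n - b - c))
        p_max = max(p_max, total)
    return p_max
```

Because the p-value is maximized over the nuisance parameter, it is valid whatever the true discordance rate, which is one reason the exact test tends to report larger p-values than its asymptotic counterpart.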

2.
Yin G, Shen Y. Biometrics 2005, 61(2): 362-369
Clinical trial designs involving correlated data often arise in biomedical research. The intracluster correlation needs to be taken into account to ensure the validity of sample size and power calculations. In contrast to the fixed-sample designs, we propose a flexible trial design with adaptive monitoring and inference procedures. The total sample size is not predetermined, but adaptively re-estimated using observed data via a systematic mechanism. The final inference is based on a weighted average of the block-wise test statistics using generalized estimating equations, where the weight for each block depends on cumulated data from the ongoing trial. When there are no significant treatment effects, the devised stopping rule allows for early termination of the trial and acceptance of the null hypothesis. The proposed design updates information regarding both the effect size and within-cluster correlation based on the cumulated data in order to achieve a desired power. Estimation of the parameter of interest and its confidence interval are proposed. We conduct simulation studies to examine the operating characteristics and illustrate the proposed method with an example.

3.
The problem of confidence interval construction for the odds ratio of two independent binomial samples is considered. Two methods of eliminating the nuisance parameter from the exact likelihood, conditioning and maximization, are described. A conditionally exact interval can be formed by combining separate upper and lower tail bounds; a shorter interval can be obtained by simultaneous consideration of both tails. We present new methods that extend the tail and simultaneous approaches to the maximized likelihood. The methods are unbiased and applicable to case-control data, for which the odds ratio is important. The confidence interval procedures are compared unconditionally for small sample sizes in terms of their expected length and coverage probability. A Bayesian confidence interval method and a large-sample chi-squared procedure are included in the comparisons.
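The conditioning approach mentioned above can be sketched by inverting the two tails of the noncentral hypergeometric distribution (the maximization approach would instead profile the unconditional likelihood over the nuisance parameter, and is not shown). This is a rough sketch with illustrative names, suitable only for small tables:

```python
from math import comb

def cond_tail(psi, x, n1, n2, t, upper):
    # Noncentral hypergeometric tail P(X >= x) (upper=True) or P(X <= x),
    # conditioning on the total number of successes t; psi is the odds ratio.
    lo, hi = max(0, t - n2), min(n1, t)
    w = [comb(n1, k) * comb(n2, t - k) * psi ** k for k in range(lo, hi + 1)]
    s = sum(w)
    return sum(w[x - lo:]) / s if upper else sum(w[:x - lo + 1]) / s

def solve_psi(f, target, increasing, lo=1e-8, hi=1e8, iters=100):
    # Bisection on a log scale for a monotone tail function of psi.
    for _ in range(iters):
        mid = (lo * hi) ** 0.5
        if (f(mid) < target) == increasing:
            lo = mid
        else:
            hi = mid
    return (lo * hi) ** 0.5

def conditional_or_ci(x, n1, n2, t, alpha=0.05):
    # Tail-based conditionally exact CI: each limit makes one tail equal alpha/2.
    lo_s, hi_s = max(0, t - n2), min(n1, t)
    lower = 0.0 if x == lo_s else solve_psi(
        lambda p: cond_tail(p, x, n1, n2, t, True), alpha / 2, True)
    upper = float("inf") if x == hi_s else solve_psi(
        lambda p: cond_tail(p, x, n1, n2, t, False), alpha / 2, False)
    return lower, upper
```

Combining the two one-sided limits in this way yields the conditionally exact tail interval; the simultaneous variants in the article shorten it by treating both tails at once.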

4.
Tang ML, Tang NS, Chan IS, Chan BP. Biometrics 2002, 58(4): 957-963
In this article, we propose approximate sample size formulas for establishing equivalence or noninferiority of two treatments in a matched-pairs design. Using the ratio of two proportions as the equivalence measure, we derive sample size formulas based on a score statistic for two types of analyses: hypothesis testing and confidence interval estimation. Depending on the purpose of a study, these formulas can be used to provide a sample size estimate that guarantees a prespecified power of a hypothesis test at a certain significance level or controls the width of a confidence interval with a certain confidence level. Our empirical results confirm that these score methods are reliable in terms of true size, coverage probability, and skewness. A liver scan detection study is used to illustrate the proposed methods.

5.
An alteration to Woodward's methods is recommended for deriving a 1 − α confidence interval for microbial density using serial dilutions with most-probable-number (MPN) estimates. Outcomes of the serial dilution test are ordered by their MPNs. A lower limit for the confidence interval corresponding to an outcome y is the density for which y and all higher-ordered outcomes have total probability α/2. An upper limit is derived in the analogous way. The alteration increases the lowest lower limits and decreases the highest upper limits. For comparison, a method that is optimal in the sense of null hypothesis rejection is described. This method ranks outcomes dependent upon the microbial density in question, using proportional first derivatives of the probabilities. These and currently used methods are compared. The recommended method is shown to be more desirable in certain respects, although it results in slightly wider confidence intervals than De Man's (1983) method.
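For context, the MPN itself is the maximum likelihood estimate from the serial dilution outcomes; the interval method above orders outcomes by this quantity. A sketch of the point estimate only (not Woodward's interval construction), assuming each dilution contributes an inoculum volume, a tube count, and a count of positive tubes, with at least one positive and one negative tube overall:

```python
from math import exp

def mpn_score(lam, data):
    # Derivative of the dilution-series log-likelihood at density lam;
    # data is a list of (volume, n_tubes, n_positive) per dilution.
    s = 0.0
    for v, n, x in data:
        p = 1 - exp(-lam * v)       # P(tube positive) at this dilution
        if p > 0:
            s += x * v * (1 - p) / p
        s -= (n - x) * v
    return s

def mpn_estimate(data, lo=1e-6, hi=1e6, iters=200):
    # The score is decreasing in lam, so bisect (on a log scale) for its root.
    for _ in range(iters):
        mid = (lo * hi) ** 0.5
        if mpn_score(mid, data) > 0:
            lo = mid
        else:
            hi = mid
    return (lo * hi) ** 0.5
```

With three tubes each at 0.1, 0.01, and 0.001 ml, the root of the score equation for a given positive-tube pattern is the MPN around which the confidence interval orderings are built.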

6.
The observation of monophyly for a specified set of genealogical lineages is often used to place the lineages into a distinctive taxonomic entity. However, it is sometimes possible that monophyly of the lineages can occur by chance as an outcome of the random branching of lineages within a single taxon. Thus, especially for small samples, an observation of monophyly for a set of lineages, even if strongly supported statistically, does not necessarily indicate that the lineages are from a distinctive group. Here I develop a test of the null hypothesis that monophyly is a chance outcome of random branching. I also compute the sample size required so that the probability of chance occurrence of monophyly of a specified set of lineages lies below a prescribed tolerance. Under the null model of random branching, the probability that monophyly of the lineages in an index group occurs by chance is substantial if the sample is highly asymmetric, that is, if only a few of the sampled lineages are from the index group, or if only a few lineages are external to the group. If sample sizes are similar inside and outside the group of interest, however, chance occurrence of monophyly can be rejected at stringent significance levels (P < 10^-5) even for quite small samples (approximately 20 total lineages). For a fixed total sample size, rejection of the null hypothesis of random branching in a single taxon occurs at the most stringent level if samples of nearly equal size inside and outside the index group, with a slightly greater size within the index group, are used. Similar results apply, with smaller sample sizes needed, when reciprocal monophyly of two groups, rather than monophyly of a single group, is of interest. The results suggest minimal sample sizes required for inferences to be made about taxonomic distinctiveness from observations of monophyly.
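The null model of random branching can be checked by simulation: under a random-joining (coalescent) topology, an index group is monophyletic exactly when some internal node's descendant set equals the group. A Monte Carlo sketch of the chance-monophyly probability (the paper derives this analytically; names here are illustrative):

```python
import random

def monophyly_prob(n_group, n_out, reps=20000, seed=1):
    # Under the null model, a genealogy is a sequence of random pair joins.
    # The index group, labeled 0..n_group-1, is monophyletic iff some join
    # produces exactly that set of tip labels.
    rng = random.Random(seed)
    target = frozenset(range(n_group))
    hits = 0
    for _ in range(reps):
        nodes = [frozenset([i]) for i in range(n_group + n_out)]
        mono = n_group == 1     # a single lineage is trivially monophyletic
        while len(nodes) > 1:
            i, j = rng.sample(range(len(nodes)), 2)
            merged = nodes[i] | nodes[j]
            if merged == target:
                mono = True
            nodes = [nodes[k] for k in range(len(nodes)) if k not in (i, j)]
            nodes.append(merged)
        hits += mono
    return hits / reps
```

With two group lineages and one external lineage, only one of the three equally likely first joins unites the group, so the estimate should be near 1/3, illustrating how easily monophyly arises by chance in highly asymmetric samples.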

7.
A model is derived to estimate the survival probability over a time interval when censorings occur. The time interval is divided into partial intervals in order to obtain the conditional survival probabilities, each of which is a parameter of a binomially distributed random variable. To allow for the dependence between the events in the different intervals, these parameters are transformed. Corresponding a priori density functions are formulated both for the Bayesian uniform distribution and for the special model. The a posteriori density function is derived for the product of the conditional survival probabilities, and formulae for the Bayesian confidence interval and the expectation are given. Lower and upper bounds for the confidence interval and the expectation are derived. Some examples are given to compare the results with other methods.
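A simplified numeric sketch of the construction: with independent uniform Beta(1, 1) priors on each conditional survival probability (the paper additionally transforms the parameters to model dependence between intervals, which is omitted here), the posterior of the overall survival probability is the distribution of a product of Beta draws, and a credible interval can be read off by Monte Carlo:

```python
import random

def survival_posterior(intervals, draws=20000, seed=7):
    # intervals: list of (at_risk, deaths) for each partial interval.
    # With independent Beta(1, 1) priors, each conditional survival
    # probability has a Beta(1 + survivors, 1 + deaths) posterior; the
    # overall survival is their product, summarized by Monte Carlo.
    rng = random.Random(seed)
    samples = []
    for _ in range(draws):
        s = 1.0
        for n, d in intervals:
            s *= rng.betavariate(1 + (n - d), 1 + d)
        samples.append(s)
    samples.sort()
    mean = sum(samples) / draws
    lo, hi = samples[int(0.025 * draws)], samples[int(0.975 * draws)]
    return mean, (lo, hi)
```

The paper's closed-form bounds replace this sampling step, and its dependence-adjusted priors would change the per-interval posteriors.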

8.
Problems involving thousands of null hypotheses have been addressed by estimating the local false discovery rate (LFDR). A previous LFDR approach to reporting point and interval estimates of an effect-size parameter uses an estimate of the prior distribution of the parameter conditional on the alternative hypothesis. That estimated prior is often unreliable, and yet strongly influences the posterior intervals and point estimates, causing the posterior intervals to differ from fixed-parameter confidence intervals, even for arbitrarily small estimates of the LFDR. That influence of the estimated prior manifests the failure of the conditional posterior intervals, given the truth of the alternative hypothesis, to match the confidence intervals. Those problems are overcome by changing the posterior distribution conditional on the alternative hypothesis from a Bayesian posterior to a confidence posterior. Unlike the Bayesian posterior, the confidence posterior equates the posterior probability that the parameter lies in a fixed interval with the coverage rate of the coinciding confidence interval. The resulting confidence-Bayes hybrid posterior supplies interval and point estimates that shrink toward the null hypothesis value. The confidence intervals tend to be much shorter than their fixed-parameter counterparts, as illustrated with gene expression data. Simulations nonetheless confirm that the shrunken confidence intervals cover the parameter more frequently than stated. Generally applicable sufficient conditions for correct coverage are given. In addition to having those frequentist properties, the hybrid posterior can also be motivated from an objective Bayesian perspective by requiring coherence with some default prior conditional on the alternative hypothesis. That requirement generates a new class of approximate posteriors that supplement Bayes factors modified for improper priors and that dampen the influence of proper priors on the credibility intervals. 
While that class of posteriors intersects the class of confidence-Bayes posteriors, neither class is a subset of the other. In short, two first principles generate both classes of posteriors: a coherence principle and a relevance principle. The coherence principle requires that all effect size estimates comply with the same probability distribution. The relevance principle means effect size estimates given the truth of an alternative hypothesis cannot depend on whether that truth was known prior to observing the data or whether it was learned from the data.

9.
An estimation of the immunity coverage needed to prevent future outbreaks of an infectious disease is considered for a community of households. Data on outbreak size in a sample of households from one epidemic are used to derive maximum likelihood estimates and confidence bounds for parameters of a stochastic model for disease transmission in a community of households. These parameter estimates induce estimates and confidence bounds for the basic reproduction number and the critical immunity coverage, which are the parameters of main interest when aiming at preventing major outbreaks in the future. The case when individuals are homogeneous, apart from the size of their household, is considered in detail. The generalization to the case with variable infectivity, susceptibility and/or mixing behaviour is discussed more briefly. The methods are illustrated with an application to data on influenza in Tecumseh, Michigan.

10.
Donner A, Zou G. Biometrics 2002, 58(1): 209-215
Model-based inference procedures for the kappa statistic have developed rapidly over the last decade. However, no method has yet been developed for constructing a confidence interval about a difference between independent kappa statistics that is valid in samples of small to moderate size. In this article, we propose and evaluate two such methods based on an idea proposed by Newcombe (1998, Statistics in Medicine, 17, 873-890) for constructing a confidence interval for a difference between independent proportions. The methods are shown to provide very satisfactory results in sample sizes as small as 25 subjects per group. Sample size requirements that achieve a prespecified expected width for a confidence interval about a difference of kappa statistics are also presented.
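Newcombe's (1998) idea for a difference of independent proportions, which this article adapts to kappa statistics, combines the two single-sample Wilson limits by squaring and adding the distances from each point estimate. A sketch of the proportion version (the kappa adaptation replaces the Wilson limits with limits for each kappa):

```python
from math import sqrt

def wilson(x, n, z=1.959964):
    # Wilson score interval for a single proportion x/n.
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

def newcombe_diff(x1, n1, x2, n2, z=1.959964):
    # Hybrid (square-and-add) interval for p1 - p2 built from Wilson limits.
    p1, p2 = x1 / n1, x2 / n2
    l1, u1 = wilson(x1, n1, z)
    l2, u2 = wilson(x2, n2, z)
    d = p1 - p2
    lower = d - sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2)
    upper = d + sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2)
    return lower, upper
```

Because each limit borrows the asymmetry of the underlying Wilson intervals, the combined interval behaves well in small samples, which is the property the kappa-difference methods inherit.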

11.

Background

The group testing method has been proposed for the detection and estimation of genetically modified plants (adventitious presence of unwanted transgenic plants, AP). For binary response variables (presence or absence), group testing is efficient when the prevalence is low, and estimation, detection, and sample size methods have been developed under the binomial model. However, when the event is rare (prevalence below 0.1) and testing occurs sequentially, inverse (negative) binomial pooled sampling may be preferred.

Methodology/Principal Findings

This research proposes three sample size procedures (two computational and one analytic) for estimating prevalence using group testing under inverse (negative) binomial sampling. These methods provide the required number of positive pools, given a pool size (k), for estimating the proportion of AP plants using the Dorfman model and inverse (negative) binomial sampling. We give real and simulated examples to show how to apply these methods and the proposed sample-size formula. The Monte Carlo method was used to study the coverage and level of assurance achieved by the proposed sample sizes. An R program to create other scenarios is given in Appendix S2.

Conclusions

The three methods ensure precision in the estimated proportion of AP because they guarantee that the width (W) of the confidence interval (CI) will be equal to, or narrower than, the desired width, with a prescribed probability. With the Monte Carlo study we found that the computational Wald procedure (method 2) produces the more precise sample size (with coverage and assurance levels very close to nominal values), that the sample size based on the Clopper-Pearson CI (method 1) is conservative (overestimates the sample size), and that the analytic Wald sample size method we developed (method 3) sometimes underestimated the optimum number of pools.
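The inverse (negative) binomial sampling scheme can be illustrated with a rough Monte Carlo sketch of the Wald-type reasoning: draw pools of size k until r test positive, estimate the prevalence through the Dorfman relation π = 1 − (1 − p)^k, and measure the average CI width. This is a hypothetical simplification with illustrative names, not the paper's procedures:

```python
import random

def simulate_width(p, k, r, reps=2000, z=1.959964, seed=3):
    # Inverse (negative) binomial pooled sampling: draw pools of size k
    # until r of them test positive; then estimate the prevalence via
    # pi = 1 - (1 - p)**k and record a delta-method Wald CI width.
    # Returns the average width across simulated experiments.
    rng = random.Random(seed)
    pi = 1 - (1 - p) ** k
    widths = []
    for _ in range(reps):
        m = pos = 0
        while pos < r:
            m += 1
            if rng.random() < pi:
                pos += 1
        pi_hat = min(r / m, 0.999999)            # guard the all-positive corner
        p_hat = 1 - (1 - pi_hat) ** (1 / k)
        var_pi = pi_hat ** 2 * (1 - pi_hat) / r  # inverse-binomial variance approx.
        se = ((1 - p_hat) ** (1 - k) / k) * var_pi ** 0.5
        widths.append(2 * z * se)
    return sum(widths) / reps
```

Searching for the smallest r whose average width falls below the target W mimics the computational procedures; increasing r shrinks the width roughly like 1/√r.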

12.
Symmetric parallel-line biological assays involve the estimation of (log) relative potencies. The class of combinations of p (≥ 2) symmetric parallel-line bioassays is considered in this study. A large sample test for the equality of the several potencies is developed. An estimator and a confidence interval are proposed for the common relative potency parameter. The asymptotic distribution of the proposed test statistic is derived under the null hypothesis as well as under contiguous alternatives.

13.
The problem of finding exact simultaneous confidence bounds for comparing simple linear regression lines for two treatments with a simple linear regression line for the control over a fixed interval is considered. Errors are assumed to be iid normal random variables. It is assumed that the design matrices for the two treatments are equal and that the design matrix for the control has the same number of copies of each distinct row of the design matrix for the treatments. The method is based on a pivotal quantity that can be expressed as a function of four t variables. The probability point depends on the size of an angle associated with the interval. We present probability points for various sample sizes and angles. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

14.
Liu Q, Chi GY. Biometrics 2001, 57(1): 172-177
Proschan and Hunsberger (1995, Biometrics 51, 1315-1324) proposed a two-stage adaptive design that maintains the Type I error rate. For practical applications, a two-stage adaptive design is also required to achieve a desired statistical power while limiting the maximum overall sample size. In our proposal, a two-stage adaptive design comprises a main stage and an extension stage, where the main stage has sufficient power to reject the null under the anticipated effect size and the extension stage allows increasing the sample size in case the true effect size is smaller than anticipated. For statistical inference, methods for obtaining the overall adjusted p-value, point estimate, and confidence intervals are developed. An exact two-stage test procedure is also outlined for robust inference.

15.
Zhou XH, Tu W. Biometrics 2000, 56(4): 1118-1125
In this paper, we consider the problem of interval estimation for the mean of diagnostic test charges. Diagnostic test charge data may contain zero values, and the nonzero values can often be modeled by a log-normal distribution. Under such a model, we propose three different interval estimation procedures: a percentile-t bootstrap interval based on sufficient statistics and two likelihood-based confidence intervals. For theoretical properties, we show that the two likelihood-based one-sided confidence intervals are only first-order accurate and that the bootstrap-based one-sided confidence interval is second-order accurate. For two-sided confidence intervals, all three proposed methods are second-order accurate. A simulation study in finite-sample sizes suggests all three proposed intervals outperform a widely used minimum variance unbiased estimator (MVUE)-based interval except for the case of one-sided lower end-point intervals when the skewness is very small. Among the proposed one-sided intervals, the bootstrap interval has the best coverage accuracy. For the two-sided intervals, when the sample size is small, the bootstrap method still yields the best coverage accuracy unless the skewness is very small, in which case the bias-corrected ML method has the best accuracy. When the sample size is large, all three proposed intervals have similar coverage accuracy. Finally, we analyze one real example assessing diagnostic test charges among older adults with depression using the proposed methods.
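For intuition, the standard Cox approximation gives a closed-form interval for a log-normal mean. It is not one of the three procedures proposed here (and, unlike the paper's model, it simply drops zero charges), but it shows the target quantity, exp(μ + σ²/2):

```python
from math import exp, log, sqrt
from statistics import mean, stdev

def cox_interval(charges, z=1.959964):
    # Cox's approximate two-sided CI for E[X] when log(X) is normal.
    # NOTE: this discards zero charges; the paper's model handles the
    # zeros explicitly, and its three proposed intervals differ.
    logs = [log(x) for x in charges if x > 0]
    n = len(logs)
    ybar, s = mean(logs), stdev(logs)
    centre = ybar + s * s / 2                   # log of the log-normal mean
    half = z * sqrt(s * s / n + s ** 4 / (2 * (n - 1)))
    return exp(centre - half), exp(centre + half)
```

The second variance term, s⁴/(2(n − 1)), is what distinguishes this from a naive interval on the log scale; the paper's likelihood and percentile-t bootstrap intervals refine its small-sample accuracy.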

16.
In inter-laboratory studies, a fundamental problem of interest is inference concerning the consensus mean when the measurements are made by several laboratories, which may exhibit different within-laboratory variances in addition to the between-laboratory variability. A heteroscedastic one-way random effects model is very often used to model this scenario. Under such a model, a modified signed log-likelihood ratio procedure is developed for the interval estimation of the common mean. Furthermore, simulation results are presented to show the accuracy of the proposed confidence interval, especially for small samples. The results are illustrated using an example on the determination of selenium in non-fat milk powder by combining the results of four methods. Here, the sample size is small, and the confidence limits for the common mean obtained by different methods produce very different results. The confidence interval based on the modified signed log-likelihood ratio procedure appears to be quite satisfactory.
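As a baseline for the common-mean problem, the classical Graybill-Deal estimator weights each laboratory mean by n_i/s_i²; the modified signed log-likelihood ratio interval of this paper is designed to improve on the naive Wald interval around such an estimate in small samples. A sketch with illustrative names:

```python
def common_mean(labs, z=1.959964):
    # labs: list of (mean_i, s2_i, n_i) summaries, one per laboratory.
    # Graybill-Deal estimate: weight each lab mean by n_i / s2_i.
    weights = [n / s2 for _, s2, n in labs]
    est = sum(w * m for w, (m, _, _) in zip(weights, labs)) / sum(weights)
    se = (1 / sum(weights)) ** 0.5  # naive SE; understates small-sample spread
    return est, (est - z * se, est + z * se)
```

The naive standard error ignores the uncertainty in the estimated weights, which is precisely why likelihood-based intervals such as the modified signed log-likelihood ratio can behave much better when each laboratory contributes only a few measurements.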

17.
In clinical studies, we often compare the success rates of two treatment groups where post-treatment responses of subjects within clusters are usually correlated. To estimate the difference between the success rates, interval estimation procedures that do not account for this intraclass correlation are likely inappropriate. To address this issue, we propose three interval procedures by direct extensions of recently proposed methods for independent binary data based on the concepts of design effect and effective sample size used in sample surveys. Each of them is then evaluated with four competing variance estimates. We also extend three existing methods recommended for complex survey data using different weighting schemes required for those three existing methods. An extensive simulation study is conducted for the purposes of evaluating and comparing the performance of the proposed methods in terms of coverage and expected width. The interval estimation procedures are illustrated using three examples in clinical and social science studies. Our analytic arguments and numerical studies suggest that the methods proposed in this work may be useful in clustered data analyses.
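The design-effect idea referenced above can be sketched for a single clustered proportion (the paper's methods combine two such groups to form the difference): deflate the sample size by DEFF = 1 + (m̄ − 1)ρ and apply a standard single-proportion interval, Wilson in this sketch, on the effective sample size. The intraclass correlation ρ is assumed known here, and all names are illustrative:

```python
from math import sqrt

def cluster_adjusted_ci(clusters, rho, z=1.959964):
    # clusters: list of (size_j, successes_j); rho: assumed intraclass
    # correlation. Deflate n by DEFF = 1 + (mean size - 1) * rho and
    # apply a Wilson interval on the effective sample size.
    n = sum(m for m, _ in clusters)
    x = sum(s for _, s in clusters)
    m_bar = n / len(clusters)
    n_eff = n / (1 + (m_bar - 1) * rho)      # effective sample size
    p = x / n
    denom = 1 + z * z / n_eff
    centre = (p + z * z / (2 * n_eff)) / denom
    half = z * sqrt(p * (1 - p) / n_eff + z * z / (4 * n_eff ** 2)) / denom
    return centre - half, centre + half
```

As ρ grows, the effective sample size shrinks and the interval widens, which is the correction the naive independent-data procedures miss.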

18.
Strassburger K, Bretz F, Finner H. Biometrics 2007, 63(4): 1143-1151
This article considers the problem of comparing several treatments (dose levels, interventions, etc.) with the best, where the best treatment is unknown and the treatments are ordered in some sense. Order relations among treatments often occur quite naturally in practice. They may be ordered according to increasing risks, such as tolerability or safety problems with increasing dose levels in a dose-response study, for example. We tackle the problem of constructing a lower confidence bound for the smallest index of all treatments being at most marginally less effective than the (best) treatment having the largest effect. Such a bound ensures at confidence level 1 − α that all treatments with lower indices are relevantly less effective than the best competitor. We derive a multiple testing strategy that results in sharp confidence bounds. The proposed lower confidence bound is compared with those derived from other testing strategies. We further derive closed-form expressions for power and sample size calculations. Finally, we investigate several real data sets to illustrate various applications of our methods.

19.
Bootstrap confidence intervals for adaptive cluster sampling
Consider a collection of spatially clustered objects where the clusters are geographically rare. Of interest is estimation of the total number of objects on the site from a sample of plots of equal size. Under these spatial conditions, adaptive cluster sampling of plots is generally useful in improving efficiency in estimation over simple random sampling without replacement (SRSWOR). In adaptive cluster sampling, when a sampled plot meets some predefined condition, neighboring plots are added to the sample. When populations are rare and clustered, the usual unbiased estimators based on small samples are often highly skewed and discrete in distribution. Thus, confidence intervals based on asymptotic normal theory may not be appropriate. We investigated several nonparametric bootstrap methods for constructing confidence intervals under adaptive cluster sampling. To perform bootstrapping, we transformed the initial sample in order to include the information from the adaptive portion of the sample yet maintain a fixed sample size. In general, coverages of bootstrap percentile methods were closer to nominal coverage than the normal approximation.
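The percentile bootstrap studied here can be sketched in its simplest form. This toy version treats the transformed sample as a fixed-size sample of plot counts and bootstraps the expansion estimator of the total; the transformation of the adaptive sample into such a fixed-size sample is assumed to have been done already, and names are illustrative:

```python
import random

def bootstrap_total_ci(counts, n_plots_total, B=2000, alpha=0.05, seed=11):
    # counts: objects found on each of the n sampled plots (after the
    # adaptive sample has been transformed back to a fixed sample size;
    # here the plots are simply treated as a random sample).
    # Bootstraps the expansion estimator of the population total and
    # returns the percentile confidence interval.
    rng = random.Random(seed)
    n = len(counts)
    totals = []
    for _ in range(B):
        resample = [rng.choice(counts) for _ in range(n)]
        totals.append(n_plots_total * sum(resample) / n)
    totals.sort()
    return totals[int(alpha / 2 * B)], totals[int((1 - alpha / 2) * B)]
```

For rare, clustered populations the bootstrap distribution is right-skewed, so the percentile interval sits asymmetrically around the point estimate, which is exactly the situation in which the normal-theory interval breaks down.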
