Similar Articles
A total of 11 similar articles were found (search time: 0 ms).
1.
Under the matched‐pair design, this paper discusses estimation of the general odds ratio ORG for an ordinal exposure in case‐control studies and the general risk difference RDG for ordinal outcomes in cross‐sectional or cohort studies. To illustrate the practical usefulness of the interval estimators of ORG and RDG developed here, this paper uses data from a case‐control study investigating the effect of the number of beverages drunk at "burning hot" temperature on the risk of esophageal cancer, and data from a cross‐sectional study comparing the grade distributions of unaided distance vision between two eyes. Finally, this paper notes that the commonly used odds‐ratio statistics for dichotomous data, obtained by collapsing the ordinal exposure into two categories (exposed versus unexposed), tend to be less efficient than the statistics related to ORG proposed here.
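As a point of reference for the collapsed dichotomous analysis that this abstract describes as less efficient, here is a minimal sketch of the conditional McNemar‐type odds ratio and its Wald interval for matched pairs; it does not reproduce the paper's ORG estimator, and the discordant-pair counts are hypothetical.

```python
import numpy as np
from scipy import stats

def mcnemar_or_ci(b, c, alpha=0.05):
    """Conditional odds ratio and Wald CI from the discordant counts of
    matched case-control pairs: b = exposed-case / unexposed-control pairs,
    c = unexposed-case / exposed-control pairs."""
    or_hat = b / c
    se_log = np.sqrt(1.0 / b + 1.0 / c)          # SE of log(OR) for matched pairs
    z = stats.norm.ppf(1 - alpha / 2)
    lo, hi = np.exp(np.log(or_hat) + np.array([-z, z]) * se_log)
    return or_hat, (lo, hi)

# Hypothetical discordant-pair counts, for illustration only.
print(mcnemar_or_ci(b=30, c=12))
```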

2.
In many epidemiologic studies and clinical trials, we may wish to establish equivalence rather than detect a difference between the distributions of responses. In this paper, we develop test procedures for detecting equivalence with respect to the tail marginal distributions and the marginal proportions when the underlying data are on an ordinal scale with matched pairs. We include a numerical example on the unaided distance vision of the two eyes of 7477 women to illustrate the practical usefulness of the proposed procedure. Finally, we include a brief discussion of the relation between the test procedures developed here and an asymptotic interval estimator proposed elsewhere for the simple difference in dichotomous data with matched pairs.
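A minimal sketch, assuming a dichotomized outcome, of a TOST-style equivalence z-test for the difference in marginal proportions of a paired binary variable; it only illustrates the dichotomous-marginal case rather than the ordinal tail-marginal procedures developed in the paper, and the counts below are hypothetical.

```python
import numpy as np
from scipy import stats

def tost_paired_proportions(b, c, n, margin=0.05):
    """Two one-sided z-tests for equivalence of the marginal proportions of a
    paired binary outcome. b, c are the discordant counts, n the number of
    pairs, margin the equivalence margin for the difference p1 - p2."""
    d_hat = (b - c) / n                                   # difference in marginal proportions
    se = np.sqrt((b + c) - (b - c) ** 2 / n) / n          # its estimated standard error
    p_lower = 1 - stats.norm.cdf((d_hat + margin) / se)   # H0: difference <= -margin
    p_upper = stats.norm.cdf((d_hat - margin) / se)       # H0: difference >= +margin
    return d_hat, max(p_lower, p_upper)                   # conclude equivalence if p < alpha

# Hypothetical counts: 7477 pairs with mildly discordant margins.
print(tost_paired_proportions(b=410, c=380, n=7477, margin=0.02))
```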

3.
Paired data arise in a wide variety of applications where the underlying distribution of the paired differences is often unknown. When the differences are normally distributed, the t‐test is optimum. If the differences are not normal, however, the t‐test can have substantially less power than the appropriate optimum test, which depends on the unknown distribution. When the normality of the differences is questionable, textbooks typically suggest the non‐parametric Wilcoxon signed rank test. An adaptive procedure that uses the Shapiro‐Wilk test of normality to decide between the t‐test and the Wilcoxon signed rank test has been employed in several studies. Faced with heavy‐tailed data, the U.S. Environmental Protection Agency (EPA) introduced another approach: it applies both the sign test and the t‐test to the paired differences and accepts the alternative hypothesis if either test is significant. This paper investigates the statistical properties of a currently used adaptive test and the EPA's method, and suggests an alternative technique. The new procedure is easy to use and generally has higher empirical power than currently used methods, especially when the differences are heavy‐tailed.
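A minimal sketch of the two decision rules described above, assuming scipy is available: the Shapiro‐Wilk-based choice between the paired t-test and the Wilcoxon signed rank test, and the EPA-style rule that rejects if either the sign test or the t-test is significant. The simulated heavy-tailed differences are purely illustrative.

```python
import numpy as np
from scipy import stats

def adaptive_paired_test(d, alpha=0.05):
    """Shapiro-Wilk pretest on the paired differences d: use the one-sample
    t-test if normality is not rejected, else the Wilcoxon signed rank test."""
    if stats.shapiro(d).pvalue > alpha:
        return "t-test", stats.ttest_1samp(d, 0.0).pvalue
    return "wilcoxon", stats.wilcoxon(d).pvalue

def epa_paired_test(d, alpha=0.05):
    """EPA-style rule: apply both the t-test and the sign test to the paired
    differences and reject H0 if either is significant at level alpha."""
    p_t = stats.ttest_1samp(d, 0.0).pvalue
    n_pos, n_nonzero = int(np.sum(d > 0)), int(np.sum(d != 0))
    p_sign = stats.binomtest(n_pos, n_nonzero, 0.5).pvalue
    return (p_t < alpha) or (p_sign < alpha), p_t, p_sign

rng = np.random.default_rng(1)
d = rng.standard_t(df=3, size=40) + 0.5      # heavy-tailed differences with a shift
print(adaptive_paired_test(d))
print(epa_paired_test(d))
```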

4.
This article investigates whether the Braun‐Blanquet abundance/dominance (AD) scores that commonly appear in phytosociological tables can properly be analysed by conventional multivariate methods such as Principal Components Analysis and Correspondence Analysis. The answer is a definite NO. The source of the problems is that AD values express species performance on an ordinal scale, on which differences are not interpretable. Several arguments suggest that, regardless of which methods have been preferred in contemporary numerical syntaxonomy and why, ordinal data should be treated in an ordinal way. Besides the inadmissibility of arithmetic operations on AD scores, these arguments include the interpretability of dissimilarities derived from ordinal data, the consistency of all steps throughout the analysis, and the universality of a method that enables simultaneous treatment of various measurement scales. All the commonly used ordination methods, for example Principal Components Analysis and all variants of Correspondence Analysis, as well as standard cluster analyses such as Ward's method and group-average clustering, are inappropriate for AD data. The application of ordinal clustering and scaling methods to traditional phytosociological data is therefore advocated. Dissimilarities between relevés should be calculated using ordinal measures of resemblance, and the ordination and clustering algorithms should likewise be ordinal in nature. A good ordination example is Non‐metric Multidimensional Scaling (NMDS), as long as it is calculated from an ordinal dissimilarity measure such as the Goodman & Kruskal γ coefficient; for clustering, the new OrdClAn‐H and OrdClAn‐N methods are suitable.
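A rough sketch of the recommended recipe, not of the OrdClAn algorithms: pairwise Goodman & Kruskal γ between relevés is converted to a dissimilarity and passed to non-metric MDS (here via scikit-learn). The relevé-by-species AD matrix is randomly generated purely for illustration.

```python
import numpy as np
from itertools import combinations
from sklearn.manifold import MDS

def gk_gamma(x, y):
    """Goodman & Kruskal gamma between two vectors of ordinal AD scores:
    (concordant - discordant) / (concordant + discordant) over species pairs."""
    conc = disc = 0
    for i, j in combinations(range(len(x)), 2):
        s = np.sign(x[i] - x[j]) * np.sign(y[i] - y[j])
        conc += s > 0
        disc += s < 0
    return (conc - disc) / (conc + disc) if conc + disc else 0.0

rng = np.random.default_rng(0)
X = rng.integers(0, 6, size=(10, 25))        # hypothetical relevé-by-species AD scores (0-5)

n = X.shape[0]
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = (1.0 - gk_gamma(X[i], X[j])) / 2.0   # gamma -> dissimilarity in [0, 1]

nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
print(nmds.fit_transform(D)[:3])             # 2-D ordinal ordination coordinates
```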

5.
Survival data consisting of independent sets of correlated failure times arise in many situations: for example, we may take repeated observations of the failure time of interest from each patient, observe the failure times of siblings, or consider the failure times of littermates in toxicological experiments. Because failure times taken from the same patient, related family members, or the same litter are likely correlated, use of the classical log‐rank test in these situations can be quite misleading with respect to type I error. To address this concern, this paper develops two closed‐form asymptotic summary tests that account for the intraclass correlation between the failure times within patients or units. One of these two tests includes the classical log‐rank test as a special case when the intraclass correlation equals 0. Furthermore, to evaluate the finite‐sample performance of the two tests developed here, this paper applies Monte Carlo simulation and notes that they perform quite well in the variety of situations considered.
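A hedged Monte Carlo sketch of the type I error concern raised above, not of the paper's closed-form tests: clustered failure times are generated with a shared gamma frailty under no treatment effect, and the ordinary log-rank test (from the lifelines package) is applied as if the observations were independent. The frailty parameters, cluster sizes and number of replications are arbitrary choices.

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(42)
n_clusters, cluster_size, n_sim, alpha = 40, 4, 500, 0.05

rejections = 0
for _ in range(n_sim):
    # Shared gamma frailty per cluster induces within-cluster correlation; H0 holds.
    frailty = rng.gamma(shape=2.0, scale=0.5, size=n_clusters)
    times = rng.exponential(1.0 / frailty[:, None], size=(n_clusters, cluster_size))
    # Whole clusters (e.g. litters) are assigned to the two comparison groups.
    g1 = times[: n_clusters // 2].ravel()
    g2 = times[n_clusters // 2 :].ravel()
    res = logrank_test(g1, g2,
                       event_observed_A=np.ones_like(g1),
                       event_observed_B=np.ones_like(g2))
    rejections += res.p_value < alpha

# With correlated failure times, the empirical rate typically exceeds the nominal 0.05.
print("empirical type I error of the naive log-rank test:", rejections / n_sim)
```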

6.
Tests for a monotonic trend between an ordered categorical exposure and disease status are routinely carried out from case‐control data using the Mantel‐extension trend test or the asymptotically equivalent Cochran‐Armitage test. In this study, we considered two alternative tests based on isotonic regression, namely an order‐restricted likelihood ratio test and an isotonic modification of the Mantel‐extension test that extends the recent proposal of Mancuso, Ahn and Chen (2001) to case‐control data. We also considered three tests based on contrasts, namely a single contrast (SC) test based on Schaafsma's coefficients, the Dosemeci and Benichou (DB) test, and a multiple contrast (MC) test based on the Helmert, reverse‐Helmert and linear contrasts, and we derived their case‐control versions. Using simulations, we compared the statistical properties of these five alternative tests with those of the Mantel‐extension test under various patterns, including no relationship as well as monotonic and non‐monotonic relationships between exposure and disease status. In the case of no relationship, all tests had close to nominal type I error except in situations combining a very unbalanced exposure distribution with a small sample size, where the asymptotic versions of the three contrast‐based tests were highly anticonservative; using bootstrap instead of asymptotic versions corrected this anticonservatism. For monotonic patterns, all tests had similar power. For non‐monotonic patterns, the DB‐test showed the most favourable results, being the least powerful against such departures; the two tests based on isotonic regression were the most powerful, and the Mantel‐extension test, the SC‐test and the MC‐test had intermediate power. The six tests were applied to data from a case‐control study investigating the relationship between alcohol consumption and risk of laryngeal cancer in Turkey. In situations with no evidence of a monotonic relationship between exposure and disease status, the three contrast‐based tests did not conclude in favour of a significant trend, whereas all the other tests did. (© 2004 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)
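For orientation, a minimal implementation of the standard Cochran‐Armitage trend test (asymptotically equivalent to the Mantel‐extension test, as noted above), computed from its usual closed form with equally spaced scores; the isotonic and contrast-based tests studied in the paper are not reproduced here, and the counts below are hypothetical.

```python
import numpy as np
from scipy import stats

def cochran_armitage_trend(cases, totals, scores=None):
    """Cochran-Armitage test for trend in proportions across ordered exposure
    categories. cases[i] and totals[i] are the case count and group size in
    category i; scores default to 0, 1, 2, ..."""
    cases, totals = np.asarray(cases, float), np.asarray(totals, float)
    scores = np.arange(len(cases)) if scores is None else np.asarray(scores, float)
    N, p_bar = totals.sum(), cases.sum() / totals.sum()
    T = np.sum(scores * (cases - totals * p_bar))
    var_T = p_bar * (1 - p_bar) * (np.sum(totals * scores ** 2)
                                   - np.sum(totals * scores) ** 2 / N)
    z = T / np.sqrt(var_T)
    return z, 2 * stats.norm.sf(abs(z))          # two-sided p-value

# Hypothetical case counts across four ordered exposure levels of 100 subjects each.
print(cochran_armitage_trend(cases=[20, 28, 35, 50], totals=[100, 100, 100, 100]))
```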

7.
Let X and Y be two random variables with continuous distribution functions F and G. Consider two independent samples X1, … , Xm from F and Y1, … , Yn from G. Moreover, suppose there exists a unique x* such that F(x) > G(x) for x < x* and F(x) < G(x) for x > x*, or vice versa. A semiparametric model with a linear shift function (Doksum, 1974), which is equivalent to a location‐scale model (Hsieh, 1995), is assumed, and an empirical process approach (Hsieh, 1995) is used to estimate the parameters of the shift function. The estimated shift function is then set to zero, and its solution is defined to be the estimate of the crossing‐point x*. An approximate confidence band for the linear shift function at the crossing‐point x* is also presented and is inverted to yield an approximate confidence interval for the crossing‐point. Finally, the lifetimes (in days) of guinea pigs observed in a treatment‐control experiment (Bjerkedal, 1960) are used to demonstrate the procedure for estimating the crossing‐point. (© 2004 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)
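A crude, purely empirical sketch of the crossing-point idea: x* is estimated as the pooled observation at which the difference of the two empirical distribution functions changes sign. This is not the semiparametric shift-function estimator of Hsieh (1995) described above, and the samples are simulated for illustration.

```python
import numpy as np

def ecdf(sample, grid):
    """Empirical distribution function of `sample` evaluated on `grid`."""
    sample = np.sort(np.asarray(sample, float))
    return np.searchsorted(sample, grid, side="right") / sample.size

def crossing_point(x, y):
    """Rough estimate of x*: the pooled observation at which the sign of
    F_hat - G_hat changes."""
    grid = np.sort(np.concatenate([x, y]))
    diff = ecdf(x, grid) - ecdf(y, grid)
    sign_change = np.where(np.diff(np.sign(diff)) != 0)[0]
    return grid[sign_change[0]] if sign_change.size else None

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 60)     # hypothetical "control" sample
y = rng.normal(0.3, 1.8, 72)     # hypothetical "treatment" sample with a larger spread
print(crossing_point(x, y))
```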

8.
Sensitivity and specificity have traditionally been used to assess the performance of a diagnostic procedure. Diagnostic procedures with both high sensitivity and high specificity are desirable, but such procedures are frequently too expensive, hazardous, and/or difficult to operate. A less sophisticated procedure may be preferred if the loss in sensitivity or specificity is judged clinically acceptable. This paper addresses the problem of simultaneously testing the sensitivity and specificity of an alternative test procedure against a reference test procedure when a gold standard is available. The hypothesis is formulated as a compound hypothesis of two non‐inferiority (one‐sided equivalence) tests. We present an asymptotic test statistic based on the restricted maximum likelihood estimate in the framework of comparing two correlated proportions under prospective and retrospective sampling designs. The sample size and power of the asymptotic test are derived, and the actual type I error and power are calculated by enumerating the exact probabilities in the rejection region. For applications that require both high sensitivity and high specificity, large numbers of positive and negative subjects are needed. We also propose a weighted‐sum statistic as an alternative test, comparing a combined measure of the sensitivity and specificity of the two procedures. The sample size determination is independent of the sampling plan for the two tests.
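A simple sketch of the compound non-inferiority test using plain Wald z-statistics for paired (correlated) proportions rather than the restricted-MLE statistic of the paper; the discordant counts, subject numbers and the 5-percentage-point margin are hypothetical.

```python
import numpy as np
from scipy import stats

def noninferiority_paired(b, c, n, delta):
    """One-sided Wald z-test of H0: p_new - p_ref <= -delta against
    H1: p_new - p_ref > -delta for a paired binary outcome (both diagnostic
    procedures applied to the same n subjects); b, c are the discordant counts
    (new+/ref- and new-/ref+, respectively)."""
    d_hat = (b - c) / n
    se = np.sqrt((b + c) - (b - c) ** 2 / n) / n
    z = (d_hat + delta) / se
    return d_hat, stats.norm.sf(z)

# Hypothetical counts: sensitivity among 200 gold-standard positives and
# specificity among 300 gold-standard negatives, non-inferiority margin 0.05.
p_sens = noninferiority_paired(b=12, c=18, n=200, delta=0.05)[1]
p_spec = noninferiority_paired(b=10, c=14, n=300, delta=0.05)[1]
# The compound (intersection-union) hypothesis is rejected only if both are rejected.
print(p_sens, p_spec, (p_sens < 0.05) and (p_spec < 0.05))
```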

9.
We present a survey of sample size formulas, derived in other papers, for pairwise comparisons of k treatments and for comparisons of k treatments with a control. We consider the calculation of sample sizes with preassigned per‐pair, any‐pair and all‐pairs power for tests that control either the comparisonwise or the experimentwise type I error rate. The comparison reveals interesting similarities between the parametric, nonparametric and binomial cases.
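As a hedged illustration of the comparisonwise versus experimentwise distinction (not one of the surveyed formulas), the familiar two-sample normal approximation for the per-group sample size with per-pair power, with a Bonferroni-adjusted level when the experimentwise error over all k(k-1)/2 pairwise comparisons is controlled.

```python
import math
from scipy import stats

def n_per_group(delta, sigma, k, alpha=0.05, power=0.80, experimentwise=True):
    """Approximate per-group sample size giving the stated per-pair power for a
    two-sided two-sample z-test of a mean difference delta (common SD sigma)
    among k treatments. With experimentwise control the comparisonwise level is
    Bonferroni-adjusted for the k*(k-1)/2 pairwise comparisons."""
    m = k * (k - 1) // 2
    alpha_c = alpha / m if experimentwise else alpha
    z_a = stats.norm.ppf(1 - alpha_c / 2)
    z_b = stats.norm.ppf(power)
    return math.ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)

# Example: detect a difference of 0.5 SD among k = 4 treatments.
print(n_per_group(delta=0.5, sigma=1.0, k=4))                         # experimentwise control
print(n_per_group(delta=0.5, sigma=1.0, k=4, experimentwise=False))   # comparisonwise control
```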

10.
Conditional exact tests of the homogeneity of two binomial proportions are often used in small samples, because exact tests are guaranteed to keep the size at or below the nominal level. Fisher's exact test, the exact chi‐squared test and the exact likelihood ratio test are popular and can be implemented in the software StatXact. In this paper we investigate which test is best in small samples in terms of unconditional exact power. When the sample sizes are equal, it is proved that the three tests have the same unconditional exact power; a symmetry of the unconditional exact power is also found. When the sample sizes are unequal, the unconditional exact powers of the three tests are computed and compared. In most cases Fisher's exact test turns out to be the best, but we characterize some cases in which the exact likelihood ratio test has the highest unconditional exact power. (© 2004 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)
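A minimal sketch of how the unconditional exact power mentioned above can be computed for Fisher's exact test by full enumeration of the possible 2x2 tables; the exact chi-squared and likelihood ratio tests (available in StatXact) would be handled analogously and are not implemented here, and the sample sizes and proportions are illustrative.

```python
from scipy.stats import binom, fisher_exact

def unconditional_power_fisher(n1, n2, p1, p2, alpha=0.05):
    """Unconditional exact power of the two-sided Fisher exact test of
    H0: p1 = p2, obtained by enumerating every table (x1, x2) and summing
    Binom(n1, p1, x1) * Binom(n2, p2, x2) over the rejection region."""
    power = 0.0
    for x1 in range(n1 + 1):
        for x2 in range(n2 + 1):
            _, p = fisher_exact([[x1, n1 - x1], [x2, n2 - x2]])
            if p <= alpha:
                power += binom.pmf(x1, n1, p1) * binom.pmf(x2, n2, p2)
    return power

# Attained size (p1 = p2) and power (p1 != p2) for a small unequal-sample design.
print(unconditional_power_fisher(15, 20, 0.3, 0.3))   # size under H0
print(unconditional_power_fisher(15, 20, 0.2, 0.6))   # power under an alternative
```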

11.
