Similar articles
1.
Gilbert, Rossini, and Shankarappa (2005, Biometrics 61, 106-117) present four U-statistic-based tests to compare genetic diversity between different samples. The proposed tests improved upon previously used methods by accounting for the correlations in the data. We find, however, that the same correlations introduce an unacceptable bias in the sample estimators used for the variance and covariance of the inter-sequence genetic distances for modest sample sizes. Here, we compute unbiased estimators for these quantities and test the resulting improvement using simulated data. We also show that, contrary to the claims in Gilbert et al., it is not always possible to apply the Welch-Satterthwaite approximate t-test, and we provide explicit formulas for the degrees of freedom to be used when such an approximation is indeed possible.

2.
Lloyd, C. J. (2008). Biometrics 64(3), 716-723.
We consider the problem of testing for a difference in the probability of success from matched binary pairs. Starting with three standard inexact tests, the nuisance parameter is first estimated and then the residual dependence is eliminated by maximization, producing what I call an E+M P-value. The E+M P-value based on McNemar's statistic is shown numerically to dominate previous suggestions, including the partially maximized P-values described in Berger and Sidik (2003, Statistical Methods in Medical Research 12, 91-108). The latter method, however, may have computational advantages for large samples.
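For orientation, a related baseline is the exact conditional (binomial) McNemar test, which conditions on the number of discordant pairs; the sketch below shows that conditional test only, not the paper's E+M procedure, which instead treats the table unconditionally.

```python
from math import comb

def mcnemar_exact_p(b, c):
    """Exact conditional two-sided McNemar p-value.

    Given n = b + c discordant pairs, b ~ Binomial(n, 1/2) under the null
    of equal success probabilities; the p-value doubles the smaller
    binomial tail (capped at 1)."""
    n = b + c
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```

With 1 vs. 5 discordant pairs, for example, the doubled tail is 2 * 7/64 = 0.21875; the conservatism of such conditional tails for small n is the kind of inexactness the E+M construction addresses.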

3.
A nonparametric model for the multivariate one-way design is discussed which covers continuous as well as discontinuous distributions and therefore allows for ordinal data. Nonparametric hypotheses are formulated in terms of the normalized versions of the marginal distribution functions as well as the common distribution functions. The differences between the distribution functions are described by means of the so-called relative treatment effects, for which unbiased and consistent estimators are derived. The asymptotic distribution of the vector of effect estimators is derived, and under the marginal hypothesis a consistent estimator for the asymptotic covariance matrix is given. Nonparametric versions of the Wald-type statistic, the ANOVA-type statistic, and the Lawley-Hotelling statistic are considered and compared by means of a simulation study. Finally, these tests are applied to a psychiatric clinical trial.

4.
The classical χ²-procedure for the assessment of Hardy-Weinberg equilibrium (HWE) is tailored for detecting violations of HWE. However, many applications in genetic epidemiology require approximate compatibility with HWE. In a previous contribution to the field (Wellek, S. (2004). Biometrics 60, 694-703), the methodology of statistical equivalence testing was exploited for the construction of tests for problems in which the assumption of approximate compatibility of a given genotype distribution with HWE plays the role of the alternative hypothesis one aims to establish. In this article, we propose a procedure serving the same purpose but relying on confidence limits rather than the critical bounds of a significance test. The interval estimation relates to essentially the same parametric function that was previously chosen as the target parameter for constructing an exact conditional UMPU test for equivalence with an HWE-conforming genotype distribution. This population parameter is shown to have a direct genetic interpretation as a measure of relative excess heterozygosity. Confidence limits are constructed using both asymptotic and exact methods. The new approach is illustrated by reanalyzing genotype distributions obtained from published genetic association studies, and detailed guidance for choosing the equivalence margin is provided. The methods have been implemented in freely available SAS macros.
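A minimal sketch of the kind of interval estimation described: the relative-excess-heterozygosity parameter compares the heterozygote proportion with its value under HWE, and a delta-method confidence interval on the log scale is one simple asymptotic construction. The variance expression below comes from a standard multinomial delta-method calculation; it is an illustration, not the paper's exact method or its SAS implementation.

```python
from math import sqrt, exp, log
from statistics import NormalDist

def reh_confidence_interval(n_aa, n_ab, n_bb, level=0.95):
    """Relative excess heterozygosity omega = p_ab / (2*sqrt(p_aa * p_bb)),
    which equals 1 under exact Hardy-Weinberg equilibrium.

    Returns (estimate, lower, upper) with an asymptotic CI built on the
    log scale via the delta method for multinomial genotype counts."""
    omega = n_ab / (2 * sqrt(n_aa * n_bb))
    # Delta-method standard error of log(omega-hat): the multinomial
    # covariance terms cancel, leaving only the diagonal contributions.
    se = sqrt(1 / n_ab + 1 / (4 * n_aa) + 1 / (4 * n_bb))
    z = NormalDist().inv_cdf(0.5 + level / 2)
    return omega, exp(log(omega) - z * se), exp(log(omega) + z * se)
```

For an equivalence assessment, one would check whether the whole interval lies inside a prespecified margin around omega = 1.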

5.
In applied statistics it is common to need a one- or two-tail confidence interval for the difference d = p2 − p1 between two independent binomial proportions. Traditionally, one looks for a classic, non-symmetric interval (with respect to zero) of the type d ∈ [δL, δU], d ≤ δU, or d ≥ δL. However, in clinical trials, equivalence studies, vaccination efficacy studies, etc., and even after performing the classic homogeneity test, intervals of the type |d| ≤ Δ0 or |d| ≥ Δ0, where Δ0 > 0, may be necessary. In all these cases it is advisable to obtain the interval by inverting the appropriate test. The advantage of this procedure is that the conclusions obtained using the test are compatible with those obtained using the interval. The article shows how this is done using the newly published exact and asymptotic unconditional tests. The programs for performing these tests may be obtained at http://www.ugr.es/~bioest/software.htm.

6.
The one-degree-of-freedom Cochran-Armitage (CA) test statistic for linear trend has been widely applied in various dose-response studies (e.g., anti-ulcer medications and short-term antibiotics, animal carcinogenicity bioassays, and occupational toxicant studies). This approximate statistic relies, however, on asymptotic theory that is reliable only when the sample sizes are reasonably large and well balanced across dose levels. For small, sparse, or skewed data, the asymptotic theory is suspect and the exact conditional method (based on the CA statistic) seems to provide a dependable alternative. Unfortunately, the exact conditional method is only practical for the linear logistic model, from which the sufficient statistics for the regression coefficients can be obtained explicitly. In this article, a simple and efficient recursive polynomial multiplication algorithm for the exact unconditional test (based on the CA statistic) for detecting a linear trend in proportions is derived. The method is applicable for all choices of model with a monotone trend, including logistic, probit, arcsine, extreme value, and one-hit. We also show that this algorithm can be easily extended to exact unconditional power calculations for studies with up to a moderately large sample size. A real example is given to illustrate the applicability of the proposed method.
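For reference, the approximate CA statistic on which both the conditional and unconditional exact methods are built can be computed directly; a minimal sketch with default equally spaced scores (the abstract's exact recursive algorithm is not reproduced here):

```python
from math import sqrt
from statistics import NormalDist

def cochran_armitage(successes, totals, scores=None):
    """Asymptotic Cochran-Armitage test for linear trend in proportions.

    successes[i] out of totals[i] at dose level i; scores default to
    0, 1, 2, ...  Returns (Z, two-sided p-value)."""
    k = len(successes)
    scores = scores or list(range(k))
    n_total = sum(totals)
    pbar = sum(successes) / n_total
    # Score-weighted deviations of observed counts from expected counts.
    num = sum(s * (x - n * pbar)
              for s, x, n in zip(scores, successes, totals))
    sbar = sum(s * n for s, n in zip(scores, totals)) / n_total
    var = pbar * (1 - pbar) * sum(n * (s - sbar) ** 2
                                  for s, n in zip(scores, totals))
    z = num / sqrt(var)
    return z, 2 * (1 - NormalDist().cdf(abs(z)))
```

With small or sparse tables, the normal approximation used for the p-value here is exactly what the abstract warns against.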

7.
Let d = p2 − p1 be the difference between two binomial proportions obtained from two independent trials. For the parameter d, three pairs of hypotheses may be of interest: H1: d ≤ δ vs. K1: d > δ; H2: d ∉ (δ1, δ2) vs. K2: d ∈ (δ1, δ2); and H3: d ∈ [δ1, δ2] vs. K3: d ∉ [δ1, δ2], where Hi is the null hypothesis and Ki is the alternative hypothesis. These tests are useful in clinical trials, pharmacological and vaccine studies, and in statistics generally. The three problems may be investigated by exact unconditional tests when the sample sizes are moderate. Otherwise, one should use approximate (or asymptotic) tests, generally based on Z-statistics like those suggested in the paper. The article defines a new procedure for testing H2 or H3, demonstrates that it is more powerful than tests based on confidence intervals (the classic TOST, two one-sided tests), defines two continuity corrections which reduce the liberality of the three tests, and selects the one that behaves better. The programs for executing the unconditional exact and asymptotic tests described in the paper can be downloaded at http://www.ugr.es/~bioest/software.htm. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
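The confidence-interval comparator (the classic TOST) is easy to state with asymptotic Z-statistics; a minimal Wald-type sketch for testing H2 against the equivalence alternative K2 follows. This is the baseline the paper improves upon, not its unconditional exact or continuity-corrected procedures, and the Wald standard error is a simplification.

```python
from math import sqrt
from statistics import NormalDist

def tost_two_proportions(x1, n1, x2, n2, delta1, delta2):
    """Classic TOST for d = p2 - p1 with independent binomial samples.

    Tests H2: d <= delta1 or d >= delta2 against K2: delta1 < d < delta2.
    Returns (d-hat, TOST p-value); reject H2 when the p-value is small."""
    p1, p2 = x1 / n1, x2 / n2
    d = p2 - p1
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)  # Wald SE
    nd = NormalDist()
    p_lower = 1 - nd.cdf((d - delta1) / se)  # one-sided test of d <= delta1
    p_upper = nd.cdf((d - delta2) / se)      # one-sided test of d >= delta2
    return d, max(p_lower, p_upper)          # TOST combines the two
</code>```

Because the TOST p-value is the maximum of the two one-sided p-values, it is equivalent to checking whether a 1 − 2α confidence interval lies inside (δ1, δ2), which is the comparison the abstract makes.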

8.
This paper develops exact and asymptotic test procedures for detecting whether the relative difference of a given disease between exposure and non-exposure to a risk factor varies between strata under inverse sampling. Monte Carlo simulation is applied to evaluate the performance of the proposed asymptotic test procedures, and it is further demonstrated that these asymptotic procedures can be useful in many situations, especially when either the number of index subjects is large or the probability of possessing the underlying disease is small.

9.
Monte Carlo simulation methods are commonly used for assessing the performance of statistical tests in finite-sample scenarios. They help us check whether tests with only approximate level, e.g., asymptotic tests, actually hold the nominal level. Additionally, a simulation can assess the quality of a test under the alternative. The latter can be used to compare new tests with established tests under certain assumptions in order to determine the preferable test given the characteristics of the data. The key problem for such investigations is the choice of a goodness criterion. We extend the expected p-value, as considered by Sackrowitz and Samuel-Cahn (1999), to the context of univariate equivalence tests. This provides an effective tool for evaluating new proposals for equivalence testing because it does not depend on the distribution of the test statistic under the null hypothesis. It helps to avoid the often tedious search for the null distribution of test statistics which have no considerable advantage over already available methods. To demonstrate the usefulness in biometry, a comparison of established equivalence tests with a nonparametric approach is conducted in a simulation study under three distributional assumptions.

10.
Meta-analysis seeks to combine the results of several experiments in order to improve the accuracy of decisions. It is common to use a test for homogeneity to determine whether the results of the several experiments are sufficiently similar to warrant their combination into an overall result. Cochran's Q statistic is frequently used for this homogeneity test. It is often assumed that Q follows a chi-square distribution under the null hypothesis of homogeneity, but it has long been known that this asymptotic distribution for Q is not accurate for moderate sample sizes. Here, we present an expansion for the mean of Q under the null hypothesis that is valid when the effect and the weight for each study depend on a single parameter, but for which neither normality nor independence of the effect and weight estimators is needed. This expansion represents an O(1/n) correction to the usual chi-square moment in the one-parameter case. We apply the result to the homogeneity test for meta-analyses in which the effects are measured by the standardized mean difference (Cohen's d statistic). In this situation, we recommend approximating the null distribution of Q by a chi-square distribution with fractional degrees of freedom that are estimated from the data using our expansion for the mean of Q. The resulting homogeneity test is substantially more accurate than the currently used test. We provide a program for making the necessary calculations, available at the Paper Information link at the Biometrics website http://www.biometrics.tibs.org.
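For reference, the Q statistic itself is just the weighted sum of squared deviations of the study effects from their inverse-variance-weighted mean; the paper's contribution adjusts the reference distribution, not the statistic. A minimal sketch:

```python
def cochran_q(effects, variances):
    """Cochran's Q homogeneity statistic with inverse-variance weights.

    effects[i] is the estimated effect of study i, variances[i] its
    estimated variance.  Conventionally Q is referred to chi-square with
    k - 1 degrees of freedom, the approximation the abstract corrects."""
    weights = [1 / v for v in variances]
    wsum = sum(weights)
    mean = sum(w * e for w, e in zip(weights, effects)) / wsum
    return sum(w * (e - mean) ** 2 for w, e in zip(weights, effects))
```

Identical effects give Q = 0, and larger between-study spread (relative to the within-study variances) inflates Q.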

11.
Typical animal carcinogenicity studies involve the comparison of several dose groups with a negative control. The uncorrected asymptotic Cochran-Armitage trend test with equally spaced dose scores is the most frequently used test in such set-ups. However, this test is based on a weighted linear regression on proportions, and it is well known that the Cochran-Armitage test lacks power for shapes other than the assumed linear one. Therefore, dichotomous multiple contrast tests are introduced. These take the maximum over several single contrasts, each of which is chosen to cover a specific dose-response shape. An extensive power study has been conducted to compare multiple contrast tests with the approaches used so far; the crucial results are presented in this paper. Moreover, exact tests and continuity-corrected versions are introduced and compared with the traditional uncorrected approaches regarding size and power behaviour. For the evaluation of carcinogenicity studies, a trend test for any shape of the dose-response relationship is proposed, applicable to either crude tumour rates or mortality-adjusted rates based on the simple Poly-3 transformation.
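A minimal sketch of the multiple contrast idea for dichotomous data: compute a standardized Z-type statistic per contrast and take the maximum. The contrast vectors below are illustrative, and the critical value (which must account for the correlation between the contrast statistics, e.g. via a multivariate normal quantile) is not computed here.

```python
from math import sqrt

def max_contrast_stat(successes, totals, contrasts):
    """Maximum over single-contrast statistics for proportions.

    Each contrast is a tuple of coefficients summing to zero, chosen to
    target one candidate dose-response shape; the maximum over the
    standardized contrasts serves as the test statistic."""
    phat = [x / n for x, n in zip(successes, totals)]
    pbar = sum(successes) / sum(totals)
    v0 = pbar * (1 - pbar)  # pooled variance factor under the null
    z_values = []
    for c in contrasts:
        num = sum(ci * pi for ci, pi in zip(c, phat))
        var = v0 * sum(ci ** 2 / n for ci, n in zip(c, totals))
        z_values.append(num / sqrt(var))
    return max(z_values)
```

Note that the linear contrast (-1, 0, 1) with equal group sizes reproduces the Cochran-Armitage statistic, which is why the multiple contrast test never loses much power against a truly linear shape while gaining power elsewhere.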

12.
Multiple‐dose factorial designs may provide confirmatory evidence that (fixed) combination drugs are superior to either component drug alone. Moreover, a useful and safe range of dose combinations may be identified. In our study, we focus on (A) adjustments of the overall significance level made necessary by multiple testing, (B) improvement of conventional statistical methods with respect to power, distributional assumptions and dimensionality, and (C) construction of corresponding simultaneous confidence intervals. We propose novel resampling algorithms, which in a simple way take the correlation of multiple test statistics into account, thus improving power. Moreover, these algorithms can easily be extended to combinations of more than two component drugs and binary outcome data. Published data summaries from a blood pressure reduction trial are analysed and presented as a worked example. An implementation of the proposed methods is available online as an R package.  相似文献   

13.
Tests for a monotonic trend between an ordered categorical exposure and disease status are routinely carried out on case-control data using the Mantel-extension trend test or the asymptotically equivalent Cochran-Armitage test. In this study, we considered two alternative tests based on isotonic regression, namely an order-restricted likelihood ratio test and an isotonic modification of the Mantel-extension test extending the recent proposal of Mancuso, Ahn and Chen (2001) to case-control data. Furthermore, we considered three tests based on contrasts, namely a single contrast (SC) test based on Schaafsma's coefficients, the Dosemeci and Benichou (DB) test, and a multiple contrast (MC) test based on the Helmert, reverse-Helmert and linear contrasts, and we derived their case-control versions. Using simulations, we compared the statistical properties of these five alternative tests with those of the Mantel-extension test under various patterns, including no relationship as well as monotonic and non-monotonic relationships between exposure and disease status. In the case of no relationship, all tests had close to nominal type I error, except in situations combining a very unbalanced exposure distribution and a small sample size, where the asymptotic versions of the three tests based on contrasts were highly anticonservative. Using bootstrap instead of asymptotic versions corrected this anticonservatism. For monotonic patterns, all tests had similar powers. For non-monotonic patterns, where rejecting in favour of a monotonic trend is undesirable, the DB test showed the most favourable results, being the least powerful; the two tests based on isotonic regression were the most powerful, and the Mantel-extension test, the SC test and the MC test had in-between powers. The six tests were applied to data from a case-control study investigating the relationship between alcohol consumption and risk of laryngeal cancer in Turkey. In situations with no evidence of a monotonic relationship between exposure and disease status, the three tests based on contrasts did not conclude in favour of a significant trend, whereas all the other tests did. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
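The isotonic fits underlying order-restricted tests of this kind can be computed with the pool-adjacent-violators algorithm (PAVA); a minimal sketch follows (forming the actual trend statistics and their null distributions is beyond this snippet).

```python
def pava(y, w=None):
    """Pool-adjacent-violators algorithm.

    Returns the weighted least-squares fit of a nondecreasing sequence
    to y: adjacent values violating monotonicity are repeatedly pooled
    into their weighted mean."""
    w = w or [1.0] * len(y)
    blocks = []  # each block: [mean, total_weight, count]
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Merge backwards while the monotonicity constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / wt, wt, c1 + c2])
    fit = []
    for m, _, c in blocks:
        fit.extend([m] * c)
    return fit
```

Applied to dose-level proportions with group sizes as weights, the fitted values are the order-restricted estimates that the isotonic trend tests are built on.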

14.
15.
New therapy strategies for the treatment of cancer are rapidly emerging because of recent technological advances in genetics and molecular biology. Although newer targeted therapies can improve survival without measurable changes in tumor size, clinical trial conduct has remained nearly unchanged. When potentially efficacious therapies are tested, current clinical trial design and analysis methods may not be suitable for detecting therapeutic effects. We propose an exact method for testing cytostatic cancer treatments that uses correlated bivariate binomial random variables to simultaneously assess two primary outcomes. The method is easy to implement. It does not increase the sample size over that of the univariate exact test and in most cases reduces the required sample size. Sample size calculations are provided for selected designs.

16.
Second-generation sequencing (sec-gen) technology can sequence millions of short fragments of DNA in parallel, making it capable of assembling complex genomes for a small fraction of the price and time of previous technologies. In fact, a recently formed international consortium, the 1000 Genomes Project, plans to fully sequence the genomes of approximately 1200 people. The prospect of comparative analysis at the sequence level of a large number of samples across multiple populations may be achieved within the next five years. These data present unprecedented challenges in statistical analysis. For instance, analysis operates on millions of short nucleotide sequences, or reads (strings of A, C, G, or T, between 30 and 100 characters long), which are the result of complex processing of noisy continuous fluorescence intensity measurements known as base-calling. The complexity of the base-calling discretization process results in reads of widely varying quality within and across sequence samples. This variation in processing quality results in infrequent but systematic errors that we have found to mislead downstream analysis of the discretized sequence read data. For instance, a central goal of the 1000 Genomes Project is to quantify across-sample variation at the single nucleotide level. At this resolution, small error rates in sequencing prove significant, especially for rare variants. Sec-gen sequencing is a relatively new technology for which potential biases and sources of obscuring variation are not yet fully understood. Therefore, modeling and quantifying the uncertainty inherent in the generation of sequence reads is of utmost importance. In this article, we present a simple model to capture the uncertainty arising in the base-calling procedure of the Illumina/Solexa GA platform.
Model parameters have a straightforward interpretation in terms of the chemistry of base-calling, allowing for informative and easily interpretable metrics that capture the variability in sequencing quality. Our model provides informative estimates readily usable in quality assessment tools while significantly improving base-calling performance.

17.
18.
When applying the Cochran-Armitage (CA) trend test for an association between a candidate allele and a disease in a case-control study, a set of scores must be assigned to the genotypes. Sasieni (1997, Biometrics 53, 1253-1261) suggested scores for the recessive, additive, and dominant models but did not examine their statistical properties. Using the criterion of minimizing the required sample size of the CA trend test to achieve prespecified type I and type II errors, we show that the scores given by Sasieni (1997) are optimal for the recessive and dominant models and locally optimal for the additive one. Moreover, the additive scores are shown to be locally optimal for the multiplicative model. The tests are applied to a real dataset.

19.
Wellek, S. (2004). Biometrics 60(3), 694-703.
The classical χ²-procedure for the assessment of genetic equilibrium is tailored for establishing lack rather than goodness of fit of an observed genotype distribution to a model satisfying the Hardy-Weinberg law, and the same is true for the exact competitors to the large-sample procedure which have been proposed in the biostatistical literature since the late 1930s. In this contribution, the methodology of statistical equivalence testing is adopted for the construction of tests for problems in which the assumption of approximate compatibility of the genotype distribution actually sampled with Hardy-Weinberg equilibrium (HWE) plays the role of the alternative hypothesis one aims to establish. The result of such a construction highly depends on the choice of a measure of distance to be used for defining an indifference zone containing those genotype distributions whose degree of disequilibrium shall be considered irrelevant. The first such measure proposed here is the Euclidean distance of the true parameter vector from that of a genotype distribution with identical allele frequencies being in strict HWE. The second measure is based on the (scalar) parameter of the distribution first introduced into the present context by Stevens (1938, Annals of Eugenics 8, 377-383). The first approach leads to a nonconditional test (which nevertheless can be carried out in a numerically exact way), the second to an exact conditional test shown to be uniformly most powerful unbiased (UMPU) for the associated pair of hypotheses. Both tests are compared in terms of the exact power attained against the class of those specific alternatives under which HWE is strictly satisfied.

20.
Although linear rank statistics for the two-sample problem are distribution-free tests, their power depends on the distribution of the data. In the planning phase of an experiment, researchers are often uncertain about the shape of this distribution, so the choice of test statistic for the analysis and the determination of the required sample size are based on vague information. Adaptive designs with interim analyses can potentially overcome both problems, and adaptive tests based on a selector statistic are, in particular, a solution to the first. We investigate whether adaptive tests can be usefully implemented in flexible two-stage designs to gain power. In a simulation study, we compare several methods for choosing a test statistic for the second stage of an adaptive design based on interim data with a procedure that applies adaptive tests in both stages. We find that the latter is a sensible approach that leads to the best results in most of the situations considered here. The different methods are illustrated using a clinical trial example.
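A toy illustration of a selector statistic: a Hogg-type tailweight measure computed from the pooled sample decides which rank scores to use. The cutoff and the score names below are illustrative, not taken from the paper.

```python
def select_test(pooled_sample):
    """Pick a rank test from a tailweight selector statistic.

    The selector compares the spread of the extreme ~5% tail averages
    with the spread of the central-half averages; heavy tails favour
    median-test scores, light tails favour Wilcoxon scores.
    Cutoff 2.0 is an illustrative choice, not from the paper."""
    x = sorted(pooled_sample)
    n = len(x)
    k = max(1, n // 20)   # roughly 5% in each extreme tail
    m = max(1, n // 2)    # central halves
    upper, lower = sum(x[-k:]) / k, sum(x[:k]) / k
    mid_hi, mid_lo = sum(x[-m:]) / m, sum(x[:m]) / m
    if mid_hi == mid_lo:  # degenerate (constant) sample
        return "wilcoxon"
    q = (upper - lower) / (mid_hi - mid_lo)  # tailweight ratio
    return "wilcoxon" if q < 2.0 else "median"
```

Because the selector depends on the pooled sample only, the resulting two-stage procedure remains distribution-free under the null, which is what makes such adaptive tests attractive in the designs the abstract studies.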
