Similar Documents
20 similar documents were retrieved (search time: 15 ms).
1.
To construct a confidence interval for effect size in paired studies, we propose four approximate methods: the Wald method, the variance-stabilizing transformation method, and the signed and modified signed log-likelihood ratio methods. We compare these methods by simulation to determine which have good performance in terms of coverage probability. In particular, the simulations show that the modified signed log-likelihood ratio method produces a confidence interval with nearly exact coverage probability and highly accurate, symmetric error probabilities even for very small samples. We apply the methods to data from an iron deficiency anemia study.
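The signed log-likelihood ratio construction used throughout these abstracts can be sketched for a simple one-parameter case. The exponential-mean example below is illustrative only (the paired effect-size setting of the paper is more involved, and the function names are mine): the statistic r(θ) = sign(θ̂ − θ)·√(2(ℓ(θ̂) − ℓ(θ))) is approximately standard normal, so inverting r(θ) = ±z gives a confidence interval.

```python
import math

def signed_llr(theta, xbar, n):
    # Signed log-likelihood ratio for the mean of an exponential sample
    # with observed mean xbar: r = sign(xbar - theta) * sqrt(2*(l(xbar) - l(theta))).
    w = 2.0 * n * (math.log(theta / xbar) + xbar / theta - 1.0)
    return math.copysign(math.sqrt(max(w, 0.0)), xbar - theta)

def llr_interval(xbar, n, z=1.959964):
    # r(theta) is monotone decreasing in theta, so each endpoint can be
    # found by bisection: r(lower) = +z, r(upper) = -z.
    def solve(target, lo, hi):
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if signed_llr(mid, xbar, n) > target:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)
    return solve(z, xbar * 1e-3, xbar), solve(-z, xbar, xbar * 1e3)

lo, hi = llr_interval(xbar=2.0, n=25)
print(lo, hi)
```

Unlike the symmetric Wald interval, the interval produced this way is asymmetric around the point estimate, which is why these methods do better in skewed, small-sample settings.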

2.
In this paper, we consider an approach based on the adjusted signed log-likelihood ratio statistic for constructing a confidence interval for the mean of lognormal data with excess zeros. An extensive simulation study suggests that the proposed approach outperforms all existing methods in terms of coverage probability and the symmetry of the upper and lower tail error probabilities. Finally, we analyze two real-life datasets using the proposed approach.

3.
In inter-laboratory studies, a fundamental problem of interest is inference concerning the consensus mean when the measurements are made by several laboratories that may exhibit different within-laboratory variances, apart from the between-laboratory variability. A heteroscedastic one-way random effects model is very often used to model this scenario. Under such a model, a modified signed log-likelihood ratio procedure is developed for interval estimation of the common mean. Simulation results are presented to show the accuracy of the proposed confidence interval, especially for small samples. The results are illustrated using an example on the determination of selenium in non-fat milk powder by combining the results of four methods. Here the sample size is small, and the confidence limits for the common mean obtained by different methods differ considerably. The confidence interval based on the modified signed log-likelihood ratio procedure appears to be quite satisfactory.

4.
Zhou XH  Tu W 《Biometrics》1999,55(2):645-651
In this paper, we consider the problem of testing the equality of means of several independent populations that contain log-normal and possibly zero observations. We first show that the methods currently used in statistical practice, including the nonparametric Kruskal-Wallis test, the standard ANOVA F-test, and its two modified versions, the Welch test and the Brown-Forsythe test, can have poor Type I error control. We then propose a likelihood ratio test that is shown to have much better Type I error control than the existing methods. Finally, we analyze two real data sets that motivated our study using the proposed test.

5.
Rosner B  Glynn RJ  Lee ML 《Biometrics》2006,62(1):185-192
The Wilcoxon signed rank test is a frequently used nonparametric test for paired data (e.g., consisting of pre- and posttreatment measurements) based on independent units of analysis. This test cannot be used for paired comparisons arising from clustered data (e.g., if paired comparisons are available for each of two eyes of an individual). To incorporate clustering, a generalization of the randomization test formulation for the signed rank test is proposed, where the unit of randomization is at the cluster level (e.g., person), while the individual paired units of analysis are at the subunit within cluster level (e.g., eye within person). An adjusted variance estimate of the signed rank test statistic is then derived, which can be used for either balanced (same number of subunits per cluster) or unbalanced (different number of subunits per cluster) data, with an exchangeable correlation structure, with or without tied values. The resulting test statistic is shown to be asymptotically normal as the number of clusters becomes large, if the cluster size is bounded. Simulation studies are performed based on simulating correlated ranked data from a signed log-normal distribution. These studies indicate appropriate type I error for data sets with ≥ 20 clusters and a superior power profile compared with either the ordinary signed rank test based on the average cluster difference score or the multivariate signed rank test of Puri and Sen. Finally, the methods are illustrated with two data sets, (i) an ophthalmologic data set involving a comparison of electroretinogram (ERG) data in retinitis pigmentosa (RP) patients before and after undergoing an experimental surgical procedure, and (ii) a nutritional data set based on a randomized prospective study of nutritional supplements in RP patients where vitamin E intake outside of study capsules is compared before and after randomization to monitor compliance with nutritional protocols.

6.
Tang NS  Tang ML 《Biometrics》2002,58(4):972-980
In this article, we consider small-sample statistical inference for the rate ratio (RR) in a correlated 2 × 2 table with a structural zero in one of the off-diagonal cells. The existing Wald test statistic and logarithmic-transformation test statistic are adopted for this purpose. Hypothesis testing and confidence interval construction based on large-sample theory are reviewed first. We then propose reliable small-sample exact unconditional procedures for hypothesis testing and confidence interval construction. We present empirical results to demonstrate the better confidence interval performance of our proposed exact unconditional procedures over the traditional large-sample procedures in small-sample designs. Unlike the findings of Lui (1998, Biometrics 54, 706-711), our empirical studies show that the existing asymptotic procedures may not attain a prespecified confidence level even in moderate sample-size designs (e.g., n = 50). Our exact unconditional procedures, on the other hand, do not suffer from this problem; hence, the asymptotic procedures should be applied with caution. We propose two approximate unconditional confidence interval construction methods that outperform the existing asymptotic ones in terms of coverage probability and expected interval width. We also empirically demonstrate that the approximate unconditional tests are more powerful than their associated exact unconditional tests. A real data set from a two-step tuberculosis testing study is used to illustrate the methodologies.

7.
Gill PS 《Biometrics》2004,60(2):525-527
We propose a likelihood-based test for comparing the means of two or more log-normal distributions, with possibly unequal variances. A modification to the likelihood ratio test is needed when sample sizes are small. The performance of the proposed procedures is compared with the F-ratio test using Monte Carlo simulations.

8.
9.
Confidence intervals for the mean of one sample and the difference in means of two independent samples based on the ordinary t-statistic suffer deficiencies when samples come from skewed families. In this article we evaluate several existing techniques and propose new methods to improve coverage accuracy. The methods examined include the ordinary-t, the bootstrap-t, the bias-corrected and accelerated (BCa) interval, and three new intervals based on transformations of the t-statistic. Our study shows that our new transformation intervals and the bootstrap-t intervals give the best coverage accuracy for a variety of skewed distributions, and that our new transformation intervals have shorter interval lengths.
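The bootstrap-t interval mentioned above can be sketched in a few lines. This is a minimal stdlib-only illustration (the function name and the exponential example data are mine, not the paper's): each resample is studentized around the observed mean, and the empirical quantiles of those t-statistics replace the normal-theory critical values.

```python
import math
import random
import statistics

def bootstrap_t_ci(x, alpha=0.05, B=2000, seed=1):
    # Bootstrap-t interval for the mean: resample, studentize around the
    # observed mean, then invert the empirical quantiles of t*.
    rng = random.Random(seed)
    n = len(x)
    xbar = statistics.fmean(x)
    se = statistics.stdev(x) / math.sqrt(n)
    tstats = []
    for _ in range(B):
        bs = [rng.choice(x) for _ in range(n)]
        bse = statistics.stdev(bs) / math.sqrt(n)
        if bse > 0:  # guard against degenerate resamples
            tstats.append((statistics.fmean(bs) - xbar) / bse)
    tstats.sort()
    t_hi = tstats[int((1 - alpha / 2) * len(tstats))]
    t_lo = tstats[int((alpha / 2) * len(tstats))]
    return xbar - t_hi * se, xbar - t_lo * se

rng = random.Random(7)
data = [rng.expovariate(1.0) for _ in range(30)]  # a skewed sample
lo, hi = bootstrap_t_ci(data)
```

Because the resampled t-distribution inherits the skewness of the data, the resulting interval is asymmetric, which is the source of its improved coverage for skewed families.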

10.
Tamhane AC  Logan BR 《Biometrics》2002,58(3):650-656
Tang, Gnecco, and Geller (1989, Biometrika 76, 577-583) proposed an approximate likelihood ratio (ALR) test of the null hypothesis that a normal mean vector equals a null vector against the alternative that all of its components are nonnegative with at least one strictly positive. This test is useful for comparing a treatment group with a control group on multiple endpoints, where the data from the two groups are assumed to follow multivariate normal distributions with different mean vectors and a common covariance matrix (the homoscedastic case). Tang et al. derived the test statistic and its null distribution assuming a known covariance matrix. In practice, when the covariance matrix is estimated, the critical constants tabulated by Tang et al. result in a highly liberal test. To deal with this problem, we derive an accurate small-sample approximation to the null distribution of the ALR test statistic by using the moment matching method. The proposed approximation is then extended to the heteroscedastic case. The accuracy of both approximations is verified by simulations. A real data example is given to illustrate the use of the approximations.

11.
Chan IS  Tang NS  Tang ML  Chan PS 《Biometrics》2003,59(4):1170-1177
Testing of noninferiority has become increasingly important in modern medicine as a means of comparing a new test procedure to a currently available test procedure. Asymptotic methods have recently been developed for analyzing noninferiority trials using rate ratios under the matched-pair design. In small samples, however, the performance of these asymptotic methods may not be reliable, and they are not recommended. In this article, we investigate alternative methods that are desirable for assessing noninferiority trials, using the rate ratio measure under small-sample matched-pair designs. In particular, we propose an exact and an approximate exact unconditional test, along with the corresponding confidence intervals based on the score statistic. The exact unconditional method guarantees that the type I error rate will not exceed the nominal level. It is recommended when strict control of the type I error rate (protection against any inflated risk of accepting inferior treatments) is required. However, the exact method tends to be overly conservative (thus, less powerful) and computationally demanding. Via empirical studies, we demonstrate that the approximate exact score method, which is computationally simple to implement, controls the type I error rate reasonably well and has high power for hypothesis testing. On balance, the approximate exact method offers a very good alternative for analyzing correlated binary data from matched-pair designs with small sample sizes. We illustrate these methods using two real examples taken from a crossover study of soft lenses and a Pneumocystis carinii pneumonia study. We contrast the methods with a hypothetical example.

12.
An exciting biological advance over the past few years is the use of microarray technologies to measure simultaneously the expression levels of thousands of genes. The bottleneck now is how to extract useful information from the resulting large amounts of data. An important and common task in analyzing microarray data is to identify genes with altered expression under two experimental conditions. We propose a nonparametric statistical approach, called the mixture model method (MMM), to handle the problem when there is a small number of replicates under each experimental condition. Specifically, we propose estimating the distributions of a t-type test statistic and its null statistic using finite normal mixture models. A comparison of these two distributions by means of a likelihood ratio test, or simply using the tail distribution of the null statistic, can identify genes with significantly changed expression. Several methods are proposed to effectively control the false positives. The methodology is applied to a data set containing expression levels of 1,176 genes of rats with and without pneumococcal middle ear infection.
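The finite normal mixture fits at the heart of MMM can be illustrated with a bare-bones two-component EM algorithm. This is a generic sketch under my own naming, not the authors' implementation: it alternates soft assignments of points to components (E-step) with weighted moment updates of the component parameters (M-step).

```python
import math
import random

def npdf(x, mu, sd):
    # Normal density, used for the E-step responsibilities.
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def em_two_normals(data, iters=100):
    # EM for a two-component normal mixture.
    mu1, mu2 = min(data), max(data)          # crude but separating init
    sd1 = sd2 = (max(data) - min(data)) / 4.0
    pi1 = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point.
        r = []
        for x in data:
            a = pi1 * npdf(x, mu1, sd1)
            b = (1 - pi1) * npdf(x, mu2, sd2)
            r.append(a / (a + b))
        # M-step: weighted means, variances, and mixing weight.
        w1 = sum(r)
        w2 = len(data) - w1
        mu1 = sum(ri * x for ri, x in zip(r, data)) / w1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / w2
        sd1 = max(math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / w1), 1e-3)
        sd2 = max(math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / w2), 1e-3)
        pi1 = w1 / len(data)
    return (pi1, mu1, sd1), (1 - pi1, mu2, sd2)

rng = random.Random(3)
data = [rng.gauss(0, 1) for _ in range(200)] + [rng.gauss(5, 1) for _ in range(200)]
comp1, comp2 = em_two_normals(data)
```

In MMM such mixtures are fitted separately to the observed test statistic and its null statistic, and the two fitted densities are then compared gene by gene.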

13.
Simultaneous confidence intervals for comparing binomial parameters
Agresti A  Bini M  Bertaccini B  Ryu E 《Biometrics》2008,64(4):1270-1275
To compare proportions with several independent binomial samples, we recommend a method of constructing simultaneous confidence intervals that uses the studentized range distribution with a score statistic. It applies to a variety of measures, including the difference of proportions, odds ratio, and relative risk. For the odds ratio, a simulation study suggests that the method has coverage probability closer to the nominal value than ad hoc approaches such as the Bonferroni implementation of Wald or "exact" small-sample pairwise intervals. It performs well even for the problematic but practically common case in which the binomial parameters are relatively small. For the difference of proportions, the proposed method has performance comparable to a method proposed by Piegorsch (1991, Biometrics 47, 45-52).

14.
Procedures for discriminating between competing statistical models of synaptic transmission, and for providing confidence limits on the parameters of these models, have been developed. These procedures were tested against simulated data and were used to analyze the fluctuations in synaptic currents evoked in hippocampal neurones. All models were fitted to data using the Expectation-Maximization algorithm and a maximum likelihood criterion. Competing models were evaluated using the log-likelihood ratio (Wilks statistic). When the competing models were not nested, Monte Carlo sampling of the model used as the null hypothesis (H0) provided density functions against which H0 and the alternate model (H1) were tested. The statistic for the log-likelihood ratio was determined from the fit of H0 and H1 to these probability densities. This statistic was used to determine the significance level at which H0 could be rejected for the original data. When the competing models were nested, log-likelihood ratios and the χ² statistic were used to determine the confidence level for rejection. Once the model that provided the best statistical fit to the data was identified, many estimates for the model parameters were calculated by resampling the original data. Bootstrap techniques were then used to obtain the confidence limits of these parameters.

15.
Csanády L 《Biophysical Journal》2006,90(10):3523-3545
The distributions of log-likelihood ratios (ΔLL) obtained from fitting ion-channel dwell-time distributions with nested pairs of gating models (Ξ, full model; Ξ_R, submodel) were studied both theoretically and using simulated data. When Ξ is true, ΔLL is asymptotically normally distributed with predictable mean and variance that increase linearly with data length (n). When Ξ_R is true and corresponds to a distinct point in full parameter space, ΔLL is gamma-distributed (2ΔLL is χ²-distributed). However, when data generated by an l-component multiexponential distribution are fitted by l+1 components, Ξ_R corresponds to an infinite set of points in parameter space. The distribution of ΔLL is then a mixture of two components, one identically zero, the other approximated by a gamma distribution. This empirical distribution of ΔLL, assuming Ξ_R, allows construction of a valid log-likelihood ratio test. The log-likelihood ratio test, the Akaike information criterion, and the Schwarz criterion all produce asymmetrical Type I and II errors and inefficiently recognize Ξ, when true, from short datasets. A new decision strategy, which considers both the parameter estimates and ΔLL, yields more symmetrical errors and a larger discrimination power for small n. These observations are explained by the distributions of ΔLL when Ξ or Ξ_R is true.

16.
This paper considers four summary test statistics, including the one recently proposed by Bennett (1986, Biometrical Journal 28, 859-862), for hypothesis testing of association in a series of independent fourfold tables under inverse sampling. It provides a systematic and quantitative evaluation of the small-sample performance of these summary test statistics on the basis of a Monte Carlo simulation. It notes that the test statistic developed by Bennett (1986) can be conservative, and thereby lose power, when the underlying disease is not rare. It also finds that, for a given fixed total number of cases in each table, the conditional test statistic is the best at controlling type I error among all the test statistics considered here.

17.
For the purpose of making inferences for a one-dimensional interest parameter, or constructing approximate complementary ancillaries or residuals, the directed likelihood, or signed square root of the likelihood ratio statistic, can be adjusted so that the resulting modified directed likelihood is, under ordinary repeated sampling, approximately standard normal with error of O(n^(-3/2)), conditional on a suitable ancillary statistic and hence unconditionally. In general, suitable specification of the ancillary statistic may be difficult. We introduce two adjusted directed likelihoods which are similar to the modified directed likelihood but do not require specification of the ancillary statistic. The error of the standard normal approximation to the distribution of these new adjusted directed likelihoods is O(n^(-1)), conditional on any reasonable ancillary statistic, which is still an improvement over the unadjusted directed likelihoods.

18.
The kappa index was developed by Cohen and others for measuring nominal-scale agreement between two raters. This statistic measures the distance from the null hypothesis of independent ratings by two observers. Here a modified kappa is introduced, which takes into account the distance between the marginal distributions as well. This distance is interpreted as the so-called interobserver bias. Population analogues are defined for the modified kappa and a related conditional index. For these parameters, asymptotic confidence intervals and tests are derived. The procedures are illustrated by fictitious and real examples.
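Cohen's original (unmodified) kappa is easy to state in code, and the bias-adjusted version of this abstract builds on the same two ingredients, observed versus chance agreement. A minimal sketch (the function name is mine):

```python
def cohens_kappa(table):
    # table[i][j]: count of items placed in category i by rater 1
    # and category j by rater 2.
    n = sum(sum(row) for row in table)
    k = len(table)
    p_obs = sum(table[i][i] for i in range(k)) / n          # observed agreement
    row = [sum(table[i]) / n for i in range(k)]             # rater-1 marginals
    col = [sum(table[i][j] for i in range(k)) / n for j in range(k)]  # rater-2 marginals
    p_exp = sum(r * c for r, c in zip(row, col))            # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

print(round(cohens_kappa([[20, 5], [10, 15]]), 6))  # (0.7 - 0.5) / 0.5 = 0.4
```

The modified kappa of the abstract additionally penalizes the discrepancy between the two raters' marginal distributions (row versus col above), which plain kappa ignores.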

19.
Consider a sequence of independent exponential random variables that is susceptible to a change in the means. We would like to test whether the means have been subjected to an epidemic change after an unknown point, for an unknown duration in the sequence. The likelihood ratio statistic and a likelihood ratio type statistic are derived. The distribution theory and related properties of the test statistics are discussed. Percentage points and powers of the tests are tabulated for selected values of the parameters. The powers of these two tests are then compared with those of the two statistics proposed by Aly and Bouzar. The tests are applied to find epidemic changes in the Stanford heart transplant data and air traffic arrival data.
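A brute-force version of the likelihood ratio scan for an epidemic change in exponential means looks like this. It is my own simplified sketch, not the paper's statistic: every candidate window [i, j) gets a two-mean fit, which is compared to the single-mean fit of the whole sequence.

```python
import math

def seg_loglik(s, k):
    # Maximized exponential log-likelihood of a segment with sum s and
    # size k (the MLE of the segment mean is s/k).
    m = s / k
    return -k * math.log(m) - k

def epidemic_llr(x, min_len=2):
    # Scan all epidemic windows [i, j); return the largest log-likelihood
    # ratio against the constant-mean null and the window achieving it.
    n, total = len(x), sum(x)
    null = seg_loglik(total, n)
    prefix = [0.0]
    for v in x:
        prefix.append(prefix[-1] + v)
    best, window = 0.0, None
    for i in range(n):
        for j in range(i + min_len, n):   # j < n keeps the outside nonempty
            s_in = prefix[j] - prefix[i]
            s_out = total - s_in
            ll = seg_loglik(s_in, j - i) + seg_loglik(s_out, n - (j - i))
            if ll - null > best:
                best, window = ll - null, (i, j)
    return best, window

best, window = epidemic_llr([1.0] * 10 + [5.0] * 5 + [1.0] * 10)
print(best, window)  # the elevated stretch is recovered: window == (10, 15)
```

The paper's contribution is the null distribution of such a maximized statistic (percentage points), which this sketch does not attempt; in practice the threshold would come from those tables or from simulation.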

20.
The permutation test is a popular technique for testing a hypothesis of no effect when the distribution of the test statistic is unknown. To test the equality of two means, a permutation test might use, in the univariate case, a test statistic that is the difference of the two sample means; in the multivariate case, it might use the maximum of the univariate test statistics. A permutation test then estimates the null distribution of the test statistic by permuting the observations between the two samples. We show that, for such tests, if the two distributions are not identical (for example, when they have unequal variances, correlations, or skewness), then a permutation test for equality of means based on the difference of sample means can have an inflated Type I error rate even when the means are equal. Our results illustrate that permutation testing should be confined to testing for non-identical distributions.
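The univariate difference-of-means permutation test the abstract describes can be sketched in a few lines (the function name and add-one p-value convention are mine):

```python
import random
import statistics

def permutation_test(x, y, B=5000, seed=0):
    # Two-sample permutation test of equal means, using the absolute
    # difference of sample means as the test statistic.
    rng = random.Random(seed)
    observed = abs(statistics.fmean(x) - statistics.fmean(y))
    pooled = list(x) + list(y)
    n = len(x)
    hits = 0
    for _ in range(B):
        rng.shuffle(pooled)
        diff = abs(statistics.fmean(pooled[:n]) - statistics.fmean(pooled[n:]))
        hits += diff >= observed
    return (hits + 1) / (B + 1)   # add-one keeps the p-value away from zero
```

The abstract's point is that the shuffling step implicitly assumes the two samples are exchangeable, i.e., identically distributed under the null; when only the means are equal but variances or shapes differ, that assumption fails and the test can over-reject.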
