Similar Articles
20 similar articles found.
1.
We present a Monte-Carlo simulation analysis of the statistical properties of absolute genetic distance and of Nei's minimum and standard genetic distances. The estimation of distances (bias) and of their variances is analysed, as well as the distributions of the distance and variance estimators, taking into account both gamete and locus sampling. Both of Nei's statistics are non-linear when distances are small, and consequently the distributions of their estimators are extremely asymmetrical; it is difficult to find theoretical laws that fit such asymmetrical distributions. Absolute genetic distance is linear and its distributions are better fitted by a normal distribution. When distances are medium or large, the minimum distance and absolute distance distributions are close to normal, but those of the standard distance can never be considered normal. For large distances the jack-knife estimator of the standard distance variance performs poorly; an alternative estimator is suggested. Absolute distance, which has the best mathematical properties, is particularly interesting for small distances if the gamete sample size is large, even when the number of loci is small. When both the distance and the gamete sample size are small, this statistic is biased.
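As a rough illustration of the quantities involved, Nei's standard and minimum distances can be computed directly from allele-frequency tables; the formulas below are the standard textbook definitions, and the two-locus frequency arrays are invented for the example:

```python
import numpy as np

def nei_distances(px, py):
    """Nei's standard and minimum genetic distances between two
    populations, given allele-frequency arrays of shape
    (n_loci, n_alleles).  Standard textbook formulas; the inputs
    are illustrative, not from the paper."""
    jx = np.mean(np.sum(px**2, axis=1))    # mean homozygosity in X
    jy = np.mean(np.sum(py**2, axis=1))    # mean homozygosity in Y
    jxy = np.mean(np.sum(px * py, axis=1)) # mean gene identity between X and Y
    d_standard = -np.log(jxy / np.sqrt(jx * jy))
    d_minimum = (jx + jy) / 2.0 - jxy
    return d_standard, d_minimum

# Two loci, two alleles each (made-up frequencies)
px = np.array([[0.7, 0.3], [0.5, 0.5]])
py = np.array([[0.6, 0.4], [0.4, 0.6]])
ds, dm = nei_distances(px, py)
```

Both distances are zero for identical frequency tables and grow as the tables diverge, which is what makes their small-distance behaviour (linearity, estimator asymmetry) worth simulating.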

2.
Tests are introduced which are designed to test for a nondecreasing ordered alternative among the survival functions of k populations consisting of multiple observations on each subject. Some of the observations could be right censored. A simulation study is conducted comparing the proposed tests on the basis of estimated power when the underlying distributions are multivariate normal. Equal sample sizes of 20 with 25% censoring, and 40 with both 25% and 50% censoring are considered for 3 and 4 populations. All of the tests hold their α‐values well. A recommendation is made as to the best overall test for the situations considered.

3.
A robust test (to be referred to as the M* test) is proposed for testing equality of several group means without assuming normality or equality of variances. The test statistic is obtained by combining Tiku's MML robust procedure with the James statistic. Monte Carlo simulation studies indicate that the M* test is more powerful than the Welch test, the James test, and the tests based on Huber's M-estimators over a wide range of nonnormal universes. It is also more powerful than the Brown and Forsythe test under most nonnormal distributions and has substantially the same power as the Brown and Forsythe test under the normal distribution. It is almost as powerful as the Tan-Tabatabai test.

4.
5.
DUNNETT (1955) developed a procedure simultaneously comparing k treatments to one control with an exact overall type I error of α when all sampling distributions are normal. Sometimes it is desirable to compare k treatments to m≧2 controls, in particular to two controls. For instance, several new therapies (e.g., pain relievers) could be compared to two standard therapies (e.g., Aspirin and Tylenol). Alternatively, a standard therapy could be very expensive, difficult to apply and/or have bad side effects, making it useful to compare each new therapy to both standard therapy and no therapy (Placebo). Dunnett's method is expanded here to give comparisons of mean values for k treatments to mean values for m≧2 controls at an exact overall type I error of α when all sampling distributions are normal. Tabled values needed to make exact simultaneous comparisons at α = .05 are given for m = 2. An application is made to an example from the literature.
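Lacking the paper's exact tables for m = 2 controls, a conservative stand-in is to Bonferroni-adjust ordinary two-sample t comparisons of each treatment against each control. This sketch (with simulated data; all parameters are invented) keeps the overall type I error below α, though with less power than exact Dunnett-type critical values would give:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical data: two standard therapies (controls) and two new therapies
controls = [rng.normal(0.0, 1.0, 20), rng.normal(0.0, 1.0, 20)]
treatments = [rng.normal(0.5, 1.0, 20), rng.normal(1.0, 1.0, 20)]

alpha = 0.05
n_comp = len(treatments) * len(controls)   # k * m comparisons
results = []
for i, trt in enumerate(treatments):
    for j, ctl in enumerate(controls):
        t, p = stats.ttest_ind(trt, ctl)
        # Bonferroni keeps the overall type I error below alpha, at
        # some cost in power compared with exact simultaneous tables
        results.append((i, j, p, p < alpha / n_comp))
```

The design choice the paper improves on is exactly this trade-off: exact critical values exploit the correlation among the k·m comparisons, whereas Bonferroni ignores it.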

6.
The classical normal-theory tests for testing the null hypothesis of common variance and the classical estimates of scale have long been known to be quite nonrobust to even mild deviations from normality assumptions for moderate sample sizes. Levene (1960) suggested a one-way ANOVA type statistic as a robust test. Brown and Forsythe (1974) considered a modified version of Levene's test, replacing the sample means with sample medians as estimates of population locations; their test is computationally the simplest among the three tests recommended by Conover, Johnson, and Johnson (1981) in terms of robustness and power. In this paper a new robust and powerful test for homogeneity of variances is proposed based on a modification of Levene's test using the weighted likelihood estimates (Markatou, Basu, and Lindsay, 1996) of the population means. For two and three populations the proposed test using the Hellinger distance based weighted likelihood estimates is observed to achieve better empirical level and power than the Brown-Forsythe test in symmetric distributions having a thicker tail than the normal, and higher empirical power in skewed distributions, under the use of F distribution critical values.
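The median-based Levene statistic that the paper builds on is available directly in SciPy; a minimal example on simulated heavy-tailed groups (sample sizes and scales are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
g1 = rng.standard_t(df=3, size=50)          # heavy-tailed, unit scale
g2 = 2.0 * rng.standard_t(df=3, size=50)    # same shape, doubled scale
g3 = rng.standard_t(df=3, size=50)

# Brown-Forsythe test: a one-way ANOVA on |x - group median|,
# i.e. Levene's statistic with medians as the location estimates
stat, p = stats.levene(g1, g2, g3, center='median')
```

The paper's proposal swaps the medians for weighted likelihood estimates of the means, keeping the same ANOVA-on-absolute-deviations structure.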

7.
This paper discusses the application of randomization tests to censored survival distributions. The three types of censoring considered are those designated by MILLER (1981) as Type 1 (fixed time termination), Type 2 (termination of experiment at r-th failure), and random censoring. Examples utilize the Gehan scoring procedure. Randomization tests for which computer programs already exist can be applied to a variety of experimental designs, regardless of the presence of censored observations.

8.
In life history studies, interest often lies in the analysis of the interevent, or gap, times and the association between event times. Gap time analyses are challenging, however, even when the length of follow‐up is determined independently of the event process, because associations between gap times induce dependent censoring for second and subsequent gap times. This article discusses nonparametric estimation of the association between consecutive gap times based on Kendall's τ in the presence of this type of dependent censoring. A nonparametric estimator that uses inverse probability of censoring weights is provided. Estimates of conditional gap time distributions can be obtained following specification of a particular copula function. Simulation studies show the estimator performs well and compares favorably with an alternative estimator. Generalizations to a piecewise constant Clayton copula are given. Several simulation studies and illustrations with real data sets are also provided.
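A bare-bones version of the weighting idea: estimate Kendall's τ as a weighted average of pairwise concordance indicators, where in the censored case the weight of a fully observed pair would be the inverse of its estimated censoring probability. The sketch below is not the paper's estimator; with all weights equal to one (no censoring) it reduces to the usual τ:

```python
import numpy as np
from itertools import combinations

def weighted_kendall_tau(x, y, w=None):
    """Pairwise-concordance estimate of Kendall's tau with optional
    per-subject weights -- a sketch of the inverse-probability-of-
    censoring-weighting (IPCW) idea, where w[i] would be the inverse
    of subject i's estimated censoring survival probability and zero
    for censored subjects.  With w = 1 this is plain tau."""
    n = len(x)
    if w is None:
        w = np.ones(n)
    num = den = 0.0
    for i, j in combinations(range(n), 2):
        wij = w[i] * w[j]
        num += wij * np.sign((x[i] - x[j]) * (y[i] - y[j]))
        den += wij
    return num / den

rng = np.random.default_rng(2)
x = rng.exponential(size=30)
y = x + rng.exponential(size=30)    # positively associated gap times
tau = weighted_kendall_tau(x, y)
```

With continuous data and unit weights this matches `scipy.stats.kendalltau` exactly, since there are no ties.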

9.
This paper aims at proposing efficient vegetation sampling strategies. It describes how the estimation of species richness and diversity of moist evergreen forest is affected by (1) sampling design (simple random sampling, random cluster sampling, systematic cluster sampling, stratified cluster sampling); (2) choice of species richness estimators (number of observed species vs. non-parametric estimators); and (3) choice of diversity index (Simpson vs. Shannon). Two sites are studied: a 28-ha area situated in the Western Ghats of India and a 25-ha area located at Pasoh in Peninsular Malaysia. The results show that: (1) whatever the sampling strategy, estimates of species richness depend on sample size in these very diverse forest ecosystems, which contain many rare species; (2) Simpson's diversity index reaches a stable value at low sample sizes, while Shannon's index is affected more by the addition of rare species with increasing sample size; (3) cluster sampling strategies provide a good compromise between cost and statistical efficiency; (4) 300–400 sample trees grouped in small clusters (10–50 individuals) are enough to obtain unbiased and precise estimates of Simpson's index; (5) the local topography of the Western Ghats has a major influence on forest composition, the steep slopes being richer and more diverse than the ridges and gentle slopes; and (6) stratified cluster sampling is thus an interesting alternative to systematic cluster sampling.
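The contrast in point (2) is easy to reproduce from the two indices' definitions: in the made-up abundance vector below, deleting the 40 singleton species changes Simpson's index far less than Shannon's:

```python
import numpy as np

def simpson_index(counts):
    """Simpson diversity 1 - sum(p_i^2): the probability that two
    randomly drawn individuals belong to different species."""
    p = counts / counts.sum()
    return 1.0 - np.sum(p**2)

def shannon_index(counts):
    """Shannon diversity -sum(p_i * ln p_i)."""
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Hypothetical abundances: a few common species plus many singletons
common = np.array([120, 80, 60])
rare = np.ones(40, dtype=int)              # 40 species seen once each
counts = np.concatenate([common, rare])

# Simpson is dominated by the common species, so dropping the rare
# ones barely moves it; Shannon shifts much more
d_full, d_common = simpson_index(counts), simpson_index(common)
h_full, h_common = shannon_index(counts), shannon_index(common)
```

This is exactly why Simpson's index stabilizes at low sample sizes while Shannon's keeps climbing as rare species accumulate.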

10.
An approximate and practical solution is proposed for the Behrens-Fisher problem. This solution is compared to the solutions considered by Mehta and Srinivasan (1970) and Welch's (1937) approximate t-test in terms of the stability of the size and magnitude of the power. It is shown that the stability of the size of the new test is better than that of Welch's t when at least one of the sample sizes is small. When the sample sizes are moderately large or large the sizes and powers of all the recommended tests are almost the same.
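Welch's (1937) approximate t-test, the benchmark here, is the unequal-variance option of SciPy's `ttest_ind`; a minimal sketch with unequal variances and unequal sample sizes (the exact setting where it differs from the pooled t):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
small = rng.normal(0.0, 1.0, size=8)    # small sample, unit variance
large = rng.normal(0.0, 3.0, size=40)   # larger sample, larger variance

# Welch's approximate t (equal_var=False) vs. the pooled-variance t;
# under unequal variances only the former holds its nominal size
t_w, p_w = stats.ttest_ind(small, large, equal_var=False)
t_p, p_p = stats.ttest_ind(small, large, equal_var=True)
```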

11.
The aim of this paper is to study the properties of the asymptotic variances of the maximum likelihood estimators of the parameters of the exponential mixture model with long-term survivors for randomly censored data. In addition, we study the asymptotic relative efficiency of these estimators versus those which would be obtained with complete follow-up. It is shown that fixed censoring at time T produces higher precision as well as higher asymptotic relative efficiency than those obtainable under uniform and uniform-exponential censoring distributions over (0, T). The results are useful in planning the size and duration of survival experiments with long-term survivors under random censoring schemes.

12.
Estimators of location are considered. Huber (1964) introduced estimators asymptotically minimax on the set T of all regular M-estimators, for a given contamination ε and for the set Q of all regular symmetric alternative data sources. We extend his concept by admitting arbitrary sets T of regular M-estimators and arbitrary sets Q of regular symmetric alternative sources, and also by replacing the singletons {ε} ⊂ (0, 1) by arbitrary subsets E ⊂ (0, 1). The resulting estimator cannot in general be evaluated explicitly, but for finite T it exists and, if E and Q are finite too, it may be chosen by a computer. This extra burden is justified in some cases, since more than 100% relative efficiency gain over all of Huber's Hk is achievable in this manner. Such gains are achieved for a nontrivial family Q by the estimator proposed in Vajda (1984), with a redescending influence curve, which is shown to be asymptotically minimax in a wide sense.

13.
For J dependent groups, let θj, j = 1, …, J, be some measure of location associated with the jth group. A common goal is computing confidence intervals for the pairwise differences, θj − θk, j < k, such that the simultaneous probability coverage is 1 − α. If means are used, it is well known that slight departures from normality (as measured by the Kolmogorov distance) toward a heavy-tailed distribution can substantially inflate the standard error of the sample mean, which in turn can result in relatively low power. Also, when distributions differ in shape, or when sampling from skewed distributions with relatively light tails, practical problems arise when the goal is to obtain confidence intervals with simultaneous probability coverage reasonably close to the nominal level. Extant theoretical and simulation results suggest replacing means with trimmed means. The Tukey-McLaughlin method is easily adapted to the problem at hand via the Bonferroni inequality, but this paper illustrates that practical concerns remain. Here, the main result is that the percentile t bootstrap method, used in conjunction with trimmed means, gives improved probability coverage and substantially better power. A method based on a one-step M-estimator is also considered but found to be less satisfactory.
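A sketch of the percentile-t bootstrap for the difference of two 20% trimmed means, using the Tukey-McLaughlin (winsorized-variance) standard error; the trimming proportion, bootstrap size, and data are illustrative, not taken from the paper, and the two-group independent case is shown rather than the paper's J dependent groups:

```python
import numpy as np
from scipy import stats

def trimmed_se(x, gamma=0.2):
    """Tukey-McLaughlin standard error of the gamma-trimmed mean,
    based on the winsorized sample standard deviation."""
    n = len(x)
    g = int(np.floor(gamma * n))
    xs = np.sort(x)
    xw = np.clip(xs, xs[g], xs[n - g - 1])   # winsorize the tails
    sw = np.std(xw, ddof=1)
    return sw / ((1 - 2 * gamma) * np.sqrt(n))

def boot_t_ci(x, y, gamma=0.2, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-t bootstrap CI for a difference of trimmed means:
    bootstrap the studentized difference, then invert its quantiles."""
    rng = np.random.default_rng(seed)
    d = stats.trim_mean(x, gamma) - stats.trim_mean(y, gamma)
    se = np.hypot(trimmed_se(x, gamma), trimmed_se(y, gamma))
    tstars = []
    for _ in range(n_boot):
        xb = rng.choice(x, size=len(x), replace=True)
        yb = rng.choice(y, size=len(y), replace=True)
        db = stats.trim_mean(xb, gamma) - stats.trim_mean(yb, gamma)
        seb = np.hypot(trimmed_se(xb, gamma), trimmed_se(yb, gamma))
        tstars.append((db - d) / seb)
    t_lo, t_hi = np.quantile(tstars, [alpha / 2, 1 - alpha / 2])
    return d - t_hi * se, d - t_lo * se

rng = np.random.default_rng(4)
ci = boot_t_ci(rng.standard_t(3, 30), 1.0 + rng.standard_t(3, 30))
```

Studentizing inside the bootstrap is what lets the interval adapt to skewness, which is why the percentile-t variant outperforms Bonferroni-adjusted Tukey-McLaughlin intervals in the paper's setting.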

14.
A superpopulation model generates the probabilities of a Bernoulli random variable. The ranks of the involved variables are considered as survey weights. The distribution of each linear rank statistic is derived under the null hypothesis for the two-sample problem and for the case of k ≥ 2 samples when simple random sampling or stratified sampling is used. The growth of a population of insects and the behavior of patients with insomnia are studied using these procedures.

15.
Knowledge of statistical power is essential for sampling design and data evaluation when testing for genetic differentiation. Yet, such information is typically missing in studies of conservation and evolutionary genetics, most likely because of complex interactions between the many factors that affect power. powsim is a 32‐bit Windows/DOS simulation‐based computer program that estimates power (and α error) for chi-square and Fisher's exact tests when evaluating the hypothesis of genetic homogeneity. Optional combinations include the number of samples, sample sizes, number of loci and alleles, allele frequencies, and degree of differentiation (quantified as FST). powsim is available at http://www.zoologi.su.se/~ryman.
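The kind of estimate powsim reports can be sketched for a single biallelic locus: simulate allele counts under chosen frequencies, apply Fisher's exact test, and count rejections. The function and its parameters are a toy stand-in, not powsim's actual interface:

```python
import numpy as np
from scipy import stats

def power_fisher(p1, p2, n1, n2, n_sim=500, alpha=0.05, seed=0):
    """Monte-Carlo rejection rate of Fisher's exact test for
    allele-frequency homogeneity at one biallelic locus, with
    n1 and n2 diploid individuals (2N gene copies) per sample."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        a1 = rng.binomial(2 * n1, p1)          # allele counts
        a2 = rng.binomial(2 * n2, p2)
        table = [[a1, 2 * n1 - a1], [a2, 2 * n2 - a2]]
        if stats.fisher_exact(table)[1] < alpha:
            hits += 1
    return hits / n_sim

size = power_fisher(0.5, 0.5, 50, 50)    # alpha error under homogeneity
power = power_fisher(0.5, 0.7, 50, 50)   # power under differentiation
```

Running both calls shows the two numbers a power analysis needs: the realized α error under the null and the power under a specified degree of differentiation.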

16.
Two common goals when choosing a method for performing all pairwise comparisons of J independent groups are controlling the experimentwise Type I error and maximizing power. Typically groups are compared in terms of their means, but it has been known for over 30 years that the power of these methods becomes highly unsatisfactory under slight departures from normality toward heavy-tailed distributions. An approach to this problem, well known in the statistical literature, is to replace the sample mean with a measure of location having a standard error that is relatively unaffected by heavy tails and outliers. One possibility is to use the trimmed mean. This paper describes three such multiple comparison procedures and compares them to two methods for comparing means.
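The instability of the mean under heavy tails is easy to demonstrate by simulation: drawing repeated samples from a t distribution with 2 degrees of freedom, the sampling variability of the 20% trimmed mean is far smaller than that of the sample mean (sample size, trimming level, and replication count are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, n_rep = 25, 2000

means, tmeans = [], []
for _ in range(n_rep):
    x = rng.standard_t(df=2, size=n)        # heavy-tailed, symmetric about 0
    means.append(np.mean(x))
    tmeans.append(stats.trim_mean(x, 0.2))  # 20% trimmed on each side

# Empirical sampling SDs of the two location estimators
sd_mean, sd_tmean = np.std(means), np.std(tmeans)
```

A smaller sampling SD means a smaller standard error, which is exactly what restores power for the pairwise comparisons the paper studies.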

17.
A finite population consists of kN individuals of N different categories with k individuals each. It is required to estimate the unknown parameter N, the number of different classes in the population. A sequential sampling scheme is considered in which individuals are sampled until a preassigned number of repetitions of already observed categories occur in the sample. Corresponding fixed sample size schemes were considered by Charalambides (1981). The sequential sampling scheme has the advantage of always allowing unbiased estimation of the size parameter N. It is shown that relative to Charalambides' fixed sample size scheme only minor adjustments are required to account for the sequential scheme. In particular, MVU estimators of parametric functions are expressible in terms of the C-numbers introduced by Charalambides.

18.
In health policy and economics studies, the incremental cost-effectiveness ratio (ICER) has long been used to compare the economic consequences relative to the health benefits of therapies. Due to the skewed distributions of the costs and ICERs, much research has been done on how to obtain confidence intervals of ICERs, using either parametric or nonparametric methods, with or without the presence of censoring. In this paper, we will examine and compare the finite sample performance of many approaches via simulation studies. For the special situation when the health effect of the treatment is not statistically significant, we will propose a new bootstrapping approach to improve upon the bootstrap percentile method that is currently available. The most efficient way of constructing confidence intervals will be identified and extended to the censored data case. Finally, a data example from a cardiovascular clinical trial is used to demonstrate the application of these methods.
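A sketch of the existing bootstrap percentile interval for an ICER with uncensored data: resample patients within each arm, recompute the ratio of mean cost difference to mean effect difference, and take empirical quantiles. The data are invented; note that when the effect difference is near zero the ratio becomes unstable, which is exactly the situation the paper's new approach targets:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
# Hypothetical per-patient data: skewed costs, modest effect difference
cost_new = rng.lognormal(mean=8.0, sigma=1.0, size=n)
cost_old = rng.lognormal(mean=7.7, sigma=1.0, size=n)
eff_new = rng.normal(0.70, 0.3, size=n)    # e.g. QALYs
eff_old = rng.normal(0.60, 0.3, size=n)

icers = []
for _ in range(2000):
    i = rng.integers(0, n, n)              # resample within each arm
    j = rng.integers(0, n, n)
    d_cost = cost_new[i].mean() - cost_old[j].mean()
    d_eff = eff_new[i].mean() - eff_old[j].mean()
    icers.append(d_cost / d_eff)

# Bootstrap percentile 95% interval for the ICER
lo, hi = np.quantile(icers, [0.025, 0.975])
```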

19.
To compare two exponential distributions with or without censoring, two different statistics are often used; one is the F test proposed by COX (1953) and the other is based on the efficient score procedure. In this paper, the relationship between these tests is investigated and it is shown that the efficient score test is a large-sample approximation of the F test.
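Cox's F test is simple to state in the uncensored case: with two exponential samples of sizes n1 and n2, the ratio of sample means follows an F distribution with (2n1, 2n2) degrees of freedom under the null of equal means. A sketch:

```python
import numpy as np
from scipy import stats

def cox_f_test(x, y):
    """Cox's (1953) F test for equality of two exponential means,
    uncensored case: mean(x)/mean(y) ~ F(2*n1, 2*n2) under the null,
    since each scaled sample total is chi-square distributed."""
    f = np.mean(x) / np.mean(y)
    n1, n2 = len(x), len(y)
    p = 2 * min(stats.f.cdf(f, 2 * n1, 2 * n2),
                stats.f.sf(f, 2 * n1, 2 * n2))   # two-sided p-value
    return f, p

rng = np.random.default_rng(7)
f_stat, p_val = cox_f_test(rng.exponential(1.0, 25),
                           rng.exponential(1.0, 25))
```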

20.
A robust two-sample Student t-type procedure based on symmetrically censored samples proposed by Tiku (1980, 1982a, b) is studied from the Bayesian point of view. The effect of asymmetric censoring on this procedure is investigated and a good approximation to its posterior distribution in this case is worked out. An illustrative example is also presented.
