Similar Literature
20 similar records retrieved (search time: 31 ms)
1.
Consider k independent exponential populations with location parameters μ1, …, μk and a common scale parameter (standard deviation) θ. Let μ(k) be the largest of the μ's and define a population to be good if its location parameter exceeds μ(k) − δ1. A selection procedure is proposed to select a subset of the k populations which includes the good populations with probability at least P*, a pre-assigned value. Simultaneous confidence intervals that can be derived with the proposed selection procedure are discussed. Moreover, if populations with locations below μ(k) − δ2 (δ2 > δ1) are "bad", a selection procedure is proposed and a sample size is determined so that the probability of omitting a good population or selecting a bad population is at most 1 − P*.
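A rough illustration of how such a subset-selection rule can be checked by simulation. The sketch below is not the paper's procedure: it selects population i whenever its sample minimum (the usual location estimate for a shifted exponential) lies within a hypothetical constant d of the largest sample minimum, and estimates the probability that every good population is retained; the rule, d, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def p_all_good_selected(mus, theta=1.0, n=20, d=0.3, delta1=0.5, reps=20_000):
    """Rule: select population i iff min_i >= max_j min_j - d, where min_i
    is the sample minimum. Returns the estimated P(all good selected)."""
    mus = np.asarray(mus, dtype=float)
    good = mus >= mus.max() - delta1            # the 'good' populations
    hits = 0
    for _ in range(reps):
        x = mus[:, None] + rng.exponential(theta, (mus.size, n))
        mins = x.min(axis=1)                    # location estimates
        selected = mins >= mins.max() - d
        hits += selected[good].all()
    return hits / reps

# three populations; with delta1 = 0.5 the last two are 'good'
print(p_all_good_selected([0.0, 0.8, 1.0]))
```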

2.
Qiu J, Hwang JT. Biometrics, 2007, 63(3): 767–776.
Simultaneous inference for a large number, N, of parameters is a challenge. In some situations, such as microarray experiments, researchers are only interested in making inference for the K parameters corresponding to the K most extreme estimates, so it is important to construct simultaneous confidence intervals for these K parameters. The naïve simultaneous confidence intervals for the K means (applied directly without taking the selection into account) have low coverage probabilities. We take an empirical Bayes approach (equivalently, an approach based on the random-effects model) to construct simultaneous confidence intervals with good coverage probabilities. For N = 10,000 and K = 100, typical for microarray data, our confidence intervals can be 77% shorter than the naïve K-dimensional simultaneous intervals.
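The selection effect described here is easy to reproduce by simulation. The sketch below is a minimal demonstration under an assumed normal random-effects model (θi ~ N(0, τ²), Xi ~ N(θi, 1)); it is not the authors' empirical Bayes construction, only a check that the naïve intervals undercover after selecting the K largest of N estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

def naive_coverage(N=10_000, K=100, tau=1.0, z=1.96, reps=200):
    """Average coverage of the naive X_i +/- z intervals for the K
    parameters with the largest estimates, under theta_i ~ N(0, tau^2)
    and X_i ~ N(theta_i, 1)."""
    cov = []
    for _ in range(reps):
        theta = rng.normal(0.0, tau, N)
        x = theta + rng.normal(0.0, 1.0, N)
        top = np.argsort(x)[-K:]               # the K most extreme estimates
        cov.append((np.abs(x[top] - theta[top]) <= z).mean())
    return float(np.mean(cov))

print(naive_coverage())   # well below 0.95: selected estimates are biased upward
```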

3.
For the model y = β0 + β1x + e (model I of linear regression), the literature gives confidence estimators for an unknown position x0 at which either the expectation of y is given (see Fieller, 1944; Finney, 1952) or realizations of y are given (see Graybill, 1961). These confidence regions with level 1 − α need not be intervals: the occurrence of interval shape is a random event, whose probability equals the power of the t test of the hypothesis H: β1 = 0. The papers mentioned above claim to provide confidence intervals with level 1 − α. But when (1 − α)-confidence regions are restricted to intervals, the true confidence probability is the conditional probability Wc = P(the confidence region covers x0 | the region has interval shape). Here this conditional probability is shown to be less than 1 − α. Evidence on the possible deviations from 1 − α has been obtained by simulation.
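A minimal sketch of this conditional-coverage phenomenon, under assumed parameter values: it builds the Fieller-type quadratic region for the x0 with E(y | x0) = y0, uses the fact that interval shape occurs exactly when the t test rejects β1 = 0, and estimates the coverage conditional on interval shape.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def conditional_coverage(beta1=1.5, sigma=1.0, n=10, alpha=0.05, reps=20_000):
    """Returns P(interval shape) and P(region covers x0 | interval shape)."""
    x = np.linspace(0.0, 1.0, n)
    xbar, Sxx = x.mean(), ((x - x.mean()) ** 2).sum()
    T = stats.t.ppf(1 - alpha / 2, n - 2) ** 2
    x0 = 0.5
    y0 = beta1 * x0                    # true beta0 = 0
    n_int = n_cov = 0
    for _ in range(reps):
        y = beta1 * x + rng.normal(0, sigma, n)
        b1 = ((x - xbar) * (y - y.mean())).sum() / Sxx
        b0 = y.mean() - b1 * xbar
        s2 = ((y - b0 - b1 * x) ** 2).sum() / (n - 2)
        c0 = b0 - y0
        A = b1 ** 2 - T * s2 / Sxx     # A > 0 <=> t test rejects beta1 = 0
        B = 2 * b1 * c0 + 2 * xbar * T * s2 / Sxx
        C = c0 ** 2 - T * s2 * (1 / n + xbar ** 2 / Sxx)
        if A > 0:                      # region {A x^2 + B x + C <= 0} is an interval
            n_int += 1
            h = np.sqrt(B * B - 4 * A * C)
            n_cov += (-B - h) / (2 * A) <= x0 <= (-B + h) / (2 * A)
    return n_int / reps, n_cov / max(n_int, 1)

p_int, cov = conditional_coverage()
print(f"P(interval) = {p_int:.3f}, conditional coverage = {cov:.3f}")
```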

4.
Two new methods are proposed for computing confidence intervals for the difference δ = p1 − p2 between two binomial proportions (p1, p2). Both the Mid-P and Max-P likelihood-weighted intervals are constructed by mapping the tail probabilities from the two-dimensional (p1, p2)-space into a one-dimensional function of δ based on the likelihood weights. This procedure may be regarded as a natural extension of the Clopper–Pearson (1934) interval to the two-sample case, where the weighted tail probability is α/2 at each end on the δ scale. The probability computation is based on the exact distribution rather than a large-sample approximation. Extensive computation was carried out to evaluate the coverage probability and expected width of the likelihood-weighted intervals and of several other methods. The likelihood-weighted intervals compare very favorably with the standard asymptotic interval and with intervals proposed by Hauck and Anderson (1986), Cox and Snell (1989), Santner and Snell (1980), Santner and Yamagami (1993), and Peskun (1993). In particular, the Mid-P likelihood-weighted interval provides a good balance between accurate coverage probability and short interval width in both small and large samples. The Mid-P interval is also comparable to Coe and Tamhane's (1993) interval, which has the best performance in small samples.
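The likelihood-weighted construction itself is involved; the sketch below only illustrates the kind of exact coverage computation such comparisons rest on, applied to the standard asymptotic (Wald) interval by enumerating all (x1, x2) outcomes. Sample sizes and proportions are illustrative.

```python
import numpy as np
from scipy import stats

def exact_wald_coverage(n1, n2, p1, p2, alpha=0.05):
    """Exact coverage probability of the Wald interval for delta = p1 - p2,
    obtained by enumerating all (x1, x2) outcomes and summing binomial pmfs."""
    z = stats.norm.ppf(1 - alpha / 2)
    x1 = np.arange(n1 + 1)[:, None]     # outcomes of sample 1 (column)
    x2 = np.arange(n2 + 1)[None, :]     # outcomes of sample 2 (row)
    ph1, ph2 = x1 / n1, x2 / n2
    d = ph1 - ph2
    se = np.sqrt(ph1 * (1 - ph1) / n1 + ph2 * (1 - ph2) / n2)
    covers = (d - z * se <= p1 - p2) & (p1 - p2 <= d + z * se)
    prob = stats.binom.pmf(x1, n1, p1) * stats.binom.pmf(x2, n2, p2)
    return float((covers * prob).sum())

print(exact_wald_coverage(10, 10, 0.3, 0.1))   # typically well below 0.95
```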

5.
Two classes of tests for the hypothesis of bivariate symmetry are studied. For paired exponential survival times (t1j, t2j), the classes are tests based on t1j − t2j and tests based on log t1j − log t2j. For each class, the sign, signed-rank, t, and likelihood-ratio tests are compared via Pitman's criterion of asymptotic relative efficiency (ARE). For tests based on t1j − t2j, it is found that (1) the efficacy of the paired t depends on the coefficient of variation (CV) of the pair means, and (2) the signed-rank test has the same ARE relative to the sign test as in the usual location problem. For tests based on log t1j − log t2j, the ARE comparisons reduce to the well-known results for the one-sample location problem for samples from a logistic density; hence the signed-rank test is asymptotically efficient. Furthermore, analyses based on log t1j − log t2j are not complicated by the underlying pairing mechanism.

6.
A matched-pair design is often adopted in equivalence or non-inferiority trials to increase the efficiency of binary-outcome treatment comparison. Briefly, subjects are required to participate in two binary-outcome treatments (e.g., old and new treatments via a crossover design) under study. To establish equivalence between the two treatments at the α significance level, a (1 − α)100% confidence interval for the correlated proportion difference is constructed, and it is determined whether the interval lies entirely in (−δ0, δ0) for some clinically acceptable threshold δ0 (e.g., 0.05). Nonetheless, some subjects may not be able to go through both treatments in practice, and incomplete data thus arise. In this article, a hybrid method of confidence interval construction for the correlated rate difference is proposed to establish equivalence between two treatments in matched-pair studies in the presence of incomplete data. The basic idea is to recover variance estimates from readily available confidence limits for single parameters. We compare the hybrid Agresti–Coull, Wilson score, and Jeffreys confidence intervals with the asymptotic Wald and score confidence intervals with respect to their empirical coverage probabilities, expected confidence widths, ratios of left non-coverage probability, and total non-coverage probability. Our simulation studies suggest that the hybrid Agresti–Coull confidence interval is better than the score-test-based and likelihood-ratio-based confidence intervals in small to moderate sample sizes, in the sense that it keeps its true coverage probability close to the pre-assigned level and yields shorter expected confidence widths. A real medical equivalence trial with incomplete data is used to illustrate the proposed methodology.
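A minimal sketch of the variance-recovery idea ("square and add" from single-parameter limits), shown for the simpler case of two independent proportions with Wilson limits; the article's version handles correlated, incomplete matched-pair data, which this sketch omits.

```python
import numpy as np
from scipy import stats

def wilson_ci(x, n, alpha=0.05):
    """Wilson score interval for a single proportion."""
    z = stats.norm.ppf(1 - alpha / 2)
    p = x / n
    mid = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return mid - half, mid + half

def mover_diff_ci(x1, n1, x2, n2, alpha=0.05):
    """Hybrid CI for p1 - p2: recover variance estimates from the
    single-proportion Wilson limits and combine them."""
    p1, p2 = x1 / n1, x2 / n2
    l1, u1 = wilson_ci(x1, n1, alpha)
    l2, u2 = wilson_ci(x2, n2, alpha)
    d = p1 - p2
    lower = d - np.sqrt((p1 - l1)**2 + (u2 - p2)**2)
    upper = d + np.sqrt((u1 - p1)**2 + (p2 - l2)**2)
    return lower, upper

print(mover_diff_ci(45, 50, 38, 50))
```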

7.
The problem of finding exact simultaneous confidence bounds for differences in regression models for k groups via the union-intersection method is considered. The error terms are taken to be iid normal random variables. Under an assumption slightly more general than having identical design matrices for each of the k groups, it is shown that an existing probability point for the multivariate studentized range can be used to find the necessary probability point for pairwise comparisons of regression models. The resulting methods can be used with simple or multiple regression. Under a weaker assumption on the k design matrices, which allows more observations to be taken from the control group than from the k − 1 treatment groups, a method is developed for computing exact probability points for comparing the simple linear regression models of the k − 1 groups to that of the control. Within a class of designs, the optimal design for comparisons with a control takes the square root of k − 1 times as many observations from the control as from each treatment group. The simultaneous confidence bounds for all pairwise differences and for comparisons with a control are much narrower than Spurrier's intervals for all contrasts of k regression lines.

8.
In applied work, distributions are often highly skewed with heavy tails, and this can have disastrous consequences in terms of power when comparing groups based on means. One solution to this problem in the one-sample case is to use the Tukey and McLaughlin (1963) method for trimmed means, while in the two-group case Yuen's (1974) method can be used. Published simulations indicate that they yield accurate confidence intervals when distributions are symmetric. Using a Cornish–Fisher expansion, this paper extends these results by describing general circumstances under which methods based on trimmed means can be expected to give more accurate confidence intervals than those based on means. The results cover both symmetric and asymmetric distributions. Simulations are also used to illustrate the accuracy of confidence intervals using trimmed means versus means.
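For reference, a sketch of the Tukey–McLaughlin interval: the γ-trimmed mean plus or minus a t quantile times the winsorized standard error, with n − 2g − 1 degrees of freedom, where g = ⌊γn⌋. The data-generating choice is illustrative.

```python
import numpy as np
from scipy import stats

def trimmed_mean_ci(x, gamma=0.2, alpha=0.05):
    """Tukey-McLaughlin CI for the population trimmed mean:
    trimmed mean +/- t * winsorized SE, with n - 2g - 1 df."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    g = int(np.floor(gamma * n))
    xt = x[g:n - g].mean()                                   # trimmed mean
    xw = np.concatenate(([x[g]] * g, x[g:n - g], [x[n - g - 1]] * g))
    sw = xw.std(ddof=1)                                      # winsorized SD
    se = sw / ((1 - 2 * gamma) * np.sqrt(n))
    tq = stats.t.ppf(1 - alpha / 2, n - 2 * g - 1)
    return xt - tq * se, xt + tq * se

rng = np.random.default_rng(3)
x = rng.lognormal(0, 1, 30)          # skewed with a heavy right tail
print(trimmed_mean_ci(x))
```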

9.
The present paper reports the results of a Monte Carlo simulation study examining the performance of several approximate confidence intervals for the relative risk ratio (RRR) parameter in an epidemiologic study involving two groups of individuals. The first group consists of n1 individuals, called the experimental group, who are exposed to some carcinogen, say radiation, whose effect on the incidence of some form of cancer, say skin cancer, is being investigated. The second group consists of n2 individuals (the control group) who are not exposed to the carcinogen. Two cases are considered, in which the lifetimes (or times to cancer) in the two groups follow (i) the exponential and (ii) the Weibull distributions; the case of a Rayleigh distribution follows as a particular case. A general random-censorship model is considered in which the lifetimes are censored on the right by random censoring times following (i) the exponential and (ii) the Weibull distributions. The RRR parameter is defined as the ratio of the hazard rates in the two distributions of the times to cancer. Approximate confidence intervals are constructed for the RRR parameter using its maximum likelihood estimator (MLE) and several other methods, including a method due to Fieller. Sprott's (1973) and Cox's (1953) suggestions, as well as the Box–Cox (1964) transformation, are also utilized to construct approximate confidence intervals. The performance of these confidence intervals in small samples is investigated by means of Monte Carlo simulations based on 500 random samples. Our simulation study indicates that many of these confidence intervals perform quite well in samples of size 10 and 15, in terms of coverage probability and expected interval length.
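A hedged sketch of one of the simplest variants: the Wald interval for log RRR based on the exponential MLE (events divided by total time at risk), evaluated by Monte Carlo under exponential censoring. All rates, sample sizes, and the zero-event guard are assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def rrr_coverage(lam1=1.0, lam2=0.5, cens_rate=0.3, n=15,
                 alpha=0.05, reps=10_000):
    """Coverage of the Wald interval for log RRR = log(lam1/lam2);
    exponential times to cancer, right-censored by exponential times."""
    z = stats.norm.ppf(1 - alpha / 2)
    true = np.log(lam1 / lam2)
    hits = 0
    for _ in range(reps):
        logs, vars_ = [], []
        for lam in (lam1, lam2):
            t = rng.exponential(1 / lam, n)         # time to cancer
            c = rng.exponential(1 / cens_rate, n)   # censoring time
            d = max(int((t <= c).sum()), 1)         # events (guard: at least 1)
            logs.append(np.log(d / np.minimum(t, c).sum()))
            vars_.append(1.0 / d)                   # var(log hazard MLE) ~ 1/d
        est = logs[0] - logs[1]
        half = z * np.sqrt(vars_[0] + vars_[1])
        hits += (est - half <= true <= est + half)
    return hits / reps

print(rrr_coverage())
```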

10.
An adaptive R-estimator θA and an adaptive trimmed mean MAT are proposed. The performance of these and a number of other robust estimators is studied on real data sets drawn from the astronomical, behavioural, biomedical, chemical, engineering, and physical sciences. For sets that can be assumed to come from symmetric distributions, the best performer is θA; the next best performers are the Hodges–Lehmann estimator, Bisquare (7.5), and Huber (1.5), in that order. MAT works well with all kinds of sets, symmetric or skewed. Extensions of these results to ANOVA and regression models are mentioned.

11.
Directly standardized rates continue to be an integral tool for presenting rates of diseases that are highly age-dependent, such as cancer. Statistically, these rates are modeled as a weighted sum of Poisson random variables. This is a difficult statistical problem, because there are k observed Poisson variables and k unknown means. The gamma confidence interval has been shown through simulations to have at least nominal coverage in all simulated scenarios, but it can be overly conservative. Previous modifications to that method have coverage closer to nominal on average, but they do not achieve the nominal coverage bound in all situations. Further, those modifications are not central intervals, and the upper coverage error rate can be substantially more than half the nominal error. Here we apply a mid-p modification to the gamma confidence interval. Typical mid-p methods forsake guaranteed coverage to get coverage that is sometimes higher and sometimes lower than the nominal rate, depending on the parameter values. The mid-p gamma interval does not have guaranteed coverage in all situations; however, in the (not rare) situations where the gamma method is overly conservative, the mid-p gamma interval often still has at least nominal coverage. The mid-p gamma interval is especially appropriate when one wants a central interval, since simulations show that in many situations both the upper and lower coverage error rates are on average less than or equal to half the nominal error rate.
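For orientation, a sketch of the underlying gamma (Fay–Feuer-type) interval for a directly standardized rate; the mid-p modification studied in the article is not reproduced here, and the strata, counts, and weights are invented.

```python
import numpy as np
from scipy import stats

def gamma_ci(counts, pyears, weights, alpha=0.05):
    """Fay-Feuer gamma CI for a directly standardized rate
    sum_i w_i * y_i / n_i, with the weights normalized to sum to 1."""
    y = np.asarray(counts, dtype=float)
    n = np.asarray(pyears, dtype=float)
    w = np.asarray(weights, dtype=float)
    wt = (w / w.sum()) / n            # per-event weights
    x = np.sum(wt * y)                # standardized rate
    v = np.sum(wt**2 * y)             # its estimated variance
    wmax = wt.max()
    lower = stats.gamma.ppf(alpha / 2, x**2 / v, scale=v / x) if x > 0 else 0.0
    upper = stats.gamma.ppf(1 - alpha / 2,
                            (x + wmax)**2 / (v + wmax**2),
                            scale=(v + wmax**2) / (x + wmax))
    return x, lower, upper

# toy data: three age strata
print(gamma_ci(counts=[5, 12, 8], pyears=[1000, 800, 400],
               weights=[0.4, 0.35, 0.25]))
```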

12.
13.
Multivariate Polya and inverse Polya distributions of order k are derived by means of generalized urn models and by compounding the type II multinomial and multivariate negative binomial distributions of order k of Philippou, Antzoulakos and Tripsiannis (1990, 1988), respectively, with the Dirichlet distribution. It is noted that the above two distributions include as special cases a multivariate hypergeometric distribution of order k, a negative one, an inverse one, a negative inverse one, and a discrete uniform of the same order. The probability generating functions, means, variances, and covariances of the new distributions are obtained, and five asymptotic results are established relating them to the above-mentioned multinomial and multivariate negative binomial distributions of order k, and to the type II negative binomial and the type I multivariate Poisson distributions of order k of Philippou (1983) and Philippou, Antzoulakos and Tripsiannis (1988), respectively. Potential applications are also indicated. The present paper extends to the multivariate case the work of Philippou, Tripsiannis and Antzoulakos (1989) on Polya and inverse Polya distributions of order k.

14.
When the number of tumors is small, a significance level for the Cox–Mantel (log-rank) test Z is often computed using a discrete approximation to the permutation distribution. For j = 0, …, J let Nj(t) be the number of animals in group j alive and tumor-free at the start of time t. Make a 2 × (1 + J) table for each time t of the number of animals Rj(t) with newly palpated tumor out of the total Nj(t) at risk. There are a total of, say, K tables, one for each distinct time t with an observed death or newly palpated tumor. The usual discrete approximation to the permutation distribution of Z is defined by taking the tables to be independent with fixed margins Nj(t) and ΣRj(t) for all t. However, the Nj(t) are random variables under the actual permutation distribution of Z, resulting in dependence among the tables. Calculations for the exact permutation distribution are explained, and examples are given where the exact significance level differs substantially from the usual discrete approximation. The discrepancy arises primarily because permutations with different Z-scores under the exact distribution can be equal under the discrete approximation, inflating the approximate P-value.
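A brute-force sketch of the exact permutation computation for two groups: enumerate every assignment of group labels, recompute an (unstandardized) log-rank score each time, and count assignments at least as extreme as the observed one. The toy data are invented, and enumeration is feasible only for small samples.

```python
import numpy as np
from itertools import combinations

def logrank_score(times, events, group):
    """Unstandardized log-rank score: sum over event times of O1 - E1."""
    u = 0.0
    for t in np.unique(times[events == 1]):
        at_risk = times >= t
        dead = (times == t) & (events == 1)
        n, n1 = at_risk.sum(), (at_risk & group).sum()
        u += (dead & group).sum() - dead.sum() * n1 / n
    return u

def exact_perm_pvalue(times, events, group):
    """Exact two-sided p-value: enumerate every way of assigning the
    group-1 labels to the n animals."""
    times, events, group = map(np.asarray, (times, events, group))
    n, n1 = times.size, int(group.sum())
    obs = abs(logrank_score(times, events, group))
    extreme = total = 0
    for idx in combinations(range(n), n1):
        g = np.zeros(n, dtype=bool)
        g[list(idx)] = True
        total += 1
        extreme += abs(logrank_score(times, events, g)) >= obs - 1e-12
    return extreme / total

times  = np.array([3, 5, 5, 8, 9, 12, 12, 15])
events = np.array([1, 1, 0, 1, 1, 1, 0, 1])
group  = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)
print(exact_perm_pvalue(times, events, group))
```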

15.
Let {Q1, …, Qk} be the potencies of k substances relative to a standard in a multiple dilution assay. Joint confidence bounds for these are given with confidence coefficient at least 1 − α. These bounds are easily interpreted; they appeal to available tables; they improve on Scheffé's bounds; they are based on applicable probability inequalities together with extensions of Fieller's theorem; and they are genuinely nonparametric. The procedures are illustrated using data from parallel-line and slope-ratio assays.

16.
Kinetics of biopolymerization on nucleic acid templates
The kinetics of biopolymerization on nucleic acid templates is discussed. The model introduced allows for the simultaneous synthesis of several chains of a given type on a common template, e.g., the polyribosome situation. Each growth center [growing chain end plus enzyme(s)] moves one template site at a time but blocks L adjacent sites. Solutions are found for the probability nj(t) that a template has a growing center occupying the sites j − L + 1, …, j at time t. Two special sets of solutions are considered: the uniform-density solutions, for which nj(t) = n, and the more general steady-state solutions, for which dnj(t)/dt = 0. In the uniform-density case, there is an upper bound to the range of rates of polymerization that can occur. Corresponding to this maximum rate there is one uniform solution; for a polymerization rate less than this maximum, there are two uniform solutions that give the same rate. In the steady-state case, only L = 1 is discussed. For a steady-state polymerization rate less than the maximum uniform-density rate, the steady-state solutions consist of either one or two regions of nearly uniform density, with the density value(s) assumed in the uniform region(s) being either or both of the uniform-density solutions corresponding to that polymerization rate. For a steady-state polymerization rate equal to or slightly larger than the maximum uniform-density rate, the steady-state solutions are nearly uniform and close to the single uniform-density solution for the maximum rate. The boundary conditions (rate of initiation and rate of release of completed chains from the template) govern the choice among the possible solutions, i.e., they determine the region(s) of uniformity and the value(s) assumed in the uniform region(s).
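This model is what is now called an exclusion process; below is a minimal discrete-time Monte Carlo sketch with random sequential updates, not the paper's analytical treatment. Template length, footprint L, and the initiation/hop/release probabilities a, p, b are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def chain_flux(M=200, L=10, a=0.3, p=1.0, b=1.0, steps=200_000):
    """Growth centers of footprint L on an M-site template; pos[i] is the
    leftmost site occupied by center i (it blocks sites pos..pos+L-1).
    a, p, b: initiation, internal hop, release probabilities per attempt.
    Returns completed chains per step (the polymerization rate)."""
    pos = []                                   # centers ordered along template
    done = 0
    for _ in range(steps):
        k = rng.integers(0, len(pos) + 1)      # pick a random possible move
        if k == len(pos):                      # initiation attempt at sites 1..L
            if (not pos or pos[0] > L) and rng.random() < a:
                pos.insert(0, 1)
        elif pos[k] == M - L + 1:              # center at the template end
            if rng.random() < b:               # release the completed chain
                pos.pop(k)
                done += 1
        else:                                  # hop one site, unless blocked
            blocked = k + 1 < len(pos) and pos[k + 1] <= pos[k] + L
            if not blocked and rng.random() < p:
                pos[k] += 1
    return done / steps

print(chain_flux())
```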

17.
Canran Liu. Journal of Vegetation Science, 2001, 12(3): 411–416.
The behaviour of five statistics (extensions of Pielou's, Clark and Evans', Pollard's, Johnson and Zimmer's, and Eberhardt's statistics, denoted Pi, Ce, Po, Jz, and Eb respectively) that involve the distance from a random point to its jth nearest neighbour was examined against several alternative patterns (lattice-based regular, inhomogeneous random, and Poisson cluster patterns) through Monte Carlo simulation to test their power to detect patterns. The powers of all five statistics increase as the distance order j increases against the inhomogeneous random pattern; they decrease for Pi and Ce and increase for Po, Jz, and Eb against the regular and Poisson cluster patterns. Po, Jz, and Eb can reach high power with the third- or higher-order distances in most cases. However, Po is recommended because no extra information is needed, it can reach high power with the second or third distance even when the sample size is not large, and the test can be performed with the approximate χ2 distribution associated with it. When a regular pattern is expected, Jz is recommended because it is more sensitive to lattice-based regular patterns than Po and Eb, especially for the first distance. However, simulation tests should be used because the convergence of Jz to the normal distribution is very slow.

18.
Consider the one-way ANOVA problem of comparing the means m1, m2, …, mc of c distributions F1(x) = F(x − m1), …, Fc(x) = F(x − mc). Solutions are available based on (i) normal-theory procedures, (ii) linear rank statistics, and (iii) M-estimators. The above model presupposes that F1, F2, …, Fc have equal variances (homoscedasticity). However, practising statisticians contend that homoscedasticity is often violated in practice. Hence a more realistic problem to consider is F1(x) = F((x − m1)/σ1), …, Fc(x) = F((x − mc)/σc), where F is symmetric about the origin and σ1, …, σc are unknown and possibly unequal (heteroscedasticity). Again we have to compare m1, m2, …, mc. At present, nonparametric tests of the equality of m1, m2, …, mc are available; however, simultaneous tests for paired comparisons and contrasts do not seem to be available. This paper begins by proposing a solution applicable to both the homoscedastic and the heteroscedastic situations, assuming F to be symmetric. Then the assumptions of symmetry and of identical shapes of F1, …, Fc are progressively relaxed, and solutions are proposed for these cases as well. The procedures are all based on either the 15% trimmed means or the sample medians, whose quantiles are estimated by means of the bootstrap. Monte Carlo studies show that these procedures tend to be superior to the Wilcoxon procedure and Dunnett's normal-theory procedure. A rigorous justification of the bootstrap is also presented. The methodology is illustrated by a comparison of the mean effects of cocaine administration in pregnant female Sprague-Dawley rats, where skewness and heteroscedasticity are known to be present.
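A sketch of the basic bootstrap ingredient: a percentile-bootstrap interval for the difference of trimmed means, resampling each group separately so unequal variances are accommodated. This is a single pairwise comparison, not the paper's simultaneous procedure, and the simulated data and trimming fraction are illustrative.

```python
import numpy as np
from scipy.stats import trim_mean

rng = np.random.default_rng(6)

def boot_trimmed_diff(x, y, gamma=0.2, alpha=0.05, B=4000):
    """Percentile bootstrap CI for the difference of gamma-trimmed means.
    Each group is resampled separately, so heteroscedasticity is allowed."""
    x, y = np.asarray(x), np.asarray(y)
    diffs = np.empty(B)
    for b in range(B):
        xb = rng.choice(x, x.size, replace=True)
        yb = rng.choice(y, y.size, replace=True)
        diffs[b] = trim_mean(xb, gamma) - trim_mean(yb, gamma)
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    return lo, hi

x = rng.normal(0.0, 1.0, 25)       # group 1
y = rng.normal(0.8, 3.0, 25)       # group 2: shifted, much larger spread
print(boot_trimmed_diff(x, y))
```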

19.
We have analyzed the intracellular and cell-to-cell diffusion kinetics of fluorescent tracers in the Chironomus salivary gland. We use this analysis to investigate whether membrane-potential-induced changes in junctional permeability are accompanied by changes in cell-to-cell channel selectivity. Tracers of different size and fluorescence wavelength were coinjected into a cell, and the fluorescence was monitored in this cell and an adjacent one. Rate constants, kj, for cell-to-cell diffusion were derived by compartment-model analysis, taking into account (i) cell-to-cell diffusion of the tracers; (ii) their loss from the cells; (iii) their binding (sequestration) to cytoplasmic components; and (iv) their mobility relative to cytoplasm, as determined separately on isolated cells. In cell pairs, we compared a tracer's kj with the electrical cell-to-cell conductance, gj. At cell membrane resting potential, the kj's ranged from 3.8–9.2 × 10−3 sec−1 for the small carboxyfluorescein (mol wt 376) to about 0.4 × 10−3 sec−1 for a large fluorescein-labeled sugar (mol wt 2327). Cell membrane depolarization reversibly reduced gj and the kj's of a large and a small tracer, all in the same proportion. This suggests that membrane potential controls the number of open channels rather than their effective pore diameter or selectivity. From the inverse relation between tracer mean diameter and relative kj we calculate an effective, permeation-limiting diameter of approximately 29 Å for the insect cell-to-cell channel. Intracellular diffusion was faster than cell-to-cell diffusion, and it was not solely dependent on tracer size. Rate constants for intracellular sequestration and for loss through nonjunctional membrane were large enough to become rate-limiting for cell-to-cell tracer diffusion at low junctional permeabilities.
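A minimal sketch of a two-cell compartment model of the kind described: the injected cell exchanges free tracer with its neighbour at rate kj, while both cells lose tracer to sequestration (ks) and through nonjunctional membrane (kl). The rate constants and the symmetric two-cell reduction are assumptions for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def two_cell_model(t, c, kj, kl, ks):
    """c = [c1, c2]: free tracer in the injected cell and its neighbour.
    kj: cell-to-cell rate constant; kl: loss through nonjunctional
    membrane; ks: irreversible sequestration to cytoplasmic components."""
    c1, c2 = c
    return [-(kj + kl + ks) * c1 + kj * c2,
            kj * c1 - (kj + kl + ks) * c2]

kj, kl, ks = 4e-3, 5e-4, 1e-3              # per second, illustrative only
t_eval = np.linspace(0.0, 600.0, 121)      # first 10 minutes after injection
sol = solve_ivp(two_cell_model, (0.0, 600.0), [1.0, 0.0],
                t_eval=t_eval, args=(kj, kl, ks))
print(sol.y[1, -1])                        # tracer fraction in the neighbour cell
```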

20.
In this article, Bechhofer's indifference-zone formulation for selecting the t populations with the t highest means is considered for a set of non-normal distributions. Selection rules based on the sample mean; the 10% and 20% trimmed means; two estimators proposed by Tiku (1981), which weight the smallest and largest accepted sample values more heavily; the sample median and a linear combination of quantile estimators; two adaptive procedures; and a rank-sum procedure are investigated in a large-scale simulation experiment with respect to their robustness against deviations from an assumed distribution. Robustness is understood as a small percentage difference βA − β between the actual probability of incorrect selection, βA, and the nominal β value. We obtained relatively good robustness for the classical sample-mean selection rule, together with useful guidance for the employment of other selection rules in an area of practical importance.
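A sketch of the kind of robustness check described, for the simplest case t = 1 (select the single best population) using the sample-mean rule: estimate the actual probability of incorrect selection βA under a heavy-tailed alternative. Configurations and distributions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def prob_incorrect_selection(deltas=(0.0, 0.0, 0.6), n=20,
                             dist="t3", reps=50_000):
    """Select the one population with the highest sample mean and
    estimate the actual probability of incorrect selection beta_A."""
    deltas = np.asarray(deltas, dtype=float)
    best = int(np.argmax(deltas))
    wrong = 0
    for _ in range(reps):
        if dist == "t3":
            x = rng.standard_t(3, (deltas.size, n))      # heavy-tailed errors
        else:
            x = rng.normal(0.0, 1.0, (deltas.size, n))   # nominal model
        x += deltas[:, None]
        wrong += int(np.argmax(x.mean(axis=1)) != best)
    return wrong / reps

print(prob_incorrect_selection())                 # beta_A under t(3) errors
print(prob_incorrect_selection(dist="normal"))    # benchmark under normality
```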

