Similar Literature (20 records found)
1.
We consider the determination of sample sizes for comparing k treatments against a control, both by Dunnett's test (1955, 1964) and by the multiple t-test. Power in multiple comparisons can be defined in several ways; see Hochberg and Tamhane (1987). We derive formulas for the per-pair power, the any-pair power and the all-pairs power for both one- and two-sided comparisons, and provide tables from which sample sizes can be determined for preassigned values of the power.
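
A minimal Python sketch of the per-pair power calculation described above, using a large-sample normal approximation with the equicorrelated (rho = 0.5) structure of one-sided Dunnett comparisons; the paper's exact tables are based on the multivariate t distribution, and all parameter values below are hypothetical:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def dunnett_critical(k, alpha=0.025):
    """One-sided critical value c with P(max_i Z_i <= c) = 1 - alpha,
    where Z ~ N(0, R) and R has off-diagonal 0.5 (equal sample sizes)."""
    R = np.full((k, k), 0.5)
    np.fill_diagonal(R, 1.0)
    mvn = multivariate_normal(mean=np.zeros(k), cov=R)
    lo, hi = 0.0, 6.0
    for _ in range(40):  # bisection on the joint CDF
        c = 0.5 * (lo + hi)
        if mvn.cdf(np.full(k, c)) < 1 - alpha:
            lo = c
        else:
            hi = c
    return c

def per_pair_power(k, n, delta, sigma=1.0, alpha=0.025):
    """P(reject H_i) for one comparison with true shift delta,
    n subjects per arm and known sigma (normal approximation)."""
    c = dunnett_critical(k, alpha)
    ncp = delta / (sigma * np.sqrt(2.0 / n))
    return 1.0 - norm.cdf(c - ncp)

print(per_pair_power(k=3, n=30, delta=0.8))  # hypothetical design
```

The any-pair and all-pairs powers require the joint distribution under the alternative; under the same approximation they can be evaluated with the same multivariate normal CDF.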

2.
Kwong KS, Cheung SH, Chan WS. Biometrics 2004, 60(2): 491-498.
In clinical studies, multiple superiority/equivalence testing procedures can be applied to classify a new treatment as superior, equivalent (same therapeutic effect), or inferior to each of a set of standard treatments. Previous stepwise approaches (Dunnett and Tamhane, 1997, Statistics in Medicine 16, 2489-2506; Kwong, 2001, Journal of Statistical Planning and Inference 97, 359-366) are appropriate only for balanced designs. Unfortunately, the construction of similar tests for unbalanced designs is far more complex, with two major difficulties: (i) the ordering of test statistics for superiority may not be the same as the ordering of test statistics for equivalence; and (ii) the correlation structure of the test statistics is not equi-correlated but product-correlated. In this article, we develop a two-stage testing procedure for unbalanced designs, which are very common in clinical experiments. The procedure combines step-up and single-step testing procedures, and the familywise error rate is proved to be controlled at a designated level. Furthermore, a simulation study is conducted to compare the average power of the proposed procedure to that of the single-step procedure, and a clinical example illustrates the application of the new procedure.

3.
In a clinical trial with an active treatment and a placebo, the situation may occur that two (or even more) primary endpoints are necessary to describe the active treatment's benefit. Our focus is the more specific situation with two primary endpoints in which superiority in one of them would suffice, given that non-inferiority is observed in the other. Several proposals exist in the literature for this or similar problems, but they prove insufficient or inadequate on closer inspection (e.g. Bloch et al. (2001, 2006) or Tamhane and Logan (2002, 2004)). For example, we were unable to find a good reason why a bootstrap p-value for superiority should depend on the initially selected non-inferiority margins or on the initially selected Type I error alpha. We propose a hierarchical three-step procedure: non-inferiority in both variables must be proven in the first step; superiority has to be shown by a bivariate test (e.g. Holm (1979), O'Brien (1984), Hochberg (1988), a bootstrap (Wang (1998)), or Läuter (1996)) in the second step; and superiority in at least one variable has to be verified in the third step by a corresponding univariate test. All statistical tests are performed at the same one-sided significance level alpha. Among the above bivariate superiority tests we prefer Läuter's SS test and the Holm procedure, because these have been proven to control the Type I error strictly, irrespective of the correlation structure among the primary variables and the sample size. A simulation study reveals that the power of the bivariate test depends to a considerable degree on the correlation and on the magnitude of the expected effects of the two primary endpoints. The recommendation of which test to choose therefore depends on knowledge of the possible correlation between the two primary endpoints. In general, Läuter's SS procedure in step 2 shows the best overall properties, whereas Holm's procedure shows an advantage if both a positive correlation between the two variables and a considerable difference between their standardized effect sizes can be expected.
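
The decision logic of the hierarchical three-step procedure can be summarized in a few lines. The sketch below assumes the one-sided p-values have already been computed (the step-2 bivariate test could be Läuter's SS test or the Holm procedure, as recommended above); all inputs are hypothetical:

```python
def three_step(p_noninf, p_bivariate, p_sup, alpha=0.025):
    """p_noninf: non-inferiority p-values for both endpoints (step 1);
    p_bivariate: p-value of the bivariate superiority test (step 2);
    p_sup: univariate superiority p-values (step 3).
    All tests are performed at the same one-sided level alpha."""
    if max(p_noninf) > alpha:                  # step 1: both must pass
        return "no claim: non-inferiority not shown for both endpoints"
    if p_bivariate > alpha:                    # step 2: bivariate test
        return "claim: non-inferiority in both endpoints"
    winners = [i + 1 for i, p in enumerate(p_sup) if p <= alpha]
    if winners:                                # step 3: univariate tests
        return ("claim: non-inferiority in both, "
                f"superiority in endpoint(s) {winners}")
    return "claim: non-inferiority in both endpoints"

print(three_step(p_noninf=[0.003, 0.012], p_bivariate=0.009,
                 p_sup=[0.011, 0.430]))
```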

4.
Statistical analysis of in vivo rodent micronucleus assay
Kim BS, Cho M, Kim HJ. Mutation Research 2000, 469(2): 233-241.
The in vivo rodent micronucleus assay (MNC) is widely used as a cytogenetic assay to detect the clastogenic activity of a chemical in vivo. MNC is one of the three tests in the battery recommended by the fourth International Conference on Harmonization (ICH4) Genotoxicity Guidelines, and as such it has been accepted by many regulatory authorities. However, the determination of a positive result in a genotoxicity test, including MNC, has been an issue of debate among toxicologists and biometricians. Here we compare several statistical procedures that have been suggested for the analysis of MNC data and indicate which one is the most powerful. The standard MNC protocol has at least three dose levels plus a control and uses at least four animals per group; for each animal, 2000 polychromatic erythrocytes (PCE) are counted. Two statistical procedures can be employed, either alone or jointly, for the analysis of the MNC dose-response curve: the Cochran-Armitage (C-A) trend test and the Dunnett-type test. When performing Dunnett-type tests, toxicologists often use the historical negative control rate as an estimate of the concurrent negative control rate. Some toxicologists emphasize the reproducibility of assay results, rather than the dose-response relationship, as the important criterion [J. Ashby, H. Tinwell, Mutat. Res. 327 (1995) 49-55; for the rebuttal see M. Hayashi, T. Sofuni, Mutat. Res. 331 (1995) 173-174]. The following three procedures are currently employed in toxicology labs for the evaluation of an MNC result: the assay response is deemed positive if it is detected by (i) the C-A trend test alone, (ii) both the C-A trend test and the Dunnett-type test, or (iii) either the C-A trend test or the Dunnett-type test. Using Monte Carlo simulation, we first find, for each procedure, the test sizes that yield an experiment-wise Type I error rate of 0.05, and we show that procedure (ii) is the most powerful against alternatives of monotone increase. Procedure (ii), which originated from Hayashi's three-step procedure, was coded in C and termed 'MNC'; the MNC software program is available in the public domain via FTP.
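
The C-A trend test mentioned above is straightforward to compute from per-group micronucleus counts. A sketch follows, assuming counts pooled per dose group (2000 PCE per animal, four animals per group) and hypothetical data and dose scores:

```python
import numpy as np
from scipy.stats import norm

def cochran_armitage(events, totals, scores):
    """One-sided Cochran-Armitage test for an increasing trend in
    proportions; events/totals/scores are given per dose group."""
    events, totals, scores = map(np.asarray, (events, totals, scores))
    p = events.sum() / totals.sum()              # pooled MN-PCE rate
    t = np.sum(scores * (events - totals * p))
    sbar = np.sum(totals * scores) / totals.sum()
    var = p * (1 - p) * np.sum(totals * (scores - sbar) ** 2)
    z = t / np.sqrt(var)
    return z, 1.0 - norm.cdf(z)

# hypothetical counts: control plus three dose levels, 4 animals x 2000 PCE
z, pval = cochran_armitage(events=[6, 9, 14, 22],
                           totals=[8000] * 4,
                           scores=[0, 1, 2, 4])
print(z, pval)
```

This binomial-variance form ignores between-animal overdispersion, which a full analysis of MNC data would need to address.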

5.
In the two-step version (Dmitrienko, Tamhane, Wang and Chen, 2006) of the Bonferroni parallel-gatekeeping multiple-testing procedure (MTP): (a) a family F1 of null hypotheses H is used as a gatekeeper for another family F2, in that no H in F2 can be rejected unless at least one H is rejected in F1; (b) a Bonferroni MTP is used for F1 at local multiple-level alpha in the first step; and (c) Holm's (1979) step-down MTP is used in the second step for F2 at a local multiple level that depends on the rejections made in the first step. It is shown in this article that this two-step procedure can be generalized in that any MTP with multiple-level control and available multiplicity-adjusted p-values can be used instead of Holm's MTP in the second step. A further generalization, related to what Dmitrienko, Molenberghs, Chuang-Stein and Offen (2005) called modified Bonferroni parallel gatekeeping, is also given: in case all hypotheses in F2 are rejected, additional rejections in F1 can be made in a third step at local multiple-level alpha through any MTP that is more powerful than the initial Bonferroni MTP, e.g. Holm's MTP. The proofs that these two generalized Bonferroni parallel-gatekeeping MTPs have multiple-level alpha are short and direct, without closed-testing arguments. Multiplicity-adjusted p-values can easily be calculated for these MTPs, and the extensions to several successive gatekeeper families are straightforward. An illustration is given.
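
A compact sketch of the original two-step procedure (Bonferroni on F1, then Holm on F2 at the carried-forward level); the p-values are hypothetical, and the generalization in the article simply replaces the Holm step with any MTP that has multiple-level control:

```python
def bonferroni_gatekeeping(p1, p2, alpha=0.05):
    """Step 1: Bonferroni on gatekeeper family F1 at level alpha.
    Step 2: Holm step-down on F2 at the local level carried forward,
    alpha * (#rejections in F1) / |F1|."""
    k1 = len(p1)
    rej1 = [p <= alpha / k1 for p in p1]
    local = alpha * sum(rej1) / k1          # 0 if nothing passes the gate
    rej2 = [False] * len(p2)
    order = sorted(range(len(p2)), key=lambda i: p2[i])
    for step, i in enumerate(order):
        if local > 0 and p2[i] <= local / (len(p2) - step):
            rej2[i] = True
        else:
            break
    return rej1, rej2

print(bonferroni_gatekeeping(p1=[0.012, 0.31], p2=[0.020, 0.018]))
```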

6.
Investigating differences between the means of more than two groups or experimental conditions is a routine research question in biology. To assess such differences statistically, multiple comparison procedures are applied. The most prominent procedures of this type, the Dunnett and Tukey-Kramer tests, control the probability of reporting at least one false positive result when the data are normally distributed and the sample sizes and variances do not differ between groups. All three assumptions are unrealistic in biological research, and any violation leads to an increased number of reported false positive results. Based on a general statistical framework for simultaneous inference and robust covariance estimators, we propose a new multiple comparison procedure for assessing multiple means. In contrast to the Dunnett or Tukey-Kramer tests, no assumptions regarding the distribution, sample sizes or variance homogeneity are necessary. The performance of the new procedure is assessed by means of its familywise error rate and power under different distributions. The practical merits are demonstrated by a reanalysis of fatty acid phenotypes of the bacterium Bacillus simplex from the “Evolution Canyons” I and II in Israel. The simulation results show that even under severely varying variances, the procedure controls the number of false positive findings very well. Thus, the presented procedure works well under biologically realistic scenarios of unbalanced group sizes, non-normality and heteroscedasticity.

7.
Cheung and Holland (1992) extended Dunnett's procedure for comparing all active treatments with a control simultaneously within each of r groups, while maintaining the Type I error rate at some designated level α and allowing different sample sizes for each of the group-treatment categories. This paper shows that exact percentage points can easily be calculated with currently available statistical software (SAS). The procedure is compared to resampling techniques and to a Bonferroni-corrected Dunnett-within-group procedure by means of a simulation study.

8.
Drug development is traditionally divided into three phases with different aims and objectives. Recently, so-called adaptive seamless designs that allow the objectives of different development phases to be combined into a single trial have gained much interest. Adaptive trials combining the treatment selection typical of Phase II with the confirmation of efficacy of Phase III are referred to as adaptive seamless Phase II/III designs and are considered in this paper. We compared four methods for adaptive treatment selection: the classical Dunnett test, an adaptive version of the Dunnett test based on the conditional error approach, the combination test approach, and an approach within the classical group-sequential framework; the latter two approaches have only recently been published. In a simulation study we found that no one method dominates the others in terms of power, apart from the adaptive Dunnett test, which dominates the classical Dunnett test by construction. Furthermore, scenarios under which one approach outperforms the others are described.

9.
Computer simulation techniques were used to investigate the Type I and Type II error rates of one parametric (Dunnett) and two nonparametric multiple comparison procedures for comparing treatments with a control under non-normality and variance homogeneity. Dunnett's procedure was found to be quite robust with respect to violations of the normality assumption. Power comparisons show that for small sample sizes Dunnett's procedure is superior to the nonparametric procedures even in non-normal cases, but for larger sample sizes the multiple analogues of the Wilcoxon and Kruskal-Wallis rank statistics are superior to Dunnett's procedure in all considered non-normal cases. Further investigations under non-normality and variance heterogeneity show similar robustness with respect to the Type I error risk, and power comparisons yield results similar to those in the equal-variance case.
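
A simulation of the kind described above can be sketched in a few lines: calibrate a Dunnett-type max-|t| critical value under normality, then measure the attained familywise error under a skewed null distribution (here exponential; all settings are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, reps = 3, 10, 20000        # k treatments vs control, n per group

def max_abs_t(sample):           # sample: (k+1, n) with row 0 = control
    pooled = sample.var(axis=1, ddof=1).mean()
    se = np.sqrt(2.0 * pooled / n)
    return np.max(np.abs((sample[1:].mean(axis=1) - sample[0].mean()) / se))

# critical value calibrated under the normal null
null_norm = np.array([max_abs_t(rng.normal(size=(k + 1, n)))
                      for _ in range(reps)])
c = np.quantile(null_norm, 0.95)

# attained FWER when the data are exponential (all group means equal)
null_exp = np.array([max_abs_t(rng.exponential(size=(k + 1, n)))
                     for _ in range(reps)])
print("attained FWER under exponential null:", np.mean(null_exp > c))
```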

10.
We consider the problem of drawing superiority inferences on individual endpoints following non-inferiority testing. Röhmel et al. (2006) pointed this out as an important problem not addressed by previous procedures, which only tested for global superiority. Röhmel et al. objected to incorporating the non-inferiority tests in the assessment of the global superiority test by exploiting the relationship between the two, since the results of the latter test then depend on the non-inferiority margins specified for the former. We argue that this is justified, besides the fact that it enhances the power of the global superiority test. We provide a closed testing formulation which generalizes the three-step procedure proposed by Röhmel et al. for two endpoints. For the global superiority test, Röhmel et al. suggest using the Läuter (1996) test, modified to make it monotone. The resulting test is not only complicated to use, but the modification does not readily extend to more than two endpoints, and the test is in general less powerful than several of its competitors, as verified in a simulation study. Instead, we suggest applying the one-sided likelihood ratio test used by Perlman and Wu (2004) or the union-intersection t_max test used by Tamhane and Logan (2004).

11.
The public health situation in Sweden has become drastically worse since the autumn of 1997. A massive roll-out of GSM main transmitter towers and roof-mounted transmitters, permitted after mid-1997, led to booming sales of GSM handsets all over Sweden. The authorities in Sweden have issued a brochure on 'Radiation from Mobile Systems' stating that good transmitter coverage leads to low handset output power, which can vary from 2 W down to 0.001 W. We therefore examined health statistics data and GSM coverage in all counties of Sweden, Norway and Denmark. Here, we show that there is a very strong correlation between health degradation and weak GSM coverage, while no such relation is seen for the period 1981–1991, when no handset power regulation was applied. The immediate implications of this study are the need for (1) a deeper analysis of handset power levels and health statistics, and (2) reconsideration of the planned massive roll-out of yet another mobile system (3G).

12.
A general multistage (stepwise) procedure is proposed for dealing with arbitrary gatekeeping problems, including parallel and serial gatekeeping. The procedure is very simple to implement, since it does not require application of the closed testing principle and the consequent need to test all nonempty intersections of hypotheses. It is based on the idea of carrying forward the Type I error rate of any rejected hypotheses to test the hypotheses in the next ordered family. This requires the use of a so-called separable multiple test procedure (MTP) in the earlier family. The Bonferroni MTP is separable, but other standard MTPs such as the Holm, Hochberg, Fallback and Dunnett procedures are not. Truncated versions of these are proposed that are separable and more powerful than the Bonferroni MTP. The proposed procedure is illustrated by a clinical trial example.
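
As a concrete example of a separable MTP, a sketch of the truncated Holm procedure follows (hypothetical p-values). With truncation fraction gamma, the step-down critical constants are a convex mix of Holm's and Bonferroni's: gamma = 0 recovers Bonferroni and gamma = 1 the ordinary (non-separable) Holm test:

```python
def truncated_holm(pvals, alpha=0.05, gamma=0.5):
    """Step-down test with constants alpha*(gamma/(m-i+1) + (1-gamma)/m)
    for the i-th smallest p-value; separable for gamma < 1."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    rejected = [False] * m
    for step, i in enumerate(order):
        bound = alpha * (gamma / (m - step) + (1 - gamma) / m)
        if pvals[i] <= bound:
            rejected[i] = True
        else:
            break
    return rejected

print(truncated_holm([0.009, 0.013, 0.21], alpha=0.05, gamma=0.5))
```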

13.
Murphy A, Weiss ST, Lange C. PLoS Genetics 2008, 4(9): e1000197.
For genome-wide association studies in family-based designs, we propose a powerful two-stage testing strategy that can be applied in situations in which parent-offspring trio data are available and all offspring are affected with the trait or disease under study. In the first step of the testing strategy, we construct estimators of genetic effect size in the completely ascertained sample of affected offspring and their parents that are statistically independent of the family-based association/transmission disequilibrium tests (FBATs/TDTs) that are calculated in the second step of the testing strategy. For each marker, the genetic effect is estimated (without requiring an estimate of the SNP allele frequency) and the conditional power of the corresponding FBAT/TDT is computed. Based on the power estimates, a weighted Bonferroni procedure assigns an individually adjusted significance level to each SNP. In the second stage, the SNPs are tested with the FBAT/TDT statistic at the individually adjusted significance levels. Using simulation studies for scenarios with up to 1,000,000 SNPs, varying allele frequencies and genetic effect sizes, the power of the strategy is compared with standard methodology (e.g., FBATs/TDTs with Bonferroni correction). In all considered situations, the proposed testing strategy demonstrates substantial power increases over the standard approach, even when the true genetic model is unknown and must be selected based on the conditional power estimates. The practical relevance of our methodology is illustrated by an application to a genome-wide association study for childhood asthma, in which we detect two markers meeting genome-wide significance that would not have been detected using standard methodology.
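
The weighting step can be illustrated with a simple proportional rule: each SNP's share of the overall alpha is proportional to its estimated conditional power, so the individually adjusted levels still sum to alpha. This is only a sketch; the paper's actual partitioning of alpha based on conditional power may differ, and all inputs are hypothetical:

```python
import numpy as np

def weighted_bonferroni(pvals, cond_power, alpha=0.05):
    """Reject SNP i if p_i <= w_i * alpha, with weights w_i summing to 1;
    the familywise error rate is then at most alpha."""
    w = np.asarray(cond_power, dtype=float)
    w = w / w.sum()
    return np.asarray(pvals) <= w * alpha

# three hypothetical SNPs: high conditional power earns a larger share of alpha
print(weighted_bonferroni(pvals=[2e-3, 0.04, 0.01],
                          cond_power=[0.8, 0.1, 0.1]))
```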

14.
In clinical trials where several experimental treatments are of interest, the goal may be viewed as identification of the best of these and comparison of that treatment to a standard control therapy. However, it is undesirable to commit patients to a large-scale comparative trial of a new regimen without evidence that its therapeutic success rate is acceptably high. We propose a two-stage design in which patients are first randomized among the experimental treatments, and the single treatment having the highest observed success rate is identified. If this highest rate falls below a fixed cutoff then the trial is terminated. Otherwise, the "best" new treatment is compared to the control at a second stage. Locally optimal values of the cutoff and the stage-1 and stage-2 sample sizes are derived by minimizing expected total sample size. The design has both high power and high probability of terminating early when no experimental treatment is superior to the control. Numerical results for implementing the design are presented, and comparison to Dunnett's (1984, in Design of Experiments: Ranking and Selection, T. J. Santner and A. C. Tamhane (eds), 47-66; New York: Marcel Dekker) optimal one-stage procedure is made.

15.
Microcomputer software has been developed to control video acquisition of 1024 x 1024 digital autoradiogram images which represent true optical density (OD). Since video cameras are sensitive to intensity (I), background correction and conversion to OD must be accomplished in software. The software linearizes camera output against an optical step tablet of known ODs and creates a ‘look-up table’ through which captured images are passed. This procedure allows the accurate and rapid conversion of a large number of images. The user is directed through the calibration and capture procedures; safeguards are included to ensure that the resultant images are correctly calibrated. The algorithms used in these programs accommodate the limited computing power available in microcomputers. The use of a commercially available graphics library will enhance the portability to multiple hardware configurations. This software is a low-cost alternative for the capture of digital images needed for quantitative densitometry.

16.
The energetic cost of maintaining lateral balance during human running
To quantify the energetic cost of maintaining lateral balance during human running, we provided external lateral stabilization (LS) while running with and without arm swing and measured changes in energetic cost and step width variability (an indicator of lateral balance). We hypothesized that external LS would reduce the energetic cost and step width variability of running (3.0 m/s), both with and without arm swing. We further hypothesized that the reduction in energetic cost and step width variability would be greater when running without arm swing than when running with arm swing. We controlled for step width by having subjects run along a single line (zero target step width), which eliminated any interaction effects of step width and arm swing. We implemented a repeated-measures ANOVA with two within-subjects fixed factors (external LS and arm swing) to evaluate main and interaction effects. When provided with external LS (main effect), subjects reduced net metabolic power by 2.0% (P = 0.032) and step width variability by 12.3% (P = 0.005). Eliminating arm swing (main effect) increased net metabolic power by 7.6% (P < 0.001) but did not change step width variability (P = 0.975). We did not detect a significant interaction effect between external LS and arm swing. Thus, when comparing conditions of running with or without arm swing, external LS resulted in a similar reduction in net metabolic power and step width variability. We infer that the 2% reduction in the net energetic cost of running with external LS reflects the energetic cost of maintaining lateral balance. Furthermore, while eliminating arm swing increased the energetic cost of running overall, arm swing does not appear to assist with lateral balance. Our data suggest that humans use step width adjustments as the primary mechanism to maintain lateral balance during running.
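
The two-factor repeated-measures analysis described above can be reproduced with statsmodels' AnovaRM; the sketch below assumes a hypothetical long-format file with one row per subject and condition:

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# hypothetical columns: subject, lat_stab ("yes"/"no"),
# arm_swing ("yes"/"no"), net_power (W/kg)
df = pd.read_csv("running_energetics.csv")

# two within-subject factors; reports both main effects and their interaction
res = AnovaRM(df, depvar="net_power", subject="subject",
              within=["lat_stab", "arm_swing"]).fit()
print(res)
```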

17.
Bakhtina M, Lee S, Wang Y, Dunlap C, Lamarche B, Tsai MD. Biochemistry 2005, 44(13): 5177-5187.
The kinetic mechanism and the structural bases of the fidelity of DNA polymerases are still highly controversial. Here we report the use of three probes in stopped-flow studies of Pol beta to obtain new, direct evidence for our previous interpretations: (a) increasing the viscosity of the reaction buffer with sucrose or glycerol is expected to slow down the conformational change differentially, and it was shown to slow down the first (fast) fluorescence transition selectively; (b) use of dNTPalphaS in place of dNTP is expected to slow down the chemical step preferentially, and it was shown to slow down the second (slow) fluorescence transition selectively; (c) the substitution-inert Rh(III)dNTP was used to show for the first time that the slow fluorescence change occurs after mixing of Pol beta.DNA.Rh(III)dNTP with Mg(II). These results, along with crystal structures, suggest that the subdomain-closing conformational change occurs before binding of the catalytic Mg(II), while the rate-limiting step occurs after binding of the catalytic Mg(II). These results provide new evidence for the mechanism we suggested previously, but do not support the results of three recent computational studies. The results were further supported by a "sequential mixing" stopped-flow experiment that used no analogues, and thus ruled out the possibility that the discrepancy between experimental and computational results is due to the use of analogues. These methodologies can be used to examine other DNA polymerases to determine whether the properties of Pol beta are exceptional or general.

18.
Qiou Z, Ravishanker N, Dey DK. Biometrics 1999, 55(2): 637-644.
In this paper, we describe Bayesian modeling of dependent multivariate survival data using positive stable frailty distributions. A flexible baseline hazard formulation using a piecewise exponential model with a correlated prior process is used. The estimation of the stable law parameter together with the parameters of the (conditional) proportional hazards model is facilitated by a modified Gibbs sampling procedure. The methodology is illustrated on kidney infection data (McGilchrist and Aisbett, 1991).

19.
A semi-automated, 96-well based liquid-liquid back-extraction (LLE) procedure was developed and used for sample preparation of dextromethorphan (DEX), an active ingredient in many over-the-counter cough formulations, and dextrorphan (DOR), an active metabolite of DEX, in human plasma. The plasma extracts were analyzed by liquid chromatography-tandem mass spectrometry (LC-MS-MS). The analytes were isolated from human plasma using an initial ether extraction, followed by a back extraction from the ether into a small volume of acidified water. The acidified water isolated from the back extraction was analyzed directly by LC-MS-MS, eliminating the need for a dry-down step. A liquid handling system was utilized for all aspects of liquid transfer during the LLE procedure, including the transfer of samples from individual tubes into a 96-well format, preparation of standards, addition of internal standard, and the addition and transfer of the extraction solvents. The semi-automated, 96-well based LLE procedure reduced sample preparation time by a factor of four relative to a comparable manually performed LLE procedure.

20.
As biological studies become more expensive to conduct, statistical methods that take advantage of existing auxiliary information about an expensive exposure variable are desirable in practice. Such methods should improve study efficiency and increase the statistical power for a given number of assays. In this article, we consider an inference procedure for multivariate failure time data with auxiliary covariate information. We propose an estimated pseudopartial likelihood estimator under the marginal hazard model framework and develop the asymptotic properties of the proposed estimator. We conduct simulation studies to evaluate the performance of the proposed method in practical situations and demonstrate the method with a data set from the Studies of Left Ventricular Dysfunction (SOLVD Investigators, 1991, New England Journal of Medicine 325, 293–302).
