Similar Literature
20 similar records found (search time: 15 ms)
1.
Kwong KS, Cheung SH, Chan WS. Biometrics 2004, 60(2):491-498
In clinical studies, multiple superiority/equivalence testing procedures can be applied to classify a new treatment as superior, equivalent (same therapeutic effect), or inferior to each set of standard treatments. Previous stepwise approaches (Dunnett and Tamhane, 1997, Statistics in Medicine 16, 2489-2506; Kwong, 2001, Journal of Statistical Planning and Inference 97, 359-366) are only appropriate for balanced designs. Unfortunately, the construction of similar tests for unbalanced designs is far more complex, with two major difficulties: (i) the ordering of test statistics for superiority may not be the same as the ordering of test statistics for equivalence; and (ii) the correlation structure of the test statistics is not equi-correlated but product-correlated. In this article, we seek to develop a two-stage testing procedure for unbalanced designs, which are very popular in clinical experiments. This procedure is a combination of step-up and single-step testing procedures, and the familywise error rate is proven to be controlled at a designated level. Furthermore, a simulation study is conducted to compare the average powers of the proposed procedure to those of the single-step procedure. In addition, a clinical example is provided to illustrate the application of the new procedure.

2.
Predictive and prognostic biomarkers play an important role in personalized medicine to determine strategies for drug evaluation and treatment selection. In the context of continuous biomarkers, identification of an optimal cutoff for patient selection can be challenging due to limited information on biomarker predictive value, the biomarker’s distribution in the intended use population, and the complexity of the biomarker relationship to clinical outcomes. As a result, prespecified candidate cutoffs may be rationalized based on biological and practical considerations. In this context, adaptive enrichment designs have been proposed with interim decision rules to select a biomarker-defined subpopulation to optimize study performance. With a group sequential design as a reference, the performance of several proposed adaptive designs is evaluated and compared under various scenarios (e.g., sample size, study power, enrichment effects) where type I error rates are well controlled through closed testing procedures and where subpopulation selections are based upon the predictive probability of trial success. It is found that when the treatment is more effective in a subpopulation, these adaptive designs can improve study power substantially. Furthermore, we identified one adaptive design to have generally higher study power than the other designs under various scenarios.
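An interim rule driven by the predictive probability of trial success can be illustrated with a simple beta-binomial sketch. This is a single-arm illustration, not the designs compared in the abstract; the prior `Beta(a_prior, b_prior)`, the null rate `p0`, and all sample sizes are hypothetical choices:

```python
from scipy import stats

def predictive_probability(successes, n_interim, n_total,
                           p0=0.3, alpha_final=0.05,
                           a_prior=1.0, b_prior=1.0):
    """Posterior predictive probability that the completed trial
    rejects H0: p <= p0 with a one-sided exact binomial test."""
    n_remaining = n_total - n_interim
    a_post = a_prior + successes                  # Beta posterior after interim
    b_post = b_prior + n_interim - successes
    pp = 0.0
    for y in range(n_remaining + 1):
        # beta-binomial weight of observing y further successes
        w = stats.betabinom.pmf(y, n_remaining, a_post, b_post)
        # exact binomial p-value for the completed data set
        p_value = stats.binom.sf(successes + y - 1, n_total, p0)
        pp += w * (p_value < alpha_final)
    return pp

# strong interim signal: 15/20 responders, 40 patients planned in total
print(predictive_probability(15, 20, 40))
```

A design could, for instance, continue enrolling only the biomarker-positive subpopulation whenever its predictive probability exceeds a prespecified threshold.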

3.
J Whitehead. Biometrics 1985, 41(2):373-383
Conventional statistical determinations of sample size in phase II studies typically lead to sample sizes of the order of 25 (Schoenfeld, 1980, International Journal of Radiation Oncology, Biology and Physics 6, 371-374). When the development of new treatments is proceeding rapidly relative to the recruitment of suitable patients, such requirements can prove to be too demanding. As a result, either sample sizes are reduced by a rather arbitrary weakening of the risk specifications, or certain new treatments go untested. In this paper, the phase II testing of a number of treatments will be considered as a single study which has the objective of identifying the most promising treatment for phase III investigation. It is seen to be advantageous to test more treatments, with fewer subjects receiving each, than the conventional methods would allow.
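The trade-off between per-arm size and the number of treatments screened can be sketched with a Monte Carlo estimate of the probability of correctly selecting the best treatment. The response rates, arm counts, and per-arm sizes below are hypothetical, and ties are counted as failures, which is conservative:

```python
import numpy as np

rng = np.random.default_rng(7)

def p_correct_selection(p_best, p_other, k, n, sims=20000):
    """Monte Carlo probability that the truly best of k treatments
    yields the strictly highest observed response count."""
    best = rng.binomial(n, p_best, size=sims)
    others = rng.binomial(n, p_other, size=(sims, k - 1))
    return float(np.mean(best > others.max(axis=1)))

# small arms still select well, while letting more candidates be screened
print(p_correct_selection(0.4, 0.2, k=4, n=15))   # 4 arms of 15 patients
print(p_correct_selection(0.4, 0.2, k=2, n=25))   # 2 arms of 25 patients
```

With only 15 patients per arm the best of four treatments is still picked most of the time, which is the kind of calculation that motivates screening more treatments with fewer subjects each.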

4.
As phylogenetically controlled experimental designs become increasingly common in ecology, the need arises for a standardized statistical treatment of these datasets. Phylogenetically paired designs circumvent the need for resolved phylogenies and have been used to compare species groups, particularly in the areas of invasion biology and adaptation. Despite the widespread use of this approach, the statistical analysis of paired designs has not been critically evaluated. We propose a mixed model approach that includes random effects for pair and species. These random effects introduce a “two-layer” compound symmetry variance structure that captures both the correlations between observations on related species within a pair as well as the correlations between the repeated measurements within species. We conducted a simulation study to assess the effect of model misspecification on Type I and II error rates. We also provide an illustrative example with data containing taxonomically similar species and several outcome variables of interest. We found that a mixed model with species and pair as random effects performed better in these phylogenetically explicit simulations than two commonly used reference models (no or single random effect) by optimizing Type I error rates and power. The proposed mixed model produces acceptable Type I and II error rates despite the absence of a phylogenetic tree. This design can be generalized to a variety of datasets to analyze repeated measurements in clusters of related subjects/species.

5.
Eccentric exercise continues to receive attention as a productive means of exercise. Coupled with this has been the heightened study of the damage that occurs in early stages of exposure to eccentric exercise. This is commonly referred to as delayed onset muscle soreness (DOMS). To date, a sound and consistent treatment for DOMS has not been established. Although multiple practices exist for the treatment of DOMS, few have scientific support. Suggested treatments for DOMS are numerous and include pharmaceuticals, herbal remedies, stretching, massage, nutritional supplements, and many more. DOMS is particularly prevalent in resistance training; hence, this article may be of particular interest to the coach, trainer, or physical therapist to aid in selection of efficient treatments. First, we briefly review eccentric exercise and its characteristics and then proceed to a scientific and systematic overview and evaluation of treatments for DOMS. We have classified treatments into 3 sections, namely, pharmacological, conventional rehabilitation approaches, and a third section that collectively evaluates multiple additional practiced treatments. Literature that addresses most directly the question regarding the effectiveness of a particular treatment has been selected. The reader will note that selected treatments such as anti-inflammatory drugs and antioxidants appear to have a potential in the treatment of DOMS. Other conventional approaches, such as massage, ultrasound, and stretching, appear less promising.

6.
We consider some multiple comparison problems in repeated measures designs for data with ties, particularly ordinal data; the methods are also applicable to continuous data, with or without ties. A unified asymptotic theory of rank tests of Brunner, Puri, and Sen (1995) and Akritas and Brunner (1997) is utilized to derive large sample multiple comparison procedures (MCPs). First, we consider a single treatment and address the problem of comparing its time effects with respect to the baseline. Multiple sign tests and rank tests (and the corresponding simultaneous confidence intervals) are derived for this problem. Next, we consider two treatments and address the problem of testing for treatment × time interactions by comparing their time effects with respect to the baseline. Simulation studies are conducted to study the type I familywise error rates and powers of competing procedures under different distributional models. The data from a psychiatric study are analyzed using the above MCPs to answer the clinicians' questions.

7.
An experimental design is proposed for high-throughput testing of combined interventions that might increase life expectancy in rodents. There is a growing backlog of promising treatments that have never been tested in mammals, and known treatments have not been tested in combination. The dose-response curve is often nonlinear, as are the interactions among different therapies. Herein are proposed two experimental designs optimized for detecting high-value combinations. In Part I, numerical simulation is used to explore a protocol for testing different dosages of a single intervention. With reasonable and general biological assumptions about the dose-response curve, information is maximized when each animal receives a different dosage. In Part II, numerical simulation is used to explore a protocol for testing interactions among many combinations of treatments, once their individual dosages have been established. Combinations of three are identified as a sweet spot for statistics. To conserve resources, the protocol is designed to identify those outliers that lead to life extension greater than 50%, but not to offer detailed survival curves for any treatments. Every combination of three treatments from a universe of 15 total treatments is represented, with just three mice replicating each combination. Stepwise regression is used to infer information about the effects of individual treatments and all their pairwise interactions. Results are not quite as robust as for the dosage protocol in Part I, but if there is a combination that extends lifespan by more than 50%, it will be detected with 80% certainty. These two screening protocols offer the possibility of expediting the identification of treatment combinations that are most likely to have the largest effect, while controlling costs overall.
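The scale of the Part II protocol follows directly from counting combinations; a quick sketch (treatment labels hypothetical):

```python
from itertools import combinations
from math import comb

treatments = [f"T{i:02d}" for i in range(1, 16)]     # 15 candidate treatments
triples = list(combinations(treatments, 3))          # every combination of 3

print(len(triples))                      # comb(15, 3) = 455 distinct triples
print(len(triples) * 3)                  # 3 mice each -> 1365 animals
print(sum("T01" in t for t in triples))  # each treatment appears in 91 triples
```

So although each individual triple is barely replicated, every single treatment is backed by 91 × 3 = 273 animals, which is what the stepwise regression over treatment effects and pairwise interactions exploits.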

8.
Critical limb ischemia (CLI) is the advanced stage of the peripheral artery disease spectrum and is defined by limb pain or impending limb loss because of compromised blood flow to the affected extremity. Current conventional therapies for CLI include amputation, bypass surgery, endovascular therapy, and pharmacological approaches. Although these conventional therapeutic strategies still remain the mainstay of treatments for CLI, novel and promising therapeutic approaches such as proangiogenic gene/protein therapies and stem cell-based therapies have emerged to overcome, at least partially, the limitations and disadvantages of current conventional therapeutic approaches. Such novel CLI treatment options may become even more effective when other complementary approaches such as utilizing proper bioscaffolds are used to increase the survival and engraftment of delivered genes and stem cells. Therefore, herein, we address the benefits and disadvantages of current therapeutic strategies for CLI treatment and summarize the novel and promising therapeutic approaches for CLI treatment. Our analyses also suggest that these novel therapeutic strategies offer considerable advantages when current conventional methods have failed.

9.
Chan IS, Tang NS, Tang ML, Chan PS. Biometrics 2003, 59(4):1170-1177
Testing of noninferiority has become increasingly important in modern medicine as a means of comparing a new test procedure to a currently available test procedure. Asymptotic methods have recently been developed for analyzing noninferiority trials using rate ratios under the matched-pair design. In small samples, however, the performance of these asymptotic methods may not be reliable, and they are not recommended. In this article, we investigate alternative methods that are desirable for assessing noninferiority trials, using the rate ratio measure under small-sample matched-pair designs. In particular, we propose an exact and an approximate exact unconditional test, along with the corresponding confidence intervals based on the score statistic. The exact unconditional method guarantees the type I error rate will not exceed the nominal level. It is recommended when strict control of type I error (protection against any inflated risk of accepting inferior treatments) is required. However, the exact method tends to be overly conservative (thus, less powerful) and computationally demanding. Via empirical studies, we demonstrate that the approximate exact score method, which is computationally simple to implement, controls the type I error rate reasonably well and has high power for hypothesis testing. On balance, the approximate exact method offers a very good alternative for analyzing correlated binary data from matched-pair designs with small sample sizes. We illustrate these methods using two real examples taken from a crossover study of soft lenses and a Pneumocystis carinii pneumonia study. We contrast the methods with a hypothetical example.

10.
Ding M, Rosner GL, Müller P. Biometrics 2008, 64(3):886-894
Most phase II screening designs available in the literature consider one treatment at a time. Each study is considered in isolation. We propose a more systematic decision-making approach to the phase II screening process. The sequential design allows for more efficiency and greater learning about treatments. The approach incorporates a Bayesian hierarchical model that allows combining information across several related studies in a formal way and improves estimation in small data sets by borrowing strength from other treatments. The design incorporates a utility function that includes sampling costs and possible future payoff. Computer simulations show that this method has high probability of discarding treatments with low success rates and moving treatments with high success rates to phase III trial.

11.
Randomization analyses have been developed for testing main effects and interactions in standard experimental designs. However, exact multiple comparisons procedures for these randomization analyses have received little attention. This article proposes a general procedure for constructing simultaneous randomization tests that have prescribed type I error rates. An application of the procedure provides multiple comparisons in the randomization analyses of designed experiments. This application is made to data collected in a biopharmaceutical experiment.
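The basic two-sample randomization test, the building block that such simultaneous procedures extend, can be sketched as follows (the group values in the example are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

def randomization_pvalue(x, y, n_perm=10000):
    """Randomization test for a difference in means: re-randomize the
    group labels and compare |mean difference| against the observed one."""
    pooled = np.concatenate([x, y])
    observed = abs(x.mean() - y.mean())
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        stat = abs(perm[:len(x)].mean() - perm[len(x):].mean())
        count += stat >= observed
    # the add-one correction keeps the Monte Carlo test valid
    return (count + 1) / (n_perm + 1)

print(randomization_pvalue(np.array([10., 11, 12, 13, 14]),
                           np.array([1., 2, 3, 4, 5])))
```

Because the reference distribution comes from the actual randomization, no distributional assumptions are needed; simultaneous versions then control the type I error rate across a family of such comparisons.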

12.
Genomewide association studies attempting to unravel the genetic etiology of complex traits have recently gained attention. Frequently, these studies employ a sequential genotyping strategy: A large panel of markers is examined in a subsample of subjects, and the most promising markers are genotyped in the remaining subjects. In this article, we introduce a novel method for such designs enabling investigators to, for example, modify marker densities and sample proportions while strongly controlling the family-wise type I error rate. Loss of efficiency is avoided by redistributing conditional type I error rates of discarded markers. Our approach can be combined with cost optimal designs and entails a greater flexibility than all previously suggested designs. Among other features, it allows for marker selections based upon biological criteria instead of statistical criteria alone, or the option to modify the sample size at any time during the course of the project. For practical applicability, we develop a new algorithm, subsequently evaluate it by simulations, and illustrate it using a real data set.

13.
A common assumption of data analysis in clinical trials is that the patient population and the treatment effects do not vary during the course of the study. However, when trials enroll patients over several years, this hypothesis may be violated. Ignoring variations of the outcome distributions over time, under the control and experimental treatments, can lead to biased treatment effect estimates and poor control of false positive results. We propose and compare two procedures that account for possible variations of the outcome distributions over time, to correct treatment effect estimates, and to control type-I error rates. The first procedure models trends of patient outcomes with splines. The second leverages conditional inference principles, which have been introduced to analyze randomized trials when patient prognostic profiles are unbalanced across arms. These two procedures are applicable in response-adaptive clinical trials. We illustrate the consequences of trends in the outcome distributions in response-adaptive designs and in platform trials, and investigate the proposed methods in the analysis of a glioblastoma study.

14.
In applied entomological experiments, when the response is a count-type variable, certain transformation remedies such as the square root, logarithm (log), or rank transformation are often used to normalize data before analysis of variance. In this study, we examine the usefulness of these transformations by reanalyzing field-collected data from a split-plot experiment and by performing a more comprehensive simulation study of factorial and split-plot experiments. For the field-collected data, significant interactions were dependent upon the type of transformation. For the simulation study, Poisson distributed errors were used for a 2 by 2 factorial arrangement, in both randomized complete block and split-plot settings. Various sizes of main effects were induced, and type I error rates and powers of the tests for interaction were examined for the raw response values and the log-, square root-, and rank-transformed responses. The aligned rank transformation was also investigated because it has been shown to perform well in testing interactions in factorial arrangements. We found that for testing interactions, the untransformed response and the aligned rank response performed best (preserved nominal type I error rates), whereas the other transformations had inflated error rates when main effects were present. No evaluations of the tests for main effects or simple effects were conducted. Potentially these transformations will still be necessary when performing those tests.
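The transformations compared in the study, including the alignment step used before ranking when testing an interaction, can be sketched as follows (the 2 × 2 layout and the counts are simulated):

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(0)
counts = rng.poisson(4.0, size=12)            # count-type responses

sqrt_t = np.sqrt(counts + 0.5)                # common variance-stabilizing form
log_t = np.log(counts + 1.0)                  # log(y + 1) copes with zeros
rank_t = rankdata(counts)                     # midranks under ties

# aligned rank transform for the A x B interaction in a 2 x 2 factorial:
# remove the estimated main effects, then rank the aligned residuals
y = counts.reshape(2, 2, 3).astype(float)     # factor A x factor B x replicate
grand = y.mean()
a_eff = y.mean(axis=(1, 2), keepdims=True) - grand
b_eff = y.mean(axis=(0, 2), keepdims=True) - grand
aligned_ranks = rankdata((y - a_eff - b_eff).ravel()).reshape(y.shape)
```

The interaction ANOVA is then run on `aligned_ranks` in place of the raw response, which is what lets the aligned rank transform preserve nominal type I error rates when main effects are present.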

15.
The Newman-Keuls (NK) procedure for testing all pairwise comparisons among a set of treatment means, introduced by Newman (1939) and in a slightly different form by Keuls (1952), was proposed as a reasonable way to alleviate the inflation of error rates when a large number of means are compared. It was proposed before the concepts of different types of multiple error rates were introduced by Tukey (1952a, b; 1953). Although it was popular in the 1950s and 1960s, once control of the familywise error rate (FWER) was accepted generally as an appropriate criterion in multiple testing, and it was realized that the NK procedure does not control the FWER at the nominal level at which it is performed, the procedure gradually fell out of favor. Recently, a more liberal criterion, control of the false discovery rate (FDR), has been proposed as more appropriate in some situations than FWER control. This paper notes that the NK procedure and a nonparametric extension control the FWER within any set of homogeneous treatments. It proves that the extended procedure controls the FDR when there are well-separated clusters of homogeneous means and between-cluster test statistics are independent, and extensive simulation provides strong evidence that the original procedure controls the FDR under the same conditions and some dependent conditions when the clusters are not well-separated. Thus, the test has two desirable error-controlling properties, providing a compromise between FDR control with no subgroup FWER control and global FWER control. Yekutieli (2002) developed an FDR-controlling procedure for testing all pairwise differences among means, without any FWER-controlling criteria when there is more than one cluster. The empirical example in Yekutieli's paper was used to compare the Benjamini-Hochberg (1995) method with apparent FDR control in this context, Yekutieli's proposed method with proven FDR control, the Newman-Keuls method that controls FWER within equal clusters with apparent FDR control, and several methods that control FWER globally. The Newman-Keuls is shown to be intermediate in number of rejections to the FWER-controlling methods and the FDR-controlling methods in this example, although it is not always more conservative than the other FDR-controlling methods.

16.
Donner A, Klar N, Zou G. Biometrics 2004, 60(4):919-925
Split-cluster designs are frequently used in the health sciences when naturally occurring clusters such as multiple sites or organs in the same subject are assigned to different treatments. However, statistical methods for the analysis of binary data arising from such designs are not well developed. The purpose of this article is to propose and evaluate a new procedure for testing the equality of event rates in a design dividing each of k clusters into two segments having multiple sites (e.g., teeth, lesions). The test statistic proposed is a generalization of a previously published procedure based on adjusting the standard Pearson chi-square statistic, but can also be derived as a score test using the approach of generalized estimating equations.

17.
Different types of panelist by treatment interaction are explored to determine how they influence the outcomes of discrimination tests. The study compares the situations where panelists are considered as fixed or random effects over the range of most testing conditions for small panels (5–15 panelists) that replicate their judgements. Magnitude interaction and nonperceivers or nondiscriminators have minor effects on test outcomes. Cross-over interaction increases the chances for a type II error, especially when panelists are considered as random effects. False discrimination increases the chances for a type I error when panelists are considered as fixed effects. Applications of methods to reduce the chances for these errors in the testing for differences among treatments are discussed.

18.
This article aims to examine current best practice in the field with reference to first-line, second-line, rescue, and emerging treatment regimens for Helicobacter pylori eradication. The recommended first-line treatment in published guidelines in Europe and North America is triple therapy, with a proton pump inhibitor combined with amoxicillin and clarithromycin being the favoured regimen. Rates of eradication with this regimen, however, are falling alarmingly due to a combination of antibiotic resistance and poor compliance with therapy. Bismuth-based quadruple therapies and levofloxacin-based regimens have been shown to be effective second-line regimens. Third-line options include regimens based on rifabutin or furazolidone; susceptibility testing is the most rational option here but is currently not used widely enough. Sequential therapy is promising but needs further study and validation outside of Italy. Although the success of first-line treatments is falling, if compliance is good and a clear treatment paradigm is adhered to, almost universal eradication rates can still be achieved. If compliance is not achievable, the problem of antibiotic resistance will continue to beset any combination of drugs used for H. pylori eradication.

19.
Nonchemical, environmentally friendly quarantine treatments are preferred for use in postharvest control of insect pests. Combined high temperature and controlled atmosphere quarantine treatments for phytosanitary fruit pests Macchiademus diplopterus (Distant) (Hemiptera: Lygaeidae) and Phlyctinus callosus (Schoenherr) (Coleoptera: Curculionidae) were investigated to determine the potential of such treatments for quarantine security. Field-collected, aestivating M. diplopterus adults and P. callosus adults were treated using a controlled atmosphere waterbath system. This system simulates the controlled atmosphere temperature treatment system (CATTS) used to control a number of phytosanitary pests in the United States and allows for a rapid assessment of pest response to treatment. Insects were treated under regular air conditions and a controlled atmosphere of 1% oxygen, 15% carbon dioxide in nitrogen, at two ramping heat rates, 12 and 24 degrees C/h. Treatment of both species was more effective under both heating rates when the controlled atmosphere condition was applied. Under these conditions of controlled atmospheres, mortality of P. callosus was greater when the faster heating rate was used, but the opposite was true for M. diplopterus. This could be due to the physiological condition of aestivation contributing to metabolic arrest in response to the stresses being applied during treatment. Results indicate that the potential for the development of CATTS treatments for these phytosanitary pests, particularly P. callosus, is promising.

20.
Phylogenetic estimation has largely come to rely on explicitly model-based methods. This approach requires that a model be chosen and that that choice be justified. To date, justification has largely been accomplished through use of likelihood-ratio tests (LRTs) to assess the relative fit of a nested series of reversible models. While this approach certainly represents an important advance over arbitrary model selection, the best fit of a series of models may not always provide the most reliable phylogenetic estimates for finite real data sets, where all available models are surely incorrect. Here, we develop a novel approach to model selection, which is based on the Bayesian information criterion, but incorporates relative branch-length error as a performance measure in a decision theory (DT) framework. This DT method includes a penalty for overfitting, is applicable prior to running extensive analyses, and simultaneously compares all models being considered and thus does not rely on a series of pairwise comparisons of models to traverse model space. We evaluate this method by examining four real data sets and by using those data sets to define simulation conditions. In the real data sets, the DT method selects the same or simpler models than conventional LRTs. In order to lend generality to the simulations, codon-based models (with parameters estimated from the real data sets) were used to generate simulated data sets, which are therefore more complex than any of the models we evaluate. On average, the DT method selects models that are simpler than those chosen by conventional LRTs. Nevertheless, these simpler models provide estimates of branch lengths that are more accurate both in terms of relative error and absolute error than those derived using the more complex (yet still wrong) models chosen by conventional LRTs. This method is available in a program called DT-ModSel.
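The Bayesian information criterion underlying the proposed method is simple to state and compute. In the sketch below, the log-likelihoods and parameter counts are hypothetical, with `k` counting only substitution-model parameters beyond the simplest model:

```python
import math

def bic(loglik, k, n):
    """Bayesian information criterion: smaller is better."""
    return -2.0 * loglik + k * math.log(n)

n_sites = 1000                                 # alignment length
candidates = {                                 # hypothetical fitted models
    "JC69":  {"loglik": -5210.0, "k": 0},
    "HKY85": {"loglik": -5150.0, "k": 4},
    "GTR+G": {"loglik": -5142.0, "k": 9},
}
scores = {name: bic(m["loglik"], m["k"], n_sites)
          for name, m in candidates.items()}
best = min(scores, key=scores.get)
print(best, round(scores[best], 2))
```

Because the `k * log(n)` penalty grows with the alignment length, a richer model must buy a substantial likelihood gain to be preferred, which is why BIC-style criteria tend toward simpler models than LRTs; the DT method additionally weighs models by their expected branch-length error.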


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号