Similar Articles
20 similar articles found (search time: 15 ms)
1.
It is well known that point estimates in group sequential designs are biased. This also applies to adaptive designs that enable, for example, data-driven reassessments of group sample sizes. For triangular designs, Whitehead (1986) (Biometrika 73, 573–581) proposed a bias-adjusted estimate, but this estimate is feasible only in group sequential designs, not in adaptive designs. Furthermore, it wastes information because it does not use the stage at which the trial was stopped. We present a modification that does use this information and that is applicable to adaptive designs. The modified estimate achieves an improvement in group sequential designs and shows similar results in adaptive designs.
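As a hedged illustration of the bias being corrected here, the following sketch simulates a generic two-stage design with early stopping for efficacy (not Whitehead's triangular design; the boundary, effect size, and sample sizes are made-up values) and shows that the naive estimate computed at the stopping stage is biased.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, sigma = 0.3, 1.0          # true effect and known SD (made-up values)
n1, n2 = 50, 100                 # stage-1 and maximum sample sizes
c1 = 2.0                         # illustrative stage-1 efficacy boundary

estimates = []
for _ in range(20_000):
    x1 = rng.normal(theta, sigma, n1)
    if x1.mean() / (sigma / np.sqrt(n1)) > c1:   # stop early for efficacy
        estimates.append(x1.mean())
    else:                                        # continue to the full sample
        x2 = rng.normal(theta, sigma, n2 - n1)
        estimates.append((x1.sum() + x2.sum()) / n2)

print(f"true effect {theta}, mean naive estimate {np.mean(estimates):.3f}")
# Trials that stop early select inflated stage-1 means, so the naive
# estimate is biased (upward here); bias-adjusted estimates correct this.
```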

2.
Although linear rank statistics for the two-sample problem are distribution-free tests, their power depends on the distribution of the data. In the planning phase of an experiment, researchers are often uncertain about the shape of this distribution, so both the choice of test statistic for the analysis and the determination of the required sample size rest on vague information. Adaptive designs with an interim analysis can potentially overcome both problems; in particular, adaptive tests based on a selector statistic address the first. We investigate whether adaptive tests can be usefully implemented in flexible two-stage designs to gain power. In a simulation study, we compare several methods for choosing a test statistic for the second stage of an adaptive design based on interim data with a procedure that applies adaptive tests in both stages. We find that the latter is a sensible approach that leads to the best results in most of the situations considered here. The different methods are illustrated using a clinical trial example.
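For readers unfamiliar with selector statistics, the sketch below shows one way a second-stage test could be chosen from interim data and combined with the first stage. The kurtosis-based selector, its cutoff of 4.0, and the equal inverse-normal weights are illustrative assumptions, not the procedure of the paper.

```python
import numpy as np
from scipy import stats

def linear_rank_test(x, y, scores):
    """One-sided p-value (H1: x shifted right) for a linear rank statistic."""
    z = np.concatenate([x, y])
    n, m, N = len(x), len(y), len(x) + len(y)
    a = scores(stats.rankdata(z), N)
    s = a[:n].sum()                          # sum of scores in the x-sample
    var = n * m * np.var(a) / (N - 1)        # np.var (ddof=0) matches the
    return stats.norm.sf((s - n * a.mean()) / np.sqrt(var))  # H0 variance

wilcoxon_scores = lambda r, N: r                              # light tails
median_scores = lambda r, N: (r > (N + 1) / 2).astype(float)  # heavy tails

def adaptive_two_stage(x1, y1, x2, y2, w1=0.5):
    p1 = linear_rank_test(x1, y1, wilcoxon_scores)
    # crude selector: switch to median scores if interim data look heavy-tailed
    heavy = stats.kurtosis(np.concatenate([x1, y1]), fisher=False) > 4.0
    p2 = linear_rank_test(x2, y2, median_scores if heavy else wilcoxon_scores)
    z = np.sqrt(w1) * stats.norm.isf(p1) + np.sqrt(1 - w1) * stats.norm.isf(p2)
    return stats.norm.sf(z)                  # combined one-sided p-value

rng = np.random.default_rng(2)
x1, y1 = rng.standard_t(3, 30) + 0.6, rng.standard_t(3, 30)
x2, y2 = rng.standard_t(3, 40) + 0.6, rng.standard_t(3, 40)
print(f"combined p = {adaptive_two_stage(x1, y1, x2, y2):.4f}")
```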

3.
The two-stage case-control design has been widely used in epidemiologic studies for its cost-effectiveness and improvement of study efficiency (White, 1982, American Journal of Epidemiology 115, 119–128; Breslow and Cain, 1988, Biometrika 75, 11–20). The evolution of modern biomedical studies has called for cost-effective designs with a continuous outcome and exposure variables. In this article, we propose a new two-stage outcome-dependent sampling (ODS) scheme with a continuous outcome variable, in which both the first-stage and the second-stage data arise from ODS schemes. We develop a semiparametric empirical likelihood estimation method for inference about the regression parameters in the proposed design. Simulation studies were conducted to investigate the small-sample behavior of the proposed estimator. We demonstrate that, for a given statistical power, the proposed design requires a substantially smaller sample size than the alternative designs. The proposed method is illustrated with an environmental health study conducted at the National Institutes of Health.
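The sampling scheme itself is easy to picture. The sketch below draws an ODS sample, a simple random sample supplemented with extra observations from both tails of the outcome, and shows why a naive analysis that ignores the design is not valid. The cutpoints and sample sizes are invented, and the semiparametric empirical likelihood estimator of the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 20_000
x = rng.normal(size=N)
y = 0.3 * x + rng.normal(size=N)            # true slope 0.3

lo, hi = np.quantile(y, [0.05, 0.95])       # tail cutpoints (assumption)
srs = rng.choice(N, 200, replace=False)     # first component: SRS
low = rng.choice(np.flatnonzero(y < lo), 50, replace=False)   # extra draws
high = rng.choice(np.flatnonzero(y > hi), 50, replace=False)  # from each tail
ods = np.unique(np.concatenate([srs, low, high]))

# Naive OLS on the ODS sample ignores the outcome-dependent selection and
# typically overstates the slope; a valid analysis must model the design.
for name, idx in [("SRS only", srs), ("naive ODS", ods)]:
    slope = np.polyfit(x[idx], y[idx], 1)[0]
    print(f"{name:10s} slope estimate: {slope:.3f}")
```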

4.
A two-stage design is cost-effective for genome-wide association studies (GWAS) testing hundreds of thousands of single nucleotide polymorphisms (SNPs). In this design, each SNP is genotyped in stage 1 using a fraction of the case-control samples. Top-ranked SNPs are selected and genotyped in stage 2 using additional samples, and a joint analysis combining the statistics from both stages is applied. Follow-up studies can also be regarded as a two-stage design: once some potential SNPs are identified, independent samples are genotyped and analyzed separately or jointly with the previous data to confirm the findings. When the underlying genetic model is known, an asymptotically optimal trend test (TT) can be used at each analysis. In practice, however, the genetic models for SNPs with true associations are usually unknown, and the existing methods for analyzing two-stage designs and follow-up studies are not robust across different genetic models. We propose a simple robust procedure with genetic model selection for two-stage GWAS. Our results show that, if the optimal TT has about 80% power when the genetic model is known, then the existing methods for analyzing the two-stage design have minimum power of about 20% across the four common genetic models (when the true model is unknown), while our robust procedure has minimum power of about 70% across the same models. The results can also be applied to follow-up and replication studies with a joint analysis.
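The building blocks are concrete enough for a sketch: the Cochran-Armitage trend test under candidate scores for the recessive, additive, and dominant models, plus the MAX statistic often used for robustness. The genotype counts are invented, and the model-selection procedure of the paper is not reproduced.

```python
import numpy as np
from scipy import stats

def trend_z(cases, controls, s):
    """Cochran-Armitage trend Z for genotype counts (AA, Aa, aa);
    scores (0, s, 1) with s = 0 recessive, 0.5 additive, 1 dominant."""
    w = np.array([0.0, s, 1.0])
    r, c = np.asarray(cases, float), np.asarray(controls, float)
    n = r + c                               # genotype column totals
    R, N = r.sum(), r.sum() + c.sum()
    t = w @ r - R / N * (w @ n)
    var = R * (N - R) / (N**2 * (N - 1)) * (N * (w**2 @ n) - (w @ n)**2)
    return t / np.sqrt(var)

cases, controls = [120, 240, 140], [160, 250, 90]
zs = {s: trend_z(cases, controls, s) for s in (0.0, 0.5, 1.0)}
for s, z in zs.items():
    print(f"s={s}: Z={z:+.2f}, two-sided p={2 * stats.norm.sf(abs(z)):.2e}")
# MAX statistic: robust across models, but its null distribution must be
# obtained by simulation or bounds because the three Z's are correlated.
print("MAX =", max(abs(z) for z in zs.values()))
```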

5.
Paired data arise in a wide variety of applications where the underlying distribution of the paired differences is often unknown. When the differences are normally distributed, the t-test is optimal. When they are not, the t-test can have substantially less power than the appropriate optimal test, which depends on the unknown distribution. When the normality of the differences is questionable, textbooks typically suggest the nonparametric Wilcoxon signed rank test. An adaptive procedure that uses the Shapiro-Wilk test of normality to decide whether to use the t-test or the Wilcoxon signed rank test has been employed in several studies. Faced with data from heavy tails, the U.S. Environmental Protection Agency (EPA) introduced another approach: it applies both the sign test and the t-test to the paired differences and accepts the alternative hypothesis if either test is significant. This paper investigates the statistical properties of a currently used adaptive test and the EPA's method, and suggests an alternative technique. The new procedure is easy to use and generally has higher empirical power than currently used methods, especially when the differences are heavy-tailed.
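Both procedures under comparison are simple to state in code. The sketch below implements the Shapiro-Wilk-gated adaptive test and the EPA-style dual test under conventional assumptions (a 0.05 gate for normality and a nominal level for each component test).

```python
import numpy as np
from scipy import stats

def adaptive_paired(d, alpha=0.05):
    """Shapiro-Wilk gate: t-test if differences look normal, else Wilcoxon."""
    if stats.shapiro(d).pvalue > 0.05:
        p = stats.ttest_1samp(d, 0.0).pvalue
        return "t-test", p, p < alpha
    p = stats.wilcoxon(d).pvalue
    return "Wilcoxon", p, p < alpha

def epa_dual(d, alpha=0.05):
    """EPA-style rule: reject if the sign test OR the t-test is significant.
    (Running both at level alpha inflates the overall type I error.)"""
    p_t = stats.ttest_1samp(d, 0.0).pvalue
    p_sign = stats.binomtest(int((d > 0).sum()), int((d != 0).sum())).pvalue
    return min(p_t, p_sign), (p_t < alpha) or (p_sign < alpha)

rng = np.random.default_rng(4)
d = rng.standard_t(2, 40) + 0.5            # heavy-tailed paired differences
print(adaptive_paired(d))
print(epa_dual(d))
```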

6.
The application of stabilized multivariate tests is demonstrated in the analysis of a two-stage adaptive clinical trial with three treatment arms. Because of the clinical problem, the multiple comparisons include tests of superiority as well as a test for non-inferiority, where non-inferiority is expressed (in the absence of absolute tolerance limits) as a linear contrast of the three treatments. Special emphasis is placed on the combination of the three sources of multiplicity: multiple endpoints, multiple treatments, and the two stages of the adaptive design. In particular, the adaptation after the first stage includes a change of the a priori order of hypotheses.
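The engine that keeps such mid-trial adaptations valid is a prespecified combination test. The following sketch shows the inverse-normal combination of stage-wise p-values with fixed weights; the weights and p-values are illustrative, and the hierarchical multiple-testing layer of the paper is not reproduced.

```python
import numpy as np
from scipy import stats

def inverse_normal_combination(p1, p2, w1=0.5):
    """Combined one-sided p-value with prespecified weights w1 + (1 - w1)."""
    z = np.sqrt(w1) * stats.norm.isf(p1) + np.sqrt(1 - w1) * stats.norm.isf(p2)
    return stats.norm.sf(z)

# E.g., a superiority contrast yields p1 = 0.09 at the interim and, after an
# adaptation (say, reordering the hypotheses), p2 = 0.02 at stage 2:
print(f"combined p = {inverse_normal_combination(0.09, 0.02):.4f}")
```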

7.
8.
Ye, Lin, and Taylor (2008, Biometrics 64, 1238–1246) proposed a joint model for longitudinal measurements and time-to-event data in which the longitudinal measurements are modeled with a semiparametric mixed model to allow for the complex patterns in longitudinal biomarker data. They proposed a two-stage regression calibration approach that is simpler to implement than a joint modeling approach. In the first stage of their approach, the mixed model is fit without regard to the time-to-event data. In the second stage, the posterior expectations of an individual's random effects from the mixed model are included as covariates in a Cox model. Although Ye et al. (2008) acknowledged that their regression calibration approach may be biased by informative dropout and measurement error, they argued that the bias is small relative to alternative methods. In this article, we show that this bias may be substantial. We show how to alleviate much of this bias with an alternative regression calibration approach that can be applied to both discrete and continuous time-to-event data. Through simulations, the proposed approach is shown to have substantially less bias than the regression calibration approach proposed by Ye et al. (2008). In agreement with the methodology of Ye et al. (2008), an advantage of our proposed approach over joint modeling is that it can be implemented with standard statistical software and does not require complex estimation techniques.
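The two-stage regression calibration being critiqued is straightforward to sketch. Assuming simulated data, a random-intercept model, and the statsmodels and lifelines packages, the following code fits the stage-1 mixed model without regard to the event times and plugs the predicted random effects into a stage-2 Cox model; it illustrates the naive calibration, not the bias-reduced variant proposed in the article.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n, visits = 200, 5
b = rng.normal(0, 1, n)                               # true random intercepts
long = pd.DataFrame({
    "id": np.repeat(np.arange(n), visits),
    "t": np.tile(np.arange(visits), n),
})
long["y"] = 1.0 + b[long["id"]] + 0.2 * long["t"] + rng.normal(0, 0.5, len(long))

# Stage 1: mixed model fit without regard to the time-to-event data.
m = smf.mixedlm("y ~ t", long, groups=long["id"]).fit()
bhat = np.array([m.random_effects[i].iloc[0] for i in range(n)])

# Stage 2: posterior random-effect estimates enter a Cox model as covariates.
surv = pd.DataFrame({
    "time": rng.exponential(1.0 / np.exp(0.5 * b)),   # hazard depends on b
    "event": 1,
    "bhat": bhat,
})
cph = CoxPHFitter().fit(surv, duration_col="time", event_col="event")
print(cph.params_)    # compare with the true log-hazard ratio of 0.5
```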

9.
10.
Adaptive clinical trials are becoming very popular because of their flexibility in allowing mid-stream changes of sample size, endpoints, populations, and so on. At the same time, they have been regarded with mistrust because they can produce bizarre results in very extreme settings. Understanding the advantages and disadvantages of these rapidly developing methods is essential. This paper reviews flexible methods for sample size re-estimation when the outcome is continuous.
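The simplest of these methods re-evaluates the standard sample size formula at the interim with the observed variance. A minimal sketch, with invented design constants:

```python
import numpy as np
from scipy import stats

def per_arm_n(delta, sd, alpha=0.025, power=0.9):
    """Per-arm n for a one-sided two-sample z-test of means."""
    za, zb = stats.norm.isf(alpha), stats.norm.isf(1 - power)
    return int(np.ceil(2 * ((za + zb) * sd / delta) ** 2))

delta = 0.5                            # clinically relevant difference
n_plan = per_arm_n(delta, sd=1.0)      # planning-stage guess: sd = 1.0

rng = np.random.default_rng(6)
interim = rng.normal(0.0, 1.3, n_plan)      # pooled interim data; the true
sd_hat = interim.std(ddof=1)                # SD was underestimated at planning
print(f"planned n/arm = {n_plan}, re-estimated n/arm = {per_arm_n(delta, sd_hat)}")
```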

11.
Sensitivity and specificity have traditionally been used to assess the performance of a diagnostic procedure. Diagnostic procedures with both high sensitivity and high specificity are desirable, but such procedures are frequently too expensive, hazardous, or difficult to operate. A less sophisticated procedure may be preferred if the loss of sensitivity or specificity is judged clinically acceptable. This paper addresses the problem of simultaneously testing the sensitivity and specificity of an alternative test procedure against a reference test procedure when a gold standard is available. The hypothesis is formulated as a compound hypothesis of two non-inferiority (one-sided equivalence) tests. We present an asymptotic test statistic based on the restricted maximum likelihood estimate in the framework of comparing two correlated proportions under prospective and retrospective sampling designs. The sample size and power of the asymptotic test are derived, and the actual type I error and power are calculated by enumerating the exact probabilities in the rejection region. For applications that require high sensitivity as well as high specificity, a large number of positive subjects and a large number of negative subjects are needed. We also propose a weighted sum statistic as an alternative test, comparing a combined measure of sensitivity and specificity of the two procedures; its sample size determination is independent of the sampling plan for the two tests.
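The compound-hypothesis logic is easy to demonstrate. The sketch below runs two one-sided non-inferiority tests, for sensitivity on gold-standard positives and specificity on gold-standard negatives, and accepts the alternative procedure only if both reject. It uses a plain Wald statistic for paired proportions rather than the restricted-MLE statistic of the paper, and the counts and margin are invented.

```python
import numpy as np
from scipy import stats

def paired_noninferiority_z(n11, n10, n01, n00, margin):
    """H0: p_new <= p_ref - margin vs H1: p_new > p_ref - margin,
    for paired binary outcomes (n10 = new+/ref-, n01 = new-/ref+)."""
    n = n11 + n10 + n01 + n00
    d = (n10 - n01) / n                       # p_new - p_ref
    var = (n10 + n01) / n**2 - d**2 / n       # Wald variance of d
    return (d + margin) / np.sqrt(var)

# Gold-standard positives (sensitivity) and negatives (specificity):
z_sens = paired_noninferiority_z(160, 15, 12, 13, margin=0.05)
z_spec = paired_noninferiority_z(150, 15, 10, 25, margin=0.05)
p_sens, p_spec = stats.norm.sf(z_sens), stats.norm.sf(z_spec)
print(f"p_sens = {p_sens:.4f}, p_spec = {p_spec:.4f}")
print("non-inferior on both:", max(p_sens, p_spec) < 0.025)
```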

12.
13.
In many epidemiologic studies and clinical trials, we may wish to establish equivalence rather than detect a difference between the distributions of responses. In this paper, we develop test procedures to detect equivalence with respect to the tail marginal distributions and the marginal proportions when the underlying data are on an ordinal scale with matched pairs. We include a numerical example on the unaided distance vision of both eyes of 7477 women to illustrate the practical usefulness of the proposed procedures. Finally, we briefly discuss the relation between the test procedures developed here and an asymptotic interval estimator proposed elsewhere for the simple difference in dichotomous data with matched pairs.
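After dichotomizing the ordinal scale at a tail category, an equivalence test of this kind reduces to two one-sided tests (TOST) on a matched-pair difference of proportions. A minimal sketch with invented discordant counts and margin (only the total of 7477 pairs is taken from the example):

```python
import numpy as np
from scipy import stats

def tost_matched_pairs(n10, n01, n, margin, alpha=0.05):
    """Equivalence of marginal proportions: |p1 - p2| < margin."""
    d = (n10 - n01) / n                          # difference of marginals
    se = np.sqrt((n10 + n01) / n**2 - d**2 / n)  # Wald SE for paired data
    z_lower = (d + margin) / se                  # H0: d <= -margin
    z_upper = (margin - d) / se                  # H0: d >=  margin
    p = max(stats.norm.sf(z_lower), stats.norm.sf(z_upper))
    return p, p < alpha

# E.g., dichotomized vision grades for the two eyes of n matched pairs:
print(tost_matched_pairs(n10=210, n01=230, n=7477, margin=0.02))
```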

14.
15.
A statistical method for parametric density estimation based on a mixture-of-genotypes model is developed for thermostable phenol sulfotransferase (SULT1A1) activity, which has a putative role in modifying risk for colon and prostate cancer/polyps. The EM algorithm for the general mixture model is modified to accommodate the genetic constraints and is used to estimate genotype frequencies from the distribution of the SULT1A1 phenotype. A parametric bootstrap likelihood ratio test is considered for testing the number of mixture components; its size and power are investigated and compared with the conventional chi-squared test. The relative risk associated with the genotypes defined by this model is also investigated through a generalized linear model. This analysis revealed that the genotype with the highest mean SULT1A1 activity has a greater impact on cancer risk than the others, suggesting that the phenotype with higher SULT1A1 activity might be important in studying the association between cancer risk and SULT1A1 activity.
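The genetic constraint in the EM algorithm is the interesting part: the mixing proportions are tied to Hardy-Weinberg genotype frequencies, so the M-step updates a single allele frequency from expected allele counts. A minimal sketch under the simplifying assumption of a common variance across genotypes (the bootstrap test for the number of components is not included):

```python
import numpy as np
from scipy import stats

def hwe_mixture_em(x, q=0.5, mu=(0.0, 1.0, 2.0), sd=1.0, iters=300):
    """EM for a 3-component normal mixture with HWE weights (q^2, 2q(1-q), (1-q)^2)."""
    x, mu = np.asarray(x, float), np.asarray(mu, float)
    n = len(x)
    for _ in range(iters):
        w = np.array([q**2, 2 * q * (1 - q), (1 - q) ** 2])
        dens = w * stats.norm.pdf(x[:, None], mu, sd)     # shape (n, 3)
        g = dens / dens.sum(axis=1, keepdims=True)        # responsibilities
        # M-step: component 0 carries two copies of the q-allele and the
        # heterozygote one copy, so q is the expected allele proportion.
        q = (2 * g[:, 0].sum() + g[:, 1].sum()) / (2 * n)
        mu = (g * x[:, None]).sum(axis=0) / g.sum(axis=0)
        sd = np.sqrt((g * (x[:, None] - mu) ** 2).sum() / n)
    return q, mu, sd

rng = np.random.default_rng(7)
q_true, means = 0.3, np.array([0.5, 1.5, 3.0])
geno = rng.choice(3, 1000, p=[q_true**2, 2*q_true*(1-q_true), (1-q_true)**2])
x = rng.normal(means[geno], 0.4)
q_hat, mu_hat, _ = hwe_mixture_em(x)
print(f"q_hat = {q_hat:.3f} (true {q_true}); means = {np.round(mu_hat, 2)}")
```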

16.
When a case-control study is planned to include an internal validation study, the sample size of the study and the proportion of validated observations have to be calculated, and there are a variety of alternative methods to accomplish this. In this article, some possible procedures are compared in order to clarify whether the suggested optimal designs differ considerably depending on the method used.

17.
18.
19.
In p-i-n planar perovskite solar cells (pero-SCs) based on methylammonium lead iodide (MAPbI3) perovskite, a high-quality MAPbI3 film, well-matched interfacial band alignment, and efficient charge extraction are critical for high photovoltaic performance. In this work, a hydrophilic fullerene derivative, [6,6]-phenyl-C61-butyric acid-(3,4,5-tris(2-(2-(2-methoxyethoxy)ethoxy)ethoxy)phenyl)methanol ester (PCBB-OEG), is introduced as an additive in the methylammonium iodide precursor solution for the preparation of MAPbI3 perovskite films by a two-step sequential deposition method, yielding a top-down gradient distribution with an ultrathin top layer of PCBB-OEG. A high-quality perovskite film with high crystallinity, fewer trap states, and a dense-grained uniform morphology grows well on both hydrophilic (poly(3,4-ethylenedioxythiophene)/poly(styrenesulfonic acid)) and hydrophobic (polytriarylamine, PTAA) hole transport layers. When the PCBB-OEG-containing perovskite film (pero-0.1) is used in a p-i-n planar pero-SC with the configuration ITO/PTAA/pero-0.1/[6,6]-phenyl-C61-butyric acid methyl ester/Al, the device delivers a promising power conversion efficiency (PCE) of 20.2% without hysteresis, one of the few PCEs over 20% reported for p-i-n planar pero-SCs. Importantly, the pero-0.1-based device shows excellent stability, retaining 98.4% of its initial PCE after 300 h of exposure to an ambient atmosphere with high humidity, and flexible pero-SCs based on pero-0.1 also demonstrate a promising PCE of 18.1%.

20.
Several authors have addressed the problem of calculating the sample size for a matched case-control study with a dichotomous exposure. The approach of Parker and Bregman (1986, Biometrics 42, 919–926) is, in our view, one of the most satisfactory, since it requires specification of quantities that are often readily available to the investigator. However, its recommended implementation involves a computational approximation. We show here that the approximation performs poorly in extreme situations and can easily be replaced with a more exact calculation.
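As a flavor of what "a more exact calculation" can look like in matched designs generally, the sketch below computes the exact power of the conditional (binomial) McNemar test by enumerating the distribution of the discordant-pair count. This is a generic illustration with invented inputs, not Parker and Bregman's algorithm.

```python
import numpy as np
from scipy import stats

def exact_power(n_pairs, p_disc, odds_ratio, alpha=0.05):
    """Exact power of the conditional (binomial) McNemar test."""
    pi1 = odds_ratio / (1 + odds_ratio)     # P(case exposed | discordant)
    power = 0.0
    for k in range(1, n_pairs + 1):         # number of discordant pairs
        pk = stats.binom.pmf(k, n_pairs, p_disc)
        if pk < 1e-12:
            continue
        # two-sided exact binomial rejection region at level alpha under 0.5
        x = np.arange(k + 1)
        p2 = np.minimum(1.0, 2 * np.minimum(stats.binom.cdf(x, k, 0.5),
                                            stats.binom.sf(x - 1, k, 0.5)))
        reject = p2 <= alpha
        power += pk * stats.binom.pmf(x[reject], k, pi1).sum()
    return power

for n in (50, 100, 200):
    print(f"n = {n} pairs: exact power = {exact_power(n, 0.3, 2.5):.3f}")
```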
