Similar documents
20 similar documents retrieved.
1.
Liu Q, Chi GY. Biometrics 2001, 57(1): 172-177
Proschan and Hunsberger (1995, Biometrics 51, 1315-1324) proposed a two-stage adaptive design that maintains the Type I error rate. For practical applications, a two-stage adaptive design is also required to achieve a desired statistical power while limiting the maximum overall sample size. In our proposal, a two-stage adaptive design comprises a main stage and an extension stage: the main stage has sufficient power to reject the null hypothesis under the anticipated effect size, and the extension stage allows the sample size to be increased if the true effect size is smaller than anticipated. For statistical inference, methods for obtaining the overall adjusted p-value, point estimate, and confidence intervals are developed. An exact two-stage test procedure is also outlined for robust inference.
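As a rough illustration of how an overall adjusted p-value can be obtained for a two-stage design, the sketch below combines stage-wise p-values with the inverse normal combination function. This is a standard device for adaptive two-stage designs, not necessarily the exact procedure of this paper; the pre-specified weights, the known-variance z-tests, and the sample sizes are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def stage_p(x, sigma, mu0=0.0):
    """One-sided z-test p-value for H0: mu <= mu0 with known sigma."""
    z = (x.mean() - mu0) / (sigma / np.sqrt(len(x)))
    return norm.sf(z)

def combined_p(p1, p2, w1=0.5, w2=0.5):
    """Inverse-normal combination of independent stage-wise p-values.
    The weights are pre-specified and satisfy w1 + w2 = 1."""
    z = np.sqrt(w1) * norm.isf(p1) + np.sqrt(w2) * norm.isf(p2)
    return norm.sf(z)

rng = np.random.default_rng(1)
sigma = 1.0
main = rng.normal(0.3, sigma, size=50)   # main stage data
ext = rng.normal(0.3, sigma, size=80)    # extension stage data (size chosen adaptively)
p = combined_p(stage_p(main, sigma), stage_p(ext, sigma))
print(f"overall adjusted p-value: {p:.4f}")
```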

2.
A two-stage design is proposed to choose among several experimental treatments and a standard treatment in clinical trials. The first stage employs a selection procedure to select the best treatment, provided it is better than the standard. The second stage tests the hypothesis comparing the best treatment selected at the first stage (if any) with the standard treatment. All the treatments are assumed to follow normal distributions, and the best treatment is the one with the largest population mean. The level and the power are defined and used to set up equations that solve for the unknown first-stage sample size, second-stage sample size, and procedure parameters. The optimal design is the one that gives the smallest average sample size. Numerical results are presented to illustrate the improvement of the proposed design over an existing one-stage design.

3.
Ranked set sampling (RSS) is a sampling procedure that can be considerably more efficient than simple random sampling (SRS). When the variable of interest is binary, ranking of the sample observations can be implemented using the estimated probabilities of success obtained from a logistic regression model developed for the binary variable. The main objective of this study is to use substantial data sets to investigate the application of RSS to estimation of a proportion for a population that is different from the one that provides the logistic regression. Our results indicate that precision in estimation of a population proportion is improved through the use of logistic regression to carry out the RSS ranking and, hence, the sample size required to achieve a desired precision is reduced. Further, the choice and the distribution of covariates in the logistic regression model are not overly crucial for the performance of a balanced RSS procedure.
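A toy sketch of a balanced RSS estimator of a proportion in which the ranking within each set is done by predicted success probabilities, in the spirit of the procedure described above. The covariate model, the logistic coefficients (including the deliberately different coefficients used for ranking), the set size, and the number of cycles are all illustrative assumptions.

```python
import numpy as np

def logistic(eta):
    return 1.0 / (1.0 + np.exp(-eta))

def rss_proportion(y, score, set_size, n_cycles, rng):
    """Balanced ranked set sample of a binary variable y, ranked by an
    external score (e.g. fitted logistic probabilities)."""
    picks = []
    for _ in range(n_cycles):
        for rank in range(set_size):
            idx = rng.choice(len(y), size=set_size, replace=False)
            order = idx[np.argsort(score[idx])]
            picks.append(y[order[rank]])   # keep only the unit with the given rank
    return np.mean(picks)

rng = np.random.default_rng(7)
N = 5000
x = rng.normal(size=N)
# "True" model of the target population; coefficients are made up for the demo.
p_true = logistic(-0.5 + 1.2 * x)
y = rng.binomial(1, p_true)
# Ranking scores from a logistic fit on a *different* population (perturbed coefficients).
score = logistic(-0.3 + 0.9 * x)
print("RSS estimate:   ", rss_proportion(y, score, set_size=3, n_cycles=200, rng=rng))
print("true proportion:", p_true.mean())
```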

4.
Two-stage designs for experiments with a large number of hypotheses
MOTIVATION: When a large number of hypotheses are investigated, as in gene expression analysis or gene association studies, the false discovery rate (FDR) is commonly applied. Conventional single-stage designs may lack power due to low sample sizes for the individual hypotheses. We propose two-stage designs where the first stage is used to screen the 'promising' hypotheses, which are further investigated at the second stage with an increased sample size. A multiple test procedure based on sequential individual P-values is proposed to control the FDR for the case of independent normal distributions with known variance. RESULTS: The power of optimal two-stage designs is considerably larger than the power of the corresponding single-stage design with equal costs. Extensions to the case of unknown variances and correlated test statistics are investigated by simulations. Moreover, it is shown that the simple multiple test procedure that uses first-stage data only for screening and derives the test decisions from second-stage data alone is a very powerful option.
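A minimal sketch of the simple two-stage option mentioned at the end of the abstract: stage-one data are used only to screen hypotheses, and the test decisions are derived from stage-two data alone, here with Benjamini-Hochberg control applied at the second stage. The screening threshold, sample sizes, effect sizes, and the use of BH rather than the paper's sequential procedure are assumptions for the illustration.

```python
import numpy as np
from scipy.stats import norm

def bh_reject(pvals, q):
    """Benjamini-Hochberg step-up procedure; returns a boolean rejection mask."""
    m = len(pvals)
    order = np.argsort(pvals)
    thresh = q * (np.arange(1, m + 1) / m)
    passed = pvals[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

rng = np.random.default_rng(0)
m, n1, n2 = 2000, 10, 40
mu = np.where(rng.random(m) < 0.05, 0.8, 0.0)      # 5% true effects (assumed)
stage1 = rng.normal(mu, 1.0, size=(n1, m)).mean(axis=0)
p1 = norm.sf(stage1 * np.sqrt(n1))                 # known-variance one-sided z-tests
keep = p1 < 0.1                                    # screening threshold (assumed)
stage2 = rng.normal(mu[keep], 1.0, size=(n2, keep.sum())).mean(axis=0)
p2 = norm.sf(stage2 * np.sqrt(n2))
rejected = bh_reject(p2, q=0.05)
print(f"screened: {keep.sum()}, rejected at stage 2: {rejected.sum()}")
```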

5.
In this paper, we propose a unified Bayesian joint modeling framework for studying the association between a binary treatment outcome and a baseline matrix-valued predictor. Specifically, a joint modeling approach relating an outcome to a matrix-valued predictor through a probabilistic formulation of multilinear principal component analysis is developed. This framework establishes a theoretical relationship between the outcome and the matrix-valued predictor, although the predictor is not explicitly expressed in the model. Simulation studies show that the proposed method is superior or comparable to other methods, such as a two-stage approach and classical principal component regression, in terms of both prediction accuracy and estimation of the association; its advantage is most notable when the sample size is small and the dimensionality of the imaging covariate is large. Finally, the proposed joint modeling approach is shown to be a very promising tool in an application exploring the association between baseline electroencephalography data and a favorable response to treatment in a depression treatment study, achieving a substantial improvement in prediction accuracy over competing methods.

6.
In the era of big data analysis, it is of interest to develop diagnostic tools for preliminary scanning of large spatial databases. One problem is the identification of locations where certain characteristics exceed a given norm, e.g., timber volume or mean tree diameter exceeding a user-defined threshold. Among the challenges are the large size of the database, randomness, the complex shape of the spatial mean surface, and heterogeneity. We propose a step-by-step procedure for achieving this for large spatial data sets. For illustration, we work through a simulated spatial data set as well as a forest inventory data set from Alaska (source: USDA Forest Services). Working within the framework of nonparametric regression modeling, the proposed method attains a high degree of flexibility regarding the shape of the spatial mean surface. Taking advantage of the large sample size, we also provide asymptotic formulas that are easy to implement in any statistical software.

7.
Switching between testing for superiority and non-inferiority has been an important statistical issue in the design and analysis of active controlled clinical trials. In practice, it is often handled with a two-stage testing procedure. It has been assumed that no type I error rate adjustment is required when switching to test for non-inferiority once the data fail to support the superiority claim, or when switching to test for superiority once the null hypothesis of non-inferiority is rejected with a pre-specified non-inferiority margin in a generalized historical control approach. However, when a cross-trial comparison approach is used for non-inferiority testing, controlling the type I error rate sometimes becomes an issue with the conventional two-stage procedure. We propose to adopt the single-stage simultaneous testing concept proposed by Ng (2003) to test both non-inferiority and superiority hypotheses simultaneously. The proposed procedure is based on Fieller's confidence interval procedure as proposed by Hauschke et al. (1999).
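A hedged sketch of a Fieller-type confidence interval for the ratio of two independent normal means, the kind of interval that underlies the simultaneous non-inferiority/superiority assessment. The decision rule shown (compare the lower confidence limit with a ratio margin and with 1) is a simplified illustration, not the exact Ng (2003) or Hauschke et al. (1999) procedure, and the data and margin are made up.

```python
import numpy as np
from scipy.stats import t as t_dist

def fieller_ci(xt, xc, alpha=0.05):
    """Fieller-type CI for mean(test)/mean(control) of two independent
    normal samples (unpooled variances of the sample means)."""
    mt, mc = xt.mean(), xc.mean()
    vt, vc = xt.var(ddof=1) / len(xt), xc.var(ddof=1) / len(xc)
    df = len(xt) + len(xc) - 2
    tq = t_dist.ppf(1 - alpha / 2, df)
    a = mc**2 - tq**2 * vc
    if a <= 0:
        raise ValueError("control mean not significantly different from 0; CI is unbounded")
    disc = np.sqrt((mt * mc) ** 2 - a * (mt**2 - tq**2 * vt))
    return ((mt * mc - disc) / a, (mt * mc + disc) / a)

rng = np.random.default_rng(3)
test = rng.normal(9.5, 2.0, 120)
control = rng.normal(10.0, 2.0, 120)
lo, hi = fieller_ci(test, control)
margin = 0.8    # illustrative non-inferiority margin on the ratio scale
print(f"ratio CI: ({lo:.3f}, {hi:.3f})")
print("non-inferior" if lo > margin else "not non-inferior",
      "| superior" if lo > 1 else "| not superior")
```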

8.
Cheung YK. Biometrics 2008, 64(3): 940-949
In situations where many regimens are possible candidates for a large phase III study, but too few resources are available to evaluate each relative to the standard, conducting a multi-armed randomized selection trial is a useful strategy to remove inferior treatments from further consideration. When the study has a relatively quick endpoint, such as an imaging-based lesion volume change in acute stroke patients, frequent interim monitoring of the trial is ethically and practically appealing to clinicians. In this article, I propose a class of sequential selection boundaries for multi-armed clinical trials, in which the objective is to select a treatment with a clinically significant improvement over the control group, or to declare futility if no such treatment exists. The proposed boundaries are easy to implement in a blinded fashion and can be applied on a flexible monitoring schedule in terms of calendar time. Design calibration with respect to prespecified levels of confidence is simple and can be accomplished when the response rate of the control group is known only up to an interval. One of the proposed methods is applied to redesign a selection trial with an imaging endpoint in acute stroke patients and is compared to an optimal two-stage design via simulations: the proposed method requires a smaller sample size on average than the two-stage design, and this advantage is substantial when there is in fact a treatment superior to the control group.

9.
The present study assesses the accuracy with which the subject-specific coordinates of the hip joint centre (HJC) in a pelvic anatomical frame can be estimated using different methods. The functional method was applied by calculating the centre of the best sphere described by the trajectory of markers placed on the thigh during several trials of hip rotations. Different prediction methods, proposed in the literature and in the present investigation, which estimate the HJC of adult subjects using regression equations and anthropometric measurements, were also assessed. The accuracy of each of the above-mentioned methods was investigated by comparing their predictions with measurements obtained on a sample of 11 male adult able-bodied volunteers using roentgen stereophotogrammetric analysis (RSA), assumed to provide the true HJC locations. Prediction methods estimated the HJC location at an average rms distance of 25-30 mm. The functional method performed significantly better and estimated HJCs within an rms distance of 13 mm on average. This result may be confidently generalised if the photogrammetric experiment is carefully conducted and an optimal analytical approach is used. The method is therefore suggested for use in motion analysis when the subject's hip range of motion is not limited. In addition, the fact that it is a non-invasive technique with relatively small and unbiased errors makes it suitable for identifying regression equations with no limit on sample size or population typology.
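A minimal sketch of the core computation of the functional method: an algebraic least-squares fit of a sphere to the trajectory of a thigh marker expressed in the pelvic frame, whose centre estimates the HJC. The simulated motion, marker distance, and noise level are illustrative assumptions; the real procedure involves careful definition of the anatomical frames and typically uses several thigh markers.

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit: returns (centre, radius).
    points is an (n, 3) array of marker positions in the pelvic frame."""
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = np.sum(points**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = sol[:3]
    radius = np.sqrt(sol[3] + centre @ centre)
    return centre, radius

# Simulated hip circumduction: one thigh marker 400 mm from a "true" HJC,
# with 1 mm measurement noise (all values are illustrative).
rng = np.random.default_rng(11)
true_hjc = np.array([30.0, -80.0, 50.0])
polar = rng.uniform(0.1, 0.8, 500)            # angle from the downward axis (rad)
azim = rng.uniform(0.0, 2 * np.pi, 500)       # azimuth (rad)
direction = np.c_[np.sin(polar) * np.cos(azim),
                  np.sin(polar) * np.sin(azim),
                  -np.cos(polar)]             # unit vectors on the sphere
marker = true_hjc + 400.0 * direction + rng.normal(0, 1.0, (500, 3))
centre, radius = fit_sphere(marker)
print("estimated HJC:", np.round(centre, 1), " radius:", round(float(radius), 1))
```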

10.
To reduce the lengthy duration of a crossover trial for comparing three treatments, the incomplete block design has often been considered. A sample size calculation procedure for testing nonequality between either of the two experimental treatments and a placebo under such a design is developed. To evaluate the performance of the proposed sample size calculation procedure, Monte Carlo simulation is employed. The accuracy of the sample size calculation procedure developed here is demonstrated in a variety of situations. Compared with the parallel groups design, a substantial proportional reduction in the total minimum required sample size is found when using the incomplete block crossover design. A crossover trial comparing two different doses of formoterol with a placebo with respect to forced expiratory volume is used to illustrate the sample size calculation procedure.

11.
Flexible design for following up positive findings
As more population-based studies suggest associations between genetic variants and disease risk, there is a need to improve the design of follow-up studies (stage II) in independent samples to confirm evidence of association observed at the initial stage (stage I). We propose to use flexible designs developed for randomized clinical trials in the calculation of sample size for follow-up studies. We apply a bootstrap procedure to correct for the effect of regression to the mean, also called the "winner's curse," that results from choosing to follow up the markers with the strongest associations. We show how the results from stage I can adaptively improve sample size calculations for stage II. Despite the adaptive use of stage I data, the proposed method maintains the nominal global type I error for final analyses based on either pure replication with the stage II data only or a joint analysis using information from both stages. Simulation studies show that sample-size calculations accounting for the impact of regression to the mean with the bootstrap procedure are more appropriate than the conventional method. We also find that, in the context of flexible design, the joint analysis is generally more powerful than the replication analysis.
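A rough sketch of a bootstrap correction for the winner's-curse bias in the stage I effect estimate of the top-ranked marker, which would then feed the stage II sample-size calculation. The selection rule (single largest absolute statistic), the in-bag/out-of-bag form of the bias estimate, and all simulation settings are simplifying assumptions rather than the exact procedure of the paper.

```python
import numpy as np

def marker_betas(G, y):
    """Per-marker simple regression slopes of trait y on the genotype columns of G."""
    Gc = G - G.mean(axis=0)
    yc = y - y.mean()
    return Gc.T @ yc / (Gc**2).sum(axis=0)

rng = np.random.default_rng(5)
n, m = 1000, 200
maf = rng.uniform(0.1, 0.5, m)
G = rng.binomial(2, maf, size=(n, m)).astype(float)
beta_true = np.zeros(m)
beta_true[0] = 0.15                       # one weak true signal (assumed)
y = G @ beta_true + rng.normal(0, 1, n)

beta = marker_betas(G, y)
top = np.argmax(np.abs(beta))             # stage I "winner"

# Bootstrap estimate of the selection bias on the absolute-effect scale:
# in-bag estimate at the bootstrap winner minus its out-of-bag estimate.
B, bias = 200, []
for _ in range(B):
    bag = rng.integers(0, n, n)
    oob = np.setdiff1d(np.arange(n), bag)
    b_in = marker_betas(G[bag], y[bag])
    k = np.argmax(np.abs(b_in))
    b_out = marker_betas(G[oob], y[oob])
    bias.append(abs(b_in[k]) - abs(b_out[k]))
corrected = abs(beta[top]) - np.mean(bias)
print(f"naive |estimate| {abs(beta[top]):.3f}, bias-corrected {corrected:.3f}, "
      f"true effect at selected marker {beta_true[top]:.3f}")
```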

12.
Lynn HS, McCulloch CE. Biometrics 1992, 48(2): 397-409
Two methods of analysis are compared to estimate the treatment effect of a comparative study where each treated individual is matched with a single control at the design stage. The usual matched-pairs analysis accounts for the pairing directly in its model, whereas regression adjustment ignores the matching but instead models the pairing using a set of covariates. For a normal linear model, the estimated treatment effect from the matched-pairs analysis (paired t-test) is more efficient. For a Bernoulli logistic model, matched-pairs analysis performs better when the sample size is small, but is inferior to logistic regression for large sample sizes.
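A toy simulation contrasting the two analyses for a normal linear model: the paired difference estimator against an OLS regression that ignores the pairing but adjusts for the matching covariate. The covariate model, pair-effect variance, and effect size are made-up values; the point is only that the paired estimator shows a smaller spread across replications.

```python
import numpy as np

rng = np.random.default_rng(9)
n_pairs, tau, reps = 30, 0.5, 2000
est_paired, est_ols = [], []
for _ in range(reps):
    x = rng.normal(size=n_pairs)                    # matching covariate, shared within a pair
    pair_eff = rng.normal(scale=0.8, size=n_pairs)  # residual pair effect not captured by x
    ctrl = 1.0 * x + pair_eff + rng.normal(size=n_pairs)
    trt = tau + 1.0 * x + pair_eff + rng.normal(size=n_pairs)
    est_paired.append(np.mean(trt - ctrl))          # matched-pairs analysis
    # Regression adjustment: pool all 2n observations, adjust for x, ignore pairing.
    y = np.concatenate([trt, ctrl])
    z = np.concatenate([np.ones(n_pairs), np.zeros(n_pairs)])
    X = np.column_stack([np.ones(2 * n_pairs), z, np.concatenate([x, x])])
    est_ols.append(np.linalg.lstsq(X, y, rcond=None)[0][1])
print("paired:     mean", np.mean(est_paired).round(3), " sd", np.std(est_paired).round(3))
print("regression: mean", np.mean(est_ols).round(3), " sd", np.std(est_ols).round(3))
```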

13.
The increasing interest in subpopulation analysis has led to the development of various new trial designs and analysis methods in the fields of personalized medicine and targeted therapies. In this paper, subpopulations are defined in terms of an accumulation of disjoint population subsets and will therefore be called composite populations. The proposed trial design is applicable to any set of composite populations, considering normally distributed endpoints and random baseline covariates. Treatment effects for composite populations are tested by combining p-values, calculated at the subset level, using the inverse normal combination function to generate test statistics for those composite populations, while the closed testing procedure accounts for multiple testing. Critical boundaries for intersection hypothesis tests are derived using multivariate normal distributions, reflecting the joint distribution of composite population test statistics given that no treatment effect exists. For sample size calculation and sample size recalculation, multivariate normal distributions are derived which describe the joint distribution of composite population test statistics under an assumed alternative hypothesis. Simulations demonstrate the absence of any practically relevant inflation of the type I error rate. The target power after sample size recalculation is typically met or close to being met.
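A minimal sketch of how subset-level p-values can be combined into a composite-population test statistic with the inverse normal combination function. Weighting by the square root of the subset sample sizes is an assumption for the illustration, and the closed testing step over intersection hypotheses is omitted.

```python
import numpy as np
from scipy.stats import norm

def composite_z(p_subset, n_subset):
    """Inverse-normal combination of disjoint-subset p-values into one
    composite-population test statistic; weights ~ sqrt(subset sample size)."""
    w = np.sqrt(np.asarray(n_subset, float))
    w = w / np.sqrt((w**2).sum())      # squared weights sum to 1, so z is N(0,1) under H0
    return np.sum(w * norm.isf(p_subset))

# Three disjoint subsets; the composite population is their union (illustrative numbers).
p = [0.03, 0.20, 0.40]
n = [40, 60, 25]
z = composite_z(p, n)
print(f"composite z = {z:.3f}, one-sided p = {norm.sf(z):.4f}")
```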

14.
Bhoj (1997c) proposed a new ranked set sampling (NRSS) procedure for a specific two-parameter family of distributions when the sample size is even. This NRSS procedure can also be applied to a one-parameter family of distributions when the sample size is even. However, it cannot be used if the sample size is odd. Therefore, in this paper, we propose a modified version of the NRSS procedure which can be used for one-parameter distributions when the sample size is odd. A simple estimator for the parameter based on the proposed NRSS is derived. The relative precisions of this estimator are higher than those of estimators based on other ranked set sampling procedures and of the best linear unbiased estimator using all order statistics.

15.
Sha Q, Zhang Z, Zhang S. PLoS ONE 2011, 6(7): e21957
In family-based data, association information can be partitioned into between-family information and within-family information. Based on this observation, Steen et al. (Nature Genetics, 2005, 683-691) proposed an interesting two-stage test for genome-wide association (GWA) studies under family-based designs which performs genomic screening and replication using the same data set. In the first stage, a screening test based on the between-family information is used to select markers. In the second stage, an association test based on the within-family information is used to test association at the selected markers. However, we learn from the results of case-control studies (Skol et al., Nature Genetics, 2006, 209-213) that this two-stage approach may not be optimal. In this article, we propose a novel two-stage joint analysis for GWA studies under family-based designs. For this joint analysis, we first propose a new screening test that is based on the between-family information and is robust to population stratification. This new screening test is used in the first stage to select markers. Then, a joint test that combines the between-family information and the within-family information is used in the second stage to test association at the selected markers. Extensive simulation studies demonstrate that the joint analysis always results in increased power to detect genetic association and is robust to population stratification.
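A hedged sketch of the joint-analysis idea, in the spirit of Skol et al. (2006): combine the stage-one and stage-two statistics with weights reflecting the information fractions instead of using stage two alone. The p-values below are computed from the standard normal without accounting for the stage-one selection, so this only shows the weighting, not the paper's actual thresholds; z1, z2, and pi1 are made-up numbers.

```python
import numpy as np
from scipy.stats import norm

def joint_z(z1, z2, pi1):
    """Weighted combination of stage-1 and stage-2 z-statistics, where pi1 is
    the fraction of the total information contributed by stage 1."""
    return np.sqrt(pi1) * z1 + np.sqrt(1.0 - pi1) * z2

# Toy numbers: a marker passes the stage-1 screen and is tested at stage 2.
z1, z2, pi1 = 2.1, 2.4, 0.3
print("replication-only p:", norm.sf(z2).round(5))
print("joint-analysis p:  ", norm.sf(joint_z(z1, z2, pi1)).round(5))
```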

16.
Thach CT, Fisher LD. Biometrics 2002, 58(2): 432-438
In the design of clinical trials, the sample size for the trial is traditionally calculated from estimates of parameters of interest, such as the mean treatment effect, which can often be inaccurate. However, recalculation of the sample size based on an estimate of the parameter of interest that uses accumulating data from the trial can lead to inflation of the overall Type I error rate of the trial. The self-designing method of Fisher, also known as the variance-spending method, allows the use of all accumulating data in a sequential trial (including the estimated treatment effect) in determining the sample size for the next stage of the trial without inflating the Type I error rate. We propose a self-designing group sequential procedure to minimize the expected total cost of a trial. Cost is an important parameter to consider in the statistical design of clinical trials due to limited financial resources. Using Bayesian decision theory on the accumulating data, the design sequentially specifies the optimal sample size and the proportion of the test statistic's variance needed for each stage of the trial to minimize its expected cost. The optimality is with respect to a prior distribution on the parameter of interest. Results are presented for a simple two-stage trial. This method can be extended to nonmonetary costs, such as ethical costs or quality-adjusted life years.

17.
The drop plate (DP) method can be used to determine the number of viable suspended bacteria in a known beaker volume. The drop plate method has some advantages over the spread plate (SP) method. Less time and effort are required to dispense the drops onto an agar plate than to spread an equivalent total sample volume into the agar. By distributing the sample in drops, colony counting can be done faster and perhaps more accurately. Even though it has been used in the laboratory for many years, the drop plate method has not been standardized. Some technicians use 10-fold dilutions, others use twofold. Some technicians plate a total volume of 0.1 ml, others plate 0.2 ml. The optimal combination of such factors would be useful to know when performing the drop plate method. This investigation was conducted to determine (i) the standard deviation of the bacterial density estimate, (ii) the cost of performing the drop plate procedure, (iii) the optimal drop plate design, and (iv) the advantages of the drop plate method in comparison to the standard spread plate method. The optimal design is the combination of factor settings that achieves the smallest standard deviation for a fixed cost. Computer simulation techniques and regression analysis were used to express the standard deviation as a function of the beaker volume, dilution factor, and volume plated. The standard deviation expression is also applicable to the spread plate method.
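A rough sketch of the kind of computer simulation described: colony counts per drop are modeled as Poisson with mean determined by the suspension density, the dilution factor, and the volume per drop, and the standard deviation of the resulting log10 density estimate is computed for a few design settings. The countable range per drop and all other parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_drop_plate(density, dilution_factor, n_dilutions, vol_plated_ml,
                        n_drops, reps, rng):
    """Monte Carlo SD of the log10 density estimate from the drop plate method.
    Counts per drop are Poisson with mean = diluted density * volume per drop;
    the first dilution whose mean count falls in a countable range is used."""
    vol_per_drop = vol_plated_ml / n_drops
    estimates = []
    for _ in range(reps):
        for d in range(1, n_dilutions + 1):
            mean_count = density * dilution_factor**(-d) * vol_per_drop
            counts = rng.poisson(mean_count, n_drops)
            if 3 <= counts.mean() <= 30:          # countable range per drop (assumed)
                est = counts.sum() / (n_drops * vol_per_drop) * dilution_factor**d
                estimates.append(np.log10(est))
                break
    return np.std(estimates)

rng = np.random.default_rng(2)
for df, vol in [(10, 0.1), (10, 0.2), (2, 0.1), (2, 0.2)]:
    sd = simulate_drop_plate(1e7, df, 16, vol, n_drops=10, reps=500, rng=rng)
    print(f"dilution x{df}, {vol} ml plated: SD(log10 CFU/ml) ~ {sd:.3f}")
```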

18.
As compared to classical, fixed sample size techniques, simulation studies showed that a proposed sequential sampling procedure can provide a substantial decrease (up to 50% in some cases) in the mean sample size required for the detection of linkage between marker loci and quantitative trait loci. Sequential sampling with truncation set at the required sample size for the non-sequential test produced a modest further decrease in mean sample size, accompanied by a modest increase in error probabilities. Sequential sampling with observations taken in groups produced a noticeable increase in mean sample size, with a considerable decrease in error probabilities, as compared to straightforward sequential sampling. It is concluded that sequential sampling is particularly useful for experiments aimed at investigating the genetics of differences between lines or strains that differ in some single outstanding trait.

19.
Posch M, Bauer P. Biometrics 2000, 56(4): 1170-1176
This article deals with sample size reassessment for adaptive two-stage designs based on conditional power arguments utilizing the variability observed at the first stage. Fisher's product test for the p-values from the disjoint samples at the two stages is considered in detail for the comparison of the means of two normal populations. We show that stopping rules allowing for early acceptance of the null hypothesis that are optimal with respect to the average sample size may lead to a severe decrease in the overall power if the sample size is underestimated a priori. This problem can be overcome by choosing designs with low probabilities of early acceptance or by mid-trial adaptation of the early acceptance boundary using the variability observed in the first stage. This modified procedure is negligibly anticonservative and preserves the power.
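For context, a minimal sketch of Fisher's product criterion for combining the stage-wise p-values: with independent uniform p-values and no early stopping, the null hypothesis is rejected when p1*p2 falls below the bound c_alpha solving alpha = c_alpha(1 - ln c_alpha). The early acceptance and rejection boundaries discussed in the paper are omitted; the p-values shown are made up.

```python
import numpy as np
from scipy.optimize import brentq

def fisher_product_bound(alpha):
    """Critical value c with P(P1*P2 <= c) = alpha for independent uniform
    stage-wise p-values (no early stopping): alpha = c * (1 - log(c))."""
    return brentq(lambda c: c * (1.0 - np.log(c)) - alpha, 1e-12, alpha)

alpha = 0.025
c = fisher_product_bound(alpha)
p1, p2 = 0.08, 0.03        # illustrative stage-wise p-values
print(f"c_alpha = {c:.6f}")
print("reject H0" if p1 * p2 <= c else "do not reject H0")
```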

20.
Multiple regression of observational data is frequently used to infer causal effects. Partial regression coefficients are biased estimates of causal effects if unmeasured confounders are not in the regression model. The sensitivity of partial regression coefficients to omitted confounders is investigated with a Monte-Carlo simulation. A subset of causal traits is "measured" and their effects are estimated using ordinary least squares regression and compared to their expected values. Three major results are: (1) the error due to confounding is much larger than that due to sampling, especially with large samples, (2) confounding error shrinks trivially with sample size, and (3) small true effects are frequently estimated as large effects. Consequently, confidence intervals from regression are poor guides to the true intervals, especially with large sample sizes. The addition of a confounder to the model improves estimates only 55% of the time. Results are improved with complete knowledge of the rank order of causal effects, but even with this omniscience, measured intervals are poor proxies for true intervals if there are many unmeasured confounders. The results suggest that only under very limited conditions can we have much confidence in the magnitude of partial regression coefficients as estimates of causal effects.
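A minimal version of the kind of Monte-Carlo described: correlated causal traits are generated, only a subset is "measured", and the OLS partial regression coefficients are compared with the true causal effects. The number of traits, the equicorrelation, and the effect sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 1000, 500
# Five correlated causal traits; only the first two are "measured" (assumed setup).
rho = 0.5
cov = np.full((5, 5), rho) + (1 - rho) * np.eye(5)
beta = np.array([0.3, 0.1, 0.4, 0.2, 0.5])        # true causal effects
estimates = []
for _ in range(reps):
    X = rng.multivariate_normal(np.zeros(5), cov, size=n)
    y = X @ beta + rng.normal(0, 1, n)
    Xm = np.column_stack([np.ones(n), X[:, :2]])  # regression on the measured traits only
    estimates.append(np.linalg.lstsq(Xm, y, rcond=None)[0][1:])
estimates = np.array(estimates)
print("true effects:      ", beta[:2])
print("mean OLS estimates:", estimates.mean(axis=0).round(3))   # confounding bias
print("SD across reps:    ", estimates.std(axis=0).round(3))    # sampling error
```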
