Similar Documents
20 similar documents found (search time: 15 ms)
1.
Modification of sample size in group sequential clinical trials (Total citations: 1; self-citations: 0; others: 1)
Cui L, Hung HM, Wang SJ. Biometrics 1999, 55(3): 853-857
In group sequential clinical trials, sample size reestimation can be a complicated issue when the decision to change the sample size is influenced by the observed sample path. Our simulation studies show that increasing the sample size based on an interim estimate of the treatment difference can substantially inflate the probability of type I error in most practical situations. A new group sequential test procedure is developed by modifying the weights used in the traditional repeated significance two-sample mean test. The new test preserves the type I error probability at the target level and can provide a substantial gain in power from the increased sample size. Generalizations of the new procedure are discussed.
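The phenomenon described above can be sketched in a Monte Carlo simulation. The trial sizes and the adaptation rule below are hypothetical illustrations, not the authors' settings: under the null, a data-driven increase of the stage-2 sample size inflates the naive pooled test's type I error, while a statistic whose weights are frozen at the planned information fractions stays at the nominal level.

```python
import math
import random
from statistics import NormalDist

random.seed(42)
CRIT = NormalDist().inv_cdf(0.975)   # one-sided 2.5% critical value
N1, N2_PLAN = 50, 50                 # planned per-arm stage sizes (hypothetical)

def one_trial():
    z1 = random.gauss(0, 1)          # stage-1 statistic under H0
    # Hypothetical rule: an unpromising interim estimate triggers a
    # 10-fold increase of the stage-2 sample size.
    n2 = N2_PLAN * 10 if z1 < 1.0 else N2_PLAN
    z2 = random.gauss(0, 1)          # stage-2 statistic, new data only
    # Naive test: pool all data as if n2 had been fixed in advance.
    z_naive = (math.sqrt(N1) * z1 + math.sqrt(n2) * z2) / math.sqrt(N1 + n2)
    # Weighted test: weights frozen at the planned information fractions.
    w1 = N1 / (N1 + N2_PLAN)
    z_wtd = math.sqrt(w1) * z1 + math.sqrt(1 - w1) * z2
    return z_naive > CRIT, z_wtd > CRIT

R = 200_000
naive, wtd = (sum(x) / R for x in zip(*(one_trial() for _ in range(R))))
print(f"naive type I error:    {naive:.4f}")   # noticeably above 0.025
print(f"weighted type I error: {wtd:.4f}")     # close to 0.025
```

The weighted statistic is exactly standard normal under the null because the weights do not depend on the observed path; the naive statistic is not, which is the source of the inflation.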

2.
In cancer clinical proteomics, MALDI and SELDI profiling are used to search for biomarkers of potentially curable early-stage disease. A given number of samples must be analysed in order to detect clinically relevant differences between cancers and controls with adequate statistical power. Clinical proteomic profiling studies typically provide expression data for each peak (protein or peptide) from two or more clinically defined groups of subjects, together with exposure and confounder information on each subject; the subjects are usually not randomized, and the data are usually available in replicate. At the design stage, however, covariates are typically not available and are often ignored in sample size calculations. This leads to the use of insufficient numbers of samples and to reduced power when the numbers of subjects are imbalanced between phenotypic groups. A method is proposed for accommodating information on covariates, data imbalances, and design characteristics, such as technical replication and the observational nature of these studies, in sample size calculations. It assumes knowledge of a joint distribution for the protein expression values and the covariates. When discretized covariates are considered, the effect of the covariates enters the calculations as a function of the proportions of subjects with specific attributes. This makes it relatively straightforward (even when pilot data on subject covariates are unavailable) to specify and to adjust for the effect of the expected heterogeneities. The new method suggests certain experimental designs which lead to the use of a smaller number of samples when planning a study. Analysis of data from the proteomic profiling of colorectal cancer reveals that fewer samples are needed when a study is balanced than when it is unbalanced, and when the IMAC30 chip-type is used.
The method is implemented in the clippda package and is available in R at: http://www.bioconductor.org/help/bioc-views/release/bioc/html/clippda.html
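The role of technical replication in such a design can be sketched with a generic normal-theory sample size formula, in which the technical variance is averaged down by the number of replicates per sample. This is a minimal illustration of the general principle, not the clippda algorithm itself; all numbers are hypothetical.

```python
import math
from statistics import NormalDist

def samples_per_group(delta, var_biological, var_technical, replicates,
                      alpha=0.05, power=0.80):
    """Two-group sample size for detecting a mean difference `delta`.
    The effective variance combines biological variance with the technical
    variance divided by the number of technical replicates per sample."""
    z = NormalDist().inv_cdf
    var_eff = var_biological + var_technical / replicates
    n = 2 * var_eff * (z(1 - alpha / 2) + z(power)) ** 2 / delta ** 2
    return math.ceil(n)

# Running each sample in duplicate shrinks the required number of subjects:
print(samples_per_group(1.0, 1.0, 0.5, replicates=1))  # 24
print(samples_per_group(1.0, 1.0, 0.5, replicates=2))  # 20
```

Because only the technical component shrinks with replication, the gain saturates once the biological variance dominates, which is why replication cannot substitute for additional subjects.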

3.
Summary. In this article, we apply the recently developed Bayesian wavelet-based functional mixed model methodology to analyze MALDI-TOF mass spectrometry proteomic data. By modeling mass spectra as functions, this approach avoids reliance on peak detection methods. The flexibility of this framework in modeling nonparametric fixed and random effect functions enables it to model the effects of multiple factors simultaneously, allowing one to perform inference on multiple factors of interest using the same model fit, while adjusting for clinical or experimental covariates that may affect both the intensities and locations of peaks in the spectra. For example, this provides a straightforward way to account for systematic block and batch effects that characterize these data. From the model output, we identify spectral regions that are differentially expressed across experimental conditions, in a way that takes both statistical and clinical significance into account and controls the Bayesian false discovery rate to a prespecified level. We apply this method to two cancer studies.

4.
We describe the use of commercially available microcentrifugation devices (spin filters) for cleanup and digestion of protein samples for mass spectrometry analyses. The protein sample is added to the upper chamber of a spin filter with a ≥3000 molecular weight cutoff membrane and then washed prior to resuspension in ammonium bicarbonate. The protein is then reduced, alkylated, and digested with trypsin in the upper chamber and the peptides are recovered by centrifugation through the membrane. The method provides digestion efficiencies comparable to standard in-solution digests, avoids lengthy dialysis steps, and allows rapid cleanup of samples containing salts, some detergents, and acidic or basic buffers.

5.
In historical control trials (HCTs), the experimental therapy is compared with a control therapy that has been evaluated in a previously conducted trial. Makuch and Simon developed a sample size formula in which the observations from the HC group were considered not subject to sampling variability. Many researchers have pointed out that the Makuch-Simon sample size formula does not preserve the nominal power and type I error. We develop a sample size calculation approach that properly accounts for the uncertainty in the true response rate of the HC group. We demonstrate that the empirical power and type I error, obtained over the simulated HC data, have extremely skewed distributions. We then derive a closed-form sample size formula that enables researchers to control percentiles, instead of means, of the power and type I error, accounting for the skewness of the distributions. A simulation study demonstrates that this approach preserves the operating characteristics in the more realistic scenario where the true response rate of the HC group is unknown. We also show that the controlled percentiles can be used to describe the joint behavior of the power and type I error. This provides a new perspective on the assessment of HCTs.
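The skewness described above can be illustrated with a small simulation. All numbers here are hypothetical: a historical control arm with 30/50 responses, and a single-arm trial of 100 subjects that tests the new therapy against the observed HC rate as if it were the known truth. Drawing the true control rate from its Beta posterior shows how asymmetric the distribution of the actual type I error is.

```python
import random
from statistics import NormalDist, mean, median

random.seed(3)
nd = NormalDist()
N_HC, X_HC = 50, 30                   # historical controls: 30/50 responses
P_ASSUMED = X_HC / N_HC               # 0.6, treated as known by the design
N_NEW = 100                           # planned experimental-arm size
# One-sided 5% rejection boundary for the observed response rate:
bound = P_ASSUMED + nd.inv_cdf(0.95) * (P_ASSUMED * (1 - P_ASSUMED) / N_NEW) ** 0.5

def actual_alpha(p_true):
    """Type I error if the experimental therapy merely matches p_true."""
    se = (p_true * (1 - p_true) / N_NEW) ** 0.5
    return 1 - nd.cdf((bound - p_true) / se)

# True control rate drawn from the Beta(31, 21) posterior of the HC data:
draws = [actual_alpha(random.betavariate(X_HC + 1, N_HC - X_HC + 1))
         for _ in range(20_000)]
print(f"mean   actual alpha: {mean(draws):.3f}")    # pulled up by a long right tail
print(f"median actual alpha: {median(draws):.3f}")  # near the nominal 0.05
```

Because the mean is dragged far from the median by the right tail, controlling the mean type I error says little about a typical realization, which motivates controlling percentiles instead.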

6.
Determining sample sizes for microarray experiments is important, but the complexity of these experiments and the large amounts of data they produce can make the sample size issue seem daunting and tempt researchers to use rules of thumb in place of formal calculations based on the goals of the experiment. Here we present formulae for determining sample sizes to achieve a variety of experimental goals, including class comparison and the development of prognostic markers. Results are derived which describe the impact of pooling, technical replicates, and dye-swap arrays on sample size requirements; these results depend on the relative sizes of the different sources of variability. A variety of common experimental situations and designs used with single-label and dual-label microarrays are considered, and procedures for controlling the false discovery rate are discussed. Our calculations are based on relatively simple yet realistic statistical models for the data and provide straightforward sample size calculation formulae.
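The kind of formula described above can be sketched as a standard two-group normal-theory calculation on the log2 scale, where a very stringent per-gene alpha stands in for multiple-testing control. The parameter values are hypothetical illustrations, not recommendations from the paper.

```python
import math
from statistics import NormalDist

def arrays_per_group(delta, sigma, alpha, power):
    """Per-group sample size to detect a mean log2 fold change `delta`
    against between-sample standard deviation `sigma`, two-sided level
    `alpha`. A minimal sketch of a class-comparison calculation."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * (sigma / delta) ** 2 * (z(1 - alpha / 2) + z(power)) ** 2)

# A 2-fold change (delta = 1 on the log2 scale) with sigma = 0.5, tested at
# the stringent gene-level alpha = 0.001 typical of genome-wide screens:
print(arrays_per_group(1.0, 0.5, 0.001, 0.90))  # 11
# The same comparison at an unadjusted alpha = 0.05 needs far fewer arrays:
print(arrays_per_group(1.0, 0.5, 0.05, 0.90))   # 6
```

Tightening alpha to account for thousands of simultaneous tests is what drives microarray sample sizes well above the familiar two-group numbers.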

7.
Sample size for Poisson regression (Total citations: 2; self-citations: 0; others: 2)
Signorini, David F. Biometrika 1991, 78(2): 446-450

8.
Strategies employing non-gel based methods for quantitative proteomic profiling such as isotope coded affinity tags coupled with mass spectrometry (ICAT-MS) are gaining attention as alternatives to two-dimensional gel electrophoresis (2-DE). We have conducted a large-scale investigation to determine the degree of reproducibility and depth of proteome coverage of a typical ICAT-MS experiment by measuring protein changes in Escherichia coli treated with triclosan, an inhibitor of fatty acid biosynthesis. The entire ICAT-MS experiment was conducted on four independent occasions where more than 24 000 peptides were quantitated using an ion-trap mass spectrometer. Our results demonstrated that quantitatively, the technique provided good reproducibility (median coefficient of variation of ratios was 18.6%), and on average identified more than 450 unique proteins per experiment. However, the method was strongly biased to detect acidic proteins (pI < 7), under-represented small proteins (<10 kDa) and failed to show clear superiority over 2-DE methods in monitoring hydrophobic proteins from cell lysates.

9.
Summary. We develop formulae to calculate sample sizes for ranking and selection of differentially expressed genes among different clinical subtypes or prognostic classes of disease in genome-wide screening studies with microarrays. The formulae aim to control the probability that a selected subset of genes with fixed size contains enough truly top-ranking informative genes, which can be assessed on the basis of the distribution of ordered statistics from independent genes. We provide strategies for conservative designs to cope with issues of unknown number of informative genes and unknown correlation structure across genes. Application of the formulae to a clinical study for multiple myeloma is given.
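The design quantity described above can be approximated by Monte Carlo under simplifying assumptions (independent genes, known effect size; all settings hypothetical): the probability that a selected list of the G largest test statistics captures every truly informative gene, and how sharply it grows with the standardized effect size, i.e., with sample size.

```python
import random

random.seed(11)
M, M_INFO, G = 200, 5, 20   # genes, informative genes, selected-list size

def prob_all_captured(effect, sims=2000):
    """Probability that all M_INFO informative genes (absolute test
    statistics centered at `effect`) land in the top-G list."""
    hits = 0
    for _ in range(sims):
        stats = [abs(random.gauss(effect, 1)) for _ in range(M_INFO)] + \
                [abs(random.gauss(0, 1)) for _ in range(M - M_INFO)]
        cutoff = sorted(stats, reverse=True)[G - 1]
        # informative genes occupy the first M_INFO positions of `stats`
        hits += all(s >= cutoff for s in stats[:M_INFO])
    return hits / sims

p_small, p_large = prob_all_captured(2.0), prob_all_captured(4.0)
print(f"effect 2.0: {p_small:.3f}")
print(f"effect 4.0: {p_large:.3f}")  # markedly higher
```

Inverting this relationship (choosing n so that the capture probability reaches a target) is the essence of a ranking-and-selection sample size calculation.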

10.
Adaptive clinical trials are becoming very popular because of their flexibility in allowing mid-stream changes of sample size, endpoints, populations, etc. At the same time, they have been regarded with mistrust because they can produce bizarre results in very extreme settings. Understanding the advantages and disadvantages of these rapidly developing methods is a must. This paper reviews flexible methods for sample size re-estimation when the outcome is continuous.

11.
Summary. A new methodology is proposed for estimating the proportion of true null hypotheses in a large collection of tests. Each test concerns a single parameter δ whose value is specified by the null hypothesis. We combine a parametric model for the conditional cumulative distribution function (CDF) of the p-value given δ with a nonparametric spline model for the density g(δ) of δ under the alternative hypothesis. The proportion of true null hypotheses and the coefficients in the spline model are estimated by penalized least squares subject to constraints that guarantee that the spline is a density. The estimator is computed efficiently using quadratic programming. Our methodology produces an estimate of the density of δ when the null is false and can address such questions as "when the null is false, is the parameter usually close to the null or far away?" This leads us to define a falsely interesting discovery rate (FIDR), a generalization of the false discovery rate. We contrast the FIDR approach to Efron's (2004, Journal of the American Statistical Association 99, 96-104) empirical null hypothesis technique. We discuss the use of our estimators in sample size calculations based on the expected discovery rate (EDR). Our recommended estimator of the proportion of true nulls is less biased than estimators based upon the marginal density of the p-values at 1. In a simulation study, we compare our estimators to the convex, decreasing estimator of Langaas, Lindqvist, and Ferkingstad (2005, Journal of the Royal Statistical Society, Series B 67, 555-572). The most biased of our estimators is very similar in performance to the convex, decreasing estimator. As an illustration, we analyze differences in gene expression between resistant and susceptible strains of barley.
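The baseline that the abstract compares against — estimating the null proportion from the p-value density far from zero — can be sketched in a few lines. The mixture settings below are hypothetical, and the estimator shown is the simple tail-counting one, not the spline-based method proposed in the paper.

```python
import random
from statistics import NormalDist

random.seed(5)
nd = NormalDist()
M, PI0 = 20_000, 0.80          # number of tests, true proportion of nulls

pvals = []
for _ in range(M):
    if random.random() < PI0:
        pvals.append(random.random())      # null: p-value uniform on (0, 1)
    else:
        z = random.gauss(2.5, 1)           # alternative: shifted test statistic
        pvals.append(1 - nd.cdf(z))        # one-sided p-value, mostly small

# Tail-counting estimator: nulls contribute uniformly above LAM, so the
# count of large p-values, rescaled, estimates the null proportion.
LAM = 0.5
pi0_hat = sum(p > LAM for p in pvals) / ((1 - LAM) * M)
print(f"estimated pi0: {pi0_hat:.3f}")  # near the true 0.80
```

Any alternative p-values that stray above the cutoff push the estimate upward, which is the bias the paper's model-based estimator is designed to reduce.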

12.
The internal pilot study design makes it possible to estimate nuisance parameters required for sample size calculation on the basis of data accumulated in an ongoing trial. In this way, misspecifications made when determining the sample size in the planning phase can be corrected using the updated knowledge. According to regulatory guidelines, the blindness of all personnel involved in the trial has to be preserved and the specified type I error rate has to be controlled when the internal pilot study design is applied. Especially in the late phase of drug development, most clinical studies are run in more than one centre. In these multicentre trials, one may have to deal with an unequal distribution of the patient numbers among the centres. Depending on the type of analysis (weighted or unweighted), unequal centre sample sizes may lead to a substantial loss of power. Like the variance, the magnitude of the imbalance is difficult to predict in the planning phase. We propose a blinded sample size recalculation procedure for the internal pilot study design in multicentre trials with a normally distributed outcome and two balanced treatment groups that are analysed with either the weighted or the unweighted approach. The method addresses both uncertainty with respect to the variance of the endpoint and the extent of disparity of the centre sample sizes. The actual type I error rate as well as the expected power and sample size of the procedure are investigated in simulation studies. For both the weighted and the unweighted analysis, the maximal type I error rate was not exceeded, or only minimally so. Furthermore, application of the proposed procedure led to an expected power that achieves the specified value in many cases and remains very close to it throughout.
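A single blinded recalculation step can be sketched as follows, with all numbers hypothetical and the multicentre aspect set aside: the interim data from both arms are pooled without unblinding, their one-sample variance is computed, and the final per-group size is recalculated. Under the alternative the pooled variance absorbs an extra Δ²/4, so this simple rule errs slightly on the safe side.

```python
import math
import random
from statistics import NormalDist, variance

random.seed(9)
z = NormalDist().inv_cdf
DELTA, ALPHA, POWER = 1.0, 0.05, 0.90
TRUE_SD = 2.0                        # planning guess was 1.5, i.e. too low

def per_group_n(var):
    """Standard two-group normal-theory sample size per arm."""
    return math.ceil(2 * var * (z(1 - ALPHA / 2) + z(POWER)) ** 2 / DELTA ** 2)

print(per_group_n(1.5 ** 2))         # 48: the (too small) planned size
print(per_group_n(TRUE_SD ** 2))     # 85: what the true variance requires

# Interim look: 40 subjects per arm, analysed blinded (arms concatenated,
# so group labels are never used).
interim = [random.gauss(0.0, TRUE_SD) for _ in range(40)] + \
          [random.gauss(DELTA, TRUE_SD) for _ in range(40)]
n_reest = per_group_n(variance(interim))
print(n_reest)                       # recomputed from the blinded variance
```

The recalculated size recovers most of the power lost to the optimistic planning variance without ever revealing treatment assignments, which is what keeps the procedure acceptable under blinding requirements.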

13.
To reduce the lengthy duration of a crossover trial comparing three treatments, the incomplete block design has often been considered. A sample size calculation procedure for testing nonequality between either of two experimental treatments and a placebo under such a design is developed. Monte Carlo simulation is employed to evaluate the performance of the proposed procedure, and its accuracy is demonstrated in a variety of situations. Compared with the parallel groups design, a substantial proportional reduction in the total minimum required sample size is found when the incomplete block crossover design is used. A crossover trial comparing two different doses of formoterol with a placebo on the forced expiratory volume is used to illustrate the sample size calculation procedure.

14.
15.
Cho H, Smalley DM, Theodorescu D, Ley K, Lee JK. Proteomics 2007, 7(20): 3681-3692
LC-MS/MS with labeling techniques such as the isotope-coded affinity tag (ICAT) enables quantitative analysis of paired protein samples. However, current identification and quantification of differentially expressed peptides (and proteins) are not reliable for large proteomics screens of complex biological samples. The number of replicates is often limited because of the high cost of experiments and the limited supply of samples. Traditionally, a simple fold-change cutoff is used, which results in a high rate of false positives. Standard statistical methods such as the two-sample t-test are unreliable and severely underpowered due to the high variability of LC-MS/MS data, especially when only a small number of replicates is available. Using an advanced error-pooling technique, we propose a novel statistical method that can reliably identify differentially expressed proteins while maintaining high sensitivity, particularly with a small number of replicates. The proposed method was applied both to an extensive simulation study and to a proteomics comparison between microparticles (MPs) generated from platelets (platelet MPs) and MPs isolated from plasma (plasma MPs). In these studies, our statistical analysis showed a significant improvement in identifying proteins that are differentially expressed but not detected by other statistical methods. In particular, several important proteins (two peptides for beta-globin and three peptides for von Willebrand factor, vWF) were identified with very small false discovery rates (FDRs) by our method, while none was significant when other conventional methods were used. These proteins have well-documented roles in microparticles in human blood cells: vWF is a platelet and endothelial cell product that binds to P-selectin, GP1b, and GP IIb/IIIa, and beta-globin is one of the peptides of hemoglobin involved in the transport of oxygen by red blood cells.

16.
Liu Q, Chi GY. Biometrics 2001, 57(1): 172-177
Proschan and Hunsberger (1995, Biometrics 51, 1315-1324) proposed a two-stage adaptive design that maintains the type I error rate. For practical applications, a two-stage adaptive design is also required to achieve a desired statistical power while limiting the maximum overall sample size. In our proposal, a two-stage adaptive design comprises a main stage and an extension stage, where the main stage has sufficient power to reject the null under the anticipated effect size and the extension stage allows the sample size to be increased in case the true effect size is smaller than anticipated. For statistical inference, methods for obtaining the overall adjusted p-value, point estimate, and confidence intervals are developed. An exact two-stage test procedure is also outlined for robust inference.

17.
Cramer R, Corless S. Proteomics 2005, 5(2): 360-370
We have combined several key sample preparation steps for the use of a liquid matrix system to provide high analytical sensitivity in automated ultraviolet matrix-assisted laser desorption/ionisation mass spectrometry (UV-MALDI-MS). This new sample preparation protocol employs a matrix mixture based on the glycerol matrix mixture described by Sze et al. The low-femtomole sensitivity achievable with this new protocol enables proteomic analysis of protein digests comparable to solid-state matrix systems. For automated data acquisition and analysis, the MALDI performance of this liquid matrix surpasses the conventional solid-state MALDI matrices. Besides the inherent general advantages of liquid samples for automated sample preparation and data acquisition, the use of the presented liquid matrix significantly reduces the extent of unspecific ion signals in peptide mass fingerprints compared with typically used solid matrices, such as 2,5-dihydroxybenzoic acid (DHB) or α-cyano-4-hydroxycinnamic acid (CHCA). In particular, matrix and low-mass ion signals and ion signals resulting from cation adduct formation are dramatically reduced. Consequently, the confidence level of protein identification by peptide mass mapping of in-solution and in-gel digests is generally higher.

18.
Mycobacterial carbohydrate sulfotransferase Stf0 catalyzes the sulfuryl group transfer from 3'-phosphoadenosine-5'-phosphosulfate (PAPS) to trehalose. The sulfation of trehalose is required for the biosynthesis of sulfolipid-1, the most abundant sulfated metabolite found in Mycobacterium tuberculosis. In this paper, an efficient enzyme kinetics assay for Stf0 using electrospray ionization (ESI) mass spectrometry is presented. The kinetic constants of Stf0 were measured, and the catalytic mechanism of the sulfuryl group transfer reaction was investigated in initial rate kinetics and product inhibition experiments. In addition, Fourier transform ion cyclotron resonance (FT-ICR) mass spectrometry was employed to detect the noncovalent complexes, the Stf0-PAPS and Stf0-trehalose binary complexes, and a Stf0-3'-phosphoadenosine 5'-phosphate-trehalose ternary complex. The results from our study strongly suggest a rapid equilibrium random sequential Bi-Bi mechanism for Stf0 with formation of a ternary complex intermediate. In this mechanism, PAPS and trehalose bind and their products are released in random fashion. To our knowledge, this is the first detailed mechanistic data reported for Stf0, which further demonstrates the power of mass spectrometry in elucidating the reaction pathway and catalytic mechanism of promising enzymatic systems.
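The initial-rate signature of such a mechanism can be illustrated numerically. The kinetic constants below are arbitrary illustrations, not measured Stf0 values: for a rapid equilibrium random Bi-Bi rate law, the slope of the double-reciprocal plot of 1/v against 1/[A] changes with [B] (intersecting lines), whereas a ping-pong mechanism with no ternary complex gives parallel lines.

```python
V, KA, KB, KIA = 10.0, 0.5, 0.3, 0.8   # hypothetical kinetic constants

def v_sequential(a, b):
    # rapid equilibrium random Bi-Bi rate law (ternary complex forms)
    return V * a * b / (KIA * KB + KB * a + KA * b + a * b)

def v_pingpong(a, b):
    # ping-pong Bi-Bi rate law (no ternary complex)
    return V * a * b / (KB * a + KA * b + a * b)

def recip_slope(vfun, b, a1=0.2, a2=2.0):
    # slope of 1/v versus 1/[A] at fixed [B]; two points suffice
    # because 1/v is exactly linear in 1/[A] for both rate laws
    return (1 / vfun(a1, b) - 1 / vfun(a2, b)) / (1 / a1 - 1 / a2)

seq_lo, seq_hi = recip_slope(v_sequential, 0.2), recip_slope(v_sequential, 2.0)
pp_lo, pp_hi = recip_slope(v_pingpong, 0.2), recip_slope(v_pingpong, 2.0)
print(f"sequential slopes: {seq_lo:.4f} vs {seq_hi:.4f}")  # differ with [B]
print(f"ping-pong slopes:  {pp_lo:.4f} vs {pp_hi:.4f}")    # identical
```

For the sequential law the slope is (K_iA·K_B/[B] + K_A)/V, so it carries a [B]-dependent term; for ping-pong it collapses to K_A/V, which is the classic diagnostic the initial-rate experiments exploit.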

19.
20.
Tang ML, Tang NS, Chan IS, Chan BP. Biometrics 2002, 58(4): 957-963
In this article, we propose approximate sample size formulas for establishing equivalence or noninferiority of two treatments in a matched-pairs design. Using the ratio of two proportions as the equivalence measure, we derive sample size formulas based on a score statistic for two types of analyses: hypothesis testing and confidence interval estimation. Depending on the purpose of a study, these formulas can be used to provide a sample size estimate that guarantees a prespecified power of a hypothesis test at a certain significance level, or to control the width of a confidence interval with a certain confidence level. Our empirical results confirm that these score methods are reliable in terms of true size, coverage probability, and skewness. A liver scan detection study is used to illustrate the proposed methods.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号