Similar Articles (20 results)
1.
The interest in individualized medicines and upcoming or renewed regulatory requests to assess treatment effects in subgroups of confirmatory trials require statistical methods that account for selection uncertainty and selection bias after the search for meaningful subgroups has been performed. The challenge is to judge the strength of the apparent findings after mining the same data to discover them. In this paper, we describe a resampling approach that allows the subgroup-finding process to be replicated many times. The replicates are used to adjust the effect estimates for selection bias and to provide variance estimators that account for selection uncertainty. A simulation study provides some evidence of the performance of the method, and an example from oncology illustrates its use.
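As a rough numerical sketch of the resampling idea (not the authors' exact algorithm), the following simulation selects the subgroup with the largest observed effect and then replays that selection on bootstrap resamples to estimate the selection bias and a selection-aware standard error; the data and the selection rule are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def best_subgroup_effect(y_t, y_c, g_t, g_c, n_groups):
    """Pick the subgroup with the largest observed treatment-minus-control
    difference in means; return its index and effect estimate."""
    effects = np.array([y_t[g_t == g].mean() - y_c[g_c == g].mean()
                        for g in range(n_groups)])
    best = int(np.argmax(effects))
    return best, effects[best]

# Simulated trial with 4 subgroups and a true effect of 0 everywhere, so the
# apparent effect in the selected "best" subgroup is pure selection bias.
n, n_groups = 400, 4
g_t, g_c = rng.integers(0, n_groups, n), rng.integers(0, n_groups, n)
y_t, y_c = rng.normal(0.0, 1.0, n), rng.normal(0.0, 1.0, n)
best, naive = best_subgroup_effect(y_t, y_c, g_t, g_c, n_groups)

# Replicate the whole subgroup-finding process on bootstrap resamples.
boot = []
for _ in range(2000):
    it, ic = rng.integers(0, n, n), rng.integers(0, n, n)
    boot.append(best_subgroup_effect(y_t[it], y_c[ic],
                                     g_t[it], g_c[ic], n_groups)[1])
boot = np.asarray(boot)

adjusted = naive - (boot.mean() - naive)   # bootstrap bias correction
se = boot.std(ddof=1)                      # reflects selection uncertainty
print(f"naive = {naive:.3f}, bias-adjusted = {adjusted:.3f}, SE = {se:.3f}")
```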

2.
The two-stage drop-the-loser design provides a framework for selecting the most promising of K experimental treatments in stage one, in order to test it against a control in a confirmatory analysis at stage two. The multistage drop-the-losers design is both a natural extension of the original two-stage design and a special case of the more general framework of Stallard & Friede (2008) (Stat. Med. 27, 6209–6227). It may be a useful strategy if deselecting all but the best-performing treatment after one interim analysis is thought to pose an unacceptable risk of dropping the truly best treatment. However, estimation has yet to be considered for this design. Building on the work of Cohen & Sackrowitz (1989) (Stat. Prob. Lett. 8, 273–278), we derive unbiased and near-unbiased estimates in the multistage setting. Complications caused by the multistage selection process are shown to hinder a simple identification of the multistage uniform minimum variance conditionally unbiased estimator (UMVCUE); two separate but related estimators are therefore proposed, each possessing some of the UMVCUE's theoretical characteristics. For a specific example of a three-stage drop-the-losers trial, we compare their performance against several alternative estimators in terms of bias, mean squared error, confidence interval width, and coverage.

3.
Song X, Pepe MS. Biometrics 2004, 60(4):874-883
Selecting the best treatment for a patient's disease may be facilitated by evaluating clinical characteristics or biomarker measurements at diagnosis. We consider how to evaluate the potential impact of such measurements on treatment selection algorithms. For example, magnetic resonance neurographic imaging is potentially useful for deciding whether a patient should be treated surgically for carpal tunnel syndrome or should receive less-invasive conservative therapy. We propose a graphical display, the selection impact (SI) curve, which shows the population response rate as a function of treatment selection criteria based on the marker. The curve can be useful for choosing a treatment policy that incorporates whether the patient's marker value exceeds a threshold. The SI curve can be estimated using data from a comparative randomized trial conducted in the population, as long as treatment assignment in the trial is independent of the predictive marker. Estimating the SI curve is therefore part of a post hoc analysis to determine whether the marker identifies patients who are more likely to benefit from one treatment over another. Nonparametric and parametric estimates of the SI curve are proposed in this article. Asymptotic distribution theory is used to evaluate the relative efficiencies of the estimators. Simulation studies show that inference is straightforward with realistic sample sizes. We illustrate the SI curve and statistical inference for it with data motivated by an ongoing trial of surgery versus conservative therapy for carpal tunnel syndrome.
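A minimal nonparametric sketch of such a curve, assuming a randomized trial with a binary response; the marker effect, arm labels, and data below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical randomized-trial data: binary response y, marker m; arm 1
# (e.g., surgery) helps mainly when the marker is high; assignment is
# independent of the marker, as the estimator requires.
n = 2000
arm = rng.integers(0, 2, n)
m = rng.normal(0.0, 1.0, n)
p = 0.3 + 0.3 * arm * (m > 0.5)
y = rng.binomial(1, p)

def si_curve(y, arm, m, thresholds):
    """Nonparametric SI curve: estimated population response rate under the
    policy 'give arm 1 if m > c, else arm 0', for each threshold c."""
    rates = []
    for c in thresholds:
        hi = m > c
        r_hi = y[(arm == 1) & hi].mean()     # response rate, arm 1, m > c
        r_lo = y[(arm == 0) & ~hi].mean()    # response rate, arm 0, m <= c
        rates.append(r_hi * hi.mean() + r_lo * (~hi).mean())
    return np.asarray(rates)

thresholds = np.quantile(m, np.linspace(0.05, 0.95, 10))
for c, r in zip(thresholds, si_curve(y, arm, m, thresholds)):
    print(f"threshold {c:+.2f}: estimated response rate {r:.3f}")
```

The maximizer of the printed curve suggests a marker threshold for the treatment policy.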

4.
In the planning stage of a clinical trial investigating a potentially targeted therapy, there is commonly a high degree of uncertainty about whether the treatment is more efficient (or efficient only) in a subgroup compared to the whole population. Recently developed adaptive designs make it possible to assess efficacy both for the whole population and for a subgroup and to select the target population mid-course based on interim results (see, e.g., Wang et al., Pharm Stat 6:227–244, 2007, Brannath et al., Stat Med 28:1445–1463, 2009, Wang et al., Biom J 51:358–374, 2009, Jenkins et al., Pharm Stat 10:347–356, 2011, Friede et al., Stat Med 31:4309–4320, 2012). Frequently, predictive biomarkers are used in these trials to identify patients more likely to benefit from a drug. We consider the situation in which the selection of the patient population is based on a biomarker and the diagnostic test that evaluates the biomarker may or may not be perfect, i.e., have 100% sensitivity and specificity. The performance of the applied subset selection rule is crucial for the overall characteristics of the design. In the setting of an adaptive enrichment design, we evaluate the properties of subgroup selection rules in terms of type I error rate and power, taking into account decision rules with a fixed ad hoc threshold and optimal decision rules developed for the situation of uncertain assumptions. In a simulation study, we demonstrate that designs with optimal decision rules are, under certain assumptions, more powerful than those with ad hoc decision rules. Throughout the results, a strong impact of the sensitivity and specificity of the biomarker on both type I error rate and power is observed.
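To see why assay accuracy matters so much, the following back-of-the-envelope mixture calculation (all numbers illustrative) shows how imperfect sensitivity and specificity dilute the treatment effect observed in the test-positive population:

```python
def observed_subgroup_effects(sens, spec, prev, delta_pos, delta_neg=0.0):
    """Treatment effects seen in the test-positive/test-negative groups when
    the true effect is delta_pos in marker-positives and delta_neg in
    marker-negatives (simple mixture of true subgroups)."""
    p_test_pos = sens * prev + (1 - spec) * (1 - prev)
    ppv = sens * prev / p_test_pos               # P(true + | test +)
    npv = spec * (1 - prev) / (1 - p_test_pos)   # P(true - | test -)
    eff_pos = ppv * delta_pos + (1 - ppv) * delta_neg
    eff_neg = (1 - npv) * delta_pos + npv * delta_neg
    return eff_pos, eff_neg

# A perfect assay keeps the full subgroup effect; an imperfect one dilutes
# it, which drives both the power and the type I error of selection rules.
for sens, spec in [(1.0, 1.0), (0.9, 0.9), (0.7, 0.8)]:
    e_pos, e_neg = observed_subgroup_effects(sens, spec, prev=0.3,
                                             delta_pos=0.5)
    print(f"sens={sens:.1f} spec={spec:.1f}: "
          f"effect in test+ = {e_pos:.3f}, in test- = {e_neg:.3f}")
```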

5.
A popular design for clinical trials assessing targeted therapies is the two-stage adaptive enrichment design, with recruitment in stage 2 limited to a biomarker-defined subgroup chosen based on data from stage 1. The data-dependent selection leads to statistical challenges if data from both stages are used to draw inference on treatment effects in the selected subgroup. If the subgroups considered are nested, as when defined by a continuous biomarker, treatment effect estimates in different subgroups follow the same distribution as estimates in a group-sequential trial. This result is used to obtain tests controlling the familywise type I error rate (FWER) for six simple subgroup selection rules, one of which also controls the FWER for any selection rule. Two approaches are proposed: one based on multivariate normal distributions, suitable if the number of possible subgroups, k, is small, and one based on Brownian motion approximations, suitable for large k. The methods, applicable in a wide range of settings with asymptotically normal test statistics, are illustrated using survival data from a breast cancer trial.
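A sketch of the small-k multivariate normal approach, using the group-sequential correlation structure Corr(Z_i, Z_j) = sqrt(n_i/n_j) for nested subgroups; the subgroup sizes and the common-critical-value rule are illustrative, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# Nested subgroups: subgroup i contains the n_i patients with the largest
# biomarker values, n_1 < ... < n_k (sizes are illustrative).
n = np.array([100, 200, 300, 400])
k = len(n)

# As in group-sequential theory, Corr(Z_i, Z_j) = sqrt(n_i / n_j), i <= j.
cov = np.sqrt(np.minimum.outer(n, n) / np.maximum.outer(n, n))
mvn = multivariate_normal(mean=np.zeros(k), cov=cov)
alpha = 0.025                                  # one-sided FWER

def fwer(c):
    """P(max_i Z_i >= c) under the global null hypothesis."""
    return 1.0 - mvn.cdf(np.full(k, c))

lo, hi = 1.0, 4.0                              # bisect for the critical value
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if fwer(mid) > alpha else (lo, mid)
c_crit = 0.5 * (lo + hi)

print(f"common critical value: z = {c_crit:.3f} "
      f"(unadjusted {norm.ppf(1 - alpha):.3f}, "
      f"Bonferroni {norm.ppf(1 - alpha / k):.3f})")
```

The strong correlation between nested subgroups makes the adjusted critical value noticeably smaller than the Bonferroni one.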

6.
Staniswalis JG. Biometrics 2008, 64(4):1054-1061
Nonparametric regression models are proposed in the framework of ecological inference for exploratory modeling of disease prevalence rates adjusted for variables such as age, ethnicity/race, and socio-economic status. Ecological inference is needed when a response variable and covariate are not available at the subject level because only summary statistics are available for the reporting unit, for example, in the form of R × C tables. In this article, only the marginal counts are assumed available in the sample of R × C contingency tables for modeling the joint distribution of counts. A general form for the ecological regression model is proposed, whereby certain covariates enter as a varying-coefficient regression model, whereas others enter as a functional linear model. The nonparametric regression curves are modeled as splines fit by penalized weighted least squares. A data-driven selection of the smoothing parameter is proposed using the pointwise maximum squared bias computed from averaging kernels (explained by O'Sullivan, 1986, Statistical Science 1, 502-517). Analytic expressions for bias and variance are provided that could be used to study the rates of convergence of the estimators. Instead, this article focuses on demonstrating the utility of the estimators in a study of disparity in health outcomes by ethnicity/race.
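The penalized weighted least squares machinery itself is generic; here is a minimal sketch with a cubic B-spline basis, a second-order difference penalty, and a fixed smoothing parameter (the paper's bias-driven, data-adaptive choice is not reproduced; the data are synthetic):

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(37)

# Synthetic data; w would hold weights such as reporting-unit counts.
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 200)
w = np.ones_like(y)

# Cubic B-spline basis with equally spaced interior knots.
k, n_basis = 3, 20
knots = np.concatenate([[0] * k, np.linspace(0, 1, n_basis - k + 1), [1] * k])
B = BSpline.design_matrix(x, knots, k).toarray()

# Penalized weighted least squares: (B'WB + lam * D'D) beta = B'W y,
# with D a second-order difference penalty on adjacent coefficients.
D = np.diff(np.eye(n_basis), n=2, axis=0)
lam = 1.0
beta = np.linalg.solve(B.T @ (w[:, None] * B) + lam * D.T @ D, B.T @ (w * y))
fit = B @ beta
print(f"residual SD = {np.std(y - fit):.3f}")
```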

7.
For a patient facing a treatment decision, the added value of the information provided by a biomarker depends on the individual patient's expected response to treatment with and without the biomarker, as well as on his/her tolerance of disease and treatment harm. However, individualized estimators of the value of a biomarker are lacking. We propose a new graphical tool, named the subject-specific expected benefit curve, for quantifying the personalized value of a biomarker in aiding a treatment decision. We develop semiparametric estimators for two general settings: (i) when biomarker data are available from a randomized trial; and (ii) when biomarker data are available from a cohort or a cross-sectional study, together with external information about a multiplicative treatment effect. We also develop adaptive bootstrap confidence intervals for consistent inference in the presence of nonregularity. The proposed method is used to evaluate the individualized value of the serum creatinine marker in informing treatment decisions for the prevention of renal artery stenosis.

8.
Adaptive seamless phase II/III designs combine a phase II and a phase III study into one single confirmatory clinical trial. Several examples of such designs are presented, where the primary endpoint is binary, time-to-event, or continuous. The interim adaptations considered include the selection of treatments and the selection of hypotheses related to a pre-specified subgroup of patients. Practical aspects concerning the planning and implementation of adaptive seamless confirmatory studies are also discussed.

9.
Targeted therapies are becoming more common. In the development of a targeted therapy, suppose its companion diagnostic test divides patients into a marker-positive subgroup and its complementary marker-negative subgroup. To find the right patient population for the therapy to target, inference on efficacy in the marker-positive and marker-negative subgroups, as well as efficacy in the overall mixture population, are all of interest. Depending on the type of clinical endpoint, inference on the mixture population can be nontrivial, and commonly used efficacy measures may not be suitable for a mixture population. Correlations among estimates of efficacy in the marker-positive, marker-negative, and overall mixture populations play a crucial role in using an earlier-phase study to inform the design of a confirmatory study (e.g., determination of sample size). This article first shows that when the clinical endpoint is binary (such as respond or not), the odds ratio is inappropriate as an efficacy measure in this setting, but relative response (RR) is appropriate. We show that a safe way to calculate the estimated correlations is to mix subgroup response probabilities within each treatment arm first and then derive the joint distribution of the RR estimates. We also show how wrong the correlations can be if one calculates the RR within each subgroup first and the Delta-method derivation fails to take into account the randomness of the estimated mixing coefficient.
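A Monte Carlo sketch of the recommended order of operations (mix response probabilities within each arm first, then form RRs); the joint distribution, and hence the correlations, of the three RR estimates is obtained here by simulation rather than by the paper's Delta-method derivation, and all probabilities are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative truth: marker-positive prevalence and response probabilities
# by arm (T/C) and marker subgroup (+/-).
gamma, p_t_pos, p_t_neg, p_c_pos, p_c_neg = 0.4, 0.6, 0.3, 0.3, 0.25
n_per_arm = 300

def one_trial():
    """Mix subgroup response probabilities within each arm first (the
    overall arm response rate does exactly that), then form the RRs."""
    stats = {}
    arms = {"T": (p_t_pos, p_t_neg), "C": (p_c_pos, p_c_neg)}
    for arm, (pp, pn) in arms.items():
        pos = rng.random(n_per_arm) < gamma      # random marker status
        y = np.where(pos, rng.random(n_per_arm) < pp,
                     rng.random(n_per_arm) < pn)
        stats[arm] = (y[pos].mean(), y[~pos].mean(), y.mean())
    t, c = stats["T"], stats["C"]
    return t[0] / c[0], t[1] / c[1], t[2] / c[2]  # RR+, RR-, RR overall

draws = np.array([one_trial() for _ in range(5000)])
print("correlation matrix of (RR+, RR-, RR overall):")
print(np.corrcoef(draws.T).round(2))
```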

10.
11.
Taylor L, Zhou XH. Biometrics 2009, 65(1):88-95
Randomized clinical trials are a powerful tool for investigating causal treatment effects, but in human trials there are often problems of noncompliance, which standard analyses, such as the intention-to-treat or as-treated analysis, either ignore or incorporate in such a way that the resulting estimand is no longer a causal effect. One alternative to these analyses is the complier average causal effect (CACE), which estimates the average causal treatment effect among the subpopulation that would comply under any treatment assigned. We focus on the setting of a randomized clinical trial with crossover treatment noncompliance (e.g., control subjects could receive the intervention and intervention subjects could receive the control) and outcome nonresponse. In this article, we develop estimators for the CACE using multiple imputation methods, which have been successfully applied to a wide variety of missing data problems but have not yet been applied to the potential outcomes setting of causal inference. Using simulated data, we investigate the finite sample properties of these estimators as well as of competing procedures in a simple setting. Finally, we illustrate our methods using a real randomized encouragement design study on the effectiveness of the influenza vaccine.
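For orientation, the standard moment (instrumental-variable) estimator of the CACE — not the multiple-imputation estimators developed in the article — can be sketched in a few lines on simulated data with two-sided noncompliance:

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulated encouragement trial with crossover noncompliance.
n = 5000
z = rng.integers(0, 2, n)                    # randomized assignment
u = rng.random(n)
# Principal strata: compliers (60%), always-takers (25%), never-takers (15%).
complier = u < 0.60
always = (u >= 0.60) & (u < 0.85)
d = np.where(complier, z, np.where(always, 1, 0))   # treatment received
y = 0.4 * d * complier + rng.normal(0, 1, n)        # true CACE = 0.4

# Wald / IV estimator: ratio of the ITT effect on the outcome to the
# ITT effect on treatment received (the compliance gap).
itt_y = y[z == 1].mean() - y[z == 0].mean()
itt_d = d[z == 1].mean() - d[z == 0].mean()
print(f"ITT effect = {itt_y:.3f}, compliance gap = {itt_d:.3f}, "
      f"CACE estimate = {itt_y / itt_d:.3f}")
```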

12.
In diagnostic medicine, the volume under the receiver operating characteristic (ROC) surface (VUS) is a commonly used index to quantify the ability of a continuous diagnostic test to discriminate between three disease states. In practice, verification of the true disease status may be performed only for a subset of the subjects under study, since the verification procedure is invasive, risky, or expensive. The selection for disease examination might depend on the results of the diagnostic test and other clinical characteristics of the patients, which in turn can bias estimates of the VUS. This bias is referred to as verification bias. Existing verification bias correction in three-way ROC analysis focuses on ordinal tests. We propose verification bias-correction methods to construct the ROC surface and estimate the VUS for a continuous diagnostic test, based on inverse probability weighting. By applying U-statistics theory, we develop asymptotic properties for the estimator. A jackknife estimator of the variance is also derived. Extensive simulation studies are performed to evaluate the performance of the new estimators in terms of bias correction and variance. The proposed methods are used to assess the ability of a biomarker to accurately identify stages of Alzheimer's disease.
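A minimal sketch of the IPW U-statistic for the VUS, assuming for simplicity that the verification probabilities are known (in practice they would be estimated, e.g., by a logistic model of verification on the test result); all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate a continuous test for three ordered disease states.
n = 600
d = rng.integers(1, 4, n)                    # true state: 1 < 2 < 3
t = rng.normal(d.astype(float), 1.0)         # test values rise with state

# Verification depends on the test result (MAR given t), so the naive
# verified-only estimate of the VUS is biased.
pi = 1.0 / (1.0 + np.exp(-(t - 2.0)))        # assumed-known P(verified | t)
v = rng.random(n) < pi

def vus_ipw(t, d, v, pi):
    """IPW U-statistic estimate of VUS = P(T_1 < T_2 < T_3), computed
    from verified subjects weighted by 1/pi."""
    (t1, w1), (t2, w2), (t3, w3) = [
        (t[v & (d == c)], 1.0 / pi[v & (d == c)]) for c in (1, 2, 3)]
    lt12 = t1[:, None, None] < t2[None, :, None]
    lt23 = t2[None, :, None] < t3[None, None, :]
    w = w1[:, None, None] * w2[None, :, None] * w3[None, None, :]
    return (w * (lt12 & lt23)).sum() / w.sum()

print(f"IPW VUS   = {vus_ipw(t, d, v, pi):.3f}")
print(f"naive VUS = {vus_ipw(t, d, v, np.ones(n)):.3f}  (verified-only)")
```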

13.
Malka Gorfine. Biometrics 2001, 57(2):589-597
In this article, we investigate estimation of a secondary parameter in group sequential tests. We study the model in which the secondary parameter is the mean of the normal distribution in a subgroup of the subjects. The bias of the naive secondary parameter estimator is studied. It is shown that the sampling proportions of the subgroup have a crucial effect on the bias: as the sampling proportion of the subgroup at or just before the stopping time increases, the bias of the naive subgroup parameter estimator increases as well. An unbiased estimator for the subgroup parameter and an unbiased estimator for its variance are derived. Using simulations, we compare the mean squared error of the unbiased estimator to that of the naive estimator and show that the differences are negligible. As an example, the methods of estimation are applied to an actual group sequential clinical trial, the Beta-Blocker Heart Attack Trial.
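A Monte Carlo sketch (not the article's analytical treatment) of how the naive estimator picks up bias that depends on the subgroup's sampling proportions: a two-stage trial stops early if the stage-1 overall z-statistic crosses a deliberately aggressive illustrative boundary, and true means are zero, so any nonzero average below is bias.

```python
import numpy as np

rng = np.random.default_rng(17)

# Two-stage group-sequential trial on the overall mean: stop after stage 1
# if z1 > c1, otherwise continue.  The secondary parameter is the mean of
# a subgroup; the naive estimator is the subgroup mean of all data seen
# up to the stopping time.  All design numbers are illustrative.
n1, n2, c1, sims = 100, 100, 1.0, 400_000
p1 = 0.5                                        # stage-1 subgroup proportion

for p2 in (0.2, 0.5, 0.8):                      # stage-2 subgroup proportion
    m1, m2 = int(p1 * n1), int(p2 * n2)
    sub1 = rng.normal(0, 1 / np.sqrt(m1), sims)       # stage-1 subgroup mean
    oth1 = rng.normal(0, 1 / np.sqrt(n1 - m1), sims)  # stage-1 others
    sub2 = rng.normal(0, 1 / np.sqrt(m2), sims)       # stage-2 subgroup mean
    z1 = np.sqrt(n1) * (m1 * sub1 + (n1 - m1) * oth1) / n1
    naive = np.where(z1 > c1, sub1, (m1 * sub1 + m2 * sub2) / (m1 + m2))
    print(f"stage-2 subgroup proportion {p2:.1f}: "
          f"naive bias = {naive.mean():+.4f}")
```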

14.
There is a need for epidemiological and medical researchers to identify new biomarkers (biological markers) that are useful in determining exposure levels and/or for the purposes of disease detection. Often this process is hampered by the high testing costs associated with evaluating new biomarkers. Traditionally, biomarker assessments are performed on individual specimens from a target population. Pooling has been proposed to help alleviate the testing costs, where pools are formed by combining several individual specimens. Methods for using pooled biomarker assessments to estimate discriminatory ability have been developed. However, all these procedures have failed to acknowledge confounding factors. In this paper, we propose a regression methodology based on pooled biomarker measurements that allows assessment of the discriminatory ability of a biomarker of interest. In particular, we develop covariate-adjusted estimators of the receiver operating characteristic curve, the area under the curve, and Youden's index. We establish the asymptotic properties of these estimators and develop inferential techniques that allow one to assess whether a biomarker is a good discriminator between cases and controls while controlling for confounders. The finite sample performance of the proposed methodology is illustrated through simulation. We apply our methods to analyze myocardial infarction (MI) data, with the goal of determining whether the pro-inflammatory cytokine interleukin-6 is a good predictor of MI after controlling for the subjects' cholesterol levels.

15.
Disease prevalence is ideally estimated using a 'gold standard' to ascertain true disease status on all subjects in a population of interest. In practice, however, the gold standard may be too costly or invasive to be applied to all subjects, in which case a two-phase design is often employed. Phase 1 data, consisting of inexpensive and non-invasive screening tests on all study subjects, are used to determine the subjects that receive the gold standard in the second phase. Naive estimates of prevalence in two-phase studies can be biased (verification bias). Imputation and re-weighting estimators are often used to avoid this bias. We contrast the forms and attributes of the various prevalence estimators. Distribution theory and simulation studies are used to investigate their bias and efficiency. We conclude that the semiparametric efficient approach is the preferred method for prevalence estimation in two-phase studies. It is more robust and comparable in its efficiency to imputation and other re-weighting estimators. It is also easy to implement. We use this approach to examine the prevalence of depression in adolescents with data from the Great Smoky Mountains Study.
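The basic imputation and re-weighting (Horvitz-Thompson) estimators contrasted here can be sketched quickly; the screen accuracies and sampling fractions below are illustrative, and the phase-2 sampling probabilities are taken as known by design.

```python
import numpy as np

rng = np.random.default_rng(23)

# Phase 1: a cheap binary screen s on everyone; phase 2: gold standard d
# observed only for a subsample whose selection probability depends on s
# (verification deliberately enriched for screen-positives).
n = 10_000
d = rng.random(n) < 0.10                                     # prevalence 10%
s = np.where(d, rng.random(n) < 0.85, rng.random(n) < 0.15)  # imperfect screen
pi = np.where(s, 0.9, 0.1)                # known phase-2 sampling fractions
v = rng.random(n) < pi                    # verified indicator

naive = d[v].mean()                       # verified-only: biased upward
ipw = (d[v] / pi[v]).sum() / n            # Horvitz-Thompson re-weighting
# Imputation: estimate P(D | s) among the verified, average over phase 1.
p_pos, p_neg = d[v & s].mean(), d[v & ~s].mean()
imput = np.where(s, p_pos, p_neg).mean()

print(f"naive = {naive:.3f}, IPW = {ipw:.3f}, "
      f"imputation = {imput:.3f} (truth 0.100)")
```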

16.
Publication bias is a major concern in conducting systematic reviews and meta-analyses. Various sensitivity analysis or bias-correction methods have been developed based on selection models, and they have some advantages over the widely used trim-and-fill bias-correction method. However, likelihood methods based on selection models may have difficulty in obtaining precise estimates and reasonable confidence intervals, or may require a rather complicated sensitivity analysis process. Herein, we develop a simple publication bias adjustment method that utilizes information on conducted but still unpublished trials from clinical trial registries. We introduce an estimating equation for parameter estimation in the selection function by regarding the publication bias issue as a missing data problem under the missing-not-at-random assumption. With the estimated selection function, we introduce the inverse probability weighting (IPW) method to estimate the overall mean across studies. Furthermore, IPW versions of heterogeneity measures such as the between-study variance and the I² measure are proposed. We propose methods to construct confidence intervals based on asymptotic normal approximation as well as on the parametric bootstrap. Through numerical experiments, we observed that the estimators successfully eliminated bias, and the confidence intervals had empirical coverage probabilities close to the nominal level. On the other hand, the confidence interval based on the asymptotic normal approximation is much wider in some scenarios than the bootstrap confidence interval. Therefore, the latter is recommended for practical use.
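A toy version of the IPW step, assuming the selection function is already known (the paper instead estimates it from registry counts of unpublished trials via an estimating equation); the simulated studies share a common fixed effect.

```python
import numpy as np

rng = np.random.default_rng(29)

# Simulated meta-analysis: true common effect 0.2; studies with larger
# z-values are more likely to be published.
k, theta = 200, 0.2
se = rng.uniform(0.1, 0.4, k)
y = rng.normal(theta, se)
z = y / se
p_pub = 1.0 / (1.0 + np.exp(-2.0 * (z - 1.0)))   # assumed-known selection fn
published = rng.random(k) < p_pub

# Fixed-effect weights; IPW further divides by the publication probability.
w = 1.0 / se[published] ** 2
naive = np.average(y[published], weights=w)
ipw = np.average(y[published], weights=w / p_pub[published])
print(f"published {published.sum()}/{k} studies; "
      f"naive = {naive:.3f}, IPW = {ipw:.3f} (truth {theta})")
```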

17.
Cutter AD. Genetics 2008, 178(3):1661-1672
Natural selection and neutral processes such as demography, mutation, and gene conversion all contribute to patterns of polymorphism within genomes. Identifying the relative importance of these varied components in evolution provides the principal challenge for population genetics. To address this issue in the nematode Caenorhabditis remanei, I sampled nucleotide polymorphism at 40 loci across the X chromosome. The site-frequency spectrum for these loci provides no evidence for population size change, and one locus presents a candidate for linkage to a target of balancing selection. Selection for codon usage bias leads to the non-neutrality of synonymous sites and, despite its weak magnitude of effect (N_e s ≈ 0.1), is responsible for profound patterns of diversity and divergence in the C. remanei genome. Although gene conversion is evident for many loci, biased gene conversion is not identified as a significant evolutionary process in this sample. No consistent association is observed between synonymous-site diversity and linkage-disequilibrium-based estimators of the population recombination parameter, despite theoretical predictions about background selection or widespread genetic hitchhiking, but genetic map-based estimates of recombination are needed to rigorously test for a diversity-recombination relationship. Coalescent simulations also illustrate how a spurious correlation between diversity and linkage-disequilibrium-based estimators of recombination can occur, due in part to the presence of unbiased gene conversion. These results illustrate the influence that subtle natural selection can exert on polymorphism and divergence, in the form of codon usage bias, and demonstrate the potential of C. remanei for detecting natural selection from genomic scans of polymorphism.

18.
We propose a joint hypothesis test for simultaneous confirmatory inference in the overall population and a pre-defined marker-positive subgroup, under the assumption that the treatment effect in the marker-positive subgroup is larger than that in the overall population. The proposed confirmatory overall-subgroup simultaneous test (COSST) is based on partitioning the sample space of the test statistics in the marker-positive and marker-negative subgroups. We define two rejection regions in the joint sample space of the two test statistics: (1) efficacy in the marker-positive subgroup only; (2) efficacy in the overall population. COSST achieves higher statistical power to detect overall and subgroup efficacy than most sequential procedures while controlling the family-wise type I error rate. COSST also takes potentially harmful effects in the subgroups into account in the decision. The optimal rejection regions depend on the specific alternative hypothesis and the sample size. COSST can be useful for phase III clinical trials with tailoring objectives.
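The flavor of such a sample-space partition can be conveyed with a toy decision function over the joint statistic (z_pos, z_neg); the thresholds below are placeholders, not the paper's optimal regions.

```python
def cosst_decision(z_pos, z_neg, c_pos=2.4, c_all=1.0):
    """Toy partition of the (z_pos, z_neg) sample space: claim overall
    efficacy only when the marker-negative subgroup also shows benefit,
    so an apparent harmful effect in marker-negatives blocks the claim."""
    if z_pos <= c_pos:
        return "no efficacy claim"
    if z_neg > c_all:
        return "efficacy in overall population"
    return "efficacy in marker-positive subgroup only"

for z in [(3.1, 1.8), (3.1, -0.5), (1.5, 1.5)]:
    print(z, "->", cosst_decision(*z))
```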

19.
Regulatory authorities require that the sample size of a confirmatory trial be calculated prior to the start of the trial. However, the sample size quite often depends on parameters that might not be known in advance of the study. Misspecification of these parameters can lead to under- or overestimation of the sample size. Both situations are unfavourable, as the former decreases the power and the latter leads to a waste of resources. Hence, designs have been suggested that allow a re-assessment of the sample size in an ongoing trial. These methods usually focus on estimating the variance. However, for some methods the performance depends not only on the variance but also on the correlation between measurements. We develop and compare different methods for blinded estimation of the correlation coefficient that are less likely to introduce operational bias when the blinding is maintained. Their performance with respect to bias and standard error is compared to that of the unblinded estimator. We simulated two different settings: one assuming that all group means are the same and one assuming that different groups have different means. Simulation results show that the naïve (one-sample) estimator is only slightly biased and has a standard error comparable to that of the unblinded estimator. However, if the group means differ, other estimators have better performance, depending on the sample size per group and the number of groups.
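A small simulation contrasting the naïve one-sample (blinded) estimator with the unblinded within-group estimator; when the group means differ, pooling inflates the apparent correlation (all numbers illustrative).

```python
import numpy as np

rng = np.random.default_rng(31)

# Paired measurements (e.g., baseline and follow-up) in g groups whose
# means differ; the blinded analyst cannot use the group labels.
g, n_per, rho = 2, 500, 0.6
cov = np.array([[1.0, rho], [rho, 1.0]])
shifts = np.array([0.0, 1.5])                  # per-group mean shifts
data = np.vstack([rng.multivariate_normal([m, m], cov, n_per)
                  for m in shifts])
labels = np.repeat(np.arange(g), n_per)

# Naive one-sample (blinded) estimator: pool everything, ignore groups.
blinded = np.corrcoef(data.T)[0, 1]
# Unblinded estimator: correlate within-group residuals.
means = np.array([data[labels == j].mean(axis=0) for j in range(g)])
unblinded = np.corrcoef((data - means[labels]).T)[0, 1]

print(f"blinded = {blinded:.3f}, unblinded = {unblinded:.3f}, truth = {rho}")
```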

20.
Genome-wide association studies (GWAS) provide an important approach to identifying common genetic variants that predispose to human disease. A typical GWAS may genotype hundreds of thousands of single nucleotide polymorphisms (SNPs) located throughout the human genome in a set of cases and controls. Logistic regression is often used to test for association between a SNP genotype and case versus control status, with corresponding odds ratios (ORs) typically reported only for those SNPs meeting selection criteria. However, when these estimates are based on the original data used to detect the variant, the results are affected by a selection bias sometimes referred to as the "winner's curse" (Capen and others, 1971). The actual genetic association is typically overestimated. We show that such selection bias may be severe, in the sense that the conditional expectation of the standard OR estimator may be quite far away from the underlying parameter. Standard confidence intervals (CIs) may also have coverage far from the desired rate for the selected ORs. We propose and evaluate three bias-reduced estimators, as well as corresponding weighted estimators that combine corrected and uncorrected estimators, to reduce selection bias. Their corresponding CIs are also proposed. We study the performance of these estimators using simulated data sets and show that they reduce the bias and give CI coverage close to the desired level under various scenarios, even for associations having only small statistical power.
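One simple correction in this family — the conditional maximum likelihood estimator, which maximizes the likelihood of a normal estimate truncated to the selection region — can be sketched as follows (not the paper's three estimators; the effect size, standard error, and threshold are illustrative):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def conditional_mle(beta_hat, se, c):
    """Bias-reduced estimate of a log-OR reported because |beta_hat/se| > c:
    maximize the likelihood of a normal estimate truncated to the
    selection region (one simple winner's-curse correction)."""
    def neg_log_lik(beta):
        # P(selected | beta): probability that |Z| exceeds c.
        p_sel = norm.cdf(-c - beta / se) + norm.cdf(beta / se - c)
        return -(norm.logpdf((beta_hat - beta) / se) - np.log(p_sel))
    span = 5 * se
    return minimize_scalar(neg_log_lik,
                           bounds=(beta_hat - span, beta_hat + span),
                           method="bounded").x

# A SNP just clearing genome-wide significance: the corrected log-OR is
# pulled toward zero, as expected under the winner's curse.
beta_hat, se = 0.15, 0.027            # illustrative GWAS numbers
c = norm.ppf(1 - 5e-8 / 2)            # genome-wide threshold, |z| > 5.45
print(f"naive OR = {np.exp(beta_hat):.3f}, "
      f"corrected OR = {np.exp(conditional_mle(beta_hat, se, c)):.3f}")
```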
