Related Articles (20 results)
1.
Identifying subgroups of patients with an enhanced response to a new treatment has become an area of increased interest in the last few years. When there is knowledge about possible subpopulations with an enhanced treatment effect before the start of a trial, it might be beneficial to set up a testing strategy that tests for a significant treatment effect not only in the full population, but also in these prespecified subpopulations. In this paper, we present a parametric multiple testing approach for tests in multiple populations for dose-finding trials. Our approach is based on the MCP-Mod methodology, which uses multiple comparison procedures (MCPs) to test for a dose-response signal while considering multiple possible candidate dose-response shapes. Our proposed methods allow for heteroscedastic error variances between populations and control the family-wise error rate over tests in multiple populations and for multiple candidate models. We show in simulations that the proposed multipopulation testing approaches can increase the power to detect a significant dose-response signal over the standard single-population MCP-Mod when the specified subpopulation has an enhanced treatment effect.
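For readers who want to reproduce the single-population baseline being extended here, a minimal R sketch of the standard MCP proof-of-concept step using the DoseFinding package; the multipopulation extension itself is not part of that package, and the data and candidate shapes below are invented:

```r
# Standard single-population MCP step of MCP-Mod via DoseFinding;
# simulated data, guesstimated candidate shapes.
library(DoseFinding)

doses <- c(0, 0.5, 1, 2, 4)
set.seed(1)
dat <- data.frame(dose = rep(doses, each = 20))
dat$resp <- 0.2 + 0.6 * dat$dose / (dat$dose + 1) + rnorm(nrow(dat), sd = 1)

# Candidate dose-response shapes, fixed a priori
mods <- Mods(linear = NULL, emax = 0.5, sigEmax = c(1, 3), doses = doses)

# Multiple contrast test for a dose-response signal (PoC)
MCTtest(dose, resp, data = dat, models = mods)
```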

2.
Statistical association between a single nucleotide polymorphism (SNP) genotype and a quantitative trait in genome-wide association studies is usually assessed using a linear regression model, or, in the case of non-normally distributed trait values, using the Kruskal-Wallis test. While linear regression models assume an additive mode of inheritance via equi-distant genotype scores, the Kruskal-Wallis test merely tests for global differences in trait values across the three genotype groups. Both approaches thus exhibit suboptimal power when the underlying inheritance mode is dominant or recessive. Furthermore, these tests do not perform well in the common situations when only a few trait values are available in a rare genotype category (imbalance), or when the values associated with the three genotype categories exhibit unequal variance (variance heterogeneity). We propose a maximum test based on a Marcus-type multiple contrast test for relative effect sizes. This test allows model-specific testing of a dominant, additive, or recessive mode of inheritance, and it is robust against variance heterogeneity. We show how to obtain mode-specific simultaneous confidence intervals for the relative effect sizes to aid in interpreting the biological relevance of the results. Further, we discuss the use of a related all-pairwise comparisons contrast test with range-preserving confidence intervals as an alternative to the Kruskal-Wallis heterogeneity test. We applied the proposed maximum test to the Bogalusa Heart Study dataset and gained a remarkable increase in the power to detect association, particularly for rare genotypes. Our simulation study also demonstrated that, contrary to the standard parametric approaches, the proposed non-parametric tests control the family-wise error rate in the presence of non-normality and variance heterogeneity. We provide a publicly available R library, nparcomp, that can be used to estimate simultaneous confidence intervals or compatible multiplicity-adjusted p-values associated with the proposed maximum test.
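A sketch of how the nparcomp library named above can be driven to obtain a Marcus-type contrast test with simultaneous confidence intervals; the genotype data are simulated, and the arguments assume the package's current mctp() interface:

```r
# Marcus-type multiple contrast test for relative effects via nparcomp;
# variable names and data are invented for illustration.
library(nparcomp)

set.seed(2)
geno <- factor(rep(c("AA", "Aa", "aa"), times = c(120, 60, 12)))
trait <- rexp(length(geno), rate = 1 / (1 + 0.4 * (geno != "AA")))
d <- data.frame(trait, geno)

# Marcus-type contrasts cover dominant, additive and recessive patterns
res <- mctp(trait ~ geno, data = d, type = "Marcus",
            asy.method = "mult.t", info = FALSE)
summary(res)  # simultaneous CIs and adjusted p-values for relative effects
```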

3.
Epigenetic research leads to complex data structures. Since parametric model assumptions for the distribution of epigenetic data are hard to verify, in the present work we introduce a nonparametric statistical framework for two-group comparisons. Furthermore, epigenetic analyses are often performed at various genetic loci simultaneously. Hence, in order to be able to draw valid conclusions for specific loci, an appropriate multiple testing correction is necessary. Finally, with technologies available for the simultaneous assessment of many interrelated biological parameters (such as gene arrays), statistical approaches also need to deal with a possibly unknown dependency structure in the data. Our statistical approach to the nonparametric comparison of two samples with independent multivariate observations is based on recently developed multivariate multiple permutation tests. We adapt their theory in order to cope with families of hypotheses regarding relative effects. Our results indicate that the multivariate multiple permutation test keeps the pre-assigned type I error level for the global null hypothesis. In combination with the closure principle, the family-wise error rate for the simultaneous testing of the corresponding locus/parameter-specific null hypotheses can be controlled. In applications we demonstrate that group differences in epigenetic data can be detected reliably with our methodology.
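A loose sketch of the general idea only (a permutation max-statistic across loci for a two-group comparison), not the authors' exact multivariate multiple permutation procedure; all data are simulated:

```r
# Single-step max-statistic permutation adjustment across loci.
set.seed(3)
n1 <- 15; n2 <- 15; m <- 50                    # group sizes, number of loci
X <- matrix(rnorm((n1 + n2) * m), ncol = m)    # rows = samples, cols = loci
grp <- rep(1:2, c(n1, n2))

# Centered Mann-Whitney statistic per locus (large when groups differ)
stat <- function(g) apply(X, 2, function(x)
  abs(wilcox.test(x[g == 1], x[g == 2])$statistic - n1 * n2 / 2))

obs <- stat(grp)
perm.max <- replicate(1000, max(stat(sample(grp))))  # permute group labels

# Single-step adjusted p-value per locus via the max distribution
p.adj <- sapply(obs, function(t) mean(perm.max >= t))
head(p.adj)
```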

4.
In the past, many multiple comparison procedures were difficult to perform. Usually, such procedures can be traced back to studentized multiple contrast tests. Numerical difficulties restricted the use of the exact procedures to simple, commonly balanced, designs. Conservative approximations or simulation-based approaches have been used in the general cases. However, new efforts and results in the past few years have led to fast and efficient computations of the underlying multidimensional integrals. Inferences for any finite set of linear functions of normal means are now numerically feasible. These include all-pairwise comparisons, comparisons with a control (including dose-response contrasts), multiple comparisons with the best, etc. The article applies this numerical progress to multiple comparison procedures for common balanced and unbalanced designs within the general linear model.
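As an illustration of the multidimensional-integral computations the article builds on, a sketch using the mvtnorm package to obtain an exact equicoordinate critical value for a balanced Dunnett-type comparison; the package choice and design values are ours, not the article's:

```r
# Equicoordinate two-sided critical value for three comparisons with a
# control (balanced Dunnett) via numerical multivariate-t integration.
library(mvtnorm)

k <- 3; n <- 10                      # 3 treatments vs control, n per group
df <- (k + 1) * (n - 1)              # error degrees of freedom
R <- matrix(0.5, k, k); diag(R) <- 1 # correlation of Dunnett contrasts

crit <- qmvt(0.95, tail = "both.tails", df = df, corr = R)$quantile
crit                                 # compare with qt(0.975, df) unadjusted
```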

5.
We consider hypothesis testing in a clinical trial with an interim treatment selection. Recently, unconditional and conditional procedures for selecting one treatment as the winner have been proposed when the mean responses are approximately normal. In this paper, we generalize both procedures to multi-winner cases. The distributions of the test statistics are obtained and step-down approaches are proposed. We prove that both unconditional and conditional procedures strongly control the family-wise error rate. We give a brief discussion on power comparisons.

6.
The confirmatory analysis of pre-specified multiple hypotheses has become common in pivotal clinical trials. In the recent past, multiple test procedures have been developed that reflect the relative importance of different study objectives, such as fixed sequence, fallback, and gatekeeping procedures. In addition, graphical approaches have been proposed that facilitate the visualization and communication of Bonferroni-based closed test procedures for common multiple test problems, such as comparing several treatments with a control, assessing the benefit of a new drug for more than one endpoint, combined non-inferiority and superiority testing, or testing a treatment at different dose levels in an overall and a subpopulation. In this paper, we focus on extended graphical approaches by dissociating the underlying weighting strategy from the employed test procedure. This allows one to first derive suitable weighting strategies that reflect the given study objectives and subsequently apply appropriate test procedures, such as weighted Bonferroni tests, weighted parametric tests accounting for the correlation between the test statistics, or weighted Simes tests. We illustrate the extended graphical approaches with several examples. In addition, we briefly describe the gMCP package in R, which implements some of the methods described in this paper.
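A sketch of a two-hypothesis graphical Bonferroni procedure with the gMCP package mentioned above; the transition weights and p-values are invented, and the calls assume gMCP's matrix2graph()/gMCP() interface:

```r
# Two hypotheses, each passing its significance level to the other on
# rejection; a minimal graphical multiple test.
library(gMCP)

m <- rbind(H1 = c(0, 1),   # on rejecting H1, pass its level to H2
           H2 = c(1, 0))   # and vice versa
colnames(m) <- rownames(m)
graph <- matrix2graph(m, weights = c(0.5, 0.5))

gMCP(graph, pvalues = c(0.01, 0.04), alpha = 0.025)
```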

7.
Computer simulation techniques were used to investigate the Type I and Type II error rates of one parametric (Dunnett) and two nonparametric multiple comparison procedures for comparing treatments with a control under non-normality and variance homogeneity. It was found that Dunnett's procedure is quite robust with respect to violations of the normality assumption. Power comparisons show that for small sample sizes Dunnett's procedure is superior to the nonparametric procedures even in non-normal cases, but for larger sample sizes the multiple analogues of the Wilcoxon and Kruskal-Wallis rank statistics are superior to Dunnett's procedure in all considered non-normal cases. Further investigations under non-normality and variance heterogeneity show similar robustness with respect to the Type I error rate, and power comparisons yield results similar to those in the equal-variance case.
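A sketch of this kind of robustness simulation: the empirical family-wise Type I error of Dunnett's test under skewed errors, via the multcomp package; the error distribution and design are our choices, not the paper's:

```r
# Estimate the family-wise error rate of Dunnett's test when the global
# null holds but errors are exponential (skewed) rather than normal.
library(multcomp)

set.seed(7)
k <- 3; n <- 10
one.run <- function() {
  g <- factor(rep(0:k, each = n))
  y <- rexp(n * (k + 1)) - 1               # non-normal errors, equal means
  fit <- aov(y ~ g)
  ps <- summary(glht(fit, linfct = mcp(g = "Dunnett")))$test$pvalues
  any(ps < 0.05)                           # any false rejection?
}
mean(replicate(500, one.run()))            # should stay near 0.05 if robust
```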

8.
The most important decision faced by large-scale studies, such as those presently encountered in human genetics, is to distinguish those tests that are true positives from those that are not. In the context of genetics, this entails identifying, from the vast number of markers typically interrogated in genome-wide studies, the genetic markers that actually underlie medically relevant phenotypes. A critical part of these decisions relies on the appropriate statistical assessment of data obtained from tests across numerous markers. Several methods have been developed to aid with such analyses, with family-wise approaches, such as the Bonferroni and Dunn-Šidák corrections, being popular. Conditions that motivate the use of family-wise corrections are explored. Although simple to implement, one major limitation of these approaches is that they assume that p-values are i.i.d. uniformly distributed under the null hypothesis. However, several factors may violate this assumption in genome-wide studies, including effects from confounding by population stratification, the presence of related individuals, the correlational structure among genetic markers, and the use of limiting distributions for test statistics. Even after adjustment for such effects, the distribution of p-values can substantially depart from a uniform distribution under the null hypothesis. In this work, I present a decision theory for the use of family-wise corrections for multiplicity and a generalization of the Dunn-Šidák correction that relaxes the assumption of uniformly distributed null p-values. The independence assumption is also relaxed and handled through calculating the effective number of independent tests. I also explicitly show the relationship between order statistics and family-wise correction procedures. This generalization may be applicable to multiplicity problems outside of genomics.
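A sketch of the Šidák-type correction combined with an effective number of tests; the Meff estimate below follows the Li and Ji (2005) eigenvalue recipe as a stand-in for the paper's own generalization, and the genotype matrix is simulated:

```r
# Sidak per-test alpha: 1 - (1 - alpha)^(1/Meff), with Meff estimated
# from the eigenvalues of the marker correlation matrix (Li-Ji style).
sidak.alpha <- function(alpha, m.eff) 1 - (1 - alpha)^(1 / m.eff)

set.seed(8)
G <- matrix(rbinom(200 * 50, 2, 0.3), ncol = 50)        # toy genotype matrix
ev <- eigen(cor(G), only.values = TRUE)$values
m.eff <- sum(ifelse(ev >= 1, 1, 0) + (ev - floor(ev)))  # Li-Ji count

sidak.alpha(0.05, ncol(G))   # classical Sidak, assumes independence
sidak.alpha(0.05, m.eff)     # relaxed threshold using effective test count
```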

9.
Monotonically increasing or decreasing functions are often used to model the relationship between the response of an experimental unit and the dose of a given substance. Of late, there has been an increased interest in dose-response relationships that exhibit hormetic effects. These effects may be characterized by an increase in response at low doses instead of the expected decrease in response that is observed at higher doses. Herein, we study the statistical implications of hormesis in several ways. First, we present a broad class of parametric mathematical-statistical models, constructed from standard dose-response models, that allow the incorporation of hormetic effects in such a way that the presence of hormesis can be tested statistically. Second, we consider the impact of model misspecification on effective dose estimation, such as the ED50 and the limiting dose for stimulation, when the hormetic effect is present but ignored in the dose-response model by the researcher (model underspecification) and when a hormetic effect is not present but incorporated into the dose-response model (model overspecification). Our simulation study reveals that it is more damaging to the estimation of effective dose to ignore the hormetic effect through model underspecification than to include the hormetic effect in the model through model overspecification. Third, we develop a nonparametric regression technique useful as an exploratory procedure to indicate hormetic effects when present. Finally, both parametric and nonparametric methods are illustrated with an example.
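A sketch of one standard way to fit and test a hormetic dose-response in R, via the drc package's Brain-Cousens model, as a stand-in for the paper's own model class; data are simulated:

```r
# Fit a hormetic (Brain-Cousens) model against a monotone log-logistic
# model and test the hormesis term by nested model comparison.
library(drc)

set.seed(9)
dose <- rep(c(0, 0.1, 0.5, 1, 5, 10, 50), each = 8)
resp <- 10 * (1 + 0.5 * dose) / (1 + (dose / 3)^1.5) + rnorm(length(dose))

fit.horm <- drm(resp ~ dose, fct = BC.4())   # hormetic (Brain-Cousens)
fit.mono <- drm(resp ~ dose, fct = LL.3())   # monotone log-logistic
anova(fit.mono, fit.horm)                    # test for the hormesis term
ED(fit.horm, 50)                             # ED50 under the hormetic model
```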

10.
Characterizing an appropriate dose-response relationship and identifying the right dose in a clinical trial are two main goals of early drug development. MCP-Mod is one of the pioneering approaches developed within the last 10 years that combine modeling techniques with multiple comparison procedures to address these goals in clinical drug development. The MCP-Mod approach begins with a set of potential dose-response models, tests for a significant dose-response effect (proof of concept, PoC) using multiple linear contrast tests, and selects the "best" model among those with a significant contrast test. A disadvantage of the method is that the parameter values of the candidate models need to be fixed a priori for the contrast tests. This may lead to a loss in power and unreliable model selection. For this reason, several variations of the MCP-Mod approach and a hierarchical model selection approach have been suggested in which the parameter values need not be fixed in the proof-of-concept testing step and can be estimated after the model selection step. This paper provides a numerical comparison of the different MCP-Mod variants and the hierarchical model selection approach with regard to their ability to detect the dose-response trend, their potential to select the correct model, and their accuracy in estimating the dose-response shape and minimum effective dose. Additionally, as one of the approaches is based on two-sided model comparisons only, we make it more consistent with the common goals of a PoC study by extending it to one-sided comparisons between the constant and alternative candidate models in the proof-of-concept step.
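A sketch of the full MCP-Mod pipeline (PoC test, model selection, and target dose estimation) via DoseFinding::MCPMod, one of the variants compared here; the candidate models and the Delta margin are our choices:

```r
# Complete MCP-Mod run on the package's built-in example data.
library(DoseFinding)

data(biom)  # built-in dataset with columns dose and resp
mods <- Mods(linear = NULL, emax = 0.05, exponential = 0.2,
             doses = c(0, 0.05, 0.2, 0.6, 1))

fit <- MCPMod(dose, resp, data = biom, models = mods,
              selModel = "AIC", Delta = 0.4)  # Delta defines the target dose
fit
```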

11.
Cheng C, Pounds S. Bioinformation 2007, 1(10):436-446
Microarray gene expression applications have greatly stimulated statistical research on the massive multiple hypothesis testing problem. There is now a large body of literature in this area and essentially five paradigms for massive multiple testing: control of the false discovery rate (FDR), estimation of the FDR, significance threshold criteria, control of the family-wise error rate (FWER) or generalized FWER (gFWER), and empirical Bayes approaches. This paper contains a technical survey of the developments of the FDR-related paradigms, emphasizing precise formulation of the problem, concepts of error measurement, and considerations in applications. The goal is not an exhaustive literature survey, but rather a review of the current state of the field.
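A minimal illustration of two of the surveyed paradigms side by side on the same simulated p-values, FWER control versus FDR control:

```r
# Bonferroni (FWER) versus Benjamini-Hochberg (FDR) on mixed p-values.
set.seed(11)
p <- c(runif(900), rbeta(100, 0.5, 10))   # 900 nulls, 100 shifted "signals"

sum(p.adjust(p, method = "bonferroni") < 0.05)  # FWER-controlled rejections
sum(p.adjust(p, method = "BH") < 0.05)          # FDR-controlled rejections
```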

12.
Microarray technology is rapidly emerging for genome-wide screening of differentially expressed genes between clinical subtypes or different conditions of human diseases. Traditional statistical testing approaches, such as the two-sample t-test or the Wilcoxon test, are frequently used for evaluating the statistical significance of informative expressions but require adjustment for large-scale multiplicity. Due to its simplicity, the Bonferroni adjustment has been widely used to circumvent this problem. It is well known, however, that the standard Bonferroni test is often very conservative. In the present paper, we compare three multiple testing procedures in the microarray context: the original Bonferroni method, a Bonferroni-type improved single-step method, and a step-down method. The latter two methods are based on nonparametric resampling, by which the null distribution can be derived with the dependency structure among gene expressions preserved and the family-wise error rate accurately controlled at the desired level. We also present a sample size calculation method for designing microarray studies. Through simulations and data analyses, we find that the proposed methods for testing and sample size calculation are computationally fast and control error rates and power precisely.
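A sketch of a resampling-based step-down (maxT) adjustment of the kind compared here, with label permutation standing in for the authors' exact resampling scheme; data are simulated:

```r
# Westfall-Young-type step-down maxT adjustment by permutation.
set.seed(12)
m <- 100; n1 <- n2 <- 10
X <- matrix(rnorm(m * (n1 + n2)), nrow = m)   # rows = genes, cols = samples
grp <- rep(1:2, c(n1, n2))

tstat <- function(g) abs(apply(X, 1, function(x)
  t.test(x[g == 1], x[g == 2])$statistic))

obs <- tstat(grp)
ord <- order(obs, decreasing = TRUE)
B <- 500
Tperm <- replicate(B, tstat(sample(grp)))     # m x B null statistics

# Step-down: compare the k-th largest observed statistic with the max over
# the less-significant set, then enforce monotonicity of adjusted p-values.
p.sd <- numeric(m)
for (k in seq_len(m)) {
  idx <- ord[k:m]
  p.sd[ord[k]] <- mean(apply(Tperm[idx, , drop = FALSE], 2, max) >= obs[ord[k]])
}
p.sd[ord] <- cummax(p.sd[ord])
head(sort(p.sd))
```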

13.
Simultaneous inference in general parametric models
Simultaneous inference is a common problem in many areas of application. If multiple null hypotheses are tested simultaneously, the probability of erroneously rejecting at least one of them increases beyond the pre-specified significance level. Simultaneous inference procedures have to be used that adjust for multiplicity and thus control the overall type I error rate. In this paper we describe simultaneous inference procedures in general parametric models, where the experimental questions are specified through a linear combination of elemental model parameters. The framework described here is quite general and extends the canonical theory of multiple comparison procedures in ANOVA models to linear regression problems, generalized linear models, linear mixed-effects models, the Cox model, robust linear models, etc. Several examples using a variety of different statistical models illustrate the breadth of the results. For the analyses we use the R add-on package multcomp, which provides a convenient interface to the general approach adopted here.
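A direct illustration with the multcomp package named in the abstract, using a built-in R dataset; Tukey all-pairwise comparisons specified as linear functions of the parameters of a standard linear model:

```r
# All-pairwise comparisons with simultaneous inference via multcomp.
library(multcomp)

fit <- lm(weight ~ group, data = PlantGrowth)
cmp <- glht(fit, linfct = mcp(group = "Tukey"))

summary(cmp)   # multiplicity-adjusted p-values
confint(cmp)   # simultaneous confidence intervals
```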

14.
In the 1940s and 1950s, children in Israel were treated for tinea capitis by irradiation to the scalp to induce epilation. Follow-up studies of these patients and of other radiation-exposed populations show an increased risk of malignant and benign thyroid tumors. Those analyses, however, assume that thyroid dose for individuals is estimated precisely without error. Failure to account for uncertainties in dosimetry may affect standard errors and bias dose-response estimates. For the Israeli tinea capitis study, we discuss sources of uncertainties and adjust dosimetry for uncertainties in the prediction of true dose from X-ray treatment parameters. We also account for missing ages at exposure for patients with multiple X-ray treatments, since only ages at first treatment are known, and for missing data on treatment center, which investigators use to define exposure. Our reanalysis of the dose response for thyroid cancer and benign thyroid tumors indicates that uncertainties in dosimetry have minimal effects on dose-response estimation and on inference about the modifying effects of age at first exposure, time since exposure, and other factors. Since the components of the dose uncertainties we describe are likely to be present in other epidemiological studies of patients treated with radiation, our analysis may provide a model for considering the potential role of these uncertainties.

15.
Interlaboratory studies are common in toxicology, particularly for the introduction of alternative assays. Numerous papers are available on the statistical analysis of interlaboratory studies, but these deal primarily with the case of a replicated single sample studied in several laboratories. This approach can be used for some assays, but for the majority the results will be unsatisfactory, i.e., they show great variability both between dose groups and between laboratories. However, the primary objective of toxicological assays is to achieve similarity between the sizes of effects, rather than to determine absolute values. In the parametric model, the sizes of effects are the studentised differences from the negative control or, for the commonly used dose-response designs, the similarity of the slopes of the dose-response curves. Standard approaches for the estimation of intralaboratory and interlaboratory variability, including Mandel plots, are introduced, and new approaches are presented for demonstrating similarity of effect sizes, with or without assuming a dose-response model. One approach is based on a modification of the parallel-line assay, the other on a modification of the interaction contrasts of the analysis of variance. SAS programs are given for all approaches, and real data from an interlaboratory immunotoxicological study are analysed as a demonstration.
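A sketch of the interaction-contrast idea in R (the paper itself provides SAS programs): if laboratories agree on effect sizes, the dose-by-laboratory interaction in a linear model should be negligible. Data are simulated:

```r
# Test equality of dose-response slopes across laboratories via the
# dose-by-lab interaction term.
set.seed(15)
d <- expand.grid(dose = c(0, 1, 2, 4), lab = factor(1:4), rep = 1:6)
d$y <- 2 + 1.5 * d$dose + rnorm(nrow(d), sd = 0.2 * (1 + d$dose))

anova(lm(y ~ dose * lab, data = d))   # dose:lab row tests slope similarity
```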

16.
Trimmed logit method for estimating the ED50 in quantal bioassay
Trimmed nonparametric procedures such as the trimmed Spearman-Karber method have been proposed in the literature for overcoming the deficiencies of the probit and logit models in the analysis of quantal bioassay data. However, there are situations where the median effective dose (ED50) is not calculable with the trimmed Spearman-Karber method, but is estimable with a parametric model. Also, it is helpful to have a parametric model for estimating percentiles of the dose-response curve such as the ED10 and ED25. A trimmed logit method that combines the advantages of a parametric model with those of trimming in dealing with heavy-tailed distributions is presented here. These advantages are substantiated with examples of actual bioassay data. Simulation results are presented to support the validity of the trimmed logit method, which has been found to work well in our experience with over 200 data sets. A computer program for computing the ED50 and associated 95% asymptotic confidence interval, based on the trimmed logit method, can be obtained from the authors.
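A sketch of the untrimmed logit baseline that the trimmed method refines: a binomial GLM with the ED50 and its standard error from MASS::dose.p(). The trimming step, which is the paper's contribution, is not shown, and the counts are invented:

```r
# Plain logit ED50 for quantal bioassay data.
library(MASS)

dose <- c(0.5, 1, 2, 4, 8)
n    <- rep(20, 5)
dead <- c(1, 4, 9, 15, 19)

fit <- glm(cbind(dead, n - dead) ~ log(dose), family = binomial)
dose.p(fit, p = 0.5)          # ED50 on the log-dose scale, with SE
exp(dose.p(fit, p = 0.5)[1])  # back-transformed ED50
```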

17.
The traditional q1* methodology for constructing upper confidence limits (UCLs) for the low-dose slopes of quantal dose-response functions has two limitations: (i) it is based on an asymptotic statistical result that has been shown via Monte Carlo simulation not to hold in practice for small, real bioassay experiments (Portier and Hoel, 1983); and (ii) it assumes that the multistage model (which represents cumulative hazard as a polynomial function of dose) is correct. This paper presents an uncertainty analysis approach for fitting dose-response functions to data that does not require specific parametric assumptions or depend on asymptotic results. It has the advantage that the resulting estimates of the dose-response function (and uncertainties about it) no longer depend on the validity of an assumed parametric family or on the accuracy of the asymptotic approximation. The method derives posterior densities for the true response rates in the dose groups, rather than deriving posterior densities for model parameters, as in other Bayesian approaches (Sielken, 1991), or resampling the observed data points, as in the bootstrap and other resampling methods. It does so by conditioning constrained maximum-entropy priors on the observed data. Monte Carlo sampling of the posterior (constrained, conditioned) probability distributions generates values of response probabilities that might be observed if the experiment were repeated with very large sample sizes. A dose-response curve is fit to each such simulated dataset. If no parametric model has been specified, then a generalized representation (e.g., a power-series or orthonormal polynomial expansion) of the unknown dose-response function is fit to each simulated dataset using "model-free" methods. The simulation-based frequency distribution of all the dose-response curves fit to the simulated datasets yields a posterior distribution function for the low-dose slope of the dose-response curve. An upper confidence limit on the low-dose slope is obtained directly from this posterior distribution. This "Data Cube" procedure is illustrated with a real dataset for benzene, and is seen to produce more policy-relevant insights than does the traditional q1* methodology. For example, it shows how far apart the 90%, 95%, and 99% limits are, and reveals how uncertainty about total and incremental risk varies with dose level (typically being dominated at low doses by uncertainty about the response of the control group, and at high doses by sampling variability). Strengths and limitations of the Data Cube approach are summarized, and potential decision-analytic applications to making better-informed risk management decisions are briefly discussed.
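A very loose sketch of the resampling logic, with conjugate Beta posteriors standing in for the paper's constrained maximum-entropy construction and a crude logit curve fit per draw; all numbers are invented:

```r
# Draw plausible true response rates per dose group, refit a curve to each
# draw, and read an upper limit for the low-dose slope off the spread.
set.seed(17)
dose <- c(0, 1, 5, 25); x <- c(1, 3, 8, 16); n <- rep(50, 4)  # toy bioassay

low.slope <- replicate(5000, {
  p <- rbeta(4, x + 1, n - x + 1)        # posterior draw per dose group
  p <- cummax(p)                         # impose a monotone response
  coef(lm(qlogis(p) ~ dose))["dose"]     # crude curve fit per draw
})
quantile(low.slope, 0.95)                # upper confidence limit for slope
```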

18.
Generalized spatial structural equation models
It is common in public health research to have high-dimensional, multivariate, spatially referenced data representing summaries of geographic regions. Often, it is desirable to examine relationships among these variables both within and across regions. An existing modeling technique, spatial factor analysis, assumes that a common spatial factor underlies all the variables and causes them to be related to one another. An extension of this technique considers that there may be more than one underlying factor and that relationships among the underlying latent variables are of primary interest. However, due to the complicated nature of the covariance structure of this type of data, existing methods are not satisfactory. We thus propose a generalized spatial structural equation model. In the first level of the model, we assume that the observed variables are related to particular underlying factors. In the second level, we use the structural equation method to model the relationships among the underlying factors and place parametric spatial distributions on their covariance structure. We apply the model to county-level cancer mortality and census summary data for Minnesota, including socioeconomic status and access to public utilities.

19.
We studied several methods for selecting single-nucleotide polymorphisms (SNPs) in a disease association study. Two major categories of analytical strategy are the univariate and the set selection approaches. The univariate approach evaluates each SNP marker one at a time, while the set selection approach tests disease association of a set of SNP markers simultaneously. We examined various test statistics that can be utilized in testing disease association and also reviewed several multiple testing procedures that can properly control the family-wise error rate when the univariate approach is applied to multiple markers. The set association methods were then briefly reviewed. Finally, we applied these methods to the data from the Collaborative Study on the Genetics of Alcoholism (COGA).

20.
Traditional pedobarographic statistical analyses are conducted over discrete regions. Recent studies have demonstrated that regionalization can corrupt pedobarographic field data through conflation when arbitrary dividing lines inappropriately delineate smooth field processes. An alternative is to register images such that homologous structures optimally overlap and then conduct statistical tests at each pixel to generate statistical parametric maps (SPMs). The significance of SPM processes may be assessed within the framework of random field theory (RFT). RFT is ideally suited to pedobarographic image analysis because its fundamental data unit is a lattice sampling of a smooth and continuous spatial field. To correct for the vast number of multiple comparisons inherent in such data, recent pedobarographic studies have employed a Bonferroni correction to retain a constant family-wise error rate. This approach unfortunately neglects the spatial correlation of neighbouring pixels, and so provides an overly conservative (albeit valid) statistical threshold. RFT generally relaxes the threshold depending on field smoothness and on the geometry of the search area, but it also provides a framework for assigning p values to suprathreshold clusters based on their spatial extent. The current paper provides an overview of basic RFT concepts and uses simulated and experimental data to validate both RFT-relevant field smoothness estimations and RFT predictions regarding the topological characteristics of random pedobarographic fields. Finally, previously published experimental data are re-analysed using RFT inference procedures to demonstrate how RFT yields easily understandable statistical results that may be incorporated into routine clinical and laboratory analyses.
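A sketch of the basic 2D RFT calculation: solve for the threshold at which the expected Euler characteristic of a smooth Gaussian field equals the desired alpha, using the standard 2D EC density; the resel count below is an assumed stand-in value, not taken from the paper:

```r
# RFT-corrected threshold for a 2D Gaussian random field, keeping only
# the leading (2D) Euler characteristic term.
resels <- 50                                 # search-region resel count (assumed)
ec2d <- function(z) resels * (4 * log(2)) / (2 * pi)^(3/2) * z * exp(-z^2 / 2)

z.rft <- uniroot(function(z) ec2d(z) - 0.05, c(2, 6))$root
z.rft                                        # RFT threshold; compare with
qnorm(1 - 0.05 / (resels * 100))             # Bonferroni, if ~100 pixels/resel
```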
