Similar Articles
20 similar articles retrieved.
1.
2.
Sensitivity analysis for matching with multiple controls
Rosenbaum PR. Biometrika 1988, 75(3): 577-581.

3.
4.
Rosenbaum PR. Biometrics 1999, 55(2): 560-564.
When a treatment has a dilated effect, with larger effects when responses are higher, there can be much less sensitivity to bias at upper quantiles than at lower quantiles; i.e., small, plausible hidden biases might explain the ostensible effect of the treatment for many subjects, and yet only quite large hidden biases could explain the effect on a few subjects having dramatically elevated responses. An example concerning kidney function of cadmium workers is discussed in detail. In that example, the treatment effect is far from additive: it is plausibly zero at the lower quartile of responses to control, and it is large and fairly insensitive to bias at the upper quartile.

5.
Barber S, Jennison C. Biometrics 1999, 55(2): 430-436.
We describe existing tests and introduce two new tests concerning the value of a survival function. These tests may be used to construct a confidence interval for the survival probability at a given time or for a quantile of the survival distribution. Simulation studies show that error rates can differ substantially from their nominal values, particularly at survival probabilities close to zero or one. We recommend our new constrained bootstrap test for its good overall performance.
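The constrained bootstrap test itself is not reproduced here. As a rough, hedged illustration of the simpler task these tests address — interval estimation for a survival probability at a fixed time — the Python sketch below computes a plain percentile bootstrap interval for the Kaplan–Meier estimate of S(t0) on simulated right-censored data; the data, t0, and the helper km_survival_at are illustrative assumptions, not Barber and Jennison's method.

```python
import numpy as np

def km_survival_at(time, event, t0):
    """Kaplan-Meier estimate of S(t0) from right-censored data (toy version)."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    n = len(time)
    surv = 1.0
    for i, (t, d) in enumerate(zip(time, event)):
        if t > t0:
            break
        if d == 1:                       # an event at time t
            at_risk = n - i              # subjects still at risk just before t
            surv *= 1.0 - 1.0 / at_risk
    return surv

rng = np.random.default_rng(0)
n = 80
t_event = rng.exponential(10.0, n)       # hypothetical event times
t_cens = rng.exponential(15.0, n)        # hypothetical censoring times
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(int)

t0 = 5.0
point = km_survival_at(time, event, t0)

boot = []
for _ in range(2000):                    # plain percentile bootstrap, resampling subjects
    idx = rng.integers(0, n, n)
    boot.append(km_survival_at(time[idx], event[idx], t0))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"S({t0}) = {point:.3f}, 95% percentile bootstrap CI [{lo:.3f}, {hi:.3f}]")
```

The abstract's warning applies directly to this naive interval: near survival probabilities of zero or one its error rates can drift far from nominal, which is the motivation for the constrained bootstrap test the authors recommend.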

6.
When the number of tumors is small, a significance level for the Cox-Mantel (log-rank) test Z is often computed using a discrete approximation to the permutation distribution. For j = 0,…, J let Nj(t) be the number of animals in group j alive and tumor-free at the start of time t. Make a 2 × (1+J) table for each time t of the number of animals Rj(t) with newly palpated tumor out of the total Nj(t) at risk. There are a total of, say, K tables, one for each distinct time t with an observed death or newly palpated tumor. The usual discrete approximation to the permutation distribution of Z is defined by taking tables to be independent with fixed margins Nj(t) and ΣRj(t) for all t. However, the Nj(t) are random variables for the actual permutation distribution of Z, resulting in dependence among the tables. Calculations for the exact permutation distribution are explained, and examples are given where the exact significance level differs substantially from the usual discrete approximation. The discrepancy arises primarily because permutations with different Z-scores under the exact distribution can be equal for the discrete approximation, inflating the approximate P-value.
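As a toy, hedged illustration of the distinction drawn above (not the authors' calculations), the sketch below brute-forces the exact permutation distribution of a two-group log-rank Z statistic for a tiny hypothetical data set by enumerating every reassignment of animals to groups; the data and the helper logrank_z are assumptions made purely for illustration.

```python
import numpy as np
from itertools import combinations

def logrank_z(time, event, group):
    """Two-group log-rank (Cox-Mantel) Z statistic for right-censored data."""
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):            # each distinct event time
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        o_minus_e += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e / np.sqrt(var)

# tiny hypothetical data set: 5 animals per group
time  = np.array([3, 5, 7, 9, 11,  4, 6, 8, 10, 12], dtype=float)
event = np.array([1, 1, 0, 1, 1,   1, 0, 1, 1, 0])
group = np.array([1, 1, 1, 1, 1,   0, 0, 0, 0, 0])

z_obs = logrank_z(time, event, group)

# exact permutation distribution: every assignment of 5 of the 10 animals to group 1
n, n1 = len(time), int(group.sum())
zs = []
for ones in combinations(range(n), n1):
    g = np.zeros(n, dtype=int)
    g[list(ones)] = 1
    zs.append(logrank_z(time, event, g))
zs = np.array(zs)
p_exact = np.mean(np.abs(zs) >= abs(z_obs) - 1e-12)
print(f"Z = {z_obs:.3f}, exact two-sided permutation p = {p_exact:.4f}")
```

The usual approximation instead treats the K conditional 2 × (1+J) tables as independent with fixed margins; the abstract's point is that for small tumor counts the exact enumeration above can give a noticeably different significance level.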

7.
Peng J, Lee CI, Davis KA, Wang W. Biometrics 2008, 64(3): 877-885.
In dose–response studies, one of the most important issues is the identification of the minimum effective dose (MED), where the MED is defined as the lowest dose such that the mean response is better than the mean response of a zero-dose control by a clinically significant difference. Dose–response curves are sometimes monotonic in nature. To find the MED, various authors have proposed step-down test procedures based on contrasts among the sample means. In this article, we improve upon the method of Marcus and Peritz (1976, Journal of the Royal Statistical Society, Series B 38, 157–165) and implement the dose–response method of Hsu and Berger (1999, Journal of the American Statistical Association 94, 468–482) to construct the lower confidence bound for the difference between the mean response of any nonzero-dose level and that of the control under the monotonicity assumption to identify the MED. The proposed method is illustrated by numerical examples, and simulation studies on power comparisons are presented.
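The improved contrasts of Peng et al. are not reproduced here. As a hedged sketch of the underlying Hsu–Berger-style step-down idea — step down from the highest dose, compute a one-sided lower confidence bound for mean(dose) − mean(control), and stop at the first dose whose bound fails to exceed the clinically significant difference delta — the toy example below uses ordinary Welch t lower bounds; the data, delta, alpha, and the helper lower_bound are illustrative assumptions, not the authors' procedure.

```python
import numpy as np
from scipy import stats

def lower_bound(x, x0, alpha=0.05):
    """One-sided (1 - alpha) lower confidence bound for mean(x) - mean(x0),
    using the Welch two-sample t approximation."""
    m, m0 = x.mean(), x0.mean()
    v, v0 = x.var(ddof=1) / len(x), x0.var(ddof=1) / len(x0)
    se = np.sqrt(v + v0)
    df = (v + v0) ** 2 / (v ** 2 / (len(x) - 1) + v0 ** 2 / (len(x0) - 1))
    return (m - m0) - stats.t.ppf(1 - alpha, df) * se

rng = np.random.default_rng(1)
control = rng.normal(10.0, 2.0, 20)                  # zero-dose control group
doses = {1: rng.normal(10.5, 2.0, 20),               # hypothetical dose groups
         2: rng.normal(12.0, 2.0, 20),
         3: rng.normal(13.0, 2.0, 20),
         4: rng.normal(13.5, 2.0, 20)}

delta = 1.0                                          # clinically significant difference
med = None
for d in sorted(doses, reverse=True):                # step down from the highest dose
    lb = lower_bound(doses[d], control)
    print(f"dose {d}: lower confidence bound for difference = {lb:.2f}")
    if lb > delta:
        med = d                                      # still effective; keep stepping down
    else:
        break                                        # first failure stops the procedure
print("estimated MED:", med)
```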

8.
Randomization models for the matched and unmatched 2 × 2 tables
Copas JB. Biometrika 1973, 60(3): 467-476.

9.
In an observational study, the treatment received and the outcome exhibited may be associated in the absence of an effect caused by the treatment, even after controlling for observed covariates. Two tactics are common: (i) a test for unmeasured bias may be obtained using a secondary outcome for which the effect is known and (ii) a sensitivity analysis may explore the magnitude of unmeasured bias that would need to be present to explain the observed association as something other than an effect caused by the treatment. Can such a test for unmeasured bias inform the sensitivity analysis? If the test for bias does not discover evidence of unmeasured bias, then ask: Are conclusions therefore insensitive to larger unmeasured biases? Conversely, if the test for bias does find evidence of bias, then ask: What does that imply about sensitivity to biases? This problem is formulated in a new way as a convex quadratically constrained quadratic program and solved on a large scale using interior point methods by a modern solver. That is, a convex quadratic function of N variables is minimized subject to constraints on linear and convex quadratic functions of these variables. The quadratic function that is minimized is a statistic for the primary outcome that is a function of the unknown treatment assignment probabilities. The quadratic function that constrains this minimization is a statistic for the subsidiary outcome that is also a function of these same unknown treatment assignment probabilities. In effect, the first statistic is minimized over a confidence set for the unknown treatment assignment probabilities supplied by the unaffected outcome. This process avoids the mistake of interpreting the failure to reject a hypothesis as support for the truth of that hypothesis. The method is illustrated by a study of the effects of light daily alcohol consumption on high-density lipoprotein (HDL) cholesterol levels. In this study, the method quickly optimizes a nonlinear function of N = 800 variables subject to linear and quadratic constraints. In the example, strong evidence of unmeasured bias is found using the subsidiary outcome, but, perhaps surprisingly, this finding makes the primary comparison insensitive to larger biases.
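The statistics and constraints of the actual study are not reproduced here. Purely to illustrate the computational form described — a convex quadratic in the unknown treatment assignment probabilities minimized subject to a convex quadratic constraint and handed to an interior-point solver — the sketch below sets up a small synthetic QCQP in cvxpy (assumed installed); every matrix, bound, and variable here is a placeholder, not the paper's formulation.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N = 50                                   # stand-in for the N unknown assignment probabilities

# synthetic matrices standing in for the quadratic statistics in the paper
A = rng.normal(size=(N, N))
B = rng.normal(size=(N, N))
q = rng.normal(size=N)
r = rng.normal(size=N)

p = cp.Variable(N)
# convex quadratic objective: placeholder for the primary-outcome statistic
objective = cp.Minimize(cp.sum_squares(A @ p) + q @ p)
constraints = [
    cp.sum_squares(B @ p) + r @ p <= 10.0,   # convex quadratic confidence-set constraint (subsidiary outcome)
    p >= 0, p <= 1,                          # assignment probabilities lie in [0, 1]
]
prob = cp.Problem(objective, constraints)
prob.solve()                                  # dispatched to a conic interior-point solver
print("status:", prob.status, "optimal value:", round(float(prob.value), 3))
```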

10.
Agresti A. Biometrics 1999, 55(2): 597-602.
Unless the true association is very strong, simple large-sample confidence intervals for the odds ratio based on the delta method perform well even for small samples. Such intervals include the Woolf logit interval and the related Gart interval based on adding 0.5 before computing the log odds ratio estimate and its standard error. The Gart interval smooths the observed counts toward the model of equiprobability, but one obtains better coverage probabilities by smoothing toward the independence model and by extending the interval in the appropriate direction when a cell count is zero.
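As a minimal sketch of the intervals named above (not Agresti's recommended smoothing toward the independence model), the snippet below computes the Woolf logit interval and the Gart variant that adds 0.5 to the counts before taking the log odds ratio; the 2 × 2 counts are hypothetical.

```python
import numpy as np
from scipy import stats

def logit_interval(a, b, c, d, add=0.0, conf=0.95):
    """Delta-method (Wald) interval for the odds ratio on the log scale.
    add=0.0 gives the Woolf interval; add=0.5 gives the Gart interval."""
    a, b, c, d = (x + add for x in (a, b, c, d))
    log_or = np.log(a * d / (b * c))
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    z = stats.norm.ppf(0.5 + conf / 2)
    return np.exp(log_or - z * se), np.exp(log_or + z * se)

# hypothetical 2x2 table: exposed 12 events / 8 non-events; unexposed 5 / 15
print("Woolf:", logit_interval(12, 8, 5, 15))
print("Gart :", logit_interval(12, 8, 5, 15, add=0.5))
```

Shifting all four counts is what keeps the Gart interval defined when a cell count is zero, which is exactly the situation where the abstract notes that extending the interval in the appropriate direction gives better coverage.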

11.
Valid inference in random effects meta-analysis
The standard approach to inference for random effects meta-analysis relies on approximating the null distribution of a test statistic by a standard normal distribution. This approximation is asymptotic in k, the number of studies, and can be substantially in error in medical meta-analyses, which often have only a few studies. This paper proposes permutation and ad hoc methods for testing with the random effects model. Under the group permutation method, we randomly switch the treatment and control group labels in each trial. This idea is similar to using a permutation distribution for a community intervention trial where communities are randomized in pairs. The permutation method theoretically controls the type I error rate for typical meta-analysis scenarios. We also suggest two ad hoc procedures. Our first suggestion is to use a t reference distribution with k-1 degrees of freedom rather than a standard normal distribution for the usual random effects test statistic. We also investigate the use of a simple t-statistic on the reported treatment effects.
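As a hedged numerical sketch of the two proposals (not the paper's code), the example below applies the group permutation idea — switching treatment and control labels in trial i flips the sign of its estimated effect, so the permutation distribution can be built from all 2^k sign patterns — and, alongside it, the t reference with k − 1 degrees of freedom; the DerSimonian–Laird statistic re_z and the trial data are illustrative assumptions.

```python
import numpy as np
from itertools import product
from scipy import stats

def re_z(y, v):
    """DerSimonian-Laird random-effects Z statistic for H0: overall effect = 0."""
    w = 1.0 / v
    ybar = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - ybar) ** 2)
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    wstar = 1.0 / (v + tau2)
    mu = np.sum(wstar * y) / np.sum(wstar)
    return mu * np.sqrt(np.sum(wstar))

# hypothetical trial-level effect estimates and within-trial variances (k = 6 studies)
y = np.array([0.42, 0.18, 0.55, -0.05, 0.31, 0.25])
v = np.array([0.04, 0.06, 0.09, 0.05, 0.07, 0.03])
k = len(y)
z_obs = re_z(y, v)

# group permutation: each of the 2^k label switches flips a subset of the signs of y
signs = np.array(list(product([-1.0, 1.0], repeat=k)))
perm = np.array([re_z(s * y, v) for s in signs])
p_perm = np.mean(np.abs(perm) >= abs(z_obs) - 1e-12)

# ad hoc alternative: refer the same statistic to a t distribution with k - 1 df
p_t = 2 * stats.t.sf(abs(z_obs), df=k - 1)

print(f"Z = {z_obs:.2f}, permutation p = {p_perm:.3f}, t(k-1) p = {p_t:.3f}")
```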

12.
13.
14.
Zhang H, Zheng G, Li Z. Biometrics 2006, 62(4): 1124-1131.
Using unphased genotype data, we studied statistical inference for association between a disease and a haplotype in matched case-control studies. Statistical inference for haplotype data is complicated due to the ambiguity of genotype phases. An estimating equation-based method is developed for estimating odds ratios and testing disease-haplotype association. The method can potentially also be applied to testing haplotype-environment interaction. Simulation studies show that the proposed method has good performance. The performance of the method in the presence of departures from Hardy-Weinberg equilibrium is also studied.

15.
Rank tests for censored matched pairs

16.
We propose a method to construct simultaneous confidence intervals for a parameter vector by inverting a series of randomization tests. The randomization tests are facilitated by an efficient multivariate Robbins–Monro procedure that takes the correlation information of all components into account. The estimation method does not require any distributional assumption about the population other than the existence of second moments. The resulting simultaneous confidence intervals are not necessarily symmetric about the point estimate of the parameter vector but possess the property of equal tails in all dimensions. In particular, we present the construction for the mean vector of one population and for the difference between the mean vectors of two populations. Extensive simulations are conducted to compare the proposed method numerically with four other methods. We illustrate the application of the proposed method to testing bioequivalence with multiple endpoints on real data.

17.
Fogarty CB. Biometrics 2023, 79(3): 2196-2207.
We develop sensitivity analyses for the sample average treatment effect in matched observational studies while allowing unit-level treatment effects to vary. The methods may be applied to studies using any optimal without-replacement matching algorithm. In contrast to randomized experiments and to paired observational studies, we show for general matched designs that, over a large class of test statistics, any procedure bounding the worst-case expectation while allowing for arbitrary effect heterogeneity must be unnecessarily conservative if treatment effects are actually constant across individuals. We present a sensitivity analysis which bounds the worst-case expectation while allowing for effect heterogeneity, and illustrate why it is generally conservative if effects are constant. An alternative procedure is presented that is asymptotically sharp if treatment effects are constant, and that is valid for testing the sample average effect under additional restrictions which may be deemed benign by practitioners. Simulations demonstrate that this alternative procedure results in a valid sensitivity analysis for the weak null hypothesis under a host of reasonable data-generating processes. The procedures allow practitioners to assess the robustness of estimated sample average treatment effects to hidden bias while allowing for effect heterogeneity in matched observational studies.

18.
Propensity-score matching is frequently used in the medical literature to reduce or eliminate the effect of treatment selection bias when estimating the effect of treatments or exposures on outcomes using observational data. In propensity-score matching, pairs of treated and untreated subjects with similar propensity scores are formed. Recent systematic reviews of the use of propensity-score matching found that the large majority of researchers ignore the matched nature of the propensity-score matched sample when estimating the statistical significance of the treatment effect. We conducted a series of Monte Carlo simulations to examine the impact of ignoring the matched nature of the propensity-score matched sample on type I error rates, coverage of confidence intervals, and variance estimation of the treatment effect. We examined estimating differences in means, relative risks, odds ratios, rate ratios from Poisson models, and hazard ratios from Cox regression models. We demonstrated that accounting for the matched nature of the propensity-score matched sample tended to result in type I error rates that were closer to the advertised level compared to when matching was not incorporated into the analyses. Similarly, accounting for the matched nature of the sample tended to result in confidence intervals with coverage rates that were closer to the nominal level, compared to when matching was not taken into account. Finally, accounting for the matched nature of the sample resulted in estimates of standard error that more closely reflected the sampling variability of the treatment effect compared to when matching was not taken into account.
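As a small, hedged simulation sketch of the central point for a difference in means (not the authors' Monte Carlo design), the example below forms propensity-score matched pairs by greedy 1:1 nearest-neighbour matching without replacement and then contrasts a paired t-test, which accounts for the matched nature of the sample, with an independent-samples t-test, which ignores it; the data-generating process and matching details are illustrative assumptions.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 3))                                  # three confounders
ps_true = 1 / (1 + np.exp(-(-1.0 + x @ np.array([0.6, -0.4, 0.3]))))
treat = rng.binomial(1, ps_true)
y = x @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n)      # null treatment effect

# estimated propensity score
ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]

# greedy 1:1 nearest-neighbour matching without replacement on the propensity score
treated = np.where(treat == 1)[0]
controls = list(np.where(treat == 0)[0])
pairs = []
for i in treated:
    j = min(controls, key=lambda c: abs(ps[c] - ps[i]))      # closest remaining control
    pairs.append((i, j))
    controls.remove(j)

y_t = np.array([y[i] for i, _ in pairs])
y_c = np.array([y[j] for _, j in pairs])

t_paired = stats.ttest_rel(y_t, y_c)                         # accounts for the matched pairs
t_indep = stats.ttest_ind(y_t, y_c)                          # ignores the matching
print(f"paired:      t = {t_paired.statistic:.2f}, p = {t_paired.pvalue:.3f}")
print(f"independent: t = {t_indep.statistic:.2f}, p = {t_indep.pvalue:.3f}")
```

Repeating such a simulation many times and tracking rejection rates and interval coverage is the spirit of the comparison summarized in the abstract.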

19.
Simultaneous confidence intervals for comparing binomial parameters
Agresti A, Bini M, Bertaccini B, Ryu E. Biometrics 2008, 64(4): 1270-1275.
To compare proportions with several independent binomial samples, we recommend a method of constructing simultaneous confidence intervals that uses the studentized range distribution with a score statistic. It applies to a variety of measures, including the difference of proportions, odds ratio, and relative risk. For the odds ratio, a simulation study suggests that the method has coverage probability closer to the nominal value than ad hoc approaches such as the Bonferroni implementation of Wald or "exact" small-sample pairwise intervals. It performs well even for the problematic but practically common case in which the binomial parameters are relatively small. For the difference of proportions, the proposed method has performance comparable to a method proposed by Piegorsch (1991, Biometrics 47, 45-52).

20.