Testing small study effects in multivariate meta-analysis
Authors:Chuan Hong  Georgia Salanti  Sally C Morton  Richard D Riley  Haitao Chu  Stephen E Kimmel  Yong Chen
Institution:1. Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, Massachusetts;2. Institute of Social and Preventive Medicine, University of Bern, Bern, Switzerland;3. Department of Statistics, Virginia Tech, Blacksburg, Virginia;4. Centre for Prognosis Research, School of Medicine, Keele University, Staffordshire, UK;5. Division of Biostatistics, University of Minnesota, Minneapolis, Minnesota;6. Department of Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania;7. Department of Biostatistics, Epidemiology & Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania

Abstract:Small study effects occur when smaller studies show different, often larger, treatment effects than larger ones, which may threaten the validity of systematic reviews and meta-analyses. The best-known causes of small study effects include publication bias, outcome reporting bias, and clinical heterogeneity. Methods to account for small study effects in univariate meta-analysis have been extensively studied; however, detecting small study effects in a multivariate meta-analysis setting remains largely unexplored. One complication is that different types of selection processes can be involved in the reporting of multivariate outcomes. For example, some studies may be completely unpublished, while others may selectively report only a subset of multiple outcomes. In this paper, we propose a score test as an overall test of small study effects in multivariate meta-analysis. Two detailed case studies demonstrate the advantage of the proposed test over various naive applications of univariate tests in practice. Through simulation studies, the proposed test is found to maintain the nominal Type I error rate with considerable power in moderate sample size settings. Finally, we evaluate the concordance between the proposed test and naive applications of univariate tests on 44 systematic reviews with multiple outcomes from the Cochrane Database.
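The paper's multivariate score test is not reproduced here, but the univariate tests it generalizes are well known. As a point of reference, the following is a minimal sketch of an Egger-type regression test for small study effects in a single-outcome meta-analysis: the standardized effect sizes are regressed on the study precisions, and an intercept far from zero signals funnel-plot asymmetry. The function name and the simulated inputs below are illustrative, not from the paper.

```python
import numpy as np

def egger_test(y, se):
    """Egger-type regression test for small study effects (univariate).

    Regresses the standardized effect y/se on the precision 1/se.
    The intercept estimates funnel-plot asymmetry; a t-statistic
    well away from zero suggests small study effects.
    Returns (intercept, t_statistic).
    """
    z = y / se                      # standardized effect sizes
    prec = 1.0 / se                 # study precisions
    X = np.column_stack([np.ones_like(prec), prec])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    n, p = X.shape
    sigma2 = resid @ resid / (n - p)          # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)     # covariance of estimates
    t_intercept = beta[0] / np.sqrt(cov[0, 0])
    return beta[0], t_intercept
```

In the multivariate setting discussed in the abstract, naively applying such a test outcome-by-outcome ignores both the correlation between outcomes and selective reporting of some outcomes, which is what motivates an overall score test.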
Keywords:comparative effectiveness research  composite likelihood  outcome reporting bias  publication bias  small study effect  systematic review