The Use of Bayesian Networks to Assess the Quality of Evidence from Research Synthesis: 2. Inter-Rater Reliability and Comparison with Standard GRADE Assessment |
| |
Authors: | Alexis Llewellyn, Craig Whittington, Gavin Stewart, Julian P. T. Higgins, Nick Meader |
| |
Affiliation: | 1. Centre for Reviews and Dissemination, University of York, York, United Kingdom; 2. Centre for Outcomes Research and Effectiveness, Research Department of Clinical, Educational and Health Psychology, University College London, London, United Kingdom; 3. School of Agriculture, Food and Rural Development, Newcastle University, Newcastle, United Kingdom; 4. School of Social and Community Medicine, University of Bristol, Bristol, United Kingdom |
| |
Abstract: | Background: The Grades of Recommendation, Assessment, Development and Evaluation (GRADE) approach is widely implemented in systematic reviews, health technology assessment and guideline development organisations throughout the world. We have previously reported on the development of the Semi-Automated Quality Assessment Tool (SAQAT), which enables a semi-automated validity assessment based on GRADE criteria. The main advantage of our approach is its potential to improve inter-rater agreement of GRADE assessments, particularly when used by less experienced researchers, because such judgements can be complex and challenging to apply without training. This is the first study examining the inter-rater agreement of the SAQAT. Methods: We conducted two studies to compare: a) the inter-rater agreement of two researchers using the SAQAT independently on 28 meta-analyses; and b) the inter-rater agreement between a researcher using the SAQAT (who had no experience of using GRADE) and an experienced member of the GRADE working group conducting a standard GRADE assessment on 15 meta-analyses. Results: There was substantial agreement between independent researchers using the SAQAT for all domains (for example, overall GRADE rating: weighted kappa 0.79; 95% CI 0.65 to 0.93). Comparison between the SAQAT and a standard GRADE assessment suggested that inconsistency was parameterised too conservatively by the SAQAT, so the tool was amended. Following amendment, we found fair-to-moderate agreement between the standard GRADE assessment and the SAQAT (for example, overall GRADE rating: weighted kappa 0.35; 95% CI 0.09 to 0.87). Conclusions: Despite the need for further research, the SAQAT may aid consistent application of GRADE, particularly by less experienced researchers. |
| |
Keywords: | |
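The agreement statistics reported in the abstract are weighted kappa coefficients, which credit partial agreement between ordinal ratings (e.g. two raters choosing adjacent GRADE levels). Below is a minimal sketch of how such a statistic can be computed; the rating data are hypothetical and the paper does not state which weighting scheme or software the authors used.

```python
# Minimal sketch: weighted kappa between two raters' GRADE ratings.
# The ratings below are hypothetical illustration data, not study data.
from sklearn.metrics import cohen_kappa_score

# Ordinal GRADE ratings from two independent raters for a set of
# meta-analyses: 0 = very low, 1 = low, 2 = moderate, 3 = high.
rater_a = [3, 2, 2, 1, 0, 3, 2, 1, 1, 2]
rater_b = [3, 2, 1, 1, 0, 3, 2, 2, 1, 2]

# Linearly weighted kappa penalises disagreements in proportion to how
# many ordinal categories apart the two ratings are.
kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")
print(f"Weighted kappa: {kappa:.2f}")
```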