Evaluating the accuracy and calibration of expert predictions under uncertainty: predicting the outcomes of ecological research
Authors: Marissa F. McBride, Fiona Fidler, Mark A. Burgman
Institution: Australian Centre of Excellence for Risk Analysis, School of Botany, University of Melbourne, Parkville, Vic. 3010, Australia
Abstract:
Aim: Expert knowledge routinely informs ecological research and decision-making. Its reliability is often questioned, but it is rarely subject to empirical testing and validation. We investigate the ability of experts to make quantitative predictions of variables for which the answers are known.
Location: Global.
Methods: Experts in four ecological subfields were asked to predict the outcomes of scientific studies, presented as unpublished (in press) journal articles, based on the information in each article's introduction and methods sections. For comparison, estimates were also elicited from students for one case study. For each variable, participants assessed a lower bound, an upper bound, a best guess, and their level of confidence that the observed value would lie within the ascribed interval. Responses were assessed for (1) accuracy: the degree to which predictions corresponded with the observed experimental results; (2) informativeness: the precision of the uncertainty bounds; and (3) calibration: the degree to which the uncertainty bounds contained the truth as often as specified.
Results: Expert responses were overconfident, specifying 80% confidence intervals that captured the truth only 49–65% of the time. In contrast, the students' 80% intervals captured the truth 76% of the time, displaying close to perfect calibration. Best estimates from experts were on average more accurate than those from students, although the best students outperformed the worst experts. No consistent relationships were observed between performance and years of experience, publication record, or self-assessed expertise.
Main conclusions: Experts possess valuable knowledge but may require training to communicate that knowledge accurately. Expert status is a poor guide to good performance. In the absence of training and information on past performance, simple averages of expert responses provide a robust counter to individual variation in performance.
Keywords: calibration, expert elicitation, expert knowledge, overconfidence, subjective judgment, uncertainty
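The three criteria described under Methods (accuracy, informativeness, and calibration) can be computed directly from the elicited quantities. Below is a minimal Python sketch of such scoring, assuming numeric predictions on a common scale; the function name, the use of mean absolute error for accuracy, and the toy numbers are illustrative assumptions, not the paper's exact scoring rules.

```python
import numpy as np

def score_elicited_intervals(lower, upper, best, truth, stated_confidence=0.8):
    """Score elicited interval judgments against known outcomes.

    Each argument is a sequence with one entry per predicted variable:
    lower/upper are the assessor's uncertainty bounds, best is the best
    guess, and truth is the observed experimental result.
    """
    lower, upper, best, truth = map(np.asarray, (lower, upper, best, truth))

    # Calibration: fraction of intervals that contain the truth. For a
    # well-calibrated assessor this should match the stated confidence.
    hit_rate = np.mean((truth >= lower) & (truth <= upper))

    # Accuracy: here, mean absolute error of the best guesses (an
    # illustrative choice; other error measures would also serve).
    accuracy_mae = np.mean(np.abs(best - truth))

    # Informativeness: narrower intervals are more informative.
    mean_width = np.mean(upper - lower)

    return {
        "hit_rate": float(hit_rate),
        "accuracy_mae": float(accuracy_mae),
        "mean_interval_width": float(mean_width),
        "overconfident": bool(hit_rate < stated_confidence),
    }

# Toy example with made-up numbers: three variables, one assessor's
# 80% intervals. Two of the three intervals capture the truth, so the
# hit rate (0.67) falls short of the stated 0.8 -- overconfidence.
scores = score_elicited_intervals(
    lower=[2.0, 10.0, 0.1],
    upper=[6.0, 30.0, 0.5],
    best=[4.0, 18.0, 0.3],
    truth=[5.0, 35.0, 0.2],
)
print(scores)
```

Pooling several experts, as the conclusions recommend, would amount to element-wise averaging of their best guesses (e.g. np.mean over the expert axis) before scoring.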