Indicators of expert judgement and their significance: an empirical investigation in the area of cyber security



In situations where data collection through observation is difficult, the use of expert judgement can be justified. A challenge with this approach, however, is to assess the credibility of different experts. A natural, state-of-the-art approach is to weight experts' judgements according to their calibration, that is, on the basis of how well their estimates of a studied event agree with actual observations of that event. However, when data collection through observation is difficult, it is often also difficult to estimate the calibration of experts. As a consequence, variables thought to indicate calibration are commonly used as substitutes for it in practice. This study evaluates the value of three such indicative variables: consensus, experience and self-proclamation. The significance of these variables is analysed in four surveys covering different domains within cyber security, involving a total of 271 subjects. Results show that consensus is a reasonable indicator of calibration: the mean Pearson correlation between these two variables across the four studies was 0.407. No significant correlations were found between calibration and experience, or between calibration and self-proclamation. However, as a side result, it was found that subjects who perceive themselves as more knowledgeable than others are also likely to be more experienced.
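The headline result is a Pearson correlation between per-subject calibration and consensus scores. As a minimal sketch of how such a correlation is computed (the scores below are hypothetical illustrations, not data from the study):

```python
from statistics import mean
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-subject scores: calibration (agreement of a subject's
# estimates with observed outcomes) and consensus (agreement of a subject's
# estimates with those of the other subjects).
calibration = [0.62, 0.55, 0.71, 0.48, 0.80, 0.66]
consensus   = [0.58, 0.60, 0.75, 0.40, 0.77, 0.61]

r = pearson(calibration, consensus)  # a value in [-1, 1]
```

A value of `r` near 0.4, as reported across the four surveys, would indicate a moderate positive association between the indicator and calibration.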