A consensus-based strategy to improve the quality of fault localization


Correspondence to: W. Eric Wong, Department of Computer Science, The University of Texas at Dallas, Richardson, TX 75080-3021, USA



A vast number of software fault localization techniques have been proposed recently with the growing realization that manual debugging is time-consuming, tedious, and error-prone, and that fault localization is one of the most expensive debugging activities. Although some techniques perform better than others on a large number of data sets, they do not do so on all of them; consequently, the actual quality of fault localization can vary considerably when just one technique is used. This paper proposes a consensus-based strategy that combines the results of multiple fault localization techniques to consistently provide high-quality performance, irrespective of data set. Empirical evidence from case studies conducted on six sets of programs (seven programs of the Siemens suite, and the gzip, grep, make, space, and Ant programs) and three different fault localization techniques (Tarantula, Ochiai, and H3) suggests that the consensus-based strategy holds merit and generally provides close to the best, if not the best, results. Empirically, we show that this is true of both single-fault and multi-fault programs. Additionally, the consensus-based strategy uses techniques that all operate on the same set of input data, minimizing overhead. It is also simple to include or exclude techniques from the consensus, making the strategy easy to extend or trim. Copyright © 2011 John Wiley & Sons, Ltd.
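The abstract does not specify how the individual results are aggregated. As an illustrative sketch only, the fragment below computes the standard Tarantula and Ochiai suspiciousness scores from per-statement pass/fail counts and combines their rankings by rank averaging; the rank-averaging rule and the `spectra` data layout are assumptions for illustration, and H3 is omitted here since its formula is defined in the paper itself.

```python
import math

def tarantula(failed_s, passed_s, total_failed, total_passed):
    """Tarantula suspiciousness of a statement s (higher = more suspicious)."""
    if failed_s == 0 and passed_s == 0:
        return 0.0
    fail_ratio = failed_s / total_failed
    pass_ratio = passed_s / total_passed
    return fail_ratio / (fail_ratio + pass_ratio)

def ochiai(failed_s, passed_s, total_failed, total_passed):
    """Ochiai suspiciousness of a statement s."""
    denom = math.sqrt(total_failed * (failed_s + passed_s))
    return failed_s / denom if denom else 0.0

def ranks(scores):
    """Map each statement to its rank by descending score (1 = most suspicious)."""
    order = sorted(scores, key=scores.get, reverse=True)
    return {s: i + 1 for i, s in enumerate(order)}

def consensus_order(spectra, total_failed, total_passed, techniques):
    """Order statements by the average of their ranks across all techniques.

    spectra maps statement -> (failing tests covering it, passing tests
    covering it). Rank averaging is one possible consensus rule, used here
    purely for illustration.
    """
    per_technique = [
        ranks({s: t(f, p, total_failed, total_passed)
               for s, (f, p) in spectra.items()})
        for t in techniques
    ]
    mean_rank = lambda s: sum(r[s] for r in per_technique) / len(per_technique)
    return sorted(spectra, key=mean_rank)
```

For example, with 3 failing and 5 passing tests and coverage counts `{'s1': (3, 1), 's2': (1, 4), 's3': (0, 5)}`, both techniques rank `s1` first, and `consensus_order` returns `['s1', 's2', 's3']`. Because each technique consumes the same coverage spectra, adding or dropping a technique is just a change to the `techniques` list, which mirrors the extensibility claim above.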