• combining classifiers;
  • certainty estimators;
  • multiclassifier systems;
  • ensembles;
  • supervised learning.

Selecting an effective method for combining the votes of base inducers in a multiclassifier system can significantly affect the system's overall classification accuracy; some combination methods fail even to match the accuracy of the most accurate base classifier. To address this issue, we present the strategy of aggregate certainty estimators, which combines multiple measures to estimate a classifier's certainty in its predictions on an instance-by-instance basis. Using these aggregate certainty estimates to weight votes allows the system to achieve higher average classification accuracy than the most accurate base classifier, and also higher average accuracy than weighting with any single certainty estimate. Over 36 data sets, aggregate certainty estimators outperform three baseline strategies, as well as the methods of modified stacking and arbitration, in terms of average accuracy.
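To illustrate the general idea of per-instance certainty-weighted voting, the sketch below aggregates two common certainty measures (maximum posterior probability and the margin between the top two classes) into a single weight for each base classifier's vote. These particular measures and their simple average are illustrative assumptions, not the specific estimators studied in the paper.

```python
def aggregate_certainty(probs):
    """Aggregate several certainty measures for one prediction.

    probs: class-probability list from one base classifier.
    The two measures and their plain average are assumptions
    chosen for illustration.
    """
    ranked = sorted(probs, reverse=True)
    max_prob = ranked[0]            # confidence in the top class
    margin = ranked[0] - ranked[1]  # gap between top two classes
    return (max_prob + margin) / 2  # simple aggregate of both measures

def weighted_vote(base_outputs):
    """Combine base classifiers' outputs for a single instance.

    base_outputs: one class-probability list per base classifier.
    Each classifier casts a vote for its top class, weighted by its
    per-instance aggregate certainty; returns the winning class index.
    """
    n_classes = len(base_outputs[0])
    scores = [0.0] * n_classes
    for probs in base_outputs:
        weight = aggregate_certainty(probs)
        top = max(range(n_classes), key=lambda c: probs[c])
        scores[top] += weight
    return max(range(n_classes), key=lambda c: scores[c])

# Three base classifiers voting on one instance with three classes:
outputs = [
    [0.60, 0.30, 0.10],  # confident in class 0
    [0.40, 0.35, 0.25],  # uncertain, slight preference for class 0
    [0.10, 0.20, 0.70],  # confident in class 2
]
print(weighted_vote(outputs))  # → 0
```

Here the two moderately confident votes for class 0 (combined weight 0.675) outweigh the single confident vote for class 2 (weight 0.6), so the ensemble predicts class 0, whereas an unweighted majority of top-class confidences alone could be swayed differently.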