This article is published in Journal of Chemometrics as part of the Special Issue, “Conferentia Chemometrica 2009”, September 27–30, 2009, Siófok, Hungary.
Special Issue Article
Sum of ranking differences for method discrimination and its validation: comparison of ranks with random numbers†
Article first published online: 28 MAY 2010
Copyright © 2010 John Wiley & Sons, Ltd.
Journal of Chemometrics
Volume 25, Issue 4, pages 151–158, April 2011
How to Cite
Héberger, K. and Kollár-Hunek, K. (2011), Sum of ranking differences for method discrimination and its validation: comparison of ranks with random numbers. J. Chemometrics, 25: 151–158. doi: 10.1002/cem.1320
- Issue published online: 14 APR 2011
- Manuscript Accepted: 18 APR 2010
- Manuscript Revised: 12 APR 2010
- Manuscript Received: 12 FEB 2010
- TÉT Grant Number: 8/2010
Keywords:
- model and method comparison
- permutation test
- feature selection
- determination of principal components
Abstract
This paper describes the theoretical background, algorithm and validation of a recently developed novel ranking method based on the sum of ranking differences [TrAC Trends Anal. Chem. 2010; 29: 101–109]. The ranking is intended to compare models, methods, analytical techniques, panel members, etc., and it is entirely general. First, the objects to be ranked are arranged in the rows and the variables (for example, model results) in the columns of an input matrix. Then, the results of each model for each object are ranked in order of increasing magnitude. The difference between the rank of the model results and the rank of the known, reference or standard results is then computed. (If the golden-standard ranking is known, the rank differences can be computed easily.) Finally, the absolute values of the differences are summed for all models to be compared. The sum of ranking differences (SRD) arranges the models in a unique and unambiguous way. The closer the SRD value is to zero (i.e. the closer the ranking is to the golden standard), the better the model. Proximity of SRD values indicates similarity of the models, whereas large differences imply dissimilarity. In the absence of known or reference results, the average can generally be accepted as the golden standard, even if the model results contain bias in addition to random error. The SRD method can be validated by comparison with simulated random numbers (a permutation test). A recursive algorithm calculates the exact discrete distribution for a small number of objects (n < 14), whereas the normal distribution serves as a reasonable approximation when the number of objects is large. The theoretical distribution for random numbers is visualized and can be used to identify SRD values of models that are far from random. The ranking and validation procedures are called Sum of Ranking Differences (SRD) and Comparison of Ranks with Random Numbers (CRRN), respectively.
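The procedure summarized in the abstract can be sketched in a few lines of code. The following is a minimal illustration, not the authors' implementation: the input matrix `X`, the choice of the column-wise average as the golden standard, and the use of a brute-force permutation sample (instead of the paper's recursive exact distribution) are all assumptions made for the example. Ties are ignored here for simplicity; the full method handles them with tied ranks.

```python
import numpy as np

def srd(model_ranks, reference_ranks):
    """Sum of absolute rank differences between a model's ranking
    and the reference (golden standard) ranking."""
    return int(np.abs(model_ranks - reference_ranks).sum())

# Hypothetical input matrix: rows = objects to be ranked,
# columns = results of three models for those objects.
X = np.array([
    [0.12, 0.10, 0.30],
    [0.45, 0.50, 0.40],
    [0.33, 0.35, 0.90],
    [0.80, 0.75, 0.60],
    [0.21, 0.20, 0.10],
])

# Rank each model's results in order of increasing magnitude
# (double argsort yields ranks 1..n when there are no ties).
ranks = X.argsort(axis=0).argsort(axis=0) + 1

# Golden standard: row-wise average of all model results,
# as suggested when no known/reference values are available.
ref_ranks = X.mean(axis=1).argsort().argsort() + 1

# SRD value for each model: the smaller, the closer to the standard.
srd_values = [srd(ranks[:, j], ref_ranks) for j in range(X.shape[1])]

# CRRN-style validation by permutation test: empirical SRD
# distribution for random rankings of the same n objects.
rng = np.random.default_rng(0)
n = X.shape[0]
random_srds = [srd(rng.permutation(n) + 1, ref_ranks) for _ in range(10_000)]
```

A model's SRD can then be compared against the percentiles of `random_srds`; an SRD far below the bulk of the random distribution indicates a ranking that is significantly better than chance.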