Rating scales are common for self-assessments of qualitative variables and for expert ratings of the severity of disability, outcomes, etc. Scale assessments and other ordered classifications generate ordinal data having rank-invariant properties only; hence, statistical methods are often based on ranks. The aim is to focus on the differences in ranking approaches between measures of association and measures of disagreement in paired ordinal data. The Spearman correlation coefficient is a measure of association between two variables, each data set being transformed to ranks. The augmented ranking approach to evaluating disagreement takes account of the information given by the pairs of data and provides identification and measures of systematic disagreement, when present, separately from measures of additional individual variability in assessments. The two approaches were applied to empirical data regarding the relationship between perceived pain and physical health, and to the reliability of pain assessments made by patients. The nature of the disagreement between the patients' perceived levels of outcome after treatment and the doctor's criterion-based scoring was also evaluated. The comprehensive evaluation of observed disagreement in terms of systematic and individual disagreement provides valuable, interpretable information about their sources. The presence of systematic disagreement can be adjusted for and/or understood. Large individual variability could be a sign of poor scale quality or of heterogeneity among raters. It was also demonstrated that a measure of association must not be used as a measure of agreement, even though such misuse of correlation coefficients is common. Copyright © 2012 John Wiley & Sons, Ltd.
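The point that a measure of association must not be used as a measure of agreement can be illustrated with a minimal sketch (not from the paper, using hypothetical ratings): two raters whose scores differ by a constant shift on the scale are perfectly associated (Spearman rho = 1) yet never agree on a single category.

```python
def average_ranks(xs):
    """Assign 1-based ranks to xs, averaging ranks over tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        # extend j over the run of values tied with xs[order[i]]
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the rank-transformed data."""
    rx, ry = average_ranks(x), average_ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical paired ratings on a 6-point scale: rater B is systematically
# one category higher than rater A (a purely systematic disagreement).
a = [1, 2, 3, 4, 5]
b = [2, 3, 4, 5, 6]

rho = spearman(a, b)                                           # 1.0
exact_agreement = sum(x == y for x, y in zip(a, b)) / len(a)   # 0.0
```

Here the Spearman coefficient is 1.0 because the rank orderings coincide, while the proportion of exact agreement is 0.0; an approach that separates systematic disagreement from individual variability, such as the augmented ranking approach, would attribute this entirely to a systematic shift.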