In this paper, we focus on measures that evaluate the discrimination of prediction models for ordinal outcomes. We review existing extensions of the dichotomous c-index, which is equivalent to the area under the receiver operating characteristic (ROC) curve, suggest a new measure, and study their relationships. The volume under the ROC surface (VUS) scores sets of cases that include one case from each outcome category, and it considers each set as either correctly or incorrectly ordered by the model. All other existing measures assess pairs of cases. We propose an ordinal c-index (ORC) that is set-based but, in contrast to the VUS, scores sets gradually according to how close the model-based ordering is to the perfect ordering. As a result, the ORC does not decrease rapidly as the number of outcome categories increases. Moreover, the ORC can be rewritten as an average of pairwise c-indexes, so it has both a set-based and a pair-based interpretation. Several relationships between the existing measures reduce them to two types: prevalence-weighted averages of pairwise c-indexes and the VUS. The ORC positions itself in between: it is set-based, yet it equals an unweighted average of pairwise c-indexes. We demonstrate the measures through a case study on the prediction of six-month outcome after traumatic brain injury. In conclusion, its set-based nature, graded scoring system, and prevalence independence, which is a natural property of a discrimination measure, make the ORC an attractive measure with a simple interpretation.
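
As a minimal sketch of the pair-based interpretation stated above, the following Python code computes the ORC as the unweighted average of pairwise c-indexes over all pairs of outcome categories. The function names, the tie-handling convention (ties counted as 1/2, as in the standard dichotomous c-index), and the toy data are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch (not the authors' code): ORC as the unweighted
# average of pairwise c-indexes over all category pairs.
from itertools import combinations


def pairwise_c_index(scores_low, scores_high):
    """Dichotomous c-index (AUC) for one category pair: the probability
    that a case from the higher category gets a higher model score than
    a case from the lower category, counting ties as 1/2."""
    concordant = 0.0
    for s_lo in scores_low:
        for s_hi in scores_high:
            if s_hi > s_lo:
                concordant += 1.0
            elif s_hi == s_lo:
                concordant += 0.5
    return concordant / (len(scores_low) * len(scores_high))


def orc(scores_by_category):
    """ORC: unweighted mean of pairwise c-indexes over all ordered
    category pairs (i, j) with i < j; prevalence does not enter."""
    pairs = list(combinations(range(len(scores_by_category)), 2))
    return sum(
        pairwise_c_index(scores_by_category[i], scores_by_category[j])
        for i, j in pairs
    ) / len(pairs)


# Toy example: model scores grouped by three ordinal outcome categories.
scores = [[0.1, 0.4], [0.3, 0.25], [0.6, 0.7]]
print(round(orc(scores), 3))  # one imperfectly ordered pair lowers the ORC
```

Because each pairwise c-index enters with equal weight, the result does not depend on how many cases fall in each category, which illustrates the prevalence independence highlighted in the abstract.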