The risk of an event generally relates to its expected severity and the perceived probability of its occurrence. In risk research, however, there is no standard measure for subjective probability estimates. In this study, we compared five commonly used measurement formats—two rating scales, a visual analog scale, and two numeric measures—in terms of their ability to assess subjective probability judgments when objective probabilities are available. We varied the probabilities (low vs. moderate) and severity (low vs. high) of the events to be judged, as well as the presentation mode of objective probabilities (sequential presentation of singular events vs. graphical presentation of aggregated information). We employed two complementary goodness-of-fit criteria: the correlation between objective and subjective probabilities (sensitivity), and the root mean square deviation of subjective probabilities from objective values (accuracy). The numeric formats generally outperformed all other measures. The severity of events had no effect on performance. Generally, a rise in probability led to decreases in performance. This effect, however, depended on how the objective probabilities were encoded: pictographs ensured perfect information, which improved goodness of fit for all formats and diminished this negative effect on performance. Differences in performance between scales are thus caused only in part by characteristics of the scales themselves—they also depend on the process of encoding. Consequently, researchers should take the source of probability information into account before selecting a measure.
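The two goodness-of-fit criteria named above have standard definitions: sensitivity is the Pearson correlation between objective and subjective probabilities, and accuracy is their root mean square deviation. The sketch below illustrates these standard formulas; it is not code from the study, and the variable names are illustrative.

```python
import math

def sensitivity(objective, subjective):
    """Pearson correlation between objective and subjective probabilities.

    Higher values indicate that judgments track the ordering and
    spacing of the objective probabilities more closely.
    """
    n = len(objective)
    mean_o = sum(objective) / n
    mean_s = sum(subjective) / n
    cov = sum((o - mean_o) * (s - mean_s)
              for o, s in zip(objective, subjective))
    var_o = sum((o - mean_o) ** 2 for o in objective)
    var_s = sum((s - mean_s) ** 2 for s in subjective)
    return cov / math.sqrt(var_o * var_s)

def accuracy(objective, subjective):
    """Root mean square deviation of subjective from objective values.

    Lower values indicate judgments closer to the true probabilities;
    0.0 means a perfect match.
    """
    n = len(objective)
    return math.sqrt(sum((s - o) ** 2
                         for o, s in zip(objective, subjective)) / n)
```

Note that the two criteria are complementary: judgments shifted by a constant bias can still correlate perfectly with the objective values (sensitivity 1.0) while deviating from them by that constant (nonzero RMSD), which is why the study reports both.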