Healthcare quality monitoring by the Ministry of Health in Slovenia includes over 100 business indicators of economy, efficiency and funding allocation, which are analysed annually for over 20 hospitals. Most of these indicators are random-denominator same-quantity ratios with a strongly correlated numerator and denominator, and the goal is the identification of outliers. A large simulation study was performed to assess the performance of three types of methods: common outlier detection tests for small samples (the Grubbs, Dean and Dixon, and Nalimov tests), applied both unconditionally and conditionally upon the result of the Shapiro–Wilk normality test; the boxplot rule; and the double-square-root control chart, for which we introduced regression-through-origin-based control limits. The PERT, Burr and three-parameter loglogistic distributions, which fitted the real data best, were used to generate samples of sizes 5 to 30 containing no, one or two outliers. Both small (below 0.2, right-skewed) and large (above 0.5, more symmetrical) ratios were simulated. The performance of the methods varied greatly across the conditions. The formal small-sample tests proved virtually useless when applied conditionally upon a passed normality pre-test in the presence of outliers. The performance of the boxplot rule varied the most, but it was the only useful method for tiny samples. Our variant of the double-square-root control chart proved too conservative in tiny samples and too liberal for samples of size 20 or more without outliers, but it appeared the most useful for detecting actual outliers in samples of size 20 or more. As a direction for future improvement and research, we propose pre-testing normality with a class of robustified Jarque–Bera tests. Copyright © 2011 John Wiley & Sons, Ltd.
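Two of the compared procedures can be sketched briefly. The following is a minimal illustration, not the authors' implementation: a standard two-sided Grubbs test, a conditional variant that runs Grubbs only when a Shapiro–Wilk pre-test does not reject normality (the strategy the abstract reports as virtually useless when an outlier is present, since the outlier itself tends to make the pre-test fail), and the Tukey boxplot rule. The sample values, the significance level `alpha=0.05` and the fence multiplier `k=1.5` are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def grubbs_test(x, alpha=0.05):
    """Two-sided Grubbs test for a single outlier.

    Returns (index of the most extreme point, whether it is flagged).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    mean, sd = x.mean(), x.std(ddof=1)
    idx = int(np.argmax(np.abs(x - mean)))
    g = abs(x[idx] - mean) / sd
    # Critical value derived from the t distribution (two-sided, level alpha)
    t_crit = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t_crit**2 / (n - 2 + t_crit**2))
    return idx, g > g_crit

def conditional_grubbs(x, alpha=0.05):
    """Grubbs test applied only if Shapiro-Wilk does not reject normality.

    Illustrates the conditional strategy studied in the paper: a real
    outlier often makes the pre-test fail, so the outlier test never runs.
    """
    if stats.shapiro(x).pvalue < alpha:
        return None  # normality rejected; Grubbs test not applied
    return grubbs_test(x, alpha)

def boxplot_outliers(x, k=1.5):
    """Tukey boxplot rule: points beyond Q1 - k*IQR or Q3 + k*IQR."""
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return x[(x < q1 - k * iqr) | (x > q3 + k * iqr)]

# Hypothetical tiny sample with one gross outlier
sample = [9.8, 10.1, 10.0, 9.9, 10.2, 30.0]
idx, flagged = grubbs_test(sample)          # flags index 5 (the value 30.0)
fenced = boxplot_outliers(sample)           # the boxplot rule flags 30.0 too
skipped = conditional_grubbs(sample)        # typically None: pre-test fails
```

Note how the unconditional Grubbs test and the boxplot rule both flag the gross outlier, whereas the conditional procedure tends to return nothing because the very outlier it should detect causes the normality pre-test to reject.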