Data splitting is an important step in the artificial neural network (ANN) development process, whereby the available data are divided into training, testing, and validation subsets to ensure good generalization ability of the model. Because only a single split of the data is typically used when developing ANN models, the way the data are split can have a significant impact on model performance, depending on which data are allocated to the three subsets. Therefore, it is important to find a data splitting method that consistently results in validation errors that are representative of the predictive errors obtained over the full range of the available data. This paper addresses this issue by introducing a benchmarking approach for comparing different data splitting methods in terms of (1) bias, which is the difference between the expected validation performance over the entire data set and that obtained using a particular data splitting method, and (2) variability, which is the spread of the validation errors obtained by repeated implementation of that method. The utility of the proposed approach is assessed by applying it to a number of well-known data splitting methods in the context of four water resources ANN modelling problems. The results obtained indicate that the proposed approach for comparing data splitting methods is more representative than the previous approach, in which a value of zero is used as the predictive performance benchmark, as it can avoid the selection of an over-optimistic data splitting method that under-represents extreme data in the validation set.
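The bias and variability measures described above can be sketched in code. The following is a minimal illustration, not the paper's implementation: a least-squares linear fit stands in for an ANN, uniform random allocation stands in for a candidate splitting method, and the full-data-set RMSE is used as a stand-in for the expected validation performance over the entire data set. All function names and the synthetic data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data standing in for a water resources modelling problem (hypothetical).
X = rng.uniform(0, 10, size=(500, 1))
y = 2.0 * X[:, 0] + rng.normal(0, 1, size=500)

def fit_predict(X_tr, y_tr, X_val):
    """Least-squares linear fit as a simple stand-in for an ANN."""
    A = np.column_stack([X_tr, np.ones(len(X_tr))])
    coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    return np.column_stack([X_val, np.ones(len(X_val))]) @ coef

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def random_split(X, y, frac_val=0.2, rng=None):
    """One candidate splitting method: uniform random allocation."""
    idx = rng.permutation(len(X))
    n_val = int(frac_val * len(X))
    val, tr = idx[:n_val], idx[n_val:]
    return X[tr], y[tr], X[val], y[val]

def benchmark_split_method(split, X, y, n_repeats=50, rng=None):
    """Bias and variability of validation RMSE, measured against the
    error over the full data set rather than against zero."""
    reference = rmse(y, fit_predict(X, y, X))  # benchmark: full-data-set error
    errors = []
    for _ in range(n_repeats):
        X_tr, y_tr, X_val, y_val = split(X, y, rng=rng)
        errors.append(rmse(y_val, fit_predict(X_tr, y_tr, X_val)))
    errors = np.asarray(errors)
    bias = float(errors.mean() - reference)      # (1) bias
    variability = float(errors.std(ddof=1))      # (2) variability
    return bias, variability

bias, variability = benchmark_split_method(random_split, X, y, rng=rng)
```

A splitting method with bias near zero and low variability would, under this framing, yield validation errors that are consistently representative of performance over the full range of the data.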