Abstract: When training a Neural Network to model a rainfall-runoff process, two aspects are generally considered: its capacity to describe the complex nature of the processes being modeled and its ability to generalize so that novel samples are mapped correctly. The usual conclusion is that the smallest network capable of representing the sample distribution is the best choice as far as generalization is concerned. Oftentimes input variables are selected a priori, in what is called an exploratory data analysis stage, and are not part of the actual network training and testing procedures. When they are, the final model still has only a “fixed” set of inputs, lag-space, and/or network structure; if any of these constituents were to change, one would obtain another, equally “optimal” Neural Network. Following the generalized likelihood uncertainty estimation approach of Beven and others, a methodology is introduced here that accounts for uncertainties in network structure, types of inputs, and their lag-space relationships by considering a population of Neural Networks rather than targeting a single “optimal” network. It is shown that a wide array of networks provides “similar” results, as judged by a likelihood measure, across different combinations of input types, lag-space, and network size. These equally optimal networks expose the range of uncertainty in streamflow predictions, and their expected value yields better performance than any single network's predictions.
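The likelihood-weighted ensemble idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Nash-Sutcliffe efficiency used as the likelihood measure, the behavioral threshold of zero, and the toy "network" predictions are all assumptions made for the example.

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency, used here as an informal likelihood."""
    mean_obs = sum(obs) / len(obs)
    ss_err = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_err / ss_tot


def glue_ensemble(obs, sims, threshold=0.0):
    """GLUE-style expected prediction over a population of models.

    Models scoring above the threshold are retained as "behavioral";
    their likelihoods are normalized into weights, and the weighted
    mean prediction is returned for each time step.
    """
    scores = [nash_sutcliffe(obs, s) for s in sims]
    kept = [(sc, s) for sc, s in zip(scores, sims) if sc > threshold]
    total = sum(sc for sc, _ in kept)
    weights = [sc / total for sc, _ in kept]
    return [
        sum(w * s[t] for w, (_, s) in zip(weights, kept))
        for t in range(len(obs))
    ]


# Toy example: three hypothetical "networks" predicting a 4-step hydrograph.
observed = [1.0, 3.0, 2.0, 1.5]
predictions = [
    [1.1, 2.9, 2.1, 1.4],  # behavioral model
    [0.9, 3.2, 1.8, 1.6],  # behavioral model
    [3.0, 1.0, 3.0, 3.0],  # non-behavioral, filtered out
]
expected = glue_ensemble(observed, predictions)
```

On this toy data, the weighted expected prediction scores a higher Nash-Sutcliffe efficiency than any single member, mirroring the abstract's claim that the ensemble expected value outperforms individual network predictions.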