Data analysis in phylogeographic investigations is typically conducted either in a qualitative manner or via the testing of null hypotheses. The former, where inferences about population processes are derived from geographical patterns of genetic variation, may be subject to confirmation bias and prone to overinterpretation. Testing the predictions of null hypotheses is arguably less prone to bias than qualitative approaches, but only if the tested hypotheses are biologically meaningful. As it is difficult to know a priori whether this is the case, there is a general need for additional methodological approaches in phylogeographic research. Here, we explore an alternative method for analysing phylogeographic data that uses information theory to quantify the probability of multiple hypotheses given the data. We accomplish this by augmenting the model-selection procedure implemented in IMa with calculations of Akaike Information Criterion (AIC) scores and model probabilities. We generate a ranking of 17 models, each representing a set of historical evolutionary processes that may have contributed to the evolution of Plethodon idahoensis, and then quantify the relative strength of support for each hypothesis given the data using metrics borrowed from information theory. Our results suggest that two models have high probability given the data. Each of these models includes population divergence and estimates of ancestral θ that differ from estimates of descendant θ, inferences consistent with prior work in this system. However, the models disagree in that one includes migration as a parameter and one does not, suggesting that there are two regions of parameter space that produce model likelihoods of similar magnitude given our data.
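The AIC-based ranking and model probabilities described above can be sketched as follows. This is a minimal illustration of the standard calculation (AIC from a model's maximized log-likelihood and parameter count, then Akaike weights as model probabilities); the log-likelihoods and parameter counts below are hypothetical, not values from the IMa analysis of P. idahoensis.

```python
import math

def akaike_weights(log_likelihoods, n_params):
    """Return Akaike weights (model probabilities) for a set of candidate models.

    log_likelihoods : maximized log-likelihood of each model
    n_params        : number of estimated parameters in each model
    """
    # AIC = -2 ln L + 2k for each candidate model
    aic = [-2.0 * ll + 2.0 * k for ll, k in zip(log_likelihoods, n_params)]
    best = min(aic)
    # Delta-AIC relative to the best-scoring model
    delta = [a - best for a in aic]
    # Akaike weight: w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2)
    rel = [math.exp(-d / 2.0) for d in delta]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical log-likelihoods and parameter counts for three candidate models
weights = akaike_weights([-250.0, -251.0, -260.0], [4, 6, 5])
print([round(w, 3) for w in weights])  # → [0.953, 0.047, 0.0]
```

The weights sum to one and can be read directly as the probability of each model given the data and the candidate set, which is how competing models can receive similar support rather than a single reject/fail-to-reject verdict.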
Results of a simulation study suggest that when data are simulated with migration, most of the optimal models include migration as a parameter, and that when all of the shared polymorphism results from incomplete lineage sorting, most of the optimal models do not. The results could also indicate a lack of precision, which may be a product of the amount of data that we have collected. In any case, the information-theoretic metrics that we have applied to the analysis of our data are as statistically rigorous as hypothesis-testing approaches, but move beyond the ‘reject/fail to reject’ dichotomy of conventional hypothesis testing in a manner that provides considerably more flexibility to researchers.