The prioritisation of literature searches aims to order the large number of articles returned by a simple search so that those most likely to be relevant appear at the top of the list. Prioritisation relies on a good model of human decision-making that can learn from the articles users select as relevant to predict which of the remaining articles will also be relevant. We develop and evaluate two psychological decision-making models for prioritisation: a “rational” model that considers all of the available information, and a “one reason” model that uses limited information to make decisions. The models are evaluated in an experiment in which users rate the relevance of every article returned by PsycINFO for a number of different research topics. The results show that both models achieve a level of prioritisation that significantly improves upon the default ordering of PsycINFO. The one reason model proves superior to the rational model, especially when there are only a few relevant articles. The implications of the results for developing prioritisation systems in applied settings are discussed, together with implications for the general modelling of human decision-making.
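The abstract does not specify how the two models are implemented, but the distinction between them can be sketched. Below is a minimal, hypothetical illustration: the "rational" rule integrates all cues via a weighted sum, while the "one reason" rule ranks lexicographically so that the single most valid cue decides and less valid cues only break ties (in the style of take-the-best heuristics). The cues, weights, and validity order here are invented for illustration, not taken from the paper.

```python
# Hypothetical binary cues for an article, e.g. "title matches query term",
# "appears in a journal the user has previously rated relevant", etc.

def rational_score(cues, weights):
    """'Rational' model sketch: integrate ALL cues via a weighted sum."""
    return sum(w * c for w, c in zip(weights, cues))

def one_reason_key(cues, validity_order):
    """'One reason' model sketch: compare articles on the most valid cue
    first; less valid cues are consulted only to break ties."""
    return tuple(cues[i] for i in validity_order)

# Two hypothetical articles described by three binary cues.
articles = {
    "A": (1, 0, 0),   # wins on the most valid cue only
    "B": (0, 1, 1),   # wins on the two less valid cues
}
weights = (0.4, 0.35, 0.25)   # hypothetical cue weights
validity_order = (0, 1, 2)    # cues in assumed order of validity

# The two rules can disagree: the rational model prefers B
# (0.35 + 0.25 = 0.6 > 0.4), while the one-reason model prefers A
# because A wins on the single most valid cue.
by_rational = max(articles, key=lambda a: rational_score(articles[a], weights))
by_one_reason = max(articles, key=lambda a: one_reason_key(articles[a], validity_order))
```

In a prioritisation setting, either score would be used to sort the full result list in descending order, placing the articles the model predicts to be relevant at the top.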