Models of species distributions are increasingly being used to address a variety of problems in conservation biology. In many applications, perfect or constant detectability of species, given presence, is assumed. While this problem has been acknowledged and addressed through the development of occupancy models, we still know little about whether accounting for imperfect detection improves the predictive performance of species distribution models in nature. Here, we contrast logistic regression models of species occurrence that do not correct for detectability with hierarchical occupancy models that explicitly estimate and adjust for detectability, and with maximum entropy (Maxent) models that attempt to circumvent the detectability problem by using data from known presence locations only. We use a large-scale, long-term monitoring database across western Montana and northern Idaho to contrast these models for nine landbird species that span a broad spectrum of detectability. Overall, occupancy models were similar to or better than the other approaches in predictive accuracy, as measured by the Area Under the ROC Curve (AUC) and Kappa, with Maxent tending to provide the lowest predictive accuracy. Models varied in the types of errors associated with predictions, such that some modeling approaches may be preferred over others in certain situations. As expected, predictive performance varied across a gradient in species detectability, with logistic regression providing lower relative performance for less detectable species and Maxent providing lower performance for highly detectable species. We conclude by discussing the advantages and limitations of each approach for developing large-scale species distribution models.
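
To make the core problem concrete, the following is a minimal simulation sketch (not from the study; all covariates, coefficients, and the detection probability are hypothetical, and scikit-learn is assumed to be available). It simulates true occupancy from one environmental covariate, thins the presences by an imperfect detection probability, fits a naive logistic regression to the detection-contaminated records, and scores it against true occupancy with the same metrics the study uses, AUC and Kappa:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, cohen_kappa_score

rng = np.random.default_rng(42)

# One environmental covariate and true occupancy (hypothetical coefficients).
n_sites = 2000
x = rng.normal(size=(n_sites, 1))
psi = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x[:, 0])))  # occupancy probability
z = rng.binomial(1, psi)                             # true presence/absence

# Imperfect detection: an occupied site is recorded only with probability p.
p_detect = 0.4                                       # hypothetical detectability
y_obs = z * rng.binomial(1, p_detect, size=n_sites)  # observed detections

# Naive logistic regression fit to the detection-contaminated observations.
model = LogisticRegression().fit(x, y_obs)
scores = model.predict_proba(x)[:, 1]

# Evaluate against TRUE occupancy, as a field validation would aim to do.
auc = roc_auc_score(z, scores)
kappa = cohen_kappa_score(z, (scores > 0.5).astype(int))
print(f"AUC vs true occupancy:   {auc:.3f}")
print(f"Kappa vs true occupancy: {kappa:.3f}")
```

Because random non-detection preserves the ranking of sites by the covariate, AUC (a rank-based measure) stays relatively high, while the fitted probabilities are biased low, which degrades threshold-dependent measures such as Kappa. Occupancy models address this by jointly estimating occupancy and detection probabilities from repeat-visit data.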