Bayesian Models of Cognition: What's Built in After All?
This article explores some of the philosophical implications of the Bayesian modeling paradigm. In particular, it focuses on the ramifications of the fact that Bayesian models pre-specify an inbuilt hypothesis space. To what extent does this pre-specification correspond to simply "building the solution in"? I argue that any learner (whether computer or human) must have a built-in hypothesis space in precisely the same sense that Bayesian models have one. This has implications for the nature of learning, Fodor's puzzle of concept acquisition, and the role of modeling in cognitive science.