Keywords:

  • Concept learning;
  • Categorization;
  • Bayesian induction;
  • Probabilistic grammar;
  • Rules;
  • Language of thought

Abstract

This article proposes a new model of human concept learning that provides a rational analysis of learning feature-based concepts. The model is built on Bayesian inference over a grammatically structured hypothesis space: a concept language of logical rules. The article compares the model's predictions to human generalization judgments in several well-known category-learning experiments and finds good agreement for both average and individual-participant generalizations. It further investigates judgments for a broad set of seven-feature concepts, a setting that is more natural in several respects, and again finds that the model explains human performance.
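
As a rough illustration of the kind of computation described here, the sketch below is a hypothetical Python toy, not the authors' implementation: it enumerates a small space of DNF formulas over three binary features, scores each formula with a simplicity-favoring prior and a noisy-label likelihood, and returns posterior-weighted generalization probabilities for new objects. The hypothesis-space restrictions and all parameter values are assumptions made for the example.

```python
# A minimal sketch (assumptions, not the authors' implementation): hypotheses are
# DNF formulas over binary features, the prior penalizes longer formulas, and the
# likelihood tolerates noisy labels. The hypothesis-space limits, EPSILON, and
# LITERAL_COST values are all assumed for illustration.
import math
from itertools import combinations, product

N_FEATURES = 3      # assumed: objects are length-3 binary feature vectors
EPSILON = 0.2       # assumed probability that an observed label is an outlier
LITERAL_COST = 1.0  # assumed prior penalty per literal (simpler formulas preferred)

def conjunctions(max_literals=2):
    """All conjunctions of up to max_literals literals over distinct features.
    A literal is (feature_index, required_value)."""
    conjs = []
    for k in range(1, max_literals + 1):
        for feats in combinations(range(N_FEATURES), k):
            for vals in product((0, 1), repeat=k):
                conjs.append(tuple(zip(feats, vals)))
    return conjs

def hypotheses(max_disjuncts=2, max_literals=2):
    """A restricted DNF hypothesis space: disjunctions of up to max_disjuncts conjunctions."""
    conjs = conjunctions(max_literals)
    return [combo for k in range(1, max_disjuncts + 1)
            for combo in combinations(conjs, k)]

def applies(hyp, x):
    """True if object x (tuple of 0/1 feature values) satisfies the DNF formula."""
    return any(all(x[i] == v for i, v in conj) for conj in hyp)

def log_prior(hyp):
    """Syntactic simplicity prior: each literal in the formula is penalized."""
    return -LITERAL_COST * sum(len(conj) for conj in hyp)

def log_likelihood(hyp, examples):
    """Each observed label agrees with the formula with probability 1 - EPSILON."""
    return sum(math.log(1 - EPSILON if applies(hyp, x) == bool(label) else EPSILON)
               for x, label in examples)

def generalization(examples, query):
    """P(query is in the concept | examples), marginalizing over all formulas."""
    hyps = hypotheses()
    logw = [log_prior(h) + log_likelihood(h, examples) for h in hyps]
    m = max(logw)
    weights = [math.exp(lw - m) for lw in logw]
    total = sum(weights)
    return sum(w for w, h in zip(weights, hyps) if applies(h, query)) / total

if __name__ == "__main__":
    # Labeled training examples: (feature vector, in the concept?)
    train = [((1, 1, 0), 1), ((1, 0, 0), 1), ((0, 1, 1), 0), ((0, 0, 1), 0)]
    for query in [(1, 1, 1), (0, 0, 0)]:
        print(query, round(generalization(train, query), 3))
```

In this toy version the grammar is replaced by a brute-force enumeration of short DNF formulas; graded generalization arises from averaging each formula's prediction by its posterior weight, which is the basic idea the abstract attributes to the model.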