Judging similarities among objects, events, and experiences is one of the most basic cognitive abilities, allowing us to make predictions and generalizations. A central assumption in models of similarity judgment is that people selectively attend to salient features of stimuli and judge similarity on the basis of the stimuli's common and distinctive features. However, it is unclear how people select features from stimuli and how they weight those features. Here, we present a computational method that helps address these questions. Our procedure combines image-processing techniques with a machine-learning algorithm and estimates feature weights that can account for both similarity and categorization judgment data. Our analysis suggests that a small number of local features are particularly important for explaining our behavioral data.
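The weight-estimation idea can be illustrated with a minimal sketch. The code below is not the paper's actual pipeline (which involves image processing and a machine-learning algorithm); it assumes a simplified linear variant of a feature-contrast model, in which each shared feature raises a pair's similarity by its weight and each mismatching feature lowers it, and the weights are recovered from simulated similarity ratings by least squares. The stimuli, features, and weights are all hypothetical.

```python
import numpy as np

# Hypothetical stimuli: 6 objects, each described by 5 binary features
# (illustrative only; real stimuli would be derived from images).
features = np.array([
    [1, 0, 1, 0, 1],
    [1, 1, 0, 0, 1],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 1, 1],
    [1, 1, 1, 1, 0],
    [0, 1, 0, 1, 1],
])
n_stimuli, n_features = features.shape

# "Ground-truth" weights, used only to simulate judgment data.
true_w = np.array([2.0, 0.5, 0.1, 1.5, 0.2])

# For each pair, feature k contributes +w_k if the two stimuli match on it
# and -w_k if they mismatch (a simplified contrast-model coding).
pairs = [(i, j) for i in range(n_stimuli) for j in range(i + 1, n_stimuli)]
X = np.array([np.where(features[i] == features[j], 1.0, -1.0)
              for i, j in pairs])

# Simulated similarity ratings with a little judgment noise.
rng = np.random.default_rng(0)
y = X @ true_w + rng.normal(0.0, 0.05, size=len(pairs))

# Recover the feature weights by ordinary least squares.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated weights:", np.round(w_hat, 2))
```

In this toy setting the fitted weights directly indicate which features carry the most influence on judged similarity; the paper's claim that a few local features dominate corresponds to a weight vector in which most entries are near zero.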