Surveys provide widely cited measures of political knowledge. Do seemingly arbitrary features of survey interviews affect their validity? Our answer comes from experiments embedded in a representative survey of over 1,200 Americans. A control group was asked political knowledge questions in a typical survey context. Treatment groups received the same questions in altered contexts: one group was offered a monetary incentive for answering correctly; another was given extra time. The treatments increased the number of correct answers by 11–24%. Our findings imply that conventional knowledge measures confound respondents' recall of political facts with variation in their motivation to exert effort during survey interviews. Our work also suggests that existing measures fail to capture relevant political search skills and, hence, provide unreliable assessments of what many citizens know when they make political decisions. As a result, existing knowledge measures likely underestimate people's capacity for informed decision making.