Summary. Doctor ratings of clerkship performance are often discounted as not accurately reflecting clinical competence. Such ratings are influenced by the following uncontrolled variables: case difficulty; differing rater focus and standards; lack of agreement on what constitutes acceptable performance; and collective patient care responsibility, which masks individual contributions. Standardized direct measures of clinical competence were developed to control these factors and allow direct comparisons of student performance. Students saw 18 patients representing frequently occurring and important patient problems. Student actions and decisions were recorded, and subsequent responses to questions revealed knowledge of pathophysiology, the basis for actions, use and interpretation of laboratory investigations, and management. Actions and responses were graded using a pre-set key. The examination covered 73% of designated clinical competencies. Examination scores corresponded with independent measures of clinical competence. Reliability studies indicated that new cases can be substituted in subsequent years with confidence that scores will retain similar meaning. The cost is £6.95 per student per case, which is modest considering the quality and quantity of information acquired. The methods described are practical for evaluating clerks and residents and for licensing and specialty certification examinations.