Initial evaluation of the first year of the Foundation Assessment Programme
Article first published online: 17 DEC 2008
© Blackwell Publishing Ltd 2009
Volume 43, Issue 1, pages 74–81, January 2009
How to Cite
Davies, H., Archer, J., Southgate, L. and Norcini, J. (2009), Initial evaluation of the first year of the Foundation Assessment Programme. Medical Education, 43: 74–81. doi: 10.1111/j.1365-2923.2008.03249.x
- Issue published online: 17 DEC 2008
- Received 6 February 2008; editorial comments to authors 13 March 2008, 27 May 2008; accepted for publication 29 July 2008
Keywords
- education, medical, graduate
- clinical competence/standards
- professional practice/standards
- quality control
- evaluation studies [publication type]
- medical history taking
- physical examination
- decision making
Objectives This study reports an initial evaluation of the first year (F1) of the Foundation Assessment Programme (FAP), in line with the assessment principles of the Postgraduate Medical Education and Training Board (PMETB).
Methods The study sample comprised 3640 F1 trainees from 10 English deaneries. Descriptive analyses covered the total numbers of encounters, assessors and trainees; the mean numbers of assessments per trainee and per assessor; the time taken for the assessments; and the mean score and standard deviation for each method. Reliability was estimated using generalisability coefficients, and Pearson correlations were used to explore relationships between instruments.
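The abstract does not specify the measurement design or software used for the generalisability analysis. Purely as an illustration of the technique named above, a minimal sketch for a fully crossed trainee × assessment-occasion design (a common single-facet G-study layout; all names here are hypothetical, not the authors' code) might look like:

```python
import numpy as np

def g_coefficient(scores, n_assessments=None):
    """Estimate a relative generalisability (G) coefficient for a fully
    crossed trainee x assessment design (illustrative assumption only).

    scores: 2-D array-like, rows = trainees, columns = assessment occasions.
    n_assessments: number of assessments to project the coefficient to
    (defaults to the observed number of columns).
    """
    scores = np.asarray(scores, dtype=float)
    n_p, n_i = scores.shape
    if n_assessments is None:
        n_assessments = n_i

    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-trainee means
    col_means = scores.mean(axis=0)   # per-occasion means

    # Two-way ANOVA mean squares (no replication).
    ms_p = n_i * np.sum((row_means - grand) ** 2) / (n_p - 1)
    resid = scores - row_means[:, None] - col_means[None, :] + grand
    ms_res = np.sum(resid ** 2) / ((n_p - 1) * (n_i - 1))

    # Estimated variance components (negative estimates truncated at zero).
    var_res = ms_res
    var_p = max((ms_p - ms_res) / n_i, 0.0)

    # Relative G coefficient: trainee variance over trainee-plus-error
    # variance, with error averaged over the projected number of assessments.
    return var_p / (var_p + var_res / n_assessments)
```

Passing `n_assessments=6` would project reliability to the six-assessments-per-instrument return rate discussed in the Results, in the same way a decision study extrapolates from observed variance components.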
Results A total of 2929 trainees submitted at least one assessment by each of the four methods. A mean of 16.6 case-focused assessments was submitted per F1 trainee. Based on a return of six of each of the case-focused assessments per trainee, and eight assessors for multi-source feedback, 95% confidence intervals (CIs) ranged between 0.4 and 0.48. The estimated time required to achieve this is 9 hours per trainee per year. Scores increased over time on all instruments, and correlations between methods were in keeping with their intended focus of assessment, providing evidence of validity.
Conclusions The FAP is feasible and achieves acceptable reliability, and there is some evidence to support its validity. Although further development of the FAP is needed, collated assessment data should form part of the evidence considered in selection and career progression decisions. The programme is in any case of critical importance to the profession’s accountability to the public.