Impact of a medical education unit on assessment practices


  • Khalid U Al-Umran,

  • Mona H Al-Sheikh,

  • Balachandra V Adkoli

Balachandra V Adkoli, 2nd Floor, Administrative Building, King Fahd Hospital of the University, PO Box 2208, Al-Khobar, Saudi Arabia. Tel: 00 966 3 896 6666 (ext 3007); Fax: 00 966 3 896 6720; E-mail:

Context and setting

Medical schools are challenged with the task of designing assessment systems that are valid, reliable and transparent, and that facilitate learning through continuous feedback. A major vehicle for bringing about changes in the tools and techniques of assessment is faculty development initiated by medical education units (MEUs). Experience suggests that MEUs can adopt strategic interventions in faculty development that result in meaningful organisational change.

Why the idea was necessary

Prompted by the need for external accreditation, our college carried out a survey in 2005, which generated baseline data on the assessment tools and practices used by various departments. This study led to the establishment of an MEU and an examination centre, which worked together to launch various reforms. In 2009, the college decided to find out whether these activities had resulted in changes to assessment practices. Such activities are known to enhance participants’ knowledge and skills; however, this study focused on changes in practice.

What was done

Between May 2006 and December 2008, the MEU organised five workshops on assessment, emphasising multiple-choice questions (MCQs) and item analysis, and their role in improving assessment. All course coordinators and a large section of faculty staff were sensitised. The Examination Centre installed an optical scanner, commissioned software and provided item analysis for each course. The resulting report included data on candidates’ performance, lists of items with extreme difficulty and discrimination indices, and overall reliability computed using the Kuder–Richardson formula (KR-20).
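The article does not publish the Examination Centre's algorithms, but the indices it names are standard. As a hedged illustration, the sketch below computes the three quantities such a report would contain: the difficulty index (proportion of candidates answering an item correctly), the discrimination index (difference in proportion correct between the top and bottom 27% of candidates by total score), and KR-20 reliability. The function name and the dichotomous 0/1 response matrix are assumptions for the example.

```python
def item_analysis(responses):
    """Illustrative item analysis for dichotomously scored MCQs.

    responses: list of lists, responses[s][i] = 1 if student s answered
    item i correctly, else 0. Returns (difficulty, discrimination, kr20).
    """
    n_students = len(responses)
    n_items = len(responses[0])
    totals = [sum(row) for row in responses]

    # Difficulty index p: proportion of candidates answering each item correctly.
    p = [sum(row[i] for row in responses) / n_students for i in range(n_items)]

    # Discrimination index: proportion correct in the upper 27% of candidates
    # (by total score) minus the proportion correct in the lower 27%.
    order = sorted(range(n_students), key=lambda s: totals[s])
    n_group = max(1, round(0.27 * n_students))
    lower, upper = order[:n_group], order[-n_group:]
    disc = [
        sum(responses[s][i] for s in upper) / n_group
        - sum(responses[s][i] for s in lower) / n_group
        for i in range(n_items)
    ]

    # Kuder-Richardson formula 20:
    # KR-20 = (k / (k-1)) * (1 - sum(p*q) / variance of total scores).
    mean = sum(totals) / n_students
    var = sum((t - mean) ** 2 for t in totals) / n_students
    kr20 = (n_items / (n_items - 1)) * (1 - sum(pi * (1 - pi) for pi in p) / var)
    return p, disc, kr20
```

Items flagged in the report as having "extreme" values would be those whose difficulty index approaches 0 or 1, or whose discrimination index is near zero or negative.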

A repeat survey questionnaire was administered in 2009 to pre-clinical and clinical course coordinators (17 of each) to elicit information on: (i) which assessment tools they used; and (ii) whether they followed good assessment practices, namely whether questions were selected by a group of teachers; whether a marking scheme was followed; whether double-marking was practised; whether an objective structured clinical examination (OSCE) or objective structured practical examination (OSPE) was used with a checklist; whether an optical scanner was utilised; whether item analysis was carried out; and whether examination results were reviewed. Data were compared with the baseline data to track the changes.

Evaluation of results and impact

MCQs (A-type and R-type) were used overwhelmingly in 2009, and essay-type questions had been abolished. The use of short notes and true/false items had declined in clinical courses but persisted in pre-clinical courses, a pattern associated with lower use of the optical scanner and item analysis in those subjects. It appears that the pre-clinical departments' distance from the Examination Centre determined their choice to continue with traditional types of assessment.

A substantial increase was noticed in the proportion of courses following good practices in assessment. The practice of having a group of teachers set questions increased in both pre-clinical (from 21.4% to 88.2%) and clinical (from 58.8% to 100%) courses. The use of marking schemes increased in pre-clinical (from 35.7% to 100%) and clinical (from 11.8% to 94.1%) courses. The practice of double marking and the use of OSCEs and OSPEs with checklists also increased in both pre-clinical and clinical courses.

We conclude that the sensitisation of the faculty by the MEU, coupled with the proactive role of the Examination Centre, contributed to these changes. Facilitating access to these services and granting the MEU greater formal authority may further enhance the impact.