Keywords:

  • dental education;
  • assessment;
  • undergraduate students;
  • OSCE

Abstract

At the University of Oulu, the competencies of fourth-year dental students have traditionally been assessed with a written examination before they go to work for the first time as dentists outside the Institute of Dentistry. In 2009, the objective structured clinical examination (OSCE), modified with multiple-choice questions, was introduced as a tool for assessing clinical competencies. The aim of the study was to evaluate the validity of the modified OSCE (m-OSCE) by measuring the attitudes of examiners (teachers) and dental students towards the m-OSCE and to evaluate whether the OSCE is preferred to the written examination in the assessment of knowledge and clinical skills. Additionally, the aim was to evaluate the reliability of the multiple-choice examination. Altogether, 30 students (86%) and 11 of 12 examiners (92%) responded to the questionnaire. Most of the students considered the multiple-choice questions easy but complained about the complex formulation of the questions. The test stations were easy for 87% of the students, but the time allocated was considered too short. Most of the students (73%) and examiners (91%) preferred the m-OSCE to the written examination. All students and examiners found the immediate assessment of the tasks good. Based on the evaluations of the m-OSCE, it could be concluded that both students and examiners preferred the m-OSCE to the pure written examination, which indicates that the m-OSCE had good face validity. Combining multiple methods in the assessment of knowledge and clinical skills, whilst simultaneously taking into account feasibility and the available resources, provides more valid results.


Introduction

The objective structured clinical examination (OSCE) uses a series of test stations to test clinical competencies. Since 1975, the OSCE has been widely used for testing competencies in medical education (1), and around 1997, it was also introduced in dental education (2–4). The traditional OSCE is a clinical competency test in which the student rotates through 10–20 test stations with assignments of mostly 5- or 10-min duration. At each station, the student’s performance is observed and assessed by an examiner using a multi-item criteria checklist (1). The OSCE has been used to evaluate student performance both at the undergraduate level and in postgraduate assessment (5).

In Miller’s model, the facets of clinical assessment are ‘knows’, ‘knows how’, ‘shows how’ and ‘does’ (6). Written examinations measure what a student ‘knows’ or ‘knows how’, whereas the OSCE is a tool for assessing ‘shows how’. The top of the pyramid (‘does’) is assessed through the treatment of real patients, not the simulated patients used in the OSCE. Written examinations include essay examinations, multiple-choice questions (7), learning journals, portfolios (8) or a combination of these (9). Written assessment methods focus on students’ theoretical knowledge base and their ability to memorise, and the results do not typically determine or predict clinical success (7). The assessment of single practical procedures (‘shows how’) has been described in orthodontics (10), prosthodontics (11, 12) and oral surgery (13). In the OSCE, many procedures can be evaluated in the same examination, and the OSCE can also highlight areas of weakness in theoretical knowledge of clinical procedures or in relation to dental competencies (14, 15).

At the Institute of Dentistry at the University of Oulu, the undergraduate dental curriculum lasts 5 years. The competencies of dental students are assessed before they go to work for the first time as dentists outside the Institute after 4 years of study. The assessment is based on clinical assessment rubrics that define absolute performance criteria for the most important clinical skills in practical patient care (14). The rubrics classify a student’s clinical competence as insufficient, basic, advanced or excellent. The competency test has traditionally been a pure written examination, and to pass the examination, the student must demonstrate competency at the basic level. In 2009, the OSCE was introduced as an additional tool for assessing clinical competencies alongside the written multiple-choice question examination. Verhoeven et al. (16) have shown that the reliability of the OSCE improves when a separate written component is added to the examination. The aim of the study was to evaluate the validity of the modified OSCE (m-OSCE) by measuring the attitudes of examiners (teachers) and dental students towards the m-OSCE and to evaluate whether the OSCE is preferred to the written examination in the assessment of knowledge and clinical skills. Additionally, the aim was to evaluate the reliability of the multiple-choice examination.

Materials and methods

Design of the m-OSCE

All fourth-year undergraduate dental students (n = 35, 100%) attended the m-OSCE in April 2009 at the Institute of Dentistry, University of Oulu, Finland. The students were divided alphabetically into four groups, and each group passed through the parts of the m-OSCE together (Fig. 1). The groups were organised so that they moved from one part to another separately from the other groups. The m-OSCE was scheduled at the end of the fourth year, just before the students could work for the first time as dentists outside the Institute of Dentistry during the summer.
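The rotation can be pictured as a simple round-robin schedule. The sketch below (in Python) is only an illustration of one possible ordering; the actual starting part of each group is shown in Fig. 1 and not specified in the text, so the group labels and starting order here are assumptions.

# Illustrative round-robin schedule for four groups rotating through the four
# 1-h parts of the m-OSCE. The group labels and starting order are assumed,
# not taken from the study protocol.

PARTS = [
    "multiple-choice questions",
    "simulation-laboratory test stations",
    "simulated patient case",
    "rest station",
]
GROUPS = ["Group 1", "Group 2", "Group 3", "Group 4"]

for hour in range(len(PARTS)):
    print(f"Hour {hour + 1}:")
    for offset, group in enumerate(GROUPS):
        # Each group starts at a different part and moves one step per hour,
        # so no two groups are ever in the same part at the same time.
        print(f"  {group}: {PARTS[(hour + offset) % len(PARTS)]}")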

Figure 1. The modified objective structured clinical examination included 12 multiple-choice questions, five test stations in a simulation laboratory, one simulated patient case and a rest station.

The students were informed 1 month in advance of what the m-OSCE entailed, its procedure and the assessment criteria. The examiners (teachers) and staff members of the clinical disciplines were invited to an introductory lecture on the m-OSCE 4 months before the test. After the lecture, the teachers from all clinical disciplines were asked by e-mail to develop one to three m-OSCE stations or multiple-choice questions. The e-mail included information about the organisation of the test stations and the formulation of the questions.

The final m-OSCE included 12 multiple-choice questions, five test stations in a simulation laboratory, one simulated patient case and a rest station (Fig. 1). At the rest station, the group of students waited in a separate room. The topics of the questions, test stations and patient case were shared equally between the clinical disciplines, and the domains of competencies tested were in alignment with the dental key competencies (DKC) at the Institute of Dentistry (general medicine and oral biology; diagnostics; manual skills; dental materials; information, knowledge and evaluation; social interactions; society and administration; and common education) (Table 1) (14). All the multiple-choice questions, the tasks and assessment criteria in the simulation laboratory, and the patient case task and its assessment criteria were subjected to peer assessment within and across the disciplines before the m-OSCE. Each part of the m-OSCE lasted 1 h, so the whole m-OSCE lasted 4 h and was completed in 1 day. After the m-OSCE, there was a 1-h feedback meeting for all students and staff.

Table 1. The topics of the multiple-choice questions, test stations and patient case in the modified OSCE (m-OSCE) were in alignment with the dental key competencies (DKC) at the Institute of Dentistry (general medicine and oral biology; diagnostics; manual skills; dental materials; information, knowledge and evaluation; social interaction; society and administration; and common education)

OSCE, objective structured clinical examination.

OSCE station                                         Competency tested (according to the DKC model)
Multiple-choice questions (Interactive Presenter®)
 3 Periodontics                                      Diagnostics
 1 Oral surgery                                      Diagnostics
 3 Orthodontics                                      Diagnostics
 3 Oral pathology                                    Diagnostics
 2 Prosthetics                                       Diagnostics
Test stations in simulation laboratory
 1 Periodontics                                      General medicine and oral biology
 2 Cariology                                         Manual skills
 1 Information gathering and diagnostics             Information, knowledge and evaluation
 1 Patient files                                     Society and administration
Simulated patient case
 Stomatognathic patient                              Diagnostics; manual skills; social interactions
Rest station

Assessment of m-OSCE

The 12 multiple-choice questions were answered with a voting system (Interactive Presenter®, Dolphin Interactive Ltd, Finland, http://www.interactivepresenter.com), supervised by one examiner. Each question had 3–4 alternative answers. Each student had a transmitter for answering the questions, and the answers were saved in a file. The students had 3 min to answer each multiple-choice question. After the multiple-choice question session, 20 min were reserved for showing the correct answers and for discussion. The multiple-choice question task was assessed, and the students were informed about the results, after the entire m-OSCE had been completed.

The five test stations in the simulation laboratory were organised in two concurrent lines, i.e. there were two similar versions of each test station with one examiner at each (10 examiners in total), so that 10 students performed the test station tasks individually at the same time (Fig. 1). Six minutes were allocated for the task at each station, 3 min for the assessment and individual feedback and 1 min for changing stations. The assessment criteria were printed as a written checklist of 10 pre-determined items with yes/no answers, reflecting the basic level of competency according to the competency levels of the clinical assessment rubrics at the Institute of Dentistry. The criteria were set by the clinical disciplines and rehearsed in advance with the examiners of the same test station (to support inter-examiner reliability). Passing a station required seven passed items.
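To make the scoring rule concrete, the following minimal sketch (in Python; not the tooling actually used in the study) shows how one examiner's 10-item yes/no checklist could be turned into a pass/fail decision using the 7/10 threshold described above. The class, function and station names are hypothetical.

# Minimal sketch of the station scoring rule: 10 yes/no checklist items,
# with at least 7 "yes" answers required to pass the station.

from dataclasses import dataclass


@dataclass
class StationResult:
    """Result of one student at one test station."""
    student_id: str
    station: str
    items_passed: int   # number of "yes" answers out of 10 checklist items
    passed: bool


def score_station(student_id: str, station: str, checklist: list[bool],
                  pass_threshold: int = 7) -> StationResult:
    """Score a 10-item yes/no checklist; 7 or more 'yes' answers pass the station."""
    if len(checklist) != 10:
        raise ValueError("each station checklist has 10 pre-determined items")
    items_passed = sum(checklist)
    return StationResult(student_id, station, items_passed,
                         passed=items_passed >= pass_threshold)


# Hypothetical example: one student's checklist at a periodontics station.
result = score_station("student_01", "periodontics",
                       [True, True, True, False, True, True, True, False, True, False])
print(result)  # items_passed=7 -> passed=True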

The patient case station was led by one examiner, and there was a simulated patient with a clinical problem (temporomandibular disorders) (Table 2). The students were informed in advance of the signs and symptoms of the simulated patient. At this station, 2–3 students worked at the same time and performed the task together. Ten minutes were allocated for the examination, interview and diagnosis of the simulated patient, and 5 min for the assessment and joint feedback to the students. There were 10 pre-determined items at this station, seven of which had to be passed.

Table 2. The simulated patient case as an example of an m-OSCE station

OSCE, objective structured clinical examination; m-OSCE, modified OSCE.

The patient with temporomandibular disorder
Symptoms of the simulated patient
 Tiredness and stiffness in temporomandibular joints
 Difficulties in mouth opening
 Sore muscles in neck area
 Occasional feelings of emotional stress
 Work as a secretary, plenty of work on a computer
Signs of the simulated patient
 Pain in masticatory muscles on palpation
 Restricted maximal mouth opening
 Restricted lateral movements
 No pain on palpation in temporomandibular joint area
 Pain on palpation in neck area
 Occlusal wear
Assessment criteria (pass 7/10)
 1. Introducing oneself
 2. Checking the formulated anamnesis
 3. Asking for additional anamnestic information
 4. Washing hands
 5. Palpating the temporomandibular joints
 6. Palpating the muscles
 7. Measuring mouth opening and lateral movements
 8. Examining the dentition (occlusal wear)
 9. Informing the patient
 10. Presenting a treatment plan

In conclusion, the m-OSCE was assessed both by test station and by clinical discipline, and the assessment of the m-OSCE was formative. Failure to pass a test station, the patient case station or the multiple-choice question part of the m-OSCE resulted in an additional written essay on the discipline of the failed part of the m-OSCE.

Evaluation by the study group

After the m-OSCE, the students (n = 35) and the evaluating examiners (n = 12) were asked to fill in an anonymous questionnaire covering the ease/difficulty of, and the time scheduled for, the multiple-choice questions, test stations and patient case station, as well as a comparison of the m-OSCE and the written examination (Table 3). The ease/difficulty of the multiple-choice questions, test stations and patient case station was evaluated on a two-point scale, and the time allocated for the test stations was evaluated on a three-point scale. The overall evaluation of the stations was on a five-point scale (1 = poor, 5 = excellent). The preference between the OSCE and the written examination was evaluated on a three-point scale. Additionally, open-ended questions asked for feedback on the m-OSCE in general.

Table 3. The distribution of the responses of students and examiners to the questionnaire on the modified OSCE (m-OSCE)

OSCE, objective structured clinical examination.
¹ There was only one examiner for the multiple-choice questions and for the patient case.

Questionnaire                                        Students (n = 30)     Examiners¹ (n = 11)
                                                     n      %              n      %
Multiple-choice questions were…
 a. Easy                                             19     63             1
 b. Difficult                                        11     37
Multiple-choice questions: time allocated for the tasks was…
 a. Too short                                        1      3
 b. Convenient                                       29     97             1
 c. Too long                                         0      0
The tasks in the clinical test stations were…
 a. Easy                                             26     87
 b. Difficult                                        4      13
Clinical test stations: time allocated for the tasks was…
 a. Too short                                        19     63             3      33
 b. Convenient                                       11     37             4      45
 c. Too long                                         0      0              2      22
The task ‘patient case’ was…
 a. Easy                                             29     97             1
 b. Difficult                                        1      3
Patient case: time for the task was…
 a. Too short                                        8      27
 b. Convenient                                       22     73             1
 c. Too long                                         0      0
Assessment of the tasks (scale 1 = poor, 5 = excellent)
Multiple-choice questions
 1                                                   0      0
 2                                                   0      0
 3                                                   9      30
 4                                                   12     40             1
 5                                                   9      30
Clinical test stations
 1                                                   0      0              0      0
 2                                                   4      13             0      0
 3                                                   10     33             1      11
 4                                                   12     40             6      67
 5                                                   4      13             2      22
Patient case
 1                                                   1      3
 2                                                   2      7
 3                                                   12     40
 4                                                   10     33             1
 5                                                   5      17
OSCE vs. written exam
 a. OSCE is better than the written exam             22     73             10     91
 b. OSCE is equal to the written exam                4      13             1      11
 c. OSCE is worse than the written exam              4      13             0      0
Assessment immediately after the test station was…
 a. Good                                             30     100            11     100
 b. Not good                                         0      0              0      0

The results are presented as percentage distributions and mean scores in the tables. The reliability of the multiple-choice questions was evaluated with the intraclass correlation coefficient (ICC) according to the two test lines.
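As an illustration of how such a reliability estimate can be computed, the sketch below uses the pingouin package's intraclass_corr function on long-format data. It assumes, as one interpretation, that the 'two test lines' correspond to two parallel sets of multiple-choice scores for the same students; the student identifiers and scores shown are invented and are not the study data.

# Minimal sketch of an ICC calculation with pingouin, under the assumption
# described above. The data frame below is illustrative only.

import pandas as pd
import pingouin as pg  # provides intraclass_corr()

# Long-format data: one row per student per test line, with the MCQ score.
df = pd.DataFrame({
    "student":   ["s01", "s01", "s02", "s02", "s03", "s03",
                  "s04", "s04", "s05", "s05", "s06", "s06"],
    "test_line": ["A", "B"] * 6,
    "score":     [10, 9, 7, 8, 11, 10, 6, 7, 9, 9, 8, 7],
})

icc = pg.intraclass_corr(data=df, targets="student", raters="test_line",
                         ratings="score")
print(icc[["Type", "ICC"]])  # the ICC3 row gives a single-measure consistency estimate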

Results

Altogether, 30 students (86%) and 11 of 12 examiners (91%) responded to the questionnaire. The students considered the m-OSCE multiple-choice questions easy (63%) or difficult (37%) (Table 3). Most of the complaints in the open-ended questions in this area concerned the formulation of the questions, which was considered too complex. The timing of the multiple-choice question task was considered adequate. The ICC for the multiple-choice questions was 0.685.

The tasks at the test stations in the simulation laboratory were easy for 87% of the students, but the time allocated for the tasks was considered too short. The examiners mostly found the allocated timing convenient (45%). The simulated patient case was considered easy, and its timing adequate. The m-OSCE was preferred to the written examination by 73% of the students and 91% of the examiners, whilst 13% of the students preferred the written examination to the m-OSCE, and 13% of the students and one examiner (11%) found the m-OSCE equal to the written examination. All students and examiners found the immediate assessment of the tasks good. The mean scores given by the students were 4.0 (range 3–5) for the multiple-choice questions, 3.5 (range 2–5) for the tasks in the simulation laboratory and 3.5 (range 1–5) for the simulated patient case.

Discussion

Most of the students found the m-OSCE test stations and simulated patient case easy to perform and preferred the m-OSCE to written examinations, which indicates that the m-OSCE had good face validity. The reliability (ICC) of the multiple-choice questions was 0.685, which can be considered moderate to strong. The m-OSCE stations were planned on the basis of the basic level of the clinical assessment rubrics of the dental curriculum, because the m-OSCE served as an entrance examination for working as a dentist outside the Institute of Dentistry for the first time after 4 years of study in the 5-year curriculum. In previous years, this examination had been a pure written examination. It has been stated that students change their existing studying strategies to comply with the OSCE as opposed to written examinations (17) and that students show more realistic self-assessment in the OSCE than in written examinations (18). These observations are consistent with the perception of ‘easy tasks’ when students deal with tasks based on the core knowledge of the dental curriculum and on the basic level of the clinical assessment rubrics. In addition, it has been shown that students prepare more for the OSCE than for other examinations, because the OSCE may be an anxiety-provoking assessment method and the expectation to succeed could be higher for the OSCE than for a written examination (12). Although a timed examination can provoke more anxiety, it has been shown that more time per station did not improve students’ performance (19).

Students found the multiple-choice questions difficult more often than the test stations, although the mean score for the multiple-choice questions was higher. The higher score might reflect the fact that, in the students’ opinion, this was the more traditional part of the test. The multiple-choice question part of our m-OSCE was a modification of the traditional OSCE, but it made it possible to arrange an OSCE with limited staff.

The majority of the students preferred the m-OSCE to the written examination as an assessment tool. Some students thought that they could express themselves better in writing and had doubts about their ability to pass the clinical tasks at the test stations under the examiners’ continuous assessment.

All students and examiners found the immediate assessment of the tasks at the test stations to be good. Assessment should stimulate learning (20, 21), and we therefore organised immediate assessment in the m-OSCE. The criteria for the test stations were set by the clinical disciplines, and no students were involved in this process. It might be useful if the questions were read in advance by both teachers and senior students, because most of the complaints about the multiple-choice questions concerned their complex formulation and ambiguous answer options.

The m-OSCE was assessed as pass/fail based on the criteria of clinical competence. In our case the m-OSCE was formative, and no total (summative) score was calculated. Because of the formative assessment and the low number of test stations, the reliability of the test stations was not calculated. The pass/fail point was set on the basis of what the student should know or be able to do at the basic level of the clinical assessment rubrics of the dental curriculum. In dentistry, the OSCE has also been used for testing diagnostic, clinical and communication skills, all based on relevant knowledge (2–4). At the Institute of Dentistry, the competencies required from the graduating dentist have been represented in a DKC model (8), and most of these competencies were covered in the OSCE. In forthcoming OSCEs, it would also be useful to compare the competencies to the profiles of the European dentist (15), as suggested by Schoonheim-Klein et al. (19).

The m-OSCE required less work than the traditional written examination after the examination itself, but the workload is greater in advance, during the planning of the OSCE (22). The OSCE study of Schoonheim et al. (23) showed that at least 12 stations are needed for a reliable decision in a formative setting. Because of a shortage of staff, we could not implement an OSCE with 12 stations for a reliable formative decision, and one planned task had to be omitted because of a lack of adequate staff. The cost of administration and the availability of rooms for test stations had to be taken into consideration locally. It would be useful to inform and motivate the staff more before the OSCE (4). When testing large numbers of students, it would be reasonable to administer the OSCE on different days to limit the number of staff and rooms required (23); in our case, we solved these problems by arranging the multiple-choice question task in one lecture room with one examiner (supervisor).

A strength of this study is the good response rate of the attending students and examiners. The students were asked to fill in the questionnaire anonymously immediately after the last task; five students did not respond. A limitation of this study is the modification of the traditional OSCE with multiple-choice questions. However, the results of this study showed that an OSCE with written multiple-choice questions works well for assessing undergraduate students’ core theoretical and clinical knowledge of the dental curriculum with a limited number of staff. Thus, the m-OSCE has a place in the assessment programme in dentistry. In the future, more staff and more rooms will be required to improve our OSCE so that more test stations can be arranged, but at the same time, we have to take into account the increasing number of students and decreasing number of teachers, as well as the limited funds available for facilities and materials.

Based on the evaluations of the m-OSCE, it could be concluded that both students and examiners preferred the m-OSCE to the pure written examination, which indicates that the m-OSCE had good face validity. Combining multiple methods in the assessment of knowledge and clinical skills, whilst simultaneously taking into account feasibility and the available resources, provides more valid results.

References
