Examination results of medical students with dyslexia

Authors

Jean McKendree, Hull York Medical School, University of York, York, UK
Margaret J. Snowling, Department of Psychology, University of York, York, UK

Correspondence: Jean McKendree, Hull York Medical School, Heslington Road, York YO10 4DD, UK. Tel: 00 44 1904 321751; Fax: 00 44 1904 321696; E-mail: jean.mckendree@hyms.ac.uk

Abstract

Medical Education 2011: 45: 176–182

Context  Dyslexia is a learning disorder, the primary sign of which is significant difficulty in learning to read and spell. However, accumulating evidence suggests that many people with dyslexia can overcome their reading difficulties and enjoy high levels of educational success. There is debate about the appropriateness of different forms of summative assessment for people with dyslexia, but there is little research investigating different examination formats, particularly in higher education, including medical education. Currently, medical school examinations comprise a range of different assessments, both written and performance-based, offering an opportunity to compare performance on different formats. This study compared results between students with and without dyslexia on all summative assessment types used at one UK medical school.

Methods  Examination scores were collated for all summative Year 1 and 2 examinations at Hull York Medical School (HYMS) over four cohorts entering from 2004 to 2007. These included scores on two types of forced-choice question (multiple-choice and extended matching question) examinations, on short written answer examinations and on performance in a 16-station objective structured clinical examination (OSCE). Results for written answers were gathered separately for basic science questions and for questions involving critical analysis and evidence-based medicine.

Results  An overall multivariate analysis of covariance (MANCOVA) on examinations across both years, controlling for gender, ethnicity and age on entry, indicated that there was no significant overall effect of dyslexia on examination results. Regression analysis further showed that dyslexia was not a significant predictor of performance on any of the examination forms in Year 1 or Year 2.

Conclusions  There is no indication that any of the assessment methods used in HYMS, in common with many other medical schools, disadvantage students with dyslexia in comparison with their peers. In the light of these findings, we support the current view that a variety of assessment types should be included in the assessment of all medical students, as is already considered to be best practice.

Introduction

Dyslexia is a learning disorder, the primary sign of which is significant difficulty in learning to read and spell and in which some affected people are unable to attain reading fluency even as adults.1 Dyslexia is associated with impairments in the processing of speech sounds (phonological skills), verbal short-term memory and verbal processing speed; it frequently co-occurs with difficulties in oral language, coordination, attention and organisation and sometimes ‘visual stress’, but none of these are themselves markers of dyslexia.2

Although dyslexia is typically associated with educational underachievement3,4 and people with dyslexia have often been reported to have poor career prospects,5 it is no longer the case that dyslexia is a barrier to higher education. Indeed, an accumulation of evidence suggests that many individuals with dyslexia, in particular those with high IQs, can overcome their reading difficulties and enjoy high levels of educational success despite basic processing difficulties and persistent problems with spelling and writing.6–8 Students with dyslexia represent about 3.2% of the population of students on undergraduate taught degree courses in UK universities,9 but medical schools often report higher percentages. Such high-functioning individuals, often referred to as ‘compensated dyslexics’,10 have clearly developed effective mechanisms for study, although there is some evidence that, as a group, students with dyslexia may be more anxious than their peers in a variety of situations11 and may require additional emotional as well as academic support.12

Ten years ago, a national working party, convened on behalf of the Higher Education Funding Councils of England and Scotland,13 laid out advice regarding the support and guidance required by students with dyslexia in higher education, which included guidance on examination arrangements. Typically, students with dyslexia are given extra time to complete formal examinations in order to compensate for poor reading fluency and writing speed, and an allowance may be requested for poor spelling. Some students with additional co-occurring difficulties may be allowed to use a laptop computer or may be provided with an amanuensis (for students with dyspraxia) or a prompter (for students with attention-deficit hyperactivity disorder [ADHD]). Although anecdotal evidence suggests these accommodations are helpful,14 there is a dearth of specific research regarding the outcomes of such policies or, more generally, on whether students with dyslexia perform more poorly than other students in formal assessments. A commonly held view is that people with dyslexia fare better in oral than in written examinations, but evidence is lacking; given that students with dyslexia frequently complain of difficulties in finding words fluently, oral examination may not be the most effective way of assessing their knowledge. Likewise, it has been claimed that multiple-choice examinations penalise students with dyslexia because of the reading demands they pose.

Currently, most medical school examinations comprise a range of different assessments, which are both written and performance-based. There has been recent debate as to the appropriateness of different forms of summative assessment for people with dyslexia, but there has been little research looking at different examination formats. Following a complaint brought against a UK medical school and the General Medical Council (GMC)15,16 regarding the use of multiple-choice questions (MCQs), a comprehensive analysis of examination performance from this medical school indicated that students with dyslexia performed as well as students without.17 However, this study was limited by its focus on MCQs and the fact that it did not include an analysis of other types of examination format.

Another form of written assessment that is frequently used involves the extended matching question (EMQ), a form of MCQ. Here, the candidate is asked to choose a correct response from a longer list of alternatives than in a typical MCQ, generally between 10 and 20 items. Leinster and Gibson18 showed that there was no difference between students with and without dyslexia in this form of examination. An extensive review of the literature by Ricketts et al.17 concluded that more research into the performance of medical students with dyslexia on various types of examination is required before it is possible to conclude that any particular format does or does not disadvantage them.

The range of assessment formats commonly used in medical schools presents an opportunity to compare the performance of students on different types of examination. Such comparisons can inform decisions about how to design a diet of assessments that assesses the knowledge and skills that form the foundation of medical practice without eliminating students on the basis of a disability that may not be relevant to performance as a doctor. Thus, this study aimed to replicate the finding that students with dyslexia are not penalised by MCQs and, in addition, investigated whether there are detectable differences in results between students with and without dyslexia on other types of summative assessment or on questions that require critical reasoning as well as factual knowledge.

Methods

Examination types

Examinations at Hull York Medical School (HYMS), as at most medical schools, include written and performance-based assessments of various types. Three overarching ‘themes’ are examined separately in the HYMS curriculum. Theme A encompasses knowledge of life sciences and clinical sciences; Theme B covers clinical techniques, clinical skills and person-centred care; and Theme C covers evidence-based decision making, population health and the management of resources, and includes critical analysis of research literature. Themes A and C are assessed primarily by written papers, whereas Theme B is assessed primarily by 16-station objective structured clinical examinations (OSCEs) in Years 1 and 2 (the years under analysis in this paper). Written examinations contain three types of question: five-option MCQs; EMQs; and modified essay questions (MEQs), in which a scenario is described and a series of related questions requiring short written answers is posed.

Because we were interested in whether different types of question, in particular forced-choice questions such as MCQs and EMQs, showed any patterns in relation to students with dyslexia, we disaggregated the examination results into different groups according to question type. The results of the MCQs and EMQs, which are both forced-choice formats, were extracted into one group, keeping Themes A and C separate; the MEQs were extracted, again for Theme A and Theme C separately; and the results of the OSCEs were grouped. This resulted in five scores for each student: Theme A MCQ/EMQ examinations; Theme A MEQ examination; Theme C MCQ/EMQ examinations; Theme C MEQ examination, and Theme B OSCE.

Examination scores were collated for the Year 1 and Year 2 end-of-year examinations over four cohorts of students who entered between 2004 and 2007. We chose Years 1 and 2 for analysis for two reasons. Firstly, previous research suggests that in schools that have found differences in performance, these disappeared by Year 2.18 Secondly, because the number of students with dyslexia in a given year is small, we required data from four consecutive cohorts for analysis; however, as HYMS is a new medical school, the later cohorts had not yet taken the Year 4 and 5 examinations. Furthermore, if students with dyslexia are at a disadvantage, this should show up in the early years of the curriculum, particularly if these students tend to fail or withdraw at higher rates than other students.

Because the tests are different each year and have not been equated for difficulty, standard scores (Z-scores) were calculated for each cohort on each question type (MCQ/EMQ, MEQ and OSCE) in the three theme clusters. This gives a population distribution of scores with a mean of 0 and a standard deviation of 1. Scores may therefore take on positive or negative values.
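For illustration only, per-cohort standardisation of this kind could be computed as in the following sketch. The data frame and column names here are hypothetical (the original analysis was conducted in SPSS); the point is simply the within-group Z-score transformation, z = (x − group mean) / group SD.

```python
import pandas as pd

# Hypothetical long-format results table: one row per student per question
# type. Column names and values are illustrative, not the study's data.
results = pd.DataFrame({
    "student_id":    [1, 2, 3, 4, 5, 6],
    "cohort":        [2004, 2004, 2004, 2005, 2005, 2005],
    "question_type": ["MCQ/EMQ"] * 6,
    "raw_score":     [62.0, 71.5, 58.0, 66.0, 80.0, 74.5],
})

# Standardise within each cohort and question type, so that every group's
# distribution has a mean of 0 and a standard deviation of 1.
results["z_score"] = (
    results.groupby(["cohort", "question_type"])["raw_score"]
           .transform(lambda s: (s - s.mean()) / s.std())
)

print(results)
```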

Dyslexia status was derived from the student record held by the university. As there are no standard criteria for the ‘diagnosis’ of dyslexia, students who report as dyslexic must have this confirmed by a professional assessment in order to qualify for formal arrangements, such as the provision of extra time in examinations. In the present study, dyslexia status was confirmed by an educational psychologist employed by the university disability services and the result of that assessment was recorded as ‘dyslexia’ or ‘unseen disability/dyslexia’. Our cohorts included a small number of students with other disabilities, such as difficulties with the mechanics of writing; their data were included with those of the group without dyslexia for analysis. Other demographic data collected from the student record included age at entry (categorised into ≤ 21 years or > 21 years), gender and ethnicity.

Students with dyslexia (and those with other difficulties that were deemed to affect writing) were given 15 minutes extra per hour of testing for the written assessments. All students were given the same amount of time for the OSCE stations.

Our cohorts totalled 544 students, of whom 508 did not declare dyslexia and 36 (6.6%) did. Ten students left before Year 2: seven failed examinations or withdrew, and three were given leave of absence. This left 534 students in total for Year 2. None of those who left had dyslexia.

Initial statistical analysis aimed to determine whether any overall differences emerged in either Year 1 or Year 2 examination results. Regression analyses then investigated whether dyslexia was a significant predictor of examination results, alongside other factors, such as gender and age, that have been found to influence assessment outcomes.
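The analyses themselves were run in SPSS (see Results). Purely as an illustrative sketch, equivalent models could be expressed in Python with statsmodels roughly as follows; the simulated data and all variable names are hypothetical, and statsmodels fits the full regression model in one step rather than using SPSS's stepwise procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.multivariate.manova import MANOVA

# Simulated stand-in for the (confidential) student records: five Z-scored
# examination results plus dyslexia status and demographic covariates,
# one row per student. Everything here is hypothetical.
rng = np.random.default_rng(0)
n = 200
year1 = pd.DataFrame({
    "themeA_mcq_emq": rng.standard_normal(n),
    "themeA_meq":     rng.standard_normal(n),
    "themeC_mcq_emq": rng.standard_normal(n),
    "themeC_meq":     rng.standard_normal(n),
    "osce":           rng.standard_normal(n),
    "dyslexia":       rng.integers(0, 2, n),   # 1 = declared dyslexia
    "gender":         rng.integers(0, 2, n),   # 1 = female
    "ethnicity":      rng.integers(0, 2, n),   # 1 = non-White
    "age_entry":      rng.integers(0, 2, n),   # 1 = > 21 years at entry
})

# Multivariate test across all five examination scores at once, with
# gender, ethnicity and age on entry in the model (analogous to MANCOVA).
manova = MANOVA.from_formula(
    "themeA_mcq_emq + themeA_meq + themeC_mcq_emq + themeC_meq + osce"
    " ~ dyslexia + gender + ethnicity + age_entry",
    data=year1,
)
print(manova.mv_test())

# Per-examination regression: does dyslexia predict the score once the
# demographic covariates are included?
ols = smf.ols(
    "themeA_mcq_emq ~ dyslexia + gender + ethnicity + age_entry",
    data=year1,
).fit()
print(ols.summary())
```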

Ethical approval was given by the HYMS Educational Ethics Committee. All data records were anonymised for analysis after demographic information had been recorded.

Results

Table 1 shows the numbers of students classified as having or not having dyslexia, by gender, age and ethnicity. The proportion with dyslexia is slightly higher than the 3–5% reported in the general population (http://www.dyslexiaaction.org.uk) and is similar to that in at least one other medical school in the UK, which reported percentages of 5–7%.17 All analyses were run in SPSS Version 17 (SPSS, Inc., Chicago, IL, USA).

Table 1. Reported dyslexia by gender, age and ethnicity

                     Male/female    Age ≤ 21/> 21 years    White/non-White
With dyslexia        18/18          22/14                  29/7
Without dyslexia     207/301        389/119                354/154

The overall MANCOVA examining examination performance across both years and controlling for gender, ethnicity and age on entry indicated that dyslexia had no significant effect on the overall results of Year 1 examinations (F = 0.19; d.f. = 5, 532; p = 0.97; effect size = 0.002) or Year 2 examinations (F = 1.6; d.f. = 5, 522; p = 0.16; effect size = 0.015). It is interesting that, although not significant, there was a trend towards an effect in Year 2, contrary to the results of Leinster and Gibson.18

Stepwise regression analyses examining each examination type, with gender, age at entry, ethnicity and dyslexia as possible predictors, confirmed that dyslexia was not a significant predictor on any of the examination forms in Year 1 or Year 2: it did not enter any of the regression equations for any of the examination types. The standardised beta weights for dyslexia status in Year 1 were: Theme A MCQ/EMQ: β = 0.008, p = 0.85; Theme A MEQ: β = −0.048, p = 0.26; Theme C MCQ/EMQ: β = 0.022, p = 0.61; Theme C MEQ: β = −0.031, p = 0.47; and OSCE: β = 0.009, p = 0.83. For Year 2, the beta weights were: Theme A MCQ/EMQ: β = −0.014, p = 0.74; Theme A MEQ: β = −0.010, p = 0.81; Theme C MCQ/EMQ: β = 0.031, p = 0.47; Theme C MEQ: β = −0.050, p = 0.23; and OSCE: β = 0.008, p = 0.85.

Table 2 shows the means, standard deviations and 95% confidence intervals for all examination results for students with and without dyslexia.

Table 2. Mean examination results for students with and without dyslexia

Examination            With dyslexia,   Without dyslexia,   With dyslexia,     Without dyslexia,
                       mean (SD)*       mean (SD)*          95% CI             95% CI
Y1 Theme A MCQ/EMQ      0.091 (1.12)     0.018 (0.98)       −0.229 to 0.471    −0.067 to 0.103
Y1 Theme A MEQ         −0.094 (0.98)     0.022 (0.98)       −0.458 to 0.270    −0.064 to 0.107
Y1 Theme C MCQ/EMQ     −0.038 (1.03)     0.010 (1.00)       −0.388 to 0.312    −0.077 to 0.097
Y1 Theme C MEQ         −0.052 (1.17)     0.024 (0.95)       −0.448 to 0.344    −0.058 to 0.107
Y1 OSCE                 0.083 (0.94)    −0.012 (1.00)       −0.234 to 0.400    −0.099 to 0.076
Y2 Theme A MCQ/EMQ     −0.029 (1.26)     0.002 (0.97)       −0.454 to 0.396    −0.083 to 0.088
Y2 Theme A MEQ          0.001 (1.17)     0.006 (0.97)       −0.395 to 0.397    −0.080 to 0.091
Y2 Theme C MCQ/EMQ      0.157 (1.08)    −0.011 (0.99)       −0.207 to 0.521    −0.098 to 0.076
Y2 Theme C MEQ         −0.117 (1.29)     0.017 (0.97)       −0.552 to 0.319    −0.068 to 0.102
Y2 OSCE                −0.013 (1.00)     0.010 (0.99)       −0.351 to 0.325    −0.077 to 0.096

* Scores are Z-scores; the Z-score distribution has a mean of 0 and an SD of 1
SD = standard deviation; 95% CI = 95% confidence interval; Y1 = Year 1; Y2 = Year 2; MCQ = multiple-choice question; EMQ = extended matching question; MEQ = modified essay question; OSCE = objective structured clinical examination

Figures 1 and 2 present the scores as boxplots. These illustrate that, although there may be small differences in means, there is wide variation within each group. The black line is the median (which is not affected by outliers); the boxes span the 25th to 75th percentiles (50% of cases fall within this range), and the whiskers extend to the most extreme value within 1.5-times the height of the box, a range that includes approximately 99% of cases in a normally distributed set of data. Points plotted beyond the whiskers are outliers.
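As an illustration of this construction (not the code used to produce the published figures), a boxplot of this form could be generated as follows; the data are simulated, with only the group sizes taken from Table 1.

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulated Z-scores standing in for the real examination results: a small
# group 'with dyslexia' (n = 36) and a large group 'without' (n = 508).
rng = np.random.default_rng(1)
with_dyslexia    = rng.standard_normal(36)
without_dyslexia = rng.standard_normal(508)

fig, ax = plt.subplots()
# whis=1.5 draws each whisker to the most extreme point within 1.5 x the
# box height (the interquartile range); points beyond the whiskers are
# plotted individually as outliers.
ax.boxplot([with_dyslexia, without_dyslexia], whis=1.5)
ax.set_xticks([1, 2])
ax.set_xticklabels(["With dyslexia", "Without dyslexia"])
ax.set_ylabel("Z-score")
ax.axhline(0, linestyle="--", linewidth=0.8)
plt.show()
```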

Figure 1.

 Boxplots of Year 1 mean examination scores for students with and without dyslexia. MCQ = multiple-choice question; EMQ = extended matching question; MEQ = modified essay question; OSCE = objective structured clinical examination

Figure 2.

 Boxplots of Year 2 mean examination scores for students with and without dyslexia. MCQ = multiple-choice question; EMQ = extended matching question; MEQ = modified essay question; OSCE = objective structured clinical examination

Discussion

This paper is the first that we know of to report a comprehensive analysis of the examination performance of medical students with dyslexia in a range of different formats of assessment. All the students with dyslexia were given extra time on written examinations, in accordance with university guidelines. The OSCE stations were identical for all students and no extra time was given. Data from four consecutive cohorts of students indicate that there was no overall effect of dyslexia status on examination results and, in line with Ricketts et al.,17 refute the claim that students with dyslexia are penalised by the format of MCQs.

Indeed, this was the case for all assessment formats we investigated, including short written answers requiring critical reasoning and analysis and performance assessments, as well as forced-choice questions. As the boxplots show, the great majority of scores of students with dyslexia fell within the range of scores of students without dyslexia, indicating that the subgroup with dyslexia did not show a meaningfully different pattern of results.

It is important to raise a number of issues surrounding the present research. Firstly, the data available for the analyses were necessarily limited and there was no measure of IQ. Medical students have above-average general cognitive ability relative to population norms, but some variation within the sample is nonetheless to be expected. In the absence of this information, it is not possible to ascertain at the level of individuals whether some students with dyslexia underachieved even with examination arrangements in place; likewise, it is not possible to tell whether any were positively advantaged by the provision of extra time.

Most studies performed in the general higher education population have involved standardised MCQ tests that are timed such that many candidates do not complete the examination. In these circumstances, students with dyslexia typically perform somewhat more poorly, although the provision of extra time lessens the difference. We have had no instances of students being unable to complete the written examinations in the time provided, which suggests that the examinations are, in effect, untimed tests and that students with dyslexia are not unduly pressured in terms of the time allowed. This is as we intend, given that knowledge and critical reasoning, rather than performance under time pressure, are being assessed.

When differences have been found on written examinations, it is typically on tests of reading comprehension, such as the Nelson–Denny Reading Test; tests of broader abilities, including skills in mathematics, reasoning and general knowledge, such as the American College Test (ACT) and Scholastic Assessment Test (SAT), show smaller or no differences. Moreover, among students with high IQs, the results generally show no differences between students with and without learning disabilities.19 This would suggest that medical students with dyslexia would do well on written knowledge tests, given their previously demonstrated high academic achievement. However, it is important to keep monitoring examination performance in the light of changing demographics in medical student admissions.

One reassuring finding of this study is that there were no differences in scores on the performance-based OSCEs. We have been unable to find any other studies comparing students with and without dyslexia on performance-based examinations. Although time is limited in this type of assessment, we would argue that performance under pressure is an authentic component of medicine and is therefore not an irrelevant factor in this type of examination. It could be argued that giving students with dyslexia extra time might disadvantage students without dyslexia; this question deserves further research, although the present study suggests that dyslexia is not a factor in performance on this type of assessment.

A recent meta-analysis of test accommodations for adolescents transitioning into university noted that studies of performance on multiple-choice entrance examinations are greatly over-represented and that data on other types of assessment are badly needed.20 We hope that this study will be the first of many to look at a wider range of examination types.

A second issue pertaining to the present study is that our sample with dyslexia was relatively small and we held no information regarding co-morbid disorders, such as motor skill or attention difficulties, that might differentially affect some students. Ideally, the present study should be replicated across cohorts of students not only in medical education, but also in other disciplines and perhaps with further knowledge of other factors such as IQ and other disorders.

Nevertheless, in the light of these findings, we support the view that a variety of assessment types should be included in the assessment of medical students, with no one type so heavily weighted that it would eliminate students for a reason unrelated to the profession, as is already considered to be best practice. It is also clear that ongoing assessment is an inevitable part of a medical career, and students must graduate prepared for, and practised in, the types of assessment they will be expected to undertake. This study indicates that all these examination forms can be completed just as successfully by medical students with dyslexia as by their peers without it. As one study on attainment in higher education states: ‘…dyslexia is by no means incompatible with a successful outcome in higher education, given an appropriate level of commitment on the part of the students and an appropriate level of resources on the part of their institution.’21

Contributors:  Both authors conceived the study and contributed equally to the writing of the paper. JM gathered the data and conducted the analysis in consultation with MJS.

Acknowledgements:  none.

Funding:  none.

Conflicts of interest:  none.

Ethical approval:  this study was approved by the Hull York Medical School Ethics Committee, University of Hull, Hull, UK.
