All medical degrees are equal, but some are more equal than others: An analysis of medical degree classifications

Inequity in assessment can lead to differential attainment. Degree classifications, such as ‘Honours’, are an assessment outcome used to differentiate students after graduation. However, there are no standardised criteria used to determine what constitutes these awards.


| INTRODUCTION
Inequity in medical education is an international issue, [1][2][3] and it occurs through unfair processes that are avoidable. 4 Students from minority ethnicities and socio-economically disadvantaged backgrounds perform worse on average during their education and postgraduate training. [1][2][3]5,6 This 'differential attainment' is not due to individual poor performance but is a reflection of inequitable processes within medical education. 7,8 One area where differential attainment can occur is within assessment. 5 As such, it is important to know the 'consequential validity' of an assessment, which Messick defines as the consequences (positive or negative) of a test. 9 Not recognising the consequential validity of an assessment can have adverse effects and can act as an 'inappropriate barrier' to some populations. 10

Degree classifications are an assessment outcome used to differentiate students based on academic achievement. 11 Unlike most university degrees awarded in the UK, medical degrees are technically unclassified, as students are not awarded a first-, second- or third-class degree. 12 Instead, medical students receive a 'Pass', indicating that the graduate meets the UK General Medical Council's (GMC) core competencies required of a newly qualified doctor. 13 Despite the lack of classification attached to UK medical degrees, some medical schools use a hierarchy of awards to acknowledge academic achievement beyond a Pass. For example, students may graduate with 'Pass with Honours' displayed on their medical degree certificates and transcripts. 14 Internationally, similar systems exist, 15,16 though some countries use different terminology. 17,18 In the USA, academic achievement may be acknowledged through grading systems used during clerkships (where grades such as 'Honours' may be awarded) 19 and membership of Honours societies, such as the Alpha Omega Alpha society (whose member institutions can elect up to 16% of their graduating class 20 ).
It appears that many higher education institutions internationally employ systems to denote exceptional achievement during medical studies.
In some countries, these awards contribute to the ranking of postgraduate applications, such as residency applications in the USA 21,22 and Kuwait. 16 In the UK, academic awards matter most within postgraduate specialty training applications. The foundation programme, a generalist, 2-year training programme that all new UK doctors must complete before progressing to further training, does not recognise degree awards. Instead, academic ranking is calculated via the student's decile within their medical school year. 23 However, following foundation training, many training application marking schemes award additional marks for candidates with degree awards such as Distinctions or Honours, including those for core surgical training and internal medicine training. [24][25][26][27] Despite the value placed upon classifications in postgraduate training applications, there is a paucity of information regarding them.
No standardised criteria exist to determine what constitutes these awards, either in the UK or internationally. 19 This is surprising within a national context where the regulator of doctors, the GMC, accredits medical schools on the basis of quality and equity between schools 28 and is increasingly moving towards standardised methods of assessment (evident with the advent of the UK Medical Licensing Assessment in 2024 29 ). It is unknown how many UK medical schools make degree awards, what criteria are used and what percentage of students receive the awards. This is a critical absence of information given the importance of consequential validity and inequity within assessment.
In this study, we set out to map the use of classifications and discrepancies between medical schools. In exploring this topic, we hope to begin a conversation regarding degree classification validity and inequity within medical school assessment.

| METHODS

| Data collection
We conducted a cross-sectional study of classifications awarded by UK medical schools using Freedom of Information (FOI) Act requests.
The FOI Act 2000 is UK legislation that grants a right of access to information held by 'public authorities', which includes most universities in the UK. The authority must reply 30 but may decline to provide information (e.g. if obtaining the data would be too costly or take up too much staff time, be inappropriate or release personally identifiable information 31 ). Similar processes exist in other countries, including the USA, Ireland and Australia. 32 Medical schools were identified from the Medical Schools Council website on 16 December 2019 and were contacted via their general enquiries e-mail address. 33 We sent FOI requests to each medical school following guidance by Savage and Hyde on the principles of FOI research 32 and Walby and Luscombe on criteria for quality in FOI research 34, including sampling a smaller number of universities initially, using a standardised request 32 and note-taking during the process. 34 Our FOI requests contained the following verbatim questions:

1. What classification do you use for the final medical degree awarded at graduation, ranked from highest to lowest (e.g. Honours, Distinction and Merit)?
2. What percentage of medical students obtain each of these final degree classifications at graduation?
3. What criteria are used to determine whether a student obtains each of these degree classifications?
Where a simple percentage cut-off criterion was used, that was taken as the percentage of students who received the award (e.g. Honours awarded to the top 10% of the year). Where more complex systems were used, we asked for the percentage of students who received the awards for the 5 years before the 2019/2020 academic year.
We staggered our initial requests so that we received and reviewed data from at least five universities before wider dissemination, and an expert in assessment reviewed our request and preliminary data to check the questions we asked were appropriate. Responses were collected from 16 December 2019 until 10 May 2021.
All responses were screened independently by two reviewers (MHVB and SEY), data were extracted and any discrepancies were resolved by consensus. Duplicate screening was used because the FOI Act permits the public body to provide a link to data already in the public domain instead of a succinct written response; where this was the case, the materials provided were often comprehensive policy documents or course handbooks requiring careful review. We clarified any points that were unclear with repeat FOI requests.
We excluded intercalated degrees/certificates, prizes, grants, bursaries and awards for individual parts of the degree (e.g. Pass with a Distinction in a Student Selected Component).

| Ethics
The study was reviewed by the University of Oxford Medical Sciences Interdivisional Research Ethics Committee, and the study did not require ethical approval (reference: R80126/RE001).

| Analysis
We calculated the median and range for the percentage of students awarded each classification, and any type of classification, across the 5-year period. Data are presented as median (range), with percentages >10% rounded to the nearest whole number. To summarise the classification criteria, we performed a content analysis, coded the data and presented the count and percentage for each code. We performed a descriptive analysis of our data due to its heterogeneity.
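As an illustration of the descriptive summary described above, the sketch below pools university-year percentages and reports a median (range). The university names and figures are hypothetical examples, not the study's data.

```python
from statistics import median

# Hypothetical data (not the study's real figures): percentage of students
# receiving any type of classification at each university, per year.
pct_any_award = {
    "University 1": [12.0, 14.5, 15.0, 13.2, 16.1],
    "University 2": [5.3, 6.0, 7.1, 6.4, 5.9],
    "University 3": [35.0, 38.0, 33.2, 36.5, 34.1],
}

# Pool all university-year observations, then summarise as median (range),
# mirroring the descriptive analysis described above.
all_values = [v for yearly in pct_any_award.values() for v in yearly]
med = median(all_values)
lo, hi = min(all_values), max(all_values)
print(f"median {med:.1f}% (range {lo:.1f}%-{hi:.1f}%)")
```

The same pooling can be applied per classification (e.g. Honours only) to reproduce the per-award medians and ranges reported in the supplementary tables.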

| RESULTS
We received responses from all 42 UK medical schools listed on the UK Medical Schools Council website. We excluded six universities: three did not host medical students for the whole degree programme or had their final degree awarded by another university; two were new medical schools with no graduating cohort that had not yet determined their classification system; and one was a private university exempt from FOI requests.
Thirty-six universities provided usable data and were included in our analysis (Figure 1). We received complete datasets from 31 of 36 universities (86%), and the only missing data points were related to percentages of students (Figure 1).
A total of 45 classifications above a Pass were awarded by the 36 included universities (Tables 1 and 2).

| Criteria used to determine classifications
The criteria for classification varied considerably and were determined through rank, grade or more complex systems (Table 2). The criteria used for each classification can be seen in Table S1.

| Percentage of students receiving classifications
Twenty-five of the 30 universities that awarded classifications provided data on the percentage of students who received them; a median of 15% of students received any type of classification from their university. Across the 25 universities, the percentage of students receiving any type of classification ranged from 5.3% to 38%. For some universities, there was also a wide range in the percentage of students who received the same classification from the same university. For example, the percentage of students awarded Honours by University Number 24 ranged from 6.7% to 24% during the 5-year period we assessed (Table S1). The median and range awarded for each classification can be seen in Table S1.

| DISCUSSION
To our knowledge, this is the first study to systematically examine the degree classifications awarded by medical schools in the UK. We identified that most UK medical schools do award classifications. However, there was wide variation in terminology, criteria and percentage of students awarded classifications.
Our data demonstrate that there is inequity among medical schools in the way they recognise achievement. This inequity creates advantages for students from some institutions and is likely to influence success in postgraduate training applications (Figure 3). A comparable pattern has been reported for the grading systems used to determine clerkship performance in the USA. 19 Similar to our findings, the same terminology had different meanings between different American medical schools, and there was significant variability in the percentage of medical students who were awarded the top grade in each clerkship (between 5% and 97%). 19 Takayama et al. argue that because of this variability, the use of Honours in clerkships as an assessment tool is unreliable. 39 Indeed, classifications are not just used as an assessment of performance but are also used to inform important decisions. 16,21,22,[24][25][26][27]39 As this is the first paper to quantify the use of degree classifications, their consequential validity has not yet been assessed. Consequently, there is a risk of worsening differential attainment.
There is evidence that suggests the presence of gender-based disparities in the way Honours are awarded for some clerkships in the USA. 40 Fewer students from non-Caucasian ethnicities and socioeconomically disadvantaged backgrounds receive Honours 41,42 or are Alpha Omega Alpha society members in the USA. 43,44 Over time, small differences can amplify and further disadvantage underrepresented groups. 45 Although postgraduate training pathways may differ between the USA and other countries, similar awards contribute to postgraduate application scoring internationally. 16 This is a critical direction for future international study, as the effects of awards from countries outside the USA (including their consequential validity and contributions to differential attainment) are unknown.

| Strengths and limitations
The use of FOI requests is a strength of our study, as they generally have a very high response rate, and we were able to access information that would have been challenging to obtain through alternative methods (such as the written criteria used for each classification).
However, FOI requests are unable to answer questions around 'why' public authorities made decisions. 32 For example, we were unable to determine why universities used classifications. FOI requests can only obtain objective information relevant to the questions asked and cannot give the same depth of knowledge as a survey or an interview (where subjective information, and information that expands beyond the question asked, can be obtained).
Walby and Larsen note that there may be issues with the quality of responses if requests or responses have unclear wording. 48 The FOI data within our study are self-reported and dependent upon the person responding to our request. As such, there is a risk of systematic errors (where the person does not interpret the request appropriately) or random errors (where the person extracting the data makes a typographical error). 49 We were not able to perform pilot testing or cognitive interviews with respondents, due to the limited number of universities 50 and the cost and time limitation clause in the FOI Act. 31 These limitations may affect the content and response process validity 50,51 and the reliability of our data and interpretations. 34

We sought to reduce systematic errors by reviewing our request with an expert in assessment, sampling a small number of universities initially and reviewing the preliminary data, using a standardised request, answering questions about the requested data from the respondents and using repeat FOI requests to clarify any questions we had regarding the data. 32,34 In our study, a small number of universities provided materials such as comprehensive policy documents or course handbooks, instead of providing a succinct written response. To reduce the risk of random errors during data extraction, we used duplicate screening.
FIGURE 3 Examples of how current practice could advantage or disadvantage students in postgraduate applications. In each example, the candidate and examinations are identical. For (a)-(c), the only difference is the criteria that are used to determine the award of the classification, and for (d), the only change is the terminology of the classification that is awarded. The criteria presented in (a)-(c) are representative of those used by medical schools in our study. The marking scheme presented in (d) is representative of the 2022 Core Surgical Training application, 26 which awarded 6 points for a 'Distinction at Final Year undergraduate level' out of a total of 72 available for the candidate's portfolio.

| CONCLUSION
We demonstrate considerable variation in the way UK medical degrees are awarded in terms of terminology, criteria and percentage of students awarded classifications. This inequity creates advantages for students from some institutions and is likely to influence success in postgraduate training applications. Without assessing the consequential validity of these awards, there is a risk that they could contribute to differential attainment. Many countries use similar classifications, but there is a paucity of information regarding classifications and their consequential validity outside the USA. There is a need to fully evaluate the use and impact of hierarchical degree awards internationally.