Zubair Amin, Department of Paediatrics, Faculty of Medicine, National University of Singapore, 10 Medical Drive, 117579 Singapore, Singapore. Tel: 00 65 6874 1049; Fax: 00 65 6872 1454; E-mail: email@example.com
In the complex ecosystem of medical education, we expect an assessment scheme to fulfil multiple purposes. The purposes, either explicit or implicit, are varied and not necessarily congruent with one another. For example, we expect assessment in medical education to: assure the public about the quality of graduating doctors; guide curriculum planners towards better programme development; differentiate and rank students based on their abilities; help students monitor their own learning; maintain a high degree of fairness and objectivity in testing; and generate data to enable continuous quality improvement.
That many stakeholders are interested in the information gained from assessment makes the many purposes of a given assessment unsurprising. However, it is frequently problematic when one of these purposes undermines another. For example, balancing an examination that supports students’ learning against one that purports to rank students by ability is fraught with challenges. Similarly, the need for rigorous objective assessment – a necessary criterion from an administrator’s perspective – may not be compatible with the needs of an educator who is interested in determining students’ competency in clinical medicine, which is, by nature, fluid, dynamic and variable.
The lack of clarity about the purpose of assessment at the implementation level, fuelled by an incessant effort to objectify assessment data and a misconception that judgement-based assessments are inferior in validity and reliability, is, at least in part, responsible for what might be described variously as ‘reductionist’, ‘deconstructive’, ‘tick-box’, ‘mechanistic’ or ‘instrumentalist’ approaches to assessment. The shared features include a never-ending need to objectify the assessment in countable numbers, a devaluing of the expertise and judgement of the assessors, a rigid focus on the measurable domains of a doctor’s competency, a fixation with psychometrics and statistics to demonstrate the worth of individual assessment instruments, and a lack of appreciation of assessment as a learning tool for the student and an essential development tool for the educator. These types of assessment have their place, but the practice of equating such positions with assessment in general carries the risk that we might undermine and ultimately lose what we have gained over the last three decades in terms of understanding what constitutes a robust assessment, its educational value and its role in health care.
Therefore, it is to my great delight that several distinguished researchers and thought leaders in the field of medical education have written three scholarly pieces for the 2012 State of the Science issue of Medical Education, and that they speak with a common voice calling for a more holistic view of assessment.
Lurie’s1 article raises a distinct uneasiness as to whether the objectification of competency-based education might lead to over-reliance on checklist-based measurements. Clinical competency is a complex construct that must take into account the heterogeneity of clinical management, the contexts of patients’ problems, and variations in clinical care. An overly objective assessment with rigid standards risks being invalid because such an assessment is unable to embrace and account for the reality and richness of real-life clinical medicine. Specifying competencies into innumerable, measurable domains potentially leads to over-specification that is unproductive and impractical to implement.
In their article, Schuwirth and van der Vleuten2 propose a more encompassing view of the validity of assessment targeted at the programme level. The authors argue that the key issue concerns whether the system of assessment is valid, rather than a separate analysis of the validity of individual component parts. The challenge for an assessor concerns proving the robustness of the entire examination process in terms of its psychometric data, utility and educational value. Thus, a programmatic approach is far more likely to satisfy the needs of the multiple stakeholders typical of a complex assessment system.
The article by Crossley and Jolly3 highlights the point that not all measures in clinical medicine can or should be strictly objective. Rather, assessors should be given more latitude and responsibility to exercise judgement when evaluating specific competencies that are central to the given task. Typically, such competencies cannot be specified to the level of exactness demanded by highly objective assessments. Judgement-based assessments are not necessarily invalid or unreliable. However, it is imperative that we choose assessors carefully, train them adequately for the task and include sufficient data points to account for variations.
The key messages I absorb from these articles1–3 concern the importance of taking into account the complexity of the context in clinical care, the need to encourage a higher tolerance for subjective, value-based judgement, the need to facilitate increasing recognition of the ‘indications, side-effects and contra-indications’2 of assessment, and the need for a systems approach to assessment.
Translating the theoretical evidence into real practice can be challenging but rewarding. Experience from clinical medicine teaches us that managing this translational interface, the rockface between theory and practice, is critical to success.4 The theoretical construct is often undisputed; however, things go awry at the implementation level because the complexities and uniqueness of any given context cannot be predicted by the original research. As a practitioner in assessment, I offer some suggestions of elements that are important in managing this interface.
First and foremost, we need to articulate the purpose of the particular assessment with the greatest possible clarity in a manner that goes beyond its simple categorisation as summative or formative. We must ask repeatedly what the real purpose of assessment is and be certain of its explicit, as well as its implicit, agenda. It is inevitable that there will be occasions when agendas conflict; in such situations, it is imperative that we return to basics and ask for whom the assessment is meant. In student-centric assessment, the priorities of students should take precedence over those of other stakeholders; in examinations that serve as ‘gate-keepers’, patient-centredness should represent the primary determining factor in the design of the format, content and quality assurance mechanisms. There is no contradiction here: improving students’ learning will ultimately lead to better patient care.
Secondly, we must accept the value of subjective assessment by clinical experts. No matter how detailed and thick the volume of specifications may be, we still need human beings to interpret them. ‘Put the patient at ease’ or ‘Performed competently’ and many other criteria that are commonly presented in an objective manner in competency-based education require significant subjective judgement. There are many ways to put a patient at ease; some are known and some simply cannot be known at the point of examination. Furthermore, variability and uncertainty are constant features of clinical medicine, and overt standardisation undermines the authenticity of the clinical examination. Allowing an educated and well-informed assessor flexibility and liberty to make on-the-spot decisions based on the needs of the moment would result in more authentic assessment and better differentiation. More importantly, it would preserve the fidelity of assessment for its intended context.
Thirdly, we need to have a much greater appreciation of the pitfalls and shortcomings of assessment. Much of the discussion in medical education concerns the characteristics and methods of assessment; few commentators engage honestly with the deficiencies of prevalent practices in assessment. The notion of reducing a complex entity such as clinical competency into a single number or letter is logically incomprehensible, statistically indefensible and educationally meaningless. A score of ‘seven point two out of ten’ or ‘pass’ does not provide meaningful feedback to the learner if it is devoid of context and qualitative corroboration, especially in terms of the areas of his or her strengths and weaknesses. In the same way that health and disease are not dichotomous entities, but often exist together and in a continuum, competency and incompetency do not exist in mutually exclusive ‘all-or-none’ conditions. Therefore, much more nuanced descriptors of clinical performance are essential if we are to understand the richness of clinical competency.
Finally, we might want to realign our focus in competency-based education towards more learning and less testing. The primary goal of education is to educate the student, not to assess for assessment’s sake. More effort should be devoted to establishing how assessment can improve learning rather than to debating obscurities and oddities in assessment. The focus of successful and meaningful medical education should shift away from the development of more objective examinations and towards the training of students to become more competent doctors. The crux of this argument is that assessment in medical education should place students’ and patients’ interests ahead of those of other stakeholders.
The measures outlined in the articles1–3 published in this issue of the journal, and reinforced and reinterpreted herein, would serve as potent antidotes to reductionist approaches to assessment and more closely align the assessment system to its most important intent: the betterment of patients and students.