Clinical psychology training in the United Kingdom is offered solely on a post-graduate basis, with a recognized degree in psychology as a prerequisite. It involves training in three areas over the course of three calendar years: academic teaching and study, clinical placements in the National Health Service (NHS), and research including completion of a doctoral thesis. Successful completion of all aspects leads to the award of a Doctorate in Clinical Psychology (DClinPsy or ClinPsyD). Training is underpinned by a scientist-practitioner model similar to the Boulder model adopted in the United States more than 60 years ago (McFall, 2006). Training places for citizens of the United Kingdom or other countries in the European Economic Area are fully funded and salaried by the NHS. All 30 training courses are accredited by the British Psychological Society and approved by the Health and Care Professions Council.
Entry to clinical psychology training is highly competitive in the United Kingdom. All applications are administered by a national Clearing House (http://www.leeds.ac.uk/chpccp), using a single application for up to four courses across the United Kingdom. The high ratio of applicants to training places (on average 3.8:1 across all UK courses during the period covered by this study, data provided by the Clearing House), and the generally high quality of applications, make selection resource intensive. In response, there has been considerable interest in the selection process in recent years. Some courses have introduced computerized or written tests as part of their short-listing process and work is underway to develop a national screening test. Despite their somewhat different selection criteria and processes, all courses shortlist on the application form submitted via the Clearing House and interview as part of the selection process. Although several studies have examined factors associated with application success (Phillips, Hatton, & Gray, 2004; Scior, Gray, Halsey, & Roth, 2007), and selection procedures (Hemmings & Simpson, 2008; Simpson, Hemmings, Daiches, & Amor, 2010), the only UK-based study on prediction of performance during training demonstrated agreement between performance on a written short-listing task and academic performance on the course (Hemmings & Simpson, 2008), albeit with a small sample (N = 45). Several US studies have identified differences in the intakes and subsequent career paths associated with the three main clinical psychology training models used in the United States (clinical scientist, scientist-practitioner, or practitioner-scholar; Cherry, Messenger, & Jacoby, 2000; McFall, 2006). However, our searches did not reveal any English language studies other than Hemmings and Simpson's (2008) that examined factors associated with performance during training.
Given the high resource costs involved in training clinical psychologists, and the substantial responsibility and power trainees have on qualification, it is surprising that evidence on predicting good or poor performance during clinical psychology training is sparse. A likely reason for this gap is the fact that attrition and failure are rare in clinical psychology training, requiring large samples to investigate. Furthermore, courses vary somewhat in assessment procedures, making it difficult to assess outcomes across training courses.
Predictors of underperformance in medicine
It is useful to refer to the extensive literature on the predictors of dropout and academic underperformance at medical school. While medicine and clinical psychology have many differences, they also have important similarities: both require academic proficiency combined with communication and professional skills; both are highly selective; both qualify UK graduates for employment in the NHS; both have a strong professional identity.
Medical admissions procedures attempt to select for academic and non-academic competence using a combination of school grades, aptitude test scores, personal or school statements, and interviews (Parry et al., 2006). As in clinical psychology, there is concern in medicine about the demographic profile of the student population, especially in terms of gender and socioeconomic background (Elston, 2009; Mathers, Sitch, Marsh, & Parry, 2011). The medical education literature therefore explores which factors predict medical school outcomes.
Pre-admission grades are consistent predictors of medical school performance (Ferguson, James, & Madeley, 2002). Lower school grades also predict dropout (O'Neill, Wallstedt, Eika, & Hartvigsen, 2011). However, the majority of medical applicants have top grades (McManus et al., 2005), leading to the use of aptitude tests in selection, although with much debate about their usefulness (Emery, Bell, & Vidal Rodeiro, 2011; McManus et al., 2005). Tests for selection into undergraduate medical courses in use in the United Kingdom and Australia seem not to have good predictive validity (McManus, Ferguson, Wakeford, Powis, & James, 2011; Mercer & Puddey, 2011; Wilkinson et al., 2008; Yates & James, 2010), but MCAT, the test used in the United States where medicine is a graduate course like clinical psychology, appears to have reasonable predictive power (Donnon, Paolucci, & Violato, 2007).
Traditional interviews have low predictive power (Goho & Blackman, 2006), and variations in interviewing methods make it hard to identify consistent relationships between interview characteristics and outcomes (Ferguson et al., 2002). Many medical schools now use the multiple-mini interview in which students are assessed on how they deal with professional situations in practice (Eva et al., 2009). This seems to predict performance on later similar practical tests at medical school (Eva, Rosenfeld, Reiter, & Norman, 2004).
Personal and academic references
References are generally poor predictors (Ferguson, James, O'Hehir, Sanders, & McManus, 2003; Siu & Reiter, 2009), although negative comments from an academic referee may predict in-course difficulties (Yates & James, 2006), and a Canadian study showed personal statements and references to have a small predictive effect on medical school clinical performance (Peskun, Detsky, & Shandling, 2007).
In the United Kingdom, medical school performance is also associated with demographics, with females and white students doing better, raising issues of equity (Ferguson et al., 2002; Higham & Steer, 2004; Woolf, McManus, Potts, & Dacre, 2013; Woolf, Potts, & McManus, 2011).
Range restriction and other statistical challenges
Studies generally correlate selection variables (e.g., pre-admission grades, interview performance) with outcome measures. However, outcome measures are only available on those who were selected and not, naturally, on those who were not accepted. How selection variables predict performance within the selected group is not the same as assessing the variables as a means of selection (Burt, 1943). Because of range restriction, observed correlations between selection variables and outcomes are generally smaller than they would be were outcomes available on all applicants. It is possible to make a correction for range restriction, but only with the right data available on non-successful applicants (Sackett & Yang, 2000).
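The attenuating effect of range restriction, and the correction it permits when applicant-pool data are available (Sackett & Yang, 2000), can be illustrated with a small simulation. This is a hypothetical sketch using invented numbers, not the study's data; the selection ratio, the construct-level correlation of 0.5, and all variable names are illustrative assumptions.

```python
import math
import random

random.seed(42)

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def sd(xs):
    """Population standard deviation."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

# Hypothetical applicant pool: a selection score and an outcome
# correlated rho = 0.5 at construct level (assumed for illustration).
rho = 0.5
pool = []
for _ in range(20000):
    score = random.gauss(0, 1)
    outcome = rho * score + math.sqrt(1 - rho ** 2) * random.gauss(0, 1)
    pool.append((score, outcome))

# Admit only the top ~4% on the selection score (cf. a 28:1 ratio).
all_scores = sorted(s for s, _ in pool)
cutoff = all_scores[int(0.96 * len(all_scores))]
selected = [(s, o) for s, o in pool if s >= cutoff]

r_pool = pearson([s for s, _ in pool], [o for _, o in pool])
r_sel = pearson([s for s, _ in selected], [o for _, o in selected])

# Thorndike Case II correction: usable only when the standard deviation
# of the selection variable in the full applicant pool is known.
big_u = sd(all_scores) / sd([s for s, _ in selected])
r_corr = r_sel * big_u / math.sqrt(1 - r_sel ** 2 + (r_sel * big_u) ** 2)

print(f"r in full pool:      {r_pool:.2f}")  # close to 0.5
print(f"r in selected group: {r_sel:.2f}")   # markedly attenuated
print(f"corrected estimate:  {r_corr:.2f}")
```

The observed correlation in the selected group falls well below the pool-level value purely because the admitted group's scores span a narrow range; the correction recovers an estimate near the true value, but only because the pool standard deviation was available, which is exactly the information missing for unsuccessful applicants in studies like this one.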
Other features of the data, such as the reliability of measures, ceiling effects, and ordinal outcome data, may also reduce the observed correlations. Because of all these factors, the construct-level predictive validity can be much higher than observed correlations between selection variables and course outcomes (McManus et al., 2013).
The clinical psychology doctorate course investigated is the largest one in the United Kingdom, with an average applicant to place ratio of around 28:1 for its 40–42 training places. The course employs a three-stage selection procedure involving course staff and local clinical psychology supervisors. Written guidelines for selectors aim for maximum fairness in selection. The course previously found that successful applications were predicted by A-level (academic qualification offered by educational institutions in England, Wales, and Northern Ireland to students completing secondary education) points (see 'Methods') and academic and clinical referee ratings (Scior et al., 2007), although selectors may rely particularly on these in the absence of other clear ways of distinguishing among hundreds of applicants with good honours degrees.
The aim of this study was to investigate the role of applicant characteristics, interview ratings, and referee ratings in predicting course performance in three domains: academic, clinical, and research. In doing so, we aimed to inform future selection procedures by identifying the predictive power of information available to selectors.
Completion rates in clinical psychology training are very high, with dropout very much the exception. This study set out to identify whether selection ratings and applicant characteristics predict performance during clinical psychology training. In considering the results, it should be borne in mind that, in view of the high applicant to place ratio (average 28:1), the data presented here are very positively skewed as they only pertain to those successful in gaining a place; other than for A-level results, data variance was relatively small. It was not possible to make any corrections for range restriction, so the actual predictive validity of the selection variables considered is probably higher than the observed relationships suggest: the highly selective nature of the course attenuates observable relationships between selection variables and outcomes, effectively reducing power. More research is needed to address this issue.
The key findings can be summed up as follows: generally, performance on one part of the course was correlated with performance on other parts of the course, with exam results showing statistically significant small to medium correlations with clinical placement concerns, viva outcome, and case report marks. However, against expectations, case reports correlated with academic, not clinical, performance, raising questions about the validity of case reports as indicators of clinical performance (cf. Simpson et al., 2010). From all the information available at selection, school leaving exam grades (A-levels) were the most important predictor of performance during training; as noted, they were also the only data that showed a reasonable range. They predicted marks on all four of the exams independently of other pre-course variables, and were univariately associated with clinical placement problems. This corroborates evidence from medicine where A-levels have been found to predict academic performance many years after graduation (McManus, Smithers, Partridge, Keeling, & Fleming, 2003). While caution has been urged about the use of A-levels in selection, given that they are influenced by social and educational advantage (Scior et al., 2007), in this study A-levels had a clear role in predicting performance. Although there was less variance in degree scores than in A-level scores, degree performance also predicted exam performance independently from A-levels on year 1 but not year 2 exams. University type was predictive of performance on the year 1 research methods exam and the year 2 statistics exam, with students who attended a post-1992 institution or a university outside the United Kingdom performing worse, and Oxbridge students performing better, on the statistics exam.
Demographic factors were also predictive: non-white students performed worse in the year 1 research methods exam and the year 2 statistics exam. Age also independently predicted the statistics exam. Trainees were relatively diverse in terms of age, but only a small proportion were males (15%) and an even smaller proportion (9%) were from BME backgrounds. While the proportion of BME trainees compares fairly well with the 10% of people from BME backgrounds nationally (Office for National Statistics, 2011), as a London-based course it compares poorly with the 34% of the Greater London population (Greater London Authority, 2011). The relationship observed in this sample between ethnicity and performance raises the concern that the underperformance of non-white students seen in medical education (Woolf et al., 2011) and more broadly in higher education (Richardson, 2008) may also be seen in clinical psychology. This warrants further investigation, particularly as the UK Equality Act 2010 places a duty on all public authorities, including universities and the NHS, to monitor admission and progress of students by ethnic group to be able to address inequalities or disadvantage.
The finding that those with more clinical experience did worse in the statistics exam than those with minimal clinical experience is likely to be because the factors that impeded their exam performance also caused them to take longer to gain a training place, and thus they had more time to gain clinical experience.
Retrospective research supervisor ratings correlated with all outcomes and predicted case report marks and three of the four exam marks, suggesting that these may measure global course ability, rather than just research skills. In contrast, contemporary interview ratings and referee ratings were not generally predictive of performance; the exception was a marginal relationship with one of the year 1 exams. The results here were complicated: when we added the references to the regression model, both the academic and clinical reference were negative predictors; this may be a spurious relationship, however. None of the information available at selection predicted case report performance in our multivariate analyses.
The demographics of the trainee cohorts studied were very similar to the national picture of an average female:male ratio of 8.5:1 and a white:BME ratio of 9:1 (Clearing House data for 2002–2008). It is not possible to directly compare the academic qualifications of the present cohorts to the national picture, as national data for the relevant period only record undergraduate results for those without post-graduate qualifications. However, given the high applicant:place ratio, those with first class degrees (40% across the cohorts studied) may be overrepresented (21.7% nationally had a first class degree, but many of the 25.4% of trainees nationally with Masters and PhD qualifications may have also had a first). This suggests that the findings are of relevance to other training providers in the United Kingdom. Overall, in view of our data, trainers and selectors should consider paying attention to A-levels and, to some extent, degree mark and university type in reaching decisions about who is likely to perform well on clinical psychology courses. This may seem a very unwelcome conclusion and raises ethical concerns. The desire to select students who are likely to perform well during training must be weighed carefully against the desire to select a diverse student body and profession. Furthermore, evidence that individuals from BME backgrounds are over-represented in less highly regarded universities (Shiner & Modood, 2002; Turpin & Fensom, 2004) suggests that increased attention to applicants' academic history may run counter to attempts to widen access to the profession. One message does emerge clearly from the findings, though: while references and interviews clearly play a crucial role in selection, they do not appear to predict actual performance during clinical training.
This may well be because they help deselect unsuitable applicants, thus reducing the trainee body to individuals likely to broadly perform well, as suggested by very low drop-out and failure rates. However, those involved in reaching selection decisions may well wish to reconsider what relative importance they pay to the range of information available about applicants seen for interview.
The study findings relate only to one course, albeit the largest in the United Kingdom. Given that selection processes and criteria vary across courses, findings may not generalize to other settings. Furthermore, this was an exploratory study and we performed a large number of analyses; the possibility of Type I error should be borne in mind.
Our analyses relied on quantitative indicators that could be accessed. Many other variables, not least personality factors and life events, may well contribute to performance during training but were not measured here. Furthermore, many of the analyses relied on retrospective ratings, which may be unreliable. Research supervisors generally had fairly extensive contact with trainees under their supervision and made use of the full 5-point scale, indicating that they were able to recall trainees' performance fairly well. However, such ratings suffer from the usual limitations of subjective judgements, and, due to the one-to-one supervisor–trainee relationship, it was not possible to assess inter-rater reliability. Course tutor ratings should be viewed with caution; due to the time lag involved, their reliability is questionable. It may be advisable for courses to collect contemporary ratings of trainee performance beyond academic and placement indicators, and to test their reliability and usefulness in monitoring trainee progress. The significant limitations of the data we had to rely on suggest that the reliability and validity of performance indicators commonly used during clinical psychology training merit careful consideration. Future research of this type should aim to use prospective data and more robust measures.
We want our selection methods to be as fair as possible in choosing among candidates who have all passed two previous stages of selection and who present relatively similar achievements to date. Our hope that interviews, and the applicants' references, would predict performance was disappointed. We presume that the interview process screens out unsuitable candidates, given that drop-out and failure are very much the exception. The range of scores of both references and interview ratings was small for accepted applicants, so it is unclear whether the finding that they did not predict performance is due to inadequate variance in the data, poor predictive power of the judgements which give rise to the ratings (cf. Stanton & Stephens, 2012), range restriction, or perhaps because interviews elicit valuable information which is nevertheless not then summatively evaluated during training.
Dropout and failure rates were very low, with all but two trainees who dropped out early completing their training successfully, even where certain aspects of training had to be repeated due to initial failure. Only 4% failed a case report, 1% a placement, and none failed their thesis viva. Further research is needed to understand how performance on the course relates to practice over the longer term as a clinical psychologist, and whether all those who complete training are indeed fit to practice. In medicine, it is commonly asserted that being a 'good' doctor is not (just) about performance in exams, and that other, harder-to-measure and changeable factors make the difference between good, poor, and average doctors (British Medical Journal, 2002). The same is probably true of clinical psychologists. With so many strong students applying for so few places, it is worth asking whether any selection methods can identify which students will perform best as clinical psychologists after they qualify. Would a lottery system choosing among all students judged to meet entry criteria be fairer to applicants, trainees, and ultimately to service users (cf. Simpson, 1975)? Or are current attempts to develop a national screening test a move in the right direction?