Abstract


Context  The process whereby medical students employ integrated analytic and non-analytic diagnostic strategies is not fully understood. Analysing academic performance data could provide a perspective complementary to that of laboratory experiments when investigating the nature of diagnostic strategy. This study examined the performance data of medical students in an integrated curriculum to determine the relative contributions of biomedical knowledge and clinical pattern recognition to diagnostic strategy.

Methods  Structural equation modelling was used to examine the relationship between biomedical knowledge and clinical cognition (clinical information gathering and interpretation) assessed in Years 1 and 2 of medical school and their relative contributions to diagnostic justification assessed at the beginning of Year 4. Modelling was applied to the academic performance data of 133 medical students who received their MD degrees in 2011 and 2012.

Results  The model satisfactorily fit the data. The correlation between biomedical knowledge and clinical cognition was low–moderate (0.26). The paths between these two constructs and diagnostic justification were moderate and slightly favoured biomedical knowledge (0.47 and 0.40 for biomedical knowledge and clinical cognition, respectively).

Conclusions  The findings suggest that within the first 2 years of medical school, students possessed separate, but complementary, cognitive tools, comprising biomedical knowledge and clinical pattern recognition, which contributed to an integrated diagnostic strategy at the beginning of Year 4. Assessing diagnostic justification, which requires students to make their thinking explicit, may promote the integration of analytic and non-analytic processing into diagnostic strategy.


Introduction


It is generally accepted that accurate medical diagnosis requires the combined, flexible employment of analytic and non-analytic cognitive processes.1–3 Clinicians are thought to rely on past patient encounters to make initial, similarity-based diagnoses and to employ reasoning to test and refine their pattern recognition.2,4 Reasoning from biomedical knowledge is also believed to drive the diagnosis of unfamiliar or atypical cases.2,4 The process whereby medical students develop this cognitive expertise is not fully understood; however, an integrated curriculum that exposes students to actual and simulated patient encounters during the traditionally pre-clinical first 2 years of medical education is believed to facilitate diagnostic skill acquisition.1,3–6 The integration of basic science instruction and engagement in patient cases may help students acquire actionable biomedical knowledge that links pathophysiology to the symptom constellations observed clinically and drives the reasoning component of diagnosis.7 Patient encounters may also help students build a repository of symptom constellations, which will enable similarity-based processing.6

Relatively recent research on diagnostic strategy has shown that instructions emphasising a combination of analytic and non-analytic processing are effective in improving diagnostic accuracy.4,8,9 However, these studies feature ‘absolute novices’ (psychology undergraduates with no background or career interest in biomedical science) and/or relatively constrained, imagery-based diagnostic tasks that lend themselves to similarity-based diagnostic strategies. Investigating the underlying structure of academic performance data for medical students in an integrated curriculum would provide an important additional angle from which to view diagnostic strategy. The relationship between students’ biomedical knowledge (captured by basic science examination scores) and clinical cognition (captured by the information gathering and interpretation components of standardised patient [SP] examination scores) may illuminate the degree to which students naturalistically develop separate but complementary tools for diagnosis. The relative contributions of students’ biomedical knowledge and clinical cognition to diagnostic justification may illuminate the degree to which scientific and patient-centred information support an integrated diagnostic strategy at the early stages of doctors’ professional development.

The purpose of this study was to investigate, using structural equation modelling (SEM), the relationship between biomedical knowledge and clinical cognition and their combined contribution to diagnostic justification. Structural equation modelling, a statistical method commonly used in the study of human abilities10–12 and medical education,13–16 provides a way to investigate alternative conceptions about the underlying structure of performance data. SEM requires specifying a priori the theoretical constructs that account for the interrelations of a set of observed performance indicators, such as test scores. For example, the construct of ‘general clinical competence’ might be specified to account for a positive correlation among results on final oral examinations in multiple clerkships.15 The adequacy of one’s a priori specifications is evaluated using a variety of metrics that reflect the fit of the model to the actual performance data. This approach complements previous research4,8,9 by exploring the nature of diagnostic strategy under the instructional conditions already present in many medical schools, which tests the applicability of findings from laboratory experiments.17
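To make the modelling approach concrete, the sketch below expresses the three-construct specification described above in lavaan-style syntax using the Python package semopy. This is an illustration only: the study itself used LISREL 8.80 (see Methods), and the indicator names and data file are hypothetical. Note also that the single-indicator Diagnostic Justification factor would in practice need its loading and error variance fixed from a reliability estimate, as described in the Methods section.

```python
# Illustrative SEM specification in semopy (the study used LISREL 8.80).
# Indicator names and the data file are hypothetical.
import pandas as pd
from semopy import Model, calc_stats

MODEL_DESC = """
BK =~ y1_units + y2_units + step1
CC =~ y1_sp + y2_sp
DXJ =~ dxj_essay
DXJ ~ BK + CC
"""
# BK = Biomedical Knowledge, CC = Clinical Cognition,
# DXJ = Diagnostic Justification; '=~' defines the measurement relations
# and '~' the structural paths from both constructs to DXJ.

data = pd.read_csv("performance_indicators.csv")  # hypothetical file
model = Model(MODEL_DESC)
model.fit(data)
print(model.inspect())    # loadings, path coefficients, error variances
print(calc_stats(model))  # chi-squared, RMSEA, CFI, SRMR and other indices
```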

In the SEM we propose, strong correspondence between biomedical knowledge and clinical cognition would suggest that students’ information gathering and interpretation during SP examinations are driven by their biomedical understanding of the chief complaint; scientific knowledge drives the line of questioning that students use to rule hypotheses in or out. Weak correspondence between these constructs would suggest that information gathering and interpretation are driven by conditions specific to patient encounters that are not captured by basic science examinations; in this context, the amount of scientific knowledge plays a small role relative to other, unmeasured factors, such as the store of symptom constellations acquired from previous experience with similar cases.

Because diagnosis requires the interplay of analytic and non-analytic processes, one might expect that both scientific and clinical understanding will contribute to diagnostic justification, but that their relative contributions will differ, depending on the nature of the case (e.g. typical or atypical) and the amount of relevant clinical exposure in the sample studied. Validated measurement of diagnostic justification18 requires students to conclude an SP encounter by generating a short written description of the thought process used to achieve a final diagnosis. Students must explain how the pertinent positive and negative findings they identified influenced their thinking as they moved from an initial set of hypotheses to a final diagnosis. Thus, the score on a diagnostic justification essay reflects the quality of the thought process linking the differential diagnosis, final diagnosis and key findings. A diagnostic justification essay requires students to articulate their thinking, which favours analytic modes of thought.19 However, scoring is agnostic with regard to whether the reasoning process described moved forward from initial pattern recognition or backward from a biomedical understanding of the chief complaint.20 A stronger contribution of biomedical knowledge to diagnostic justification would suggest that diagnostic strategy is driven more by the student’s biomedical knowledge than by clinical pattern recognition. Conversely, a stronger contribution of clinical cognition would suggest that the student’s pattern recognition, acquired through clinical exposure, drives the generation of initial hypotheses, identification of key findings and selection of a diagnosis.

Methods


Data on the academic performance of 133 students (68 male, 17 under-represented minorities), comprising the graduating classes of 2011 and 2012 at Southern Illinois University School of Medicine, Springfield, Illinois, USA, were analysed. This analysis was determined to be exempt from formal requirements for approval by the school’s Committee for Research Involving Human Subjects. The medical school from which the sample was taken employs an integrated curriculum in which students engage in SP encounters as part of their instruction and formal assessment during the first 2 years of training. The first 2 years of instruction also employ a hybrid, problem-based learning curriculum in which traditional lectures are interspersed with small-group instruction driven by patient cases. At the beginning of Year 4, following clerkships, students must pass a senior clinical competency examination in order to graduate. The examination comprises 14 SP cases, nine of which require a written diagnostic justification following the patient encounter. The graduating classes of 2011 and 2012 were the first two classes to generate diagnostic justification essays as part of this examination.18

Consistent with best practice in SEM,21 three measures were selected as performance indicators of students’ biomedical knowledge. The first two indicators comprised the average unit scores in Years 1 and 2 of the curriculum, respectively. The first year comprised three organ-based units: (i) Cardiovascular/Respiratory/Renal; (ii) Sensorimotor Systems and Behaviour, and (iii) Endocrine/Reproduction/Gastrointestinal. These focused on healthy physiological function and lasted 13–20 weeks each. The score for each unit consisted of the unit mid-term and final multiple-choice question (MCQ) examination scores plus scores on laboratory examinations in gross anatomy, neuroanatomy and histology. More weight was given to the MCQ examinations (72%) than to the laboratory examinations in the calculation of the Year 1 unit score. Year 2 comprised four units: (i) Haematology, Immunology and Infectious Disease; (ii) Cardiovascular/Respiratory/Renal; (iii) Neuromuscular/Behaviour, and (iv) Endocrine/Reproduction/Gastrointestinal. These focused on pathophysiology and lasted 7–10 weeks each. The score for each Year 2 unit consisted solely of the final MCQ examination score. The third indicator of biomedical knowledge selected was the US Medical Licensing Examination (USMLE) Step 1 examination score. Among the available data, these scores were considered the most widely recognised indicators of biomedical knowledge developed by a medical school curriculum.
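As a worked illustration of the Year 1 unit score arithmetic, the sketch below applies the stated 72% MCQ weighting. The equal averaging within the MCQ and laboratory components, the 28% laboratory weight (inferred as the remainder) and the score values are assumptions; the text specifies only the aggregate weighting.

```python
# Hypothetical reconstruction of a Year 1 unit score: MCQ examinations
# carry 72% of the weight, laboratory examinations the remaining 28%.
def year1_unit_score(midterm_mcq, final_mcq, lab_scores):
    """All inputs are percentage scores; lab_scores holds the gross
    anatomy, neuroanatomy and histology examination scores."""
    mcq_mean = (midterm_mcq + final_mcq) / 2      # assumed equal weights
    lab_mean = sum(lab_scores) / len(lab_scores)  # assumed equal weights
    return 0.72 * mcq_mean + 0.28 * lab_mean

print(year1_unit_score(84, 78, [80, 75, 82]))  # hypothetical scores -> 80.44
```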

Two measures were used as performance indicators of students’ clinical cognition: the scores on SP examinations conducted in Years 1 and 2, respectively. For these measures, a composite score was made by averaging the percentage of correct answers on four checklists used to assess each SP encounter. These four checklists assessed the quality of history taking and physical examination, findings, differential diagnosis, and patient satisfaction, respectively. Patient satisfaction was included as it is influenced heavily by the SP’s perception of communication clarity, which reflects student mastery of information gathering. The Year 1 composite comprised six cases and Year 2 comprised 12. All cases involved a chief complaint and required the student to conduct a focused interview and diagnose the underlying condition. Although several additional aspects of clinical competence were assessed in the examinations (e.g. selection of laboratory tests, patient management), composites generated from the four aspects selected were considered to best reflect the key components of clinical information gathering and interpretation and to be the most direct analogue of the USMLE Step 2 Clinical Skill examination.22
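A minimal sketch of this composite follows, assuming the four checklist percentages are averaged within each encounter and the encounter scores are then averaged across cases; the text does not specify the order of aggregation, and the data layout and values are hypothetical.

```python
# Clinical cognition composite: average of four checklist percentages per
# SP encounter, averaged across encounters (6 cases in Year 1, 12 in Year 2).
from statistics import mean

CHECKLISTS = ("history_physical", "findings", "differential", "satisfaction")

def sp_composite(encounters):
    """encounters: list of dicts of percent-correct checklist scores."""
    return mean(mean(e[k] for k in CHECKLISTS) for e in encounters)

year1_cases = [  # hypothetical scores for two of the six Year 1 cases
    {"history_physical": 72, "findings": 65, "differential": 58, "satisfaction": 80},
    {"history_physical": 81, "findings": 70, "differential": 66, "satisfaction": 85},
]
print(sp_composite(year1_cases))
```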

Academic performance data from Year 3, the clerkship year, were excluded because: (i) this research investigates diagnostic strategy acquired as a result of integrating basic science instruction with clinical exposure, which occurs primarily in the first 2 years of medical school; (ii) the psychometric qualities and broad generalisability of some clerkship performance data are suspect,23 and (iii) the use of clerkship measures would introduce additional types of assessment method that are confounded by academic year (e.g. basic science knowledge would be assessed using supervisor ratings in Year 3 and MCQ examination scores in Years 1 and 2). Examining the contributions of Year 1 and Year 2 academic performance to Year 4 academic performance represents the most controlled analysis that could be conducted in the absence of laboratory experimentation.

One measure was used as a performance indicator of diagnostic justification; this was the diagnostic justification essay administered as part of the senior clinical competency examination.18 The nine cases for which a diagnostic justification essay was required featured common chief complaints with typical symptom presentations. The diagnostic justification essays were graded by two raters and, as is consistent with other analyses of similar data,18,24 the average of the scores of both raters across all nine cases was used. Inter-rater reliability among pairs of raters for the diagnostic reasoning essay has been found to be acceptable in related research (intraclass correlation coefficients for pairs of raters were 0.75 and 0.64 for the graduating classes of 2011 and 2012, respectively).18 Because just one measure of diagnostic justification was used, but a hypothetical construct was desired, the loading of the diagnostic justification essay on its respective Diagnostic Justification factor was determined using an estimate of its internal consistency reliability.25
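A standard way to fix these parameters (following Bollen25) is shown below; the exact parameterisation used in the study is not reported, so this is a sketch of the conventional approach, where ρ is the reliability estimate and σ the observed standard deviation of the essay score.

```latex
% Conventional single-indicator parameterisation from a reliability
% estimate \rho and the indicator's standard deviation \sigma:
\[
  \lambda_{\mathrm{DXJ}} = \sqrt{\rho}\,\sigma ,
  \qquad
  \theta_{\mathrm{DXJ}} = (1 - \rho)\,\sigma^{2}
\]
```

Here λ is the fixed loading of the essay score on the Diagnostic Justification factor and θ is its fixed error variance.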

The descriptive statistics (means, standard deviations [SDs]), reliabilities, frequency distributions and inter-correlations of the variables were examined using IBM SPSS Statistics 19 (IBM Corp., Armonk, NY, USA). The variable covariance matrix was then subjected to SEM using LISREL 8.80 with variable inter-correlations and SDs as input. A model was specified to test the hypothesis that biomedical knowledge and clinical cognition developed in Years 1 and 2 of medical school make separable but significant contributions to diagnostic justification assessed at the beginning of Year 4 of medical school. The goodness-of-fit of this model was assessed using the following indices: a non-significant chi-squared test (p > 0.05); the root mean square error of approximation (RMSEA < 0.06); the comparative fit index (CFI > 0.95), and the standardised root mean square residual (SRMR ≤ 0.08).26
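Because the SEM input was the covariance matrix reconstructed from inter-correlations and SDs, the conversion is simply Σ = D R D, with D the diagonal matrix of SDs. The sketch below illustrates this using the correlations later reported in Table 1; the SDs are hypothetical, as they are not reported here.

```python
# Rebuilding the covariance matrix submitted to the SEM software from
# inter-correlations and standard deviations: Sigma = D @ R @ D.
import numpy as np

# Correlations from Table 1 (order: Y1 BK, Y1 CC, Y2 BK, Y2 CC, Step 1,
# DXJ); the table's diagonal holds reliabilities, so 1s are used here.
R = np.array([
    [1.000, 0.302, 0.800, 0.256, 0.732, 0.403],
    [0.302, 1.000, 0.228, 0.604, 0.161, 0.335],
    [0.800, 0.228, 1.000, 0.146, 0.782, 0.508],
    [0.256, 0.604, 0.146, 1.000, 0.098, 0.394],
    [0.732, 0.161, 0.782, 0.098, 1.000, 0.414],
    [0.403, 0.335, 0.508, 0.394, 0.414, 1.000],
])
sds = np.array([5.0, 6.0, 5.5, 6.5, 20.0, 0.8])  # hypothetical SDs

D = np.diag(sds)
Sigma = D @ R @ D
print(Sigma)
```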

Sometimes, fit indices can be quite good even when a model does not describe a particular dataset well.27 This can occur when there is little common variance among the variables, but the variance that is present is well accounted for by the model parameters. Such a model has limited practical relevance because much remains unknown about what the input variables actually measure. For this reason, the model R2 for each performance indicator was also considered an aspect of fit.
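For reference, in a model of this kind (one loading per indicator) the R² of an indicator is the proportion of its variance reproduced by its factor, which in the standardised solution reduces to the squared standardised loading:

```latex
% Variance in indicator i accounted for by its factor, where \lambda_i is
% the loading, \phi the factor variance and \theta_i the error variance:
\[
  R_i^{2} = \frac{\lambda_i^{2}\,\phi}{\lambda_i^{2}\,\phi + \theta_i}
\]
```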

Results


Table 1 shows the inter-correlations among the six performance indicators subjected to SEM. Internal consistency reliabilities are shown on the diagonal. Where available, the reliabilities of the performance indicators were above 0.70, with one exception (the Year 1 SP examinations). The inter-correlations were all positive, nearly all statistically significant, and some were substantial. The pattern of generally higher correlations among similar types of performance indicator (e.g. those of biomedical knowledge) and lower correlations among different types of performance indicator was expected.21 Table 1 also reveals that despite the lower reliability of the SP examinations in Year 1, their correlation with the basic science examinations was higher in Year 1 than in Year 2 (0.30 and 0.15, respectively). The correlation between Year 2 SP examination scores and USMLE Step 1, taken at the end of Year 2, was 0.10.

Table 1. Inter-correlations among the academic performance indicators of biomedical knowledge (BK), clinical cognition (CC) and diagnostic justification (DXJ)

             Year 1 BK  Year 1 CC  Year 2 BK  Year 2 CC  Step 1  DXJ
Year 1 BK    0.880
Year 1 CC    0.302*     0.570
Year 2 BK    0.800*     0.228*     0.910
Year 2 CC    0.256*     0.604*     0.146      0.720
Step 1       0.732*     0.161      0.782*     0.098      –
DXJ          0.403*     0.335*     0.508*     0.394*     0.414*  0.810

* p < 0.02. Internal consistency reliabilities appear on the diagonal (no estimate was available for Step 1).

Figure 1 shows the SEM results. All factor loadings were significant at the 0.05 level. The correlation between Clinical Cognition and Biomedical Knowledge was 0.26, suggesting that these constructs were distinct, but related. Both constructs made moderate and statistically significant contributions to Diagnostic Justification (0.40 and 0.47 for Clinical Cognition and Biomedical Knowledge, respectively). The chi-squared test was significant (χ² [d.f. = 7] = 17.57, p = 0.014), indicating that this model was not an exact fit to the data. However, the ratio of the chi-squared value to degrees of freedom was 2.51, which falls below the upper limit of 3 recommended by Kline.21 The RMSEA was higher than desired (0.10, with a 90% confidence interval of 0.04–0.17). However, both the CFI (0.97) and the SRMR (0.04) were within the recommended ranges, suggesting that the lack of fit may have stemmed from multiple minor discrepancies, rather than from a large error in the model specification. The CFI value indicates that the specified model fit the data 97% better than an independence model in which no relationships among the performance indicators are specified. The SRMR value indicates that the model-predicted correlations among the performance indicators deviated from their actual correlations, on average, by 0.04. The R²-values of the performance indicators were all > 0.50, ranging from 0.52 (Year 1 Clinical Cognition) to 0.88 (Year 2 Biomedical Knowledge) with a median of 0.70, suggesting that, overall, the model accounted for meaningful variance among the variables included. Examination of the possibilities for improving model fit indicated no sound theoretical basis for making modifications.
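The reported fit arithmetic can be checked from the chi-squared statistic alone. The sketch below uses the standard RMSEA point estimate; the small discrepancy from the reported 0.10 plausibly reflects differences in rounding or in the software’s exact formula.

```python
# Reproducing the fit arithmetic from the reported chi-squared statistic.
import math

chi2, df, n = 17.57, 7, 133  # chi-squared, degrees of freedom, sample size

print(round(chi2 / df, 2))  # 2.51, below the recommended limit of 3

# Standard RMSEA point estimate: sqrt(max(chi2 - df, 0) / (df * (n - 1)))
rmsea = math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
print(round(rmsea, 3))  # 0.107; the paper reports 0.10
```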

[Figure 1: path diagram not reproduced]

Figure 1. Structural equation model of the relationships among biomedical knowledge, clinical cognition and diagnostic justification. Model fit: χ² (d.f. = 7) = 17.57 (p = 0.014); comparative fit index = 0.97; root mean square error of approximation (RMSEA) = 0.10 (90% confidence interval 0.04–0.17); standardised root mean square residual (SRMR) = 0.04

Discussion


The SEM results suggest that within the first 2 years of medical school, students came to possess separate but complementary cognitive tools that contributed to the development of an integrated diagnostic strategy at the beginning of Year 4. The low correlation between Biomedical Knowledge and Clinical Cognition suggests that biomedical knowledge was not a primary driver of students’ information gathering and interpretation during SP examinations. Rather, some unmeasured factor, possibly pattern recognition enabled by previous clinical exposure, served to drive this process. Biomedical Knowledge and Clinical Cognition each had a moderate relationship with Diagnostic Justification, with a slightly stronger contribution made by Biomedical Knowledge. Evidently, more scientific knowledge and more effective clinical understanding in the first 2 years of medical school were both associated with higher-quality explanations of diagnostic thinking approximately 1 year later. The slightly greater contribution of Biomedical Knowledge to Diagnostic Justification should be expected in medical students, given that a sample of clinical encounters sufficient to enable pattern recognition as a dominant diagnostic strategy even in common, typical cases is likely to take a decade or more to develop.28

The curriculum in which this study was conducted places heavy emphasis on the development of analytic reasoning skills to focus information gathering and interpretation during clinical encounters. It features patient cases as opportunities for the development of such skills, but does not explicitly encourage using familiarity to guide diagnostic strategy, as related laboratory experimentation has done.4,8,9 It would be inconsistent with empirical findings29 and adult learning theory to assume that students never learn implicitly or become attuned to the implications of past cases for the diagnosis of future patients. The present results suggest that it is possible for a store of symptom constellations to be acquired and applied to diagnostic strategy despite singular emphasis on developing reasoning skill.

The present results also suggest that including a measure of diagnostic justification in SP examinations may capture student progress more comprehensively than assessment of the patient encounter alone because the former method captures both the scientific and clinical elements of students’ diagnostic strategy. We consider the ability to justify diagnoses in this way to lie at the heart of diagnostic strategy and indeed at the heart of the abilities that medical school is designed to foster. Requiring students to explain their diagnostic strategy may also promote the development of integrated diagnostic strategies by stimulating reflection on how proximal, patient-centred information in patient cases represents distal, pathophysiological states.3,5,30

Although no comparison group enrolled in a traditional curriculum was included in the present analysis, the relationship between basic science examination scores and SP examination scores was moderate31 (0.30) in Year 1, small (0.15) in Year 2 and smaller (0.10) at the end of Year 2, which tentatively suggests a curricular impact that should be explored in future study. One might expect cognitive assessments administered at different times to be more weakly correlated than assessments administered at the same time.32 However, in this analysis the weakening association occurred between assessments administered at the same time (e.g. unit scores and SP examination scores in Year 1 versus those in Year 2). This decreases the likelihood that the timing of assessment explains the present findings. The relative reliabilities of the assessments also fail to provide an explanation; the Year 2 SP examination score had higher reliability than the Year 1 score. It is unknown why the relationship between these components of academic performance was never strong, although students who have chosen medicine as a career might be expected to have acquired some relevant clinical exposure prior to matriculation. This expectation is consistent with adult learning theory and is supported by laboratory research, which showed that college undergraduates without a background in biomedical science could rapidly develop a store of symptom constellations that enhanced diagnostic accuracy.8,9
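One way to see that differing reliabilities cannot account for the weakening correlation is Spearman’s correction for attenuation, r / √(r_xx r_yy). The sketch below applies it to the Table 1 values; this is purely illustrative, as the paper does not report disattenuated correlations.

```python
# Spearman's correction for attenuation applied to Table 1 values.
import math

def disattenuate(r, rxx, ryy):
    """Estimate the correlation corrected for measurement error."""
    return r / math.sqrt(rxx * ryy)

# Year 1: basic science units vs SP examinations
print(round(disattenuate(0.302, 0.880, 0.570), 2))  # ~0.43
# Year 2: basic science units vs SP examinations
print(round(disattenuate(0.146, 0.910, 0.720), 2))  # ~0.18
```

Even corrected for measurement error, the Year 2 correlation (≈ 0.18) remains well below the Year 1 value (≈ 0.43).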

An important limitation of the present study is that systematic differences among the performance assessment methods used to represent each factor (i.e. MCQ tests versus checklists versus essays) cannot be ruled out completely as an explanation for the present findings.33 Future research should involve SEMs that use multiple types of performance indicator for each latent construct. For example, objective tests and video analysis of SP encounters with stimulated recall are additional methods of assessing diagnostic competence (as well as diagnostic justification) that would resolve uncertainty regarding the contribution of method to the present findings.

Also important is the fact that the present analysis does not constitute a mechanistic description of diagnostic strategy, which requires a comprehensive causal analysis.34,35 Here, the blended application of scientific and patient-centred information processing has been inferred from the interrelations among performance measures, just as in other research diagnostic strategy has been inferred from patterns of accuracy produced by differing experimental conditions.2,4,8,9 Moreover, the lack of a control group and of more longitudinal trends in the data prevents the conclusion that the impact of an integrated curriculum accounts for the results found.

At present, it appears safe to conclude that the diagnostic strategy of medical students in an integrated curriculum involves a combination of biomedical knowledge and pattern recognition, with a slightly greater contribution made by biomedical knowledge. Although it cannot be concluded that the integrated curriculum causally influenced this practice, these findings support the generalisability of results from laboratory experiments showing that students rapidly learn to use a combination of analytic and non-analytic cognitive processes to achieve a diagnosis.

Contributors:  ATC led the data analyses and write-up of the present manuscript. RGW acquired the data, provided input on analyses, and edited drafts of the manuscript. DLK and NKR provided input on the analyses and edited drafts of the manuscript. All authors approved the final manuscript for publication.

Acknowledgements:  none.

Funding:  none.

Conflicts of interest:  none.

Ethical approval:  this study was determined to be exempt from requirements for ethics approval by the Springfield Committee for Research Involving Human Subjects.

References

1. Eva KW. What every teacher needs to know about clinical reasoning. Med Educ 2005;39:98–106.
2. Kulatunga-Moruzi C, Brooks L, Norman G. Coordination of analytic and similarity-based processing strategies and expertise in dermatological diagnosis. Teach Learn Med 2001;13:110–6.
3. Schmidt HG, Norman GR, Boshuizen HP. A cognitive perspective on medical expertise: theory and implications. Acad Med 1990;65:611–21.
4. Kulatunga-Moruzi C, Brooks LR, Norman GR. Teaching post-training: influencing diagnostic strategy with instructions at test. J Exp Psychol Appl 2011;17 (3):195–209.
5. Boshuizen HA, Schmidt HG. On the role of biomedical knowledge in clinical reasoning by experts, intermediates, and novices. Cogn Sci 1992;16:153–84.
6. Norman GR, Schmidt HG. The psychological basis of problem-based learning: a review of the evidence. Acad Med 1992;67:557–65.
7. Barrows HS. A taxonomy of problem-based learning methods. Med Educ 1986;20:481–6.
8. Ark T, Brooks LR, Eva KW. Giving learners the best of both worlds: do clinical teachers need to guard against teaching pattern recognition? Acad Med 2006;81:405–9.
9. Ark TK, Brooks LR, Eva KW. The benefits of flexibility: the pedagogical value of instructions to adopt multifaceted diagnostic reasoning strategies. Med Educ 2007;41:281–7.
10. Gustafsson J. A unifying model for the structure of intellectual abilities. Intelligence 1984;8:179–203.
11. Cianciolo AT, Grigorenko EL, Jarvin L, Gil G, Drebot ME, Sternberg RJ. Practical intelligence and tacit knowledge: advancements in the measurement of developing expertise. Learn Indiv Diff 2006;16 (3):235–53.
12. Engle RW, Tuholski SW, Laughlin JE, Conway ARA. Working memory, short-term memory and general fluid intelligence: a latent variable approach. J Exp Psychol Gen 1999;128 (3):309–31.
13. Violato C, Hecker KG. How to use structural equation modelling in medical education research: a brief guide. Teach Learn Med 2007;19 (4):362–71.
14. Donnon T, Violato C. Medical students’ clinical reasoning skills as a function of basic science achievement and clinical competency measures: a structural equation model. Acad Med 2006;10 (Suppl):120–3.
15. Wimmers PF, Splinter TAW, Hancock GR, Schmidt HG. Clinical competence: general ability or case-specific? Adv Health Sci Educ 2007;12:299–314.
16. de Bruin ABH, Schmidt HG, Rikers RMJP. The role of basic science knowledge and clinical knowledge in diagnostic reasoning: a structural equation modelling approach. Acad Med 2005;80:765–73.
17. Kirlik A. Brunswikian theory and method as a foundation for simulation-based research on clinical judgement. Simul Healthc 2010;5 (5):255–9.
18. Williams RG, Klamen DL. Examining the diagnostic justification abilities of fourth-year medical students. Acad Med 2012;87 (8):1008–14.
19. Wagner RK. Tacit knowledge in everyday intelligent behaviour. J Pers Soc Psychol 1987;52 (6):1236–47.
20. Patel VL, Groen GJ. Knowledge-based solution strategies in medical reasoning. Cogn Sci 1986;10:91–116.
21. Kline RB. Principles and Practice of Structural Equation Modeling. New York, NY: Guilford Press 1998.
22. Berg K, Winward M, Clauser BE, Veloski JA, Berg D, Dillon GF, Veloski JJ. The relationship between performance on a medical school’s clinical skills assessment and USMLE Step 2 CS. Acad Med 2008;10 (Suppl):37–40.
23. Alexander EK, Osman NY, Walling JL, Mitchell VG. Variation and imprecision of clerkship grading in US medical schools. Acad Med 2012;87:1070–6.
24. Clauser BE, Harik P, Margolis MJ, Mee J, Swygert K, Rebbecchi T. The generalisability of documentation scores from the USMLE Step 2 Clinical Skills examination. Acad Med 2008;10 (Suppl):41–4.
25. Bollen KA. Structural Equations with Latent Variables. New York, NY: John Wiley & Sons 1989.
26. Hu L, Bentler PM. Cut-off criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Model 1999;6 (1):1–55.
27. Tomarken AJ, Waller NG. Potential problems with ‘well fitting’ models. J Abnorm Psychol 2003;112 (4):578–98.
28. Ericsson KA, Krampe RT, Tesch-Römer C. The role of deliberate practice in the acquisition of expert performance. Psychol Rev 1993;100 (3):363–406.
29. Durning SJ, LaRochelle J, Pangaro L, Artino AR Jr, Boulet J, van der Vleuten C, Hemmer P, Denton D, Schuwirth L. Does the authenticity of pre-clinical teaching format affect subsequent clinical clerkship outcomes? A prospective randomised crossover trial. Teach Learn Med 2012;24 (2):177–82.
30. Charlin B, Lubarsky S, Millette B, Crevier F, Audetat M-C, Charbonneau A, Fon NC, Hoff L, Bourdy C. Clinical reasoning processes: unravelling complexity through graphical representation. Med Educ 2012;46:454–63.
31. Cohen J. Statistical Power Analysis for the Behavioral Sciences, 2nd edn. Hillsdale, NJ: Lawrence Erlbaum Associates 1988.
32. Humphreys LG. Investigation of the simplex. Psychometrika 1960;25:313–23.
33. Forsythe GB, McGaghie WC, Friedman CP. Construct validity of medical clinical competence measures: a multitrait–multimethod matrix study using confirmatory factor analysis. Am Educ Res J 1986;23 (2):315–36.
34. Borsboom D, Mellenbergh GJ, van Heerden J. The concept of validity. Psychol Rev 2004;111 (4):1061–71.
35. Colliver JA, Conlee MJ, Verhulst SJ. From test validity to construct validity… and back? Med Educ 2012;46:366–71.