Evolution of Rheumatology Training: Is It About the Journey or the Destination?

Authors

Brian F. Mandell, MD, PhD, FACR, MACP

Address correspondence to Brian F. Mandell, MD, PhD, FACR, MACP, Cleveland Clinic Lerner College of Medicine, Department of Rheumatic and Immunologic Disease, 9500 Euclid Avenue NA1-10, Cleveland, OH 44195. E-mail: mandelb@ccf.org.

You've got to be careful if you don't know where you're going 'cause you might not get there.
Yogi Berra

Back in the day, it was only partially tongue in cheek that postgraduate medical education was characterized as a “see one, do one, teach one” experience. The “one” was for dramatic effect, but the absence of evaluation prior to moving on to the next step was real, and the principle of moving along a progression of responsibility was (and is) educationally sound. It was taken for granted that the experience of being a rheumatology fellow, with lots of time spent in the laboratory, hospital, and clinic in the presence of an academic clinical faculty, would result in the graduation of proficient rheumatologists. There seemed to be a tacit assumption that there wasn't that much unique clinical material to learn, and fellows in most “academic” programs spent virtually all of their second year in the laboratory. Passing the certification examination at the completion of 2 years of training was accepted as a guarantee of clinical competency. As academic departments became saturated with research faculty, and the value of clinical rheumatologists in the community became increasingly appreciated, more graduating rheumatology fellows pursued pure clinical careers. Yet, training programs at leading academic medical centers continued to offer essentially the same research-heavy training experience. Rheumatology was evolving into a clinical specialty able to incorporate trial evidence (more than just for nonsteroidal antiinflammatory drugs) into practice, but where was the evidence that our fellows were being optimally trained to practice the breadth of clinical rheumatology? The destination was changing, but for years the journey stayed the same.

For a fellow, then, the journey was fairly clear: put in your time and pass an examination at the end of training. If you wanted a faculty position, the time spent in the laboratory mattered, and an extra year of laboratory experience was of value. Junior faculty strong on energy and confidence, but weak on experience, often supervised and taught the fellows. Relatively little time or energy was spent (or required) on developing the structure of the training program or providing formal evaluations with linked feedback to the fellows. Program directors were expected to spend time teaching either in the clinic or in the laboratory, and high clinical productivity from teaching faculty was not a priority at most institutions. Our fellows were bright; we figured they would learn what they needed to know.

Then, for a confluence of reasons, increased oversight and regulation of postgraduate medical training programs became a national mission. The Accreditation Council for Graduate Medical Education (ACGME) began to exert increased control over the structure of programs in internal medicine and its specialties. New metrics for program certification were enforced through regularly scheduled site visits and program reviews. The stick was a shorter certification cycle and the threat of probation; the carrot was a longer interval between formal program reviews (along with suggestions for program improvement, including mandates for appropriate institutional support for education). While successful completion of subspecialty certification examinations by fellows was monitored, the main emphasis of program certification was on the program structure per se, not on the trainees. Program review metrics included the number of clinic sessions, journal clubs, dedicated teaching sessions, and faculty publications. In an effort to ensure that residency and fellowship programs provided authentic scholarly and clinical training experiences with appropriate teaching, and were not simply sources of cheap clinical and laboratory labor, the focus of reviews was on program infrastructure. It was assumed that if the program structure met the metrics, the fellows would be competent.

This enhanced program scrutiny was a well-intentioned and valuable effort that ultimately led to significant improvement in many programs, although it also had unintended consequences. Additional administrative support became necessary, which cost more money. Program directors needed to devote many hours per week to managing and buffing their program's documentation, as opposed to actually teaching their fellows. In fact, there was still relatively little focus on the fellows themselves; it was all about the program and what educational opportunities were offered. Relatively little formalized effort was devoted to determining what the fellows actually learned and how well they could function as consultant rheumatologists by the time they graduated.

More recently, the spotlight of oversight has appropriately swung toward the trainees and their acquisition of professional competence. It is ironic and appropriate that the current educational lexicon describing trainee progression includes terms of advancement such as “learner/novice, performer/manager, and master,” channeling the older paradigm of “see, do, and teach.” The core of this concept was correct back in the day, and it is correct now. However, a critical focus of current oversight is to guarantee that all trainees demonstrably fulfill this progression before being certified as proficient clinical rheumatologists. And therein lie the challenges: how do we guarantee appropriate and sufficient opportunities for observed demonstration and assessment? And, most critically, how do we define what we mean by clinical proficiency [1]? How do we define the destination?

While I do not believe we should rigidly define in numerical terms the requisite clinical experience our fellows must accrue before sitting for the certification examination, “seeing one …” and then reading is not ideal. Rheumatologists treat complex chronic multisystem diseases. Although we all continue to learn about the myriad patient responses to these diseases after graduation, by the time they graduate our trainees need to have accrued enough experience managing these patients to demonstrate their management and problem-solving skills to the satisfaction of multiple faculty, not merely to have received satisfactory evaluations from faculty who witnessed only portions of their patient encounters. Beginning years ago with the leadership of Dennis Boulware and Walter Barr within the American College of Rheumatology (ACR), and continuing now with the efforts of the Carolinas Fellows Collaborative in this issue of Arthritis Care & Research [2], we have made steady progress toward better describing what a successful educational journey looks like. But we still need to define the destination more clearly and implement more robust evaluation tools to ensure that our fellows' steps along the journey are appropriately directed and the destination is reached.

Criscione-Schreiber et al [2] describe their exemplary interprogram cooperation in creating competency-based goals with linked evaluations as a roadmap for the journey that is fellowship training. They accomplished this by reflecting on their programs' successful learning activities and then, as a group, generating and vetting goals and objectives for each activity and matching them to the core competencies put forth by the ACGME. They also provide a temporal dimension, with behavioral expectations for the progression of skills as fellows advance through their training: “read about, discuss, demonstrate with observation, demonstrate independently, teach…” (in other words: “see … , do … , teach …”). Many of their learning activities seem fairly standard, yet they are tailored to the rich offerings of these major medical centers (outpatient clinics, inpatient consultation, lupus clinic, scleroderma clinic, pediatric rheumatology clinic, electromyography, podiatry, etc.). Their goals seem reasonable and achievable. But I wonder whether the activities, and the journey itself, would look different if, instead of starting with existing activities, the authors had started with a rigorously defined destination, the proficient clinical rheumatologist, and worked backward to describe the goals and objectives for each step of the journey (learning activity) that might be used to reach that destination. Might there have been more core rotations (with specific learning objectives) outside of the traditional department of rheumatology? The authors do not yet provide evidence that their linked goals and evaluations improve understanding of their fellows' strengths and weaknesses and thereby facilitate the design of personalized learning plans. I hope this information will be forthcoming, as it will help us all.

Implicit in creating training programs that graduate proficient clinical rheumatologists is the assumption that we have the ability and the tools to accurately evaluate the desired competencies (knowledge, skills, and attitudes) that permit each graduating fellow to independently provide high-quality, cost-effective, and compassionate clinical care. But many programs still rely on traditional grading-style evaluations (rated 1–9) for fellows at the completion of each clinical rotation, an approach that I find particularly unhelpful when trying to provide granular, constructive feedback to trainees, or even to determine whether they are making appropriate progress toward proficiency.

We have a way to go in uniformly implementing useful evaluation tools. A summative evaluation at the conclusion of training may be sufficient (but is it accurate?) to meet the requirement to declare a fellow “competent.” But during training, more detailed evaluations are needed. These formative evaluations are of value only if they provide feedback to trainees (and faculty) on how best to tailor the ongoing educational experience for the individual fellow in an effort to ensure the desired level of proficiency. Unfortunately, we do not yet have enough practical, concrete anchors to define expectations for our fellows as they progress through training. Structured situational examinations (objective structured clinical examinations) and simulation venues may facilitate objective evaluation [3], but these tools need to be further developed, shared, and validated.

I hope that this example of successful interprogram collaboration by the Carolinas Fellows Collaborative [2] will spur the ACR to support, in an ongoing and inclusive way, the development of practical educational milestones and evaluation tools for our fellows (ideally with tracking software that readily permits research and interprogram assessment) that can be used as we move forward to implement the new graduate medical education accreditation system endorsed by the ACGME [4].

It is time to fill in the evaluation dots in the “see … , do … , and teach …” model and to concretely define for ourselves the essence of the proficient clinical rheumatologist, the final product of fellowship training. Ultimately it is about both the journey and the destination.

AUTHOR CONTRIBUTIONS

Dr. Mandell drafted the article, revised it critically for important intellectual content, and approved the final version to be published.
