Abstract

A plenary panel session at the 2012 Academic Emergency Medicine consensus conference “Education Research in Emergency Medicine: Opportunities, Challenges, and Strategies for Success” discussed barriers educators face in imagining, designing, and implementing studies to address educational challenges. This proceedings article presents a general approach to getting started in education research. Four studies from the medical education literature, each illustrating a distinct way to approach a specific research question, are then discussed. The study designs used are applicable to a variety of education research problems in emergency medicine (EM). Potential applications of the studies, their effects, and lessons learned are also discussed.

Most instructional methods in medical education are based on tradition and philosophy rather than the results of well-designed studies. Despite the plethora of educational problems that emergency medicine (EM) educators face, critical appraisals of education research studies in the EM literature in the past 3 years have identified few original contributions that reach publication.[1-3] Common barriers medical education researchers encounter include lack of training opportunities, protected time, or funding for research; competing administrative and leadership roles; small numbers of learners; and difficulty defining relevant and measurable outcomes.[4] Among published studies, deficiencies in reporting quality are common.[5-10]

The purpose of this article is to provide readers with strategies to overcome the barriers they face in the education research process, from idea to publication. We begin by summarizing a three-step approach to education research design and presenting practical tips for getting started. We then summarize four original research studies in health professions education to illustrate how educational problems can be addressed with research studies.

Getting Started in Education Research

Three Steps in Planning a Scholarly Project

Beckman and Cook[11] have presented a three-step approach to developing scholarly projects in medical education:

1. Define the Study Question. The most important part of any study is the research question (or study goal or hypothesis). Even the most rigorous study will fail to have an effect on the field if it does not answer a question that is both important (i.e., the answer would influence practice or future research) and novel (i.e., the answer is unknown).

The best way to establish a question's importance is to employ a strong conceptual framework, which is a theory, approach, or model for how things work. This allows educators at future times and in other settings to adopt and build on a study's results.[12] Conceptual frameworks also assist educators in defining and selecting the study variables and in predicting and interpreting the results.[13] A question's importance also derives from its timeliness or effect on current practice, but in the absence of a conceptual framework such answers will usually have limited influence over time. Cook and others[14, 15] have argued that the most important questions—those with the greatest potential for long-term effect—are those that clarify how things work, for whom, and in what circumstances.

The best way to establish a question's novelty is to rigorously review the literature. Even if no studies are found for the question being considered, the researcher should diligently seek studies that address similar questions or that may offer suggestions for key elements of the framework, training intervention, or assessment. Such relevant work may often be found in other fields such as surgery, nursing, or non–health professions literature. The literature review should culminate in a focused problem statement—a clear summary of what remains unknown. The research question (or objective) then follows naturally from the problem statement: “The evidence thus far shows ____. What remains unknown is _____. Thus, we sought to _____.”

Educators can identify important questions in their daily teaching activities each time they ask, “I wonder if we should do it [this way] or [that way]?” If a literature search for evidence and theory-based principles does not answer the question, this may be the start of a worthwhile project.

2. Identify Study Designs and Methods. Three guiding principles for research study design are:

First, identify the general class of study design most relevant to the research question. This is essential, because the standards for excellence and the potential pitfalls vary by design. Studies of educational interventions (e.g., a new course or training approach) will require an experimental study design; studies that evaluate educational assessments (e.g., a new testing approach or tool) will invoke a validity study design; studies seeking to understand current attitudes or learning needs may employ a cross-sectional (e.g., survey) design; and studies attempting to summarize published literature will use a systematic or nonsystematic literature review.[16-20]

Second, realize that all studies have flaws. However, some flaws matter more than others. In addressing study flaws it is helpful to focus on reproducibility and validity threats, i.e., threats to the validity of study interpretations. Ask, “If another researcher at another institution were to conduct a study using different methods but addressing the same question, would he or she be likely to arrive at the same answer?” Answering this question will focus primarily on threats to study validity—issues such as preexisting differences among participants, selection bias, instrumentation problems, and implementation bias.[16] The most important threats, and the specific ways in which they can manifest, vary for different research designs.

Third, be realistic. Carefully consider factors such as the number of potential participants, the logistics of implementing an intervention, and the feasibility of assessing outcomes. Estimate the needed sample size using a credible approximation of the anticipated effect.[22] It is also important to ensure that institutional review board approval is obtained, as learners are a vulnerable study population.[23]
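
As a concrete illustration of the sample size step, the following minimal sketch uses a standard power calculation for a two-group comparison of mean scores. The effect size, significance level, and power below are illustrative assumptions, not values drawn from any study discussed in this article.

    # Minimal sketch: per-group sample size for a two-group comparison,
    # assuming a medium standardized effect (Cohen's d = 0.5). All inputs
    # are illustrative placeholders to be replaced with credible estimates
    # from pilot data or the literature.
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(
        effect_size=0.5,  # assumed Cohen's d
        alpha=0.05,       # two-sided significance level
        power=0.80,       # desired probability of detecting the effect
    )
    print(f"Required participants per group: {n_per_group:.0f}")  # about 64

With these assumed inputs the calculation returns roughly 64 participants per group, which illustrates why small learner pools so often constrain education research designs.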

3. Select Outcomes. Nearly all research studies require some type of outcome. To avoid prematurely settling on an inferior outcome measure, we suggest first identifying the broad class of outcomes (e.g., “knowledge” or “skill” [in a controlled setting] or “behaviors” [at the bedside]), then considering the different measures available for a given outcome, and then selecting a specific instrument.[24] Ten tips for success in education research have been published previously and are summarized in Table 1.[25]

Table 1. Tips for Getting Started in Education Research

Tip - Brief Notes
Get some training - Training can occur locally (local workshops), at national meetings (in particular at education-oriented conferences), and through formal degrees (e.g., master's-level training).
Find a mentor - The importance of a mentor cannot be overemphasized. If no single person at your institution possesses the needed skills or sufficient time, consider working with multiple mentors or looking outside your institution.
Ask important questions - See main text.
Start small and grow - Powerhouse investigators did not start off that way; they started with small projects and grew into their current position. A poster at a national meeting may be the first step in a series of progressively insightful studies.
Aim high - It is okay to start small, but don't settle for mediocrity. Whatever you do, do it well. The standards of Glassick et al.[21] for scholarship are useful in judging the rigor of your scholarly project, however small.
There's no such thing as a perfect study - If you wait for the perfect study (or perfect opportunity), you'll be waiting a very long time. If the question is important and novel, any answer (even if incomplete) will be better than none.
Where will you find the time? - Be realistic in budgeting time, money, and other resources.
Remember ethical issues - Learners are a vulnerable population.[23] Many education journals now require institutional review board approval for all studies involving human subjects.
Network - Build relationships locally and nationally through involvement in committees and professional organizations. Approach new contacts with a specific purpose (e.g., a question).
This is hard work - Education research is fun and rewarding, but it is not easy.

Examples of Education Research Studies

Example 1: Study of Fidelity in Simulation-based Education

Our first example is a randomized trial comparing approaches of different fidelity for simulation-based education.[26] The primary investigator began with a general interest in developing a simulator to train urologists and with two questions: how can simulation be made effective, and what are the essential steps for training a particular procedure? Guided by a literature search, he defined a specific research question: is a bench model superior to didactic training for learning a technical skill, and is there a difference in training effect between low-fidelity and high-fidelity models? The conceptual framework for this study drew upon the literature on transfer of learning, which addresses the factors that influence how knowledge or skills learned in one setting can be effectively applied in another. Instead of comparing training on a simulator to training in a clinical setting, the investigators chose to compare one simulator to another. The research hypothesis was twofold: 1) both hands-on training models (low- and high-fidelity) would lead to greater gains than didactic training, and 2) there would be no difference in effect between high-fidelity and low-fidelity training.

The primary investigator identified a procedure that he wanted to teach (midureter kidney stone extraction) and defined the key steps needed to complete it. These steps were crucial to developing both a procedure checklist and a low-fidelity model that the authors hypothesized might be as effective as currently available, expensive, high-fidelity models. Forty fourth-year medical students on their surgical rotation were recruited to participate in a prospective, randomized, three-arm study with a pre-post design: all subjects were tested before and after the intervention on a high-fidelity simulator. The three study arms were a control arm (didactic presentation only), a low-fidelity training arm, and a high-fidelity training arm.
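
As a minimal sketch of the allocation step in a trial like this, the code below randomly assigns 40 participants to the three arms described above. The identifiers and the round-robin scheme are illustrative assumptions, not the investigators' actual procedure.

    # Minimal sketch: balanced random allocation for a three-arm trial
    # with 40 participants, as in the study described above. This is an
    # illustration of the design, not the investigators' actual code.
    import random

    participants = [f"student_{i:02d}" for i in range(1, 41)]
    arms = ["didactic_control", "low_fidelity", "high_fidelity"]

    random.shuffle(participants)
    # Deal shuffled participants round-robin so group sizes differ by at most one.
    assignment = {arm: participants[i::3] for i, arm in enumerate(arms)}

    for arm, group in assignment.items():
        print(arm, len(group))  # 40 is not divisible by 3: sizes 14, 13, 13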

The results indicated that hands-on training was superior to didactic instruction: students trained on either the high- or the low-fidelity model did significantly better than the group that received didactic instruction only on all outcomes (global rating scale, checklist score, pass rating, and time to task completion). However, there were no significant differences between the low- and high-fidelity groups on the same measures. These results suggest that as long as a low-fidelity model is designed to incorporate the essential features of the actual procedure, a realistic visual appearance may not be necessary.[26]
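
For readers new to this kind of analysis, the sketch below shows one conventional way to test such a three-arm comparison: an overall one-way ANOVA followed by a pairwise test between the two hands-on arms. The scores are fabricated placeholders, and this is not the authors' actual analysis.

    # Minimal sketch of a three-group comparison: one-way ANOVA across
    # arms, then a pairwise t-test between the two hands-on arms. The
    # checklist scores below are fabricated placeholders, not study data.
    from scipy import stats

    didactic = [12, 14, 11, 13, 15, 12, 14, 13]
    low_fidelity = [18, 20, 19, 21, 17, 20, 19, 18]
    high_fidelity = [19, 18, 21, 20, 19, 18, 20, 21]

    f_stat, p_overall = stats.f_oneway(didactic, low_fidelity, high_fidelity)
    t_stat, p_pairwise = stats.ttest_ind(low_fidelity, high_fidelity)

    print(f"ANOVA across arms: F = {f_stat:.1f}, p = {p_overall:.4f}")
    print(f"Low vs. high fidelity: t = {t_stat:.2f}, p = {p_pairwise:.2f}")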

Some lessons can be taken away from this example.

  1. In general, generating project ideas is the easy part. The majority of the time invested in a project will be spent refining the specific research questions and hypotheses, evaluating their relation to the literature, and navigating logistical and project management issues as they arise.
  2. Do not underestimate the potential downstream effect of even small-scale studies. This example answered a relevant research question, added new knowledge to the existing literature, and established a foundation for this young researcher to develop a niche in education research, which led to several subsequent studies.[27-30]
  3. It is helpful to start with general intuition or “gut instinct.” This will help ensure that the researcher has a passion for the research question, which is necessary because the path toward scholarship will require a significant amount of work. However, it is important to follow that up with discussions with an expert in the field and a thorough literature review to make sure one's gut is not leading one astray.
  4. Be flexible and thoughtful in developing the specific research question. The final question might be quite different than the initial draft.

Example 2: Study of an Innovative Approach to Interprofessional Communication

Our second example is an evaluation of an innovative approach to improving the team communication skills of fourth-year medical students.[31] The investigator team set out to develop a novel approach to measuring the communication and decision-making involved in telephone consultations with nurses about patient care problems, along with an accompanying measurement instrument.

A multidisciplinary team of physicians, nurses, and education experts developed a set of standardized surgical patient scenarios that might prompt a nurse to page a physician (mock pages). They also developed a structured checklist of desirable and undesirable questions or actions that the responding physician might ask or take. Study participants included 79 fourth-year medical students participating in surgery “boot camp” programs at five institutions (trained group) and 10 new surgery interns at two institutions (untrained group). A trained nurse periodically paged participants, presented them with hypothetical surgical scenarios, and then assessed their performance using the checklist.

Inter-rater reliability and internal consistency were acceptable to high (alpha from 0.65 to 0.92). The results demonstrated clear differences in the level of performance of the trained versus the untrained participants, with large effect sizes. This provides evidence for the validity of skill judgments based on data from this assessment tool.[31]
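
As an illustration of the internal consistency statistic reported above, the sketch below computes Cronbach's alpha from a participants-by-items matrix of checklist scores. The scores are fabricated placeholders, not data from the study.

    # Minimal sketch: Cronbach's alpha for internal consistency, computed
    # from a participants x checklist-items score matrix. The binary scores
    # below are fabricated placeholders, not data from the study.
    import numpy as np

    scores = np.array([  # rows = participants, columns = checklist items
        [1, 1, 0, 1],
        [1, 0, 0, 1],
        [1, 1, 1, 1],
        [0, 1, 0, 0],
        [1, 1, 1, 0],
    ])

    k = scores.shape[1]                          # number of items
    item_var = scores.var(axis=0, ddof=1)        # per-item variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    alpha = (k / (k - 1)) * (1 - item_var.sum() / total_var)
    print(f"Cronbach's alpha: {alpha:.2f}")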

Although designed as an assessment, the mock page technique has also proven to be a valuable training resource and has been well received by students and faculty from each of the participating institutions. The mock page curriculum has subsequently been adopted by other departments and institutions, and there is current discussion about including it in the “National Boot Camp Curriculum” for surgery clerkships.

As an example of how an educational problem can be addressed through research, this study offers several lessons:

  1. Prior to developing the mock pages cases for this study, the team conducted a careful review of the existing literature to avoid duplicating efforts and to take full advantage of findings from other researchers.
  2. The investigator team worked diligently to provide a consistent intervention among the participating medical schools.
  3. The multi-institutional nature of the research was another strength and highlights the importance of communication and attention to the well-being of the collaborative research team.
  4. The results of the study are meaningful because they reflect clinically and educationally relevant outcomes (communication and decision-making) as well as both statistically and practically relevant effects from the intervention. Therefore, this project had both beneficial research outcomes and practical implications for educational practice.

Example 3: Study of Improving Radiology Interpretation Using Deliberate Practice

Our third example is a cross-sectional study that assessed the individual learning curves of trainees as they engaged in deliberate practice in interpreting ankle radiographs.[32] Research in a wide range of domains, such as chess, sports, and music, has suggested that experiential learning is not optimal for developing expertise.[33] The most potent predictor of performance improvement in these domains is the amount of accumulated engagement in deliberate practice, where: 1) individuals engage in tasks with the explicit goal of improving a particular aspect of performance without a teacher; 2) tasks offer accurate and immediate feedback; 3) individuals can engage in the same or similar tasks and can thus gradually improve by repetition, successive refinement, and problem-solving; and 4) individuals can engage in the training when they are ready and rested, until concentration is reduced.[34]

The authors of this study wanted to build on prior assessments of the effectiveness of deliberate practice for improving performance in medical tasks by specifically demonstrating how learning curves can describe proficiency improvements.[35] They aimed to show that learning curves could be used by medical educators to define the point at which practice is most efficient and how much practice is required to achieve a defined level of mastery for a specific task.

Researchers created a case bank of 234 pediatric ankle radiographs with official radiology reports. Each case presented a clinical summary of patient symptoms, followed by a three-view ankle radiograph series. Thirty-two pediatric trainees rotating through a pediatric ED were recruited to participate in the study, along with six pediatric EM attending physicians who served as a reference standard. Subjects were prompted to classify each case as normal or abnormal and to identify the location of the abnormality if applicable; they then received immediate feedback in the form of the official radiology report. Cases were scored, and longitudinal learning curves were generated from calculated test characteristics (such as accuracy, sensitivity, and specificity). Although individual learning curves varied dramatically, group-level summary curves for each of the test characteristics were similar. The overall pattern showed a phase of irregular performance through the first 20 cases and then the steepest improvement from 21 to 50 cases. An inflection point where learning slowed was noted at 50 cases, but learning did not stop even after all cases had been completed.
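
To illustrate how such learning curves can be generated from a sequence of scored cases, here is a minimal sketch that computes a moving-window accuracy curve. The simulated responses, the improvement model, and the window size are all illustrative assumptions, not the study's methods.

    # Minimal sketch: a moving-window accuracy learning curve over a
    # sequence of scored cases. The simulated responses and the window
    # size are illustrative assumptions, not the study's methods.
    import numpy as np

    rng = np.random.default_rng(0)
    n_cases = 234
    # Placeholder model: probability of a correct read rises with practice.
    p_correct = 0.5 + 0.4 * (1 - np.exp(-np.arange(n_cases) / 40))
    correct = rng.random(n_cases) < p_correct

    window = 10  # cases per moving window (assumed)
    curve = np.convolve(correct, np.ones(window) / window, mode="valid")
    for case in (20, 50, 200):
        print(f"accuracy around case {case}: {curve[case - window]:.2f}")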

This example illustrates how deliberate practice can be studied in an EM setting and how learning curves can describe performance improvements in learners over time. This may allow medical educators to understand both their learners' rate of learning and the effectiveness and efficiency of a specific learning intervention for a group or an individual. These results may be applicable to other medical skills, such as physical examination skills (cardiac and pulmonary auscultation), interpretation of other imaging modalities, visual diagnosis, and perceptual skills training.

Example 4: Study of a New Tool for Simulation-based Assessment

Central venous catheterization (CVC) is a procedure commonly performed by practitioners in many medical specialties. Several studies have evaluated how to train health professionals in CVC using simulation.[36] However, assessments of CVC proficiency have infrequently been described. The purpose of this study was to evaluate an instrument designed to determine whether residents have mastered the key steps of inserting an internal jugular central line.[37] The study team began by carefully developing an instrument based on institutional practice guidelines and by having the instrument reviewed by several internal focus groups. They videotaped the participants of a training workshop as they performed a CVC, and the videotapes were then scored by two independent reviewers. However, the team encountered difficulty interpreting the results of the data analyses. At this point, a new investigator was added to the team—a researcher with experience planning, conducting, and publishing validity studies. The new team member suggested a new conceptual framework for thinking about validity studies. Over the course of several meetings, the study team refined the scope and aims of the study, defined the target audience, selected a target journal, and revised the analysis plan. The project was slowed when one video reviewer fell behind in completing his review assignment, but ultimately the study was published in a well-respected journal.[37]

Key lessons from this experience include:

  1. Plan the study well in advance. In this case, a midstream reformulation was possible without sacrificing the rigor of the final product. However, not all studies are so fortunate; failure to plan carefully will often allow a flaw that might otherwise have been easily avoided with a minor adjustment in methods. Even in this case, advance planning might have permitted collection of additional useful information during the earliest stages of the study.
  2. Pay close attention to team composition. If the team lacks necessary expertise, it helps to get an expert involved early. Ensure that all team members are committed to the project, that roles are clearly defined, and that each member has sufficient resources (including time) to complete required tasks. Adjustments in the roles and expectations of individual members may sometimes be necessary midway through a project.
  3. Conceptual frameworks apply not only to study interventions, but also to study designs. In this case, the use of a modern framework for thinking about the validity of assessment scores added substantially to the rigor and clarity of the final report.
  4. The old framework for validity studies (face validity, content validity, criterion validity, and construct validity) has been replaced by a new model. In the new model, validity is viewed as a hypothesis that can be tested through the collection and synthesis of validity evidence.[38] Evidence can derive from five sources: content, internal structure, relations with other variables, response process, and consequences. This evidence is then synthesized to construct a rational validity argument.[17]
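
To make the modern framework concrete, the sketch below organizes the five sources of validity evidence as a simple mapping from source to an illustrative guiding question for a checklist-based assessment like the one above. The questions are paraphrased examples, not quotations from the cited framework.

    # Minimal sketch: the five sources of validity evidence in the modern
    # framework, each paired with an illustrative guiding question for a
    # checklist-based assessment. The questions are paraphrased examples,
    # not quotations from the cited sources.
    validity_evidence = {
        "content": "Do the checklist items capture the key steps of the procedure?",
        "response process": "Did raters understand and apply the instrument as intended?",
        "internal structure": "Are scores consistent across items and raters?",
        "relations with other variables": "Do scores track training level or other measures of skill?",
        "consequences": "What decisions about trainees follow from these scores?",
    }

    for source, question in validity_evidence.items():
        print(f"{source}: {question}")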

Summary

Early career education researchers can overcome barriers to conducting quality studies by defining relevant research questions that stoke their passion, applying a structured approach to the research process, and selecting appropriate designs and methods. A well-designed study that addresses a timely educational problem can create a ripple effect by raising new questions and inspiring follow-up studies that will affect educational practice, contribute to the collective knowledge, and promote researcher career advancement. By engaging in sound education research practices and challenging our colleagues and collaborators to set and achieve high standards for study quality, we can transform the state of medical education to meet the ever-changing needs of educators, learners, and ultimately patients.

References
