• critical appraisal;
  • evidence-based medicine;
  • financial conflicts of interest;
  • healthcare consumers;
  • study bias




Background

Training in evidence-based medicine is most commonly offered to physicians, medical students and health-care decision-makers.

Setting and participants

We partnered with community organizations to recruit participants and develop trainings for consumers, non-physician health-care providers and journalists in California.


Intervention

We conducted half-day and one-day workshops in critical appraisal of health evidence. Workshops consisted of didactic presentations, small-group practice sessions and class discussions.

Outcome measures

We measured knowledge and confidence immediately before and after the workshops and at follow-up 6 months later. We also asked participants to describe their use of health evidence before the workshops and at follow-up.


Results

At baseline, 41% of the consumers, 45% of the providers and 57% of the journalists correctly answered questions about health evidence. Scores increased by about 20% (P < 0.05) in all groups at the end of the workshops and remained significantly above baseline at follow-up. At baseline, 26% of the participants were confident in their understanding of critical appraisal concepts, significantly increasing to 54% after the workshops and sustained (53%) at follow-up. During discussions, participants’ comments often focused on funding and the potential effects of financial conflicts of interest on study findings. Participants did not use evidence more frequently at follow-up but said that they applied workshop skills in evaluating research, communicating with others and making decisions about health care.


Conclusions

It is possible to conduct successful critical appraisal workshops that aid health-related decision making for groups who have not previously had access to this kind of training.


Introduction

Evidence-based medicine has been defined as ‘the integration of the best research evidence with clinical expertise and patient values and circumstances’.[1] The ability to apply the evidence effectively requires that users of scientific literature be able to critically appraise research and other health information. We define critical appraisal of research as a systematic approach of careful analytical evaluation of its design, conduct, analysis and dissemination.

Training in critical appraisal is commonly offered to medical students, physicians and other health-care professionals. However, the ability to evaluate scientific evidence is important for other groups as well. Health-care providers at all levels can benefit from an ability to critically appraise research that is relevant to their work. Consumers can use critical appraisal to better participate in their own health-care decision making. Consumer advocates play important roles in educating other consumers and interacting with policy makers. By using valid scientific evidence, they can promote sound public health policy.[2-4] Journalists act as intermediaries between health-care consumers and the health-care delivery system and are an important source of information for consumers and providers alike. By critically appraising their sources of health information, they can optimize their coverage of scientific research, which is sometimes inaccurate, incomplete or sensationalized.[5-7]

We adopted a community engagement strategy to develop a community-based critical appraisal training programme for consumers, consumer advocates, diverse categories of health-care providers and journalists. The specific aims of our workshops were to increase participants’ understanding of the basic concepts of evidence-based medicine (knowledge domain), their confidence in their ability to understand research (confidence domain) and their use of evidence (behaviour domain). We hypothesized that providing a one-day workshop in critical appraisal of health information would result in increases in knowledge, confidence and behaviour.

Workshop design

To inform the design of our workshops and evaluation instruments, we participated in an academic/community partnership consultation available to researchers through the Clinical and Translational Research Institute at the University of California, San Francisco. We also sought advice from 12 community and professional leaders identified at the consultation or by referral. We then worked in partnership with community groups and organizations to develop and offer workshops. We met with directors and representatives of our community partner groups several months in advance of each workshop to plan and conduct outreach, workshop design, recruitment and implementation. Through these collaborations, we determined that the workshop needs of our target populations included brevity, focus on research topics that interested the groups and accommodation of diverse learning styles. Acting on advice we received repeatedly, we provided food and beverages during workshop breaks. We also incorporated elements from papers that evaluated the effectiveness of teaching critical appraisal skills to our target groups and/or described the components of a specific critical appraisal workshop or course that had been presented to at least one of our target groups.[2, 8-21]

We then conducted two pilot workshops in 2009. We used participant feedback from the pilots, including comments about the format, materials and questionnaires, as well as evaluation results and feedback from our funder, to refine the content and evaluation materials for the workshops that we offered in 2010. Only the 2010 workshops provide the data for this study.

Development of workshop content

The most effective continuing medical education and critical appraisal training programmes allow learners to participate actively in learning, rather than passively receiving information in lectures.[16, 22] Therefore, rather than employing a systematic ‘push’ of particular topics or actionable messages, we chose a ‘user pull’ approach that allows participants to identify research that is relevant in their work and lives. The user pull approach has been shown to link research with local application of evidence.[23] We operationalized the user pull model with a problem-based learning strategy in which we surveyed representatives of participant groups prior to each workshop to identify topics of interest to group members. We also surveyed workshop attendees after each workshop and incorporated this information into our planning sessions for subsequent workshops. In addition to asking participants for suggestions, we provided a list of tobacco-relevant topics that they could check off, including smoking/tobacco use, second-hand smoke, air quality, cancer, measurement, interventions, genetics and tobacco industry. This emphasis on tobacco content was driven by our own research interests and by our funder's mission to support research on the health effects of second-hand smoke and improve patient–provider communication about exposure. We disclosed the relationship with our funder at all planning meetings and at the beginning of each workshop.

Of the 22 planning-group representatives and 83 workshop attendees who filled in the survey forms, 71% (75/105) chose at least one tobacco-relevant topic, most frequently second-hand smoke, air quality, cancer and interventions. In response to this, we offered several papers that address tobacco and second-hand smoke.[24-29] Other frequently mentioned topics included pharmaceuticals, statistics, media/information searching and specific medical conditions. Throughout the project, we used course evaluations to improve the workshops. For example, we revised slides for clarity, provided a written glossary and included particular papers in small-group learning sessions.

To accommodate the time constraints of our various partner groups while still enabling us to achieve our aim of increasing participants’ critical appraisal skills, we designed a workshop consisting of several modules (Table 1). This modular design was intended to enable us to cover key concepts and meet our aims of increasing participants’ knowledge, confidence and evidence-using behaviour in workshops of different lengths. Modules included didactic presentations that introduced evidence-based medicine and basic research methods, one or two small-group hands-on learning sessions and class discussions. Because financial conflicts of interest are associated with biased research findings[30-34] and different presentations of risk affect users’ perceptions of research findings,[7, 35] we placed special emphasis on assessing the influence of industry sponsorship and on the interpretation of absolute and relative risk. We used the PICO/PECO framework (Population, Intervention/Exposure, Comparison, Outcome) and the critical appraisal tools developed by Guyatt et al.[36] to guide discussions in the small groups. We offered a choice of studies as options for the small-group discussions. These included a well-known study of a statin drug, related editorials and media articles about the study that appeared over time[37-50] and a controversial study of the health effects of second-hand smoke.[24] We also offered other studies, tailored to each group's interests; examples requested by our partner groups are studies of breast cancer treatment, herbal medicine and autism.[51-53] We invited members of the participant groups to co-facilitate workshop discussions.
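The emphasis on absolute versus relative risk can be illustrated numerically. The sketch below is not taken from the workshop materials; it simply shows, for a pair of assumed event rates, how the same effect can look large as a relative risk reduction and small as an absolute one:

```python
def risk_measures(risk_treated, risk_control):
    """Absolute risk reduction (ARR), relative risk reduction (RRR)
    and number needed to treat (NNT) for a pair of event rates."""
    arr = risk_control - risk_treated           # absolute difference in risk
    rrr = arr / risk_control                    # same difference, relative to control risk
    nnt = 1 / arr if arr > 0 else float("inf")  # patients treated per event avoided
    return arr, rrr, nnt

# Hypothetical rates: 1% of treated vs 2% of control patients have the event.
arr, rrr, nnt = risk_measures(0.01, 0.02)
# A headline '50% relative risk reduction' (rrr = 0.5) corresponds here to an
# absolute reduction of one percentage point (arr = 0.01), i.e. NNT of about 100.
```

A media report quoting only the 50% relative reduction and a report quoting the 1-percentage-point absolute reduction describe exactly the same result, which is why the workshops asked participants to look for both numbers.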

Table 1. Modular items and learning domains for half- to full-day critical appraisal workshops for consumers, health-care providers and journalists

Module | Type | Learning domain | Notes
Introduction, RCTs(a) | Didactic | Knowledge | Evidence-based health care, methods, results
Small-group session #1(a) | Experiential | Confidence | Appraise journal article and media reports
Discussion | Q&A | Knowledge, Confidence | Workshop participants reconvene
Observational studies(a) | Didactic | Knowledge | Methods, odds/risk ratios
Conflicts of interest(a) | Didactic | Knowledge, Behaviour | Funding and affiliation bias
Systematic reviews(a) | Didactic | Knowledge | Methods, meta-analysis, forest plots
Small-group session #2 | Experiential | Confidence | Critical appraisal of article or other activity
Resources | Didactic | Behaviour | Databases, trainings, other resources
Discussion, wrap-up(a) | Q&A | Confidence, Behaviour |
Course evaluation(a) | | |

Note: duration and subject matter of modules were determined in advance with participant organizations. Light meals and snacks were provided in all workshops.
(a) Included in all workshops, regardless of length.



Study population and recruitment

We offered workshops through consortia, graduate programmes and membership organizations for consumers and consumer advocates, health-care providers and journalists. These partner organizations distributed recruitment flyers to their members and announced the workshops on their mailing lists, social networking sites and other venues.


Outcome measures

We selected knowledge, confidence and behaviour as our outcome measures.[16] We developed a questionnaire based on validated and previously used instruments,[54-57] and modified it for our study populations. We used a pre–post design with two post measurements, one immediately after the workshop and one at the 6-month follow-up. We asked all attendees to answer a series of questions at the beginning and end of each workshop module assessing their knowledge and confidence regarding that module's topic, using interactive audience response keypads. Knowledge questions were multiple choice (five options: correct answer, three distractors, ‘don't know’) or 1–5 scale (strongly agree, somewhat agree, no opinion/do not know, somewhat disagree, strongly disagree). Confidence questions were on a 1–5 scale (not at all confident to very confident). We conducted, recorded and transcribed a 15–30-min wrap-up discussion at the end of each workshop. All attendees were also invited to complete course evaluations at the end of each workshop. These activities were conducted as an integral part of the instruction and were considered to be exempt research by UCSF's Committee on Human Research.

Workshop attendees were invited to participate in our study but were not required to participate as a condition of attendance. We asked consenting study participants to complete a questionnaire with questions regarding sociodemographics and participants’ frequency of use of evidence in health-care decision making (rarely, sometimes, often, almost always). Approximately 6 months after each workshop, we sent study participants follow-up questionnaires that repeated the on-screen questions and supplemental questions concerning their use of health evidence and added open-ended questions about their use of workshop skills. Links to online questionnaires were sent via email, with print copies sent to participants who had no email addresses. We made multiple contact attempts to nonresponsive participants via telephone, email and surface mail. We continued our contact attempts up through the 1-year anniversary of the participants’ workshop. Participants who did not respond by that time were considered lost to follow-up. Participants who completed all phases of the study were entered into a raffle for a netbook computer, books on critical appraisal and other small prizes. The study was approved by the Committee on Human Research at the University of California, San Francisco, approval numbers H2758-33589 and 10-02507.

Data analysis

Quantitative analysis of knowledge and confidence

We tabulated the number of correct answers for each question by workshop type (consumer, provider or journalist) and at three data collection points: immediately before the workshop modules, immediately after the workshop modules and at follow-up. For each knowledge question, we compared the proportion of correct answers. For each confidence question, we compared the proportion of respondents who indicated 4 or 5 on a 1–5 scale of confidence in their ability to understand concepts or perform critical appraisal operations. We performed two sample z-tests of proportions for each of the subgroups as well as for all groups combined. We calculated P-values using Simple Interactive Statistical Analysis (SISA).[58]
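The two-sample z-test of proportions described above can be computed directly from counts. This sketch is illustrative only (the authors used SISA, not this code), and the counts in the usage example merely approximate the overall knowledge scores reported in Table 3 (44.6% of 153 at baseline vs 64.3% of 149 post-workshop):

```python
from math import erf, sqrt

def two_proportion_ztest(x1, n1, x2, n2):
    """Pooled two-sample z-test for a difference in proportions.

    x1, x2: numbers answering correctly; n1, n2: respondents at each time point.
    Returns the z statistic and the two-sided P-value.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                  # common proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided P-value from the standard normal CDF, Phi(z) = (1 + erf(z/sqrt(2)))/2
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Approximate counts for the 'all participants' overall knowledge question:
z, p = two_proportion_ztest(68, 153, 96, 149)   # baseline vs post-workshop
```

With these counts the test yields P < 0.001, consistent with the significance level reported for the overall knowledge gain.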

Qualitative analysis of behaviour

We conducted content analyses for three qualitative data sources. First, each workshop's wrap-up discussion was audio-recorded and transcribed. Second, participants provided written comments on course evaluations. Third, follow-up questionnaires included open-ended questions about how workshop participants had applied critical appraisal skills in the months following the workshops. We developed coding schemes based on our initial qualitative reads of the material, and refined and developed them throughout the analytic process. Two researchers double coded the material. We found minor discrepancies in interpretation which we discussed and resolved collaboratively.
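The paper reports that coding discrepancies were minor and resolved by discussion, without a formal agreement statistic. For readers who want a quantitative check on double coding, Cohen's kappa is a common choice; the sketch below is illustrative and not part of the study's analysis:

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Cohen's kappa: chance-corrected agreement between two coders
    who each assigned one label to the same set of items."""
    assert len(coder1) == len(coder2) and coder1
    n = len(coder1)
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    counts1, counts2 = Counter(coder1), Counter(coder2)
    # Agreement expected by chance, from each coder's label frequencies
    expected = sum(counts1[k] * counts2[k]
                   for k in counts1.keys() | counts2.keys()) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical labels for four coded comments (domain names from this study)
kappa = cohens_kappa(["behaviour", "knowledge", "attitude", "behaviour"],
                     ["behaviour", "knowledge", "attitude", "knowledge"])
```

Values near 1 indicate near-perfect agreement beyond chance; values near 0 indicate agreement no better than chance.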


Results

We completed a total of nine workshops between March and December of 2010. Four of the nine workshops were conducted for consumers/consumer advocates, accounting for 57% (102/178) of the attendees. Two of these were offered in partnership with a large advocacy organization for retired workers, which contributed over half of the attendees in the category; the remaining attendees were members of consumer advocacy groups working on behalf of breast cancer patients and disability rights. Three workshops for health-care providers, accounting for 24% (43/178) of attendees, included nurses, community clinic workers, and complementary and alternative medicine practitioners, students and interns. Two journalist workshops were attended by working journalists and graduate journalism students, who made up 19% (33/178) of all workshop participants (attendee data not shown in tabular form). Of the 178 workshop attendees, 153 (86%) consented to participate in the research study.

Characteristics of study participants

Table 2 describes the 153 consenting participants, from whom we collected sociodemographic and other individual-level data. Proportions of participants by workshop type were similar to those of attendees overall: 56% (85/153) were consumers, 28% (43/153) were providers and 16% (25/153) were journalists. Overall, 82% (126/153) of study participants were women, with little variation by workshop category. More than two-thirds of the consumers and journalists were non-Hispanic Caucasians, compared to fewer than half of the health-care providers. Workshop participants were highly educated; most had college or advanced degrees. Few participants reported low incomes, while over half had family incomes above $50 000, with substantial proportions reporting incomes above $75 000. Although there were wide age ranges in all groups, consumers were substantially older than other participants, with an average age of 66 compared with 33 and 34 for the other groups. This is only partly because of the large number of retired workers in the two workshops we conducted specifically for them, in which the average ages of participants were 68 and 73. Participants in the other consumer workshops had average ages of 51 and 58 (data not shown in tables). Moreover, several of our practitioner and journalist community workshops included students and interns, which probably lowered the average ages of these participant groups. Follow-up rates for all groups were high: we retained 79% of consumers, 88% of practitioners and 84% of journalists, with an overall retention rate of 82%.

Table 2. Characteristics of study participants, critical appraisal workshops

Values are percentages unless otherwise noted. Due to rounding, some categories do not add up to 100%.

 | Consumer workshops (N = 85) | Provider workshops (N = 43) | Journalist workshops (N = 25) | All workshops (N = 153)
Other/decline to state/left blank | 1 | 2 | 0 | 1
Racial identity
Non-Hispanic Caucasian | 68 | 44 | 76 | 63
African American | 18 | 2 | 0 | 11
Asian/Pacific Islander | 5 | 30 | 16 | 14
Native American | 0 | 2 | 0 | 1
Don't know/decline/blank | 0 | 5 | 0 | 1
Educational attainment
Less than high school graduate | 0 | 2 | 0 | 1
High school graduate or GED | 5 | 5 | 0 | 4
Technical school or some college | 28 | 9 | 0 | 18
College graduate | 32 | 65 | 56 | 45
Post-graduate or professional degree | 35 | 19 | 40 | 31
Don't know/decline to state/missing | 0 | 0 | 4 | 1
Previous research training
I don't know/decline to state/blank | 5 | 5 | 4 | 5
Household income
<$15 000 | 6 | 14 | 12 | 9
$15 000–<$25 000 | 14 | 2 | 8 | 10
$25 000–<$50 000 | 22 | 14 | 12 | 18
$50 000–<$75 000 | 28 | 9 | 4 | 19
$75 000 and over | 24 | 49 | 32 | 32
Don't know/decline to state/missing | 6 | 12 | 32 | 12
Age, median (range) | 66 (23–89) | 33 (18–72) | 34 (23–78) | 56 (18–89)
Don't know/decline to state | 0 | 0 | 4 | 1


As shown in Table 3, at baseline, 45% of participants – 41% of consumers, 45% of providers and 57% of journalists – answered knowledge questions correctly overall. Knowledge increases were similar in magnitude for all three groups, averaging about 20% at the end of the workshops. At follow-up, much of the new knowledge had been retained, with an average 16% increase in knowledge scores from baseline. Immediately after the workshop, we found significant increases in participants' understanding of the concept of generalizability, the benefits of randomization in clinical trials, the difference between absolute and relative risk and the relative strength of evidence provided by well-conducted systematic reviews compared to individual studies. There was little change in participants' ability to assess how confounding factors may affect results of a study. We found an increase in overall knowledge of critical appraisal concepts in all groups.

Table 3. Changes in knowledge among workshop participants (n = 178), by workshop type

Type of participant | Baseline: % answering correctly pre-workshop (n) | Post-workshop: % answering correctly (P, pre/post difference) (n) | Follow-up: % answering correctly (P, pre/follow-up difference) (n)

Overall average per cent for knowledge questions
Consumers | 40.8 (n = 90) | 59.7 (P < 0.001) (n = 86) | 54.3 (P < 0.001) (n = 64)
Providers | 45.0 (n = 37) | 67.8 (P ≤ 0.001) (n = 39) | 60.5 (P < 0.001) (n = 36)
Journalists | 56.9 (n = 27) | 75.3 (P < 0.001) (n = 24) | 73.5 (P = 0.002) (n = 21)
All participants | 44.6 (n = 153) | 64.3 (P < 0.001) (n = 149) | 60.9 (P < 0.001) (n = 121)

Correctly identified ‘statistically significant’(a)
Consumers | 43.4 (n = 99) | 57.0 (P = 0.07) (n = 94) | 57.8 (P = 0.07) (n = 64)
Providers | 53.7 (n = 41) | 71.5 (P = 0.07) (n = 42) | 70.0 (P = 0.58) (n = 35)
Journalists | 77.4 (n = 31) | 80.0 (P = 0.81) (n = 25) | 76.2 (P = 1.00) (n = 21)
All participants | 52.0 (n = 171) | 63.9 (P = 0.03) (n = 161) | 61.6 (P = 0.10) (n = 120)

Correctly identified to whom a particular study's results could be generalized(a)
Consumers | 45.8 (n = 83) | 55.7 (P = 0.21) (n = 79) | 59.4 (P = 0.10) (n = 64)
Providers | 38.1 (n = 42) | 56.1 (P = 0.10) (n = 41) | 58.3 (P = 0.07) (n = 36)
Journalists | 53.3 (n = 30) | 76.0 (P = 0.08) (n = 25) | 76.2 (P = 0.10) (n = 21)
All participants | 45.2 (n = 155) | 59.3 (P = 0.01) (n = 145) | 62.0 (P = 0.005) (n = 121)

Strongly agreed that randomization makes comparable groups(b)
Consumers | 52.1 (n = 96) | 77.1 (P < 0.001) (n = 96) | 64.6 (P = 0.11) (n = 65)
Providers | 65.9 (n = 41) | 90.5 (P = 0.01) (n = 42) | 77.4 (P = 0.21) (n = 37)
Journalists | 53.3 (n = 30) | 79.1 (P = 0.05) (n = 24) | 90.4 (P = 0.012) (n = 21)
All participants | 55.7 (n = 167) | 80.9 (P < 0.001) (n = 162) | 73.2 (P = 0.002) (n = 123)

Correctly identified odds and risk ratios as measures of relative difference(b)
Consumers | 16.5 (n = 91) | 30.2 (P = 0.03) (n = 86) | 29.7 (P = 0.05) (n = 64)
Providers | 17.2 (n = 29) | 34.1 (P = 0.12) (n = 41) | 22.2 (P = 0.61) (n = 36)
Journalists | 21.7 (n = 23) | 47.8 (P = 0.06) (n = 23) | 47.6 (P = 0.07) (n = 21)
All participants | 17.5 (n = 143) | 34.0 (P = 0.001) (n = 150) | 30.6 (P = 0.01) (n = 121)

Correctly identified how absolute risk and relative risk measures report the same results differently(a)
Consumers | 12.8 (n = 78) | 29.8 (P = 0.002) (n = 77) | 26.5 (P = 0.04) (n = 64)
Providers | 25.0 (n = 24) | 40.0 (P = 0.26) (n = 25) | 50.0 (P = 0.053) (n = 36)
Journalists | 48.3 (n = 29) | 66.7 (P = 0.18) (n = 24) | 66.7 (P = 0.20) (n = 21)
All participants | 22.9 (n = 131) | 38.9 (P = 0.01) (n = 126) | 40.5 (P = 0.003) (n = 121)

Correctly identified the relationship between confounding factors, factors on the causal pathway and outcome(a)
Consumers | 68.4 (n = 95) | 73.2 (P = 0.47) (n = 86) | 64.0 (P = 0.56) (n = 64)
Providers | 62.5 (n = 40) | 78.0 (P = 0.13) (n = 41) | 72.2 (P = 0.37) (n = 36)
Journalists | 91.3 (n = 23) | 87.0 (P = 0.64) (n = 23) | 90.5 (P = 1.00) (n = 21)
All participants | 70.3 (n = 158) | 76.7 (P = 0.20) (n = 150) | 71.1 (P = 0.88) (n = 121)

Strongly agreed systematic reviews provide stronger evidence than other studies(b)
Consumers | 40.7 (n = 86) | 90.5 (P < 0.001) (n = 84) | 76.9 (P < 0.001) (n = 65)
Providers | 36.6 (n = 41) | 92.7 (P < 0.001) (n = 41) | 81.1 (P < 0.001) (n = 37)
Journalists | 50.0 (n = 22) | 90.9 (P = 0.003) (n = 22) | 66.7 (P = 0.027) (n = 21)
All participants | 40.9 (n = 149) | 90.9 (P < 0.001) (n = 147) | 76.4 (P < 0.001) (n = 123)

(a) Five-choice multiple-choice question: correct answer, three distractors, don't know.
(b) Scale 1–5: strongly agree, somewhat agree, no opinion/don't know, somewhat disagree, strongly disagree.
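The odds-ratio/risk-ratio distinction probed in Table 3 can be made concrete with a 2 × 2 table. This is a generic illustration with made-up counts, not data from the study:

```python
def relative_measures(a, b, c, d):
    """Risk ratio and odds ratio from a 2 x 2 table:
    exposed group: a with the outcome, b without;
    unexposed group: c with the outcome, d without."""
    risk_ratio = (a / (a + b)) / (c / (c + d))  # ratio of outcome risks
    odds_ratio = (a / b) / (c / d)              # ratio of outcome odds
    return risk_ratio, odds_ratio

# Made-up counts: 20 of 100 exposed vs 10 of 100 unexposed have the outcome.
rr, odds = relative_measures(20, 80, 10, 90)
# Both are measures of relative difference; here the odds ratio (2.25) exceeds
# the risk ratio (2.0), and the two diverge further as the outcome gets common.
```

Neither measure, on its own, reveals the absolute difference in risk, which is why the workshops taught both relative and absolute framings.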


As shown in Table 4, participants showed increases in confidence in their ability to evaluate research information. Overall, before the workshop, an average of 26% of participants – 23% of consumers, 25% of providers and 38% of journalists – answered that they were confident or very confident of their ability to understand and/or explain various terms and concepts related to critical appraisal. After completing the workshop, 54% answered that they were confident or very confident, and these gains were sustained: at follow-up, 53% stated that they were confident or very confident. Confidence increases varied by participant type: from baseline to post-workshop, consumers gained 25%, providers 34% and journalists 31%; from baseline to follow-up, the gains were 17, 36 and 39%, respectively. Participants' attitudes about studies' funding sources and researchers' financial conflicts of interest did not change substantially over the course of our study. At all stages of the study, over 95% of participants in all groups agreed or strongly agreed that this information was important (data not shown in tabular form).

Table 4. Changes in confidence among workshop participants (n = 178), by workshop type

Type of participant | Baseline: % indicating confidence pre-workshop (n) | Post-workshop: % indicating confidence (P, pre/post difference) (n) | Follow-up: % indicating confidence (P, pre/follow-up difference) (n)

Confidence = answered 4 or 5 on a scale from 1 (not at all confident) to 5 (very confident).

Overall average per cent of participants indicating confidence
Consumers | 22.8 (n = 82) | 47.6 (P < 0.001) (n = 80) | 40.3 (P < 0.001) (n = 60)
Providers | 24.6 (n = 36) | 58.9 (P < 0.001) (n = 37) | 61.0 (P < 0.001) (n = 34)
Journalists | 38.0 (n = 24) | 73.4 (P < 0.001) (n = 20) | 76.7 (P < 0.001) (n = 19)
All participants | 25.8 (n = 142) | 54.4 (P < 0.001) (n = 137) | 52.5 (P < 0.001) (n = 113)

Per cent of participants indicating confidence in their ability to:

Recognize sources of bias
Consumers | 43.6 (n = 94) | 74.1 (P < 0.001) (n = 85) | 59.7 (P = 0.04) (n = 67)
Providers | 48.7 (n = 39) | 90.5 (P < 0.001) (n = 42) | 73.7 (P = 0.02) (n = 38)
Journalists | 40.0 (n = 30) | 86.4 (P < 0.001) (n = 22) | 71.4 (P = 0.02) (n = 21)
All participants | 44.2 (n = 163) | 80.6 (P < 0.001) (n = 149) | 65.9 (P < 0.001) (n = 126)

Understand the results of a scientific study
Consumers | 35.4 (n = 96) | 45.2 (P = 0.18) (n = 84) | 52.2 (P = 0.08) (n = 67)
Providers | 43.9 (n = 41) | 55.0 (P = 0.32) (n = 40) | 73.7 (P = 0.007) (n = 38)
Journalists | 43.5 (n = 23) | 59.1 (P = 0.29) (n = 22) | 85.7 (P < 0.001) (n = 21)
All participants | 38.8 (n = 160) | 50.1 (P = 0.05) (n = 146) | 64.3 (P < 0.001) (n = 126)

Judge if a study's findings are valid
Consumers | 26.7 (n = 86) | 52.8 (P < 0.001) (n = 87) | 44.7 (P = 0.01) (n = 67)
Providers | 30.0 (n = 40) | 69.0 (P < 0.001) (n = 42) | 68.4 (P < 0.001) (n = 38)
Journalists | 36.7 (n = 30) | 59.1 (P = 0.11) (n = 22) | 66.7 (P = 0.03) (n = 21)
All participants | 29.5 (n = 156) | 58.3 (P < 0.001) (n = 151) | 55.6 (P < 0.001) (n = 126)

Explain randomized controlled trial
Consumers | 29.0 (n = 93) | 43.9 (P = 0.04) (n = 91) | 43.3 (P = 0.06) (n = 67)
Providers | 37.5 (n = 40) | 71.4 (P = 0.002) (n = 42) | 70.3 (P = 0.004) (n = 37)
Journalists | 56.7 (n = 30) | 81.9 (P = 0.06) (n = 22) | 95.3 (P = 0.002) (n = 21)
All participants | 36.2 (n = 163) | 56.8 (P < 0.001) (n = 155) | 60.0 (P < 0.001) (n = 125)

Explain absolute risk reduction
Consumers | 9.5 (n = 95) | 36.9 (P < 0.001) (n = 95) | 19.4 (P = 0.07) (n = 67)
Providers | 2.5 (n = 40) | 40.5 (P < 0.001) (n = 42) | 40.5 (P < 0.001) (n = 37)
Journalists | 16.1 (n = 31) | 83.3 (P < 0.001) (n = 24) | 66.6 (P < 0.001) (n = 21)
All participants | 9.0 (n = 166) | 44.7 (P < 0.001) (n = 161) | 33.6 (P < 0.001) (n = 125)

Explain relative risk reduction
Consumers | 11.5 (n = 96) | 32.3 (P = 0.001) (n = 93) | 43.3 (P < 0.001) (n = 67)
Providers | 5.1 (n = 39) | 38.1 (P < 0.001) (n = 42) | 70.2 (P < 0.001) (n = 37)
Journalists | 56.7 (n = 30) | 81.9 (P = 0.06) (n = 22) | 95.3 (P = 0.002) (n = 21)
All participants | 18.2 (n = 165) | 40.8 (P < 0.001) (n = 157) | 60.0 (P < 0.001) (n = 125)

Explain cohort study
Consumers | 18.0 (n = 89) | 45.0 (P < 0.001) (n = 89) | 37.3 (P = 0.007) (n = 67)
Providers | 22.5 (n = 40) | 53.8 (P = 0.004) (n = 39) | 48.6 (P = 0.02) (n = 37)
Journalists | 34.8 (n = 23) | 68.2 (P = 0.03) (n = 22) | 81.0 (P = 0.002) (n = 21)
All participants | 21.7 (n = 152) | 50.7 (P < 0.001) (n = 150) | 48.0 (P < 0.001) (n = 125)

Explain confounding factor
Consumers | 17.0 (n = 88) | 43.1 (P = 0.0002) (n = 88) | 25.3 (P = 0.20) (n = 67)
Providers | 25.6 (n = 39) | 58.5 (P = 0.003) (n = 41) | 51.3 (P = 0.02) (n = 37)
Journalists | 39.1 (n = 23) | 58.2 (P = 0.05) (n = 22) | 71.4 (P = 0.03) (n = 21)
All participants | 22.7 (n = 150) | 51.0 (P < 0.001) (n = 151) | 40.8 (P = 0.001) (n = 125)

Explain systematic review
Consumers | 13.1 (n = 84) | 58.1 (P < 0.001) (n = 86) | 37.3 (P < 0.001) (n = 67)
Providers | 5.0 (n = 40) | 52.5 (P < 0.001) (n = 40) | 51.4 (P < 0.001) (n = 37)
Journalists | 13.6 (n = 22) | 71.4 (P < 0.001) (n = 21) | 57.1 (P = 0.003) (n = 21)
All participants | 11.0 (n = 146) | 58.5 (P < 0.001) (n = 147) | 44.8 (P < 0.001) (n = 125)


Participants in all three groups said that they benefitted from taking the workshop and that they expected to use the specific tools and skills in their practice and their personal lives. One participant commented, ‘You realize everybody has an agenda and if you really want to know the answer you should probably just look at the research they are citing and not read the second-hand (media) report’. The participants also indicated that they would use their new skills in statistical literacy. One said, ‘When they come and say 93% or 44%, that means nothing if you don't know how they came up with it’. Others said they planned to look for systematic reviews when they need reliable evidence about a particular intervention or treatment; ‘To have a process (looking at systematic reviews) for exploring the literature when it can be so overwhelming is really important'. Several discussions focused on participants’ increased understanding of potentially biasing effects of study funding on study outcomes.

Consumers said that they would be better able to evaluate treatments that may be recommended by their health-care providers or advertised in the media and said that they felt better equipped to discuss treatment options with their doctors. One participant commented, ‘[If] the doctor wanted to give me a drug, I would know more about how to [look it up] before I said yes or no’. Others mentioned using their skills in communicating with family members and other consumers. Journalists and providers commented on their increased media and statistical literacy as a result of attending the workshop and indicated that these skills would assist them in evaluating research. The skills that the journalists found especially useful were those that allowed them to understand research terminology and evaluate research articles beyond what was presented to them in industry press releases. One journalist commented, ‘You know everybody is talking 50% rates. Is it really? Is that absolute or relative…or the issue of statistical significance does not mean clinical importance, or whether the sample is representative and randomized, and so forth. Three or four things that were taught today could help you very quickly, and it could even be news if everybody is hyping a drug to come out and say, ‘Well you know, this is not quite so.’ And that is news’.

In all workshop discussions, many comments focused on funding and the potential effects of financial conflicts of interest on study findings. People said things like ‘Follow the money’ and ‘Pay no attention to that man behind the curtain!’ A disability rights advocate who works with parents of children with autism made this comment after her group appraised the Wakefield paper,[53] Lancet's retraction [59] and media reports: ‘I thought the bias around the funder was really clear – I did not like it – everyone woke up around the autism stuff’. Participants also said that it is important to notice when no information about potential conflicts is provided. As previously described, we disclosed our funding source at the beginning of each workshop. Some participants remarked that the use of tobacco-related studies in our presentations and small-group sessions could be related to our funding. One said it was like subliminal advertising and asked ‘Are you doing it for our benefit, or are you doing it because you got the money and have to do it?’.

At baseline, most study participants said they often or almost always used health evidence in their work and daily lives (i.e. in health-care decision making, effective health advocacy, helping others understand treatment harms and benefits, or determining whether a treatment applies to a particular person's situation) (41–72%, average 54%). Except for journalists, there were no substantial changes from baseline to follow-up in the frequency with which workshop participants said they used evidence. Journalists reported a 26% increase in their use of evidence to ‘Help others understand the benefits and harms of treatment options’, with 67% responding ‘often or almost always’ at follow-up, compared to 41% pre-workshop (data not shown in tabular form).

Although participants did not appear to use evidence more frequently at follow-up, they said that they applied workshop skills when they did use evidence. Of the 126 study participants who returned follow-up questionnaires, 100 (55 consumers, 29 providers, and 16 journalists) provided a total of 258 answers to the follow-up question: ‘Please describe three ways you have been able to use skills or information from the workshop in your personal or professional life’. Of the 258 answers, 166 (64%) involved behaviour change: 40 of these specified use in communication, 17 specified use in health-care decision making, and the rest described using skills and information in general critical evaluation of information. The other responses were divided between the domains of knowledge (28%) and attitude (32%). Overall, 12% of answers indicated use in work, most commonly by journalists. Fourteen per cent of the answers involved evaluating bias and conflicts of interest. Only two participants said they had not used the workshop skills at all. A summary of participants' comments appears in Table 5.

Table 5. Participants' (n = 100) self-reported use of critical appraisal skills 6 months post-workshop

Domain: Knowledge
• Multiple participants from all workshop types (i.e. consumer, provider and journalist) mentioned increased knowledge and understanding of how risk is characterized and how results are presented
• Multiple participants from all workshop types mentioned the importance of evaluating potential biases in studies and other information

Domain: Confidence
• Multiple participants from all workshop types reported feeling better able to understand research terminology, statistical concepts and sources of bias

Domain: Behaviour
General critical appraisal
• Multiple participants from all workshop types reported more critical and skilful reading of studies and other health information (e.g. ARR/RRR, study populations, reporting; better understanding of terminology, statistical concepts and sources of bias)
• Multiple participants from all workshop types mentioned that they watched more for bias introduced by financial conflicts of interest (critical judgement of validity and credibility; importance of taking funding sources into account when evaluating information)
Communication
• Consumers primarily reported changes in communication with providers and family members, and in consumer health advocacy
• Providers primarily reported changes in communication with other providers and patients
• Journalists primarily reported changes in communication with readers/audiences
• All workshop types: questioning health-care providers and others who provide health information
Health-care decision making
• All workshop types: reported using their skills for decisions relating to themselves, family and others
• Providers: reported using their skills for decisions relating to patients/clients


Discussion

The findings of our study challenge the emphasis on critical appraisal trainings aimed at enhancing the evidence-based practice of physicians, medical students and other clinicians. We successfully conducted workshops with non-physician health workers, members of consumer/consumer advocacy groups and journalists. While participant groups began the trainings with different levels of baseline knowledge and confidence, all three showed increases in confidence and in their ability to understand key concepts in critical appraisal of health information, including scientific papers that are generally available to and used by physicians, researchers and other members of the scientifically trained elite. This suggests that while there may be dissimilarities in knowledge among the groups, they have a similar capacity to assimilate new information. Although we saw increases in confidence scores for all groups, consumers had somewhat smaller gains, suggesting that providers and journalists may be more confident of their ability to use their new knowledge, perhaps because of work-related exposure to health information. Our findings indicate that while study participants overall did not use evidence more frequently than before the workshops, their use of evidence was enhanced by the skills they acquired in all three domains: knowledge, confidence and behaviour. That journalists reported an increased use of evidence in a domain relevant to their role in interpreting and reporting health information is a potentially important finding that merits further investigation.

All groups began the workshops with the attitude that it is important to know about funding sources and researchers' personal conflicts of interest. Throughout the trainings, we stressed that while financial sponsorship does not necessarily lead to biased findings, the risk of bias is higher in industry-sponsored studies than in publicly funded studies of commercial products. The workshops provided participants with tools they could use to identify potential bias and to recognize ways research can be manipulated, and results presented, so that the findings favour study sponsors' products. Participants began using these tools right away in small-group sessions and class discussions, including questioning the role of our sponsor in our choice of workshop materials.

If widely implemented, critical appraisal training for diverse groups of learners could have broad effects on the public's health. Consumer advocates and journalists are important intermediaries in the dissemination of health information to policy makers, health-care providers, and the public at large. Consumers themselves are faced with an increasing amount of complex research information, both from primary sources and from such secondary sources as internet web pages and direct-to-consumer marketing. They should have access to the tools they need to evaluate this information and feel empowered to use it in their own decision making. However, consumers generally have limited access to journal articles, so designing subsequent consumer workshops with an additional module that focuses explicitly on critical appraisal of media sources and advertising might provide skills that consumers can use frequently and confidently.


Our pre-/post-follow-up study design does not allow us to demonstrate a causal relationship between participation in critical appraisal workshops and increased knowledge, confidence and use of skills. The lack of control groups makes it difficult to identify test–retest problems that may arise. Respondents might answer repeated questions based on their memory of their previous answers, or their understanding of the questions may change, leading them to answer differently even if there is no change in the items being measured. However, our finding that many increases were maintained 6 months after the workshop suggests that the results are valid for this group of learners. Several of our measures were self-reported, and some investigators question the accuracy of self-reported skills acquisition and ability to apply health evidence.[16, 18] In an effort to better understand the effects of the workshop on participants, we collected and analysed both quantitative and qualitative data. Our multi-pronged, mixed-method approach strengthens the validity of our findings and provides insight into immediate and extended changes in knowledge, confidence and behaviours reported by our workshop participants.

Our evaluation questionnaire was based on previously used and validated instruments, but was not itself validated for the specific types of learners in our study. However, Fritsche et al.[18] demonstrated construct validity in their instrument by distinguishing among groups of learners with different levels of expertise in evidence-based medicine. Similarly, our groups of consumers reported less previous training in research than the journalists and health-care providers, and on average they showed lower baseline levels in the domains of knowledge and confidence. Evaluations of evidence-based practice interventions need to take into account this study's methodological difficulties, which reflect challenges in evaluating interventions that are conducted in real-world settings. The on-screen system we used to collect pre–post data was problematic. Some responses were not initially recorded, requiring participants to repeatedly enter their choices on the keypads. Also, because we asked questions throughout the course of the workshop, some participants may not have been in the room for both the before and after versions of the questions. This is reflected in differences in the number of respondents who answered the various questions. Because of this, we used statistical methods appropriate for independent samples, rather than methods that account for missing values in classic repeated measures data. Finally, randomized controlled and pragmatic trials, process evaluations, and other procedures designed for complex interventions could provide more information and a more precise assessment of the effectiveness and generalizability of workshops such as ours, which are guided by a consistent conceptual framework and involve several flexible components that vary by context and participant feedback.[60, 61]


Implications

All three groups of workshop participants were highly educated, largely affluent and predominantly Caucasian and female. It is unclear whether this training in its current form would be effective in groups with low educational attainment or with low-income populations. We believe that such trainings could indeed be successfully conducted with more demographically diverse groups of participants. Future projects of this type could be designed specifically to enrol low-income and underserved consumers by following some of the principles that guide community-based participatory research: using community advisory boards and paid peer educators, and offering workshops in bilingual formats.

Our findings suggest that it is possible to successfully conduct critical appraisal workshops to aid health-related decision making for groups who have previously not had access to this kind of training. Workshops could be particularly useful if offered in the initial stages of community-based participatory research projects, where, ideally, power, responsibility, and research activities are shared in academic-community partnerships. Community groups may lack training in research methods, and academic researchers may develop research questions and design studies without informed community input into the process. This can result in tensions between conducting research that meets the community's needs and academics' insistence on using sound methodology. Making critical appraisal training available to community partners while at the same time educating academic partners about community priorities could enhance collaborations and produce methodologically sound studies that are relevant to the needs of the community.[62] Trainings could also be incorporated into public health interventions. For example, workshops focusing on critical appraisal of tobacco industry marketing could be offered to youths and young adults.

The lack of critical appraisal skills is not the only or even the most important barrier to using evidence in health-care and health policy decision making. The ability to understand research can help people make evidence-based decisions in the face of marketing pressures and ill-conceived policies, but it cannot make up for absent or poor-quality studies, conflicts of interest, or bodies of research that underrepresent diverse populations or particular patient groups. Therefore, when new users of scientific literature develop or improve their critical appraisal skills, it is likely that they will encounter much of the uncertainty that clinicians, policymakers, and researchers currently encounter. If all of these stakeholders use their critical appraisal skills to demand unbiased studies that are relevant to diverse populations and real-world situations, we could see a rise in the quality and applicability of health research, leading to better policies and clinical guidelines and, ultimately, improved population health.


Acknowledgements

The authors gratefully acknowledge Dorie Apollonio for substantial contributions in the early stages of the project, Steven Paul for statistical consulting, the UCSF/CTSI Community Engagement consultants for advice on partnering with community groups, Maureen Boland and David Krauth for research assistance, and Lisa Hirsch for proofreading and editing. We also thank workshop facilitators Martha Michel, Monique Anderson, Nancy Oliva, Ansgar Gehardus, and David Tuller. We give special thanks to our community partners: the Newcomers Health Program and the San Francisco Community Clinic Consortium at the San Francisco Department of Public Health, the Ohlone Herbal Center, Breast Cancer Action, the Disability Rights Education and Defense Fund (DREDF), the Kaiser Hospital Professional Performance Committee, the California Association of Retired Americans (CARA), the UC Berkeley Graduate School of Journalism and the Northern California Association of Healthcare Journalists.


Funding

This project was funded by the Flight Attendant Medical Research Institute (FAMRI), Miami, FL, USA. This project was also supported by NIH/NCRR UCSF-CTSI Grant Number UL1 RR024131. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIH.

Conflict of interest


None of the authors have personal, professional or financial conflicts of interest to disclose.


References
1. Sackett D, Straus SE, Richardson WS, Rosenberg W, Haynes RB. Evidence-Based Medicine: How to Practice and Teach EBM, 2nd edn. New York: Churchill Livingstone, 2000.
2. Berger B, Steckelberg A, Meyer G, Kasper J, Muhlhauser I. Training of patient and consumer representatives in the basic competencies of evidence-based medicine: a feasibility study. BMC Medical Education, 2010; 10: 16.
3. Bero LA, Jadad AR. How consumers and policymakers can use systematic reviews for decision making. Annals of Internal Medicine, 1997; 127: 37–42.
4. Oliver S, Clarke-Jones L, Rees R et al. Involving consumers in research and development agenda setting for the NHS: developing an evidence-based approach. Health Technology Assessment, 2004; 8: 1–148, III–IV.
5. Cassels A, Hughes MA, Cole C, Mintzes B, Lexchin J, McCormack JP. Drugs in the news: an analysis of Canadian newspaper coverage of new prescription drugs. CMAJ, 2003; 168: 1133–1137.
6. Moynihan R. Making medical journalism healthier. Lancet, 2003; 361: 2097–2098.
7. Moynihan R, Bero L, Ross-Degnan D et al. Coverage by the news media of the benefits and risks of medications. New England Journal of Medicine, 2000; 342: 1645–1650.
8. Taylor RS, Reeves BC, Ewings PE, Taylor RJ. Critical appraisal skills training for health care professionals: a randomized controlled trial [ISRCTN46272378]. BMC Medical Education, 2004; 4: 30.
9. Taylor R, Reeves B, Ewings P, Binns S, Keast J, Mears R. A systematic review of the effectiveness of critical appraisal skills training for clinicians. Medical Education, 2000; 34: 120–125.
10. del Mar C, Glasziou P, Goodall S. Innovative evidence-based medicine workshops for general practitioners. 9th Annual Cochrane Colloquium, Lyon, France, 2001.
11. Parkes J, Hyde C, Deeks J, Milne R. Teaching critical appraisal skills in health care settings. Cochrane Database of Systematic Reviews, 2001; 3: CD001270.
12. Green ML, Ellis PJ. Impact of an evidence-based medicine curriculum based on adult learning theory. Journal of General Internal Medicine, 1997; 12: 742–750.
13. Coomarasamy A, Taylor R, Khan KS. A systematic review of postgraduate teaching in evidence-based medicine and critical appraisal. Medical Teacher, 2003; 25: 77–81.
14. Hyde C, Parkes J, Deeks J, Milne R. Systematic review of effectiveness of teaching critical appraisal (Structured abstract). Database of Abstracts of Reviews of Effects, 2011.
15. Forsetlund L, Bradley P, Forsen L, Nordheim L, Jamtvedt G, Bjorndal A. Randomised controlled trial of a theoretically grounded tailored intervention to diffuse evidence-based public health practice [ISRCTN23257060]. BMC Medical Education, 2003; 3: 2.
16. Nabulsi M, Harris J, Letelier L et al. Effectiveness of education in evidence-based healthcare: the current state of outcome assessments and a framework for future evaluations. International Journal of Evidence-Based Healthcare, 2007; 5: 468–476.
17. Coomarasamy A, Khan KS. What is the evidence that postgraduate teaching in evidence based medicine changes anything? A systematic review. BMJ, 2004; 329: 1017.
18. Fritsche L, Greenhalgh T, Falck-Ytter Y, Neumayer HH, Kunz R. Do short courses in evidence based medicine improve knowledge and skills? Validation of Berlin questionnaire and before and after study of courses in evidence based medicine. BMJ, 2002; 325: 1338–1341.
19. Norman GR, Shannon SI. Effectiveness of instruction in critical appraisal (evidence-based medicine) skills: a critical appraisal. CMAJ, 1998; 158: 177–181.
20. Dickersin K, Braun L, Mead M et al. Development and implementation of a science training course for breast cancer activists: project LEAD (leadership, education and advocacy development). Health Expectations, 2001; 4: 213–220.
21. Mosconi P, Colombo C, Satolli R, Liberati A. PartecipaSalute, an Italian project to involve lay people, patients' associations and scientific-medical representatives in the health debate. Health Expectations, 2007; 10: 194–204.
22. Thomson O'Brien MA, Freemantle N, Oxman AD, Wolf F, Davis DA, Herrin J. Continuing education meetings and workshops: effects on professional practice and health care outcomes. Cochrane Database of Systematic Reviews, 2001; 2: CD003030.
23. Lavis JN, Lomas J, Hamid M, Sewankambo NK. Assessing country-level efforts to link research to action. Bulletin of the World Health Organization, 2006; 84: 620–628.
24. Enstrom JE, Kabat GC. Environmental tobacco smoke and tobacco related mortality in a prospective study of Californians, 1960–98. BMJ, 2003; 326: 1057.
25. Bier ID, Wilson J, Studt P, Shakleton M. Auricular acupuncture, education, and smoking cessation: a randomized, sham-controlled trial. American Journal of Public Health, 2002; 92: 1642–1647.
26. Kabir Z, Manning PJ, Holohan J, Keogan S, Goodman PG, Clancy L. Second-hand smoke exposure in cars and respiratory health effects in children. European Respiratory Journal, 2009; 34: 629–633.
27. Lazcano-Ponce E, Benowitz N, Sanchez-Zamorano LM et al. Secondhand smoke exposure in Mexican discotheques. Nicotine & Tobacco Research, 2007; 9: 1021–1026.
28. Lee DJ, Gaynor JJ, Trapido E. Secondhand smoke and earaches in adolescents: the Florida Youth Cohort Study. Nicotine & Tobacco Research, 2003; 5: 943–946.
29. Repace J, Hughes E, Benowitz N. Exposure to second-hand smoke air pollution assessed from bar patrons' urinary cotinine. Nicotine & Tobacco Research, 2006; 8: 701–711.
30. Lexchin J. Those who have the gold make the evidence: how the pharmaceutical industry biases the outcomes of clinical trials of medications. Science and Engineering Ethics, 2012; 18: 247–261.
31. Barnes DE, Bero LA. Why review articles on the health effects of passive smoking reach different conclusions. JAMA, 1998; 279: 1566–1570.
32. Bero LA, Rennie D. Influences on the quality of published drug studies. International Journal of Technology Assessment in Health Care, 1996; 12: 209–237.
33. Bero L, Oostvogel F, Bacchetti P, Lee K. Factors associated with findings of published trials of drug–drug comparisons: why some statins appear more efficacious than others. PLoS Medicine, 2007; 4: e184.
34. Lexchin J, Bero LA, Djulbegovic B, Clark O. Pharmaceutical industry sponsorship and research outcome and quality: systematic review. BMJ, 2003; 326: 1167–1170.
35. Akl EA, Oxman AD, Herrin J et al. Using alternative statistical formats for presenting risks and risk reductions. Cochrane Database of Systematic Reviews, 2011; 3: CD006776.
36. Guyatt G, Rennie D, Meade MO, Cook DJ. Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice, 2nd edn. New York: McGraw-Hill Medical, 2008.
37. Ridker PM, Danielson E, Fonseca FA et al. Rosuvastatin to prevent vascular events in men and women with elevated C-reactive protein. New England Journal of Medicine, 2008; 359: 2195–2207.
38. Hlatky MA. Expanding the orbit of primary prevention – moving beyond JUPITER. New England Journal of Medicine, 2008; 359: 2280–2282.
39. AstraZeneca. CRESTOR demonstrates dramatic CV risk reduction in a large statin outcomes study; 2008 November 9.
40. Hadler NM. Crestor, by Jove… or not. ABC News, 2008 November 10.
41. Arnst C. Crestor study will boost statin demand. Business Week, 2008 November 9.
42. Associated Press. FDA panel backs Crestor for heart attack, stroke prevention. USA Today, 2009 December 15.
43. Healy M. Effectiveness of statins is called into question. Los Angeles Times, 2010 August 9.
44. Belluck P. Cholesterol-fighting drugs show wider benefit. New York Times, 2008 November 10.
45. Editorial. Who should take a statin? New York Times, 2008 November 17.
46. Parker-Pope T. A call for caution in the rush to statins. New York Times, 2008 November 18.
47. Wilson D. Risks seen in cholesterol drug use in healthy people. New York Times, 2010 March 30.
48. Fernandez E. New heart disease test could become routine. San Francisco Chronicle, 2008 November 11.
49. Sternberg S. Crestor would save lives at $500,000 each. USA Today, 2008 November 10.
50. US Food and Drug Administration. FDA approves new indication for Crestor. 2010 February 9.
51. Hughes KS, Schnaper LA, Berry D et al. Lumpectomy plus tamoxifen with or without irradiation in women 70 years of age or older with early breast cancer. New England Journal of Medicine, 2004; 351: 971–977.
52. Turner RB, Bauer R, Woelkart K, Hulsey TC, Gangemi JD. An evaluation of Echinacea angustifolia in experimental rhinovirus infections. New England Journal of Medicine, 2005; 353: 341–348.
53. Wakefield AJ, Murch SH, Anthony A et al. Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. Lancet, 1998; 351: 637–641.
54. Bradley P, Oterholt C, Herrin J, Nordheim L, Bjorndal A. Comparison of directed and self-directed learning in evidence-based medicine: a randomised controlled trial. Medical Education, 2005; 39: 1027–1035.
55. Taylor R, Reeves B, Mears R et al. Development and validation of a questionnaire to evaluate the effectiveness of evidence-based practice teaching. Medical Education, 2001; 35: 544–547.
56. Oxman AD. Critical use of research evidence (CURE). A questionnaire to assess consumers' ability to understand and use reports about the effects of health care. [Questionnaire developed for the Annual Rocky Mountain Workshop on How to Practice Evidence-Based Health Care]. In press 2009.
57. The Rocky Mountain Workshop on How to Practice Evidence-Based Health Care. 2010; Available at,
58.
59. Retraction – Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. Lancet, 2010; 375: 445.
60. Petticrew M. When are complex interventions ‘complex’? When are simple interventions ‘simple’? The European Journal of Public Health, 2011; 21: 397–398.
61. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ, 2008; 337: a1655. doi:10.1136/bmj.a1655.
62. Minkler M, Wallerstein N (eds). Community-Based Participatory Research for Health. San Francisco, CA: Jossey-Bass, 2003.