RECOVER evidence and knowledge gap analysis on veterinary CPR. Part 1: Evidence analysis and consensus process: collaborative path toward small animal CPR guidelines


  • Manuel Boller Dr. med. vet., MTR, DACVECC,

    Corresponding author
    • Department of Emergency Medicine, School of Medicine, Center for Resuscitation Science, and the Department of Clinical Studies, School of Veterinary Medicine, University of Pennsylvania, Philadelphia, PA
  • Daniel J. Fletcher PhD, DVM, DACVECC

    • College of Veterinary Medicine, Department of Clinical Sciences, Cornell University, Ithaca, NY

  • Drs. M. Boller and D. J. Fletcher are equal first co-authors.

  • The authors declare no conflicts of interest.

Address correspondence and reprint requests to

Dr. Manuel Boller, Center for Resuscitation Science, School of Medicine, University of Pennsylvania, 125 S 31st St - Suite 1200, Philadelphia, PA 19104, USA.




Objective

To describe the methodology used by the Reassessment Campaign on Veterinary Resuscitation (RECOVER) to evaluate the scientific evidence relevant to small animal CPR and to compose consensus-based clinical CPR guidelines for dogs and cats.


Design

This report is part of a series of 7 articles on the RECOVER evidence and knowledge gap analysis and consensus-based small animal CPR guidelines. It describes the organizational structure of RECOVER and the evaluation process employed, which consisted of standardized literature searches, analysis of relevant articles according to study design, species, and predefined quality markers, and the drafting of clinical CPR guidelines based on these data. This article therefore serves as the methodology section for the subsequent 6 RECOVER articles.


Setting

Academia, referral practice.


Results

RECOVER is a collaborative initiative that systematically evaluated the evidence on 74 topics relevant to small animal CPR and generated 101 clinical CPR guidelines from this analysis. All primary contributors were veterinary specialists, approximately evenly split between academic institutions and private referral practices. The evidence evaluation and guideline drafting processes were conducted according to a predefined sequence of steps designed to reduce bias and increase the repeatability of the findings, including multiple levels of review, culminating in a consensus process. Many knowledge gaps were identified that will allow prioritization of research efforts in veterinary CPR.


Conclusions

Collaborative systematic evidence review is organizationally challenging but feasible and effective in veterinary medicine. More experience is needed to refine the process.


Abbreviations

ACVA: American College of Veterinary Anesthesia
ACVECC: American College of Veterinary Emergency and Critical Care
ALS: advanced life support
BLS: basic life support
CAB: Commonwealth Agricultural Bureaux
CPA: cardiopulmonary arrest
CPR: cardiopulmonary resuscitation
LOE: level of evidence
RCT: randomized controlled trial
VECCS: Veterinary Emergency and Critical Care Society




Less than 6% of dogs and cats that experience cardiopulmonary arrest (CPA) in the hospital survive to hospital discharge.[1-3] The survival rate is approximately 20% in humans who experience in-hospital cardiac arrest.[4, 5] Despite many differences between humans and dogs or cats, this disparity suggests that CPA outcomes could be considerably improved in veterinary patients. A comprehensive treatment strategy to optimize survival from small animal CPA that includes preparedness and prevention measures, basic life support (BLS) and advanced life support (ALS), and post-cardiac arrest (PCA) care has been proposed.[6] However, consensus-based guidelines for such strategies do not exist in veterinary medicine, nor has the evidence been systematically evaluated and graded to build the foundation for such guidelines. Current veterinary CPR recommendations have been derived from guidelines for humans (eg, the American Heart Association guidelines for CPR and emergency cardiovascular care) or have been based on veterinary expert opinion.[6-12] In addition, there appears to be disagreement about how best to perform CPR among veterinary clinicians, even among boarded emergency and critical care specialists.[13]

The Reassessment Campaign on Veterinary Resuscitation (RECOVER) was designed to systematically evaluate the evidence on the clinical practice of veterinary CPR with 2 overarching goals: first to devise clinical guidelines on how to best treat CPA in dogs and cats, and second to identify important knowledge gaps in veterinary CPR that need to be filled in order to improve the quality of recommendations, and thus the quality of patient care in the future. This will allow construction and implementation of educational initiatives based on the clinical guidelines and will further build the foundation for coordinated research initiatives in veterinary resuscitation.

The RECOVER organization consisted entirely of volunteers: 2 co-chairs who provided oversight for all major phases of project management, from conception and initiation to dissemination of the findings; an advisory board composed of experts in various fields related to CPR; and over 80 veterinarians participating in 1 of 5 resuscitation topic domains responsible for evidence gathering and analysis in the areas of preparedness and prevention, BLS, ALS, monitoring, or PCA care (Figure 1). Each domain was led by 1 or 2 domain chairs, who organized and supported a group of veterinarians, each tasked with answering a specific clinically oriented question on veterinary CPR. They first identified the relevant literature and graded each study according to predefined strength and quality metrics. Because each question was asked, evaluated, and answered on a structured worksheet, these reviewers were termed worksheet (WS) authors. They were recruited via email invitations distributed to all members of the American College of Veterinary Emergency and Critical Care (ACVECC) and the American College of Veterinary Anesthesia (ACVA); consequently, most participants were diplomates of these 2 colleges. A total of 68 WS authors, 7 domain chairs, and 12 advisory board members contributed to the initiative. While most were located in the United States, volunteers from Canada and Europe also participated. An anonymous electronic survey of the WS authors after completion of the WS process (response rate = 79%) revealed roughly equal participation from members of academic institutions (48%) and private specialty practices (52%). Despite a significant time commitment (37 ± 17 hours/WS author; > 2,000 hours total), the vast majority considered the importance of the project well worth the effort (96%) and would volunteer again (94%).

Figure 1.

RECOVER organizational chart. RECOVER, Reassessment Campaign on Veterinary Resuscitation; ILCOR, International Liaison Committee on Resuscitation; ACVECC, American College of Veterinary Emergency and Critical Care; JVECC, Journal of Veterinary Emergency and Critical Care; VECCS, Veterinary Emergency and Critical Care Society; EVECCS, European Veterinary Emergency and Critical Care Society; AVECCT, Academy of Veterinary Emergency and Critical Care Technicians; ACVA, American College of Veterinary Anesthesia.

The evidence evaluation methodology was highly collaborative and similar to the evidence evaluation processes used by the International Liaison Committee on Resuscitation (ILCOR), the organization that has conducted evidence analysis for treatment recommendation development in human CPR since 1992.[14, 15] Ties between the 2 organizations (RECOVER and ILCOR) already existed at the initiation of the project but were further strengthened during its evolution. Moreover, RECOVER was also endorsed and supported by the ACVECC and the Veterinary Emergency and Critical Care Society (VECCS). We are optimistic that RECOVER is a sustainable initiative because it is composed of a large number of specialist volunteers from private practice and academia willing to commit to future projects and has obtained broad endorsement from important organizations in the fields of veterinary and human resuscitation. This paper describes the methodology used to develop the RECOVER veterinary CPR guidelines.

Evidence Evaluation Process

The RECOVER evidence evaluation process included a series of steps: identification of relevant topics, execution of a standardized literature search, assessment of the identified relevant articles, and finally assessment and integration of the evidence.

Asking relevant clinical questions

The first task undertaken by the domain chairs and the advisory board was the identification of relevant clinical questions for each of the 5 RECOVER domains. As a first step, the 277 questions investigated by ILCOR in 2010 were evaluated, and those most relevant to veterinary medicine were modified for use by RECOVER. Veterinary-specific topics not covered by the ILCOR questions were then identified and added. A total of 87 questions were composed, reviewed by the domain chairs and the advisory board, and categorized by priority scores. Each question was assigned to a single reviewer, who explicitly declared any conflicts of interest. To keep the scope of the work manageable, 74 high-priority questions were chosen for investigation, and the remaining questions, all with low priority scores, were excluded from review. However, it was recognized that not all significant topics could be investigated at this time, and that many questions remain to be answered in future initiatives.

Similar to the ILCOR evidence evaluation process,[15] the RECOVER questions were written in a standardized PICO (Population-Intervention-Comparison-Outcome) format to facilitate clear differentiation of the components of each question and the development of the literature search strategy.[16] An example ALS PICO question is “In dogs and cats with cardiac arrest due to VF (P), does the use of CPR before defibrillation (I) as opposed to defibrillation first (C), improve outcome (O) (eg, ROSC, survival)?” The strongest evidence for or against an intervention would emerge from a well-controlled trial (such as a randomized controlled trial), with the intervention and control groups as described in the PICO question using the same target species (dogs and/or cats with cardiac arrest and VF). The outcome was left unspecified in the questions, but WS authors were asked to clearly state outcome measures used in each reviewed article.

The search strategies

The literature search strategy for each question was designed to reduce reviewer bias toward preferential selection of articles and to identify all relevant literature. Each WS author was required to use 2 electronic databases for the literature search: MEDLINE and the Commonwealth Agricultural Bureaux (CAB) Abstracts database, which offers the most comprehensive indexing and abstracting of the veterinary literature.[17] While MEDLINE is a free resource usually accessed via PubMed, CAB Abstracts requires a subscription; peers in the same domain or domain chairs executed CAB search requests for WS authors without access. Other search engines and approaches were also permitted to minimize the risk of excluding relevant articles. Reviewers were instructed to use the “cited by” option of Scopus, Google Scholar, or Web of Science, taking a classic or landmark relevant paper as a starting point. In addition, the citations of topic-related review articles were examined for relevant publications. Each search strategy, including database, search terms, and number of hits, was detailed on the worksheet. In addition, the criteria used to exclude articles from further analysis were clearly stated as part of the search strategy. In order to be included in the scientific review, articles needed to be peer-reviewed original research published in English; abstracts and reviews were excluded from analysis. It was recognized that a substantial portion of knowledge, such as all research published in languages other than English, would be excluded from analysis, and that a bias toward studies with positive outcomes would be fostered by not including abstracts.[18] In the future, more international collaboration will be important to alleviate some of these limitations. Additional exclusion criteria may have been applied that were specific to each PICO question (eg, mild therapeutic hypothermia in conditions other than CPA).
Only articles deemed relevant by application of these criteria underwent detailed review. The search strategy was reviewed by the responsible domain chair, commented on and revised as necessary, and only when approved did further assessment of each relevant article ensue (Figure 2).

Figure 2.

Worksheet and guidelines flow chart. PICO, Population-Intervention-Comparison-Outcome; RECOVER, Reassessment Campaign on Veterinary Resuscitation; IVECCS, International Veterinary Emergency and Critical Care Symposium.

Assessment of relevant articles

All relevant articles were reviewed in detail, and each was assigned a level of evidence (LOE) according to criteria defined a priori. The LOE categorization provided a mechanism for building an overview of the overall strength of the evidence supporting and opposing the PICO question. The LOE is a characteristic of the study and grades it according to the likelihood of biased results. For example, a well-executed, prospective, randomized, controlled interventional trial is graded a higher LOE than a similar study without randomization. RECOVER used an LOE scale that was modified from the 2010 ILCOR evidence evaluation process in order to increase the weight of data originating from studies in the target species (dogs and/or cats).[15] Within the target species, the highest level of evidence (LOE 1) was assigned to clinical randomized controlled trials (RCTs), and the lowest (LOE 5) included case series and reports (Table 1). Experimental laboratory studies involving dogs or cats were classified as LOE 3. If the research did not involve dogs or cats, the study was categorized as LOE 6; this predominantly included clinical studies in humans and experimental studies in swine and rodents. A table concisely describing the study characteristics relevant to LOE assignment was included in the WS author instructions to ensure consistency between WS authors (Table 2).

Table 1. Levels of evidence. LOE 1 suggests the highest, and LOE 6 the lowest, level of evidence. Target species refers to dog or cat.

LOE 1. Randomized controlled trials or meta-analyses of RCTs in target species: clinical studies that prospectively collect data and randomly allocate the animals to intervention or control groups, or meta-analyses of these studies.

LOE 2. Prospective clinical studies in target species using concurrent controls (ie, controls recruited at the same time as experimental subjects) without randomization. These studies can be (1) interventional clinical, including animals that are allocated to intervention or control groups concurrently but in a nonrandom fashion, or (2) observational clinical, including cohort and case-control studies.

LOE 3. Experimental laboratory study in target species: these studies can be randomized, blinded, and controlled, but do not have to be. The study design needs to be reported, and the study is categorized according to methodological quality (good/fair/poor).

LOE 4. Clinical retrospective studies in target species: the study and control groups have been selected from a previous period in time.

LOE 5. Case series and case reports in target species: a single group of animals exposed to the intervention (or factor under study), but without a control group.

LOE 6. Studies, experimental or clinical, that are not directly related to the specific target species (ie, not dogs or cats) or target population (eg, not cardiac arrest). These could involve different species/populations, including experimental models in nontarget species, and include high-quality studies in humans (such as meta-analyses, RCTs, and clinical studies with concurrent controls, including observational studies; these are the human equivalents to LOE 1 and 2).
Table 2. Guide to study allocation to different levels of evidence (LOE). Target species refers to dogs or cats. For each of LOE 2 through LOE 6, the table marks which criteria (target species, clinical, randomized, controlled, concurrent controls) are mandatory (•) and which are optional (○) for that level.

In addition, a series of quality items was used to assess the methodological soundness of a study within a given LOE. Examples of general quality factors are similarity of treatment and control groups at the start of the study, the relevance of the study for the question asked, the clinical relevance of the effect size observed, and control for confounders. The number of quality items applicable to each study was used by the WS authors to assign a quality term (“good,” “fair,” or “poor”) to each study. The list of quality items used in RECOVER closely resembled that used in the ILCOR 2010 process for LOE 1, 2, 4, and 5,[15] but was adjusted to meet the needs of RECOVER for LOE 3 and LOE 6. Experimental animal studies (LOE 3) were considered “good” if they were randomized and had a control group, “fair” if a control group was included but without randomization, and “poor” if uncontrolled. In clinical studies in humans (LOE 6), RCTs were considered “good,” nonrandomized studies with concurrent controls “fair,” and those with retrospective (historic) controls or large retrospective studies “poor.” Uncontrolled studies were not considered in LOE 6. For studies in other nontarget species, such as swine and rodents, the quality factors used for LOE 3 applied.
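Taken together, the LOE assignment in Table 1 and the quality rules above amount to a simple decision procedure. The following Python sketch illustrates that logic for study classification and for the LOE 3 quality grading; the field and function names are illustrative only and were not part of the RECOVER process.

```python
def assign_loe(study):
    """Illustrative LOE assignment following Table 1 (hypothetical field names).

    `study` is a dict of booleans: target_species (dog/cat), clinical,
    retrospective, controlled, randomized.
    """
    if not study["target_species"]:
        return 6  # not dogs/cats (eg, humans, swine, rodents)
    if not study["clinical"]:
        return 3  # experimental laboratory study in dogs/cats
    if study["retrospective"]:
        return 4  # clinical retrospective study
    if not study["controlled"]:
        return 5  # case series or case report, no control group
    if study["randomized"]:
        return 1  # clinical RCT
    return 2      # prospective, concurrent controls, no randomization


def quality_loe3(randomized, controlled):
    """Quality term for LOE 3 experimental studies, as described above."""
    if controlled and randomized:
        return "good"
    if controlled:
        return "fair"
    return "poor"
```

For example, a prospective, randomized, controlled clinical trial in dogs maps to LOE 1, while the same design in swine maps to LOE 6.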

The grid of evidence

Once LOE and quality were assessed, all relevant articles were plotted in 1 of 3 tables according to their direction of support for the PICO question: supporting, neutral, or opposing evidence. In addition, each study was marked according to the outcome measures used (eg, blood pressure, ROSC, survival to hospital discharge) (Figure 3). This provided a graphical overview of the strength of the evidence for the conclusion, with studies in the upper and left portions of the table providing the strongest evidence, and studies in the lower right providing the weakest evidence.

Summarizing and integrating the evidence

The WS authors were asked to summarize the results of their review in a short narrative, including the overall balance between supportive, neutral, and opposing evidence, the clinical relevance of that evidence for CPR in dogs and cats, and the outcomes reported. Particular attention was given to the benefits and risks associated with the interventions examined. Furthermore, the reviewers' insight into the topic allowed them to identify contradictions within the cited studies. A succinct conclusion was then written that directly related the evidence to the clinical question, stated the overall answer to the question, and commented on any clinical recommendations for veterinary CPR to be drawn.

Finally, the reviewers identified the major knowledge gaps that emerged from the evidence evaluation process, focusing on gaps that need to be addressed before the clinical question can be answered conclusively.

After completion, the worksheet draft was reviewed by the domain chair(s), and the worksheets edited collaboratively by the WS author and domain chair until the evidence evaluation sheet was considered complete (Figure 2). Throughout the process, Internet-based collaboration1 and reference management2 tools were used to facilitate communication, the sharing of documents, and to allow central reference management for all domains.

Clinical Guidelines

Completed and approved worksheets were reviewed by the RECOVER chairs, who then drafted 101 clinical CPR guidelines based on the evidence analysis. The guidelines were designed to be succinct and clinically applicable; thus, recommendations were made whenever possible, even if the evidence analysis made it clear that the scientific basis for or against a treatment was weak. To address this variability in the evidence base upon which each recommendation was made, all guidelines were appended with (1) a Class, summarizing the size of the documented treatment effect expressed as a risk:benefit ratio, and (2) a Level, summarizing the confidence that this risk:benefit ratio was true based on the amount and quality of evidence available. In addition, the guidelines were worded in a standardized way to reflect the class and level of the recommendation.

Class and level of recommendations

The RECOVER class and level recommendation system paralleled the system used in the 2010 AHA guidelines for CPR and Emergency Cardiovascular Care (Table 3).[19] A Class I recommendation indicated that the benefit of an intervention far outweighed the associated risk and suggested that the treatment or procedure should be administered or performed. Level A then indicated that multiple high-quality and/or high LOE studies supported this recommendation. For RECOVER, this meant multiple high LOE or high quality studies were in support of the treatment or procedure examined and no evidence of harm emerged from the evidence analysis. An example is that CPR should be performed in 2-minute cycles without interruption, and duration of pauses between cycles minimized (I-A).[20, 21] On the other end of the evidence spectrum, a Level C recommendation was not supported by strong scientific data, but rather by case reports or series, expert opinion, or clinical standards. However, despite the low weight of evidence, a treatment or procedure could still have been recommended if the potential benefit far outweighed the risk (Class I). An example is the recommendation that post-cardiac arrest monitoring should be sufficient to detect impending reoccurrence of CPA (I-C).[20, 22] If the benefit of an intervention was less clear and additional research was needed to further demonstrate the usefulness of a treatment, the recommendation was categorized as Class II. If the expected treatment effect was clearly visible, it was assigned a Class IIa, while Class IIb was reserved for treatments with less clear or conflicting evidence on their usefulness but no substantial evidence of harm. 
An example for a Class IIa recommendation is that the use of atropine is reasonable in dogs and cats with asystole or PEA potentially associated with increased vagal tone (IIa-B).[20, 23] A Class IIb recommendation is that seizure prophylaxis with barbiturates in dogs and cats post-cardiac arrest may be considered (IIb-B).[20, 24] If at any level of evidence an intervention was considered to be more harmful than beneficial, it was assigned a Class III recommendation. An example is that fast rewarming at a rate > 1°C/h is not recommended in hypothermic dogs and cats post-cardiac arrest (III-A).[20, 24]

Figure 3.

Evidence neutral to the question on the use of vasopressin during CPR. The grid allows overall assessment of level of evidence (LOE) and the methodological quality (good/fair/poor), as well as the endpoints examined in the studies.

Table 3. Class and level of recommendation. The class of each recommendation describes the size of the treatment effect or the benefit-to-risk ratio, and the level is an expression of the weight of evidence in support of the class assignment.

Class of recommendation:
Class I: benefit >>> risk (should be performed)
Class IIa: benefit >> risk (reasonable to perform)
Class IIb: benefit ≥ risk (may be considered)
Class III: risk > benefit (should not be performed, since it is not helpful and may be harmful)

Level of recommendation:
Level A (multiple populations): multiple high-quality and/or high level of evidence studies
Level B (limited populations): few to no high-quality and/or high level of evidence studies
Level C (very limited populations): consensus opinion, expert opinion, standard of care


The drafted guidelines applicable to preparedness and prevention, BLS, ALS, monitoring, and PCA care were discussed with the respective domain chairs via phone conferences and reworded until consensus was reached. Consensus was defined as either mutual agreement on the exact wording of the guideline or, in the rare case where this could not be achieved despite prolonged discussion, mutual agreement on wording that all members could live with. The guidelines were then made accessible to the RECOVER advisory board, and their comments were solicited and integrated. The documents were then posted on the RECOVER website, and the most controversial guidelines were presented and discussed at the IVECCS 2011 meeting during a 3-hour session. The members of several professional organizations (ACVECC/VECCS/European Veterinary Emergency and Critical Care Society/Academy of Veterinary Emergency and Critical Care Technicians/ACVA/European College of Veterinary Anaesthesia and Analgesia) were invited by email to review and comment on the guidelines. The website was designed to allow blog-like commenting and discussion from any interested registered individual for 4 weeks. Comments were noted, discussed, and integrated into the consensus process.


The Journal of Veterinary Emergency and Critical Care was chosen as the primary mode of dissemination of the RECOVER evidence evaluation findings and clinical guidelines. A writing group was formed for each domain, with an additional group for a separate clinical guidelines manuscript. Each domain manuscript was intended to provide the reader with a succinct but comprehensive overview of all evidence evaluated, and thus was composed as a synopsis of the science evaluated during the RECOVER worksheet process. Accordingly, the knowledge gaps were also included in these 5 articles.[21-25] The WS authors were asked to critically review the drafted manuscript sections that related to their topics prior to finalization of the articles. The clinical CPR guidelines that emerged from all 5 domains were bundled together in 1 article.[20] That manuscript was written succinctly, with the idea that in-depth information could easily be gathered by consulting the science evaluation manuscripts, and in an attempt to provide practically useful information in an accessible format to ease implementation in the clinical setting. To provide concise overviews of the extensive clinical recommendations, treatment algorithms for CPR and PCA care, as well as a dosing chart for the most relevant medications, were generated and included in the guidelines article.[20]


The RECOVER initiative methodology enabled an efficient, systematic review of a large body of literature, primarily due to the contributions of a large number of volunteer WS authors. The stated willingness of these reviewers to participate in future initiatives, combined with the organizational infrastructure that has been developed, suggests that RECOVER is a sustainable effort, opening the possibility not only of reevaluating the CPR evidence in 5 years, but also of extending the scope to other issues relevant to resuscitation.

This initial process has also been a considerable learning experience for everyone involved, and several shortcomings were identified that need to be remedied in the future. Instructional tools for the WS authors must be developed to increase the efficiency and quality of the very complex evidence review process. It is a testament to the dedication of the WS authors that they were able to generate high-quality, comprehensive worksheets with minimal instruction, but better support tools such as webinars, interactive tutorials, or in-person workshops are needed. A more extensive administrative structure is also needed to allow the RECOVER chairs and domain chairs to focus on the science rather than the organization.

Despite these limitations, the evidence analysis and grading process leading to consensus guidelines on CPR in small animals was transparent and reproducible. It allowed a large number of colleagues to collaboratively review a huge body of literature in an organized fashion and in a short period of time. In doing so, they not only broke new ground in the area of veterinary resuscitation, but also built a foundation for future projects.


The authors would like to thank the American College of Veterinary Emergency and Critical Care (ACVECC) and the Veterinary Emergency and Critical Care Society for their financial and scientific support, as well as Armelle deLaforcade, ACVECC Executive Secretary, and Kathleen Liard, ACVECC Staff Assistant, for their administrative and organizational support. This work would have been impossible without the tireless efforts of the worksheet authors in the 5 RECOVER domains. Their contribution to this product cannot be overstated, and their dedication to this challenging task serves as an inspiration to the veterinary profession. Furthermore, the domain chairs who guided these worksheet authors deserve great credit. The domains were chaired by Drs. Maureen McMichael (Preparedness and Prevention), Kate Hopper (BLS), Elizabeth Rozanski and John Rush (ALS), Benjamin Brainard (Monitoring), and Sean Smarick and Steve Haskins (PCA care). We would also like to thank the RECOVER Advisory Board for their guidance and invaluable input during the planning and execution of this initiative: Dennis Burkett, ACVECC Past-President; Gary Stamp, VECCS Executive Director; Dan Chan, JVECC Liaison; Elisa Mazaferro, Private Practice Liaison; Vinay Nadkarni, ILCOR Liaison; Erika Pratt, Industry Liaison; Andrea Steele, AVECCT Liaison; Janet Olson, Animal Rescue Liaison; Joris Robben, EVECCS Past-President; Kenneth Drobatz, ACVECC Expert; William W. Muir, ACVECC and ACVA Expert; and Erik Hofmeister, ACVA Expert. We would also like to extend special thanks to Dr. Joris Robben for graciously moderating the RECOVER session at IVECCS 2011. Finally, we thank the many members of the veterinary community who provided input on the RECOVER guidelines at the IVECCS 2011 session and during the open comment period via the RECOVER web site.


  1. Basecamp, 37signals LLC, Chicago, IL.

  2. Mendeley, Mendeley Inc, New York, NY.