Probing Guideline Fundamentals: An Alternate Perspective on Adherence

Authors

Peter Teitelbaum, MD

Corresponding Author: Peter Teitelbaum, MD, Riverside Travel Medicine Clinic, 411-1919 Riverside Drive, Ottawa, Ontario K1H 1A2, Canada. E-mail: travelclinic@rogers.com

Guideline panels have become an integral part of the medical landscape. With their content expertise and epidemiologic resources, they are well placed to provide practitioners with credible advice. However, the advice is not always taken.

In this issue of the Journal of Travel Medicine, Duffy and colleagues present one such example of low adherence to guidelines. They conducted interviews at three major US airports with travelers bound for countries endemic for Japanese encephalitis (JE).[1] The authors compared the number of individuals immunized against the disease with the number eligible according to US guidelines (Advisory Committee on Immunization Practices). They found a notably low uptake of the vaccine, with many of these travelers not recalling any discussion of JE vaccine at the clinic they attended.

A gap between guideline and practice has been observed in several areas of medicine, and the discrepancy is often attributed to the health care provider. There is, however, another plausible explanation: the difficulty can lie with the guidelines themselves. If they are perceived as unrealistic, or if their derivation is inadequately explained, practitioners may be reluctant to implement them.[2]

Issues around JE immunization provide a good example of the difficulties inherent in guideline formulation. The disease is severe in terms of both mortality and sequelae. However, it is also rare in those who visit regions where the disease exists. The most comprehensive review of incidence in travelers to endemic areas is a 2010 paper by Hills and colleagues, who found 55 published cases worldwide between 1973 and 2008.[3] Adjusting for an unknown degree of underreporting, they estimated an incidence for US travelers of <1 case per 1 million; the raw data indicated a considerably lower incidence of <0.2 cases per 1 million.

Consistent with these statistics are the findings of Ratnam and colleagues in their Brief Communication, also in this issue.[4] They measured JE seroconversions, rather than clinical cases, in 387 short-term Australian travelers to endemic areas. Seroconversion implies infection, with or without clinical illness. There are many subclinical infections for every case of JE, with estimates of the ratio spanning at least 25:1 to 300:1.[5] No seroconversions were identified in this study, an expected result given the sample size.
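
To make the sample-size point concrete, a rough back-of-envelope calculation can be run using only the figures cited above: an incidence on the order of 1 clinical case per million travelers and a subclinical-to-clinical ratio as high as 300:1. The sketch below is purely illustrative and assumes these upper-bound values; it is not a reanalysis of the study data.

```python
# Back-of-envelope expectation of seroconversions in a cohort of 387 travelers,
# using only the upper-bound figures cited in the text. Illustrative only.
clinical_risk_per_traveler = 1 / 1_000_000        # <1 clinical case per million travelers
subclinical_to_clinical_ratio = 300               # upper end of the 25:1 to 300:1 range
infection_risk_per_traveler = clinical_risk_per_traveler * (1 + subclinical_to_clinical_ratio)

n_travelers = 387
expected_seroconversions = n_travelers * infection_risk_per_traveler
print(f"Expected seroconversions: {expected_seroconversions:.2f}")  # roughly 0.12
```

With an expectation of roughly 0.12 seroconversions, observing none is exactly what these figures would predict.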

The inactivated SA14-14-2 JE vaccine is the product currently used in most developed countries. It is among the most expensive travel vaccines, which adds to the challenge of formulating well-considered guidelines. Duffy's interviews did not show cost to be an important impediment to acceptance,[1] but this finding runs counter to the experience of many travel medicine providers.

How can guideline committees weave these disparate variables—the rarity and severity of the disease, as well as vaccine efficacy, duration, known and unknown side effects, and cost—into a meaningful recommendation?

A basic outline may be described as follows: disease and vaccine data are retrieved from the literature, graded for quality, and assembled for use. A well-conceived algorithm then integrates these data mathematically to calculate the net benefit of vaccination. This provides an objective basis for guidelines, which are published together with a plain-language version of the algorithm. There is little room for arbitrariness in such a system. Users can see the assumptions and the logical underpinnings of what is being recommended, and those who disagree with any component of this decision-making process are free to make their own changes.
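
As a purely hypothetical illustration of what such an algorithm might look like, the sketch below combines disease risk, severity, vaccine efficacy, adverse events, and cost into a net-benefit figure and tests it against explicit thresholds. Every variable name, formula, and constant is an assumption made for the purpose of illustration; it does not reproduce the methodology of any existing guideline panel.

```python
# Hypothetical sketch of a transparent recommendation algorithm.
# Every name, formula, and threshold here is an illustrative assumption,
# not a reconstruction of any published guideline methodology.
from dataclasses import dataclass


@dataclass
class Inputs:
    disease_risk_per_million: float   # estimated clinical cases per 1,000,000 travelers
    prob_death_or_sequelae: float     # probability that a case ends in death or lasting harm
    vaccine_efficacy: float           # proportion of cases prevented in vaccinated travelers
    serious_ae_per_million: float     # serious vaccine adverse events per 1,000,000 courses
    cost_per_course: float            # price of a full vaccine course


def net_benefit_per_million(x: Inputs) -> float:
    """Severe outcomes averted minus serious vaccine harms, per million travelers."""
    averted = x.disease_risk_per_million * x.prob_death_or_sequelae * x.vaccine_efficacy
    return averted - x.serious_ae_per_million


def recommend(x: Inputs, min_net_benefit: float, max_cost_per_outcome_averted: float) -> bool:
    """Recommend vaccination only if the net benefit and the cost per outcome averted
    clear explicitly stated thresholds, i.e. the 'constants' a panel would have to publish."""
    benefit = net_benefit_per_million(x)
    if benefit <= 0:
        return False
    cost_per_outcome_averted = (x.cost_per_course * 1_000_000) / benefit
    return benefit >= min_net_benefit and cost_per_outcome_averted <= max_cost_per_outcome_averted
```

The point is not this particular formula but the visibility it forces: once the operations and constants are written down, they can be inspected, challenged, and adjusted.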

In practice, however, this is not how most recommendations come to pass. Guideline panels gather and assess data, often with considerable effort, but many appear to be working without a specific algorithm. Not surprisingly, there is apt to be a lack of transparency about how guidelines have been formulated. Referencing of data sources is not sufficient. What method has been used to systematically turn data into recommendation? What is the logical set of operations being applied to the data? How are the disease and vaccine variables being combined and computed to contribute to the result? Further, the panel will need to assign values to a set of constants within the algorithm. A threshold for acceptable risk must be agreed upon. These should be included in the published version of the algorithm.
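
Continuing the hypothetical sketch above, publishing the algorithm would amount to publishing something like the call below, in which every constant, including the threshold for acceptable risk, is visible and open to challenge. All of the values are placeholders; none are drawn from ACIP or any other guideline body.

```python
# Placeholder inputs and thresholds; none of these values come from ACIP
# or any other guideline body.
traveler_profile = Inputs(
    disease_risk_per_million=1.0,      # upper-bound incidence cited by Hills and colleagues
    prob_death_or_sequelae=0.5,        # assumed combined risk of death or lasting sequelae
    vaccine_efficacy=0.95,             # assumed protective efficacy
    serious_ae_per_million=1.0,        # assumed serious adverse event rate
    cost_per_course=300.0,             # assumed price of a full course, in US dollars
)

print(recommend(
    traveler_profile,
    min_net_benefit=1.0,                       # required net severe outcomes averted per million
    max_cost_per_outcome_averted=5_000_000.0,  # maximum acceptable cost per outcome averted
))
```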

In the absence of an explicit blueprint, panels must rely on strategies that are less evidence based. There is a tendency to “err on the side of caution,” seeking to avoid even very low levels of risk. The result is overly prescriptive recommendations that expose large numbers of individuals to minor side effects and unknown numbers to more serious adverse events, and that are costly for the benefit gained. Panels also tend to align their advice, taking into account the recommendations made in other jurisdictions when formulating their own. This circularity can give the misleading impression that different groups have reached similar conclusions independently, reinforcing the perception that the recommendations are well founded.

These imperfect approaches do not serve the traveler optimally; they spread the prescriptive tendency and protect the guideline panels themselves. They are a response to a lack of appropriate tools. While good progress has been made over the years on the data side of evidence-based medicine, advances are needed on the operational side—that is, the rational use of that hard-won data.

Declaration of Interests

The author states that he has no conflicts of interest.
