Clinical practice guidelines and economic analyses of new and established medical interventions have enjoyed a robust, albeit turbulent, increase in popularity over the past 2 decades. Thousands of reports and publications from both disciplines are now available from MEDLINE, the National Guidelines Clearinghouse, and other sources. What accounts for this proliferation? Many might argue that the one overriding impetus for both is the rapidly rising cost of medical care. On this one subject, the goals of guidelines and economic analyses are alike: encourage health care providers to adopt a “best practice” approach to care for a given patient population with a given condition. Adopting such an approach will reduce practice variation, decrease the amount of ineffective and therefore wasteful care, and improve outcomes by encouraging the use of proven therapies. Nevertheless, the “guidelines crowd” and the “economic analysis crowd” work largely in isolation and, as Wallace and colleagues demonstrate in this issue of JGIM,1 for the most part ignore each other when they compile and report their findings. Why is this so? As I discuss below, the primary problem is that each discipline has a different and oftentimes conflicting view of what is “best practice.” There are also several secondary issues that hinder clinical practice guidelines and economic analysis from being the match that some of us might have thought was “made in heaven.”
WHAT IS BEST PRACTICE? TWO CONFLICTING PERSPECTIVES
The fundamental force behind the creation of clinical practice guidelines—improving the standard practice of medicine—inevitably exerts a strong influence on the perspective and recommendations of these documents. In the best guidelines, recommendations are supported by high-quality evidence, with a strong preference for controlled clinical trials.2 Panels of clinicians and other professionals with expertise in the evaluation of experimental data are selected to review the evidence and make recommendations regarding the definition of the patient population of interest, treatment pathways, and outcomes. Outcomes are usually chosen based on their (1) commonality of use as endpoints in past studies and clinical trials; (2) familiarity to clinicians or administrators; (3) measurability using existing data systems; and (4) feasibility in evaluating trends within a relatively short time horizon (e.g., hospital readmission rates, peak flow readings). Experts in guideline development also suggest active participation in the guidelines creation process by those who are the target of guidelines as a way to improve acceptance by practitioners.3
This process of creating clinical practice guidelines raises several roadblocks to the incorporation of economic data. Until very recently, economic analyses were rarely conducted alongside clinical trials.4 Even when high quality economic data exist (unfortunately, often a rarity5,6), guideline developers recognize that economic efficiency arguments rarely sway the clinicians to whom these guidelines are usually directed, and in some cases, may actually engender resistance to their implementation.7 As a result, the process of creating “best practice” recommendations invariably directs attention away from economic studies, even when cost control concerns are a behind-the-scenes impetus for developing the guideline.
Like clinical practice guidelines, economic analyses usually start with a particular patient population, a clinical problem, and a set of treatment alternatives. From the perspective of an economic analysis, however, the definition of “best practice” is usually quite different from that of a clinical practice guideline. Economic analyses are concerned with the efficient allocation of limited resources across the health system, with the goal of maximizing the health of a population for a given budget constraint. As a result, treatments are weighed based on their incremental health value for a given cost relative to alternative uses of the money. Treatments are applied only up to the point at which their marginal health returns begin to diminish relative to cost. In contrast, clinicians are inclined to treat (or screen) until there is no more health to be gained. In my opinion, this issue is central to the apparent disconnection between clinical practice guidelines and economic analyses.
The distinction between marginal and absolute benefit is less of an issue in cases in which an intervention has been found to be cost-effective at the level of use recommended in the guideline. In contrast, applications in which a medical technology is effective but cost-ineffective will cause economists to make one recommendation and clinical practice guidelines creators to make another. Each may be loath to include information that questions the rationale behind their position. As a result, economic analyses may appear more frequently in guidelines in which the bulk of economic studies support a given guideline's recommendations (e.g., Wallace and colleagues' findings for smoking cessation) than they would in guidelines in which the economic evaluations are less favorable (e.g., surgical therapy for breast cancer).
A second important area of conflict is the recommendation by opinion leaders that quality-adjusted life years (QALYs) should be used as the unit of outcome for economic analyses.8 Despite their theoretical appeal as a tool for prioritization of health resources, QALYs have not been embraced by most decision makers. There are several reasons, but perhaps the most important is the discrepancy between how QALYs are derived and how clinical decisions are made. QALYs are a combination of years of life and quality of life, expressed as utilities.9 Clinical studies—even those concerned with survival—do not express outcomes in terms of years of expected life. Utilities can be difficult to collect, may be insensitive to small but meaningful changes in health,10,11 and are not familiar to most clinicians or administrators. Furthermore, QALYs, in essence, compute the “net present value” of life expectancy and quality of life over a lifetime, an often unreasonably long perspective for those who make decisions in health plans or for patient care.
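To make the underlying arithmetic concrete, a brief sketch may help. All numbers below are hypothetical, chosen for exposition rather than drawn from any study: each year of life is weighted by a utility between 0 (death) and 1 (full health), and competing treatments are compared by their incremental cost-effectiveness ratio (ICER).

```latex
% QALYs weight each period of life by its utility:
\text{QALYs} = \sum_{t} u_t \, \Delta t
% Hypothetical example: 10 years lived at utility 0.8 yields
10 \times 0.8 = 8 \ \text{QALYs}

% The ICER compares a new treatment A to standard care B:
\text{ICER} = \frac{C_A - C_B}{Q_A - Q_B}
% Hypothetical example: if A costs \$40{,}000 more than B
% and adds 0.5 QALYs, then
\text{ICER} = \frac{\$40{,}000}{0.5} = \$80{,}000 \ \text{per QALY gained}
```

A ratio of this size would exceed commonly discussed thresholds such as $50,000 per QALY gained, illustrating how a treatment can be effective yet judged cost-ineffective.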
A third important issue is the funding source for guidelines and economic analyses. Often these studies are sponsored by manufacturers with a particular agenda, such as weighing the benefits of their product against alternative treatments. For a newer product, economic analyses often base their efficacy data on placebo-controlled studies used to support approval by the Food and Drug Administration, then use modeling techniques to compare the products to standard treatments. Even well-constructed models do not meet most standards for high-quality clinical evidence and therefore may be viewed with suspicion by guidelines committees. In addition, the publication bias against negative studies that exists in the clinical literature is also found in the economics literature,12,13 leaving reviewers with few studies that might challenge the recommendations of a given guideline. Unfortunately, Wallace and colleagues do not note the funding sources for the guidelines and economic analyses they reviewed, nor do they report whether guidelines and cost-effectiveness analyses tend to coexist when manufacturers sponsor both.
Finally, from the health plan manager's perspective, a favorable cost-effectiveness ratio does not necessarily mean that adopting or promoting a technology is desirable. Health plan managers know that “cost-effective” does not mean “less expensive.” Increasing the use of a “cost-effective” technology means that a plan's expenditures will rise and, under a budget constraint, can only be made up by restricting the use of other technologies, reducing fees paid for care or other technologies, or raising premiums. In rare cases, interventions have the happy fate of being both effective and cost-saving; smoking cessation is one example. Even in this case, however, this will be good news to health plans only if the savings occur in a relatively short time horizon.14
The weight of guidelines for health plans lies in the clinical consensus that a particular medical intervention is desirable. Particularly for technologies that are supported by a well-respected guideline (such as the U.S. Preventive Services Task Force), an economic analysis will be useful only if it can help limit care to subpopulations in which the health benefit is highest for the cost. This is usually not the purview of economists, and as noted above, is not foremost in the minds of guidelines committees advocating wider use of the technology.
Is there a need for integration of economic analyses into clinical practice guidelines? Yes. Given both the steep increases we are experiencing in health care costs and the wave of expensive new technologies currently in the development pipeline,15 I would argue that clinical practice guidelines must evolve so that they promote the practice of effective and efficient medicine. To do this, the developers of guidelines and economic analyses need to find a common ground. This requires both parties to accept some compromises in their current positions regarding the definition of “best practice.”
Developers of clinical practice guidelines could begin by agreeing that treatments with cost-effectiveness ratios greater than a widely agreed-upon standard should be highlighted as such or (my preference) eliminated from recommended practice. Some argue that thresholds—such as $50,000 per QALY gained16—are arbitrary and will do little to control health care cost growth in the long run.17 This argument sidesteps the important point that some threshold is needed if economic studies are ever going to have “teeth” as decision aids. Some standards, such as those advocated for Canada,18 should be considered for the United States. Additionally, a standard would help familiarize clinicians with the metric of cost-effectiveness and with how widely used treatments measure up. Guidelines developers should also pay more attention to quality of life issues, as assessed by widely used measurement instruments.
Those who conduct economic analyses could do much to improve their acceptance in clinical practice guidelines. While embracing an “industry standard” such as cost per QALY, economists should also acknowledge that the payer and clinical perspectives are those by which decisions are made in our health economy. Correspondingly, economic analyses should include a table of results using a short-term, payer perspective with widely accepted clinical endpoints as the unit of effectiveness. When results from the short- and long-term perspective conflict, this can be addressed in the discussion. The Academy of Managed Care Pharmacy Guidelines for Economic Analysis of Medicines offers a useful template for how such evaluations should be reported.19 It is also important to expand independent funding opportunities for economic studies so that researchers can rely less on industry funding (since the latter inevitably results in a manuscript supporting the “value” of a particular product). To improve the timeliness, validity, and usefulness of economic evaluations, I also suggest supporting more “piggyback” economic analyses alongside clinical trials, and “line item” identification of economics as part of guideline development funds.
In conclusion, clinical practice guidelines and economic evaluations of new medical technologies are here to stay and are likely to play an increasingly significant role in future health care decision making. Part of the maturing process for these disciplines, however, is for the promoters of each to recognize that they are first and foremost aids in clinical practice and policy. A measure of compromise from each group, with the goal of increasing the usefulness of clinical practice guidelines, would do much to make the partnership a workable marriage.
Scott D. Ramsey, MD, PhD, Fred Hutchinson Cancer Research Center, Seattle, Wash, and Department of Medicine, University of Washington, Seattle, Wash.