Over the past 50 years, major advances have been made in both the methods and the application of health technology assessment (HTA). For example, the Cochrane Collaboration and the Evidence-Based Medicine movement have provided systematized approaches to critically reviewing the scientific literature. More recently, new methods for making indirect comparisons have appeared in the literature; applied judiciously, these can be used to compare treatments in certain circumstances in which no head-to-head study has been conducted. The need for HTA was first explicitly conceptualized and implemented in the United States in the 1960s and 1970s, and many other countries followed suit. Indeed, in a number of countries, for example, Australia, Canada, Sweden, and the United Kingdom, rationalizing the use of new medical technologies became the focus of many governmental and quasi-governmental health agencies.
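
As a minimal illustration of one such indirect-comparison method, the sketch below implements a Bucher-style adjusted indirect comparison, in which two treatments are compared through a common comparator by differencing their trial-level log odds ratios and summing the variances. The trial results shown are hypothetical and are not drawn from any study discussed at the symposium.

```python
import math

def bucher_indirect_comparison(log_or_ac, se_ac, log_or_bc, se_bc):
    """Adjusted indirect comparison of A vs. B via a common comparator C:
    the indirect log odds ratio is the difference of the two direct log odds
    ratios, and their variances add."""
    log_or_ab = log_or_ac - log_or_bc
    se_ab = math.sqrt(se_ac ** 2 + se_bc ** 2)
    ci_low = math.exp(log_or_ab - 1.96 * se_ab)
    ci_high = math.exp(log_or_ab + 1.96 * se_ab)
    return math.exp(log_or_ab), (ci_low, ci_high)

# Hypothetical trials: treatment A vs. placebo C, and treatment B vs. placebo C
or_ab, ci = bucher_indirect_comparison(
    log_or_ac=math.log(0.70), se_ac=0.15,   # A vs. C: OR 0.70
    log_or_bc=math.log(0.85), se_bc=0.20,   # B vs. C: OR 0.85
)
print(f"Indirect OR (A vs. B): {or_ab:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```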

Over recent years, the United States has witnessed renewed interest in HTA, including calls for, and legislation to develop, a Center for Comparative Effectiveness; however, there is still much debate on the essential elements of this concept. The ongoing discourse can benefit from the experiences of countries with well-established HTA processes and from the evolving methodologies that facilitate unbiased assessment of comparative effectiveness. This special issue showcases the presentations made at the symposium "Apples and Oranges? Assessing Comparative Effectiveness and Comparative Value in the US and Other Countries," held on May 15, 2009. The symposium, sponsored by Oxford Outcomes, the National Pharmaceutical Council, and Shire Pharmaceuticals, brought together a group of international thought leaders who compared and contrasted the experiences of countries that use formal HTA processes to review new medicines and technologies and who reviewed recent trends in comparative effectiveness methodology.

The articles included in this supplement are edited versions of the symposium presentations. The symposium was divided into two sections. In the first section, authors discussed the experience with HTA in three jurisdictions:

  • The United Kingdom, presented by Ron Akehurst, BSc, Hon MFPHM, Professor of Health Economics, Dean and Chair of Executive Board, School of Health and Related Research, University of Sheffield, Sheffield, UK. The author describes aspects of the Technology Assessment Program that are underpinned by a fixed health-care budget and operated by the National Institute for Health and Clinical Excellence (NICE). It is noted that the processes are under great political and public scrutiny and are well recognized for their robustness and transparency, and that although decisions are sometimes highly contentious, NICE plays an important role in facilitating public discussion of cost-effectiveness thresholds for medical technologies. Moreover, although difficult to prove, NICE appraisal is believed to have been instrumental in improving health outcomes in a number of disease areas.
  • Sweden, presented by Egon Jonsson, PhD, Executive Director and CEO, Institute of Health Economics, Alberta, Canada. The author discusses the history of HTA in Sweden and describes Sweden's current wide-ranging, policy-orientated HTA process, which covers the whole health-care spectrum. HTA in Sweden involves multiple stakeholders and comprehensively examines comparative effectiveness within disease areas, for example, drug and alcohol abuse, depression, obesity, and hypertension. The review results provide an evidence base for best practices; however, it is recognized that multiple barriers to the uptake of effective technologies exist.
  • Canada, presented by Ron Goeree, BA, MA, Associate Professor, Department of Clinical Epidemiology & Biostatistics, McMaster University, and Director, Program for Assessment of Technology in Health, Hamilton, ON, Canada. The author describes the policy-orientated HTA process of one of the largest health jurisdictions in Canada, where technologies may undergo conditionally funded field evaluation (CFFE), also known as coverage with evidence development. This approach facilitates the formulation of evidence-based policy and informs budgetary decisions. Examples of the monetary impact of CFFE indicate improved value for money in various disease areas; however, there are challenges associated with effectively implementing CFFE that will require strong political will to overcome.

In the second section, six methodological issues germane to comparative effectiveness research were addressed:

  • Comparative assessment for medications and devices, presented by Scott Ramsey, MD, PhD, Professor, University of Washington, and Member, Fred Hutchinson Cancer Research Center, Seattle, WA, USA. The author compares the regulatory processes and costs associated with approval of drugs with those for devices. The primary limitation of the regulatory approval process for drugs is noted as the lack of generalizability to "real world" patient populations, and for devices, as the lack of a requirement for randomized trials.
  • Contemporary challenges in deriving summary estimates of comparative effectiveness using meta-analysis, presented by Ed Mills, MSc, PhD, LLM, Research Scientist, BC Centre for Excellence in HIV/AIDS, Vancouver, BC, Canada, and University of Ottawa. The author discusses the use of meta-analysis for identifying significant differences between outcomes. Because meta-analysis rests on the premise that the overall direction of effect will be relatively consistent between clinical trials, it is proposed that broad pooling of data can help resolve uncertainty that arises when individual studies are underpowered, particularly for sub-group analysis (a minimal pooling sketch follows this list).
  • Reflecting heterogeneity in patient benefits: the role of sub-group analysis with comparative effectiveness, presented by Mark Sculpher, PhD, Professor of Health Economics and Director of the Programme on Economic Evaluation and Health Technology Assessment, Centre for Health Economics, University of York, York, UK. The author discusses methods for incorporating multifactorial heterogeneity to capture sub-group benefit across outcomes. The importance of expressing health benefit on a single scale is highlighted, as well as the impact of medical history and patient preference on absolute health benefit.
  • Transportability between countries of comparative effectiveness, presented by Andrew Briggs, BA, MSc, DPhil, Lindsay Chair in Health Policy and Economic Evaluation, University of Glasgow, Glasgow, UK. The author discusses the merits and limitations associated with extrapolating comparative effectiveness data geographically. Examples are used to illustrate methodological approaches that can be applied to examine the significance of differences in treatment effects and the effect of treatment on quality of life in different regions. It is noted that these effects appear to work on different scales and that analyses must accommodate multiple components to determine net clinical benefit.
  • Experimental and observational data and formulary listing, presented by Raulo Frear, PharmD, Director, The Regence Group. The author discusses the formulary decision-making process of a large health insurance group in the United States. Emphasis is placed on the role of formulating key research questions, adherence to a systematic approach to critical appraisal, and transparency of the process. Examples are used to illustrate the utility of critical appraisal of scientific evidence in informing formulary decisions.
  • Net clinical benefit: the art and science of jointly estimating benefits and risks of medical treatment, presented by Adrian Towse, MA, MPhil, Director, Office of Health Economics, London. The author raises the issue of trade-offs between positive and negative health effects, the utility of the QALY as a risk–benefit measure, and how patient willingness to trade affects health outcomes for different sub-groups of patients. It is noted that this field is in its infancy and that whether weighting systems that account for positive and negative health effects improve health-care decision-making remains to be established.
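
The meta-analytic pooling described in the Mills presentation above can be made concrete with a short sketch. The example below computes an inverse-variance fixed-effect summary and a DerSimonian-Laird random-effects summary from entirely hypothetical study-level log odds ratios and variances; it illustrates standard pooling formulas, not the specific methods discussed at the symposium.

```python
import math

def pool_effects(effects, variances):
    """Inverse-variance pooling of study-level effects (e.g., log odds ratios).
    Returns fixed-effect and DerSimonian-Laird random-effects summaries."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    se_fixed = math.sqrt(1.0 / sum(w))

    # Between-study heterogeneity: DerSimonian-Laird estimate of tau^2
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)

    # Random-effects weights incorporate the between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    random = sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)
    se_random = math.sqrt(1.0 / sum(w_re))
    return (fixed, se_fixed), (random, se_random), tau2

# Hypothetical log odds ratios and variances from five underpowered trials
effects = [-0.22, -0.35, -0.10, -0.41, -0.05]
variances = [0.09, 0.12, 0.15, 0.20, 0.11]
(fe, se_fe), (re, se_re), tau2 = pool_effects(effects, variances)
print(f"Fixed effect: {fe:.2f} (SE {se_fe:.2f}); "
      f"random effects: {re:.2f} (SE {se_re:.2f}); tau^2 = {tau2:.3f}")
```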

The context for the symposium is the arguable assertion that there is a widening gap between what health-care consumers can expect and what can be delivered, and that a number of factors contribute to this gap. First and foremost is the rising cost of health care, which continues to increase faster than inflation every year and concerns payers worldwide. A second major factor is payers' growing awareness of these cost constraints; in the United States, for example, it has reached the point where the financial viability of the health-care system as a whole is being called into question. A third factor, in many jurisdictions, is a growing expectation among consumers of what health care can deliver and how the health-care system should help them maintain healthy lives.

Among other considerations, the ongoing introduction of new health technologies contributes significantly to rising health-care costs. Worldwide health technology sales were estimated at US$200 billion in 2008, with the United States accounting for about 45% of the world market [1]. Indeed, the medical technology industry invests heavily in research and development, bringing many new and effective health-care technologies to market, the large majority of which come at an increased cost. Not surprisingly, HTA and the comparative value of new health technologies are of particular interest to today's payers. That said, both Europe and the United States have a long history of applying evidence-based health care. For example, evidence-based practices were first documented in Sweden in the mid-1960s, whereas HTA per se essentially began in the 1970s in response to the introduction of new, very expensive technologies, the first of which was the computerized tomography scanner (see Jonsson, this issue); evaluations of many other new technologies, including electronic fetal monitoring, mammography, and magnetic resonance imaging, followed. The United States established one of the first agencies to assess the performance of new technologies, the Office of Technology Assessment, in 1972, which existed until 1994, whereas HTA agencies were established in Sweden, the UK, and Canada in 1987, 1999, and 2003, respectively. The latest iteration of HTA in the United States has been coined comparative effectiveness research. Gail Wilensky (former Medicare head) has proposed that the United States needs a body for comparative effectiveness research that is independent of the federal government [2]. In an interesting contrast, the United Kingdom's NICE has had to become more responsive to government, for example through the introduction of single technology assessments and the consideration of a different threshold for life-sustaining technologies. What we need to ask now is the question posed by Sorensen and colleagues [3]: how could an independent HTA review body, such as NICE, be instituted in the United States, and is this desirable and feasible?

It is clear that answering these questions requires an understanding of what is meant by comparative effectiveness research. The United States Institute of Medicine and the Congress initially framed it as the relative value of different treatments, particularly in terms of efficacy or effectiveness, whereas the Center for Medical Technology Policy and the American College of Physicians included cost-effectiveness in their definitions. One organization outside of the United States, the Organization for Economic Co-operation and Development, boldly suggested that any center for comparative effectiveness in the United States would likely only be undertaking cost-effectiveness assessments. That vision contrasts with statements from Health and Human Services Secretary Sebelius, who emphasized that a center for comparative effectiveness research would provide additional information to patients and providers so that they may make the best decisions possible regarding their treatment options. Subsequent to this symposium, the United States Institute of Medicine published a working definition of comparative effectiveness research (see Ramsey, this issue). This definition includes consumers, clinicians, purchasers, and policymakers among the stakeholders and suggests that effectiveness, safety, and cost-effectiveness should all be considered. There are clearly a number of important debates and questions about how best to assess the effectiveness of medical technologies. Whether or not to include costs and budget impact is a particularly important decision. As health-care costs steadily increase with the aging population, this has become the topic of the day for many governments and health-care providers. Certainly, the global economic events of the last year have emphasized the immediacy of the issue, and the hard choices already being made in other sectors suggest that this reality needs to be embraced by the health-care sector as well.

Why Do We Need Comparative Effectiveness?

Regardless of the definition, it is important to understand why we need comparative effectiveness research, and to acknowledge that it is really because of an economic market failure. Information itself is a "global public good" with tremendous potential impact. Economic theory predicts that a purely private market system will tend to undersupply the socially optimal amount of information because disclosure makes manufacturers vulnerable to "information free-riders," making it difficult for manufacturers to capture sufficient value from their innovations. Although the patent system works well to promote innovation and provide information up to the point of launch (in the case of pharmaceuticals), there are typically many unknowns regarding effectiveness and safety in the real world after products are on the market. Indeed, there is a paucity of evidence on what constitutes optimal practice. This applies to medical technologies and services, including drugs, devices, hospital distribution, and the specialities needed for optimal health-care implementation. Furthermore, once a technology is on the market, there are few mechanisms for collecting and disseminating information on how well it works, although Canada (see Goeree, this issue) and Sweden (see Jonsson, this issue) are taking steps to address this dearth of follow-up data. Because reliable and valid information is a public good from which all citizens of the world benefit, a large part of the responsibility for producing it should fall on central governments, and a way will need to be found to finance ongoing data collection from a global, public perspective.

Fixed health-care budgets, as applied in the United Kingdom, are the characteristic that differentiates "extra-welfarism" from "welfarism," i.e., whether to rely more on social preferences (the former) or on individual preferences (the latter). Setting a fixed health-care budget is a social choice that recognizes that resources are constrained and that hard choices must be made. In contrast, other jurisdictions have no rigid budget; in the United States, for example, this, together with the abundance of subsidized health plans, has led to health-care spending that is much higher than in many other countries (approximately 16% of GDP in the United States compared with between 8% and 12% in other Organization for Economic Co-operation and Development countries).

An important question regarding comparative effectiveness in countries that do not take costs into account is why payers make so little use of pharmacoeconomic information. This is in stark contrast to the way it is used in other jurisdictions such as Australia, Canada (see Goeree, this issue), or the United Kingdom (see Akehurst, this issue). One reason is that the ethical underpinning of the latter health-care systems leans much more toward "objective utilitarianism," seeking to derive the greatest societal value within a fixed health budget, whereas the United States leans more toward libertarianism and the rights of individuals. Another reason is a lack of incentives to make decisions that require explicit trade-offs, which is related to many factors but most notably to tax subsidies for health insurance. In the United States, this has been part of the recent discussion on health-care reform, which may lead to pharmacoeconomics playing a greater role in health-care delivery in the future.

Static and Dynamic Efficiency

Pharmacoeconomic cost-effectiveness analyses generally involve calculating static efficiency, in which the immediate price the payer has to pay is compared with the outcomes obtained (a worked example follows below). The market price of an innovative branded drug is generally far above the short-term marginal cost of making, marketing, and distributing the product. Over the longer term, of course, the revenues generated must finance the whole enterprise of discovery through development to a marketable product. The limited monopoly rights conferred under the patent system are the primary tool used to finance research and development and can be considered an issue of "dynamic efficiency." To improve upon this, we need to determine how to generate the optimal amount of innovation over time by considering the link between the reimbursement system and these long-term incentives, not only in developed countries but in other countries as well. For example, Brazil, Russia, India, and China (the "BRIC" countries) could afford to contribute to the global research and development effort; the challenge then becomes how to harness this global demand to promote a greater rate of innovation. Although pharmacoeconomists tend to focus on drugs, drugs account for less than 20% of the health-care budget in industrialized countries, so what is really needed is to look at processes across the global health-care and R&D systems. Notably, as discussed by Egon Jonsson in this issue, by applying a more comprehensive approach that considers all treatments for a particular disease area, Sweden may be considered among the most progressive countries undertaking HTA.
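
To make the static-efficiency calculation concrete, the sketch below computes an incremental cost-effectiveness ratio (ICER) and net monetary benefit for a hypothetical new drug relative to standard care; the costs, QALYs, and willingness-to-pay threshold are illustrative assumptions only, not figures from the symposium.

```python
def static_efficiency(cost_new, qaly_new, cost_std, qaly_std, threshold):
    """Compare a new technology with standard care at the prices the payer
    actually faces: the incremental cost-effectiveness ratio (ICER) and the
    net monetary benefit (NMB) at a given willingness-to-pay threshold."""
    d_cost = cost_new - cost_std
    d_qaly = qaly_new - qaly_std
    icer = d_cost / d_qaly if d_qaly != 0 else float("inf")
    nmb = threshold * d_qaly - d_cost   # > 0 means cost-effective at the threshold
    return icer, nmb

# Hypothetical figures: new drug vs. standard care at a $50,000-per-QALY threshold
icer, nmb = static_efficiency(cost_new=42_000, qaly_new=6.1,
                              cost_std=30_000, qaly_std=5.8,
                              threshold=50_000)
print(f"ICER: ${icer:,.0f} per QALY; NMB at $50,000/QALY: ${nmb:,.0f}")
```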

Finally, although technologies should ideally be better evaluated in terms of safety, phase IV trials are relatively rare, and trials powered for safety end points require such large sample sizes that it is unrealistic to expect them to be carried out for all products. A practical alternative is ongoing data collection to monitor the risk–benefit ratio. As discussed by Scott Ramsey in this issue, this poses a greater problem for devices than for drugs because of the challenges of evaluating devices with respect to blinding and "operator competence." It is also worth noting that a related conflict has emerged in current debates about performance-based reimbursement for drugs: pharmaceutical companies are voicing concerns that physicians, as operators in this sense, are beyond their control and may influence whether or not a drug performs according to the manufacturer's claims. Clearly, many more intriguing questions will arise in the evolving debate on comparative effectiveness research.

Source of financial support: Oxford Outcomes, the National Pharmaceutical Council, and Shire Pharmaceuticals.

References
