Keywords:

  • COPD;
  • cost-utility;
  • Markov model;
  • value of information

ABSTRACT


Objective:  Value of information (VOI) analysis informs decision-makers about the expected value of conducting further research to support a decision. This expected value of (partial) perfect information (EV(P)PI) can be estimated by simultaneously eliminating uncertainty on all (or some) parameters involved in model-based decision-making. This study aimed to calculate the EVPPI of a probabilistic Markov model before and after collecting additional information on the parameter with the highest EVPPI.

Methods:  The model assessed the 5-year costs per quality-adjusted life year (QALY) of three bronchodilators in chronic obstructive pulmonary disease (COPD). It had identified tiotropium as the bronchodilator with the highest expected net benefit. Total EVPI was estimated plus the EVPPIs for four groups of parameters: 1) transition probabilities between COPD severity stages; 2) exacerbation probabilities; 3) utility weights; and 4) costs. Partial EVPI analyses were performed using one-level and two-level sampling algorithms.

Results:  Before additional research, the total EVPI was €1985 per patient at a threshold value of €20,000 per QALY. EVPPIs were €1081 for utilities, €724 for transition probabilities, and relatively small for exacerbation probabilities and costs. A large study was performed to obtain more precise EQ-5D utilities by COPD severity stages. After using posterior utilities, the EVPPI for utilities decreased to almost zero. The total EVPI for the updated model was reduced to €1037. With an EVPPI of €856, transition probabilities were now the single most important parameter contributing to the EVPI.

Conclusions:  This VOI analysis clearly identified parameters for which additional research is most worthwhile. After conducting additional research on the most important parameter, i.e., the utilities, total EVPI was substantially reduced.


Introduction


Decision-analytic models are commonly used to analyze the costs and cost-effectiveness of pharmaceuticals. Originally, these models were mostly deterministic and only considered uncertainty around model parameters in sensitivity analyses. In later years, these models developed into probabilistic models in which uncertainty around input parameters was considered simultaneously by entering prespecified distributions for these parameters [1,2]. These probabilistic models allowed displaying the resulting uncertainty around costs and effects on cost-effectiveness planes and by means of cost-effectiveness acceptability curves (CEACs) [3] and frontiers [4]. Recently, value of information (VOI) analysis has received increasing attention in the area of economic evaluations in health care [5–9] and can be seen as a valuable extension of probabilistic cost-effectiveness analysis, because, unlike CEACs, it provides information on the consequences of adopting the wrong treatment strategy.

In probabilistic cost-effectiveness models, the treatment strategy to adopt is identified as the strategy with the highest expected net benefit. Net monetary benefit is calculated as the total number of health effects, in this case quality-adjusted life years (QALYs), multiplied by the willingness to pay (WTP) for a QALY, minus the total costs: (QALY × WTP) − C [10,11]. Expected net benefit is defined as the mean of the net benefits across all model iterations. VOI analysis is a Bayesian decision-analytic approach which acknowledges that the decision to adopt and reimburse the strategy with the highest expected net benefit is based on currently available information that is surrounded by uncertainty. As long as there is uncertainty, there is always a chance that the wrong decision is made. Making the wrong decision comes at a cost equal to the benefits forgone because of that decision. The expected costs of uncertainty are determined by: 1) the probability that a decision based on mean net benefit is wrong; and 2) the size of the opportunity loss if the wrong decision is made. A VOI analysis informs decision-makers about the expected costs of uncertainty and, hence, the value of collecting additional information to eliminate or reduce uncertainty [9]. The total expected value of perfect information (EVPI) estimates the value of simultaneously eliminating all uncertainty on all parameters involved in taking a decision [7]. A VOI analysis may also provide information on the parameters for which additional research is most useful. Estimates of partial EVPI (EVPPI) can identify the parameters whose uncertainties contribute most to the overall decision uncertainty. This information is valuable because a decision-maker not only has to decide which treatment strategy to adopt but also whether more research regarding the decision is desirable. Since 2004, the National Institute for Clinical Excellence "Guide to the Methods of Technology Appraisal" has stated that candidate topics for future research may be best prioritized by considering the value of additional information in reducing the degree of decision uncertainty [12].
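To make the net-benefit calculation concrete, the short sketch below (Python, with purely illustrative numbers rather than output from the model) computes net monetary benefits per iteration and identifies the strategy with the highest expected net benefit.

```python
import numpy as np

rng = np.random.default_rng(0)
WTP = 20_000            # willingness to pay per QALY (in euros)
N = 10_000              # number of model iterations

# Purely illustrative simulated outcomes for two strategies, A and B
qalys = {"A": rng.normal(3.2, 0.2, N), "B": rng.normal(3.1, 0.25, N)}
costs = {"A": rng.normal(7400, 500, N), "B": rng.normal(7600, 800, N)}

# Net monetary benefit per iteration: NB = QALY * WTP - C
nb = {d: qalys[d] * WTP - costs[d] for d in qalys}

# Expected net benefit is the mean NB across iterations; adopt the maximum
expected_nb = {d: nb[d].mean() for d in nb}
print(expected_nb, "adopt:", max(expected_nb, key=expected_nb.get))
```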

Value of information analysis has been developed and successfully applied outside the health-care sector [13–16]. Health Technology Assessment (HTA) researchers have adopted and further developed the concept for application in health-care decision-making [5,17]. Several authors have presented methods to calculate partial EVPI in probabilistic decision-analytic models [8,17,18]. There has been some confusion because these computational approaches differ slightly, and a few articles have recently been published to further clarify the methodology [19,20]. Although the number of case studies is increasing [9,21–24], the number of actual applications of VOI in health care is still limited.

This article describes a VOI analysis based on a previously published probabilistic Markov model to assess the cost-effectiveness of bronchodilator therapy in chronic obstructive pulmonary disease (COPD) [25,26]. The objective of this article is to determine the impact that actually collecting additional data on one key parameter of the model, namely utilities, has on the overall model uncertainty. Hence, our study is an application of the VOI methodology to the realistic problem of choosing between bronchodilators in COPD. In the methods section, we first present the model, the prior utility data, and the posterior utility data that were obtained after a Bayesian update of the prior data with the newly collected data. Then, we present the VOI methodology and the sampling algorithms that we have used to calculate the EVPI and partial EVPI. We introduce a notation that can be easily understood by researchers without a mathematical background. In the Results section, we present the value of collecting additional information before and after collecting new utility data.

Methods


The COPD Markov Model

Structure of the Markov model.  The VOI analysis was applied to a probabilistic Markov model comparing the 5-year costs and effects of treating patients with moderate to very severe COPD with one of three bronchodilators: tiotropium, salmeterol, or ipratropium. Details of the model have been published previously [25,26]. In brief, COPD patients were classified into three disease states of increasing severity based on prebronchodilator forced expiratory volume in 1 second as a percentage of the predicted value (FEV1 % pred.) according to the Global Initiative for Chronic Obstructive Lung Disease (GOLD) guidelines [27]: moderate COPD (50% ≤ FEV1 % pred. < 80%), severe COPD (30% ≤ FEV1 % pred. < 50%), and very severe COPD (FEV1 % pred. < 30%).

In prespecified time intervals of 1 month (i.e., Markov cycles), patients could remain in the same disease state, transition between disease states, or die. During each cycle, patients were also at risk of experiencing an exacerbation, either severe or nonsevere; this risk varied by disease state and treatment group. Within a given disease state, exacerbation probabilities were assumed to be constant over time. Resource use and costs were assigned to disease states and exacerbations. The primary outcome of the model was the cost per QALY over a period of 5 years.
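As a minimal sketch of how such a cohort Markov model advances through monthly cycles, the fragment below uses the three severity states plus death and the baseline severity distribution reported below (73%/21%/6%); the transition matrix values are placeholders, not the published estimates, which vary by treatment arm.

```python
import numpy as np

states = ["moderate", "severe", "very severe", "dead"]

# Placeholder monthly transition matrix (each row sums to 1); the published
# model estimates these probabilities from trial data per treatment arm.
P = np.array([
    [0.97, 0.02, 0.00, 0.01],   # from moderate
    [0.01, 0.96, 0.02, 0.01],   # from severe
    [0.00, 0.01, 0.97, 0.02],   # from very severe
    [0.00, 0.00, 0.00, 1.00],   # dead is absorbing
])

cohort = np.array([0.73, 0.21, 0.06, 0.00])  # baseline severity distribution

trace = [cohort]
for cycle in range(60):          # 5 years of monthly cycles
    cohort = cohort @ P          # redistribute the cohort over the states
    trace.append(cohort)

print(dict(zip(states, trace[-1].round(3))))
```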

Model input parameters.  Probabilistic input parameters of the model included the transition probabilities between disease states, the risk of experiencing an exacerbation, the utilities associated with disease states and exacerbations, and the costs (resource use) of maintenance therapy and exacerbations. Uncertainty around these parameters was considered simultaneously, and the parameters were entered into the model as prespecified, independent distributions. We adopted a Dirichlet distribution for transitions between disease states [28], beta distributions for exacerbations and utilities, and a gamma distribution for the estimation of resource use [29]. Second-order Monte Carlo simulations were undertaken in which values were randomly drawn from these distributions. Fixed model parameters included the discount rates for costs and effects and the baseline distribution of patients over COPD severity states. Details on the prior and posterior beta distributions of the utilities are given in the next section.
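For illustration, a single probabilistic draw might look as follows. The distribution families are those named above, but the Dirichlet and gamma hyperparameters are invented for the example; only the beta parameters for the moderate-COPD utility are taken from Table 1.

```python
import numpy as np

rng = np.random.default_rng(1)

# One probabilistic draw of the model inputs, using the distribution
# families named in the text; hyperparameters are illustrative only.
transition_row_moderate = rng.dirichlet([290, 6, 1, 3])   # to moderate/severe/very severe/dead
p_exacerbation = rng.beta(20, 180)                        # monthly exacerbation risk
utility_moderate = rng.beta(144.57, 46.91)                # utility weight (prior from Table 1)
resource_use = rng.gamma(shape=4.0, scale=25.0)           # resource use per cycle

print(transition_row_moderate, p_exacerbation, utility_moderate, resource_use)
```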

All input parameters reflect the Dutch situation. The model starts with 73% of the patients in moderate COPD, 21% in severe COPD, and 6% in very severe COPD [30,31]. Probabilities to transition between disease states and to experience exacerbations were based on clinical trial data [32–34]. The mortality risk in each disease state was estimated by combining the age- and sex-specific all-cause mortality rates among COPD patients [35] with a relative mortality risk of 1.2 per 10-unit decline in FEV1 % pred. [30]. Resource use and costs (price level 2001) were published previously [25]. Utility values per disease state for the base-case analysis were based on empirical data from an observational study in patients with COPD classified into the GOLD stages [36]. During cycles in which patients experienced an exacerbation, the utility value was assumed to be reduced by 15% in case of a nonsevere exacerbation [37] and by 50% in case of a severe exacerbation [38]. In accordance with the latest Dutch guidelines for pharmacoeconomic research [39], the discount rate was set to 4% for costs and 1.5% for effects.

Prior and posterior utility data.  Before additional data collection, the parameters with the highest EVPPI were the utility values for the different COPD severity states (see Results). These parameters contributed most to the overall uncertainty as to which bronchodilator treatment to adopt given the currently available information. In this initial analysis, we used mean (SE) EQ-5D index scores for moderate, severe, and very severe COPD of 0.755 (0.031), 0.748 (0.060), and 0.549 (0.104), respectively [36]. These were obtained as part of the Obstructive Lung Disease in Northern Sweden study among a sample of 179 COPD patients [40]. Additional utility research was then performed in a sample of 1235 patients from a large ongoing trial in patients with moderate to very severe COPD, who completed the EQ-5D questionnaire at baseline [41]. As in the Swedish study, the EQ-5D scores were valued using the "MVH A1 value set," which was developed in the UK as part of the Measurement and Valuation of Health study [42]. The mean EQ-5D index scores in this study were 0.787 (SD: 0.195, SE: 0.008) for moderate COPD (n = 622), 0.750 (SD: 0.211, SE: 0.009) for severe COPD (n = 513), and 0.647 (SD: 0.230, SE: 0.024) for very severe COPD (n = 91). On average, the standard errors of these values were about one-quarter of the initial values. The newly collected information on utilities was combined with the prior information using formal Bayesian updating to obtain the posterior utilities. Table 1 shows the prior and posterior parameters of the beta distributions that were assigned to the utilities. The posterior beta distribution has parameters α2, calculated as α0 + α1, and β2, calculated as β0 + β1 (Table 1).

Table 1.  Prior and posterior parameters of the beta distribution of utility values

Health state                                  | Prior utilities (α0, β0) | New utilities (α1, β1) | Posterior utilities (α2, β2)
Moderate COPD without exacerbations           | 144.57, 46.91            | 2060.55, 557.68        | 2205.12, 604.60
Moderate COPD with nonsevere exacerbations    | 99.45, 55.52             | 237.72, 117.64         | 337.17, 173.16
Moderate COPD with severe exacerbations       | 12.94, 21.34             | 14.61, 22.52           | 27.55, 43.86
Severe COPD without exacerbations             | 38.42, 12.94             | 1625.16, 541.72        | 1663.58, 554.66
Severe COPD with nonsevere exacerbations      | 35.24, 20.19             | 247.91, 140.97         | 283.15, 161.15
Severe COPD with severe exacerbations         | 9.15, 15.31              | 15.01, 25.02           | 24.16, 40.34
Very severe COPD without exacerbations        | 12.02, 9.87              | 253.77, 138.46         | 265.79, 148.33
Very severe COPD with nonsevere exacerbations | 10.01, 11.44             | 141.60, 115.88         | 151.61, 127.32
Very severe COPD with severe exacerbations    | 3.68, 9.72               | 14.53, 30.38           | 18.21, 40.10

COPD, chronic obstructive pulmonary disease.
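The updating step behind Table 1 can be reproduced in a few lines. One common way to turn a reported mean and standard error into beta parameters is the method-of-moments relation shown below; combined with the stated rule α2 = α0 + α1 and β2 = β0 + β1, it reproduces the Table 1 entries for moderate COPD without exacerbations. This is a sketch of the calculation, not the authors' code.

```python
def beta_params(mean, se):
    """Method-of-moments beta parameters for a reported mean and SE."""
    total = mean * (1 - mean) / se**2 - 1      # alpha + beta
    return mean * total, (1 - mean) * total

# New utility data, moderate COPD without exacerbations (mean 0.787, SE 0.008) [41]
a1, b1 = beta_params(0.787, 0.008)             # ~ (2060.6, 557.7), cf. Table 1

# Prior parameters from Table 1; posterior = prior + new data
a0, b0 = 144.57, 46.91
a2, b2 = a0 + a1, b0 + b1                      # ~ (2205.1, 604.6), cf. Table 1

print(round(a2, 2), round(b2, 2), round(a2 / (a2 + b2), 3))   # posterior mean ~ 0.785
```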

VOI Analysis

EVPI.  The overall EVPI is equal to the net benefit of the optimal strategy given perfect information minus the net benefit of the strategy that would be adopted given current information, averaged over all model iterations. In other words, the EVPI is equal to the average of the maximum net benefits across all model iterations (i.e., the expected net benefit with perfect information), minus the maximum of the average net benefits across all treatment strategies (i.e., the expected net benefit with the currently available [imperfect] information) [8]. Formally, this is denoted as EVPI = Eθ[Maxd{NB(d, θ)}] − Maxd[Eθ{NB(d, θ)}], where NB(d, θ) is the net benefit function for decision d and parameters θ, and Eθ denotes an expectation over the full joint distribution of θ. In this case, the expected value of a parameter is obtained by taking the mean value of that parameter over N simulations. For ease of comprehension, we will use the following notation: the EVPI will be denoted as Meanθ[MaxT(NB)] − MaxT[Meanθ(NB)]. The overall EVPI analysis was performed using a one-level sampling algorithm (see Table 2).

Table 2.  Method of calculating the overall EVPI

Model iteration | NB tiotropium   | NB salmeterol | NB ipratropium | Maximum NB (across treatments) | NB gained when having perfect information
1               | 58,630          | 58,516        | 57,333         | 58,630                         | 58,630 − 58,630 = 0
2               | 57,525          | 55,867        | 61,130         | 61,130                         | 61,130 − 57,525 = 3605
3               | 59,527          | 59,854        | 58,293         | 59,854                         | 59,854 − 59,527 = 327
4               | 60,428          | 57,787        | 55,754         | 60,428                         | 60,428 − 60,428 = 0
5               | . . .           | . . .         | . . .          | . . .                          | . . .
n = 10,000      | NBtio           | NBsal         | NBipra         | MaxT(NB)                       | MaxT(NB) − NBtio
Mean            | 59,419*         | 57,488        | 56,602         | 60,457†                        | EVPI = (0 + 3605 + 327 + 0 + etc.)/10,000 = 1037 = 60,457 − 59,419
                | MaxT[Meanθ(NB)] |               |                | Meanθ[MaxT(NB)]                | Meanθ[MaxT(NB)] − MaxT[Meanθ(NB)]

* Tiotropium has the highest expected NB, MaxT[Meanθ(NB)], and is the optimal strategy with currently available information.
† This is the mean of the maximum NBs across all model iterations, Meanθ[MaxT(NB)], i.e., the expected NB with perfect information.
EVPI is calculated as the average gain in NB across the model iterations and is equal to Meanθ[MaxT(NB)] (60,457) minus MaxT[Meanθ(NB)] (59,419).
EVPI, expected value of perfect information.
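The calculation in Table 2 amounts to a one-line operation on a matrix of simulated net benefits (rows: model iterations; columns: strategies): the mean of the row-wise maxima minus the maximum of the column means. A sketch using only the four illustrative iterations shown above:

```python
import numpy as np

# Rows: model iterations; columns: tiotropium, salmeterol, ipratropium.
# These are the four illustrative iterations from Table 2; the full
# analysis used 10,000 iterations.
nb = np.array([
    [58_630, 58_516, 57_333],
    [57_525, 55_867, 61_130],
    [59_527, 59_854, 58_293],
    [60_428, 57_787, 55_754],
])

# Mean_theta[Max_T(NB)] - Max_T[Mean_theta(NB)]
evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()
print(evpi)   # 983 for these four iterations; 1037 over all 10,000
```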

EVPPI.  In the current model, there were four groups of parameters contributing to the overall uncertainty: 1) transition probabilities; 2) probabilities of experiencing an exacerbation; 3) utilities; and 4) costs. Each group contained individual model parameters that were related, were often obtained from the same sources, and varied by COPD severity and exacerbation severity. The parameters were grouped because additional research, to be efficient, will probably be designed to generate information on multiple individual parameters within a group. We calculated the EVPPI (partial EVPI) for these four subsets of parameters, i.e., the parameters of interest. This partial EVPI analysis was performed to gain insight into the value of obtaining perfect information on these parameters and to guide future research toward those parameters with the highest expected VOI [5].

A partial EVPI analysis was performed using a two-level sampling algorithm in which multiple simulations were performed for different values of the parameter of interest [7]. The two-level sampling algorithm uses two nested levels of Monte Carlo sampling over the plausible ranges of both the parameter(s) of interest (θi) and the remaining uncertain parameters (θc). Before running the two-level sampling algorithm, an EVPI analysis was performed as described earlier, and the value of the treatment with the highest mean net benefit (MaxT[Meanθ(NB)]) was recorded, as was the mean of the maximum net benefits across all model iterations (Meanθ[MaxT(NB)]). The two-level sampling algorithm that we applied is outlined in Box 1. After completion of all steps in Box 1, all data needed to calculate the partial EVPI for the parameter of interest are available. The partial EVPI can then be calculated by one of three methods: 1) subtracting the difference between the Meanθi[Meanθc{MaxT(NB|θi)}] and the Meanθi[MaxT{Meanθc(NB|θi)}] from the overall EVPI [18]; 2) subtracting the MaxT[Meanθ(NB)] from the Meanθi[MaxT{Meanθc(NB|θi)}] [20]; or 3) subtracting the MaxT[Meanθi{Meanθc(NB|θi)}] from the Meanθi[MaxT{Meanθc(NB|θi)}] [43]. Mathematically, these calculations are equivalent when the number of iterations in the model simulations is large enough. In this article, we have used the first method [18]. Figure 1 shows the two-level analyses and the three methods to determine the partial EVPI.

Figure 1. Illustration of the two-level partial EVPI sampling algorithm given perfect information on θi. EVPI, expected value of perfect information; EVPPI, expected value of partial perfect information.

Box 1

Two-level sampling algorithm for the calculation of partial EVPI

Start outer loop

Step 1: Sample once from the parameter(s) of interest (θi) and hold that parameter constant at its sampled value.

Start inner loop

Step 2: Sample once from all model parameters not of interest (θc) and run the model once (i.e., one model iteration).

Step 3: Record the net benefit of each treatment strategy given perfect information on θi: NB|θi.

Step 4: Determine the treatment with the highest net benefit given perfect information on θi and record this value: MaxT(NB|θi).

Step 5: Repeat steps 2, 3 and 4 a large number of times (in this study 1000 times).

End of inner loop

Step 6: Record the mean net benefit of each strategy Meanθc(NB|θi).

Step 7: Record the mean of the MaxT(NB|θi): Meanθc{MaxT(NB|θi)}.

Step 8: Determine the treatment with the highest mean net benefit and record this value: MaxT{Meanθc(NB|θi)}.

Step 9: Repeat steps 1 to 8 a large number of times (in this study 500 times).

End of outer loop.

Step 10: Calculate the mean of the values recorded in steps 6 to 8. These are the Meanθi{Meanθc(NB|θi)}, the Meanθi[Meanθc{MaxT(NB|θi)}], and the Meanθi[MaxT{Meanθc(NB|θi)}].

Step 11: Calculate the partial EVPI by 1) subtracting the difference between the Meanθi[Meanθc{MaxT(NB|θi)}] and the Meanθi[MaxT{Meanθc(NB|θi)}] from the EVPI; 2) subtracting the MaxT[Meanθ(NB)] from the Meanθi[MaxT{Meanθc(NB|θi)}]; or 3) subtracting the MaxT[Meanθi{Meanθc(NB|θi)}] from the Meanθi[MaxT{Meanθc(NB|θi)}].
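As an illustration of Box 1, the function below sketches the two-level algorithm in Python. The callables sample_theta_i, sample_theta_c, and run_model are hypothetical stand-ins for a draw of the parameters of interest, a draw of the remaining parameters, and one evaluation of the Markov model returning one net benefit per strategy; evpi is the overall EVPI obtained as in Table 2. This is a schematic sketch, not the authors' implementation.

```python
import numpy as np

def partial_evpi(sample_theta_i, sample_theta_c, run_model, evpi,
                 n_outer=500, n_inner=1000):
    """Two-level Monte Carlo EVPPI following Box 1 (method 1 of Step 11)."""
    mean_max_given_i = []   # Mean_theta_c{Max_T(NB | theta_i)} per outer draw
    max_mean_given_i = []   # Max_T{Mean_theta_c(NB | theta_i)} per outer draw
    for _ in range(n_outer):                               # Step 1: fix theta_i
        theta_i = sample_theta_i()
        nb = np.array([run_model(theta_i, sample_theta_c())  # Steps 2-5
                       for _ in range(n_inner)])           # shape: (n_inner, n_strategies)
        mean_max_given_i.append(nb.max(axis=1).mean())     # Step 7
        max_mean_given_i.append(nb.mean(axis=0).max())     # Steps 6 and 8
    # Step 11, method 1: EVPPI = EVPI - (Mean[Mean{Max}] - Mean[Max{Mean}])
    return evpi - (np.mean(mean_max_given_i) - np.mean(max_mean_given_i))
```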

Although the number of iterations that is sufficient to estimate partial EVPI can be estimated [8], computer time can be an insuperable obstacle to straightforward two-level partial EVPI analysis [8]. To work around the computation time associated with two-level sampling, a one-level algorithm has been proposed as a shortcut to partial EVPI analysis [8,17,18]. This shortcut algorithm is almost the same as the algorithm for the calculation of the overall EVPI outlined in Table 2. However, instead of sampling from all parameters simultaneously, the one-level partial EVPI algorithm involves sampling from the parameter of interest only, while keeping all other parameters constant at their mean values [19,20]. The partial EVPI is then calculated as 1) the difference between the Meanθ[MaxT(NB|1)] and the MaxT[Meanθ(NB|1)] [18], where |1 denotes the situation in which the parameters not of interest (θc) are fixed at their mean values; or 2) the difference between the Meanθ[MaxT(NB|1)] and the MaxT[Meanθ(NB)] [20]. Again, these calculations are equivalent for sufficiently large numbers of iterations, and we have used the first method [18]. The validity of the one-level algorithm is, among other factors, conditional upon the linearity of the model and upon the absence of correlation between the (subset of) parameters of interest and the other parameters in the model.
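Under those conditions, the one-level shortcut reduces to the following sketch (same hypothetical run_model as above), in which only the parameters of interest are sampled and the remaining parameters are held at their mean values.

```python
import numpy as np

def partial_evpi_one_level(sample_theta_i, theta_c_mean, run_model, n=10_000):
    """One-level shortcut: vary theta_i, hold theta_c at its mean (method 1)."""
    nb = np.array([run_model(sample_theta_i(), theta_c_mean) for _ in range(n)])
    # Mean_theta[Max_T(NB|1)] - Max_T[Mean_theta(NB|1)]
    return nb.max(axis=1).mean() - nb.mean(axis=0).max()
```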

Analysis.  The analysis was performed from a health-care perspective. The number of iterations for the EVPI analysis and the one-level EVPPI analysis was set to 10,000. For the two-level algorithm, the number of inner loops was set to 1000, whereas the number of outer loops was set to 500. The number of outer loops was smaller than the number of inner loops because, in this specific analysis, there is less variation in the outer loop than in the inner loop. In the outer loop, we sample from the parameter(s) of interest (θi), for example the utilities, and hold these parameters constant at their sampled values. In the inner loop, we sample from all other model parameters (θc) and run the model for each set of parameters that is drawn. In addition, case studies by Brennan et al. suggest that fewer samples at the outer level and larger numbers of samples at the inner level might be the most efficient approach to minimize bias and random error [20]. In these case studies, stable results were obtained with 500 inner loops and 100 outer loops. Because of the complexity of our model, we chose larger numbers of inner and outer loops, but the numbers were chosen such that one analysis could be performed within 12 hours (i.e., could be run overnight). All analyses were performed in Excel in Microsoft Office 10.

The overall EVPI and partial EVPIs were calculated before and after the model was populated with the posterior utility data. The EVPI was plotted as a function of the threshold value of the maximum WTP for a QALY (i.e., the ceiling ratio). The partial EVPI analysis was performed for the four subgroups of input parameters: transition probabilities, exacerbation probabilities, utilities, and costs of maintenance therapy and exacerbations. In the partial EVPI analysis, a threshold value of €20,000 per QALY was used. This value was chosen because it is frequently mentioned in debates about the cost-effectiveness of health-care interventions in The Netherlands. The value originates from two Dutch medical practice guidelines, where it was used as a reference point [44,45], and it has frequently been used thereafter [46].

Results


Cost-Effectiveness

Table 3 shows the cost-effectiveness outcomes before and after the utility study was conducted. The table shows that in both analyses, tiotropium was associated with improved health outcomes and lower overall costs compared with salmeterol and ipratropium. Mean costs per patient for tiotropium were €239 (SE €977) lower than for salmeterol, which in turn were €751 (SE €1501) lower than for ipratropium. Differences between treatments in the number of QALYs were small. In both analyses, the number of QALYs was about 0.15 higher for tiotropium than for salmeterol and ipratropium, which both generated a similar number of QALYs. The standard errors for QALYs ranged from 0.17 to 0.28 in the initial analysis and from 0.10 to 0.22 in the analysis based on the newly collected utilities. At a threshold value of €20,000 for a QALY, the expected net monetary benefits before the utility study were €56,820 for tiotropium, €54,380 for salmeterol, and €53,629 for ipratropium. After collecting additional utility data, these values increased to €59,419 for tiotropium, €57,488 for salmeterol, and €56,602 for ipratropium.

Table 3.  Mean (SE) 5-year costs and QALYs per patient before and after the additional utility study

Outcome                                     | Tiotropium  | Salmeterol  | Ipratropium
Costs (in Euros)                            | 7380 (504)  | 7620 (840)  | 8371 (1257)
QALYs: before the additional utility study  | 3.21 (0.17) | 3.10 (0.23) | 3.10 (0.28)
QALYs: after the additional utility study   | 3.34 (0.10) | 3.26 (0.16) | 3.25 (0.22)

QALY, quality-adjusted life year.

Based on mean values, tiotropium dominated both salmeterol and ipratropium, because it was both more effective and less costly. The CEACs of the two analyses are presented in Fig. 2. The curves are quite similar. If decision-makers are not willing to pay anything for an additional QALY, the probability that tiotropium was optimal was around 48% in both analyses, compared with 36% for salmeterol and 17% for ipratropium. For tiotropium, this increased to 59% at a ceiling ratio of €3000 in the prior analysis and to 61% at a ceiling ratio of €3400 in the posterior analysis. The greater precision of the utilities slightly increased the probability that tiotropium had the highest net benefit and reduced the probability that ipratropium had the highest net benefit. At a ceiling ratio of €5000, the probability that tiotropium had the highest net benefit increased from 58% in the prior analysis to 61% in the posterior analysis, whereas the probability that ipratropium was most cost-effective decreased from 15% to 12%. At a ceiling ratio of €10,000, the probability that tiotropium had the highest expected net benefit increased from 54% in the prior analysis to 59% in the posterior analysis; for ipratropium, it decreased from 20% to 16%.

Figure 2. Cost-effectiveness acceptability curves of the cost per QALY before (closed symbols) and after (open symbols) the additional utility study for patients treated with tiotropium, salmeterol or ipratropium. QALY, quality-adjusted life year.

EVPI

Figure 3 presents the overall EVPI before and after the utility study as a function of the ceiling ratio. Before the utility study, the EVPI started at a value of €377 per patient and decreased to a minimum of €284 per patient at a ceiling ratio of approximately €1800 per QALY. From this point onwards, the EVPI increased steadily, reaching a value of €1985 per patient at a ceiling ratio of €20,000 and a value of €4301 per patient at a ceiling ratio of €40,000 per QALY. After the results from the utility study had been incorporated into the model, the EVPI started at €390, decreased to a minimum of €259 at a ceiling ratio of approximately €3700, and remained well below the first curve. In the updated model, the EVPI was about €1040 at a ceiling ratio of €20,000 and €2386 at a ceiling ratio of €40,000.

Figure 3. EVPI before and after the additional utility study. Continuous gray line: EVPI before collecting additional information on utilities; dashed black line: EVPI after collecting additional information on utilities. EVPI, expected value of perfect information.

In both curves, the EVPI initially fell as the ceiling ratio increased, because the reduced probability of incurring an opportunity loss outweighed the increased valuation of that opportunity loss. As the ceiling ratio increased from €0 to €2000, there was a substantial increase in the probability that tiotropium had the highest net monetary benefit, i.e., an increase from 48% to 60% in the prior analysis. This reduction in the probability of making the wrong decision outweighed the higher valuation of the opportunity loss and caused a reduction in the EVPI. At higher values of the ceiling ratio, the increasing valuation of the opportunity loss outweighed the small further changes in the probability of incurring one, and the EVPI rose.

Partial EVPI

The results of the partial EVPI analyses are presented in Table 4, which includes the results of the two-level and one-level algorithms before and after the additional information on utilities was collected. Both algorithms showed that, before the additional data collection, utilities had the highest partial EVPI among the different groups of parameters. Research that would eliminate the uncertainty in this subset of parameters would be worth €1081 per patient. The second highest was the partial EVPI for the transition probabilities, €724 per patient. Additional research on exacerbation probabilities and costs would be of hardly any value, because the partial EVPIs for these parameters were very small. After the utility study, the partial EVPI for utilities dropped to about €0. In that analysis, only the partial EVPI for the transition probabilities was substantial, at €856. Hence, further research on transition probabilities remained potentially worthwhile.

Table 4.  EVPI and partial EVPI results for the base-case and alternative analysis

                                   | Before collecting additional information on utilities | After collecting additional information on utilities
EVPI                               | 1985                     | 1037
Partial EVPI for parameter subset  | Two-level  | One-level   | Two-level  | One-level
  Transition probabilities         | 724        | 694         | 856        | 860
  Exacerbation probabilities       | 3          | 0           | 0          | 0
  Utility weights                  | 1081       | 813         | 0          | 1
  Costs                            | 7          | 0           | 0          | 0

The EVPPI is expressed in Euros. The value of the ceiling ratio was €20,000. Number of iterations for the one-level analysis: 10,000; for the two-level analysis: inner loop 1000, outer loop 500.
EVPI, expected value of perfect information; EVPPI, expected value of partial perfect information.

Population EVPI

So far, we have presented the (partial) EVPI per patient. To determine a priori whether additional data collection is beneficial, the costs of data collection should be compared with the population EVPI. The population EVPI can be calculated by multiplying the EVPI per patient by the number of patients eligible for treatment. Using the Dutch COPD population model by Hoogendoorn et al. [30], the number of patients with physician-diagnosed moderate to very severe COPD in The Netherlands has been estimated at 243,000. Even if we assume that only 20% of these patients face the choice between bronchodilators, the population EVPI for The Netherlands would already be €96 million. If the COPD populations of all Western countries were taken into account, the population EVPI would be enormous. Hence, given the large potential target population for COPD treatments, the value of additional information will easily outweigh the costs of collecting this information from a societal perspective.
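The population figure follows directly from the numbers above; the 20% share is the conservative assumption stated in the text.

```python
patients_nl = 243_000        # physician-diagnosed moderate to very severe COPD [30]
share_facing_choice = 0.20   # conservative assumption used in the text
evpi_per_patient = 1_985     # euros, prior analysis at a ceiling ratio of 20,000 per QALY

population_evpi = patients_nl * share_facing_choice * evpi_per_patient
print(f"{population_evpi:,.0f} euro")   # about 96.5 million
```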

Discussion


The results of the VOI analysis before collecting additional data showed that the overall EVPI for the choice between tiotropium, salmeterol, and ipratropium was €1985 per patient at a ceiling ratio of €20,000. This is the absolute upper limit of the value of further research that would completely eliminate the uncertainty around the parameters in the model. Partial EVPI analyses showed that the costs of uncertainty were highest for the utility values, followed by the transition probabilities between COPD severity stages. Hence, collecting additional data on utilities was potentially of most value; the expected value of eliminating uncertainty in this subset of parameters was €1081. After additional information on utilities had been collected, the overall EVPI was considerably reduced. In this posterior analysis, the remaining overall EVPI was €1070 and was almost entirely due to uncertainty around the transitions between disease states. The partial EVPI of the utilities was reduced to almost zero. Collecting additional data on exacerbation probabilities or resource utilization did not appear to be of value.

That it would be most beneficial to collect additional information on utilities and on transitions between disease states was something we had not expected beforehand, because our previous studies had found that exacerbations and the costs of these exacerbations were important drivers of the cost-effectiveness of bronchodilator treatment [25,26]. Scenarios in which we had assumed a complete absence of a difference between the treatments in terms of exacerbation rates had a relatively large impact on the cost-effectiveness ratios. However, such extreme uncertainties are unrealistic and are not represented by the model; the model does include all the uncertainty around exacerbation rates that was observed in the clinical trials. Moreover, it is important to note that, although sensitivity analyses have shown that exacerbation rates influence the cost-effectiveness ratios of tiotropium compared with its alternatives, the cost-effectiveness ratios remained below the decision threshold of €20,000 per QALY. This difference between the observations from sensitivity analyses and those from EVPI analyses clearly illustrates the benefit of performing an EVPI analysis. The value of collecting additional information on a particular parameter depends not only on its association with cost-effectiveness, but also on the prior uncertainty about that parameter. Trying to interpret the joint impact of the strength of the association and the uncertainty without a formal VOI analysis is difficult and may easily lead to false conclusions about the parameters for which additional data collection is most useful.

The crucial question after an EVPPI analysis of currently available information is whether the expected value of additional information outweighs, and thus justifies, the cost of collecting it. The actual costs of this utility study are hard to estimate because the EQ-5D data were collected within the context of a large clinical trial that was designed to measure the decline in lung function over time [47]. Hence, the EQ-5D data could be collected at relatively little additional cost. If we were to set up a new study of the same size solely to collect utility values, the costs would probably be higher. Nevertheless, obtaining a sufficiently precise estimate of utilities by GOLD stage would probably require far fewer patients and thus a much cheaper study. However, in some jurisdictions, cost-effectiveness models can only be populated with country-specific data; consequently, data from patients in each separate country are needed. In that case, a large multinational trial is a good opportunity to collect these data.

It is interesting to note, from the CEACs in Fig. 2, that even though the EVPI was strongly reduced by the use of new utility estimates with substantially lower standard errors, the decision about which treatment to adopt does not change. In both cases, tiotropium has the highest probability of being optimal as well as the highest expected net benefit over the whole range of threshold ICERs studied, and should therefore be adopted. Nevertheless, before additional data collection, the partial EVPI of utilities was high, and this partial EVPI was our best estimate of the gain in expected net benefit that could be achieved by additional research on utilities. It is important to stress that acceptability curves show just one element of the EVPI, namely the probability that a decision based on the mean net benefit is correct; the probability of making the wrong decision is the complement of the curve. The curves do not show the second element of the EVPI, which is the magnitude of the opportunity loss or, in other words, the consequence of making the wrong decision. It is precisely this magnitude of the opportunity loss that was considerably reduced by the additional utility study, which considerably reduced the SE of the QALY outcomes. This limitation of the acceptability curve is, for instance, discussed by Groot Koerkamp et al. [48]. In general, there is no one-to-one relationship between the probability that an alternative is the "true" preferred alternative and the VOI. When the acceptability curve decreases, the EVPI necessarily increases, but the reverse is not true [49]. Thus, the acceptability curve on its own might lead policymakers to wrong conclusions, as many people would be inclined to think that at 95% certainty of making the right decision there is minimal value in additional research and that at 65% certainty this value would be high, while in truth the opposite might be the case.

A word of caution is necessary about the uncertainty incorporated in the model and about EVPI analysis in general. An EVPI analysis only provides information about the value of eliminating uncertainty around the probabilistic parameters included in the model. The characterization of the parameters and the different types of uncertainty is a major challenge, and some forms of uncertainty may not have been taken into account. For example, the new EQ-5D utilities were obtained in a multinational trial, and we are uncertain how well these utilities represent the heterogeneity of the COPD population in The Netherlands. Another example relates to the disease state transitions: we incorporated all uncertainty around these transitions as observed in clinical trials, but we are also uncertain to what extent these trials reflect real life. Fully representing uncertainty and establishing a full EVPI analysis would require a parameterization of these types of uncertainty.

One could argue that the high EVPI for utilities is partly due to the fact that we defined independent beta distributions for moderate, severe, and very severe COPD with and without exacerbations. Hence, it may occur that the utility value drawn for severe COPD is better than the value drawn for moderate COPD. We did not build in an association between the utilities by disease severity in the sense that, when a low utility is drawn for moderate COPD, a lower value should be drawn for severe COPD. Such an association would reduce the uncertainty and the EVPI. Nevertheless, we doubt whether it would reflect reality, because it is well known that the association between lung function and quality of life, though present, is rather weak [50], even at a group level. Therefore, we chose to let the uncertainty around the utility data speak for itself and not to build in, a priori, a hierarchy for which the evidence is not strong. At the number of model runs used in the current analyses, the average utility value for moderate COPD is better than that for severe COPD, which in turn is better than that for very severe COPD.

The results of the EVPPI analyses presented in Table 4 clearly show that the sum of the partial EVPIs does not add up to the total EVPI. This feature of partial EVPI analysis has also been explored in other publications [19,20]. The EVPI indicates the value of perfect information. Perfect information means that we have perfect information about the parameter of interest θi, and perfect information about the complementary parameters, θc, at the same time. The partial EVPI for θi is the value of perfect information about θi, given the uncertainty about θc, whereas the partial EVPI for θc is the value of perfect information about θc, given the uncertainty about θi. Summing the partial EVPIs for θi and θc does not return the EVPI. For this to happen, we should sum the partial EVPI for θi and the value of perfect information about θc, given perfect information about θi. The difference between the EVPI and the sum of the partial EVPIs complicates straightforward interpretation of a VOI analysis. Obtaining perfect information on one parameter of interest does not reduce the EVPI to the same extent as may have been expected (falsely) from the value of the partial EVPI for that parameter. Hence, the importance of a parameter for further research should not be judged by the reduction in EVPI [19]. The added value of a partial EVPI analysis is to set priorities with regard to the parameters for which additional data collection is most beneficial by ranking them according to their expected value of research that would eliminate the uncertainty.
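In the notation of the Methods section, this decomposition can be written out explicitly (a restatement of the argument above, not an additional result):

EVPI = Meanθ[MaxT(NB)] − MaxT[Meanθ(NB)]
     = (Meanθi[MaxT{Meanθc(NB|θi)}] − MaxT[Meanθ(NB)]) + (Meanθ[MaxT(NB)] − Meanθi[MaxT{Meanθc(NB|θi)}]),

where the first term is the partial EVPI for θi (calculation method 2 above) and the second term is the expected value of perfect information on θc given perfect information on θi.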

In this study, we compared the outcomes of a partial EVPI analysis obtained with a two-level algorithm with those obtained with a one-level algorithm. Theoretically, the two-level analysis provides the correct values of the EVPPI. The required numbers of inner and outer loop iterations are not known beforehand and, among other factors, depend on the distributions and the number of uncertain model parameters. In our analysis, the results of the two-level analysis may have been biased because of the limited number of iterations. Nevertheless, repeating some of the two-level analyses showed that the consistency of outcomes across analyses was good. For example, the EVPPIs of exacerbation probabilities and costs were consistently around zero, and the EVPPI of transition probabilities in five posterior analyses varied between €843 and €868. When a model is perfectly linear and no correlation exists between input parameters, the one-level sampling algorithm will provide estimates of the EVPPI that are equal to those of the two-level algorithm. In our case, these assumptions are not fulfilled. The Dirichlet distribution that was used for the transition probabilities, by definition, introduces correlation between them. Moreover, an inherent characteristic of a Markov simulation is the multiplication of matrices of transition probabilities over subsequent cycles, which makes the model nonlinear in these probabilities. For that reason, the results of the one-level sampling approach should be interpreted with care. Nevertheless, the one-level and two-level EVPPI analyses both indicated that the EVPPI was highest for utilities and transition probabilities, although in absolute terms the EVPPIs, especially those of the utilities, differed.

In conclusion, this study has clearly shown the benefits of performing a VOI analysis. Before any additional data collection, the VOI analysis at a ceiling ratio of €20,000 estimated the EVPI to be €1985 per patient and identified the utilities as the subset of parameters with the highest EVPPI. After additional research on utilities was performed and a formal Bayesian update of the utilities was conducted, the EVPPI of the utilities was reduced to almost zero. The EVPI was still €1070 and was largely driven by the uncertainty in the transition probabilities, which would be the next candidate for additional research. Such research should focus on estimates of lung function decline over time, because this decline drives the transition probabilities between COPD severity stages as defined by GOLD.

We would like to thank the anonymous reviewers for their valuable comments on an earlier draft of this article.

Source of financial support: The study has been financially supported by an unrestricted grant from Boehringer Ingelheim International and Pfizer Global Pharmaceuticals.

References

1. Briggs AH. Handling uncertainty in cost-effectiveness models. Pharmacoeconomics 2000;17:479–500.
2. Briggs A. Probabilistic analysis of cost-effectiveness models: statistical representation of parameter uncertainty. Value Health 2005;8:1–2.
3. Van Hout BA, Al MJ, Gordon GS, Rutten FFH. Costs, effects and c/e-ratios alongside a clinical trial. Health Econ 1994;3:309–19.
4. Fenwick E, Claxton K, Sculpher M. Representing uncertainty: the role of cost-effectiveness acceptability curves. Health Econ 2001;10:779–87.
5. Claxton K. The irrelevance of inference: a decision-making approach to the stochastic evaluation of health care technologies. J Health Econ 1999;18:341–64.
6. Claxton K, Sculpher M, Drummond M. A rational framework for decision making by the National Institute for Clinical Excellence (NICE). Lancet 2002;360:711–15.
7. Ades AE, Lu G, Claxton K. Expected value of sample information calculations in medical decision modeling. Med Decis Making 2004;24:207–27.
8. Tappenden P, Chilcott JB, Eggington S, et al. Methods for expected value of information analysis in complex health economic models: developments on the health economics of interferon-beta and glatiramer acetate for multiple sclerosis. Health Technol Assess 2004;8:iii, 1–78.
9. Claxton K, Ginnelly L, Sculpher M, et al. A pilot study on the use of decision theory and value of information analysis as part of the NHS Health Technology Assessment programme. Health Technol Assess 2004;8:1–103, iii.
10. Stinnett AA, Mullahy J. Net health benefits: a new framework for the analysis of uncertainty in cost-effectiveness analysis. Med Decis Making 1998;18(2 Suppl.):S68–80.
11. Hoch JS, Briggs AH, Willan AR. Something old, something new, something borrowed, something blue: a framework for the marriage of health econometrics and cost-effectiveness analysis. Health Econ 2002;11:415–30.
12. National Institute for Clinical Excellence. Guide to the Methods of Technology Appraisal. London: NICE, 2004.
13. Howard RA, Matheson JE, North DW. The decision to seed hurricanes. Science 1972;176:1191–202.
14. Thompson K, Evans J. The value of improved national exposure information for perchloroethylene (Perc): a case study for dry cleaners. Risk Anal 1997;17:253–71.
15. Yokota F, Gray G, Hammitt JK, Thompson KM. Tiered chemical testing: a value of information approach. Risk Anal 2004;24:1625–39.
16. Hammitt JK, Cave JAK. Research planning for food safety: a value-of-information approach. The RAND publication series R-3946-ASPE/NCTR.
17. Felli JC, Hazen GB. Sensitivity analysis and the expected value of perfect information. Med Decis Making 1998;18:95–109.
18. Chilcott J, Brennan A, Booth A, et al. The role of modelling in prioritising and planning clinical trials. Health Technol Assess 2003;7:iii, 1–125.
19. Groot Koerkamp B, Myriam Hunink MG, Stijnen T, Weinstein MC. Identifying key parameters in cost-effectiveness analysis using value of information: a comparison of methods. Health Econ 2006;15:383–92.
20. Brennan A, Kharroubi S, O'Hagan A, Chilcott J. Calculating partial expected value of perfect information via Monte Carlo sampling algorithms. Med Decis Making 2007;27:448–70.
21. Claxton K, Neumann PJ, Araki S, Weinstein MC. Bayesian value-of-information analysis. An application to a policy model of Alzheimer's disease. Int J Technol Assess Health Care 2001;17:38–55.
22. Fenwick E, Palmer S, Claxton K, et al. An iterative Bayesian approach to health technology assessment: application to a policy of preoperative optimization for patients undergoing major elective surgery. Med Decis Making 2006;26:480–96.
23. Speight PM, Palmer S, Moles DR, et al. The cost-effectiveness of screening for oral cancer in primary care. Health Technol Assess 2006;10:1–144, iii–iv.
24. Castelnuovo E, Thompson-Coon J, Pitt M, et al. The cost-effectiveness of testing for hepatitis C in former injecting drug users. Health Technol Assess 2006;10:iii–iv, ix–xii, 1–93.
25. Oostenbrink JB, Rutten-van Molken MP, Monz BU, FitzGerald JM. Probabilistic Markov model to assess the cost-effectiveness of bronchodilator therapy in COPD patients in different countries. Value Health 2005;8:32–46.
26. Rutten-van Molken MP, Oostenbrink JB, Miravitlles M, Monz BU. Modelling the 5-year cost effectiveness of tiotropium, salmeterol and ipratropium for the treatment of chronic obstructive pulmonary disease in Spain. Eur J Health Econ 2007;8:123–35.
27. Global Initiative for Chronic Obstructive Lung Disease (GOLD). Global Strategy for Diagnosis, Management, and Prevention of COPD. Bethesda, MD: National Institutes of Health, National Heart, Lung and Blood Institute. Available from: http://www.goldcopd.com [Accessed July 6, 2006].
28. Briggs AH, Ades AE, Price MJ. Probabilistic sensitivity analysis for decision trees with multiple branches: use of the Dirichlet distribution in a Bayesian framework. Med Decis Making 2003;23:341–50.
29. Price MJ, Briggs AH. Development of an economic model to assess the cost effectiveness of asthma management strategies. Pharmacoeconomics 2002;20:183–94.
30. Hoogendoorn M, Rutten-van Mölken MPMH, Hoogenveen RT, et al. A dynamic population model of disease progression in COPD. Eur Respir J 2005;26:223–33.
31. Hoogendoorn M, Feenstra TL, Schermer TR, et al. Severity distribution of chronic obstructive pulmonary disease (COPD) in Dutch general practice. Respir Med 2006;100:83–6.
32. Vincken W, Van Noord JA, Greefhorst AP, et al. Improved health outcomes in patients with COPD during 1 year's treatment with tiotropium. Eur Respir J 2002;19:209–16.
33. Casaburi R, Mahler DA, Jones PW, et al. A long-term evaluation of once-daily inhaled tiotropium in chronic obstructive pulmonary disease. Eur Respir J 2002;19:217–24.
34. Brusasco V, Hodder R, Miravitlles M, et al. Health outcomes following treatment for six months with once daily tiotropium compared with twice daily salmeterol in patients with COPD. Thorax 2003;58:399–404.
35. Feenstra TL, Van Genugten ML, Hoogenveen RT, et al. The impact of aging and smoking on the future burden of chronic obstructive pulmonary disease: a model analysis in the Netherlands. Am J Respir Crit Care Med 2001;164:590–6.
36. Borg S, Ericsson A, Wedzicha JA, et al. A computer simulation model of the natural history and economic impact of chronic obstructive pulmonary disease. Value Health 2004;7:153–67.
37. Paterson C, Langan CE, McKaig GA, et al. Assessing patient outcomes in acute exacerbations of chronic bronchitis: the measure your medical outcome profile (MYMOP), medical outcomes study 6-item general health survey (MOS-6A) and EuroQol (EQ-5D). Qual Life Res 2000;9:521–7.
38. Spencer S, Jones PW. Time course of recovery of health status following an infective exacerbation of chronic bronchitis. Thorax 2003;58:589–93.
39. College voor zorgverzekeringen. Richtlijnen voor farmaco-economisch onderzoek, evaluatie en actualisatie. Diemen: College voor Zorgverzekeringen, 2005.
40. Jansson SA, Andersson F, Borg S, et al. Costs of COPD in Sweden according to disease severity. Chest 2002;122:1994–2002.
41. Rutten-van Mölken MP, Oostenbrink JB, Tashkin DP, et al. Does quality of life of COPD patients as measured by the generic EuroQol five-dimension questionnaire differentiate between COPD severity stages? Chest 2006;130:1117–28.
42. Dolan P. Modeling valuations for the EuroQol health states. Med Care 1997;35:1095–108.
43. Claxton K. Value of information analysis. In: Sculpher M, Briggs AH, Claxton K, eds. Decision Modelling for Health Economic Evaluation. Oxford: Oxford University Press, 2006.
44. CBO. Behandeling en preventie van coronaire hartziekten door verlaging van de plasmacholesterolconcentratie. Consensus Cholesterol tweede herziening (Treatment and prevention of coronary heart disease using cholesterol lowering therapy. Consensus Cholesterol, second revision [in Dutch]). Utrecht: Dutch Institute for Healthcare CBO, 1998.
45. CBO. Osteoporose. Tweede herziening richtlijn (Osteoporosis. Second revision clinical guideline [in Dutch]). Utrecht: Dutch Institute for Healthcare CBO, 2002.
46. Stolk EA, Van Donselaar G, Brouwer WB, Busschbach JJ. Reconciliation of economic concerns and health policy: illustration of an equity adjustment procedure using proportional shortfall. Pharmacoeconomics 2004;22:1097–107.
47. Decramer M, Celli BR, Tashkin DP, et al. Clinical trial design considerations in assessing long-term functional impacts of tiotropium in COPD: the UPLIFT trial. J Chron Obstruct Pulm Dis 2004;1:303–12.
48. Groot Koerkamp B, Hunink MG, Stijnen T, et al. Limitations of acceptability curves for presenting uncertainty in cost-effectiveness analysis. Med Decis Making 2007;27:101–11.
49. Fenwick E, Briggs A. Cost-effectiveness acceptability curves in the dock: case not proven? Med Decis Making 2007;27:93–5.
50. Franciosi LG, Page CP, Celli BR, et al. Markers of disease severity in chronic obstructive pulmonary disease. Pulm Pharmacol Ther 2006;19:189–99.