Too Much Ado about Propensity Score Models? Comparing Methods of Propensity Score Matching

Authors


Onur Baser, Thomson-Medstat, 777 East Eisenhower Parkway, 906R, Ann Arbor, MI 48108, USA. E-mail: onur.baser@thomson.com

ABSTRACT

Objective:  A large number of possible techniques are available when conducting matching procedures, yet coherent guidelines for selecting the most appropriate application do not yet exist. In this article we evaluate several matching techniques and provide a suggested guideline for selecting the best technique.

Methods:  The main purpose of a matching procedure is to reduce selection bias by increasing the balance between the treatment and control groups. The following approach, consisting of five quantifiable steps, is proposed to check for balance: 1) using two-sample t-statistics to compare the means of the treatment and control groups for each explanatory variable; 2) comparing the mean difference as a percentage of the average standard deviation; 3) comparing the percent reduction of bias in the means of the explanatory variables before and after matching; 4) comparing treatment and control density estimates for the explanatory variables; and 5) comparing the density estimates of the propensity scores of the control units with those of the treated units. We investigated seven different matching techniques and how they performed with regard to the proposed five steps. Moreover, we estimated the average treatment effect with multivariate analysis and compared the results with the estimates from the propensity score matching techniques. The Medstat MarketScan Database provided data for empirical examples of the utility of several matching methods. We conducted the matching analyses in seven ways: nearest neighbor matching (NNM) with replacement, 2 to 1 matching, Mahalanobis matching (MM), MM with caliper, kernel matching, radius matching, and the stratification method.

Results:  Comparing techniques according to the above criteria revealed that the choice of matching technique has significant effects on outcomes. Patients with asthma were compared with patients without asthma, and the estimated cost of illness ranged from $2040 to $4463 depending on the type of matching. After matching, we looked for insignificant differences or larger P-values in the mean values (criterion 1); low mean differences as a percentage of the average standard deviation (criterion 2); 100% reduction in bias in the means of the explanatory variables (criterion 3); and insignificant differences when comparing the density estimates of the treatment and control groups (criteria 4 and 5). Mahalanobis matching with caliper yielded the best results according to all five criteria (Mean = $4463, SD = $3252). We also applied multivariate analysis over the matched sample. This decreased the deviation in the cost of illness estimate more than threefold (Mean = $4456, SD = $996).

Conclusion:  Sensitivity analysis of the matching techniques is especially important because none of the proposed methods in the literature is a priori superior to the others. The suggested joint consideration of propensity score matching and multivariate analysis offers an approach to assessing the robustness of the estimates.

Introduction

A key problem that often plagues observational studies is the lack of randomization in assigning individuals to either treatment or control groups. Because of this, the estimation of the effects of treatment may be biased by the existence of confounding factors.

Randomized controlled trials (RCTs) are viewed as the ideal evaluation technique for estimating treatment effects because, when randomization works, measurable and unmeasurable differences between treatment and control groups are minimized or avoided entirely, leaving assignment to treatment or control group status as the only remaining likely cause of differences in observed outcomes. Often, however, randomization is not feasible or permissible.

Rossi and Freeman [1] noted that randomization is difficult to apply or maintain when: 1) the treatment is in its early stages, because projects in early stages may need frequent changes in structure to perfect their operation and delivery; 2) enrollment demand is minimal, so that when very few patients express interest in the treatment, diverting a subset of these potential patients to control status may be unacceptable; 3) there are ethical qualms about denying treatment to those perceived to be in need; 4) time and money are limited, because RCTs often require extensive management processes that consume large amounts of time and money; 5) the RCT is likely to be less generalizable to the population of interest; and 6) the integrity of the evaluation may be threatened easily, for example by failure of treatment or control group members to follow protocols, by morbidity or mortality, or by other reasons for dropping out of the evaluation [2].

Under these circumstances, observational studies would often be the design of choice if the investigator could adjust for the large confounding biases. Propensity score matching techniques have been devised for this purpose [3]. Formally, the propensity score for a patient is the probability of being treated conditional on the patient's covariate values, such as demographic and clinical factors. If we have two patients, one in the treatment group and one in the control group, with the same or a similar propensity score, we can consider these subjects randomly assigned to each group and thus as equivalently treated or not treated.

Unlike RCTs, which have a long and well-documented history, the design processes for propensity score matching are not well established. Moreover, there are numerous factors to consider when implementing propensity score matching, a process further complicated by the number of matching routines available. Despite its frequent use in observational research, no coherent, rule-based decision matrix currently exists in the literature. The potential for misapplication of these techniques is high and contributes to the controversy over the value of the methodology itself.

In this article, we examine seven different types of matching techniques and propose five quantifiable criteria to help health services researchers choose the best matching technique for their data sets. We briefly describe the evaluation problem and then demonstrate why we believe propensity score matching offers a solution. We then summarize the seven types of matching techniques together with the proposed tests, demonstrate the application of these guidelines to the Medstat MarketScan Data (Thomson-Medstat, Ann Arbor, MI), and close with a discussion of pertinent issues and concluding thoughts.

Evaluation Problem and Propensity Score Matching

Empirical methods in health economics have been developed to answer counterfactual questions [4–8] such as, “What would have happened to a patient’s health had he or she been subject to an alternative treatment?” Answering this question requires random assignment of each patient to the alternative treatments, as in RCTs, which is missing in observational studies. Because treatments are not randomly assigned, treated and control subjects are not comparable before treatment, so differencing outcomes may reflect these pretreatment differences rather than the effects of the treatment. Pretreatment differences in observed and accurately measured covariates constitute an overt bias; such bias is visible in the data at hand and can be removed by adjustment.

Matching is a frequently employed method to remove overt bias and estimate the treatment effect using observational data. One way to match focuses directly on risk factors that are correlated with both the outcome and the choice of treatment. For example, suppose sex is the only important risk factor. Suppose that we observe a treatment group that consists of 30 men and 70 women and a control group that consists of 50 men and 50 women. We cannot estimate the treatment effect based on the overall average difference between these two groups because of the sex imbalance. Nevertheless, we could match the men in the treatment group to the men in the control group, and likewise the women. We could then estimate the overall effect as a weighted average of: 1) the average effect for men (mean treatment cost minus mean control cost for men); and 2) the average effect for women (mean treatment cost minus mean control cost for women). The male/female weight could be 30/70, 50/50, or 40/60, depending on whether one wishes to estimate the treatment savings for the treated group, the control group, or the total population. When it is necessary to account for many factors, however, direct matching on all of the risk factors becomes unwieldy and inefficient. An alternative is matching on the propensity score.
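
Before turning to propensity scores, the weighting in the example above can be made explicit; the notation below is ours, not the original article's:

\[
\widehat{\Delta}_{\text{treated}}
  = 0.3\left(\bar{Y}^{\mathrm{T}}_{\text{men}} - \bar{Y}^{\mathrm{C}}_{\text{men}}\right)
  + 0.7\left(\bar{Y}^{\mathrm{T}}_{\text{women}} - \bar{Y}^{\mathrm{C}}_{\text{women}}\right),
\]

where \(\bar{Y}^{\mathrm{T}}\) and \(\bar{Y}^{\mathrm{C}}\) denote mean costs in the treatment and control groups. Replacing the weights 0.3/0.7 with 0.5/0.5 or 0.4/0.6 gives the corresponding estimates for the control group or the total population.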

Propensity score matching employs a predicted probability of group membership (e.g., treatment vs. control group), based on observed predictors such as pretreatment demographic, socioeconomic, and clinical characteristics and usually obtained from logistic regression, to create a counterfactual group. Although logistic regression is the most commonly used technique, it is also possible to use probit, semiparametric, or nonparametric regressions to estimate the probability of group membership [9]. For this technique, the only requirement is a consistent probability estimate that is strictly between zero and one for all covariate values [10]. In this respect, linear probability models are not a good choice because they can produce out-of-range predicted values.
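
As a minimal illustration (not the code used in this study), the following Python sketch estimates propensity scores with a logistic regression; the data file and column names (treatment, age, female, and so on) are hypothetical:

import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("analytic_file.csv")            # hypothetical analytic file
covariates = ["age", "female", "north_central", "south", "west",
              "other_region", "cci", "point_of_service"]

X = sm.add_constant(df[covariates])              # design matrix with an intercept
logit_fit = sm.Logit(df["treatment"], X).fit()   # treatment = 1 for the treated cohort

df["pscore"] = logit_fit.predict(X)              # predicted probability of treatment
print(logit_fit.summary())                       # coefficients, SEs, pseudo-R2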

Researchers should not ignore the important statistical properties and limitations of logistic regression models. There is a general tendency to keep the control data set as large as possible to increase the likelihood of finding better matches for the treatment group. It has been shown, however, that logistic regression can sharply underestimate the probability of rare events [11]. Therefore, a rule of thumb is to choose a control data set at most nine times as large as the treatment group, so that the treatment group makes up no less than 10% of the overall sample. Established criteria for logistic model development have been recommended in recent literature [12–14].

Selection of covariates is another important step before matching. The causal relationships among the covariates, outcomes, and treatment variables should be derived from theoretical relationships and a sound knowledge of previous research [15,16]. Variables should be excluded from the analysis only if there is consensus among clinicians that no causal relationship exists. To avoid omitted variable bias (omitting a relevant variable from the model), we should include all variables that affect both treatment assignment and the outcome variables. Omitted variable bias yields inaccurate propensity scores. Because including variables that are only weakly related to treatment assignment usually reduces bias more than it increases variance when matching, under most conditions these variables should be included [17,18]. Adding an interaction term should be considered carefully and done only if it is supported both clinically and statistically. It has been shown that adding an inappropriate interaction term can alter the estimated propensity score, possibly introducing bias into the estimate [14]. Furthermore, to avoid post-treatment bias and overmatching, we should exclude variables affected by the treatment variable. The literature discusses several statistical strategies for the selection of variables [19,20].

Researchers should also consider estimation power. The Hosmer–Lemeshow test [21] is useful for assessing the classification power of the logistic regression. The test regroups the data into equal-sized groups according to the predicted probabilities (propensity scores, in this case) and compares the observed and expected frequencies within each group. An insignificant test value is needed for precise classification.

The area under the receiver operating characteristic (ROC) curve is another way to assess classification power. The ROC curve is a graph of sensitivity versus one minus specificity as the classification cutoff varies. The greater the predictive power, the more bowed the curve; therefore, the area under the curve (the C-statistic) can be used to determine the predictive power of the logistic regression. To classify group membership correctly, the C-statistic should be greater than 0.80. A poorly fitted model does not create balance between the treatment and control groups, which can lead to biased estimates [22].
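
A minimal sketch of these two checks, assuming a 0/1 treatment indicator y and predicted propensity scores p; the helper functions are illustrative and not taken from any particular package:

import numpy as np
from scipy import stats

def hosmer_lemeshow(y, p, groups=10):
    """Chi-square statistic comparing observed and expected counts across
    equal-sized groups of predicted probability; an insignificant P-value is desired."""
    order = np.argsort(p)
    y, p = np.asarray(y)[order], np.asarray(p)[order]
    chi2 = 0.0
    for idx in np.array_split(np.arange(len(p)), groups):
        obs, exp, n = y[idx].sum(), p[idx].sum(), len(idx)
        chi2 += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
    return chi2, stats.chi2.sf(chi2, groups - 2)

def c_statistic(y, p):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation;
    values above 0.80 are desired (see text)."""
    y, p = np.asarray(y), np.asarray(p)
    ranks = stats.rankdata(p)
    n1 = y.sum()
    n0 = len(y) - n1
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)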

One final point to consider before propensity score matching is to identify the region of substantial overlap between the treatment and comparison groups. Every exclusion/inclusion criterion applied to the treatment sample should also be applied to the control sample [23].

Weitzen et al. [24] searched MEDLINE and Science Citation Index to identify observational studies published in 2001 that addressed clinical questions using propensity score methods. Of the 47 articles reviewed, 24 (51%) provided no information about the method used to select variables, 30 (64%) were unclear about whether interaction terms were incorporated into the propensity score, 39 (83%) did not consider the goodness of fit of the propensity score model, and only 18 (38%) reported the area under the ROC curve. More concerning, nearly half of the studies (22 of 47) included no information on whether the propensity score created balance between exposure groups on the characteristics considered in the propensity model.

Types of Propensity Score Matching

After researchers have estimated the propensity score, they must select a matching technique. There are five key approaches to matching treatment and control groups, each of which is described below.

Stratified Matching

In this method, the range of variation of the propensity score is divided into intervals such that, within each interval, treated and control units have, on average, the same propensity score [25]. Differences in outcome measures between the treatment and control groups in each interval are then calculated. The average treatment effect is obtained as an average of the outcome differences per block, weighted by the distribution of treated units across the blocks. It has been shown that five classes are often sufficient to remove 95% of the bias associated with all of the covariates [25].
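
Assuming a data frame with columns pscore, treatment (0/1), and cost (illustrative names), a sketch of the stratification estimator might look like this:

import numpy as np
import pandas as pd

def stratified_effect(df, blocks=5):
    """Average treatment effect on the treated across equal-frequency propensity score blocks."""
    df = df.copy()
    df["block"] = pd.qcut(df["pscore"], q=blocks, labels=False, duplicates="drop")
    effects, weights = [], []
    for _, blk in df.groupby("block"):
        treated, control = blk[blk.treatment == 1], blk[blk.treatment == 0]
        if len(treated) and len(control):
            effects.append(treated["cost"].mean() - control["cost"].mean())
            weights.append(len(treated))          # weight each block by its treated units
    return np.average(effects, weights=weights)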

Nearest Neighbor and 2 to 1 Matching

This method randomly orders the treatment and control patients, then selects the first treated patient and finds the control (two controls for 2 to 1 matching) with the closest propensity score [26]. The nearest neighbor technique faces the risk of imprecise matches if the closest neighbor is numerically distant.
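
A sketch of 1 to 1 nearest neighbor matching without replacement, assuming arrays of propensity scores for the treated and control units (names are illustrative, and at least as many controls as treated units are assumed):

import numpy as np

def nearest_neighbor_match(p_treated, p_control):
    """Return, for each treated unit (taken in random order), the index of the
    closest unused control unit."""
    rng = np.random.default_rng(0)
    available = set(range(len(p_control)))
    matches = {}
    for i in rng.permutation(len(p_treated)):
        j = min(available, key=lambda k: abs(p_control[k] - p_treated[i]))
        matches[int(i)] = j
        available.remove(j)                      # without replacement
    return matches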

Radius Matching

With radius matching, each treated unit is matched only with the control units whose propensity scores fall within a predefined neighborhood of the propensity score of the treated unit [27]. The benefit of this approach is that it uses only the comparison units available within the predefined radius, thereby allowing for use of extra units when good matches are available and fewer units when they are not. One possible drawback is the difficulty of knowing a priori what radius is reasonable.
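
A sketch of the radius estimator, assuming propensity scores and outcomes for treated (p_t, y_t) and control (p_c, y_c) units as NumPy arrays; the radius value is purely illustrative:

import numpy as np

def radius_effect(p_t, y_t, p_c, y_c, radius=0.05):
    effects = []
    for p, y in zip(p_t, y_t):
        inside = np.abs(p_c - p) <= radius           # controls within the radius
        if inside.any():
            effects.append(y - y_c[inside].mean())   # treated outcome minus local control mean
    return np.mean(effects)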

Kernel Matching

Each treated unit is matched with a weighted average of all controls, with weights inversely proportional to the distance between the propensity scores of the treated and control units. Because all control units contribute to the weighted average, lower variance is achieved. Nevertheless, two choices need to be made: the type of kernel function and the bandwidth parameter. The former appears to be unimportant [27].
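
A sketch of kernel matching with a Gaussian kernel, using the same illustrative array names as above; the bandwidth (the choice that matters, per the text) is arbitrary here:

import numpy as np

def kernel_effect(p_t, y_t, p_c, y_c, bandwidth=0.06):
    effects = []
    for p, y in zip(p_t, y_t):
        w = np.exp(-((p_c - p) / bandwidth) ** 2 / 2)    # Gaussian kernel weights
        effects.append(y - np.average(y_c, weights=w))   # weighted control counterfactual
    return np.mean(effects)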

Mahalanobis Metric Matching

This method randomly orders subjects and then calculates the distance between the first treated subject and all controls, where the distance is d(i,j) = (u − v)^T C^{-1} (u − v), u and v are the values of the matching variables (including the propensity score) for treated subject i and control subject j, and C is the sample covariance matrix of the matching variables from the full set of control subjects [28]. The control with the smallest distance is selected as the match, and the process is repeated for the next treated subject.

Each of the described types can be used with replacement (in which control patients are returned to the pool for further possible matching) or without replacement. To improve the quality of matching, a caliper can be used. The caliper method defines a common support region (one-fourth of the standard error of the estimated propensity score is suggested) and discards observations whose values fall outside the range defined by the caliper.
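
A sketch combining the Mahalanobis distance above with a propensity score caliper; Xt and Xc are matrices of matching variables (propensity score included) for treated and control units, and all names are illustrative:

import numpy as np

def mahalanobis_caliper_match(Xt, Xc, p_t, p_c):
    C_inv = np.linalg.inv(np.cov(Xc, rowvar=False))        # covariance from the full control set
    caliper = 0.25 * np.std(np.concatenate([p_t, p_c]))    # SD used here as a stand-in for the
                                                           # standard error suggested in the text
    matches = {}
    for i in range(len(Xt)):
        ok = np.abs(p_c - p_t[i]) <= caliper               # drop controls outside the caliper
        if not ok.any():
            continue                                       # treated unit left unmatched
        diffs = Xc[ok] - Xt[i]
        d = np.einsum("ij,jk,ik->i", diffs, C_inv, diffs)  # squared Mahalanobis distances
        matches[i] = int(np.flatnonzero(ok)[np.argmin(d)])
    return matches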

Different types of bootstrapping methods can be easily applied using standard commercial software programs. STATA and SAS files are available online [29,30].

Comparing Different Types of Propensity Score Matching

The fact that several types of propensity score matching exist immediately raises the question of choice: Which one is most appropriate? Surprisingly enough, however, the literature does not offer guidelines for making this choice.

Asymptotically, all matching estimators compare only exact matches and therefore provide the same answers. In a finite sample, however, the specific propensity score matching technique selected makes a difference. None of the proposed propensity score matching techniques in the literature is a priori superior to the others.

The general tendency in the literature is to choose matching with replacement when the control data set is small. Matching with replacement involves a trade-off between bias and variance (Table 1). With replacement, the average quality of matching increases; thus, the bias decreases but the variance increases.

Table 1.  Trade-offs in terms of bias and efficiency

Types of matching algorithm            Bias       Variance
NN matching
  2 to 1 matching/1 to 1 matching      (+)/(–)    (–)/(+)
  With/without caliper                 (–)/(+)    (+)/(–)
MM matching
  With/without caliper                 (–)/(+)    (+)/(–)
Bandwidth choice of KM
  Small/large                          (–)/(+)    (+)/(–)
NN matching/radius matching            (–)/(+)    (+)/(–)
KM or MM matching/NN matching          (+)/(–)    (+)/(–)

KM, kernel matching; MM, Mahalanobis matching; NN, nearest neighbor; (+), increase; (–), decrease.

If the control data set is large and evenly distributed, 2 to 1 matching seems reasonable. Thus, reduced variance, resulting from the use of more information to construct the counterfactual for each participant, is traded for increased bias because of a poorer quality match, on average.

Kernel, Mahalanobis, and radius matching work better with large, asymmetrically distributed control data sets. The stratification method is especially useful if we suspect unobservable effects in the matching. Because stratification clusters similar observations, the effects of unobservables are assumed to diminish.

We propose the following set of guidelines for selecting the most appropriate application:

  • C1. Calculate two-sample t-statistics for continuous variables and chi-square tests for categorical variables to compare the mean of each explanatory variable between the treatment and control groups.

  • C2. Calculate the mean difference as a percentage of the average standard deviation: 100(XT − XC)/[½(SXT + SXC)], where XT and XC are the means of a covariate in the treatment and control groups, and SXT and SXC are the standard deviations of that covariate in the treatment and control groups, respectively.

  • C3. Calculate the percent reduction in bias in the means of the explanatory variables after matching (A) relative to before matching (I): 100 × [1 − (XAT − XAC)/(XIT − XIC)], where XIT and XIC are the means of a covariate in the treatment and control groups, respectively, before matching, and XAT and XAC are the means of that covariate in the treatment and control groups, respectively, after matching.

  • C4. Use the Kolmogorov–Smirnov test [31] to compare the treatment and control density estimates for the explanatory variables.

  • C5. Use the Kolmogorov–Smirnov test to compare the density estimates of the propensity scores of the control units with those of the treated units.

The main purpose of a matching procedure is to reduce selection bias by increasing the balance between the treatment and control groups. In this respect, one would like to see insignificant differences or larger P-values (criterion 1); low mean differences as a percentage of the average standard deviation (criterion 2); 100% reduction in bias in the means of the explanatory variables (criterion 3); and insignificant differences when comparing the density estimates of the treatment and control groups (criteria 4 and 5). Therefore, the best matching algorithm for the data is the one that satisfies all five criteria.
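
A compact sketch of criteria C1–C5 for a single matched sample; t and c are data frames of treated and matched control units (t_pre and c_pre their pre-matching counterparts) sharing the covariate columns plus a pscore column, all names being illustrative:

import numpy as np
from scipy import stats

def check_balance(t, c, t_pre, c_pre, covariates):
    """C1: t-test P-value (use a chi-square test for categorical variables);
    C2: standardized difference (%); C3: percent reduction in bias;
    C4/C5: Kolmogorov-Smirnov P-values."""
    report = {}
    for v in covariates:
        p_ttest = stats.ttest_ind(t[v], c[v], equal_var=False).pvalue        # C1
        std_diff = 100 * (t[v].mean() - c[v].mean()) / (
            0.5 * (t[v].std() + c[v].std()) + 1e-12)                          # C2
        pct_bias_red = 100 * (1 - (t[v].mean() - c[v].mean())
                              / (t_pre[v].mean() - c_pre[v].mean() + 1e-12))  # C3
        p_ks = stats.ks_2samp(t[v], c[v]).pvalue                              # C4
        report[v] = (p_ttest, std_diff, pct_bias_red, p_ks)
    report["pscore"] = stats.ks_2samp(t["pscore"], c["pscore"]).pvalue        # C5
    return report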

Data Sources and Construction of Variables

To illustrate the implications of these techniques, MarketScan data were used to examine the cost of illness for asthma patients. Details of the patient selection criteria are provided in Crown et al. [32]. Briefly, MarketScan contains detailed descriptions of inpatient, outpatient medical, and outpatient prescription drug services for approximately 13 million persons covered by corporate-sponsored health-care plans in 2005. Patients with evidence of asthma were selected from the intersection of the medical claims and encounter records, enrollment files, and pharmaceutical data files. Individuals meeting at least one of the following criteria were deemed to show evidence of asthma:

  • At least two outpatient claims with a primary or secondary diagnosis of asthma.
  • At least one emergency room claim with a primary diagnosis of asthma, and a drug transaction for an asthma medication 90 days before or 7 days after the emergency room claim.
  • At least one inpatient claim with a primary diagnosis of asthma.
  • A secondary diagnosis of asthma and a primary diagnosis of respiratory infection in an outpatient or inpatient claim.
  • At least one drug transaction for an anti-inflammatory agent, an oral antileukotriene, a long-acting bronchodilator, or an inhaled or oral short-acting beta-agonist.

Patients with a diagnosis of chronic obstructive pulmonary disease (COPD), patients with one or more diagnosis or procedure codes indicating pregnancy or delivery, and patients who were not continuously enrolled for 24 months were excluded from our study group.

The sociodemographic characteristics included patient age, the percentage of patients who were female, and geographic region (northeast, north central, south, west, and “other” region). Charlson comorbidity index (CCI) scores were generated to capture the level and burden of comorbidity. Plan type was captured as point of service plans versus other plan types, including health maintenance organizations and preferred provider organizations. The analytic file contains patients with fee-for-service (FFS) health plans and those with partially or fully capitated plans. Data on costs were not available for the capitated plans, however; therefore, the value of patients’ service utilization under the capitated plans was priced and imputed using average payments from MarketScan FFS inpatient and outpatient services by region, year, and procedure.

Results

The objective of this study is to estimate the cost of illness for asthma patients. Therefore we defined the treatment group as the patients with evidence of asthma, and the control group as the patients without evidence of asthma.

Table 2 shows that before matching, treatment and control groups were similar with respect to sex and plan type, but quite different in terms of age, region, and CCI. Chi-square tests were used for proportions and t-tests were used for continuous variables.

Table 2.  Descriptive table for treatment and control cohorts

Variable            Treatment mean (N = 1184)   Control mean (N = 3169)   P-value of difference
Age                 28.026                      31.368                    0.000
Female               0.506                       0.514                    0.646
Male                 0.494                       0.486                    0.646
Northeast            0.218                       0.234                    0.248
North central        0.253                       0.213                    0.005
South                0.378                       0.331                    0.004
West                 0.110                       0.099                    0.314
Other region         0.042                       0.122                    0.000
CCI                  0.999                       0.159                    0.000
Point of service     0.720                       0.721                    0.908
Other plan type      0.280                       0.279                    0.908

The estimation of propensity scores with a logit model is presented in Table 3. As expected, sex and plan type were not significant. Several interaction terms were added to the models; F-tests on these interaction terms yielded insignificant results. Age was also entered with splines, but there was no evidence of significant changes in the results. The overall model was significant (P < 0.001), and the pseudo-R2 indicated that the equation explains 24.3% of the variation in treatment choice. The area under the ROC curve was 0.863. These values indicate the potential benefit of matching the sample according to propensity scores.

Table 3.  Estimation of propensity score with logit

Variable            Coefficient   SE      P-value   95% confidence interval
North central         0.301       0.115   0.009      0.077 to  0.526
South                 0.184       0.104   0.077     −0.020 to  0.387
West                  0.280       0.146   0.055     −0.006 to  0.567
Other region         −1.428       0.289   0.000     −1.995 to −0.861
CCI                   1.887       0.069   0.000      1.751 to  2.023
Age                  −0.033       0.003   0.000     −0.039 to −0.028
Female               −0.030       0.081   0.716     −0.189 to  0.130
Point of service     −0.043       0.090   0.635     −0.220 to  0.134
Constant             −0.934       0.127   0.000     −1.183 to −0.686

N = 4353; Prob > χ² = 0.000; pseudo-R² = 0.243.

We applied the following types of propensity score matching: nearest neighbor (M1), 2 to 1 (M2), Mahalanobis (M3), Mahalanobis with caliper (M4), radius (M5), kernel (M6), and stratified matching (M7). All balance-checking criteria are presented in Table 5.

One can immediately observe that criterion 1, which is the method most commonly used in applications, can be misleading. Although all regional differences between treatment and control groups were significant according to nearest neighbor and 2 to 1 matching (M1 and M2), an overall density comparison revealed insignificant differences.

Radius, kernel, and stratified matching (M5, M6, and M7) seemed to render results similar to nearest neighbor and 2 to 1 matching. Statistics were consistent across the tables.

Mahalanobis matching (M3 and M4) was distinctively better than the others. Especially with the caliper (M4), calculated as 0.06 (one-fourth of the standard error of the estimated propensity score), Mahalanobis matching was able to match our treatment and control groups in all salient aspects. In terms of criterion 1, all of the variables were insignificant. The mean difference as a percentage of the average standard deviation was less than 5% for age and the northeast region, and virtually zero for the others. For most variables, Mahalanobis matching with caliper was able to decrease the bias by 100%, and the densities for each variable were statistically indistinguishable. Moreover, M4 was the only technique that produced propensity scores for the control and treated units with insignificant differences.

After we selected the most appropriate matching technique, we estimated total health-care expenditures in the treatment and control cohorts. We did this in two ways: 1) by examining the differences in mean expenditures between the treatment and matched control units; and 2) by running a regression, in which the independent variables are the treatment indicator and the same variables used in the propensity score estimation, and estimating the marginal effect of the treatment indicator. Following Manning and Mullahy [33], we used a generalized linear model with a log link and gamma family. Table 4 presents these results.

Table 4.  Estimated total health-care expenditures ($)

Matching type                   Treatment   Control   Difference     SE    Regression-based difference     SE
Unmatched                        10,398      3,345      7,053         742    4,247                          489
M1: Nearest neighbor             10,398      7,377      3,021       1,275    3,969                        1,135
M2: 2 to 1                       10,398      7,364      3,034         978    5,157                        1,232
M3: Mahalanobis                  10,398      6,892      3,506       2,281    4,823                        1,205
M4: Mahalanobis with caliper     11,104      6,641      4,463       3,252    4,456                          994
M5: Radius                       10,398      7,786      2,612       1,278    4,601                          659
M6: Kernel                       10,398      7,942      2,456       2,281    4,823                        1,205
M7: Stratified                   10,398      8,358      2,040       2,564    3,754                        1,009
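
A sketch of the second, regression-based approach on the matched sample, using a generalized linear model with a log link and gamma family as described above; the data frame matched and its column names are illustrative:

import statsmodels.api as sm
import statsmodels.formula.api as smf

# `matched` is assumed to be a pandas DataFrame of the matched treatment and control units.
formula = ("cost ~ treatment + age + female + north_central + south + west"
           " + other_region + cci + point_of_service")
glm_fit = smf.glm(formula, data=matched,
                  family=sm.families.Gamma(link=sm.families.links.Log())).fit()

# Marginal effect of the treatment indicator: average difference in predicted cost
# with the indicator switched on versus off for every matched patient.
effect = (glm_fit.predict(matched.assign(treatment=1))
          - glm_fit.predict(matched.assign(treatment=0))).mean()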

Mahalanobis matching with caliper estimated the treatment effect as $4463. This is $2590 less than the unmatched difference and $2423 more than the estimate obtained by the wrongly chosen method of stratified matching. The difference is both practically and statistically significant.

Using the selected matching technique, the regression-based difference between the matched control and treatment groups was $4456. By running a regression after matching, we were able to decrease the standard error more than threefold. Note that the correct matching technique provides the estimate that is closest to its regression counterpart ($4463 − $4456 = $7).

Another advantage of running regressions after matching is that the estimates converge. For example, the unmatched difference after regression was $4247, which differs by only $209 from the estimate obtained with the correctly chosen propensity score technique. Similarly, in the case of stratified matching, the difference would be only $702 after regression, rather than the $2423 observed without regression.

Discussion

Many researchers are accustomed to data from planned experiments where patients are assigned randomly to treatment or control group status. In many other instances, though, researchers often have to rely on nonexperimental, observational data to estimate the effects of treatments on outcomes, such as total health-care costs, which are difficult or impossible to estimate in the artificial setting of a clinical trial. In this latter environment, we observe costs for treated and untreated patients, but the two groups are often unbalanced with respect to important risk factors that may influence the outcomes of interest. As a result of this imbalance, two forms of bias can arise when comparisons are made between the treatment and control groups: overt bias, which is measurable, and hidden bias, which is unobservable. Overt bias can be removed with properly applied propensity score matching, whereas hidden bias must be addressed by other methods.

It is worth noting this distinction between overt and hidden bias when considering the merits of propensity score matching in comparison with those of a randomized trial. When both are feasible, randomization would be preferred, because propensity score matching would remove only observed differences (overt bias) between treatment and control groups, whereas RCTs, when properly conducted, would remove both observed and unobserved differences.

The performance of different matching estimators varies case by case and depends largely on the data structure at hand. Researchers should not introduce any additional bias, such as choice bias resulting from a failure to check the balancing criteria. Empirical work shows the value of multivariate analysis in this respect: supported by multivariate analysis, the average treatment estimates converge on each other, regardless of the type of propensity score matching chosen. Occasionally, researchers are unwilling to lose any observations from the treatment group. In these cases, the present research proposes using the propensity score matching technique that retains the largest treatment group, while supporting the use of multivariate analysis after matching.

The heterogeneity of a real-life patient population and the lack of standardized analysis in observational studies may make clinicians suspicious of the results of propensity score matching. Presentation of Tables 4 and 5 to clinicians is thus clearly valuable: Table 5 shows the process whereby we eliminate the heterogeneity of a real-life population, and Table 4 demonstrates that the right type of propensity score matching yields an answer similar to that of multivariate analysis.

Table 5.  Balance-checking criteria

Variable            M1       M2       M3       M4       M5       M6       M7

C1: T-test or chi-square test P-values
 Age                0.000    0.000    0.001    0.709    0.000    0.000    0.000
 Female             0.267    0.868    0.870    0.999    0.233    0.376    0.255
 Northeast          0.005    0.003    0.514    0.482    0.002    0.000    0.006
 North central      0.000    0.000    0.962    0.999    0.000    0.000    0.000
 South              0.000    0.000    0.966    0.999    0.003    0.000    0.000
 West               0.000    0.000    0.948    0.999    0.610    0.024    0.450
 Other region       0.000    0.000    0.999    0.999    0.000    0.000    0.000
 CCI                0.000    0.000    0.255    0.999    0.000    0.000    0.000
 Point of service   0.407    0.937    0.891    0.999    0.455    0.689    0.515
 Other plan type    0.407    0.937    0.891    0.999    0.455    0.689    0.515

C2: Mean difference as a percentage of the average standard deviation (%)
 Age               57.900   55.500   13.500    2.200   50.400   56.100   48.500
 Female             4.600    0.600    0.700    0.000    4.900    3.600    4.100
 Northeast         11.600   10.800    2.700    4.200   12.800   15.900   10.200
 North central     20.600   17.100    0.200    0.000   18.000   19.600   16.400
 South             29.000   28.700    0.200    0.000   12.200   20.100   25.700
 West              17.400   15.100    0.300    0.000    2.100    9.300    3.500
 Other region      43.400   42.300    0.000    0.000   34.400   38.000   30.400
 CCI               26.700   25.700    4.700    0.000   19.300   25.000   20.700
 Point of service   3.400    0.300    0.600    0.000    4.900    1.600    2.800
 Other plan type    3.400    0.300    0.600    0.000    4.900    1.600    2.800

C3: Percent reduction in bias in the means of explanatory variables (%)
 Age              182.700  172.200   37.900  123.400  143.000  174.400  151.600
 Female           191.800  137.800  143.200  100.000  212.900  132.400  150.800
 Northeast        201.000  178.100  166.300   54.600  233.100  316.100  250.400
 North central    109.800   75.900  102.100  100.000   84.800   99.500   88.500
 South            187.000  184.300   98.200  100.000   25.400  103.400  154.500
 West             676.800  595.500  108.100  100.000  163.800  394.100  389.500
 Other region     149.900  141.500  100.000  100.000   83.300  108.900  135.500
 CCI              130.700  129.400  105.100  100.000  123.000  128.900  120.500
 Point of service 759.600   28.400   43.300  100.000  212.900  316.200  268.600
 Other plan type  759.600   28.400   43.300  100.000  212.900  316.200  268.600

C4: Comparison of treatment and control density estimates (P-values)
 Age                0.000    0.000    0.000    0.156    0.000    0.000    0.000
 Female             0.999    0.999    0.999    0.830    0.999    0.990    0.890
 Northeast          0.999    0.999    0.999    0.999    0.972    0.972    0.950
 North central      0.787    0.374    0.999    0.999    0.129    0.129    0.850
 South              0.997    0.999    0.256    0.999    0.050    0.050    0.080
 West               0.969    0.677    0.616    0.927    0.999    0.999    0.490
 Other region       0.761    0.953    0.999    0.999    0.123    0.123    0.350
 CCI                0.000    0.000    0.000    0.153    0.000    0.000    0.000
 Point of service   0.999    0.999    0.989    0.864    0.999    0.999    0.988
 Other plan type    0.999    0.999    0.989    0.864    0.999    0.999    0.988

C5: Comparison of the density estimates of the propensity scores of control units with those of the treated units (P-values)
 Propensity scores  0.000    0.000    0.001    0.192    0.000    0.000    0.000

CCI, Charlson index scores; M1–M7 refer to the matching techniques defined in the text.

Two limitations should be noted. First, if the two groups do not have substantial overlap, substantial error may be introduced. New methods are under development to account for limited overlap in the estimation of the average treatment effect [33]. Second, matching may not eliminate hidden bias. Suppose, for example, that we match people according to certain observable factors and then attribute any resulting difference in outcomes to differences in treatment. It is quite possible that the outcomes of patients with the same observable characteristics vary widely because of some unobservable factor, such as physician- or practice-prescribing patterns. The bounding approach under propensity score matching has been proposed to address this issue [4]. Rosenbaum assumes a latent unobservable factor and answers questions such as how strongly an unmeasured variable must influence the selection process in order to undermine the implications of the matching analysis. Another solution would be to use techniques other than propensity score matching, such as instrumental variable estimation or difference-in-differences estimation, but these estimators have their own limitations.

Conclusion

The merits of propensity score matching techniques have become increasingly recognized over the years as their application has grown. Nevertheless, different propensity score matching techniques may produce different results in finite samples. Sensitivity analysis of the matching techniques is especially important because none of the proposed methods in the literature is a priori superior to the others. This article has discussed seven common types of propensity score matching techniques and provided guidelines for choosing among them.

A case study involving the analysis of US health expenditure data was presented to highlight how different types of propensity score matching techniques can have a substantial effect on the magnitude of the estimated treatment effect. Regression analysis was used to support the argument.

The discussion in this article does not provide a detailed or rigorous treatment of the theory that underlies the matching techniques. In recent years, several articles on propensity score matching techniques, with various levels of sophistication, have been published. Curious readers are encouraged to consult these articles for a more detailed analysis [3,4,10,15,17–20,25–28,34,35].

This article benefited greatly from insightful comments offered by Ron J. Ozminkowski, Kathy Schulman, Tami Mark, Robert Houchens, and four anonymous referees. The opinions expressed in this article are the author’s and do not necessarily reflect the opinions of their affiliated organizations.

Source of financial support: None.
