Abstract


We collect experimental choice data and estimate preferences for mobile phones, with a special focus on measuring consumer price evaluations when consumers face difficult choice tasks. For this purpose we employ a heteroskedastic random coefficient logit model in which the variance of the extreme value error term depends on variables that affect the consistency (with rational behavior) of choice. We specify the price of some products in the choice experiment with a discount and seek to answer the question of how consumers evaluate these prices. We find evidence in our data that the prices specified with discount significantly compromise the consistency of choice. Copyright © 2009 John Wiley & Sons, Ltd.


1. INTRODUCTION


Price is a key factor when consumers choose between products or brands. In most models of consumer choice, a price variable is included as an important explanatory variable. The usual assumption is that consumers are able to correctly interpret and compare prices across choice options. In this paper we challenge this assumption by analyzing experimental choice data in which we manipulated the choice options so that the choice task is difficult, and we examine whether individuals are able to correctly compare price differences, if there are any. The task complexity is created by letting products effectively have the same price while confounding the wording through price discounts. For example, we make individuals compare prices like 135 euros and 150 euros with a 10% discount, which are of course the same.

Choice experiments provide a useful framework to collect data when real-life data are not available or are more costly. Such situations may occur when one would like to estimate or test hypotheses about the distribution of consumer preferences for a given product. For such problems, experiments offer the advantage that the prices of the products are exogenous. Estimation of consumer preferences using revealed preference data in most cases cannot assume price exogeneity and therefore it requires expensive datasets containing a large number of observations (Berry et al., 1995). The recent literature has witnessed a growing interest in such choice experiments especially in marketing (where the literature is huge; we only mention a pioneering work by Louviere and Woodworth, 1983) but also in various other demand studies, for example, on environmental issues (e.g., Adamowicz et al., 1997; Layton and Brown, 2000), on transportation problems (e.g., Brownstone and Train, 1999; Small et al., 2005), on health care issues (e.g., Scott, 2001; San-Miguel et al., 2002), and on other demand problems (Revelt and Train, 1998; van Ophem et al., 1999).

Most of this literature assumes that consumers are rational utility maximizers. One reason for this assumption is that it allows for the application of reasonably easy-to-analyze models for observed consumer choice. In contrast, there is also a substantial literature on consumer decision making which allows for the possibility that consumers do not always behave perfectly rationally, even when they intend to do so (see Bettman et al., 1993). This phenomenon is often termed bounded rationality (e.g., Rubinstein, 1998). The drivers of this bounded rationality are found in the effort that consumers have to make to arrive at a choice. This effort depends on potentially complicating features of the choice task, such as the number of effective tasks per respondent, the number of alternatives, the way the choice alternatives are specified, the number of product characteristics involved, and possibly other factors (see, for example, Tyebjee, 1979; Johnson and Payne, 1985). In the sequel, we refer to these as choice complexity variables.

A number of papers analyzing choice experiments made important steps towards measuring how the different choice complexity variables influence consumer choice (e.g., Mazzotta and Opaluch, 1995; Dellaert et al., 1999; Swait and Adamowicz, 2001). The basic methodology for such an analysis is to relate the choice complexity variables to the consistency (with rational behavior) of choice, which is typically measured by the effect of the choice complexity variables on the variance of the error term in the utility. Along these lines, by employing a heteroskedastic logit model, DeShazo and Fermo (2002) find empirical evidence that most of the choice complexity variables listed in the previous paragraph affect the consistency of choice negatively. These authors expressed the way the choice alternatives are specified in a choice set by three variables, namely, the number of characteristics whose levels differ across alternatives and the mean and standard deviation of the dispersion of the characteristics within alternatives. They show that omitting these variables may yield over- or underestimation of welfare by up to 30%.

In this paper, by employing a heteroskedastic random coefficient logit model (McFadden and Train, 2000), we examine whether choice complexity can lead consumers to make mistakes about product prices. For example, and as we consider below, we seek to answer the question of whether consumers can see that 135 euros is the same price as 150 euros with a 10% discount when the product also has a variety of other characteristics.

We collect experimental data and estimate preferences for mobile phones, with a special focus on measuring the effect of specifying the price. We do so in a way similar to measuring the effect of the choice complexity variables on the variance of the error term in the utility. We specify some prices with discount and the others without discount and investigate whether the prices specified with discount cause difficulties in making choices. For consistency of our parameter estimates it is crucial that we also include variables that measure the consistency of choice. Because of this, besides the price specification variable we include the mean dispersion of the characteristics within alternatives as defined by DeShazo and Fermo (2002) and a variable that measures how similar the alternatives are in terms of utility. This variable is related to the number of characteristics with levels that differ across alternatives (i.e., the first of DeShazo and Fermo's variables; for details see Section 2). We find evidence in our data that this latter variable as well as the price specification variable significantly compromise the consistency of choice.

The main contribution of our paper for consumer research is that we demonstrate that the usual assumption that consumers are able to correctly interpret and compare prices across choice options does not always hold. The estimation results of our model, however, imply two other findings of potential interest. The first is the empirical result that the choice complexity variable measuring how similar the alternatives are in terms of utility is statistically significant. The second is the implication of this result for the design of statistically efficient experiments. Several authors have advocated the selection of choice alternatives whose utility is similar as a tool for constructing statistically efficient experimental designs (e.g., Huber and Zwerina, 1996; Arora and Huber, 2001; Toubia et al., 2004). However, our empirical results suggest that such experimental designs cause choices to be inconsistent, which eventually leads to inconsistent estimation of consumer preferences. We provide guidelines on how one can still use the ideas from these papers to construct statistically efficient designs in such circumstances.

The paper is organized as follows. We provide details about the model and the choice complexity variables in Section 2 and we formulate the main hypotheses of the paper. In Section 3 we give details regarding the estimation of the model by maximum likelihood. In Section 4 we discuss some issues regarding the collection of the data and present the estimation results on mobile phone preferences. In Section 5 we discuss the implication for the design of experiments mentioned in the previous paragraph. We conclude the paper with a section containing a summary and possible topics for further research.

2. THE MODEL


We use a random coefficient logit model modified in such a way that it accounts for choice task complexity, which yields a heteroskedastic random coefficient logit model. Random coefficient logit models have been successfully employed in the literature (e.g., Berry et al., 1995; Revelt and Train, 1998; Small et al., 2005) to model consumer heterogeneity, which we believe is relevant for our investigation. Heteroskedasticity is an important feature of models for experimental choice data because it captures the effect of choice complexity on the variance of choice. This aspect is important in choice experiments because a more complex choice task is expected to produce a less precise choice response. If choice complexity is relevant and not captured by the model, then the model will not be correctly specified and will yield inconsistent estimates of the parameters of interest. We illustrate this in a simulation study in the next section.

In a typical choice experiment a respondent faces the task of choosing the best from one or more sets of hypothetical products. Suppose that each respondent is given S choice sets, each consisting of J hypothetical products, and the respondent must choose the most preferred from each. We define the utility for a respondent i who chooses hypothetical product j from choice set s as a heteroskedastic random coefficient logit, that is,

  • uijs = xjs′(β + Visω) + σsεijs    (1)

where xjs is a K × 1 vector of characteristics of product j from respondent i's choice set s, β is the mean and ω is the standard deviation parameter vector of the random coefficient of xjs, Vis is a diagonal matrix with standard normal random variables on its diagonal, which represent latent consumer characteristics, and εijs are i.i.d. type I extreme value random variables common to logit models.1 Since this random variable has a coefficient σs, the variance of the utility depends on the choice set. Throughout this paper we sometimes refer to σs² as the variance of choice, which is meant to measure the consistency of choice.

In a (mixed) logit model used to analyze revealed preference data, the variance of choice is typically restricted to be equal to 1 because it cannot be identified by the estimation procedure. If in the utility specification in equation (1) we take σs = 1 for all s, then we assume that in the experiment all choice sets are equally difficult to choose from. There are reasons to believe that this is not so; let us illustrate this with a specific example. Suppose that in a choice set with two alternatives one alternative Pareto dominates the other, while in another choice set neither of the two alternatives dominates the other. In the former type of choice set it is expected that respondents choose the dominating alternative without much effort. In the second type of choice set, however, it is not so easy to make a choice, and respondents are therefore likely to adopt some simplifying decision rule instead of making the full effort of choosing. This idea is also supported by the more general findings of the behavioral economics and human decision science literature; for a comprehensive discussion we refer to DeShazo and Fermo (2002). In terms of the utility specification in equation (1), this phenomenon forces the error term corresponding to the more complex choice set to be systematically higher than the error term in the choice set where one alternative dominates the other. Consequently, the variance of choice σs² in the more complex choice set is expected to be higher than the variance of choice in the less complex choice set.

We quantify the complexity of a choice set s by defining a column vector of explanatory variables cs based on the attributes of the alternatives in the choice set. Then we postulate that the variance of choice depends exponentially on these explanatory variables; more precisely, we assume that σs = exp(−cs′γ) (DeShazo and Fermo, 2002). The parameter vector γ then measures the effect of choice complexity on the variance of choice. Obviously, a negative component of the parameter vector means that the corresponding choice complexity variable increases the variance of choice.
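
To make the variance specification concrete, here is a minimal sketch (our own Python illustration, not the authors' code) that simulates one respondent's choices from equation (1) with σs = exp(−cs′γ); the parameter values are arbitrary and only loosely inspired by the estimates reported later in Table II.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_choices(X, C, beta, omega, gamma, rng):
        """Simulate one respondent's choices from the heteroskedastic RC logit (1).

        X : (S, J, K) product characteristics x_js
        C : (S, M) choice complexity variables c_s
        Returns the index of the chosen alternative in each of the S choice sets.
        """
        S, J, K = X.shape
        v = rng.standard_normal((S, K))              # latent characteristics (diagonal of V_is)
        coef = beta + omega * v                      # random coefficients beta + V_is * omega
        sigma = np.exp(-C @ gamma)                   # sigma_s = exp(-c_s' gamma)
        eps = rng.gumbel(size=(S, J))                # i.i.d. type I extreme value errors
        utility = np.einsum('sjk,sk->sj', X, coef) + sigma[:, None] * eps
        return utility.argmax(axis=1)

    # toy setup: 12 choice sets, 2 alternatives, 6 characteristics, 3 complexity variables
    S, J, K = 12, 2, 6
    X = rng.choice([-1.0, 0.0, 1.0], size=(S, J, K))
    C = np.column_stack([rng.uniform(0.3, 0.7, S),           # Mean Dispersion (illustrative)
                         rng.integers(0, 4, S),              # Number of Trade-offs
                         rng.integers(0, 2, S)]).astype(float)  # Price Specification dummy
    beta = np.array([-1.4, -1.2, -0.5, -0.2, -0.3, 0.25])    # illustrative values only
    omega = np.array([1.0, 0.8, 0.7, 0.0, 0.0, 0.0])
    gamma = np.array([0.0, -0.15, -0.23])
    print(simulate_choices(X, C, beta, omega, gamma, rng))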

We note that under the assumption of perfectly rational respondents the complexity of the choice sets would not be relevant, so choice set complexity would provide no reason to specify different variances of choice across choice sets. In this sense, therefore, our model provides a way of measuring bounded rationality.

In the model we use three variables for measuring choice complexity so that for choice set s we have cs = (cs1, cs2, cs3)′. We define these variables for choice sets with two alternatives since we only use such choice sets. We define the first variable, which we call Mean Dispersion, as

  • cs1 = (1/J) Σ(j=1,…,J) SDj,  where SDj = [ (1/K) Σ(k=1,…,K) (zjk − z̄j)² ]^(1/2) and z̄j = (1/K) Σ(k=1,…,K) zjk    (2)

where zjk is the characteristic k of alternative j in respondent i's choice set s recoded so that it takes − 1, 0, 1 for each three-level characteristic and − 0.5, 0.5 for each two-level characteristic, increasing as the attractiveness of the characteristic increases (e.g., for the highest price it is − 1 and for the lowest price it is 1).2 DeShazo and Fermo (2002) use this variable with the name ‘Mean SDk’. We note that in our study we have characteristics with two and three levels (see Table A.I(i) in Appendix A).

The variable SDj captures the dispersion of attribute levels within alternative j. If the dispersions for the alternatives are large then so will cs1 be. For example, if a choice set contains hypothetical products that have either only highly attractive or only highly unattractive characteristics, then cs1 will be small, reflecting that the task of choosing from this choice set is relatively easy. DeShazo and Fermo (2002) also include the standard deviation of the dispersion as a variable that can potentially affect the variance of choice. Although they find this variable to be significant in their empirical analysis, its effect on the variance of choice turns out to be minor, so for reasons of parsimony we do not include it in our model.

The second choice complexity variable cs2 is meant to measure the similarity of alternatives in terms of utility. To choose from alternatives similar in terms of utility is expected to be more difficult than to choose from alternatives of which one dominates the others. More formally, for a pair of choice alternatives we compute the signs of the differences of the characteristics, d = (sign(z11 − z21), …, sign(z1K − z2K))′, where sign(z1k − z2k) is 1 if z1k − z2k > 0, 0 if z1k − z2k = 0, and − 1 otherwise (where zjk are as above). In case the number of negative components of d is larger than the number of positive components, we replace d by − d. Let n and p denote the number of non-zero and positive components of d, respectively. We define the second choice complexity variable as cs2 = n − p. For our six-attribute case (see again Appendix A) this variable will be 0 if and only if one alternative (weakly or strongly) dominates the other and it will be 3 if, for example, three characteristics of the first alternative are higher and the rest lower than those of the second alternative. In other words, this variable measures the number of trade-offs a respondent needs to make when making a choice. The larger the number of trade-offs, the more difficult the choice task will be.3 Due to this, we refer to this choice complexity variable as the Number of Trade-Offs. We note that this variable is expected to be correlated with utility balance as defined by Huber and Zwerina (1996). This is because a choice set with a large Number of Trade-Offs is likely to be more utility balanced than a choice set with few trade-offs, which in turn is likely to be unbalanced in terms of utility.
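
To illustrate, the following sketch (our own; we take SDj to be the standard deviation of the recoded levels within alternative j, which is our reading of the definition above) computes Mean Dispersion and the Number of Trade-Offs for a hypothetical two-alternative choice set with six recoded characteristics.

    import numpy as np

    def mean_dispersion(z):
        """c_s1: average over alternatives of the within-alternative dispersion SD_j.

        z : (J, K) recoded attribute levels z_jk (three-level: -1/0/1, two-level: -0.5/0.5).
        """
        return np.std(z, axis=1).mean()

    def number_of_tradeoffs(z):
        """c_s2 = n - p for a choice set with two alternatives."""
        d = np.sign(z[0] - z[1])
        if (d < 0).sum() > (d > 0).sum():
            d = -d                        # flip so positive components are at least as many
        n = (d != 0).sum()                # number of non-zero components
        p = (d > 0).sum()                 # number of positive components
        return n - p

    # hypothetical choice set: alternative 1 is more attractive on the first three
    # characteristics, alternative 2 on the last three, so three trade-offs are needed
    z = np.array([[ 1.0,  1.0,  0.0, -0.5, -0.5, -0.5],
                  [-1.0,  0.0, -1.0,  0.5,  0.5,  0.5]])
    print(mean_dispersion(z), number_of_tradeoffs(z))   # the second value is 3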

We define our third choice complexity variable cs3 as the variable by which we measure the effect of price evaluation on the variance of choice. This is a dummy variable that takes the value 1 if the price of one of the alternatives in choice set s is specified with a discount and 0 otherwise (see the example survey form in Table A.II in Appendix A). This is the main variable of interest in our model. If its coefficient turns out to be significantly negative, we can draw the conclusion that the way prices are specified causes the respondents to have significant difficulties in making the choice. We refer to this variable as Price Specification. The main null hypothesis in this paper is that the parameter of this variable is zero against the alternative that it is negative.

We assume that respondents treat a discounted hypothetical product at the discounted price value (e.g., they treat ‘€189, now with 10% discount’ as €170). There may be concerns, as expressed by two reviewers, that the prices specified by discount may provide information to the respondents that the model does not capture. Our assumption regarding this aspect is that the discounted price provides no information to respondents that has a systematic effect on the mean utility. Below we test the null hypothesis that the effect of the price discount on the mean utility is zero. For this we also estimate the model in which we include the price discount dummy variable in the mean utility. This variable is different from Price Specification defined above because it characterizes the alternatives, while Price Specification characterizes the choice sets. In order to make this distinction explicit we refer to the variable included in the mean utility as Price Discount. Since our data are not able to identify the coefficients of both of these variables, from this model we exclude the Price Specification variable.

3. ESTIMATION AND INFORMATION MATRIX


The parameters of the model can be estimated by the method of maximum likelihood. The log-likelihood function can be expressed as a constant plus

  • L(θ) = Σi Σs Σj yijs ln πijs    (3)

where yijs is 1 if respondent i chooses alternative j in choice set s and 0 otherwise, and πijs is the probability that yijs = 1. This probability is given by the formula πijs = ∫ pijs(υ)ϕ(υ)dυ, where

  • pijs(υ) = exp(xijs′(β + Υω)/σis) / Σ(l=1,…,J) exp(xils′(β + Υω)/σis),  with Υ = diag(υ) and σis = exp(−cis′γ),

and ϕ(υ) is the probability density function of the K-dimensional standard normal distribution. Note that the probability pijs depends on i only through xijs and cis. The subscript i in xijs and cis reflects that different respondents may be given different choice sets. The latent characteristics Vis are integrated out.
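
A minimal sketch (our own) of how the integral defining πijs can be approximated by simulation, using plain pseudo-random normal draws; Section 4 replaces these with quasi-random draws. The conditional probabilities follow the logit form displayed above.

    import numpy as np

    def choice_probs(X, c, beta, omega, gamma, draws):
        """Simulated choice probabilities pi_js for one choice set.

        X     : (J, K) characteristics of the J alternatives
        c     : (M,) choice complexity variables of the choice set
        draws : (R, K) standard normal draws for the latent characteristics
        """
        scale = np.exp(c @ gamma)                # exp(c_s' gamma) = 1 / sigma_s
        coef = beta + omega * draws              # (R, K) draws of beta + V * omega
        u = scale * (X @ coef.T)                 # (J, R) scaled utility indices
        u -= u.max(axis=0, keepdims=True)        # numerical stabilisation
        p = np.exp(u)
        p /= p.sum(axis=0, keepdims=True)        # conditional logit probabilities p_js(v)
        return p.mean(axis=1)                    # average over draws approximates the integral

    rng = np.random.default_rng(1)
    R, J, K = 2000, 2, 6
    draws = rng.standard_normal((R, K))
    X = rng.choice([-1.0, 0.0, 1.0], size=(J, K))
    c = np.array([0.6, 2.0, 1.0])                # illustrative complexity values
    beta, omega = np.full(K, -0.5), np.full(K, 0.5)
    gamma = np.array([0.0, -0.15, -0.23])
    print(choice_probs(X, c, beta, omega, gamma, draws))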

In order to compute asymptotic standard errors and design the experiment in an efficient way we need to use the variance of the parameter estimator, or equivalently, the Fisher information matrix. This can be computed as the variance of the first-order conditions of the log-likelihood evaluated at the true parameter values. We denote the vector of all parameters by θ = (β′, ω′, γ′)′ and its true value by θ0 = (β0′, ω0′, γ0′)′. The information matrix is given by

  • I(θ0) = Σi Σs Ais Πis⁻¹ Ais′, evaluated at θ = θ0,    (4)

where

  • Ais = (Aβ,is′, Aω,is′, Aγ,is′)′,

where Πis is the diagonal matrix with diagonal πis = (πi1s, …, πiJs)′, and

  • Aβ,is = exp(cis′γ) ∫ Xis′(Pis(υ) − pis(υ)pis(υ)′) ϕ(υ)dυ
    Aω,is = exp(cis′γ) ∫ ΥXis′(Pis(υ) − pis(υ)pis(υ)′) ϕ(υ)dυ
    Aγ,is = exp(cis′γ) cis ∫ (β + Υω)′Xis′(Pis(υ) − pis(υ)pis(υ)′) ϕ(υ)dυ

Here Pis(υ) is the diagonal matrix with diagonal pis(υ) = (pi1s(υ), …, piJs(υ))′ and Xis = (xi1s, …, xiJs)′. We provide a short derivation of the information matrix in Appendix B.
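
As a numerical cross-check of (4), the sketch below (our own) builds one choice set's contribution to the information matrix by approximating the derivative matrix ∂π/∂θ′ with central finite differences of the simulated probabilities, reusing the choice_probs function and the toy objects from the previous sketch.

    import numpy as np

    def info_contribution(X, c, theta, draws, h=1e-5):
        """Numerical contribution of one choice set to the information matrix (4).

        theta stacks (beta, omega, gamma); relies on choice_probs defined above.
        """
        K = X.shape[1]

        def probs(t):
            return choice_probs(X, c, t[:K], t[K:2 * K], t[2 * K:], draws)

        pi = probs(theta)
        P = len(theta)
        D = np.empty((len(pi), P))                       # D approximates d pi / d theta'
        for m in range(P):
            tp, tm = theta.copy(), theta.copy()
            tp[m] += h
            tm[m] -= h
            D[:, m] = (probs(tp) - probs(tm)) / (2 * h)  # central difference
        return D.T @ np.diag(1.0 / pi) @ D               # A_is Pi_is^{-1} A_is'

    theta = np.concatenate([beta, omega, gamma])         # objects from the previous sketch
    print(info_contribution(X, c, theta, draws).shape)   # (2K + 3, 2K + 3)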

3.1. Simulation Study

Here we present a small simulation exercise to study the effects of omitting the choice complexity variables when they are relevant. This is important for illustrating the inconsistency of the estimates when the choice complexity is incorrectly omitted. For this we consider a heteroskedastic standard logit model uijs = xjs′β + σsεijs, where xjs is a column vector with four components and each component takes the value 1, 2 or 3. The variance of choice is given by one choice complexity variable defined as the Number of Trade-offs. The true values of the parameters are β = (−1, 1.5, 1, 0.5)′ and we consider three cases for the choice complexity parameter, namely, γ = −0.1, −0.3 or −0.5. This way we can study the effect of omitting the choice complexity variable when its true effect on the variance of choice varies from small to large.

We generate choice data based on 120 choice sets with two alternatives by simulating 40 choices for each choice set. This setup can have the interpretation that there are 400 respondents and each of them is given a design of 12 choice sets. Further, the total number of different designs is 10, and they are distributed evenly across the respondents so that each design is given to exactly 40 respondents. We note that this setup is quite similar to our experimental data collection framework.
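
For completeness, here is a sketch of a data-generating process of this kind (our own reading of the setup, with illustrative shortcuts; in particular, the trade-off count here is based on raw level differences rather than the attractiveness recoding of Section 2): a standard logit whose scale is exp(−γ × Number of Trade-offs), 120 two-alternative choice sets, and 40 simulated choices per set.

    import numpy as np

    rng = np.random.default_rng(42)
    beta_true = np.array([-1.0, 1.5, 1.0, 0.5])
    gamma_true = -0.3                                    # moderate complexity effect

    X = rng.choice([1.0, 2.0, 3.0], size=(120, 2, 4))    # 120 choice sets, 2 alternatives

    def n_tradeoffs(x):
        d = np.sign(x[0] - x[1])
        if (d < 0).sum() > (d > 0).sum():
            d = -d
        return (d != 0).sum() - (d > 0).sum()

    c = np.array([n_tradeoffs(x) for x in X])            # Number of Trade-offs per choice set
    scale = np.exp(c * gamma_true)                       # 1 / sigma_s
    util = scale[:, None] * (X @ beta_true)              # scaled mean utilities, shape (120, 2)
    p1 = 1.0 / (1.0 + np.exp(util[:, 1] - util[:, 0]))   # logit probability of alternative 1
    y1 = rng.binomial(40, p1)                            # 40 simulated choices per choice set
    print(np.column_stack([c, p1.round(3), y1])[:5])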

We replicate the data generation 100 times and for each replication we estimate the true model and the homoskedastic logit model, that is, where the choice complexity variable is omitted. The results are presented in Table I. The true values are given in the first column. The second and third columns contain the means and standard deviations for the true model and the fourth and fifth columns contain the means and standard deviations for the model with omitted choice complexity. For the low effect choice complexity case (γ = − 0.1, upper part of the table) the inconsistency is relatively low but clearly noticeable. As the effect of the choice complexity increases, so does the inconsistency. For instance, the estimate of the first parameter is almost a half of the true value in the moderate effect choice complexity case (γ = − 0.3, middle part of the table) and almost a third of the true value in the large effect choice complexity case (γ = − 0.5, lower part of the table). We note that in all cases the estimates are biased towards zero and the bias is different for parameters of different signs (see the estimates of β1 = − 1 and β3 = 1).

Table I. Simulation results for the standard logit model with true choice complexity parameters equal to −0.1, −0.3, −0.5

                 True model           Choice complexity omitted
True values      Mean       SD        Mean       SD
γ = −0.1
−1.0             −1.022     0.076     −0.839     0.038
 1.5              1.524     0.085      1.324     0.038
 1.0              1.019     0.061      0.891     0.037
 0.5              0.509     0.047      0.434     0.035
−0.1             −0.110     0.036
γ = −0.3
−1.0             −1.010     0.088     −0.565     0.030
 1.5              1.522     0.101      1.028     0.032
 1.0              1.014     0.072      0.698     0.033
 0.5              0.506     0.047      0.335     0.026
−0.3             −0.302     0.045
γ = −0.5
−1.0             −0.998     0.093     −0.359     0.027
 1.5              1.498     0.101      0.792     0.025
 1.0              1.001     0.075      0.545     0.023
 0.5              0.509     0.055      0.279     0.023
−0.5             −0.501     0.046

The conclusion we can draw from this simulation study is that we can only omit the choice complexity variables if we are convinced that their effect is negligible. However, typically it is quite unrealistic to have such information without estimating the effect of the choice complexity variables. If their effect turns out not to be negligible, then we can expect our estimates to be inconsistent.

4. DATA COLLECTION AND RESULTS


In this section we describe the data collection (first subsection) and present the estimation results (second subsection) for the model specified above. As mentioned in Section 2, the main motivation is to investigate the effect of specifying the prices by discount on the variance of choice. This implies the null hypothesis that the coefficient of the Price Specification variable is zero against the alternative that it is negative. A similar null hypothesis is of interest for the other two choice complexity variables, Mean Dispersion and Number of Trade-offs. We compare several alternative models in order to select the model that explains our data best.

4.1. Data Collection

In order to collect the data we create a so-called experimental design, which contains the characteristics of all hypothetical products from all choice sets given to all the respondents. In order to make the choice tasks easier for the respondents, we adopt a commonly used idea and specify each characteristic by a limited number of levels. Any hypothetical product is just a combination of the characteristic levels.

We collect data from students on mobile phone preferences. For the hypothetical mobile phones we use the following characteristics: Price, Price per Minute, Extras, Network, SMS Price and Design. ‘Price’ is just the purchase price of the hypothetical mobile phone, ‘Price per Minute’ is the cost of talking for one minute on the mobile phone, ‘Extras’ represent different extra features that the mobile phone has, ‘Network’ specifies the company that provides the service, ‘SMS Price’ is the cost of sending an SMS and ‘Design’ refers to the way the mobile phone looks. The levels of these characteristics are presented in Table A.I(i) in Appendix A. The first three characteristics have three levels and the last three characteristics have two levels. The characteristics Price and Price per Minute are assumed to be quantitative, while the characteristic Extras is assumed to be qualitative (see Table A.I(i) for its coding).

For determining the number of choice sets per respondent and the number of alternatives in a choice set we rely on empirical results by Swait and Adamowicz (2001). Our purpose is to determine the design size so that the complexity of the choice task is moderate. Swait and Adamowicz find that the subjects in their experiment provide responses with no fatigue for the first half of choice sets of a 16-choice set design, in which each choice set has three alternatives. Our interpretation of this finding is that respondents can evaluate 24 hypothetical products grouped in small choice sets without getting significantly distracted. This led us to give each respondent 12 choice sets with two alternatives.

When constructing the experimental design we rely on ideas from the statistics literature on optimal Bayesian design (see, for example, Chaloner and Verdinelli, 1995) and on recent developments on designs for choice experiments in marketing (Sándor and Wedel, 2005). More precisely, we postulate a parsimonious model with a reduced number of random coefficients and construct the experimental design by maximizing a one-dimensional function of the information matrix. Even though this model is not completely the same as the models we present below, due to some common blocks of the information matrix we expect our experimental design to be fairly efficient also for these models.4 The main strength of the approach from Sándor and Wedel (2005) is the construction of a large number of different choice sets, which yields an experimental design with improved statistical efficiency due to the large variation in the explanatory variables.
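
To give a flavour of efficiency-based design construction, the following sketch (our own, deliberately simplified to a homoskedastic standard logit) scores candidate designs by the D-error criterion det(I)^(−1/K), a common one-dimensional function of the information matrix; the designs used in the paper were constructed with the Bayesian procedures of Sándor and Wedel (2005), which in addition average the criterion over a prior distribution for the parameters.

    import numpy as np

    def logit_info(design, beta):
        """Information matrix of a standard logit model for a given design.

        design : (S, J, K) array of choice sets; beta : (K,) prior guess of the parameters.
        """
        K = design.shape[2]
        info = np.zeros((K, K))
        for X in design:                             # X holds one choice set, shape (J, K)
            u = X @ beta
            p = np.exp(u - u.max())
            p /= p.sum()
            info += X.T @ (np.diag(p) - np.outer(p, p)) @ X
        return info

    def d_error(info):
        """D-error det(I)^(-1/K); a smaller value means a more efficient design."""
        sign, logdet = np.linalg.slogdet(info)
        return np.exp(-logdet / info.shape[0])

    rng = np.random.default_rng(7)
    beta_guess = np.array([-1.0, -1.0, -0.4, -0.2, -0.3, 0.2])   # illustrative prior guess
    candidates = [rng.choice([-1.0, 0.0, 1.0], size=(12, 2, 6)) for _ in range(50)]
    print("best D-error among 50 random designs:",
          min(d_error(logit_info(d, beta_guess)) for d in candidates))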

Based on our experimental design we collect 466 × 12 = 5592 observations. In Table A.I we provide some summary statistics regarding the explanatory variables used. The rightmost column of Table A.I(i) shows the frequencies at which the characteristic levels appear in the experimental design. The main impression is that the levels are fairly balanced; in fact, they are exactly balanced for the levels of Price and Price/min. The imbalance is largest for the levels of Extras. Table A.I(ii) presents descriptive statistics for the choice complexity variables Mean Dispersion and Number of Trade-offs. What is striking about these figures is that the standard deviation of Mean Dispersion is very low. Regarding the Price Specification variable, out of the 12 choice sets given to a respondent, in five choice sets one of the alternatives is specified with discount. Thus the Price Specification dummy takes the value 1 for five out of every 12 choice sets and 0 in the remaining seven. When the price is specified with discount the variable Price takes the discounted value. For example, if the price is specified as ‘€189, now with 10% discount’, the value of Price will be €170.

4.2. Estimation Results

We estimate five alternative models. The motivation for estimating exactly these models is to find a parsimonious model with a good explanatory power on the one hand, and to test some hypotheses of interest on the other hand. The alternative models are presented in Table II. Model (1) has the largest number of parameters because it contains all mean parameters apart from Price Discount, all standard deviations apart from Extras2 and all three choice complexity variables. Model (2) omits the three obviously insignificant parameters from Model (1), which are the standard deviation parameters of Network, SMS Price and Design, and the Mean Dispersion parameter. Model (3) omits all random coefficients, so it provides estimates for the standard logit model with the choice complexity variables that are significant in Model (2). Model (4) omits the choice complexity variables from Model (2), so it allows testing of the statistical significance of these. Model (5) adds the Price Discount variable to the variables of Model (2) and excludes the Price Specification variable. The Price Discount is a dummy variable that characterizes each alternative by taking the value 1 if the price is specified by discount and 0 otherwise.

Table II. Estimates for alternative models (asymptotic standard errors in parentheses)
                                 Model (1)    Model (2)    Model (3)    Model (4)    Model (5)
Mean parameters
  Price                           −1.428       −1.482       −1.048       −1.192       −1.302
                                  (0.657)      (0.256)      (0.071)      (0.154)      (0.194)
  Price/min                       −1.242       −1.293       −0.904       −1.030       −1.126
                                  (0.579)      (0.236)      (0.067)      (0.143)      (0.178)
  Extras1                         −0.493       −0.488       −0.332       −0.414       −0.428
                                  (0.228)      (0.101)      (0.047)      (0.071)      (0.081)
  Extras2                         −0.179       −0.190       −0.143       −0.144       −0.161
                                  (0.095)      (0.058)      (0.037)      (0.041)      (0.049)
  Network                         −0.290       −0.301       −0.204       −0.259       −0.271
                                  (0.141)      (0.061)      (0.026)      (0.043)      (0.049)
  SMS Price                       −0.275       −0.281       −0.194       −0.241       −0.244
                                  (0.133)      (0.057)      (0.025)      (0.039)      (0.045)
  Design                           0.253        0.254        0.170        0.210        0.220
                                  (0.123)      (0.058)      (0.029)      (0.040)      (0.047)
  Price Discount                                                                       0.011
                                                                                      (0.097)
Standard deviation parameters
  Price                            0.973        1.057                     0.885        0.907
                                  (0.443)      (0.323)                   (0.231)      (0.266)
  Price/min                        0.765        0.882                     0.823        0.764
                                  (0.435)      (0.372)                   (0.267)      (0.312)
  Extras1                          0.712        0.732                     0.726        0.688
                                  (0.374)      (0.288)                   (0.215)      (0.247)
  Network                          0.142
                                  (0.939)
  SMS Price                        0.201
                                  (0.924)
  Design                           0.445
                                  (0.416)
Choice complexity parameters
  Mean Dispersion                  0.048
                                  (0.877)
  Number of Trade-offs            −0.123       −0.149       −0.152                    −0.132
                                  (0.087)      (0.058)      (0.037)                   (0.057)
  Price Specification             −0.227       −0.233       −0.136
                                  (0.114)      (0.094)      (0.054)
Prediction error (average RMSE)    0.29660      0.29658      0.29745      0.29833      0.29795
Log-likelihood                    −2702.71     −2703.37     −2710.33     −2709.18     −2706.90
Number of observations             5592 (all models)

In the estimation process the probabilities πijs involved in the log-likelihood (3) need to be estimated by Monte Carlo simulations. In order to achieve very high precision for these, we employed quasi-random samples based on a (0, 3, s)-net in base 7 of dimension up to s = 6 and sample size 343 (see Niederreiter, 1992, for the deterministic and Owen, 1995, for the randomized version). Similar samples have been shown to reduce computation time substantially with respect to pseudo-random samples in a random coefficient logit estimation problem (Sándor and Train, 2004).
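
We cannot reproduce the exact base-7 (0, 3, s)-net here, but the following sketch shows the general mechanism with scipy's scrambled Sobol sequence as a stand-in: quasi-random points in the unit cube are mapped to normal draws by the inverse CDF and then used in place of pseudo-random draws when simulating the probabilities (for example with the choice_probs function sketched in Section 3).

    import numpy as np
    from scipy.stats import norm, qmc

    K, R = 6, 343                                   # dimension and number of draws, as in the paper
    sobol = qmc.Sobol(d=K, scramble=True, seed=0)   # stand-in for the randomized base-7 net
    u = sobol.random(R)                             # scipy warns that 343 is not a power of 2; harmless here
    u = np.clip(u, 1e-12, 1 - 1e-12)                # guard against 0/1 before the inverse CDF
    qdraws = norm.ppf(u)                            # (R, K) quasi-random standard normal draws

    # used exactly like pseudo-random draws, e.g.:
    # pi = choice_probs(X, c, beta, omega, gamma, qdraws)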

Before discussing the estimation results we present results that compare the five models through prediction and likelihood ratio tests. Table II contains the results on predictive ability of the alternative models. For this we conducted an out-of-sample prediction exercise by randomly selecting a hold-out sample of 20 respondents, which corresponds to 240 observations. We re-estimated the models and used the estimates to predict the choice in the hold-out sample. Then we computed the prediction error by comparing the predicted and realized choices based on the average of the root mean squared errors. According to our results Model (2) has the lowest prediction error, followed by Models (1) and (3). Model (4) has the highest prediction error, which suggests that the Number of Trade-offs and Price Specification variables are important for prediction. The fact that Model (5) has a prediction error higher than Model (2) suggests that the Price Discount variable does not provide extra information in predicting choices. We can draw the same conclusion from a t-test on the significance of the Price Discount variable in Model (5).
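
The paper does not spell out the exact formula for the prediction error, so the sketch below shows one natural reading of ‘average of the root mean squared errors’ between predicted choice probabilities and realized choices on the hold-out sample.

    import numpy as np

    def average_rmse(pred_probs, choices):
        """Average RMSE between predicted probabilities and realized choices.

        pred_probs : (n, J) predicted choice probabilities for the hold-out observations
        choices    : (n,) index of the alternative actually chosen
        """
        y = np.zeros_like(pred_probs)
        y[np.arange(len(choices)), choices] = 1.0        # one-hot realized choices
        return np.sqrt(((pred_probs - y) ** 2).mean(axis=1)).mean()

    # toy check with 240 hold-out observations and two alternatives
    rng = np.random.default_rng(3)
    p = rng.uniform(0.3, 0.7, size=240)
    pred = np.column_stack([p, 1.0 - p])
    chosen = (rng.uniform(size=240) > p).astype(int)
    print(average_rmse(pred, chosen))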

Regarding the comparison of Model (2) with Models (3), (4) and (5) we make some remarks. Model (3) is a standard logit model that differs from Model (2) by not having random coefficients. Its estimates are quite different from those of Model (2), showing the well-known inferiority of this model for empirical work. When we compare Model (4) to Model (2) we find that the estimates of the former are biased towards zero. This is similar to the observation made for the simulation results where the choice complexity variable is incorrectly omitted. The same observation holds for the comparison of Models (5) and (2). Based on this, we assess that the estimate of the Price Discount coefficient would not be large enough to be significant if we could include the Price Specification variable in Model (5).

Table III contains the results on the likelihood ratio tests. The results are in line with the comparisons regarding predictive ability. In conducting the tests we considered Model (2) as the base model and compared it to the other models. According to the test results, Model (2) is not rejected against Model (1). Further, Models (3) and (4) are rejected against Model (2) and the rejections are strong in the sense that they hold at both 1% and 5% significance. This implies that, on the one hand, our respondents' preferences for mobile phones are significantly heterogeneous. On the other hand, and more importantly, we can conclude that the Number of Trade-offs and Price Specification variables are crucial for explaining the data.

Table III. Likelihood ratio tests for model comparisons
Competing models    LR statistic    No. of restrictions    Critical value at 5% (1%)    Rejected
(2) vs. (1)         1.32            4                      9.49 (13.28)                 No
(3) vs. (2)         13.94           3                      7.82 (11.35)                 Yes
(4) vs. (2)         11.63           2                      5.99 (9.21)                  Yes
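
Up to rounding in the reported log-likelihoods, the statistics in Table III follow directly from Table II; a quick check:

    from scipy.stats import chi2

    loglik = {1: -2702.71, 2: -2703.37, 3: -2710.33, 4: -2709.18}   # from Table II
    tests = [((2, 1), 4), ((3, 2), 3), ((4, 2), 2)]                  # (restricted, unrestricted), df
    for (restricted, unrestricted), df in tests:
        lr = 2.0 * (loglik[unrestricted] - loglik[restricted])
        print(f"({restricted}) vs. ({unrestricted}): LR = {lr:.2f}, "
              f"5% cv = {chi2.ppf(0.95, df):.2f}, 1% cv = {chi2.ppf(0.99, df):.2f}")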

Based on the predictive ability and likelihood ratio test comparisons we conclude that Model (2) is to be preferred. Next we discuss the estimates of this model. In Table II we can see that the estimates of all parameters are significant at the 5% level. This allows us to draw a conclusion regarding the main null hypothesis of our paper: the way prices are specified causes the respondents to have significant difficulties in making the choice.

The estimates of the mean parameters in Model (2) appear to be intuitive. The purchase price (Price), the cost of per-minute talk (Price/min) and the cost of sending an SMS (SMS Price) affect the utility negatively (estimates − 1.482, − 1.293, − 0.281, respectively). The impact of the variable that describes extra features of the hypothetical mobile phones (Extras) is also intuitive. The impact on utility of having Internet in addition to games is (−0.190 − (−0.488)) ≃ 0.3, while the impact of also having a camera is ((0.488 + 0.190) − (−0.190)) ≃ 0.87. We can conclude that the camera has a much larger impact on utility than Internet access. The estimate of the mean parameter of the variable that specifies the network that provides the service (Network) is − 0.301. This shows a preference for the large and well-known networks KPN and Vodafone. Finally, the estimate of the mean parameter of the variable Design (0.254) implies that mobile phones with a trendy design are preferred to mobile phones characterized as having a ‘basic’ design.

Let us provide some details about the possible practical consequences regarding the significance of the choice complexity variables. If in a choice set one of the alternatives has its price specified with discount, then, with everything else equal, the variance of choice is about 1.6 times as high. If in a choice set the Number of Trade-Offs is 3 (see choice set 4 in Table A.II; alternative A is more attractive in Price, Price per Minute and Extras while alternative B in the other three characteristics), then, with everything else the same, the variance of choice with respect to a choice set in which one alternative dominates the other (see choice set 5 in Table A.II; A dominates B) is about 2.5 times as high. If, in addition, one of the prices in this choice set were specified with discount, then the variance of choice in this choice set would be about 1.6 × 2.5 = 4 times as high.
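
These factors follow from σs = exp(−cs′γ) and the Model (2) estimates in Table II: when one complexity variable rises by Δ, the variance of choice σs² is multiplied by exp(−2γ̂Δ). A quick check:

    import numpy as np

    gamma_tradeoffs, gamma_price_spec = -0.149, -0.233   # Model (2) estimates, Table II

    def variance_ratio(gamma_hat, delta_c):
        """Factor by which sigma_s^2 changes when a complexity variable rises by delta_c."""
        return np.exp(-2.0 * gamma_hat * delta_c)

    print(variance_ratio(gamma_price_spec, 1))           # about 1.6: discount vs. no discount
    print(variance_ratio(gamma_tradeoffs, 3))            # about 2.5: three trade-offs vs. dominance
    print(variance_ratio(gamma_price_spec, 1) * variance_ratio(gamma_tradeoffs, 3))   # about 4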

5. IMPLICATIONS FOR THE DESIGN OF EXPERIMENTS


Our empirical results have implications for questions about the design of experiments and, within this, the statistical efficiency of designs. These problems, after some pioneering work in the 1990s (e.g., Huber and Zwerina, 1996), have received considerable attention recently in the marketing and statistics literature (e.g., Street and Burgess, 2004; Toubia et al., 2004; Sándor and Wedel, 2005). In spite of the progress made, most of this research has been concerned only with statistical efficiency (with the possible exception of the latter paper) and ignored the cognitive aspects of designs. In other words, there has been little inquiry into how choice complexity affects the design of experiments, and further, what the implications of this are for statistical efficiency if choice complexity is taken into account. In the next two paragraphs we explain how the empirical results from our paper can help clarify these issues. Then in the remainder of this section we discuss the impact of our results on the possible consequences of using utility balance as a design construction criterion.

With respect to the problem of design of experiments an important question that arises is how to use our results when designing an experiment. In other words, how should we design the choice sets of a new experiment knowing that the number of trade-offs in a choice set, or more generally, some specific choice complexity variable is expected to negatively affect the consistency of choice? There are two approaches by which such knowledge can be incorporated in designing an experiment, as also suggested by DeShazo and Fermo (2002): namely, (i) to design choice sets with low values of the choice complexity variables; and (ii) to design choice sets in which the choice complexity variables vary and include them in the model. In practice, employing a mixture of these two approaches is advisable. Those choice complexity variables, like the number of alternatives, that can be set to values whose effect is small should be set to these values (we believe that the mean dispersion variable in our model has this feature), while those variables for which this is problematic should be varied and included in the model.

An example of this type of variable is the number of trade-offs as defined in Section 2. Designing choice sets for which the number of trade-offs takes only small values means that we design choice sets in which one alternative dominates or almost dominates the others. However, it is intuitively plausible and illustrated in some situations (e.g., Huber and Zwerina, 1996) that such experimental designs have low information content as measured by some one-dimensional function of the information matrix. Therefore, choice sets with few trade-offs will not constitute a practically useful design and should not be used. Hence, with respect to the number of trade-offs one should follow approach (ii). In practice, approach (ii) can be accomplished by designing the experiment with an algorithm based on statistical efficiency considerations, as suggested by Sándor and Wedel (2005). This way the problem of designing choice sets for which the number of trade-offs varies is naturally solved because the design will be efficient also with respect to the parameter corresponding to this variable.

Now we turn to the discussion regarding the use of utility balance for design of experiments. A choice set has the property of utility balance (also called choice balance) if the choice probabilities of the alternatives in the choice set are as equal as possible. Choice sets with utility balance are intuitively believed to provide more information on the respondents' preferences than unbalanced choice sets. Due to this some authors advocate selecting choice sets with utility balance in combination with an algorithm that improves the statistical efficiency of the whole experimental design (Huber and Zwerina, 1996; Arora and Huber, 2001; Toubia et al., 2004).

Below we argue that when utility balance causes a choice set to be more complex, the statistical efficiency of an experimental design constructed by selecting choice sets with utility balance is likely to be compromised. Indeed, as one would intuitively expect, choice sets with utility balance are more complex, just as choice sets with a higher number of trade-offs are. This is because, as mentioned in Section 2, utility balance in a choice set is expected to be correlated with the number of trade-offs in the choice set, and therefore one expects that utility balance also increases the variance of choice. As discussed above, our empirical results establish that the number of trade-offs makes a choice set more complex. This then suggests that utility balance is likely to make a choice set more complex as well. Therefore, neither the estimation nor the evaluation of statistical efficiency will be correct, unless one uses a model that includes choice complexity variables (like the number of trade-offs). In conclusion, if we ignore choice complexity in our model when designing the experiment by selecting choice sets with utility balance, we will obtain choice data that are inconsistent with our model. For the same reason, the experimental design obtained cannot be statistically efficient.

Still, in the presence of choice complexity the algorithms developed for constructing statistically efficient designs are useful. For this one needs to incorporate the variables causing choice complexity in the model, for example in the way suggested by DeShazo and Fermo (2002), which we also follow. Then selecting choice sets with utility balance is expected to increase the statistical efficiency of the experimental design for the model that contains the choice complexity variables.

6. CONCLUSIONS


In this paper we have described an empirical analysis based on experimental data on mobile phone preferences. In the analysis we measure the effect of some choice complexity variables with a special focus on the way consumers evaluate the price of the products, which we sometimes specify with discount. Our data provide statistical evidence that this way of specifying prices affects the consistency of choice negatively. We find similar evidence regarding the choice complexity variable that measures the number of trade-offs a respondent needs to make when making a choice.

We find that consumers can make mistakes in the presence of distracting information. This finding could have consequences for the evaluation of marketing campaigns that involve price claims in advertising. Too much distracting information may cause the claim in the advertising message to be misunderstood, or may make consumers buy products at prices that are too high.

Our results also suggest that the choice complexity variable that measures the mean dispersion of the characteristics within alternatives, found statistically significant by DeShazo and Fermo (2002), does not significantly affect the consistency of choice in our case. We believe this is because we use choice sets with two alternatives in our experiment and therefore the variation in the mean dispersion is too low (see also Table A.I(ii) for the standard deviation of Mean Dispersion). This finding calls for further inquiries regarding the effect of choice complexity variables under various conditions.

As we discussed in Section 5, our results have implications for the design of experiments. In this regard it would be interesting to study the effect of choice complexity on statistical efficiency. Such a study could compare the values of an efficiency criterion (e.g., D-optimality) for models with and without choice complexity variables in various situations. Then it would be possible to draw conclusions about the effect of choice complexity on the efficiency loss when in the model the choice complexity variables are incorrectly omitted.

Acknowledgements


We thank the special issue editor Pradeep Chintagunta and two anonymous reviewers for many detailed comments that were very helpful in revising our manuscript.

APPENDIX A: ADDITIONAL INFORMATION

Table A.I.
(i) Characteristics levels, their coding and their frequency in the experimental design

Variable            Presented                        Coded          Frequency (%)
Price (euro)        100                              −1             33.33
                    135                               0             33.33
                    170                               1             33.33
Price/min (euro)    0.25                             −1             33.33
                    0.30                              0             33.33
                    0.35                              1             33.33
Extras              Games                             1,  0         28.54
                    Games and Internet                0,  1         38.54
                    Games and Internet & camera      −1, −1         32.92
Network             KPN or Vodafone                  −1             48.33
                    Other than KPN and Vodafone       1             51.67
SMS price (euro)    0.17                             −1             48.75
                    0.23                              1             51.25
Design              Basic                            −1             48.13
                    Trendy                            1             51.87

(The two coding columns for Extras correspond to the variables Extras1 and Extras2 in Table II.)

(ii) Descriptive statistics for the choice complexity variables

Variable                Mean      SD       Minimum    Maximum
Mean Dispersion         0.588     0.077    0.297      0.729
Number of Trade-offs    1.608     0.846    0          3
Table A.II. Example of a design with six choice sets

                  Product A                          Product B
Choice set 1
  Price           €135                               €189, now with 10% discount
  Network         Other than KPN and Vodafone        Other than KPN and Vodafone
  Price/minute    €0.25                              €0.35
  SMS price       €0.17                              €0.23
  Design          Trendy                             Trendy
  Extras          Games and Internet and camera      Games and Internet and camera
Choice set 2
  Price           €100                               €170
  Network         KPN or Vodafone                    Other than KPN and Vodafone
  Price/minute    €0.25                              €0.30
  SMS price       €0.17                              €0.23
  Design          Trendy                             Basic
  Extras          Games and Internet                 Games and Internet and camera
Choice set 3
  Price           €135                               €125, now with 20% discount
  Network         KPN or Vodafone                    Other than KPN and Vodafone
  Price/minute    €0.25                              €0.35
  SMS price       €0.17                              €0.17
  Design          Basic                              Trendy
  Extras          Games                              Games and Internet and camera
Choice set 4
  Price           €100                               €170
  Network         Other than KPN and Vodafone        KPN or Vodafone
  Price/minute    €0.25                              €0.30
  SMS price       €0.23                              €0.17
  Design          Basic                              Trendy
  Extras          Games and Internet                 Games
Choice set 5
  Price           €135                               €170
  Network         Other than KPN and Vodafone        Other than KPN and Vodafone
  Price/minute    €0.30                              €0.35
  SMS price       €0.17                              €0.23
  Design          Trendy                             Trendy
  Extras          Games and Internet and camera      Games
Choice set 6
  Price           €170                               €150, now with 10% discount
  Network         KPN or Vodafone                    Other than KPN and Vodafone
  Price/minute    €0.25                              €0.30
  SMS price       €0.23                              €0.17
  Design          Basic                              Trendy
  Extras          Games and Internet                 Games and Internet and camera

APPENDIX B: DERIVATION OF THE INFORMATION MATRIX

In this appendix we derive the Fisher information matrix for the random coefficient logit model with choice complexity given in (4). To save notation, we derive the formulas only for one respondent and one choice set; the information matrix in (4) is then the sum over all consumers and choice sets. Also, we write integrals like ∫(·)ϕ(υ1)…ϕ(υK)dυ as ∫(·)dΦ (e.g., ∫pj(υ)ϕ(υ1)…ϕ(υK)dυ ≡ ∫pj dΦ).

The log-likelihood is a constant plus

  • L = Σ(j=1,…,J) yj ln πj

where yj is 1 if the consumer chooses j and 0 otherwise, πj is the probability that yj = 1, y = (y1, …, yJ)′ and π = (π1, …, πJ)′. The Fisher information matrix is given by the formula I(θ) = E[(∂L/∂θ)(∂L/∂θ)′], where X and c are the design matrix and the choice complexity vector corresponding to the choice set. Using the components β, ω, γ of θ, we can write the information matrix as

  • I(θ) = [ E[(∂L/∂β)(∂L/∂β)′]        ·                       ·
             E[(∂L/∂ω)(∂L/∂β)′]   E[(∂L/∂ω)(∂L/∂ω)′]           ·
             E[(∂L/∂γ)(∂L/∂β)′]   E[(∂L/∂γ)(∂L/∂ω)′]   E[(∂L/∂γ)(∂L/∂γ)′] ]

The upper triangular part is determined by the lower triangular part by the symmetry of the information matrix. We need to compute the first-order derivatives of L.

We note that for a parameter vector λ that is one of β, ω, γ it holds that

  • ∂L/∂λ = (∂π/∂λ′)′ Π⁻¹ y    (5)

where Π is the diagonal matrix with diagonal π. So we need to compute ∂π/∂λ′ for λ = β, ω, γ. Since π = ∫p dΦ, we have that ∂π/∂λ′ = ∫(∂p/∂λ′)dΦ. The vector p is defined as p = (p1, …, pJ)′ with components

  • pj = exp(xj′(β + Υω)exp(c′γ)) / Σ(l=1,…,J) exp(xl′(β + Υω)exp(c′γ)),  where Υ = diag(υ)

Hence we obtain

  • ∂p/∂β′ = exp(c′γ)(P − pp′)X
    ∂p/∂ω′ = exp(c′γ)(P − pp′)XΥ
    ∂p/∂γ′ = exp(c′γ)(P − pp′)X(β + Υω)c′

where P is the diagonal matrix with diagonal p and X = (x1, …, xJ)′.

Using these, the formula ∂π/∂λ′ = ∫(∂p/∂λ′)dΦ and (5), we obtain

  • ∂L/∂β = exp(c′γ) ∫X′(P − pp′)dΦ Π⁻¹ y
    ∂L/∂ω = exp(c′γ) ∫ΥX′(P − pp′)dΦ Π⁻¹ y
    ∂L/∂γ = exp(c′γ) c ∫(β + Υω)′X′(P − pp′)dΦ Π⁻¹ y

Now we are able to compute the components of the information matrix. For this we introduce the notation

  • Aβ = exp(c′γ) ∫X′(P − pp′)dΦ,  Aω = exp(c′γ) ∫ΥX′(P − pp′)dΦ,  Aγ = exp(c′γ) c ∫(β + Υω)′X′(P − pp′)dΦ

and use the fact that E[yy′] = Π. Then

  • E[(∂L/∂β)(∂L/∂β)′] = Aβ Π⁻¹ E[yy′] Π⁻¹ Aβ′ = Aβ Π⁻¹ Aβ′

The other components can be computed in a similar way. The information matrix becomes

  • I(θ) = A Π⁻¹ A′

where

  • A = (Aβ′, Aω′, Aγ′)′

REFERENCES

  • Adamowicz W, Swait J, Boxall P, Louviere J, Williams M. 1997. Perceptions versus objective measures of environmental quality in combined revealed and stated preference models of environmental valuation. Journal of Environmental Economics and Management 32: 65–84.
  • Arora N, Huber J. 2001. Improving parameter estimates and model prediction by aggregate customization in choice experiments. Journal of Consumer Research 28: 273–283.
  • Berry S, Levinsohn J, Pakes A. 1995. Automobile prices in market equilibrium. Econometrica 63: 841–890.
  • Bettman JR, Johnson EJ, Luce MF, Payne JW. 1993. Correlation, conflict and choice. Journal of Experimental Psychology: Learning, Memory and Cognition 19: 931–951.
  • Brownstone D, Train K. 1999. Forecasting new product penetration with flexible substitution patterns. Journal of Econometrics 89: 109–129.
  • Chaloner K, Verdinelli I. 1995. Bayesian experimental design: a review. Statistical Science 10(3): 273–304.
  • Dellaert BGC, Brazell JD, Louviere JJ. 1999. The effect of attribute variation on consumer choice consistency. Marketing Letters 10: 139–147.
  • DeShazo J, Fermo G. 2002. Designing choice sets for stated preference methods: the effects of complexity on choice consistency. Journal of Environmental Economics and Management 44: 123–143.
  • Huber J, Zwerina K. 1996. The importance of utility balance in efficient choice design. Journal of Marketing Research 33: 307–317.
  • Johnson E, Payne J. 1985. Effort and accuracy in choice. Management Science 31: 395–414.
  • Layton D, Brown G. 2000. Heterogeneous preferences regarding global climate change. Review of Economics and Statistics 82: 616–624.
  • Louviere J. 2003. Complex statistical choice models: are the assumptions true, and if not, what are the consequences? Presented at the Discrete Choice Workshop in Health Economics, University of Oxford.
  • Louviere J, Woodworth G. 1983. Design and analysis of simulated consumer choice or allocation experiments: an approach based on aggregate data. Journal of Marketing Research 20: 350–367.
  • Mazzotta MJ, Opaluch J. 1995. Decision making when choices are complex: a test of Heiner's Hypothesis. Land Economics 71: 500–515.
  • McFadden D, Train K. 2000. Mixed MNL models for discrete response. Journal of Applied Econometrics 15: 447–470.
  • Niederreiter H. 1992. Random Number Generation and Quasi-Monte Carlo Methods. SIAM: Philadelphia, PA.
  • Owen A. 1995. Randomly permuted (t,m,s)-nets and (t,s)-sequences. In Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, Niederreiter H, Shiue PJ-S (eds). Springer: New York; 299–317.
  • Revelt D, Train K. 1998. Mixed logit with repeated choices: households' choices of appliance efficiency level. Review of Economics and Statistics 80: 647–657.
  • Rubinstein A. 1998. Modeling Bounded Rationality. MIT Press: Cambridge, MA.
  • San-Miguel F, Ryan M, Scott A. 2002. Are preferences stable? The case of health care. Journal of Economic Behavior and Organization 48: 1–14.
  • Sándor Z, Train K. 2004. Quasi-random simulation of discrete choice models. Transportation Research B 38: 313–327.
  • Sándor Z, Wedel M. 2005. Heterogeneous Bayesian conjoint choice designs. Journal of Marketing Research 42: 210–218.
  • Scott A. 2001. Eliciting GPs' preferences for pecuniary and non-pecuniary job characteristics. Journal of Health Economics 20: 329–347.
  • Small KE, Winston C, Yan J. 2005. Uncovering the distribution of motorists' preferences for travel time and reliability. Econometrica 73: 1367–1382.
  • Street D, Burgess L. 2004. Optimal and near-optimal pairs for the estimation of effects in 2-level choice experiments. Journal of Statistical Planning and Inference 118: 185–199.
  • Swait J, Adamowicz W. 2001. The influence of task complexity on consumer choice: a latent class model of decision strategy switching. Journal of Consumer Research 28: 135–148.
  • Toubia O, Hauser JR, Simester DI. 2004. Polyhedral methods for adaptive choice-based conjoint analysis. Journal of Marketing Research 41: 116–131.
  • Tyebjee T. 1979. Response time, conflict and involvement in brand choice. Journal of Consumer Research 6: 295–304.
  • van Ophem H, Stam P, van Praag B. 1999. Multichoice logit: modeling incomplete preference rankings of classical concerts. Journal of Business and Economic Statistics 17: 117–128.
  • 1

    Note that respondents are assumed to have different latent characteristics Vis in different choice sets. We adopt this assumption because there are reasons to believe that respondents may treat different choice tasks differently (Louviere, 2003). Some authors use the model that assumes the same latent characteristics Vi over the choice sets (e.g., Revelt and Train, 1998).

  • 2

    For qualitative variables, like Network in our study, sometimes we need to rely on subjective judgement regarding the way we should recode the characteristic levels.

  • 3

    This task complexity variable is related to the variable called ‘NADA’ (i.e., the number of attributes whose levels differ across alternatives) by DeShazo and Fermo (2002), but it is not the same. We believe that our variable better captures the complexity of choice in situations when, for example, one alternative dominates the other.

  • 4

    In order to achieve the same efficiency, one needs about 14% more respondents for a randomly generated design, which implies 65 respondents in addition to the 466. We note that random designs can be considered appropriate for this efficiency comparison because they do not depend on any prior parameter or model specification.

Supporting Information


The JAE Data Archive directory is available at http://qed.econ.queensu.ca/jae/datasets/sandor001/ .
