Abstract


In this paper, the authors propose that the canonical customer–toolkit dyad in mass customization (MC) should be complemented with user communities. Many companies in various industries have begun to offer their customers the opportunity to design their own products online. The companies provide Web-based MC toolkits that allow customers who prefer individualized products to tailor items such as sneakers, personal computers (PCs), cars, kitchens, cereals, or skis to their specific preferences. Most existing MC toolkits are based on the underlying concept of an isolated, dyadic interaction process between the individual customer and the MC toolkit; information from external sources is not provided. As a result, most academic research on MC toolkits has focused on this dyadic perspective. The main premise of this paper is that novice MC toolkit users in particular might benefit greatly from information provided by other customers. Pioneering research shows that customers in the computer gaming and digital music instruments industries are willing to support each other for the sake of efficient toolkit use (e.g., how certain toolkit functions work). Expanding on this work, the present paper provides evidence that peer assistance also appears to be extremely useful in the two other major phases of the customer's individual self-design process, namely, the development of an initial idea and the evaluation of a preliminary design solution. Two controlled experiments were conducted in which 191 subjects used an MC toolkit to design their own individual skis. The authors found that during the phase of developing an initial idea, having access to other users' designs as potential starting points stimulates the integration of existing solution chunks into the problem-solving process, which indicates more systematic problem-solving behavior. Peer customer input also turned out to have positive effects on the evaluation of preliminary design solutions: providing other customers' opinions on interim design solutions stimulated favorable problem-solving behavior, namely, the integration of external feedback. The use of these two problem-solving heuristics in turn leads to an improved process outcome, that is, self-designed products that meet the preferences of the customers more effectively (measured in terms of perceived preference fit, purchase intention, and willingness to pay). These findings have important theoretical and managerial implications.


Introduction


Many companies in various industries have begun to offer their customers the opportunity to design their own products online. The companies provide Web-based mass customization (MC) toolkits that allow customers who prefer individualized products to tailor items such as sneakers (e.g., Nike), personal computers (PCs; e.g., Dell), cars (e.g., Mini), kitchens (e.g., IKEA), cereals (e.g., General Mills), or skis (e.g., Edelwiser) to their specific preferences. MC toolkits are defined as a set of user-friendly design tools that allow trial-and-error experimentation processes and deliver immediate simulated feedback on the outcome of design ideas. Once a satisfactory design is found, the product specifications can be transferred into the firm's production system, and the custom product is subsequently produced and delivered to the customer (e.g., Dellaert and Stremersch, 2005; von Hippel, 2001; von Hippel and Katz, 2002).

Most existing MC toolkits are based on the underlying concept of an isolated, dyadic interaction process between the customer and the MC toolkit. For example, consider the toolkit offered by the ski manufacturer Edelwiser, which allows the user to design the entire face of a pair of carving skis (see http://www.edelwiser.com). The user starts with a pair of blank skis and can add text in different colors, sizes, and styles, can create graphical elements as desired, and can move them back and forth until the desired placement is found. The entire self-design process is based on isolated interaction between the individual customer and the toolkit. Information from other customers (e.g., feedback on preliminary designs) is not provided. As a result, most academic research on MC toolkits has focused on this dyadic perspective and has analyzed how toolkits should be designed to facilitate effective dyadic interaction (Dellaert and Stremersch, 2005; Huffman and Kahn, 1998; Randall, Terwiesch, and Ulrich, 2005, 2007; von Hippel, 2001; von Hippel and Katz, 2002).

The main premise of this paper is that the customer–toolkit dyad should be expanded to include user communities. The success of virtual user communities such as those seen in open-source software, Wikipedia, and many other forums and joint projects in which peer-to-peer information is exchanged and diffused for the benefit of the community and others suggests that MC toolkits might also benefit from breaking up the dyadic perspective. Various researchers have reported that self-designing a product with an MC toolkit might place an excessive strain on the individual novice customer (Dellaert and Stremersch, 2005; Huffman and Kahn, 1998)—especially if the underlying toolkit offers high levels of design freedom. This has problematic consequences, because such a customer might not be able to generate a product that fits her own preferences in a satisfactory manner, which would severely reduce her willingness to pay a premium for MC products (Franke and Piller, 2004; Schreier, 2006; Randall, Terwiesch, and Ulrich, 2007).

Ill-structured problems in general and MC self-design tasks (e.g., designing the entire face of a pair of skis from scratch) in particular are characterized by a large number of open constraints (Goel and Pirolli, 1992; Reitman, 1965; Simon, 1973). Structuring and resolving problems of this type involves dealing with these constraints by gathering missing information regarding potential problem goals, possible solution paths, and evaluation criteria (Guindon, 1990; Simon, 1973). Experienced problem solvers such as industrial designers or architects compensate for missing information by making assumptions based on their internally stored knowledge and experience. If they feel they need additional information, they also access external sources of information, for example, by consulting the literature or asking peers for advice (Eckert and Stacey, 1998; Pearce et al., 1992; Wood and Agogino, 1996).

Most customers lack experience in developing their own products and cannot fall back on proven strategies and criteria when self-designing a product with an MC toolkit (Jeppesen, 2005; Randall, Terwiesch, and Ulrich, 2005). In many cases, they also have only limited insights regarding their own preferences (and thus also regarding the problem structure) and find it difficult to develop an initial idea (Huffman and Kahn, 1998; Simonson, 2005). As a result, many novice MC toolkit users could benefit from external sources of information. Based on Newell and Simon's (1972) theory of human problem solving, external information may be helpful in all three major phases of the MC self-design process: (1) development of an initial idea; (2) generation of a (preliminary) design; and (3) design evaluation (Figure 1).

Figure 1. Dyadic Interaction and Complementary Functions of a User Community

Despite their potential impact on the quality of the outcome of self-design processes, the complementary function of information from peers in the first and the third phase has hardly attracted attention in MC research thus far. Regarding the second phase, however, Jeppesen (2005), Jeppesen and Frederiksen (2006), and Jeppesen and Molin (2003) provide strong empirical evidence that external information from user communities is beneficial to individual self-design processes. Their findings are based on several case studies in the computer gaming and digital music instruments industries, where a number of leading-edge MC toolkit providers offer online platforms that facilitate information exchange among customers (discussion forums). Jeppesen (2005) shows that experienced toolkit users are willing to support others with regard to efficient toolkit use (e.g., how certain toolkit functions work) and that this peer-based help improves individual problem solving—particularly in the second phase, when the user aims to generate a preliminary design. Jeppesen concludes that the establishment of user-to-user help functions is “a promising way for firms to reduce the burden of support and to create conditions for better toolkit use” (ibid., p. 359).

The present study aims to extend this line of research. The main premise is that individual self-design processes in MC may work more effectively if the customer–toolkit dyad is complemented by input from peers in the first phase (development of an initial idea) and the third phase (evaluating preliminary solutions; Figure 1). Two controlled experiments were conducted in which 191 subjects used an MC toolkit to design their own individual skis. It is found that providing other users' designs as potential starting points in the first phase stimulates the integration of existing solution chunks, which indicates more systematic problem-solving behavior. Peer input also turned out to have positive effects in the third phase. Providing other customers' opinions on interim design solutions stimulated favorable problem-solving behavior, namely, the integration of external feedback. The use of these two problem-solving heuristics in turn leads to an improved process outcome—that is, self-designed products that meet the preferences of the customer more effectively (measured in terms of perceived preference fit, purchase intention, and willingness to pay [WTP]).

Development of Hypotheses


The process of creatively designing something new generally begins with the development of an initial design idea (Goel and Pirolli, 1992; Guindon, 1990; Newell and Simon, 1972; von Hippel and Katz, 2002). Based on their own preferences or external requirements, designers try to anticipate how the object to be developed should look. In the literature on problem solving, this initial phase is regarded as crucial to the success of problem-solving processes (Goel and Pirolli, 1992; Guindon, 1990; Purcell and Gero, 1996; Simon, 1973). By developing an internal representation of the possible goal states, the problem solver limits the design task to a certain category of adequate solutions. This relieves her from having to consider a potentially unlimited number of solutions, and it allows goal-directed—and therefore more efficient—problem-solving behavior (Newell and Simon, 1972; Simon, 1973).

When confronted with a completely new design task, designers sometimes face difficulties in coming up with an initial design idea (Wood and Agogino, 1996). Due to a situational lack of creativity or experience in the design of a particular kind of object, they might not be able to predetermine a target design from scratch (Guindon, 1990; Wood and Agogino, 1996). One common form of behavior among designers in such situations is to generate and explore different design alternatives themselves to learn more about “good” and “bad” designs (von Hippel, 1994; von Hippel and Katz, 2002). This heuristic problem-solving method of trial-and-error learning is a time-consuming cognitive burden because it is not goal directed. That is why experienced designers often employ a much more efficient heuristic in framing the design problem: They systematically search for appealing designs and design elements that already exist and can be adapted, modified, and changed into new forms to meet new requirements during new product development (Akin, 1978; Van Lehn, 1998). This “integration of existing solution chunks” can be observed, for example, in the creative problem-solving behavior of fashion designers who search for inspiration when developing new styles (Eckert and Stacey, 1998; Lawson, 2000) or of architects when planning new buildings (Chi, Glaser, and Farr, 1988; Pearce et al., 1992; Pirolli and Anderson, 1985).

If professional designers benefit from internally and externally stored designs and design elements, then novice MC toolkit users should profit even more from the integration of those existing solution chunks. Novice toolkit users are not familiar with the process of self-designing a product, and they usually have only limited insight into their preferences for different product attributes (Dellaert and Stremersch, 2005; Huffman and Kahn, 1998; Simonson, 2005). Having no clear target design in mind, novices will easily feel overwhelmed by the numerous potential design options (Chase and Simon, 1973; Huffman and Kahn, 1998). However, in the traditional customer–toolkit dyad, existing solution chunks cannot be retrieved easily if they are not provided by the toolkit. Of course, the customer can browse the Internet in search of inspiration or try to collect this information offline by scanning catalogs, visiting shops, or observing products in use. Searching for inspiration in this way involves considerable transaction costs and is not necessarily effective (Eckert and Stacey, 1998). It can therefore be argued that novice toolkit users will integrate more existing solution chunks in the phase of developing the initial idea if the MC toolkit includes design solutions generated previously by other MC toolkit users. In this way, the costs of retrieval should be relatively low for the customer. As the existing customer designs originate from peers who faced a similar situation, they should exhibit a wide variety of attractive and up-to-date designs and therefore foster creativity in the individual toolkit user (Purcell and Gero, 1996). For the manufacturer, the use of designs generated by customers (as opposed to professional designers) brings about concrete cost advantages, as research has shown that user community members are often willing to support each other free of charge (Jeppesen, 2005; Jeppesen and Frederiksen, 2006; Jeppesen and Molin, 2003) and often freely reveal their designs (Jeppesen and Frederiksen, 2006; Prügl and Schreier, 2006). This makes existing peer-based solution chunks a potentially helpful—and at the same time inexpensive—means of user support (Jeppesen, 2005).

On the basis of the previous considerations, it is argued that toolkit users who are offered predesigned, peer-based designs as stimuli are more likely to integrate existing solution chunks than customers who are forced to rely on other (toolkit-external) sources of inspiration. In line with the theory of creative problem solving, design processes that integrate solution chunks to a greater degree should be more structured and should generate a more positive outcome (Chi, Glaser, and Farr, 1988; Eckert and Stacey, 1998; Pirolli and Anderson, 1985).

H1: Providing an MC toolkit user with peer-generated design solutions will enhance the integration of existing solution chunks into the individual customer's MC toolkit self-design process.

H2: The more the individual customer integrates existing solution chunks into the MC toolkit self-design process, the better the outcome of the self-design process will be (measured in terms of perceived preference fit, willingness to pay, and purchase intention).

In the phase of evaluating a (preliminary) design solution, information provided by peers might also be useful to the MC toolkit user. During the design process, a designer repeatedly checks whether the solution meets her own preferences and external requirements (Dorst and Cross, 2001; Lawson, 2000; Maher, Poon, and Boulanger, 1996). By evaluating the preliminary design, the designer is able to reduce her uncertainty about the quality of the solution generated. This evaluation enables the designer to identify and correct major flaws in the design to improve the outcome (von Hippel and Katz, 2002; Ilgen, Fisher, and Taylor, 1979; Morrison and Bies, 1991).

Professional designers often carry out these evaluation processes on their own and rely on their comprehensive experience when judging the quality of a design. However, even professional designers are sometimes unable to evaluate a preliminary design solution. Especially when confronted with a completely novel design task, they may perceive uncertainty regarding the adequacy of the preliminary design (Ashford and Cummings, 1983; Goel and Pirolli, 1992). Due to their lack of experience, they have limited knowledge about their own preferences or common practices and norms concerning the design of that specific type of product (Akin, 1978; Bonnardel and Sumner, 1996). Therefore, they seek information from external sources to evaluate their preliminary designs. One way of obtaining such information is to present the preliminary design to peers. Industrial designers and architects, for example, are reported to discuss their sketches of preliminary designs with colleagues before they proceed to generate a detailed design (Gabriel and Maher, 2002). Empirical studies show that if professional designers integrate external feedback into the design process, the design outcome tends to be superior (Curtis, Krasner, and Iscoe, 1988). Also, scholarly research usually benefits from feedback given by peer reviewers (e.g., Scott, 2007). Feedback is generally regarded as a valuable resource in identifying the weaknesses of a potential solution and in gathering useful information on how to enhance the solution (Ashford and Cummings, 1983; Morrison and Bies, 1991).

In the traditional toolkit–user dyad, it is not easy for the customer to obtain external feedback on her (preliminary) design solution. Most MC toolkits provide their users with a more or less accurate visual representation of the design created as well as its technical features and price information (von Hippel and Katz, 2002). This kind of feedback leaves the actual evaluation task to the customer and does not provide guidance in improving the design. Like any other designer, a novice toolkit user might try to compensate for a lack of experience in the evaluation of a particular design solution by seeking external feedback from others. However, the dyadic conception of MC toolkits generally makes it difficult to share and discuss such design solutions with peers. Therefore, obtaining genuine external feedback again involves high transaction costs. A customer can, for example, invite friends to inspect the design as shown on the PC screen, produce a screenshot of the design and e-mail it to a peer who is willing and able to give feedback, or describe the design idea verbally and seek feedback in this way. However, this process may prove difficult, as the novice toolkit user has to find others who are willing to evaluate her design and are capable of giving useful tips on how to improve it further (Ashford and Cummings, 1983; Morrison and Bies, 1991). Novices in particular might abandon the search for such feedback due to its uncertain value and high transaction costs. Moreover, it has been found that "poor performers" (as novices often are) generally tend to avoid diagnostic information due to ego-defensive motives (Zuckerman et al., 1979). Especially in situations where (negative) feedback can be directly attributed to the person seeking it, individuals with low task abilities often tend to avoid feedback information rather than seek it (Ashford and Tsui, 1991; Janis and Mann, 1977; Lambird and Mann, 2006; Willerman, Lewitt, and Tellegen, 1960).

As feedback provided by peers within a user community is both easy to obtain and anonymous in the sense that the customer seeking feedback does not have to reveal her "real" identity, it should involve less risk of triggering ego-defensive motives (Bargh, McKenna, and Fitzsimmons, 2002). Such an MC toolkit function should therefore enhance feedback-seeking behavior. It is thus hypothesized that MC toolkits that provide peer-based feedback information will lead to more external feedback being processed by the individual customer. In turn, more external feedback on preliminary design solutions should have a positive influence on the outcome of the self-design process.

H3: Providing an MC toolkit user with peer-based feedback on preliminary design solutions will stimulate the integration of external feedback into the individual customer's MC toolkit self-design process.

H4: The more the individual customer integrates external feedback information on preliminary design solutions into the MC toolkit self-design process, the better the outcome of the self-design process will be (measured in terms of perceived preference fit, willingness to pay, and purchase intention).

Study 1: Peer Information in the Stage of Idea Development


Method

Overview. In Study 1, the authors explore the impact of peer information on individual self-design during idea development (Phase 1). The hypotheses are tested by means of a one-factor between-subjects experiment in which access to peer designs was manipulated. Participants were invited to self-design an individual product using the toolkit provided by the ski manufacturer Edelwiser (see earlier discussion). This toolkit allows users to design carving skis according to their individual preferences using a set of design tools. The toolkit was made accessible on prepared PCs in separate booths, and participants were offered soft drinks and snacks to create a natural environment. Before starting the self-design process, participants were randomly assigned to either the experimental group (access to other users' designs; n=57) or the control group (no access to other users' designs; n=56). After designing their custom products, participants completed a questionnaire containing the key measures used to test the hypotheses.

Participants were management students from the authors' university (55% females) who were 24 years old on average (standard deviation [SD]=5.08). Participation was based on self-selection: students were recruited through announcements in various relevant media (e.g., university newsletters, websites, bulletin boards) stating that all study participants would be able to enter a raffle for self-designed high-end carving skis. This procedure ensured that participants exhibited sufficiently high product category involvement in general and a high level of interest in individually self-designed carving skis in particular. In addition, by revealing the activity to be carried out, the authors intentionally facilitated preexperimental problem-solving behavior among participants—namely, starting to develop an initial design idea and potentially seeking external information for this purpose. The sample appears to consist almost exclusively of novice MC toolkit users, as the mean design expertise score is 2.16 (SD=1.42) on a seven-point scale (1=very low expertise; 7=very high expertise; see the Appendix for specific items). It is noted that the data might be biased toward young, Internet-savvy people who have little experience in self-design processes and who are highly interested in this product category. However, this particular group is among the major target segments of the underlying brand Edelwiser, and it has also been noted to be of particular interest for MC in general (Franke and Piller, 2004).

Manipulations. Participants in the control group were allowed to use only the default toolkit, which does not provide other users' designs. In other words, the individual design process starts with a blank white pair of skis. For participants in the experimental group, the authors offered peer-generated design solutions and included them in the MC toolkit. To this end, they had conducted a pilot study in which three professional designers were asked to select the most appealing designs from a set of 250 designs created by users of the Edelwiser toolkit during the previous season. The designers were provided with a list comprising all 250 ski designs and asked to rate them on a five-point scale where 1 constituted a very good design and 5 a very bad design. The evaluations were averaged, and the ski designs with an overall rating of 1 were selected, which left a total of 28 designs. These 28 peer-generated ski designs were made available to participants via a button labeled "Community library," which was integrated into the MC toolkit for the experiment.

In that area of the toolkit, participants could inspect the designs and integrate them (or parts of them) into their own design process. Subjects could completely rework the designs, as the designs were modular in structure: every design element could be adapted, moved back and forth, complemented with new elements, deleted, or simply inspected to see how it was done. Again, the only difference between the toolkits in the two groups was that one included other users' designs (experimental group) whereas the other did not (control group; Figure 2). Note that both groups could theoretically integrate existing solution chunks into their individual self-design processes (e.g., all of them could use mental or other toolkit-external solution chunks; since they knew that they would have the opportunity to design a ski face themselves, they also could have thought about design ideas before the experiment). Unlike the others, however, participants in the experimental group received an explicit stimulus to do so from the community library function. There were no time constraints, and the individual self-design processes lasted 47 minutes on average (mean experimental group=48.35, SD=17.99; mean control group=45.71, SD=16.02; p=.41).
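To make the pilot study's selection rule concrete, the following minimal sketch (in Python; the design IDs and ratings are made up for illustration and are not the authors' data) averages the three designers' ratings and retains only designs with a perfect mean score of 1:

```python
# Minimal sketch of the pilot-study selection rule (hypothetical data):
# three designers rate 250 user-created designs on a scale from
# 1 ("very good") to 5 ("very bad"); only designs whose averaged rating
# equals 1 enter the community library.
import random

random.seed(42)
ratings = {f"design_{i:03d}": [random.randint(1, 5) for _ in range(3)]
           for i in range(250)}  # design ID -> the three designers' ratings

library = [design for design, scores in ratings.items()
           if sum(scores) / len(scores) == 1.0]  # all three rated it "very good"
print(f"{len(library)} designs selected for the community library")
```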

Figure 2. The Edelwiser MC Toolkit with and without a Community Library

Measurement. Immediately after finishing the design process, participants completed a paper questionnaire. All measurement items and descriptive statistics are listed in the Appendix. The level of "integration of existing solution chunks" is measured using four items, for example, "I started to design my custom skis by adapting an existing ski design," on a seven-point scale (1=strongly disagree; 7=strongly agree). Due to a lack of existing scales, the authors developed new items based on the extant literature (Chi, Glaser, and Farr, 1988; Pearce et al., 1992; Pirolli and Anderson, 1985). Exploratory factor analysis led to one extracted factor (explained variance=59%), thus suggesting unidimensionality. The alpha of the scale also surpassed the .7 threshold (.75). To assess the validity of the construct, a confirmatory factor analysis (CFA) was employed, which resulted in satisfactory overall fit statistics (e.g., adjusted goodness-of-fit index [AGFI]=.94; goodness-of-fit index [GFI]=.99; comparative fit index [CFI]=1.00; incremental fit index [IFI]=1.00). All factor loadings were positive and significant, which points to a sound degree of convergent validity.
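As an aside for readers replicating the reliability check, Cronbach's alpha follows directly from the item variances; the sketch below applies the standard formula to simulated responses (the study's raw data are not reproduced here):

```python
# Cronbach's alpha for a four-item scale:
# alpha = k/(k-1) * (1 - sum of item variances / variance of the scale total).
# Data are simulated: a shared latent score plus item noise, 113 "subjects."
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(4.0, 1.2, size=(113, 1))         # shared "true score"
items = latent + rng.normal(0, 0.8, size=(113, 4))   # four noisy indicators

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)        # variance of each item
total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
alpha = k / (k - 1) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```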

The perceived quality of the outcome of the self-design process (i.e., the quality of the self-designed skis) is measured in terms of (1) perceived preference fit, (2) purchase intention, and (3) willingness to pay. Preference fit (the perceived fit between product and preferences) is measured using three items (alpha=.83), which were in part borrowed from Huffman and Kahn (1998); purchase intention is measured using the single item developed by Juster (1966); WTP is measured using the open-ended contingent valuation approach (“How much would you be willing to pay for your self-designed pair of Edelwiser skis?”; Jones, 1975). All three variables are found to be positively and significantly correlated with each other (r's>.20; p's<.01), which generally points to a valid measurement of the participants' perceptions of the quality of the self-designed skis.
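The convergent-validity check reported above amounts to pairwise Pearson correlations among the three outcome measures; a brief sketch on simulated values (variable names are illustrative, not the authors'):

```python
# Pairwise Pearson correlations among the three outcome measures
# (simulated data; in the study, all r's>.20 with p's<.01).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
fit = rng.uniform(1, 7, 113)                # perceived preference fit
wtp = 30 * fit + rng.normal(0, 40, 113)     # willingness to pay (euros)
intent = fit + rng.normal(0, 1.5, 113)      # purchase intention

for label, (x, y) in [("fit/wtp", (fit, wtp)), ("fit/intent", (fit, intent)),
                      ("wtp/intent", (wtp, intent))]:
    r, p = stats.pearsonr(x, y)
    print(f"{label}: r={r:.2f}, p={p:.3f}")
```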

Finally, the authors measured the participants' product category involvement and design expertise as control variables. Product category involvement is measured by the proxy “WTP for a pair of white Edelwiser skis” (“How much would you be willing to pay for a pair of white Edelwiser skis?”; participants were given the opportunity to inspect a physical “blank” model of the carving skis before starting the self-design process). Design expertise is measured by four items (alpha=.79), which were developed on the basis of extant literature (Ball, Evans, and Dennis, 1994; Ball and Ormerod, 2000). One example reads, “I had already designed a ski or a similar product before this experiment.” The scales were averaged for further analyses.

Findings

The findings confirm H1 and H2 (Table 1). H1 was tested using analysis of variance (ANOVA) and H2 using ordinary least squares (OLS) regressions. In H1, it is stated that providing MC toolkit users with other users' designs would stimulate the integration of existing solution chunks into the individual customers' self-design process. In line with this prediction, the authors find that participants in the experimental group (access to other users' designs) report having used this heuristic (mean=3.64) more heavily than participants in the control group (mean=2.63; p<.001). In H2, it is stated that the more a customer integrates existing solution chunks into her self-design process, the better the outcome of the self-design process will be. Regardless of the underlying dependent variable (preference fit; WTP; purchase intention), H2 was supported (controlling for product category involvement and design expertise): The more existing solution chunks are used, the better the customer's perceived outcome becomes (β=.23; β=.19; β=.23; p's<.05).

Table 1. Findings of Study 1

Test of H1: Analysis of Variance (ANOVA)

  Integration of Existing Solution Chunks into the Self-Design Process
    Access to other users' designs (experimental group, n=57): mean=3.64 (SD=1.84)
    No access to other users' designs (control group, n=56): mean=2.63 (SD=.97)
    F=13.323 (p<.001)

Test of H2: Ordinary Least Squares (OLS) Regressions (n=113; cells show β with p-value in parentheses)

                                            DV: Preference Fit   DV: Willingness to Pay   DV: Purchase Intention
  Integration of Existing Solution Chunks   .23 (.02)            .19 (.03)                .23 (.01)
  Product Category Involvement              .11 (.26)            .55 (.00)                .33 (.00)
  Design Expertise                          .14 (.14)            .01 (.90)                .14 (.13)
  R² (Adjusted R²)                          .08 (.05)            .30 (.28)                .16 (.14)
  F-value (p-value)                         2.973 (.04)          15.363 (.00)             6.775 (.00)
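The tests behind Table 1 are a one-way ANOVA (H1) and OLS regressions with standardized coefficients (H2). The sketch below reproduces the procedure on simulated data with illustrative variable names; it is not the authors' analysis script:

```python
# One-way ANOVA (H1) and OLS regressions (H2) on simulated data.
# Requires numpy, pandas, scipy, and statsmodels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "group": rng.integers(0, 2, 113),          # 1=experimental, 0=control
    "involvement": rng.uniform(50, 400, 113),  # WTP for blank skis (proxy)
    "expertise": rng.uniform(1, 7, 113),
})
df["chunks"] = 2.6 + df["group"] + rng.normal(0, 1.4, 113)    # heuristic use
df["preference_fit"] = 0.2 * df["chunks"] + rng.normal(0, 1, 113)

# H1: do the two groups differ in their use of the solution-chunk heuristic?
f_val, p_val = stats.f_oneway(df.loc[df.group == 1, "chunks"],
                              df.loc[df.group == 0, "chunks"])
print(f"ANOVA: F={f_val:.3f}, p={p_val:.3f}")

# H2: regress the outcome on the heuristic plus the two control variables;
# z-scoring all variables first yields standardized betas as in Table 1.
cols = ["preference_fit", "chunks", "involvement", "expertise"]
df[cols] = (df[cols] - df[cols].mean()) / df[cols].std()
model = smf.ols("preference_fit ~ chunks + involvement + expertise",
                data=df).fit()
print(model.summary())
```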

Study 2: Peer Information in the Design Evaluation Stage


Method

Overview. In Study 2, the authors explore the impact of peer-based feedback information on individual self-designs in the evaluation stage (Phase 3). The hypotheses are tested by means of a one-factor between-subjects experiment in which the provision of peer feedback on users' interim designs (as an MC toolkit function) was manipulated. The same setting and toolkit (Edelwiser) were employed as in Study 1. Participants were randomly assigned to either the experimental group (provision of feedback function; n=41) or the control group (no provision of feedback function; n=37). Again, the participants were management students from the authors' university (55% females) who were 24 years old on average (SD=3.72). The same implications as those discussed in Study 1 apply to this sample. In this study, the sample again consisted almost exclusively of novice users (mean design expertise=2.33; SD=1.51; 1=very low expertise; 7=very high expertise).

Manipulations. Participants in the control group were able to use only the default toolkit, which does not provide a "feedback feature"—that is, subjects were not offered peer feedback on their interim design ideas. For participants in the experimental group, the following manipulation was performed: After participants had created a satisfactory ski design at t0, they were instructed to come back after one week (t1) to revise their designs if desired. In the meantime, the authors arranged for three toolkit users (recruited from the Edelwiser community) to review the participants' designs. They were instructed to comment on the individual designs in a way that would allow participants to improve them. The feedback was given in writing, and the style was similar to user-to-user support in online communities. Equivalence (i.e., a consistent stimulus level for all subjects in the treatment group) was achieved using the following procedure. First, the peer reviewers were provided with exemplary feedback, accompanied by a general explanation of what the feedback should look like. The most important point in this briefing was to ensure equivalence among the different instances of feedback, that is, an identical level of constructive criticism on the different design solutions. For this purpose, the peer reviewers were told to focus on at least two but no more than three flaws in each design. Note that the peer reviewers were also instructed to make only suggestions for improvement that could be realized using the Edelwiser MC toolkit. Second, after receiving the feedback, the authors paraphrased each set of comments into a colloquial and friendly tone and randomly integrated them into one of three different standardized texts that resembled an informal peer-to-peer e-mail with a uniform introduction and a uniform complimentary closing (Figure 3). In total, three sets of comments were obtained for each self-design in the treatment group.

Figure 3. Examples of Feedback Provided by Peer Reviewers

The feedback information was distributed to subjects at the beginning of t1, and they were told that they could then rework their designs if desired. They were informed that they could use the peer feedback at their own discretion (i.e., use it or discard it) when continuing their self-design processes. As in the treatment group, participants in the control group were invited to come back and rework their designs after one week, but they were not provided with peer input. Thus, the only difference between the two groups was that the experimental group received a stimulus to integrate external feedback into their self-design processes (the written feedback sheet handed out to each subject), whereas the control group did not. Note that, regardless of the stimulus, both groups could theoretically have sought out and integrated external feedback between t0 and t1 on their own initiative; unlike the others, however, participants in the experimental group received an explicit stimulus to do so in the form of input from peers. As in Study 1, there were no time constraints, and subjects required an average of 52 minutes for their self-design processes at t0 (mean experimental group=52.56, SD=15.31; mean control group=50.92, SD=17.86; p=.67) and 38 minutes at t1 (mean experimental group=37.17, SD=15.25; mean control group=37.92, SD=16.28; p=.84).

Measurement. Immediately after finishing the design processes at t0 and t1, participants completed the questionnaire (for measurement items and descriptive statistics, see the Appendix). The degree to which external feedback was integrated (measured only after t1) is captured by five items, for example, "I considered suggestions from other people on how to improve my ski design," on a seven-point scale (1=strongly disagree; 7=strongly agree). Due to a lack of existing scales, these items were developed based on the extant literature (Ashford and Cummings, 1983; Morrison and Bies, 1991). Exploratory factor analysis led to one extracted factor (explained variance=82%), and the alpha of the scale is .94. CFA delivered satisfactory overall fit statistics (e.g., AGFI=.86; GFI=.95; CFI=.99; IFI=.99), and all factor loadings were positive and significant.

In both questionnaires, the authors captured the participants' perceptions of the self-designed skis' quality by measuring the subjects' perceived preference fit (alpha at t0=.89; alpha at t1=.84), purchase intention, and WTP. The same scales as those used in Study 1 were employed to measure these dependent variables, and once again they were found to be positively correlated (r's>.26; p's<.05). Finally, the same control variables as in Study 1 were measured (product category involvement and design expertise; alpha=.78; measured after t0). The scales were averaged for further analyses.

Findings

The findings provide support for H3 and H4 (Table 2). H3 was tested using ANOVA and H4 using OLS regressions. In H3, it was stated that providing peer-based feedback on preliminary design solutions will positively stimulate the integration of external feedback into the individual customer's self-design process. In line with this conjecture, it was found that participants in the experimental group (provision of feedback) report having used this heuristic more heavily (mean=5.57) than participants in the control group (mean=1.74; p<.001). In H4, it was stated that the more a customer integrates external feedback on preliminary design solutions, the better the outcome of the self-design process will be. Regardless of the underlying dependent variable (preference fit at t1, WTP at t1, purchase intention at t1), H4 could be confirmed (controlling for product category involvement, design expertise, and for preference fit, WTP, and purchase intention at t0, respectively): The more heavily external feedback is used, the better the subject's perceived outcome will be (β=.29; β=.13; β=.16; p's≤.05).

Table 2. Findings of Study 2

Test of H3: Analysis of Variance (ANOVA)

  Integration of External Feedback into the Self-Design Process
    Provision of feedback function (experimental group, n=41): mean=5.57 (SD=1.17)
    No provision of feedback function (control group, n=37): mean=1.74 (SD=1.23)
    F=196.948 (p<.001)

Test of H4: Ordinary Least Squares (OLS) Regressions (n=78; cells show β with p-value in parentheses)

                                        DV: Preference Fit (t1)   DV: Willingness to Pay (t1)   DV: Purchase Intention (t1)
  Integration of External Feedback      .29 (.01)                 .13 (.05)                     .16 (.05)
  Product Category Involvement          .09 (.32)                 .22 (.01)                     .13 (.08)
  Design Expertise                      –.08 (.43)                .06 (.39)                     .00 (.96)
  Preference Fit (t0)                   .62 (.00)                 –                             –
  Willingness to Pay (t0)               –                         .69 (.00)                     –
  Purchase Intention (t0)               –                         –                             .80 (.00)
  R² (Adjusted R²)                      .41 (.38)                 .74 (.72)                     .62 (.60)
  F-value (p-value)                     12.844 (.00)              51.120 (.00)                  29.328 (.00)

Additional test with difference scores (Δ = t1 – t0; n=78)

                                        DV: Δ Preference Fit      DV: Δ Willingness to Pay      DV: Δ Purchase Intention
  Integration of External Feedback      .33 (.01)                 .26 (.03)                     .29 (.02)
  Product Category Involvement          .08 (.48)                 .08 (.47)                     .21 (.06)
  Design Expertise                      .06 (.61)                 .09 (.47)                     .02 (.90)
  R² (Adjusted R²)                      .13 (.09)                 .10 (.06)                     .12 (.09)
  F-value (p-value)                     3.640 (.02)               2.645 (.06)                   3.474 (.02)

As an additional test, the authors set the differences (Δ) between the measures at t1 and t0 (Δ preference fit, Δ WTP, Δ purchase intention) as dependent variables, because one could argue that the feedback can impact the design improvement achieved only in the second design phase (in relation to the outcome of the first phase), and, thus, the performance measure should be independent of the level of performance achieved in the first design phase. However, this does not alter the findings. Again, H4 could be confirmed: The more intensely external feedback is used, the better the subject's perceived outcome becomes (β=.33; β=.26; β=.29; p's<.05).
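This difference-score variant of the H4 test can be sketched as follows (simulated data with hypothetical column names): the t1 – t0 change in each outcome is regressed on feedback integration and the controls.

```python
# Difference-score regression: the change in WTP between t0 and t1 is
# regressed on feedback integration plus controls (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "feedback": rng.uniform(1, 7, 78),      # integration of external feedback
    "involvement": rng.uniform(50, 400, 78),
    "expertise": rng.uniform(1, 7, 78),
    "wtp_t0": rng.uniform(100, 400, 78),
})
df["wtp_t1"] = df["wtp_t0"] + 10 * df["feedback"] + rng.normal(0, 30, 78)
df["delta_wtp"] = df["wtp_t1"] - df["wtp_t0"]  # gain from the revision phase

model = smf.ols("delta_wtp ~ feedback + involvement + expertise", data=df).fit()
print(model.params.round(2))
print(model.pvalues.round(3))
```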

Discussion


This paper extends the existing research on MC toolkits by experimentally demonstrating that peer input from other customers is beneficial to the individual customer and her self-design process. Previous research has already demonstrated this with regard to handling the toolkit per se, that is, the second phase of the self-design process (generation of a preliminary design). This pattern could be confirmed for the first and the third phase of the self-design process. In the first phase (development of an initial idea), it was found that the supply of other users' designs as potential starting points stimulates the integration of existing solution chunks, which indicates more systematic problem-solving behavior. Peer input also has positive effects in the third phase (evaluation of the preliminary design). This input stimulated favorable problem-solving behavior, namely, the integration of external feedback into the customer's problem-solving process. Both problem-solving heuristics (integration of existing solution chunks and integration of external feedback information) in turn lead to an improved process outcome, that is, self-designed products that meet the preferences of the customer more effectively. These findings have important theoretical and managerial implications.

The findings mainly suggest that the two research areas of outsourcing design tasks to customers by means of MC toolkits and the phenomenon of innovative user communities should not be examined in isolation. This has generally been the case to date, with the notable exceptions of Jeppesen (2005), Jeppesen and Frederiksen (2006), and Jeppesen and Molin (2003). Instead, these areas share a common base, namely, the fact that customers can be creative and innovative (for an overview, see von Hippel, 2005). Therefore, it makes sense to analyze the extent to which these two phenomena are related or can be used to complement each other. The findings suggest that the canonical customer–toolkit dyad can be expanded in a meaningful way to include user communities: MC toolkit users can assist each other by making their designs available as starting points during the development of the initial idea and by giving each other constructive feedback on interim design solutions. This ultimately results in a higher level of satisfaction with the outcome of the self-design process.

The obvious next research question is how MC toolkits should be designed to facilitate such positive interaction effects. The peer-originated sample designs, which were integrated into the toolkit as a link leading to a "community library" (as visible in Figure 2), proved helpful to the customers. Future research should analyze the mechanisms that are most effective when it comes to deciding which user designs should be included in this library and in what patterns (e.g., number, order, grouping). It can be assumed that not all user designs will be equally interesting to other customers (Prügl and Schreier, 2006). The authors suggest collaborative filtering systems supported by customers as a promising way of obtaining quick and cost-effective peer input (e.g., voting systems; Ogawa and Piller, 2006), but more research on this issue is necessary.
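To illustrate the collaborative-filtering idea, one conceivable (purely hypothetical) voting mechanism would rank candidate designs by average peer rating, with a minimum number of votes required before a design is listed in the community library:

```python
# Hypothetical community voting filter for the design library: designs are
# ranked by average peer rating (1-5 stars); designs with too few votes are
# withheld so that no design is listed on the basis of a single opinion.
from collections import defaultdict

votes = defaultdict(list)  # design ID -> list of peer ratings
for design, stars in [("aztec_sun", 5), ("aztec_sun", 4), ("aztec_sun", 5),
                      ("neon_grid", 2), ("neon_grid", 3), ("mono_wave", 5)]:
    votes[design].append(stars)

MIN_VOTES = 3
ranking = sorted(((round(sum(r) / len(r), 2), design)
                  for design, r in votes.items() if len(r) >= MIN_VOTES),
                 reverse=True)
print(ranking)  # only 'aztec_sun' has enough votes to be listed
```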

Similar questions arise when it comes to the organization of peer feedback on preliminary design solutions. Who should be assigned the task of giving feedback? Should the content of feedback be standardized in any way (e.g., feedback on specific criteria such as functionality or design attractiveness, filtering of negative or inane critique), or should it be left entirely to the customer giving the feedback? Should her "feedback track record" be revealed? It would be easy to provide customers seeking assistance with a rating feature that states whether feedback was perceived as helpful or not, as in the rating systems employed by online retailers such as Amazon or eBay. The underlying question here is the appropriate degree of control in such a system. On the one hand, it might be desirable to have a high level of control—that is, a highly "channeled" process in which the different tasks of, for example, getting and giving feedback and providing sample solutions are clearly structured and may be moderated by the company providing the MC toolkit. There is no guarantee that customers will always act in the interest of the manufacturer (Schau and Muñiz, 2006, provide an example from the Apple Newton community). On the other hand, too little freedom might create negative incentives for customers to engage in peer support. In general, the question of effective incentive schemes for peer assistance in such a system is important, but it has rarely been addressed in academic research. After all, there is a big difference from noncommercial endeavors such as open-source software, where free (nonmonetary) user-to-user assistance and the free revealing of one's own ideas and developments are considered important norms (Franke and Shah, 2003; Harhoff, Henkel, and von Hippel, 2003; Jeppesen, 2005; Jeppesen and Frederiksen, 2006; Jeppesen and Molin, 2003; Prügl and Schreier, 2006). The MC toolkit visibly serves commercial interests, and the customers provide the firm with indirect benefits (Jeppesen, 2005; Jeppesen and Frederiksen, 2006). It is suggested that future research investigate the effectiveness of different incentives such as providing company-based or peer-based recognition, establishing norms, triggering intrinsic motivation, and providing monetary rewards or token systems. On the whole, the way peer information is integrated into an MC toolkit might have a huge impact on customer perception—not only for the customers who receive feedback but also for those who provide it.
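A "feedback track record" of the kind mentioned above could be as simple as a helpful/not-helpful tally per feedback giver, in the spirit of the retailer rating systems cited; the sketch below is a hypothetical illustration, not an existing toolkit feature:

```python
# Hypothetical per-reviewer track record: recipients rate each piece of
# feedback as helpful or not, and the reviewer's helpfulness ratio is shown
# to future feedback seekers.
from dataclasses import dataclass

@dataclass
class ReviewerRecord:
    helpful: int = 0
    not_helpful: int = 0

    def rate(self, was_helpful: bool) -> None:
        """Record one recipient's verdict on a piece of feedback."""
        if was_helpful:
            self.helpful += 1
        else:
            self.not_helpful += 1

    @property
    def helpfulness(self) -> float:
        total = self.helpful + self.not_helpful
        return self.helpful / total if total else 0.0

record = ReviewerRecord()
for verdict in (True, True, False):
    record.rate(verdict)
print(f"helpfulness: {record.helpfulness:.0%}")  # -> helpfulness: 67%
```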

The idea of assigning customers an important role in the core processes of an MC toolkit can be extended even further. Thomke and von Hippel (2002) suggest outsourcing the task of improving or developing the toolkit itself to the customers. They predict that some lead users who derive particular benefits from the outcome will be both able and motivated to provide valuable input even at that extreme, and that the result will be self-regulating MC systems. Examples from the computer gaming industry show that this is not pure speculation: leading-edge customers who were not satisfied with the official toolkits provided by the manufacturer "cracked" them and employed user-modified toolkits to push design possibilities even further (Prügl and Schreier, 2006). However, this area certainly requires further research.

Companies that already operate or plan to build an MC toolkit should consider integrating peer information to facilitate easier and better self-design processes. This can be achieved not only through the two features analyzed in this project (providing customer-generated sample solutions and integrating peer feedback) but also through process-related feedback as suggested by Jeppesen (2005). The concrete implementation will, of course, depend on the product category and the customers' preferences and characteristics. Edelwiser.com, the partner in this research project, has already laid out clear plans to implement these functions in the regular toolkit.

This research is subject to some methodological limitations that might also stimulate further research. First, the authors simulated peer contributions in a laboratory setting. The external validity of the findings could be enhanced by observing "real" user community behavior (i.e., the provision and use of peer information in a field study or a field experiment). Second, the experiments were conducted with student participants, which always involves the risk of limited external validity, as this group might differ from the overall population. Scholars following this line of research should therefore involve larger samples composed of different user segments.

References

  • Akin, O. (1978). How Do Architects Design? In Artificial Intelligence and Pattern Recognition in Computer Aided Design, ed. J.-C. Latombe. Amsterdam: North-Holland Publishing Company.
  • Ashford, S.J. and Cummings, L.L. (1983). Feedback as an Individual Resource: Personal Strategies of Creating Information. Organizational Behavior and Human Performance 32 (3):370–398.
  • Ashford, S.J. and Tsui, A.S. (1991). Self-Regulation for Managerial Effectiveness: The Role of Active Feedback Seeking. Academy of Management Journal 34 (2):251–280.
  • Ball, L.J., Evans, J.St.B.T., and Dennis, I. (1994). Cognitive Processes in Engineering Design: A Longitudinal Study. Ergonomics 37 (5):1753–1786.
  • Ball, L.J. and Ormerod, T.C. (2000). Putting Ethnography to Work: The Case for a Cognitive Ethnography of Design. International Journal of Human-Computer Studies 53 (2):147–168.
  • Bargh, J.A., McKenna, K.Y.A., and Fitzsimmons, G.M. (2002). Can You See the Real Me? Activation and Expression of the "True Self" on the Internet. Journal of Social Issues 58 (1):33–48.
  • Bonnardel, N. and Sumner, T. (1996). Supporting Evaluation in Design. Acta Psychologica 91 (3):221–240.
  • Chase, W.G. and Simon, H.A. (1973). Perception in Chess. Cognitive Psychology 4 (1):55–81.
  • Chi, M.T.H., Glaser, R., and Farr, M.J. (1988). The Nature of Expertise. Hillsdale, NJ: Lawrence Erlbaum Associates.
  • Curtis, B., Krasner, H., and Iscoe, N. (1988). A Field Study of the Software Design Process for Large Systems. Communications of the ACM 31 (11):1268–1287.
  • Dellaert, B.G.C. and Stremersch, S. (2005). Marketing Mass-Customized Products: Striking a Balance between Utility and Complexity. Journal of Marketing Research 42 (2):219–227.
  • Dorst, K. and Cross, N. (2001). Creativity in the Design Process: Co-evolution of Problem-Solution. Design Studies 22 (5):425–437.
  • Eckert, C. and Stacey, M. (1998). Fortune Favours Only the Prepared Mind: Why Sources of Inspiration Are Essential for Continuing Creativity. Creativity and Innovation Management 7 (1):9–16.
  • Franke, N. and Piller, F. (2004). Value Creation by Toolkits for User Innovation and Design: The Case of the Watch Market. Journal of Product Innovation Management 21 (6):401–415.
  • Franke, N. and Shah, S. (2003). How Communities Support Innovative Activities: An Exploration of Assistance and Sharing among End-Users. Research Policy 32 (1):157–178.
  • Gabriel, G.C. and Maher, M.L. (2002). Coding and Modelling Communication in Architectural Collaborative Design. Automation in Construction 11 (2):199–211.
  • Goel, V. and Pirolli, P.L. (1992). The Structure of Design Problem Spaces. Cognitive Science 16 (3):395–429.
  • Guindon, R. (1990). Designing the Design Process: Exploiting Opportunistic Thoughts. Human-Computer Interaction 5 (2–3):305–344.
  • Harhoff, D., Henkel, J., and von Hippel, E. (2003). Profiting from Voluntary Information Spillovers: How Users Benefit by Freely Revealing Their Innovations. Research Policy 32 (10):1753–1769.
  • Huffman, C. and Kahn, B.E. (1998). Variety for Sale: Mass Customization or Mass Confusion? Journal of Retailing 74 (4):491–513.
  • Ilgen, D.R., Fisher, C.D., and Taylor, M.S. (1979). Consequences of Individual Feedback on Behavior in Organizations. Journal of Applied Psychology 64 (4):349–371.
  • Janis, I. and Mann, L. (1977). Decision Making. New York: Free Press.
  • Jeppesen, L.B. (2005). User Toolkits for Innovation: Consumers Support Each Other. Journal of Product Innovation Management 22 (4):347–362.
  • Jeppesen, L.B. and Frederiksen, L. (2006). Why Do Users Contribute to Firm-Hosted User Communities? The Case of Computer-Controlled Music Instruments. Organization Science 17 (1):45–63.
  • Jeppesen, L.B. and Molin, M.J. (2003). Consumers as Co-developers: Learning and Innovation outside the Firm. Technology Analysis & Strategic Management 15 (3):363–383.
  • Jones, D. (1975). A Survey Technique to Measure Demand under Various Pricing Strategies. Journal of Marketing 39 (3):75–77.
  • Juster, F.T. (1966). Consumer Buying Intentions and Purchase Probability: An Experiment in Survey Design. New York: Columbia University Press.
  • Lambird, K.H. and Mann, T. (2006). When Do Ego Threats Lead to Self-Regulation Failure? Negative Consequences of Defensive High Self-Esteem. Personality and Social Psychology Bulletin 32 (9):1177–1187.
  • Lawson, B. (2000). How Designers Think: The Design Process Demystified. London: Butterworths.
  • Maher, M.L., Poon, J., and Boulanger, S. (1996). Formalising Design Exploration as Co-evolution: A Combined Gene Approach. In Advances in Formal Design Methods for CAD, ed. J.S. Gero and F. Sudweeks. London: Chapman and Hall.
  • Morrison, E.W. and Bies, R.J. (1991). Impression Management in the Feedback-Seeking Process: A Literature Review and Research Agenda. Academy of Management Review 16 (3):522–541.
  • Newell, A. and Simon, H.A. (1972). Human Problem Solving. Englewood Cliffs, NJ: Prentice Hall.
  • Ogawa, S. and Piller, F.T. (2006). Collective Customer Commitment: Reducing the Risks of New Product Development. MIT Sloan Management Review 47:65–72.
  • Pearce, M., Goel, A.K., Kolodner, J.L., Zimring, C., Sentosa, L., and Billington, R. (1992). Case-Based Design Support: A Case Study in Architectural Design. IEEE Expert 7 (5):14–20.
  • Pirolli, P.L. and Anderson, J.R. (1985). The Role of Learning from Examples in the Acquisition of Recursive Programming Skills. Canadian Journal of Psychology 39:240–272.
  • Prügl, R. and Schreier, M. (2006). Learning from Leading-Edge Customers at "The Sims": Opening Up the Innovation Process Using Toolkits. R&D Management 36 (3):237–250.
  • Purcell, T.A. and Gero, J.S. (1996). Design and Other Types of Fixation. Design Studies 17 (4):363–383.
  • Randall, T., Terwiesch, C., and Ulrich, K.T. (2005). Principles for User Design of Customized Products. California Management Review 47 (4):68–85.
  • Randall, T., Terwiesch, C., and Ulrich, K.T. (2007). User Design of Customized Products. Marketing Science 26 (2):268–280.
  • Reitman, W.R. (1965). Cognition and Thought. New York: Wiley.
  • Schau, H.J. and Muñiz, A. (2006). A Tale of Tales: The Apple Newton Narratives. Journal of Strategic Marketing 14 (1):19–33.
  • Schreier, M. (2006). The Value Increment of Mass-Customized Products: An Empirical Assessment. Journal of Consumer Behaviour 5 (4):317–327.
  • Scott, A. (2007). Peer Review and the Relevance of Science. Futures 39 (7):827–845.
  • Simon, H.A. (1973). The Structure of Ill-Structured Problems. Artificial Intelligence 4 (3–4):181–201.
  • Simonson, I. (2005). Determinants of Customers' Responses to Customized Offers: Conceptual Framework and Research Propositions. Journal of Marketing 69 (1):32–45.
  • Thomke, S. and von Hippel, E. (2002). Customers as Innovators: A New Way to Create Value. Harvard Business Review 80 (4):74–81.
  • Van Lehn, K. (1998). Analogy Events: How Examples Are Used during Problem Solving. Cognitive Science 22 (3):347–388.
  • von Hippel, E. (1994). Sticky Information and the Locus of Problem Solving: Implications for Innovation. Management Science 40 (4):429–439.
  • von Hippel, E. (2001). Perspective: User Toolkits for Innovation. Journal of Product Innovation Management 18 (4):247–257.
  • von Hippel, E. (2005). Democratizing Innovation. Cambridge, MA: MIT Press.
  • von Hippel, E. and Katz, R. (2002). Shifting Innovation to Users via Toolkits. Management Science 48 (7):821–831.
  • Willerman, B., Lewitt, D., and Tellegen, A. (1960). Seeking and Avoiding Self-Evaluation by Working Individually or in Groups. In Decision, Values and Groups, ed. D. Wilner. New York: Pergamon.
  • Wood, W.H. and Agogino, A.M. (1996). A Case-Based Conceptual Design Information Server for Concurrent Engineering. Computer-Aided Design 28 (5):361–369.
  • Zuckerman, M., Brown, R.H., Fox, G.A., Lathin, D.R., and Minasian, A.J. (1979). Determinants of Information Seeking Behavior. Journal of Research in Personality 13 (2):161–179.

BIOGRAPHICAL SKETCHES


Dr. Nikolaus Franke is full professor of entrepreneurship and innovation at the Vienna University of Economics and Business Administration and leader of the Vienna User Innovation Research Initiative (http://www.userinnovation.at). He is interested in understanding the phenomenon of creative and innovative users and researches methods that help companies use this potential.

Peter Keinz is a Ph.D. candidate at the Institute for Entrepreneurship and Innovation at the Vienna University of Economics and Business Administration and a member of the Vienna User Innovation Research Initiative. In his research, he focuses on the field of collaborative innovation. In particular, he is interested in toolkits for user innovation and design.

Dr. Martin Schreier is assistant professor of marketing at Bocconi University in Milan, Italy. His research focuses on active customer integration in the design and marketing of new products (e.g., toolkits for user innovation and design, innovative user communities, lead-user research).

Appendix


Appendix. Measurement Scales

  • Integration of existing solution chunks into the self-design process (Study 1)

Four items: I evaluated many different ideas for ski designs before I started to design my custom skis. I started to design my custom skis by adapting an existing ski design. Every element of my ski design was self-developed. (reversed) An existing ski design served as a starting point for my own design. Measured on seven-point scales (1=strongly disagree; 7=strongly agree); alpha=.75; mean=3.15 (SD=1.58).

  • Perceived preference fit (Studies 1 and 2)

Three items: I am very satisfied with my self-designed ski design. Compared with the ski designs available at conventional stores, I prefer my self-designed skis. My self-designed skis reflect my idea of an ideal ski design. Measured on seven-point scales (1=strongly disagree; 7=strongly agree); alpha=.83; mean=5.63 (SD=.99) (Study 1); alpha=.84; mean=6.06 (SD=.90) (Study 2).

  • Purchase intention (Studies 1 and 2)

One item: If you needed skis right now, how likely is it that you would buy your self-designed Edelwiser skis? Measured on an 11-point scale (1=completely unlikely, likelihood of 1%; 11=almost sure, likelihood of 99%); mean=7.50 (SD=2.63) (Study 1); mean=7.70 (SD=2.56) (Study 2).

  • Willingness to pay (WTP) (Studies 1 and 2)

One item: How much would you be willing to pay for your self-designed pair of Edelwiser skis? Open-ended question (amount in euros); mean=261.67 (SD=87.65) (Study 1); mean=254.86 (SD=98.79) (Study 2).

  • Product category involvement (Studies 1 and 2)

One item: How much would you be willing to pay for a pair of white Edelwiser skis? Open-ended question (amount in euros); mean=140.68 (SD=90.24) (Study 1); mean=128.14 (SD=103.67) (Study 2).

  • Design expertise (Studies 1 and 2)

Four items: I am involved in design in my professional activities. I had already designed a product myself before this experiment. I had already designed skis or a similar product before this experiment. I would call myself a designer. Measured on seven-point scales (1=strongly disagree; 7=strongly agree); alpha=.79; mean=2.16 (SD=1.42) (Study 1); alpha=.78; mean=2.33 (SD=1.51) (Study 2).

  • Integration of external feedback into the self-design process (Study 2)

Five items: I considered suggestions from other people on how to improve my ski design. My final ski design is based on recommendations from other people. Tips from other people were very important in the further improvement of my design. I received feedback on my design from other people. I revised my ski design completely on my own. (reversed) Measured on seven-point scales (1=strongly disagree; 7=strongly agree); alpha=.94; mean=3.75 (SD=2.26) (Study 2).