
Abstract


Corporate universities have emerged as a mechanism for providing companies with a wide variety of training and development activities. They are a recent but under-researched phenomenon, and given their substantial budgets, it might be expected that they would wish to evaluate what they do. The authors explore the evaluation practices of six Italian corporate universities, paying particular attention to the means by which these practices are tailored to the needs of the various stakeholders. Stakeholder-based evaluation provides the theoretical framework for the study.

The literature suggests that much evaluation of training focuses on a single stakeholder, the shareholder, and that practice draws heavily on Kirkpatrick's hierarchical model. In the context of the corporate university, however, the authors find that multi-stakeholder evaluation is used in practice. Moreover, various aspects of corporate university performance were evaluated, and data were supplied to stakeholders depending on the nature of their involvement. Stakeholder-based evaluation is argued to be a useful framework where there are a number of stakeholders, but training evaluation models other than the hierarchical one are needed if all relevant training factors are to be evaluated. The implications for research and practice are discussed.


Introduction


The corporate university (CU) has emerged during the last 20 years as a distinct organizational unit expressly dedicated to the management of training processes. CUs organize training activities with the aim of implementing the corporate strategy at different levels of the organization. To achieve this objective, CUs tend to provide training services to the company's employees, customers, partners and suppliers. These services are often provided in collaboration with research centres, universities, advisors and consultants. CUs are a part of a complex network of relationships inside and outside company boundaries. This complexity tends to increase the number of subjects that are involved in the training activities.

Training evaluation is regarded as important among practitioners and scholars alike (Lien et al., 2007). The training evaluation literature demonstrates that the process has a strategic importance for all companies because it quantifies the value of the training activities and justifies the investments made (Hashim, 2001; Noe, 2000; Preskill, 1997; Swanson & Holton, 1999). Arguably, this evaluation process is even more strategic for a CU, which, owing to the significant financial commitment for the company that operates it, must demonstrate the results achieved in a more precise manner. A reliable and solid evaluation system can make a difference to the perception of a CU's value and credibility by showing that it contributes to results. This may represent the movement of training from tactical delivery to a more strategic position (Barley, 2007, p. 56). Furthermore, because of the distinct characteristics of CUs, the training evaluation system needs to involve all the various stakeholders. Such evaluation can be seen as critical for decision making. Indeed, as stated by Paton (2005), one of the reasons why training evaluation is more strategic for the CUs is that ‘CUs' activities are often a meeting place for a wide range of stakeholders, with very distinct perspectives, concerns and professional languages (and national cultures, indeed)’ (p. 123).

This paper investigates Italian CUs in order to better understand training evaluation systems and the ways that they involve a variety of stakeholders in the training evaluation process.

Purpose of the study


The literature on programme evaluation, the discipline that studies the most effective methods of evaluating policies, programmes and projects in both the public and private sectors, has focused its attention on evaluation models that aim to include stakeholders in assessment processes (Alkin et al., 1998; Bryk, 1983; Greene, 1988; Mark & Shotland, 1985; Mark et al., 2000). The results of this research, however, are not often applied within the practice of training evaluation (Lewis, 1996). In practice, at a macro and at the organizational level, ‘training and development is a matter of faith: it is by no means clear that it is consistently related to better performance’ (Lewis, 1997, p. 3). Moreover, research shows that ‘among companies that regularly perform assessment of their training programs, only a small percentage actually understand the importance of not only conducting the evaluation but also using the evaluation results correctly’ (Bober & Bartlett, 2004). Against this background, the participation of stakeholders in the evaluation process has long been singled out as a key variable in the motivation to utilize evaluation findings (Cousins & Earl, 1992; Weiss, 1983).

The traditional training evaluation models are almost exclusively related to the measurement of results within the perspective of one single stakeholder, which typically corresponds to the shareholders (Michalski & Cousins, 2000). The literature on stakeholder-based evaluation suggests that if evaluation is to improve a programme's performance, it must be used instrumentally and must be structured as a system that supports actions and further decision-making processes (Flynn, 1992). For this reason, it is necessary to know the evaluation needs of the actors involved in the programme's decision-making processes and to design an evaluation system in the light of those needs.

The objective of this study is to analyse the application of a stakeholder-based approach to training evaluation. The study has a specific focus on CUs, as they are considered, because of the number of stakeholders they involve, to be a particularly suitable context for this approach. Moreover, the review of the CU literature reported in the following section shows that CUs are under-represented in research (Blass, 2001) and that ‘given the large investment and high visibility of many CUs, further research that focuses on the use of evaluation in these entities should be conducted’ (Bober & Bartlett, 2004, p. 380). In addition, the training evaluation literature suggests that only limited use is made of evaluation in training practice generally (Carnevale & Schulz, 1990; Dixon, 1996; Phillips, 1997; Robinson & Robinson, 1989; Russ-Eft & Preskill, 2001), which makes research into evaluation practices all the more important.

Six case studies of Italian CUs have been investigated and the following questions have been addressed.

  • What are the key features of CUs' training evaluation systems?
  • What are the relationships between stakeholders and CUs' evaluation systems; and in particular, what evaluation data are received by stakeholders?

This paper can be considered as building on two previous studies. The first is the study by Bober and Bartlett (2004) that investigated the use of training programme evaluation results in CUs. That study examined which organizational members used evaluation data, the purposes of using the data and the factors related to that use. The present study does not focus on the actual uses made of evaluation data by the organizational members; rather, it seeks to identify the stakeholders considered by design to be part of the evaluation system. The second study is by Blass (2001), which compares the traditional ‘public university’ sector with the CU sector, considering, as the criterion for comparison, the stakeholders that are involved. Blass's paper is based on the idea that ‘clearly, there are numerous stakeholders with differing inputs all requiring different outputs’ (p. 164), and it explains who the typical stakeholders of a CU are, the set of inputs that they provide and the outputs that they expect. This paper extends Blass's analysis, in that its aim is to identify how CUs design their evaluation systems in order to enable stakeholders to monitor the system's performance.

Theoretical framework


This section presents the conceptual basis of the present study. In order to outline the context of analysis, the first part introduces the concept of the CU; the distinct features of CUs are highlighted by a comparison with the traditional corporate training department. The second part introduces the most common training evaluation models and highlights their advantages and limitations. The third part investigates the evaluation of training within a multi-stakeholder context.

CUs

There is no universal definition of a CU in the literature: the numerous definitions that have been proposed since the 1990s (Allen, 2002; Dealtry, 2000; El-Tannir, 2002; Jarvis, 2001; Meister, 1998; Prince, 2003; Prince & Stewart, 2002) varied significantly and emphasized different characteristics of CUs. If we refer to the definition proposed by the European Corporate Universities and Academies Network (ECUANET), ‘a corporate university (known also as an academy, institute, learning center or college) is an organizational unit dedicated to transforming the business-orientated learning into actions. It is planned, guided and closely related to the strategy of the company in order to achieve business excellence through the improvement of personnel performance and the development of a business culture within which innovation can flourish. Besides producing value from their intellectual assets, it helps the organization to identify and hold on to key employees, while permitting personnel to implement a valid learning process based on experience and the opportunities for career development’ (ECUANET, 2006, p. 1).

In regard to operational aspects, the literature highlights that a CU

  • contributes to the company's strategy formulation and supports the organization in its strategy implementation (Dealtry, 2005; Eccles, 2004); indeed, as stated by Dealtry (2005): ‘[t]he CU, if well-founded, is the first step in ensuring that a hub of organic-learning, corporate degree programmes is timely and fits well into the overall portfolio of corporate provision so that it will serve the intellectual purpose of the organisation’ (p. 78);
  • is the main provider or coordinator of the training and development activities organized for the company's human resources and those of its customers and suppliers; it is fully supported by the company's top management; as stated in Barley (2002): ‘[a]ll CUs need senior executive support. However, they do not necessarily need to report directly to a senior executive in order to have an impact on the organization. In fact, there are three main places where responsibility for a CU resides: a chief executive's office, a human resource office, and a business unit’ (p. 45);
  • highlights the innovation pursued through partnerships with universities, business schools and public and private research centres: ‘in the context of the CU, facilitating the development of networks and partnerships with world-class learning partners in order to deliver learning interventions within the organisation is crucial’ (Prince & Stewart, 2002, p. 806);
  • supplies its own training, development and knowledge management services with extensive use made of information and communication technologies (Macpherson et al., 2005); as Prince and Beaver (2001) stated: ‘a world-class CU is likely to be involved in the development and ongoing support of Intranets and knowledge management databases. Ensuring that individual learning is captured and made available as a resource for the benefit of the whole organization is likely to be a central function of a world-class CU’ (p. 193);
  • sometimes awards academic degrees or certificates; as stated by Andresen and Lichtenberger (2007) regarding the CU in Germany: ‘Apart from company-specific programmes, the vast majority of German CUs co-operates with institutions of higher education. They offer programmes that are equivalent or identical to university or business school education and in general are designed and/or delivered by accredited institutions of higher education. Partnerships between CUs and executive education departments of higher education institutions are becoming broader, deeper and more numerous today in Germany’ (pp. 117–18).

In order to capture the essence of CUs, it is useful to compare and contrast the CU and a traditional training department (Meister, 1998), although, inevitably this involves some over-generalization. A traditional training department operates mostly on a reactive basis, responding to specific training demands that emerge at different levels of the organization. CUs are permeated by a holistic vision and aim to plan a business training system with the objective of translating cultures, values and missions into operational action plans. Training departments are typically included in human resources departments and report to a human resources manager, whereas the CU needs a stronger commitment, usually provided by the company's senior management, who are also represented on the board of the CU. The senior managers collaborate with the CU in strategy planning activities. The training activities provided by traditional training departments tend to be ‘prescriptive’, as they are defined on the basis of training needs analysis conducted by the department's staff. A CU uses a ‘sales process’ to provide its services, based on its expertise and knowledge of needs within a competitive market.

The above summary has employed a set of variables that will subsequently be used in the case selection for this study. The distinction between the two forms should, however, be viewed as a continuum, as shown in Figure 1. For this reason, the findings of this paper can be utilized by both CUs and training departments. The dimensions used can also provide a framework for managerial decision making and action.


Figure 1. The continuum between the corporate university and the traditional training department.


Training evaluation

Although a variety of training evaluation models exists in the literature (Garvin, 1995; Swanson & Holton, 1999), the most renowned and widely used is the hierarchical model (Kirkpatrick, 1975, 1994, 1996). This is based on four levels of evaluation: the reaction of the participants, their learning, the degree of transfer of what they have learned into practice and the impact of this transfer on the business results at an overall level. At this last level of evaluation, from the 1990s onwards, the need to translate the impact on business results into an economic value was included (Fitz-enz, 1988; Geber, 1995; Kearsley, 1982; Phillips, 1997; Phillips & Phillips, 2001; Tesoro, 1998). Indeed, the hierarchical model was integrated with the development of specific approaches that aim to appraise, in monetary terms, the resources used and the benefits gained in order to permit a cost–benefit analysis and the determination of a measure of the rate of return of the investment in training.
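For clarity, the rate-of-return calculation referred to above is conventionally expressed in the return-on-investment literature cited here (e.g. Phillips, 1997) as a simple cost–benefit ratio; the formulation below is the standard one from that literature, not a formula reported by the present study:

  ROI (%) = (net programme benefits / programme costs) × 100

where net programme benefits are the monetary benefits attributed to the training minus the programme costs. For instance, under these assumptions, a programme costing 100,000 euros and yielding 150,000 euros in attributed benefits would show an ROI of 50 per cent.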

Analysis of business practice leads to three main findings: (1) the hierarchical model constitutes the reference model, even if it is implemented differently within each business context (Lien et al., 2007); (2) in the main, only the first two of the four levels of evaluation are used in an organized and structured manner (Bassi et al., 1996); and (3) return-on-investment models are not common, because training costs and benefits are often characterized by a high level of intangibility, which makes their ‘translation’ into monetary values difficult (Abernathy, 1999; Alliger & Janak, 1989; McLean, 2005; McLinden, 1995).

There are three main criticisms of the hierarchical model (Bates, 2004). The first is that the model concentrates on far too small a group of variables. In fact, the four levels of evaluation that it proposes are based on an excessively simplified vision regarding the effectiveness of training, particularly because they do not consider the influences of the organizational context. A significant research stream (Cannon-Bowers et al., 1995; Ford & Kraiger, 1995; Kontoghiorghes, 2001; Salas & Cannon-Bowers, 2001; Tannenbaum & Yukl, 1992) documented the presence of a wide range of organizational, individual, planning and implementation factors regarding training that can influence the effectiveness of the training process, before, during or after the intervention. Such studies have empirically shown the impact of training process variables on performance that the hierarchical model does not consider, such as organization culture (Tracy et al., 1995), the values and the objectives of the organizational units in which training is provided (Ford et al., 1992), and the support of line managers in the acquisition of competencies and in their transfer into working practices (Bates et al., 2000).

A second criticism concerns the assumption of causal relations between the levels of evaluation; that is, the idea that positive results cannot be achieved at the higher levels unless they have been achieved at the lower ones. Many studies support this criticism: various empirical analyses (Alliger & Janak, 1989; Alliger et al., 1997) have highlighted a lack of correlation among the measures identified at different levels of the model, which strongly disputes the concept of linear causality. Moreover, the absence of such a relationship among the levels of evaluation does not support the hypothesis, implicit in the hierarchical model, that satisfactory results at the higher levels imply satisfactory results at the lower ones (Alliger & Janak, 1989).

A third criticism of the hierarchical model is the lack of a multi-actor perspective. The only point of view that the model assumes is that of the company, which is conceived as a unitary system. This criticism is linked to the concept of organizational effectiveness: Altschuld and Zheng (1995) reviewed several models for the evaluation of organizational effectiveness and stated that ‘[l]acking absolute criteria and causality related to outcome, complex organizations should turn to social referents to demonstrate their effectiveness’ (p. 203). This means that there cannot be a single, universally acceptable model of organizational effectiveness. Returning to training evaluation, the hierarchical model seems concerned exclusively with demonstrating the benefits of training from a single stakeholder perspective in strictly financial and/or operational terms. This neglects the evaluation needs of all the other stakeholders involved in the training process and makes the model especially restrictive in a context such as the CU, which is characterized by many actors.

Training evaluation in a multi-stakeholder context: the stakeholder-based approach


Stakeholder-based evaluation is an approach – developed in the field of programme evaluation – that identifies and is informed by particular individuals or groups. Stakeholders are the distinct groups interested in the results of an evaluation, either because they are directly affected by (or involved in) programme activities or because they must make a decision about the programme or about a similar programme (Gold, 1983; Guba & Lincoln, 1989; Stake, 1983). Expanded participation in evaluation can promote utilization (Cousins & Earl, 1992); indeed, Patton (1997) stated that the use of evaluation will occur in direct proportion to its capability to ‘reduce the uncertainty of action for specific stakeholders’ (p. 348). In particular, there are four main positive impacts of the actors' participation in an evaluation process (Flynn, 1992):

  1. ethics – because participation allows everyone to voice and defend their own interests;
  2. convenience – because people not involved in the decision-making process can oppose the decisions made by others and render them ineffective;
  3. inclusion of expert knowledge – because it is necessary to involve experts when making decisions; and
  4. motivation – because participation ensures that people are more aware of the rationale behind the decisions and more interested in their efficient and effective implementation.

Following such considerations, various studies began to address the application of the stakeholder-based evaluation approach (that is, a form of participatory evaluation) to training programmes. First, it was seen that different stakeholders are called upon to evaluate the same training project, and that each of them has specific evaluation needs (Garavan, 1995; Michalski & Cousins, 2000). These studies start with the definition of a stakeholder as an entity that may affect the performance of a training process because it is called upon to take specific decisions regarding the process. Therefore, evaluation is conceived as a process which should supply stakeholders with the necessary information to support their decision-making processes. Starting from these basic assumptions, research concentrated on the evaluation needs of stakeholders within the company. Such research used attribution theory to analyse the evaluation needs of managers, participants and training experts (Brown, 1994), and through the use of concept mapping, the evaluation needs of managers, training participants and learning experts (Michalski & Cousins, 2000). This research stream then widened its range of inquiry to consider the evaluation needs of external stakeholders, such as external training suppliers, public training school operators and trade unions (Garavan, 1995).

A stakeholder-based approach to training evaluation becomes very important in the case of a CU, where the learning system to be appraised is very complex and typically involves a larger number of stakeholders than is the case with a training department. In designing their evaluation systems, CUs might take into account the following stakeholders:

  • the senior management of the founding company;
  • the CU's customers, that is, the company's internal organizational units, or the customers of and suppliers to the company, that use the training services provided by the CU;
  • the participants in the training courses and programmes;
  • the participants' managers;
  • the training providers, research centres and universities with which the CU establishes partnerships and alliances;
  • the workers' trade union representatives at both group and subsidiary company level; and
  • the local communities to which the CU relates.

Each of these stakeholders is interested in some aspect or aspects of the CU's performance, depending on its own specific interests and objectives and its role inside the training process.

Research methods


This paper is based on exploratory multiple case studies (Yin, 1984). Such an approach was considered the most suitable method for the study because it enabled the investigation of the training evaluation process and the perceptions of various organizational participants, as distinct from the content of the evaluation documents. There is limited literature on the application of stakeholder-based evaluation to training, and a case-study approach is generally considered appropriate in the early stages of research on a topic because it can provide a new perspective.

Participants

Cases were selected because they were likely to replicate and extend an emergent theory, rather than – as in traditional, hypothesis-testing studies – with the aim of building a statistical sample randomly selected from the population. The selection of the cases required particular care because of the rapid development of CUs in recent years: many firms have declared that they have set up a CU without actually modifying the methods already used to manage their existing training processes. In order to apply a systematic method in case selection, we identified Italian businesses that displayed the distinctive characteristics of a CU rather than those of a traditional training department, using the criteria illustrated in Figure 1. Six case studies were selected on this basis; a description of the cases is included in Table 1.

Table 1. Brief summary of the six corporate universities studied

                               Corporate                             Corporate university
          Industry             Number of   Turnover (in    Year of      Number of   Training hours
                               employees   million euros)  foundation   employees   delivered per year
  Case 1  Energy               58,000      36,000          1999         140         104,600
  Case 2  ICT services         91,000      31,000          2001         200         450,000
  Case 3  Food and beverage    500         205             2002         8           1,200
  Case 4  Automotive           160,000     46,000          1972         87          890,000
  Case 5  Banking              56,000      17,000          2002         50          350,000
  Case 6  Energy               69,000      58,000          2001         100         790,000

Data collection and analysis

The data collection was based on multiple data sources. After the collection of a significant amount of background information prior to each visit, data were collected on site at each of the six CUs. Two instruments were used. The first was a form that aided the analysis of documents available in each CU. This helped us to obtain information about the structure of the CU (such as external presentations of the mission, vision and services offered) and the evaluation practices implemented (such as evaluation reports, evaluation procedures and employee memoranda). This document review guide contained sections for recording the type and the content of the item(s) being reviewed.

The second instrument was an interview schedule. Interviews were conducted with key staff, managers and faculty inside the CUs. The interviewees were selected on the basis of their knowledge and position within the CU and included CU managers, individuals in ‘sales positions’ and employees coordinating specific training programmes. The interviewees were first contacted by phone; they were then sent an email describing the objectives of the research project, the methodology used and the structure of the interview. All the interviews were conducted face to face and lasted between 50 and 60 minutes. As suggested in the literature on case-study research, each interview was conducted by two researchers: one assigned to conduct the interview and one acting as an observer. This dual presence provided two viewpoints for observation and analysis, covering not only the content of the answers provided by the interviewee but also the relational dynamics that developed during the interview. A total of 27 interviews were conducted, and all were taped and transcribed.

Data were analysed against a predetermined set of variables: business context; objectives, mission and organizational structure of the CU; and the characteristics of the training services provided. Furthermore, the information obtained through the interviews was analysed in relation to the objectives of the evaluation system, the processes implemented, the actors involved and the reporting system created. In particular, each transcript was read, coded and analysed to determine the aspects of performance evaluated by the CU, the systems used and the relationship between the aspects of performance and the stakeholders. An analytical report was prepared for each organization and then submitted for validation to the interviewee who held the highest hierarchical position inside each CU. In addition to data triangulation (collecting data from a variety of sources) and methodological triangulation (collecting data by two methods), investigator triangulation was implemented (more than one researcher analysing the data). At the completion of the analysis, in order to assure interpretation validity, three external readers with knowledge and experience in training and development also read the transcripts, highlighting the aspects of performance evaluated by the CUs and the relationship between these and the stakeholders.

Findings


This section is organized in two parts in accordance with the research questions addressed by this study.

  1. Performance monitored by the CU (key features of CUs' training evaluation systems).
  2. Relationships between the aspects of performance monitored and CUs' stakeholders (the evaluation data received by stakeholders).

Aspects of performance that were evaluated

The identification of these aspects was achieved through a two-step process. The first step was the reconstruction of the key features of the evaluation system of each CU, which required detailed mapping of evaluation practices. Each interviewee was asked to highlight the evaluation practices they knew and used, often designed for particular courses or training programmes; most of the programmes were managed at a ‘local’ level and were not included in a general framework. The second step was the classification of the evaluation practices. Each practice implemented by the CUs evaluated one specific aspect of CU performance. For example, we found practices that focused on the participants' satisfaction (to what degree the CU is able to satisfy the participants' expectations); evaluation of the quality and quantity of skills and knowledge acquired by the course participants (to what extent the CU is able to transfer skills and knowledge); and evaluation of the cost of services provided by the CU compared with the cost of the same services provided in the open training market.

The evaluation practices were subdivided under 10 headings, as below. Each heading refers to a specific aspect of the performance of the CU.

Efficiency.  This includes the practices that evaluate the use of resources in the CU: the evaluation of the CU's cost and of its productivity.

Innovation.  The CU's initiatives here are aimed at developing competencies that do not exist within the organization and at diffusing innovative topics and practices. Such initiatives are generally conducted in partnership with external actors, mainly universities and research centres. The objectives are to outsource those processes for which the company does not internally have the specific skills and to start research programmes that aim to introduce innovative practices within the company. Evaluation practices have been established to assess the extent to which the CU is innovative. These are based on the following indicators: the number of partnership projects handled as main contractor; the number of new partnership projects as a ratio of all projects; the number of academic partners (appearing in the Italian rankings) by field of interest; and a periodic assessment of the CU's innovative training service suppliers.
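
The innovation indicators listed above are simple counts and ratios. As a minimal illustrative sketch (all partner names, flags and figures below are hypothetical, not taken from the cases studied), they might be computed as follows:

```python
# Hypothetical partnership records for one reporting period.
projects = [
    {"partner": "Politecnico di Milano", "is_new": True,  "main_contractor": True},
    {"partner": "Universita Bocconi",    "is_new": False, "main_contractor": True},
    {"partner": "LUISS",                 "is_new": True,  "main_contractor": False},
]

# Number of partnership projects handled as main contractor.
main_contractor_count = sum(p["main_contractor"] for p in projects)

# New partnership projects as a ratio of all projects.
new_project_ratio = sum(p["is_new"] for p in projects) / len(projects)

# Number of distinct academic partners.
academic_partner_count = len({p["partner"] for p in projects})
```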

Strategic training orientation.  An appraisal is made of the extent to which the CU is strategic – that is, of how far the training services contribute to the implementation of business strategy at different levels of the organization. One single evaluation practice emerged: the percentage of training days dedicated to topics that the company considers strategic.
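
This single practice reduces to one percentage. A minimal sketch, with hypothetical topics and day counts, might look like this:

```python
# (topic, training days delivered, considered strategic by the company?)
training_days = [
    ("digital transformation", 40, True),
    ("compliance refresher",   25, False),
    ("lean operations",        35, True),
]

strategic_days = sum(days for _, days, strategic in training_days if strategic)
total_days = sum(days for _, days, _ in training_days)
strategic_share = 100 * strategic_days / total_days  # percentage of strategic training days
```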

Convenience.  The economic advantage of using the CU rather than other market solutions is appraised, for example, by comparing the cost of all items supplied by the CU with the cost of obtaining them from potential suppliers on the ‘open market’. The assessment practice here comprises a comparison of CU price lists with those of Italian and international competitors that have comparable levels of quality.
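
The comparison of price lists described above can be summarized as a single ratio. The sketch below is illustrative only; the courses and prices are hypothetical, and a real comparison would be restricted to competitors of comparable quality, as the text notes:

```python
# Hypothetical price lists (per training day, EUR) for the same courses.
cu_prices = {"leadership": 1200, "project management": 950, "negotiation": 1100}
market_prices = {"leadership": 1500, "project management": 900, "negotiation": 1300}

def convenience_ratio(cu, market):
    """Total CU cost as a fraction of the equivalent open-market cost.

    A value below 1.0 indicates that the CU is, overall, the cheaper option.
    """
    total_cu = sum(cu.values())
    total_market = sum(market[course] for course in cu)
    return total_cu / total_market

ratio = convenience_ratio(cu_prices, market_prices)
```

Note that a single aggregate ratio can hide courses on which the CU is more expensive (here, project management), so a per-course comparison may also be informative.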

Impact on individual performance.  Changes in the performance results of individual workers are appraised after the participation in training programmes, particularly quantitative aspects of performance. This is measured using different assessment practices, such as an assessment made by the participant's supervisor 3–6 months after the training programme based on a checklist of the competencies on which the training was focused.

Impact on behaviour.  The participants' changes in behaviour are appraised after taking part in training programmes, particularly qualitative elements of performance. The evaluation practices here are often still at an experimental phase or are implemented only for certain training programmes. They are not used in all the training projects provided. This aspect of performance is assessed by organizing a specific assessment session between the participant and his boss after the training programme; by creating self-assessment tools for participants to be used before and after the training programme; or by observations made by trainers and based on checklists a few months after the end of the programme. An integrated use of various practices was found in the cases where this kind of performance was assessed.

Impact on the organization.  The effects on the operational or economic-financial performances of the CU's customers are appraised, drawing on the changes in behaviour of the participants after a training course. Only sporadic practices were found here, related to specific projects in which it is possible to isolate the endogenous variables in relation to the training provided. The evaluation practices found were as follows: mystery shopping (after a training intervention relating to the sales force, the trainers appraised anonymously the performances of the sales staff in terms of services provided to customers; this appraisal is compared with a control group that had not received specific training); measurement of customer satisfaction (evaluation of the degree of positive response with which customers perceive the results achieved from training); improvement in the performance of the internal processes in which the training participants operate (for example, an increase in productivity, a decrease in costs, a decrease in production times or an increase in the quality of output).
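
The mystery-shopping practice amounts to comparing the mean scores of trained staff against an untrained control group. A minimal sketch, with hypothetical scores on a 1–10 scale:

```python
# Hypothetical mystery-shopping scores for trained staff vs an untrained control group.
trained_scores = [8.1, 7.9, 8.4, 7.6]
control_scores = [6.9, 7.2, 6.5, 7.0]

def mean(xs):
    return sum(xs) / len(xs)

# A positive uplift suggests an effect of the training on service quality.
uplift = mean(trained_scores) - mean(control_scores)
```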

Professional development.  The evaluation here concerns the possibility of using the skills acquired through training as a basis for making progress within the company's career system. The practices found are as follows: where an accreditation system is used, longitudinal analysis is organized to verify the degree to which training results are related to people's career advancement; in other cases, on specific courses, evaluation focuses on the number of participants who, after a number of months or years, have reached a higher position in the organization.

Satisfaction.  The level of satisfaction among participants with the training programmes organized by the CU is appraised. In all the cases examined, participants complete a standardized course satisfaction questionnaire after each course, in which the level of satisfaction with each variable taken into consideration is measured on a graduated scale. The form is always managed anonymously and, in some cases, is completed through an electronic interface that automatically saves the results. The questionnaires assess the participants' satisfaction with the course objectives, training content, organization (duration and logistics), quality of trainers, didactic material and/or support instruments, and their overall satisfaction.

Learning.  The knowledge and skills acquired during the training courses are appraised. The evaluation practices here depend on the type of training content provided. The most widely used practice consists of a test given before and after the course, sometimes supported by self-assessment tests; the post-course test is typically conducted a few months (4–8) after the course ends. It should be noted that this assessment is often connected to company compliance requirements regarding standards for professional staff. In some cases, learning assessment practices may lead to certification of competence for individuals performing specific roles, for example, project managers.
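
The pre/post-test practice yields a simple learning-gain measure. A minimal sketch, with hypothetical scores on a 0–100 scale:

```python
# Hypothetical pre- and post-course test scores for one course's participants.
pre_scores  = [55, 60, 48, 70, 62]
post_scores = [72, 75, 66, 80, 78]

# Average per-participant improvement between the pre- and post-course tests.
gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
mean_gain = sum(gains) / len(gains)
```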

Table 2 shows, for each case, which aspects of performance included in the present list are evaluated.

Table 2. Performance evaluated by each corporate university studied

                                   Case 1  Case 2  Case 3  Case 4  Case 5  Case 6
Efficiency                           X       X       X       X       X       X
Innovation                           X       X                               X
Strategic training orientation                                               X
Convenience                          X       X               X       X       X
Impact on individual performance     X       X               X       X       X
Impact on behaviour                  X       X               X       X       X
Impact on organization               X       X               X       X       X
Professional development                                     X       X
Satisfaction                         X       X       X       X       X       X
Learning                             X       X               X       X       X

The relationship between the aspects of performance evaluated and the stakeholders

This section describes how each of the aspects of performance that were evaluated relates to the respective stakeholders. Given the large number of reported practices and the wide range of actors involved in the evaluation process, we decided to concentrate on the main stakeholders, for whom specific evaluation reports were designed. These stakeholders are

  • the company that created the CU and provides the strategic guidelines and overall objectives;
  • the internal or external customer, that is, the company/business unit or possibly its customers/suppliers/partners, who purchase services from the CU;
  • the workers who take part in the training programmes; and
  • the management of the CU.

Table 3 indicates the match between aspects of performance and the stakeholders to which they relate. It can be seen that each stakeholder receives structured reports relating only to the aspects of performance in which he or she is involved. For the management of the CU, the situation is different: it appraises two performances analytically, convenience and efficiency, but it receives concise data on all the other aspects of performance. The objective of this is to enable targets to be set as a basis for improvement.

Table 3. Performances and stakeholders

Stakeholder                       Performances
Corporate                         Efficiency; Innovation; Strategic training orientation
External or internal principal    Satisfaction; Impact on behaviour; Impact on individual performance; Impact on organization; Professional development; Convenience
Participants                      Learning
Corporate university              Efficiency; Convenience; aggregate data relative to all the other performances

Discussion

The discussion comprises two parts. The first concerns the aspects of performance evaluated by the CUs, which are compared with the levels of the hierarchical evaluation model; the objective of this comparison is to establish the extent to which the model captures the different performances and adequately represents the evaluation requirements of complex contexts such as CUs. The second part concerns the relations between performance and stakeholders, and discusses the findings in the context of stakeholder-based evaluation theory.

Performance evaluated by the corporate universities and the hierarchical model

The hierarchical scheme is the one illustrated earlier, with the fourth level comprising all the practices of evaluation related to the impact of training on business results, both operational and economic-financial (which might be considered as a fifth level, the return on investment, elaborated by Phillips, 1997). This model constitutes a reference model in that ‘Kirkpatrick's (1975, 1994, 1996) four levels of training evaluation are widely accepted and frequently described within the literature, including discussion about the type of evaluation method appropriate for each level’ (Bober & Bartlett, 2004, p. 364). For this reason, this paper focuses on this model, even if the literature on training evaluation comprises other models as well (such as Alliger et al., 1997; Russ-Eft & Preskill, 2001; Swanson & Holton, 1999).

Table 4 relates the aspects of performance that were evaluated to the levels of the hierarchical model.

Table 4. Performances and the hierarchical model

Hierarchical model: levels of evaluation                                      Performances
Level 1: Satisfaction                                                         Satisfaction
Level 2: Learning                                                             Learning
Level 3: Impact on individual performance                                     Impact on individual performance; Impact on behaviour
Levels 4 and 5: Impact on the organization (both operational and financial)   Impact on organization; Convenience; Efficiency

It is not, however, possible to accommodate all the aspects of performance that emerged from the research within the hierarchical model. The performances that cannot be classified under the levels listed above are the following.

  • Strategic training orientation: Strategic orientation highlights the commitment of the CU in terms of development and implementation of the business strategy within the different levels of the organization. This evaluation is useful for analysing the extent to which the business training system is able to support the expansion of the organization by supplying human resources with the skills they need mid to long term, rather than just being a ‘supplier’ of specific skills from a current, job-centred point of view.
  • Innovation: This evaluation concerns the whole training system and not each specific project. The main objective is to verify the extent to which the training provided leads to changes in the company's work procedures and encourages innovation and change.
  • Professional development: From an internal employability viewpoint, some evaluation practices underline the importance of training as the fundamental element for career development. In particular, the number of participants who achieved a promotion in their position a few months or years after the training course is assessed. Such an element of evaluation becomes more strategic due to the increasing difficulty encountered in company career planning. This evaluation process has the objective of monitoring the congruency between the training system and the career system within the organization.

These results suggest that although the hierarchical model represents a solid scheme on which training evaluation systems can be designed, it needs to be integrated with further evaluation elements in order to evaluate the relationship between training and the organization. As has already been noted in the literature, one of the criticisms made of the hierarchical model is its limited ability to relate training to the larger organizational context (Bates, 2004). This shortcoming becomes particularly critical in complex contexts such as CUs, which have the specific objective of implementing the business strategy and relating training to the challenges that the company is facing. The findings of the present study show how the hierarchical model can be supplemented by evaluation elements that respond to this demand. The aspects of performance outlined above refer to the relationships between training and business strategy, training and the company's innovation needs, and training and the company's career systems.

Contribution to the theory of stakeholder-based training evaluation

On the basis of the findings, it is possible to highlight the following considerations.

First, the findings confirm what emerged from the literature on stakeholder-based evaluation of training. Depending on his or her interests, objectives and role within the training process, each stakeholder is interested in different elements of the evaluation. This confirms that it is essential to know the evaluation needs of the different actors for two reasons: on the one hand, to plan evaluation systems that aim to satisfy such needs, and on the other, to highlight any possible trade-offs among performances. The interests and objectives of the actors may differ and require mediation. For instance, in this research a trade-off emerged between efficiency (of interest to the company) and convenience (typically of interest to the customer), which cannot be maximized simultaneously: the customer (both internal and external) is mainly interested in the CU being cheaper than other solutions available on the market, whereas the company is more interested in the CU being able to maintain its financial independence. Knowing such trade-offs and the stakeholders they affect makes it easier to set the general objectives of the system (perhaps using mediation or consensus-building processes) and to structure a comprehensive evaluation system.

The second consideration concerns the performances on which the different stakeholders are focused. From this study it emerged, in conjunction with other available results (Collins, 2002; Russ-Eft & Preskill, 2001), that the temporal perspective in the evaluation is both short and long term, and that each actor is focused on a different temporal perspective.

  • The company views training from a long-term perspective as an instrument to implement the strategic decisions made by senior management and to instil innovative practices. The aspects of performance evaluated by the company are strategic orientation and innovation.
  • Customers view training from both a long- and a midterm perspective. Typically, they evaluate not only long-term performances (such as the impact on business results and professional development) but also midterm performances (such as impact on performance and impact on behaviour).
  • Participants were found to be focused on short-term performance related to the quantity and quality of skills acquired during the training programme (learning).

If the evaluation system aims to satisfy the stakeholders' evaluation needs, it is necessary to design it with these different temporal perspectives in mind.

The final consideration concerns the possibility of ‘translating’ training results into an economic value. Contrary to the common perception that an evaluation system acquires greater credibility when it includes numerical economic evaluations of the results, in the CUs studied this is only partly true. The evaluation practices found are not strictly related to the calculation of the return on investment for each specific training project. The evaluation systems focus more on the cost of the CU's services compared with external solutions (convenience), on the impact of training on the operational (and not strictly financial) performance of the organization (impact on organization), and on the sustainability of the CU on a mid- to long-term basis (efficiency).

Conclusions and implications

Stakeholder-based evaluation is an approach that the programme evaluation literature has featured for some time, and today a variety of theoretical models and empirical analyses are available. This approach is not, however, widely applied in the practices used to evaluate training. In the main, training evaluation practices remain focused on standardized models and oriented towards one single actor, which typically coincides with the shareholder. The present research has explored whether or not CUs, with their multiple stakeholders, have tended to adopt stakeholder-based training evaluation. We find that they have, and our research has identified the aspects of performance that they evaluate and how they disseminate information about these aspects to the various stakeholders. We also note the different orientations among the stakeholders.

These findings could be useful for building a bridge between the programme evaluation and human resource development (HRD) fields. This bridge may be strategic, considering the importance of training evaluation, which continues to be essential in demonstrating the added value of HRD, and the fact that the involvement of stakeholders in the strategic HRD aligning process has a positive effect on its effectiveness (Wognum & Lam, 2000).

Three limitations can be associated with the study. The first concerns the sample, which is small, internally heterogeneous and strongly tied to the Italian context; an expansion with a more international perspective is desirable. The second concerns the group of stakeholders: the study took into consideration four main stakeholders (the managers of the CU, the company, the CU customers and the training participants), although many others could also be considered, as stated earlier. The third concerns the research methodology: the case studies were carried out by interviewing only the CU operators, whereas further research could use different methodologies to involve the other CU stakeholders directly, making it possible to analyse their specific objectives and evaluation needs.

To conclude, we propose three possible directions for further research based on the results set out in this study.

  • In light of the evaluation systems used by the CUs and the aspects of performance they evaluate, a first development might be to design a questionnaire based on the aspects of performance recorded here, with data collection involving a larger, international CU population.
  • Based on the relationships between the aspects of performance evaluated and the stakeholders, a second possible development might be to expand knowledge by surveying different stakeholders. This extension of the research might use qualitative methodologies (such as case studies or action research), whereby the researcher interviews the different stakeholders involved in a CU and assesses their evaluation needs. Such a study would greatly benefit from a theoretical framework based not only on the hierarchical model and its extension (Kirkpatrick, 1996; Phillips & Phillips, 2001) but also on the other evaluation models developed in the HRD literature (such as Alliger et al., 1997; Russ-Eft & Preskill, 2001; Swanson & Holton, 1999).
  • The third possible development would be to focus on a comparison among evaluation practices in CUs, those used in companies that have training departments but do not have a CU and those adopted by companies that outsource all training and development activities.

References

  • Abernathy, D. J. (1999), ‘Thinking outside the evaluation box’, Training & Development, 53, 19–23.
  • Alkin, M. C., Hofstetter, C. H. and Ai, X. (1998), ‘Stakeholder Concepts in Program Evaluation’, in A. Reynolds and H. Walberg (eds), Advances in Educational Productivity, Vol. 7 (Greenwich, CT: JAI Press), pp. 87–113.
  • Allen, M. (2002), The Corporate University Handbook (New York: American Management Association).
  • Alliger, G. M. and Janak, E. A. (1989), ‘Kirkpatrick's levels of training criteria: thirty years later’, Personnel Psychology, 42, 331–42.
  • Alliger, G. M., Tannenbaum, S. I., Bennett, W., Traver, H. and Shotland, A. (1997), ‘A meta-analysis of the relations among training criteria’, Personnel Psychology, 50, 341–58.
  • Altschuld, J. W. and Zheng, H. Y. (1995), ‘Assessing the effectiveness of research organizations: an examination of multiple approaches’, Evaluation Review, 19, 2, 197–216.
  • Andresen, M. and Lichtenberger, B. (2007), ‘The corporate university landscape in Germany’, Journal of Workplace Learning, 19, 2, 109–23.
  • Barley, K. (2002), ‘Corporate Universities Structures that Reflect Organizational Cultures’, in M. Allen (ed.), The Corporate University Handbook (New York: American Management Association), pp. 43–66.
  • Barley, K. (2007), ‘Learning as a Competitive Business Variable’, in M. Allen (ed.), The Next Generation Corporate University (San Francisco, CA: Wiley), pp. 39–61.
  • Bassi, L., Benson, G. and Cheney, S. (1996), ‘The top ten trends’, Training & Development, 50, 29–33.
  • Bates, R. (2004), ‘A critical analysis of evaluation practice: the Kirkpatrick model and the principle of beneficence’, Evaluation and Program Planning, 27, 341–7.
  • Bates, R. A., Holton, E. F. III, Seyler, D. A. and Carvalho, M. A. (2000), ‘The role of interpersonal factors in the application of computer-based training in an industrial setting’, Human Resource Development International, 3, 19–43.
  • Blass, E. (2001), ‘What's in a name? A comparative study of the traditional public university and the corporate university’, Human Resource Development International, 4, 2, 153–72.
  • Bober, C. and Bartlett, K. (2004), ‘The utilization of training program evaluation in corporate universities’, HR Development Quarterly, 5, 363–84.
  • Brown, D. C. (1994), How managers and training professionals attribute causality for results: implications for training evaluation. Unpublished doctoral dissertation. University of Illinois at Urbana-Champaign.
  • Bryk, A. S. (ed.) (1983), Stakeholder-Based Evaluation. New Directions for Program Evaluation, Vol. 17 (San Francisco, CA: Jossey-Bass).
  • Cannon-Bowers, J. A., Salas, E., Tannenbaum, S. I. and Mathieu, J. E. (1995), ‘Toward theoretically based principles of training effectiveness: a model and initial empirical investigation’, Military Psychology, 7, 141–64.
  • Carnevale, A. P. and Schulz, E. R. (1990), ‘Economic accountability for training: demands and responses’, Training & Development, 44, 7, 1–32.
  • Collins, D. (2002), ‘Performance-level evaluation methods used in management development studies from 1986 to 2000’, Human Resource Development Review, 1, 91–110.
  • Cousins, J. B. and Earl, L. M. (1992), ‘The case for participatory evaluation’, Educational Evaluation and Policy Analysis, 14, 4, 397–418.
  • Dealtry, R. (2000), ‘Case research into corporate university developments’, Journal of Workplace Learning, 6, 342–57.
  • Dealtry, R. (2005), ‘Achieving integrated performance management with the corporate university’, Journal of Workplace Learning, 17, 1/2, 65–78.
  • Dixon, N. M. (1996), ‘New routes to evaluation’, Training & Development, 50, 5, 82–7.
  • Eccles, G. (2004), ‘Marketing the corporate university or enterprise academy’, Journal of Workplace Learning, 16, 7, 410–18.
  • ECUANET (2006), An Overview of Corporate Universities (Birmingham: European Corporate Universities and Academies Network).
  • El-Tannir, A. A. (2002), ‘The corporate university model for continuous learning, training and development’, Education & Training, 44, 76–81.
  • Fitz-enz, J. (1988), ‘Proving the value of training’, Personnel, 3, 17–23.
  • Flynn, D. J. (1992), Information Systems Requirements: Determination and Analysis (London: McGraw-Hill).
  • Ford, J. K. and Kraiger, K. (1995), ‘The application of cognitive constructs and principles to the instructional systems design model of training: implications for needs assessment, design, and transfer’, International Review of Industrial and Organizational Psychology, 10, 1–48.
  • Ford, J. K., Quinones, M., Sego, D. and Sorra, J. (1992), ‘Factors affecting the opportunity to use trained skills on the job’, Personnel Psychology, 45, 511–27.
  • Garavan, T. N. (1995), ‘HRD stakeholders: their philosophies, values, expectations and evaluation criteria’, Journal of European Industrial Training, 19, 17–30.
  • Garvin, D. A. (1995), ‘Building a Learning Organization’, in D. A. Kolb, J. Osland and I. M. Rubin (eds), The Organizational Behavior Reader (Englewood Cliffs, NJ: Prentice Hall), pp. 96–109.
  • Geber, B. (1995), ‘Does your training make a difference? Prove it!’, Training, 3, 27–34.
  • Gold, N. (1983), ‘Stakeholders and Program Evaluations: Characterizations and Reflections’, in A. S. Bryk (ed.), Stakeholder-Based Evaluation. New Directions for Program Evaluation, Vol. 17 (San Francisco, CA: Jossey-Bass), pp. 63–72.
  • Greene, J. C. (1988), ‘Stakeholder participation and utilization in program evaluation’, Evaluation Review, 12, 91–116.
  • Guba, E. G. and Lincoln, Y. S. (1989), Fourth Generation Evaluation (Newbury Park, CA: Sage).
  • Hashim, J. (2001), ‘Training evaluation: client's roles’, Journal of European Industrial Training, 25, 7, 374–80.
  • Jarvis, P. (2001), Universities and Corporate Universities (London: Kogan Page).
  • Kearsley, G. (1982), Costs, Benefits, and Productivity in Training Systems (Reading, MA: Addison-Wesley).
  • Kirkpatrick, D. (1975), Evaluating Training Programs (Alexandria, VA: American Society for Training and Development).
  • Kirkpatrick, D. L. (1994), Evaluating Training Programs: The Four Levels (San Francisco, CA: Berrett-Koehler).
  • Kirkpatrick, D. L. (1996), ‘Great ideas revisited’, Training & Development, 50, 1, 54–9.
  • Kontoghiorghes, C. (2001), ‘Factors affecting training effectiveness in the context of the introduction of a new technology – a US case study’, International Journal of Training and Development, 5, 4, 248–60.
  • Lewis, P. (1997), ‘A framework for research into training and development’, International Journal of Training and Development, 1, 2–8.
  • Lewis, T. (1996), ‘A model for thinking about the evaluation of training’, Performance Improvement Quarterly, 9, 1, 3–22.
  • Lien, B. Y-H., Hung, R. Y. and McLean, G. N. (2007), ‘Training evaluation based on cases of Taiwanese benchmarked high-tech companies’, International Journal of Training and Development, 11, 1, 35–48.
  • McLean, G. N. (2005), ‘Examining approaches to HR evaluation: the strengths and weaknesses of popular measurement methods’, Strategic Human Resources, 4, 2, 24–7.
  • McLinden, D. J. (1995), ‘Proof, evidence, and complexity: understanding the impact of training and development in business’, Performance Improvement Quarterly, 8, 3, 3–18.
  • Macpherson, A., Homan, G. and Wilkinson, K. (2005), ‘The implementation and use of e-learning in the corporate university’, Journal of Workplace Learning, 17, 1/2, 33–48.
  • Mark, M. and Shotland, R. L. (1985), ‘Stakeholder-based evaluation and value judgments’, Evaluation Review, 9, 5, 605–26.
  • Mark, M. M., Henry, G. T. and Julnes, G. (2000), Evaluation: An Integrated Framework for Understanding, Guiding, and Improving Policies and Programs (San Francisco, CA: Jossey-Bass).
  • Meister, J. C. (1998), Corporate University (New York: McGraw Hill).
  • Michalski, G. V. and Cousins, J. B. (2000), ‘Differences in stakeholder perceptions about training evaluation: a concept mapping/pattern matching investigation’, Evaluation and Program Planning, 23, 211–30.
  • Noe, R. (2000), ‘Invited reaction: development of generalized learning transfer system inventory’, Human Resource Development Quarterly, 11, 4, 361–6.
  • Paton, R. (2005), ‘Reviewing and Reporting Results’, in R. Paton, G. Peters, J. Storey and S. Taylor (eds), Handbook of Corporate University Development (Aldershot: Gower), pp. 123–33.
  • Patton, M. Q. (1997), Utilization-Focused Evaluation: The New Century Text, 3rd edn (Thousand Oaks, CA: Sage).
  • Phillips, J. (1997), Handbook of Training Evaluation and Measurement Methods (Houston, TX: Gulf).
  • Phillips, P. P. and Phillips, J. J. (2001), ‘Symposium on the evaluation of training: editorial’, International Journal of Training and Development, 5, 4, 240–7.
  • Preskill, H. (1997), ‘Using critical incidents to model effective evaluation practice in the teaching of evaluation’, Evaluation Practice, 18, 1, 65–71.
  • Prince, C. (2003), ‘Corporate education and learning: the accreditation agenda’, Journal of Workplace Learning, 15, 174–85.
  • Prince, C. and Beaver, G. (2001), ‘Facilitating organizational change: the role and development of the corporate university’, Strategic Change, 10, 189–99.
  • Prince, C. and Stewart, J. (2002), ‘Corporate universities: an analytical framework’, Journal of Management Development, 21, 10, 794–811.
  • Robinson, D. G. and Robinson, J. C. (1989), Training for Impact: How to Link Training to Business Needs and Measure Results (San Francisco, CA: Jossey-Bass).
  • Russ-Eft, D. and Preskill, H. (2001), Evaluation in Organizations: A Systematic Approach to Enhancing Learning, Performance and Change (Thousand Oaks, CA: Sage).
  • Salas, E. and Cannon-Bowers, J. A. (2001), ‘The science of training: a decade of progress’, Annual Review of Psychology, 51, 471–97.
  • Stake, R. E. (1983), ‘Stakeholder Influence in the Evaluation of Cities in Schools’, in A. S. Bryk (ed.), Stakeholder-Based Evaluation. New Directions for Program Evaluation (San Francisco, CA: Jossey-Bass), pp. 15–30.
  • Swanson, R. A. and Holton, E. F. (1999), Results: How to Assess Performance, Learning, and Perceptions in Organizations (San Francisco, CA: Berrett-Koehler).
  • Tannenbaum, S. I. and Yukl, G. (1992), ‘Training and development in work organizations’, Annual Review of Psychology, 43, 399–441.
  • Tesoro, F. (1998), ‘Implementing an ROI measurement process at Dell Computer’, Performance Improvement Quarterly, 11, 4, 103–14.
  • Tracy, J. B., Tannenbaum, S. I. and Kavanaugh, M. J. (1995), ‘Applying trained skills on the job: the importance of work environment’, Journal of Applied Psychology, 80, 239–52.
  • Weiss, C. H. (1983), ‘The stakeholder approach to evaluation: origins and promise’, New Directions for Program Evaluation, 17, 3–14.
  • Wognum, I. and Lam, J. F. (2000), ‘Stakeholder involvement in strategic HRD aligning: the impact on HRD effectiveness’, International Journal of Training and Development, 4, 2, 98–110.
  • Yin, R. K. (1984), Case Study Research: Design and Methods (Thousand Oaks, CA: Sage).