
From reactions to return on investment: a study on training evaluation practices.

Introduction

The importance of employee training has been increasing significantly in various parts of the world. Organizations in Europe, the United States, and Asia spend billions each year on employee training (Cascio & Boudreau, 2008). However, what the organization gains from its investment in training is an issue of concern for management. A 2006 study in the US by Accenture revealed that only 3% of CEOs were satisfied with their corporate training function (Hall, 2008). There is pressure on the training function to measure its effectiveness in an increasingly competitive environment. This paper attempts to explore training evaluation practices in India.

Literature Review

Training evaluation is the systematic collection of descriptive and judgmental information necessary to make effective training decisions related to the selection, adoption, value and modification of various training activities (Goldstein & Ford, 2007). It involves both formative and summative evaluation (Tharenou et al., 2007). Formative evaluation involves evaluating training during design and development (Brown & Gerhardt, 2002). Summative evaluation refers to an evaluation conducted to determine the extent to which the training program objectives are achieved. The focus of training evaluation research and practice is predominantly on summative evaluation (Brown & Gerhardt, 2002).

There are three stages in the evolution of training evaluation. The first is the practice-oriented, atheoretical stage, represented by the Kirkpatrick four-level framework and ranging from the late 1950s to the late 1980s. The second is a process-driven operational stage, represented by the ROI wave spanning from the late 1980s to the early 2000s. The present stage is the research-oriented, comprehensive one. Multiple frameworks suggest different models and levels of training evaluation in each stage (Wang & Spitzer, 2005).

Evaluation Frameworks: The CIRO (Context, Input, Reaction and Outcome) approach developed by Warr, Bird and Rackham (1970) appears to be the first framework of training evaluation. Context evaluation refers to obtaining and using data about the present operational context to decide training needs and goals. Input evaluation refers to the process of assessing the various resources available and their deployment for training. Reaction evaluation refers to assessing the participants' reaction to the program. Outcome evaluation is concerned with assessing the results obtained from the program. Thus, this model incorporates both formative and summative evaluation. However, it does not indicate how measurement takes place (Tzeng et al., 2007). Stufflebeam et al. (1971) proposed the CIPP (Context, Input, Process and Product) model of evaluation. The four types of evaluation in this model are derived from four basic types of decisions made in education: planning decisions, structuring decisions, implementing decisions and recycling decisions. It is an effective, efficient, comprehensive and balanced evaluation model (Galvin, 1983) and shares many features of the CIRO model (Roark et al., 2006). Both models cover formative as well as summative evaluation of training. However, the CIPP model assumes rationality on the part of decision makers and ignores the diversity of interests and multiple interpretations among these agents (Bennett, 1997). Hamblin (1974) developed another model of training evaluation, which consists of five levels: reactions, learning, job behavior, functioning and ultimate value.

Similar to this model, Kirkpatrick (1976) proposed a four-level model of training evaluation, which is the most popular among academicians and practitioners. It classifies training outcomes into four levels: reactions, learning, behavior and results. Reaction evaluation is defined as assessing the satisfaction of the participants with the program. Learning evaluation is concerned with the extent to which the participants have learned the knowledge, skills and abilities taught in the program. Behavior evaluation refers to the extent to which the knowledge, skills and abilities learned are transferred to job performance. Results evaluation is concerned with assessing the organizational outcomes produced by the participants. According to this framework, higher level outcomes should not be measured unless positive changes occur in lower level outcomes. There are criticisms of the hierarchical nature of this model (Alliger & Janak, 1989; Alliger et al., 2002; Bates, 2004). There is limited evidence to support the causal relations between the levels of evaluation in this model. It leads to an excessively simplified method of assessing training effectiveness. It neglects the evaluation needs of the other stakeholders involved in the training process (Guerci et al., 2010). It also devalues the evaluation of societal impact and the usefulness and availability of organizational resources (Kaufman & Keller, 1994). To address these issues, Kaufman & Keller (1994) proposed a five-level framework of training evaluation, adding 'enabling' and 'societal outcomes' to Kirkpatrick's model. Having identified flaws in the Kirkpatrick model, Holton (1996) proposed an evaluation model that hypothesized three outcome levels: learning, individual performance, and organizational results. According to Holton (1996), these levels are influenced by primary factors (such as ability, motivation and environmental influences) and secondary factors (for example, those that affect motivation to learn). Kirwan and Birchall (2006) pointed out that this model solely "describes a sequence of influence on outcomes occurring in a single learning experience and does not demonstrate any feedback loops"; it does not indicate any interaction between factors of the same type.

Phillips (1995, 1997) added a fifth level, return on investment (ROI), to the four levels of evaluation developed by Kirkpatrick. But isolating the effects of the training is a major challenge in this model. To address the issues and concerns with existing training evaluation models, Brinkerhoff (2003) proposed the Success Case Method (SCM) for evaluating training programs. It is a process for evaluating the business effect of training and the extent to which training is aligned with and fulfills strategy. It assesses the effect of training by looking intentionally for the very best that training is producing. When these instances are found, they are carefully and objectively analyzed, seeking hard and corroborated evidence to irrefutably document the application and result of the training. Further, there must be adequate evidence that it was the application of the training that led to a valued outcome; if this cannot be verified, the instance does not qualify as a success case (Brinkerhoff, 2005). The main disadvantage of SCM is that it requires some level of judgment with respect to what trainers identify as critical success factors on the job (Casey, 2006). Dessinger and Moseley (2006) developed the Dessinger-Moseley Full-Scope Evaluation Model (SEM), which aims at integrating formative, summative, confirmative, and meta-evaluation. It helps to formulate judgments about the worth of any performance improvement intervention. However, as the authors themselves point out, evaluation using this model is time consuming and requires long-term support from the organization and all the stakeholders.

Training Evaluation Practices: A few studies are available on training evaluation practices in different countries. ASTD's 1999 report (Bassi & Van Buren, 1999) stated that the 'leading edge' companies measured 81% of their programs at the reaction level; 40% of programs were considered for learning evaluation; 11% were evaluated at the behavioral level; and 6% of the programs were taken up for results level evaluation. The study conducted by Blanchard, Thacker and Way (2000) in Canada revealed that organizations conducted reaction and learning evaluations of about two-thirds of both management and non-management training programs. However, more than half the organizations did not measure their training at the job application and business results levels. Yadapadithaya (2001) found a similar pattern with respect to training evaluation in India. Al-Athari and Zairi (2002) identified that in Kuwait the most common level of evaluation in both the government and private sectors was the reaction type.

Pulichino's (2007) study found that 84.5% of the sampled professionals reported that they conduct reaction level evaluation and 56.1% of them conduct learning level evaluation. But only 19.9% of the surveyed professionals reported that their organizations always or frequently assessed job behavior, and 13.7% always or frequently assessed business results. Bersin's (2008) study, conducted in North America, found that most organizations focus only on measuring standard course operations; a very small number routinely measure return on investment, business impact or job impact. The ASTD report (2009) identified that as many as 91.6% of the professionals mentioned that they conduct reaction assessment, and 80.8% of them stated that they gather data about learning. However, only 54.6% and 36.9% of them mentioned that they evaluate behavior and results respectively.

Srimannarayana (2010) found that in India all the organizations (30) studied collect feedback from the participants of the programs to conduct reaction level evaluation. With regard to learning level evaluation, 46.67% of the organizations collect information. As far as changes in behavior are concerned, 30% of the organizations make an attempt. With respect to business results level evaluation, only one of the 30 organizations collects information for this purpose, using client satisfaction scores; the same organization attempts to calculate return on investment on some of its training programs. The Saks and Burke (2012) study conducted in Canada found that reactions were evaluated much more frequently than learning, behavior and results; learning was evaluated more frequently than behavior, and behavior more frequently than results. The study conducted by Kennedy et al. (2014) indicated that only 26% of the respondents reported that they often or always conduct Level 3 evaluations and only 13% reported that they often or always conduct Level 4 evaluations.

The Setting

The setting of the present study is India, which is considered an emerging talent powerhouse, predicted to be among the world's five largest economies and viewed by investors, businesses, and tertiary education providers as a land of opportunities (Budhwar & Varma, 2011; Pio, 2007; Rao & Varghese, 2009). Over a period of time, training has evolved and matured to a substantial degree in India (Rao, Rao & Yadav, 2007). Increased learning budgets, application of technology in training, strategic linkage of training, rapid changes in training delivery, and systematic needs assessment are the top five training trends in India (Srimannarayana, 2006). With respect to measuring training effectiveness, traditional measures such as participants' feedback on training programs, the number of employees trained, training costs, and the number of training days are more popular than impact measures such as learning during training, transfer of training, performance improvements, and cost-benefit analysis (Srimannarayana, 2011).

The Objective

The present study aims at exploring existing training evaluation practices in India and at finding out the issues and concerns of learning and development professionals with regard to training evaluation at various levels. This study used the four-level framework of Kirkpatrick as extended with the ROI process by Phillips (1995). According to this model, reaction and planned action evaluation measures participants' reactions to the program and outlines specific plans for implementation. Learning level measurement assesses knowledge, skill or attitude changes among the participants. Job application measures changes in behavior on the job and specific application of the training material. Business results assessment measures the business impact of the training program. ROI measures the monetary value of the results against the costs of the program, usually expressed as a percentage (Phillips, 1995).
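As conventionally stated in the ROI methodology literature, and consistent with the definition above, the level-5 metric is computed from net program benefits and fully loaded program costs. The worked figures below are purely illustrative and are not drawn from the present study:

\mathrm{ROI}(\%) = \frac{\text{program benefits} - \text{program costs}}{\text{program costs}} \times 100

For example, if a program's monetary benefits are estimated at 750,000 against fully loaded costs of 500,000 (in any currency), then ROI = (750,000 - 500,000) / 500,000 x 100 = 50%, i.e. the program returns the investment plus half as much again in net benefits.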

Data Collection

A questionnaire was created covering different aspects of the five levels of training evaluation. It included different types of questions, such as two-way questions, multiple-choice questions and open-ended questions, seeking factual information on various aspects of training evaluation at the five levels. It was divided into six parts: the first five covered questions relating to the five levels of evaluation, and the last consisted of questions relating to organizational details. The questionnaire was administered to learning and development professionals of different organizations during September-November 2015, using the convenience sampling method, with a request to respond to the questions based on the practices prevailing in their respective organizations. In all, 104 usable filled-in questionnaires constituted the sample for this study.

The Sample

Business-wise, 40.38% of the sampled organizations belonged to the service sector, covering advertising agencies, banking and financial services, consultancy services, defense services, healthcare, hospitality, logistics and transportation, news broadcasting, real estate, retail and telecommunication companies. Another 35.58% belonged to the manufacturing sector, including automotive, cement, construction, electrical, electronics, fast moving consumer goods, oil and gas, paper, pharmaceutical, shipbuilding and tobacco processing units. The remaining 24.04% of the companies belonged to IT products and services and IT-enabled services. Ownership-wise, an overwhelming majority (86.54%) of the sampled organizations were privately owned companies. Only 5.77% of them were public sector undertakings; 3.85% were joint ventures under private ownership and another 3.85% were joint ventures under private and public partnership. As far as geographical orientation is concerned, 42.31% of them were Indian multinational companies, 30.77% were foreign multinational companies, and the remaining 26.92% were local Indian organizations. The average number of employees in these organizations was 3302, ranging from a minimum of 90 to a maximum of 58,350. Every sampled organization had a training policy and conducted at least four training programs internally during 2014-15. An overwhelming majority (89.42%) of the sampled organizations established linkages between training and their business strategies.

Reactions

It was found that all the studied organizations conducted reaction level evaluation, collecting reactions from the participants of the programs using a feedback questionnaire. As presented in Table 1, over one-fifth of the sampled organizations had moved from the traditional (offline) mode of reaction data collection to online collection. Another one-fifth used both online and offline modes, depending on the location of the venue of the program. However, a majority (57.69%) continued offline feedback data collection. An overwhelming majority (92.31%) of the organizations collected reaction data at the end of the program. However, a few organizations had the practice of collecting data after a few days, particularly those which collected reaction data online. A few companies established a system in which the participants had to give online reaction data from their office computers after returning to their respective offices from training. This created problems in ensuring 100% data collection, because participants got busy with their official work and did not give priority to filling in the questionnaire. With respect to the parties which considered the reaction evaluation for further action, an interesting pattern emerged from this study. In a majority (63.46%) of the organizations, learning professionals were interested in reaction assessment. Interestingly, in addition to learning professionals, participants' supervisors (28.85%), senior managers (5.77%), and even CEOs (1.92%) showed interest in this assessment.

It was found that the organizations used the reaction evaluation for multiple purposes. An overwhelming majority (93.27%) of the organizations used this evaluation to monitor the participants' satisfaction. This was followed by improving the program (76.92%) and evaluating instructors (76.92%). It is significant to note that in a few organizations, payment of an honorarium for internal deliverers was contingent upon the feedback score. Over two-fifths of the companies used this evaluation to link with follow-up data, particularly participants' learning and job application of newly learned inputs. It is interesting to note that nearly one-fourth of the organizations used this assessment to identify the training needs of program deliverers. Only 15.38% of the organizations used this assessment for marketing the programs.

Action Plans

As a part of reaction evaluation, participants are expected to prepare action plans incorporating the ideas they learned from the program and their plans for implementation. When asked, over three-fourths of the organizations stated that their participants prepared action plans at the end of the training program (Table 2). Programs aimed at skill building, and technical and functional programs such as sales, standard operating procedures, and project-related skills training, which required immediate application, were considered for creating action plans. No action plans were created for mandatory programs and some behavioral programs.

About 84% of the organizations used these action plans to guide the participants to implement their ideas; 77% used them to link with actual application of inputs on the job, and 57% used them to provide the support, material, and job aids required to implement new learnings. It is important to note that nearly one-fifth of the organizations used the action plans to gain commitment for implementation of the new ideas through continuous follow-up and feedback.

Learning

An overwhelming majority (91.35%) of the organizations conducted learning level evaluation of their training programs. As shown in Table 3, the organizations used multiple methods based on the type of learning offered to the participants during training. Participants performing an actual piece of work using the skills learned seemed to be the most popular method (66.32%), followed by written tests (63.16%). It is significant to note that every alternate organization used simulations such as task simulations, role plays, in-basket exercises and other exercises to assess the new skills learned. Over one-third of the organizations gave relevant application assignments/projects so that the participants could apply the skills while doing the assignment; later, these assignments could be evaluated to assess application of the new learning. Another important variation in methods of assessing learning was 'learning papers': participants were expected to write papers based on the learning they had during training and submit them to the learning and development department. About one-fourth of the organizations used this method for assessing learning from external training programs. Face-to-face oral tests (16.84%) and questionnaire surveys (6.32%) seemed to be less popular methods of learning assessment.

With respect to timeframe, over one-fourth of the organizations carried out this evaluation during or at the end of the training program. However, about 69% of the organizations conducted it a few days after the program. The remaining organizations followed both methods: they conducted a written test at the end of the program and gave a take-home assignment, which would be assessed at a subsequent stage.

It is important to note that almost every alternate organization conducted this evaluation both online and offline. However, two-fifths of the organizations conducted it exclusively offline, be it a written test, an actual piece of work or a simulation. Less than one-tenth of the organizations used the online mode exclusively.

Generally, it is the trainer who evaluates the learning of participants. Nearly half of the organizations mentioned that the learning and development team, including program deliverers, was associated with this assessment. In over two-fifths of the organizations, a team of learning and line professionals was involved in this process. Over 7% of the organizations involved external experts along with learning professionals in assessing learning. Though the percentage was not significant (3%), customers were also associated with learning assessment, as these companies took feedback from them too.

As far as the uses of learning evaluation are concerned, three-fourths of the organizations used it to check the participants' understanding and application of knowledge and skills. About 68% of the organizations used this assessment to give feedback to the participants. Over 67% used this data to improve the program in terms of its content, sequence and delivery. This information was also used to evaluate instructors (63.16%). One-fourth of the organizations used it to market the programs internally. Finally, one-tenth of the companies reported the learning scores to higher authorities for their information.

Job Application

A great majority (87.50%) of the organizations conducted job application evaluation of training programs. As depicted in Table 4, the organizations used a wide variety of methods to obtain data for this level of evaluation, based on the type of program and the level of participants. A structured follow-up questionnaire to participants (57.14%) or to supervisors (49.45%), monitoring post-training performance data (43.96%), observing participants on the job (43.96%) and comparing pre- and post-training performance data (41.76%) seemed to be the preferred methods of collecting job application data for the third level of evaluation. In addition, follow-up interviews with the participants (32.97%), benchmarking with action plans (26.37%), follow-up focus group discussions with the participants (23.08%) and project assignments (9.89%) were the other methods used to assess job application of the learned inputs. The timeframe for evaluating job application varied from one month to one year. On average, this level of evaluation was carried out 80 days after program completion. However, over half of the organizations mentioned that they conducted this evaluation three months after the program. With respect to the persons associated with this level of evaluation, it was found that in over three-fourths of the organizations, learning professionals and the line managers concerned were involved in this process. An important trend is that in a few organizations top management was also associated with this activity.

Business Results

As presented in Table 5, more than half (57.69%) of the organizations surveyed measured the business results of training programs. With respect to methods of data collection for assessing business results, performance records (86.67%) were the most popular. Comparison of pre- and post-training data (61.67%) and questionnaires to participants' supervisors (60%) were the other popular methods. 46.67% of the organizations also conducted questionnaire surveys of the participants to collect business results data. Benchmarking with action plans (40%), interviews with participants (35%), verbal feedback from participants' supervisors (6.67%) and client feedback (6.67%) were also used in some of the organizations. The timeframe for assessing business results varied from one month to one year. However, every alternate organization stated that it conducted this assessment three months after the program, and one-fourth of the organizations stated that the timeframe was six months. The professionals involved in the assessment of business results ranged from the learning and development team to the CEO. An overwhelming majority (86.67%) of the organizations involved the learning and development team and the heads concerned in this process. In the case of 8.33% of the organizations, it was the responsibility of the learning and development team alone. In two cases, business heads, the CHRO, and the CFO were associated. In another case, client representatives were also involved in addition to learning professionals and the business heads concerned.

Return on Investment

It can be seen from Table 6 that a significant number of the organizations (32.69%) measured the return on investment (ROI) of training programs. The timeframe of this assessment varied from one month to one year. However, over half of the organizations mentioned that the timeframe of ROI analysis was three months, and another one-fifth stated that they conducted this analysis six months after the program. With respect to the professionals involved, in a majority (67.65%) of the organizations the learning and development team and the heads concerned carried out the ROI assessment. Nearly 15% of the companies mentioned that finance professionals were also involved in this analysis, along with learning and development professionals and business heads. While three organizations left this analysis to the learning and development team alone, another three companies involved the CHRO, CFO and CEO in this process. From the above analysis, it can be inferred that the lower the level of training evaluation, the higher the number of organizations which attempted to evaluate their training programs.

Levels of Evaluation

The respondent learning professionals were requested to give an approximate estimate of the percentage of programs evaluated at the various levels. Based on this information, the overall percentage of programs measured at the reaction level was calculated as 95.5%. At the learning level, on average, 48% of the programs were considered. With regard to job application level evaluation, on average, only 38% of the programs were taken up. At the business results level, the average came down further: only one-fourth of the programs were considered for this level of evaluation. At the ROI level, the percentage came down to an overall average of 18.73%. Even though a significant number of organizations mentioned that they conducted higher levels of evaluation, there was a declining trend in the percentage of programs taken up for evaluation from level 1 to level 5, as shown in Fig. 1.
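The funnel in Fig. 1 can be summarized numerically from the per-level averages reported above. The following is a minimal illustrative sketch in Python; the variable names and the retention calculation, which expresses each level's coverage as a share of the previous level's, are ours and not part of the study:

# Average percentage of programs evaluated at each level (Fig. 1)
levels = [
    ("Reaction", 95.5),
    ("Learning", 48.0),
    ("Job application", 38.0),
    ("Business results", 25.0),
    ("ROI", 18.73),
]

prev = None
for name, pct in levels:
    if prev is None:
        print(f"{name:16s} {pct:6.2f}%")
    else:
        # share of the previous level's coverage retained at this level
        retained = pct / prev * 100
        print(f"{name:16s} {pct:6.2f}%  ({retained:4.1f}% of previous)")
    prev = pct

Run as-is, this prints the funnel: learning evaluation covers about half the programs that reaction evaluation covers, and ROI evaluation covers roughly three-fourths of the programs assessed for business results.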

Issues & Concerns

An inquiry was made to find out the issues and concerns of learning professionals with regard to training evaluation at different levels. Open-ended questions were incorporated in the questionnaire to elicit this information. Content analysis of the answers to these questions revealed two types of issues: the issues learning professionals face while evaluating at each level, and the reasons for not evaluating training at higher levels. The outcome of the analysis is presented in the following paragraphs.

Reactions: The major concern of learning professionals with respect to level one evaluation is inaccurate and incomplete data given by the participants. Another is that participants do not take feedback sessions seriously. In their hurry to leave for their destinations, they just write something without considering its accuracy. They sometimes write extravagantly flamboyant feedback and try to give 'good' information rather than 'true' information about the program. Some participants do not understand the questions and fail to give accurate feedback because of inadequate knowledge of the language. Further, in some cases, the training deliverer who collects the feedback happens to be a colleague or superior of the participants; hence, participants are reluctant to give honest and unbiased feedback. No accountability is established for participants to fill in the reaction questionnaire, and learning professionals have to remind them repeatedly. This happens when participants have to give feedback online at their workstations once they go back to work: they become busy with their work after reaching their workplaces and do not give priority to filling in the reaction sheets. Because of this, feedback analysis is sometimes carried out very late, by which time it is not of much use to the learning professionals.

Learning: The learning professionals of the organizations which do not conduct learning evaluation complained that they do not get enough support, time, facilities and other resources to measure learning. Some of the line managers who deliver sessions are reluctant to help in this process. Changes in business priorities push learning assessment to the back seat. Some learning professionals are under the impression that learning from behavioral programs is difficult to evaluate, as the results are not measurable; therefore, getting commitment from senior management for the evaluation of such programs is difficult. Some of the learning professionals who do conduct learning assessment have issues regarding pre- and post-testing. They find it difficult to conduct pretests for all the programs for which they evaluate learning. It is not practically possible for them to give a written test, a simulation or an actual piece of work to the participants before the start of the training program because of constraints of time, resources and the availability of participants. Therefore, learning evaluation is confined to a posttest, which might not give the actual picture of the learning that took place during training. Some learning professionals are candid enough to admit that monitoring written or practical tests is becoming difficult, because at least a few of the participants attempt to complete the test through unethical practices. It is a challenge for learning professionals to bring consistency to evaluation when they assess programs which aim at learning multiple skills, because of lack of time, support and resources. Identifying the reasons for low scores is difficult for a few learning professionals. Another major challenge is the availability of the participants who take the learning assessment; this is a concern particularly where the assessment is carried out a few days after training. A few learning professionals conduct a large number of programs; the volume of information generated is quite high, and sometimes it takes weeks to assimilate the data completely and produce a valid learning evaluation report.

Job Application: The learning professionals of organizations which do not conduct job application measurement revealed that they do not have enough resources and people to carry out this assessment. Lack of support from the participants and their managers is also an issue. Some of them admitted that it is not practically possible to implement everything taught in a program. The organizations which do conduct job application evaluation have a different set of issues. Sometimes superiors and subordinates are biased in favor of the trainee, which results in exaggerated information about the application of inputs. Even after repeated reminders, some of the participants and their managers do not give the information required to assess job application. In some cases, participants are not interested in implementing new learning; in others, the conditions under which a participant is operating are not favorable; in still others, participants' managers do not support the implementation of new ideas by their employees. Another important issue is that neither the participant nor the participant's manager is under any pressure or obligation to implement new learning. Participants are not rewarded for good work resulting from the application of new knowledge and skills, and managers are not recognized for encouraging their employees to implement new learning. Even if some application takes place, they are not interested in giving information. With continued effort, learning professionals could get data, but validating it becomes an issue where the participant and the reporting manager or process head give different data on the job application of the same participant. Other problems associated with job application level assessment include the use of a different technology than the one the participant was taught during training, the transfer of the participant to a different job in which the skills taught are not relevant, and the employee quitting after receiving training.

Business Results: Some of the learning professionals representing companies which do not go in for this level of evaluation have their own concerns. According to them, the process is very long, time-consuming and expensive, and it requires management support and resources which are not available. Gaining commitment from business managers for this process is another major challenge, because their companies do not have a policy on business results assessment. Some do not conduct job application assessment, and hence business results assessment is not relevant to them. A few learning professionals candidly admitted that they do not have the knowledge, skills and orientation required to undertake this higher level of training evaluation. The learning professionals who do conduct business results measurement have a different set of issues. Choosing an appropriate method of data collection is an important challenge. They face tremendous problems in business results measurement when the data collected at previous stages is improper. They are not able to establish a direct correlation between the training and the performance indicators. External factors and last-minute changes in the overall organizational strategy create challenges in the evaluation of business results. For example, in one case employees were trained on selling skills, but when they went to pitch business they were not able to generate the desired results even after implementing the new skills learned, owing to other factors such as revisions in rates, government policies and the like. Thus, isolating the many causal factors operating at a time remains a challenge. The general expectation from a behavioral program is improved morale and better team collaboration and communication; hence such programs do not provide tangible results.

Level 5 (ROI): Some of the learning professionals of organizations which do not calculate the ROI of training programs stated that they have no policy in place for deriving ROI. As they do not carry out job application and business results assessment, calculating ROI is not relevant for their organizations. A third reason is the lack of knowledge, skills, abilities and orientation needed to carry out ROI calculations. The learning professionals working in companies which do carry out ROI measurement stated that the persons involved in calculating ROI are not properly trained: they do not know what data to collect, and they find it difficult to deal with the number of variables affecting changes and the complications in the evaluation process. Being burdened with other activities, they do not have the time for ROI measurement, which is time-consuming. They fear that a low ROI will result in criticism of their work, and so they are reluctant to get into the intricacies. Lack of support from top management is another issue. Participants, their managers and administrative staff do not cooperate in the crisis situations which arise in the measurement process. The training department finds it difficult to convert the outcomes into monetary benefits. Many other factors, such as market conditions, system changes, and incentives offered to employees, influence post-training performance.

Conclusion

The above analysis leads to the conclusion that significant progress has been made in training evaluation in Indian organizations when compared to earlier studies in India (Yadapadithaya, 2001; Srimannarayana, 2006) and abroad (Bassi & Van Buren, 1999; Blanchard, Thacker & Way, 2000; Al-Athari & Zairi, 2002; Pulichino, 2007; Bersin, 2008; Saks & Burke, 2012; Kennedy et al., 2014). This might be attributed to factors such as time, the increased concern of top management for the accountability of the training function, and increased professionalism among learning professionals. Over a period of time, the concern for measuring the effectiveness of training has increased across the globe. Though the situation is better now than earlier, consistent with the earlier studies, this study concludes that the lower the level of training evaluation, the higher the number of organizations which attempt to evaluate their training programs. Further, there is a declining trend in the percentage of programs taken up for evaluation from the reactions level to the ROI level. A glance at the issues and concerns of learning professionals with respect to higher levels of evaluation leads to the conclusion that, consistent with earlier studies (Moller & Mallin, 1996; Twitchell et al., 2000; Pulichino, 2007; ASTD, 2009; Kennedy et al., 2014), lack of support, resources, time and expertise, the expensive nature of evaluation, and difficulty in accessing the right data are the main reasons for not evaluating training at higher levels.

With respect to methods and modes of data collection for training evaluation, it may be concluded that performance of an actual piece of work is the predominant method used for learning assessment. A trend emerging from the study is that organizations have started using technology to collect reaction and learning data online. Questionnaires appear to be the major method of data collection at the job application level, while performance records are the major source for collecting business results. With respect to timeframe, reaction data is generally collected at the end of the program and learning data a few days after the program; job application, business results and ROI assessments are carried out about three months after the program. An interesting trend emerging from this study is that business managers, top management and, at times, clients and customers are also associated with training evaluation.

It may be stated that training evaluation is no longer the exclusive domain of the learning and development department. As long as training measurement is considered the exclusive domain of the learning and development team, getting support from business managers will remain a challenge. Increased budgets indicate top management's commitment to and concern for training (Srimannarayana, 2006); they expect tangible benefits from it. Learning professionals have to take the initiative to create a policy of training measurement in which the partnership of business managers is emphasized. If learning professionals gain mastery over evaluation and measurement competencies and undertake rigorous training evaluation, they can showcase training as a function which produces tangible benefits and contributes significantly to a firm's financial performance.

References

Al-Athari, A. & Zairi, M. (2002), "Training Evaluation: An Empirical Study in Kuwait", Journal of European Industrial Training, 26: 241-51.

Alliger, G. & Janak, E. (1989), "Kirkpatrick's Levels of Training Criteria: 30 Years Later", Personnel Psychology, 42: 331-42.

Alliger, G., Tannenbaum, S., Bennett, W., Traver, H. & Shortland, A. (2002), "A Meta-analysis on the Relations among Training Criteria", Personnel Psychology, 50: 431-58.

American Society for Training & Development (2009), The Value of Evaluation: Making Training Evaluations More Effective, Alexandria, VA: ASTD Research Department.

American Society for Training & Development (2012), 2012 State of the Industry, Alexandria, VA: ASTD Research Department.

Bassi, L. J. & Van Buren, M. E. (1999), "The 1999 ASTD State of the Industry Report", Training & Development Magazine, Supplement, 53.

Bates, R. (2004), "A Critical Analysis of Evaluation Practice: the Kirkpatrick Model and the Principle of Beneficence", Evaluation and Program Planning, 27: 341-7.

Bennett, N. (1997), "The Voices of Evaluation", The Journal of Continuing Education in the Health Professions, 17:198-206.

Bersin, J. (2008), The Training Measurement Book: Best Practices, Proven Methodologies, and Practical Approaches, San Francisco: John Wiley & Sons.

Blanchard, P. N., Thacker, J. W. & Way, S. A. (2000), "Training Evaluation: Perspectives and Evidence from Canada", International Journal of Training and Development, 4: 295-304.

Brinkerhoff, R. O. (1989), Evaluating Training Programs in Business and Industry, San Francisco, CA: Jossey-Bass.

Brinkerhoff, R. O. (2003), The Success Case Method, San Francisco: Berrett-Koehler.

Brinkerhoff, R. O. (2005), "The Success Case Method: A Strategic Evaluation Approach to Increasing the Value and Effect of Training", Advances in Developing Human Resources, 7: 86-101.

Budhwar, P. S. & Varma, A. (2011), "Emerging HR Management Trends in India and the Way Forward", Organizational Dynamics, 40: 317-25.

Brown, K. G. & Gerhardt, M. W. (2002), "Formative Evaluation: An Integrative Practice Model and Case Study", Personnel Psychology, 55: 951-83.

Cascio, W.F. & Boudreau, J.W. (2008), Investing in People: Financial Impact of Human Resource Initiatives, New Jersey: Pearson Education.

Casey, M. (2006), Problem-based Inquiry: An Experiential Approach to Training Evaluation, unpublished doctoral dissertation, University of Akron, Akron, OH.

Dessinger, J. C. & Moseley, J. L. (2006), "The Full Scoop on Full-scope Evaluation", in Pershing, J. A. (ed), Handbook of Human Performance Technology: Principles, Practices, Potential, San Francisco, CA: Pfeiffer.

Galvin, J. C. (1983), "What Can Trainers Learn from Educators about Evaluating Management Training?" Training and Development Journal, 37: 52-57.

Geis, G. L. (1987), "Formative Evaluation: Developmental Testing and Expert Review", Performance and Instruction, 26: 1-8.

Goldstein, I. L. & Ford, K. J. (2007), Training in Organizations, New Delhi: Cengage.

Guerci, M., Bartezzaghi, E. & Solari, L. (2010), "Training Evaluation in Italian Corporate Universities: A Stakeholder-based Analysis", International Journal of Training and Development, 14: 291-308.

Hall, B. W. (2008), The New Human Capital Strategy: Improving the Value of Your Most Important Investment--Year after Year, New York: AMACOM.

Hamblin, A. C. (1974), The Evaluation and Control of Training, London: McGraw-Hill.

Holton, E. (1996), "The Flawed Four-level Evaluation Model", Human Resource Development Quarterly, 7: 5-21.

Kaufman, R. & Keller, J. (1994), "Levels of Evaluation: Beyond Kirkpatrick", Human Resource Development Quarterly, 5: 371-80.

Kennedy, P. E., Chyung, S. Y., Winiecki, D. J. & Brinkerhoff, R. O. (2014), "Training Professionals' Usage and Understanding of Kirkpatrick's Level 3 and Level 4 Evaluations", International Journal of Training and Development, 18: 1-21.

Kirkpatrick, D.L. (1994), Evaluating Training Programs: The Four Levels, San Francisco, CA: Berrett-Koehler.

Kirwan, C. & Birchall, D. (2006), "Transfer of Learning from Management Development Programs: Testing the Holton Model", International Journal of Training and Development, 10: 252-68.

Moller, L. & Mallin, P. (1996), "Evaluation Practices of Instructional Designers and Organizational Supports and Barriers", Performance Improvement Quarterly, 9: 82-92.

Pio, E. (2007), "HRM and Indian Epistemologies: A Review and Avenue for Future Research", Human Resource Management Review, 17: 319-35.

Passmore, J. & Velez, M. (2012), "SOAP-M: A Training Evaluation Model for HR", Industrial and Commercial Training, 44: 315-25.

Phillips, J. (1997), Handbook of Training Evaluation and Measurement Methods, Houston, TX: Gulf.

Pulichino, J. P. (2007), Usage and Value of Kirkpatrick's Four Levels of Training Evaluation, Unpublished Doctoral Dissertation, Malibu, CA: Pepperdine University.

Rao, T. V. & Varghese, S. (2009), "Trends and Challenges in Developing Human Capital in India", Human Resource Development International, 12: 15-34.

Rao, T. V., Rao, R. & Yadav, T. (2007), "A Study of HRD Concepts, Structures of HRD departments, and HRD Practices in India", Vikalpa, 26: 49-63.

Roark, S., Kim, M. & Mupinga, M. (2006), "An Exploratory Study of the Extent to Which Medium-sized Organizations Evaluate Training Programs", Journal of Business and Training Education, 15: 15-20.

Saks, A. M. & Burke, L. A. (2012), "An Investigation into the Relationship between Training Evaluation and the Transfer of Training", International Journal of Training and Development, 16: 118-27.

Srimannarayana, M. (2006), "Training Trends in India", Indian Journal of Training and Development, XXXVI: 51-57.

Srimannarayana, M. (2010), "Training and Development Practices in India", Indian Journal of Training and Development, XL: 34-42.

Srimannarayana, M. (2011), "Measuring Training and Development", Indian Journal of Industrial Relations, 47: 117-25.

Stufflebeam, D. L., Foley, W. J., Gephart, W. J., Hammond, L. R., Merriman, H. O. & Provus, M. M. (1971), Educational Evaluation and Decision-making in Education, Illinois: Peacock.

Tharenou, P., Saks, A. M. & Moore, C. (2007), "A Review and Critique of Research on Training and Organizational-level Outcomes", Human Resource Management Review, 17: 251

Twitchell, S., Holton, E. & Trott, J. W. (2000), "Technical Training Evaluation Practices in the United States", Performance Improvement Quarterly, 13: 84-109

Tzeng, G., Chiang, C. & Li, C. (2007), "Evaluating Intertwined Effects in e-learning Programs: a Novel Hybrid MCDM Model Based on Factor Analysis and DEMATEL", Expert Systems with Applications, 32: 1028-44.

Warr, P., Bird, M. & Rackham, N. (1970), Evaluation of Management Training: A Practical Framework, with Cases for Evaluating Training Needs and Results, London: Gower Press.

Wang, G. G. & Spitzer, D. R. (2005), "HRD Measurement & Evaluation: Looking Back and Moving Forward", Advances in Developing Human Resources, 7: 5-15.

Weston, C., McAlpine, L. & Bordonaro, T. (1995), "A Model for Understanding Formative Evaluation in Instructional Design", Educational Technology Research and Development, 43: 29-48.

Yadapadithaya, P. S. (2001), "Evaluating Corporate Training and Development: An Indian Experience", International Journal of Training and Development, 5: 261-74.

M. Srimannarayana is Professor, XLRI, Jamshedpur. E-mail: sriman@xlri.ac.in

Fig. 1 Percentage of Programs against Levels of Evaluation
Table 1 Reaction Evaluation

Mode of reaction data collection            No. of         %
                                         organizations

Offline                                       60         57.69
Online                                        23         22.12
Both                                          21         20.19

Timeframe of data collection

At the end of the program                     96         92.31
After a few days of completion of              5         4.81
  the program
Both                                           3         2.88

Parties which consider the reaction data

L&D team                                      66         63.46
L&D & participants' supervisors               30         28.85
L&D, participants' supervisors &               6          5.77
  senior managers
L&D, participants' supervisors,                2          1.92
  senior managers & CEO

Uses of reaction data

To monitor participants' satisfaction         97         93.27
To improve the program                        80         76.92
To evaluate instructors                       80         76.92
To link with follow-up data                   46         44.23
To identify training needs of trainers        25         24.04
To market training programs                   16         15.38

Table 2 Action Plans

Action plans                                   No. of         %
                                            organizations

Organizations which collect action plans         79         75.96

Uses of action plans

To guide application                             66         83.54
To link with actual application                  61         77.22
To support application of skills learned         45         56.96
To gain commitment to action plans               15         18.99

Table 3 Learning Evaluation

Learning Assessment                          No. of         %
                                          organizations

Organizations which conduct learning           95         91.35
  assessment

Methods of assessment

Performance of actual piece of work            63         66.32
Written tests                                  60         63.16
Simulation                                     48         50.53
Assignment/project                             29         30.53
Face-to-face oral test                         16         16.84
Learning paper submission                      24         25.26
Questionnaire survey                            6         6.32

Time Frame

During/at the end of the program               26         27.37
After a few days                               66         69.47
Both                                            3         3.16

Mode of testing

Online                                          8         8.42
Offline                                        40         42.11
Both                                           47         49.47

Who are involved in this process?

L&D team                                       46         48.42
L&D & line managers                            39         41.05
L&D and external experts                        7         7.37
L&D, line managers & customers                  3         3.16

Uses of Learning Assessment

To check the participants' understanding       71         74.74
  & application
To give feedback to participants               65         68.42
To improve programs                            64         67.37
To evaluate instructors                        60         63.16
To market the programs                         24         25.26
To provide an opportunity to practice          10         10.53

Table 4 Job Application

Job Application                                 No. of         %
                                              Organization

Organizations which conduct job application        91        87.50
  evaluation

Methods

Questionnaire to participants                      52        57.14
Questionnaire to participants' Supervisors         45        49.45
Performance monitoring                             40        43.96
Observation                                        40        43.96
Comparison of pre and post training                38        41.76
Interview with participants                        30        32.97
Benchmarking with action plans                     24        26.37
Follow up sessions                                 24        26.37
Focus group discussions                            21        23.08
Project assignments                                 9         9.89

Timeframe

1 month                                            22        24.18
1.5 months                                          2          2.2
2 months                                            9         9.89
3 months                                           49        53.85
4 months                                            3          3.3
5 months                                            2          2.2
6 months                                            3          3.3
12 months                                           1          1.1

Who are involved in this process?

L&D team                                           14        15.38
L&D & line managers                                74        81.32
L&D & top management                                3          3.3

Table 5 Business Results

Business Results                                   No. of         %
                                                Organizations

Organizations which evaluate business results        60         57.69

Methods to assess business results
Performance records                                  52         86.67
Comparison of pre and post data                      37         61.67
Questionnaire to participants' supervisors           36         60.00
Questionnaire to participants                        28         46.67
Benchmarking with action plans                       24         40.00
Interview with participants                          21         35.00
Verbal feedback from the participants'                4          6.67
  supervisors
Client feedback                                       4          6.67

Timeframe

1 month                                               8         13.33
2 months                                              3          5.00
3 months                                             30         50.00
4 months                                              3          5.00
6 months                                             15         25.00
12 months                                             1          1.67

Who are involved

L&D team                                              5          8.33
L&D & heads concerned                                52         86.67
Head concerned, CHRO, & CFO                           2          3.33
L&D, heads concerned & client representative          1          1.67

Table 6 Return on Investment

Return on Investment (ROI)             No. of         %
                                    organizations

Organizations which calculate ROI        34         32.69

Timeframe

1 month                                   3          8.82
2 months                                  1          2.94
3 months                                 19         55.88
4 months                                  3          8.82
6 months                                  7         20.59
12 months                                 1          2.94

Who are involved in this process?

L&D & heads concerned                    23         67.65
L&D, Finance & heads concerned            5         14.71
L&D alone                                 3          8.82
CHRO, CEO & CFO                           3          8.82
