
A Field Experiment: Instructor-Based Training vs. Computer-Based Training.

This study is motivated by the issues created by end-user computing (EUC) and its growing importance within organizations. One of the major issues related to EUC is training individuals to adopt new technology. As a result, both researchers and practitioners are challenged to find new ways to train end users. In response to this challenge, researchers have studied key variables, such as training support, delivery techniques, and individual differences, that can be manipulated to enhance training program design.

There is a critical need for computer literacy and aptitude due to the pervasiveness of computers in the workplace, the massive investments in computing technology by corporations and their subsequent impact on return on investment, and the dynamic nature of information systems (IS) technology change. More specifically, organizations are concerned about the long-term effect of training on individual performance. This study reports the results of a longitudinal study conducted in an industrial setting.

Corporate management is frequently more interested in training than in education. Training is an activity related to the job and oriented toward problem solving; formal education, on the other hand, is preparation for a defined profession. Since formal information technology (IT) education often lags behind the needs of industry, corporations must find new ways of providing training to keep abreast of advancing IT. Corporations use various methods to train their employees, such as instructor-based training (IBT), computer-based training (CBT), and video training. The first author of this study conducted a field experiment to evaluate the impact of IBT versus CBT on employees' performance. The findings indicated that the major differences between IBT and CBT subjects were in performance, enrollment for the classes, motivation and general attitude toward the training method, and satisfaction with the facility. A key issue the research identified was that it is difficult to sell the CBT method as a formal training tool to employees.

Field Experiment

To determine the importance of selected key variables in successful training programs, a field experiment was conducted in a corporate setting. The key variables studied were the training method and the training task. The training methods used were instructor-based training (IBT) and computer-based training (CBT). The tasks selected for the study were Word for Windows and Excel 5.0. The target population for this study was end users of information systems technology. The subjects (sample) were employees of a Fortune 100 corporation located in a major southwestern city in the United States. The corporation is a major producer of semiconductors, defense systems, and information technology products. Corporate management requires employees at all levels to take training pertaining to the use of information technology applications software. The study included both hourly and salaried employees: assembly line workers, administrative clerks, technicians, engineers, and managers. It included employees who were required to learn new software packages as well as those who wanted to. Because of privacy and ethics concerns, the subjects' anonymity was preserved.

The reason for using corporate employees in the experiment was to strengthen the external validity of the study. The subjects selected for this study were all novices. Subjects were allowed to select training classes based on the schedule provided by the training center, a formal entity of the corporation that employs qualified trainers and staff. The training center offers technical as well as non-technical training to employees year-round, and employees sign up for training depending upon their workloads and the classes offered. A self-selecting, convenience sample was employed in this study.

Procedure

Each subject was asked to sign a letter of consent stating that participation in the experiment was voluntary, and subjects could drop out of the experiment at any time. Subjects were divided into four groups: IBT (Word), IBT (Excel), CBT (Word), and CBT (Excel). The IBT and CBT contents were similar and were designed to follow a similar sequence of topics.

The IBT approach combined traditional stand-up lecture with a hands-on exploratory method. The CBT approach was similar except for the absence of an instructor; in CBT there was no direct interaction between the subject and an instructor, and the subjects instead interacted directly with the computer. The CBT was a commercially available software package, not one custom developed for the experiment. CBT software for Microsoft Word (Ver. 6.0) and Excel (Ver. 5.0) was loaded on the computer system. Unlike the IBT subjects, the CBT subjects were not provided with a training manual; they were required to follow the series of actions displayed by the CBT software. No instructor was present to answer software-related questions. The only time subjects were assisted was when the system required maintenance or when they had difficulties unrelated to the training content.

Measurement Procedure

Several measures were included in this study, taken at three points in time: at the beginning of training, at the end of training, and one month after training. Demographic data on each subject were collected at the beginning of the experiment.

Demographic data included the subjects' gender, present use of software, current job function, work experience, computer experience, and education. These data were used to assess the subjects' prior knowledge.

Quantitative performance (accuracy performance)

A knowledge assessment of each subject was made by administering a set of questions on Microsoft Word and Excel software concepts, respectively. The subjects were tested using the same questions at the beginning of training, at the end of training, and approximately one month after training, and their scores at each point were recorded. Measuring the subjects' performance after a one-month period was useful in determining the lasting nature of the learning that each training approach provides and the extent to which the subjects were able to retain that learning; this measure is a surrogate for the amount of learning that will transfer to the workplace. Individual performance measures included the ability to recall key commands, the number of errors, and the ability to correctly identify conceptual facts about the software. The questions in the instruments were grouped into four categories: menu, icon, control, and exhibit. In the menu category, subjects were asked to pick the appropriate menu item for a specific question. The icon category included similar questions, but subjects were asked to indicate the appropriate icon. The control category included questions mostly related to page up, cursor movements, and so forth. The exhibit category contained questions related to the actual output display of the Microsoft Word 6.0 and Excel 5.0 software. Menu, icon, and exhibit questions were related to the functionality of the software package (e.g., file, save, print). It was anticipated that the subjects' performance might vary in each category depending on their preference for each category.
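
To make the scoring scheme concrete, the sketch below shows one way the three measurement points and the resulting gain scores could be organized per subject. This is a minimal illustration, not the authors' instrument or scoring code; the subject identifier, field names, and score values are hypothetical.

```python
# Minimal sketch of per-subject score bookkeeping across the three test points.
# All names and values are illustrative assumptions, not the study's data.

from dataclasses import dataclass

@dataclass
class SubjectScores:
    subject_id: str
    method: str           # "IBT" or "CBT"
    task: str             # "Word" or "Excel"
    pre_training: int     # total correct before training
    end_of_training: int  # total correct at the end of training
    one_month_after: int  # total correct one month after training

    def end_gain(self) -> int:
        """Gain from pre-training to end-of-training."""
        return self.end_of_training - self.pre_training

    def retention_gain(self) -> int:
        """Net gain from pre-training to one month after training."""
        return self.one_month_after - self.pre_training

# Example: a hypothetical CBT/Excel subject.
s = SubjectScores("S017", "CBT", "Excel",
                  pre_training=22, end_of_training=39, one_month_after=42)
print(s.end_gain(), s.retention_gain())  # 17 20
```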

Qualitative performance (end-user computing satisfaction)

Several individual satisfaction measures were used, including general attitude toward using the software in the future, perceived importance of training, ease of use, overall satisfaction with the trainer and training content, and satisfaction with the facility. The end-user computing satisfaction instrument was used to measure individual satisfaction.

The overall results indicated CBT training to be more effective than IBT, which implies that corporations should consider CBT as part of their training strategy. There was no significant difference in the subjects' performance accuracy one month after training versus at the end of training. Several factors help explain why the CBT subjects performed significantly better than the IBT subjects. The average education level of the CBT subjects was college undergraduate or better: approximately 38% of the CBT subjects were postgraduates and 38% were graduates. Among the IBT subjects, 42% were high school graduates, 26.6% were undergraduates, 14.4% were graduates, 1% were postgraduates, and 15.5% had other education. Seventy-one percent of the CBT subjects and 38.8% of the IBT subjects had used Microsoft Word or Excel before the training. Thus, the higher level of education and prior use of Word or Excel may explain why the CBT subjects performed better than the IBT subjects. During the course of the study, it was observed that enrollment in the CBT classes was much lower than in the IBT classes. IBT and CBT classes were announced several times each month to increase enrollment, but the employees preferred IBT; the ratio of IBT to CBT participation in the study was approximately 4.3 to 1. This indicates that the majority of employees prefer the presence of an instructor in a classroom (which may reflect the subjects' familiarity with that training environment). The participation of the CBT subjects may imply that they were self-starters who preferred self-paced training.

We also examined the impact of the training task (Microsoft Word or Excel) on the subjects' performance accuracy and satisfaction at the end of training and one month after training, for all IBT and CBT subjects combined, using the training task as a factor with two levels: Microsoft Word and Excel.

The employees' performance data were analyzed using a one-way ANOVA, and the employees' satisfaction data were analyzed using Kruskal-Wallis and Mann-Whitney tests, at a significance level of 5%. The analysis indicated that Excel training was more effective (p-value = 0.005) than Microsoft Word training in producing longer retention of learning. A likely explanation for this difference is that 70% of the Excel subjects used Excel after the training, compared to 51% of the Microsoft Word subjects who used Microsoft Word after the training. However, the subjects' satisfaction did not differ significantly between Excel and Microsoft Word training. Overall, the training task had a significant impact on long-term training outcomes, though not on the immediate outcomes.
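
A hedged sketch of the kinds of tests described above is shown below, using Python's scipy.stats. The score and rating lists are fabricated placeholders, not the study's data; they serve only to illustrate an ANOVA on performance gains and nonparametric tests on ordinal satisfaction ratings at the 5% level.

```python
# Illustrative analysis sketch; the data below are invented placeholders.
from scipy import stats

# Hypothetical performance gain scores for the two tasks.
word_gains = [10, 14, 9, 15, 12, 13, 16, 11]
excel_gains = [18, 15, 20, 17, 22, 14, 19, 21]

# One-way ANOVA on performance accuracy (with two groups this is
# equivalent to a two-sample t-test).
f_stat, p_perf = stats.f_oneway(word_gains, excel_gains)

# Ordinal satisfaction ratings analyzed with nonparametric tests.
word_satisfaction = [4, 5, 3, 4, 4, 5, 3, 4]
excel_satisfaction = [4, 4, 5, 3, 4, 5, 4, 4]
h_stat, p_kw = stats.kruskal(word_satisfaction, excel_satisfaction)
u_stat, p_mw = stats.mannwhitneyu(word_satisfaction, excel_satisfaction,
                                  alternative="two-sided")

alpha = 0.05  # 5% significance level, as in the study
for label, p in [("performance (ANOVA)", p_perf),
                 ("satisfaction (Kruskal-Wallis)", p_kw),
                 ("satisfaction (Mann-Whitney)", p_mw)]:
    verdict = "significant" if p < alpha else "not significant"
    print(f"{label}: p = {p:.3f} -> {verdict}")
```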

The mean performance gain for the Microsoft Word task was 13.17, versus 16.59 for the Excel task; these scores are the average differences between the pre-training and end-of-training performance measures. The analysis indicated that this gain in accuracy from pre-training to end of training did not differ significantly between the subjects receiving word processing training and the subjects receiving spreadsheet training (p-value = 0.302).

The net gain in mean performance one month after training was 12.68 for the Microsoft Word task versus 20.30 for the Excel task. Thus, the training was more effective (p-value = 0.025) for the Excel program than for the Microsoft Word program. Subjects also reported whether they used Microsoft Word or Excel during the month after training: 71% of the Excel subjects used Excel during that period, whereas 51% of the Microsoft Word subjects used Microsoft Word. These results help explain why training seemed more effective for the Excel subjects than for the Microsoft Word subjects. The results further indicated that the Microsoft Word subjects' scores were lower one month after training than at the end of training. Thus, it was concluded that using the software after training increases the retention of learning. When the net gains in performance at the end of training and one month after training were compared, the difference was not statistically significant (p-value = 0.11); however, the mean score was lower for the Microsoft Word subjects than for the Excel subjects.
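
To make the retention comparison concrete, the short sketch below recomputes the direction of change implied by the mean gains reported above. It uses only the summary figures from the text, not the underlying data, and the exact layout of the dictionary is an assumption for illustration.

```python
# Recompute the change in mean gain between end-of-training and one month
# after training, using the summary values reported in the text.
mean_gains = {
    "Word":  {"end_of_training": 13.17, "one_month_after": 12.68},
    "Excel": {"end_of_training": 16.59, "one_month_after": 20.30},
}

for task, g in mean_gains.items():
    change = g["one_month_after"] - g["end_of_training"]
    direction = "gained further" if change > 0 else "lost ground"
    print(f"{task}: end-of-training gain {g['end_of_training']:.2f}, "
          f"one month later {g['one_month_after']:.2f} ({direction}, {change:+.2f})")

# Word subjects' mean gain slipped after training (-0.49), while Excel
# subjects' mean gain continued to rise (+3.71), consistent with their
# heavier post-training use of the software.
```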

Conclusion

The findings indicated that the major differences between IBT and CBT subjects were in performance, enrollment for the classes, motivation and general attitude toward the training method, and satisfaction with the facility. Motivation and attitude were not measured directly but were interpreted from the general comments made by the subjects and from the proportion of IBT versus CBT enrollment. The CBT subjects' overall end-of-training and one-month-after-training performance was significantly better than the IBT subjects' performance. However, a key issue the research identified was that it is difficult to sell the CBT method as a formal training tool to employees.

Another implication of this research for the training manager concerns the long-term learning effect of training. If corporate leaders can correctly identify the employees who need training and who would apply it immediately, they can save costs by not training employees who have no immediate need for the material. Training managers may also need to consider the learning styles of trainees (Bohlen and Ferratt 1997). CBT may be used as a timely way to deliver a preliminary round of training before a trainee attends IBT (Paul 1997; Filipczak 1997). The research determined that CBT is an effective means of training; however, its acceptance as a formal training tool was not favorable. CBT programs are beginning to take center stage among retailers, who credit CBT with reducing costs, strengthening relations, increasing employee retention, and boosting the bottom line (Janoff 1999; Schultz 1998). Training managers need to critically evaluate the software interface when selecting CBT, and they may have to work closely with CBT vendors to determine the parameters critical to end users' effective use of the CBT (Paul 1997). Tips for working with CBT vendors are available (Gordon 1998). This calls for a close partnership between managers and developers to sell CBT to end users.

References

Alavi, Maryam, R. Ryan Nelson, and Ira R. Weiss. 1987-88. Strategies for end-user computing. Journal of Management Information Systems, 4, 28-49.

Baxter, Lynn Zander. 1993. The association of self-directed learning readiness, learning styles, self-paced instruction and confidence to perform on the job. Ph.D. diss., University of North Texas.

Bohlen, George A. and Thomas W. Ferratt. 1997. End user training: An experimental comparison of lecture versus computer-based training. Journal of End User Computing, 9(3), 14-27.

Cole, George. 1994. Learning with computers. Accountancy 113 (May): 60-64.

Desai, Mayur S., Thomas Richards, and John Paul Eddy. 1999. End-user training: A meta model. Journal of Instructional Psychology, 26(2), 74-84.

Excel. Ver. 5.0. Microsoft Corporation, Redmond, WA.

Filipczak, Bob. 1997. Training gets doomed. Training, 34(8), 24-31.

Goodwin, John and Keith Rees. 1995. Computer-based training in the CPA program. Australian Accountant, 65, 50-51.

Gordon, Jack. 1998. Seven tips for working with CBT vendors. Training, 35(1), 18-22.

Gordon, Jack and Marc Hequet. 1997. Live and in person. Training, 34(3), 24-31.

Harrap, Ken. 1990. Using technology to teach technology. Computer Data, 15, 37-38.

Hequet, Marc. 1995. Doing more with less- 1995 industry report. Training, 32, 77-82.

Janoff, Barry. 1999. User-Friendly. Progressive Grocer, 78(3), 65-70.

Keyes, Jessica. 1990. The great videotape debate. Computerworld, 24, 91.

Kowal, Daniel. 1995. Training comes to its own. Managing Office Technology, 40, 25-29.

Leidner, Dorothy E. and Sirkka Jarvenpaa. 1995. The use of information technology to enhance management school education: A theoretical view. MIS Quarterly, 19, 265-291.

Microsoft Word. Ver. 6.0. Microsoft Corporation, Redmond, WA.

Paul, Lauren Gibbons. 1997. The right formula for training. Datamation, 43(9), 96-101.

Schultz, David P. 1998. Supermarkets find expanded uses for computer-based training. Stores, 8(2), 45-46.

Tracey, William R. 1985. Human resources management and development handbook. New York: AMACOM, a division of the American Management Association.

Dr. Mayur S. Desai, Assistant Professor, Division of Business & Economics, Indiana University - Kokomo. Dr. Thomas Richards, Professor, College of Business Administration, University of North Texas. Dr. John P. Eddy, Professor, College of Education, University of North Texas. The first author would like to express his appreciation to his dissertation chair, Dr. J. Wayne Spence, the University of North Texas.

Correspondence concerning this article should be addressed to Dr. Mayur S. Desai, Assistant Professor, Division of Business & Economics, P.O. Box 9003, Kokomo, Indiana 46904-9003.
