
A framework for evaluating online courses.


Evaluating online courses has been an issue discussed in much of the contemporary literature on distance education. Educators, administrators, and students alike are concerned about the overall quality of distance education courses when compared to traditional, face-to-face instruction. Using Berge and Myers (2000) as a foundation, a framework for evaluating the quality of online courses is developed and explained.


While many acknowledge the importance of evaluating online courses, most have not fully discussed the mechanisms for conducting such a review (Hosie & Schibeci, 2001; Sonwalkar, 2002). Much of the current direction in evaluating online education appears to center on the role of technology integrated within education, and there are many current examples of evaluating the effectiveness of technology within an educational unit (Cann, 1999; "How do we know it works?" 2001). These methodologies, however, address only the utilization of technology and the presentation of education within the technological medium. While this is a laudable undertaking, it does not measure or begin to evaluate the overall effectiveness of the online learning environment. Based on the increasing use of online courses both within higher education and within industry, a clearly defined standard or framework should be developed that provides a methodology to evaluate online courses as educational lessons or modules in a summative fashion.

Donald Kirkpatrick (1998) outlined four levels of evaluation, two of which seem particularly appropriate to online learning. Level 1 "measures how those who participate in the program react to it" (Kirkpatrick, 1998, p. 19), and level 2 measures the participants' "change [in] attitudes, improve[d] knowledge, and/or increase[d] skill as a result of attending the program" (Kirkpatrick, 1998, p. 20). Though not specific to distance education, training development, or instructional design, Kirkpatrick's level 1 and level 2 evaluations offer the most appropriate measures for online learning.

Evaluation within Online Education

Online education is still an emerging field, and there are as many schools of thought concerning its design and development as there are standards-based online course environments. Gustafson and Branch (2002) classify the instructional design models, from Dick and Carey (1996) to Gagne, Briggs, and Wager (1988) to Smith and Ragan (1999) and others, into three broad categories: classroom-oriented models, product-oriented models, and systems-oriented models. Yet one thing all of these models share, sometimes explicitly (the procedural evaluation methodologies in the models of Dick and Carey, 1996, and Gagne, Briggs, and Wager, 1988) and sometimes implicitly (the conceptual evaluation methodologies in Smith and Ragan, 1999), is the need for a summative or formative evaluation.

Standards based online course environments, such as the IMS Global Learning Consortium, the Aviation Industry CBT Committee (AICC), the Institute of Electrical and Electronics Engineers (IEEE) Learning Technology Standards Committee, and the Advanced Distributed Learning Sharable Content Object Reference Model (ADL-SCORM), have all developed specifications concerned with the elements that are included within an online course. However, none of these specifications addresses evaluation of the online course itself; they focus on evaluation of student or learner outcomes or content mastery.

Current methods of evaluating online education, as identified in Berge and Myers (2000), McKee and Scherer (1992), and Wiesenberg and Hutton (1995), consist of questions normally centered on the structure of the course, communication with the instructor, expectations of grades received, and comparison of the online course to a traditional classroom course. This direction marks, I believe, the beginning of an appropriate evaluation methodology for online courses. "Quality" often enters the discussion without any real understanding of what quality online education is, or what it is not. Regardless of the expectations for quality (both from the perspective of the student and of the instructor), the overall goal should be to develop a framework for evaluating quality online education (Horton, 2001; Phillips, 1996; Thurmond, 2002).

Evaluation within Instructional Design

The process of instructional design includes developing assessment instruments for individual learning objectives and, to some extent, addresses the methodology for performing a course evaluation (Dick & Carey, 1996; Gagne, 1985). It is the very nature of the instructional design process, in attempting to establish evaluation criteria, that contributes to the confusion. The process outlined by Dick and Carey is focused not on the effect of learning but on evaluating the process or sub-processes that created the learning, hence their use of the terms summative and formative evaluation. Specifically, from the perspective of instructional design (Gustafson & Branch, 2002), there are two foci for evaluation: formative, conducted after a particular stage or event in the learning or educational process, and summative, conducted on the entire educational event.

Instructional design models are traditionally focused on evaluating the development and design of a course. Regardless of the model used, there are evaluation periods at the end of the learner analysis phase, at the end of the instructional strategies phase, and a summative evaluation at the end of the development cycle. This summative evaluation merely evaluates the development process, not the impact or effect of the education on a learner. The learner analysis phase (Dick & Carey, 1996) is focused on identifying many of the characteristics of the learners: what they know before the training, what the desired entry-level abilities should be, ideal learning preferences, and the learner's internal motivation (Dick & Carey, 1996, p. 91). At the end of this phase, according to Dick and Carey, the initial educational goals and objectives should be reviewed and revised based on the learner characteristics.

The other terms normally associated with evaluation are formative and summative evaluations (Dick & Carey, 1996). Formative evaluation "is the process of collecting data and information in order to improve the effectiveness of instruction" (Dick & Carey, 1996, p. 321). This type of evaluation is conducted to measure the change in the learner, from the beginning of the training to exiting the training, and identify if there has been a change (Dick & Carey, 1996). Summative evaluations are "the process of collecting data and information in order to make decisions about the acquisition or continued use of some instruction" (Dick & Carey, 1996, p. 321).

The proposed evaluation framework is not an evaluation of the development process in the sense of return-on-investment (ROI) models or cost-benefit analysis (CBA) (Kirkpatrick, 1996; Phillips, 1996), where evaluation is determined by "tangible results of the program in terms of reduced costs, improved quality, improved quantity" (Kirkpatrick, 1996, p. 295). It is, rather, an evaluation designed to measure the degree of change within the learner: whether the online learning event has been worthwhile, rewarding, enriching and, above all, satisfying. There are specific types and groupings of questions that should be included within any online course evaluation. Regardless of the intended reason or focus of the evaluation, some consideration should be given to its type and nature. Berge and Myers (2000) indicate that, in their review of the literature, evaluations fall into three general categories: pre-course, mid-course, and post-course. A pre-course evaluation can establish a benchmark against which a post-course evaluation is compared to indicate whether there has been any change, positive or negative. A mid-course evaluation can provide the information necessary to implement changes in a later portion of the course.
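The pre-course benchmark and post-course comparison described above can be sketched in code. This is a hypothetical illustration only: the 5-point self-rating scale, the question labels, and the responses are invented, not drawn from any instrument the article reviews.

```python
# Hypothetical sketch of a pre-course benchmark compared against a
# post-course evaluation, per Berge and Myers' (2000) three timing
# categories. Ratings assume an invented 1-5 self-report scale.

PRE_COURSE = {"comfort_with_technology": 2, "confidence_in_subject": 3}
POST_COURSE = {"comfort_with_technology": 4, "confidence_in_subject": 4}

def change_scores(pre, post):
    """Return the per-question difference between post- and pre-course
    ratings; positive values indicate a positive change."""
    return {question: post[question] - pre[question] for question in pre}

print(change_scores(PRE_COURSE, POST_COURSE))
# {'comfort_with_technology': 2, 'confidence_in_subject': 1}
```

A mid-course administration of the same items could feed the same comparison, giving the instructor data in time to adjust a later portion of the course.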


The instruments reviewed in Berge and Myers (2000), McKee and Scherer (1992), and Wiesenberg and Hutton (1995) did not categorize the questions asked of distance education students. It is logical to group questions into categories that elicit data to improve the online learning experience. Berge and Myers (2000) remark that instruments ask questions concerning student attitude. While this may have been a key decision point for each instrument, the instruments themselves lacked consistency in the types of questions asked. The lack of a consistent instrument, of consistent interpretation of results, or of consistent instrument use may be part of the confusion that institutions, faculty members, technology staff, instructional designers, and students face when comparing the value of one course to another.

Developing the Framework

The instruments from Berge and Myers (2000), McKee and Scherer (1992), and Wiesenberg and Hutton (1995) have provided a framework upon which the following categories have been placed. It seems logical to assume that any instrument would include (1) technology related questions, (2) learner related questions, (3) support system related questions, and (4) learning environment related questions. By posing questions from each of these four dimensions, it is possible to evaluate not only the effectiveness of the learning but also the technological requirements. This then establishes a baseline for the entry-level skills of learners embarking on online learning and the role of the institutional support system in ensuring success.
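One way to picture an instrument built on the four dimensions is as a simple categorized question bank. The sketch below is an assumption about data layout only; the sample questions are paraphrased from the lists given later in this article.

```python
# Hypothetical sketch: an evaluation instrument organized by the four
# framework dimensions. The layout is an assumption; the questions are
# paraphrased from the article's example lists.

INSTRUMENT = {
    "technology": [
        "Does the end user have the required bandwidth for optimum viewing?",
        "Is the end user's browser compatible with the online course?",
    ],
    "learner": [
        "Does the end user have the necessary technology skills?",
        "Does the end user have the necessary reading skills?",
    ],
    "support_system": [
        "Is there an institutional support system for end users?",
        "Is there a help desk for end-user issues?",
    ],
    "learning_environment": [
        "Are course goals clearly stated before the learning event?",
        "Is navigation clear for the learner?",
    ],
}

def questions_by_dimension(instrument, dimension):
    """Return the questions for one of the four framework dimensions."""
    return instrument.get(dimension, [])

for dimension in INSTRUMENT:
    print(dimension, len(questions_by_dimension(INSTRUMENT, dimension)))
```

Grouping the items this way makes each dimension reportable on its own, so a weakness in, say, the support system is not averaged away by strong learning environment scores.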

Technology related questions

There is a natural assumption on the part of the end user that the technology for learning online (the computer) is adequate (Russell, 1996). Any issues related to technology are, from the perspective of the end user, not the result of inadequate technology but the result of the Learning Management System (LMS) or institutional support system. Technology issues are, in simple terms, largely under the control of the user, yet the user may not see the relevance of the existing technology to the learning event. As an interesting aside to the technology related questions, Macdonald, Heap, and Mason (2001) indicate that the technology skills a student needs to succeed in a course may in fact be acquired as a result of the course. Examples of questions to be included in an evaluation of an online course include the following:

* Does the end user have the necessary/required bandwidth for optimum viewing?

* Is the end user's computer speed sufficient for optimum viewing?

* Is the end user Internet browser compatible with the online learning course?

* Do the end user's computer and Internet browser have the necessary plug-ins required for the online learning course?

* Does the end user have a connection speed to the Internet service provider sufficient for optimum viewing?

Learner related questions

The learner related questions establish a baseline for future online learning. The questions are part of the initial learner analysis, and course development should be based on that analysis. Though these questions are self-reported and their accuracy may be questioned, they may provide an indication of learner abilities. Through a student's engagement in a course, the skills necessary to complete it successfully may grow and develop, often without explicit teaching of those skills. Students may not be able to accurately self-report their abilities if they do not know the processes or procedures necessary to perform the tasks (Laing, 1988; Macdonald, Heap, & Mason, 2001; Osberg, 1989). Learner related questions to be included in an evaluation of an online course include the following:

* Does the end user have the necessary technology skills? (Russell, 1996).

* Does the end user have the necessary reading skills?

Support system related questions

While it may be simpler to discount questions that deal with the existing infrastructure, it is important to ask them. Little thought is given to the presence, or even the functionality, of the institutional information technology infrastructure until the system becomes unresponsive. The questions in this area are designed to identify particular strengths and weaknesses of the existing infrastructure. With the growth in online education, it is natural to assume that the role of the support system will increase as well, as its potential impact on the online learning environment and the student is great. Questions to be included in an evaluation of an online course that relate to support include the following:

* Is there an institutional support system for end users?

* Is there an available help desk system for end user issues?

* Is there help for hardware related issues?

* Is there help for software related issues?

* Does the institution offer training for new users?

Learning environment

While the learning environment questions are more naturally associated with the design and development process of the online learning event, they are also relevant to evaluating online courses. These questions are designed to evaluate the learning process and the learner's perceptions of the efficacy of the learning event. Private industry may have a greater experience base when it comes to assessing the instructional soundness of an online course; yet there are as many different instruments within the private sector as there are companies providing online training (Pisik, 1997). Examples of questions to be included in an evaluation of an online course include the following:

* Are the course expectations/goals/objectives clearly stated prior to the beginning of the learning event?

* Does the content reflect the objectives, which are derived from the goals?

* Is there an explicit linkage between assessment, content, and course objectives?

* Is the navigation clear for the learner?

* Are the hyperlinks active and correct?

* Has the content been broken down into small manageable chunks?

* Is there a feedback system embedded within the course?

Conclusion and Recommendations

We have provided a framework model to use when evaluating online courses. A well-designed evaluation provides a circumspect review of not only the process but also the product (Reeves, 2000), and online education deserves such an evaluation. If the overall goal of education is to cause change (Gagne, 1980), and the measure of this outcome is an evaluation instrument, then certainly there should be such an instrument for the online environment. With the increasing use of online education, it is necessary to have a framework to evaluate online education as classroom education is evaluated.


References

Berge, Z., & Myers, B. (2000). Evaluating computer mediated communication courses in higher education. Journal of Educational Computing Research, 23(4), 431-450.

Cann, A. (1999). Approaches to the evaluation of online learning materials. Innovations in Education and Training International, 36(1), 44-52.

Dick, W., & Carey, L. (1996). The systematic design of instruction. (4th ed.). New York, NY: HarperCollins.

Gagne, R. (Winter 1980). Preparing the learner for new learning. Theory Into Practice, 19(1), 6-9.

Gagne, R. (1985). The conditions of learning and theory of instruction. (4th ed.). Fort Worth, TX: Holt, Rinehart and Winston, Inc.

Gagne, R., Briggs, L., & Wager, W. (1988). Principles of instructional design. (3rd ed.). New York, NY: Holt, Rinehart and Winston, Inc.

Gustafson, K., & Branch, R. (2002). Survey of instructional development models. (4th ed.). Syracuse, NY: ERIC Clearinghouse on Information and Technology, Syracuse University.

Horton, W. (2001). Evaluating e-learning. Alexandria, VA: American Society for Training and Development.

Hosie, P., & Schibeci, R. (October 2001). Evaluating courseware: A need for more context bound evaluations? Australian Educational Computing, 16(2), 18-26.

How do we know it works? Evaluating the effectiveness of technology in instruction (2001, July 15). Distance Education Report, 5(4), 1-2.

Kirkpatrick, D. (1996). Evaluation. In R. L. Craig (Ed.), The ASTD training and development handbook (pp. 294-312). New York, NY: McGraw-Hill.

Kirkpatrick, D. (1998). Evaluating training programs: The four levels. (2nd ed.). San Francisco, CA: Berrett-Koehler Publishers, Inc.

Laing, J. (September 1988). Self-report: Can it be of value as an assessment technique? Journal of Counseling and Development, 67(1), 60-61.

Macdonald, J., Heap, N., & Mason, R. (2001). "Have I learnt it?" Evaluating skills for resource-based study using electronic resources. British Journal of Educational Technology, 32(4), 419-433.

McKee, B., & Scherer, M. (1992). A formative evaluation of two Gallaudet University/Rochester Institute of Technology courses offered via teleconferencing. Rochester Institute of Technology, NY: National Technical Institute for the Deaf. (ERIC Document Reproduction Service No. ED377213).

Osberg, T. (September-October 1989). Self-report reconsidered: A further look at its advantages as an assessment technique. Journal of Counseling and Development, 68(1), 111-113.

Phillips, J. (1996). Measuring the results of training. In R. L. Craig (Ed.), The ASTD training and development handbook (pp. 313-341). New York, NY: McGraw-Hill.

Pisik, G. (July-August 1997). Is this course instructionally sound? A guide to evaluating online training courses. Educational Technology, 37(4), 50-59.

Reeves, T. (2000). Alternative assessment approaches for online learning environments in higher education. Journal of Educational Computing Research, 23(1), 101-111.

Russell, A. (1996). Six stages for learning to use technology. Proceedings of Selected Research and Development Presentations at the 1996 National Convention of the Association for Educational Communications and Technology, Indianapolis, Indiana. (ERIC Document Reproduction Service No. ED 397832).

Smith, P., & Ragan, T. (1999). Instructional design. (2nd ed.). Upper Saddle River, NJ: Merrill, Prentice Hall.

Sonwalkar, N. (January 2002). A new methodology for evaluation: The pedagogical rating of online courses. Syllabus, 15(6), 18-21.

Thurmond, V. (January-February, 2002). Considering theory in assessing quality of web-based courses. Nurse Educator, 27(1), 20-24.

Wiesenberg, F., & Hutton, S. (1995). Teaching a graduate program using computer mediated conferencing software. Paper presented at the Annual Meeting of the American Association for Adult and Continuing Education, Kansas City, MO. (ERIC Document Reproduction Service No. ED391100).

David M. Peter, Indiana State University

Peter is the Instructional Design Specialist with the Center for Teaching and Learning. His research interests include accessibility and usability as well as systematic evaluation and assessment of distance education.
COPYRIGHT 2003 Rapid Intellect Group, Inc.

Author: Peter, David M.
Publication: Academic Exchange Quarterly
Date: Sep 22, 2003
