Key factors for determining student satisfaction in online courses.

Many higher education institutions are either offering online courses or are planning such initiatives. Critics argue that college and university administrators are forcing online courses upon students and professors as cost-saving measures, and at some universities students have expressed discontent with online course initiatives (Jaffee, 1998; Noble, 1998). Others are wondering aloud whether online courses are in fact the answer to challenges such as rapid tuition increases and a changing student body (Feenberg, 1999; Hara & Kling, 2000; Rahm & Reed, 1998).

Historically, retention of distance learners has been problematic, with dropout rates disproportionately high compared to traditional course settings (Richards & Ridley, 1997; Wetzel, Radtke, & Stern, 1994). Dropout rates of 30 to 50% have been common (Moore & Kearsley, 1996). Students may experience feelings of isolation in distance courses compared to prior face-to-face educational experiences (Shaw & Polovina, 1999). If distance courses feature limited contact with instructors and fellow students, this isolation can result in unfinished courses or degrees (Keegan, 1990).

Student satisfaction in traditional learning environments has been overlooked in the past (Astin, 1993; DeBourgh, 1999; Navarro & Shoemaker, 2000). Student satisfaction has also not been given the proper attention in distance learning environments (Biner, Dean, & Mellinger, 1994). Richards and Ridley (1997) suggested further research is necessary to study factors affecting student enrollment and satisfaction. Prior studies in classroom-based courses have shown there is a high correlation between student satisfaction and retention (Astin, 1993; Edwards & Waters, 1982). This high correlation has also been found in studies in which distance learners were the target population (Bailey, Bauman, & Lata, 1998). The purpose of this study was to identify factors influencing student satisfaction in online courses, and to create and validate an instrument to measure student satisfaction in online courses.


Distance Education

Distance learning has been defined as a learning environment in which "students and teachers are separated by distance and sometimes by time" (Moore & Kearsley, 1996, p. 1). In 1997-98, 34% of postsecondary educational institutions offered distance education courses and an additional 20% planned to offer distance courses by 2000. Of these institutions, 77% indicated they used the Internet as one of many instructional delivery modes (National Center for Education Statistics [NCES], 1999). Enrollment in distance courses increased steadily during the 1990s (Neeley, Niemi, & Ehrhard, 1998). In the academic year 1994-95, formal online student enrollment was 758,640 (NCES, 1998). By 1997-98, that number had increased to 1,661,100 (NCES, 1999). The recent growth in distance education has been largely credited to the availability of technology-enhanced instruction (Hobbs & Christianson, 1997).

Moore and Kearsley (1996) delineated three generations of distance learning: (a) traditional correspondence and independent study courses, (b) open universities in the 1970s, and (c) the use of broadcasting and teleconferencing tools in conjunction with computers. In modern distance education courses, instructors and students are separated by time and space but telecommunications hardware and software enable communication and collaboration. Some of the tools used in online courses include the following: (a) e-mail, (b) chat rooms, (c) threaded discussions, (d) bulletin boards, (e) file transfer protocol, and (f) digital audio and video (Belanger & Jordan, 2000).

Reports of success with online courses in higher education are plentiful. Navarro (2000) reported many students are highly satisfied with online courses. Hiltz (1993) found that communication software increased the quality of instruction, raised students' level of motivation due to greater access to their instructors, and increased their satisfaction with outcomes. Powers, Davis, and Torrence (1999) also reported high student satisfaction with their level of involvement in a graduate instructional technology course.

Student Satisfaction

Student satisfaction can be defined as the student's perception pertaining to the college experience and perceived value of the education received while attending an educational institution (Astin, 1993). Most college students spend considerable time, money, and effort in obtaining a quality education and should perceive their postsecondary educational experiences as being of high value (Knox, Lindsay, & Kolb, 1993). Satisfaction is an important "intermediate outcome" (Astin, 1993, p. 278) in that it influences the student's level of motivation (Chute, Thompson, & Hancock, 1999; Donohue & Wong, 1997), which is an important psychological factor in academic success (American Psychological Association [APA], 1997). Satisfaction is also a good predictor of retention (Astin, 1993; Edwards & Waters, 1982). End-of-course surveys administered to distance learners can give evaluators valuable student satisfaction information that can be used to improve the course or program (Chute et al., 1999).

Factors Contributing to Student Satisfaction

In traditional classroom settings, factors associated with student satisfaction are student characteristics, quality of relationships with faculty, curriculum and instruction, student life, support services, resources, and facilities. In a study with undergraduate students, Astin (1993) identified the following factors as most important: (a) contact time with faculty members and administrators, (b) availability of career advisors, (c) student social life on campus, and (d) overall relationships with faculty and administrators. Bean and Bradley (1986) concluded the best predictors of student satisfaction are: (a) academic integration, (b) institutional fit, (c) quality and usefulness of education, (d) social life, and (e) difficulty of the program.

However, online courses present a different set of challenges to instructors and students. Distance learners may never visit a physical campus location and may have difficulty establishing relationships with faculty and fellow students. Researchers who study distance learners must understand and account for these differences when investigating student satisfaction.

Instructor issues. The instructor is the main predictor of student satisfaction (Finaly-Neumann, 1994; Williams & Ceci, 1997). Student satisfaction has a strong positive correlation with the performance of the instructor, particularly with his or her availability and response time (DeBourgh, 1999; Hiltz, 1993). Instructors must be perceived as available when students have questions and must be flexible (Moore & Kearsley, 1996). The instructor is not only a facilitator of learning but also a motivator for the student.

The instructor's feedback is the most important factor in satisfaction with instruction (Finaly-Neumann, 1994). Feedback on assignments must be given in a timely manner to keep learners involved and motivated (Smith & Dillon, 1999). The instructor must communicate with students on a regular basis (Mood, 1995); otherwise, students may experience high levels of frustration (Hara & Kling, 2000). In addition, instructor feedback gives students the opportunity to revise assignments, an activity that reinforces concepts introduced in the course.

Communication. Moore and Kearsley (1996) mentioned three important types of interaction in distance learning courses: (a) learner-content, (b) learner-instructor, and (c) learner-learner. Instructors should facilitate all three types of interaction in their distance learning courses when possible and appropriate. Distance learners can experience feelings of isolation and high levels of frustration and anxiety if communication and interaction among the parties are lacking (Mood, 1995). One way to overcome the feeling of isolation is to establish a sense of community for learners at the beginning of the course by giving them an informal warm-up period with structured exercises (Wegerif, 1998).

Mood (1995) reported course goals and objectives should be clearly communicated at the beginning of the course. If students know what is expected of them, their levels of anxiety can be reduced. Instructors should encourage student participation and monitor student progress. Students should also have opportunities to become self-directed learners and structure their own learning experiences (Wegerif, 1998).

Technology. Students must have access to reliable equipment and must be familiar with the technology used in the course in order to be successful (Belanger & Jordan, 2000). Students with limited online access are at a considerable disadvantage to learners who have unlimited access (Wegerif, 1998). Online access is one of the most important factors influencing student satisfaction (Bower & Kamata, 2000). Students who report frustration with technology in the course report lower satisfaction levels (Chong, 1998; Hara & Kling, 2000).

Course management. Moore and Kearsley (1996) indicated that administrative support is instrumental for distance learning students. They suggested students should have one contact person who will be able to assist them. Access to other resources such as course textbooks, libraries, technical support, and a toll-free number to reach the university are also important to the distance learner (Mood, 1995). Students without technical support may experience high levels of frustration in the online environment (Hara & Kling, 2000).

Course web site. Learning should be meaningful, relevant, and interesting (APA, 1997). Good course web sites present information in a logical order, and their design must be attractive and consistent (Belanger & Jordan, 2000). Text must be easy to read, and downloading times should be kept to a minimum to accommodate students with slower Internet connections. Also, web pages should not be too cluttered with information (Harrison, 1999).

Navigation is an important factor in the online environment. Learners should be able to move within the course web site without getting lost (Aggarwal, 2000). External hyperlinks should only be provided if they give students access to necessary information. Irrelevant information will only confuse learners. Links must work properly or students will experience frustration (Harrison, 1999).

Interactivity. Learning environments in which social interaction and collaboration are allowed and encouraged lead to positive learning outcomes (APA, 1997). Collaborative learning tools can improve student satisfaction in the online learning environment (Bonk & Cunningham, 1998; Gunawardena & Zittle, 1998). These tools allow for group work and immediate feedback. Students are able to share and discuss viewpoints with one another online and gain insights and perspectives to which they otherwise would not have been exposed. This type of social interaction can facilitate meaningful, active learning experiences (Bonk & Cunningham, 1998).

General information. Distance learners should be motivated, organized, and committed. They must become responsible for their own learning (Belanger & Jordan, 2000). Palloff and Pratt (1999) warned that not all students will be successful in the online learning environment. Hiltz (1993) found that students with positive attitudes were more satisfied with the online experience and spent more time actively engaged online. Students' expected grades in a course positively affect their satisfaction in the online environment (Bower & Kamata, 2000). Students reported they enjoyed the convenience of online courses, and convenience was more important to them than face-to-face interaction with instructors and peers (Card & Horton, 2000). Maki, Maki, Patterson, and Whittaker (2000) also found students perceived the convenience of the online course as a benefit and enjoyed the flexibility of the online learning environment.



Method

The sample used in this study was drawn from a pool of all 507 graduate students enrolled in multiple online courses in instructional technology at a regional university in the Southeastern United States. A total of 303 students were randomly selected to participate in this study. One hundred five (34%) who had completed at least one online course responded. Of the respondents, 71% were female. The majority of respondents (59%) were between 30 and 49 years of age. All but three respondents were education majors.

Data Collection

The researchers decided to seek participants from the instructional technology graduate program because of the large number of online courses and students; however, the instrument described here may be used to evaluate any online course. The researchers e-mailed the participants with instructions for accessing and completing the Online Course Satisfaction Survey (OCSS). Participants were assigned individual passwords to prevent any unauthorized participation in the study. The estimated time for participants to complete the OCSS was 20 minutes.


Instrumentation

The Telecourse Evaluation Questionnaire (TEQ) constructed by Biner (1993) has a total of 42 questions. The TEQ measures student attitudes toward televised distance education and addresses three factors: (a) instruction and instructor, (b) technology, and (c) course management. With Biner's permission, the researchers modified the TEQ to address issues related to online learning environments and student satisfaction. The resulting OCSS contained 60 items. Forty-two were Likert-type scale items addressing: (a) instructor, (b) technology, (c) course management, (d) course web site, (e) interactivity, and (f) general issues. The remaining items collected demographic and general information. In order to eliminate neutral responses, the instrument required participants to indicate their level of satisfaction on a 4-point scale ranging from 1 (strongly disagree) to 4 (strongly agree).

Reliability and validity. The TEQ is an established survey that has been used in several research studies and has well-established content validity (Biner, 1993; DeBourgh, 1999). Questions added to the TEQ to form the OCSS were derived directly from the existing literature on student satisfaction and course evaluations. Because the TEQ was significantly modified to reflect the technology used in web-based courses, the researchers performed a reliability analysis on the OCSS after the data collection phase.

Statistical Assumptions

The data were examined for violations of statistical assumptions (e.g., sample size and missing data, linearity, multicollinearity, singularity, univariate and multivariate outliers). None of the cases had missing values, and no univariate or multivariate outliers were detected. To examine linearity, several bivariate scatterplots were generated and inspected. All of the scatterplots showed irregularities between the variables, an artifact of the instrument's 4-point Likert-type scale.

The Pearson correlation coefficients were examined in a correlation matrix to determine whether multicollinearity existed. Many correlation coefficients exceeded .50, and the highest coefficient in the matrix was .91. The collinearity diagnostics showed variance proportions below .64, leading to the conclusion that no multicollinearity existed among the dependent variables. Each dependent variable was an independent measure, ruling out singularity.
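A screening of this kind can be sketched in a few lines. The responses below are synthetic and the .90 cutoff is purely illustrative; it is not the diagnostic procedure the authors used:

```python
import numpy as np

# Hypothetical responses: 6 students x 4 Likert-scale items (data are made up)
responses = np.array([
    [4, 4, 3, 2],
    [3, 3, 4, 1],
    [2, 2, 2, 4],
    [4, 3, 3, 2],
    [1, 1, 2, 3],
    [3, 4, 4, 1],
], dtype=float)

# Pearson correlation matrix across items (columns)
r = np.corrcoef(responses, rowvar=False)

# Flag item pairs whose correlation suggests possible multicollinearity
threshold = 0.90
k = r.shape[0]
flagged = [(i, j, round(r[i, j], 2))
           for i in range(k) for j in range(i + 1, k)
           if abs(r[i, j]) >= threshold]
print(flagged)
```

In practice one would pair such a screen with formal collinearity diagnostics (variance proportions, condition indices) rather than a single cutoff.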


Results

Descriptive Statistics

Table 1 displays the means and standard deviations of the scores. The standard deviations, a measure of the variability of scores around the mean, were relatively small. Variables with a correlation coefficient between .60 and .80 are considered to have a strong relationship, whereas variables with a correlation coefficient between .80 and 1.0 have a very strong relationship. Many relationships had correlation coefficients at or above .60, and several were above .80.

Factor Analysis

A confirmatory factor analysis with varimax rotation was performed to extract factors relevant to student satisfaction as identified in the literature and to examine the construct validity of the satisfaction survey. The researchers expected six factors with high subscale loadings for the online course satisfaction survey. An initial examination of the data revealed four dimensions with eigenvalues greater than 1. However, examination of the scree plot from the initial extraction (Figure 1) indicated the instrument has only three components: (a) instructor, (b) technology, and (c) interactivity.

The factor loadings on the instructor/instruction satisfaction dimension were satisfactory and explained 64.48% of the variance. The other two components had several complex loadings. A possible explanation is that students associated many of the interactivity aspects with instructional issues, and the online learners might have associated technology aspects with factors outside the course. The three extracted components explained 72.73% of the variance. These results indicate the online course satisfaction survey is a valid measure of satisfaction.
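The extraction logic (eigenvalues of the item correlation matrix, components retained by the eigenvalue-greater-than-1 rule and checked against a scree plot) can be illustrated with synthetic data. The two-factor structure below is an assumption for demonstration only, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic Likert-style data: two latent factors driving six observed items
f1 = rng.normal(size=(100, 1))
f2 = rng.normal(size=(100, 1))
items = np.hstack([f1, f1, f1, f2, f2, f2]) + 0.3 * rng.normal(size=(100, 6))

# Eigenvalues of the item correlation matrix, sorted descending
r = np.corrcoef(items, rowvar=False)
eigenvalues = np.linalg.eigvalsh(r)[::-1]

# Retain components with eigenvalue > 1 (the rule the study applied
# before consulting the scree plot)
n_retained = int((eigenvalues > 1).sum())

# Percentage of total variance explained by the retained components
explained = eigenvalues[:n_retained].sum() / eigenvalues.sum() * 100
print(n_retained, round(explained, 2))
```

With this construction the first two eigenvalues dominate, so two components are retained; the study's 72.73% figure is the analogous variance-explained quantity for its three retained components.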

To determine the instrument's internal consistency reliability, the Cronbach alpha coefficient was used. The total scale reliability was high (.99). The subscale reliability was high for all six dimensions. The Cronbach alpha coefficients were as follows for the six subscales: (a) .98 for instructor, (b) .93 for technology, (c) .94 for course management, (d) .96 for course web site, (e) .83 for interactivity, and (f) .88 for general issues.
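Cronbach's alpha itself is straightforward to compute: it compares the sum of the item variances to the variance of the total scale score. A minimal sketch with made-up Likert responses (illustrative data, not the study's):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data: 5 respondents answering 4 highly consistent Likert items
scores = np.array([
    [4, 4, 3, 4],
    [3, 3, 3, 3],
    [2, 2, 1, 2],
    [4, 3, 4, 4],
    [1, 2, 1, 1],
], dtype=float)

print(round(cronbach_alpha(scores), 2))  # -> 0.96
```

Because the items move together across respondents, alpha is high here, mirroring the pattern of high subscale reliabilities reported above.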


Discussion

These results indicate the OCSS is a valid measure of student satisfaction even though only three factors were confirmed in this study. The OCSS was used to determine student satisfaction in an instructional technology graduate program but may be used for any online course to determine the level of student satisfaction. The instrument's overall reliability and the reliability of each of its six subscales are satisfactory. Other researchers may want to replicate this study with a larger sample to validate the findings.

The results indicate instructor variables are the most important factor when it comes to student satisfaction in the online environment. The questions in the instrument's instructor subscale related to communication, feedback, preparation, content knowledge, teaching methods, encouragement, accessibility, and professionalism. In other words, instructors who teach in the online environment should be good instructors.

The other two important factors that explain student satisfaction are technology and interactivity. Students need to have access to reliable equipment both personally and on the part of the institution. In addition, learners must have functional, usable tools for participation and interaction, and these tools should be used early and often. Online learners must be given plenty of opportunities to participate in discussions in order to feel involved and stay engaged in online courses.

Moore and Kearsley (1996) warned that student satisfaction is not correlated with actual student achievement. However, satisfaction contributes to motivation, and motivation is a predictor of student success. This is reason enough to be concerned about the levels of satisfaction students experience in online courses and degree programs. The increase in the number of online courses offered at postsecondary educational institutions and rising student enrollment in online courses and programs should prompt educators and administrators to investigate student satisfaction further. Tools such as the OCSS can be used to evaluate courses and programs and, to a certain degree, predict student attrition. Clearly, student satisfaction is a key variable in determining the success or failure of online learners, courses, and programs.

Table 1. Means and Standard Deviations for Items on the Instrument

Instruction subscale              Technology subscale
Item     M      SD                Item     M      SD
 1      3.26   1.01                14     3.04   0.98
 2      3.22   0.93                15     2.73   0.93
 3      3.09   1.06                16     2.89   0.98
 4      3.18   1.05                17     3.05   0.96
 5      3.30   1.02                18     3.02   0.93
 6      3.32   0.97                19     3.03   0.97
 7      3.40   0.96                20     2.98   0.88
 8      3.10   0.99                37     2.88   1.01
 9      3.27   1.07
10      3.30   0.95
11      3.18   1.10
12      3.35   1.00
13      3.22   1.08

Course management subscale        Web site subscale
Item     M      SD                Item     M      SD
21      3.02   0.98                28     3.09   0.95
22      3.05   0.97                29     3.13   0.96
23      2.97   0.97                30     3.21   0.91
24      2.99   0.96                31     3.11   1.01
25      2.96   0.96                32     3.05   0.94
26      2.97   0.93                33     3.02   0.94
27      3.12   0.96

Interactivity subscale            General issues subscale
Item     M      SD                Item     M      SD
34      2.58   1.07                38     2.90   0.82
35      2.31   1.00                39     2.69   0.99
36      2.69   1.05                40     3.34   0.98
                                   41     3.12   0.99
                                   42     2.50   1.09


Aggarwal, A. (Ed.) (2000). Web-based learning and teaching technologies: Opportunities and challenges. Hershey, PA: Idea Publishing Group.

American Psychological Association (APA) (1997). Learner-centered psychological principles: A framework for school redesign and reform. Washington, DC: Author. Retrieved August 28, 2001, from:

Astin, A.W. (1993). What matters in college? Four critical years revisited. San Francisco, CA: Jossey-Bass.

Bailey, B.L., Bauman, C., & Lata, K.A. (1998). Student retention and satisfaction: The evolution of a predictive model. Paper presented at the meeting of the Association for Institutional Research Conference, Minneapolis, MN. (ERIC Document Reproduction Service No. ED424797)

Bean, J.P., & Bradley, R.K. (1986). Untangling the satisfaction-performance relationship for college students. Journal of Higher Education, 57, 393-412.

Belanger, F., & Jordan, D.H. (2000). Evaluation and implementation of distance learning: Technologies, tools and techniques. Hershey, PA: Idea Publishing Group.

Biner, P.M. (1993). The development of an instrument to measure student attitudes toward televised courses. The American Journal of Distance Education, 7(1), 62-73.

Biner, P.M., Dean, R.S., & Mellinger, A.E. (1994). Factors underlying distance learner satisfaction with televised college-level courses. The American Journal of Distance Education, 8(1), 60-71.

Bonk, C.J., & Cunningham, D.J. (1998). Searching for learner-centered, constructivist, and sociocultural components of collaborative educational learning tools. In C.J. Bonk & K.S. King (Eds.), Electronic collaborators: Learner-centered technologies for literacy, apprenticeship, and discourse (pp. 25-50). Mahwah, NJ: Lawrence Erlbaum.

Bower, B.L., & Kamata, A. (2000). Factors influencing student satisfaction with online courses. Academic Exchange Quarterly, 4(3), 52-56.

Card, K.A., & Horton, L. (2000). Providing access to graduate education using computer-mediated communication. International Journal of Instructional Media, 27, 235.

Chong, S. M. (1998). Models of asynchronous computer conferencing for collaborative learning in large college classes. In C. J. Bonk & K. S. King (Eds.), Electronic collaborators: Learner-centered technologies for literacy, apprenticeship, and discourse (pp. 157-182). Mahwah, NJ: Lawrence Erlbaum.

Chute, A.G., Thompson, M.M., & Hancock, B.W. (1999). The McGraw-Hill handbook of distance learning. New York: McGraw-Hill.

DeBourgh, G.A. (1999). Technology is the tool, teaching is the task: Student satisfaction in distance learning. Paper presented at the meeting of the Society for Information Technology & Teacher Education International Conference, San Antonio, TX. (ERIC Document Reproduction Service No. ED 432 226)

Donohue, T.L., & Wong, E.H. (1997). Achievement motivation and college satisfaction in traditional and nontraditional students. Education, 118, 237-243. Retrieved August 28, 2001, from: InfoTrac database (Item A20479498).

Edwards, J.E., & Waters, L.K. (1982). Involvement, ability, performance, and satisfaction as predictors of college attrition. Educational and Psychological Measurement, 42, 1149-1152.

Feenberg, A. (1999). Distance learning: Promise or threat? Retrieved October 25, 2001, from:

Finaly-Neumann, E. (1994). Course work characteristics and students' satisfaction with instructions. Journal of Instructional Psychology, 21(2), 14-19.

Gunawardena, C.N., & Zittle, R.H. (1998). Faculty development programmes in distance education in American higher education. In C. Latchem & F. Lockwood (Eds.), Staff development in open and flexible learning (pp. 105-114). New York: Routledge.

Hara, N., & Kling, R. (2000). Students' distress with a web-based distance education course: An ethnographic study of participants' experiences. Bloomington, IN: Center for Social Informatics. Retrieved August 24, 2001, from:

Harrison, N. (1999). How to design self-directed and distance learning. Boston: McGraw-Hill.

Hiltz, S.R. (1993). Correlates of learning in a virtual classroom. International Journal of Man-Machine Studies, 39, 71-98.

Hobbs, V.M., & Christianson, J.S. (1997). Virtual classrooms: Educational opportunity through two-way interactive television. Basel: Technomic Publishing.

Jaffee, D. (1998). Institutionalized resistance to asynchronous learning networks. Journal of Asynchronous Learning Networks, 2(2). Retrieved October 25, 2001, from:

Keegan, D. (1990). Foundations of distance education (2nd ed.). New York: Routledge.

Knox, W.E., Lindsay, P., & Kolb, M.N. (1993). Does college make a difference? Long-term changes in activities and attitudes. Westport, CT: Greenwood Press.

Maki, R.H., Maki, W.S., Patterson, M., & Whittaker, P.D. (2000). Evaluation of a web-based introductory psychology course: I. Learning and satisfaction in online versus lecture course. Behavior Research Methods, Instruments, & Computers, 32, 230-239.

Mood, T.A. (1995). Distance education: An annotated bibliography. Englewood, CO: Libraries Unlimited.

Moore, M.G., & Kearsley, G. (1996). Distance education: A systems view. Belmont, CA: Wadsworth Publishing.

National Center for Education Statistics (NCES). (1998). Issue brief: Distance education in higher educational institutions: Incidences, audiences, and plans to expand. Retrieved October 25, 2001, from

National Center for Education Statistics (NCES). (1999). Distance education at postsecondary education institutions: 1997-98 (NCES Publication No. 2000-013). Washington, DC: U.S. Government Printing Office.

Navarro, P. (2000). The promise-and potential pitfalls-of cyberlearning. In R. A. Cole (Ed.), Issues in web-based pedagogy (pp. 281-297). Westport, CT: Greenwood Press.

Navarro, P., & Shoemaker, J. (2000). Performance and perceptions of distance learners in cyberspace. The American Journal of Distance Education, 14(2), 15-35.

Neeley, L., Niemi, J.A., & Ehrhard, B.J. (1998). Classes going the distance so people don't have to: Instructional opportunities for adult learners. T.H.E. Journal Online, 26(4). Retrieved August 28, 2001, from:

Noble, D.F. (1998). Digital diploma mills: The automation of higher education. First Monday, 3(1). Retrieved October 25, 2001, from

Palloff, R.M., & Pratt, K. (1999). Building learning communities in cyberspace: Effective strategies for the classroom. San Francisco: Jossey-Bass.

Powers, S.M., Davis, M., & Torrence, E. (1999). Person-environment interaction in the virtual classroom: An initial examination. Paper presented at the National Convention of the Association for Educational Communications and Technology, Houston, TX. (ERIC Document Reproduction Service No. ED 436 185)

Rahm, D., & Reed, B.J. (1998). Tangled webs in public administration: Organizational issues in distance learning. Public Administration Management: An Interactive Journal, 3(1). Retrieved October 25, 2001, from:

Richards, C.N., & Ridley, D.R. (1997). Factors affecting college students' persistence in on-line computer-managed instruction. College Student Journal, 31, 490-495.

Shaw, S., & Polovina, S. (1999). Practical experiences of, and lessons learnt from, internet technologies in higher education. Educational Technology & Society, 2(2). Retrieved November 02, 1999, from:

Smith, P.L., & Dillon, C.L. (1999). Comparing distance learning and classroom learning: Conceptual considerations. The American Journal of Distance Education, 13(2), 6-23.

Wegerif, R. (1998). The social dimensions of asynchronous learning networks. Journal of Asynchronous Learning Networks, 2(1). Retrieved October 25, 2001, from:

Wetzel, C.D., Radtke, P.H., & Stern, H.W. (1994). Instructional effectiveness of video media. Mahwah, NJ: Lawrence Erlbaum.

Williams, W.M., & Ceci, S.J. (1997). "How'm I doing?" Problems with student ratings of instructors and courses. Change, 29, 12-23.


COPYRIGHT 2004 Association for the Advancement of Computing in Education (AACE)

Article Details
Author: Martindale, Trey
Publication: International Journal on E-Learning
Date: Jan 1, 2004