Comparison of student learning outcomes in online and traditional classroom environments in a psychology course.
Distance education (DE) is an increasingly popular solution to campus overcrowding and students' need for flexible schedules. Moving away from the traditional classroom environment has met with enthusiasm from some faculty, administrators, and students and with resistance from others. A primary potential benefit for institutions is more efficient use of resources, whereas students may benefit from increased critical thinking, leadership, communication, and problem-solving skills (Spangle, Hodne & Schierling, 2002; Swan, 2001). Correspondingly, critics have highlighted potential drawbacks of distance education for students, including increased isolation from peers, lack of engagement, and insufficient technical support. Empirical research validating these perspectives has been contradictory and sometimes methodologically flawed. Therefore, as the number of online course offerings in higher education increases, evaluating the extent to which the online learning environment may affect learning outcomes and/or student satisfaction should remain a research priority.
Early reviews of empirical studies comparing DE to traditional classroom learning found no difference between learning outcomes in these different settings (Russell, 1999; Phipps & Merisotis, 1999), and mounting evidence indicates many forms of learning environments may have the potential to be just as effective as traditional classroom learning experiences (Bernard et al., 2004; Tallent-Runnels et al., 2006). Moreover, a large-scale study based on the National Survey of Student Engagement (NSSE) found a positive relationship between the use of technology and students' self-reported learning outcomes (Chen, Lambert & Guidry, 2010).
Accordingly, several authors have established best practices for online learning based on these evaluations (e.g., Sunal, Sunal, Odell & Sundberg, 2003). Some of these include the importance of creating an online community, positive student attitudes toward online learning, high levels of professor interaction, and sufficient technical support. However, with so many variables potentially contributing to effectiveness of DE, relatively few studies have investigated the differences associated with learning environment alone. Furthermore, many studies that have tried to isolate these effects have included other variables that could potentially confound the results. For example, Borthick and Jones (2000) compared test scores of an online class and a traditional class taught during different semesters, which may pose a threat to the internal validity of the study. As Nora and Snyder (2008) conclude in their review of the literature related to e-learning and student outcomes, "there is a huge gap in the research literature specifically devoted to ... investigations of the link between technology and performance indicators/outcomes ..." (p. 16). This study is designed to help fill that gap.
In addition to learning outcomes, satisfaction with DE courses has been another prominent area of concern among researchers. Student satisfaction is important for retaining students in the major and at the institution, and it has also been linked to learning outcomes in many studies (Palmer & Holt, 2009; Smart & Cappel, 2006; Upton, 2006).
Results have been mixed when comparing student satisfaction in DE courses with that in face-to-face learning environments. For example, in a study comparing students in a business law class delivered online with those in a traditional classroom, Shelley, Swartz and Cole (2008) found that students in the face-to-face environment were significantly more satisfied with the course structure and instructor than comparable students online. Finlay, Desmet and Evans (2004) found that students receiving online instruction in an English composition class reported higher levels of satisfaction than those in the traditional classroom. However, other studies sometimes show the opposite. For example, Rivera and McAlister (2001) found students in online instruction were less satisfied than those in traditional classrooms.
One reason for these contradictions may be that student satisfaction is a multidimensional construct that involves several factors (Saade & Kira, 2006). Johnston, Killion and Oomen (2005) reviewed the literature for factors contributing to satisfaction with online learning and identified clarity and relevance of assignments, access to campus-based resources, availability of technical support, and orientation to the course, among others. Just as in a traditional classroom, students' satisfaction with their online learning experience results from the interaction of complex factors including, but not limited to, the learning environment.
Finally, questions remain regarding which types of classes can be effectively taught online. Presumably, courses that require direct interaction or guidance between instructor and student are least suited for distance learning, for example, laboratory courses in the natural sciences or performance courses in the arts. In the area of psychology, studies have investigated the efficacy of DE versus traditional face-to-face learning environments on collaborative learning (Francescato et al., 2006) and teaching counseling skills in a synchronous online environment (Rockinson-Szapkiw & Walker, 2009), but there are no studies focused on the typical courses that form the core of the undergraduate psychology curriculum.
The purpose of this study was to investigate whether the current findings regarding equivalent student learning but reduced satisfaction in online versus traditional classroom environments hold true in a psychology class taught to undergraduates.
Participants were two classes of students (N = 69) in Theories of Counseling taught during the spring semester at a small, public college on the East Coast. Both groups were generally representative of the undergraduate population in the area. The students taught in class had a mean age of 22.65 (SD = 6.07) and were 27% male and 73% female. They were 82% Caucasian, 3% African-American, 3% Hispanic, and 12% Other. Approximately 9% were sophomores, 56% were juniors, and 35% were seniors. The students taught online had a mean age of 24.13 (SD = 5.72) and were 19% male and 81% female. They were 78% Caucasian, 9% African-American, and 13% Hispanic. Approximately 13% were sophomores, 38% were juniors, and 42% were seniors.
The primary assessment instruments were 10-question multiple-choice quizzes taken from material in the text. Each class received identical quizzes and students in both classes were allowed to use their books. Quizzes were administered at the end of the week during which the chapter was discussed, and each class was limited to thirty minutes for answering the questions.
The assessment instrument measuring student satisfaction with the course and instructor was the IDEA, a standardized instrument commonly used for student feedback in higher education settings.
Students in both classes were informed at the beginning of the semester that quizzes would be open-book, limited to thirty minutes, and count toward the final course grade. Both classes were also informed that quizzes would take place at the end of the relevant week. The traditional class took their quizzes during the usual class time. The online class had a window of thirty hours during which to take their quizzes.
Students in both classes were asked to rate the quality of the course and the quality of the instruction at the end of the semester using the IDEA. Multiple items were combined to produce an overall score on a five-point scale. The traditional class responded using paper and pencil, whereas the online class was emailed a link to an online form.
Students in the online and in-class conditions did not differ significantly in age (t(64) = -1.016, p > .05), gender (χ²(1, N = 66) = .56, p > .05), ethnicity (χ²(3, N = 66) = 6.92, p > .05), or class standing (χ²(2, N = 66) = 2.24, p > .05).
The quiz scores were analyzed in two steps. First, the initial quiz scores were examined to determine whether the two groups were comparable on the first quiz. Since the participants self-selected into the traditional course and the distance education course, this check addressed the question of performance with little exposure to online versus traditional instruction. Levene's test for homogeneity of variance revealed unequal variance between the groups (F(1, 65) = 14.7, p < .001). The subsequent independent-groups t-test therefore did not assume equal variances and showed a significant difference, t(~48) = 2.44, indicating that the performance of the distance education group (M = 8.06, SD = .95, n = 34) was reliably superior to that of the traditional group (M = 7.18, SD = 1.85, n = 33). This result was surprising and may suggest that students in the distance education group were better prepared for, or otherwise better suited to, the first quiz.
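The unequal-variance (Welch) t statistic above can be reproduced from the reported summary statistics alone. A minimal sketch in Python (the language is our choice for illustration, not part of the original analysis):

```python
import math

def welch_t(m1, sd1, n1, m2, sd2, n2):
    """Welch's t-test computed from summary statistics (unequal variances)."""
    v1, v2 = sd1**2 / n1, sd2**2 / n2  # squared standard errors of each mean
    t = (m1 - m2) / math.sqrt(v1 + v2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df

# Reported first-quiz statistics: distance education vs. traditional group
t, df = welch_t(8.06, 0.95, 34, 7.18, 1.85, 33)
print(round(t, 2))  # t ≈ 2.44, on approximately 47-48 df
```

The recovered t of about 2.44 on roughly 47.5 degrees of freedom matches the reported t(~48) = 2.44.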
Subsequent analyses were conducted on the remaining twelve quiz scores. Since each quiz covered a separate chapter, scores were combined into blocks of three consecutive quizzes in order to smooth out difficulty differences among the chapters. The resulting design was a 2 (Groups) by 4 (Blocks of quiz scores) mixed ANOVA. Box's test was first conducted to verify that the covariance matrices were equal across groups, and this assumption proved reasonable (F(10, 14419) = 1.07, p = .38). Levene's test for equality of error variance showed no significant difference at any of the four blocks, and Mauchly's test indicated no significant departure from sphericity, suggesting that the error covariance matrix is proportional to the identity matrix.
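The blocking step can be made concrete with a small sketch. The per-quiz scores below are invented for illustration, and we assume, as the reported block means near 23 out of a possible 30 suggest, that each block score is the sum of three consecutive 10-point quizzes:

```python
# Hypothetical sequence of twelve 10-point quiz scores for one student
quiz_scores = [8, 7, 9, 6, 8, 8, 7, 9, 8, 8, 7, 9]

# Collapse into four blocks of three consecutive quizzes each,
# smoothing chapter-to-chapter difficulty differences
blocks = [sum(quiz_scores[i:i + 3]) for i in range(0, len(quiz_scores), 3)]
print(blocks)  # four block scores, each out of a possible 30
```

Each student thus contributes four block scores to the repeated-measures factor of the mixed ANOVA, rather than twelve noisier single-quiz scores.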
There were no reliable differences between the groups (F(1, 55) = .02, p > .05), with the traditional group scoring almost identically (M = 23.4, SE = .50, n = 28) to the distance education group (M = 23.5, SE = .49, n = 29). There was a rather large effect of blocks (F(3, 165) = 21.25, p < .001), indicating that student performance differed across blocks of material; most importantly, however, no reliable Groups by Blocks interaction was observed (F(3, 165) = 2.37, p > .05), indicating that the groups' performance did not reliably diverge over the blocks of three quizzes (see Figure 1).
[FIGURE 1 OMITTED]
Course and instructor satisfaction scores were available only in the aggregate, so they could not be evaluated with inferential statistics. However, the online and in-class groups differed dramatically on both dimensions. The in-class group reported a total satisfaction rating of 4.7 for the course and 4.8 for the instructor, whereas the online group reported 4.0 for the course and 3.8 for the instructor.
The results of this study are consistent with other research showing no difference between the learning outcomes of students in a traditional classroom and those of students in an online environment. They are also consistent with research showing lower satisfaction ratings among students in an online environment than among their peers in a traditional classroom. Undergraduates, it appears, may perform as well in an online environment as their counterparts in a traditional classroom, but their satisfaction with the educational experience may suffer.
Strengths and Limitations
The two primary strengths of this study are the control of most extraneous variables and the generalizability of the results. Aside from the learning environment, the instructor, semester, and course content were held constant and therefore could not have differentially influenced student performance or satisfaction. This level of control has not always been evident in other investigations seeking to examine differences related to learning environment.
In addition, students self-selected their own learning environments. While the lack of random assignment perhaps produced groups that were not equivalent at the outset in variables other than basic demographics, it was also a strength of this study in terms of producing groups likely to be representative of other online and traditional classes.
A notable limitation in this study is the small sample size, reducing the power of the analysis, and it is possible that larger comparison groups may have shown differences in learning outcomes not identified in this case. On the other hand, the differences in satisfaction with the course and instructor found in this small sample imply that learning environment was strongly associated with level of satisfaction.
Future research should continue to try to identify which variables are associated with increased student learning and satisfaction in which learning environments. In addition, it is important to determine more specifically how learning outcomes and satisfaction are related, or why, in some learning environments, they may not be related.
As higher education continues to move toward a cafeteria model, striving to meet the needs of a diverse student body who require more choices in their educational experience, it becomes increasingly important to identify precisely which types of students learn which types of material best, and are most satisfied, in which learning environments.
Bernard, R. M., Abrami, P. C., Lou, Y., Borokhovski, E., Wade, A., Wozney, L., Wallet, P. A., Fiset, M. & Huang, B. (2004). How does distance education compare with classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research, 74(3), 379-439.
Borthick, A. F. & Jones, D. R. (2000). The motivation for collaborative discovery learning online and its application in an Information Systems Assurance course. Issues in Accounting Education, 15(2), 181-210.
Chen, P. D., Lambert, A. D. & Guidry, K. R. (2010). Engaging online learners: The impact of Web-based learning on college student engagement. Computers & Education, 54, 1222-1232.
Finlay, W., Desmet, C. & Evans, L. (2004). Is it the technology or the teacher? A comparison of online and traditional English composition classes. Journal of Educational Computing Research, 31(2), 163-180.
Francescato, D., Porcelli, R., Mebane, M., Cuddetta, M., Klobas, J. & Renzi, P. (2006). Evaluation of the efficacy of collaborative learning in face-to-face and computer-supported university contexts. Computers in Human Behavior, 22, 163-176.
Johnston, J., Killion, J. & Oomen, J. (2005). Student satisfaction in the virtual classroom. The Internet Journal of Allied Health Sciences and Practice, 3(2). Available at: http://ijahsp.nova.edu/articles/vol3num2/johnston.htm
Lim, J., Kim, M., Chen, S. S. & Ryder, C. E. (2008). An empirical investigation of student achievement and satisfaction in different learning environments. Journal of Instructional Psychology, 35(2), 113-119.
Nora, A. & Snyder, B. P. (2008). Technology and higher education: The impact of e-learning approaches on student academic achievement, perceptions and persistence. Journal of College Student Retention, 10(1), 3-19.
Palmer, S. R. & Holt, D. M. (2009). Examining student satisfaction with wholly online learning. Journal of Computer Assisted Learning, 25, 101-113.
Parsons-Pollard, N., Lacks, R. D. & Grant, P. H. (2008). A comparative assessment of student learning outcomes in large online and traditional campus-based introduction to criminal justice courses. Criminal Justice Studies, 21(3), 239-251.
Phipps, R., & Merisotis, J. (1999). What's the difference? A review of contemporary research on the effectiveness of distance learning in higher education. Washington, DC: Institute for Higher Education Policy.
Rockinson-Szapkiw, A. J. & Walker, V. L. (2009). Web 2.0 technologies: Facilitating interaction in an online human services counseling skills course. Journal of Technology in Human Services, 27(3), 175-193.
Russell, T. L. (1999). The no significant difference phenomenon. Chapel Hill: Office of Instructional Telecommunications, University of North Carolina.
Saade, R. G. & Kira, D. (2006). The emotional state of technology acceptance. The Journal of Issues in Informing Science and Information Technology, 3, 529-539.
Shelley, D. J., Swartz, L. B. & Cole, M. T. (2008). Learning business law online vs. onland: A mixed method analysis. International Journal of Information and Communication Technology Education, 4(2), 54-66.
Smart, K. L. & Cappel, J. J. (2006). Students' perceptions of online learning: A comparative study. Journal of Information Technology Education, 5, 201-219.
Spangle, M., Hodne, G. & Schierling, D. (2002). Approaching value-centered education through the eyes of an electronic generation: Strategies for distance learning. 1-26. (ERIC Document Reproduction Service No. ED474581).
Sunal, D. W., Sunal, C. S., Odell, M. R. & Sundberg, C. A. (2003). Research-supported best practices for developing online learning. The Journal of Interactive Online Learning, 2(1), 1-40.
Swan, K. (2001). Virtual interaction: Design factors affecting student satisfaction and perceived learning in asynchronous online courses. Distance Education, 22, 306-331.
Tallent-Runnels, M. K., Thomas, J. A., Lan, W. Y., Cooper, S., Ahern, T. C., Shaw, S. M. & Liu, X. (2006). Teaching courses online: A review of the research. Review of Educational Research, 76(1), 93-135.
Upton, D. (2006). Online learning in speech and language therapy: Student performance and attitudes. Education for Health, 19, 22-31.
Zhang, D. (2005). Interactive multimedia-based e-learning: A study of effectiveness. American Journal of Distance Education, 19(3), 149-162.
Jennifer Lyke, Ph.D., Associate Professor of Psychology, Psychology Program, School of Social and Behavioral Sciences; Michael Frank, Ph.D., Professor, Psychology Program, School of Social and Behavioral Sciences, Richard Stockton College of New Jersey.
Correspondence concerning this article should be addressed to Dr. Jennifer Lyke at Jennifer.firstname.lastname@example.org.
Authors: Jennifer Lyke and Michael Frank
Publication: Journal of Instructional Psychology
Date: September 1, 2012