Comparing online and traditional classes.
This study compared the experiences of students in online and traditional statistics classes. A two-condition quasi-experiment with a pretest and a post-test was employed to compare performance, attitudes and satisfaction between two groups of learners. Results indicate that exam scores for the students in the traditional group were significantly higher than were the scores for the online group on two of five exams. Student attitudes and level of satisfaction with the two methods of course delivery did not vary significantly.
Online courses are an important and growing part of higher education. Proponents of online education note that computer technology has provided opportunities for learning that were previously unavailable to those not well served by traditional brick-and-mortar universities. Opponents see online courses as a fringe activity and worry that technology is guiding pedagogy (Peterson, 2001). Whatever one's position, the fact is that the distinction between the traditional classroom and online instruction will continue to blur as traditional classes add online components and online courses gain mainstream respectability. In this climate, investigations aiming to uncover individual, situational and institutional factors that influence student performance and satisfaction in new learning environments will become increasingly important.
Despite numerous studies on distance education in general and web-based learning in particular, the research centered on comparisons between traditional and online instruction is in its infancy. Much of the existing work focuses on student experiences in the online condition and tends to avoid direct comparisons with analogous traditional learning environments (Huang, 2002; Teh, 1999). Another set of studies in this area focuses on the instructors' experiences. Smith, Ferguson and Caris (2001) interviewed 21 online instructors with the goal of describing the faculty members' perspective. They found that online classes are not necessarily alienating experiences for students, but provide intellectually stimulating forums which create a sense of equality between professor and student. Wang and Newlin (2002) used their experience as instructors of web-based psychology classes to offer useful advice to online educators about identifying and helping low-performing cyber-students. They suggest course assessment quizzes, cyber-study groups and heightened social presence by the instructor as techniques for assisting online students.
In one experimental investigation Schulman and Sims (1999) set out to extend the earlier work of Schutte (1996) by comparing pre- and post-test scores of students in online and traditional classes. Their results, however, were inconsistent with the work they replicated. While Schutte found online students performed significantly better than their in-class counterparts, Schulman and Sims concluded that the learning of the two groups of students in their sample was equal. Schutte notes that his online students' frustration with their inability to ask questions in a face-to-face environment led them to form study groups, which may have contributed to their higher test scores; there is no evidence of heightened student-to-student interaction among the online students in the Schulman and Sims study.
MacGregor (2001) also compared students in online and traditional classes. Using survey questionnaires containing both closed- and open-ended items, she evaluated how the two groups of students rated various aspects of the classes, including workload, satisfaction, comfort level and perceived amount of learning. While students in the online and traditional classes gave similar ratings of amount of learning and satisfaction, the online students had lower comfort levels and perceived the workload to be higher than did the students in the traditional classes. The two types of classes studied in this case were taught by various instructors from three different disciplines, thus making the comparisons less than ideal.

The present research contributes to the emerging literature that compares students' experiences in online and traditional classes. Using a research design that employs pre- and post-test measures and controls for the instructor effect, the goal was to examine what differences, if any, exist in class performance and attitudes between students in online and traditional classes. To this end the following research questions were posited:
RQ1: How does students' learning of course material in online classes compare to students' learning in traditional classes?
RQ2: How do students' attitudes regarding coursework, learning and overall satisfaction in online classes compare to those of students in traditional courses?
The methodology employed in this study was a two-condition quasi-experiment with a pretest and a post-test. The participants were students enrolled in an introductory business statistics course at a mid-sized California university. The subjects self-selected into either the online or the traditional section of the course. On the first day of each class the students were told about the research project and were invited to participate; 41 students from the traditional class opted to participate in the project and 46 students in the online group agreed to take part.
Traditional class The traditional class may also be known as a "chalk-and-talk" class. The professor and the students met in the same physical location three times a week to engage in lecture, discussion and activities related to the course goals. While the professor did occasionally augment the lecture with computer demonstrations and sometimes answered student questions via email, the bulk of the communication between the professor and students took place in the traditional face-to-face environment of the classroom.
Online class The online class was the same introductory business statistics course taught by the same professor as in the traditional class. The professor met the online students in the classroom five times during the term: on the first day of class, for one midterm exam, for two exam review sessions and again for the final exam. Other than these meetings, all communication between the professor and students took place via the internet. Lectures and homework assignments were posted on the course web site; discussions among the students and between the students and the professor took place on the same course web site; and the professor answered student questions via email.
Pretest A pretest covering the course content was administered to students in both the online and traditional classes on the first day of class, in the classroom. The students were told that the test was a diagnostic instrument and would not be factored into their course grades; however, they were asked to try their best to answer the questions accurately. This test was scored on a 0-to-100-point scale, as were the remaining tests.
Midterm exams The students in both sections of the class also took three midterm exams. These exams were administered during the third, sixth and ninth weeks of the term. The online students were required to take the second of these midterm exams on campus; they completed the remaining two midterms online.
Post-test The course final exam was used as the post-test measurement. This exam covered much of the same material as was covered on the pretest administered at the start of the 10-week term. As with the pretest and the second midterm exam, the online students took the final exam in the classroom. Learning was operationalized as the difference between the students' pretest and post-test scores. An increase in the post-test scores was used as an indicator that learning of the course material had taken place.
Questionnaire In addition to the exams, students in both conditions responded to a questionnaire covering the following sets of variables: 1) Previous experience with online courses, 2) Familiarity and comfort level with computer technology, 3) Motivations for taking the course, 4) Confidence with respect to the course material, and 5) Demographic characteristics. Most questions were closed-ended and provided respondents with Likert-type response scales.
Sample Description In both classes, juniors and seniors made up the majority of the sample (see Table 1, issue website http://rapidintellect.com/AEQweb/fal2005.htm). There were approximately equal numbers of men and women in the traditional class, while in the online group women outnumbered men (69% women; 31% men). A chi-square test indicates no significant relationship between class condition (traditional or online) and gender (chi-square = 3.32, p = .073). The age ranges of the two groups were similar (18-46 for the traditional class; 20-45 for the online class), as were the mean ages (traditional = 25.8; online = 28.6). The difference between the mean ages of the two groups was not statistically significant (t = -1.33, p = .188).
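For readers who wish to reproduce this kind of analysis, the gender-by-condition comparison is a standard chi-square test of independence on a 2x2 contingency table. The sketch below, in Python with SciPy, uses cell counts back-calculated from the reported group sizes and percentages; the true counts are not published, so the resulting statistic only approximates the reported value of 3.32.

```python
# Chi-square test of independence: class condition vs. gender.
# Cell counts are illustrative, back-calculated from the reported group
# sizes (41 traditional, 46 online) and percentages (~69% women online).
from scipy.stats import chi2_contingency

observed = [
    [20, 21],  # traditional class: men, women (approximately equal)
    [14, 32],  # online class: men, women (31% / 69%)
]

# correction=False yields the uncorrected Pearson chi-square statistic;
# by default SciPy applies Yates' continuity correction to 2x2 tables.
chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```

With these assumed counts the statistic comes out near 3.1 with p above .05, consistent with the paper's conclusion that the gender difference between conditions is not significant.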
Comparing exam scores at the start of the class to scores at the end of the term (see Table 2, issue website http://rapidintellect.com/AEQweb/fal2005.htm) reveals that students in both conditions made significant gains. The students in the traditional class scored 46.39 points higher on the post-test than on the pretest (t = -17.09, p < .001). The online students likewise improved significantly, earning 36.53 points more on the post-test than on the pretest (t = -11.49, p < .001).
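These within-group gains are paired (dependent-samples) t-tests on each student's pretest and post-test scores. A minimal sketch of the calculation, using invented scores for five hypothetical students since the study's raw data are not published:

```python
# Paired t-test: did post-test scores rise significantly above pretest
# scores for the same students? Scores are invented for illustration.
from scipy.stats import ttest_rel

pretest  = [30, 40, 35, 45, 38]   # 0-100 scale, first day of class
posttest = [80, 85, 78, 90, 82]   # same five students, end of term

t_stat, p_value = ttest_rel(posttest, pretest)
mean_gain = sum(b - a for a, b in zip(pretest, posttest)) / len(pretest)
print(f"mean gain = {mean_gain:.1f} points, t = {t_stat:.2f}, p = {p_value:.4f}")
```

The paired test is appropriate here because the pretest and post-test scores come from the same students, so each student serves as his or her own control.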
Switching now to comparisons between the traditional and online students, Table 3 (see issue website http://rapidintellect.com/AEQweb/fal2005.htm) shows that the pretest scores for the two groups were virtually identical at the start of the term. As the course progressed, however, differences became more apparent. While the mean exam scores for the students in the traditional section of the course were higher than those for the online students on each of the four remaining tests, t tests indicate significant differences for test #2 and the final exam only (test #2: traditional group mean = 85.49, online group mean = 74.38, t = 2.70, p < .01; final exam: traditional group mean = 85.61, online group mean = 77.00, t = 2.53, p < .01).
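These between-group comparisons are independent-samples t-tests on the two groups' exam scores. The sketch below uses the Welch variant, which does not assume equal group variances; the paper does not state which variant was used, and the scores are invented for illustration:

```python
# Welch's independent-samples t-test comparing exam scores between two
# separate groups of students. Scores are invented for illustration.
from scipy.stats import ttest_ind

traditional = [85, 90, 82, 88, 84, 86]
online      = [74, 70, 78, 72, 80, 76]

# equal_var=False selects the Welch variant, which does not assume
# that the two groups have equal variances.
t_stat, p_value = ttest_ind(traditional, online, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```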
Students in both conditions indicated a strong degree of agreement when asked if the course was what they expected, if it was well organized, if they understood the grading process and if they were given an opportunity to ask questions and get help when they needed it (see Table 4, issue website http://rapidintellect.com/AEQweb/fal2005.htm). As anticipated, students in the online condition reported rarely relying on classmates for help with homework. Surprisingly, students in the traditional class also said they seldom relied on classmates for help with homework.
The opinion items that show the largest differences between the students in the two classes are the questions about understanding the course content and recommending the class to other students. While 91% of the students in the traditional class said they understood the course content, only 69% of the online students reported understanding the content (chi-square = 3.61, p < .05). Similarly, 92% of the traditional students said that they would recommend the course to another student; 66% of the online students said that they would recommend the course (chi-square = 4.82, p < .05). With respect to the overall course satisfaction question, 92% of the students in the traditional class said that they were satisfied or very satisfied with the course; 69% of the students in the online section said that they were satisfied or very satisfied with the course (chi-square = 3.86, p < .05).
Students in the online class were asked why they signed up for this particular online course. As Table 5 (see issue website http://rapidintellect.com/AEQweb/fal2005.htm) suggests, overall convenience and the more specific convenience of being able to manage school and work effectively were strong motivations for signing up for the class. When asked if they would take another online class, 72% of the sample said "yes." Finally, 76% said they thought the University Statistics Department should continue to offer this particular course as an online class.
Online courses are reported to be particularly well suited to meet the needs of students who have responsibilities beyond school that make it difficult for them to attend regular university classes. The self-directed format and general flexibility allow them to pursue a degree without compromising work and family obligations. This study evaluated two groups of learners: one in the traditional classroom setting and one in an online environment. Attempts were made to create analogous experiences for the two groups by assigning the same professor, text, lecture material, homework assignments and exams to all the study participants. While the students self-selected into the two conditions, the demographic characteristics of the groups show striking similarity. The freshman, sophomore, junior, senior and graduate students in both classes were similarly distributed, with juniors and seniors as the majority in both conditions.
Only gender appeared to differ: women made up a greater proportion of the online group than men, while the numbers of men and women in the traditional class were approximately equal. When subjected to statistical analysis, however, the relationship between class condition and gender did not prove to be statistically significant. Among the other demographic variables, the average age of the online students seemed as if it might pose a troubling difference, as the mean age of the online students was 2.8 years greater than that of the traditional students; on further analysis, this, too, was not a statistically significant difference between the groups. While still lacking the desirable feature of random assignment of subjects to conditions, it is at least safe to say that the two groups of students in this study are comparable in their demographic characteristics.

Using the increase in post-test exam scores as a measure of learning, it is evident that students in both conditions learned the course material. Both groups showed statistically significant increases in their post-test scores. Although a portion of this increase may be attributable to the testing effect, both groups of students were subjected to the same five exams throughout the quarter; therefore any result due to the testing effect would be matched for the two groups.
Both the traditional and online groups of students started the term with equally low content knowledge, as indicated by the almost identical pretest scores of the two groups. Differences between the groups were apparent by the first exam, administered during the third week of the quarter. On this exam, as well as on the remaining exams, the traditional class had higher mean scores. The scores were significantly higher only on the second midterm and the final exam, however. It is important to note that the two exams that showed significant differences between the groups (the second midterm and the final) were both administered to the online students in the classroom. This may have put the online students at a disadvantage: they had become accustomed to approaching the course material in the online environment, which may have made the face-to-face setting of the classroom exams disconcerting.

The differences between the two classes were not as marked on the opinion measures. With the exception of two items, students in both conditions reported similar degrees of agreement on the survey questions. Their expectations about the course, opinions about the instructor and understanding of the grading process were all consistent between the two groups. Both groups of students also had a high percentage of disagreement with the statement, "I rely on my classmates for help with homework." It seems that the face-to-face setting of the traditional classroom (and the lack of interpersonal interaction in the online condition) had no impact on the students' reliance on each other for help with the course material.
The two opinion items that did yield significant effects were understanding the course content and recommending the class to other students. On both of these items the students in the traditional class reported greater understanding of the course material and a greater likelihood of recommending the course. Consistent with this were the results of the question about overall satisfaction: students in the traditional class were more satisfied with their experience than were students in the online section of the course.

Another notable similarity between the two groups is the low rate of attrition during the ten-week term. The traditional class lost five students while six dropped out of the online class. This deviates from the pattern of high dropout rates in online instruction (Merisotis, 1999). Perhaps participation in the research motivated students who might otherwise have given up on the class to stay until the quarter ended. Regrettably, the students who dropped the course were neither contacted nor interviewed about why they chose to drop. Follow-up interviews with students who drop courses should be integral to future research in this area. As Merisotis notes, students who are struggling are the ones most likely to drop a class; therefore the research results may be biased toward those who are successful.
It is important to emphasize that the online class described here deviated from the strict definition of an online class. Students were required to be on campus on the first day of class, for one exam and for the final exam; recall that two of the exams for this class were administered online. During the quarter the professor added two optional exam review sessions for the online class. Thus, the students were required to be on campus three times during the quarter and had the option to be on campus for two additional meetings. Although this structure precluded distance learners from participating in the class, it represented a good balance of face-to-face and online interaction between the professor and students.

This research demonstrates that the experiences of students in traditional and online classes are not as dissimilar as one might guess. Despite the difference in exam scores between the two groups in this study, students learned in both formats. The field of online learning research contains much acreage yet to be plowed; however, it is becoming clear that future research need not focus so much on whether students can learn in online classes; rather, it might be more fruitful to investigate the factors that contribute to satisfaction with the different learning environments.
Huang, H-M. (2002). Student perceptions in an online mediated environment. International Journal of Instructional Media, 29, 405-423.
MacGregor, C.J. (2001). A comparison of student perceptions in traditional and online classes. Academic Exchange Quarterly, 5, 143-148.
Merisotis, J.P. (1999). The "What's-the-difference?" debate: Outcomes of distance vs. traditional classroom-based learning. Academe, 85, 47-51.
Schulman, A.H., & Sims, R.L. (1999). Learning in an online format versus an in-class format: An experimental study. Technological Horizons in Education, 26, 54-57.
Schutte, J.G. (1996). Virtual teaching in higher education: The new intellectual superhighway or just another traffic jam? http://www.csun.edu/sociology/virexp.htm.
Smith, G.G., Ferguson, D., & Caris, M. (2001). Online vs. face-to-face. Technological Horizons in Education, 28, 18-22.
Teh, G.P.L. (1999). Assessing student perceptions of internet-based online learning environments. International Journal of Instructional Media, 26, 397-402.
Wang, A.Y., & Newlin, M.H. (2002). Predictors of performance in the virtual classroom: identifying and helping at-risk cyber-students. Technological Horizons in Education, 29, 21-27.
This format would more correctly be classified as a hybrid rather than an online class. The review sessions were unplanned when the class began. The professor added these meetings in response to student requests for extra help in preparing for the exams.
 These items were measured on a four-point scale, ranging from strongly agree on one end to strongly disagree on the other. The strongly agree and agree responses were combined as were the strongly disagree and disagree answers.
Valerie M. Sue, Ph.D. is an Assistant Professor in the Department of Communication at California State University, East Bay.
Author: Valerie M. Sue
Publication: Academic Exchange Quarterly
Date: Sep 22, 2005