Online versus in-class courses: an examination of differences in learning outcomes.

There has been a steady increase in the number of students taking online courses. It was estimated that there were "3,077,000 [students] enrolled in all distance education courses ... in 2000-2001. Fifty-six percent of two and four-year degree-granting institutions offered some type of distance learning and 90 percent of those institutions deliver at least some of their courses via the internet" (D'Orsie & Day, 2006, p. 1). Moreover, as of fall 2007, nearly 4 million students participated in online courses, and 30% of institutions with education-related degrees (teaching credentials and graduate degrees) had completely online programs (Allen & Seaman, 2008). Recent data indicated that the University of Phoenix online program ranked fourth among institutions producing baccalaureate degrees for minority students (Borden, 2009). Given this steady increase in online courses, more and more universities see the need to examine how to offer their curriculum online in order to continue to attract students.

Berge (1998) noted, "impediments to online teaching and learning can be situational, epistemological, philosophical, psychological, pedagogical, technical, social, and/or cultural" (p. 2). One significant barrier to teaching online courses has been faculty concern. Some faculty perceive that while teaching online may increase enrollment and interest in a program, it does so at the risk of decreasing student learning. Moore (2007) argued that "Administrators must also perform the particularly difficult task of channeling their faculty away from typical classroom roles and into those more appropriate for the information age" (p. ix). To address concerns about the level of student learning, this study compared online learning with traditional in-class learning.

Much of the research in the area of online teaching and learning has not focused on learning outcomes or academic achievement. A considerable portion of the work in this area focused on issues related to teaching online such as barriers to online teaching, advantages and disadvantages of taking or teaching an online class, "how-to" descriptive articles, and social issues in online courses.

Many studies described issues such as how to teach an online class or examined the pitfalls of teaching online. For example, Berge (1998) focused on the barriers to online teaching from a policy standpoint but also included a list of the advantages and disadvantages of online education. Similarly, D'Orsie and Day (2006) offered a list of 10 tips for teaching a web course. In addition, numerous books have been written that provide information on facilitating online learning (e.g., Collison, Elbaum, Haavind, & Tinker, 2000; Salmon, 2004).

A review of the literature revealed that the majority of articles about online teaching focused on improving engagement or social situations online. Oliver (1999) looked at issues of engagement online. Tuckman (2005) focused on how to motivate procrastinators online. Taylor and Maor (2000) examined constructivist learning online, while Waltonen-Moore, Stuart, Newton, Oswalk, and Varonis (2006) discussed creating a cohesive learning environment online. These and many other articles' findings supported Holmberg's (2007) theory that personal relationships promote student motivation and learning. Similarly, Menchaca (2008) discussed the importance of using multiple technologies to appeal to multiple learning styles, as well as the importance of collaboration, reflection, and building a learning community in order to facilitate successful online learning. Finally, McCrory, Putnam, and Jansen (2008) studied teaching and learning in two online courses for teachers in a master's degree program, which was in line with the present study, but their work focused on discourse and the impact of online discussions.

Some research articles that focused on online learning had limited sample sizes or examined subject areas not related to education. For example, Schutte's (1997) study included 37 undergraduate students who were randomly assigned to either the online class or the in-class group. He compared the two groups' learning through the use of exams; both groups took the exams in class. His results revealed that the online group scored 20% higher than the traditional group. McCollum's (1997) review of Schutte's work further supported these findings.

Davies and Mendall (1998) also studied teaching and learning in undergraduate online classes. Although their findings supported the idea that online and traditional students performed equally well on the course assessments, Davies and Mendall's work had a small sample size of only two full-time online students.

More recently, Johnson, Aragon, Shaik, and Palma-Rivas (2000) studied learning outcomes and student satisfaction in an online human development graduate course and in a traditional face-to-face course. They found that although the students in the on-campus course had more positive perceptions about the instructor and the overall course quality, there was no difference between the two course formats in several measures of learning outcomes. Like Davies and Mendall (1998), they had a small sample size with only 19 students in each class.

On the other hand, Connolly, MacArthur, Standsfield, and McLellan (2005), Kartha (2006), and Koory (2003) all used slightly larger sample sizes over longer periods of time, but the subject areas ranged from Shakespeare to computers to business, and the courses were geared toward undergraduate students. Finally, Legutko's (2007) study was one of the few research articles that focused on a graduate course in education, but again with a small sample size of 32 face-to-face students and 29 online students.

Clearly, there has been a wide variety of work and views on the issue of teaching and learning online, but much of it focused on the types of instructional methods used when teaching online. This focus is problematic because, as stated previously, some faculty members "are suspicious of online courses [,] ... have significant reservation about the loss of face-to-face contact, ... and because distance education was previously viewed as an inferior form of education" (Connolly et al., 2005, p. 2). Faculty concerns, small sample sizes, and the lack of focus on the field of education, coupled with the growing number of online programs in education, point to a growing need for more research in this area.

In an effort to address some of the limitations of the previous research as well as faculty concerns about student learning, this study focused on learning outcomes and compared an online course and a traditional face-to-face course in an educational research class designed for students pursuing a Master of Science degree in Education.

Method

To explore issues of learning in online courses versus traditional courses with students who were enrolled in a master's degree program in education, three online courses were compared to three traditional face-to-face courses. These courses were taught over a two-year period. A total of 71 graduate students took part in the online classes (25, 23, and 23 in each of the classes) and a total of 69 took part in the face-to-face sessions (25, 22, and 22). Participants in the online courses each took part in 15 weeks of class sessions entirely online. The online courses allowed for asynchronous learning. For the traditional face-to-face students, classes were held once a week for 3-hour sessions for 15 weeks. Teaching methods in the traditional classes included small and large group work and discussions, in-class writing activities, and direct instruction using PowerPoint slides.

The online methods of instruction included small and large discussion board activities, written activities submitted via email, small group and individual activities, and direct instruction using PowerPoint slides with audio voiceovers. These were the same slides used in the traditional classes; the lectures presented to the in-class students were recorded and used as the voiceovers for the online instruction. The instruction in the two course types was matched in every way except that students completed the work either online or in class. The instructor was the same for all courses, and all papers were graded by the same evaluator to ensure consistency and reliability.

Students enrolled in this degree program were instructed to take the course during their first semester in the program. The course content focused on descriptive statistics and statistical inferences in educational research. In addition, students learned the principles of research design.

Experimenter or subject effects such as the Hawthorne effect were not limitations of this work because the study was conceived after the courses were completed. As a result, the students' performance and the instructor's grading could not have been influenced by involvement in the study.

Participants

Participants were enrolled in a public institution in the Southwest. Given state requirements, each participant had previously completed a bachelor's degree and teaching credential (post-baccalaureate). The participants mirrored the nation's demographic trends for elementary school teachers in that the majority were female (127 of the 140 participants) with varying experience in the teaching field. The participants were elementary or middle school teachers. The students self-selected into the courses.

Data Sources

Participants supplied three main sources of data for analysis. First, exam grades were compared. Both groups (online and in-class) were given the same midterm and the same final exam. Both exams were multiple choice and completed online. The midterm included 40 questions, and participants had 50 minutes to complete the exam. The final exam included 40 questions, and students had 55 minutes to complete it. The final was cumulative. In addition, participants were required to write a literature review and one additional shorter paper (a mini-literature review comparing only two articles). Grades received on these papers were also compared. The papers were graded using a rubric (see Appendices A & B). A maximum score of 25 was possible for the mini-literature review, and a maximum score of 50 was possible for the literature review. Students lost points only for repeated errors: no point was subtracted for the first error in a given rubric area, but one point was subtracted from that area once a second error was made. Finally, students in both courses were asked to take part in an anonymous end-of-course survey of course satisfaction (Appendix C).
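
To make the grading rule concrete, the following minimal sketch (in Python) expresses one reading of it: the first error in a rubric area carries no penalty, and each error after the first costs one point from that area. The function name, the per-error deduction, and the floor at zero are illustrative assumptions rather than a description of the instructor's actual procedure.

```python
def area_score(max_points: int, errors: int) -> int:
    """Score one rubric area under one reading of the stated rule:
    no penalty for the first error; one point deducted from the
    area for each error after the first (floor at zero is assumed)."""
    deduction = max(0, errors - 1)
    return max(0, max_points - deduction)

# Example: a 10-point Analysis area with three errors would
# score 10 - 2 = 8 under this reading.
print(area_score(10, 3))  # prints 8
```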

Analyses

Data were analyzed using descriptive and inferential statistics. More specifically, means, standard deviations, Pearson correlation tests, and t-tests for independent samples were used (p < .05) to determine significance when comparing scores on the exams and written work. The data collected from the survey were used to highlight and clarify the numeric findings.
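
As a point of reference, the sketch below shows how the descriptive statistics and the independent-samples t-test described above could be computed with standard tools (here, Python with numpy and scipy). The score arrays are hypothetical placeholders rather than the study's data; only the p < .05 criterion is taken from the text.

```python
import numpy as np
from scipy import stats

# Hypothetical score arrays standing in for the two groups'
# grades on one assessment (not the study's actual data).
online = np.array([44, 46, 43, 45, 42, 47, 44, 43])
traditional = np.array([45, 44, 43, 46, 42, 44, 45, 43])

# Descriptive statistics: means and sample standard deviations.
print(online.mean(), online.std(ddof=1))
print(traditional.mean(), traditional.std(ddof=1))

# t-test for independent samples at the p < .05 criterion.
t_stat, p_value = stats.ttest_ind(online, traditional)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, "
      f"significant = {p_value < 0.05}")
```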

Results

The results from the study were mixed. The results are reported in three sections: papers, exams, and survey data. The limitations of the study are also discussed. Based on the limitations, one additional section on course interaction was added to the analysis.

Papers

In analyzing the paper grades, no significant difference was found between the two groups of students (online vs. traditional). The mean grade on the literature review was about the same for the online students and the traditional students, and the standard deviations were also about the same (see Table 1). There was no significant difference between the online and the traditional students on the literature review grades (see Table 2). Similarly, for the mini-literature review (mini-lit) paper, there was no significant difference between the two groups (see Table 2). With similar standard deviations, the data indicated that learning, as measured by the paper grades, was the same whether instruction occurred in class or online.

Exams

In analyzing the results from the exams, a significant difference in learning was found between traditional and online students on the midterm. The maximum score for the midterm was 30, and the maximum score for the final was 40. For the midterm, the standard deviations were similar for traditional and online students, but the mean scores and the t-test indicated that the traditional students scored higher. The mean score on the midterm for the traditional students was about 2 points higher than for the online students (see Table 3), and the t-test results indicated that this difference was significant (see Table 4). On the other hand, the scores for the final exam were less conclusive. The standard deviation for the online students was higher, and their mean score on the final was lower (see Table 3). The t-test did not indicate a significant difference between groups on the final exam (see Table 4).

Survey Data

In addition to the data from the exams and the papers, survey data were collected and analyzed (see Appendix C). Unlike the previous data, the survey results were not mixed; comments about the online classes were all very positive. One question asked, "Is there a difference in your learning when you complete a class session online vs. in-class?" The majority of students stated that there was no difference in their learning. For example, one student stated, "Not really. I feel like the material that I studied I learned, either in-class or online." But four students did express issues with missing peer interaction and learning from peers. These four students' statements were in line with one student's comment: "I believe there is a difference because when in class you get the benefit of learning a lot more from your peers. For instance, at times you have questions that you don't know you have until someone else in class asks them. However, I strongly believe that there are positive advantages to both."

Another question on learning asked, "What are the advantages of having class online? In addition to issues such as convenience, think about how it impacts your learning." Students pointed to having to be more responsible for their own learning and being able to review material more than once. One student stated, "It is more self-guided so I can spend more time on the concepts that I need help with, and less on concepts that I can pick up quickly. I am not affected by other's learning." Another student added, "I am able to skim over the parts I already know and get into more detail on what I don't know."

As far as specific suggestions for improving learning, when asked, "What would help to increase your learning/understanding of topics when taking classes online?" the majority pointed to the idea of increasing small group and large group discussions. This is very much in line with the previous statements of the students who were concerned with the lack of peer interactions. In addition, many students noted that to improve their grades if they took an online class again, they planned to watch and complete sessions more than once (76%) and ask the instructor more questions about the material (68%).

Finally, when asked, "How do you think your learning has been impacted by taking this class entirely online (positives and negatives)?" every student said the impact was positive. For example, one student said, "With an online class I have been able to focus more on the information of the class and less on the stuff that has nothing to do with the class, such as traveling time to get to class, gas, and parking. Now all of my school effort can be focused on learning the material."

In short, the survey indicated that students valued online classes and reported high satisfaction with the pace, the focus on their learning needs, and the lack of travel time. This high satisfaction could lead to increased learning.

Limitations

There were a number of limitations to consider when reflecting on the results of this study. The first limitation in the area of instruction was that, because the students in the traditional group physically met together, some of them formed study groups that met before and/or after class. Such study groups were unlikely to form among the online students because of proximity. Moreover, for traditional students, when one student asked a question, all students heard the answer, and the material was reinforced through question-and-answer sessions. Although questions and answers were posted on the discussion board for the online classes, students would have had to take the initiative to read through the discussion board postings to receive the information, and there was no guarantee that this occurred.

Furthermore, although there were attempts to vary the instructional methods used, most of the online sessions were best suited for visual learners. In addition, students in the traditional setting spent a minimum of 3 hours (one class session) on each topic. For the online students, there was no way to know exactly how much time (more or less) was spent on each course topic. Some online students may have done just enough to complete the online assignments and may not have gone beyond those tasks.

Finally, there was the issue of self-selection. Students self-selected into the courses which could have created a number of biases. For example, stronger students could have all self-selected into one course or another. These differences could have had an impact on learning and may explain why the midterm grades were slightly higher for in-class students.

In addition to instructional limitations, methodological limitations must also be considered. The first, and perhaps the biggest, was that the students' prior knowledge was not assessed before they entered the courses. Some of the differences in scores could be due to prior knowledge of the course material; a pretest/posttest design could help to alleviate this concern in the future. Sample size could also be a concern. Although this study's sample was larger than those of the previous research presented, it was still small, and a larger sample would strengthen the validity of the findings.

Finally, there were limitations in the learning assessments utilized in this study. Learning is defined as "the process of acquiring knowledge or skill through study, experience or teaching" (Dictionary.com, 2007). The question remains: were the exams and papers an accurate representation of this acquired knowledge? It is possible that different examination formats or circumstances could have yielded different results. The fact that the same measures were used in the online and traditional classes helps with credibility but does not ensure validity. Future studies should consider other exam formats and other possible measures of learning.

Course Interaction

After considering the limitations discovered in this research, one additional data source was analyzed. One major difference between students in traditional classes and those receiving online instruction is the "known" minimum amount of time spent on the subject matter per week. In face-to-face classes, instructors can be assured that students are at least exposed to the material for three hours per week. With online learning, students were self-directed and determined how much time they spent. One assumption may be that the more time spent in the course site, the higher the grade. The site used for the online courses does not measure the actual time each student spends in the site, but it does measure hits. Time and hits are not the same, but one could assume that a student who entered the site more often spent more time working with the course material. A Pearson correlation between overall course grade point average and total number of hits revealed no significant relationship (r = .165, p = .330). This suggested that the number of times students entered the course site did not relate directly to improved grades in the course.

On the other hand, when hits in the interaction or discussion areas of the course site were considered specifically, there was a significant relationship between hits in these areas and grade point average (r = .532, p = .001).
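
For readers unfamiliar with the statistic, the sketch below shows how a Pearson correlation between course-site hits and course grades could be computed. The hit counts and grades are hypothetical placeholders, not the study's data; the reported values (r = .165 overall; r = .532 for discussion-area hits) come from the analysis above.

```python
import numpy as np
from scipy import stats

# Hypothetical paired data: discussion-area hits and overall course
# grade (percentage) for a handful of students (illustrative only).
discussion_hits = np.array([12, 30, 45, 22, 60, 18, 51, 40])
course_grade = np.array([82, 88, 93, 85, 95, 80, 91, 90])

# Pearson correlation between hits and grade, as in the analysis above.
r, p = stats.pearsonr(discussion_hits, course_grade)
print(f"r = {r:.3f}, p = {p:.3f}")
```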

Implications

Based on the results of this study, the evidence suggests that learning outcomes are similar whether students are in a traditional or an online class. For many instructors, the move to online classes is of grave concern. Some instructors may argue that it would be impossible for online learning to equal in-class teaching. But this work further supports previous research (e.g., Johnson, Aragon, Shaik, & Palma-Rivas, 2000; Kartha, 2006; Legutko, 2007) suggesting that this assumption may be false. Some may argue that the significant difference in midterm grades indicates a difference in learning between online and traditional classes. But as noted in the limitations section of this paper, this difference seems to be more an issue of adjusting to online learning than an indication that online classes are somehow inferior in terms of learning outcomes. For example, students pointed out that they may need to view the PowerPoint lectures more than once.

In addition, the results should give some faculty members a reason to reflect on the methods used when teaching online. In line with some of the previously presented research (e.g., Holmberg, 2007; McCrory, Putnam, & Jansen, 2008; Menchaca, 2008), one clear outcome of this study is the importance of online interaction. In class, active learning and participation are keys to effective teaching and learning, and the same can be said for online teaching and learning. This is further supported by the survey data, in which students overwhelmingly reported that the most important aspect of teaching and learning online was the small and large group discussions. Although only a weak positive relationship was found, its significance is an indicator of the importance of online interaction in the overall learning and understanding of the course material. Further reinforcing this point is the survey data indicating that students realized they needed to ask more questions of the instructor. Thus, interaction with peers and with the instructor are important elements of the learning process.

Finally, the students' survey answers were so overwhelmingly positive that the issue of student satisfaction cannot be ignored. Positive student attitudes can lead to increased motivation. Thus, positive attitudes can only enhance the learning of students. These issues need to be considered and studied further. In the end, this work illustrates that instructors need to be open to change. Change does not, in this case, necessarily mean a reduction in learning.

Appendix A
Mini-Literature Review Rubric

Possible   Description
Score

5          Clarity, Neatness, and Grammar
5          Integration of articles/Themes
10         Analysis
5          APA


Appendix B
Literature Review Rubric

Possible
 Score     Description

   5       Conforms to the assignment, references hold to a focus on
           a suitably narrow topic, critiques of individual articles
           include the required information, and at least 8 articles
           must be included

   10      APA

   10      Well written, organized

   10      Well integrated: the author does not simply make a list
           of critiques, rather discusses the ways in which the
           studies agree, disagree, extend one another, cover the
           territory or leave specific questions unanswered

   10      Analysis

   5       Includes implications for practice and for further research
           based on the references cited (extending beyond the
           studies but not coming out of the air)


Appendix C

Complete Survey

(1) Before taking this class, what were your feelings about taking online classes?

(2) Now that you have completed this course, what are your feelings about taking classes online?

(3) Is there a difference in your learning when you complete a class session online vs. in-class? If yes, please explain.

(4) What are the advantages of having class online? In addition to issues such as convenience, think about how it impacts your learning.

(5) What are the disadvantages of having class online?

(6) What would help to increase your learning/understanding of topics when taking classes online?

(7) How do you think your learning has been impacted by taking this class entirely online (positives and negatives)?

References

Allen, I., & Seaman, J. (2008). Staying the course: Online education 2008. Boston, MA: The Sloan Consortium.

Berge, Z. (1998). Barriers to online teaching in post-secondary institutions: Can policy changes fix it? Online Journal of Distance Learning Administration, 2(1), 1-22.

Borden, V. M. H. (2009, June 25). Top 100 degree producers: Undergraduate. Diverse Issues in Higher Education, 26, 13-20.

Collison, G., Elbaum, B., Haavind, S., & Tinker, R. (2000). Facilitating online learning: Effective strategies for moderators. Madison, WI: Atwood.

Connolly, T. M., MacArthur, E., Standsfield, M., & McLellan, E. (2005). A quasi-experimental study of three online learning courses in computing. Computers and Education, 49(2), 345-359.

Davies, R., & Mendall, R. S. (1998). Evaluation comparison of online and classroom instruction for HEPE 129--Fitness and Lifestyle Management course. Evaluation Program, Department of Instructional Psychology and Technology. Provo, UT: Brigham Young University. (ERIC Document Reproduction Service No. ED 427752)

Dictionary.com. (2007). Learning. Retrieved September 2, 2007, from http://dictionary.reference.com/browse/learning

D'Orsie, S., & Day, K. (2006). Ten tips for teaching a web course. Tech Directions, 65(7), 18-20.

Holmberg, B. (2007). A theory of teaching-learning conversation. In M. G. Moore (Ed.), Handbook of distance education (pp. 69-76). London, UK: Taylor & Francis.

Johnson, S., Aragon, S., Shaik, N., & Palma-Rivas, N. (2000). Comparative analysis of learner satisfaction and learning outcomes in online and face-to-face learning environments. Journal of Interactive Learning Research, 11(1), 29-49.

Kartha, C. (2006). Learning business statistics: Online vs traditional. The Business Review, Cambridge, 5(1), 27.

Koory, M. A. (2003). Differences in learning outcomes for the online and F2F versions of "an introduction to Shakespeare." Journal of Asynchronous Learning Networks, 7(2), 18-35.

Legutko, R. S. (2007). Face-to-face or cyberspace: Analysis of course delivery in a graduate educational research course. Journal of Online Learning and Teaching, 3(3). Retrieved July 3, 2009, from http://jolt.merlot.org/vol3no3/legutko.htm

McCollum, K. (1997). A professor divides his class in two to test value of online instruction. Chronicle of Higher Education, 43(24), 23.

McCrory, R., Putnam, R., & Jansen, A. (2008). Interaction in online courses for teacher education: Subject matter and pedagogy. Journal of Technology and Teacher Education, 16(2), 155-180.

Menchaca, M. (2008). Learner and instructor identified success factors in distance education. Distance Education, 29(3), 231-252.

Moore, M. (Ed.). (2007). Handbook of distance education. London, UK: Taylor & Francis.

Oliver, R. (1999). Exploring strategies for online teaching and learning. Distance Education, 20(3), 240-254.

Salmon, G. (2004). E-moderating: The key to teaching and learning online. New York: Routledge.

Schutte, J. (1997). Virtual teaching in higher education: The new intellectual superhighway or just another traffic jam? Retrieved August 20, 2007, from http://ddi.cs.uni-potsdam.de/HyFISCH/Teleteaching/VirtualTeachingSchutte.htm

Taylor, P., & Maor, D. (2000). Assessing the efficacy of online teaching with the constructivist on-line learning environment survey. In A. Herrmann & M. M. Kulski (Eds.), Flexible futures in tertiary teaching. Proceedings of the 9th Annual Teaching Learning Forum, 2-4 February 2000. Perth, Australia: Curtin University of Technology. Retrieved August 20, 2007, from http://lsn.curtin.edu.au/tlf/tlf2000/taylor.html

Tuckman, B. W. (2005). The effect of motivational scaffolding on procrastinators' distance learning outcomes. Computers and Education, 49(2), 414-442.

Waltonen-Moore, S., Stuart, D., Newton, E., Oswalk, R., & Varonis, E. (2006). From virtual strangers to a cohesive online learning community: The evolution of online group development in a professional development course. Journal of Technology and Teacher Education, 14(2), 287-311.

Lisa Kirtman

California State University, Fullerton

Lisa Kirtman is an associate professor in the College of Education at California State University, Fullerton. Her email is lkirtman@fullerton.edu
Table 1:
Mean and Standard Deviation
for Mini-Lit Review and Literature Review

                    Statistics

Assignments         Mean   Standard Deviation

Mini-Lit Review
  Traditional *     22.0          1.49
  Online **         21.8          1.51

Literature Review
  Traditional *     44.2          2.27
  Online **         44.4          2.20

* N=69 ; ** N=71

Table 2:
t-Test Comparison of Online and Traditional Students' Grades
on the Literature Review

                     Statistics

Assignments          t-test

Mini-Lit
  P-Value            =.41

Literature Review
  P-Value            =.34

N=140

Table 3:
Mean and Standard Deviation for Midterm and Final Exams

                            Statistics

Exam              Mean     Standard Deviation

Midterm
  Traditional     26.4            2.4
  Online          24.8            2.6

Final
  Traditional     36.2            3.2
  Online          34.0            5.4

Table 4: t-Test Results

              Statistics
Assignments     t-test
Midterm
  P-Value        =.03
Final
  P-Value        =.06

N = 140