Comparisons of Internet-based and face-to-face learning systems based on "equivalency of experiences" in terms of students' academic achievement and satisfaction.
Distance education has existed organizationally for more than 100 years, but distance learning applications have been grounded in theory only since the second half of the 1960s (Delling, 1994, as cited in Keegan, 1996). According to distance education approaches other than equivalency theory, instructor and learners in a distance learning setting are autonomous in terms of time and place. Holmberg (1989), Moore (1973), and Wedemeyer (1973, as cited in Keegan, 1996) defined distance education as an educational application that occurs between an instructor and students who are separated in time and place. In contrast, equivalency theory holds that some of this autonomy, particularly autonomy of time, can be partly conceded.
Simonson, Schlosser, and Hanson (1999) noted that equivalency theory proposes that "the more equivalent the learning experiences of distant learners are to those of local learners, the more equivalent will be the outcomes of the educational experiences for all learners." In other words, if equivalent learning experiences can be presented to learners, then learning will be equivalent. The equivalency of the learning experiences can be explained with a simple example: a triangle and a rectangle are different geometric shapes; however, the areas of both shapes can be equivalent to each other. Similarly, learner experiences in distance and face-to-face learning environments can be equivalent in terms of quality of learning outcomes (Simonson, 1999).
In research studies where distance and face-to-face learning systems are compared, one problem is that most studies are predicated on traditional theories. In such comparisons, the instructor, institution, program, group, or content may be matched, but the equivalency of the students' learning experiences is generally overlooked.
PURPOSE OF THE STUDY
The purpose of this study is to determine whether "equivalent learning experiences" ensure equivalent learning outcomes and student satisfaction across Internet-based and face-to-face learning systems. The study addresses the following questions:
1. Is there any significant difference between the posttest and permanency test average scores (both corrected according to the pretest) in the cognitive achievement of students learning via the Internet-based and face-to-face learning systems?
2. Is there any significant difference between the posttest and permanency test scores on the cognitive achievement test for students learning via the Internet-based learning system?
3. Is there any significant difference between the posttest and permanency test scores on the cognitive achievement test for students learning via the face-to-face learning system?
4. Is there any significant difference in course-related satisfaction between students in the Internet-based and face-to-face learning systems?
In the experimental process of this study, the effects of the Internet-based and face-to-face learning systems on the equivalency of learning experiences, students' achievement, the permanency of that achievement, and their satisfaction were observed. A 2 x 3 factorial design was used. In such designs there are at least two factors whose effects are observed: one represents the different experimental conditions, which vary between groups, and the other represents the repeated measurements of the same subjects (Buyukozturk, 2001; Hovardaoglu, 2000).
The independent variable of the study has two subgroups: Internet-based learning and face-to-face learning system. The dependent variables of the study are academic achievement and satisfaction.
Gender, learning modality, university entry score, and academic grade average were selected as control variables of the study. Learning modality has three levels: kinesthetic, auditory, and visual. The four control variables were used as a reference when assigning subjects to groups, so that the groups were matched on these variables.
The study was applied to 60 subjects taking the course Instructional Technologies and Material Development in the first semester of the 2004-2005 education year in Gazi University, Gazi Faculty of Education, Department of Primary School Teaching.
OSS (Ogrenci Secme Sinavi, the student selection examination for university entry) scores, academic grade averages, gender, and dominant learning modalities were considered in selecting the subjects and matching the experimental groups. According to these variables, two experimental groups were formed. The BIG16 Learning Modalities Inventory was administered to the subjects in order to determine their learning modalities. The OSS entry scores, academic grade averages, and gender of the students were obtained from the Gazi University Student Affairs Registration Office.
The distribution of the participants by gender, learning modality, and learning group is given in Table 1. Each group, the Internet-based and the face-to-face learning system, consisted of 30 subjects (3 male and 27 female). Both groups contained subjects of all three learning modalities (4 kinesthetic, 20 visual, and 6 auditory learners).
For the other control variables, the average OSS score was 176.303 for the Internet-based learning system and 176.177 for the face-to-face learning system, and the grade average was 3.129 for the Internet-based system and 3.092 for the face-to-face system. On both control variables the two groups were equivalent.
During the study, the BIG16 Learning Modality Inventory (Simsek, 2002) was used to identify the dominant learning modality of the subjects. The inventory was developed for the 16-24 age group and measures three learning modalities: kinesthetic, auditory, and visual. It consists of 48 items in total, 16 for each modality, and has a satisfactory overall reliability of .844.
A pretest containing 33 multiple-choice items and 10 open-ended questions on the units "visual material design" and "effective use of learning tools" was developed and administered to the subjects before the experimental process to determine their initial level of achievement. Because the subjects might have become bored taking a similar test three times, the posttest was prepared as a parallel form of the pretest under the supervision of experts.
A form was developed to assess the materials produced by the students as part of the pretest, posttest, and permanency tests; it was finalized according to the comments of four technology experts. The material assessment form is a 5-point Likert scale consisting of 24 items in 3 subheadings: "appropriate use of the design elements" (7 items), "correct use of the design principles" (10 items), and "characteristics of learning materials" (7 items).
The items of the "students' satisfaction scale for Web-based distance education" developed by Parlak (2004) are grouped under 5 factors, which together account for 60.88% of the total variance. The internal consistency coefficient (Cronbach's alpha) of the scale was calculated as 0.95.
However, the fourth factor, "institutional support," was removed from the scale because institutional support did not take place in the Internet-based learning system used here. To check the reliability of the remaining 4 factors, the statements common to both the face-to-face and Internet-based learning systems were revised and administered again to 119 students. The internal consistency coefficient (Cronbach's alpha) of the whole revised scale was calculated as .87.
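The Cronbach's alpha coefficients reported above can be computed directly from a respondents-by-items score matrix. The sketch below is a minimal pure-Python illustration of the standard formula; the function name and sample data are hypothetical and not taken from the study:

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a respondents x items score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    where k is the number of items.
    """
    k = len(item_scores[0])  # number of items
    # Sample variance of each item (column) across respondents.
    item_vars = [variance([row[i] for row in item_scores]) for i in range(k)]
    # Sample variance of each respondent's total score.
    total_var = variance([sum(row) for row in item_scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Perfectly consistent items (every respondent answers all items
# identically) yield alpha = 1.0.
responses = [[1, 1, 1], [2, 2, 2], [3, 3, 3]]
print(round(cronbach_alpha(responses), 2))  # -> 1.0
```

Values such as the .87 reported for the revised scale indicate that the items within each factor co-vary strongly with the total score.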
A knowledge test (multiple-choice and open-ended) was administered to the subjects before the experimental process in order to determine their initial level of knowledge. In addition, three weeks beforehand, the subjects were asked to identify design principles and to design a material in accordance with those principles.
Another knowledge test (multiple-choice and open-ended) was administered after the experimental process to determine the level of knowledge the subjects had reached. In a similar fashion, the materials the subjects produced in accordance with the design principles were collected and evaluated.
The satisfaction scale was administered to the students in the same week as the posttest was administered, with the aim of determining the interaction of subjects amongst themselves and the instructors, the general structure of the course, and the flexibility given to the students by the course or the university.
To determine the permanency of the knowledge learned in the two systems, a permanency test was administered 4 weeks after the experimental process. The permanency test was the same as the pretest and was administered in the same way. In addition, both groups were asked to produce materials according to their own design principles, and these materials were scored with the material evaluation form.
A table of specifications and unit analysis prepared by one of the researchers for the Internet-based learning system was used to make that system equivalent to the face-to-face system. In other words, all of the learning experiences offered to the face-to-face group were also provided in forms usable in an Internet medium. For example, the instructor's presentation for the face-to-face group was presented on the Web site as audio, video, and animated presentations; in-class exercises and practice activities were provided as interactive simulations; and classroom discussions were matched by chat and e-mail. The mappings of Horton (2000) and the suggestions of experts in the field were taken as references for the equivalency of learning experiences.
When examined from an educational perspective, placing content on a Web site does not guarantee learning (Yu & Lee, 2002). In the Delphi study conducted by Yu and Lee (2002), the most important problems encountered in Internet-based learning were listed as follows: insufficiency in course preparation and dissemination, lack of interaction between instructors and students, and insufficiencies in instructor training and support.
Advice was sought from seven instructional technologists as to whether the activities defined in the face-to-face and Internet-based learning systems, which make up the "equivalency form for the learning activities," offered equivalent learning experiences. These experts agreed that the following pairs were equivalent: the PowerPoint presentation used in the face-to-face system and the narrated audio presentation (Flash) adapted from it for the Web site; the classroom discussions held in the face-to-face system and the chat sessions in the Internet medium; and the student activities for material development. They also reached consensus on further pairings: watching a video and listening to the instructor's explanation of it face-to-face versus watching the video (Flash) on the Internet; showing exercise questions on a PowerPoint presentation in class, letting students comment, and explaining the correct answers, versus the same PowerPoint presentation on the Internet accompanied by an audio file of the instructor's recorded voice explaining the correct answers; classroom discussions critiquing the materials brought to class versus chat sessions critiquing the pictures and videos presented on the Internet; and, finally, administering the assessment exams of the face-to-face system in the Internet medium. The experts had slight apprehensions about applying some activities in an Internet medium; the main concern was whether the chat function would be effective.
The Web site designed to encompass the activities mentioned above includes the course content and, at the end of each topic, the exercises to be completed. Access to the course topics is controlled by a member login, so that students' access to the Web site can be monitored and students can take notes while taking the course, just as in a traditional classroom.
RESULTS AND CONCLUSION
The students' average posttest scores, and the averages corrected according to the pretest, in both learning systems after the experimental process can be seen in Table 3.
Table 3 shows that the posttest score average, corrected according to the pretest, was 72.47 for the students in the Internet-based learning system and 85.12 for the students in the face-to-face learning system. The significance of this difference was tested with ANCOVA; a significant difference was found [F(1, 57) = 11.66, p < .01] between the corrected posttest averages of the two learning systems. It can be stated that the students in the face-to-face learning system were more successful.
Table 4 shows that, for the subjects in the Internet-based learning system, the permanency average corrected according to the pretest (80.25) is higher than the raw permanency average (77.47), whereas for the face-to-face system the corrected permanency average (89.94) is lower than the raw average (92.73). According to the ANCOVA conducted on the permanency scores corrected for the pretest, there was a significant difference [F(1, 57) = 7.72, p < .01] between the corrected permanency averages of the two learning systems. It can therefore be stated that the knowledge of the students in the face-to-face learning system was more permanent.
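The "corrected" means reported in Tables 3 and 4 are covariate-adjusted means of the kind ANCOVA produces: each group's raw outcome mean is shifted along the pooled within-group regression of the outcome on the pretest, so both groups are compared at the same pretest level. A minimal sketch of that adjustment follows; the data and names are invented for illustration, not the study's scores:

```python
from statistics import mean

def adjusted_means(groups):
    """ANCOVA-style covariate adjustment of group means.

    groups: one list of (x, y) pairs per group, where x is the covariate
    (e.g., pretest score) and y the outcome (e.g., posttest score).
    Returns each group's mean y adjusted to the grand mean of x using
    the pooled within-group regression slope.
    """
    # Pooled within-group slope: within-group cross-products summed over
    # groups, divided by within-group covariate sums of squares.
    sxy = sxx = 0.0
    for g in groups:
        xs = [x for x, _ in g]
        mx, my = mean(xs), mean([y for _, y in g])
        sxy += sum((x - mx) * (y - my) for x, y in g)
        sxx += sum((x - mx) ** 2 for x in xs)
    b_w = sxy / sxx
    grand_x = mean([x for g in groups for x, _ in g])
    # Shift each group's raw mean to the common covariate value.
    return [mean([y for _, y in g]) - b_w * (mean([x for x, _ in g]) - grand_x)
            for g in groups]

group_a = [(1, 2), (2, 4), (3, 6)]  # lower pretest scores
group_b = [(4, 5), (5, 7), (6, 9)]  # higher pretest scores
print(adjusted_means([group_a, group_b]))  # -> [7.0, 4.0]
```

In the example, group B's raw mean (7) is higher than group A's (4), but after adjusting for its pretest advantage the ordering reverses, which is exactly the kind of correction applied to the posttest and permanency means in the tables.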
The significance of the difference between the average scores for the posttest and the permanency test was investigated with a t test. The t-test results are given in Table 5.
A significant difference was found between the posttest averages and the averages on the permanency test administered 4 weeks later to the students in the Internet-based learning system [t(29) = 4.25, p < .01]. The students' average permanency score (77.47) was higher than their average posttest score (70.90). This finding shows that the Internet-based learning system had an important effect on the permanency of the knowledge the students learned.
There was likewise a significant increase in the permanency test averages for the knowledge acquired by the students in the face-to-face learning system [t(29) = 2.91, p < .01]. The students' posttest average was 86.70, while their average permanency test score was 92.73. This finding shows that the face-to-face learning system also had an important effect on the permanency of the knowledge the students learned.
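The statistics in Tables 5 and 6 are dependent-samples (paired) t tests: each student's posttest score is paired with his or her own permanency score. A self-contained sketch of the computation, with invented data rather than the study's scores:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(before, after):
    """Dependent-samples t statistic: t = mean(d) / (sd(d) / sqrt(n)),
    where d are the pairwise differences (after - before).

    Returns (t, df). A positive t means the `after` scores are higher.
    """
    assert len(before) == len(after), "scores must be paired"
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n)), n - 1

# Hypothetical posttest and 4-week permanency scores for 5 students.
posttest = [10, 10, 10, 10, 10]
permanency = [11, 12, 13, 14, 15]
t, df = paired_t(posttest, permanency)
print(round(t, 2), df)  # -> 4.24 4
```

With 30 students per group, df = 29, matching the t(29) values reported above; the obtained t is then compared against the critical value for the chosen significance level.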
There were significant differences between the pretest and posttest results and between the pretest and permanency test results of the students in both learning systems. These findings show that a significant level of success was attained within both systems. When pretest scores are controlled, the posttest and permanency test scores of the students in the face-to-face learning system are somewhat higher than those of the students in the Internet-based learning system. This finding shows that the face-to-face learning system was more effective than the Web-based learning system. However, it should be taken into account that learning levels and learning permanency improved for the students in both systems.
Based on these findings, it is possible to suggest that what students learn in the face-to-face learning system is more permanent. However, as stated above, the knowledge acquired by the students in the Internet-based learning system was also permanent; it would be wrong to say that either system was simply weaker or stronger than the other. It should be considered normal that the permanency test averages of the students in the Internet-based learning system are slightly lower than those of the face-to-face students. Time to state their thoughts in their own words was limited, and the students were expected to "read" and "write" in a computer medium. Even though these students use computer and Internet literacy skills in their daily lives, using these skills in a course or an exam was a new experience, and working fast enough would take time to learn. In fact, during the experimental process, students complained that they did not have enough time to write what they were thinking. It must also be mentioned that the posttest was conducted right after the experimental process, while the permanency test was conducted 4 weeks later; the reading-writing process during that exam may thus have slowed.
FINDINGS RELATED TO SATISFACTION
The average satisfaction scores, overall and by subfactor, of the students in the two learning systems after the experimental process are presented in Table 7.
Examination of Table 7 shows that the total course satisfaction average was 120.27 for the students in the Internet-based learning system and 119.53 for the face-to-face system. The average "student-student interaction" satisfaction score was 10.80 for the Web-based students and 10.46 for the face-to-face students. The average "student-instructor interaction" satisfaction score was 49.63 for the Internet-based students and 48.86 for the face-to-face students. For "course structure," the average satisfaction score was 50.26 in the Internet-based system and 49.40 in the face-to-face system. The average "flexibility" satisfaction score was 9.56 for the Internet-based students and 10.80 for the face-to-face students. Taken together, the factor scores (except flexibility) and the total score were slightly higher for the Internet-based system than for the face-to-face system.
The students' course satisfaction scores did not differ significantly between the two learning systems [t(58) = 0.19, p > .05]. This finding can be interpreted as showing no significant relationship between the learning system and satisfaction.
This finding is also consistent with the literature. In his doctoral study, Casey (2004) examined interpersonal communication satisfaction in terms of communication and administration of registration and concluded that there were no significant differences in interpersonal communication satisfaction between face-to-face and online media. Likewise, in Misanchuk's (2003) doctoral thesis and in the research of Stein and Wanstreet (2003), no significant differences were found between the satisfaction levels of online and face-to-face groups.
As Table 7 shows, there is no significant difference between the learning systems in the "student-student interaction" satisfaction scores [t(58) = 0.402, p > .05], nor in the "student-instructor interaction" satisfaction scores [t(58) = 0.414, p > .05]. These findings can be interpreted as showing no significant relationship between the learning system and either interaction subfactor of satisfaction.
Similarly, there is no significant difference between the "course structure" satisfaction scores of the two learning systems [t(58) = 0.493, p > .05]. This finding suggests that there is no significant relationship between the learning system and the "course structure" satisfaction subfactor.
Finally, there was no significant difference between the "flexibility" satisfaction scores of the two learning systems [t(58) = 0.414, p > .05]. This finding suggests that there is no significant relationship between the learning system and the "flexibility" satisfaction subfactor.
When the satisfaction findings are examined, whether overall or by subfactor (student-student interaction, student-instructor interaction, course structure, flexibility), it can be suggested that the students' levels of satisfaction are equivalent. It can be concluded that the students in the Internet-based learning system are at least as satisfied as those in the face-to-face learning system. This, in fact, confirms the similarity between the learning experiences of the two groups of students.
In this research, the equivalency of the experiences presented to the students in the two learning environments was demonstrated experimentally. However, it is not clear how students in the two learning systems are affected by such practices in the longer term, how the effects will be reflected in their careers, or whether there is a change in the way they perceive the learning process. To clarify these points, it is suggested that future research include quantitative studies that reflect the results of the implementation.
During the study, it was observed that the students who were not sufficiently comfortable studying in the Internet-based learning system were those without a culture of studying in a computer medium. For this reason, it is suggested that a similar study also be conducted with students who have this culture. Ozcelik and Yildirim (2002) likewise found that Web-based materials were still very new in our country and emphasized that, in order for students to question or use the cognitive tools they encounter in an Internet-based system, they first need to be familiarized with the new technology and the medium they will use. They also suggested that students receive comprehensive prior information about the function of the Internet-based tools and cognitive tools and how to use them.
Acknowledgment: This study is a summary of the first author's doctoral thesis.
Buyukozturk, S. (2001). Deneysel Desenler [Experimental designs]. Ankara, Turkey: PEGEM-A Yayincilik.
Casey, D. M. (2004). The impact of distance learning on interpersonal communication satisfaction: A comparison of online and face-to-face community college classrooms. Doctoral dissertation, University of Miami. Retrieved from ProQuest Digital Dissertations database. (Publication No. AAT 3125364)
Holmberg, B. (1989). Theory and practice of distance education. London: Routledge.
Hovardaoglu, S. (2000). Davranis Bilimleri icin Arastirma Teknikleri [Research techniques for behavioral sciences]. Ankara, Turkey: VE-GA Yayinlari.
Horton, W. (2000). Designing Web based training. New York: Wiley.
Keegan, D. (1996). The foundations of distance education. London: Croom Helm.
Misanchuk, M. G. (2003). Sense of community, satisfaction, and performance in a distance education program. Unpublished doctoral dissertation, Department of Instructional Systems Technology, Indiana University. (UMI No. 3122714)
Ozcelik, E., & Yildirim, S. (2002). Web-destekli Ogrenme Ortamlarinda Bilissel Araclarin Kullanimi: Bir Durum Calismasi [Utilization of cognitive tools in web-based learning environments: A case study]. Paper presented at the Open and Distance Education Symposium, Anadolu University, Eskisehir, Turkey. Retrieved March 4, 2005, from http://aof20.anadolu.edu.tr/bildiriler/Erol_Ozcelik.doc
Parlak, O. (2004). Internet Temelli Uzaktan Egitimde Ogrenci Doyumu Olcegi [Student satisfaction scale for Internet-based distance education]. Unpublished doctoral dissertation, Educational Science Institute, Ankara University, Ankara, Turkey.
Simonson, M. (1999). Equivalency theory and distance education. Tech Trends, 43(5), 5-8.
Simonson, M., Schlosser, C., & Hanson, D. (1999). Theory and distance education: A new discussion. The American Journal of Distance Education, 13(1). Retrieved July 13, 2008, from http://www.c3l.uni-oldenburg.de/cde/found/simons99.htm
Simsek, N. (2002). BIG16 Ogrenme Bicemleri Envanteri [BIG16 Learning modality inventory]. Egitim Bilimleri ve Uygulama Dergisi, 1(1), 33-47.
Stein, D. S., & Wanstreet, C. E. (2003). Role of social presence, choice of online or face-to-face group format, and satisfaction with perceived knowledge gained in a distance learning environment. Paper presented at the 2003 Midwest Research to Practice Conference in Adult, Continuing, and Community Education. Retrieved March 20, 2005, from http://www.alumni-osu.org/midwest/midwest%20papers/Stein%20&%20Wanstreet--Done.pdf
Yu, K. C., & Lee, T. W. (2002). Critical issues and problems in Web-based instruction in higher education: A Delphi study. Journal of Taiwan Normal University. Mathematics & Science Education. Retrieved March 12, 2005, from http://220.127.116.11/ntnuj/j47/se472-4.pdf
* Sercin Karatas, Gazi Universitesi, Gazi Egitim Fakultesi, L Blok 306, 06500 Teknikokullar, Ankara, Turkiye. E-mail: firstname.lastname@example.org
TABLE 1
Distribution of Students by Gender, Learning Modality, and Learning System

Learning System                  Gender   Kinesthetic   Visual   Auditory   Total
Internet-based learning system   Male          2           1         0        3
                                 Female        2          19         6       27
                                 Total         4          20         6       30
Face-to-face learning system     Male          2           1         0        3
                                 Female        2          19         6       27
                                 Total         4          20         6       30

TABLE 2
Item Numbers of the Factors, Arithmetic Means, and Cronbach Alpha Reliability Coefficients of the Satisfaction Scale

Factor                           Number of Items    Mean     Cronbach Alpha
Student-student interaction             3          3.0336        .7818
Student-instructor interaction         12          3.0903        .8610
Course structure                       12          3.6218        .8566
Flexibility                             3          3.0924        .7447
Whole scale                            30          3.2975        .8698

TABLE 3
Average Posttest Scores, and Averages Corrected According to the Pretest, for Students in the Two Learning Systems

Learning System    n    Mean    sd     Corrected Mean     F       p
Internet          30   70.90    7.38       72.47          1.07   .304
Face-to-face      30   86.70    8.86       85.12         11.66   .001

TABLE 4
Permanency Scores and Corrected Permanency Scores for Students in Both Learning Groups

Learning System    n    Mean    sd     Corrected Mean     F       p
Internet          30   77.47    7.47       80.25          3.78   .057
Face-to-face      30   92.73    8.23       89.94          7.72   .007

TABLE 5
Posttest and Permanency Test Averages of Students in the Internet-Based Learning System

Measure (Internet)                    n    Mean      S      df      t      p
Total            Posttest            30   70.90    7.38
                 Permanency test     30   77.47    7.47    29     4.25   .000
Base level       Posttest            30   86.00    9.49
                 Permanency test     30   78.57    9.29    29     4.74   .000
Intermediate     Posttest            30   65.17   11.74
                 Permanency test     30   72.77   12.20    29     3.17   .004
Advanced         Posttest            30   49.37    6.73
                 Permanency test     30   76.23    8.81    29    14.09   .000

TABLE 6
Posttest and Permanency Test Averages of Students in the Face-to-Face Learning System

Measure (Face-to-face)                n    Mean      S      df      t      p
Total            Posttest            30   86.70    8.86
                 Permanency test     30   92.73    8.23    29     2.91   .007
Base level       Posttest            30   83.07    8.61
                 Permanency test     30   80.97   10.36    29     0.92   .361
Intermediate     Posttest            30   61.00    9.73
                 Permanency test     30   69.93   10.34    29     3.23   .003
Advanced         Posttest            30   81.23    5.88
                 Permanency test     30   76.53    7.17    29     3.48   .002

TABLE 7
Course Satisfaction of the Students in the Two Groups

                                  Internet (n = 30)    Face-to-face (n = 30)
                                   Mean      S           Mean      S          t       p
Total                            120.27    14.03       119.53    15.56      0.190   .812
Student-student interaction       10.80     2.85        10.46     3.36      0.402   .689
Student-instructor interaction    49.63     7.19        48.86     7.56      0.414   .680
Course structure                  50.26     6.88        49.40     6.72      0.493   .624
Flexibility                        9.56     3.72        10.80     3.10      0.493   .168
Authors: Karatas, Sercin; Simsek, Nurettin
Publication: Quarterly Review of Distance Education
Date: March 22, 2009