
The effects of different levels of interaction on the achievement and motivational perceptions of college students in a web-based learning environment.

This study investigated the effects of learning materials with different interaction levels on the achievement and motivational perceptions of college students in a web-based learning environment, using a posttest-only experimental design. There were three groups in this study: a control group, a reactive interaction group, and a proactive interaction group. The control group received a treatment with static hyperlinks to the learning content; the reactive interaction group received a treatment implemented with elaborated immediate feedback; the proactive interaction group received a treatment that required generative activity. Three instruments were used to evaluate the effects of the different treatments: an achievement test, an instructional material motivation survey, and an interview. The subjects in the study were college students from various education majors. The results indicated that students in both the reactive and proactive interaction groups outperformed those in the control group on the achievement test. The students in the reactive interaction group demonstrated significantly higher motivational perceptions toward the instructional material than those in the control group. The qualitative data also supported these results.



Jl. of Interactive Learning Research (2003) 14(4), 367-386

Distance learning refers to any instruction delivered through print or electronic communication media to people who are learning in a place or at a time different from that of the instructor(s) or other student(s). Its formats range from the oldest correspondence courses through audio, one-way video, and two-way video to the World Wide Web (WWW or Web). Among these, the Web is the fastest growing. It provides a pervasive new channel that makes education more accessible. It appeals to students, provides for flexible learning, and enables new ways of learning (Owston, 1997). As with other kinds of distance learning, interaction is widely believed to be one of the fundamental factors that affect students' learning and attitudes in web-based learning (Berge, 1999; Gilbert & Moore, 1998; Moore, 1989).

Interaction is a two-way communication process. From the learner's perspective, there are three types of interaction involved in the process of learning: (a) interaction with content, (b) interaction with the instructor, and (c) interaction with other students (Moore, 1989). Each type of interaction can have different effects on student achievement and attitude toward learning. From the perspective of learning, it can be argued that the most important interaction occurs between the student and the material he/she is trying to acquire or master. Milheim (1996) pointed out that interactivity between the computer and the learner is one of the most important attributes of computer-based instruction because it directly affects the communication between the educational materials and the intended learners.

Based on the quality of interaction, student-content interaction can range from low to high levels. Low-level interaction involves less interactivity, engagement, and cognitive processing; high-level interaction involves more interactivity, elaboration, and cognitive processing.

The interaction levels identified by Schwier and Misanchuk (1993) provide a useful starting point for developing and understanding interaction. They suggested that there are at least three levels of interaction based on the instructional quality of the interaction: reactive, proactive, and mutual.

Although proposed for traditional multimedia, this categorization scheme also applies to web-based learning environments. Web-based learning is popular and growing rapidly. But people still have many concerns about web delivered courses (Windschitl, 1998). One of these concerns involves building interactivity into web courses (Gilbert & Moore, 1998). Web-based instruction (WBI) can provide all types of interaction proposed by Moore (1989) through different means. For the student-content interaction, WBI can provide many levels of interaction between the learner and the learning content. However, in reality, most web-based instructional materials only provide hyperlink interactivity, which is the primary mechanism of hypermedia. The interaction level provided by hyperlinks is low. This low level of interaction may not promote students' learning and motivation. Therefore, it appears necessary to explore more strategies to increase the interaction of WBI so that students will engage more actively with the learning content, and web-based learning will be more attractive to learners.

LITERATURE REVIEW

Few studies have focused on the effects of different interaction levels in web-based learning environments. However, a number of studies on feedback and generative learning have addressed interaction in computer-based instruction environments, and their results have valuable implications for web-based learning. In this section, research on interaction, generative activities, and feedback is discussed.

Interaction

In the literature, the terms interaction and interactivity are used interchangeably to refer to the communication between student and subject content, student and instructor, or student and student. There are two perspectives on interaction: quantitative and qualitative (Hannafin, 1989). A quantitative view of interaction refers to external factors such as response frequency or interval, or the number of questions embedded during an instructional module. A qualitative view of interaction substantially emphasizes the learner's role in mediating interaction. The concern here is how to foster cognitive engagement--the intentional and purposeful processing of lesson content.

Although interaction is important and necessary for education, there appears to be no consensus on what interactivity actually represents or involves. Even so, over the past years there have been a number of attempts to identify levels of interaction, with the underlying assumption that the higher the level, the better the product (Sims, 1997). Schwier and Misanchuk (1993) introduced a descriptive taxonomy of multimedia interaction based on the qualitative nature of interaction. It includes three dimensions: (a) levels of interaction, (b) functions played by interaction at each level, and (c) transactions at each level of interaction. The three levels of interaction are reactive, proactive, and mutual, where:

* "A reactive interaction is a response to presented stimuli, or an answer to a given question;

* Proactive interaction emphasizes learner construction and generative activity. The learner goes beyond selecting or responding to existing structures and begins to generate unique constructions and elaborations beyond designer-imposed limits; and

* Mutual interactivity would be characterized by an artificial intelligence or virtual reality design, where the learner becomes a fully franchised citizen in the instructional environment. In such a program, the learner and system are mutually adaptive, that is, capable of changing in reaction to encounters with the others" (pp. 11-12).

The relationships among the three levels are hierarchical in terms of quality of interaction: the quality of a mutual-level interaction is higher than that of a proactive-level interaction, and the quality of a proactive-level interaction is higher than that of a reactive-level interaction, because there is greater opportunity for mental engagement and learner investment at higher levels of interaction than at lower levels (Schwier & Misanchuk, 1993).

Very few studies have focused on the effects of different interaction levels on learners' achievement and attitudes based on Schwier and Misanchuk's classification. However, a number of studies have focused on generative activity strategies and feedback, which can be used to implement the reactive and proactive interaction levels in instruction.

Research on Generative Activities

Generative learning focuses on examining what internal processes of learning are stimulated or induced by external stimuli. Wittrock (1974, 1991) proposed the idea of generative learning with the assumption that for learning to occur, active mental participation of the learner is required. Mental connections occur as new information from the environment is integrated into existing mental structures through reorganizing existing mental structures into new frameworks, elaborating existing mental structures to become more inclusive, and reconceptualizing to gain a more exact or detailed understanding of the information. In the generative process, the learner is required to actively engage mental processes to examine the new information, and to construct (generate) a new interpretation of the information.

Generative learning has been realized through various generative activities. There are two basic families of these strategies (Grabowski, 1996). One family of strategies is used to generate organizational relationships between different components of the environment, which helps a learner understand how items are related to one another. Examples include creating titles, headings, questions, objectives, graphs, tables, and concept maps. These activities occur in the coding, organization, and conceptualization levels of thinking. Another family of generative strategies integrates relationships between external stimuli and memory. Examples include asking students to construct demonstrations, metaphors, analogies, examples, pictures, applications, paraphrases, or inferences. These activities occur in the integration and translation levels in terms of cognitive processing. The second family differs from the first because these strategies not only require deeper processing of the instructional content, but they also result in a high level of understanding.

Wittrock and Kelly's (1984) study, which involved generating examples, indicated that students required to give an example showed the largest pretest-posttest gain. Studies on the effects of other generative activities have yielded conflicting results. Wittrock (1991) found that students who were asked to generate text-related summaries, analogies, metaphors, and pictures had better comprehension than those who were not. Hooper, Sales, and Rysavy (1994) found that undergraduates who generated summaries performed better than those who generated analogies. However, the experimental groups (generating either summaries or analogies) did not perform better than the control group (without any generative activities). Volk and Ritchie's (2000) study on the effectiveness of concept map generation and manipulation of objects found no significant difference between the two generative strategies on a posttest. However, the results indicated that students starting with concept maps showed significantly higher achievement on a delayed posttest than students beginning with manipulation of objects.

Investigations of the effects of generative activities in hypertext learning on problem solving and comprehension (Barab, Yong, & Wang, 1999) and English learning (Lin, 1995) among college students yielded positive results.

These studies on generative learning have shown that, in most cases, active learner involvement produced increased learning, and learner-generated activities resulted in significant learning gains, although the degree of the effects may be influenced by the organization of the lesson content and the quality of responses (Grabowski, 1996). However, some aspects of generative activities need further exploration. First, most of the previous research has emphasized fact- and concept-level learning and has not dealt with high-level learning such as application, synthesis, or problem solving. Therefore, further research on the effects of different generative activities on high-level learning is needed (Grabowski, 1996). Second, very few studies have focused on the effects of various generative activities in hypermedia environments. It is necessary to investigate what roles generative activities can play in hypermedia environments, especially web-based environments.

Research on Feedback

Feedback is information made available to learners so they can compare their actual performance with some standard of performance. It is a critical part of the process of interaction and plays a very important role in learning; it affects students' motivation and academic performance. Research on feedback involves many factors: when to provide feedback, what kind of feedback to offer, and how feedback functions in different learning tasks. The following review focuses on feedback elaboration and timing of feedback, which are related to this research.

Feedback Elaboration

Feedback elaboration refers to providing complex explanations and/or additional information in response to students' input. Three types of elaboration are employed during feedback: task-specific, which is drawn from the initial task demand or question; instruction-based, which contains information derived from specific lesson material but not directly from the actual question; and extra-instructional, which contains additional information from outside the immediate lesson environment.

Most studies have dealt with the task-specific and instruction-based types of feedback, while a few have addressed the extra-instructional type. Five types of feedback have been defined in previous research (Dempsey, Driscoll, & Swindell, 1993):

* No feedback presents a question and requires a response, but does not indicate whether the learner's response is correct;

* Simple verification feedback, or knowledge of results (KOR), simply informs the learner of a correct or incorrect response;

* Knowledge of the correct result (KCR) informs the learner of the correct answer to the question;

* Elaborated feedback informs students of the error and provides an explanation leading to the correct answer; and

* Try-again feedback informs students when an incorrect response has been made and allows them to make one or more additional attempts to answer correctly.
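To make the distinctions among these feedback types concrete, the following minimal sketch builds the message a learner would see under each type. It is written in Python purely for illustration; the function name, labels, and item content are hypothetical and are not drawn from the study or from any cited system.

def build_feedback(feedback_type, is_correct, correct_answer=None, explanation=None):
    """Return the feedback text a learner sees for a practice item (illustrative only)."""
    if feedback_type == "none":
        return ""  # no indication of whether the response was correct
    if feedback_type == "kor":  # knowledge of results: verification only
        return "Correct." if is_correct else "Incorrect."
    if feedback_type == "kcr":  # knowledge of the correct result
        return f"The correct answer is: {correct_answer}."
    if feedback_type == "elaborated":  # verification plus an explanation of the answer
        verdict = "Correct." if is_correct else "Incorrect."
        return f"{verdict} {explanation}"
    if feedback_type == "try_again":  # learner may attempt the item again
        return "Correct." if is_correct else "Incorrect. Please try again."
    raise ValueError(f"Unknown feedback type: {feedback_type}")

# Examples: verification-only feedback versus elaborated feedback for a wrong answer
print(build_feedback("kor", True))
print(build_feedback("elaborated", False,
                     explanation="Copying an entire consumable workbook exceeds fair use."))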

Gilman (1969) found that undergraduate science students who received KCR performed significantly better than those who did not receive KCR. The KCR group also took less time to meet the criterion than any other group.

Waldrop, Justin, and Adams (1986) conducted research to determine whether elaborated feedback was more effective than knowledge of results for learning concepts through drill-and-practice computer-assisted instruction, and found that immediate extended feedback following both correct and incorrect responses was superior to minimal feedback.

Lee and Dwyer (1994) found a similar result in an investigation of undergraduate students' learning of BASIC programming. However, they also reported the following: students who received KCR felt that insufficient feedback was given for correcting their errors; students who received KCR and elaborated feedback perceived the feedback to be valuable; and students favored the "try again" option for missed problems.

Clark and Dwyer (1998) investigated the effect of different feedback types on different learning tasks (verbal, concept, and principle). No significant difference among the different types of feedback was found. Narciss (1999), however, obtained a positive result for elaborated feedback, which suggested that more informative feedback was related to better performance.

Studies on feedback elaboration have thus yielded mixed results for verbal information learning. It appears that feedback elaboration can affect the effectiveness of verbal information learning, concept learning, and rule learning, but further studies are needed.

Timing of Feedback

Timing of feedback deals with when feedback information is given to the learner. There are two commonly recognized types of feedback in CAI environments (Dempsey & Wager, 1988): immediate feedback is given as quickly as the computer's hardware and software will allow during instruction or testing; delayed feedback is given after a specified amount of time during instruction or testing.

Studies of immediate and delayed feedback have yielded no consistent results. Kulhavy (1977) supported the use of delayed feedback when test items were used as the stimulus material and the correct answer was the response to be learned. Kulik and Kulik's (1988) meta-analysis indicated that immediate feedback was more effective than delayed feedback when actual classroom quizzes and materials were used, and that delayed feedback was favored when subjects were in short-term experiments on acquisition of quiz content. Farquhar and Regian (1994) found that the effectiveness of delayed feedback was dependent on the type of feedback provided and suggested that the effectiveness of delayed over immediate feedback depends on the types of knowledge, feedback, and error, and on the learner's skill level.

While research results about feedback are not unequivocal, some conclusions can be drawn from existing research to make better use of feedback in instruction: feedback can serve to correct errors; in verbal information learning, corrective feedback is better than no feedback; and for higher cognitive tasks, delayed feedback may be more effective than immediate feedback. However, some problems are worthy of further investigation. First, most research concerns the effects of feedback in traditional classroom environments and computer-assisted learning environments; few studies focus on the effects of feedback in web-based learning environments. Second, most studies compare instruction with and without feedback (whether immediate or delayed). In the literature, no studies have been found that deal with the relationship between feedback and interaction levels.

Research Questions and Hypotheses

This study focused on the effects of different interaction levels on students' achievement and perceptions of motivation toward learning materials in web-based learning environments. Schwier and Misanchuk's (1993) classification of interaction levels for multimedia instruction, especially the reactive and proactive interaction levels, guided the design of learning materials for web-based learning. An immediate feedback strategy, which was used in instructional materials at the reactive interaction level, was compared with low interaction materials without this strategy. A generative activity strategy, which was used to develop instructional materials at the proactive interaction level, was compared to the low interaction and reactive interaction conditions. The following research questions and hypotheses guided the study.

Question 1. Is there a difference in achievement among groups in which students receive learning materials with low interaction level, reactive interaction level, or proactive interaction level?

Hypothesis 1. Students who receive learning material with a proactive interaction level will score higher than those who receive learning material with a reactive interaction level; and those who receive learning material with a reactive interaction level will score higher than those who receive learning material with a low interaction level on the achievement measure.

Question 2. Is there a difference in perceptions of motivation among groups in which students receive learning materials with low interaction level, reactive interaction level, or proactive interaction level?

Hypothesis 2. Students who receive learning material with a proactive interaction level will score higher than those who receive learning material with a reactive interaction level; and those who receive learning material with a reactive interaction level will score higher than those who receive learning material with low interaction level on the measure of perception of motivation.

METHODOLOGY

Research Design

This study employed a posttest-only experimental design. Based on Schwier and Misanchuk's (1993) classification of interaction levels, three levels of interaction were defined for the web-based learning materials in this study: low, reactive, and proactive. The low level refers to a web site that incorporated only typical, static hyperlinks; the reactive level refers to a web site that incorporated an immediate feedback strategy, which provided responses during the learning process; the proactive level refers to a web site that incorporated a generative activity strategy, which asked students to generate a new example or scenario after a learning section was finished.

Independent Variable

The independent variable in the study was the interaction level of the learning material. In this study, three versions of instructional materials with low, reactive, and proactive interaction levels were implemented by following the previous categorization. Participants were randomly assigned into three groups to complete a web-based learning lesson. Participants in the first group (control group) received the learning material that provided a low level of interaction; participants in the second group received the learning material that provided a reactive level of interaction; participants in the third group received the learning material that provided a proactive level of interaction.

Dependent Variables

The dependent variables in this study were student achievement, perception of motivation toward the learning material, and time-on-task. Achievement referred to the learner's academic performance after finishing the learning as measured by a posttest; perception of motivation toward the learning material referred to students' perceptions toward the learning material in terms of motivating students to learn and was measured by the Instructional Material Motivation Survey (IMMS) (Keller, 1993); time-on-task referred to the time that students spent on the learning materials collected through the "Track students" function in WebCT.

Subjects

The participants in the study were student volunteers from an educational technology class in the School of Education at a mid-western university. They were freshmen, sophomores, juniors, seniors, and graduate students with various majors in education such as elementary education, science education, and mathematics education. The final sample had 95 students with unbalanced numbers of students in each group: the control group had 34 subjects; the reactive interaction group had 30 subjects; and the proactive interaction group had 31 subjects. They all had little prior knowledge of the content of the lesson and were familiar with WebCT, which was used as the delivery environment for the web-based instruction.

Instructional Materials

The content of the learning material was about copyright, a topic normally covered in a class lecture. The instructional material was implemented in WebCT, a web course environment, using web pages created with Dreamweaver and JavaScript. The subjects logged onto WebCT and received one of three versions of the instructional materials, with a low, reactive, or proactive interaction level. The material with the low interaction level was a typical website in which students could get information (including content information and practice questions) by clicking links. The material with the reactive interaction level presented all practice questions in a multiple-choice and/or true-false format with immediate elaborated feedback. The material with the proactive interaction level was implemented with generative activities; subjects were required to generate their own new examples or scenarios.
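As an illustration of how the reactive and proactive treatments differed in behavior, the following sketch mirrors that logic in Python. It is hypothetical: the actual materials were JavaScript-driven WebCT pages, and the item text, function names, and data structures here are invented for illustration only.

# Hypothetical sketch of the two treatment behaviors; not the authors' code.
REACTIVE_ITEM = {
    "question": "A teacher copies an entire workbook for her class. Is this fair use?",
    "options": {
        "yes": ("incorrect", "Copying a whole consumable workbook is not permitted under fair use."),
        "no": ("correct", "Right: reproducing an entire consumable work exceeds fair use."),
    },
}

def reactive_practice(item, learner_choice):
    """Reactive level: the chosen option triggers immediate elaborated feedback."""
    verdict, elaboration = item["options"][learner_choice]
    return f"Your answer is {verdict}. {elaboration}"

def proactive_activity(prompt, learner_text, responses):
    """Proactive level: the learner generates a new example, which is recorded for review."""
    responses.append({"prompt": prompt, "example": learner_text})
    return "Thank you. Your example has been recorded."

generated = []
print(reactive_practice(REACTIVE_ITEM, "yes"))
print(proactive_activity("Describe a new scenario that would violate copyright.",
                         "Posting a scanned textbook chapter on a public website.",
                         generated))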

Instruments

Three instruments were used to evaluate student achievement and perceptions of motivation toward the learning materials: an achievement test, a motivation survey, and an interview. An immediate posttest consisting of true-false questions, example or scenario generation, and multiple-choice questions was administered right after subjects finished the learning materials. The motivation survey, the IMMS, was used to evaluate students' motivational perceptions toward the learning materials; it was administered following the posttest. Interviews were conducted with randomly selected subjects from the different groups in order to acquire in-depth data about the students' perceptions of the learning programs.

Data Analysis

Quantitative data from the achievement posttest, the motivation survey, and time-on-task for each group were analyzed with ANOVAs using SAS to test the significance of the mean differences among the three groups. Tukey's HSD test was used to test the significance of mean differences between pairs of groups because the group sizes were unequal. The probability level for testing the research hypotheses was set at .05. Qualitative data from the interviews and the generative activities were analyzed by following coding protocols.

DATA ANALYSIS AND RESULTS

Preliminary tests for normality of the quantitative data (Shapiro-Wilk) and homogeneity of variance (Levene's method) confirmed that the data were normally distributed and that all three groups had equal variances. Therefore, analysis of variance (ANOVA) was used to examine the effect of the different treatments on the dependent variables (achievement, motivation survey scores, and time-on-task). When the ANOVA indicated a significant difference among the three groups, Tukey's HSD test was used to make pairwise comparisons to determine which group differed significantly from the other(s).
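For readers who want to reproduce this kind of analysis, the following is a minimal sketch of the same pipeline in Python with scipy and statsmodels (the authors used SAS). The score arrays are random placeholders, not the study's data, and the group labels are chosen only to match the group names used above.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
control = rng.normal(22.0, 4.5, 34)    # placeholder achievement scores per group
reactive = rng.normal(25.8, 3.8, 30)
proactive = rng.normal(27.0, 3.3, 31)

# Preliminary checks: normality within each group, homogeneity of variance
for name, scores in [("control", control), ("reactive", reactive), ("proactive", proactive)]:
    print(name, "Shapiro-Wilk p =", round(stats.shapiro(scores).pvalue, 3))
print("Levene p =", round(stats.levene(control, reactive, proactive).pvalue, 3))

# One-way ANOVA across the three groups
f_stat, p_value = stats.f_oneway(control, reactive, proactive)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's HSD pairwise comparisons (appropriate with unequal group sizes)
all_scores = np.concatenate([control, reactive, proactive])
group_labels = ["control"] * 34 + ["reactive"] * 30 + ["proactive"] * 31
print(pairwise_tukeyhsd(all_scores, group_labels, alpha=0.05))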

Performance on the Achievement Test

Table 1 displays the descriptive statistics for subjects' performance on the achievement test. The GLM ANOVA results (F(2,92)=14.56, p<0.0001) indicate that there was a significant difference among the mean scores of the groups. Post hoc pairwise comparisons of the means were conducted using Tukey's studentized range (HSD) test; the HSD procedure was chosen because it controls the experimentwise error rate and accommodates unbalanced group sizes. The comparison results indicate that students in both the proactive interaction group and the reactive interaction group outperformed those in the control group. No significant difference was found between the proactive interaction group and the reactive interaction group.

Analysis of Instructional Material Motivation Survey Scores

The means and standard deviations of IMMS data are shown in Table 2. The ANOVA results (F(2,92)=3.28, p=0.04) indicate that there were statistically significant mean score differences among groups on the IMMS. The HSD multiple comparison procedure was applied to examine the differences of IMMS mean scores between groups. The results indicate that there was a significant mean score difference between the reactive interaction group and the control group; no significant differences were found between the proactive interaction group and the reactive interaction group or between the proactive interaction group and the control group.

The same procedures were used to analyze the subcategories of the IMMS. For attention, the analysis indicated that students in the reactive interaction group obtained higher attention scores than those in the control group, but there were no significant differences between the proactive interaction group and the other two groups. For relevance and confidence, no significant differences in mean scores were found among the groups. For satisfaction, the analysis indicated that students in the reactive interaction group obtained higher satisfaction scores than those in the control group, but there were no significant differences in satisfaction scores between the proactive interaction group and the other two groups.

Analysis of Time-on-Task

The means, standard deviations, minimum scores, and maximum scores (in minutes) of time-on-task for each group are listed in Table 3. The ANOVA test (F(2,94)= 11.50, p<0.0001) and HSD test results indicated that students in the proactive group (M=27.5) and the reactive group (M=22.7) spent more time on learning than those in the control group (M=17.7). No significant difference was found between the proactive interaction group and the reactive interaction group for time-on-task.

The quantitative results are summarized graphically in Figure 1. This figure illustrates the results for achievement, motivation (overall IMMS score), and time-on-task.

Supporting Qualitative Results

Qualitative data were gathered from interviews conducted after the experiment and from the generative activities completed during learning by the proactive interaction group. Subjects randomly selected from each group were interviewed about their overall reactions to the instructional program and their perceptions of the immediate feedback and generative activity strategies.

Most of the subjects liked the learning program because it was easy to use, very informative, well organized, and beneficial. Students in the reactive interaction group were positive toward the elaborated immediate feedback because it helped their learning in different ways: reinforcing what they read, clarifying mistakes, knowing answers right away, and motivating them to learn. When they answered questions incorrectly, the immediate feedback made them review the learning content, reflect on the question and try to answer it again. The following lists student comments about immediate feedback.
 That was really nice 'cause you knew it right away. You knew if you
 kind of got the idea and what it tried to tell you. I really like
 those.

 It was good I think. It motivated the student to answer questions.

 It reinforced what you just read. If I did wrong thing, it
 clarified.

 I thought that was great. That's a wonderful feature. It let you
 know right away and let you practice what you just learned. That was
 great.

 I liked it. I liked it because I kept answering more than once. If
 you are wrong, you can keep going. The quiz also helped.

 If I were wrong, I would go back and look through the section
 following the feedback and re-answer the question and realized why I
 was wrong. I thought that was useful with the little thing coming
 up.


The reaction of students to the generative activity strategy was mixed. Half of them had positive attitudes toward it, and half did not mind the activity. However, the generative activities made them reflect on what they learned, think more about the examples, and check the learning content. The following comments came from the generative activity group.

 Content is interesting. I thought the little self-generating things
 you have to type and create your own examples. I thought that was
 interesting 'cause it made you think the stuff you read, actually
 have to pay more attention to it.

 I thought that was good 'cause it helped incorporate what you read
 from the section and how you used it, how you actually be using it
 and how you would be violating the copyright law. It helped. It
 implemented what you learned.

 I thought it was good. Like I said it made you think the stuff you
 read and actually made you pay more attention to the content.


The generative activity data from the proactive interaction group indicated that students completed the generative activities with high accuracy. The accuracy for the six generative activities was 94%, 90%, 94%, 100%, 84%, and 94%, respectively, for an overall accuracy of 93%. This high accuracy means that students performed well on the generative activities during the learning process.

DISCUSSION

Treatment Effects on Achievement

The results indicated that students in the reactive interaction group outperformed those in the control group. Adding elaborated immediate feedback makes students interact with the learning material more and think more about what they learn. As a result, students are more engaged and invest more in the learning process, which leads to deeper processing of the learning material and better results. This result is consistent with the view that the higher the interaction level, the better the instruction (Liaw & Huang, 2000; Sims, 1997).

The results also indicated that subjects who received the instructional material with the proactive interaction level performed significantly better on the achievement posttest than those who received the instructional material with the low interaction level. Adding example/scenario generation to the learning process promotes knowledge construction and generation by integrating new information with prior knowledge. This is consistent with the expectations for the proactive interaction level (Schwier & Misanchuk, 1993) and with the view that higher interaction leads to better results (Liaw & Huang, 2000; Sims, 1997).

The results indicated that subjects who received the instructional material with proactive interaction level may have performed slightly better on the achievement posttest than those who received the instructional material with reactive interaction level. But, the difference was not statistically significant. Therefore, the hypothesis that proactive interaction would generate superior outcomes to reactive interaction was not supported in this study.

There are several possible reasons for this result. First, the small number of test items and the quality of the test items likely affected the study results; the small number of items made it difficult to detect the expected difference between the two groups. Second, the design of the proactive interaction level material may not have been effective enough to distinguish between the reactive and proactive levels. In the learning material, each generative activity was designed to come immediately after a practice question, and students were asked to generate a similar example after the practice. This design may have led students to generate examples/scenarios without much deep processing of the learning content. Third, there might be a significant difference between the proactive and reactive interaction groups in the long term. However, this design used only an immediate posttest. A delayed test might show the expected results, because deeper mental processing yields better retention of the learning material.

Treatment Effects on Motivational Perceptions

The results indicated that subjects who received the instructional material with the reactive interaction level demonstrated significantly better motivation than those who received the instructional material with the low interaction level. Qualitative data from the interviews strongly support this conclusion. All the interviewed students in the reactive interaction group expressed very positive attitudes toward the program. They thought the program was very informative, well organized, and easy to use. Many of them indicated that they learned a lot from the program, especially things that would be very helpful to their future teaching. Many of them thought that the immediate feedback was a great idea; it motivated and helped their learning by giving immediate information, reinforcing what they learned, and clarifying mistakes.

The analysis of the performance of students on subcategories of IMMS indicated that the learning material with the reactive interaction level drew more attention than that with the low interaction level, and students felt more satisfied with the learning material at the reactive interaction level than did those with the learning materials at the low interaction level. Although the differences were not statistically significant in the relevance and confidence categories, the results showed a trend that adding immediate feedback was better than none in terms of increasing the relevance of the instruction and enhancing the subjects' confidence.

The results of the data analysis indicated a trend toward higher motivation scores among subjects who received the instructional material with the proactive interaction level compared to subjects who received the instructional material with the low interaction level. But, the difference was not statistically significant. Therefore, the hypothesis was not supported.

In the literature, no studies were found focusing on the motivational effects of various generative activities. This study failed to provide empirical evidence that learning materials with generative activities could arouse more motivation than those without generative activities. By examining student performance on subcategories of IMMS, it appeared that students in the proactive interaction group performed slightly better than those in the control group for all subcategories. However, the differences were not significant. The data from the interviews corroborate these results because half of the interviewed students did not have positive attitudes toward the generative activity strategy.

The results indicated that subjects who received the instructional material with the proactive interaction level did not outscore those who received the instructional material with the reactive interaction level; the difference was not statistically significant. Given that the immediate feedback was welcomed by the subjects and the generative activity strategy created mixed feelings among them, this result was not surprising. The analysis of the subcategories of the IMMS also indicated that the generative activity strategy and the immediate feedback strategy had almost the same effects on drawing subjects' attention, increasing the relevance of the instruction, building subjects' confidence, and making subjects feel satisfied with the learning.

Treatment Effects on Time-on-Task

The statistical analyses of time-on-task for the three groups indicated that students in the proactive and reactive interaction groups spent significantly more time on the learning content than those in the control group, and that students in the proactive interaction group spent more time on the learning content than those in the reactive interaction group, although this difference was not statistically significant. The interview data from the reactive interaction group indicated that students reviewed the content following the feedback information, reread the information to answer questions again, and tried different answers when working through the multiple-choice practice questions. The interview data from the proactive interaction group likewise showed that students reflected on the content and the example and made sure they understood the content before creating their own examples/scenarios. All of these behaviors help explain the additional time on task. The result is consistent with Lin's (1995) finding that students engaging in a deeper level of processing spend more time on the learning task.

Examination of the correlation between achievement and time-on-task found that time-on-task was only weakly correlated with achievement (r = 0.30). This implies that students who spent more time on learning did not necessarily perform better, which supports Lin's (1995) study. Although processing information at a deeper level takes more time (Liu, 1992), time-on-task should not be taken as an index of the depth of processing and does not appear to be a decisive variable for determining whether information has been processed successfully at a deeper level or has been well retained (Lin, 1995).
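As a quick illustration of this kind of correlation check, a Pearson r can be computed as below. The arrays are randomly generated placeholders, not the study's records.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
time_on_task = rng.normal(22.6, 9.0, 95)                         # minutes, placeholder values
achievement = 20 + 0.15 * time_on_task + rng.normal(0, 4.0, 95)  # placeholder posttest scores

# Pearson correlation between minutes spent and posttest score
r, p = stats.pearsonr(time_on_task, achievement)
print(f"r = {r:.2f}, p = {p:.3f}")  # a small r: more time did not guarantee higher scores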

CONCLUSIONS AND RECOMMENDATIONS

The findings of this study show that students who used instructional materials with embedded immediate feedback or generative activities outperformed, on the achievement posttest, those who used instructional materials without any added strategy. However, the proactive interaction group did not perform significantly better than the reactive interaction group. Thus, the expectation that a higher interaction level leads to better results was only partially supported by the findings of this study. Furthermore, the findings indicated that subjects demonstrated higher motivation toward the learning material when they were exposed to instructional materials with immediate feedback.

The analysis of time-on-task showed that the group learning with the generative activity strategy spent significantly more time on learning than the control group, and spent more time on learning than the group learning with immediate feedback. This is consistent with the view that greater mental effort or persistent cognitive engagement takes more time. However, time-on-task was only weakly correlated with achievement, which implies that students who spent more time on learning did not necessarily perform better.

It was expected that the generative activity strategy would enhance learning compared with the immediate feedback strategy. The findings indicated that students in the proactive interaction group did not outperform those in the reactive interaction group. Also, the students in the proactive interaction group did not demonstrate higher motivation than those in either the reactive interaction group or the control group.

As a result of these analyses, the following conclusions can be drawn. First, embedding elaborated immediate feedback into web-based learning materials enhances student performance because it can reinforce learning, clarify concepts, and guide learners through the learning content. With elaborated immediate feedback in the learning process, students interact more with the learning content, process it more deeply, and perform better, which supports the belief that the higher the interaction level, the better the instruction. Second, the example/scenario generation strategy enhances student performance in a web-based learning environment. It helps learners reflect on the learning content, apply what they learn, and incorporate the learned information into their own subject areas. This strategy is effective in improving students' learning at the application and analysis levels. Third, employing elaborated immediate feedback in a web-based learning environment can motivate learners by drawing more attention to, and increasing satisfaction with, the learning material.

Although this study yielded some encouraging findings about the effects of different interaction levels on student achievement and motivational perceptions, many issues related to different aspects of this research have been raised and need to be investigated in the future.

This study employed an immediate posttest to evaluate student achievement. The results failed to support the hypothesis that students in the proactive interaction group would outperform those in the reactive interaction group. The literature suggests that the deeper the mental processing, the better the retention. Therefore, research with a delayed posttest should be conducted to examine possible differences between the proactive and reactive interaction groups.

In this study, the "track student" feature in WebCT was used to track the time students spent on the learning material. However, it could not track the time students spent on different parts of the learning material, on reflection, or on reviewing the learning material, all of which are important for understanding the learning process. Therefore, more advanced time-tracking capabilities should be used in future research.

To design the instructional materials with proactive interaction focusing on knowledge construction and generation, example/scenario generation activities were implemented. There are many possible generative activities involving different levels of mental processing ranging from coding to translation. Example/scenario generation involves mental processing at the integration level, which is lower than the translation level. Future research should consider employing generative activities at the translation level to increase the interaction level between the students and the learning materials.

The elaborated immediate feedback strategy was effective in enhancing student achievement and motivational perceptions. This strategy was implemented with pop-up windows. However, there are other ways to implement feedback, such as loading a separate browser window or using different layers. Future studies may consider employing those techniques and assessing their effects on learning.

All these efforts may yield useful information to further our understanding of effects of learning materials with different interaction levels on student achievement and perceptions of motivation in a web-based learning environment.
Table 1

Means and Standard Deviations of Achievement Posttest Scores for Each Group

                              n    M (SD)
Control Group                 34   22.00 (4.50)
Reactive Interaction Group    30   25.80 (3.80)
Proactive Interaction Group   31   27.00 (3.30)

Note: n = number of subjects; M (SD) = mean (standard deviation) of achievement scores.

Table 2

Means and Standard Deviations of Dependent Variables for Each Group

                              n    IMMS Total       Attention      Relevance      Confidence     Satisfaction
Control Group                 34   111.80 (20.50)   34.40 (8.20)   31.80 (6.20)   30.90 (6.60)   13.80 (4.70)
Reactive Interaction Group    30   123.90 (20.00)   39.30 (8.00)   33.50 (5.90)   33.10 (5.80)   17.90 (4.30)
Proactive Interaction Group   31   121.20 (21.50)   37.60 (8.10)   33.60 (6.40)   33.70 (6.90)   16.20 (4.10)

Note: n = number of subjects; cell entries are M (SD) for the IMMS total and for the attention, relevance, confidence, and satisfaction subscales.

Table 3

Means and Standard Deviations of Time-on-Task (in Minutes) for Each Group

                              n    M (SD)          Max   Min
Control Group                 34   17.70 (5.70)    31    9
Reactive Interaction Group    30   22.70 (8.70)    40    10
Proactive Interaction Group   31   27.50 (9.60)    55    11


References

Barab, S., Yong, M., & Wang, J. (1998). The effects of navigational and generative activities in hypertext learning on problem solving and comprehension. International Journal of Instructional Media, 26(3), 283-305.

Berge, Z. (1999). Interaction in post-secondary web-based learning. Educational Technology, January-February, 5-11.

Clark, K., & Dwyer, F. (1998). Effect of different types of computer-assisted feedback strategies on achievement and response confidence. International Journal of Instructional Media, 25(1).

Dempsey, J., Driscoll, M., & Swindell, L. (1993). Text-based feedback. In D. V. Dempsey & G. C. Sales (Eds.), Interactive instruction and feedback. Englewood Cliffs, NJ: Educational Technology Publications.

Dempsey, J., & Wager, S. (1988). A taxonomy for the timing of feedback in computer-based instruction. Educational Technology, 28(10), 20-25.

Farquhar, J.D., & Regian, J.W. (1994). The type and timing of feedback within an intelligent console-operation tutor. Paper presented at the 1994 Conference of the Human Factors and Ergonomics Society.

Gilbert, L., & Moore, D. (1998). Building interactivity into web course: Tools for social and instructional interaction. Educational Technology, May-June, 29-35.

Gilman, D.A. (1969). Comparison of several feedback methods for correcting errors by computer-assisted instruction. Journal of Educational Psychology, 60, 503-508.

Grabowski, B.L. (1996). Generative learning: Past, present, and future. In D.H. Jonassen (Ed.) Handbook for research on educational communications and technology. Englewood Cliffs, NJ: Educational Technology Publishing.

Hannafin, M. (1989). Interaction strategies and emerging technologies: Psychological perspective. Canadian Journal of Educational Communication, 18(3), 167-181.

Hooper, S., Sales, G., & Rysavy, S. (1994). Generating summaries and analogies alone or in pairs. Contemporary Educational Psychology, 19, 53-62.

Kulhavy, R.W. (1977). Feedback in written instruction. Review of Educational Research, 47, 211-232.

Kulik, J., & Kulik, C. (1988). Timing of feedback and verbal learning. Review of Educational Research, 58(1), 79-97.

Keller, J.M. (1999). Motivation in cyber learning environments. International Journal of Educational Technology, 1(1), 7-30.

Liaw, S., & Huang, H. (2000). Enhancing interactivity in web-based instruction: A review of the literature. Educational Technology, 40(3), 41-45.

Lee, D. & Dwyer, F. (1994). The effect of varied feedback strategies on students' cognitive and attitude development. International Journal of Instructional Media, 21(1), 13-21.

Lin, E.T. (1995). Effects of prompted self-elaborations with embedded strategic cues on second language learners in a Hypermedia environment. Unpublished doctoral dissertation, Purdue University, West Lafayette.

Liu, M. (1992). Hypermedia-assisted instruction and second language learning. (ERIC Document Reproduction Service No. ED 349 954)

Milheim, W.D. (1996). Interactivity and computer-based instruction. Journal of Educational Technology System, 24(3), 225-233.

Moore, M.C. (1989). Three types of interaction. The American Journal of Distance Education, 3(2), 1-6.

Narciss, S. (1999). Motivational effects of the informativeness of feedback. (ERIC Document Reproduction Service No. ED 430 034)

Owston, R.D. (1997). The World Wide Web: A technology to enhance teaching and learning? Educational Researcher, 26(2), 27-33.

Schwier, R.A. & Misanchuk, E. (1993). Interactive multimedia interaction. Englewood Cliffs, NJ: Educational Technology Publications.

Sims, R. (1997). Interactivity: A forgotten art? [Online]. Available: http://itech1.coe.uga.edu/itforum/paper10/paper10.htm

Volk, C., & Ritchie, D. (2000). Comparison of generative learning strategies. School Science and Mathematics, 100(2).

Waldrop, P., Justin, J., & Adams, T. (1986, November). A comparison of three types of feedback in computer-assisted instruction. Educational Technology.

Wittrock, M. (1974). Learning as a generative process. Educational Psychologist, 11(2), 87-95.

Wittrock, M.C. (1991). Creative teaching of comprehension. The Elementary School Journal, 92(2), 184-191.

Wittrock, M., & Kelly, R. (1984). Teaching reading comprehension to adults in basic skills courses (Final Project Report). University of California, Los Angeles, Graduate School of Education.

Windschitl, M. (1998). The WWW and classroom research: What path should we take? Educational Researcher, 27(1), 28-33.

TIANGUANG GAO

Department of Educational Studies, Ball State University, USA

tgao@bsu.edu

JAMES D. LEHMAN

Department of Curriculum and Instruction, Purdue University, USA

lehman@purdue.edu
