
The impact of adapting content for students with individual differences.

Introduction and background

The effectiveness of online courses has been questioned, especially in relation to meeting the individual needs, perceptions, and learning outcomes of students (Akdemir & Koszalka, 2008; Rovai, 2003). Although web-based environments offer many advantages, such as greater interactivity, personalized instruction, and more independent learning (Brusilovsky, Sosnovsky, & Yudelson, 2009; Inan, Flores, Ari, & Arslan-Ari, 2010), one of the major challenges of web-based instruction has been, and continues to be, accommodating students with differing profiles, expectations, prior experiences, and learning abilities (Abidi, 2009; Dogan, 2008).

Research on individual differences has found that certain learner characteristics, such as field independence (Chen, 2010; Scheiter & Gerjets, 2007), high motivation (Artino, 2008), high self-efficacy (Artino, 2008; Yukselturk & Bulut, 2007), high self-regulation (Azevedo, Moos, Greene, Winters, & Cromley, 2008; Yukselturk & Bulut, 2007), and particular learning styles (Bajraktarevic, Hall, & Fullick, 2003; Graf, Liu, Kinshuk, & Yang, 2009), are more supportive of online learning than others. However, as in face-to-face classrooms, to be effective, online instructors must make accommodations for the large proportion of students who do not possess these characteristics. In addition, instructors must consider the differing demographics of distance learners. In comparison to face-to-face students, studies have found that distance learners are usually older (Bocchi, Eastman, & Swift, 2004; Moore & Kearsley, 2005), generally work full time (Inan, Yukselturk, & Grant, 2009), and are more likely to be female (Sullivan, 2001; Halsne & Gatta, 2002). These diverse background characteristics, coupled with cognitive and learning style differences among students, add to the complexity of accommodating individual differences in web-based learning environments.

Adaptive educational hypermedia systems

Adaptive Educational Hypermedia (AEH) systems have been lauded for their ability to accommodate individual differences in online learning. Through the incorporation of various instructional strategies, resources, assessments, and interfaces, AEH systems individualize instruction (e.g., content, interface, and strategies) and provide users with more personalized experiences (Inan & Grant, 2008). In their simplest form, AEH systems gather user information and preferences (Brusilovsky, 2001; Triantafillou, Pomportsis, & Demetriadis, 2003; Tsianos, Germanakos, Lekkas, Mourlas, & Samaras, 2009); make inferences based on the collected data; and then employ various adaptive methods to accommodate each individual student (Inan & Grant, 2008; Lee & Park, 2007; Shute & Zapata-Rivera, 2007).

Several adaptive systems have been designed and developed to accommodate learners' individual differences. Examples include AHA! (Stash, Cristea, & de Bra, 2006) and INSPIRE (Papanikolaou, Grigoriadou, Kornilakis, & Magoulas, 2003), which adapt instruction based on student learning styles; ELM-ART II, which adapts instruction based on knowledge levels and student preferences (Weber & Specht, 1997); INTERBOOK (Brusilovsky, Eklund, & Schwarz, 1998), which adapts instruction based on knowledge level; and AES-CS (Triantafillou, Pomportsis, & Georgiadou, 2002), which adapts instruction based on student cognitive style.

Unfortunately, even though numerous AEH systems have been designed and developed, a major limitation in the literature is the lack of evaluation studies documenting evidence of their usability and effectiveness in terms of student performance, motivation, and/or attitudes. AEH systems, more than most, require evaluation because of their inherent usability problems (Hook, 2000; Jameson, 2003; Paramythis, Weibelzahl, & Masthoff, 2010). Research suggests, however, that the lack of evaluation, or of its reporting, may be due to the difficulties encountered when evaluating these types of systems (Chin, 2001; Masthoff, 2002; Weibelzahl, 2005). These difficulties include, but are not limited to, properly defining the control group, determining evaluation criteria, and selecting appropriate samples (Weibelzahl, 2005). Even with these difficulties, Gena (2005) asserts that the evaluation of adaptive systems, especially formative evaluation, is crucial and should be common practice.

Formative evaluation of AEH systems

Formative evaluation, an iterative process of collecting data and information during development, can be used to assess and improve the usability and effectiveness of AEH systems (Gena, 2005; Velsen, Geest, Klaassen, & Steehouder, 2008). Formative evaluation plays a decisive role in many systematic design models because it serves as a method of quality control while concurrently focusing on cost-effective improvement throughout the product-development cycle--rather than only at the end, as with summative evaluation (Dick, Carey, & Carey, 2005; Brusilovsky, Karagiannidis, & Sampson, 2004).

Formative evaluations of AEH systems in the literature are limited but include evaluations of INSPIRE (Papanikolaou et al., 2003), AES-CS (Triantafillou et al., 2002), ISIS-Tutor (Brusilovsky & Pesin, 1998), PUSH (Hook, 1998), and ELM-ART (Weber & Specht, 1997). Papanikolaou and colleagues (2003) found that students exhibited positive attitudes towards the content structure and felt that INSPIRE was easy to follow and comprehend. Results further revealed that students preferred to have more control over system functionality. Upon collecting student feedback from evaluations of AES-CS, researchers redesigned the interface and addressed design weaknesses by limiting the scrolling of content pages, keeping a consistent layout throughout the system, and providing users with more control (Triantafillou et al., 2002). Brusilovsky and Pesin (1998) found that adaptive navigation support helped users reduce their navigation effort, while Hook (1998) found that PUSH required its users to make fewer decisions, which, in turn, resulted in less cognitive load. A formative evaluation of ELM-ART conducted by Weber and Specht (1997) found that novice learners benefited more from the direct guidance provided than did more knowledgeable students.

Purpose of study

The purpose of this study is to present the findings from the formative evaluation (field testing) of an adaptable tutorial designed and developed by researchers at a large southwestern university in the U.S. This adaptable tutorial was designed to individualize instructional content for students based on two characteristics--student motivation and prior knowledge. The formative evaluation had two goals: first, to assess whether the objectives of the tutorial (i.e., knowledge and motivation gains) were being met, and second, to identify weak or problematic areas, in terms of usability, where the tutorial could be improved. The specific research questions were:

* Based on beginning knowledge and motivation levels (high vs. low), do students differ in terms of knowledge gains, time spent, and their overall appraisal of the tutorial?

* What were students' perceptions of the adaptable tutorial in terms of visual design, organization/navigation, content presentation, assessment feedback, and technical issues?

Methods

Description of adaptable tutorial

Combining adaptive hypermedia methods with strategies proposed by instructional theory and motivation models, an adaptable tutorial was designed/developed and iteratively evaluated by researchers. Due to the complexity of adapting to multiple learner characteristics, adaptation for this tutorial was limited to two learner characteristics: student motivation and prior knowledge. Many studies have considered prior knowledge (Brusilovsky, 2003; Chen & Paul, 2003) and motivation (Johns & Woolf, 2006; Song & Keller, 2001) to be two of the most important factors that should influence the design of web-based instruction. Further, due to the complexity of adapting to multiple levels of these factors, adaptation was limited to two discrete levels--high and low. Using combinations of high/low motivation and prior knowledge levels, four user model clusters were defined and used to categorize learners. These clusters were: (1) low motivation and low prior knowledge, (2) low motivation and high prior knowledge, (3) high motivation and low prior knowledge, and (4) high motivation and high prior knowledge.

Content for this tutorial came from an undergraduate statistics course in which basic introductory statistics topics are introduced (e.g., theory of probability, sample spaces, and probability rules). In this tutorial, content was chunked and presented in three sections, each consisting of 4-5 dynamic web pages. After organizing the content, researchers customized the design of content (i.e., explanations, examples, and practices) by instantiating research-based instructional design guidelines recommended for students belonging to each of the four predefined user model clusters. For example, students with low prior knowledge (clusters 1 and 3) received additional explanations, resources, and content (Brusilovsky, Yudelson, & Hsiao, 2009; Kalyuga, 2006; Kenny & Pahl, 2009); had fewer navigation opportunities (e.g., hidden links) (Brusilovsky et al., 2009; Scheiter & Gerjets, 2007); and received well-guided, slowly paced practice (Clarke, Ayres, & Sweller, 2005; Kalyuga, 2007; Scheiter, Gerjets, Vollmann, & Catrambone, 2009). Students with high prior knowledge (clusters 2 and 4), on the other hand, saw fewer graphics (Song & Keller, 2001); received less structured instruction (Clarke et al., 2005; Kalyuga, 2007); had more control over navigation (Deubel, 2003; Kalyuga, 2007; Chrysostomou, Chen, & Xiaohui, 2009); and received minimal or no guided practice (Tuovinen & Sweller, 1999). The design of content for each cluster was then further individualized based on student motivation levels. To support the adaptable tutorial, two databases were designed and developed. One database stored the individualized content (e.g., text content, text examples, images, Flash files, etc.), indexed with a composite factor value (1, 2, 3, or 4). The second database stored student information (i.e., demographics and motivation and prior knowledge levels).
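To make the two-database design concrete, the following is a minimal sketch of how such a content store could be organized and queried. It assumes a simple relational (SQLite) layout; the table and column names (content_items, students, composite_factor, etc.) are hypothetical and are not taken from the authors' actual implementation.

```python
import sqlite3

# Minimal sketch of the two stores described above; table and column names
# are hypothetical, not the authors' schema.
conn = sqlite3.connect("tutorial.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS content_items (
    id INTEGER PRIMARY KEY,
    section INTEGER NOT NULL,          -- 1..3
    composite_factor INTEGER NOT NULL, -- 1..4, one value per user model cluster
    kind TEXT NOT NULL,                -- e.g., 'text', 'example', 'image', 'flash'
    payload TEXT NOT NULL              -- text content or a path to a media file
);
CREATE TABLE IF NOT EXISTS students (
    id INTEGER PRIMARY KEY,
    demographics TEXT,
    motivation_level TEXT,             -- 'high' or 'low' for the current section
    knowledge_level TEXT               -- 'high' or 'low' for the current section
);
""")

def content_for(section: int, composite_factor: int) -> list:
    """Return all content rows for one section and one user model cluster."""
    cur = conn.execute(
        "SELECT kind, payload FROM content_items "
        "WHERE section = ? AND composite_factor = ? ORDER BY id",
        (section, composite_factor),
    )
    return cur.fetchall()
```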

Upon logging into the tutorial, students were presented with a prior knowledge test on probability and a survey measuring their initial motivation. Based on the results of these assessments, students were placed into one of the four predefined user model clusters for the first section. As students proceeded, the adaptable tutorial looped through the database, extracted all content with composite factor values matching the student's user model cluster, and displayed that content to the student along with the other adaptation techniques specific to that cluster. Before beginning each section, students' knowledge and motivation were reassessed to update their user model cluster for the upcoming section. Based on the results of these assessments, students either maintained their current user model cluster or were placed into another cluster for the upcoming section of the adaptable tutorial. If a student switched to another user model cluster, the adaptable tutorial presented the content along with the adaptation techniques specific to that cluster.
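The per-section loop described above can be sketched as follows. This is an illustrative sketch only, not the authors' code: assess_motivation, assess_knowledge, and render are hypothetical stand-ins for the motivation survey, the section pretest, and page display; content_for is the query from the previous sketch; and classify_into_cluster is the placement rule sketched under "Learner characteristics measured" below.

```python
# Hypothetical stand-ins for the motivation survey, the section pretest, and
# the page display used by the real tutorial; placeholders only.
def assess_motivation(student_id, section): ...
def assess_knowledge(student_id, section): ...
def render(student_id, kind, payload, cluster): ...

def run_tutorial(student_id, sections=(1, 2, 3)):
    """Illustrative per-section adaptation loop (not the authors' actual code)."""
    for section in sections:
        # Reassess before every section and update the user model cluster.
        imms_mean = assess_motivation(student_id, section)
        points, max_points = assess_knowledge(student_id, section)
        cluster = classify_into_cluster(imms_mean, points, max_points)

        # Pull every content item whose composite factor value matches the
        # cluster and present it with that cluster's adaptation techniques.
        for kind, payload in content_for(section, cluster):
            render(student_id, kind, payload, cluster)
```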

Evaluation design and procedures

For field testing purposes, the researchers used a tutor-alone evaluation design (Woolf, 2009). In this design, no control group is used; rather, a single group of students works with the same tutorial, and specific outcomes are measured for that group. The goal of this evaluation design is to establish or identify something about the learner that can be used to predict learning outcomes on posttests (Woolf, 2009). For this study, the outcomes of interest were changes in student motivation and knowledge levels. Field testing of the adaptable tutorial included two stages. In the first stage, data were collected from participants who studied the tutorial online; students' prior and post knowledge, prior and post motivation, and their thoughts about the tutorial were assessed. In the second stage, students studied the material in a computer laboratory setting; participants were observed by the researchers to identify any technical problems, were asked to fill out an evaluation survey, and took part in focus group interviews.

Throughout this study, combinations of quantitative and qualitative methods were used to obtain diverse perspectives on the design of the tutorial. Data were obtained through questionnaires, surveys, focus group interviews, and system logs (see Table 1).

Participants

The target audience for the tutorial was undergraduate students. For field testing purposes, a total of 186 undergraduates from a large southwestern university participated in two stages. Participants came from six sections of an undergraduate introductory technology course. More than half of the students participating in the initial stage were female (53.3%). Their ages ranged from 18 to 30 years old with a mean age of 19.8. The majority of students participating in the second stage were female (55.6%), between the ages of 17 and 26. Each student received extra credit for their participation.

Learner characteristics measured

In order to make inferences about which predefined user model cluster a student belonged to, researchers had to decide how learners' motivation and knowledge would be measured. Keller's Instructional Materials Motivation Survey (IMMS) was used to measure student motivation. This instrument is based on the Attention, Relevance, Confidence, and Satisfaction (ARCS) motivational design model (Keller, 1987a, b) and is widely applied to the motivational evaluation of computer-based instructional materials (Huang, Diefes Dux, & Imbrie, 2006; Huang, 2010; Inan et al., 2010; Song & Keller, 2001). The IMMS consists of five-point Likert-scale items whose values ranged from 1 = Strongly Disagree to 5 = Strongly Agree. For each tutorial section, students whose average motivation, as measured by the adapted IMMS, was less than or equal to 3 were placed into the low motivation group (cluster 1 or 2, depending on prior knowledge level), while those whose average motivation was greater than 3 were placed into the high motivation group (cluster 3 or 4, depending on prior knowledge level).

For the assessment of knowledge, a locally developed 10-item multiple-choice instrument was used. Items measured prior knowledge for each section of the tutorial. Students were given 1 point for each correct response and 0 points for each incorrect response; in addition, one item from each section was weighted at two points because of its difficulty. For each section of the tutorial, students scoring less than or equal to 50 percent of the available points for that section were placed into the low knowledge group (cluster 1 or 3, depending on motivation level), while those scoring higher than 50 percent were placed into the high knowledge group (cluster 2 or 4, depending on motivation level).
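Taken together, the motivation and knowledge cutoffs above imply a simple placement rule. The following is a minimal sketch of that rule; the function name and signature are illustrative, not the authors' code.

```python
def classify_into_cluster(imms_mean: float, points: float, max_points: float) -> int:
    """Place a student into one of the four predefined user model clusters.

    Cutoffs follow the rules described above: an adapted-IMMS mean <= 3 counts
    as low motivation, and a section score <= 50% of the available points
    counts as low prior knowledge.
    """
    low_motivation = imms_mean <= 3.0
    low_knowledge = points <= 0.5 * max_points
    if low_motivation:
        return 1 if low_knowledge else 2   # low motivation: cluster 1 (low PK) or 2 (high PK)
    return 3 if low_knowledge else 4       # high motivation: cluster 3 (low PK) or 4 (high PK)

# Example: an IMMS mean of 3.4 and 6 of 11 available points
# -> high motivation, high prior knowledge -> cluster 4.
assert classify_into_cluster(3.4, 6, 11) == 4
```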

Field testing instruments

In the field-testing phases of this study, various data collection techniques and instruments were used for assessment and evaluation purposes. These included the following:

* Student Questionnaire: Instrument used to collect student perceptions about the adaptable tutorial. To obtain an overall appraisal, participants were asked to answer the question, "What is your overall rating of the tutorial?" on a response scale ranging from 1 (Poor) to 5 (Excellent). In addition, the questionnaire gathered students' opinions about what they liked most and least about the adaptable tutorial.

* Evaluation Survey: Survey used to gather students' perceptions of the different utilities provided within the tutorial. It included five subsections: visual design, organization/navigation, content presentation, assessment feedback, and perceptions (Elissavet & Economides, 2003; Kay & Knaack, 2009; Nordhoff, 2002; Sahari, Abdul Ghani, Selamat, & Yunus, 2009). Students rated each statement on a 5-point scale ranging from "Strongly Disagree" (1) to "Strongly Agree" (5).

* Observation Form: Form developed to help the researchers take notes of any technical issues encountered while participants studied the materials.

* System logs: These were used to track the amount of time students spent on each page and total completion time.

* Interview Guide: A guide used to list the interview questions and outline the topics to be investigated. Interviews were conducted to obtain in-depth information about students' experiences with the tutorial.

Data analysis

In this study, several factorial and mixed-design ANOVAs were used to investigate how different clusters of students benefited from the tutorial in terms of knowledge gain, time spent, and appraisal of the tutorial. In addition, descriptive statistics were used to summarize students' thoughts on the tutorial. Qualitative data from interviews, open-ended questions, and class observations were collected to supplement the quantitative results. Following collection, the qualitative data were transferred to an electronic format and then analyzed through iterative cycles of data examination, exploration of similarities and differences among the participants, and a search for confirming and disconfirming evidence that could be incorporated into the conclusions (Merriam, 1998; Miles & Huberman, 1984).

Results

Knowledge gain group differences

The goal of the first research question was to investigate whether certain groups of students benefited more from the tutorial, in terms of knowledge gains, than others. To answer this question, researchers used two mixed-design ANOVAs. The first, using beginning prior knowledge as a between-subjects factor and knowledge gain as a within-subjects factor, examined which group of students, high versus low beginning prior knowledge, benefited more from the tutorial. Results revealed a significant interaction between beginning prior knowledge group and knowledge gain from pretest to posttest, F(1, 184) = 12.97, p < 0.001, indicating that the two groups benefited differently from the tutorial. Researchers then conducted separate paired-samples t-tests to examine whether the knowledge gain was significant for each group. Results showed that students in the low beginning prior knowledge group significantly increased their posttest scores, t(146) = -5.82, p < 0.001, whereas there was no significant knowledge gain for students with high beginning prior knowledge. The second mixed-design ANOVA, using beginning motivation level as a between-subjects factor and knowledge gain as a within-subjects factor, examined which group of students, high versus low beginning motivation, benefited more from the tutorial. Results indicated that student knowledge significantly increased after studying the tutorial, F(1, 184) = 9.05, p < .05, for both groups. However, the interaction between beginning motivation level and knowledge gain was not significant, indicating that both motivation groups benefited similarly from the tutorial.
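For readers who wish to reproduce this style of analysis on their own data, the sketch below shows one way to run a mixed-design ANOVA and a follow-up paired-samples t-test in Python using the pingouin and SciPy libraries. The long-format data file and its column names (student, pk_group, time, score) are hypothetical; this is not the authors' analysis script.

```python
import pandas as pd
import pingouin as pg
from scipy import stats

# Hypothetical long-format data: one row per student per test occasion,
# with columns 'student', 'pk_group' ('high'/'low'), 'time' ('pre'/'post'),
# and 'score'.
df = pd.read_csv("tutorial_scores.csv")

# Mixed-design ANOVA: beginning prior knowledge as the between-subjects
# factor, pre/post test occasion as the within-subjects factor.
aov = pg.mixed_anova(data=df, dv="score", within="time",
                     subject="student", between="pk_group")
print(aov)

# Follow-up paired-samples t-test within the low prior knowledge group only.
low = df[df["pk_group"] == "low"]
pre = low[low["time"] == "pre"].sort_values("student")["score"]
post = low[low["time"] == "post"].sort_values("student")["score"]
print(stats.ttest_rel(pre, post))
```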

Time spent on tutorial group differences

The second goal was to determine which groups of students spent more time on the tutorial. Two separate ANOVAs were conducted to examine time spent on the tutorial, with beginning motivation level and prior knowledge as between-subjects factors. Results from the first analysis showed that students with high beginning motivation spent more time on the adaptable tutorial (M = 18.06, SD = 10.51) than students with low beginning motivation (M = 13.64, SD = 9.97), F(1, 182) = 7.95, p < .05. The second analysis revealed that students in the high beginning prior knowledge group spent more time studying the tutorial (M = 19.16, SD = 10.71) than those in the low knowledge group (M = 15.61, SD = 10.36); however, this difference was not statistically significant.

Tutorial appraisal group differences

The third goal was to determine which groups of students were more satisfied with the tutorial. Two separate ANOVAs were conducted to examine students' overall appraisal of the tutorial, with beginning motivation and prior knowledge levels as between-subjects factors. In terms of appraisal ratings, no significant difference was found between students with low and high motivation levels. However, students in the high beginning prior knowledge group rated the tutorial significantly higher than those in the low beginning prior knowledge group, F(1, 182) = 6.89, p < .05.

Students' perceptions of the tutorial

The second goal of the evaluation was to identify weak or problematic areas in terms of usability. Various data sources, including a student questionnaire, an evaluation survey, and focus group interviews, were combined to evaluate students' perceptions of the system. Data were collected on student perceptions related to visual design, organization and navigation, content presentation, assessment and feedback, and technical issues.

According to the survey results, participants were overall satisfied with the visual design of the system (M = 3.94, SD = 0.39), the organization/navigation of the system (M = 4.11, SD = 0.49), the content and examples presented in the system (M = 3.62, SD = 0.44), and the assessment/feedback of the system (M = 3.50, SD = 0.97). Moreover, participants' perceptions of the use of multimedia in the tutorial were moderately high (M = 3.43, SD = 0.49) on a five-point Likert scale. These results were also supported by the interview and student survey data. Table 2 summarizes these findings.

During the second stage of formative evaluation, students were observed while they studied the tutorial in a computer lab setting. A few computer screens were observed to freeze while the students were studying, but these technical difficulties were minimal and isolated.

Summary of major findings

The key findings are summarized below:

* Low prior knowledge students benefited more from the tutorial, in terms of knowledge gains, than high prior knowledge students.

* Both high and low motivation groups benefited similarly from the tutorial in terms of knowledge gains.

* Students with high motivation spent more time studying the tutorial than low motivation students.

* Overall, students found the tutorial easy to read, comprehend, and navigate, and they enjoyed its interactivity. However, some students felt that the topic was boring and the content was not relevant to their majors, and they disliked the long surveys and assessment questions.

Discussion

For this study, researchers conducted two stages of formative evaluation of an adaptable tutorial prototype with undergraduate students from various backgrounds. Results from the formative evaluation showed that low prior knowledge students seemed to benefit more from the tutorial, in terms of knowledge gains, than students with high prior knowledge. These results are similar to those from ELM-ART (Weber & Specht, 1997), which found that novice students benefited more from the adaptive support provided by the tutorial than did expert learners. In terms of motivation, both high and low motivation groups benefited similarly from the tutorial. Appraisal ratings indicated that, overall, students were satisfied with the adaptable tutorial. Similar indications of positive student satisfaction were found in the evaluations of INSPIRE (Papanikolaou et al., 2003), AES-CS (Triantafillou et al., 2002), and ELM-ART (Weber & Specht, 1997).

In terms of tutorial usage, results indicated that high motivation students spent more time studying the tutorial than low motivation students. These results are consistent with Hodges (2004), who asserts that motivated learners are more inclined to continue learning. Some students indicated that the topic was boring and not relevant to their majors. Research indicates that implementing relevance-enhancing strategies can increase student motivation and performance, especially for learners who do not find the material interesting or relevant (Artino, 2008; Song & Keller, 2001).

Analysis of interview data and open-ended questions found that students felt the tutorial was easy to read and follow, enjoyed the interactivity, and liked the navigation and organization. It was also discovered that some students felt the material was too easy, while others found the assessment questions too hard and difficult to understand. Many students reported dissatisfaction with the intermediate assessment surveys, which they felt were "long and repetitive." As in the evaluation of AES-CS (Triantafillou et al., 2002), students in this study suggested that more information be provided about the number of sections, the number of questions, and the anticipated completion time.

Conclusion

AEH systems provide many advantages to students because they more closely address the issue of individual differences. However, the impact of such systems may be limited if only certain groups benefit more than others and/or if usability issues become problematic. Because the design and development of AEH systems is time consuming and costly, formative evaluation of such systems is crucial and should become a common practice (Brusilovsky et al., 2004; Gena, 2005; Paramythis, Weibelzahl, & Masthoff, 2010). In this study, formative evaluation results encouraged researchers to reconsider certain aspects of their tutorial's current design and to offer recommendations for future research.

First, although the intended goal of this tutorial was to benefit students with differing knowledge and motivation, results indicate that low prior knowledge students benefited more than high prior knowledge students. It is therefore recommended that future research explore how the design of content can be adapted to better meet the needs of more advanced learners. Instructional design strategies which have been shown to be effective for advanced learners include less structured instruction (Clarke et al., 2005; Kalyuga, 2007), multiple links to additional resources (Brusilovsky et al., 2009), more control over navigation (Chrysostomou et al., 2009; Deubel, 2003; Kalyuga, 2007), and faster transitions between topics (Reisslein, 2005).

Secondly, while the collection of student data was vital for placing students into their respective user model clusters, evaluation results indicated that one of the most disliked features of the tutorial was the frequency and number of assessment items. In addition to the negative impact on student perceptions, previous research suggests that the intrusiveness of assessments may have unintended effects on student motivation and performance (Salden, Aleven, Schwonke, & Renkl, 2010). It is therefore recommended that future research investigate alternative evaluation structures and/or techniques for collecting data from students learning from adaptable tutorials. Assessment techniques which show promise include rapid dynamic assessment (Kalyuga & Sweller, 2004, 2005) and Bayesian knowledge tracing (Corbett & Anderson, 1995).
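As a brief illustration of the latter, the sketch below shows a single update step of Bayesian knowledge tracing (Corbett & Anderson, 1995). The parameter values (prior, transit, guess, and slip probabilities) are illustrative only and would have to be fitted to real response data.

```python
def bkt_update(p_know, correct, p_transit=0.1, p_guess=0.2, p_slip=0.1):
    """One step of Bayesian knowledge tracing (Corbett & Anderson, 1995).

    Returns the updated probability that the skill is known after observing
    one response; parameter values here are illustrative, not fitted.
    """
    if correct:
        evidence = p_know * (1 - p_slip)
        p_post = evidence / (evidence + (1 - p_know) * p_guess)
    else:
        evidence = p_know * p_slip
        p_post = evidence / (evidence + (1 - p_know) * (1 - p_guess))
    # Account for learning between practice opportunities.
    return p_post + (1 - p_post) * p_transit

# Example: two correct answers in a row, starting from a prior of 0.3.
p = 0.3
for observed_correct in (True, True):
    p = bkt_update(p, observed_correct)
print(round(p, 3))
```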

Third, although measures were taken to maintain and/or enhance student motivation through the design of content, evaluation results indicate that some students described the topic as boring. Future research on adaptable tutorials which individualize instruction based on student motivation may consider embedding more relevance-enhancing techniques, such as including authentic content that relates to students' majors, interests, and/or hobbies. Research has found that relevance is one of the most important factors to consider when designing motivational instruction (Edelson & Joseph, 2004; Kember, Ho, & Hong, 2008).

Finally, although this study examines important issues that should be considered when designing an adaptable tutorial (i.e., examination of whether the tutorial allows students to meet objectives and the identification of problematic usability issues) (Hook, 2000; Woolf, 2009), future studies should attempt to examine more closely whether the benefits of such tutorials or systems are due to adaptation (e.g., successful user modeling and adaptive decision making) or to the design of the system in general (e.g., the system interface). Previous studies have found that one of the major challenges of evaluating adaptive systems is the difficulty of isolating the "adaptivity" component of the system. However, recent adaptive system evaluation frameworks, such as the one proposed by Paramythis et al. (2010), show promise by providing a layered evaluation approach in which adaptivity is "decomposed" and evaluation is conducted in a "piece-wise" manner.

References

Akdemir, O., & Koszalka, T. A. (2008). Investigating the relationships among instructional strategies and learning styles in online environments. Computers & Education, 50(4), 1451-1461.

Abidi, S. S. R. (2009). Intelligent Information personalization: From issues to strategies. In C. Mourlas & P. Germanakos (Eds.), Intelligent User Interfaces: Adaptation and Personalization Systems and Technologies (pp. 118-146). Hershey, PA: IGI Global Press.

Artino, A. R. (2008). Motivational beliefs and perceptions of instructional quality: Predicting satisfaction with online training. Journal of Computer Assisted Learning, 24(3), 260-270.

Azevedo, R., Moos, D. C., Greene, J. A., Winters, F. I., & Cromley, J. G. (2008). Why is externally facilitated regulated learning more effective than self-regulated learning with hypermedia? Educational Technology Research and Development, 56(1), 45-72.

Bajraktarevic, N., Hall, W., & Fullick, P. (2003, May). Incorporating learning styles in hypermedia environment: Empirical evaluation. Paper presented at the Adaptive Hypermedia and Adaptive Web-Based Systems Workshop, Budapest, Hungary.

Bocchi, J., Eastman, J. K., & Swift, C. O. (2004). Retaining the online learner: Profile of students in an online MBA program and implications for teaching them. Journal of Education for Business, 79(4), 245-253.

Brusilovsky, P. (2001). Adaptive hypermedia. User Modeling and User-Adapted Interaction, 11(1-2), 87-110.

Brusilovsky, P. (2003). Adaptive navigation support in educational hypermedia: The role of student knowledge level and the case for meta-adaptation. British Journal of Educational Technology, 34(4), 487-497.

Brusilovsky, P., & Pesin, L. (1998). Adaptive navigation support in educational hypermedia: An evaluation of the ISIS-Tutor. Journal of Computing and Information Technology, 6(1), 27-38.

Brusilovsky, P., Eklund, J., & Schwarz, E. (1998). Web-based education for all: A tool for developing adaptive courseware. Computer Networks and ISDN Systems, 30(1-7), 291-300.

Brusilovsky, P., Karagiannidis, C., & Sampson, D. (2004). Layered evaluation of adaptive learning systems. International Journal of Continuing Engineering Education and Life-Long Learning, 14(4-5), 402-421.

Brusilovsky, P., Sosnovsky, S., & Yudelson, M. (2009). Addictive links: The motivational value of adaptive link annotation. New Review of Hypermedia & Multimedia, 15(1), 97-118.

Brusilovsky, P., Yudelson, M., & Hsiao, I.-H. (2009). Problem-solving examples as interactive learning objects for educational digital libraries. Journal of Educational Multimedia and Hypermedia, 18(3), 267-288.

Chen, L.-H. (2010). Web-based learning programs: Use by learners with various cognitive styles. Computers & Education, 54(4), 1028-1035.

Chen, S. Y., & Paul, R. J. (2003). Editorial: Individual differences in web-based instruction-an overview. British Journal of Educational Technology, 34(4), 385-392.

Chin, D. N. (2001). Empirical evaluation of user models and user-adapted systems. User Modeling and User-Adapted Interaction, 11(1-2), 181-194.

Chrysostomou, K., Chen, S. Y., & Xiaohui, L. (2009). Investigation of users' preferences in interactive multimedia learning systems: A data mining approach. Interactive Learning Environments, 17(2), 151-163.

Clarke, T., Ayres, P., & Sweller, J. (2005). The impact of sequencing and prior knowledge on learning mathematics through spreadsheet applications. Educational Technology Research & Development, 53(3), 15-24.

Corbett, A. T., & Anderson, J. R. (1995). Knowledge tracing: Modeling the acquisition of procedural knowledge. User Modeling and User-Adapted Interaction, 4(4), 253-278.

Dick, W., Carey, L., & Carey, J. (2005). The systematic design of instruction (6th ed.). Boston, MA: Allyn & Bacon.

Dogan, B. (2008). Association rule mining from an intelligent tutor. Journal of Educational Technology Systems, 36(4), 433.

Deubel, P. (2003). An investigation of behaviorist and cognitive approaches to instructional multimedia design. Journal of Educational Multimedia and Hypermedia, 12(1), 63-90.

Edelson, D. C., & Joseph, D. M. (2004, June). The interest-driven learning design framework: motivating learning through usefulness. Paper presented at the 6th international conference on Learning Sciences Santa Monica, California.

Elissavet, G., & Economides, A. A. (2003). An evaluation instrument for hypermedia courseware. Educational Technology & Society, 6(2), 31-44.

Gena, C. (2005). Methods and techniques for the evaluation of user-adaptive systems. The Knowledge Engineering Review, 20(1), 1-37.

Graf, S., Liu, T.-C., Kinshuk, Chen, N.-S., & Yang, S. J. H. (2009). Learning styles and cognitive traits--their relationship and its benefits in web-based educational systems. Computers in Human Behavior, 25(6), 1280-1289.

Halsne, A., & Gatta, L. (2002). Online versus traditionally-delivered instruction: A descriptive study of learner characteristics in a community college setting. Online Journal of Distance Learning Administration, 5(1). Retrieved September 2, 2010 from http://www.westga.edu/~distance/ojdla/spring51/halsne51.html

Hodges, C. B. (2004). Designing to motivate: Motivational techniques to incorporate in e-learning experiences. The Journal of Interactive Online Learning, 2(3). Retrieved September 2, 2010, from http://www.ncolr.org/jiol/issues/PDF/2.3.1.pdf

Hook, K. (1998). Evaluating the utility and usability of an adaptive hypermedia system. Knowledge-Based Systems, 10(5), 311-319.

Hook, K. (2000). Steps to take before IUIs become real. Interacting with Computers, 12(4), 409-426.

Huang, W., Diefes Dux, H., & Imbrie, P. K. (2006). A preliminary validation of Attention, Relevance, Confidence and Satisfaction model based Instructional Material Motivational Survey in a computer-based tutorial setting. British Journal of Educational Technology, 37(2), 243-259.

Huang, W. H. (2010). Evaluating learners' motivational and cognitive processing in an online game-based learning environment. Computers in Human Behavior, 27(2), 694-704.

Inan, F. A., Flores, R., Ari, F., & Arslan-Ari, I. (2010, April). Toward individualized online learning: The design and development of an adaptive web-based learning environment. Paper presented at the Annual Meeting of the American Educational Research Association, Denver, Colorado.

Inan, F. A., & Grant, M. M. (2008). Individualized Web-based instruction: Strategies and guidelines for instructional designer. In T. Kidd & H. Song (Eds.), Handbook of research on instructional systems & technology (pp. 582-595). Harrisburg, PA: Idea Group Publishing.

Inan, F. A., Yukselturk, E., & Grant, M. M. (2009). Profiling potential dropout students by individual characteristics in an online certificate program. International Journal of Instructional Media, 36(2), 163-176.

Jameson, A. (2003). Adaptive interfaces and agents. In J. Jacko & A. Sears (Eds.), The Human-Computer Interaction Handbook (pp. 316-318). Mahwah, NJ: Lawrence Erlbaum Associates.

Johns, J., & Woolf, B. (2006, July). A dynamic mixture model to detect student motivation and proficiency. Paper presented at the 21st National Conference on Artificial Intelligence (AAAI- 2006), Boston, MA.

Kalyuga, S. (2006). Assessment of learners' organised knowledge structures in adaptive learning environments. Applied Cognitive Psychology, 20(3), 333-342.

Kalyuga, S. (2007). Expertise reversal effect and its implications for learner-tailored instruction. Educational Psychology Review, 19(4), 509-539.

Kalyuga, S., & Sweller, J. (2004). Measuring knowledge to optimize cognitive load factors during instruction. Journal of Educational Psychology, 96(3), 558-568.

Kalyuga, S., & Sweller, J. (2005). Rapid dynamic assessment of expertise to improve the efficiency of adaptive e-learning. Educational Technology Research and Development, 53(3), 83-93.

Kay, K., & Knaack, L. (2009). Assessing learning, quality and engagement in learning objects: The learning object evaluation scale for students (LOES-S). Educational Technology Research and Development, 57(2), 147-168.

Keller, J. M. (1987a). Development and use of the ARCS model of instructional design. Journal of Instructional Development, 10(3), 2-10.

Keller, J. M. (1987b). Strategies for stimulating the motivation to learn. Performance & Instruction, 26(8), 1-7.

Kember, D., Ho, A., & Hong, C. (2008). The importance of establishing relevance in motivating student learning. Active Learning in Higher Education, 9(3), 249-263.

Kenny, C., & Pahl, C. (2009). Intelligent and adaptive tutoring for active learning and training environments. Interactive Learning Environments, 17(2), 181-195.

Lee, J., & Park, O. (2007). Adaptive instructional system. In J. M. Spector, M. D. Merrill, J. v. Merrienboer & M. P. Driscoll (Eds.), Handbook of research for educational communications and technology (3rd ed., pp. 469-484). New York: Routledge.

Masthoff, J. (2002). The evaluation of adaptive systems. In N. V. Patel (Ed.), Adaptive evolutionary information systems (pp. 329-347). Hershey, PA: Idea Group Publishing.

Merriam, S. B. (1998). Qualitative research and case study applications in education. San Francisco, CA: Jossey-Bass.

Miles, M. B., & Huberman, A. M. (1984). Qualitative data analysis: A sourcebook of new methods. Beverly Hills, CA: Sage Publications.

Moore, M. G., & Kearsley, G. (2005). Distance education: A systems view (2nd ed.). Belmont, CA: Wadsworth Publishing Co.

Nordhoff, H. (2002). The design and implementation of a computer-based course using Merrill's model of instructional design (Unpublished master's thesis). University of Pretoria, Gauteng, South Africa. Retrieved April 26, 2009, from http://upetd.up.ac.za/thesis/available/etd-08022002-094043/

Papanikolaou, K. A., Grigoriadou, M., Kornilakis, H., & Magoulas, G. D. (2003). Personalizing the interaction in a web-based educational hypermedia system: The case of INSPIRE. User Modeling and User-Adapted Interaction, 13(3), 213-267.

Paramythis, A., Weibelzahl, S., & Masthoff, J. (2010). Layered evaluation of interactive adaptive systems: Framework and formative methods. User Modeling and User-Adapted Interaction, 20(5), 383-453.

Reisslein, J. (2005). Learner achievement and attitudes under varying paces of transitioning to independent problem solving (Unpublished doctoral dissertation). Arizona State University, Phoenix, Arizona.

Rovai, A. (2003). In search of higher persistence rates in distance education online programs. Internet and Higher Education, 6(1), 1-16.

Sahari, N., Abdul Ghani, A. A., Selamat, H., & Yunus, A. S. (2009). Development and validation of mathematics courseware usefulness evaluation instrument for teachers. Journal of Applied Sciences, 9(3), 535-541.

Salden, R. J. C. M., Aleven, V., Schwonke, R., & Renkl, A. (2010). The expertise reversal effect and worked examples in tutored problem solving. Instructional Science, 38(3), 289-307.

Scheiter, K., & Gerjets, P. (2007). Learner control in hypermedia environments. Educational Psychology Review, 19(3), 285-307.

Scheiter, K., Gerjets, P., Vollmann, B., & Catrambone, R. (2009). The impact of learner characteristics on information utilization strategies, cognitive load experienced, and performance in hypermedia learning. Learning & Instruction, 19(5), 387-401.

Shute, V. J., & Zapata-Rivera, D. (2007). Adaptive technologies. In J. M. Spector, M. D. Merrill, J. v. Merrienboer & M. P. Driscoll (Eds.), Handbook of research for educational communications and technology (3rd ed., pp. 277-294). New York: Routledge.

Song, S. H., & Keller, J. M. (2001). Effectiveness of motivationally adaptive computer-assisted instruction on the dynamic aspects of motivation. Educational Technology Research & Development, 49(2), 5-22.

Stash, N., Cristea, A., & de Bra, P. (2006). Adaptation to learning styles in e-learning: Approach evaluation. In T. Reeves & S. Yamashita (Eds.), Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education (pp. 284-291). Chesapeake, VA: AACE.

Sullivan, P. (2001). Gender differences and the online classroom: Male and female college students evaluate their experiences. Community College Journal of Research and Practice, 25(1), 805-818.

Triantafillou, E., Pomportsis, A., & Demetriadis, S. (2003). The design and the formative evaluation of an adaptive educational system based on cognitive styles. Computers & Education, 41(1), 87-103.

Triantafillou, E., Pomportis, A., & Georgiadou, E. (2002). AES-CS: Adaptive educational system based on cognitive styles. In P. Brusilovsky, N. Henze & E. Millan (Eds.), Proceedings of Workshop on Adaptive Systems for Web-Based Education at the 2nd International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems (pp. 1-11). Malaga, Spain: University of Malaga.

Tsianos, N., Germanakos, P., Lekkas, Z., Mourlas, C., & Samaras, G. (2009). An assessment of human factors in adaptive hypermedia environments. In C. Mourlas & P. Germanakos (Eds.), Intelligent user interfaces: Adaptation and personalization systems and technologies (pp. 1-18). Hershey, PA: Information Science Reference.

Tuovinen, J., & Sweller, J. (1999). A comparison of cognitive load associated with discovery learning and worked examples. Journal of Educational Psychology, 91(2), 334-341.

Velsen, L., Geest, T. V. D., Klaassen, R., & Steehouder, M. (2008). User-centered evaluation of adaptive and adaptable systems: A literature review. The Knowledge Engineering Review, 23(3), 261-281.

Weber, G., & Specht, M. (1997). User modeling and adaptive navigation support in WWW-based tutoring systems. In A. Jameson, C. Paris & C. Tasso (Eds.), User Modeling: Proceedings of the Sixth International Conference (pp. 289-300). Vienna, NY: Springer-Verlag.

Weibelzahl S. (2005). Problems and pitfalls in the evaluation of adaptive systems. In S. Y. Chen & G. D. Magoulas (Eds.), Adaptable and Adaptive Hypermedia Systems (pp. 285-299). London, UK: IRM Press.

Woolf, B. P. (2009). Building intelligent interactive tutors: Student-centered strategies for revolutionizing e-learning. Burlington, MA: Morgan Kaufmann Publishers.

Yukselturk, E., & Bulut, S. (2007). Predictors for student success in an online course. Journal of Educational Technology & Society, 10(2), 71-83.

Raymond Flores, Fatih Ari, Fethi A. Inan and Ismahan Arslan-Ari

Educational Instructional Technology, Department of Educational Psychology and Leadership, College of Education, Texas Tech University, USA // rayflores983@gmail.com // fatihari@gmail.com // inanfethi@gmail.com // ismihanarslan@gmail.com

(Submitted October 13, 2010; Revised May 9, 2011; Accepted May 16, 2011)
Table 1. Summary of data collection process

Stage 1: Online field testing
    Description: Students used the adaptable tutorial individually at a distance.
    Instruments: Student Questionnaire, System Logs, Motivation Scale, Achievement Tests
    Participants: 153 undergraduate students from various degree programs

Stage 2: In-class observed field testing
    Description: Students used the adaptable tutorial in class while researchers observed and logged students' actions.
    Instruments: Observation Form, Formative Evaluation Survey, Student Questionnaire, System Logs, Motivation Scale, Achievement Tests
    Participants: 36 undergraduate students from various degree programs

Table 2. Summary of student evaluation of the tutorial

Visual design
    Most liked: Readable text, pleasing graphics, and high-quality animations
    Least liked: Plain and dull colors
    Suggestions: Use more appealing colors

Organization/navigation
    Most liked: Well organized, easy to follow, easy and simple navigation, and quickly loaded pages
    Least liked: Scroll-down structure
    Suggestions: Minimize scrolling; add more information about the number of sections and estimated completion time

Content presentation
    Most liked: Segmented simple pages, useful examples and practices, and supportive graphics and photos
    Least liked: Not beneficial for people with high prior knowledge
    Suggestions: Add more advanced content for individuals with high prior knowledge

Assessment/feedback
    Most liked: Beneficial practices and examples
    Least liked: Too many knowledge assessments and some difficult assessment questions
    Suggestions: Minimize the number of knowledge assessment questions and reword them