
A user-centric adaptive learning system for e-learning 2.0.

Introduction

Accompanying the rapid growth of Web 2.0, e-learning is evolving toward a new trend: e-learning 2.0 (Downes, 2005). Learners share their knowledge, search for the knowledge they need, and decide learning content by themselves through social software platforms. Therefore, e-learning 2.0 is a kind of collaborative and user-centric learning, which is based on collective intelligence rather than a few experts' knowledge. Some studies propose e-learning 2.0 systems that involve learners in designing, problem solving, or decision making through collaboration and communication tools (Helic, Maurer, & Scerbakov, 2004; Helic, Krottmaier, Maurer, & Scerbakov, 2005; Chow, Fan, Chan, & Wong, 2009). In an e-learning 2.0 environment, however, searching and navigating specific knowledge is typically a tedious and time-consuming task (Safran, Helic, & Gutl, 2007). Such a variety and quantity of knowledge content may cause information or cognitive overload. Consequently, learners easily lose their focus and control over their learning processes (Karrer, 2006b).

Adaptive e-learning can provide efficient and formal learning by supporting different learning paths and materials to fit learners' diverse needs and backgrounds (Bra, Brusilovsky, & Houben, 1999; Mallak, 2001; Blochl, Rumetshofer, & Wob, 2003). However, the learning paths and content provided by most adaptive learning systems are designed by a few experts, which violates the user-centric principle of e-learning 2.0. We expect that combining user-centric learning with adaptive learning based on collective intelligence will give learners more self-control and promote efficient learning in the e-learning 2.0 environment.

This study aims to develop a user-centric adaptive learning system called UALS. This system can be built on e-learning 2.0 platforms such as educational blogs, wikis, or blikis (Huang & Yang, 2009). These platforms enable users to share, create, and collaboratively edit knowledge content. The UALS exploits users' collective intelligence to dynamically provide adaptive learning paths and materials. This research adopts a collaborative voting approach that enables users to mutually decide material difficulties, and applies sequential pattern mining to extract concept-sequence patterns from user-created resources. The system uses sequential rules to personalize user-centric learning paths and employs Item Response Theory (IRT) to evaluate learners' abilities and recommend the most suitable learning content to them. To evaluate the UALS, this research verifies whether learning guided by users' collective intelligence is comparable to expert-designed learning.

E-learning 2.0

The main characteristic of e-learning 2.0 is that students can actively control their learning content and direction, which returns control of learning to the learners (Downes, 2005; Karrer, 2006a). Corresponding to Web 2.0, Internet users create and distribute content to others through social software, e.g., blogs and wikis. In e-learning 2.0, learning content is no longer produced only by instructors or courseware authors and becomes more user-centric: content is used and created by learners themselves (Downes, 2005). In other words, e-learning 2.0 links learners with other learners, as well as with learning resources.

Karrer (2006a) compared the e-learning generations: e-learning 1.0, e-learning 1.3, and e-learning 2.0. As e-learning develops, it becomes more user-centric, more bottom-up, and more on-demand. In e-learning 2.0, online learning becomes a platform in which content is created, shared, remixed, repurposed, and passed along, rather than a medium in which content is only delivered and consumed. E-learning software becomes a content-authoring tool rather than a content-consumption tool (Chow et al., 2009).

Today's e-learning 2.0 platforms are able to support the creation and sharing of knowledge content and building collective intelligence. Learners can search for knowledge content and decide which content is suitable for their learning. However, searching and organizing suitable content can easily make learners lose their focus on learning. Therefore, how to provide learning guidance based on collective intelligence is an important issue.

Item Response Theory

IRT can measure a learner's ability on the basis of strict assumptions and mathematical principles (Lord, 1980). The following formula is the one-parameter logistic model (Hambleton, 1985; Horward, 1990):

P_j(θ) = e^(θ − b_j) / (1 + e^(θ − b_j)), j = 1, 2, ..., n (1)

Assume that a learner responds to the j-th item, e is the mathematical constant 2.71828..., n is the total number of items, b_j represents the difficulty level of item j, and P_j(θ) denotes the probability that a learner with ability θ responds correctly to the j-th item. In addition to item difficulty, the model can be extended with an item-discrimination parameter and a guessing parameter (the probability of answering correctly by guessing). For simplicity, this research focuses on the one-parameter model. Maximum likelihood estimation (MLE) is usually applied to estimate a learner's ability based on the following likelihood function (Hambleton, Swaminathan, & Rogers, 1991):

L(u_1, u_2, ..., u_n | θ) = ∏_{j=1}^{n} P_j(θ)^(u_j) Q_j(θ)^(1 − u_j) (2)

where (u_1, u_2, ..., u_n | θ) is the response pattern of a learner with ability θ over a set of n items. Each element u_j (1 ≤ j ≤ n) is either 1 or 0 for the j-th item: 1 indicates that the learner responds correctly, and 0 that the learner responds incorrectly. Q_j(θ) represents the probability that a learner with ability θ cannot respond correctly to the j-th item, i.e., Q_j(θ) = 1 − P_j(θ). To select the most suitable item for a learner, the following information function can be used:

I_j(θ) = 1.7^2 / ([1 + e^(1.7(θ − b_j))][1 + e^(−1.7(θ − b_j))]) (3)

As learner ability θ and item difficulty b_j come closer, the information function value becomes higher; therefore, the item with the maximum information function value for a learner with ability θ has the highest recommendation priority.
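As a concrete illustration, Formulas 1-3 can be sketched in a few lines of code. This is a minimal sketch under our own assumptions (a grid search over the ability range [-3, +3] stands in for the iterative maximization usually used for MLE; all function names are ours, not the paper's):

```python
import math

def p_correct(theta, b):
    """Formula 1: probability that a learner with ability theta
    responds correctly to an item of difficulty b (1PL model)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def log_likelihood(theta, responses, difficulties):
    """Log of Formula 2 for a response pattern u_1..u_n (1 = correct)."""
    ll = 0.0
    for u, b in zip(responses, difficulties):
        p = p_correct(theta, b)
        ll += u * math.log(p) + (1 - u) * math.log(1.0 - p)
    return ll

def estimate_ability(responses, difficulties, step=0.01):
    """MLE of theta by grid search over [-3, +3] (a simple stand-in
    for Newton-Raphson-style iteration)."""
    grid = [-3.0 + step * k for k in range(int(6.0 / step) + 1)]
    return max(grid, key=lambda t: log_likelihood(t, responses, difficulties))

def information(theta, b, D=1.7):
    """Formula 3: item information; peaks when b is closest to theta."""
    x = D * (theta - b)
    return D * D / ((1.0 + math.exp(x)) * (1.0 + math.exp(-x)))

def recommend(theta, difficulties):
    """Index of the material with maximum information at ability theta."""
    return max(range(len(difficulties)),
               key=lambda j: information(theta, difficulties[j]))
```

For example, a learner who understood materials of difficulty -1.0 and 0.0 but not 1.0 receives an ability estimate between 0 and 1, and `recommend` then picks the material whose difficulty is nearest that estimate.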

IRT is usually applied in the Computerized Adaptive Test (CAT) domain. Chen, Lee, & Chen (2005) substituted learning materials for test items; they designed a personalized e-learning system based on IRT to provide learning paths that can be adapted to various difficulty levels of course materials as well as various learner abilities. The success of using IRT in the CAT and e-learning inspires this research to use IRT to measure learners' abilities and recommend adaptive learning content to them. A learner's ability is an important characteristic that should be considered because very difficult content frustrates learners and very easy content bores learners.

Concept sequences and sequential pattern analysis

In a learning process, the learning focus of each phase is called a "concept" (Lee, Lee, & Leu, 2009). When learners study domain knowledge, they break it into small parts and then rearrange or reorder them into a format that makes sense to them. Learners then develop links between these small concepts until they fully grasp the knowledge (Novak & Gowin, 1984). Epistemological order represents the order of learning concepts (Polya, 1957); an epistemological order of concepts is called a concept sequence in this research. In order to obtain the concept sequences approved by most users, the UALS uses sequential pattern mining to extract the patterns of concept sequences from user-created learning materials on the Web.

Sequential pattern mining is usually used to find recurring patterns related to time or another ordering. Sequential pattern analysis is often applied in the business domain to help managers determine which items are bought after other items (Han & Kamber, 2001), or to analyze the browsing order of Web pages (Spiliopoulou, 2000). This research uses the Generalized Sequential Pattern (GSP) algorithm to implement sequential pattern mining because it is efficient and generates all possible candidate sequences, so no actual sequence is missed (Srikant & Agrawal, 1995). The output of this algorithm is all maximal sequences in the frequent-sequence sets: frequent sequences that are not contained in any other frequent sequence. In this study, maximal concept sequences represent the epistemological orders of these concepts, orders that are accepted by most users. The UALS generates learning paths according to the discovered maximal concept sequences.

A sequential rule is an implication of the form X → Y, where Y is a sequence and X is a proper subsequence of Y, i.e., the length of Y is greater than that of X (Liu, 2007). This implication means that if a sequence X exists, we can find a sequence Y containing it. In this study, X represents a sequence formed by concepts not understood by a learner, and Y is a frequent concept sequence. If the sequential rule X → Y has a high confidence (the proportion of materials containing X that also contain Y), then Y is strongly implied by concept sequence X; therefore, sequence Y is a potential learning path for the learner.
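Under these definitions, rule confidence reduces to a subsequence count. A minimal sketch (the materials and concept labels below are purely illustrative, not from the paper's data):

```python
def is_subseq(x, y):
    """True if x occurs in y as a (possibly non-contiguous) subsequence."""
    it = iter(y)
    return all(c in it for c in x)

def confidence(x, y, materials):
    """Confidence of rule x -> y: the proportion of materials that
    contain subsequence x and also contain y (x is a subsequence of y)."""
    with_x = [m for m in materials if is_subseq(x, m)]
    if not with_x:
        return 0.0
    return sum(is_subseq(y, m) for m in with_x) / len(with_x)
```

For instance, over the materials ("a", "b"), ("a", "b", "c"), and ("a", "c"), the rule < a > → < a b > has confidence 2/3: three materials contain "a", and two of them also contain the subsequence < a b >.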

System architecture

The system architecture is illustrated in Figure 1. The operation procedure can be classified into frontend and backend processes. In the backend process, the system collects materials from Web resources created by Internet users and analyzes the concepts included in them. These materials and knowledge concepts are recorded in a material database (Step A). On the basis of the material database, the concept analysis process analyzes the concept sequences in materials by sequential pattern mining and stores frequent concept sequences in a concept-sequence database (Steps B and C). In the test-item modeling process, an instructor designs pre- and post-tests based on the concepts recorded in the concept-sequence database and stores the test questions in a test database (Steps D and E). Notably, testing is not necessary in an e-learning 2.0 environment; the pre- and post-tests designed here are used for measuring learners' learning performance and evaluating the system.

The frontend process can be classified into the following stages:

1. Initial stage (Steps 1~3): Learners log in to the system and select a course to study; the interface agent then requests the learning-path agent to provide learning services.

2. Pre-test stage (Steps 4~7): The learning-path agent notifies the test agent to provide the learner with a pre-test. The test agent analyzes test results to find concepts not understood by the learner and transmits them to the learning-path agent.

3. Path generation stage (Steps 8~9): The learning-path agent applies sequential rules to generate an individual learning path based on the concepts not understood by the learner and the existing concept sequences in the database. When a learning path is generated, the learning-path agent stores it in a user-profile database.

4. Learning stage (Steps 10~17): The learning-path agent notifies a material-recommendation agent to provide learning content for a given concept. The agent recommends material whose difficulty level matches the learner's ability. The learner is asked to indicate his/her comprehension level and perceived material difficulty after s/he studies this material. A feedback agent collects learner feedback and re-evaluates learner ability and material difficulty. If the learner comprehends this content, the learning-path agent navigates to the next learning concept; otherwise, the learner continues studying the same concept with different materials. This procedure repeats until all the concepts in the learning path have been learned.

5. Post-test stage (Steps 18~21): If the learning-path agent senses that the learner has already finished the entire learning path, it notifies the test agent to provide post-test questions for the learner.

[FIGURE 1 OMITTED]

Adaptive navigation support

The system collects user-created teaching materials from Web pages and blogs and divides these materials into concept units. When users share their knowledge on the Web, they usually arrange concept units in order, like a list or a catalog, based on their own understanding of the domain knowledge. Each user-created teaching material thus presents a concept sequence, which represents the user's notion of learning order. The system discovers the concept sequences that frequently occur in user-created materials; thus, frequent concept sequences are collaboratively decided by Internet users. The discovered sequential patterns are a kind of collective intelligence and are used to support adaptive navigation.

Concept sequence analysis

The following example demonstrates the way to find concept-sequence patterns. Assume that five materials about C++ programming are collected; then, the frequent concept sequences are generated by the following steps:

1. The concept order in each material is presented as a sequence. Table 1 shows the concept sequences in the five materials.

2. Find the frequent 1-itemset, in which every item's support level is higher than the threshold. In this example, the minimum support is set to 40%, which means that a concept appearing in at least two of the five materials is a frequent concept.

3. If a concept is not included in the frequent 1-itemset, it is deleted; otherwise, frequent concepts are mapped to assigned numbers for analysis (see Table 2). Table 3 shows the concept sequences after mapping.

4. Use the frequent 1-itemset to find other frequent sequences of different lengths. The GSP algorithm is employed to generate candidate sequences and frequent sequences. Finally, the algorithm returns the maximal sequences. Table 4 shows all frequent sequences; < 1 5 > and < 1 2 3 4 > are the maximal sequences.
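The steps above can be sketched with a brute-force miner (a stand-in for GSP, which generates and prunes candidates far more efficiently). The five sequences below are hypothetical, chosen by us so that, with a minimum support of two materials (40%), mining yields the maximal sequences < 1 5 > and < 1 2 3 4 > cited in this example:

```python
from itertools import combinations

def is_subseq(x, y):
    """True if x occurs in y as a (possibly non-contiguous) subsequence."""
    it = iter(y)
    return all(c in it for c in x)

def frequent_sequences(materials, min_count):
    """All subsequences supported by at least min_count materials.
    Brute force: any frequent sequence must occur in some material,
    so candidates are drawn from each material's own subsequences."""
    candidates = set()
    for m in materials:
        for r in range(1, len(m) + 1):
            candidates.update(combinations(m, r))  # order-preserving
    return {c for c in candidates
            if sum(is_subseq(c, m) for m in materials) >= min_count}

def maximal(frequent):
    """Frequent sequences not contained in any other frequent sequence."""
    return {s for s in frequent
            if not any(s != t and is_subseq(s, t) for t in frequent)}

# Hypothetical concept sequences from five materials
materials = [(1, 2, 3, 4), (1, 2, 3, 4, 5), (1, 5), (5, 2), (2, 4)]
assert maximal(frequent_sequences(materials, 2)) == {(1, 2, 3, 4), (1, 5)}
```

The brute-force candidate generation is exponential in material length; GSP's level-wise candidate generation and pruning is what makes this tractable at the scale of real material collections.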

Learning path generation and navigation

To achieve personalization, a learner's prior knowledge level should be considered when generating the learning path. An instructor predefines the minimum score for each concept in the pre-test to diagnose whether a learner understands these concepts; if a learner's scores do not exceed the predefined scores for some concepts, the system knows that the learner cannot yet comprehend those concepts. Notably, this pre-test procedure can be replaced by learners self-reporting which concepts they have not yet comprehended in the e-learning 2.0 environment. Given that the minimum support is 40% and the minimum confidence is 60%, if the pre-test result indicates that the learner cannot comprehend concepts 2, 3, and 5, learning-path generation by sequential rules proceeds as follows:

1. According to the maximal concept sequences in Table 4, we can present the concepts not understood by the learner as the sequences < 2 3 > and < 5 >, which are called the learner's un-comprehended concept sequences.

2. The learner's un-comprehended concept sequences are employed to find their corresponding sequential rules: < 2 3 > → Y and < 5 > → Z, where Y and Z are sequences implied by < 2 3 > and < 5 >, respectively.

3. Two rules can be found based on the minimum confidence: Rule 1: < 2 3 > → < 1 2 3 4 > (sup. = 40%, conf. = 100%). Rule 2: < 5 > → < 1 5 > (sup. = 40%, conf. = 67%). If a sequential rule, e.g., < 2 3 > → < 1 2 3 4 >, does not satisfy the minimum confidence, the subsequences of < 1 2 3 4 >, e.g., < 1 2 3 > or < 2 3 4 >, will be considered to find other sequential rules.

4. According to these two rules, the un-comprehended concepts 2, 3, and 5 all have prior concept 1; therefore, concept 1 is the first learning concept in the learning path. Rule confidences decide the sequence priority because confidence indicates how strong a rule is. Because Rule 1 has higher confidence than Rule 2, the concept order in the learning path is 1 → 2 → 3 → 5. Notably, concept 4 is not a prior concept of the un-comprehended concepts 2, 3, and 5, and the learner has already understood it; therefore it does not need to be included in the learning path.
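Our reading of this merging step can be sketched as follows. `build_path` is a hypothetical helper, not the paper's implementation; it assumes each rule's consequent contains at least one un-comprehended concept, and it keeps every concept up to the last un-comprehended one (i.e., the prior concepts) while dropping trailing concepts the learner already understands:

```python
def build_path(rules, uncomprehended):
    """Merge rule consequents into one learning path, highest-confidence
    rule first. rules: list of (confidence, implied_sequence) pairs."""
    path = []
    for conf, seq in sorted(rules, key=lambda r: r[0], reverse=True):
        # keep prior and un-comprehended concepts; drop trailing concepts
        # the learner already understands (e.g., concept 4 in the example)
        cut = max(i for i, c in enumerate(seq) if c in uncomprehended) + 1
        for concept in seq[:cut]:
            if concept not in path:
                path.append(concept)
    return path

# Rule 1 (conf. 100%) and Rule 2 (conf. 67%) from the example above
path = build_path([(1.00, (1, 2, 3, 4)), (0.67, (1, 5))], {2, 3, 5})
# path == [1, 2, 3, 5], i.e., the order 1 -> 2 -> 3 -> 5
```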

Adaptive presentation

The adaptive presentation mechanism considers both material difficulty and learner ability because these factors affect the suitability of materials to a learner. The following subsections will describe how to estimate material difficulty and learner ability, and how to adaptively present materials.

Adjustment of material difficulty level

It is inappropriate for an instructor alone to determine the difficulty levels of course materials because an instructor's view may not represent learners' views. This system automatically adjusts the difficulty levels of materials on the basis of a collaborative voting approach (Jiang, Tseng, & Lin, 1999; Chen et al., 2005). A 5-point Likert scale is employed to measure learners' perceptions of material difficulty; the scale ranges from -2 (very easy) to +2 (very hard), indicating the difficulty levels D_1 to D_5. The difficulty of a concept material is estimated using the following formula:

b_j(voting) = (Σ_{i=1}^{5} n_ij × D_i + b_j(initial)) / (N_j + 1) (4)

where b_j(voting) denotes the average difficulty of the j-th concept material after learners' collaborative votes. The variable b_j(initial) is the initial difficulty of the j-th concept material, which can be predefined by an instructor or the material provider. The variable n_ij represents the number of learners whose responses belong to the i-th difficulty level for the j-th material, and N_j = Σ_{i=1}^{5} n_ij is the total number of learners who rate the j-th concept material.
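Formula 4 amounts to averaging the vote values with the initial difficulty counted as one extra vote. A minimal sketch (function and variable names are ours):

```python
def adjusted_difficulty(vote_counts, b_initial):
    """Formula 4: vote_counts = [n_1, ..., n_5], the number of learners
    voting each difficulty level D_1..D_5 = -2, -1, 0, +1, +2.
    The initial difficulty acts as one additional vote."""
    levels = [-2, -1, 0, 1, 2]
    total_votes = sum(vote_counts)
    weighted = sum(n * d for n, d in zip(vote_counts, levels))
    return (weighted + b_initial) / (total_votes + 1)
```

For example, a single "very hard" (+2) vote against an initial difficulty of 0 moves the estimate to (2 + 0) / 2 = 1.0; as votes accumulate, the instructor's initial value is increasingly outweighed by learners' collective judgment.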

Estimation of learner abilities and recommendation of adaptive materials

Assume that a learner responds to a set of n materials with response pattern (u_1, u_2, ..., u_n). A response u_j = 1 means that the learner understands the selected material j; conversely, u_j = 0 means that the learner does not understand it. Next, Formula 1 is used to calculate the probability that a learner with ability θ can understand the j-th concept material on the basis of the adjusted material difficulty b_j(voting), and Formula 2 is applied to estimate the learner's ability: the value of θ that maximizes Formula 2 is the new estimated learner ability. Then, the information function in Formula 3 is applied to choose the most suitable material for the learner. The concept material with the maximum information function value at the new estimated ability θ is the material that best fits the learner.

In addition to the adaptive learning-path generation and navigation approach, this adaptive presentation approach is also user-centric. Learners collaboratively rate material difficulties, and their abilities are estimated according to their response patterns and user-determined material difficulties. Both material difficulty and learner ability are adjusted dynamically.

Experimental design

This study aims to develop a user-centric adaptive learning system on the basis of collective intelligence. To evaluate its performance, a laboratory experiment was conducted in a computer room, using a pre- and post-test experimental design to determine the effects of the proposed e-learning system. We chose "webpage design" as the experimental course because its user-created materials on the Web are sufficient for pattern analysis. Moreover, the subjects were undergraduate students majoring in Information Systems, so this course was relevant to their field. The test questions in the pre-test and post-test were selected from the TQC (Techficiency Quotient Certification) test bank; TQC is a computer literacy certification provided by a non-profit organization in Taiwan. In this experiment, subjects were randomly assigned to three groups and requested to study through the course. The groups used different learning mechanisms, described as follows:

Group 1: Traditional e-learning with expert-defined learning paths and content. Users study through the learning paths and browse the materials pre-defined by an instructor.

Group 2: User-centric adaptive learning paths with expert-defined content. Users study the course following the user-centric learning paths determined by the adaptive navigation mechanism, but the material at each learning node is decided by the instructor.

Group 3: Complete user-centric adaptive learning. Users study the course following the learning paths determined by the adaptive navigation mechanism, and the materials at each learning node are determined by the adaptive presentation mechanism.

[FIGURE 2 OMITTED]

These groups utilized the same material resources and concepts that were found by the backend process. The experimental procedure comprises the following phases:

1. Pre-test: Through the pre-test, the system can obtain information about subjects' prior knowledge levels of the learning concepts.

2. Learning stage: Subjects studied through the course using their group's learning mechanism. The learning processes lasted about one hour, and unnecessary conversation between subjects was not allowed. Figure 2 illustrates the interface of the learning system.

3. Post-test and questionnaire: Subjects took a post-test when they finished their learning processes. The test results were used to measure their learning performance and the effectiveness of the system.

4. Data analysis: This study analyzes subjects' learning performance and estimates learner abilities and material difficulties. Data were collected from the pre-test, the post-test, and the database.

Experimental result

Seventy-nine undergraduate students participated in this experiment: 27 in Group 1, 26 in Group 2, and 26 in Group 3. The chi-square test for homogeneity reveals that the distributions of gender and learning experience in the three groups do not differ.

Analysis of learning performance

To evaluate whether the test questions in the pre-test and post-test are equally difficult, we invited 12 undergraduate students who majored in Information Systems to take these two tests and to indicate their self-perceived difficulties before the experiment. The results revealed that the test questions in the two tests were equally difficult in terms of testing scores and self-perceived difficulties.

Table 5 shows that post-test scores are significantly higher than pre-test scores in all three groups. We further analyzed the difference between subjects' post-test scores in the three groups, applying ANCOVA with pre-test scores as a covariate to eliminate their effect on post-test scores. The analysis revealed that the post-test scores in the three groups are not significantly different (F = 1.019, p = 0.366).

We also analyzed subjects' self-perceived understanding of the domain knowledge before and after learning, using a 5-point Likert scale ranging from 1 ("completely not understood") to 5 ("completely understood"). The result shows that students in the user-centric learning groups (Groups 2 and 3) felt that they comprehended the domain knowledge more clearly after learning, while students in Group 1 did not (see Table 6). Furthermore, we found that the increase in understanding level in Group 3 is significantly higher than that in Group 1 (p < 0.05, Tukey post-hoc test).

Accordingly, we conclude that user-centric learning is comparable to expert-designed learning because the two are equally effective. Furthermore, we can infer that user-centric learning can satisfy learners' expectations and improve learner satisfaction because learners perceive an enhanced level of understanding.

Analysis of frequent concept sequences and path lengths

We predefined 34 concepts of HTML (hypertext markup language) domain knowledge according to HTML books and employed these concepts to tag the concept units in online user-created materials. Fifty-three user-created HTML teaching materials were collected from the Web. After analyzing these materials by the GSP algorithm with the minimum support of 40%, 20 frequent concepts (see Table 7) and 125 frequent concept sequences were found.

This experiment used frequent concept sequences and sequential rules with a minimum confidence of 50% to construct user-centric learning paths in Groups 2 and 3. To understand the similarity or difference between user-generated and expert-designed learning paths, we list the expert-designed path and the user-generated path (generated by sequential pattern analysis) over the 20 concepts in Table 8. These paths illustrate that Internet users and the instructor have four identical notions of concept learning order in the HTML domain: < 3 4 >, < 2 5 >, < 10 11 12 13 >, and < 14 15 16 17 18 19 >. For further understanding, some serial concepts are merged into more general concepts if they are related to each other. Therefore, concepts 3 and 4 are merged as the general "HTML Fundamentals" concept; concepts 11, 12, and 13 are merged as "Hyperlink"; concepts 14, 15, and 16 are merged as "Table"; and concepts 17 and 18 are merged as "Frame." Therefore, we find that most Internet users and the instructor share a consensus on the concept learning order: HTML Fundamentals → Layout Tags → Font Tags → Image → Hyperlink → Table → Frame → Form.

There are some different notions about the HTML domain between Internet users and the instructor. We list these differences and discuss them in the following:

* Different notions of the Headings concept: In the user-generated path, users consider that the "Headings" concept should be learned after the general "HTML Fundamentals" concept and before the "Layout Tags" concept. In the expert-designed path, the "Headings" concept comes after the "Layout Tags," "Font Tags," and "Text Formatting" concepts. Therefore, we infer that users considered that both heading tags (i.e., <h1> to <h6>) and layout tags (e.g., <p>, <br>, <center>, and <pre>) belong to the general "Layout" concept. The possible reason is that heading tags control headlines of text content and also have a blank-line effect like the paragraph tag <p> and line-break tag <br>. On the other hand, the instructor considered heading tags a kind of text-formatting tag (e.g., <b> and <i>) because heading tags have a bold-text effect like the text-formatting tag <b>.

* Different notions of the Background Setting concept: In the user-generated path, the "Background Setting" concept (concept 7) comes after the "Font" concept. Color setting is an important part of the "Font" concept, and the "Background Setting" concept has many settings relating to color (background, text, and hyperlink colors in the page). Therefore, we infer that users consider color setting the most important part of the "Background Setting" concept. In the expert-designed path, the "Background Setting" concept comes after the learning order "Font Tags → Image → Hyperlink." Because the "Background Setting" concept has related settings for these three concepts, we can infer that the instructor regarded the "Background Setting" concept as a further concept building on the "Font," "Image," and "Hyperlink" concepts.

* Different notions of the Lists concept: In the user-generated path, the "Lists" concept (concept 9) comes after the general "Text" concept; therefore, we infer that users considered list tags (e.g., <ol> and <ul>) related to the "Text" concept because they are used to arrange a list of text items. In the expert-designed path, the "Lists" concept lies between the "Background Setting" and "Multimedia" concepts. Consequently, we can infer that the instructor regarded the "Lists" concept as a standalone concept.

* Different notions of the Multimedia concept: We found that the "Multimedia" concept comes before the learning order "Table → Frame → Form" in the expert-designed path, but after it in the user-generated path. Generally, a concept sequence is organized from simple to complex for teaching. Accordingly, the instructor considered the "Table," "Frame," and "Form" concepts more difficult for learners than the "Multimedia" concept. Additionally, the "Multimedia" concept concerns embedding audio, animation, or video objects in a Web page, and the instructor considered audio, animation, and video to be content forms like text and images; therefore, the instructor preferred teaching "Multimedia" right after the "Text" and "Image" concepts. Internet users prefer teaching the "Multimedia" concept after the "Table," "Frame," and "Form" concepts, possibly because Internet users usually write HTML learning materials according to their experience of building websites. Tables, frames, and forms are more pervasive than audio or video in general Web pages. Therefore, Internet users may think "Multimedia" is not an essential concept and put it at the end of the learning concepts.

The learning orders of the main concepts in the user-generated and expert-designed paths are similar. However, users and the instructor may have different notions of some learning orders. An instructor is more concerned with concept difficulty when designing a learning order, whereas Internet users first consider common use and their own experience.

The adaptive navigation mechanism generates a learning path according to the frequent concept sequences and the learner's un-comprehended concepts. The ANOVA shows that the number of learning concepts in Group 1 is significantly higher than that in Groups 2 and 3 (F = 13.953, p < 0.001). This result means that the adaptive navigation mechanism can significantly reduce the lengths of learning paths and improve learning efficiency.

Analysis of learner abilities and material difficulties

To evaluate whether the system is able to estimate learner abilities, we divided the learners in Group 3 into three clusters: high, medium, and low ability. Ability estimates range between -3 and +3; the low-ability cluster ranges from -3 to -1, the medium-ability cluster from -1 to +1, and the high-ability cluster from +1 to +3. After eliminating two extreme cases with an aberrant response pattern (they reported understanding all materials) in Group 3, 24 subjects remained in the three clusters. We used ANCOVA to analyze post-test scores in the three clusters. The F test reveals that the post-test scores in the three clusters differ significantly (F = 6.021, p < 0.01). The post-test scores in the high-ability cluster are significantly higher than those in the medium-ability cluster (p < 0.05) and the low-ability cluster (p < 0.01). The post-test scores in the medium-ability cluster are marginally higher than those in the low-ability cluster (p = 0.067). These results reveal that the system is able to evaluate learner ability and indicate that using learners' responses and IRT to evaluate their abilities is suitable in e-learning 2.0 circumstances.

The initial material difficulties were predefined by the instructor and then adjusted through learners' collaborative voting, so the final difficulties represent learners' perceptions. To test whether expert-defined and learner-defined material difficulties differ, we selected 20 materials with more than 30 learner votes each and compared their initial and final difficulty estimates. The t-test shows that a difference exists between the learners' and the instructor's perceptions: the final difficulty estimate is significantly higher than the initial estimate (t = -2.068, p < 0.05). This result indicates that the instructor underestimated the material difficulties or overestimated the learners' abilities.
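
One plausible form of such a voting adjustment is a weighted mean that gradually shifts the instructor's prior toward the learners' votes; the paper's actual update rule (it builds on collaborative rating, e.g. Jiang, Tseng, & Lin, 1999) may differ, and `prior_weight` is an assumed parameter:

```python
def updated_difficulty(initial, votes, prior_weight=5):
    """Blend the instructor's initial difficulty estimate with learner votes
    using a weighted mean.  prior_weight (assumed) controls how many votes
    it takes to outweigh the instructor's prior."""
    total = prior_weight * initial + sum(votes)
    return total / (prior_weight + len(votes))

# Hypothetical: instructor rated a material 0.0, five learners voted it 1.0.
print(updated_difficulty(0.0, [1.0] * 5))  # 0.5
```

Under any such rule, the final estimate drifting upward, as observed above, means learners consistently voted materials harder than the instructor's initial rating.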

Conclusions

This study has confirmed the practicability of user-centric adaptive learning. The UALS, which employs users' collective intelligence to generate adaptive learning paths and select materials, is comparable to a teaching expert: students' learning performances in Groups 2 and 3 did not differ from those in Group 1. This study also found that learners gain greater satisfaction and learning efficiency from user-centric adaptive learning. Students in Group 1 did not perceive an improvement in their knowledge levels, and the learning paths in Group 1 were significantly longer than those in Groups 2 and 3. The results reveal that users' notions of concepts and learning orders may differ from an expert's, and that an expert tends to overestimate learners' knowledge levels when choosing learning materials. Since a gap exists between experts' and learners' views, and learners can easily share their knowledge in the e-learning 2.0 environment, applying collective intelligence to provide formal and direct learning services will be increasingly promising and important in future learning environments.

Table 9 compares the UALS with typical adaptive learning systems proposed in recent years. The apparent distinction between the UALS and existing adaptive learning systems is that learning materials and guidance are collectively provided by users themselves instead of by a few experts or instructors. This study proposes an effective approach to adaptive learning in e-learning 2.0. However, to adapt learning to learner characteristics, the UALS considers only learners' prior knowledge, abilities, and comprehension levels. Other important characteristics, such as preferences, cognitive modalities, learning styles, and behaviors, should be addressed in future research. Moreover, evaluating the proposed system and approach in more educational programs is required to establish its generalizability and reliability.

The UALS relies entirely on user-created content and user-defined learning orders; therefore, it enables users to collaboratively create learning services, and it can organize those services automatically and immediately. The research findings also provide some guidelines for designing e-learning 2.0 platforms: (1) An adaptive learning mechanism that can guide users' learning is very important, because searching for suitable content and arranging its order by themselves is troublesome for learners. (2) A tagging mechanism is necessary to help learners tag knowledge content with the appropriate concepts and to support the mining of concept-sequence patterns. Additionally, a taxonomy of concepts is required for generating more general concepts and sequential patterns. (3) A sequence-arrangement mechanism is required to help users arrange and share learning orders based on their own cognition. (4) The knowledge content should be dynamically adjusted based on learners' perceptions and comprehension levels; therefore, a feedback mechanism is necessary on e-learning 2.0 platforms. (5) The platforms should record learners' learning progress, enabling learners to decide when to stop or resume their learning processes and returning control to them. Since testing is not necessary in the e-learning 2.0 environment, the platforms should help learners decide which concepts are most appropriate for them to study.

References

Blochl, M., Rumetshofer, H., & Wob, W. (2003, September). Individualized e-learning systems enabled by a semantically determined adaptation of learning fragments. Paper presented at the 14th International Workshop on Database and Expert Systems Applications, Prague, Czech Republic.

Bra, P. D., Brusilovsky, P., & Houben, G. J. (1999). Adaptive hypermedia: From systems to framework. ACM Computing Surveys, 31(4), 1-6.

Chen, C. M., Lee, H. M., & Chen, Y. H. (2005). Personalized e-learning system using Item Response Theory. Computers & Education, 44(3), 237-255.

Chow, K. O., Fan, K. Y. K., Chan, A. Y. K., & Wong, G. T. L. (2009, February). Content-based tag generation for the grouping of tags. Paper presented at the 2009 International Conference on Mobile, Hybrid, and On-line Learning, Cancun, Mexico.

Downes, S. (2005, October). E-learning 2.0. ACM eLearn Magazine. Retrieved from http://elearnmag.acm.org/featured.cfm?aid=1104968

Fuentes, C., Carrion, M.J., Arana, C., Boticario, J.G., Barrera, C., Santos, O.,...Roberto. (2005). Alfanet: Public final report. Retrieved December 16, 2010, from the ALFANET Web site: http://adenu.ia.uned.es/alfanet/reports/ALFANET_D82.pdf

Hambleton, R. K. (1985). Item Response Theory: Principles and applications. Boston: Kluwer-Nijhoff.

Hambleton, R. K., Swaminathan, H., & Rogers, H. J. (1991). Fundamentals of Item Response Theory. Newbury Park, CA: Sage.

Han, J., & Kamber, M. (2001). Data mining: Concepts and techniques. New York: Morgan Kaufmann.

Helic, D., Krottmaier, H., Maurer, H., & Scerbakov, N. (2005). Enabling project-based learning in WBT systems. International Journal on E-Learning, 4(4), 445-461.

Helic, D., Maurer, H., & Scerbakov, N. (2004). Discussion forums as learning resources in web-based education. Advanced Technology for Learning, 1(1), 8-15.

Horward, W. (1990). Computerized adaptive testing: A primer. Hillsdale, NJ: Lawrence Erlbaum Associates.

Huang, S.-L. & Yang, C.-W. (2009). Designing a semantic bliki system to support different types of knowledge and adaptive learning. Computers & Education, 53(3), 701-712.

Jiang, M.F., Tseng, S.S., & Lin, T.Y. (1999). Collaborative rating system for web page labeling. In P.D. Bra, & J.J. Leggett (Eds.), Proceedings of the World Conference on the WWW and Internet (pp. 569-574). Chesapeake, VA: AACE.

Karampiperis, P. & Sampson, D. (2005). Adaptive learning resources sequencing in educational hypermedia systems. Educational Technology & Society, 8(4), 128-147.

Karrer, T. (2006a, February 10). eLearning 2.0. Retrieved from http://elearningtech.blogspot.com/2006/02/what-is-elearning-20.html

Karrer, T. (2006b, February 14). eLearning 2.0: Informal learning, communities, bottom-up vs. top-down. Retrieved from http://elearningtech.blogspot.com/2006/02/elearning-20-informal-learning.html

Lee, C.H., Lee, G.G., & Leu, Y.L. (2009). Application of automatically constructed concept map of learning to conceptual diagnosis of e-learning. Expert Systems with Applications, 36(2), 1675-1684.

Leung, E. W. C. & Li, Q. (2007). An experimental study of a personalized learning environment through open-source software tools. IEEE Transactions on Education, 50(4), 331-337.

Liu, B. (2007). Web data mining. Chicago: Springer.

Liu, H. I. & Yang, M. N. (2005). QoL guaranteed adaptation and personalization in e-learning systems. IEEE Transactions on Education, 48(4), 676-686.

Lord, F. M. (1980). Applications of Item Response Theory to practical testing problems. Hillsdale, NJ: Lawrence Erlbaum Associates.

Mallak, L.A. (2001, August). Challenges in implementing e-learning. Paper presented at the 2001 Portland International Conference on Management of Engineering and Technology, Portland, USA.

Novak, J. D. & Gowin, D. B. (1984). Learning how to learn. Cambridge, UK: Cambridge University Press.

Polya, G. (1957). How to solve it. Princeton, NJ: Princeton University Press.

Reategui, E., Boff, E., & Campbel, J.A. (2008). Personalization in an interactive learning environment through a virtual character. Computers & Education, 51(2), 530-544.

Safran, C., Helic, D., & Gutl, C. (2007, September). E-learning practices and web 2.0. Paper presented at the 2007 International Conference on Interactive Computer Aided Learning, Villach, Austria.

Spiliopoulou, M. (2000). Web usage mining for web site evaluation. Communications of the ACM, 43(8), 127-134.

Srikant, R., & Agrawal, R. (1995). Mining sequential patterns: Generalizations and performance improvements (Research Report RJ 9994). San Jose, California: IBM Almaden Research Center.

Tseng, C. R., Chu, H. C., Hwang, G. J., & Tsai, C. C. (2007). Development of an adaptive learning system with two sources of personalization information. Computers & Education, 51(2), 776-786.

Shiu-Li Huang * and Jung-Hung Shiu (1)

Department of Business Administration, National Taipei University, New Taipei City, Taiwan // (1) Department of Information Management, Ming Chuan University, Taoyuan County, Taiwan // shiulihuang@gmail.com //

vita73917@gmail.com

* Corresponding author

(Submitted August 3, 2010; Revised April 20, 2011; Accepted June 28, 2011)
Table 1. An example of concept sequences

Material ID   Concept sequence

1             < data type, overload >
2             < introduction, data type,
              process control, class &
              object, function >
3             < data type, string &
              reference, class & object >
4             < data type, process
              control, class & object,
              function, overload >
5             < overload >

Table 2. Frequent 1-itemset

Frequent 1-item   Mapped to

data type             1
process control       2
class & object        3
function              4
overload              5

Table 3. Concept sequences after mapping

Material ID   Concept sequence
              (after mapping)

1             < 1 5 >
2             < 1 2 3 4 >
3             < 1 3 >
4             < 1 2 3 4 5 >
5             < 5 >

Table 4. Frequent sequences

Frequent  Frequent 2-   Frequent 3-   Frequent 4-   Maximal
1-item    sequences     sequences     sequences     sequences

< 1 >     < 1 2 >       < 1 2 3 >     < 1 2 3 4 >   < 1 5 >
< 2 >     < 1 3 >       < 1 2 4 >                   < 1 2 3 4 >
< 3 >     < 1 4 >       < 1 3 4 >
< 4 >     < 1 5 >       < 2 3 4 >
< 5 >     < 2 3 >
          < 2 4 >
          < 3 4 >
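
Tables 1-4 above walk through the sequential-pattern-mining step. The following brute-force Python sketch (not the paper's implementation, which follows Srikant & Agrawal, 1995) reproduces the maximal sequences of Table 4 from the mapped sequences of Table 3, with a minimum support of 2 of the 5 materials:

```python
from itertools import combinations

def frequent_sequences(db, min_sup):
    """Count the support of every order-preserving subsequence and keep
    those appearing in at least min_sup materials (brute force)."""
    counts = {}
    for seq in db:
        subs = set()
        for k in range(1, len(seq) + 1):
            subs.update(combinations(seq, k))  # combinations preserve order
        for s in subs:                         # count each once per material
            counts[s] = counts.get(s, 0) + 1
    return {s for s, c in counts.items() if c >= min_sup}

def maximal(frequent):
    """A frequent sequence is maximal if no longer frequent sequence
    contains it as a subsequence."""
    def contains(big, small):
        it = iter(big)
        return all(x in it for x in small)
    return {s for s in frequent
            if not any(len(t) > len(s) and contains(t, s) for t in frequent)}

# The mapped concept sequences of Table 3 (1 = data type, ..., 5 = overload)
db = [(1, 5), (1, 2, 3, 4), (1, 3), (1, 2, 3, 4, 5), (5,)]
print(sorted(maximal(frequent_sequences(db, min_sup=2))))
# [(1, 2, 3, 4), (1, 5)]
```

Brute-force enumeration is adequate only for a toy database like this one; GSP-style candidate generation is what makes the approach scale to real user-created content.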

Table 5. Paired-samples t test of pre-test and post-test
scores

Group     Pre-test score    Post-test score   t-value
          [Mean (SD)]       [Mean (SD)]       (p-value)

Group 1   17.407 (5.738)    33.519 (13.855)   6.554 *** (0.000)
Group 2   18.077 (12.048)   37.408 (15.239)   7.654 *** (0.000)
Group 3   22.212 (9.037)    42.019 (14.125)   7.592 *** (0.000)

*** p < 0.001

Table 6. Paired-samples t test of self-perceived
understanding levels

Group     Before learning   After learning   t-value
          [Mean (SD)]       [Mean (SD)]      (p-value)

Group 1   3.040 (0.539)     3.080 (0.759)    0.238 (0.814)
Group 2   2.760 (0.597)     3.360 (0.700)    3.674 ** (0.001)
Group 3   2.580 (0.776)     3.250 (0.847)    3.112 ** (0.005)

** p < 0.01

Table 7. Frequent concepts

ID   Concept

1    Headings
2    Layout Tags
3    Basic concept
4    Basic framework
5    Font Tags
6    Text Formatting
7    Background Setting
8    Special Characters
9    Lists
10   Image
11   Basic Hyperlink
12   Hyperlink Mode
13   Named Anchor
14   Table
15   Table Attributes
16   Table Combination
17   Basic Frame
18   Frame Attributes
19   Form
20   Multimedia

Table 8. The learning paths

The concepts in learning path

Expert-designed path:  3 → 4 → 2 → 5 → 6 → 1 → 8 → 10 → 11 → 12 → 13 → 7 → 9 → 20 → 14 → 15 → 16 → 17 → 18 → 19
User-generated path:   3 → 4 → 1 → 2 → 5 → 7 → 6 → 8 → 9 → 10 → 11 → 12 → 13 → 14 → 15 → 16 → 17 → 18 → 19 → 20

Table 9. Comparison of UALS with existing adaptive
learning systems

                     Adaptive                 Material
                     Presentation             Design

UALS                 Based on learners'       Created by users
  (This study)       abilities and material
                     difficulties
                     collectively
                     determined by users
Reategui's system    Based on learners'       Designed by
  (Reategui, Boff,   demographic and          experts
  & Campbel, 2008)   navigation features
                     and material
                     descriptors
Leung's system       Based on learners'       Designed by
  (Leung & Li,       abilities and learning   experts
  2007)              styles
TSAL                 Based on learners'       Designed by
  (Tseng, Chu,       learning                 experts
  Hwang, & Tsai,     achievement and
  2007)              effectiveness
Karampiperis's       No specific support      Designed by
  system                                      experts
(Karampiperis &
  Sampson, 2005)
aLFanet              Based on learners'       Designed by
  (Fuentes et al.,   knowledge states,        experts
  2005)              learning styles, and
                     cognitive modalities

APeLS                No specific support      Designed by
  (Liu & Yang,                                experts
  2005)

                     Adaptive                  Learning
                     Navigation                path Model

UALS                 Based on users' prior     Collaboratively
  (This study)       knowledge,                constructed by
                     comprehension levels,     users
                     and user-generated
                     learning paths
Reategui's system    No specific support       No path model
  (Reategui, Boff,
  & Campbel, 2008)

Leung's system       Based on learners'        Designed by
  (Leung & Li,       abilities and learning    experts
  2007)              styles
TSAL                 Based on learners'        Designed by
  (Tseng, Chu,       learning styles and       experts
  Hwang, & Tsai,     behaviors
  2007)
Karampiperis's       Based on learner's        Designed by
  system             cognitive                 experts
(Karampiperis &      characteristics and
  Sampson, 2005)     preferences
aLFanet              Based on learners'        Designed by
  (Fuentes et al.,   study progress,           experts
  2005)              learning activity, and
                     activities of
                     other learners studying
                     the same subject
APeLS                Based on learners'        Designed by
  (Liu & Yang,       abilities, learning       experts
  2005)              time, and difficulty
                     levels
COPYRIGHT 2012 International Forum of Educational Technology & Society

Article Details
Author: Huang, Shiu-Li; Shiu, Jung-Hung
Publication: Educational Technology & Society
Date: Jul 1, 2012
