
Timely diagnostic feedback for database concept learning.

Introduction

Feedback provides learners with opportunities to adjust and develop their cognitive strategies and to rectify misconceptions during training (Azevedo & Bernard, 1995). Maughan et al. (2001) noted that e-learning feedback provides the information required to identify needed improvements. Scholars thus deem feedback an essential e-learning component that facilitates student learning (Wang, 2008). Additionally, feedback received during a learning process can assist learners in reflecting on their learning and improve their motivation (Marriott, 2009). However, most early e-learning systems only offer short statements, such as "correct" or "incorrect," or update a score in response to student input, thereby limiting communication with learners. Therefore, e-learning systems that provide feedback addressing student problems during a learning process have become popular. Diagnostic feedback allows learners to receive useful hints, which may facilitate the identification of a learner's misconceptions, provide crucial clues to rectify them, or offer remedial materials for learning (Chen et al., 2007; Lee et al., 2009).

Studies on feedback timing (i.e., timely versus delayed feedback) have obtained conflicting outcomes for the effects of feedback on learning (Anderson et al., 2001; Corbalan et al., 2010; Corbett & Anderson, 2001; Schroth, 1992), and despite decades of research the results remain controversial (Mory, 2004). However, timely feedback has typically proven more effective than delayed feedback for a well-structured problem, i.e., a logic-, story-, and rule-based problem with predefined steps and exact solutions (Laxman, 2010). The case for timely feedback rests mainly on the theory proposed by Jonassen (1997), which claims that timely feedback is important in informing learners where their problem-solving processes went wrong and in providing coaching at an appropriate time.

Currently, most works related to well-structured problems provide diagnostic feedback only after a learner finishes a problem. However, this delayed feedback may hinder acquisition of the information needed during a problem-solving process (Dempsey et al., 1993; Kulik & Kulik, 1988). Thus, timely diagnostic feedback is promising to help learners enhance their learning achievements.

This work investigates the influence of timely diagnostic feedback on learning the "Database Concept," a domain of well-structured problems. To provide timely diagnostic feedback, this work first translates a learner's Entity-Relationship Diagram (ERD) (Chen, 1976) into a two-dimensional matrix. This matrix is then compared with the correct matrix to obtain the learner's misconceptions. Based on these misconceptions, association rules (Han & Kamber, 2001) are adopted to model learner behavioral patterns. Analyzing these patterns makes it possible to provide suitable hints and to anticipate prospective misconceptions.

Using the proposed approach, this work develops a novel Web-based Timely Diagnosis System (WTDS) to diagnose learning obstacles and to provide crucial, adaptive hints in real time during a problem-solving process. This work also describes how the Asynchronous JavaScript and XML (AJAX) technique (Paulson, 2005) and association rules are combined to achieve timely diagnostic feedback. An evaluation is conducted to assess the effectiveness of the proposed WTDS. Finally, questionnaires and interviews are used to elicit student attitudes toward the proposed WTDS.

Background and literature review

Entity-relationship diagram

The Entity-Relationship Model is a data modeling method in the database concept that produces a conceptual schema or semantic data model of a relational database. Diagrams created by this process are called Entity-Relationship Diagrams (ERD) (Chen, 1976). An ERD is a critical tool in designing a database schema, helping users to achieve enhanced understanding of the database schema by displaying the structure in a graphical format (Elmasri & Navathe, 2006).

An ERD includes several essential concepts, such as entity, attribute, relationship, and the cardinality ratio of relationships, which are detailed in Elmasri and Navathe (2006). Figure 1 is an example of the ERD for an enterprise. The typical steps of establishing an ERD are outlined as follows:

1. Creating entities: precisely identifying the entities relevant to the applied field (e.g., Employee and Project).

2. Determining relationships and their cardinalities: identifying the relationships between entities. A cardinality of "many" is denoted by writing "N" or "M" next to the entity, and a cardinality of "one" by writing "1" next to the entity.

3. Identifying attributes: Drawing the corresponding attributes for each entity and relationship if necessary.

4. Executing refinement: reviewing the resulting ERD and revising entities, relationships, and attributes as necessary.

Accordingly, drawing an ERD is evidently a well-structured problem because it is a logic-, story-, and rule-based problem with predefined steps and exact solutions.

Related works

Feedback should be more than simple results (e.g., correct or incorrect) or correct answers. In addition to appraising the correctness of a learner's solution, informing students where their problem-solving process went wrong and providing coaching from that point onward are also important (Jonassen, 1997). An effective coaching method, "diagnostic feedback," has been proven to contribute to learning achievement. Such diagnosis systems basically use diagnostic algorithms to discover individual misconceptions based on learners' incorrect responses to test problems and provide the corresponding remedial materials when necessary (Chen et al., 2007; Heh et al., 2008; Huang et al., 2008; Lee et al., 2009).

On the other hand, timely feedback is defined as feedback that occurs immediately after a student has completed a step, whereas delayed feedback is defined as feedback that occurs only after the student has completed the task or test (Shute, 2008). Researchers have examined the effects of feedback timing (timely versus delayed) on learning for decades, but the findings remain conflicting. Some studies assert the superiority of delayed feedback (e.g., Schroth, 1992), whereas others affirm the superiority of timely over delayed feedback for verbal materials, procedural skills, some motor skills, programming, and mathematics (Anderson et al., 2001; Corbalan et al., 2010; Corbett & Anderson, 2001; Wang, 2008). Thus, the study of feedback timing has long been muddied (Mory, 2004), and the discrepancies may relate to the subject, applied field, and test form (e.g., single-choice, multiple-choice, or text-based) (Lewis & Anderson, 1985).

Although feedback differs across subjects and test forms, most published works adopted delayed diagnostic feedback to address well-structured problems. That is, the feedback identifies a learner's weak concepts and provides remedial materials (or adaptive hints) only after he/she completes a well-structured problem. For example, Chen et al. (2007) used association rules to design a multiple-choice diagnosis system for elementary school students learning mathematics. Heh et al. (2008) developed a multiple-choice assessment system for learning database concepts at a university. Huang et al. (2008) developed a text-based assessment system for university students learning a programming language. Lee et al. (2009) also used association rules based on the Apriori algorithm (Agrawal & Srikant, 1994) to design a text-based diagnosis system for senior high school students learning a programming language. However, these works only investigated the effects of providing delayed diagnostic feedback on learning.

For a well-structured problem requiring rule-using, predefined steps, and logical solutions, timely feedback is seemingly better than delayed feedback because an error made in one step of the problem-solving procedure carries over to the following steps and consequently to the final solution (Corbalan et al., 2010). In other words, if a student has a misconception in one step, the subsequent steps and even the result could be wrong because the mistake propagates through the entire problem-solving process. To prevent such carry-over effects, Mory (2004) suggested detecting mistakes and providing timely feedback on each mistake during the problem-solving process. Wang (2010) used timely feedback to develop a multiple-choice web-based assessment system for natural science at an elementary school; timely hints are provided whenever a student chooses an incorrect option during a problem-solving process. However, the hints constitute non-diagnostic feedback delivered in a pre-determined sequence, starting from "general hints" and gradually moving toward "specific hints." Such non-diagnostic feedback may fail to uncover individual misconceptions and is therefore unable to provide adaptive assistance.

Notably, most of the above works focus on either diagnostic feedback or timely feedback; research on timely diagnostic feedback for well-structured problems is relatively scant. Effective learning requires suitable feedback, and how to generate suitable feedback in different fields is a key problem. Timely diagnostic feedback, which provides adaptive assistance whenever a learner encounters hurdles during the problem-solving process, seems promising for addressing a well-structured problem. However, few studies have investigated the effects of combined timely and diagnostic feedback for a well-structured problem. This work investigates these effects in detail.

Constructivist principles followed by the WTDS

According to constructivist theory, learning is a learner-centered activity in which a learner actively constructs meaningful knowledge from his/her own experiences. Figueira-Sampaio (2009) elaborated the best educational principles proposed by constructivist theory. The WTDS follows four of these principles: "timely useful feedback," "learner independence," "learners are engaged in solving real-world problems," and "active learning." The details are described below.

1. Timely useful feedback: The WTDS uses timely diagnostic feedback as "timely useful feedback."

2. Learner independence: The WTDS immerses learners into a context that presents a problem to be solved, encouraging them to individually practice, explore, and develop independent thinking ability.

3. Learners are engaged in solving real-world problems: The WTDS can provide diverse real-world ERD problems (as shown in Table 3) for learner practice by slightly modifying its parameters.

4. Active learning: Instead of exploring alone, learners should be provided with support and coaching (Ng'ambi & Johnston, 2006). The support of timely diagnostic feedback can decrease learner frustration and improve learner motivation, enabling learners to become more active in learning (Marriott, 2009).

The following sections discuss how the WTDS achieves these principles in practice.

Proposed approach

This research adopts timely diagnostic feedback to aid ERD learning with systematic hints once learners encounter learning barriers during the diagram-drawing process.

Overview

The kernel module is based on association rules, which mine interesting associative or correlative relationships among a set of data items (Han & Kamber, 2001). Extensive information must be available for mining before the diagnosis process. Thus, preliminary tests must be conducted to acquire a model of learner behavioral patterns. These patterns can be deemed a set of data items for mining. By applying the Apriori algorithm, frequent itemsets can be found for association rules. In our case, a frequent itemset means that if a student makes a mistake on one item of the itemset, he or she is very likely to make mistakes on the other items of the same itemset. This is because a learner who has a misconception on an item very likely misunderstands not only that item but also other related items (Lee et al., 2009).

This study then generates association rules from these frequent itemsets and further calculates the confidence (probability) for each association rule, which explicitly reveals the probability of making mistakes on related items once a mistake is made on an item. These frequent itemsets can be regarded as learning blockades. Providing adequate corresponding hints in a timely manner can therefore be useful to conquer these learning blockades.

Detailed steps

The diagnostic feedback is generated by executing the following steps.

Step 1: Presetting the correct ERD by an instructor

The first step is for an instructor to draw the correct ERD. To facilitate computing, the graphic information is converted into numeric data in a two-dimensional matrix, [R.sub.correct]. The translated formula is as follows:

[R.sub.ij] = c, if entity i and entity j are connected by a relationship whose cardinality ratio is encoded by the value c; [R.sub.ij] = 0 otherwise (1 <= i, j <= k)

where k is the number of entities. Taking Fig. 2 as an example, suppose that the left part is the correct ERD by an instructor. After the translation process, the resulting matrix is shown in the right part of Fig. 2.

Step 2: Comparing the ERD results of all testees with the correct ERD

The next step acquires the answering pattern of a testee. After a testee finishes his or her ERD, the corresponding matrix [R.sub.test] is obtained through Formula (1). The mistakes the testee makes can be identified by comparing [R.sub.test] with [R.sub.correct]. For example, assume the test result of a testee is shown in the left part of Fig. 3. After comparing [R.sub.test] with [R.sub.correct], we can identify these mistakes: two wrong relationships [R.sub.12] and [R.sub.23]. By repeatedly comparing the test results of a number of testees, we can obtain the mistake patterns for all testees, for example, Table 1. The table is deemed as a transaction database D (i.e., training data) for mining mistake patterns. For a clearer explanation, the example in Table 1 with the mistake list of nine testees is used to explain the complete diagnosis procedure. The procedures are the same when the number of testees exceeds nine.
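Step 2 can be sketched in code as follows. The 0/1/2 relationship codes and the matrices below are illustrative assumptions for a four-entity ERD, not the paper's exact encoding from Formula (1); the diff simply collects every off-diagonal cell where the testee's matrix differs from the correct one.

```python
K = 4  # number of entities in this illustrative example

# Hypothetical correct matrix (0 = no relationship; 1, 2 = relationship codes)
R_correct = [
    [0, 1, 2, 1],
    [1, 0, 0, 0],
    [2, 0, 0, 2],
    [1, 0, 2, 0],
]
# Hypothetical testee matrix with two mistakes, mirroring the Fig. 3 example
R_test = [
    [0, 2, 2, 1],   # wrong code for the relationship between entities 1 and 2
    [2, 0, 1, 0],   # spurious relationship between entities 2 and 3
    [2, 1, 0, 2],
    [1, 0, 2, 0],
]

def mistakes(correct, test, k):
    """Return labels like 'R12' for every cell above the diagonal that differs."""
    return [f"R{i + 1}{j + 1}"
            for i in range(k) for j in range(i + 1, k)
            if correct[i][j] != test[i][j]]

print(mistakes(R_correct, R_test, K))  # ['R12', 'R23']
```

Collecting such mistake lists over all testees yields the transaction database D used for mining.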

Step 3: Using the Apriori algorithm to find frequent itemsets and then generating association rules.

Figure 4 shows the pseudo-code of the Apriori algorithm. The Apriori_gen([L.sub.k]) function, which generates the candidate set [C.sub.k + 1], mainly contains two steps: the Join Step and the Prune Step. The Join Step uses [L.sub.k] x [L.sub.k] to generate the candidate set [C.sub.k + 1], which consists of (k + 1)-itemsets. The Prune Step removes any itemset in [C.sub.k + 1] that has an infrequent subset (i.e., a subset not in [L.sub.k]). Readers interested in this algorithm can refer to Agrawal & Srikant (1994) for further details.

Figure 4. Pseudo code of the Apriori algorithm

  [C.sub.k]: Candidate itemset of size k

  [L.sub.k]: Frequent itemset of size k

  [L.sub.1] = {frequent items};
  for (k = 1; [L.sub.k] != [phi]; k++)
  {
      [C.sub.k + 1] = Apriori_gen([L.sub.k]); // candidates generated from [L.sub.k]
      for each transaction t in database D
      {
          count each candidate in [C.sub.k + 1] that is contained in t;
      }
      [L.sub.k + 1] = candidates in [C.sub.k + 1] with support >= min_support;
  }
  return the union of all [L.sub.k];


Figure 5 illustrates a procedural example of how the Apriori algorithm generates frequent itemsets from the transaction database D (i.e., Table 1), where the minimum support count is set as 2. The final two frequent itemsets, {{[R.sub.12], [R.sub.13], [R.sub.14]}, {[R.sub.12], [R.sub.13], [R.sub.23]}}, are generated, as shown in [L.sub.3]. These resulting frequent itemsets can be deemed learning blockades for students. Therefore, according to the first frequent itemset {[R.sub.12], [R.sub.13], [R.sub.14]}, we can foresee that a student who makes a mistake on [R.sub.14] is very likely to also make mistakes on [R.sub.12] and [R.sub.13].
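The mining step can be reproduced with a compact Apriori sketch. The five-testee mistake lists below are invented for illustration (Table 1 itself is not reproduced here); they are chosen so that the two frequent 3-itemsets named in the text emerge with a minimum support count of 2.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return every frequent itemset (frozenset) mapped to its support count."""
    def support(itemset):
        return sum(1 for t in transactions if itemset <= t)

    items = {i for t in transactions for i in t}
    current = [fs for fs in (frozenset([i]) for i in items)
               if support(fs) >= min_support]
    frequent = {fs: support(fs) for fs in current}
    k = 1
    while current:
        # Join step: merge frequent k-itemsets into (k+1)-item candidates
        candidates = {a | b for a in current for b in current if len(a | b) == k + 1}
        # Prune step: drop candidates having an infrequent k-item subset
        candidates = {c for c in candidates
                      if all(frozenset(s) in frequent for s in combinations(c, k))}
        current = [c for c in candidates if support(c) >= min_support]
        frequent.update({c: support(c) for c in current})
        k += 1
    return frequent

# Invented mistake lists for five testees (NOT the paper's Table 1)
transactions = [
    {"R12", "R13", "R14"},
    {"R12", "R13", "R14"},
    {"R12", "R13", "R23"},
    {"R12", "R13", "R23"},
    {"R23"},
]
freq = apriori(transactions, min_support=2)
print(sorted(sorted(fs) for fs in freq if len(fs) == 3))
# [['R12', 'R13', 'R14'], ['R12', 'R13', 'R23']]
```

With this data, both 3-itemsets reach the support threshold, while {R14, R23} never co-occurs and is pruned early along with every candidate containing it.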

Once the frequent itemsets have been found, it is straightforward to generate strong association rules from them, i.e., rules that satisfy both minimum support and minimum confidence (Han & Kamber, 2001). For {[R.sub.12], [R.sub.13], [R.sub.23]}, the resulting association rules, each listed with its confidence, are shown in Table 2. The generation of association rules for {[R.sub.12], [R.sub.13], [R.sub.14]} proceeds in the same way. If the minimum confidence threshold is set to 50%, the output rules are those association rules with confidence >= 50%.
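The confidence computation can be sketched as follows. The support counts are illustrative assumptions (not the actual Table 1 data), chosen so that the rule {R23} => {R12, R13} reaches a confidence of 1.0, matching the 100% case the text uses as an example.

```python
# confidence(A => B) = support(A ∪ B) / support(A)  (Han & Kamber, 2001)
# Illustrative support counts, NOT the paper's Table 1:
support = {
    frozenset({"R23"}): 2,
    frozenset({"R12", "R13", "R23"}): 2,
}

def confidence(antecedent, consequent):
    """Confidence of the association rule antecedent => consequent."""
    return support[antecedent | consequent] / support[antecedent]

# Rule {R23} => {R12, R13}: every testee who erred on R23 also erred on
# R12 and R13, so the confidence is 1.0 (i.e., 100%).
print(confidence(frozenset({"R23"}), frozenset({"R12", "R13"})))  # 1.0
```

A rule is kept as "strong" only when this value meets the chosen minimum confidence threshold (50% in this work).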

The space of frequent itemsets can be analyzed as follows. Since [R.sub.ii] has no meaning (i.e., elements located on the diagonal of the matrix), only k x (k - 1)/2 items exist; let m be k x (k - 1)/2. Since a frequent itemset contains at least two items, the possible number of frequent itemsets is [C.sup.m.sub.2] when a frequent itemset contains exactly 2 items, [C.sup.m.sub.3] when it contains 3 items, and so on up to [C.sup.m.sub.m] when it contains m items. Thus, the size of the space is

[C.sup.m.sub.2] + [C.sup.m.sub.3] + ... + [C.sup.m.sub.m] = [2.sup.m] - 1 - m
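The closed form can be checked numerically; k = 4 follows the running example, so m = 6.

```python
from math import comb

# With k entities there are m = k*(k-1)/2 usable off-diagonal items, and the
# number of itemsets containing at least two items is
# C(m,2) + C(m,3) + ... + C(m,m) = 2**m - 1 - m.
k = 4                  # entities, as in the running example
m = k * (k - 1) // 2   # m = 6 usable items
space = sum(comb(m, i) for i in range(2, m + 1))
print(space, 2**m - 1 - m)  # 57 57
```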

Step 4: Inputting the hints of learning blockades for diagnostic feedback

Once frequent itemsets (i.e., learning blockades) are identified, the instructor is able to input the corresponding hints for each frequent itemset to provide suitable feedback for prospective students. In this manner, the instructor needs to input only the hints for major learning blockades, thereby saving effort in inputting unimportant hints.

During student practice, the system can respond with the corresponding hint once a mistake is made. Based on the frequent itemsets and association rules, the related hints and the occurrence probabilities of related mistakes are automatically generated to prevent students from making subsequent mistakes. For example, if a student commits an error on [R.sub.23] (e.g., marking the wrong cardinality ratio of the relationship or drawing a meaningless relationship), the system returns not only the hint for [R.sub.23], but also the hints for [R.sub.12] and [R.sub.13] together with the probability of committing such errors, which is 100% in this example.
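The hint lookup in Step 4 can be sketched as follows. The frequent itemsets, hint strings, and rule confidence below are invented placeholders, not the instructor's actual inputs: given a mistake, the system gathers the hints of every frequent itemset containing that item.

```python
# Illustrative diagnostic database (invented placeholders)
frequent_itemsets = [frozenset({"R12", "R13", "R23"}),
                     frozenset({"R12", "R13", "R14"})]
hints = {
    "R12": "Re-check the cardinality ratio between these two entities.",
    "R13": "Is a relationship between these entities really needed?",
    "R23": "Consider which side of the relationship is the 'many' side.",
}

def feedback(mistake):
    """Collect hints for the mistaken item and all items co-occurring with it."""
    out = {}
    for itemset in frequent_itemsets:
        if mistake in itemset:
            for item in itemset:
                if item in hints:
                    out[item] = hints[item]
    return out

# A mistake on R23 returns the hints for R23 itself plus the related R12 and R13
print(sorted(feedback("R23")))  # ['R12', 'R13', 'R23']
```

The confidence values of the matching association rules would be reported alongside these hints, as in the 100% example above.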

Web-based timely diagnosis system

Using the proposed approach, we implemented a realistic system, the Web-based Timely Diagnosis System (WTDS). Visual Studio .NET 2008 was chosen as the development tool for implementing the entire system because it fully supports the required techniques: HTML, JavaScript, ASP.NET, and AJAX.

Architecture of WTDS

In a traditional web application, a user request causes a response from a web server. For example, a server returns a new page with the desired information when a user presses a submit button. Thus, when drawing an ERD, the common scenario is that a student submits the result only after finishing the ERD and then receives feedback from the web server, i.e., delayed feedback. The typical software model of delayed feedback is presented in the left part of Fig. 6. The verification module only indicates whether the provided answer is "correct" or "incorrect," offering no hints or references. Thus, the module is easy to design because it only compares a finished ERD with the correct ERD, simply checking whether the two ERD matrixes (i.e., Figs. 2 and 3) are the same.

To provide timely feedback, AJAX is adopted, a technique for creating more efficient and interactive web applications that handle user requests instantly. AJAX applications do not require installing a browser plug-in, such as a Java Applet or Microsoft ActiveX, but work directly with most popular browsers, allowing immediate updating of partial content on a web page when a user performs an action.

When drawing an ERD, the process flows of timely feedback are as follows: Whenever a user executes a drawing step on a browser, this action triggers the local AJAX engine for submitting the request to the web server. The web application then processes this request and returns the results. After receiving the results, the browser's partial page is updated according to the returned results. This processing flow is employed iteratively if the browser operates continuously. This AJAX feature that enables the result to return right after a student has completed a step can be used for timely feedback. The software model of the timely diagnosis feedback is shown in the right part of Fig. 6. The Diagnostic module is designed according to the descriptions of the proposed approach in the previous section.

Operation procedure and demonstrations

The operation procedure of WTDS is divided into two phases, shown in Fig. 7.

The first phase generates a diagnostic database, which consists of the correct ERD answer, frequent itemsets, and association rules. An instructor first draws correct ERDs using the management interface of the system. The values of minimum support and minimum confidence must be set for generating frequent itemsets and association rules, after which several testees participate in a pretest. Following the pretest, the system automatically generates frequent itemsets and association rules and imports them into the database. The instructor can then input hints for each frequent itemset in the management interface, as shown in Fig. 8. These hints do not contain information about correct answers; they only provide clues crucial to rectifying a learner's misconceptions about entities and relationships. These hints scaffold students to actively reflect on and fix faulty concepts whenever they make mistakes during a problem-solving process.

In the second phase, students begin their learning process. When a student performs a drawing step on the ERD, the system determines whether this step is correct. If incorrect, the WTDS returns diagnostic feedback immediately to inform the student. The procedure repeats iteratively until the working ERD is finished correctly.

Figure 9 illustrates the user interface for students to practice their ERDs. The functions include selecting and laying out entities, building relationships and cardinality ratios, adding attributes, setting strokes, and setting font and line colors. After laying out entities in their proper places, a student is able to build relationships between entities along with their cardinality ratios. If a built relationship is incorrect, the diagnostic feedback appears immediately below. For example, as shown in Figure 9, once the student builds an incorrect relationship between Employee and Product, the feedback displays the following information: 1) Frequent itemset: [R.sub.13], [R.sub.12], and [R.sub.14]. 2) Major and likely errors and related hints: because a mistake was made on [R.sub.14], mistakes on [R.sub.13] and [R.sub.12] are more likely; meanwhile, the hints for [R.sub.13], [R.sub.12], and [R.sub.14] are also provided. 3) The confidence (i.e., probability) of making errors on [R.sub.12] and [R.sub.13] is also shown for reference.

Evaluation

The experimental course, called "Data Processing and Application," primarily teaches database concepts.

To enable students to practice diverse ERD models, five different ERD models were established in the proposed system: School, Sales, Publisher, Enterprise, and Hotel, as shown in Table 3. Thirty-six students were asked to join the pretest so that each ERD model has its own transaction database D for generating the corresponding frequent itemsets and association rules. The Apriori algorithm was applied with the minimum support count and minimum confidence set as 2 and 50%, respectively.

Table 3 depicts the results. The second column shows the number of frequent itemsets (denoted as NFI), whereas the third column shows the number of association rules (denoted as NAR). Our observations indicate that more entities tend to yield a higher NFI and, in turn, a higher NAR.

Objectives

To identify whether timely diagnostic feedback can effectively enhance learning achievement, this evaluation compared WTDS with two common feedback type systems, namely WDDS (Web-based Delayed Diagnosis System) and WVS (Web-based Verification System). In WDDS, diagnostic feedback is returned only after a student solves a problem completely. On the other hand, WVS only shows whether the learner's answers are correct after he/she solves a problem completely.

To conduct the evaluation, the latter two systems had to be developed. Based on the developed WTDS, building these systems is relatively easy because they are much simpler than the WTDS in terms of the software techniques and modules used; only slight modifications to the inner software structure of the WTDS are required. No GUI modification is necessary, so the three systems share the same GUI. Building the WDDS requires only removing the AJAX function from the WTDS. Building the WVS additionally requires replacing the diagnostic module with the verification module. The software model of the WVS is illustrated in the left part of Fig. 6.

This study was administered to three classes: the first class consisted of 52 students using the WTDS; the second class consisted of 49 students using the WDDS; and the third class consisted of 51 students using the WVS. Students in all three classes were taking the course "Data Processing and Application" for the first time to learn database concepts. This evaluation addressed the following issues: (1) analyzing the learning behavior and achievements among these three classes; and (2) analyzing the learning achievements within the WTDS class.

Research tools and procedure

This study adopted a quasi-experimental design method requiring four weeks. In the first week, all classes took the pretest and were familiarized with their designated system. In the following two weeks, all classes received traditional database instruction in traditional classrooms from the same teacher based on the same learning material. During these two weeks, all students used the designated system to practice in school or at home. In the fourth week, all classes took the posttest. In the meantime, questionnaires were administered to the WTDS class to elicit student attitudes toward the proposed system.

To assure pretest validity and reliability, the content of the pretest was reviewed by 2 experts and then administered to 26 students. Inappropriate questions were removed according to their difficulty and discrimination levels, resulting in 16 multiple-choice questions with an overall Cronbach's α of 0.86. To ensure posttest validity and reliability, the content of the posttest was handled similarly, resulting in 19 multiple-choice questions with an overall Cronbach's α of 0.82.

The first part of Table 4 shows the descriptive statistics of the pretest results. One-way ANOVA was further conducted to test for significant differences in the background knowledge of students in the three classes. The results revealed no significant difference in the average background-knowledge scores among the three classes (F = 0.120, p > .05).

Results and discussions

To analyze the preference tendency of participants, all systems recorded participant activities as logged data, including login time, source IP, and staying period (the time a visitor spends on the system). SPSS Ver.12 was used to conduct statistical analysis.

Comparison among the three classes

Table 4 also shows the descriptive statistics and paired-samples t test of the mean scores and standard deviations of achievement on the pretest and posttest. For each class, the mean in the posttest was significantly higher than that in the pretest, meaning that all three systems can enhance students' learning achievement significantly.

Analysis of Covariance (ANCOVA) was further used to compare learning achievement among these classes. The analysis regarded the experimental treatment as the independent variable, the posttest score as the dependent variable, and the pretest score as the covariate. Before analyzing covariance, the homogeneity of the intragroup regression coefficients was tested. SPSS analysis demonstrated that the F value of the regression coefficients was 2.68 (p > .05), so the homogeneity hypothesis was accepted and covariance analysis was conducted.

Posttest scores were adjusted by removing the influence of the pretest. Table 5 shows that the difference in learning achievement among these classes is significant (F = 12.40, p < .05), indicating a substantial difference in achievement in learning ERD. The Least Significant Difference (LSD) method was used to compare the classes, as shown in Table 5.

1. Students in Class 1 (using the WTDS) performed significantly better than those in Class 2 (using the WDDS) and Class 3 (using the WVS). Timely diagnostic feedback thus provides more effective learning than the other two feedback types. The WTDS immediately provides diagnostic hints when students make mistakes, helping them solve problems. Thus, students can revise their thoughts through the guidance of timely feedback and in turn improve their ERD problem-solving ability.

2. Students in Class 1 and Class 2 performed better than those in Class 3. This may be because the WVS used by Class 3 only indicates whether the final solution is "correct or incorrect," resulting in insufficient information for handling learning barriers, limiting students' problem-solving ability, and encouraging rote memorization.

The total retention time of each student in the three classes was also computed, i.e., the total time a student spent on the designated system during the evaluation period, calculated by accumulating the staying time of every login. Table 6 shows that the WTDS has the longest total retention time, although the result does not reach significance (F = 0.59; p > .05). This may be because students in each class felt the designated system could help their learning irrespective of the feedback type it provided, so the total retention times among the classes do not differ significantly.

Comparison within the WTDS class

Under a normal distribution, the most suitable ratios for the high-level cluster (HC), medium-level cluster (MC), and low-level cluster (LC) are 27%, 46%, and 27% (Kelley, 1939), respectively. Hence, the WTDS class was further divided into three clusters according to pretest scores (Liu et al., 2010). Students with scores in the top 27% were allocated to the HC, those with scores in the bottom 27% to the LC, and the rest to the MC.

Table 7 shows the descriptive statistics and paired-sample t test for the three clusters. For each cluster, the mean score in the posttest is significantly higher than that in the pretest, meaning that all clusters benefit by the proposed WTDS.

To investigate whether significant differences in learning achievement exist among the three clusters, ANCOVA was further used. SPSS analysis demonstrated that the F value of the regression coefficients was 3.05 (p > .05), so the homogeneity hypothesis was accepted and ANCOVA was conducted. The result showed that the difference in learning effectiveness among the three clusters is not significant (F = 0.41, p > .05), indicating no significant difference in learning achievement among the clusters. Timely diagnostic feedback can be deemed a problem-solving scaffold, a temporary framework that supports learning. Regardless of the cluster, the WTDS supports all learners in their "zone of proximal development" to perform complex tasks, such as problem solving, which they might be unable to perform without such help (Jonassen, 1997). However, this result may contradict Liu et al. (2010), who found that the learning strategy of computer-assisted concept mapping benefited the LC more than the HC. This contradiction may result from differences in the applied field and applied techniques.

Questionnaire and interviews

To understand student satisfaction, a questionnaire with a Likert scale ranging from 5 (strongly agree) to 1 (strongly disagree) was administered to the WTDS class. The questionnaire was based on Su et al. (2010) and modified to elicit student attitudes toward the WTDS. Among the 52 students in the WTDS class, 46 valid questionnaires were collected and analyzed. After the survey, 7 students were selected for short interviews to elicit their perceptions.

The questionnaire results, shown in Table 8, reveal that most of the evaluated aspects received positive feedback. Most students indicated that they were satisfied with WTDS and agreed that it is a stable and convenient online system. The results of questions 3 and 4 show that most students also agree that the WTDS is a practical auxiliary tool that can reduce student frustration when solving an ERD problem. For example, an interviewee stated: "I am a novice and it is helpful for me when encountering a hurdle. Too many hurdles will certainly decrease my willingness to learn. The WTDS guides me to solve the problem step by step, sustaining me to continue until finishing the work." This is because when the learner gradually overcomes each sub-problem, the possibility of solving the whole problem increases.

The results from questions 5 and 6 show that the system moderately stimulates students to spend more time on it. Five of 7 interviewees stated that they had practiced at home. As one interviewee stated, "I spent much time on the system, especially before the day of the exam because it provides sufficient examples to practice." Another interviewee commented, "I used to practice on paper and seldom on a computer screen. But I am interested in the WTDS. This is because when I have a misconception and make a mistake on one step, the system can respond with useful hints so that I can untangle the misconception immediately and remember not to make the same mistakes in subsequent solving processes." These responses may support the perspective of Mory (2004), who states that during initial practice, feedback should be provided for each step of the problem-solving procedure, allowing learners to verify immediately the correctness of a solution step.

Conclusions

This work investigates the influence of timely diagnostic feedback on database concept learning. It first adopts the Apriori algorithm to find frequent itemsets and then generates association rules for drawing an ERD. Once frequent itemsets are identified, an instructor inputs corresponding hints for each frequent itemset so that suitable feedback can be provided to students. The WTDS is implemented using association rules and AJAX techniques to promote student efficiency in learning ERD. Providing timely diagnostic feedback gives students the necessary guidelines and directions when they encounter hurdles during the problem-solving process.
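The Apriori step described above can be sketched as follows, using the wrong-item transactions visible in Table 1. Note that the extracted Table 1 shows only seven transactions although its caption mentions nine testees, so the confidences computed here differ from those in Table 2; the sketch illustrates the technique, not the paper's exact numbers, and the minimum support of 2 is an assumed threshold.

```python
from itertools import combinations

# Wrong-item transactions from the extracted Table 1 (seven rows shown).
transactions = [
    {"R12", "R13", "R23"}, {"R13", "R15"}, {"R13", "R14"},
    {"R12", "R13", "R15"}, {"R12", "R14"}, {"R13", "R14"},
    {"R12", "R14"},
]

def support(itemset):
    """Number of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions)

def apriori(min_support=2):
    """Level-wise Apriori (Agrawal & Srikant, 1994): grow frequent
    k-itemsets by joining frequent (k-1)-itemsets."""
    items = sorted({i for t in transactions for i in t})
    frequent, level = [], [frozenset([i]) for i in items
                           if support(frozenset([i])) >= min_support]
    while level:
        frequent += level
        candidates = {a | b for a in level for b in level
                      if len(a | b) == len(a) + 1}
        level = [c for c in candidates if support(c) >= min_support]
    return frequent

def rules(itemset):
    """All association rules X => itemset - X with their confidence."""
    out = {}
    for r in range(1, len(itemset)):
        for combo in combinations(sorted(itemset), r):
            x = frozenset(combo)
            conf = support(itemset) / support(x)
            out[(tuple(sorted(x)), tuple(sorted(itemset - x)))] = conf
    return out

for itemset in apriori(min_support=2):
    if len(itemset) >= 2:
        for (lhs, rhs), conf in rules(itemset).items():
            print(lhs, "=>", rhs, f"{conf:.0%}")
```

In the WTDS, each mined rule's consequent identifies the relationship a student is likely to get wrong next, which is what the instructor-entered hints are attached to.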

An evaluation was conducted to compare the WTDS with the WDDS and the WVS. Results reveal that all three systems significantly influenced ERD learning. The class using the WTDS achieved more than those using the WDDS and the WVS, even though total retention time did not differ significantly among the three classes. Within the WTDS class, the learning achievement of every cluster was enhanced significantly. Questionnaire results show that most students were satisfied with the WTDS and agreed that it can aid and stimulate learning and decrease frustration when solving ERD problems.

This work has the following limitations. First, the proposed methodology for providing timely diagnostic feedback should also suit similar data models, such as data flowcharts, state diagrams, and concept maps; however, whether it suits all well-structured problems, let alone ill-structured ones, remains unclear. Second, the proposed WTDS assumes that whenever a student makes a mistake, the student corrects it instantly according to the feedback. If a student ignores the feedback and does not correct the mistake, mistakes can accumulate. Because the feedback contains no information about remedial sequencing, a student's remedial path for correcting accumulated mistakes is heuristic. In the future, we will investigate the effect of remedial sequencing rules and attempt to identify optimal rules.

References

Agrawal, R., & Srikant, R. (1994). Fast algorithms for mining association rules. In J. B. Bocca, M. Jarke, & C. Zaniolo (Eds.), Proceedings of the 20th international conference on very large data bases (pp. 487-499). Santiago de Chile, Chile: Morgan Kaufmann.

Anderson, D. I., Magill, R. A., & Sekiya, H. (2001). Motor learning as a function of KR schedule and characteristics of task-intrinsic feedback. Journal of Motor Behavior, 33(1), 59-67.

Azevedo, R., & Bernard, R. M. (1995). A meta-analysis of the effects of feedback in computer-based instruction. Journal of Educational Computing Research, 13(2), 111-127.

Chen, C. M., Hsieh, Y. L., & Hsu, S. H. (2007). Mining learner profile utilizing association rule for web-based learning diagnosis. Expert Systems with Applications, 33(1), 6-22.

Chen, P. P. (1976). The entity-relationship model - toward a unified view of data. ACM Transactions on Database Systems, 1(1), 9-36.

Corbalan, G., Paas F., & Cuypers, H. (2010). Computer-based feedback in linear algebra: Effects on transfer performance and motivation. Computers & Education, 55(2), 692-703.

Corbett, A. T., & Anderson, J. R. (2001). Locus of feedback control in computer-based tutoring: Impact on learning rate, achievement and attitudes. In M. Beaudouin-Lafon & R. J. K. Jacob (Eds.), Proceedings of ACM CHI 2001 conference on human factors in computing systems conference (pp. 245-252). New York, NY: ACM Press.

Dempsey, J. V., Driscoll, M. P., & Swindell, L. K. (1993). Text-based feedback. In J. V. Dempsey & G. Sales (Eds.), Interactive instruction and feedback (pp. 21-54). Englewood, NJ: Education Technology.

Elmasri, R., & Navathe, S. (2006). Fundamentals of database systems (5th ed.). Boston, MA: Addison-Wesley.

Figueira-Sampaio, A. S., Santos, E. E. F., & Carrijo, G. A. (2009). A constructivist computational tool to assist in learning primary school mathematical equations. Computers & Education, 53(2), 484-492.

Han, J., & Kamber, M. (2001). Data mining: Concepts and techniques. San Mateo, CA: Morgan Kaufmann.

Heh, J. S., Li, S. C., Chang, A., Chang, M., & Liu, T. C. (2008). Diagnosis mechanism and feedback system to accomplish the full-loop learning architecture. Educational Technology & Society, 11(1), 29-44.

Huang, C. J., Chen, C. H., Luo, Y. C., Chen, H. X., & Chuang, Y. T. (2008). Developing an intelligent diagnosis and assessment e-learning tool for introductory programming. Educational Technology & Society, 11(4), 139-157.

Jonassen, D. H. (1997). Instructional design models for well-structured and ill-structured problem-solving learning outcomes. Educational Technology, Research and Development, 45(1), 65-94.

Kelley, T. L. (1939). The selection of upper and lower groups for the validation of test items. Journal of Educational Psychology, 30, 17-24.

Kulik, J. A., & Kulik, C. C. (1988). Timing of feedback and verbal learning. Review of Educational Research, 58(1), 79-97.

Laxman, K. (2010). A conceptual framework mapping the application of information search strategies to well and ill-structured problem solving. Computers & Education, 55(2), 513-526.

Lee, C. H., Lee, G. G., & Leu, Y. (2009). Application of automatically constructed concept map of learning to conceptual diagnosis of e-learning. Expert Systems with Applications, 36(2), 1675-1684.

Lewis, M. W., & Anderson, J. R. (1985). Discrimination of operator schemata in problem solving: Learning from examples. Cognitive Psychology, 17(1), 26-65.

Liu, P. L., Chen, C. J., & Chang, Y. J. (2010). Effects of a computer-assisted concept mapping learning strategy on EFL college students' English reading comprehension. Computers & Education, 54(2), 436-445.

Maughan, S., Peet, D., & Willmott, A. (2001). On-line formative assessment item banking and learning support. In M. Danson & C. Eabry (Eds.), Proceedings of the 5th International Computer Assisted Assessment Conference. Loughborough, England: Loughborough University.

Marriott, P. (2009). Students' evaluation of the use of online summative assessment on an undergraduate financial accounting module. British Journal of Educational Technology, 40(2), 237-254.

Mory, E. H. (2004). Feedback research revisited. In D. H. Jonassen (Ed.), Handbook of research on educational communications and technology (pp. 745-783). Mahwah, NJ: Erlbaum.

Ng'ambi, D., & Johnston, K. (2006). An ICT-mediated constructivist approach for increasing academic support and teaching critical thinking skills. Educational Technology & Society, 9(3), 244-253.

Paulson, L. D. (2005). Building rich web applications with Ajax. IEEE Computer, 38(10), 14-17.

Schroth, M. L. (1992). The effects of delay of feedback on a delayed concept formation transfer task. Contemporary Educational Psychology, 17(1), 78-82.

Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153-189.

Su, A. Y. S., Yang S. J. H., Hwang, W. Y., & Zhang, J. (2010). A Web 2.0-based collaborative annotation system for enhancing knowledge sharing in collaborative learning environment. Computers & Education, 55(2), 752-766.

Wang, T. H. (2008). Web-based quiz-game-like formative assessment: Development and evaluation. Computers & Education, 51(3), 1247-1263.

Wang, T. H. (2010). Web-based dynamic assessment: Taking assessment as teaching and learning strategy for improving students' e-Learning effectiveness. Computers & Education, 54(4), 1157-1166.

Jian-Wei Lin (1) *, Yuan-Cheng Lai (2) and Yuh-Shy Chuang (1)

(1) Chien Hsin University, Taiwan // (2) National Taiwan University of Science and Technology, Taiwan //

jwlin@uch.edu.tw // laiyc@cs.ntust.edu.tw // yschuang@uch.edu.tw

* Corresponding author

(Submitted November 14, 2011; Revised April 13, 2012; Accepted June 05, 2012)

Table 1. Transaction database consisting of the test results of nine testees

Transaction ID   List of Wrong Items (Relationships)

[T.sub.1]        [R.sub.12], [R.sub.13], [R.sub.23]
[T.sub.2]        [R.sub.13], [R.sub.15]
[T.sub.3]        [R.sub.13], [R.sub.14]
[T.sub.4]        [R.sub.12], [R.sub.13], [R.sub.15]
[T.sub.5]        [R.sub.12], [R.sub.14]
[T.sub.6]        [R.sub.13], [R.sub.14]
[T.sub.7]        [R.sub.12], [R.sub.14]

Table 2. Association rules for the frequent itemset {[R.sub.12],
[R.sub.13], [R.sub.23]}

Association Rules                            Confidence

{[R.sub.12], [R.sub.13]} = > {[R.sub.23]}    2/4 = 50%
{[R.sub.12], [R.sub.23]} = > {[R.sub.13]}    2/2 = 100%
{[R.sub.13], [R.sub.23]} = > {[R.sub.12]}    2/2 = 100%
{[R.sub.12]} = > {[R.sub.13] , [R.sub.23]}   2/6 = 33%
{[R.sub.13]} = > {[R.sub.12] , [R.sub.23]}   2/7 = 29%
{[R.sub.23]} = > {[R.sub.12] , [R.sub.13]}   2/2 = 100%

Table 3. Results of data mining of different ERD models

ERD Model (Entities)                                       NFI   NAR

Enterprise (Department, Project, Employee, Club)           3     8
Sales (Orders, Sales, Product, Customer)                   4     10
Publisher (Book, Publisher, Author, Member)                5     13
School (Department, Teacher, Course, Student, Classroom)   5     12
Hotel (Hotel, Location, Rooms, Cost, Manage, Facilities)   6     14

Table 4. Descriptive statistics and paired-samples t test of the
pretest and posttest for different classes

Group            N    Pre-test         Post-test          t value
                      Mean     SD      Mean      SD

Class 1 (WTDS)   52   30.48    6.46    80.19     14.59    -32.39 *
Class 2 (WDDS)   49   29.96    7.12    73.51     15.76    -27.14 *
Class 3 (WVS)    51   29.01    6.86    68.69     16.91    -21.32 *

Note. * The mean difference is significant at the .05 level.

Table 5. One-way ANCOVA on the scores of the post-test

Variable         Class     Mean (a)   SD     F          Post Hoc (b)

Pre-test                                     164.64 *   N/A
Type of System   Class 1   79.61      1.48   12.40 *    Class 1 > Class 2 *; Class 1 > Class 3 *
                 Class 2   73.82      1.53              Class 2 > Class 3 *
                 Class 3   69.01      1.51

Note. * The mean difference is significant at the .05 level. (a)
Covariates appearing in the model are evaluated at the following
value: Pretest = 30.14. (b) Adjustment for multiple comparisons: LSD
(equivalent to no adjustments).

Table 6. One-way ANOVA on total retention time

Class     Total Retention Time (Minutes)
          Mean     SD      F      p

Class 1   33.68    16.21   0.59   0.62
Class 2   29.32    17.24
Class 3   30.57    15.96

Table 7. Descriptive statistics and paired-sample t test of the
pretest and posttest for different clusters

Cluster   N    Pre-test          Post-test           t value

               Mean       SD     Mean        SD

LC        14   22.50      2.51   68.57       13.74   -12.77 *
MC        24   30.42      2.59   78.71       11.95   -21.41 *
HC        14   38.57      2.60   94.36       5.32    -42.05 *

Note. * The mean difference is significant at the .05 level.

Table 8. Questionnaire result

No   Question                                    M      SD

1.   Did the WTDS provide suitable user          4.11   0.73
       interfaces and stability?
2.   Did you experience overall satisfaction     4.10   0.89
       toward the WTDS?
3.   Did the WTDS aid you in learning ERD?       4.35   0.81
4.   Did the WTDS reduce your frustration in     4.25   1.01
       solving ERD problems?
5.   Did the WTDS increase your confidence       3.86   0.96
       in solving ERD problems?
6.   Did the WTDS stimulate you to spend more    3.96   0.91
       time on it?

Figure 8. Inputting hints (screenshot of instructor-entered hint sentences for relationships such as R12, R13, R14, and R23; the hint text is garbled in this extraction)
COPYRIGHT 2013 International Forum of Educational Technology & Society

Publication: Educational Technology & Society. Article type: Report. Date: April 1, 2013.