Content-free computer supports for self-explaining: modifiable typing interface and prompting.
Self-explaining, in which the student explains to him/herself while studying a text or solving problems, has attracted much attention since 1989. In a study in which students spontaneously explained a text, students were classified as good or poor according to their posttest performance (Chi et al. 1989). The good students spontaneously generated more self-explanations than the poor students did. In a study comparing students who self-explained spontaneously with students who self-explained with prompts from a human tutor, the prompted students performed better than the unprompted students did (Chi et al. 1994). However, individual differences existed in how students responded to prompts; that is, some prompted students generated more self-explanations and performed better in the posttest than other prompted students did. In addition, some unprompted students spontaneously generated many self-explanations. To account for the self-explaining phenomenon, Chi (2000) pointed out that self-explaining is a self-constructive activity that involves generating inferences to fill in information omitted from the text, as well as monitoring and repairing faulty knowledge. The inferences can be generated by integrating information presented across different sentences, by integrating information with prior knowledge, or by using the meanings of words to imply what may also be true. Knowledge repairing is a process of revising one's own imperfect mental model. Students' self-explanations can be classified as re-reading, paraphrase, inference, self-monitoring, etc. (Chi, 2000; McNamara, 2004). In addition, re-reading and paraphrase can be noted as low-quality self-explanations (LSE), while inference and self-monitoring can be noted as high-quality self-explanations (HSE) (Roy & Chi, 2005).
Studies have also revealed that HSE is positively related to learning gains (Chi & Bassok, 1989; Pirolli & Recker, 1994). In fact, in many studies, only HSE is regarded as self-explanation.
Most research on self-explaining has engaged students in speaking their explanations aloud while reading a text. However, some research has applied computers to support self-explaining. For instance, one study provided a menu-based interface through which students generated domain-based self-explanations by selecting from a set of domain-based rules or plans (Conati & VanLehn, 2000a). The results showed that some self-explaining students performed better than students who did not self-explain, but some self-explaining students performed worse. A possible reason is that self-explaining is a self-constructive activity, and choosing from a menu of explanations is not very constructive (Hausmann & Chi, 2002). In contrast with the menu-based interface, another research project allowed students to generate explanations by typing on the keyboard while solving problems (Aleven & Koedinger, 2002). The results showed that self-explaining through a typing interface also improved students' problem-solving performance. Another study applying a computer to self-explaining investigated whether a computer interface can support self-explaining (Hausmann & Chi, 2002). In that research, the students read a text sentence by sentence and typed their explanations of each sentence. The findings showed that spoken explanations tended to be fragmented and incoherent, while typed explanations could be noted for their completeness. The results also revealed that students generated fewer explanations in typed form than in spoken form. The researchers proposed several possible reasons for these results: typing requires more cognitive capacity and resources than speaking, and typing provides a record, so students avoid errors. In addition, the research found that content-free prompts benefited students' learning even in a typed self-explaining environment.
Content-free prompts are prompts without any domain-related or content-related information; they can therefore be easily adopted in any domain or for any content, for example, "Could you elaborate on what you just said?"
[FIGURE 1 OMITTED]
However, many issues surrounding the use of a computer to support self-explaining remain to be investigated. For example, Hausmann & Chi (2002) investigated the effects of using a typing interface for self-explaining a text sentence by sentence, but students could not re-read previous sentences or change their previous self-explanations when reading later sentences. However, students might want to re-read some sentences, or develop different understandings of earlier sentences as they read on. Typing records might also provide an opportunity for students to reflect on or revise their understanding of the text. Therefore, the first research issue we want to investigate is 1) the effects of self-explaining in a full-text reading and modifiable typing interface. The interface allows students to freely read the whole text in whatever sequence they choose; that is, students can read, skip, or re-read sentences. The interface also allows students to type self-explanations and modify their previous self-explanations. The study aims to investigate learning performance and self-explaining behaviors in such an environment. We are also interested in the students' reading and self-explaining sequences, and in whether and how students modify their self-explanations when they are allowed to do so.
The second research issue we want to investigate is 2) the effects of different kinds of prompts in self-explaining. Content-free prompts have been shown to help students generate more self-explanations and learn better (Hausmann & Chi, 2002). A study comparing generic content-free prompts and specific content-related prompts revealed that less able students learned better with specific prompts, while more able students learned better with generic prompts (Aleven et al. 2006). However, the detailed effects of different prompts on self-explanation remain unclear. In a full-text reading and typing self-explaining environment, this study aims to examine the effects of different kinds of prompts on self-explaining. Instead of providing adaptive prompts via human tutors, this study uses a learning companion as a prompter. A learning companion is a computer-simulated character that plays a non-authoritative role and may provide incorrect information (Chan & Baskin, 1990; Chou et al. 2003); that is, the prompts are designed in advance and are provided for specific sentences of the text without the system understanding the content of students' self-explanations.
[FIGURE 2 OMITTED]
To investigate the two research issues, a system was implemented with three kinds of interfaces: reading (Interface One), reading and self-explaining (Interface Two), and reading and self-explaining with prompts (Interface Three). Interface One is a full-text masking reading interface. The interface divides the text into several text fields, with each field containing one sentence or one diagram (Figure 1). The student reads the text by moving the mouse from one field to another. When the student moves the mouse into a text field, the sentence is revealed; when the student moves the mouse out of the text field, the sentence disappears. The student can skip sentences or re-read any sentence. The masking interface can help students focus their attention and allows the system to trace the student's attention (Conati & VanLehn, 1999; 2000b). In this study, the masking also allows the system to trace the sequence in which the students read and self-explain.
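The tracing behavior described above can be sketched in a few lines. The class below is a hypothetical illustration, not the study's implementation: it models only the event log a masking interface would keep, recording each mouse-enter (field revealed) and mouse-leave (field masked) so the reading sequence, including re-reads, can be reconstructed later. The class and method names are our own.

```python
import time

class MaskingTracer:
    """Hypothetical sketch of the event log behind a masking reading interface."""

    def __init__(self, n_fields):
        self.n_fields = n_fields
        self.visible = set()   # fields currently revealed
        self.log = []          # (timestamp, field, event) tuples

    def enter(self, field):
        """Mouse moves into a text field: reveal it and record the visit."""
        assert 1 <= field <= self.n_fields
        self.visible.add(field)
        self.log.append((time.time(), field, "enter"))

    def leave(self, field):
        """Mouse moves out of a text field: mask it again."""
        self.visible.discard(field)
        self.log.append((time.time(), field, "leave"))

    def reading_sequence(self):
        """The order in which fields were read, including re-reads."""
        return [f for _, f, e in self.log if e == "enter"]
```

A log like this is all that is needed to recover a student's reading sequence, since every reveal corresponds to one "enter" event.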
Interface Two contains a masking reading interface similar to Interface One; additionally, there is a self-explanation field beside each text field for students to type their self-explanations (Figure 2). However, in Interface Two, when a student types in a self-explanation field, the sentence of the corresponding text field is revealed. Therefore, in Interface Two students can read two fields at the same time if they put the cursor in a self-explanation field and move the mouse to another text field. Interface Two allows the students to read the text and generate their self-explanations by typing. Furthermore, the students can skip generating self-explanations on a sentence or modify self-explanations on previous sentences. Thus, the modifications of the self-explanations in each field can be recorded and analyzed. The masking mechanism was applied only to text fields, not to self-explanation fields, so that students could easily know which field was being explained and could read or modify their previous self-explanations.
[FIGURE 3 OMITTED]
The reading and self-explaining interface of Interface Three is similar to that of Interface Two, but Interface Three can display prompts on some fields (Figure 3). Unlike the system used in the study of Hausmann and Chi (2002), which provided prompts after students read and self-explained each sentence, Interface Three allows students to freely read and self-explain the whole text for a period of time and then provides prompts on some sentences. After receiving prompts, the students can continue to read and self-explain the text. In this study, the prompts are designed in advance to promote students' self-explanations and are provided to students without the system understanding the content of their self-explanations.
An experiment was conducted to investigate the effects of self-explaining in a full-text reading and typing self-explaining interface and the effects of different kinds of prompts for self-explaining. The participants were 75 undergraduate students majoring in computer science and enrolled in a Computer Programming course. The participants had learned about binary trees in a previous Basic Computer Concepts course.
[FIGURE 4 OMITTED]
The text introduces the concept of a red-black tree and teaches the process of building one. A red-black tree is a kind of binary tree with some specific limits. The building process of a red-black tree involves several steps: inserting a node according to the definition of a binary tree, checking whether the insertion takes the tree beyond the limits of a red-black tree, and transforming the tree to fit the limits. The first part of the text introduces conceptual knowledge of red-black trees; that is, it describes the definition and specific limits of a red-black tree. The second part of the text presents procedural knowledge of red-black trees; that is, it uses a set of diagrams to present an example of how to build a red-black tree with six nodes containing the data 1 to 6, growing from one node to six nodes, node by node and step by step (Figure 4). The second part contains no other textual clarification of the building process in the example. Thus, the text omits some information, such as checking the status of the limits and the principles for transforming the tree. The design of the text aims to observe whether or not the students infer the omitted information. The text was divided into 21 fields and placed into a masking reading interface as shown in Figure 1. The first part of the text contains eight fields and the second part contains 13 fields.
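The three-step building process described above (insert as in a binary search tree, check the red-black limits, transform to restore them) can be sketched in code. The following is a minimal sketch of standard red-black insertion with recoloring and rotations, not the study's learning materials; it is included only to make the procedure the diagrams teach concrete.

```python
RED, BLACK = "red", "black"

class Node:
    def __init__(self, key):
        self.key = key
        self.color = RED  # new nodes start red
        self.left = self.right = self.parent = None

class RedBlackTree:
    def __init__(self):
        self.root = None

    def insert(self, key):
        # Step 1: ordinary binary-search-tree insertion.
        node, parent = Node(key), None
        cur = self.root
        while cur:
            parent = cur
            cur = cur.left if key < cur.key else cur.right
        node.parent = parent
        if parent is None:
            self.root = node
        elif key < parent.key:
            parent.left = node
        else:
            parent.right = node
        # Steps 2-3: check the red-black limits and transform if needed.
        self._fix(node)

    def _rotate(self, x, left):
        # Rotate x downward; its child y on the opposite side takes x's place.
        y = x.right if left else x.left
        if left:
            x.right, y.left = y.left, x
            if x.right: x.right.parent = x
        else:
            x.left, y.right = y.right, x
            if x.left: x.left.parent = x
        y.parent, x.parent = x.parent, y
        if y.parent is None:
            self.root = y
        elif y.parent.left is x:
            y.parent.left = y
        else:
            y.parent.right = y

    def _fix(self, z):
        # Repair a red node whose parent is also red (the violated limit).
        while z.parent and z.parent.color == RED:
            gp = z.parent.parent
            left_side = z.parent is gp.left
            uncle = gp.right if left_side else gp.left
            if uncle and uncle.color == RED:
                # Recolor and move the possible violation upward.
                z.parent.color = uncle.color = BLACK
                gp.color = RED
                z = gp
            else:
                # Rotate so the two red nodes line up, then fix colors.
                if left_side and z is z.parent.right:
                    z = z.parent
                    self._rotate(z, left=True)
                elif not left_side and z is z.parent.left:
                    z = z.parent
                    self._rotate(z, left=False)
                z.parent.color = BLACK
                gp.color = RED
                self._rotate(gp, left=not left_side)
        self.root.color = BLACK  # the root is always black
```

Inserting the keys 1 to 6 in order, as in the example text, triggers both kinds of transformation (recoloring and rotation) that the diagrams leave for the reader to infer.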
The pretest includes a Binary Tree Definition Test (BTDT) and a Red-black Tree Definition Test (RTDT). The BTDT has three questions, each of which shows a tree for students to judge whether or not it is a binary tree. The student can answer "Yes", "No", or "I don't know". Each question was scored as 1 if the student answered correctly and as 0 if the student answered incorrectly or answered "I don't know"; that is, the full mark for the BTDT is 3. Similarly, the RTDT has three questions in which students judge whether or not a tree is a red-black tree. The pretest is designed to assess the students' knowledge of binary trees (prior knowledge) and red-black trees (target knowledge). The posttest includes a BTDT, an RTDT, a Retention Test, and a Transfer Test. The questions in the BTDT and RTDT are the same as those in the pretest and assess whether students can classify these trees; that is, they assess conceptual knowledge according to Bloom's taxonomy (Anderson et al. 2000). The Retention Test asks the students to draw the building process of a red-black tree with six nodes, the same as in the text, and aims to assess whether or not the students understand and remember the building process (procedural knowledge) of a red-black tree from the text. The Transfer Test assesses whether the students are able to apply the knowledge to another similar problem (Bransford et al. 2000). The Transfer Test includes two questions. The first question asks the students to draw the building process of a red-black tree with six nodes of 3, 6, 1, 5, 4, and 2. The content of these nodes is the same as that of the text, but the sequence of the nodes is different, which makes the building process different. The first question is termed the Near Transfer Test. The second question is to add a node of 16 to a red-black tree with nodes of 10, 20, 30, 15, 13, 14, and 17. The second question involves several transformations and is thus more difficult than the first question.
The second question is termed the Far Transfer Test. In the Retention Test and each question of the Transfer Test, the students' answers were scored as a decimal fraction from 0 to 1 according to their correctness. For example, there are about 10 steps (some steps can be combined) in the answer to the Retention Test, so the students received 0.1 for each step they performed correctly.
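The step-based scoring rule above can be expressed as a one-line computation. The sketch below is a hypothetical illustration (the function name and the clamping of out-of-range inputs are our own): an answer measured against N reference steps earns 1/N for each correctly performed step, yielding a score between 0 and 1.

```python
def score_answer(steps_correct, total_steps=10):
    """Fraction of reference steps performed correctly, in [0, 1].

    With total_steps=10 (the Retention Test), each correct step
    is worth 0.1, as in the scoring rule described in the text.
    """
    if total_steps <= 0:
        raise ValueError("total_steps must be positive")
    # Clamp so a miscounted answer cannot exceed the full mark.
    return min(max(steps_correct, 0), total_steps) / total_steps
```

For instance, a student who performed 7 of the 10 Retention Test steps correctly would score 0.7 under this rule.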
Grouping and procedure
Participants were divided into four groups: Reading (termed Group R), Self-Explaining with No Prompts (Group SENP), Self-Explaining with content-Free Prompts (Group SEFP), and Self-Explaining with content-Related Prompts (Group SERP). In investigating the effect of self-explaining in a modifiable typing interface, Group R is a control group and Group SENP is an experimental group. Since they used the same modifiable typing interface to generate self-explanations, Group SEFP and Group SERP could also be regarded as experimental groups on this issue, although a prompting effect was involved. In exploring the effects of different kinds of prompts in self-explaining, Group SENP is a control group while Group SEFP and Group SERP are both experimental groups. First, participants completed the pretest. Then students in the different groups engaged in different computer lessons. The students in Group R used Interface One to read the text for 30 minutes. Students in Group SENP used Interface Two to read the text and self-explain without any prompts for 30 minutes. Students in Group SEFP used Interface Three to read the text and self-explain for 30 minutes, with the system providing content-free prompts after 20 minutes. Students in Group SERP used Interface Three to read the text and self-explain for 30 minutes, with the system providing content-related prompts after 20 minutes. After the learning activities, participants completed the posttest.
The students in Group SEFP and Group SERP both used Interface Three. The system provided 10 prompts to students after 20 minutes. The difference was that the prompts for Group SEFP were content-free prompts, while those for Group SERP were content-related prompts. The content-free and content-related prompts were designed in advance and both related to the same specific sentences of the text (fields 3, 7, 10, 11, 12, 13, 15, 16, 18, and 21) so that the effects of the different prompts could be observed. The prompts were provided to the students without the system understanding the content of the students' self-explanations. The content-free prompts contained no domain-related or content-related information and thus could be easily adopted for any domain or content, for example, "Could you explain it more clearly?". The content-related prompts involved some domain-related or content-related information, such as "Is this a red-black tree? Why?" and "Why is node 2 located at the right child-node of node 1?".
Results and analyses
The results of the experiment were reported and analyzed from the following perspectives: learning effects, self-explanation generation and modification, and prompted vs. unprompted locations.
Among the 75 participants, 11 students answered more than two of the three questions in the RTDT pretest correctly, and their data were thus excluded. In addition, one student in Group SEFP and one student in Group SERP did not generate any self-explanations, so their data were also excluded. The assessment results of the four groups are listed in Table 1. Performance equals the mean score divided by the full marks. The pretest and posttest results of the BTDT and RTDT did not differ significantly across the four groups. However, the students in all four groups performed significantly better in the posttest than in the pretest, on both the BTDT and the RTDT. This means students in all four groups gained conceptual knowledge of binary trees and red-black trees. In the Retention Test, the students in the four groups performed similarly. In the Near Transfer Test, the students in Group SEFP and Group SERP performed significantly better than the students in Group R (p < 0.05). The students in Group SENP also performed better than the students in Group R, although the difference did not reach significance (p = 0.12). In the Far Transfer Test, the students in all four groups attained low scores, so a floor effect may have occurred. However, the students in Group SEFP and Group SERP performed slightly (but not significantly) better than the students in Group SENP and Group R. Overall, the results revealed that self-explaining through typing, particularly when prompted, made students perform better at applying the target procedural knowledge to similar problems.
Self-explanation generation and modification
The students' self-explanations were classified according to the classifications of Chi (2000) and McNamara (2004). A self-explanation was classified as a paraphrase if the student repeated the text sentence or expressed the content of the text in his/her own words without further information or inference. Paraphrases were further divided into correct and incorrect paraphrases according to whether the self-explanation involved incorrect information. The classification of bridging inference was used to denote a self-explanation that integrates information across different text sentences. Prior-knowledge inference was used to indicate a self-explanation that integrates information from the text with prior knowledge. The concept of binary trees is prior knowledge for the original text, but we added additional information about binary trees to the first sentence of the text in case the students did not know what a binary tree is. Therefore, self-explanations that integrated a text sentence with knowledge of binary trees were classified as bridging inferences, and no prior-knowledge inferences were classified in this study. A self-explanation was classified as a logic inference if the student inferred further information from the sentence by logical deduction. Positive self-monitoring was the classification used to denote a self-explanation in which the student expressed positive understanding of the sentence, such as "It is easy" or "I see". Negative self-monitoring was used to indicate a self-explanation in which the student was uncertain of or questioned the text, such as "I do not know" or "Why is node 1 black?". Self-explanations that went beyond the text were classified as others. Table 2 lists the generating percentages of the different self-explanation classifications for the final self-explanations of the different groups. The generating percentage denotes the frequency of the classification divided by the number of text sentences, that is, 21.
Inferences and self-monitoring were counted as HSE, while paraphrases and others were counted as LSE. The results revealed that the students of Group SEFP and Group SERP generated significantly fewer incorrect logic inferences than the students of Group SENP did. This might indicate that both content-free and content-related prompts led students to generate fewer incorrect logic inferences. In addition, the students of Group SERP generated more negative self-monitoring than the students of Group SEFP did (p = 0.06). This might suggest that the content-related prompts made students more aware of their ignorance of the text than content-free prompts did.
From the records of the students' self-explanations, the frequency and modifications of self-explanations in the different groups are listed in Table 3. Self-explanation frequency indicates the total number of students' self-explanations, including both LSE and HSE. The generating percentage indicates the frequency divided by 21, that is, the total number of text fields. Modifications denote the number of students' self-explanations that were modified after being typed. The frequencies of self-explanations and of HSE in Group SENP, Group SEFP, and Group SERP are similar, but the numbers of modifications in Group SEFP and Group SERP are significantly greater than that in Group SENP. In addition, the number of modifications in Group SERP is significantly greater than that in Group SEFP. These results might reveal that prompts, particularly content-related prompts, encouraged students to modify their self-explanations.
According to the classifications, the self-explanation on each text field could be categorized as none (no self-explanation), LSE, or HSE, and thus the modifications of the self-explanations on the same text fields before and after prompting were classified into six kinds (Table 4). The percentage represents the frequency of a modification classification divided by the total frequency of the six classifications. The results showed that content-related prompts gave rise to significantly more self-explanation modifications from LSE to HSE and from HSE to HSE than content-free prompts did.
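The percentage computation described above can be sketched as follows. This is a hypothetical illustration only, since Table 4's exact six kinds are not reproduced here; the sketch simply pairs the before- and after-prompting category on each field and computes each pair's share of all modifications. The function names are our own.

```python
from collections import Counter

CATEGORIES = {"none", "LSE", "HSE"}

def modification_kind(before, after):
    """Pair the self-explanation category on one field before and
    after prompting; each category is 'none', 'LSE', or 'HSE'."""
    if before not in CATEGORIES or after not in CATEGORIES:
        raise ValueError("category must be 'none', 'LSE', or 'HSE'")
    return (before, after)

def modification_shares(fields):
    """Share of each modification kind among all modified fields,
    mirroring the percentage definition used for Table 4."""
    counts = Counter(modification_kind(b, a) for b, a in fields)
    total = sum(counts.values())
    return {kind: n / total for kind, n in counts.items()}
```

For example, with fields recorded as `[("LSE", "HSE"), ("HSE", "HSE"), ("LSE", "HSE"), ("none", "LSE")]`, the LSE-to-HSE kind accounts for half of all modifications.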
Prompted vs. unprompted locations
Prompts were provided at the same specific locations in Group SEFP and Group SERP. Table 5 lists the self-explanation statistics for prompted locations (10 fields) and unprompted locations (11 fields) in the different groups. The percentage represents the frequency divided by the total number of fields. The results showed that the percentages of HSE and of self-explanation modifications at prompted locations were significantly higher than those at unprompted locations in both Group SEFP and Group SERP. Comparisons across groups also showed that the percentages of self-explanation modifications at prompted locations in Group SEFP and Group SERP were both significantly greater than that in Group SENP. This revealed that both content-free and content-related prompts promoted more self-explanation modifications at prompted locations. In addition, the percentage of self-explanation modifications at unprompted locations in Group SERP was also greater than that in Group SENP. This might indicate that content-related prompts also encouraged students to generate more self-explanation modifications even at unprompted locations. However, the results also showed that the percentage of HSE at unprompted locations in Group SENP was significantly greater than those in Group SEFP and Group SERP. This might indicate that prompts made students pay more attention to prompted locations and less attention to unprompted locations.
This study focuses on content-free computer supports because these supports can be easily applied to other domains or content. The reading and typing self-explaining interface can be applied to other domains or content by changing the text, but the typing interface makes students' self-explanations more complex to analyze than a menu-based interface does, because the analysis involves natural language processing. This study also used a learning companion to provide prompts without understanding the content of students' self-explanations. This removes the need to build domain-related knowledge into the system to analyze the self-explanations. Content-free prompts can be used for any domain or content, and content-related prompts can be assigned in advance along with the text. Thus the developed system can be used for other domains and content by changing the text and the content-related prompts.
The prompting mechanism in this study leaves several issues unclear and has room for enhancement. First, this study provided prompts without understanding the content of students' self-explanations. Understanding the students' self-explanations would enable the system to provide adaptive prompts, which may have better effects on self-explanation; however, whether adaptive prompts would produce better effects remains unclear. Second, this study provided prompts after 20 minutes for all students. The prompting timing could be changed to fit the reading and self-explaining speed of each student; for example, students could push a button when they have finished self-explaining the text, after which the system provides prompts. Third, this study prompted at the same specific locations for all students. The number and locations of prompts could be randomly assigned or made adaptive, for example, prompting at the sentences where a student did not generate any self-explanations.
Some qualitative findings are reported below, but most of them are preliminary observations and require further investigation.
The study of Hausmann and Chi (2002) found that typed self-explanations can be noted for their completeness, while spoken self-explanations tend to be fragmented and incoherent. They also pointed out that "in typing, students might have filtered out what they would spontaneously say orally." Consistent with their finding, students' self-explanations in this study appear to be complete. However, some students (about 23%) typed self-explanations that expressed what they were thinking, such as "hmm ...", "Oh!", or "I am still thinking ...". These students might tend to filter less when typing their self-explanations.
The interface used in this study allowed students to read and self-explain in their own sequences. An analysis of the students' reading and self-explaining sequences revealed three kinds of sequences. Some students (about 56.25%) read several text fields forward and backward many times before typing self-explanations on a field. For example, most of them typed self-explanations on field 1 only after reading from field 1 to field 3 or field 5. Some students (about 15%) read all text fields once and then began to self-explain from field 1. Some students (about 15%) read and self-explained almost field by field, seldom moving forward and backward; that is, they read a field, self-explained it, and then read the next field. The other students generated few self-explanations, and their sequences were not recognized. These results reveal that students have different reading and self-explaining sequences. The sequences might be regarded as records of students' efforts to understand the text. However, many issues remain unclear, for example, "Do these sequences matter?" and "Can these sequences help us understand their self-explanations?"
The interface used in this study allowed students to modify their previous self-explanations. The results in Table 3 reveal that students do modify their previous self-explanations. Their modification records were analyzed to clarify how they modified their self-explanations, and three kinds of modifications were identified: addition, deletion, and adjustment. First, students added more self-explanations to some fields in addition to their previous self-explanations. The added self-explanations could be paraphrases, inferences, or self-monitoring; most modifications in this study were additions. Second, students deleted some of their previous self-explanations, which might indicate that students found these self-explanations incorrect. Third, students adjusted the content of their previous self-explanations by adding, deleting, or changing some words. The adjustments tended to be minor and to make the self-explanations more complete. Different kinds of modifications could occur together; for example, a student deleted a self-explanation from his previous self-explanations and then added some self-explanations as a modification. These modifications might be regarded as signs that students revised their understanding of the text. They also support our assumption that a typing record
might provide an opportunity for students to reflect on or revise their understanding of the text. Compared with the low HSE generating percentages (1.6% without prompts and 10.3% with prompts) in the study of Hausmann and Chi (2002), the students in the full-text reading and typing self-explaining environment generated higher HSE generating percentages (about 43%-53% across groups). There are several possible reasons for the difference between the two results. First, the full-text reading and modifiable typing self-explaining interface allows students to modify their previous self-explanations and reflect on their previous understanding of the text, and thus might provide a more constructive environment for students. Second, the participants in this study majored in computer science and may be accustomed to typing; however, their attitudes toward typing and their typing speeds were not evaluated. Third, the study of Hausmann and Chi (2002) presented the text in a text-only situation, while this study used a text situation in the first part of the text and a diagram situation in the second part. Roy and Chi (2005) suggested that text in diagram or multimedia situations promotes higher percentages and performance of students' self-explanations than text in a text-only situation does. In addition, Roy and Chi (2005) listed the proportions of HSE to the sum of HSE and LSE in some spoken self-explaining studies in different learning contexts: an average of 45.04% in a text situation, an average of 68.26% in a multimedia situation, and 91.61% in a diagram situation. The corresponding proportion in the typing self-explaining study of Hausmann and Chi (2002) is 8.45%. In this study, the proportions of HSE to the sum of HSE and LSE are 55%-63% across groups.
Although many variables in this study and the study of Hausmann and Chi (2002) were not equivalent (for example, the participants and materials), and thus many issues require more controlled experiments for clarification, this study shows that self-explaining through typing can yield HSE proportions similar to those of spoken self-explaining.
This study investigated the effects of two content-free computer supports for self-explaining: a modifiable typing interface and prompting. The full-text reading and modifiable typing interface allowed students to read the text in their own sequences, type self-explanations, and modify their previous self-explanations. In addition, the computer could provide prompts to promote students' self-explanations. The results of the experiment showed that the students who self-explained through typing, particularly those who were prompted, performed better at applying the target procedural knowledge to similar problems than the reading-only students did. The results also revealed that the typed record of self-explanations might provide an opportunity for students to reflect on or revise their understanding of the text.
This study also investigated the effects of prompting by a learning companion that does not understand the content of students' self-explanations. Content-free and content-related prompts shared several effects. First, both made self-explaining students perform better in applying target procedural knowledge to similar problems than the reading students did. Second, both prompted students to generate more self-explanation modifications and fewer incorrect logic inferences. Third, both led students to generate a higher HSE percentage and more self-explanation modifications in prompted locations than in un-prompted locations. This might indicate that prompts made students pay more attention to prompted locations and less attention to un-prompted locations.
However, content-related prompts facilitated more self-explanation modifications and more negative self-monitoring than content-free prompts did. An analysis of the classifications of self-explanation modifications showed that content-related prompts promoted more modifications from LSE to HSE and from HSE to HSE than content-free prompts did. A possible cause is that content-related prompts asked students specific questions about the text and thus prompted them to reflect on their understanding or become aware of their ignorance.
The authors would like to thank the National Science Council for its support (NSC94-2520-S-155-002). The authors also thank two anonymous reviewers for their valuable comments on revising the paper.
Aleven, V. & Koedinger, K.R. (2002). An effective metacognitive strategy: learning by doing and explaining with a computer-based cognitive tutor. Cognitive Science, 26, 147-179.
Aleven, V., Pinkwart, N., Ashley, K., & Lynch, C. (2006). Supporting self-explanation of argument transcripts: specific v. generic prompts. Workshop on Intelligent Tutoring Systems for Ill-Defined Domains, 8th International Conference on Intelligent Tutoring Systems, 47-55.
Anderson, L. W., Krathwohl, D. R., Airasian, P. W., & Cruikshank, K. A. (2000). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives, Allyn & Bacon.
Bransford, J., Brown, A. L., & Cocking, R. R. (2000). (Eds.). How People Learn: Brain, Mind, Experience, and School: Expanded Edition, National Academies Press.
Chan, T. W., & Baskin, A. B. (1990). Learning companion systems. In C. Frasson & G. Gauthier (Eds.), Intelligent Tutoring Systems: At the Crossroads of Artificial Intelligence and Education, Chapter 1, New Jersey: Ablex Publishing Corporation.
Chi, M.T.H. (2000). Self-explaining expository texts: The dual processes of generating inferences and repairing mental models. In R. Glaser (Ed.), Advances in Instructional Psychology, Hillsdale, NJ: Lawrence Erlbaum Associates, 161-238.
Chi, M. T. H., & Bassok, M. (1989). Learning from examples via self-explanations. In L. B. Resnick (Ed.), Knowing, learning, and instruction: Essays in honor of Robert Glaser. Hillsdale, NJ: Lawrence Erlbaum Associates, 251-282.
Chi, M. T. H., Bassok, M., Lewis, M., Reimann, P., & Glaser, R. (1989). Self-explanations: How students study and use examples in learning to solve problems. Cognitive Science, 13, 145-182.
Chi, M. T. H., de Leeuw, N., Chiu, M. H., & LaVancher, C. (1994). Eliciting self-explanations improves understanding. Cognitive Science, 18, 439-477.
Chou, C. Y., Chan, T. W., & Lin, C. J. (2003). Redefining the learning companion: the past, present, and future of educational agents. Computers & Education, 40, 255-269.
Conati, C., & VanLehn, K. (1999). Teaching meta-cognitive skills: implementation and evaluation of a tutoring system to guide self-explanation while learning from examples. In Proc. of AIED'99, 9th World Conference of Artificial Intelligence and Education, Le Mans, France, 297-304.
Conati, C., & VanLehn, K. (2000a). Toward computer-based support of meta-cognitive skills: a computational framework to coach self-explanations. International Journal of Artificial Intelligence in Education, 11, 389-415.
Conati, C., & VanLehn, K. (2000b). Further results from the evaluation of an intelligent computer tutor to coach self-explanation. Paper presented at ITS 2000, 9th International Conference on Intelligent Tutoring Systems, Montreal, Canada, 304-313.
Hausmann, R. G., & Chi, M. T. H. (2002). Can a computer interface support self-explaining? Cognitive Technology, 7 (1), 4-14.
McNamara, D. S. (2004). SERT: self-explanation reading training. Discourse Processes, 38 (1), 1-30.
Pirolli, P., & Recker, M. M. (1994). Learning strategies and transfer in the domain of programming. Cognition and Instruction, 12, 235-275.
Roy, M., & Chi, M. T. H. (2005). The self-explanation principle. In Cambridge Handbook of Multimedia Learning, 271-286.
Chih-Yueh Chou and Hung-Ta Liang
Department of Computer Science and Engineering, Yuan Ze University, ChungLi, Taiwan // Tel: +886-03-463-8800 ext 2362 // Fax: +886-03-463-8850 // firstname.lastname@example.org
Table 1. Learning effects of different groups (mean (SD); percentage)

                                 Group R        Group SENP     Group SEFP     Group SERP
                                 n=14           n=18           n=13           n=17
Pretest (BTDT)                   1.28 (1.14)    1.83 (0.85)    1.53 (0.43)    1.47 (0.38)
                                 42%                           51%            49%
Pretest (RTDT)                   0.5 (0.26)     0.33 (0.23)    0.30 (0.23)    0.11 (0.11)
                                 17%                           10%            4%
Posttest (BTDT)                  2.14 (0.74)    2.55 (0.37)    2.15 (0.47)    2.05 (0.68)
                                 71%                           72%            68%
Posttest (RTDT)                  2.42 (0.57)    2.27 (0.56)    2.15 (0.64)    2.17 (1.02)
                                 81%                           72%            72%
Posttest - pretest (BTDT)        0.85 (1.51)    0.72 (1.27)    0.61 (0.42)    0.58 (1.007)
                                 28%            24%            20%            19%
Posttest - pretest (RTDT)        1.92 (0.84)    1.94 (0.99)    1.84 (1.14)    2.05 (1.05)
                                 64%            65%            61%            68%
Posttest (Retention Test)        0.68 (0.09)    0.63 (0.15)    0.8 (0.06)     0.74 (0.10)
                                 68%            63%            80%            74%
Posttest (Near Transfer Test)    0.29 (0.10)    0.49 (0.14) C  0.64 (0.06) A  0.54 (0.11) B
                                 29%            49%            64%            54%
Posttest (Far Transfer Test)     0.02 (0.006)   0.03 (0.004)   0.14 (0.07)    0.17 (0.09)
                                 2%             3%             14%            17%
A, B: Significantly greater than Group R, p < 0.05
C: Greater than Group R, p = 0.12

Table 2. Self-explanation classifications of different groups

Classification                           Group SENP   Group SEFP   Group SERP
Correct paraphrases                      25.93%       29.30%       31.65%
Incorrect paraphrases                    5.03%        3.66%        1.68%
Correct bridging inferences              25.66%       35.90%       33.05%
Incorrect bridging inferences            4.50%        0.00%        0.28%
Correct prior-knowledge inferences       0.00%        0.00%        0.00%
Incorrect prior-knowledge inferences     0.00%        0.00%        0.00%
Correct logic inferences                 1.59%        1.83%        2.80%
Incorrect logic inferences               3.97%        0.37% A      1.12% B
Positive self-monitoring                 8.47%        0.73%        2.24%
Negative self-monitoring                 8.99%        4.03%        12.88% C
Others                                   2.12%        1.47%        0.84%
HSE                                      53.17%       42.86%       52.38%
HSE / (LSE+HSE)                          63.19%       55.45%       60.52%
A, B: Significantly less than Group SENP, p < 0.05
C: Greater than Group SEFP, p = 0.06

Table 3. Self-explanation statistics of different groups (mean (SD); percentage)

                                           Group SENP     Group SEFP     Group SERP
Self-explanation frequency and                            12.61 (40.58)  13.05 (42.17)
percentage before prompts                                 60.07%         62.16%
HSE frequency and percentage                              6.3 (20.2)     6.29 (13.2)
before prompts                                            30%            29.90%
Self-explanation modifications                            3.30 (13.23)   3.05 (5.80)
before prompts                                            16%            15%
Final self-explanation frequency           18.1 (25.9)    16.23 (15)     18.23 (15.9)
and percentage                             86.24%         77.29%         86.83%
Final HSE frequency and percentage         11.16 (36)     9 (13.16)      11 (11.25)
                                           53.17%         42.86%         52.38%
Final self-explanation modifications       2.44 (11.08)   8.4 (14.89) A  10.52 (21.51) B, C
                                           12%            40%            50%
A, B: Significantly greater than Group SENP, p < 0.05
C: Significantly greater than Group SEFP, p < 0.05

Table 4. Self-explanation modification after prompting

Classification    Group SEFP   Group SERP
LSE to LSE        14.71%       4.88%
LSE to HSE        5.88%        18.70% A
HSE to HSE        8.82%        21.14% B
HSE to LSE        2.94%        1.63%
None to HSE       50.00%       39.02%
None to LSE       17.65%       14.63%
A, B: Significantly greater than Group SEFP, p < 0.05

Table 5. Self-explanation statistics on prompted and un-prompted locations (mean (SD); percentage)

                        Group SENP                       Group SEFP
                        Prompted       Un-prompted       Prompted        Un-prompted
Self-explanations       8.44 (8.26)    9.66 (5.2) a      8.6 (1.75) A    7.53 (10.7)
                        84%            87.00%            86%             68%
HSE                     5.77 (9.7)     5.38 (10.4) b, c  5.5 (5.6) B     3.3 (2.89)
                        57%            48.90%            55%             30%
Self-explanation        1.05 (4.4)     1.44 (3)          5.15 (4.8) D, d 2.5 (5.6)
modifications           10%            13%               51%             23%

                        Group SERP
                        Prompted            Un-prompted
Self-explanations       8.64 (5.1)          8.76 (6.06)
                        86%                 79%
HSE                     7 (5.8) C           3.17 (5.4)
                        70%                 28%
Self-explanation        7.29 (8.9) E, e, f  3.17 (7) g
modifications           72%                 28%

A: Approximately significantly greater than un-prompted locations, p = 0.053
B, C, D, E: Significantly greater than un-prompted locations, p < 0.05
a, f: Significantly greater than Group SEFP, p < 0.05
b: Approximately significantly greater than Group SEFP, p = 0.053
c: Significantly greater than Group SERP, p < 0.05
d, e, g: Significantly greater than Group SENP, p < 0.05
Author: Chou, Chih-Yueh; Liang, Hung-Ta
Publication: Educational Technology & Society
Date: Jan 1, 2009