The effects of cognitive styles on the use of hints in academic English: a learning analytics approach.
English has become an international language, so schools in Asia treat English as the first "foreign" language (Yang & Zhang, 2015). Thus, how to enhance English abilities is a critical issue in non-English-speaking countries because such abilities affect the overall competitiveness of a country (Chen, Hsu, Li, & Peng, 2006). However, previous research found that English learning may be a challenge for students (Ombati, Omari, Ogendo, Ondima, & Otieno, 2013). On the one hand, a universal problem that language teachers raise is that most learners are reluctant to read English materials on their own (Arnold, 2009). On the other hand, some researchers attributed this problem to a lack of learner motivation and self-efficacy (Milkova & Hercik, 2014). Moreover, Wu, Huang, Chao, and Park (2014) indicated that learners may not be able to construct relationships between different vocabulary items and may not possess the abilities to deduce, analyze, gauge, organize, or apply such vocabulary. This may be because learners usually learn new vocabulary by rote memorization, which often causes boredom (Min & Hsu, 2008).
A number of approaches can be applied to solve these problems. Among them, E-assessment is a popular activity adopted by teachers (Wang, Li, Feng, & Liu, 2012). E-assessment, which refers to the evaluation of the knowledge or skills of learners in a computer-based environment, has received much attention in educational settings (Fox, 2013). For instance, Hwang and Chang (2011) incorporated E-assessment into a local culture course and examined relationships between learning interests and learning attitude. Sixty-one students who participated in their study learned the historical background and the transition of local culture via E-assessment. The results indicated that learners' learning interests and learning attitude in the culture course were greatly improved after they used E-assessment. Additionally, Zlatovic, Balaban, and Kermek (2015) employed E-assessment to support an informatics course and investigated students' learning strategies and learning performance. A total of 351 learners from higher education institutions, enrolled in an introductory informatics course, participated in their study. The results showed that the E-assessment not only had positive impacts on their learning strategies, but also improved their learning performance.
In addition to the aforementioned advantages, E-assessment also has other advantages, including improved question styles using interactive multimedia technology (Linden & Glas, 2010), improved validity and reliability (Xiao & Lucking, 2008), and helping learners review what they have learnt (Butler, Karpicke, & Roediger, 2007). In brief, using E-assessment to support learning is helpful. Nevertheless, E-assessment also has some problems. For instance, it emphasizes only whether learners can understand and remember what they have learnt, out of context (Sanchez-Vera, Fernandez-Breis, Castellanos-Nieves, Frutos-Morales, & Prendes Espinosa, 2012). In other words, such a process does not generate new teaching opportunities for teachers or new learning opportunities for learners. Ideal assessment should be more comprehensive and contribute to creating new opportunities for teaching and learning. The other problem is that learners' anxiety still exists. Such anxiety may negatively affect students' learning motivation or learning performance. To address these issues, there is a need to provide learners with scaffolding, which can reduce their anxiety and let them work well under pressure (Hsieh, Jhan, & Chen, 2014). Scaffolding refers to support or guidance provided by a mechanism so that learners will not find a task too difficult to complete effectively (Belland, 2014). This is because scaffolding uses dialogue to help learners develop ideas they most likely would not have had on their own, yet such ideas are recognized as the outcome of their own thought (Game & Metcalfe, 2009).
Motivation and aims
Scaffolding can be presented in several ways, among which hints are one of the most useful (Devolder, van Braak, & Tondeur, 2012). Hints act as mediators between a student and the knowledge he/she attempts to understand, and ultimately assist him/her in reaching goals that could not be accomplished alone (Khaliliaqdam, 2014). Bannert, Sonnenberg, Mengelkamp, and Pieger (2015) claimed that hints can help learners analyze a problem and form a schema to explore a possible solution. Moreover, Bull, Shuler, Overton, Kimball, Boykin, and Griffin (1999) indicated that hints are not one-way, but an interactive and reciprocal process between students and the source of instruction. During such a process, students are not passively receiving assistance but are actively engaged in the learning process, benefiting from the hints to attain a higher level of achievement (Rogoff, 1990).
Because of this potential, various hints have been proposed to support student learning, and such hints can be broadly divided into direct hints and indirect hints (Bannert & Reimann, 2012). The former might help students get right answers, but students might not develop good reasoning. Conversely, the latter might not help students get right answers, but they have the potential to help students develop good reasoning. Direct hints are useful for learning facts while indirect hints are helpful for understanding concepts. In other words, these two types of hints have different advantages and disadvantages.
On the other hand, learners have diverse knowledge, skills, and backgrounds, so they may favor different types of hints. In other words, there is a need to take individual differences into account. As suggested by Stone (1998), individual differences in cognitive, linguistic, and interpersonal skills would affect the effectiveness of scaffolded instruction. Nonetheless, there is a lack of research into relationships between the use of hints and learners' individual differences (Paas, Tuovinen, Tabbers, & van Gerven, 2003). Among various individual differences, previous research has found that learners' cognitive styles influence how they interact with technology-based learning (Chen & Macredie, 2010).
Within the area of cognitive styles, differences between Holists and Serialists have received much attention. According to Jonassen and Grabowski (1993), Holists focus on building a comprehensive framework of the content while Serialists emphasize the details of the content. Ford and Ford (1993) found that Holists and Serialists favored different navigation tools: Holists preferred to use a hierarchical map while Serialists preferred to use an index. In addition, Clewley, Chen, and Liu (2011) indicated that Holists and Serialists showed different behavior patterns. Holists tended to use hyperlinks to browse information whereas Serialists preferred to use the Previous/Next button to browse information step by step. Subsequently, Chan, Hsieh, and Chen (2014) examined how Holists and Serialists used electronic journals. The results showed that Holists and Serialists judged the relevance of documents in different ways. More specifically, Holists tended to use a variety of approaches while Serialists preferred to use a single approach. In brief, the results from previous research demonstrated that Holists and Serialists have different learning preferences.
However, such research ignored the reactions of Holists and Serialists to the use of various hints. Thus, this study attempted to fill this gap by examining how Holists and Serialists react differently to the use of various hints. Among a variety of possible topics, this study used hints to support students in learning academic English because academic English is essential for the dissemination of research outputs and the establishment of international collaboration. In summary, this study contributes to language learning in two respects. One is to use hints as scaffolding instruction to support students in learning academic English in the context of E-assessment. The other is to contribute to knowledge of the effects of cognitive styles on the use of hints in an academic English course.
To examine these issues in depth, we adopted a learning analytics approach, which refers to the interpretation of a variety of data created by and collected on behalf of learners to identify potential issues (Long, Siemens, Conole, & Gasevic, 2011). Hence, an integrative data analysis was applied in this study. More specifically, both quantitative and qualitative methods were applied to analyze the data. Furthermore, we also investigated students' behavior with lag sequential analysis, which has been recognized as a promising approach for identifying significant sequential relationships in a series of behaviors. Hou, Chang, and Sung (2007) found that lag sequential analysis can not only identify significant sequential behavior patterns but also illustrate the relationships between such behavior patterns with visual diagrams. Due to such benefits, lag sequential analysis was also employed in this study. By doing so, we can obtain a complete understanding of (1) how different cognitive style groups perform, (2) what their behavior differences are, (3) why they behave differently, and (4) when they can perform well.
The development of an online test
We developed an online test to help students learn academic English, which is important to research students because it can help them develop an international perspective (Yang & Zhang, 2015). In particular, the content focused on enhancing students' understanding of English phrases.
To help the research students learn English phrases without frustration, the design principle of the online test was scaffolding instruction (Wood, Bruner, & Ross, 1976), which means providing support for students so that they can achieve a deeper understanding of a complex task (Irvin, Meltzer, & Dukes, 2007). Due to such an advantage, scaffolding instruction has been widely used in English learning (Dare & Polias, 2001). Consequently, we also incorporated scaffolding instruction into the online test, where hints were applied to scaffold students. More specifically, the online test provided two types of hints, i.e., synonym hints and Chinese hints. The former, which were indirect hints, provided the meanings of English phrases in English synonyms while the latter, which were direct hints, translated the meanings of the English phrases into Chinese. Based on such hints, the online test had three versions. One version provided multiple hints (i.e., the MH condition, Figure 1), another version contained no hints (i.e., the NH condition, Figure 2), and the third version included Chinese hints only (i.e., the CH condition, Figure 3). Students were allowed to use hints without any limitations when they took the online test in the MH condition or the CH condition. No points were deducted, irrespective of which hints were chosen.
In all conditions, the online test was delivered in the form of 30 multiple-choice questions, which were designed by an English expert. Each question was provided with three options, one of which was correct. If a wrong option was selected, five points were deducted. Furthermore, the online test was implemented on a 10-inch Android mobile device because its display screen is appropriately sized. Such a mobile device not only demonstrates acceptable readability, but also offers the convenience of accessing information (Lan, Sung, Tan, Lin, & Chang, 2010; Huang, Wu, & Chen, 2012).
In this study, we used a quasi-experimental design to examine the impacts of cognitive styles on the use of hints in an online test. Forty-six individuals participated in this study. Participants were research students at a university in Taiwan and volunteered to take part. All participants had general English abilities but knew little about academic English.
Prior to conducting the experiment, the participants were given a series of academic English courses. During the experiment, they initially needed to take the Study Preference Questionnaire (SPQ). The SPQ includes two sets of 17 statements, and the participants were asked to indicate their degree of agreement with each statement or to indicate no preference. This study identified Holists and Serialists using the criteria suggested by Ford (1995): (a) students who agree with over half of the statements related to Holists are identified as Holists; (b) students who agree with over half of the statements related to Serialists are identified as Serialists. Due to such simple criteria, the SPQ was chosen for the current study.
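Ford's (1995) majority criterion described above amounts to a simple rule. The following is a minimal sketch of that rule; the function name, interface, and the "Unclassified" fallback are our illustrative assumptions, not part of the SPQ instrument itself:

```python
def classify_spq(holist_agreements, serialist_agreements, n_statements=17):
    # Ford's (1995) criterion: a student who agrees with over half of the
    # statements in a set is assigned to that cognitive style group.
    # Names and the "Unclassified" fallback are illustrative assumptions.
    if holist_agreements > n_statements / 2:
        return "Holist"
    if serialist_agreements > n_statements / 2:
        return "Serialist"
    return "Unclassified"
```

For example, a student agreeing with 10 of the 17 Holist-related statements would be classified as a Holist under this rule.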
The results from the SPQ indicated that the sample consisted of 24 Holists and 22 Serialists. Subsequently, the participants were requested to interact with the aforementioned three versions of the online test. It took about 15 minutes to interact with each version. Furthermore, students needed to use each version to complete a task, which included 10 questions. How they completed the tasks was recorded in a log file, including the frequencies and times of using various hints and the total time spent interacting with the online test. The data recorded in the log file were applied to identify their learning behavior. They also needed to complete a formal test, which consisted of 20 multiple-choice questions. Finally, they were requested to fill out a questionnaire, which consisted of five open-ended questions, i.e., (1) the advantages of using the online test, (2) the disadvantages of using the online test, (3) English skills learnt from the online test, (4) the difficulty of using the online test, and (5) the favorite type of hint. These five questions were applied to examine their responses to the online test.
Data analyses were conducted with an integrative approach, which included quantitative analyses, qualitative analyses and lag sequential analyses.
* Quantitative Analyses: A paired t-test was used to analyze differences between conditions, and Pearson's correlations were employed to analyze relationships between the hints used in each condition and the test scores. Furthermore, an independent t-test was applied to identify differences between Holists and Serialists in terms of learning performance and learning behavior. The former was measured based on the scores obtained from the tasks and the test, i.e., the task scores and the test score, and the time spent completing the tasks, i.e., the task time. The latter was identified on the basis of the frequencies of using various hints, including Chinese hints and synonym hints.
* Qualitative Analyses: Two researchers who had strong background in qualitative methodologies and had interests in English learning were responsible for reading and coding responses to the open questions of the questionnaire based on procedures suggested by Merriam (1988). Subsequently, similar responses were clustered together and were given headings that best described the characteristics of the responses in each cluster, for which a frequency count was performed. Finally, the responses from Holists and those from Serialists were compared so that we could identify similarities and differences between them.
* Lag Sequential Analyses: This study involved multiple conditions and multiple hints, which increased the complexity of the data. Thus, the aforementioned quantitative and qualitative analyses alone were not sufficient. To address this issue, lag sequential analysis was also applied in this study, where sequential relationships between behaviors were illustrated with visual diagrams so that behavior differences between Holists and Serialists could be clearly identified. The other benefit of lag sequential analysis is that it can determine how often one behavior was followed by another (Lehmann-Willenbrock & Allen, 2014). Accordingly, we used lag sequential analysis to discover which behaviors led students to a correct or wrong answer. Such sequential relationships can help instructors design an online test that enhances students' performance and helps students choose suitable hints for themselves.
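The Pearson correlation used in the quantitative analyses above can be sketched in a few lines. This is a minimal illustration of the r statistic only; in practice a statistics package (e.g., scipy.stats.pearsonr, ttest_rel, and ttest_ind) would also supply the p-values reported in the results:

```python
from math import sqrt

def pearson_r(xs, ys):
    # Pearson product-moment correlation between two equal-length samples,
    # e.g., task scores vs. test scores across students.
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)
```

Perfectly linear data yields r = 1 (or -1 for a decreasing relationship); values such as the r = .490 reported below indicate a moderate positive association.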
Results from quantitative analyses
Results from quantitative analyses are presented in this section, where we begin by describing overall tendencies and then depict the effects of cognitive styles on learning performance and learning behavior. Finally, relationships between the task scores and the test scores for each cognitive style group are presented.
In this section, we analyzed overall tendencies by identifying correlations between the task scores and the test scores, and between the time spent completing the tasks and the test scores, for all students. The results indicated that students' task scores were significantly related to their test scores (r = .490, p < .05). However, no significant correlation existed between the task time and the test scores.
Further to their performance in the tasks, we also investigated the relationships between the test scores and the frequencies of using various hints for all students. We found that the test scores were significantly associated with the frequencies of using synonym hints (r = .523, p < .05), but not with those of using Chinese hints (p > .05). More specifically, students who used synonym hints more frequently obtained higher test scores. These results implied that synonym hints were more helpful to the students than Chinese hints, in terms of the test scores. This might be because synonym hints could assist students to express their thoughts in multiple ways while Chinese hints could only help them understand the Chinese meanings of vocabulary. Thus, there is a need to encourage students to use synonym hints, with which they could effectively express their ideas.
As mentioned before, students were provided with three conditions, i.e., the NH condition, the MH condition, and the CH condition. As shown in Table 1, Holists and Serialists obtained similar task scores in all conditions (p > .05). On the other hand, we also examined the effects of cognitive styles on the task time in each condition. Like the task scores, there were no significant differences between Holists and Serialists in any condition (p > .05). In particular, they spent a similar amount of time completing the tasks in the NH condition, which did not provide any hints.
Furthermore, we analyzed how Holists and Serialists reacted to these three conditions. Regarding the task scores, students performed similarly in the three conditions, whether Holists (p > .05) or Serialists (p > .05). Regarding the task time (Figure 4), Holists spent similar amounts of time completing the tasks in the three conditions (p > .05). Conversely, the task time that Serialists spent in each condition differed (p < .05). More specifically, Serialists in the NH condition spent significantly more time completing the tasks than those in the MH condition (t = 3.33, p < .05). Additionally, Serialists in the CH condition took significantly more time to complete the tasks than those in the MH condition (t = 2.88, p < .05). However, no significant difference was found between Serialists in the NH condition and those in the CH condition (p > .05). These findings suggested that the lack of hints might make Serialists spend much more time completing the tasks. In other words, Serialists relied more on the hints than Holists did. Such results echo those of Chen and Chang (2016), which revealed that Serialists needed additional support.
Whether Holists and Serialists demonstrated different learning behavior in these three conditions is investigated in this section. The results indicated that Holists and Serialists behaved similarly in the CH condition (p > .05) and the NH condition (p > .05). However, there was a significant difference between Holists (Mean = 3.83, SD = 4.63) and Serialists (Mean = .64, SD = 2.11) in the MH condition, in terms of the frequencies of using the hints (Figure 5). More specifically, Holists used the hints more frequently than Serialists (t = 2.16, p < .05). After analyzing the frequencies of using each type of hint, we found that such a significant difference existed mainly for synonym hints (t = 2.36, p < .05), rather than Chinese hints (p > .05). To be more specific, Holists (Mean = 2.33, SD = 1.87) used synonym hints more frequently than Serialists (Mean = .27, SD = .04). In other words, Serialists used hints less frequently, especially synonym hints (Figure 6).
On the other hand, the results from Pearson's correlations indicated that no significant correlations existed between Holists' task scores and the frequencies of using Chinese hints (p > .05) and synonym hints (p > .05).
However, Serialists' task scores were related to the frequencies of using Chinese hints (r = .048, p < .05) as well as those of using synonym hints (r = .048, p < .05). These findings implied that Serialists might use Chinese hints and synonym hints in a relatively smart way. Accordingly, the more hints they used, the higher their task scores.
Task scores vs. test scores
As mentioned before, a significant correlation existed between students' task scores and their test scores. In this section, we further analyzed such a correlation from a cognitive style perspective. The results indicated that no such significant correlation was found for Serialists while there were some significant correlations for Holists.
In general, Holists' task scores in the three conditions were significantly related to their test scores (r = .628, p < .05). After analyzing the performance of Holists in each condition, we found that such a correlation occurred mainly in the NH condition. More specifically, Holists' task scores in the MH (p > .05) and CH (p > .05) conditions were not significantly related to their test scores, but their task scores in the NH condition were significantly associated with their test scores (r = .626, p < .05). The difference between the NH condition and the remaining two conditions lay in the fact that the former did not offer any hints while the latter provided some hints. In other words, students could not get any additional support in the NH condition, so they had to rely on their prior knowledge. This finding revealed that Holists could make the best use of their prior knowledge to cope with the test, instead of depending on the tasks only. In other words, they could not only use what they had learned from each task, but also take advantage of their prior knowledge. This might be because Holists took a comprehension learning style, viewing the tasks and their prior knowledge as a whole (Howie, 1995).
Results from qualitative analyses
This section presents the results from the responses to the open-ended questions of the questionnaire, which were analyzed with a qualitative approach. Holists and Serialists shared several similar responses while some differences also existed between them.
Similarities between Holists and Serialists
Holists and Serialists showed similar responses to the advantages and disadvantages of using the online test and enhanced English skills. The details are presented in Table 2.
Differences between Holists and Serialists
On the other hand, differences also existed between Holists and Serialists. Firstly, they valued different types of hints. More specifically, Holists (N = 15, 63%) preferred to use synonym hints while Serialists (N = 13, 59%) preferred to use Chinese hints. Furthermore, differences also existed regarding the value of the online test and the difficulties that they met. Regarding the value of the online test, Holists paid attention to overall test strategies. For instance, their thoughts were:
"It is valuable to me that the online test allows me to make a guess."
"The online test is useful because it facilitates me to read the questions carefully."
"I appreciate that the online test helps me use multiple test strategies."
"The online tests let me make effective use of resources to answer questions."
Conversely, Serialists emphasized single features provided by the online test, e.g.:
"The provision of hints is very helpful to me."
"I learn some new vocabularies from this online test."
"I learn how to use hints smartly to answer the question."
"I do not need to write the answer to each question"
Regarding the difficulties that they met, Holists focused on the questions examined in the online test. For example, Holists thought:
"I feel difficult to understand the questions."
"I could only use limited time to answer the questions."
"Answering the questions needed to put heavy mental effort."
In contrast, Serialists were concerned with additional support, e.g.,
"There was a lack of an Introduction on how to take the test."
"I was not allowed to rotate the screen to answer the questions."
"I needed an alert before I submitted a wrong answer."
Results from lag sequential analysis
This section presents the results from the lag sequential analysis, including behavior frequency analysis, behavior sequential analysis and learning behavior patterns.
Behavior frequency analysis
The individual behaviors of the participants were coded using a coding scheme that included five kinds of codes (Table 3). According to the rationale of lag sequential analysis, participants' behaviors were coded in the chronological order of their occurrence. For example, if a participant used Synonym Hint 1 (S1), Synonym Hint 2 (S2), and the Chinese hint (C), and then got a wrong answer (N) to the question, this series of behaviors was coded as S1S2CN. In total, there were 645 behavior codes.
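Coded strings such as S1S2CN can be split back into individual codes and lag-1 transitions before building the frequency transition tables. The helpers below are illustrative assumptions (the study does not describe its tooling); they assume each code is either S1/S2 or a single capital letter such as C or N:

```python
import re

def parse_codes(coded):
    # Split a chronological behavior string into its codes,
    # matching S1/S2 first so their digits are not treated separately.
    return re.findall(r"S[12]|[A-Z]", coded)

def lag1_pairs(coded):
    # Lag-1 transitions: each behavior paired with the one that follows it.
    codes = parse_codes(coded)
    return list(zip(codes, codes[1:]))
```

For example, lag1_pairs("S1S2CN") yields [('S1', 'S2'), ('S2', 'C'), ('C', 'N')], the three transitions contained in the S1S2CN example above.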
Behavior sequential analysis
Based on the frequency transition tables (Tables 4 and 5), we examined whether the connection between each sequence reached statistical significance. The z-score of each sequence was calculated to determine whether its continuity reached the level of significance; a z-value greater than +1.96 indicated that a sequence reached significance (p < .05). Accordingly, the codes obtained from Holists and Serialists yielded the adjusted-residual tables (Tables 6 and 7). Furthermore, we derived the behavior-transfer diagrams of Holists and Serialists. Figures 7 and 8 illustrate all sequences that reached significance; the numerical values in the figures are the sequences' z-scores and the arrows indicate the direction of transfer for each sequence.
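The z-scores in such adjusted-residual tables are commonly computed as Allison-Liker adjusted residuals over the lag-1 transition table. The study does not state its exact formula, so the following sketch is an assumption based on standard lag sequential analysis practice:

```python
from math import sqrt

def adjusted_residuals(table):
    # `table` is a square dict of dicts of lag-1 transition counts,
    # e.g., table["S1"]["S2"] = number of times S2 followed S1.
    # Returns the Allison-Liker adjusted residual (z-score) per sequence;
    # z > +1.96 marks the sequence as significant at p < .05.
    codes = sorted(table)
    n = sum(table[a][b] for a in codes for b in codes)
    row = {a: sum(table[a][b] for b in codes) for a in codes}
    col = {b: sum(table[a][b] for a in codes) for b in codes}
    z = {}
    for a in codes:
        for b in codes:
            expected = row[a] * col[b] / n
            denom = sqrt(expected * (1 - row[a] / n) * (1 - col[b] / n))
            z[(a, b)] = (table[a][b] - expected) / denom if denom else 0.0
    return z
```

A transition that occurs far more often than expected by chance (given the row and column totals) yields a large positive z-score, which is what the behavior-transfer diagrams visualize.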
Learning behavior patterns
After comparing the behavioral diagram of Holists with that of Serialists, we found that Holists and Serialists demonstrated significantly different behavior sequences. Regarding the MH condition, the significant sequence S1 ↔ S2 was found for Holists, indicating that after they used Synonym Hint 1, they would use Synonym Hint 2, and subsequently they would use Synonym Hint 1 again. On the other hand, the significant sequence S2 → C was found for Serialists, indicating that Serialists used Synonym Hint 2 and then used the Chinese hint. In other words, Holists demonstrated iterative behavior while Serialists showed sequential behavior. Such findings implied that Holists relied on synonym hints. However, Serialists did not depend on synonym hints so much because they would also use Chinese hints. A difference between the synonym hints and Chinese hints lay in the fact that the former were indirect hints, which required students to do the translation twice, while the latter were direct hints, which allowed students to do the translation once. In other words, the latter were more directly relevant to the current task than the former. Accordingly, it seemed that Serialists tended to take an approach that was directly related to their task. These findings are coherent with those of Clewley, Chen, and Liu (2010), which indicated that Serialists preferred to use only the options relevant to their current tasks.
Regarding the CH condition, no significant sequence was found for Serialists. However, the significant sequence C → N was found for Holists, indicating that Holists used the Chinese hints and then got a wrong answer. This finding implied that using Chinese hints might lead Holists to a wrong answer. Holists are suited to educational materials with rich information (Howie, 1995), but only Chinese hints were provided in the CH condition. Thus, the CH condition did not match the needs of Holists. In brief, the aforementioned results from the lag sequential analyses suggested that cognitive styles had great effects on students' behavior patterns.
Figure 9 presents a framework that summarizes the findings of this study. As shown in this figure, cognitive styles have considerable influence on students' learning patterns when they use the online test. In particular, they used hints very differently. Such behavior differences may be a reflection of differences in their cognitive styles (Wildemuth et al., 1998). The details are discussed below.
Holists took an iterative approach
The results from Holists' qualitative responses to the open-ended questions of the questionnaire indicated that Holists valued the multiple test strategies provided by the online test. This may be the reason why Holists repeatedly used the two types of synonym hints. Due to such iterative behavior, the frequencies with which Holists used the hints were higher than those of Serialists. This result is coherent with that of Clewley, Chen, and Liu (2010), which indicated that Holists preferred to have all of the available options. In other words, they prefer to collect large amounts of information (Howie, 1995), instead of focusing on a single item. This might be due to the fact that Holists tended to be relatively global (Mampadi, Chen, Ghinea, & Chen, 2011), so they took breadth-first paths. Hence, such a global approach made them demonstrate iterative behavior.
Serialists needed additional support
The results from the quantitative data indicated that Serialists' task scores were significantly related to the frequencies of using Chinese hints and those of using synonym hints. Thus, using Chinese hints and synonym hints was useful for Serialists to get higher task scores. Furthermore, Serialists in the NH condition and those in the CH condition spent significantly more time completing the tasks than those in the MH condition. In other words, Serialists spent more time completing the tasks if fewer or no hints were offered. These findings implied that Serialists needed additional support from hints. This might be owing to the fact that Serialists tended to take a suggested route (Clewley, Chen, & Liu, 2011) and the hints provided such a suggested route for Serialists. This might be the reason why Serialists relied on hints.
This study aimed to investigate how cognitive styles affected learners' reactions to the use of hints in the context of academic English, in terms of learning behavior and learning performance. To obtain deep understanding, a learning analytics approach was applied in this study, including quantitative measurement, qualitative evaluation, and lag sequential analyses. The results from the quantitative measurement indicated that the frequencies of using Chinese hints and synonym hints were significantly associated with Serialists' task scores while such significant correlations were not found for Holists. The results from the qualitative evaluation showed that Holists preferred to use synonym hints while Serialists preferred to use Chinese hints. The results from the lag sequential analyses suggested that Holists demonstrated iterative behavior while Serialists showed sequential behavior. In summary, cognitive styles have considerable influence on students' learning patterns in the context of academic English.
Thus, designers should consider how to develop personalized online tests that can accommodate the preferences of different cognitive style groups based on the findings of this study. However, this study has several limitations. First, the sample was small, so further work needs to use a larger sample to verify the findings presented in this study. Additionally, this study took into account differences between Holists and Serialists only. Therefore, future research should consider other dimensions of cognitive styles (e.g., Field Dependence and Field Independence). By doing so, more comprehensive results could be obtained.
The authors would like to thank the Ministry of Science and Technology of the Republic of China, Taiwan, for financial support (MOST 104-2511-S-008-008-MY3).
Arnold, N. (2009). Online extensive reading for advanced foreign language learners: An Evaluation study. Foreign Language Annals, 42(2), 340-366.
Bannert, M., & Reimann, P. (2012). Supporting self-regulated hypermedia learning through prompts. Instructional Science, 40, 193-211.
Bannert, M., Sonnenberg, C., Mengelkamp, C., & Pieger, E. (2015). Short- and long term effects of students' self-directed metacognitive prompts on navigation behavior and learning performance. Computers in Human Behavior, 52, 293-306.
Belland, B. R. (2014). Scaffolding: Definition, current debates, and future directions. In J. M. Spector, M. D. Merrill, J. Elen, & M. J. Bishop (Eds.), Handbook of research on educational communications and technology (4th ed., pp. 505-518). New York, NY: Springer.
Bull, K. S., Shuler, P., Overton, R., Kimball, S., Boykin, C., & Griffin, J. (1999). Processes for developing scaffolding in a computer mediated learning environment. Paper presented at 19th conference proceedings of the American Council on Rural Special Education (ACRES), Albuquerque, New Mexico.
Butler, A. C., Karpicke, J. D., & Roediger III, H. L. (2007). The Effect of type and timing of feedback on learning from multiple-choice tests. Journal of Experimental Psychology: Applied, 13(4), 273-281.
Chan, C. H., Hsieh, C. W., & Chen, S. Y. (2014). Cognitive styles and the use of electronic journals in a mobile context. Journal of Documentation, 70(6), 997-1014.
Chen, C. M., Hsu, S. H., Li, Y. L., & Peng, C. J. (2006). Personalized intelligent m-learning system for supporting effective English learning. In IEEE International Conference on Systems, Man and Cybernetics (SMC'06) (Vol. 6, pp. 4898-4903). doi:10.1109/ICSMC.2006.385081
Chen, S. Y., & Chang, L. P. (2016). The Influences of cognitive styles on individual learning and collaborative learning. Innovations in Education and Teaching International, 53(4), 458-471.
Chen, S. Y., & Macredie, R. D. (2010). Web-based interaction: A Review of three important human factors. International Journal of Information Management, 30(5), 379-387.
Clewley, N., Chen, S. Y., & Liu, X. (2010). Cognitive styles and search engine preferences: Field dependence/independence vs. holism/serialism. Journal of Documentation, 56(4), 585-603.
Clewley, N., Chen, S. Y., & Liu, X. (2011). Mining learning preferences in web-based instruction: Holists vs. serialists. Educational Technology & Society, 14(4), 266-277.
Dare, B., & Polias, J. (2001). Learning about language: scaffolding in ESL classrooms. In J. Hammond (Ed.), Scaffolding Teaching and learning in language and literacy education (pp. 91-110). Newtown, Australia: Primary English Teaching Association.
Devolder, A., van Braak, J., & Tondeur, J. (2012). Supporting self-regulated learning in computer-based learning environments: Systematic review of effects of scaffolding in the domain of science education. Journal of Computer Assisted Learning, 28, 557-573.
Ford, N. (1995). Levels and types of mediation in instructional systems: An Individual differences approach. International Journal of Human-Computer Studies, 43(2), 241-259.
Ford, N., & Ford, R. (1993). Towards a cognitive theory of information accessing: An Empirical study. Information Processing & Management, 29(5), 569-585.
Fox, J. P. (2013). Multivariate zero-inflated modeling with latent predictors: Modeling feedback behavior. Computational Statistics & Data Analysis, 68, 361-374.
Game, A., & Metcalfe, A. (2009). Dialogue and team teaching. Higher Education Research & Development, 28, 45-57.
Hou, H. T., Chang, K. E., & Sung, Y. T. (2007). An Analysis of peer assessment online discussions within a course that uses project-based learning. Interactive Learning Environments, 15(3), 237-251.
Howie, D. (1995). To the beat of a different drummer: The Role of individual differences in ecological interface design. [Technical report]. Ontario, Canada: Cognitive Engineering Laboratory, University of Toronto.
Hsieh, C.-W., Jhan, S.-N., & Chen, S. Y. (2014). Individual differences and joyful assessment-based learning. In Proceedings of the IEEE 14th International Conference on Advanced Learning Technologies (ICALT) (pp. 330-335). doi:10.1109/ICALT.2014.101
Huang, H. W., Wu, C. W., & Chen, N. S. (2012). The Effectiveness of using procedural scaffoldings in a paper-plus-smart phone collaborative learning context. Computers & Education, 59(2), 250-259.
Hwang, G. J., & Chang, H. F. (2011). A Formative assessment-based mobile learning approach to improving the learning attitudes and achievements of students. Computers & Education, 56(4), 1023-1031.
Irvin, J. L., Meltzer, J., & Dukes, M. (2007). Taking action on adolescent literacy: An Implementation guide for school leaders. Alexandria, VA: Association for Supervision & Curriculum Development.
Jonassen, D. H., & Grabowski, B. (1993). Individual differences and instruction. New York, NY: Allen & Bacon.
Khaliliaqdam, S. (2014). ZPD, scaffolding and basic speech development in EFL Context. Procedia-Social and Behavioral Sciences, 98, 891-897.
Long, P., Siemens, G., Conole, G., & Gasevic, D. (Eds.) (2011). Proceedings of the 1st international conference on learning analytics and knowledge (LAK'11). New York, NY: ACM.
Lan, Y.-J., Sung, Y.-T., Tan, N. C., Lin, C. P., & Chang, K. E. (2010). Mobile-device-supported problem-based computational estimation instruction for elementary school students. Educational Technology & Society, 13(3), 55-69.
Lehmann-Willenbrock, N., & Allen, J. A. (2014). How fun are your meetings? Investigating the relationship between humor patterns in team interactions and team performance. Journal of Applied Psychology, 99(6), 1278-1287.
van der Linden, W. J., & Glas, C. A. W. (2010). Elements of adaptive testing. New York, NY: Springer.
Mampadi, F., Chen, S. Y., Ghinea, G., & Chen, M. P. (2011). Design of adaptive hypermedia learning systems: A Cognitive style approach. Computers & Education, 56(4), 1003-1011.
Merriam, S. B. (1988). Case study research in education: A Qualitative approach. London, UK: Jossey-Bass.
Milkova, E., & Hercik, P. (2014). Possibilities of language skills development within the framework of communicative competence mastering in elearning language courses. Procedia-Social and Behavioral Sciences, 131, 182-186.
Min, H.-T., & Hsu, W.-S. (2008). [The Impact of supplemental reading on vocabulary acquisition and retention with EFL learners in Taiwan]. Journal of National Taiwan Normal University, 53(1), 83-115. (in Chinese)
Ombati, J. M., Omari, L. N., Ogendo, G. N., Ondima, P. C., & Otieno, R. R. (2013). Evaluation of factors influencing the performance of Kenyan secondary school students in English grammar: A Case of Nyamaiya division, Nyamira county, Kenya. Journal of Education and Practice, 4(9), 58-63.
Paas, F., Tuovinen, J. E., Tabbers, H., & van Gerven, P. W. (2003). Cognitive load measurement as a means to advance cognitive load theory. Educational Psychologist, 38(1), 63-71.
Rogoff, B. (1990). Apprenticeship in thinking: Cognitive development in social context. New York, NY: Oxford University Press.
Sanchez-Vera, M., Fernandez-Breis, J. T., Castellanos-Nieves, D., Frutos-Morales, F., & Prendes-Espinosa, M. P. (2012). Semantic Web technologies for generating feedback in online assessment environments. Knowledge-Based Systems, 33, 152-165.
Stone, C. A. (1998). The Metaphor of scaffolding: Its utility for the field of learning disabilities. Journal of Learning Disabilities, 31(4), 344-364.
Wang, Y., Li, H., Feng, Y., Jiang, Y., & Liu, Y. (2012). Assessment of programming language learning based on peer code review model: Implementation and experience report. Computers & Education, 59, 412-422.
Wildemuth, B. M., Friedman, C. P., & Downs, S. M. (1998). Hypertext versus Boolean access to biomedical information: A Comparison of effectiveness, efficiency, and user preferences. ACM Transactions on Computer-Human Interaction, 5(2), 156-183.
Wood, D., Bruner, J. S., & Ross, G. (1976). The Role of tutoring in problem solving. Journal of Child Psychology and Psychiatry, 17, 89-100.
Wu, T. T., Huang, Y. M., Chao, H. C., & Park, J. H. (2014). Personalized English reading sequencing based on learning portfolio analysis. Information Sciences, 257, 248-263.
Xiao, Y., & Lucking, R. (2008). The Impact of two types of peer assessment on students' performance and satisfaction within a Wiki environment. The Internet and Higher Education, 11(3), 186-193.
Yang, C., & Zhang, L. J. (2015). China English in trouble: Evidence from teacher dyadic talk. System, 51, 39-50.
Zlatovic, M., Balaban, I., & Kermek, D. (2015). Using online assessments to stimulate learning strategies and achievement of learning goals. Computers & Education, 91, 32-45.
Sherry Y. Chen * and Chia-Chi Yeh
Graduate Institute of Network Learning Technology, National Central University, Jhongli, Taiwan // firstname.lastname@example.org // email@example.com
* Corresponding author
Caption: Figure 1. MH condition
Caption: Figure 2. NH condition
Caption: Figure 3. CH condition
Caption: Figure 4. The task time in each condition
Caption: Figure 5. The frequencies of using hints
Caption: Figure 6. The use of hints in the MH condition
Caption: Figure 7. The behavioral transition diagrams in the MH condition
Caption: Figure 8. The behavioral transition diagrams in the CH condition
Caption: Figure 9. The framework to summarize the findings
Table 1. Performance in each condition

Conditions  Groups      Task scores Mean (SD)  Task time Mean (SD)
MH          Holists     88.75 (7.11)           176.42 (48.52)
            Serialists  90.45 (5.22)           136.64 (51.34)
NH          Holists     77.92 (19.40)          193.42 (85.61)
            Serialists  81.83 (11.02)          199.36 (87.92)
CH          Holists     85.42 (11.78)          179.08 (74.60)
            Serialists  86.36 (9.77)           210.18 (86.76)

Table 2. Similar responses

Advantages of using the online test:
* Immediate feedback to know the answers
* Lightweight and easy to take
* No limitations for time and places
* Simple and visual interface
* Full of curiosity and freshness
* No need to make hard copies

Disadvantages of using the online test:
* Uncomfortable to use a touch screen
* Unable to answer questions non-sequentially
* Unable to take notes
* No chance to correct wrong answers

Enhanced English skills:
* Sentences
* Prepositions
* Grammatical issues
* Sentence structure
* Tenses of verbs

Table 3. Coding scheme of learning behaviors

Code  Behavior
S1    Use Synonymy Hint 1
S2    Use Synonymy Hint 2
C     Use Chinese Hint
Y     Get a Correct Answer
N     Get a Wrong Answer

Table 4. Frequency transitions in the MH condition

Holists     S1  S2  C   Y   N
S1          0   8   1   7   1
S2          3   0   2   4   2
C           1   2   0   11  4
Y           10  0   12  70  15
N           2   1   1   19  4

Serialists  S1  S2  C   Y   N
S1          0   0   0   2   0
S2          0   0   1   0   0
C           0   0   0   4   0
Y           1   1   3   80  14
N           0   0   0   17  4

Table 5. Frequency transitions in the CH condition

Holists     C   Y   N
C           1   27  12
Y           30  62  16
N           5   24  6

Serialists  C   Y   N
C           1   19  7
Y           22  62  14
N           1   22  7

Table 6. The sequential analysis of behavior in the MH condition

Holists     S1     S2     C      Y      N
S1          -1.19  7.44*  -0.41  -1.19  -0.83
S2          2.19*  -0.77  1.05   -0.96  0.46
C           -0.41  1.05   -1.26  0.17   1.06
Y           0.04   -2.57  0.49   0.13   -0.18
N           -0.16  -0.38  -0.90  0.88   0.24

Serialists  S1     S2     C      Y      N
S1          -0.16  -0.12  -0.23  0.44   -0.53
S2          -0.12  -0.08  5.96*  -0.86  -0.37
C           -0.23  -0.16  -0.33  0.63   -0.75
Y           -0.39  0.31   0.04   -0.13  -0.38
N           -0.53  -0.37  -0.75  0.43   0.62

Note. *p < .05.

Table 7. The sequential analysis of behavior in the CH condition

Holists     C      Y      N
C           -2.47  0.87   2.07*
Y           1.54   -1.11  -0.99
N           -0.68  0.89   0.04

Serialists  C      Y      N
C           -1.55  0.64   1.16
Y           1.42   -0.78  -1.09
N           -1.69  0.90   0.88

Note. *p < .05.
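The z-scores in the sequential-analysis tables are derived from the frequency transitions by testing each observed transition count against its expected count under independence. The study's exact computational settings are not reported here, so the following is only a sketch, assuming the Allison-Liker style adjusted residual commonly used in lag sequential analysis, applied to the Holists' CH-condition frequencies from Table 5:

```python
import numpy as np

def adjusted_residuals(freq):
    """Adjusted residuals (z-scores) for a lag-1 transition
    frequency matrix, in the Allison-Liker style commonly used
    for lag sequential analysis; |z| > 1.96 marks a transition
    as significant at p < .05."""
    freq = np.asarray(freq, dtype=float)
    n = freq.sum()
    row = freq.sum(axis=1, keepdims=True)   # totals per "given" behavior
    col = freq.sum(axis=0, keepdims=True)   # totals per "target" behavior
    expected = row @ col / n                # expected counts under independence
    variance = expected * (1 - row / n) * (1 - col / n)
    return (freq - expected) / np.sqrt(variance)

# Holists' C/Y/N frequency transitions in the CH condition (Table 5).
holists_ch = [[1, 27, 12],
              [30, 62, 16],
              [5, 24, 6]]
z = adjusted_residuals(holists_ch)
print(np.round(z, 2))
```

Because details such as the handling of session boundaries are not specified above, the resulting z-scores should be read as a methodological illustration rather than an exact reproduction of the tabled values.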
Author: Chen, Sherry Y.; Yeh, Chia-Chi
Publication: Educational Technology & Society
Date: Apr 1, 2017