
Does Augmented Reality Effectively Foster Visual Learning Process in Construction? An Eye-Tracking Study in Steel Installation.

1. Introduction

Currently, with information technology playing an increasingly important role in various fields, increasing attention is being paid to its potential in education [1]. The construction industry is a complex environment in which engineers must deal with integrated information, and construction education has long been challenged accordingly: traditional teaching or training is not effective enough to bridge the gap between academia and practice [2]. Information technology, however, enables new education strategies to assist learning, one of which has gained much attention in recent years: the application of augmented reality (AR) [3]. AR is a technology that enhances and augments reality by generating virtual objects in real environments [4]. Such coexistence of virtual and real objects helps learners visualize complex spatial relationships and abstract concepts [5].

The application of AR technology in education has been developing for more than 20 years, and AR has been applied to many fields, such as astronomy, chemistry, biology, mathematics, and geometry [6]. When the effectiveness of an AR learning environment is evaluated, it is usually compared with a text-graph- (TG-) based learning tool. In the construction industry, apprenticeship programs are common site training methods in which risk is unavoidable [7]; AR, in contrast, is a significant educational measure with no health or safety risks [8]. Many researchers have proposed AR-based frameworks to bring remote job sites indoors [9], transform learning processes [10], or enhance the comprehension of complex dynamic and spatial-temporal constraints [11]. The use of AR technology can be an efficient way to assist learning, but there is still little quantitative evidence about the effects of AR [3]. Many researchers have evaluated the effects of AR on learning outcomes while ignoring the potential causes of those effects during the learning process.

Eye tracking is a measurement of eye movement that can reveal aspects of learners' learning processes [12]. With eye-tracking software available to record and produce gaze data, studies on learners' cognitive processes have entered a new phase [13].

TG-based and physical model- (PM-) based environments are common tools for construction learning and training. The authors of the present study conducted a construction class learning experiment to (1) evaluate learning outcomes across TG-based, AR-based, and PM-based environments and (2) investigate the underlying causes of each learning method's effects, and the potential effects of AR, from a cognitive perspective by utilizing eye-movement data.

2. Literature Review

2.1. Does AR Facilitate or Inhibit Learning Efficiency? Multimedia learning theory suggests that appealing design features can increase cognitive engagement and retain learner attention [14]. Further investigation showed that the visual detail in a multimedia resource shapes both learning effectiveness and instructional multimedia design [15]. According to Mayer [16], cognitive load theory provides the basis for instructional design principles [17], and the cognitive theory of multimedia learning (CTML) distinguishes among three kinds of processing demands that arise during learning: (1) extraneous processing, which is caused by the manner in which the material is presented and increases the chance that attention will be split among multiple sources of information; poor instruction may amplify this processing and thus inhibit transfer of learning; (2) essential processing, which is needed to mentally represent the presented material and is caused by the complexity of the material; and (3) generative processing, which is needed to make sense of the material and is driven by the learner's efforts, such as selecting, organizing, and integrating information. As asserted in previous studies, extraneous and germane cognitive loads can be manipulated, whereas intrinsic cognitive load cannot [17]. According to Mayer, however, extraneous, essential, and generative processing can all be managed [18]. Furthermore, unnecessary loads that stem from the design of instruction impose extraneous cognitive load [19], and inefficient searching for information may increase extraneous cognitive load and disturb essential processing. Therefore, a reasonable reduction of redundant information is an important way to reduce cognitive load and thereby enhance learning.
Corresponding measures include reducing extraneous processing (e.g., highlighting crucial material with color), managing essential processing (e.g., decomposing learning material into several parts), and fostering generative processing [16].

AR is a useful technology with which to improve learning, as explained by the CTML [20]. It allows visual information to be registered to the real world [21]. This visual information, which serves as the instructional material in this paper, can be designed following the CTML. Although such materials can be designed and displayed using 3D model design software, AR technology differs in that it provides immersive environments; indeed, an immersive language learning framework motivated by the CTML has been developed [22]. Many scholars contend that different learning tools lead to different learning outcomes, as shown in Table 1. Few researchers, however, have paid attention to the design of the AR models themselves, which constitute the instructional material in this case.

A confounding question arises: Does AR facilitate or inhibit learning efficiency by highlighting partial but critical information?

2.2. Manipulation of Extraneous Information with Various Learning Materials. AR has been shown to be a more efficient way of learning in various studies, as shown in Table 1. Nonetheless, evaluations of AR relative to conventional learning environments have basically been limited to learning outcomes and to questionnaires examining students' subjective motivation and satisfaction [23, 24]. Because the major function of AR rests in highlighting critical information and labeling extra information as a reference for learning purposes, AR can be perceived as a measure that manipulates extraneous information processing, potentially enhancing the generative process of learning. From this perspective, previous researchers did not answer why and how AR fosters learning in construction. In the educational domain, AR appears to be a smart technology with which to create attractive and motivating content, improving the time learners spend acquiring knowledge [25]. Moreover, an experiment revealed higher learning achievement and lower cognitive load when a mobile AR application was utilized [26]. For construction education, applying AR can create a realistic learning environment without health and safety risks and enhance students' comprehensive understanding of construction equipment and operational safety [8, 10, 27]. As shown in the "control group" column of Table 1, the advantages of AR listed above were generally concluded from comparisons with conventional learning environments, especially TG-based ones. However, these comparisons ignore the contrast with real PM-based learning materials. Besides, some of the TG-based learning material in the experiments of Table 1 is colored, which constitutes extraneous information; in this paper, the TG-based material is designed according to the Chinese Drawing Collection for National Building Standard Design, which is not highlighted with color. The PM-based learning material is modeled accordingly as well.

2.3. Eye Tracking for Cognitive Processing Measures. Although it has been proposed that AR design features lead to better learning outcomes, there is little substantive evidence showing how this occurs during cognitive processing. Fortunately, the AR material in this study is designed based on the CTML, and many researchers have studied how to measure the associated cognitive activity. Eye tracking, combined with measures of learning performance, provides information about the focus of cognitive activity [31]. Consequently, to identify how learners behave in AR-based and other conventional learning environments, an eye-tracking device is an effective way to obtain cognitive processing measures.

Eye-tracking techniques can be utilized to record eye movements, and measures such as fixation count, total fixation time, and average fixation duration can show how people behave while engaged in cognitive processing [32, 33]. However, the use and interpretation of eye-tracking measures differ depending on the research question. A summary of relevant studies in which eye tracking was used to obtain eye-movement measures in multimedia learning and cognition is listed in Table 2. Fixation duration and fixation count are the most prevalently used eye-tracking measures [34]. Generally, for the learning process, both a longer fixation duration and a lower fixation rate indicate a higher cognitive load, and more fixation counts mean less efficient information processing. Moreover, a long average fixation duration means deeper information processing driven by the complexity of the background information [32, 35, 36]. Besides, the attentional guidance hypothesis proposes that participants pay more attention to salient elements than to other elements, which leads to longer fixation times [37].

In summary, three eye-movement measures, total fixation time, fixation count, and average fixation duration, are utilized in this study to demonstrate how learners behaved during the entire formal experimental process, for the following reasons: (1) Higher fixation counts and longer fixation times indicate a greater cognitive load in extraneous processing and more distributed attention in essential processing. (2) A longer average fixation duration indicates deeper comprehension of the learning material, more complex information generated by various information sources, and more focus on essential processing. The relationship between the eye-movement metrics and CTML cognitive processing is shown in Figure 1.
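For readers reproducing these measures from raw gaze data, the three metrics reduce to simple aggregations over the fixation events that fall inside an area of interest (AOI). The following Python sketch is illustrative only: the fixation record format and the pixel-rectangle AOI are our assumptions, not the format of any particular eye-tracking software export.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float           # gaze position in screen pixels (assumed coordinate frame)
    y: float
    duration_ms: float # duration of this fixation event

def aoi_metrics(fixations, aoi):
    """Compute the three eye-movement measures for one rectangular AOI.

    `aoi` is (left, top, right, bottom) in the same pixel coordinates as
    the fixations. Returns (total fixation time in ms, fixation count,
    average fixation duration in ms).
    """
    left, top, right, bottom = aoi
    # Keep only fixations that land inside the AOI rectangle.
    hits = [f for f in fixations
            if left <= f.x <= right and top <= f.y <= bottom]
    count = len(hits)
    total = sum(f.duration_ms for f in hits)
    avg = total / count if count else 0.0
    return total, count, avg
```

In practice the commercial analysis software performs exactly this aggregation once an AOI polygon is drawn; the sketch simply makes the arithmetic behind the three measures explicit.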

3. Research Questions and Methodology

The literature review shows that many related studies explain the effects of AR by comparing AR-based and TG-based learning (Table 1). These studies demonstrate the effectiveness of AR; however, they do not reveal the gap with PM-based education, which is also a common teaching method in construction education. The differences in effectiveness between AR and PM need to be examined to leverage the application of AR. It is therefore necessary to compare AR-based learning with both TG-based and PM-based learning to provide convincing evidence with which to explore the effects of AR. Moreover, although AR has been shown to have a positive effect on learning outcomes, there is a lack of research exploring and evaluating AR in the cognitive process. Consequently, the present study aims to test the following hypotheses:

(1) Compared to TG- and PM-based materials, AR-based materials promote learning outcomes.

(2) Compared to the use of TG-based materials and PM-based materials, the use of AR-based materials that are designed using the CTML can lower learners' cognitive loads and foster deep information processing, which means that AR-based groups will have lower fixation counts and fixation times but higher levels of average fixation duration than TG- and PM-based groups.

To test these hypotheses, an experiment involving learning and testing was developed. Three groups of participants were exposed to three different learning environments: TG-based, AR-based, and PM-based. Each participant was separately given the same questions, which were answered by referring to the learning material provided in the respective environment.

Figure 2 shows the experimental flow. Before the test, learning content and corresponding test questions were prepared. We randomly divided participants into the three groups (AR, TG, and PM). In the cognitive testing process, we recorded the participants' answers and answer times as their learning outcomes to comparatively analyze the three groups. During the whole testing process, participants' eye movements were recorded using an eye tracker (SMI iView XTM HED at 50 Hz). The fixation time and fixation count data were obtained using Begaze (iView software). We defined one area of interest (AOI) for each question, and total fixation time, fixation count, and average fixation duration values for each AOI were recorded and calculated.

3.1. Participants. A total of 40 senior undergraduate students majoring in construction management at Chongqing University were invited to participate. Because sample sizes in eye-tracking studies range from fewer than ten for qualitative studies to around 30 for quantitative studies [49], a sample of 40 is robust enough for a quantitative eye-tracking study.

Chongqing University is one of the top 10 research universities in the field of construction management in China. In this study, we used two approaches to recruit participants: (1) students of one class were assigned to participate in the study as their final project, and (2) an invitation flyer was posted in the laboratory of Chongqing University to invite volunteers. Finally, we selected 23 students from the class and 17 volunteers attracted by the flyer. To minimize differences between individuals, we chose participants with the same major (construction management), the same grade (fourth year), and similar ages (21 to 23 years). There were 22 males and 18 females, and they had all taken the same courses in college. The students had been trained with 32 credit hours of reinforcement arrangement courses in their third year of college, but they all lacked practical experience in construction, meaning that they had received no onsite training and had no injury experience in construction. Based on their academic and practical backgrounds, we assumed that these students had similar intrinsic learning abilities. The vision of all participants was either normal or corrected-to-normal.

3.2. Learning and Test Materials. The learning materials concerned the detailing of longitudinal bars at the tops of antiseismic corner columns from one Chinese Drawing Collection for National Building Standard Design, 11G101-1 (drawing rules and standard detailing drawings of an ichnographic representing method for construction drawings of RC structures). According to our previous research and interviews with experts in engineering and construction at Chongqing University, this is an important and basic piece of professional knowledge for construction workers, yet it is difficult for students without practical experience to understand. Therefore, we designed three forms of instructional material based on this content under the guidance of a teacher in the field of construction techniques.

For the TG-based learning environment, the learning material was abstracted from 11G101-1 (Figure 3) and shown on a computer screen for learners.

Figure 4 shows the design of the AR model. The key steel bars are highlighted and distinguished with various colors based on their binding methods, while the others are rendered in gray to reduce their salience. According to multimedia learning theory, this design can attract attention and help learners reduce extraneous processing. Besides, the key information can be easily selected, which manages essential processing, so learners should gain a better comprehensive understanding of the learning content through more effective generative processing. Under the CTML, it can be supposed that the AR-based learning environment may be more attractive than the others, helping learners pay attention to key information.

The AR-based learning environment consisted of a computer with ARToolkit software, a camera, and a paper label. As shown in Figure 4, before the experiment, a virtual model based on the learning content was made with two software programs: Revit Structure and 3D Max. Then, ARToolkit was used to connect the model to a paper label. In the learning process, the participant put the paper label in front of the camera and, utilizing a plug-in for ARToolkit developed in our previous research, the AR model then appeared on the label. Users could observe the model from different angles by rotating the label. Figure 5 shows the workflow of the AR-based learning environment, and the final practical AR-based environment is shown in Figure 6.

As for the PM-based learning environment, a solid model was made with mini steel bars based on the actual situation on a construction site, as shown in Figure 7.

Correspondingly, a test was designed to evaluate learning outcomes in the three environments. The test consisted of six questions in total: three true or false questions and three short-answer questions (Table 3). During the testing process, both the learning material and the test material were presented on the same screen, the learning material on the left and the test material on the right, with one question per page. As shown in Figure 8, a cross-sectional drawing was given in the test material, and the configuration of each numbered longitudinal bar was arranged using one of the various ways shown in the learning materials. Learners could reference the learning materials based on the questions, and they were asked to figure out the arrangement of each bar and the bars' spatial relationships to give the correct answers. For each question, there was one corresponding AOI in the learning material that contained the most important information that learners needed to notice and process.

When answering the true or false questions, learners were asked to judge a description of the spatial configuration and answer "yes" or "no." For the short-answer questions, learners were required, on the basis of each question, to give the correct numbers among the 12 numbered bars.

3.3. Experimental Procedure. Every participant was randomly assigned to one of three groups. Each participant was provided training materials in TG-based, AR-based, or PM-based form. Referring to these training materials, the participants sequentially answered predesigned questions. Details about the experimental procedure are listed as follows.

3.3.1. Preexperiment Calibration. Participants were told the purpose of the experiment. They were then asked to identify their dominant eye using the facilitator's instrument so that the eye tracker (SMI iView XTM HED), with a sampling rate of 200 Hz, could be fitted with the proper eyeglass. Participants were seated approximately 50 cm from the screen on which the learning materials were displayed. A five-point calibration screen was used to assess the calibration for each participant before each cognitive test. If the accuracy exceeded 1° in the x or y direction, the calibration was repeated.

3.3.2. Formal Experiment. Every participant was given two minutes to familiarize themselves with the learning content. Six questions were then sequentially demonstrated on the screen (Figure 9). After the participant answered, the research facilitator immediately switched slides to the next question and recorded the participant's answer. No auxiliary verbal instructions were provided during the entire formal experiment in any group.

During the whole process, participants in the AR and PM groups could ask the research facilitator to rotate the paper label or model according to their own requirements if they wanted to observe from different angles. They were not given opportunities to change their answers.

3.4. Data Analysis. Each participant's answers and completion times for every question were recorded by the facilitator, and learners' eye movements were recorded by the eye tracker (SMI iView XTM HED) and its associated software (Begaze), which was utilized to build the AOIs. The total fixation time and fixation count of each AOI could then be calculated and exported.

Table 4 gives a brief definition of each measure. All data were imported into Excel and SPSS for statistical analysis. To identify whether there were statistically significant differences among the three groups, ANOVA was used for group comparisons. Where statistically significant results existed, Bonferroni multiple comparisons were then conducted between each pair of groups to locate the significant differences.
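The analysis pipeline above (ANOVA followed by Bonferroni-corrected pairwise comparisons) can be sketched as follows. This is an illustrative Python equivalent of the SPSS workflow, not the authors' actual code; the function and group names are ours, and the Bonferroni correction is applied here by dividing the significance threshold by the number of pairs, which is equivalent to SPSS's p-value multiplication.

```python
from itertools import combinations
from scipy import stats

def compare_groups(groups, alpha=0.05):
    """One-way ANOVA across all groups; if significant, run pairwise
    t-tests against a Bonferroni-corrected threshold.

    `groups` maps a group label (e.g., "TG") to a list of scores.
    Returns the omnibus ANOVA p value and a dict of significantly
    different pairs with their uncorrected p values.
    """
    _, p = stats.f_oneway(*groups.values())
    significant_pairs = {}
    if p < alpha:  # only probe pairs when the omnibus test is significant
        pairs = list(combinations(groups, 2))
        corrected_alpha = alpha / len(pairs)  # Bonferroni correction
        for a, b in pairs:
            _, p_pair = stats.ttest_ind(groups[a], groups[b])
            if p_pair < corrected_alpha:
                significant_pairs[(a, b)] = p_pair
    return p, significant_pairs
```

With three groups there are three pairwise comparisons, so each pair is tested at alpha/3 ≈ 0.0167.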

4. Results

A total of 40 students participated in this study. However, because eye-tracking data were missing for six participants, 34 subjects remained for analysis: 11 in the TG group, 11 in the AR group, and 12 in the PM group. Thus, 204 (34 * 6 = 204) data points for each index were recorded or calculated. Before the calculations were conducted, all data were screened with SPSS to identify outliers; five completion time data points, eight fixation time data points, six fixation count data points, and three average fixation duration data points were identified as outliers and excluded from the subsequent statistical analysis.
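The text does not state the outlier criterion used in SPSS. A common screen, and the one behind SPSS's default boxplot outlier markers, is the 1.5 × IQR rule; the sketch below is a minimal Python illustration under that assumption, not a reconstruction of the authors' procedure.

```python
def iqr_outliers(values, k=1.5):
    """Return the values falling outside [Q1 - k*IQR, Q3 + k*IQR],
    i.e., the 1.5*IQR boxplot rule when k = 1.5."""
    xs = sorted(values)
    n = len(xs)

    def quantile(q):
        # Linear interpolation between closest ranks.
        pos = q * (n - 1)
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    return [v for v in values if v < q1 - k * iqr or v > q3 + k * iqr]
```

Each index (completion time, fixation time, fixation count, average fixation duration) would be screened separately in this way before group comparisons.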

4.1. Learning Outcomes. As seen in Table 5, the mean scores of the PM group were generally the highest, with the shortest average completion times for both question forms. A significant difference in short-answer scores (p < 0.05) was found among the three groups, and multiple comparisons (Table 6) showed that the AR group and the PM group scored significantly higher than the TG group on the short-answer questions. No significant differences in scores were found among the three groups on the true or false questions, and there were no significant differences in completion times among the three groups for either question form.

Participants in the AR and PM groups performed better than those in the TG group, and the increase in scores was much more pronounced for the short-answer questions. Contrary to the first hypothesis, our findings showed that people in the PM group exhibited the same degree of learning performance as those in the AR group.

4.2. Eye-Tracking Measures. The eye-movement data were analyzed using ANOVA to explore learners' cognitive processes with regard to key information in AOIs.

Tables 7 and 8 show that, for fixation time, people in the TG group spent significantly more fixation time on the AOIs than those in the PM group for the true or false questions; there were no other significant pairwise differences. The fixation count results show that, for the true or false questions, people in the TG group fixated the AOIs significantly more frequently than the other two groups. However, the result was different for the short-answer questions, where multiple comparisons showed no significant differences between any two groups.

The average fixation duration results showed significant differences among the three groups for both question forms. Multiple comparisons determined that, for the true or false questions, people in the AR group showed a significantly higher average fixation duration than those in the TG group. For the short-answer questions, people in both the AR and PM groups showed a significantly higher average fixation duration than those in the TG group.

Across all eye-movement measures, the AR-based learning material did not reduce learners' fixation counts or fixation times under all conditions, and no significant difference between the AR-based and PM-based learning materials was identified. For the true or false questions, people in the TG group spent significantly more fixation time than those in the PM group and fixated significantly more frequently than both other groups, which does not fully support the second hypothesis.

However, the results demonstrate that the effects of AR and PM teaching were different for the two question forms.

Although people in the TG group scored as well on the true or false questions as people in the other two groups (Table 5), they had significantly longer fixation times and higher fixation counts. Long fixation times indicate difficulty in extracting information or that the object is more engaging in some way, and a high fixation count on an AOI indicates inefficiency in identifying relevant information [34, 36, 50]. Given the equivalent learning outcomes, the results demonstrate that, compared to the TG-based environment, both the AR-based and PM-based environments reduced learners' cognitive loads and improved their search efficiency in the learning and test processes.

For the short-answer questions, people in the TG group exhibited the same level of fixation time and fixation count as those in the other two groups. However, it should be noted that participants in the AR and PM groups scored significantly higher than those in the TG group on these questions. Consequently, both AR-based and PM-based teaching considerably improved learners' answering accuracy, but the eye-tracking data cannot determine which environment imposed the lower cognitive load or afforded the better search efficiency.

Unlike fixation time and fixation count, average fixation duration showed that, for both question types, the AR-based group had the highest level while the TG-based group had the lowest (Table 6). A long average fixation duration is thought to indicate deep processing [32]. When related information is easy to target and integrate, learners can likely engage in the deep processing of key information required for meaningful learning [37, 51, 52]. This result indicates that the AR-based learning environment helped learners more easily find and focus on the key information for each question, which then led to a deeper understanding of the content.

5. Discussion

The main purpose of this study was to understand how AR-based teaching impacts college students' learning outcomes and learning processes in construction compared to TG-based and PM-based teaching. The results showed that AR-based environments lead to better learning outcomes than TG-based environments but not better than PM-based environments. However, the differences in the eye-tracking data did not maintain the same gap throughout the process.

5.1. Effect of Question Form. Participants in the TG group scored significantly lower on the short-answer questions than those in the AR and PM groups, while all three groups had similar scores on the true or false questions. In this study, to answer the true or false questions, learners only had to say "yes" or "no"; for the short-answer questions, they had to give precise and comprehensive numbers of steel bars, which required more exact information processing. This result suggests that, for some limited tasks, learners in TG-based learning or training environments can achieve ideal performance, despite the higher cognitive load and lower efficiency compared to AR-based and PM-based environments. Moreover, TG-based teaching has the advantages of low cost and easy implementation. Therefore, for some learning tasks and practical work, TG-based education is the most economical option.

5.2. Effect of Cognitive Load and Emotion. Another reason why participants in the TG-based group scored significantly worse on the second question form relates to cognitive load and motivation. As a positive emotion in cognitive processing, interest is closely related to motivation and attention, and those with interest show greater persistence on subsequent tasks. Cognitive load may affect emotional state and further hamper effective visual search [53-55].

Before they started to learn, all learners in the three groups were assumed to have positive emotions and motivation, so their performance at the beginning rested on the same emotional footing. In this study, the test sequence was three true or false questions followed by three short-answer questions. The TG-based group scored at the same level as the other two groups on the first three questions but with significantly more fixations. We suppose that learners in the TG-based group experienced excessive cognitive load at the beginning, which further had a negative impact on their motivation, so they were not motivated enough to pay adequate attention to information processing; this led to increasingly worse learning outcomes on the final three questions.

5.3. Effect of AR. Compared to the PM-based learning environment, the AR-based learning environment showed no competitive advantage in learning performance and no significant difference in eye-movement data, with the exception of average fixation duration. Although the longer average fixation duration indicated that learners in the AR-based group more easily found and focused on key information, and thus understood the learning content better than the others, this did not translate into superior learning outcomes. After the experiment, a few students were invited to try all three learning tools. They generally thought that, compared to the traditional TG-based learning method, both AR and PM clearly helped them understand the learning material, but they did not report significant differences between the effects of AR and PM. Their subjective impressions agree with our experimental results, further indicating that the features and advantages of AR were not sufficiently exploited.

In practical application, AR has advantages in flexibility and convenience. In contrast to PM-based education, users can build AR-based learning or training environments with no limits on time, and the displayed objects can be repeatedly modified and reused. Thus, AR has great potential and prospects. However, efficiently utilizing the features of AR to help learners or trainees achieve improved performance is not only the key to maximizing its value but also the most persuasive reason for its application, which calls for further studies. It is worth exploring which tasks AR is most suitable for, and whether other methods need to be combined with AR to improve teaching and training efficiency.

6. Conclusion

In this study, we applied TG-based, AR-based, and PM-based learning environments for construction learning. We compared learners' learning outcomes and utilized eye tracking to explore the cognitive processes of the three groups.

For learning outcomes, our research suggests that the effects of learning environments differ across task forms. A three-dimensional display should have the advantage of showing objects more comprehensively and intuitively than other displays, but our study showed that, in terms of outcomes, conventional TG-based training can match AR-based and PM-based training on some specific tasks, such as answering true or false questions. In practical application, the content and demands of learning and training are diverse across majors and posts, and AR and PM are not more effective in all cases. One should be careful and selective in applying and popularizing the new method.

Eye-tracking data provided quantitative evidence about the cognitive process. Both AR-based and PM-based environments helped learners reduce their cognitive loads compared to the TG-based group. However, lower cognitive loads did not translate into significantly higher test scores or quicker completion times. Similarly, the eye-tracking data showed that AR has the potential to help learners focus on key information and understand it more deeply, but learners in the AR-based group did not show better learning performance than those in the other groups. This result suggests that to achieve improved outcomes, AR may need to be combined with other materials, such as 2D drawings and text, or the models may need more reasonable adjustment. To explore how to take full advantage of AR and similar technologies in practical application, additional research is needed to provide an in-depth understanding of learners' mental models and cognitive processes.

In summary, this study illustrates the effects of TG-based, AR-based, and PM-based environments on construction learning outcomes and learners' cognitive processes. However, it is limited by the use of a single learning material and a small number of independent test questions. Future research should apply AR to systematized tasks and perform comprehensive tests to evaluate its effects.

https://doi.org/10.1155/2018/2472167

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors would like to extend their appreciation to the Fundamental Research Funds for the Central Universities of China (no. 106112016CDJSK03XK06) and the Natural Science Foundation of China (no. 51578317) for vital support.

References

[1] A. Z. Sampaio, D. P. Rosario, A. R. Gomes, and J. P. Santos, "Virtual reality applied on Civil Engineering education: construction activity supported on interactive models," International Journal of Engineering Education, vol. 29, no. 6, pp. 1331-1347, 2013.

[2] K. Ku and P. S. Mahabaleshwarkar, "Building interactive modeling for construction education in virtual worlds," Electronic Journal of Information Technology in Construction, vol. 16, 2011.

[3] H.-K. Wu, S. W.-Y. Lee, H.-Y. Chang, and J.-C. Liang, "Current status, opportunities and challenges of augmented reality in education," Computers and Education, vol. 62, pp. 41-49, 2013.

[4] S. Nivedha and S. Hemalatha, "A survey on augmented reality," International Research Journal of Engineering and Technology, vol. 2, no. 2, pp. 87-96, 2015.

[5] T. N. Arvanitis, A. Petrou, J. F. Knight et al., "Human factors and qualitative pedagogical evaluation of a mobile augmented reality system for science education used by learners with physical disabilities," Personal and Ubiquitous Computing, vol. 13, no. 3, pp. 243-250, 2009.

[6] K. Lee, "Augmented reality in education and training," Techtrends, vol. 56, no. 2, pp. 13-21, 2012.

[7] C. Bilginsoy, "The hazards of training: attrition and retention in construction industry apprenticeship programs," Industrial and Labor Relations Review, vol. 57, no. 1, pp. 54-67, 2003.

[8] L. Carozza, F. Bosche, and M. Abdel-Wahab, "Image-based localization for an indoor VR/AR construction training system," in Paper Presented at 13th International Conference on Construction Applications of Virtual Reality, pp. 363-372, London, UK, October 2013.

[9] A. H. Behzadan and V. R. Kamat, "A framework for utilizing context-aware augmented reality visualization in engineering education," in Proceedings of the International Conference on Construction Application of Virtual Reality, p. 8, Taipei, Taiwan, November 2012.

[10] A. H. Behzadan, A. Iqbal, and V. R. Kamat, "A collaborative augmented reality based modeling environment for construction engineering and management education," in Proceedings of the 2011 Winter Simulation Conference (WSC), pp. 3568-3576, Phoenix, AZ, USA, December 2011.

[11] I. Mutis and R. R. A. Issa, "Enhancing spatial and temporal cognitive ability in construction education through augmented reality and artificial visualizations," in Proceedings of the International Conference on Computing in Civil and Building Engineering, pp. 2079-2086, Orlando, FL, USA, June 2014.

[12] B. A. Knight, M. Horsley, and M. Eliot, Eye Tracking and the Learning System: An Overview, Current Trends in Eye Tracking Research, Springer International Publishing, Berlin, Germany, 2014.

[13] H. R. Chennamma and X. Yuan, "A survey on eye-gaze tracking techniques," Indian Journal of Computer Science and Engineering, vol. 4, no. 5, 2013.

[14] J. M. Harley, E. G. Poitras, A. Jarrell, M. C. Duffy, and S. P. Lajoie, "Comparing virtual and location-based augmented reality mobile learning: emotions and learning outcomes," Educational Technology Research and Development, vol. 64, no. 3, pp. 359-388, 2016.

[15] L. Han-Chin, "Investigating the impact of cognitive style on multimedia learners' understanding and visual search patterns: an eye-tracking approach," Journal of Educational Computing Research, vol. 55, no. 8, pp. 1053-1068, 2017.

[16] R. E. Mayer, "Incorporating motivation into multimedia learning," Learning and Instruction, vol. 29, pp. 171-173, 2014.

[17] J. Sweller, J. J. G. V. Merrienboer, and F. G. W. C. Paas, "Cognitive architecture and instructional design," Educational Psychology Review, vol. 10, no. 3, pp. 251-296, 1998.

[18] R. E. Mayer, Multimedia Learning, Cambridge University Press, Cambridge, UK, 2nd edition, 2009.

[19] R. Moreno, "Does the modality principle hold for different media? A test of the method-affects-learning," Journal of Computer Assisted Learning, vol. 22, pp. 149-158, 2006.

[20] P. Sommerauer and O. Muller, "Augmented reality in informal learning environments: a field experiment in a mathematics exhibition," Computers and Education, vol. 79, pp. 59-68, 2014.

[21] S. Zollmann, C. Hoppe, S. Kluckner, C. Poglitsch, H. Bischof, and G. Reitmayr, "Augmented reality for construction site monitoring and documentation," Proceedings of the IEEE, vol. 102, no. 2, pp. 137-154, 2014.

[22] A. Ibrahim, B. Huynh, J. Downey, T. Hollerer, D. Chun, and J. O'Donovan, "ARbis Pictus: a study of language learning with augmented reality," 2017, http://arxiv.org/abs/1711.11243.

[23] T. H. C. Chiang, S. J. H. Yang, and G. J. Hwang, "An augmented reality-based mobile learning system to improve students' learning achievements and motivations in natural science inquiry activities," Journal of Educational Technology and Society, vol. 17, no. 4, pp. 352-365, 2014.

[24] Y. H. Hung, C. H. Chen, and S. W. Huang, "Applying augmented reality to enhance learning: a study of different teaching materials," Journal of Computer Assisted Learning, vol. 33, no. 3, pp. 252-266, 2017.

[25] J. Ferrer-Torregrosa, M. A. Jimenez-Rodriguez, J. Torralba-Estelles, F. Garzon-Farinos, M. Perez-Bermejo, and N. Fernandez-Ehrling, "Distance learning ects and flipped classroom in the anatomy learning: comparative study of the use of augmented reality, video and notes," BMC Medical Education, vol. 16, no. 1, p. 230, 2016.

[26] S. Kucuk, S. Kapakin, and Y. Goktas, "Learning anatomy via mobile augmented reality: effects on achievement and cognitive load," Anatomical Sciences Education, vol. 9, no. 5, pp. 411-421, 2016.

[27] Q. T. Le, A. Pedro, C. R. Lim, H. T. Park, S. P. Chan, and K. K. Hong, "A framework for using mobile based virtual reality and augmented reality for experiential construction safety education," International Journal of Engineering Education, vol. 31, no. 3, pp. 713-725, 2015.

[28] Y.-H. Wang, "Exploring the effectiveness of integrating augmented reality-based materials to support writing activities," Computers and Education, vol. 113, pp. 162-176, 2017.

[29] M. B. Ibanez, A. Di Serio, D. Villaran, and C. D. Kloos, "Experimenting with electromagnetism using augmented reality: impact on flow student experience and educational effectiveness," Computers and Education, vol. 71, pp. 1-13, 2014.

[30] D. Fonseca, N. Marti, E. Redondo, I. Navarro, and A. Sanchez, "Relationship between student profile, tool use, participation, and academic performance with the use of Augmented Reality technology for visualized architecture models," Computers in Human Behavior, vol. 31, pp. 434-445, 2014.

[31] R. E. Mayer, "Unique contributions of eye-tracking research to the study of learning with graphics," Learning and Instruction, vol. 20, no. 2, pp. 167-171, 2010.

[32] K. Rayner, "Eye movements in reading and information processing," Psychological Bulletin, vol. 124, no. 3, pp. 372-422, 1998.

[33] H. Liu and I. Heynderickx, "Visual attention in objective image quality assessment: based on eye-tracking data," IEEE Transactions on Circuits and Systems for Video Technology, vol. 21, no. 7, pp. 971-982, 2011.

[34] M. A. Just and P. A. Carpenter, Eye Fixations and Cognitive Processes, Aldine Publishing, London, UK, 1976.

[35] J. Zagermann, U. Pfeil, and H. Reiterer, "Measuring cognitive load using eye tracking technology in visual computing," in Proceedings of the Workshop on Beyond Time and Errors on Novel Evaluation Methods for Visualization, pp. 78-85, Baltimore, MD, USA, October 2016.

[36] R. J. Jacob and K. S. Karn, "Eye tracking in human-computer interaction and usability research: ready to deliver the promises," in The Mind's Eye: Cognitive and Applied Aspects of Eye Movement Research, pp. 573-605, Elsevier, New York, NY, USA, 2003.

[37] E. Ozcelik, T. Karakus, E. Kursun, and K. Cagiltay, "An eye-tracking study of how color coding affects multimedia learning," Computers and Education, vol. 53, no. 2, pp. 445-453, 2009.

[38] S. Hasanzadeh, B. Esmaeili, and M. D. Dodd, "Impact of construction workers' hazard identification skills on their visual attention," Journal of Construction Engineering and Management, vol. 143, no. 10, article 04017070, 2017.

[39] B. Esmaeili, "Measuring the impacts of safety knowledge on construction workers' attentional allocation and hazard detection using remote eye-tracking technology," Journal of Management in Engineering, vol. 33, no. 5, article 04017024, 2017.

[40] R.-J. Dzeng, C.-T. Lin, and Y.-C. Fang, "Using eye-tracker to compare search patterns between experienced and novice workers for site hazard identification," Safety Science, vol. 82, pp. 56-67, 2016.

[41] S. Hasanzadeh, B. Esmaeili, and M. D. Dodd, "Measuring construction workers' real-time situation awareness using mobile eye-tracking," in Proceedings of the Construction Research Congress, pp. 2894-2904, San Juan, Puerto Rico, June 2016.

[42] C. Y. Wang, M. J. Tsai, and C. C. Tsai, "Multimedia recipe reading: predicting learning outcomes and diagnosing cooking interest using eye-tracking measures," Computers in Human Behavior, vol. 62, pp. 9-18, 2016.

[43] S. Yeni and E. Esgin, "Usability evaluation of web based educational multimedia by eye-tracking technique," International Journal Social Sciences and Education, vol. 5, no. 4, pp. 590-603, 2015.

[44] O. Navarro, A. I. Molina, M. Lacruz, and M. Ortega, "Evaluation of multimedia educational materials using eye tracking," Procedia-Social and Behavioral Sciences, vol. 197, pp. 2236-2243, 2015.

[45] E. Jamet, "An eye-tracking study of cueing effects in multimedia learning," Computers in Human Behavior, vol. 32, no. 1, pp. 47-53, 2014.

[46] Q. Wang, S. Yang, M. Liu, Z. Cao, and Q. Ma, "An eye-tracking study of website complexity from cognitive load perspective," Decision Support Systems, vol. 62, no. 1246, pp. 1-10, 2014.

[47] H. C. Liu, M. L. Lai, and H. H. Chuang, "Using eye-tracking technology to investigate the redundant effect of multimedia web pages on viewers' cognitive processes," Computers in Human Behavior, vol. 27, no. 6, pp. 2410-2417, 2011.

[48] F. Schmidt-Weigand, A. Kohnert, and U. Glowalla, "A closer look at split visual attention in system- and self-paced instruction in multimedia learning," Learning and Instruction, vol. 20, no. 2, pp. 100-110, 2010.

[49] K. Pernice and J. Nielsen, How to Conduct Eyetracking Studies, Nielsen Norman Group, Fremont, CA, USA, 2009.

[50] A. Poole, L. J. Ball, and P. Phillips, "In search of salience: a response-time and eye-movement analysis of bookmark recognition," in People and Computers XVIII-Design for Life, pp. 363-378, Leeds Metropolitan University, Leeds, UK, 2004.

[51] R. E. Mayer, "The promise of multimedia learning: using the same instructional design methods across different media," Learning and Instruction, vol. 13, no. 2, pp. 125-139, 2003.

[52] T. Seufert, "Supporting coherence formation in learning from multiple representations," Learning and Instruction, vol. 13, no. 2, pp. 227-237, 2003.

[53] N. Berggren, E. H. W. Koster, and N. Derakshan, "The effect of cognitive load in emotional attention and trait anxiety: an eye movement study," Journal of Cognitive Psychology, vol. 24, no. 1, pp. 79-91, 2012.

[54] X. Li, Z. Ouyang, and Y. J. Luo, "The effect of cognitive load on interaction pattern of emotion and working memory: an ERP study," in Proceedings of the IEEE International Conference on Cognitive Informatics, pp. 61-67, Beijing, China, July 2010.

[55] D. B. Thoman, J. L. Smith, and P. J. Silvia, "The resource replenishment function of interest," Social Psychological and Personality Science, vol. 2, no. 6, pp. 592-599, 2011.

Ting-Kwei Wang,(1) Jing Huang,(1) Pin-Chao Liao,(2) and Yanmei Piao(1)

(1) School of Construction Management and Real Estate, Chongqing University, Chongqing 400045, China

(2) Department of Construction Management, Tsinghua University, Beijing 100084, China

Correspondence should be addressed to Pin-Chao Liao; pinchao@tsinghua.edu.cn

Received 18 April 2018; Accepted 24 June 2018; Published 15 July 2018

Academic Editor: Yingbin Feng

Caption: FIGURE 1: Relationship between the eye-movement metrics and the CTML cognitive processing.

Caption: FIGURE 2: Experimental flow.

Caption: FIGURE 3: Paper-based learning material.

Caption: FIGURE 4: Design of the model for the AR-based learning environment.

Caption: FIGURE 5: The workflow of AR-based learning environment preparation.

Caption: FIGURE 6: AR-based learning material.

Caption: FIGURE 7: PM-based learning material.

Caption: FIGURE 8: Test interface.

Caption: FIGURE 9: Formal experiment.
TABLE 1: Overview of experimental studies on AR for teaching and learning.

[24] Domain: Biology. Setting: Classroom. Participants: 72 fifth-grade children. AR treatment: AR graphic book. Control-group treatment: a picture book or physical interactions. Evaluation content: error; retention; satisfaction.

[26] Domain: Anatomy. Setting: Classroom. Participants: 171 students (78 with a medicine degree, 48 with a physiotherapy degree, and 45 with a podiatry degree). AR treatment: AR software. Control-group treatment: notes; videos. Evaluation content: acquisition of anatomy contents.

[28] Domain: Chinese writing. Setting: Classroom and field. Participants: 30 12th-grade students. AR treatment: AR-based writing support system. Control-group treatment: text-graph writing support materials. Evaluation content: writing performance (subject, content control, article structure, and wording).

[21] Domain: Mathematics. Setting: Field experiment. Participants: 101 participants (40 from primary school, 34 from secondary school, and 27 from university). AR treatment: AR mobile application. Control-group treatment: physical information. Evaluation content: knowledge retention.

[29] Domain: Physics. Setting: Classroom. Participants: 64 high-school students. AR treatment: AR-learning application. Control-group treatment: an educational website. Evaluation content: knowledge acquisition; flow experience.

[30] Domain: Architecture. Setting: Classroom. Participants: 57 university students. AR treatment: AR mobile application. Control-group treatment: text-graph materials. Evaluation content: academic performance.

[23] Domain: Natural science. Setting: Field experiment. Participants: 57 4th-grade students. AR treatment: AR-based mobile learning approach. Control-group treatment: inquiry-based mobile learning approach. Evaluation content: learning achievement and motivation.

TABLE 2: Overview of multimedia learning and cognition studies with eye tracking.

[38] Materials: Construction scenario images. Eye tracker: EyeLink II. Eye-movement metrics: fixation count; run count; dwell-time percentage.

[39] Materials: Construction site images. Eye tracker: EyeLink II. Eye-movement metrics: first fixation time; dwell percentage; run count.

[40] Materials: Virtual building construction site. Eye tracker: ViewPoint EyeTracker GIG160. Eye-movement metrics: fixation count; scan path.

[41] Materials: Construction site. Eye tracker: Tobii Pro Glasses 2. Eye-movement metrics: visit count; fixation count; total dwell time; time to the first fixation.

[42] Materials: Static (text and picture) and dynamic (text and video) recipe. Eye tracker: FaceLab 4.6. Eye-movement metrics: total fixation count; total fixation time; interscanning count.

[43] Materials: Web-based multimedia package. Eye tracker: SMI iView X 2.4. Eye-movement metrics: total fixation count; gaze sequence; dwell time in AOI (area of interest).

[44] Materials: Images and texts with and without coloring. Eye tracker: Tobii X60. Eye-movement metrics: time to the first fixation; fixation numbers to the first fixation; total fixation count; fixation count percent.

[45] Materials: A digital learning environment with and without visual cues. Eye tracker: Tobii T60. Eye-movement metrics: total fixation time.

[46] Materials: Webpage. Eye tracker: SMI iView X. Eye-movement metrics: fixation duration; fixation count.

[47] Materials: Webpage. Eye tracker: FaceLab 4. Eye-movement metrics: total fixation count; fixation duration; average fixation duration; scan path.

[48] Materials: Text and picture. Eye tracker: ASL 504. Eye-movement metrics: total fixation time; transition count.

[37] Materials: Color-coded and conventional format of multimedia instruction. Eye tracker: Tobii 1750. Eye-movement metrics: average fixation duration; total fixation time; first fixation time.

TABLE 3: Test questions.

True or false:
(1) Do the longitudinal bars distribute in four layers in the node?
(2) Is the no. 10 bar located in the second layer?
(3) Do no. 1 and no. 12 bars anchor in the same way?

Short answer:
(4) Please write down the number of bars which anchor in the beam.
(5) Please write down the number of bars which anchor in the way of "bending towards the inside of the column."
(6) Please write down the number of bars which anchor in the way of "stretching to the edge of the column, then bending downward."

TABLE 4: Definition of measures used in this study.

Test score: the score of learners' answers; one point for each correct answer.
Completion time (s): total time spent on answering questions.
Total fixation time (ms): total time fixated on an AOI.
Total fixation count (time): total number of fixations counted within an AOI.
Average fixation duration (ms): average duration of each fixation on an AOI, i.e., the ratio of total fixation time to total fixation count.
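The fixation measures defined above can be derived directly from a raw fixation log. The following is a minimal sketch, not from the paper; the record fields (`aoi`, `duration_ms`) and the sample log are illustrative assumptions.

```python
# Sketch: computing the Table 4 eye-movement measures for one AOI from a list
# of fixation records. Field names are assumptions, not the authors' format.

def fixation_metrics(fixations, aoi):
    """Total fixation time, total fixation count, and average fixation
    duration for a single area of interest (AOI)."""
    durations = [f["duration_ms"] for f in fixations if f["aoi"] == aoi]
    total_time = sum(durations)                       # total fixation time (ms)
    count = len(durations)                            # total fixation count
    avg = total_time / count if count else 0.0        # average fixation duration
    return {"total_fixation_time_ms": total_time,
            "total_fixation_count": count,
            "average_fixation_duration_ms": avg}

# Illustrative (made-up) log: two fixations on the model AOI, one on the text.
log = [{"aoi": "model", "duration_ms": 220},
       {"aoi": "text", "duration_ms": 180},
       {"aoi": "model", "duration_ms": 340}]
print(fixation_metrics(log, "model"))
```

Averaging total time over the count, as in the last measure, is what lets a long average fixation duration be read as deeper processing even when the raw counts differ between groups.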

TABLE 5: Descriptive statistics of score and completion time.

                                       TG-based       AR-based       PM-based
Item                  Question form    Mean    SD     Mean    SD     Mean    SD       F

Score                 True or false     0.52   0.51    0.67   0.48    0.72   0.45     1.70
                      Short answer      0.06   0.24    0.61   0.50    0.67   0.48    20.90 *

Completion time (s)   True or false    30.81  16.96   26.61  11.62   23.19  14.93     2.32
                      Short answer     36.91  26.09   39.76  20.74   34.09  24.38     0.48

* The mean difference is significant at the 0.05 level.

TABLE 6: Multiple comparisons of items with significant differences.

                                TG- and AR-based    TG- and PM-based    AR- and PM-based
Item                            Mean diff.   Sig.   Mean diff.   Sig.   Mean diff.   Sig.

Score of short-answer            -0.55 *    0.000    -0.61 *    0.000    -0.06       1.00
questions

* The mean difference is significant at the 0.05 level.
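The F values reported in Tables 5 and 7 come from a one-way analysis of variance across the three groups. As a hedged, dependency-free sketch (not the authors' code; the sample scores are made up), the F statistic can be computed as the ratio of between-group to within-group mean squares:

```python
# Sketch: one-way ANOVA F statistic for comparing the TG-, AR-, and PM-based
# groups. Pure Python; the input data below are illustrative, not the study's.

def one_way_anova_f(*groups):
    """Return the one-way ANOVA F statistic for two or more sample groups."""
    n = sum(len(g) for g in groups)                 # total observations
    k = len(groups)                                 # number of groups
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: weighted deviation of group means.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: deviations from each group's own mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Illustrative (made-up) per-learner scores for three groups:
tg, ar, pm = [1, 2, 3], [2, 3, 4], [3, 4, 5]
print(round(one_way_anova_f(tg, ar, pm), 2))  # → 3.0
```

The pairwise significance values in Tables 6 and 8, several of which are capped at exactly 1.00, are consistent with a Bonferroni-style post hoc correction, though the paper's text does not name the procedure here.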

TABLE 7: Descriptive statistics of eye-movement metrics.

                                        TG-based       AR-based       PM-based
Item                    Question form   Mean    SD     Mean    SD     Mean    SD       F

Fixation time (ms)      True or false    9.95   8.03    7.38   7.73    5.21   6.58     3.28 *
                        Short answer     5.35   7.61    9.97   9.69   10.21   9.17     3.17 *

Fixation count (time)   True or false   21.68  20.55    8.79   8.28    7.36   8.28    11.27 **
                        Short answer    10.00  13.50   13.18  11.38   15.24  13.30     1.43

Average fixation        True or false    0.55   0.17    0.84   0.45    0.64   0.36     5.82 **
duration (ms)           Short answer     0.41   0.23    0.69   0.28    0.66   0.27    10.87 **

Note. ** p < 0.01; * p < 0.05.

TABLE 8: Multiple comparisons of items with significant differences.

                                        TG- and AR-based   TG- and PM-based   AR- and PM-based
Item                    Question form   Mean diff.  Sig.   Mean diff.  Sig.   Mean diff.  Sig.

Fixation time (ms)      True or false     2.57     0.532     4.74 *   0.036     2.17     0.681
                        Short answer     -4.62     0.111    -4.86     0.082    -0.24     1.000

Fixation count (time)   True or false    12.89 *   0.001    14.32 *   0.000     1.43     1.000

Average fixation        True or false    -0.29 *   0.004    -0.10     0.940     0.20     0.061
duration (ms)           Short answer     -0.28 *   0.000    -0.25 *   0.001     0.03     1.000

* The mean difference is significant at the 0.05 level.
COPYRIGHT 2018 Hindawi Limited

Article Details
Title Annotation: Research Article
Author: Wang, Ting-Kwei; Huang, Jing; Liao, Pin-Chao; Piao, Yanmei
Publication: Advances in Civil Engineering
Article Type: Report
Date: Jan 1, 2018
Words: 7948