
Designing a personalized guide recommendation system to mitigate information overload in museum learning.

Introduction

In recent years, museums have been shown to be among the most important institutions serving as sources of informal learning (Sung, Chang, Hou, & Chen, 2010; Sung, Chang, Lee, & Yu, 2008; Tan, Liu, & Chang, 2007; Vavoula, Sharples, Rudman, Meek, & Lonsdale, 2009). Over time, museums have gradually developed into public learning centers and have come to serve a role in public education (Semper, 1990). This implies that museums are viewed as one type of informal educational context and as an important asset for acquiring knowledge. Consequently, museums play a significant role in providing people with in-depth knowledge beyond formal educational contexts (Ramey-Gassert, Walberg III, & Walberg, 1994; Semper, 1990).

Although museums are accepted as a means to pursue knowledge, the problem of information overload (IO) remains in existing museum contexts (Bitgood, 2009). IO arises when users encounter a mass of information and must decide whether to retain information about a certain topic (Toffler, 1970). In museums, visitors often confront a vast number of exhibits and, under time pressure, must decide whether to view more details about a particular exhibit or to move on (Bitgood, 2009). Such a situation may lead to IO because the large number of exhibits causes confusion. More specifically, visitors may be unable to process the input because too many exhibits are presented at once or because the information is presented too quickly over time (Bitgood, 2009). Consequently, visitors may acquire only a superficial understanding through a quick and casual viewing of any given exhibit.

In this paper, a personalized guide recommendation (PGR) system is proposed to mitigate IO in museum contexts. Previous studies have indicated that recommendation systems can help reduce IO (Itmazi & Megias, 2008; Lee & Kwon, 2008; Yang & Chen, 2010). A recommendation system actively provides relevant information to users according to their interests so that they are no longer required to handle too much information. This means that a recommendation system can be used to ease visitor IO. In order to develop an appropriate recommendation system for museum contexts, collective and individual visiting behaviors were analyzed in order to recommend personalized guides for visitors. More specifically, a recommendation technique, association rule mining (ARM), was used to discover the PGR rules existing among collective visiting behaviors (Agarwal, Imielinski, & Swami, 1993; Sun, Kong, & Chen, 2005), and individual visiting behavior was then used to refine the PGR rules in order to provide visitors with a PGR. With this approach, each visitor is able to obtain a PGR service and be relieved of the distress associated with IO. Furthermore, the PGR system differs from virtual museum information systems in that visitors can take a mobile device running the PGR system into authentic museum contexts.

For the purposes of this paper, an experiment was conducted to evaluate user satisfaction with a PGR system in museum contexts. Specifically, we implemented a PGR system and introduced the system into a university museum. Afterward, an evaluation model for user satisfaction with the PGR system was designed in order to evaluate the system. Subsequently, a questionnaire was developed to examine user satisfaction according to the evaluation model. Finally, a series of analyses were carried out to understand the user satisfaction with the PGR system.

Background and related studies

Relevant research on museum learning

Recently, research on museum learning has been quite diverse and has continued to grow. These studies have included enhancement of museum functions, investigations of virtual museums, and research on the connection between museum learning and formal learning. These studies are summarized as follows:

Research on enhancement of museum functions has focused on adopting various information technologies to enhance the effects of museums. Early on, researchers used interactive videodiscs to enhance the effect of exhibits (Hirumi, Savenye, & Allen, 1994). The results showed that an interactive videodisc can attract and hold visitor attention for longer periods of time than conventional exhibits lacking such amenities. Later, some studies utilized mobile technology such as electronic guidebooks to augment user experiences (Jeng, Wu, Huang, Tan, & Yang, 2010; Sung et al., 2010; Sung et al., 2008). These studies showed that electronic guidebooks resulted in patrons staying longer at exhibits as compared to the results for a paper-based guidebook. In addition to mobile technology, researchers have also expressed interest in ubiquitous technology (Huang & Wu, 2011; Huang, Chiu, Liu, & Chen, 2011; Wu, Sung, Huang, Yang, & Yang, 2011). Some researchers have used ubiquitous technology to detect the location of museum visitors and to provide visitors with adaptive guides (Ghiani, Paterno, Santoro, & Spano, 2009; Hall & Bannon, 2006; Pianesi, Graziola, Zancanaro, & Goren-Bar, 2009). Likewise, the results have indicated that adaptive guides can provide a richer overall experience and positively impact user engagement. Overall, the introduction of information technologies is helpful in promoting the effects of museum visits.

Investigations into virtual museums have focused on applying Internet technology to develop virtual museums. Fomichova and Fomichov (2003) used Internet technology to create child-oriented art museum websites. In their study, the authors used websites to establish a bridge between the world of art and the inner world of a child in order to expand the child's interest. With the development of virtual museums, Corredor (2006) explored the influence of prior knowledge on goal setting and content use in virtual museums. His study showed that both the domain knowledge and general knowledge of visitors influence both goal setting and the content use of museum visitors. Neill (2008) also reported a project regarding transnational cooperation (NEOTHEMI) among ten countries in Europe that developed a cultural heritage virtual museum. The report indicated that NEOTHEMI had helped students understand both their culture and other cultures better and also helped them to understand different perspectives on culture. Reynolds, Walker, and Speight (2010) utilized both Internet and mobile technologies to develop web-based museum trails for university-level design students. The trails offered students a range of ways to explore a museum environment and its collections. Their results showed that the trails promoted the students' knowledge and interest in the museum used in the study. In sum, the development of virtual museums is also one of the important issues in museum learning.

The research on the connection between museum learning and formal learning has focused primarily on how to integrate museums into formal learning contexts. Morreale (2001) used hypermedia, database, and network technologies to integrate museums into school activities. The results showed that both the creativity and mental outlook of pupils could be enhanced with these technologies. Cox-Petersen et al. (2003) explored the feasibility of using docent-led guided school tours at a museum of natural history. Their results indicated that the tours were organized in a didactic way that conflicted with inquiry-based learning. The results also showed that student satisfaction with the tours was high but that the level of science learning was low. Vavoula et al. (2009) developed a Myartspace service on mobile phones for inquiry learning in museums. The Myartspace service allowed students to gather information and send the information to a website during museum field trips. In this manner, the students were able to view, share, and present the information regardless of whether they were in a classroom or at home. That is to say, a collaborative learning context was formed (Hwang, Huang, & Wu, 2011; Lin, Huang, & Cheng, 2010). Their results showed that the Myartspace service was effective in assisting students in gathering information in museums and in providing resources for effective construction and reflection in the classroom.

Overall, museum learning has been extensively discussed. However, methods to help visitors ease IO have rarely been considered. At present, the issue of IO in museums has only been mentioned in a study of museum fatigue (Bitgood, 2009). To address this gap, this paper considers the use of a PGR system to mitigate IO in museum contexts.

Information overload and its solutions

IO is one of the important issues in learning and teaching scenarios (Chen, 2009; Paulo, 1999). IO means that there is too much information; it has negative connotations, and it is a widespread problem, especially in computer-mediated communication (CMC) contexts (Paulo, 1999). Moreover, IO is different from cognitive overload (CO). IO usually arises in an attention process in which users encounter disturbances due to excessive information, and it results in the loss of information (Chen, 2009). CO usually arises in knowledge construction processes in which users encounter difficulties in the storage and retrieval process, and it leads to failure in knowledge construction (Chen, 2009). Therefore, CO studies focus on assisting learners in constructing knowledge (Cierniak, Scheiter, & Gerjets, 2009; Huang, Huang, Liu, & Tsai, 2011), while IO studies focus on assisting learners in avoiding excessive information (Itmazi & Megias, 2008; Lee & Kwon, 2008; Yang & Chen, 2010). Because museum contexts are IO contexts (Bitgood, 2009), the term "IO" is studied in this paper rather than "CO."

Recommendation systems are one of several useful techniques designed to cope with IO (Itmazi & Megias, 2008; Lee & Kwon, 2008; Yang & Chen, 2010); they assist users in identifying relevant information within a vast amount of information (Ghauth & Abdullah, 2010). In this manner, users are not exposed to the risk of IO. Collaborative filtering (CF), content-based filtering (CBF), and hybrid filtering (HF) are the three best-known methods among the proposed recommendation systems (Ghauth & Abdullah, 2010; Wang, Tsai, Lee, & Chiu, 2007). CF refers to a system that uses user attributes, such as browsing behavior, to predict items of interest for a user (Khribi, Jemni, & Nasraoui, 2009; Manouselis, Vuorikari, & Van Assche, 2010; Rodriguez, Sicilia, Sanchez-Alonso, Lezcano, & Garcia-Barriocanal, in press). That is to say, new items that have not been browsed by the user but are of interest to similar users will be recommended to the user (Ghauth & Abdullah, 2010). CBF refers to a system that uses the attributes of items, such as their categories, to predict items of interest for a user (Ghauth & Abdullah, 2010; Huang, Huang, Wang, & Hwang, 2009; Khribi et al., 2009; Yang & Chen, 2010; Yang, Huang, Tsai, Chung, & Wu, 2009). In other words, if the user was interested in an item in the past, it is assumed that he/she will probably be interested in similar items in the future (Wang et al., 2007). HF refers to a system that combines CF and CBF to predict user likes or dislikes (Khribi et al., 2009; Wang et al., 2007). At present, HF is usually believed to be the better method for recommendation systems (Wang et al., 2007). In this study, the HF method is adopted in a PGR system that uses both collective and individual visiting behavior to recommend personalized guides for visitors, the details of which are described in the next section.

Personalized guide recommendation (PGR) system

The PGR system developed in this study was composed of a back-end subsystem and a front-end subsystem, as shown in Figure 1. The back-end subsystem was designed for staff to use on a desktop computer. The front-end subsystem was designed for visitors to use on a netbook computer (Notebook, 2010). The functions of these subsystems are described in more detail below.

[ILLUSTRATION OMITTED]

Back-end subsystem

The back-end subsystem provides staff with a management tool composed of an exhibit management function and a PGR rules generation function, as shown in Figure 2. The exhibit management function assists staff in managing exhibits; staff members can use it to add, modify, and delete exhibit information, which includes each exhibit's name, category, description, image, location, and so on. The PGR rules generation function assists staff members in generating the PGR rules from the collective visiting behavior. When staff members want to generate the rules, this function automatically connects to the database, retrieves the visiting behavior of visitors, and generates the PGR rules. Afterward, the PGR rules can be used to provide PGR services for visitors. Moreover, since visiting behavior is frequently added to the database, an automatic update setting was embedded in this function to help staff update the PGR rules automatically. In this manner, the PGR rules will become more significant over time, because a large number of visiting records is useful in generating significant PGR rules.

[FIGURE 2 OMITTED]

Front-end subsystem

The front-end subsystem provides visitors with a guide tool composed of an electronic guidebook function and a PGR service function, as shown in Figure 3 and Figure 4. The electronic guidebook function assists visitors during their museum visit; visitors can use it to view more details about a particular exhibit, as shown in Figure 4. In the meantime, the individual visiting behavior is recorded in a visiting record table. When visitors finish their visit, staff members upload the table into the visiting record database of the back-end subsystem in order to update the PGR rules. The PGR service function provides visitors with a personalized guide based on their visiting behavior and the PGR rules. By using this function, visitors can avoid having to decide whether to view more details about a particular exhibit, because the function actively provides a suitable exhibit guide for them.

[FIGURE 3 OMITTED]

[FIGURE 4 OMITTED]

Here, an example is used to illustrate how the front-end subsystem guides a visitor through the exhibits. First, the visitor carries a mobile device running the front-end subsystem while visiting the exhibits. During the visit, if the visitor is interested in an exhibit in one of the exhibit areas, such as the dragon lock in area D, he/she can click the front-end subsystem's button labeled "exhibit area D" to view the details of the dragon lock, as shown in Figure 3. Afterward, the front-end subsystem presents the details of the dragon lock and recommends an exhibit to the visitor, as shown in Figure 4. Consequently, the visitor can view more details about the dragon lock and has an opportunity to avoid the risk of IO through the PGR service. The details of the PGR service are presented in the following subsection.

PGR service

The PGR service is an HF recommendation technique involving CF and CBF, as shown in Figure 5. The CF is performed in the back-end subsystem, and the CBF is executed in the front-end subsystem. Specifically, the ARM technique and the Apriori algorithm (Agarwal et al., 1993; Sun et al., 2005) were first adopted to implement CF in order to generate the PGR rules. Afterward, the relationship between the category and location of the exhibits and the category of interest and location of the visitors was formalized to implement CBF and thereby personalize the PGR rules. The details are described below.

[FIGURE 5 OMITTED]

The ARM technique is a powerful data mining method designed to search for interesting relationships between items by finding the items that frequently appear together in a transaction database (Agarwal et al., 1993; Sun et al., 2005). Hence, the ARM technique was used in this study to discover the relationships among the collective visiting behaviors of visitors by finding the exhibits that frequently appeared together in a visiting record database. In this manner, the visiting patterns among visitors could be found through the ARM technique, and these patterns could be used as the PGR rules. In this study, a PGR rule is defined as an implication of the form X ⇒ Y, where X and Y are sets of exhibits and X ∩ Y = ∅. X is called the antecedent and Y the consequent, and the rule means that X implies Y. A PGR rule signifies that if a visitor has visited the exhibits in X, he/she is likely to want to visit the exhibits in Y. Furthermore, support and confidence are used as thresholds to select the PGR rules. The support of a PGR rule X ⇒ Y is defined as the percentage of visiting records in the visiting record database that contain X ∪ Y. The confidence of a PGR rule X ⇒ Y is the percentage of visiting records that contain X ∪ Y among the visiting records that contain X. In general, a strong rule has large support and high confidence. However, in this work, visitors may differ in age and background, which may result in a diversity of visiting records. This means that shared visiting patterns among visitors may be relatively rare, and thus both the support and the confidence thresholds for the PGR rules need to be set lower in order to discover more PGR rules, especially when the visiting record database is small. Once the visiting record database becomes large, both thresholds can be set higher in order to discover more significant PGR rules. These rules are then personalized in the front-end subsystem using individual visiting behavior.
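To make the support and confidence thresholds concrete, the following sketch mines simple one-to-one PGR rules from a handful of hypothetical visiting records. The exhibit IDs, records, and threshold values are illustrative only (not data from the study), and a full implementation would use the Apriori algorithm to mine larger itemsets efficiently.

```python
from itertools import combinations

# Toy visiting records: each set holds the exhibits one visitor viewed.
# Exhibit IDs (E1, E2, ...) are hypothetical.
records = [
    {"E1", "E2", "E3"},
    {"E1", "E2"},
    {"E2", "E3"},
    {"E1", "E2", "E4"},
]

def support(itemset, records):
    """Fraction of visiting records that contain every exhibit in itemset."""
    return sum(itemset <= r for r in records) / len(records)

def confidence(antecedent, consequent, records):
    """support(X ∪ Y) / support(X) for the rule X ⇒ Y."""
    return support(antecedent | consequent, records) / support(antecedent, records)

# Mine one-to-one rules that pass both thresholds (values chosen for the toy data).
min_support, min_confidence = 0.5, 0.7
items = sorted(set().union(*records))
rules = []
for x, y in combinations(items, 2):
    for ante, cons in (({x}, {y}), ({y}, {x})):
        s = support(ante | cons, records)
        if s >= min_support and confidence(ante, cons, records) >= min_confidence:
            rules.append((ante, cons, s))

for ante, cons, s in rules:
    print(sorted(ante), "=>", sorted(cons), "support", s)
```

Lowering `min_support` and `min_confidence`, as the text suggests for small databases, admits more rules; raising them retains only the strongest patterns.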

To personalize the PGR rules, the main idea of this work is to determine whether the category of the recommended exhibit is identical to the visitor's category of interest and whether the recommended exhibit is near the visitor. To this end, the category of interest and location of the visitor, together with the category and location of the recommended exhibit, were used to evaluate the recommendation level between the visitor and the recommended exhibit. To capture the visitor's interest in exhibit categories, the category that occurred most frequently among the visited exhibits was used as the visitor's category of interest. Moreover, to identify the visitor's probable location, the location of the exhibit last visited by the visitor was used to infer the visitor's location. Accordingly, the recommendation level formula was defined as Equation (1), through which the recommendation level between the visitor and the recommended exhibit could be computed. Consequently, the PGR system was able to use the recommendation level to provide a personalized guide.

rl = cat(Vc, REc) + loc(Vl, REl), with loc(Vl, REl) = 1 - sqrt((Vl_x - REl_x)^2 + (Vl_y - REl_y)^2) / dist_max, (1)

where

rl is the recommendation level between the visitor and the recommended exhibit;

cat(Vc, REc) determines whether the visitor's category of interest and the category of the recommended exhibit are the same;

Vc is the category that occurs most frequently in the visitor's visiting record table, which is viewed as the visitor's category of interest;

REc is the category of the recommended exhibit;

loc(Vl, REl) computes the distance between the location of the visitor and the location of the recommended exhibit and converts the distance into the interval [0,1];

(Vl_x, Vl_y) is the location of the exhibit last visited by the visitor, which is viewed as the location of the visitor;

(REl_x, REl_y) is the location of the recommended exhibit;

dist_max is the maximum distance between any two exhibits.
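As a concrete illustration of the components of Equation (1), the sketch below computes a recommendation level from a category-match term and a distance term. The extracted article does not reproduce exactly how the two components are combined, so simple addition is assumed here; the category labels and coordinates are hypothetical.

```python
import math

def recommendation_level(visitor_category, visitor_loc,
                         exhibit_category, exhibit_loc, dist_max):
    """Sketch of Equation (1): category match plus spatial proximity.

    Assumes the two components are summed (the exact combination is not
    reproduced in the extracted text); the component definitions follow
    the article's variable list.
    """
    # cat(Vc, REc): 1 if the visitor's category of interest matches the
    # recommended exhibit's category, else 0.
    cat = 1.0 if visitor_category == exhibit_category else 0.0
    # loc(Vl, REl): Euclidean distance mapped into [0, 1]; nearer exhibits
    # score higher, with 1.0 at zero distance and 0.0 at dist_max.
    dist = math.hypot(visitor_loc[0] - exhibit_loc[0],
                      visitor_loc[1] - exhibit_loc[1])
    loc = 1.0 - dist / dist_max
    return cat + loc

# A nearby exhibit in the visitor's favourite category outranks a distant
# exhibit from another category (hypothetical values).
near_same = recommendation_level("lock", (0, 0), "lock", (3, 4), dist_max=10)
far_other = recommendation_level("lock", (0, 0), "pottery", (6, 8), dist_max=10)
print(near_same, far_other)  # prints: 1.5 0.0
```

Ranking candidate exhibits by this score, then recommending the highest-scoring one, reproduces the personalization step the front-end subsystem performs.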

Experimental design

Theoretical foundation of the evaluation: user satisfaction

User satisfaction plays an important role in the successful development of e-learning systems (Bekele, 2010; Ho & Dzeng, 2010; Huang & Liu, 2009). An appropriate evaluation will motivate researchers to improve the development of e-learning systems. Hence, user satisfaction evaluations are used to recognize user needs and significant factors in order to improve systems and obtain user acceptance (Bekele, 2010; Ho & Dzeng, 2010; Ong, Day, & Hsu, 2009). Such evaluations can provide suggestions regarding system design and can facilitate the improvement of systems. Furthermore, they can also be used to understand whether systems meet user requirements and demonstrate system value (Ong et al., 2009). Consequently, in this study, user satisfaction is adopted as the theoretical foundation for evaluating the PGR system.

To investigate user satisfaction with the PGR system in museum contexts, a comprehensive model proposed by Ong et al. (2009) was modified for the experiment. Ong et al. (2009) proposed four constructs, perceived ease of use (PEU), perceived usefulness (PUF), perceived service quality (PSQ), and perceived information quality (PIQ), to evaluate user satisfaction. PEU refers to users' belief that using a technology will be free of effort (Davis, 1989). PUF refers to users' belief that using a technology will enhance their job performance (Davis, 1989). PSQ refers to users' judgment of the overall excellence or superiority of a system (Ong et al., 2009). PIQ refers to users' judgment of the content of a system (Ong et al., 2009). In addition to these four constructs, perceived information overload (PIO) was also considered in this model. PIO is defined as users' judgment of whether they can remain engaged in exhibit visitation. Because IO is one of the factors influencing museum fatigue (Bitgood, 2009), PIO was added as one of the constructs for evaluating user satisfaction. Consequently, the five constructs formed the evaluation model shown in Figure 6, which was used to develop a questionnaire for investigating user satisfaction with the PGR system.

[FIGURE 6 OMITTED]

Questionnaire

A structured questionnaire was developed based on a review of prior studies (Bitgood, 2009; Davis, 1989; Ong et al., 2009) as well as feedback from experts. The improved questionnaire was distributed to the visitors, who were required to complete it by indicating their level of agreement on a five-point Likert scale, as shown in Table 1.

Participants and the system

A total of 72 visitors (46 males and 26 females) participated in the experiment, which was conducted in a university museum in Tainan City, Taiwan. The participants' ages ranged from young students to the elderly, as shown in Table 2. The PGR system was implemented in the C# programming language with a SQL Server 2005 database. Figure 7 shows the participants visiting the museum through the PGR system.

[FIGURE 7 OMITTED]

Procedure

At the start of the experimental procedure, all participants were asked to execute a visiting activity through the PGR system. In the activity, the participants used the system to visit exhibits in which the system recommended the exhibit according to their visiting behavior. When the activity was finished, the participants were asked to fill out the questionnaire.

Results and discussion

Assessment of questionnaire

Reliability

Cronbach's α was used to assess reliability. The α values of all five constructs were higher than 0.70 (overall α across the five constructs = 0.928; PEU = 0.874, PUF = 0.766, PSQ = 0.792, PIQ = 0.794, and PIO = 0.895). This implies that the reliability was sufficiently high (Wortzel, 1979). Furthermore, the minimum corrected item-to-total correlation was above 0.5 (minimum = 0.535), which showed that the questionnaire had strong reliability (Ong et al., 2009). The results of the reliability analysis are summarized in Table 3.
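For reference, the α values reported above follow the standard formula α = k/(k−1) · (1 − Σ item variances / variance of total scores). The sketch below implements it for item-level Likert responses; the response data are hypothetical, not the study's.

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha; item_scores is a list of columns, one list of
    respondent scores per item (all items answered by the same respondents)."""
    k = len(item_scores)          # number of items
    n = len(item_scores[0])       # number of respondents

    def var(xs):                  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Each respondent's total score across all items.
    totals = [sum(col[i] for col in item_scores) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in item_scores) / var(totals))

# Hypothetical 5-point Likert responses from four respondents to three items.
items = [
    [4, 5, 3, 4],
    [4, 4, 3, 5],
    [5, 5, 4, 4],
]
print(round(cronbach_alpha(items), 3))  # prints: 0.667
```

Values above 0.70, as reported for all five constructs, are conventionally taken to indicate adequate internal consistency.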

Content validity

Domain experts examined the content validity. Ambiguous or unsuitable items were modified, removed, or rearranged into a proper order according to the experts' feedback. This rigorous process implies that the questionnaire had good content validity.

Criterion-related validity

Criterion-related validity demonstrates the accuracy of a measure by comparing it with another measure (Sartori & Pasini, 2007). It is assessed by correlating test scores with an external criterion, here overall satisfaction (Ong et al., 2009). For the purposes of this study, the correlation between the total score on the questionnaire (the sum of the 15 items) and the criterion measure (the sum of three global items measuring overall satisfaction with the PGR system) was computed. The results showed that the questionnaire had a criterion-related validity of 0.68 (p < 0.001), suggesting acceptable criterion-related validity (Hair, Black, Babin, Anderson, & Tatham, 2006; Ong et al., 2009).
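The criterion-related validity figure is a Pearson correlation between two sums per respondent. A minimal sketch, using hypothetical questionnaire totals and criterion sums (not the study's data):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: per-respondent totals over 15 items, and sums of the
# three global overall-satisfaction items.
questionnaire_totals = [60, 55, 70, 65]
criterion_sums = [12, 10, 14, 13]
print(round(pearson_r(questionnaire_totals, criterion_sums), 2))  # prints: 0.98
```

A significance test on r (as in the reported p < 0.001) would additionally require the t-distribution, e.g. via `scipy.stats.pearsonr`.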

Construct validity

Construct validity is used to validate that a questionnaire actually measures what it is intended to measure (i.e., the construct) and not other variables. It is assessed using convergent and discriminant validity (Ong et al., 2009). Convergent validity was assessed by examining the average variance extracted (AVE), which must exceed the standard minimum level of 0.5 (Hair et al., 2006). Discriminant validity was evaluated using the square root of the AVE and the correlation matrix of the constructs (Fornell & Larcker, 1981), in which the square root of the AVE of each construct should exceed the correlations between that construct and the others. The results in Table 4 show that most criteria exceeded the thresholds suggested in previous research, indicating satisfactory construct validity. Hence, the assessments of reliability and validity suggest the adequacy of the questionnaire used in this study.

Results of user satisfaction evaluation

The user satisfaction questionnaire was designed as a general survey of 15 items across the five constructs (i.e., PEU, PUF, PSQ, PIQ, and PIO), each rated on a five-point Likert scale ranging from strongly disagree (1) to strongly agree (5). In the experiment, the questionnaire was filled out by 72 visitors, and their responses are summarized in Table 5. The mean score of most responses was greater than 4, so the results are, overall, very positive.

Figure 7 is the radar chart of the mean scores of the five constructs. It can be observed that the mean scores of the PEU, PUF, and PIQ constructs are greater than those of the PSQ and PIO constructs. This means that the overall excellence or superiority of the PGR system still has room for improvement, especially regarding the reliability of its functions (see the mean score of PSQ1 in Table 5). This is because the PGR system is a prototype, so the reliability of its functions is relatively low. Furthermore, in the PIO construct, visitor responses to the fatigue and patience items (i.e., PIO1 and PIO2) showed relatively weaker positive feedback than the remembrance item (PIO3). This is likely because carrying a netbook computer requires physical effort, so PIO1 and PIO2 received relatively lower scores. In sum, the PGR system obtained overall positive feedback in the user satisfaction evaluation.

[FIGURE 7 OMITTED]

Comparison of user satisfaction with gender

To investigate whether gender influenced user satisfaction with the PGR system, an independent-samples t-test was conducted. According to the results shown in Table 6, there was no significant difference between the user satisfaction of females and males (p > 0.05). This indicates that females and males were equally satisfied with the PGR system.

Comparison of user satisfaction with age

To compare user satisfaction with the PGR system among different age groups, a one-way ANOVA was used. To carry out the analysis, the participants were divided into five groups, G1 (1-10), G2 (11-20), G3 (21-30), G4 (31-40), and G5 (41-50), according to their demographic characteristics (see Table 2). Table 7 shows the result of the ANOVA, which revealed a significant difference in the PSQ construct (F = 3.71, p = 0.01), with an effect size of eta-squared = 0.18. The results demonstrate a significant relationship between PSQ and participant age.
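The F-value and eta-squared reported above can be computed from the between-group and within-group sums of squares, as the following sketch shows; the PSQ score groups here are hypothetical, for illustration only.

```python
def one_way_anova(groups):
    """Return (F, eta_squared) for a one-way ANOVA over lists of scores."""
    scores = [x for g in groups for x in g]
    grand_mean = sum(scores) / len(scores)
    # Between-group sum of squares: how far each group mean sits from the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of scores around their own group mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(scores) - len(groups)
    f = (ss_between / df_between) / (ss_within / df_within)
    # Eta-squared: proportion of total variance explained by group membership.
    eta_squared = ss_between / (ss_between + ss_within)
    return f, eta_squared

# Hypothetical PSQ scores for three age groups (illustration only).
groups = [[4, 5, 4], [3, 3, 4], [4, 4, 5]]
f, eta_sq = one_way_anova(groups)
print(round(f, 2), round(eta_sq, 2))  # prints: 3.0 0.5
```

The p-value for the resulting F statistic would be looked up in the F distribution with (df_between, df_within) degrees of freedom, e.g. via `scipy.stats.f_oneway`.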

To further investigate the relationship between PSQ and participant age, a least significant difference (LSD) comparison was used. Table 8 shows the result of the LSD comparison, which indicated that the G3-G1, G3-G2, G3-G4, and G3-G5 pairs were significantly different (see the gray cells in Table 8). Figure 8 shows the PSQ scores of the different age groups and indicates that G3 applied more stringent criteria to the service quality of the PGR system than the other groups did. One inferred reason is that the G3 participants were the most experienced computer users. Considering the time during which widespread use of computers has taken place (about 30 years), G3 participants can be assumed to have had the most experience with computers, because the other groups were either too young or too old. Considering the opportunity to use computers, G3 participants also had the most opportunity, because the other groups were either students or retirees. Consequently, it can be assumed that G3 participants had more stringent requirements for the service quality of the PGR system due to their prior experience with computer usage.

[FIGURE 8 OMITTED]

Comparative analysis of museum systems

A comparative analysis was conducted to evaluate the usability of the PGR system. In this analysis, we carried out an extensive comparison between our proposed system and other museum systems. To analyze the systems fairly, only technical criteria were adopted for the comparison; the adopted criteria are described in Table 9.

Since information technology advances rapidly, only related museum systems published within the last three years were selected as candidates for the comparison. Seven museum systems in total were considered. As the PGR system is introduced in this article, the other museum systems and their features are briefly described below. Each candidate was assigned an ID for identification, and Table 10 summarizes the results of the comparative analysis. From the results, MS4 is the system most similar to the PGR system. Compared with MS4, the PGR system lacks a map function, but it can simultaneously satisfy the needs of visitors and staff. Moreover, the screen of the PGR system's operating device is bigger than that of a mobile phone or PDA, and its weight is smaller than that of a tablet PC. Accordingly, we believe that the PGR system can provide visitors and staff with a satisfying experience.

* MS1 (Sung et al., 2008): This mobile guide system is web-based and operates on a tablet PC. It mainly provides visitors with a map and multimedia presentations of exhibits, where the multimedia presentations include photos, audio, and text.

* MS2 (Vavoula et al., 2009): This inquiry learning system combines web-based and client-based components and operates on a mobile phone. It mainly provides visitors with multimedia presentations of exhibits and a set of functions supporting their inquiry learning in the museum. The multimedia presentations include photos, illustrations, and text.

* MS3 (Pianesi et al., 2009): This mobile guide system is client-based and operates on a PDA. It mainly provides visitors with multimedia presentations of exhibits and a location-awareness service. The multimedia presentations include audio and video; the location-awareness service means that the system delivers multimedia presentations according to the visitor's location.

* MS4 (Ghiani et al., 2009): This mobile guide system is client-based and operates on a PDA. It mainly provides visitors with a map, multimedia presentations of exhibits, a location-awareness service, and a personalized service. The multimedia presentations include photos, video, and text; the location-awareness service provides visitors with a path from their current location to a specific exhibit; the personalized service guides visitors to their favorite exhibits.

* MS5 (Reynolds et al., 2010): This mobile guide system is web-based and operates on a PDA. It mainly provides visitors with multimedia presentations of exhibits and a set of functions supporting exploration of the museum and its exhibits. The multimedia presentations include photos, audio, video, and text.

* MS6 (Sung et al., 2010): This mobile guide system is web-based and operates on a tablet PC. It mainly provides visitors with a map and multimedia presentations of exhibits. The multimedia presentations include photos, audio, and text.

Conclusion

Museum learning has considerable potential as a form of informal learning, but IO is harmful to it. In this paper, we developed a PGR system to assist visitors in engaging in such learning. The system applied collective and individual visiting behavior to recommend personalized guides for visitors. In this way, visitors can avoid having to deal with a large number of exhibits; that is to say, IO can be eased through a personalized guide. To explore user satisfaction with the PGR system, a user satisfaction questionnaire was developed. The assessments of reliability and validity demonstrated that the questionnaire was appropriate for further analysis of its results. The results showed that the PGR system received overall positive feedback from both female and male users. Meanwhile, the relationship between system service quality and user age was found to be significant, and users' prior experience with computer use was inferred to be the main factor contributing to this relationship.
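The core recommendation idea described above, mining rules from collective visiting behavior and then filtering them by an individual's own history, can be sketched as follows. The transactions, support/confidence thresholds, and the single-antecedent rule format are our illustrative assumptions; the paper's actual association-rule mining parameters are not reproduced here.

```python
# Sketch of the association-rule idea behind the PGR system: mine
# "visitors who viewed exhibit A also viewed exhibit B" rules from
# collective visiting records, then personalize them with an individual's
# history. Transactions and thresholds are HYPOTHETICAL.
from itertools import combinations
from collections import Counter

# Collective visiting behavior: each set is one visitor's viewed exhibits.
visits = [
    {"A", "B", "C"}, {"A", "B"}, {"B", "C"},
    {"A", "B", "D"}, {"A", "C"},
]

MIN_SUPPORT, MIN_CONFIDENCE = 0.4, 0.6

item_count = Counter(e for v in visits for e in v)
pair_count = Counter(frozenset(p) for v in visits
                     for p in combinations(sorted(v), 2))

# Keep rules lhs -> rhs whose support and confidence clear the thresholds.
rules = []
for pair, n_pair in pair_count.items():
    support = n_pair / len(visits)
    if support < MIN_SUPPORT:
        continue
    for lhs in pair:
        (rhs,) = pair - {lhs}
        confidence = n_pair / item_count[lhs]
        if confidence >= MIN_CONFIDENCE:
            rules.append((lhs, rhs, support, confidence))

# Personalization step: recommend rule consequents whose antecedent the
# current visitor has already viewed but whose consequent they have not.
visitor_history = {"A"}
recommended = {rhs for lhs, rhs, _, _ in rules
               if lhs in visitor_history and rhs not in visitor_history}
print(sorted(recommended))  # -> ['B']
```

With these toy transactions, the rule A -> B (support 0.6, confidence 0.75) survives the thresholds, so a visitor who has viewed exhibit A but not B is steered toward B.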

Although the proposed system has demonstrated benefits, some problems remain and should be addressed in future research. In this study, the category that occurred most frequently among the visited exhibits was used to represent user interest. However, the time spent visiting exhibits also needs to be considered, because visitors may spend small amounts of time on many exhibits in which they have little interest. Moreover, the presentation style of the system needs to be considered, since the visitors' ages ranged from very young students to the elderly. In future work, we will attempt to design a new way to recommend personalized guides and to present different display styles for visitors. Finally, the value of cat(Vc, REc) may be too discrete (either 0 or 1). In the current research, we focused on using the system for recommendation purposes in order to mitigate information overload in museum learning, and the current discrete setting appears sufficient for this purpose. However, in future research, we will change the value of cat(Vc, REc) to a real value in order to recommend personalized guides for visitors more accurately.
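The contrast between the current discrete cat(Vc, REc) and the real-valued variant proposed for future work can be illustrated as follows. The tag-overlap (Jaccard) similarity and the category names are our assumptions for illustration only; the paper does not specify how the real value would be computed.

```python
# Sketch contrasting the paper's discrete cat(Vc, REc) with a possible
# real-valued replacement. The Jaccard-style similarity is an ASSUMPTION,
# not the paper's definition.

def cat_discrete(visitor_category: str, exhibit_category: str) -> int:
    """Current setting: 1 if the exhibit's category matches the
    visitor's interest category, 0 otherwise."""
    return 1 if visitor_category == exhibit_category else 0

def cat_real(visitor_tags: set, exhibit_tags: set) -> float:
    """One possible real-valued variant: overlap between the visitor's
    interest tags and the exhibit's tags (Jaccard index in [0, 1])."""
    if not visitor_tags or not exhibit_tags:
        return 0.0
    return len(visitor_tags & exhibit_tags) / len(visitor_tags | exhibit_tags)

print(cat_discrete("ceramics", "ceramics"))     # -> 1
print(cat_discrete("ceramics", "calligraphy"))  # -> 0
print(round(cat_real({"ceramics", "Ming"}, {"ceramics", "Qing"}), 2))  # -> 0.33
```

A graded value like this would let the system rank partially matching exhibits instead of treating every non-identical category as equally irrelevant.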

The limitations of this study include the types of measurements used and the relatively small sample size. All of the measurements were limited to the visitors' self-reported perceptions. In future work, we will introduce additional measurements to explore the relationship between the PGR system and the effectiveness of museum learning. Furthermore, we also expect to increase the sample size to obtain stronger evidence for the proposed PGR system.

Acknowledgments

The authors would like to thank the National Science Council of the Republic of China for financially supporting this research under Contract No. NSC 97-2511-S-006-001-MY3, NSC 99-2631-S-011-002, NSC 99-2631-S-006-001, NSC 100-2511-S-006-015-MY3, NSC 100-2511-S-006-014-MY3, and NSC 100-2631-S-006-002-.

References

Agarwal, R., Imielinski, T., & Swami, A. (1993). Mining association rules between sets of items in large databases. In P. Buneman et al. (Eds.), Proceedings of the ACM SIGMOD international conference on management of data (pp. 207-216). Washington DC, USA.

Bekele, T. A. (2010). Motivation and satisfaction in internet-supported learning environments: A review. Educational Technology & Society, 13(2), 116-127.

Bitgood, S. (2009). Museum fatigue: a critical review. Visitor Studies, 12(2), 93-111.

Chen, C. Y. (2009). Influence of perceived information overload on learning in computer-mediated communication. In M. Spaniol et al. (Eds.), Proceedings of the 8th International Conference on Advances in Web Based Learning (pp. 112-115). Aachen, Germany.

Cierniak, G., Scheiter, K., & Gerjets, P. (2009). Explaining the split-attention effect: is the reduction of extraneous cognitive load accompanied by an increase in germane cognitive load? Computers in Human Behavior, 25(2), 315-324.

Corredor, J. (2006). General and domain-specific influence of prior knowledge on setting of goals and content use in museum websites. Computers & Education, 47(2), 207-221.

Cox-Petersen, A. M., Marsh, D. D., Kisiel, J., & Melber, L. M. (2003). Investigation of guided school tours, student learning, and science reform recommendations at a museum of natural history. Journal of Research in Science Teaching, 40(2), 200-218.

Davis, F. D. (1989). Perceived usefulness, perceived ease of use and user acceptance of information technology. MIS Quarterly, 13(3), 319-340.

Fomichova, O. S., & Fomichov, V. A. (2003). A new paradigm for constructing children-oriented web-sites of art museums. Educational Technology & Society, 6(3), 24-29.

Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39-50.

Ghauth, K. I., & Abdullah, N. A. (2010). Learning materials recommendation using good learners' ratings and content-based filtering. Educational Technology Research and Development, 58(6), 711-727.

Ghiani, G., Paterno, F., Santoro, C., & Spano, L. D. (2009). UbiCicero: A location-aware, multi-device museum guide. Interacting with Computers, 21(4), 288-303.

Hair, J. F., Black, W. C., Babin, B. J., Anderson, R. E., & Tatham, R. L. (2006). Multivariate Data Analysis. New Jersey: Prentice-Hall Press.

Hall, T., & Bannonw, L. (2006). Designing ubiquitous computing to enhance children's learning in museums. Journal of Computer Assisted Learning, 22(4), 231-243.

Hirumi, A., Savenye, W., & Allen, B. (1994). Designing interactive videodisc-based museum exhibits: a case study. Educational Technology Research and Development, 42(1), 47-55.

Ho, C. L., & Dzeng, R. J. (2010). Construction safety training via e-Learning: learning effectiveness and user satisfaction. Computers & Education, 55(2), 858-867.

Huang, Y. M., & Wu, T. T. (2011). A systematic approach for learner group composition utilizing u-learning portfolio. Educational Technology & Society, 14(3), 102-117.

Huang, Y. M., & Liu, C. H. (2009). Applying adaptive swarm intelligence technology with structuration in web-based collaborative learning. Computers & Education, 52(4), 789-799.

Huang, Y. M., Chiu, P. S., Liu, T. C., & Chen, T. S. (2011). The design and implementation of a meaningful learning-based evaluation method for ubiquitous learning. Computers & Education, 57(4), 2291-2302. doi: 10.1016/j.compedu.2011.05.023

Huang, Y. M., Huang, T. C., Wang, K. T., & Hwang, W. Y. (2009). A Markov-based recommendation model for exploring the transfer of learning on the web. Educational Technology & Society, 12(2), 144-162.

Huang, Y. M., Huang, Y. M., Liu, C. H. & Tsai, C. C. (2011). Applying social tagging to manage cognitive load in a Web 2.0 self-learning environment. Interactive Learning Environments. doi: 10.1080/10494820.2011.555839

Hwang, W. Y., Huang, Y. M., & Wu, S. Y. (2011). The effect of an MSN agent on learning community and achievement. Interactive Learning Environments, 19(4), 413-432. doi: 10.1080/10494820903356809

Itmazi, J., & Megias, M. (2008). Using recommendation systems in course management systems to recommend learning objects. International Arab Journal of Information Technology, 5(3), 234-240.

Jeng, Y. L., Wu, T. T., Huang, Y. M., Tan, Q., & Yang, S. J. H. (2010). The add-on impact of mobile applications in learning strategies: A review study. Educational Technology & Society, 13(3), 3-11.

Khribi, M. K., Jemni, M., & Nasraoui, O. (2009). Automatic recommendations for e-learning personalization based on web usage mining techniques and information retrieval. Educational Technology & Society, 12(4), 30-42.

Lee, K.C., & Kwon, S. (2008). Online shopping recommendation mechanism and its influence on consumer decisions and behaviors: A causal map approach. Expert Systems with Applications, 35(4), 1567-1574.

Lin, Y. T., Huang, Y. M., & Cheng, S. C. (2010). An automatic group composition system for composing collaborative learning groups using enhanced particle swarm optimization. Computers & Education, 55(4), 1483-1493.

Manouselis, N., Vuorikari, R., & Van Assche, F. (2010). Collaborative recommendation of e-learning resources: An experimental investigation. Journal of Computer Assisted Learning, 26(4), 227-242.

Morreale, E. (2001). Integration of external and internal school activities: Support from new technologies. Educational Technology & Society, 4(2), 66-78.

Neill, S. (2008). Assessment of the NEOTHEMI virtual museum project--an on-line survey. Computers & Education, 50(1), 410-420.

Ong, C. S., Day, M. Y., & Hsu, W. L. (2009). The measurement of user satisfaction with question answering systems. Information & Management, 46(7), 397-403.

Paulo, H. F. (1999). Information overload in computer-mediated communication and education: is there really too much information? Implication for distance education (Unpublished master's thesis). University of Toronto, Toronto, Canada. Retrieved from https://tspace.library.utoronto.ca/bitstream/1807/13110/1/MQ45488.pdf

Pianesi, F., Graziola, I., Zancanaro, M., & Goren-Bar, D. (2009). The motivational and control structure underlying the acceptance of adaptive museum guides-an empirical study. Interacting with Computers, 21(3), 186-200.

Ramey-Gassert, L., Walberg III, H. J., & Walberg, H. J. (1994). Reexamining connections: Museums as science learning environments. Science Education, 78(4), 345-363.

Reynolds, R., Walker, K., & Speight, C. (2010). Web-based museum trails on PDAs for university-level design students: Design and evaluation. Computers & Education, 55(3), 994-1003.

Rodriguez, D., Sicilia, M. A., Sanchez-Alonso, S., Lezcano, L., & Garcia-Barriocanal, E. (2011). Exploring affiliation network models as a collaborative filtering mechanism in e-learning. Interactive Learning Environments, 19(4), 317-331. doi: 10.1080/10494820903148610

Sartori, R., & Pasini, M. (2007). Quality and quantity in test validity: how can we be sure that psychological tests measure what they have to? Quality & Quantity, 41(3), 359-374.

Semper, R. J. (1990). Science museums as environments for learning. Physics Today, 43(11), 50-56.

Sun, X., Kong, F., & Chen, H. (2005). Using quantitative association rules in collaborative filtering. In W. Fan et al. (Eds.), Proceedings of the 6th International Conference on Advances in Web-Age Information Management (pp. 822-827). Hangzhou, China.

Sung, Y. T., Chang, K. E., Hou, H. T., & Chen, P. F. (2010). Designing an electronic guidebook for learning engagement in a museum of history. Computers in Human Behavior, 26(1), 74-83.

Sung, Y. T., Chang, K. E., Lee, Y. H., & Yu, W. C. (2008). Effects of a mobile electronic guidebook on visitors' attention and visiting behaviors. Educational Technology & Society, 11(2), 67-80.

Tan, T. H., Liu, T. Y., & Chang, C. C. (2007). Development and evaluation of an RFID-based ubiquitous learning environment for outdoor learning. Interactive Learning Environments, 15(3), 253-269.

Toffler, A. (1970). Future shock. New York: Bantam Books.

Vavoula, G., Sharples, M., Rudman, P., Meek, J., & Lonsdale, P. (2009). Myartspace: design and evaluation of support for learning with multimedia phones between classrooms and museums. Computers & Education, 53(2), 286-299.

Wang, T. I., Tsai, K. H., Lee, M. C., & Chiu, T. K. (2007). Personalized learning objects recommendation based on the semantic-aware discovery and the learner preference pattern. Educational Technology & Society, 10(3), 84-105.

Netbook. (2010). In Wikipedia. Retrieved October 4, 2010, from http://en.wikipedia.org/wiki/Netbook

Wortzel, R. (1979). New life style determinants of women's food shopping behavior. Journal of Marketing, 43(3), 28-39.

Wu, T. T., Sung, T. W., Huang, Y. M., Yang, C. S. & Yang, J. T. (2011). Ubiquitous English learning system with dynamic personalized guidance of learning portfolio. Educational Technology & Society, 14(4), 164-180.

Yang, J. C., & Chen, S. Y. (2010). Investigation of learners' perceptions for video summarization and recommendation. Interactive Learning Environments. doi: 10.1080/10494820.2010.486888.

Yang, J. C., Huang, Y. T., Tsai, C. C., Chung, C. I., & Wu, Y. C. (2009). An automatic multimedia content summarization system for video recommendation. Educational Technology & Society, 12(1), 49-61.

Yong-Ming Huang (1), Chien-Hung Liu (2), Chun-Yi Lee (1) and Yueh-Min Huang (1,3) *

(1) Department of Engineering Science, National Cheng Kung University, Taiwan // (2) Department of Network Multimedia Design, Hsing Kuo University of Management, Taiwan // (3) Department of Applied Geoinformatics, Chia Nan University of Pharmacy and Science, Taiwan // ym.huang.tw@gmail.com // chliu@mail.hku.edu.tw // chunyilee@yahoo.com // huang@mail.ncku.edu.tw

* Corresponding author

(Submitted January 08, 2011; Revised July 25, 2011; Accepted July 28, 2011)
Table 1. Questionnaire

Construct   Item

PEU         (PEU1) I think that operating the system is easy.

            (PEU2) I think that learning to use the system is easy.

            (PEU3) I think that the functions of the system are easy
            to understand.

PUF         (PUF1) I think that using the system can result in
            knowledge of the exhibits.

            (PUF2) I think that using the system can satisfy my
            curiosity about exhibits.

            (PUF3) I think that using the system can promote
            convenience in visiting the museum.

PSQ         (PSQ1) I think that the functions of the system are
            reliable.

            (PSQ2) I think that the system has up-to-date, portable
            hardware.

            (PSQ3) I think that the system has an up-to-date,
            user-friendly interface.

PIQ         (PIQ1) I think that the content provided by the system is
            reliable.

            (PIQ2) I think that the content provided by the system is
            comprehensive.

            (PIQ3) I think that the content provided by the system is
            easy to understand.

PIO         (PIO1) I am not feeling fatigue when I use the system to
            visit exhibits.

            (PIO2) I am not losing patience when I use the system to
            visit exhibits.

            (PIO3) I can remember more information about exhibits of
            interest when I use the system to visit exhibits.

Table 2. Demographic characteristics of participants

Characteristic   Category   Number

Gender            Male       46
                 Female      26
Age               1-10       15
                 11-20       10
                 21-30       21
                 31-40       17
                 41-50       9

Total                        72

Table 3. The results of the reliability analysis

                              Reliability analysis

Construct   Item   Corrected item-total   Cronbach's [alpha]
                       correlation

PEU         PEU1          0.786                 0.874
            PEU2          0.796
            PEU3          0.709

PUF         PUF1          0.591                 0.766
            PUF2          0.679
            PUF3          0.535

PSQ         PSQ1          0.678                 0.792
            PSQ2          0.733
            PSQ3          0.695

PIQ         PIQ1          0.564                 0.794
            PIQ2          0.745
            PIQ3          0.622

PIO         PIO1          0.900                 0.895
            PIO2          0.832
            PIO3          0.690

Table 4. The results of convergent and discriminant validity

            Convergent        Discriminant validity
             validity

                         Correlation matrix of construct

Construct      AVE       PEU    PUF    PSQ    PIQ    PIO

PEU            0.81      0.90
PUF            0.68      0.62   0.83
PSQ            0.76      0.58   0.76   0.87
PIQ            0.72      0.52   0.66   0.63   0.85
PIO            0.83      0.46   0.64   0.69   0.75   0.91

Table 5. The responses to the questionnaire

Construct   Item   Strongly agree    Agree     Undecided   Disagree

PEU         PEU1      40% (29)      49% (35)    11% (8)     0% (0)
            PEU2      35% (25)      51% (37)   14% (10)     0% (0)
            PEU3      33% (24)      51% (37)    11% (8)     4% (3)

PUF         PUF1      31% (22)      54% (39)    11% (8)     4% (3)
            PUF2      42% (30)      40% (29)    13% (9)     6% (4)
            PUF3      36% (26)      44% (32)   17% (12)     1% (1)

PSQ         PSQ1      29% (21)      28% (20)   21% (15)    19% (14)
            PSQ2      32% (23)      53% (38)   14% (10)     1% (1)
            PSQ3      35% (25)      57% (41)    8% (6)      0% (0)

PIQ         PIQ1      35% (25)      46% (33)   17% (12)     0% (0)
            PIQ2      32% (23)      56% (40)    11% (8)     0% (0)
            PIQ3      26% (19)      53% (38)   18% (13)     3% (2)

PIO         PIO1      29% (21)      36% (26)   26% (19)     8% (6)
            PIO2      31% (22)      42% (30)   17% (12)     8% (6)
            PIO3      35% (25)      51% (37)    10% (7)     4% (3)

Construct   Item   Strongly disagree   Mean

PEU         PEU1        0% (0)         4.3
            PEU2        0% (0)         4.2
            PEU3        0% (0)         4.1

PUF         PUF1        0% (0)         4.1
            PUF2        0% (0)         4.2
            PUF3        1% (1)         4.1

PSQ         PSQ1        3% (2)         3.6
            PSQ2        0% (0)         4.2
            PSQ3        0% (0)         4.3

PIQ         PIQ1        3% (2)         4.1
            PIQ2        1% (1)         4.2
            PIQ3        0% (0)         4.0

PIO         PIO1        0% (0)         3.9
            PIO2        3% (2)         3.9
            PIO3        0% (0)         4.2

Table 6. The comparison of the user satisfaction with gender

Construct   Gender    Mean    Standard deviation   P-value

PEU         Female   4.2051          0.63           0.94
             Male    4.2174          0.64

PUF         Female   4.0128          0.70           0.24
             Male    4.2101          0.66

PSQ         Female   3.8590          0.65           0.19
             Male    4.0942          0.76

PIQ         Female   4.0897          0.53           0.94
             Male    4.1014          0.73

PIO         Female   3.8590          0.84           0.39
             Male    4.0362          0.83

Table 7. The ANOVA analysis of the user satisfaction with age

Construct       Source       Sum of squares   Degrees of freedom

PEU         Between Groups        1.51               4.00
            Within Groups        26.55              67.00
                Total            28.07              71.00

PUF         Between Groups        4.11               4.00
            Within Groups        28.50              67.00
                Total            32.61              71.00

PSQ         Between Groups        6.84               4.00
            Within Groups        30.93              67.00
                Total            37.77              71.00

PIQ         Between Groups        1.92               4.00
            Within Groups        29.28              67.00
                Total            31.21              71.00

PIO         Between Groups        2.56               4.00
            Within Groups        46.94              67.00
                Total            49.50              71.00

Construct       Source       Mean square   F-value   P-value

PEU         Between Groups      0.38        0.95      0.44
            Within Groups       0.40
                Total

PUF         Between Groups      1.03        2.41      0.06
            Within Groups       0.43
                Total

PSQ         Between Groups      1.71        3.71      0.01
            Within Groups       0.46
                Total

PIQ         Between Groups      0.48        1.10      0.36
            Within Groups       0.44
                Total

PIO         Between Groups      0.64        0.91      0.46
            Within Groups       0.70
                Total

Table 8. The comparison of the PSQ construct with age

(I) PSQ   (J) PSQ         Mean         Standard error   P-value
                    difference (I-J)

G1          G2            0.06              0.28         0.842
G1          G3            0.78              0.23         0.001
G1          G4            0.36              0.24         0.144
G1          G5            0.21              0.29         0.472
G2          G3            0.73              0.26         0.007
G2          G4            0.30              0.27         0.272
G2          G5            0.15              0.31         0.628
G3          G4           -0.43              0.22         0.057
G3          G5           -0.58              0.27         0.037
G4          G5           -0.15              0.28         0.599

Table 9. The description of the criteria

Criteria         Description

Client/Web       The type of museum system.

Device           The operating device of museum system.

Location-aware   Museum system provides visitors with services
                 according to visitors' location.

Personalized     Museum system provides visitors with services
                 according to visitors' interest.

Multimedia       Museum system provides visitors with multimedia
                 presentations of exhibits.

Map              Museum system provides visitors with map of exhibit
                 areas.

Management       Museum system provides staffs with functions to
                 manage exhibits.

Table 10. Comparative analysis of museum systems

Criteria/Museum system   PGR system         MS1         MS2

Client/Web               Client             Web         Client & Web
Device                   Netbook computer   Tablet PC   Mobile phone
Location-aware           Yes                No          No
Personalized             Yes                No          No
Multimedia               Yes                Yes         Yes
Map                      No                 Yes         No
Management               Yes                No          No

Criteria/Museum system   MS3      MS4      MS5   MS6

Client/Web               Client   Client   Web   Web
Device                   PDA      PDA      PDA   Tablet PC
Location-aware           Yes      Yes      No    No
Personalized             No       Yes      No    No
Multimedia               Yes      Yes      Yes   Yes
Map                      No       Yes      No    Yes
Management               No       No       No    No
COPYRIGHT 2012 International Forum of Educational Technology & Society

Author: Huang, Yong-Ming; Liu, Chien-Hung; Lee, Chun-Yi; Huang, Yueh-Min
Publication: Educational Technology & Society
Article Type: Report
Date: Oct 1, 2012