GUI vs. TUI: engagement for children with no prior computing experience.
The two most commonly debated 'ideal' interfaces are the conventional Graphical User Interface (GUI) and the Tangible User Interface (TUI). A GUI is commonly driven by windows, icons, menus and pointers (WIMP) in a desktop environment, whereas a TUI is generally driven by tangible/tactile interaction devices, most often in augmented reality (AR) settings. Both interfaces are closely tied to usability issues, and both are therefore frequently compared and evaluated using usability measures.
Usability evaluation and testing were introduced as early as the 1980s, when  pointed out the importance of product usability, user interfaces being one of its aspects. Specifically relating to TUI,  highlighted the importance of applying Human Computer Interaction (HCI) and usability principles to the design of AR systems, which mostly incorporate TUIs. According to  and , formal evaluation of AR interfaces has commenced only recently. In addition, there has been very little user evaluation, which in our opinion is vital to the survival and perfection of computer interfaces.
User satisfaction is a major part of several well-known usability models, e.g. those found in [4-8]. According to , 'fun' is a form of satisfaction for children. Fun can be separated into three different dimensions, engagement being one of them. Very few works evaluate children's engagement when comparing these two interfaces. Furthermore, we have not come across any engagement measure involving children with no computer knowledge whatsoever. In this research, we therefore aim to explore the outcome of an evaluation process that compares the engagement capabilities of GUI and TUI for computer-illiterate children.
II. LITERATURE REVIEW
This section discusses related previous work, issues, models, attributes and methods that are relevant.
A. GUI versus TUI
There have been many debates comparing GUI and TUI. Table 1 highlights some of them.
GUI falls into the category of indirect input methods, where interaction devices like the mouse act only as intermediaries . Fitzmaurice in  and Billinghurst in  agree that the mouse in a WIMP interface is a good example of "time-multiplexed" design, in which a single device controls different functions at different points in time.
"Ubiquitous Computing" was first coined by  to signify work environments that are populated with networked computer systems of all sizes . Weiser's vision was to push the computers into the background in such a way that they became "invisible" . Since then, many researchers have been working on moving the user interface out of the screen and into the physical environment of the user, or as Ishii and Ullmer in  put it: "to change the world itself into an interface." One of these researchers, Fitzmaurice, also established the term "Graspable User Interfaces" . Ishii and Ullmer in  later came up with the term TUI, which is an interface that will augment the real physical world by coupling digital information to everyday physical objects and environments. TUI falls in the category of direct input methods , and is an example of "space-multiplexed" design  .
Back in 1996,  argued that the direct manipulation of GUI had not evolved or changed much despite being first introduced in the 1970s . Many would agree that GUI might have reached its peak, as the concept has remained largely unchanged to this day. On the other hand, Ishii and Ullmer note that interactions between people and cyberspace are now largely confined to traditional GUI-based boxes sitting on desktops or laptops (i.e. notebook computers). Interaction with these GUIs is separated from the ordinary physical environment within which we live and interact. Ishii and Ullmer find that GUIs fall short of embracing the richness of human senses and skills that people have developed through a lifetime of interaction with the physical world .
Ullmer and Ishii in  and Feltham in  both agree that a TUI both represents and controls the data of the computational activity, while manipulation with a mouse and keyboard is bound to lack representational significance relative to the data and task. Therefore, more and more researchers are developing TUIs as alternatives to traditional GUIs to meet the need for a more natural and direct interaction with computers . From Table 1, [18-24] reported successes of TUIs in their respective experimental designs and setups. However, Marshall in  claims that:
"While there are many claims made about the benefits of tangibles compared with other kinds of interfaces (e.g., GUIs, speech) we really do not know why, how or whether they can be substantiated. The user studies that have been carried out have been largely informal evaluations that tend to be positive, i.e., users like them and find them easy to use. However, the results from the few controlled experiments that have been carried out have revealed no difference in performance between GUIs and TUIs."
Despite research promoting the advantages of TUI, we believe that TUIs should perhaps not be prioritized in certain cases. Being "space-multiplexed" interfaces, TUIs come with several trade-offs, namely a lack of flexibility and portability, since they are what we call "application-specific" interfaces. Jacob et al. in  mention this as one of the trade-offs in the framework design of Reality-Based Interaction (RBI). In , the trade-off is termed "Reality vs. Versatility": a single GUI-based system can be used to perform a variety of tasks such as editing films, writing code, or chatting with friends, whereas a TUI system, such as the Tangible Video Editor by , will only let you complete a single task (i.e. edit video clips) while allowing for a higher degree of realism . It is therefore understood that TUIs offer tangibility and intuitiveness compared to GUIs but fall short on portability and flexibility.
Even though many authors in Table 1 exhibit TUI as better than GUI, a few comparative studies show otherwise. Ploderer in  expected TUI to outperform GUI on a few usability measures. However, two of his hypotheses failed when his experiments produced a completely different set of quantitative results : TUI lagged behind GUI in efficiency and learnability . Ploderer's results are the opposite of Wren and Reynolds' work in , which showed TUI to be more learnable than GUI . On the other hand, TUI has been shown to be better in terms of fun or enjoyment . Xie et al. in , conversely, reported no significant differences in children's enjoyment when comparing GUI, TUI and a physical model. Many questions could be raised by these contradicting results, despite their implementation in different setups and environments.
B. Prior Experience in GUI versus TUI
'Prior experience' here refers to prior experience or knowledge of a product before using it . Prior experience is indeed one of the most important factors in measuring users' performance. Generally, the experience in question is prior experience of using a computer interface, given that GUI and TUI are both computer interfaces. Prior experience can indicate the level of a person's computer performance due to familiarity. Since different users might have different sets of prior experiences, it is too optimistic to measure users' level of computer performance through self-reported data gathering . Even though many researchers have gathered and reported usability data using the self-reported approach, Kim and Maher in  questioned the measures' validity due to the subjective nature of self-reported data. Nielsen in  has also stated that
"A common aspect of both questionnaires and interviews is that one cannot necessarily trust all the users' answers. People have a tendency to give the replies they think they ought to give, especially to sensitive questions where certain answers may be embarrassing or may be deemed socially unacceptable."
From Table 1, most of the prior experience data were gathered through self-reported data collection; the others did not even consider the importance of prior experience. None of the experimenters tried using performance metrics  to gather information about prior experience, or to replace self-reported metrics, which are mostly implemented through query techniques.
While there are researchers who determine prior experience with subjective measurements, most samples selected in their works were reported to be users with computing experience, except for , who had two users with no computing experience; even then, the integrity and validity of those data are questionable due to self-reported metrics. Since most experiments on GUI versus TUI aim to find differences between the two, we stress that it is very important to minimize elements of bias. Most samples selected for the experiments in Table 1 were reported to have prior computing experience, specifically in GUI, while information on prior experience with TUI was unknown.
None of these studies focused solely on users with no prior computing experience. Hanna et al. in  suggested that researchers screen for and select children who have at least some experience with a computer, and also exclude children who have too much computer expertise (unless they are the target audience). There has therefore not been any comparison in the area of GUI versus TUI using samples of people with no experience of either interface. It might be difficult to implement this type of comparative study when sample options are limited (we are now a technology-driven society). Relating to that,  reported that in the United States, in a survey of 145 parents of 2- to 3-year-olds, these young children spent an average of 17 minutes on the computer, 19 minutes playing video games, and 5 minutes on the internet daily. The numbers might be higher these days, since this report was taken eight years ago.
It should therefore be understandable how difficult it is to find computer-illiterate children for such a study. However, in this research we were interested in the outcome of using samples not yet exposed to either interface, as this minimizes bias even further. Instead of using self-reported metrics, in this research we screened and selected the samples using performance metrics.
C. Satisfaction and Engagement in GUI versus TUI
Satisfaction is often treated as a subjective measure of usability, since its measurement involves human emotions and feelings, which fluctuate and vary enormously among individuals (see Figure 1). According to Tullis and Albert in , satisfaction is the degree to which the user was happy with his or her experience while performing the task. The most common way to measure users' satisfaction is through self-reported metrics  like questionnaires and interviews. Tullis and Albert in , Nielsen in , Dix et al. in  and Read et al. in  all endorse the use of rating scales (closed-ended questionnaires). To name just a few, [17-18],  and  have all used rating scales in measuring users' satisfaction.
[FIGURE 1 OMITTED]
In the case of evaluations involving children, different approaches to satisfaction metrics have been used. According to , children are not the same as adults; their motivations, desires and expectations are different. Hanna et al. suggested using children's physical expressions, like frowns and yawns, as one of the usability metrics . On the other hand,  related 'fun' to satisfaction in the book "Funology", which bridges usability and enjoyment.
In fact, according to , fun is one manifestation of what adults call 'satisfaction'. It may be inappropriate to ask a young child how satisfied he or she was with a product . It is, however, more practical to ask children questions like "How much fun did you have?" and "How much did you enjoy it?". Replacing the word "satisfaction" with "fun", Read et al. in  elaborated on the three dimensions of fun: expectations, engagement and endurability. The metrics used in all three dimensions, however, are based heavily on the idea of a rating scale. Read et al. in  developed a 'Fun Toolkit' exclusively to measure children's satisfaction. All three dimensions of fun, according to , have their own specified tools for metric measurement (see Figure 2). According to , engagement in particular can be measured through observation methods .
[FIGURE 2 OMITTED]
It can be implemented by recording facial expressions or time-on-tasks. Since facial expressions might be prone to subjectivity, this research chose time-on-tasks as the method of choice. Xie et al. have validated the use of performance measures for measuring children's engagement . In Xie's work, time-on-tasks is used to register the subsequent playtime in order to see how long the children stayed interested in their puzzle games .
D. Related Work
In , we have a clear comparison involving children. Xie et al. presented the results of an exploratory comparative study that investigated the relationship between three different interface styles and school-aged children's enjoyment and engagement while doing puzzles . Xie et al. compared TUI, GUI and the physical (traditional) puzzle interface. The research suggested that there were no significant differences in enjoyment across the three interfaces. However, TUI was seen to outperform GUI in terms of completion time and engagement .
The children were tested in pairs. All samples claimed, through self-reported data, to have prior experience in computing. We believe that in this case TUI might outperform GUI in engagement due to what  refers to as the "wow" factor: students might be interested and curious about something new and unfamiliar at first, but revert to less attentive behaviour once the "wow" factor has subsided. Murray and Barnes in  define the "wow" factor as encompassing both extremely positive and extremely negative initial reactions of the user towards a software package. This immediate and instinctive evaluation can colour the user's opinion of the program as a whole, even on a medium- to long-term basis . Since Xie's work in  involved mostly children with computing experience, their interest in GUIs might have decayed past the "wow" stage, hence reducing their engagement.
III. EVALUATION DESIGN
This section highlights the main aspects of the evaluation design used in the research.
A. Research Hypotheses
We believe that, in terms of enjoyment (i.e. engagement) evaluation, there will be no significant differences between GUI and TUI for sampled children who have no prior computing or GUI experience. Engagement is measured with time-on-tasks during the short between-sessions break (see Figure 7). Since the sampled children have no prior experience with either interface, it is believed that they might be interested and curious about something new and unfamiliar at first ; this may foster engagement.
Comparative results by  show TUI to have better engagement than GUI. However, the samples producing those results were mostly habitual GUI users who might have reverted to less attentive behaviour towards GUI. In this research we would therefore like to show evidence that computing experience significantly affects the results of the experiment. With samples that are 'clean' in terms of GUI and TUI knowledge, it is believed that GUI will yield a similar amount of engagement time to TUI.
B. Interface Content for Evaluation Process
Dunser et al. in  stated that TUI has the advantage of being usable in 3D learning and construction environments, which we believe is also true for GUI. The content therefore must involve 3D visualization and interaction to exploit this advantage. Since our sample consists of 8-year-olds (Standard 2 in Malaysian primary schools), a Standard 3 mathematics topic called "3-D Shapes" (see Figure 3) was chosen, because it promotes 3D learning and had not yet been formally taught to the samples in class. This makes it a good topic for their initial exposure to 3D objects. All content is in the Malay language.
The contents were conveyed in two different interfaces: GUI and TUI. Both interfaces provided the same object and operation menus. Five objects were chosen for this experiment: cube, cone, cylinder, pyramid and sphere.
[FIGURE 3 OMITTED]
Each of the above 3D Objects can be manipulated using these five operations:
* Rotate--allows the children to view an object from multiple angles while it is animated to rotate by itself.
* Wireframe--allows the children to convert solid objects to wireframe so that they can count the edges.
* Extract--allows the children to extract the objects to a 2D object diagram where they can count the number of faces.
* Say it!--allows the children to hear the name of the object.
* Reset--allows the children to reset the object to its initial state.
These operations were mapped to the different interaction capabilities in GUI and TUI.
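The five operations above are interface-independent: both front ends expose the same operation set and differ only in how each operation is triggered (icon clicks in the GUI, fiducial-marker controllers in the TUI). A minimal sketch of this shared operation set, in Python; all class, method and table names here are our own illustration, not the study's actual Flash/ARToolkit code:

```python
# Illustrative sketch only. Both interfaces share one operation table and
# differ only in the input event that triggers each entry.

class Shape3D:
    def __init__(self, name):
        self.name = name        # e.g. "cube"
        self.rotating = False   # Rotate: animate self-rotation
        self.wireframe = False  # Wireframe: show edges for counting
        self.extracted = False  # Extract: unfold into a 2D face diagram

    def rotate(self):
        self.rotating = True

    def to_wireframe(self):
        self.wireframe = True

    def extract(self):
        self.extracted = True

    def say_it(self):
        return self.name        # stands in for playing the object's name aloud

    def reset(self):
        self.__init__(self.name)  # back to the initial solid, static state

# A single dispatch table serves both the GUI and TUI front ends.
OPERATIONS = {
    "rotate": Shape3D.rotate,
    "wireframe": Shape3D.to_wireframe,
    "extract": Shape3D.extract,
    "say_it": Shape3D.say_it,
    "reset": Shape3D.reset,
}
```

Keeping one operation table for both interfaces would ensure that the two conditions differ only in interaction style, not in functionality.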
C. GUI Design
The GUI application (see Figure 4) was developed using Adobe Flash CS4 with Away3D as the object modeller. The design of the GUI is guided by Nielsen's heuristics. It is a straightforward menu design: the top menu, left to right, shows the five objects that can be selected for display on the centre panel, while down the right side is the menu of the five operations for manipulating a selected object. At centre bottom there are also six rotation controls that let samples rotate an object freely about the three rotation axes (yaw, tilt and roll). Each clickable icon has reflective affordances: it changes image once the cursor is over it, giving users feedback and the impression that the icon responds to interaction.
[FIGURE 4 OMITTED]
D. TUI Design
As for TUI (see Figure 5), the application was developed with ARToolkit 2.72, an open source AR builder. MD2 Editor was used to develop the structure and animation of the five different 3D objects.
[FIGURE 5 OMITTED]
The three spatial heuristics by  guide the development of the TUI. The TUI design adapts the concepts of 'interactive surface' and 'interactive table' [39-40]. Unlike the GUI, the TUI does not use a mouse or keyboard for any of its operations; it uses pattern recognition on fiducial markers to display and operate objects via tangible devices. The TUI kit for this experiment consists of five "shape viewers" and five "function controllers". Each "shape viewer" and "function controller" is equipped with a fiducial marker that the application recognizes to perform its specific job: "shape viewers" display an object, and "function controllers" manipulate displayed objects.
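The marker-to-role binding described above can be sketched as a simple dispatch. This is an illustration of the idea only, not the ARToolkit API; all marker IDs and names are hypothetical. In ARToolkit, each fiducial pattern is trained in advance and reported with an identifier when detected in a camera frame:

```python
# Hypothetical marker-ID tables: five "shape viewers" and five
# "function controllers", mirroring the tangible kit described above.
SHAPE_VIEWERS = {0: "cube", 1: "cone", 2: "cylinder", 3: "pyramid", 4: "sphere"}
FUNCTION_CONTROLLERS = {5: "rotate", 6: "wireframe", 7: "extract", 8: "say_it", 9: "reset"}

def handle_visible_markers(marker_ids):
    """Map the marker IDs detected in one camera frame to the objects to
    display and the operations to apply to them."""
    shown = [SHAPE_VIEWERS[m] for m in marker_ids if m in SHAPE_VIEWERS]
    ops = [FUNCTION_CONTROLLERS[m] for m in marker_ids if m in FUNCTION_CONTROLLERS]
    return shown, ops
```

For example, holding the cube viewer next to the extract controller would surface both markers in the same frame, and the application would unfold the cube into its 2D face diagram.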
E. Sample Screening and Segmentation
The targeted participants were 8-year-old school children (Standard 2) in a local Malaysian primary school who did not have any prior computing experience (neither GUI nor TUI). As explained by Jacob et al. in , the GUI or WIMP consists of four main components: windows, icons, menus and pointer. A regular GUI user will therefore be able to operate these four components easily. The research motive here is to find those who do not know how to operate them.
To achieve that, five basic GUI tasks were given, containing instructions to operate the four components (see Figure 6). After task 5, samples were shown a TUI fiducial marker handle and asked whether they had seen one before. This step determines whether the students were familiar with either interface.
[FIGURE 6 OMITTED]
All students were selected from a primary school in a rural area of Malaysia. We first gathered information from each class of the school on how many 8-year-old students there were. Simple demographic profiling was done, collecting the name and computing experience of each student through an informal set of questionnaires. Information was gathered from 131 students during this process. Next came the sample screening process with the five basic GUI tasks.
All instruction and communication in this process was in Malay, the official language used in the school. There were three screening stations at a time, each with a facilitator-student pair, to speed up the process. The stations were placed at least 15 metres apart to avoid disturbance and noise, and were set up in the school's computer lab, fully air-conditioned and away from the daily classrooms. Students were called one after another to each respective station, and each student performed the screening test individually.
One facilitator assisted one student at a time by verbally instructing the student for each task. During this session, 92 of the 131 students showed up for screening. Of the 92 students screened, only 32 failed all five tasks given; these 32 students had also never seen a TUI fiducial marker before. In percentage terms, about 35% of the 92 students were GUI-illiterate. 12 were male students and the other 20 were female. All students were able to understand, read and write Malay proficiently.
The selected 32 students were divided into two groups, each assigned to a respective user interface: one for GUI and the other for TUI. There were therefore 16 samples experimenting with GUI and 16 with TUI. Each sample experienced only one interface throughout the whole evaluation process. Even though there are uncertainties about gender differences, we believe it will lead to better data collection if an aspect as simple as gender is taken into account. Replicating the approach used by Wren and Reynolds in , the samples were divided in a balanced manner (in terms of gender) between the two interfaces, as shown in Table 2.
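The screening and segmentation rules can be summarized in a short sketch. The record layout and field names are our own assumptions; the selection criterion (failing all five GUI tasks and never having seen a fiducial marker, i.e. a performance metric rather than self-report) and the gender-balanced split follow the procedure described above:

```python
# Sketch of the selection and segmentation rules. Field names are
# hypothetical; the criteria mirror the screening procedure in the text.

def is_computer_illiterate(child):
    """Selected only if the child failed all five GUI tasks and had
    never seen a TUI fiducial marker."""
    return not any(child["task_passed"]) and not child["seen_marker"]

def assign_balanced(children):
    """Alternate within each gender so GUI and TUI get the same gender mix."""
    groups = {"GUI": [], "TUI": []}
    for gender in ("M", "F"):
        subset = [c for c in children if c["gender"] == gender]
        for i, c in enumerate(subset):
            groups["GUI" if i % 2 == 0 else "TUI"].append(c)
    return groups
```

Applied to the 12 boys and 20 girls selected here, this alternation would yield 6 boys and 10 girls in each interface group.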
F. Experiment Sequences
Next was the usability evaluation phase, which took a month to complete (see Figure 7). The first two weeks were used for the screening and segmentation. GUI and TUI engagement evaluation started on the third week and ended in the fourth. Due to agreements with the school, experiments could only be carried out from Monday to Wednesday each week. Each sample performed two sessions or sets of tasks each week. All samples were scheduled for the same time, same day in each week.
These arrangements ensured all samples had a fair and complete cycle (a one-week break). In week 4, the same students were asked to perform two sets of tasks on the same day and time as in week 3. Each facilitator was given a stopwatch, a pen and a score sheet. In the first session, each facilitator first demonstrated how to use the respective interface. Each application was demonstrated only once, moving directly to the set of tasks. There were two sets of tasks each week, and each set contained 7 tasks.
Between the two sets, a 3-minute break was given. Referring to Figure 7 (grey boxes), the short break between Set 1 and Set 2 was when engagement data were registered; the same applies to the short break between Sets 3 and 4. After the completion of Set 1, the facilitator informed the child that he or she could take a 3-minute break but could also stay and "play" with the interface. If the child chose to continue playing, time registration for engagement continued until the child left the computer or until the end of the 3 minutes, whichever came first. Time registration was done using the time-on-tasks method, as explained in .
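The registration rule for the break can be stated compactly. This is only a restatement of the rule above as a sketch, with the 3-minute cap as its single parameter:

```python
# Engagement time-on-tasks rule for one between-set break: the clock runs
# from the moment the child chooses to keep playing until the child leaves
# the station or the break ends, whichever comes first.

BREAK_SECONDS = 180  # length of the between-set break (3 minutes)

def engagement_time(stayed_to_play, seconds_until_left):
    """Engagement time (in seconds) registered for one break."""
    if not stayed_to_play:
        return 0.0
    return min(seconds_until_left, BREAK_SECONDS)
```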
[FIGURE 7 OMITTED]
As for the method of time registration, this research did not use an automated method, because the interaction natures of GUI and TUI are completely different. Most automated time-registration programs start and stop the clock through hardware triggers like a mouse click or a key press. This could not be achieved with TUI, since it relies on the users' physical actions, which do not trigger a software clock directly. The stopwatch time-recording devices used have these specifications:
* Digital chronograph function
* Made by Casio Watch Company, Japan, with a degree of precision sufficient for time-on-tasks registration
Even though the precision of non-automated methods is lower than that of an automated approach, Tullis and Albert in  have stated some good rules to help improve timing precision. We therefore adapted these rules and procedures to improve the time registration process. For engagement, after the facilitator told a child that he could either rest or continue playing, the facilitator observed the choice. If the child chose to stay and continue playing, the facilitator started the clock and stopped it when the child left the station. The facilitator would walk 10 metres away and observe from that spot, to avoid any pressure or disturbance while the child was within his own time.
IV. EXPERIMENT RESULTS AND DISCUSSIONS
The children's engagement was measured twice, in two separate weeks, between two consecutive sessions each time. Time was registered separately for both weeks and compared between GUI and TUI users for each week. Engagement time was registered in seconds, and the means in seconds for the two interface groups were compared. Figure 8 shows the unpaired t-test comparative results for week 1. In the first week, there was no significant difference between the means of GUI (mean engagement time: 98.5 seconds) and TUI (mean engagement time: 105 seconds).
[FIGURE 8 OMITTED]
In this group of samples with no prior computing (GUI or TUI) experience, we found no significant difference in engagement between GUI and TUI in week 1 (t=0.212, d.f.=30, p>0.05). Similarly, there was no significant difference in the second week's engagement between the means of GUI (mean engagement time: 134 seconds) and TUI (mean engagement time: 134 seconds) (see Figure 9).
[FIGURE 9 OMITTED]
As in week 1, there was no significant difference in engagement between GUI and TUI in week 2 (t=0.0199, d.f.=30, p>0.05). Through formal observation, most of the children seemed to be engaged with both interfaces. Some photographs taken during the research are shown in Figures 10 through 13. From the results, our research hypotheses cannot be rejected. The results differ from previous work comparing GUI and TUI in terms of engagement measures, regardless of differences in experimental setups. They suggest that children with no prior computing experience show similar engagement and interest when exposed to either interface for the first time. These results also directly highlight prior experience as an important factor to take into consideration in user testing and evaluation. We plan to apply the same experimental design to bigger and more diverse populations.
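For reference, the unpaired (independent two-sample, pooled-variance) t statistic underlying these comparisons can be computed as below; with 16 children per group, the degrees of freedom are 16 + 16 - 2 = 30, as reported. Any data passed to this sketch would be illustrative, not the study's raw measurements:

```python
# Unpaired t-test (pooled variance), using only the Python standard library.
from statistics import mean, variance

def unpaired_t(sample_a, sample_b):
    na, nb = len(sample_a), len(sample_b)
    # Pooled variance assumes both groups share one population variance.
    pooled = ((na - 1) * variance(sample_a)
              + (nb - 1) * variance(sample_b)) / (na + nb - 2)
    t = (mean(sample_a) - mean(sample_b)) / (pooled * (1 / na + 1 / nb)) ** 0.5
    return t, na + nb - 2  # t statistic and degrees of freedom
```

The resulting t value is then compared against the critical value for the chosen significance level at d.f. = 30; small |t| values, as observed in both weeks, fail to reject the null hypothesis of equal means.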
[FIGURE 10 OMITTED]
[FIGURE 11 OMITTED]
[FIGURE 12 OMITTED]
[FIGURE 13 OMITTED]
V. CONCLUSION
In this paper, we have highlighted the impact of GUI and TUI on children with no prior computing experience in terms of engagement measures. Despite many claims by previous researchers that TUI is better than GUI in terms of usability, most experiments were carried out on computer-literate subjects, or without taking their possible prior computing experience into consideration. We have emphasized three major deliverables. First is the implementation of performance metrics (time-on-tasks) during sample screening, where sample selection is carried out in a specific, controlled environment; this ensures better data reliability than self-reported metrics. Second is performing a GUI versus TUI comparative study with samples that have no prior computing experience. Third are the results on children's engagement obtained through the entire evaluation process.
ACKNOWLEDGMENT
The researchers would like to extend their gratitude to the headmaster, teachers and students of Sekolah Kebangsaan Gombak Utara for their approval, patience and cooperation in allowing this research to take place. Special thanks also to all lecturers and students of Universiti Tenaga Nasional who have been involved directly or indirectly in this research.
REFERENCES
 S. Rosenbaum, "Usability Evaluations vs. Usability Testing: When and Why?" IEEE Transactions on Professional Communication, vol. 32, no. 4, December 1989, pp. 210-216.
 A. Dunser, R. Grasset, H. Seichter, M. Billinghurst, "Applying HCI principles to AR systems design," MRUI '07--In Proceedings of the 2nd International Workshop on Mixed Reality User Interfaces: Specification, Authoring, Adaptation, Charlotte, NC, USA, March 11, 2007.
 A. Dunser, R. Grasset, M. Billinghurst, "A survey of evaluation techniques used in augmented reality studies," International Conference on Computer Graphics and Interactive Techniques. ACM SIGGRAPH ASIA 2008 courses, Article No. 5, Singapore, 2008.
 A. J. Dix, J. Finlay, G. Abowd, R. Beale, "Human-Computer Interaction," Third Edition, Prentice Hall, 2004.
 J. Nielsen, "Usability Engineering," Academic Press, 1993.
 International Standards Organization-ISO 9241-11 1998, Ergonomic requirements for office work with visual display terminals (VDTs); Part 11-Guidance on usability.
 A. Abran, A. Khelifi, W. Suryn, A. Seffah, "Consolidating the ISO usability models," Proceedings of 11th International Software Quality Management Conference (Springer), Glasgow, Scotland, UK, 2003
 A. Abran, A. Khelifi, W. Suryn, A. Seffah, "Usability Meanings and Interpretations in ISO Standards," Software Quality Journal, vol. 11, no. 4, 2003, pp. 325-338.
 J. C Read, S. J. MacFarlane, C. Casey, "Endurability, Engagement and Expectations: Measuring Children's Fun," Proceedings of Interaction Design and Children, Shaker Publishing, Eindhoven, The Netherlands, 2002, pp. 189-198.
 A. N. Antle, M. Droumeva, D. Ha, "Thinking with hands: an embodied approach to the analysis of children's interaction with computational objects," Proceedings of the 27th international conference extended abstracts on Human factors in computing systems, Boston, MA, USA, 2009, pp. 4027-4032.
 G. Fitzmaurice, "Graspable User Interfaces," Ph.D. Thesis, Department of Computer Science, University of Toronto, 1996.
 M. Billinghurst, "Usability Testing of Augmented / Mixed Reality Systems," International Conference on Computer Graphics and Interactive Techniques. ACM SIGGRAPH ASIA 2008 courses, Singapore, 2008.
 M. Weiser, "The computer for the 21st century," Scientific American, vol. 265, no. 3, 1991, pp. 94-104.
 D. Svanaes, W. Verplank, "In search of metaphors for tangible user interfaces," Proceedings of DARE 2000 on Designing Augmented Reality Environments, New York, USA, 2000, pp. 121-129.
 H. Ishii, B. Ullmer, "Tangible bits: towards seamless interfaces between people, bits and atoms," Proceedings of the SIGCHI conference on Human factors in computing systems. Atlanta, Georgia, United States, 1997, pp. 234-241.
 W. O. Galitz, "The Essential Guide to User Interface Design: An Introduction to GUI Design Principles and Techniques," Third Edition, Wiley Publishing, Indianapolis, Indiana, 2007.
 B. Ploderer, "Tangible User Interfaces: Potentials Inherent in Tangible User Interfaces for Simplified Handling of Computer Applications," Master's Thesis, University of Applied Sciences FH JOANNEUM, Graz, Austria, 2005.
 C. R. Wren, C. J. Reynolds, "Parsimony and Transparency in Ubiquitous Interface Design," Ubiquitous Computing: Adjunct Proceedings, 2004, pp. 31-32.
 M. J. Kim, M. L. Maher, "Comparison of Designers Using a Tangible User Interface and a Graphical User Interface and the Impact on Spatial Cognition," Key Centre of Design Computing and Cognition, University of Sydney, Australia, 2006.
 M. J. Kim, M. L. Maher, "The Impact of Tangible User Interfaces on Designers' Spatial Cognition," Human-Computer Interaction: A Journal of Theoretical, Empirical, and Methodological Issues of User Science and of System Design, vol. 23, no. 2, 2008, pp. 101-137.
 H. Kaufmann, A. Dunser, "Summary of Usability Evaluations of an Educational Augmented Reality Application," Proceedings of the 2nd international conference on Virtual reality (ICVR'07), Beijing, China, 2007, pp. 660-669.
 K. Sitdhisanguan, N. Chotikakamthorn, A. Dechaboon, P. Out, "Comparative Study of WIMP and Tangible User Interfaces in Training Shape Matching Skill for Autistic Children," TENCON 2007 - IEEE Region 10 Conference, Oct. 30-Nov. 2, 2007, pp. 1-4.
 L. Xie, A. N. Antle, N. Motamedi, "Are Tangibles More Fun? Comparing Children's Enjoyment and Engagement Using Physical, Graphical and Tangible User Interfaces," TEI'08--Proceedings of the Second International Conference on Tangible and Embedded Interaction, Bonn, Germany, 18-20 Feb 2008, pp. 191-198.
 J. Quarles, S. Lampotang, I. Fischler, P. Fishwick, B. Lok, "Tangible User Interfaces Compensate for Low Spatial Cognition," IEEE Symposium on 3D User Interfaces (3DUI '08), Reno, Nevada, USA, 2008, pp. 11-18.
 B. Ullmer, H. Ishii, "Emerging frameworks for tangible user interfaces," IBM Systems Journal, vol. 39, no. 3, 2000, pp. 915-931.
 F. G. Feltham, "Do the Blocks Rock: a Tangible Interface for Play and Exploration," OZCHI 2008--Proceedings of the 20th Australasian Conference on Computer-Human Interaction: Designing for Habitus and Habitat, Cairns, QLD, Australia, December 8-12, 2008, pp. 188-194.
 P. Marshall, S. Price, Y. Rogers, "Conceptualising tangibles to support learning," Proceedings of the 2003 conference on Interaction design and children, Preston, England, 2003, pp. 101-109.
 P. Marshall, "Do tangible interfaces enhance learning?," Proceedings of the 1st international conference on Tangible and embedded interaction (TEI '07), Baton Rouge, LA, USA, 2007, pp. 163-170.
 R. J. K. Jacob, A. Girouard, L. M. Hirshfield, M. S. Horn, O. Shaer, E. T. Solovey, J. Zigelbaum, "Reality-based interaction: a framework for post-WIMP interfaces," Proceeding of the twenty-sixth annual SIGCHI conference on Human factors in computing systems, Florence, Italy, 2008, pp. 201-210.
 J. Zigelbaum, M. Horn, O. Shaer, R. J. K. Jacob, "The tangible video editor: collaborative video editing with active tokens," TEI '07--Proceedings of the 1st International Conference on Tangible and Embedded Interaction, Baton Rouge, Louisiana, 2007, pp. 43-46.
 T. Tullis, B. Albert, "Measuring the User Experience : Collecting, Analyzing, and Presenting Usability Metrics," Morgan Kaufmann, March 31, 2008.
 L. Hanna, K. Risden, K. J. Alexander, "Guidelines for usability testing with Children," Interactions (September + October), 1997, pp. 9-14.
 E. A. Wartella, J. H. Lee, A. G. Caplovitz, "Children and interactive media: Research Compendium Update," University of Texas at Austin, November 2002, available from URL: www.markle.org/downloadable_assets/cimcomp_update.pdf (online article), accessed 17 May 2010.
 M. Fjeld, J. Fredriksson, M. Ejdestig, F. Duca, K. Botschi, B. Voegtli, P. Juchli, "Tangible User Interface for Chemistry Education: Comparative Evaluation and Re-Design," CHI 2007--Proceedings of the SIGCHI conference on Human factors in computing systems, San Jose, California, USA, April 28-May 3, 2007, pp. 805-808.
 M. Blythe, C. Overbeeke, A. F. Monk, P. C. Wright (eds), "Funology: From Usability to Enjoyment," Kluwer Academic Publishers, Dordrecht, 2005.
 G. Beauchamp, J. Parkinson, "Beyond the 'wow' factor: developing interactivity with the interactive whiteboard," School Science Review, vol. 86, no. 316, 2005, pp. 97-104.
 L. Murray, A. Barnes, "Beyond the 'wow' factor: evaluating multimedia language learning software from a pedagogical viewpoint," System, vol. 26, 1998, pp. 249-259.
 E. Sharlin, B. Watson, Y. Kitamura, F. Kishino, Y. Itoh, "On tangible user interfaces, humans and spatiality," Personal and Ubiquitous Computing, vol. 8, no. 5, September 2004, pp. 338-346.
 B. Boussemart, S. Giroux, "Tangible User Interfaces for Cognitive Assistance," AINAW 07--Proceedings of the 21st International Conference on Advanced Information Networking and Applications Workshops--Volume 02, 2007, pp. 852-857.
 B. Ullmer, H. Ishii, R. J. K. Jacob, "Token+Constraint Systems for Tangible Interaction with Digital Information," ACM Transactions on Computer-Human Interaction (TOCHI), vol. 12, no. 1, 2005, pp. 81-118.
Authors: Lim Kok Cheng, Chen Soong Der, Manjit Singh Sidhu, Ridha Omar
College of Information Technology, Universiti Tenaga Nasional, Selangor, Malaysia
e-mail: firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, email@example.com
Table 1. GUI versus TUI.

Authors | Interfaces Compared | Usability Evaluation Metrics | Results
Ploderer | GUI & TUI | Efficiency, Learnability, Fun | GUI outperforms TUI in Efficiency and Learnability; TUI outperforms GUI in Fun.
Wren & Reynolds | GUI & TUI | Learnability | TUI is more learnable than GUI.
Kim & Maher | GUI & TUI | Spatial Cognition | TUI improves spatial cognition, increasing subjects' problem-solving behaviours.
Kaufmann & Dunser | GUI & TUI | Controllability, Learnability, Usefulness, Satisfaction, Feedback, Menu/Interface, Technical Aspects | TUI has better ratings for all the metrics except Technical Aspects.
Sitdhisanguan et al. | GUI & TUI | Relative Ease of Use (Efficiency) | TUI is easier to use than GUI.
Xie et al. | GUI, TUI & Physical (traditional) | Enjoyment, Engagement | No significant difference for Enjoyment; TUI outperforms GUI in Engagement through repeat play after completion of the first task set.
Quarles et al. | GUI, TUI & PUI | Spatial Cognition | TUI offers significant cognitive benefits to individuals with low spatial cognition.

Table 2. Balanced segmentation of samples according to interface type and gender.

Gender | Samples for GUI | Samples for TUI
Male | 6 | 6
Female | 10 | 10
Publication: Electronic Journal of Computer Science and Information Technology (eJCSIT), Jan. 1, 2011.