
Gamification Assisted Language Learning for Japanese Language Using Expert Point Cloud Recognizer.

1. Introduction

Gamification emerged in the late 1960s [1]. Because researchers have seen how effective it can be, gamification has become a major highlight since the early 2010s [2]. Gamification is a method by which a tedious activity is converted into an enjoyable one, in order to attract interest and attention, motivate, and improve performance in certain activities. It is widely used in education, economics, social studies, culture, politics, health, business ecology, and many related fields [3-8].

In general, gamification changes a person's perception of a nongame activity into a game [9]. A game can also stimulate different sensory and motor systems in each person [10], enhancing understanding and memory [11], and it can activate brain functions that improve a person's language ability [12, 13]. Human intuitive theories explain this development during play, because a player must explain the game world, analyze examples, generate counterfactual thinking, and produce effective plans [14].

On the other hand, some countries make language proficiency one of the main conditions for continuing education and for obtaining citizenship or residence [15]. In a study conducted by [16] on language education policy in China, Indonesia, Japan, the Philippines, and Vietnam, Japan was identified as a country that rejects internal linguistic diversity. This stance is governed by the myth of Japanese monoethnic identity, largely derived from the assimilation of Japanese norms that reinforce a monolingual and monocultural ideal of the Japanese state. The Meiji Restoration of 1868 then began an educational revolution in which foreign languages became a focus of education in Japan.

Currently, according to statistics from the Japan Student Services Organization (JASSO) reported in the Student Guide to Japan 2016/2017, Japan's Gross Domestic Product is the 3rd highest in the world, and Japan ranks 7th in the world and 1st in Asia in Nobel laureates, with 24 Japanese nationals having received Nobel prizes as of 2016. These factors make more and more people interested in continuing their education in Japan, as can be seen from the data for 2011-2016: the number of international students keeps increasing (see Figure 1). Data as of May 1, 2016, show that 239,287 international students were continuing their education in Japan, dominated by students from Asia (222,627 people) (see Figure 1) [17]. According to a JASSO survey, by 2016 Indonesian international students ranked 6th, after Taiwan (see Table 1) [18, 19].

Along with the increase in international students, Japan has become the 6th most popular destination country for continuing education. On the other hand, the growing number of international students and the popularity of the country bring various problems and challenges. According to [20], one of the most common problems faced by international students in Japan is language skill: of 100 international students studying in Japan, 81% said that Japanese is hard to understand [20]. Based on these results, prospective students should learn Japanese before continuing their education in Japan, both to make daily life easier and to improve their chances of being accepted at schools or universities that use Japanese as the language of daily learning.

In the learning process, Japanese has several elements that must be considered: letters (moji), vocabulary (goi), and grammar (bunpo). One of the first elements to be learned is the letters, such as hiragana, katakana, and kanji, because the Japanese writing system differs from alphabetic scripts in general [21]. In fact, Japanese letters are quite difficult to memorize because of their complex forms and writing. Therefore, many researchers compare several methods of learning Japanese so that learners can understand the language better.

One learning model is the conventional one, which uses textbooks. In general, conventional Japanese learning makes it difficult for a person to understand the material given [22]. Ref. [22] states that Japanese learning methods that use multimedia elements can raise motivation and encourage learners to keep studying Japanese and to understand the use of its grammar. However, the problem with multimedia elements is that the learner focuses more on the effects produced by the multimedia elements, such as graphics and animation, than on the Japanese content itself. Therefore, the gamification in this work makes the Japanese language the most important part of the game.

In its implementation, this educational game uses the point cloud ($P) recognizer algorithm developed by [23], because the $P recognizer offers fast computation and high accuracy. The weakness of the algorithm, however, is that it can only recognize the final pattern that has been drawn, not the process of drawing it. The algorithm is therefore developed further into the expert point cloud ($EP) recognizer, so that the player can learn to write Japanese correctly.

2. Related Work

2.1. Game Development Life Cycle (GDLC). The main stages in game development are design, prototype, production, and testing. Based on research conducted by [9, 24], the GDLC has several processes, namely, initiation, preproduction, production, testing, beta, and release, done iteratively to enable flexibility during the development process and to produce good game quality. Quality can be measured by 5 criteria, namely, fun, functional, balanced, internally complete, and accessible.

2.2. Computer-Assisted Language Learning (CALL). CALL was born in the 1960s. The foundation of the CALL framework is a continual mutual relationship between development, implementation, and evaluation, so that CALL can keep evolving [25]. The evaluation involves several considerations:

(i) Can users understand what the application is doing?

(ii) What kind of lesson content is compatible with current technological interaction, for example, in terms of reading, writing, or listening?

(iii) How well do the design elements match the user's understanding?


According to [26], CALL is generally used audiolingually: the student listens to a recording and is then asked to retell it by saying or typing an answer that has been programmed into the computer. Then, in 2016, CALL was developed using a game with Role-Playing Game Simulator (RPG Sim) gameplay, where the inferences gained in the game can be used to facilitate learning Japanese; it produces fairly good learning outcomes, and RPG Sims also have a high potential to motivate learners [27].

On review, that game still adopts the conventional way of learning and merely converts it to digital form. What attracts and motivates learners can be developed further by making the learning content the main game content. According to [28], computer studies also need to analyze linguistic input from learners to detect errors, provide corrective feedback, and give contextual instructional guidance.

2.3. $-Family Recognizer. The Unistroke ($1) Recognizer is a 2D gesture recognizer algorithm designed to read patterns quickly. As the name implies, the $1 algorithm can only be used to read a single-stroke pattern [23, 29]. The characteristics of the $1 algorithm are rotation invariance and size invariance. Rotation invariance means that a pattern drawn by the user at a different angle of tilt still produces the same reading, as long as the order in which the pattern is formed is the same. Size invariance means that a pattern drawn at a different size produces the same reading, because the reading adjusts the size of the drawn data to the dataset. The complexity of the $1 algorithm is:

$1 = O(n \cdot T \cdot R) (1)

The Multistroke ($N) Recognizer is an algorithm that reads more than one stroke, but it uses a lot of memory and results in a slow process because it permutes the strokes [23]. The complexity of the $N algorithm is:

$N = O(n \cdot S! \cdot 2^{S} \cdot T) (2)

Because this algorithm results in a slow process, the point cloud recognizer algorithm was developed.

The Point Cloud ($P) Recognizer optimizes the $N algorithm. Instead of reading a pattern stroke by stroke, $P is based on the relationship between points, so it does not require permutations and the number of strokes does not affect the complexity of the algorithm. This algorithm produces accuracy above 99% and a faster process than $N [23].

The characteristics of the $P algorithm are size invariance and direction invariance. The size invariance of $P equals that of $1, while direction invariance means that a pattern formed in a different stroke order still produces the same reading, as long as the pattern matches an existing dataset. The complexity of the $P algorithm is:

$P = O(n^{2.5} \cdot T) (3)

The explanation of the $-Family algorithm can be seen in Table 2.

2.4. Game Experience. Game experience is judged based on the emotion, thought, reaction, and behavior of players, because it is influenced by the functionality, content, service, player affinity, and value perceived by the player. Several elements serve as benchmarks, namely, user interface (UI), user experience (UX), gameplay experience (GX), and game balancing. These are evaluated using a game experience questionnaire.

The benchmarks adopt research undertaken by [30-38], specifically:

(i) User Interface (UI)

(1) Usability: the UI can be said to be usable when all features work properly and give informative feedback.

(2) Consistent: the UI can be said to be consistent when an interaction used in one place is also used elsewhere with the same model, thereby reducing short-term memory load.

(ii) User Experience (UX)

(1) Useful: the UX can be said to be useful if it meets the basic needs of the player or the game has benefits for the player.

(2) Usable: UX can be said to be usable if it can be used efficiently and easily learned.

(3) Desirable: UX can be said to be desirable if it can contribute to user satisfaction by having a design with attractive aesthetic value.

(iii) Gameplay Experience (GX)

Based on research [35], there are 2 main factors formed through a person's cognition and affect, namely, beliefs and feelings, which derive from what someone expects of an object and make it something fun for the player. Besides fun, [30] adds a criterion for gamification: the content contained in it must be worth studying (see Figure 2).

(iv) Game Balancing

(1) Game is fair: every action taken has an appropriate impact on the game, and any success or failure experienced by the player can be understood rationally.

(2) Different skill levels: this is needed to determine the satisfaction of players, where there are challenges that have different levels of difficulty in every quest faced.

3. Proposed Method

3.1. Gamification Assisted Language Learning (GALL). In general, the process of CALL is to change conventional language learning into a digital form. When combined with gamification, this potential can be maximized, since gamification has the following elements:

(i) Like a game, the main requirement of gamification is to make a person feel happy and satisfied, with intrinsic elements that contribute knowledge.

(ii) It has goals to achieve.

(iii) It limits play with rules that must be followed to achieve the goal.

(iv) It provides information about the progress made toward the objectives.

(v) It has psychological elements to motivate players. Ref. [39] identifies 6 principal perspectives on motivation that relate closely to gamification, namely, the trait perspective, the behavioral learning perspective, the cognitive perspective, the perspective of self-determination, the perspective of interest, and the perspective of emotion.

Because Computer-Assisted Language Learning (CALL) can be developed into Gamification Assisted Language Learning (GALL), GALL can maximize players' interest in learning compared to CALL.

3.2. Japanese Language Datasets. Before Japanese writing can be recognized, datasets are required to serve as the measuring tool for the assessment standard of the game used for learning. These datasets are created by projecting the initial process of line formation down to the final form of the points, and they are stored in XML format containing the (x, y) positions of the writing. As seen in Figure 3, for example, this research uses a 5 x 5 canvas and the letter written is Ku. The letter is cut according to the existing coordinates and then processed with the resample algorithms of $1 and $P, where $1 resamples each stroke into 64 equidistant points and $P resamples each letter into 32 equidistant points. A visualization of the dataset model can be seen in Tables 3 and 4, which describe the correct writing sequence and the number of strokes of each Japanese letter.
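To make the dataset layout concrete, the following is a minimal Python sketch of how one stored letter might look and be loaded. The paper only states that (x, y) positions are stored in XML, so the tag and attribute names (letter, stroke, point, name) and the loader itself are assumptions, not the author's actual format.

# Hypothetical XML layout for one dataset letter (tag/attribute names are assumptions):
# <letter name="ku" strokes="1">
#   <stroke index="0">
#     <point x="120.0" y="35.5"/>
#     ...
#   </stroke>
# </letter>
import xml.etree.ElementTree as ET

def load_letter(path):
    """Load one dataset letter as (name, strokes), each stroke a list of (x, y) tuples."""
    root = ET.parse(path).getroot()
    strokes = []
    for stroke in root.findall("stroke"):
        points = [(float(p.get("x")), float(p.get("y"))) for p in stroke.findall("point")]
        strokes.append(points)
    return root.get("name"), strokes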

3.3. Expert Point Cloud ($EP). Generally, the $-Family recognizers have the deficiency of reading a pattern based on the result that has been formed, not on the process of forming it. For learning Japanese, the process of writing a letter is very important, because the writing pattern reflects the balance and neatness of a person's writing; therefore a method is required that can read the process of writing Japanese from beginning to end.

To accomplish this, the $P algorithm is developed further into the expert point cloud ($EP) recognizer, where the algorithm used is a combination and modification of expert system methods, the unistroke ($1) recognizer from [29], and the point cloud ($P) recognizer from [23].

The expert system is used to make the Japanese recognition sequential, so that the writing procedure from beginning to end can be checked (see Tables 3 and 4). To detect every stroke (striation), the $1 algorithm is used, performing resample, rotation, scale, and translation. After all strokes have formed a letter, the $P algorithm is used to detect the writing as a whole, performing greedy cloud match, resample, scale, and translation. Examples of the $1 and $P models modified to form $EP can be seen in Figures 4-7, and an explanation of the algorithm can be seen in Tables 5-11. The overall algorithm flow can be seen in Figure 8.

3.3.1. Unistroke ($1) Recognizer Algorithm Model. The stages of the $1 algorithm are as follows:

(1) Resample. Based on the illustration in Figure 4, step (a) shows the player being asked to draw a Japanese letter; the player's input is then processed to determine whether the writing is correct or not. Letter processing starts from step (b), where the input made by the player is converted into N points according to the coordinates of the writing. After that, at step (c) the distance between the points is calculated until all points have been traversed, and the results can be seen in step (d). Step (e) shows the total path divided into 64 equidistant points. The formulas used in the resample stage are given below; a code sketch of this stage follows Eq. (8).

(a) Formula for calculating the average distance between the N = 64 resampled points, where n is the number of input points:

\mathrm{avg}\,D = \frac{1}{N - 1}\sum_{i=1}^{n} \sqrt{(x_i - x_{i-1})^2 + (y_i - y_{i-1})^2} (4)

(b) For each point, the distance d to the previous point is calculated using the following equation:

d = \sqrt{(x_i - x_{i-1})^2 + (y_i - y_{i-1})^2} (5)

Subsequently, if (D + d) \ge \mathrm{avg}\,D, a new point q is interpolated using the following equations:

q_x = x_{i-1} + \frac{\mathrm{avg}\,D - D}{d}\,(x_i - x_{i-1}) (6)

q_y = y_{i-1} + \frac{\mathrm{avg}\,D - D}{d}\,(y_i - y_{i-1}) (7)

If (D + d) < \mathrm{avg}\,D, then the accumulated distance is updated:

D = D + d (8)
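The following is a minimal Python sketch of this resample stage under Eqs. (4)-(8); it follows the procedure described above, and all function and variable names are illustrative rather than taken from the paper.

import math

def path_length(points):
    # Total length of the stroke: the sum inside Eq. (4).
    return sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))

def resample(points, n=64):
    """Resample a stroke into n points with equal spacing (Eqs. (4)-(8))."""
    interval = path_length(points) / (n - 1)   # Eq. (4): average distance avg D
    D = 0.0
    pts = list(points)
    new_points = [pts[0]]
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])      # Eq. (5): distance to the previous point
        if D + d >= interval:
            # Eqs. (6)-(7): interpolate a new point exactly at the interval
            t = (interval - D) / d
            qx = pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0])
            qy = pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1])
            new_points.append((qx, qy))
            pts.insert(i, (qx, qy))            # the new point becomes the next start
            D = 0.0
        else:
            D += d                             # Eq. (8): accumulate the remainder
        i += 1
    while len(new_points) < n:                 # guard against floating-point shortfall
        new_points.append(pts[-1])
    return new_points[:n]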

(2) Rotation. In this step, Figure 5 shows step (a), where the midpoint (centroid) of the writing produced by the resample is determined. Then in step (b) a line is drawn from the midpoint to the starting point of the writing, and the angle θ between them is determined. In step (c) a rotation is performed so that this line aligns with the x-axis, that is, until θ reaches 0°. This is done so that the writing can still be detected even though the writing canvas is tilted. The formulas used in the rotation stage are given below; a code sketch of this stage follows Eq. (13).

(a) Formula to determine the midpoint:

c_x = \frac{x_0 + x_1 + \cdots + x_n}{k} (9)

c_y = \frac{y_0 + y_1 + \cdots + y_n}{k} (10)

(b) Formula for determining angular tilt:

\theta = \operatorname{atan2}(c_y - y_0,\ c_x - x_0), \quad -\pi \le \theta \le \pi (11)

(c) Formula for rotation:

x' = (x_n - c_x)\cos\theta - (y_n - c_y)\sin\theta + c_x (12)

y' = (x_n - c_x)\sin\theta + (y_n - c_y)\cos\theta + c_y (13)
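The following is a minimal Python sketch of this rotation stage under Eqs. (9)-(13); names are illustrative.

import math

def centroid(points):
    # Eqs. (9)-(10): average of the x and y coordinates.
    k = len(points)
    return (sum(x for x, _ in points) / k, sum(y for _, y in points) / k)

def rotate_to_zero(points):
    """Rotate the stroke about its midpoint so that the angle between the
    midpoint and the first point becomes 0 (Eqs. (11)-(13))."""
    cx, cy = centroid(points)
    theta = math.atan2(cy - points[0][1], cx - points[0][0])   # Eq. (11)
    rotated = []
    for x, y in points:
        # Eqs. (12)-(13), applied with -theta so the indicative angle goes to 0.
        nx = (x - cx) * math.cos(-theta) - (y - cy) * math.sin(-theta) + cx
        ny = (x - cx) * math.sin(-theta) + (y - cy) * math.cos(-theta) + cy
        rotated.append((nx, ny))
    return rotated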

(3) Scale. In this step, scaling is performed to equalize the size of the input with the existing datasets. Initially, a bounding box is formed by drawing lines perpendicular to the coordinates of min (x, y) and max (x, y), forming a box with corners (min x, max y), (max x, max y), (min x, min y), and (max x, min y) (see Figure 6). Then the bounding box of the player's writing is compared with that of the dataset. If the player's bounding box is not the same as the dataset's bounding box, the following scaling is applied; a code sketch of this stage follows Eq. (15).

q_x = p_x\left(\frac{I_{\mathrm{width}}}{D_{\mathrm{width}}}\right) (14)

q_y = p_y\left(\frac{I_{\mathrm{height}}}{D_{\mathrm{height}}}\right) (15)
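Below is a minimal sketch of the scale stage. Eqs. (14)-(15) scale points by the ratio of the player's and dataset's bounding boxes; the sketch expresses this as rescaling a point set onto a target bounding box, and the names target_width and target_height are illustrative assumptions.

def bounding_box(points):
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return min(xs), min(ys), max(xs), max(ys)

def scale_to_box(points, target_width, target_height):
    """Scale a point set so its bounding box matches a target bounding box,
    the ratio form of Eqs. (14)-(15)."""
    min_x, min_y, max_x, max_y = bounding_box(points)
    width = max(max_x - min_x, 1e-9)     # guard against zero-width strokes
    height = max(max_y - min_y, 1e-9)
    return [(x * target_width / width, y * target_height / height) for x, y in points]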

(4) Translation. In this step the center point of the player's writing and of the dataset writing is determined, so that the center points of both writing models lie at position (0, 0), which means the player's writing coincides with the dataset's writing. Then the average distance between the player's points and the dataset's points is computed (see Figure 7). The formulas used in the translation stage are as follows:

(a) Formula to determine center point of writing:

q_x = p_x - c_x (16)

q_y = p_y - c_y (17)

(b) Formula to determine average distance between points:

d_i = \frac{1}{N}\sum_{k=1}^{N} \sqrt{(I[k]_x - D_i[k]_x)^2 + (I[k]_y - D_i[k]_y)^2} (18)

(5) Score. The accuracy score is calculated on a scale from 0 to 1 with the following formula; a code sketch covering the translation and score stages follows Eq. (19).

s = 1 - \frac{d_i^{*}}{\frac{1}{2}\sqrt{I_{\mathrm{height}}^2 + I_{\mathrm{width}}^2}} (19)
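The following is a minimal sketch of the translation and score stages under Eqs. (16)-(19); centroid() is the helper from the rotation sketch, and all names are illustrative.

import math

def translate_to_origin(points):
    """Move a point set so its center point sits at (0, 0) (Eqs. (16)-(17))."""
    cx, cy = centroid(points)               # centroid() as in the rotation sketch
    return [(x - cx, y - cy) for x, y in points]

def path_distance(candidate, template):
    """Average point-to-point distance between two equally sampled strokes (Eq. (18))."""
    n = len(candidate)
    return sum(math.dist(candidate[k], template[k]) for k in range(n)) / n

def stroke_score(candidate, template, width, height):
    """Convert the distance into a 0..1 score relative to half the canvas diagonal (Eq. (19))."""
    d = path_distance(candidate, template)
    return 1.0 - d / (0.5 * math.hypot(width, height))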

3.3.2. Point Cloud Recognizer ($P) Algorithm Model. The steps taken in the $P algorithm are not much different from those of $1: the resample, scale, and translation stages remain the same as in $1. What distinguishes it is that the distances are divided into 32 points, the comparison is calculated using a greedy cloud match, and the $P algorithm has no rotation step.

The greedy search conducted by $P looks for the minimum distance between the player's input and the datasets. The greedy search uses a distance calculation called the cloud distance. The cloud distance uses a weight variable to determine the accuracy of a comparison: as points in the player's input are paired with their nearest dataset points, the weight variable decreases. The formula used to calculate the weight variable is:

w = 1 - \frac{(i - p_0 + n) \bmod n}{n} (20)

In addition, within the cloud distance a Euclidean distance is used to calculate the distance between the point clouds. The formulas used for calculating the point cloud distance are given below; a code sketch of the greedy cloud match follows Eq. (22).

\lVert I_i - D_j \rVert = \sqrt{(I_{i,x} - D_{j,x})^2 + (I_{i,y} - D_{j,y})^2} (21)

\sum_{i} w_i \cdot \lVert I_i - D_j \rVert (22)
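The following is a minimal sketch of the greedy cloud match under Eqs. (20)-(22); the step over the starting indices and the two-direction comparison follow my reading of [23] and are simplifications, and all names are illustrative.

import math

def cloud_distance(points, template, start):
    """Greedy weighted sum of nearest-neighbour distances (Eqs. (20)-(22)),
    starting the matching at index `start`; both clouds must have the same size."""
    n = len(points)
    matched = [False] * n
    total = 0.0
    i = start
    for _ in range(n):
        best_j, best_d = -1, float("inf")
        for j in range(n):                         # nearest unmatched template point
            if not matched[j]:
                d = math.dist(points[i], template[j])
                if d < best_d:
                    best_j, best_d = j, d
        matched[best_j] = True
        weight = 1.0 - ((i - start + n) % n) / n   # Eq. (20): early matches weigh more
        total += weight * best_d                   # Eqs. (21)-(22)
        i = (i + 1) % n
    return total

def greedy_cloud_match(points, template):
    """Try several starting indices in both directions and keep the minimum distance."""
    n = len(points)
    step = max(1, int(n ** 0.5))
    best = float("inf")
    for start in range(0, n, step):
        best = min(best,
                   cloud_distance(points, template, start),
                   cloud_distance(template, points, start))
    return best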

3.3.3. Final Score of the Expert Point Cloud ($EP) Recognizer. For the assessment of overall accuracy, the following formulas are used. If the number of strokes equals 1, then the following equation is used:

F = s$1 (23)

If the number of strokes is greater than 1, then the following equation is used:

F = (s$1_1 + ... + s$1_n + s$P) / n (24)

The $EP algorithm is used as the standard for the writing performed, since the player's input is compared with the datasets that have been created, and this comparison determines the accuracy of the assessment. A code sketch that combines the stages above into the final $EP score is given below.
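The sketch below combines the earlier pieces into the $EP flow described above: check the stroke count like an expert system (a simplification of the full stroke-order check), score each stroke with $1 (Eq. (23)), score the whole letter with $P, and average the results (Eq. (24)). It relies on the helper functions from the previous sketches, and the way the $P cloud distance is normalized into a 0..1 score is an assumption, since the paper does not spell it out.

import math

def expert_point_cloud_score(player_strokes, dataset_strokes, ref_w, ref_h):
    """Hedged sketch of the $EP combination: per-stroke $1 scores plus one
    whole-letter $P score, averaged as in Eq. (24). Helper names (resample,
    rotate_to_zero, scale_to_box, translate_to_origin, stroke_score,
    greedy_cloud_match) refer to the earlier sketches."""
    if len(player_strokes) != len(dataset_strokes):
        return 0.0                        # wrong stroke count: writing procedure violated

    def normalize(stroke, n):
        s = resample(stroke, n)
        if n == 64:                       # $1 path: rotation is applied per stroke only
            s = rotate_to_zero(s)
        return translate_to_origin(scale_to_box(s, ref_w, ref_h))

    stroke_scores = [stroke_score(normalize(d, 64), normalize(t, 64), ref_w, ref_h)
                     for d, t in zip(player_strokes, dataset_strokes)]

    if len(player_strokes) == 1:          # Eq. (23): a single stroke is scored by $1 alone
        return stroke_scores[0]

    # Whole-letter $P comparison on 32-point clouds built from all strokes joined.
    player_cloud = normalize([p for s in player_strokes for p in s], 32)
    dataset_cloud = normalize([p for s in dataset_strokes for p in s], 32)
    dist = greedy_cloud_match(player_cloud, dataset_cloud)
    # Assumed normalization of the $P distance into a 0..1 score (not given in the paper).
    s_p = max(0.0, 1.0 - dist / (0.5 * math.hypot(ref_w, ref_h) * len(player_cloud)))

    # Eq. (24): average of the per-stroke $1 scores and the whole-letter $P score.
    return (sum(stroke_scores) + s_p) / (len(stroke_scores) + 1)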

4. Results and Discussion

In the RPG game created, the battle system uses a turn-based model with an active time battle (ATB) gauge: during battle, the player and the enemy attack alternately according to the ATB that has been determined. To attack, the player must write the Japanese letters correctly, and the damage received by the enemy corresponds to the accuracy of the writing.
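As an illustration of this mechanic, the small sketch below maps the writing accuracy F returned by $EP onto the damage dealt to the enemy; the base damage value and the linear scaling are assumptions for illustration, not values from the paper.

def attack_damage(accuracy, base_damage=100):
    """Hypothetical mapping from $EP writing accuracy (0..1) to damage dealt."""
    return int(base_damage * max(0.0, min(1.0, accuracy)))

# Example: a letter written with accuracy 0.85 deals attack_damage(0.85) == 85 damage.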

In general, the flow of the Japanese writing recognition system can be seen in Figure 8, which shows the process from creating the datasets through the algorithm steps up to obtaining the accuracy of the player's writing.

Then, after the game was made, the game experience evaluation and a pretest and posttest were given to 150 players to measure the improvement in the players' ability. There are 2 player categories: 46 players had learned Japanese before and 104 players had never learned Japanese. The test asks the player to write all the letters contained in Tables 3 and 4 along with their romaji.

On the other hand, to conduct the game experience evaluation, the following game experience questionnaire (GEQ) was given:

(1) Are the features in this game running well?

(a) Yes (100%)

(b) No (0%)

(2) Does displaying buttons, portals, and all elements on each screen with the same model make it easier to remember the function of the interface used?

(a) Yes (100%)

(b) No (0%)

(3) Do the tutorial and help features help you in playing?

(a) Yes (100%)

(b) No (0%)

(4) Is the system in the game easy to learn?

(a) Yes (100%)

(b) No (0%)

(5) What do you think of the aesthetics displayed in this game?

(a) Attractive (97%)

(b) Not attractive (3%)

(6) Does the game you have played help you understand Japanese?

(a) Yes (100%)

(b) No (0%)

(7) What do you think of the difficulty level of the game after playing it?

(a) Appropriate (100%)

(b) Not appropriate (0%)

(8) Do the achievements and punishments provided match the effort you put in?

(a) Appropriate (100%)

(b) Not appropriate (0%)

(9) Is the overall learning content provided worth learning?

(a) Worth learning (100%)

(b) Not worth learning (0%)

(10) What do you think of the game you have played?

(a) Fun (100%)

(b) Not fun (0%)

From the GEQ it can be concluded that the conditions for a good game experience have been met. As for aesthetics, this is a matter of the player's taste, because the developer cannot force someone to like the aesthetics of the game.

Before the GEQ, a pretest and posttest were conducted, where the pretest is done before playing and the posttest after playing; in this case the player is asked to play for one week. Here are the pretest results of the 150 players: 10 players answered all questions correctly, 36 players answered with an average of 30%-50% wrong answers, and 104 players either did not answer or gave only wrong answers.

Subsequently, here are the posttest results from the same players as the pretest: 10 players answered all questions correctly (the same players as in the pretest), 134 players answered on average 20%-100% of the questions correctly, while 6 players answered all questions incorrectly.

5. Conclusions

The inferences of this research are as follows:

(i) Japanese datasets are based on expert knowledge.

(ii) The $-Family now has a new member, the Expert Point Cloud Recognizer ($EP).

(iii) An RPG battle system using turn-based combat and ATB, plus an attack system using $EP, can attract players to learn to write Japanese.

(iv) The game that was made already meets the rules of game experience.

(v) The improvement of a person's ability depends on that person's capability, because everyone has different capabilities; it can be said that the more diligently a person learns, the more knowledge is absorbed.

(vi) A good game is one that makes the player feel happy to play it while, without realizing it, the player comes to understand the knowledge implicit in it.

(vii) Overall, within one week GALL can increase players' ability by 20% to 100%.

https://doi.org/10.1155/2018/9085179

Data Availability

The data supporting this study are the same as the data presented in the paper.

Conflicts of Interest

The author declares that there are no conflicts of interest.

References

[1] A. Deif, "Insights on lean gamification for higher education," International Journal of Lean Six Sigma, vol. 8, no. 3, pp. 359-376, 2017.

[2] M. Sailer, J. U. Hense, S. K. Mayr, and H. Mandl, "How gamification motivates: An experimental study of the effects of specific game design elements on psychological need satisfaction," Computers in Human Behavior, vol. 69, pp. 371-380, 2017.

[3] C. J. Costa, M. Aparicio, and I. M. S. Nova, "Gamification: Software Usage Ecology," Online Journal of Science and Technology, vol. 8, no. 1, 2018.

[4] J. Kasurinen and A. Knutas, "Publication trends in gamification: A systematic mapping study," Computer Science Review, vol. 27, pp. 33-44, 2018.

[5] H. Korkeila and J. Hamari, "The Relationship Between Player's Gaming Orientation and Avatar's Capital: a Study in Final Fantasy XIV," in Proceedings of the Hawaii International Conference on System Sciences.

[6] L. E. Nacke and S. Deterding, "The maturing of gamification research," Computers in Human Behavior, vol. 71, pp. 450-454, 2017.

[7] F. Xu, D. Buhalis, and J. Weber, "Serious games and the gamification of tourism," Tourism Management, vol. 60, pp. 244-256, 2017.

[8] L. J. Hiiliard, M. H. Buckingham, G. J. Geldhof et al., "Perspective taking and decision-making in educational game play: A mixed-methods study," Applied Developmental Science, vol. 22, no. 1, pp. 1-13, 2018.

[9] Yanfi, Y. Udjaja, and A. C. Sari, "A Gamification Interactive Typing for Primary School Visually Impaired Children in Indonesia," Procedia Computer Science, vol. 116, pp. 638-644, 2017.

[10] F. E. Gunawan, A. Maryanto, Y. Udjaja, S. Candra, and B. Soewito, "Improvement of E-learning quality by means of a recommendation system," in Proceedings of the 11th International Conference on Knowledge, Information and Creativity Support Systems, KICSS 2016, Indonesia, November 2016.

[11] M. B. Armstrong and R. N. Landers, "An Evaluation of Gamified Training: Using Narrative to Improve Reactions and Learning," Simulation & Gaming, vol. 48, no. 4, pp. 513-538, 2017.

[12] J. S. Hong, D. H. Han, Y. I. Kim, S. J. Bae, S. M. Kim, and P. Renshaw, "English language education on-line game and brain connectivity," ReCALL, vol. 29, no. 1, pp. 3-21, 2017.

[13] M. E. D. M. Perez, A. P. Guzman Duque, and L. C. F. Garcia, "Game-based learning: Increasing the logical-mathematical, naturalistic, and linguistic learning levels of primary school students," Journal of New Approaches in Educational Research, vol. 7, no. 1, pp. 31-39, 2018.

[14] P. A. Tsividis, T. Pouncy, J. L. Xu, J. B. Tenenbaum, and S. J. Gershman, "Human learning in Atari," in Proceedings of the 2017 AAAI Spring Symposium Series, Science of Intelligence: Computational Principles of Natural and Artificial Intelligence, 2017.

[15] E. Shohamy, Critical language testing, Language Testing and Assessment, 2017.

[16] A. Kirkpatrick and A. J. Liddicoat, "Language education policy and practice in East and Southeast Asia," Language Teaching, vol. 50, no. 2, pp. 155-188, 2017.

[17] JASSO, "Student Guide to Japan 2017-2018," 2017, http://www.jasso.go.jp/en/study_j/_icsFiles/afieldfile/2017/05/22/sgtj_2017_e.pdf.

[18] JASSO, "International Students in Japan 2016," 2017, http://www.jasso.go.jp/en/about/statistics/intLstudent/__icsFiles/afieldfile/2017/03/29/data16_brief_e.pdf.

[19] JASSO, "Student Guide to Japan 2016-2017," 2017, http://www.jasso.go.jp/id/study_j/__icsFiles/afieldfile/2016/11/30/sgtj_2016_id_2.pdf.

[20] J. S. Lee, "Challenges of international students in a Japanese university: Ethnographic perspectives," Journal of International Students, vol. 7, no. 1, pp. 73-93, 2017.

[21] T. Ogino, K. Hanafusa, T. Morooka, A. Takeuchi, M. Oka, and Y. Ohtsuka, "Predicting the reading skill of Japanese children," Brain & Development, vol. 39, no. 2, pp. 112-121, 2017.

[22] M. C. Chan, Multimedia Courseware for Learning Japanese Language Level 1 [Ph.D. thesis], UTAR, 2016.

[23] R.-D. Vatavu, L. Anthony, and J. O. Wobbrock, "Gestures as point clouds: A $p recognizer for user interface prototypes," in Proceedings of the 14th ACM International Conference on Multimodal Interaction, ICMI2012, pp. 273-280, USA, October 2012.

[24] R. Ramadan and Y. Widyani, "Game development life cycle guidelines," in Proceedings of the 2013 5th International Conference on Advanced Computer Science and Information Systems, ICACSIS 2013, pp. 95-100, Indonesia, September 2013.

[25] P. Hubbard, "Foundation of Computer-Assisted Language Learning," 2017, https://web.stanford.edu/~efs/cancourse2/CALL1.htm.

[26] N. Gunduz, "Computer assisted language learning," Journal of Language and Linguistic Studies, vol. 1, no. 2, 2005.

[27] S. J. Franciosi, "Acceptability of RPG Simulators for Foreign Language Training in Japanese Higher Education," Simulation & Gaming, vol. 47, no. 1, pp. 31-50, 2016.

[28] T. Heift and M. Schulze, "Tutorial computer-assisted language learning," Language Teaching, vol. 48, no. 4, pp. 471-490, 2015.

[29] J. O. Wobbrock, A. D. Wilson, and Y. Li, "Gestures without libraries, toolkits or training: A $1 recognizer for user interface prototypes," in Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology, UIST 2007, pp. 159-168, USA, October 2007.

[30] S. Kim, K. Song, B. Lockee, and J. Burton, Gamification in Learning and Education, Springer International Publishing, Cham, 2018.

[31] D. P. Kristiadi, Y. Udjaja, B. Supangat et al., "The effect of UI, UX and GX on video games," in Proceedings of the 2017 IEEE International Conference on Cybernetics and Computational Intelligence (CyberneticsCom), pp. 158-163, Phuket, November 2017.

[32] E. Adams, Fundamentals of game design, Pearson Education, 2014.

[33] E. C. Contreras and I. I. Contreras, "Development of Communication Skills through Auditory Training Software in Special Education," in Encyclopedia of Information Science and Technology, pp. 2431-2441, IGI Global, 4th edition, 2018.

[34] D. Lightbown, Designing the user experience of game development tools, CRC Press, 2015.

[35] M. Kors, E. D. van der Spek, and B. A. Schouten, "A Foundation for the Persuasive Gameplay Experience," FDG, 2015.

[36] Y. Udjaja, "Ekspanpixel Bladsy Stranica: Performance Efficiency Improvement of Making Front-End Website Using Computer Aided Software Engineering Tool," Procedia Computer Science, vol. 135, pp. 292-301, 2018.

[37] H. Joo, "A Study on Understanding of UI and UX, and Understanding of Design According to User Interface Change," International Journal of Applied Engineering Research, vol. 12, no. 20, pp. 9931-9935, 2017.

[38] Y. Udjaja, V. S. Guizot, and N. Chandra, "Gamification for Elementary Mathematics Learning in Indonesia," International Journal of Electrical and Computer Engineering (IJECE), vol. 8, no. 6, 2018.

[39] M. Sailer, J. Hense, H. Mandl, and M. Klevers, "Psychological Perspectives on Motivation through Gamification," Interaction Design and Architecture(s) Journal, no. 19, pp. 28-37, 2013.

Yogi Udjaja [ID] (1,2)

(1) Computer Science Department, School of Computer Science, Bina Nusantara University, Jl. K. H. Syahdan, No. 9, Kemanggisan, Palmerah, Jakarta 11480, Indonesia

(2) Ekspanpixel, Jl. K. H. Syahdan, No. 37R, Kemanggisan, Palmerah, Jakarta 11480, Indonesia

Correspondence should be addressed to Yogi Udjaja; yogi.udjaja@binus.ac.id

Received 22 May 2018; Accepted 28 November 2018; Published 18 December 2018

Academic Editor: Michael J. Katchabaw

Caption: Figure 1: Movement of number of international students in Japan from 2011-2016.

Caption: Figure 2: Gameplay Experience Evaluation for Learning Performance [30].

Caption: Figure 3: Example formation of dataset coordinates.

Caption: Figure 4: Resample Stages.

Caption: Figure 5: Rotation Stages.

Caption: Figure 6: Scale Stages.

Caption: Figure 7: Translation Stages.

Caption: Figure 8: Expert Point Cloud Recognizer Algorithm.
Table 1: Number of International Students Based on Nations.

No.   Country/    Number of Students
      Region      2016     2015     2014     2013     2012     2011

1     China       98483    94111    94399    81884    86324    87533
2     Vietnam     53807    38882    26439    6290     4373     4033
3     Nepal       19471    16250    10448    3188     2451     2016
4     Republic    15457    15279    15777    15304    16651    17640
      of Korea
5     Taiwan      8330     7314     6231     4719     4617     4571
6     Indonesia   4630     3600     3188     2410     2276     2162
7     Sri Lanka   3976     2312     1412     794      670      737
8     Myanmar     3851     2755     1935     1193     1151     1118
9     Thailand    3842     3526     3250     2383     2167     2396
10    Malaysia    2734     2594     2475     2293     2319     2417
11    U.S.A.      2648     2423     2152     2083     2133     1456
12    Mongolia    2184     1843     1548     1138     1114     1170
13    Bangla-     1979     1459     948      875      1052     1322
      desh
14    Philip-     1332     1028     753      507      497      498
      pines
15    France      1299     1122     957      793      740      530
16    Other       15264    13881    12243    42291    33313    34098
      Total       239287   208379   184155   168145   161848   163697

Table 2: Description of $-Family Recognizer Algorithm.

Symbol   Description

n        Number of sampled points
T        Number of training samples per gesture
         type
R        Number of iterations required
S        Number of strokes in a multistroke

Table 3: Hiragana Letter.

    A   I   U   E   O

K
S
T
N
H
M
Y
R
W
N

Table 4: Katakana Letter.

    A   I   U   E   O

K
S
T
N
H
M
Y
R
W
N

Table 5: Description of Formula Resample.

Symbol        Description

avg D         Average distance between the resampled points
d             Distance between the current point and the previous point
D             Accumulated distance carried to the next point
p_i           Current point position
p_{i-1}       Position of point i - 1
q_x           New x coordinate
q_y           New y coordinate

Table 6: Description of Formula Rotation.

Symbol      Description

c_x         x coordinate of the midpoint
c_y         y coordinate of the midpoint
x_0         x coordinate of the starting point
x_1         x coordinate of the 1st point
x_n         x coordinate of the n-th point
y_0         y coordinate of the starting point
y_1         y coordinate of the 1st point
y_n         y coordinate of the n-th point
k           Number of points
theta       Indicative angle that is rotated to 0
x'          x coordinate of the writing after rotation
y'          y coordinate of the writing after rotation

Table 7: Description of Formula Scale.

Symbol           Description

q_x              New x coordinate
q_y              New y coordinate
p_x              Current x coordinate
p_y              Current y coordinate
I_width          Width of the bounding box of the player input
I_height         Height of the bounding box of the player input
D_width          Width of the bounding box of the dataset
D_height         Height of the bounding box of the dataset

Table 8: Description of Formula Translation.

Symbol      Description

q_x         New x coordinate after translation
q_y         New y coordinate after translation
p_x         x coordinate of the player input point
p_y         y coordinate of the player input point
c_x         x coordinate of the center point
c_y         y coordinate of the center point
d_i         Average distance
I           Input made by the player
D_i         The i-th dataset
k           Index of the point
N           Total number of points

Table 9: Description of Formula Score.

Symbol            Description

s                 Score
d_i^*             Distance to the i-th dataset
I_height          Height of the bounding box of the player input
I_width           Width of the bounding box of the player input

Table 10: Description of Formula Greedy Cloud Distance.

Symbol      Description

w           Weight
w_i         Weight of the i-th pairing
p_0         Index of the starting point
n           Number of points in the point cloud
I_i         The i-th point of the player input
I_{i,x}     x coordinate of the i-th player input point
I_{i,y}     y coordinate of the i-th player input point
D_j         The j-th point of the dataset
D_{j,x}     x coordinate of the j-th dataset point
D_{j,y}     y coordinate of the j-th dataset point

Table 11: Description of Formula Score Final ($EP).

Symbol        Description

F             Final score
s$1           $1 score
s$1_1         $1 score of the 1st stroke
s$1_n         $1 score of the n-th stroke
s$P           $P score
n             Number of stroke scores plus the overall $P score