
A multidimensional analysis tool for visualizing online interactions.


In recent years, problem-solving skills have attracted increasing attention from education researchers. The importance of collaborative learning, which facilitates literacy by enabling learners to cooperate with one another through various interactive activities, has been emphasized (An, Shin, & Lim, 2009; Pozzi, 2010). In particular, collaborative learning has been emphasized for implementation in online learning as well as face-to-face classes because, unlike face-to-face classes, online collaborative learning is available on an anywhere/anytime basis (Edge, 2006). In this regard, an increasing number of studies have focused on online collaborative learning, and some have explored factors that can facilitate online collaborative learning (Benbunan-Fich & Hiltz, 1999; Sinclair, 2005; Yang, Newby, & Bill, 2008).

Some factors that have been found to promote online collaborative learning include individual characteristics of learners, instruction methods, the learning environment, learners' motives, and the type and level of online interaction. Researchers have suggested that, among these factors, the type and level of online interaction are most likely to influence online collaborative learning (An et al., 2009; Daradoumis, Martinez-Mone, & Xhafa, 2006). For this reason, previous studies have typically focused on dynamic interaction in the online collaborative learning environment. In particular, a number of studies have attempted to find ways to analyze such interactions because different analysis methods tend to provide different information and interpretations.

Online interaction has attracted increasing attention from researchers and, thus, a number of studies have attempted to enhance existing analysis methods for measuring online interaction (Marra, 2006), including quantitative analysis, content analysis, and social network analysis (SNA) methods. The quantitative analysis method is used to investigate the level of online interaction by considering the number of posts by users, the number of replies, and the number of logins (Benbunan-Fich & Hiltz, 1999). A major advantage of this method is that it can easily quantify the level of online interaction. On the other hand, the content analysis method allows for an analysis of interaction types and levels by classifying learners' posts based on certain criteria. Among various relationship analysis methods, the SNA method has recently been used by researchers to analyze the relationship among individuals within a certain group by treating those individuals as nodes and structuralizing message content into links (Hu & Racherla, 2008).

Although all of these methods are useful for analyzing online interaction, none of them can capture the multidimensional aspects of online interaction; each method focuses on only one aspect (e.g., the quantitative analysis method focuses on the number of interactions, the content analysis method on the types and levels of interaction, and the SNA method on the structure of the interaction). Researchers want a method that can provide rich information about online interaction because they require an in-depth understanding of it.

In addition, researchers want to visualize the results of online interaction analysis because visualization is a useful way to interpret complex interaction among group members more clearly (Hirschi, 2010). However, no principle or method for analyzing interactions with a multidimensional approach has been reported as of yet. Therefore, this study intends to develop an instrument that visualizes analysis results on the basis of principles of multidimensional approaches to analyzing online interactions. A second objective of this study is to show how the results of multidimensional analysis differ from those of existing one-dimensional analyses.

Analysis methods for interactions in the online collaborative learning environment

Online interaction is defined as two or more people exchanging information to pursue common learning goals in an online learning environment (Makitalo, Hakkinen, Leinonen, & Jarvela, 2002). Online interaction contains relatively rich information about the process of learners' thinking and knowledge formation because it mostly happens in an asynchronous environment, which allows enough time for learners' reflective thinking (An, Shin, & Lim, 2009; Blanchette, 2012; Garrison & Cleveland-Innes, 2005). To understand the online learning process properly, researchers must recognize that analyzing online interaction is an important issue. Depending on the type of analysis enacted, researchers can gather or lose valuable information (Blanchette, 2012). For this reason, many studies have addressed how to analyze online interaction. The following are representative ways to analyze online interaction.

Quantitative analysis of online interaction

The quantitative analysis method was the first method used for analyzing interactions in the online collaborative learning environment (Marra, 2006). This method considers the number of posts written and read, as well as replies and logins by learners. In addition, the quantitative analysis method compares scores produced by adding the numbers of posts written and read to gauge the level of online interaction (Benbunan-Fich & Hiltz, 1999; Gorghiua, Lindforsb, Gorghiuc, & Hamalainend, 2011; Pozzi, 2010). Other variants compute the average obtained by dividing the number of postings by the number of participants (Hewett, 2000) or analyze the level of online interaction by scoring the value of each message against certain criteria (Brooks & Jeong, 2006; Newman, Webb, & Cochrane, 1996).

This method typically employs relatively simple and objective quantitative data. Early studies considered this method to be the most objective for analyzing online interactions among learners since researchers were able to use diverse statistical methods based on quantitative data (Benbunan-Fich & Hiltz, 1999; Marra, 2006; Mason, 1992). However, this method is limited in that it provides only quantities of online interaction without analyzing the type and structure of online interaction or identifying important phenomena in the interaction process (Strijbos, Martens, Prins, & Jochems, 2006).

Content analysis of online interaction

The content analysis method is frequently used in research on online interaction because it can better analyze the type and level of online interaction than the quantitative analysis method, which provides only limited information (George, 2008; Strijbos et al., 2006). The content analysis method characterizes the meaning of message content in a systematic and qualitative manner (George, 2008). The unit of analysis and the category of analysis play important roles in content analysis (Strijbos et al., 2006). The content of learners' interactions is segmented into messages according to the chosen unit of analysis, and those messages are then classified. A number of recent studies have used the content analysis method because of its ability to determine the type, structure, and level of online interaction (Strijbos et al., 2006).

De Weber, Schellens, Valcke, & Van Keer (2006) described the framework of content analysis, and diverse analytical frameworks have been used for content analysis in the context of online collaborative learning. Among such frameworks, Henri's (1992) framework has been widely used because its categories are clearly distinguished and its analysis method is relatively simple, allowing even non-experts to analyze messages exchanged during the learning process. Henri's framework is composed of five categories: participative, social, interactive, cognitive, and metacognitive. Other widely-used frameworks include that of Gunawardena, Lowe, and Anderson (1997), which is composed of five categories (sharing/comparing information; discovery and exploration of dissonance; negotiation of meaning/co-construction of knowledge; testing and modification of proposed synthesis; and phrasing of agreement, statement, and application of the newly-constructed meaning), and that of Zhu (1996), which is composed of six categories (answers, information sharing, discussion, comment, reflection, and scaffolding). These content analysis methods enhance the ability of researchers to gather a wide range of information from the interaction process, and thus they have been extensively used (Bassani, 2011; Kale, 2008). However, the content analysis method is limited in that, although it provides qualitative information about the types and levels of discourse content, it does not provide structural information about interaction.

Social network analysis of online interaction

The SNA method focuses on revealing the relationship and structure of online interactions among individuals in a group (Sternitzke, Bartkowski, & Schramm, 2009). The key advantage of the SNA method is its ability to visualize the relationship among individuals and the structure of their online interaction through nodes and links (Medina & Suthers, 2009; Sternitzke, et al., 2009). This method also provides information on each person's contribution to interaction within the group (Contractor, Wasserman, & Faust, 2006), as well as varied information for analyzing interaction, such as its structure, flow, and processes, and it can present analysis results in visualized form (Bergs, 2006; Wasserman & Faust, 1989). In addition, SNA can visualize learning processes through group members' interaction (Suthers & Rosen, 2011). SNA also provides quantitative data in the form of various indices, including centrality (the degree to which an individual occupies a central position in the network), concentration (the degree to which the entire network is concentrated toward the center), and density (the number of connections between individuals) (An et al., 2009; Heo, Lim, & Kim, 2010). However, although this method can be used to analyze the relationship and structure of online interactions among learners, it is limited in that specific types of messages cannot be analyzed.
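To make the density index concrete, the following minimal Python sketch (with a hypothetical edge list; the names and numbers are illustrative, not from the study) computes directed-network density as the ratio of actual links to possible links among n individuals:

```python
# Sketch of the SNA density index: the proportion of actual directed
# links to possible links among n individuals. The edge list below is
# hypothetical, not data from the study.

def density(edges, n_nodes):
    """Directed-network density: distinct links / (n * (n - 1))."""
    possible = n_nodes * (n_nodes - 1)
    return len(set(edges)) / possible

# Three students; 5 of the 6 possible directed links are present.
edges = [("A", "B"), ("B", "A"), ("A", "C"), ("C", "A"), ("B", "C")]
print(density(edges, 3))
```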

Given the discussion above, each analysis method has beneficial aspects for analyzing online interaction but is limited in providing multiple aspects of online interaction due to its one-dimensional approach. In addition, these methods do not help researchers understand the results of interaction analysis more explicitly. Therefore, we suggest a multidimensional analysis method to overcome the limitations of one-dimensional analysis approaches.

Multidimensional Interaction Analysis Tool (MIAT)

We developed the Multidimensional Interaction Analysis Tool (MIAT), which can facilitate a multidimensional (quantitative analysis, content analysis, and structural analysis) analysis of online interactions among individuals in a group, as well as conceptualize those interactions in a visual way. In other words, the MIAT can simultaneously analyze the quantitative analysis/content analysis aspects of online interactions and the relationships among individuals in a group. In addition, the most remarkable feature of the MIAT is its ability to visualize all interactions among individuals in a certain group at a specific point in time. The principles of MIAT are as follows:

Unit of analysis

The unit of analysis is the most critical factor in content analysis (Woo & Reeves, 2007). In the MIAT, the unit of analysis is the message (the entire post under a title) because the MIAT considers the structure of the relationship among group members based on SNA.

Multidimensional principles of the MIAT

Quantitative analysis in the MIAT follows two principles. One is using the frequency of messages; the other is assigning quantitative scores by evaluating the value of each message. For instance, suppose there were three messages, one containing false information and the others containing accurate information. Rather than assigning a quantitative score of three based on the number of messages, the researcher would assign a score of less than three by deducting points for the erroneous message. Thus, richer information on interaction can be obtained by assigning quantitative scores based on the evaluated value of messages rather than simply using frequency (Brooks & Jeong, 2006; Newman, et al., 1996). The criteria for assessing the value of each message in the MIAT can be freely selected by researchers. For instance, a researcher who wished to evaluate messages using the 10 criteria of Newman et al. (1996) could assign each message a score between -10 and +10. Alternatively, using the criteria of Brooks & Jeong (2006), the researcher could assign scores between -1 and +1 by scoring +1 for a message that helped solve the assignment and -1 for a message that was not helpful. A key feature of the MIAT is that the criteria for assessing the value of each message can be adjusted to the research goals or framework, since online interactions vary with learning contexts.
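As a minimal sketch of the second principle, the following Python snippet applies the +1/-1 scoring attributed to Brooks & Jeong (2006); the messages and the "helpful" flag (standing in for a researcher's judgment) are hypothetical:

```python
# Sketch of value-based scoring vs. simple frequency counting.
# The message records and 'helpful' flags below are hypothetical;
# in practice a researcher judges each message against chosen criteria.

messages = [
    {"author": "A", "text": "Here is a source on instructional design.", "helpful": True},
    {"author": "B", "text": "The model dates from 1983, not 1993.", "helpful": True},
    {"author": "C", "text": "Off topic: anyone free this weekend?", "helpful": False},
]

def score_message(msg):
    """+1 for a message that helps solve the task, -1 otherwise."""
    return 1 if msg["helpful"] else -1

frequency = len(messages)                                  # counts every post equally
value_score = sum(score_message(m) for m in messages)      # penalizes unhelpful posts
print(frequency, value_score)
```

Frequency alone would report three interactions, while value scoring reports a lower total, illustrating why the two principles can diverge.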

The MIAT also incorporates content analysis principles. Content analysis is a method that classifies message content by category. It can analyze the learning process revealed during interactions and yield accurate, objective, and consistent information on the types and structures of interactions (De Weber, et al., 2006; Strijbos et al., 2006). The MIAT was developed so that researchers can create their own frameworks for content analysis in accordance with their research purposes. For example, researchers can use Henri's (1992) categories or those of Zhu (1996) for MIAT analysis, choosing content analysis categories according to their intentions. The MIAT also offers researchers the flexibility to input a self-generated category or use various other categories (see Figure 1). This flexibility in selecting content analysis categories was provided because online interaction occurs in varied learning contexts, and the research frameworks of those who wish to analyze such interactions are equally diverse.



The MIAT uses a principle of online network analysis to examine relationships between learners. On a bulletin board, if A posted something, B replied to A's post, and C replied to B's, one could say that A influenced B, and B influenced C. Wiki pages can be analyzed by using the history function, an editing record that differs from the posting structure of a bulletin board. Assuming that B modified A's posting, and C either added details to this or asked a question about B's modification, one could say that A influenced B, and B influenced C. Lastly, in the case of live chat, if A wrote a message, B wrote a message after A, and C wrote a message after B, one could say that A influenced B, and B influenced C (see Figure 2). The direction of the arrows in the relationship model is the direction of message influence. Because B's message comes after A's, A's message is expected to influence B, and thus the direction of the arrow goes from A to B. Therefore, the individual with the thickest outgoing arrows is expected to be the most influential group member, and the one with the thickest incoming arrows is most likely to be influenced by interaction within the group. As such, network analysis can clearly show the relationships among individuals in a group as well as the structure of their interactions. In this regard, the MIAT uses basic network analysis principles to analyze relationships among learners.
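The sequential-influence rule for live chat can be sketched as follows; the author sequence is hypothetical, and each consecutive pair of messages is assumed to produce one directed influence edge:

```python
# Sketch of the relational principle for live chat: each message is
# assumed to be influenced by the message directly before it, yielding
# a directed edge (previous author -> next author).
# The posting sequence below is hypothetical.

chat_sequence = ["A", "B", "C", "B", "A"]  # authors in posting order

def influence_edges(sequence):
    """Build directed influence edges from consecutive messages."""
    return [(sequence[i], sequence[i + 1]) for i in range(len(sequence) - 1)]

edges = influence_edges(chat_sequence)
print(edges)
```

Aggregating these edges per sender/receiver pair is what produces the arrow thicknesses described above.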

Calculation of the level of online interaction

To explain the principles for calculating the level of online interaction in the MIAT, we will use example data from a bulletin board system. First, criteria for assigning quantitative scores and categories for content analysis must be specified. Suppose the four criteria of newness, importance, relevance, and accuracy, out of the 10 criteria used by Newman et al. (1996), are applied as the quantitative analysis framework, with scores ranging between 0 and 4 points. The remaining six criteria are excluded from this scoring because they are generally used to sort message content and are thus considered inappropriate here.

Now, suppose Henri's (1992) framework is used for the content analysis categories. Henri's original framework includes five categories, but only three are used for this example (social, cognitive, and metacognitive) because categories describing the types of messages are required. The participative and interactive categories analyze the level and structure of interaction rather than message types; therefore, social, cognitive, and metacognitive were chosen for the analysis. The principles by which the MIAT calculates the level of online interaction can be explained as follows. To calculate the level of interaction, the MIAT first requires the input of the analyzed data (the type of and score for each message) (see Figure 3). The MIAT then automatically produces a matrix of interaction scores (Figure 4).
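The matrix construction can be sketched in a few lines of Python. The analyzed records below (sender, receiver, Henri category, quantitative score) are hypothetical illustrations, not the study's data:

```python
from collections import defaultdict

# Sketch of the MIAT interaction matrix: each analyzed message carries
# a sender, a receiver, a content category, and a quantitative score.
# The records below are hypothetical, not the paper's 24-message data.

analyzed = [
    ("C", "A", "cognitive", 3),
    ("A", "C", "social", 2),
    ("C", "A", "cognitive", 2),
    ("B", "A", "metacognitive", 4),
]

matrix = defaultdict(lambda: {"count": 0, "score": 0})
for sender, receiver, category, score in analyzed:
    cell = matrix[(sender, receiver, category)]
    cell["count"] += 1
    cell["score"] += score

# Totals of the kind the MIAT reports for the whole group.
total_messages = sum(c["count"] for c in matrix.values())
total_score = sum(c["score"] for c in matrix.values())
mean_score = total_score / total_messages
print(total_messages, total_score, mean_score)
```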


The MIAT uses an interaction matrix to calculate two types of interaction levels: the total interaction level and the comparative interaction level. The total number of messages, average score, and standard deviation express the total interaction level. As shown in Figure 4, the total interaction level can be summarized as follows: the total number of messages = 24, the total sum of scores per message = 55, the average score = 2.29, and the standard deviation = 0.93. The MIAT uses the average score and the standard deviation to calculate the T-score for the comparative interaction level.

When using the sum or average of raw scores, it is difficult to identify comparative levels of interaction in a given group. Therefore, the MIAT uses the T-score (a standard score) to identify comparative scores for the group. For example, the cognitive interaction level between C and A is 3, which becomes 53.43 through the MIAT's indexation (which uses the T-score). As a result, the cognitive interaction level between C and A is slightly higher than the average. The comparative interaction level is calculated as follows:
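A T-score is conventionally defined as shown below; this is the textbook form of the transformation, and the indexed values reported here suggest the MIAT may apply it with category-specific means and standard deviations rather than the group-wide ones:

```latex
% Conventional T-score transformation: X is a raw interaction score,
% M and SD are the mean and standard deviation of the reference scores.
T = 50 + 10 \times \frac{X - M}{SD}
```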



Interpretation of outputs

We analyzed the data from the 24 messages using the MIAT, and Figure 5 shows the results. In terms of the total interaction level, the total number of messages was 24, the average score was 2.29, and the standard deviation was 0.93. The comparative interaction level is indicated by the number next to each arrow; the larger the number, the higher the comparative interaction level. Further, the thicker the arrow, the larger the number of messages. The direction of the arrow indicates the direction of the interaction, and interaction types are classified into cognitive, metacognitive, and social categories (indicated by the style of the arrow).

As shown in Figure 5, Student A and Student C had a large number of links and thick arrows, which implies the interaction between these two students was the most active. Student A and Student B had fewer links than the other student pairs, and their arrows were thinner, indicating that the interaction between Student A and Student B was passive. However, the arrow for cognitive interaction (the solid arrow) of Student A and Student B was thicker than those of the other student pairs. In addition, the active interaction between Student A and Student C was mainly social.


MIAT Implementation

In this section, we explain how the results of the MIAT analysis differ from those of one-dimensional analysis methods through an example study. Since the level of participation in interaction is one of the most important indicators of successful online collaborative learning, we conducted a study aiming to gauge group members' interaction levels. Specifically, we wanted to identify the most active participant in collaborative work.

Participants and the task

We conducted the MIAT analysis by considering a sample of 30 students taking an online course in education technology in the spring semester of 2011 at D University. The average age of these students was 21.7, and 67% were female. They had diverse majors, including human studies, social sciences, curriculum studies, engineering science, and art. The online collaborative task assigned to the class was based on instructional design. The students were randomly assigned to one of six groups (five students per group). Each group was expected to determine the theme of its instructional design through asynchronous interactions on an online bulletin board and to complete the instructional design within two weeks. Only the team showing the most active interaction was selected for this case analysis.

Comparison analysis methods

We compared the results from the MIAT with those from existing one-dimensional analysis methods (i.e., quantitative analysis, content analysis, and SNA methods).

* For the quantitative analysis method, we used the method of Gorghiua et al. (2011), which is one of the most commonly used quantitative analyses of interaction levels. We counted the number of messages and the number of hits on those messages.

* For the content analysis method, we only used three categories (social, cognitive, and metacognitive) among Henri's five categories (1992) because the other two categories (participative and interactive) do not pertain to the type of interaction. We classified each message based on these three categories and analyzed the interaction level based on the number of messages in each category.

* We used NetMiner 2.4 for SNA. We analyzed the relationship among students by considering centrality, cohesion, and the number of messages sent and received.
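The send/receive tally used in the SNA comparison can be sketched as follows; the directed edge list is hypothetical, with each edge running from the sender of a message to its receiver:

```python
from collections import Counter

# Sketch of per-student message tallies from a directed edge list
# (sender -> receiver). The edges below are hypothetical, not the
# study's data.

edges = [("C", "A"), ("C", "D"), ("A", "C"), ("D", "C"), ("C", "B")]

sent = Counter(src for src, _ in edges)       # outgoing messages per student
received = Counter(dst for _, dst in edges)   # incoming messages per student
print(sent.most_common(1), received.most_common(1))
```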

Data analysis

The unit of analysis was the message. For the quantitative analysis method, we calculated the total number of posts as well as the number of hits. For the content analysis method, we classified the messages and then analyzed their content; Cohen's kappa for inter-rater reliability was 0.90. For the SNA, we determined the direction of each message and counted the messages; Cohen's kappa for inter-rater reliability was 0.92.
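Cohen's kappa itself can be computed as follows; the rater labels below are hypothetical, and the function implements the standard definition (observed agreement corrected for chance agreement):

```python
from collections import Counter

# Sketch of Cohen's kappa for two raters' category labels.
# The label sequences below are hypothetical, not the study's codings.

rater1 = ["cog", "cog", "soc", "soc", "meta", "cog", "soc", "meta", "cog", "soc"]
rater2 = ["cog", "cog", "soc", "meta", "meta", "cog", "soc", "meta", "cog", "soc"]

def cohens_kappa(a, b):
    """kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    # Observed agreement: fraction of items both raters label identically.
    po = sum(x == y for x, y in zip(a, b)) / n
    # Expected chance agreement from each rater's marginal proportions.
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (po - pe) / (1 - pe)

kappa = cohens_kappa(rater1, rater2)
print(round(kappa, 2))
```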

For the MIAT, we scored each message's value based on the criteria in Table 1. Then, we categorized the message and decided its direction. Cohen's kappa for the inter-rater reliability was 0.84 for scoring values, 0.93 for categorizing messages, and 0.92 for direction of messages. For inconsistent results, we reached an agreement through face-to-face discussions.

Analysis results of one-dimensional approaches

Table 2 shows the results of the quantitative analysis method. According to results from the quantitative analysis method, Student D produced the most posts and Student C yielded the most hits. Therefore, we can infer Student C and Student D were the most active participants in the group work.

According to the results obtained using Henri's (1992) framework, the most active participant was Student D, who had the largest sum. Among Student D's messages, the most common category was social. The next most active participants were Student C and Student A, but they showed different participation patterns: Student C's messages were evenly distributed across all categories, whereas Student A's messages were concentrated in the social category. Thus, content analysis provides information on the types of interaction in addition to yielding results similar to those of the quantitative analysis.

According to the results of the SNA, the total number of nodes was five and the total number of links was 20. The betweenness centrality of all nodes was 0 (see Figure 6); betweenness centrality represents the degree to which a node mediates connections between other nodes. These results indicate that the flow and exchange of information was even, without a special focus on any one student (Cho, Gay, Davidson, & Ingraffea, 2007). The cohesion analysis, which concerns the attractive force between nodes, found five nodes and five clusters. This implies that no specific nodes gathered to form a cluster. Indeed, if nodes gather into a cluster, the nodes in the cluster interact only with one another and not with nodes outside it. Therefore, the equal number of nodes and clusters indicates that students engaged in balanced interactions. In terms of messages sent and received between nodes, Student C sent the highest number of messages, whereas Student D received the highest number. This indicates that Student C interacted with other students most actively.


The results of MIAT method

Figure 7 shows the results from the MIAT. Student C and Student E engaged in all three types of interactions (cognitive, metacognitive, and social). The comparative interaction level for cognitive interactions was 163.75 from Student C to Student E, and 79.96 from Student E to Student C. The comparative interaction level for metacognitive interactions was 121.36 from Student C to Student E, and 90.43 from Student E to Student C. The comparative interaction levels for cognitive and metacognitive interactions exceeded the average of 50, indicating that the interaction between Student C and Student E contributed to their collaborative task.

Comparison of analysis results

The results of the one-dimensional analyses (quantitative analysis, content analysis, and SNA) indicated that Student C and Student D were the most active participants in the collaborative group work. However, the MIAT showed slightly different results: the most active participants were Student C and Student E. Student C was commonly identified as one of the most active participants, whereas Student E appeared active only in the results of the MIAT analysis. So which of Student D and Student E was the more active participant in the collaborative work? According to the one-dimensional analyses, Student D was more active than Student E: the quantitative analysis and SNA indicated that Student D's number of interactions exceeded Student E's, and the content analysis indicated that Student D produced more cognitive messages than Student E. However, the MIAT analysis indicated that Student D's T-score for cognitive messages was 598.58, lower than Student E's T-score of 622.77. In addition, Student D's T-score for metacognitive messages was 107.61, also below Student E's T-score of 382.27.


The variability in the results is due to differences between the one-dimensional analyses and the MIAT analysis. One-dimensional analyses simply count the number or types of messages, whereas the MIAT considers both the type of interaction and the comparative interaction level. For example, there was almost no difference in the total number of messages between Student A and Student C and between Student C and Student E. Accordingly, the SNA results indicated balanced interactions among group members. However, the MIAT results convey a different story: Student A and Student C mostly engaged in social interactions, whereas Student C and Student E mostly engaged in cognitive interactions. These results indicate that the interactions between Students C and E were more likely to contribute to the collaborative work than those between Students A and C. Previous studies have found that cognitive interactions directly influence learners' problem-solving activity (Veerman & Veldhuis-Diermanse, 2001).

In sum, the MIAT considers both the type of interaction and the comparative interaction level, providing more specific and in-depth interpretive information on interactions among group members, whereas SNA considers only the number and direction of messages. The results obtained differed according to whether the approach was one-dimensional or multidimensional, and the multidimensional approach provided more in-depth information about the learning process and the structure of online interactions. This is because interactions can be understood more deeply through a multidimensional approach than through a one-dimensional one (Tomsic & Suthers, 2005).

Conclusion and implications

Existing quantitative or content analysis methods for analyzing interactions among group members are limited in that they have difficulty providing rich information on such interactions. This is because such methods take a one-dimensional approach. For instance, Driver (2002) and Chiu and Hsiao (2010) examined the effects of group size in online collaborative learning but reported different findings. Driver (2002) found no differences in interactions among group members when comparing large and small groups, whereas Chiu and Hsiao (2010) concluded that interactions among small-group members were more effective than those among large-group members. The researchers obtained different results because they used different methods: Driver used a self-reported questionnaire to measure interactions among students, whereas Chiu and Hsiao conducted a content analysis. This example suggests a need for caution when interpreting interaction results obtained using a one-dimensional method, because the results can vary according to the method used. Methods based on a one-dimensional approach have difficulty providing sufficient information for an in-depth analysis of the interactions among group members. Thus, this research was based on the premise that multidimensional analysis is pivotal for an in-depth understanding of interactions.

Reviewing the methods used to date for analyzing interactions in online cooperative activities, we found cases using only one of the quantitative, content, and relational analysis methods, as well as cases using two analyses in tandem (e.g., Newman et al., 1996; Tomsic & Suthers, 2005). However, a multidimensional analysis considering all of these analyses simultaneously has rarely been performed. Multidimensional analysis can provide teaching and learning information to instructors conducting online lessons, since it analyzes the details of interactions and provides visual information on learners' structural relationships in online cooperative study. Therefore, this study introduced the principles behind the development of the MIAT, a multidimensional instrument that analyzes online interactions by considering quantitative, content, and relational analyses simultaneously. We also explained its advantages in analyzing online interactions by comparing its performance with that of existing analysis methods.

The advantages of the MIAT over existing methods for analyzing online interactions can be summarized as follows. The MIAT provides the results of quantitative, content, and relational analyses simultaneously, since the functions of the one-dimensional analytical methods are integrated within it. It also provides visual analysis results, so researchers can see the flow and processes of interactions as well as the relational structure between learners. Numerous scholars have acknowledged that visualization is an effective way to support a deep understanding of interactions (Medina & Suthers, 2009; Saltz, Roxanne, & Turoff, 2004).

The MIAT also provides flexibility, allowing researchers to adapt the analytical frameworks of existing methods into various forms. For instance, some researchers might wish to use Henri's framework for content analysis of interaction, while others might prefer the framework of Gunawardena and colleagues (1997). In the MIAT, an appropriate content analysis framework can be entered directly in accordance with the researcher's purpose or background. Moreover, the quantitative scoring criteria for assessing the value of each message can be entered according to the researcher's intention or study framework (in the example analysis in this study, Newman and colleagues' standard was used as the criterion for assigning quantitative scores, and Henri's framework was used for content analysis). Given its flexible analysis framework and visualization, the MIAT is expected to provide meaningful information for instructors and researchers, information that is more varied and robust than that provided by one-dimensional analysis methods. De Weber et al. (2006) indicated that coding categories for interaction are developed to analyze the process of knowledge acquisition, sharing, and formation; that is, the information from interaction analysis pertains to the learning process. Thus, the MIAT can provide useful information about the learning process by providing multidimensional information.

Limitations and future study

The purpose of this study was to provide instructors and researchers with richer teaching-learning information through the analysis of interactions. To this end, we suggested principles for a multidimensional approach to online interaction analysis and developed an instrument embodying them. However, the study had several limitations. First, the MIAT confines the scope of what counts as an interaction. Because relational analysis is one of the MIAT's most important functions, the context of each utterance must be definite: a post is treated as an interaction with the immediately preceding post. Even when a message is addressed to unspecified individuals, it is simply treated as an interaction between the previous writer and the next writer, so in such cases the MIAT cannot analyze the interaction completely.
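The reply-order assumption described above can be sketched as follows. This is an illustrative reconstruction, not the MIAT's actual code: in a linear thread, each post is linked to the author of the immediately preceding post, even when the message addresses no one in particular.

```python
# Hypothetical sketch of the relational-analysis assumption: every
# post forms a (previous author, next author) link, regardless of
# whom the post was actually addressed to. Names are illustrative.

def thread_to_edges(authors):
    """Turn an ordered list of post authors into (prev, next) links."""
    return [(authors[i - 1], authors[i]) for i in range(1, len(authors))]

# Thread where C posts after B; C is linked to B even if C was
# addressing the whole group rather than B specifically.
edges = thread_to_edges(["A", "B", "C", "B"])
# edges == [("A", "B"), ("B", "C"), ("C", "B")]
```

The limitation noted in the text follows directly: a broadcast message and a directed reply produce the same edge, so the relational structure can overstate person-to-person interaction.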

Second, the MIAT focused on quantitative analysis, content analysis, and relational analysis, but it did not provide information about how interactions change over time. The MIAT would be a more powerful analytic instrument if it also tracked changes in learners' interactions over the course of a learning period.

Third, the MIAT allows a researcher to supply the analytic framework of interest as the standard for quantitative and content analyses, and it offers further flexibility in that the number of people on a study team can be set according to the researcher's intentions. Nevertheless, in this process the researcher must manually enter the scores assigned to messages for content analysis and evaluation. The MIAT's capabilities can therefore be tailored to the researcher's purpose, but this requires effort from the user, and the tool is not completely automated.

Finally, a few follow-up studies are suggested to make use of the MIAT more meaningful. This study focused on the necessity of multidimensional analysis of online interactions and on developing an instrument for that purpose. A study that applies the MIAT in a research context, with a researcher's own theoretical framework, should also be performed. In addition, this study analyzed interactions on ordinary bulletin boards in order to examine the relative advantages and disadvantages of the MIAT over existing analysis methods; it would be worthwhile to investigate further what analytic information the MIAT can provide for other kinds of online cooperative activity (e.g., wikis, live chat).

Furthermore, this study compared the MIAT with existing analysis methods to verify whether the MIAT's results were appropriate, but it did not supplement this comparison with content-analysis research. This concern must also be addressed in additional studies. Future work is needed to ensure that the results of MIAT analyses are valid; such work should draw on additional information, such as learner interviews, to analyze online interactions more precisely and reach appropriate conclusions.


Acknowledgment

This research was supported by the research fund of Dankook University in 2011.


References

An, H., Shin, S., & Lim, K. (2009). The effects of different instructor facilitation approaches on students' interactions during asynchronous online discussions. Computers & Education, 53(3), 749-760.

Bassani, P. B. S. (2011). Interpersonal exchanges in discussion forums: A study of learning communities in distance learning settings. Computers & Education, 56(4), 931-938.

Benbunan-Fich, R., & Hiltz, S. R. (1999). Impacts of asynchronous learning networks on individual and group problem solving: A field experiment. Group Decision and Negotiation, 8, 409-423.

Bergs, A. (2006). Analyzing online communication from a social network point of view: Questions, problems, perspectives. Language@Internet, 3, 1-17.

Blanchette, J. (2012). Participant interaction in asynchronous learning environments: Evaluating interaction analysis methods. Linguistics and Education, 23, 77-87.

Brooks, C. D., & Jeong, A. (2006). Effects of pre-structuring discussion threads on group interaction and group performance in computer-supported collaborative argumentation. Distance Education, 27(3), 371-390.

Chiu, C.-H., & Hsiao, H.-F. (2010). Group differences in computer supported collaborative learning: Evidence from patterns of Taiwanese students' online communication. Computers & Education, 54, 427-435.

Cho, H., Gay, G., Davidson, B., & Ingraffea, A. (2007). Social networks, communication styles, and learning performance in a CSCL community. Computers & Education, 49(2), 309-329.

Contractor, N. S., Wasserman, S., & Faust, K. (2006). Testing multi-theoretical, multilevel hypotheses about organizational networks: An analytic framework and empirical example. Academy of Management Review, 31(3), 681-703.

Daradoumis, T., Martínez-Mone, A., & Xhafa, F. (2006). A layered framework for evaluating on-line collaborative learning interactions. International Journal of Human-Computer Studies, 64(7), 622-635.

De Wever, B., Schellens, T., Valcke, M., & Van Keer, H. (2006). Content analysis schemes to analyze transcripts of online asynchronous discussion groups: A review. Computers & Education, 46(1), 6-28.

Driver, M. (2002). Exploring student perceptions of group interaction and class satisfaction in the web-enhanced classroom. The Internet and Higher Education, 5(1), 35-45.

Edge, J. (2006). Computer-mediated cooperative development: Non-judgemental discourse in online environments. Language Teaching Research, 10(2), 205-227.

Garrison, D. R., & Cleveland-Innes, M. (2005). Facilitating cognitive presence in online learning: interaction is not enough. The American Journal of Distance Education, 19(3), 133-148.

George, A. L. (2008). Quantitative and qualitative approaches to content analysis. In R. Franzosi (Ed.), Content analysis: What is content analysis (Vol. 1, pp. 222-244). Los Angeles, CA: SAGE Publications.

Gorghiu, G., Lindfors, E., Gorghiu, L. M., & Hämäläinen, T. (2011). Acting as tutors in the ECSUT on-line course: How to promote interaction in a computer supported collaborative learning environment? Procedia Computer Science, 3, 579-583.

Gunawardena, C., Lowe, C., & Anderson, T. (1997). Analysis of global online debate and the development of an interaction analysis model for examining social construction of knowledge in computer conferencing. Journal of Educational Computing Research, 17(4), 397-431.

Henri, F. (1992). Computer conferencing and content analysis. In A. R. Kaye (Ed.), Collaborative learning through computer conferencing: The Najaden papers (pp. 117-136). London: Springer-Verlag.

Heo, H., Lim, K. Y., & Kim, Y. (2010). Exploratory study on the patterns of online interaction and knowledge co-construction in project-based learning. Computers & Education, 55(3), 1383-1392.

Hewett, B. (2000). Characteristics of interactive oral and computer-mediated peer group talk and its influence on revision. Computers & Composition, 17, 265-288.

Hirschi, C. (2010). Introduction: Applications of social network analysis. Procedia--Social and Behavioral Sciences, 4, 2-3.

Hu, C., & Racherla, P. (2008). Visual representation of knowledge networks: A social network analysis of hospitality research domain. International Journal of Hospitality Management, 27(2), 302-312.

Kale, U. (2008). Levels of interaction and proximity: Content analysis of video-based classroom cases. The Internet and Higher Education, 11(2), 119-128.

Mäkitalo, K., Häkkinen, P., Leinonen, P., & Järvelä, S. (2002). Mechanisms of common ground in case-based web discussions in teacher education. Internet and Higher Education, 5, 247-265.

Marra, R. (2006). A review of research methods for assessing content of computer-mediated discussion forums. Journal of Interactive Learning Research, 17(3), 243-267.

Mason, R. (1992). Evaluation methodologies for computer conferencing applications. In A. R. Kaye (Ed.), Collaborative learning through computer conferencing (pp. 105-116). Berlin: Springer-Verlag.

Medina, R., & Suthers, D. (2009). Using a contingency graph to discover representational practices in an online collaborative environment. Research and Practice in Technology Enhanced Learning, 4(3), 281-305.

Newman, D. R., Webb, B. R., & Cochrane, A. C. (1996). A content analysis method to measure critical thinking in face-to-face and computer supported group learning. Interpersonal Computing and Technology, 3(2), 56-77.

Pozzi, F. (2010). Using jigsaw and case study for supporting online collaborative learning. Computers & Education, 55(1), 67-75.

Saltz, J. S., Roxanne, S., & Turoff, M. (2004, November). Student social graphs: Visualizing a student's online social network. Paper presented at the 2004 ACM conference on Computer Supported Cooperative Work, Chicago, IL, USA.

Sinclair, M. P. (2005). Peer interactions in a computer lab: Reflections on results of a case study involving web-based dynamic geometry sketches. Journal of Mathematical Behavior, 24, 89-107.

Sternitzke, C., Bartkowski, A., & Schramm, R. (2009). Visualizing patent statistics by means of social network analysis tools. World Patent Information, 30(2), 115-131.

Strijbos, J.-W., Martens, R. L., Prins, F. J., & Jochems, W. M. G. (2006). Content analysis: What are they talking about? Computers & Education, 46, 29-48.

Suthers, D. D., & Rosen, D. (2011). A unified framework for multi-level analysis of distributed learning. In P. Long, G. Siemens, G. Conole, & D. Gasevic (Eds.), Proceedings of the First International Conference on Learning Analytics & Knowledge (pp. 64-74). New York, NY: ACM.

Tomsic, A., & Suthers, D. (2005, January). Effects of a discussion tool on collaborative learning and social network structure within an organization. Paper presented at the 38th Hawaii International Conference on System Sciences, Hawaii, USA.

Veerman, A., & Veldhuis-Diermanse, E. (2001). Collaborative learning through computer-mediated communication in academic education. Retrieved from the Maastricht McLuhan Institute website: cscl/Papers/166.doc

Wasserman, S., & Faust, K. (1989). Canonical analysis of the composition and structure of social networks. Sociological Methodology, 19, 1-42.

Woo, Y., & Reeves, T. C. (2007). Meaningful interaction in web-based learning: A social constructivist interpretation. Internet and Higher Education, 10, 15-25.

Yang, Y.-T. C., Newby, T., & Bill, R. (2008). Facilitating interactions through structured web-based bulletin boards: A quasi-experimental study on promoting learners' critical thinking skills. Computers & Education, 50, 1572-1585.

Zhu, E. (1996). Meaning negotiation, knowledge construction, and mentoring in a distance learning course. Paper presented at the 1996 National Convention of the Association for Educational Communications and Technology, Indianapolis, IN, USA. Retrieved from ERIC database. (ED397849).

Zheng, L., Yang, K., & Huang, R. (2012). Analyzing interactions by an IIS-map-based method in face-to-face collaborative learning: An empirical study. Educational Technology & Society, 15(3), 116-132.

Minjeong Kim (1) * and Eunchul Lee (2)

(1) Department of Teaching Education, (2) Department of Education, Dankook University, Gyunggi-do, South Korea 448-701

* Corresponding author
Table 1. Criteria for Scoring Values of Each Message

Criterion    Information                                          Score

New          New theme or details                                   1
Important    Important information or details for completing
             the assignment                                         1
Relevant     Relevant to the assignment or the discussion theme     1
Accurate     Accurate details                                       1
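Table 1's scoring rule, one point per satisfied criterion following Newman et al.'s standard, can be sketched as follows. The criterion flags would come from a human coder; the function name and data shape are illustrative, not the MIAT's actual code.

```python
# Minimal sketch of the Table 1 scoring rule: a message earns one
# point for each criterion (new, important, relevant, accurate) that
# the coder marked as satisfied. Names here are illustrative.

CRITERIA = ("new", "important", "relevant", "accurate")

def message_score(flags):
    """Sum one point for every criterion the coder judged satisfied."""
    return sum(1 for c in CRITERIA if flags.get(c, False))

# A message coded as new, relevant, and accurate but not important:
score = message_score({"new": True, "relevant": True, "accurate": True})
# score == 3
```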

Table 2. Results from the Quantitative Analysis Method

Learner        Posts   Hits   Total

A              23      171    194
B              19      114    133
C              23      212    235
D              27      175    202
E              22      170    192

Total          114     842    956

Table 4. Results from the Content Analysis Method

Learner        Cognitive   Metacognitive   Social   Sum

A                   6             4           13     23
B                   8             1           10     19
C                   7             7            9     23
D                  11             3           13     27
E                   9             5            8     22

Total              41            20           53    114
COPYRIGHT 2012 International Forum of Educational Technology & Society

Article Details
Author: Kim, Minjeong; Lee, Eunchul
Publication: Educational Technology & Society
Date: Jul 1, 2012