
The use of instructional simulations to support classroom teaching: a crisis communication case study.

The purpose of this study was to investigate how exposure to classroom instruction affected the use of a computer simulation designed to give students an opportunity to apply material presented in class. The study involved an analysis of a computer-based crisis communication case study designed for a college-level public relations course, comparing students taking the course with students having no public relations exposure. The results showed that while the total scores were almost identical for both groups, there were differences in the types of questions each group tended to score higher on. There were also differences in how each group navigated through the simulation. Differences in learning styles had a greater impact on those without public relations exposure than on those taking the public relations class. The results indicate that instructional simulation designers need to account for the influence that classroom instruction will have on student performance during these types of simulations.

**********

The use of computer-based simulations for supporting classroom teaching has interested educators in many fields of study. As software applications become more sophisticated, teachers are finding more opportunities to create case-based studies that present students with more "realistic" settings in which to apply what they have learned during class instruction. The use of animation, audio, and video elements that can respond to specific user feedback has given instructors tools that can create complex environments that mimic real-life situations.

The use of computer-based simulations for instruction has been widely promoted because of the opportunities it provides for students to apply knowledge they have acquired in the class. As Jonassen, Campbell, and Davidson (1994) noted, "According to contemporary theories of learning, ... learning is most effectively situated in the context of some meaningful, real-world task" (p. 32).

Hoffman and Ritchie (1997) pointed out that interactive multimedia provide an important advantage to case studies: they allow simulations of real life that free the student from the limitations of time, place, and physical setting that make many case studies impractical for the classroom. The ability to condense time is particularly important in case studies that require students to make decisions based on information presented to them at particular points in time. In simulating real-world scenarios, students are placed in situations where they must make decisions from incomplete or even inaccurate information, a situation faced by professionals in many fields.

Over the past several years, a number of studies have examined the effectiveness of computer-based case studies in the form of instructional simulations. Lee (1999) conducted a meta-analysis of research studies involving instructional simulations to determine what factors influence their effectiveness. Lee's study focused on two instructional modes of simulation. In the practice mode, simulation users are first exposed to a body of knowledge through traditional instruction and then asked to apply that knowledge during tasks presented in the simulation. In the presentation mode, the simulation is meant to be a source of both instruction and practice opportunities for the student. The simulation itself is presented to students in either a pure form (no guidance) or a hybrid form (some guidance). Lee's analysis led to a number of conclusions, one of which was that the hybrid form tended to work best, especially when used in the presentation mode. In general, the study suggests that simulations are more effective when users are given some level of guidance.

While it is important that research efforts be conducted to examine the influence of factors such as instructional modes, two important factors involving instructional simulations are often left out. In developing a study of instructional simulations, emphasis is most often placed on designing a simulation that fits the research design at the expense of providing a realistic case study for students in a particular class for a particular field of study. Less attention is paid to how a student will respond to a simulation that is built to support what is being taught in the class. The focus is more on the simulation than on the user. This leaves a number of questions as to how students will respond to a simulation if there is information available both through the class and through the simulation. Which resource are they most likely to use? Will they make assumptions about how to respond in the simulation based on classroom instruction that may lead them to miss important resources in the simulation? Jonassen et al. (1994) stated,
 Perhaps the major failing of instructional systems technology
 research has been the lack of concern with the effects of context.
 We test treatments on unsuspecting and often unwilling subjects who
 have no interest or need to know about the content embedded within
 our treatments. That instruction and media exist in and rely upon
 their surrounding context is usually ignored. (p. 32)


Another factor concerns how the simulation provides guidance to users. Not all simulations fit easily into either the pure or the hybrid form. While a simulation may provide information that is helpful or even vital to a student in making a decision during the simulation, the way that information is presented may not be expository instruction. In fact, whether the simulation is viewed as pure or hybrid may be determined by the user and how they respond to it. As the simulation becomes more complex in its attempts to mimic a real-life situation, it places a greater burden on the user, not only in making the correct decisions but also in determining which resources to use in making those decisions. If instructional simulations are to be used in either mode of instruction (presentation or practice), an important element to examine is the context in which users are exposed to that simulation.

This article presents an overview of the research literature concerning studies that have attempted to determine how users will respond to case studies presented in a computer-mediated form. Special focus is placed on those characteristics of computer-mediated case studies that are inherent in most instructional simulations. The results of a study comparing the performance of students enrolled in a class for which the simulation was designed with that of a group of students not enrolled in the class and having no prior experience with the subject matter are then discussed. Attention is given to the performance of the two groups as well as how they navigated through the simulation. The article concludes with a discussion of where future research in this area might be directed, the implications for instructional simulation designers, and some general thoughts on how the simulation affected students in this particular class.

LITERATURE REVIEW

An important factor to consider when examining how effective an instructional simulation will be is the means by which a user navigates through the simulation and accesses information along the way. Nearly all computer-based simulations are built using a hypermedia design that provides a means for students to navigate through the study. Hypermedia systems are what allow simulations to create realistic environments because they provide an opportunity for students to make a variety of choices and conceivably choose a variety of directions with which to proceed through a simulation. Many scholars see hypermedia systems as particularly useful for fields of study defined as "ill-structured" domains of knowledge. This is due to the belief that a hypermedia environment promotes the construction of knowledge that is easier to transfer to novel situations than knowledge developed through more traditional (linear) methods of instruction (Jonassen, Ambruso, & Olesen, 1992; Spiro, Feltovich, Jacobson, & Coulson, 1991). Chen and Rada (1996) indicated that research studies suggest hypermedia instruction is better suited for complex (open) tasks than more traditional approaches to instruction. Iiyoshi and Hannafin (1998) argued that hypermedia promotes more open-ended learning as students develop their own strategies and sequences of learning.

Other scholars believe that hypermedia may actually inhibit learning because of the additional challenges it presents to students. With the use of hypermedia in computer applications for case studies comes the added responsibility of working with the application as well as working through the case study. A number of researchers have raised concerns regarding hypermedia instruction because of the additional burdens it places on a student's cognitive processing. McKerlie and Preece (1993) argued that hypermedia places more demands on a user's memory. Freedom of choice can create confusing experiences for students because it increases decision making. This can be compounded by easily accessible information that is only peripherally relevant to the task at hand (Paolucci, 1998). In analyzing existing research on hypermedia learning contained in several studies, Chen and Dwyer (2003) concluded, "... there is little empirical evidence showing that a hypermedia learning environment improves learning outcomes" (p. 144).

Besides the additional cognitive load placed on students in a hypermedia learning environment, there is the additional factor of the learner's ability and motivation to take advantage of all the instructional elements that are available. McKnight, Richardson, and Dillon (1990) contended that the majority of students are not able to set learning objectives for themselves and autonomously evaluate their performance in relation to the study. Horney (1993) found that students tend not to view all the links available to them, suggesting that simulation users might miss important information if they are not motivated, if they are uncomfortable with the application, or if they assume they already have all the information necessary to perform a specific task.

Many of the studies looking at hypermedia instruction, including simulations, have discussed how hypermedia is best suited for specific types of instruction, such as open-ended instruction. However, if an instructional simulation is to create a realistic scenario, it will contain elements of both open-ended and closed-ended instruction. Sometimes a student must make a decision based on very specific facts. Other times the student may be called upon to extrapolate an answer from a variety of information sources. Also, the information used for answering questions or making decisions in the simulation is often presented in a form designed to fit the specific case study rather than the form it would take in a real-world setting. In many case analyses, the information provided is condensed and filtered by the case writer, diminishing the student's ability to experience and explore the problem in a realistic setting (Small, 1994).

If the simulation is to be used as a way to apply knowledge from traditional instruction, then it is important to see how a student would use the simulation with this knowledge available. In other words, is there a difference in how the student would work through the simulation based on having prior knowledge of the subject versus having to rely solely on the simulation for information? Studies have shown that, in general, hypermedia instruction is more effective with individuals who have a higher level of prior knowledge of the subject. Students with prior subject matter knowledge are better able to generate terms that may be relevant to their search for information regarding a specific topic and to evaluate the veracity of that information once it is found (Chen & Dwyer, 2003; Hill & Hannafin, 1997). However, the research also suggests that hypermedia users may have a tendency not to look at all the resources that are available to them. Would this tendency be magnified in students who felt they could rely on their existing knowledge of the topic, leading them to miss important resources in the simulation?

An additional factor that must be considered is the role that individual approaches to learning play in the effectiveness of students in open-ended, hypermedia environments. Scholars in a number of studies (Boles & Pillay, 1999; Chou & Lin, 1998; Melara, 1996) contend that individual learning styles will influence how a student will work in a hypermedia environment and how successful they will be in accomplishing certain tasks.

The research presents a number of factors that may influence how students will perform with an instructional simulation. The purpose of this study was to analyze the effect that factors associated with the context of a class experience may have on a simulation's effectiveness.

THE STUDY

For this study, a comparison of simulation performance was made between students enrolled in a college-level course for which the simulation was designed and students who had no prior exposure to the subject matter. The top priority in developing the simulation was making it fit with the goals established by the class instructor and making it as realistic as possible within the subject matter. As a result, the nature of tasks that students would perform varied from simple fact-finding to high levels of abstract thinking based on the types of questions that had to be addressed.

The simulation was developed for a first-level public relations course at a small Midwestern university. The specific topic of the simulation involved a crisis communication case study. This subject is well suited for this type of simulation because part of the training that students receive involves making decisions based on limited information that may or may not be accurate. It also involves sorting through several sources of information in order to make decisions.

The 19 students enrolled in the public relations class made up the group who had been exposed to the subject matter in the context of an actual class. The simulation was given near the end of the semester, shortly after students had completed a section on crisis communication. One student was not able to take the simulation, and another student's results were dropped because of the difficulty she had with English as her second language.

The group representing those not exposed to the subject matter consisted of 14 students who were not enrolled in the public relations class and who volunteered to take the simulation. These students came from a variety of majors across the university, with the only stipulation being that they had not taken any courses involving public relations and had not been exposed to any material involving crisis communication.

During the simulation, students were placed in the role of a public relations professional whose organization is facing a crisis. As students progressed through the simulation, they had to filter information that might be incorrect or incomplete. Files containing various pieces of information were available to students at all times within the simulation, which they could use to help answer questions. The files represented resources that would commonly be available to professionals involved in such a scenario. They were presented in the form of office memos and documents to put the information in as realistic a context as possible. As a result, these documents were not necessarily tailored to this specific crisis but were more general in nature to appear more authentic. This often meant that students needed to filter through the information to find what was relevant.

The computer software used to build the simulation was Macromedia Director. The interface presented an office setting viewed from behind the character the student was portraying (see Figure 1 for an example). The documents containing information students could access to help answer questions during the study were reached through a menu simulating desk drawers labeled with six categories of information (Table 1). When students began the study, they were told that the documents contained information that would be important for answering questions in a printed questionnaire they received before starting the simulation. Each student was then given a guided tour through the six categories. By the time students actually began the case study, they had been exposed to the name of every file that was available to them and where that document could be found. At certain points in the study, students were directed to check these resources for information regarding the case study in either text or audio form (representing e-mail or voice mail messages).

[FIGURE 1 OMITTED]

The case study involved a chocolate company facing a potential product tampering crisis. The study began with the student (in the role of public relations director) receiving a voice mail message indicating that an individual had died after eating the company's product. As the simulation progresses, the student is asked at various points to answer questions presented in a written questionnaire. The simulation explains to students that information contained in the document files may be accessed at any time to answer any question. In an attempt to make the simulation as realistic as possible, the information received through e-mail and voice mail messages was often incomplete, inaccurate or from unreliable sources, situations that are quite common in crisis events such as this one.

The simulation was divided into 12 sections. Each section represented a specific point in time during the crisis when the student was faced with making decisions (responding on the questionnaire) with the information that was currently available. There were a total of 26 questions, many of which asked for multiple answers ("What are the three principles to follow when talking to the media during a crisis?" for example). The total number of points that could be earned was 67. Students were specifically told when to answer questions in the questionnaire and when to advance to the next screen in the study. Students were not allowed to go back and change answers to previous questions.

In an effort to characterize the different types of questions asked, each of the questions presented in the questionnaire was ranked along two dimensions on a scale of one to five by the class instructor. One dimension defined a question by the level of extrapolation that was necessary to reach the appropriate answer. Some questions simply required finding the right document and copying the information from that document directly to the questionnaire (level one extrapolation). Other questions did not have direct answers in the documents and required that the students extrapolate information from one or more documents to develop an answer (level five extrapolation). The other dimension, also measured for each question on a scale of one to five, was the degree to which context impacted the answer. For some questions, the answer would apply no matter what the situation was in the case study (level one context). For other questions the context, what was happening at this specific point in the simulation, was very important in determining the answers (level five context).
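
As an illustration only (this code is not part of the original study materials), the Python sketch below shows one way each questionnaire item could be represented with its point value and its two 1-to-5 ratings, and then grouped into the fact-finding and abstract sets discussed in the results. The item IDs and ratings mirror two entries in Table 2; everything else is a hypothetical stand-in.

 from dataclasses import dataclass

 @dataclass
 class Question:
     qid: str            # questionnaire item, e.g., "A.1"
     points: int         # maximum points available for the item
     context: int        # 1 = answer holds in any situation, 5 = depends on this point in the crisis
     extrapolation: int  # 1 = copy from one document, 5 = synthesize across documents

 # Hypothetical codings for two items (the ratings mirror Table 2)
 questions = [
     Question("A.1", points=3, context=1, extrapolation=1),
     Question("I.1", points=2, context=5, extrapolation=5),
 ]

 # Group items the way the results are later discussed: fact-finding vs. abstract
 fact_finding = [q for q in questions if q.context <= 2 and q.extrapolation <= 2]
 abstract = [q for q in questions if q.context >= 4 and q.extrapolation >= 4]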

A written log was kept to track how the student navigated through the simulation, which included every time a document was opened. This information was used, in part, to determine whether a student had accessed a document that was either necessary or useful in answering a question in a specific section. Monitors were present during each student's session and were instructed to record any comments that the students made during the simulation making special note of anything said regarding their thoughts toward the simulation. The monitors also kept track of the amount of time it took to complete the simulation. All questionnaires were scored using an answer key created by the class instructor. The instructor made the final determination of the correctness of any answer that was in question.
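
A minimal sketch, assuming hypothetical data rather than the study's actual coding, of how such a log could be tallied: each document opening is recorded as a (section, document) pair and then compared against the documents judged necessary or helpful for that section. The necessary/helpful assignments below are invented for illustration; only the document titles come from Table 1.

 from collections import Counter

 # (section, document opened) pairs transcribed from one student's written log
 log = [
     ("B", "Crisis Communication Principles"),
     ("B", "Company History"),
     ("I", "Tips for Dealing with Reporters"),
 ]

 # Hypothetical ratings of which documents were necessary or helpful per section
 necessary = {"B": {"Crisis Communication Principles"},
              "I": {"Handling Yourself at a Broadcast Interview"}}
 helpful = {"B": {"The Crisis Management Team"},
            "I": {"Tips for Dealing with Reporters"}}

 necessary_hits = sum(1 for sec, doc in log if doc in necessary.get(sec, set()))
 helpful_hits = sum(1 for sec, doc in log if doc in helpful.get(sec, set()))
 opens_per_section = Counter(sec for sec, _ in log)

 print(necessary_hits, helpful_hits, dict(opens_per_section))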

In an effort to evaluate the impact that learning styles may have on simulation performance, each student completed a Gregorc Style Delineator (1982; see Appendix A for details), a widely used instrument for measuring learning styles and one that has been suggested as possibly useful for developing hypermedia learning tools (Ross & Schulz, 1999). Students in the public relations class also completed a questionnaire asking them to indicate how much they depended on material they learned in class to answer questions and how much difficulty they had determining whether they should rely on class material versus material presented in the simulation.

RESULTS

A comparison of the overall scores between the two groups shows that their performances were similar, although the scores for both groups were very low. Out of a possible score of 67, the nonclass group's mean was 39.11 compared to the class group's mean of 38.58. The low scores were not unexpected given the instructor's efforts to make the simulation as realistic as possible.

When the scores are broken down by question, class students tended to score higher than nonclass students on questions that were ranked high in extrapolation and context. A significant difference (p < .10) between scores occurred in 6 of the 26 questions asked. The three questions class students scored significantly higher on were all ranked with extrapolation and context levels of five. Two of three questions that nonclass students scored significantly higher on were ranked with extrapolation and context levels of one. Table 2 has a breakdown of the t-tests for questions along with their extrapolation and context scores. Table 3 provides details on the questions where there was a significant difference.
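
For readers who wish to see how a comparison like those in Table 2 can be computed, the sketch below runs an independent-samples t-test with SciPy on one question's scores for the two groups. The score lists are invented stand-ins, not the study's raw data, and SciPy is simply one convenient tool for the calculation.

 from scipy.stats import ttest_ind

 # Hypothetical per-student scores on a single question
 class_scores = [2, 1, 1, 3, 0, 2, 1, 2, 1, 2, 3, 1, 2, 1, 2, 1, 2]
 nonclass_scores = [3, 2, 2, 1, 3, 2, 3, 1, 2, 3, 2, 2, 3, 1]

 t_stat, p_value = ttest_ind(class_scores, nonclass_scores)
 print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # differences were flagged when p < .10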

There was a significant difference between groups in how often they accessed documents that were judged as either necessary or helpful in answering the questions. The nonclass group accessed documents rated as necessary for correctly answering questions in a specific section of the questionnaire a significantly higher (p = .02) number of times on average than the class group. The nonclass group also had a significantly higher (p = .06) average for accessing documents rated as helpful to answering questions. This led to a highly significant difference (p = .005) in the amount of time the two groups took to complete the simulation. The nonclass group took an average of 80 minutes to complete the simulation compared to an average of 66 minutes for the class group.

In the survey given to students in the class group after the simulation was completed, 67% responded that they felt confident that they were finding all the information necessary in the study's document folders to answer the questions adequately. Yet 67% also indicated that not knowing if they should rely more on class material or information in the simulation's documents caused them problems. The majority of students (89%) said that they felt that what they learned in class was helpful when answering questions in the simulation.

Learning styles based on the Gregorc Style Delineator showed no significant impact on total simulation scores when all students were included. However, when nonclass student scores were isolated, learning styles did seem to have some impact. The Abstract-Sequential style had a positive correlation with total scores while the Concrete-Random style had a negative correlation with total scores (Table 4). It should be noted that the Concrete-Random style was suggested as being the one best suited for computer simulations (Gregorc & Butler, 1984). The results of this study showed just the opposite.
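
The correlations reported in Table 4 can be reproduced with a calculation along the lines of the sketch below, which pairs each student's score on one Gregorc style with that student's total simulation score and computes a Pearson correlation. All numbers are invented for illustration; they are not the study's data.

 from scipy.stats import pearsonr

 # Hypothetical Abstract-Sequential style scores and total simulation scores (out of 67)
 abstract_sequential = [22, 30, 18, 27, 25, 31, 20, 24, 29, 26, 19, 28, 23, 21]
 total_scores = [35, 45, 30, 41, 38, 47, 33, 36, 44, 40, 31, 42, 37, 34]

 r, p = pearsonr(abstract_sequential, total_scores)
 print(f"r = {r:.2f}, p = {p:.3f}")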

DISCUSSION

The results of this study suggest that there are differences in the way students will use an instructional simulation based on their previous experience with the subject matter in the form of class instruction. While the overall scores were almost identical, the class group tended to score higher on questions that required more abstract thinking. This group appeared to benefit from exposure to the terminology, principles, and concepts stressed in class, which allowed them to "see the bigger picture."

However, the class group also showed a tendency to miss out on resources in the simulation that would have been important in answering many of the questions, especially those questions that required a specific fact for an answer. As a result, the nonclass group tended to score higher on questions where the answer relied heavily on finding the right document. Because of their total reliance on the simulation for information to make decisions, they were more thorough in how they proceeded. Also, they did not have to deal with the potential conflict of deciding which source of information, the class or the simulation, they should rely upon.

The results of the survey appear to support the idea that having two sources of information from which to draw may create confusion in terms of how students will use one or the other resource. The class group expressed that they used material from class instruction even though they were not specifically told to do so and that this ambiguity created some problems in how they used information in the study.

In attempting to use information gathered in class, these students relied mainly on memory, which was not always accurate, or they recalled pieces of information but did not apply them in the proper context. As a result, they failed to look up the accurate information on fact-specific questions in the resources that were available to them in the simulation. As one student described the experience in reflecting on the simulation, "It was like taking an open book test without using the book." While they may have felt they were accessing all relevant information, an analysis of their actions demonstrated that they often were not.

The results regarding learning styles pointed more to the influence of class involvement than to specific learning styles. The study suggests that students who were not in the class and were depending solely on the simulation for information tended to rely more heavily on their dominant learning styles. The fact that the learning styles that significantly and positively affected scores were different from those suggested by the tool's author raises questions concerning the instrument's validity for this study. It also suggests that learning styles may be more influential on student performance in some simulation forms than in others.

The fact that the learning style that significantly impacted scores negatively was the one suggested by the tool's author as being best suited for computer simulations raises questions concerning the instrument's validity for this study. When identifying the types of learning environments that are best suited for the Concrete-Random style, Gregorc and Butler (1984) group simulations in with learning tools such as open-ended problem solving, independent study, and exploration. Putting simulations in this category tends to generalize the activities involved in working through the simulation, some of which were highly structured and required following specific instructions closely.

Gregorc and Butler (1984) did state that Concrete-Random students "might have difficulties in classes that demand step-by-step learning, especially if they enjoy discovering how things work by trial and error" (pp. 28-29). While the case study was designed to encourage exploration of resources and promote open-ended problem solving, there were a number of instances where step-by-step learning was required. Many times, these different activities were required for the same question, making it difficult to isolate the impact of learning style on the results of specific questions. This suggests that a more detailed analysis of how learning styles affect computer simulations that integrate these different activities would be helpful.

An interesting result of this study that was not a part of the original design was the impact that misreading questions may have played on the results. When doing the initial scoring on the questionnaire, it became obvious that there were students who accessed the right documents but gave the wrong answer because they did not realize the context of the question. This was due in large part to some students losing track of the characters involved in the study, including the character they were portraying. For example, in a couple of sections of the simulation, the student is asked to call a meeting to discuss some aspect of the crisis. For each meeting, the student, in the role of the public relations director, is asked who should attend the meeting. The question is worded, "Besides you and the CEO (both mentioned by the names they are given in the simulation), who should attend the meeting?" In both instances, there is a document which provides the names (by job title) of people who attend meetings such as these. For these questions, 72% of the class group and 50% of the nonclass group got part of the answer wrong by naming either the PR Director or the CEO even though the question specifically said they were already on the list and did not need to be included in the answer.

In another question, the student, whose simulation character was about to return a phone call from a reporter regarding the crisis, was asked to provide some guidelines to follow when speaking to the reporter. Again, there was a document in the simulation that contained several tips the student could use for the answer. At least two of the students used the tip listed in the document, "Make eye contact," even though the context of the question involved a phone call. It seemed obvious that some students became so involved in finding the document that fit the question that they lost track of what the question was really asking. This points to the importance of reinforcing the role that the student is playing during the simulation.

The results of this study suggest two areas of consideration for instructional simulation researchers and designers. For researchers, the study points to the importance that contextual factors have on how students perform in such simulations. Further study is required to determine how students apply various information resources, both internal and external to the simulation itself, and what can be done to improve how students use the resources the simulation presents. For designers, the study highlights the importance of coordination with class instructors. Instructional simulations need to clearly communicate to students what information they should be gathering from the simulation's resources. The study also points to the need to reinforce the role-playing aspect of simulations in order to take full advantage of them.

Finally, it should be noted that the instructor of the course saw value in using the simulation even though the average scores were very low. This simulation served as a point of discussion in later classes as students related their experiences, particularly with those sections that proved especially difficult. The instructor believed that having students experience an environment in which the answers are not always apparent or obvious helped them better understand what their experiences might be as future professionals. Although this can be conveyed through class discussion, the instructor contends that the simulation created greater student interest by producing an environment that more closely imitates real life experiences. This led to an increase in the level of involvement among the students in the class. While further study and improvements are needed, there does appear to be value in supplementing classroom experiences with computer-based instructional simulations.

References

Boles, W.W., & Pillay, H. (1999). Matching cognitive styles to computer-based learning instruction: An approach for enhanced learning in electrical engineering. European Journal of Engineering Education, 24, 371-384.

Chen, C., & Rada, R. (1996). Interacting with hypertext: A meta-analysis of experimental studies. Human-Computer Interaction, 11, 125-156.

Chen, W.-F., & Dwyer, F. M. (2003). Hypermedia research: Present and future. International Journal of Instructional Media, 30(2), 143-148.

Chou, C., & Lin, H. (1998). The effect of navigation map types and cognitive styles on learners' performance in a computer-networked hypertext learning system. Journal of Educational Multimedia and Hypermedia, 7(2/3), 151-176.

Gregorc, A. F., & Butler, K. A. (1984). Learning is a matter of style. VocEd, 59(3), 27-29.

Hill, J. R., & Hannafin, M. J. (1997). Cognitive strategies and learning from the world wide web. Educational Technology Research and Development, 45(4), 37-64.

Hoffman, B., & Ritchie, D. (1997). Using multimedia to overcome the problems of problem based learning. Instructional Science, 25, 97-115.

Horney, M. (1993). A measure of hypertext linearity. Journal of Educational Multimedia and Hypermedia, 2(1), 67-82.

Iiyoshi, T., & Hannafin, M.J. (1998, April). Cognitive tools for open-ended learning environments: Theoretical and implementation perspectives. Paper presented at the Annual Meeting of the American Educational Research Association, San Diego, CA.

Jonassen, D. H., Ambruso, D. R., & Olesen, J. (1992). Designing a hypertext on transfusion medicine using cognitive flexibility theory. Journal of Educational Multimedia and Hypermedia, 2, 309-322.

Jonassen, D. H., Campbell, J. P., & Davidson, M. E. (1994). Learning with media: Restructuring the debate. Educational Technology Research & Development, 42(2), 31-39.

Lee, J. (1999). Effectiveness of computer-based instructional simulation: A meta-analysis. International Journal of Instructional Media, 26(1), 71-86.

Melara, G. E. (1996). Investigating learning styles on different hypertext environments: Hierarchical-like and network-like structures. Journal of Educational Computing Research, 14, 313-328.

McKerlie, D., & Preece, J. (1993). The hype and the media: Issues concerned with designing hypermedia. Journal of Microcomputer Applications, 16(1), 33-47.

McKnight, C., Richardson, J., & Dillon, A. (1990). Journal articles as learning resource: What can hypertext offer? In D. Jonassen & H. Mandl (Eds.), Designing hypermedia for learning (pp. 277-291). Berlin: Springer-Verlag.

Paolucci, R. (1998). The effects of cognitive style and knowledge structure on performance using a hypermedia learning system. Journal of Educational Multimedia and Hypermedia, 7(2/3), 123-150.

Ross, J.L., & Schulz, R. A. (1999). Using the world wide web to accommodate diverse learning styles. College Teaching, 47(4), 123-129.

Small, R.V. (1994). Adding value to the case study method: The potential of multimedia. Journal of Instruction Delivery Systems, 8(1), 18-19.

Spiro, R. J., Feltovich, P. J., Jacobson, M. J., & Coulson, R. L. (1991). Cognitive flexibility, constructivism, and hypertext: Random access instruction for advanced knowledge acquisition in ill-structured domains. Educational Technology, 31(5), 24-33.

APPENDIX A Gregorc Style Delineator

The Gregorc Style Delineator provides a measurement of four categories of learning style. Gregorc and Butler (1982) associated each style with the type of learning environment in which students who are dominant in that style work best.
Concrete Sequential (CS): Works best with specific directions and step-by-step instructions.

Abstract Sequential (AS): Works best with theories and abstract ideas that require the application of logic and rational thought.

Abstract Random (AR): Works best when the environment is important in learning, such as working with others on group projects.

Concrete Random (CR): Works best in situations requiring exploration and open-ended problem solving. Gregorc and Butler specifically mention that this type of person works well with simulations.


MARK SHIFFLET AND JANE BROWN

University of Evansville

USA

ms83@evansville.edu

jbrown@roadescape.com
Table 1 Documents Available to Students in the Simulation, by Subject Category

Principles of Crisis Communication
 R.O.P.E.
 What is a Crisis?
 Crisis Communication Principles

Communication Resources
 The Crisis Management Team
 Crisis Space Needs
 Floor Plan for Emergency Operations Center
 Floor Plan for Information Center
 Setup for Press Conferences
 Media Kits
 Crisis Hotline
 Using the Web in a Crisis

Internal Communication
 Notifying Aunt Lolly's Personnel in a Crisis
 Keeping your Employees Informed

External Communication
 News Release Guidelines
 Tips for Dealing with Reporters
 Handling Yourself at a Press Conference
 Handling Yourself at a Broadcast Interview
 Outside Notification in a Crisis
 Key NonMedia Constituencies
 Key Media Contacts

Aunt Lolly's Information
 Mission Statement
 Company History
 Organizational Chart
 Memo about Possible Risk to Company

Table 2 Mean Scores and t-Test Values for Simulation Questions by Context and Extrapolation Ratings

                                                 Context,
          Total    Class   Non-Class   t-Test    Extrapolation
Question  Points   Mean    Mean        p-value   Rating (1)

A.1       3        1.69    2.14        0.09*     1, 1
D.1       4        3.17    3.36        0.57      1, 1
D.2       4        3.22    3.71        0.15      1, 1
E.1       1        0.89    0.79        0.44      1, 1
B.3       3        2.17    2.64        0.17      1, 1
G.2       3        2.61    2.36        0.52      1, 1
A.2       1        0.56    0.36        0.28      2, 1
G.1       3        2.22    1.89        0.30      1, 3
H.1       1        0.86    0.89        0.78      2, 2
I.2       3        1.44    1.82        0.37      2, 2
L.1       3        0.50    1.43        0.001*    2, 2
C.1       1        0.94    1.07        0.16      2, 3
C.2       3        1.61    2.07        0.08*     2, 3
E.2       3        1.47    1.71        0.27      3, 2
G.3       4        2.14    2.11        0.94      1, 5
H.2       1        0.64    0.54        0.49      2, 4
H.3       3        1.97    1.96        0.98      3, 3
J.1       3        0.94    0.86        0.54      4, 4
K.1       3        1.03    1.04        0.98      4, 5
A.3       3        1.50    1.00        0.05*     5, 5
B.1       3        2.44    2.04        0.12      5, 5
B.2       2        1.39    1.54        0.59      5, 5
F.1       1        0.19    0.04        0.16      5, 5
F.2       3        1.22    0.71        0.10*     5, 5
I.1       2        0.83    0.21        0.02*     5, 5
J.2       3        0.92    0.82        0.71      5, 5
Total     67       38.58   39.11       0.82

* Significant at p < .10 or lower.

(1) Context: 1 = answer not dependent on context; 5 = answer highly dependent
    on context. Extrapolation: 1 = no extrapolation necessary for answer;
    5 = extrapolation necessary for answer.

Table 3 Simulation Questions in Which There Was a Significant Difference in Group Scores

A.1 "Whom, inside the company, does Casey Doe notify first? Second? Third?"
    This is the first question the student is asked after being notified about the possible product tampering.

L.1 "Doe plans to emphasize which three points when addressing a beginning public relations class on the E step of the public relations process in crisis communication?"
    This question was asked in the context of Doe addressing a college class.

C.2 "Identify at least three errors of content (not format) that Doe identifies in the press release handed back to Chris Lowly"
    This question is asked regarding Doe's evaluation of two drafts of a news release.

A.3 "Circle the top three non-media publics Doe should notify if this situation is, indeed, a crisis"
    The student is presented a list of 12 specific non-media publics from which to select 3.

F.2 "What are three communication objectives that Doe might set, along with the main public targeted by each objective?"
    This question is asked in the context of a corporate meeting concerning the crisis.

I.1 "Casey Doe and CEO Scotty LeMeaux meet to strategize about the interview ... What is the message and what is the sound-bite that Scotty can repeat during the interview?"
    This question is asked in the context of preparing the company's CEO for a nationally-televised interview.

Table 4 Correlation of Gregorc Learning Style Scores with Simulation
Scores (Class and Non-class)

Learning Style All Students Class Students Non-class Students

Concrete-Sequential: r = .019 r = -.01 r = .061
Abstract-Sequential: r = .18 r = -.01 r = .52 (1)
Abstract-Random: r = .007 r = .229 r = -.37 (2)
Concrete-Random: r = -.20 r = -.21 r = -.19

(1) t-test was significant at p <.05
(2) t-test was significant at p <.10