Vital signs for instructional design.


Graphics' contribution to learning has long been acknowledged (Duchastel, 1980; Evans, Watson, & Willows, 1987; Lee & Boling, 1999; Levie & Lentz, 1982), but the process for selecting graphics, especially graphics that represent abstract concepts, has been largely ignored. Instructional designers should have three design competencies that address graphics: communicate visually and in writing; design page layout and screen design consistent with message design principles; and identify visuals to instruct, orient, or motivate students from diverse backgrounds and roles (Richey, Fields, & Foxon, 2001). This study investigated how two experienced, academically trained instructional designers selected graphics to represent abstract concepts and how learner ratings of each graphic's appropriateness may relate to the selection process.


Humans have conveyed their lessons through signs, symbols, and stories for thousands of years. Humans use language in similar ways with symbols, oral narratives, and graphic representations to convey meaning (Levi-Strauss, 1969). Levi-Strauss suggested these representative elements provided unconscious structures for knowledge and could be validly interpreted across cultures and time. People can reinterpret their perceptions, the tools that shaped those perceptions, and the impact of both (Erstad & Wertsch, 2008). This is one of the foremost reasons to incorporate graphics into instructional design, since instructional design depends upon the power of visuals to communicate and to facilitate learning (Richey, Klein, & Tracey, 2011).

Pictures seem to be more easily remembered than words, perhaps because most humans are visual thinkers and their pictures are situated in cultural experiences and memories (Erstad & Wertsch, 2008). Humans mediate these cultural experiences and memories with new information and assimilate them into their knowledge. Pictures are, as the old adage states, worth a thousand words because they convey meaning and communicate information. Because instruction is all about communication and learning, communication skills are essential instructional design competencies, so visuals and graphics should be valued as primal forms of communication. This study, grounded in semiotics theory and professional standards, documents the graphics selection process and implications for instructional designers.

New technologies are often emphasized in the design and delivery of online courses. Instructional designers employ these technologies, often using graphics in course designs, but the majority of online course content remains text-based. As online course design matures, more research into how designers select graphics and online learner perceptions of graphics becomes increasingly relevant.

Graphics: One Aspect of Semiotics

Semiotics is defined as the study of signs, codes, symbols, and metaphors that represent something else (Chandler, 2002). Semiotics encompasses stories, linguistics, and proxemics, and extends to technologies such as computers and mobile devices. Semiotics in education is receiving more attention (Tochon, 2013) and holds implications for instructional design. Marketing practitioners and academics have long acknowledged semiotics as a social-science tool for analyzing the influence of verbal, visual, and spatial sign systems and interpreting cultural codes (Oswald, 2013). As Bruner (1996) noted,

   The evolution of the hominid mind is linked to the development of a way of life where "reality" is represented by a symbolism shared by members of a cultural community in which a technical-social way of life is both organized and construed in terms of that symbolism. This symbolic mode is not only shared by a community, but conserved, elaborated, and passed on to succeeding generations who, by virtue of this transmission, continue to maintain the culture's identity and way of life.... On this view, knowing and communicating are in their nature highly interdependent, indeed virtually inseparable. (p. 3)

Semiotic Theory

The marketing value of semiotics rests with the affective meanings associated with graphics, signs, and metaphors that are recognized and respected by prospective consumers. Semiotics enables marketing to anchor nonverbal signs in consumer culture (Oswald, 2008). The cultural order and categories are analyzed and ascribed to the lifestyles and values of consumers and matched to the semiotic codes that will facilitate the transference of intended messages in brands and products. This technique, called semiotic marketing, successfully uses the ways humans respond to messages in their environments to promote their purchase of products. Extensive, strategic calibration and branding of signs, symbols, and semiotic tools in a marketing context can identify strategic relationships between product categories and competitors or consumers (Oswald, 2013). Instructional designers can use the same semiotic approach when designing online courses.

Learners who see strategic visual information combined with text are more likely to recall the information than if they only have verbal information (Clark & Lyons, 2010; Richey et al., 2011). Clear graphics combined simultaneously with text or audio that activate a learner's prior knowledge can be a powerful instructional method for the visually literate learner (Flick, 2013). Graphics that represent concrete concepts such as a house, a car, or a dog are relatively easy to select; abstract concepts are more difficult to represent graphically yet equally useful for capturing learner attention. Graphics can cue the learner to important text information (Richey et al., 2011).

Semiotic theory provides a philosophical rationale for using objects, such as graphics, to facilitate reading and interpretation (Hlynka, 2013); meaning is conveyed through signs and symbols, which the interpreter reads through his or her culture and prior knowledge. Therefore, inserting signs and symbols--including text analogies or metaphors--affects meaning. Others have suggested that online course design should incorporate graphic concepts, symbols, or visuals that are meaningful and facilitate learning (Clark & Lyons, 2010; Gannon-Cook, 2012; Tochon, 2013).

Graphic concepts may be defined as familiar visual images that scaffold concept learning. Abstract concepts add complexity to finding meaningful images because they are not associated with a readily identifiable image. Graphics, images, and pictures refer to an entire situation in which an image has made an appearance, as when one asks someone if he gets the picture. Heidegger proposed that this is the age of a world conceived and understood as picture (Mitchell, 2005). For example, if the concept is "monarch butterfly," then a picture of a monarch butterfly would be the obvious graphic representation. On the other hand, if the concept is "needs assessment," then identifying what explicit image could represent this abstract concept is neither obvious nor easily determined. An image becomes descriptive and discursive, an analogy between an image and an artifact or between an icon and reality (Mitchell, 2005). A very long history of images has described people, specimens, and social and cultural events; these images have migrated from one culture to another and have survived, leaving memorable impressions, even though the oral narratives and texts have been forgotten (Mitchell, 2005). Semioticians, such as Peirce (1931), agree that while signs and symbols may seem meaningless unless taken in context (Goodman, 1978; Mounce, 1997), entire realities are described with images (Mitchell, 2005), and placing a picture with an abstract description can instantly afford the viewer a recognizable or meaningful association. The reader interprets meanings from the symbols or pictures and words because the picture summarizes words that are both indeterminate and ambiguous (Mitchell, 2005).

Instructional Graphics

Graphics can promote learning if the viewer reads the same story as the designer and subject matter expert (SME) intended. Adding graphics enhances an online course (Gannon-Cook, 2011, 2012; Kallinikos, Aaltonen, & Marton, 2010; Reed, 2012; Stanney, 2003; Zaltman & Zaltman, 2009). Online instructional effectiveness studies have repeatedly linked graphics that integrate learning with concrete visual representations of abstract concepts (Means, Bakia, & Murphy, 2014). Effective instructional graphics convey a story to the learner that will contribute to the intended learning. There are design guidelines for adding instructional graphics (Clark & Lyons, 2010; Clark & Mayer, 2011; Lee & Boling, 1999; Vai & Sosulski, 2011), although most emphasize graphics for concrete concepts or processes. Instructional graphics can add value, but creating graphics may be problematic for instructors or instructional designers who do not possess graphic design skills or who do not have a graphic designer to help them. Few if any studies have investigated what constitutes an effective selection process for identifying graphics from existing resources to enhance abstract concept learning.

While the research on the effects of instructional graphics has long been of interest (Duchastel, 1980; Evans et al., 1987; Lee & Boling, 1999; Levie & Lentz, 1982), the methods by which these graphics are chosen for abstract concepts and the resulting learner perceptions of those graphics are less well documented. This study investigated a graphics selection process and the corresponding learners' appropriateness ratings of graphics selected to illustrate important abstract concepts in a graduate instructional design course. Therefore two research questions framed the study: (1) How do experienced instructional designers engage their professional competencies and collaborate to select graphics to represent abstract concepts? (2) How would students who have just completed an online human performance technology (HPT) course rank appropriateness of graphics for representing abstract HPT concepts?


Method

This study investigated the process of identifying meaningful visual representations for abstract concepts that will be referred to as graphic-concepts. We followed an established method for studying designer decision making and problem solving (Richey, 2013; Richey & Klein, 2007). This study is classified as process development research (Richey & Klein, 2007) because the development processes and the resulting product, graphics representing abstract concepts, are the object of investigation. The method enables the researchers to create knowledge grounded in data systematically derived from practice (Richey & Klein, 2007). Unique to instructional design-development research, the method can answer questions about how instructional designers engage in their professional activities (Richey & Klein, 2007).

The purpose of this study was to describe a systematic process for selecting graphic concepts and learner perceptions of those graphics. As a two-person team we documented and analyzed our process for selecting graphics to represent each of the eight abstract HPT concepts presented in a graduate online course.


Participants

Two online faculty members and nine students were study participants. Nine of 11 graduate students enrolled in an online graduate HPT course required for a master's degree ranked the graphics. The five men and four women were enrolled in a public urban university with about 8,000 graduate and undergraduate students. We sent students a course message, including the informed consent form, requesting students participate in the research. Students who volunteered to participate agreed with a reply to the course message.


Procedure

Developmental research requires extensive and detailed notes about the process under investigation. Initially the investigation was to select a set of concepts for students from a graduate course in HPT. We formed a team of two coinvestigators, one of us who had taught an online graduate HPT course for several semesters and wanted to enhance the course with graphics to represent foundational HPT concepts, and the other an experienced online instructor who had investigated applying semiotic principles to graphically enhance online instruction (Gannon-Cook, 2011). We asked if designers would be able to select images that students perceived were representative or consistent with corresponding HPT concepts. The concepts would be selected from course materials and a syllabus that would be used to redesign the HPT course. Part of the graphics selection process was to keep a log of four points (J. Klein, personal communication, December 2012): What did I do today? What did I think of what I did? How will it change what happens in the future? What constraints affected what I did?

Graphic Selection

The first decision was how to choose the concepts for which we would select graphics. We agreed that the faculty member teaching the course should select the concepts from the syllabus and course materials from an on-ground course in a different program. The new online course would incorporate or adapt the concepts as required for the different delivery method. The on-ground course had six modules. Based on the review of the objectives and assignments for each of the six modules, eight HPT concepts that students would have to comprehend to complete assignments successfully were selected.

We scheduled four collaborative development sessions for four consecutive days--4 hours per day in the same location. We simultaneously and systematically collected data on our collaborative process with copious notes and observations on the approximately 16 hours of development time. We documented the development process with design log notes written and recorded in real time. We began with the list of concepts and a development goal to select a graphic representing each that would cue online learner attention to each of the concepts when introduced in the online materials.

The eight concepts for which graphics would be selected represented the concepts addressed in one or more of the six units in the course: HPT, intervention, performance analysis, performance improvement model(s), performance problem, cause analysis, performance gap, and problem statement. At the first of four daily sessions, we agreed to find a graphic for each of the eight concepts in sequence and began with the first term: HPT. We decided to consider graphics for each of the three words in the term. After sketching some possibilities on paper, we agreed that three separate graphics would not accurately communicate the term and could potentially be more confusing than reinforcing.


At this point in the development process we agreed to select a single graphic to represent each HPT concept and began searching available graphics from online resources, starting with Pinterest, a social media website dedicated to collating, displaying, and disseminating pictorial and text information. After a few frustratingly unproductive minutes searching Pinterest, we decided to try Google images. Our initial searches on Google yielded too many useless or irrelevant images, so we changed the search strategy to imagine what might be a possible graphic representation and then search for a corresponding graphic in Google images. Google images proved to be a trove of diverse, potentially useful graphics. We began to search in earnest for a graphic to capture HPT.

We sat on either side of a 2-foot-wide table, in front of our computers, so that the left side of one laptop screen was inches from the left side of the other laptop screen. This enabled us to search and quickly turn the screen around to show images to each other. The instant, real-time exchange of images and ideas for selecting graphics enabled us to consider and accept or dismiss images almost as quickly as we could find them unless we disagreed. As we began to search for likely graphics to illustrate the first concept, HPT, one of us suggested a ballerina. Both of us rejected the image after discussing how the ballerina would reinforce a common misconception of previous students entering the course: HPT was about physiology. Our physical proximity facilitated communications with quick interactions to reconcile differences of opinion and clarify miscommunications.

During our initial discussions about the salient attributes of HPT, we agreed that a concept definition was relatively useless; a definition of an abstract concept and its attributes did not necessarily imply an image. For example, the first concept term had three words, HPT, so the faculty member/designer who had not taught HPT decided to find graphics for each word in the concept and then string the images together. Yet the faculty member/designer who had taught the HPT course for years immediately saw a potential problem with this approach--a graphic could perpetuate misunderstandings novice learners had when they first saw the concept. Specifically, representing performance with a ballerina, as one designer suggested, implied the concept was about physical activities performed by one person. The SME designer objected to the ballerina image because novice learners beginning the course often held a common misperception that HPT was about improving physical performance through physiological analysis. The graphic designer countered that a ballerina or a performing musician would be familiar images readily interpretable by the novice learners. The instructor-designer agreed the images were familiar but maintained they would encourage misperceptions about HPT instead of associating the concept with its content domain definition. The graphic had to foster a new interpretation consistent with the course content domain.

Refining the Technique

Our initial intense discussion about how to proceed led us to a refined technique for finding potential images and documenting our efforts. Almost immediately one of us chose a cloud application to organize development notes so that the notes could easily and quickly be recorded and shared. For each of the eight concepts, we recorded notes in a cloud application that captured the graphics and recorded notes for adapting or revising the graphic that we discussed. With a click and entering an email address, one of us instantly sent the completed note to the other so both would have a complete record of possible graphics for each concept.

Sitting side by side with our respective laptop computers, we began to search for online images for the same concept. As each would find a possible image, the other would comment on why it might or might not work. As the process continued, the instructor noted that many of our comments identified salient attributes of the graphic that might represent the concept or possibly confuse the learner. These comments would then guide our future graphic selections.

The iterative process led the two designers to assume different roles. One became the SME and the graphics expert became the instructional designer. These roles emerged within the first 2 hours of the first collaborative session and persisted across all eight concepts; the resulting images and detailed descriptions of the decisions were reported in our combined data analysis. We selected three graphics for each concept (Appendix A) with one exception--the term "intervention" (Questions 4-7)--because we both were concerned that the term could engender confusion without at least one additional picture option for the students. Following intense discussion and disagreement over which three graphics should be included for this concept, we included four graphics for intervention.

Graphic Appropriateness Ratings

After listening to each other, we concluded one way to resolve the discussion would be to select three potential images and allow our students to rank their appropriateness for representing the concept. Students ranked the graphic appropriateness of the 25 graphics representing eight concepts after they had completed the HPT course and had learned about the concepts. Therefore, students from the target learner group--but with prior conceptual knowledge--would evaluate how well each graphic represented the concept. Furthermore, their ratings would be influenced by what they had learned in the course.

Rating Instrument

Students accessed the online instrument through the course learning management system. The instrument presented the 25 graphics for eight concepts. The three graphics for each of seven concepts and four graphics for one concept (intervention) were presented sequentially, one concept at a time. Students were asked to electronically select one of five appropriateness options: (a) very appropriate, (b) appropriate, (c) no opinion, (d) inappropriate, and (e) very inappropriate.

Data Analysis

We searched for patterns or trends in the kinds of graphics students selected. To conduct a rank order analysis of student perceptions, values were assigned to appropriateness ratings: very appropriate, +2; appropriate, +1; no opinion, 0; inappropriate, -1; very inappropriate, -2. Twenty-five graphics were ranked with frequencies and modes. We individually analyzed patterns and through discussion integrated our analyses.
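The scoring scheme described above can be sketched in a few lines of Python. The two frequency rows are copied from Appendix B (items 21 and 13); the function names and the use of `statistics.multimode` are illustrative assumptions, not the study's actual analysis code.

```python
from statistics import multimode

# Point values assigned to the five appropriateness options (from the text)
WEIGHTS = {"very appropriate": 2, "appropriate": 1, "no opinion": 0,
           "inappropriate": -1, "very inappropriate": -2}

# Rating frequencies for two illustrative items, copied from Appendix B
items = {
    "Performance gap 21": {"very appropriate": 5, "appropriate": 4,
                           "no opinion": 0, "inappropriate": 0,
                           "very inappropriate": 0},
    "Performance improvement models 13": {"very appropriate": 0,
                                          "appropriate": 1, "no opinion": 0,
                                          "inappropriate": 5,
                                          "very inappropriate": 3},
}

def score(freqs):
    """Weighted appropriateness score: sum of rating value x frequency."""
    return sum(WEIGHTS[option] * n for option, n in freqs.items())

def modes(freqs):
    """Modal rating value(s); multiple modes signal a split opinion."""
    expanded = [WEIGHTS[option] for option, n in freqs.items() for _ in range(n)]
    return multimode(expanded)

# Rank items from most to least appropriate by weighted score
ranked = sorted(items, key=lambda name: score(items[name]), reverse=True)
```

Applied to these two rows, item 21 scores 5(2) + 4(1) = 14 with mode 2, while item 13 scores 1 - 5 - 6 = -10 with mode -1, matching their first and 25th positions in the rank order.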


Results

Rank modes (Table 1) and frequencies (Appendix B) are reported for the descriptive statistics. Six graphics with the highest mode, 2, represented four concepts (listed with their survey item numbers): Performance Gap 21, 22; HPT 1, 2; Intervention 7; Performance Improvement Models 11. Six graphics representing five concepts shared the lowest mode, -1: Performance Improvement Models 12, 13; Intervention 6; Performance Analysis 8; Performance Problem 15; Problem Statement 24. Four graphics had two or more modes, of which one was zero--no opinion: Intervention 6; Performance Analysis 8; Cause Analysis 17, 18. Only one graphic had a no opinion mode, Performance Problem 16. Other graphics with frequencies of at least two negative responses and several no opinions were Human Performance Technology 3; Intervention 4, 6; Performance Analysis 8, 10; Performance Problem 15, 16.

The rank order analysis resulted in appropriateness rankings for the 25 graphics. Table 1 displays the graphics in rank order from least appropriate, Performance Improvement Models 13, ranked 25th, to most appropriate, Performance Gap 21, ranked first. Graphics with images associated with nonprofessional contexts were ranked less appropriate. For example, the two least appropriate graphics (race cars and runway models) ranked 24th and 25th. Graphics with text repeating the concept but displaying images associated with an explicitly nonprofessional context were also ranked less appropriate. For example, school children with a chalkboard displaying the text problem statement ranked 23rd; a checkered flag combined with the text performance analysis ranked 20th. Graphics with text unrelated to the concept were also deemed less appropriate. Arrows pointing to the text answer, a graphic for the concept intervention, and a clipboard displaying the text checklist, a graphic for the concept performance problem, ranked 21st and 22nd, respectively.

Graphics ranked as most appropriate had either text that repeated terms from the course or abstract humans in action. The bar graph titled gap analysis, which illustrated gaps, ranked first. The circle of arrows with the text improvement process in the center, representing HPT, ranked fourth. A graphic with performance gap text and arrows pointing to the gap ranked sixth. The seventh-ranked graphic represented performance improvement models with the text performance models and eight cylinders, each a different color and labeled model, extending from the center. Similarly, the eighth-ranked graphic was a cartoon figure with text asking how do you fill this gap? between current and desired results. Graphics depicting abstract human images engaged in abstract activities consistent with HPT concepts ranked second, third, and fifth. Student appropriateness rankings for the remaining items, those ranked between ninth and 19th, reflected ambiguities: their ratings were distributed across the range from very appropriate to very inappropriate, and many were no opinions.


Discussion

This study indicates that graphics can be selected efficiently by expert designers, although images should be closely linked to course content when cueing professional concepts. In addition, despite the small sample size, the data revealed that students who were familiar with the concepts reached some consensus about the most and least appropriate graphics. Furthermore, students rated the image and text as a graphic unit: there were instances when the text was identical to the concept, but the images telegraphed a nonprofessional or perhaps conflicting context, and the student consensus was that these graphics were inappropriate. If either the text or the image in a graphic was inconsistent with what students had learned in the HPT course they had just completed, they ranked the graphic as inappropriate; graphics whose images and text were both consistent with the HPT course concepts and context were perceived as appropriate.

More attention should be given to the selection of graphics to describe and enhance important learner concepts in online courses. Designers can collaborate to select appropriate graphics to prompt attention to abstract concepts, but images and text must both be consistent with the context and meaning of the abstract concept. In this study, our complementary expertise and experiences affected the process in unanticipated and important ways. The two research questions answered were: how did experienced instructional designers select graphics to represent abstract HPT concepts, and how did students perceive the appropriateness of the same graphics for representing abstract concepts?

The answer to the first question emerged from the multisession graphics-selection process simultaneously and individually documented by each of us. Designers who bring differing expertise to a collaborative process enrich the design process, although differing expertise may mean the design process should begin by clarifying assumptions about the content, especially when one serves as an SME and the other as a designer with graphics and semiotics expertise. An SME who had taught the concepts to learners would be best able to identify images that might be misinterpreted or unfamiliar to the learner group. In this study each brought a different expertise, but collaboratively selected graphics that represented concept options that even learners did not necessarily agree were appropriate.

Clear and explicit communications between the SME and instructional designer were essential to the design process, although at times this can be fraught with disagreement (Keppell, 2004). Communications have to clarify both potentially confusing and most relevant attributes of each concept they seek to illustrate. A clear, communicative relationship is essential in selecting course graphics with both parties acknowledging and agreeing upon the importance of graphics to cue learner attention to relevant concepts.

The second research question, how would students who have just completed an online HPT course rank graphic appropriateness for representing abstract HPT concepts, was answered with the student appropriateness rankings. Students may have diverse perceptions of graphics appropriateness when the graphics lack explicit content that cues the appropriateness or inappropriateness of the graphics. Furthermore, either a graphic's text component or image may perform that function. Future research should investigate how culture and prior knowledge affect learners' perceptions of graphic appropriateness.


Appendix A

* Three graphics for the concept: human performance technology

* Four graphics for the concept: intervention

* Three graphics for the concept: performance analysis

* Three graphics for the concept: performance improvement model(s)

* Three graphics for the concept: performance problem

* Three graphics for the concept: cause analysis

* Three graphics for the concept: performance gap

* Three graphics for the concept: problem statement

* Twenty-five total graphics


Appendix B

1. Human performance technology: 4 very appropriate, 4 appropriate, 0 no opinion, 0 inappropriate, 0 very inappropriate

2. Human performance technology: 6 very appropriate, 1 appropriate, 0 no opinion, 1 inappropriate, 0 very inappropriate

3. Human performance technology: 0 very appropriate, 4 appropriate, 3 no opinion, 2 inappropriate, 0 very inappropriate

4. Intervention: 4 very appropriate, 1 appropriate, 2 no opinion, 2 inappropriate, 0 very inappropriate

5. Intervention: 2 very appropriate, 4 appropriate, 1 no opinion, 1 inappropriate, 0 very inappropriate

6. Intervention: 1 very appropriate, 2 appropriate, 3 no opinion, 3 inappropriate, 0 very inappropriate

7. Intervention: 4 very appropriate, 4 appropriate, 1 no opinion, 0 inappropriate, 0 very inappropriate

8. Performance analysis: 1 very appropriate, 2 appropriate, 3 no opinion, 3 inappropriate, 0 very inappropriate

9. Performance analysis: 1 very appropriate, 2 appropriate, 3 no opinion, 3 inappropriate, 0 very inappropriate

10. Performance analysis: 2 very appropriate, 4 appropriate, 1 no opinion, 2 inappropriate, 0 very inappropriate

11. Performance improvement models: 4 very appropriate, 3 appropriate, 1 no opinion, 1 inappropriate, 0 very inappropriate

12. Performance improvement models: 0 very appropriate, 3 appropriate, 0 no opinion, 6 inappropriate, 0 very inappropriate

13. Performance improvement models: 0 very appropriate, 1 appropriate, 0 no opinion, 5 inappropriate, 3 very inappropriate

14. Performance problem: 2 very appropriate, 5 appropriate, 1 no opinion, 0 very inappropriate

15. Performance problem: 1 very appropriate, 2 appropriate, 2 no opinion, 4 inappropriate, 0 very inappropriate

16. Performance problem: 1 very appropriate, 2 appropriate, 4 no opinion, 2 inappropriate, 0 very inappropriate

17. Cause analysis: 2 very appropriate, 3 appropriate, 3 no opinion, 0 inappropriate, 1 very inappropriate

18. Cause analysis: 2 very appropriate, 8 appropriate, 0 no opinion, 0 inappropriate, 0 very inappropriate

19. Cause analysis: 3 very appropriate, 4 appropriate, 1 no opinion, 1 inappropriate, 0 very inappropriate

20. Performance gap: 2 very appropriate, 6 appropriate, 1 no opinion, 0 inappropriate, 0 very inappropriate

21. Performance gap: 5 very appropriate, 4 appropriate, 0 no opinion, 0 inappropriate, 0 very inappropriate

22. Performance gap: 4 very appropriate, 3 appropriate, 0 no opinion, 2 inappropriate, 0 very inappropriate

23. Problem statement: 1 very appropriate, 6 appropriate, 2 no opinion, 0 inappropriate, 0 very inappropriate

24. Problem statement: 0 very appropriate, 2 appropriate, 3 no opinion, 4 inappropriate, 0 very inappropriate

25. Problem statement: 1 very appropriate, 4 appropriate, 2 no opinion, 2 inappropriate, 0 very inappropriate


References

Bruner, J. (1996). The culture of education. Cambridge, MA: Harvard University Press.

Chandler, D. (2002). Semiotics for beginners. London, England: Routledge.

Clark, R. C., & Lyons, C. (2010). Graphics for learning: Proven guidelines for planning, designing, and evaluating visuals in training materials. New York, NY: Wiley.

Clark, R. C., & Mayer, R. E. (2011). E-learning and the science of instruction: Proven guidelines for consumers and designers of multimedia learning. New York, NY: Wiley.

Duchastel, P. C. (1980). Research on illustrations in text: Issues and perspectives. ECTJ, 28, 283-287.

Evans, M. A., Watson, C., & Willows, D. M. (1987). A naturalistic inquiry into illustrations in instructional textbooks. In H. A. Houghton & D. M. Willows (Eds.), The psychology of illustration (pp. 86-115). New York, NY: Springer.

Erstad, O., & Wertsch, J. V. (2008). Tales of mediation: Narrative and digital media as cultural tools. In K. Lundby (Ed.), Digital storytelling, mediatized stories (pp. 21-40). New York, NY: Peter Lang.

Flick, J. (2013). Graphics. In R. C. Richey (Ed.), Encyclopedia of terminology (pp. 133-134). New York, NY: Springer.

Gannon-Cook, R. (2011). Semiotics, social and cultural landmarks in elearning. In G. Kurubacak & T. Volkan Yuzer (Eds.), Handbook of research on transformative online education and liberation: Models for social equality (pp. 352-369). Hershey, PA: IGI.

Gannon-Cook, R. (2012). Restoring washed out bridges so eLearners arrive at online course destinations successfully. Creative Education, 3, 557-564. doi:10.4236/ce.2012.34083

Goodman, N. (1978). Ways of worldmaking. Indianapolis, IN: Hackett.

Hlynka, D. (2013). Semiotics. In R. C. Richey, (Ed.), Encyclopedia of terminology. New York, NY: Springer.

Kallinikos, J., Aaltonen, A., & Marton, A. (2010). A theory of digital objects. First Monday, 15, 6-7.

Keppell, M. (2004). Legitimate participation? Instructional designer-subject matter expert interactions in communities of practice. In Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications, Lugano, Switzerland (pp. 3611-3618).

Lee, S. H., & Boling, E. (1999). Screen design guidelines for motivation in interactive multimedia instruction: A survey and framework for designers. Educational Technology, 39(3), 19-26.

Levie, W. H., & Lentz, R. (1982). Effects of text illustrations: A review of research. ECTJ, 30(4), 195-232.

Levi-Strauss, C. (1969). The elementary structures of kinship. Boston, MA: Beacon Press.

Means, B., Bakia, M., & Murphy, R. (2014). Learning online: What research tells us about whether, when and how. New York, NY: Routledge.

Mitchell, W. J. T. (2005). What do pictures want? Chicago, IL: University of Chicago Press.

Mounce, H. O. (1997). Two pragmatisms. New York, NY: Routledge.

Oswald, L. (2008). Marketing semiotics. London, England: Oxford University Press.

Peirce, C. S. (1931). Collected papers of Charles Sanders Peirce (C. Hartshorne, P. Weiss, & A. W. Burks, Eds.). Cambridge, MA: Belknap Press of Harvard University Press.

Reed, Y. (2012). Critical pedagogic analysis: An alternative to user feedback for (re)designing distance learning materials for language teachers? English Teaching: Practice and Critique, 11, 60-81.

Richey, R. C. (2013). Designer-decision-making research. In R. C. Richey (Ed.), Encyclopedia of terminology (p. 81). New York, NY: Springer.

Richey, R. C., Fields, D. C., & Foxon, M. (2001). Instructional design competencies: The standards. Syracuse, NY: ERIC.

Richey, R. C., & Klein, J. D. (2007). Design and development research: Methods, strategies, and issues. New York, NY: Erlbaum.

Richey, R., Klein, J. D., & Tracey, M. W. (2011). The instructional design knowledge base: Theory, research, and practice. New York, NY: Taylor & Francis.

Stanney, K. M. (2003). Metaphor for navigation and wayfinding within interactive systems. Ergonomics, 46, 1-3.

Tochon, F. (2013). Signs and symbols in education. Blue Mounds, WI: Deep University Press.

Vai, M., & Sosulski, K. (2011). Essentials of online course design: A standards-based guide. Florence, KY: Routledge.

Zaltman, G., & Zaltman, L. (2009). Marketing metaphoria. New York, NY: Erlbaum.

Kathryn Ley

University of Houston-Clear Lake

Ruth Gannon-Cook

DePaul University

* Kathryn Ley, Associate Professor, Instructional Technology, University of Houston-Clear Lake, 2700 Bay Area Blvd., Box 217, Houston, Texas 77058. E-mail:

Rank Order With Sums for 25 Graphics (n = 9)

Graphic                           Sum   Mean    Rank Order

PerformanceImprovementModels13    -20   -2.00        1
PerformanceImprovementModels12     -6    -.60        2
ProblemStatement24                 -4    -.40        3
PerformanceProblem15                0     .00        4
Intervention6                       2     .20        5
PerformanceAnalysis8                2     .20        6
HPT3                                4     .40        7
PerformanceProblem16                4     .40        8
ProblemStatement25                  8     .80        9
CauseAnalysis17                    10    1.00       10
Intervention5                      12    1.20       11
PerformanceAnalysis10              12    1.20       12
Intervention4                      14    1.40       13
PerformanceAnalysis9               14    1.40       14
PerformanceProblem14               16    1.60       15
ProblemStatement23                 16    1.60       16
CauseAnalysis19                    18    1.80       17
PerformanceGap22                   18    1.80       18
PerformanceImprovementModels11     20    2.00       19
PerformanceGap20                   20    2.00       20
CauseAnalysis18                    22    2.20       21
HPT2                               24    2.40       22
Intervention7                      24    2.40       23
HPT1                               24    2.67       24
PerformanceGap21                   28    2.80       25
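For most graphics, the Sum column above can be reproduced by weighting the learner rating tallies reported earlier. The sketch below assumes signed weights of +4, +2, 0, -2, and -4 for the five response categories; this weighting is inferred from the published sums and is not stated in the article.

```python
# Hypothetical reconstruction of the Sum column from the learner rating
# tallies. The weight values below are an assumption inferred from the
# published sums, not a scheme described by the authors.
WEIGHTS = {
    "very appropriate": 4,
    "appropriate": 2,
    "no opinion": 0,
    "inappropriate": -2,
    "very inappropriate": -4,
}

def weighted_sum(counts):
    """counts: dict mapping a rating category to its number of raters."""
    return sum(WEIGHTS[category] * n for category, n in counts.items())

# Graphic 13 (Performance improvement models): 1 appropriate,
# 5 inappropriate, 3 very inappropriate
g13 = {"appropriate": 1, "inappropriate": 5, "very inappropriate": 3}

# Graphic 21 (Performance gap): 5 very appropriate, 4 appropriate
g21 = {"very appropriate": 5, "appropriate": 4}

print(weighted_sum(g13))  # -20, matching PerformanceImprovementModels13
print(weighted_sum(g21))  # 28, matching PerformanceGap21
```

Under this assumed weighting, the most negatively rated graphics (sums of -20 and -6) fall at the top of the rank order and the most positively rated (sum of 28) at the bottom, consistent with the table.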
COPYRIGHT 2014 Information Age Publishing, Inc.
No portion of this article can be reproduced without the express written permission from the copyright holder.

Article Details
Author: Ley, Kathryn; Gannon-Cook, Ruth
Publication: Quarterly Review of Distance Education
Article Type: Report
Date: Jun 22, 2014