
Working engineers' multimedia type preferences.

1. Introduction

The present research concerned improving the instructional design of continuing engineering education (CEE) courses. This study reports the results of the second phase of a two-phase study of working engineers' multimedia preferences. The two phases analysed engineers' preferences for four multimedia categories and two media types (in parentheses) in each category: (1) verbal (text vs. labels + narration); (2) static graphics (drawing vs. photograph); (3) dynamic non-interactive graphics (animation vs. video); and (4) dynamic interactive graphics (simulated virtual reality [VR] vs. real VR). The eight types were compared in pairs projected on dual screens. Comparing eight types, two at a time where the position (left or right) does not matter, would have required 28 pairwise combinations. This was too many for a single testing session as it would likely have resulted in participant fatigue that could have biased the results. To avoid this potential research design weakness, the study was conducted in two phases.

In the first phase, participants only made the following four within-multimedia-category comparisons: text vs. labels + narration, drawing vs. photograph, animation vs. video and simulated VR vs. real VR (Baukal and Ausburn 2016a). That phase focused on comparing preferences for multimedia within categories, with no comparisons between categories. The results showed the working engineers sampled strongly preferred text over labels + narration in the verbal category and mildly preferred drawings over photographs in the static graphics category, animations over video in the dynamic non-interactive graphics category, and simulated (graphic images) VR over real (photographic) VR in the dynamic interactive graphics category.

There is no current consensus regarding what multimedia type is best for learning (Baukal and Ausburn 2016a). In the absence of established guidelines for choosing media formats for learning, learner preferences should be an important factor as higher interest may motivate the learner and increase learning, depending on learner characteristics and learning context. The potential role of learner preference in designing instructional materials and the need for in-depth research in this arena provided the impetus for the present study.

The general purpose of the second phase of the study reported here was to compare the four preferred multimedia types from within each category as determined in Phase 1 against each other. Phase 2 then was a between-category preference comparison of text, drawing, animation and simulated VR.

The specific purpose of this study was to describe (a) multimedia type preferences of working engineers and (b) relationships among multimedia type preferences to selected demographic variables. The following research questions guided this study:

(1) What are the preferences of engineers among the multimedia types of verbal, static graphics, non-interactive dynamic graphics and interactive dynamic graphics?

(2) What are the relationships of engineers' multimedia type preferences to the demographic variables of gender, age, total engineering work experience, total engineering work experience at a particular company, management level, highest engineering degree, specialty for highest engineering degree and professional engineering license?

2. Methodology

This study was a cross-sectional survey in which data were collected at a single point in time. A portion of the total population of engineers working at a particular company at a specific location was investigated. The company manufactures equipment used in the power, chemical and petrochemical industries and employs primarily mechanical engineers. The survey technique used here was directly administered questionnaires given to a group of participants assembled for a particular purpose at a certain place.

The general population of interest for this research was working engineers. The specific target population was engineers working at a medium-sized U.S. manufacturing company (USMC); at the time of the survey, the company's Human Resources (HR) department identified 174 such engineers. This included those who either had an engineering degree or had a job title containing the word engineer. Nine of the 110 participants (8.2%) in Phase 2 who had engineer in their title did not have engineering degrees. However, they were performing the functions of an engineer and were included in the results.

According to the HR records of USMC, the gender distribution for the population of 174 engineers included 152 males and 22 females. The management distribution included 133 individual contributors, 31 middle managers (supervised at least 1 person) and 10 senior managers (Vice Presidents, Chief Financial Officer and President). Other information collected for the study, including age, total years of work experience and years worked at USMC, was unavailable from HR and was collected via the survey.

In the Phase 1 and 2 surveys for this study, there were 86 (49.4% participation) and 110 (63.2% participation) participants, respectively. There were a total of 118 participants (67.8%) who took at least one of the two surveys. Only eight of those who took the Phase 1 survey did not take the Phase 2 survey. The results of the Phase 1 study (Baukal and Ausburn 2016a) were used to select the preferred multimedia types used in Phase 2. Phase 1 identified the media formats preferred within media categories; then these preferred formats were compared in Phase 2 to identify preferences between media categories.

Two surveys were developed: one for Phase 1 (Baukal and Ausburn 2016a) which is shown in Table 1 and one for Phase 2 which is shown in Table 2. Both surveys had two versions. The only difference between the two versions was the relative screen positions of the multimedia types, which were reversed. For example, for Multimedia Pair 5 in Phase 2, text was shown on the left screen in version C and on the right screen in version D. This was done to minimise the effects of diffusion of information between participants and to minimise potential bias based on where participants sat in the room.

The between-category multimedia pairs in Phase 2 were selected after the highest preferences from Phase 1 were determined, which were text, drawing, animation and simulated VR (see Baukal and Ausburn 2016a). Six pairs were used in Phase 2 (Multimedia Pairs 5 through 10), representing all possible combinations of the four items identified in Phase 1 taken two at a time where the relative position (left or right) does not matter. Each multimedia type appeared three times in the six pair comparisons. The order of the pairs and which type appeared on the left and right screens were randomised to minimise the effects of any biases participants might have had towards the left or right side or towards the order of presentation, such as preferring what they saw first or last. Additional details of the four multimedia types compared in this study are given elsewhere (Baukal and Ausburn 2016a).
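As a minimal sketch (not part of the study materials), generating the six between-category pairs and randomising their presentation order and screen sides could look like the following in Python; the names and the shuffling scheme are illustrative assumptions.

```python
import itertools
import random

# The four multimedia types carried forward from Phase 1.
types = ["text", "drawing", "animation", "simulated VR"]

# All combinations of four items taken two at a time, ignoring position: C(4,2) = 6 pairs.
pairs = list(itertools.combinations(types, 2))
assert len(pairs) == 6          # Multimedia Pairs 5 through 10
# Each type appears in exactly three of the six pairs.

# Randomise the order of the pairs and which type appears on each screen (version C);
# version D simply swaps the left and right screens.
random.shuffle(pairs)
survey_c = [pair if random.random() < 0.5 else pair[::-1] for pair in pairs]
survey_d = [(right, left) for (left, right) in survey_c]

for number, (left, right) in enumerate(survey_c, start=5):
    print(f"Pair {number}: left screen = {left}, right screen = {right}")
```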

Five different methods were used to measure multimedia preferences. The first was a relative preference between the multimedia type shown on the left screen and that shown on the right. This technique, known as stimulus presentation methodology, is commonly used in multi-image research (Salomon and Clark 1977). A seven-point Likert-type scale was used: strongly prefer left slide, moderately prefer left slide, slightly prefer left slide, no preference, slightly prefer right slide, moderately prefer right slide, strongly prefer right slide. The second comparison method was rating each media type on a scale of 0 ('hate it') to 100 ('love it'). The third comparison method was ranking, where 1 indicated 'like most' and 2 indicated 'like least'. Since each multimedia type appeared three times in the six pairwise comparisons, each type received three relative preferences, three ratings and three rankings.

Two other comparison methods were used: overall rating and overall ranking. For both of these, all four types were shown simultaneously with two on the left screen and two on the right screen, as shown in Figure 1. In the overall rating method, participants rated each type on the same scale used on pairwise comparisons from 0 ('hate it') to 100 ('love it'). In the overall ranking method, participants ranked types from 1 ('like most') to 4 ('like least').

One reason for using multiple comparative methods was to reinforce the results, in case a participant entered conflicting data during a comparison, which occurred in a few cases. In the preference method, no numbers were used. In the rating method, a higher number indicated a higher preference, while in the ranking method a lower number indicated a higher preference. Another reason for using multiple methods was to provide a measure of preference strength. For example, rating one type with a zero and the other type in a comparison with a 100 would show the participant strongly preferred one over the other. A third reason for multiple methods was to force a preference using the ranking, even if a participant selected no relative preference and rated two multimedia types equally. Taken collectively, the five comparison techniques offered a more accurate preference assessment than any of the techniques individually.

Phase 2 was conducted as soon as possible after Phase 1. There was a total of 110 participants who took the Phase 2 survey, with 56 (51% of the total) taking the C version and 54 (49% of the total) taking the D version. Of the 110 participants, 78 (71% of the total) took the Phase 1 survey and 32 (29% of the total) did not.

Data were collected over about a three-week period as shown in Table 3, although more than half of the responses (58, or 53% of the total) were received two days after the conclusion of Phase 1. There was a one-week gap in collecting data because the room with two screens was not available during that time. Participants were asked to sign in as they entered the room so the researchers would know who had participated and would not ask anyone to complete the same survey again. However, the surveys themselves were completely anonymous.

Participants sat wherever there was a package of information and were not directed to sit in any particular location. The very last row in the room was not used in any of the sessions as it was somewhat far away from the projection screens. Depending on how many people accepted the invitation for a given session, information packages were arranged as close to the centre of the room as possible. For smaller sessions, the first row was not used to try to give participants a better angle to view both screens.

The lead researcher stood behind the podium in the front centre of the room, where both computers containing the PowerPoint slides were located. A different presentation was loaded onto each computer so the proper multimedia types would appear on the left and right screens (as viewed by the participants), depending on which survey version was being shown. Slides were manually advanced by the researcher based on visual observation of the participants to determine when everyone had completed the given survey section.

Before showing the overall comparison slide, participants were told verbally that they would next see two multimedia types on the left screen and two on the right. Going from the participants' left to right, these were identified as left slide, left image; left slide, right image; right slide, left image; and right slide, right image. Those labels also appeared at the bottom of the screens. The order of the multimedia types was partially random and partially ordered: because two types were static and two were dynamic, one of each was placed on each screen to avoid having two moving images on the same screen, which might have been distracting and biased the results.

Typical Phase 2 survey completion time was approximately 15 min. Participants were instructed to give completed surveys to assistants helping the lead researcher, so researchers did not know who turned in which survey, thus protecting participants' anonymity.

Frequency data collected were categorical. Some were ordinal (e.g. age range, total years of work experience range and total years of work experience at USMC range). Others were nominal (e.g. gender and degree). The appropriate statistical analysis for these data types is chi-square which compares distributions of observed frequencies with expected frequencies (Wickens 1989). These inferential statistical tests are non-parametric (Sheskin 2011). Three assumptions must be met for a valid interpretation of chi-square (Ary et al. 2006):

* Observations must be independent where subjects in each sample were randomly and independently selected.

* Categories are mutually exclusive where each observation appears in one and only one category in the table.

* Observations are measured as frequencies.

With the exception of random subject selection, these assumptions were met here. The statistical significance level used here was p < 0.05.
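As an illustration of the chi-square analysis described above (the counts below are invented, not study data), such a test of observed versus expected frequencies can be run with SciPy:

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = gender (male, female),
# columns = most-preferred multimedia type (text, drawing, animation, simulated VR).
observed = [
    [10, 30, 25, 28],   # male   (invented counts)
    [ 2,  5,  4,  6],   # female (invented counts)
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
if p < 0.05:            # significance level used in this study
    print("Statistically significant association between the two variables.")
else:
    print("No statistically significant association.")
```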

Missing data are a common problem in quantitative educational research studies (Peugh and Enders 2004). This study included two types of missing data: item non-response and participant attrition (Schlomer, Bauman, and Card 2010). In the former, participants complete a survey but do not give a response to every item. In the latter case, some participants are lost in a longitudinal or multiple session study (here not all participants completed surveys in both phases).

There is no consensus regarding how much missing data, which also includes unintelligible data (Schafer and Graham 2002), is problematic. Schlomer, Bauman, and Card (2010) suggest it is not the percentage of missing data that is important but rather the statistical power adequacy of the resulting data-set and the missing data pattern. It is also important to distinguish between missing data patterns and missing data mechanisms (Enders 2010). There are various approaches for handling missing data. In this study, pairwise deletion was used to maximise available data.
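The pairwise-deletion approach can be sketched as follows (hypothetical data; pandas is assumed only for illustration): each statistic is computed from whichever responses exist for that item, rather than discarding every participant who skipped any item.

```python
import numpy as np
import pandas as pd

# Hypothetical ratings for one multimedia pair; NaN marks item non-response.
ratings = pd.DataFrame({
    "pair5_text":    [80, np.nan, 40, 60],
    "pair5_drawing": [90, 70, np.nan, 85],
})

# Pairwise deletion: each column's mean uses only the participants who
# answered that item (pandas skips NaN values by default).
print(ratings.mean())

# Listwise deletion, by contrast, would first drop any participant with
# a missing value anywhere, discarding otherwise usable responses.
print(ratings.dropna().mean())
```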

For each pairwise media type preference comparison, there was a relative preference, a rating and a ranking. In theory, if a participant preferred one type over another, this should have been reflected in all three pairwise comparison methods. However, there were some discrepancies where a participant preferred one type in a pair with two of the comparison methods, but the other type with the third comparison method. To preserve the integrity of all data, no comparison data were eliminated even if there was such a discrepancy.

To score pairwise preference comparisons, preference points were assigned to the preferred multimedia type based on the Likert-type scale preference strength. If it was strongly preferred, moderately preferred or slightly preferred, the media type received three, two or one points, respectively. The other type in the pair received no points. If there was no preference given between two types, neither received any points. A mean value was calculated for each type in a pairwise comparison. Means were used, instead of total points, because there were different numbers of comparisons for each pair as some participants did not give a response for some pairs (i.e. missing data). Then, the three mean values from the three pairwise comparisons were summed. These total values were then normalised by dividing them by the highest total mean. Therefore, the most preferred type had a normalised score of one and the other three types were between zero and one. The higher the normalised mean, the more preferred the type.
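A minimal sketch of the preference-point scoring just described, using hypothetical responses (the 3/2/1 point values and the treatment of missing and 'no preference' answers follow the text above):

```python
POINTS = {"strong": 3, "moderate": 2, "slight": 1}

def mean_preference_points(responses, type_a, type_b):
    """Mean preference points per type for one pairwise comparison.

    Each response is (preferred_type, strength), ("none", None) for no
    preference, or None for a missing answer (excluded from the mean).
    """
    totals = {type_a: 0.0, type_b: 0.0}
    answered = 0
    for response in responses:
        if response is None:          # item non-response: pairwise deletion
            continue
        answered += 1
        preferred, strength = response
        if preferred != "none":       # no preference: neither type scores
            totals[preferred] += POINTS[strength]
    return {t: v / answered for t, v in totals.items()} if answered else totals

# Hypothetical responses for the text vs. drawing pair.
responses = [("drawing", "strong"), ("text", "slight"), ("none", None), None]
print(mean_preference_points(responses, "text", "drawing"))
# {'text': 0.33..., 'drawing': 1.0} -- means over the three answered responses
```

The three per-pair means for each type would then be summed and divided by the highest total, as described above, so the most preferred type receives a normalised score of one.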

In the pairwise ratings procedure, ratings were normalised by dividing individual ratings by the total amount of rating points for that pair. For example, if one type in a pair was given a rating of 50 and the other type in the pair was given a rating of 100, the first type would have a normalised rating of 50/(50 + 100) = 0.33 and the other type a normalised rating of 100/(50 + 100) = 0.67. The mean was calculated using the normalised ratings for each pair, rather than summing them, again because there were different numbers of ratings as some participants did not rate every pair. The three mean values for the three pairwise comparisons were added together to get a total pairwise rating. These values were then normalised by dividing them by the highest total mean. The highest rated multimedia type had a normalised score of one and the other three types were between zero and one. The higher the mean, the more preferred the type.
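The worked example from the text (ratings of 50 and 100) can be expressed directly; treating a pair in which both ratings are zero as carrying no relative-preference information is an assumption of this sketch, not something stated in the text.

```python
def normalise_pair_ratings(rating_a, rating_b):
    """Normalise two 0-100 ratings within a pair so that they sum to one."""
    total = rating_a + rating_b
    if total == 0:
        return None, None   # both rated zero: no relative preference recoverable (assumption)
    return rating_a / total, rating_b / total

# Worked example from the text: 50 vs. 100 becomes 0.33 vs. 0.67.
print(normalise_pair_ratings(50, 100))
```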

A similar procedure was used to compare pairwise rankings. A relative ranking was calculated for each pairwise comparison. No normalisation of the raw scores was required as all participants used the same ranking method (either a one or a two). A mean ranking was determined for each type in each pair because not every participant ranked every pair. The three mean values from the three pairwise comparisons were added together to get a total pairwise ranking. These values were then normalised by dividing the lowest total mean pairwise ranking (i.e. that of the highest ranked multimedia type) by each total mean. That inverted the ranking scores so they could be directly compared with the relative preference and rating scores. The highest ranked multimedia type had a normalised score of one and the other three were between zero and one. The higher the normalised ranking score, the more preferred the type.
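A sketch of the inversion step for pairwise rankings, with invented totals (lower raw rankings indicate stronger preference; dividing the lowest total by each total flips the scale so that higher normalised scores mean more preferred):

```python
def normalised_ranking_scores(total_mean_rankings):
    """Invert total mean pairwise rankings so that higher = more preferred."""
    best = min(total_mean_rankings.values())    # lowest total mean = highest-ranked type
    return {t: best / total for t, total in total_mean_rankings.items()}

# Hypothetical total mean pairwise rankings for the four Phase 2 types.
totals = {"text": 5.4, "drawing": 3.6, "animation": 4.1, "simulated VR": 3.8}
print(normalised_ranking_scores(totals))
# The lowest total (drawing here) maps to 1.0; the other types fall between zero and one.
```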

In the overall comparison of all four Phase 2 multimedia types, ratings were normalised by dividing each participant's individual ratings by their total rating points. Then a mean rating was determined for each type. The four overall mean ratings were then normalised by dividing each by the highest overall mean rating. The overall highest rated multimedia type had an overall normalised mean rating of one and the other three had values between zero and one. The higher the normalised overall mean rating, the more preferred the type.

For the overall ranking comparison, a mean ranking was determined for each type. Again, no normalisation of raw scores was required as the same ranking method was used by all participants (one = highest through four = lowest). The lower the mean ranking, the more preferred the type. These rankings were then converted to scores that were directly comparable to the other comparison methods, where higher values meant more preferred. The rankings of one, two, three and four were converted to normalised rankings of 1.00, 0.75, 0.50 and 0.25, respectively, where uniform spacing between rankings was assumed. Means were calculated for each type and normalised by dividing each by the highest mean value (i.e. the most preferred multimedia type). The most preferred type had a normalised value of one and the other three had normalised values between zero and one.
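The uniform-spacing conversion of overall rankings can be sketched with invented responses; the participant data below are illustrative only.

```python
# Overall rankings: 1 = like most ... 4 = like least, converted with uniform spacing.
RANK_TO_SCORE = {1: 1.00, 2: 0.75, 3: 0.50, 4: 0.25}

def normalised_overall_ranking(rankings_per_participant):
    """Convert each participant's 1-4 rankings to scores, average per type,
    then divide by the highest mean so the most preferred type scores one."""
    types = rankings_per_participant[0].keys()
    means = {t: sum(RANK_TO_SCORE[p[t]] for p in rankings_per_participant)
                / len(rankings_per_participant) for t in types}
    top = max(means.values())
    return {t: m / top for t, m in means.items()}

# Two hypothetical participants' overall rankings of the four types.
participants = [
    {"text": 4, "drawing": 1, "animation": 3, "simulated VR": 2},
    {"text": 4, "drawing": 2, "animation": 3, "simulated VR": 1},
]
print(normalised_overall_ranking(participants))
```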

After normalisation, all five preference scoring methods had values between zero and one, where the higher the mean the more preferred the multimedia type. Then, mean values for all five methods of comparing the Phase 2 media types were calculated, rather than using summations, because there were different numbers of participants providing responses for each method. Using mean values, rather than weighted means, assumed no method of comparison was superior or inferior to another since there is no empirical or theoretical rationale favouring any of the methods used.
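Combining the five normalised scores with an unweighted mean can be sketched as below; the numbers are invented purely to illustrate the calculation and are not the study's results.

```python
from statistics import mean

def combined_preference(scores_by_method):
    """Unweighted mean of normalised scores across comparison methods,
    skipping any method in which a type has no score (missing data)."""
    types = {t for method_scores in scores_by_method.values() for t in method_scores}
    return {t: mean(m[t] for m in scores_by_method.values() if t in m) for t in types}

# Invented normalised scores (0-1) from the five comparison methods.
scores = {
    "pairwise preference": {"text": 0.30, "drawing": 1.00, "animation": 0.90, "simulated VR": 0.98},
    "pairwise rating":     {"text": 0.55, "drawing": 0.98, "animation": 0.95, "simulated VR": 1.00},
    "pairwise ranking":    {"text": 0.45, "drawing": 1.00, "animation": 0.92, "simulated VR": 0.99},
    "overall rating":      {"text": 0.60, "drawing": 0.97, "animation": 0.93, "simulated VR": 1.00},
    "overall ranking":     {"text": 0.40, "drawing": 1.00, "animation": 0.88, "simulated VR": 0.96},
}
print(combined_preference(scores))
```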

3. Results

Multimedia Pairs 5 through 10 compared the four multimedia categories to each other using the three pairwise comparison methods of relative preference, rating and ranking, plus the two overall comparison methods in which all four categories were rated and ranked together. Figure 2 shows the normalised results of the three pairwise and two overall comparison methods for all four multimedia types. These data indicate the drawing and simulated VR types were approximately equally preferred, animation was only slightly less preferred and text was not preferred compared to the other three types. The engineers clearly indicated a preference for graphics over text, which answered the first research question in this study.

The second research question concerned the relationships between demographic data and multimedia preferences. Figure 3 shows the mean normalised preference score as a function of gender for the four multimedia types. There were no statistically significant differences in preferences by gender.

Figure 4 shows mean normalised preference scores as a function of the participants' age. ANOVA data indicate statistically significant differences (p = 0.046) in the mean values for the simulated VR by age.

Figure 5 shows participants' multimedia preferences related to their total engineering work experience. The means were not statistically significantly different.

Figure 6 shows participants' multimedia preferences related to their total engineering work experience at USMC. The means were not statistically significantly different.

Participants' management level at USMC and their multimedia preferences are shown in Figure 7. The means were not statistically significantly different.

The participants' highest engineering degree was related to their multimedia preferences as shown in Figure 8. The means were not statistically significantly different.

Figure 9 shows multimedia preferences related to the participants' highest engineering degree specialty. The means were not statistically significantly different.

Participants' Professional Engineering (PE) license status compared to their multimedia preferences is shown in Figure 10. There was one statistically significant difference (p = 0.012): simulated VR preferences differed between those who did and those who did not have a PE license.

4. Discussion

To summarise the findings on the relationship between multimedia type preferences and demographics, there were generally no statistically significant differences by gender, age, total engineering work experience, engineering work experience at USMC, management level, highest engineering degree or specialty of the highest engineering degree. The only statistically significant differences were in simulated VR preference, by age and by engineering license. The oldest age group (>65) had the strongest preference for simulated VR, while an intermediate age group (36-45) had the weakest. Those without a license more strongly preferred simulated VR than those with a license.

Dynamic and static media formats were evaluated by engineers in this study, and some of the findings failed to support the assumption that dynamic media are generally preferred. One example of multimedia with the potential to get the attention of learners is animations (Kirby 2008). However, they also have drawbacks as learning tools that should be noted before incorporating them into instruction (Betrancourt 2005). Moreno (2005) warned instructional designers about focusing too much on state-of-the-art technologies without considering how they relate to cognitive theory. This is a potential problem with attractive dynamic multimedia such as animation and VR. For example, there is a tendency to design distance learning materials based more on media technology than on sound instructional design principles (Carr and Carr 2000). The present research showed the most advanced and most dynamic technology, VR, was not strongly preferred over other less advanced technologies such as static graphics.

Instructional designers often assume dynamic graphics such as animations and VR are preferred when motion is involved in a learning context. However, using static images that communicate motion can be as, and sometimes more, effective than using animations (Clark 2005). This study's participants slightly preferred static graphics to both non-interactive and interactive dynamic graphics.

Another practical finding of this study is that the working engineers sampled here strongly preferred graphical multimedia over text, which is consistent with their strong visual cognitive style (Baukal and Ausburn 2016b). Among the graphical multimedia types studied here, there was not a strong preference for any particular type, but all were strongly preferred over text.

There were some potentially confounding variables in this study. The labels remained on the screen for the verbal and static graphics multimedia types, but did not remain on the screen for the non-interactive and interactive dynamic graphics. One participant wrote of the comparison between the labels + text and the animation, 'Can they both be on the same slide so the definitions are always visible and highlighted when selected?' Another participant made comments on the comparison between the drawing and the simulated VR which the participant slightly preferred. Of the drawing the participant wrote, 'I like how this has all parts pointed @ and labeled at one time.' Of the simulated VR the participant wrote, 'Disadvantage here is there is no slide with all parts labeled and identified and defined at once'.

Another potentially confounding variable was the lower quality of the VR simulations compared to the other multimedia types because of software limitations. This was especially noticeable when the zooming feature was used. Having simulations with at least comparable quality to the other multimedia types may have changed the results, especially since the simulated VR was only slightly less preferred than the drawing.

A further potentially confounding variable was learner interactivity. If these multimedia types were used, for example, in an online learning context, the learners themselves would control the speed of displaying materials by advancing content at their own pace. In this study, materials were advanced at a predetermined pace (approximately one minute for all types). This was done purposely to remove pace as a variable, but it meant participants did not have any individual content control and did not actually interact with any of the graphics, which removed an inherent feature of VR.

5. Conclusions

Given the limited scope of this study, conclusions must be drawn with caution. However, several conclusions may be posited. First, it can be proposed that working engineers may prefer more graphical multimedia to more textual multimedia.

The findings do appear to merit the conclusion that multimedia preferences for working engineers are generally independent of demographics. No conclusions can be drawn regarding engineers' multimedia preferences compared to those of the general population or to other occupations, as no data regarding these comparisons were sought.

6. Recommendations

6.1. Instructional design

The range of multimedia preferences demonstrated by engineers in the study leads to a recommendation to use variety in designing engineering instructional materials. As with most things in life, too much of any one thing is often not optimal or even desirable. Clark (1989) recommended using a variety of instructional media as there is no one type better than others. Jensen (2008) recommended changing media frequently. One type of multimedia may deliver all of the content for a topic or may interact with other multimedia, where the key is to synchronise the design to enhance learning (Sidhu 2010). Silber (2010) recommended using only illustrative and not gratuitous (e.g. cartoons) visuals. However, Clark (2011) noted that not all visuals are equally effective for learning, so they should be selected based on the features of the visual, the content and goal of the lesson, and the characteristics of the learner. Clark (2015) strongly recommended using graphics that are appropriate to the prior knowledge of the learner; visuals that are too complicated for the learner can reduce rather than enhance learning.

A second reason for variety in designing instructional media is multiple representations can help students develop deeper understanding (Ainsworth 1999), which is particularly important for significant and challenging subjects. In engineering education, it is common to show an equation (a more abstract type of multimedia) along with a graph (a more concrete type of multimedia) to show how variables in an equation are related. While strict duplication of multimedia types should be avoided (e.g. drawing and photograph that are essentially the same), concepts can be reinforced using different types of complementary multimedia. Using multiple types for the same topic will also appeal to a wider range of student learning preferences.

The failure in this study to give participants the control that should be present in interactive dynamic media such as VR was considered a confounding variable that may have biased the findings. Giving learners more control can enhance learning. For example, passively watching a video or animation play through may not be as effective for learning as interacting with it (Cherrett et al. 2009). Non-interactive dynamic graphics can be divided into shorter segments to give learners time to absorb information before revealing succeeding segments.

6.2. Future research

Many things were not studied here which should be considered in future research. Clark (1989) listed five types of technical training content: procedures, concepts, factual information, processes and principles. The present study investigated only one particular type of subject matter--factual information. Future studies should investigate multimedia preferences for the other four technical content types.

The particular topic studied here, components of a specific technology, did not include any motion. The dynamic media hypothesis states dynamic graphics may be superior to static graphics for viewing topics that incorporate motion. Working engineers' preferences may be different for subject matter with motion or movement (e.g. pistons moving in an engine), where dynamic graphics may be more strongly preferred than static graphics.

The effects of colour compared to black-and-white were not considered in this study because the subject matter selected here was essentially black-and-white. Other studies could compare engineers' preferences between black-and-white vs. colour for other subject matter types.

Dimensionality was not considered here. While the subject matter selected was three-dimensional, only side views were shown, which made the static graphics (drawing and photograph) effectively two-dimensional. This was done deliberately to avoid showing the top and bottom of the burner, which would have revealed some important intellectual property for this technology. It might be assumed engineers would prefer three-dimensional over two-dimensional graphics, but other previous assumptions, such as the assumed preference for dynamic over static graphics, were not supported based on this study, so the effect of dimensionality should be investigated.

Much more work remains to be done studying working engineers in other industries, in other parts of the U.S., and in other parts of the world with different languages and cultures. This study only concerned engineers working at a medium-sized Midwestern U.S. manufacturing company in the combustion industry. It is possible that engineers working in other industries such as automotive, aerospace and academia may have different preferences compared to those found here. Cultural differences in other locations may also impact preferences.

Future studies should investigate participants with demographics under-represented here. For example, there were relatively few participants over 65 or under 26 years old, and there were not many participants who had between 11 and 20 years of total work experience, were senior managers, had a Ph.D. or were civil/structural engineers.

One study limitation was the quality of the VR object movies, which were limited in image resolution and in the number of images that could be woven together because of software constraints. The lower resolution was particularly evident when zooming into the image, and the limited number of images made the object movie choppy when the image was rotated. These limitations should be eliminated in future versions of object-movie software. A related issue is that participants did not have their own computers to manipulate the VR simulations. It would be useful to repeat some of the comparisons in this study with improved VR movies containing more images at higher resolution, and with each participant having their own computer to manipulate the images.

It would be very useful in future studies to compare learning with actual objects to learning with virtual objects. This is important because it is not normally feasible, for example, for distant learners to use actual objects, but it is possible for them to use virtual objects. It would be important to know whether virtual objects are as effective as actual objects. In some instances, such as when objects are very large or very small, it may actually be preferable to use virtual objects instead of actual objects.

Cueing, also referred to as signaling, helps guide learners to essential information to be learned, emphasises organisation, highlights relations and can reduce cognitive load to enhance learning (Mayer 2005). Research has shown that using colour or coarse movement (referred to as inherent content cues) in animations for cueing is more effective than using artificial cues, such as arrows, that are not part of the content being studied (De Koning et al. 2009). Arrows are a common cueing device (Clark 2005) and were deliberately used here to make all of the multimedia considered as informationally equivalent as possible and to minimise colour cueing as a potentially confounding variable. Future studies could consider working engineers' preferences for multimedia with inherent vs. artificial cues.

Another potentially important piece of data that could be collected in future studies is to ask participants what side of the room they sat on during the survey. This could then be compared to their preferences to see if there are any biases towards picking multimedia displayed either on the same side as the participant or on the other side.

Future research should include some qualitative studies to collect more information on participants' thoughts and opinions. This could provide some explanations for why participants prefer certain types of multimedia. Here, some participants offered unsolicited written comments. Qualitative feedback might also identify other aspects of instructional design that could be important. For example, one participant wrote, 'I like to put the mouse over a part and have it tell me what is it.' This is useful feedback that could help improve instructional design. A qualitative study, such as a focus group, might provide other similarly useful information.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes on contributors

Charles E Baukal, Jr is the Director of the John Zink Institute, which is part of John Zink Hamworthy Combustion in Tulsa, Oklahoma, U.S.A., where he has been since 1998. He has a PhD in Mechanical Engineering, an EdD, a Professional Engineering license from the state of Pennsylvania, and is an adjunct instructor at Oral Roberts University and the University of Tulsa. He has over 35 years of industrial experience and over 30 years of teaching experience. He has authored over 150 publications, including authoring/editing 13 books on industrial combustion, and is an inventor on 11 U.S. patents.

Lynna J Ausburn holds a PhD in Educational Media and Technology from the University of Oklahoma. She is a professor emerita of Occupational Education at Oklahoma State University, has extensive international education experience, and has received several awards for teaching and research excellence. Her professional contributions include more than 50 published articles, 100 presentations and three monographs.

References

Ainsworth, S. 1999. "The Functions of Multiple Representations." Computers & Education 33 (2-3): 131-152.

Ary, D., L. C. Jacobs, A. Razavieh, and C. Sorensen. 2006. Introduction to Research in Education. Belmont, CA: Thomson Wadsworth.

Baukal, C., and L. Ausburn. 2016a. "Multimedia Category Preferences of Working Engineers." European Journal of Engineering Education 41 (5): 482-503.

Baukal, C., and L. Ausburn. 2016b. "Verbal-visual Preferences of Working Engineers." European Journal of Engineering Education 41 (6): 660-677.

Betrancourt, M. 2005. "The Animation and Interactivity Principles in Multimedia Learning." In The Cambridge Handbook of Multimedia Learning, edited by R. E. Mayer, 287-296. Cambridge: Cambridge University Press.

Carr, C. S., and A. M. Carr. 2000. "Instructional Design in Distance Education (IDDE): A Web-based Performance Support System for Educators and Designers." Quarterly Review of Distance Education 1 (4): 317-325.

Cherrett, T., G. Wills, J. Price, S. Maynard, and I. E. Dror. 2009. "Making Training More Cognitively Effective: Making Videos Interactive." British Journal of Educational Technology 40 (6): 1124-1134.

Clark, R. C. 1989. Developing Technical Training: A Structured Approach for the Development of Classroom and Computer-based Instructional Materials. Reading, MA: Addison-Wesley.

Clark, R. C. 2005. "Multimedia Learning in E-courses." In The Cambridge Handbook of Multimedia Learning, edited by R. E. Mayer, 589-616. Cambridge: Cambridge University Press.

Clark, R. C. 2011. Graphics for Learning: Proven Guidelines for Planning, Designing, and Evaluating Visuals in Training Materials. 2nd ed. San Francisco, CA: Pfeiffer.

Clark, R. C. 2015. Evidence-based Training Methods: A Guide for Training Professionals. 2nd ed. Alexandria, VA: ATD Press.

De Koning, B. B., H. K. Tabbers, R. M. Rikers, and F. Paas. 2009. "Towards a Framework for Attention Cueing in Instructional Animations: Guidelines for Research and Design." Educational Psychology Review 21: 113-140.

Enders, C. K. 2010. Applied Missing Data Analysis. New York: Guilford.

Jensen, E. 2008. Brain-based Learning: The New Paradigm of Teaching. Thousand Oaks, CA: Corwin.

Kirby, J. R. 2008. "Mental Representations, Cognitive Strategies, and Individual Differences in Learning with Animation: Commentary on Sections One and Two." In Learning with Animation: Research Implications for Design, edited by R. Lowe and W. Schnotz, 165-180. Cambridge: Cambridge University Press.

Mayer, R. E. 2005. "Principles for Reducing Extraneous Processing in Multimedia Learning: Coherence, Signaling, Redundancy, Spatial Contiguity, and Temporal Contiguity Principles." In The Cambridge Handbook of Multimedia Learning, edited by R. E. Mayer, 183-200. Cambridge: Cambridge University Press.

Moreno, R. 2005. "Instructional Technology: Promise and Pitfalls." In Technology-based Education: Bringing Researchers and Practitioners Together, edited by L. Pytlikzillig, M. Bodvarsson, and R. Bruning, 1-19. Greenwich, CT: Information Age Publishing.

Peugh, J. L., and C. K. Enders. 2004. "Missing Data in Educational Research: A Review of Reporting Practices and Suggestions for Improvement." Review of Educational Research 74 (4): 525-556.

Salomon, G., and R. E. Clark. 1977. "Reexamining the Methodology of Research on Media and Technology in Education." Review of Educational Research 47 (1): 99-120.

Schafer, J. L., and J. W. Graham. 2002. "Missing Data: Our View of the State of the Art." Psychological Methods 7 (2): 147-177.

Schlomer, G. L., S. Bauman, and N. A. Card. 2010. "Best Practices for Missing Data Management in Counseling Psychology." Journal of Counseling Psychology 57 (1): 1-10.

Sheskin, D. J. 2011. Handbook of Parametric and Nonparametric Statistical Procedures. Boca Raton, FL: CRC Press.

Sidhu, M. S. 2010. Technology-assisted Problem Solving for Engineering Education: Interactive Multimedia Applications. Hershey, PA: Engineering Science Reference.

Silber, K. H. 2010. "A Principle-based Model of Instructional Design." In Handbook of Improving Performance in the Workplace: Volumes 1-3, edited by K. H. Silber and W. R. Foshay, 23-52. San Francisco, CA: Pfeiffer.

Wickens, T. D. 1989. Multiway Contingency Tables Analysis for the Social Sciences. Hillsdale, NJ: Lawrence Erlbaum.

https://doi.org/10.1080/22054952.2017.1392225

Charles E. Baukal Jr. (a) and Lynna J. Ausburn (b)

(a) John Zink Institute, John Zink Hamworthy Combustion, Tulsa, OK, USA; (b) Workforce and Adult Education, Oklahoma State University, Stillwater, OK, USA

ARTICLE HISTORY

Received 30 May 2017

Accepted 8 October 2017

Caption: Figure 1. Final comparison with text on the left side of the left screen, simulated VR on the right side of the left screen, animation on the left side of the right screen and drawing on the right side of the right screen.

Caption: Figure 2. Results of normalised comparison methods for the verbal (text), static graphic (drawing), non-interactive dynamic graphic (animation) and interactive dynamic graphic (simulated VR) types.

Caption: Figure 3. Multimedia preferences by gender.

Caption: Figure 4. Multimedia preferences by age range.

Caption: Figure 5. Multimedia preferences by total engineering work experience.

Caption: Figure 6. Multimedia preferences by total engineering work experience at USMC.

Caption: Figure 7. Multimedia preferences by management level at USMC.

Caption: Figure 8. Multimedia preferences by highest engineering degree.

Caption: Figure 9. Multimedia preferences by specialty for highest engineering degree.

Caption: Figure 10. Multimedia preferences by professional engineering license.
Table 1. Phase 1 survey versions.

Survey A
Multimedia pair    Left slide      Right slide
1                  Labels          Description
2                  Drawing         Photo
3                  Video           Animation
4                  Simulated VR    Real VR

Survey B
Multimedia pair    Left slide      Right slide
1                  Description     Labels
2                  Photo           Drawing
3                  Animation       Video
4                  Real VR         Simulated VR

Table 2. Phase 2 survey versions.

Survey C
Multimedia pair    Left slide      Right slide
5                  Text            Drawing
6                  Animation       Simulated VR
7                  Drawing         Animation
8                  Simulated VR    Text
9                  Text            Animation
10                 Drawing         Simulated VR

Survey D
Multimedia pair    Left slide      Right slide
5                  Drawing         Text
6                  Simulated VR    Animation
7                  Animation       Drawing
8                  Text            Simulated VR
9                  Animation       Text
10                 Simulated VR    Drawing

Table 3. Phase 2 survey sessions.

Date                     Time       Version   # Participants
07/31/2013 (Wednesday)   10:30 AM   C         30
                         1:30 PM    D         28   (day total: 58)
08/02/2013 (Friday)      11:00 AM   D         5
                         1:30 PM    D         13   (day total: 18)
08/12/2013 (Monday)      9:30 AM    C         18
08/16/2013 (Friday)      1:30 PM    D         8
08/19/2013 (Monday)      3:30 PM    C         8
Total                                         110