
Spatial visualization learning in engineering: traditional methods vs. a web-based tool.


In a previous study (Melgosa, Ramos, & Banos, in press), an interactive tutorial (IT) was presented that functions as a web-based spatial visualization ability (SVA) learning support tool for students of engineering graphics and as an administrative tool for teachers to track student learning. It is an open-access Internet application that students can use to complete exercises and exams drawn at random from its database or as directed by the teacher. The IT-SVA is designed to communicate knowledge of SVA more evenly and to address weaknesses in SVA among students on undergraduate engineering degrees. Continuous assessment and follow-up of student learning achievements on engineering courses are requirements of the guidelines for the European Higher Education Area.

The IT-SVA serves three types of users (student, teacher, and administrator) and has four main parts: (1) a content management system (CMS); (2) an intelligent tutorial system (ITS); (3) a web-based tool for exercise management and correction; and (4) a database. ILMAGE_SV (Interactive Learning Manager for Graphic Engineering: Spatial Vision) is available at the following URL and is one such IT-SVA. The main difference between this tool and other tutorial learning systems is its interactivity, owing to the use of 3D viewers that allow computerized simulations of the mental process of rotating 3D models.

The use of ILMAGE_SV, together with the methodology proposed by Perez and Serrano (1998) (summarised in section Justification), helps train students to develop spatial visualization skills and brings with it four important features:

1. A 3D model that may be manipulated on-screen as a tool to assist with mental visualization and rotation exercises of particular difficulty to students.

2. Preliminary tests that may be taken to identify the level at which the student experiences spatial visualization difficulties and that recommend a starting level in the ILMAGE_SV application to the student.

3. Instant access to self-evaluation records for the student.

4. Automated tracking of student learning for the teacher.

The main differences between our application and some (but not all) of the other web applications are that it has manipulable 3D objects, calculates and displays the results and grades of the finished exercises, tracks learning achievements on a database, and has been used and/or validated in comparative studies. We highlight that the possibility of manipulating interactive 3D models through a 3D viewer was the feature that our students rated most highly. We therefore consider that these Web3D tools should incorporate a 3D viewer.

It is evident that engineering students should possess a very well developed SVA so that they can communicate effectively and progress as professional engineers (Ferguson, Ball, McDaniel, & Anderson, 2008; Brus, Zhoa, & Jessop, 2004; Sorby, 2001), as one of their core competences is the drafting and execution of projects. McGee (1979) defined spatial visualization as "the ability to mentally rotate, twist, or invert pictorially presented visual stimuli." Various experiments to improve this ability have been performed, some with significant results, such as those in which students held real models in their hands to understand what they had to visualize (Ferguson et al., 2008; Sorby, 2001). We propose the use of virtual on-screen models that may be manipulated by means of 3D viewers. Their manipulation helps students with the process of mental rotation of objects. This idea is also shared by Piburn, Reynolds, McAuliffe, Leedy, and Birk (2005), who argued that the manipulation of 3D computer objects in a virtual terrain could significantly improve students' spatial abilities.

ILMAGE_SV has been validated with engineering graphics students at the University of Burgos (Spain) by means of a user-satisfaction survey over two academic years. The survey results suggest that more than 89% of our students would use this application again to improve their spatial visualization skills and that 98% would recommend it to other people with an interest in SVA.

However, we also think we should confirm whether ILMAGE_SV is an effective learning tool for graphic engineering students, which is precisely our aim in this study.

Figure 1 shows one of the various types of exercises stored on the database, in which the student can manipulate the 3D model with the eDrawings viewer that appears on the left. The set problem is described in the middle and, in the column on the right, the student types in the answers in the form of letters that correspond to the numbered surfaces shown in the problem statement. In the example, the first two answers are correct, the third is incorrect, and no further answers have been given. When the button to view the results is activated, the screen shown in Figure 1 displays the number of correct and incorrect responses and unanswered questions, as well as the grade obtained in the exercise. At the same time, the database stores the results so that the teacher may use the ILMAGE_SV-based student evaluation.

We previously demonstrated that the design, structure, resources, and ease of use of the ILMAGE_SV web application are of sufficient quality for it to be used as a teaching tool. We also confirmed that the content of the application, in terms of its capacity to motivate, its usefulness, and its appropriateness, is suitable to help students develop spatial visualization skills (Melgosa et al., in press). However, the effectiveness of ILMAGE_SV as a tool that improves the learning of spatial visualization was not evaluated. Thus, in this article we wish to confirm whether ILMAGE_SV is an effective tool for learning spatial visualization, whether its use improves learning in comparison with traditional methods, whether it is better for students who experience greater difficulties with spatial visualization, and whether it is better for students with no previous knowledge of technical drawing.


Universities in Europe are currently introducing university studies according to the criteria of the Bologna plan, which focus on student learning rather than on a teacher-led approach. This situation, together with the emergence of CAD, has led many universities to consider the total or partial elimination of descriptive geometry from their engineering course curricula. More and more researchers acknowledge the need to search for alternatives to descriptive geometry for developing spatial visualization skills on engineering courses.

Various researchers (Hake, 2002; Sorby, 2000; Miller & Bertoline, 1991; Scribner & Anderson, 2005) believe that the most critical component of graphic representational skills is the ability to apply spatial visualization to objects. Contero, Company, Naya, and Saorin (2006) and Connolly (2009) described the continued importance of spatial reasoning in the engineering curricula and stressed that it should be taught using sketching as well as modern technology. From their perspective, emphasis should be placed on orthographic projection skills, mental imagery of 3D objects, and the use of web-based drills, interactive multimedia, and tutorials.

Web3D has great potential for a wide range of educational applications that require visual understanding (Strong & Smith, 2001; Web3D Consortium, 2010), although research into educational techniques and systems associated with its use is very limited. The 3D models (which can be animated) allow students to understand aspects of the taught subject that are not clearly seen in an image, such as features hidden inside the models. It should be noted that more emphasis has been placed on the visualization of 3D objects, because 3D immediately enhances the learning process (Liarokapis et al., 2004). Students can explore 3D visualization through interactive Web3D teaching material, which helps them to understand it more effectively.

According to Ault and Samuel (2010), visualization skills are a strong indicator of success in engineering, science, and a variety of other careers. Studies have shown that training can enhance visualization skills in a relatively short time. Researchers generally agree that spatial visualization skills are enhanced by 3D drawing and by manipulating physical 3D objects. Traditionally, engineering graphics courses have included a strong component of descriptive geometry and sketching. Since the advent of computer-aided design systems in the early 1980s, nearly all US engineering schools have ended their courses in descriptive geometry and most schools have also dropped manual drafting and sketching from their introductory graphics courses. Universities around the world have followed suit. As a result, there has been a noticeable decline in the visualization skills of engineering students.

There are now numerous studies that apply CAD, the manipulation of 3D models in virtual environments, or the completion of different types of tests such as those used by Sorby, Wysocki, and Baartmans (2003), in order to improve spatial visualization. For example:

* Tsutsumi, Schrocker, Stachel, and Weiss (2005), using the mental cutting test (MCT), found statistically significant evidence that students following descriptive geometry courses obtained better visualization results than those taking design courses.

* Sorby and Baartmans (1996), in the conclusions to a course on 3D visualization skills, stated that the completion of computer-based exercises (in this case, with I-DEAS software) improved learning among students with greater learning difficulties.

* Rafi, Anuar, Samad, Hayati, and Mazlan (2005) also confirmed, with statistically significant results, that the use of Web3D applications as pedagogic tools with 3D models in VRML format improved the development of spatial skills.

* Ferguson et al. (2008), using the PSVT:ROT (Purdue Spatial Visualization Test: Rotations), found significant improvements in the spatial visualization of students who had real models in their hands to perform the different practical exercises, as opposed to students who did not physically have these models in front of them.

Even so, other investigations suggest otherwise:

* Leopold (2005), Sorby and Gorkska (1998), Sorby (2000), Yue (2001), and Godfrey (1999) all suggested that 3D CAD experience in itself does not appear to enhance visualization skills.

* Koch (2006) found no significant differences between drawing and solid modelling design methods used for solving technical problems (p = .752).

* Konukseven (2010) compared traditional teaching methods on an engineering graphics course with a 3D-based teaching method in VRML format, which engineering students could visualize on the Web. His results suggested both methods were of the same quality and showed that an effective way to keep students active is to use creative visualization and to offer them opportunities to interact with the courseware.

In our opinion, the use of 3D viewers that assist with the mental manipulation of models and with thinking in 3D produces significant results, whereas the use of CAD software merely for 3D modelling brings no significant improvement in visualization skills.

We have to advance in the use of self-correcting test-type exercises in educational virtual environments (EVEs), in order to improve the development of visualization skills. These EVEs should have tools that measure student learning, and Web3D models as a resource to support learning and to improve the spatial visualization skills formerly acquired in descriptive geometry classes. The most recent literature suggests that, among the methods that use test exercises to improve visualization ability, the Enhancing Visualization Skills-Improving Options and Success (EnViSIONS) project is the most extensively used.

Teachers on the EnViSIONS project (Veurink et al., 2009) developed spatial visualization ability on an introductory course in 10 study modules: isometric sketching; orthographic projection (normal surfaces, orthographic projection); inclined and single curved surfaces (not included in workbook/software [Sorby et al., 2003]); flat patterns; rotation of objects about a single axis; rotation of objects about two or more axes; object reflections and symmetry; cutting planes and cross sections; surfaces and solids of revolution; and combining solids. Following their use at six universities, these materials have been shown to improve visualization significantly. According to Veurink, Hamlin, and Sorby (2008), a minimum of three study modules is necessary to obtain significant results: isometrics, orthographics, and rotations about a single axis.

We applied the methodology suggested by Perez and Serrano (1998) in ILMAGE_SV to develop spatial visualization skills, divided into six levels: (1) identification and recognition, (2) understanding, (3) application, (4) analysis, (5) synthesis, and (6) evaluation. We included the first four levels in ILMAGE_SV and addressed the fifth and sixth levels outside it, in the classroom. Perez and Serrano (1998) demonstrated a statistically significant effect: 72.6% of students who followed this methodology performed above a pre-established average level, as against 47.4% of students who performed above that same level before the introduction of this training. In relation to the assessment of spatial visualization parameters, this methodology occupies an intermediate position between the three components proposed by Veurink and the ten modules of the EnViSIONS project.

Since 2003, a group of teachers at the University of Burgos has been firmly convinced that the spatial visualization skills of students enrolling on engineering courses are weaker than in previous years, an opinion that is also expressed by other authors (Veurink et al., 2009; Sorby, 2009; Duff & Kellis, 2009; Brus & Boyle, 2009; Knott & Kampe, 2009). One cause is that technical drawing is an optional subject in studies leading up to pre-university exams (the Bachillerato, in Spain). This, coupled with the shorter length of study plans, the emergence of CAD, and an increasing number of exercises done in 3D, encourages us to look for alternatives to traditional teaching to improve spatial visualization.

Design of the investigation

The graphic expression study module at our university is divided into four parts: technical drawing, geometry, descriptive geometry and CAD. On this study module, although spatial visualization improves over the academic year, the fundamental concepts--types of projections, principal views, and minimum necessary views--are learnt in the first part (technical drawing) and at the start of the course. This investigation focuses on those concepts and related practical sessions, which amount to three classroom hours of taught classes, to which we should add the private study time of each student.

The investigation applied two different methods, in order to test their efficiency and to confirm which was better for the development of spatial visualization. One group of students used a traditional method (T), in which the teacher explained the concepts in class and the students completed three practical drawing exercises. The other group used the experimental method (E), in which the students had access, in a computer room, to the ILMAGE_SV software package described above. This application has three videos covering the same theoretical concepts as in group T, and the teacher set three practical exercises with 52 test-type questions, as shown in Figure 1. The students worked their way progressively through the graded application levels: identification of surfaces, main views, developments, contact surfaces between blocks, and minimum necessary views.

We used the following indicators in the design phase of the pilot project, in order to form homogeneous groups of students, to ascertain which students held previous knowledge and to compare the visualization results between groups:

* A test of spatial ability, at the start of the study module to classify students by levels of spatial visualization.

* The previous studies completed by students, to ascertain whether the newly enrolled students have previous knowledge of technical drawing.

* An exam at the end of the pilot project, divided into the variable "vision" (spatial visualization ability) and the variable "sketch" (sketching skills).

* An exam upon completion of the course with a final visualization test, in order to confirm the influence of the study module's other components on the development of spatial visualization in groups E and T.

Initially, the pilot project was designed solely for the 2009-2010 academic year, but the results were not significant because some of the samples were small. The pilot project was therefore repeated with the same variables in the 2010-2011 academic year. After the first academic year, we agreed to introduce "increase in knowledge" as a new variable (MRT_INCR), which required a new test to assess the improvements in SVA. This test was performed both before and after the pilot project in the second academic year.

Measurement instruments

At the start of the course, the Differential Aptitude Test: Space Relations (DAT-SR) enabled us to classify students by levels of aptitude for spatial visualization. This test, with a total of 60 questions, requires mental manipulation of objects in three-dimensional space. It involves identifying the one object among four options that, when unfolded, corresponds to the surface development of a die (Figure 2).

Before and after the experiment, in order to establish the increase in spatial visualization knowledge, we used the Mental Rotations Test (MRT), which measures mental rotation ability. The test consists of 20 questions. Each item involves identifying the two figures, from among four possibilities, that correspond to the drawing on the left but have been rotated through some angle (Figure 3).

At the end of the pilot project, in order to ascertain the extent of student learning achievements with the two methods (E and T) and which method was the more successful, students took an exam in two parts to test the vision variable and the sketch variable. One example of this exam, in Figure 4, consists of three exercises. The first involves hand sketching a view of a part seen from point A, for which the part must be mentally rotated on its axes; the second consists of sketching the development of the part; and the third, of sketching the minimum necessary views of the part. The intention is to assess spatial visualization skills (mental rotations and spatial visualization).

Lastly, a new test, the "final vision test" variable, was introduced at the end of the course, which was intended to track student progress in the development of spatial visualization throughout the course in groups E and T. This test was obtained from a selection of 24 test questions taken from the DAT-SR (Bennett, Seashore, & Wesman, 1997), MRT (Vandenberg & Kuse, 1978), PSVT-ROT (Guay, 1977), and Lappan (1981) tests.

The Lappan test consists of 32 items, each of which shows the projection of a construction formed by cubes and only one out of five options has to be selected, which corresponds to another of its projections. The PSVT-ROT test comprises 30 items, where one of five possible responses corresponds to a piece that has been rotated 90 or 180 degrees about one of its axes.

Pilot project

In the first week of the course, all students on the University of Burgos industrial engineering course, in each of the two academic years 2009-2010 and 2010-2011, took the DAT-SR test to ascertain their previous knowledge of spatial visualization. The percentile of the DAT-SR test was used with a coupling or pairing technique to distribute students between groups E and T. In this distribution of students between the two groups, the different control variables (gender, first-time enrolment, previous studies--pre-university or vocational training--and previous knowledge of technical drawing) were taken into account, beginning with gender, so that these variables would be balanced in the two groups. The pilot project over the two academic years involved 112 students in group T and 132 students in group E. The t-test statistically confirmed that these groups were equivalent with regard to spatial visualization, measured by the DAT-SR test.
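The coupling or pairing technique described above can be sketched in a few lines of Python. This is only an illustrative assumption of how such an assignment might be coded, not the study's actual procedure; all student records are invented, and only one control variable (gender) is shown.

```python
# Hypothetical sketch of percentile-based pairing: students are ranked
# by DAT-SR percentile within each stratum (here, gender) and assigned
# alternately to groups T and E so both groups stay balanced.
# All student records below are invented for illustration.

def pair_into_groups(students):
    """students: list of (name, gender, dat_sr_percentile) tuples."""
    groups = {"T": [], "E": []}
    order = ["T", "E"]
    idx = 0  # global toggle keeps group sizes balanced across strata
    for gender in sorted({s[1] for s in students}):
        # Rank each stratum from highest to lowest percentile, so that
        # adjacent (paired) students land in different groups.
        stratum = sorted((s for s in students if s[1] == gender),
                         key=lambda s: s[2], reverse=True)
        for student in stratum:
            groups[order[idx % 2]].append(student)
            idx += 1
    return groups

students = [("s1", "F", 90), ("s2", "F", 70), ("s3", "F", 55),
            ("s4", "M", 85), ("s5", "M", 60), ("s6", "M", 40)]
groups = pair_into_groups(students)  # three students in each group
```

Alternating down a ranked list is one simple way to keep the percentile distribution similar in both groups; the real assignment also balanced first-time enrolment, previous studies, and previous knowledge of technical drawing.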

During this first week of the 2010-2011 academic year, students from both groups also took the MRT test and repeated it after the pilot project, so as to validate any increase in their knowledge of spatial visualization.

The pilot project lasted for three hours over a period of three weeks, at the start of the course when students were learning about the basic concepts of spatial visualization.

Group T completed three practical sessions with manual exercises: identification of surfaces in views, sketching the development of a part, principal views, minimum necessary views, and views when rotating a part around given axes (example in Figure 5).

Students from group E completed three practical sessions with ILMAGE_SV, comprising 52 test-type exercises of progressive difficulty that involved the same concepts as those for group T, but without sketching (Figure 6).

Following the pilot project, in the fifth week, a visualization exam was given to all students without the use of help tools and completed by hand sketching. In this exam, the variables "vision" and "sketch" were independently assessed (Section "Measurement instruments" and Figure 4).

The end of the pilot project coincided with the final study module exam, in which a test with the points described at the end of the section "Measurement instruments" was added to the end-of-year exam.

Results of the pilot project and their analysis

In the second academic year of the pilot project, students completed the MRT pretest (a 40-point test) and repeated the MRT as a post-test after the pilot project ended. Eighty-five students participated in the pretest and 106 students in the post-test. After the pilot project, it was confirmed that 76 students had responded to both tests, of whom 45 belonged to the experimental group and 31 to the traditional group; their results initially confirmed that both methods (T and E) were appropriate for improving spatial visualization.

We present the analysis of five studies from among those performed after the pilot project had ended.

We studied whether the average of the post-test minus pretest differences was significantly greater than zero, in groups T and E, to validate improvements with the two methods. To do so, Student's t-test was used. For this test, the MRT variable should follow a normal distribution in both groups, which is in fact the case, as we can confirm from Table 1.

The "One-Sample T Test" run on the SPSS programme (Table 2) confirmed that the difference variable of post-test minus pretest (MRT_INCR) in the two groups was significantly greater than zero. Because [alpha] < 0.05, the null hypothesis that the improvement is zero is rejected for both methods, and we can accept that both methods significantly improve the learning of spatial visualization.

In Table 2, we see that the improvement in the traditional group, at a confidence level of 95%, was at an average value of between 2 and 5.6 points, while the improvement in the experimental group stood at an average value of between 3.4 and 7.2 points. It may also be seen that the average increase is 3.8 points in group T and 5.3 points in group E.
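This first analysis can be sketched in a few lines with SciPy; the study itself used SPSS, and the score differences below are invented for illustration only.

```python
# Sketch of the one-sample t-test: is the mean post-test minus pretest
# MRT difference greater than zero? (Invented data, SciPy, not SPSS.)
import numpy as np
from scipy import stats

# Hypothetical post-test minus pretest differences for one group.
mrt_incr = np.array([4, 6, 2, 7, 3, 5, 1, 8, 4, 6])

# One-sided test: the null hypothesis says the mean improvement is zero.
t_stat, p_value = stats.ttest_1samp(mrt_incr, popmean=0.0,
                                    alternative="greater")
improved = p_value < 0.05  # True here: the sketched group improved
```

The same test would be run once per group (T and E), as in Table 2.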

A second analysis of these two pilot projects consisted of testing whether the improvement was greater in group E than in group T. Student's t-test was therefore completed for independent samples, comparing the average of the increased-visualization variable MRT_INCR for groups E and T. The level of significance was 0.178 in the t-test for equality of means of the variable MRT_INCR (Table 3) and, because [alpha] > 0.05, we cannot reject the null hypothesis and therefore have to accept that the averages are equal, despite the average increase being 1.7 points greater in group E than in group T.
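This second analysis can be sketched similarly, again with invented data and SciPy rather than SPSS:

```python
# Sketch of the independent-samples t-test comparing the mean MRT
# increase of the experimental (E) and traditional (T) groups.
# All scores below are invented for illustration.
import numpy as np
from scipy import stats

incr_e = np.array([5, 7, 4, 8, 6, 5, 9, 3])   # hypothetical group E
incr_t = np.array([4, 6, 3, 7, 5, 2, 8, 4])   # hypothetical group T

t_stat, p_value = stats.ttest_ind(incr_e, incr_t, equal_var=True)
# As in the study, p > 0.05 means the difference in means cannot be
# declared significant even though group E's mean is higher.
means_differ = p_value < 0.05  # False for this sketched data
```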

We can make a global comparison of the variable "exam" and individual comparisons of the variables "vision" and "sketch" in groups E and T. To do so, the averages were once again compared for the two groups using the SPSS t-test for independent samples. We also confirmed the equality of the visualization variable with the Mann-Whitney U non-parametric statistical test, which is used for non-normally distributed variables.

The t-test in Table 4, regardless of whether equality of variances is assumed, gives a significance value greater than 0.05. The difference in means is not significant in the three tests ("exam," "vision," and "sketch"), so we must assume the equality of means. In this table, we can see that the difference of means takes both negative and positive values within the confidence interval.

In Table 5, the level of significance of the vision variable with the Mann-Whitney U non-parametric test was 0.8 > 0.05. The difference of means for the vision variable is not significant, so we assume equality.
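The non-parametric check can be sketched as follows, with invented scores and SciPy in place of SPSS:

```python
# Sketch of the Mann-Whitney U test used to compare the vision
# variable when normality cannot be assumed. (Invented data.)
import numpy as np
from scipy import stats

vision_e = np.array([6.0, 7.5, 5.0, 8.0, 6.5, 7.0])  # hypothetical E
vision_t = np.array([6.5, 7.0, 5.5, 7.5, 6.0, 8.5])  # hypothetical T

u_stat, p_value = stats.mannwhitneyu(vision_e, vision_t,
                                     alternative="two-sided")
# A large p-value, as in Table 5, means we cannot reject equality
# of the two distributions.
equal_distributions = p_value >= 0.05  # True for this sketched data
```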

We may also affirm that the introductory component of ILMAGE_SV neither significantly improves nor worsens learning of the variable "vision." Furthermore, the two groups obtained similar results for sketching, as confirmed in Table 4, despite the exercises in group E being completed in ILMAGE_SV while those in group T were completed manually by hand sketching. Finally, Table 4 confirms that the score for sketching is slightly, but not significantly, higher in group T than in group E. We may therefore say that students who used ILMAGE_SV in the pilot project (group E) did not obtain significantly worse results than students from the traditional group (group T) in the assessment of sketching, despite having dedicated fewer hours to drawing during the pilot project.

At the end of the academic year, all students completed a new visualization test called the "final vision test" as part of the final exam in the study module. The results of the differences in means, when comparing the two groups, E and T, were not significant, so we may only assume that the two methods were equally valid.

In a third study, we identified the students for whom ILMAGE_SV was more appropriate. We divided the students who had used ILMAGE_SV into three categories of spatial ability: greater ability (1), average ability (2), and lower ability (3). The division into categories was made at the 35/65 percentile values of the students who had completed the MRT pretest, and the study variable was MRT_INCR.

Figure 7 shows that the average increase in knowledge in the MRT test is greater for students with weaker abilities than in the other two categories. All three categories were compared to see whether significant differences existed between them. The result of the ANOVA statistical test was F = 3.9, with a significance level of 0.028 < 0.05, which indicates that at least one of the three categories has a different average, but does not indicate which one.
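The one-way ANOVA of this third study can be sketched as follows, with invented data standing in for the real MRT_INCR scores:

```python
# Sketch of a one-way ANOVA comparing the mean MRT increase across
# three spatial-ability categories. (All scores invented.)
import numpy as np
from scipy import stats

incr_high = np.array([1, 2, 0, 3, 1, 2])  # category 1: greater ability
incr_mid = np.array([3, 4, 2, 5, 3, 4])   # category 2: average ability
incr_low = np.array([6, 7, 5, 8, 6, 7])   # category 3: lower ability

f_stat, p_value = stats.f_oneway(incr_high, incr_mid, incr_low)
# p < 0.05 only says that at least one category mean differs; a post
# hoc test such as Tukey's HSD is needed to find which pair differs.
some_mean_differs = p_value < 0.05  # True for this sketched data
```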

Table 6 shows the tests chosen from among various alternatives to ascertain the categories with significant differences: Tukey's honestly significant difference (Tukey HSD) test and the Scheffe test. The Scheffe test is the more conservative. Both tests coincide in that a significant difference only exists between the averages of categories 1 (greater spatial ability) and 3 (lower spatial ability).

This study was repeated, dividing the categories at the 25/75 percentiles, and at least this significant difference between categories 1 and 3 was again obtained. Finally, when the students were divided into two categories of spatial ability at the 50th percentile, significant differences between the two groups were also found.

It may also be seen from Table 6 that the difference of means between categories 1 and 3 always has negative values in the confidence interval. Because the three divisions by categories (percentiles 25/75, 35/65, and 50/50) had no influence on the significance of the results, in our opinion, students with poorer capabilities improve more if they use ILMAGE_SV, since this tool helps them to perform mental rotations of 3D models by simulation on the computer. Students with better capabilities have already developed this capability further and need less help or none at all.

Thus, we can affirm that students with lower spatial ability significantly improved their visualization ability more than students with greater spatial ability, when their learning was supported by studying with ILMAGE_SV.

In a fourth type of data analysis, the variable "vision" from the exam at the end of the pilot project was compared two by two for all students from the two academic years and between the two groups (E and T). These students initially had the same spatial ability in the DAT-SR test, taking into account that each group had been divided into three categories of spatial visualization ability at the 35/65 percentiles: greater ability (1), average ability (2), and lower ability (3). The procedure is similar to the earlier case and also applied the Tukey test (Table 7), which performs multiple comparisons between categories.

The results in Table 7 indicate that neither the differences between students of greater spatial visualization ability in groups E and T, nor those between students of lower ability, are significant. Among students of average ability, the results of groups E and T may likewise be considered equal.

Although the results for greater and lower SVA are not significant, the average values for the vision variable are slightly higher among students with greater abilities in group E than among those in group T. The contrary is found among students of lower ability: the average values of the vision variable are greater in group T than in group E (Figure 8). But, as mentioned earlier, the results are not significant.

If, instead of the vision variable, we study the variable "final vision test," obtained from the results of the students' end-of-year exam, then we should examine the results in Figure 9 and Table 8. Table 8 confirms that the three comparisons have a level of significance of [alpha] > 0.95, which allows us to affirm that the results are equal for methods T and E among students of greater, average, and lower spatial visualization ability.

In Spain, students can access university engineering courses with or without previous knowledge of technical drawing, as it is an optional subject in pre-university teaching. Thus, in a fifth analysis of the data, we studied whether significant relations existed for the vision variable between students from groups E and T with and without previous knowledge of technical drawing. The variance of the vision variable is not equal among the students of the four subgroups, which makes it impossible to apply the comparison using the Tukey HSD statistical test.

When the variances are unequal, the Welch test is used to compare the differences between the averages. Its results were F = 2.27 and [alpha] = 0.085, which were therefore not significant at a significance level of 0.05. However, when we studied the results at a significance level of 0.1, we found significant differences between the subgroups. Moreover, when the variances are not equal, the Tukey test can be replaced by the Games-Howell test. In Table 9, the differences in the vision variable in group T between the students with and without previous knowledge are significant at [alpha] = 0.093, at a confidence level of 90%. We can also see that the difference for the vision variable in group E, between students who have previous knowledge and those who have none, stands at [alpha] = 0.941, so we may say, at a confidence level of 90%, that there were no significant differences between students from group E with and without previous knowledge after the pilot project.
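For two groups, the Welch test can be sketched with SciPy's `ttest_ind` and `equal_var=False`; the study itself compared four subgroups with SPSS, and the scores below are invented. The smaller, tighter "previous knowledge" sample illustrates the unequal-variance situation that motivates Welch's test.

```python
# Sketch of Welch's t-test, used when group variances are unequal.
# (Invented data; SciPy in place of SPSS.)
import numpy as np
from scipy import stats

# Hypothetical vision scores: students without prior technical drawing
# are more spread out than those with prior knowledge.
vision_no_prior = np.array([4.0, 5.5, 3.0, 6.0, 4.5, 2.5, 7.0, 3.5])
vision_prior = np.array([6.5, 7.0, 6.8, 7.2, 6.9, 7.1])

t_stat, p_value = stats.ttest_ind(vision_prior, vision_no_prior,
                                  equal_var=False)  # Welch's t-test
significant_at_10pct = p_value < 0.10  # True for this sketched data
```

The Games-Howell post hoc test mentioned above is not part of SciPy; it is available in some third-party statistics packages.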

Figure 10 and the results in Table 9 indicate that the difference of means in group E, between students with and without previous knowledge of technical drawing, is much smaller than the corresponding difference in group T. One interpretation of these results is that the use of ILMAGE_SV is of greater help in learning spatial visualization for those students who arrive at university with no previous knowledge of technical drawing.


Conclusions

Our main conclusions are as follows:

* We can affirm that both the students who used the traditional method and those who used the experimental method with the ILMAGE_SV application significantly increased their spatial visualization capacity after the pilot project.

* Although we sought to demonstrate that learning among the group E students improved more than among the group T students, the MRT-increase variable indicated that the average increase was statistically equal in both groups, despite being 1.7 points greater in group E than in group T.

We can also affirm that the results of the evaluation of the vision variable at the end of the pilot project were equal in both groups, which coincides with the results of Konukseven (2010). The fact that the general results were equal for both groups, that the use of ILMAGE_SV is asynchronous and adapted to the student's speed of learning, and that its 3D viewer supports the mental rotation of the models means we can recommend the use of this tool for the development of SVA.

* This tool has been shown to be more effective for students with lower SVA, because it aims to develop the preliminary concepts of SVA; it is therefore recommended for students with a lower SVA. Exercises of greater difficulty should be proposed in order to make it more effective for students with higher levels of SVA.

* If we compare the results of the vision exam variable after the pilot project obtained by students of greater, average, and lower ability in groups E and T, respectively, the results are statistically equal. If the results of the "final vision test" variable are compared at the end of the study module, we may affirm that the students of greater, average, and lower spatial ability from groups E and T obtained very similar results. Equality between the SVA categories was therefore maintained over time.

* The results for sketching among students who used ILMAGE_SV (group E) were no worse than those in group T.

* The use of ILMAGE_SV is suitable for students starting their engineering studies at university with no previous knowledge of technical drawing. This tool is therefore recommended for training prior to the start of the course for such students.

We should not forget that this tool covers only the fundamental aspects of SVA, without including geometry, sketching, knowledge of technical drawing standards, and CAD, where SVA may also be developed. This may be one of the reasons why the results of different researchers are contradictory.

The use of ILMAGE_SV is viable despite the initial time spent on its development, because: 1) it can be used with different groups and over several years; 2) it saves evaluation time, because correction is automatic; and 3) its flexibility means that new graphic engineering modules may easily be added. A future line of research could be to determine the number of years of use after which the application becomes cost-effective.

At present, one line of research is to use ILMAGE_SV in study modules on section views and auxiliary views, in which 3D viewers may be used to improve student learning. Another line of research is the adaptation of the ILMAGE_SV structure to other knowledge areas where SVA is necessary.

In conclusion, we can state that a recurrent problem is that approximately 20% of engineering students have experienced difficulties with the development of their spatial visualization abilities (Veurink et al., 2009; Sorby, 2009; Duff & Kellis, 2009; Brus & Boyle, 2009; Knott & Kampe, 2009), and that one method by which students can address this problem is to complete preliminary and complementary courses to develop these spatial abilities (Veurink et al., 2008; Sorby, 2009).


References

Ault, H. K., & Samuel, J. (2010). Assessing and enhancing visualization skills of engineering students in Africa: A comparative study. Engineering Design Graphics Journal, 74(2), 12-20.

Bennett, G. K., Seashore, H. G., & Wesman, A. G. (1997). DAT. Test de Aptitudes Diferenciales [DAT. Differential Aptitude Test]. Publicaciones de Psicologia Aplicada, 79. Madrid, Spain: Tea Ediciones S. A.

Brus, C., Zhoa, L., & Jessop, J. (2004). Visual-spatial ability in first-year engineering students: A useful retention variable? Proceedings of the American Society for Engineering Education Annual Conference and Exposition. Retrieved January 27, 2014, from:

Brus, C. P., & Boyle, L. N. (2009). EnViSIONS at the University of Iowa. Proceedings of the 63rd Annual ASEE/EDGD Mid-Year Conference (pp. 28-33). Retrieved January 27, 2014, from

Connolly, P. E. (2009). Spatial ability improvement and curriculum content. Engineering Design Graphics Journal, 73(1), 1-5.

Contero, M., Company, P., Naya, F., & Saorin, J. L. (2006). Learning support tools for developing spatial abilities in engineering design. International Journal of Engineering Education, 22(3), 470-477.

Duff, J. M., & Kellis, H. B. (2009). EnViSIONS at Red Mountain High School. Proceedings of the 63rd Annual ASEE/EDGD Mid-Year Conference (pp. 20-22). Retrieved January 27, 2014, from

Ferguson, C., Ball, A., McDaniel, W., & Anderson, R. (2008). A comparison of instructional methods for improving the spatial-visualization ability of freshman technology seminar students. Proceedings of the 2008 IAJC-IJME International Conference. Retrieved January 27, 2014, from:

Godfrey, G. S. (1999). Three-dimensional visualization using solid-model methods: A comparative study of engineering and technology students (Unpublished doctoral dissertation). Retrieved from

Guay, R. B. (1977). Purdue spatial visualization test: Rotations. West Lafayette, IN: Purdue Research Foundation.

Hake, R. (2002, August). Relationship of Individual Student Normalized Learning Gains in Mechanics with Gender, High-School Physics, and Pretest Scores on Mathematics and Spatial Visualization. Physics Education Research Conference, Boise, Idaho. Retrieved January 27, 2014, from

Knott, T. W., & Kampe, J. C. M. (2009). EnViSIONS at Virginia Tech. Proceedings of the 63rd Annual ASEE/EDGD Mid-Year Conference (pp. 34-38). Retrieved January 27, 2014, from

Koch, D. S. (2006). The effects of solid modeling and visualization on technical problem solving (Doctoral thesis). Virginia Polytechnic Institute. Blacksburg, Virginia. Retrieved January 27, 2014, from

Konukseven E. I. (2010). Web-based education support tools used for teaching the "engineering graphics" course. Key Engineering Materials, 419-420, 777-780.

Lappan, G. (1981). Middle grades mathematics project, Spatial Visualization Test. Princeton, NJ: Educational Testing Service.

Leopold, C. (2005). Geometry education for developing spatial visualisation abilities of engineering students. The Journal of Polish Society for Geometry and Engineering Graphics, 15, 39-45.

Liarokapis, F., Mourkoussis, N., White, M., Darcy, J., Sifniotis, M., Petridis, P., ... Lister, P. F. (2004). Web3D and augmented reality to support engineering education. World Transactions on Engineering and Technology Education, 3(1), 11-14.

McGee, M. (1979). Human spatial abilities: Sources of sex differences. New York, NY: Praeger.

Melgosa, C., Ramos, B., & Banos, M. E. (in press). Interactive learning management system to develop spatial visualization abilities. Computer Applications in Engineering Education. doi: 10.1002/cae.21590

Miller, C. L., & Bertoline, G. R. (1991). Spatial visualization research and theories: Their importance in the development of an engineering and technical design graphics curriculum model. Engineering Design Graphics Journal, 55(3), 5-14.

Perez, T., & Serrano M. (1998). Ejercicios para el desarrollo de la percepcion espacial [Exercises for the development of spatial perception]. Alicante, Spain: Editorial Club Universitario.

Piburn, M. D., Reynolds, S. J., McAuliffe, C., Leedy, D. E., & Birk, J. P. (2005). The role of visualization from computer-based images. International Journal of Science Education, 27(5), 513-527.

Rafi, A., Khaairul Anuar, S., Samad, A., Hayati, M., & Mazlan, M. (2005). Improving spatial ability using a web-based virtual environment (WbVE). Automation in Construction, 14(6), 707-715.

Scribner, S. A., & Anderson, M. A. (2005). Novice drafters' spatial visualization development: Influence of instructional methods and individual learning styles. Journal of Industrial Teacher Education, 4(2), 38-60.

Sorby, S. A. (2009). Educational research in developing 3D spatial skills for engineering students. International Journal of Science Education, 31(3), 459-480.

Sorby, S. (2000). Spatial abilities and their relationship to effective learning of 3-D solid modeling software. Engineering Design Graphics Journal, 64(3), 30-35.

Sorby, S. (2001). Improving the spatial ability of engineering students: Impact on graphics performance and retention. Engineering Design Graphics Journal, 65(3), 31-36.

Sorby, S., & Baartmans, B. (1996). A course for the development of 3-D spatial visualization skills. Engineering Design Graphics Journal, 60(1), 13-18.

Sorby, S., & Gorska, R. (1998). The effect of various courses and teaching methods on the improvement of spatial ability. Paper presented at the 8th ICEDGDG, Austin, Texas.

Sorby, S., Wysocki, A., & Baartmans, B. (2003). Introduction to 3D Spatial Visualization: An Active Approach [CD-ROM with workbook]. Clifton Park, NY: Thomson Delmar Publishing.

Strong, S., & Smith, R. (2001). Spatial visualization: Fundamentals and trends in engineering graphics. Journal of Industrial Technology, 18(1), 1-6.

Tsutsumi, E., Schrocker, H. P., Stachel, H., & Weiss, G. (2005). Evaluation of students' spatial abilities in Austria and Germany. Journal for Geometry and Graphics, 9(1), 107-117.

Vandenberg, S. G., & Kuse, A. R. (1978). Mental rotations, a group test of three-dimensional spatial visualization. Perceptual and Motor Skills, 47, 599-604.

Veurink N., Hamlin A. J., & Sorby S. (2008, June). EnViSIONS incorporating spatial visualization curriculum across the United States. Conference on Research and Training in Spatial Intelligence, Chicago, Illinois. Retrieved January 27, 2014, from

Veurink N. L, Hamlin, A. J., Kampe, J. C. M., Sorby, S. A., Blasko, D. G., Holliday-Darr, K. A., ... Knott, T. W. (2009). Enhancing visualization skills-improving options and success (EnViSIONS) of engineering and technology students. Engineering Design Graphics Journal, 73(2), 1-17.

Web3D Consortium. (2010). Retrieved from

Yue, J. (2001). Does CAD improve spatial visualization ability? ASEE Annual Conference & Exposition, Albuquerque, New Mexico. Retrieved January 27, 2014, from

Carlos Melgosa Pedrosa, Basilio Ramos Barbero * and Arturo Roman Miguel

Graphic Expression Department, Higher Polytechnic School, University of Burgos, Avda. Cantabria s/n. 09006 Burgos, Spain

* Corresponding author

(Submitted June 21, 2012; Revised February 1, 2013; Accepted April 12, 2013)

Table 1. Tests of normality for MRT increase

Group     Kolmogorov-Smirnov (a)             Shapiro-Wilk

          Statistic    df    Sig.            Statistic    df    Sig.

T         .128         31    .200 *          .956         31    .234
E         .127         28    .200 *          .972         28    .643

Note. a. Lilliefors significance correction. * This is a lower
bound of the true significance.

Table 2. One-sample t test for MRT increase

                                           Test Value = 0

                                                                      95% Confidence
        Mean Difference   Std.        Std. Error           Sig.        Interval of
Group   of MRT_INCR       Deviation   Mean         t   df  (2-tailed)  the Difference
                                                                       Lower    Upper

T       3.806             4.875       .875     4.348   30    .000      2.018    5.594
E       5.321             4.869       .920     5.783   27    .000      3.433    7.209

Table 3. Independent samples for MRT increase

                              Levene's Test for
                              Equality of Variances       t-test for Equality of Means

                                                                                                95% Confidence
                                                               Sig.       Mean      Std. Error  Interval of the
                              F      Sig.   t       df      (2-tailed)  Difference  Difference  Lower    Upper

MRT_    Equal variances       .189   .665   1.363   57         .178       1.741       1.277     -.816    4.298
INCR    assumed
        Equal variances                     1.364   56.519     .178       1.741       1.276     -.815    4.297
        not assumed

Table 4. Independent samples test

                              Levene's Test for
                              Equality of Variances        t-test for Equality of Means

                                                                                                 95% Confidence
                                                                Sig.       Mean      Std. Error  Interval of the
                              F       Sig.   t        df     (2-tailed)  Difference  Difference  Lower    Upper

Exam     Equal variances      .043    .837    -.624   242       .533       -.116       .187      -.484    .251
         assumed
         Equal variances                      -.627   238.59    .532       -.116       .187      -.482    .250
         not assumed
Vision   Equal variances      .513    .475    -.382   242       .703       -.055       .144      -.338    .228
         assumed
         Equal variances                      -.383   238.65    .702       -.055       .143      -.337    .227
         not assumed
Sketch   Equal variances      2.751   .098   -1.060   242       .290       -.061       .058      -.176    .053
         assumed
         Equal variances                     -1.072   241.91    .285       -.061       .057      -.175    .051
         not assumed

Table 5. Test statistics (a)


Mann-Whitney U             7258.500
Wilcoxon W                16036.500
Z                            -.243
Asymp. Sig. (2-tailed)       .808

Note. a. Grouping Variable: Num_group

Table 6. Multiple comparisons

             (I)        (J)        Mean
             cap_esp    cap_esp    Difference    Std.              95% Confidence Interval
Test         35_65      35_65      (I-J)         Error    Sig.     Lower Bound    Upper Bound

Tukey HSD    1          2          -1.530        1.606    .610     -5.428          2.368
                        3          -4.289 *      1.542    .021     -8.033         -0.545
             2          1           1.530        1.606    .610     -2.368          5.428
                        3          -2.759        1.672    .236     -6.816          1.299
             3          1           4.289 *      1.542    .021      0.545          8.033
                        2           2.759        1.671    .236     -1.299          6.817
Scheffe      1          2          -1.530        1.606    .638     -5.601          2.541
                        3          -4.289 *      1.542    .029     -8.199         -0.378
             2          1           1.530        1.606    .638     -2.541          5.601
                        3          -2.759        1.672    .267     -6.997          1.479
             3          1           4.289 *      1.542    .029      0.378          8.199
                        2           2.759        1.672    .267     -1.479          6.997

Note. * The mean difference is significant at the 0.05 level.

Table 7. Multiple comparisons

          (I)       (J)       Mean
          DAT_SR    DAT_SR    Difference    Std.              95% Confidence Interval
          35_65     35_65     (I-J)         Error    Sig.     Lower Bound    Upper Bound

Tukey     E3        T3        -.485         .232     .297     -1.152          .182
HSD       E2        T2        -.0398        .254    1.000      -.768          .689
          E1        T1         .369         .222     .558      -.269         1.006

Table 8. Multiple comparisons

          (I)       (J)       Mean
          DAT_SR    DAT_SR    Difference    Std.              95% Confidence Interval
          35_65     35_65     (I-J)         Error    Sig.     Lower Bound    Upper Bound

Tukey     E3        T3         .096         .390    1.000     -1.029         1.220
HSD       E2        T2        -.022         .429    1.000     -1.258         1.214
          E1        T1         .269         .357     .975      -.759         1.298

Table 9. Multiple comparisons

          (I)        (J)        Mean
          ET_DTNT    ET_DTNT    Difference    Std.               90% Confidence Interval
                                (I-J)         Error     Sig.     Lower Bound    Upper Bound

Games-    E_DT       E_NDT       .126         .222      .941      -.391          .642
Howell               T_DT       -.216         .158      .521      -.582          .149
                     T_NDT       .323         .232      .509      -.219          .866
          E_NDT      E_DT       -.126         .222      .941      -.642          .391
                     T_DT       -.342         .216      .396      -.847          .163
                     T_NDT       .198         .275      .890      -.443          .839
          T_DT       E_DT        .216         .158      .521      -.149          .582
                     E_NDT       .342         .216      .396      -.163          .847
                     T_NDT       .540 *       .228      .093       .008         1.072
          T_NDT      E_DT       -.323         .233      .509      -.866          .219
                     E_NDT      -.198         .275      .890      -.839          .443
                     T_DT       -.540 *       .228      .093     -1.072         -.008

Note. * The mean difference is significant at the 0.1 level.
COPYRIGHT 2014 International Forum of Educational Technology & Society

Article Details
Author: Pedrosa, Carlos Melgosa; Barbero, Basilio Ramos; Miguel, Arturo Roman
Publication: Educational Technology & Society
Article Type: Report
Date: Apr 1, 2014
