An alternative approach to student assessment for engineering-laboratory learning.
Laboratory work is an integral component of engineering education (Feisel and Rosa 2005; Ma and Nickerson 2006; Saniie et al. 2015) and hence of the engineering curriculum (Lindsay and Good 2005). Engineering laboratories not only clarify concepts conveyed through theoretical models but also develop the personal attributes essential for professional practice in students' future engineering careers (Feisel and Rosa 2005). The Engineers Australia accreditation guidelines (2008) likewise require institutions to establish appropriate experimental arrangements so that the expected outcomes of laboratory learning can be met. The guidelines specify 10 distinct learning outcomes for laboratory work, highlighting aspects such as instrumentation, models, experiment, data analysis, design and creativity (Couteur 2009; Engineers Australia 2008; Feisel and Rosa 2005; Hofstein and Lunetta 1982).
Essential practical skills in a laboratory are attained in a systematic order. According to Kolb's cycle, the laboratory learning process mainly involves grasping knowledge before experimentation and transforming that knowledge during experimentation in the laboratory (Abdulwahed and Nagy 2009). Abdulwahed and Nagy assert that knowledge acquisition in the laboratory takes place when students attempt to develop concrete knowledge of a topic on the basis of the abstract concepts acquired prior to commencing the practical work. This process can be viewed as two basic steps. First, students develop a tentative idea of what they are about to learn and observe in the laboratory, as well as what is expected of them during the work they perform, typically through the laboratory briefing sheets provided to them. Second, demonstrations by the laboratory instructor, conducting the experiment themselves and, finally, assessment of the work performed and knowledge gained all help students to transform and realise the concepts that they assimilated before performing the actual laboratory work. Learning through laboratory work is effective when these two steps occur together, and one way to ensure this is to implement appropriate methods of assessment for laboratory learning.
Present-day engineering laboratory activities are mature in their design. They provide opportunities to students ranging from performing an experiment in groups in a laboratory to designing a model independently, with different skills to be acquired in each step of the laboratory activity (Feisel and Rosa 2005). Ascertaining students' skills attainment in the learning process is as important as designing the instructions for learning. Evaluation of students' attainment of practical skills and performance in the laboratory occurs mainly in two stages: during the laboratory session and after the laboratory session. Assessments carried out during the laboratory session are called assessment for learning and those carried out at the end of laboratory sessions are called assessment of learning (Hunt, Koenders, and Gynnild 2012; Wiliam 2011). Both these modes have their own advantages and disadvantages. They are also sometimes referred to as formative and summative assessments, respectively, although the purpose and context of their use can vary. Generally, formative assessments help identify areas of student improvement through teaching and learning practices in the laboratory, while summative assessments merely inform the instructor of the level of student performance in the laboratory (Wiliam 2011).
Gregory and Moron-Garcia (2009) found that report-based assessments are popular in undergraduate studies because they allow students to learn time-management and workload-management skills. These practices, however, tend not to consider students' perspectives, experiences and attitudes as part of the implementation process; the use of such assessment methods is often determined solely by convenience to the educational institution in terms of ease of applicability and resource friendliness. Nevertheless, educational research and discussion among academics have led to reform in this practice over the years (Boud and Dochy 2010; Evans 2013).
Assessment in any context gauges not only whether students' learning is aligned with the expected learning outcomes but also whether the instructional design of the laboratory actually serves the attainment of those outcomes. Assessments are also important for monitoring students' progress and development (Bone 2010; Caspersen, Smeby, and Olaf Aamodt 2017; Ross, Brown, and Torabi 2017; Williams 2014), as well as for making the teaching and learning process a motivating journey for both student and teacher. Students are strongly driven by assessment and by the feedback that they receive on their assessment tasks (Nicol and MacFarlane-Dick 2006; Torrance 2007); feedback enables them to recognise their shortcomings and then work on the right areas for improvement and development (Olds and Miller 1998). Assessment also helps students to determine the right approach to laboratory learning (Jones 2005; Olds and Miller 1998). As a consequence, there is much active debate over why and how students' learning should be assessed (Guskey 2003; McColskey and O'Sullivan 2012; Olds and Miller 1998; Ross, Brown, and Torabi 2017; Sadler 2005; Stassen, Doherty, and Poe 2001; Williams 2014). Theoretical assessments are purely content-based and do not cover areas such as personal and professional development. By contrast, the assessment of practical work (Derek 1992; Hofstein and Lunetta 1982; Olds and Miller 1998) can include all aspects of learning, such as content knowledge, team-building and collaborative skills, analytical skills, communication skills and error analysis, to name a few (Caspersen, Smeby, and Olaf Aamodt 2017; Ramirez et al. 2014). Accordingly, laboratory learning and its assessment offer much greater scope for inculcating and reinforcing the attributes required of engineering graduates.
In this article, we offer a model for the design of laboratory class assessment and then apply this to the practical work programme of a second-year engineering unit. The effectiveness of the assessment regime is then studied through the lens of student experience and its ability to account for the diverse range of engineering skills that students are able to develop through laboratory learning.
2. Model for designing a laboratory class assessment method
In order to understand how the design of a laboratory assessment method is relevant to the laboratory learning environment, we used and modified the Model of Educational Reconstruction (MER) (Duit et al. 2012), which emphasises the following essential aspects required for an effective teaching and learning process:
* Clarification of the subject matter and analysing the educational significance of the chosen subject matter;
* Accounting for both teachers' and student's perspectives including students' prior knowledge of the subject, their attitudes, skills and interests in the subject matter; and
* Combining the above two aspects to design and evaluate a learning environment that is appropriate for teaching and learning to take place.
In the present study, we focus on the assessment method and its significance for students' learning in the laboratory, rather than on clarifying the subject matter and analysing its significance. Based on the Model of Educational Reconstruction, the design of an assessment method can, in general, be depicted schematically as in Figure 1. Three important components are involved in the design of an assessment method for a laboratory class, which we describe as follows.
2.1. Conceptual framework

The design is generally initiated by the faculty, who develop specific grounds and criteria for assessing students' learning from the laboratory experiments. This is an important step because courses at institutions must comply with the guidelines provided by accrediting bodies, such as Engineers Australia in the context of Australian universities. Assessment, therefore, plays a significant role in ensuring that students learn and acquire the set of skills prescribed by the accrediting body. Olds and Miller (1998) propose considering a few basic questions before designing or developing an assessment method, namely 'What are the program objectives?' and 'What should students know and be able to do when they complete the program?'. Based upon the answers to these questions, along with a clear analysis of the significance of its implementation in laboratory education, the faculty proposes a conceptual framework for the design of an assessment method for the evaluation of laboratory work.
2.2. Design and evaluation
A clear concept for the assessment method, together with appropriate alternatives, leads to the design of the actual assessment method. This process is accompanied by evaluation of the design in terms of validity and reliability. An important step in this process is the selection of appropriate environments in which to implement the proposed assessment method.
The assessment method designed should reflect the nature of the task and the type of laboratory being assessed. The learning outcomes measured should also align with the competencies required by the accrediting body, to ensure that students acquire the right skills for their future professional careers. This is followed by the actual implementation of the assessment tool in the laboratory classes, through rigorous instructor interaction and by allowing students to conduct experiments based on the concept that underpins the assessment method.
2.3. Investigation of students' experiences and perceptions
In order to obtain information regarding the effectiveness of the assessment method, it is necessary to investigate students' perspectives, their behaviour during the assessment and, finally, their experience of the assessment method used in the laboratory. This can be carried out in numerous ways, the most common being qualitative and quantitative surveys. These provide two important types of information: first, the benefits and drawbacks of using the assessment method in laboratory skills assessment can be identified; and second, the satisfaction level of students undergoing the assessment can be measured. The former information leads to the redesign or improvement of the assessment tool, while the latter provides feedback on students' ability and motivation to construct their knowledge of the topics explored in the laboratory activity. Refinements of the tool to address both types of feedback may modify the original concept, and thus enhancements to the assessment design occur through an iterative process.
2.4. Further design factors
Any assessment tool designed using this model must align the assessment with the learning outcomes of the laboratory. A further design constraint is that the assessment method should be simple and convenient to use within the specified time limit of the intended laboratory. Laboratory instructors' ease of adoption of the method, and their ability to implement it within the time allocated to the laboratory activity, should also be considered during the design process.
3. Research questions
To illustrate the application of the foregoing process, the remainder of this article describes, as an example, the design of assessment methods for a second-year engineering laboratory programme, in doing so presenting students' experience and satisfaction when assessed using two different modes of assessment within the programme.
The study, therefore, serves to answer the following questions.
(1) How should students be assessed in the engineering laboratories so as to measure the essential practical skills attainment as required by Engineers Australia for graduate engineers?
(2) Does the mode of assessment affect students' activities in the laboratory and the marks they are awarded?
(3) How does the assessment method affect students' satisfaction with their laboratory work?
4. Design of assessment comprising in-class and report-based methods for an engineering laboratory programme
Herein, we first describe separately the mechanisms of the two different methods, namely in-class and report-based, used in the assessment of laboratory learning, and offer review comments based upon observations of their implementation. Thereafter, we show how these combine to create the overall assessment of the laboratory programme of the second-year engineering course in Fluid Mechanics at Curtin University, Australia, taken by students in the sub-disciplines of Chemical, Civil, Mechanical, Mining and Petroleum Engineering. The cohort observed was multicultural and predominantly male. Both laboratory sessions were instructed and assessed by five sessional staff members, who ran 10 laboratory sessions per week. The main activities in the fluids laboratory include operating equipment such as that shown in Figure 2, using instrumentation to collect data, processing data to obtain the overall results and interpreting these results.
4.1. In-class assessment method
The in-class assessment concept has been designed to assess students' performance within the laboratory session focusing on aspects such as data capture, its synthesis and the ability to draw inferences through a discussion of questions that follow the completion of the experimental investigation. There are three sets of experiments in this laboratory session, namely: (1) Stability of floating bodies; (2) Investigation of Bernoulli's equation in a closed water circuit; and (3) Discharge (Flow rate) measurement using different devices. The equipment used for experiments 1 and 2 is illustrated in Figure 2 below.
The score from the in-class assessment contributes 30% of the total laboratory programme marks for the unit. In the two-hour laboratory session, there are typically 12 students, who are divided into 3 groups with a maximum of 4 students in each group. Since the assessment is based purely on the students' laboratory performance and the understanding they develop in the laboratory, students do not need to learn any underlying theory prior to performing the experiments. Students receive a live demonstration of the experiments from their instructor for approximately 35 min and also obtain the first set of data from the demonstration itself. Thus, students not only learn how to perform the experiment but also get a glimpse of the nature of the data-sets expected from each of the three experiments. They then spend their own 'discovery' time, totalling about one hour, to obtain a complete set of data for each experiment. With the data in hand, students utilise a further 30 min undertaking calculations (data synthesis) and analysing the discussion questions posed in the laboratory briefing sheet; the latter generates vigorous consultation and discussion amongst the group members. In general, the concept of the in-class assessment is simple to understand and implement for both students and instructors in any laboratory. An example of the in-class assessment tool for just one of the three experiments is presented in Appendix 1.
During the session, the instructors also perform calculations on the data obtained by each group using an Excel spreadsheet which is later used to evaluate the accuracy of the students' calculations. Each group has to submit just one completed laboratory sheet with calculations and discussions for the three experiments. The instructor finally scores the students' performance on the grounds of accuracy and their understanding or inferences drawn from the phenomena that each experiment illustrates. Each member of the group receives the same score, reflecting the need for cohesive teamwork. This means each student in a particular group is adjudged to have performed in an identical way unless there is clear evidence of non-participation or inactive participation as observed by the instructor. Students have the freedom to distribute tasks among the group members and also the opportunity to share their knowledge and experience. Finally, the group receives their assessment mark and any oral feedback on their work at the end of the laboratory session. Not only does this provide immediate feedback to students but it also obviates post-laboratory marking work for the instructor which is beneficial for the institution in terms of human and financial resource costs.
The design of the in-class assessment method is effective in terms of focus upon its objectives, namely the inculcation of student skills in the engineering sequence of: conduct of experimental procedure, data capture, data synthesis and inference of concepts. However, there are some visible drawbacks which should inform the iterative process of improvement, most notably due to the pressures of the time limit and the number of activities planned within the session. While working under tight time constraints is a feature of professional engineering work, the quality of students' learning process can be compromised. For example, the time constraint in the present design denied students the opportunity to review their data and/or recalculate in the case of outliers or errors. The time constraint may also put instructors under pressure to complete the assessment thereby diminishing their role as teachers. It is clear that when implementing this method of assessment, careful planning should be used to determine what can feasibly be achieved within a time-constrained laboratory activity.
4.2. Report-based (conventional) assessment method
For the second session of the laboratory programme of the Fluid Mechanics course, a conventional assessment method is used. In the second laboratory, students study flow through pipes, the objective being to understand laminar and turbulent flow regimes following the exposition of these concepts in the lecture series. Working in groups (again, typically of four students), students perform the experiment and collect data after receiving a thorough demonstration from the instructor. At the end of the experimentation phase, the instructor explains how to prepare the laboratory report, which each student has to submit individually within two weeks of completing the laboratory activity. This session is also designed within a two-hour time period. In this method of assessment, students and demonstrators do not interact with each other during the assessment process, and students' marks are based purely on the quality and content of the laboratory report that they prepare. However, a marking rubric, based on the development of the Engineers Australia Stage 1 competencies (Engineers Australia 2011), is provided. These competencies include the ability to write scientific reports, to interpret data logically and correctly, to use theory to understand experimentally observed phenomena and to apply written and diagrammatic communication skills. The mark for this laboratory contributes 70% of the total laboratory programme marks for the course.
In this method of assessment, like that of the in-class method, students know the criteria upon which they are being assessed. However, in the case of report-based assessment, students are less likely to make mistakes as they have post-session access to help during the period in which they prepare their report. There is also greater opportunity to reflect upon and review the work in their report, although they cannot revisit the experiment. Because assessment occurs after the laboratory session, the pressure to conduct data synthesis is reduced and students ostensibly get more time to spend on the practical aspects as compared with the first laboratory session that used in-class assessment. Nevertheless, a disadvantage of report-based assessment is that the mark given by the instructor often reflects the amount of effort put into preparing the report but not how students performed or actively learned from the practical work they performed. Additionally, authenticity checks for authorship can be difficult to carry out. Finally, although professional and team-working development skills are part of the laboratory learning objectives, they are difficult to measure through the conventional report-based assessment method (Hunt, Koenders, and Gynnild 2012).
4.3. Design objectives behind the in-class and report-based assessment methods
The in-class assessment method requires that, by the completion of the practical work, each student has become familiar with the equipment and its instrumentation. We remark that it does not measure students' adherence to strict procedural instructions. The most clearly measured aspect of student learning is the ability to collect and record data accurately, as captured in the group's laboratory briefing sheet. Marks assigned to each section of the laboratory sheet encourage students to critique their own collected data. Students generally tend to verify their data through comparison with those of other groups in the laboratory; this form of verification (reproducibility) is valid and promotes communication skills applied to the discussion of engineering work. The in-class assessment method also compels students to learn how to synthesise data by following a set of instructions given by the instructor together with those already provided in the laboratory briefing sheet. The discussion questions at the end of the laboratory briefing sheet assess the ability to identify and interpret trends in the collected data and synthesised results, and to identify or infer physical phenomena and their causes from observations made during experimentation and from the data collected during the process. This practice assists students in developing the ability to apply knowledge learned during the laboratory work to different but related applications in the future.
Since the in-class assessment is a group assessment, each section of the laboratory sheet demands group collaboration. Students' discussions within their group generally focus on calculations (data synthesis) and answering the discussion questions (inferences drawn) so as to come to a consensus and complete the work within the specified time limit. This process develops the ability of students to communicate with team members and optimise the group's workflow to meet a deadline. During the assessment process, students are asked to elaborate on their calculations and discussion in front of the instructor (Ross, Brown, and Torabi 2017). This assesses students' ability to communicate the results obtained from the experimental work.
While the in-class assessment method measures the ability to use appropriate techniques in an engineering laboratory, the much more commonly used report-based individual assessment method measures compliance against a different group of Engineers Australia Stage 1 competencies for a professional engineer (Engineers Australia 2011). Report-based assessment mainly focuses on aspects such as development and research, conceptual understanding and use of techniques applied in preparing the report and, only implicitly, communication ability when working in groups and interacting with peers and instructors effectively. Accordingly, the combination of the two assessment methods can serve to measure most of the major personal, professional and technical skills required by Engineers Australia (Engineers Australia 2008) for students graduating with an engineering degree.
Appendix 2 tabulates the 10 Engineers Australia learning objectives along with the corresponding assessment methods that assess those objectives.
5. Investigation of students' experience and satisfaction
5.1. Research method and participants
A quantitative research method is applied by means of a survey questionnaire (Creswell 2013). Closed-ended questions are included to ascertain students' participation and engagement at various levels in the two Fluid Mechanics laboratory sessions described in Section 4. The data were collected in Semester 1, 2017. Students received the survey form after completing the second laboratory session, in which the conventional report-based assessment was used, noting that the in-class assessment method was used in the first laboratory session. The survey forms were completed only at the end of the second laboratory because the questions are mostly couched as comparisons of various aspects of the two laboratory sessions. The survey questions covered aspects such as preparation, active participation and teamwork, both during the practical work and in completing the task or assessment. A total of 10 polar (yes/no) questions were posed. A further focus of the questionnaire was student satisfaction, measured on a scale of 1 to 10, where 1 represents 'Not satisfied' and 10 represents 'Extremely satisfied' (see Appendix 3 for a sample of the survey form). A total of 263 students responded to the survey, conducted over a period of five weeks of laboratory sessions. Students' laboratory marks in the two laboratory sessions were also considered in order to study the effect of the assessment methods on student learning.
5.2. Results and discussion
Of the 263 students, 259 responded to all of the questions in the survey form. The results are presented in Table 1, which quantifies both positive and negative responses for the overall class as percentages of the total cohort surveyed (hence, owing to the missing responses noted above, the percentages do not always sum to 100%).
It is evident from Table 1 that most of the students responded positively to the questions asked in the survey. This can be considered, as a whole, an encouraging response towards both of the assessment methods used for laboratory work assessment. Students reported that they were well prepared for the activities and seemed to participate actively in the laboratory work. However, many students (above 40%) reported that they were not provided with suitable opportunities to find reasons for errors in their measurements and/or to repeat tasks or measurements in the laboratory under either mode of assessment. These two aspects are important for students' learning from laboratory work, and the negative responses reflect inadequacies in both assessment methods, suggesting the need for design improvements. Students' responses also reveal that they perceive that they have successfully attained some important skills required by Engineers Australia, such as communication for task completion and group collaboration; both assessment methods exhibit significant social interaction (Lowe et al. 2009; Park et al. 2017) and active engagement in the laboratory.
A Wilcoxon signed-rank test was performed to test for significant differences in responses for each item in the survey; the z-value obtained for each item has been added to Table 1, with an asterisk beside those that were statistically significant (p < 0.05). The test results show that the differences in responses for controlling the equipment (z = -4.11), finding measurement errors (z = -2.82), repeating a task or measurement (z = -2.91), discussing the activity or report preparation (z = -2.72) and, finally, having sufficient time for the task (z = -7.38) were statistically significant. Further features of the results in Table 1 are discussed below.
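The z-values reported in Table 1 come from the normal approximation to the Wilcoxon signed-rank statistic. As an illustration only (the study's analysis was presumably run in a standard statistics package), a minimal standard-library Python sketch of that computation is:

```python
import math

def wilcoxon_z(x, y):
    """z-statistic for the paired Wilcoxon signed-rank test
    (normal approximation; zero differences are discarded and
    tied absolute differences receive their average rank)."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    n = len(diffs)
    # Rank the absolute differences, averaging ranks across ties.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)  # positive-rank sum
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return (w_plus - mu) / sigma
```

For paired yes/no items coded 0/1, as in this survey, every non-zero difference ties at |d| = 1, so the statistic effectively compares the counts of positive and negative changes between the two sessions.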
In order to compare students' preparedness and active engagement encouraged by the two assessment methods, only the positive response data from Table 1 are plotted in Figure 3 below; this form of visualisation (of the same data) has its emphasis upon student engagement in the overall laboratory activity process.
While the two forms of assessment yield broadly similar bar-chart profiles, we note the following differences. Figure 3 shows that students report that they are slightly better prepared for the laboratory session when the in-class assessment is used. The in-class assessment method also provided significantly better opportunities for students to control equipment individually and take the necessary data readings. By contrast, the opportunity to detect errors in measurements and repeat tasks and/or measurement was slightly better when the post-session report-based assessment was used wherein students also found that they had significantly more time to complete the laboratory task. Critically, many students reported that they did not have sufficient time to complete their tasks when the in-class assessment method was used for the first laboratory. Features observed in Figure 3 are in agreement with the statistical results obtained in Table 1.
As shown in Figure 4, students' satisfaction levels for both types of assessment method are similar. However, students seem to be slightly more satisfied with the conventional report-based assessment. The most common rating given was 8 in both modes of assessment: 26% for the in-class and 30% for the report-based assessment method. The average satisfaction levels for the in-class and report-based assessments are 7.71 (SD = 2.01) and 8.37 (SD = 1.3), respectively. It should be noted that a very small proportion of students was dissatisfied with the conventional method of assessment, whereas about 11% appeared to be dissatisfied with the in-class assessment method.
Overall, most students reported satisfaction with both modes of laboratory learning assessment. This may be because both assessment methods allow students to perform across all aspects of the laboratory activities at an almost similar level of familiarity. Gray and Diloreto (2016) have indicated that student satisfaction is strongly influenced by the time given for task completion. The over-riding factor for dissatisfaction with the in-class assessment method likewise seems to be the time allotted for conducting the laboratory activities and their assessment. This is reflected in some students' comments communicated through the open-ended feedback section of the survey form, examples of which are:
Time was really less to do calculations and explanations,
Not enough time in the first lab to complete the in-class assessment,
Lab 1 provided very limited time to complete calculations and questions
The average laboratory marks obtained by students through the in-class and report-based assessment methods were 89.0% (SD = 9.9%) and 73.6% (SD = 16.9%), respectively. Students' attainment of higher marks in the in-class assessment can probably be attributed to the fact that students worked and were assessed in groups, while the report-based assessment required significant individual effort. The higher standard deviation arising from the report-based assessment method also reveals a greater variation in scores among students, because students were assessed on an individual basis. Thus, for example, there were students who scored full marks in the laboratory with in-class assessment yet scored as low as 24% in the report-based assessment. What can be deduced from these data is that the in-class assessment does not disadvantage students in terms of the marks that they receive. It should be noted that face-to-face marking could conceivably yield higher scores because of the personal dynamics at work in student-instructor interactions. However, observations indicated that even students who performed less well accepted their scores without dispute; instructors therefore seemed unafraid to mark objectively.
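The mean and standard-deviation comparison above is straightforward to reproduce with Python's statistics module. The mark lists below are hypothetical, chosen purely to illustrate how group-based marking produces a tight cluster while individual marking produces a wide spread; they are not the study's data.

```python
import statistics

# Hypothetical marks (%) for two small groups -- illustration only,
# not the study's actual data.
in_class_marks = [95, 88, 100, 82, 90]  # group-assessed, tightly clustered
report_marks = [78, 45, 95, 60, 88]     # individually assessed, widely spread

for label, marks in [("in-class", in_class_marks),
                     ("report-based", report_marks)]:
    mean = statistics.mean(marks)
    sd = statistics.stdev(marks)  # sample standard deviation
    print(f"{label}: mean = {mean:.1f}%, SD = {sd:.1f}%")
```

On such data the report-based list yields the lower mean and the larger SD, mirroring the pattern reported for the two assessment methods.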
5.3. Observations of student behaviour

Informal observations of student behaviour during the first laboratory session, which utilised in-class assessment, suggested that some students compromised their learning experiences. In particular, students sometimes divided the tasks among the four group members, with, for example, two students focused solely on data collection while the other two involved themselves in the calculations. Thus, the overarching objective of the group was to complete the tasks within the given time limit, as opposed to learning how to perform an experiment and exploring their understanding of the phenomena it illustrated. This behaviour, while optimising teamwork for task completion, was not an intended outcome of the design of the in-class assessment method. Similar instances were not observed in the second laboratory session, in which time constraints were far less pronounced.
5.4. Overall findings and implications
Both the in-class and the conventional report-based methods assess students' performance in laboratory work and promote the development of essential skills expected of students when undertaking practical work. The marking rubrics for the two assessment methods clearly differ, but each aligns with a subset of the Engineers Australia attributes of engineering graduates. Given that the student survey results show similarities in students' experience and satisfaction levels, the in-class and report-based assessment methods complement each other within the overall laboratory programme. Accordingly, their combined application in engineering laboratory learning can foster the development of a wider range of personal, professional and technical skills than the use of the currently predominant report-based assessment method alone.
Notwithstanding the above recommendation, the addition of in-class assessment methods requires careful thought regarding the scope and scale of activities that can be realistically completed and assessed within a time-bound laboratory session. In the example of the present study, reducing the number of experiments in the first laboratory session could probably give more time for students to reflect upon and analyse the procedures and results obtained. Removing the excessive pressure (as identified through the student experience) to complete the activities would also provide increased opportunities for every student in a group to participate in each aspect of the activity. This would have the additional advantage of making the single mark awarded reflect more accurately the laboratory learning of each of the group's members.
The Model of Educational Reconstruction has been adapted and used to formulate a conceptual framework for the design of assessment for laboratory learning. This framework has then been used to design the assessment regime for the laboratory programme of a second-year Engineering Fluid Mechanics course at Curtin University. The novel feature of the resulting design is that it adds an in-class assessment method to the commonly used report-based assessment method across the sequential laboratory sessions undertaken by students. The in-class assessment of the first session is complementary to the report-based assessment in that its focus is upon promoting and rewarding the development of skills required in the actual conduct of practical work in a teamwork setting. These are critical skills for graduate engineers who will inevitably find themselves working on, operating or supervising practical processes in their careers. In contrast, a conventional assessment that requires an individual report to be submitted after carrying out the laboratory work has a bias towards the application or reinforcement of concepts already taught in lectures and towards a student's report-writing skills. A further difference is that in-class assessment provides immediate, and most often formative, feedback to students, whereas report-based assessment tends to be summative in practice.
The combined in-class and report-based components have been implemented in the laboratory programme and the resulting student experience quantitatively studied using a survey tool. The results of this investigation suggest that students prepare better when the in-class assessment is applied and that their interactions with equipment in the laboratory are greater. By contrast, when the report-based assessment is used, students tend to focus more on obtaining a results data-set that they will analyse after the laboratory session. The survey results also indicate that there is very little difference in students' stated satisfaction levels between the two methods. Consideration of class marks awarded through the two methods reveals that instructors awarded higher scores, with a lower variance, using in-class assessment. This is probably because the in-class mark was for group work whereas the report-based assessment was at an individual student level, although students carried out the experimentation in groups. The most significant negative feedback on the use of in-class assessment was that many students felt pressured by the time constraint within which they had to complete their work and have it assessed. Clearly, careful consideration needs to be given to forming realistic expectations of what can be accomplished within the session.
Overall, the findings of the present article suggest that the assessment of laboratory learning addresses a more comprehensive set of student attainments and fosters the development of a broader set of engineering graduate attributes when a combination of in-class and report-based methods is used, and that this is a practical approach from both institutional and student perspectives.
The work reported in this article contributes to a larger research project on laboratory learning in Science and Engineering that is supported by the Australian Research Council through grant DP140104189, for which the authors wish to express their gratitude.
The authors declare no conflicts of interest.
This work was supported by the Australian Research Council [grant number DP140104189].
Notes on contributors
Sulakshana Lal is a second-year doctoral student in Engineering Education at Curtin University in Perth, Western Australia.
Anthony D Lucey is a John Curtin Distinguished Professor, Dean of Engineering and Head of the School of Civil and Mechanical Engineering at Curtin University.
David F Treagust is also a John Curtin Distinguished Professor in the School of Education at Curtin University.
Euan Lindsay is the Foundation Professor of Engineering at Charles Sturt University in Bathurst, NSW.
Ranjan Sarukkalige is a senior lecturer in the Department of Civil Engineering and postgraduate coordinator in Civil Engineering at Curtin University.
Mauro Mocerino is an Australian Learning and Teaching Fellow and Coordinator of Chemistry post-graduate coursework at Curtin University.
Marjan Zadnik is an assistant professor in the Department of Physics and Astronomy in the School of Science at Curtin University.
Abdulwahed, M., and Z. K. Nagy. 2009. "Applying Kolb's Experiential Learning Cycle for Laboratory Education." Journal of Engineering Education 98 (3): 283-294. doi:10.1002/j.2168-9830.2009.tb01025.x.
Bone, A. 2010. Review of Developing Effective Assessment in Higher Education: A Practical Guide, by S. Bloxham and P. Boyd. The Law Teacher 44 (1): 106-108. doi:10.1080/03069400903541476.
Boud, D., and F. Dochy. 2010. "Assessment 2020: Seven Propositions for Assessment Reform in Higher Education." Education and Training: 330-333.
Caspersen, J., J. C. Smeby, and P. Olaf Aamodt. 2017. "Measuring Learning Outcomes." European Journal of Education 52 (1): 20-30. doi:10.1111/ejed.12205.
Couteur, P. Le. 2009. "Review of Literature on Remote & Web-Based Science Labs." 1-22. Accessed http://www.nic.bc.ca/pdf_docs/carti/Review_of_Literature_on_Remote_and_Web-based_Science_Labs.pdf
Creswell, J. W. 2013. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. 4th ed. Thousand Oaks, CA: SAGE.
Hodson, D. 1992. "Assessment of Practical Work: Some Considerations in Philosophy of Science." Science & Education 1 (2): 115-144.
Duit, R., H. Gropengiesser, U. Kattmann, M. Komorek, and I. Parchmann. 2012. "The Model of Educational Reconstruction - A Framework for Improving Teaching and Learning Science." In Science Education Research and Practice in Europe: Retrospective and Prospective, edited by D. Jorde and J. Dillon, 13-37. doi:10.1007/978-94-6091-900-8_2.
Engineers Australia. 2008. "Accreditation Management System - Education Programs at the Level of Professional Engineer. Document G02 - Accreditation Criteria Guidelines." Engineers Australia, Canberra. https://www.engineersaustralia.org.au/sites/default/files/content-files/2016-12/G02_Accreditation_Criteria_Guidelines.pdf.
Engineers Australia. 2011. "Stage 1 Competency Standard for Professional Engineer." Engineers Australia, 1-6. Accessed http://www.engineersaustralia.org.au/shadomx/apps/fms/fmsdownload.cfm?file_uuid=DBA27A80-95B2-94BD-5BE8-A105DEDDED21&siteName=ieaust
Evans, C. 2013. "Making Sense of Assessment Feedback in Higher Education." Review of Educational Research 83 (1): 70-120. doi:10.3102/0034654312474350.
Feisel, L. D., and A. J. Rosa. 2005. "The Role of the Laboratory in Undergraduate Engineering Education." Journal of Engineering Education 94 (1): 121-130. doi:10.1002/j.2168-9830.2005.tb00833.x.
Gray, J. A., and M. DiLoreto. 2016. "The Effects of Student Engagement, Student Satisfaction, and Perceived Learning in Online Learning Environments." NCPEA International Journal of Educational Leadership Preparation 11 (1): 98-119.
Gregory, K., and S. Moron-Garcia. 2009. "Assignment Submission, Student Behaviour and Experience." Engineering Education 4 (1): 16-28. doi:10.11120/ened.2009.04010016.
Guskey, T. R. 2003. "How Classroom Assessments Improve Learning." Educational Leadership 60 (5): 6-11. Accessed http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=9029496&site=ehost-live
Hofstein, A., and V. N. Lunetta. 1982. "The Role of the Laboratory in Science Teaching: Neglected Aspects of Research." Review of Educational Research 52 (2): 201-217.
Hunt, L., A. Koenders, and V. Gynnild. 2012. "Assessing Practical Laboratory Skills in Undergraduate Molecular Biology Courses." Assessment & Evaluation in Higher Education 37 (7): 861-874. doi:10.1080/02602938.2011.576313.
Jones, C. 2005. "Assessment for Learning." Learning and Skills Development Agency 16-19.
Lindsay, E. D., and M. C. Good. 2005. "Effects of Laboratory Access Modes upon Learning Outcomes." IEEE Transactions on Education 48 (4): 619-631. doi:10.1109/TE.2005.852591.
Lowe, D., S. Murray, E. Lindsay, and D. Liu. 2009. "Evolving Remote Laboratory Architectures to Leverage Emerging Internet Technologies." IEEE Transactions on Learning Technologies 2 (4): 289-294. doi:10.1109/TLT.2009.33.
Ma, J., and J. V. Nickerson. 2006. "Hands-on, Simulated, and Remote Laboratories: A Comparative Literature Review." ACM Computing Surveys 38 (3): 1-24. doi:10.1145/1132960.1132961.
McColskey, W., and R. O'Sullivan. 2012. "How to Assess Student Performance in Science." Journal of Turkish Science Education (TUSED) 9 (3): 131-136. Accessed http://ehis.ebscohost.com.ezpustaka.upsi.edu.my/ehost/pdfviewer/pdfviewer?sid=bacd0ff7-72cd-478a-b92d-1ad45ed3ddd7@sessionmgr12&vid=13&hid=114
Nicol, D., and D. MacFarlane-Dick. 2006. "Formative Assessment and Self-Regulated Learning: A Model and Seven Principles of Good Feedback Practice." Studies in Higher Education 31 (2): 199-218. doi:10.1080/03075070600572090.
Olds, B. M., and R. L. Miller. 1998. "An Assessment Matrix for Evaluating Engineering Programs." Journal of Engineering Education 87 (2) (April): 173-178. doi:10.1002/j.2168-9830.1998.tb00338.x.
Park, J. J., N. H. Choe, D. L. Schallert, and A. K. Forbis. 2017. "The Chemical Engineering Research Laboratory as Context for Graduate Students' Training: The Role of Lab Structure and Cultural Climate in Collaborative Work." Learning, Culture and Social Interaction 13 (March): 113-122. doi:10.1016/j.lcsi.2017.04.001.
Ramirez, E., J. Tejero, R. Bringue, C. Fite, F. Cunill, and I. Montserrat. 2014. "Revamping of Teaching--Learning Methodologies in Laboratory Subjects of the Chemical Engineering Undergraduate Degree of the University of Barcelona for Their Adjustment to the Bologna Process." Education for Chemical Engineers 9 (3): e43-e49. doi:10.1016/j.ece.2014.04.002.
Ross, R., K. Brown, and T. Torabi. 2017. "LUS - A Tablet-Based NFC System to Facilitate Instant Database Archival of Laboratory Assessment." Australasian Journal of Engineering Education (July): 1-7. doi:10.1080/22054952.2017.1312837.
Sadler, D. R. 2005. "Interpretations of Criteria-Based Assessment and Grading in Higher Education." Assessment & Evaluation in Higher Education 30 (2): 175-194. doi:10.1080/0260293042000264262.
Saniie, J., E. Oruklu, R. Hanley, V. Anand, and T. Anjali. 2015. "Transforming Computer Engineering Laboratory Courses for Distance Learning and Collaboration." International Journal of Engineering Education 31 (1): 106-120.
Stassen, M., K. Doherty, and M. Poe. 2001. Course-Based Review and Assessment: Methods for Understanding Student Learning, 54.
Torrance, H. 2007. "Assessment as Learning? How the Use of Explicit Learning Objectives, Assessment Criteria and Feedback in Post-Secondary Education and Training Can Come to Dominate Learning." Assessment in Education: Principles, Policy & Practice 14 (3): 281-294. doi:10.1080/09695940701591867.
Wiliam, D. 2011. "What is Assessment for Learning?" Studies in Educational Evaluation 37 (1): 3-14. doi:10.1016/j.stueduc.2011.03.001.
Williams, P. 2014. "Squaring the Circle: A New Alternative to Alternative-Assessment." Teaching in Higher Education 19 (5): 565-577. doi:10.1080/13562517.2014.882894.
Sulakshana Lal (a), Anthony D. Lucey (a), Euan D. Lindsay (b), Priyantha R. Sarukkalige (a), Mauro Mocerino (d), David F. Treagust (c) and Marjan G. Zadnik (d)
(a) School of Civil and Mechanical Engineering, Curtin University, Perth, Australia; (b) Faculty of Engineering, Charles Sturt University, Bathurst, NSW, Australia; (c) School of Education, Curtin University, Perth, Australia; (d) School of Science, Curtin University, Perth, Australia
Received 27 October 2017
Accepted 27 January 2018
Caption: Figure 1. Model for designing a laboratory class assessment method.
Caption: Figure 2. Types of equipment used by students in the laboratory work for: (a) Stability of floating bodies, and (b) investigation of Bernoulli's equation in a closed water circuit.
Caption: Figure 3. Comparison of students' preparedness and engagement in the two assessment methods.
Table 1. Students' recorded responses (as %) to the questionnaire.

                                                                       Yes (%)            No (%)
Area of assessment              Basic laboratory activity              In-class  Report   In-class  Report   z-value
Preparation                     Read laboratory instruction            85        74       15        23       -1.39
                                Read relevant lecture materials        70        63       29        32       -0.23
Active participation/team work  Talk to peers/demonstrator             98        93        2         2       -1.96
                                Control equipment yourself             92        70        7        25       -4.11 *
                                Record readings yourself               95        91        3         5       -0.16
                                Find measurement errors                56        58       43        37       -2.84 *
                                Repeat task/measurements               52        56       47        40       -2.91 *
Completion of task/assessment   Complete calculations in laboratory    84        68       15        26       -1.60
                                Discussion for activity/report         86        85       11         8       -2.72 *
                                Sufficient time for the task           69        89       29         5       -7.38 *
* p-value < 0.05.

Appendix 1. Sample of assessment form used for in-class assessment

Experimental Results and Calculations
Dimensions: Length 357 mm; Breadth 905 mm.
Masses: Movable mass (W_j) 200 g; Mass of assembled pontoon (W_p) 4200 g.

Distance of movable mass from centre, x (mm): RIGHT (+ve): 0, 10, 30, 60; LEFT (-ve): 20, 40.
For each position of the movable mass, the angle of heel (in degrees and radians) is recorded for two centre-of-gravity settings (C.G1 and C.G2), and the metacentric height GM is calculated from GM = (W_j x) / (W_p tan(angle of heel)) for each setting. [6 marks]

Discussion:
1. Does the position of the metacentre depend on the position of the centre of gravity? [1 mark]
2. Does the metacentric height vary with the angle of heel? [1 mark]
3. Active engagement, participation and team work. [2 marks]
(A total of 10 marks is offered for experiment #1.)

Appendix 2. Mapping of the 10 EA learning objectives to the two assessment methods

Each of the 10 learning objectives from the EA accreditation guidelines is listed below with the corresponding learning objective from Feisel and Rosa (2005) (*) and the assessment method(s) satisfying it.
1. An appreciation of the scientific method, and the need for rigour and a sound theoretical basis [Models; in-class and report-based assessment]
2. A commitment to safe and sustainable practices [Ethics in the laboratory]
3. Skills in the selection and characterisation of engineering systems, devices, components and materials [Instrumentation]
4. Skills in the selection and application of appropriate engineering resources, tools and techniques [Psychomotor]
5. Skills in the development and application of models [Models]
6. Skills in the design and conduct of experiments and measurements [Design; in-class assessment]
7. Proficiency in appropriate laboratory procedures; the use of test rigs, instrumentation and test equipment [Experiment; in-class assessment]
8. Skills in recognising unsuccessful outcomes, diagnosis, fault finding and re-engineering [Learn from failure; in-class and report-based assessment]
9. Skills in perceiving possible sources of error, eliminating or compensating for them where possible, and quantifying their significance to the conclusions drawn [Data analysis; in-class and report-based assessment]
10. Skills in documenting results, analysing credibility of outcomes, critical reflection, developing robust conclusions and reporting outcomes [Data analysis, communication; in-class and report-based assessment]
* Teamwork, sensory awareness and creativity are covered in the professional competencies standard in the Engineers Australia (2008) document.

Appendix 3. Survey questionnaire form

ENGR2000 - Fluid Mechanics laboratory - 2017 - Semester 1
Comparison of the in-class assessment lab (Lab #1) and the post-class assessment lab (Lab #2)
Lab group: ___  Date of the lab: ___

Reflecting on the laboratory classes you have completed (Lab #1 - in-class assessment; Lab #2 - report-based assessment), please give your views/comments/thoughts on the following aspects for both labs.
1. To register a response completely, fill the bubble with a blue or black ballpoint pen.
2. If you make an error, cross out the unwanted response (X) and completely fill the circle corresponding to your wanted response.
3. Do not make any other stray marks on the page.

Each item below is answered separately for Lab #1 and Lab #2 (responses A/B).
(Preparation): Before you attend the lab, did you
- Read the laboratory instruction sheet/notes
- Read the relevant lecture materials
(Active participation/Team work): Did you
- Talk to group members/demonstrator
- Control the equipment/experimental rigs yourself
- Record the measurements/readings yourself
- Find any measurement errors
- Repeat any task/measurements
(Completion of task/Assessment): Did you
- Complete the calculations within the class time
- Discuss the activity/report content with your group
- Have sufficient time for the task set

To what extent were you satisfied with the method of assessment (not the outcome/marks)? Please indicate your response (1 - not satisfied, 10 - extremely satisfied).
In-class (Lab #1): 1 2 3 4 5 6 7 8 9 10
Post-class (Lab #2): 1 2 3 4 5 6 7 8 9 10

Please provide any other feedback on the assessment process: ___
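The article does not state which statistical test produced the z-values in Table 1 (with paired responses from the same students, a paired test such as McNemar's would also be defensible). Purely as an illustration, the following sketch shows a pooled two-proportion z-test, one common choice for comparing Yes-response rates between the two labs. The function name and the cohort size n = 100 are assumptions; the excerpt does not report the number of respondents, so the resulting statistic is illustrative only and is not expected to reproduce the published values.

```python
from math import sqrt


def two_proportion_z(p1: float, p2: float, n1: int, n2: int) -> float:
    """Unpaired two-proportion z-statistic using a pooled variance estimate."""
    # Pooled proportion across both samples.
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    # Standard error of the difference under the null hypothesis p1 == p2.
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se


# Hypothetical cohort size; the excerpt does not report n.
n = 100
# "Control equipment yourself": 70% Yes (report-based) vs 92% Yes (in-class).
z = two_proportion_z(0.70, 0.92, n, n)
print(round(z, 2))
```

A negative z here simply reflects the ordering of the arguments (report-based rate first, matching the sign convention of Table 1, where in-class rates are mostly higher).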
Publication: Australasian Journal of Engineering Education. Date: Oct 1, 2017.