
Improved class preparation and learning through immediate feedback in group testing for undergraduate nursing students.

Abstract

PURPOSE A mixed-method educational evaluation project to increase learning through testing was conducted in a required senior nursing course.

METHOD The Immediate Feedback Assessment Technique (IF-AT) was used to motivate preparation for optimal final examination performance. Students took multiple-choice final exams in small groups, used critical thinking and collaboration to select best answers, and then used scratch-off answer sheets indicating correct answers. A causal-comparative evaluation design was used to assess effectiveness of the IF-AT technique in improving learning as measured by final exam scores compared with results of traditional individual multiple-choice final exams.

FINDINGS Results indicated that the IF-AT format was significantly more effective than traditional testing in enhancing learning.

CONCLUSION Descriptive and qualitative evaluation data from students indicated students increased their learning, engaged in critical thinking, and prepared adequately for the exam. Students evaluated the test method as superior to standard testing. Implementation information is included.

KEY WORDS

Collaborative Testing--Group Testing--Learning While Testing--Education Evaluation Research


Changes in classroom methodology frequently stem from the instructor's desire for improved student learning (Fink, 2003). For the senior nursing course "Nursing: Health Enhancement," one aspect targeted for improvement was preparation for the final examination. This four-credit course focuses on adult health theory and research for the management of chronic illness, topics that require a heavy student reading load. However, individual faculty-student conversations, class discussions, final exam results, and final course grades suggested that students were neither preparing sufficiently nor studying in ways that proved successful. Faculty therefore focused on their final examination methodology in an effort to improve student course performance and learning.

BACKGROUND

After attending a workshop facilitated by L. Dee Fink (www.deefinkandassociates.com), who modeled the Immediate Feedback Assessment Technique (IF-AT) (Epstein Educational Enterprises, Cincinnati, OH), the four course instructors decided to implement IF-AT collaborative testing in their classroom. The IF-AT (Epstein, Epstein, & Brosvic, 2001) consists of a scratch-off answer form for use with either 25 or 50 multiple-choice questions, each of which can have four or five answer choices. When using the forms, students follow a typical collaborative testing method; they discuss each question in a small-group setting and choose their answer through consensus building, often teaching each other and employing critical thinking skills as they negotiate for the correct response. They then scratch off the film for their choice. If they are correct, they see a star and receive maximum points. If incorrect, they discuss again and make a second or third choice, with the points decreasing with each incorrect answer. Stars are placed randomly in different positions within each answer choice rectangle to prevent cheating. This collaborative approach leads to lively discussion, debate, and audible cheers for successful choices. More detailed testing and analysis of this process indicate that it has significant benefits for both student preparedness (Mitchell & Melton, 2003; Slusser & Erickson, 2006) and student learning (Epstein et al., 2001, 2002; Giuliodori, Lujan, & DiCarlo, 2008; Kapitanoff, 2009; Skidmore & Aagaard, 2004; Wiggs, 2011).

The purposes of the project reported in this article are to describe and explain the implementation of the collaborative testing technique in the Nursing: Health Enhancement course; to assess its effectiveness in improving final exam scores by comparing scores from courses using the testing technique with scores from courses not using this method; and to evaluate the testing technique as implemented, as well as student perceptions of outcomes after using this method. These purposes called for a mixed-method educational evaluation approach.

Nurse educators have long employed collaborative and group testing to improve learning outcomes. Group testing has enhanced nursing students' scores on written anatomy and physiology examinations across gender, age, and ethnic groups (Rice, 2007). When students took an exam individually and then again in groups, exam scores increased while test anxiety decreased (Beggs, Shields, & Goodin, 2011; Cortright, Collins, Rodenbaugh, & DiCarlo, 2003; Lusk & Conklin, 2003; Mitchell & Melton, 2003; Slusser & Erickson, 2006; Wiggs, 2011). Double-testing methods have provided immediate opportunities for students to receive answers to lingering questions, correct erroneous thinking, and gain more knowledge (Giuliodori et al., 2008; Kapitanoff, 2009; Wink, 2004). Furthermore, when using collaborative techniques for post-test reviews, Steele (2006) observed that "students debate, integrate, and synthesize course material while it is still foremost in their mind" (p. 96).

COLLABORATIVE TESTING

In comparing the immediate feedback method with end-of-test feedback and 24-hour delayed feedback, Dihoff, Brosvic, and Epstein (2003) concluded that immediate feedback increased the number of correct responses on the first try, increased student confidence, and decreased perseverative incorrect responding. Additional research (Dihoff, Brosvic, Epstein, & Cook, 2004; Epstein et al., 2002) showed increased retention when immediate feedback was provided and erroneous responses were corrected at the time of the error. Finally, in analyzing the results of two groups of students (one that used only Scantron sheets for quizzes and the final examination, and another that used IF-AT forms for quizzes and a Scantron for the final exam), Epstein et al. (2001) found improved final exam scores among the students who had received immediate feedback from the collaborative quizzing technique.

The collaborative testing technique combines student collaboration, group testing, and forward-looking assessment (Fink, 2003) while meeting the needs of millennial students: active and collaborative learning, instant gratification, and structure (Oblinger, 2003). Employing a method that takes advantage of the way millennial students want to learn while providing immediate error correction can aid in knowledge retention, decrease test anxiety, and motivate student preparedness (Lusk & Conklin, 2003; Mitchell & Melton, 2003; Sandahl, 2010). Can a collaborative testing method using IF-AT forms motivate students to increase their preparation regarding course content prior to group testing? Can it motivate students to improve their study habits, collaborate, and actually learn more as measured by final examination scores?

Procedures for the First Use of the Collaborative Testing Method

The Nursing: Health Enhancement course, required for students in the relevant nursing program, is taken during the first semester of the senior year. Four or five faculty members teach this course each semester, with at least two faculty carrying over from year to year. The two primary investigators were part of the course across the testing period. All classes employ a comprehensive final examination to measure the learning outcomes of the course. These 70- to 80-item exams are developed each semester, guided by course objectives; items are designed by the faculty member who taught the specific content. Faculty often refer to test bank items developed for previous tests. The entire examination is evaluated by all course faculty for item content, ambiguity, difficulty, and coherence with course objectives, and items are modified following consultation among the faculty group. Concerned about the reading and study habits of students in the course, and their resulting learning, the faculty group decided to implement the IF-AT collaborative testing technique; based on previous research, faculty believed this technique might enhance student learning.

The implementation procedures the faculty group used to plan for and administer the IF-AT exam included:

* One of the course faculty presented the method and rationale to the class approximately one month before the final exam. Per institutional review board protocol, students were asked to provide consent to take part in this testing method and were free to refuse.

* Two weeks before the final exam, students signed up for groups. There were 39 students in the first class, yielding three groups of five and six groups of four. Negative feedback about self-selection, and the concern that self-selected groups might divide topical areas among members for special study, led to the decision to have faculty assign the groups in subsequent semesters.

* An email was sent to the class a week before the final exam with details such as time frame, number of questions, and testing procedures.

* The comprehensive exam consisted of 75 items: one item for each hour of content prior to the second exam and three items per hour of content for the final weeks of the semester. The multiple-choice items were collected, evaluated by course faculty, and typed so that each correct answer matched the answer pattern of the scoring sheets. Each group used two IF-AT scoring sheets, one with 50 items and one with 25 items; the answer pattern differed between the two sheets.

* The week before the exam, students were shown the form they would be required to use to evaluate members of their group following completion of the exam. How these evaluations would be scored was also discussed with the students.

On the day of the exam, tables in the exam rooms were arranged for the nine groups. Lists of group members were posted on the doors to the room, and tables were labeled with student names. Placed on all tables were the IF-AT exam answer sheets, forms for evaluating other students, and forms for overall evaluation of the exam. After exam questions were distributed, students were given 30 minutes to read and contemplate each item and answer individually; these individual answers were not graded. During the final 90 minutes of the exam, groups discussed the items and completed their test answer forms.

Students collaborated to select the correct answer by discussing the focus of each question and the rationale they understood for each answer choice. Discussion fostered learning within the group and built critical thinking skills, requiring collaboration and consensus to determine the best answer. Once an answer was chosen, one student would scratch off the selection on the answer sheet with a coin (similar to a lottery scratch-off ticket). If the answer was correct, a star appeared and four points were awarded. If the wrong answer was selected, students again discussed and negotiated their second choice and scratched it off. The process continued until the correct answer was found, with points awarded as follows: three points for a correct answer on the second try, two points for the third try, and zero points if all choices had to be scratched off to reveal the correct answer.
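For readers who want the arithmetic spelled out, here is a minimal Python sketch of this scoring rule; the function names and the sample attempt data are illustrative assumptions, not part of the published protocol.

```python
# Hypothetical sketch of the IF-AT scoring rule described above: four
# points for a correct first choice, three for the second, two for the
# third, and zero if every option must be scratched off.

POINTS_BY_ATTEMPT = {1: 4, 2: 3, 3: 2, 4: 0}  # four-option items

def score_item(correct_on_attempt: int) -> int:
    """Points earned, given the attempt on which the star appeared."""
    return POINTS_BY_ATTEMPT[correct_on_attempt]

def score_exam(attempts: list[int]) -> float:
    """Total group score as a percentage of the maximum possible."""
    earned = sum(score_item(a) for a in attempts)
    return 100 * earned / (4 * len(attempts))

# Illustrative 75-item exam: 70 items correct on the first try,
# four on the second, and one on the third.
attempts = [1] * 70 + [2] * 4 + [3]
print(f"{score_exam(attempts):.2f}%")  # 98.00%
```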

All groups finished the exam within the specified time and were able to total their scores before leaving the room. Students commented that the test format allowed them to know the correct answers immediately, rather than trying to remember items to look up in their textbook once they got home. Many stated they had learned a great deal and would remember the answers because of explanations their peers provided during the discussion.

Comparison Study of Final Exam Scores

To assess whether the collaborative testing method improved final examination results (learning), exam scores from the initial IF-AT semester (spring 2005) and three subsequent semesters in which the technique was used (fall 2007, spring 2008, and fall 2008) were compared with those of two previous semesters (fall 2003 and fall 2004) in which traditional individual multiple-choice final examinations were administered. A causal-comparative design (Gall, Gall, & Borg, 2006) was used. This design is appropriate for educational evaluation research in which causal associations among variables are evaluated by comparing samples that differ on a critical variable (in this case, the type of testing). Final exam percentage-correct scores, representing learning, were the dependent variable.

Two instructors (the principal investigators of this study) were consistently present in the course for all selected semesters. Classes were made up of all seniors in the baccalaureate program, with numbers of students varying from semester to semester. The cohorts were assumed to be essentially similar, since each consisted of all nursing students who had progressed through the rigorous program to their senior year. Because the entire student population was tested each semester, random selection was not appropriate.

Descriptive statistics (range, mean, and standard deviation) for the final examination percentage scores by year and semester were tabulated and are presented in the Table. A one-way analysis of variance (ANOVA) for comparison of independent means was performed on the percentage-correct scores from the identified semesters. Findings of the one-way ANOVA (F = 136.19, p < .0001) indicate that mean percentage scores differed significantly across the six semesters. The Tukey HSD post-hoc test for multiple comparisons indicated that the traditional exam mean scores (fall semesters 2003 and 2004) were significantly lower than those of all semesters in which collaborative testing was employed. The spring 2005 mean percentage exam score for 39 students was significantly higher than that of any other group; faculty speculated that this was a highly academically successful cohort.
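As a rough illustration of the analysis just described, the sketch below runs a one-way ANOVA followed by a Tukey HSD comparison in Python. The score arrays are randomly generated stand-ins shaped loosely like the values in the Table, since the study's raw data are not published.

```python
# Illustrative sketch only: placeholder scores, not the study's data.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
groups = {  # semester -> simulated percentage-correct scores
    "fall03": rng.normal(84.1, 6.6, 56),
    "fall04": rng.normal(78.4, 6.5, 62),
    "spring05": rng.normal(96.5, 1.1, 39),
    "fall07": rng.normal(90.9, 2.8, 64),
    "spring08": rng.normal(94.0, 2.4, 37),
    "fall08": rng.normal(92.9, 1.5, 62),
}

# One-way ANOVA across the six semesters.
f_stat, p_val = stats.f_oneway(*groups.values())
print(f"F = {f_stat:.2f}, p = {p_val:.4g}")

# Tukey HSD post-hoc test for all pairwise semester comparisons.
scores = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(scores, labels))
```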

Evaluation of the Testing Method by Students

Evaluation of the collaborative testing method by students was achieved in two ways. First, using a descriptive design, students were required to complete an anonymous evaluation of the method immediately after finishing the final exam. The faculty-developed form consisted of six short, Likert-type response items, with scores ranging from 5 (strongly agree) to 1 (strongly disagree). For the 39 students in the initial class, this evaluation confirmed that the format overwhelmingly helped them learn more than they had known coming into the exam (M = 4.77, SD = 0.478), encouraged critical thinking (M = 4.97, SD = 0.158), assessed personal learning (M = 4.87, SD = 0.334), was perceived as fair (M = 4.79, SD = 0.404), and should be used in the future (M = 4.90, SD = 0.303). Furthermore, faculty observed students during the discussion period and found them engaging in critical thinking.

Responses to an open-ended question asking for additional comments yielded interesting and positive responses. They included: "I feel as though I learned a lot from this exam. We had to discuss rationale, which facilitated learning. In addition, I had to study a lot so as not to disappoint other group members." and "I liked this method. It makes you want to try even harder because you don't want to let the team down!"

These findings are consistent with research showing that decreased anxiety improves performance (Beggs et al., 2011; Lusk & Conklin, 2003; Mitchell & Melton, 2003). Students may still bring some anxiety to the exam, but the discussion and rationales provided by peers during collaborative testing facilitate student achievement and performance.

During one semester (fall 2006), students felt little trust and friendship with one another and chose to take the traditional individual exam using Opscan forms, despite being aware of the potential differences in final exam scores using the individual versus collaborative testing method. Scores for this section were considerably lower than for those sections that used the collaborative format.

Student Evaluation of Participation of Other Students

Because group collaboration is inherent in collaborative testing, faculty designed a short questionnaire for the evaluation of student participation in the IF-AT process. This evaluation method was explained to students two weeks prior to the exam to give students time for serious preparation. Students were required to assess the contributions of others in their group immediately following the exam. All students were rated "5" (excellent) on all participation/cooperation items by their peers, with the exception of two individual ratings of "4" (very good). The results indicate that students took the method seriously, prepared earnestly, participated well, and engaged in the group process. The ratings resulted in no deductions for student grades.

Recommendations for Collaborative Testing

Based on the initial implementation of the IF-AT collaborative testing method, the course faculty generated a list of recommendations for seating arrangements and testing procedures for future moderators. Recommendations included: a) clearly explain all procedures to students in advance of the exam, b) assign groups, as self-selected groups might assign a portion of the content to certain students, and c) stress the need for complete honesty and integrity in students' evaluations of other group members, with the understanding that peer ratings will be confidential.

Faculty found that the collaborative testing process also yields benefits in test development. Although answer options are generally easy to write, the IF-AT process precludes options such as "none of the above" or "all of the above," and the flow of question stems and the layout of the choices must make sense to the reader and fit the pattern of the IF-AT answer sheets; these constraints strengthen the exam. The process continues to be used for both midterm and final exams in the Nursing: Health Enhancement course. Students' comments continue to focus on how the discussion process during testing fosters their learning, their ability to think on their feet, and their capacity to work well in groups.

RECOMMENDATIONS FOR FUTURE RESEARCH

This project raises several questions for further research.

* How does group composition (homogeneous or heterogeneous based on past performance) affect student scores? Several researchers (Giuliodori et al., 2008; Skidmore & Aagaard, 2004; Slusser & Erickson, 2006) have examined this issue. Would the results differ for nursing students who have higher scores on other course components compared with classmates who have lower scores?

* How does collaborative testing with immediate error correction affect long-term content retention? Does it contribute to differences on student NCLEX results?

* Does test anxiety affect performance differently during individual and collaborative testing in highly competitive nursing courses?

CONCLUSION

In this project and study, the IF-AT testing format demonstrated a best practice in classroom assessment. The exam served as a learning tool as well as an evaluative tool (Boud, Cohen, & Sampson, 1999; McKeachie, 2002). The answer form allows students to "scratch off" an answer and receive immediate feedback. Research has shown that when assessments are used as learning tools, learning increases when students are given immediate feedback and the opportunity for instantaneous error correction (Brosvic, Epstein, Dihoff, & Cook, 2006; Dihoff et al., 2004). Immediate feedback, combined with immediate error correction, leads to longer retention of the correct information (Brosvic, Epstein, Cook, & Dihoff, 2005). Use of the collaborative testing method, combined with group assessment, afforded students in this study the opportunity to engage actively with the content and with one another while obtaining instant feedback and error correction. This group exam experience contributed to group collaboration, increased learning, and enhanced critical thinking.

doi: 10.5480/11-507

REFERENCES

Beggs, C., Shields, D., & Goodin, H.J. (2011). Using guided reflection to reduce test anxiety in nursing students. Journal of Holistic Nursing, 29(2), 140-147.

Boud, D., Cohen, R., & Sampson, J. (1999). Peer learning and assessment. Assessment and Evaluation in Higher Education, 24(4), 413-426.

Brosvic, G., Epstein, M., Cook, M., & Dihoff, R. E. (2005). Efficacy of error for the correction of initially incorrect assumptions and of feedback for the affirmation of correct responding: Learning in the classroom. Psychological Record, 55(3), 401-418.

Brosvic, G., Epstein, M., Dihoff, R., & Cook, M. (2006). Acquisition and retention of Esperanto: The case for error correction and immediate feedback. Psychological Record, 56(2), 205-218.

Cortright, R. N., Collins, H. L., Rodenbaugh, D. W., & DiCarlo, S. E. (2003). Student retention of course content is improved by collaborative testing. Advances in Physiology Education, 27(3), 102-108. doi:10.1152/advan.00041.2002

Dihoff, R., Brosvic, G., & Epstein, M. (2003). The role of feedback during academic testing: The delay retention effect revisited. Psychological Record, 53(4), 533-548.

Dihoff, R., Brosvic, G., Epstein, M., & Cook, M. (2004). Provision of feedback during preparation for academic testing: Learning is enhanced by immediate but not delayed feedback. Psychological Record, 54(2), 207-231.

Epstein, M., Epstein, B., & Brosvic, G. (2001). Immediate feedback during academic testing. Psychological Reports, 88(3), 889-894.

Epstein, M., Lazarus, A., Calvano, T., Matthews, K., Hendel, R., Epstein, B., & Brosvic, G. (2002). Immediate feedback assessment technique promotes learning and corrects inaccurate first responses. Psychological Record, 52(2), 187-201.

Fink, L. D. (2003). Creating significant learning experiences: An integrated approach to designing college courses. San Francisco, CA: Jossey-Bass.

Gall, M., Gall, J., & Borg, W. (2006). Educational research: An introduction (8th ed.). Boston, MA: Allyn & Bacon.

Giuliodori, M. J., Lujan, H. L., & DiCarlo, S. E. (2008). Collaborative group testing benefits high and low-performing students. Advances in Physiology Education, 32(4), 274-278. doi:10.1152/advan.00101.2007

Kapitanoff, S. H. (2009). Collaborative testing: Cognitive and interpersonal processes related to enhanced test performance. Active Learning in Higher Education, 10(1), 56-70. doi:10.1177/1469787408100195

Lusk, M., & Conklin, L. (2003). Collaborative testing to promote learning. Journal of Nursing Education, 42(3), 121-124.

McKeachie, W. (2002). McKeachie's teaching tips: Strategies, research, and theory for college and university teachers. Boston, MA: Houghton Mifflin.

Mitchell, N., & Melton, S. (2003). Collaborative testing: An innovative approach to test taking. Nurse Educator, 28(2), 95-97.

Oblinger, D. (2003). Boomers, gen-Xers, and millennials: Understanding the 'new students.' Educause Review, 38(4), 36-47.

Rice, J. (2007). The effect of group testing and selected demographic variables on student performance on written examinations. Unpublished doctoral dissertation, University of Kansas, Lawrence.

Sandahl, S. S. (2010). Collaborative testing as a learning strategy in nursing. Nursing Education Perspectives, 30(3), 171-175.

Skidmore, R. L., & Aagaard, L. (2004). The relationship between testing condition and student test scores. Journal of Instructional Psychology, 31(4), 304-313.

Slusser, S. R., & Erickson, R.J. (2006). Group quizzes: An extension of the collaborative learning process. Teaching Sociology, 34(3), 249-262.

Steele, S. (2006). Group test review and analysis: Learning through examination. Journal of Nursing Education, 45(2), 95-96.

Wiggs, C. M. (2011). Collaborative testing: Assessing teamwork and critical thinking behaviors in baccalaureate nursing students. Nurse Education Today, 31(3), 279-282.

Wink, D. (2004). Effects of double testing on course grades in an undergraduate nursing course. Journal of Nursing Education, 43(2), 138-143.

Susan D. Peck, PhD, RN, GNP-BC, CHTP/I, and Joan L. Stehle Werner, PhD, RN, FAAETS, are professors emerita at the University of Wisconsin-Eau Claire College of Nursing and Health Sciences. Donna M. Raleigh, MST, is emerita at the University of Wisconsin-Eau Claire Learning and Technology Services. Contact Dr. Peck for more information at pecksd@uwec.edu.

Table: Descriptive Statistics for Final Examination (Percentage Correct) Scores by Year and Semester

Semester       N    Range          Mean    SD

Fall 2003      56   71.60-97.73    84.10   6.61
Fall 2004      62   59.18-91.84    78.36   6.46

Switch to IF-AT Format

Spring 2005    39   94.67-98.33    96.51   1.05
Fall 2007      64   76.00-98.00    90.88   2.83
Spring 2008    37   91.60-98.60    94.01   2.35
Fall 2008      62   90.00-95.00    92.90   1.49