Teaching critical management skills to senior nursing students: videotaped or interactive hands-on instruction?

ABSTRACT This study examined and compared the effectiveness of videotape training versus hands-on instruction in preparing senior nursing students to respond to emergency clinical situations. Fourth-year nursing students (n = 27) were randomly assigned to one of three groups: one group received videotaped instruction, one group engaged in a hands-on experience, and one group, a control, received no instruction. Students were evaluated using a three-station objective structured clinical examination that involved high-fidelity simulations. Differences between the control and the two instructional groups were significant (p = .007); however, there was no significant difference between the two types of instruction. It was concluded that instruction on crisis management with a high-fidelity simulator, using either video or hands-on instruction, can result in a significant improvement in performance.

Key Words Nursing Education--Nursing Student--Critical Incident--High-Fidelity Simulation

HOW HEALTH CARE PROFESSIONALS RESPOND TO EMERGENCY CLINICAL SITUATIONS CAN BE THE DIFFERENCE BETWEEN LIFE AND DEATH FOR A PATIENT. But how do nursing students learn the crisis management skills necessary to respond appropriately when providing patient care? When emergencies arise in a clinical context, the risks involved make it unlikely that nursing students will be given a responsible role. Thus, it is not surprising that students approach emergency situations with considerable fear, which may compromise patient care and impair their performance in the clinical setting (Baxter, 2003; Eva, 2002).

Student concerns about causing harm to the patient, augmented by diminished learning opportunities, have provided the impetus for the use of high-fidelity manikins and simulations (HFS) (Bearnson & Wiker, 2005; Henneman & Cunningham, 2005). These realistic, computer-controlled manikins provide immediate, objective feedback and the opportunity to learn and practice with acute clinical scenarios. Learning in this manner often reduces fear, increases confidence, and ensures that there is no risk to the patient (Henneman & Cunningham; Reznick & MacRae, 2006).

Although high-fidelity simulators can be used for relatively uncomplicated skills such as intubation or auscultation (Issenberg, McGaghie, Petrusa, Lee, & Scalese, 2005), their capabilities are best illustrated when applied to learning more complex clinical skills. A systematic review of HFS by Issenberg et al. located 109 studies that employed either an experimental or quasi-experimental design and used "a simulator as an educational assessment or intervention with learner outcomes" or used "simulation as an educational intervention" (p. 10). This review concluded that high-fidelity simulation was effective for teaching complex clinical skills, with some qualifiers: students should have the opportunity to debrief following the experience, receive faculty feedback, engage in the learning activity repeatedly, and participate in individualized learning.

Although there is a clear role for simulators, it is not clear how they can best be used in an instructional setting. It seems almost self-evident that the opportunity to actively practice skills under supervision would lead to optimal learning. Much has been written about the importance of active learning and of situated cognition, that is, learning in the actual application environment (Bradley, 2006; Maran & Glavin, 2003). Surprisingly, recent evidence suggests that pure active discovery learning, where the student actively works on goals with little or no guidance, may be less effective than more structured, guided discovery (Mayer, 2004). Further, the benefits that do exist may only apply to students who have relatively high prior knowledge in the domain (Kirschner, Sweller, & Clark, 2006).

Extrapolating from Mayer's (2004) findings to the clinical setting, the optimal approach to instruction may depend on the task being taught. For some clinical skills involving motor dexterity, repeated practice in the use of instruments with relatively low-fidelity simulation may be optimal. As one example, Matsumoto, Hamstra, Radomski, and Cusimano (2002) taught senior medical students cystoscopy and urethroscopy using low- and high-fidelity simulators. A control group learning from text only scored 78 percent; the high-fidelity group scored significantly better at 93 percent, while the low-fidelity group did nearly as well at 90 percent.

If the skill involves learning a sequence of steps to mastery, it may be as effective to observe and study an expert's performance as to have the opportunity to practice with feedback. Gilbart, Hutchison, Cusimano, and Regehr (2000) used a surgery objective structured clinical examination (OSCE) (MedInfo Consulting, 2009) to compare two groups that learned trauma management on a simulator or in a seminar against a control group that had no instruction. Students were expected to observe the patient, collect data, and respond as necessary while being evaluated. The instructional groups performed better than the control group both on a trauma station similar to the instruction and on a dissimilar trauma station.

Scherer, Bruce, and Runkawatt (2007) compared a faculty-run seminar to a simulation for teaching the management of atrial arrhythmias to advanced practice nurses; based on a written knowledge test, both methods were equally effective. A weakness of this study is the use of a written test to evaluate outcomes, rather than an OSCE where students could demonstrate their knowledge and how to apply it. Morgan, Cleave-Hogg, McIlroy, and Devitt (2002) compared video-based to simulator-based instruction in learning operating room management of a critical event. Although students preferred the simulator, there was no difference in knowledge of the procedure as assessed by a 12-point checklist. Wenk et al. (2009) examined the performance of two groups learning rapid sequence induction (RSI), a standardized procedure for the anesthesia of patients. According to Wenk et al., RSI "differs from a standard induction in various ways but is well defined as a standardized algorithm and requires several preparatory actions to guarantee a safe induction" (p. 161). One group worked with an instructor and a high-fidelity simulator; the second group discussed the approach to the same case in a problem-based learning session. When participants were tested several weeks later using the simulator, the simulator-based group was more confident in its skills but scored only slightly, and not significantly, higher on observed performance.

The present study was an attempt to extend these findings by comparing the responses of two groups to a critical incident. One group had hands-on instruction provided by an instructor who used HFS to demonstrate the patient's physiological changes. The second group watched a video recording of the same instructor teaching students how to respond to and manage the incident. The only difference between these groups was the manner in which they received the instruction; the content was identical for each group. Both groups were compared to a control group that had no new instruction on responding to or managing a critical incident. This group relied on prior instruction on the principles of crisis management taught during the course of the nursing degree. The video recording provided a format that is cheaper to produce but lacks the pedagogical advantages of one-on-one, hands-on instruction using HFS.

Choice of the outcome measure in simulation studies involves some compromise. Ideally, instruction on dealing with crisis situations should be tested in actual crisis situations to demonstrate that the simulator can be used to teach skills applicable to the real setting. However, such a study would be difficult and costly to conduct, and may have so many sources of contamination that it would be difficult to demonstrate a treatment effect. For these reasons, many studies test students' skills on the simulator.

Testing on the simulator contains inherent biases. Simulator training may lead to better performance from increased familiarity, but these gains may not transfer well to the real world. For example, studies of Harvey, a lifelike heart sound simulator, show impressive gains in accuracy, often exceeding 40 percent, when students are tested on the simulator (Woolliscroft, Calhoun, Tenhaken, & Judge, 1987). But the one study examining transfer to real patient problems showed a far more modest gain of 2.3 percent (Ewy et al., 1987). Thus, while testing on the simulator is perhaps the only practical way to proceed, it must be recognized that this is an intermediate outcome and may be inherently biased in favor of the simulator instruction group.

A potential advantage of the simulator is that students gain confidence in their skills in dealing with crisis situations, which suggests that self-assessed confidence is one important outcome. On the other hand, increased confidence must be accompanied by increased ability to perform. This study addressed the primary question: How effective is videotaped training versus face-to-face, hands-on instruction when compared to standard clinical teaching in preparing senior nursing students to respond to real-life emergency clinical situations, as assessed by performance on an OSCE?

Method PARTICIPANTS AND SETTING This pilot study took place over a two-day period in the clinical learning and simulation center at the McMaster University School of Nursing (SON) in southwestern Ontario. Senior-year nursing students, enrolled in their first undergraduate degree program, were recruited through the use of posters placed in the SON and via internal email postings. Of the 32 students who agreed to participate, 27 completed the experiment (6 control, 10 video, 11 interactive). All participants were asked to complete a demographic questionnaire that included questions about past clinical experiences and the number of patients managed with a critical incident.

Three students from the control group failed to complete the three OSCE stations and two failed to show up on the day of testing. All participants received an honorarium for their participation, which lasted approximately 1.5 to 3 hours (depending on the group to which they were assigned). Ethics approval was granted for this study by the University Research Ethics Review Board and permission to access nursing students was received from the SON.

RESEARCH DESIGN Students were randomly allocated to one of three groups: control, video, or interactive learning. The principles of crisis resource management (Gaba, Howard, Fish, Smith, & Sowb, 2001) were used as a framework to develop six objectives that informed the interventions and the evaluation criteria used in the OSCE:

* Make safe clinical decisions based on collection and interpretation of patient data

* Learn appropriate nursing actions

* Learn when to collaborate with other members of the health care team

* Learn to communicate effectively (verbally and nonverbally) with the patient and health care team

* Learn how to manage a crisis

* Provide safe patient care.

More complex aspects of crisis management in team settings, such as communication hierarchies, shared mental models, or the effects of stress on cognition, were not studied.

PROCEDURE Students in the control group (n = 6) were introduced to the equipment that they would encounter throughout the course of the testing via a seven-minute videotaped presentation. Students had the opportunity to ask the research coordinator questions prior to the testing. These students relied on their previous four years of study (which included concepts taught to the intervention groups) to guide them through the clinical simulations they would encounter in the testing.

After receiving the same orientation video, students in the video group (n = 10) were shown a second, 30-minute video of a high-fidelity manikin displaying signs of a pending myocardial infarction. Students observed the data generated by the cardiac monitor: oxygen saturation, blood pressure, and heart rate. They also observed a final-year student and a nurse faculty member (not a member of the research team) gathering relevant patient data and discussing the patient findings. The video showed the instructor asking questions to stimulate the student's thinking and to promote discussion of potential hypotheses.

Once the student and faculty member had determined that the patient was experiencing a myocardial infarction, they proceeded to treat the patient based on an airway, breathing, and circulation (ABC) approach. The faculty member also discussed the potential role of other health care providers (HCPs), including the respiratory technician, physician, experienced nurse, and electrocardiogram technician. The videotape was immediately followed by a 15-minute debriefing session led by the faculty member featured in the video. Students had an opportunity to discuss the patient situation and ask questions.

After viewing the orientation video, students in the interactive group (n = 11) received instruction for a period of 30 minutes with the faculty member who was featured in the video. In groups of three, students surrounded a hospital bed occupied by a high-fidelity manikin. The scenario, the same as that viewed by the video group, reflected a pending cardiac arrest, and students had to respond appropriately to this emergency situation. The simulator was programmed for a 12-minute scenario, which could be stopped at any time to provide instruction. During the entire instructional period (45 minutes), each student had the opportunity to collect data by talking to the patient, looking at the cardiac monitor, or using a blood pressure cuff, pulse oximeter, or stethoscope.

Once students had gathered data and determined their nursing diagnoses, they provided care using appropriate medications and equipment, which included an intravenous line, nitroglycerin spray, morphine, and an oxygen mask and tubing. The faculty member also discussed the potential use of other HCPs. Students were encouraged to ask questions of the faculty member and to pose questions to the other two students at the bedside. A 15-minute debriefing session led by the faculty instructor followed this intervention.

OSCE TESTING For each group, testing followed immediately after the intervention. Each student completed three OSCE stations that were specifically designed for the study. To establish face and content validity, five nurse faculty with acute care experience reviewed the scenarios used in each station for consistency with best practice guidelines. Each scenario was revised based on their feedback.

Stations were set up to resemble a patient room on a hospital medical unit; all rooms included the same equipment and resources. Upon arrival at the station/patient's room, students were instructed to listen to an audiotaped shift report, which included the patient's name, medical history, medical diagnoses, current physiological state, intravenous (when applicable) medications, and diagnostic tests completed and to be completed. Prior to entering the room, students were told to review the patient's chart located in a binder. Once the taped report had ended and students felt ready to proceed, they entered the patient's room and began to respond to the scenario. To increase the realism of the simulation, voice actors seated at the head of the bed, hidden behind a curtain, responded to questions and provided sound effects when necessary. All voice actors were introduced to the scenario the day before the testing; they had the opportunity to run through their scenario, ask questions, and practice responses to a variety of questions students might pose.

All students proceeded through three stations: a pending cardiac arrest, a pulmonary embolism, and an exacerbation of chronic obstructive pulmonary disease. Students had 15 minutes to interact with each simulation. Because it was important to determine how students responded to a pending critical incident, students were given a call bell to use if they wanted to request assistance from another HCP. When called, the HCP(s) would enter the room and ask, "What seems to be the matter?" This provided students with the opportunity to verbalize what they felt was happening to the patient physiologically and what needed to be done.

To ensure completion of the objectives, team members worked collaboratively with students to address the client issues, which included communication and collaboration with team members. Students debriefed with the multidisciplinary team for a period of five minutes before proceeding to the next station.

EVALUATION To record how well students performed, two independent assessors for each scenario (faculty experienced in conducting OSCEs) were in the room, out of direct sight of students. Assessors were introduced to the evaluation tool prior to the testing and provided with a videotape of students performing the OSCE. The six assessors used a scoring sheet to assess students' performance. To increase consistency, assessors were told what types of actions to expect of students, based on best practice guidelines for the specific medical condition enacted at the OSCE station. They were also given the correct objective data that should be collected for each patient situation. When the tool was tested prior to implementation, there was 80 percent agreement across assessors. Although students were aware of the assessors' presence, they were not allowed to ask them questions. At no time did assessors discuss the situation or students' performance with one another.

MEASUREMENT Data were gathered using a scale developed to evaluate students in five key areas: decision-making, collaboration, communication, crisis management, and global impression of performance. A seven-point Likert scale (1 = poor, 7 = excellent) was used for each construct. A total of seven items were scored; decision-making received three scores based on three elements (assessment, synthesis, and nursing action). To examine content and face validity, the instrument was reviewed by clinical faculty with extensive experience in emergency medicine and by fourth-year nursing students.

DATA ANALYSIS Analysis of variance followed by a Tukey test was used to compare the OSCE scores for the control, video, and interactive groups. All statistical analyses were performed using SPSS 15.0. Interrater and interstation reliability were computed using a Generalizability Theory analysis (Streiner & Norman, 2008), with the six raters nested in the three OSCE stations.
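
For readers who wish to reproduce this style of analysis, the following is a minimal sketch of a one-way ANOVA with a Tukey post hoc test in Python (the study itself used SPSS 15.0). The per-student scores below are hypothetical placeholders generated to match the reported group means, standard deviations, and sample sizes; the study's raw data are not published.

    # Sketch of the reported analysis: one-way ANOVA across the three groups,
    # followed by Tukey's HSD post hoc test. The data are hypothetical.
    import numpy as np
    from scipy.stats import f_oneway
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    rng = np.random.default_rng(0)
    # Hypothetical per-student mean OSCE scores (7 items x 3 stations),
    # drawn to match the reported group means, SDs, and sample sizes.
    control = rng.normal(3.64, 1.22, 6)
    video = rng.normal(4.74, 0.88, 10)
    interactive = rng.normal(5.04, 0.48, 11)

    f_stat, p_val = f_oneway(control, video, interactive)
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3f}")  # paper reports F = 6.01, p = .007

    scores = np.concatenate([control, video, interactive])
    groups = ["control"] * 6 + ["video"] * 10 + ["interactive"] * 11
    print(pairwise_tukeyhsd(scores, groups, alpha=0.05))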

Results The generalizability analysis yielded an interrater reliability of 0.67 and an average correlation across stations of 0.48. The internal consistency (reliability across items) was 0.98. Although not particularly relevant, since the goal of the study was not to differentiate levels of performance between students, the reliability of the overall test was 0.73.
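
These figures are internally consistent. Assuming the overall test reliability was obtained by treating the three stations as parallel measurements (an assumption; the paper does not state the formula), the Spearman-Brown prophecy formula applied to the average interstation correlation reproduces the reported value:

    \[ R_{k\ \text{stations}} = \frac{k\bar{r}}{1 + (k-1)\bar{r}} = \frac{3 \times 0.48}{1 + 2 \times 0.48} \approx 0.73 \]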

With seven scores for each of three stations, the mean OSCE scores for the three groups were: control, 3.64 (SD = 1.22); video, 4.74 (SD = 0.88); and interactive, 5.04 (SD = 0.48). Overall, there was a significant difference between the three groups (F = 6.01, df = 2, p = .007). Post hoc analysis (Newman-Keuls) showed a statistically significant difference between the control group and the video group, with an effect size (ES) of 1.29, and between the control group and the interactive group, ES = 1.64. However, there was no significant difference between the two interventions, ES = 0.35.
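
The reported effect sizes are consistent with a Cohen's d standardized by the pooled within-group standard deviation across all three groups (an inference; the paper does not state the formula used). From the reported SDs and sample sizes, for the control-versus-video comparison:

    \[ SD_{\text{pooled}} = \sqrt{\frac{5(1.22)^2 + 9(0.88)^2 + 10(0.48)^2}{24}} \approx 0.83, \qquad d = \frac{4.74 - 3.64}{0.83} \approx 1.32 \]

which matches the reported ES of 1.29 to within rounding of the group statistics; the same computation yields approximately 1.68 and 0.36 for the other two comparisons, against reported values of 1.64 and 0.35.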

There was no significant relation between OSCE performance and prior student clinical experience. Correlation with the question "Have you worked in the ICU?" was 0.14, "Worked in the CCU?" was 0.11, "Worked in Emergency?" was 0.17, and the number of patients managed with a critical incident was 0.25.

Discussion This experiment demonstrated that students who learned an approach to critical care management, either by observing a videotaped expert performance or by participating in an interactive learning session with HFS and a qualified instructor providing feedback, showed a considerable learning advantage over a control group of students who had to rely on their preexisting knowledge and skills. However, there was only a small difference in test scores between the video and interactive learning groups. The effect size for the difference between the two interventions was 0.35, which Cohen (1988) would place in the small-to-moderate range. Although the treatment effects are very large in comparison with typical effects of educational interventions (Lipsey & Wilson, 1993), this must be viewed as a best case, with testing occurring immediately after instruction on the simulator and no opportunity to assess transfer to real critical care situations.

This study is consistent with previous studies (Gilbart et al., 2000; Scherer et al., 2007; Wenk et al., 2009) in showing that video-based learning can provide nearly equivalent skill acquisition to guided, simulator-based instruction, as measured in a simulated setting. However, despite the large observed gains, it may be that both interventions are educationally less than optimal. In the one case, students saw a video of expert performance, but had no opportunity for practice; in the other case, students had some opportunity to practice, but without first observing a criterion performance. It would seem that a combination of both instructional strategies, with ample opportunity to practice on multiple cases, may be more appropriate. Further, simulation may be most useful to teach and practice complex skill sets such as team management and dynamic treatment algorithms, neither of which was addressed in this study.

Conclusion One limitation of this study is the small sample size. Although the sample was adequate for a pilot study, a full study would require approximately 67 participants per group for the effects to be statistically detectable. A second limitation is that students were tested at only three stations; however, an overall reliability of 0.73 is acceptable. Future studies should provide students with additional opportunities to practice on multiple cases prior to testing.
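
As an illustration of how such a sample-size figure is obtained, the sketch below solves a standard two-group power calculation with statsmodels. The paper does not report the effect size, alpha, or power underlying its figure of 67 per group; the d = 0.49, two-sided alpha of .05, and 80 percent power used here are hypothetical values that happen to reproduce a similar number.

    # Sketch of a two-group sample-size (power) calculation.
    # The effect size d = 0.49 is a hypothetical assumption,
    # not a value reported in the paper.
    import math
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(
        effect_size=0.49, alpha=0.05, power=0.80, alternative='two-sided')
    print(math.ceil(n_per_group))  # ~67 per group under these assumptions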

There is also the possibility that the sampling strategy led to selection bias. The posters were placed in the SON and those students who were most interested and most motivated may have volunteered to participate. One unexpected finding in this study was that students who had prior exposure to critical care incidents did not score better than those who did not have exposure. This finding replicates at least one other study, where no relation was found between students' self-reported experience and performance on a three-station anesthesia simulation OSCE (Morgan & Cleave-Hogg, 2002).

There are a number of possible explanations for this finding. The students may, paradoxically, have been unable to transfer skills learned during real-world experiences back to the simulation; they may not have acquired sufficient skills while working as part of a team in those settings; or the skills may simply not have been retained. Studies of advanced cardiac life support training suggest that these specific skills decay considerably over a period of one year (O'Steen, Kee, & Minick, 1996).

This pilot study has demonstrated that relatively brief instruction in the management of critical care situations can lead to large gains in skills, as assessed in a similar simulated situation. Presentation of information by video resulted in gains comparable to those observed with an interactive session with a simulator. It remains to be shown whether these skills can be transferred to the real-world clinical setting.

References

Baxter, P. (2003). The development of nurse decision making: A case study of a four year baccalaureate nursing programme. Hamilton, ON, Canada: McMaster University.

Bearnson, C. S., & Wiker, K. M. (2005). Human patient simulators: A new face in baccalaureate nursing education at Brigham Young University. Journal of Nursing Education, 44(9), 421-425.

Bradley, P. (2006). The history of simulation in medical education and possible future directions. Medical Education, 40(3), 254-262.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Erlbaum.

Eva, K. W. (2002). Teamwork during education: The whole is not always greater than the sum of the parts. Medical Education, 36(4), 314-316.

Ewy, G. A., Felner, J. M., Juul, D., Mayer, J. W., Sajid, A. W., & Waugh, R. A. (1987). Test of a cardiology patient simulator with students in fourth-year electives. Journal of Medical Education, 62(9), 738-743.

Gaba, D., Howard, S., Fish, K., Smith, B., & Sowb, Y. (2001). Simulation-based training in anesthesia crisis resource management (ACRM): A decade of experience. Simulation & Gaming, 32(2), 175-193. doi: 10.1177/104687810103200206

Gilbart, M. K., Hutchison, C. R., Cusimano, M. D., & Regehr, G. (2000). A computer-based trauma simulator for teaching trauma management skills. American Journal of Surgery, 179(3), 223-228. doi: 10.1016/S0002-9610(00)00302-0

Henneman, E. A., & Cunningham, H. (2005). Using clinical simulation to teach patient safety in an acute/critical care nursing course. Nurse Educator, 30(4), 172-177.

Issenberg, S. B., McGaghie, W. C., Petrusa, E. R., Lee, G. D., & Scalese, R. J. (2005). Features and uses of high-fidelity medical simulations that lead to effective learning: A BEME systematic review. Medical Teacher, 27(1), 10-28. doi: 10.1080/01421590500046924

Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75-86. doi: 10.1207/s15326985ep4102_1

Lipsey, M. W., & Wilson, D. B. (1993). The efficacy of psychological, educational, and behavioral treatment: Confirmation from meta-analysis. American Psychologist, 48(12), 1181-1209. doi: 10.1037/0003-066X.48.12.1181

Maran, N. J., & Glavin, R. J. (2003). Low- to high-fidelity simulation: A continuum of medical education? Medical Education, 37(Suppl. 1), 22-28. doi: 10.1046/j.1365-2923.37.s1.9.x

Matsumoto, E. D., Hamstra, S. J., Radomski, S. B., & Cusimano, M. D. (2002). The effect of bench model fidelity on endourological skills: A randomized controlled study. Journal of Urology, 167(3), 1243-1247.

Mayer, R. E. (2004). Should there be a three-strikes rule against pure discovery learning? The case for guided methods of instruction. American Psychologist, 59(1), 14-19. doi: 10.1037/0003-066X.59.1.14

MedInfo Consulting. (2009). What is objective structured clinical examination, OSCEs? Retrieved from www.oscehome.com/What_is_Objective-Structured-ClinicalExamination_OSCE.html

Morgan, P. J., & Cleave-Hogg, D. (2002). Comparison between medical students' experience, confidence and competence. Medical Education, 36(6), 534-539. doi: 10.1046/j.1365-2923.2002.01228.x

Morgan, P. J., Cleave-Hogg, D., McIlroy, J., & Devitt, J. H. (2002). Simulation technology: A comparison of experiential and visual learning for undergraduate medical students. Anesthesiology, 96(1), 10-16.

O'Steen, D. S., Kee, C. C., & Minick, M. P. (1996). The retention of advanced cardiac life support knowledge among registered nurses. Journal of Nursing Staff Development, 12(2), 66-72.

Reznick, R. K., & MacRae, H. (2006). Teaching surgical skills: Changes in the wind. New England Journal of Medicine, 355(25), 2664-2669.

Scherer, Y. K., Bruce, S. A., & Runkawatt, V. (2007). A comparison of clinical simulation and case study presentation on nurse practitioner students' knowledge and confidence in managing a cardiac event. International Journal of Nursing Education Scholarship, 4(1), Article 22.

Streiner, D. L., & Norman, G. R. (2008). Health measurement scales: A practical guide to their development and use (4th ed.). Oxford, UK: Oxford University Press.

Wenk, M., Waurick, R., Schotes, D., Gerdes, C., Van Aken, H. K., et al. (2009). Simulation-based medical education is no better than problem-based discussions and induces misjudgment in self-assessment. Advances in Health Sciences Education Theory and Practice, 14(2), 159-171. doi: 10.1007/s10459-008-9098-2

Woolliscroft, J. O., Calhoun, J. G., Tenhaken, J. D., & Judge, R. D. (1987). Harvey: The impact of a cardiovascular teaching simulator on student skill acquisition. Medical Teacher, 9(1), 53-57. doi: 10.3109/01421598709028980

About the Authors The authors are faculty at McMaster University, Hamilton, Ontario, Canada. Pamela Baxter, PhD, RN, is an associate professor in the School of Nursing, where Noori Akhtar-Danesh, PhD, is also an associate professor and Janet Landeen, PhD, RN, is assistant dean of the School of Nursing. Geoff Norman, PhD, a professor in clinical epidemiology and biostatistics, is assistant dean of education in the Department of Educational Research and Development. This project received funding from the Ontario Ministry of Health and Long-Term Care, Grant No. 06259. For more information, contact Dr. Baxter at baxterp@mcmaster.ca.