
Glasgow Coma Scale: Generating Clinical Standards.

Clinical evaluation of neurologic status has included the Glasgow Coma Scale (GCS) since its introduction in 1974. (1) Teasdale and Jennett (1,2) introduced the GCS to provide a clinical scale for assessment of "the depth and duration of impaired consciousness and coma." They designed the GCS so that clinicians with minimal training in its use could describe patients with impaired consciousness practically and consistently in an array of settings. (1) Appropriate use of the GCS is essential because these assessments may be the foundation for changes in treatment, as intended by Teasdale and Jennett. (1) Although the GCS has achieved international acceptance and continues to be widely used, several authors report that it may be used inconsistently and that data may be misinterpreted. (3,4) There is a question as to whether the GCS is a reliable indicator, given that each clinician may conduct the assessment and interpret the results slightly differently. (5) There have also been attempts to replace or supplement it with alternative tools such as the Full Outline of UnResponsiveness score. (6-8) The neurological examination is notoriously subjective, and the results depend a great deal on the clinician's skill both in examination technique and in interpreting the findings.

Reliability is the consistency with which an instrument measures what it is intended to measure. One factor in assessment of reliability is "equivalence (interrater reliability)." Equivalence evaluates whether an instrument yields measurements of the same traits in the same subjects (2 observers using the same instrument to measure the same thing simultaneously). (9) Interrater reliability of the GCS has been evaluated previously. (10-14) Across many of these studies, a 1-point difference in the sum score was often attributable to interobserver variability. A difference of more than 1 point between sum scores is therefore most likely due to a real change or difference in patient condition. This is why many clinical practice guidelines have used a change of 2 or more points in the GCS sum score as the threshold for notification and treatment. Accuracy of the GCS is high among the most experienced observers across the range of scores. Less experienced observers made more errors in GCS scores, particularly for midrange scores. Furthermore, midrange reliability depends on whether untestable features are present, as in the cases of intubation, eye swelling, or spinal cord injury. (15) A recent systematic review identified that, despite adequate reliability in good quality studies, further improvement is needed. (14) From a clinical perspective, a renewed emphasis on assessment standardization and education is essential.
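To make the scoring convention and the notification threshold above concrete, the following is a minimal sketch, not drawn from the study itself, of how a GCS sum score can be computed from its three components (eye 1-4, verbal 1-5, motor 1-6) and how a change of 2 or more points might be flagged for provider notification; the function names and example values are illustrative only.

```python
# Minimal sketch (illustrative only): computing a GCS sum score from its three
# components and flagging a clinically meaningful change, using the convention
# that a >=2-point change in the sum score exceeds the ~1-point interobserver
# variability described above.

def gcs_sum(eye: int, verbal: int, motor: int) -> int:
    """Return the GCS sum score (3-15) from component scores."""
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("component score out of range")
    return eye + verbal + motor


def meaningful_change(previous_sum: int, current_sum: int, threshold: int = 2) -> bool:
    """Flag a change of `threshold` or more points as likely a real change in
    patient condition rather than interobserver variability."""
    return abs(current_sum - previous_sum) >= threshold


if __name__ == "__main__":
    baseline = gcs_sum(eye=3, verbal=4, motor=6)   # 13
    followup = gcs_sum(eye=2, verbal=3, motor=6)   # 11
    print(baseline, followup, meaningful_change(baseline, followup))  # 13 11 True
```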

Close observation and assessment of a patient's neurologic status in the neurological intensive care unit (ICU) is critical. Clinician assessments must be made with certainty, particularly when identifying changes over time. These changes need to be reported promptly to the provider (neurologist, neurosurgeon, intensivist, or advanced practice provider). Staff observed variation in methods and skills in performing and interpreting neurological examinations in a 29-bed adult neurosurgical/neurological ICU at a quaternary medical center. A few providers also cited inconsistencies in the reports of neurologic change based on the GCS given by different experienced nurses for the same patient. Interventions (eg, urgent head computed tomography and/or mannitol bolus infusions) were ordered based on reported changes in GCS that were not consistent even among senior nursing staff. Given these observations, the purpose of this study was 3-fold: first, to examine the recent literature about the GCS; second, to evaluate the limitations and discrepancies in GCS scoring among nurses; and third, to compile evidence to use for development of a standardized GCS educational program.

Methods

Starting in 2012, clinical nurses met routinely to conduct a literature search, analyze the evidence, design a survey for participants, and plan an evidence-based educational intervention. On May 18, 2012, the institutional review board approved the protocol for a prospective cohort study of staff GCS knowledge and performance before and after an educational intervention. Interested nurses were recruited in November 2014. Although all staff in the neurological ICU were invited to participate, 20 participants completed the baseline survey before attending the educational intervention, which was offered on 4 occasions. The educational session consisted of a live oral slide presentation reviewing the GCS, how to score the tool, and case presentations. Case presentations included opportunities for attendees to identify how they would score the GCS in a given scenario. The presentation then reviewed the findings and what should be scored for each scenario, along with the rationale. After the educational sessions, participants were invited to take a follow-up survey. Sixteen participants (80% of presurvey participants) completed the postintervention survey within 5 months of the sessions (April 2015). Data were subsequently analyzed by the research team. The study was completed in May 2016.

The baseline survey collected demographic information and tested participant knowledge of the GCS and its application to common clinical scenarios. Participants then attended the 90-minute educational intervention and were asked to complete a follow-up survey. The pre- and post-educational intervention surveys were designed identically except for the demographic information and a question asking participants to identify where they had learned about the GCS. An answer key was agreed upon by identified experts (C.M.E., K.H.C., and T.d.L.). The surveys were completed online. Study data were collected and managed using Research Electronic Data Capture (REDCap) tools hosted at the medical center campus. (16) Data were exported and analyzed using SPSS (version 23.0).

To maximize participation without concern for consequences of performance, surveys were completed anonymously without any identifiers for tracking. As a result, survey data were not paired (presurvey and postsurvey). Analysis consisted of evaluation of overall performance on survey questions (total survey), performance on general questions related to the GCS (GCS general questions), and scoring based on scenarios (GCS eye score, GCS verbal score, GCS motor score, and GCS sum score).
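As an illustration of this grouping of survey items, the sketch below shows one way the percent correct per question group could be computed from anonymous responses and an answer key. The column names, question-to-group mapping, and response data are hypothetical and do not reflect the study's actual REDCap export.

```python
# Sketch only: scoring unpaired, anonymous survey responses by question group
# (general GCS questions plus eye, verbal, motor, and sum scenario items).
# All identifiers and data below are hypothetical.
import pandas as pd

# Hypothetical answer key: question id -> (question group, correct answer)
answer_key = {
    "q1": ("general", "B"), "q2": ("general", "D"),
    "q3": ("eye", "3"),     "q4": ("verbal", "4"),
    "q5": ("motor", "5"),   "q6": ("sum", "12"),
}

# Hypothetical anonymous responses, one row per participant
responses = pd.DataFrame(
    [{"q1": "B", "q2": "A", "q3": "3", "q4": "4", "q5": "6", "q6": "12"},
     {"q1": "B", "q2": "D", "q3": "3", "q4": "2", "q5": "5", "q6": "10"}]
)

# Mark each answer correct/incorrect, then average within each question group
scored = pd.DataFrame({q: responses[q] == key for q, (grp, key) in answer_key.items()})
groups = pd.Series({q: grp for q, (grp, key) in answer_key.items()})

print(scored.mean().groupby(groups).mean() * 100)  # mean % correct per group
print(scored.values.mean() * 100)                  # total survey % correct
```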

Results

Most of the 20 neurological ICU nurses who completed the baseline survey would be considered experienced nurses, with 4 to 20 years of experience (Supplemental Digital Content 1, available at http://links.lww.com/JNN/A168 [participant years of RN practice]). Participants most often reported practice in the neuroscience nursing specialty for 4 to 10 years (45%), followed by 11 to 15 years (30%) (Supplemental Digital Content 2, available at http://links.lww.com/JNN/A169 [participant years of neuroscience practice]). Many of the nurses had practiced in the unit of study for 4 to 15 years (65%). Ninety percent of the nurses had attained a bachelor's degree. Forty-five percent of participating nurses reported holding CNRN specialty certification, and 25% reported CCRN certification (Supplemental Digital Content 3, available at http://links.lww.com/JNN/A170 [participant specialty certification]).

Although participants noted that they learned about the GCS in a variety of settings, most noted learning about the GCS during nursing school (16 nurses, 80%). Other locations identified included training at another hospital (n = 8, 40%), critical care orientation at the medical center (n = 5, 25%), clinical orientation with preceptor at the medical center (n = 5, 25%), and continuing education (n = 3, 15%) (Supplemental Digital Content 4, available at http://links.lww.com/JNN/A171 ["Where did you learn about the Glasgow Coma Scale (GCS)?"]).

Presurvey and postsurvey results were compared by percent correct between the 2 groups. Although the difference was not statistically significant (P = .19), overall survey scores did improve (presurvey mean, 81.1% correct; postsurvey mean, 84.1% correct). The lack of significance may be due to the small sample size and the inability to perform a paired t test (Figure 1).
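For readers who wish to check this comparison, the sketch below reproduces the overall (total survey) result from the group summary statistics reported in Table 1, assuming an independent two-sample t test with equal variances was used; the article does not name the specific test, so this is an assumption.

```python
# Sketch: recomputing the total survey comparison from the summary statistics
# in Table 1, assuming an independent two-sample t test with equal variances
# (the article does not state which unpaired test was applied).
from scipy import stats

t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=0.811, std1=0.0665, nobs1=20,   # presurvey total survey score
    mean2=0.841, std2=0.0656, nobs2=16,   # postsurvey total survey score
    equal_var=True,
)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # t is about -1.35, p is about .19,
                                               # consistent with the reported P = .186
```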

Performance improved across all assessment-specific questions except GCS eye scoring, which decreased from a mean score of 99.4% correct to 97.9% correct (Table 1). Of note, despite this relative decrease, the GCS eye score remained the most accurately assessed component. The standardized GCS educational program significantly improved nurse knowledge of the GCS as measured by presurvey and postsurvey general GCS question scores (P = .02). However, presurvey and postsurvey performance across the other GCS components and the GCS sum score was not significantly different. This may be because the GCS sum score is directly related to the scoring of its components (GCS eye, GCS verbal, and GCS motor). Overall performance (total survey) improved after the educational intervention; however, the improvement was not significant.

Discussion

Participation in this study was limited to fewer than half of eligible staff, yielding a small sample. Across the different question groups, participant performance improved after the educational intervention, as measured by presurvey and postsurvey GCS verbal component, motor component, and sum scores. The exception was GCS eye scoring, which decreased minimally yet remained high. Furthermore, both the research team and study participants noted positive effects from the study.

The sample was too small to infer any relationship between certification status and performance. Regardless of the source of initial instruction in GCS use, the area of highest performance both before and after education was assessment of the GCS eye score. The area of lowest performance at both time points was the GCS sum score. The solution for improving the sum score is to improve the accuracy of the component score assessments, of which the GCS motor score was the least accurate. This aspect of GCS scoring is therefore a key component of educational initiatives. Although these aspects of patient assessment are part of general neurological assessment education, the research findings underscored the necessity of addressing them explicitly. These findings support the mandate for assessment standardization and education. A larger sample size might have allowed for analysis of scoring performance and any correlation with the setting in which the GCS was learned.

Conclusion

Standardized education regarding neurological assessment is integrated into the instruction offered to nurses entering the medical center. Educational programming for nurses transitioning into the intensive care areas includes neurological assessment, specifically addressing the GCS. On the basis of these findings, this fundamental part of the assessment of the neurologically impaired patient has been specifically included in the new graduate orientation beginning in October 2016. Although there have not been any formal changes to neurological ICU-specific orientation or training, neurological ICU nurses have expressed that the education has informed the unit culture and has empowered clinical nurses to speak to their practice with more authority. The nurses involved reported that the conversations that took place during the classes struck a chord. Nurses from other units even signed up for the class and suggested that it be offered annually. One nurse noted that she "thought that offering the class once a year might help a broad range of staff from the neuro ICU and from other units, as well."

This project was initiated based on observations in clinical practice in the medical center's neurological ICU. Participants reported a favorable experience that empowered practice. However, the improvement in knowledge of the GCS reflected in the posttest did not translate into a meaningful improvement in accuracy of GCS scoring; in this small sample, the transfer from knowledge to application in practice was lacking. Although case-based learning was part of the intervention, another approach, such as incorporating the evaluation into high-fidelity simulation, may provide an educational scenario with more applied testing. Both education regarding accurate use of the GCS and an empowered, engaged nursing staff contribute toward best practice in the care of neurologically impaired patients in the neurological ICU.

Questions or comments about this article may be directed to Lori Kennedy Madden, PhD RN ACNP-BC CCRN-K CNRN, at lkmadden@ucdavis.edu. She is a Clinical Nurse Scientist and Director, Center for Nursing Science, University of California Davis Health, Sacramento, CA.

Catherine M. Enriquez, BSN RN CNRN, Clinical Nurse II, University of California San Francisco Medical Center, San Francisco, CA.

Karen H. Chisholm, BSc(Hons) RN RNC, Clinical Nurse II, University of California San Francisco Medical Center, San Francisco, CA.

Amy D. Larsen, MS RN CCRN SCRN, is Clinical Nurse Specialist, Institute for Nursing Excellence, University of California San Francisco Medical Center, San Francisco, CA.

Tuesday de Longpre, MSN, RN, CRNA, is Certified Registered Nurse Anesthetist, University of California San Francisco, San Francisco, CA.

Daphne Stannard, PhD RN-BC CNS, is Professor, San Francisco State University, San Francisco, CA.

The authors declare no conflicts of interest.

Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's Web site (www.jnnonline.com).

DOI: 10.1097/JNN.0000000000000448

References

(1.) Teasdale G, Jennett B. Assessment of coma and impaired consciousness. A practical scale. Lancet. 1974;2(7872):81-84.

(2.) Jennett B, Teasdale G. Aspects of coma after severe head injury. Lancet. 1977;1(8017):878-881.

(3.) Ingram N. Knowledge and level of consciousness: application to nursing practice. J Adv Nurs. 1994;20(5):881-884.

(4.) Reith FC, Brennan PM, Maas AI, Teasdale GM. Lack of standardization in the use of the Glasgow Coma Scale: results of international surveys. J Neurotrauma. 2016;33(1):89-94.

(5.) Edwards SL. Using the Glasgow Coma Scale: analysis and limitations. Br J Nurs. 2001;10(2):92-101.

(6.) Fischer M, Ruegg S, Czaplinski A, et al. Inter-rater reliability of the Full Outline of UnResponsiveness score and the Glasgow Coma Scale in critically ill patients: a prospective observational study. Crit Care. 2010;14(2):R64.

(7.) Wijdicks EF, Kramer AA, Rohs T, Jr., et al. Comparison of the Full Outline of UnResponsiveness score and the Glasgow Coma Scale in predicting mortality in critically ill patients. Crit Care Med. 2015;43(2):439-444.

(8.) Wijdicks EF, Bamlet WR, Maramattom BV, Manno EM, McClelland RL. Validation of a new coma scale: the FOUR score. Ann Neurol. 2005;58(4):585-593.

(9.) Polit DF, Beck CT. Essentials of Nursing Research: Appraising Evidence for Nursing Practice. 8th ed. Philadelphia, PA: Wolters Kluwer Health/Lippincott Williams & Wilkins; 2014.

(10.) Ingersoll GL, Leyden DB. The Glasgow Coma Scale for patients with head injuries. Crit Care Nurse. 1987;7(5):26-32.

(11.) Rowley G, Fielding K. Reliability and accuracy of the Glasgow Coma Scale with experienced and inexperienced users. Lancet. 1991;337(8740):535-538.

(12.) Teasdale G, Knill-Jones R, van der Sande J. Observer variability in assessing impaired consciousness and coma. J Neurol Neurosurg Psychiatry. 1978;41(7):603-610.

(13.) Juarez VJ, Lyons M. Interrater reliability of the Glasgow Coma Scale. J Neurosci Nurs. 1995;27(5):283-286.

(14.) Reith FC, Van den Brande R, Synnot A, Gruen R, Maas AI. The reliability of the Glasgow Coma Scale: a systematic review. Intensive Care Med. 2016;42(1):3-15.

(15.) Marion DW, Carlier PM. Problems with initial Glasgow Coma Scale assessment caused by prehospital treatment of patients with head injuries: results of a national survey. J Trauma. 1994;36(1):89-95.

(16.) Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377-381.

FIGURE 1. Mean Presurvey and Postsurvey Scores

TABLE 1. Glasgow Coma Scale (GCS) Survey Score Performance

                          Group      N    Mean Correct   SD       P
General GCS questions     Pretest    20   75.7%          0.0693   .016
                          Posttest   16   82.1%          0.0833
GCS eye score             Pretest    20   99.4%          0.0248   .203
                          Posttest   16   97.9%          0.0448
GCS verbal score          Pretest    20   91.1%          0.0994   .511
                          Posttest   16   93.1%          0.0688
GCS motor score           Pretest    20   75.6%          0.1057   .733
                          Posttest   16   77.1%          0.1596
GCS sum score             Pretest    20   67.2%          0.1855   .464
                          Posttest   16   71.5%          0.1570
Total survey              Pretest    20   81.1%          0.0665   .186
                          Posttest   16   84.1%          0.0656