
Gap analysis: synergies and opportunities for effective nursing leadership.

THERE IS A GAP BETWEEN best evidence and current health care delivery practice, resulting in subpar patient safety and quality outcomes. The Institute of Medicine (2001) identifies this gap as a quality chasm in the provision of health care in the United States and presents several challenges and opportunities to improve patient safety and quality outcomes. Moreover, recent trends in patient safety and quality outcomes remain suboptimal (Landrigan et al., 2010). The scope and gravity of the challenges in current patient safety and quality outcomes demand the formulation of new, efficient, and effective modes of health care delivery. Organization-wide approaches are needed to improve safety and quality outcomes and to move health care organizations closer to the triple aim of improving the individual care experience, improving population health, and reducing the cost of care (Berwick, Nolan, & Whittington, 2008; Pronovost et al., 2006; Pronovost et al., 2008).

Nursing, as the largest health care workforce involved in direct patient care, is central to formulating and implementing effective organizational approaches to many patient safety and quality care challenges. Nurse leaders need tools to help identify and close the existing gap between real-world practice and desired service, quality, and patient outcomes. Gap analysis (GA) is an organizational management tool for identifying differences between desired and actual practice conditions, including service delivery and quality patient outcomes, as measured against evidence-based benchmarks and informed by key stakeholder concerns and expectations. This article provides an overview of the GA framework and describes the steps for conducting a GA.

An Overview

The overarching purpose of GA is to identify discrepancies between known benchmarks for efficient and effective health care delivery and real-world practice conditions. Gap analysis uses evidence-based practice standards to measure organizational outcomes. The evidence for best practice is distilled from the strongest available research findings into systematic reviews and practice guidelines; in the absence of strong research-based evidence, guidance from expert clinicians can serve this role. In addition to measuring actual conditions against desired evidence-based best practice, GA incorporates internal and external stakeholder input, with the patient as a key stakeholder.

Often, the goal for conducting GA is to implement corrective action and monitor progress. Nurse leaders must know, incorporate, and act on dynamic, evolving, evidence-based practice standards to deliver efficient and effective nursing services and meet quality and outcome benchmarks while balancing patient and professional perceptions and expectations. To that end, the American Organization of Nurse Executives (2005) lists GA as a core leadership business skill competency for strategic management. Gap analysis provides the structure to identify, measure, address, and monitor factors associated with deficiencies in meeting desired outcomes in a real-world nursing context. The GA process forms a dynamic feedback loop conducive to creating sustainable improvement.

Gap analysis can enhance decision-making acumen. It may be used retrospectively to assess problems such as sentinel events or other breaches in standards of care. Ideally, GA is embedded as a proactive tool within a dynamic continuous improvement strategy. Gap analysis aligns well with Donabedian's (1988, 2003) conceptual model of health determinants, which can serve as an overarching framework. A conceptual model of health determinants depicting the relationship to managing patient safety is shown in Figure 1.

Donabedian's structure, process, and outcomes framework accommodates perspectives ranging from the population level to the individual patient or clinical unit. The model incorporates the internal and external context by capturing clinical and patient factors as well as system and institutional factors. Finally, it maps the links between care interventions and outcomes, and relates them to effectiveness evaluation, including measuring and monitoring for benchmarking and for improving efficiency, effectiveness, and patient outcomes (Aday, Begley, Lairson, & Balkrishnan, 2004).

Significance and Relevance

Gap analysis provides rich information needed for the Plan-Do-Study-Act phases often incorporated into continuous quality improvement (QI) systems (Nolan, 2007). Employing the GA process and acting on the findings helps create the sustainable conditions found in learning organizations. Proactive GA helps organizations respond and adapt to health care industry and nursing practice demands in pursuit of excellent outcomes. Gap analysis helps identify focus areas to guide those setting priorities in organizational problem solving and strategic decision making. A clinical microsystem, where small groups of people work together to provide care to a group of patients, is an ideal setting to test QI initiatives before disseminating them to the larger organization (Nolan, 2007).

Gap analysis involves several steps. First, decision makers identify the problem and loosely classify it into a main domain focus (service, practice, or patient outcomes), then sub-classify the main domain by the scale and scope of the problem. Second, they identify and codify the evidence defining best practice associated with the problem and map that evidence to the main domain focus. Third, those with expertise in the domain under investigation use a systematic and dynamic approach to measure actual conditions against objective standards and benchmarks. Fourth, and centrally important, they elicit information about internal and external contextual factors that contribute to or mitigate the problem, including key stakeholder input. Finally, they compile the findings and refine how the information compares to standards, benchmarks, and key stakeholder perceptions and expectations. As an outcome, GA should lead to discussions and decisions about how to act on the findings and monitor progress toward closing the gap.

Tentative Models and Frameworks

Gap analysis measures the difference between the evidence-based desirable level of care and actual conditions, and it can be illustrated through models and frameworks. Gap analysis does more than measure outcomes against benchmarks; it captures various stakeholder viewpoints and expectations regarding service delivery, quality performance, and/or patient outcomes. Different models illustrate GA (see Figure 2). The most basic models depict GA as a simple linear relationship between service and quality (Parasuraman, Zeithaml, & Berry, 1985). However, problems are often more complex than a linear relationship can capture. Two-dimensional nonlinear models illustrate relationships between service and quality while adding the dimension of customer satisfaction (Kano, Seraku, Takahashi, & Tsuji, 1984). The linear and two-dimensional models present gaps rather simplistically, and health care organizations are often characterized as complex adaptive systems. Algorithmic models provide the logic for approaches where optimization and prediction are needed in such complex systems (Tsai, Chen, Chan, & Lin, 2011).

Regardless of the underlying model sophistication, GA helps measure specific identifiable gaps and provides objective data to guide decision making to set priorities and monitor performance. In health care settings, gaps can be characterized as differences between:

1. Patient expectations and management perceptions of patient expectations.

2. Management perceptions of patient expectations and specific service-delivery/performance-quality indicators and patient outcomes.

3. Service delivery/performance-quality indicators compared to actual conditions.

4. Actual service delivery/performance conditions compared to what is communicated to nursing or the public about service, quality, and outcomes.

5. Patient expectations and patient perceptions about service, quality, and outcomes.

Implementing the GA Process

Step 1: Identify and classify the problem. To begin, an organization identifies a problem amenable to GA. This early stage involves classifying the gap by domain (service, practice, or patient outcomes) and perspective (population, organizational, or individual unit), and determining the timeframe (retrospective or prospective) and the model sophistication (linear, two-dimensional, or algorithmic) needed for the analysis. Making these determinations gives leaders a preliminary understanding of the information needed to capture the organizational structure, process, and outcomes associated with the gap and to assemble the investigative team.

Classifying the problem into a main domain such as service, practice, or patient outcomes brings focus to the issue and helps guide the GA. This main focus area can be further sub-classified by scale and scope in terms of urgency, severity, or pervasiveness. Sub-classifying by scale and scope helps determine the depth of analysis needed to understand the gap and guides decisions about resource allocation for investigating and addressing it. This sets the stage for deciding which change management strategies to pursue. For instance, some gaps are amenable to incremental improvement, whereas others require immediate, transformational change, such as a gap in safety standards associated with a "never event" like wrong-site surgery. Assigning a main domain (service, practice, or patient outcomes) and using sub-classifications for scale and scope (urgency, severity, pervasiveness) also helps determine the type of data needed and available, and whose perspective to take in order to capture desired versus actual key stakeholder perceptions and expectations about patient safety and quality outcomes.

The perspective may be broad (population, state or community level), specific (organizational), or narrow (individual patient or unit level). Often, the initial GA takes a retrospective approach. Ongoing monitoring or systems approaches where GA is incorporated into QI may take a prospective approach.

This first phase conceptually maps out how to leverage and capture organizational capabilities and knowledge as well as the internal and external contextual factors. Determining the type of problem, whose perspective to take, and the level of model sophistication needed to analyze the issue helps determine the types of skills, knowledge, and abilities needed to conduct an informed GA. Leaders can then assemble appropriate teams with enough expertise to conduct the GA. Classifying the issue also gives direction to later steps identifying the evidence for best practice. The main domain focus could be narrowed to such things as service delivery indicators (e.g., patient satisfaction), nursing practice standards of care, nurse-sensitive care measures, Agency for Healthcare Research and Quality (AHRQ) quality indicator benchmarks, or other patient-reported outcomes.

Step 2: Identify and define best practice. Defining best practice begins after the assumed gap is classified by main domain, scale and scope, and perspective, and after leaders gauge the level of model sophistication needed to capture the structure, process, and outcomes influencing the gap. Defining best practice involves reviewing applicable legal and regulatory expectations as well as health care industry and nursing practice standards and benchmarks. The review identifies, codifies, and maps pertinent best practices to the gap under review. Expectations and performance standards are framed from the appropriate perspective (population, organizational, individual) and within the specific local context associated with the gap. At this stage, the team also defines levels of performance in terms such as the ideal, a desired high bar, or minimal thresholds set by industry, regulations, accrediting bodies, or practice standards; discussions can outline the parameters for an achievable high bar. Moreover, the gap is assessed in relation to absolute standards, such as "should never miss a desired service, quality, or outcome benchmark," as may occur when evaluating gaps associated with sentinel events. Defining performance thresholds before measuring actual conditions against benchmarks establishes a foundation for later decisions about whether incremental or absolute corrective changes are needed.

Step 3: Measure and benchmark. The third step in GA measures how close the organization and/or nursing department comes to meeting evidence-based benchmarks. The real-world measures are classified using the predetermined definitions of performance. Various tools measure gaps against standards; one is the GA tool developed by the Agency for Healthcare Research and Quality (AHRQ, 2005b) (see Table 1). The AHRQ GA tool provides a systematic method for comparing current practices with best practices. This approach requires the ability to comprehensively identify organizational processes associated with generally accepted standards and key stakeholder expectations for best practice. Organizational processes and practices associated with the gap, as well as situational barriers that prevent implementation of best practice, are codified, benchmarked, and reviewed. The final step determines whether the best practice will be implemented; if not, the reason should be stated clearly.
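
For teams that track gap analyses electronically, the brief sketch below (a hypothetical illustration, not part of the AHRQ toolkit) shows one way to record the tool's columns as a simple data structure so entries can be reviewed programmatically; the field names are assumptions based on the column headings summarized in Table 1.

```python
# Minimal sketch (not the AHRQ tool itself): one way to record the columns of
# a gap analysis worksheet for each best practice under review.
# Field names are assumptions based on the column headings in Table 1.

from dataclasses import dataclass

@dataclass
class GapAnalysisEntry:
    best_practice: str          # e.g., "RN hourly rounds"
    strategies: str             # how the best practice is carried out
    how_practice_differs: str   # current practice compared with best practice
    barriers: str               # situational barriers to implementation
    will_implement: bool        # decision to implement
    rationale: str              # reason, required when will_implement is False

entries = [
    GapAnalysisEntry(
        best_practice="RN hourly rounds",
        strategies="RN rounds hourly.",
        how_practice_differs="Modified rounding by a rotating care team member.",
        barriers="Admissions and discharges per shift interrupt RN rounding.",
        will_implement=False,
        rationale="Underlying admission/discharge workload issues not yet addressable.",
    ),
]

# Flag any declined best practice recorded without a clearly stated reason.
for e in entries:
    if not e.will_implement and not e.rationale.strip():
        print(f"Missing rationale for declining: {e.best_practice}")
```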

Step 4, Part 1: SWOT. Once the gap is measured against benchmarks, the focus turns toward stakeholder views about actual real-world conditions for service delivery, quality performance, or health outcomes. Measuring a gap against best practices captures important elements; however, it is not sufficient for a holistic assessment of real-world conditions. In this fourth step, SWOT analysis often is employed to map organizational performance in relation to benchmarks while capturing stakeholder perceptions. How the organization fares in relation to evidence-based benchmarks is placed into the SWOT framework along with information elicited from key stakeholders. Patient perceptions come to the fore during this step. SWOT analysis provides structure for eliciting the internal organizational factors and external influences associated with the gap (Luecke, 2005).

Although a detailed discussion of SWOT analysis is beyond the scope of this article, a few basics are worth keeping in mind for GA. SWOT is an acronym for Strengths, Weaknesses, Opportunities, and Threats. Strengths are organizational capabilities that enhance performance and can be leveraged to help the organization maximize efficiency and effectiveness. Weaknesses are factors that hinder or prohibit desired organizational performance. Opportunities encompass external trends, forces, events, and/or ideas that can be leveraged to improve service delivery, quality performance, or patient outcomes. Finally, threats are outside influences or unplanned situations the organization must mitigate to avoid poor performance or outcomes. Assessing the internal and external context may encompass technological factors, emerging issues and trends, time dimensions, and workforce dynamics impacting the identified gap domain, among other things. Overall, SWOT analysis provides a venue to explore perspectives from various stakeholders, including but not limited to patients, management, regulators, families, external professional organizations, and policy and decision makers.

Step 4, Part 2: Survey stakeholder perspectives. This phase elicits additional in-depth stakeholder perspectives, often through a survey built on the SWOT findings. Based on information gathered in the SWOT analysis, a survey instrument may be developed to determine the significant factors driving the gap. The survey puts forth a set of measurable experience and expectation statements based on the summary focus areas identified in the SWOT analysis. The survey may be administered to patients, managers, or other stakeholders who either influence the gap or are directly affected by it. Data from the survey can then be analyzed to determine the significant factors constituents perceive as driving the gap between desired and actual conditions. The results can be measured, quantified, and summarized to help decision makers determine how best to address and monitor progress on the gap (Brown & Swartz, 1989).
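
As a hypothetical illustration of this analysis step, the short sketch below summarizes Likert-scale survey responses by computing, for each SWOT-derived focus area, the difference between mean expectation and mean experience ratings; the focus areas and response values shown are invented for illustration only.

```python
# Minimal sketch (hypothetical data): summarizing Likert-scale survey responses
# to rank the factors perceived as driving a gap. Each respondent rates an
# "expectation" and an "experience" statement (1-5) for a SWOT-derived focus area.

from statistics import mean

responses = {
    "timely call light response": {"expectation": [5, 5, 4, 5], "experience": [3, 2, 3, 4]},
    "communication of requests":  {"expectation": [4, 5, 4, 4], "experience": [3, 3, 4, 3]},
    "follow-up on requests":      {"expectation": [5, 4, 5, 4], "experience": [4, 4, 3, 4]},
}

# Gap score = mean expectation minus mean experience; larger values suggest
# stronger perceived drivers of the gap between desired and actual conditions.
gap_scores = {
    factor: round(mean(r["expectation"]) - mean(r["experience"]), 2)
    for factor, r in responses.items()
}

for factor, score in sorted(gap_scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{factor}: perceived gap = {score}")
```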

Final phases: Resource allocation and feedback loop. To close the loop across the identification, measurement, decision-making, and monitoring phases of GA, nursing leaders synthesize the information to set organizational priorities for the issues uncovered during the GA. Resource allocation enters the discussion as nurse leaders decide what to do with the findings. The findings may suggest areas needing capital improvements or adjustments in how best to deploy the workforce to concentrate efforts on correctable areas identified in the GA. Gap analysis may uncover new and urgent matters not currently addressed in strategic plans or budgets, or show deficiencies in the current allocation of organizational resources in an area of poor performance yet high priority. Decisions about how to prioritize and proceed toward sustainable solutions often require weighing pros and cons against competing interests. Deliberations can begin by identifying the significant factors influencing the gap and placing them on a continuum from those driving poor performance or outcomes to those central to maintaining excellence. The priorities are then weighed and designated in a range from low to high. Finally, formal weights are applied and the factors placed in a 2 by 4 grid with low performance on the left side and low priorities at the bottom.

Visualizing where to concentrate efforts to address issues uncovered in the GA helps leaders determine how to deploy resources. Figure 3 illustrates a performance-priority grid useful for guiding decisions about priorities and resource deployment.
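
A minimal sketch of the grid placement follows; the factors, scores, and midpoint cutoff are hypothetical and serve only to illustrate how performance and priority ratings can be combined to flag where efforts should concentrate.

```python
# Minimal sketch (hypothetical factors and scores) of placing GA findings into
# a performance/priority grid: low performance on the left, low priority at
# the bottom, as described in the text.

factors = {
    # factor: (performance score 1-5, priority weight 1-5)
    "call light response time": (2, 5),
    "hourly rounding consistency": (3, 4),
    "discharge paperwork turnaround": (4, 2),
}

def grid_region(performance: float, priority: float, midpoint: float = 3.0) -> str:
    """Label the grid region; low-performance, high-priority factors deserve
    the most attention and resources."""
    perf = "low performance" if performance < midpoint else "high performance"
    prio = "high priority" if priority >= midpoint else "low priority"
    return f"{perf} / {prio}"

for name, (performance, priority) in factors.items():
    print(f"{name}: {grid_region(performance, priority)}")
```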

Conducting a GA without following up with actionable approaches diminishes its usefulness. Ideally, efforts and resources are concentrated in the lowest performing areas with the highest, most urgent priority. During times of resource constraint, it also becomes useful to match intervention efforts to the resources needed to actualize improvements from the intervention. Visualizing resource allocation within a performance/priority grid can help separate feasible approaches from unaffordable or impractical ones for solving the problems contributing to the identified gap. Using a performance/priority grid also helps determine what is centrally versus peripherally important in establishing the required course of action to close a gap, and helps those implementing change drill down to core issues. This GA phase helps delineate accountability and reporting lines and assists in identifying the monitoring activities needed to determine whether the interventions and resource allocations were successful in narrowing the gap.

Monitoring may take various forms based on the complexity and urgency of the issue. Mapping the complex interrelationships among changes to structures, processes, and desired outcomes and the central issue can establish a visual feedback loop. A concept map that diagrams the interrelationships among the influential factors contributing to the gap is one such tool. If, by the end of the GA process, decision makers determine formal new interventions are needed, then formal pilot studies or programs can be implemented. Often, incremental changes in organizational learning or processes suffice; sometimes more drastic, direct change is in order. To sustain new approaches to address a gap, leaders need the time and tools necessary to implement and adjust to change, as well as to follow up and monitor progress.

Application in a Hospital Setting

An illustration can provide conceptual clarity about how to approach gap analysis. The following case describes a clinical situation not uncommon in rural and urban inpatient hospital settings, applies the gap analysis process to it, and then discusses the implications.

A 29-bed adult surgical orthopedic/trauma unit (UNIT) located in a Northeastern tertiary hospital (NETH) shows chronically low Press Ganey survey patient satisfaction scores related to nurse call response. The unit's mean four-quarter ranking falls below the benchmark of the University Health Care Consortium (UHC), an alliance of 116 academic medical centers and 276 affiliate hospitals, and below the NETH goal (UNIT = 82.9% vs. NETH goal = 90% vs. UHC benchmark = 91.2%).

Step 1: Identify and classify the problem. This problem presents a straightforward classification. The scenario takes a unit-level perspective on a retrospectively identified gap in service related to patient satisfaction, as reported by Press Ganey Associates, Inc., in inpatient survey responses associated with nurse call response. The main domain falls into a service classification. The scale takes a unit-level, localized perspective; the issue, although not emergent, may need further analysis to determine the level of concern. For instance, a decision tree illustrating trends (declining vs. improving), pervasiveness (isolated unit vs. surgical services vs. hospital wide), and the presence of any highly visible known errors associated with nurse call can help determine the severity of the gap and begin to identify any additional data points needed to address it. To construct a decision tree for this case study, we assigned Likert scale numbers ranging from 1 = low importance to 5 = high importance for trends (improving = 1, declining = 5), pervasiveness (isolated = 1, unit = 2.5, hospital wide = 5), and associated incidents (no incidents = 1, several incidents = 5). The UNIT decision tree total raw score of 8.5, divided by 3, gave a mean score of 2.8. The initial classification puts the score near the midpoint, assigning a moderate level of importance. To prioritize this score, however, it should be evaluated in relation to other decision tree gap scores (not reported here).
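
The decision tree arithmetic can be sketched as follows; the individual component ratings shown are assumed assignments consistent with the totals reported above (raw score = 8.5, mean = 2.8), since only the scale anchors and the totals are given in the case.

```python
# Minimal sketch of the decision tree scoring described in the case.
# The component ratings below are assumed for illustration; the case reports
# only the scale anchors and the totals (8.5 raw score, 2.8 mean).

ratings = {
    "trend": 5.0,          # improving = 1, declining = 5 (assumed: declining)
    "pervasiveness": 2.5,  # isolated = 1, unit level = 2.5, hospital wide = 5
    "incidents": 1.0,      # no incidents = 1, several = 5 (assumed: none)
}

raw_score = sum(ratings.values())                 # 8.5
mean_score = round(raw_score / len(ratings), 1)   # 2.8, near the scale midpoint of 3

print(f"raw score = {raw_score}, mean score = {mean_score}")
```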

Step 2: Identify and define best practice. For this case, the gap is a failure to meet the predetermined patient satisfaction benchmark. In step 2, we focus on identifying best practices and benchmarks that drive patient satisfaction in surgical orthopedic units, particularly those related to nurse call response (Van Handel & Krug, 1994). We drill down on the main issues by conducting a literature review and eliciting staff and management views about what influences nurse call response. For this GA, the literature and staff input suggest teamwork is significantly associated with patient satisfaction (Meterko, Mohr, & Young, 2004). This information helps refine the focus of the GA: it suggests teamwork is a key internal driver affecting the gap and provides direction about what internal factors to monitor. Knowing teamwork is a significant factor helps determine how to conduct the further analysis needed to assess the situation and whom to place on a gap analysis team.

Step 3: Measure and benchmark. To address the gap, the analysis needs to continue to monitor patient satisfaction scores and conduct a thorough trend analysis, but also must move beyond this broad service benchmark provided in the Press Ganey survey score to further measure and benchmark nurse call response in the context of teamwork. To outline the best evidence, we elected to use the AHRQ Quality Indicators Gap Analysis tool (AHRQ, 2005b). A summary of selected results for best evidence using the AHRQ gap analysis tool is shown in Table 1.

Our literature search also suggests documenting the current approach, quantifying the proportion of answered call lights and how well the unit communicates patient requests, and counting the call light requests that were followed up, resolved, and left unresolved (Tzeng, 2011). This required a mixed methods approach: a qualitative observational study as well as gathering counts for statistical analysis.

Step 4, Part 1: SWOT. The SWOT analysis focuses the investigation on the stakeholder perspective and places the findings into a 2 by 2 grid. Select findings from the SWOT analysis are illustrated in Table 2.

Step 4, Part 2: Survey stakeholder perspectives. The stakeholder perspective for this case focuses on nursing views about teamwork and patient interviews about nurse call response. Other perspectives not elicited for this case could include a broader systems perspective, for instance, the call bell system's ability to handle the volume of patient calls, to alert nursing personnel of patient requests, or to track follow-up. The initial gap assessment suggests a unit-level perspective; taking a system-level focus would move beyond the scope of the gap. Broadening the scope should occur only if warranted by information gathered during earlier phases of the gap analysis.

To elicit the nursing perspective, we used the nursing teamwork survey tool developed by Kalisch, Lee, and Sales (2010). Patient interviews were conducted using a four-item questionnaire designed to gain insight about (a) the most frequent patient need prompting call bell use, (b) patient perception about nurse call response time, (c) actual response time in minutes, and (d) patient perception about whether their needs were met.

Step 5: Resource allocation and feedback loop. Management requests a monetarily resource-neutral approach to addressing the gap. To arrive at this decision, managers review the decision tree, the UNIT budget, and a summary of the gap analysis results. The gap analysis results are placed into the performance-priority grid (undesirable, poor performance in a priority area) in the context of the strategic plan for the UNIT. Maintaining Press Ganey patient satisfaction scores at or above benchmark is a UNIT and systemwide priority; however, our GA shows low performance in this high-priority area, placing it in the upper-left quadrant of the performance-priority grid. To determine whether to allocate additional resources, managers place the gap in context with other priorities, such as AHRQ inpatient quality indicators, infection control, length of stay, and professional staff recruitment and retention, as well as the amount of resources already allocated to the UNIT.

Intervention is an appropriate response to the findings of the GA; designing a monetarily resource-neutral intervention becomes the challenge. The GA highlights areas to leverage to address the gap in patient services, and teamwork stands out as an area amenable to intervention. A teamwork intervention strategy was developed using rounding, huddles, and structured communication (Denver Health, 2013). To create a feedback loop to track UNIT progress throughout implementation, we also created a dynamic concept map (available on request). The concept map begins with the central issue (in this case, low patient satisfaction) and shows the link between the patient call light and teamwork. As the feedback loop develops, the concept map fills in, working outward over time as the intervention is implemented. This provides structure for organizational feedback about the progress of the intervention and shows the areas where refining the approach to nurse call response can influence the desired outcome: improvement to meet or exceed hospital benchmarks in Press Ganey patient satisfaction scores.

Conclusion

Gap analysis encompasses a comprehensive process to identify, understand, address, and bridge gaps in service delivery and nursing practice. Conducting GA provides structure to information gathering and the process of finding sustainable solutions to important deficiencies.

Nursing leaders need to recognize, measure, monitor, and execute feasible, actionable solutions that help organizations address gaps between what is desired and the actual real-world conditions contributing to the quality chasm in health care. Gap analysis is a functional and comprehensive tool for addressing organizational deficiencies. Used proactively, GA helps organizations map out and sustain corrective efforts to close the quality chasm, and gaining facility with GA should strengthen the nursing profession's contribution to narrowing it.

REFERENCES

Aday, L.A., Begley, C.E., Lairson, D.R., & Balkrishnan, R. (2004). Evaluating the healthcare system: Effectiveness, efficiency, and equity (3rd ed.). Washington, DC: Health Administration Press, Academy Health.

Agency for Healthcare Research and Quality (AHRQ). (2005a). Figure 2. The Donabedian model of patient safety. Retrieved from http://www.ahrq.gov/research/findings/finalreports/medteam/figure2.html

Agency for Healthcare Research and Quality (AHRQ). (2005b). Appendix of AHRQ quality indicators™ toolkit for hospitals: Improving performance on the AHRQ quality indicators. Gap analysis tool. Retrieved from http://www.ahrq.gov/professionals/systems/hospital/qitoolkit/d5-gapanalysis.pdf

American Organization of Nurse Executives. (2005). The AONE nurse executive competencies. Retrieved from http://www.aone.org/resources/leadership%20tools/PDFs/AONE_NEC.pdf

Berwick, D.M., Nolan, T.W., & Whittington, J. (2008). The triple aim: Care, health, and cost. Health Affairs, 27(3), 759-769. doi:10.1377/hlthaff.27.3.759

Brown, S.W., & Swartz, T. (1989). A gap analysis of professional service quality. Journal of Marketing, 53(2), 92-98.

Culley, T. (2008). Reduce call light frequency with hourly rounds. Nursing Management, 39(3), 50-52.

Denver Health. (2013). Patient safety through teamwork and communication. Retrieved from http://www.safecoms.org/ImplementationToolkit/AdditionalTools/tabid/836/Default.aspx

Donabedian, A. (1988). The quality of care. How can it be assessed? Journal of the American Medical Association, 260(12), 1743-1748.

Donabedian, A. (2003). An introduction to quality assurance in health care. New York, NY: Oxford University Press.

Gluck, P. (2010). Physician leadership: Essential in creating a culture of safety. Clinical Obstetrics and Gynecology, 53(3), 473-481. doi:10.1097/GRF.0b013e3181ec1476

Halm, M.A. (2009). Hourly rounds: What does the evidence indicate? American Journal of Critical Care, 18, 581-584. doi:10.4037/ajcc2009350

Institute for Healthcare Improvement. (2010). Optimize the care team. Retrieved from http://www.ihi.org/knowledge/Pages/Changes/OptimizetheCareTeam.aspx

Institute of Medicine. (2001). Crossing the quality chasm: A new health system for the twenty-first century. Washington, DC: National Academies Press.

Kalisch, B.J., Lee, H., & Sales, E. (2010). The development and testing of the nursing teamwork survey. Nursing Research, 59(1), 42-50. doi: 10.1097/NNR.0b013e3181c3bd42

Kano, N., Seraku, N., Takahashi, F., & Tsuji, S. (1984). Attractive quality and must-be quality. Journal of Japanese Society for Quality Control, 14, 39-48.

Landrigan, C.P., Parry, G.J., Bones, C.B., Hackbarth, A.D., Goldman, D.A., & Sharek, P.J. (2010). Temporal trends in rates of patient harm resulting from medical care. New England Journal of Medicine, 363(26), 2124-2134. doi: 10.1056/NEJMsa1004404

Luecke, R. (2005). SWOT analysis I: Looking outside for threats and opportunities. In Strategy: Create and implement the best strategy for your business. Boston, MA: Harvard Business School Press.

Matzler, K., Bailom, F., Hinterhuber, H.H., Renzl, B., & Pichler, J. (2004). The asymmetric relationship between attribute-level performance and overall customer satisfaction: A reconsideration of the importance-performance analysis. Industrial Marketing Management, 33(4), 271-277. doi:10.1016/S0019-8501(03)00055-5

Meade, C.M., Bursell, A.L., & Ketelsen, L. (2006). Effects of nursing rounds: On patients' call light use, satisfaction, and safety. American Journal of Nursing, 106(9), 58-70.

Meterko, M., Mohr, D.C., & Young, G.J. (2004). Teamwork culture and patient satisfaction in hospitals. Medical Care, 42(5), 492-498.

Nolan, T.W. (2007). Execution of strategic improvement initiatives to produce system-level results. IHI Innovation Series white paper. Cambridge, MA: Institute for Healthcare Improvement.

Parasuraman, A., Zeithaml, V.A., & Berry, L.L. (1985). A conceptual model of service quality and its implications for future research. Journal of Marketing, 49(4), 41-50.

Pronovost, P.J., Berenholtz, S.M., Goeschel, C.A., Needham, D.M., Sexton, J.B., Thompson, D.A., ... & Hunt, E. (2006). Creating high reliability in health care organizations. Health Services Research, 41(Part II), 1599-1617. doi:10.1111/j.1475-6773.2006.00567.x

Pronovost, P.J., Rosenstein, B.J., Paine, L., Miller, M.R., Haller, K., Davis, R., ... & Garrett, M.R. (2008). Paying the piper: Investing in infrastructure for patient safety. Joint Commission Journal of Quality and Patient Safety, 34(6), 207-221.

Tzeng, H.M. (2011). Perspectives of staff nurses toward patient- and family-initiated call light usage and response time to call lights. Applied Nursing Research, 24(1), 59-63. doi:10.1016/j.apnr.2009.03.003

Tsai, M., Chen, L., Chan, Y., & Lin, S. (2011). Looking for potential service quality gaps to improve customer satisfaction by using a new GA approach. Total Quality Management, 22(9), 941-956. doi:10.1080/14783363.2011.593854

Van Handel, K., & Krug, B. (1994). Prevalence and nature of call light requests on an orthopaedic unit. Orthopaedic Nursing, 13(1), 13-18, 20.

ADDITIONAL READING

Kitson, A., & Straus, S.E. (2010). The knowledge-to-action cycle: Identifying the gaps. Canadian Medical Association Journal, 182(2), E73-77. doi:10.1503/cmaj.081231

MARY LYNN DAVIS-AJAMI, PhD, MRA, MS, NP-C, RN, is an Assistant Professor, Organizational Systems and Adult Health, University of Maryland, Baltimore School of Nursing, Baltimore, MD.

LINDA COSTA, PhD, RN, NEA-BC, is Assistant Professor, Organizational Systems and Adult Health, University of Maryland, Baltimore School of Nursing, Baltimore, MD; and Nurse Researcher, The Johns Hopkins Hospital, Baltimore, MD.

SUSAN KULIK, DNP, MBA, RN, is Coordinator, Nursing Programs, Department of Surgical Nursing, The Johns Hopkins Hospital, Baltimore, MD.

Table 1.
AHRQ Quality Indicator Toolkit, Gap Analysis Tool, Selected Results for an Orthopedic Trauma Unit, Call Light Response Gap Analysis

Project: Orthopedic/trauma unit: GA patient satisfaction, call light response, unit-based teamwork
Best Practice: Huddle intervention
Individual completing this form: SK

Best practice: RN hourly rounds (Culley, 2008; Halm, 2009; Meade, Bursell, & Ketelsen, 2006)
* Best practice strategies: RN rounds hourly.
* How your practices differ from best practice: Modified rounding; once an hour, one member of the care team (clinical tech, clerical associate, etc.) has a designated hour to round.
* Barrier to best practice implementation: The number of admissions and discharges per shift interrupts RN ability to conduct hourly rounds.
* Will implement best practice (Yes/No; why not?): No. Unable to address the underlying issues with admissions and discharges and their influence on RN rounding.

Best practice: Teamwork (Institute of Medicine, 2001; Meade et al., 2006)
* Best practice strategies: RN rounds hourly with a care team pod member. Communicate updates on patient requests, condition, and medical information to the UNIT and to the designated RN caring for the patient.
* How your practices differ from best practice: Small team clusters, but no sense of having an entire team unit.
* Barrier to best practice implementation: Lack of structures to communicate up-to-date patient care information beyond ASCOM phones.
* Will implement best practice (Yes/No; why not?): Yes. Develop ways to facilitate communication about up-to-date patient needs, requests, and medical information, as well as UNIT shift workload demands.

Best practice: Huddle (Gluck, 2010; Institute for Healthcare Improvement, 2010)
* Best practice strategies: Charge nurse facilitates. Schedule huddles at set times throughout the shift. Call a huddle for emergent or critical events.
* Barrier to best practice implementation: Not a current focus. Evidence supporting huddles not well disseminated. The number of total new UNIT interventions prevents a comprehensive focus for educating the staff, implementing a huddle, and measuring outcomes.
* Will implement best practice (Yes/No; why not?): Yes. Develop an operational approach to huddles appropriate for the UNIT.

Table 2.
Select Findings from a SWOT Analysis of a Northeast Tertiary Orthopedic Unit

Strengths
* RN retention = 96.6%
* RN skill mix
  ** 28% nurse expert (> NCIIE)
  ** 45% nurse master (= NCIIE)
  ** 27% nurse clinician I (NCIM)
* New customer care coordinator

Weaknesses
* Clinical tech retention and recruitment = low
* UNIT staff satisfaction below NE tertiary hospital mean and national benchmarks
* UNIT nurse call response time below 50th percentile
* Multiple new interventions and projects on UNIT

Opportunities
* New call bell system with ability to track response time
* Patient-centered care model
* Constructing a new clinical building to house orthopedic/trauma services = opportunity to redesign patient care workflow

Threats
* Hiring freeze
* Increased outpatient orthopedic volumes decreasing inpatient patient volume
* Macro-level changes in payments for 30-day re-admissions