
Rethinking quality: improving resident care requires the MDS.

For almost a decade, the quality of nursing home care has been measured through the survey and certification process and public quality outcome measures in the form of Quality Indicators (QI), Quality Measures (QM), and currently QI/QMs.

Revisions in these measurement systems have improved their precision and integrity; however, they remain influenced by factors other than the facility's care of the resident or its adherence to federal and state standards. In particular, data accuracy and resident case mix are important factors that affect quality measurement systems.

The result is that reliance on outcome measures alone provides only one-third of the true picture of a nursing home's quality. Including structure and process measures is necessary to get an accurate measure of the quality of nursing home care.

Error-prone MDSs

The Minimum Data Set (MDS), the source that generates QI/QMs, has demonstrated significant reliability and validity in scientific studies. However, its reliability is more challenged in the everyday nursing home workplace than it has been in controlled investigations.

Lack of standardized MDS training requirements, surveyor inconsistency in interpretation of guidelines, high turnover of MDS coordinators, and lack of access to up-to-date educational tools and manuals are key reasons why MDS data are challenged.

LTCQ, Inc., evaluated a random sampling of 1,026,722 MDS assessments and found that 77% had errors or issues. These issues ranged from direct violations of basic coding conventions to logical and clinical inconsistencies. Of those assessments with issues, on average there were 1.9 issues per assessment.

MDS repercussions

The consequences of invalid MDS data are significant. Simply examining the rate of facilities inappropriately triggering QMs demonstrates the extent of this issue. The MDS drives the care planning process and has a significant influence on the public perception of the nursing home when QMs and survey results are published. Because Medicare--and in many states Medicaid--reimbursement is derived from the same data, it's easy to conclude that similar challenges are apparent in nursing home payment.

The following results demonstrate the impact of MDS data integrity issues on QMs for 300 randomly identified nursing homes. All facilities were offered systematic feedback on the integrity of their MDS assessments prior to state transmission. MDS coordinators then revised the assessments as needed in response to the feedback and resubmitted them.

QMs were calculated on the first submission to LTCQ and again on the final submission. As Figure 1 shows, almost 40% of facilities either wrongly triggered the activities of daily living (ADL) decline QM or should have triggered it but did not. Even an outcome as critical as a pressure ulcer in a low-risk resident, which indicates a sentinel health event, was incorrect almost 30% of the time.
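To make that before-and-after comparison concrete, here is a minimal sketch in Python of the kind of check described above; the field names, the helper function, and the 10% trigger threshold are illustrative assumptions, not LTCQ's actual QM specifications.

# Hedged sketch: count facilities whose QM trigger status changes between the
# first MDS submission and the corrected (final) submission. Any disagreement
# is treated as an "inaccurate QM" for that facility.

def qm_inaccuracy_rate(facility_submissions, qm_triggered):
    """facility_submissions: list of (first_assessments, final_assessments) pairs.
    qm_triggered: function returning True if the QM triggers for a batch."""
    discordant = sum(
        1 for first, final in facility_submissions
        if qm_triggered(first) != qm_triggered(final)
    )
    return discordant / len(facility_submissions)


def adl_decline_triggered(assessments):
    """Toy ADL-decline trigger: fires when more than 10% of assessed residents
    carry a decline flag (the threshold is an assumption for illustration)."""
    declined = sum(1 for a in assessments if a.get("adl_declined"))
    return declined / len(assessments) > 0.10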

MDS data accuracy affects the measurement, and consequently the improvement, of a facility's performance. However, other conditions or circumstances, such as resident case mix and preadmission conditions, also affect CMS' publicly reported measures.

The end-of-life conundrum

To demonstrate this point, LTCQ examined the impact of high concentrations of residents at the end of life on the Pressure Ulcer QI/QM. Nursing homes (n = 1,100) with the highest concentration (top quartile) of end-of-life (EOL) residents were identified as "EOL Facilities." All other facilities were designated as "typical EOL Facilities."

As Figure 2 demonstrates, facilities with a higher case mix of EOL residents had higher numbers of residents at low- and high-risk for pressure ulcers. These facilities caring for more EOL residents may therefore be considered inferior to facilities with more typical proportions of dying residents.

[FIGURE 2 OMITTED]
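The stratification behind Figure 2 can be sketched roughly as follows; the data structure and field names are hypothetical, and the 75th-percentile cut simply encodes the "top quartile" definition used above.

# Hedged sketch: rank facilities by their share of end-of-life (EOL) residents,
# label the top quartile "EOL Facilities," and compare mean pressure ulcer
# rates between the two groups.
from statistics import mean, quantiles

def split_by_eol_concentration(facilities):
    """facilities: list of dicts with 'eol_share' and 'pressure_ulcer_rate'."""
    cutoff = quantiles([f["eol_share"] for f in facilities], n=4)[2]  # 75th percentile
    eol_group = [f for f in facilities if f["eol_share"] >= cutoff]
    typical_group = [f for f in facilities if f["eol_share"] < cutoff]
    return eol_group, typical_group

def mean_pressure_ulcer_rates(eol_group, typical_group):
    return {
        "EOL Facilities": mean(f["pressure_ulcer_rate"] for f in eol_group),
        "typical EOL Facilities": mean(f["pressure_ulcer_rate"] for f in typical_group),
    }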

The point is not that facilities should find pressure ulcers in EOL residents acceptable, but that their performance evaluation or comparative benchmarks should be more specific to acknowledge their resident case mix. Then, appropriate improvement goals can be established and care evaluated. Comparing these facilities to those with significantly different case mixes may unfairly categorize them as poorer performers.

The cognitive factor

When LTCQ investigated the impact of cognitive impairment on the Behavior and Pressure Ulcer QI/QMs, it found a similar pattern.

Cognitive impairment was defined as a score of 5 or 6 on the Cognitive Performance Scale, which translates to the resident being either severely impaired in decision-making or in a coma. As shown in Figure 3, facilities caring for more cognitively impaired residents (highest quartile) had a higher rate of residents triggering the high-risk Behavior and high-risk Pressure Ulcer QI/QMs. This finding was apparent even though the high-risk Behavior QI/QM is adjusted for cognitive impairment.

[FIGURE 3 OMITTED]

Again, this is not to say that the presence of behavioral issues or pressure ulcers in cognitively impaired residents is acceptable, but simply that they are more common in this population. Case mix must be considered when creating benchmarks or establishing outcome goals.

In many cases, preadmission conditions affect facility QI/QM rates. For example, facilities with the highest (top quartile) number of residents with pressure ulcers on admission, as represented by the green bar in Figure 4, have significantly more residents with high-risk and low-risk pressure ulcers on their 90-day assessments.

[FIGURE 4 OMITTED]

What you can do

Despite the challenges with MDS data accuracy and the influence of other factors on the measurement of quality, there are ways to use the MDS and QI/QMs to accurately measure and improve quality. Although MDS data quality has been called into question by various investigators and government agencies, there are effective measures you can take to ensure data accuracy:

1. Data improvement through education.

Select professional organizations have sought to improve MDS accuracy through education (e.g., by certifying or accrediting MDS coordinators). There are many credible programs available in multimedia formats to assist clinicians with MDS assessment.

Further, many private consulting groups offer on-site auditing, and a few technology-driven companies provide complete auditing of MDS assessments prior to state submission. CMS' Data Assessment and Verification (DAVE) project and now DAVE2 initiatives seek to help facilities improve the quality of their MDS data.

2. Internal monitoring. Facilities with a stable case mix can use QI/QM findings to track and trend performance internally. Although the national percentile comparison might seem unfair, the facility's own rates, tracked against tailored goals, are effective tools for quality improvement. Expanding the traditional outcome monitoring of decline to measures of improvement and maintenance provides important information for a facility's quality improvement activities, as the sketch below illustrates.
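As a rough illustration of this broader internal monitoring, the following sketch summarizes paired assessments for one measure as decline, improvement, and maintenance rates; the scoring convention (higher score means greater impairment) and data shape are assumptions made for the example.

# Hedged sketch: compute decline, improvement, and maintenance rates from
# paired prior/current scores so a facility can trend all three internally
# against its own tailored goals.

def outcome_rates(paired_scores):
    """paired_scores: list of (prior_score, current_score) tuples for one
    measure, where a higher score means greater impairment (e.g., an ADL sum)."""
    total = len(paired_scores)
    declined = sum(1 for prior, current in paired_scores if current > prior)
    improved = sum(1 for prior, current in paired_scores if current < prior)
    return {
        "decline": declined / total,
        "improvement": improved / total,
        "maintenance": (total - declined - improved) / total,
    }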

3. Strength in numbers. Multiple-facility corporations should expand their tracking to include the depth and breadth of any changes they effect. A single "roll-up" rate for the entire chain can mask the excellent accomplishments of a few facilities or hide facilities in need of support.

The two charts in Figures 5a and 5b illustrate this point. Here we see that member corporations of the Alliance for Quality Nursing Home Care have a lower rate of restraint use than two appropriately represented comparison groups, Non-Alliance Members and Peers. In addition, their rate of care improvement is greater.

[FIGURE 5 OMITTED]

However, this finding does not fully describe the improvement activities of member corporations. Can this rate of improvement be attributed to a few "superstar" corporations? Were struggling facilities overlooked in the aggregation of data? Here we see that nearly 40% of Alliance facilities improved in restraint use, whereas 27% and 31% improvements were evidenced in their competitive peer groups. Similarly, multisite corporations can use these calculations, sketched below, to better understand improvement and decline within their own organizations.
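The aggregation issue can be pictured with the hypothetical sketch below, which contrasts the two views a multisite corporation would want side by side; neither function is a published formula.

# Hedged sketch: contrast one pooled "roll-up" rate for the whole chain with
# the share of individual facilities that improved since the prior period.

def rollup_rate(facilities, numerator, denominator):
    """One chain-wide rate, e.g., restrained residents / all residents."""
    return sum(numerator(f) for f in facilities) / sum(denominator(f) for f in facilities)

def share_of_facilities_improved(facilities, current_rate, prior_rate):
    """Proportion of facilities whose own rate is better (lower) than before."""
    improved = sum(1 for f in facilities if current_rate(f) < prior_rate(f))
    return improved / len(facilities)

# A chain can post a flat roll-up rate while many of its buildings improved and
# others declined; tracking both numbers keeps neither story hidden.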

Structure v. process measures

Future quality measurement and improvement systems must include structure and process indicators. Structure refers to the rules in place that affect the care experience. Process refers to the actions taken, or how the care is rendered. The outcome is the end result of these actions.

Avedis Donabedian, the father of this conceptual framework, observed that most healthcare providers struggled to understand the relationship between quality and systems. In his 1966 article "Evaluating the Quality of Medical Care," published in the Milbank Memorial Fund Quarterly, he divided measures of healthcare quality into structure, process, and outcome.

By moving beyond a narrow focus on resident outcomes and shining a spotlight on facility governance (i.e., structure and the actions taken), you will better manage the quality systems that ultimately affect residents.

Structure measures

Appropriate staffing is an example of a structure measure. You cannot expect excellent care outcomes without appropriate staffing. Our profession struggles to define "appropriate," and although it is easy to point to high staff-to-resident ratios and feel satisfied, LTCQ has demonstrated that staffing patterns tailored to reflect resident acuity have a greater impact on resident outcomes.

To see whether staffing-to-acuity has an effect on measures of quality, LTCQ compared the effect of raw certified nursing assistant (CNA) staffing and CNA staffing-to-acuity measures against five quarters of QM data representing areas sensitive to CNA staffing. Higher staffing-to-acuity was significantly associated with better QM scores.

On the other hand, higher raw staffing was either not associated with QM improvement or was associated with worse QM performance. Staffing-to-acuity significantly propels QM improvement.
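A simple way to picture the two staffing measures being compared is sketched below; the acuity weights are placeholders (e.g., a case-mix index in which 1.0 represents an average-acuity resident day), not LTCQ's actual methodology.

# Hedged sketch: raw CNA staffing versus CNA staffing adjusted for resident
# acuity. Two facilities with the same raw ratio can differ sharply once
# acuity is considered.

def raw_cna_hours_per_resident_day(cna_hours, resident_days):
    return cna_hours / resident_days

def cna_hours_per_acuity_adjusted_day(cna_hours, acuity_weights):
    """acuity_weights: one case-mix weight per resident day, where 1.0
    represents an average-acuity resident day (weights are illustrative)."""
    return cna_hours / sum(acuity_weights)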

Process measures

An example of a process measure is turning and repositioning for residents with impaired bed mobility. Turning and repositioning is a fundamental intervention for good pressure ulcer prevention.

Figure 6 separates residents who are dependent in bed mobility from residents who are not. As one would expect, the percentage of dependent residents receiving turning and repositioning differs significantly from the percentage among non-dependent residents.

[FIGURE 6 OMITTED]

The majority of those residents totally dependent in bed mobility receive turning and repositioning, but what about those residents who do not? Do they receive other interventions to prevent skin breakdown? Do these residents not receive appropriate care? Or is it simply that the MDS hasn't been completed accurately?

Exploration into this single process measure offers the facility a needed quality improvement strategy. It is easy to see the value in a "process measure prompt" when a resident is identified as being dependent in bed mobility.
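A minimal sketch of such a prompt follows; the MDS item names are simplified placeholder booleans rather than actual MDS item numbers.

# Hedged sketch: flag residents coded as dependent in bed mobility whose
# assessments do not show turning and repositioning, and compute the
# corresponding process measure rate.

def turning_repositioning_gaps(residents):
    """residents: list of dicts with 'bed_mobility_dependent' and
    'turning_repositioning' booleans drawn from the MDS."""
    return [
        r for r in residents
        if r.get("bed_mobility_dependent") and not r.get("turning_repositioning")
    ]

def turning_repositioning_rate(residents):
    """Share of bed-mobility-dependent residents receiving the intervention."""
    dependent = [r for r in residents if r.get("bed_mobility_dependent")]
    if not dependent:
        return None
    return sum(1 for r in dependent if r.get("turning_repositioning")) / len(dependent)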

It would also be advantageous to further investigate those facilities that employ turning and repositioning as a preventive measure and measure the effect that this has on pressure ulcer development. Early analysis by LTCQ supports this concept of secondary prevention.

Current quality measurement and improvement systems rely on flawed or otherwise challenged data. Recent CMS initiatives and industry-sponsored interventions strive to improve the quality of MDS data. Focusing on outcome rates alone is limiting; however, decline, improvement, and maintenance rates can be used for internal benchmarking when a consistent case mix is present.

Multisite corporations should consider both the depth and the breadth of change across their facilities. Dr. Donabedian reminds us to include structure and process indicators, along with outcome indicators, when we try to create a system of quality.

Steven Littlehale, APRN, BC, is executive vice president and chief clinical officer at LTCQ, Inc., in Lexington, MA. Contact him at littlehale@ltcq.com.

BY STEVEN LITTLEHALE, MS, APRN, BC
FIGURE 1
Inaccurate MDSs equal inaccurate QMs

CMS Quality Measures (QM)      % Facilities with Inaccurate QMs
ADL decline                    39.5%
Pain                           13.0%
Urinary tract infections       41.2%
Mood (decline)                 53.5%
Low-risk pressure ulcers       28.9%

FIGURE 5b
Facilities that improved care

                 Non-Alliance   Peers    Alliance
Bedfast          24%            29%      35.9%
Restraints       25.6%          30.6%    38.5%
ADL decline      30.6%          34.9%    45.4%
Mobility         29.7%          33.8%    43.3%
