
The first rule of improving clinical outcomes: understand how you are measured.

In this article ...

Take a look at how coding, documentation and other factors can severely affect your hospital ratings.

We've all been there: we get a report from Healthgrades, Thomson, or CMS, based on MedPAR (Medicare billing) data, showing that our performance on clinical outcome measures (mortality, complications, length of stay, cost per case, and readmissions) is about as expected compared with the rest of the nation.

Not bad, but hospital leadership and the board want to do better, so we single out the areas where we are not performing well and implement performance improvement projects. The end result: only a small improvement. We work at it again, yet we just can't seem to move our outcomes significantly.

Well, don't despair; it is a problem we all face and will continue to face as long as our performance is determined wholly from MedPAR (Medicare) billing data, which are available for purchase by the organizations that use them to score our performance and report the results to the public.

So why are our performance numbers so slow to improve even when we know internally we are performing better?

Unfortunately, the answer lies in how we are measured: more specifically, in the lack of timely data for producing the measures, along with real variability in the fundamental elements that drive them, documentation and coding.

Availability of data

One of the most frustrating issues with having our performance measured from publicly available Medicare data is how old the data actually are by the time the reporting organizations purchase them.

The lag between the point care is rendered and the point our coded bill becomes publicly available from Medicare can be one to two years. Think about that: the information the public receives about how we perform on clinical outcome measures such as mortality and complications is, at best, one to two years old.

Even worse, those organizations typically use two or three years of data to determine our performance. So in reality, when they release our 2009 performance to the public in early 2010, the data more than likely include care delivered as far back as 2005.

So even if we started today and were the best-performing hospital in the nation in mortality, it would be two years before that excellent performance even began to appear in our reported outcomes, and in that year it would have only a 33 percent impact on our three-year cumulative score.

In other words, it may take three or four years from the point we improve our mortality performance to best in class before anyone knows it.

That is a very sad state of affairs for our industry, and it is certainly difficult to make meaningful change if we rely on these data; worse yet, it gives consumers very poor information by which to evaluate and choose where to receive care.

If that were the only bad news it would be depressing enough, but it is only part of the story. The real culprit in the variation of our performance is likely the data we produce ourselves for them to score us: the infamous UB-04 Medicare bill.

Documentation and coding

If availability seems like a challenge, the impact of documentation and coding on the measures by which we are evaluated is significantly more onerous.

The vast majority of credible organizations that evaluate our performance use severity or risk adjustment in their calculations. Clearly, we as clinicians realize that severity of illness affects a patient's outcome.

For years, before there were reliable severity/risk adjustment methodologies, we even used that very argument to explain variation in patient outcomes between providers: the proverbial "My patients are sicker!"

Today it is common to see severity/risk adjustment methodologies used to level the playing field and let us examine performance with a scientific method that adjusts for the severity/risk of the patient.

This point, though sometimes overlooked, is vital to understanding the real challenge: the way we identify a patient's severity can have a major impact on our actual outcome scores.

Almost all of the groups that evaluate severity/risk-adjusted outcomes base that adjustment in some way on the All Patient Refined Diagnosis Related Groups (APR-DRGs) initially developed by the 3M Company.

Basically, APR-DRGs subdivide every DRG (now MS-DRG) into four levels of severity: minor, moderate, severe, and extreme. The severity level is based on age along with the number and intensity of the diagnoses placed in the secondary diagnosis fields of the UB-04 bill submitted to Medicare.
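
To make the idea concrete, below is a minimal sketch in Python. The assign_severity function, its per-diagnosis weights, and its thresholds are all invented for illustration; the actual 3M grouper logic is proprietary and far more complex.

    # Hypothetical sketch of APR-DRG-style severity assignment.
    # Weights and thresholds are invented; only the shape of the idea
    # (age plus number/intensity of secondary diagnoses) matches the text.
    def assign_severity(age: int, secondary_dx_weights: list[float]) -> str:
        """Map a patient to one of the four APR-DRG severity levels."""
        score = sum(secondary_dx_weights)  # intensity of coded secondary diagnoses
        if age >= 80:
            score += 1.0  # advanced age shifts the patient toward higher severity
        if score >= 6.0:
            return "extreme"
        if score >= 3.0:
            return "severe"
        if score >= 1.0:
            return "moderate"
        return "minor"

    # The CHF examples discussed below, with invented per-diagnosis weights:
    print(assign_severity(65, [0.25, 0.25]))  # minor
    print(assign_severity(80, [0.25, 0.25]))  # moderate
    print(assign_severity(80, [1.5] * 5))     # extreme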

How it may work

Basically, an expected performance for each outcome measure is established by averaging performance across the whole Medicare database at each level of severity/risk. The topic is an article in itself, but a specific example helps.
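
As a rough sketch of how those expected values arise, the Python fragment below averages outcomes within each principal-diagnosis/severity cell of a claims table. The file name and columns (medpar_extract.csv, principal_dx, severity, los_days, died, cost_wt) are assumptions for illustration, not an actual MedPAR layout.

    # Sketch: expected outcomes are simply the mean outcomes of all
    # patients nationally who land in the same diagnosis/severity cell.
    import pandas as pd

    claims = pd.read_csv("medpar_extract.csv")  # hypothetical file layout

    expected = claims.groupby(["principal_dx", "severity"]).agg(
        expected_los=("los_days", "mean"),    # mean length of stay
        expected_mortality=("died", "mean"),  # death rate (died is 0/1)
        expected_cost_wt=("cost_wt", "mean"), # mean cost relative weight
    )

    # e.g., expected.loc[("CHF", "minor")] would hold values like the
    # LOS 3.26 days, mortality 1.48 percent, and cost weight 0.612 cited below.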

Let's look at the principal diagnosis of congestive heart failure (CHF). (1) A minor level of severity may be assigned to a CHF patient who is 65 and has a couple of minor secondary diagnoses, like simple chronic bronchitis and atrial fibrillation.

If we take all the patients in the Medicare database who have the principal diagnosis of CHF and fall into the same minor severity/risk level, we may find the expected length of stay (LOS) to be 3.26 days, the expected mortality rate to be 1.48 percent, and the expected cost relative weight to be 0.612, (2) which we compare with the Medicare DRG relative weight of 1.034 used for our reimbursement. (3)

In essence, a patient at this level of severity would be expected to use fewer resources, have a short LOS, and have a low mortality rate, based on the actual data for all patients at this severity level in the Medicare database.

If the only thing we change in this patient is age, an interesting thing happens. If the same patient is now 80 with the same secondary diagnoses, they move to moderate severity, and the expected outcomes change as well, because we are now averaging all CHF patients at the moderate level.

They would now have an expected LOS of more like 4.92 days, an expected cost relative weight of 0.84 compared with the Medicare DRG relative weight of 1.034, and an expected mortality rate of 4.0 percent.

Still low, but 2.7 times the expected mortality of our 65-year-old CHF patient. So in this case, as long as the age is correct on the UB-04 bill to Medicare, you get credit for the increased severity/risk of your patient compared with others like them.

Now consider a CHF patient who is 80 and has other health issues, both acute and chronic: respiratory failure, acidosis, cardiogenic shock, malnutrition, and chronic vascular insufficiency of the intestine.

This is a sick patient, and this patient would move to the highest severity level, "extreme": an expected LOS of more like 8.17 days, an expected cost relative weight of 1.67 against that Medicare DRG relative weight of 1.034, and an expected mortality rate of 23.3 percent, nearly six times that of the moderate patient.
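
The ratios quoted above follow directly from the expected mortality rates; a quick arithmetic check:

    # Check of the ratios in the text (rates from the example above).
    minor, moderate, extreme = 0.0148, 0.040, 0.233
    print(round(moderate / minor, 1))    # 2.7 -> "2.7 times" the minor rate
    print(round(extreme / moderate, 1))  # 5.8 -> roughly "six times" moderate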

But here is the problem: your hospital coders can code those secondary diagnoses only if the physician documents their existence. Even if anyone could look at an arterial blood gas and see that the patient is in respiratory failure and acidotic, neither can be coded unless the physician documents both.

So why is this important? When the bill for your very sick patient, who is truly at an extreme level of severity, goes out to Medicare without enough documentation to move them into the extreme level for comparison, your performance will be compared with the expected outcomes of patients at whatever severity level your documentation allowed them to reach.

If you were caring for a large number of very sick, extreme-severity CHF patients but your lack of documentation had your performance compared at the moderate or minor level, you would look much worse than expected in your care of CHF. No, that's not fair, but it is how we are measured.

Improving documentation

So now that we know how poor documentation affects our performance, let's look at how improving documentation alone can improve performance.

I have provided data from two hospitals that, at different times, implemented a concurrent documentation improvement program. I can identify the exact point in time the comprehensive program was introduced at each hospital, and you can clearly see how improved documentation dramatically improved their performance relative to what was expected. In fact, the actions moved them into the top decile of overall performance in mortality compared with national benchmarks. (4)

Graph 1 shows mortality in a hospital that began a comprehensive documentation program during the first quarter of calendar year 2007. You can see from the graph that its severity/risk-adjusted mortality index (observed/expected) improved dramatically and in essence reset at a level better than the top 10 percent of community teaching hospitals and better than the top 10 percent of all U.S. hospitals.

[GRAPHIC 1 OMITTED]
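
For readers unfamiliar with the index, here is a minimal sketch of the observed/expected calculation, assuming the severity model has already assigned each discharge an expected probability of death; the counts are hypothetical.

    # Observed/expected (O/E) mortality index: 1.0 means deaths match
    # what the patient mix predicts; below 1.0 means fewer than expected.
    def mortality_index(observed_deaths: int, expected_probs: list[float]) -> float:
        return observed_deaths / sum(expected_probs)

    # 12 actual deaths among 500 discharges, each expected to die 4% of the time:
    print(mortality_index(12, [0.04] * 500))  # 12 / 20.0 = 0.6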

Another small improvement began in the fourth quarter of 2008, when the program expanded to review a greater percentage of the inpatient Medicare population.

The other facility also showed a significant improvement in severity/risk-adjusted mortality. In this case (Graph 2), the comparison group was the top 10 percent of small community hospitals, and I have provided both the rolling year by quarter and the actual quarterly performance (Graph 3) since the documentation program was implemented in the fourth quarter of 2008.

[GRAPHIC 2 OMITTED]

[GRAPHIC 3 OMITTED]

Interestingly, if you test this as a single-factor influence, the p value at the first hospital is <.001 and at the second <.02. So it is statistically appropriate to say that the change in mortality at both hospitals was not random but attributable to the intervention of a comprehensive documentation management program.
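
As an illustration only, one simple way to test a pre/post mortality change is a chi-square test on death counts; the article does not say which test was used, and the counts below are invented.

    # Chi-square test of independence on hypothetical pre/post counts.
    from scipy.stats import chi2_contingency

    pre_intervention = [48, 952]    # deaths, survivors before the program
    post_intervention = [30, 970]   # deaths, survivors after the program

    chi2, p, dof, _ = chi2_contingency([pre_intervention, post_intervention])
    print(f"chi2 = {chi2:.2f}, p = {p:.4f}")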

Graphs 4 and 5 show the second hospital's performance in LOS and cost per case. In both cases there was improvement starting in the fourth quarter, but not as dramatic as the mortality improvement.

[GRAPHIC 4 OMITTED]

[GRAPHIC 5 OMITTED]

The reasons are many, but the reality is that far more factors influence LOS and cost measurements than influence mortality, so mortality tends to respond more dramatically to improvements in severity adjustment.

I am not here to advocate any specific clinical documentation program or how you might want to address the issue of documentation, but if you are frustrated by your lack of significant improvement in your severity-adjusted outcome performance, then don't be afraid to take on documentation as a root cause.

The terminology and concepts around documentation and coding are quite different from the clinical terminology we physicians understand. We may use the same words, but the coding world attaches somewhat different definitions to them than we do clinically.

Also, the rules change all the time, so educating physicians on this just once will bear little fruit. In the end, the best opportunity for transformational change is to invest in a comprehensive concurrent program; there are many competing vendors to choose from.

Unfortunately, without addressing this you may not be getting credit for the good work you are doing. And in the end, it will help you get credit for the reality that "my patients are sicker." You just need to do a better job of showing it.


Footnotes:

(1.) The values described in the example are partly assumed, because they change regularly, and are for demonstration purposes, based on pre-MS-DRG implementation.

(2.) Expected cost relative weight is the relative cost of the APR-DRG severity/risk level compared to a reference point of 1.0.

(3.) Medicare DRG relative weight is the relative weight used to determine the payment to a facility for the DRG that has been coded. Medicare multiplies the DRG relative weight by the hospital's base Medicare payment, which is set at a reference point of 1.0. So if the hospital's Medicare base payment is $5,000 and the relative weight for the DRG is 1.034, the hospital would expect a payment of $5,170 to care for this patient, no matter what the patient's level of severity/risk.

(4.) Data are measured using the Thomson "CareDiscovery" severity/risk-adjusted database. They are actual data and actual hospital performance.

Anthony F. Oliva, DO, MMM, CPE, is chief medical officer at Guthrie Healthcare System in Sayre, Pa.

