Golden nuggets: clinical quality data mining in acute care.
See how a New Jersey medical center used data mining technology to help physicians work more efficiently and reduce medical mistakes.
As physicians, we know how challenging quality improvement in the acute care setting can be. Each year an estimated 71,000 people die in U.S. hospitals from medical error, making error in our $2 trillion (1) health care system one of the leading causes of death in this country.
Additionally, the Centers for Disease Control and Prevention estimates that patients develop 1.7 million infections in hospitals each year and that those infections contribute to the deaths of 270 people a day. (2)
Our society demands health care perfection, yet we struggle to provide good care to patients while suffering from overwhelming budgetary and medical-legal constraints. To make matters worse, Medicare has just announced it will no longer pay for certain complications. This public demand for high-quality, low-cost health care is driving many health organizations, especially in acute care, to develop and implement evidence-based practices (EBP).
Faced with this and increasing market competition, health care organizations are also introducing large integrated hospital information system (HIS) warehouses to capture diagnostic and patient data that can be consolidated and employed in the pursuit of quality health care.
At Kimball Medical Center, an affiliate of the Saint Barnabas Healthcare System in New Jersey, we devised a unique new way to combine EBP and HIS using data mining. From this data mining process, we found "golden nuggets" of clinical quality data that helped us improve patient care while providing best-practice information to physicians.
What is data mining?
Data mining is the extraction of hidden predictive information from large databases such as those found in HIS warehouses. It allows analysts to examine large amounts of data by scouring databases for hidden patterns, finding predictive information that some may miss because it lies outside their expectations. (3)
Data mining is an extension of statistics, and like statistics, it is a technology, not a business solution in itself. What matters is what one does with the information the mining effort uncovers.
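As a minimal illustration of the idea, the sketch below counts how often pairs of diagnoses co-occur across hospital cases--a crude version of the pattern discovery a mining tool performs at scale. The diagnosis sets here are purely hypothetical toy data, not records from any real warehouse.

```python
from collections import Counter
from itertools import combinations

# Hypothetical toy records: the set of diagnoses coded on each case.
# A real HIS warehouse would hold millions of such rows.
cases = [
    {"heart_failure", "renal_failure"},
    {"heart_failure", "renal_failure", "pneumonia"},
    {"heart_failure", "pneumonia"},
    {"copd", "pneumonia"},
]

# Count how often each pair of diagnoses appears together -- an
# association that may lie outside a reviewer's expectations.
pair_counts = Counter(
    pair for case in cases for pair in combinations(sorted(case), 2)
)

top_pair, top_count = pair_counts.most_common(1)[0]
```

A production tool would, of course, weight these associations statistically rather than simply counting them, but the principle of surfacing unexpected co-occurrence is the same.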
Stepping up to the challenge of getting physicians to adopt best practices at Kimball Medical Center, we elected to work with ProcessProxy[TM] Corporation, a data mining technology firm based in Owings Mills, Md., that uses a new, patent-pending technique.
We also implemented Lean Six Sigma (LSS) techniques to create a robust picture of improvement opportunity. Simply put, LSS is a disciplined, data-driven methodology for eliminating non-value-added steps in a process.
LSS makes sure we are working on the right things at the right time for the right reasons. Since LSS has had significant published success in health care, it was a given that we would apply these techniques in concert with the data mining.
Our investigation into how to improve patient care came from the realization that some physicians appeared to have significantly higher complication rates than others. When the medical cases were manually reviewed, we observed that:
* Many of the complications were "present on admission" but were not justifiably managed.
* Another portion of the cases were not classified correctly to reflect justifiable management of the cases' severity.
* There were even some cases with conditions that were not significant enough to warrant any management at all.
With the objective to help physicians improve their best practices (thus reducing errors in justification management) while simultaneously helping the hospital reduce length-of-stay (LOS), we began our investigation of data that included:
* Transcription (H&P, consultant, ED notes, radiology dictations, etc.)
* Coding (principal/admitting, diagnosis, LOS, etc.)
* Admission, discharge and transfer
* Laboratory results and cultures
* Pharmacy orders
* Computerized physician order entry (orders for procedures, consults, etc.)
All of these data existed in one format or another, so it was relatively easy to get them into the model for data mining. Listed below are the steps we took in our mining effort.
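A simplified sketch of that consolidation step, assuming each source can be keyed by a shared case identifier. The field names and values below are illustrative assumptions, not our actual schema:

```python
# Hypothetical extracts from three HIS sources, keyed by case ID.
coding = {"case-001": {"drg": "heart_failure", "los": 9}}
labs = {"case-001": {"creatinine": 3.1}}
pharmacy = {"case-001": {"orders": ["furosemide"]}}

def consolidate(case_id, *sources):
    """Merge every source's fields for one case into a flat record
    suitable for feeding into a mining model."""
    record = {"case_id": case_id}
    for source in sources:
        record.update(source.get(case_id, {}))
    return record

record = consolidate("case-001", coding, labs, pharmacy)
```

Because the sources already existed in structured form, the real work is mapping each source's identifiers and field names onto a common key, which this sketch takes for granted.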
Step 1: Identify opportunities
We started with a fundamental analysis to examine the spreads between LOS overages, severity estimates and differential diagnosis complexities. Any of the medical cases that had a positive spread typically fell into one of four categories:
* One or more complications
* Not justifiably managed
* Justifiably managed, but incorrectly classified
* Weak support of one or more diagnoses
This complexity spread identification is illustrated in Chart 1.
To interpret the chart, one must look at the relationships between the DRGs on the x-axis and the DRGs on the y-axis. Ideally, you don't want to see any circles in the grid; an empty grid means the case management processes are the best they can be.
In our case, each circle represents the estimated number of LOS days being added to the DRG on that row of the grid. During the three quarters under examination, heart failure (HF) LOS was being extended most by renal failure (44 days), then by respiratory infection (14 days). This meant that heart failure patients were probably being managed more for renal failure and respiratory infection than for their original diagnosis!
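The quantity behind each circle--LOS days added to a primary DRG by a complicating condition--can be approximated with a simple aggregation. This is a hedged sketch with made-up case data; the numbers are not our actual figures:

```python
from collections import defaultdict

# Hypothetical cases: (primary DRG, complicating DRG or None,
# days of stay beyond the DRG's expected LOS).
cases = [
    ("heart_failure", "renal_failure", 6),
    ("heart_failure", "renal_failure", 5),
    ("heart_failure", "resp_infection", 3),
    ("heart_failure", None, 0),
]

# Total LOS overage attributed to each (primary, complication) pair --
# the quantity a circle on a grid like Chart 1 would represent.
added_days = defaultdict(int)
for primary, complication, overage in cases:
    if complication is not None:
        added_days[(primary, complication)] += overage
```

In practice the attribution is harder than this, since a single case may carry several complicating conditions that share responsibility for the overage.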
Step 2: Validate the reasoning behind the arbitrage opportunities
Next, we determined our data mining opportunities by analyzing patient cases for severity and complexity. Severity was established by formula estimation and coder validation and then compared to our in-house best practices (IHBP).
A key finding for us was that if a case was too severe for the DRG, then either case management or classification fell short. Complexity was established by determining all of the co-morbid and complicating conditions in the case.
These two fundamental analyses were "golden nuggets in the rough"--offering gems for identifying improvement possibilities. Chart 2 illustrates the severity and complexity spread by attending physician.
As one can see at Kimball Medical Center, Attending Physician 17 had at least two cases where there was high complexity but low overage or low complexity but high overage. There are various conclusions that can be drawn from the many spreads, (4) but the golden nugget here is when severity and complexity lines (which can be subdivided to more lines) are both high, but LOS overage is low.
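One way to express this screen in code is as a pair of simple predicates. The thresholds, and the reduction of severity and complexity to single numbers, are illustrative assumptions, not our production formulas:

```python
def is_golden_nugget(severity, complexity, los_overage,
                     sev_cut=4.0, cx_cut=3, overage_cut=2):
    """High severity AND high complexity but low LOS overage: a case
    managed well despite its difficulty -- worth teaching from.
    Cutoffs are hypothetical, for illustration only."""
    return (severity >= sev_cut
            and complexity >= cx_cut
            and los_overage <= overage_cut)

def is_mismatch(complexity, los_overage, cx_cut=3, overage_cut=2):
    """High complexity with low overage, or low complexity with high
    overage -- the kind of spread flagged for Attending Physician 17."""
    return (complexity >= cx_cut) != (los_overage > overage_cut)
```

Screening every attending's cases through predicates like these is what turns two fundamental analyses into a ranked list of coaching candidates.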
As is true in many medical situations, often the fundamentals of the patient's case don't tell us enough, so we included technical analyses of key resource-intensive "trader issues" (physician volume) such as admissions or discharges on that day or days around the case.
These additional spreads were incorporated into the improvement valuation modeling. Chart 3 displays some of the workload-to-performance issues that we discovered.
What we found is that, for approximately the same severity and case mix, LOS overage increased dramatically with just small rises in daily admissions. For instance, in Week 23, the admit rate for the physician was 2.5 per day, and severity averaged 5.00.
However, the LOS overage spiked to nine days over for that physician in that week alone. With daily data modeling we saw falling LOS overages, even with the same admit rate, case mix and patient severity. This gave us new (and formerly hidden) "triggers" to add to the software alert mechanisms so that we could act on the information in real time.
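A trigger of this kind can be sketched as a simple rule: flag a physician-week when overage spikes without a matching rise in severity. The baselines and limits below are illustrative assumptions, not our actual alert settings:

```python
def workload_alert(admits_per_day, avg_severity, los_overage,
                   baseline_admits=2.0, baseline_severity=5.0,
                   overage_limit=5):
    """True when admissions climb and LOS overage spikes even though
    average severity stayed at or below its baseline -- a workload
    signal, not a sicker-patient signal. Thresholds are hypothetical."""
    return (admits_per_day > baseline_admits
            and avg_severity <= baseline_severity
            and los_overage > overage_limit)

# The Week 23 pattern from the text: 2.5 admits/day, severity 5.00,
# nine days of overage.
alert = workload_alert(2.5, 5.00, 9)
```

Wired into the alerting software, a rule like this fires while the workload spike is still happening, rather than surfacing in a retrospective review weeks later.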
Step 3: Develop an audit trail
Finally, we used the software to help us develop data-based materials that we could use with physicians in one-on-one coaching.
Charts 4-6 illustrate some of the data graphs we employed in our coaching sessions. We also kept an audit trail of the cases that resulted in the spreads--this was a critical piece in helping physicians understand where they needed to improve.
Step 4: Find the best examples to use for rapid knowledge transfer to physicians
Using both the software and LSS, we were able to find the best examples to use for rapid knowledge transfer, which meant that we could help physicians adopt best practices by learning quickly from their own and their peers' cases.
By looking for the best examples we were able to "train" the data mining tool so that it could reduce the options it had to consider through each iteration of the predictive model, making each golden nugget clear and concise.
Using the predictive models daily also allowed us to reduce the false negatives and false positives when concurrent teaching opportunities arose for those problems.
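Tracking those false positives and negatives across daily iterations can be as simple as a confusion count. This sketch uses hypothetical labels (1 = a case truly worth a teaching intervention, 0 = not):

```python
def confusion_counts(predicted, actual):
    """Count false positives (flagged but not teachable) and false
    negatives (teachable but missed) for one model iteration."""
    fp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 0)
    fn = sum(1 for p, a in zip(predicted, actual) if p == 0 and a == 1)
    return fp, fn

# Illustrative day's run: the model flagged the first two cases;
# reviewers confirmed the first and fourth were true teaching cases.
fp, fn = confusion_counts([1, 1, 0, 0], [1, 0, 0, 1])
```

Watching these two counts fall day over day is how one verifies that "training" the tool on the best examples is actually narrowing its options rather than merely shifting its errors.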
Through the data mining, we were able to truly understand the factors behind physicians' past successes and failures as measured against IHBP medical cases. Using our four-step process improvement model, we effectively and efficiently demonstrated significant improvement in physician best practice adoption by:
* Converting, collating, and digesting vast amounts of complex, disparate patient-related data from various sources to glean the "golden nuggets"
* Matching those golden nuggets to evidence-based medicine
* Using software to match the historical best cases to the current cases that are best to teach--and doing this on a concurrent basis
* Identifying problem cases and developing an IHBP Portfolio[TM] for them, highlighting those golden nuggets
* Presenting the portfolio to the physician to positively influence practice behavior change
In this way, we used a "bottom-up" approach to best practice innovation--by finding examples and taking advantage of what works well in our medical center based on our facility's innovators.
Additional benefits of this process included less time and effort to learn effectively and less time and effort to teach effectively. We were pleased to find that process improvement and best practice adoption were enhanced. Estimates based upon our initial study alone demonstrated a $50,000+ monthly gain!
(1.) Porter ME and Teisberg EO. Redefining Health Care: Creating Value-Based Competition on Results, Boston: Harvard Business School Press, 2006.
(2.) Pear R. Medicare Says It Won't Cover Hospital Errors, The New York Times, August 19, 2007.
(3.) Thearling K. "An Introduction to Data Mining: Discovering Hidden Value in Your Data Warehouse," Thearling.com, 1996, <http://www.thearling.com/text/dmwhite/dmwhite.htm> (September 18, 2007).
(4.) Process Arbitrage[TM] and In-House Best Practice Portfolio[TM] are trademarks of ProcessProxy[TM] Corporation.
By Ragupathy Veluswamy, MD, MMM, CPE, FAAP
Ragupathy Veluswamy, MD, MMM, CPE, is vice president and chief medical officer at Wyoming Valley Healthcare System in Wilkes-Barre, Pa. He can be reached at firstname.lastname@example.org.
Date: May 1, 2008