# Predictions of Charges of Individual Hospital Cases and a Method for Selecting Cases for Review.

In the normal course of events, an acute care hospital has no specific idea of how much the charges for an individual case should be. In the United States, there are set standards for how much third parties will pay for a case, but charges are a different matter. Because third parties reimburse hospitals based primarily on diagnosis, hospital staff assign each case to a Diagnosis Related Group (DRG) according to its principal diagnosis, the diagnosis primarily responsible for the hospitalization. The payment itself is determined for all cases of a given DRG in a given hospital by negotiation (or by imposition) between the hospital and third-party reimbursers, who are usually insurers. The first and second parties, that is, the patient and the hospital, rarely negotiate directly between themselves or impose a price. The third-party reimbursers are called that because the amount of their payment is theoretically supposed to reimburse the hospital for its cost of the case. This amount varies with the nature of the principal diagnosis, sometimes secondary levels of diagnosis, the patient's age, the hospital's location (urban or not), wage rates in the hospital's geographic market, the number of teaching programs the hospital sponsors, utility rates, and other similar factors. (1)

The insurers fall into various categories. They are governmental (Medicare and Medicaid) or private, both for-profit and not-for-profit, such as Blue Cross Blue Shield, Humana, and Aetna. Employers generally contract with these companies for coverage of employees, though this has changed somewhat in recent years under the Affordable Care Act, also known as Obamacare, which established insurance marketplaces. Funded by the federal government, Medicare covers hospitalization, physician, and prescription drug charges for the aged and permanently disabled, while the federal and state governments jointly fund Medicaid, which covers the indigent. Other payers include Tricare for the military services, workers' compensation (a state-administered program), and other private-sector payers. A small fraction of patients is self-insured, and another fraction has no insurance; hospital provisions for charity care generally handle the latter cases. Total inpatient costs in the U.S. were reported as $381.4 billion during calendar year 2013. (2)

A hospital, in contrast, generally keeps track of the variable charges and allocated fixed charges for each case, but these charges do not correspond directly to the hospital's costs. There are historical reasons for this discrepancy, stemming in part from decades-old reimbursement practices. (3) The charges are generally much greater than the actual costs. A hospital uses a ratio of cost to charges (RCC), an institution-wide deflator that restates charges as costs. Because it applies to the entire institution, the RCC is a blunt, nonspecific tool, which has led some facilities to use departmental converters instead; these are more cumbersome because of their number, at least 20 in a facility of any size. (4) The hospital accumulates the charges and then presents either the patient or the insurer with a bill.
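As a minimal illustration of the RCC mechanism described above (the ratio value and the bill amount here are hypothetical, not taken from the article):

```python
# Hypothetical institution-wide ratio of cost to charges (RCC).
RCC = 0.42

def estimate_cost(charges: float, rcc: float = RCC) -> float:
    """Restate billed charges as estimated cost via the RCC deflator."""
    return charges * rcc

# A $10,000 bill deflates to roughly $4,200 of estimated cost at an RCC of 0.42.
print(estimate_cost(10_000.00))
```

Because one ratio covers every department, every service on the bill is deflated identically; that is the bluntness the departmental converters try to remedy.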

Recorded hospital charges for individual episodes of care are subject to error from a variety of causes. Error rates in the principal diagnosis alone range between 7% and 22%. (5) Errors in other variables are certainly possible, perhaps even at greater rates than those for the principal diagnosis. Misattribution of charges results from attributing charges to a patient to whom the hospital did not provide the specific services or goods. Nonattribution of charges results from failing to apply charges to any patient at all. Episodes of care are herein referred to as cases, patients, or discharges.

At this point, the hospital has no real gauge of the reliability of the bill because there is simply no standard for how much a given bill should be. This is not only an issue of accuracy; at times there is a question as to whether the bill corresponds even roughly to the reality of the hospital stay.

A NEW TOOL FOR ACCURACY?

This work presents, for the first time, an equation that can flag a possibly inaccurate bill. The equation produces limits within which an individual bill should fall 95% of the time and thus functions as a management accounting tool to discriminate between bills that are within limits and those that fall outside them. Management accountants should then evaluate all bills exceeding these limits, whether low or high, to determine how far out of bounds they are in dollar amounts. This article explores the practical utility of this process as well as its possible financial consequences.

The study aims to create a way to pinpoint cases for intensive utilization review: a special class of patients likely to use, or not use, financial resources beyond the limits of what can reasonably be expected for each individual case. The indicator can identify those patients when the hospital assigns critical values to the case before submitting it for external review, compilation, and payment.

Utilization review can be either prospective, concurrent, or retrospective. "Utilization review (UR) is a safeguard against unnecessary and inappropriate medical care. It allows health care providers to review patient care from the perspectives of medical necessity, quality of care, propriety of decision-making, place of service, and length of hospital stay." (6)

From a management accounting perspective, this tool is capable of much greater specificity than ordinary procedures allow. A hospital or an insurer can use it to flag discharges whose charges differ significantly from what can be expected for that individual case. Of course, medical necessity must direct patient care, but, where possible, the cost of a case, as approximated by its charges, may reveal instances of misattribution or nonattribution of charges that need correcting. Because of this focus on charge discrepancies, a facility could expand the purpose of utilization review to include locating and correcting the costing discrepancies the tool finds.

Using this tool follows certain statistical procedures, with data collection as step one. A New York State body, the Statewide Planning and Research Cooperative System (SPARCS), collects detailed data on all hospital cases within the state, as required by legal authority. (7) The data consists of complete sets of all statewide discharges for 2011 and 2012, which included between 2 million and 2.5 million cases each year. Hospitals outside New York State could use this prediction method by rerunning the equation to develop new institutional coefficients. The hospital could formalize predictions at or after any time that a principal diagnosis and corresponding DRG are specified for the individual case.

The statistical method, the general linear regression model, allows us to find numerical coefficients for variables that are expressed in categories, such as gender, race, the case's DRG, or a hospital designation. Only three items are necessary to adequately predict the total charges on the bills within a given year: the federal new DRG classifying the case, the days of the inpatient hospital stay, and a designation of the hospital itself. These three variables produce an R-squared of 0.7462 for the equation. With a perfect correlation having an R-squared of 1.0, this is remarkably high for almost any work in the social sciences and indicates that the resulting equation explains almost three-quarters of the variation in the data. Other variables tested included age, which did not change the R-squared by as much as 0.0001, indicating the three critical variables already captured any effect of age.

Of the three critical variables, two were categorical in nature: the federal new DRG and the indicator variable for the hospital itself. Given that 748 DRGs were in use statewide during 2011 and 2012 and there were 224 hospitals in the data set, it is necessary to construct 748 + 224 = 972 individual indicator variables to represent the possible categories. Each indicator, such as the one for an individual hospital, consists entirely of zeroes except for the cases belonging to that category, where it contains a 1.
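The indicator-variable construction can be sketched as follows; the hospital names, DRG codes, and cases are invented for illustration:

```python
import pandas as pd

# Toy cases at two hypothetical hospitals with two hypothetical DRG codes.
cases = pd.DataFrame({
    "hospital": ["A", "B", "A"],
    "drg": ["470", "291", "470"],
    "los": [3, 5, 2],  # length of stay in days (the one quantitative variable)
})

# One 0/1 indicator column per category level: each case carries exactly one 1
# among the hospital columns and exactly one 1 among the DRG columns.
design = pd.get_dummies(cases, columns=["hospital", "drg"], dtype=int)
print(sorted(design.columns))
```

At the article's scale this yields 972 such columns rather than the four shown here, which is what makes the problem large.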

DEVELOPING THE VARIABLES

This many indicator variables makes the problem a large one. Each variable uses a degree of freedom, a term describing a coordinate value that helps determine a system. In this case, the system (a set of coordinate values) was determined once values had been specified for each hospital and each DRG in the sample, plus one for length of stay, which is a numeric variable, and one for the intercept (the value at which the fitted line crosses the y-axis). With only three critical variables, the analysis thus used 972 degrees of freedom. (8) Because too many degrees of freedom relative to the number of observations can adversely affect an analysis, I counteracted this effect by including all available cases, 2 million to 2.5 million per year.

The actual calculation mechanics of the individual coefficients is beyond this discussion. Any book on applied regression analysis can be a reference for this. Such a text would also be a reference for the mechanics of computation of the lower and upper limits of the prediction interval, which in about 95% of cases will bracket the actual value for the case. (9)

I calculated the predicted value of the patient's bill by adding the intercept, which is the same for all cases, to the coefficient the regression determined for the case's individual hospital, then adding the coefficient the regression found for the case's DRG, and finally adding the product of the length-of-stay coefficient and the stay in days. The total gives the predicted value of the bill in dollars. Examples of this calculation appear in Table 1; they are not from actual cases. Although the data is typical, it cannot be traced to actual cases, in compliance with the SPARCS policy prohibiting such release.
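The addition described above can be written out directly; the coefficient values below are those shown for Case 1 in Table 1:

```python
# Case 1 coefficients from Table 1 (the length-of-stay term is already the
# per-day coefficient multiplied by the days of stay).
intercept = 546.11       # same for all cases
hospital_coef = -7_714.88  # this hospital's coefficient
drg_coef = 9_654.00        # this DRG's coefficient
los_term = 13_435.20       # length-of-stay coefficient x days

predicted_bill = intercept + hospital_coef + drg_coef + los_term
print(round(predicted_bill, 2))  # 15920.43, matching Table 1
```

Since the Case 1 actual charges of $32,363.11 fall between the limits of $785.57 and $79,379.16, the suggested action is to accept the bill.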

In 2.9% of the cases, the actual bill exceeded the upper confidence limit for the case. This could mean that the hospital inaccurately recorded one of the parameters (the hospital itself, the DRG, or the length of stay), inaccurately recorded someone else's charges on the given bill, or inaccurately recorded the bill itself. It could also mean the case itself was truly unusual, possibly because of major unforeseen complications in the patient's pathology. Similarly, in 1.8% of the cases, the actual bill was less than the lower confidence limit for the case, and analogous causes could be responsible. At this point, a trained team including clinical, billing, coding, and financial specialists could review the case, at a cost that might average $500 per case.

In 2011, almost 69,000 cases showed an aggregate amount of about $2.7 billion over the upper confidence limits. On average, this is an amount of about $39,000 per discharge. Even in the DRG system, payment is not limited to the adjusted DRG amount for all cases.

In the top 5.1% of cases based on charges, the hospital and the insurer share the expenses. (10) Using the method in this work, I assumed that the atypicals, those discharges with charges above the upper confidence limit or below the lower confidence limit, are possibly inaccurate and can be corrected.

A CLOSER LOOK

The following sections present detailed information about the data, methods, and results of this analysis. Other researchers can evaluate the work in detail and duplicate the results if they use the same methods and data.

Data and Data Collection

This study uses the data SPARCS is legally required to collect: all discharge records from all acute care inpatient hospitals in New York State except specialty facilities such as psychiatric hospitals. After collection, SPARCS reviews, compiles, and screens the data. For example, length of stay must be a numeric value greater than zero. Additional conditions sometimes apply: if the patient is discharged the same day as admission, the length of stay is recorded as 1 and the Same Day Discharge Indicator is set to 1 (it is 0 otherwise).

The data consists of more than 4 million records representing about 2.2 million to 2.5 million discharges per year from more than 200 hospitals. Each record is 5,076 characters long, in a flat-file format, and contains variables representing clinical, demographic, administrative, and financial elements. The data elements differ somewhat over time. The data dictionary is an electronic volume about 250 pages long.

There are three versions of SPARCS data: the public-use, limited, and identifiable data sets. Public-use data allows access through a query system that presents user-specified tables on an annual basis. The limited and identifiable data sets give the user access to individual case records. The limited data set contains no direct identifiers, but it does contain what the Health Insurance Portability and Accountability Act of 1996 (HIPAA), a federal law, defines as indirect identifiers; because it lacks direct identifiers, it cannot be used to track multiple admissions for a single individual. The identifiable data set contains the same data as the limited set plus the individual identifiers the analyst specified when compiling the data set for use. The provisions of the contract between SPARCS and the analyst prohibit release of actual data in cells containing information from fewer than six individuals. The Data Governance Council of the State of New York must approve all requests for the limited or identifiable data sets. (11)

This study used the limited data set, comprising the complete sets of inpatient data from 2011 and 2012. I chose these sets because they were relatively recent yet well settled and therefore unlikely to change in any material respect.

The data used to develop the equations in this study is from 2,459,687 individual discharges in 2011 at 224 separate hospitals; the complete population was part of this analysis. To trim outliers, I dropped all discharges whose indicator showed the same day of discharge as admission, which covered 46,241 discharges (almost 1.9%) in the 2011 data. On the high end, 24,071 cases (almost 1.0%) had a stay greater than 39 days. The 39-day cutoff is arbitrary and eliminates roughly the top 1% of cases by length of stay as nonrepresentative. This left 2,389,375 cases--a usability of about 97.1% of the original sample.
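The two trimming rules reduce to a simple filter; the mini data set below is hypothetical, standing in for the SPARCS extract:

```python
import pandas as pd

# Hypothetical extract with the two fields the study trims on.
df = pd.DataFrame({
    "los": [3, 1, 45, 12],        # length of stay in days
    "same_day": [0, 1, 0, 0],     # Same Day Discharge Indicator
})

# Drop same-day discharges and stays longer than 39 days, as in the study.
trimmed = df[(df["same_day"] == 0) & (df["los"] <= 39)]
print(len(trimmed))  # 2 of the 4 toy cases survive trimming
```

Applied to the real 2011 population, the same two conditions retain about 97.1% of the cases.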

The data for the application portion of this study consists of 2,393,356 individual discharges from 224 hospitals in 2012. Again, I considered the complete population. Trimming nonrepresentative cases proceeded the same way: 47,270 cases (almost 2.0%) had the same day of discharge as admission, and 23,263 cases (about 1.0%) had a length of stay greater than 39 days. This left approximately 97.1% of the original 2012 sample, or 2,322,823 cases, for analysis. The usability percentage is the same (to three significant figures) for the 2011 and 2012 data.

Methods

I used SAS 9.4 to analyze the data. (12) The analysis would have been impossible as a practical matter without special equipment to process the large amount of data in a reasonable time. The computer had 16 gigabytes of Random Access Memory, an Intel I7 central processing unit, and a 1-terabyte solid-state drive.

I chose variables based on prior knowledge, seeking maximum explanatory power from a minimum number of variables and favoring those with face validity for explaining the variation in the total charges a patient incurs during a hospital stay.

The three variables provided the necessary explanatory power and include the following:

* The Diagnosis Related Group assigned to the case, which reflects the principal clinical factors (the principal diagnosis) responsible for the hospital stay,

* The length of stay specified in days, and

* The unique nature of the individual hospital.

Length of stay is the only quantitative variable; the other two indicate states of being. There were 748 DRGs categorizing the 2011 data and 224 hospitals. Each category occupies one degree of freedom; subtracting two (one for each categorical variable) and adding one degree of freedom for length of stay and one for the intercept gives 972 degrees of freedom. (13) The result is a very large matrix (748 x 224 = 167,552 cells) with an average of about 14 cases in each cell. This is the primary reason for using all the cases available in the population. A test with about a quarter of a million cases per year--a 10% sample--showed very little difference in the overall equations, but almost all cells showed better significance with a full year's data.

The technique used to analyze this data is generalized linear regression. Ordinary least squares (OLS) regression accommodates only quantitative variables; expanding it to handle categorical variables, such as race, gender, type of postoperative care, or DRGs, brings the scope of possible analyses to an entirely different level. As in any regression, mathematical operations calculate a coefficient for each variable, but in generalized least squares (GLS) regression each level of a categorical variable has its own indicator, which takes the value 1 for cases in that level and 0 for all others, and the regression estimates one coefficient per level. For example, the indicator for Hospital A has the value 0 for every case except Hospital A's own, so each hospital has a coefficient calculated just for itself.
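A small self-contained sketch of dummy-coded regression on synthetic data (all names, coefficients, and data invented; the real model is fitted in SAS on millions of cases). One level of each categorical variable is dropped to avoid collinearity with the intercept, which is where the "minus two" in the degrees-of-freedom count comes from:

```python
import numpy as np
import pandas as pd

# Tiny synthetic data set: total charges driven by hospital, DRG, and length
# of stay, mirroring the structure of the article's model.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "hospital": rng.choice(["A", "B", "C"], n),
    "drg": rng.choice(["291", "470"], n),
    "los": rng.integers(1, 10, n),
})
base = {"A": 2000.0, "B": 5000.0, "C": 3000.0}      # true hospital effects
drg_add = {"291": 8000.0, "470": 1000.0}            # true DRG effects
df["charges"] = (df["hospital"].map(base) + df["drg"].map(drg_add)
                 + 1500.0 * df["los"] + rng.normal(0, 200, n))

# Design matrix: intercept + LOS + one indicator per remaining category level.
X = pd.get_dummies(df[["los", "hospital", "drg"]],
                   columns=["hospital", "drg"], drop_first=True, dtype=float)
X.insert(0, "intercept", 1.0)

# Least-squares fit; each category level gets its own coefficient.
beta, *_ = np.linalg.lstsq(X.to_numpy(float), df["charges"].to_numpy(), rcond=None)
coefs = dict(zip(X.columns, beta))
print({k: round(v) for k, v in coefs.items()})
```

With enough cases per cell, the fitted coefficients land close to the true per-day and per-hospital effects built into the synthetic data.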

Next I substituted the coefficients into the equation and added the results for each individual variable to the total. Then I added the intercept, and the grand total became the predicted value for that case, with its given hospital, DRG, and length of stay. (14) An example of the calculation appears in Table 1.

R-squared, the coefficient of determination, shows the percentage of the variation in the data that the equation explains. I calculated both the R-squared and the adjusted R-squared, which takes into account the number of degrees of freedom in the calculations. At this point I attempted to add other variables to the equation in the hope of increasing the R-squared. I tried several variables, and in no case did the R-squared increase by as much as one part in 10,000. The three variables together were therefore judged sufficient to explain as much of the data as one could reasonably expect. Quantitative results appear in the next section.

At this point, I stored the model's parameters and conducted a test of its conclusions using 2012 data, which I had already trimmed using the same objective criteria described earlier. I first tested the process by performing the same GLS regression on the 2012 data that I had performed on the 2011 data.

After this procedure, which I used as validation of the original model, I calculated 95% upper and lower confidence limits for every case in the 2012 data. Then I performed calculations on each data point to determine which ones exceeded their upper limits or were below their lower limits. This functioned very much like a control chart for each individual case. (15)
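The per-case screen works like the small classifier below; the limit values are the Table 1 figures rather than freshly computed ones:

```python
def classify(actual: float, lower: float, upper: float) -> str:
    """Flag a case whose actual charges fall outside its 95% prediction
    limits, in the manner of a per-case control chart."""
    if actual > upper:
        return "atypical-high"
    if actual < lower:
        return "atypical-low"
    return "accept"

# Table 1, Case 1: actual charges sit inside the limits.
print(classify(32_363.11, 785.57, 79_379.16))       # accept
# Table 1, Case 2: actual charges exceed the upper limit.
print(classify(230_255.70, 70_312.76, 148_884.60))  # atypical-high
```

Each case gets its own limits, so the screen adapts automatically to the hospital, DRG, and length of stay rather than applying one global threshold.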

I calculated the two types of differences--those above the upper limits and those below the lower limits--for each case in which they occurred and then summed. The results are in dollars and can indicate whether the differences were material in amount compared to the total amount of charges for all cases as well as for individual cases.

Results

Graphs of trimmed data of the two years showing length of stay vs. frequency (Figures 1 and 2, pp. 25 and 26) display clearly a nearly perfect set of matched extreme value distributions. Both members of the pair also show a matched discrepancy in the distribution at the length of stay of one day. There is stability in the underlying processes during at least these two years. The same phenomenon appeared in the untrimmed data.

Using GLS regression showed that, for the untrimmed data, the R-squared was .6120 for 2011 and .6489 for 2012. For the trimmed data, the R-squared was .7462 in 2011 and .7433 for 2012. All these numbers came from regressions of individual case total charges against the three variables of new federal DRG, length of stay, and the hospital itself.

A point of caution is necessary. Tests on R-squared become increasingly sensitive as the number of cases increases. (16) Since the cases number in the millions, a comparison between the years will show the two values to be statistically different from each other; with far fewer cases, the two years would test as equal. This is a limitation of several statistical tests. In this analysis, having enough cases to make the coefficients accurate matters more than having the tests come out a particular way.

The variable of patient age would seem to have considerable face validity as a correlating factor, so I tested it alongside the three primary variables. Its inclusion added one degree of freedom to the analysis but did not change the results at all to four significant figures. This implies that any explanatory power of patient age was already expressed by the three primary variables.

It seemed of interest to determine what explanatory power each primary variable had and how they interacted with each other. To accomplish the first goal, I performed separate regressions for each variable on the 2011 data. For the second, I performed stepwise regression on the set of three variables. I also regressed the patient age variable separately against the 2011 data and then included it in the stepwise process with the other three variables. Results for these calculations appear in Table 2.

Scoring is the process of applying regression coefficients developed in one data set to another. Here I applied the regression developed on the 2011 data to the 2012 data set and developed upper and lower confidence limits for every case where I made a prediction. There were 4,590 cases, only about 0.2% of the total, where I could not make a prediction because those cases did not have a combination of variables that had been used to develop the regression coefficients. I dropped from further analysis the cases for which confidence limits could not be developed.
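A scoring sketch under stated assumptions: the coefficient dictionary is hypothetical (the hospital and DRG values echo Table 1), and cases whose hospital or DRG never appeared in the fitting year return no prediction, mirroring the 0.2% of 2012 cases dropped:

```python
# Hypothetical fitted coefficients from the earlier year's regression.
coef = {
    "intercept": 546.11,
    "los": 4_478.40,  # per-day coefficient
    "hospital": {"A": -7_714.88, "B": 6_342.31},
    "drg": {"470": 9_654.00, "291": 64_485.00},
}

def score(case):
    """Predicted charges for a case, or None when its hospital or DRG
    carries no fitted coefficient."""
    h = coef["hospital"].get(case["hospital"])
    d = coef["drg"].get(case["drg"])
    if h is None or d is None:
        return None  # combination unseen in the fitting year
    return coef["intercept"] + h + d + coef["los"] * case["los"]

print(round(score({"hospital": "A", "drg": "470", "los": 3}), 2))
print(score({"hospital": "Z", "drg": "470", "los": 3}))  # unseen hospital: None
```

Returning None rather than guessing keeps unseen combinations out of the confidence-limit comparison, exactly as the study drops them.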

The next phase of the analysis was purely financial. I determined that the total face value of all charges in New York in 2012 was $72,069,265,988, with $11,840,661,640 of that total being cases where the actual charges were greater than the upper confidence limit. These cases exceeded those limits by the aggregate amount of $2,682,414,001 and included 61,039 cases, approximately 2.6% of the total.

Those cases with the actual value of charges less than the lower confidence limits had a value of $1,911,589,989 and included 30,830 cases, approximately 1.3% of the total. The aggregate amount by which they were less than the lower confidence limit was $587,297,237. (See Table 3.)

The average case over its upper confidence limit exceeded that limit by $43,946 (2,682,414,001/61,039), and the average case under its lower confidence limit fell short of that limit by $19,019 (587,297,237/30,880). The average total charge per case in the trimmed data was $31,027 (72,069,265,988/2,322,823). The average excess is thus 142% of the average total charge of all studied cases, even though exceeding the upper confidence limit is a much more stringent standard than merely exceeding the average. The average shortfall below the lower confidence limit is likewise a substantial fraction (61.3%) of the average total charge per case.
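The quoted averages and percentages follow directly from the Table 3 aggregates and the case counts used in the divisions above:

```python
# Aggregates from Table 3 and the case counts used in the text's divisions.
over_total, over_cases = 2_682_414_001, 61_039
under_total, under_cases = 587_297_237, 30_880
all_charges, all_cases = 72_069_265_988, 2_322_823

avg_over = over_total / over_cases     # average excess over the upper limit
avg_under = under_total / under_cases  # average shortfall below the lower limit
avg_case = all_charges / all_cases     # average total charge per case

print(round(avg_over))                       # ~43,946
print(round(avg_under))                      # ~19,019
print(round(avg_case))                       # ~31,027
print(round(avg_over / avg_case * 100))      # ~142 (%)
print(round(avg_under / avg_case * 100, 1))  # ~61.3 (%)
```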

DISCUSSION

The findings of this study build on each other. First, the distribution of hospital charge data is stable over at least a two-year period, once trimmed, and even the percentage of cases trimmed is stable. Second, three variables alone are enough to explain almost three-quarters of the variation in the trimmed data. Third, regression results on both the trimmed and untrimmed data are very close to each other over the two-year period, showing yet another dimension of similarity and stability in the data.

When the second year is scored using the results of the regression on the first year, the results are what one would expect from a well-ordered data set: the great majority of the actual charges (96.0%) fall within their upper and lower confidence limits, despite the fact that their distribution is definitely not normal (see Figures 1 and 2).

It is my contention that if the actual amount of the charges on a bill is outside the confidence limits for an individual case, then the case itself deserves closer examination.

The cases above their upper confidence limit or below their lower confidence limit can be called atypicals. They are not outliers in the sense typically used in the field of hospital reimbursement. An outlier must exceed diagnosis-based limits that are specific to the individual hospital and must belong to a group composing no more than 5.1% of the discharges at that hospital. The atypical likewise reflects both the diagnosis and the individual hospital, but it is not limited to any fixed percentage of cases. More fundamentally, the outlier presupposes accuracy in patient classification and in the reporting of charges, whereas the atypical is identified precisely as a possible inaccuracy.

What should be done with the atypicals? They need close examination. Such examination requires the skills of several professionals, such as a nurse, medical coder, billing specialist, and accountant. It would be reasonable to speculate that such investigation could cost approximately $500 per case. This would require access to the case's source documents, including the bill and the patient's chart or Electronic Health Record (EHR).

An atypical could occur, for example, if the hospital recorded a myocardial infarction as a case of chest pain; reimbursement for the case could increase dramatically if the hospital had recorded the proper diagnosis. The hospital could also have created an atypical by attributing a group of charges to the wrong patient, and proper reimbursement might decrease for the patient to whom the charges were improperly attributed if that patient was also an outlier to begin with. Other scenarios are possible.

Hospitals outside New York could also include their data in the regression runs and thereby develop coefficients and predictions for their cases. Doing so assumes that the structures and rules that hold for New York hospitals also apply to out-of-state facilities, which would have to be tested. Fortunately, an out-of-state facility can easily compile the variables used in the calculations for all its discharges.

FURTHER RESEARCH

There is room for further research. The relationship between individual pairs of years of clinical data should also be explored. This work has shown that two consecutive years have very similar characteristics; is this true in a broader sense? The data also shows that bill amounts for certain cases clearly have no face validity. A charge of $3.65 for an overnight stay in an American acute care hospital is beyond the realm of plausibility, yet SPARCS, in its disclosure of edit conditions, considers this amount valid because it is numeric and not negative. Similarly, a length of stay of more than 2,000 days (approximately 5.5 years) certainly deserves at least some attention to determine whether the case is a genuine anomaly.

The major premise in this work deserves testing in the field. A team, such as the one specified earlier, should validate or correct a properly designed sampling of multifacility cases. If researchers can show the hypothesis to produce beneficial results, then a larger application of the principles could possibly alter the flow or the size of a stream of cash denominated in billions of dollars per year.

ENDNOTES

(1) William Cleverley and Andrew Cameron, Essentials of Health Care Finance, 6th ed., Jones and Bartlett: Sudbury, Mass., 2007.

(2) Celeste Torio and Brian Moore, National Inpatient Hospital Costs, Statistical Brief #204, April 2016, www.hcup-us.ahrq.gov/reports/statbriefs/sb204-Most-Expensive-Hospital-Conditions.jsp (accessed 8/4/2016).

(3) Rick Ungar, "The Great American Hospital Pricing Scam Exposed," Forbes, May 8, 2013, www.forbes.com/sites/rickungar/2013/05/08/the-great-american-hospital-pricing-scam-exposed-we-now-know-why-healthcare-costs-are-so-artificially-high/#3e7a5d5f3bff.

(4) Michael Schwarz, David Young, and Richard Siegrist, "The Ratio of Costs to Charges: How Good a Basis for Estimating Costs?" Inquiry, 1995-1996, pp. 476-481.

(5) J.E. Calle, "Quality of the Information Contained in the Minimum Basic Data Set: Results from an Evaluation in Eight Hospitals," European Journal of Epidemiology, November 2000, pp. 1073-1080.

Kathy Terry, et al., "Room for Improvement: Gastrointestinal Disorders and Payment Errors," Journal of Community Health, June 2008, pp. 111-116.

Colin Cyrille, et al., "Data Quality in a DRG-based Information System," International Journal for Quality in Health Care, September 1994, pp. 275-280.

J. Holstein, et al., "Quality of Medical Database to Valorize the DRG Model by ISA Cost Indicators," Revue D Epidemiologie Et De Sante Publique, December 2002, pp. 593-603.

Luca Lorenzoni, Roberto Da Cas, and Ugo Aparo, "Continuous Training as a Key to Increase the Accuracy of Administrative Data," Journal of Evaluation in Clinical Practice, November 2000, pp. 371-377.

Beth Reid, Corinne Allen, and Jean McIntosh, "Investigation of Leukemia and Lymphoma AR-DRGs at a Sydney Teaching Hospital," Health Information Management: Journal, June 2005, pp. 34-39.

(6) Richard Spector, "Utilization Review and Managed Health Care Liability," Southern Medical Journal, March 2004, pp. 284-286.

(7) SPARCS Operations Guide, November 2016, Version 1.2, www.health.ny.gov/statistics/sparcs/training/docs/sparcs_operations_guide.pdf.

(8) Statsoft, Electronic Statistics Textbook, 2013, Available from: www.statsoft.com/textbook/statistics-glossary/d#Degrees of Freedom.

(9) John Neter, et al., Applied Linear Regression Models, 3rd ed., Irwin: Chicago, Ill., 1996.

(10) "Outlier Payments: Medicare Acute Inpatient Prospective Payment System," April 2013, www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/AcuteInpatientPPS/outlier.html (accessed 7/7/2017).

(11) SPARCS Data Governance Policy and Procedure Manual, September 2014, www.health.ny.gov/statistics/sparcs/training/docs/sparcs_dgc_manual.pdf (accessed 7/8/2017).

(12) SAS Institute Inc., SAS System for Windows, Cary, N.C.: SAS Institute Inc., 2014, www.sas.com.

(13) Statsoft, 2013.

(14) Neter, et al., 1996.

(15) M. Best and D. Neuhauser, "Walter A Shewhart, 1924, and the Hawthorne factory," Quality and Safety in Health Care, April 2006, pp. 142-143.

(16) Richard Lowry, VassarStats, 2017, http://vassarstats.net/ rdiff.html (accessed 7/7/2017).

(17) Donald Morrison, Multivariate Statistical Methods, 2nd ed., McGraw-Hill: New York, N.Y., 1976.

By Gerald S. Silberstein, Ph.D., CMA, CFM, CPA

Gerald S. Silberstein, Ph.D., CMA, CFM, CPA, is the accounting program coordinator and an assistant professor at The Sage Colleges School of Management in Albany, N.Y. You can reach him at (518) 292-8628 or silbeg@sage.edu.

Caption: Figure 1: Number and Length of Stays, 2011

Caption: Figure 2: Number and Length of Stays, 2012

Table 1: Calculation of Predictions

| | Case 1 | Case 2 |
| --- | --- | --- |
| Intercept (same value for all cases) | $546.11 | $546.11 |
| Individual hospital coefficient | (7,714.88) | 6,342.31 |
| Individual federal new DRG coefficient | 9,654.00 | 64,485.00 |
| Length of stay (Case 2 evaluated at $4,478.40 per day) | 13,435.20 | 35,827.20 |
| Total predicted | $15,920.43 | $107,200.62 |
| Actual total charges | 32,363.11 | 230,255.70 |
| Lower confidence limit | 785.57 | 70,312.76 |
| Upper confidence limit | 79,379.16 | 148,884.60 |
| Action suggested | Accept | Investigate |

Table 2: Regression Results

| Variable Name | R-squared Alone | Cumulative Stepwise R-squared |
| --- | --- | --- |
| Federal new DRG | 0.4255 | 0.4255 |
| Length of stay (days) | 0.4177 | 0.6532 |
| Hospital identifier | 0.1322 | 0.7462 |
| Patient age in years | 0.0496 | 0.7462 |

Table 3: Breakdown of Charges

| Description of Category | Charges by Category | Percent of Total Charges |
| --- | --- | --- |
| Total charges | $72,069,265,988 | 100.00% |
| Total of cases over UCL* | $11,840,661,640 | 16.43% |
| Total charges over UCL* | $2,682,414,001 | 3.72% |
| Total of cases under LCL** | $1,911,589,989 | 2.65% |
| Total charges under LCL** | $587,297,237 | 0.81% |

*Upper confidence limit. **Lower confidence limit.

Publication: Management Accounting Quarterly, Jan. 1, 2018.