Financial incentives, hospital care, and health outcomes: Evidence from fair pricing laws.
As described above, length of stay is our preferred measure of the quantity of care hospitals deliver to uninsured patients. Here we study several alternative measures of care quantity: hospital charges, admission decisions, and patient transfers. We include these both as robustness checks for our length of stay results and to investigate other margins on which hospitals may ration care to uninsured patients. We first briefly describe each measure, and then present the results of our event study models together.
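The event-study design used throughout this section can be sketched in a few lines. The snippet below is an illustrative reconstruction on synthetic data, not the authors' actual code: the variable names, state counts, enactment year, and effect sizes are all hypothetical, chosen only to mirror the structure of the design (state and year fixed effects plus yearly treatment dummies, with the year before enactment as the reference period).

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic discharge-level data (hypothetical; for illustration only) ---
n = 20000
n_states = 10
state = rng.integers(0, n_states, n)
year = rng.choice(np.arange(2000, 2010), n)
treated = state < 4                      # states 0-3 enact an FPL in 2004
k = year - 2004                          # event time relative to enactment

# True DGP: log length of stay falls 8% after enactment in treated states.
state_fe = rng.normal(0, 0.2, n_states)
year_fe = 0.01 * (year - 2000)           # common trend shared by all states
true_effect = np.where(treated & (k >= 0), -0.08, 0.0)
log_los = 1.5 + state_fe[state] + year_fe + true_effect + rng.normal(0, 0.3, n)

# --- Design matrix: state FE, year FE, treated x event-time dummies ---
def dummies(codes):
    cats = np.unique(codes)
    return (codes[:, None] == cats[None, :]).astype(float)[:, 1:]

event_ks = [kk for kk in range(-4, 6) if kk != -1]   # k = -1 is the reference
event_d = np.column_stack([(treated & (k == kk)).astype(float)
                           for kk in event_ks])
X = np.column_stack([np.ones(n), dummies(state), dummies(year), event_d])
beta, *_ = np.linalg.lstsq(X, log_los, rcond=None)

# Pre-enactment coefficients should hover near zero; post near -0.08.
event_coefs = dict(zip(event_ks, beta[-len(event_ks):]))
```

The pre-period dummies serve as the placebo check: if treated and control states were trending together before enactment, those coefficients are statistically indistinguishable from zero, and the post-period dummies trace out the dynamic treatment effect.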
5.5.1 Total charges
FPLs limit the portion of the bill that hospitals can collect, but not what is listed on the bill itself. Thus, the charges reported in our data reflect the care delivered rather than the direct limits imposed by the laws. Total charges may provide a better measure of the intensity of care during a hospital stay as long as they bear some, albeit inflated, relationship to costs. While arguably a more comprehensive measure of resource use, charges have a limitation: hospitals raise their charges at different rates over time, and we cannot separately identify these hospital-specific charge trends and the effects of FPLs.
5.5.2 Admission decisions
The QI software also calculates the rate of admissions that could potentially have been avoided. These are generally marginal admissions for conditions that could alternatively be treated in outpatient settings or prevented with more comprehensive primary care. We study these admission rates to determine whether fair pricing laws are associated with hospitals pushing more of these patients to outpatient care, which is typically lower cost. There are 13 such conditions identified by AHRQ (listed in Appendix H); examples include COPD/asthma and complications from diabetes. Together, the 13 conditions account for approximately 12% of admissions in our data.
5.5.3 Transfers
Hospitals may attempt to reduce the burden of unprofitable patients who still require medical care by transferring them to other facilities. EMTALA and various state laws prohibit transfers driven by the hospital's financial considerations, but the guidelines encourage transfers that are in the patient's best interest. These reasons are often medical, such as hospitals specializing in the treatment of different conditions, but they can also be financial, such as only certain hospitals accepting the patient's insurance. There will be situations when it is clear that one hospital is better suited to treat the patient because, for instance, only it has access to a particular piece of equipment. But it is also easy to imagine scenarios where the relative advantages of treatment in different locations are less clear. Thus, it is plausible that hospitals more frequently lean on the medical justifications for a transfer when the patient represents an expected loss rather than a profit. If this is the case, price ceilings would make hospitals more likely to transfer uninsured patients.
5.5.4 Results for alternative measures of quantity
The results for the alternative measures of care quantity show further evidence of cost-reducing behavior after a fair pricing law is enacted. Panel A of Figure 11 shows reductions in (ln) total charges that are consistent with those for length of stay. Overall, charges fell by 6.5% after enactment of the FPL, and the decline appears to grow in magnitude over time, reaching 8% by two years after enactment.
Panel B shows that the yearly treatment effects for potentially preventable admissions are consistently negative in the years following enactment of an FPL, though not always significant. However, the diff-in-diff results indicate a 3 percent drop in preventable admissions (significant at the 5% level). This suggests that, when faced with a borderline case, hospitals are more likely to treat the patient in a less costly outpatient setting after passage of an FPL. However, as shown in section 6.1, these cases appear to be rare enough that they do not materially affect the overall patient population. (34)
Finally, panel C shows evidence that hospitals transfer more of their uninsured patients after fair pricing laws are enacted. (35) Again, the yearly treatment dummies fall short of significance, but the diff-in-diff estimate is significant at the 5% level. On average, 8% of patients are transferred, so these estimates represent approximately a 6% increase.
5.6 Strategic Diagnosing
We have shown that hospitals restrict the quantity of care under fair pricing laws, but hospitals may also attempt to circumvent the price controls themselves. Recall that most of the states we study based their FPL price caps on public payers that use prospective payment systems, where payments are almost entirely determined by a patient's diagnosis rather than by the amount of care received. In these states, the maximum collection after the imposition of an FPL is a direct function of the patient's diagnosis. Hospitals could therefore artificially inflate the diagnosis to increase the maximum amount they can collect (behavior often termed "DRG creep").
The relevant outcome variable for studying upcoding is the DRG weight. As described earlier, this weight represents the expected cost of treating a patient within that DRG, and is directly related to the amount Medicare will reimburse. Panel A of Figure 12 shows that unlike in other settings where hospitals have a similar incentive, FPLs do not induce upcoding for uninsured patients. (36) One possible explanation for the null results is that upcoding under FPLs only increases the maximum amount a hospital can collect, while upcoding Medicare patients increases the payment with certainty.
Although DRG weight often determines the FPL payment cap, all-patient refined (APR-DRG) weight is a more granular measure of severity. For our purposes, the primary distinction is that each class of diagnosis is separated into four rather than three severity levels. The two measures are determined by the same set of information (ICD codes), but given the extra granularity, it is possible to alter the APR-DRG while leaving the DRG unchanged. (37) Unlike the DRG, the APR-DRG assigned is unlikely to directly affect the payment received by hospitals in our sample. Instead, we study the APR-DRG because we consider it to be a more complete numerical representation of the diagnosis. Surprisingly, Panel B of Figure 12 shows that, using the finer measure, patients are diagnosed with conditions approximately 4% less severe after enactment of fair pricing laws. (38) Interestingly, the reduction in severity persists if we control for the CCS diagnosis category (Panel C), but not if we control for the number of individual diagnoses recorded (Panel D). (39) This is consistent with our suspicion that strategic diagnosing occurs by altering the severity within a disease category (such as by omitting a complicating factor), rather than moving from one category to another.
To some extent, the reduction in diagnosis may be a natural result of shorter lengths of stay. With patients spending less time in the hospital, doctors have less time to observe and record the type of ancillary conditions that are being omitted. Alternatively, a strategic explanation for the reduction in APR-DRG weight is that hospitals feel a need to match the diagnosis to the treatment delivered. With the financial value of uninsured patients falling under fair pricing laws, and hospitals scaling back the amount of care they deliver, doctors may shade their initial diagnosis to justify the planned reduction in care. A doctor's own sense of medical ethics is one channel by which he or she could discount a potentially complicating aspect of the patient's condition, but doctors and hospitals are also subject to external reviews of the care they provide. The review that likely carries the most weight is medical malpractice, where an expert offers an opinion about whether the care delivered meets the defined practice guidelines for the patient's condition.
The potential reasons to lower the severity of the diagnosis do create some tension with the incentive to upcode, because the APR-DRG and DRG are related. It is interesting to note that while making this trade-off, providers appear able to target diagnosis shading (as measured by the more granular APR weight) in a way that does not lower the DRG weight, and thus avoid an adverse financial outcome for the hospital.
In this paper, we utilize fair pricing laws to investigate how hospitals alter care in response to financial incentives. Specifically, we test whether these laws impact the quantity and quality of care delivered to uninsured patients. We find that when governed by fair pricing laws, hospitals do cut back on care to uninsured patients. They shorten inpatient stays by seven to nine percent, reduce intensity of care, treat certain marginal patients in outpatient rather than inpatient settings, and more frequently transfer patients to other care facilities. Despite the reduction in care, we do not see clear evidence of deterioration in the quality of inpatient care received using a number of quality measures. Uninsured patients do not die in the hospital at higher rates, they do not experience higher rates of medical complications, they do not receive fewer "beneficial" medical procedures, and they are not readmitted with higher frequency than absent an FPL. Finally, even though upcoding diagnoses could increase the maximum collections under an FPL (in states with PPS-like regulations), we do not see evidence of such strategic diagnosing behavior by hospitals.
Of course, our work has limitations. First, the NIS does not report how much hospitals actually collect from uninsured patients, so we cannot directly measure the reduction in hospital bills for the uninsured, nor any relief from the debt collection process. Beyond this, it is important to note that our results do not imply that further price reductions in this market would be harmless to short- or long-term quality of care. Determining the threshold below which further cuts would have observable adverse health outcomes is a topic for further study.
Overall, our study provides strong evidence that providers do respond to financial incentives, but suggests they do so by forgoing relatively low value care. Still, the implications for patient welfare are not immediately clear. In a typical consumer market, any price ceiling that prevents a transaction from occurring would be welfare reducing. However, because patients ultimately aim to purchase health rather than healthcare, and it can be difficult to determine how effectively the latter produces the former, the lessons from the typical consumer market may not apply. Given that the price restrictions introduced by FPLs are not associated with clear evidence of worsening quality, and they likely significantly reduce financial strain, our results are broadly consistent with the idea that these laws improve consumer welfare.
The notion that there is not a direct link between healthcare and health outcomes at the margin may be surprising, but there is theoretical and empirical evidence which suggests that efficiency gains in the U.S. healthcare system are attainable. In seminal work, Arrow (1963) discusses the breakdown of market forces in healthcare. These issues are exacerbated for the uninsured patients we study, who do not have insurance companies to help them act as better-informed consumers. In practice, researchers have documented wide geographic variation in health spending (both between the U.S. and other industrialized nations and among different regions within the U.S.) without the associated differences in health that one might expect. (40) Evidence like this has led the Institute of Medicine to conclude that roughly 30% of American health spending is wasteful (Smith et al., 2013). Our findings are consistent with the view that the financial pressures placed on providers via fair pricing laws reduced some of this excess care.
A Differences in FPL Provisions
Although the FPLs we study are broadly similar, there are several notable differences. The first is how the laws cap prices. Capping prices at 100-115% of the amount paid by public insurers, as opposed to private insurers (or cost), is significant not only because reimbursement from public payers is typically lower, (41) but also because it is explicitly based upon a patient's diagnosis rather than the medical services actually delivered. In contrast, most private insurers use a variety of payment mechanisms, including a non-trivial amount of fee-for-service reimbursement. Second, in addition to the limit on charges for middle-income uninsured patients, several FPLs mandate free care for low-income patients. Table 8 summarizes these FPL provisions by state.
There is reason to believe that these provisions may alter how hospitals respond to FPLs. Tying an FPL to the PPS used by public payers means the payment cap is determined by the diagnosis, so additional treatment will not generate marginal revenue. This suggests PPS-based FPLs would produce stronger reductions in care. Similarly, mandating free care for low-income patients gives a hospital a stronger reason to reduce care.
Our data allows some, albeit limited, opportunity to study these differences. Minnesota's FPL contains neither provision, while California's and New Jersey's are based upon the PPS, and New York's, Illinois's, and Rhode Island's include a significant amount of free care for the poorest uninsured patients. Thus, Minnesota can be used as a reference against which to measure the effects of the two provisions. Unfortunately, all the variation in the laws occurs across rather than within states, so this analysis may be confounded by other unobservable state-level factors. In addition, the fact that states either have PPS-based FPLs or provide free care means we have limited independent variation upon which to identify the different effects (recall, New Jersey's free care provision is from a pre-existing law).
To investigate, we estimate a difference-in-differences model with dummy variables for any type of FPL, PPS-based FPL, and FPL with free care. The basic FPL dummy measures the effect of a generic FPL common to all states, while the other two dummies measure the additional effects of the two law provisions. Table 9 reports the results of this model.
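A minimal sketch of this specification, on synthetic data with hypothetical state groups and effect sizes (the -0.05/-0.02/-0.03 magnitudes below are illustrative, not the paper's estimates): the generic FPL dummy is interacted post-enactment, and the two provision dummies pick up any additional effect on top of it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical state groups: 0 = never-treated, 1 = generic FPL only,
# 2 = PPS-based FPL, 3 = FPL with mandated free care.
n = 30000
group = rng.integers(0, 4, n)
year = rng.integers(2000, 2010, n)
post = (group > 0) & (year >= 2004)      # all FPL states enact in 2004

# True effects on log LOS: generic FPL -0.05, extra PPS effect -0.02,
# extra free-care effect -0.03 (illustrative magnitudes only).
effect = np.where(post, -0.05, 0.0)
effect += np.where(post & (group == 2), -0.02, 0.0)
effect += np.where(post & (group == 3), -0.03, 0.0)
y = 1.4 + 0.05 * group + 0.01 * (year - 2000) + effect + rng.normal(0, 0.3, n)

def dummies(codes):
    cats = np.unique(codes)
    return (codes[:, None] == cats[None, :]).astype(float)[:, 1:]

X = np.column_stack([
    np.ones(n), dummies(group), dummies(year),
    post.astype(float),                          # generic FPL effect
    (post & (group == 2)).astype(float),         # additional PPS-based effect
    (post & (group == 3)).astype(float),         # additional free-care effect
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fpl_eff, pps_extra, free_extra = beta[-3], beta[-2], beta[-1]
```

Because the provision dummies enter on top of the generic FPL dummy, each coefficient is read as the marginal effect of that provision relative to a plain FPL, which is exactly the comparison the Minnesota reference state supports.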
As expected, we observe reductions in care with all types of FPLs. However, the additional provisions do not produce stronger responses. Because the effects of these provisions are identified relative to only one fairly small state, Minnesota, we believe this analysis reveals more about their relative rather than absolute effects. (42) Based upon this limited evidence, mandating free care appears to produce a stronger incentive to reduce hospital stays than does linking payment to the PPS. Although both provisions essentially reduce the marginal revenue of treatment to zero, free care may produce a stronger effect because it is clear the patient represents a loss to the hospital, whereas the patient may still be profitable in aggregate under a PPS-based FPL.
B Legislative Path to Fair Pricing Laws
Another way to assess whether FPLs impose real constraints is to study how hospitals have received them. We suspect they would be hesitant to invest political and financial capital fighting a law that is both popular among the public and would have minimal impact on their operations. A brief look into the legislative process in California suggests that hospitals were concerned with its potential impact (similar stories apply to the passage of fair pricing regulations in New York and Illinois). In the early 2000s, a series of newspaper articles brought attention to examples of uninsured patients who were charged much more for hospital care than were other payers. Motivated by this perceived inequity, California's legislature passed a fair pricing law in 2003 which was very similar to the one ultimately enacted several years later. In response to mounting public and legislative pressure, both the American Hospital Association and California Hospital Association published guidelines for their member hospitals about financial assistance policies for uninsured patients. These guidelines advocated for the development and publication of financial assistance policies, but included few specifics about what those policies should contain. They also contained no enforcement or accountability mechanisms. In early 2004, Governor Schwarzenegger vetoed the fair pricing bill, arguing that the voluntary guidelines should be given a chance to work. By late 2006, health advocates and legislators effectively argued that the voluntary guidelines were not appropriately addressing the issue, and they enacted what is California's current fair pricing law. Though ultimately unsuccessful, these attempts to avoid legislation suggest that hospitals believe these laws do introduce meaningful constraints.
C Percentage of List Price Paid by Medicare and Medicaid Patients (MEPS)
In section 2 we present the distributions of percentage of list price paid for publicly insured and uninsured patients. We do so because the price caps imposed by FPLs are based upon a mix of Medicare and Medicaid payments, rather than because we believe the publicly insured patients are comparable to uninsured patients. In this section we show that the broad payment patterns hold whether we focus only on the Medicare or Medicaid distributions (the latter group likely being a more comparable patient group to the uninsured).
D Impact of FPLs on Hospital List Prices
In this section we investigate whether FPLs had any impact on hospital list prices (or "Chargemaster" prices). As outlined in the introduction to this paper, list prices have risen substantially over time. While list prices are largely irrelevant to insured patients, whose insurers negotiate large discounts off list prices, they are the basis on which uninsured patients are initially billed. This has led some to suggest that one explanation for high, and increasing, list prices is that hospitals are attempting to extract higher revenues from uninsured patients.
If generating revenues from uninsured patients is a motivation for increasing list prices, then it is possible that FPLs may reduce, or slow the growth of, list prices. By capping the maximum collection from uninsured patients below the list price, FPLs effectively render the list price irrelevant for uninsured patients. If this is the case, hospitals would have a diminished incentive to increase prices as aggressively.
To investigate this, we run our event study specification where the list price markup (or ratio of list price to costs) is the outcome variable. These price-to-cost ratios are provided by AHRQ but are originally derived from CMS Cost Reports. Since we are interested in hospital-level reactions to FPLs, we collapse our data to the hospital-year level for this exercise. As before, standard errors are clustered at the state level. Hospital and year fixed effects are included, but seasonal fixed effects are dropped since list price data are provided annually.
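The state-level clustering step can be illustrated with a hand-rolled sandwich estimator. The snippet below is a sketch on synthetic hospital-year data, not the authors' code: the variable names are hypothetical, the true FPL effect on markups is set to zero, and small-sample degrees-of-freedom corrections are omitted. The point is that when errors share a within-state component, the clustered standard error on the policy dummy is substantially larger than the naive OLS one.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical hospital-year panel with a state-level shock, so errors are
# correlated within state and plain OLS standard errors would be too small.
n_states, n_hosp_per, n_years = 20, 15, 8
state = np.repeat(np.arange(n_states), n_hosp_per * n_years)
year = np.tile(np.arange(n_years), n_states * n_hosp_per)
fpl_state = state < 6                     # first 6 states enact in year 4
post = fpl_state & (year >= 4)

# True effect of the FPL on markups is zero; only shocks move the outcome.
shock = rng.normal(0, 0.15, (n_states, n_years))
markup = 3.0 + 0.02 * year + shock[state, year] + rng.normal(0, 0.1, len(state))

X = np.column_stack([np.ones(len(state)), year.astype(float),
                     post.astype(float)])
beta, *_ = np.linalg.lstsq(X, markup, rcond=None)
resid = markup - X @ beta

# Cluster-robust (state-level) sandwich variance:
# (X'X)^-1 [ sum_g X_g' u_g u_g' X_g ] (X'X)^-1
XtX_inv = np.linalg.inv(X.T @ X)
meat = np.zeros((X.shape[1], X.shape[1]))
for g in range(n_states):
    m = state == g
    s = X[m].T @ resid[m]                 # cluster score vector
    meat += np.outer(s, s)
V = XtX_inv @ meat @ XtX_inv
se_cluster = np.sqrt(np.diag(V))
se_ols = np.sqrt(np.diag(XtX_inv) * resid.var())   # naive homoskedastic SE
```

Under the null effect built into this synthetic panel, the coefficient on the post dummy lands near zero, mirroring the null result reported for list prices below.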
The results are shown in Figure 15. Prior to the enactment of FPLs, list price markups are trending similarly to markups in control states. After the introduction of FPLs we do not see any evidence of a divergence in pricing patterns between treated and control states. FPLs do not appear to alter hospital list prices.
There are a few potential explanations for this null effect. First, it is possible that extracting collections from uninsured patients, while a popular theory of list pricing, is not a major motivation for hospital pricing. It is also possible that hospital list prices do target uninsured patients, but only those exempt from protection under FPLs. Recall that FPLs generally cover all but the wealthiest uninsured (those above 400-600% of the poverty level). If high list prices are targeted only at this exempted group, then FPLs may not have any impact on list prices.
E Additional Robustness Checks
E.1 Treatment Effects on LOS Including Only FPL States
Because of the differential timing of FPLs, we can identify treatment effects with just the set of states that ever pass an FPL. In our primary analysis, however, we include all states that never pass an FPL in our control group. These states help to improve the precision of our estimates, and based on our pre-FPL results, they appear to be a reasonable control group. As a more conservative robustness check, we re-estimate the effect of FPLs on length of stay excluding all uninsured patients from states that never pass a law. Figure 16 shows that the results are quite similar, though predictably, eliminating control states reduces the precision of our estimates. If we pool all post-FPL years in a difference-in-differences model, the reduction in length of stay is significantly different from zero.
E.2 Using a Count Regression Model
Given that our primary outcome variable, length of stay, is reported as integers in our data, one might consider using a count regression model as an alternative method of analysis. In this section we report results using a Poisson regression. The estimated model includes our full set of controls and risk-adjusters. As shown in Figure 17, the results are comparable to our main specification. By the end of our analysis window, fair pricing laws are associated with a 7 percent reduction in the average length of stay.
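A Poisson model with a log link can be fit by iteratively reweighted least squares, and under the log link exp(beta) - 1 converts a coefficient directly into a percent change in expected length of stay. The sketch below uses synthetic data with hypothetical magnitudes (a true -0.07 log-link effect, i.e. roughly 7% shorter stays), not the paper's estimates or controls.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical count data: integer length of stay with a log-link treatment
# effect of -0.07 and one severity control (illustrative DGP only).
n = 20000
post = rng.integers(0, 2, n).astype(float)
severity = rng.normal(0, 1, n)
mu = np.exp(1.2 + 0.3 * severity - 0.07 * post)
los = rng.poisson(mu)

X = np.column_stack([np.ones(n), severity, post])

# Poisson MLE via iteratively reweighted least squares (Newton-Raphson
# with the canonical log link).
beta = np.zeros(X.shape[1])
for _ in range(25):
    eta = X @ beta
    w = np.exp(eta)                      # for Poisson, Var(y) = E(y) = w
    z = eta + (los - w) / w              # working response
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ z)

# beta[2] is the log-link treatment effect; exp(beta) - 1 is the % change.
pct_change = np.exp(beta[2]) - 1
```

Reading the coefficient through exp(beta) - 1 is what lets a count-model estimate be compared on the same percent scale as the log-linear results in the main specification.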
F Hospital Characteristics in FPL States
The treatment effects we estimate are driven by the 432 hospitals in FPL states that we observe both before and after enactment. This section investigates whether there is any evidence that our results are driven by biased hospital sampling. The primary concern is that if certain hospitals respond more or less strongly to FPLs, and those hospitals are disproportionately identifying our treatment effect, then our estimates may be biased.
To address this concern, we first compare the set of hospitals driving our treatment estimates to other hospitals from FPL states along a number of dimensions that could conceivably impact responsiveness to FPLs. Table 11 shows that across a number of hospital characteristics, the sample of hospitals driving our treatment estimates looks similar to the rest of the hospitals from treated states. This evidence suggests that our main identifying hospitals are largely representative of hospitals from their states.
Another way to address this issue is to re-estimate our main specification using the trend weights provided by AHRQ. These weights are used to adjust for the complex sampling structure of the NIS and produce nationally representative estimates. Figure 18 illustrates the effect of FPLs on length of stay utilizing the NIS sampling weights. The estimated model includes a full set of controls and risk-adjusters (as in model (3) from Table 4). Reassuringly, the results are very similar to the main results presented earlier in Figure 6.
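The role of sampling weights can be illustrated with a small weighted-least-squares example. The two strata, weights, and effect sizes below are hypothetical stand-ins for the NIS design: the point is only that when responsiveness differs across over- and under-sampled hospitals, the weighted estimate recovers the population-average effect while unweighted OLS recovers the sample average.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical two-stratum sample: stratum A is under-sampled (weight 3),
# stratum B is over-sampled (weight 1), and the FPL effect differs by stratum.
n = 40000
stratum_a = rng.integers(0, 2, n).astype(bool)
weight = np.where(stratum_a, 3.0, 1.0)   # stand-in for NIS sampling weights
post = rng.integers(0, 2, n).astype(float)
effect = np.where(stratum_a, -0.10, -0.04)
log_los = 1.5 + effect * post + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), post])

# Unweighted OLS recovers the sample-average effect: (-0.10 - 0.04)/2 = -0.07.
beta_ols, *_ = np.linalg.lstsq(X, log_los, rcond=None)

# WLS with the sampling weights recovers the population-average effect:
# (3*(-0.10) + 1*(-0.04)) / 4 = -0.085.
Xw = X * weight[:, None]
beta_wls = np.linalg.solve(X.T @ Xw, Xw.T @ log_los)
```

When the two estimates agree, as reported below for the length-of-stay results, it suggests the identifying hospitals are not systematically unrepresentative of the sampled population.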
G Regression Tables for Quality of Care and Alternative Measures of Quantity
This appendix shows the additional regression tables for the quality and alternative measures of quantity that were referenced in section 6.
H Quality Metrics
Below we list the specific quality metrics employed in each of the four categories.
H.1 Mortality from selected conditions and procedures
* Acute Myocardial Infarction
* Heart Failure
* Acute Stroke
* Gastrointestinal Hemorrhage
* Hip Fracture
* Esophageal Resection
* Pancreatic Resection
* Abdominal Aortic Aneurysm Repair
* Coronary Artery Bypass Graft
* Percutaneous Coronary Intervention
* Hip Replacement
H.2 Use of procedures believed to reduce mortality
* Esophageal Resection
* Pancreatic Resection
* Abdominal Aortic Aneurysm Repair
* Coronary Artery Bypass Graft
* Percutaneous Coronary Intervention
* Carotid Endarterectomy
H.3 Incidence of potentially preventable in-hospital complications
* Death in Low-Mortality DRGs
* Pressure Ulcer Rate
* Death among Surgical Inpatients
* Iatrogenic Pneumothorax Rate
* Central Venous Catheter-Related Blood Stream Infection
* Postoperative Hip Fracture Rate
* Postoperative Hemorrhage or Hematoma Rate
* Postoperative Physiologic and Metabolic Derangement Rate
* Postoperative Respiratory Failure Rate
* Postoperative Pulmonary Embolism or Deep Vein Thrombosis Rate
* Postoperative Sepsis Rate
* Postoperative Wound Dehiscence Rate
* Accidental Puncture or Laceration Rate
H.4 Potentially preventable hospital admissions
Potentially Preventable Conditions ((A) acute, (C) chronic):
* Diabetes short-term complications (C)
* Diabetes long-term complications (C)
* Uncontrolled diabetes (C)
* Lower extremity amputation from diabetes (C)
* Perforated appendix (A)
* COPD/Asthma in older adults (C)
* Asthma in younger adults (C)
* Hypertension (C)
* Heart failure (C)
* Dehydration (A)
* Bacterial pneumonia (A)
* Urinary tract infection (A)
* Angina without procedure (C)
I Results for CDC Death Rates
We study three outcomes for non-injury deaths of 25-64 year-olds from 1999-2010. Each is measured as an age-adjusted death rate per 100,000 people (the age-adjustment is calculated by the CDC to account for the aging population over time). The outcomes are overall deaths, deaths that occurred outside of the hospital, and deaths that occurred outside the hospital from one of the mortality QI procedures and conditions. We first study the entire US population, and then focus on counties with more than 25% uninsured.
We start with our event study framework. Since our data is a state-year panel, we do not have patient-level control variables, and employ state as opposed to hospital fixed effects. We add state-specific linear time trends to account for differential drift in death rates over the time period (both treatment and control states experienced roughly linear declines in age-adjusted death rates, but the trend in treatment states is steeper). Thus, the year effects measure deviations from these trends that are common to all states, and the yearly FPL dummy variables measure deviations that are specific to treatment states.
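A sketch of why the state-specific trends matter, using synthetic state-year data in which treated states decline faster even though the true FPL effect is zero (all slopes and magnitudes below are hypothetical): without the trends, the post-enactment dummy soaks up the steeper treated decline; with them, it correctly lands near zero.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical state-year panel of age-adjusted death rates. Treated states
# decline faster even before enactment, so a plain two-way FE model would
# misattribute the differential drift to the FPL.
n_states, n_years = 30, 12
state = np.repeat(np.arange(n_states), n_years)
year = np.tile(np.arange(n_years), n_states)
treated = state < 8                       # first 8 states enact in year 6
post = (treated & (year >= 6)).astype(float)

slope = np.where(np.arange(n_states) < 8, -3.0, -1.5)  # steeper treated decline
rate = 400 + slope[state] * year + rng.normal(0, 2, len(state))

def dummies(codes):
    cats = np.unique(codes)
    return (codes[:, None] == cats[None, :]).astype(float)[:, 1:]

base = [np.ones(len(state)), dummies(state), dummies(year)]

# Two-way FE only: the post dummy absorbs the differential trend (biased).
X1 = np.column_stack(base + [post])
b1, *_ = np.linalg.lstsq(X1, rate, rcond=None)

# Adding state-specific linear trends (state dummy x year; one state omitted
# to avoid collinearity with the year effects) removes the drift.
trends = dummies(state) * year[:, None]
X2 = np.column_stack(base + [trends, post])
b2, *_ = np.linalg.lstsq(X2, rate, rcond=None)

# b1[-1] is badly biased away from zero; b2[-1] is near the true zero effect.
```

With the trends absorbed, the remaining post-enactment dummies measure only deviations from each state's own trajectory, which is the interpretation given to the yearly FPL dummies above.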
Faraz S. Ahmad, Joshua P. Metlay, Frances K. Barg, Rebecca R. Henderson, and Rachel M. Werner. Identifying hospital organizational strategies to reduce readmissions. American Journal of Medical Quality, 28(4):278-285, 2013.
Gerard Anderson. "From Soak the Rich to Soak the Poor: Recent Trends in Hospital Pricing". Health Affairs, 26(3):780-9, 2007.
Kenneth Arrow. "Uncertainty and the Welfare Economics of Medical Care". The American Economic Review, pages 941-973, 1963.
David Card, Carlos Dobkin, and Nicole Maestas. "The Impact of Nearly Universal Insurance Coverage on Health Care Utilization: Evidence from Medicare". American Economic Review, 98 (5):2242-58, 2008.
David Card, Carlos Dobkin, and Nicole Maestas. "Does Medicare Save Lives?". The Quarterly Journal of Economics, 124(2):597-636, 2009.
Kathleen Carey. "Hospital Length of Stay and Cost: A Multilevel Modeling Analysis". Health Services and Outcomes Research Methodology, 3(1):41-56, 2002.
Grace M. Carter, Joseph P. Newhouse, and Daniel A. Relles. "How Much Change in The Case Mix Index is DRG Creep?". Journal of Health Economics, 9(4):411-428, 1990.
Amitabh Chandra, David Cutler, and Zirui Song. "Chapter Six - Who Ordered That? The Economics of Treatment Choices in Medical Care". In Thomas G. Mcguire Mark V. Pauly and Pedro P. Barros, editors, Handbook of Health Economics, volume 2 of Handbook of Health Economics, pages 397 - 432. Elsevier, 2011.
Jeffrey Clemens and Joshua D Gottlieb. "Do Physicians' Financial Incentives Affect Medical Treatment and Patient Health?". The American Economic Review, 104(4):1320-1349, 2014.
Timothy G Conley and Christopher R Taber. "Inference With Difference in Differences With a Small Number of Policy Changes". The Review of Economics and Statistics, 93(1):113-125, 2011.
Teresa A Coughlin. "Uncompensated Care for the Uninsured in 2013: A Detailed Examination". 2014. The Urban Institute.
Robert Coulam and Gary Gaumer. "Medicare's Prospective Payment System: A Critical Appraisal". Health Care Financing Review, 12, 1991.
Leemore S Dafny. "How Do Hospitals Respond to Price Changes?". The American Economic Review, 95(5):1525-1547, 2005.
Joseph J Doyle. Health insurance, treatment and outcomes: using auto accidents as health shocks. Review of Economics and Statistics, 87(2):256-270, 2005.
David Dranove and Michael Millenson. "Medical Bankruptcy: Myth Versus Fact". Health Affairs, 25(2):w74-w83, 2006.
Randall P. Ellis and Thomas G. McGuire. "Supply-Side and Demand-Side Cost Sharing in Health Care". Journal of Economic Perspectives, 7(4):135-151, 1993.
Amy Finkelstein, Sarah Taubman, Bill Wright, Mira Bernstein, Jonathan Gruber, Joseph P Newhouse, Heidi Allen, Katherine Baicker, et al. "The Oregon Health Insurance Experiment: Evidence from the First Year". The Quarterly Journal of Economics, 127(3):1057-1106, 2012.
Dominic Hodgkin and Thomas G. McGuire. "Payment Levels and Hospital Response to Prospective Payment". Journal of Health Economics, 13(1):1 - 29, 1994.
Renee Y. Hsia, Donna MacIsaac, and Laurence C. Baker. "Decreasing Reimbursements for Outpatient Emergency Department Visits Across Payer Groups From 1996 to 2004". Annals of Emergency Medicine, 51(3):265-274, 2008.
Louis Jacobson, Robert LaLonde, and Daniel Sullivan. "Earnings Losses of Displaced Workers". The American Economic Review, 83(4):685-709, 1993.
Mireille Jacobson, Craig C Earle, Mary Price, and Joseph P Newhouse. "How Medicare's Payment Cuts for Cancer Chemotherapy Drugs Changed Patterns of Treatment". Health Affairs, 29(7): 1391-1377, 2010.
Stephen F. Jencks, Mark V. Williams, and Eric A. Coleman. Rehospitalizations among patients in the medicare fee-for-service program. New England Journal of Medicine, 360(14):1418-1428, 2009. doi: 10.1056/NEJMsa0803563. PMID: 19339721.
Helen Levy and David Meltzer. "The Impact of Health Insurance on Health". Annual Review of Public Health, 29:399-409, 2008.
Neale Mahoney. "Bankruptcy as Implicit Health Insurance". The American Economic Review, 105(2):710-746, 2015.
Willard G Manning, Joseph P Newhouse, Naihua Duan, Emmett B Keeler, and Arleen Leibowitz. "Health Insurance and the Demand for Medical Care: Evidence From a Randomized Experiment". The American Economic Review, pages 251-277, 1987.
Glenn Melnick and Katya Fonkych. "Hospital Pricing and the Uninsured: Do the Uninsured Pay Higher Prices?". Health Affairs, 27:116-122, 2008.
Glenn Melnick and Katya Fonkych. "Fair Pricing Law Prompts Most California Hospitals To Adopt Policies To Protect Uninsured Patients From High Charges". Health Affairs, 32(6):1101-1108, 2013.
Nguyen X. Nguyen and Frederick W. Derrick. "Physician Behavioral Response to a Medicare Price Reduction". Health Services Research, 32(3):283, 1997.
U. Reinhardt. "The Pricing of U.S. Hospital Services: Chaos Behind a Veil of Secrecy". Health Affairs, 25(1):57-69, 2006.
Thomas H. Rice. "The Impact of Changing Medicare Reimbursement Rates on Physician-Induced Demand". Medical Care, 21(8):803-815, 1983.
Elaine Silverman and Jonathan Skinner. "Medicare Upcoding and Hospital Ownership". Journal of Health Economics, 23(2):369-389, 2004.
Frank A Sloan. "Not-for-Profit Ownership and Hospital Behavior". Handbook of Health Economics, 1:1141-1174, 2000.
Frank A Sloan, Gabriel A Picone, Donald H Taylor, and Shin-Yi Chou. "Hospital Ownership and Cost and Quality of Care: Is There a Dime's Worth of Difference?". Journal of Health Economics, 20(1):1-21, 2001.
Mark Smith, Robert Saunders, Leigh Stuckhardt, J Michael McGinnis, et al. "Best Care at Lower Cost: The Path to Continuously Learning Health Care in America". National Academies Press, 2013.
Jason M Sutherland, Elliott S Fisher, and Jonathan S Skinner. "Getting Past Denial: The High Cost of Health Care in the United States". New England Journal of Medicine, 361(13):1227-1230, 2009.
Christopher Tompkins, Stuart H. Altman, and Efrat Eilat. "The Precarious Pricing System For Hospital Services". Health Affairs, 25(1):45-56, 2006.
Winnie C Yip. "Physician Response to Medicare Fee Reductions: Changes in the Volume of Coronary Artery Bypass Graft (CABG) Surgeries in the Medicare and Private Sectors". Journal of Health Economics, 17(6):675-699, 1998.
(34) To the extent that marginal cases are admitted to the hospital less frequently after FPLs, the remaining admitted population would have more severe conditions on average, and thus would bias against our findings of care reductions.
(35) These results do not distinguish between transfers to another acute care hospital and transfers to other types of care facilities, but the estimate for each type of transfer is similar to the overall results.
(36) We also find no evidence of strategic diagnosing using the approach of Silverman and Skinner (2004), in which upcoding is detected through an increase in the percentage of pneumonia patients assigned the most lucrative pneumonia diagnosis.
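The check described in this footnote amounts to a before/after comparison of the share of pneumonia admissions assigned the most lucrative code. A minimal sketch, with invented column names, diagnosis labels, and data (this is an illustration of the idea, not the authors' code):

```python
import pandas as pd

# Hypothetical discharge records: year of admission and the pneumonia
# diagnosis code assigned ("complex" stands in for the lucrative code)
records = pd.DataFrame({
    "year": [2005] * 4 + [2009] * 4,
    "drg":  ["simple", "simple", "complex", "simple",
             "complex", "simple", "complex", "simple"],
})

# Share of pneumonia patients given the lucrative code, by year;
# a sharp post-law jump in this share would suggest upcoding
share = (records.assign(lucrative=records["drg"].eq("complex"))
                .groupby("year")["lucrative"].mean())
print(share)  # 2005: 0.25, 2009: 0.50
```

In practice the same share would be computed hospital-by-hospital and year-by-year and fed into the event study framework used for the other outcomes.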
(37) All-patient refined (APR) DRGs were developed to better suit the non-Medicare population, and are used by Medicaid and for quality reporting in some states.
(38) Several of the yearly estimates fall just outside of conventional significance levels, but the difference-in-differences estimate is significant. Also, if we control for patient severity in our quantity and quality of care regressions using APR-DRG rather than CCS category or DRG, we still find significant effects, though the magnitudes are slightly reduced.
(39) Our data record up to sixty diagnoses made by the doctor for each patient (the average is 5.5). We do not show the result here, but there is a significant reduction in the number of diagnoses after FPLs.
(40) For a discussion see Sutherland et al. (2009).
(41) For instance, Melnick and Fonkych (2008) show that in 2000-2005, private insurers in California paid around 40% of charges, while public insurers paid around 20%.
(42) Minnesota's FPL is also unique because it is the result of a voluntary agreement that came about after a lengthy negotiation and the threat of a lawsuit by the state Attorney General.
Table 8: Fair pricing laws by state

State          Year      Percent of Fed. Poverty   Percent of State
               Enacted   Level Covered             Uninsured Covered
Minnesota      2005      ~500%                     86%
New York       2007      300%                      76%
California     2007      350%                      81%
Rhode Island   2007      300%                      77%
New Jersey     2009      500%                      87%
Illinois       2009      ~600%                     ~95%

State          Maximum Collection Amount    Free Care Below X% of Poverty
Minnesota      Largest private payer        NA
New York       Highest volume payer         100%
California     Highest price public payer   NA
Rhode Island   Private payers               200%
New Jersey     115% of Medicare             200%
Illinois       135% of cost                 200%

Note: New Jersey's free care provision was actually part of a law passed in the early 1990s, so our study does not capture its effect. New York also provides discounted care on a sliding scale between 100% and 250% of the poverty line.

Table 9: Comparing reductions in lengths of stay by FPL provision

Outcome variable: Length of stay

Patient controls: demographics and DRG weight
  FPL               -0.367 ***   [-0.443, -0.292]
  PPS-Based FPL      0.250 ***   [0.181, 0.319]
  Free-Care FPL      0.138 *     [0.0188, 0.257]
  Observations       3,134,363

Patient controls: demographics and CCS category
  FPL               -0.269 ***   [-0.329, -0.209]
  PPS-Based FPL      0.138 ***   [0.0981, 0.177]
  Free-Care FPL      0.0148      [-0.114, 0.143]
  Observations       3,134,363

Note: Standard errors are clustered at the state level. CIs are reported in brackets. * p<0.05, ** p<0.01, *** p<0.001. All models include hospital, year, and season fixed effects.

Table 10: Summarizing hospital charges and percentage of list price paid by payer-type

Insurance          Count    Mean Hospital   Mean Percentage of
                            Charges         List Price Paid
Public Insurance   18,187   $15,088         37%
  Medicare          9,252   $19,881         39%
  Medicaid          8,935   $10,126         34%
Uninsured           3,939    $6,045         37%

Note: The data are from the Medical Expenditure Panel Survey from 2000-2004.
Table 11: Comparing characteristics of "identifying" and "non-identifying" hospitals

                                       "Identifying"   "Non-identifying" hospitals
                                       hospitals       from treated states
Ownership characteristics
  For-profit                           12.2%           11.5%
  Non-profit                           71.9%           70.7%
  Government, non-federal              15.7%           17.7%
  Member of multi-hospital system(a)   59.1%           57.4%
Size
  Total discharges per year            10,544          9,974
Location
  Urban                                78.1%           75.5%
Teaching status
  Teaching hospital                    25.2%           26.4%
Patient characteristics
  Percent uninsured                    4.58%           4.54%
Number of hospitals                    432             461

Note: Data are from the Nationwide Inpatient Sample for years 2003-2011. (a) indicates the variable is only available beginning in 2007.

Table 12: The effect of fair pricing laws on various indicators of quantity of care delivered to uninsured patients

Outcome variable: ln(Total Charges)
Diff-in-diff specification
  FP law in effect             -0.0657 ***
    state clusters             [-0.103, -0.0286]
    Conley-Taber               [-0.1005, -0.0352]
Event study specification
  3 or more years prior         0.0138      [-0.0152, 0.0428]
  2 years prior                 0.0143      [-0.0331, 0.0616]
  Enactment year               -0.0207      [-0.0525, 0.0112]
  1 year post                  -0.0610 *    [-0.109, -0.0127]
  2 years post                 -0.0795 *    [-0.148, -0.0108]
  Observations                  3,110,006
  States                        41

Outcome variable: Frequency of preventable admissions
Diff-in-diff specification
  FP law in effect             -0.00366
    state clusters             [-0.0075, 0.0002]
    Conley-Taber               [-0.011, 0.003]
Event study specification
  3 or more years prior        -0.0005      [-0.0013, 0.0003]
  2 years prior                -0.0005      [-0.0025, 0.0016]
  Enactment year               -0.0014      [-0.0029, 0.00012]
  1 year post                  -0.0012      [-0.0026, 0.0002]
  2 years post                 -0.0014      [-0.0042, 0.0013]
  Observations                  2,699,691
  States                        41

Outcome variable: Frequency of transfers
Diff-in-diff specification
  FP law in effect              0.00466 *
    state clusters             [-0.0037, 0.0130]
    Conley-Taber               [0.0023, 0.0130]
Event study specification
  3 or more years prior        -0.0011      [-0.0063, 0.0041]
  2 years prior                -0.0022      [-0.0086, 0.0042]
  Enactment year                0.0042      [-0.0035, 0.0120]
  1 year post                   0.0029      [-0.0021, 0.0080]
  2 years post                  0.00348     [-0.0032, 0.0102]
  Observations                  3,144,168
  States                        41

Note: Data are from the Nationwide Inpatient Sample for years 2003-2011. Standard errors are clustered at the state level for yearly effects, and both state clustering and Conley-Taber CIs are shown for DD results. CIs are reported in brackets. * p<0.05, ** p<0.01, *** p<0.001. Each regression includes hospital, year, and season fixed effects. All models also include the patient demographics and risk-adjusters. See the footnote of Table 4 for a full list of controls.

Table 13: The effect of fair pricing laws on various quality metrics

Outcome variable: Mortality from selected conditions
Risk-adjustment strategy:        AHRQ expected        Demographics and
                                 mortality            primary CCS diagnosis
Diff-in-diff specification
  Fair pricing law in effect     -0.0040 *            -0.0065 **
                                 [-0.0077, -0.0003]   [-0.0112, -0.0019]
Event study specification
  3 or more years prior           0.00001             -0.0022
                                 [-0.0077, 0.0077]    [-0.0125, 0.0081]
  2 years prior                  -0.0003              -0.0026
                                 [-0.0098, 0.0093]    [-0.0131, 0.0080]
  Enactment year                 -0.0048              -0.0087
                                 [-0.0145, 0.0050]    [-0.0226, 0.0053]
  1 year post                    -0.0024              -0.0046
                                 [-0.0110, 0.0063]    [-0.0140, 0.0049]
  2 or more years post           -0.0051              -0.0117
                                 [-0.0148, 0.0045]    [-0.0242, 0.0008]
  Observations                    276,477              276,477
  States                          41                   41

Outcome variables:               Mortality from       Frequency of
                                 any condition        beneficial procedures
Risk-adjustment strategy (both columns): demographics and primary CCS diagnosis
Diff-in-diff specification
  Fair pricing law in effect      0.0002               0.0121
                                 [-0.0012, 0.0017]    [-0.0131, 0.0373]
Event study specification
  3 or more years prior          -0.0011               0.0005
                                 [-0.0037, 0.0014]    [-0.0170, 0.0180]
  2 years prior                  -0.0015               0.0163
                                 [-0.0040, 0.0011]    [-0.0022, 0.0348]
  Enactment year                 -0.0012               0.0284
                                 [-0.0042, 0.0019]    [-0.0014, 0.0581]
  1 year post                    -0.0002               0.0053
                                 [-0.0035, 0.0030]    [-0.0117, 0.0223]
  2 or more years post           -0.0007               0.0158
                                 [-0.0029, 0.0016]    [-0.0113, 0.0429]
  Observations                    3,142,717            146,715
  States                          41                   41

Outcome variable: Frequency of preventable complications
Risk-adjustment strategy:        AHRQ predicted       Demographics and
                                 frequency            primary CCS diagnosis
Diff-in-diff specification
  Fair pricing law in effect      0.0001               0.0001
                                 [-0.001, 0.0016]     [-0.0010, 0.0013]
Event study specification
  3 or more years prior          -0.0014              -0.0016
                                 [-0.0033, 0.0005]    [-0.0035, 0.0002]
  2 years prior                  -0.0002              -0.0004
                                 [-0.0015, 0.0010]    [-0.0017, 0.0010]
  Enactment year                 -0.0003              -0.0001
                                 [-0.0018, 0.0011]    [-0.0009, 0.0006]
  1 year post                    -0.0012              -0.0004
                                 [-0.0027, 0.0003]    [-0.0017, 0.0010]
  2 or more years post           -0.0002              -0.0016 *
                                 [-0.0025, 0.0022]    [-0.0032, -0.0001]
  Observations                    2,551,837            2,551,837
  States                          41                   41

Note: Data are from the Nationwide Inpatient Sample for years 2003-2011. Standard errors are clustered at the state level. CIs are reported in brackets. * p<0.05, ** p<0.01, *** p<0.001. Each regression includes hospital, year, and season fixed effects. All models also include the patient demographics and risk-adjusters. See the footnote of Table 4 for a full list of controls.
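The estimating approach summarized in the table notes (a difference-in-differences regression with hospital and year fixed effects and standard errors clustered at the state level) can be sketched as follows. This is a rough illustration on simulated data, not the authors' code: the variable names, the six "treated" states, the 2007 enactment year, and the built-in -0.07 effect are all invented, and the paper's season fixed effects and patient risk-adjusters are omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
state = rng.integers(0, 41, n)
df = pd.DataFrame({
    "state": state,
    "hospital": state * 10 + rng.integers(0, 10, n),  # hospitals nested in states
    "year": rng.integers(2003, 2012, n),              # 2003-2011, as in the NIS sample
})
# Hypothetical policy: FPL in effect from 2007 in six treated states
df["fpl"] = ((df["state"] < 6) & (df["year"] >= 2007)).astype(int)
# Simulated outcome with a built-in -0.07 effect of the law
df["log_charges"] = 9.0 - 0.07 * df["fpl"] + rng.normal(0, 1, n)

# Two-way fixed effects difference-in-differences:
# hospital and year fixed effects, standard errors clustered by state
res = smf.ols("log_charges ~ fpl + C(hospital) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["state"]})
print(res.params["fpl"], res.bse["fpl"])
```

The event study version replaces the single `fpl` indicator with dummies for years relative to enactment (3+ years prior as the omitted category), and the Conley-Taber intervals shown in the tables come from a separate inference procedure designed for a small number of policy changes.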
Authors: Michael Batty and Benedic Ippolito. Publication: AEI Paper & Studies, November 1, 2015.