
Date stamping: will it withstand the test of time?

The past two decades have seen an explosion of work on measuring and reporting health care quality as a means to ensure accountability and to stimulate quality improvement. In the United States, hospital quality has been a central focus of this activity. Hospitals generate almost a third of all health care costs (Smith et al. 2006), and most deaths occur in this setting (IOM 2000).

Currently the most readily available information for assessing hospital quality is administrative data, which are large, computerized data files compiled primarily for the purpose of billing for health care services (Iezzoni 2003). Patient-level hospital discharge data are an example of administrative data. Administrative data are a valuable resource for health services research and quality assessment because they are nearly universal in their coverage of hospitalizations in a state, available for a large number of patients, uniform in their coding, available electronically, and relatively inexpensive to obtain (Scinto, Sherwin, and Fowler 2000). Mandatory collection of these data from all hospitals avoids the biases that typically arise with voluntary reporting systems (Tuttle, Panzer, and Baird 2002).

Over 30 states mandate the collection of patient-level hospital discharge data and many participate in a national effort organized by the Agency for Healthcare Research and Quality (AHRQ) to compile longitudinal patient-level data. States establish their own requirements for what data elements hospitals need to report and how the data are disseminated. The Health Insurance Portability and Accountability Act (HIPAA) has stimulated a movement to streamline standards for all state health datasets.

States typically collect approximately 15-20 types of data elements, including patient demographics, health insurance status, hospitalization dates, diagnoses, procedures, and disposition in their patient-level hospital discharge datasets, which are generally available from medical records. Most of these elements are originally derived from the Uniform Hospital Discharge Data Set (UHDDS), and have standardized definitions. Although abstracted patient-level hospital discharge data do not typically include physiologic information such as vital signs or laboratory or test results, substantial clinical information is conveyed through the ICD-9-CM diagnostic and procedure codes.

Discharge data support some assessments corresponding to the Institute of Medicine's six key dimensions of health care quality, which include safety, timeliness, effectiveness, efficiency, equity, and patient-centeredness (IOM 2001). However, these assessments are limited by the small number of elements in the dataset. For example, the only personal characteristics that can be used for equity assessments are typically age, sex, race, ethnicity, and health insurance status. Discharge datasets do not include measures of financial status, education, language, or a variety of other personal characteristics for which it might be important to monitor health care disparities. These datasets are also particularly poor at reflecting patient-centered care. A few states have added a code for whether the patient expressed a wish not to be resuscitated (DNR), but even this may not be purely an expression of the patient's preferences and is limited to just one aspect of care. While hospital discharge data contain little in the way of information on specific processes of care, they do include information on hospital survival that can generate outcome reports (AHRQ 2005). Patient identifiers such as the social security number and birth date can facilitate linkage of records across multiple admissions and facilities, as well as to death certificate data, to enable a more complete picture of outcomes than can be obtained by facility-specific data alone.

Because patients are not randomly assigned to hospitals, differences in the health status of patients who are cared for in different hospitals can confound an assessment of health care quality based on patients' health outcomes. Outcomes reporting usually relies upon risk-adjustment, a multivariate statistical technique that accounts for observed variation in the health of patients being cared for in different health care settings.
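The observed-versus-expected logic behind risk-adjustment can be sketched in a few lines of code. The patient records, comorbidity flags, and logistic-model coefficients below are all made up for illustration; this is not any published risk-adjustment model.

```python
import math

# Hypothetical discharge records: demographics, comorbidity flags, and outcome.
patients = [
    {"age": 81, "diabetes": 1, "heart_failure": 1, "died": 1},
    {"age": 45, "diabetes": 0, "heart_failure": 0, "died": 0},
    {"age": 67, "diabetes": 1, "heart_failure": 0, "died": 0},
    {"age": 74, "diabetes": 0, "heart_failure": 1, "died": 1},
]

# Assumed logistic-regression coefficients (for illustration only).
INTERCEPT, B_AGE, B_DIABETES, B_HF = -6.0, 0.05, 0.4, 0.9

def predicted_mortality(p):
    """Predicted probability of in-hospital death given the patient's risk factors."""
    logit = (INTERCEPT + B_AGE * p["age"]
             + B_DIABETES * p["diabetes"] + B_HF * p["heart_failure"])
    return 1.0 / (1.0 + math.exp(-logit))

# Hospital-level comparison: observed death rate vs. the risk model's expectation.
observed = sum(p["died"] for p in patients) / len(patients)
expected = sum(predicted_mortality(p) for p in patients) / len(patients)
oe_ratio = observed / expected  # > 1 suggests worse-than-expected outcomes
```

The hospital's rating rests on the ratio of observed to expected mortality, so anything that shifts the expected rate — such as which secondary diagnoses enter the model — shifts the rating.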

Risk-adjustment is a powerful statistical method, but the quality and type of diagnostic information contained in routine hospital discharge datasets limits its application. First, there is a concern that administrative diagnostic information is incomplete or inaccurate. The linking of reimbursement to the coding of diagnostic information in these datasets has improved the accuracy of reported information, but legitimate questions remain about the validity and reliability of diagnostic coding (Fisher et al. 1992).

Second, the administrative diagnostic information in hospital discharge datasets is better able to account for health status differences due to comorbidities than for the severity of a particular condition. For many conditions, the ICD-9-CM coding schema does not allow sufficient clinical specificity to support judgments about disease severity. This limitation contributes to clinicians' skepticism about the value of quality assessments obtained from patient-level hospital discharge data. The addition of clinical variables to patient-level hospital discharge datasets does improve risk-adjustment models (Romano, Remy, and Luft 1996; Pine, Jones, and Lou 1998).

Third, the diagnostic information reported in most states' patient-level hospital discharge data does not distinguish between comorbidities and complications. Some diagnostic codes are obviously complications (e.g., iatrogenic pneumothorax). Often, however, the same diagnosis (e.g., pneumonia) that is a comorbidity in one clinical situation can be a complication in another. The uncertainty in classifying a secondary diagnosis as a complication or a comorbidity can undermine a risk-adjustment model in two ways.

First, including a diagnosis in a risk-adjustment model that is the result of poor quality care can lead to over-adjustment that obscures the accurate assessment of providers. Second, excluding diagnoses from a model that could sometimes reflect a complication can result in the undercounting of true comorbidities and thereby inappropriately penalize hospitals that care for sicker patients.

The studies by Glance et al. provide new and important information on the potential for date stamping--or more specifically an indicator of whether a condition was present at admission--to distinguish between comorbidities and complications and how excluding from risk-adjustment models diagnoses that are complications rather than comorbidities affects ratings of hospital performance. Conditions present at admission are by definition comorbidities while those that occur after admission are complications.
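The core of date stamping reduces to a simple filter on a present-on-admission (POA) indicator. The sketch below illustrates that logic; the field names and record layout are hypothetical, not an actual state file format.

```python
# Sketch of date-stamping logic: a present-on-admission (POA) flag splits
# secondary diagnoses into comorbidities (present at admission) and
# complications (arising during the stay). Field names are hypothetical.

def classify_secondary_diagnoses(diagnoses):
    """Partition secondary ICD-9-CM codes on the POA indicator."""
    comorbidities = [d["code"] for d in diagnoses if d["poa"] == "Y"]
    complications = [d["code"] for d in diagnoses if d["poa"] == "N"]
    return comorbidities, complications

secondary_dx = [
    {"code": "250.00", "poa": "Y"},  # diabetes: present at admission
    {"code": "512.1",  "poa": "N"},  # iatrogenic pneumothorax: in-hospital
]
comorbidities, complications = classify_secondary_diagnoses(secondary_dx)
```

Only the codes in the first list would then be eligible to enter a risk-adjustment model as comorbidities.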

The authors report several interesting findings. First, approximately 93 percent of secondary diagnoses used in two commonly applied risk-adjustment models, the Dartmouth/Charlson index and the Elixhauser comorbidity measures (Charlson et al. 1987; Elixhauser et al. 1998), are reported as present at the time of admission, suggesting that these models appropriately reflect comorbidities and not complications (Glance et al. 2006b). Second, some diagnoses (e.g., ectopic pregnancy, old myocardial infarction, and diabetes) are almost always present at admission and are therefore clearly comorbidities, while others (respiratory complications, cardiac complications, and postoperative shock) are more often complications. Third, using date stamping to distinguish comorbidities from complications changes hospitals' risk-adjusted outcomes and their relative performance rankings compared with risk-adjusted outcomes calculated without date stamping (Glance et al. 2006a).

These findings provide a compelling argument in favor of the use of date stamping to improve the usefulness of information in hospital discharge data. However, there are still questions about the validity of date stamping, particularly for acute conditions, that need to be addressed before recommending the adoption of this approach and encouraging states to invest resources toward collecting information on whether a condition was present at admission.

Vagaries in how hospitals are instructed to code conditions can contribute to the problem. For example, how should a condition that may have been present at admission but was not diagnosed until later during the hospitalization be coded? Conceptually one might ideally want conditions that are not the result of treatment and that reveal themselves during a hospitalization to be coded as comorbidities. While some hospitals may apply this logic, others may be inclined to interpret the coding guidelines more strictly and include only those conditions that are discovered within the first 24 hours of the admission. Better standards are needed to ensure uniform application of date stamping.

Perhaps a larger problem arises from the fact that the ratings of hospital performance can have financial consequences that can compromise the reliability of hospitals' determination of whether a condition was truly present at admission. A hospital has a vested interest in coding its patients' secondary conditions as present at admission because this will tend to increase its expected mortality rate. The higher an institution's expected mortality rate is, the easier it is for that institution to achieve a quality rating based on a comparison of its observed versus expected mortality rate that is as good or better than average. A coding bias toward labeling secondary conditions as present at admission may have relatively little impact on the accurate assessment of chronic conditions, as one would expect that most chronic conditions are truly present at admission. For acute conditions, a coding bias is more problematic. Assuming that a smaller percentage of acute conditions than chronic conditions are truly present at admission, a bias toward misclassifying secondary conditions as present at admission has a greater potential to mischaracterize acute rather than chronic conditions as comorbidities when they were in actuality complications. This is concerning because acute conditions may be some of the most significant predictors of mortality (Krumholz et al. 1999). For example, in the setting of an acute myocardial infarction, cardiogenic shock is a major predictor of mortality. A secondary diagnosis code of cardiogenic shock without date stamping could be indicative of either an acute comorbidity that accompanied the presentation of the acute myocardial infarction or a complication arising from suboptimal treatment of the acute myocardial infarction. The assignment of a highly predictive acute condition as a comorbidity or a complication of treatment could determine whether a hospital is labeled as a high, average, or low quality health care provider.
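The incentive described above reduces to simple arithmetic. The rates below are assumed for illustration only; they show how shifting complications into the "present at admission" column inflates the expected mortality rate and flatters the observed/expected ratio.

```python
# Toy numbers (assumed) showing the coding incentive.

observed_rate = 0.040     # hypothetical 4% in-hospital mortality

# Expected rate when acute complications are correctly excluded from
# the risk-adjustment model.
expected_honest = 0.030

# Expected rate when the same conditions are coded as present at
# admission and counted as comorbidities, inflating expected mortality.
expected_inflated = 0.050

oe_honest = observed_rate / expected_honest      # worse than expected (> 1)
oe_inflated = observed_rate / expected_inflated  # better than expected (< 1)
```

The same observed mortality yields opposite quality verdicts depending solely on how the secondary conditions were flagged.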

Variability in how date stamping is deployed across hospitals could undermine the conclusions that are drawn from applying it in risk-adjustment models. If coding errors are randomly distributed across the institutions being compared, this is less concerning than if there are systematic biases in how information is coded at particular institutions. If we hope to have valid and reliable performance reports, then independent medical record abstractions are needed to audit and ensure the accuracy of the information hospitals report about their patient discharges. Over-coding of conditions as present at admission when they truly were not can inflate the assessment of a hospital's expected mortality rate and thereby minimize the contribution disease severity plays in the expected mortality rate of hospitals that actually care for sicker patient populations. In addition, miscoding complications as conditions that were present at admission can contribute to the generation of a diagnosis-related group that is associated with a higher payment. This would have the effect of rewarding hospitals for worse quality care.

Date stamping does not address all of the limitations of patient-level hospital discharge data, but it does offer the promise of improving the calculation of risk-adjusted health outcomes to assess hospital performance. Future studies need to demonstrate that this information can be coded in a valid and reliable manner across hospitals. In time we will know whether the date stamp concept, combined with thoughtful guidelines on how to ensure that the coding is uniform, merits expansion to states beyond California and New York.


AHRQ. 2005. "Health Care Cost and Utilization Project" [accessed on November 1, 2005].

Charlson, M., P. Pompei, K. Ales, and C. MacKenzie. 1987. "A New Method of Classifying Prognostic Comorbidity in Longitudinal Studies: Development and Validation." Journal of Chronic Diseases 40 (5): 373-83.

Elixhauser, A., C. Steiner, D. Harris, and R. Coffey. 1998. "Comorbidity Measures for Use with Administrative Data." Medical Care 36 (1): 8-27.

Fisher, E., F. Whaley, W. Krushat, D. Malenka, C. Fleming, J. Baron, and D. Hsia. 1992. "The Accuracy of Medicare's Hospital Claims Data: Progress Has Been Made, but Problems Remain." American Journal of Public Health 82 (2): 243-8.

Glance, L. G., A. W. Dick, T. M. Osler, and D. B. Mukamel. 2006a. "Accuracy of Hospital Report Cards Based on Administrative Data." Health Services Research DOI:10.1111/j.1475-6773.2006.00554.x.

--. 2006b. "Does Date Stamping ICD-9-CM Codes Increase the Value of Clinical Information in Administrative Data?" Health Services Research 41 (1): 231-51.

Iezzoni, L. 2003. Risk-Adjustment for Measuring Health Care Outcomes. Chicago: Health Administration Press.

IOM. 2000. To Err Is Human. Washington, DC: National Academies Press.

--. 2001. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academies Press.

Krumholz, H. M., J. Chen, Y. Wang, M.J. Radford, Y. T. Chen, and T. A. Marciniak. 1999. "Comparing AMI Mortality among Hospitals in Patients 65 Years of Age and Older: Evaluating Methods of Risk-Adjustment." Circulation 99 (23): 2986-92.

Pine, M., B. Jones, and Y. Lou. 1998. "Laboratory Values Improve Predictions of Hospital Mortality." International Journal for Quality in Health Care 10 (6): 491-501.

Romano, P. S., L. L. Remy, and H. S. Luft. 1996. "Second Report of the California Hospital Outcomes Project (1996): Acute Myocardial Infarction Volume Two." Center for Health Services Research in Primary Care. Reports Prepared for the California Office of Statewide Health Planning and Development.

Scinto, J. D., T. E. Sherwin, and J. Fowler. 2000. Use of Administrative Data in Measuring Quality of Care. Middletown, CT: Qualidigm.

Smith, C., C. Cowan, S. Heffler, and A. Catlin. 2006. "National Health Spending in 2004: Recent Slowdown Led by Prescription Drug Spending." Health Affairs (Millwood) 25 (1): 186-96.

Tuttle, D., R.J. Panzer, and T. Baird. 2002. "Using Administrative Data to Improve Compliance with Mandatory State Event Reporting." Joint Commission Journal on Quality Improvement 28 (6): 349-58.

Address correspondence to Andrew B. Bindman, M.D., Professor of Medicine, University of California, San Francisco, Box 1364, San Francisco, CA 94143.
COPYRIGHT 2006 Health Research and Educational Trust

Authors: Andrew B. Bindman; Adam Bennett
Publication: Health Services Research
Date: August 1, 2006