Objectives: The authors determined whether standardized hospital mortality rates were consistent across six common medical diagnoses.
Methods: This retrospective cohort study included 89,851 patients aged 18 years and older discharged from 30 hospitals in a large metropolitan area from 1991 to 1993 with a principal diagnosis of acute myocardial infarction, congestive heart failure, pneumonia, stroke, obstructive lung disease, or gastrointestinal hemorrhage. For each hospital, standardized mortality ratios (observed/predicted mortality) were determined using validated risk-adjustment models based on clinical data elements abstracted from patients' hospital records. Hospitals were also categorized into quintiles on the basis of their standardized mortality ratios. For each pair of diagnoses, the correlation between standardized mortality ratios and the agreement between quintile rankings were determined.
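The analytic approach described above can be outlined roughly as follows. This Python sketch is only an illustration of the abstract's description (hospital- and diagnosis-specific SMRs, quintile ranks, pairwise Pearson correlations, and weighted kappa), not the authors' code; the data frame columns (hospital, diagnosis, died, predicted_risk), the linear kappa weighting, and the function names are assumptions.

```python
# Illustrative sketch of the analysis described in the abstract.
# Column names and the kappa weighting scheme are hypothetical assumptions.
from itertools import combinations

import pandas as pd
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score


def standardized_mortality_ratios(df: pd.DataFrame) -> pd.DataFrame:
    """SMR = observed deaths / sum of model-predicted risks, per hospital and diagnosis."""
    grouped = df.groupby(["diagnosis", "hospital"])
    smr = grouped["died"].sum() / grouped["predicted_risk"].sum()
    return smr.unstack("diagnosis")  # rows: hospitals, columns: diagnoses


def pairwise_agreement(smr: pd.DataFrame) -> pd.DataFrame:
    """Pearson r of SMRs and weighted kappa of quintile ranks for each diagnosis pair."""
    quintiles = smr.apply(lambda s: pd.qcut(s, 5, labels=False))  # quintile rank 0-4 per diagnosis
    rows = []
    for dx1, dx2 in combinations(smr.columns, 2):  # 15 pairs for 6 diagnoses
        r, _ = pearsonr(smr[dx1], smr[dx2])
        kappa = cohen_kappa_score(quintiles[dx1], quintiles[dx2], weights="linear")
        rows.append({"pair": f"{dx1} vs {dx2}", "pearson_r": r, "weighted_kappa": kappa})
    return pd.DataFrame(rows)
```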
Results: Correlations between hospitals' standardized mortality ratios for individual diagnoses were generally weak. For the 15 possible pairs of diagnoses, Pearson coefficients ranged from -0.10 to 0.43; only six were 0.30 or greater. Agreement between hospital quintile rankings was also generally low, with weighted kappa values ranging from -0.12 to 0.42. Three of the 15 kappa values were less than 0 (ie, agreement worse than chance), and only four exceeded 0.20, the conventional threshold for "fair" agreement. Although simulated analyses found that random variation and relatively low hospital volumes accounted for some of the disagreement among diagnosis-specific standardized mortality ratios, a large proportion remained unexplained.
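The role of random variation noted above can be illustrated with a small, entirely hypothetical simulation: even if each hospital had a single "true" quality effect shared by two diagnoses, binomial sampling noise at modest case volumes would attenuate the correlation between the resulting SMRs. All parameters below are invented for illustration and do not reflect the study's data or the authors' simulation methods.

```python
# Hypothetical simulation: sampling noise alone can decorrelate SMRs for two
# diagnoses even when each hospital has one shared "true" quality multiplier.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_hospitals = 30
true_effect = rng.normal(1.0, 0.15, n_hospitals)      # assumed shared hospital quality multiplier
baseline_risk = {"dx_a": 0.10, "dx_b": 0.08}           # assumed average mortality per diagnosis
volume = rng.integers(50, 400, n_hospitals)            # assumed cases per hospital per diagnosis

smrs = {}
for dx, risk in baseline_risk.items():
    expected = volume * risk                           # predicted deaths from the risk model
    observed = rng.binomial(volume, np.clip(true_effect * risk, 0, 1))
    smrs[dx] = observed / expected                     # standardized mortality ratio

r, _ = pearsonr(smrs["dx_a"], smrs["dx_b"])
print(f"Correlation of SMRs despite identical true quality: {r:.2f}")
```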
Conclusions: Standardized hospital mortality rates varied across six diagnoses that are likely managed by similar practitioners. Although this variability may be reduced by restricting analyses to high-volume hospitals, the findings indicate that for many hospitals, diagnosis-specific mortality rates may be an inconsistent measure of hospital quality, even when data are aggregated over multiple years.