Identifying Poor-Quality Hospitals
- 1 August 1996
- journal article
- Published by Wolters Kluwer Health in Medical Care
- Vol. 34 (8), 737-753
- https://doi.org/10.1097/00005650-199608000-00002
Abstract
Many groups involved in health care are very interested in using external quality indices, such as risk-adjusted mortality rates, to examine hospital quality. The authors evaluated the feasibility of using mortality rates for medical diagnoses to identify poor-quality hospitals. The Monte Carlo simulation model was used to examine whether mortality rates could distinguish 172 average-quality hospitals from 19 poor-quality hospitals (5% versus 25% of deaths being preventable, respectively), using the largest diagnosis-related groups (DRGs) for cardiac, gastrointestinal, cerebrovascular, and pulmonary diseases as well as an aggregate of all medical DRGs. Discharge counts and observed death rates for all 191 Michigan hospitals were obtained from the Michigan Inpatient Database. Positive predictive value (PPV), sensitivity, and area under the receiver operating characteristic curve were calculated for mortality outlier status as an indicator of poor-quality hospitals. Sensitivity analysis was performed under varying assumptions about the time period of evaluation, quality differences between hospitals, and unmeasured variability in hospital casemix. For individual DRG groups, mortality rates were a poor measure of quality, even using the optimistic assumption of perfect casemix adjustment. For acute myocardial infarction, high mortality rate outlier status (using 2 years of data and a 0.05 probability cutoff) had a PPV of only 24%, thus, more than three fourths of those labeled poor-quality hospitals (high mortality rate outliers) actually would have average quality. If we aggregate all medical DRGs and continue to assume very large quality differences and perfect casemix adjustment, the sensitivity for detecting poor-quality hospitals is 35% and PPV is 52%. Even for this extreme case, the PPV is very sensitive to introduction of small amounts of unmeasured casemix differences between hospitals. 
Although they may be useful for some surgical diagnoses, DRG-specific hospital mortality rates probably cannot accurately detect poor-quality outliers for medical diagnoses. Even when collapsed across all medical DRGs, hospital mortality rates seem unlikely to be accurate predictors of poor quality, and punitive measures based on high mortality rates would frequently penalize good or average hospitals.
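The screening logic the abstract evaluates can be sketched in a small simulation: draw deaths for "average" and "poor" hospitals, flag high-mortality outliers against the average-rate null at a 0.05 probability cutoff, and tally PPV and sensitivity. This is a hypothetical illustration and not the authors' model; the hospital counts (172 and 19) come from the abstract, but the discharge count, baseline death rate, and the mapping from preventable-death fractions to death rates are invented assumptions.

```python
# Hypothetical Monte Carlo sketch of the study's design -- NOT the authors' model.
# All parameters except the hospital counts are invented for illustration.
import random
from math import comb

random.seed(42)

N_AVG, N_POOR = 172, 19      # hospital counts from the abstract
DISCHARGES = 200             # assumed discharges per hospital for one DRG
AVG_RATE = 0.15              # assumed death rate at average-quality hospitals
# If 5% of deaths are preventable at average hospitals and 25% at poor ones,
# a shared non-preventable death rate implies:
POOR_RATE = AVG_RATE * (1 - 0.05) / (1 - 0.25)   # about 1.27x higher
ALPHA = 0.05                 # one-sided probability cutoff for outlier status

def critical_k(n, p, alpha):
    """Smallest death count k with P(X >= k) < alpha, X ~ Binomial(n, p)."""
    tail = 1.0               # P(X >= 0)
    for k in range(n + 1):
        if tail < alpha:
            return k
        tail -= comb(n, k) * p**k * (1 - p)**(n - k)
    return n + 1

K_CRIT = critical_k(DISCHARGES, AVG_RATE, ALPHA)

tp = fp = fn = 0
for _ in range(200):                         # simulation trials
    for is_poor in [False] * N_AVG + [True] * N_POOR:
        rate = POOR_RATE if is_poor else AVG_RATE
        deaths = sum(random.random() < rate for _ in range(DISCHARGES))
        outlier = deaths >= K_CRIT           # flagged as high-mortality outlier
        tp += outlier and is_poor
        fp += outlier and not is_poor
        fn += (not outlier) and is_poor

ppv = tp / (tp + fp)
sensitivity = tp / (tp + fn)
print(f"PPV = {ppv:.2f}, sensitivity = {sensitivity:.2f}")
```

Even with a fairly large assumed quality gap, a sketch like this shows why PPV stays modest: the 172 average hospitals generate enough chance outliers at the 0.05 cutoff to dilute the handful of true positives among only 19 poor-quality hospitals.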