Rapid response to:

The death of death rates?

BMJ 2015;351:h3466 (Published 14 July 2015). doi: https://doi.org/10.1136/bmj.h3466

Re: The death of death rates?

Doran et al’s editorial[1] asks whether a study by Hogan et al[2] in the same issue represents a fatal blow to the use of mortality in measuring quality in hospitals. While Hogan et al note an association between standardised mortality ratios for a specific disease and measures of quality of care, the tone of Doran et al’s editorial questions all mortality indicators. Our experience of developing and publicly reporting mortality outcomes in New South Wales (NSW), Australia, suggests that the question should be recast, away from “are mortality measures useful?” towards “which mortality measures are useful?”

In 2013, our organisation published a report using a 30-day Risk-Standardised Mortality Ratio (RSMR) indicator for five clinical conditions (acute myocardial infarction; ischaemic stroke; haemorrhagic stroke; pneumonia; hip fracture surgery) to highlight outlier hospitals in the state.[3] The analyses drew on linked databases of hospital records and death registries spanning 12 years (July 2000 – June 2012).

Extensive development and analysis highlighted a number of advantages of the RSMR approach we adopted. In our context, the RSMR draws on linked data and considers patients rather than hospitalisations to assess mortality. This means it captures deaths that occur within the hospital as well as those immediately following discharge, relating to both the short-term and medium-term consequences of care provision. The use of linked data in our initial three-year reporting period (July 2009 – June 2012) increased the number of deaths captured by between 21% (for haemorrhagic stroke) and 100% (for hip fracture surgery). Linked data also provide increased capacity to identify comorbidities in medical records using a look-back period, strengthening risk adjustment.
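To make the linkage step concrete, the sketch below (in Python, with toy data, illustrative column names and a one-year look-back window, rather than our actual data model) flags deaths within 30 days of admission wherever they occur, and pools diagnosis codes from prior admissions:

import pandas as pd

# Toy data standing in for linked hospital and death registry collections.
adm = pd.DataFrame({
    "patient_id":     [1, 1, 2, 3],
    "admit_date":     pd.to_datetime(["2011-03-01", "2011-09-10",
                                      "2011-05-02", "2012-01-15"]),
    "diagnosis_code": ["I21", "I50", "J18", "S72"],
})
deaths = pd.DataFrame({
    "patient_id": [1, 3],
    "death_date": pd.to_datetime(["2011-09-25", "2012-03-30"]),
})

# Link on the patient, not the hospitalisation, so deaths occurring shortly
# after discharge are captured alongside in-hospital deaths.
linked = adm.merge(deaths, on="patient_id", how="left")
linked["death_30d"] = (linked["death_date"] - linked["admit_date"]).dt.days.between(0, 30)

# Look-back: pool diagnosis codes from any admission in the prior year to
# strengthen the comorbidity profile used for risk adjustment.
def lookback_codes(group, days=365):
    prior = []
    for _, row in group.iterrows():
        window = group[(group["admit_date"] < row["admit_date"]) &
                       (group["admit_date"] >= row["admit_date"] - pd.Timedelta(days=days))]
        prior.append(sorted(set(window["diagnosis_code"])))
    return pd.Series(prior, index=group.index)

adm["prior_codes"] = adm.groupby("patient_id", group_keys=False).apply(lookback_codes)
print(linked[["patient_id", "admit_date", "death_30d"]])
print(adm)

Linking on the patient identifier rather than the hospitalisation is what allows post-discharge deaths to be counted.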

The RSMR is based on disease-specific adjustment models that calculate, for each of the selected conditions, the expected number of deaths in the 30 days following admission. The ratio compares this expected number with the observed number of deaths. The significance of each hospital’s RSMR is interpreted using a funnel plot.[4] Funnel plots illustrate the relationship between the RSMR and sample size, and reduce the chance of a type I error (a false positive) by applying wider control limits to hospitals with smaller volumes of patients.
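A minimal sketch of the calculation follows, on synthetic data; the simple logistic risk model and the normal-approximation control limits are illustrative simplifications, with exact formulations given by Spiegelhalter.[4]

import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

# Synthetic patient-level data -- purely illustrative, not NSW data.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "hospital": rng.integers(0, 20, n),
    "age":      rng.normal(70, 10, n),
    "comorbid": rng.integers(0, 2, n),
})
true_logit = -6 + 0.06 * df["age"] + 0.5 * df["comorbid"]
df["died_30d"] = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

# Condition-specific risk model: probability of death within 30 days.
X = sm.add_constant(df[["age", "comorbid"]])
fit = sm.Logit(df["died_30d"], X).fit(disp=0)
df["p_hat"] = fit.predict(X)

# Observed and expected deaths per hospital; RSMR = observed / expected.
by_hosp = df.groupby("hospital").agg(observed=("died_30d", "sum"),
                                     expected=("p_hat", "sum"))
by_hosp["rsmr"] = by_hosp["observed"] / by_hosp["expected"]

# Funnel-plot control limits around 1: limits widen as expected deaths
# shrink, so low-volume hospitals are not flagged on chance variation alone.
z = norm.ppf(0.999)  # roughly 3-sigma limits
by_hosp["upper"] = 1 + z / np.sqrt(by_hosp["expected"])
by_hosp["lower"] = (1 - z / np.sqrt(by_hosp["expected"])).clip(lower=0)
by_hosp["outlier"] = ((by_hosp["rsmr"] > by_hosp["upper"]) |
                      (by_hosp["rsmr"] < by_hosp["lower"]))
print(by_hosp.round(2))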

As with any indicator, the RSMR has limitations that must be borne in mind when interpreting results. It can be affected by the quality of record keeping and coding, the appropriateness of the risk-adjustment method, the capacity to capture exceptional organisational circumstances, and the complex nature of health systems. In our work in the NSW context, however, RSMRs are not used to make direct comparisons between hospitals: similar mortality ratios may be significantly higher than expected in one hospital yet fall within the control limits for another, given their differing sizes. In addition, RSMRs are not to be used as a measure of the number of avoidable or iatrogenic deaths. While it is known that variations in clinical care can affect the likelihood of survival for acute conditions, and that a proportion of deaths are attributable to suboptimal care, RSMRs do not distinguish between deaths that are avoidable and those that reflect the natural course of illness. Our use of the RSMR focuses on its strength as a screening tool, pointing to areas of practice or organisation where further investigation may be warranted.

Many deaths are unavoidable, and may even be an expected outcome in some circumstances; however, variation in mortality across healthcare facilities, after taking account of patient-level factors, can reflect unwarranted clinical variation. According to Hogan et al’s analyses, 3.6% of deaths were avoidable. This is not an insubstantial number: applied to our five conditions over the three-year period in NSW, it would equate to 427 avoidable deaths.
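As a back-of-envelope reading of those two figures (the cohort total below is implied by them, not quoted directly):

avoidable_rate = 0.036       # Hogan et al's estimate of avoidable deaths
avoidable_nsw = 427          # implied count, five conditions, NSW, 2009-12
cohort_deaths = avoidable_nsw / avoidable_rate
print(round(cohort_deaths))  # ~11861 thirty-day deaths in the reporting cohort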

Doran et al’s editorial is cast in terms of ‘comparative mortality data’. Careful reading suggests their criticisms are directed at hospital standardised mortality ratios (HSMRs). The lack of precision in terminology is problematic in that it risks devaluing a useful approach to measuring performance and informing improvement. Hogan et al’s paper closes with implications for research, but the suggestions remain focused on exploring the non-specific HSMR. We argue that the debate and effort should move from HSMRs towards more specific RSMRs: measures that have, in NSW, been shown to be meaningful, actionable and relevant.

Jean-Frédéric Lévesque
Kim Sutherland
Sadaf Marashi-Pour
Douglas Lincoln
Huei-Yang Chen
Julia Chessman

Bureau of Health Information, Sydney NSW, Australia

References
1. Doran T, Bloor K, Black N. The death of death rates? BMJ 2015;351:h3466.
2. Hogan H, Zipfel R, Neuburger J, et al. Avoidability of hospital deaths and association with hospital-wide mortality ratios: retrospective case record review and regression analysis. BMJ 2015;351:h3239.
3. Bureau of Health Information. 30-day mortality following hospitalisation, five clinical conditions, NSW, July 2009 – June 2012. Sydney: BHI; 2013.
4. Spiegelhalter DJ. Funnel plots for comparing institutional performance. Stat Med 2005;24:1185-202.

Competing interests: No competing interests

17 July 2015