A Penny of Prevention Doesn't Seem to Produce a Pound of Cure: NEJM's Hot-Spotting Study

By Chuck Dinerstein, MD, MBA — Jan 09, 2020
The idea was promoted with much fanfare. And to be honest, it made a lot of sense. Identify the 5% of patients who are chronically ill, the superusers who consume up to 20% of our healthcare resources, and provide them with the necessary additional support. In the tradeoff, their health will improve, and our costs will decline. With similar coverage, the New England Journal of Medicine now reports that there was no improvement in outcomes. But there's more to it. Let's take a look.

First, kudos to the Camden Coalition of Healthcare Providers, who started the movement and had the integrity to see whether the program accomplished its goals. They didn't take the money and run; they measured the outcomes and reported them, even when the results were not as hoped or predicted.

The study

"The intervention targeted super-utilizers of the health care system — persons with medically and socially complex needs who have frequent hospital admissions."

All the participants had at least two chronic conditions and two or more of the following issues: five or more medications, mental illness, substance abuse, socioeconomic problems, homelessness, or lack of social support. During the "index" hospitalization, the Coalition's multidisciplinary team deployed every support available: home visits, scheduling and accompanying patients to follow-up and specialty care, medication management, coaching in disease-specific self-care, and help in applying for social and behavioral services, the kitchen sink of care. The control group received the usual discharge care, which includes home care and social service outreach. Of the 1,500 patients identified, 800 consented to the study and were randomized to the control or treatment arm. The groups were relatively well-matched, with slightly more depression in the treatment arm and slightly more substance abuse in the control group.

The primary outcome was hospital readmission after discharge. Secondary outcomes were the number of those readmissions, cost and payment data, and mortality, all for the six months after the index hospitalization. Let's pause here to consider what was being measured; in essence, it was a comparison of whether a full-court clinical press could prevent readmission better than usual care. It did not measure whether patients' health or quality of life had improved.

On average, patients in the treatment group received 7.6 home visits and 8.8 telephone calls, and were accompanied on 2.5 physician visits. Median participation was three months. The treatment arm pursued "aggressive" goals, aiming for follow-up with the program within five days of discharge and follow-up with the primary care provider within seven days. Here is the first sign of problems when theory hits the harsh pavement of reality. While the program contacted 60% of the patients within five days, only 36% were seen by their primary care provider in the seven-day interval, and only 28% met both targets. Care coordination is not easy.

"The 180-day readmission rate was 62.3% in the treatment group and 61.7% in the control group. The intervention had no significant effect on this primary outcome … The intervention also had no effect on any of the secondary outcomes or within any of the prespecified subgroups."

Here is another lesson for readers, and another reason to acknowledge the integrity of these researchers. Readmissions in the treatment group did fall significantly during the six-month follow-up compared with the six months before enrollment, and lesser authors might have been tempted to make that the headline finding.

"In contrast with the results of the randomized, controlled trial, a comparison of admission rates for the intervention group alone in the 6 months before and after enrollment misleadingly suggested a substantial decline in admissions in response to the intervention because it did not account for the similar decline in the control group."

This is an old take-home message, but it bears repeating. No number can be understood in isolation; there must be a comparison, and you must scrutinize the comparator if you are to really get the message.
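A toy simulation makes the point concrete. The numbers below are hypothetical, not the study's data: if you enroll patients *because* they just had many admissions, their next six months will look better on average even with no intervention at all, which is regression to the mean.

```python
import math
import random

random.seed(42)

# Each simulated "patient" has a true underlying admission rate;
# observed counts fluctuate around it.

def poisson(lam):
    # Knuth's algorithm for a Poisson draw (stdlib-only).
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= threshold:
            return k - 1

patients = [random.uniform(0.5, 3.0) for _ in range(20000)]  # true rates
before = [poisson(rate) for rate in patients]                # admissions, prior 6 months

# "Hot-spot" selection: enroll only patients with 4+ recent admissions.
enrolled = [i for i in range(len(patients)) if before[i] >= 4]

# Usual care, zero treatment effect: next 6 months drawn from the
# same true rates.
after = [poisson(patients[i]) for i in enrolled]

mean_before = sum(before[i] for i in enrolled) / len(enrolled)
mean_after = sum(after) / len(enrolled)
print(f"mean admissions before: {mean_before:.2f}, after: {mean_after:.2f}")
```

The "after" mean falls well below the "before" mean despite no effect whatsoever, because enrollment selected patients partly on bad luck. Only a randomized control group, which shows the same decline, reveals the drop as an artifact.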

What to consider

The results of this study are at odds with other studies attempting to fully support healthcare's superusers; at best, the evidence is a mixed bag. Hospital systems and thought leaders embracing the need to address the socioeconomic determinants of health should take notice. Simply providing more inclusive services does not guarantee a better outcome, although it will guarantee higher costs.

This work was performed by a small, flexible organization dedicated to the task at hand. They were frustrated by homelessness and by the lack of any means to communicate with the patients they so deeply desired to serve. If we cannot reliably improve outcomes with this level of motivation and engagement, then the idea that scaling up will somehow generate the results we desire is wishful thinking.

The results demonstrate regression to the mean, or as that noted statistician Yogi Berra more aptly put it: "In theory, there is no difference between theory and practice. In practice, there is."

I think one of the problems is that this approach is being tried a bit too late. By the time these individuals are superusers, many of the underlying causes cannot easily, or ever, be corrected. Those pennies of prevention and pounds of cure may need to be spent earlier, before the diabetes and heart disease, before the homelessness and mental illness.

Source: Health Care Hotspotting – A Randomized Controlled Trial NEJM DOI: 10.1056/NEJMsa1906848


Chuck Dinerstein, MD, MBA

Director of Medicine

Charles Dinerstein, MD, MBA, FACS, is Director of Medicine at the American Council on Science and Health. He has over 25 years of experience as a vascular surgeon.
