Health systems are using machine learning to predict high-cost care. Will it help patients?
April 13, 2022
Health systems and payers eager to trim costs think the answer lies in a small group of patients who account for more spending than anyone else.
If they can catch these patients — typically termed “high utilizers” or “high cost, high need” — before their conditions worsen, providers and insurers can refer them to primary care or social programs like food services that could keep them out of the emergency department. A growing number also want to identify the patients at highest risk of being readmitted to the hospital, which can rack up more big bills. To find them, they’re whipping up their own algorithms that draw on previous claims information, prescription drug history, and demographic factors like age and gender.
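As a rough sketch of how such a homegrown model might be put together, the Python example below trains a classifier to flag members likely to land in the top 5% of next year's spending. The dataset, column names, and threshold are illustrative assumptions, not any particular organization's system.

```python
# Illustrative sketch only: a claims-based "high utilizer" classifier.
# The file, column names, and cost threshold are hypothetical assumptions,
# not any specific payer's or health system's model.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

claims = pd.read_csv("member_claims_history.csv")  # hypothetical dataset

# Features mirroring the inputs described above: prior claims,
# prescription history, and demographics.
features = [
    "prior_year_spend", "ed_visits_last_year", "inpatient_stays_last_year",
    "num_chronic_conditions", "num_active_prescriptions", "age", "sex",
]
X = pd.get_dummies(claims[features], columns=["sex"])

# Label: did the member land in the top 5% of spending the following year?
y = claims["next_year_spend"] >= claims["next_year_spend"].quantile(0.95)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

model = GradientBoostingClassifier().fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]  # predicted probability of high cost
print(f"AUROC: {roc_auc_score(y_test, risk):.3f}")
```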
Mutaz Shegewi, research director of market research firm IDC's global provider IT practice, said more and more of the providers he works with around the world are piloting or actively using predictive technology for prevention.
Crafted precisely and accurately, these models could significantly reduce costs and also keep patients healthier, said Nigam Shah, a biomedical informatics professor at Stanford. “We can use algorithms to do good, to find people who are likely to be expensive, and then subsequently identify those for whom we may be able to do something,” he said.
But that requires a level of coordination and reliability that so far remains rare in the use of health care algorithms. There’s no guarantee that these models, often homegrown by insurers and health systems, work as they’re intended to. If they rely only on past spending as a predictor of future spending and medical need, they risk skipping over sick patients who haven’t historically had access to health care at all. And the predictions won’t help at all if providers, payers, and social services aren’t actually adjusting their workflow to get those patients into preventive programs, experts warn.
“There’s very little organization,” Shah said. “There’s definitely a need for industry standardization both in terms of how you do it and what you do with the information.”
The first issue, experts said, is that there’s not an agreed-upon definition of what constitutes high utilization. As health systems and insurers develop new models, Shah said they will need to be very precise — and transparent — about whether their algorithms to identify potentially expensive patients are measuring medical spending, volume of visits compared to a baseline, or medical need based on clinical data.
Some models use cost as a proxy measure for medical need, but they often can’t account for disparities in a person’s ability to actually get care. In a widely cited 2019 paper examining an algorithm used by Optum, researchers concluded that the tool — which used prior spending to predict patient need — referred white patients for follow-up care more frequently than Black patients who were equally sick.
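That dynamic is easy to reproduce on synthetic data. In the sketch below, with all numbers invented for illustration, patients' observed spending is their true medical need dampened by unequal access to care; flagging the top decile by cost then systematically misses high-need patients with poor access, the same kind of gap the 2019 audit surfaced.

```python
# Synthetic illustration of the "label choice" problem described above.
# All distributions and parameters here are invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000
need = rng.gamma(shape=2.0, scale=1.0, size=n)           # true medical need
access = rng.uniform(0.3, 1.0, size=n)                   # ability to get care
spend = need * access * rng.lognormal(0.0, 0.3, size=n)  # observed spending

df = pd.DataFrame({"need": need, "access": access, "spend": spend})

# Flag the top decile of patients under each candidate label.
by_cost = df["spend"] >= df["spend"].quantile(0.9)
by_need = df["need"] >= df["need"].quantile(0.9)

caught = (by_cost & by_need).sum() / by_need.sum()
print(f"Highest-need patients caught by the cost label: {caught:.0%}")

# The patients the cost label misses have systematically worse access.
missed = df[by_need & ~by_cost]
print(f"Mean access among missed high-need patients: {missed['access'].mean():.2f}")
print(f"Mean access overall: {df['access'].mean():.2f}")
```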
“Predicting future high-cost patients can differ from predicting patients with high medical need because of confounding factors like insurance status,” said Irene Chen, an MIT computer science researcher who co-authored a Health Affairs piece describing potential bias in health algorithms.
If a high-cost algorithm isn't accurate, or is exacerbating biases, it could be difficult to catch, especially when models are developed and deployed within individual health systems with no outside oversight or auditing by government or industry. A group of Democratic lawmakers has floated a bill that would require organizations using AI in decision-making to assess their systems for bias and would create a public repository of those systems at the Federal Trade Commission, though it's not yet clear whether the measure will advance.
That puts the onus, for the time being, on health systems and insurers to ensure that their models are fair, accurate, and beneficial to all patients. Shah suggested that developers of any cost prediction model, especially payers outside the clinical system, cross-check the data with providers to ensure that the patients being targeted also have the highest medical needs.
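One lightweight version of that cross-check, sketched below with hypothetical column names: bin patients by a provider-supplied clinical severity score, then compare the algorithm's flag rates across demographic groups within each bin. Roughly equal rates within a bin suggest the flags track medical need; systematic gaps are a signal worth investigating.

```python
# A minimal sketch of the kind of cross-check described above.
# The file and column names ("comorbidity_score", "group", "flagged")
# are hypothetical assumptions for illustration.
import pandas as pd

df = pd.read_csv("flagged_members_with_clinical_data.csv")  # hypothetical

# Bin patients into quintiles of a clinical severity score supplied
# by providers, then compare flag rates across groups within each bin.
df["severity_bin"] = pd.qcut(df["comorbidity_score"], q=5, labels=False)
audit = (
    df.groupby(["severity_bin", "group"])["flagged"]
    .mean()
    .unstack("group")
)
print(audit)
# Similar flag rates across groups within a severity bin suggest the
# model tracks need; systematic gaps mirror the disparity found in the
# 2019 audit and warrant a closer look.
```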
“If we’re able to know who is going to get into trouble, medical trouble, fully understanding that cost is a proxy for that…we can then engage human processes to attempt to prevent that,” he said.
Another key question about the use of algorithms to identify high-cost patients is what, exactly, health systems and payers should do with that information.
“Even if you might be able to predict that a human being next year is going to cost a lot more because this year they have colon cancer stage 3, you can’t wish away their cancer, so that cost is not preventable,” Shah said.
For now, the hard work of figuring out what to make of the predictions produced by algorithms has been left in the hands of the health systems making their own models. So, too, is the data collection to understand whether those interventions make a difference in patient outcomes or costs.
At UTHealth Harris County Psychiatric Center, a safety net center catering primarily to low-income individuals in Houston, researchers are using machine learning to better understand which patients have the highest need and bolster resources for those populations. In one study, researchers found that certain factors like dropping out of high school or being diagnosed with schizophrenia were linked to frequent — and expensive — visits. Another analysis suggested that lack of income was strongly linked to homelessness, which in turn has been linked to costly psychiatric hospitalizations.
Some of those findings might seem obvious, but quantifying the strength of those links helps hospital decision makers with limited staff and resources decide which social determinants of health to address first, according to study author Jane Hamilton, an assistant professor of psychiatry and behavioral sciences at the University of Texas Health Science Center at Houston's Medical School.
The homelessness study, for instance, led to more local intermediate interventions like residential “step-down” programs for psychiatric patients. “What you’d have to do is get all the social workers to really sell it to the social work department and the medical department to focus on one particular finding,” Hamilton said.
The predictive technology isn't directly embedded in the health record system, so it isn't yet part of clinical decision support. Instead, social workers, doctors, nurses, and executives are separately informed of the readmission risk factors the algorithm identifies, so they can refer certain patients for interventions like short-term acute visits, said Lokesh Shahani, the hospital's chief medical officer and associate professor at UTHealth's Department of Psychiatry and Behavioral Sciences. "We rely on the profile the algorithm identifies and then kind of pass that information to our clinicians," Shahani said.
"It's a little bit harder to put a complicated algorithm in the hospital EHR and change the workflow," Hamilton said, though Shahani said the psychiatric hospital plans to link the two systems in the next few months so that risk factors are flagged in individual patient records.
Part of changing hospital operations is identifying which visits can actually be avoided, and which are part of the normal course of care. "We're really looking for malleable factors," Hamilton said. "What could we be doing differently?"