This year, executives from nearly every major health insurance company made the same declaration in calls with Wall Street analysts: Using artificial intelligence to make coverage decisions would help save them money.
Even the Trump administration is testing AI’s usefulness in managing the prior authorization process for the Medicare program, even as it seeks to override state-level AI regulation.
But class action lawsuits have accused insurers of using AI to wrongfully withhold treatment. And new research from Stanford University outlines the risks of training AI on a current system rife with wrongful denials.
“There is a world in which using AI could make that worse, or at least replicate a bad human system, because the data that it would be training on is from that bad human system,” said Michelle Mello, a co-author of the study.
Still, Mello said, the research team found “real positives alongside the risks.”
In this video produced by KFF Health News’ Hannah Norman, Darius Tahir, a correspondent covering health technology, explains.
You can read Tahir’s recent coverage of AI’s use by health insurers below:
“Red and Blue States Alike Want To Limit AI in Insurance. Trump Wants To Limit the States,” by Darius Tahir and Lauren Sausser.
“AI Will Soon Have a Say in Approving or Denying Medicare Treatments,” by Lauren Sausser and Darius Tahir.
KFF Health News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF—an independent source of health policy research, polling, and journalism. Learn more about KFF.