Some applications of augmented intelligence (AI), often called artificial intelligence, may indeed be saving lives, reducing physician burnout and making health care more efficient, and they have the potential to do much more. But those who are proclaiming that AI is changing health care as we know it need to slow down.
“There are some incredible uses right now, but I think the whole concept of it transforming health care in its present state—we probably ought to pump the brakes just a little bit on that,” said Brett A. Oliver, MD, chief medical information officer at Baptist Health Medical Group, an AMA Health System Program member based in Louisville, Kentucky.
Dr. Oliver, a family physician, spoke during an AMA Insight Network virtual interview that covered his organization’s AI journey.
The AMA Insight Network helps AMA Health System Program members gain early access to innovative ideas, get feedback from their peers, network, and learn about pilot opportunities. Learn more.
Baptist Health has a successful AI telestroke program that identifies potential stroke patients whose conditions are “amenable to procedures.”
“With stroke, like a heart attack, time is tissue, and so, the faster we can get that information to the clinicians that can intervene, the better,” Dr. Oliver said.
Another AI use being tested involves converting a securely recorded office visit into a structured physician note. This has the potential to improve patient satisfaction, reduce physician burnout and improve documentation.
“We’re really excited about that,” he said. “Most importantly, it’s getting clinicians away from the computer and engaged again with the patient—that’s the main goal with that.”
Baptist also developed a COVID-19-related tool that identifies patients who may be more safely treated via telehealth than with an in-person visit. Conversely, another trial has been testing an application that identifies which COVID-19 patients can more quickly be discharged from the hospital and sent home with a continuous monitoring device.
While that trial lacks a control arm, Dr. Oliver said about 340 patients have been sent home using this application and none has needed to be readmitted.
But before anyone claims these applications are transforming health care, Dr. Oliver said, quality questions need to be resolved and end users need to be assured that the data used to create AI algorithms is sound and matches the patient populations it is meant to serve.
Dr. Oliver added that he recognizes some algorithms may be generated using proprietary methods. In those cases, he said, an official, disinterested third party should be allowed to look into the AI “black box” and assure users that its algorithms are sound and will help, not harm, patients and will not introduce bias.
Physicians “want to understand why,” Dr. Oliver explained. If they don’t understand why the algorithm generated the clinical decision that it did, “buy-in is a struggle.”
Read why smart use of health care AI starts with asking the right questions.
For organizations just starting their AI journey, Dr. Oliver recommended focusing on applications that produce actionable data that can be used toward solving a priority problem such as improving access or reducing physician burnout.
Dr. Oliver warned beginners that “there are a lot of bright and shiny things out there in the AI world that are really cool,” but will only waste their time.
For successful AI adoption, an organization needs to first establish data governance policies and then continually train staff.
“It’s kind of dry and boring—I get that,” Dr. Oliver said. “But if you have strong governance upfront, you have standardized data intake, evaluation and review processes.”
Staff education doesn’t have to be formal, he said. But it should be ongoing.
“Don’t assume that your colleagues know anything about any of this,” Dr. Oliver said.