It was quite interesting to come across a thread on Twitter this week, because of what it said about artificial intelligence (AI) and how its emergence is unfolding in U.S. healthcare right now.
The thread was initiated on April 12 by Aike Ho, a partner at the San Francisco-based ACME, an early-stage venture capital firm that, according to its website, invests in “pioneering Founders that: DARE TO IMAGINE using quantum-entangled ions in a computer solving the hardest problems in the universe; DARE TO COMPREHEND the provision of primary healthcare completely redesigned for women, by women; DARE TO FATHOM a smart waste management system that makes streets cleaner, environments greener, and optimizes public resources; DARE TO ENVISION a lingerie brand that genuinely celebrates bodies of all shapes and empowers the wearer.” (The capitalization is the company’s own.)
Ho wrote that “AI in healthcare doesn’t really exist right now. It should. It has the potential to meaningfully improve patient outcomes and provider shortages. A 🧵[thread] on why AI companies don’t make it from my vantage point as a digital health investor.

“1/ It’s not even AI to begin with! When they say AI, it’s actually just ML. No deep learning, no neural networks. Basically glorified Big Data. No shame in that. I’d rather see a solution that’s adopted vs. tech for tech’s sake.

“2/ Hurdle #1: The fragmented nature of US healthcare = pockets of data that company has to painfully get agreements to centralize to even train models. This means interfacing w/ bureaucracies like health systems & unis = months/years of startup burn.

“3/ Hurdle #2: Even once the data needed is acquired, the data itself may be of poor quality and require serious cleaning. Bad data >> bad models = game over.

“4/ Hurdle #3: Deciding which application can get venture funding. TAMs for most AI applications in healthcare (not including pharma in this thread) are small bc they provide ancillary services / tooling (e.g. diagnosis assist, workflow automation).

“5/ Let’s be real. The bulk of revenue in US healthcare is in ** services **

“6/ Hurdle #4: Getting adoption in the system is brutal unless you’re also delivering care vertically. The latter is also brutal; you’re essentially building your own provider group and care delivery infrastructure.

“7/ Hurdle #5: Documenting and demonstrating superior clinical outcomes that will win over risk averse providers. If you don’t own your own care delivery stack, you have to convince provider groups to use your application.

“8/ Hurdle #6: The providers are clinically convinced; but who’s paying? Very few reimbursed by insurance. So either patient pays or providers with already thin margins pay for it. Harder to convince if your value prop is cost savings vs. adding rev.

“9/ Hurdle #7: Onboarding, training providers and admins. Does it work seamlessly with their current workflows? Little frictions add up to minimum adoption in the actual field unless it’s something so important that there is a top-down mandate.

“10/ Hurdle #8: What’s the right business model? If $$/usage, great if you solved for adoption, terrible if no one uses it. This model easier to sell as it de-risks for provider groups. If flat fee contract, harder to make a successful sell.

“11/ TL;DR I like full-stack digital health companies that own care delivery and can use AI/ML to augment what they do vs. AI/ML companies that try to sell to provider groups. Have not seen a company in the latter category be a breakout yet.”
Ho’s thread led to a very long series of responses; I’ll quote just a few here.
Among the most interesting was the first, by Yasir Tarabichi, M.D., director of clinical research informatics and an assistant professor of medicine, and a practicing pulmonologist and critical care specialist at MetroHealth in Cleveland. He responded thus: “Clinician Informaticist here – It’s never a technical barrier. The bureaucratic challenges and barriers to integration with 3rd parties are real. Much of the developments today come from within – see work by @MarkSendak @kdpsinghlab and myself below.” And in his next tweet, Dr. Tarabichi wrote this: “Our work is out in @CritCareMed See how we cautiously validated, implemented + evaluated Epic’s #sepsis early warning system through a #randomized #controlled quality improvement intervention.”
And, in response, Mark Sendak, M.D., wrote, “Replying to @aikeho Great points in this thread, but there’s a key nuance / AI in healthcare as a standalone SaaS business doesn’t exist now / AI that is built and vertically integrated into the care delivery system does exist. It’s happening in the tech-enabled delivery systems, NOT tech companies.” Sendak is the population health and data science lead at Duke Health, and a practicing physician.
Sendak further referred readers to an academic article, “New Innovation Models in Medical AI,” published in the Washington University Law Review in April 2022, by W. Nicholson Price II of the University of Michigan Law School, Rachel Sachs of Washington University (St. Louis) School of Law, and Rebecca S. Eisenberg of the University of Michigan Law School. Those authors noted that, in situations involving medical or information technology, health system leaders historically have not been core researchers, but that, with the emergence of AI in healthcare, that situation is changing. “The dynamics are different in the context of AI technologies,” they write. “Health systems themselves have played a larger role in driving the development of a wide range of innovative AI products, but with different incentives than those of the product developing firms that are the focus of much of the scholarly literature. In the AI context, health systems are less concerned with the ability to obtain patents, the prospect of securing insurance reimbursement for their new products, or the need to traverse the FDA clearance or approval process. Instead, they seek to reduce their own costs, increase clinical volume and revenue, improve quality, and satisfy genuine scientific curiosity. Importantly, though, health systems may be unable to meet these goals with one-size-fits-all AI products,” since each health system’s performance-improvement needs are distinct. Indeed, they write, “Medical AI tools trained on their own data offer health systems opportunities to improve their own operations at a reasonable cost. Use of their own data both limits the costs of innovation and ensures that the results are targeted to their own needs and circumstances.”
A large number of comments followed those of the two doctors, including this one by someone whose Twitter handle is @MitenMistry: “Great thread. Humans can’t even agree on how to code / interpret medical data so how can we can expect computers to do this? Until semantic interoperability matures greatly AI will not have the fuel it need to truly take off.”
So what struck me about all of this was a simple fact: the clinicians participating in the thread, especially the physicians, clearly understood something that the non-clinicians (and, if I may say so respectfully, Aike Ho herself) did not seem to understand. What’s happening right now is that AI algorithms are largely being developed and implemented one at a time, by teams of physicians, other clinicians, data scientists, and informaticists, in a “retail” rather than a “wholesale” manner; no one is attempting broad “plug-and-play” strategies. Instead, it’s becoming clear that every algorithm needs buy-in from the physicians practicing in each individual practice environment, particularly when algorithms are being implemented for clinical decision support. Further, as experts have noted, generalized algorithms are turning out to be sub-optimal when applied within the individual electronic health records (EHRs) that physicians use daily.
This is precisely what Suchi Saria, Ph.D., the John C. Malone Associate Professor and director of the Machine Learning and Healthcare Lab at Johns Hopkins University in Baltimore, told participants at the Machine Learning & AI for Healthcare Forum, in the closing session of that specialty symposium on Monday, March 14, the first day of HIMSS22, held at the Orange County Convention Center in Orlando, Florida. As I wrote that week, “Indeed, Saria told her audience, 89 percent of providers have adopted some sort of sepsis tool; but when her team at Bayesian looked closely at the success levels of sepsis-alert algorithms, they found that the actual rates of improvement in intervention were far more modest than they appeared at first glance. In fact, she said, ‘I’ve seen incorrect evaluation. People measured sepsis for mortality, then deployed the tool, then used billing code data, and evaluated. But it looks as though you’ve improved mortality, but there’s a dilution effect.’”
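The “dilution effect” Saria describes can be illustrated with a small simulation. The sketch below is purely hypothetical: all of the probabilities (severity mix, mortality risks, coding rates) are invented for illustration and are not real clinical figures. The point it demonstrates is that if an alert system causes more mild cases to be billed as sepsis, the measured billing-code mortality rate falls even when no patient’s true outcome changes at all.

```python
import random

random.seed(0)

def measured_mortality(n_patients, code_mild_prob):
    """Simulate the sepsis mortality rate as measured from billing codes.

    All probabilities are illustrative assumptions, not clinical data:
    30% of true sepsis cases are severe (40% mortality), the rest are
    mild (5% mortality). Severe cases are almost always billed as
    sepsis; how often mild cases get billed is the parameter we vary.
    """
    deaths = coded = 0
    for _ in range(n_patients):
        severe = random.random() < 0.30
        died = random.random() < (0.40 if severe else 0.05)
        # Deploying an alert tool raises the chance a mild case is
        # coded as sepsis, enlarging the measured denominator.
        if random.random() < (0.95 if severe else code_mild_prob):
            coded += 1
            deaths += died
    return deaths / coded

# True patient outcomes are identical in both runs; only the
# billing of mild cases changes after deployment.
before = measured_mortality(100_000, code_mild_prob=0.30)
after = measured_mortality(100_000, code_mild_prob=0.80)
print(f"measured mortality before: {before:.3f}, after: {after:.3f}")
```

Running this shows the measured mortality rate dropping substantially after “deployment,” despite the simulation changing nothing about who actually dies: the denominator has simply been diluted with milder coded cases, which is exactly the evaluation trap Saria warns about.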
Ultimately, all of this speaks to the fact that the adoption of AI in patient care delivery is evolving very differently from what some might have expected. As I’ve said before, it’s becoming clear that people won’t be going to Target to pull algorithms off the shelf to apply to clinical decision support. And this is a perfect example of how venture capitalists and other investors face a rather different situation when it comes to AI in healthcare: the particularism of healthcare is defeating strategies that depend on over-generalization.
Make no mistake about it: things are very exciting right now with regard to the longer-term potential of AI and machine learning to help transform care delivery, not to mention clinical operations, in healthcare. It’s just going to be a much longer road than most people had imagined, and investors should understand that. One would hope that they will be willing to be patient, because this AI journey, particularly as applied to clinical decision support, is going to be a marathon, not a sprint.