Why Arianna Huffington and Sam Altman are making an A.I. health app startup.

Even big banks and venture capital funds now fear that their zeal for artificial intelligence is fueling a big ol’ bubble—but those memos don’t appear to have reached Arianna Huffington. The onetime media maven and current-day wellness-tech CEO reached back to her blogging roots this week to co-author a Time magazine op-ed with OpenAI’s Sam Altman, announcing a joint investment into a “customized, hyper-personalized AI health coach” that will be branded under and marketed by Huffington’s Thrive Global startup.

“It will be trained on the best peer-reviewed science as well as Thrive’s behavior change methodology,” the two write. “And it will also be trained on the personal biometric, lab, and other medical data you’ve chosen to share with it.” (Interesting wording, there.)

The app-based bot will train itself to understand your daily health habits, from your vaccination status to your sleep patterns to your soft-drink consumption, in order to “scale and democratize the life-saving benefits of improving daily habits and address growing health inequities.” It sounds innocuous and idealistic enough, in line with both Huffington’s focus on “Microsteps” for a consistent health regimen—i.e., making small changes in your everyday habits to swap out harmful health practices in favor of beneficial ones—and Altman’s ambitions to incorporate his bots into anything/everything.

(As an aside: The op-ed features a disclosure at bottom about OpenAI’s licensing agreement with Time magazine; notably, the news site formerly known as the Huffington Post has welcomed integrations with GPT-powered apps. Oh yeah, and current Time owner Marc Benioff has invested in Thrive Global. Anyway.)

Employing technological innovation to fix our shoddy, underfunded, inefficient, and hyper-capitalized health care systems is certainly an admirable goal. Further app-ification may not seem like an obvious solution, though: The over-computerization of health care has led to unhelpful changes in workflow processes and medical-tech management for doctors, as Atul Gawande has written—not least by increasing hospitals’ vulnerability to devastating cyberattacks.

Yet field professionals have also cited positive changes in incorporating machine learning algorithms to speed up time-sucking duties like administrative file management, speech transcription, data-scanning for diagnoses, and drug discovery. Virginia Rep. Jennifer Wexton, who lost use of her voice last year to a sudden onset of progressive supranuclear palsy, has likewise demonstrated an A.I.-generated voice replica she can now use for everyday communication. Advanced robotic prosthetics are already on the market for amputees. There is great potential for A.I. when it comes to certain branches of health care, provided we don’t depend too much on still-faulty algorithms.

However, I’m not sure Arianna Huffington and Sam Altman are the ones who should be leading humanity to this next frontier.

To start, it’s worth treating any Huffington wellness initiative with—dare I say—a healthy dose of skepticism. Back in the early HuffPost days, then–editor-in-chief Arianna Huffington famously gave several celebrities (including the now-likely-election-spoiler candidate Robert F. Kennedy Jr.) plenty of space to write deranged screeds promoting the always dubious “links” between vaccines and autism. But the Huffington Post’s health pseudoscience was hardly limited to spreading this anti-vax conspiracy, which continues to wreak real havoc on public health via COVID misinformation and the resurgence of once-contained diseases like measles and even polio. Up until her departure in 2016, Huffington used her namesake publication to publish false claims, like a piece on how cigarettes don’t cause cancer, while directly interfering with reporting that (rightly) questioned the efficacy of the famed “12-step program” for addiction recovery.

(While HuffPost has generally done excellent work, it did publish a baffling op-ed a few weeks ago suggesting that A.I.-assisted campaigning could help President Joe Biden to reach worried voters following that horrific debate. Yeah, I’m not sure that’s gonna fix Biden’s credibility crisis.)

For Thrive Global, Huffington has hosted an online community whose members frequently publish writings that traffic in myths about “alternative” COVID cures and prioritize Huffington’s focus on “mindfulness” as a method for preemptively keeping one’s health in line (a theory that’s facile at best and thoroughly misleading at worst). She also remains happy to endorse celeb friends like Gwyneth Paltrow, whose Goop wellness brand peddles horrifically unsound products.

Multiple analyses have also cast doubt on the efficacy of the “nudge theory”—targeted incentives to influence behavioral changes—that Huffington and Altman say is a benefit of their proposed health coach in the Time article. Another recent study from Australian academics took stock of insurance company Discovery Limited and found its methodologies for pricing health premiums via “hyper-personalization” similar to Thrive A.I.’s behavior tracking to be sketchy at best.

In view of this track record, any A.I. health bot blessed with Huffington’s imprimatur and integrated “within Thrive Global’s enterprise products” warrants plenty of scrutiny. In fact, when she was touting the concept of an A.I.-powered “wellness health copilot” on Bloomberg TV back in February, she cited the controversial behavioral scientist and Thrive adviser BJ Fogg—the guy who’s all but credited with influencing social media platforms to embrace their most addictive functions—as the person who knows how to get Thrive’s “copilot” to, uh, counter the negative mental health effects of doomscrolling?

Sam Altman, of course, has his own weirdo quasi-medical obsessions, from his eyeball-scanning cryptocurrency Worldcoin to his life-extension moon shots. Still, credit where it’s due: OpenAI established another relationship this week with Los Alamos National Laboratory—yes, the OG atomic bomb developer—which bluntly expressed its desire to temper ChatGPT’s potential for misuse in “providing information that could lead to the creation of biological threats.” And some OpenAI health partners, like Rhode Island’s Lifespan hospital system, genuinely seem quite pleased with their applications of the tech, while other doctors are independently deploying ChatGPT to quickly generate and fine-tune patient claims that algorithm-dependent insurance companies won’t end up rejecting. I say this completely earnestly: That’s pretty neat!

That doesn’t mean that some of the medical-A.I. arguments Altman and Huffington put forth in their op-ed aren’t somewhat imprudent. For one, there’s the claim that this Thrive coach would have “a superhuman long-term memory,” even though most A.I. models, no matter how advanced, still struggle with memory bandwidth and consistency across their data, training, and learning. For another, there’s the correct observation that “chronic diseases … are distributed unequally across demographics,” coupled with the utopian vision of A.I. scaled at the level of the New Deal, making it easier to make healthy changes by doing things like suggesting “a healthy, inexpensive recipe that can be quickly made with few ingredients.”

You know what else is distributed unequally across demographics? Basic access to everything from working health care infrastructure to functional grocery stores to smartphones, computers, and internet bandwidth. (As a recent paper from nine leading British cancer experts notes, more tech bottlenecks mean A.I. may impose “additional barriers for those with poor digital or health literacy.”) Plus, the fossil-fuel plants that have poisoned and sickened countless minority neighborhoods but just have to be kept in operation because of—wait for it—the energy-intensive needs of the data centers used to train all this A.I.

More to the point, can an “A.I. coach” really overcome inequities when racial bias is already embedded in so many A.I. algorithms and the health data used in training, across inputs and outputs? Emphasizing a “behavioral” approach to addressing systemic health inequities already sounds like a fancy version of “personal responsibility” rhetoric before you get into the ways this magic-bullet tech only retrenches modern-day inequality.

By the way, who does Sam Altman think he is, to promise he will be the one to deliver on all these lofty goals? Should we entrust him with a “miracle cure” for mental health when the underpaid Kenyan workers ridding ChatGPT of toxic content speak of their traumatic working experiences and lack of any relief from their putative boss? Does the CEO who reportedly deploys oppressive employee guidelines and psychological abuse tactics in his workplace really care about empowering everyone to embrace healthy daily habits? Is Altman really the one to be providing “assurances that these technologies are reliable and that [users’] personal health data will be handled responsibly,” in light of how OpenAI has failed to live up to that task? The company that refuses to source its own models’ pilfered training data responsibly, and whose chatbots have leaked users’ private info and can’t even properly connect to the websites whose archives they’re supposedly linked with?

Look, I hope you all at Thrive and OpenAI use this article to train your health coach, because then at least there’ll be a chance it will tell the truth about what it, and its founders, will actually be able to do for American patients.
