
Congress's Lone Surgeon Wants States to Regulate AI

A lack of federal safeguards has led some states to begin proposing their own laws limiting AI's use in healthcare.


A new battle is brewing between states and the federal government. This time the fight isn’t over taxes or immigration but rather the limits of regulating advanced artificial intelligence systems. Political disagreements around AI’s role in healthcare, in particular, could be the tip of the spear in that emerging skirmish.

Those were some of the concerns voiced by North Carolina Republican Representative Greg Murphy, speaking this week at the Connected Health Initiative’s AI and the Future of Digital Healthcare event. Murphy, the only actively practicing surgeon in Congress and co-chair of the GOP Doctors Caucus, believes, like many, that the technology could transform healthcare, but he warned against broadly applying the same rules and standards nationwide.


“The federal government does not know the difference between Montana and New Jersey, but the folks in Montana do,” Murphy said at the event, according to Politico. “It should be up to the folks who understand it to control that.”


Doctors and technologists alike say predictive AI tools could radically improve healthcare by deeply scanning X-rays, CT scans, and MRIs for early signs of disease that human doctors could not previously detect. Generative AI chatbots trained specifically on a corpus of medical journals, on the other hand, can potentially offer doctors quick medical suggestions, perform administrative tasks, or, in some cases already, help doctors communicate with patients more compassionately. The American Medical Association estimates one in five doctors in the US already use some form of AI in their practice.


But even as its use proliferates, the rules governing what AI can and can’t be used for remain murky from state to state, or are just flat-out nonexistent. That’s an issue, especially if future doctors choose to rely more on ChatGPT-style chatbots, which regularly conjure fabricated facts out of thin air. Those AI “hallucinations” have already led to libel lawsuits in the legal field. Murphy worries doctors could one day face another conundrum in the age of advanced AI: What happens when a human doctor wants to overrule an AI’s medical suggestion?

“The challenge is: Do we lose our humanity in this?” Murphy asked at the event. “Do we let the machines control us or do we control them?”


Doctors probably aren’t at risk of being overruled by an AI chatbot anytime soon. Still, states are drafting legislation to rein in more mundane, but more common, ways misused AI could harm patients. California’s proposed AB 1502, for example, would ban health insurers and healthcare service plans from using AI to discriminate against patients based on their race, gender, or other protected categories. Proposed legislation in Illinois would regulate the use of AI algorithms in diagnosing patients, and Georgia has already enacted a law regulating the use of AI in eye exams.

Those laws risk coming into conflict with far more widely covered federal AI regulations. In the past month, Senate Majority Leader Chuck Schumer has convened around half a dozen hearings specifically on AI legislation, with some of the biggest names and personalities in tech passing through his chambers to weigh in on the topic. Top AI firms like OpenAI, Microsoft, and Google have already agreed to voluntary safety commitments proposed by the White House. Federal health agencies like the FDA, meanwhile, have issued their own recommendations on the issue.


It’s unlikely these quickly evolving federal rules governing AI will mesh perfectly with the priorities of Americans in every state. If the battle over AI regulation looks anything like the earlier disagreements over digital privacy, rules governing the technology’s use could vary widely from state to state. A lack of strong federal regulations explicitly barring doctors from making operational decisions based on an AI chatbot, for example, could encourage state lawmakers to push for their own stricter requirements.

At least for now, US adults have made it clear they largely aren’t interested in AI dictating their next doctor’s office visit. More than half (60%) of adults recently surveyed by Pew Research Center said they would feel uncomfortable if their healthcare provider used AI to diagnose a disease or recommend treatment. Only a third of respondents thought using AI in those scenarios would lead to better outcomes for patients. At the same time, new polling shows Americans overwhelmingly want more government intervention when it comes to AI: more than eight in ten respondents (82%) in a recent survey conducted by the AI Policy Institute said they did not trust tech companies to regulate themselves on AI.