The Future Is Here

ChatGPT Creator Tells Congress He Wants an AI Regulatory Agency

Sam Altman tried to convince skeptical lawmakers to pursue light-handed AI reforms that give OpenAI room to stay ahead of competitors.

Photo: Win McNamee (Getty Images)

OpenAI CEO Sam Altman made his Congressional debut before a Senate Judiciary subcommittee today. He rolled out the charm offensive to try to convince lawmakers to pursue light-handed legislation that gives artificial intelligence products, his company's ChatGPT first among them, a wide runway to rapidly advance. Altman's rosy testimony expounding on the societal virtues of generative AI ran counter to that of other expert witnesses, who expressed more skepticism and raised concerns about discrimination and other unintended AI harms.

Altman said of the public's understanding of ChatGPT and text-generating AI, "For a while, people were fooled by Photoshop. Then they quickly developed an understanding around altered images. This will be like that, but on steroids."

Lawmakers questioning Altman on Tuesday said the stakes were dire. In his opening statement, Missouri Sen. Josh Hawley said AI could go one of two routes: a new printing press or an atomic bomb.

“We could be looking at one of the most important technological innovations in human history,” Hawley said. “It’s really like the invention of the internet at scale, at least.”

The hearing came at a crucial inflection point for large language models like ChatGPT, with lawmakers and regulators publicly struggling to stay a step ahead of the rapidly evolving tech. Google, Altman's OpenAI, Microsoft, Meta, and others are simultaneously in a dead sprint to determine who will emerge as the biggest winner in the new AI arms race.

Sam Altman supports regulation of ChatGPT and AI at large... so long as it's his preferred type of legislation

Major tech executives have learned over the years that it's a fool's errand to dig in their heels and vocally oppose any sign of regulation. Instead, the more common playbook, which Altman followed, is to advocate in favor of their preferred type of legislation. On Tuesday, Altman told lawmakers that "regulation of AI is essential" but that such regulation should balance safety against ensuring broad public access to the tech. ChatGPT, which debuted in November, has already amassed 100 million users, according to OpenAI.

During the testimony, Altman recommended lawmakers pursue new safety requirements under which AI products could be tested before they are released. Altman suggested new testing and licensing requirements for AI developers could help set a level playing field for competition. The CEO appeared open to a recommendation by Connecticut Sen. Richard Blumenthal and others to consider a "nutrition label for AI" and other transparency proposals, but caveated that by saying he still believes the benefits of AI "outweigh the risks." Safety requirements, according to Altman, will need to be flexible enough to adapt to potentially unforeseen new advances in the tech.

“We think that regulatory intervention of governments will be crucial,” Altman said.

Other expert witnesses, like former New York University professor Gary Marcus, took a more measured stance toward the technology and warned of a recent rise in potentially dangerous "AI hype." Marcus urged lawmakers Tuesday to approach AI safety with a profound sense of urgency and warned them against repeating the mistakes they made in failing to regulate social media years ago.

“We’re facing a perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation, and inherent unreliability,” Marcus said during the hearing.

Marcus and Altman were joined by IBM Chief Privacy and Trust Officer Christina Montgomery, who urged lawmakers not to regulate AI as a technology but to instead consider regulating particularly harmful use cases of the tech.

AI will come for jobs, but how many?

All three of the witnesses speaking on Tuesday agreed AI could disrupt and transform the workplace, but the extent and timeframe of those changes were a matter of debate. Altman said he believed his products and other AI services would have a "significant impact on jobs" but noted it's unclear exactly how that will play out. For now, Altman said, GPT-4, OpenAI's latest large language model, and other AI systems excel at completing tasks but are not as proficient at completing full jobs. "I believe there will be far greater jobs on the other side of this," he said.

Marcus went a step further and said artificial general intelligence could threaten most jobs. That day, however, could be as much as 50 years away. The AI skeptic said OpenAI's models were a far cry from achieving artificial general intelligence.

Altman voices support for a new agency to monitor AI

Altman and Marcus, who at times clashed during the hearing, came out united on the idea of a new government agency staffed by AI experts. That hypothetical agency would be tasked with monitoring the tech's development and setting standards around its use. When questioned by South Carolina Sen. Lindsey Graham, Altman said he would support a government agency capable of both granting AI companies operating licenses and taking those licenses away if a company violates those standards.

Marcus took that idea a step further and advocated for a cabinet-level organization able to address AI harms on a global level. AI's global reach and international interest, Marcus said, will require some form of international agency to set common standards. Exactly how that organization would navigate the intense geopolitical tension between countries like the US and China, he noted, is "probably above my pay grade." Montgomery of IBM broke from that support and said government oversight of AI systems should be left to existing regulatory bodies like the FTC and FCC.

“We don’t want to slow down regulation to address real risks right now,” Montgomery said. “We have existing regulatory authorities in place who have been clear that they have the ability to regulate in their respective domains.”

Altman to Congress: ‘Can’t people sue us?’

Though the witnesses and lawmakers alike were more than willing to speculate about the possible enforcement powers of some hypothetical new agency, there was far less clarity over what can be done to hold AI companies accountable in the here and now. Lawmakers like Hawley and Minnesota Sen. Amy Klobuchar questioned whether Section 230 of the Communications Decency Act, social media's main liability shield for user content on its platforms, would apply to AI-generated content.

“I don’t know yet, exactly what the right answer here is,” Altman said. “I don’t think Section 230 is even the right framework.”

Altman appeared unsure of what, if any, framework consumers actually have to hold his company or other AI firms accountable for harm.

“Can’t people sue us?” Altman asked Hawley unconvincingly.

OpenAI won’t rule out relying on ads in the future

During his testimony, Altman tried to separate himself and OpenAI from social media firms like Facebook and Instagram that have come under regulatory scrutiny for addictive, advertisement-driven platforms. Altman told lawmakers OpenAI does not currently operate under an ad-based business model and actively tries to build models that do not maximize for engagement.

“We’re so short on GPUs that the less people use it, the better,” Altman said to a few chuckles. “We’re not an advertising model, we’re not trying to get people to use it [GPT-4] more and more.”

Those remarks were intended to bolster Altman’s narrative of OpenAI as an idealistic research company prioritizing human flourishing over massive profits. Altman had to caveat that distinction later, however, when asked by New Jersey Sen. Cory Booker if the company would commit to never pursuing an ad-based business model. “I wouldn’t say never,” Altman said. “There may be people we want to offer services to and no other service works.”

Lawmakers will want to appear prepared for a new era of tech

As with any tech hearing, some lawmakers are likely to appear underprepared and out of touch with the technology in question. At the same time, there's good reason to think the wide experimentation with ChatGPT-style chatbots and the recent surge in public concern over AI harms have motivated at least some of the senators in attendance to research the issues and come out swinging. Lawmakers from both sides of the political spectrum are trying to avoid a repeat of earlier hearings on cryptocurrency and social media, where they failed to press executives on the impact of their technologies and appeared incapable of drafting meaningful legal safeguards.

Around half a dozen new bills or legislative actions on AI have emerged in recent weeks, led by Colorado Sen. Michael Bennet and California Rep. Ted Lieu. On the regulatory side, the Federal Trade Commission has released several statements clarifying its intent to use existing laws to punish AI companies up to no good. Chairwoman Lina Khan further signaled her aggressive approach toward AI earlier this month with an editorial in the New York Times succinctly titled "We Must Regulate A.I."

On the executive side, the White House has so far tried to straddle a middle ground, simultaneously investing in new AI research and speaking amicably with major tech executives about AI while still expressing concerns over areas of potential abuse.

Want to know more about AI, chatbots, and the future of machine learning? Check out our full coverage of artificial intelligence, or browse our guides to The Best Free AI Art Generators and Everything We Know About OpenAI’s ChatGPT.