The Future Is Here

‘The Risk of Extinction’: AI Leaders Agree on One-Sentence Warning About Technology’s Future

Hundreds of AI executives and researchers joined in a detail-free statement about how their daily work might kill us all.

A road sign reading "Killer robots in area."
Photo: Tilted Hat Productions / Shutterstock.com (Shutterstock)

Over 350 AI executives, researchers, and industry leaders signed a one-sentence warning released Tuesday, saying that we should try to stop their technology from destroying the world.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the statement, released by the Center for AI Safety. The signatories include Sam Altman, CEO of OpenAI; Demis Hassabis, CEO of Google DeepMind; Dario Amodei, CEO of Anthropic; and Geoffrey Hinton, the so-called “Godfather of AI” who recently quit Google over fears about his life’s work.


As the public conversation about AI shifted from awestruck to dystopian over the last year, a growing number of advocates, lawmakers, and even AI executives united around a single message: AI could destroy the world and we should do something about it. What that something should be, specifically, is entirely unsettled, and there’s little consensus about the nature or likelihood of these existential risks.


There’s no question that AI is poised to flood the world with misinformation, and a large number of jobs will likely be automated into oblivion. The question is just how far these problems will go, and when or if they will dismantle the order of our society.


Usually, tech executives tell you not to worry about the threats posed by their work, but the AI business is taking the opposite tack. OpenAI’s Sam Altman testified before the Senate Judiciary Committee this month, calling on Congress to establish an AI regulatory agency. The company published a blog post arguing that companies should need a license to work on AI “superintelligence.” Altman and the heads of Anthropic and Google DeepMind recently met with President Biden at the White House for a chat about AI regulation.

Things break down when it comes to specifics, though, which explains the brevity of Tuesday’s statement. Dan Hendrycks, executive director of the Center for AI Safety, told the New York Times they kept it short because experts don’t agree on the details of the risks, or on what, exactly, should be done to address them. “We didn’t want to push for a very large menu of 30 potential interventions,” Hendrycks said. “When that happens, it dilutes the message.”


It may seem strange that AI companies would call on the government to regulate them, which would ostensibly get in their way. It’s possible that, unlike the leaders of other tech businesses, AI executives really do care about society. There are plenty of reasons to think this is all a bit more cynical than it seems, however. In many respects, light-touch rules would be good for business. This isn’t new: some of the biggest advocates for a national privacy law, for example, include Google, Meta, and Microsoft.

For one, regulation gives businesses an excuse when critics start making a fuss. That’s something we see in the oil and gas industry, where companies essentially throw up their hands and say “Well, we’re complying with the law. What more do you want?” Suddenly the problem is incompetent regulators, not the poor corporations.


Regulation also makes it far more expensive to operate, which can be a benefit to established companies when it hampers smaller upstarts that could otherwise be competitive. That’s especially relevant in the AI business, where it’s still anybody’s game and smaller developers could pose a threat to the big boys. With the right kind of regulation, companies like OpenAI and Google could essentially pull up the ladder behind them. On top of all that, weak nationwide laws get in the way of pesky state lawmakers, who often push harder on the tech business.

And let’s not forget that the regulation AI businessmen are calling for addresses hypothetical problems that might happen later, not real problems that are happening now. Tools like ChatGPT make up lies, they have baked-in racism, and they’re already helping companies eliminate jobs. In OpenAI’s calls to regulate superintelligence, a technology that does not exist, the company makes a single, hand-waving reference to the actual issues we’re already facing: “We must mitigate the risks of today’s AI technology too.”


So far, though, OpenAI doesn’t actually seem to like it when people try to mitigate those risks. The European Union took steps to do something about these problems, proposing special rules for AI systems in “high-risk” areas like elections and healthcare, and Altman threatened to pull his company out of EU operations altogether. He later walked the statement back, saying OpenAI has no plans to leave Europe, at least not yet.
