
Should Section 230 Protect AI Companies From Being Sued Out of Existence?

A bill in the Senate would bar the likes of OpenAI from claiming legal immunity for the content their products generate.

Photo: Trisha Leeper (Getty Images)

Welcome to AI This Week, Gizmodo’s weekly deep dive on what’s been happening in artificial intelligence.

This week, there’ve been rumblings that a bipartisan bill that would ban AI platforms from protection under Section 230 is getting fast-tracked. The landmark internet law shields websites from legal liability for the content they host, but its implications for the fate of AI are unclear. The legislation, authored by Sens. Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.), would strip “immunity from AI companies” when it comes to civil claims or criminal prosecutions, a press release from Hawley’s office claims. It’s yet another reminder that AI is a veritable hornet’s nest of thorny legal and regulatory issues that have yet to be worked out.


Broadly speaking, Section 230 was designed to protect internet platforms from getting sued over the content created by third parties. While individual users of those platforms may be liable for the things they post online, the platforms themselves are afforded legal immunity in most cases. The law was developed in the 1990s largely as a way to protect the nascent internet, as regulators seem to have realized that the web wouldn’t survive if all of its search engines and message boards were sued out of existence.


Of course, times have changed since the law was passed in 1996, and there have been ongoing calls to reform Section 230 over the past several years. When it comes to AI, there seem to be all kinds of arguments for why platforms like ChatGPT should (or shouldn’t) be covered by the landmark legislation.


We’ve already seen prominent law professor Jonathan Turley complain that ChatGPT falsely claimed that he’d sexually harassed someone. The threat of defamation suits or other legal liabilities hangs over every company developing AI products right now, and it’s probably time to set some new precedents.

Matt Perault, a professor at the University of North Carolina at Chapel Hill, wrote an essay in February arguing that AI companies would not be covered by Section 230—at least, not all the time. According to Perault’s view of things, AI platforms have set themselves apart from platforms like Google or Facebook, where content is passively hosted. Instead, companies like OpenAI openly market their products as content generators, which would seem to preclude them from protection under the law.


“The distinction currently between platforms that can get 230 protection and those that can’t is basically: Are you a host or are you a content creator?” said Perault, in a phone call. “The way the law defines that term is if you create or develop content ‘in whole or in part.’ That means that even if you develop content ‘in part,’ then you can’t get 230 protections. So my view is that a generative AI tool, wherein the name of the tool is literally ‘generative’—the whole idea is that it generates content—then probably, in some circumstances at least, it’s not going to get 230 protections.”

Samir Jain, the vice president of policy at the Center for Democracy and Technology, said that he also felt there would be circumstances when an AI platform could be held liable for the things that it generates. “I think it will likely depend on the facts of each particular situation,” Jain added. “In the case of something like a ‘hallucination,’ in which the generative AI algorithm seems to have created something out of whole cloth, it will probably be difficult to argue that it didn’t play at least some role in developing that.”


At the same time, there could be other circumstances where it could be argued that an AI tool isn’t necessarily acting as a content creator. “If, on the other hand, what the generative AI produces looks much more like the results of a search query in response to a user’s input or where the user has really been the one that’s shaping what the response was from the generative AI system, then it seems possible that Section 230 could apply in that context,” said Jain. “A lot will depend on the particular facts [of each case] and I’m not sure if there will be a simple, single ‘yes’ or ‘no’ answer to that question.”

Others have argued against the idea that AI platforms won’t be protected by Section 230. In an essay on TechDirt, lawyer and technologist Jess Miers argues that there is legal precedent to consider AI platforms as outside the category of being an “information content provider” or a content creator. She cites several legal cases that seem to provide a roadmap for regulatory protection for AI, arguing that products like ChatGPT could be considered “functionally akin to ‘ordinary search engines’ and predictive technology like autocomplete.”


Sources I spoke with seemed skeptical that new regulations would be the ultimate arbiter of Section 230 protections for AI platforms—at least not at first. In other words: it seems unlikely that Hawley and Blumenthal’s legislation will succeed in settling the matter. In all likelihood, said Perault, these issues are going to be litigated by the court system before any sort of comprehensive legislative action takes place. “We need Congress to step in and outline what the rules of the road should look like in this area,” he said, while adding that, problematically, “Congress isn’t currently capable of legislating.”

Question of the day: What is the most memorable robot in movie history?

Photo: Rozy Ghaly (Shutterstock)

This is an old and admittedly sorta trite question, but it’s still worth asking every once in a while. By “robot,” I mean any character in a science fiction film that is a non-human machine. It could be a software program or it could be a full-on cyborg. There are, of course, the usual contenders—HAL from 2001: A Space Odyssey, the Terminator, and Roy Batty from Blade Runner—but there are also a lot of other, largely forgotten candidates. The Alien franchise, for instance, sort of flies under the radar when it comes to this debate, but almost every film in the series features a memorable android played by a really good actor. There’s also Alex Garland’s Ex Machina, the A24 favorite that features Alicia Vikander as a seductive fembot. I also have a soft spot for M3GAN, the 2022 film that is basically Child’s Play with robots. Sound off in the comments if you have thoughts on this most important of topics.

More headlines this week

  • Google seems to have cheated during its Gemini demo this week. In case you missed it, Google has launched a new multimodal AI model—Gemini—which it claims is its most powerful AI model yet. The program has been heralded as a potential ChatGPT competitor, with onlookers noting its impressive capabilities. However, it’s come to light that Google cheated during its initial demo of the platform. A video released by the company on Wednesday appeared to showcase Gemini’s skills but it turns out that the video was edited and that the chatbot didn’t operate quite as seamlessly as the video seemed to show. This obviously isn’t the first time a tech company has cheated during a product demo but it’s certainly a bit of a stumble for Google, considering the hype around this new model.
  • The EU’s proposed AI regulations are undergoing critical negotiations right now. The European Union is currently trying to hammer out the details of its landmark “AI Act,” which would tackle the potential harms of artificial intelligence. Unlike the U.S., where—aside from a light-touch executive order from the Biden administration—the government has predictably decided to just let tech companies do whatever they want, the EU is actually trying to do AI governance. However, those attempts are faltering. This week, marathon negotiations about the contents of the bill yielded no consensus on some of the key elements of the legislation.
  • The world’s first “humanoid robot factory” is about to open. WTF does that mean? A new factory in Salem, Oregon, is about to open, the sole purpose of which is to manufacture “humanoid robots.” What does that mean, exactly? It means that, pretty soon, Amazon warehouse workers might be out of a job. Indeed, Axios reports that the robots in question have been designed to “help Amazon and other giant companies with dangerous hauling, lifting and moving.” The company behind the bots, Agility Robotics, will open its facility at some point next year and plans to produce some 10,000 robots annually.