
AI This Week: Chuck's Big Meeting with Zuck and Elon

As Congress mulls AI regulations, Silicon Valley is lobbying hard against them. Plus: why AI watermarking may not be the catchall solution we hope it is.

Photo: Kevin Dietsch (Getty Images)

Headlines This Week

  • In what is sure to be welcome news for lazy office workers everywhere, you can now pay $30 a month to have Google’s Duet AI write emails for you.
  • Google has also debuted SynthID, a watermarking tool from its DeepMind subsidiary for marking AI-generated images. We interviewed a computer science professor on why that may (or may not) be good news.
  • Last but not least: Now’s your chance to tell the government what you think about copyright issues surrounding artificial intelligence tools. The U.S. Copyright Office has officially opened a public comment period; you can submit a comment through the portal on its website.
Photo: VegaTews (Shutterstock)

The Top Story: Schumer’s AI Summit

Chuck Schumer has announced that his office will meet with top players in the artificial intelligence field later this month, in an effort to gather input that may inform upcoming regulations. As Senate Majority Leader, Schumer holds considerable power to shape future legislation, should it emerge. However, the people sitting in on this meeting don’t exactly represent the common man. Invited to the upcoming summit are tech megabillionaire Elon Musk; his one-time hypothetical sparring partner, Meta CEO Mark Zuckerberg; OpenAI CEO Sam Altman; Google CEO Sundar Pichai; NVIDIA President Jensen Huang; and Alex Karp, CEO of defense contractor Palantir, among other big names from Silicon Valley’s upper echelons.


Schumer’s upcoming meeting, which his office has dubbed an “AI Insight Forum,” suggests that some sort of regulatory action is in the works, though judging from the guest list (a bunch of corporate vultures), that action won’t necessarily be adequate.


The list of people attending the meeting has garnered considerable criticism online from those who see it as a veritable who’s who of corporate players. Schumer’s office has pointed out that the Senator will also be meeting with some civil rights and labor leaders—including Liz Shuler, president of the AFL-CIO, America’s largest federation of unions.


Still, it’s hard not to see this closed-door get-together as an opportunity for the tech industry to beg one of America’s most powerful politicians for regulatory leniency. Only time will tell whether Chuck has the guts to listen to his better angels or whether he’ll cave to the cash-drenched imps who plan to perch themselves on his shoulder.

Question of the Day: What’s the Deal with SynthID?

As generative AI tools like ChatGPT and DALL-E have exploded in popularity, critics have worried that the industry—which lets users generate fake text and images—will spawn a massive amount of online disinformation. One solution that has been pitched is watermarking, a system whereby AI content is automatically and invisibly stamped with an internal identifier upon creation, allowing it to be recognized as synthetic later. This week, Google’s DeepMind launched a beta version of a watermarking tool that it says will help with this task. SynthID is designed to work for DeepMind clients and will allow them to mark the assets they create as synthetic. Unfortunately, Google has also made the tool optional, meaning users won’t have to stamp their content with it if they don’t want to.

Photo: University of Waterloo

The Interview: Florian Kerschbaum on the Promise and Pitfalls of AI Watermarking

This week, we had the pleasure of speaking with Dr. Florian Kerschbaum, a professor at the David R. Cheriton School of Computer Science at the University of Waterloo. Kerschbaum has extensively studied watermarking systems in generative AI. We wanted to ask Florian about Google’s recent launch of SynthID and whether he thought it was a step in the right direction or not. This interview has been edited for brevity and clarity.


Can you explain a little bit about how AI watermarking works and what the purpose of it is?

Watermarking basically works by embedding a secret message inside of a particular medium that you can later extract if you know the right key. That message should be preserved even if the asset is modified in some way. In the case of an image, for example, if I rescale it, brighten it, or add other filters to it, the message should still be preserved.
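
To make that idea concrete, here is a minimal Python sketch of a keyed “spread-spectrum” watermark. It is not SynthID’s actual algorithm (which Google has not published); the pattern, strength, and detection threshold are all illustrative assumptions. The point is simply that whoever holds the key can later test for the hidden pattern, even after mild edits like brightening.

```python
# A toy "spread-spectrum" watermark: embed a faint keyed pattern in an image,
# then detect it by correlation. This is NOT SynthID (whose algorithm is not
# public); the strength and threshold values are illustrative assumptions.
import numpy as np

def _key_pattern(key: int, shape: tuple) -> np.ndarray:
    """Derive a zero-mean, pseudo-random +/-1 pattern from the secret key."""
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=shape)

def embed(image: np.ndarray, key: int, strength: float = 4.0) -> np.ndarray:
    """Mix a faint keyed pattern into the image (pixel values 0-255)."""
    return np.clip(image + strength * _key_pattern(key, image.shape), 0, 255)

def detect(image: np.ndarray, key: int, threshold: float = 2.0) -> bool:
    """Correlate the image with the keyed pattern; a high score means 'watermarked'."""
    pattern = _key_pattern(key, image.shape)
    score = float(np.mean((image - image.mean()) * pattern))
    return score > threshold

# Usage: watermark a stand-in "generated" image, then check the mark survives brightening.
img = np.random.default_rng(0).uniform(0, 255, size=(256, 256))
marked = embed(img, key=1234)
print(detect(marked, key=1234))         # True: the key recovers the message
print(detect(marked + 10.0, key=1234))  # True: still detected after brightening
print(detect(img, key=1234))            # False: the unmarked original is clean
```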


It seems like this is a system that could have some security deficiencies. Are there situations where a bad actor could trick a watermarking system?  

Image watermarks have existed for a very long time, around 20 to 25 years. Basically, all the current systems can be circumvented if you know the algorithm. Even access to the AI detection system alone might be sufficient to break the watermark, because a person could simply make a series of queries, continually making small changes to the image until the system ultimately no longer recognizes the asset. This could provide a model for fooling AI detection overall.
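
Continuing the toy sketch above, here is what that kind of black-box attack might look like: the attacker never learns the key or the algorithm, but simply queries the detector while applying progressively stronger edits (here, blending in a blurred copy) until the watermark is no longer recognized. The blur-based perturbation and step sizes are illustrative assumptions.

```python
# Continuing the toy example above: a black-box evasion attack. The attacker
# has no key and no knowledge of the algorithm -- only query access to detect().
# They blend in progressively more blur until the detector stops flagging the image.
import numpy as np

def box_blur(img: np.ndarray) -> np.ndarray:
    """A simple 3x3 box blur built from np.roll (edges wrap around; fine for a toy)."""
    shifted = [np.roll(np.roll(img, dy, axis=0), dx, axis=1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return np.mean(shifted, axis=0)

def evade(image: np.ndarray, detector, steps: int = 20):
    """Query the detector with increasingly distorted copies; return the first
    version that is no longer recognized as watermarked."""
    blurred = box_blur(image)
    for i in range(1, steps + 1):
        alpha = i / steps                        # how much blur to blend in
        candidate = (1 - alpha) * image + alpha * blurred
        if not detector(candidate):
            return candidate, alpha
    return blurred, 1.0

# 'marked' and detect() come from the sketch above.
stripped, alpha = evade(marked, lambda im: detect(im, key=1234))
print(alpha, detect(stripped, key=1234))  # the blend level where detection failed, and False
```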


The average person who is exposed to mis- or disinformation isn’t necessarily going to be checking every piece of content that comes across their newsfeed to see if it’s watermarked or not. Doesn’t this seem like a system with some serious limitations?

We have to distinguish between the problem of identifying AI-generated content and the problem of containing the spread of fake news. They are related in the sense that AI makes it much easier to proliferate fake news, but you can also create fake news manually—and that kind of content will never be detected by such a [watermarking] system. So we have to see fake news as a different but related problem. Also, it’s not absolutely necessary for each and every platform user to check [whether content is real or not]. Hypothetically, a platform like Twitter could automatically check for you. The thing is that Twitter has no real incentive to do that, because Twitter effectively runs off fake news. So while I feel that, in the end, we will be able to detect AI-generated content, I do not believe that this will solve the fake news problem.


Aside from watermarking, what are some other potential solutions that could help identify synthetic content?

We have three types, basically. We have watermarking, where we effectively modify the output distribution of a model slightly so that we can recognize it. The other is a system whereby you store all of the AI content that gets generated by a platform and can then query whether a piece of online content appears in that list of materials or not... And the third solution entails trying to detect artifacts [i.e., telltale signs] of generated material. As an example, more and more academic papers are being written by ChatGPT. If you go to a search engine for academic papers and enter “As a large language model...” [a phrase a chatbot would automatically spit out in the course of generating an essay] you will find a whole bunch of results. These artifacts are definitely present, and if we train algorithms to recognize those artifacts, that’s another way of identifying this kind of content.
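
As a rough illustration of the second and third approaches, here is a short Python sketch: a registry that stores a hash of everything a hypothetical platform generates, and a crude artifact check that scans for telltale chatbot phrases. Both are toy assumptions for illustration, not anything a real provider has published.

```python
# Toy sketches of the second and third approaches: a registry of generated
# content, and a scan for telltale chatbot phrases. The hashing scheme and the
# phrase list are illustrative assumptions, not any real provider's system.
import hashlib

class GenerationRegistry:
    """Approach 2: the platform stores a fingerprint of everything it generates,
    and anyone can later query whether a given piece of content is on that list."""

    def __init__(self):
        self._hashes = set()

    def record(self, content: str) -> None:
        self._hashes.add(hashlib.sha256(content.encode()).hexdigest())

    def was_generated_here(self, content: str) -> bool:
        return hashlib.sha256(content.encode()).hexdigest() in self._hashes

# Approach 3: look for artifacts that models leave behind, like the boilerplate
# phrases that end up pasted into "authored" papers.
TELLTALE_PHRASES = [
    "as a large language model",
    "as an ai language model",
    "regenerate response",
]

def has_llm_artifacts(text: str) -> bool:
    lowered = text.lower()
    return any(phrase in lowered for phrase in TELLTALE_PHRASES)

registry = GenerationRegistry()
registry.record("An example paragraph produced by the model.")
print(registry.was_generated_here("An example paragraph produced by the model."))   # True
print(has_llm_artifacts("As a large language model, I cannot verify this claim."))  # True
```

Note that the exact-hash lookup fails the moment a single character changes, so a real registry would need fuzzy or perceptual matching, and checking it means sending your content to the provider, which is the privacy trade-off raised below.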


So with that last solution, you’re basically using AI to detect AI, right?

Yep.

And the solution before that—the one involving a giant database of AI-generated material—seems like it would have some privacy issues, right?


That’s right. The privacy challenge with that particular model is less about the fact that the company is storing every piece of content created—all these companies are already doing that. The bigger issue is that, for a user to check whether an image is AI-generated or not, they will have to submit that image to the company’s repository to cross-check it. And the companies will probably keep a copy of that one as well. So that worries me.

So which of these solutions is the best, from your perspective?

When it comes to security, I’m a big believer in not putting all of your eggs in one basket. So I believe that we will have to use all of these strategies and design a broader system around them. I believe that if we do that—and we do it carefully—then we do have a chance of succeeding.

