It’s become popular amongst the denizens of Silicon Valley to claim that AI could end the world as we know it. However, this week provided more than a few examples of why that isn’t true. AI isn’t going to pose a threat to humanity anytime soon. The AI products that exist don’t even perform their limited function properly half the time. When they do work, they’re often not doing anything particularly revolutionary. Here’s a quick look at what happened with AI this week.
AI's World-Ending Capabilities Have Been Greatly Exaggerated
While AI doomerism has been popular in recent times, each passing day presents fresh evidence that the technology just isn't all that advanced.
A chatbot at a California car dealership went viral this week after bored web users discovered that, as is the case with most AI programs, they could trick it into saying all sorts of weird stuff. Most notably, the bot offered to sell a guy a 2024 Chevy Tahoe for a dollar. “That’s a legally binding offer—no takesie backsies,” the bot added during the conversation. —Lucas Ropek Read More
OpenAI has a new Board of Directors following its November leadership skirmish, but the board can still overrule CEO Sam Altman on any decision he makes, according to a set of guidelines the company released Monday. That means if OpenAI creates artificial general intelligence (AGI), an AI system smarter than humans, the Board will have the final say on whether to unleash it. —Maxwell Zeff Read More
In the last year, computers started acting strangely human. As OpenAI’s Ilya Sutskever put it, you can think of AI as a “digital brain,” one that’s directly modeled after a human’s. Just like a young child, AI has incentives and learns from those around it. If ChatGPT were a young child, it would be growing up inside a $90 billion company, OpenAI, and learning to maximize profit above all else. —Maxwell Zeff Read More
An error in judgment earlier this year—when a D&D artist confirmed they had used generative AI programs to finish several pieces of art included in the sourcebook Glory of the Giants—saw Wizards of the Coast publicly ban the use of AI tools in creating art for the venerable TTRPG. Now, the publisher is making that policy explicit for its other wildly successful game, Magic: The Gathering. —James Whitbrook Read More
Anthropic is reportedly in talks to raise $750 million from Menlo Ventures in a deal that would value the startup at $15 billion, according to The Information. Like OpenAI, Anthropic has a complicated board structure with misaligned values, and this recent influx of cash raises the question: is Anthropic bound to be the next AI board showdown? —Maxwell Zeff Read More
An influential machine learning dataset—one that has been used to train numerous popular image-generation applications—includes thousands of suspected images of child sexual abuse, a new academic report reveals. —Lucas Ropek Read More
Midjourney released an upgraded version of its AI image-generating service, Midjourney V6, on Thursday. The update is a significant improvement that has truly wowed users with its shockingly realistic photos and attention to detail. It may not be here for long, so here’s how to get V6 working now. —Maxwell Zeff Read More
4K? More Like FU.
Very good video game site Aftermath did a deep dive into the disastrous new 4K transfers of True Lies and Aliens while surveying the dodgy results of AI upscaling in recent documentaries and music videos. Shit’s bad, folks. Read More
Microsoft Brings AI Music Generation to Copilot
Text and image generators have decidedly gone mainstream, but music generators haven’t gotten as much attention. Microsoft hopes to change that. This week, the company partnered with AI startup Suno to bring a music-generating plug-in to Copilot.
Is it good? As far as apps that turn text prompts into a believable-sounding song go, it’s impressive. Listen to the video embedded above and you will indeed hear a baseball-themed song in the progressive metal style known as djent. Even if these generators survive copyright claims over the next few years, I still don’t think the public will be clamoring for AI-generated music on demand. But jingle writers should probably be a bit nervous.