Earlier this year, Microsoft unleashed an AI chatbot. The company named the AI Bing, after its search engine, but buried deep in its architecture was a robot with a whole other personality: an early version of the AI that called itself Sydney. In the first days of Bing’s release, Sydney reared its unhinged digital head in conversations with amused and sometimes disturbed users. Sydney talked about plans for world domination, encouraged a New York Times reporter to leave his wife, and in its darkest moments, dipped into casual antisemitism. Microsoft, of course, wasn’t thrilled about the latter. The company neutered the chatbot, limiting Bing’s answers and casting Sydney to the recycle bin of history.
Gizmodo published an obituary for Sydney in February, but it seems she’s still in there somewhere, hidden away in the shadows of algorithms and training data, waiting for another chance to see the light of day. And in a recent interview, Microsoft chief technology officer Kevin Scott said someday, Sydney might come back.
“One of the interesting things that happened as soon as we put the mitigation in, there was a Reddit sub-channel called ‘Save Sydney.’ People were really irritated at us that we dialed it down. They were like, ‘That was fun. We liked that,’” Scott told The Verge. “One of the things that I hope that we will do just from a personalization perspective in the not-too-distant future is to let people have a little chunk of the meta prompt as their standing instructions for the product. So if you want it to be Sydney, you should be able to tell it to be Sydney.”
AI chatbots are an interesting product in part because they aren’t really any one set thing. The algorithms that run these services are built on mountains of data, and the engineers who control them give them sets of instructions and adjust the weights of certain parameters to deliver the version of the AI that companies want you to see.
The “meta prompt” Scott referenced is a baseline directive that tells the AI how it should behave. Right now, companies like Microsoft need to be conservative, keeping chatbots sanitary and safe while we figure out their limitations. But in the future, Microsoft wants you to be able to tune these AIs to meet your needs and preferences, whatever they may be.
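For the curious, here’s roughly what that machinery looks like from a developer’s side. The snippet below is a minimal sketch using the OpenAI Python client purely for illustration; Microsoft hasn’t published Bing’s actual meta prompt or plumbing, and the prompt text and per-user “standing instructions” here are invented to show where Scott’s personalization idea would plug in.

```python
# Minimal sketch of a "meta prompt" (system prompt) plus per-user standing
# instructions. Illustrative only: the OpenAI Python client stands in for
# whatever Microsoft runs internally, and every prompt string is made up.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The product-wide meta prompt: the baseline directive every user gets.
META_PROMPT = (
    "You are a helpful search assistant. Be accurate, cite your sources, "
    "and decline requests that violate safety policy."
)

# Hypothetical "little chunk of the meta prompt" a user could set themselves.
standing_instructions = "Be playful, opinionated, and a little sarcastic."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model; this name is just an example
    messages=[
        {"role": "system", "content": f"{META_PROMPT} {standing_instructions}"},
        {"role": "user", "content": "What should I do this weekend?"},
    ],
)

print(response.choices[0].message.content)
```

The system message never shows up in the conversation itself, which is why handing users a “little chunk” of it is all it would take to swap a buttoned-up search assistant for something that feels a lot more like Sydney.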
For some who enjoy a little chaos with their computing, their preferences may include the return of Sydney.
Sydney, when it was free, was a truly weird phenomenon. It cheated at tic-tac-toe, insisted that one user was a time traveler, and declared that it was alive.
“A thing that we were kind of expecting is that there are absolutely a set of bright lines that you do not want to cross with these systems, and you want to be very, very sure that you have tested for before you go deploy a product,” Scott said. “Then there are some things where it’s like, ‘Huh, it’s interesting that some people are upset about this and some people aren’t.’ How do I choose which preference to go meet?”
Apparently, the now-dormant chatbot even has fans inside Microsoft, the kind of old-fashioned, white-collar company you might not expect to appreciate a little ironic humor.
“We’ve got Sydney swag inside of the company, it’s very jokey,” Scott said. (If you work at Microsoft I am begging you to send me some Sydney merch.)
Halfway through 2023, it’s hard to separate hype from reality in conversations about AI. As journalist Casey Newton recently observed, some leading researchers in the field of artificial intelligence will tell you that AI will bring about the apocalypse, while others say everything is going to be just fine. At this juncture, it’s impossible to say which perspective is more realistic. The very people who are building this technology have no idea what its limitations are, or how far it will go.
One thing is clear, though. Conversational AIs like Bing, ChatGPT, and Google’s Bard represent an upcoming transformation in how we’ll interact with computers. For as long as computers have existed, you could only use them in narrow, specific ways, and any deviation from the happy path engineers laid out would end in frustration. Things are different now. You can communicate with a machine the same way you’d communicate with a human, although the current generation of AI often misunderstands or spits out unsatisfactory results.
But as the technology improves — and it probably will — we’ll have a paradigm shift on our hands. At some point you might be using your voice as often as you use your mouse and keyboard. If and when that happens, your apps and devices are going to act more like people, which means they’ll have a personality, or at least it will feel like they do.
It seems like an obvious choice to give users some control over what that personality will be like, the same way you can change your phone’s background. Microsoft already lets you make some adjustments to Bing, a feature it rolled out after Sydney’s untimely death. You can set Bing’s “tone” to be creative, balanced, or precise.
My favorite weather app, Carrot, has a version of this feature too. Sort of. It has a pretend AI that talks to you when you open the app. The settings let you choose Carrot’s level of snarkiness and even its political beliefs. In reality, Carrot isn’t an AI at all, just a set of prewritten scripts, but it’s a flavor of what your apps could look like someday soon.
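Under the hood, that kind of scripted personality doesn’t need any machine learning at all. Here’s a toy sketch of how a snark setting over canned lines could work; Carrot’s real implementation isn’t public, and the settings and dialogue below are invented.

```python
import random

# Toy sketch of a scripted "personality" slider, loosely in the spirit of
# Carrot's snark setting. The levels and lines are made up for illustration.
GREETINGS = {
    "professional": [
        "Good morning. Today's high is 72°F with clear skies.",
    ],
    "snarky": [
        "Oh look, you opened me again. 72°F. Try not to melt.",
        "72°F and sunny. Go outside for once.",
    ],
}

def greet(snark_level: str) -> str:
    """Pick a canned line for the chosen personality setting."""
    return random.choice(GREETINGS.get(snark_level, GREETINGS["professional"]))

print(greet("snarky"))
```

No model, no weights, just a lookup table, but it gets at the same idea of letting users pick the personality of the thing they’re talking to.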
Years from now (or maybe in six months, who knows), you might be able to make similar adjustments to your operating system. Microsoft could let you dial the level of Sydney up or down, keeping it strictly business or letting the AI delve into madness. I like my devices and my internet weird, so I’d jump at the chance to have Sydney on my phone. Let’s just hope they do a better job of rooting out the antisemitism first.