
How to Check If Something Online Was Written by AI

It's tricky, but not impossible, to figure out if something wasn't written by a human.

Can you trust what you read online?
Image: Fatos Bytyqi/Unsplash

Generative artificial intelligence is everywhere you look these days, including on the web: advanced predictive text bots such as ChatGPT can now churn out endless reams of text on almost any topic imaginable, and the writing is natural enough that it could plausibly have come from a human being.

So, how can you make sure the articles and features you’re reading online were thought up and typed out by an actual human being? While there’s no foolproof way of doing this, there are a variety of clues you can look out for to spot what’s AI-generated and what isn’t.


Check the Author

Most human writers will have an online presence—most AI writers won’t.
Screenshot: Gizmodo

For now, at least, there aren’t any high-profile, well-respected online outlets pumping out AI content without labeling it as such—but there are plenty of lower-tier sites making full use of AI-generated text and not being particularly honest about it. If you’re coming across a lot of text without author attribution, that’s one warning sign to look out for.

In contrast, if an article has the name of a real person attached—even better, a real person with a bio and social media links—then you’re more likely to be reading something that has been put together by a human. You probably won’t have time to background-check everything you read online, but it’s worth doing when you really need to know a story’s source.


The alleged AI articles recently spotted on the Sports Illustrated site came with author profiles and bios alongside them—profiles and bios that, it turns out, were also made by generative AI. A reverse image search (through something like TinEye) can help reveal when an author headshot doesn’t belong to a real person, which might be useful in determining an article’s source.
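If you find yourself doing that check often, it’s easy to script. The snippet below is a minimal sketch that opens a TinEye reverse image search for a given headshot URL in your browser; the search-by-URL pattern is an assumption based on how TinEye links currently work, and the image URL is a hypothetical placeholder, so paste the image into tineye.com manually if the pattern ever changes.

```python
# Quick sketch: open a TinEye reverse image search for an author headshot.
# The search-by-URL pattern below is an assumption based on how TinEye
# links work today; if it changes, just paste the image at tineye.com.
import webbrowser
from urllib.parse import quote

# Hypothetical headshot URL copied from a suspect author bio page.
image_url = "https://example.com/author-headshot.jpg"

webbrowser.open("https://tineye.com/search?url=" + quote(image_url, safe=""))
```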

More clues can be gleaned from the website itself: its history, the type of content it publishes, whether or not it has an About Us page, and so on. For example, searching for the best phone reviews on the web brings up well-known tech sites staffed by human beings.


Check a Detection Engine

Copyleaks correctly identified this article as being written by a human.
Screenshot: Copyleaks

There’s plenty of debate about whether or not AI text detection works. OpenAI says it doesn’t (the company retired its own AI text classifier, citing its low accuracy), and most reporting on the matter says these AI detectors aren’t to be trusted. However, plenty of them are still in business at the time of writing, and within limits, they might be useful in checking for the use of AI online.

We ran a brief series of tests on a few AI detectors online, including Copyleaks, GPTZero, and Scribbr, and what we found tallies with what other people have found: These detectors can tell the difference between AI writing and human writing, but not all the time, and not to a level that conclusively proves anything one way or another.
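Most of these checks happen by pasting text into a web form, but several detectors also offer paid APIs for checking text in bulk. As a rough illustration of what a scripted check might look like, here’s a minimal sketch; the endpoint URL, request fields, and response format are all hypothetical stand-ins rather than the real API of Copyleaks, GPTZero, Scribbr, or any other service.

```python
# Minimal sketch of scripting a check against an AI-text detector.
# NOTE: the endpoint URL, request fields, and response shape below are
# hypothetical placeholders, not any real detector's API. Consult your
# chosen detector's actual API documentation before using this pattern.
import requests

API_URL = "https://api.example-detector.com/v1/detect"  # hypothetical endpoint
API_KEY = "your-api-key-here"  # hypothetical credential

def ai_score(text: str) -> float:
    """Return a 0-1 score where higher means 'more likely AI' (hypothetical)."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["ai_probability"]  # hypothetical field name

if __name__ == "__main__":
    sample = "Generative artificial intelligence is everywhere you look these days."
    print(f"AI probability: {ai_score(sample):.2f}")
    # Treat the score as one signal among several, never as proof.
```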


These detectors seem to have a better success rate at spotting human writing than AI writing. They’re essentially measuring how predictable the text is: working out what an AI would say next based on its training, and flagging writing that matches those predictions too closely. The more text they have to work with, the better, but there are limits on how much you can check for free.

The studies we have to date suggest that some detectors are better than others, and that some are even right most of the time—but none of them are consistently accurate enough to rely on. These detectors are perhaps best thought of as one tool to use alongside other avenues of inquiry, not something to depend on entirely.


Check the Signs

ChatGPT knows its own limitations.
Screenshot: ChatGPT

As we said at the start, there’s really no guaranteed way of identifying which online text has been produced by AI and which hasn’t. However, there are still certain signs to look out for: Because of the way generative AI is trained, its output tends to be generic, vague, and obvious at times.

Certain touches of originality, humor, and humanity are often missing (as are personal anecdotes). AI tends to generate text with a low level of perplexity—put another way, a high level of predictability. At their heart, these engines are just predicting which word should come next, and that can show up as a general mushiness and blandness that is sometimes noticeable.
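Perplexity has a concrete definition: it’s the exponential of the average negative log-likelihood a language model assigns to each word in a text. If you’re curious, you can measure it yourself with an open model. The sketch below uses GPT-2 via the Hugging Face transformers library (our choice for illustration; real detectors like GPTZero use their own models and thresholds) to score two sentences.

```python
# A minimal sketch of perplexity scoring with GPT-2, the idea behind
# many AI-text detectors. Assumes: pip install torch transformers.
# Real detectors use their own models and thresholds; this is an
# illustration, not a reliable detector.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity = exp(average negative log-likelihood per token)."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the cross-entropy loss.
        outputs = model(inputs.input_ids, labels=inputs.input_ids)
    return torch.exp(outputs.loss).item()

human_ish = "My cat knocked coffee onto my keyboard mid-sentence, so forgive the typos."
ai_ish = "Artificial intelligence is a rapidly evolving field with many exciting applications."

print(f"Quirky sentence perplexity:      {perplexity(human_ish):.1f}")
print(f"Predictable sentence perplexity: {perplexity(ai_ish):.1f}")
# Lower perplexity = more predictable to the model; unusual, personal
# writing tends to score higher than generic, formulaic prose.
```

The absolute numbers depend heavily on the model and the length of the text, so treat any single score as a rough heuristic at best.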


You can also look out for glaring errors (such as hallucinations), but of course, human beings make mistakes in their writing, too. AI text might get a fact significantly wrong, or contradict itself in different ways across a piece, but even that doesn’t prove an article was composed by AI.

Taking all of these signals and clues together, you may just be able to make an educated guess about whether something came from a human mind or not, even if the only way to be truly sure is to watch it being written. AI text is certainly harder to spot than AI imagery, but that’s a whole other topic.