A.I.-generated text is supercharging fake news. This is how we fight back

Last month, OpenAI, an A.I. startup backed by sometime A.I. alarmist Elon Musk, announced a new artificial intelligence it claimed was too dangerous to release to the public. While “only” a text generator, OpenAI’s GPT-2 was reportedly capable of producing text so freakishly humanlike that it could convince people it had been written by a real, flesh-and-blood human being.

To use GPT-2, a user need only feed it the start of a document; the algorithm then takes over and completes it in a highly convincing manner. For instance, give it the opening paragraphs of a newspaper story and it will manufacture “quotes” and assorted other details.

Such tools are becoming increasingly common in the world of A.I. — and the world of fake news, too. The combination of machine intelligence and, perhaps, the distinctly human unintelligence that allows disinformation to spread could prove a dangerous mix.

Fortunately, a new A.I. developed by researchers at the MIT-IBM Watson A.I. Lab and Harvard University is here to help. And just like a Terminator designed to hunt other Terminators, this one — called GLTR — is uniquely qualified to spot bot impostors.

Fighting the good fight

As its creators explain in a blog post, text generation tools like GPT-2 open up “paths for malicious actors to … generate fake reviews, comments or news articles to influence the public opinion. To prevent this from happening, we need to develop forensic techniques to detect automatically generated text.”

GLTR takes the same models that are used as the basis for fake text generation by GPT-2. By looking at a piece of text and predicting which words the model would most likely have picked at each point, it can give a verdict on whether the text was written by a machine. The tool is available for users to try online. (If anyone has ever told you that your own writing is too machine-like, this might be your chance to prove them wrong!)
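The core idea can be sketched with a toy model. The snippet below is a minimal illustration of the technique, not GLTR’s actual code (GLTR scores text with GPT-2 itself): it builds a tiny bigram language model and reports, for each word, how highly the model ranked that word among possible continuations of the previous one. Text whose words are almost all top-ranked guesses is the statistical signature of machine generation; human writing dips into unlikely words more often.

```python
from collections import Counter, defaultdict

# Toy stand-in for a language model: bigram counts from a tiny corpus.
# model[prev][word] = how often `word` followed `prev`.
CORPUS = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat chased the dog around the old mat"
).split()

model = defaultdict(Counter)
for prev, word in zip(CORPUS, CORPUS[1:]):
    model[prev][word] += 1

def word_ranks(text):
    """For each word, its rank among the model's predicted
    continuations of the previous word (1 = most likely),
    or None if the model never saw that continuation."""
    words = text.split()
    ranks = []
    for prev, word in zip(words, words[1:]):
        ordered = [w for w, _ in model[prev].most_common()]
        ranks.append(ordered.index(word) + 1 if word in ordered else None)
    return ranks

def top_k_fraction(text, k=1):
    """Fraction of scored words that were among the model's top-k guesses."""
    ranks = [r for r in word_ranks(text) if r is not None]
    return sum(r <= k for r in ranks) / len(ranks) if ranks else 0.0
```

A fraction near 1.0 means nearly every word was the model’s top guess — the green-highlighted pattern GLTR surfaces as evidence of machine authorship.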

As OpenAI describes it: “GPT-2 generates synthetic text samples in response to the model being primed with an arbitrary input. The model is chameleon-like — it adapts to the style and content of the conditioning text. This allows the user to generate realistic and coherent continuations about a topic of their choosing.”

Until now, it’s been relatively easy for humans to pick out writing generated by machines — usually because it is overly formulaic or, in creative writing, makes little to no sense. That’s fast changing, though, and the creators of GLTR think that tools such as this will therefore become more necessary.

“We believe that machines and humans excel at detecting fundamentally different aspects of generated text,” Sebastian Gehrmann, a Ph.D. candidate in Computer Science at Harvard, told Digital Trends. “Machine learning algorithms are great at picking up statistical patterns such as the ones we see in GLTR. However, at the moment machines do not actually understand the content of a text. That means that algorithms could be fooled by completely nonsensical text, as long as the patterns match the detection. Humans, on the other hand, can easily tell when a text does not make any sense, but cannot detect the same patterns we show in GLTR.”

Hendrik Strobelt, a data scientist at IBM Research, told us that figuring out whether a piece of text comes from a human origin will become more of a pressing issue. “[Our current] visual tool might not be the solution to that, but it might help to create algorithms that work like spam detection algorithms,” he said. “Imagine getting emails or reading news, and a browser plugin tells you for the current text how likely it was produced by model X or model Y.”

A cat-and-mouse game

Similar games of one-upmanship — in which A.I. tools are used to spot fakes created by other A.I.s — are taking place across the tech industry. This is particularly true when it comes to fake news. For example, “deepfakes” have caused plenty of alarm with their promise of being able to realistically superimpose one person’s head onto another’s body.

To help counter deepfakes, researchers from Germany’s Technical University of Munich have developed an algorithm called XceptionNet that’s designed to quickly spot faked videos posted online. Speaking with Digital Trends last year, one of the brains behind XceptionNet suggested a similar approach involving a possible browser plugin that runs the algorithm continuously in the background.

It seems likely that others are working on solutions for spotting the A.I. behind other forms of machine-masquerading-as-humans, such as Google’s Duplex voice calling tech or the spate of artificial intelligences capable of accurately mimicking celebrity voices and making them say anything the user wants.

This kind of cat-and-mouse game will be of no great shock to anyone who has followed the world of hacking. Hackers spot vulnerabilities in systems and exploit them; then somebody notices and patches the hole, leaving the hackers to move on to the next vulnerability. In this case, however, the escalation involves cutting-edge artificial intelligence.

“In the future, we will see increasingly common [use and] abuse of algorithmically generated text,” Gehrmann continued. “In only a few years, algorithms could potentially be used to influence the public opinion on products, movies, personalities, or politics on a larger and larger scale. Therefore, tools to detect fake content will become more and more relevant for real-world use. As researchers, we see it as our goal to develop detection methods at a faster rate than the generation methods to combat and extinguish this abuse.”

Now we just have to hope that the good guys can work harder and faster than the bad ones. Unfortunately, if history has taught us anything, it’s that there’s no guarantee this will be the case. Keep your fingers crossed that it is!

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…