
Social (Net)Work: Fake news spreads faster than truth, but bots aren’t to blame


Criticism over hate speech, extremism, fake news, and other content that violates community standards has the largest social media networks strengthening policies, adding staff, and reworking algorithms. In the Social (Net)Work series, we explore what social platforms are doing, what works, what doesn’t, and possibilities for improvement.

Sequestered in his dormitory as the manhunt for the second suspect in the Boston Marathon bombing locked down the entire city, MIT student Soroush Vosoughi turned to the fastest source of news he knew: social media. While social networks spread eyewitness accounts and real-time updates, the platforms also perpetuated rumors of a third bomb — and a third suspect. What Vosoughi didn’t know at the time was that those rumors were 70 percent more likely to get a retweet than the actual truth.


Fast forward five years, and Vosoughi, now a postdoctoral associate, is the co-author of a study out of MIT’s Media Lab that found false news not only spreads faster, farther, and deeper than the real thing, but that the reason for the wider spread isn’t bots. So what’s the cause?

Probably human nature, the researchers suggest. Working with Deb Roy and Sinan Aral, Vosoughi says the team conducted what is the most comprehensive study of Twitter yet, in both the time frame and the number of Tweets included. The study, published in the March 9 issue of Science, covers a decade of Tweets, from Twitter’s launch in 2006 to 2017.

False news spreads farther and faster than the real thing

The study’s authors spent a year and a half using Twitter archives to look at around 126,000 stories, Tweeted 4.5 million times by more than three million users.

Pictured (left to right): seated, Soroush Vosoughi, a postdoc at the Media Lab’s Laboratory for Social Machines; Sinan Aral, the David Austin Professor of Management at MIT Sloan; and Deb Roy, an associate professor of media arts and sciences at the MIT Media Lab, who also served as Twitter’s Chief Media Scientist from 2013 to 2017. Credit: MIT

While earlier studies researched the diffusion of rumors, the MIT study compared the spread of verified stories with that of rumors. (The group chose to leave the term “fake news” out of the academic study because of the political connotations the term has picked up.)

The group conducted the study using 126,000 stories that had been checked by six independent organizations, such as Snopes and FactCheck.org. The fact checkers agreed with one another on between 95 and 98 percent of verdicts, and stories on which they disagreed were eliminated from the data. With each remaining story labeled as true, false, or mixed, the group then analyzed how each category spread.
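For the technically inclined, that screening step can be pictured as a simple majority-vote filter over the checkers’ verdicts. The sketch below is an illustration only, not the study’s actual pipeline; the story records, field names, and the 95 percent cutoff are assumptions for demonstration.

```python
from collections import Counter

# Hypothetical story records: each story carries the verdicts assigned by
# the six independent fact-checking organizations.
stories = [
    {"id": "story-1", "verdicts": ["false"] * 6},
    {"id": "story-2", "verdicts": ["true"] * 5 + ["mixed"]},
    {"id": "story-3", "verdicts": ["true", "true", "false", "false", "mixed", "true"]},
]

AGREEMENT_THRESHOLD = 0.95  # illustrative cutoff; the study cites 95-98% agreement

def label_story(verdicts, threshold=AGREEMENT_THRESHOLD):
    """Return the majority label if the checkers agree strongly enough,
    otherwise None so the story is dropped from the data set."""
    label, count = Counter(verdicts).most_common(1)[0]
    return label if count / len(verdicts) >= threshold else None

labeled = {s["id"]: label_story(s["verdicts"]) for s in stories}
kept = {sid: lab for sid, lab in labeled.items() if lab is not None}
print(kept)  # only story-1 survives; the others fall below the agreement cutoff
```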


Out of the stories verified as true, few Tweets reached more than 1,000 people. Yet the most widespread false Tweets reached between 1,000 and 100,000 users. Those false Tweets also took on a viral form, branching out with new Tweets rather than spreading from a single broadcast. True stories also took six times longer than false ones to spread to 1,500 people. The group also looked at the depth of that spread, or the chains of unique user retweets, and found that true stories took ten times longer to reach nearly half the depth.
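That notion of depth can be pictured as the longest chain of unique retweeters branching out from the original Tweet. Here is a minimal sketch, on made-up retweet data, of how a cascade’s size and depth might be computed:

```python
from collections import defaultdict, deque

# Hypothetical cascade: (retweeter, source) pairs, with "origin" as the
# account that posted the original Tweet.
edges = [("a", "origin"), ("b", "origin"), ("c", "a"), ("d", "c"), ("e", "b")]

children = defaultdict(list)
for retweeter, source in edges:
    children[source].append(retweeter)

def cascade_stats(root="origin"):
    """Breadth-first walk of the retweet tree, returning (size, depth)."""
    size, depth = 0, 0
    queue = deque([(root, 0)])
    while queue:
        node, level = queue.popleft()
        size += 1
        depth = max(depth, level)
        queue.extend((child, level + 1) for child in children[node])
    return size, depth

print(cascade_stats())  # (6, 3): six accounts touched, longest chain is 3 hops
```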

False political news was more viral than any other topic examined by the study: false political Tweets exceeded a reach of 20,000 people three times faster than a true story of any category could reach half that number. Urban legends and science joined politics as the categories with the fastest and farthest spread, and false news about politics and urban legends was the most viral.

Controlling for bots and influencers

After gathering the data, the group implemented several strategies to determine whether variables such as bots and the number of followers influenced the results. The researchers ran each account through a bot-detection algorithm, then removed from the data any account with a more than 50 percent chance of being a bot. Even with the bots eliminated, the group said, the conclusion on the faster, wider, deeper spread of false news still stood.
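As a rough illustration of that filter, the snippet below drops accounts whose bot score crosses the 50 percent line; the `bot_probability` field is an assumption standing in for the output of whatever bot-detection model is used, not the study’s code.

```python
# Hypothetical account records with a score from some bot-detection model.
accounts = [
    {"user": "alice", "bot_probability": 0.03},
    {"user": "newsbot_42", "bot_probability": 0.91},
    {"user": "carol", "bot_probability": 0.47},
]

BOT_CUTOFF = 0.5  # accounts above this are treated as likely bots and removed

humans = [a for a in accounts if a["bot_probability"] <= BOT_CUTOFF]
print([a["user"] for a in humans])  # ['alice', 'carol']
```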

Sample graphic of a true and a false cascade; the green shows the spread of true news and the red the spread of false news. Credit: Peter Beshai

But what about the number of followers? Factoring in follower counts showed researchers that users who tweeted or retweeted false news were actually more likely to have fewer followers, not more. After controlling for the number of followers, the age of the account, and the user’s level of activity — along with the blue verification badge that has recently come under fire — the group concluded that false stories were still 70 percent more likely to go viral than true stories.
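Controlling for variables like these is commonly done with a regression that includes them as covariates. The sketch below shows one way that could look, a logistic regression on synthetic data; the variable names, the made-up numbers, and the model specification are assumptions for illustration, not the study’s actual analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

# Synthetic cascade-level data standing in for the study's real variables.
is_false = rng.integers(0, 2, n)        # 1 = false story, the predictor of interest
log_followers = rng.normal(6, 2, n)     # log of the poster's follower count
account_age = rng.uniform(0, 10, n)     # years on the platform
activity = rng.normal(0, 1, n)          # standardized Tweet rate
verified = rng.integers(0, 2, n)        # blue badge or not

# Outcome: did the cascade go "viral"? Built so falsehood raises the odds.
logit = -1.0 + 0.5 * is_false + 0.2 * log_followers + 0.1 * verified
viral = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(np.column_stack(
    [is_false, log_followers, account_age, activity, verified]))
model = sm.Logit(viral.astype(int), X).fit(disp=0)

# The coefficient on is_false (index 1) estimates the veracity effect
# with the other variables held fixed.
print(np.exp(model.params[1]))  # odds ratio for false vs. true stories
```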

The group also worked to see if the fact-checking organizations used in the study had any biases that affected the results, asking real people to fact-check a smaller sample of the data, drawn from a group of 2016 Tweets that hadn’t been verified by the organizations.

Comparing these manually checked stories with the ones checked by an organization, the researchers said the results were nearly identical. (But hats off to the undergraduates who were tasked with going through three million Tweets.)

So why does false news spread so fast?

The research didn’t stop at the statistics. Based on an earlier theory that humans prefer novel information, the researchers looked at some 5,000 Twitter users who had retweeted either true or false rumors. They analyzed 60 days of the Tweets those users had been exposed to before retweeting a rumor and found that false rumors differed far more from that prior history than rumors that proved to be true, suggesting a higher degree of novelty for false news.
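As a back-of-the-envelope illustration of the novelty idea (not the paper’s actual measure, which is more sophisticated), a rumor could be scored by how dissimilar it is from everything in a user’s recent exposure history. Every Tweet and timeline below is invented for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical timeline: Tweets a user was exposed to in the prior 60 days.
history = [
    "marathon runners finish downtown race",
    "city traffic rerouted for the race this weekend",
    "local weather sunny with light winds",
]
rumor = "police report a third suspect fleeing the scene"

vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform(history + [rumor])

# Novelty = 1 minus the rumor's closest match in the exposure history.
similarities = cosine_similarity(vectors[-1], vectors[:-1])
novelty = 1.0 - similarities.max()
print(round(float(novelty), 2))  # near 1.0: the rumor shares little with history
```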


The group then looked to see if specific emotions were tied to false news stories more than true ones. Without access to Facebook-style emoji reactions, the group ran a program that compared the words in the replies against a lexicon mapping each word to an associated emotion. The comments on the false Tweets had a greater number of words associated with surprise and disgust.
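In spirit, that program resembles the lexicon lookup sketched below; the tiny word-to-emotion mapping is invented for illustration, standing in for the kind of established emotion lexicon such a study would draw on.

```python
from collections import Counter

# Toy word-to-emotion lexicon, invented for illustration only.
LEXICON = {
    "shocking": "surprise",
    "unbelievable": "surprise",
    "gross": "disgust",
    "awful": "disgust",
    "hope": "trust",
    "sad": "sadness",
}

def emotion_profile(comments):
    """Tally the emotions associated with words appearing in reply text."""
    counts = Counter()
    for comment in comments:
        for word in comment.lower().split():
            if word in LEXICON:
                counts[LEXICON[word]] += 1
    return counts

replies = ["Shocking if true", "unbelievable and gross", "so sad to hear"]
print(emotion_profile(replies))
# Counter({'surprise': 2, 'disgust': 1, 'sadness': 1})
```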

The true Tweets, meanwhile, often had comments that contained words associated with sadness, anticipation, and trust.

While the researchers suggest emotion and novelty may be causes of the difference in the spread of false news versus real news, they did not definitively draw that conclusion. Still, the study revealed that false news tends to have qualities that past research has shown may increase its appeal.

What can platforms do to stop the spread of false news?

Since human nature appears to be one of the reasons why false news spreads more, Vosoughi suggests the first solution should be people-based rather than dependent on the social media companies themselves. Educating social media users and students on how to spot fakes could help online viewers sort through the overwhelming mass of information online.


While the postdoctoral associate says that stopping the spread of false news is ultimately up to the user, social media companies could help by providing more information that the reader could use to judge the accuracy of the information. “In the same way that, when you go to a restaurant to order food, you see the calorie content of the food you are ordering so that you can make a better choice, I think social media platforms could provide some kind of score on the quality of what you are reading,” Vosoughi said. “I don’t think they should censor anyone, but by providing quality scores, people could make better decisions before sharing.”

Vosoughi said he will continue researching the spread of false news by testing possible solutions to determine whether giving users a nutrition-facts-like label impacts sharing behavior.


The study wasn’t the only research sparked by Vosoughi’s social media experience during the aftermath of the Boston Marathon bombings. For his Ph.D. thesis, he developed a false news detection algorithm that, he says, wasn’t 100 percent accurate but helped cut back on some of the noise by catching some of the fakes. The algorithm was finished in 2015, and he is currently talking with groups interested in using it, including emergency services.

“When you were reading these things,” he said, while recalling using Facebook, Twitter and Reddit for news during the campus lockdown after the bombings, “you didn’t know if they were true or false. You couldn’t know what to believe and what not to believe. It was the first time that I experienced the effects that false news and rumors can have on you. If you are living in that moment, in that town, false news will change the story even more. That was a wakeup call for me.”

Hillary K. Grigonis