
Here’s what social media giants are doing to keep extremism off your screen

Social media is a powerful tool for groups engaged in terrorist activities. The extremist content they post has sparked widespread changes across social networks. But are those changes enough? That’s the question representatives from Facebook, Twitter, and YouTube addressed this week, speaking before the U.S. Senate Committee on Commerce, Science, and Transportation in a hearing in Washington, D.C.

The hearing was designed to examine the social networks’ current efforts to curb extremist content, opening up a discussion on tech companies’ role in stunting the spread of online propaganda. While the companies have previously testified on Russian interference in the U.S. election, this hearing was the first time they spoke to the commerce committee about extremist content.

All three networks reported a significant increase in the amount of content removed from their respective platforms, as well as in content blocked from being uploaded in the first place. In some cases, the networks’ efforts overlap: the Global Internet Forum to Counter Terrorism enables information sharing, while a shared database of more than 40,000 “hashes” helps keep content recognized on one network off another.
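The hash-sharing arrangement described above can be illustrated with a minimal sketch: when one network removes a file, it publishes a digest of that file to a shared database, so the others can block re-uploads with a simple lookup. This is an exact-match illustration only; the consortium’s actual system and its fingerprinting method are not public, and all names below are hypothetical.

```python
import hashlib

# Hypothetical shared database of hashes of media already removed
# by one of the participating networks.
shared_hash_db = set()

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint of a media file (SHA-256 hex digest)."""
    return hashlib.sha256(data).hexdigest()

def report_removal(data: bytes) -> None:
    """One network removed this content; share its hash with the others."""
    shared_hash_db.add(fingerprint(data))

def is_known_extremist_content(data: bytes) -> bool:
    """Another network checks an incoming upload against the shared database."""
    return fingerprint(data) in shared_hash_db

# One network removes a video and shares its hash...
report_removal(b"fake video bytes")
# ...so a second network can block the same file at upload time.
print(is_known_extremist_content(b"fake video bytes"))  # True
print(is_known_extremist_content(b"different video"))   # False
```

Exact hashes like this break as soon as a file is re-encoded or cropped, which is why production systems rely on more robust perceptual fingerprints rather than plain cryptographic digests.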

Facebook

Facebook’s head of Product Policy and Counterterrorism, Monika Bickert, said that Facebook is now able to remove 99 percent of ISIS and Al Qaeda-related posts before a human flagger ever reports them, thanks largely to machine learning; Facebook’s AI platform looks through image, video, and text material. The company is also working to teach the system to recognize posts that support a terrorist organization (rather than generating false positives on posts condemning the behavior, for example).

For Facebook, AI is also being used to prevent some content uploads. Image matching prevents other accounts from uploading videos previously removed by the company. The company also works with experts “to track propaganda released by these groups and proactively insert it into our matching systems,” Bickert wrote in a prepared statement.
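Image matching of the kind Bickert describes typically relies on perceptual hashing, which tolerates small edits to a re-uploaded file. Below is a toy sketch of one such technique (an average hash), assuming the image has already been decoded into a grayscale pixel grid; Facebook’s actual matching system is not public, and real implementations are far more robust.

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

def matches(pixels, banned_hash, max_distance=2):
    """Flag an upload if its hash is close to a previously removed image's."""
    return hamming(average_hash(pixels), banned_hash) <= max_distance

removed = [[10, 200], [220, 15]]    # previously removed image (2x2 grayscale)
re_upload = [[12, 198], [221, 14]]  # slightly altered re-upload of the same image
banned = average_hash(removed)

print(matches(re_upload, banned))            # True: survives small edits
print(matches([[0, 0], [0, 255]], banned))   # False: unrelated image
```

Because the hash captures coarse brightness structure rather than exact bytes, minor recompression or pixel-level noise still produces a near-identical hash, unlike the cryptographic digest approach.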

Facebook also looks for “clusters” of related Pages, groups, posts, and profiles tied to a removed account. The social network is likewise improving its efforts to prevent previously removed users from creating new accounts.

Facebook has already added 3,000 people to the review team and this year will expand to a total of 20,000 people working to identify all content that violates the community standards, including extremist content. Another 180 people, Bickert said, are trained specifically in preventing terrorist content.

At the same time, Facebook is working to further “counterspeech,” or content that fights against extremism and other hateful posts.

“On terrorist content, our view is simple: There is no place on Facebook for terrorism,” Bickert said. “Our longstanding policies, which are posted on our site, make clear that we do not allow terrorists to have any presence on Facebook. Even if they are not posting content that would violate our policies, we remove their accounts as soon as we find them.”

Twitter

Twitter’s director of Public Policy and Philanthropy, Carlos Monje Jr., said the platform has now suspended more than one million accounts for terrorism since mid-2015 — including 574,070 accounts just last year, a jump from the more than 67,000 suspensions in 2015. A big part of that increase is the technology used to detect those accounts, which caught one-third of the accounts in 2015 but is now responsible for 90 percent of the latest suspensions.

“While there is no ‘magic algorithm’ for identifying terrorist content on the internet, we have increasingly tapped technology in efforts to improve the effectiveness of our in-house proprietary anti-spam technology,” Monje said. “This technology supplements reports from our users and dramatically augments our ability to identify and remove violative content from Twitter.”

Extremist content was part of Twitter’s rule overhaul late last year, prompted in part by the #WomenBoycottTwitter campaign. The expanded rules go beyond tweets to cover usernames, profile images, and other profile information.

On a different note, the platform is also working to curb election misinformation: Twitter will soon notify users who viewed such propaganda, and it is donating the revenue from those ads to fund additional research. Twitter has already shared updates designed specifically for political ads, and verifying all state and federal candidates is part of those changes as well.

YouTube

Juniper Downs, YouTube’s director of Public Policy and Government Relations, said machine learning now removes 98 percent of “violent extremism” videos, up from 40 percent a year ago. Around 70 percent of those videos are removed within eight hours, and half within two, Downs said.

Along with the expanded software, YouTube has also added more organizations to its Trusted Flagger program, including counter-terrorism groups. Within parent company Google itself, the number of staff reviewing violating videos will grow to 10,000 this year. This year will also bring a transparency report on flagged videos.

For videos that fall into a gray area without an outright violation, YouTube has already announced that such videos won’t be monetized or appear among recommended videos, and will have comments disabled. Like Facebook, YouTube is also pursuing counter-speech, including through its Creators for Change program.

“No single component can solve this problem in isolation,” Downs wrote in her prepared statement. “To get this right, we must all work together.”

Moving forward

The session has been described as “mostly genial,” with each platform reporting higher numbers of removed content and accounts. Still, Clint Watts, a Robert A. Fox Fellow at the Foreign Policy Research Institute, suggested that social networks can do more: reconsidering anonymous accounts, eliminating non-human bot accounts or requiring a CAPTCHA, and extending federal regulations for political ads to social media.

“Social media companies realize the damage of these bad actors far too late,” Watts wrote in a prepared statement. “They race to implement policies to prevent the last information attack, but have yet to anticipate the next abuse of their social media platforms by emerging threats seeking to do bad things to good people.”

A video of the hearing is publicly available from the committee’s website, including prepared statements from each network.

Hillary K. Grigonis