Social giants to testify before Congress on extremist content

Social media giants are once again being asked to testify before the U.S. Congress — this time, about “extremist propaganda.” Facebook, Twitter and YouTube will each have representatives testify before the Senate Commerce Committee next week, on January 17. The hearing, called “Terrorism and Social Media: #IsBigTechDoingEnough,” is expected to look into how the platforms handle extremist content.

The Commerce Committee’s chairman, Sen. John Thune, said the hearing is designed to discuss how social media platforms handle extremist propaganda and what the tech giants are doing to prevent those posts from spreading. According to Recode, the networks’ handling of hate speech, racism, fake news and other abusive content could also become part of the discussion.

Monika Bickert, Facebook’s head of global policy management; Carlos Monje, Twitter’s director of public policy and philanthropy; and Juniper Downs, YouTube’s global head of public policy and government relations, will testify before the committee.

Questions about how social networks handle extremist content have prompted several changes at the companies over the past two years, but the hearing will examine whether those steps go far enough. Removing extremist content isn’t a U.S.-only issue, either: the European Union currently runs a voluntary Code of Conduct that calls on networks to quickly detect and remove hate speech. In 2016, the three social networks, along with Microsoft, formed a group to build a shared database of extremist content, with the goal of making such posts easier to remove across all platforms.

YouTube implemented a number of changes over the past year after major brands pulled their ads upon discovering they were running alongside extremist videos and other hateful content. In August, the platform said that advances in its A.I. algorithms meant 75 percent of those videos were removed before a single human user flagged them, while videos that fall into a gray area, objectionable but not quite against community guidelines, began facing penalties.

Twitter, facing growing concern in 2016, locked out 230,000 accounts for extremist content that August, then removed around another 377,000 accounts over the following six months. The platform also recently began removing the blue verification badge from some users after facing criticism for verifying a known white supremacist.

Last year, Facebook shared insight into how it tackles extremist content using both A.I. and human staff. A.I. can, for example, identify duplicates of videos Facebook has already removed, preventing another group from re-sharing the same content, while another algorithm scans text for keywords. The company’s review staff is expected to grow to 20,000 this year, and CEO Mark Zuckerberg has made fixing abuse on the platform his personal goal for 2018.
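Neither Facebook nor the industry’s shared database has published its matching internals, but the duplicate-detection idea described above can be sketched in a few lines of Python: fingerprint each upload, compare it against fingerprints of already-removed material, and fall back to keyword checks on the accompanying text. Everything in the sketch, the function names, the exact SHA-256 matching, and the keyword list, is a hypothetical simplification; production systems rely on perceptual hashes that survive re-encoding rather than exact digests.

    import hashlib

    # Hypothetical stores for illustration only; real systems use a shared
    # industry database of perceptual hashes, not exact SHA-256 digests.
    removed_video_hashes = set()
    extremist_keywords = {"banned_phrase_example"}

    def fingerprint(video_bytes: bytes) -> str:
        """Return an exact-match fingerprint of an uploaded video file."""
        return hashlib.sha256(video_bytes).hexdigest()

    def should_flag(video_bytes: bytes, caption: str) -> bool:
        """Flag uploads that duplicate removed videos or match keywords."""
        if fingerprint(video_bytes) in removed_video_hashes:
            return True  # byte-identical duplicate of removed content
        return any(word in extremist_keywords
                   for word in caption.lower().split())

    # Example: register a removed video, then catch a re-upload of it.
    removed_video_hashes.add(fingerprint(b"bytes of a removed video"))
    print(should_flag(b"bytes of a removed video", "harmless caption"))  # True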

According to The Hill, it’s rare to see representatives from the social media giants testifying in Washington, though all three companies also testified concerning Russian interference in the U.S. election. A slew of recent events could help legislation catch up to social media technology, from new proposals for managing political ads in the U.S. to a now-active law in Germany requiring the removal of hate speech.

Hillary K. Grigonis
Facebook reportedly considering ‘kill switch’ if Trump contests 2020 elections

Facebook is reportedly preparing for various scenarios after the 2020 presidential election, including President Donald Trump using the social network to delegitimize the results.

The outcomes Facebook employees are planning for include the possibility of Trump falsely declaring on the platform that he has won another four-year term, The New York Times reported. The social network is also considering the possibility that Trump will try to invalidate the results by claiming the U.S. Postal Service lost mail-in ballots or that other groups interfered with the election, sources told the news outlet.

YouTube permanently bans white nationalist channel VDARE

YouTube has permanently banned the channel of the far-right, anti-immigration website VDARE for violating its hate speech policies.

According to a report by Right Wing Watch, VDARE was a gathering place for white supremacists and people who oppose immigration to the U.S. Its founder, Peter Brimelow, is well regarded in right-wing circles, even rubbing elbows with advisers in President Donald Trump’s administration, according to The Washington Post. Vox described Brimelow as one of the “founding fathers” of the alt-right.

YouTube bans over 2,000 Chinese accounts for ‘coordinated influence operations’

Google has banned more than 2,000 Chinese YouTube channels that it says were involved in “coordinated influence operation campaigns.” Between April and June of this year, Threat Analysis Group (TAG), the company’s division responsible for combating government-backed attacks, took down about 2,600 YouTube channels, up significantly from the 277 channels it blocked in the first three months of 2020.

Most of these channels posted “spammy, non-political content,” Google said in a blog post, but some were part of a spam network that uploaded political content, primarily in Chinese.
