Facebook has taken plenty of criticism over the algorithms it uses to keep content within its community guidelines, but a new round of investigative reporting suggests the company's human review staff could use some improvement too. In a study of 900 posts, ProPublica reports that Facebook's reviewers were inconsistent in handling posts containing hate speech, removing some but leaving up others with similar content.
Facebook apologized for some of those decisions, saying that of the 49 posts highlighted by the non-profit investigative organization, reviewers made the wrong call on 22. The social media platform defended its handling of 19 other instances, while eight were excluded because of incorrect flags, user deletions or a lack of information. The study was crowd-sourced, with Facebook users sharing the posts with the organization.
Justin Osofsky, Facebook’s vice president of Global Operations and Media Partnerships, said that the social media platform will expand its review staff to 20,000 people next year. “We’re sorry for the mistakes we have made — they do not reflect the community we want to help build,” he said in response to the ProPublica investigation. “We must do better.”
ProPublica said Facebook is inconsistent in its treatment of hate speech, citing two statements that each essentially wished death on an entire group of people; only one of them was removed after being flagged. The second post was taken down only after the ProPublica investigation.
“Based on this small fraction of Facebook posts, its content reviewers often make different calls on items with similar content, and don’t always abide by the company’s complex guidelines,” ProPublica said. “Even when they do follow the rules, racist or sexist language may survive scrutiny because it is not sufficiently derogatory or violent to meet Facebook’s definition of hate speech.”
On the flip side, the report also found posts that were removed but shouldn’t have been. In one example, an image contained a swastika, but the caption asked viewers to stand up against a hate group.
The study is far from the first time ProPublica has called out Facebook’s practices this year. This fall, Facebook changed its ad targeting after a ProPublica report showed that when enough users typed the same answers into their bio fields, racial slurs could become a targetable ad category. Just a week ago, ProPublica demonstrated that employers could discriminate by age using those same ad tools.
Monitoring content on the largest social media network, with more than 2 billion monthly active users, isn’t an easy task, and it’s one Facebook approaches with both artificial intelligence algorithms and human reviewers. Social media networks generally try to strike a balance between removing hateful content and preserving free speech. Osofsky says the platform deletes 66,000 instances of hate speech every week.
The move to a review staff of 20,000 is fairly significant: when Facebook said in May that it was adding 3,000 more reviewers, that brought the team to 7,500 people.
ProPublica says the investigation is important “because hate groups use the world’s largest social network to attract followers and organize demonstrations.”