It’s not just third-party apps getting the ax from Facebook — it’s fake accounts, too. On Tuesday, May 15, Facebook published its first-ever Community Standards Enforcement Report, part of an ongoing effort to restore public faith in the social network as it combats fake news and privacy scandals.
And as it turns out, when it comes to fighting the fake, there’s a lot to contend with. In fact, the company’s vice president of product management, Guy Rosen, revealed that Facebook disabled around 583 million fake accounts in the first three months of 2018 alone. For context, that’s about a quarter of the social network’s entire user base.
On average, around 6.5 million fake accounts were created every day between the beginning of 2018 and March 31. Luckily, Rosen notes that the majority of these spam accounts were disabled within just minutes of registration. This is largely thanks to Facebook’s artificial intelligence tools, which relieve humans of the burden of combing through the site to find the bots. That said, while A.I. is obviously useful, it’s not entirely foolproof.
Moreover, Facebook managed to find and delete 837 million spam posts in the first quarter of 2018, the vast majority of which were deleted before users got the chance to report them. “The key to fighting spam is taking down the fake accounts that spread it,” Rosen noted. And this, of course, is an ongoing effort within the company.
While Facebook has been quite effective at taking down instances of adult nudity and sexual activity, as well as graphic violence, the team admits that its technology “still doesn’t work that well” when it comes to hate speech.
“As Mark Zuckerberg said at F8, we have a lot of work still to do to prevent abuse,” Rosen noted. “It’s partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important.”
That said, Facebook says it is “investing heavily in more people and better technology to make Facebook safer for everyone.”