As the number of terrorist attacks continues to increase globally, Facebook is trying to be transparent about how it plans to keep terrorist content off its platform. To make those efforts more efficient, the company has enlisted both artificial intelligence and human expertise.
To kick off the initiative, Facebook introduced a blog series called “Hard Questions” as a space to discuss complicated subjects. The first post in the series, titled “How We Counter Terrorism,” was written by Monika Bickert, Facebook’s director of global policy management, and Brian Fishman, counterterrorism policy manager, who explain in detail how the company works to keep terrorist content off its platform.
The post lists a number of current tactics that use AI, including image matching: when someone uploads a photo or video, systems check whether it matches terrorism content Facebook has previously removed, which prevents other accounts from posting the same material. The company is also experimenting with using AI to analyze text for language that advocates terrorism.
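The post does not describe how the matching works internally; production systems typically use perceptual hashes that tolerate re-encoding and cropping, but the core idea can be sketched with a simple exact-hash lookup. The names `KNOWN_HASHES`, `fingerprint`, and `matches_known_content` below are illustrative assumptions, not Facebook's actual API:

```python
import hashlib

# Hypothetical database of fingerprints of previously removed images.
# (For illustration, this contains the SHA-256 digest of the bytes b"test".)
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest that identifies the image content exactly."""
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_content(image_bytes: bytes) -> bool:
    """True if the upload exactly matches previously removed content."""
    return fingerprint(image_bytes) in KNOWN_HASHES
```

An exact digest like this catches only byte-identical re-uploads; a real deployment would use a perceptual hash so that resized or re-compressed copies of the same image still match.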
Human judgment is still required to keep the AI from flagging terrorism-related imagery that appears in a legitimate context, such as a news story. To ensure constant monitoring, the community operations team works 24 hours a day, and its members collectively speak dozens of languages. The company has also added to its team of terrorism and safety specialists, ranging from former prosecutors to engineers, whose sole responsibility is countering terrorism.
Facebook will continue to see employee growth after CEO Mark Zuckerberg announced plans to expand the community operations team by adding 3,000 more employees across the globe, a decision that came after a string of violent deaths and incidents were broadcast over Facebook Live.
The company also continues to develop partnerships with researchers, governments, and other companies, including Microsoft, YouTube, and Twitter. These businesses contribute continuously to a shared database used to identify and track terrorist content.