Facebook is teaming up with some of its biggest tech industry counterparts to combat the spread of extremist content on the web.
On Monday, the company announced that, along with Twitter, Microsoft, and YouTube, it will begin contributing to a shared database devoted to “violent terrorist” material found on their respective platforms.
The compiled content will be identified using “hashes,” unique digital “fingerprints,” in the hope that sharing this data will streamline the removal process across the web’s biggest services.
In its blog post, Facebook describes the targeted items as “hashes of the most extreme and egregious terrorist images and videos … content most likely to violate all of our respective companies’ content policies.”
Theoretically, once a participating firm adds an identified hash of an extremist image or video to the database, another company can use that unique data to detect the same content on its own platform and remove it accordingly.
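To make the workflow concrete, here is a minimal sketch of how hash-based matching of uploads against a shared database could work. It uses a plain SHA-256 cryptographic hash for simplicity; the companies’ actual “fingerprints” are not specified in the announcement and are likely more sophisticated (for example, robust to re-encoding or cropping). The database entries and function names below are hypothetical.

```python
import hashlib

# Hypothetical shared database: hashes contributed by participating companies.
# Real entries would be fingerprints of known "violent terrorist" images/videos.
shared_hash_db = {
    "9f2b5c0000000000": "example flagged video",  # placeholder entry
}

def fingerprint(file_path: str) -> str:
    """Compute a hash ("fingerprint") of an uploaded file."""
    digest = hashlib.sha256()
    with open(file_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def check_upload(file_path: str) -> None:
    """Compare an upload's fingerprint against the shared database."""
    h = fingerprint(file_path)
    if h in shared_hash_db:
        # A match does not trigger automatic removal; each company applies
        # its own content policies and review process before deleting.
        print(f"Match found ({shared_hash_db[h]}): queue for policy review")
    else:
        print("No match: no action taken")
```

In this simplified model, once one firm adds a hash, every other participant can detect the identical file on upload without ever exchanging the underlying content or any user data.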
Facebook assures its users that no personal information will be shared, and matching content will not be removed automatically. Ultimately, the decision to delete content that matches a hash will rest with each company and the policies it has in place. Additionally, each firm will continue to apply its own transparency practices and review process for government requests.
Over the past year, the web giants in question have all faced public pressure to tackle extremist content online. At the start of the year, execs from Google, Twitter, and Facebook met with White House officials to discuss the issue.
Facebook and Twitter have also been hit with lawsuits regarding their alleged inaction against terrorist groups operating on their respective sites. In response, the latter has banned 325,000 accounts since mid-2015 for promoting extremism. For its part, Google began showing targeted anti-radicalization links via its search engine. Meanwhile, in May, Microsoft unveiled a slew of new policies in its bid to remove extremist content from its consumer services.
“Throughout this collaboration, we are committed to protecting our users’ privacy and their ability to express themselves freely and safely on our platforms,” Facebook wrote in its post. “We also seek to engage with the wider community of interested stakeholders in a transparent, thoughtful, and responsible way as we further our shared objective to prevent the spread of terrorist content online while respecting human rights.”