Facebook CEO Mark Zuckerberg says that his company is considering treating deepfakes differently from traditional fake news and misinformation, which could make it easier for Facebook to delete the altered videos before they spread.
Deepfakes, or incredibly realistic altered videos that can make it seem like a person said or did something they never did, have become an increasing problem on social media platforms. Facebook came under fire recently over a less-advanced deepfake purporting to show Nancy Pelosi slurring her words, which spread throughout the network.
“We’re going through the policy process of thinking through what the deepfake policy should be,” he said Wednesday during an interview with Harvard Professor Cass Sunstein at the Aspen Ideas Festival. “This is certainly a really important area as the A.I. technology gets better and one that I think is likely sensible to have a different policy and to treat this differently than how we just treat normal false information on the internet.”
Right now, Facebook relies on independent fact-checkers to verify controversial content. If they determine that something is fake or misleading, the network will limit its distribution — so it won’t show up in your news feed. If you do see a photo or video that’s been flagged as false, it’ll be labeled as such.
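To make that flow concrete, here is a minimal, hypothetical sketch in Python of the review logic the paragraph describes; the class, field names, and the reach-limiting factor are all invented for illustration and are not Facebook's actual systems or API.

```python
# Hypothetical sketch of the fact-checking flow described above -- not
# Facebook's real code. A post rated "false" by an independent fact-checker
# is down-ranked rather than removed, and shown with a warning label.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    post_id: str
    fact_check_verdict: Optional[str] = None  # e.g. "false", "misleading", or None
    rank_multiplier: float = 1.0               # 1.0 = normal news-feed distribution
    label: Optional[str] = None

def apply_fact_check(post: Post, verdict: str) -> Post:
    """Record a fact-checker's verdict and limit distribution accordingly."""
    post.fact_check_verdict = verdict
    if verdict in ("false", "misleading"):
        post.rank_multiplier = 0.2             # assumed value: sharply limit reach
        post.label = "False information, checked by independent fact-checkers"
    return post
```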
But the system doesn’t always work. Zuckerberg said the Pelosi video “got more distribution than our policies allowed” and spread across Facebook for more than a day before fact-checkers could flag it as false, something he called an “execution mistake on our side.” The company still hasn’t removed the video.
The Pelosi video was a more basic form of deepfake, one that made her seem to slur her words simply by cutting the footage and slowing it down. Zuckerberg said simple edits that eliminate context or change speed or pitch aren’t quite at the level of advanced deepfakes.
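That kind of crude edit requires no AI at all. As a rough illustration, and not a claim about how the Pelosi clip was actually produced, a few lines of Python using the open-source MoviePy library (1.x API) can slow a clip to roughly three-quarters speed, dragging the audio down with it; the file names and the 0.75 factor here are assumptions for illustration only.

```python
# Illustrative sketch only: slowing a clip to ~75% speed, the sort of
# simple speed edit described above. Uses the MoviePy 1.x API.
from moviepy.editor import VideoFileClip
from moviepy.video.fx.all import speedx

clip = VideoFileClip("original.mp4")    # hypothetical input file
slowed = speedx(clip, factor=0.75)      # 75% speed: audio slows and its pitch drops
slowed.write_videofile("slowed.mp4")    # export the altered clip
```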
“I think we need to be very careful,” he said, adding that it’s not up to Facebook to delete every video that’s edited in a way that its subject dislikes.
The Pelosi video remains online, but a more advanced deepfake — like one purporting to show Zuckerberg himself praising a shadowy organization called “Spectre” — might be a different story.
“I definitely think there’s a good case that deepfakes are different from traditional misinformation, just like spam is different from traditional misinformation and should be treated differently,” he said.
As the technology advances, deepfakes are likely to become an increasing nuisance, or even a major problem, on social media. Experts say we’re not far from a point where almost anyone can create a convincing deepfake from a single photo. Other social networks, notably YouTube, have struggled to keep pace with the rapid development and spread of deepfakes, particularly in deciding when and how to delete them.
Celebrities like Zuckerberg or Kim Kardashian have legal teams to deal with deepfakes, but as the videos become more common and begin to target ordinary users, Facebook and other platforms will likely need to determine exactly how to treat them.