
Are deepfakes a dangerous technology? Creators and regulators disagree

Over the past few years, deepfakes have emerged as the internet’s latest go-to for memes and parody content.

It’s easy to see why: They enable creators to bend the rules of reality like no other technology before. Through the magic of deepfakes, you can watch Jennifer Lawrence deliver a speech through the face of Steve Buscemi, see what Ryan Reynolds would’ve looked like as Willy Wonka, and even catch Hitler and Stalin singing Video Killed The Radio Star in a duet.

For the uninitiated, deepfake tech is a form of synthetic media that lets users swap one person’s face onto another’s in video so seamlessly that the result is nearly indistinguishable from the original footage. It works by training on heaps of image data to learn a face’s contours and other characteristics, then blending and animating the face naturally into the scene.
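The architecture behind most face-swap deepfakes is, conceptually, a shared encoder paired with one decoder per identity: both faces are compressed into the same latent space, so feeding face A’s features through face B’s decoder renders A’s pose and expression with B’s appearance. Below is a minimal, untrained NumPy sketch of that structure; the dimensions and data are toy values chosen purely for illustration, not any real model’s parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a 64-value "face" vector compressed to an 8-dim latent code.
FACE_DIM, LATENT_DIM = 64, 8

# One shared encoder learns features common to both faces;
# each identity gets its own decoder that reconstructs *its* face.
W_enc = rng.normal(0, 0.1, (FACE_DIM, LATENT_DIM))
W_dec_a = rng.normal(0, 0.1, (LATENT_DIM, FACE_DIM))
W_dec_b = rng.normal(0, 0.1, (LATENT_DIM, FACE_DIM))

def encode(face):
    # Shared latent representation (pose, expression, lighting, ...)
    return np.tanh(face @ W_enc)

def decode(latent, W_dec):
    # Identity-specific reconstruction from the shared latent code
    return latent @ W_dec

# During training, face A goes encoder -> decoder A and face B goes
# encoder -> decoder B, each minimizing its own reconstruction loss.
face_a = rng.normal(size=FACE_DIM)
recon_a = decode(encode(face_a), W_dec_a)

# The "swap": encode face A, but decode with B's decoder, so A's
# expression is rendered with B's learned appearance.
swapped = decode(encode(face_a), W_dec_b)

print(recon_a.shape, swapped.shape)
```

In a real system the encoder and decoders are deep convolutional networks trained on thousands of aligned face crops, and the swapped output is then color-corrected and composited back into each video frame.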

Ryan Reynolds as Willy Wonka deepfake. NextFace/YouTube

At this point, you’ve probably come across such clips on platforms like TikTok (where the hashtag “#deepfake” has about 200 million views), YouTube, and elsewhere. Whether it’s bringing fan fiction to life by recasting scenes with the stars viewers wish a movie had used, or dropping a long-dead figure into a modern viral meme, deepfakes have become a creative outlet for things that were previously next to impossible.

The shift has spawned a league of new creators like Ctrl Shift Face, whose deepfake videos regularly draw millions of views and often are the main topic of discussion on late-night talk shows.

“It’s a whole new way of making funny internet videos, or telling stories like we’ve never seen before,” says the Netherlands-based creator behind the hit “The Avengers of Oz” deepfake clip who asked that his real name not be used. “It’s a beautiful combination of fascination for A.I. technology and humor.”

But there’s a looming risk that threatens the future of deepfake technology altogether: its tainted reputation.

With great power comes great repostability

Unfortunately, in addition to their potential as a creative tool for well-intentioned video artists, deepfakes also carry a tremendous potential to do harm.

A recent study by the Dawes Centre for Future Crime at the UCL Jill Dando Institute of Security and Crime Science labeled deepfakes the most serious A.I.-enabled threat. Sen. Ben Sasse, a Nebraska Republican who has introduced a bill to criminalize the malicious creation of deepfakes, warned last year that the technology could “destroy human lives,” “roil financial markets,” and even “spur military conflicts around the world.”

To an extent, these concerns are fair. After all, the proliferation of deepfake technology has already enabled things like fake adult content featuring celebrities and lifelike impersonations of politicians for satire. Earlier this year, Dan Scavino, the White House social media director, tweeted a poorly manipulated clip of Trump’s rival Joe Biden appearing to ask people to re-elect Trump, which President Donald Trump then retweeted.


However, these less-than-convincing hoaxes have quickly been debunked before reaching the masses. More importantly, experts suggest deepfake videos have thus far had little to no societal impact, and that they currently don’t pose any imminent threats. For example, research conducted by Sensity, a cybersecurity firm focused on visual threats, claims that the vast majority of deepfake videos are pornographic (96%) and that the technology has yet to make its way into any significant disinformation campaigns.

Similarly, a Georgetown University report concluded that while deepfakes are an “impressive technical feat,” policymakers should not buy into the hype, as the technology remains imperfect and can’t influence real-world events just yet.

The creator behind Ctrl Shift Face believes the “hysteria” swirling around deepfakes is diverting lawmakers’ attention away from the real issues, such as the poorly regulated ad networks on Facebook that are actually responsible for misleading people.

“If there ever will be a harmful deepfake, Facebook is the place where it will spread,” Ctrl Shift Face said in an interview with Digital Trends. “In that case, what’s the bigger issue? The medium or the platform?”

The owner of BabyZone, a YouTube gaming channel with over half a million subscribers that often deepfakes celebrities into video games, echoes a similar concern: “I think that deepfakes are a technology like all the other existing technologies. They can be used for good and for bad purposes.”

The movement to save deepfakes

Over the last year or two, as governments and tech companies investigate the potential risks of this technology, deepfake advocates have scrambled to allay these concerns and fix the technology’s public image. Reddit communities that seek to “adjust this stigma” have popped up, and some independent researchers are actively building systems that can spot deepfakes before they go viral.

Roman Mogylnyi, CEO and co-founder of Reface, a hit app that lets you quickly swap your face into any GIF, says his startup is now developing a detection tool that can tell whether a video was made with Reface’s technology. “We believe that wide access to synthesized media tools like ours will increase humanity’s empathy and creativity, and will help to change the perception of the technology for the better,” Mogylnyi told Digital Trends.

Eran Dagan, founder and CEO of Botika, the startup behind the popular face-swapping app Jiggy, has a similar outlook toward deepfakes and believes as they become more mainstream, “people will be much more aware of their positive use cases.”

Given the potential dangers of deepfakes, however, it’s likely that Congress will eventually step in. Major tech platforms including Facebook, Twitter, and YouTube have already updated their policies to flag or remove manipulated media designed to mislead. Several states, including California and New York, have passed bills that punish the makers of intentionally deceptive deepfakes, as well as those released without the consent of the person whose face is used.

Should deepfakes be regulated?

While these policies exclude parody content, experts who fear ill-defined laws or an outright ban of the technology still believe Congress should stay out of it and let deepfakes run the natural course that any new form of media goes through.

David Greene, civil liberties director at the Electronic Frontier Foundation, says “any attempt by Congress to regulate deepfakes, or really any kind of visual communication, will be a regulation of speech and implicate the First Amendment.”


These laws, Greene adds, need to be precise and must — in well-defined and easily understood terms — address the harm they’re trying to curtail. “What we have seen so far … regulatory attempts at the state level are vague and overbroad laws that do not have the precision the First Amendment requires. They don’t have required exceptions for parody and political commentary.”

Giorgio Patrini, CEO of Sensity, finds a ban on algorithms and software “meaningless in the internet era.” Patrini compares this conundrum with malware protection and how it’s next to impossible to put an end to all computer viruses or their authors, so it’s better to simply invest in anti-malware mechanisms instead. “Society needs to build new mechanisms for certifying what can be trusted, and how to prevent the negative impacts of synthetic content on individuals and organizations,” he said.

Tim Hwang wrote in the Georgetown University report that as deepfake tools become commodified, the technology to automatically detect and filter them will evolve in turn and become more accurate, thereby neutralizing deepfakes’ ability to pose any serious threat.

Microsoft’s recently launched deepfake detection tool, for instance, analyzes videos and offers a confidence score indicating how likely it is that a given video was manipulated.
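Microsoft hasn’t published the tool’s internals, but detectors of this kind typically score each frame individually and then roll those scores up into a video-level confidence. Here is a hypothetical sketch of that aggregation step; the function name and score values are illustrative assumptions, not Microsoft’s API.

```python
def video_confidence(frame_scores):
    """Aggregate per-frame manipulation probabilities (0-1) into one report."""
    if not frame_scores:
        raise ValueError("no frames scored")
    # Averaging smooths per-frame noise; the peak flags even a briefly
    # manipulated segment that the mean alone would dilute.
    mean_score = sum(frame_scores) / len(frame_scores)
    peak_score = max(frame_scores)
    return {"mean": round(mean_score, 3), "peak": round(peak_score, 3)}

# e.g. a clip where only a short middle segment was tampered with:
scores = [0.12, 0.08, 0.91, 0.87, 0.10]
print(video_confidence(scores))  # → {'mean': 0.416, 'peak': 0.91}
```

Reporting both numbers matters in practice: a clip that is fake for only two seconds can still have a low average score, so the peak is what triggers a closer look.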

“Deepfakes are a new form of media manipulation, but not the first time we’ve faced this type of challenge. We are exploring and investing in ways to address synthetic media,” said a YouTube spokesperson.

The researchers behind DeepFaceLab, the advanced deepfake creation software that many YouTubers, including Ctrl Shift Face, employ, are now also working on open-source projects to raise awareness and build more comprehensive detection services.

“The only way to prevent this [deepfake abuse] is to establish an open source deepfake-related project and attract the public’s attention. So public netizens can realize that deepfakes exist,” Kunlin Liu, one of DeepFaceLab’s researchers, told Digital Trends.

Several deepfake creators Digital Trends talked to remain optimistic and consider governments’ pushback against deepfakes premature. They agree that deepfakes’ growing role in meme and parody culture will be instrumental in mending the emerging tech’s crummy reputation. And as long as videos carry disclaimers and platforms invest in more effective detection layers, they added, deepfakes are here to stay.

“I think that the reputation of deepfakes is improving significantly. Two years ago, the word deepfake automatically meant porn,” said the creator of Ctrl Shift Face. “Now, most people know deepfakes because of these entertaining videos circulating around the internet.”

Shubham Agarwal
Shubham Agarwal is a freelance technology journalist from Ahmedabad, India. His work has previously appeared in Firstpost…