How artists and activists are using deepfakes as a force for good

In a recently released video, what appears to be a tense Brett Kavanaugh speaks before members of the United States Congress. “It’s time to set the record straight,” he begins. Over the next few minutes, the Supreme Court Justice admits it’s possible that he committed sexual assault and expresses remorse for the way he responded to Christine Blasey Ford’s allegations during his testimony. “For that, I take responsibility and I apologize.”

Thing is, this scene isn’t real. The footage is doctored, and Kavanaugh never actually said those things.

What if Brett Kavanaugh had a reckoning? | Infinite Lunchbox

In reality, Kavanaugh denied and disregarded the charges and played the victim. The video described above is from a series of deepfaked clips that envision a future where divisive public figures like Kavanaugh, Alex Jones, and Mark Zuckerberg take responsibility for their past transgressions.

The series, titled Deep Reckonings, is the brainchild of Stephanie Lepp — an artist who aims to elicit positive change in the world by leveraging deepfake technology to help people see and imagine better versions of themselves.

It’s a lofty and somewhat abstract project, but Lepp isn’t alone in her efforts. She’s part of a growing league of creators who aim to use deepfake technology for good.

Deepfake it ’til you make it

Deepfakes have had a controversial journey so far. The technology has been used widely for nefarious purposes like pornography creation and disinformation campaigns, which has brought it under sharp scrutiny from both governments and tech companies that fear the technology’s weaponization.

“Given that the overwhelming majority of deepfakes are nefarious in nature, it’s understandable that we’ve focused on their weaponization,” says Lepp. “But this focus has prevented us from realizing their prosocial potential. Specifically, deepfakes can be used for purposes of education, health, and social change.”

Stephanie Lepp

She argues that, similar to how virtual reality has been used to help patients recover from brain injuries by letting them interact with virtual memories, deepfakes can be employed for psychological healing in trauma victims. For example, imagine a scenario where doctors could script deepfakes of an addict’s sober future self and use that to encourage them down the path of recovery.

The concept, at least in theory, is sound. Jonathan Gratch, director of virtual human research at the University of Southern California’s Institute for Creative Technologies, has found that seeing yourself in VR can be highly motivating, and that the same concept could easily be applied to deepfake footage. He suggests that if a patient’s face was subtly blended into their doctor’s face, the patient would be more likely to follow the doctor’s advice.
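Gratch’s actual experiments rely on learned face-swap models, but the basic notion of “subtly blending” one face into another can be illustrated, very crudely, with a weighted average of two aligned portraits. The sketch below is a hypothetical Python illustration using OpenCV, not the method used in his research; the file names and the 80/20 blend ratio are assumptions.

```python
# A crude sketch of "subtle" face blending: a weighted average of two
# pre-aligned, same-size portraits. Real deepfake pipelines use learned
# face-swap models; this only illustrates the underlying idea.
import cv2

# Hypothetical inputs: portraits already cropped and aligned to the same size.
doctor = cv2.imread("doctor_aligned.png")
patient = cv2.imread("patient_aligned.png")

# Keep the doctor's face dominant and mix in a faint trace of the patient,
# so the result still reads as the doctor while feeling subtly familiar.
blended = cv2.addWeighted(doctor, 0.8, patient, 0.2, 0)

cv2.imwrite("blended_face.png", blended)
```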

More than memes and misinformation

Despite the fact that negative applications of deepfakes tend to get more attention, positive applications like Lepp’s are on the rise. Within the past couple of years, the technology has made appearances in storytelling, prosocial projects, and more.

Project Revoice: Helping this Ice Bucket Challenge founder take his voice back from ALS

The ALS Association’s Project Revoice, for example, enables amyotrophic lateral sclerosis patients who’ve lost their ability to speak to continue using their voice. How? By using deepfakes to create personalized synthetic vocal tracks that can be played on demand with a soundboard.
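Project Revoice’s own voice-banking pipeline isn’t public, but the general recipe, cloning a voice from reference recordings and then synthesizing phrases on demand for a soundboard, can be sketched with an open-source voice-cloning model. The snippet below uses Coqui’s XTTS model purely as a stand-in; the model name, file paths, and sample phrases are assumptions, not details from the ALS Association’s project.

```python
# A rough sketch of voice cloning with an open-source TTS model (Coqui XTTS),
# standing in for Project Revoice's undisclosed pipeline: clone a voice from
# a short reference recording, then synthesize phrases to play via a soundboard.
from TTS.api import TTS

# Load a multilingual voice-cloning model (downloads weights on first run).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Hypothetical reference clip recorded before the patient lost their voice.
reference_clip = "patient_voice_sample.wav"

# Pre-render a few phrases the patient wants on their soundboard.
phrases = {
    "greeting": "Hi, it's great to see you.",
    "thanks": "Thank you so much.",
}

for name, text in phrases.items():
    tts.tts_to_file(
        text=text,
        speaker_wav=reference_clip,
        language="en",
        file_path=f"{name}.wav",
    )
```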

In a separate project from the nonprofit antimalaria organization Malaria Must Die, celebrity athlete David Beckham delivered a message in nine different languages (and voices) thanks to deepfaked audio and video that made his lips match the words.

David Beckham speaks nine languages to launch Malaria Must Die Voice Petition

In one particularly striking campaign from earlier in 2020, the Massachusetts Institute of Technology’s Center for Advanced Virtuality sought to educate the public on misinformation by producing a deepfake of former U.S. President Richard M. Nixon delivering the contingency speech written in 1969 in the event the Apollo 11 crew were unable to return from the moon.

These kinds of public service announcements and awareness campaigns are just the tip of the iceberg. Deepfake tools have also helped simplify processes in the entertainment industry that otherwise demand high-end equipment and time-consuming work, such as de-aging and voice cloning. Every face in a recent music video by The Strokes was fake, for instance, so that the band’s roughly 40-year-old members could look like they are 20.

Ohad Fried, a senior lecturer of computer science at Israel’s Interdisciplinary Center Herzliya, says that thanks to deepfakes, “what used to take years of artist time can now be achieved by independent small studios. This is always good news for diversity and quality of the media we consume.”

Tipping the scales

However, deepfake technology’s potential to do harm — especially as it gets more accessible — remains a concern. Aviv Ovadya, founder of the Thoughtful Technology Project, agrees that the ability to create synthetic media can have “numerous positive impacts, for storytelling, for those with disabilities, and by enabling more seamless communication across languages.” But at the same time, he warns there’s still a lot of room for harm when the technology goes mainstream and a lot of work that needs to be done to minimize these risks.

“Even these positive use cases can unintentionally lead to real and significant harm,” he told Digital Trends. “Excerpts from art pieces attempting to create empathy can also be taken out of context and misused.”

“The goal should be to build this technology in a way that mitigates those negative impacts as much as possible.”

Experts have repeatedly called for channeling more resources into detection programs and official ethics guidelines, though legal intervention could end up hampering free speech. No one’s quite sure yet which direction deepfakes will ultimately take, but as with any emerging technology, there will come a point where they reach a balance, and the responsibility will fall on tech companies, policymakers, and creators to ensure the scales remain tipped toward the good side.

Ovadya also suggests limiting deepfake tools’ accessibility for the masses until researchers are able to “complete some of the fortifications that we need to protect our society from the potential negative impacts. The goal should be to build this technology in a way that mitigates those negative impacts as much as possible at the very least.”

For now, though, Lepp will spend her time focusing on her next deepfake protagonist: Donald Trump and his concession speech.

Shubham Agarwal
Shubham Agarwal is a freelance technology journalist from Ahmedabad, India. His work has previously appeared in Firstpost…