When no one is safe from deepfakes, how can we protect ourselves?

When we were young, we were often told not to believe everything we heard. When the Information Age dawned and software like Adobe Photoshop arrived, that mantra slowly extended to the visual: don’t believe everything you see.

Anyone with design knowledge could fabricate an image of a celebrity doing something silly, ‘drop’ the Statue of Liberty into Antarctica, airbrush their face, or make it look like they’d dropped a few pounds. The edits from this era might be comical by today’s standards, but they offered a glimpse of what was to come: deepfakes.

As the American Bar Association described them, “Deepfakes are more frightening than Photoshop on steroids.”

A deepfake is an artificial image or video in which a person’s face is digitally replaced with someone else’s. It is often used for malicious purposes, or to spread misinformation.

In a benign scenario, it’s a film studio placing a de-aged actor’s face on a body double to tell new stories with old characters (spoilers for The Mandalorian). In a malicious scenario, it’s someone placing the face of a person they know into a controversial video, such as a pornographic film.

Still, surely it must be difficult to produce deepfakes? After all, the technology is quite complex.

As it turns out, it’s anything but. Building a deepfake model from scratch is difficult; using consumer deepfake software isn’t, and there’s plenty on the market already, much of it free (a quick Google search will land you on a bunch of free, easy-to-use deepfake tools).

Here’s a chilling statistic: It now takes less than 25 seconds and costs $0 to create a 60-second deepfake pornographic video of anyone using just one clear face image.

Still, technology is always a double-edged sword, and there are some positive applications too: the tech can help improve automated dialogue replacement (ADR) and democratize VFX for filmmakers, create lifelike avatars for customer service chatbots, and even produce synthetic brain-MRI images to improve the detection of certain brain conditions.

So how do we step up, plan for, and navigate the ethical and legal implications of deepfake technology? Are regulators doing enough to rein in deepfake cyber abuse? How are improvements in GenAI exacerbating this threat? And ultimately, what can we do about it?

Let’s investigate…

Enjoying this newsletter? Subscribe to Future Files on Substack to receive it straight to your inbox.

It's not just celebrities who are being targeted

According to a 2023 report, the number one use of deepfake technology is pornography, which accounts for 98% of all deepfake videos online; 94% of the people featured in them are entertainment industry celebs, particularly from the US and South Korea.

Taylor Swift has repeatedly been the victim of pornographic and violent deepfake video content. Her voice has also been used to generate songs that flooded TikTok, prompting her record label to take action this May and get TikTok to remove unauthorized AI-generated music that used the artist’s voice.

While this controversial technology might have started as an imperfect way to replace people’s faces in videos, the development of GenAI is amplifying both the capabilities of deepfakes and their believability.

The technology has gotten better at digitally altering people’s faces, and it has improved in audio too. Now you can mimic the voice of the person whose identity you’re illegally using, so that the person in the fabricated video even sounds like your acquaintance or celebrity crush. Add a large language model into the mix, and you have a fake body double that talks and sounds like the person being imitated. It’s chilling, really. It’s how people have managed to revive dead actors and hijack celebrity voices.

And it’s not just celebrities whose reputations and public image are being targeted.

Last year in the US, then-14-year-old Elliston Berry woke up one day to find herself the victim of deepfake pornographic content, distributed by a classmate on Snapchat. Hers is just one of many such cases that have since been reported.

At the state level, deepfakes are used to spread propaganda and disinformation, as when pro-China bot accounts on Facebook and Twitter spread deepfaked news reports as part of a state-aligned information campaign. Others hijack political figures’ likenesses or voices to spread misinformation and serve hidden agendas, as happened this month with Joe Biden.

Other malicious use cases include fraud, with deepfakes used to pose as other people. Even companies aren’t safe: one in ten US executives says their company has already been targeted.

There are always two sides to the coin

While deepfakes tend to get a bad reputation, and for good reason, we need to come to terms with the fact that this technology has potential for good as well.

For people with social anxiety, conversing with a deepfaked AI counselor or companion can ease the way back into social gatherings and interactions. In Japan, there is already a large demand for AI companions.

Within healthcare especially, deepfake technology has massive potential: from scanning MRIs for diseases like cancer to helping with drug design and discovery.

The lines between the 'real you' and the 'digital you' may start to blur

Looking into the future, imagine a world where our digital twins conduct business meetings while we sleep, where historical figures are 'resurrected', or where AI influencers dominate social media...

Rather than reading about a historical figure like Martin Luther King Jr. in a classroom history lesson, what if we could talk to him face to face? Children learn in different ways, and this could be one way to reach those who struggle with the more ‘traditional’ approach.

What if we could create a deepfake avatar, a digital twin of ourselves, that could eventually handle tasks on our behalf? By uploading pictures or videos and training it on our personal data, we could have an augmented-reality or hologram version of ourselves that could attend meetings, or even go on job interviews. The possibilities are both exhilarating and unnerving.

As I think of the lines between our real and digital selves becoming increasingly blurred, I'm reminded of the Black Mirror episode "Joan Is Awful."

In that episode, a streaming platform uses AI to create a real-time TV adaptation of Joan's life, starring Salma Hayek as Joan. The show depicts Joan's daily activities with unnerving accuracy, blurring the lines between reality and digital fabrication. As Joan struggles with this invasion of privacy, she discovers layers of simulated realities, each featuring AI-generated versions of herself. This fictional scenario presents a chilling vision of how deepfake technology could evolve, where our digital selves become indistinguishable from, and perhaps more influential than, our real selves.

Balancing innovation, ethics, education, and regulation

While law enforcement agencies and regulators are trying to keep up with the rate at which technologies like deepfakes are evolving, it’s an uphill battle. But in light of a growing outcry from victims over the technology’s misuse, more governments are starting to take note, and to act.

Last year, New York Governor Kathy Hochul signed a bill banning the distribution of AI-generated deepfake content depicting non-consensual sexual images, while the UK has implemented a similar law, punishing anyone who “creates a sexually explicit deepfake, even if they have no intent to share it but purely want to cause alarm, humiliation or distress to the victim.”

In the United Arab Emirates, deepfake misuse cases fall under a range of UAE laws, including the UAE Penal Code, Cybercrimes Law, and Copyright Law, which help citizens address issues like defamation, fraud, privacy breaches, and IP infringement. The government also released a Deepfake Guide in 2021 to inform citizens and help them navigate the net in a post-deepfake world.

Still, it’s on us to educate ourselves, and especially our children, about how to navigate the web and new tech safely and responsibly. The solution isn't simply more regulation or better detection algorithms. It requires a fundamental shift in how we consume and interpret information. We need to cultivate a society of critical thinkers, digital skeptics who question not just what they see, but why they're seeing it.

We’ve reached a point where being online is an inescapable part of life

While recording a recent episode of my podcast HYPERSCALE (the full episode will be out soon), Angela Radcliffe, author of Quantum Kids Guardians of AI, told me how important digital literacy is for children, not just in using AI and other technologies, but in understanding how they work. This knowledge is crucial for recognizing potential exploitation and misuse. Angela also drew an intriguing parallel between technological advancements and clinical trials, suggesting that controlled experimentation is key to progress. When it comes to deepfakes, she stresses the need for users to become 'literate' in recognizing them to protect themselves from potential harm. It's a reminder that in this new digital world, knowledge truly is power.

Here are some telltale signs that a video may be a deepfake:

  • Digital artifacts, where small areas of the picture show obvious localized islands of distortion or blocky spots of off-color pixels

  • Distortion around the face

  • Lip movements that don’t match speech

  • Changes in lighting, especially after cuts

  • Changes in skin tone
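For readers who like to tinker, some of these signs can even be checked programmatically. Below is a toy sketch in Python, assuming the opencv-python package is installed; the file name suspect_clip.mp4 and the sharpness thresholds are illustrative placeholders, not tested values. It samples frames, finds faces, and flags frames where the face region’s sharpness differs wildly from the rest of the frame, a crude proxy for the distortion and blending artifacts in the first two bullets above. Real detection systems use trained neural networks; treat this purely as a conceptual starting point.

```python
# Toy deepfake "sniff test": face swaps blend a synthesized face onto a
# frame, which can leave the face region sharper or blurrier than its
# surroundings. This heuristic flags such mismatches; it is NOT a detector.
import cv2


def sharpness(gray_img):
    # Variance of the Laplacian: a cheap, standard sharpness measure.
    return cv2.Laplacian(gray_img, cv2.CV_64F).var()


def scan_video(path, sample_every=30, low=0.25, high=4.0):
    # Haar cascade face detector that ships with opencv-python.
    faces = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    cap = cv2.VideoCapture(path)
    frame_idx = flagged = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % sample_every == 0:  # sample every Nth frame only
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in faces.detectMultiScale(gray, 1.3, 5):
                face_region = gray[y:y + h, x:x + w]
                ratio = sharpness(face_region) / (sharpness(gray) + 1e-6)
                # A face far sharper or blurrier than its frame is suspicious.
                if ratio < low or ratio > high:
                    flagged += 1
        frame_idx += 1
    cap.release()
    return flagged


# Placeholder file name; a non-zero count just means "look closer".
print(scan_video("suspect_clip.mp4"), "suspicious face crops flagged")
```

The thresholds here are arbitrary, and an innocent video can still trip them (or a polished deepfake sail through), which is exactly why the human checks above remain the first line of defense.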

Proactively shaping the future we want

As author Yuval Noah Harari warns, "In the past, censorship worked by blocking the flow of information. In the twenty-first century, censorship works by flooding people with irrelevant information." Deepfakes have the potential to create this flood, drowning us in a sea of manufactured realities.

As we move ahead, we must ask ourselves: How do we preserve authenticity in an age of perfect replication? How do we protect the sanctity of human experience when it can be artificially constructed? These are not just technological challenges, but profound philosophical quandaries that will shape the very essence of what it means to be human in the 21st century.

What are your thoughts on the future of deepfakes? Do you believe they will ultimately bring more harm than good? How can we, as individuals and as a society, ensure that we are prepared to face the challenges that lie ahead?

Briar Prestidge
