Deepfakes are used for laughs and for more nefarious purposes. Here’s more about the concept and why it isn’t all fun and games.
Once largely confined to pornography, deepfakes are now popping up everywhere. More troubling: the technology behind them is getting better each year, which could have broad implications going forward.
Did Mark Zuckerberg really admit to having stolen data from billions of people? Did Barack Obama call Donald Trump “a complete dipshit?” Each of these videos is an example of a deepfake, a dangerous form of artificial intelligence technology that makes it possible to put words in someone’s mouth or place a different face on another person’s body. There are also “voice skins” and “voice clones,” which allow pranksters to mimic a real person’s voice.
When discussing deepfakes, there are two questions worth answering. First, what makes them a problem? And second, why have they become so effective? So let’s take a look.
Deepfakes: So, What’s the Problem?
No doubt, some deepfakes are funny — once you’re in on the joke. Imagine, for example, that the 44th president of the United States really did publicly call the 45th a bad name or that the original Wonder Woman, Lynda Carter, was somehow placed into the more recent movies that actually starred Gal Gadot.
Unfortunately, as deepfakes get better, significant problems could arise. For example, political campaigns could create fake videos showing their opponents in compromising positions. In doing so, they could swing an election’s outcome. In a more dangerous scenario, imagine the current president of the United States declaring war on China and the Chinese government believing it’s true.
Problems with deepfakes don’t have to involve celebrities or government officials. For example, with “voice skins,” a scammer could convince a parent their child is in trouble, or a fake IRS agent could swindle an unsuspecting citizen into handing over banking information.
Deepfakes can also fuel phishing scams, data breaches, reputation smearing, social engineering, automated disinformation attacks, and financial fraud, among many other abuses.
Why Are They Effective?
Deep learning technology has improved the quality of deepfakes in recent years. And yet, there are other reasons the line between fact and fiction is becoming increasingly blurred. According to Deep Fake Now, confirmation bias and false belief are also messing with our brains.
With confirmation bias, individuals search for, interpret, and favor information in ways that support their existing beliefs or values.
As the American Psychological Association (APA) explains, “Confirmation Bias is the tendency to look for information that supports, rather than rejects, one’s preconceptions, typically by interpreting evidence to confirm existing beliefs while rejecting or ignoring any conflicting data.”
In psychology, the theory of mind refers to the mental capacity to understand other people and their behavior. Grasping false belief is considered an important milestone in that theory: it’s the understanding that other people can believe things that aren’t true.
Back to Deep Fake Now: “Given enough time, someone, somewhere will inevitably use ‘alternative facts’ to support their agenda. And they’ll be likely to use deepfake technology to ‘prove’ their statements. It’s their way of spreading doubt and propaganda within societies.”
Both concepts show how deepfakes can be used as a tool to alter opinions.
Deepfakes: Is It Real or Fake?
Even though the technology behind deepfakes is getting better, there are ways to tell when a video is fake. In fact, Norton explains there are at least 15 sure-fire ways to determine fakery. Among these are:
- Unnatural eye movement. Eye movements that do not look natural — or a lack of eye movement, such as an absence of blinking — are red flags. It’s challenging to replicate the act of blinking in a way that looks natural. It’s also challenging to replicate a real person’s eye movements. That’s because someone’s eyes usually follow the person they’re talking to.
- A lack of emotion. You can also spot facial morphing or image stitches if someone’s face doesn’t seem to exhibit the emotion that should go along with what they’re supposedly saying.
- Unnatural coloring. Abnormal skin tone, discoloration, weird lighting, and misplaced shadows are all signs that what you’re seeing is likely fake.
- Digital fingerprints. Blockchain technology can also create a digital fingerprint for videos. While not foolproof, this blockchain-based verification can help establish a video’s authenticity. Here’s how it works: when a video is created, its content is registered to a ledger that can’t be changed, so the original can later be checked against any copy in circulation.
- Reverse image searches. Searching for the original image, or running a reverse image search, can unearth similar videos online and help determine whether an image, audio clip, or video has been altered in any way. While reverse video search technology is not publicly available yet, investing in a tool like this could be helpful.
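To make the digital-fingerprint idea above concrete, here is a minimal Python sketch of how content registration might work in principle. This is a toy illustration, not any real blockchain system: the `Ledger` class, its chained hashes, and the `fingerprint` helper are all hypothetical stand-ins for what a production verification service would do.

```python
import hashlib

def fingerprint(video_bytes: bytes) -> str:
    """Compute a SHA-256 digest of the raw video content."""
    return hashlib.sha256(video_bytes).hexdigest()

class Ledger:
    """A toy append-only ledger. Each entry chains in the previous
    entry's hash, so earlier records can't be altered unnoticed —
    the property a blockchain-backed registry relies on."""

    def __init__(self):
        self.entries = []  # list of (content_hash, chain_hash) tuples

    def register(self, video_bytes: bytes) -> str:
        """Record a video's fingerprint at creation time."""
        content_hash = fingerprint(video_bytes)
        prev_chain = self.entries[-1][1] if self.entries else "genesis"
        chain_hash = hashlib.sha256(
            (prev_chain + content_hash).encode()
        ).hexdigest()
        self.entries.append((content_hash, chain_hash))
        return content_hash

    def is_registered(self, video_bytes: bytes) -> bool:
        """Check a video against the ledger: any edit to the bytes
        changes the fingerprint, so tampered copies fail."""
        target = fingerprint(video_bytes)
        return any(content == target for content, _ in self.entries)

# Usage: register an original, then verify copies against it.
ledger = Ledger()
original = b"...raw video bytes..."
ledger.register(original)
print(ledger.is_registered(original))      # the untouched video verifies
print(ledger.is_registered(b"tampered"))   # an altered copy does not
```

The key design point is that even a one-byte change to the video produces a completely different fingerprint, which is why a fingerprint registered at creation time can later expose a manipulated copy.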
There are lots of examples of deepfakes online.
As you can see, deepfakes can be funny — hilarious, really. Unfortunately, they can also be dangerous when the technology is used illegally. It will be interesting to see where this goes in the years ahead.