
Deepfakes Are Making It Harder to Discover Deception in Media

Written by Mark DiMattei | Mar 27, 2023 12:15:14 PM

 


 

We (as a digital society) are savvy about photoshopped images, filters that make our pictures look better, and the countless social media influencers using camera angles to fake the perfect body. Hell, we’ve been making fake images as far back as the mid-19th century with spirit photos (made popular by William Mumler), so we’ve had plenty of time to figure out how to spot faked or falsified images.

 

“Ghost of Abraham Lincoln” by William Mumler.

 

Video used to be the tried-and-true way to determine whether something had actually happened. Video cameras have been embedded into doorbells for security. Smartphone video has become so popular that entire social media platforms have been designed around it (and others have adopted it into their photo-based concepts). News services rely heavily on video as a medium to show current events that affect us on a local, national, and global level.

 

But what happens when video can be falsified?

 

With the advent of artificial intelligence (AI) and ever more powerful hardware, we must now be as vigilant with our videos as we have been with our images. Part of this scrutiny lies at the feet of a relatively new concept: deepfakes. The term was popularized around 2017 on Reddit, where users would superimpose the faces of celebrities onto other people in static images and simple videos. Now, with AI, videos can be manipulated to show almost anyone doing almost anything.

 

Deepfakes are made for many reasons. Some are used for entertainment purposes. Other videos have a political bent. Unfortunately, a lot of them are adult in nature. According to an article in The Guardian, the AI firm Deeptrace found that 96% of the roughly 15,000 deepfake videos it located online in September 2019 were pornographic.

 

Deepfake videos are typically made by feeding thousands of face shots of the two people involved through an AI algorithm called an encoder. The encoder finds the similarities between the two faces and reduces them to their shared features, compressing the images in the process. A second algorithm called a decoder is then trained to recover faces from those compressed representations. Because the faces are different, one decoder is trained to recover person A’s face and another to recover person B’s. To perform the face swap, the compressed image of person A’s face is fed into the decoder trained on person B. That decoder then reconstructs person B’s face with the expressions and orientation of person A.
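To make that shared-encoder/two-decoder idea concrete, here is a minimal sketch in PyTorch. The architecture, sizes, and names (Encoder, Decoder, swap_a_to_b) are illustrative assumptions for this post, not the implementation of any particular deepfake tool:

```python
# Minimal sketch of the shared-encoder / two-decoder face-swap idea.
# Toy-sized and illustrative only; real tools use far larger networks.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 3x64x64 face image into a shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs one specific person's face from the shared code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a = Decoder()  # trained only on person A's faces
decoder_b = Decoder()  # trained only on person B's faces
loss_fn = nn.L1Loss()
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)

def training_step(faces_a, faces_b):
    """One step: each decoder learns to rebuild its own person's faces
    from the *shared* encoder's compressed representation."""
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

@torch.no_grad()
def swap_a_to_b(face_a):
    """The actual swap: compress person A's face, then decode it with
    person B's decoder -> B's identity with A's expression and pose."""
    return decoder_b(encoder(face_a))
```

In practice, deepfake tools add face detection, alignment, and blending steps on top of this, but the shared encoder is the key trick: because both decoders read the same compressed representation, one person’s expressions and head pose can drive the other person’s face.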

 

Example of facial morphing by Edward Webb.

 

The scary thing is that people don’t have to be tech geniuses to make deepfakes. Companies like Deepfakesweb.com or DeepSwap will help you make them through their subscription services. And if you have a computer with a high-end graphics card (or access to cloud computing for the processing), there are plenty of tools available for purchase that let you make deepfakes entirely on your own.

 

There is a danger to all of this, however. While deepfakes are not illegal to make and share, there are a lot of gray areas that creators need to be wary of. Depending on the content, a deepfake could infringe copyright, breach data protection law, or be considered defamatory if the video exposes the person to embarrassment and ridicule, to say nothing of the strict laws across the US concerning “revenge porn.” There are also outright illegal uses, such as scammers employing this technology to impersonate voices in phone calls.

 

On top of all these worries is an ethical concern that plagues a lot of AI use. Because it needs source material to build on, AI often takes content from online without crediting the original creator, be that an artist, writer, or videographer. Deepfake videos can also be used to spread false information with real-world consequences, especially when the target is political. We need to be very cautious about how we proceed with this emerging technology. We might not be on the path toward robotic sentience, Skynet, or HAL, but uses of AI like deepfakes can take something originally meant for entertainment and abuse it for nefarious ends.

 

Log in to the InfoCenter to view research on digital transformation, smart office, and the move into Industry 4.0 through our Office CompleteView Advisory Service. If you’re not a subscriber, just send us an email at sales@keypointintelligence.com for more info.

 
