Think That Video You’re Watching Might Be A Deepfake? Here’s What To Look For

You’re scrolling your Facebook News Feed when you come across a video of a well-known politician. You pause, turn up your phone’s volume and start watching.

You’re shocked when you hear the politician say something wildly offensive or when he starts slurring his words as if he’s under the influence of drugs or alcohol. What happened to him? you wonder to yourself. You click “share” so that all of your Facebook friends can see evidence of his bad behavior, too.

Here’s something you may not have considered while watching or sharing: The video was completely fake. Though it looked and sounded totally real, it was actually the product of specialized video editing and artificial intelligence.

These so-called “deepfake” videos are so convincing, so seemingly real, that they’re almost impossible to distinguish from the real thing. So, how can you be sure you’re watching a legitimate video and not having the wool pulled over your eyes? Here’s what you need to know to make sure you’re not being duped by fake videos and misinformation.

What Are Deepfakes?

To start, it helps to simply recognize and understand that deepfake videos even exist. Just knowing that there’s a possibility a video you’re watching could be doctored or totally fake is one of the first steps toward becoming a better-informed consumer of media.

So, what are deepfakes, exactly? According to the News Literacy Project, a nonpartisan nonprofit working to educate the public on how to identify credible news and information, deepfakes are digitally manipulated videos that make a person appear to say or do something that they never actually said or did. In short, they’re fabricated videos designed to look like reality.

Deepfakes are created using deep learning, a form of artificial intelligence in which a computer creates fake videos, images and even audio files. They’re so sophisticated that many people don’t realize they’re watching or listening to something that never actually happened.


Even technology-savvy high school students can’t always tell when a video has been manipulated or taken out of context. More than half of the high school students who participated in a recent study were fooled by a Facebook video that supposedly showed Democrats committing voter fraud but was actually a compilation of clips depicting ballot-box stuffing in Russia. The study, led by researchers at Stanford, found that 52% of students said the video presented strong evidence of voter fraud in America.

Similarly, you might encounter something called a “cheapfake,” which is a video or image that’s been taken out of context or badly edited. Think of cheapfakes as the low-tech version of deepfakes — they’re not quite as sophisticated, but they can still fool even savvy media consumers. A good example is a widely shared video of House Speaker Nancy Pelosi that had been slowed down to make it appear as though she had been drinking.

A Washington Post video compares the original footage with the manipulated version.

How Can You Spot A Deepfake?

Nobody likes to be misled or duped into believing something that’s not true, so how can you protect yourself against watching, listening to or sharing deepfakes and cheapfakes? There are a few simple red flags to be aware of — watch and read everything you come across very carefully, and keep your eyes and ears open for these warning signs.

1. Watch Their Eyes

Since deepfakes are created by computers, they don’t always accurately portray human behaviors, according to MIT’s Detect DeepFakes project. To start, look for anything that just seems off or abnormal about the person in the video, paying close attention to the person’s eyes. Does she ever blink? Or, if she does blink, do her eye movements seem jerky and unnatural? Computers have a hard time replicating natural human eye movements, so this can be a telltale sign that you’re watching a deepfake.

2. Consider The Coloring

Similarly, computers have a hard time with natural elements like skin tone, shadows and textures. Does the person’s skin look discolored or blotchy? Wrinkly or glassy smooth? Are the shadows from their nose or eyeglasses where they should be? These are all signs that you may be watching a deepfake.

3. Notice Any Awkwardness

You may also notice a lack of emotion, awkward body positioning, funky facial expressions or disjointed movements — all good indicators that you’re watching a deepfake.

4. Focus On The Teeth

Look at the person’s teeth — can you make out individual teeth, or are they one big white blob? Computers and algorithms often don’t create outlines of individual teeth in deepfake videos, according to computer security company Norton.

5. Listen Closely

While watching the video, pay close attention to the audio — does it match up exactly with the person’s mouth? If not, you may be watching a deepfake. Also keep an ear out for mispronounced words, disjointed or robotic-sounding voices, funky digital background noises and anything else that seems off.


6. When In Doubt, Dig Deeper

If you watch a video of someone saying or doing something outrageous, you should always approach it with a healthy dose of skepticism. Do some of your own research and digging — are reputable news organizations reporting on the video? Can you confirm basic details, such as the date, time or location of the video? Can you determine where the video originated and who created it?

Remember that deepfakes and other types of misinformation are designed to be inflammatory — in other words, if something seems too outrageous to be true, check it out. If you’re feeling a strong emotion while watching the video — outraged, excited, angry or curious, for instance — treat that as a cue to slow down and verify before you share, because provoking exactly that reaction is what manipulated media is designed to do. Anyone with a computer or phone can alter images, videos and audio files these days, and trolls are everywhere.

Be skeptical, and if you suspect you’re watching a deepfake, be sure to report it to whatever platform you found it on — Facebook and Twitter have both banned the use of malicious deepfakes.


This article is part of the second annual National News Literacy Week, Jan. 25-29, a national public awareness campaign to promote news literacy and the role of a free press in American democracy. The week is part of an ongoing partnership between Simplemost’s parent company, The E.W. Scripps Company, and the News Literacy Project. Visit NewsLiteracyWeek.org to test your own news literacy and take the pledge to be news-literate.

This story originally appeared on Simplemost. Check out Simplemost for additional stories.