Beware AI-generated audio, video fakes
The person at the other end of that video call certainly looks and sounds legitimate. Maybe it’s your grandchild or someone you’ve bonded with in the past.
Yes, it’s odd that they’re asking you to send them money or provide sensitive personal information, but you trust them.
Just one problem: They’re not real. Their image and voice have been generated through artificial intelligence (AI), and are being controlled behind the scenes by a scammer.
What you’re experiencing is a “deepfake” — a rapidly evolving technology often used for malicious acts.
The U.S. Government Accountability Office (GAO) defines a deepfake as video, photography or audio that “seems real but has been manipulated with AI. The underlying technology can replace faces, manipulate facial expressions, synthesize faces, and synthesize speech.”
More and more criminals are using AI deepfakes to commit identity fraud or pry money and data from businesses and individuals. The digital verification platform Sumsub reported an astonishing 1,740% jump in deepfake-related fraud attempts in North America between 2022 and 2023.
Cloned voices and faces
By creating a deepfake persona, fraudsters can trick people into believing they’re interacting with someone they know or want to know. This builds trust, making it easier for the scammer to manipulate the victim.
Cybercriminals can also use deepfakes to create compromising material for extortion. For example, they can take a brief snippet of a person’s real voice and use an AI tool to “clone” it, producing an authentic-sounding facsimile. The faked voice can then be made to say just about anything.
The majority of deepfake fraud cases thus far have targeted businesses. Even large global companies have fallen for these scams.
In one recent example, an employee at a multinational design and engineering firm was tricked by a deepfake video call into transferring $25 million of the company’s funds to fraudsters.
Many bad actors, meanwhile, are using deepfake audio and video in attempts to gain access to company data, which could result in breaches of customer information.
As this technology grows more sophisticated, it’s also getting easier to use — which means it’s becoming increasingly popular as a method to defraud individuals.
Deepfakes have made their way into the world of romance scams, according to a recent report in Wired. The article described how a crew of scammers used “deepfakes and face-swapping to ensnare victims in romance scams, building trust with victims using fake identities, before tricking them into parting with thousands of dollars.”
How to detect deepfakes
While a number of deepfake detection tools exist, many are available only to businesses. Most are also designed to analyze recordings, so they can’t help in real time during audio or video calls.
To recognize deepfakes in real time, you’ll most likely have to rely on your own powers of observation. The MIT Media Lab offers the following tips on how to determine whether a person seen on video is a deepfake.
Zero in on elements of the person’s face, the lab advises. These include:
• Cheeks and forehead — “Does the skin appear too smooth or too wrinkly? Is the agedness of the skin similar to the agedness of the hair and eyes?”
• Eyes and eyebrows — “Do shadows appear in places that you would expect?”
• Eyeglasses — “Is there any glare? Is there too much glare? Does the angle of the glare change when the person moves?”
• Blinking — “Does the person blink enough or too much?”
• Lip movements — “Some deepfakes are based on lip syncing. Do the lip movements look natural?”
In an article for the fact-checking website PolitiFact, Manjeet Rege, director of the Center for Applied Artificial Intelligence at the University of St. Thomas, and Siwei Lyu, a computer science and engineering professor at the University at Buffalo, listed clues that a voice might actually be an audio deepfake.
These include “irregular or absent breathing noises, intentional pauses and intonations, along with inconsistent room acoustics.”
Use your common sense
One thing is clear: Deepfake technology is evolving at such speed that it will become progressively more difficult to tell fiction from reality.
Today you might be able to spot a weird glitch in a person’s face on video, or a strange vocal pattern on a call. But those flaws might not be as noticeable a year or two from now.
Beyond the observational tips offered here, your best defense is common sense. If someone contacts you by phone or video, even someone you think you know and trust, and makes an unusual request or demand involving money or sensitive information, step back and assess the situation.
Do whatever you can to independently verify that what the person is telling you is true.
As AI expert Rege said in the PolitiFact interview, “Healthy skepticism is warranted, given how realistic this emerging technology has become.”
This article was originally published by ZeroFox.com. Reprinted with permission.