January 16, 2026

The Automated Con: Mitigation Tactics for Identifying Deepfake and LLM-Assisted Impersonation

Generative AI has made it simple for threat actors to craft convincing cons with deepfake video, audio, and image files.

Over the past few years, artificial intelligence (AI) has supercharged deepfake technology. Creating a fake picture, video, or audio recording of a person used to require a considerable investment of both time and technical skills. Now, generative AI (genAI) platforms can whip up convincing deepfakes in minutes, using only a single photo or short voice clip as a starting point. Threat actors can effectively create “automated cons,” which they can scale and customize for any number of potential victims.

While deepfake scams can be harder to identify than ordinary social engineering, you can still mitigate them with the right technology and training. To protect your organization from automated cons, you’ll first need to understand the different types of deepfakes and the threats they represent. From there, you can adopt proven strategies to help employees recognize, report, and counteract common scams.

Common types of deepfakes

What is a deepfake? The U.S. National Security Agency (NSA) defines a deepfake as “multimedia that has either been synthetically created or manipulated using some form of machine or deep learning (artificial intelligence) technology.”

The U.S. Congress elaborates on this definition: Deepfakes depict real people doing things that they never actually did. Types of deepfakes include:

  • Videos
  • Sound recordings
  • Photos
  • “Representation[s] of speech,” such as text or email messages

Each of these categories raises thorny ethical issues regarding consent, autonomy, ownership, privacy, and transparency. For IT administrators, each one also presents different cybersecurity challenges. To combat different kinds of deepfakes, it helps to know how threat actors use each one.

1. Video deepfakes

Video deepfakes are “recordings” of people doing or saying made-up things. They feature motion and sound, just like real video recordings. Until recently, video deepfakes required specialized skills in image manipulation, sound splicing, and video editing. Now, generative AI tools such as Google Gemini and OpenAI’s Sora can produce realistic fake videos from just a few text prompts. While these programs have ethical guardrails in place, threat actors can find software without those restrictions. From there, attackers can threaten or harass employees with “footage” of compromising things that they never really did.

While video deepfakes are getting better by the day, they still tend to have a few telltale signs. To identify video deepfakes, look for:

  • Overly smooth skin
  • Unnatural body movements
  • Inconsistent hairstyles or colors
  • Poor lip syncing

2. Audio deepfakes

Audio deepfakes are sound files that mimic a person’s voice. These forgeries are relatively easy to create, especially when an attacker has a genuine recording of the target saying more than a few sentences. There are a few different ways for threat actors to use audio deepfakes:

  • Logging into voice-protected accounts
  • Fooling friends, family, or coworkers on the phone
  • Discrediting a person by making them “say” defamatory or bigoted things online

Luckily, audio deepfakes are not too difficult to identify in real-time conversations. Listen for unnatural pauses, particularly after you ask questions, since threat actors need time to modify their voices or create new recordings. For prerecorded deepfakes, keep an ear out for stilted, unnatural cadences or mispronounced words.

3. Deepfake images

Manipulated images are arguably as old as photography itself. In a modern context, deepfake images are realistic still photographs of real individuals in made-up situations. A dedicated threat actor could produce photographic “evidence” of a potential victim in just about any unpleasant situation.

Deepfake images are difficult to detect, so you may have to rely on your gut instincts to know something is off. From there, look for:

  • Inconsistencies in the color palette, or misplaced shadows
  • Impossible perspectives
  • Unnatural numbers of teeth or fingers

If possible, you should also check where the image first appeared. A picture on a reputable photography site has more credibility than one from a throwaway social media account.
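One quick supporting signal is the image’s metadata. Photos straight from a camera usually carry EXIF fields such as the camera model and capture time, while many AI-generated images ship with little or none. Below is a minimal Python sketch using the Pillow library; the file name is a placeholder, and missing EXIF proves nothing on its own, since metadata is easily stripped or forged:

```python
from PIL import Image  # pip install Pillow
from PIL.ExifTags import TAGS

def exif_summary(path: str) -> dict:
    """Return human-readable EXIF tags, or an empty dict if none exist."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

# "suspicious_photo.jpg" is a placeholder file name.
tags = exif_summary("suspicious_photo.jpg")
if not tags:
    print("No EXIF metadata found; a weak signal worth a closer look.")
else:
    print(f"Camera: {tags.get('Model', 'unknown')}, captured: {tags.get('DateTime', 'unknown')}")
```

Treat this as one signal among many: a genuine photo shared through social media often loses its EXIF data too.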

4. AI impersonation

Large language models (LLMs) are especially good at generating text, and threat actors know how to take advantage of that. By feeding a person’s social media posts, text messages, or emails into an AI chatbot, an attacker could create a realistic facsimile of that person’s online voice. From there, they could send convincing messages to a coworker, asking for money or login credentials. This is an especially potent tactic for phishing or executive impersonation (whaling) scams.

AI impersonation is relatively easy to uncover, since a quick phone call or in-person conversation with the real person will almost always clear things up. To discourage victims from verifying, threat actors use a few familiar pressure tactics:

  • Urgency, so there’s no time to think things over
  • Appeals to emotion, where the victim will feel bad for not “helping”
  • Threats of punishment, in which the victim’s job or well-being is on the line

How to mitigate deepfake threats

Knowing how to identify an automated con is arguably the best deepfake fraud prevention tactic. However, some attempts will be more convincing than others, especially as genAI systems continue to evolve. Use these methods to protect your organization from deepfake attacks:

1. Multi-factor authentication (MFA) systems

Multi-factor authentication (MFA) is a tried-and-true method for preventing cyber attacks, including those involving deepfakes. With MFA, a correct username and password aren’t enough to log in to a network. Users must also provide a second factor, such as a randomly generated one-time code from a mobile device. Even if a deepfake tricks an employee into giving up their login info, a threat actor can’t access your systems without that second factor.

Just be aware that some services use voice recognition as either a primary or secondary way of verifying users. Given the prevalence of audio deepfakes, it’s probably best to avoid voice-based verification and rely on alphanumeric MFA codes instead.
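To see why an MFA code blunts a deepfake-phished password, here’s a minimal sketch of time-based one-time password (TOTP) verification using the open-source pyotp library. The secret handling is simplified for illustration; in practice you’d rely on a vetted identity provider rather than hand-rolled checks:

```python
import pyotp  # pip install pyotp

# Shared secret, provisioned once (usually via a QR code the user
# scans into an authenticator app). Placeholder value for this demo.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The user submits the six-digit code currently shown in their app.
# Here we simulate a correct submission.
submitted_code = totp.now()

# verify() checks the code against the current 30-second window, so a
# password phished via deepfake is useless without the live code.
print("Access granted" if totp.verify(submitted_code) else "Access denied")
```

Because codes rotate every 30 seconds, credentials harvested through a deepfake voice call go stale almost immediately.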

2. Email gateway filtering

Email is an ideal medium for phishing scams. Threat actors can easily mimic a person’s online voice, spoof an email address, and hide malicious links in legitimate-looking URLs. That’s where a good email gateway filtering strategy can make a difference. Email filtering can detect and block phishing attempts by analyzing everything from the sender’s address to the words and phrases in the body copy. Even if they use deepfake multimedia or AI impersonation, malicious messages often fall back on shady email domains or links with obvious homograph attacks. Email gateway filtering can also flag the same suspicious message being sent to multiple members of your team.
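To make the homograph idea concrete, here’s a hedged sketch of one heuristic a filter might apply: decoding punycode (xn--) sender domains and mapping visually confusable characters onto an ASCII “skeleton” before comparing against trusted domains. Production gateways use far richer rules (for example, the full Unicode TR39 confusables table); the allowlist and tiny confusables map below are illustrative assumptions:

```python
import unicodedata

TRUSTED = {"lookout.com", "example.com"}  # illustrative allowlist

# Tiny stand-in for a full Unicode confusables table (see Unicode TR39).
CONFUSABLES = {"а": "a", "е": "e", "о": "o", "і": "i", "ѕ": "s"}  # Cyrillic lookalikes

def skeleton(domain: str) -> str:
    """Map visually confusable characters onto their ASCII lookalikes."""
    normalized = unicodedata.normalize("NFKC", domain.lower())
    return "".join(CONFUSABLES.get(ch, ch) for ch in normalized)

def looks_like_homograph(sender_domain: str) -> bool:
    # Punycode (xn--) domains decode back to Unicode for comparison.
    if sender_domain.startswith("xn--"):
        sender_domain = sender_domain.encode("ascii").decode("idna")
    skel = skeleton(sender_domain)
    # Flag domains that imitate a trusted name without actually being one.
    return skel in TRUSTED and sender_domain not in TRUSTED

# "lооkоut.com" uses Cyrillic "о" characters; prints True.
print(looks_like_homograph("lооkоut.com"))
```

A gateway running even this simple check would catch lookalike domains that pass a casual visual inspection.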

3. Employee awareness training

Your best defense against deepfake attacks is a skeptical, well-educated workforce. Employees should know how to identify deepfakes and how to report them to the IT or security team. Organize workshops to review the telltale signs of deepfake videos, audio, and images, and give practical tests to see whether employees can identify obvious (and not-so-obvious) forgeries. Even excellent deepfakes still have flaws, and as long as your staff members know what to look for, they can avoid many of the scams that come their way.

Defend your organization’s mobile devices

When threat actors create deepfake videos, audio, images, and text, it’s often the first step in a phishing scheme. With small screens and constant, easy access, mobile devices are especially susceptible to phishing. Download The Mobile EDR Playbook: Key Questions for Protecting Your Data from Lookout to learn how you can safeguard your organization’s mobile devices. This resource asks four questions that can help you develop a comprehensive strategy for smartphones and tablets. Deepfakes will continue to improve, but so can your approach to mobile cybersecurity.

Book a personalized, no-pressure demo today to learn:

  • How adversaries are leveraging avenues outside traditional email to conduct phishing on iOS and Android devices
  • Real-world examples of phishing and app threats that have compromised organizations
  • How an integrated endpoint-to-cloud security platform can detect threats and protect your organization
