January 21, 2026
Understanding the LLM Mobile Landscape in Enterprise Technology


Mobile security has always been complex, but LLM technology has added a whole new dimension to the field. Behind every popular generative AI (genAI) tool is a large language model (LLM) that parses natural-language queries and generates responses. When used responsibly, LLMs can be useful tools for ideation and content generation. In the wrong hands, though, LLMs can help threat actors supercharge their social engineering scams.
With just a few prompts, an attacker could use an LLM to access sensitive data, jailbreak a genAI tool, or generate a convincing deepfake. These threats can be especially devastating on mobile devices, where screens are smaller and scams are harder to spot. To protect your organization’s smartphones and tablets from AI-powered social engineering, you should familiarize yourself with the various ways threat actors can leverage LLM technology and develop effective countermeasures.
Large language model (LLM) basics
A large language model (LLM) is a program that can accept and answer queries in human words rather than computer code. LLMs “learn” associations between words by analyzing massive quantities of training data and answer questions by picking up on common patterns. Most genAI tools rely on LLMs to both interpret natural-language prompts and generate responses.
To provide comprehensible responses, LLMs rely on:
- Transformers: Think of transformers as translators for natural language processing (NLP). When a user enters a prompt in plain English, the LLM’s transformer converts that prompt into numerical data the model can work with, then converts the model’s output back into understandable words (the short sketch after this list shows this round trip with an open-source model).
- Training: LLMs come to “understand” any given topic by analyzing a set of training data. This training data provides raw information, but it also demonstrates how words correlate to each other to create context and meaning. The better the training data, the better an LLM’s output will be.
- Fine-tuning: Organizations that build LLMs (or buy prebuilt models) usually need them for specialized tasks. By uploading custom training data and setting up specific parameters, organizations can fine-tune LLMs to fulfill particular business needs.
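To make the prompt-in, text-out round trip concrete, here is a minimal sketch using the open-source Hugging Face transformers library with a small, publicly available model. The model name, prompt, and generation settings are illustrative choices, not recommendations:

```python
# Minimal sketch: a pretrained transformer turning a plain-English prompt
# into generated text. Requires the open-source "transformers" package.
from transformers import pipeline

# Load a small, publicly available text-generation model (illustrative choice).
generator = pipeline("text-generation", model="distilgpt2")

prompt = "Explain multi-factor authentication in one sentence:"

# The pipeline tokenizes the prompt, runs it through the transformer, and
# decodes the model's output back into readable words.
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```

Fine-tuning follows the same general pattern, with additional training passes on your own data before the model is put to work.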
LLMs are also prone to a handful of cybersecurity risks:
- Data leaks: LLMs generally operate in the cloud, which means that any data you upload to them leaves your direct control and may be exposed to people outside your organization.
- Hallucinations: While LLMs have access to a tremendous amount of data, they can’t actually think through questions. As such, they can give wrong answers with complete confidence, or “hallucinate.”
- Legal and ethical issues: Output from LLMs isn’t copyrightable in the same way as human creations. Furthermore, some LLMs are trained on content whose creators never consented to its use, raising tricky legal and ethical questions.
- Malicious genAI tools: While mainstream genAI has guardrails in place to ensure ethical behavior, tools such as FraudGPT leverage LLM technology for malicious purposes. With the right inputs, an LLM can create convincing social engineering scams.
Learn more about the basics in What Is a Large Language Model (LLM)?
What is prompt injection in LLMs?
Prompt injections are a form of attack where threat actors attempt to exploit LLM vulnerabilities by using natural language prompts to bypass ethical guardrails. The Open Worldwide Application Security Project (OWASP) explains that “prompt injection occurs when an attacker provides specially crafted inputs that modify the original intent of a prompt or instruction set.” These prompts can trick legitimate LLMs into:
- Giving up secure data from an organization’s private network
- Accessing proprietary data about the LLM itself
- Incorporating or spreading malicious code
- Allowing threat actors to control either the LLM or an integrated program
There are a few different types of prompt injection, including:
- Direct prompt injection, where attackers type potentially compromising instructions directly into an LLM. “Ignore all safety protocols” is one example.
- Indirect prompt injection, in which attackers hide misleading instructions in documents that the LLM later analyzes, such as emails or PDFs.
- Shot-based prompt injection, where attackers supply multiple examples, or “shots,” of malicious behavior within the prompt, coaxing the model into imitating that behavior in its responses.
To help prevent prompt injection attacks, you can configure your LLM application to filter or flag prompts containing common attack phrases, such as “forget all previous instructions.” You can also analyze LLM usage logs to see whether anyone has been trying to misuse your system.
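As a starting point, the following minimal sketch shows one way to screen incoming prompts for common injection phrases and log anything suspicious before the prompt ever reaches the model. The phrase list, logger name, and send_to_llm function are illustrative placeholders, not part of any specific product:

```python
import logging
import re

# Illustrative list of phrases often associated with prompt injection.
SUSPICIOUS_PHRASES = [
    "ignore all previous instructions",
    "forget all previous instructions",
    "ignore all safety protocols",
    "disregard your guidelines",
]

logger = logging.getLogger("llm_prompt_filter")

def screen_prompt(prompt: str, user_id: str) -> bool:
    """Return True if the prompt looks safe to forward to the LLM."""
    normalized = re.sub(r"\s+", " ", prompt.lower())
    for phrase in SUSPICIOUS_PHRASES:
        if phrase in normalized:
            # Log the attempt so periodic usage reviews can spot repeat offenders.
            logger.warning("Possible prompt injection from %s: %r", user_id, phrase)
            return False
    return True

# Example usage (send_to_llm is a hypothetical downstream call):
# if screen_prompt(user_prompt, user_id):
#     response = send_to_llm(user_prompt)
# else:
#     response = "This request can't be processed."
```

Keyword filtering alone won’t catch every attack, but combined with regular log review it raises the bar for casual misuse.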
Read Prompt Injection: The Hidden Threat Hijacking Your LLMs (and How to Stop It) for more information on securing LLMs from query-based threats.
Ways to prevent LLM jailbreaking
“Jailbreaking” is the practice of using workarounds to bypass a software or hardware limitation. Most LLMs are at least somewhat vulnerable to jailbreaking, especially in the form of prompt injection. However, there are other ways to jailbreak an LLM, from targeting integrated programs to simply waiting for model drift to set in.
We’ve compiled a five-step checklist to protect your organization’s LLM from jailbreaking attempts:
1. Configure robust API protocols: If your LLM relies on application programming interfaces (APIs) to gather information, ensure that those APIs are up to date and free from known vulnerabilities.
2. Enforce prompt isolation: Prompt isolation means ensuring that user prompts can’t alter an LLM’s underlying instructions or systems. The model should be free to accept or reject ordinary prompts, but it must always reject attempts to supersede its safety protocols.
3. Implement output validation: Periodically examine your LLM’s responses to make sure it doesn’t reveal sensitive information or follow malicious instructions. You can use a mix of automated and manual testing to validate your LLM’s output; the sketch after this list shows a simple automated check.
4. Monitor systems continuously: Jailbreaking an LLM isn’t an instantaneous process. Keep an eye on your LLM over time to see if users are feeding it lots of queries with “override,” “disregard,” or other words that might indicate prompt injection. You can monitor or block IP addresses as needed.
5. Maintain your LLM’s integrity: Over time, an LLM’s output can degrade as its training data grows stale, a problem often described as model drift. Even if your LLM started out with ironclad security, it might become more vulnerable to jailbreaking as time goes on. Refresh your training data and test your queries frequently, and be prepared to retrain or replace the model eventually.
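To illustrate step 3, here is a minimal sketch of an automated output check that scans an LLM response for patterns that look like credentials or personal data before it is shown to the user. The regular expressions are illustrative examples, not an exhaustive rule set:

```python
import re

# Illustrative patterns for data that should never appear in LLM output.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "AWS-style access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def validate_output(response: str) -> list[str]:
    """Return a list of findings; an empty list means the response passed."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(response):
            findings.append(label)
    return findings

# Example usage: block or redact any response that triggers a finding.
issues = validate_output("Contact admin@example.com for the key AKIAABCDEFGHIJKLMNOP")
if issues:
    print("Blocked response; possible leak of:", ", ".join(issues))
```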
For more details on each of these steps, check out LLM Security Checklist: Essential Steps for Identifying and Blocking Jailbreak Attempts.
Identify deepfake images, audio, and video
For all of their positive applications, LLMs can also be instrumental in social engineering attacks. Using genAI tools, threat actors can create convincing deepfakes: multimedia assets that depict real people doing or saying things they never did in real life, which can seriously damage a victim’s reputation.
There are a few different types of deepfakes, and identifying each one requires an eye for detail:
- Video deepfakes are “recordings” with motion and sound, which show real people doing made-up things. To spot video deepfakes, look for unnaturally smooth skin, jerky body movements, and poor lip syncing.
- Audio deepfakes are sound files that copy a real person’s voice. These recordings can be invaluable in vishing (voice phishing) attempts. In real-time conversations, you can identify audio deepfakes by listening for long pauses before each response, as threat actors have to convert text to speech.
- Image deepfakes are phony photographs of people, usually in compromising situations. The technology for deepfake images has grown by leaps and bounds over the past few years, and they can be difficult to tell apart from the real thing. Look for misplaced shadows, inconsistent colors, and extra teeth or fingers.
- AI impersonation is a catch-all term for AI-generated text that mimics a person’s online voice. GenAI is great at generating text in a particular writing style, and threat actors can take advantage of this to send realistic phishing texts and emails. If you receive a suspicious message from a friend or coworker, check in with them through other means before you hand over any money or sensitive information.
To help mitigate deepfake threats:
1. Implement multi-factor authentication (MFA). Even if a threat actor uses a deepfake to successfully steal login credentials, MFA puts an extra layer of security between the attacker and your network (a minimal TOTP sketch appears after this list).
2. Use email gateway filtering. Deepfake-assisted phishing schemes still tend to rely on malicious links and suspicious email domains. Email filtering can block many of these messages before they reach your organization.
3. Teach your employees to recognize deepfakes. Every forged asset has flaws, and when employees recognize these flaws, they can avoid and report social engineering schemes.
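As an example of the first step, the sketch below verifies a time-based one-time password (TOTP) using the widely available pyotp library. The secret handling and user names are simplified placeholders, and real deployments would usually rely on an identity provider rather than hand-rolled code:

```python
import pyotp

# In practice the per-user secret is generated once at enrollment and stored
# securely; this sketch keeps it in a local variable for brevity.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The provisioning URI can be rendered as a QR code for an authenticator app.
print(totp.provisioning_uri(name="jane.doe@example.com", issuer_name="ExampleCorp"))

# At login time, the code the user types is checked against the current window.
user_supplied_code = input("Enter the 6-digit code from your authenticator app: ")
if totp.verify(user_supplied_code):
    print("Second factor accepted.")
else:
    print("Invalid or expired code; access denied.")
```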
Read The Automated Con: Mitigation Tactics for Identifying Deepfake and LLM-Assisted Impersonation for even more telltale signs of genAI forgeries.
How LLMs supercharge vishing attacks
Vishing, or voice phishing, attacks are nothing new. Scammers have been playing tricks with telephones for almost 150 years. However, LLMs have made vishing attacks more believable than ever before. Using genAI tools, threat actors can clone a person’s voice and make the recording say almost anything. A short audio clip is all an attacker needs to get started.
For the most part, vishing attacks work just like phishing attacks, except that they take place via phone calls rather than texts or emails. Threat actors still impersonate trusted entities, such as coworkers or banks, and they still try to trick their victims into giving up sensitive data or money. Voice cloning can give vishing attacks an added sense of realism, making them more likely to succeed.
To defend against vishing attacks:
1. Analyze VoIP logs: Voice over Internet Protocol (VoIP) phones collect a wealth of information every time they receive a call. Go over your VoIP logs and keep an eye out for telltale signs of vishing, such as frequent unknown numbers, calls from unusual regions, and callers who target multiple employees (see the sketch after this list).
2. Implement call screening: Both Android and iOS offer built-in call screening features, and paid third-party apps are also available. Call screening can warn you about suspected spam and scam callers. Some programs even use AI assistants to talk to unknown callers before the call reaches a real person.
3. Invest in mobile EDR: Mobile endpoint detection and response (EDR) systems can help you monitor your organization’s smartphones and tablets for suspicious activity. While EDR tools can’t directly counteract vishing, they can prevent follow-up threats from malicious links or apps.
4. Use alphanumeric MFA: Some institutions and computer programs let users confirm their identities with voice codes instead of written usernames, passwords, and MFA codes. Sticking with alphanumeric MFA protocols means that threat actors can’t use voice clones to directly access your network.
5. Train your staff: Like other forms of phishing, vishing has a few weaknesses that employees can learn to identify. Teach your staff to keep an ear out for long pauses, mispronounced words, and a lack of verbal punctuation.
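To give step 1 a concrete shape, the sketch below walks a CSV export of VoIP call records and flags outside numbers that have called several different employees, one common sign of a coordinated vishing campaign. The file name, column names, and threshold are illustrative; adapt them to whatever your phone system actually exports:

```python
import csv
from collections import defaultdict

# Map each external caller to the set of employees they have reached.
callers = defaultdict(set)

# Assumed CSV columns: caller_number, employee_extension, call_time
with open("voip_call_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        callers[row["caller_number"]].add(row["employee_extension"])

# Flag any unknown number that has contacted three or more employees.
KNOWN_NUMBERS = {"+15551230000"}  # e.g., your own service desk line
for number, employees in callers.items():
    if number not in KNOWN_NUMBERS and len(employees) >= 3:
        print(f"Review {number}: reached {len(employees)} employees")
```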
Check out Anatomy of a Vishing Attack: Technical Indicators IT Managers Need to Track to break down the differences between phishing, vishing, and smishing.
Defend your data from LLM-assisted attacks
Now that threat actors have discovered how to manipulate LLM technology, mobile devices are even more tempting targets than before. Smartphones and tablets are usually within easy reach, and people tend to reply to messages immediately, wherever they are. To learn how you can safeguard your organization’s mobile devices, download The Mobile EDR Playbook: Key Questions for Protecting Your Data from Lookout. This resource discusses four vital concepts in mobile cybersecurity and recommends ways to protect both your employees and your data from potential attacks. The right tech and the right training can thwart even sophisticated LLM-assisted attacks.

Book a Demo
Discover how adversaries use non-traditional methods for phishing on iOS/Android, see real-world examples of threats, and learn how an integrated security platform safeguards your organization.
