September 5, 2025
The Double-Edged Sword: Benefits and Risks of AI Transformations


Over the past few years, artificial intelligence (AI) has transformed millions of organizations worldwide. AI can automate rote tasks, facilitate natural-language interfaces, and pick up subtle patterns in huge data sets. It can also hallucinate wrong answers, reinforce societal biases, and even introduce cybersecurity risks. Before incorporating the technology into their workflows, responsible organizations must weigh the benefits and risks of AI. That’s especially true for cybersecurity, where AI has been both a tool and a weapon.
Malicious large language models (LLMs) have made it easier than ever for threat actors to phish their targets and spread malware. However, AI has also empowered security researchers and IT administrators to detect, identify, and counteract digital threats more effectively.
As generative AI (GenAI) grows more sophisticated and agentic AI becomes a reality, navigating the cybersecurity landscape will require cooperation between business leaders, security professionals, government officials, and even professional ethicists. To protect your organization while leveraging powerful AI tools, learn what the technology can do today — and what it might be able to do in the near future.
WormGPT’s legacy lives on in FraudGPT
Less than a year after ChatGPT launched for the general public, threat actors had already found a way to twist the technology to their own ends. A Portuguese programmer named Rafael Morais created WormGPT: an LLM that functioned like ChatGPT, but without any ethical guardrails. Users could ask WormGPT to generate phishing messages, social engineering schemes, or sophisticated malware, and the tool would comply. WormGPT essentially democratized cyber crime, allowing threat actors to craft sophisticated schemes with simple, natural-language queries.
While Morais shut down WormGPT after only a few months, the damage was done. Other threat actors realized that programming malicious LLMs was both feasible and potentially profitable. Enter FraudGPT and about half a dozen similar tools. FraudGPT works just like WormGPT did, providing phishing messages, copycat login websites, malware packages, and other deceptive assets in response to simple prompts. Unlike WormGPT, FraudGPT doesn’t seem to be going away anytime soon.
While threat actors use AI to defraud organizations, security professionals should know how to use AI to defend sensitive data. Cybersecurity suites now incorporate AI and machine learning (ML) tools, which can analyze massive quantities of threat data and identify patterns over time. Cybersecurity software can often identify and thwart creative phishing attempts or novel malware, even if the program has never encountered those exact threats before.
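To make that idea concrete, here is a minimal sketch of how an unsupervised anomaly detector can flag activity it has never seen before. The feature set, values, and threshold are hypothetical stand-ins for the much richer telemetry that commercial security tools analyze.

```python
# Minimal sketch: flag anomalous sessions with an unsupervised model.
# Feature names and values are hypothetical; real tools use richer telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline of "normal" sessions: [bytes_sent_kb, login_hour, failed_logins]
normal_sessions = np.column_stack([
    rng.normal(200, 50, 1000),   # typical upload volume
    rng.normal(14, 3, 1000),     # logins cluster around business hours
    rng.poisson(0.2, 1000),      # failed logins are rare
])

detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_sessions)

# A session the model has never seen: a huge upload at 3 a.m. after many failures.
suspicious = np.array([[5000, 3, 12]])
print(detector.predict(suspicious))  # -1 means "anomalous"
```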
Some tried-and-true cybersecurity techniques are also effective in counteracting AI threats. For example, training employees to spot familiar signs of cyber crime still works well. Even though AI tools can eliminate typos and mimic familiar writing styles, phishing messages still arrive from slightly altered URLs and email addresses that alert employees can catch. Verifying a suspicious request through a separate, trusted channel, such as a quick message or phone call, is still enough to stop most attempted attacks. Zero-trust frameworks limit employee access to sensitive information, while mobile endpoint security tools help safeguard smartphones against phishing and malware.
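The lookalike-address check in particular is easy to automate. The sketch below is a simplified illustration, assuming a hypothetical allowlist of trusted domains, of how a filter might flag sender domains that are close to, but not exactly, a trusted domain.

```python
# Minimal sketch: flag sender domains that nearly match a trusted domain.
# The trusted_domains set is hypothetical; use your organization's own list.
from difflib import SequenceMatcher

trusted_domains = {"lookout.com", "examplebank.com"}

def lookalike_score(domain: str) -> float:
    """Return the highest similarity between a domain and any trusted domain."""
    return max(SequenceMatcher(None, domain, trusted).ratio()
               for trusted in trusted_domains)

def is_suspicious(sender: str) -> bool:
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in trusted_domains:
        return False                      # exact match: trusted
    return lookalike_score(domain) > 0.8  # near match: likely spoofed

print(is_suspicious("billing@examp1ebank.com"))  # True: a lookalike domain
print(is_suspicious("support@lookout.com"))      # False: exact trusted domain
```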
For more information about malicious LLMs, read "How to Defend Against WormGPT-Driven Phishing and Malware" and "FraudGPT and the Future of Cybercrime: Proactive Strategies for Organizational Protection."
Polymorphic malware leverages adversarial AI
AI excels at simple, repetitive, predictable tasks, from alphabetizing spreadsheets to diagnosing software errors. Unfortunately, threat actors can leverage that same methodical nature to compromise sensitive systems and data. Adversarial AI and ML algorithms can infiltrate organizations through precise, frequent, and constantly evolving attacks.
Common adversarial AI techniques include:
- Model extraction: An adversarial AI feeds countless queries into an organization’s proprietary LLM and collates the responses. With enough query-and-response pairs, an attacker can train a close copy of the model, effectively stealing a custom tool (the sketch after this list illustrates the idea).
- Model inversion: Many models are trained on data that includes personally identifiable information (PII) and financial details. With carefully chosen inputs, an adversarial AI can reverse-engineer a model and recover some of that information.
- Data poisoning: AI models learn from training data that most human operators never inspect. Attackers can slip manipulated examples into otherwise-innocuous training data, either degrading the tool’s accuracy or creating a hidden backdoor.
- Evasion attacks: Unlike data poisoning, evasion attacks target models that are already fully trained. An attacker submits malicious inputs deliberately crafted to look like the benign data a model would normally process.
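To make the first technique on the list concrete, here is a deliberately toy sketch of model extraction: an "attacker" who can only query a victim classifier harvests its predictions and trains a surrogate that approximates it. The models and data are synthetic stand-ins, not a recipe for attacking any real system.

```python
# Minimal, purely illustrative sketch of model extraction on a toy classifier.
# The "victim" stands in for a proprietary model exposed only through queries.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = LogisticRegression(max_iter=1000).fit(X, y)   # the proprietary model

# Attacker's view: send queries, record only the returned labels.
queries = make_classification(n_samples=2000, n_features=10, random_state=1)[0]
stolen_labels = victim.predict(queries)

# Train a surrogate on the harvested query/label pairs.
surrogate = DecisionTreeClassifier(random_state=0).fit(queries, stolen_labels)

# The surrogate now approximates the victim's behavior on new inputs.
test = make_classification(n_samples=500, n_features=10, random_state=2)[0]
agreement = (surrogate.predict(test) == victim.predict(test)).mean()
print(f"Surrogate matches the victim on {agreement:.0%} of unseen queries")
```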
Adversarial AI also powers polymorphic malware. These malicious programs can adapt their characteristics in real time, changing their file names, sizes, and encryption details to evade detection. Because polymorphic malware continuously changes its appearance, even seasoned IT professionals may have a hard time detecting it.
As with malicious LLMs, countermeasures for adversarial AI and polymorphic malware include AI-powered detection tools and staff training. Additionally, administrators should regularly sanitize their training and operational data by removing any corrupted, duplicated, or incomplete records.
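A basic version of that sanitization step can be scripted. The sketch below assumes records arrive as simple dictionaries with hypothetical field names and drops incomplete entries and exact duplicates before the data reaches a model.

```python
# Minimal sketch: drop incomplete and duplicate records before training.
# Field names are hypothetical; adapt them to your own data schema.

REQUIRED_FIELDS = ("user_id", "timestamp", "event_type")

def sanitize(records: list[dict]) -> list[dict]:
    """Keep only complete, first-seen records."""
    seen = set()
    clean = []
    for record in records:
        # Skip incomplete records (missing or empty required fields).
        if any(not record.get(field) for field in REQUIRED_FIELDS):
            continue
        # Skip exact duplicates of records we've already kept.
        key = tuple(sorted(record.items()))
        if key in seen:
            continue
        seen.add(key)
        clean.append(record)
    return clean

raw = [
    {"user_id": "u1", "timestamp": "2025-09-05T10:00", "event_type": "login"},
    {"user_id": "u1", "timestamp": "2025-09-05T10:00", "event_type": "login"},  # duplicate
    {"user_id": "u2", "timestamp": "", "event_type": "login"},                  # incomplete
]
print(len(sanitize(raw)))  # 1
```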
Read Adversarial AI and Polymorphic Malware: A New Era of Cyber Threats to learn precisely how each type of adversarial AI attack works.
Agentic AI poses ethical challenges
Once it becomes widely available, agentic AI could represent a quantum leap over current GenAI technology. While GenAI can only respond to direct human queries, agentic AI would be autonomous, able to carry out complex procedures after receiving general instructions.
As a simple example, a GenAI chatbot at a computer repair company can only suggest probable solutions drawn from its training data. An agentic AI chatbot could reason through a customer’s specific problem and propose a brand-new solution. This kind of general problem-solving could benefit just about any industry, from planning corporate travel on a budget to finding more efficient routes for delivery trucks.
Agentic systems also raise some ethical questions about the benefits and risks of AI. If an agentic AI can think and reason like a human, then who is ultimately responsible for its behavior? Could an AI harm someone, either through negligence or on purpose? What would happen if an agentic AI created its own moral systems, rejecting human values in the process? While these dilemmas are still (at least) a few years off, the time to start thinking about them is now.
One way that organizations can stay ahead of the curve is to familiarize themselves with government regulations on AI. In the United States, the Executive Order on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government is a good place to start; in the European Union, the EU AI Act serves the same purpose. While neither addresses agentic applications specifically, both outline their governments’ hopes and fears about AI technology in general.
Right now, tech companies, government agencies, and philosophy departments have a unique opportunity to help shape the future of AI. Developing effective, ethical, and compliant AI systems will require technologists, policymakers, and ethicists to share their knowledge and build on each other’s ideas.
If you’re interested in a deeper dive on ethics in AI, check out Ethical and Regulatory Implications of Agentic AI: Balancing Innovation and Safety.
Defend your organization with AI tools
The benefits and risks of AI are still very much in flux. Right now, GenAI can help organizations run more efficiently, but it can also trick employees into giving up sensitive data. In the not-too-distant future, agentic AI could automate complex assignments, but it could also sidestep or ignore ethical guidelines.
If your organization hasn’t incorporated AI into its cybersecurity strategy, now is the time. Lookout offers powerful AI and ML tools that can help administrators detect and respond to cyber threats more effectively. From blocking phishing attacks on mobile devices to providing cutting-edge threat intelligence, our cybersecurity solutions help keep your sensitive data — and your employees — safe from the latest AI-powered threats. Schedule a demo today to learn more.

Book a Demo
Discover how adversaries use non-traditional methods for phishing on iOS/Android, see real-world examples of threats, and learn how an integrated security platform safeguards your organization.