September 18, 2025

Ethical and Regulatory Implications of Agentic AI: Balancing Innovation and Safety

Artificial intelligence (AI) has come a long way over the past six decades. From the simple chatbots of the 1960s to today’s sophisticated large language models (LLMs), mimicking human behavior has always been one of AI’s most intriguing applications. At present, though, AI cannot plan or make decisions the way humans do. If it could, the ethical implications of AI would suddenly become much more complex. That’s where agentic AI comes in.

Right now, agentic AI is mostly just an intriguing idea. Unlike generative AI (GenAI), which simply produces naturalistic responses to specific queries, agentic AI would be an autonomous entity. Such a program would have goals, problem-solving abilities, and perhaps even memories. In effect, an agentic AI could eventually make moral — or immoral — decisions.

GenAI, agentic AI, and moral agency

When experts discuss the ethical implications of AI today, they tend to focus on GenAI. To give a brief refresher, GenAI tools produce customized content in response to specific natural-language queries. While GenAI can’t create truly original ideas, it often presents information in novel ways, especially when it picks up on patterns humans haven’t noticed before. On the other hand, GenAI can also “hallucinate,” or confidently state incorrect — and potentially damaging — information.

Ethical issues pertaining to GenAI include data provenance, unconscious bias, and malicious misuse. However, GenAI cannot act on its own. Everything it produces, whether helpful or harmful, is in direct response to a human inquiry.

Compare that with agentic AI. In this context, an “agent” is an entity that can act independently and autonomously. Agentic AI would theoretically do just that. Rather than responding to specific, regimented instructions, an agentic AI could follow general guidelines. This kind of AI could reason through roadblocks and come up with novel solutions, just as a human would.

For example, agentic AI could listen to a customer’s problems, “think” through possible causes, and come up with a solution that’s not listed anywhere in a data set. If that customer calls back later, agentic AI could also “remember” their previous interaction and pick up where they left off.
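
To make the contrast concrete, here is a minimal sketch of what an agentic loop might look like, written in Python. Every name in it (Memory, plan, act, run_agent) is hypothetical and invented for illustration, and the “reasoning” is a stub; the sketch only shows how a goal, a sequence of self-directed steps, and persistent memory could fit together.

```python
# A minimal, purely illustrative sketch of an "agentic" loop.
# All names here are hypothetical; no real product works this way.

from dataclasses import dataclass, field


@dataclass
class Memory:
    """Persists past interactions so the agent can pick up where it left off."""
    history: list[str] = field(default_factory=list)

    def remember(self, event: str) -> None:
        self.history.append(event)

    def recall(self) -> str:
        return "\n".join(self.history)


def plan(goal: str) -> list[str]:
    # A real agent would reason here, perhaps via an LLM; this toy
    # version just derives a fixed sequence of steps from the goal.
    return [f"diagnose: {goal}", f"propose a fix for: {goal}", "confirm with customer"]


def act(step: str) -> str:
    # Placeholder for tool use: querying systems, calling APIs, and so on.
    return f"completed '{step}'"


def run_agent(goal: str, memory: Memory) -> None:
    # Unlike GenAI, which answers one query and stops, this loop keeps
    # working toward a goal and records what it did along the way.
    for step in plan(goal):
        memory.remember(act(step))


memory = Memory()
run_agent("customer cannot log in", memory)
print(memory.recall())  # a later session could reload this memory
```

The key difference from GenAI is the loop itself: the program keeps acting toward a goal and carries state between sessions, instead of answering a single prompt and stopping.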

As such, the differences between GenAI and agentic AI could be ethically significant. An agentic AI could theoretically harm someone, or allow someone to come to harm, if that were the most efficient way to solve a problem. That prospect raises some thorny ethical questions:

  • Who is responsible for an agentic AI’s actions?
  • Could threat actors manipulate an agentic AI?
  • What happens if an agentic AI doesn’t demonstrate human values?

These issues are all theoretical right now, but agentic AI could become a reality within the next few years. Developing and regulating this technology will most likely require an interdisciplinary approach.

Perspectives on the ethical implications of AI 

To balance efficiency and social responsibility, agentic AI will need three things:

  • Regulations that define its capabilities
  • Technology that mimics human cognition
  • Ethics that dictate its behavior

Government agencies, tech companies, and philosophy departments alike have something to contribute to the agentic AI conversation. By pooling their knowledge, they can help agentic AI become a force for good as soon as it’s ready to deploy.

Policymakers

Policymakers — elected officials, bureaucrats, and everyone in between — will be primarily responsible for AI regulatory compliance. The good news is that they won’t have to start from scratch. The United States and the European Union have already instituted regulations for responsible AI usage in the Executive Order on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government and the EU AI Act, respectively.

The fact that these powerful governments have flexible, forward-thinking AI regulation is a good start. However, neither one addresses the challenges specific to agentic AI. The American regulations are vague, simply calling for “responsible” use of AI “in a manner that fosters public trust.” The European regulations, on the other hand, are highly technical and situational. The EU AI Act spends more than half of its Prohibited AI Practices section discussing surveillance and biometrics, for example, but devotes only two short paragraphs to what would happen if an AI caused harm on purpose.

In the near future, policymakers should consider legislation that defines agentic AI and lays out general guidelines for use and misuse. These regulations should clarify who is ultimately responsible for an agentic AI’s behavior, as well as what, specifically, would count as a “harmful” AI action.

Technologists

Since agentic AI is still in its infancy, technologists will probably spend the next few years refining it. The greatest challenge is that agentic AI requires a totally different training method from GenAI’s. There is no such thing as a data set that creates reason, intuition, or cognition. Even the most sophisticated GenAI system doesn’t really “know” what a user is asking, or whether its answer makes sense. The transition from “finding the most probable response” to “reasoning through a problem” would require a quantum leap in AI technology.
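
As a rough illustration of what “finding the most probable response” means, the toy Python snippet below scores a handful of candidate next words and emits the likeliest one, the way a language model chooses tokens. The vocabulary and the scores are invented for this example; real models operate over vocabularies of tens of thousands of tokens.

```python
import math

# Toy next-token prediction: assign a score (logit) to each candidate
# word, convert the scores to probabilities with a softmax, and emit
# the most probable word. The words and scores here are invented.
logits = {"password": 2.1, "router": 0.3, "banana": -1.5, "reset": 2.8}

total = sum(math.exp(score) for score in logits.values())
probs = {word: math.exp(score) / total for word, score in logits.items()}

next_word = max(probs, key=probs.get)
print(next_word, round(probs[next_word], 2))  # "reset" wins on probability alone
```

Nothing in that calculation involves a goal, a plan, or any understanding of the question; that is the gap technologists would have to close.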

At the same time, technologists should consider the ethical implications of their work. Right now, the only entities that can make autonomous, rational decisions are humans and other intelligent animals. Creating a truly agentic AI might also mean creating a sentient — or even sapient — AI. If that happens, the creation and use of agentic AI could become a profound philosophical issue.

Ethicists

Philosophers have been dealing with the question of how to live a moral life for at least 2,600 years. Their insights may prove particularly useful in the era of agentic AI. Humans know how to teach ethics to other humans, but AI will require a fundamentally different approach. Agentic AIs will not acclimate to the world during infancy and childhood. They won’t accumulate knowledge through life experiences. 

First and foremost, ethicists will have to determine whether it’s possible to impart morality to an agentic AI. From there, the issues get more complicated. Do we have a right to force human ethics on machines? What if agentic AIs — programmed to be proactive and rational — develop their own ethical systems? These questions become particularly salient if technologists decide to pursue artificial general intelligence (AGI) or artificial superintelligence (ASI).

Ethicists are already thinking through these heady questions. They should also start building relationships with policymakers and technologists, as all three camps may need to work together sooner rather than later.

Make AI part of your cybersecurity strategy

While agentic AI is mostly a future concern, the ethical implications of AI are an issue right now. Threat actors are using GenAI resources such as FraudGPT to exploit vulnerabilities and enhance their social engineering attacks. Lookout is using GenAI to fight back. Our AI and machine learning (ML) solutions help administrators detect threats, block malware, and ultimately protect both employees and their devices. Lookout SAIL also leverages GenAI to streamline policy creation, incident response, and other cybersecurity workflows. With these tools at your disposal, you and your organization can stay a step ahead of AI-enhanced threats.

Book a personalized, no-pressure demo today to learn:

  • How Lookout can help secure your organization’s approach to GenAI without sacrificing productivity
  • How adversaries are leveraging avenues outside traditional email to conduct phishing on iOS and Android devices
  • Real-world examples of phishing and app threats that have compromised organizations
  • How an integrated endpoint-to-cloud security platform can detect threats and protect your organization

Book a Demo