On March 7, 2026, Heather Wishart-Smith wrote in Forbes that agentic AI is changing the security model for enterprise systems. That framing is correct, but it still sounds smaller than the actual shift.
Traditional enterprise security assumed a simple chain of control: a human authenticates, software executes deterministic logic, and security teams wrap the environment with IAM, network controls, logging, and endpoint policy. Agentic AI breaks that chain. Now the system that reads instructions is also the system that selects tools, interprets ambiguous data, and decides which action to take next.
That turns security from a question of "who logged in?" into a harder question: what authority was delegated, to which agent, for which task, under what constraints, and how do you prove what happened afterward?
The timing matters. NIST opened its RFI on securing AI agent systems on January 12, 2026, published an NCCoE concept paper on software and AI agent identity and authorization on February 5, and launched the AI Agent Standards Initiative on February 17. This is no longer a niche AppSec debate. It is becoming a standards, identity, and governance problem for every enterprise that wants agents touching production systems, customer data, code, or money.
The short answer: agentic AI forces enterprises to redesign security around delegated identity, constrained authority, tool-level policy enforcement, and continuous observability. If your current plan is "put SSO in front of the app and log the API calls," you are under-scoping the problem.
TL;DR
- Forbes is right: agentic AI changes enterprise security because agents act, not just answer.
- NIST is already treating AI agent security as a distinct category, with an RFI that closed on March 9, 2026 and a separate identity-and-authorization comment window that stays open through April 2, 2026.
- The biggest shift is from user authentication to delegated authority management. Agents need their own identities, not borrowed human sessions and shared service keys.
- Prompt injection is now an action-security problem, not just a model-safety problem. In tool-using systems, hostile content can influence real operations.
- OWASP's framing of prompt injection and excessive agency maps directly to enterprise risk: unauthorized tool use, data exfiltration, workflow manipulation, and harmful automated actions.
- The minimum viable control stack is agent identity, short-lived scoped credentials, policy gates on every tool call, sandboxing, approval workflows, and full action lineage.
- Enterprises should not stop pilot programs, but they should stop giving agents broad standing privileges.
The Real Break: Agents Are Actors, Not Just Interfaces
The Forbes piece matters because it pulls a technical issue into the mainstream enterprise conversation: the security challenge is not simply "AI can make mistakes." It is that AI agents now sit in the middle of identity, applications, documents, APIs, workflows, and action loops.
That matches how NIST defines the problem. In its January 12 RFI, NIST describes AI agent systems as systems capable of planning and taking autonomous actions that impact real-world systems or environments. That definition matters because it moves the discussion from model quality into systems security.
Once an LLM can:
- read a customer email
- decide which SaaS application to open
- retrieve data from internal systems
- choose a tool
- trigger the next action
the security boundary is no longer the chatbot interface. The boundary is the full decision-and-action path.
Inference from the standards push
NIST is signaling that agent security is not just an extension of generic AI governance. It is a distinct systems-security problem created when model output is fused with real authority.
That is why security leaders quoted by Forbes keep landing on the same conclusion from different directions. Some focus on identity and delegated credentials. Others focus on visibility across layers. Others focus on secure-by-design defaults. They are all describing the same structural change: agents compress decision-making and execution into one runtime surface.
What Breaks First In Enterprise Deployments
The first failures are usually not spectacular. They are architectural shortcuts that feel harmless in a pilot and become dangerous once the agent gets real permissions.
Old Enterprise Security vs Agentic Enterprise Security
The control model changes more than most vendor pitches admit.
This is why "zero trust for agents" is not enough as a slogan. Zero trust helps with connection and access assumptions. But agents introduce a separate authority problem: the system deciding what to do is also the system executing the action path.
Identity Becomes the New Control Plane
This is where the NIST and NCCoE work is most useful.
The February 5 NCCoE concept paper is not really about chatbots. It is about applying identity standards and best practices to software and AI agents, with explicit attention to identification, authorization, auditing, non-repudiation, and controls that mitigate prompt injection. That is the right frame.
If an agent can deploy code, move data, open tickets, approve discounts, change configs, or trigger payments, then the enterprise needs answers to four questions on every run:
- Which human or business process delegated this task?
- Which exact identity is the agent using right now?
- Which tools and data sources are in scope for this task only?
- What evidence exists for every decision and action taken?
The practical implication is blunt: borrowed browser cookies, copied API keys, and shared service accounts are the wrong abstraction for agentic systems. Enterprises need agent-specific workload identity, ephemeral credentials, and policy checks that evaluate intent, data sensitivity, action type, and destination before execution.
The Minimum Viable Control Stack
You do not need a perfect reference architecture before starting. You do need a minimum viable control stack before expanding autonomy.
Why Prompt Injection Is Now an Enterprise Security Event
OWASP's LLM01 prompt injection guidance and LLM06 excessive agency guidance are useful here because they translate abstract AI risk into operational failure modes.
Prompt injection matters more in agentic systems because the model is no longer just generating text. It is selecting tools, invoking extensions, and influencing downstream actions. A malicious instruction hidden in a help ticket, a shared document, a website, a tool description, or a retrieved memory item can steer the model away from its intended workflow.
Excessive agency is the multiplier. If the agent has too much standing power, then even a small steering failure can become:
- an unauthorized data retrieval
- a ticket closure that hides a real incident
- a repo change that should have required approval
- a financial or operational action triggered under false context
The new design rule
Treat every external input as untrusted code for the model. If the agent can act, then content security and action security collapse into the same problem.
This is also where CISA's secure-by-design posture becomes more relevant, not less. The right enterprise question is not "Can customers configure enough controls after deployment?" It is "Did the vendor design the product so risky autonomy is constrained by default?" In agentic systems, safe defaults, included logging, and strong identity primitives are product requirements, not premium extras.
The 2026 Timeline Explains Why This Topic Suddenly Matters
Security teams are not imagining a future problem. The standards and policy machinery is already moving.
What CISOs and Platform Teams Should Do In the Next 30 Days
The right move is not to freeze every pilot. It is to stop pretending that agent access is just another SaaS integration.
The Strategic Read For Enterprise Leaders
The biggest mistake executives can make is treating agent security as a faster version of chatbot governance. It is not.
Chatbot governance mostly asked whether answers were safe, accurate, and compliant. Agent security asks whether a system with probabilistic reasoning and delegated power can be trusted to operate inside real workflows without causing unacceptable damage.
That is a different class of question. It requires different controls. And it lands in a different budget line: not just model safety or AI governance, but IAM, AppSec, platform engineering, procurement, and incident response.
FAQ
Final Take
The Forbes article should be read as a warning shot, not a trend piece.
Agentic AI is not simply adding another application to the enterprise stack. It is introducing a new actor that can interpret instructions, chain tools, and exercise delegated power in environments built for humans and deterministic software.
That is why the security model changes. Identity must become more granular. Authority must become shorter-lived and more explicit. Policy must sit in front of tool use. Observability must capture action lineage, not just final outputs. And product teams have to stop treating safe autonomy as an optional layer they will add later.
The enterprise winners in 2026 will not be the companies that give agents the most power the fastest. They will be the companies that build the cleanest authority model around them.
Sources
- Forbes: Agentic AI Is Changing The Security Model For Enterprise Systems (Mar 7, 2026)
- NIST: CAISI Issues Request for Information About Securing AI Agent Systems (Jan 12, 2026)
- NIST: AI Agent Standards Initiative (created Feb 17, 2026)
- NCCoE: New Concept Paper on Identity and Authority of Software Agents (Feb 5, 2026)
- OWASP GenAI: LLM01 Prompt Injection
- OWASP GenAI: LLM06 Excessive Agency
- CISA: Secure by Design
Related Reading
- The $100M AI Heist: How DeepSeek Stole Claude's Brain With 16 Million Fraudulent API Calls
- The $300K Bug That Was Never the AI's Fault -- Inside Addy Osmani's Spec Framework That Changes Everything
- When AI Fights Back: The Autonomous Agent That Wrote a Hit Piece on a Developer
Originally published at umesh-malik.com
Top comments (0)
Some comments may only be visible to logged-in visitors. Sign in to view all comments.