This is a submission for the Google Cloud NEXT Writing Challenge
What this piece is about: IAM is evolving from access control into a trust grid for humans, workloads, service accounts, and agents. Google Cloud NEXT '26 makes that shift clear. So, how is your IAM?
I have been an avid user of Google Cloud and the ecosystem around it for years. My own journey started with Firebase, because it gives developers a ridiculous amount of capability before they even have to think about infrastructure properly. And this is both the beauty and the trap of cloud platforms.
Cloud platforms allow you to build and ship quickly, connecting your products, APIs, databases, serverless functions, CI/CD pipelines, logs, analytics, security tools, and now agents faster than ever before.
Then someone in the business asks:
Who exactly can access what?
And that is usually the point where the room falls silent, because everyone knows the answer is more complicated than it should be.
For the past few years, I have worked in cloud architecture, security, and cost optimisation. I have seen companies spend far too much money on cloud resources, but the scarier issue is usually not cost. The bill may hurt; unclear access is what keeps me awake.
Somewhere along the way, it became almost instinctive: every cloud security conversation began with the same question.
How is your IAM?
And often, the answer would move straight into comfortable territory. RBAC, improved login journeys, cleaner onboarding, smoother user flows. All important, but not enough. What was usually missing was the harder conversation about blind spots, the forgotten edges where unauthorised access is most likely to creep in.
That gap is easy to picture: experienced engineers mapping the paths they expect users and systems to take, while the dangerous paths sit quietly outside the frame.
I mean, can you explain the trust graph inside your cloud?
Because in Google Cloud, IAM is not one lock on one door. It is a policy system spread across organizations, folders, projects, workloads, service accounts, APIs, logs, secrets, pipelines, and now agents. And after Google Cloud NEXT '26, that question matters even more.
Google Cloud NEXT '26 felt less like a parade of AI features and more like the outline of a new operating reality. Through Gemini Enterprise Agent Platform, Agent Identity, Agent Gateway, Agent Observability, Agentic Defense, and Agentic Data Cloud, Google was naming the shape of the next cloud era. One where agents are no longer passive assistants, but actors inside the control plane itself.
Agents stop being "just chatbots" the moment they can call tools, query data, inspect incidents, trigger workflows, and act on behalf of users.
At that point, they cross a boundary.
They become participants in the identity system itself.
And once that happens, IAM is no longer managing only people and workloads. It is managing delegated intent.
IAM is not a login screen
A lot of teams still speak about IAM as though it begins and ends with the login journey. If the user can sign in, if the role-based access control looks reasonable on paper, if MFA is enabled, then the system is treated as though it has passed some invisible security threshold.
But IAM is not the login screen. It is not the modal where a user enters an email address, and it is not the friendly redirect that takes them back into the application.
Authentication asks who someone is. Authorization asks what they are allowed to do. Cloud IAM has to ask a larger and more uncomfortable question.
Which principal can perform which action on which resource, under which policy, and with what blast radius if that permission is misused?
That question sounds clean in theory, but it becomes complicated as soon as you look at the number of identities that exist inside a real cloud environment. A principal can be a human user, a group, a service account, a workload identity, an external identity from another provider, or an automated system moving quietly in the background.
And now, increasingly, a principal can also be an agent.
That is where the conversation changes. Once agents can call tools, query data, inspect incidents, trigger workflows, or act on behalf of a user, they stop being a decorative AI layer around the edge of the system. They become participants in the identity model itself.
That is why IAM cannot be treated as a narrow authentication concern. IAM is the control plane for trust.
The basic model is simple. The real model is not.
At the conceptual level, Google Cloud IAM looks almost elegant. A principal receives a role on a resource. The role contains permissions. The resource belongs somewhere in the hierarchy. The policy determines whether the action is allowed.
That is the version we can draw neatly on a whiteboard.
Production rarely stays that neat.
A user belongs to a group. That group inherits access from a folder. A project contains a service account. A workload can impersonate that service account. A CI/CD pipeline can deploy the workload. The workload can read secrets. Those secrets can unlock third-party systems. Logs can expose sensitive data. An agent can call a tool. That tool can reach an internal API. That API can touch customer data.
No single step in that chain may look dramatic on its own. That is the dangerous part.
Cloud risk often hides in the relationship between permissions, not just in the permission itself. The obvious question is whether an identity has a role. The better question is what that role allows the identity to reach, directly or indirectly.
This is why I no longer think about IAM as a gate.
I think about it as a grid.
A grid of trust transitions, where every identity, policy, resource, workload, tool, and agent creates another possible path through the system.
And sometimes the risk is not one obviously dangerous role.
Sometimes the risk is the chain.
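The chain above can be made concrete. Here is a minimal sketch that models the trust grid as a directed graph and asks what an identity can ultimately reach. Every name in it is hypothetical, chosen only to illustrate the pattern; this is not a real IAM API.

```python
from collections import deque

# Each edge is one trust transition: "this principal or resource can
# reach that one" via a role grant, impersonation, deployment, etc.
# All names are hypothetical, for illustration only.
edges = {
    "dev-group":         ["folder:analytics"],    # group inherits folder access
    "folder:analytics":  ["project:pipelines"],   # folder contains project
    "project:pipelines": ["sa:etl-runner"],       # project contains service account
    "ci-pipeline":       ["sa:etl-runner"],       # CI can impersonate the SA
    "sa:etl-runner":     ["secret:db-creds"],     # SA can read secrets
    "secret:db-creds":   ["customer-data"],       # secret unlocks the data store
}

def reachable(start: str) -> set:
    """Breadth-first search: everything a principal can reach, directly or through a chain."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# No single edge looks dramatic, but the chain ends at customer data.
print("customer-data" in reachable("dev-group"))  # True
```

Reducing least privilege to a graph problem like this is the point: you shrink risk by deleting edges, not just by renaming roles.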
The hierarchy is clean. Effective access is not.
Google Cloud gives us a resource hierarchy that looks clean from a governance perspective. Organizations, folders, projects, and resources give teams a way to structure ownership, policy, and control.
That hierarchy is useful. It gives large environments shape. It allows policies to be managed at different levels. It gives platform teams a way to apply governance without configuring every single resource manually.
But the same hierarchy also creates inherited access, and inherited access is where confidence can become misleading.
A role granted too high in the structure can quietly become broader than anyone intended. A folder-level permission can reach more projects than expected. A viewer role can expose more sensitive operational context than the name suggests. A service account that looks harmless in isolation can become highly privileged when combined with impersonation rights, deployment rights, or access to secrets.
This is where teams often get caught out.
Someone says that access was only granted at the folder level, but the more important question is what lived under that folder.
Someone says that a user only had Viewer, but the better question is what that user could view through logs, metadata, dashboards, configuration, or linked systems.
Someone says that it was just a service account, but the actual question is who could impersonate it, what it could access, and what it could trigger.
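A rough way to picture inherited access is a toy model, not the real IAM policy engine: the effective bindings on a resource are the union of the bindings on the resource itself and on every ancestor above it. The hierarchy and role names below are placeholders.

```python
# Hypothetical hierarchy and bindings, for illustration only.
parent = {
    "project:billing": "folder:finance",
    "folder:finance":  "org:acme",
    "org:acme":        None,
}

bindings = {
    "org:acme":        {("audit-team", "roles/viewer")},
    "folder:finance":  {("alice", "roles/editor")},
    "project:billing": {("bob", "roles/viewer")},
}

def effective_bindings(resource: str) -> set:
    """Union of bindings on the resource and all of its ancestors."""
    out = set()
    node = resource
    while node is not None:
        out |= bindings.get(node, set())
        node = parent.get(node)
    return out

# A role granted high in the tree quietly reaches every project below it.
print(("alice", "roles/editor") in effective_bindings("project:billing"))  # True
```

Alice was granted Editor once, at the folder level, yet she is effectively an Editor on every project under that folder, which is exactly the "what lived under that folder" question above.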
This is why least privilege is not just a security slogan.
In Google Cloud, least privilege is a graph reduction problem.
You are not only reducing roles. You are reducing reachability. You are reducing the number of paths an identity can use to create production impact. You are reducing the blast radius before there is a blast.
Positive paths are not enough
Most engineering teams are very good at explaining the positive path.
A user signs in. An admin grants a role. A service calls an API. A pipeline deploys. A dashboard loads. The demo works. The user journey looks smooth. The architecture diagram makes sense. Everyone moves on.
But attackers do not follow the positive path. Misconfigurations do not stay neatly inside the intended path. Broken integrations do not respect the diagram. Agents, especially once they are connected to tools and workflows, will not always remain inside the path we imagined for them.
The more useful question is not whether the right identity can do the right thing.
The more useful question is whether the wrong identity can reach the same outcome through a side route.
That is where cloud security actually lives.
Can a developer impersonate a privileged service account? Can a CI job deploy code that runs with broader permissions than the pipeline itself? Can a support role read logs containing customer data? Can read-only access still expose secrets through metadata or operational traces? Can an agent call a tool that performs a privileged action on its behalf? Can an abandoned service account still reach production? Can a temporary exception slowly become permanent infrastructure?
These are not edge cases. They are the places where real incidents are born.
This is the difference between IAM existing and IAM working.
Service accounts were the first warning
Before agents entered the conversation, service accounts had already shown us that IAM was never only about humans.
A service account is a non-human identity that can act inside the system without anyone sitting there and clicking a button. That can sound like plumbing until you realise how much of a modern production environment depends on it.
Service accounts run workloads. They access APIs. They deploy systems. They read data. They integrate platforms. They sit inside CI/CD pipelines, automation scripts, configuration files, and sometimes places they should never have reached in the first place.
The industry has already learned this lesson the hard way. Leaked service account keys, default accounts with excessive access, broad Editor roles, forgotten CI credentials, and unreviewed impersonation paths have all shown the same pattern.
The problem is rarely that nobody configured IAM at all. The problem is that the identity model grew faster than the governance around it.
Google Cloud’s own service account best practices point teams toward avoiding unnecessary keys, reducing excessive privilege, using temporary privilege elevation, reviewing unused permissions, and paying attention to lateral movement risk.
That list tells us where real IAM risk lives. Standing privilege. Overbroad roles. Long-lived credentials. Default accounts. Unused permissions. Lateral movement.
Now apply that same logic to agents.
An agent is not just a piece of software. In practical terms, it is a workload with reasoning, memory, delegated intent, and tools.
So the uncomfortable question becomes obvious.
Are we about to repeat every service account mistake, but faster?
Workload Identity Federation was the bridge
One of the better evolutions in cloud IAM has been the move away from long-lived static keys.
Static keys are dangerous because once they leak, they become portable access. They can sit in CI systems, laptops, scripts, Slack messages, old repositories, Docker images, backup folders, and forgotten configuration files for far longer than anyone wants to admit.
Workload Identity Federation moved the model in a better direction by allowing external workloads to use existing identity providers and obtain short-lived access without relying on long-lived service account keys.
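The mechanics behind that shift: the external workload presents its own provider's token to Google's Security Token Service and exchanges it for a short-lived credential. The sketch below only builds the exchange request body against the STS endpoint; the pool, provider, and token values are placeholders, and it deliberately does not call the API.

```python
# Builds a token-exchange request body for the STS endpoint
# (https://sts.googleapis.com/v1/token). The pool/provider names and
# subject token passed in below are placeholders, not real values.
def sts_exchange_payload(project_number: str, pool: str, provider: str,
                         external_jwt: str) -> dict:
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "audience": (
            f"//iam.googleapis.com/projects/{project_number}"
            f"/locations/global/workloadIdentityPools/{pool}"
            f"/providers/{provider}"
        ),
        "scope": "https://www.googleapis.com/auth/cloud-platform",
        "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
        "subject_token": external_jwt,  # the external provider's own token
    }

payload = sts_exchange_payload("123456", "my-pool", "my-provider", "<external-jwt>")
print(payload["audience"])
```

Nothing long-lived changes hands: the external token is verified against the pool's provider configuration, and what comes back is a scoped, expiring credential.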
That principle matters because it reveals the direction cloud IAM has been moving in for years.
Trust should be explicit. Credentials should be short-lived. Access should be scoped. Identity should be verifiable. Authority should be revocable.
Agents need the same discipline.
They should not be random API keys with a personality. They need proper identity. They need scoped permissions. They need tool boundaries. They need audit trails. They need revocation paths. They need to prove who or what they are before they act.
This is where Google Cloud NEXT ’26 becomes interesting.
Because the agentic cloud is not just about making agents more capable.
It is about making agents governable.
Agents are entering the IAM grid
Google Cloud’s Agent Identity work is an important signal because it treats agents as something that must be identifiable, governable, and accountable.
Agent Identity gives agents a secure per-agent identity in Agent Runtime and supports a least-privilege approach to access management. Google also describes lifecycle-tied identities, IAM allow and deny policy support, auditability, and credential protections such as certificate-bound tokens enforced through Context-Aware Access.
That matters because once an agent can query data, call tools, inspect alerts, route requests, trigger workflows, or act on behalf of a user, it is no longer just an AI feature.
It is an actor inside the cloud trust model.
And actors need identity.
They need boundaries, logs, revocation, deny rules, runtime controls, and a clear relationship to the user, workload, or business process they are acting for.
Without that, we are not building an agentic cloud.
We are building shadow infrastructure with a nicer interface.
Agent Gateway and Model Armor are not AI safety fluff
This is the part I think developers should take seriously.
Agent Gateway is not just a routing layer. It is governance infrastructure for the way agents connect and interact across Google Cloud, external agents, AI applications, tools, and language models.
Model Armor is not just a generic safety wrapper either. It screens prompts and responses, and when it is integrated with Agent Gateway, those protections can sit directly in the communication pathways agents use.
That is important because IAM alone cannot understand every model interaction.
IAM can determine whether a principal is allowed to perform a particular action. It can enforce policy against identity, role, permission, and resource.
But agentic systems introduce other questions.
What is being sent to the model? What is being returned? Is the agent being manipulated? Is sensitive data crossing a boundary? Is a tool being invoked through an unsafe chain? Is the interaction consistent with the agent’s intended purpose, or has the agent drifted into behaviour nobody explicitly approved?
The enforcement model has to become layered.
IAM controls who or what can act. Agent Identity makes the agent visible as a first-class identity. Agent Gateway governs where agent traffic flows. Model Armor screens the model boundary. Deny policies and principal boundaries limit what should never be reachable. Audit logs make the activity explainable after the fact.
That is the deeper cloud security story.
Not simply adding AI to the stack.
But governing a new class of actor once it starts operating inside the cloud.
If agents are going to act inside production systems, they need to be governed like production identities.
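The layered model above can be sketched as a pipeline in which every layer holds a veto and every decision is written to an audit trail. This is a conceptual sketch with illustrative names, not any Google Cloud API.

```python
# Each check is one enforcement layer with a veto. All names are illustrative.
def iam_allows(principal, action):       # who or what can act
    return (principal, action) in {("agent:triage", "read:alerts")}

def gateway_allows(target):              # where agent traffic may flow
    return target in {"alerts-api"}

def screen_passes(payload):              # model-boundary screening
    return "ssn" not in payload.lower()  # toy sensitive-data check

def authorize(principal, action, target, payload, audit_log):
    layers = [
        ("iam",     iam_allows(principal, action)),
        ("gateway", gateway_allows(target)),
        ("screen",  screen_passes(payload)),
    ]
    decision = all(ok for _, ok in layers)
    audit_log.append((principal, action, target, decision))  # explainable later
    return decision

log = []
print(authorize("agent:triage", "read:alerts", "alerts-api", "list open alerts", log))  # True
print(authorize("agent:triage", "read:alerts", "alerts-api", "fetch SSN records", log))  # False
```

The design choice worth noticing: no single layer is trusted to be complete, and even denied attempts leave a record.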
Every agent should be less privileged than the human who created it
My strongest opinion on agentic IAM is simple.
Every agent should be less privileged than the human who created it.
That sounds obvious, but I think many teams will get this wrong.
They will connect agents to broad service accounts because the demo needs to work. They will let agents inherit human access because it feels convenient. They will give agents tools without mapping the downstream permissions. They will log prompts but not tool calls. They will track model output but not agent authority. They will call it productivity and accidentally build shadow infrastructure.
This will not happen because teams are careless by nature.
It will happen because abstraction makes the trust model feel invisible.
When everything sits behind a friendly agent interface, it becomes easier to forget that real authority is being delegated underneath. A tool call may look harmless in the UI, but downstream it may be reading data, opening tickets, modifying configuration, calling APIs, or triggering workflows that have real operational consequences.
The lazy version of agentic cloud says the agent should have whatever access it needs.
The better version asks whether we can prove the agent cannot reach what it must never touch.
That is the shift.
The agentic cloud needs deny thinking
Allow policies are necessary, but they are not enough.
Mature IAM needs both sides of the sentence.
One side says that this identity is allowed to do this specific thing.
The other side says that this identity must never be able to do that thing, even if another policy, shortcut, inherited role, or future configuration mistake tries to make it possible.
That second sentence is where deny policies, principal access boundaries, conditional access, runtime governance, and just-in-time privilege become important.
An incident-response agent may be allowed to investigate alerts, but it should not be able to disable security controls. A data analysis agent may query approved datasets, but it should not be able to export raw customer data. A deployment assistant may suggest infrastructure changes, but it should not apply production changes without human approval. A support agent may retrieve account metadata, but it should not access payment information. A security triage agent may enrich findings, but it should not modify IAM. A coding agent may open a pull request, but it should not rotate secrets or change production policies.
That is not overengineering.
That is the minimum level of discipline required once agents can touch real systems.
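Both sides of that sentence can be captured in one evaluation rule: deny is checked first and always wins, no matter what any allow policy says. The toy evaluator below uses hypothetical agent names and permissions; it is not the real deny-policy engine.

```python
# Allow and deny sets for an agent identity. All names are hypothetical.
allow = {
    "agent:incident-response": {"alerts.list", "alerts.get", "logs.read"},
}
deny = {
    "agent:incident-response": {"securityControls.disable", "iam.policies.set"},
}

def is_permitted(principal: str, permission: str) -> bool:
    """Deny is evaluated first and overrides any allow."""
    if permission in deny.get(principal, set()):
        return False  # must never be possible, even if an allow says otherwise
    return permission in allow.get(principal, set())

print(is_permitted("agent:incident-response", "alerts.get"))               # True
print(is_permitted("agent:incident-response", "securityControls.disable")) # False
```

The deny set keeps doing its job even if a future configuration mistake adds a broad allow, which is exactly the guarantee the prose asks for.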
The real IAM maturity test
So when I ask “How is your IAM?” in a Google Cloud environment, I am not asking whether someone ticked a checkbox or enabled a feature.
I am asking whether the team can explain the trust grid.
Can they list the human principals? Can they account for the service accounts? Can they explain which workloads can impersonate which identities? Can they identify inherited access? Can they detect unused permissions? Can they remove standing privilege? Can they avoid long-lived keys? Can they reduce lateral movement?
And now, can they explain which agents exist? Can they show what each agent can call? Can they prove which tools each agent can reach? Can they deny destructive actions by design? Can they trace user-delegated agent activity? Can they shut down an agent without breaking production? Can they prove that an agent cannot chain itself into production impact?
That is the bar now.
Because the agentic cloud is not merely more automated.
It is more delegated.
And once systems begin acting on our behalf, IAM is no longer just access management.
It becomes the map of who and what we trust inside the grid.
Final thought
There is no doubt in my mind that IAM is changing shape.
It is no longer only the discipline of managing human access. It is becoming a full trust system for every actor inside the cloud. Humans, workloads, automations, service accounts, and now agents all move through the same grid of permissions, policies, resources, and consequences.
That shift is bigger than most people realise.
The companies that win with AI will not be the ones that connect every agent to every tool as quickly as possible. Speed alone is not strategy with agentic AI. Capability without boundaries is just risk moving faster.
The companies that win will be the ones that can answer the uncomfortable questions clearly.
Who is acting?
On whose behalf?
With which identity?
Through which tool?
Against which resource?
Under which policy?
With what runtime protection?
With what audit trail?
And if something starts moving in the wrong direction, how quickly can we shut it down?
So yes, my question remains the same.
How is your IAM?
But in 2026, that question is no longer only about users. Google Cloud NEXT '26 makes the shift hard to ignore. IAM now has to account for every human, workload, service account, automation, and agent that can touch your cloud.
Because ultimately, IAM matters from the standpoint of the business. It is not a configuration detail. It is the fence around the system, the map of trust, and the line between controlled delegation and quiet exposure.
A strong fence does not mean nothing can ever go wrong. It means the ordinary intruder does not walk in through the obvious gate. It means the blast radius is smaller. It means the dangerous paths are harder to reach. And it means that when someone asks who or what can access the heart of your application, the room does not have to fall silent.




