This is a submission for the Google Cloud NEXT Writing Challenge
When Mandiant's M-Trends 2026 report revealed that the average time from initial intrusion to handoff to a secondary threat actor had collapsed from 8 hours to 22 seconds over three years, it confirmed what many security practitioners already suspected: traditional security operations aren't built for the speed of modern attacks.
Google Cloud NEXT '26's headline announcements—Agentic Defense, three new autonomous Security Operations agents, and the Wiz partnership—represent more than just another product launch. They signal a fundamental architectural shift in how organisations defend against threats. For Australian enterprises operating under the Australian Signals Directorate's Essential Eight framework, this shift raises an important question: does agentic security strengthen Essential Eight compliance, or does it create new gaps?
What Google Actually Announced
The core of Google's security story at NEXT '26 centres on Agentic Defense — the integration of Google Threat Intelligence, Security Operations, and Wiz's Cloud/AI Security Platform into a unified autonomous defence system.
Three new Security Operations agents entered preview:
- Threat Hunting Agent: Continuously analyses telemetry for anomalies and suspicious patterns
- Detection Engineering Agent: Automatically creates and refines detection rules based on emerging threats
- Third-Party Context Agent: Enriches alerts with external intelligence from vendor advisories and threat feeds
Alongside these, Wiz introduced red, blue, and green agents for continuous attack simulation, detection validation, and automated remediation across Google Cloud, AWS, Azure, and Oracle environments. The AI Application Protection Platform (AI-APP) extends code-to-cloud-to-runtime protection specifically for AI workloads.
The partnership also delivered Dark Web Intelligence (98% accuracy claimed in internal testing), AI-BOM (AI Bill of Materials for model transparency), and remote MCP server support for Google Security Operations.
The technical architecture is impressive. The strategic implications for compliance frameworks like Essential Eight are less obvious.
The Essential Eight Reality Check
Australia's Essential Eight mitigation strategies were designed around a simple principle: reduce the attack surface and contain damage when prevention fails. The framework mandates eight specific controls across three maturity levels (ML1, ML2, ML3), each with measurable technical requirements.
Let's examine how agentic security intersects with each:
Application Control (Mitigation Strategy 1)
Essential Eight ML3 requires that applications can only be executed from approved locations, all executables are validated against an approved list, and Microsoft's recommended application control policies are implemented.
Agentic security impact: Detection Engineering agents can identify unauthorised application execution patterns faster than human analysts, but they don't enforce allowlisting—that remains a GPO/Intune policy decision. Autonomous agents can alert on policy violations with sub-second latency; they cannot retroactively prevent execution that already occurred in those critical 22 seconds.
Compliance gap: Application control is preventative. Agentic detection is reactive, albeit extremely fast. The two are complementary, not substitutional.
Patch Applications (Mitigation Strategy 2) & Patch Operating Systems (Mitigation Strategy 3)
ML3 mandates patching within 48 hours of release for internet-facing services and two weeks for other systems, with vulnerability scanning at least daily.
Agentic security impact: Wiz's green agent (automated remediation) can theoretically patch at machine speed, but Essential Eight compliance isn't measured by deployment speed—it's measured by patch coverage and risk prioritisation. An agent that patches every CVE indiscriminately creates operational risk; an agent that triages using threat intelligence aligns better with the framework's intent.
Google's Cloud Asset Inventory combined with Security Command Center's continuous vulnerability assessment does provide the "daily scanning" ML3 requires, and agent-assisted prioritisation could help meet the 48-hour window for critical patches.
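Intelligence-driven triage against the ML3 windows can be sketched in a few lines. This is a minimal illustration with hypothetical vulnerability records; the `exploited` flag stands in for the kind of enrichment a threat-intelligence feed or the Third-Party Context Agent would supply.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical scan results; "exploited" would come from threat intelligence.
vulns = [
    {"cve": "CVE-2026-0001", "internet_facing": True,  "exploited": True},
    {"cve": "CVE-2026-0002", "internet_facing": False, "exploited": False},
    {"cve": "CVE-2026-0003", "internet_facing": True,  "exploited": False},
]

def patch_deadline(vuln: dict, released: datetime) -> datetime:
    """Essential Eight ML3 patch window: 48 hours for internet-facing
    services, two weeks for everything else."""
    window = timedelta(hours=48) if vuln["internet_facing"] else timedelta(weeks=2)
    return released + window

def triage(vulns: list[dict], released: datetime) -> list[dict]:
    """Order the patch queue: actively exploited first, then by deadline."""
    return sorted(vulns, key=lambda v: (not v["exploited"], patch_deadline(v, released)))

released = datetime(2026, 4, 1, tzinfo=timezone.utc)
queue = [v["cve"] for v in triage(vulns, released)]
assert queue[0] == "CVE-2026-0001"  # exploited, internet-facing: first in line
```

The point of the sketch is that the sort key encodes the framework's intent: exploitation status outranks raw deadline, so the agent patches what attackers are actually using before it burns its 48-hour budget elsewhere.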
Compliance opportunity: This is where agentic security genuinely strengthens Essential Eight posture—if organisations configure remediation agents to respect change management windows and business-critical system dependencies.
Multi-Factor Authentication (Mitigation Strategy 4)
ML3 requires phishing-resistant MFA (FIDO2, smart cards, Windows Hello for Business) for all users.
Agentic security impact: Minimal direct impact. MFA is an identity control; Security Operations agents operate in the detection/response layer. However, the Third-Party Context Agent can enrich MFA-related alerts (impossible travel, repeated failures) with threat intelligence that helps security teams identify credential compromise faster.
Compliance status: Unchanged. Essential Eight MFA requirements remain a policy and technology deployment challenge, not a detection problem.
Restrict Administrative Privileges (Mitigation Strategy 5)
ML3 requires privileged access workstations, just-in-time administration, separate privileged accounts, and strict segmentation.
Agentic security impact: Threat Hunting agents can detect privilege escalation attempts and lateral movement patterns that indicate compromised admin credentials, but they don't enforce least-privilege policies. Google Cloud's Agent Identity and Agent Gateway features (announced at NEXT '26) do provide governance for agent-to-agent communication and MCP server access—this is relevant because autonomous agents themselves represent a new privileged entity class.
Compliance consideration: Organisations need to treat Security Operations agents as privileged accounts. If an agent has read/write access to Security Command Center policies or can trigger automated remediation, it must be governed under Mitigation Strategy 5.
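Governing an agent identity under Mitigation Strategy 5 reduces, in its simplest form, to diffing granted permissions against an approved baseline. The sketch below uses illustrative role strings, not an official Google Cloud role mapping for these agents.

```python
# Approved least-privilege baseline per agent identity (illustrative names).
ALLOWED_AGENT_ROLES = {
    "detection-engineering-agent": {"roles/chronicle.ruleEditor"},
    "threat-hunting-agent": {"roles/chronicle.viewer"},
}

def excess_roles(agent: str, granted: set[str]) -> set[str]:
    """Roles granted beyond the agent's approved baseline.
    Anything returned here is a Mitigation Strategy 5 finding."""
    return granted - ALLOWED_AGENT_ROLES.get(agent, set())

granted = {"roles/chronicle.ruleEditor", "roles/owner"}
findings = excess_roles("detection-engineering-agent", granted)
assert findings == {"roles/owner"}  # over-privileged agent flagged
```

Run as a periodic check, this turns "treat agents as privileged accounts" from a policy statement into an auditable control: any non-empty result is drift from the approved baseline.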
Regular Backups (Mitigation Strategy 6)
ML3 mandates daily backups, weekly testing of restoration, and offline/immutable storage.
Agentic security impact: Minimal. Backups are an operational resilience control. Detection agents don't back up data; they detect ransomware encryption patterns. The green remediation agent could theoretically automate restoration workflows, but Essential Eight explicitly requires tested restoration—automated restore without validation doesn't satisfy ML3.
Compliance status: Unchanged.
Restrict Microsoft Office Macros (Mitigation Strategy 7)
ML3 requires that macros can only execute from trusted locations, all macros are digitally signed, and Microsoft's recommended macro security settings are enforced.
Agentic security impact: Detection Engineering agents can identify malicious macro execution (e.g., spawning PowerShell, making network connections), but macro execution restrictions are enforced by GPO, not by Security Operations tooling.
Compliance gap: Same pattern as Application Control—detection is reactive, policy is preventative.
User Application Hardening (Mitigation Strategy 8)
ML3 mandates web browser isolation, blocking ads/untrusted content, disabling Flash/Java, blocking web content from Office, and implementing DMARC/SPF/DKIM.
Agentic security impact: Browser isolation and content filtering are network/endpoint controls. Security Operations agents can detect policy violations (e.g., Flash execution, unvalidated email senders) but don't enforce the restrictions themselves.
Google's partnership with Wiz does provide Cloud/AI Security Platform capabilities that can monitor cloud-hosted applications for hardening policy drift, which is relevant for SaaS applications governed under Essential Eight.
Compliance status: Detection improves; enforcement remains a separate implementation concern.
The Real Question: Speed vs. Governance
The 22-second attacker handoff window is real. So is the speed advantage autonomous agents hold over human analysts. But Essential Eight compliance isn't measured by response time—it's measured by control implementation, coverage, and maturity level consistency.
Agentic security doesn't replace Essential Eight. It accelerates detection and response for Mitigation Strategies 1, 2, 3, 5, 7, and 8—but only if the underlying controls are already implemented. An organisation with no application control policies gains nothing from a Detection Engineering agent that can identify unauthorised execution in 22 seconds if the execution was never blocked in the first place.
The real value proposition is for organisations already at ML2 or ML3 maturity. These are the environments where:
- Application control policies exist but drift occurs
- Patch cycles are defined but threat prioritisation is manual
- Privileged access is segmented but lateral movement detection is slow
- Macro restrictions are enforced but evasion techniques evolve
For these organisations, Google's Agentic Defense becomes a force multiplier for existing controls. The Threat Hunting agent doesn't replace your application allowlist—it identifies when an attacker finds a bypass. The Detection Engineering agent doesn't replace your patch management workflow—it helps you prioritise which vulnerabilities are being actively exploited in the wild.
What Australian Enterprises Should Actually Do
If you're responsible for Essential Eight compliance in an organisation considering Google Cloud's security stack, here's the practical playbook:
1. Audit your current maturity level honestly.
If you're not at ML2 across all eight strategies, agentic security won't fix the gap. Implement the foundational controls first—application control, MFA, privilege restriction—then layer on autonomous detection and response.
2. Treat Security Operations agents as privileged accounts.
The Detection Engineering agent that can create custom detection rules has write access to your security posture. The green remediation agent that can auto-patch systems has administrative privileges. Both must be governed under Mitigation Strategy 5 (Restrict Administrative Privileges). Implement least-privilege principles, audit agent actions, and require multi-party authorisation for high-impact remediation workflows.
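Multi-party authorisation for high-impact remediation can be enforced with a simple gate: the agent proposes, but nothing executes until two distinct humans approve. This is a minimal sketch with hypothetical names, not any vendor's workflow API.

```python
class RemediationGate:
    """Two-person rule for agent-proposed, high-impact actions."""

    def __init__(self, required_approvals: int = 2):
        self.required = required_approvals
        self.approvers: set[str] = set()

    def approve(self, who: str) -> None:
        self.approvers.add(who)

    def authorised(self) -> bool:
        """Distinct approvers only: the same person approving twice
        does not satisfy the two-person rule."""
        return len(self.approvers) >= self.required

gate = RemediationGate()
gate.approve("alice")
gate.approve("alice")          # duplicate: still one distinct approver
assert not gate.authorised()
gate.approve("bob")
assert gate.authorised()       # now the remediation may proceed
```

Using a set rather than a counter is the whole point of the design: it makes the "distinct humans" requirement structural rather than procedural.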
3. Map agent capabilities to Essential Eight evidence requirements.
ASD's Essential Eight Maturity Model includes specific evidence requirements for each control. Can your Security Operations agents generate the logs, audit trails, and compliance reports ASD assessors expect? If not, manual evidence collection still applies—automation doesn't eliminate the compliance burden, it just shifts where the work happens.
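The evidence-mapping step is largely a translation problem: machine log events on one side, control references an assessor recognises on the other. A minimal sketch (the event schema and control labels are hypothetical):

```python
# Hypothetical mapping from raw agent log event types to the Essential
# Eight control they evidence. An assessor needs the control reference,
# not just the machine event.
CONTROL_MAP = {
    "patch_applied": "E8-2: Patch Applications",
    "detection_rule_created": "E8-1: Application Control (monitoring)",
}

def to_evidence(event: dict) -> dict:
    """Translate a machine log entry into an assessor-readable record."""
    return {
        "control": CONTROL_MAP.get(event["type"], "unmapped"),
        "actor": event["agent"],
        "timestamp": event["ts"],
        "summary": event["detail"],
    }

log = {"type": "patch_applied", "agent": "green-agent",
       "ts": "2026-04-01T09:30:00Z", "detail": "CVE-2026-0001 patched on web-01"}
record = to_evidence(log)
assert record["control"].startswith("E8-2")
```

Anything that falls through to `"unmapped"` is itself a finding: an agent action with no corresponding control reference is exactly the "technically complete but practically useless" log the compliance burden warning describes.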
4. Test autonomous remediation in non-production first.
Wiz's green agent can patch at machine speed. That's powerful. It's also dangerous if misconfigured. Essential Eight ML3 requires tested backups and change management processes. Autonomous remediation that breaks business-critical systems faster than humans can intervene creates operational risk that outweighs the 22-second response benefit.
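The non-production-first discipline can be hard-wired rather than left to convention: default every remediation path to a dry run, and require the environment to be explicitly enabled for live action. A sketch with hypothetical environment names:

```python
def remediate(host: str, env: str,
              live_envs: frozenset = frozenset({"staging"})) -> str:
    """Autonomous remediation guard: act for real only in explicitly
    enabled environments; everywhere else, report what would happen."""
    if env in live_envs:
        # Real patch/rollback logic would run here.
        return f"patched {host}"
    return f"DRY RUN: would patch {host}"

assert remediate("web-01", "prod") == "DRY RUN: would patch web-01"
assert remediate("web-01", "staging") == "patched web-01"
```

Making the dry run the default means a misconfigured agent fails safe: forgetting to enable an environment produces a report, not an outage.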
5. Use Sydney-based Google Cloud regions for data sovereignty.
Google Cloud's Sydney and Melbourne regions have been operational since 2021. If you're subject to Australian data residency requirements (common in government and critical infrastructure), ensure Security Operations telemetry, Threat Intelligence data, and agent logs stay within Australian geography. The Google Cloud / Wiz partnership explicitly supports multicloud—verify that cross-cloud telemetry flows respect your jurisdiction requirements.
The Bigger Picture
Google Cloud NEXT '26's security announcements reflect a broader industry shift: security operations are moving from human-led, tool-assisted workflows to agent-led, human-supervised architectures. This isn't speculative—75% of Google's codebase is now AI-generated according to Sundar Pichai's keynote, and that same automation is coming to security operations.
For Essential Eight compliance, the question isn't whether to adopt agentic security. The question is how to adopt it without creating new compliance gaps.
The 22-second attacker handoff window is real, but so is the Essential Eight requirement for tested backups, validated patches, and auditable controls. Autonomous agents that respond in sub-second timeframes are impressive. Autonomous agents that can generate the evidence trail ASD assessors expect are essential.
The organisations that will succeed in this transition are those that treat agentic security as a compliance accelerant, not a compliance replacement. Implement the controls, deploy the agents to monitor and enforce them, audit the agents as privileged entities, and test everything before trusting automation in production.
And if you're still running Essential Eight at ML1, focus on the fundamentals first. No amount of autonomous threat hunting will compensate for missing application control policies or untested backups.
The 22-second problem is real. But the solution isn't just faster agents—it's faster agents operating within a mature security framework that treats speed and governance as complementary requirements, not competing priorities.
Have you implemented Essential Eight in your organisation? How are you thinking about autonomous security agents in a compliance-heavy environment? Let's discuss in the comments.

Top comments (1)
The framing of security operations agents as privileged accounts that must be governed under Mitigation Strategy 5 is the insight that changes how you'd actually deploy any of this. It's easy to get caught up in the speed numbers—22 seconds, sub-second response, machine-speed patching—and forget that an agent with write access to your detection rules or remediation pipelines is effectively a domain admin that operates at silicon speed. You wouldn't give a human junior analyst unrestricted access to modify Security Command Center policies. But an agent that can create custom detection rules has exactly that. The governance question isn't "can we trust the agent?" It's "would we give these permissions to a human with the same scope, and if not, why not?"
What I find myself thinking about is the evidence trail problem. Essential Eight compliance isn't just about having controls—it's about proving they exist during an assessment. An autonomous agent that patches a critical vulnerability in 30 seconds is impressive, but the assessor doesn't care about speed. They care about whether the patch was tested, whether the change management process was followed, and whether there's an auditable record of who (or what) authorized the change. The agent can do the work, but can it produce the paperwork? Most autonomous systems I've seen generate logs that are technically complete but practically useless for compliance—machine-readable but not assessor-readable. The gap between "we have the data" and "we can present the data in a form that satisfies an ASD assessor" is where a lot of automation projects quietly fail.
The point about ML1 organizations gaining nothing from agentic detection if the underlying controls don't exist is the kind of honesty that gets lost in vendor announcements. Detection without prevention is just a faster way to watch yourself get compromised. The 22-second attacker handoff window is terrifying, but if there's no application control policy to begin with, the attacker doesn't need 22 seconds—they need about two. The agents are a force multiplier, and a force multiplier applied to zero is still zero. Do you see many Australian enterprises actually at ML3 across all eight strategies, or is the reality on the ground more fragmented—some controls at ML3, others barely at ML1?