<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: sujal singh </title>
    <description>The latest articles on DEV Community by sujal singh  (@rajputs_027).</description>
    <link>https://dev.to/rajputs_027</link>
    
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rajputs_027"/>
    <language>en</language>
    <item>
      <title>I'm a Red Teamer. Here's How I'd Go After Google's Agentic Defense.</title>
      <dc:creator>sujal singh </dc:creator>
      <pubDate>Wed, 29 Apr 2026 06:40:38 +0000</pubDate>
      <link>https://dev.to/rajputs_027/im-a-red-teamer-heres-how-id-go-after-googles-agentic-defense-5fl8</link>
      <guid>https://dev.to/rajputs_027/im-a-red-teamer-heres-how-id-go-after-googles-agentic-defense-5fl8</guid>
      <description>&lt;p&gt;Google just announced Agentic Defense at Cloud NEXT '26 — an AI-powered security platform that combines Google Threat Intelligence, Security Operations, and Wiz's Cloud Security into one autonomous system. AI agents that hunt threats, engineer detections, and auto-remediate vulnerabilities. Sounds solid on paper.&lt;br&gt;
My job is to break things on paper.&lt;br&gt;
So let me tell you what I actually thought while reading through those announcements — not as someone impressed by the demo, but as someone already thinking about the gaps.&lt;/p&gt;

&lt;p&gt;First, Let's Give Credit Where It's Due&lt;br&gt;
I'm not going to pretend this isn't a meaningful step. The numbers are real:&lt;/p&gt;

&lt;p&gt;Triage and Investigation Agent has processed 5 million alerts, cutting 30-minute analysis down to 60 seconds&lt;br&gt;
Security Operations Center agents reduced threat mitigation time by over 90%&lt;br&gt;
Dark web intelligence engine analyzing millions of daily events at claimed 98% accuracy&lt;/p&gt;

&lt;p&gt;That 2% miss rate sounds small. At millions of daily events, it isn't. But the speed improvement is genuine — and speed is the one thing blue teams have always lost on. Attackers hand off access in 22 seconds now. You can't fix that with more analysts.&lt;br&gt;
But "harder to attack" is not the same as "impossible to attack." Nothing ever is.&lt;/p&gt;

&lt;p&gt;How I'd Think About Going After This&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Make the AI Cry Wolf
Every automated detection system has a noise tolerance. Push it past that threshold and analysts start ignoring alerts — or worse, the system starts auto-suppressing them.
Agentic Defense's Threat Hunting Agent is designed to surface "novel attack patterns." Novel is the key word. If I'm moving slowly, mimicking legitimate behavior, and staying just inside normal baselines, I'm not novel — I'm boring. Boring doesn't get flagged.
The flip side: if I deliberately trigger a wave of low-severity, obviously fake alerts first, I'm training the system (and the humans watching it) to associate alert spikes with noise. Then I move for real while everyone's fatigued.
This isn't a new technique. It's just more relevant when the system is autonomous and confidence-scored.&lt;/li&gt;
&lt;li&gt;Target the Remediation Agent Itself
This is the one that genuinely interests me as a red teamer.
Wiz Skills — Google's agent-based remediation feature — needs deep permissions to work. It reaches into your codebase, your cloud environment, your IDE. It can trigger automated fixes at the repository level.
That's a juicy target. An agent with remediation permissions is, from my perspective, an agent with modification permissions. If I can influence what that agent sees — through poisoned findings, manipulated context, or injected data in the Wiz Security Graph — I might be able to get the remediation agent to make changes I want made.
Not breaking in. Getting the defender's own tooling to open the door.&lt;/li&gt;
&lt;li&gt;Shadow AI as a Blind Spot — For Now
Google announced an AI-Bill of Materials through Wiz — automatically inventorying AI frameworks, models, and IDE extensions across the environment, including shadow AI tools. Good idea. Genuinely useful.
But inventory is not control. Knowing that a developer is running an unsanctioned AI coding plugin is different from preventing it, auditing what it generated, or understanding what data it touched. The gap between "we can see it" and "we've secured it" is where I'd be spending time.
New tool in an environment = new attack surface that defenders haven't fully modeled yet. That's not a flaw in the announcement — it's just the nature of shipping new capabilities.&lt;/li&gt;
&lt;li&gt;The MCP Server Attack Surface
Google Security Operations now supports remote MCP server integration — teams can build custom security agents on top of the platform. This is powerful. It's also a brand new protocol in a security-critical context.
MCP is still relatively new. Security tooling built on top of it inherits whatever trust model MCP uses. If I'm targeting an org running custom security agents via remote MCP servers, I'm looking at that integration layer hard — how authentication is handled, what the agent can be instructed to do, whether there's prompt injection risk in the threat data the agent processes.
Security agents that read unstructured threat intelligence and then take action are, from a red team perspective, systems that process untrusted input and produce privileged output. That's a classic injection risk, just with a new name.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;What This Means If You're Defending&lt;br&gt;
I'm not writing this to scare anyone away from Agentic Defense. I'm writing it because the best blue teams I've worked with want to know exactly how someone would come at them — before that someone actually does.&lt;br&gt;
A few things I'd want in place before going fully agentic:&lt;br&gt;
Red team the agents before they go live. Specifically test whether your Threat Hunting Agent can be tuned out through alert fatigue. Test what happens when the remediation agent receives manipulated input. Find your own gaps before someone else does.&lt;br&gt;
Keep humans in the loop for high-impact actions. Auto-remediation on low-severity findings? Fine. Auto-remediation that touches production IAM, network configs, or codebase at scale? That needs a human confirmation step until you've built real confidence in the system's judgment.&lt;br&gt;
Treat your security agents like privileged users. Audit their actions. Monitor their access. Ask who can influence what they see and what they act on.&lt;/p&gt;

&lt;p&gt;My Actual Take&lt;br&gt;
Agentic Defense is the right direction. The threat landscape has moved to machine speed, and defenders need to as well — there's no debate there.&lt;br&gt;
But every new capability is also a new attack surface. The same autonomy that lets an AI agent remediate a vulnerability at 3 AM is the autonomy that, if misdirected, can make changes no human intended.&lt;br&gt;
The teams that get this right won't be the ones who trusted the platform most. They'll be the ones who questioned it hardest first.&lt;br&gt;
That's just red team thinking. Doesn't turn off when the vendor demo ends.&lt;/p&gt;

&lt;p&gt;Red teamers — what's the first thing you'd test against an agentic security stack? Drop it below.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>cloudnextchallenge</category>
      <category>googlecloud</category>
      <category>cybersecurity</category>
    </item>
  </channel>
</rss>
