<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Anshul Prakash</title>
    <description>The latest articles on DEV Community by Anshul Prakash (@anshulprakash).</description>
    <link>https://dev.to/anshulprakash</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F307897%2F28867327-681d-4120-99f9-61f05debb192.jpeg</url>
      <title>DEV Community: Anshul Prakash</title>
      <link>https://dev.to/anshulprakash</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/anshulprakash"/>
    <language>en</language>
    <item>
      <title>Agents Are the New Microservices — And Google Just Proved It at Next '26</title>
      <dc:creator>Anshul Prakash</dc:creator>
      <pubDate>Fri, 24 Apr 2026 14:45:16 +0000</pubDate>
      <link>https://dev.to/anshulprakash/agents-are-the-new-microservices-and-google-just-proved-it-at-next-26-2k3i</link>
      <guid>https://dev.to/anshulprakash/agents-are-the-new-microservices-and-google-just-proved-it-at-next-26-2k3i</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/google-cloud-next-2026-04-22"&gt;Google Cloud NEXT Writing Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I'm at Google Cloud Next this week, and something someone said in passing between sessions stuck with me:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Agents are the new microservices."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;At first it felt like conference hyperbole. By the end of the keynotes, I wasn't so sure.&lt;/p&gt;

&lt;h2&gt;We've Been Here Before&lt;/h2&gt;

&lt;p&gt;Cast your mind back to 2012–2015. We were breaking apart monoliths. The pitch was compelling — smaller, focused services, independently deployable, easier to reason about. The reality took years to catch up. We needed service meshes to manage traffic, distributed tracing to debug across boundaries, circuit breakers to handle failures gracefully, and IAM policies to figure out which service could talk to which.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;decomposition&lt;/em&gt; was easy. The &lt;em&gt;discipline&lt;/em&gt; was hard.&lt;/p&gt;

&lt;p&gt;I think we're at the exact same inflection point with AI agents — and what Google shipped at Next '26 is the clearest signal yet.&lt;/p&gt;

&lt;h2&gt;The Announcements That Made It Click&lt;/h2&gt;

&lt;p&gt;Google didn't just talk about agents in the abstract. They shipped the infrastructure primitives:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agent Identity&lt;/strong&gt; — Every agent gets a unique cryptographic ID with scoped permissions and audit trails. Agents can operate autonomously but within defined authorization boundaries, and escalate to a human when they hit the edge of their scope.&lt;/p&gt;
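
&lt;p&gt;To make that concrete, here's a minimal sketch of what scope-bounded autonomy could look like from the application side. It's illustrative only: the class and function names are mine, not the actual Agent Identity API.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Illustrative sketch only; every name here is hypothetical, not the
# real Agent Identity API. The idea: act autonomously inside a scope,
# leave an audit trail, and hand off to a human at the boundary.
from dataclasses import dataclass

@dataclass
class AgentIdentity:
    agent_id: str                      # unique per-agent ID
    scopes: frozenset = frozenset()    # e.g. {"invoices:read"}

    def authorized(self, action):
        return action in self.scopes

def perform(identity, action, escalate):
    if identity.authorized(action):
        print(f"[audit] {identity.agent_id}: {action}")  # audit trail
    else:
        escalate(identity.agent_id, action)              # human hand-off

agent = AgentIdentity("agent-billing-01", frozenset({"invoices:read"}))
perform(agent, "invoices:read", escalate=print)
perform(agent, "invoices:refund", escalate=print)  # out of scope, escalates
&lt;/code&gt;&lt;/pre&gt;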

&lt;p&gt;&lt;strong&gt;Agent Gateway&lt;/strong&gt; — Centralized policy enforcement for all agent-to-agent and agent-to-tool interactions. It understands MCP and A2A natively, inspects every interaction, and integrates with Model Armor for runtime protection against prompt injection and tool poisoning.&lt;/p&gt;
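
&lt;p&gt;Here's roughly the shape of that enforcement point, sketched as a toy policy table. Everything below is a hypothetical illustration, not Google's API; the real gateway enforces this centrally and inspects payloads too.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical illustration of a central policy check; none of these
# names come from the actual Agent Gateway.
POLICIES = {
    ("agent-support", "agent-billing"): {"invoice.lookup"},  # allowed calls
}

def gateway_check(caller, callee, tool_call):
    allowed = POLICIES.get((caller, callee), set())
    if tool_call not in allowed:
        raise PermissionError(f"{caller} may not call {tool_call} on {callee}")
    # a real gateway would also scan the payload here (prompt injection,
    # tool poisoning) before forwarding the request
    return True

gateway_check("agent-support", "agent-billing", "invoice.lookup")  # passes
&lt;/code&gt;&lt;/pre&gt;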

&lt;p&gt;&lt;strong&gt;Agent Registry&lt;/strong&gt; — They literally called it &lt;em&gt;"the DNS of your internet of agents."&lt;/em&gt; Agents advertise capabilities via signed Agent Cards. Other agents discover and route to them. Sound familiar?&lt;/p&gt;
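
&lt;p&gt;For a feel of what a card carries, here's a simplified example of the general shape. I've abridged it from memory of the A2A spec, and the endpoint and skill are made up, so check the published schema for exact field names.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Simplified Agent Card, abridged from the A2A spec's general shape.
# The endpoint URL and skill below are invented for illustration.
agent_card = {
    "name": "invoice-agent",
    "description": "Looks up and summarizes customer invoices.",
    "url": "https://agents.example.com/invoice-agent",
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [{
        "id": "invoice.lookup",
        "name": "Invoice lookup",
        "description": "Fetch an invoice by customer and date range.",
    }],
}
# A registry serving signed cards like this plays the role DNS and
# service discovery played for microservices: advertise, discover, route.
&lt;/code&gt;&lt;/pre&gt;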

&lt;p&gt;&lt;strong&gt;A2A Protocol v1.2&lt;/strong&gt; — Now in production at 150+ organizations including Microsoft, AWS, Salesforce, SAP, and ServiceNow. Governed by the Linux Foundation. IBM's competing ACP protocol voluntarily merged into it last year. The standard is settling.&lt;/p&gt;

&lt;h2&gt;The Architecture That's Forming&lt;/h2&gt;

&lt;p&gt;Here's the mental model I came away with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;MCP&lt;/strong&gt; = how an agent connects to tools and data (think: the service's internal dependencies)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A2A&lt;/strong&gt; = how agents communicate with each other across platforms and orgs (think: the inter-service API contract)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agent Identity&lt;/strong&gt; = trust and authorization (think: mTLS + IAM, but for agents)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agent Gateway&lt;/strong&gt; = policy enforcement at the boundary (think: your API gateway / service mesh)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agent Registry&lt;/strong&gt; = discovery (think: service registry / DNS)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you squint, it's a microservice architecture. The components map almost 1:1. The difference is that the &lt;em&gt;nodes&lt;/em&gt; in this network are nondeterministic — they don't return a predictable response to a given input the way a REST endpoint does. That changes the failure modes significantly.&lt;/p&gt;

&lt;h2&gt;What's Actually Different (And Harder)&lt;/h2&gt;

&lt;p&gt;With microservices, failure is binary and observable. A service either returns 200 or it doesn't. You can write deterministic tests, set SLAs, and build dashboards around clear metrics.&lt;/p&gt;

&lt;p&gt;With agents, failure is semantic. An agent can return &lt;em&gt;something&lt;/em&gt; that looks correct and is completely wrong for the task. Observability isn't just logs and latency anymore — it's &lt;em&gt;did the agent do the right thing&lt;/em&gt;, which is fundamentally harder to instrument.&lt;/p&gt;
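
&lt;p&gt;A toy contrast, with made-up names of my own: the microservice check is one line, while the agent check needs an evaluator, and the evaluator itself has an error rate.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Toy contrast; all names are mine. Microservice health is binary:
def service_ok(status_code):
    return status_code == 200

# Agent health needs a second system (a rubric, golden answers, or an
# LLM-as-judge call) to decide whether the output was actually right:
def agent_ok(task, output, judge):
    return judge(task, output)  # a probabilistic signal, not a 200

print(service_ok(200))  # True, and you can alert on it directly
print(agent_ok("summarize", "looks plausible",
               judge=lambda t, o: "plausible" in o))  # only as good as the judge
&lt;/code&gt;&lt;/pre&gt;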

&lt;p&gt;Google's response to this is Agent Observability and Model Armor — but we're early. The tooling for semantic correctness is nowhere near as mature as distributed tracing became at the peak of the microservices era.&lt;/p&gt;

&lt;h2&gt;Who Has the Advantage&lt;/h2&gt;

&lt;p&gt;The engineers who survived the microservices wars — who built retry logic, designed for idempotency, drew service boundary diagrams, and debugged cascading failures at 2am — are going to have a serious edge here.&lt;/p&gt;

&lt;p&gt;The instincts transfer: keep agents focused on a single responsibility, design for failure at every boundary, don't share mutable state between agents, make every interaction traceable.&lt;/p&gt;

&lt;p&gt;What doesn't transfer is the assumption that a system that &lt;em&gt;runs&lt;/em&gt; is a system that &lt;em&gt;works&lt;/em&gt;. That's the new discipline we're building.&lt;/p&gt;

&lt;h2&gt;Where I Think This Goes&lt;/h2&gt;

&lt;p&gt;A2A reaching the Linux Foundation with 150+ production orgs in under a year is legitimately impressive. For context, it took years for microservice tooling (Kubernetes, Istio, Jaeger) to reach equivalent adoption. The compression is real.&lt;/p&gt;

&lt;p&gt;I left the keynotes thinking the analogy isn't just rhetorical anymore. The same organizational forces that pushed teams toward microservices — scale, team autonomy, independent deployment — are pushing toward multi-agent architectures. And Google just built the Kubernetes-equivalent control plane for it.&lt;/p&gt;

&lt;p&gt;The question isn't &lt;em&gt;if&lt;/em&gt; this becomes the dominant enterprise architecture. It's whether your team builds the discipline &lt;em&gt;before&lt;/em&gt; you're debugging agent spaghetti at 2am.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Written from the conference floor at Google Cloud Next '26, Las Vegas.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>cloudnextchallenge</category>
      <category>googlecloud</category>
    </item>
    <item>
      <title>I added GenAI System Design to my interviews. Then I tried to pass one myself.</title>
      <dc:creator>Anshul Prakash</dc:creator>
      <pubDate>Tue, 31 Mar 2026 14:00:00 +0000</pubDate>
      <link>https://dev.to/anshulprakash/i-added-genai-system-design-to-my-interviews-then-i-tried-to-pass-one-myself-2eag</link>
      <guid>https://dev.to/anshulprakash/i-added-genai-system-design-to-my-interviews-then-i-tried-to-pass-one-myself-2eag</guid>
      <description>&lt;p&gt;I've been interviewing software engineers for a while. Recently I started incorporating GenAI system design into the loop — RAG pipelines, agent architectures, evaluation strategies. The kind of questions that are now standard at OpenAI, Anthropic, Google, and Meta.&lt;/p&gt;

&lt;p&gt;Before I started asking candidates these questions, I wanted to make sure I understood what a strong answer actually looked like from the other side. So I went looking for reference material. That search led me to an AI-powered mock interview tool built specifically for this type of question.&lt;/p&gt;

&lt;p&gt;I figured I'd run through a session. I've conducted hundreds of interviews. I know this material cold.&lt;/p&gt;

&lt;p&gt;I was not expecting what happened next.&lt;/p&gt;




&lt;h2&gt;The Session&lt;/h2&gt;

&lt;p&gt;The question was a system design problem, the kind I've watched candidates struggle with dozens of times from the other side of the table. I know what a strong answer looks like.&lt;/p&gt;

&lt;p&gt;I started talking. I went into architecture. I covered components. I felt fine.&lt;/p&gt;

&lt;p&gt;The session ended. The scores came back.&lt;/p&gt;


&lt;div class="crayons-card c-embed"&gt;

  

&lt;p&gt;1/5. Across every dimension. Architecture. Scalability. Trade-off analysis. Requirement coverage. Communication. All ones.&lt;/p&gt;

&lt;p&gt;The summary: &lt;em&gt;"The candidate demonstrated minimal technical depth and failed to present any coherent system architecture or design.."&lt;/em&gt;&lt;/p&gt;


&lt;/div&gt;


&lt;p&gt;I stared at it for a while.&lt;/p&gt;




&lt;h2&gt;What Actually Happened&lt;/h2&gt;

&lt;p&gt;The feedback was right, and once I read it I knew exactly why.&lt;/p&gt;

&lt;p&gt;I had walked in thinking about the problem the way I think about it when I'm reviewing a candidate's answer — from a position of already knowing the destination. I wasn't narrating my reasoning. I was stating conclusions and moving on, assuming the listener could follow my logic without seeing it.&lt;/p&gt;

&lt;p&gt;That's not how interviews work. The interviewer has no access to your internal reasoning. They only have what you say out loud. And what I said out loud, apparently, didn't add up to much.&lt;/p&gt;

&lt;p&gt;The sharpest note in the feedback: I had jumped straight into implementation without spending a single minute on requirement gathering. No clarifying questions. No scope definition. Just architecture. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The #1 fix the tool highlighted: "Practice the opening 2 minutes religiously — have a memorized script for requirement gathering that you can deliver even under extreme stress."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I give that exact note to candidates regularly. I had just failed to do it myself.&lt;/p&gt;




&lt;h2&gt;Why This Keeps Happening&lt;/h2&gt;

&lt;p&gt;There's a specific failure mode I see repeatedly in GenAI system design interviews, and I now realize I'm not immune to it myself.&lt;/p&gt;

&lt;p&gt;Most engineers prepare by studying. They read papers, go through architecture blogs, maybe build a small RAG prototype. They accumulate knowledge. And then they walk into an interview and treat it like a solo design session — thinking in their head, stating conclusions out loud.&lt;/p&gt;

&lt;p&gt;That's not what an interview is. An interview is a real-time window into your reasoning process. The interviewer isn't just evaluating &lt;em&gt;what&lt;/em&gt; you know. They're watching &lt;em&gt;how&lt;/em&gt; you think — how you handle ambiguity, how you weigh tradeoffs, how you respond when challenged.&lt;/p&gt;

&lt;p&gt;Silence reads as uncertainty. Jumping to implementation without framing reads as shallow. These aren't signals that you don't know the material. They're signals that you haven't practiced communicating it.&lt;/p&gt;




&lt;h2&gt;What These Questions Actually Look Like&lt;/h2&gt;

&lt;p&gt;In case you haven't encountered them yet, here's the shape of what's showing up in AI/ML interview loops right now:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agent design:&lt;/strong&gt; "Design an agent that can autonomously manage X." The follow-ups are about failure modes, escalation logic, tool interface design, and how you'd evaluate whether the agent is performing well in production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RAG architecture:&lt;/strong&gt; "We need a system that answers questions grounded in our internal documentation." The follow-ups are about chunking strategy, retrieval quality, handling stale data, and latency vs. accuracy tradeoffs.&lt;/p&gt;
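
&lt;p&gt;To see why those follow-ups have teeth, here's a deliberately naive retrieval sketch. The embedder is a toy and every function is a stand-in, which is exactly what an interviewer would ask you to defend or replace.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Deliberately naive RAG retrieval; each piece is a stand-in.
def chunk(doc, size=200):
    # fixed-size chunking: cheap, but splits mid-sentence and mid-table
    return [doc[i:i + size] for i in range(0, len(doc), size)]

def embed(text):
    # toy "embedding"; real systems call an embedding model here
    return [text.lower().count(c) for c in "aeiou"]

def top_k(query, chunks, k=3):
    q = embed(query)
    score = lambda c: sum(x * y for x, y in zip(q, embed(c)))
    return sorted(chunks, key=score, reverse=True)[:k]

context = top_k("How do I rotate API keys?", chunk("...your internal docs..."))
# chunking strategy, retrieval quality, stale data, latency vs. accuracy:
# all four follow-ups live inside these three small functions
&lt;/code&gt;&lt;/pre&gt;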

&lt;p&gt;&lt;strong&gt;Evaluation strategy:&lt;/strong&gt; "How would you measure whether this AI system is working?" The follow-ups are about what you do when you don't have ground truth, and how you catch regressions.&lt;/p&gt;
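
&lt;p&gt;One honest answer to the no-ground-truth question, sketched with a stub comparator: freeze a small human-reviewed golden set and diff every new run against it.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Regression check against a frozen golden set; compare() is a stub
# standing in for a rubric or judge model.
GOLDEN = {"How do I rotate keys?": "Rotate keys via the settings page."}

def compare(gold, candidate):
    return gold.strip().lower() == candidate.strip().lower()  # toy similarity

def regression_check(run_model):
    return [q for q, gold in GOLDEN.items() if not compare(gold, run_model(q))]

# an empty list means no regressions against the reviewed set
print(regression_check(lambda q: "Rotate keys via the Settings page."))
&lt;/code&gt;&lt;/pre&gt;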

&lt;p&gt;&lt;strong&gt;Prompt engineering tradeoffs:&lt;/strong&gt; "Should we fine-tune or use RAG here?" The follow-ups are about when each approach breaks down, and how you'd make that call given specific constraints.&lt;/p&gt;

&lt;p&gt;None of these have a single correct answer. The interviewer is watching your reasoning, not your conclusion.&lt;/p&gt;


&lt;h2&gt;What I Changed After This&lt;/h2&gt;

&lt;p&gt;I now run at least one practice session myself before I sit on the interviewer side of any question type that's new to me. Not because I don't know the material — but because there's a gap between knowing something and being able to explain it fluently under pressure, and that gap shows up faster than you expect.&lt;/p&gt;

&lt;p&gt;If you're preparing for AI/ML roles and you've only been studying — not practicing speaking — you might have more of that gap than you think.&lt;/p&gt;

&lt;p&gt;Try explaining your last system design out loud, to no one, for ten minutes straight. See how long it takes before you go quiet.&lt;/p&gt;

&lt;p&gt;That's the thing to fix.&lt;/p&gt;

&lt;p&gt;Have you tried a GenAI interview yet? Share your experience or your favorite "clarifying question" in the comments below!
&lt;/p&gt;

</description>
      <category>ai</category>
      <category>career</category>
      <category>interview</category>
      <category>programming</category>
    </item>
    <item>
      <title>KubeCon 2020: Istio Simplified</title>
      <dc:creator>Anshul Prakash</dc:creator>
      <pubDate>Tue, 25 Aug 2020 00:53:21 +0000</pubDate>
      <link>https://dev.to/anshulprakash/kubecon-2020-istio-simplified-2l1j</link>
      <guid>https://dev.to/anshulprakash/kubecon-2020-istio-simplified-2l1j</guid>
      <description>&lt;p&gt;Last week I attended Kubecon 2020 Europe and have tried to write a summary on Istio simplified session which talks about changes in Istio 1.6 and a shift from microservices design pattern to a monolith design. Please do share your views in the comment section.&lt;br&gt;
&lt;a href="https://link.medium.com/Vjv18OSqc9" rel="noopener noreferrer"&gt;Medium post&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
    </item>
  </channel>
</rss>
