<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Saquib Shahid</title>
    <description>The latest articles on DEV Community by Saquib Shahid (@devsaquib).</description>
    <link>https://dev.to/devsaquib</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1001148%2F4cd27a71-f468-41ba-8453-e79ce38d751a.png</url>
      <title>DEV Community: Saquib Shahid</title>
      <link>https://dev.to/devsaquib</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/devsaquib"/>
    <language>en</language>
    <item>
      <title>A2A + MCP — The Two Protocols That Were the Actual Story of Google Cloud NEXT '26</title>
      <dc:creator>Saquib Shahid</dc:creator>
      <pubDate>Fri, 24 Apr 2026 22:37:59 +0000</pubDate>
      <link>https://dev.to/devsaquib/a2a-mcp-the-two-protocols-that-were-the-actual-story-of-google-cloud-next-26-3pj8</link>
      <guid>https://dev.to/devsaquib/a2a-mcp-the-two-protocols-that-were-the-actual-story-of-google-cloud-next-26-3pj8</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/google-cloud-next-2026-04-22"&gt;Google Cloud NEXT Writing Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Hey, I'm Saquib. I've been deep in the backend trenches for a couple of years now, mostly building out Node and Go microservices. Like pretty much every dev in 2026, I spent the last few months messing around with AI. Wiring LLMs into APIs. Breaking stuff. Fixing it. Getting a prototype working on the first try and immediately assuming something is horribly wrong.&lt;/p&gt;

&lt;p&gt;When Google Cloud NEXT '26 kicked off, I actually blocked out some calendar time to watch. My feed was absolutely flooded with hot takes on the Gemini Enterprise Agent Platform and the new 8th-gen TPUs. Everyone also had a very loud opinion about that "75% of Google's code is AI-generated" stat.&lt;/p&gt;

&lt;p&gt;But honestly? I kept getting distracted by two boring acronyms.&lt;/p&gt;

&lt;p&gt;A2A and MCP.&lt;/p&gt;

&lt;p&gt;They barely got any stage time. No hype reel. But I genuinely think they were the most important things announced at the whole event. Let me try to explain why.&lt;/p&gt;

&lt;h3&gt;
  
  
  the problem nobody wanted to say out loud
&lt;/h3&gt;

&lt;p&gt;Every vendor on stage was showing off agents. Salesforce has Agentforce. ServiceNow's got one. SAP too. Microsoft Copilot, obviously. Google's Gemini Enterprise. Basically everyone is shipping agents right now. Which is fine.&lt;/p&gt;

&lt;p&gt;But the question I kept coming back to was way less glamorous: how do any of these things actually talk to each other?&lt;/p&gt;

&lt;p&gt;In a normal backend setup, when two microservices need to chat, we have a playbook. REST, gRPC, Kafka queues, OpenAPI specs. The whole mess is solved. It's boring and it works.&lt;/p&gt;

&lt;p&gt;But with agents? A year ago, the answer was they just didn't. Or worse, they did it through some horrible custom webhook adapter that someone is going to have to babysit from a Slack DM at 2am. Every system spoke its own language. Integration wasn't a feature—it was somebody's entire job.&lt;/p&gt;

&lt;p&gt;A2A and MCP fix that exact problem. Google finally stopped treating them like research experiments and started treating them like actual infrastructure.&lt;/p&gt;

&lt;h3&gt;
  
  
  the mental model that finally made it click
&lt;/h3&gt;

&lt;p&gt;I had to read the docs a few times before I really got it. Here is the summary I wish I had on day one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MCP is HTTP for agents. A2A is DNS plus HTTP for agents talking to &lt;em&gt;other&lt;/em&gt; agents.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;MCP (Model Context Protocol) was Anthropic's baby originally. It standardizes how a model reaches out to your tools and databases. Before MCP, if you wanted Gemini to query a database, you had to write a bunch of glue code and pray the model didn't invent a table name. Now, the model speaks MCP, the server speaks MCP, and they just shake hands.&lt;/p&gt;

&lt;p&gt;A2A (Agent2Agent) came from Google and got donated to the Linux Foundation. It standardizes how one agent talks to another autonomous agent.&lt;/p&gt;

&lt;p&gt;The A2A docs say it best: build with whatever framework, equip with MCP, and communicate with A2A. They aren't competing standards. MCP is how your agent talks down to the database. A2A is how it talks sideways to another agent. That realization saved me a massive headache.&lt;/p&gt;
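&lt;p&gt;To make that down-versus-sideways split concrete, here is roughly what the two payloads look like on the wire. Both protocols speak JSON-RPC 2.0; the method names below match the specs as I read them, but the tool name, SQL, and message text are invented for illustration:&lt;/p&gt;

```python
# MCP: the agent calls DOWN to a tool ("tools/call" is the MCP method).
# The tool name and SQL here are made up.
mcp_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "bigquery.query",
        "arguments": {"sql": "SELECT region, SUM(revenue) FROM sales GROUP BY region"},
    },
}

# A2A: the agent sends SIDEWAYS to a peer agent ("message/send").
a2a_send = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "Draft the Q1 report."}],
        }
    },
}

# Same envelope, different verb: that's the whole trick.
print(mcp_call["method"], a2a_send["method"])
```

&lt;p&gt;Same JSON-RPC envelope in both directions; only the verb and the shape of the params change.&lt;/p&gt;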

&lt;h3&gt;
  
  
  okay, so what did google actually ship?
&lt;/h3&gt;

&lt;p&gt;A lot of this trickled out before NEXT, but the conference is where they put it all together into an actual strategy.&lt;/p&gt;

&lt;p&gt;On the MCP side, we got fully managed servers for things like BigQuery, Cloud SQL, and Pub/Sub. You don't deploy anything. You just point your agent at an endpoint. They also turned Apigee into an MCP bridge. This means any REST API you already built instantly becomes a discoverable agent tool with your existing auth layered on top. As a guy who spent way too long last year hand-wrapping APIs for LLMs, that was a huge relief. Auth is all IAM-backed now too, so no more passing API keys around.&lt;/p&gt;

&lt;p&gt;On the A2A side, the protocol hit production grade, with LangGraph and CrewAI support. Donating it to the Linux Foundation was a big move. They also announced Agent Registry (basically DNS for your internet of agents) and Agent Gateway.&lt;/p&gt;

&lt;p&gt;Put it all together, and Google is basically saying they don't need to own the agent itself. They just want to own the highways the agents drive on.&lt;/p&gt;

&lt;h3&gt;
  
  
  show me the code
&lt;/h3&gt;

&lt;p&gt;I hate architecture diagrams. I need to see the actual implementation. So here is a really minimal setup: a planner agent analyzing data, handing work off to a reporter agent in a completely different service.&lt;/p&gt;

&lt;p&gt;Planner first. We'll set it up to hit BigQuery via MCP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# planner_agent.py
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;google.adk.agents&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;LlmAgent&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;google.adk.tools.mcp_tool&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MCPToolset&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;google.adk.tools.mcp_tool.mcp_session_manager&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;StreamableHTTPConnectionParams&lt;/span&gt;

&lt;span class="c1"&gt;# Point at Google Cloud's managed BigQuery MCP server.
# No infra to deploy. Auth handled by IAM.
&lt;/span&gt;&lt;span class="n"&gt;bigquery_tools&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MCPToolset&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;connection_params&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;StreamableHTTPConnectionParams&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://bigquery.googleapis.com/mcp/v1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;planner&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LlmAgent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sales_planner&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gemini-3.1-pro&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;instruction&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You analyze sales data in BigQuery and prepare a brief &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;for the reporter agent. Findings as bullet points only.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;bigquery_tools&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice what's missing there. There are no BigQuery SDK imports or crazy prompt schemas. The managed MCP server just exposes the database as an agent tool. When I ran this the first time, I kept looking for the missing step. There isn't one.&lt;/p&gt;

&lt;p&gt;Next, we expose the reporter agent over A2A:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# reporter_server.py
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;a2a.server&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;A2AServer&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;a2a.types&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;AgentCard&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;AgentSkill&lt;/span&gt;

&lt;span class="c1"&gt;# The Agent Card is the "business card" that other agents fetch.
# Served at /.well-known/agent-card.json automatically.
&lt;/span&gt;&lt;span class="n"&gt;card&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;AgentCard&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;QuarterlyReporter&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Turns sales briefs into formatted exec reports.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1.0.0&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://reporter.example.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;skills&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="nc"&gt;AgentSkill&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;write_exec_report&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Write executive report&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Takes bullet findings and produces a polished report.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;input_modes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="n"&gt;output_modes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;application/pdf&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;server&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;A2AServer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;agent_card&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;card&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;handler&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;my_report_handler&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0.0.0.0&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;8080&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That Agent Card is the magic part. It's just a tiny JSON doc at a predictable URL. It tells any other agent from any cloud exactly what this bot can do and how to authenticate. It's basically robots.txt for AI.&lt;/p&gt;
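&lt;p&gt;Once a client has fetched that JSON, "discovery" is just filtering. Here is a rough sketch of what a consumer does with a card. The field names follow the JSON card format, which is camelCase rather than the Python SDK's snake_case; treat the exact schema as an assumption:&lt;/p&gt;

```python
import json

# A stripped-down Agent Card like the reporter would serve at
# /.well-known/agent-card.json. Field names are my reading of the card
# schema; the values match the reporter example.
raw = json.dumps({
    "name": "QuarterlyReporter",
    "version": "1.0.0",
    "url": "https://reporter.example.com",
    "skills": [
        {"id": "write_exec_report", "outputModes": ["text", "application/pdf"]}
    ],
})

card = json.loads(raw)

# "Which of this agent's skills can hand me back a PDF?"
pdf_capable = [
    skill["id"]
    for skill in card["skills"]
    if "application/pdf" in skill.get("outputModes", [])
]
print(pdf_capable)  # ['write_exec_report']
```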

&lt;p&gt;Finally, we make them talk:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# orchestrator.py
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;a2a.client&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;A2AClient&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;A2ACardResolver&lt;/span&gt;

&lt;span class="c1"&gt;# Discover what the reporter can do.
&lt;/span&gt;&lt;span class="n"&gt;resolver&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;A2ACardResolver&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;base_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://reporter.example.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;card&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;resolver&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_agent_card&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Open a client against it.
&lt;/span&gt;&lt;span class="n"&gt;reporter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;A2AClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;agent_card&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;card&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Run the planner...
&lt;/span&gt;&lt;span class="n"&gt;findings&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;planner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Analyze Q1 2026 revenue by region.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# ...and hand off to the reporter over A2A.
&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;reporter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;parts&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;findings&lt;/span&gt;&lt;span class="p"&gt;}]},&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# A2A tasks have a real lifecycle so we stream updates back.
&lt;/span&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;update&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;reporter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stream_task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;update&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;update&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;artifacts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three files and we have a working cross-service handoff. The crazy part is that the reporter could be running on AWS or it could be a Salesforce agent. The planner code wouldn't need to change at all.&lt;/p&gt;

&lt;h3&gt;
  
  
  the detail most write-ups miss
&lt;/h3&gt;

&lt;p&gt;A lot of the coverage I've seen just says "these are cool." True. But the interesting part is the underlying design.&lt;/p&gt;

&lt;p&gt;A2A is basically stealing every good idea from how the early web scaled. Agent Cards live at /.well-known/agent-card.json. That's RFC 8615, the same pattern we use for security.txt. They're using JSON-RPC 2.0 over HTTP and Server-Sent Events. If your system speaks HTTP, it speaks A2A.&lt;/p&gt;
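&lt;p&gt;The SSE half of that is worth seeing, because it is exactly as plain as it sounds. A toy parser for an update stream, no libraries needed (the JSON bodies are invented; real A2A events carry full task objects):&lt;/p&gt;

```python
import json

# Server-Sent Events: each event is one or more "data:" lines
# followed by a blank line. This sample stream is hand-written.
stream = (
    'data: {"status": "working"}\n'
    "\n"
    'data: {"status": "completed"}\n'
    "\n"
)

def parse_sse(text):
    events = []
    for chunk in text.split("\n\n"):
        data_lines = [
            line[5:].strip()
            for line in chunk.splitlines()
            if line.startswith("data:")
        ]
        if data_lines:
            events.append(json.loads("\n".join(data_lines)))
    return events

print([e["status"] for e in parse_sse(stream)])  # ['working', 'completed']
```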

&lt;p&gt;Tasks have a normal lifecycle too. Submitted, working, completed. It's the same shape as AWS Step Functions or GitHub Actions. It's boring—and coming from backend land, that's basically the nicest thing I can say about a tech. Clever protocols usually die because nobody wants to actually implement them. MCP and A2A are predictable.&lt;/p&gt;
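&lt;p&gt;That lifecycle maps onto a state machine you could write in ten lines. The state names below are my reading of the spec's happy path plus the common extras; treat the exact set as an assumption, not the normative list:&lt;/p&gt;

```python
# Legal A2A-style task transitions (sketch, not the normative list).
TRANSITIONS = {
    "submitted": {"working"},
    "working": {"completed", "failed", "input-required"},
    "input-required": {"working"},
    "completed": set(),  # terminal
    "failed": set(),     # terminal
}

def advance(state, nxt):
    if nxt not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} to {nxt}")
    return nxt

state = "submitted"
for step in ("working", "input-required", "working", "completed"):
    state = advance(state, step)
print(state)  # completed
```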

&lt;h3&gt;
  
  
  stuff I'm still not sold on
&lt;/h3&gt;

&lt;p&gt;Look, it all sounds great on stage. A few things still worry me.&lt;/p&gt;

&lt;p&gt;The security surface area here is terrifying. An agent that can auto-discover other agents and pass tasks around is an absolute nightmare for prompt injection or data exfiltration. I'd need a very solid threat model before letting this touch real production data.&lt;/p&gt;

&lt;p&gt;Debugging is also going to be awful. Google says everything lands in Cloud Audit Logs. Cool. But tracing a failed task across three different agents built by three different vendors? Prepare for a lot of late nights.&lt;/p&gt;

&lt;p&gt;Also, the spec is still moving. A2A is at the Linux Foundation, which means it will evolve. If you build heavily on it today, you will probably be rewriting parts of it next year.&lt;/p&gt;

&lt;h3&gt;
  
  
  what I'd actually do this week
&lt;/h3&gt;

&lt;p&gt;If you want to get ahead of this, here is my unsolicited advice.&lt;/p&gt;

&lt;p&gt;Go run the A2A Python quickstart. It takes maybe an hour and you get a working agent in 50 lines. Then hook a basic agent up to a managed MCP server. BigQuery is probably the easiest to test. Read a real Agent Card—just go look at the JSON and see how the auth and skills are structured. It grounds the whole concept. Just don't overbuild right away. Start with two agents and see what breaks.&lt;/p&gt;

&lt;p&gt;NEXT '26 was Google quietly admitting that no single company will own the agent ecosystem. So they're building the infrastructure instead. A2A is the DNS. MCP is the HTTP. If you learn these protocols now, future-you is going to be really happy about it. That's my bet anyway.&lt;/p&gt;

&lt;p&gt;If you are building with A2A or MCP, drop a comment. I'd love to swap notes, especially if you've hit weird OAuth snags between managed MCP servers and non-Google clients. I definitely spent a few hours stuck on that this week.&lt;/p&gt;

&lt;p&gt;— Saquib&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>cloudnextchallenge</category>
      <category>googlecloud</category>
      <category>ai</category>
    </item>
    <item>
      <title>BrewOS 3000: The $47M Enterprise Coffee Platform That Has Never Made Coffee</title>
      <dc:creator>Saquib Shahid</dc:creator>
      <pubDate>Thu, 09 Apr 2026 17:34:58 +0000</pubDate>
      <link>https://dev.to/devsaquib/brewos-3000-the-47m-enterprise-coffee-platform-that-has-never-made-coffee-50i2</link>
      <guid>https://dev.to/devsaquib/brewos-3000-the-47m-enterprise-coffee-platform-that-has-never-made-coffee-50i2</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/aprilfools-2026"&gt;DEV April Fools Challenge&lt;/a&gt;, Best Ode to Larry Masinter&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;In 1998, Larry Masinter published &lt;a href="https://datatracker.ietf.org/doc/html/rfc2324" rel="noopener noreferrer"&gt;RFC 2324&lt;/a&gt; — the Hyper Text Coffee Pot Control Protocol. It defined a new HTTP method (&lt;code&gt;BREW&lt;/code&gt;), a new status code (&lt;code&gt;418 I'm a Teapot&lt;/code&gt;), and exactly zero practical applications. It was an April Fools joke. It became immortal.&lt;/p&gt;
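&lt;p&gt;For the record, an actual HTCPCP exchange is tiny. The &lt;code&gt;BREW&lt;/code&gt; method, the &lt;code&gt;coffee:&lt;/code&gt; URI scheme, the &lt;code&gt;message/coffeepot&lt;/code&gt; content type, the &lt;code&gt;Accept-Additions&lt;/code&gt; header, and the 418 response are all genuinely in the RFC; the pot address here is invented:&lt;/p&gt;

```python
# A BREW request per RFC 2324, written out by hand. The pot address is
# made up; the method, headers, and body keyword come from the RFC.
request = (
    "BREW coffee://example.com/pot-1 HTCPCP/1.0\r\n"
    "Content-Type: message/coffeepot\r\n"
    "Accept-Additions: half-and-half\r\n"
    "\r\n"
    "start\r\n"
)

# What BrewOS 3000 answers, every single time, by design.
response_line = "HTCPCP/1.0 418 I'm a teapot"

method = request.split(" ", 1)[0]
print(method, response_line)
```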

&lt;p&gt;&lt;strong&gt;BrewOS 3000&lt;/strong&gt; is my love letter to that joke.&lt;/p&gt;

&lt;p&gt;It is a NASA mission-control-grade enterprise coffee procurement dashboard that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Accepts your blood type (affects roast compatibility)&lt;/li&gt;
&lt;li&gt;Requires you to select your Existential Mood State (&lt;code&gt;Monday&lt;/code&gt; is a valid option)&lt;/li&gt;
&lt;li&gt;Makes you pass a CAPTCHA to add milk ("Spell COFFEE backwards")&lt;/li&gt;
&lt;li&gt;Asks you to agree that &lt;em&gt;"coffee is a human right and you accept all karmic responsibility for choosing decaf"&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Runs a live WebSocket diagnostics log that — I cannot stress this enough — logs &lt;code&gt;[17:22:32] Querying Larry Masinter's ghost... 404&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Tracks six real-time metrics including Bean Entropy, Grind Latency, and Quantum Roast Index&lt;/li&gt;
&lt;li&gt;Executes a full 8-step async brew sequence with blockchain verification&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Always fails with HTTP 418.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every single time. Without exception. By design.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;I am a teapot&lt;/em&gt;&lt;br&gt;
&lt;em&gt;Short and stout, here is my spout&lt;/em&gt;&lt;br&gt;
&lt;em&gt;Your coffee: denied&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;🔗 &lt;strong&gt;Live demo:&lt;/strong&gt; &lt;a href="https://htcpcp.vercel.app/" rel="noopener noreferrer"&gt;htcpcp.vercel.app&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
      &lt;div class="c-embed__body flex items-center justify-between"&gt;
        &lt;a href="https://htcpcp.vercel.app/" rel="noopener noreferrer" class="c-link fw-bold flex items-center"&gt;
          &lt;span class="mr-2"&gt;htcpcp.vercel.app&lt;/span&gt;
          

        &lt;/a&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;&lt;em&gt;[Screenshot: BrewOS 3000 dashboard showing brew parameters, a live diagnostics log reading "Querying Larry Masinter's ghost... 404", and six real-time telemetry metrics in a dark CRT-style terminal UI]&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;SYSTEM UPTIME: 09:02:28 · ONLINE · NEVER BREWED&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/saquib-shahid" rel="noopener noreferrer"&gt;
        saquib-shahid
      &lt;/a&gt; / &lt;a href="https://github.com/saquib-shahid/htcpcp" rel="noopener noreferrer"&gt;
        htcpcp
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      DEV April Fools challenge
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;tbody&gt;
  &lt;tr&gt;
    &lt;th&gt;title&lt;/th&gt;
    &lt;td&gt;BrewOS 3000: The $47M Enterprise Coffee Solution That Has Never Brewed a Cup&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;th&gt;published&lt;/th&gt;
    &lt;td&gt;false&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;th&gt;description&lt;/th&gt;
    &lt;td&gt;My submission for the DEV April Fools Content Challenge - Best Ode to Larry Masinter&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;th&gt;tags&lt;/th&gt;
    &lt;td&gt;webdev, humor, javascript, aprilfools, react&lt;/td&gt;
  &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;🚀 The $47M Mission&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;Picture this: The year is 2021. Our CEO waited an agonizing &lt;em&gt;four minutes&lt;/em&gt; for a latte at a hipster coffee shop. In a moment of pure unadulterated hubris, he decided what the world really needed was not another barista, but a &lt;strong&gt;distributed coffee infrastructure platform&lt;/strong&gt; powered by Web3, Machine Learning, and arbitrary venture capital.&lt;/p&gt;
&lt;p&gt;We raised $47M in Series B funding. We hired 200 engineers. We spent two years rewriting our microservices in Rust.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;And we have never made a single cup of coffee.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Welcome to &lt;strong&gt;BrewOS 3000&lt;/strong&gt;, my wildly over-engineered submission for the &lt;strong&gt;DEV April Fools Content Challenge 2026&lt;/strong&gt; under the &lt;em&gt;"Best Ode to Larry Masinter"&lt;/em&gt;…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/saquib-shahid/htcpcp" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;





&lt;h2&gt;
  
  
  How I Built It
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Stack:&lt;/strong&gt; React + Vite, Tailwind CSS v4, deployed on Vercel.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fonts:&lt;/strong&gt; &lt;code&gt;Bebas Neue&lt;/code&gt; for headers, &lt;code&gt;Share Tech Mono&lt;/code&gt; for everything that needed to feel like a submarine terminal. Both doing &lt;em&gt;serious&lt;/em&gt; work here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The aesthetic:&lt;/strong&gt; Full dark CRT mission-control theme. Amber-on-black. Scanline overlay. Real-time metrics that fluctuate via &lt;code&gt;setInterval&lt;/code&gt; to make it look like the servers are working incredibly hard to do nothing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The brew sequence&lt;/strong&gt; is a React state machine — eight async steps with artificial delays, each one logging to the live diagnostics panel. Steps include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Validating blood type against NASA coffee manifest&lt;/li&gt;
&lt;li&gt;Verifying bean provenance on-chain&lt;/li&gt;
&lt;li&gt;Checking municipal brew permit&lt;/li&gt;
&lt;li&gt;Milk authorization CAPTCHA&lt;/li&gt;
&lt;li&gt;Blockchain roast consensus (always times out)&lt;/li&gt;
&lt;li&gt;Biometric mood analysis&lt;/li&gt;
&lt;li&gt;Running 47 pre-flight safety checks&lt;/li&gt;
&lt;li&gt;Attempting to brew → &lt;strong&gt;418&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The diagnostics log streams these in real time. The progress bar hits 99% and turns red. The 418 modal takes over the screen with the full RFC citation and the haiku.&lt;/p&gt;

&lt;p&gt;I originally planned a real Node/Express HTCPCP backend, but realized the true startup move is to fake the entire infrastructure client-side to save on compute costs. We call this "serverless". Investors love it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Keyboard shortcuts exist&lt;/strong&gt;, because of course they do. Press &lt;code&gt;B&lt;/code&gt; to initiate brew. Press &lt;code&gt;Escape&lt;/code&gt; to acknowledge your failure and try again.&lt;/p&gt;




&lt;h2&gt;
  
  
  Prize Category
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Best Ode to Larry Masinter.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here's why this earns it:&lt;/p&gt;

&lt;p&gt;RFC 2324 is one of the greatest pieces of technical writing in internet history. It's a fully specified protocol, complete with &lt;code&gt;BREW&lt;/code&gt; and &lt;code&gt;WHEN&lt;/code&gt; methods, error codes, and a clause prohibiting the addition of milk &lt;em&gt;"to an Assam Darjeeling"&lt;/em&gt;. It introduced &lt;code&gt;418 I'm a Teapot&lt;/code&gt; as a serious-looking status code for a joke that has since survived every attempt to remove it from the web. Developers rallied. The code stayed. Larry Masinter won.&lt;/p&gt;

&lt;p&gt;BrewOS 3000 does not merely &lt;em&gt;reference&lt;/em&gt; RFC 2324. It &lt;strong&gt;implements&lt;/strong&gt; it — faithfully, reverently, and completely uselessly. The &lt;code&gt;BREW&lt;/code&gt; request fires. The &lt;code&gt;418&lt;/code&gt; returns. The teapot wins. Every time.&lt;/p&gt;

&lt;p&gt;This is the only correct way to honor a man who made the internet laugh for 28 years and counting.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;© 2026 BrewOS Inc. Powered by HTCPCP/1.0 · RFC 2324 · No coffee was harmed in the making of this app (because none was made).&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>418challenge</category>
      <category>showdev</category>
      <category>aprilfool</category>
    </item>
  </channel>
</rss>
