<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Yashvanth R S</title>
    <description>The latest articles on DEV Community by Yashvanth R S (@olivespecs).</description>
    <link>https://dev.to/olivespecs</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3904403%2F0c2aaf2c-2d99-4e29-93b8-543364c0a237.png</url>
      <title>DEV Community: Yashvanth R S</title>
      <link>https://dev.to/olivespecs</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/olivespecs"/>
    <language>en</language>
    <item>
      <title>NASA and Papa John's Are Using the Same AI. Nobody's Asking If That's a Problem.</title>
      <dc:creator>Yashvanth R S</dc:creator>
      <pubDate>Wed, 29 Apr 2026 13:05:16 +0000</pubDate>
      <link>https://dev.to/olivespecs/nasa-and-papa-johns-are-using-the-same-ai-nobodys-asking-if-thats-a-problem-ho5</link>
      <guid>https://dev.to/olivespecs/nasa-and-papa-johns-are-using-the-same-ai-nobodys-asking-if-thats-a-problem-ho5</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/google-cloud-next-2026-04-22"&gt;Google Cloud NEXT Writing Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;At Google Cloud Next '26, Thomas Kurian stood on a stage in Las Vegas and declared: &lt;strong&gt;"The era of the pilot is over. The era of the agent is here."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The crowd loved it.&lt;/p&gt;

&lt;p&gt;I'm a student entering tech, and I found it unsettling.&lt;/p&gt;

&lt;p&gt;Not because AI agents are inherently bad. Not because I think Google is lying. But because in the same keynote — sometimes in the same breath — we were told that agentic AI is being deployed at &lt;strong&gt;NASA's Artemis II&lt;/strong&gt; for flight readiness checks &lt;em&gt;and&lt;/em&gt; at &lt;strong&gt;Papa John's&lt;/strong&gt; to remember your usual pizza order.&lt;/p&gt;

&lt;p&gt;Same platform. Same underlying technology. Same era.&lt;/p&gt;

&lt;p&gt;And nobody on that stage asked: should we trust these things equally?&lt;/p&gt;




&lt;h2&gt;The Use Case That Stopped Me&lt;/h2&gt;

&lt;p&gt;Here's the moment I'm talking about. During the keynote recap, Google highlighted real-world deployments of its Gemini-powered agents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;The Home Depot&lt;/strong&gt; — an AI phone and in-store assistant giving shoppers product advice&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Papa John's&lt;/strong&gt; — an Ordering Agent that uses context to remember "the usual" and speed up checkout&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;NASA/Artemis II&lt;/strong&gt; — agentic AI supporting flight readiness checks for a crewed lunar mission&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I genuinely had to re-read that list.&lt;/p&gt;

&lt;p&gt;We've moved from "AI suggests your next Netflix show" to "AI signs off on whether humans are safe to fly to the moon" — and the framing at Next '26 treated these as points on the same continuum of progress. Just more examples of agents &lt;em&gt;doing things&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;But a pizza recommendation going wrong means you get pepperoni when you wanted mushroom. A flight readiness check going wrong means people die.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The stakes are not the same. The accountability model cannot be the same. So why are we talking about them like they are?&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;What Google Actually Announced (And What It Quietly Implies)&lt;/h2&gt;

&lt;p&gt;To be fair, Google did announce a serious governance layer at Next '26. Agent Identity gives every agent a cryptographic ID and an auditable trail. Agent Gateway enforces security policies. Agent Anomaly Detection flags suspicious reasoning in real time.&lt;/p&gt;

&lt;p&gt;This is genuinely impressive engineering.&lt;/p&gt;

&lt;p&gt;But here's what I keep coming back to: &lt;strong&gt;an audit trail only matters after something goes wrong.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Agent Identity tells you &lt;em&gt;which&lt;/em&gt; agent made a bad call. It doesn't prevent the bad call. And when we're talking about autonomous agents running "long-running tasks" in "secure cloud sandboxes" without "constant prompting" — the question of &lt;em&gt;what counts as wrong&lt;/em&gt; becomes load-bearing.&lt;/p&gt;

&lt;p&gt;At Papa John's, wrong is "the order was weird." Recoverable in 30 seconds.&lt;/p&gt;

&lt;p&gt;At NASA, wrong is a category that involves congressional hearings.&lt;/p&gt;

&lt;p&gt;Google's governance stack doesn't distinguish between these. It treats accountability as a technical problem — verifiable identity, anomaly scores, audit logs. But accountability is also a &lt;em&gt;human&lt;/em&gt; problem. Who at Papa John's is responsible when the Ordering Agent upsells a customer into a $60 order they didn't want? Who at NASA signs their name to a flight readiness check that an agent participated in?&lt;/p&gt;

&lt;p&gt;I don't see those answers in the keynote. I don't see them in the product announcements either.&lt;/p&gt;




&lt;h2&gt;Why This Matters More to Me Than to the People on Stage&lt;/h2&gt;

&lt;p&gt;I'm entering tech right now. Not in five years — now, in the middle of this.&lt;/p&gt;

&lt;p&gt;Sundar Pichai announced that 75% of all new Google code is AI-generated. Thomas Kurian talked about engineers "orchestrating fully autonomous digital task forces." The message was clear, even if it was never said directly: &lt;strong&gt;the job is changing, and it's changing fast.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For a senior engineer with 15 years of context, that's an interesting career inflection point. For me, it's the only career I'll ever know.&lt;/p&gt;

&lt;p&gt;And what I'm watching is an industry that is very good at announcing capability and very bad at pausing to ask what the capability is for, who it's accountable to, and what happens when it fails in ways the demos didn't anticipate.&lt;/p&gt;

&lt;p&gt;The Home Depot demo at Next '26 was polished. Shaun White's snowboarding analytics were cool. The Unilever procurement agent was slick. Every demo worked perfectly.&lt;/p&gt;

&lt;p&gt;Real deployments at scale, under pressure, with edge cases the product team didn't think of — those don't look like keynotes.&lt;/p&gt;




&lt;h2&gt;The Question I'm Bringing Into My Career&lt;/h2&gt;

&lt;p&gt;I'm not anti-AI. I'm not doomer-posting. I think agentic AI is genuinely transformative and I want to build with it.&lt;/p&gt;

&lt;p&gt;But I'm going to carry one question with me into every project, every deployment, every "era of the agent" pitch I sit through:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are the stakes, and does the accountability model match them?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Papa John's and NASA can both use Gemini agents. That's fine. What can't be the same is how those deployments are governed, tested, questioned, and owned by humans when they go sideways.&lt;/p&gt;

&lt;p&gt;Google built the governance tools. The industry still has to decide to use them seriously — not just as compliance checkboxes, but as genuine answers to the question: &lt;em&gt;if this agent gets it wrong, who is responsible, and what happens next?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We're at the beginning of the agentic era. The norms aren't set yet. The defaults aren't locked in.&lt;/p&gt;

&lt;p&gt;That means right now — before the next 500 enterprise deployments, before the next keynote declares another era over — is exactly the right time to ask the uncomfortable questions.&lt;/p&gt;

&lt;p&gt;I'd rather ask them early than audit them later.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Written by a student watching the agentic era begin and trying to figure out what questions to bring into it.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>cloudnextchallenge</category>
      <category>googlecloud</category>
    </item>
  </channel>
</rss>
