Konark Sharma
From APIs to Agents: The Real Shift at Google Next ’26

Google Cloud NEXT '26 Challenge Submission

This is a submission for the Google Cloud NEXT Writing Challenge

I watched Google Cloud Next ’26 thinking I’d just see better models, faster APIs, maybe some cool demos.

But I didn’t expect what I saw. This felt different. Not like “AI is improving”. More like the way we build software is changing.

The Moment That Stuck With Me

It started simple. JayTee Hazard was creating music. Tina Tarighian was generating visuals.

But the interesting part wasn’t the demo. It was what was happening behind it.

Gemini was:

  • listening to the music
  • generating code
  • updating visuals in real time

And it just kept going. No “run again”. No “generate once”.

It was a loop. That’s when it clicked for me. This is not prompt-to-output anymore. This is: input → reasoning → tool use → execution → feedback → repeat.


The difference is not just output. It’s the system behind it.

That loop is the foundation of agentic systems.
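If I had to write that loop down, it would look something like this. A minimal sketch in plain Python, where `Step`, `reason`, and the tools are hypothetical stand-ins, not any real Google API:

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins: `Step` is what one reasoning call returns,
# `reason` is the model, `tools` is a dict of callables.
@dataclass
class Step:
    done: bool = False
    answer: str = ""
    tool_name: str = ""
    tool_args: dict = field(default_factory=dict)

def agent_loop(goal, tools, reason, max_steps=10):
    """input -> reasoning -> tool use -> execution -> feedback -> repeat"""
    context = [goal]
    for _ in range(max_steps):
        step = reason(context)          # reasoning over everything seen so far
        if step.done:
            return step.answer          # the loop, not the user, decides when to stop
        result = tools[step.tool_name](**step.tool_args)  # tool use + execution
        context.append(result)          # feedback: the result re-enters the context
    return "ran out of steps"
```

The whole shift is in that `for` loop: the model’s own output feeds the next iteration instead of landing in front of a human after one pass.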

Then This Number Hit Me

Sundar Pichai mentioned:

~75% of new code at Google is AI-generated and reviewed by engineers

I had to pause there. Not because it’s surprising, but because it confirms something we already feel. We’re not writing everything anymore.

We’re:

  • guiding
  • reviewing
  • correcting

Almost like we moved from writing functions to reviewing systems.

The Part That Felt Real

The most interesting part wasn’t the models. It was how they’re actually using this internally. They gave an example of a complex code migration.

Instead of one system, they had:

  • a Planning Agent
  • an Orchestrator Agent
  • a Coding Agent
  • and Engineers

Working together. And they completed it 6x faster. That’s not “AI helping”. That’s a team.

So What Does This Mean for Us?

This is where things started making sense for me.

1. We’re Not Writing Prompts. We’re Designing Systems

With the Agent Development Kit (ADK), you don’t just create one agent. You define:

  • roles
  • capabilities
  • tool access
  • execution flow

Each agent becomes a stateful unit with memory + tools.

It felt like building microservices, but instead of APIs you’re wiring intelligence.
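Here’s roughly what that looks like. I’m going off the public ADK Python quickstart, so treat the exact class and parameter names as my assumption rather than a spec:

```python
# Based on my reading of the ADK Python quickstart; the tool is made up.
from google.adk.agents import Agent

def check_build_status(service: str) -> dict:
    """Hypothetical tool: report the build status of a service."""
    return {"service": service, "status": "green"}

planner = Agent(
    name="planner",                      # role
    model="gemini-2.0-flash",            # which model reasons for this agent
    instruction="Break the request into steps and delegate.",  # capabilities
    tools=[check_build_status],          # tool access
)
```

Execution flow then comes from composing agents. ADK ships sequential, parallel, and loop agent types for exactly that, which is what makes it feel like microservices.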

2. The API Layer Is Getting Abstracted (MCP)

This was subtle but huge. With the Model Context Protocol (MCP) built in now:

  • tools expose capabilities in a standard format
  • models understand how to use them
  • context is passed in a structured way

Instead of:

  • writing REST calls
  • parsing responses
  • handling retries

Your agent does tool invocation via context. Think of MCP as a contract between models and tools.
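The tool side of that contract looks something like this with the MCP Python SDK’s FastMCP helper. The decorator pattern is from the SDK; the tool itself is made up:

```python
# MCP Python SDK (FastMCP). The tool here is hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("service-tools")

@mcp.tool()
def get_service_logs(service: str, lines: int = 50) -> str:
    """Fetch the most recent log lines for a service."""
    return f"(last {lines} log lines for {service})"

if __name__ == "__main__":
    mcp.run()
```

The type hints and the docstring become the capability description the model reads. No REST client, no response parsing on your side.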

3. Agents Talking to Agents (A2A)

With A2A (Agent-to-Agent), agents can:

  • discover other agents
  • request capabilities
  • validate outputs

Each agent exposes something like:

```json
{
  "name": "evaluator",
  "capabilities": ["validate", "score", "simulate"]
}
```

And another agent can:

```python
evaluator.evaluate(plan)
```

This creates dynamic multi-agent coordination.
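On the wire, that discover-then-invoke dance is plain HTTP plus structured JSON. This is a loose sketch, not the real A2A wire format: the card path is how I remember the spec, and the `/tasks` endpoint and payload shape are invented for illustration.

```python
# Loose sketch of A2A-style discovery and invocation.
# Endpoint and payload shapes here are assumptions, not the spec.
import requests

BASE = "https://evaluator.internal.example"  # hypothetical agent host

# 1. Discover: fetch the agent's capability card
card = requests.get(f"{BASE}/.well-known/agent.json").json()
assert "validate" in card["capabilities"]

# 2. Request a capability with structured input
response = requests.post(f"{BASE}/tasks", json={
    "capability": "validate",
    "input": {"plan": ["migrate schema", "switch traffic"]},
})

# 3. Validate the output before trusting it
print(response.json()["result"])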

4. The UI Part Was Unexpected

This one felt weird at first. Instead of building dashboards manually, agents generate UI based on context.

Using A2UI:

  • data → structured output
  • output → UI components

So instead of build dashboard → connect data, it becomes generate data → UI gets created.

This flips the flow completely.
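I don’t know A2UI’s actual schema, so here’s the idea in miniature with made-up component types: the agent emits structured output, and the UI is derived from it rather than built first.

```python
# Made-up component schema to show the direction of the flow:
# structured output first, UI derived from it afterwards.
agent_output = {
    "title": "Deploys this week",
    "components": [
        {"type": "metric", "label": "Success rate", "value": "98%"},
        {"type": "chart", "kind": "line", "series": [3, 5, 2, 7]},
    ],
}

RENDERERS = {
    "metric": lambda c: f"[{c['label']}: {c['value']}]",
    "chart": lambda c: f"[{c['kind']} chart of {c['series']}]",
}

print(agent_output["title"])
for component in agent_output["components"]:
    print(RENDERERS[component["type"]](component))
```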

5. Memory Makes Agents Actually Useful

One of the biggest limitations I’ve felt: AI forgets everything.

With:

  • session state
  • memory bank

Agents can:

  • store context
  • recall past decisions
  • refine outputs

So instead of stateless prompt → response, you get stateful system → evolving behavior.

That’s a big shift.
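To make the contrast concrete, here’s a toy `MemoryBank`, a stand-in class I made up rather than the real service (real memory banks do semantic retrieval; substring matching stands in here):

```python
# Toy stand-in for a memory bank, not the real API.
class MemoryBank:
    def __init__(self):
        self._facts: list[str] = []

    def store(self, fact: str) -> None:
        self._facts.append(fact)

    def recall(self, query: str) -> list[str]:
        return [f for f in self._facts if query.lower() in f.lower()]

memory = MemoryBank()
memory.store("User prefers Go for backend services")

# A later session: past decisions flow back into the next prompt
context = memory.recall("backend")
prompt = f"Known context: {context}\nTask: scaffold a new backend service"
print(prompt)
```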

6. DevOps Is Turning Into System-Level Reasoning

This part felt unreal. Using Cloud Assist:

  • infra migration → prompt
  • debugging → automated reasoning
  • fixes → suggested patches

Under the hood: model + logs + context + tool execution

So instead of:

  • checking logs manually
  • tracing errors

The system does root-cause reasoning + suggestion.
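I have no idea what Cloud Assist looks like inside, but the shape the demo implied is one composed pass, something like:

```python
# Not the Cloud Assist API, just the shape the demo implied:
# logs come in via a tool, the model reasons over them, a fix comes out.
def diagnose(service: str, symptom: str, fetch_logs, model) -> str:
    logs = fetch_logs(service, window="30m")   # tool execution
    return model(                              # root-cause reasoning
        f"Service: {service}\n"
        f"Symptom: {symptom}\n"
        f"Recent logs:\n{logs}\n"
        "Identify the likely root cause and suggest a minimal patch."
    )

# Usage with hypothetical helpers:
# patch = diagnose("billing", "5xx spike", fetch_logs=tail_logs, model=gemini)
```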

What This Means in a Real Project

If I think about building something today:

Before:

  • write backend
  • connect APIs
  • manage state
  • build UI

Now:

  • define agents (planner, executor, validator)
  • connect via A2A
  • use MCP-enabled tools
  • let UI emerge via A2UI
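Strung together (with toy functions standing in for the real agents), the whole pipeline reads less like an app and more like a team:

```python
# Toy stand-ins for the three agents; the real versions would be
# ADK agents talking over A2A, with MCP tools behind them.
def planner(goal: str) -> list[str]:
    return [f"{goal}: step {i}" for i in (1, 2, 3)]

def executor(plan: list[str]) -> list[str]:
    return [f"done {step}" for step in plan]

def validator(results: list[str]) -> dict:
    return {"ok": all(r.startswith("done") for r in results), "results": results}

report = validator(executor(planner("migrate billing service")))
print(report)  # A2UI-style, the UI would be generated from this structure
```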

The shift is not just speed. It’s how I think about building systems.

The Real Takeaway

I’m not thinking “AI will replace developers”. I’m thinking the role is changing.

Before: “How do I write this?”
Now: “How do I design a system that can solve this?”

And honestly, I’m still figuring out what that means for me.

If you’re building with AI right now, are you still writing prompts? Or are you starting to design systems?
