This was a fun conversation with Ryan J. Salva, Senior Director of Product at Google Cloud and the brains behind Gemini CLI and Gemini Code Assist.
We talked about agents, agentic workflows, autonomous cars, industry trends in AI, and how job roles are slowly merging. He also explained how his team at Google is solving for and improving the overall developer experience through the CLI and Code Assist.
Azim Shaik : The one analogy that I really liked is the five-stage analogy comparing autonomous cars to the AI development that’s happening.
Ryan J. Salva : You’ve likely heard of the five stages of autonomous driving. Conceptualized years ago by companies like Apple and Tesla, this framework describes the progression from a car that offers simple lane assistance (Stage 1) to one where you are just a passenger, completely hands-off (Stage 5).
We can apply a very similar model to the evolution of AI in software development. Here’s how I see those five stages.
Stage 1: The Predictive Typist
This stage began five or six years ago. I was at GitHub at the time, leading the team responsible for GitHub Copilot. We came up with this “ghost text” — predictive text that anticipates the next one, two, or even ten lines of code you might be writing, suggesting entire functions or classes.
It certainly saves us a few keystrokes, but it’s not going to write an entire application for you.
Stage 2: The Single-File Conversationalist
About a year and a half after Stage 1, chat-based AI emerged. This stage allows a developer to ask questions about the code directly in front of them.
You can ask it to explain a particular file, make changes to it, or add code comments. It’s very useful, but its context is limited to that single file.
Stage 3: The Application-Aware Assistant
I believe this is where most developers are spending their days with AI assistants today.
In this stage, you’re still using a chat window, but now you’re asking questions about many components at once, not just a single file. You can ask, “How does this particular view interact with the data model?” The AI has to reason over the view, the model, the controller, and maybe even the database connection.
At this stage, you are thinking about your application, not just a file.
Stage 4: The Proactive SDLC Agent
This is the stage we’re all just beginning to experiment with, finding limited but successful use cases. Stage 4 is about giving the AI agent instructions and allowing it to respond to a Software Development Lifecycle (SDLC) event. A great example is an AI-assisted code review.
A developer submits code, and the agent automatically picks it up. It analyzes the code and reflects back on how it could be improved. This analysis is based on two things:
Best practices and principles baked into the model from its general knowledge.
Specific documentation and instructions provided by your team.
This is where the AI starts to be informed by your team’s unique context.
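The Stage 4 pattern, an agent triggered by an SDLC event that blends the model's general knowledge with your team's documented conventions, can be sketched roughly like this. This is a minimal illustration under stated assumptions: the function and event names are hypothetical, not a real Gemini or GitHub API.

```python
# Hypothetical sketch of a Stage 4 review agent. It reacts to an SDLC
# event (a submitted change) and combines general best practices with
# team-specific instructions. All names here are illustrative.

def build_review_prompt(diff: str, team_guidelines: str) -> str:
    """Combine the code change with the team's documented conventions."""
    return (
        "Review the following change and suggest improvements.\n"
        "Apply general best practices AND these team-specific rules:\n"
        f"--- TEAM GUIDELINES ---\n{team_guidelines}\n"
        f"--- DIFF ---\n{diff}\n"
    )

def on_pull_request_opened(diff: str, team_guidelines: str) -> str:
    # In a real system this prompt would be sent to a model; here we just
    # return it so the event-driven shape of the workflow is visible.
    return build_review_prompt(diff, team_guidelines)
```

In practice the event hook would come from your source host (a webhook or CI step), and the guidelines would be loaded from a file your team maintains alongside the code.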
Stage 5: The Autonomous Problem-Solver (The Vision)
This is the shared vision we’re all working toward.
This is the “L5” (Level 5) agent that goes out and solves problems on its own volition. You could instruct an agent: “You are responsible for performing SRE duties for this application. Go make sure we stay online with maximum availability.”
That agent would then be out there, monitoring for spikes in critical errors, evaluating new deployments against SLOs, and checking for known vulnerabilities. If it discovered a vulnerability in a new deployment’s dependency, it would autonomously perform a rollback.
This is the agent acting on its own. To be clear, Stage 5 is not where we are today. It remains the vision. Most of the industry is operating in Stage 3, while the most advanced teams are just beginning to find success with Stage 4.
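As a rough illustration of that Stage 5 vision, here is a minimal sketch of the decision such an autonomous SRE agent would be making: watch a deployment against its SLO and its bill of materials, and roll back when either check fails. The field names, thresholds, and actions are hypothetical.

```python
# Hypothetical sketch of a Stage 5 SRE agent's core decision. Thresholds
# and names are illustrative, not a real Google Cloud or Gemini API.
from dataclasses import dataclass, field

@dataclass
class DeploymentStatus:
    error_rate: float                       # fraction of failing requests
    vulnerable_dependencies: list = field(default_factory=list)  # CVE ids

def decide_action(status: DeploymentStatus, slo_error_budget: float) -> str:
    """Return the remediation an autonomous agent would take."""
    if status.vulnerable_dependencies:
        return "rollback"   # known vulnerability in a new dependency
    if status.error_rate > slo_error_budget:
        return "rollback"   # deployment is burning the error budget
    return "keep"

# usage
print(decide_action(DeploymentStatus(0.001), slo_error_budget=0.01))                  # keep
print(decide_action(DeploymentStatus(0.05), slo_error_budget=0.01))                   # rollback
print(decide_action(DeploymentStatus(0.001, ["CVE-2024-0001"]), 0.01))                # rollback
```

The hard part of Stage 5 is everything around this function: gathering trustworthy signals, executing the rollback safely, and knowing when to escalate to a human instead.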
Azim Shaik : You spearhead Gemini Code Assist and Gemini CLI at Google, and you’re probably talking to your customers. You mentioned that we are not there yet. Is this based on how you see it, or on the feedback you’re getting from your customers?
Ryan J. Salva : I was talking to a company in Berlin called Delivery Hero. They were using Gemini CLI essentially to monitor critical errors in their production environment, so that when their SREs were notified of a deployment failure of some kind, Gemini CLI would analyze all the critical errors, look back at the bill of materials, and provide some hypotheses about what might have gone wrong. The SRE could then respond quickly and recover from the failed deployment faster than they might have otherwise.
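The shape of that triage workflow, correlating critical errors with what actually changed in the deployment's bill of materials, can be sketched as follows. This is a hypothetical illustration: Delivery Hero's actual setup used Gemini CLI, and these function and field names are invented.

```python
# Sketch of the triage workflow: dependencies that changed in this
# deployment AND appear in the critical error text are the prime
# suspects. Names are illustrative, not Delivery Hero's real pipeline.

def triage(critical_errors: list[str],
           bill_of_materials: dict[str, str],
           previous_bom: dict[str, str]) -> list[str]:
    """Return hypotheses for the on-call SRE."""
    changed = {name for name, version in bill_of_materials.items()
               if previous_bom.get(name) != version}
    hypotheses = []
    for dep in sorted(changed):
        if any(dep in err for err in critical_errors):
            hypotheses.append(f"{dep} changed to {bill_of_materials[dep]} "
                              f"and appears in the critical errors")
    return hypotheses

# usage
print(triage(["TimeoutError calling payments-sdk"],
             {"payments-sdk": "2.4.0", "logging-lib": "1.0.0"},
             {"payments-sdk": "2.3.1", "logging-lib": "1.0.0"}))
# ['payments-sdk changed to 2.4.0 and appears in the critical errors']
```

An LLM-backed version would replace the substring match with reasoning over stack traces and changelogs, but the input/output contract is the same: errors and a diff of the bill of materials in, ranked hypotheses out.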
Azim Shaik : One of the things that you’ve mentioned, which really caught my attention, is how job roles are getting merged. Help us understand why that merge is happening and why it is essential.
Ryan J. Salva : I see my engineering teams spending a lot less time, not zero time, but a lot less time, writing if-then-else statements. As engineers using an AI-first methodology, we’re spending more time writing requirements, and they’re often very technical requirements. But we write those technical requirements to guide the agents to produce many more lines of code than we could have written in the same amount of time. At the same time that we’re making our engineers more efficient and more effective, we’re also reducing the barriers for other folks who haven’t historically and traditionally participated in the software development practice.
From a software development perspective, we’re working in abstractions. If it’s a product manager writing a giant requirements doc, we’re working at a layer of abstraction.
Here’s what I think is going to happen. Say I have a set of life experiences that make me particularly empathetic when it comes to customer needs, or a set of life experiences that give me really good taste when it comes to design. By the way, I’m not saying this is necessarily true of me; I’m just providing examples.
Maybe you have a set of life experiences that make you really good at anticipating failure points, really good at seeing what could go wrong ahead of time. Those are the types of qualities that tend to be associated with a really excellent engineer.
Path to Product Management:
Azim Shaik : A majority of the product managers that I’ve seen be it LinkedIn, be it people I met today, they all have engineering background. What is that line that they had to cross to come out of engineering role to become a product manager?
Ryan J. Salva : I started my career as an engineer; I spent 10 years, give or take, as an engineer. In my case, I also ended up going the startup route, and when you go the startup route, you’re wearing a lot of different hats.
By the way, at least in my particular product category, just about every product manager was an engineer at some point. But I think the thing that marks the crossover from engineer to product manager is wanting to spend more time understanding the way your tools are used, and more time thinking about what problems other people have that are worth solving.
Spend time talking to them. Spend time immersing yourself in that, and then spend time talking to all of the folks who pick up the work of the engineer after their check-in.
Spend time talking to the marketers, to the folks writing documentation.
If I as a product manager have one job, it’s to create clarity.
Azim Shaik : A few years ago we used to talk about 12-month plans, 18-month plans. I don’t think people are doing 18-month plans and releases anymore. What’s the new normal now?
Ryan J. Salva : When I took the job at Google about a year and a half ago, it was right around the time for annual planning. I basically said no. I told them I wasn’t going to do it because, frankly, it’s nonsensical.
In today’s environment, there is no way I could tell you where the business is going to be, let alone where the technology is going to be, a year from now.
Our Alternative: Continuous Planning
New information will always arrive. Because of this, we practice continuous planning.
Literally every week, we come back together as a team. We look at the backlog and ask ourselves three key questions based on what’s new:
What new information did we get this week?
How does that information need to change our list of priorities?
How do we need to adjust our execution plans based on that new information?
Execution Can Change Weekly. Strategy Shouldn’t.
This process allows us to adapt our execution plan — the how — on a weekly basis.
But it’s crucial to make this distinction: our execution might change, but our core strategy hopefully is not changing every week. That would be a failure mode. Continuous planning is about being agile in our execution while staying grounded in our long-term strategy.
Azim Shaik : Given the velocity of updates, could you give us some teasers as to what’s coming from Google in the next few months?
Ryan J. Salva :
We have a long list of priorities, but I want to focus on a few particular areas, starting with the developer experience (DevEx) and then bridging up to team-based functionality.
- Developer Experience: Prioritizing Extensibility. From a DevEx perspective, our primary focus is extensibility. No developer works with a single, narrow toolset; they are constantly switching between a wide range of services. We’re all using a gazillion of these tools — whether it’s Jira, ServiceNow, GitHub, Postman, LaunchDarkly, or countless others.
Because developers have to switch tools so often, we want Gemini CLI to be able to integrate neatly with all of them.
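One concrete flavor of that extensibility is the Model Context Protocol (MCP): Gemini CLI can be pointed at external tool servers registered in its settings file. The snippet below is a sketch along those lines, with a hypothetical server entry and a placeholder token; check the Gemini CLI documentation for the exact schema your version expects.

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_TOKEN": "..." }
    }
  }
}
```

Each registered server exposes its tools to the agent, which is how one CLI session can reach into an issue tracker, an API client, or a feature-flag service without bespoke integrations.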
- Team Enablement: Parallelization and Deployment. Beyond the individual, we see teams asking, “How do I do more in parallel? How do I equip these agents with more power?”
The answer lies in the ability to parallelize and specialize through features like sub-agents and background agents. This focus also extends beyond just the compute on a local laptop. We are building ways to deploy these agents so they can respond directly to events in the Software Development Life Cycle (SDLC).
- The Need for Observability. More and more organizations are asking for observability around the tools they’re using internally, and for good reason.
First, it’s expensive to burn through tokens. Second, we should all be practicing continuous improvement. Every day, I ask myself, “How could I do my job a little bit better?” While intuition is helpful, it’s also useful to have a mirror — quantitative data — that I can use to reflect and ask, “Is my perception matching reality in terms of how I’m using these tools?”
- Meeting Developers Where They Are: The Three Windows. Finally, when I think about the tools we use, I know that every developer I’ve ever known usually has three windows open on their desktop:
Their IDE
Their Command Line
Their Browser
We are focused on meeting developer needs in all three of those categories.
Command Line: Gemini CLI clearly checks this box.
IDEs: We already provide IDE extensions for VS Code and JetBrains. As you may know, Google also had a transaction with the folks at Windsurf several months ago. We are collaborating very closely with them to make sure that Google is participating in the IDE as an important surface.
Here is the full podcast link.