Google says AI-generated code now accounts for 75% of all new code at the company, with that code still “approved by engineers.” That line came from CEO Sundar Pichai’s April 22, 2026 Cloud Next post, and it extends a public progression Google has now repeated for three straight reporting periods.
The numbers are straightforward. The meaning is less so. Google is not saying 75% of its entire codebase was written by AI, or that software engineers are gone. It is saying AI is now the default first drafter for most new code, while humans still decide what gets accepted.
## Google’s AI-generated code claim, in context
Google’s official trendline looks like this:
| Date | Google wording | Share of new code |
|---|---|---|
| Oct. 29, 2024 | “more than a quarter” generated by AI, then reviewed and accepted by engineers | >25% |
| Oct. 9, 2025 | “nearly half” generated by AI, reviewed and accepted by engineers | ~50% |
| Apr. 22, 2026 | “75% of all new code ... AI-generated and approved by engineers” | 75% |
That is a sharp climb in about 18 months: from just over a quarter to three-quarters.
The phrasing also stayed remarkably consistent. In 2024, Pichai said the code was “generated by AI, then reviewed and accepted by engineers.” In 2025, “generated by AI, reviewed and accepted by engineers.” In 2026, “AI-generated and approved by engineers.”
Those wording changes are small, but they tell you what Google wants the claim to mean. Not autonomous coding. Not lights-out software development. A workflow where machines draft and engineers govern.
Business Insider’s coverage helped popularize the 75% figure, but the core fact is from Google’s own posts. Secondary coverage often compresses this into “Google’s code is AI-generated,” which is not what Google said. The phrase is new code. That distinction is doing a lot of work.
## How Google’s metric appears to work
Google has not publicly defined the measurement. That is the main caveat.
The posts do not say whether AI-generated code is measured by:
- lines of code,
- code suggestions accepted,
- files changed,
- commits,
- pull requests,
- or merged production code.
Those are very different metrics. “75% of lines in a change were machine-drafted” is not the same as “75% of merged changes were primarily authored by AI.” One can be inflated by boilerplate. The other would be a stronger claim about engineering workflow.
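To make the ambiguity concrete, here is a small sketch with entirely invented numbers (Google has published nothing like this data) showing how the same change history can produce very different "AI share" figures depending on whether you count lines or merged changes:

```python
# Hypothetical illustration only: all figures below are invented.
# Each tuple is one merged change: (ai_drafted_lines, human_written_lines,
# which party primarily authored the change).
changes = [
    (400, 10, "ai"),     # large AI-drafted boilerplate, lightly edited
    (30, 70, "human"),   # human-led change with some accepted suggestions
    (20, 80, "human"),   # mostly hand-written logic
    (250, 5, "ai"),      # generated test scaffolding
]

# Metric 1: share of lines that were machine-drafted.
total_ai = sum(ai for ai, _, _ in changes)
total_human = sum(h for _, h, _ in changes)
share_by_lines = total_ai / (total_ai + total_human)

# Metric 2: share of merged changes primarily authored by AI.
ai_led = sum(1 for _, _, who in changes if who == "ai")
share_by_changes = ai_led / len(changes)

print(f"share by lines:   {share_by_lines:.0%}")   # ~81%, inflated by boilerplate
print(f"share by changes: {share_by_changes:.0%}") # 50%
```

With this toy data, the line-based metric lands around 81% while the change-based metric is 50% — the same workflow, two very different headlines. Without knowing which definition Google uses, the 75% figure cannot be placed on that spectrum.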
What the wording does reveal is the control point: approved by engineers.
That implies at least three things:
- A human is still accountable for the final change.
- Google does not present this as fully autonomous software output.
- The meaningful governance layer sits in review, acceptance, and editing — not in the raw generation step.
This is why the 75% number is useful as a workflow signal but weak as a productivity proof. If an engineer accepts an AI draft after minor edits, that looks very different from spending an hour rewriting a plausible but flawed patch. Google has not published the split.
That missing detail matters for any claim about labor displacement. “AI wrote the first draft” and “AI replaced the engineer” are not the same sentence.
## Why the 75% number matters now
The obvious reading is “Google is automating coding fast.” The more interesting reading is “Google is normalizing AI-written code as the engineering baseline.”
That is new. In 2024, “more than a quarter” still sounded like an experiment. By 2026, Google is presenting AI coding as routine enough to mention in a keynote-style company update without much ceremony. The number is less a benchmark than a declaration that this is now how engineering work is expected to happen.
Google also frames itself as “customer zero” for its own tools. That matters because this is partly an internal operations story and partly a product signal for Gemini and Google Cloud customers. If Google can say its own engineers use AI to produce most new code, that is marketing for enterprise adoption as much as internal reporting.
There is a broader pattern here. Companies are increasingly comfortable saying the quiet part out loud: AI systems are not just helping with search, writing, or support, but participating directly in software production. We’ve covered adjacent versions of that shift in AI builds AI, where model development itself starts to absorb AI assistance, and in the opposite direction in AI Dependence, where relying on these systems creates new operational and cognitive risks.
The public benchmark also creates pressure on peers. Once one large company says three-quarters of new code is machine-generated, every other big platform company gets the same question from investors, customers, and boards: why isn’t yours?
## What “approved by engineers” actually implies
This is the part that keeps the claim grounded.
Google’s wording consistently assigns engineers the jobs that matter most in high-stakes software work: review, acceptance, and approval. Those verbs suggest the company knows the raw generation number is not enough on its own. An AI can produce code quickly. The expensive part is knowing whether the code should exist, whether it is safe, and whether it quietly breaks something three services away.
That human approval layer is doing the real governance work. It is where security review, architecture judgment, test interpretation, rollback decisions, and edge-case reasoning still live. If you have read stories about AI coding tools producing convincing but unsafe output — including leaks and prompt-handling failures like the ones discussed in our Claude Code Leak coverage — this division of labor will feel familiar.
The skeptical reading is simple: Google’s metric probably tells us more about drafting than authorship.
That does not make it fake. It makes it narrower than the headline version. A company can simultaneously have:
- a genuine shift toward machine-drafted software,
- engineers still doing final accountability work,
- and an opaque internal metric that is hard to compare with anyone else’s.
All three can be true at once. In this case, they probably are.
## Key Takeaways
- Google’s official public figures for AI-generated code rose from more than 25% in October 2024 to nearly 50% in October 2025 to 75% in April 2026.
- Google’s claim applies to new code, not its full codebase.
- Across all three statements, Google paired the number with human oversight: code was “reviewed,” “accepted,” or “approved” by engineers.
- Google has not publicly explained whether the metric is based on lines of code, suggestions, commits, pull requests, or merged changes.
- The clearest reading is a workflow shift: AI is becoming the default drafting tool, while engineers remain the approval and governance layer.
## Further Reading
- Cloud Next '26: Momentum and innovation at Google scale — Google’s April 2026 post with the 75% claim.
- Gemini Enterprise: The new front door for Google AI in your workplace — Google’s October 2025 post citing “nearly half” of new code.
- Q3 earnings call: Remarks from our CEO — Google’s October 2024 source for “more than a quarter.”
- Google Says 75% of the Company's New Code Is AI-Generated — Secondary reporting that surfaced the claim and the internal-friction angle.
- Google says 75% of the company’s new code is AI-generated — Secondary coverage summarizing the update and prior benchmark.
Originally published on novaknown.com