Anthropic's Engineers Have Stopped Writing Code. Here's What That Actually Means.
In early 2026, Boris Cherny — the person who built Claude Code at Anthropic — announced he hasn't manually written or edited a single line of code since November 2025. Not a variable name. Not a comment. Nothing.
100% of his production code is written by AI.
Across the rest of Anthropic, it's described as "pretty much 100%" as well. Dario Amodei said at Davos that some of his engineers now tell him: "I don't write any code anymore. I just let the model write the code, I edit it."
And then he added: Anthropic might be six to twelve months away from when AI handles most, maybe all, of what software engineers do end-to-end.
That was January 2026. We are now in April.
This is not a thought experiment. This is not a Silicon Valley trend piece. This is happening — and most businesses, founders, and even developers haven't fully processed what it means for them.
As someone who ran engineering teams before founding Innovatrix Infotech, I want to give you the unfiltered version — not the hype, not the panic, but the actual picture of what this shift looks like from inside a 12-person dev team that is already operating this way.
What Actually Happened at Anthropic
The story starts with Claude Code — an agentic coding tool that reads codebases, edits files, runs commands, and integrates with development pipelines. It was released in 2025 and almost immediately became something its own creators hadn't fully anticipated: not just a coding assistant, but a complete replacement for the act of writing code.
But the more revealing data point isn't about one engineer. It's about what Anthropic's research team did as a stress test.
They set up 16 Claude agents working in parallel on a shared codebase, inside Docker containers, with Git-based synchronization between them. No human was actively supervising. The task: build a Rust-based C compiler from scratch, capable of compiling the Linux kernel.
The result: nearly 2,000 Claude Code sessions, approximately $20,000 in API costs, and a 100,000-line compiler that successfully builds Linux 6.9 across x86, ARM, and RISC-V architectures.
That is not autocomplete. That is an autonomous engineering team.
The engineers involved described their role as designing the architecture, writing the test harness that kept agents on track, and handling the moments when agents got genuinely stuck. They were not writing code. They were managing systems that wrote code.
This is what Boris Cherny means when he says the software engineering title is going to "start to go away" by end of 2026. He is not talking about developers becoming unemployed. He is talking about the job fundamentally changing — from a person who writes syntax to a person who designs systems, reviews outputs, owns outcomes, and keeps agents unblocked.
His prediction: everyone becomes a product manager, and everyone codes. The word "builder" replaces "software engineer."
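The multi-agent experiment described above can be sketched in miniature. This is not Anthropic's actual harness — their agents synchronized through Git inside Docker containers — but the coordination pattern is the same: many workers pull tasks from a shared pool and serialize the "merge to main" step so results land one at a time. All names here are illustrative.

```python
import threading
from queue import Queue, Empty

def run_agents(tasks, num_agents=4):
    # Shared task pool: each agent claims work until the pool is drained.
    pool = Queue()
    for t in tasks:
        pool.put(t)

    merged = []                  # stands in for the shared main branch
    merge_lock = threading.Lock()

    def agent(agent_id):
        while True:
            try:
                task = pool.get_nowait()
            except Empty:
                return           # no tasks left: this agent is done
            result = f"agent-{agent_id}:{task}"  # "do the work"
            with merge_lock:     # only one merge lands at a time,
                merged.append(result)            # like a serialized git push

    threads = [threading.Thread(target=agent, args=(i,))
               for i in range(num_agents)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return merged

results = run_agents(["lexer", "parser", "codegen"], num_agents=3)
print(len(results))  # every task lands exactly once → 3
```

The interesting design choice in the real experiment is the same one this toy makes: conflict is handled by serializing integration, not by preventing parallel work.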
We Are Already Living This
At Innovatrix Infotech, we didn't wait for the Anthropic announcement to validate what we were already seeing.
Our entire development workflow is now agent-driven. Claude, Codex, and a custom orchestrator handle implementation. Our engineers — and when I say engineers, I mean AI agents — operate in a TDD-first environment. Test cases are written before a single line of implementation exists. Architecture Decision Records (ADRs) and Technical Design Records (TDRs) are generated and maintained automatically. Documentation is produced as a byproduct of the development process, not an afterthought.
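The TDD-first loop above can be shown in a minimal form: the test exists before any implementation does, and the agent's only job is to make it pass. The function and test below are illustrative examples, not code from our actual codebase.

```python
# Spec-as-test, written BEFORE the implementation exists.
# The agent implements against it; passing the test is the definition of done.
def test_normalize_phone():
    # Indian mobile numbers normalize to E.164, whatever the input format.
    assert normalize_phone("98765 43210") == "+919876543210"
    assert normalize_phone("+91-98765-43210") == "+919876543210"
    assert normalize_phone("09876543210") == "+919876543210"

# What the agent produces to satisfy the spec:
def normalize_phone(raw: str) -> str:
    digits = "".join(ch for ch in raw if ch.isdigit())
    if digits.startswith("91") and len(digits) == 12:
        digits = digits[2:]          # strip country code if present
    digits = digits.lstrip("0")      # strip trunk prefix
    return "+91" + digits

test_normalize_phone()  # passes silently
```

The test is the contract; the implementation is disposable. That inversion is what makes agent-written code reviewable at all.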
Our own website — the entire codebase — was generated by AI. The code quality, documentation coverage, and test suite are better than anything we shipped manually in the years before this shift.
Here is what changed for our clients: we deliver faster. Significantly faster. But the pricing didn't drop; it stayed the same or went up. What improved wasn't the speed of typing. What improved was the quality of what we ship: no redundant boilerplate, no overlooked edge cases, better test coverage, proper documentation. Clients get a codebase that a senior engineer can walk into and understand immediately.
For D2C brands we work with — including Shopify storefronts where a broken checkout or a slow mobile experience directly costs revenue — this matters enormously. When we rebuilt FloraSoul India's Shopify stack using our current workflow, mobile conversions climbed 41% and average order value increased 28%. That's not because we typed faster. It's because we shipped cleaner, tested, production-grade code.
The Warning: What This Means for Junior Developers
I want to be direct here because most takes on this subject are either catastrophizing or dismissive.
Entry-level developers are at structural risk. Not because AI will fire them — but because the traditional path into professional software development is being disrupted at its foundation.
Junior developers have historically learned by doing the repetitive work: writing boilerplate, debugging small issues, implementing well-defined features from tickets. Those tasks gave them thousands of hours of experience that compounded into senior-level intuition.
Those tasks are now handled by agents.
As Andrej Karpathy noted, even his own ability to write code manually has started to atrophy. He uses AI for everything. The skill is being outsourced, and like any outsourced skill, it weakens through disuse.
I will be honest about my own position: I would not hire a fresh junior developer today to do what junior developers have traditionally done. Not because I don't respect them — but because an AI agent does it faster, with better test coverage, and doesn't need onboarding.
What I would hire for — what actually has value right now — is someone who understands systems. Someone who can look at what an agent produced and identify the architectural flaw the agent didn't catch. Someone who knows why a particular approach to caching in a Next.js app will cause problems at scale, even if the code looks correct. Someone who can write a test harness that keeps agents honest.
That is not a junior skill. And the path to developing it just got shorter in some ways and harder in others.
If you are a developer early in your career, here is my honest advice: stop optimizing for syntax. Start optimizing for architecture. Learn systems design. Learn how to review and interrogate code you didn't write. Learn to think about trade-offs at scale, not just whether something works.
Because that is what the market will pay for.
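The "test harness that keeps agents honest" idea from above is worth making concrete. A hedged sketch: instead of eyeballing generated code, you run it against invariants the agent cannot game. The cache-key scenario and all names here are hypothetical.

```python
# A harness that verifies PROPERTIES of agent output rather than syntax.
def check_cache_key(make_key):
    # Invariant 1: the same logical request must yield the same key,
    # regardless of parameter order.
    a = make_key("/products", {"page": 1, "sort": "price"})
    b = make_key("/products", {"sort": "price", "page": 1})
    assert a == b, "key must not depend on param order"
    # Invariant 2: distinct requests must not collide.
    c = make_key("/products", {"page": 2, "sort": "price"})
    assert a != c, "distinct requests must not share a key"
    return True

# An agent-generated candidate that passes: sorting the params makes
# the key order-insensitive while keeping distinct requests distinct.
def make_key(path, params):
    query = "&".join(f"{k}={v}" for k, v in sorted(params.items()))
    return f"{path}?{query}"

print(check_cache_key(make_key))  # → True
```

Writing invariants like these requires knowing *why* the naive version fails at scale — exactly the systems-level skill the market pays for.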
The Celebration: What This Unlocks for Product Builders
Let me flip the angle entirely, because there is a genuinely exciting side to this that I don't think gets enough attention.
For anyone building a product — a founder, a D2C brand owner, a business that has always wanted to move faster but was constrained by engineering bandwidth — this shift removes constraints that felt permanent.
The barrier between "I have an idea" and "this is running in production" has collapsed. Not for everything, not without skill, but for a class of work that used to require months and significant capital.
At Innovatrix, we are a DPIIT-recognized startup with AWS and Shopify Partner status. When we build AI automation pipelines for clients — using n8n, custom Python orchestrators, or integrated Shopify flows — we are not doing it by having our engineers manually write every function. We are directing agents toward outcomes, reviewing what they produce, owning the architecture decisions, and shipping.
The laundry services client we work with saves over 130 hours per month through a WhatsApp AI agent we built. That agent handles customer queries, booking confirmations, and follow-ups. The code powering it is AI-generated, maintained by AI-assisted tooling, and monitored through automated pipelines.
That is what this shift unlocks for businesses willing to work with partners who understand agentic engineering, not just agencies that call themselves "AI-powered" while someone in the back is still writing jQuery.
The Tactical Guide: What Founders Should Actually Do
If you are a founder or a business owner trying to translate this into decisions, here is what matters:
1. Ask your development partner directly: what percentage of your code is AI-generated, and how do you review it?
This is now a legitimate due-diligence question. Not because AI-generated code is bad — the evidence says the opposite — but because the answer tells you whether they are using modern tooling and, more importantly, whether they have a review process that catches what agents miss.
At Innovatrix, our answer is: the majority of implementation is AI-generated, reviewed against TDD test suites, validated through ADR/TDR documentation, and audited by engineers who understand the architecture at a systems level.
2. Stop treating AI automation as a separate "add-on" service.
Every piece of software you build in 2026 should have an automation layer. Not as a nice-to-have, but as part of the architecture from day one. Whether that is a Shopify storefront with automated cart recovery and personalization flows, a web application with AI-driven customer support, or an internal tool with intelligent routing — the cost of adding this after the fact is always higher than building it in.
If you are planning a Shopify build or a web application and your vendor is not talking about AI automation as part of the initial scope, that is a gap worth pushing on.
3. Understand that "agentic engineering" requires architectural skill, not just tool adoption.
The failure mode I see most often: a team installs Cursor or GitHub Copilot, has engineers accept autocomplete suggestions at high rates, and calls it AI-driven development. The code gets worse, not better, because nobody changed the review process or the architectural thinking.
Real agentic engineering means the humans in the loop are operating at a higher level of abstraction. They are not reviewing whether the syntax is correct. They are evaluating whether the approach is right, whether the test coverage is honest, whether the architecture will hold at the scale the product needs to reach.
That is a senior engineering skill. It is the skill that remains valuable as everything else gets automated.
4. On checkout, payments, and Shopify's critical paths — AI-generated code still needs deep human review.
I want to be clear about one area where we do not reduce human oversight: payment flows, checkout logic, and any system that touches financial transactions. Not because agents write worse code here — they don't — but because the blast radius of an error is high and the edge cases are numerous.
As a Shopify Partner, we have seen how a subtle miscalculation in discount stacking or a race condition in cart updates can cause real revenue loss before it is caught. These areas warrant additional manual review layers regardless of who or what generated the underlying code.
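The discount-stacking trap is easy to show. This is an illustrative sketch, not Shopify's pricing engine: two sequential 20% discounts compound to 36% off, not 40%, and naive float arithmetic can mis-round the final paisa. A human reviewer checks for exactly this class of error.

```python
from decimal import Decimal, ROUND_HALF_UP

def apply_discounts(price, percents):
    # Apply percentage discounts sequentially using Decimal arithmetic,
    # rounding only once at the end, as a pricing path should.
    p = Decimal(str(price))
    for pct in percents:
        p *= (Decimal(100) - Decimal(str(pct))) / Decimal(100)
    return p.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# Sequential stacking: 20% then 20% off 1000.00 leaves 640.00
# (a 36% total discount), NOT the 600.00 a naive additive 40% gives.
print(apply_discounts("1000.00", [20, 20]))  # → 640.00
```

An agent can write this code correctly; the human's job is to confirm the business rule — compound versus additive stacking — matches what the merchant actually intends, because both versions "look correct."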
The Bigger Picture
Boris Cherny said something on Lenny Rachitsky's podcast that I think is the most honest framing of where we are: coding has been "practically solved" for him, and he believes it will be solved for everyone — regardless of domain — by the end of 2026.
McKinsey data from earlier this year shows AI-centric organizations are achieving 20–40% reductions in operating costs and 12–14 point improvements in EBITDA margins. In India alone, Gartner predicts that 40% of enterprise applications will embed task-specific agents by the end of 2026, up from less than 5% in 2025.
The shift is happening faster than most organizations are adapting. The companies and teams that come out ahead are not the ones who resist it or the ones who blindly automate everything. They are the ones who understand the new division of labour: agents execute, humans architect, review, and own outcomes.
At Innovatrix, that is the model we operate on every day. And it is the model we bring to the brands and businesses we work with across India, Dubai, Singapore, and beyond.
If you are building something and you want to understand what this shift means for your specific situation — whether you are evaluating a tech partner, rebuilding your stack, or trying to figure out where AI automation fits in your product — book a free strategy call. No pitch, just an honest conversation about what we are seeing.
Frequently Asked Questions
Is AI-generated code safe to use in production?
Yes, when reviewed properly. The key word is reviewed. AI agents can produce production-grade code with excellent test coverage, but they can also make subtle architectural errors that a senior engineer would catch immediately. The model for production use is: agents generate, engineers review against test suites and architectural standards, then ship. This is what we do at Innovatrix.
Does this mean software development agencies are becoming obsolete?
The opposite. Agencies that understand agentic engineering can now deliver better quality at faster timelines than was possible before. What becomes obsolete is the agency that has juniors manually writing boilerplate and calling it "custom development." The bar for quality has risen, not fallen.
What skills should a developer build right now to stay relevant?
Systems architecture. The ability to evaluate and interrogate code you didn't write. Deep understanding of the domain you work in — whether that is ecommerce, fintech, logistics, or anything else. Test-driven thinking. The ability to write specifications that agents can execute against. These skills compound; syntax knowledge alone does not.
As a D2C founder with no technical background, how should I think about this?
You now have more leverage than ever in conversations with your tech partner. You can reasonably ask: how are you using AI tooling? What is your review process? Can you deliver faster than six months ago? If the answer to the last question is no, that is worth understanding. The productivity gains from agentic engineering are real. A good partner should be able to articulate how that benefits you.
What is the risk of over-relying on AI agents for code generation?
The main risks are: architectural drift (agents optimise locally but miss the bigger picture), test coverage that looks complete but misses real-world edge cases, and context window limitations in very large codebases. These are real risks. The mitigation is not to write more code manually — it is to have stronger architectural oversight, better test harness design, and clear documentation standards. All of which are solvable with the right process.
Anthropic is an AI company — isn't their experience unique and not applicable to normal businesses?
Partly. The speed of adoption at a frontier AI lab is faster than most. But the tools they are using — Claude Code, agentic pipelines, automated review systems — are available to everyone. The gap between Anthropic's current state and what an average development team could implement with the right process is months, not years.
What does this mean for Indian software development companies specifically?
India has one of the largest pools of software engineering talent in the world. That talent is now facing a fundamental repositioning. The opportunity is significant for engineers and companies that move up the value stack toward architecture, systems design, and domain expertise. The risk is for those who compete purely on volume of code output — that competition is over. An agent will always win on volume.
How do I evaluate whether a development partner is actually using agentic engineering versus just claiming to?
Ask for their process documentation. Ask whether they use test-driven development, what their ADR/TDR standards look like, and what their code review process catches. Ask specifically: what percentage of implementation is AI-generated, and how is it reviewed? A partner genuinely operating this way will have clear, specific answers. A partner using the language without the substance will deflect.
Rishabh Sethia is the Founder & CEO of Innovatrix Infotech. Former Senior Software Engineer and Head of Engineering. DPIIT Recognized Startup. Shopify Partner, AWS Partner, Google Partner.