Feb 22, 2026 — If you're learning to code right now, or if you've been writing code for years, today's news will hit you differently.
OpenAI just published an internal experiment. Anthropic published another. And Cisco's president gave an interview that every developer needs to read.
Here's what happened, in plain English.
The Experiment That Broke Software Engineering
Let me start with OpenAI.
A team of three engineers. Five months. Zero human-written code. One million lines of code in production.
That's not a typo.
OpenAI ran an internal experiment where they literally banned humans from writing code. The rule was simple: no one on the team could write a single line of code by hand. Everything had to come from their AI agents (called Codex).
The result? A complete product with hundreds of internal users. Built entirely by AI.
Here's what makes this wild: even the instruction manual that told the AI how to work—a file called AGENTS.md—was written by the AI itself.
Think about that for a second. The AI wrote the rules for how other AIs should write code.
But Wait, It Gets Crazier
Now look at Anthropic.
A researcher named Nicholas Carlini decided to stress-test what happens when you let 16 AI agents work together on a massive project.
His goal? Build a C compiler from scratch. In Rust. One that could actually compile the Linux kernel.
Here's what happened over two weeks:
- 2,000+ AI coding sessions
- 100,000 lines of code produced
- $20,000 in API costs (which sounds expensive until you realize what a human team would cost)
- A compiler that can now build Linux 6.9 on x86, ARM, and RISC-V architectures
The compiler also builds QEMU, FFmpeg, SQLite, Postgres, and Redis, and passes 99% of compiler test suites. It even compiles Doom.
But here's the part that made me laugh:
This same compiler that can compile the Linux kernel... sometimes fails to compile "Hello World."
Why? Because the include paths weren't set correctly.
It's a perfect metaphor for where we are with AI coding. Capable of mind-blowing things. Still tripping on the basics.
Cisco Just Drew a Line in the Sand
Now read what Jeetu Patel, President of Cisco, said at an AI Summit in Amsterdam:
"We won't have developers at Cisco who don't choose AI as a core habit."
He didn't stop there.
Cisco has already built its first product with 100% AI-generated code. By the end of 2026, they expect at least half a dozen more.
The company is shifting from traditional development to what they call "spec-driven development." Instead of writing code line by line, engineers write specifications. AI generates everything.
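Patel didn't share what Cisco's specs actually look like. But to make the idea concrete, a spec in this style is a structured requirements document rather than code. Every name and number in this fragment is invented for illustration:

```
Feature: Rate-limit API tokens
  Behavior:
    - Reject requests beyond 100/minute per token with HTTP 429
    - Include a Retry-After header on every rejection
  Constraints:
    - No new external dependencies
    - p99 added latency under 2 ms
  Acceptance:
    - Load test at 120 req/min shows 429s past the limit
```

The human's job is getting the behavior, constraints, and acceptance criteria right. The AI's job is everything below that line.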
Here's the math Patel shared:
- Before: 8 humans on a team
- Now: 3 humans + 5 AI agents
- Result: Triple the output
Then he said something that should stick with every developer reading this:
"Don't worry about AI taking your job. Worry about someone using AI better than you definitely taking your job."
What This Actually Means for You
I know what you're thinking. "Should I stop learning to code? Am I wasting my time?"
Let me give you a better question.
The engineers at OpenAI didn't write code. But they weren't sitting around doing nothing. They were:
- Breaking big problems into smaller pieces the AI could handle
- Designing the rules and constraints that kept the AI from going off track
- Reviewing what the AI produced
- Figuring out what the AI needed next
One engineer described their job as "building fences" for the AI. The AI runs fast inside those fences. The engineer makes sure the fences are in the right place.
Here's another way to think about it.
Anthropic's research showed something fascinating: developers use AI in about 60% of their work, but they can only fully delegate 0-20% of tasks.
The rest is collaboration. You and AI, working together. The AI is your partner, not your replacement.
One engineer put it perfectly:
"I mainly use AI when I already know what the answer should look like. I built that intuition the old-fashioned way—by actually learning software engineering."
The people who benefit most from AI? The ones who already understand what good code looks like.
The Part Nobody's Talking About
Here's something both experiments revealed.
When OpenAI's AI wrote code, it would copy bad patterns from existing code. The codebase would slowly get messier. The engineers had to build automated "cleanup" systems—basically garbage collectors for code quality.
When Anthropic's AI team worked, they'd break existing features while adding new ones. The researcher had to build better tests and stricter validation.
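Neither team published their tooling, but a "garbage collector for code quality" can start as something very simple: a gate that counts violations of a banned pattern and blocks the merge when the count grows. This is a hedged sketch with an invented pattern and threshold, not either team's actual system:

```python
from pathlib import Path

# Hypothetical quality gate: the team has banned bare `except:` blocks,
# and no new violations are allowed to land.
BANNED = "except:"
THRESHOLD = 0

def count_violations(root: str) -> int:
    """Count occurrences of the banned pattern across the tree."""
    total = 0
    for path in Path(root).rglob("*.py"):
        total += path.read_text(errors="ignore").count(BANNED)
    return total

def gate(root: str) -> bool:
    """Return True if the tree is clean enough to merge."""
    return count_violations(root) <= THRESHOLD
```

Real systems get fancier (linters, duplication detectors, regression suites), but the principle is the same: the humans define what "messy" means, and automation keeps the AI's output inside that line.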
AI doesn't eliminate the hard parts of software engineering. It just changes where the hard parts live.
Someone still needs to:
- Decide what the system should do
- Design the architecture
- Set quality standards
- Catch the things AI misses
- Know when something doesn't feel right
That "know when something doesn't feel right" part? That's taste. That's judgment. That comes from experience.
The Best News You'll Read Today
Cisco's Jeetu Patel said something else that didn't make the headlines.
When asked whether AI means fewer developers, he said the opposite will happen.
"Instead of zero software engineers left, the future could see eight billion software engineers as code generation is democratized."
Think about that. Not fewer developers. More.
People who never wrote a line of code—lawyers, marketers, designers, operators—will start building their own tools. Not because they become "developers" in the traditional sense, but because they can tell AI what they need and get working software.
Anthropic's report called this "the collapse of the boundary between people who can code and people who can't."
What I'd Tell My Younger Self
If I were starting my coding journey today, here's what I'd focus on:
First, learn the fundamentals. The engineers getting the most from AI are the ones who already understand architecture, system design, and what quality looks like. AI is a force multiplier. It multiplies whatever you already have.
Second, learn to work with AI. The engineers in these experiments weren't writing prompts like "write a function that does X." They were designing systems where AI could operate autonomously for hours. That's a different skill.
Third, focus on problems, not syntax. The syntax part is exactly what AI is getting good at. The problem-solving part? That's still you.
Fourth, develop your taste. The ability to look at code and know whether it's good, whether it's maintainable, whether it's solving the right problem—that becomes more valuable, not less.
The Bottom Line
OpenAI built a million lines of code without humans touching the keyboard.
Anthropic built a Linux-capable compiler with 16 AI agents working in parallel.
Cisco is betting its entire engineering future on AI-augmented teams.
The question isn't whether this is happening. It's happening right now.
The question is whether you'll be the developer who learns to work with these tools, or the developer who gets left behind because someone else did.
What do you think? Have you started using AI in your development work? Are you worried about where this is going, or excited? Drop a comment below—I read every single one.
If this article helped you think differently about AI and coding, consider following for more simple, honest takes on where technology is heading.
Disclosure: AI helped me write this — but the bugs, fixes, and facepalms? All mine. 😅
Every line reviewed and tested personally.