The work moved. The craft didn't.
A weekly read for devs, tech leads, and anyone keeping up with where AI is taking software engineering.
Try This Week
How to keep up
The New Senior Engineer
Chris Parsons — chrismdp.com
Less reviewing, more training.
If you're still tied to your IDE, you're working a year behind. The 2026 toolchain recommendation: install Claude Code or Codex CLI, not Copilot. The senior engineer's job has shifted from writing code or reviewing diffs to training the AI to write better code.
The central point: the harness around the model matters as much as the model itself. Leverage lives in standing instructions in AGENTS.md or CLAUDE.md and in skill files that teach the agent your conventions.
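As an illustration, here is a minimal sketch of what such standing instructions might look like. The file name follows the convention mentioned above; the specific rules are invented for the example, not taken from the article:

```markdown
# AGENTS.md — standing instructions the agent reads on every run

## Conventions (hypothetical examples)
- Use the existing `Result` type for fallible functions; never raise in domain code.
- Every bug fix lands with a regression test in `tests/regressions/`.
- Run `make lint test` before declaring a task done.

## Verification
- If a change touches `billing/`, also run the contract tests.
```

The point is that these rules live in the repo, not in a chat session, so every future agent run inherits them.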
How fast you can tell whether the output is right separates teams that ship from teams that stall.
As Chris puts it: "Coding with AI is now the default. The question is whether you are doing it as a reviewer, a prompter, or a trainer. The trainer role compounds."
The Harness
Henrique Bastos — blog post
Some teams keep getting better. Others run in place forever.
The difference is whether they've built a harness — something that turns repetitive work into a loop.
Most processes are designed from scratch every time, with context living in someone's head. Each run costs the same as the first. In a loop, each run leaves things ready for the next. Starting again is almost free.
AI made this personal. When an agent produces garbage, the instinct is to fix the prompt. But that fix dies with the session. The real move is to fix the environment — add a test, tighten a boundary, write a rule. That fix sticks. Every future run inherits it.
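"Write a rule" can be as concrete as a test that encodes the convention the agent keeps breaking. A minimal sketch, with assumed names (the `sqlalchemy` import and the directory layout are illustrative, not from the post):

```python
# Instead of re-prompting the agent to stop importing the ORM inside
# domain code, encode the boundary as a test the whole suite runs.
import pathlib


FORBIDDEN = "import sqlalchemy"  # assumed convention: no ORM in the domain layer


def check_boundary(domain_root: str) -> list[str]:
    """Return paths of domain-layer files that import the ORM directly."""
    violations = []
    for path in pathlib.Path(domain_root).rglob("*.py"):
        if FORBIDDEN in path.read_text():
            violations.append(str(path))
    return violations


def test_domain_has_no_orm_imports():
    assert check_boundary("src/domain") == []
```

A prompt fix dies with the session; a test like this fails loudly on every future run, human or agent.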
Quality & Craft
How to think well in the AI era.
Below the Illusion of Progress
Kent Beck — LinkedIn
The new failure mode: when the code looks fine but isn't.
Most teams muddle through problems without really solving them. The result is software that mostly works but is hard to change. Not great, but it holds.
There's a worse state: when it becomes possible to claim things are working when they aren't, and AI tools make that claim easy. Complexity builds, and progress slows.
No clear solution yet for working with AI here. Better data, tests, prompting — all might help, but none is enough. The first step is awareness.
The Principles Don't Change
Robert C. Martin — X/Twitter
AI raises the level of abstraction — it doesn't change software engineering.
Uncle Bob Martin has been on a roll lately, and his core message is consistent: AI doesn't change software engineering — it just raises the level of abstraction.
"AI is just another step up the semantic expression ladder. We initially expressed our semantics in binary, then assembler, then Fortran, then C, then Java, then Python. AI is just the next step up that same old ladder."
What we're losing is syntax — semicolons, braces — and good riddance. What stays is everything that actually matters: design, architecture, formalism, behavioral and structural semantics. Objects still matter (the AI is literally generating them). Principles from decades ago still apply.
His warning to engineers: a few disciplines and tools aren't enough. You still need a mental model of what the AI is doing, the engineering insight to correct it, and the instinct to form suspicions and verify them — without falling back on exhaustive code reviews.
Creation Cost Approaching Zero
Kent Beck — LinkedIn
If anyone can clone your software with an AI, what happens to scale?
Software used to work like this: one person spends a long time building something great, and then millions of people use that same thing. The work pays off because it's spread across a huge number of users.
AI may break this. Maybe people will be able to ask an AI to clone your software. So instead of everyone using your version, you get multiple near-identical copies floating around — and you never reach the scale you used to.
But here's the twist: building software is also cheaper now. You didn't spend as much time on it either. So maybe it doesn't matter that you have fewer users.
Worth Knowing
On the radar this week
Legacy Code Isn't Going Anywhere
Vikas Pujar — LinkedIn
Even when AI nails the headline task, the work around it doesn't disappear.
Vikas Pujar asked Claude how long it would actually take to convert the world's 800 billion lines of COBOL into Java, now that Anthropic claims AI can handle the translation. The answer: in the best case, 844 years.
The interesting part wasn't the timeline — it was the breakdown. Code translation is only about 20% of the effort. The other 80% is parallel testing, business approvals, regulatory compliance, and deployment — work where AI moves the needle far less.
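The split is easy to sanity-check with the article's own two numbers (the 844-year best case and the ~20% translation share; nothing else is assumed):

```python
# Back-of-envelope check of the COBOL-to-Java estimate from the post.
TOTAL_YEARS = 844          # Claude's best-case estimate
TRANSLATION_SHARE = 0.20   # code translation's share of the effort

translation_years = TOTAL_YEARS * TRANSLATION_SHARE   # ~169 years
surrounding_years = TOTAL_YEARS - translation_years   # ~675 years
```

Even if AI made the translation step instantaneous, roughly 675 years of testing, approvals, compliance, and deployment effort would remain.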
A useful reminder that even when AI is genuinely good at the headline task, the work around it doesn't disappear. It seems there's still a long career waiting for anyone willing to do it.
Structured-Prompt-Driven Development
Wei Zhang — martinfowler.com
Treating prompts as code — but is it too heavy for most teams?
Thoughtworks recently published Structured Prompt-Driven Development (SPDD) on Martin Fowler's site — a method that treats prompts as first-class artifacts: version-controlled, reviewed, and reused. The core rule: when reality diverges, fix the prompt first, then update the code.
It's a serious framework, with its own seven-part canvas (REASONS) and a CLI tool (openspdd) to run the workflow.
But here's the thing: a lot of ways to work with AI are emerging right now, and many of them come from the AI tools themselves — CLAUDE.md, skill files, AGENTS.md. These are lightweight, already integrated into the tools developers actually use, and easy to adopt one piece at a time. Heavyweight methodologies like SPDD might be the right fit for big consultancies and regulated environments, but will most teams actually adopt it — or will the simpler conventions baked into Claude Code or Codex win on adoption alone?
GitHub Opted Everyone Into Training
Gergely Orosz — LinkedIn
Your private code is training Microsoft's AI — unless you opt out.
Gergely Orosz raised the alarm: GitHub quietly opted every user — paying customers included — into letting their private code train Microsoft's AI models. Pro subscribers, Copilot subscribers, all of it. The opt-out lives under Settings → Privacy, and Orosz's point is sharp: a true platform for code wouldn't do this.
Worth checking your settings if you haven't already.