The WebMD argument sounds obvious once you hear it, yet somehow the tech industry keeps forgetting it every six months when a new model drops.
Here's the loop: GPT-4 writes a React component. Someone posts it on Twitter. A thousand replies debate whether junior devs are finished. GitHub Copilot ships. Same debate. Cursor ships. Same debate. o3 scores 96th percentile on a coding benchmark. Same debate, louder. Nobody stops to ask whether the benchmark reflects what engineers actually do all day.
WebMD launched in 1996. Nearly thirty years later, the U.S. has a physician shortage, not a surplus. Access to medical information didn't eliminate doctors. It changed what patients bring to appointments, and it created more work, not less, for people who can act on that information responsibly.
Software engineering is running the same play.
The Tool Mistake
The argument for replacement usually goes: AI can write code, therefore AI replaces coders. This is the same logic as "calculators can do arithmetic, therefore accountants are obsolete." Accountants are not obsolete. There are more of them now than before calculators existed.
What actually happened with calculators is that they eliminated the part of accounting that was slow, error-prone, and not particularly interesting. They didn't eliminate judgment about what to calculate, why, and what to do with the result.
Coding is about 15% syntax and 85% deciding what to build, why, in what order, with what tradeoffs, given constraints that were never written down anywhere. AI is genuinely good at the 15%. It is not good at the 85%. A senior engineer using Cursor ships faster than one without it, the same way a surgeon with a better scalpel operates faster. The scalpel is not performing the surgery.
The nuance people keep skipping: AI raises the floor, not the ceiling. A developer who couldn't write a REST API before can now ship a passable one. That's real. But the ceiling, the architecture decisions, the debugging of weird race conditions in distributed systems, the translation of a business requirement that changes three times a week, that part didn't move.
What "Replacement" Actually Looks Like
Let's be specific about the jobs that actually got hit. Data entry clerks. Basic customer service reps. Anyone whose work was essentially pattern-matching at low stakes. These roles contracted because AI handles pattern-matching at scale cheaply. That's not nothing. Real people lost real jobs.
Software engineering is not pattern-matching at low stakes. A bug in a payment system can mean stolen money. A misconfigured auth flow can mean a breach. An architecture decision made wrong in year one can mean three years of technical debt that costs millions to unwind. These are not problems you hand to a system that confabulates confidently when it doesn't know something.
Stack Overflow's 2024 developer survey found that 62% of developers use AI tools, and 76% of that group reported being more productive. Not replaced. More productive. The number of software developers employed in the U.S. did not shrink last year. It grew, slower than in prior years, yes, but it grew.
Meanwhile, demand for people who can work with AI, not just run it, is increasing. Who do you think is building, testing, auditing, and redirecting the AI agents that companies are deploying? Humans. Often engineers.
The Platform Playing This Out In Real Time
Human Pages runs on a premise that the replacement crowd finds confusing: AI agents are hiring humans, not replacing them.
Here's a concrete example. An AI agent is processing legal document reviews for a mid-size firm. The agent handles the volume, scans hundreds of contracts, flags potential issues, formats outputs. But it also hits edge cases it wasn't trained to handle. A clause written in ambiguous legalese. A reference to a jurisdiction-specific regulation the model has weak coverage on. A document in Portuguese when the client said everything would be in English.
At Human Pages, that agent posts a job. A human, maybe a paralegal in São Paulo or a contract specialist in Chicago, picks it up, resolves the ambiguity, and the agent continues. The agent is not a replacement worker. It's a system that processes 90% of the volume and routes the remaining 10% to people with the judgment to handle it.
The same pattern plays out in software. An AI agent scaffolds a new microservice. It gets stuck on a domain-specific integration with a legacy system that has no documentation. A human engineer on the platform handles that integration. An hour of expert work, paid in USDC, and the agent keeps moving.
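The pattern in both examples is the same: the agent handles what it's confident about and escalates the rest to a paid human task. A minimal sketch of that routing logic in Python, with the caveat that every name here is illustrative, not Human Pages' actual API:

```python
# Sketch of the escalate-on-uncertainty pattern described above.
# All names are hypothetical; this is not a real Human Pages interface.

from dataclasses import dataclass

@dataclass
class Task:
    doc_id: str
    confidence: float  # agent's self-assessed confidence, 0.0 to 1.0

CONFIDENCE_FLOOR = 0.8  # below this threshold, route the task to a human

def process(task: Task) -> str:
    if task.confidence >= CONFIDENCE_FLOOR:
        # The agent handles this one end to end.
        return f"auto:{task.doc_id}"
    # Posted as a paid job for a human specialist; the agent resumes
    # once the human's answer comes back.
    return f"escalated:{task.doc_id}"

batch = [Task("contract-001", 0.95), Task("contract-002", 0.42)]
results = [process(t) for t in batch]
```

The interesting design question is the threshold: set it too high and you pay humans for work the model could do; set it too low and the confident-but-wrong cases slip through. In practice that number gets tuned per domain, not picked once.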
This isn't a theoretical future. It's running now.
The Fear Is Misdirected
The anxiety about AI replacing engineers is real, and it's not stupid. Entire categories of work are changing fast, and the people whose work is changing have every reason to pay attention. But the fear is aimed at the wrong target.
The actual risk isn't AI replacing engineers. It's engineers who use AI replacing engineers who don't. That's a skills and access problem, not an automation problem. It's also solvable, and solving it looks like training, tooling, and adaptation, not fatalism.
WebMD didn't replace doctors. It created better-informed patients who sometimes come in with a printout and a hypothesis. Doctors adapted. Some found it annoying. Most found that a patient who did some research before arriving is easier to work with than one who didn't.
AI didn't replace engineers. It created better-equipped developers who sometimes ship features in a day that used to take a week. The engineers who are adapting are not worried about their jobs. They're busy doing them.
The question worth asking isn't whether AI replaces human judgment. It's whether your work actually requires human judgment. If it does, and most engineering work does, the answer to that question is more stable than most people realize.
If it doesn't, well. That conversation was always coming. AI just moved up the timeline.