"Software engineering is solved." This is all I see lately when scrolling LinkedIn, X, or Reddit.
The message is loud and clear: developers are cooked and we should all pivot immediately and become plumbers or electricians.
It's not a completely crazy idea. AI coding tools have improved at a pace that has affected software development more than almost any other white-collar field. Developers who are deep into these tools can generate large amounts of working code quickly, and the days of memorizing syntax and writing everything by hand are already over.
The obvious response is that code generation is only one slice of the software engineering pie. When I say this, usually everyone nods along.
But we rarely talk about what the rest of the pie actually is. If code generation is mostly solved, then the question is what remains.
Right now the results from AI coding tools are wildly uneven. Some developers report massive productivity gains. Others say it makes them slower and produces nothing but AI slop.
A big part of the difference comes down to how the systems around AI coding tools are built and used.
Software engineering looks solved because code generation is advancing quickly. Understanding why explains what we're seeing right now and where this is headed.
A good place to start is with a simple question: why is code generation a winning use case for generative AI?
Why Code Is Easy for AI
Software development is heavily pattern based. We work with language syntax, framework conventions, data structures, standard patterns, reusable components, and familiar structures.
Most code follows established patterns. The training data for these models includes massive volumes of code from real-world examples and open source projects. There are millions of implementations of common patterns to learn from.
Writing code is largely about applying known patterns to new problems. Need a REST endpoint? There are thousands of examples. Validating user input? Implementing authentication? Those patterns are very well documented too.
This is why syntax and implementation for most common use cases are mostly solved. The models have seen these patterns thousands of times. They can reproduce them reliably with minor variations to fit your specific context.
But pattern matching alone doesn't explain why AI coding tools work as well as they do. There's another factor that makes software particularly suited to AI generation.
Software Is Verifiable
Correct software for most non-AI-related use cases has a simple property: the same input produces the same output. That's determinism.
The actual implementation can vary. If you give the same user story to two different developers, they will produce two different solutions.
The structure, abstractions, and technologies might differ. But from an end-user perspective, the functional requirements must be met.
That property makes software unusually well suited to AI-assisted development. Nondeterministic systems can generate many possible implementations, but we can still measure whether the result is correct. Either the behavior is correct or it isn't. The tests pass or they don't.
The behavior of the software is the most important part, and behavior can be measured and tested automatically.
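To make that concrete, here's a minimal sketch of what "verifiable" means in practice. The function name and the discount logic are hypothetical, invented for illustration; the point is that a deterministic function can be checked mechanically, run after run:

```python
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Deterministic: the same input always yields the same output,
# so correctness can be verified automatically. The tests pass
# or they don't -- no human judgment required for this check.
assert apply_discount(100.0, 20) == 80.0
assert apply_discount(59.99, 0) == 59.99
```

Two different developers (or two different model runs) could implement this function very differently, but any implementation that passes the same checks is functionally correct from the caller's perspective.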
But lots of different fields have measurable outputs that can be tested and iterated upon. Why are software developers likely to be among the first groups to feel the full impact of AI adoption?
Automation Is Our Culture
Part of the reason is that software engineering is already deeply mechanized, and it's in our culture to automate everything that we can.
We have CI/CD pipelines, automated tests, automated security scanning, and metrics everywhere. We already understand how to build feedback loops and automate processes. Now we're applying those same skills and lessons learned to other parts of our work in ways that weren't possible before generative AI. And the reason it's accelerating so quickly is that we are both the domain experts and the ones building the systems.
This creates extremely rapid iteration that is less likely to happen organically in other fields.
If a developer discovers a truly innovative AI engineering technique on Monday, by Wednesday it's in a blog post with thousands of readers. By Friday multiple people have attempted to build tools around it. Within weeks, if something else hasn't replaced it, the idea gets integrated into mainstream workflows. Rinse and repeat.
But even with all of this progress, implementation is still only one slice of the software engineering pie. It takes a lot of work to get systems working in production reliably.
The Rest of the Software Engineering Pie
Working code is only one dimension of correctness. Production-ready software requires many dimensions to align at once, like security, scalability, architecture, maintainability, integration, performance, stability, and dependencies. A system that works correctly in isolation can still fail when any of these dimensions are ignored.
This is often where the conversation turns to the idea that developers will all become architects. If implementation becomes automated, humans focus on higher-level system design.
There is truth to that, and I think this is where we are now. Developers who understand system design have an advantage in today's market. But will higher-level design remain purely a human activity? That assumption is already being tested.
Encoding Judgment
Many of the harder parts of software engineering come down to judgment. What does "good" look like for a particular system? Which tradeoffs are acceptable? Where should the complexity live? This kind of judgment is what separates working code from production-ready software.
Developers are already experimenting with ways to encode that judgment into automated systems. These systems look less like chatbots and more like coordinated pipelines. One component generates an implementation. Others verify behavior by running tests, scan for issues with deterministic tools, and evaluate architectural concerns. Higher-level review can be layered in, where models evaluate tradeoffs, consistency, and design decisions.
In many ways, this looks like a natural extension of CI/CD. Instead of validating code after a human writes it, the system validates code as it's generated. The feedback loop gets tighter.
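A toy version of that generate-then-verify loop can be sketched in a few lines. Everything here is hypothetical: the model call is stubbed with a canned answer, and a real pipeline would add linters, security scanners, and sandboxing around the execution step. The shape is what matters, one component produces code, another mechanically checks it:

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def generate_implementation(spec: str) -> str:
    """Stand-in for a model call that turns a spec into source code.
    Stubbed with a canned answer so the pipeline shape is runnable."""
    return (
        "def slugify(title):\n"
        "    return title.strip().lower().replace(' ', '-')\n"
    )

def run_tests(code: str, tests: str) -> bool:
    """Verification step: write the generated code plus its tests to a
    temporary file and execute it. Pass/fail is the return code."""
    with tempfile.TemporaryDirectory() as tmp:
        target = Path(tmp) / "candidate.py"
        target.write_text(code + "\n" + tests)
        result = subprocess.run(
            [sys.executable, str(target)], capture_output=True, timeout=30
        )
        return result.returncode == 0

spec = "slugify(title) lowercases a title and joins words with hyphens"
tests = (
    "assert slugify('Hello World') == 'hello-world'\n"
    "assert slugify('AI Slop') == 'ai-slop'\n"
)

code = generate_implementation(spec)
print("verified" if run_tests(code, tests) else "rejected")
```

In a real system the verifier would loop: failed checks get fed back to the generator as context for another attempt, which is exactly the tighter feedback loop described above.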
This is also part of why specification-driven development is gaining traction. If models generate the implementation, developers need to define behavior precisely enough that correctness can be tested automatically. The discipline shifts from writing code to defining what correct code looks like.
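One way to read "spec-driven" is that the specification itself becomes executable. In this hypothetical sketch, the developer writes `check_paginate` before any implementation exists; any implementation (human- or model-written) that satisfies it is correct by definition:

```python
def check_paginate(paginate) -> None:
    """Executable specification: any paginate(items, size) implementation
    must satisfy these properties to count as correct."""
    items = list(range(7))
    pages = paginate(items, 3)
    assert pages == [[0, 1, 2], [3, 4, 5], [6]]  # order preserved, last page short
    assert paginate([], 3) == []                  # empty input -> no pages
    assert sum(pages, []) == items                # nothing lost or duplicated

# One possible implementation; the spec, not the code, defines correctness.
def paginate(items, size):
    return [items[i:i + size] for i in range(0, len(items), size)]

check_paginate(paginate)
```

The skill being exercised here isn't writing the list comprehension, it's deciding which properties matter and pinning them down precisely enough for a machine to check.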
None of this is solved. Tried-and-true patterns for automating higher-level concerns like architectural evaluation and performance validation don't exist yet, though many teams are actively working on them (with varying degrees of success).
Adoption will be incremental, and regulated industries will move slower because reliability and accountability matter more than speed. But each dimension of engineering judgment that gets encoded into a validation step is one more thing the system can handle.
For those using the latest and greatest tooling, the hardest part of being a developer has shifted from learning syntax to developing judgment. That shift may make the early stages of a software career harder, because judgment takes time and experience to build.
Are We Cooked?
Is software development cooked? Not yet.
Maybe in the future. But I think that humans will be in the loop for longer than we inside the AI bubble currently believe. Only time will tell if I'm right.
However, the job is changing faster than most of us expected. A developer can ship features in hours that used to take days. Small teams can build products that previously required dozens of engineers.
In the short term, there's a growing need for developers who can build the systems that make AI-generated code production-ready.
We used to spend most of our time writing implementations. That time is shrinking. What's growing is everything around it: figuring out what correct software looks like and building the validation to prove it.