"Software engineering is solved." This is all I see lately when scrolling LinkedIn, X, or Reddit.
The message is loud and clear: developers are cooked and we should all pivot immediately and become plumbers or electricians.
It's not a completely crazy idea. AI coding tools have improved at a pace that has affected software development more than almost any other white-collar field. Developers who are deep into these tools can generate large amounts of working code quickly, and the days of memorizing syntax and writing everything by hand are already over.
The obvious response is that code generation is only one slice of the software engineering pie. When I say this, usually everyone nods along.
But we rarely talk about what the rest of the pie actually is. If code generation is mostly solved, then the question is what remains.
Right now the results from AI coding tools are wildly uneven. Some developers report massive productivity gains. Others say it makes them slower and produces nothing but AI slop.
A big part of the difference comes down to how the systems around AI coding tools are built and used.
Software engineering looks solved because code generation is advancing quickly. Understanding why explains what we're seeing right now and where this is headed.
A good place to start is with a simple question: why is code generation a winning use case for generative AI?
Why Code Is Easy for AI
Software development is heavily pattern based. We work with language syntax, framework conventions, data structures, standard patterns, reusable components, and familiar structures.
Most code follows established patterns. The training data for these models includes massive volumes of code from real-world examples and open source projects. There are millions of implementations of common patterns to learn from.
Writing code is largely about applying known patterns to new problems. You need a REST endpoint? There are thousands of examples. Need to validate user input? Want to implement authentication? The patterns are also very well documented.
This is why syntax and implementation for most common use cases is mostly solved. The models have seen these patterns thousands of times. They can reproduce them reliably with minor variations to fit your specific context.
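Input validation is a concrete example of one of those heavily documented patterns. The sketch below is hypothetical (the function name and rules are illustrative), but minor variations of it appear in countless open source projects and tutorials, which is exactly why a model can reproduce it reliably:

```python
import re

# Simplified email pattern for illustration; real-world validation
# is usually stricter or delegated to a library.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_signup(email: str, password: str) -> list[str]:
    """Return a list of validation errors (empty means valid)."""
    errors = []
    if not EMAIL_RE.match(email):
        errors.append("invalid email")
    if len(password) < 8:
        errors.append("password must be at least 8 characters")
    return errors
```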
But pattern matching alone doesn't explain why AI coding tools work as well as they do. There's another factor that makes software particularly suited to AI generation.
Software Is Verifiable
Correct software, for most non-AI-related use cases, has a simple property: the same input produces the same output. That's determinism.
The actual implementation can vary. If you give the same user story to two different developers, they will produce two different solutions.
The structure, abstractions, and technologies might differ. But from an end-user perspective, the functional requirements must be met.
That property makes software unusually well suited to AI-assisted development. Nondeterministic systems can generate many possible implementations, but we can still measure whether the result is correct. Either the behavior is correct or it isn't. The tests pass or they don't.
What matters most is the software's behavior, and behavior can be measured and tested automatically.
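That property is easy to see in miniature. The function below is hypothetical, but any deterministic implementation of it can be judged the same way: feed in known inputs, compare outputs. Two developers (or two model runs) could write completely different bodies, and the test wouldn't care.

```python
def apply_discount(price: float, percent: float) -> float:
    """Deterministic: the same inputs always produce the same output."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # The test only checks behavior, not how the body is written.
    assert apply_discount(100.0, 25.0) == 75.0
    assert apply_discount(19.99, 0.0) == 19.99
```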
But lots of different fields have measurable outputs that can be tested and iterated upon. Why are software developers likely to be among the first groups to feel the full impact of AI adoption?
Automation Is Our Culture
Part of the reason is that software engineering is already deeply mechanized, and it's in our culture to automate everything that we can.
We have CI/CD pipelines, automated tests, automated security scanning, and metrics everywhere. We already understand how to build feedback loops and automate processes. Now we're applying those same skills and lessons learned to other parts of our work in ways that weren't possible before generative AI. And the reason it's accelerating so quickly is that we are both the domain experts and the ones building the systems.
This creates extremely rapid iteration that is less likely to happen organically in other fields.
If a developer discovers a truly innovative AI engineering technique on Monday, by Wednesday it's in a blog post with thousands of readers. By Friday multiple people have attempted to build tools around it. Within weeks, if something else hasn't replaced it, the idea gets integrated into mainstream workflows. Rinse and repeat.
But even with all of this progress, implementation is still only one slice of the software engineering pie. It takes a lot of work to get systems working in production reliably.
The Rest of the Software Engineering Pie
Working code is only one dimension of correctness. Production-ready software requires many dimensions to align at once, like security, scalability, architecture, maintainability, integration, performance, stability, and dependencies. A system that works correctly in isolation can still fail when any of these dimensions are ignored.
This is often where the conversation turns to the idea that developers will all become architects. If implementation becomes automated, humans focus on higher-level system design.
There is truth to that, and I think this is where we are now. Developers who understand system design have an advantage in today's market. But will higher-level design remain purely a human activity? That assumption is already being tested.
Encoding Judgment
Many of the harder parts of software engineering come down to judgment. What does "good" look like for a particular system? Which tradeoffs are acceptable? Where should the complexity live? This kind of judgment is what separates working code from production-ready software.
Developers are already experimenting with ways to encode that judgment into automated systems. These systems look less like chatbots and more like coordinated pipelines. One component generates an implementation. Others verify behavior by running tests, scan for issues with deterministic tools, and evaluate architectural concerns. Higher-level review can be layered in, where models evaluate tradeoffs, consistency, and design decisions.
In many ways, this looks like a natural extension of CI/CD. Instead of validating code after a human writes it, the system validates code as it's generated. The feedback loop gets tighter.
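A minimal sketch of that tighter loop, with hypothetical names throughout: `generate` stands in for a model call, and each check is a deterministic tool (a test runner, linter, or security scanner) that returns failure messages. Only the shape of the loop matters here, not the specific tools.

```python
def syntax_check(code: str) -> list[str]:
    """One example check: does the generated code even parse?"""
    try:
        compile(code, "<generated>", "exec")
        return []
    except SyntaxError as exc:
        return [f"syntax error: {exc}"]

def generate_and_validate(spec, generate, checks, max_attempts=3):
    """Regenerate until every check passes or attempts run out."""
    for _ in range(max_attempts):
        code = generate(spec)
        # Gather failures from every check; empty list means all passed.
        failures = [msg for check in checks for msg in check(code)]
        if not failures:
            break
    return code, failures
```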
This is also part of why specification-driven development is gaining traction. If models generate the implementation, developers need to define behavior precisely enough that correctness can be tested automatically. The discipline shifts from writing code to defining what correct code looks like.
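One way that discipline can look in practice is a behavioral spec written before any implementation exists. The example below is hypothetical (a `slugify` function and its cases are illustrative), but the idea is the point: whoever, or whatever, writes the code, these cases define what "correct" means.

```python
# Spec for a hypothetical slugify function, written before any
# implementation exists: (input, expected output) pairs.
SPEC = [
    ("Hello World", "hello-world"),
    ("  spaces  ", "spaces"),
    ("Already-Slugged", "already-slugged"),
]

def check_implementation(slugify) -> list[str]:
    """Return the spec cases a candidate implementation fails."""
    return [
        f"slugify({text!r}) -> {slugify(text)!r}, expected {want!r}"
        for text, want in SPEC
        if slugify(text) != want
    ]
```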
None of this is solved. Tried-and-true patterns for automating higher-level concerns like architectural evaluation and performance validation don't exist yet, though many teams are actively working on them (with varying degrees of success).
Adoption will be incremental, and regulated industries will move slower because reliability and accountability matter more than speed. But each dimension of engineering judgment that gets encoded into a validation step is one more thing the system can handle.
For those using the latest and greatest tooling, the hardest part of being a developer has shifted from learning syntax to developing judgment. That shift may make the early stages of a software career harder, because judgment takes time and experience to build.
Are We Cooked?
Is software development cooked? Not yet.
Maybe in the future. But I think that humans will be in the loop for longer than we inside the AI bubble currently believe. Only time will tell if I'm right.
However, the job is changing faster than most of us expected. A developer can ship features in hours that used to take days. Small teams can build products that previously required dozens of engineers.
In the short term, there's a growing need for developers who can build the systems that make AI-generated code production-ready.
We used to spend most of our time writing implementations. That time is shrinking. What's growing is everything around it: figuring out what correct software looks like and building the validation to prove it.
Top comments (10)
I liked the read, but it is a bit too much within the box of the mainstream. AI is hype; it will definitely change software development, but not as much as people think right now. The same happened with the .com era, Bitcoin, and so on.
When you write software by hand and something breaks, you can quickly get a sense of how and where the problem might be. When software is developed relying too heavily on dependencies and/or AI, the codebase becomes more and more of a black box (due to the numerous layers of abstraction and lack of understanding of how things work under the hood).
Which approach would you choose? ... The second approach will probably require you to depend on another tool to navigate the black box, and the process becomes even more bloated.
In personal projects I usually use the minimum amount of dependencies, libraries, or frameworks, because I enjoy the freedom that working with plain languages offers. If you build styles only with CSS, the process might be more tedious and slower, but the quality, freedom, and innovation are superior. Conventions = boring = old. All libraries, frameworks, etc. add limitations. The lower the level of the tech stack you use, the more freedom you'll have. If we were able to code directly in bits without a hassle, the level of freedom we would have would be out of this world.
I can see your perspective. I think I'm between two ideas: I think developers who deeply know software will win this game over the black box approach. To your point, people who understand what's beneath the layers of abstraction will be able to produce better results and build more reliable systems.
I also think that this innovation allows us to do much more with less, and move a lot faster. So it’s not that devs disappear entirely, but how many no longer have a job and when does that really start to become a problem? A few very smart senior engineers can handle a lot.
There will probably be an uptick in specific types of dev roles in the near future as companies race to make the AI hype real and try to automate as much as they possibly can.
That’s what I’m struggling to wrap my mind around. At what point do we reach a critical mass and we can call software dev as a field “cooked”? I think we have time, but I think we’re headed towards a serious culling of white collar jobs in the long run.
My point about the adoption tail being long outside of the AI hype bubble leads me to hope that the job market shift happens at just a slow enough pace that there isn't a total collapse of the white-collar job market.
Thanks for getting back to me with such a rich answer.
👏 Completely agree, and that is something that has been happening forever. Old-generation programmers, the ones who started with C, COBOL, etc., are so much better, because they understand low level better than following generations. As time goes by, the harder it is to deep-dive into the tech stack.
Yes, but e.g. you still need good front-end specialists who want to dig deep into CSS bugs, develop unique features (which CSS frameworks don't provide) for the specific utility, etc.
Not sure, this is something that has been worrying people since the 1960s with the massification of computers. Read this TIME article (written in 1965) if you have time, the conversation is pretty much the same😆 time.com/archive/6874126/technolog...
True that we always predict doomsday. I think reality will be in the middle. AI will change everything about white collar jobs but it’ll take more time than it seems.
Great perspective, and I pretty much agree with your assessment, but I do think with the right mindset we can push ourselves into a higher "frame" of engineering.
I have already personally shifted to more of an "architect" position - I build strict governance documentation around projects I am working on and "lead" a team of one or more agents (depending on the focus) through the development process.
I'm operating now out of the mindset of "Let me see if I can get this done with agentic tools first, and if that fails, then I'll either start over or fix what broke."
Professionally, I haven't looked back, and this process is paying off considerably for everything I've thrown at it: refactoring Python 2 to 3, fixing legacy codebases, and even some Salesforce development.
With personal projects, I'm a bit looser, but that's catching up too.
Yeah, I basically do the same thing. I only step in if I have to, though I do watch the agents work and sometimes have to get them back on track. Plus, I'm not going to sit here and say I have the most optimized setup. It's amazing how much these tools can do basically out of the box.
I think that one of the major things that makes me think we have time here is that I have friends in engineering at Fortune 500 companies who got access to copilot finally in the last like 3 months. The adoption tail is longer across regulated industries.
How long will things stay that way? Check in with me in… 2 weeks? lol
Excellent read. I've run development teams for most of my working life. I'm overseas with my team currently running AI workshops and demos. It's been quite unnerving at times for them, and staying ahead is like trying to catch knives. What I have really enjoyed is the 'engineering' side of this, especially DevOps. I've had ideas for solutions for years but never the time nor the bench strength internally to 'waste' dev time on my 'experiments'. So it's been incredible to define a problem, break it down into individual interlinking engineering experiments, and tackle them one by one when I have time, often until 3-4am! All the while creating a knowledge base and better agent interactions as I go. After 18+ such experiments over the course of 2 months, I was able to connect it all together and solve quite a big challenge which had been regularly thrown at us!
That's awesome! I agree that with AI we can now chase down all the things we've been wanting to do forever but just didn't have the time. There is no shortage of problems to solve.
Old head here,
AI is not hype. If you can't see the writing on the wall where agentic cloud development is already being implemented in cloud services, then you should poke around and look at the ability to assign agents to work items in the cloud, and furthermore you should start looking at agents in build pipelines and linters. These are all features implemented to make junior development super easy and streamlined. It also comes with security, hence the security compliance checks: no code on your laptop, and companies will no longer need to buy high-end laptops. Like others said, architects/leads will remain the gate, the human in the loop, and in smaller projects they'll likely be one of the few engineers.
If your argument is no tribal knowledge of the codebase, that's almost comical, because if you're the seasoned engineer you claim to be, then your black box of code would have followed your coding practices via clear instructions and linting, and if Copilot went haywire on a feature, then you probably did something dumb like bombarding it with irrelevant context via speckit and hitting the token limit. Besides, you're an engineer; if you can't hop into a project and read/understand code, that's a bigger problem...
I actually think we mostly agree. The systems you're describing, agents generating code and validating it through pipelines, are exactly the direction I was pointing to in the article. My point was that code generation alone isn't the whole engineering problem. The interesting work now is building the systems around it that validate correctness across security, architecture, performance, and reliability. That's where a lot of the experimentation is happening right now.