For years, software engineering had a clear hierarchy.
The best developers were often the ones who:
- wrote the cleanest code
- mastered complex fra...
This maps exactly to what we've been seeing at Othex. The bottleneck has shifted from writing code to knowing what's worth writing. Our best collaborators with AI aren't the ones who can prompt perfectly — they're the ones who can look at generated output and immediately know if it's solving the right problem.
The tricky part: that judgment is really hard to teach. You can learn syntax, algorithms, APIs. But "does this actually make sense for our users" is experiential. It comes from shipping things and watching them succeed or fail. So in some ways AI accelerates the learning cycle — you ship faster, get feedback faster — but it can't shortcut the experience itself.
That’s a great insight. The bottleneck really has shifted to problem judgment, not code generation.
And you’re right, that kind of judgment is hard to teach. It comes from shipping, observing, and learning, even if AI accelerates the feedback loop.
This matches what I see hiring engineers at a fintech startup. Our best hire had "Googling" listed as a skill on his resume. No CS degree, just a B.Sc in another field. But he understood why things break, not just how to write code that compiles. He now owns our entire merchant-facing app.
We recently used AI agents to audit env variables across 15 repos — mapping conflicts, generating a unified schema. What would've taken days took hours. But the AI didn't decide that DB_HOST should be canonical instead of DATABASE_HOST — that was human judgment about our system's future. AI accelerated execution. The quality gate was still understanding our own architecture deeply enough to validate the output.
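The split described above, where tooling surfaces the conflicts but a human picks the canonical name, can be sketched in a few lines. This is a hypothetical illustration, not the actual agents mentioned: the repo layout, the `.env*` file convention, and the `SYNONYMS` table are all assumptions for the example.

```python
import re
from collections import defaultdict
from pathlib import Path

# Matches lines like "DB_HOST=localhost" or "export DATABASE_HOST=db".
ENV_LINE = re.compile(r"^\s*(?:export\s+)?([A-Z][A-Z0-9_]*)\s*=")

# Words treated as equivalent when normalizing names. Which spelling
# becomes canonical is exactly the judgment call the audit can't make.
SYNONYMS = {"DATABASE": "DB", "HOSTNAME": "HOST"}

def env_vars_in(repo: Path) -> set[str]:
    """Collect variable names from every .env* file in a repo."""
    names: set[str] = set()
    for env_file in repo.rglob(".env*"):
        for line in env_file.read_text().splitlines():
            m = ENV_LINE.match(line)
            if m:
                names.add(m.group(1))
    return names

def normalize(name: str) -> str:
    """Map DATABASE_HOST and DB_HOSTNAME to the same key: DB_HOST."""
    return "_".join(SYNONYMS.get(p, p) for p in name.split("_"))

def find_conflicts(repos: list[Path]) -> dict[str, set[str]]:
    """Return normalized key -> spellings seen, for keys spelled >1 way."""
    seen: dict[str, set[str]] = defaultdict(set)
    for repo in repos:
        for name in env_vars_in(repo):
            seen[normalize(name)].add(name)
    return {key: spellings for key, spellings in seen.items() if len(spellings) > 1}
```

The machine does the exhaustive part (reading every repo, grouping aliases); a person still has to look at each conflict group and decide which spelling the unified schema keeps.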
That’s a great example of the shift in action.
AI can accelerate execution and analysis, but decisions like naming, structure, and long-term consistency still require human architectural judgment.
The best engineers, as you said, are the ones who understand why systems break, not just how to make them run.
100%. The "understanding why systems break" framing is something I tell every new hire. We had a developer join us — B.Sc graduate, listed "Googling" as a skill on his resume. Most companies would've binned that CV. Two years later, he owns our entire merchant-facing app. Not because he was the best coder on day one, but because he obsessively understood failure modes. Every bug was a lesson, not just a ticket. That curiosity compounds faster than any framework knowledge.
That’s a powerful example.
Curiosity around failure modes compounds far faster than raw coding skill. Developers who ask why it broke end up building systems that don’t break the same way twice.
Exactly. We interview for this now — not "how would you implement X" but "tell me about the last time something broke and what you did in the first 10 minutes." The developers who light up describing their debugging process are the ones who end up owning entire systems within a year. One of our best engineers joined with zero fintech experience but had an obsessive need to understand root causes. Within 18 months he was the person the whole team went to when production went sideways. You can't teach that instinct — you can only hire for it and then get out of the way.
That’s a great hiring signal.
People who focus on first response and root cause thinking are the ones who grow into system owners. That instinct is hard to teach, but incredibly valuable once identified.
I agree. As a web developer, I'm seeing more and more people use AI for coding, and AI is not always right. I also think it has redefined what a great developer is. AI keeps getting better at coding, but it shouldn't be used to write every single thing in your project.
That’s a very balanced view. AI is powerful, but it’s not always right, and relying on it for everything can create risks.
As you said, great developers today are defined by judgment, knowing when to use AI and when to rely on their own understanding.
The shift from "how to build" to "what to build" is a useful lens on where the field is heading. As an MIS student, I definitely recognize this pattern. I agree that AI is a force multiplier for system design, but it only speeds up delivery of an outcome you still have to choose. As AI-assisted coding becomes common practice, will it diminish the deep algorithmic understanding needed to spot hidden errors when AI fails? If developers aren't wrestling with syntax, they might lose the pattern recognition you describe, which is critical for evaluation.
That’s a very thoughtful concern. I don’t think deep understanding disappears, but it can atrophy if not exercised.
As AI handles more syntax, the real skill shifts to evaluation and pattern recognition. Developers who actively question and validate AI outputs will strengthen this skill; those who don’t may lose it.
That's how I feel my job shifting. I haven't written a single line of code this month, yet I delivered two nice (small) projects by constantly managing my AI coding tools.
That’s a great example of the shift in action. The value is moving from writing code to directing systems and making decisions.
If outcomes are strong, that orchestration skill is becoming just as important as coding itself.
Coding will always be here, but the future belongs to those who build the intelligence layers.