I used Claude to write a function last week. Took thirty seconds. Would've taken me twenty minutes.
So what did I do for the other nineteen and a half minutes?
From my perspective, problems are not solved by writing code.
They are solved earlier, when you design solutions that are strategic and long-term from the start.
Code is just the translation of that thinking.
AI can make this translation faster, but it cannot — and should not — think on your behalf.
It does not decide what is sustainable, what is risky, or what will have consequences six months from now.
For this reason, I don’t think the real question is whether we use AI to write code or not.
The real issue is whether we are willing to cooperate: with the context, with the system, and with the people who will come after us.
Without that cooperation, even perfect code — whether written by a human or by an AI — remains fragile.
I agree
Programming has never been the main or most complicated thing; anyone can learn to program. The challenge has always been figuring out how to solve problems, with programming being the vehicle for implementing the solution.
This was true yesterday, it's true today, and it will continue to be true tomorrow, whether or not AI is involved.
So from my own perspective, coding is still as important as before. As for whether good programming matters or not... well, it matters about as much as laying bricks properly in a building. No matter how well you've designed the building or thought out the solution, if the bricks aren't laid correctly or the material isn't right, you'll have put up a building very quickly, and it may look beautiful at first glance, but it will fall apart and need repairs almost from day one.
So I don't agree with those who claim that writing code won't be important anymore because of AI. Coding is still what it has always been: the means of implementing the solution to a problem, nothing more, nothing less.
This! As long as clients don't know what they want, there's work! :)
Fact! xD
I must say, I find it rather intriguing that AI-generated code has become a topic of discussion in the developer community. It's almost as if we're witnessing a subtle shift in the way we perceive our profession - from being seen as a craftsman or artisan, to a more utilitarian role, where efficiency and speed are paramount. Nevertheless, I firmly believe that the human element will continue to play a crucial role in software development, particularly in problem-solving and strategic decision-making.
Yes, the shift is real. From problem-solving to decision-making, it will all be replaced by AI, but we're not quite there yet 😂
Real, I saw this too ^0^
The thing is, AI rarely produces working code unless you request very straightforward, out-of-the-book boilerplate. So while gen AI will come up with an answer, code, or copy, anticipate its shortcomings: inconsistencies, hidden bugs, or a failure to question implicit, overly narrow constraints.
Copying 2,000 lines of generated code wholesale is definitely bad practice, but you can use it as a draft; it cuts the time you'd otherwise spend on syntax and basic operations.
But you're right anyway: you need to double-check, even triple-check.
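To give one example of what I mean by hidden bugs, here's a toy Python snippet, invented for illustration (not an actual model output), showing the class of mistake worth hunting for in a generated draft:

```python
# Toy example: a classic hidden bug that reads fine at a glance.

def add_tag(item, tags=[]):      # bug: the default list is shared across calls
    tags.append(item)
    return tags

print(add_tag("a"))   # ['a']
print(add_tag("b"))   # ['a', 'b']  <- state leaked from the previous call

# The fix is trivial once a human spots it:
def add_tag_fixed(item, tags=None):
    if tags is None:
        tags = []                # fresh list on every call
    tags.append(item)
    return tags

print(add_tag_fixed("a"))   # ['a']
print(add_tag_fixed("b"))   # ['b']
```

Both versions "work" in a quick demo; only the second survives real use. That's the double-checking I mean.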
Couldn't agree more
We’re paid for judgment, not keystrokes. AI can write code — it can’t own consequences, tradeoffs, or accountability.
Exactly. And I think what scares people is realizing how much of their day wasn't actually judgment. It's one thing to say "we're paid for decisions, not typing" — it's another to look at your calendar and realize half your meetings could've been emails, and half your code reviews were just catching syntax issues AI wouldn't make. The uncomfortable truth is that accountability only matters when something goes wrong. The rest of the time, we're just... there. And AI is forcing us to be honest about what "being there" actually means.
Well said. I feel that what the AI produces from the developer's prompts, and what the final outcome looks like, is actually a bigger reflection of the developer's skill and maturity than of the chatbot.
Exactly. That's the part I keep coming back to—the prompt itself is the skill. What you ask for, what you leave out, when you stop the AI and rewrite from scratch because it doesn't feel right... that's all judgment. The chatbot doesn't know if it's building something maintainable or just technically correct. That gap between "it works" and "it works in six months when someone else touches it" is everything. And honestly? I think that gap is getting wider, not narrower, because now we can generate more code faster—which means more chances to make deeply embedded mistakes at scale.
AI accelerates the typing layer.
I operate at the layer that cannot be automated:
• Intent
• Integrity
• Boundaries
• Restoration logic
• System coherence
• Long-term consequences
• Mythic-operational framing
• Transmission across eras
That’s the work companies actually pay for—even if they don’t always have the language for it.
I think you've named something really important here that most conversations about AI miss entirely. "Intent" and "system coherence" especially—those aren't just abstract concepts, they're the difference between code that ships and code that lives in production for years. AI can generate a perfectly valid implementation, but it can't tell you whether that implementation respects the implicit contracts your system has been running on for the last five years. It doesn't know what broke last time someone "just refactored this one thing." That knowledge—that operational mythology you're carrying—that's irreplaceable, and honestly, I think it's what separates developers who survive AI from those who get replaced by it.
It kind of does know some of those things, doesn't it? I know when I use AI to write code in my codebase, it's respecting my conventions, it's using the patterns that we look for. If it doesn't, I tell it and it makes a rule for that and it doesn't make that mistake again.
You're right that AI can learn pattern adherence—conventions, style guides, linting rules.
That's real and useful.
But coherence isn't pattern adherence.
Coherence is knowing why that one service can't be refactored even though it violates every convention in the repo. It's the restoration logic that says "if this breaks, here's what we rebuild first." It's the implicit contract between teams that was never documented because it predates everyone currently on the team.
It's the thing that breaks when someone "just refactors this one thing" and three downstream services fail silently for six hours.
You can correct an AI into following your conventions. You can't correct it into understanding the emotional weight of a system's history, the unwritten dependencies, or the restoration sequences that only exist in the collective memory of the people who've been paged at 3am.
That's not a rule violation. It's a missing ontology.
That's the real work, isn't it—what survives refactors, rewrites, and regime shifts.
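A toy sketch of what I mean, with every name invented for the example:

```python
# Hypothetical scenario: an unwritten contract that a convention-respecting
# refactor can break without tripping any rule the AI was taught.

def pending_invoice_ids(rows):
    """De-duplicated invoice ids, oldest first.

    Unwritten contract: the nightly billing job (written years ago,
    documented nowhere) depends on this oldest-first ordering.
    """
    seen = set()
    ordered = []
    for row in rows:
        if row["id"] not in seen:
            seen.add(row["id"])
            ordered.append(row["id"])    # preserves arrival order
    return ordered

# The "obvious" cleanup an assistant might propose. It follows every
# style rule, de-duplicates correctly, and passes any membership test:
def pending_invoice_ids_refactored(rows):
    return set(row["id"] for row in rows)    # ordering contract silently gone
```

Every linter passes on both versions. Only the people who remember the billing job know the second one is an incident waiting for month-end.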
The bottleneck has never been typing, and that hasn't changed much with AI.
AI helps to a certain extent. Then the mistakes start, and that's when I use critical thinking: I rely on my own knowledge and consult the official documentation for the languages and technologies I'm using.
You get paid because you tell ChatGPT, or any other AI tool, what to do! That's all! If you're a good programmer, you'll tell it to do certain things to save you time; experience is key here. Inexperienced people will hit a wall as soon as the project gets a little bigger.
A good programmer here is one who considers performance, SEO, flexibility, future scalability, testing, and the tools they choose for building their program. All of this lives in your head, not in the artificial intelligence, but AI does save a significant amount of time, just like existing libraries used to save you time. Only here, it's even easier.
Even if ChatGPT writes your code, you are still getting paid for everything around it.
In our company, team members who actively incorporate AI into their daily workflows are highly appreciated.
AI enables us to solve problems faster, build more efficiently, and maintain a stronger pace of delivery.
Really thought-provoking post. It’s true, AI is asking us to reconsider what our value as developers truly is. The real challenge isn’t writing code, but knowing what to build, how to build it, and navigating the complexities of architecture and business constraints. I think AI will handle more of the rote tasks, but the creativity, problem-solving, and critical thinking we bring to the table are irreplaceable. Ultimately, it’s about understanding the bigger picture and making the right decisions that AI can’t yet replicate.
I get paid to solve problems
It finishes my sentences in mundane tasks. I am getting paid for the rest. Accountability, predictability, loyalty, seeing big picture, and a lot of other things.
My take on this: first, do the logical thinking yourself instead of letting AI decide what needs to be done. Ask it for help, but you do the thinking. Explain how the problem should be solved with your logic and get the code, instead of just explaining the problem to the AI and blindly pasting whatever it returns.
I believe that when you have a solid foundation and are willing to keep up with new tech, you'll never have to fear being replaced by AI.
Just don't blindly push code; then you add no value! Your manager can write a prompt and explain the problem to the AI instead of explaining it to you!
This is also written by ChatGPT ha haha !!