
We have a weird relationship with planning in software development. We'll spend hours debugging a problem that 15 minutes of upfront thinking could...
This is such a valuable breakdown, especially the connection to Thinking, Fast and Slow. It makes a lot of sense why planning feels mentally exhausting compared to jumping straight into code. Framing it as a System 1 vs. System 2 issue explains a lot about developer behavior (including my own habits).
Right. It's worth understanding that it's a natural inclination to want to jump right in. But that's not long-term thinking.
You still have to come up with the questions, so when you ask the wrong questions you waste time and money (if you are using a for-profit AI).
If you don't know why you made the assumption, how would an AI question it? It is a prediction engine; it learns from the way you thought in the past.
Just build things until they break, and investigate why they break. Then you have first hand knowledge in case a similar thing is proposed or breaks in production code.
I agree with the think before you act message. AI should be at most a small part of that process.
Totally agree. AI won’t do the deep thinking for you, and it can definitely reinforce bad assumptions if you’re not aware of them.
That’s why I think of tools like Plan Mode as a structured pause, a chance to think with friction before diving in. You’re still doing the cognitive work, but the AI has context from your codebase, so when you ask good questions, it can surface things you might overlook, like mismatches, edge cases, or implicit dependencies.
And yes, breaking things and seeing why is some of the best experience you can get. I just want to build habits that help us debug before we commit, not just after things fall apart.
As a developer you should have that same context. If you don't have that knowledge you should fix that because AI can make things up.
And some companies don't want or are not allowed to expose all the code to an AI.
Sure, the process starts with making the right decisions. I think if you want to test your decisions, it is better to do it with another human instead of an AI. An AI can come up with novel ideas, but then the question is whether those ideas are the right solution. A human can come up with the better, boring solution on the first go, because we don't want to do too much work.
Before AI there were tools like tests and static analyzers. And they are still valid tools.
In this time of AI hype we want to use the shiny new toy. But the techniques for creating good software already existed; people just need to use them.
I'm not against AI; apply it where it outperforms the other tools.
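To make that concrete, here is a minimal sketch of what those pre-AI tools still give you: a small function with type hints that a static analyzer such as mypy can check, plus plain unit tests that pin down the expected behaviour. The `parse_price` function and its rules are hypothetical, just an illustration.

```python
# sketch.py -- hypothetical example; the function and its rules are illustrative only.
import unittest
from decimal import Decimal


def parse_price(raw: str) -> Decimal:
    """Parse a user-entered price like ' 12.50 ' into a Decimal.

    Raises ValueError for empty or negative input.
    """
    cleaned = raw.strip()
    if not cleaned:
        raise ValueError("price must not be empty")
    value = Decimal(cleaned)
    if value < 0:
        raise ValueError("price must not be negative")
    return value


class ParsePriceTests(unittest.TestCase):
    # Plain unit tests: the "old" tool that catches regressions before an AI ever sees the code.
    def test_strips_whitespace(self) -> None:
        self.assertEqual(parse_price(" 12.50 "), Decimal("12.50"))

    def test_rejects_empty_input(self) -> None:
        with self.assertRaises(ValueError):
            parse_price("   ")

    def test_rejects_negative_prices(self) -> None:
        with self.assertRaises(ValueError):
            parse_price("-1")


if __name__ == "__main__":
    # Run the tests with:  python sketch.py
    # Run a static analyzer separately with, e.g.:  mypy sketch.py
    unittest.main()
```

Running `python sketch.py` executes the tests, and something like `mypy sketch.py` would flag type mismatches; neither step needs an LLM, which is the point about the older tools still being valid.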
Developers do have the same context, but we're not nearly as efficient at understanding the full context as AI, especially if we're looking at a large codebase, if we're new to a codebase, or if we're implementing a technology we haven't used before. Do you need it in every use case? No, but there are definitely times when it will decrease your time to "first code" or help you understand code you haven't seen before.
I'm all for human interaction, but humans make things up too, or give you the wrong answer or approach, or share outdated information.
Neither way is foolproof, but I think we should be working with the right tools when we need them.
It is true you can be faster feeding the codebase to AI. But the time you take to learn the codebase or new code is not wasted. It is the give-a-fish versus teach-to-fish scenario.
At the moment I have more confidence in receiving knowledge from humans than from an AI that doesn't care. I think caring is a human feature that is highly underappreciated.
From what I read and heard, OpenAI has added a feature in GPT-5 that resembles caring. But I don't trust it with knowledge, so why should I trust it with feelings?
It shouldn't stop anyone from using both humans and AI; use all the tools you think you need to come up with a solution.
I'm with you on caring, but, unfortunately, a lot of people don't care. I also think that we've been conditioned not to care in a lot of scenarios because there are deadlines, and job cuts, and impossible asks. If we can use AI to be more efficient, and don't fill the time with more tasks, then we'll have the time to care. This is similar to what Eric Topol says in his book Deep Medicine: it's important that we don't just fill the time created by technology with more tasks. We use that time to create and maintain human connection.
I have a different point of view. I think even with all the negative things you should care about the code. It is not the same as caring for people. Caring for code is wanting to do the best thing with the means you have at that moment.
Caring means fighting for the thing you believe is best, but also letting your own convictions go if others come up with a better solution.
Caring about people, for me, is collaboration and mentoring. I feel good when I see someone gain confidence after I congratulate them on a job well done.
I agree people are pushed to be harder on others and at the same time to take care of themselves. But that is a toxic way to live. I hope that the more people see that behaviour as toxic, the better we can live with each other. The sad thing is that it is a system you are fighting against, and that overwhelms a lot of people.
I don't think we're in total disagreement here. I think you should care about people, and I think you should do your best in your work. The middle of that is good collaboration.
I think using all the tools in your toolbox can make you a better collaborator if you use them the right way.
What I've found is that "AI-assisted planning" using ChatGPT, etc. offers plenty of problem solving and help, but also the risk that you end up wasting valuable time on wild goose chases and in rabbit holes.
The best way to use these tools for work is as ready references or answers to specific questions.
For a second I was afraid you're going to advertise a new AI tool... 😅
I agree, that's something I understood probably 1-2 years ago, and now I have a full picture of it: an LLM is an examples-and-info finder; it puts what is usually done into one formatted message, and that's it. It can't do what was never done, and it can easily be wrong in unfamiliar fields because it will just imagine some nonsense.
I also don't fully agree that we should switch to this approach: it's good, but it's a different "mode", not the next milestone; we should learn to use both wisely.
We should be using AI to learn and develop faster (eventually), not because the LLM did the job for you, but because it gave you a starting point and useful info. But don't forget about old-school Google search; it's also worth using.
Just to mention, I also wrote an article about the "longer thinking" mode; it flexes your brain even more, since you treat the AI as a book instead of a copy-paste source:
dev.to/framemuse/how-to-use-chatgp...