We have a weird relationship with planning in software development. We'll spend hours debugging a problem that 15 minutes of upfront thinking could have prevented, then complain that "planning slows us down."
Now with AI, there’s a strong urge to vibe-code our way to something that works. The resistance to planning is real, but so are the consequences.
Why Developers Resist Planning (and the Cognitive Science Behind It)
In Thinking, Fast and Slow, Nobel Prize-winning psychologist Daniel Kahneman explains that we rely on two modes of thinking:
- System 1: Fast, automatic, intuitive thinking. Great for quick decisions and pattern recognition.
- System 2: Slow, effortful, deliberate thinking. Necessary for deep problem-solving but mentally expensive.
Planning forces us into System 2. It feels “unnatural” because it burns more cognitive energy than simply jumping in and dealing with problems as they come up.
That’s why so many developers skip planning and jump straight into building. Sure, part of it could be deadlines or impatience, but our biology nudges us toward the path of least resistance, and in this case coding feels easier than planning architecture. Unfortunately, skipping System 2 just pushes the complexity downstream.
This is why AI-assisted planning tools like Continue’s Plan Mode can act like a circuit breaker, forcing you to pause, switch into deliberate thinking, and approach problems strategically rather than reactively.
How AI-Assisted Planning Builds Stronger Developer Skills
Research shows that how we use AI directly impacts how we think and learn:
- Cognitive offloading (when we delegate thinking to tools) reduces our ability to form mental models and solve problems independently (MDPI).
- Heavy reliance on AI has been shown to create “surface learning,” where people retrieve answers without understanding the underlying reasoning (The Australian).
In education, this leads to students who can’t think deeply. In software development, it leads to shallow understanding, technical debt, and stalled growth into senior-level problem-solving.
Treating AI as an “answer vending machine” makes you weaker. Use it as a planning “sparring partner” to question assumptions, map edge cases, and explain trade-offs. That’s how you become faster and smarter with AI.
How Continue’s Plan Mode Helps Developers Plan Better
Continue's Plan Mode supports System 2 thinking in AI-assisted development and lets you transition into System 1 execution when you’re ready to implement your plans. Plan Mode gives you a read-only, safe space to explore your codebase and make architectural decisions before you build. Here’s how it helps:
- Forces deliberate thinking: No edits, just thinking. You explore without the urge to start coding.
- Lowers mental friction: Trace flows, map dependencies, and explore edge cases with AI.
- Supports real understanding: Ask things like “How does auth work here?” instead of “Build auth for me.”
- Preserves context: Your notes and insights carry over into implementation.
15 Minutes of Planning > 15 Hours of Debugging
Here’s a quick workflow:
- Pick a task you’d usually dive into.
- Set a 15-minute timer.
- Open Continue's Plan Mode and ask thoughtful questions for AI-assisted planning.
- Document what changes about your approach after those 15 minutes.
Even if you “lose” 15 minutes, you’ll likely avoid hours of rework.
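To make the last step concrete, here’s a hedged, hypothetical sketch of what “document what changes” can look like: planning notes captured as comments before the implementation is written. The task, the function names (fetchWithAuth, refreshToken), and the edge cases are all invented for illustration; the point is that each branch traces back to a question asked during the planning pass.

```typescript
// Hypothetical sketch: notes from a 15-minute planning pass,
// captured as comments before any implementation was written.
//
// Plan (from a read-only exploration of the codebase):
//   1. Tokens expire mid-session -> a 401 is an expected path, not an exception.
//   2. The refresh endpoint is rate-limited -> retry at most once.
//   3. Callers already handle thrown errors -> fail loudly, don't return null.

async function fetchWithAuth(url: string, token: string): Promise<Response> {
  const first = await fetch(url, {
    headers: { Authorization: `Bearer ${token}` },
  });

  // Planning insight #1: treat 401 as a normal path, designed up front.
  if (first.status === 401) {
    const fresh = await refreshToken();

    // Planning insight #2: exactly one retry, then surface the failure.
    const retry = await fetch(url, {
      headers: { Authorization: `Bearer ${fresh}` },
    });
    if (!retry.ok) {
      throw new Error(`Request failed after token refresh: ${retry.status}`);
    }
    return retry;
  }

  // Planning insight #3: no silent nulls; callers expect thrown errors.
  if (!first.ok) {
    throw new Error(`Request failed: ${first.status}`);
  }
  return first;
}

// Stand-in for whatever your real session-refresh logic looks like.
async function refreshToken(): Promise<string> {
  throw new Error("Wire this to your actual auth flow");
}
```

The 401 branch here is exactly the kind of path you’d otherwise discover hours later in a debugger; fifteen minutes of reading moved it from “production incident” to “three comments and an if statement.”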
Why AI-Assisted Planning Matters for Modern Development
AI can either accelerate poor judgment or reinforce deep understanding. The developers who thrive in AI-driven development will be the ones who can think, plan, and then execute with AI as a multiplier.
I think Lee Robinson’s recent post on X helps validate this idea:
The more powerful AI tools become, the more critical it is to understand what’s happening underneath. Think of it like using a high-powered demolition tool on a wall. Sure, you can tear through it in seconds. But if you don’t know where the electrical lines or plumbing are, you’re creating a disaster that costs ten times as much to fix.
AI is the same way. Without deep thinking, you’re just breaking things faster.
Your Turn: The Planning Challenge
Try this: Spend 15 minutes in Plan Mode, question assumptions, and document what changes. Share what you learn.
Top comments
This is such a valuable breakdown, especially the connection to Thinking, Fast and Slow. It makes a lot of sense why planning feels mentally exhausting compared to jumping straight into code. Framing it as a System 1 vs. System 2 issue explains a lot about developer behavior (including my own habits).
Right. It's worth understanding that it's a natural inclination to want to jump right in. But that's not long-term thinking.
You still have to come up with the questions, so when you ask the wrong questions you waste time and money (if you are using a for-profit AI).
If you don't know why you made an assumption, how would an AI question it? It is a prediction engine; it learns from the way you thought in the past.
Just build things until they break, and investigate why they break. Then you have firsthand knowledge in case a similar thing is proposed or breaks in production code.
I agree with the think-before-you-act message. AI should be at most a small part of that process.
Totally agree. AI won’t do the deep thinking for you, and it can definitely reinforce bad assumptions if you’re not aware of them.
That’s why I think tools like Plan Mode work as a structured pause, a chance to think with friction before diving in. You’re still doing the cognitive work, but the AI has context from your codebase, so when you ask good questions, it can surface things you might overlook, like mismatches, edge cases, or implicit dependencies.
And yes, breaking things and seeing why is some of the best experience you can get. I just want to build habits that help us debug before we commit, not just after things fall apart.
As a developer, you should have that same context. If you don't have that knowledge, you should fix that, because AI can make things up.
And some companies don't want to, or aren't allowed to, expose all their code to an AI.
Sure, the process starts with making the right decisions. I think if you want to test your decisions, it is better to do it with another human instead of an AI. An AI can come up with novel ideas, but then the question is whether those ideas are the right solution. A human can come up with the better, boring solution on the first go, because we don't want to do too much work.
Before AI there were tools like tests and static analyzers. And they are still valid tools.
In this time of AI hype, we want to use the shiny new toy. But the techniques for creating good software already existed; people just need to use them.
I'm not against AI; apply it where it outperforms the other tools.
Developers do have the same context, but we're nowhere near as efficient at absorbing the full context as AI, especially if we're looking at a large codebase, if we're new to a codebase, or if we're implementing a technology we haven't used before. Do you need it in every use case? No, but there are definitely times when it will decrease your time to "first code" or help you understand code you haven't seen before.
I'm all for human interaction, but humans make things up too, or give you the wrong answer or approach, or share outdated information.
Neither way is foolproof, but I think we should be working with the right tools when we need them.
It is true you can be faster feeding the codebase to AI. But the time you take to learn the codebase or new code is not wasted. It's the classic give-someone-a-fish versus teach-them-to-fish scenario.
At the moment I have more confidence in humans for passing on knowledge than in an AI that doesn't care. I think caring is a human feature that is highly underappreciated.
From what I've read and heard, OpenAI has added a feature to GPT-5 that resembles caring. But I don't trust it with knowledge, so why should I trust it with feelings?
That shouldn't stop anyone from using both humans and AI; use all the tools you think you need to come up with a solution.
I'm with you on caring, but unfortunately a lot of people don't care. I also think we've been conditioned not to care in a lot of scenarios because there are deadlines, and job cuts, and impossible asks. If we can use AI to be more efficient, and don't fill the time with more tasks, then we'll have the time to care. This is similar to what Eric Topol says in his book Deep Medicine: it's important that we don't just fill the time created by technology with more tasks. We should use that time to create and maintain human connection.
I have a different point of view. I think that even with all the negative things, you should care about the code. It is not the same as caring for people. Caring for code is wanting to do the best thing with the means you have at that moment.
Caring means fighting for the thing you believe is best, but also letting your own convictions go if others come up with a better solution.
Caring about people, for me, is collaboration and mentoring. I feel good when I see someone gain more confidence after I congratulate them on a job well done.
I agree people are pushed to be harder on others and at the same time take care of themselves. But that is a toxic way to live. I hope that the more people see that behaviour as toxic, the better we can live with each other. The sad thing is that it is a system you are fighting against, and that overwhelms a lot of people.
I don't think we're in total disagreement here. I think you should care about people, and I think you should do your best in your work. The middle of that is good collaboration.
I think using all the tools in your toolbox can make you a better collaborator if you use them the right way.
What I've found is that "AI-assisted planning" with ChatGPT and similar tools offers plenty of problem-solving help, but also the risk that you end up wasting valuable time on wild goose chases and down rabbit holes.
The best way to utilize these tools for work is as ready references or for answers to specific questions.
For a second I was afraid you're going to advertise a new AI tool... 😅
I agree, and that's something I understood probably 1-2 years ago; now I have a full picture of it: an LLM is an examples-and-info finder; it puts what's usually done into one formatted message, and that's it. It can't do what was never done, and it can easily be wrong in unfamiliar fields, where it will just imagine some shit.
I also don't quite agree that we should switch entirely to this approach: it's good, but it's a different "mode", not the next milestone; we should learn to use both wisely.
We should be using AI to learn and develop faster (eventually), not because the LLM did the job for you, but because it gave you a starting ground and useful info. And don't forget about old-school Google search; it's still worth it.
Just to mention, I've also written an article about a "longer thinking" mode; it flexes your brain even more, since you treat AI as a book instead of a copy-paste source:
dev.to/framemuse/how-to-use-chatgp...