Introduction
Depending on whom you talk to, you will get a wide range of answers on whether AI coding or agentic coding is worth the time. Some developers swear by it, and others swear it off entirely. I'm not talking about vibe coders, either; I'm talking about true-to-form junior and senior developers, and their opinions vary widely with experience. Some of this can be attributed to how they interact with AI and the outcomes they anticipate, but the type of thinker they are also plays a huge role.
Understanding the Types of Thinkers
The distinction between the different types of thinkers was popularized by Rob Walling in The SaaS Playbook1. Walling points out three distinct levels of thinkers: task-level, project-level, and owner-level. Which level a developer operates at has a direct bearing on how they perceive new technology, tools, and advances, and on how they evaluate and use them. This is incredibly important to remember: the type of thinker your developers are can greatly influence how they evaluate a tool and form their first impressions.
How Most Young Developers Think: The Task-Level Thinker
Task-level thinkers tend to think about issues at a very low level and don't think past the feature or bug they are currently working on. These are generally more junior or entry-level developers who are not yet thinking at that "big picture" level. This is fine; it is where we all begin, and a key sign that a developer is ready to take on more senior roles is when they start to grow out of this level. Rob Walling describes this level as:
Task-level thinkers are team members who focus on their current or next task. They might be early in their career or get overwhelmed with more than a few sequential tasks on their plate. Most of us begin our careers as task-level thinkers because prioritizing many complex, interrelated tasks is often not a natural ability.1
The important thing to take from this is that they focus on the task at hand and have not yet learned how to manage and prioritize complex, interrelated tasks. That skill is critical when working with AI coding tools: how well you understand those tasks, and how well you can describe them and break them down into small, manageable features and instructions, greatly impacts the quality of the code you get back. It is the difference between telling AI to "make a park" and giving it detailed instructions on how to make a tree. Developers need to understand the project as a whole at a high level and how all the pieces fit together, while also being able to break features down into smaller tasks, building the tree.
A key difference here is that task-level thinkers generally view these tools as an instrument to immediately solve a problem, fix a bug, or build a feature. They may find AI tools overwhelming, or simply ineffective, because of how this mindset shapes their interaction with the tool. So why is this a problem? Why does task-level thinking hurt the results you will see from AI?
How Agentic AI Works
First of all, agentic AI differs from AI in general because it is an architecture that gives AI memory, learning, and decision-making to achieve a goal. To accomplish this, there are defined steps that must take place. First, there is the perception module, which takes input and processes it into a structured format to pass on to the next module2. Essentially, it takes the "build me a park" prompt and turns it into whatever structure the second module needs. If your developers' prompts are this bare-bones, there won't be much to pass off to the reasoning module.
The reasoning module is the "brain" of the system, typically an LLM. It takes the structured input and uses chain-of-thought reasoning to break it into small subtasks and action items2. This step is critical because it forms the step-by-step problem-solving that leads to correct and helpful output, which is why your first prompts matter so much when working with AI. The better structured and planned those prompts are, the more they shape the entire outcome of the build: they give the LLM the context it needs to break the problem into small subtasks and a strong history to look back on. For example, which do you think generates a better output, and which do you think the task-level thinker uses:
I need to implement user authentication for my React app using Firebase Auth.
Or...
I need to implement user authentication for my React app using Firebase Auth.
Requirements:
- Email/password and Google OAuth sign-in
- Protected routes that redirect to login
- User context that persists across app
- Logout functionality
- Loading states during auth operations
Technical constraints:
- Using React 18 with TypeScript
- React Router v6 for routing
- Tailwind for styling
Please create:
1. AuthContext with provider
2. Custom hooks for auth operations
3. ProtectedRoute component
4. Login/signup forms with validation
5. Integration with existing routing
Include error handling and ensure type safety throughout.
The task-level thinker likely uses the first example.
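To make the perception and reasoning steps concrete, here is a toy sketch of that pipeline in Python. All class and function names are made up for illustration (no real agent framework works exactly like this), and the "modules" are simple string handling where a real system would call an LLM. The point is only to show why the bare-bones prompt leaves the reasoning step with almost nothing to decompose:

```python
from dataclasses import dataclass, field

@dataclass
class StructuredRequest:
    """Output of the perception module: the prompt in a structured form."""
    goal: str
    requirements: list[str] = field(default_factory=list)
    constraints: list[str] = field(default_factory=list)

def perceive(prompt: str) -> StructuredRequest:
    """Perception module (toy version): parse the raw prompt into sections.

    A real system would use an LLM here; this just looks for the
    'Requirements:' and 'Technical constraints:' sections from the
    example prompt above.
    """
    goal, requirements, constraints = "", [], []
    section = "goal"
    for line in prompt.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.lower().startswith("requirements"):
            section = "req"
        elif line.lower().startswith("technical constraints"):
            section = "con"
        elif line.startswith("-"):
            (requirements if section == "req" else constraints).append(line.lstrip("- "))
        elif section == "goal":
            goal += line
    return StructuredRequest(goal, requirements, constraints)

def reason(request: StructuredRequest) -> list[str]:
    """Reasoning module (toy version): break the request into subtasks.

    With a bare-bones prompt there is nothing to decompose, so the plan
    collapses to a single vague task -- the 'build me a park' problem.
    """
    if not request.requirements:
        return [f"do: {request.goal}"]
    return [f"implement: {r} (respecting {len(request.constraints)} constraints)"
            for r in request.requirements]

bare = perceive("I need to implement user authentication for my React app.")
detailed = perceive(
    "I need to implement user authentication for my React app.\n"
    "Requirements:\n- Email/password sign-in\n- Protected routes\n"
    "Technical constraints:\n- React 18 with TypeScript\n"
)
print(len(reason(bare)), len(reason(detailed)))  # prints: 1 2
```

The bare prompt produces a single vague action item, while the structured prompt yields one small, constrained subtask per requirement, which is exactly the decomposition the reasoning module needs.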
The reasoning module passes these instructions to the action module, which is responsible for executing the plan and interacting with any needed tools2. It will only perform as well as its action plan allows; the action plan depends on the context it is given from the input, and the input depends on the user and the level of thinker they are. If you take a realistic look at where agentic AI is today, it is barely a step above base-level AI, which processes an input and returns an output with no memory or decision-making. The Vellum blog describes this level, where most AI is today:
At this stage, AI isn’t just responding—it’s executing. It can decide to call external tools, fetch data, and incorporate results into its output. This is where AI stops being a glorified autocomplete and actually does something. This agent can make execution decisions (e.g., “Should I look this up?”). The system decides when to retrieve data from APIs, query search engines, pull from databases, or reference memory. But the moment AI starts using tools, things get messy. It needs some kind of built-in BS detector—otherwise, it might just confidently hallucinate the wrong info. Most AI apps today live at this level. It’s a step toward agency, but still fundamentally reactive—only acting when triggered, with some orchestration sugar on top. It also doesn't have any iterative refinement—if it makes a mistake, it won’t self-correct.3
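The loop the quote describes, acting only when triggered, deciding per step whether to call a tool, and not self-correcting on failure, can be sketched roughly like this. The tool names and plan structure are hypothetical, chosen only to illustrate the execution decision:

```python
def run_plan(plan, tools):
    """Toy action module: execute each step of a plan in order.

    For each step it makes one execution decision ("Should I call a
    tool?"). Note there is no iterative refinement: an unknown tool is
    recorded as a failure and the loop simply moves on, matching the
    'fundamentally reactive' level described above.
    """
    results = []
    for step in plan:
        if step.get("tool"):  # execution decision: fetch external data?
            tool = tools.get(step["tool"])
            if tool is None:
                # no self-correction at this level; log the error and continue
                results.append((step["task"], "error: unknown tool"))
                continue
            results.append((step["task"], tool(step["task"])))
        else:
            results.append((step["task"], "answered from model output"))
    return results

# Hypothetical tools the agent may decide to call
tools = {
    "search": lambda q: f"search results for {q!r}",
    "database": lambda q: f"rows matching {q!r}",
}

plan = [
    {"task": "look up Firebase Auth docs", "tool": "search"},
    {"task": "draft AuthContext code"},  # no tool needed
    {"task": "check existing users table", "tool": "database"},
]

for task, outcome in run_plan(plan, tools):
    print(f"{task}: {outcome}")
```

A real agent would have the LLM choose the tool and would validate the results (the "built-in BS detector" the quote calls for); this sketch deliberately omits both to show how thin the orchestration at this level really is.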
Project-Level Thinkers: Where Things Begin to Change
The next step is the project-level thinker. This is where most senior or near-senior developers are. They see projects at a higher organizational level and understand how each piece fits into the whole. They can take intricate features and effectively break them into smaller tasks and instructions for junior developers. Rob Walling describes this level as:
Project-level thinkers look ahead weeks or months and juggle multiple priorities. They often rely on team members to complete work that's combined into a single deliverable. Project-level thinkers have advanced systems in place to track the myriad moving parts needed to successfully complete a project.1
They are thinking at a higher level and can break a feature down into small instruction sets for junior team members to complete the work. Think about that for a second: they are doing exactly what the LLM needs right now. LLMs are junior, task-level thinkers that thrive on context, instructions, and rules. They need that information to be successful, just like a junior or entry-level developer does. This level of thinker is probably more excited about using agentic AI because it allows them to work more efficiently, manage multiple tasks, and automate processes. Breaking projects down is already second nature to them, so they likely follow that same process with AI tools and get better results.
Critical Soft Skills Are Still the Answer
We'll mostly skip the third level here; owner-level thinkers plan months or years in advance and change the path of a company1. Between the task and project levels, the major difference is the degree to which a developer can break down tasks as part of the whole. The way you get there is experience and time, which is why it is still critical to take the time to mentor junior developers. They need to keep developing the skill set and critical thinking that project-level thinking requires. There is some fear that AI tools will erode our critical thinking and our ability to think at the project level4, so it may be worth having junior developers turn off the AI from time to time.
It will continue to be important to give junior developers, or even more senior task-level thinkers, time to grow and learn to see things from the project level. The ability to look at a project and break it down into smaller tasks and instructions is a critical skill that becomes even more important when working with AI. Developers who haven't yet mastered it are probably not crazy about AI. They probably say things like "it just started coding the whole project", "it took me hours to fix what it did", or "it didn't build what I wanted". There is some nuance between models and which are better suited to specific tasks, but what is true of all of them is that the quality of the output depends on the quality of the input: your first handful of prompts.
Mentor and coach these developers and help them get out of the task-level thinking box. Talk them through a large feature and ask how they would break the tasks down, what the challenges and constraints are, how they would define the scope, and what domain knowledge is needed to complete the feature. Let them take the reins and guide them through. It takes time, but it will help both them and you in the long run.
Conclusion
AI isn't magic. How your developers think can greatly impact their experience and opinion about working with AI. Luckily, the same skills we have relied on to mentor developers into senior roles and help them advance their careers still apply. Understanding how your developers think is no different now than it was five years ago, but realizing it could be impacting their success or opinion on AI tools is new. Take the time to mentor them and get them thinking at the project level, and even if they still hate AI, you at least have them that much closer to being a senior or project-level thinker.