# A Role-Based Workflow to Supercharge AI Coding — A Tool-Agnostic Design Philosophy

## Introduction

The world of AI coding tools is changing rapidly.

Services that were free yesterday become paid, models you relied on get restricted, and plan names keep changing. Do you find yourself having to “research everything all over again” each time?

In this article, we introduce a universal AI coding workflow that does not depend on any specific service or plan. Even if tools change, as long as you have this “framework,” you can quickly rebuild your setup.


## Core Concept: Classify AI by “Roles”

The starting point is to classify AI coding tools into three roles:

- **Thinker**: a high-quality model that refines specs and generates implementation prompts
- **Executor**: a low-cost agent that writes the actual code
- **Researcher**: a Deep Research tool that investigates prior art, libraries, and design patterns

The essence of cost saving is: “Don’t let the Thinker do the implementation.”
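The role split can be sketched as a tiny routing table. This is purely illustrative; the model names and task labels are made-up placeholders, not real services:

```python
# Hypothetical routing table: which role handles which kind of task.
ROLES = {
    "thinker":    {"model": "premium-model", "tasks": ["refine_spec", "write_prompt"]},
    "executor":   {"model": "budget-agent",  "tasks": ["implement", "fix_error"]},
    "researcher": {"model": "deep-research", "tasks": ["prior_art"]},
}

def assign(task: str) -> str:
    """Return the role responsible for a task; implementation never goes to the Thinker."""
    for role, cfg in ROLES.items():
        if task in cfg["tasks"]:
            return role
    raise ValueError(f"no role assigned for task: {task}")
```

For example, `assign("implement")` returns `"executor"` — the Thinker is never offered implementation work.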


## Overall Workflow

```text
[0] Draft a rough specification yourself

[1] Refine the spec with a Thinker AI (and generate implementation prompts)
    (Optional: use a Researcher AI to investigate prior art)

[2] Hand it off to an Executor AI

[3] Check the generated code (test / debug)

[4] If no issues, merge → return to [0]
```
By repeating this loop, you complete one feature at a time.
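The loop can also be expressed as a small driver function. This is a sketch only; the step functions (`refine`, `implement`, `verify`, `merge`) are hypothetical stand-ins you would wire up to real tools:

```python
def feature_loop(spec, refine, implement, verify, merge, max_retries=3):
    """Run one pass of the [0]→[4] loop for a single feature.

    refine(spec, feedback) -> prompt        # [1] Thinker
    implement(prompt)      -> code          # [2] Executor
    verify(code)           -> (ok, notes)   # [3] test / debug
    merge(code)            -> result        # [4] merge
    """
    feedback = None
    for _ in range(max_retries):
        prompt = refine(spec, feedback)
        code = implement(prompt)
        ok, feedback = verify(code)
        if ok:
            return merge(code)
    raise RuntimeError("retries exhausted; revise the spec in [0]")
```

Note that failure feedback flows back into `refine`, mirroring the article's advice to revise the prompt rather than patch endlessly.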

---

## [0] Write the Spec in Your Own Words

Before handing things over to AI, start by **writing down what you want to implement in bullet points**.



```text
- When the user presses a button, the score increases by +1
- The score is displayed at the top of the screen
- When the score reaches 10, transition to a game over screen
```

Perfection is not required. The goal here is simply to organize your own thoughts.
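To make the hand-off concrete, here is the kind of minimal model an Executor might later derive from those three bullet points (a framework-free Python sketch; class and method names are invented for illustration):

```python
class ScoreGame:
    """Minimal model of the bullet-point spec: +1 per press, game over at 10."""
    GAME_OVER_AT = 10

    def __init__(self):
        self.score = 0
        self.screen = "main"

    def press_button(self):
        if self.screen == "game_over":
            return                        # no scoring after game over
        self.score += 1                   # score increases by +1 per press
        if self.score >= self.GAME_OVER_AT:
            self.screen = "game_over"     # transition at 10

    def render(self) -> str:
        # Score is displayed at the top of the screen
        return f"SCORE: {self.score}\n[{self.screen}]"
```

Even this tiny example surfaces questions the spec left open (can you keep pressing after game over?), which is exactly what step [1] is for.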


## [1] Refine the Spec with a Thinker AI

This is the most valuable step to invest cost in. Provide your rough spec to a high-quality model.

### Prompt Template

```text
Based on the following specification, create a more detailed specification
and an implementation prompt for an AI coding agent.
If anything is unclear, ask me before proceeding with implementation.

(Paste the spec from [0])
```
### Why say “ask if unclear”?

If you pass an ambiguous spec to the implementation AI, **it will interpret things on its own**, increasing the chance of unintended behavior. Eliminating ambiguity through dialogue with the Thinker AI greatly improves downstream success rates.

### Cost-saving principles for Thinker AI

- **Mid-tier models are often sufficient.** Save top-tier models for complex reasoning.
- **Usage depends on output size and complexity, not just the number of calls.** Longer conversations increase per-call cost.
- **Usage limits reset over time (time windows or weekly).** Spread usage strategically instead of consuming it all at once.
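The principle “don’t let the Thinker implement” is ultimately arithmetic. A back-of-the-envelope sketch, where every token count and price is invented purely for illustration (real pricing varies widely by provider):

```python
# Hypothetical prices per 1K output tokens -- illustrative numbers only.
PRICE_PER_1K = {"thinker": 0.06, "executor": 0.002}

spec_tokens = 2_000    # Thinker output: refined spec + implementation prompt
code_tokens = 20_000   # Executor output: the actual (token-heavy) implementation

# Option A: the Thinker does everything, including implementation.
all_thinker = (spec_tokens + code_tokens) / 1000 * PRICE_PER_1K["thinker"]

# Option B: the Thinker only thinks; the Executor writes the code.
split = (spec_tokens / 1000 * PRICE_PER_1K["thinker"]
         + code_tokens / 1000 * PRICE_PER_1K["executor"])

print(f"everything on the Thinker: ${all_thinker:.2f}")  # $1.32
print(f"role-split workflow:       ${split:.2f}")        # $0.16
```

The exact numbers are fiction, but the shape is robust: implementation output dwarfs spec output, so whichever model writes the code dominates the bill.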
---

## [1.5] Use a Researcher AI for Prior Art (Optional)

If you get stuck wondering “how should this be implemented?” while writing the spec, insert a step to **research prior art, libraries, and design patterns using a Deep Research tool**.

**Steps:**

1. Ask the Thinker AI: “Create a prompt to research prior art related to this spec.”
2. Run Deep Research using that prompt
3. Attach the results to your spec and include them in the implementation prompt for [2]

This helps the implementation AI generate code with awareness of **common practices and established approaches**, improving success rates.

---

## [2] Hand Off to the Executor AI

Provide the refined prompt from [1] to the implementation AI.

### How to choose an Executor AI

The Executor should be **low-cost (or free) while maintaining a reasonable success rate**. While specific tools may change, the evaluation criteria remain consistent:

- **Cost**: Is there a free tier? Is it usage-based pricing?
- **Success rate**: Can it handle simple feature additions and bug fixes?
- **Asynchronicity**: Can you submit and wait (async), or must you stay engaged (sync)?
- **Limit reset timing**: Daily, weekly, or monthly resets?

### Keep multiple agents ready

Always have **2–3 candidate agents** available so you can switch immediately if one hits limits or fails.

“The best free agent today” might not be the best tomorrow. Keeping alternatives ready ensures long-term development stability.
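“Switch immediately” can itself be automated with a simple fallback chain. A sketch under stated assumptions: the agents here are plain callables standing in for real CLI or API agents, and `RuntimeError` stands in for whatever rate-limit or failure exception a real client raises:

```python
def run_with_fallback(prompt, agents):
    """Try each Executor agent in order; fall through on limits or failures.

    `agents` is a list of (name, callable) pairs, ordered by preference.
    """
    errors = []
    for name, agent in agents:
        try:
            return agent(prompt)
        except RuntimeError as exc:  # stand-in for rate-limit / failure exceptions
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all agents exhausted: " + "; ".join(errors))
```

With 2–3 candidates in the list, hitting one agent’s limit costs you a retry, not a stalled workday.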

---

## [3] Check the Generated Code

Run and verify the code generated by the agent.

### ① If errors occur

Copy the error message and ask the same agent to fix it:



```text
The following error occurred. Please fix it.

---
(Paste the error message)
```

### ② If behavior is not as intended

Close the PR or discard changes and return to [2].

**Why avoid repeated patching?** Patches layered on top of a misread spec tend to compound the misunderstanding while burning through usage limits; regenerating from a clean state is usually cheaper. If issues persist after a retry or two, go back to [1] and revise the prompt itself.

### ③ When it’s faster to fix manually

If you understand the code, it may be faster to fix it yourself rather than going back and forth with the agent. For small fixes (a few lines), manual correction is often more efficient.


## [4] Merge if Successful → Back to [0]

Once everything works, merge the PR or commit the changes, then return to [0] to define the next requirement.


## Three Benefits of This Workflow

### 1. Stable Costs

By using the Thinker (high-cost) only for thinking, and delegating implementation (token-heavy tasks) to low-cost agents, monthly costs become more predictable.

### 2. Tool Independence

The roles—Thinker, Executor, Researcher—remain constant. Even if services change, you can adapt by simply plugging in new tools that fit each role.

### 3. Higher Success Rate

The better the prompt quality, the better the output. By eliminating ambiguity beforehand, you reduce retries and accelerate development overall.


## Practical Checklist

- [ ] Have you chosen a Thinker AI (and understood its usage limits and reset timing)?
- [ ] Do you have 2–3 Executor agents as backups?
- [ ] Do you have access to a Researcher AI (with Deep Research capability)?
- [ ] Do you have a strategy to improve prompts when [2]→[3] fails?
- [ ] Do you review tool plans and limitations monthly?

## Summary

- Separating AI into Thinker, Executor, and Researcher optimizes cost and quality
- Not letting the Thinker handle implementation is the biggest cost saver
- Keep multiple implementation agents and reset/retry when needed
- Tools will change; having a framework lets you adapt easily

In a rapidly evolving AI landscape, relying on specific tools is fragile. Having a solid “framework” is what gives you a long-term advantage.


This article was automatically cross-posted from norinori1's portfolio
