Joao Victor Souza
Stop Generating AI Slop: The Ultimate Workflow for Coding with Claude Code

The AI-assisted software development workflow must follow one fundamental principle: never generate code before you review and approve a plan. In short: don't generate AI slop.

The separation between planning and execution is the most critical point, and the most neglected by vibe coders. It prevents wasted effort, keeps you in control of architectural decisions, and produces significantly better results than jumping straight into coding.

To achieve this, we separate the process into three stages.

Stage 1: Research

Every task starts with context. Ask Claude to fully understand the relevant part of the codebase before anything else; this analysis shouldn't be a chat summary, and it needs to be recorded in a Markdown file.

The @xpto.js file is responsible for user authentication in the application. Deeply analyze and understand how JWT generation works, its functions, and all its specificities. Once finished, write a detailed report of your findings in analysis/xpto.md

The written document is crucial and serves as a review tool. You can read it, verify if Claude actually understood the system, and correct misunderstandings before any planning begins. If the research is wrong, the plan will be wrong, and the implementation will follow suit. Slop in, slop out.
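There's no required format for the report, but a useful one tends to cover a few predictable sections. A hypothetical skeleton (headings and function names are illustrative, not prescribed by the workflow):

```markdown
# Analysis: xpto.js (JWT authentication)

## Overview
What the module does and where it sits in the request flow.

## Key functions
- `generateToken(user)` — claims included, expiry, signing algorithm
- `verifyToken(token)` — error handling, where it's called from

## Dependencies and conventions
Libraries used, config/env vars, related middleware.

## Open questions
Anything ambiguous the human should confirm before planning.
```

An "Open questions" section is especially valuable: it surfaces exactly the misunderstandings you want to correct before planning starts.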

This is the most expensive failure mode in AI-assisted programming. It's not about incorrect syntax or flawed logic; it's about implementations that work in isolation but break the surrounding system: a function that bypasses an existing caching layer, a migration that ignores ORM conventions, an API endpoint that duplicates logic already existing elsewhere. The research phase should prevent all of this.

Stage 2: Planning

After reviewing the research, I request a detailed implementation plan in a separate Markdown file.

I want to create a new feature that adds a role to the existing JWT, extending the system to implement different subscription levels. Draft a detailed plans/jwt-roles.md document describing how to implement this feature. Include significant code snippets and paths for the files that will be modified.

The generated plan always includes a detailed explanation of the approach, code snippets showing the actual changes, files to be modified, and considerations regarding the pros and cons of the solution.
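To make that concrete, here's the kind of snippet a plan for the JWT-role feature might contain. This is a hypothetical sketch: the type names, the rank table, and the `hasAtLeast` helper are illustrative, not taken from any real codebase.

```typescript
// Hypothetical sketch of a plan snippet: extending a JWT payload
// with a `role` claim to drive subscription levels.
type Role = "free" | "pro" | "enterprise";

interface TokenPayload {
  sub: string;  // user id
  role: Role;   // new claim driving subscription levels
  iat: number;  // issued-at timestamp (seconds)
}

function buildPayload(userId: string, role: Role): TokenPayload {
  return { sub: userId, role, iat: Math.floor(Date.now() / 1000) };
}

// A guard the rest of the system can use to gate features by level.
const roleRank: Record<Role, number> = { free: 0, pro: 1, enterprise: 2 };

function hasAtLeast(payload: TokenPayload, required: Role): boolean {
  return roleRank[payload.role] >= roleRank[required];
}
```

Snippets at this level of detail are what make the plan reviewable: you can reject the rank-table approach, rename the claim, or flag a collision with an existing field before a single file is touched.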

You could use Claude's built-in planning mode instead. However, with a Markdown file in the project folder, you can edit it, add inline notes, and it remains an actual artifact in the project.

This workflow shines especially in AI-driven editors, where the agent has access to your filesystem. After Claude writes the plan, I open it in my editor and add notes directly into the document. These notes correct assumptions, reject approaches, add constraints, or provide domain- or project-specific knowledge that Claude lacks. The notes vary greatly in length: sometimes a single sentence, sometimes a paragraph explaining a business constraint or a pasted code snippet showing the expected data format.

The JWT verification function must implement a cache.
The database query should be cursor-based instead of offset-based.

Then, I send Claude back to the document:

I added some notes to the document. Address all the notes and update the plan accordingly.

This cycle repeats as many times as necessary.

This is the most distinctive part of my workflow — and where I add the most value.


Why does this work?

The Markdown file acts as a shared mutable state between Claude and me. I can think at my own pace, annotate precisely where something is wrong, and resume the analysis without losing context; there's no need to explain everything in a chat message.

This is fundamentally different from trying to steer the implementation through chat messages. The plan is a structured and complete specification that I can analyze holistically. A chat conversation is something I'd have to scroll through to reconstruct decisions. The plan always wins.

One or two rounds of "I added notes, update the plan" can transform a generic plan into one that fits perfectly into the existing system. Claude is excellent at understanding code, proposing solutions, and writing implementations. But it doesn't know my product priorities, my users' pain points, or the engineering trade-offs I'm willing to make. The annotation loop is how I inject that judgment.

The Task List

Once planning is finished, before starting the implementation, I always request a detailed breakdown of the tasks:

Add a detailed task list to the plan, with all the phases and individual tasks needed to complete it.

This creates a checklist that serves as a progress tracker during implementation. Claude marks items as done as they progress, so I can check the plan at any time and see exactly where we are. This is especially useful in sessions that last for hours.
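The resulting checklist is plain Markdown, so it renders in any editor and Claude can tick items off as it goes. A hypothetical shape (phases and tasks are illustrative):

```markdown
## Task List

### Phase 1: Token changes
- [x] Add the `role` claim to the JWT payload
- [x] Update the token generation function
- [ ] Update the verification function

### Phase 2: Authorization
- [ ] Add role checks to protected endpoints
- [ ] Write tests for each subscription level
```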

Stage 3: Implementation

When the plan is ready, I issue the implementation command. I've refined this into a standard prompt that I reuse in practically all sessions:

Implement everything. Upon completing a task or phase, mark it as done in the planning document. Do not stop until all tasks and phases are completed.

I use this phrase, with only slight variations, in almost all implementation sessions. When I say "implement everything", all decisions have already been made and validated. The implementation becomes mechanical, not creative. This is on purpose; implementation should be boring. The creative work happens in the annotation loops. Once the plan is right, execution should be straightforward.

Without the planning phase, Claude makes a reasonable, yet wrong, assumption right from the start, builds on it for 15 minutes, and then I have to undo a series of changes. Keeping planning strictly separate from implementation, with an explicit "do not implement yet", eliminates this.

Feedback during implementation

As soon as Claude starts executing the plan, my role shifts from architect to supervisor. My commands become drastically shorter.


While a planning note might be a paragraph, an implementation correction usually consists of a single sentence. Claude has full knowledge of the plan and the ongoing session, so concise corrections are sufficient.

You created the settings page in the main app when it should be in the admin app. Move it there.

Stay in control of the situation

Even though I delegate execution to Claude, I never give it total autonomy over what will be built. I do the vast majority of active steering within the documents.

This is important because Claude proposes solutions that are technically correct but unsuitable for the project. Maybe the approach is overly complex, changes the signature of a public API that other parts depend on, or chooses a more complex option when a simpler one would suffice. I have context about the system as a whole, the product direction, and the engineering culture that Claude doesn't.


  • Selecting what works: When Claude identifies several issues, I analyze them one by one: "for the first one, just use Promise.all, keep it simple; for the third, extract it into a separate function; ignore the fourth and fifth, they're not worth the complexity." I am making specific decisions for each item based on what matters right now.
  • Scope reduction: When the plan includes nice-to-have items, I actively cut them. "Remove the download feature from the plan, I don't want to implement it right now." This prevents scope creep.
  • Protecting existing interfaces: I set strict boundaries when I know something shouldn't change: "the signatures of these three functions must not change; the caller must adapt, not the library."
  • Overriding technical choices: Sometimes, I have a specific preference that Claude is unaware of: "use this model instead of that one" or "use the built-in method of this library instead of writing a custom one." Quick and direct overrides.
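The `Promise.all` steering note above is typical of these one-line overrides. A hypothetical before/after sketch (`fetchUser` and `fetchOrders` stand in for whatever the real calls are):

```typescript
// Stand-ins for real data-fetching calls in this illustration.
async function fetchUser(id: string): Promise<{ id: string }> {
  return { id };
}
async function fetchOrders(id: string): Promise<string[]> {
  return [`order-for-${id}`];
}

// Before: sequential awaits, one round trip after another.
// After: the two independent requests run concurrently.
async function loadDashboard(userId: string) {
  const [user, orders] = await Promise.all([
    fetchUser(userId),
    fetchOrders(userId),
  ]);
  return { user, orders };
}
```

A one-sentence correction like "just use Promise.all, keep it simple" is enough because Claude already holds the plan and the session context; you're only picking among options it can already see.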

Claude handles the technical execution, while I make the decisions. The plan covers the big decisions upfront, and selective steering handles the minor decisions that pop up during implementation.

What this workflow actually builds

Bringing research, planning, and execution together in a disciplined loop isn't just about "producing code faster". It's about elevating the quality of technical thinking.

  1. Research as a safeguard: Deep analysis of existing code prevents changes from breaking the surrounding system. It's not just about correct syntax, but respecting the existing architecture.
  2. Planning as a filter: The annotated plan is where human judgment comes in. Claude is great at understanding code and proposing solutions, but it doesn't know your product priorities, the engineering trade-offs you're willing to make, or the problems your users actually face.
  3. Distraction-free execution: When all decisions have been made and validated, implementation becomes mechanical. As noted earlier, execution should be a predictable, mechanical step, reserving your creative energy for annotating the plan.
  4. Long sessions as an advantage: By keeping research, planning, and execution in a single conversation, Claude accumulates context progressively. Auto-summarization keeps enough to continue, and the plan survives with full fidelity, although the model may start to lose early instructions if the chat runs very long.

Verdict

The difference between producing AI slop and quality software with coding agents comes down to a single word: discipline.

Read carefully, write a plan, annotate the plan until it's right, and then let Claude execute everything non-stop, checking the types along the way.

No magic prompts, no elaborate system instructions, no clever tricks. Just a workflow that separates thinking from typing. Research prevents Claude from making changes out of ignorance. Planning prevents it from making the wrong changes. The annotation loop incorporates your judgment. And the implementation command lets it run without interruption.
