Ooi Yee Fei
Building with AI: My (Still Evolving) Workflow with Claude Code

Documenting my learning journey and the workflow practices I've been building.

Context on my background: not a genius developer. No CS degree. No years of production engineering. I started in tech a few years ago as a Solutions Architect. I like thinking in systems and automation, but I'm still learning the hands-on implementation details as I go.

Here are a few things I’ve learned — or had to unlearn — as I figure out how to actually use these tools to my advantage:

On learning with AI-fed information

  • An early lesson and a constant reminder: when learning something new, I often can't tell if the AI is "confidently wrong", because sometimes I don't even know what's right yet.
  • These tools make you just good enough to be dangerous. It's empowering, sure. But I have to constantly remind myself to think critically / stay skeptical of the suggestions I'm being fed.
  • That’s shaped a simple loop for me: take AI suggestions → understand → validate → learn. (Not a new idea about how we humans learn anyway, just now with faster iteration.)

On coding agents / tools:

  • Early experience was mixed. I let agents take the wheel a few times, and they broke things. So I backed off: I stopped letting AI edit my codebases directly and went back to tools like Perplexity and Google AI Studio, where I kept control. This allowed me to maintain ownership of the architecture.
  • (I still use these tools for different tasks today. Google AI Studio and Gemini Pro have become my "Shifu"—they've saved me from scratching my head off on some challenging bugs!)
  • I revisited Cursor recently. This time, I studied how others used it and set clear rules. It worked okay for small, greenfield stuff, but it didn't work for me on bigger, messier projects with weird dependencies and syncing issues. It felt too risky.
  • The project I'm on now started small but is growing more complex, and the learning curve is steep. AI-assisted coding has helped me keep up, not just by writing code faster, but by giving me a persistent mentor and thinking partner to explore unknowns and fix weak spots with. Then I gave Claude Code a try about 10 days ago (I wish I'd started earlier). Its design and workflow clicked and felt like they fit the way I think.

Here are a few foundational practices I've adopted that have boosted my effectiveness, especially for navigating the complexities of an evolving project. (Most of these are probably common sense by now, and some come from Claude's best practices; plenty of people have been using these workflows far longer than I have.)

1. Always Start with /init

New project or new codebase? Run /init.

Serves two purposes for me:

  • For Claude: It documents the repo, setting the rules of engagement—tech stack, coding style, focus areas—so it doesn’t wander aimlessly.
  • For Me: It creates an instant, smart README. I can even ask Claude to explain parts of the project by referencing this file. It’s a learning tool in itself.
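
For anyone new to it, the flow is just: start Claude Code in the repo and run the slash command. A rough sketch (my-project and the follow-up prompt are placeholders of my own, not canonical examples):

```bash
# From the project root, start Claude Code
cd my-project
claude

# Then, inside the session:
#   > /init
# Claude scans the repo and writes a CLAUDE.md at the project root.
#
# Illustrative follow-up, using that file as a learning tool:
#   > Using CLAUDE.md as a reference, walk me through how auth works in this repo.
```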

It's important to automate CLAUDE.md updates as the project grows. For example, I ask Claude to keep its own CLAUDE.md updated with new configs, versions, and libraries to track compatibility. These small, compounding benefits build up over time: fewer mistakes, more clarity.
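
To give a concrete idea, here's a minimal sketch of the kind of standing rule I add; the wording is illustrative, not canonical:

```bash
# Append a self-maintenance rule to the project's CLAUDE.md
cat >> CLAUDE.md <<'EOF'

## Maintenance rules
- When a dependency, config, or library version changes, update the
  "Tech Stack" section of this file in the same session.
- Note any known compatibility constraints next to the affected library.
EOF
```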

[More on how I've started using a sub-tree of CLAUDE.md files for better context engineering later.]

  • A better setup saves you tokens, time, and mental energy. Manage context well, avoid overfeeding it, and don't rely on /compact too much. (Still learning about context engineering and how to do it better.)

  • I also maintain a global CLAUDE.md file that sets general rules I use across most projects: e.g., which subagents, tools, and MCPs to use, and when to call what.
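
Claude Code picks up a user-level memory file at ~/.claude/CLAUDE.md, which is where I keep these. A sketch, where the rules themselves are just examples of the kind I mean:

```bash
# Global rules in ~/.claude/CLAUDE.md apply across all projects
mkdir -p ~/.claude
cat >> ~/.claude/CLAUDE.md <<'EOF'

## General rules (all projects)
- Use subagents for parallelizable research and review tasks.
- Prefer read-only MCP access to databases; generate SQL for me to apply.
- Ask before running destructive commands (rm, drop, force-push).
EOF
```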

(Which brings me to the next few points...)

2. Use Sub-agents to Scale

  • My simple starting point was a rule in my CLAUDE.md, like an autoscaling rule. Example: “Min 2, max 5 subagents depending on the task.”
  • That really improved how I break down and parallelize work. Also, each subagent can focus better on smaller, more specialised tasks.
  • Claude Code recently launched custom sub-agents. I've been trying them out and now have specialists for different tasks (a sketch follows after this list). [More on custom sub-agents later; they are amazing with the right setup.]
  • There's also a ‘roles’ / RBAC kind of concept related to this that I want to explore more, for better context, more focused runs, and security as well (which agent can do what).
  • Using these with Git worktree is also life changing 😛. [More on this later]
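
Custom sub-agents are Markdown files with YAML frontmatter, stored in .claude/agents/ (project-level) or ~/.claude/agents/ (user-level). Here's a sketch of one of my specialists; the name, description, and tool list are my illustrative choices, not a canonical setup:

```bash
# Define a project-level custom sub-agent
mkdir -p .claude/agents
cat > .claude/agents/code-reviewer.md <<'EOF'
---
name: code-reviewer
description: Reviews diffs for bugs, security issues, and style problems. Use proactively after significant code changes.
tools: Read, Grep, Glob
---
You are a senior code reviewer. Focus on correctness, security (auth,
injection, secrets), and consistency with the conventions in CLAUDE.md.
Report findings as a prioritized list; do not edit files yourself.
EOF
```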
3. MCP

For my project, I started with the Supabase and browser MCPs, which have been helpful. For example:

Supabase MCP -

  • My agents stay up-to-date with my database schema and security policies automatically, which is perfect for syncing with RLS, auth migrations, and performance/security planning.
  • For security, I set the Supabase MCP to read-only. For any schema changes, I ask Claude to generate the SQL, then I validate and apply it myself.
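
For reference, wiring this up looks roughly like the following; check the Supabase MCP docs for the exact flags, and note that SUPABASE_ACCESS_TOKEN and PROJECT_REF are placeholders you set yourself:

```bash
# Register the Supabase MCP server with Claude Code in read-only mode
claude mcp add supabase \
  -e SUPABASE_ACCESS_TOKEN="$SUPABASE_ACCESS_TOKEN" \
  -- npx -y @supabase/mcp-server-supabase@latest \
  --read-only --project-ref="$PROJECT_REF"
```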

GitHub MCP -

  • For trivial tasks, it seems like it's just running git commands that we could type and run quickly ourselves. But I find the additional value comes in when Claude understands the context of our session. It can summarize our work and write a commit message that is far more helpful and detailed than what I would write myself. It's about letting the expert do what it does best.
  • Note on how to use it: you need to set up the gh CLI, run gh auth login, etc.
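
Concretely, the one-time setup is something like this (the install command varies by platform; this is the macOS/Homebrew version):

```bash
# One-time GitHub CLI setup
brew install gh
gh auth login     # authenticate with your GitHub account
gh auth status    # verify the token is active
```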

[Other powerful MCPs like context7, zen, and playwright. More on this later]

4. The "Code → Review → Verify" Loop: Use "Plan Mode" to Think Before Coding

Shift + Tab switches you into ‘Plan mode’ in Claude Code. (When I first started with Claude Code I actually didn't know about it, as it wasn't documented anywhere I looked; I only found out from some videos.)

  • What it does: it lets Claude read files, images, or URLs, with general cues like ‘read xyz file’ or ‘understand ABC’, without writing any code. In simple words, I use this mode to tell Claude: “Read this. Understand it. Don’t write code yet.”
  • I use this for complex tasks, sometimes together with keywords like "analyze" and "think", before writing a single line of code.
  • Sometimes I have subagents do parallel research. Each feeds their notes back to the main agent. Faster and more complete.
  • I also experiment using MCPs to get different models to “debate” / discuss an approach or best practice, or break a massive refactor into small, trackable units. [More on this later.]

5. Keywords + Knowing When to Use AI (and When Not To)

  • You can control Claude’s “thinking depth” with simple keywords: think < think hard < think harder < ultrathink.
  • More thinking = more tokens. More cost. So don’t use a sledgehammer for a nail.
  • That's another thing I learned: the "plan" is for me, too. I need to think critically about when AI is the right tool; sometimes you don't need AI, you just need automation.
  • For example, with simple repetitive tasks, it's often better and faster to use a Bash script; or, if you need help, ask AI to generate an automation script once and solve it for the long term (sketch after this list).
  • Some tasks are handled faster and more effectively by a plain script than by having AI reason through the task each time. Knowing the difference has been a huge productivity booster. (Still learning and exploring on this front.)
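
As a sketch of the "script it once" idea: say I keep dumping files with messy names into an assets folder. A tiny Bash script handles it forever; the folder and naming rule here are made-up examples:

```bash
#!/usr/bin/env bash
# Normalize file names in ./assets to lowercase-with-dashes,
# instead of asking an AI to do the renaming every time.
set -euo pipefail

for f in ./assets/*; do
  [ -e "$f" ] || continue   # skip if the glob matched nothing
  base=$(basename "$f")
  clean=$(echo "$base" | tr '[:upper:]' '[:lower:]' | tr ' _' '--')
  if [[ "$base" != "$clean" ]]; then
    mv -v "$f" "./assets/$clean"
  fi
done
```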

I’ve been learning a lot by trying different setups, tools, and workflows. Learning from others, and from doing. Documenting it here for myself. And maybe it’s useful for someone else too.
(Or maybe you’ll see something I’m doing wrong - help me do it better :)
