TL;DR: I built a 7-step workflow to pair with AI effectively: align on context and plan, track progress, implement in small steps, reflect, test, run deterministic quality checks, and commit each step.
After more than ten years of coding, I hit a weird spot.
The tools kept getting smarter — React was rock solid, TypeScript was everywhere, and AI assistants started feeling like magic.
But my workflow? Total chaos.
Some days, I’d treat the AI like a teammate and actually collaborate. Other days, I’d just “vibe code,” throw random prompts at it, and hope for the best.
Sometimes it worked. Most of the time, it didn’t.
That’s when I realized I needed to figure out a better way to actually work with AI — not just use it.
The Problem with “Vibe Coding”
Let’s be real — AI coding assistants are amazing.
They can spin up components from a single prompt, explain code better than Stack Overflow ever did, and sometimes even save you from your own bugs.
But when I first started using them, I completely messed up my approach.
I’d let the AI take the wheel.
One day I’d say, “Build me a user dashboard,” and it would spit out 200 lines of working code.
The next day I’d ask for “a better version,” and suddenly everything looked different — styles, logic, even naming.
It was fast, sure. But it was chaos.
No consistency.
No real sense of progress.
And no way to actually learn from what worked.
Building a Systematic Approach
After months of experimentation, I developed a workflow that treats AI as a skilled collaborator rather than a code generator. It's structured, repeatable, and gives me complete control while leveraging AI's strengths. Every commit follows the same 7-step process, and every decision is intentional.
Here's exactly how I work now:
Step 1: Context & Planning 🎯
I start every feature by giving my AI assistant complete context. This isn't just about what I want to build - it's about ensuring the AI understands my codebase, patterns, and standards.
In this phase, I deliberately spell out as much technical knowledge as possible - architecture constraints, data models, invariants, performance budgets, failure modes, and edge cases - so the plan encodes my expertise and trade-offs.
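Part of that context is the concrete data the feature touches. A hypothetical sketch of what I might hand the AI alongside the prompt (the real types live in the codebase; names and fields here are illustrative only):

```typescript
// Hypothetical data model for illustration - the real types live in the codebase.
// Spelling out shapes and invariants up front keeps the AI from inventing its own.
interface Case {
  id: string
  title: string
  status: 'open' | 'in_review' | 'closed' // invariant: closed cases are read-only in the UI
  assignee?: string                       // optional: unassigned cases must still render
  updatedAt: string                       // ISO 8601; the table sorts on this by default
}

type CaseListView = 'card' | 'table'      // the switcher toggles between exactly these two
```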
My typical prompt to Cursor:
"I need to add a table view and switcher to my card list component.
Please read the context:
- @src/components/ui/Table/Table.tsx
- @src/components/ui/ViewSwitcher/ViewSwitcher.tsx
- @src/components/Cases/Cases.tsx
- @.cursor/rules/react-development.mdc (React/TS patterns & standards)
- @.cursor/rules/quality-check.mdc (quality requirements)
- @.cursor/rules/api-development.mdc (API interaction patterns)
Now create a detailed implementation plan with:
1. Small, deliverable steps...
2. Testing strategy...
3. Component architecture...
4. Best practices to follow...
The key here is my Cursor rules - custom guidelines I maintain that ensure consistent React patterns, accessibility standards, and quality requirements. Without these, I'd be starting from scratch every time.
Note: these @...md commands are my custom Cursor actions that encapsulate prompts and validations.
Step 2: Generate Progress Document 📋
Once I have the plan, I run my custom command, @progress-md.md. This generates a structured progress tracker that breaks the feature into small, testable steps.
The output looks like this:
```markdown
# Table View Feature - Progress Tracker

## Project Overview
**Objective**: Add table view and switcher to card list
**Status**: In Progress | **Phase**: 1 of 5 | **Progress**: 0%

## Pending Steps
- [ ] **STEP-001**: Create ViewSwitcher component
- [ ] **STEP-002**: Integrate Table component with data
- [ ] **STEP-003**: Add view switching logic
- [ ] **STEP-004**: Add comprehensive tests
- [ ] **STEP-005**: Update documentation
```
Each step is designed to be committed independently.
This document becomes my roadmap. No more vague "implement the feature" - I know exactly what's next.
Step 3: Implement Features ⚡
Here's where the actual coding happens. I work through each step systematically, implementing small chunks that can be committed independently.
For UI components, I use another custom command, @start-playwright.md, to launch Playwright. This gives the AI browser context, allowing it to see rendered components and validate interactions in real time.
I tell Cursor: "Based on our plan, implement STEP-001. Follow the patterns in @src/components/ui/
and ensure accessibility compliance."
The result? A focused, testable component that fits perfectly into my architecture.
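To make that concrete, here's a minimal sketch of the kind of component STEP-001 produces. It's simplified and the props are my illustration, not the exact implementation; the real version follows the styling and structure my Cursor rules enforce:

```tsx
import { type FC } from 'react'

type CaseListView = 'card' | 'table'

interface ViewSwitcherProps {
  view: CaseListView
  onViewChange: (view: CaseListView) => void
}

// Minimal sketch: a controlled, accessible toggle between card and table views.
export const ViewSwitcher: FC<ViewSwitcherProps> = ({ view, onViewChange }) => (
  <div role="group" aria-label="List view">
    <button
      type="button"
      aria-pressed={view === 'card'}
      onClick={() => onViewChange('card')}
    >
      Card view
    </button>
    <button
      type="button"
      aria-pressed={view === 'table'}
      onClick={() => onViewChange('table')}
    >
      Table view
    </button>
  </div>
)
```

Keeping it controlled (the parent owns the `view` state) is what makes the later integration and hook extraction straightforward.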
Step 4: Reflect on Changes 🔍
After implementation, I run @reflect-changes.md to analyze what was built. This command validates compliance with my development patterns and identifies improvement opportunities. A typical reflection looks like this:
```
Summary of Changes:
- Added ViewSwitcher component with proper state management
- Integrated Table component with existing data structure
- Used consistent styling patterns across components

React Development Pattern Compliance:
- Component Structure: PascalCase naming, proper organization
- TypeScript Usage: Proper interfaces and type safety
- Accessibility: WCAG 2.1 AA compliance, proper ARIA attributes
- State Management: Correct hooks usage and data flow

Extraction Opportunities:
- Switcher logic could be moved to a custom hook for reuse
```
This reflection step catches inconsistencies before they become problems.
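Acting on that extraction opportunity might look like the sketch below. The hook name and its API are my illustration of the idea, not the exact refactor:

```typescript
import { useCallback, useState } from 'react'

type CaseListView = 'card' | 'table'

// Sketch of the extracted hook: it owns the current view and exposes a typed
// setter, so any list screen can reuse the switching logic.
// Usage: const { view, switchView } = useViewSwitcher()
export function useViewSwitcher(initialView: CaseListView = 'card') {
  const [view, setView] = useState<CaseListView>(initialView)

  const switchView = useCallback((next: CaseListView) => {
    setView(next)
  }, [])

  return { view, switchView }
}
```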
Step 5: Add Tests 🧪
Testing is non-negotiable. I run @add-tests.md to generate user-centric tests that focus on real interactions, not implementation details.
The generated tests look like this:
```tsx
import { render, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
import { Cases } from 'src/components/Cases/Cases' // adjust the import path to your project layout

describe('View switching', () => {
  it('switches between card and table views', async () => {
    const user = userEvent.setup()
    render(<Cases />)

    await user.click(screen.getByRole('button', { name: /table view/i }))
    expect(screen.getByRole('table')).toBeInTheDocument()

    await user.click(screen.getByRole('button', { name: /card view/i }))
    expect(screen.getAllByTestId('case-card').length).toBeGreaterThan(0)
  })
})
```
These tests validate user workflows while ensuring accessibility compliance.
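On top of the generated workflow tests, I sometimes add keyboard-focused checks. A small sketch in the same style (the component import path is assumed, as above):

```tsx
import { render, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
import { Cases } from 'src/components/Cases/Cases' // import path assumed for illustration

describe('View switching accessibility', () => {
  it('switches to the table view with the keyboard only', async () => {
    const user = userEvent.setup()
    render(<Cases />)

    // Focus the switcher button directly and activate it with Enter,
    // mirroring a keyboard-only user instead of a mouse click.
    screen.getByRole('button', { name: /table view/i }).focus()
    await user.keyboard('{Enter}')

    expect(screen.getByRole('table')).toBeInTheDocument()
  })
})
```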
Step 6: Quality Assurance ✅
Before any commit, I run @precommit.md for deterministic validation. No opinions here - just tools checking my work.
```
Quality Checks:
✅ TypeScript compilation - No type errors
✅ ESLint - Code quality standards met
✅ Prettier - Consistent formatting
✅ Vitest - All 127 tests passing
✅ React patterns - Development standards compliance

📊 Quality metrics:
- 0 linting errors
- 0 type errors
- 98% test coverage
- All accessibility checks passed

🎯 SUGGESTED COMMIT MESSAGE:
feat: add table view and switcher to card list

✨ Key improvements:
- ViewSwitcher component for toggling between views
- Table component integration with existing data
- Accessible view switching with keyboard navigation
- Comprehensive test coverage for user workflows
```
If the tools pass, the code is ready. No human judgment needed.
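The checks themselves are ordinary CLI tools, so conceptually the gate boils down to something like this sketch: a hypothetical Node script that runs each tool and fails fast (my actual @precommit.md is a Cursor prompt, and the exact flags live in the project config):

```typescript
// scripts/precommit.ts - rough sketch of a deterministic quality gate.
// Each check is an existing CLI; if any of them exits non-zero, the commit stops.
import { execSync } from 'node:child_process'

const checks: Array<[name: string, command: string]> = [
  ['TypeScript', 'tsc --noEmit'],
  ['ESLint', 'eslint . --max-warnings 0'],
  ['Prettier', 'prettier --check .'],
  ['Vitest', 'vitest run'],
]

for (const [name, command] of checks) {
  console.log(`Running ${name}...`)
  execSync(command, { stdio: 'inherit' }) // throws (and fails the commit) on a non-zero exit
  console.log(`✅ ${name} passed`)
}

console.log('All quality checks passed. Ready to commit.')
```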
Step 7: Commit & Continue 🚀
With validation complete, I commit using the generated message and update my progress document:
```markdown
## ✅ Completed Steps
- [x] **STEP-001**: Create ViewSwitcher component ✅
  - Commit: feat: add table view and switcher
  - Validated: All quality checks passed

## 🔄 In Progress
- [ ] **STEP-002**: Integrate Table component with data 🔄
  - Status: Ready to implement
  - ETA: Next work session
```
Each step gets its own commit with clear value and context.
Why This Works
After 10 years of coding, I've learned that consistency compounds. Every feature I build now follows this exact process, which means:
- Predictable Quality: Every commit meets the same high standards
- Faster Iteration: Clear progress tracking means I can resume work instantly
- AI Accountability: The structured approach prevents "vibe coding" while maximizing AI benefits
- Measurable Improvement: I can optimize each step because the process is consistent
AI Should Amplify - Not Obscure - Your Expertise
There's a gut-check I run every day: do I feel my existing knowledge is being fully used? If the answer is ever "no," that's a signal I'm slipping into autopilot. The goal of this workflow is not to outsource judgment - it's to amplify it, so my advantages (taste, domain context, architectural instincts) compound through the AI, not get washed out by it.
Practically, I design each step so my expertise has to show up: I reference my own rules and patterns in prompts, I constrain options (trade-offs I care about), and I reflect on diffs with questions only I can answer ("Does this match our failure modes?" "Will this scale with our data shape?"). When that loop is present, I feel my experience steering the outcome.
If I ever feel my expertise isn't factoring into day-to-day work, I treat it as a process bug - not a personal one - and adjust the guardrails until my judgment is front and center again.
The Tools That Make It Possible
My workflow relies on several key components:
- Cursor AI: The intelligent assistant that understands my codebase and patterns
- Custom Cursor Rules: @.cursor/rules/react-development.mdc, @.cursor/rules/quality-check.mdc, etc.
- Custom Commands: @progress-md.md, @reflect-changes.md, @add-tests.md, @precommit.md
- Playwright MCP: For real browser context during UI development
- Structured Progress Tracking: Living documents that guide each feature
Looking Forward
This workflow has transformed how I approach development. What used to take days of scattered effort now happens in focused, high-quality sessions. The AI isn't replacing my judgment - it's amplifying my systematic approach.
If you're considering AI-assisted development, don't just "try it out." Build the guardrails, establish the patterns, and create the consistency that lets you optimize every iteration.
The shift from traditional coding to this AI-augmented approach wasn't about the tools - it was about finding a systematic way to collaborate with them effectively.
You can see my complete workflow presentation and methodology at: github.com/haco29/ai-workflow
Who Am I
Hey, I’m Harel — a Front-End Tech Lead at Verbit.
Over the past few years, I’ve built and led React projects that grew from small ideas into large-scale ecosystems used daily by hundreds of people.
Somewhere along the way, I realized that writing great code isn’t just about knowing React or TypeScript — it’s about building systems that make other developers faster, calmer, and more confident. That’s the craft I’m obsessed with.
I love exploring tools, patterns, and workflows that bring more intention to coding — from shared UI libraries and micro frontends to AI-assisted development.
The goal is always the same: make the front end less chaotic, and coding more deliberate.
Outside of work, I’m usually chasing my two little kids around the house or over-engineering some side project for fun.
Note: This article was collaboratively written with AI assistance following the same structured workflow described above.