I've been using AI assistants for coding daily for over a year. Some workflows stuck. Others looked great on paper but wasted more time than they saved.
Here are the three I kept — and two I dropped.
✅ Workflow 1: The Spec-First Draft
Before I ask an AI to write any code, I write a one-paragraph spec:
Build a function that takes a list of timestamps and returns
the longest gap between consecutive entries. Input: list of
ISO-8601 strings. Output: { gapMs: number, start: string,
end: string }. Throw if the list has fewer than 2 entries.
That's it. No prompt engineering tricks. Just a clear description of what I want, what goes in, and what comes out.
Why it works: The AI doesn't have to guess your intent. You spend 30 seconds writing the spec and save 10 minutes of back-and-forth.
Why I kept it: Every time I skip the spec "because it's simple," the output is wrong on the first try.
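To show what the spec buys you, here's a sketch of what a correct answer to it looks like. I'm assuming the input list is already in chronological order, since the spec says "consecutive entries" (the function name is my own choice):

```typescript
interface Gap {
  gapMs: number;
  start: string;
  end: string;
}

// Sketch of an implementation the spec above pins down.
// Assumes timestamps arrive in chronological order, per the spec's
// "consecutive entries" wording.
function longestGap(timestamps: string[]): Gap {
  if (timestamps.length < 2) {
    throw new Error("need at least 2 timestamps");
  }
  let best: Gap = { gapMs: 0, start: timestamps[0], end: timestamps[1] };
  for (let i = 1; i < timestamps.length; i++) {
    const gapMs = Date.parse(timestamps[i]) - Date.parse(timestamps[i - 1]);
    if (gapMs > best.gapMs) {
      best = { gapMs, start: timestamps[i - 1], end: timestamps[i] };
    }
  }
  return best;
}
```

Notice how every line traces back to a sentence in the spec: the throw, the shape of the return value, the "consecutive" constraint. That's why the back-and-forth disappears.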
✅ Workflow 2: The Review-First Read
When I inherit unfamiliar code, I paste the file and ask:
Read this file. Don't change anything.
Tell me:
1. What this module does (one sentence)
2. What it depends on
3. The three riskiest lines and why
I do this before I touch anything. It's like having a second pair of eyes that reads faster than I do.
Why it works: I catch assumptions I'd otherwise miss. The "riskiest lines" question surfaces edge cases.
Why I kept it: I found a race condition in a queue handler this way. The AI flagged a shared mutable reference I would have glanced past.
✅ Workflow 3: The Test-First Scaffold
Instead of asking the AI to write the implementation, I ask it to write the tests first:
Write 5 unit tests for a function called `parseConfig`.
It should:
- Accept a JSON string
- Return a typed config object
- Throw on missing required fields
- Use defaults for optional fields
- Reject unknown keys
Then I feed the tests back and ask for the implementation. The tests act as a contract — the AI knows exactly what "correct" looks like.
Why it works: The AI writes better code when it has concrete pass/fail criteria.
Why I kept it: My test coverage went up and my "it works but not quite" rewrites went down.
❌ Abandoned: The "Explain Everything" Comment Pass
I used to ask the AI to add comments to every function after writing the code. The idea was documentation-as-you-go.
Why I dropped it: The comments restated the code: `// increment counter` above `counter++`. It added noise, not clarity. When I asked for "why" comments, the AI made up plausible-sounding reasons that were sometimes wrong.
Now I write the important comments myself and leave the obvious code uncommented.
❌ Abandoned: Full-File Refactoring in One Shot
I used to paste a 300-line file and ask: "Refactor this for readability."
Why I dropped it: The AI would change variable names, reorder functions, extract helpers, and switch patterns — all at once. The diff was unreadable. I couldn't tell if it introduced bugs because everything moved.
Now I refactor one thing at a time: "Extract the validation logic into a separate function. Don't change anything else."
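A single-step refactor like that produces a diff you can actually review. Here's a hypothetical before/after (function names are mine, not from any real codebase):

```typescript
// Before: validation inline in the handler.
function saveUserBefore(input: { email?: string }): string {
  if (!input.email || !input.email.includes("@")) {
    throw new Error("invalid email");
  }
  return input.email; // ...then persist...
}

// After: only the validation moved. Nothing renamed, nothing reordered.
function validateEmail(email?: string): string {
  if (!email || !email.includes("@")) {
    throw new Error("invalid email");
  }
  return email;
}

function saveUser(input: { email?: string }): string {
  const email = validateEmail(input.email);
  return email; // ...then persist...
}
```

Because only one thing moved, the diff is a few lines, and you can verify by inspection that behavior didn't change.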
The Pattern
The workflows that stuck have something in common: they constrain the AI's scope. Spec-first limits what it builds. Review-first limits it to reading. Test-first limits what "correct" means.
The workflows I dropped gave the AI too much freedom. "Comment everything" and "refactor everything" are vague mandates that produce vague results.
My rule of thumb: If you can't describe the task in one sentence, split it into tasks you can.
What workflows stuck for you? I'm curious which ones other developers kept vs. dropped — leave a comment.