I'm a Swift/iOS developer. I know MVVM, async/await, dependency injection. I don't know Next.js, React, or how to wire together ElevenLabs STT + DeepL translation + ffmpeg in the browser.
Rather than spending 2 weeks learning the stack, I went all-in on AI coding agents (Claude Code) and treated it like a real architecture problem — not just "write me some code."
The result: DubbAI, a web service that takes audio or video, transcribes it, translates it, and dubs it into the target language. Three days. 33 tests passing. Zero Next.js experience at the start.
Here's what I learned, including the parts that didn't work.
Mistake #1: Asking for Code Before Having a Plan
My first move was asking the AI to implement Google OAuth + a Turso database schema. It produced working code, but the pieces didn't fit together — the auth flow assumed a session model that didn't match my DB schema, and I had to restructure the integration points.
What actually worked:
Ask for a design doc first. Architecture, data flow, error handling strategy, API contracts. Review it yourself, poke holes in it, then say "implement this."
"Don't write code. Design the architecture for [feature]. I'll review it before we implement."
Zero structural rewrites after adopting this. The plan stage is where you catch the expensive mistakes — not after 200 lines are already written.
This is the core of the PDCA loop (Plan → Do → Check → Act): you stay in control of the direction, and the AI handles the execution.
Mistake #2: Letting the AI Review Its Own Code
I asked the same AI session to review auth code it had just written. Response: minor suggestions, nothing structural.
I opened a new session, gave it a "code reviewer" role, and asked it to find bugs. It immediately flagged:
- A missing error handler in the OAuth callback
- Variables and parameters left typed as any
- An unvalidated session check on an API route
Same code, different perspective, completely different results.
The lesson: The AI that wrote the code has the same blind spots it had while writing it. Always use a separate session or role for review — this is the cross-validation pattern, and it's non-negotiable if you care about code quality.
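To make the third finding concrete, here is a minimal sketch of the kind of session guard the reviewer flagged as missing. The Session shape and the requireSession name are illustrative assumptions, not DubbAI's actual code:

```typescript
// Hypothetical session shape — illustrative only, not DubbAI's real model.
interface Session {
  userId: string;
  expiresAt: number; // Unix timestamp in milliseconds
}

// Narrow an untrusted value (e.g. a decoded cookie) to a valid,
// unexpired Session, or return null so the route can respond 401.
function requireSession(raw: unknown): Session | null {
  if (typeof raw !== "object" || raw === null) return null;
  const s = raw as Record<string, unknown>;
  if (typeof s.userId !== "string") return null;
  if (typeof s.expiresAt !== "number") return null;
  if (s.expiresAt <= Date.now()) return null; // reject expired sessions
  return { userId: s.userId, expiresAt: s.expiresAt };
}
```

The unvalidated version trusted the decoded value directly; the fix is simply refusing to handle the request whenever this guard returns null.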
Mistake #3: Babysitting the Test Loop
I had 21 unit/integration tests. Running them manually, reading failures, telling the AI to fix one thing, running again... painfully slow.
What worked:
"Run the test suite. For each failure, fix the implementation — not the test. Re-run until all pass. Don't stop unless you're genuinely stuck."
I came back to all green. Later I added 12 E2E tests with Playwright the same way. 33 tests total, all passing.
Set the success criteria, hand it off, and let the AI own the iteration loop. You don't need to be in the room.
Mistake #4: Sequential When Parallel Was Possible
I was asking for services one at a time — ElevenLabs wrapper, then DeepL wrapper, then the dubbing pipeline. The second service sometimes made assumptions that contradicted the first, which meant going back and reconciling them.
What worked:
"Implement these three services independently, each with their own tests. We'll integrate after."
All three came back clean. Integration was straightforward because each had a tested contract. Independent tasks should always run in parallel — it's faster and surfaces misalignments earlier.
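A "tested contract" in this sense is just an interface each service satisfies on its own. The sketch below shows the idea; every name and type shape here is illustrative, not the actual DubbAI code:

```typescript
// Three independent contracts — each wrapper is built and tested alone.
interface Transcriber {
  transcribe(audio: ArrayBuffer): Promise<string>; // e.g. an STT wrapper
}
interface Translator {
  translate(text: string, targetLang: string): Promise<string>; // e.g. a DeepL-style wrapper
}
interface Dubber {
  dub(text: string): Promise<ArrayBuffer>; // e.g. a TTS + muxing stage
}

// Once each contract is trusted, integration is plain composition.
async function dubPipeline(
  audio: ArrayBuffer,
  targetLang: string,
  stt: Transcriber,
  mt: Translator,
  tts: Dubber,
): Promise<ArrayBuffer> {
  const transcript = await stt.transcribe(audio);
  const translated = await mt.translate(transcript, targetLang);
  return tts.dub(translated);
}
```

Because the pipeline only depends on the interfaces, each service can be swapped for a fake in tests, which is exactly what makes the parallel builds safe to integrate.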
The Thing I Didn't Expect
I asked: "I know MVVM from iOS. What's the equivalent pattern in Next.js?"
Instead of "React is different, forget what you know" — it mapped my existing mental model onto the new stack:
- iOS Service classes → Next.js service layer (lib/services/)
- Network layer → API routes
- ViewModel → Custom hooks
- Combine publishers → State management hooks
That reframing was worth more than any generated code. I wasn't learning Next.js from scratch — I was translating what I already knew.
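As one concrete example of the mapping, here is how a Swift service class with injected dependencies might translate to a lib/services/ module. The /api/translate route and all names below are hypothetical, sketched only to show the shape of the translation:

```typescript
// In Swift: a TranslationService class receiving its client via init.
// In TypeScript: a factory function receiving the dependency as a parameter.
type PostJson = (url: string, body: unknown) => Promise<{ text: string }>;

function makeTranslationService(post: PostJson) {
  return {
    // Equivalent of an `async throws` method on the Swift service class.
    async translate(text: string, targetLang: string): Promise<string> {
      const res = await post("/api/translate", { text, targetLang }); // hypothetical route
      return res.text;
    },
  };
}
```

Dependency injection carries over directly: in tests you pass a fake post function, just as you would inject a mock client into the Swift class.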
And this isn't just for people switching stacks. Even with zero coding background, you can lean on this same approach — describe the outcome you want, ask for the reasoning behind the design, and let the AI meet you at your level. The process works regardless of where you're starting from.
Results
- ⏱ 3 days (vs ~2 weeks estimated)
- ✅ 33 tests passing (21 unit/integration + 12 E2E with Playwright)
- 🔧 Full pipeline: Google OAuth → file upload → STT → translation → TTS → ffmpeg.wasm video muxing → Vercel deployment
- 📂 Source: github.com/kangsw1025/DubbAI
Key Takeaways
- Plan before code — design doc first, implementation second. Catches the expensive mistakes early.
- Cross-validate — never let the AI that wrote the code also review it.
- Set goals, not steps — "fix until all tests pass" beats manually supervising every iteration.
- Parallelize independent work — surfaces integration problems earlier, not later.
- Lead with what you know — asking "what's the Next.js equivalent of MVVM?" gets you further than asking "how do I structure a Next.js app?"
Have you found the separate-session review pattern makes a real difference? Curious whether others are doing this or just using one chat for everything.