<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Marble</title>
    <description>The latest articles on DEV Community by Marble (@_45970c3dfcf5c81b3da29).</description>
    <link>https://dev.to/_45970c3dfcf5c81b3da29</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3824257%2Fdc17b3f6-7696-4daf-a319-f3e6d4887927.png</url>
      <title>DEV Community: Marble</title>
      <link>https://dev.to/_45970c3dfcf5c81b3da29</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/_45970c3dfcf5c81b3da29"/>
    <language>en</language>
    <item>
      <title>I'm a Swift Dev Who Shipped a Full-Stack Next.js App in 3 Days with AI Agents — Here's What Actually Worked</title>
      <dc:creator>Marble</dc:creator>
      <pubDate>Sat, 14 Mar 2026 16:51:30 +0000</pubDate>
      <link>https://dev.to/_45970c3dfcf5c81b3da29/im-a-swift-dev-who-shipped-a-full-stack-nextjs-app-in-3-days-with-ai-agents-heres-what-4hb0</link>
      <guid>https://dev.to/_45970c3dfcf5c81b3da29/im-a-swift-dev-who-shipped-a-full-stack-nextjs-app-in-3-days-with-ai-agents-heres-what-4hb0</guid>
      <description>&lt;p&gt;I'm a Swift/iOS developer. I know MVVM, async/await, dependency injection. I don't know Next.js, React, or how to wire together ElevenLabs STT + DeepL translation + ffmpeg in the browser.&lt;/p&gt;

&lt;p&gt;Rather than spending 2 weeks learning the stack, I went all-in on AI coding agents (Claude Code) and treated it like a real architecture problem — not just "write me some code."&lt;/p&gt;

&lt;p&gt;The result: DubbAI, a web service that takes audio or video, transcribes it, translates it, and dubs it into the target language. 3 days. 33 tests passing. Zero Next.js experience at the start.&lt;/p&gt;

&lt;p&gt;Here's what I learned, including the parts that didn't work.&lt;/p&gt;




&lt;p&gt;Mistake #1: Asking for Code Before Having a Plan&lt;/p&gt;

&lt;p&gt;My first move was asking the AI to implement Google OAuth + a Turso database schema. It produced working code, but the pieces didn't fit together — the auth flow assumed a session model that didn't match my DB schema, and I had to restructure the integration points.&lt;/p&gt;

&lt;p&gt;What actually worked:&lt;br&gt;
Ask for a design doc first. Architecture, data flow, error handling strategy, API contracts. Review it yourself, poke holes in it, then say "implement this."&lt;/p&gt;

&lt;blockquote&gt;&lt;p&gt;Don't write code. Design the architecture for [feature]. I'll review it before we implement.&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;Zero structural rewrites after adopting this. The plan stage is where you catch the expensive mistakes — not after 200 lines are already written.&lt;/p&gt;

&lt;p&gt;This is the core of the PDCA loop (Plan → Do → Check → Act): you stay in control of the direction, and the AI handles the execution.&lt;/p&gt;




&lt;p&gt;Mistake #2: Letting the AI Review Its Own Code&lt;/p&gt;

&lt;p&gt;I asked the same AI session to review auth code it had just written. Response: minor suggestions, nothing structural.&lt;/p&gt;

&lt;p&gt;I opened a new session, gave it a "code reviewer" role, and asked it to find bugs. It immediately flagged:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A missing error handler in the OAuth callback&lt;/li&gt;
&lt;li&gt;Variables and parameters typed as &lt;code&gt;any&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;An unvalidated session check on an API route&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Same code, different perspective, completely different results.&lt;/p&gt;

&lt;p&gt;The lesson: The AI that wrote the code has the same blind spots it had while writing it. Always use a separate session or role for review — this is the cross-validation pattern, and it's non-negotiable if you care about code quality.&lt;/p&gt;
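&lt;p&gt;To make the third finding concrete, here's a minimal sketch of a validated session check on an API route. This is illustrative only: &lt;code&gt;getSessionUser&lt;/code&gt; and the in-memory session table are hypothetical stand-ins, not the actual DubbAI code.&lt;/p&gt;

```typescript
// Minimal sketch: reject the request before running any route logic.
// getSessionUser and the sessions table are illustrative stand-ins.

type User = { id: string };

// Pretend session store: token to user.
const sessions: { [token: string]: User } = { "tok-1": { id: "u1" } };

function getSessionUser(token: string): User | null {
  return sessions[token] ?? null;
}

// The reviewer's complaint was a handler that skipped this guard.
function handleJobsRequest(token: string) {
  const user = getSessionUser(token);
  if (user === null) {
    return { status: 401, body: { error: "Unauthorized" } };
  }
  return { status: 200, body: { userId: user.id, jobs: [] } };
}
```

&lt;p&gt;The shape is what matters: resolve the session first, fail early with a 401, and only then touch the route's real work.&lt;/p&gt;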




&lt;p&gt;Mistake #3: Babysitting the Test Loop&lt;/p&gt;

&lt;p&gt;I had 21 unit/integration tests. Running them manually, reading failures, telling the AI to fix one thing, running again... painfully slow.&lt;/p&gt;

&lt;p&gt;What worked:&lt;/p&gt;

&lt;blockquote&gt;&lt;p&gt;Run the test suite. For each failure, fix the implementation — not the test. Re-run until all pass. Don't stop unless you're genuinely stuck.&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;I came back to all green. Later I added 12 E2E tests with Playwright the same way. 33 tests total, all passing.&lt;/p&gt;

&lt;p&gt;Set the success criteria, hand it off, and let the AI own the iteration loop. You don't need to be in the room.&lt;/p&gt;




&lt;p&gt;Mistake #4: Sequential When Parallel Was Possible&lt;/p&gt;

&lt;p&gt;I was asking for services one at a time — ElevenLabs wrapper, then DeepL wrapper, then the dubbing pipeline. The second service sometimes made assumptions that contradicted the first, which meant going back and reconciling them.&lt;/p&gt;

&lt;p&gt;What worked:&lt;/p&gt;

&lt;blockquote&gt;&lt;p&gt;Implement these three services independently, each with their own tests. We'll integrate after.&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;All three came back clean. Integration was straightforward because each had a tested contract. Independent tasks should always run in parallel — it's faster and surfaces misalignments earlier.&lt;/p&gt;
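&lt;p&gt;A sketch of what "each with their own tests, integrate after" buys you: three hypothetical contracts and a pipeline that only composes them. The interface names and stubs are mine, standing in for the real ElevenLabs and DeepL wrappers, not DubbAI's actual API.&lt;/p&gt;

```typescript
// Hypothetical contracts for the three services; not DubbAI's real API.
interface Transcriber { transcribe(audio: string): string; }
interface Translator { translate(text: string, targetLang: string): string; }
interface Synthesizer { speak(text: string): string; }

// The pipeline depends only on the contracts, so each service can be
// built and tested in parallel and swapped without touching this code.
function dub(
  stt: Transcriber,
  mt: Translator,
  tts: Synthesizer,
  audio: string,
  targetLang: string,
): string {
  const transcript = stt.transcribe(audio);
  const translated = mt.translate(transcript, targetLang);
  return tts.speak(translated);
}

// Cheap stubs make the integration step trivial to reason about.
const stubStt: Transcriber = { transcribe: (a) => `text(${a})` };
const stubMt: Translator = { translate: (t, l) => `${l}:${t}` };
const stubTts: Synthesizer = { speak: (t) => `audio(${t})` };
```

&lt;p&gt;Because each stub honors the same contract as the real service, the pipeline test doesn't change when you swap the real implementations in.&lt;/p&gt;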




&lt;p&gt;The Thing I Didn't Expect&lt;/p&gt;

&lt;p&gt;I asked: "I know MVVM from iOS. What's the equivalent pattern in Next.js?"&lt;/p&gt;

&lt;p&gt;Instead of "React is different, forget what you know" — it mapped my existing mental model onto the new stack:&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;iOS (Swift)&lt;/th&gt;&lt;th&gt;Next.js&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Service classes&lt;/td&gt;&lt;td&gt;Service layer (lib/services/)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Network layer&lt;/td&gt;&lt;td&gt;API routes&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;ViewModel&lt;/td&gt;&lt;td&gt;Custom hooks&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Combine publishers&lt;/td&gt;&lt;td&gt;State management hooks&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;That reframing was worth more than any generated code. I wasn't learning Next.js from scratch — I was translating what I already knew.&lt;/p&gt;
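&lt;p&gt;As a toy version of that mapping (names are hypothetical): keep the state transitions in a pure function, which is the role a ViewModel plays on iOS, and let a custom hook merely wrap it.&lt;/p&gt;

```typescript
// Toy sketch of the ViewModel-to-hook translation. The pure reducer
// owns the state logic; a hypothetical useDubJob hook would just wrap
// it with React's useReducer.

type JobState = { status: "idle" | "uploading" | "done"; progress: number };

function jobReducer(state: JobState, event: string): JobState {
  switch (event) {
    case "upload":
      return { status: "uploading", progress: 0 };
    case "finish":
      return { status: "done", progress: 100 };
    default:
      return state;
  }
}

// In a component: const [state, dispatch] = useReducer(jobReducer, initial);
// which is the same separation of concerns MVVM gives you in Swift.
```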

&lt;p&gt;And this isn't just for people switching stacks. Even with zero coding background, you can lean on this same approach — describe the outcome you want, ask for the reasoning behind the design, and let the AI meet you at your level. The process works regardless of where you're starting from.&lt;/p&gt;




&lt;p&gt;Results&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;⏱ 3 days (vs ~2 weeks estimated)&lt;/li&gt;
&lt;li&gt;✅ 33 tests passing (21 unit/integration + 12 E2E with Playwright)&lt;/li&gt;
&lt;li&gt;🔧 Full pipeline: Google OAuth → file upload → STT → translation → TTS → ffmpeg.wasm video muxing → Vercel deployment&lt;/li&gt;
&lt;li&gt;📂 Source: github.com/kangsw1025/DubbAI&lt;br&gt;
If you find this helpful or relatable, a ⭐ on GitHub would mean a lot!&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Key Takeaways&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Plan before code — design doc first, implementation second. Catches the expensive mistakes early.&lt;/li&gt;
&lt;li&gt;Cross-validate — never let the AI that wrote the code also review it.&lt;/li&gt;
&lt;li&gt;Set goals, not steps — "fix until all tests pass" beats manually supervising every iteration.&lt;/li&gt;
&lt;li&gt;Parallelize independent work — surfaces integration problems earlier, not later.&lt;/li&gt;
&lt;li&gt;Lead with what you know — asking "what's the Next.js equivalent of MVVM?" gets you further than asking "how do I structure a Next.js app?"&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Have you found the separate-session review pattern makes a real difference? Curious whether others are doing this or just using one chat for everything.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>productivity</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
