xin wan

The AI Coding Trap: Refactoring my Next.js SaaS with OpenSpec (and why I ditched spec-kit)

In 2025, I heavily relied on AI coding assistants to build the MVP of my SaaS, Printable Handwriting—a platform featuring 5 custom worksheet generators and an AI handwriting analysis tool.

It shipped, but beneath the surface, the codebase was a nightmare. Here is the story of my AI-generated technical debt, the specific hallucinations that drove me crazy, and why I ultimately chose openspec over spec-kit to dig myself out.
The Problem with the AI-Generated MVP

The biggest nightmare occurred in the core worksheet rendering module. The AI completely failed to grasp a fundamental architectural concept: the frontend preview area and the final PDF generator must share the exact same rendering engine.

Instead, the AI hallucinated two entirely separate logic paths. What users saw on the screen during the preview never perfectly matched the PDF they downloaded. I spent hours wrestling with the AI, tweaking prompts endlessly, trying to force it to synchronize the two. But it kept spinning in circles, patching bad logic with worse logic. It simply couldn't find the right direction within that single context window.
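To make the failure concrete, here is a hypothetical reconstruction of the kind of drift involved (illustrative TypeScript, not my actual code): the two paths happen to agree at the default font size, so everything looks fine in casual testing, then diverge the moment a user changes a setting.

```typescript
type LineSpec = { fontSizePt: number };

// Preview path: the AI used a 1.5x line-height heuristic.
function previewLineHeight(spec: LineSpec): number {
  return spec.fontSizePt * 1.5;
}

// PDF path: the AI hard-coded a fixed 12pt gap instead.
function pdfLineHeight(spec: LineSpec): number {
  return spec.fontSizePt + 12;
}

// At the default 24pt the two agree purely by coincidence...
console.log(previewLineHeight({ fontSizePt: 24 })); // 36
console.log(pdfLineHeight({ fontSizePt: 24 }));     // 36

// ...but at 32pt the preview and the downloaded PDF drift apart.
console.log(previewLineHeight({ fontSizePt: 32 })); // 48
console.log(pdfLineHeight({ fontSizePt: 32 }));     // 44
```

Prompting the AI to "make them match" only ever patched one path at a time, because nothing in the code forced the two heuristics to be the same function.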
The "Multi-Session" Assembly Hack

I finally realized I had to step back and act as the architect. I couldn't let the AI handle the entire scope at once.

I opened a brand-new chat session and instructed the AI to do exactly one thing: encapsulate the core rendering logic in a pure, isolated module. Once that raw rendering engine was built, I opened another fresh session dedicated solely to assembly and state management.

It worked. The preview and the PDF output finally aligned perfectly across all my tools (Lines, Alphabet, Print, Cursive, and Name for signatures). There was a massive catch, though: the assembled code achieved the outcome I wanted, but its readability was terrible. It was dense, tangled, and unmaintainable for future feature iterations.
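The pattern that made the alignment stick is worth spelling out. All the names below are my own hypothetical sketch of the idea, not the actual modules: a pure layout function produces a list of drawing operations, and both the preview and the PDF exporter are thin consumers of that same list, so they physically cannot disagree.

```typescript
type WorksheetSpec = {
  kind: "lines" | "alphabet" | "print" | "cursive" | "name";
  text: string;
  fontSizePt: number;
  lineCount: number;
};

// A drawing operation: the only vocabulary either renderer understands.
type RenderOp =
  | { op: "rule"; y: number; width: number }
  | { op: "glyphs"; y: number; text: string; sizePt: number };

// Pure core: no DOM, no PDF library, no side effects. Every layout
// decision (spacing, positions) lives here and ONLY here.
function layoutWorksheet(spec: WorksheetSpec): RenderOp[] {
  const lineHeight = spec.fontSizePt * 1.5;
  const ops: RenderOp[] = [];
  for (let i = 0; i < spec.lineCount; i++) {
    const y = (i + 1) * lineHeight;
    ops.push({ op: "rule", y, width: 595 }); // 595pt ≈ A4 width
    ops.push({ op: "glyphs", y, text: spec.text, sizePt: spec.fontSizePt });
  }
  return ops;
}

// Thin consumer #1: the on-screen preview (here, an SVG string).
function toPreviewSvg(ops: RenderOp[]): string {
  return ops
    .map((o) =>
      o.op === "rule"
        ? `<line x1="0" y1="${o.y}" x2="${o.width}" y2="${o.y}" />`
        : `<text y="${o.y}" font-size="${o.sizePt}">${o.text}</text>`
    )
    .join("\n");
}

// Thin consumer #2, the PDF exporter, walks the exact same ops with a
// PDF library instead of SVG tags and makes no layout decisions of its own.
```

Because spacing is computed once in `layoutWorksheet`, any change there shows up identically in the preview and the PDF, and each of the five generators only has to produce a `WorksheetSpec`.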
The Refactoring Journey: spec-kit vs. openspec

I needed to properly refactor this Next.js app to ensure long-term stability.

Initially, I reached for spec-kit. It has great concepts, but as I dove in, I quickly realized a critical limitation: spec-kit feels heavily optimized for scaffolding brand-new projects from scratch. For a legacy, AI-tangled codebase that needed gradual iteration and untangling, it felt like forcing a square peg into a round hole.

I then explored openspec, and it was a game-changer. Its approach to engineering felt much more aligned with the messy reality of refactoring an existing project. It provided the structural rigor I needed to iteratively untangle the AI's spaghetti code, modularize my 5 different generators, and maintain the site's uptime without requiring a "burn-it-all-down" complete rewrite from day one.
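For context, openspec's workflow revolves around small markdown change proposals checked into the repo alongside specs of current behavior. The layout below is roughly what scaffolding produces; I'm reproducing it from memory, so treat the exact file and folder names as an assumption and check the openspec docs for the current shape:

```
openspec/
  project.md              # project conventions the AI reads every session
  specs/                  # current behavior, one folder per capability
    worksheet-rendering/  # (hypothetical capability name)
      spec.md
  changes/                # one folder per in-flight change proposal
    unify-renderer/       # (hypothetical change name)
      proposal.md         # why and what
      tasks.md            # checklist the AI works through
      design.md           # optional technical decisions
```

The part that mattered for refactoring: the specs describe the code as it already is, and each change is a small, reviewable delta against them, which is exactly the gradual-iteration shape a tangled legacy codebase needs.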
Conclusion

Relying blindly on AI to architect a complex UI rendering pipeline is a trap. AI is a fantastic typist, but you still need to enforce strict engineering boundaries.

The refactor with openspec finally gave me the stable, scalable foundation I needed to confidently launch my core feature: an AI tool that actually analyzes users' handwriting styles and generates personalized training plans based on their specific needs.

If you're curious to see how the unified rendering engine performs in production, or want to test the AI handwriting analysis to improve your own writing, check out printablehandwriting.com.

Has anyone else experienced the "renderer split" hallucination with AI coding tools? Let me know your refactoring stories in the comments!
