What Claude Got Wrong While Building My Next.js Blog
I recently built a blogging platform using Claude as my primary AI assistant.
Overall? It was impressive.
But it wasn’t perfect — and the imperfections were interesting.
Here’s what went wrong.
1. Version Awareness Issues
Modern JavaScript stacks move fast.
During development, I noticed:
- Confusion around the Next.js middleware → proxy shift
- Patterns that didn’t fully match Prisma v7
- Inconsistent alignment with NextAuth v5
None of this broke the project.
But it highlighted something important:
AI doesn’t automatically understand the current state of the ecosystem.
If you’re using bleeding-edge versions, you still need to guide it carefully.
2. SEO Was Almost Ignored
This was a blogging platform.
Yet:
- No proper generateStaticParams setup, so post pages weren't pre-rendered
- Weak metadata (titles, descriptions, Open Graph tags)
- No clear SEO-first structure
The UI looked good.
The routing worked.
The app ran fine.
But discoverability wasn’t prioritized.
AI optimized what was visible.
Not what was strategically important.
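For an app-router blog, the missing pieces are small. Here's a minimal sketch of the two exports a post route (something like app/blog/[slug]/page.tsx) needs for SEO — the `posts` array is a hypothetical stand-in for whatever data layer your blog actually uses, and in a real file you'd type the metadata return as `Metadata` from "next":

```typescript
// Sketch only: `posts` is a hypothetical stand-in for a real data layer.
type Post = { slug: string; title: string; excerpt: string };

const posts: Post[] = [
  { slug: "hello-world", title: "Hello World", excerpt: "First post." },
  { slug: "second-post", title: "Second Post", excerpt: "More thoughts." },
];

// Tells Next.js which slugs to pre-render at build time
// instead of rendering every post on demand.
export async function generateStaticParams() {
  return posts.map((post) => ({ slug: post.slug }));
}

// Gives each post its own <title>, description, and OG tags,
// rather than one generic set for the whole site.
export async function generateMetadata({
  params,
}: {
  params: { slug: string };
}) {
  const post = posts.find((p) => p.slug === params.slug);
  return {
    title: post?.title ?? "Not found",
    description: post?.excerpt,
    openGraph: { title: post?.title, description: post?.excerpt },
  };
}
```

Neither export is exotic. They just weren't prioritized — which is exactly the point.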
3. The JS Bundle Was Larger Than Necessary
The generated code produced a heavier-than-ideal bundle.
Interestingly, performance still felt fast.
But feeling fast and being optimized aren’t the same thing.
Bundle discipline still requires intentional review.
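One low-effort way to make that review a habit: wire up a bundle analyzer so the bundle is something you look at, not something you feel. A minimal next.config.ts sketch, assuming the @next/bundle-analyzer package is installed:

```typescript
// next.config.ts — sketch; assumes @next/bundle-analyzer is installed.
import bundleAnalyzer from "@next/bundle-analyzer";

// Only generate the report when explicitly asked for,
// so normal builds stay untouched.
const withBundleAnalyzer = bundleAnalyzer({
  enabled: process.env.ANALYZE === "true",
});

export default withBundleAnalyzer({
  // ...your existing Next.js config
});
```

Then `ANALYZE=true next build` produces an interactive treemap of what's actually shipping to the client.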
4. What It Did Surprisingly Well
To be fair, Claude handled changes cleanly.
When I asked it to:
- Refactor
- Add features
- Modify specific components
It rarely broke unrelated parts of the system.
That structural continuity was genuinely impressive.
What I Learned
AI is very good at:
- Local reasoning
- Feature iteration
- Maintaining structural consistency
It’s weaker at:
- Ecosystem nuance
- Strategic prioritization (like SEO for a blog)
- Long-term architectural thinking
AI accelerates execution.
But it doesn’t replace engineering judgment.
And honestly, that’s fine.
The real skill isn’t prompting.
It’s knowing what matters.