DEV Community

Sangio
Are CMSes becoming pointless now that AI can just code websites? I built one anyway — here's my reasoning.

I've been seeing the "just let AI code it" sentiment a lot lately, and honestly, it crossed my mind too. AI is getting scary good at generating full pages. So why would anyone use a CMS?

Here's where I landed after thinking about it for months (and building something to test the idea):

The problem with "just let AI code it": You get a page, sure. But then you want to change the font, restructure a section, translate it, deploy it somewhere else. You're back to prompting, re-generating, copy-pasting, merging. Every change is a full round-trip through the AI. It's like having a brilliant architect who redraws the entire blueprint every time you want to move a door.

Now, I'll be honest: this problem is shrinking fast. When I started building in November, agentic AI was clunky — you'd lose context, get inconsistent outputs, waste tokens on back-and-forth. Today, with models like Claude Opus, the "just let it code everything" approach is genuinely viable for more and more cases. The architect is getting better at remembering the blueprint.

So why still bother with structure? Because even the best architect is faster when you hand them a clear brief instead of saying "make something nice." Structured workflows use fewer tokens, produce more predictable results, and give you a system that works after the AI is done — a visual editor, consistent builds, export/import. The question isn't "can AI do it without structure?" anymore. It's "is the structured approach still worth the trade-off in control and efficiency?"

That's what I've been building since November. I started by splitting the project into commands (web creation, build & deploy, user and project management). Then I added "workflows" — batches of commands that accomplish predefined build tasks. From there I created specialized workflows that explain to the AI how the commands work and what format to respond in, so the AI itself returns a sequence of commands. That keeps the AI very focused — one narrow direction, no freestyling. And because the AI builds within a structure the project defines, a visual editor can offer easy manual changes afterward.
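To make the idea concrete, here's a minimal sketch of what "the AI returns a sequence of commands" could look like. This is illustrative only — the command names, the JSON format, and `run_workflow` are my assumptions for the example, not the project's actual API:

```python
import json

# Registry of narrow, single-purpose commands the AI is allowed to use.
# (Hypothetical commands, standing in for web creation / build & deploy.)
COMMANDS = {
    "create_page": lambda args: f"created page '{args['slug']}'",
    "set_font": lambda args: f"set font to {args['family']}",
    "deploy": lambda args: f"deployed to {args['target']}",
}

def run_workflow(ai_response: str) -> list[str]:
    """Parse the AI's JSON reply into commands and execute only known ones."""
    steps = json.loads(ai_response)
    results = []
    for step in steps:
        name = step.get("command")
        if name not in COMMANDS:
            # The AI tried to freestyle outside the defined structure.
            raise ValueError(f"unknown command: {name!r}")
        results.append(COMMANDS[name](step.get("args", {})))
    return results

# The kind of constrained reply the workflow prompt asks the AI to produce.
reply = json.dumps([
    {"command": "create_page", "args": {"slug": "about"}},
    {"command": "set_font", "args": {"family": "Inter"}},
])
print(run_workflow(reply))
```

The point of the pattern is that anything outside the command registry fails loudly instead of silently ending up in the build — which is also what makes a visual editor possible afterward, since every change goes through the same structured commands.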

Some honest notes: I've been coding since I was 15 (I'm 33 now), but coding with AI in an agentic way — that's new. I started using GitHub Copilot in November 2025 and it changed how I work. Before that, sure, copy-pasting from ChatGPT like everyone else. But the agentic workflow is a different beast. I tested GPT, Gemini, and Claude. Landed on Claude — Opus specifically blew me away for structured output. Which loops back to my opening question: if AI keeps getting this good, does a structured approach still make sense, or will raw generation just outpace it?

I'm not here to drop a link (happy to share if anyone's curious). I'm genuinely wondering: does this reasoning hold up? Is "structured AI direction" worth pursuing, or is raw AI code generation going to make it irrelevant?

Transparency note: I wrote the ideas, AI helped me structure and write this post. I'm better at building things than writing Reddit posts — figured I'd use AI for what it's good at.

Top comments (1)

Apex Stack

Your reasoning absolutely holds up, and I think the answer becomes obvious once you hit scale. I run a financial data site with thousands of programmatically generated pages across 12 languages, built on Astro (static site generator). There's zero chance raw AI generation could manage that — every page follows a strict template structure with consistent schema markup, internal linking patterns, and multilingual routing. The structure IS the product.

The "brilliant architect who redraws the blueprint every time you move a door" metaphor is perfect. I've experienced exactly this when I tried using AI to make one-off content changes — it works for a single page, but the moment you need consistency across thousands of pages, you need the structured layer underneath. The AI becomes dramatically more useful when you constrain it to operate within defined templates and workflows rather than letting it freestyle.

Your command + workflow approach sounds a lot like what I ended up building too — specialized AI tasks that each do one focused thing (generate analysis text, validate financial data, translate content) rather than one mega-prompt that tries to do everything. The token efficiency alone is worth the architectural overhead, but the real win is predictability. When you have a CI/CD pipeline deploying pages to production, you can't afford the AI to get creative with output formats.
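That "can't afford the AI to get creative with output formats" point can be sketched as a simple gate in front of the pipeline. Everything here is hypothetical (the field names and `validate_ai_output` are made up for illustration, not Apex Stack's actual setup):

```python
# Expected shape of one generated page; anything else is rejected before
# it can reach the CI/CD pipeline.
REQUIRED_FIELDS = {"title": str, "body": str, "lang": str}

def validate_ai_output(payload: dict) -> dict:
    """Fail fast if the AI drifted from the agreed output format."""
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(payload.get(field), expected_type):
            raise ValueError(f"bad or missing field: {field}")
    return payload
```

With thousands of templated pages, a check like this is what turns "the AI got creative once" from a production incident into a rejected build.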

I'd be curious to hear more about the visual editor piece — that's the part where the structured approach has a clear edge that raw generation can't touch.