Every time you ask an AI to generate a backend, something slightly annoying happens: it writes 200 lines of TypeScript, picks its own folder structure, names things its own way, and somewhere in the middle it hallucinates an import that doesn't exist.
You then spend 20 minutes fixing what should've taken 2.
The thing is, this isn't really the AI's fault. You asked it to do too much.
## What Terraform figured out
Think about how Terraform works.
When you want to provision infrastructure, you don't ask an AI to write raw AWS API calls. That's complex, verbose, and easy to get wrong. Instead, you describe what you want:
```hcl
resource "aws_s3_bucket" "my_bucket" {
  bucket = "my-app-storage"
  acl    = "private"
}
```
And Terraform handles the rest. The AI only needs to learn a small, constrained language. The hard work is handled by the system.
The same idea applies to backend generation. We've just been too slow to notice.
## What we're doing wrong today
The current AI-for-backend workflow looks like this:
```
User → "Build me a REST API for users with JWT and Supabase" → AI → 200 lines of code → you fix the mess
```
The AI is generating the implementation. That's the wrong layer.
It burns tokens on boilerplate. It makes architectural decisions it shouldn't. And every project ends up slightly different because the AI is improvising every single time.
## The better way: a DSL
What if instead, the AI's only job was to write something like this?
```
config {
  db: supabase
  auth: jwt
}

table "users" {
  id: uuid @primary
  email: string @unique
  name: string
  created_at: timestamp @default(now)
}

resource "users" {
  table: users
  endpoints {
    GET /users
    POST /users
    GET /users/:id
    PUT /users/:id
    DELETE /users/:id
  }
}
```
And your system generates the actual TypeScript, the router, the database client, the handlers — everything.
The AI doesn't write code. The AI speaks your language, and your language generates code.
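To make the generator side concrete, here is a minimal sketch of what "your language generates code" could mean: a function that turns a parsed resource definition into Express-style route registrations. The names here (`Resource`, `emitRoutes`, `handlers`) are illustrative assumptions, not Basalt's actual internals.

```typescript
// Hypothetical sketch: turning a parsed DSL resource into route code.
// "Resource" is an assumed intermediate representation, not Basalt's real one.

interface Resource {
  name: string;
  table: string;
  endpoints: string[]; // e.g. "GET /users/:id"
}

function emitRoutes(res: Resource): string {
  return res.endpoints
    .map((ep) => {
      const [method, path] = ep.split(" ");
      // Each endpoint maps to a generated handler, looked up by its spec.
      return `router.${method.toLowerCase()}("${path}", handlers["${ep}"]);`;
    })
    .join("\n");
}

const users: Resource = {
  name: "users",
  table: "users",
  endpoints: ["GET /users", "POST /users", "GET /users/:id"],
};

console.log(emitRoutes(users));
// router.get("/users", handlers["GET /users"]);
// router.post("/users", handlers["POST /users"]);
// router.get("/users/:id", handlers["GET /users/:id"]);
```

The point isn't this particular code; it's that the mapping from DSL to output is deterministic and lives in one place you control.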
## Why this matters
**Consistency.** Every project generated from the same DSL looks the same: same folder structure, same naming conventions, same patterns. No more "why did it do it this way this time?"
**Token efficiency.** 10 lines of DSL versus 200 lines of TypeScript: the AI's job just got roughly 20x cheaper.
**Predictability.** The AI can't hallucinate an import that doesn't exist, because the language is closed and constrained. If the AI writes valid DSL, the output is valid code. Period.
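A closed grammar makes this enforceable mechanically. The sketch below, with assumed names (`ALLOWED_TYPES`, `validateColumn`) and a simplified column syntax, shows the idea: any token outside a fixed vocabulary is rejected before generation, so an invented directive fails fast instead of becoming broken code.

```typescript
// Illustrative validator for a closed column grammar.
// The allowed sets are assumptions for this sketch, not Basalt's real spec.

const ALLOWED_TYPES = new Set(["uuid", "string", "timestamp", "int", "bool"]);
const ALLOWED_ATTRS = new Set(["@primary", "@unique", "@default(now)"]);

// Returns null if the declaration is valid, or an error message otherwise.
function validateColumn(decl: string): string | null {
  // decl looks like: "email: string @unique"
  const [name, rest] = decl.split(":").map((s) => s.trim());
  const [type, ...attrs] = rest.split(/\s+/);
  if (!ALLOWED_TYPES.has(type)) return `unknown type "${type}" in "${name}"`;
  for (const a of attrs) {
    if (!ALLOWED_ATTRS.has(a)) return `unknown attribute "${a}"`;
  }
  return null;
}

console.log(validateColumn("email: string @unique"));   // null (valid)
console.log(validateColumn("email: strng @unique"));    // unknown type "strng" in "email"
console.log(validateColumn("id: uuid @autoincrement")); // unknown attribute "@autoincrement"
```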
**You own the generator.** Want to switch from Express to Hono? Update the generator once. Every project benefits automatically.
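One way to see why the framework swap is a one-line-per-template change: if the framework is just a template choice, nothing about it leaks into the DSL. This sketch (with assumed, simplified templates) illustrates the idea.

```typescript
// Hedged sketch: the target framework as a template map.
// Template strings here are simplified and illustrative.

type Framework = "express" | "hono";

const ROUTE_TEMPLATES: Record<Framework, (m: string, p: string) => string> = {
  // Express mounts routes on a Router instance…
  express: (m, p) => `router.${m}("${p}", handlers["${m.toUpperCase()} ${p}"]);`,
  // …while Hono mounts them on the app directly.
  hono: (m, p) => `app.${m}("${p}", handlers["${m.toUpperCase()} ${p}"]);`,
};

function emitRoute(target: Framework, endpoint: string): string {
  const [method, path] = endpoint.split(" ");
  return ROUTE_TEMPLATES[target](method.toLowerCase(), path);
}

console.log(emitRoute("express", "GET /users")); // router.get("/users", handlers["GET /users"]);
console.log(emitRoute("hono", "GET /users"));    // app.get("/users", handlers["GET /users"]);
```

Swapping frameworks means editing `ROUTE_TEMPLATES`, and every project regenerated afterwards picks it up.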
## Basalt: an early answer to this problem
This is exactly what we're building with Basalt.
Basalt is an open source CLI written in Go. You give it a `.bs` file describing your backend (tables, endpoints, auth) and it generates all the TypeScript you need, ready to run.
1. Ask any AI to read the Basalt spec and write your `main.bs`
2. Run `basalt generate`
3. Run `cd generated && npm install && npm run dev`
That said, Basalt is just getting started. We're at v0.1.0 and the generator is still basic: Express + TypeScript + Supabase, with the same template structure for every project. It doesn't do magic yet.
But the foundation is there, it works, and the direction is clear. Upcoming versions will bring more frameworks, more databases, custom templates, and eventually a `basalt ai` command that takes plain English and generates the `.bs` file for you.
## The bigger picture
We're at an interesting moment with AI and code generation. The first instinct was "just let the AI write everything." But the more mature approach is starting to emerge: design constrained languages that AI can speak fluently, then build systems that compile those languages into real artifacts.
Terraform did this for infrastructure. Prisma did this for database schemas. OpenAPI did this for API contracts.
Backend generation is next — and the pieces are already there. We just need to stop handing the AI a blank canvas and start giving it a grammar.
What do you think? Is this a pattern you'd use? Drop a comment — would love to hear how others are thinking about this.