Why I Let AI Choose My Technologies
The philosophy is clear: programming by coercion. Make the machine enforce quality. But which machine? Which stack?
Here's a confession: I didn't pick this stack because it's the best. I picked it because AI knows it by heart.
Python. FastAPI. TypeScript. React. PostgreSQL. Nothing exciting. Nothing cutting-edge. Nothing that will impress anyone at a conference.
That's the point.
The Training Data Advantage
AI generates code based on training data. More examples = better output. Simple as that.
You can use that fancy new Rust framework with 200 GitHub stars. AI will hallucinate half the API. You'll spend your evening fixing AI mistakes instead of watching your show.
Or you can use technologies with millions of examples in the training data. AI gets it right the first time. You ship faster.
I chose option two. My ego will recover.
Server Side
Python
Every AI coding benchmark uses Python. HumanEval, MBPP, SWE-bench—all Python. Coincidence? No. AI understands Python better than any other language.
```python
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession

async def get_user(user_id: int, db: AsyncSession) -> User | None:
    stmt = select(User).where(User.id == user_id)
    result = await db.execute(stmt)
    return result.scalar_one_or_none()
```
AI generates this correctly every time. Try the same in Scala, Go or Rust. Good luck.
FastAPI
Flask is fine. Django is fine. But FastAPI has something they don't: types everywhere and OpenAPI out of the box.
```python
@router.post("/users", response_model=UserResponse)
async def create_user(
    user: UserCreate,
    db: AsyncSession = Depends(get_db),
) -> User:
    ...
```
AI reads this signature and knows exactly what to generate. Input types, output types, dependency injection—all explicit. No guessing.
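The request and response models behind a signature like that are plain Pydantic classes. A minimal sketch, with field names of my own invention rather than the article's actual models:

```python
from pydantic import BaseModel, ValidationError

class UserCreate(BaseModel):
    name: str
    email: str

class UserResponse(BaseModel):
    id: int
    name: str
    email: str

# Bad input never reaches your handler -- FastAPI turns this into a 422
try:
    UserCreate(name="Ada", email=123.45)  # a float is not a str
except ValidationError:
    print("rejected at the boundary")
```

The same classes drive validation, serialization, and the OpenAPI schema, which is exactly why the signature is enough for AI to work from.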
SQLAlchemy + PostgreSQL
"Just write raw SQL, it's simpler."
Sure. And AI will generate SQL injection vulnerabilities, wrong column names, and type mismatches. I've seen it. Multiple times. In one afternoon.
SQLAlchemy gives AI structure:
```python
stmt = select(User).where(User.id == user_id)
```
AI can't accidentally concatenate user input. The ORM pattern is type-safe by design.
I naturally chose relational databases—they enforce typing by design.
PostgreSQL because it's the industry standard: mature, stable, perfect migrations, unbelievable backward compatibility.
MySQL/MariaDB could work, but I prefer real open source without Oracle's shadow. And I'm still unable to rename a database without voodoo file manipulation—am I the only one shocked by this?
NoSQL with MongoDB or Neo4j looks cool, but I'll stick with boring PostgreSQL for type enforcement. AI has seen millions of examples, and it will keep running seamlessly for years.
uv
This one's not about AI. It's about sanity.
```shell
uv sync       # 2 seconds instead of 45
uv run pytest
```
10-100x faster than pip. Deterministic builds. I'm not interested in watching never-ending package installations, even with Netflix on. I switched and never looked back.
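The whole setup fits in a `pyproject.toml`; `uv lock` then pins everything in a `uv.lock` file, which is where the deterministic builds come from. A sketch with made-up project and dependency names:

```toml
[project]
name = "myapp"                # hypothetical project name
requires-python = ">=3.12"
dependencies = [
    "fastapi",
    "sqlalchemy[asyncio]",
    "asyncpg",
]

[dependency-groups]
dev = ["pytest", "ruff"]
```

`uv sync` reads the lockfile and reproduces the exact same environment on every machine, including CI.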
Async
With 10 sync workers, a 200 ms request caps you at ~50 RPS (25 RPS at 400 ms) because each worker naps while Postgres thinks.
With async, the same setup can handle ~500 RPS (250 RPS at 400 ms) by multitasking instead of staring at the wall.
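The arithmetic is easy to demonstrate: ten overlapping 200 ms "database calls" finish in roughly 200 ms total, not two seconds, because the event loop switches tasks while each one waits. The sleep below is a stand-in for a real Postgres round-trip:

```python
import asyncio
import time

async def fake_db_call() -> None:
    await asyncio.sleep(0.2)  # stand-in for a 200 ms Postgres round-trip

async def main() -> float:
    start = time.perf_counter()
    # Run 10 "requests" concurrently on one event loop
    await asyncio.gather(*(fake_db_call() for _ in range(10)))
    return time.perf_counter() - start

elapsed = asyncio.run(main())
print(f"10 concurrent 200 ms calls: {elapsed:.2f}s")  # ~0.2s, not 2s
```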
Python async used to be a footgun. AI would forget await constantly.
Now we have Ruff with async rules. AI still forgets await. The linter catches it. Problem solved.
```python
# AI writes this (wrong)
result = db.execute(stmt)

# Ruff screams, AI fixes it
result = await db.execute(stmt)
```
This is programming by coercion in action.
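Turning on the async rule family is a few lines in `pyproject.toml`. The selection below is my own, not the article's config; check the Ruff docs for the codes your version supports:

```toml
[tool.ruff.lint]
# ASYNC = flake8-async rules; RUF = Ruff-specific checks
# (e.g. RUF006 flags dangling asyncio tasks)
select = ["E", "F", "ASYNC", "RUF"]
```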
The Frontend That Just Works
TypeScript (Strict Mode)
JavaScript has no types. AI doesn't know what functions expect. Refactoring is prayer-based.
TypeScript strict mode forces AI to be explicit:
```typescript
function processUser(user: User): string {
  return user.name.toUpperCase();
}
```
AI knows the input. AI knows the output. AI generates correct code.
I use strict: true and noUncheckedIndexedAccess. Yes, it's annoying sometimes. That's the point.
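In `tsconfig.json` that's just two compiler options (everything else omitted here):

```json
{
  "compilerOptions": {
    "strict": true,
    "noUncheckedIndexedAccess": true
  }
}
```

With `noUncheckedIndexedAccess`, every `arr[i]` is typed `T | undefined`, so AI is forced to handle the missing case instead of assuming it away.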
React
Vue is great. Svelte is great. But AI has seen more React code than everything else combined.
```typescript
const [user, setUser] = useState<User | null>(null);
const { data, isLoading } = useQuery(['user', id], fetchUser);
```
Standard patterns. Predictable hooks. AI generates this in its sleep.
The Infrastructure Rock
Docker
"Works on my machine" is not a deployment strategy.
Docker makes environments reproducible. AI knows Dockerfile patterns. Everyone wins.
CI/CD: The Rule of Power
YAML-based pipelines. Well-documented. AI generates correct CI configs.
More importantly: this is where the coercion happens. Every check, every gate, every "you cannot merge this". One file to rule them all.
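A minimal sketch of what that one file can look like. Job names, images, and commands here are illustrative, not the article's actual pipeline:

```yaml
stages: [lint, test]

lint:
  stage: lint
  image: python:3.12
  script:
    - pip install uv
    - uv run ruff check .     # the linter gate: nothing merges past this

test:
  stage: test
  image: python:3.12
  script:
    - pip install uv
    - uv sync
    - uv run pytest           # the test gate
```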
Monorepo: One Home for Everything
```
/
├── backend/          # FastAPI
├── frontend/         # React
├── infra/            # Helm, k8s
└── .gitlab-ci.yml    # The gatekeeper
```
Backend, frontend, infra—same repo. One clone. One branch. One PR.
"But separate repos are cleaner!" Sure. And now AI needs to:
- Clone three repos
- Keep them in sync
- Make coordinated changes across repos
- Hope the CI in repo A passes before repo B deploys
Good luck with that.
With a monorepo, AI sees everything. Change the API schema? AI updates the backend endpoint, the frontend types, and the OpenAPI spec. One commit. One pipeline. All checks run together.
The pipeline enforces consistency. Frontend types don't match backend? CI fails. Database migration missing? CI fails. Contract broken? CI fails. You can't ship half a feature.
Separate repos can't do this. You'd need cross-repo CI triggers, version pinning, deployment coordination. Complexity for complexity's sake.
AI works in one context. The pipeline validates one state. Ship with confidence.
The Humble Conclusion
I could have picked Rust—efficient memory management, blazing fast, amazing type system, solid async.
But AI struggles with Rust: fewer training examples, less familiar patterns, even outright syntax errors.
So I use Python. And FastAPI. And all the boring stuff.
My side projects ship. My evenings are free. The stack is unremarkable.
That's the whole point.
Next up: The Linter That Yells — First line of defense. Code that compiles isn't code that works.