Pascal CESCATO

Symfony AI: When a School Bus Painted as a Rocket Pretends to Go to Orbit

*Technical satire*

You'd almost want to believe it. One fine morning, Symfony announces its "AI" module, and the whole ecosystem shivers as if the framework had just discovered quantum gravity. But very quickly, scratching beneath the polish, you realize you're not witnessing a technological revolution... but a makeover operation.

A school bus repainted white, decorated with three NASA stickers, and presented as a space shuttle.

Welcome to "Symfony AI," or the subtle art of pretending to be modern.


1. AI Integration, Cosplay Style: Fake Chic on Real Emptiness

The AI component offers a ChatModelInterface that is perfectly DI-friendly, perfectly Symfony. But behind it, what's actually there? A nicely wrapped HTTP request, plus an object instantiation to make you believe magic is happening.

No serious streaming, no parallelism, no fine-grained token handling at high throughput. Just a layer of architectural polish that turns a simple API call into a sacred ritual.

It's technical cosplay: you dress up as an astronaut, but you stay in the backyard.
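To make the point concrete, here is a minimal sketch of what such a wrapper boils down to. All names are hypothetical illustrations, not Symfony AI's actual API; the real component has more moving parts, but the core is the same: one blocking HTTP POST dressed up in a class.

```python
import json
import urllib.request

class ChatModel:
    """Hypothetical sketch (illustrative names, not Symfony AI's real API):
    a 'chat model' that is, underneath, a single blocking HTTP POST."""

    def __init__(self, api_key: str, model: str = "gpt-4o-mini"):
        self.api_key = api_key
        self.model = model

    def call(self, prompt: str) -> str:
        req = urllib.request.Request(
            "https://api.openai.com/v1/chat/completions",
            data=json.dumps({
                "model": self.model,
                "messages": [{"role": "user", "content": prompt}],
            }).encode(),
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json",
            },
        )
        # This line blocks the whole PHP-style worker until the LLM finishes.
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["choices"][0]["message"]["content"]
```

Strip away the dependency injection and the interface, and this is the payload: a request, a wait, a JSON field.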

Streaming: When the 1980s Tires Explode

In the real world of AI, an LLM takes time to respond — sometimes 10, 20, 30 seconds. So we use streaming (Server-Sent Events) to display words one by one, giving the illusion of fluidity.

In Python (FastAPI):

  • Native, asynchronous streaming
  • One worker can handle 100+ simultaneous connections without breaking a sweat
  • While OpenAI generates the response, the worker is free to process other requests
  • Non-blocking architecture: everything is fluid

In Symfony (classic PHP-FPM):

  • Making proper streaming work is already a pain
  • Each streaming connection monopolizes one complete PHP worker
  • If 50 users are streaming a response simultaneously, your 50 PHP workers are all frozen, patiently waiting for OpenAI to deign to send back a token
  • Meanwhile? Your site doesn't respond anymore. Other visitors wait. Monitoring goes haywire.
  • This is textbook worker starvation: all your workers are alive but useless, blocking on I/O while your queue fills up and users time out.
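The difference between the two models can be shown in a few lines of asyncio, the machinery FastAPI is built on. Here 50 simulated "users" each stream 5 tokens from a slow fake LLM, and a single event loop serves them all concurrently instead of freezing one worker per connection:

```python
import asyncio
import time

async def fake_llm_stream(delay: float, tokens: int) -> list[str]:
    """Simulate an LLM emitting one token every `delay` seconds."""
    out = []
    for i in range(tokens):
        # Non-blocking wait: while this coroutine sleeps,
        # the event loop is free to serve every other connection.
        await asyncio.sleep(delay)
        out.append(f"tok{i}")
    return out

async def main():
    start = time.monotonic()
    # 50 "simultaneous users", each streaming 5 tokens at 0.01 s/token.
    results = await asyncio.gather(
        *(fake_llm_stream(0.01, 5) for _ in range(50))
    )
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
# One event loop served all 50 streams concurrently:
# total time is roughly one stream's duration (~0.05 s), not 50 × 0.05 s.
print(len(results), round(elapsed, 2))
```

In the blocking PHP-FPM model, the same 50 streams would pin 50 workers for the full duration of the slowest response. That, not the sticker on the hood, is the architectural difference.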

The school bus doesn't just have NASA stickers. It also has 1980s tires that explode as soon as you exceed 30 mph.

That's when you understand that synchronous PHP architecture was never designed for this. Apply as much polish as you want; the foundation remains unsuitable.


2. Doctrine: A Ferrari with a Lawnmower Engine

Modern RAG relies on vector operations: cosine distances, ANN indexes, millions of points in memory. Doctrine, on the other hand, relies on PHP object hydration designed in 2009 for relational SQL.

But let's be honest: even for standard plowing — your everyday SELECT * FROM user WHERE active = 1 — Doctrine consumes like an ogre.

The Hidden Cost of "Simple CRUD"

Forced hydration:

Doctrine manufactures complete PHP objects with all the machinery (EventManager, UnitOfWork, lazy-loading proxies) just to display three fields in a JSON response.

Memory footprint:

50,000 rows? The PHP process balloons to 400MB and the garbage collector screams. This isn't data management; it's inflating balloons with helium.

Subtle N+1:

Even senior devs forget a fetch join (or a fetch="EAGER"), and suddenly your page fires 47 SQL queries to list users. Doctrine doesn't protect you from yourself; it amplifies your mistakes.

DQL Overhead:

The DQL parser, plus the SQL generator, plus the result-set mapping that turns rows back into objects... it's molecular gastronomy to make a sandwich. You wanted SELECT id, name FROM user? Doctrine offers you a ballet of 800 lines of internal code.
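The N+1 trap is easy to state and easy to fall into. A minimal language-neutral sketch (a toy in-memory "database" that just counts round-trips; names are illustrative) shows where the 47 queries come from:

```python
class FakeDB:
    """Toy stand-in for a database that only counts queries (illustration)."""

    def __init__(self):
        self.queries = 0

    def select_users(self):
        self.queries += 1  # 1 query for the user list
        return [{"id": i, "role_id": i % 3} for i in range(46)]

    def select_role(self, role_id):
        self.queries += 1  # one extra round-trip per user: the N+1 trap
        return f"role-{role_id}"

    def select_users_with_roles(self):
        self.queries += 1  # a single JOIN does the same work in one query
        return [{"id": i, "role": f"role-{i % 3}"} for i in range(46)]

# Lazy loading in a loop: 1 query for the users + 46 for their roles = 47.
db = FakeDB()
lazy = [(u["id"], db.select_role(u["role_id"])) for u in db.select_users()]
print(db.queries)  # 47

# Eager fetch (the JOIN you forgot to write): 1 query total.
db = FakeDB()
eager = db.select_users_with_roles()
print(db.queries)  # 1
```

Same 46 users, same data, a 47-to-1 difference in round-trips. The ORM default is the lazy path.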

The Real Metaphor

You can announce the same power on paper — "millions of entries management, elegant abstraction" — but Doctrine isn't even a robust farm tractor.

It's a garden micro-tractor, with 25 HP, meant to plow flowerpots (your 200-line admin CRUD), that we're trying to pass off as intensive farming equipment.

And here, in AI, we're asking this micro-tractor to plow 50 hectares of 1536D vectors continuously.

Result?

  • It melts its clutch (PHP fatal error: allowed memory size exhausted)
  • It blows its tires (disk swap activated, server on its knees)
  • The driver (the DBA) has to call for help at 3 AM

The metaphor "Ferrari with tractor engine" was already too flattering.

It's a Ferrari with a Honda lawnmower engine.

You can't race the 24 Hours of Le Mans with a block that was designed to mow the lawn.

A Concrete Example That Kills

Let's take a basic RAG chatbot: 50,000 documents, OpenAI embeddings (1536 dimensions), semantic search.

With Qdrant (or Pinecone, or Weaviate):

  • Latency: 20-50ms
  • RAM: ~2GB for 50k vectors
  • Scale: linear up to several million vectors

With Symfony AI + Doctrine:

  • Doctrine tries to hydrate thousands of PHP objects to calculate cosine distances
  • MySQL (or PostgreSQL) does a full table scan on an embedding column stored as JSON or BLOB
  • Latency: 3-8 seconds for a simple query
  • RAM: the PHP process explodes to 512MB, then 1GB, then timeout
  • The DBA receives an alert at 3 AM and resigns by email
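Stripped of the ORM ceremony, the "no vector index" path amounts to a brute-force scan: compare the query embedding against every stored vector, every time. A minimal pure-Python sketch (dimensions and row count scaled down from the article's 1536-D × 50k scenario so it runs in well under a second):

```python
import math
import random

random.seed(0)
DIM, N = 64, 2000  # scaled down; the article's scenario is 1536-D × 50,000 rows

def cosine_distance(a, b):
    """1 - cosine similarity, the distance a vector DB computes natively."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

docs = [[random.random() for _ in range(DIM)] for _ in range(N)]
query = docs[1234]  # use a stored vector so the expected best match is itself

# The full scan: N complete distance computations per search.
# An ANN index (HNSW, IVF) visits only a small fraction of these.
best = min(range(N), key=lambda i: cosine_distance(query, docs[i]))
print(best)  # 1234
```

At 2,000 × 64 this is instant. At 50,000 × 1536 it's roughly 600 times more arithmetic per query, and in the Doctrine version each of those rows is also hydrated into a full PHP object first.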

And the worst part? Even if the dev adds a vector index (pgvector on PostgreSQL, for example), Doctrine's DQL has no idea, out of the box, how to generate a vendor-specific operator like pgvector's <->.

They have two options:

  1. Write raw SQL with a NativeQuery → the ORM becomes dead weight; you added three layers of abstraction just to... write SQL by hand
  2. Use Doctrine's QueryBuilder → which will generate a slow, inefficient query that completely ignores the vector index
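For the record, here is a hedged sketch of what option 1 amounts to: hand-building the pgvector SQL the ORM cannot express. Table and column names are hypothetical; <=> is pgvector's cosine-distance operator (<-> is L2 distance). In real code you would bind the vector as a query parameter rather than interpolate it.

```python
# Hypothetical sketch of "option 1": bypass the ORM and write the SQL yourself.
# '<=>' is pgvector's cosine-distance operator; table/column names are made up.
def knn_sql(embedding: list[float], limit: int = 5) -> str:
    # Format the embedding as a pgvector literal, e.g. '[0.120000,-0.030000]'.
    # Real code should pass this as a bound parameter, not string-interpolate it.
    vec = "[" + ",".join(f"{x:.6f}" for x in embedding) + "]"
    return (
        "SELECT id, title FROM documents "
        f"ORDER BY embedding <=> '{vec}' LIMIT {limit}"
    )

print(knn_sql([0.12, -0.03]))
```

Three layers of abstraction above, one f-string below: that's the punchline.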

The abstraction isn't just slow. It's useless. Worse: it's dangerous, because it gives the illusion that you're doing things properly while sabotaging performance.

It's a Ferrari with a lawnmower engine: it looks impressive on the brochure, but try exceeding 20 mph.


3. Economic Incoherence: Doing AI with Yesterday's Tool

Using Symfony to do AI is like using COBOL to make a website in 2025.

Technically possible? Yes, absolutely.

Has someone already done it? Probably, in some basement of the Finance Ministry.

Is it a good idea? No. Never. Under no circumstances.

The Real Economic Question

Facing a RAG project, an average company has two options:

Efficient option:

Two Python devs → FastAPI + Qdrant → robust prototype in two weeks → scales to 10M vectors with 2 servers → controlled cost, performance delivered.

Symfony option:

We try to fit embeddings into Doctrine → six months of refactoring → a budget equivalent to a country house → performance that makes a 200-line Python script smile → scales to 100k documents maximum before everything collapses.

It's not a question of Symfony devs' competence. It's a question of a tool unsuited to the problem.

Symfony AI is a solution for those who want to do AI without ever approaching AI. For those who prefer to pay six months of consulting rather than three weeks of Python training.


4. The Rubber Belt Against the Metal Chain

The rubber belt (Symfony AI) is exactly what we put in place of the metal chain (an AI-native architecture).

Why did the automotive industry replace chains with belts?

  • Cost: a belt costs less to produce — like avoiding training a Python team or hiring an ML engineer.
  • Silence: it makes less noise — no organizational friction, no questioning of the historical stack.
  • Weight: a belt is lighter — nothing about the hosting changes; you stay on a shared server that does what it can.
  • Planned obsolescence: a belt is replaced regularly — exactly like these Symfony AI refactorings that come back every X months.

The problem? A belt breaks cleanly. No sign, no warning. It gives out. Brutally.

And when the Symfony AI belt breaks:

  • embeddings explode the RAM of an OVH shared server
  • Doctrine latency makes the chatbot timeout in production
  • a "simple" RAG must handle 100k documents and MySQL triggers a 12-second full table scan
  • the application becomes unavailable
  • emergency committee improvised around a PowerPoint

... it's engine failure: valves through the pistons, a project to rewrite, a budget to double.

The metal chain (Python + vector DB + an architecture designed for AI) makes noise at first and is expensive to install, but it lasts 300,000 km. It's built to last.

With Symfony AI, we replaced a durable solution with a disposable one, to save 15% at startup and lose 85% later.

This is exactly the economics of the typical French IT department: preferring a controlled, predictable expense (changing the belt every 60,000 km) over an initial investment that guarantees survival (the chain).


5. Conclusion: Modernity Tailored to Reassure, Not to Advance

Symfony AI is neither dangerous nor useless. It's simply cosmetic: an elegant way to tell teams, "don't you dare change your stack."

It's makeup on an unsuitable architecture. A yellow school bus, solid but slow, to which we stick "AI ready," "Vector search inside" and two metallic stickers.

From afar, it shines. Up close, you still see traces of the old "Municipal Service" logo.

The illusion doesn't go into orbit, even with NASA stickers.

It's AI for those who are afraid of AI. A stagecoach disguised as a spaceship. Ceremonial modernity.

And in a world evolving at the speed of AI, it's funnier than it is serious.
