aichannode

How I Structure a FastAPI Backend with LLM Features (From a Real Project)

I Don’t Start With Endpoints Anymore

When I used to start backend projects, I’d jump straight into writing routes.

That worked… until the project grew.

Now, I start with something else:

“How will this project fall apart in 3 months?”

Because it will — especially if you’re using LLMs.

You’ll start seeing:

  • prompts copied across files
  • random LLM calls inside endpoints
  • parsing logic that no one wants to touch
  • “temporary” hacks that become permanent

So these days, I focus heavily on structure first, features second.

This post is how I structured a FastAPI backend with LLM integration for a real estate consultant system — and what actually held up.


FastAPI vs Express — Different Problems

Coming from Node.js + Express, I was used to this:

```
routes/
controllers/
services/
models/
```

Flexible, simple… and easy to mess up.

Over time:

  • controllers get bloated
  • services become dumping grounds
  • logic gets duplicated

With FastAPI, the issue is different:

It gives you powerful tools, but no strong opinion on structure.

So people end up with:

  • everything inside main.py
  • business logic inside route handlers
  • LLM calls scattered everywhere

And once an LLM is involved, things get chaotic fast.


The Project (Real Context)

This is from a real project:

A backend that collects user preferences, uses an LLM to interpret them, and guides real estate search.

Not just CRUD. It includes:

  • multi-step intake flow
  • LLM-based parsing
  • dynamic question generation

The Structure I Landed On

```
api/
core/
llm/
models/
repositories/
schemas/
utils/
```

High-Level Flow

```
[Client]
   ↓
[API Layer]
   ↓
[Repositories + LLM]
   ↓
[Database]   [LLM Provider]
```

API Layer — Keep It Boring

```
api/v1/endpoints/
```

Responsibilities:

  • request/response
  • validation
  • calling repositories or LLM layer

```python
@router.post("/intake")
def create_intake(...):
    return intake_repo.create(...)
```

Core — The Foundation

```
core/
```

Includes:

  • config
  • DB connection
  • dependency injection
  • external SDK wrappers

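As a concrete illustration, here is a minimal sketch of what centralized config in `core/` could look like. The names (`Settings`, `get_settings`, the env vars) are my own illustrative assumptions, not from the original project, and I'm using only the stdlib; a real FastAPI project would often reach for `pydantic-settings` instead.

```python
# core/config.py — illustrative sketch: every environment-driven setting
# lives in one frozen object instead of scattered os.getenv calls.
import os
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Settings:
    database_url: str = field(
        default_factory=lambda: os.getenv("DATABASE_URL", "sqlite:///./app.db")
    )
    llm_provider: str = field(
        default_factory=lambda: os.getenv("LLM_PROVIDER", "openai")
    )
    llm_api_key: str = field(default_factory=lambda: os.getenv("LLM_API_KEY", ""))


def get_settings() -> Settings:
    # In FastAPI this would typically be wrapped in functools.lru_cache
    # and injected into endpoints via Depends(get_settings).
    return Settings()
```

The payoff is that endpoints, repositories, and the LLM layer all depend on one object, which makes swapping values in tests trivial.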
LLM Layer — Treat It as a Domain

```
llm/
  intake/
    prompts.py
    schema.py
    service.py
  providers/
```

All LLM-related logic lives here.

```python
llm_intake_service.parse_user_input(text)
```

Why?

Because an LLM is:

  • non-deterministic
  • sensitive to prompts
  • provider-dependent
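
To make that concrete, here is a sketch of what such an isolated service could look like. Everything here (the `LLMProvider` protocol, the prompt text, the JSON keys) is an illustrative assumption, not the original project's code; the point is that prompts, the provider call, and parsing all live behind one interface.

```python
# llm/intake/service.py — illustrative sketch of an LLM layer as its own domain.
import json
from typing import Protocol


class LLMProvider(Protocol):
    """Any provider (OpenAI, Anthropic, local) just needs this one method."""
    def complete(self, prompt: str) -> str: ...


INTAKE_PROMPT = (
    "Extract the user's real-estate preferences as JSON with keys "
    "'budget' (int) and 'location' (str). Input: {text}"
)


class LLMIntakeService:
    """Owns prompts, provider calls, and parsing for the intake domain."""

    def __init__(self, provider: LLMProvider):
        self.provider = provider

    def parse_user_input(self, text: str) -> dict:
        raw = self.provider.complete(INTAKE_PROMPT.format(text=text))
        # LLM output is untrusted: parse defensively and fail loudly.
        try:
            return json.loads(raw)
        except json.JSONDecodeError as exc:
            raise ValueError(f"LLM returned non-JSON output: {raw!r}") from exc


class FakeProvider:
    """Stand-in provider so the service is testable without network calls."""

    def complete(self, prompt: str) -> str:
        return '{"budget": 500000, "location": "Austin"}'
```

Because endpoints only ever see `parse_user_input`, swapping providers or rewriting prompts never touches the API layer.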

Models vs Repositories

```
models/
repositories/
```

  • models/ → DB structure
  • repositories/ → queries

Keeps data access clean and testable.
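
Here is a minimal repository sketch to show the shape. The table and column names are illustrative assumptions (the real project would use its own models, likely via SQLAlchemy), but the principle holds: endpoints call methods, never SQL.

```python
# repositories/intake_repo.py — illustrative sketch: all intake queries
# live in one class, so data access stays testable in isolation.
import sqlite3


class IntakeRepository:
    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS intakes "
            "(id INTEGER PRIMARY KEY, budget INTEGER, location TEXT)"
        )

    def create(self, budget: int, location: str) -> int:
        cur = self.conn.execute(
            "INSERT INTO intakes (budget, location) VALUES (?, ?)",
            (budget, location),
        )
        self.conn.commit()
        return cur.lastrowid

    def get(self, intake_id: int):
        return self.conn.execute(
            "SELECT id, budget, location FROM intakes WHERE id = ?",
            (intake_id,),
        ).fetchone()
```

In tests you hand it an in-memory connection; in production, the real one from `core/`.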


Schemas — Critical for LLM

```
schemas/
```

LLMs:

  • hallucinate
  • return inconsistent formats

So:

  • define strict schemas
  • validate every response

Utils — Use Carefully

```
utils/
```

Good for:

  • small helpers

Bad when:

  • it becomes a dumping ground

LLM Flow

```
User Input
   ↓
LLM Prompt
   ↓
LLM Response
   ↓
Schema Validation
   ↓
Structured Data
```
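
The flow above can be sketched as one pipeline function. Everything here is an illustrative assumption (prompt wording, keys, the retry count), not the original project's code; the point is that validation sits between the raw response and anything downstream, with a retry because the output is non-deterministic.

```python
# Illustrative sketch: prompt → response → validation → structured data,
# with one retry when the model returns something unusable.
import json


def run_intake_flow(call_llm, text: str, max_attempts: int = 2) -> dict:
    """call_llm: any callable taking a prompt string and returning raw model text."""
    prompt = f"Return JSON with keys 'budget' and 'location' for: {text}"
    last_error = None
    for _ in range(max_attempts):
        raw = call_llm(prompt)
        try:
            data = json.loads(raw)
            if set(data) != {"budget", "location"}:
                raise ValueError(f"unexpected keys: {set(data)}")
            return data  # structured, validated data
        except (json.JSONDecodeError, ValueError) as exc:
            last_error = exc  # non-deterministic output: try again
    raise RuntimeError(f"LLM flow failed after {max_attempts} attempts: {last_error}")
```
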

One Key Idea

Treat the LLM as its own domain, not just a tool.


Final Thoughts

This structure isn’t perfect.

But it worked in a real project.

Structure isn’t about being clean.

It’s about staying sane when things get messy.
