FC Quiles

We rebuilt the same Django AI backend 12 times. So we open-sourced it.

Every AI project we took on started the same way.
A new client, a new idea, real urgency — and then two weeks of setup before we could write a single line of actual product logic.
Configure Django. Wire up Celery. Write the Docker Compose file. Set up Redis. Hook in the LLM. Fight the environment variables. Do it all again for staging. Do it all again for prod.
We're a team building AI applications for businesses across Latin America and the Caribbean. In three years, we rebuilt this foundation twelve times. Different projects, different clients, same skeleton.
So we extracted it, cleaned it up, and open-sourced it.
What Glápagos Backend is
Glápagos Backend is a production-ready Django boilerplate designed specifically for AI-enabled applications. It's opinionated where it matters and open where you need flexibility.
Out of the box you get:

Django 4.x with a clean modular app structure that won't fight you when the project grows
Celery + Redis pre-configured for background job execution — critical when your AI inference calls take 10 seconds and you can't block the request cycle
Docker Compose templates for dev, staging, and production environments with a single flag difference
REST API scaffolding with authentication, pagination, and serializer patterns already wired
Environment-aware configuration using .env profiles so you never accidentally run prod settings locally
Optional AI/ML hooks for OpenAI, Anthropic, and vector stores — plug in what you need, leave out what you don't
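To illustrate the environment-aware configuration idea, profile selection can come down to branching on a single variable. This is a minimal sketch, not the repo's actual settings module; the variable names (`DJANGO_ENV`, `REDIS_URL`) and profile names are assumptions:

```python
import os

# DJANGO_ENV picks the profile; "dev" as the default keeps local runs safe.
ENV = os.environ.get("DJANGO_ENV", "dev")

DEBUG = ENV == "dev"
ALLOWED_HOSTS = ["*"] if DEBUG else os.environ.get("ALLOWED_HOSTS", "").split(",")

# Celery and the cache share one Redis URL, so dev, staging, and prod
# differ only by which .env file is loaded.
CELERY_BROKER_URL = os.environ.get("REDIS_URL", "redis://localhost:6379/0")
```

Because every difference lives in the `.env` file, the same settings module runs unchanged in all three environments.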

The architecture decision that matters most
The thing that makes this different from a standard Django boilerplate is how it treats AI inference.
LLM calls are slow. They're unpredictable. They fail. If you run them synchronously in your API views, your app becomes unreliable the moment it does anything interesting.
Glápagos Backend routes all inference work through Celery workers from the start. Your API stays fast. Your AI runs in the background. Results come back through polling or websockets. This is the pattern we learned the hard way — by not doing it on the first three projects.
Why Django specifically
We get this question a lot. FastAPI is popular. Node is everywhere.
But for teams building AI products, Python is already home. Your data scientists, your ML engineers, your prompt engineers — they all live in Python. Django gives you a production-grade ORM, a battle-tested auth system, and an admin interface that saves weeks of internal tooling work. The ecosystem is mature and boring in exactly the right way.
The missing piece was always async AI workloads. Celery fills that gap cleanly.
Building for the Americas
One thing worth mentioning: this repo was forged in a specific context.
Building AI applications for Latin American markets means working with multilingual data, navigating different regulatory environments across a dozen countries, and operating under real cloud cost constraints that US-centric products don't face.
The defaults in this repo reflect those realities. Lightweight where possible. Modular so you can swap components. Designed to run efficiently on smaller instances.
The live platform this powers — Glápagos by GENIA Americas — is an AI platform built for the Western Hemisphere. The backend you're looking at is what runs it.
Getting started in 4 commands
```bash
git clone https://github.com/GENIA-Americas/Glapagos-Backend.git
cd Glapagos-Backend
cp .env.example .env
docker compose up --build
```
Your API is live at http://localhost:8000/api/. The Django admin is at /admin/. Celery workers are running. Redis is connected.
That's it. Everything else is building your actual product.
What's next
We're actively developing the repo. On the roadmap:

WebSocket support for streaming LLM responses
Pre-built authentication flows (OAuth, magic link)
Vector store integration templates for RAG applications
Deployment guides for AWS, GCP, and Railway

Try it, break it, contribute
The repo is MIT licensed. If you use it, we'd love a star — it helps others find it. If you run into something broken or have a pattern that should be included, open an issue or a PR.
We're building in public. Come build with us.

GitHub: github.com/GENIA-Americas/Glapagos-Backend
Platform: glapagos.com/glapp
