
Harshith Varma Keerthipati


How we use Django and MongoDB in Energy AI - a unified Python web app for adaptive conversational AI

Energy AI is a Django-served Python web application backed by MongoDB for application persistence. The project combines authentication, chat APIs, static asset delivery, adaptive assistant behavior, and MongoDB-backed storage in one deployment story. This post explains why we moved to that architecture, what Django now serves, and how MongoDB fits into the runtime and data flow.


Team Members

This project was developed by:

We would like to express our sincere gratitude to @chanda_rajkumar for their valuable guidance and support throughout this project.

Their insights into system design, architecture, and development played a key role in shaping Energy AI.


Why Django over a split frontend/backend stack

When we reviewed the runtime shape of Energy AI, the biggest friction point was not the assistant logic itself. It was the fact that interface delivery, API behavior, deployment wiring, and asset serving were drifting apart. A conversational product needs predictable request flow: the browser should load the same UI that the backend expects, the backend should know where its static files live, and authentication links should resolve against the same runtime that serves chat requests.

That is why we reorganized the project around Django as the single Python runtime while keeping MongoDB as the persistence layer for application data. Instead of depending on an older frontend bundle server during execution, the application now serves its own template, style sheets, JavaScript, and logo through Django static files. That change removed a whole category of mismatches where the interface and the backend could get out of sync. It also made local development and deployment easier because the same Python stack that runs assistant orchestration also handles page rendering and route delivery, while MongoDB stores the dynamic chat and user-related records that naturally fit a document-style structure.

Django is the runtime contract for Energy AI, and MongoDB is the application persistence layer. The homepage, static files, auth endpoints, and chat routes are Django-served, while users, sessions, chats, training data, and evaluation records can be stored as MongoDB documents.

This matters even more for AI products than it does for ordinary CRUD apps. Energy AI has streamed chat responses, email verification, password reset flows, chat history, energy-aware request handling, and training-related data that evolves over time. When these pieces are split across too many runtimes, or when the persistence layer is disconnected from the actual assistant workflow, small deployment mismatches create large user-facing problems. By consolidating the web runtime in Django and anchoring application state in MongoDB, we made the system easier to reason about, test, and publish.


Our application architecture

The current Energy AI stack is organized around a Django project with a dedicated api application and a MongoDB-backed persistence layer for application records. The UI shell is served by app.html, while the browser loads first-party static files such as frontend.bundle.css, frontend.css, frontend.js, and energy-logo.svg directly from Django's static pipeline. The backend layer exposes health, authentication, chat, and session routes. The assistant layer handles prompt interpretation, workspace-aware behavior, and energy-aware response generation, and the data layer can persist chat sessions, auth sessions, approved training items, rejected items, and evaluation runs in MongoDB collections.

{
  "runtime": "Django-served Python web app",
  "ui_entry": "api/templates/api/app.html",
  "static_assets": [
    "api/static/api/frontend.bundle.css",
    "api/static/api/frontend.css",
    "api/static/api/frontend.js",
    "api/static/api/energy-logo.svg"
  ],
  "core_modules": [
    "views.py",
    "auth.py",
    "chat_engine.py",
    "storage.py",
    "email_service.py",
    "workspace.py"
  ],
  "mongodb_collections": [
    "users",
    "authSessions",
    "chatSessions",
    "trainingApproved",
    "trainingCandidates",
    "trainingRejected",
    "evaluationRuns"
  ],
  "assistant_behavior": "energy-aware routing for lightweight vs deeper responses",
  "deployment": "manage.py + collectstatic + Gunicorn",
  "migration_support": "server/scripts/migrate-file-to-mongo.js"
}

At the assistant level, Energy AI still keeps its identity as an energy-aware system. Not every user request should be handled with the same computational intensity. Simpler requests can follow a lighter path, while coding or analysis-heavy prompts can trigger deeper assistant behavior. The Django migration did not remove that logic. It gave it a cleaner execution surface.
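
As a rough illustration of that idea, here is a minimal routing sketch. The keyword heuristics, thresholds, and function names below are assumptions for illustration only; the real Energy AI router is a dedicated model in the project-defined stack described later.

# Hypothetical sketch of energy-aware routing, not the actual Energy AI router.
# The heuristics and function names here are illustrative assumptions.

LOW_ENERGY_MODEL = "energy-low-own-v1"
HIGH_ENERGY_MODEL = "energy-high-own-v1"

HEAVY_HINTS = ("code", "debug", "analyze", "explain in depth", "refactor")

def pick_model(prompt: str) -> str:
    """Send short, simple prompts to the lightweight model and
    coding/analysis-heavy prompts to the deeper model."""
    text = prompt.lower()
    if len(text) > 400 or any(hint in text for hint in HEAVY_HINTS):
        return HIGH_ENERGY_MODEL
    return LOW_ENERGY_MODEL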


MongoDB data model in Energy AI

MongoDB is important in Energy AI because the platform does not generate just one type of data. User profiles, authentication sessions, chat histories, model-routing records, training candidates, approved feedback pairs, rejected examples, and evaluation logs all have different structures. A rigid table design would become harder to maintain every time the assistant gains a new feature or starts storing richer metadata such as route reason, workspace mode, latency, or model identity. With MongoDB, each record can remain a self-contained document while still being grouped into meaningful collections for retrieval and analytics.

This document-oriented design is especially useful for conversational AI. A single chat session may store a title, timestamps, energy mode, route metadata, and a variable-length array of user and assistant messages. Training-related documents may additionally include approval state, reviewer decisions, and cleaned prompt-response pairs. Evaluation logs may contain metrics and summary fields that do not belong inside ordinary user-chat documents. MongoDB supports that variation naturally, which is why it fits both the current Energy AI implementation and future extensions of the system.

{
  "type": "chat_session",
  "userId": "usr_102",
  "sessionId": "sess_44f2",
  "workspaceMode": "coding",
  "energyMode": "low",
  "title": "Django MongoDB integration discussion",
  "messages": [
    {
      "role": "user",
      "content": "Explain why MongoDB fits this project"
    },
    {
      "role": "assistant",
      "content": "MongoDB stores chat, session, and training data as flexible documents."
    }
  ],
  "routeMeta": {
    "provider": "own",
    "model": "energy-low-own-v1",
    "latencyMs": 842
  },
  "createdAt": "2026-04-26T10:20:00Z",
  "updatedAt": "2026-04-26T10:21:14Z"
}



Because these collections are separated by responsibility, the system can query them efficiently for different tasks. For example, Energy AI can retrieve a user's previous chats, inspect recent authentication activity, collect approved training examples for future retraining, or analyze evaluation runs to understand how the assistant behaves across different prompt categories. This makes MongoDB useful not only as a storage engine, but also as a support layer for monitoring, retraining, and long-term project scalability.
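
For example, a minimal sketch of how such lookups could be expressed with PyMongo is shown below. The collection names follow the deployment configuration shown later in this post, while the connection URI and query fields are assumptions based on the example chat document above.

# Illustrative PyMongo queries against the Energy AI collections.
# Field names mirror the example chat document; treat them as assumptions.
from pymongo import MongoClient, DESCENDING

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
db = client["energy_ai"]

# A user's most recent chat sessions, newest first
recent_chats = (
    db["chatSessions"]
    .find({"userId": "usr_102"})
    .sort("updatedAt", DESCENDING)
    .limit(20)
)

# Recent authentication activity for the same user
recent_auth = db["authSessions"].find({"userId": "usr_102"}).sort("createdAt", DESCENDING)

# Approved training examples collected for a future retraining run
approved_pairs = db["trainingApproved"].find({})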


Web search and live knowledge support

Another important capability in Energy AI is web-aware assistance for questions that depend on current or externally grounded information. In a conversational system, not every useful answer can come only from stored model parameters. Some user prompts require recent updates, discoverable context, or broader information gathering. To support those cases, Energy AI includes a knowledge-oriented layer that can trigger web-search-style retrieval when the system detects that a question is discovery-based, information-seeking, or time-sensitive.

This is a meaningful feature because it improves practical usefulness. If a user asks for the latest information, wants topical exploration, or needs broader context than the chat history alone can provide, the assistant is able to shift from pure response generation toward retrieval-assisted behavior. In architectural terms, that means Energy AI is not limited to a closed conversational loop. It can expand outward when the task benefits from web-connected context, then return the final answer through the same Django-served chat flow.

Web search is highlighted in Energy AI as a practical knowledge layer. It helps the assistant answer discovery-oriented and current-information prompts more effectively instead of depending only on pre-existing model memory.

From a systems perspective, this also fits the energy-aware philosophy of the project. Web search is not triggered for every message. It is used selectively when the prompt type benefits from external knowledge. That keeps the assistant efficient while still making it more capable in real-world scenarios where freshness and context matter.
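
A minimal sketch of that selective triggering is shown below. The keyword heuristics, stub functions, and their names are assumptions for illustration, not the project's actual detector or retrieval layer.

# Hypothetical heuristic for deciding when to enrich a prompt with web search.
# Keyword lists and stub functions are illustrative assumptions.

TIME_SENSITIVE = ("latest", "today", "current", "news", "recent")
DISCOVERY = ("find", "compare", "look up", "sources for", "what are the best")

def needs_web_search(prompt: str) -> bool:
    text = prompt.lower()
    return any(k in text for k in TIME_SENSITIVE) or any(k in text for k in DISCOVERY)

def run_web_search(prompt: str) -> str:
    # Stub: the real system would call its retrieval layer here.
    return f"[web context for: {prompt}]"

def generate_response(prompt: str, extra_context: str = "") -> str:
    # Stub: the real system would call the routed model here.
    return f"answer to {prompt!r} (web context used: {bool(extra_context)})"

def answer(prompt: str) -> str:
    if needs_web_search(prompt):
        return generate_response(prompt, extra_context=run_web_search(prompt))
    return generate_response(prompt)  # closed conversational loop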


Project-created model stack

One of the strongest technical aspects of Energy AI is that the assistant behavior is organized around a project-defined model stack instead of behaving like a plain wrapper over one generic chatbot endpoint. The system defines its own routing role, its own low-energy response role, and its own deeper-response role. In the project configuration these appear as artifacts such as energy-router-own-v1, energy-low-own-v1, and energy-high-own-v1, which makes the assistant pipeline identifiable as part of the Energy AI system itself.

The important point is not to claim that every building block in modern AI appears from nothing, but to show that the actual Energy AI behavior was created as a project-specific stack. The routing logic, model-role separation, local artifacts, training scripts, dataset preparation flow, evaluation pipeline, and Django deployment path were built for this application. That means the assistant is not just a borrowed interface. It is a structured system with its own routing design, its own local model workflow, and its own full-stack integration strategy.

{
  "router_model": "energy-router-own-v1",
  "low_energy_model": "energy-low-own-v1",
  "high_energy_model": "energy-high-own-v1",
  "custom_capabilities": [
    "adaptive routing",
    "workspace-aware chat behavior",
    "own model artifacts",
    "training and evaluation pipeline",
    "Django + MongoDB deployment integration"
  ]
}

This model-stack design is what gives Energy AI its identity. Instead of treating all prompts the same way, the platform routes them according to complexity, context, and purpose. That is why the assistant can stay efficient for simple requests while still supporting deeper interaction for coding, explanation, and analytical tasks.


The Django service layer

One of the strongest parts of the migration is the way Django now owns route flow end to end. The root URL configuration sends /api/ traffic into the API application while non-API traffic is routed to the main app shell. That means a user can open the site, register an account, verify email, log in, request a password reset, and start chatting without leaving the same runtime boundary.


from django.contrib import admin
from django.urls import include, re_path

urlpatterns = [
    re_path(r"^django-admin/", admin.site.urls),
    re_path(r"^api/", include("api.urls")),
    re_path(r"^(?!api/|django-admin/).*", include("api.urls_spa")),
]

Inside the API app, responsibilities are intentionally separated. views.py coordinates HTTP-facing behavior. auth.py handles registration, verification, login, and reset logic. chat_engine.py handles assistant-side prompt processing and response generation. storage.py persists application data and sits naturally beside the MongoDB-backed persistence flow. email_service.py manages outbound verification and reset communication. workspace.py helps the assistant behave differently when the user is in a coding or general-help context.
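
As a simplified sketch of how that split can look at the view layer: only the module boundaries mirror the description above, while the internal function names (require_user, respond, save_message) are assumptions made for illustration.

# api/views.py -- illustrative sketch only; the helper names are assumptions.
import json

from django.http import JsonResponse
from django.views.decorators.http import require_POST

from . import auth, chat_engine, storage

@require_POST
def chat(request):
    user = auth.require_user(request)                      # auth.py owns session checks
    payload = json.loads(request.body)
    reply = chat_engine.respond(user, payload["message"])   # assistant-side logic
    storage.save_message(user, payload["message"], reply)   # MongoDB persistence
    return JsonResponse({"reply": reply})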

This modular approach matters because conversational systems become hard to maintain when all logic is mixed together inside one giant endpoint file. By separating transport, auth, assistant logic, and persistence, the project remains easier to debug and extend.


What Django serves and why

The design rule for the new system is simple: if the browser needs it at runtime, Django should be able to serve it directly. That includes the application shell, styling, interactivity, and branding. This change removed the older dependency on an external runtime bundle and made deployments more self-contained.


Because the static layer is now first-party, collecting and serving assets becomes a standard Django operation rather than a separate deployment concern. The application can run with collectstatic, then be served by Gunicorn using the same runtime assumptions that were used during verification.


Persistent state and assistant workflows

Energy AI uses Django for routing and request handling, but MongoDB plays a major role in persistence. The deployment configuration includes explicit MongoDB environment variables such as MONGODB_URI, MONGODB_DB_NAME, MONGODB_COLLECTION, MONGODB_USERS_COLLECTION, MONGODB_SESSIONS_COLLECTION, MONGODB_CHATS_COLLECTION, and training/evaluation collections. That setup shows the overall architecture clearly: Django is the web framework and runtime boundary, while MongoDB is used to persist the assistant's application-level state.

# Django settings.py (static asset configuration)
STATIC_URL = "/static/"
STATIC_ROOT = BASE_DIR / "staticfiles"

# Deployment environment variables (MongoDB persistence)
MONGODB_URI=
MONGODB_DB_NAME=energy_ai
MONGODB_COLLECTION=appData
MONGODB_USERS_COLLECTION=users
MONGODB_SESSIONS_COLLECTION=authSessions
MONGODB_CHATS_COLLECTION=chatSessions
MONGODB_TRAINING_APPROVED_COLLECTION=trainingApproved
MONGODB_TRAINING_CANDIDATE_COLLECTION=trainingCandidates
MONGODB_TRAINING_REJECTED_COLLECTION=trainingRejected
MONGODB_EVALUATION_COLLECTION=evaluationRuns

MongoDB is a strong fit here because assistant data is not uniform. A chat session does not look like an auth session. A training candidate record does not look like an evaluation run. Some documents may contain message arrays, some may store metadata, and some may store model-related scoring or status information. A document database handles that variation cleanly without forcing the project into rigid table design for every evolving assistant feature.
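
A minimal sketch of how a storage module could resolve those environment variables into collections, assuming PyMongo as the driver, is shown below. The helper name and defaults are assumptions; the real internals of storage.py may differ.

# Illustrative connection helper, assuming PyMongo; not the real storage.py internals.
import os
from pymongo import MongoClient

_client = MongoClient(os.environ.get("MONGODB_URI", "mongodb://localhost:27017"))
_db = _client[os.environ.get("MONGODB_DB_NAME", "energy_ai")]

def collection(env_key: str, default: str):
    """Resolve a collection name from its environment variable, e.g. MONGODB_CHATS_COLLECTION."""
    return _db[os.environ.get(env_key, default)]

users = collection("MONGODB_USERS_COLLECTION", "users")
chats = collection("MONGODB_CHATS_COLLECTION", "chatSessions")
evals = collection("MONGODB_EVALUATION_COLLECTION", "evaluationRuns")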

The project also includes migration support through server/scripts/migrate-file-to-mongo.js. This provides a clean path for moving legacy file-based application state into MongoDB once the system grows beyond simple local persistence. In practice, that makes the architecture more durable, because early-stage prototypes can evolve into deployment-ready systems without rewriting the entire application data flow.


Validation before deployment

We did not treat the migration as complete just because the homepage loaded once. We verified the stack at the framework level, the storage level, the deployment level, and the user-flow level. The Django side passed framework validation with python manage.py check, database migrations ran non-interactively, and static files were collected successfully. The Gunicorn configuration passed its own validation, confirming that the project was ready for a production-style WSGI launch path. MongoDB settings were part of the same deployment configuration, which shows that persistence was treated as a first-class runtime concern rather than an afterthought.

python manage.py check
python manage.py migrate --noinput
python manage.py collectstatic --noinput
gunicorn --check-config

After that, we ran an isolated auth and chat smoke test. That flow covered registration, verification, login, chat access, chat save behavior, and streamed assistant output. Finally, we rendered the homepage in headless Chrome to confirm that the app shell and static assets behaved correctly in an actual browser context. That combination of checks is what made the migration trustworthy.
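
A condensed sketch of what such a smoke test can look like with Django's test client is shown below. The endpoint paths and payload fields are assumptions based on the flow described above, not the exact routes used by the project.

# Illustrative smoke test; URL paths and payload fields are assumptions.
import json
from django.test import Client

def run_smoke_test():
    client = Client()
    creds = {"email": "demo@example.com", "password": "pass1234"}

    # Registration and login against the Django-served auth endpoints
    client.post("/api/auth/register", json.dumps(creds), content_type="application/json")
    client.post("/api/auth/login", json.dumps(creds), content_type="application/json")

    # Chat access and save behavior
    resp = client.post("/api/chat", json.dumps({"message": "hello"}),
                       content_type="application/json")
    assert resp.status_code == 200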


Email verification and fallback behavior

Email is a common failure point in local and sandboxed environments, so Energy AI was designed to degrade cleanly. Verification and reset still work under the Django runtime, but if the upstream email provider cannot be reached, the system exposes preview links instead of silently failing the user flow. That preserves testability and keeps the application usable even in restricted environments.

When outbound mail delivery through Brevo is unavailable, Energy AI falls back to preview links for verification and password reset. The auth pipeline remains testable without blocking the rest of the application.

This fallback matters in academic and demo settings because infrastructure limits should not invalidate application logic. A user should still be able to validate the workflow, and a developer should still be able to verify that the integration behaves correctly from end to end.
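
A rough sketch of that fallback shape is shown below. The function signature, provider call, and preview-link format are assumptions; only the degrade-to-preview behavior follows the description above, not the real email_service.py API.

# Illustrative fallback: if the provider call fails, surface a preview link instead.
import logging

logger = logging.getLogger(__name__)

def send_verification(email: str, token: str, provider_send) -> dict:
    link = f"/api/auth/verify?token={token}"   # hypothetical verification path
    try:
        provider_send(to=email, subject="Verify your Energy AI account", body=link)
        return {"delivered": True}
    except Exception:
        logger.warning("Email provider unreachable; returning preview link instead")
        return {"delivered": False, "previewLink": link}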


Key takeaways

Django and MongoDB became the right combination for Energy AI for three reasons. First, Django gave the project a single Python runtime for UI delivery, API routes, auth, chat, and deployment. Second, MongoDB provided a flexible persistence model for chats, sessions, training records, and evaluation outputs whose shapes naturally vary over time. Third, the architecture removed runtime dependence on an older frontend bundle and replaced it with Django-served static assets that travel with the app, while still keeping a clean path for Mongo-backed application data and migration from older storage formats.

For an AI-enabled application, this kind of unification is not just a deployment convenience. It improves reliability. A conversational assistant becomes much easier to maintain when its request flow, page rendering, auth logic, and assistant orchestration share one execution environment, and when its persistence layer is designed for evolving document-shaped data. That is the real architectural win of the Energy AI Django plus MongoDB stack.


Execution

The final deployed version of Energy AI is available through the hosted application, the project repository, and the embedded demo recording below. This makes the implementation easy to inspect both as source code and as a running system.

Live Demonstration

GitHub Repository

Video Demonstration
