Migrating Django Endpoints to Rust: My NDVI & Weather Services Journey
When I started rethinking my NDVI and weather endpoints, the goal was simple: improve performance, enforce strong auth, and gain full observability. Over the last few weeks, I migrated critical services from Django to Rust, and the process turned out to be an engineering adventure worth sharing.
Phase 0 – Contract Freeze: Locking the APIs
Before touching Rust, I froze all NDVI and weather API contracts in Django. This ensured that the front-end and other consumers could continue working without disruptions. Think of it as putting a protective glass over your APIs: nothing moves until Rust is ready to take over.
Output: Frozen NDVI + weather contracts from Django.
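One lightweight way to enforce a freeze is a golden-snapshot check: capture the exact bytes Django emits for a known request, then require any reimplementation to reproduce them byte-for-byte. A minimal sketch (the envelope fields `field_id`, `ndvi`, and `source` here are illustrative, not my actual contract):

```rust
/// Render an NDVI envelope exactly as the legacy endpoint serialized it.
/// Field names and layout are illustrative placeholders.
fn ndvi_envelope(field_id: u32, ndvi: f64) -> String {
    format!(
        r#"{{"field_id":{},"ndvi":{:.3},"source":"sentinel-2"}}"#,
        field_id, ndvi
    )
}

fn main() {
    // Snapshot captured from the Django endpoint during the freeze.
    let frozen = r#"{"field_id":42,"ndvi":0.731,"source":"sentinel-2"}"#;
    // The Rust port must reproduce it byte-for-byte.
    assert_eq!(ndvi_envelope(42, 0.731), frozen);
    println!("contract intact");
}
```

In practice these snapshots live as fixture files and run in CI, so any accidental change to key order, precision, or casing fails loudly before consumers notice.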
Phase 1 – Multi-Service Architecture & Shared Auth/Throttle
Next, I set up a Rust workspace with multiple services:
- NDVI service: Handles vegetation index calculations.
- Weather service: Will eventually serve weather data.
- Shared auth & throttling module: Ensures consistent authentication and rate limiting across all services.
This phase established the skeleton for independent Rust microservices while maintaining the same contract as Django.
Output: Rust workspace, shared auth/throttle, NDVI envelope.
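At its core, the shared throttling module is a token bucket keyed per client. The real module is wrapped as a tower `Layer`; this std-only sketch shows just the accounting (the `RateLimiter` name and the burst/refill numbers are illustrative):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Per-client token bucket: bursts up to `capacity`, refills at `refill_per_sec`.
struct RateLimiter {
    capacity: f64,
    refill_per_sec: f64,
    buckets: HashMap<String, (f64, Instant)>, // client -> (tokens, last refill)
}

impl RateLimiter {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, refill_per_sec, buckets: HashMap::new() }
    }

    /// Returns true if the request is allowed, false if throttled.
    fn allow(&mut self, client: &str, now: Instant) -> bool {
        let (tokens, last) = self
            .buckets
            .entry(client.to_string())
            .or_insert((self.capacity, now));
        // Refill proportionally to elapsed time, capped at capacity.
        *tokens = (*tokens
            + now.duration_since(*last).as_secs_f64() * self.refill_per_sec)
            .min(self.capacity);
        *last = now;
        if *tokens >= 1.0 {
            *tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut limiter = RateLimiter::new(2.0, 1.0); // burst of 2, 1 req/sec refill
    let t0 = Instant::now();
    assert!(limiter.allow("farm-a", t0));
    assert!(limiter.allow("farm-a", t0));
    assert!(!limiter.allow("farm-a", t0)); // bucket empty
    assert!(limiter.allow("farm-a", t0 + Duration::from_secs(1))); // refilled
    println!("throttle ok");
}
```

Because both services share this one implementation, a tuning change (say, per-plan limits) lands everywhere at once instead of drifting between copies.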
Phase 2 – Weather Migration
With the workspace ready, I migrated weather endpoints from Django to Rust. Key steps included:
- Implementing shared authentication and throttling.
- Integrating MySQL connections safely with Rust’s type system.
- Ensuring the endpoints conformed to the frozen contract from Phase 0.
After this phase, all weather requests were fully handled by Rust services, improving throughput and reliability.
Output: Weather endpoints implemented in Rust.
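A concrete example of what "MySQL + Rust's type system" buys: nullable columns arrive as `Option`, so a missing reading must be handled at compile time instead of surfacing as a production `TypeError`. A std-only sketch of the row-to-envelope mapping (the column names and the default-to-zero policy are illustrative; the real code maps sqlx rows):

```rust
/// Shape of the frozen weather envelope (field names illustrative).
#[derive(Debug, PartialEq)]
struct WeatherResponse {
    temp_c: f32,
    precip_mm: f32,
}

/// Map raw, possibly-NULL columns into the response type.
/// The compiler forces a decision for every missing value.
fn to_response(
    temp_c: Option<f32>,
    precip_mm: Option<f32>,
) -> Result<WeatherResponse, &'static str> {
    Ok(WeatherResponse {
        // No temperature means the row is unusable: surface an error.
        temp_c: temp_c.ok_or("temp_c is NULL for this station")?,
        // A missing precipitation reading defaults to 0.0 by policy.
        precip_mm: precip_mm.unwrap_or(0.0),
    })
}

fn main() {
    assert_eq!(
        to_response(Some(21.5), None),
        Ok(WeatherResponse { temp_c: 21.5, precip_mm: 0.0 })
    );
    assert_eq!(
        to_response(None, Some(3.2)),
        Err("temp_c is NULL for this station")
    );
    println!("mapping ok");
}
```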
Phase 3 – Gateway Cutover (Planned)
The final phase will transition Django routes to forward requests to Rust microservices. This will include:
- Canary deployments to avoid downtime.
- Metrics and alerting for observability.
- CI enforcement for Rust formatting, clippy lints, and tests across the workspace.
End state after Phase 3:
- Django acts as a gateway, routing NDVI + weather requests to Rust services.
- NDVI is fully served by Rust/Postgres.
- Weather is fully served by Rust/MySQL.
- Shared auth and throttling are enforced in Rust.
- Observability and canary rollouts ensure safe production deployment.
- CI checks formatting, linting, and tests across the workspace.
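The canary decision itself can be as simple as hashing a stable request key into buckets: the same client always lands on the same backend, and ramping the rollout is just raising a percentage. A sketch of the idea (std-only; the key choice and percentages are illustrative, and the real gateway reads the percentage from config):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Deterministically send `canary_percent` of clients to the Rust backend.
fn route_to_rust(client_id: &str, canary_percent: u64) -> bool {
    let mut h = DefaultHasher::new();
    client_id.hash(&mut h);
    h.finish() % 100 < canary_percent
}

fn main() {
    // 0% -> nobody on Rust; 100% -> everybody; sticky in between.
    assert!(!route_to_rust("farm-a", 0));
    assert!(route_to_rust("farm-a", 100));
    let first = route_to_rust("farm-a", 25);
    assert_eq!(first, route_to_rust("farm-a", 25)); // same client, same answer
    println!("canary routing ok");
}
```

Stickiness matters: if a client flapped between Django and Rust mid-session, a subtle contract difference could look like intermittent data corruption instead of a clean canary signal.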
Lessons Learned
- Contract first: Freezing contracts before migration prevents chaos.
- Shared modules are gold: Reusing auth and throttling across services eliminates duplication.
- Rust’s type system and ownership model force careful database and network design.
- Incremental migration avoids “big bang” outages.
Why Rust?
Migrating to Rust allowed me to:
- Serve high-throughput endpoints with lower latency.
- Reduce runtime errors with compile-time guarantees.
- Scale services independently while sharing critical modules like auth and throttling.
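"Compile-time guarantees" is concrete, not a slogan: where Python code might raise a `TypeError` in production when a value is `None`, the Rust equivalent does not compile until the missing case is spelled out. A tiny illustration (the `latest_ndvi` lookup is hypothetical):

```rust
/// A lookup that may legitimately find nothing.
fn latest_ndvi(field: &str) -> Option<f64> {
    if field == "north-40" { Some(0.73) } else { None }
}

fn main() {
    // `latest_ndvi(...) + 0.1` would be a compile error: Option<f64> is not f64.
    // The compiler forces the missing case to be handled explicitly:
    let msg = match latest_ndvi("unknown-field") {
        Some(v) => format!("ndvi = {v}"),
        None => "no imagery yet".to_string(),
    };
    assert_eq!(msg, "no imagery yet");
    println!("{msg}");
}
```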
Example: Rust Weather Service (axum + sqlx)
```rust
// src/main.rs
#![deny(clippy::all)]
#![forbid(unsafe_code)]

use axum::{
    extract::{Extension, Query},
    http::StatusCode,
    response::IntoResponse,
    routing::{get, post},
    Json, Router,
};
use serde::{Deserialize, Serialize};
use sqlx::mysql::MySqlPoolOptions;
use std::sync::Arc;
use tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt};

mod auth;
mod rate_limit;

#[derive(Clone)]
struct AppState {
    db: sqlx::MySqlPool,
}

#[derive(Deserialize)]
struct WeatherQuery {
    lat: f64,
    lon: f64,
    ts: Option<i64>,
}

#[derive(Serialize)]
struct WeatherResponse {
    temp_c: f32,
    precip_mm: f32,
}

/// GET /api/v1/weather/point?lat=..&lon=.. — latest reading for a coordinate.
async fn get_weather_point(
    Extension(state): Extension<Arc<AppState>>,
    Query(q): Query<WeatherQuery>,
) -> impl IntoResponse {
    // Illustrative SQL — the real query matches the frozen contract.
    let row = sqlx::query_as::<_, (f32, f32)>(
        "SELECT temp_c, precip_mm FROM readings \
         WHERE lat = ? AND lon = ? ORDER BY ts DESC LIMIT 1",
    )
    .bind(q.lat)
    .bind(q.lon)
    .fetch_optional(&state.db)
    .await;
    match row {
        Ok(Some((temp_c, precip_mm))) => {
            Json(WeatherResponse { temp_c, precip_mm }).into_response()
        }
        Ok(None) => StatusCode::NOT_FOUND.into_response(),
        Err(_) => StatusCode::INTERNAL_SERVER_ERROR.into_response(),
    }
}

/// POST /api/v1/weather/bulk — bulk lookup, elided in this excerpt.
async fn post_weather_bulk(Json(_points): Json<Vec<WeatherQuery>>) -> impl IntoResponse {
    StatusCode::NOT_IMPLEMENTED
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    tracing_subscriber::registry()
        .with(tracing_subscriber::EnvFilter::from_default_env())
        .with(tracing_subscriber::fmt::layer())
        .init();
    let db = MySqlPoolOptions::new()
        .max_connections(20)
        .connect(&std::env::var("WEATHER_DATABASE_URL")?)
        .await?;
    let state = Arc::new(AppState { db });
    let app = Router::new()
        .route("/api/v1/weather/point", get(get_weather_point))
        .route("/api/v1/weather/bulk", post(post_weather_bulk))
        .layer(Extension(state))
        .layer(auth::AuthLayer::new())
        .layer(rate_limit::RateLimitLayer::new(100));
    let addr = std::env::var("LISTEN_ADDR").unwrap_or_else(|_| "0.0.0.0:8080".into());
    tracing::info!("listening on {}", addr);
    axum::Server::bind(&addr.parse()?)
        .serve(app.into_make_service())
        .await?;
    Ok(())
}
```
Django Gateway Proxy Example
```python
# views/proxy.py
import httpx
from django.http import HttpResponse
from django.conf import settings

PROXY_TARGET = settings.PROXY_TARGET  # e.g., "http://rust-weather:8080"

# Hop-by-hop / recomputed headers that must not be forwarded verbatim
# (httpx already decompresses the body, so content-encoding/length would lie).
EXCLUDED_HEADERS = {"transfer-encoding", "content-encoding", "content-length", "connection"}


async def proxy_request(request):
    upstream = f"{PROXY_TARGET}{request.path}"
    headers = {"x-forwarded-for": request.META.get("REMOTE_ADDR", "")}
    if "Authorization" in request.headers:
        headers["Authorization"] = request.headers["Authorization"]
    async with httpx.AsyncClient(timeout=30.0) as client:
        upstream_resp = await client.request(
            request.method,
            upstream,
            headers=headers,
            params=request.GET,
            content=request.body,  # bytes property, not awaitable
        )
    return HttpResponse(
        content=upstream_resp.content,
        status=upstream_resp.status_code,
        headers={
            k: v
            for k, v in upstream_resp.headers.items()
            if k.lower() not in EXCLUDED_HEADERS
        },
    )
```
CI Example (GitHub Actions)
```yaml
name: CI
on: [push, pull_request]

jobs:
  rust-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@stable
      - run: cargo fmt --all -- --check
      - run: cargo clippy --workspace --all-targets -- -D warnings
      - run: cargo test --workspace --all-features

  python-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v4
        with:
          python-version: '3.11'
      - run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install ruff mypy bandit
      - run: ruff check .
      - run: mypy .
      - run: bandit -r .
```
This setup ensures a production-ready, highly observable Rust microservices environment while keeping Django as a stable gateway. Phase 3 will finalize the gateway cutover with canary deployment and metrics monitoring.