Choosing Rust for a backend can feel irrational at first.
If your short‑term goal is raw development speed, Rust does not win. The compiler is unforgiving, type design takes real thought, and compile cycles can test your patience. On this project, there were phases where a full rebuild felt endless—and that kind of friction is emotionally expensive when all you want is to ship something that works.
But I still chose Rust for two reasons:
- reliability under pressure
- predictable resource usage
This backend handles encryption‑related workflows, file metadata, quota enforcement, and versioned storage paths. I wanted something that stays lean, stable, and boring under load. Rust makes that possible—after you push through the painful parts.
Once the architecture settles, the payoff is real: a compact server binary that runs consistently and fails loudly when something is wrong.
A Simple Architecture, on Purpose
The guiding rule for this backend: do not outsmart yourself.
The overall structure is deliberately conservative:
- Axum router composition by feature
- Explicit state objects per route group
- Trait‑based storage abstractions
- Strict model separation between API and persistence layers
Server startup is intentionally boring:
```rust
Router::new()
    .route("/health", get(health))
    .nest("/api/auth", auth_sql_router(auth_state))
    .nest("/api/mindmaps", mindmaps_sql_router(mindmaps_state))
    .nest("/api/public", public_router(public_state.clone()))
```
Each feature owns its routes, its state, and its responsibilities. There is no global god‑object, no implicit shared context, and no “we’ll refactor this later” escape hatch.
Capability‑Driven State, Not Concrete Dependencies
Route handlers depend on what they need, not how it is implemented.
Here is the state injected into mind‑map routes:
```rust
pub struct MindMapsSqlState {
    pub db: DynSqlStore,
    pub minio: MinioClient,
    pub jwt: Arc<JwtService>,
    pub diagnostics_enabled: bool,
}
```
The important detail here is not the fields themselves—it’s that persistence is expressed through traits. Handlers care about capabilities (load, update, store) rather than concrete database implementations.
This is not about abstraction for abstraction’s sake. It keeps the surface area of each feature honest.
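As a hedged illustration of the capability idea: the real store is async and SQL-backed, but a synchronous, in-memory sketch shows the shape. All names here (MindMapStore, StoredMindMap, InMemoryStore) are assumptions for illustration, not the project's actual types:

```rust
use std::collections::HashMap;

// Hypothetical, simplified storage model for illustration only.
#[derive(Clone, Debug, PartialEq)]
pub struct StoredMindMap {
    pub id: String,
    pub owner_id: String,
}

// Handlers depend on this capability, not on a concrete database.
pub trait MindMapStore {
    fn load(&self, id: &str) -> Option<StoredMindMap>;
    fn save(&mut self, map: StoredMindMap);
}

// An in-memory implementation satisfies the same contract, which is
// exactly what makes handler-level tests cheap.
pub struct InMemoryStore {
    maps: HashMap<String, StoredMindMap>,
}

impl InMemoryStore {
    pub fn new() -> Self {
        Self { maps: HashMap::new() }
    }
}

impl MindMapStore for InMemoryStore {
    fn load(&self, id: &str) -> Option<StoredMindMap> {
        self.maps.get(id).cloned()
    }
    fn save(&mut self, map: StoredMindMap) {
        self.maps.insert(map.id.clone(), map);
    }
}

fn main() {
    let mut store = InMemoryStore::new();
    store.save(StoredMindMap { id: "m1".into(), owner_id: "u1".into() });
    assert_eq!(store.load("m1").unwrap().owner_id, "u1");
    println!("in-memory store behaves like any other MindMapStore");
}
```

Swapping the SQL store for this one in a test requires no handler changes, which is the whole point of depending on the trait.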
Fetching a Mind Map: Ownership First, Then Data
```rust
async fn get_mind_map(
    State(state): State<MindMapsSqlState>,
    user: AuthenticatedUser,
    Path(id): Path<String>,
) -> Result<Json<MindMapDetail>, AppError> {
    let map = find_owned(&state.db, &id, &user.0).await?;
    Ok(Json(to_detail(map)))
}
```
Nothing fancy here—and that is the point.
Ownership checks happen before transformation. Models coming out of storage are never leaked directly to the API layer. Every conversion is explicit.
Rust’s type system makes cutting corners uncomfortable, which is exactly what you want when dealing with encrypted artifacts.
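A minimal sketch of what such an explicit conversion looks like. The field sets are assumptions (the real StoredMindMap carries far more), but the pattern is the one used above: the storage model owns internals the API must never see, and a plain function is the only bridge between them:

```rust
// Hypothetical storage model: carries internals (e.g. the object key)
// that must never be serialized to clients.
pub struct StoredMindMap {
    pub id: String,
    pub title: String,
    pub minio_object_key: String, // internal; never exposed
}

// Hypothetical API model: only what the client is allowed to know.
pub struct MindMapDetail {
    pub id: String,
    pub title: String,
}

// The only way storage data reaches the API layer: an explicit mapping.
pub fn to_detail(map: StoredMindMap) -> MindMapDetail {
    MindMapDetail { id: map.id, title: map.title }
}

fn main() {
    let stored = StoredMindMap {
        id: "m1".into(),
        title: "roadmap".into(),
        minio_object_key: "blobs/m1".into(),
    };
    let detail = to_detail(stored);
    assert_eq!(detail.title, "roadmap");
    println!("detail for {}", detail.id);
}
```

Because MindMapDetail simply has no field for the object key, forgetting to strip it is not a mistake you can make.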
Key Distribution (Encryption Happens in the Browser)
This system performs client‑side encryption. The backend never sees plaintext content.
The backend’s job is to distribute encrypted key material safely and consistently:
```rust
async fn get_keys(
    State(state): State<AuthSqlState>,
    user: AuthenticatedUser,
) -> Result<Json<KeyBundleResponse>, AppError> {
    let db_user = state
        .db
        .load_user_by_id(&user.0)
        .await?
        .ok_or_else(|| AppError::NotFound("user not found".to_string()))?;
    Ok(Json(KeyBundleResponse {
        classical_public_key: db_user.classical_public_key,
        pq_public_key: db_user.pq_public_key,
        classical_priv_encrypted: db_user.classical_priv_encrypted,
        pq_priv_encrypted: db_user.pq_priv_encrypted,
        key_version: db_user.key_version,
    }))
}
```
Every field in this response is intentionally named and versioned. There is no “future me will remember what this blob means.”
Discipline here prevents silent cryptographic foot‑guns later.
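One concrete payoff of the explicit key_version field: a consumer can refuse bundles it does not understand instead of misinterpreting them. A hypothetical sketch, where the supported set is entirely an assumption for illustration:

```rust
// Assumed version set; in a real client these values live wherever the
// crypto code is versioned.
const SUPPORTED_KEY_VERSIONS: &[u32] = &[1, 2];

// A build that only knows versions 1 and 2 rejects anything newer,
// rather than guessing what the encrypted blobs mean.
fn is_supported_key_version(key_version: u32) -> bool {
    SUPPORTED_KEY_VERSIONS.contains(&key_version)
}

fn main() {
    assert!(is_supported_key_version(2));
    assert!(!is_supported_key_version(3));
    println!("unknown key versions are rejected, not guessed");
}
```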
Next up: the endpoint that uploads a blob to S3, combining authentication, a quota check, and versioning.
Uploading Encrypted Blobs with Quotas and Versioning
Uploading a mind‑map snapshot involves more than just dumping bytes to object storage:
```rust
async fn upload_blob(
    State(state): State<MindMapsSqlState>,
    user: AuthenticatedUser,
    Path(id): Path<String>,
    body: Bytes,
) -> Result<Json<ConfirmUploadResponse>, AppError> {
    if body.is_empty() {
        return Err(AppError::BadRequest("blob is required".to_string()));
    }

    // Ownership and quota guards run before any bytes touch object storage.
    let map = find_owned(&state.db, &id, &user.0).await?;
    let subscription_tier = load_effective_subscription_tier(&state.db, &user.0).await?;
    let current_total_bytes =
        load_storage_usage_total_bytes(&state.db, &state.minio, &user.0).await?;
    let projected_total_bytes = current_total_bytes + body.len() as i64;
    let plan_limit_bytes = subscription_tier.storage_limit_bytes();
    if projected_total_bytes > plan_limit_bytes {
        return Err(storage_quota_exceeded_error(
            &subscription_tier,
            projected_total_bytes,
            plan_limit_bytes,
        ));
    }

    let version_id = state
        .minio
        .upload_blob(&map.minio_object_key, body.to_vec())
        .await?;

    let mut version_history = map.version_history.clone();
    version_history.push(VersionSnapshot {
        version_id: version_id.clone(),
        eph_classical_public: map.eph_classical_public.clone(),
        eph_pq_ciphertext: map.eph_pq_ciphertext.clone(),
        wrapped_dek: map.wrapped_dek.clone(),
        saved_at: Utc::now(),
    });

    // If metadata persistence fails, roll back the just-uploaded object version.
    if let Err(error) = state
        .db
        .update_mind_map_upload(&id, &user.0, &version_id, version_history)
        .await
    {
        if let Err(cleanup_error) = state
            .minio
            .delete_version(&map.minio_object_key, &version_id)
            .await
        {
            tracing::error!(
                ?cleanup_error,
                map_id = %id,
                user_id = %user.0,
                version_id = %version_id,
                "failed to roll back uploaded version after metadata update error"
            );
        }
        return Err(error);
    }

    // Old versions are pruned in the background; pruning failures only warn.
    let prune_key = map.minio_object_key.clone();
    let prune_limit = map.max_versions;
    let minio = state.minio.clone();
    tokio::spawn(async move {
        if let Err(e) = minio.prune_versions(&prune_key, prune_limit).await {
            tracing::warn!("Failed to prune old versions for {prune_key}: {e}");
        }
    });

    Ok(Json(ConfirmUploadResponse { version_id }))
}
```
Before the upload even happens, several constraints are enforced:
- request body must exist
- user must own the mind map
- storage quota must not be exceeded
- projected usage must include the new blob
Only after all guards pass does the upload occur.
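Taken in isolation, the quota guard is just arithmetic over a per-tier limit. A sketch of that check, where the tiers and their limits are assumed values for illustration:

```rust
#[derive(Debug)]
enum SubscriptionTier {
    Free,
    Pro,
}

impl SubscriptionTier {
    // Assumed limits, purely for illustration.
    fn storage_limit_bytes(&self) -> i64 {
        match self {
            SubscriptionTier::Free => 100 * 1024 * 1024,      // 100 MiB
            SubscriptionTier::Pro => 10 * 1024 * 1024 * 1024, // 10 GiB
        }
    }
}

// Projected usage must include the incoming blob before the comparison,
// otherwise a single upload could blow past the limit.
fn quota_check(
    current_bytes: i64,
    incoming_bytes: i64,
    tier: &SubscriptionTier,
) -> Result<(), String> {
    let projected = current_bytes + incoming_bytes;
    let limit = tier.storage_limit_bytes();
    if projected > limit {
        Err(format!("storage quota exceeded: {projected} > {limit}"))
    } else {
        Ok(())
    }
}

fn main() {
    let tier = SubscriptionTier::Free;
    assert!(quota_check(0, 1024, &tier).is_ok());
    assert!(quota_check(tier.storage_limit_bytes(), 1, &tier).is_err());
    println!("quota guard rejects the overflow case");
}
```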
Versioning metadata is recorded atomically with cleanup logic in case persistence fails. If the database update errors out, the uploaded object version is explicitly deleted. If that fails, the error is logged loudly.
Finally, old versions are pruned asynchronously.
This is not clever code. It is defensive code.
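The rollback shape generalizes beyond this one handler: write, then record; if recording fails, undo the write on a best-effort basis and surface the original error. A generic sketch of that pattern, with closures standing in for the storage and database calls:

```rust
// Generic write-then-record pattern: the blob write succeeds first,
// the metadata record second, and a failed record triggers cleanup.
fn write_then_record<E>(
    write: impl FnOnce() -> Result<String, E>,
    record: impl FnOnce(&str) -> Result<(), E>,
    rollback: impl FnOnce(&str),
) -> Result<String, E> {
    let version_id = write()?;
    if let Err(error) = record(&version_id) {
        // Cleanup is best-effort; the original error is what the caller sees.
        rollback(&version_id);
        return Err(error);
    }
    Ok(version_id)
}

fn main() {
    use std::cell::Cell;
    let rolled_back = Cell::new(false);

    // Happy path: both steps succeed, no rollback fires.
    let ok = write_then_record::<String>(
        || Ok("v1".to_string()),
        |_| Ok(()),
        |_| rolled_back.set(true),
    );
    assert_eq!(ok.unwrap(), "v1");
    assert!(!rolled_back.get());

    // Failure path: recording fails, rollback runs, error propagates.
    let err = write_then_record::<String>(
        || Ok("v2".to_string()),
        |_| Err("db down".to_string()),
        |_| rolled_back.set(true),
    );
    assert!(err.is_err());
    assert!(rolled_back.get());
    println!("rollback fires only when recording fails");
}
```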
Tooling Helps—But Discipline Matters More
rust-analyzer in VS Code can absolutely make your life easier. It can also overwhelm you with diagnostics if your mental model is fuzzy.

The important part is discipline.
Two patterns became essential in keeping this codebase sane:
- State Segregation per Route Area. Each domain owns its state:
  - auth
  - admin
  - mind maps
  - billing
  - public endpoints

  No shared mutable mega‑state. No “just add it here for now.”
- Strongly Typed Data Contracts. Models like StoredMindMap, NewMindMap, and update‑specific structs force explicit mappings. Field drift becomes visible immediately instead of months later through corrupted data.
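To make the field-drift point concrete, here is a sketch using the struct names mentioned above, with assumed fields: because the stored model is built by a struct literal, adding a field to it breaks this constructor at compile time until the mapping is updated.

```rust
// Assumed fields, for illustration only.
pub struct NewMindMap {
    pub title: String,
    pub owner_id: String,
}

pub struct StoredMindMap {
    pub id: String,
    pub title: String,
    pub owner_id: String,
}

impl StoredMindMap {
    // Struct-literal construction turns silent field drift into a
    // compile error: add a field to StoredMindMap and this function
    // refuses to build until the new field is populated explicitly.
    pub fn from_new(id: String, new: NewMindMap) -> Self {
        StoredMindMap {
            id,
            title: new.title,
            owner_id: new.owner_id,
        }
    }
}

fn main() {
    let stored = StoredMindMap::from_new(
        "m1".into(),
        NewMindMap { title: "notes".into(), owner_id: "u1".into() },
    );
    assert_eq!(stored.id, "m1");
    assert_eq!(stored.owner_id, "u1");
    println!("explicit mapping for {}", stored.title);
}
```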
The Emotional Reality of Rust
Developing in Rust is harder, at least compared to Python.
Sometimes it feels like trying to speak while a grammar teacher interrupts every sentence. But once the code compiles and the model fits, the confidence is different.
- Fewer runtime surprises
- Clearer failure modes
- Stronger guarantees under refactoring
That is why this “crazy” idea stayed.
In the next part of this series, I’ll dive into how encryption boundaries shaped both the frontend and storage layout—and how Rust made those edges explicit rather than implicit.
PS: A Note on Cognitive Load: Rust’s Memory Model Is Not Free
It would be dishonest to talk about Rust without calling out the extra cognitive complexity its memory model imposes on the developer.
Ownership, borrowing, and lifetimes are not just compiler mechanics—they shape how you think. Every non‑trivial data flow forces you to reason about who owns what, for how long, and under which constraints. This is especially noticeable when designing API boundaries, async flows, or shared state. Even when the compiler eventually guides you to a correct solution, getting there can be mentally expensive.
In practice, this means Rust demands more upfront thinking than garbage‑collected languages. The friction is real, and it slows you down early—sometimes significantly. There were moments in this project where the hardest part was not the business logic, but expressing it in a way that satisfied ownership rules without distorting the design.
The trade‑off, however, is intentional pressure. Rust pushes complexity from runtime into design time. Once the code compiles, entire classes of memory bugs, data races, and lifetime errors simply stop being possible. The cognitive load doesn’t disappear—but it pays off by converting uncertainty into explicit structure.
Rust does not make you faster by default. It makes you more precise, and precision has a cost.
For readers less familiar with Rust, some of this complexity shows up very visibly in the code itself: frequent symbols like &, *, ', < >, ::, and others. These are not decoration—they encode rules about ownership, borrowing, lifetimes, and type relationships. Even simple operations often carry extra markers that force the developer to be explicit about how memory is accessed and shared. This makes Rust code denser and harder to read at first glance for newcomers, but it is exactly this explicitness that allows the compiler to catch entire classes of errors before the program ever runs.
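A tiny example of those markers in action: & borrows, 'a names a lifetime, < > introduces generics, and :: paths into a type's namespace. None of it changes what the function computes; it states who may read the strings and for how long.

```rust
// `&'a str`: two borrowed string slices that must both outlive the
// lifetime 'a, which is also the lifetime of the returned reference.
fn longest<'a>(a: &'a str, b: &'a str) -> &'a str {
    if a.len() >= b.len() { a } else { b }
}

fn main() {
    let left = String::from("ownership");
    let right = String::from("borrowing!");
    // `&left` lends the String without giving it away, so `left`
    // remains usable after the call.
    let winner = longest(&left, &right);
    assert_eq!(winner, "borrowing!");
    println!("{left} is still owned here; winner = {winner}");
}
```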
For me personally, the most effective way to get over Rust’s initial hurdle was Rust Essential Training by Barron Stone on LinkedIn Learning, which helped solidify ownership and borrowing concepts early on.