Hello again, everyone! It's Owen. 👋
Back in March, I posted about how I over-engineered my first project by building a hybrid Zig + TypeScript cache engine called Dunena, and then deployed it on Kubernetes because... honestly I still don't fully know why. 😅
The response was really encouraging, especially the detailed audit from Kowshik in the comments. Since then I've been heads-down building, and Dunena has grown into something I genuinely didn't expect when I first pushed that repo.
Let me catch you up.
🔢 By the Numbers
The project went from a fun experiment to a proper versioned release. We're now at v0.3.1, with a full changelog, release pipeline, Helm charts, a Python SDK, and more. The codebase has grown across multiple packages in the monorepo and I've learned an embarrassing amount along the way.
🆕 What's New
ARC Eviction Policy
The original Dunena shipped with LRU only, and I added LFU soon after. Now there's a third option: ARC (Adaptive Replacement Cache), implemented directly in the Zig core.
ARC is a hybrid — it watches both how recently and how frequently you access keys, and adapts automatically. The idea is that you don't have to manually tune whether recency or frequency matters more for your workload. The Zig implementation maintains four conceptual zones and adjusts the target balance on each cache miss.
You can pick your policy at startup:
DUNENA_EVICTION_POLICY=arc
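For intuition, here's a heavily simplified Python sketch of the bookkeeping described above. This is a toy model following the published ARC algorithm, not Dunena's Zig code: two live lists hold once-seen and frequently-seen keys, two "ghost" lists remember recent evictions from each side, and a ghost hit nudges the recency target `p` up or down.

```python
from collections import OrderedDict

class ToyARC:
    """Toy model of ARC's adaptive balancing (illustrative only).

    t1 holds keys seen once recently, t2 holds keys seen more than once;
    b1/b2 are ghost lists remembering keys recently evicted from each side.
    A hit in a ghost list adjusts p, the target size of the recency side.
    """
    def __init__(self, capacity):
        self.c = capacity
        self.p = 0  # target size for t1 (the recency side)
        self.t1, self.t2 = OrderedDict(), OrderedDict()
        self.b1, self.b2 = OrderedDict(), OrderedDict()

    def _replace(self, in_b2):
        # Evict from t1 if it exceeds its target, otherwise from t2.
        if self.t1 and (len(self.t1) > self.p or (in_b2 and len(self.t1) == self.p)):
            k, _ = self.t1.popitem(last=False)
            self.b1[k] = None
        else:
            k, _ = self.t2.popitem(last=False)
            self.b2[k] = None

    def access(self, key):
        """Returns True on a real cache hit, False otherwise."""
        if key in self.t1 or key in self.t2:   # real hit: promote to t2
            (self.t1 if key in self.t1 else self.t2).pop(key)
            self.t2[key] = None
            return True
        if key in self.b1:                     # ghost hit: favor recency, grow p
            self.p = min(self.c, self.p + max(1, len(self.b2) // max(1, len(self.b1))))
            self.b1.pop(key)
            self._replace(False)
            self.t2[key] = None
            return False
        if key in self.b2:                     # ghost hit: favor frequency, shrink p
            self.p = max(0, self.p - max(1, len(self.b1) // max(1, len(self.b2))))
            self.b2.pop(key)
            self._replace(True)
            self.t2[key] = None
            return False
        # Cold miss: insert into t1, trimming lists to ARC's size invariants.
        if len(self.t1) + len(self.b1) == self.c:
            if len(self.t1) < self.c:
                self.b1.popitem(last=False)
                self._replace(False)
            else:
                self.t1.popitem(last=False)
        else:
            total = len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2)
            if total >= self.c:
                if total >= 2 * self.c:
                    self.b2.popitem(last=False)
                self._replace(False)
        self.t1[key] = None
        return False
```

The point of the adaptation: a workload that keeps hitting `b1` (recently evicted once-seen keys) pushes `p` up so recency gets more room, and `b2` hits pull it back toward frequency, with no manual tuning.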
Atomic Operations: INCR, DECR, and CAS
One thing I kept running into was needing to update a numeric counter without doing a GET, increment, then SET dance. So I added native atomic increment and decrement to the Zig core. It parses the stored value as an integer, adds the delta, and writes it back — all inside Zig with no round trips.
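To see why this matters, here's a toy Python sketch of the server-side semantics (illustrative only, not Dunena's Zig code): the parse-add-write sequence happens under a single lock, so concurrent clients never lose updates the way a GET/increment/SET cycle can.

```python
import threading

class ToyCounterCache:
    """Toy sketch of atomic INCR/DECR semantics: parse the stored value as an
    integer, apply the delta, and write it back, all under one lock."""
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def incr(self, key, delta=1):
        with self._lock:
            current = int(self._data.get(key, "0"))  # missing keys start at 0
            self._data[key] = str(current + delta)
            return current + delta

    def decr(self, key, delta=1):
        return self.incr(key, -delta)
```

With four threads each incrementing a counter 1000 times, the result is exactly 4000, something a client-side read-modify-write loop can't guarantee.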
I also added Compare-and-Swap (CAS) with version tracking. Every cache entry now carries a version number that increments on write. If you want to update a key only when you're sure it hasn't changed since you last read it, you can use CAS:
curl -X PUT http://localhost:3000/cache/mykey/cas \
-H "Content-Type: application/json" \
-d '{"value": "new-value", "expectedVersion": 3}'
If the version doesn't match, you get a 409 instead of silently overwriting someone else's change. It's a small thing but it opens the door to building coordination patterns on top of Dunena.
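The versioned-write rule is simple enough to sketch in a few lines of Python. This is a toy model of the semantics described above, not Dunena's implementation: every write bumps a per-key version, and a CAS write succeeds only if the caller's expected version is still current.

```python
class CasError(Exception):
    """Version mismatch (the HTTP API signals this with a 409)."""

class ToyVersionedCache:
    """Toy sketch of version-tracked entries with compare-and-swap."""
    def __init__(self):
        self._data = {}  # key -> (value, version)

    def set(self, key, value):
        _, version = self._data.get(key, (None, 0))
        self._data[key] = (value, version + 1)  # every write bumps the version
        return version + 1

    def get(self, key):
        return self._data[key]  # (value, version)

    def cas(self, key, value, expected_version):
        _, version = self._data.get(key, (None, 0))
        if version != expected_version:
            raise CasError(f"expected version {expected_version}, found {version}")
        self._data[key] = (value, version + 1)
        return version + 1
```

Two writers holding the same stale version can't both win: the second CAS fails loudly instead of silently clobbering the first.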
Distributed Locks
Speaking of coordination — there's now a full distributed lock service. Locks are stored in the cache with TTL for automatic release, so a crashed process won't hold a lock forever. You can acquire, release, extend, and force-release locks via the API or CLI:
dunena lock-acquire my-job worker-1 30000
dunena lock-release my-job worker-1
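The TTL-based semantics can be sketched as a toy Python model (illustrative, not Dunena's code): a lock is just an entry holding the owner's id with an expiry, acquisition fails while an unexpired lock is held by someone else, and only the owner can release or extend it. The injectable clock is purely for demonstration.

```python
import time

class ToyLockService:
    """Toy sketch of TTL-backed distributed locks: a crashed owner's lock
    simply expires instead of being held forever."""
    def __init__(self, clock=time.monotonic):
        self._locks = {}   # name -> (owner, expires_at)
        self._clock = clock

    def acquire(self, name, owner, ttl_ms):
        entry = self._locks.get(name)
        if entry is not None and entry[1] > self._clock():
            return entry[0] == owner          # held: re-entrant for the owner only
        self._locks[name] = (owner, self._clock() + ttl_ms / 1000)
        return True

    def release(self, name, owner):
        entry = self._locks.get(name)
        if entry is None or entry[0] != owner:
            return False                      # only the owner may release
        del self._locks[name]
        return True

    def extend(self, name, owner, ttl_ms):
        entry = self._locks.get(name)
        if entry is None or entry[0] != owner or entry[1] <= self._clock():
            return False                      # can't extend a lost or expired lock
        self._locks[name] = (owner, self._clock() + ttl_ms / 1000)
        return True
```

If worker-1 dies mid-job, worker-2 just waits out the TTL and acquires the lock cleanly.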
Cache Replication
Dunena now supports write-through replication to secondary instances. When you write to the primary, it fans out the mutation to registered replicas — either synchronously (waiting for confirmation) or asynchronously (fire and forget). This is useful if you want a hot standby or want to keep caches warm across regions.
I want to be upfront: this is not clustering. It's intentionally simple. Dunena still uses SQLite and is still a single-writer system. But for the most common "I want redundancy" use case, this works.
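The sync/async distinction can be sketched with a toy fan-out in Python (illustrative only, replicas stand in as plain dicts): synchronous mode applies the write to every replica before returning, while asynchronous mode queues it for a background worker.

```python
import queue
import threading

class ToyReplicatingCache:
    """Toy sketch of write-through fan-out to replicas."""
    def __init__(self, replicas, synchronous=True):
        self._data = {}
        self._replicas = replicas           # dict-like replica stores
        self._synchronous = synchronous
        self._queue = queue.Queue()
        if not synchronous:
            threading.Thread(target=self._drain, daemon=True).start()

    def set(self, key, value):
        self._data[key] = value
        if self._synchronous:
            for replica in self._replicas:  # wait for every replica
                replica[key] = value
        else:
            self._queue.put((key, value))   # fire and forget

    def _drain(self):
        while True:
            key, value = self._queue.get()
            for replica in self._replicas:
                replica[key] = value
            self._queue.task_done()

    def flush(self):
        self._queue.join()                  # test helper: wait for async delivery
```

The trade-off is the usual one: synchronous fan-out gives you confirmation at the cost of write latency, async keeps writes fast but a replica can briefly lag the primary.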
Expanded Database Proxy Connectors
The database proxy originally only supported PostgreSQL, MySQL, and HTTP APIs. It now also supports MongoDB, Redis, and Elasticsearch as backend connectors — all using dynamic imports so they're optional peer dependencies. If you don't install mongodb, the connector just isn't available; nothing breaks.
The pattern is the same regardless of backend: register a connector, then query through it with automatic cache-aside behavior.
dunena db-proxy-register mongo-users mongodb mongodb://localhost/mydb
dunena db-proxy-query mongo-users "users.find"
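Cache-aside itself is backend-agnostic, which is why the same pattern works for MongoDB, Redis, and Elasticsearch alike. Here's a toy Python sketch of the behavior (illustrative, not the proxy's code; `fake_backend` below is a stand-in for a real connector):

```python
class ToyDbProxy:
    """Toy sketch of cache-aside: check the cache first; on a miss, run the
    backend query and store the result for next time."""
    def __init__(self, backend_query):
        self._cache = {}
        self._backend_query = backend_query  # e.g. a MongoDB find, a SQL SELECT
        self.hits = 0
        self.misses = 0

    def query(self, q):
        if q in self._cache:
            self.hits += 1
            return self._cache[q]
        self.misses += 1
        result = self._backend_query(q)
        self._cache[q] = result
        return result
```

Running the same query twice touches the backend once; every repeat is served from the cache.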
Python SDK
This was a big one. There's now an official Python SDK for Dunena at sdks/python/, installable as the dunena package. It wraps the full REST API with both sync and async clients (powered by httpx), typed dataclasses for responses, and a clean exception hierarchy.
from dunena import Dunena

with Dunena("http://localhost:3000", token="optional") as client:
    client.set("hello", "world", ttl=60000)
    print(client.get("hello"))  # "world"

    # Durable storage
    client.db.set("user:42", '{"name":"Alice"}', tags=["users"])
    entry = client.db.get("user:42")
The async version works the same way with AsyncDunena and async with. Go and Rust SDKs are still on the backlog.
Health Check Enhancements (responding to Kowshik's feedback!)
One of the things Kowshik pointed out was that production health checks need more than just { "status": "ok" }. The /health endpoint now returns structured diagnostics:
{
  "status": "healthy",
  "version": "0.3.1",
  "uptime": 3600,
  "checks": {
    "zigCore": { "status": "up", "latencyMs": 0.2 },
    "sqlite": { "status": "up", "latencyMs": 1.1 },
    "memory": { "heapUsedMB": 45, "rssMB": 180 },
    "cache": { "entries": 5421, "hitRate": 0.87 }
  }
}
There are also two new probe endpoints that Kubernetes can use properly:
- GET /health/live: just returns 200, confirming the process is alive
- GET /health/ready: checks that SQLite is writable before reporting ready
The k8s manifests and Helm chart now point probes at these instead of the generic /health.
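As a rough sketch of what that probe wiring looks like in a pod spec (hypothetical fragment; the port and timing values are assumptions, not copied from the chart):

```yaml
livenessProbe:
  httpGet:
    path: /health/live
    port: 3000
  periodSeconds: 10
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /health/ready
    port: 3000
  periodSeconds: 5
```

The split matters: a liveness failure restarts the pod, while a readiness failure just takes it out of the Service's endpoints until SQLite is writable again.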
k6 Benchmark Suite
Kowshik also asked about load testing. The honest answer in March was "I don't have one." That's fixed now. There's a proper k6 benchmark suite at scripts/bench/ with scripts for basic CRUD throughput, batch operations, and a realistic 80/20 read/write mixed workload.
The thresholds are enforced too: p95 < 10ms for GETs, p95 < 15ms for SETs, and less than 1% error rate. These run as part of the release preparation workflow.
Helm Chart and Terraform
The deployment story got a lot better. There's now a full Helm chart at deploy/helm/dunena/ that exposes all config as values, handles PVC creation, wires up the new health probes, and manages auth token secrets properly.
There's also a Terraform module for deploying to AWS ECS Fargate with EFS for persistent SQLite storage. With a single Fargate task writing to the volume, SQLite's single-writer constraint is respected, while EFS keeps the storage detached from the container lifecycle.
Next.js Documentation
The documentation site (previously a static HTML file served from the server) has been migrated to Next.js with static export. It now has a proper sidebar, dark/light theme, mobile layout, search, and an interactive API explorer powered by Scalar.
🤔 What I've Learned Since March
AI agents got better at the FFI boundary, but still need watching. I still use Claude and Gemini heavily, but I've gotten better at catching when generated code changes a Zig type without updating the corresponding FFI declaration in TypeScript. The rule I now follow without exception: any change to exports.zig gets a diff against ffi.ts before it gets committed. The CI pipeline type-checks this, but catching it manually first saves a confusing crash later.
ReleaseSafe is the right default. This came up in Kowshik's comment and I want to reinforce it. Building Zig with -Doptimize=ReleaseSafe (which is what bun run build:zig does) keeps bounds checking active. Panics become clean process aborts rather than silent undefined behavior. The performance cost is small and worth it.
SQLite is fine, actually. I got a bit anxious after all the comments about scaling SQLite, but for what Dunena is — a single-writer cache layer — SQLite with WAL mode is genuinely excellent. The replication feature lets you propagate writes to replicas. For horizontal read scaling, you disable the DB layer and use only the in-memory Zig cache per instance. It's not Postgres, but it doesn't need to be.
🔭 What's Still Coming
There are a few things still on the backlog that I want to be honest about:
- Redis RESP protocol adapter — so Dunena can act as a drop-in replacement for Redis clients without any code changes on the client side. This is a big one and still in design phase.
- GraphQL API endpoint — for batched queries, especially useful with the database proxy layer.
- OpenTelemetry tracing — alongside the existing Prometheus metrics.
- Go and Rust SDKs — the Python SDK was first but there's more to do.
- WebSocket integration tests — I'm not going to pretend these are fully covered yet.
🙏 Thanks
Genuinely, thank you to everyone who starred the repo and reached out. Building in public is kind of terrifying, but the feedback has been really useful and pushed the project forward in ways I wouldn't have figured out alone.
🔗 GitHub Repository
🔗 Documentation
If you've got feedback, ideas, or want to contribute — PRs and issues are very welcome. And if you've built something with Dunena or tried the Python SDK, I'd genuinely love to hear about it in the comments. 👇