Building Ferrous Network exposed the limits of general-purpose databases. Here's why I'm writing GraniteDB from scratch.
Hey folks, Altug here — founder of Ferrous Network, a Rust-based Bitcoin-like L1 blockchain that's live on testnet. Today I want to share a very specific dev story: how wrestling with RocksDB in production led me to start GraniteDB, a correctness-first storage engine designed specifically for blockchain state.
The Pain Point That Started It All
When building Ferrous, I needed persistent storage for:
- UTXO set
- Block index
- Chain state
- Mempool
RocksDB was the obvious choice. Battle-tested, performs great under pressure, used everywhere from Bitcoin to Kafka. Setup was straightforward:
cargo add rocksdb
But then reality hit. The load times drove me insane.
On a fresh build I'd sit there watching compilation bars and Clang deps resolving; then on startup (especially during IBD - Initial Block Download), RocksDB initializing... for minutes. On reasonably beefy hardware. Every. Single. Time.
This wasn't just "slow." It was uncontrollable. A general-purpose C++ behemoth sitting in my carefully crafted, zero-warnings Rust node. I couldn't debug it. Couldn't easily audit it. Couldn't make it behave predictably for blockchain workloads.
The Core Insight
RocksDB is amazing at what it does: high-throughput, general-purpose key-value storage.
But blockchain nodes don't need "general purpose." They need something much more specific:
✅ Deterministic crash recovery
✅ Predictable snapshot behavior
✅ Fast point reads for account/state lookups
✅ Safe batch writes for block execution
✅ Lightning-fast startup (no C++ deps hell)
✅ Actually auditable code
❌ Peak TPS for ad serving
❌ Every possible table format optimization
❌ Multi-writer concurrency (yet)
Enter GraniteDB
So I started writing GraniteDB — not to "beat RocksDB," but to solve my specific pain:
GraniteDB = Rust storage engine
+ Blockchain state semantics
+ Correctness > Throughput (initially)
+ No C++ interop nightmares
What I've Spec'd So Far
- Crystal-clear API contract
pub struct DB {
    // put(key, value)     — creates a newer version
    // delete(key)         — writes a tombstone
    // snapshot()          — sequence-based isolation
    // write_batch(batch)  — atomic, crash-safe
}
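To make those semantics concrete, here is a toy in-memory model of the contract (names like `ModelDb` and `get_at` are illustrative, not GraniteDB's actual API): every write bumps a sequence number, and a snapshot is just a pinned sequence that hides anything newer.

```rust
use std::collections::BTreeMap;

// Toy model: versions are keyed by (key, seq); a tombstone is a None value.
// A snapshot is simply the sequence number at the moment it was taken.
#[derive(Default)]
pub struct ModelDb {
    versions: BTreeMap<(Vec<u8>, u64), Option<Vec<u8>>>,
    seq: u64,
}

impl ModelDb {
    pub fn put(&mut self, key: &[u8], value: &[u8]) {
        self.seq += 1;
        self.versions.insert((key.to_vec(), self.seq), Some(value.to_vec()));
    }

    pub fn delete(&mut self, key: &[u8]) {
        self.seq += 1;
        self.versions.insert((key.to_vec(), self.seq), None); // tombstone
    }

    pub fn write_batch(&mut self, ops: &[(&[u8], Option<&[u8]>)]) {
        // The real engine would append the whole batch to the WAL first;
        // here we just apply every op under consecutive sequence numbers.
        for (k, v) in ops {
            match v {
                Some(val) => self.put(k, val),
                None => self.delete(k),
            }
        }
    }

    pub fn snapshot(&self) -> u64 {
        self.seq
    }

    pub fn get_at(&self, key: &[u8], snapshot_seq: u64) -> Option<Vec<u8>> {
        // Newest version at or below the snapshot's sequence wins.
        self.versions
            .range((key.to_vec(), 0)..=(key.to_vec(), snapshot_seq))
            .next_back()
            .and_then(|(_, v)| v.clone())
    }
}
```

A snapshot taken before a write keeps returning the old value, which is exactly the isolation a node needs while serving reads during block execution.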
- Production-grade WAL format
32KB fixed blocks + CRC32C fragments
FULL/FIRST/MIDDLE/LAST record types
"truncate at first corruption" recovery
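A sketch of what that log layout implies (illustrative code, not the final wire format): a bitwise CRC32C, and a fragmenter that splits a payload into FULL/FIRST/MIDDLE/LAST pieces that never straddle a 32 KiB block boundary. The 7-byte header size assumes a RocksDB-style crc(4) + len(2) + type(1) header.

```rust
const BLOCK_SIZE: usize = 32 * 1024;
const HEADER_SIZE: usize = 7; // crc32c(4) + len(2) + type(1), assumed layout

#[derive(Debug, Clone, Copy, PartialEq)]
pub enum RecordType { Full, First, Middle, Last }

/// Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78.
pub fn crc32c(data: &[u8]) -> u32 {
    let mut crc = !0u32;
    for &b in data {
        crc ^= b as u32;
        for _ in 0..8 {
            crc = if crc & 1 == 1 { (crc >> 1) ^ 0x82F6_3B78 } else { crc >> 1 };
        }
    }
    !crc
}

/// Split a payload into fragments, given the space left in the current
/// 32 KiB block. Each fragment needs room for its own header.
pub fn fragment(payload: &[u8], mut block_left: usize) -> Vec<(RecordType, Vec<u8>)> {
    let mut out = Vec::new();
    let mut rest = payload;
    let mut first = true;
    loop {
        if block_left < HEADER_SIZE {
            // Not even a header fits: zero-pad the block tail, start fresh.
            block_left = BLOCK_SIZE;
        }
        let avail = block_left - HEADER_SIZE;
        let take = rest.len().min(avail);
        let done = take == rest.len();
        let ty = match (first, done) {
            (true, true) => RecordType::Full,
            (true, false) => RecordType::First,
            (false, false) => RecordType::Middle,
            (false, true) => RecordType::Last,
        };
        out.push((ty, rest[..take].to_vec()));
        if done {
            return out;
        }
        rest = &rest[take..];
        block_left = BLOCK_SIZE;
        first = false;
    }
}
```

Because no fragment crosses a block boundary, recovery can scan block by block and stop cleanly at the first checksum mismatch.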
- Single-writer threading model
Writer owns: seq assignment, WAL, memtable
Concurrent readers: short-lived guards
"No races by design"
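A minimal sketch of that model, assuming an mpsc channel feeding one writer thread that alone owns the sequence counter (all names here are mine, not GraniteDB's):

```rust
use std::collections::BTreeMap;
use std::sync::{mpsc, Arc, RwLock};
use std::thread;

// Hypothetical shape: all mutations funnel through one channel into a
// single writer thread; readers take short-lived read guards.
type Memtable = BTreeMap<Vec<u8>, (u64, Vec<u8>)>; // key -> (seq, value)

enum Cmd {
    Put(Vec<u8>, Vec<u8>),
    Shutdown,
}

fn spawn_writer(mem: Arc<RwLock<Memtable>>) -> (mpsc::Sender<Cmd>, thread::JoinHandle<u64>) {
    let (tx, rx) = mpsc::channel();
    let handle = thread::spawn(move || {
        let mut seq = 0u64; // owned exclusively by this thread: no races by design
        for cmd in rx {
            match cmd {
                Cmd::Put(k, v) => {
                    seq += 1;
                    // The real engine would append to the WAL here, before
                    // the memtable insert becomes visible to readers.
                    mem.write().unwrap().insert(k, (seq, v));
                }
                Cmd::Shutdown => break,
            }
        }
        seq // last sequence assigned
    });
    (tx, handle)
}
```

Since only one thread ever increments `seq` or writes the WAL, ordering questions disappear by construction rather than by locking discipline.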
- Explicit invariants everywhere
- No partial batches visible after crash
- Manifest = single source of SST truth
- Deterministic replay from WAL+Manifest
- Snapshot reads see consistent sequence
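The first and third invariants compose nicely. Here is a toy replay loop (a stand-in boolean replaces the real CRC check) showing why a crash mid-batch never exposes partial state:

```rust
use std::collections::BTreeMap;

// Toy model: the log is a sequence of batches, each guarded by a checksum.
// Recovery applies whole batches in order and truncates at the first
// corrupt one, so a partially-written batch is never visible.
#[derive(Clone)]
struct LogBatch {
    ops: Vec<(Vec<u8>, Option<Vec<u8>>)>, // None = delete
    checksum_ok: bool,                    // stand-in for a real CRC32C check
}

fn replay(log: &[LogBatch]) -> (BTreeMap<Vec<u8>, Vec<u8>>, usize) {
    let mut state = BTreeMap::new();
    let mut applied = 0;
    for batch in log {
        if !batch.checksum_ok {
            break; // truncate at first corruption; ignore everything after
        }
        for (k, v) in &batch.ops {
            match v {
                Some(val) => { state.insert(k.clone(), val.clone()); }
                None => { state.remove(k); }
            }
        }
        applied += 1;
    }
    (state, applied)
}
```

Note that a valid batch *after* the corrupt one is also discarded: replaying it would make recovery depend on where the corruption happened, which breaks determinism.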
Why This Actually Works for Blockchains
Most blockchain state workloads are embarrassingly parallel for reads, but need sequential, crash-safe writes:
Block execution → WriteBatch → WAL → ACK
Account lookup → Memtable → L0 → L1 (point read)
UTXO scan → Iterator with snapshot
Pruning → Background compaction (Phase 2)
GraniteDB targets exactly this shape.
The Honest Tradeoffs
I'm not promising "10x faster than RocksDB":
✨ GraniteDB wins:
• Startup time (no C++ deps)
• Deterministic recovery
• Predictable snapshots
• Full auditability
• Zero vendor complexity
⚡ RocksDB still wins:
• Raw throughput
• Years of battle scars
• Every optimization known to humankind
What's Next
Phase A (Now): Correctness core + crash tests
- WAL/SST/Manifest formats locked
- Single memtable + sync flush
- Property tests vs reference model
Phase B: Immutable memtable queue + async flush
Phase C: L0→L1 compaction
Phase D: Background workers
Phase E: Reader optimizations
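Phase A's "property tests vs reference model" idea, in miniature: drive a deterministic pseudo-random workload through both a stand-in engine (here, an append-only log with last-write-wins reads; the real target would be GraniteDB) and a trivially correct `BTreeMap`, then assert they agree after every operation.

```rust
use std::collections::BTreeMap;

// Stand-in "engine": an append-only log where the newest entry for a key wins.
struct LogEngine {
    log: Vec<(Vec<u8>, Option<Vec<u8>>)>, // (key, Some(value) or tombstone)
}

impl LogEngine {
    fn new() -> Self { Self { log: Vec::new() } }
    fn put(&mut self, k: &[u8], v: &[u8]) { self.log.push((k.to_vec(), Some(v.to_vec()))); }
    fn delete(&mut self, k: &[u8]) { self.log.push((k.to_vec(), None)); }
    fn get(&self, k: &[u8]) -> Option<&[u8]> {
        // Scan newest-first; the most recent entry for the key wins.
        self.log.iter().rev().find(|(key, _)| key.as_slice() == k)
            .and_then(|(_, v)| v.as_deref())
    }
}

/// Tiny deterministic LCG so the "random" workload is reproducible.
fn lcg(state: &mut u64) -> u64 {
    *state = state.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
    *state >> 33
}

fn run_property_check(ops: usize, seed: u64) -> bool {
    let mut rng = seed;
    let mut engine = LogEngine::new();
    let mut model: BTreeMap<Vec<u8>, Vec<u8>> = BTreeMap::new();
    for _ in 0..ops {
        let key = vec![(lcg(&mut rng) % 16) as u8]; // tiny key space forces collisions
        if lcg(&mut rng) % 4 == 0 {
            engine.delete(&key);
            model.remove(&key);
        } else {
            let val = vec![(lcg(&mut rng) % 256) as u8];
            engine.put(&key, &val);
            model.insert(key.clone(), val);
        }
        // Invariant: engine and model agree on every key after every op.
        for k in 0..16u8 {
            if engine.get(&[k]) != model.get(&vec![k]).map(|v| v.as_slice()) {
                return false;
            }
        }
    }
    true
}
```

The value of this shape is that the model is too simple to be wrong, so any divergence points at the engine; swapping `LogEngine` for the real DB (plus simulated crashes between ops) is exactly what the Phase A crash tests would do.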
Then Ferrous integration. If it works there, it'll work anywhere.
Closing Thought
Sometimes the best projects come from personal pain. I got tired of waiting for RocksDB to load in my own node. So I'm building something that starts instantly, behaves predictably, and that I can actually reason about when my chain crashes at 3 AM.
GraniteDB won't be everything to everyone. But for blockchain state? It just might be exactly right.
Follow progress: GraniteDB specs | Ferrous Network