Matias Affolter

I Benchmarked LacertaDB & PouchDB — Here's What Happened

LacertaDB vs PouchDB: a head-to-head performance comparison of two browser-native document databases, and why your serializer matters more than you think.

The Browser Database Problem Nobody Talks About

Every browser database eventually hits the same wall: serialization.

Your data has to be transformed before it hits IndexedDB, and transformed back when it comes out. Most libraries use JSON. Some use CBOR or MessagePack. The choice of serializer silently determines your throughput ceiling — and most developers never question it.

I built LacertaDB to be the fastest browser-native document database possible. Along the way, I had to build its serializer from scratch too, because nothing on npm was fast enough or complete enough. Then I ran it against PouchDB — the established king of browser databases — to see where things actually stand.

Here are the numbers. No cherry-picking, no synthetic micro-ops. Real document CRUD at scale.


The Benchmark Setup

Both databases ran identical workloads in the same browser tab, same machine, same IndexedDB backend. The test: 5,000 documents with ~200-byte payloads, covering the five operations that matter in a real app:

  • Bulk Write — insert all documents in a single batch
  • Read All — retrieve every document
  • Query (filter) — find documents matching a field condition
  • Bulk Update — modify every document
  • Delete All — remove everything

LacertaDB uses `batchAdd` / `getAll` / `query` / `batchUpdate` / `clear`.
PouchDB uses `bulkDocs` / `allDocs` / `find` (with `pouchdb-find`) / `bulkDocs` / `bulkDocs` with `_deleted`.

Each side uses its library's recommended bulk API, so the comparison is fair in both directions.
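The timing loop itself is nothing exotic. Here is a minimal sketch of that kind of harness (the `time` helper and the `users.batchAdd` call in the comment are illustrative, not the actual benchmark code):

```javascript
// Minimal timing sketch (illustrative, not the actual benchmark harness).
// performance.now() is a global in browsers and in Node.js 16+.
async function time(label, fn) {
  const start = performance.now();
  await fn();
  const ms = performance.now() - start;
  console.log(`${label}: ${ms.toFixed(0)} ms`);
  return ms;
}

// Usage sketch, e.g. for the bulk-write test (hypothetical docs array):
// await time('Bulk Write', () => users.batchAdd(docs));
```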


The Results

| Operation | LacertaDB | PouchDB 7.3 | Speedup |
| --- | --- | --- | --- |
| Bulk Write | 532 ms | 2,664 ms | 5.0× |
| Read All | 146 ms | 356 ms | 2.4× |
| Query | 124 ms | 399 ms | 3.2× |
| Bulk Update | 101 ms | 2,708 ms | 26.8× |
| Delete All | 239 ms | 2,573 ms | 10.8× |

The gap ranges from 2.4× to nearly 27×, depending on the operation. Writes and deletes show the biggest difference — and those are precisely the operations where serialization overhead dominates.

Throughput in Context

At 5,000 documents:

  • LacertaDB sustained ~9,400 writes/sec and ~34,000 reads/sec
  • PouchDB managed ~1,900 writes/sec and ~14,000 reads/sec

This isn't a marginal difference. For offline-first apps syncing hundreds of records, or Web3 dApps caching blockchain state locally, the gap between "feels instant" and "shows a spinner" lives right in this range.
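For the curious, the throughput figures fall straight out of the timing table (assuming throughput = documents / elapsed seconds):

```javascript
// Sanity check: throughput = documents / elapsed seconds.
const docs = 5000;
const opsPerSec = (ms) => Math.round(docs / (ms / 1000));

console.log(opsPerSec(532));   // 9398  -> the "~9,400 writes/sec" figure
console.log(opsPerSec(146));   // 34247 -> "~34,000 reads/sec"
console.log(opsPerSec(2664));  // 1877  -> PouchDB's "~1,900 writes/sec"
```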


Why the Gap? It's the Serializer.

PouchDB stores documents as JSON. That's fine for simple objects, but JSON has well-known limitations: no Date, no undefined, no Map, no Set, no RegExp, no typed arrays, no binary data. And `JSON.stringify`/`JSON.parse`, fast as the native implementations are, still have to traverse every property, escape every string, and produce a text representation that's larger than the source data.
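The lossiness is easy to demonstrate in a few lines of plain JavaScript:

```javascript
// What a JSON round-trip silently does to rich types.
const doc = {
  joined: new Date('2024-01-01T00:00:00Z'),
  nickname: undefined,
  prefs: new Map([['theme', 'dark']]),
  tags: new Set(['admin']),
};

const roundTripped = JSON.parse(JSON.stringify(doc));

console.log(typeof roundTripped.joined);  // 'string' -- the Date is gone
console.log('nickname' in roundTripped);  // false -- undefined was stripped
console.log(roundTripped.prefs);          // {} -- Map collapsed to an empty object
console.log(roundTripped.tags);           // {} -- Set collapsed too
```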

LacertaDB uses TurboSerial, a binary serializer I built specifically for this problem. The design goals were:

  1. Serialize everything JavaScript can hold — not just JSON-safe types
  2. Produce smaller output — binary, no delimiters, no escaping
  3. Be faster than the alternatives — including CBOR and MessagePack

TurboSerial vs MessagePack vs CBOR

I ran TurboSerial against the two established binary serialization formats. The benchmark measures both throughput (ops/sec) and output size across three payload profiles:

| Test Case | MessagePack | CBOR | TurboSerial |
| --- | --- | --- | --- |
| Small Data (API response) | 20,101 ops/s · 34B | 112,360 ops/s · 36B | 176,991 ops/s · 62B |
| Medium Data (array of objects) | 334 ops/s · 4,093B | 3,460 ops/s · 4,168B | 5,587 ops/s · 4,809B |
| Large Data (TypedArray, 0.2 MB) | 711 ops/s · 200,005B | 341 ops/s · 200,005B | 2,703 ops/s · 200,023B |

Across payload sizes, TurboSerial is consistently faster than both, ranging from roughly 1.6× (vs. CBOR on small payloads) to nearly 17× (vs. MessagePack on the medium payload). The output is slightly larger on small payloads because it encodes richer type metadata, but the throughput advantage more than compensates, especially at the medium and large data sizes that matter in a database context.
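If you want to reproduce this kind of measurement, the shape of the harness is simple. Here's a sketch that uses `JSON.stringify` as the one dependency-free baseline; the `bench` helper is illustrative, not the actual benchmark code, and any encoder with a `(value) => bytes` shape can be swapped in:

```javascript
// Illustrative serializer micro-benchmark: ops/sec plus output size.
function bench(name, encode, payload, iterations = 10000) {
  const output = encode(payload);           // one encode to capture output size
  const start = performance.now();
  for (let i = 0; i < iterations; i++) encode(payload);
  const seconds = (performance.now() - start) / 1000;
  const opsPerSec = Math.round(iterations / seconds);
  console.log(`${name}: ${opsPerSec} ops/s · ${output.length}B`);
  return { opsPerSec, size: output.length };
}

const payload = { id: 1, name: 'Alice', active: true };
bench('JSON', (v) => JSON.stringify(v), payload);
```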

Type Coverage: Where JSON, CBOR, and TurboSerial Diverge

This is the part that rarely makes it into benchmark posts, but it's what actually matters when you're building a real app. Here's what each format can serialize natively:

| Type | JSON | MessagePack | CBOR | TurboSerial |
| --- | --- | --- | --- | --- |
| Strings, Numbers, Booleans, null | ✅ | ✅ | ✅ | ✅ |
| Nested Objects / Arrays | ✅ | ✅ | ✅ | ✅ |
| undefined | ❌ | ❌ | ✅ | ✅ |
| Date | ❌ | ✅ (ext) | ✅ (tag) | ✅ |
| Map / Set | ❌ | ❌ | ❌ | ✅ |
| RegExp | ❌ | ❌ | ❌ | ✅ |
| BigInt | ❌ | ❌ | ✅ (tag) | ✅ |
| ArrayBuffer / TypedArrays | ❌ | ✅ (bin) | ✅ (tag) | ✅ |
| Int8Array through Float64Array | ❌ | ❌ | ✅ (tag) | ✅ |
| Error objects | ❌ | ❌ | ❌ | ✅ |
| URL | ❌ | ❌ | ❌ | ✅ |
| Sparse Arrays | ❌ | ❌ | ❌ | ✅ |
| NaN, Infinity, -Infinity | ❌ | ✅ | ✅ | ✅ |
| -0 (negative zero) | ❌ | ✅ | ✅ | ✅ |

JSON covers about 7 types. CBOR and MessagePack stretch to ~12 with tags and binary extensions. TurboSerial natively handles 20+ JavaScript types — including the edge cases that silently break your data when you use JSON (like undefined being stripped from objects, or Date becoming a string you have to manually parse back).

When your database serializer supports Map, Set, RegExp, and typed arrays out of the box, you stop writing workaround code. Your documents go in, and they come out identical. No reviver functions, no manual reconstruction.
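For contrast, here is the kind of reviver boilerplate you end up writing when the serializer is JSON (a hypothetical `dateReviver`, not part of either library):

```javascript
// The workaround code a richer serializer makes unnecessary:
// re-hydrating Date values that JSON flattened into ISO strings.
const ISO_DATE = /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?Z$/;

function dateReviver(key, value) {
  return typeof value === 'string' && ISO_DATE.test(value)
    ? new Date(value)
    : value;
}

const stored = JSON.stringify({ name: 'Alice', joined: new Date(0) });
const revived = JSON.parse(stored, dateReviver);

console.log(revived.joined instanceof Date); // true, but only because we wrote the reviver
```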


Beyond Raw Speed: What LacertaDB Actually Offers

Performance is one axis. Here's what the full picture looks like against PouchDB:

| Feature | LacertaDB | PouchDB |
| --- | --- | --- |
| Bundle size (minified) | ~110 KB | ~200 KB |
| Serialization | TurboSerial (binary) | JSON |
| Storage backend | IndexedDB + OPFS + localStorage | IndexedDB (+ adapters) |
| Query syntax | MongoDB-style, 20+ operators | Mango (pouchdb-find plugin) |
| Index types | B-Tree, Hash, Full-text, Geo | B-Tree equivalent |
| Encryption | AES-GCM-256, PBKDF2 Master Key Wrap | ❌ (requires plugin) |
| Aggregation pipeline | $match, $group, $sort, $lookup | ❌ |
| Geospatial queries | QuadTree, $near, $within | ❌ |
| Full-text search | Built-in with CJK support | ❌ (requires plugin) |
| Caching strategies | LRU / LFU / TTL per collection | ❌ |
| Binary attachments | OPFS-backed | Blob-based |
| CouchDB sync | ❌ | ✅ |
| Node.js / server-side | ❌ (browser-only) | ✅ |

The honest tradeoff: PouchDB has CouchDB sync and server-side support. If you need those, PouchDB is the right tool. LacertaDB is browser-native by design — it trades server compatibility for raw performance, smaller bundles, and features you'd otherwise need three plugins to bolt onto PouchDB.


When to Reach for LacertaDB

LacertaDB was built for a specific class of application:

  • Offline-first PWAs that need to cache and query significant amounts of data locally without UI jank
  • Web3 dApps that store blockchain state, wallet keys (with real encryption), or NFT metadata client-side
  • Data-heavy SPAs where the difference between 100ms and 2,700ms on a bulk write is the difference between feeling native and feeling broken
  • Apps storing rich JavaScript types — if your data model uses Map, Set, Date, typed arrays, or BigInt, you'll stop fighting your serializer

If you need CouchDB replication or server-side rendering, PouchDB remains excellent. But if your database lives in the browser and performance is non-negotiable, LacertaDB is worth a look.


Try It Yourself

```shell
npm install @pixagram/lacerta-db
```
```javascript
import { LacertaDB } from '@pixagram/lacerta-db';

const lacerta = new LacertaDB();
const db = await lacerta.getDatabase('myapp');
const users = await db.createCollection('users');

// Store a document with types JSON can't handle
await users.add({
  name: 'Alice',
  joined: new Date(),
  preferences: new Map([['theme', 'dark'], ['lang', 'en']]),
  tags: new Set(['admin', 'beta-tester']),
  avatar: new Uint8Array([137, 80, 78, 71]) // PNG header bytes
});

// Query it back — every type is preserved, no revivers needed
const admins = await users.query({
  tags: { $contains: 'admin' }
});
```

The source is on GitHub, the package is on npm, and the benchmark playground is included in the repo.


LacertaDB is MIT-licensed and built by Pixagram SA in Zug, Switzerland.
