You build a web app. You store data in IndexedDB. It works great, until one day a user reports that all their data is just... gone. No error. No warning. It's as if it never existed.
This happened to me with Mock Studio, a Chrome extension that stores API mocks in IndexedDB using Dexie.js. Users would work for days building up their mock configurations, then come back to find everything wiped. We dug in and found three root causes, all fixable. Here's what we learned.
The Short Version
IndexedDB data can silently disappear because:
- Chrome evicts your entire database when storage quota is exceeded, and your app might be filling it up faster than you think.
- An abrupt connection close mid-write loses uncommitted data; service worker restarts can trigger this.
- A "clear then import" pattern leaves you with nothing if the import fails, even inside a transaction.
Root Cause #1: Chrome Will Evict Your Entire Database
This is the one that hurts the most because it's invisible until it's too late.
Browsers allocate storage to origins (a combination of scheme + host + port). When an origin exceeds its quota, Chrome doesn't trim your data gracefully; it evicts the entire origin's storage: IndexedDB, Cache API, localStorage, all of it. Gone.
How we hit this
Our extension's network logging feature stored every captured HTTP request in IndexedDB, including the full response body:
```javascript
const newRequest = {
  method: request.request.method,
  url: request.request.url,
  status: request.response.status,
  content: content || undefined, // ← full response body, no size check
  // ...
};

await db.networkLogs.add(newRequest);
```
We had a row count limit of 50,000 records. Sounds reasonable. But 50,000 records × potentially megabytes of response body each = gigabytes of storage. On a busy app making lots of API calls, this table would balloon fast.
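The growth math is easy to check. A quick sketch (the average body size is an illustrative assumption, not a measurement from our data):

```javascript
// Worst-case storage footprint of the log table at the row cap.
const maxRows = 50_000;
const avgBodyBytes = 100 * 1024; // assumed 100 KB average response body

const worstCaseBytes = maxRows * avgBodyBytes;
console.log(`${(worstCaseBytes / 1024 ** 3).toFixed(1)} GiB`); // → "4.8 GiB"
```

Several gigabytes from a single logging table, before counting any of the data the user actually cares about.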
When the storage quota was exceeded, Chrome evicted CMockDB, wiping out all the user's mocks, projects, and environments along with the logs. The user lost everything because of a logging side-feature they didn't even know existed.
The fix
Stop storing what you don't need. We only needed aggregate stats (counts, average times, error rates), not raw response bodies. Removing the raw log storage entirely solved the problem.
If you do need raw logs, apply a size-based limit, not just a row count:
```javascript
// Check byte size before storing
const contentSize = new Blob([content]).size;
const entry = {
  ...requestData,
  content: contentSize < 50_000 ? content : undefined, // skip bodies > 50 KB
};
```
Or cap by row count but make it realistic: 500 rows of potentially-large records is very different from 500 rows of small records.
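You can enforce both limits at once before flushing to the database. A pure-function sketch (a hypothetical helper, not the extension's actual code): drop the oldest entries until a row cap and a total-byte budget are both satisfied.

```javascript
// Trim a log buffer (ordered oldest → newest) to fit a row cap and a
// byte budget; `size` is each entry's precomputed byte size.
function trimToBudget(entries, { maxRows, maxBytes }) {
  let total = entries.reduce((sum, e) => sum + e.size, 0);
  let start = 0;
  while (
    entries.length - start > maxRows ||
    (total > maxBytes && start < entries.length)
  ) {
    total -= entries[start].size; // evict the oldest entry
    start += 1;
  }
  return entries.slice(start);
}

const logs = [
  { url: '/a', size: 40_000 },
  { url: '/b', size: 30_000 },
  { url: '/c', size: 20_000 },
];
trimToBudget(logs, { maxRows: 10, maxBytes: 60_000 });
// keeps only the newest entries that fit the budget: /b and /c
```

Evicting oldest-first keeps the most recent traffic, which is usually what a debugging feature needs anyway.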
How to protect your extension from eviction
For Chrome extensions specifically, add unlimitedStorage to your manifest.json:
```json
{
  "permissions": [
    "storage",
    "unlimitedStorage"
  ]
}
```
This exempts your extension's storage from Chrome's quota-based eviction. Without it, your extension is subject to the same eviction policy as any website.
For regular web apps, you can request persistent storage. Depending on the browser this may prompt the user (Firefox asks; Chrome decides based on engagement heuristics), and once granted, the data won't be evicted without explicit user action:
```javascript
const persisted = await navigator.storage.persist();
if (persisted) {
  console.log('Storage will not be evicted');
}
```
You can also check how much quota you're using before it becomes a problem:
```javascript
const estimate = await navigator.storage.estimate();
console.log(`Using ${estimate.usage} of ${estimate.quota} bytes`);
```
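You can turn that estimate into a proactive guard. A minimal sketch (the 80% threshold is an arbitrary choice of mine, not a browser rule):

```javascript
// Decide whether to warn, or start trimming, based on quota usage.
function quotaPressure(usage, quota, threshold = 0.8) {
  if (!quota) return { ratio: 0, overThreshold: false }; // quota unknown
  const ratio = usage / quota;
  return { ratio, overThreshold: ratio >= threshold };
}

// In the browser:
// const { usage, quota } = await navigator.storage.estimate();
// if (quotaPressure(usage, quota).overThreshold) { /* trim old data */ }
quotaPressure(900, 1000); // → { ratio: 0.9, overThreshold: true }
```

Running this check periodically (or before large writes) lets you trim old data on your own terms instead of letting the browser wipe everything on its terms.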
Root Cause #2: Service Worker Restarts Can Abort In-Flight Writes
Chrome terminates idle service workers after about 30 seconds of inactivity and restarts them on demand. When a restart happens while a database write is in progress, the write can be lost.
How this works in practice
When a new service worker instance starts up, it opens a new connection to IndexedDB. If your existing page (say, a DevTools panel) already has an open connection, the browser fires a versionchange event on the old connection. The standard pattern for handling this is to close the old connection immediately:
```javascript
this.on('versionchange', () => {
  this.close(); // Let the upgrade proceed
});
```
This is correct, but it's incomplete. After close() is called, any subsequent operations on that db instance will fail with a "Database is closing" or "Connection is closing" error. Your Zustand store might have already applied an optimistic UI update, but the actual DB write never completed. On the next page load, the data isn't there.
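A defensive complement (my sketch, not the post's fix): wrap writes so that a one-off "closing" failure is retried after a short delay instead of silently diverging from the optimistic UI state. The `isClosingError` matcher is an assumption about the error message text, not a documented API.

```javascript
// Retry a DB write once if the connection was mid-close when it ran.
const isClosingError = (err) =>
  /closing/i.test(err && err.message ? err.message : '');

async function writeWithRetry(write, { delayMs = 100 } = {}) {
  try {
    return await write();
  } catch (err) {
    if (!isClosingError(err)) throw err; // unrelated failure: surface it
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    return write(); // second attempt, after the connection has reopened
  }
}

// Usage (hypothetical): await writeWithRetry(() => db.networkLogs.add(entry));
```

This only papers over the window between close and reopen; the real fix is below.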
The fix
Reopen the connection after closing it:
```javascript
this.on('versionchange', () => {
  this.close();
  this.open().catch((err) => {
    console.warn('Failed to reopen DB after version change:', err);
  });
});
```
This ensures the connection is restored automatically after the upgrade completes, without requiring a full page reload.
Root Cause #3: "Clear Then Import" Leaves You With Nothing on Failure
This one is a logic trap that's easy to fall into. If you implement a backup/restore feature, you probably wrote something like this:
```javascript
await db.transaction('rw', [db.projects, db.mocks, ...], async () => {
  // Step 1: Wipe everything
  await db.projects.clear();
  await db.mocks.clear();
  // ...

  // Step 2: Import new data
  await db.projects.bulkAdd(data.projects);
  await db.mocks.bulkAdd(data.mocks); // ← what if this throws?
});
```
It's all in a transaction, so if bulkAdd throws, the transaction rolls back, right?
Yes, but you still end up with an empty database.
A transaction rollback undoes every operation in the transaction, including the clear() calls. So technically the data is restored. But if the rollback itself fails (e.g., a connection drops mid-rollback, or the browser crashes), or if your error handling logic doesn't expect the rollback path, the user sees an empty app.
More critically: if you're using Dexie and the bulkAdd encounters a constraint error with default options, it may reject after a partial write (some records added, some not), and the rollback may not be as clean as you expect depending on the error type.
The fix
Take a snapshot before you clear anything, and use it as an explicit fallback:
```javascript
const snapshot = await exportAllData(); // capture current state

try {
  await db.transaction('rw', [...tables], async () => {
    await db.projects.clear();
    await db.mocks.clear();
    // ...
    await db.projects.bulkAdd(data.projects);
    await db.mocks.bulkAdd(data.mocks);
    // ...
  });
} catch (err) {
  // Import failed, restore the snapshot explicitly
  await db.transaction('rw', [...tables], async () => {
    await db.projects.clear();
    await db.mocks.clear();
    // ...
    await db.projects.bulkAdd(snapshot.projects);
    await db.mocks.bulkAdd(snapshot.mocks);
    // ...
  });
  throw err; // re-throw so the UI can show an error message
}
```
This is more verbose but the intent is explicit: if the import fails, the user's previous data is unconditionally restored.
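The same control flow works for any destructive replace, not just Dexie. A store-agnostic sketch (the `store` interface here is hypothetical, chosen to show the shape of the pattern):

```javascript
// Replace a store's contents, restoring a snapshot if the import throws.
async function replaceAll(store, importData) {
  const snapshot = await store.exportAll(); // capture before destroying
  try {
    await store.clearAll();
    await store.importAll(importData); // may throw on bad data
  } catch (err) {
    await store.clearAll(); // wipe any partial import
    await store.importAll(snapshot); // unconditionally restore
    throw err; // let the UI report the failure
  }
}
```

The key property: the snapshot is taken outside the try block, so no failure path can leave you holding nothing.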
Summary
| Issue | Why It Happens | Fix |
|---|---|---|
| Full database eviction | Unbounded storage growth hits browser quota | Size-cap logged data; request persistent storage; use unlimitedStorage in extensions |
| Lost writes on reconnect | Service worker restart closes DB connection mid-write | Reopen DB automatically after versionchange close |
| Empty DB on import failure | Transaction rollback isn't always reliable as a recovery strategy | Snapshot before clearing, restore explicitly in catch block |
The Bigger Lesson
IndexedDB is the closest thing to a real database that the browser gives you, but it doesn't behave like one in a few important ways:
- No durability guarantee under quota pressure: the browser can reclaim storage without asking.
- Connection lifecycle is tied to page/worker lifecycle: disconnections can happen at any time.
- Transactions are atomic but not indestructible: browser crashes and dropped connections can interrupt them.
If your app uses IndexedDB to store data that users care about, treat it like you would any database: monitor usage, cap growth, handle reconnection, and always have a recovery path on destructive operations.
This post is based on a real debugging session on Mock Studio, a Chrome extension for mocking HTTP APIs. The fixes described here are live in production.