At 2 a.m., our user group exploded — people were saying data had just vanished, as if the browser had “eaten” it. Our frontend stores application state in IndexedDB, which is supposed to be far more reliable than localStorage. How could it disappear without a trace? I spent two hours digging through logs and backend records before zeroing in on a dark secret of browser storage: when disk space gets tight, Chrome will silently delete IndexedDB data without any notification. Worse, you can’t reproduce it by hand because you’re not running on the “chosen” hard drive. I decided to write an automated test with Playwright that simulates browser crashes and storage pressure — and expose IndexedDB’s real behavior.
Breaking down the problem
IndexedDB was designed as client-side persistent storage, and the W3C spec even says "data should be kept as long as possible". But a spec is one thing; what browser vendors actually implement is another. Chrome has a mechanism called "storage pressure eviction": when the user's disk space drops below a certain threshold, the browser evicts data from less "important" origins using an LRU policy. By default, an origin's storage is only "best-effort" — unless you call navigator.storage.persist() and the browser grants it, your data is fair game for eviction. If you haven't applied for persistent-storage permission in a PWA, your database is about as sturdy as a camping tent.
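Before doing any crash testing, it's worth checking which mode your origin is actually in. Here's a minimal Playwright sketch for probing the Storage API (example.com is just a placeholder for your app's origin):

# check_persistence.py — a minimal sketch for probing the Storage API
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com")  # placeholder: use your app's origin
    # persist() asks Chrome to exempt this origin from eviction; in headless
    # mode with no engagement signals it will usually resolve to False
    granted = page.evaluate("() => navigator.storage.persist()")
    mode = page.evaluate("() => navigator.storage.persisted()")
    est = page.evaluate("() => navigator.storage.estimate()")
    print(f"persist granted={granted}, persisted={mode}, quota={est['quota']} bytes")
    browser.close()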
Why don’t normal testing approaches work? Because manual testing only covers “normal reads and writes” — it can’t simulate:
- A sudden browser process crash (kill, power loss)
- The context being unexpectedly destroyed and then restarted (user closing a tab and reopening it)
- The internal cleanup triggered by a disk-space warning
These scenarios require a controlled environment where you can repeatedly run a fast write → destroy → rebuild → verify loop automatically. That’s exactly what Playwright’s Browser Context isolation and its rich CDP (Chrome DevTools Protocol) capabilities are built for.
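That third scenario sounds unreproducible by hand, but Chromium's DevTools protocol exposes an experimental Storage.overrideQuotaForOrigin command (the same machinery behind the "simulate custom storage quota" checkbox in DevTools), and Playwright hands you a raw CDP session to send it from. A sketch, with example.com again standing in for your origin:

# cdp_quota.py — sketch: shrink an origin's quota via an experimental CDP command
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    context = browser.new_context()
    page = context.new_page()
    page.goto("https://example.com")  # placeholder origin
    cdp = context.new_cdp_session(page)
    # Cap the origin at 1 MB; writes past the cap fail with QuotaExceededError,
    # letting you watch how the app behaves once storage pressure kicks in
    cdp.send("Storage.overrideQuotaForOrigin", {
        "origin": "https://example.com",
        "quotaSize": 1024 * 1024,
    })
    browser.close()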
Solution design
I didn’t choose Selenium because it’s too heavy and its context management feels unnatural. I skipped Puppeteer because Playwright natively supports multiple browsers and multiple contexts with a more modern API. Most importantly, Playwright gives you both storage modes: every context created by browser.new_context() gets its own ephemeral storage sandbox that is destroyed the moment the context closes, while launch_persistent_context() backs a context with an on-disk profile — so you can simulate the “user closes browser / tab” action and still go back and check what actually survived on disk.
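Ten lines are enough to see that sandboxing in action (localStorage is used here for brevity — IndexedDB in the same sandbox is wiped identically; example.com is a placeholder):

# context_isolation.py — each ephemeral context is a fresh storage sandbox
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    ctx1 = browser.new_context()
    page1 = ctx1.new_page()
    page1.goto("https://example.com")
    page1.evaluate("() => localStorage.setItem('k', 'v')")
    ctx1.close()  # the sandbox — localStorage and IndexedDB alike — is gone

    ctx2 = browser.new_context()
    page2 = ctx2.new_page()
    page2.goto("https://example.com")
    print(page2.evaluate("() => localStorage.getItem('k')"))  # None: nothing survived
    browser.close()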
The architecture is a straightforward “brutal loop validation” (a runnable sketch follows the list):
- Use Playwright’s launch_persistent_context() with a real user-data directory, so the profile lives on disk and isn’t wiped when the context shuts down.
- Open the page and inject a script that writes a record with a unique ID and a checksum into IndexedDB, then explicitly call navigator.storage.persist() to request persistence.
- Actively close that context to simulate a browser close or crash.
- Create a new context, open the same page, read from IndexedDB, and check both data integrity and the number of records.
- Repeat N times, each time writing data of random sizes and occasionally using CDP commands to simulate storage-pressure events.
- Count the number of data-loss events and inconsistencies, then generate a report.
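Wired together, the loop looks roughly like this — a sketch under the assumptions above. write_indexeddb is Code 1 below; read_indexeddb is its read-side twin (sketched at the end of this section); APP_URL and the profile directory are placeholders:

# brutal_loop.py — sketch of the write → destroy → rebuild → verify loop
import random
from playwright.sync_api import sync_playwright
from idb_helpers import write_indexeddb, read_indexeddb  # Code 1 + read-side twin

APP_URL = "https://example.com"      # placeholder: your app's page
PROFILE = "/tmp/idb-stress-profile"  # on-disk profile, survives relaunches
N = 50

losses = 0
with sync_playwright() as p:
    for i in range(N):
        # Write phase
        ctx = p.chromium.launch_persistent_context(PROFILE)
        page = ctx.new_page()
        page.goto(APP_URL)
        page.evaluate("() => navigator.storage.persist()")  # may be denied in headless
        payload = "x" * random.randint(1_000, 1_000_000)    # random-sized record
        write_indexeddb(page, "app-db", "records", f"rec-{i}", payload)
        ctx.close()  # simulate browser close / crash

        # Rebuild + verify phase
        ctx = p.chromium.launch_persistent_context(PROFILE)
        page = ctx.new_page()
        page.goto(APP_URL)
        if read_indexeddb(page, "app-db", "records", f"rec-{i}") is None:
            losses += 1
        ctx.close()

print(f"data-loss events: {losses}/{N}")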
Why not use incognito mode for this? Because IndexedDB in incognito is designed to be wiped on close — testing persistence there would be pure performance art.
Core implementation
First, install Playwright and pytest (pip install playwright pytest, then playwright install chromium to download the browser). Then you can run the following three pieces of code directly.
Code 1: IndexedDB utility functions — solving “how to reliably write and make sure it’s actually flushed to disk”
This is the foundation. Inside page.evaluate() we wrap the entire IndexedDB transaction lifecycle in a Promise and resolve only on tx.oncomplete, so the call doesn’t return until the transaction has committed. One subtlety: Chrome’s default durability is relaxed, meaning the commit may still be sitting in OS buffers when oncomplete fires — passing { durability: 'strict' } to transaction() asks for a real flush, which matters when you’re about to kill the process.
# idb_helpers.py
from playwright.sync_api import Page

# page.evaluate() accepts only a single argument, so the script
# destructures one object instead of taking four positional parameters.
IDB_WRITE_SCRIPT = """
async ({ dbName, storeName, key, value }) => {
  function simpleChecksum(str) {
    let hash = 0;
    for (let i = 0; i < str.length; i++) {
      hash = ((hash << 5) - hash) + str.charCodeAt(i);
      hash |= 0; // Convert to 32-bit integer
    }
    return hash;
  }
  return new Promise((resolve, reject) => {
    const request = indexedDB.open(dbName, 1);
    request.onupgradeneeded = (event) => {
      const db = event.target.result;
      if (!db.objectStoreNames.contains(storeName)) {
        db.createObjectStore(storeName, { keyPath: 'id' });
      }
    };
    request.onsuccess = (event) => {
      const db = event.target.result;
      // The transaction scope must include storeName, otherwise the write won't go through;
      // 'strict' durability asks Chrome to flush to disk at commit instead of buffering
      const tx = db.transaction(storeName, 'readwrite', { durability: 'strict' });
      const store = tx.objectStore(storeName);
      // Store a checksum field inside the record to verify consistency later
      store.put({ id: key, data: value, checksum: simpleChecksum(value) });
      // Resolve only once the transaction has actually committed
      tx.oncomplete = () => resolve(true);
      tx.onerror = (e) => reject(e);
    };
    request.onerror = (e) => reject(e);
  });
}
"""

def write_indexeddb(page: Page, db_name, store_name, key, value):
    # Playwright serializes the dict into the single JS argument above
    return page.evaluate(IDB_WRITE_SCRIPT, {
        "dbName": db_name, "storeName": store_name, "key": key, "value": value,
    })
Why did we add a checksum here? Because we don’t just want to know whether a record still exists — we also want to catch the subtler failure where the row is present but its contents were silently corrupted or only partially committed. Counting rows catches eviction; comparing checksums catches corruption.
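For reference, the read-side counterpart used in the loop sketch earlier isn’t part of the listing above, so here is a hypothetical version mirroring the write script (IDB_READ_SCRIPT and read_indexeddb are my names, not the original article’s):

# idb_helpers.py (continued) — hypothetical read/verify counterpart
IDB_READ_SCRIPT = """
async ({ dbName, storeName, key }) => {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open(dbName, 1);
    request.onsuccess = (event) => {
      const db = event.target.result;
      // If the whole store was evicted, report the record as lost
      if (!db.objectStoreNames.contains(storeName)) return resolve(null);
      const get = db.transaction(storeName, 'readonly').objectStore(storeName).get(key);
      get.onsuccess = () => {
        const row = get.result;
        if (!row) return resolve(null);  // row vanished
        // Recompute the same checksum used on the write path
        let hash = 0;
        for (let i = 0; i < row.data.length; i++) {
          hash = ((hash << 5) - hash) + row.data.charCodeAt(i);
          hash |= 0;
        }
        resolve({ found: true, intact: hash === row.checksum });
      };
      get.onerror = (e) => reject(e);
    };
    request.onerror = (e) => reject(e);
  });
}
"""

def read_indexeddb(page: Page, db_name, store_name, key):
    # None = record (or whole store) lost; otherwise {'found': True, 'intact': bool}
    return page.evaluate(IDB_READ_SCRIPT, {
        "dbName": db_name, "storeName": store_name, "key": key,
    })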