Solving Async Race Conditions in JavaScript with a 500B Library
JavaScript is single-threaded, but race conditions still happen.
Any time multiple asynchronous operations modify the same resource, you risk inconsistent state.
Examples include:
- double charging a customer
- processing the same webhook twice
- creating duplicate database records
- cache stampedes
- concurrent file writes
To solve this problem, I built a tiny utility:
async-mutex-lite — a keyed async mutex for JavaScript & TypeScript.
- 🔒 Sequential execution per key
- ⚡ Parallel execution across different keys
- 📦 ~400–600 bytes gzip
- 🧩 Zero dependencies
- 🟦 Full TypeScript support
```bash
npm install async-mutex-lite
```
The Problem: Async Race Conditions
Consider a typical checkout endpoint.
Two requests arrive at nearly the same time for the same user.
```js
app.post("/checkout", async (req) => {
  const balance = await getBalance(req.userId)
  if (balance >= req.amount) {
    await deductBalance(req.userId, req.amount)
    await createOrder(req.userId)
  }
})
```
What happens if both requests read the balance before either deducts it?
```
Request A -> balance = 100
Request B -> balance = 100
Request A deducts
Request B deducts
```
Now the balance has been deducted twice.
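To see the race in isolation, here is a hypothetical in-memory version of the same flow (`getBalance` and `deductBalance` are stand-ins for real database calls; the `await` points are what open the race window):

```js
// Hypothetical in-memory wallet; the async stand-ins mimic DB round-trips.
let balance = 100;
const getBalance = async () => balance;
const deductBalance = async (amount) => { balance -= amount; };

async function checkout(amount) {
  const current = await getBalance();  // both requests read 100 here...
  if (current >= amount) {
    await deductBalance(amount);       // ...so both pass the check and deduct
  }
}

// Fire two "requests" concurrently, the way a server would.
const done = Promise.all([checkout(80), checkout(80)]).then(() => {
  console.log(balance); // -60: deducted twice, even though 100 only covers one
});
```

Both calls suspend at `getBalance` before either reaches `deductBalance`, so the balance check passes twice.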
This is a classic race condition.
The Solution: Keyed Async Mutex
async-mutex-lite ensures tasks with the same key run sequentially, while tasks with different keys run in parallel.
```js
import { mutex } from "async-mutex-lite"

app.post("/checkout", async (req) => {
  await mutex(`checkout:${req.userId}`, async () => {
    const balance = await getBalance(req.userId)
    if (balance >= req.amount) {
      await deductBalance(req.userId, req.amount)
      await createOrder(req.userId)
    }
  })
})
```
Now the execution becomes:
```
checkout:user1 -> taskA -> taskB -> taskC (sequential)
checkout:user2 -> taskD (parallel)
```
Requests for the same user are queued.
Requests for different users still run concurrently.
How It Works Internally
Instead of maintaining a traditional queue, the library uses Promise chaining.
Each key has its own promise chain.
```
mutex("user:1", taskA) ─┐
mutex("user:1", taskB) ─┼─► taskA → taskB → taskC
mutex("user:1", taskC) ─┘
mutex("user:2", taskD) ───► taskD
```
Benefits of this approach:
- minimal code
- extremely small bundle size
- FIFO execution
- automatic cleanup
- no memory leaks
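The whole mechanism fits in a few lines. Here is an illustrative sketch of the chaining approach (my own reconstruction, not the library's actual source):

```js
// Illustrative keyed mutex via promise chaining (not the library's source).
const chains = new Map();

function mutex(key, task) {
  const prev = chains.get(key) ?? Promise.resolve();
  // Queue the new task behind the previous one. Swallowing the previous
  // error here implements the default "continue" strategy.
  const run = prev.catch(() => {}).then(() => task());
  // Once this tail settles, delete the key if no newer task has replaced
  // it, so idle keys don't accumulate in the Map.
  const tail = run.catch(() => {}).then(() => {
    if (chains.get(key) === tail) chains.delete(key);
  });
  chains.set(key, tail);
  return run;
}
```

Each call returns the task's own promise, so resolved values and errors still reach the caller, while the hidden `tail` keeps the chain alive for the next call on the same key.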
Basic Usage
Using the mutex is simple.
```js
import { mutex } from "async-mutex-lite"

const result = await mutex("my-key", async () => {
  const data = await fetchSomething()
  return data
})
```
Synchronous functions also work:
```js
const value = await mutex("my-key", () => {
  return 42
})
```
Real-World Use Cases
Prevent Double Charges
```js
await mutex(`wallet:${userId}`, async () => {
  await processPayment(userId, amount)
})
```
Prevent Duplicate Webhook Processing
```js
await mutex(`webhook:${webhookId}`, async () => {
  await processWebhook(webhookId)
})
```
Cache Stampede Protection
```ts
async function getUser(userId: string) {
  if (cache.has(userId)) return cache.get(userId)

  return mutex(`cache:${userId}`, async () => {
    // Re-check inside the lock: another task may have filled the cache
    // while we were waiting.
    if (cache.has(userId)) return cache.get(userId)

    const user = await db.findUser(userId)
    cache.set(userId, user)
    return user
  })
}
```
Inventory Updates
```js
await mutex(`product:${productId}`, async () => {
  const stock = await getStock(productId)
  if (stock > 0) {
    await decrementStock(productId)
  }
})
```
Error Handling Strategy
The library supports configurable error handling.
Default: "continue"
The queue continues even if a task fails.
```js
await mutex("key", () => {
  throw new Error("failed")
}).catch(console.error)

await mutex("key", () => {
  console.log("this task still runs")
})
```
"stop" Strategy
Stop all queued tasks when an error occurs.
```js
await mutex("key", () => {
  throw new Error("failed")
}, { onError: "stop" }).catch(console.error)

await mutex("key", () => {
  console.log("this will never run")
})
```
This is useful for transaction-like workflows.
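Under the promise-chaining model, "stop" falls out naturally: don't swallow the previous rejection, and it propagates to every task queued behind it on the same key. A hypothetical sketch (not the library's code, and omitting the reset logic a real implementation would need):

```js
// Illustrative "stop" variant of a keyed mutex (not the library's source).
const stopChains = new Map();

function mutexStop(key, task) {
  const prev = stopChains.get(key) ?? Promise.resolve();
  // The previous rejection is NOT caught here, so it skips `task`
  // entirely and rejects this call with the same error.
  const run = prev.then(() => task());
  // A real implementation would also reset the poisoned chain so later
  // calls on this key can run again; omitted here for brevity.
  stopChains.set(key, run);
  return run;
}
```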
When Should You Use It?
Good use cases:
- financial transactions
- idempotent APIs
- webhook processing
- per-user locking
- cache rebuilding
- inventory updates
- sequential file writes
When You Should NOT Use It
Mutexes are unnecessary for:
- stateless operations
- pure read queries
- CPU-bound workloads
- operations that are already sequential
A mutex only helps when multiple async tasks modify the same resource.
Why async-mutex-lite?
Many mutex libraries exist, but most of them are heavier than necessary.
| Library | Size | Keyed Lock | Error Strategy | TypeScript |
|---|---|---|---|---|
| async-lock | ~5 KB | ✅ | ❌ | Partial |
| async-mutex | ~3 KB | ❌ | ❌ | ✅ |
| await-lock | ~1 KB | ❌ | ❌ | ❌ |
| async-mutex-lite | ~0.5 KB | ✅ | ✅ | ✅ |
Goals of this library:
- minimal bundle size
- modern TypeScript support
- simple API
- zero dependencies
Serverless Note
This mutex works within a single process.
In serverless environments:
- each instance has its own memory
- mutex only applies inside that instance
If you need cross-instance locking, use:
- Redis locks
- database transactions
- distributed lock systems
Try It Out
If you're dealing with async race conditions in JavaScript, this tiny utility might save you a lot of headaches.
```bash
npm install async-mutex-lite
```
GitHub:
NPM:
Docs:
⭐ If you find it useful, consider giving the repository a star.