
Nventory


The Engineering Problem Nobody Talks About When You Sell on Multiple Platforms

I want to tell you about a bug that isn't a bug.
It doesn't throw an error. It doesn't crash your server. It doesn't show up in your logs.
But it costs businesses thousands of dollars a month, and most of them never trace it back to the root cause.
Here's the scenario.
A seller runs an online store across Amazon, Shopify, and Flipkart. They have 10 units of a product in their warehouse. All three platforms show 10 units available.
At 2:14pm, someone buys 10 units on Amazon.
At 2:17pm — three minutes later — someone buys 3 units on Shopify.
At 2:19pm — five minutes after the Amazon sale — someone buys 2 units on Flipkart.
Total orders: 15 units. Actual stock: 10 units. 5 orders that cannot be fulfilled.

No error was thrown. The database didn't corrupt. Every system did exactly what it was told.
The bug is the three-minute gap.

Why this is actually an architecture problem

Most multichannel ecommerce platforms are built with a polling model. Every N minutes, the system checks each connected channel and updates stock levels accordingly.
This made total sense in 2012. API rate limits were strict. Webhooks weren't standard. Batch jobs were the norm.
But in 2026, polling-based inventory sync is a design liability.
Here's what polling looks like under the hood:
```javascript
// Simplified polling approach
setInterval(async () => {
  const stock = await warehouse.getStockLevel(skuId);

  await Promise.all([
    amazon.updateListing(skuId, stock),
    shopify.updateInventory(skuId, stock),
    flipkart.updateStock(skuId, stock),
  ]);
}, 1000 * 60 * 15); // every 15 minutes
```
Looks fine. Works fine. Until two channels sell the same last item in the gap between polls.

The event-driven alternative

The fix is treating every stock change as an event — not a state to be periodically checked.
```javascript
// Event-driven approach
stockEventEmitter.on('stock.changed', async (event) => {
  const { skuId, newQuantity, source, timestamp } = event;

  // Propagate to all channels except the source
  const targets = channels.filter(c => c.id !== source);

  await Promise.all(
    targets.map(channel =>
      channel.updateInventory(skuId, newQuantity, {
        idempotencyKey: `${skuId}-${timestamp}`
      })
    )
  );
});
```

When a sale happens on Amazon, it fires a `stock.changed` event immediately. Every other channel gets updated within seconds — not at the next poll interval.

But wait — what about race conditions?

This is the part that gets interesting.
What if two channels sell the last item simultaneously? You fire two `stock.changed` events at essentially the same time. Both channels try to update each other. You end up with -1 stock.
This is where you need optimistic locking at the inventory level:
```javascript
async function decrementStock(skuId, quantity, channelId) {
  const result = await db.query(`
    UPDATE inventory
    SET quantity = quantity - $1,
        version = version + 1,
        last_modified_by = $2
    WHERE sku_id = $3
      AND quantity >= $1 -- prevent negative stock
    RETURNING quantity, version
  `, [quantity, channelId, skuId]);

  if (result.rowCount === 0) {
    // Stock wasn't available — trigger oversell prevention
    throw new InsufficientStockError(skuId, quantity);
  }

  // Emit event with new quantity
  await stockEventEmitter.emit('stock.changed', {
    skuId,
    newQuantity: result.rows[0].quantity,
    source: channelId,
    timestamp: Date.now(),
    version: result.rows[0].version
  });
}
```
The `WHERE quantity >= $1` clause is the guard. If two channels try to sell the last item simultaneously, only one database write succeeds. The other throws `InsufficientStockError` and the order gets flagged before it's confirmed.
No oversell. No negative inventory. No angry customer.
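To see the guard in action without a database, here's an in-memory sketch: a synchronous check stands in for the atomic conditional `UPDATE`, and two channels race for the last unit. All names are illustrative, and a real system should rely on the database's atomicity rather than application-level checks.

```javascript
// In-memory sketch of the "quantity >= requested" guard.
// A synchronous check stands in for the atomic SQL UPDATE.
const inventory = new Map([['SKU-123', { quantity: 1, version: 0 }]]);

class InsufficientStockError extends Error {
  constructor(skuId, quantity) {
    super(`cannot reserve ${quantity} of ${skuId}`);
    this.name = 'InsufficientStockError';
  }
}

function decrementStock(skuId, quantity, channelId) {
  const row = inventory.get(skuId);
  // Mirrors "AND quantity >= $1": refuse to go negative
  if (!row || row.quantity < quantity) {
    throw new InsufficientStockError(skuId, quantity);
  }
  row.quantity -= quantity;
  row.version += 1;
  return { quantity: row.quantity, version: row.version, soldBy: channelId };
}

// Two channels race for the last unit: only the first write succeeds
const results = ['amazon', 'shopify'].map((channel) => {
  try {
    return decrementStock('SKU-123', 1, channel);
  } catch (err) {
    return { error: err.name, channel };
  }
});

console.log(results[0].quantity); // 0
console.log(results[1].error);    // InsufficientStockError
```

The second sale fails cleanly at reservation time instead of silently driving stock to -1.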

The three layers that actually make this production-ready
After building this system, here's what we learned:

1. **Idempotency keys everywhere.** Channel APIs get called multiple times on network failures. Without idempotency keys, a retry creates a second inventory update that corrupts your counts.
2. **Version vectors for conflict resolution.** If Channel A updates stock to 8 and Channel B updates stock to 7 in the wrong order, you need a version number to know which update is canonical.
3. **Dead letter queues for failed propagations.** Sometimes a channel API is down. You need a queue that retries failed updates, not a system that silently drops them and leaves your channels out of sync indefinitely.
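Point 2 can be sketched with a simple monotonic version check. `channelState` and `applyUpdate` are hypothetical names, and a full version-vector scheme is more involved than this single counter, but the core rule is the same: an update is applied only if its version is newer than what the channel last saw.

```javascript
// Sketch: rejecting stale updates with a monotonic version number.
// Names are illustrative; real conflict resolution may need per-writer vectors.
const channelState = { skuId: 'SKU-123', quantity: 10, version: 0 };

function applyUpdate(state, update) {
  if (update.version <= state.version) {
    return false; // stale: arrived out of order, drop it
  }
  state.quantity = update.newQuantity;
  state.version = update.version;
  return true;
}

// Updates arrive out of order: version 2 (quantity 7) lands before version 1
applyUpdate(channelState, { newQuantity: 7, version: 2 });
const appliedStale = applyUpdate(channelState, { newQuantity: 8, version: 1 });

console.log(channelState.quantity, appliedStale); // 7 false
```

Without the check, the late-arriving version 1 would overwrite the newer quantity and the channel would drift out of sync.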

What this looks like at scale

When we built this out for Nventory — a multichannel inventory platform — the results were measurable immediately:

- Sync lag dropped from 15 minutes average → under 5 seconds
- Oversold orders went from 15–30/month → 0
- Manual reconciliation time: 4–6 hrs/week → zero

The architecture wasn't complex. The concepts aren't new. Event-driven systems have existed for decades.

The problem was that most ecommerce tooling was built before this pattern was standard — and nobody went back to fix it.

The broader lesson
Polling is a smell.

Any time you're checking state periodically instead of reacting to changes, ask yourself: what's the worst-case cost of the gap?
For a weather dashboard, a 15-minute lag is fine. For inventory that's selling across 5 platforms simultaneously during a flash sale — it's catastrophic.

The right architecture matches the cost of latency in your domain. In inventory, that cost is high. So latency has to be near-zero.
That one reframe changed how we designed everything.

Curious if anyone else has hit this in other domains — booking systems, ticket availability, seat reservations. It's the same problem wearing different clothes.

Would love to hear how others have handled concurrent writes at the inventory/availability layer.
