There's a particular kind of pain that hits product-based businesses around $5M in revenue. The spreadsheets stop working. The legacy ERP costs more to maintain than a mid-level engineer. And someone finally says the thing nobody wanted to say: we need to build our own inventory system.
This article answers the stack question directly, without the hand-waving.
Why inventory is harder than it looks
It sounds like a CRUD app. Items, quantities, locations, movements. But add warehouses, serialized goods, kitting, multi-currency POs, or supplier lead times, and you're now dealing with an event-driven system where the state must never be wrong. Stock discrepancies cost real money. The write logic has to be bulletproof. The read logic has to be fast.
Every architectural decision you make is really a decision about where you're willing to tolerate eventual consistency, and where you absolutely cannot.
Database: PostgreSQL for the core, Redis for the cache
PostgreSQL handles the demands of inventory: ACID transactions, row-level locking, complex joins, and a query planner that, with proper indexing, works through millions of stock-movement rows. Use the serializable isolation level to prevent overselling. Use window functions for running balances.
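The running-balance query looks like this. It's sketched with sqlite3 so it runs anywhere; the SQL has the same shape in Postgres, and the table and column names are illustrative, not a schema recommendation.

```python
# Running stock balance per SKU with a window function.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE stock_movements (
        id INTEGER PRIMARY KEY,
        sku TEXT NOT NULL,
        qty INTEGER NOT NULL  -- positive = receipt, negative = issue
    )
""")
conn.executemany(
    "INSERT INTO stock_movements (sku, qty) VALUES (?, ?)",
    [("WIDGET-1", 100), ("WIDGET-1", -30), ("WIDGET-1", -20)],
)

# The window frame (PARTITION BY sku ORDER BY id) turns the signed
# movements into a running balance, one row per movement.
rows = conn.execute("""
    SELECT id, qty,
           SUM(qty) OVER (PARTITION BY sku ORDER BY id) AS balance
    FROM stock_movements
    WHERE sku = 'WIDGET-1'
    ORDER BY id
""").fetchall()
```

The same query against a real movements table gives you point-in-time balances without maintaining a separate counter that can drift.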
Redis belongs in the stack too, but not as a source of truth. Use it to cache real-time availability counts with a short TTL. Postgres owns the authoritative number. Redis serves it fast.
For larger operations with heavy analytics needs, TimescaleDB (a Postgres extension) handles stock history and demand trends well. ClickHouse works if your team has the appetite for it.
Backend: boring is good
Inventory systems outlive the teams that build them. Pick something your successor will recognize.
Node.js + Fastify works well. TypeScript support is strong, Fastify is faster than Express, and schema validation is built in. Python + FastAPI is the right call if your team leans toward Python and you plan to use forecasting libraries later. Go is worth considering for high-throughput warehouse services. ASP.NET Core is a legitimate choice for .NET shops.
The pattern matters more than the framework: a clean service layer, command/query separation, and an immutable audit log for every stock-affecting event. That last one is non-negotiable. Every movement should be written as an event you can replay, not an update you overwrote.
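A minimal sketch of that event log, with illustrative event shapes (the field names are assumptions, not a schema):

```python
# Append-only movement log, replayed to derive on-hand stock.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: events are immutable once written
class StockMovement:
    sku: str
    location: str
    qty: int     # signed: positive receipt, negative issue
    reason: str  # "receipt", "pick", "adjustment", ...

log: list[StockMovement] = []

def record(event: StockMovement) -> None:
    log.append(event)  # append-only: never update, never delete

def replay(events: list[StockMovement]) -> dict[tuple[str, str], int]:
    """Rebuild current stock per (sku, location) from the full history."""
    state: dict[tuple[str, str], int] = defaultdict(int)
    for e in events:
        state[(e.sku, e.location)] += e.qty
    return dict(state)

record(StockMovement("WIDGET-1", "WH-A", 100, "receipt"))
record(StockMovement("WIDGET-1", "WH-A", -25, "pick"))
state = replay(log)
```

Because state is derived, a discrepancy investigation becomes "replay the log and diff", not "guess which UPDATE clobbered the number".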
Message queue: RabbitMQ vs Kafka
RabbitMQ is the right call for most systems. Reliable task processing, dead-letter queues, simple operational model, easy to debug at 2 am.
Kafka is worth it if you need high throughput (thousands of scan events per second) or multiple downstream consumers reading the same stream.
Managing Kafka via Confluent Cloud or MSK significantly reduces the operational burden. Retrofitting Kafka into a system that started with RabbitMQ is painful, so think about expected volume early.
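The dead-letter pattern that makes RabbitMQ easy to debug is simple enough to sketch with stdlib queues. In RabbitMQ itself you'd configure this with a dead-letter exchange rather than hand-rolling it; the queue names and message shape here are illustrative.

```python
# Retry-then-dead-letter consumer pattern, sketched with stdlib queues.
import queue

work_q: queue.Queue = queue.Queue()
dead_letter_q: queue.Queue = queue.Queue()
MAX_ATTEMPTS = 3

def handle(msg: dict) -> None:
    # Stand-in for real processing; "bad" simulates a poison message.
    if msg["payload"] == "bad":
        raise ValueError("cannot process")

def consume_one() -> None:
    msg = work_q.get_nowait()
    try:
        handle(msg)
    except Exception:
        msg["attempts"] = msg.get("attempts", 0) + 1
        if msg["attempts"] >= MAX_ATTEMPTS:
            dead_letter_q.put(msg)  # park it for a human; stop retrying
        else:
            work_q.put(msg)         # requeue for another attempt

work_q.put({"payload": "bad"})
for _ in range(MAX_ATTEMPTS):
    consume_one()
```

The point of the dead-letter queue is that a poison message stops a consumer loop exactly once, then sits somewhere inspectable instead of blocking the queue at 2 am.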
API layer
REST is fine. Use it unless you have a specific reason not to.
GraphQL makes sense for rich frontends where different users want different views of the same data. The complexity cost is real: N+1 queries, schema registries, and more complex auth middleware. Worth it in the right context.
gRPC is solid for internal service-to-service communication in a microservices setup. Don't expose it to external clients unless you have a strong reason.
Frontend: build a tool, not a dashboard
Warehouse staff work on tablets, ruggedized handhelds, and shared workstations. A beautiful React SPA that takes three seconds to load on slow warehouse Wi-Fi is worse than a fast, ugly one.
For internal power users: React or Vue SPA, React Query for data fetching, TanStack Table for dense data grids, Zustand for state, Tailwind for speed. For warehouse floor use cases like pick-and-pack or receiving screens, consider Next.js to server-render heavy pages and client-render interactive bits. That mix is not a compromise. It's the right tool for each job.
Infrastructure
Containerize everything. Run on Kubernetes or a managed container service. On AWS, the combination of RDS, ElastiCache, ECS or EKS, and SQS covers most of what inventory systems need. GCP and Azure work equally well if you're already invested there.
Integrate OpenTelemetry from the start. It costs almost nothing to add early and an enormous amount to add later. You should be able to answer the question "Why did that stock count take 45 seconds?" without reading the raw logs.
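The shape of that instrumentation, sketched with the stdlib as a stand-in for real spans (actual code would use opentelemetry-api's `tracer.start_as_current_span()`; the names here are illustrative):

```python
# Span-style timing: wrap each stock operation so slow requests are
# answerable from structured timings, not raw logs.
import time
from contextlib import contextmanager

timings: list[tuple[str, float]] = []

@contextmanager
def span(name: str):
    start = time.perf_counter()
    try:
        yield
    finally:
        # Record (name, duration) when the span closes, even on error.
        timings.append((name, time.perf_counter() - start))

with span("stock_count"):
    with span("db_query"):
        time.sleep(0.01)  # stand-in for the slow part
```

Nesting spans is what lets you answer "which part of the stock count was slow" directly: the child span's duration is attributed inside the parent's.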
The decisions that aren't about the stack
Most failed inventory systems failed because the domain model was wrong, not the technology. Reserved quantities bolted on six months later. Warehouse locations never modeled as first-class entities. ERP integration via brittle batch jobs.
Spend a week on the domain model before writing code. Understand the difference between a SKU and a physical product instance. Model backorders, holds, and in-transit inventory explicitly from the start.
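A minimal sketch of what modeling those states explicitly looks like. The field and method names are illustrative; the point is that reserved and in-transit stock are first-class from day one, so "available" is a derived number, not a guess.

```python
# Separating on-hand, reserved, and in-transit stock explicitly.
from dataclasses import dataclass

@dataclass
class StockLevel:
    sku: str
    on_hand: int = 0     # physically in the warehouse
    reserved: int = 0    # committed to orders, not yet picked
    in_transit: int = 0  # on a PO, not yet received

    @property
    def available(self) -> int:
        # What you can promise to a new order right now.
        return self.on_hand - self.reserved

    def reserve(self, qty: int) -> None:
        if qty > self.available:
            raise ValueError("insufficient available stock")
        self.reserved += qty

level = StockLevel("WIDGET-1", on_hand=10)
level.reserve(4)
```

Systems that conflate on-hand with available are the ones that oversell the moment two orders race for the same units.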
The practical verdict
PostgreSQL + Redis + Node.js or Python + RabbitMQ + Next.js + managed Kubernetes. Build the event log first. Model the domain carefully before the first migration. Add Kafka, TimescaleDB, or microservices only when you have specific evidence that you've outgrown the simpler approach. The teams that build the most reliable inventory systems are almost always the ones that resisted the urge to make it interesting.