Gabriel Anhaia


MongoDB vs Postgres: A Decision Table That Actually Picks For You


You're picking the primary store for a multi-tenant SaaS. Three product managers want a relational schema, one wants per-tenant flexibility, and the SRE has a 47-page deck arguing both sides at once. You have a Tuesday to decide.

The argument never ends because the question is wrong. "MongoDB or Postgres" depends on the shape of the workload: write rate, schema volatility, query mix, consistency story. Pick the workload first, then the store falls out. Below is the decision table I keep handing engineers when they ask me which one to bet on, with the actual reasons each row goes the way it does.

The decision table

| Workload | Winner | Why |
| --- | --- | --- |
| Extreme write throughput, independent docs | MongoDB | Sharding by document key, no joins to coordinate |
| Schema-per-tenant evolution | MongoDB | No ALTER TABLE per tenant; drift is expected |
| Document-shaped CMS payloads | MongoDB | Native nested arrays, partial updates by path |
| Relational integrity (FKs, unique constraints) | Postgres | Constraints are first-class, not a library on top |
| Mixed OLTP plus ad-hoc analytics | Postgres | Window functions, CTEs, a planner that handles both |
| JSON with real queries | Postgres | jsonb plus GIN indexes, type-safe operators |
| Strong-consistency workflows | Postgres | Real serializable isolation, no compromise modes |

That's the whole post in one screen. The rest is the reasoning, with code where it matters.

Three cases where MongoDB wins

Extreme write throughput on independent documents

Telemetry ingestion. Activity feeds. IoT events. Each document is self-contained, no foreign key points at it, and the read pattern is "give me the recent N for this device." MongoDB shards by a key you pick and writes go to whichever shard owns that key. Add a node, the cluster rebalances, write rate scales out. There is no global lock, no cross-shard coordination for independent writes, no single hot table to saturate.

// Sharded write — goes to the shard owning device_id.
db.events.insertOne({
  device_id: "dvc_8814",
  ts: new Date(),
  battery: 0.62,
  payload: { temp: 21.4, humidity: 0.41 },
});
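Getting there assumes the collection is actually sharded. A minimal one-time setup in mongosh might look like this (the `telemetry` database name is illustrative, not from the example above):

```js
// One-time setup, assuming a running sharded cluster and a
// database named "telemetry" (name is hypothetical).
sh.enableSharding("telemetry");

// A hashed shard key spreads sequential device ids evenly
// across shards instead of piling them onto one range.
sh.shardCollection("telemetry.events", { device_id: "hashed" });
```

A hashed key trades range scans on the shard key for even write distribution, which is usually the right trade for ingestion.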

A team I talked to was running a fleet-monitoring product on Postgres and hitting lock contention on a single hot table during evening peaks. They moved that one table to a sharded Mongo cluster keyed by device, kept everything else in Postgres, and the contention disappeared. Mongo is not faster per write on a single node, by any benchmark I trust. The workload was just embarrassingly partitionable, and their Postgres setup wasn't partitioned for it.

If your write hot path is one logical entity that cleanly partitions by a stable key, this row is yours.

Schema-per-tenant evolution

Multi-tenant B2B where every customer wants three custom fields on the Order object and two of them want to remove a field nobody else uses. In Postgres you end up with one of three patterns: a giant jsonb blob (then why are you on Postgres for this part), per-tenant tables (operational pain), or an EAV nightmare. In Mongo, each tenant's documents have whatever shape they have, and the application reads what it expects.

// Tenant A has a discount tier. Tenant B has a contract id.
db.orders.insertOne({
  tenant_id: "tnt_a",
  total: 142.00,
  discount_tier: "gold",
});

db.orders.insertOne({
  tenant_id: "tnt_b",
  total: 891.00,
  contract_id: "ctr_19",
});

You will pay for this elsewhere. Every read needs to defend against missing fields, and reporting is harder. But for the write path of a heavily customized SaaS, Mongo lets each tenant's shape change without an ALTER TABLE.
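The defensive-read tax is easy to underestimate. A sketch of the normalizer every consumer ends up writing (function and field names are illustrative, not from any library):

```javascript
// Normalize a tenant order into the shape the app expects,
// defaulting the fields this tenant's documents may lack.
function normalizeOrder(doc) {
  return {
    tenantId: doc.tenant_id,
    total: doc.total ?? 0,
    discountTier: doc.discount_tier ?? null,
    contractId: doc.contract_id ?? null,
  };
}

// Tenant A's docs carry discount_tier; tenant B's carry contract_id.
const a = normalizeOrder({ tenant_id: "tnt_a", total: 142.0, discount_tier: "gold" });
const b = normalizeOrder({ tenant_id: "tnt_b", total: 891.0, contract_id: "ctr_19" });
```

Multiply that by every entity with tenant-specific shape, in every service that reads it.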

Document-shaped CMS payloads

Headless CMS, page builders, anything where a document is a tree of blocks with arrays of nested children. The Postgres translation is six joined tables and a recursive CTE to assemble the page. The Mongo version is one read.

db.pages.findOne({ slug: "pricing" });
// Returns the entire page tree, blocks, variants,
// localized strings, in a single round trip.

Partial updates by path ($set on blocks.3.children.0.text) are also genuinely nice for editor workflows. The shape of the data matches the shape of the access pattern. That's the whole reason document stores exist.
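That path update, concretely (the path and replacement text are made up for illustration):

```js
// Rewrite one nested block in place; the rest of the
// page document is untouched.
db.pages.updateOne(
  { slug: "pricing" },
  { $set: { "blocks.3.children.0.text": "New headline" } }
);
```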

Four cases where Postgres ends the argument

Relational integrity

If your domain has hard rules (an Order must reference a real Customer, a LineItem cannot exist without an Order, a SKU is unique), Postgres enforces them at the storage layer and Mongo does not. Mongo added schema validation and unique indexes, but referential integrity across collections is something your application has to maintain. That works until the day a partial deploy or a script run leaves orphaned LineItem documents nobody notices for six weeks.

CREATE TABLE orders (
  id BIGSERIAL PRIMARY KEY,
  customer_id BIGINT NOT NULL
    REFERENCES customers(id) ON DELETE RESTRICT,
  total NUMERIC(10, 2) NOT NULL CHECK (total >= 0)
);

Three lines, four invariants, enforced by the engine. The cost of replicating that in application code across three services in two languages is the kind of thing you don't notice until an incident.
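To see the engine doing the work, try violating the constraint (assuming no customer row with id 999 exists; the constraint name below follows Postgres's default naming):

```sql
INSERT INTO orders (customer_id, total) VALUES (999, 10.00);
-- ERROR:  insert or update on table "orders" violates
--         foreign key constraint "orders_customer_id_fkey"
```

No service-level validation to forget, no partial deploy that skips the check.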

Mixed OLTP plus ad-hoc analytics

Most internal tools end up here. The product team writes the OLTP path, then a quarter later the data team needs cohort analysis, retention curves, "show me the top 20 accounts by usage trend." In Postgres you get window functions, CTEs, lateral joins, and a planner that handles both transactional and analytical queries on the same instance.

SELECT
  customer_id,
  SUM(total) AS revenue,
  RANK() OVER (ORDER BY SUM(total) DESC) AS rev_rank
FROM orders
WHERE created_at >= NOW() - INTERVAL '30 days'
GROUP BY customer_id
ORDER BY rev_rank
LIMIT 20;

Mongo's aggregation pipeline can express most of this, but the ergonomics and the planner sophistication are not in the same league once your queries get composed and parameterized. Read the 2026 PostgreSQL vs MongoDB write-up if you want the long version.
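For comparison, a rough pipeline version of the same query, minus the rank column (collection and field names follow the SQL example; a sketch, not a tuned production query):

```js
// Top 20 customers by 30-day revenue.
db.orders.aggregate([
  { $match: { created_at: { $gte: new Date(Date.now() - 30 * 24 * 3600 * 1000) } } },
  { $group: { _id: "$customer_id", revenue: { $sum: "$total" } } },
  { $sort: { revenue: -1 } },
  { $limit: 20 },
]);
```

It works. Composing and parameterizing ten of these per dashboard is where the ergonomics gap shows up.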

JSON with real queries (jsonb plus GIN)

This is the row that surprises people. "We need flexible JSON, so Mongo." Not so fast. If you need flexible JSON and you query into it, Postgres jsonb with a GIN index is a serious option.

CREATE TABLE products (
  id BIGSERIAL PRIMARY KEY,
  attrs JSONB NOT NULL
);

CREATE INDEX products_attrs_gin
  ON products USING GIN (attrs jsonb_path_ops);

SELECT id, attrs->>'name'
FROM products
WHERE attrs @> '{"color": "red", "size": "M"}';

That @> containment query uses the GIN index. You get document flexibility on attribute storage and the rest of the row stays relational with the constraints, joins, and transactions you actually want. The Postgres docs on JSONB indexing are blunt about the trade-offs: jsonb_path_ops is faster and smaller for containment, the default operator class supports more operators. Read it before you decide.

The honest caveat: GIN is excellent for containment and key-existence, weaker for range and sort. If your dominant query on JSON is "find documents where attrs.score > 0.8 ordered by attrs.score," an ordinary Mongo index on that nested field handles it more naturally. For most CRUD-with-attributes workloads, jsonb wins.
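Even the range case has a Postgres answer when one JSON field dominates: a btree expression index on the extracted value (table and field names follow the example above):

```sql
-- Btree on the extracted numeric value; only helps
-- queries that use this exact expression.
CREATE INDEX products_score_idx
  ON products (((attrs->>'score')::numeric) DESC);

SELECT id
FROM products
WHERE (attrs->>'score')::numeric > 0.8
ORDER BY (attrs->>'score')::numeric DESC;
```

You pay one index per hot field, and the GIN index still covers containment for everything else.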

Strong-consistency workflows

Money. Bookings. Inventory. Anything where two writes racing produces a wrong answer that costs you. Postgres gives you real SERIALIZABLE isolation, advisory locks for cross-row coordination, and a transaction model that has been load-bearing for banks for decades.

BEGIN ISOLATION LEVEL SERIALIZABLE;

-- Decrement stock only if any remains; the application must
-- check the affected row count before inserting the order.
UPDATE inventory
SET qty = qty - 1
WHERE sku = 'SKU-001' AND qty > 0;

INSERT INTO orders (customer_id, sku) VALUES (42, 'SKU-001');

COMMIT;

Mongo has had multi-document transactions since 4.0, and they work, but the operational story is more fragile, the latency cost is higher, and the patterns are less mature. If transactional correctness is the headline requirement, you're not arguing about this row.
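For contrast, the same workflow in mongosh carries visibly more ceremony (the "shop" database name is assumed; error handling is the minimum, not the full retry loop transient errors require):

```js
const session = db.getMongo().startSession();
const inv = session.getDatabase("shop").inventory;
const orders = session.getDatabase("shop").orders;

session.startTransaction();
try {
  // Decrement stock if any remains; check matchedCount in real code.
  inv.updateOne({ sku: "SKU-001", qty: { $gt: 0 } }, { $inc: { qty: -1 } });
  orders.insertOne({ customer_id: 42, sku: "SKU-001" });
  session.commitTransaction();
} catch (e) {
  session.abortTransaction();
  throw e;
}
```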

How to use the table on a real decision

Walk your top three workloads, not your imagined ones. Write down the read pattern, the write pattern, the consistency requirement, and the schema volatility. Each row has a winner. If two rows disagree, the answer is usually Postgres for the system of record and a secondary store for the outlier: sharded Mongo for the telemetry firehose, a CMS-shaped collection for the editorial payload, and Postgres holding everything that has constraints.

The mistake teams make is picking one store for ideological reasons and bending every workload to fit. The teams that ship pick per workload and accept the operational tax of running two stores when the data really wants two stores.

If this was useful

Database Playbook walks through this exercise on twelve concrete systems: what the workload actually looked like, which store won each row, and the migrations that happened when the first guess was wrong. If you're staring at a Tuesday deadline and a 47-page deck, it's the shortcut.

Database Playbook: Choosing the Right Store for Every System You Build
