Most database comparisons stay theoretical. Real production decisions come down to a few repeatable code patterns.
Here are six patterns JavaScript teams run into constantly and how the PostgreSQL and MongoDB approaches differ in actual code.
1. Modeling Relationships Between Entities
Applications with users, companies, orders, or projects almost always require relationships.
PostgreSQL (relational model)
```sql
CREATE TABLE companies (
  id SERIAL PRIMARY KEY,
  name TEXT NOT NULL
);

CREATE TABLE jobs (
  id SERIAL PRIMARY KEY,
  company_id INTEGER REFERENCES companies(id),
  title TEXT NOT NULL
);
```
Querying related data:
```javascript
const jobs = await prisma.job.findMany({
  include: {
    company: true
  }
})
```
MongoDB (embedded document)
```javascript
db.jobs.insertOne({
  title: "Senior React Developer",
  company: {
    name: "TechCorp"
  }
})
```
Querying:
```javascript
const jobs = await Job.find().limit(20)
```
The MongoDB version is faster for single reads because the company data is embedded. PostgreSQL wins when relationships grow beyond one level or when the shared data changes often: a join reads one authoritative row, while embedding means updating the duplicated company data in every job document that contains it.
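MongoDB also supports a middle ground: reference the company by id and populate it at read time instead of embedding a copy. A minimal sketch of that design, expressed as plain schema data — the field and model names (`companyId`, `Company`, `Job`) are illustrative, not from the examples above:

```javascript
// Referenced design: each job stores a companyId instead of an embedded copy,
// so company data lives in exactly one document.
const jobSchemaFields = {
  title: { type: String, required: true },
  companyId: { type: "ObjectId", ref: "Company" } // reference, not embed
}

// With Mongoose this would become a model and a populated read:
//   const Job = mongoose.model("Job", new mongoose.Schema(jobSchemaFields))
//   const jobs = await Job.find().populate("companyId").limit(20)
```

The trade-off mirrors the join: `populate` costs an extra query per read, but updates to a company touch a single document.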
2. Preventing Invalid Data
Relational databases enforce rules directly in the database layer.
PostgreSQL
```sql
CREATE TABLE applications (
  id SERIAL PRIMARY KEY,
  job_id INTEGER REFERENCES jobs(id),
  developer_id INTEGER,
  UNIQUE(job_id, developer_id)
);
```
Now duplicate applications are impossible.
```javascript
await prisma.application.create({
  data: {
    jobId: 15,
    developerId: 42
  }
})
```
If the same developer applies twice, PostgreSQL rejects the write.
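On the JavaScript side, Prisma surfaces that rejection as a known request error with code `P2002` (unique constraint violation), so the application can turn it into a friendly "already applied" response. A small sketch — the helper name is illustrative:

```javascript
// Prisma reports a violated unique constraint with error code "P2002".
function isUniqueViolation(err) {
  return err != null && err.code === "P2002"
}

// Usage (sketch, assuming the prisma client from the example above):
//   try {
//     await prisma.application.create({ data: { jobId, developerId } })
//   } catch (err) {
//     if (isUniqueViolation(err)) { /* treat as "already applied" */ }
//     else throw err
//   }
```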
MongoDB
A common MongoDB approach is to enforce this rule in the application layer.
```javascript
const existing = await Application.findOne({
  jobId,
  developerId
})

if (!existing) {
  await Application.create({ jobId, developerId })
}
```
This works but creates a race condition under high concurrency: two requests can both pass the findOne check before either insert lands. PostgreSQL enforces the constraint automatically on every write, which is one reason it is so common in SaaS and fintech systems.
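The race can also be closed inside MongoDB itself: it supports unique compound indexes, and violating one raises a duplicate-key error with code 11000. A sketch, assuming the `applicationSchema`/`Application` names from the Mongoose-style example above:

```javascript
// MongoDB duplicate-key errors carry code 11000; a predicate makes the
// handling explicit.
function isDuplicateKeyError(err) {
  return err != null && err.code === 11000
}

// Declared once on the schema, enforced by the database on every write:
//   applicationSchema.index({ jobId: 1, developerId: 1 }, { unique: true })
//   try {
//     await Application.create({ jobId, developerId })
//   } catch (err) {
//     if (isDuplicateKeyError(err)) { /* already applied — not a crash */ }
//     else throw err
//   }
```

With the index in place, the findOne pre-check becomes an optimization rather than the safety mechanism.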
3. Querying Related Data at Scale
Aggregations across relationships are common in dashboards and analytics.
PostgreSQL
```sql
SELECT
  c.name,
  COUNT(j.id) AS job_count,
  AVG(j.salary_max) AS avg_salary
FROM companies c
JOIN jobs j ON j.company_id = c.id
WHERE j.remote = true
GROUP BY c.name
ORDER BY avg_salary DESC;
```
JavaScript execution:
```javascript
const stats = await prisma.$queryRaw`
  SELECT c.name, COUNT(j.id)
  FROM companies c
  JOIN jobs j ON j.company_id = c.id
  GROUP BY c.name
`
```
MongoDB
```javascript
const stats = await Job.aggregate([
  {
    $group: {
      _id: "$company.name",
      jobCount: { $sum: 1 },
      avgSalary: { $avg: "$salary.max" }
    }
  }
])
```
MongoDB aggregation pipelines work well but become difficult when joins or multi-table relationships appear. PostgreSQL's query planner was built for this type of workload.
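To make that concrete: if jobs referenced companies by id instead of embedding them, the same report would need a `$lookup` stage, MongoDB's join. A sketch of the pipeline as plain data — the `companyId` and `salary.max` field names are assumptions carried over from earlier examples:

```javascript
// Aggregation pipeline joining jobs to a separate companies collection.
const pipeline = [
  // $lookup is MongoDB's left outer join against another collection.
  { $lookup: { from: "companies", localField: "companyId", foreignField: "_id", as: "company" } },
  // $lookup produces an array; $unwind flattens it to one company per job.
  { $unwind: "$company" },
  { $group: { _id: "$company.name", jobCount: { $sum: 1 }, avgSalary: { $avg: "$salary.max" } } },
  { $sort: { avgSalary: -1 } }
]

// const stats = await Job.aggregate(pipeline)
```

Four stages to replicate a one-line SQL JOIN ... GROUP BY is exactly the kind of friction the paragraph above describes.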
Architectural decisions like this are a core part of system design, which is why senior engineers spend significant time mastering database trade-offs alongside JavaScript application architecture.
4. Flexible Fields Without Schema Migrations
One reason teams choose MongoDB is schema flexibility.
MongoDB
```javascript
db.products.insertOne({
  name: "Laptop",
  specs: {
    cpu: "M3",
    ram: "16GB",
    gpu: "Integrated"
  }
})
```
Another document in the same collection can have different fields.
```javascript
db.products.insertOne({
  name: "T-Shirt",
  size: "L",
  material: "Cotton"
})
```
PostgreSQL with JSONB
Modern PostgreSQL can store flexible structures using JSONB.
```sql
CREATE TABLE products (
  id SERIAL PRIMARY KEY,
  name TEXT,
  metadata JSONB
);
```
Insert flexible data:
```javascript
await prisma.product.create({
  data: {
    name: "Laptop",
    metadata: {
      cpu: "M3",
      ram: "16GB"
    }
  }
})
```
Query JSON fields:
```javascript
const products = await prisma.product.findMany({
  where: {
    metadata: {
      path: ["cpu"],
      equals: "M3"
    }
  }
})
```
JSONB gives PostgreSQL document flexibility without abandoning relational structure.
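At scale, JSONB filters stay fast when backed by a GIN index, which serves PostgreSQL's containment operator `@>`. A sketch of the raw-SQL route through Prisma — the index name and the tiny helper are illustrative, not part of any library API:

```javascript
// Builds the JSON document used on the right-hand side of a JSONB
// containment (@>) comparison. Illustrative helper.
function containmentLiteral(filter) {
  return JSON.stringify(filter)
}

// One-time index, then the indexed query (sketch):
//   CREATE INDEX products_metadata_gin ON products USING GIN (metadata);
//
//   const products = await prisma.$queryRaw`
//     SELECT * FROM products
//     WHERE metadata @> ${containmentLiteral({ cpu: "M3" })}::jsonb
//   `
```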
5. High Throughput Event Logging
Logging and analytics systems generate huge write volumes.
MongoDB
```javascript
await ActivityLog.create({
  type: "page_view",
  userId: 42,
  page: "/jobs/react",
  metadata: {
    device: "mobile",
    country: "DE"
  },
  createdAt: new Date()
})
```
MongoDB handles massive insert throughput with minimal overhead.
PostgreSQL
```javascript
await prisma.activityLog.create({
  data: {
    type: "page_view",
    userId: 42,
    page: "/jobs/react"
  }
})
```
PostgreSQL writes carry more per-row overhead here: constraint checks, index maintenance, and full transactional guarantees all run on every insert.
If your system writes millions of events per hour, MongoDB or another log-focused database becomes a better fit.
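Either database closes much of the gap when events are written in batches instead of one row per request. A minimal in-memory batcher sketch — the flush size and all names here are illustrative:

```javascript
// Buffers events and flushes them in chunks. The caller supplies flush,
// e.g.  batch => ActivityLog.insertMany(batch)                     (Mongoose)
// or    batch => prisma.activityLog.createMany({ data: batch })    (Prisma).
function createBatcher(flush, size = 500) {
  let buffer = []
  return {
    add(event) {
      buffer.push(event)
      // Flush once the buffer reaches the configured batch size.
      if (buffer.length >= size) return this.flushNow()
    },
    async flushNow() {
      if (buffer.length === 0) return
      const batch = buffer
      buffer = [] // swap before awaiting so new events go to a fresh buffer
      await flush(batch)
    }
  }
}
```

A real deployment would also flush on a timer and on shutdown so trailing events are not lost; that plumbing is omitted here.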
6. Atomic Multi-Step Operations
Business logic often requires several updates to happen together.
PostgreSQL transactions
```javascript
await prisma.$transaction(async (tx) => {
  const order = await tx.order.create({
    data: { userId: 15, total: 120 }
  })

  await tx.inventory.update({
    where: { productId: 9 },
    data: { stock: { decrement: 1 } }
  })

  await tx.payment.create({
    data: { orderId: order.id, amount: 120 }
  })
})
```
If any step fails, everything rolls back.
MongoDB multi-document transaction
```javascript
const session = await mongoose.startSession()
try {
  await session.withTransaction(async () => {
    await Order.create([{ userId: 15 }], { session })
    await Inventory.updateOne({ productId: 9 }, { $inc: { stock: -1 } }, { session })
  })
} finally {
  session.endSession()
}
```
MongoDB has supported multi-document transactions since version 4.0, but they require a replica set or sharded cluster, add overhead, and are used far less often in practice. PostgreSQL was designed around transactional consistency from the beginning.
Choosing the Right Database
Most JavaScript applications fall into one of two categories.
If your system revolves around relationships between users, products, payments, permissions, or business rules, PostgreSQL usually wins.
If your system stores high-volume documents like logs, analytics events, chat messages, or evolving content structures, MongoDB often fits better.
The most mature architectures use both. PostgreSQL stores core business data. MongoDB handles high-volume documents or logs.
Senior engineers do not pick databases based on preference. They choose the one that matches the data model the system actually needs.