A few months ago I was looking at why a PostgreSQL instance was running at 94% memory on a server that, by all accounts, should have had plenty of headroom. The queries were fast, the data volume was modest, and CPU was barely touched.
The culprit was 280 open connections.
No single connection was doing anything particularly expensive. But each one carries a cost that most developers don't think about until they're in production staring at an OOM kill: PostgreSQL spawns a dedicated backend process per connection, and each process consumes roughly 5-10MB of RAM regardless of whether it's actively running a query.
280 connections x 7MB average = 1.96GB. On a server with 4GB RAM and PostgreSQL's own memory settings (shared_buffers, work_mem), that leaves almost nothing for actual query execution.
## Why Node.js Apps Over-Connect
The problem is architectural. Node.js applications are typically deployed as multiple processes or containers: a web server, one or more background workers, maybe a separate process for scheduled jobs. Each runs its own connection pool. Each pool opens connections eagerly.
With pg's default pool size of 10 for the web and worker services, a smaller pool of 5 for the scheduler, and 3 replicas of each:

```
web server        (3 replicas x 10 connections) = 30
background worker (3 replicas x 10 connections) = 30
job scheduler     (3 replicas x  5 connections) = 15

Total: 75 connections at idle
```
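That arithmetic generalizes to any deployment topology. A quick sketch (the service list mirrors the example above, and 7MB per connection is the assumed midpoint of the 5-10MB range):

```javascript
// Estimate fleet-wide connection count and the RAM those idle backends consume.
// Topology and the 7 MB/connection figure are assumptions from the text above.
function estimateConnections(services, mbPerConnection = 7) {
  const total = services.reduce((sum, s) => sum + s.replicas * s.poolSize, 0);
  return { total, estimatedMB: total * mbPerConnection };
}

const fleet = [
  { name: "web", replicas: 3, poolSize: 10 },
  { name: "worker", replicas: 3, poolSize: 10 },
  { name: "scheduler", replicas: 3, poolSize: 5 },
];

console.log(estimateConnections(fleet)); // { total: 75, estimatedMB: 525 }
```

Running this against your real replica counts is a five-minute way to find out whether you're already flirting with the connection limit.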
Add a traffic spike, pool expansion, and a few long-running queries holding connections open, and you're at 150+ before anything goes wrong with your code.
PostgreSQL's default max_connections is 100. Many managed databases (RDS, Supabase, Neon) set it lower for small instance sizes.
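You can ask the database itself how close you are. A sketch, assuming a connected pg `Pool` (or anything with a `.query()` method) is passed in; the 80% warning threshold is arbitrary:

```javascript
// Compare current backend count against max_connections.
// `pool` is assumed to be a pg Pool; warnAt is an arbitrary threshold.
async function connectionHeadroom(pool, warnAt = 0.8) {
  const { rows } = await pool.query(`
    SELECT (SELECT count(*) FROM pg_stat_activity)::int AS used,
           current_setting('max_connections')::int AS max
  `);
  const { used, max } = rows[0];
  return { used, max, utilization: used / max, nearLimit: used / max >= warnAt };
}
```

Wire this into a health check and you'll see the problem coming instead of discovering it in an error log.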
## What Happens When You Hit the Limit

```
Error: remaining connection slots are reserved for non-replication superuser connections
```
Or, worse, requests that queue indefinitely waiting for a connection that never frees up because every connection is held by a slow query, and the slow query is slow because it can't get a lock, because another connection holds it, and that connection is waiting for... a connection.
You get the idea.
## The Wrong Fix
The instinct is to increase max_connections. This works until it doesn't: more connections means more RAM pressure, more context switching, and more lock contention. PostgreSQL is not designed for thousands of concurrent connections. It's designed for dozens of active queries with efficient I/O, and it's exceptional at that.
The right fix is to not open connections you don't need.
## PgBouncer: A Connection Pool in Front of PostgreSQL
PgBouncer sits between your application and PostgreSQL. Your application thinks it's talking to PostgreSQL directly - same protocol, same port behavior. PgBouncer maintains a much smaller pool of real PostgreSQL connections and multiplexes client connections onto them.
```
App (100 client connections)
         |
    [PgBouncer]
         |
PostgreSQL (20 server connections)
```
100 application connections, 20 actual PostgreSQL connections. The application never notices.
PgBouncer has three pooling modes:
- **Session pooling** - a server connection is assigned to a client for the entire session duration. Equivalent to no pooling for persistent connections, but useful for clients that connect and disconnect frequently.
- **Transaction pooling** - a server connection is assigned only for the duration of a transaction. As soon as your transaction commits or rolls back, the connection goes back to the pool. This is the mode that actually reduces your connection count dramatically.
- **Statement pooling** - a server connection is assigned for a single statement. Very aggressive, incompatible with multi-statement transactions. Rarely the right choice.
For most Node.js workloads, transaction pooling is what you want.
## Setting Up PgBouncer with Docker

```yaml
# docker-compose.yml
services:
  pgbouncer:
    image: bitnami/pgbouncer:latest
    environment:
      POSTGRESQL_HOST: postgres
      POSTGRESQL_PORT: 5432
      POSTGRESQL_DATABASE: myapp
      POSTGRESQL_USERNAME: app_user
      POSTGRESQL_PASSWORD: ${DB_PASSWORD}
      PGBOUNCER_PORT: 6432
      PGBOUNCER_POOL_MODE: transaction
      PGBOUNCER_MAX_CLIENT_CONN: 1000
      PGBOUNCER_DEFAULT_POOL_SIZE: 25
      PGBOUNCER_MIN_POOL_SIZE: 5
      PGBOUNCER_RESERVE_POOL_SIZE: 5
      PGBOUNCER_RESERVE_POOL_TIMEOUT: 3
      PGBOUNCER_SERVER_IDLE_TIMEOUT: 600
    ports:
      - "6432:6432"
    depends_on:
      - postgres
```
Your application connects to port 6432 (PgBouncer) instead of 5432 (PostgreSQL). Everything else stays the same.
```js
const { Pool } = require("pg");

// Before
const pool = new Pool({
  connectionString: "postgresql://app_user:password@postgres:5432/myapp",
  max: 10,
});

// After
const pool = new Pool({
  connectionString: "postgresql://app_user:password@pgbouncer:6432/myapp",
  max: 25, // can be higher now - PgBouncer handles the real limit
});
```
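A couple of other pg `Pool` options pair well with an external pooler. The values below are illustrative starting points, not tuned recommendations:

```javascript
// Pool settings that complement PgBouncer (values are illustrative).
const poolConfig = {
  connectionString: "postgresql://app_user:password@pgbouncer:6432/myapp",
  max: 25,                        // client-side ceiling; PgBouncer enforces the real one
  idleTimeoutMillis: 30_000,      // release idle clients promptly; PgBouncer keeps the real pool warm
  connectionTimeoutMillis: 5_000, // fail fast instead of queueing forever on exhaustion
};
```

Failing fast on `connectionTimeoutMillis` turns a silent queueing pile-up into a visible error you can alert on.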
## The Numbers
Same application, same workload, same PostgreSQL instance. Before and after adding PgBouncer in transaction mode:
| Metric | Without PgBouncer | With PgBouncer |
|---|---|---|
| PostgreSQL connections (idle) | 75 | 8 |
| PostgreSQL connections (peak load) | 210 | 25 |
| PostgreSQL RAM used by connections | 1.47GB | 175MB |
| p99 query latency (peak) | 340ms | 95ms |
| Errors under load | connection limit exceeded | 0 |
The latency improvement is not because PgBouncer makes queries faster. It's because without it, queries were queuing for a connection slot. With transaction pooling, a query gets a connection, runs, and returns it immediately - no waiting.
## What Transaction Pooling Breaks
This is important. Transaction pooling is not a drop-in change if you use any of the following:
**Named prepared statements.** Prepared statements are created on a specific server connection. With transaction pooling, you might get a different connection per transaction, so the prepared statement doesn't exist there.
Good news for Node.js developers: pg does not create named, persistent prepared statements by default. Standard parameterized queries work fine with PgBouncer in transaction mode:
```js
// This does NOT use a persistent prepared statement - works fine with PgBouncer
await client.query("SELECT * FROM users WHERE id = $1", [userId]);

// This DOES use a persistent prepared statement (the `name` property) -
// breaks with PgBouncer in transaction mode
await client.query({
  name: "get-user-by-id",
  text: "SELECT * FROM users WHERE id = $1",
  values: [userId],
});
```
The issue only appears if you explicitly pass a name property in the query object. If you're using standard pool.query(sql, params) calls, you don't need to change anything.
**`SET` statements and session-level configuration.** `SET search_path TO tenant_abc` applies to the session, not the transaction. With transaction pooling, the setting evaporates when the transaction ends and the connection goes back to the pool.
If you're using RLS with set_config('app.organization_id', orgId, true), the true parameter already makes it transaction-scoped, so this works correctly with PgBouncer. Just make sure you're not relying on any session-level state persisting between transactions.
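A small wrapper can keep that pattern from leaking session state. A sketch, assuming `pool` is a pg `Pool`; the helper name and the `app.organization_id` setting are hypothetical:

```javascript
// Run `fn` inside a transaction with transaction-scoped RLS context.
// set_config(..., true) resets automatically on COMMIT or ROLLBACK, so
// nothing leaks to the next client that reuses this server connection.
async function withOrgContext(pool, orgId, fn) {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    await client.query("SELECT set_config('app.organization_id', $1, true)", [orgId]);
    const result = await fn(client);
    await client.query("COMMIT");
    return result;
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    client.release();
  }
}
```

Every RLS-scoped query then goes through one choke point, which makes it much harder to accidentally run a tenant query without its context set.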
**Advisory locks.** `pg_advisory_lock()` is session-scoped. Use `pg_advisory_xact_lock()` instead, which is transaction-scoped and releases automatically on commit/rollback.
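For example, claiming a job under a transaction-scoped lock might look like this. A sketch: `client` is assumed to be a checked-out pg client, and the lock key, table, and helper name are all hypothetical:

```javascript
// Mark a job as running while holding a transaction-scoped advisory lock.
// Safe under transaction pooling: the lock lives and dies with the
// transaction, not the session, so no manual unlock is needed.
async function claimJob(client, jobId) {
  await client.query("BEGIN");
  await client.query("SELECT pg_advisory_xact_lock($1)", [42]); // 42: arbitrary lock key
  await client.query("UPDATE jobs SET state = 'running' WHERE id = $1", [jobId]);
  await client.query("COMMIT"); // lock released automatically here
}
```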
**`LISTEN`/`NOTIFY`.** Subscriptions are session-scoped. If you're using `LISTEN`, you need a dedicated long-lived connection that bypasses PgBouncer - or use a separate direct PostgreSQL connection just for pub/sub.
```js
const { Client } = require("pg");

// Direct connection for LISTEN/NOTIFY, bypassing PgBouncer
const notifyClient = new Client({
  connectionString: process.env.DATABASE_DIRECT_URL, // points to :5432
});
await notifyClient.connect();
notifyClient.on("notification", (msg) => {
  console.log(msg.channel, msg.payload);
});
await notifyClient.query("LISTEN log_events");
```
## PgBouncer on Managed Databases
If you're using RDS, Supabase, Neon, or similar, you often don't need to run PgBouncer yourself.
- RDS: RDS Proxy is AWS's managed connection pooler. It's PgBouncer-like, works in transaction mode, integrates with IAM authentication. It costs extra ($0.015/vCPU-hour) but removes the operational burden.
- Supabase: Has a built-in connection pooler called Supavisor (which replaced their PgBouncer setup in 2023) working in transaction mode on port 6543. Use that URL for your application instead of the direct connection string.
- Neon: Serverless pooling built-in, similar to transaction mode.
- PlanetScale: MySQL-based, different story entirely.
If you're using Prisma with any connection pooler in transaction mode, you must add ?pgbouncer=true to your database URL - otherwise Prisma's internal prepared statement handling will crash:
```bash
# Without this flag, Prisma's prepared statements collide with
# PgBouncer/Supavisor in transaction mode
DATABASE_URL="postgresql://user:password@pgbouncer:6432/myapp?pgbouncer=true"
```
This one parameter has saved countless hours of "why is Prisma throwing random errors in production" debugging.
For self-hosted PostgreSQL, running PgBouncer yourself is the standard approach.
## Tuning max_connections in PostgreSQL
Once PgBouncer is in front, you can lower PostgreSQL's max_connections to something realistic:
```sql
-- See current value
SHOW max_connections;

-- See current active connections
SELECT count(*) FROM pg_stat_activity;
```
A reasonable formula for max_connections when using a pool:

```
max_connections = (pool_size * number_of_pools) + reserved_superuser_connections
```

For PgBouncer with default_pool_size = 25 and a few admin connections:

```
max_connections = 25 + 10 (headroom) = 35
```
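If you keep sizing decisions next to your infrastructure code, the same formula is a one-liner (the headroom default of 10 mirrors the example above):

```javascript
// max_connections sizing: pooled connections plus reserved headroom.
function sizeMaxConnections(poolSize, pools = 1, headroom = 10) {
  return poolSize * pools + headroom;
}

console.log(sizeMaxConnections(25)); // 35
```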
Set this in postgresql.conf:

```ini
max_connections = 35
shared_buffers = 1GB   # ~25% of RAM on the 4GB server from earlier
work_mem = 16MB        # per sort/hash operation, per connection
```
Lowering max_connections lets PostgreSQL allocate more memory to shared_buffers and work_mem, which directly improves query performance. The memory that was being eaten by connection overhead goes back to the query executor.
## The Checklist

If you're running Node.js with PostgreSQL in production:

- Is your pool size per process configured explicitly, or defaulting to 10?
- How many processes/replicas connect to the database? What's the total connection count?
- Are you within 80% of `max_connections` at peak?
- Do you have PgBouncer or equivalent in front of PostgreSQL?
- Are you using `set_config` for RLS context rather than `SET` statements?
- Are you using `pg_advisory_xact_lock` instead of `pg_advisory_lock`?
- Do you have a dedicated connection for `LISTEN`/`NOTIFY` that bypasses the pool?
Connection exhaustion is one of those problems that hides until traffic spikes, then appears as a cascade of unrelated-looking errors. The fix is not complicated, but it requires understanding what PostgreSQL is actually doing with each connection.
What connection pool setup are you running in production? Any gotchas with PgBouncer that aren't covered here? Comments are open.