Why Your Database Runs Out of Connections
PostgreSQL handles each connection with a dedicated OS process. At 100 connections, it's running 100 backend processes. At 500, the per-process memory overhead (typically several MB each, plus work_mem for every active query) can exhaust RAM, and queries slow to a crawl.
Node.js apps compound this—each serverless function instance wants its own connection pool.
PgBouncer sits between your app and PostgreSQL, multiplexing hundreds of app connections onto a small number of real database connections.
App instances (500 connections) → PgBouncer → PostgreSQL (20 connections)
PgBouncer Modes
Session Mode
One database connection is assigned per client connection and held for the client's entire session. Pooling only helps here when clients frequently connect and disconnect; a steady set of long-lived clients gets the same footprint as no pooling.
pool_mode = session
Use it only when you need session-scoped features—prepared statements, session-level SET, advisory locks—and can't use transaction mode.
Transaction Mode
Database connection is released back to the pool after each transaction.
pool_mode = transaction
This is what you want. Each query or transaction gets a connection, then releases it. 500 app connections can share 20 database connections comfortably.
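Because the server connection can change between transactions, every multi-statement transaction must run on one checked-out client. A minimal sketch of that pattern, assuming a node-postgres-style pool (the article doesn't prescribe a driver):

```javascript
// withTransaction pins BEGIN..COMMIT to a single pooled client. This matters
// under PgBouncer's transaction mode: statements issued through pool.query()
// may land on different server connections between transactions.
async function withTransaction(pool, fn) {
  const client = await pool.connect(); // check out one client
  try {
    await client.query('BEGIN');
    const result = await fn(client);   // all queries use this client
    await client.query('COMMIT');
    return result;
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();                  // return it to the app-side pool
  }
}
```

Usage: `await withTransaction(pool, (c) => c.query('UPDATE accounts SET ...'))`.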
Limitation: Session state such as SET variables doesn't carry across transactions (it's tied to a specific server connection), and prepared statements require PgBouncer 1.21+ with max_prepared_statements configured. Prisma sidesteps this when you add pgbouncer=true to the connection string.
Statement Mode
Connection released after each statement. Multi-statement transactions don't work; only suitable for simple autocommit workloads.
Configuration
# /etc/pgbouncer/pgbouncer.ini
[databases]
myapp = host=postgres-primary port=5432 dbname=myapp
[pgbouncer]
listen_port = 6432
listen_addr = *
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000 # max app connections
default_pool_size = 20 # connections to PostgreSQL per user/db pair
min_pool_size = 5
reserve_pool_size = 5
reserve_pool_timeout = 5
server_idle_timeout = 600 # close idle DB connections after 10 min
client_idle_timeout = 0 # don't close idle app connections
log_connections = 0
log_disconnections = 0
# /etc/pgbouncer/userlist.txt
"myapp_user" "scram-sha-256$..."
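With auth_type = scram-sha-256, the verifier in userlist.txt must match what PostgreSQL stores; you can copy it out with `SELECT usename, passwd FROM pg_shadow WHERE usename = 'myapp_user';`. A small helper (illustrative, not part of PgBouncer) that renders the expected line format:

```javascript
// Render one line of PgBouncer's userlist.txt: "username" "verifier".
// The SCRAM verifier itself comes from PostgreSQL's pg_shadow view, e.g.:
//   SELECT usename, passwd FROM pg_shadow WHERE usename = 'myapp_user';
function userlistLine(username, verifier) {
  return `"${username}" "${verifier}"`;
}
```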
Docker Setup
# docker-compose.yml
services:
  pgbouncer:
    image: bitnami/pgbouncer:latest
    environment:
      POSTGRESQL_HOST: postgres
      POSTGRESQL_PORT: 5432
      POSTGRESQL_DATABASE: myapp
      POSTGRESQL_USERNAME: myapp_user
      POSTGRESQL_PASSWORD: ${DB_PASSWORD}
      PGBOUNCER_POOL_MODE: transaction
      PGBOUNCER_MAX_CLIENT_CONN: 1000
      PGBOUNCER_DEFAULT_POOL_SIZE: 20
    ports:
      - "6432:6432"
    depends_on:
      - postgres
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: myapp_user
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
Application Connection
// Point to PgBouncer, not PostgreSQL directly
const DATABASE_URL = process.env.DATABASE_URL;
// postgresql://myapp_user:password@pgbouncer:6432/myapp
// Prisma: disable prepared statements for transaction mode
// DATABASE_URL="...?pgbouncer=true&connection_limit=1"
// schema.prisma
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
directUrl = env("DIRECT_DATABASE_URL") // For migrations, bypass PgBouncer
}
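The pooled and direct URLs differ only in host, port, and query parameters. A hedged sketch that derives one from the other (the helper name and defaults are illustrative; the host/port match the PgBouncer setup above):

```javascript
// Derive the PgBouncer-facing URL from a direct PostgreSQL URL.
// Assumes PgBouncer is reachable at host "pgbouncer" on port 6432,
// matching the docker-compose setup above.
function pooledUrl(directUrl, pgbouncerHost = 'pgbouncer', pgbouncerPort = 6432) {
  const u = new URL(directUrl);
  u.hostname = pgbouncerHost;
  u.port = String(pgbouncerPort);
  // Prisma flags: skip prepared statements, one connection per engine instance
  u.searchParams.set('pgbouncer', 'true');
  u.searchParams.set('connection_limit', '1');
  return u.toString();
}
```

Set DATABASE_URL to the pooled form and keep DIRECT_DATABASE_URL on port 5432 so migrations bypass the pooler.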
Monitoring
# Connect to the PgBouncer admin console (the user must be listed in admin_users or stats_users in pgbouncer.ini)
psql -p 6432 -U pgbouncer pgbouncer
# View pool stats
SHOW POOLS;
# Output:
# database | user       | cl_active | cl_waiting | sv_active | sv_idle | sv_used | maxwait
# myapp    | myapp_user |        47 |          0 |        20 |       0 |       0 |       0
# View all clients
SHOW CLIENTS;
# View config
SHOW CONFIG;
cl_waiting > 0 means clients are waiting for a server connection—raise default_pool_size (if PostgreSQL has memory headroom) or optimize the slow queries holding connections.
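To automate that check, you can scrape SHOW POOLS output (e.g. from `psql -p 6432 -U pgbouncer pgbouncer -c 'SHOW POOLS'`) and flag any pool with waiters. A sketch of the parsing side; the function name is illustrative:

```javascript
// Parse the tabular output of SHOW POOLS and return pools where clients
// are waiting. Column order varies between PgBouncer versions, so we map
// cells by header name rather than by position.
function poolsWithWaiters(showPoolsOutput) {
  const lines = showPoolsOutput
    .trim()
    .split('\n')
    .filter((l) => l.includes('|')); // drop the ----+---- separator line
  const headers = lines[0].split('|').map((h) => h.trim());
  return lines
    .slice(1)
    .map((line) => {
      const cells = line.split('|').map((c) => c.trim());
      return Object.fromEntries(headers.map((h, i) => [h, cells[i]]));
    })
    .filter((row) => Number(row.cl_waiting) > 0);
}
```

Run it on a schedule and page when the returned list is non-empty for more than a few intervals.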
Supabase and Managed Postgres
Cloud providers handle this for you:
Supabase:
Direct: postgresql://...@db.xxx.supabase.co:5432/postgres
Pooled: postgresql://...@aws-0-us-east-1.pooler.supabase.com:6543/postgres
Neon: Built-in serverless connection pooling
PlanetScale: Built-in connection pooling (MySQL)
If you're on a managed platform, use their pooling endpoint—PgBouncer is already set up for you.
For self-hosted PostgreSQL with more than 50 app instances: PgBouncer is not optional.
Database configuration, PgBouncer setup, and connection pooling guide included: Whoff Agents AI SaaS Starter Kit.