PostgreSQL Connection Pooling: PgBouncer, Supavisor & Built-In

Philip McClarence
Every PostgreSQL connection creates a new OS process. Each one allocates roughly 5-10 MB of memory. A server with max_connections = 200 might use 2 GB just for connection overhead -- before a single query runs. At 500 connections, you're at 5 GB of memory consumed by connection infrastructure alone.
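The arithmetic is worth making explicit. A rough sketch, assuming ~10 MB per backend as in the figures above (real per-backend usage varies with work_mem, catalog caches, and extensions):

```python
def connection_overhead_mb(connections: int, mb_per_backend: float = 10.0) -> float:
    """Rough memory consumed by PostgreSQL backend processes alone.

    The 10 MB default is an assumption; measure your own backends
    before relying on it for capacity planning.
    """
    return connections * mb_per_backend

print(connection_overhead_mb(200))  # 2000.0 MB, i.e. ~2 GB
print(connection_overhead_mb(500))  # 5000.0 MB, i.e. ~5 GB
```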

If you've ever seen "too many connections" errors and responded by increasing max_connections, you've treated the symptom while creating a bigger problem.

Why More Connections Make Things Worse

Modern architectures multiply connections fast. 20 container replicas with 10 connections each = 200 persistent connections. During a rolling deployment, old and new replicas coexist briefly, doubling the count. Serverless functions are worse -- each invocation might open its own connection, creating hundreds of short-lived connections per second.

Each new connection takes 50-100ms to establish (with TLS). PostgreSQL must fork a process, authenticate, load shared catalog data, and negotiate encryption. That's not free.

There are two walls:

  1. max_connections exhaustion: application gets "too many connections" errors
  2. Memory exhaustion: so much RAM consumed by connection overhead that there's nothing left for shared_buffers, work_mem, and the OS page cache

Connection pooling solves both by maintaining a fixed pool of database connections and sharing them among application connections. A pool of 20 database connections can serve hundreds of application connections because most connections are idle at any moment.
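The sharing idea can be sketched with a toy multiplexer: a fixed set of "connections" in a queue, borrowed per request and returned the moment the work finishes. This illustrates the principle only, not PgBouncer's actual implementation:

```python
import queue
import threading

POOL_SIZE = 20
pool = queue.Queue()
for i in range(POOL_SIZE):
    pool.put(f"conn-{i}")  # stand-ins for real database connections

peak = 0      # highest number of connections in use at once
in_use = 0
lock = threading.Lock()

def handle_request(_n):
    global peak, in_use
    conn = pool.get()  # borrow a connection (blocks if pool is exhausted)
    with lock:
        in_use += 1
        peak = max(peak, in_use)
    # ... run one transaction on `conn` ...
    with lock:
        in_use -= 1
    pool.put(conn)     # return it immediately so another client can reuse it

# 200 concurrent "clients" are served by only 20 pooled connections
threads = [threading.Thread(target=handle_request, args=(n,)) for n in range(200)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)  # never exceeds POOL_SIZE
```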

Diagnosing Connection Problems

-- Connection count by user and application, broken down by state
SELECT
    usename AS username,
    application_name,
    count(*) AS connection_count,
    count(*) FILTER (WHERE state = 'idle') AS idle_connections,
    count(*) FILTER (WHERE state = 'idle in transaction') AS idle_in_transaction
FROM pg_stat_activity
WHERE backend_type = 'client backend'
GROUP BY usename, application_name
ORDER BY connection_count DESC;

-- How close are you to max_connections?
SELECT
    count(*) AS active_connections,
    current_setting('max_connections')::int AS max_connections,
    round(100.0 * count(*) / current_setting('max_connections')::int, 1)
        AS utilization_percentage
FROM pg_stat_activity
WHERE backend_type = 'client backend';

Above 70% utilization means you're at risk during traffic spikes. A high idle_connections count relative to total means connections are held open unnecessarily -- a pooler would reclaim these.

-- Find idle-in-transaction connections wasting pool capacity
SELECT
    pid,
    usename,
    application_name,
    state,
    now() - state_change AS idle_duration,
    left(query, 100) AS last_query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
  AND now() - state_change > interval '5 minutes'
ORDER BY state_change;

These are the worst offenders. They hold locks, snapshots, and pool slots while doing absolutely nothing.

PgBouncer Setup (The Standard Choice)

PgBouncer is the most widely deployed PostgreSQL pooler -- lightweight, stable, battle-tested:

; /etc/pgbouncer/pgbouncer.ini

[databases]
myapp = host=127.0.0.1 port=5432 dbname=myapp

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt

; Transaction mode: connection returned after each transaction
pool_mode = transaction

; Pool sizing
default_pool_size = 20
min_pool_size = 5
reserve_pool_size = 5
reserve_pool_timeout = 3

; Connection limits
max_client_conn = 1000
max_db_connections = 50

; Timeouts
server_idle_timeout = 600
query_timeout = 300

The critical setting is pool_mode. Transaction mode gives the best connection reuse but means you can't rely on session-level state (SQL-level PREPARE, SET commands, LISTEN/NOTIFY, temporary tables) across transactions. Note that PgBouncer 1.21+ can track protocol-level named prepared statements in transaction mode via the max_prepared_statements setting.

# PgBouncer admin console
psql -p 6432 -U pgbouncer pgbouncer

SHOW POOLS;
SHOW STATS;
SHOW SERVERS;

The Pool Sizing Formula

The optimal pool size depends on your hardware, not your application concurrency:

optimal_pool_size = (CPU_cores * 2) + number_of_disks

For 8 cores with SSD (count as 1 disk): ~17 connections. This seems small, but PostgreSQL can only execute as many queries in parallel as it has CPU cores. Beyond that, connections compete for CPU and context switching reduces throughput.
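As a quick calculator (this is just the heuristic above; treat the result as a starting point to benchmark against, not a law):

```python
def optimal_pool_size(cpu_cores: int, disks: int = 1) -> int:
    """(cores * 2) + effective spindles; an SSD counts as 1 disk."""
    return cpu_cores * 2 + disks

print(optimal_pool_size(8))   # 17 -- 8 cores with one SSD
print(optimal_pool_size(16))  # 33
```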

-- Check how many connections are truly active at any time
SELECT
    count(*) FILTER (WHERE state = 'active') AS truly_active,
    count(*) FILTER (WHERE state = 'idle') AS idle,
    count(*) FILTER (WHERE state = 'idle in transaction') AS idle_in_txn,
    count(*) AS total
FROM pg_stat_activity
WHERE backend_type = 'client backend';

If truly_active is consistently below 20, a pool of 20-25 is sufficient regardless of your application instance count.

Supavisor (Cloud-Native Alternative)

Supavisor is Supabase's open-source pooler built in Elixir, designed for multi-tenant cloud environments:

tenants:
  - id: "my-tenant"
    db_host: "localhost"
    db_port: 5432
    db_database: "myapp"
    pool_size: 20
    mode: "transaction"

Key advantages over PgBouncer:

  • Multi-threaded (handles more client connections per instance; PgBouncer is single-threaded)
  • Supports named prepared statements in transaction mode (PgBouncer only gained this in 1.21)
  • Per-tenant pool isolation

Application-Level Pooling

Application-level pools still have a role: use them alongside (not instead of) an external pooler:

# Python: SQLAlchemy
from sqlalchemy import create_engine

engine = create_engine(
    "postgresql://user:password@pgbouncer-host:6432/myapp",
    pool_size=5,            # persistent connections per app instance
    max_overflow=10,        # temporary extra connections under burst load
    pool_timeout=30,        # seconds to wait for a free connection
    pool_recycle=3600,      # recycle connections older than an hour
    pool_pre_ping=True,     # validate connections before handing them out
)
// Node.js: pg
const { Pool } = require('pg');

const pool = new Pool({
  host: 'pgbouncer-host',
  port: 6432,
  max: 5,                         // connections per app instance
  idleTimeoutMillis: 30000,       // close idle connections after 30s
  connectionTimeoutMillis: 5000,  // fail fast when no connection is free
});

Keep the application pool small (5-10 per instance) when connecting through PgBouncer. The application pool handles lifecycle; PgBouncer handles multiplexing.
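The budget is worth checking explicitly. A hypothetical sanity-check helper, using the numbers from the SQLAlchemy and PgBouncer examples above:

```python
def connection_budget(instances: int, app_pool_size: int, app_overflow: int,
                      max_client_conn: int) -> tuple[int, bool]:
    """Worst-case client connections arriving at PgBouncer, and whether
    they fit under its max_client_conn limit."""
    worst_case = instances * (app_pool_size + app_overflow)
    return worst_case, worst_case <= max_client_conn

# 20 app instances, pool_size=5, max_overflow=10, max_client_conn=1000
print(connection_budget(20, 5, 10, 1000))  # (300, True)
```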

Preventing Connection Waste

-- Terminate idle-in-transaction connections after 60 seconds
ALTER SYSTEM SET idle_in_transaction_session_timeout = '60s';
SELECT pg_reload_conf();

-- Kill runaway queries after 5 minutes
ALTER SYSTEM SET statement_timeout = '300s';
SELECT pg_reload_conf();

Quick Decision Guide

  • PgBouncer: battle-tested, lightweight, works for most deployments
  • Supavisor: multi-tenant, cloud-native, supports prepared statements in transaction mode
  • pgcat: adds sharding and load balancing
  • Application pool only: fine for a single instance with low concurrency, insufficient for anything larger

PostgreSQL does not have built-in connection pooling as of version 17. Every client connection creates a new OS process. Until that changes, an external pooler is essential for any production workload with more than a handful of concurrent users.
