---
title: "PostgreSQL Pool Sizing That Actually Works"
published: true
description: "PgBouncer transaction mode, prepared statement fixes, and the pool sizing formula that keeps multi-tenant SaaS running under pressure."
tags: postgresql, architecture, performance, cloud
canonical_url: https://blog.mvpfactory.co/postgresql-pool-sizing-that-actually-works
---

# PostgreSQL Connection Pooling Under Pressure

## What We Will Build

Let me show you the connection pooling setup I use on every multi-tenant SaaS project. By the end of this walkthrough, you will have a properly sized PgBouncer configuration, a fix for the prepared statement problem that silently breaks transaction mode, and the monitoring queries that catch pool saturation before your customers do.

When your tenant count doubles in a year, the first thing that buckles is not your application code. It is your database connection layer. I have seen this enough times to know: connection pooling is the most under-monitored piece of infrastructure right up until it becomes the most urgent fire.

## Prerequisites

- A PostgreSQL instance (any recent version)
- PgBouncer installed and running in transaction mode
- A JVM-based service using HikariCP (examples in Kotlin, but the concepts apply everywhere)

## Step 1: Fix the Prepared Statement Problem

Here is the gotcha that will save you hours. PgBouncer in transaction mode multiplexes connections — when your transaction ends, that server connection returns to the pool and may serve a completely different client next. But prepared statements are bound to a server-side session, not a transaction.

Client A prepares `stmt_1` on Connection 5. Connection 5 gets reassigned to Client B. Client A's next `EXECUTE stmt_1` lands on Connection 7, where it was never prepared. The result is the infamous `ERROR: prepared statement "stmt_1" does not exist`, thrown far from the code that caused it.

| Approach | Mechanism | Tradeoff |
|---|---|---|
| `DEALLOCATE ALL` on checkout | Reset prepared statements when a connection is assigned | Adds ~0.5ms per transaction, eliminates stale state |
| Disable prepared statements (driver-level) | Use simple query protocol instead of extended | Marginal CPU increase on Postgres (~3-7%), but fully safe |
| `pgcat` / `Supavisor` protocol-aware pooling | Pooler intercepts and replays `PARSE` messages | Zero client-side changes, but newer and less proven in the wild |
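
If you go with the first approach, it is a pooler-side change rather than an application change. A minimal sketch of the relevant `pgbouncer.ini` settings, assuming stock PgBouncer (note that `server_reset_query` is normally ignored in transaction mode, which is why the `_always` flag is needed):

```ini
; sketch of approach 1: reset prepared statements between clients
server_reset_query = DEALLOCATE ALL
; by default the reset query only runs in session pooling;
; this forces it after every transaction in transaction mode
server_reset_query_always = 1
```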

Here is the minimal setup to get this working. For most teams, disabling prepared statements at the driver level is the pragmatic choice:

```kotlin
import com.zaxxer.hikari.HikariConfig

// HikariCP + pgjdbc config for PgBouncer transaction mode
val config = HikariConfig().apply {
    jdbcUrl = "jdbc:postgresql://pgbouncer:6432/mydb"
    // disables named server-side prepared statements in pgjdbc
    addDataSourceProperty("prepareThreshold", "0")
    // and the client-side prepared statement cache
    addDataSourceProperty("preparedStatementCacheQueries", "0")
}
```


The ~5% CPU overhead on Postgres is a rounding error compared to cascading failures from stale prepared statement handles at 2 AM.
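
To confirm the setting took, you can look at `pg_prepared_statements`, the Postgres view listing prepared statements in the current server session. A quick verification sketch (connection details are placeholders; under transaction pooling each query may land on a different server connection, so a zero count only proves the connections this session touched were clean):

```kotlin
import java.sql.DriverManager

// With prepareThreshold=0, repeated executions should never promote
// a query to a named server-side prepared statement.
fun main() {
    val url = "jdbc:postgresql://pgbouncer:6432/mydb?prepareThreshold=0"
    DriverManager.getConnection(url, "app", "secret").use { conn ->
        repeat(10) {
            conn.prepareStatement("SELECT 1").use { ps ->
                ps.executeQuery().use { rs -> check(rs.next()) }
            }
        }
        conn.createStatement().use { st ->
            st.executeQuery("SELECT count(*) FROM pg_prepared_statements").use { rs ->
                rs.next()
                println("server-side prepared statements: ${rs.getInt(1)}") // expect 0
            }
        }
    }
}
```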

## Step 2: Size Your Pool With the Formula

Most teams crank `max_connections` to 200 or beyond and then wonder why performance degrades under load. The PostgreSQL wiki publishes a sizing formula that actually works:

```python
# PostgreSQL wiki connection-sizing formula
optimal_connections = (core_count * 2) + effective_spindle_count
```


For a modern 8-core cloud instance with SSDs (effective spindle count ≈ 1):

```plaintext
(8 * 2) + 1 = 17 connections
```


Beyond that count, you are paying for context switching, lock contention, and cache thrashing. More connections does not mean more throughput. It means less.

For multi-tenant workloads, I layer in a service multiplier:

```python
from math import ceil, floor

# split the Postgres connection budget across services, hold back ~15% as reserve
pgbouncer_default_pool_size = floor(optimal_connections / num_services)
reserve_pool = ceil(optimal_connections * 0.15)
```


For 4 services sharing a 17-connection Postgres:

| Parameter | Value |
|---|---|
| `default_pool_size` | 4 per service |
| `reserve_pool_size` | 3 |
| `max_db_connections` | 17 |
| `max_client_conn` | 1000 |

This lets PgBouncer queue up to 1,000 application connections while never overwhelming Postgres beyond 17 real server connections.
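
Wired into `pgbouncer.ini`, that sizing looks roughly like this. A sketch, assuming a single database named `mydb`; the `[databases]` connection string is a placeholder for your own Postgres:

```ini
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
pool_mode = transaction
default_pool_size = 4
reserve_pool_size = 3
; how many seconds a client waits before the reserve pool kicks in
reserve_pool_timeout = 3
max_db_connections = 17
max_client_conn = 1000
```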

## Step 3: Set Up Monitoring

Run these against PgBouncer's admin console before your p99 tells you something is wrong:

```sql
-- Connections waiting for a server connection (saturation signal)
SHOW POOLS;
-- Watch: cl_waiting > 0 sustained = pool exhaustion incoming

-- Per-pool traffic and wait-time statistics
SHOW STATS;
-- Watch: avg_wait_time > 50ms = time to scale or optimize
-- (avg_wait_time is reported in microseconds, so alert at 50000)
```


When `cl_waiting` stays above zero for more than 30 seconds, you are already degraded. Alert on it.
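
If you want that alert in code rather than a dashboard, here is a minimal watchdog sketch, assuming a user listed in PgBouncer's `stats_users` and placeholder credentials. Note `preferQueryMode=simple`: the admin console does not speak the extended query protocol that pgjdbc uses by default:

```kotlin
import java.sql.DriverManager

// Polls PgBouncer's admin console and flags sustained cl_waiting > 0.
// Host, credentials, poll interval, and threshold are illustrative.
fun main() {
    val url = "jdbc:postgresql://pgbouncer:6432/pgbouncer?preferQueryMode=simple"
    DriverManager.getConnection(url, "stats_user", "secret").use { conn ->
        var breachedSince: Long? = null
        while (true) {
            var waiting = 0
            conn.createStatement().use { st ->
                st.executeQuery("SHOW POOLS").use { rs ->
                    while (rs.next()) waiting += rs.getInt("cl_waiting")
                }
            }
            val now = System.currentTimeMillis()
            breachedSince = if (waiting > 0) breachedSince ?: now else null
            val since = breachedSince
            if (since != null && now - since >= 30_000) {
                System.err.println("ALERT: cl_waiting=$waiting sustained for 30s+")
            }
            Thread.sleep(5_000)
        }
    }
}
```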

## Gotchas

**Session pinning will silently destroy your pool capacity.** These operations force PgBouncer to pin a client to a single server connection, defeating multiplexing entirely:

- `SET statement_timeout = ...` — use `SET LOCAL` inside a transaction instead
- `LISTEN / NOTIFY` — move to a dedicated non-pooled connection
- Advisory locks (`pg_advisory_lock`) — use `pg_advisory_xact_lock` for transaction-scoped variants
- Temporary tables — create and drop within the same transaction

One `SET` statement outside a transaction block can pin a connection for the lifetime of the client session. No warning, no error. I have seen this single mistake reduce effective pool capacity by 60% in production. Took hours to find because everything *looked* fine in the application logs.
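
The fix for the first item on that list is mechanical once you see it. A sketch of the `SET LOCAL` pattern over plain JDBC; the connection details, `tenants` table, and 5-second timeout are made up for illustration:

```kotlin
import java.sql.DriverManager

// SET LOCAL scopes the setting to the current transaction, so the server
// connection returns to the pool with no leftover session state.
fun main() {
    val url = "jdbc:postgresql://pgbouncer:6432/mydb"
    DriverManager.getConnection(url, "app", "secret").use { conn ->
        conn.autoCommit = false // open a transaction block
        conn.createStatement().use { st ->
            st.execute("SET LOCAL statement_timeout = '5s'") // reverts at COMMIT
            st.execute("SELECT count(*) FROM tenants")       // the bounded query
        }
        conn.commit()
    }
}
```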

**Choosing the right pooler matters.** Here is how they compare:

| Feature | PgBouncer | pgcat | Supavisor |
|---|---|---|---|
| Maturity | 15+ years in production | ~2 years, production at Instacart | ~2 years, production at Supabase |
| Protocol-aware prepared stmts | No | Yes | Yes |
| Multi-threaded | No (single-threaded) | Yes (Rust, Tokio) | Yes (Elixir/BEAM) |
| Sharding support | No | Yes | No |
| Throughput (single instance) | ~15K TPS | ~25K TPS | ~20K TPS |
| Operational complexity | Low | Medium | Medium |

PgBouncer is the safe default. Boring is good for infrastructure. If you need prepared statement transparency or you are hitting single-threaded throughput limits, pgcat is worth evaluating — but run shadow traffic against it for a few weeks first.

## Wrapping Up

Three rules I apply on every project to keep the pool healthy:

1. Set `prepareThreshold=0` at the driver level when using PgBouncer transaction mode. The CPU tradeoff is negligible.
2. Size your pool using the formula, not gut feel. `(cores * 2) + 1` for Postgres `max_connections`, then divide across services for PgBouncer `default_pool_size`.
3. Alert on `cl_waiting > 0` sustained for 30+ seconds. This is the earliest signal of pool saturation, well before latency percentiles start climbing.

Get these right and your connection layer stops being the thing that wakes you up at night.