---
title: "PostgreSQL Connection Pooling: Handle 10K+ Mobile Users Without Melting Your Database"
published: true
description: "A hands-on guide to configuring PgBouncer for mobile backends — with production configs, two-layer pooling architecture, and the gotchas that will save you a 3 AM incident."
tags: postgresql, architecture, mobile, api
canonical_url: https://mvpfactory.co/blog/postgresql-connection-pooling-mobile-backends
---
## What We're Building
By the end of this tutorial, you'll have a production-ready PgBouncer configuration that lets a single PostgreSQL instance serve 10,000+ concurrent mobile users. I'll walk you through the two-layer pooling pattern I use in every mobile backend project and show you exactly why your web pooling config falls apart under mobile traffic.
## Prerequisites
- A running PostgreSQL 15+ instance
- PgBouncer 1.21+ installed (`apt install pgbouncer` or your package manager of choice)
- Basic familiarity with PostgreSQL connection settings
- An application server with a connection pool (HikariCP, Prisma, SQLAlchemy — any will do)
## Step 1: Understand Why Mobile Traffic Is Different
Here is the minimal mental model. Web clients hold stable connections: load page, query, return to pool. Mobile clients are chaos. Users open your app on the morning commute, the OS has killed your background process overnight, and suddenly you get 5,000 simultaneous fresh connection attempts.
Each PostgreSQL connection is a separate backend process that costs **5–10 MB of RAM**. The default `max_connections` is 100. Do the math: you hit a wall fast.
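To make that arithmetic concrete, here's a back-of-the-envelope calculation. The per-connection and per-client figures are the rough estimates from this article, not exact measurements:

```python
# Back-of-the-envelope: RAM for raw PostgreSQL connections vs. PgBouncer
# serving the same number of mobile clients.
PER_CONN_MB = 8      # rough midpoint of the 5-10 MB per-backend estimate
PER_CLIENT_KB = 2    # approximate PgBouncer per-client overhead

def raw_postgres_ram_mb(clients: int) -> int:
    """RAM if every mobile client held its own PostgreSQL backend."""
    return clients * PER_CONN_MB

def pooled_ram_mb(clients: int, pool_size: int) -> float:
    """RAM with PgBouncer: tiny per-client cost plus a small backend pool."""
    return clients * PER_CLIENT_KB / 1024 + pool_size * PER_CONN_MB

print(raw_postgres_ram_mb(10_000))       # 80000 MB -- about 80 GB, not happening
print(round(pooled_ram_mb(10_000, 25)))  # 220 MB
```

Roughly 80 GB versus a couple hundred megabytes. That gap is the entire case for pooling.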
| Pooler | Queries/sec | Memory per client | p99 latency overhead |
|---|---|---|---|
| PgBouncer (transaction) | ~18,000 | ~2 KB | +0.1 ms |
| Supavisor | ~14,000 | ~3 KB | +0.3 ms |
| PgPool-II | ~9,500 | ~5–10 MB | +1.2 ms |
*Benchmarked with `pgbench -c 200 -j 64 -T 120` on PostgreSQL 16, 4-vCPU/16 GB. Your numbers will vary — run your own.*
PgBouncer in transaction mode wins on throughput and memory. Let me show you the config.
## Step 2: Configure PgBouncer for Mobile
Here is the configuration I use for 10K+ concurrent users. Create or edit `/etc/pgbouncer/pgbouncer.ini`:
```ini
[databases]
myapp = host=127.0.0.1 port=5432 dbname=myapp

[pgbouncer]
listen_port = 6432
listen_addr = 0.0.0.0
auth_type = scram-sha-256
; scram-sha-256 needs a credentials source:
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 10000
default_pool_size = 25
min_pool_size = 10
reserve_pool_size = 5
reserve_pool_timeout = 3
server_idle_timeout = 300
client_idle_timeout = 60
query_wait_timeout = 10
```
**`pool_mode = transaction`** — connections return to the pool after each transaction, not after the client disconnects. This is the single most important setting.
**`default_pool_size = 25`** — the classic starting formula, `(CPU cores × 2) + 1`, gives 9 for a 4-vCPU server. I run 25 because mobile workloads spend much of their time waiting on I/O rather than CPU, so the extra slots buy parallelism headroom without overloading Postgres. Treat 9 as the floor and raise it while watching the pool metrics in Step 4.
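As a sanity check on that sizing rule (a sketch; the right constant depends entirely on your workload's CPU/I/O mix):

```python
def starting_pool_size(vcpus: int) -> int:
    """Classic conservative starting point: (cores * 2) + 1."""
    return vcpus * 2 + 1

print(starting_pool_size(4))   # 9 -- the floor for a 4-vCPU box
print(starting_pool_size(8))   # 17
```

Start at the floor, then grow toward something like 25 only if clients begin queueing; shrinking an oversized pool under load is much more painful than growing an undersized one.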
**`client_idle_timeout = 60`** — aggressive for web, perfect for mobile. If a mobile client is idle for 60 seconds, it's probably been killed by the OS anyway. Reclaim that slot.
**`query_wait_timeout = 10`** — fail fast. Mobile UIs should show an error within 10 seconds, not spin forever. This prevents queue pileup during connection storms.
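With those settings in place, the only app-side change is the port: point your connection string at PgBouncer (6432) instead of Postgres (5432). The hostname and credentials below are placeholders:

```
# Before: app -> Postgres directly
DATABASE_URL=postgresql://app_user:secret@db-host:5432/myapp

# After: app -> PgBouncer
DATABASE_URL=postgresql://app_user:secret@db-host:6432/myapp
```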
## Step 3: Set Up Two-Layer Pooling
Here is the pattern that actually scales — and where most teams go wrong by pooling at only one layer:
```
Mobile Clients (10K+)
  → API Gateway (rate limiting + auth)
  → App Servers (3–5 instances, pool_size=20 each)
  → PgBouncer (25 active PG connections)
  → PostgreSQL (max_connections=120)
```
Set your application pool (HikariCP, Prisma, etc.) to a **small** size — 15–20 per instance. PgBouncer multiplexes all of them down. Ten thousand mobile clients become ~80 app-level connections, which PgBouncer funnels into 25 active backends. That's where the leverage is.
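The fan-in works out like this — a sketch using the numbers above, where instance count and pool sizes are illustrative:

```python
# Connection fan-in across the two pooling layers.
mobile_clients = 10_000
app_instances = 4
app_pool_size = 20        # per-instance app pool (HikariCP, Prisma, ...)
pgbouncer_pool = 25       # default_pool_size from pgbouncer.ini

app_connections = app_instances * app_pool_size   # what PgBouncer sees
backends = min(app_connections, pgbouncer_pool)   # what Postgres sees

print(app_connections)             # 80
print(backends)                    # 25
print(mobile_clients // backends)  # 400 clients per live backend
```

Each layer compresses the count by an order of magnitude or more; that 400:1 ratio at the bottom is what lets one Postgres instance stay calm under mobile chaos.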
## Step 4: Monitor the Right Metrics
Connect to PgBouncer's admin console (`psql -p 6432 -U pgbouncer pgbouncer`) and run `SHOW POOLS;` for `cl_waiting` and `sv_idle`, plus `SHOW STATS;` for `avg_wait_time`. Three numbers predict failures before they happen:
- **`cl_waiting` > 0 sustained** — pool is saturated, scale out or optimize queries
- **`avg_wait_time` > 100 ms** — clients are queueing, you're minutes from user-facing errors
- **`sv_idle` at 0** — no headroom left, bump `default_pool_size`
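A minimal sketch of alerting logic on those three signals. The sample row below is hypothetical; in production you'd fetch the real values over the admin console (e.g. with psycopg2 against port 6432, dbname `pgbouncer`), and note that PgBouncer reports wait times in microseconds:

```python
# Evaluate one pool snapshot against the three thresholds above.
def pool_alerts(row: dict) -> list[str]:
    alerts = []
    if row["cl_waiting"] > 0:
        alerts.append("pool saturated: clients are waiting")
    if row["avg_wait_time_us"] > 100_000:   # 100 ms, in microseconds
        alerts.append("queueing: avg wait over 100 ms")
    if row["sv_idle"] == 0:
        alerts.append("no headroom: consider raising default_pool_size")
    return alerts

# Hypothetical snapshot during a connection storm
sample = {"cl_waiting": 3, "avg_wait_time_us": 250_000, "sv_idle": 0}
for alert in pool_alerts(sample):
    print(alert)
```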
Put these in a Grafana dashboard. Alert on them during peak hours.
## Gotchas
**The prepared statement trap.** Transaction mode reassigns connections between transactions, which breaks named prepared statements. On PgBouncer 1.21+, set `max_prepared_statements` to a non-zero value so PgBouncer tracks prepared statements at the protocol level across server connections. For older versions, disable prepared statement caching in your ORM.
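On 1.21 or newer, the fix is one line in `pgbouncer.ini` — 200 here is an arbitrary starting budget; size it to your ORM's statement cache:

```ini
[pgbouncer]
; track named prepared statements across server connections (1.21+, default 0 = off)
max_prepared_statements = 200
```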
**Supavisor is not always better.** It shines for multi-tenant SaaS routing across isolated databases. For a single-database mobile backend, it adds latency and operational overhead you don't need.
**Don't skip the application-level pool.** Running app servers directly against PgBouncer without their own pool means every HTTP request opens a new PgBouncer connection. You lose connection reuse at the app layer and put unnecessary pressure on the pooler.
**Zombie connections from cellular networks.** Mobile users enter tunnels and elevators. TCP connections hang. That `client_idle_timeout = 60` setting isn't optional — it's your defense against ghost connections holding pool slots.
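For the zombie-connection case, TCP keepalives give the kernel a second line of defense behind the idle timeout. These are real PgBouncer settings, but the interval values here are illustrative:

```ini
[pgbouncer]
; probe client sockets so hung cellular connections get reaped by the kernel
tcp_keepalive = 1
tcp_keepidle = 60
tcp_keepintvl = 10
tcp_keepcnt = 3
```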
## Wrapping Up
The combination of PgBouncer in transaction mode with two-layer pooling handles 10K+ mobile users on a single PostgreSQL instance. Set aggressive client timeouts, keep `default_pool_size` tied to your CPU count, and monitor `cl_waiting` like your uptime depends on it — because it does. Start with the config above, run `pgbench` against your own hardware, and adjust from there.