DEV Community

1xApi

Posted on • Originally published at 1xapi.com

How to Implement Database Connection Pooling for High-Performance Node.js APIs in 2026


Building a high-performance API in Node.js isn't just about writing efficient queries—it's about managing database connections intelligently. In this guide, you'll learn how to implement connection pooling to handle thousands of concurrent requests without exhausting your database server.

Why Connection Pooling Matters in 2026

Every database connection carries overhead. Establishing a new TCP connection, performing authentication, and negotiating SSL/TLS can take 10-50ms per connection. For an API handling 1,000 requests per second, that's catastrophic:

Without pooling: 1,000 × 50ms = 50 seconds of overhead per second
With pooling: 20 connections × 50ms = 1 second (reused!)

Connection pooling solves this by maintaining a pool of pre-established connections that can be reused across requests. In 2026, with Node.js 24 and Bun 3.0 offering improved connection handling, there's no excuse for creating new connections per request.
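The back-of-envelope numbers above can be reproduced with a quick calculation, using the article's figures (50 ms handshake cost, 1,000 req/s, 20 pooled connections):

```javascript
// Handshake cost paid on every request when no pool is used.
function handshakeOverheadMs(requestsPerSecond, handshakeMs) {
  return requestsPerSecond * handshakeMs;
}

// With a pool, the handshake cost is paid once per connection, then amortized.
function pooledOverheadMs(poolSize, handshakeMs) {
  return poolSize * handshakeMs;
}

const withoutPooling = handshakeOverheadMs(1000, 50); // 50,000 ms of overhead per second of traffic
const withPooling = pooledOverheadMs(20, 50);         // 1,000 ms, paid once at warm-up
```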

Understanding Connection Pool Modes

Before implementing, understand the three main pooling strategies:

1. Client-Side Pooling (Application Level)

The Node.js application manages connection reuse. Simplest to implement, works well for single-instance deployments.
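To make the mechanics concrete, here is a toy illustration of what a client-side pool does internally: hand out idle connections, grow up to a cap, queue callers when the pool is exhausted, and take connections back on release. Real pools (`pg.Pool`, `generic-pool`) add validation, timeouts, and error eviction on top of this.

```javascript
// Minimal sketch of client-side pooling — not production code.
class ToyPool {
  constructor(createConn, max) {
    this.createConn = createConn; // async factory for a new connection
    this.max = max;               // hard cap on total connections
    this.idle = [];               // parked, reusable connections
    this.total = 0;               // connections created so far
    this.waiting = [];            // resolvers for callers waiting on a connection
  }

  async acquire() {
    if (this.idle.length > 0) return this.idle.pop();            // reuse an idle one
    if (this.total < this.max) {                                 // room to grow
      this.total += 1;
      return this.createConn();
    }
    return new Promise((resolve) => this.waiting.push(resolve)); // queue the caller
  }

  release(conn) {
    const next = this.waiting.shift();
    if (next) next(conn);       // hand straight to a waiting caller
    else this.idle.push(conn);  // or park it for later reuse
  }
}
```

With `max: 1`, a second `acquire()` simply waits until the first caller releases, which is exactly the behavior you see as "waiting clients" in real pool metrics.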

2. Server-Side Pooling (pgBouncer/Proxy)

A middleman like pgBouncer sits between your app and database, managing connections at the database level. Essential for serverless and auto-scaling environments.

3. Serverless Connection Pooling

Cloud providers like AWS Aurora Serverless and Neon offer built-in pooling. Best for unpredictable workloads.

Implementing Client-Side Pooling with PostgreSQL

For PostgreSQL, we'll use the pg library's built-in Pool:

// config/database.js
import { Pool } from 'pg';

const pool = new Pool({
  // Connection string (or individual parameters)
  connectionString: process.env.DATABASE_URL,

  // Maximum connections in pool
  // Rule of thumb: (CPU cores × 2) + effective spindle count
  max: 20,

  // Idle connection timeout (ms)
  idleTimeoutMillis: 30000,

  // Time to wait when acquiring a connection (ms)
  connectionTimeoutMillis: 5000,

  // Enable SSL in production (rejectUnauthorized: false skips certificate
  // verification — only use it for providers that require it)
  ssl: process.env.NODE_ENV === 'production' ? { rejectUnauthorized: false } : false,
});

// Handle pool errors gracefully
pool.on('error', (err) => {
  console.error('Unexpected pool error:', err);
});

// Export for use in your API
export const query = (text, params) => pool.query(text, params);
export const getClient = () => pool.connect();
export const closePool = () => pool.end();
export { pool };

Using the Pool in Your API

// api/users.js
import { query, getClient } from '../config/database.js';

// Simple query - automatic connection management
export async function getUserById(id) {
  const result = await query(
    'SELECT id, email, created_at FROM users WHERE id = $1',
    [id]
  );
  return result.rows[0];
}

// Transaction - explicit client management
export async function createUserWithProfile(userData, profileData) {
  const client = await getClient();

  try {
    await client.query('BEGIN');

    const userResult = await client.query(
      'INSERT INTO users (email, password_hash) VALUES ($1, $2) RETURNING id',
      [userData.email, userData.passwordHash]
    );
    const userId = userResult.rows[0].id;

    await client.query(
      'INSERT INTO profiles (user_id, name, avatar_url) VALUES ($1, $2, $3)',
      [userId, profileData.name, profileData.avatarUrl]
    );

    await client.query('COMMIT');
    return { userId, ...userData, ...profileData };
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release(); // Always release back to pool
  }
}
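If you run many transactions, the BEGIN/COMMIT/ROLLBACK/release boilerplate is worth factoring out. A hypothetical helper (not part of the `pg` API) that wraps the pattern from the example above:

```javascript
// Runs fn(client) inside a transaction, guaranteeing COMMIT on success,
// ROLLBACK on error, and release back to the pool either way.
// `pool` is assumed to be the pg Pool from config/database.js.
async function withTransaction(pool, fn) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    const result = await fn(client); // caller runs its queries here
    await client.query('COMMIT');
    return result;
  } catch (err) {
    await client.query('ROLLBACK');
    throw err; // let the caller decide how to handle it
  } finally {
    client.release(); // always return the connection to the pool
  }
}
```

Usage then shrinks to `await withTransaction(pool, async (client) => { /* queries */ })`, and it becomes impossible to forget the release.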

Implementing Connection Pooling with MongoDB

MongoDB's Mongoose provides built-in connection pooling:

// config/database.js
import mongoose from 'mongoose';

const MONGODB_URI = process.env.MONGODB_URI || 'mongodb://localhost:27017/mydb';

mongoose.connect(MONGODB_URI, {
  // Maximum connection pool size
  maxPoolSize: 10,

  // Minimum connection pool size
  minPoolSize: 2,

  // Server selection timeout
  serverSelectionTimeoutMS: 5000,

  // Socket timeout
  socketTimeoutMS: 45000,

  // Network compression (passed through to the MongoDB driver; each
  // algorithm needs its companion package, e.g. @mongodb-js/zstd)
  compressors: ['zstd', 'snappy'],
}).catch((err) => {
  console.error('Initial MongoDB connection failed:', err);
});

mongoose.connection.on('error', (err) => {
  console.error('MongoDB connection error:', err);
});

mongoose.connection.on('disconnected', () => {
  console.warn('MongoDB disconnected');
});

export default mongoose;

Server-Side Pooling with pgBouncer

For production environments handling high concurrency, add pgBouncer:

# docker-compose.yml
version: '3.8'

services:
  api:
    build: .
    environment:
      DATABASE_URL: postgresql://user:pass@pgbouncer:5432/mydb
    depends_on:
      - pgbouncer

  pgbouncer:
    image: edoburu/pgbouncer:latest
    environment:
      DATABASE: mydb
      POOL_MODE: transaction
      MAX_CLIENT_CONN: 1000
      DEFAULT_POOL_SIZE: 20
      MIN_POOL_SIZE: 5
    ports:
      - "6432:5432"
    depends_on:
      - postgres
    volumes:
      - pgbouncer:/etc/pgbouncer

  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: mydb
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  pgbouncer:
  postgres_data:

# pgbouncer.ini
[databases]
mydb = host=postgres port=5432 dbname=mydb

[pgbouncer]
listen_addr = *
listen_port = 5432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt

pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
min_pool_size = 5

# Connection lifetime
server_lifetime = 3600
server_idle_timeout = 600

Critical Setting: Use pool_mode = transaction for serverless and auto-scaling environments. This allows connection reuse between transactions, dramatically increasing capacity.
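The trade-off: transaction pooling discards session state, because consecutive transactions from one client may run on different server connections. Session-scoped features — `SET`, `LISTEN/NOTIFY`, advisory locks, and named prepared statements — can silently break. If your driver relies on protocol-level prepared statements, pgBouncer 1.21+ can track them for you (the limit below is illustrative):

```ini
# pgbouncer.ini — requires pgBouncer 1.21 or newer
pool_mode = transaction
max_prepared_statements = 200
```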

Health Checks and Monitoring

Always implement pool health monitoring:

// api/health.js
import { pool } from '../config/database.js';

export async function getPoolHealth() {
  const health = {
    totalConnections: pool.totalCount,
    idleConnections: pool.idleCount,
    waitingClients: pool.waitingCount,
    maxConnections: pool.options.max,
    utilizationPercent: 0,
  };

  health.utilizationPercent = Math.round(
    ((health.totalConnections - health.idleConnections) / health.maxConnections) * 100
  );

  return health;
}

// Express middleware for health endpoint
export function poolHealthMiddleware(req, res, next) {
  if (req.path === '/health/pool') {
    getPoolHealth().then(health => {
      const status = health.utilizationPercent > 90 ? 503 : 200;
      res.status(status).json(health);
    });
    return;
  }
  next();
}

Tuning Pool Size for Your Workload

Finding the optimal pool size requires understanding your workload:

| Workload Type | Recommended Pool Size | Reasoning |
| --- | --- | --- |
| API with complex queries | 10-20 | Queries hold connections longer |
| Simple CRUD API | 20-50 | Fast query execution |
| Serverless/Lambda | 5-10 | Limited concurrent executions |
| Batch processing | 1-5 per worker | Connections held for duration |

The Formula:

Optimal Pool Size = (Total Memory / Memory per Connection) × 0.7

For PostgreSQL, each connection typically uses 5-10MB. If your database server has 2GB of RAM and each connection uses 8MB:

(2048MB / 8MB) × 0.7 = ~180 connections (too high!)

Conservative: 20-30 connections with pgBouncer handling multiplexing
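The formula translates directly into a helper; the 0.7 factor leaves headroom for the database's own memory needs, and the inputs below mirror the worked example:

```javascript
// Memory-bound ceiling on pool size — an upper bound, not a target.
function memoryBoundPoolSize(totalMemoryMb, memoryPerConnMb, headroom = 0.7) {
  return Math.floor((totalMemoryMb / memoryPerConnMb) * headroom);
}

memoryBoundPoolSize(2048, 8); // 179 — roughly the ~180 from the example
```

Treat the result as a ceiling: as noted above, a conservative pool of 20-30 connections with pgBouncer multiplexing on top almost always performs better than running near the memory limit.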

Graceful Shutdown

Always close connections on shutdown:

// shutdown.js
import { pool } from './config/database.js';
import mongoose from './config/mongodb.js';

async function gracefulShutdown(signal) {
  console.log(`${signal} received, closing connections...`);

  try {
    // Close PostgreSQL pool
    await pool.end();
    console.log('PostgreSQL pool closed');

    // Close MongoDB connection
    await mongoose.connection.close();
    console.log('MongoDB connection closed');

    process.exit(0);
  } catch (err) {
    console.error('Error during shutdown:', err);
    process.exit(1);
  }
}

process.on('SIGTERM', () => gracefulShutdown('SIGTERM'));
process.on('SIGINT', () => gracefulShutdown('SIGINT'));
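One optional hardening, since SIGINT and SIGTERM can both arrive in quick succession: make the handler idempotent, and force-exit if a pool refuses to drain. The `once` helper and the 10-second deadline below are illustrative additions, not part of the code above:

```javascript
// Wrap a function so repeated calls are ignored after the first.
function once(fn) {
  let called = false;
  return (...args) => {
    if (called) return;
    called = true;
    return fn(...args);
  };
}

const shutdown = once(async (signal) => {
  // Hard deadline: if connections refuse to close, exit anyway.
  const deadline = setTimeout(() => process.exit(1), 10000);
  deadline.unref(); // don't let the timer itself keep the process alive
  // ...close pools here as in gracefulShutdown(), then process.exit(0)
});
```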

Common Pitfalls to Avoid

  1. Not releasing connections: always call client.release() in a finally block
  2. Pool size too small: Watch for connection wait times in monitoring
  3. Pool size too large: Can overwhelm the database server
  4. Long-running queries: Use query timeouts to prevent connection hoarding
  5. Not handling pool errors: Unhandled errors crash your application
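For pitfall 4, node-postgres accepts timeout settings directly in the pool config. The values below are illustrative; `statement_timeout` is enforced server-side by PostgreSQL, while `query_timeout` is a client-side backstop in the driver:

```javascript
// Extra keys to merge into the new Pool({ ... }) config from earlier.
const timeoutConfig = {
  statement_timeout: 10000, // PostgreSQL cancels queries running > 10 s
  query_timeout: 12000,     // pg driver gives up slightly later as a backstop
};
```

With these set, a runaway query frees its connection after ten seconds instead of hoarding it indefinitely.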

Conclusion

Connection pooling is the backbone of high-performance database-backed APIs. In 2026, with tools like Node.js 24's improved connection handling and pgBouncer's transaction pooling, there's no reason to create new connections per request.

Start with client-side pooling for simplicity, add pgBouncer for production scale, and always monitor your pool health. Your database (and your users) will thank you.


Ready to scale your API? 1xAPI offers high-performance APIs for football data, movies, and more. Check out our RapidAPI collection for reliable, well-documented endpoints backed by proper connection management.
