Handling 10,000+ database queries during build: A Node.js connection pooling story

I thought I had it all figured out.
PostgreSQL database? ✅
Prisma ORM? ✅
Node.js static site generation? ✅

Then the build blew up with:

FATAL: too many connections for role "role_xxxxx"

My build process was generating 70+ pages, each requiring multiple database queries. The math was simple and brutal: 70 pages × 4 queries per page × concurrent execution = connection pool explosion.

The problem: Node.js event loop vs. Database connections

Here's what most tutorials don't tell you: Node.js is too good at concurrency. When you generate static pages, Node.js doesn't wait politely for one page to finish before starting the next. It fires off all page generations simultaneously. Each page needs to:
1. Query products from the database
2. Query blog posts for related content
3. Query category data
4. Generate dynamic sitemap entries

That's potentially 280+ concurrent database connections hitting a connection pool limited to 5-10 connections.
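
In effect, the build was doing something like this (a rough sketch, not the project's actual code; product and blogPost are the models that appear later in this post, category and the surrounding structure are illustrative):

// Rough sketch of the build-time fan-out (illustrative only).
// Next.js starts every page render without waiting for the others, and each
// render fires its own queries, so connections pile up almost instantly.
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

async function buildAllPages(slugs: string[]) {
  await Promise.all(
    slugs.map(async (slug) => {
      // roughly 4 queries per page, issued while ~69 other pages do the same
      const product = await prisma.product.findUnique({ where: { slug } })
      const related = await prisma.blogPost.findMany({ where: { status: 'published' } })
      const categories = await prisma.category.findMany() // assumed model name
      // ...render the page and its sitemap entry from product/related/categories...
    })
  )
}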

The result? Build failures, timeout errors, and a lot of frustration.

Why standard solutions failed

Attempt 1: Increase connection limit

DATABASE_URL="postgresql://user:pass@host:5432/db?connection_limit=20"

Result: Still failed. The problem wasn't the limit—it was the rate of connection creation.

Attempt 2: Manual connection management

await prisma.$connect();
const data = await prisma.product.findMany();
await prisma.$disconnect();

Result: Made it worse. Now I was creating and destroying connections rapidly, overwhelming the database server.

Attempt 3: Prisma's built-in pooling

// One shared PrismaClient, relying on its built-in connection pool
const prisma = new PrismaClient();

Result: Not enough. Prisma's singleton pattern works great for runtime requests, but the concurrency of static generation still exhausted the single client's small pool.

The solution: Query queuing in Node.js

I needed to serialize database operations during the build process while keeping them concurrent at runtime. Here's the pattern that worked:

// lib/db.ts
import { PrismaClient } from '@prisma/client'

const globalForPrisma = globalThis as unknown as {
  prisma: PrismaClient | undefined
  queryQueue: Promise<any> | undefined
}

const createPrismaClient = () => {
  return new PrismaClient({
    log: process.env.NODE_ENV === 'development' ? ['error', 'warn'] : ['error'],
  })
}

export const prisma = globalForPrisma.prisma ?? createPrismaClient()

if (process.env.NODE_ENV !== 'production') {
  globalForPrisma.prisma = prisma
}

// Query queue to serialize database queries during build
const queueQuery = async <T>(fn: () => Promise<T>): Promise<T> => {
  if (process.env.NEXT_PHASE === 'phase-production-build' || 
      process.env.NODE_ENV === 'production') {
    const previousQuery = globalForPrisma.queryQueue || Promise.resolve()
    // Wait for the previous query to settle (success or failure), then run this
    // one exactly once; a failed query shouldn't break the queue or get retried.
    const currentQuery = previousQuery.catch(() => undefined).then(() => fn())
    globalForPrisma.queryQueue = currentQuery
    return currentQuery
  }
  return fn()
}

export async function withPrisma<T>(
  callback: (prisma: PrismaClient) => Promise<T>
): Promise<T> {
  return queueQuery(async () => {
    try {
      return await callback(prisma)
    } catch (error) {
      console.error('Database operation failed:', error)
      throw error
    }
  })
}

The key insight: Use a promise chain as a queue. Each query waits for the previous one to complete during build, but executes immediately during runtime.
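
Stripped of Prisma and the environment checks, the mechanism is tiny. A minimal standalone sketch:

// Minimal sketch of a promise chain used as a queue (no Prisma involved).
let queue: Promise<unknown> = Promise.resolve()

function enqueue<T>(task: () => Promise<T>): Promise<T> {
  // Wait for whatever is currently queued (ignoring its failure),
  // then run this task and make it the new tail of the chain.
  const result = queue.catch(() => undefined).then(() => task())
  queue = result
  return result
}

// These start "at the same time" but run strictly one after another:
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms))
enqueue(async () => { await sleep(100); console.log('query 1 done') })
enqueue(async () => { await sleep(100); console.log('query 2 done') })
enqueue(async () => { await sleep(100); console.log('query 3 done') })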

How it works

During build (production):
Query 1 → Query 2 → Query 3 → Query 4 → ... (serial execution)

During runtime (server):
Query 1 ↘
Query 2 → All execute concurrently
Query 3 ↗

The globalForPrisma.queryQueue acts as a checkpoint. Each new query:
1. Waits for the previous promise to resolve
2. Executes its database operation
3. Becomes the new checkpoint for the next query
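
Callers never touch the queue directly; they just wrap their queries in withPrisma. Here's a sketch of what that looks like from a page's perspective (the function names, import alias, and field names are assumptions, not the actual code):

// Illustrative usage of withPrisma (names and paths are assumptions).
import { withPrisma } from '@/lib/db' // adjust to your import alias

// e.g. in a product page's generateStaticParams:
export async function generateStaticParams() {
  // During `next build`, this call joins the queue and waits its turn.
  const products = await withPrisma((db) =>
    db.product.findMany({ select: { slug: true } })
  )
  return products.map((p) => ({ slug: p.slug }))
}

// e.g. in a data helper used by the page component:
export async function getProductPageData(slug: string) {
  // Multiple queries for the same page are also serialized during the build.
  const product = await withPrisma((db) => db.product.findUnique({ where: { slug } }))
  const related = await withPrisma((db) =>
    db.blogPost.findMany({ where: { status: 'published' }, take: 3 })
  )
  return { product, related }
}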

Implementation across the codebase

Sitemap generation

// app/sitemap.ts
import { prisma } from '@/lib/db'; // adjust to your import alias
// getAllAmazonProducts, baseUrl, and staticPages are defined elsewhere in the project

export default async function sitemap() {
  let productPages = [];
  try {
    const products = await getAllAmazonProducts({ limit: 1000 });
    productPages = products.map((p) => ({
      url: `${baseUrl}/products/${p.slug}`,
      lastModified: p.updatedAt,
      priority: p.featured ? 0.85 : 0.75,
    }));
  } catch (error) {
    console.error('Error fetching products for sitemap:', error);
  }

  let blogPages = [];
  try {
    const posts = await prisma.blogPost.findMany({
      where: { status: 'published' }
    });
    blogPages = posts.map((post) => ({
      url: `${baseUrl}/blog/${post.slug}`,
      lastModified: post.updatedAt,
      priority: 0.65,
    }));
  } catch (error) {
    console.error('Error fetching blog posts for sitemap:', error);
  }

  return [...staticPages, ...productPages, ...blogPages];
}

Each try-catch block handles failures gracefully. If the database connection fails, that section is skipped rather than crashing the entire build.

Production results:

After implementing this pattern:
✅ Build success rate: 100% (was ~30% before)
✅ Average build time: 45 seconds (consistent)
✅ Zero connection timeout errors
✅ 70+ pages generated successfully

You can see this architecture handling production traffic at elyvora.us — 70+ database-driven pages, all built without a single connection error.

Lessons learned

1. Environment-specific behavior matters
Don't assume build-time behavior matches runtime behavior. Node.js behaves differently in each context.
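
The NEXT_PHASE check in lib/db.ts is exactly this kind of environment awareness. Pulling it into a named helper makes the intent explicit (a sketch; the helper name is mine):

// Sketch: make "are we inside `next build`?" an explicit, reusable question.
// NEXT_PHASE is set by Next.js during the build; NODE_ENV alone can't tell
// the build step apart from the running production server.
export const isProductionBuild = (): boolean =>
  process.env.NEXT_PHASE === 'phase-production-build'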

2. Global state isn't always evil
Using globalThis to maintain a promise queue across module boundaries was crucial. It's one of the few legitimate uses of global state.

3. Graceful degradation over perfection
My sitemap has fallback logic. If product queries fail, it still generates static pages. Partial success > complete failure.
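
That fallback logic boils down to a small wrapper you can reuse anywhere in the build. A sketch (not from the original code):

// Sketch: "partial success beats total failure" as a reusable helper.
async function tryOrFallback<T>(label: string, fn: () => Promise<T>, fallback: T): Promise<T> {
  try {
    return await fn()
  } catch (error) {
    console.error(`Skipping ${label}:`, error) // log it, keep the build going
    return fallback
  }
}

// e.g. in the sitemap: const productPages = await tryOrFallback('product pages', fetchProductPages, [])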

4. Log everything during build

console.log(`✅ Sitemap generated with ${allPages.length} URLs`);
console.log(`   - Product pages: ${productPages.length}`);
console.log(`   - Blog pages: ${blogPages.length}`);

These logs saved hours of debugging. You can't attach a debugger to a build process, so stdout is your best friend.

When to use this pattern

This approach works best when you have:

  • Static site generation with database-backed content
  • High page counts (50+ pages)
  • Limited database connection pools (shared hosting, free tiers)
  • Multiple queries per page (complex data relationships)

For runtime-only applications (pure API servers), stick with standard connection pooling.

Conclusion

Node.js's async nature is usually a superpower. But during static generation with database dependencies, it becomes a challenge. The solution isn't to fight Node.js's concurrency—it's to control when concurrency happens. By serializing queries during build and keeping them concurrent at runtime, you get the best of both worlds: reliable builds and fast server responses.

Update: This pattern has been running in production for 3+ weeks at elyvora.us with zero connection issues. 70+ pages rebuild every hour without breaking a sweat.
