Next.js Performance When You Have 200,000 Database Rows
Most Next.js tutorials show you how to build a blog with 10 posts. Real-world apps have hundreds of thousands of records. Here's what actually matters when your database isn't tiny.
The Problem
I recently worked on a marketplace with over 200,000 product listings. The standard patterns from tutorials and demos fall apart pretty quickly at that scale, so most of what follows is what we figured out along the way to keep things responsive.
Database Queries Matter More Than React
This sounds obvious but I see it ignored constantly: your database is the bottleneck, not React.
If your Postgres query takes 3 seconds, no amount of React optimization will help. Fix the query first.
Indexes Are Not Optional
Every column you filter or sort by needs an index. Period.
// schema.prisma - add index for search
model Product {
id Int @id @default(autoincrement())
name String
@@index([name])
}
For text search, we used Prisma's filtering with PostgreSQL GIN trigram indexes underneath. The index lives in a migration, and Prisma handles the query layer. Without the index, searching 200k rows by name was around 4 seconds. With it, 45ms.
Don't use contains on unindexed columns unless you enjoy watching progress spinners.
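If you manage the index by hand, the migration SQL is short. Here's a hedged sketch as a helper that generates the statements; the helper and the index name are mine, not from the project, and it assumes Postgres with the pg_trgm extension available:

```typescript
// Illustrative helper: generate SQL for a trigram GIN index on a column.
// Assumes PostgreSQL with pg_trgm installed; all names are hypothetical.
function trigramIndexSql(table: string, column: string): string {
  return [
    `CREATE EXTENSION IF NOT EXISTS pg_trgm;`,
    `CREATE INDEX IF NOT EXISTS "${table}_${column}_trgm_idx"`,
    `  ON "${table}" USING GIN ("${column}" gin_trgm_ops);`,
  ].join('\n');
}

console.log(trigramIndexSql('Product', 'name'));
```

Drop that SQL into a hand-written Prisma migration and contains-style searches on the column can use the index instead of scanning every row.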
Pagination, Not Infinite Scroll (Usually)
Infinite scroll is trendy. It's also a trap.
Every time the user scrolls, you fetch more data, keep it all in memory, and re-render an ever-growing list. After 500 items, the browser gets sluggish and memory keeps climbing.
We use cursor-based pagination instead:
// Get 20 products after this cursor
const products = await db.product.findMany({
take: 20,
skip: 1, // skip the cursor record itself
cursor: {
id: lastProductId,
},
orderBy: {
createdAt: 'desc',
},
});
The user gets 20 items at a time, can paginate forward/backward, and the browser doesn't die from holding 10,000 DOM nodes.
Server Components Are Your Friend
The pattern that worked best for us was keeping the page layout, headings, metadata, filter label text, and anything else that doesn't change per request in server components. All of that renders instantly on the server with zero client JavaScript.
The actual product grid, which changes based on search terms, filters, pagination, and sorting, lives in a client component nested inside the server component. Each product card is intentionally thin (image, name, price, set info) so the client component stays lightweight even when rendering a full page of results.
// app/products/page.tsx - Server Component
import { ProductBrowser } from '../../components/ProductBrowser';

export default function ProductsPage() {
return (
<div>
<h1>Browse Cards</h1>
<p>Over 200,000 cards from Pokemon, MTG, Yu-Gi-Oh and more.</p>
{/* Static content above renders server-side immediately */}
<ProductBrowser /> {/* Client Component handles all dynamic stuff */}
</div>
);
}
'use client';

// components/ProductBrowser.tsx - Client Component
// Handles search, filters, pagination, all the interactive bits
import { useState } from 'react';

export function ProductBrowser() {
const [filters, setFilters] = useState(defaultFilters);
const { data, isLoading } = useProducts(filters);
return (
<div>
<SearchBar onSearch={(q) => setFilters(f => ({ ...f, query: q }))} />
<FilterSidebar filters={filters} onChange={setFilters} />
<ProductGrid products={data?.products ?? []} loading={isLoading} />
<Pagination page={filters.page} total={data?.total ?? 0} />
</div>
);
}
Users see the page structure and static content immediately while the product listings load in. The split also means the server component output gets cached aggressively since it's the same for every visitor, and only the client component does per-request work.
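A hook like useProducts ultimately hits some server endpoint. One piece worth isolating there is parsing the incoming query string into a typed filter object; this sketch is an assumption about the shape, not the project's actual code, with hypothetical parameter names and defaults:

```typescript
// Hypothetical: parse the query string a client component sends
// into a typed filter object for the data layer.
type Filters = { query: string; page: number; perPage: number };

function parseFilters(params: URLSearchParams): Filters {
  // Fall back to page 1 on missing or non-numeric input
  const page = Math.max(1, Number(params.get('page') ?? '1') || 1);
  return {
    query: (params.get('q') ?? '').trim(),
    page,
    perPage: 20, // fixed page size, matching take: 20 above
  };
}

console.log(parseFilters(new URLSearchParams('q=charizard&page=3')));
```

Validating this at the boundary keeps garbage out of the database query and makes the endpoint's contract explicit.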
Avoid N+1 Queries
Classic mistake:
// Bad: N+1 query
const products = await db.product.findMany();
for (const product of products) {
product.seller = await db.user.findUnique({
where: { id: product.sellerId }
});
}
You just made 1 query for products, then N queries for sellers. If you have 100 products, that's 101 database round-trips.
Use include or a join:
// Good: constant number of queries, no N+1
const products = await db.product.findMany({
include: {
seller: true,
},
});
With Prisma, include resolves the relation in a batched follow-up query rather than one query per row (newer versions can even push it down to a SQL join). A constant number of round-trips, way faster.
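If include isn't available (raw SQL, a second data source), the same fix is two round-trips plus an in-memory join. A sketch of the joining half, with made-up types:

```typescript
// Two queries instead of N+1: fetch products, fetch all needed sellers
// in one IN (...) query, then attach sellers in memory.
type Seller = { id: number; name: string };
type Product = { id: number; sellerId: number; seller?: Seller };

function attachSellers(products: Product[], sellers: Seller[]): Product[] {
  // Index sellers by id once, then look up each product's seller in O(1)
  const byId = new Map(sellers.map((s) => [s.id, s]));
  return products.map((p) => ({ ...p, seller: byId.get(p.sellerId) }));
}
```

The first query gathers the distinct sellerId values; the second fetches those sellers; attachSellers stitches them together. Same result as a join, two round-trips total.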
Caching Strategy
For data that doesn't change often, cache it.
The project uses Redis for:
- Search results (cache for 5 minutes)
- Seller profiles (cache for 1 hour)
- Category listings (cache for 1 day)
import Redis from 'ioredis';
const redis = new Redis();
export async function getCachedProducts(category: string) {
const cacheKey = `products:${category}`;
const cached = await redis.get(cacheKey);
if (cached) {
return JSON.parse(cached);
}
const products = await db.product.findMany({
where: { category },
take: 20,
});
await redis.setex(cacheKey, 300, JSON.stringify(products)); // 5min TTL
return products;
}
This cuts database load by 80%+ for repeat visitors.
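The control flow here is the classic cache-aside pattern. Stripped of Redis, it looks like this; an in-memory stand-in, purely illustrative:

```typescript
// In-memory sketch of cache-aside with a TTL, mirroring the Redis version.
// The injectable clock exists only to make expiry testable.
type Entry<T> = { value: T; expiresAt: number };

class TtlCache<T> {
  private store = new Map<string, Entry<T>>();
  constructor(private ttlMs: number, private now: () => number = Date.now) {}

  get(key: string): T | undefined {
    const e = this.store.get(key);
    if (!e || e.expiresAt <= this.now()) {
      this.store.delete(key); // expired entries are dropped lazily on read
      return undefined;
    }
    return e.value;
  }

  set(key: string, value: T): void {
    this.store.set(key, { value, expiresAt: this.now() + this.ttlMs });
  }
}

// Cache-aside: try the cache, fall back to the loader, store the result.
async function getOrLoad<T>(cache: TtlCache<T>, key: string, load: () => Promise<T>): Promise<T> {
  const hit = cache.get(key);
  if (hit !== undefined) return hit;
  const value = await load();
  cache.set(key, value);
  return value;
}
```

Redis gives you the same shape with persistence across server restarts and sharing across instances, which is why it's the right tool once you have more than one server.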
Image Optimization
With 200,000+ product images, serving full-resolution PNGs kills bandwidth; even when each individual image is small, a page of results adds up fast.
Next.js Image component handles this automatically:
import Image from 'next/image';
<Image
src={product.imageUrl}
width={300}
height={420}
alt={product.name}
/>
Next.js will:
- Serve WebP/AVIF where supported
- Resize images to fit the display size
- Lazy-load images below the fold
- Cache optimized versions
This dropped page weights from 2MB to 400KB just by using next/image everywhere.
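One caveat worth knowing: next/image only optimizes remote images from hosts you've explicitly allowlisted in the config. A sketch, where the hostname is a placeholder, not ours:

```typescript
// next.config.ts sketch (recent Next.js versions accept a TypeScript config).
// Remote image hosts must be allowlisted; the hostname below is hypothetical.
import type { NextConfig } from 'next';

const config: NextConfig = {
  images: {
    // Prefer modern formats where the browser supports them
    formats: ['image/avif', 'image/webp'],
    remotePatterns: [
      { protocol: 'https', hostname: 'images.example.com' },
    ],
  },
};

export default config;
```

Without the allowlist entry, next/image refuses to optimize the remote URL at all, so this is usually the first thing to check when images aren't being converted.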
Streaming for Slow Queries
Sometimes a query is just slow (complex joins, aggregations, whatever). Rather than blocking the whole page, stream the slow part.
import { Suspense } from 'react';
export default function Page() {
return (
<div>
<Header />
<Suspense fallback={<Skeleton />}>
<SlowProductList />
</Suspense>
<Footer />
</div>
);
}
async function SlowProductList() {
const products = await someSlowQuery();
return <ProductGrid products={products} />;
}
The header and footer render immediately. The product list streams in when ready. Users see something fast instead of staring at a blank page.
Measure Everything
Don't guess. Measure.
We're self-hosted, so monitoring is on us. What we use (and what any self-hosted Next.js app would want):
- Prisma's log: ['query'] option to catch slow queries (something we should do more consistently)
- Redis INFO stats to track cache hit/miss rates (another thing we should set up properly)
- Server-side performance monitoring (New Relic, Datadog, or simple Express middleware logging)
- Lighthouse CI in your deployment pipeline
If a page is slow, check:
- Is the database query slow? (Prisma logs / pg_stat_statements)
- Are we missing a cache? (Redis hit rate)
- Are we shipping too much JavaScript? (Next.js bundle analyzer)
Usually it's #1.
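For the cache question, Redis reports keyspace_hits and keyspace_misses in its INFO stats section, and the hit rate is hits / (hits + misses). A tiny parser sketch:

```typescript
// Sketch: compute the cache hit ratio from Redis INFO output.
// Field names match the `stats` section of the INFO command.
function hitRatio(info: string): number {
  const get = (field: string): number => {
    const m = info.match(new RegExp(`${field}:(\\d+)`));
    return m ? Number(m[1]) : 0;
  };
  const hits = get('keyspace_hits');
  const misses = get('keyspace_misses');
  const total = hits + misses;
  return total === 0 ? 0 : hits / total;
}

console.log(hitRatio('keyspace_hits:800\r\nkeyspace_misses:200\r\n')); // 0.8
```

Feed it the string from `redis.info('stats')` (ioredis exposes INFO this way) and alert when the ratio drops; a falling hit rate usually means a TTL is too short or a cache key changed shape.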
What Actually Moved the Needle
Here's what made the biggest performance impact:
- Database indexes - cut query times from seconds to milliseconds
- Redis caching - 80% fewer database hits
- Server Components - less client JavaScript, faster initial render
- Image optimization - cut page weight roughly 5x
The rest was marginal. Focus on those four and you'll be fine.
Conclusion
Big datasets break the patterns you learn in tutorials. The fixes aren't complicated, but you have to think about data flow differently.
Database first, cache aggressively, ship less JavaScript. That's it.
Built something similar and need help scaling it? morleymedia.dev
Originally published on kira.morleymedia.dev