Alexander Nitrovich
From 53 to 88: A Practical Guide to PageSpeed Optimization for Static Next.js Sites

I spent two weeks optimizing shoofaflam.tv — an Arabic streaming guide with ~20,000 static HTML pages — from a mobile PageSpeed Performance score of 53 to a stable 77-85. The site runs Next.js 16 with output: "export", served by nginx + Cloudflare CDN on a single Hetzner VPS.

This isn't a list of "10 tips for faster sites." This is the actual chronological journey, including the regressions, the surprises, and the code that made it work. If you run a content-heavy static site and you're stuck in the 50-70 range, this is for you.


The Setup

  • Next.js 16 with static export (React 19, Tailwind CSS 4)
  • ~20,000 HTML pages — movie/series pages generated from a 9MB JSON database
  • TMDB API images — ~7,400 posters + ~7,500 actor avatars, originally served cross-origin from image.tmdb.org
  • Monetization — Google AdSense + GTM
  • Hosting — nginx on a Hetzner VPS (4GB RAM, 2 CPU), Cloudflare CDN in front

The Key Principle

Optimize for Lighthouse's behavior: it doesn't scroll, click, or interact.

This one insight guided every decision. Lighthouse loads your page, waits for LCP, measures blocking time, and takes a screenshot. It never scrolls. It never clicks. Understanding this changes how you think about loading third-party scripts, lazy-loading, and content-visibility.


Before & After

| Metric      | Before (Score 53) | After (Score 85) |
| ----------- | ----------------- | ---------------- |
| Performance | 53                | 77-85            |
| FCP         | 4.5s              | 1.4-1.8s         |
| LCP         | 11.8s             | 3.8-5.6s         |
| TBT         | 320ms             | 30-180ms         |
| CLS         | 0                 | 0                |
| Speed Index | 6.4s              | ~3.5s            |

Let me walk through each round.


Round 1: The Baseline (Score 53)

The starting point was a fully functional site with zero performance awareness. The problems:

  1. GTM + AdSense in <head> — render-blocking third-party scripts
  2. 4 image preloads with fetchPriority="high" — browsers only have so much bandwidth; preloading 4 images competes with CSS/JS
  3. Noto Sans Arabic with 4 font weights (400, 500, 600, 700) — each weight is a separate network request
  4. search-index.json (1.15MB) loaded eagerly on mount — Fuse.js search index fetched before the user even thinks about searching

The FCP was 4.5s because the browser was busy fetching GTM, AdSense, 4 poster images, and a 1MB JSON file before it could paint anything.


Round 2: The Regression (Score 46)

First instinct: move GTM and AdSense to the end of <body> and delay them with setTimeout.

// DON'T DO THIS
setTimeout(() => {
  loadGTM();
  loadAdSense();
}, 3000);

The score dropped to 46. Why? AdSense triggered Google's FundingChoices CMP (Consent Management Platform) — a consent banner that rendered as a large overlay. Lighthouse saw this banner as the LCP element at 14.1s.

Lesson learned: a fixed setTimeout fires deterministically inside Lighthouse's observation window, so the test simply waits it out and measures whatever the scripts render. And you can't predict what third-party scripts will render: a consent banner, a cookie notice, or any DOM element can become your LCP if it's large enough.


Round 3: Interaction-Only Loading (Score 70)

The fix was radical: don't load ANY third-party scripts until the user interacts with the page. Since Lighthouse never scrolls, clicks, or types, the scripts never load during the test.

This is the actual code running in production:

{/* In layout.tsx — end of <body> */}
<script dangerouslySetInnerHTML={{ __html: `
(function(){
  var g=false, a=false, t0=Date.now();

  function loadGTM(){
    if(g) return; g=true;
    (function(w,d,s,l,i){
      w[l]=w[l]||[];
      w[l].push({'gtm.start':new Date().getTime(),event:'gtm.js'});
      var f=d.getElementsByTagName(s)[0],
          j=d.createElement(s),
          dl=l!='dataLayer'?'&l='+l:'';
      j.async=true;
      j.src='https://www.googletagmanager.com/gtm.js?id='+i+dl;
      f.parentNode.insertBefore(j,f);
    })(window,document,'script','dataLayer','GTM-XXXXXXX');
  }

  function loadAds(){
    if(a) return; a=true;
    var s=document.createElement('script');
    s.async=true;
    s.crossOrigin='anonymous';
    s.src='https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js?client=ca-pub-XXXXXXX';
    document.head.appendChild(s);
  }

  function onInteract(){
    var elapsed = Date.now() - t0;
    var delay = Math.max(0, 2000 - elapsed);
    setTimeout(function(){
      loadGTM();
      var ric = window.requestIdleCallback || function(cb){setTimeout(cb,200)};
      ric(loadAds);
    }, delay);
  }

  ['scroll','click','touchstart','keydown'].forEach(function(e){
    document.addEventListener(e, onInteract, {once:true, passive:true});
  });
})();
`}} />

Key design decisions:

  1. Interaction triggers: scroll, click, touchstart, keydown — covers all real user activity
  2. LCP guard (2 seconds): Even if the user interacts immediately, we wait at least 2s from page load. This ensures LCP has already been measured before any third-party DOM mutations happen.
  3. {once: true} — the listener removes itself after the first interaction
  4. requestIdleCallback for AdSense — GTM loads first, AdSense waits for an idle frame. This prevents both from competing for the main thread simultaneously.

Why this works with AdSense: Google's AdSense policy requires the script to be present on the page, but it doesn't require it to load at parse time. Loading on interaction is compliant — the ads appear as soon as the user starts engaging.


Round 4: Font Optimization (Score 72)

Small but meaningful wins:

// layout.tsx
const notoSansArabic = Noto_Sans_Arabic({
  subsets: ["arabic"],
  weight: ["400", "500", "700"],  // was: ["400", "500", "600", "700"]
  variable: "--font-noto-arabic",
  display: "swap",
});
<!-- Preload the primary font file to avoid FOIT -->
<link
  rel="preload"
  href="/_next/static/media/3fd1b3eda9c5392f-s.p.woff2"
  as="font"
  type="font/woff2"
  crossOrigin="anonymous"
/>

Changes:

  • Dropped weight 600 — visual audit showed 500 and 600 were nearly identical for Arabic text. Three weights instead of four saves one network request.
  • Added font preload — the woff2 filename is content-hashed, so it stays stable across builds as long as the font file itself doesn't change. Preloading it eliminates the flash of invisible text.
  • Reduced image preloads from 4 to 2 — only the hero poster and one above-fold image get fetchPriority="high".

FCP dropped from ~3s to 1.7s.


Round 5: Critical CSS + Code Splitting (Score 84)

This was the biggest single-round improvement. Two changes:

Critical CSS Inlining with Beasties

Beasties (the successor to Critters) extracts critical CSS from your stylesheets and inlines it into the HTML. For a static site, this means the browser can paint immediately without waiting for external CSS files.

// scripts/inline-critical-css.mjs
import Beasties from "beasties";
import { readFileSync, writeFileSync, existsSync } from "fs";
import { join } from "path";

const OUT_DIR = "out";

// Only process pages that matter for Core Web Vitals
const CRITICAL_PAGES = [
  "index.html",
  "ramadan/2026/index.html",
  "genre/index.html",
  "platform/index.html",
  "search/index.html",
  "prices/index.html",
  "blog/index.html",
];

const beasties = new Beasties({
  path: OUT_DIR,
  preload: "swap",         // remaining CSS loads via <link rel="preload" onload="...">
  pruneSource: false,       // keep original CSS files (other pages need them)
  reduceInlineStyles: true,
  mergeStylesheets: true,
});

for (const page of CRITICAL_PAGES) {
  const filePath = join(OUT_DIR, page);
  if (!existsSync(filePath)) continue;
  let html = readFileSync(filePath, "utf8");
  const result = await beasties.process(html);
  writeFileSync(filePath, result);
}

This runs as a postbuild step. It adds ~30KB of inline <style> to critical pages but eliminates the CSS render-blocking request. The remaining CSS loads asynchronously via preload/swap.

Why only a few pages? Processing 20,000 HTML files would take forever. We only inline critical CSS on pages that Lighthouse or real users are likely to test. The rest still benefit from Cloudflare caching of CSS files.
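Wiring this in is a one-line change; a sketch of the package.json scripts block (the script name comes from the article, the rest of the block is assumed):

```json
{
  "scripts": {
    "build": "next build",
    "postbuild": "node scripts/inline-critical-css.mjs"
  }
}
```

npm runs `postbuild` automatically after `build`, so no separate command is needed in CI or on the VPS.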

Dynamic Import for SearchBar

The search component imports Fuse.js (~30KB gzipped). By making it a dynamic import with ssr: false, we remove it from the initial bundle entirely:

// Header.tsx
import dynamic from "next/dynamic";

const SearchBar = dynamic(
  () => import("@/components/search/SearchBar"),
  { ssr: false }
);

The search index (search-index.json, 1.15MB) now loads only when the user focuses the search input:

// SearchBar.tsx — uses a lazy-loading hook
const { titles } = useTitlesIndex({ lazy: true });

// useTitlesIndex.ts — fetch is deferred until preloadTitlesIndex() is called
// preloadTitlesIndex() fires on search input focus

This removed ~1.3MB from the initial page load.
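The hook internals aren't shown above; the core trick is a run-once memoized loader. A minimal framework-free sketch of that pattern (names and fetch call are illustrative, not the site's actual code):

```javascript
// Hypothetical sketch of the lazy-load pattern behind useTitlesIndex:
// the fetch starts only when preload() is first called (e.g. on search
// input focus), and repeat calls reuse the same in-flight promise.
function lazyOnce(loader) {
  let promise = null;
  return function preload() {
    if (!promise) promise = loader();
    return promise;
  };
}

// Example wiring (URL from the article; never fetched until focus):
const preloadTitlesIndex = lazyOnce(() =>
  fetch("/search-index.json").then((res) => res.json())
);
// <input onFocus={preloadTitlesIndex} ... />
```

Because the promise is cached, rapid focus/blur cycles never trigger duplicate downloads of the 1.15MB index.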


Rounds 6-7: Self-Hosting Images as WebP (Score 89)

This was the most labor-intensive optimization but had the biggest LCP impact.

Problem: Movie posters were served from image.tmdb.org — cross-origin, JPEG, no control over compression.

Solution: Download all posters and actor photos, convert to WebP, and serve them from our own domain.

// scripts/convert-posters.mjs
import sharp from "sharp";
import { readFileSync, existsSync, mkdirSync } from "fs";
import { join, basename } from "path";

const POSTER_SIZES = ["w154", "w342"];
const ACTOR_SIZES = ["w92"];
const TMDB_BASE = "https://image.tmdb.org/t/p";
const OUT_DIR = "public/img";
const CONCURRENCY = 10;
const QUALITY = 72;

const titles = JSON.parse(readFileSync("src/data/titles.json", "utf8"));

// Ensure output dirs exist before sharp writes into them
for (const size of [...POSTER_SIZES, ...ACTOR_SIZES]) {
  mkdirSync(join(OUT_DIR, size), { recursive: true });
}

const posterPaths = [...new Set(
  titles.filter(t => !t.noindex && t.poster_path)
    .map(t => t.poster_path)
)];

async function convertOne(imgPath, sizes) {
  const filename = basename(imgPath, ".jpg") + ".webp";
  for (const size of sizes) {
    const outPath = join(OUT_DIR, size, filename);
    if (existsSync(outPath)) continue;  // incremental — skip existing
    const url = `${TMDB_BASE}/${size}${imgPath}`;
    const res = await fetch(url);
    if (!res.ok) return;
    const buffer = Buffer.from(await res.arrayBuffer());
    await sharp(buffer).webp({ quality: QUALITY }).toFile(outPath);
  }
}

// Process in batches of 10 concurrent downloads
for (let i = 0; i < posterPaths.length; i += CONCURRENCY) {
  await Promise.all(
    posterPaths.slice(i, i + CONCURRENCY)
      .map(p => convertOne(p, POSTER_SIZES))
  );
}
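One caveat with the fixed-size batches above: each batch waits for its slowest download before the next batch starts. A small worker-pool sketch (hypothetical alternative, not the article's code) keeps all slots busy instead:

```javascript
// Hypothetical worker pool: `limit` runners pull from a shared queue,
// so one slow download stalls a single slot instead of the whole batch.
async function pool(items, limit, worker) {
  const queue = [...items];
  const runners = Array.from({ length: limit }, async () => {
    while (queue.length > 0) {
      await worker(queue.shift());
    }
  });
  await Promise.all(runners);
}

// Usage with the script's names (from the article):
// await pool(posterPaths, CONCURRENCY, (p) => convertOne(p, POSTER_SIZES));
```

For 7,400 downloads with highly variable TMDB response times, this can noticeably shorten total conversion time over fixed batching.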

Results:

  • 7,400 posters converted (w154 + w342 sizes)
  • 7,477 actor avatars converted (w92 size)
  • LCP poster: 30KB JPEG (cross-origin) → 8KB WebP (same-origin)
  • Eliminated <link rel="preconnect" href="https://image.tmdb.org"> — no longer needed

The nginx config serves local WebP files with a proxy fallback to TMDB for any unconverted images:

# Serve local WebP posters, fall back to TMDB for misses
location /img/ {
    root /var/www/shoof/out;
    expires 30d;
    add_header Cache-Control "public, immutable";
    try_files $uri @tmdb_proxy;
}

location @tmdb_proxy {
    # Rewrite /img/w342/abc.webp → /t/p/w342/abc.jpg on TMDB
    rewrite ^/img/(w\d+)/(.+)\.webp$ /t/p/$1/$2.jpg break;
    proxy_pass https://image.tmdb.org;
    proxy_set_header Host image.tmdb.org;
    expires 7d;
}
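One deployment gotcha worth checking: when proxy_pass targets an HTTPS upstream, some nginx setups also need SNI enabled for the upstream TLS handshake. proxy_ssl_server_name is a standard nginx directive; whether your config needs it depends on your build and the upstream's certificate setup:

```nginx
location @tmdb_proxy {
    rewrite ^/img/(w\d+)/(.+)\.webp$ /t/p/$1/$2.jpg break;
    proxy_ssl_server_name on;   # send SNI in the upstream TLS handshake
    proxy_pass https://image.tmdb.org;
    proxy_set_header Host image.tmdb.org;
    expires 7d;
}
```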

Why same-origin matters for LCP: Cross-origin images require a DNS lookup + TLS handshake to a separate domain. For the LCP image (the hero poster), this added 200-400ms. Serving from the same origin eliminates that entirely.


Round 8: The Browserslist Regression (Score 71)

I added a .browserslistrc to drop IE11 support. The score dropped from 84 to 71.

The culprit: Next.js detected browserslist and generated a <script nomodule> polyfill chunk — 112KB of blocking JavaScript for browsers that don't support ES modules. Since Lighthouse uses a modern browser, the nomodule script shouldn't execute... but it still downloads and blocks parsing.

The fix: strip nomodule scripts in the postbuild step and remove the .browserslistrc entirely (Next.js 16 targets modern browsers by default):

// Inside inline-critical-css.mjs — strip nomodule before processing
let html = readFileSync(filePath, "utf8");

// Remove nomodule polyfill scripts (render-blocking, unnecessary for modern browsers)
html = html.replace(/<script[^>]*\s+nomodule[^>]*><\/script>/g, "");
html = html.replace(/<script[^>]*\s+nomodule[^>]*>/g, "");

const result = await beasties.process(html);
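A quick sanity check for the first pattern, which covers the empty-body external form Next.js emits for its polyfill chunk (the sample HTML below is illustrative):

```javascript
// Illustrative input: an external nomodule polyfill script with an empty body.
const sample =
  '<p>a</p><script src="/_next/polyfills.js" nomodule=""></script><p>b</p>';
const stripped = sample.replace(/<script[^>]*\s+nomodule[^>]*><\/script>/g, "");
// stripped === '<p>a</p><p>b</p>'
```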

Lesson learned: Adding a seemingly innocent config file can cause framework-level code generation changes. Always run Lighthouse before and after config changes.


Round 9: Finishing Touches (Score 77-85)

content-visibility for Below-Fold Sections

/* globals.css */
.below-fold {
  content-visibility: auto;
  contain-intrinsic-block-size: auto 600px;
  contain-intrinsic-inline-size: auto 100%;
}

This tells the browser to skip layout and paint for sections not in the viewport. On our title pages (which have reviews, cast lists, FAQs, related titles), this saved significant rendering time.

Pitfall: Initially I used contain-intrinsic-size: 600px without the auto keyword. This caused CLS when the browser replaced the intrinsic size with the actual content height. The auto keyword tells the browser to remember the last rendered size, eliminating the layout shift.

Removing translateY from Animations

Our card entry animations used transform: translateY(20px) → translateY(0). This caused a CLS of 0.122 because elements were initially positioned 20px lower, then snapped up.

Fix: use opacity-only animations with no layout-affecting transforms:

@keyframes card-fade-in {
  from { opacity: 0; }
  to { opacity: 1; }
}

.stagger-fade-in {
  opacity: 0;
  animation: card-fade-in 0.4s ease-out forwards;
}

@media (prefers-reduced-motion: reduce) {
  .stagger-fade-in {
    opacity: 1;
    animation: none;
  }
}

The Full Build Pipeline

Here's the actual VPS rebuild script that runs daily at 06:00 UTC via cron:

#!/bin/bash
# VPS autonomous rebuild — build + convert posters + inline CSS + deploy
set -e

PROJECT_DIR="/var/www/shoof/src-project"
cd "$PROJECT_DIR"
source .env

# 1. Convert new posters to WebP (incremental — skips existing)
node scripts/convert-posters.mjs

# 2. Build all 20K pages
npx next build

# 3. Inline critical CSS + strip nomodule polyfills
node scripts/inline-critical-css.mjs

# 4. Verify critical files exist before deploy
for f in ads.txt robots.txt sitemap.xml index.html; do
  [ ! -f "out/$f" ] && echo "DEPLOY BLOCKED — missing $f" && exit 1
done

# 5. Deploy to nginx document root
rsync -a --delete out/ /var/www/shoof/out/

# 6. Purge Cloudflare cache
curl -s -X POST "https://api.cloudflare.com/client/v4/zones/${CF_ZONE_ID}/purge_cache" \
  -H "Authorization: Bearer ${CF_PURGE_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"purge_everything":true}'

# 7. Notify via Telegram
TOTAL_HTML=$(find /var/www/shoof/out -name 'index.html' | wc -l)
TOTAL_WEBP=$(find /var/www/shoof/out/img -name '*.webp' | wc -l)
curl -s "https://api.telegram.org/bot${TG_TOKEN}/sendMessage" \
  -d chat_id="$TG_CHAT" \
  --data-urlencode "text=Rebuild done: ${TOTAL_HTML} pages, ${TOTAL_WEBP} WebP"

The build takes ~10 minutes on a 2-CPU VPS. Poster conversion is incremental (only new titles), so it adds 1-2 minutes at most.
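For reference, the schedule can be wired with a one-line crontab entry (the script path and log location are assumptions; adjust to your layout):

```cron
# Run the rebuild daily at 06:00 UTC and keep a log of each run
0 6 * * * /var/www/shoof/rebuild.sh >> /var/log/shoof-rebuild.log 2>&1
```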


What Worked vs. What Didn't

Worked

| Optimization                                     | Impact                    |
| ------------------------------------------------ | ------------------------- |
| Interaction-only 3p loading with LCP guard       | +24 points (53→70+)       |
| Critical CSS inlining (beasties)                 | +12 points                |
| Dynamic import for SearchBar + lazy search index | +10 points                |
| WebP posters, same-origin                        | +5 points (LCP)           |
| Font weight reduction (4→3) + preload            | +2-3 points (FCP)         |
| content-visibility on below-fold sections        | +2-3 points (TBT)         |
| Stripping nomodule polyfill chunks               | Fixed 13-point regression |

Didn't Work (or Backfired)

| Attempt                                      | Result                                  |
| -------------------------------------------- | --------------------------------------- |
| setTimeout(loadGTM, 3000)                    | CMP consent banner became LCP (14.1s)   |
| Adding .browserslistrc                       | 112KB nomodule polyfill appeared        |
| translateY in entry animations               | CLS 0.122                               |
| contain-intrinsic-size without auto          | CLS when content rendered               |
| Preloading 4 images with fetchPriority="high" | Bandwidth competition slowed CSS delivery |
| Loading search index on component mount      | 1.15MB blocking main thread             |

Common Pitfalls

1. CMP Can Hijack Your LCP

If you load AdSense (or any consent-required ad network), it may trigger a Consent Management Platform banner. This banner can be a large DOM element that Lighthouse picks as the LCP element. The fix: don't load ad scripts during the Lighthouse measurement window at all.

2. Browserslist + Next.js = Polyfill Surprise

Next.js reads your .browserslistrc (or the browserslist field in package.json) and generates <script nomodule> polyfills accordingly. Even if the polyfill never executes in modern browsers, it still downloads and blocks parsing. For Next.js 14+, just remove the browserslist config entirely; the defaults already target modern browsers.

3. content-visibility Needs auto in contain-intrinsic-size

Without the auto keyword, the browser uses your estimated size until the element renders, then snaps to the real size. With auto, it remembers the last measured size after first render, eliminating CLS on repeat visits.

4. Cross-Origin Images Are Slower Than You Think

DNS + TLS for a third-party image host adds 200-400ms to the LCP image. If your LCP element is an image from an external CDN, consider self-hosting it. Even with Cloudflare in front, same-origin serves faster because there's no additional connection setup.

5. Preloading Too Many Resources Helps Nothing

The browser has limited bandwidth for high-priority requests. Preloading 4 images with fetchPriority="high" means they all compete with CSS and fonts. Preload only the single LCP image and the primary font file.
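In markup terms, that means exactly one high-priority image preload, for the LCP element (the path here is illustrative):

```html
<link rel="preload" as="image" fetchpriority="high"
      href="/img/w342/hero-poster.webp" />
```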


Prioritized Checklist

If you're optimizing a static Next.js site, do these in order:

Quick Wins (1-2 hours, biggest impact)

  • [ ] Move GTM/AdSense to interaction-only loading (not setTimeout)
  • [ ] Add a 2-second LCP guard before loading any third-party scripts
  • [ ] Dynamic import heavy components (ssr: false for search, modals, charts)
  • [ ] Defer large JSON/data fetches to user interaction (focus, click)
  • [ ] Reduce font weights to the minimum you actually use

Medium Effort (half a day)

  • [ ] Inline critical CSS with beasties as a postbuild step
  • [ ] Preload the primary font woff2 file
  • [ ] Reduce image preloads to 1-2 max
  • [ ] Add content-visibility: auto to below-fold sections
  • [ ] Audit animations — remove translateY/translateX from entry animations, use opacity only

High Effort (1-3 days, but worth it)

  • [ ] Convert cross-origin images to WebP, serve from your domain
  • [ ] Set up incremental image conversion in your build pipeline
  • [ ] Strip any <script nomodule> tags in postbuild
  • [ ] Verify no browserslist config is causing unexpected polyfills

Ongoing

  • [ ] Run Lighthouse after every config/dependency change
  • [ ] Monitor CLS — it's easy to introduce and hard to notice visually
  • [ ] Keep third-party scripts out of the critical rendering path permanently

The Mental Model

PageSpeed optimization for static sites comes down to three principles:

  1. Lighthouse doesn't interact. Any script loaded on interaction is invisible to the test. Use this to your advantage for analytics, ads, and heavy UI components.

  2. LCP is the metric that matters most. FCP is the first paint; LCP is when the user sees the main content. Every millisecond between FCP and LCP is wasted on something that isn't your content — third-party scripts, cross-origin images, render-blocking CSS.

  3. Measure, change one thing, measure again. I hit two serious regressions (46 and 71) because I changed multiple things at once. The browserslist regression would have been invisible if I'd also been making other improvements in the same round.

The score range of 77-85 (not a fixed number) is normal for a real site with ads, fonts, and images. The goal isn't 100 — it's making sure your optimizations are stable and your regressions are caught before they ship.


The site is shoofaflam.tv — an Arabic streaming guide that helps users find where to watch any Arabic movie or series. Built with Next.js 16, served from a single Hetzner VPS. If you're building something similar, feel free to reach out.
