DEV Community

Quicreatdev

Stop Running Puppeteer on Your Main Server: The Serverless Approach to Screenshots

Introduction

We've all been there. You're building a side project—maybe a link preview generator, an SEO tool, or a dashboard—and you think: "I just need to take a quick screenshot of this URL."

So you npm install puppeteer, write 10 lines of code, and it works locally. Great!

Then you deploy it to production (Docker, Ubuntu, or Heroku), and all hell breaks loose.

  • The fonts are broken (rectangles instead of text).
  • The memory usage spikes to 2GB and crashes your server.
  • The target website shows a giant "Accept Cookies" banner covering the content.
  • Half the images are missing because of lazy-loading.

I spent the last month fighting these battles while building a screenshot microservice. Here is what I learned about doing it the hard way, and why I eventually turned it into a dedicated API.

The Trap: "It works on my machine"

Basic Puppeteer is deceptive. Here is the code everyone starts with:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await page.screenshot({ path: 'example.png' });
  await browser.close();
})();


This works fine for example.com. But try running this against a modern Single Page Application (SPA) or a news site, and you will hit three major walls.

Wall #1: The Lazy Loading Problem

Modern web performance relies on lazy loading. Images only load when they enter the viewport. If you take a screenshot immediately after page.goto, you get a page full of placeholders.

The Fix: You need to simulate a user scrolling down, or wait for network activity to settle.

// Waiting for networkidle0 is reliable but SLOW (can take 10s+)
await page.goto(url, { waitUntil: 'networkidle0' });

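The faster alternative is to scroll the page yourself so the lazy-load observers fire before you capture. A minimal sketch — `autoScroll` is a hypothetical helper name, and the step size and interval are arbitrary tuning values:

```javascript
// Scroll to the bottom in steps so lazy-loaded images enter the viewport.
// `page` is a Puppeteer Page instance.
async function autoScroll(page) {
  await page.evaluate(async () => {
    await new Promise((resolve) => {
      let scrolled = 0;
      const step = 400; // pixels per tick
      const timer = setInterval(() => {
        window.scrollBy(0, step);
        scrolled += step;
        if (scrolled >= document.body.scrollHeight) {
          clearInterval(timer);
          resolve();
        }
      }, 100); // give the page time to react between scrolls
    });
  });
}

// Usage inside your Puppeteer script:
//   await page.goto(url, { waitUntil: 'domcontentloaded' });
//   await autoScroll(page);
//   await page.screenshot({ path: 'full.png', fullPage: true });
```

This is usually much quicker than `networkidle0`, at the cost of a little extra code.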

Wall #2: The "Cookie Banner" Apocalypse

In 2026, the web is 50% content and 50% GDPR popups. A screenshot tool that captures the cookie banner is useless.

The Fix: You have to inject CSS or JS to nuke these elements before the shutter clicks.

await page.addStyleTag({
  content: '#onetrust-banner-sdk, .cookie-popup { display: none !important; }'
});


But maintaining a list of selectors for every site on the internet? Impossible.
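One rough heuristic that avoids per-site selector lists is to strip any high-z-index fixed or sticky element before capturing. A sketch only — the z-index threshold is an arbitrary guess, and this can occasionally remove legitimate UI like sticky navbars:

```javascript
// Heuristic overlay removal: cookie banners and consent modals are almost
// always position:fixed (or sticky) with a high z-index.
// `page` is a Puppeteer Page instance.
async function removeOverlays(page) {
  await page.evaluate(() => {
    for (const el of document.querySelectorAll('body *')) {
      const style = window.getComputedStyle(el);
      const isOverlay =
        (style.position === 'fixed' || style.position === 'sticky') &&
        parseInt(style.zIndex, 10) > 100; // arbitrary cutoff
      if (isOverlay) el.remove();
    }
  });
}

// Usage: await removeOverlays(page); before page.screenshot(...)
```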

Wall #3: Server Costs & Zombie Processes

Chromium is heavy. Running it in a standard container requires significant RAM. If your script crashes before browser.close() is called, you are left with "zombie" Chrome processes eating up your CPU until the server dies.
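The standard defence is a `try`/`finally` so the browser is closed even when navigation or the screenshot throws. A sketch with the launcher injected for readability — in a real script you would pass `() => puppeteer.launch()` as the first argument:

```javascript
// Always close the browser, even on errors, so no zombie Chrome survives.
// `launchBrowser` is any function returning a Puppeteer-style Browser.
async function captureSafely(launchBrowser, url) {
  const browser = await launchBrowser();
  try {
    const page = await browser.newPage();
    await page.goto(url, { waitUntil: 'networkidle0' });
    return await page.screenshot();
  } finally {
    // Runs whether goto/screenshot succeeded or threw.
    await browser.close();
  }
}
```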

The Solution: Going Serverless (AWS Lambda)

To solve the crashing and scaling issues, I moved the architecture to AWS Lambda. This ensures that:

  1. Each screenshot gets a fresh, isolated environment.
  2. If it crashes, it doesn't take down my main server.
  3. I only pay when a screenshot is taken.

However, getting Puppeteer on Lambda is tricky (binary sizes, font packages). I used puppeteer-core and @sparticuz/chromium to keep the package size under the 50MB limit.
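Roughly, the handler wiring looks like this — sketched with the two heavy dependencies injected so the shape is readable without the binaries. In a real Lambda you would `require('puppeteer-core')` and `require('@sparticuz/chromium')` at the top and export the handler directly:

```javascript
// Build an AWS Lambda handler around puppeteer-core + @sparticuz/chromium.
// `puppeteer` is the puppeteer-core module; `chromium` is @sparticuz/chromium,
// which exposes args, defaultViewport, executablePath() and headless.
function makeHandler(puppeteer, chromium) {
  return async (event) => {
    const browser = await puppeteer.launch({
      args: chromium.args,
      defaultViewport: chromium.defaultViewport,
      executablePath: await chromium.executablePath(),
      headless: chromium.headless,
    });
    try {
      const page = await browser.newPage();
      await page.goto(event.url, { waitUntil: 'networkidle0' });
      const body = await page.screenshot({ encoding: 'base64' });
      return { statusCode: 200, body };
    } finally {
      await browser.close(); // a fresh sandbox still deserves cleanup
    }
  };
}
```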

Introducing FlashCapture

After refining this architecture to handle ad-blocking, dark mode, and full-page stitching automatically, I realized this was too valuable to keep as a messy internal script.

So, I wrapped it into a clean, public API called FlashCapture.

It handles all the edge cases I mentioned above:

  • Smart Ad-Blocker: Automatically hides banners and trackers.
  • Async Processing: No HTTP timeouts on large pages.
  • Lazy Loading Support: We handle the wait logic.

Trying it out

If you are tired of maintaining your own Puppeteer instance, you can use the API directly via RapidAPI. There is a free tier for developers.

Here is how simple it is compared to maintaining all of that Puppeteer code yourself:

const axios = require('axios');

const options = {
  method: 'POST',
  url: 'https://flashcapture.p.rapidapi.com/capture',
  headers: {
    'content-type': 'application/json',
    'X-RapidAPI-Key': 'YOUR_API_KEY',
    'X-RapidAPI-Host': 'flashcapture.p.rapidapi.com'
  },
  data: {
    url: 'https://www.reddit.com',
    options: {
      fullPage: true,
      darkMode: true, // Automagically renders in dark mode
      width: 1920
    }
  }
};

(async () => {
  try {
    const response = await axios.request(options);
    console.log('Job ID:', response.data.id);
    // Then just poll the /status endpoint to get your image!
  } catch (error) {
    console.error(error);
  }
})();

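The polling step can be sketched like this. Note that the `/status` response fields (`status`, `imageUrl`) are assumptions for illustration, not documented API — check the actual response shape in the RapidAPI console:

```javascript
// Poll a job-status endpoint until the screenshot is ready.
// `getStatus` is any function returning the parsed JSON for a job id,
// e.g. (id) => axios.get(`.../status/${id}`, { headers }).then(r => r.data)
async function waitForScreenshot(getStatus, jobId, { intervalMs = 1000, maxTries = 30 } = {}) {
  for (let i = 0; i < maxTries; i++) {
    const job = await getStatus(jobId);
    if (job.status === 'done') return job.imageUrl; // assumed field names
    if (job.status === 'failed') throw new Error('Screenshot job failed');
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error('Timed out waiting for screenshot');
}
```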

Conclusion

If you are building a production app, think twice before running a headless browser on your primary web server. It's a resource hog that introduces security risks and stability issues.

Whether you build your own microservice on AWS Lambda (like I did initially) or use a managed API like FlashCapture, decoupling this heavy task is the best architectural decision you can make.

👉 Check out FlashCapture on RapidAPI here

Happy coding! 🚀
