Puppeteer is the obvious choice when you first need to convert HTML to an image in Node.js. It's well-documented, flexible, and produces accurate output. Then you try to deploy it.
The binary is 150–300 MB depending on the platform. It fails silently on Vercel. It crashes on underpowered Lambda functions. Cold starts add seconds to the first request. And every few months a new version of Chrome breaks something in your launch flags.
There's a simpler path: offload the render to an API and use Node's built-in fetch to get the image back. No binary dependencies, no process management, no platform restrictions.
Why Puppeteer causes deployment pain
Puppeteer bundles a full Chromium binary. On Linux that binary is around 300 MB after extraction. Three compounding problems:
- Bundle size limits — AWS Lambda has a 250 MB unzipped limit. Puppeteer alone exceeds it. Vercel's serverless functions have similar restrictions.
- Cold starts — Launching a new Chromium process takes 1–3 seconds. That latency appears on every uncached request.
- Missing system libraries — Chromium depends on `libgbm`, `libnss3`, `libatk`, and others that aren't present in minimal Lambda or Cloud Run environments.
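To see what Puppeteer actually adds to your deploy, you can sum its install footprint with a short directory-size script (a sketch using only Node built-ins):

```javascript
import { readdirSync, statSync } from 'node:fs';
import { join } from 'node:path';

// Recursively sum file sizes under a directory, in bytes.
function dirSize(dir) {
  let total = 0;
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    const full = join(dir, entry.name);
    total += entry.isDirectory() ? dirSize(full) : statSync(full).size;
  }
  return total;
}

// e.g. check what the bundled browser costs you:
// console.log((dirSize('node_modules/puppeteer') / 1e6).toFixed(1), 'MB');
```

Run it against `node_modules/puppeteer` (and the cached browser directory) before and after the migration to quantify the difference.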
The alternative: a render API
An HTML-to-image API runs Chromium in a managed pool. Your Node.js code makes an HTTP request and receives binary image data.
| Factor | Puppeteer | Render API |
|---|---|---|
| Deployment size | +150–300 MB | 0 MB added |
| Works on Vercel / Lambda | Rarely | Always |
| Cold start | 1–3 s | None |
| System library deps | Many | None |
| Maintenance | Breaks on Chrome updates | API versioned |
Migrating existing Puppeteer code
Before:
```javascript
import puppeteer from 'puppeteer';

export async function htmlToImage(html, width = 1200, height = 630) {
  const browser = await puppeteer.launch({ args: ['--no-sandbox'] });
  try {
    const page = await browser.newPage();
    await page.setViewport({ width, height });
    await page.setContent(html, { waitUntil: 'networkidle0' });
    return await page.screenshot({ type: 'png' });
  } finally {
    await browser.close(); // close even if rendering throws, or the process leaks
  }
}
```
After:
```javascript
export async function htmlToImage(html, width = 1200, height = 630) {
  const res = await fetch('https://renderpix.dev/v1/render', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Api-Key': process.env.RENDERPIX_API_KEY,
    },
    body: JSON.stringify({ html, width, height, format: 'png' }),
  });
  if (!res.ok) throw new Error(`Render failed: ${res.status} ${await res.text()}`);
  return Buffer.from(await res.arrayBuffer());
}
```
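Any network call can fail transiently (rate limits, brief outages), so batch jobs benefit from retries. A small generic backoff helper, a sketch rather than part of any SDK, keeps the call site clean:

```javascript
// Retry an async operation with exponential backoff.
// Waits baseMs, then 2x, 4x, ... between attempts; rethrows the last error.
async function withRetry(fn, attempts = 3, baseMs = 200) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (i < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, baseMs * 2 ** i));
      }
    }
  }
  throw lastErr;
}

// Usage: const buf = await withRetry(() => htmlToImage(html));
```

Because `htmlToImage` throws on any non-2xx status, the helper retries failed renders without changing the function itself.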
Function signature identical. Return type identical. Calling code unchanged.
Writing to disk, S3, or HTTP response
```javascript
import { writeFile } from 'node:fs/promises';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({}); // region and credentials from the environment
const buf = await htmlToImage(myHtml, 1200, 630);

// Write to disk
await writeFile('output.png', buf);

// Upload to S3
await s3.send(new PutObjectCommand({
  Bucket: 'my-bucket',
  Key: `images/${id}.png`,
  Body: buf,
  ContentType: 'image/png',
}));

// Express
res.set('Content-Type', 'image/png');
res.send(buf);

// Next.js Route Handler
return new Response(buf, { headers: { 'Content-Type': 'image/png' } });
```
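If you upload to S3, deriving the key from the render inputs means identical renders overwrite the same object instead of piling up duplicates. A sketch (the `images/<digest>.<ext>` key shape is an assumption for illustration, not an API requirement):

```javascript
import { createHash } from 'node:crypto';

// Build a stable object key from the render inputs, so the same
// HTML at the same dimensions always maps to the same stored image.
function renderKey(html, width, height, format = 'png') {
  const digest = createHash('sha256')
    .update(JSON.stringify({ html, width, height, format }))
    .digest('hex')
    .slice(0, 16);
  return `images/${digest}.${format}`;
}
```

Check for an existing object under that key before calling the API and you get caching for free.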
TypeScript wrapper
```typescript
interface RenderOptions {
  html: string;
  width?: number;
  height?: number;
  format?: 'png' | 'jpeg' | 'webp';
  quality?: number;
}

export async function render(opts: RenderOptions): Promise<Buffer> {
  const { html, width = 1200, height = 630, format = 'png', ...rest } = opts;
  const res = await fetch('https://renderpix.dev/v1/render', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Api-Key': process.env.RENDERPIX_API_KEY!,
    },
    body: JSON.stringify({ html, width, height, format, ...rest }),
  });
  if (!res.ok) throw new Error(`RenderPix error ${res.status}: ${await res.text()}`);
  return Buffer.from(await res.arrayBuffer());
}
```
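For batch work, say regenerating OG images for every post, fire the requests concurrently but capped, so you stay under the API's rate limit. A minimal limiter in plain JavaScript, assuming no extra dependencies:

```javascript
// Run fn over items with at most `limit` calls in flight at once.
// Results come back in input order, like Promise.all.
async function mapLimit(items, limit, fn) {
  const results = new Array(items.length);
  let next = 0;
  async function worker() {
    while (next < items.length) {
      const i = next++; // safe: single-threaded, no await between read and bump
      results[i] = await fn(items[i], i);
    }
  }
  const workers = Math.max(1, Math.min(limit, items.length));
  await Promise.all(Array.from({ length: workers }, worker));
  return results;
}

// Usage: const buffers = await mapLimit(htmlDocs, 4, (html) => render({ html }));
```

Four workers pulling from a shared index is usually enough; raise the cap only if your plan's rate limit allows it.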
Remove `puppeteer` from `package.json`. Run `npm install`. Watch `node_modules` shrink.
RenderPix has a free tier — replace your first Puppeteer function in under 10 minutes.