Originally published at chudi.dev
It was 2 AM. StatementSync was ready to deploy. I pushed to Vercel and watched the build fail.
```
Error: Cannot find module 'canvas'
    at Function.Module._resolveFilename
```
Canvas? I'm processing PDFs, not drawing graphics. Three hours later, I learned why pdf-parse breaks on serverless.
pdf-parse depends on the canvas module, which requires native bindings unavailable in Lambda and Edge environments. The fix is unpdf—a pure JavaScript PDF parser with zero native dependencies that works on Vercel serverless functions, AWS Lambda, and Cloudflare Workers. Same extraction quality, no build failures.
The Problem
pdf-parse is the go-to library for PDF text extraction in Node.js:
```typescript
import fs from 'fs';
import pdf from 'pdf-parse';

const dataBuffer = fs.readFileSync('statement.pdf');
const data = await pdf(dataBuffer);
console.log(data.text);
```
Works perfectly locally. Crashes spectacularly on Vercel.
Why It Fails
pdf-parse depends on pdfjs-dist, Mozilla's PDF.js port for Node. pdfjs-dist has optional dependencies:
```json
{
  "optionalDependencies": {
    "canvas": "^2.x",
    "node-fetch": "^2.x"
  }
}
```
Canvas is a native module that requires:
- Python
- node-gyp
- C++ build tools
Vercel's serverless runtime doesn't have these. The build either:
- Fails outright with missing module errors
- Succeeds but crashes at runtime with segfaults
Sometimes the build passes but the function crashes when processing PDFs. This is worse—you discover it in production, not deployment.
The Debugging Journey
Attempt 1: Exclude Canvas
"Just mark canvas as external," Stack Overflow said.
```javascript
// next.config.js
module.exports = {
  webpack: (config) => {
    config.externals = [...(config.externals || []), 'canvas'];
    return config;
  },
};
```
Result: Different error.
```
Error: Could not load the "canvas" module
```
pdfjs-dist tries to load canvas at runtime, not just build time.
Attempt 2: Legacy Build
"Use pdf-parse legacy mode," another answer suggested.
```javascript
const pdf = require('pdf-parse/lib/pdf-parse');
```
Result: Still fails. The dependency chain remains.
Attempt 3: pdfjs-dist Directly
"Skip pdf-parse, use pdfjs-dist with worker disabled."
```typescript
import * as pdfjsLib from 'pdfjs-dist';

pdfjsLib.GlobalWorkerOptions.workerSrc = '';
const pdf = await pdfjsLib.getDocument({ data: buffer }).promise;
```
Result: Works locally, memory errors on Vercel.
Vercel functions on the free tier have a 1GB memory limit, and pdfjs-dist's memory usage is unpredictable with large PDFs.
The Solution: unpdf
After three hours, I found unpdf:
```typescript
import { extractText, getDocumentProxy } from 'unpdf';

const pdf = await getDocumentProxy(new Uint8Array(buffer));
const { text } = await extractText(pdf, { mergePages: true });
```
Result: Works. First try.
Why unpdf Works
unpdf is built specifically for serverless:
| Feature | pdf-parse | unpdf |
|---|---|---|
| Native deps | Yes (canvas) | No |
| Vercel compatible | No | Yes |
| Edge runtime | No | Yes |
| Bundle size | Large | Small |
| Memory usage | Unpredictable | Controlled |
unpdf ships a serverless-friendly build of PDF.js with the native canvas dependency stripped out: no build-time compilation, no runtime loading of native modules.
Implementation
Here's the complete pattern for serverless PDF processing:
```typescript
import { extractText, getDocumentProxy } from 'unpdf';

interface Transaction {
  date: string;
  description: string;
  amount: number;
  type: 'debit' | 'credit';
}

async function processPdf(buffer: Buffer): Promise<Transaction[]> {
  // Load the PDF
  const pdf = await getDocumentProxy(new Uint8Array(buffer));

  // Extract text, merged across pages into a single string
  const { text } = await extractText(pdf, { mergePages: true });

  // Parse transactions (pattern-based for bank statements)
  const transactions = parseTransactions(text);

  // Release the document's memory
  pdf.destroy();

  return transactions;
}

function parseTransactions(text: string): Transaction[] {
  // Bank-specific parsing patterns
  const lines = text.split('\n');
  const transactions: Transaction[] = [];

  for (const line of lines) {
    const match = line.match(/(\d{2}\/\d{2})\s+(.+?)\s+(-?\$[\d,]+\.\d{2})/);
    if (match) {
      transactions.push({
        date: match[1],
        description: match[2].trim(),
        amount: parseFloat(match[3].replace(/[$,]/g, '')),
        type: match[3].startsWith('-') ? 'debit' : 'credit',
      });
    }
  }

  return transactions;
}
```
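As a sanity check, here is that regex run against a single sample statement line. The line itself is hypothetical; real formats vary by bank, which is why the parsing stays bank-specific:

```typescript
// Hypothetical statement line in the "MM/DD description amount" shape
// the regex above expects.
const sample = '01/15 COFFEE SHOP DOWNTOWN -$4.50';

const match = sample.match(/(\d{2}\/\d{2})\s+(.+?)\s+(-?\$[\d,]+\.\d{2})/);
if (match) {
  console.log({
    date: match[1],                                      // '01/15'
    description: match[2].trim(),                        // 'COFFEE SHOP DOWNTOWN'
    amount: parseFloat(match[3].replace(/[$,]/g, '')),   // -4.5
    type: match[3].startsWith('-') ? 'debit' : 'credit', // 'debit'
  });
}
```

Note that the lazy `(.+?)` group stops as soon as the amount pattern can match, so trailing columns after the amount would be dropped, not merged into the description.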
Performance
On Vercel's free tier (1GB memory, 10s timeout):
| PDF Size | Processing Time | Memory Used |
|---|---|---|
| 1 page | 1-2 seconds | ~100MB |
| 5 pages | 3-4 seconds | ~200MB |
| 10 pages | 5-6 seconds | ~350MB |
| 20 pages | 8-9 seconds | ~500MB |
Comfortable margins for typical bank statements (1-5 pages).
Always call `pdf.destroy()` after processing. unpdf holds the document in memory until it is explicitly released.
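To make that cleanup hard to forget, the destroy() call can live in a try/finally helper. This is a sketch of my own, not an unpdf API; `Destroyable` is just a name for anything with a destroy() method, which the document object has:

```typescript
interface Destroyable {
  destroy(): void;
}

// Runs fn against the loaded PDF and guarantees destroy() fires,
// even when extraction throws partway through.
async function withPdf<P extends Destroyable, T>(
  pdf: P,
  fn: (pdf: P) => Promise<T>,
): Promise<T> {
  try {
    return await fn(pdf);
  } finally {
    pdf.destroy();
  }
}
```

processPdf above could then run its extraction inside withPdf instead of pairing the load with a manual destroy() call.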
Pattern-Based vs LLM Extraction
For structured documents like bank statements, pattern-based extraction beats LLM extraction:
| Approach | Accuracy | Cost | Speed |
|---|---|---|---|
| Pattern-based | 99% | $0 | 3-5s |
| LLM (GPT-5) | 99.5% | $0.01-0.05 | 10-30s |
| OCR + LLM | 95% | $0.02-0.08 | 15-45s |
For StatementSync processing 1000 statements/month:
- Pattern-based: $0
- LLM: $10-50/month
The 0.5% accuracy difference doesn't justify the cost for this use case. This cost analysis was a key input to the flat-rate vs per-file pricing decision for StatementSync.
When to Use What
Use unpdf when:
- Deploying to Vercel, Netlify, or Cloudflare
- Processing structured documents (statements, invoices)
- Need low memory footprint
- Running on edge runtimes
Use pdf-parse when:
- Running on traditional servers (EC2, DigitalOcean)
- Need advanced PDF features (annotations, forms)
- Have native build tools available
Use LLM extraction when:
- Documents are unstructured or variable
- Accuracy is more important than cost
- Processing low volumes
Setting Up in Next.js
Installing unpdf and configuring Next.js for serverless PDF processing:
```bash
npm install unpdf
```
No additional configuration needed for standard Vercel deployments. If you're using Next.js 14+ with the App Router, create your route handler:
```typescript
// app/api/process/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { extractText, getDocumentProxy } from 'unpdf';

export const runtime = 'nodejs'; // not 'edge'; see the runtime note below

export async function POST(req: NextRequest) {
  const formData = await req.formData();
  const file = formData.get('pdf') as File;

  if (!file || file.type !== 'application/pdf') {
    return NextResponse.json({ error: 'Invalid file' }, { status: 400 });
  }

  const data = new Uint8Array(await file.arrayBuffer());
  const pdf = await getDocumentProxy(data);
  const { text } = await extractText(pdf, { mergePages: true });
  pdf.destroy();

  return NextResponse.json({ text });
}
```
One gotcha: set runtime = 'nodejs', not 'edge'. unpdf itself supports edge runtimes, as the table above notes, but Vercel's edge runtime imposes stricter module constraints, and the Node runtime is what worked here without further debugging.
Handling Edge Cases
Password-Protected PDFs
```typescript
try {
  const pdf = await getDocumentProxy(new Uint8Array(buffer), {
    password: userProvidedPassword, // optional
  });
} catch (err) {
  if (err instanceof Error && err.name === 'PasswordException') {
    return { error: 'PDF is password protected' };
  }
  throw err;
}
```
PasswordException is thrown immediately, before any processing. Always catch it explicitly or you'll get an unhandled rejection in production.
Corrupted or Invalid Files
```typescript
import { extractText, getDocumentProxy } from 'unpdf';

async function safePdfExtract(buffer: Buffer): Promise<string | null> {
  try {
    const pdf = await getDocumentProxy(new Uint8Array(buffer));
    const { text } = await extractText(pdf, { mergePages: true });
    pdf.destroy();
    return text;
  } catch (err) {
    // InvalidPDFException for malformed files,
    // MissingPDFException for empty or non-PDF data
    const e = err as Error;
    console.error('PDF extraction failed:', e.name, e.message);
    return null;
  }
}
```
Return null instead of throwing to let the caller decide whether a failed extraction is a hard error or a skippable item.
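That contract makes batch jobs straightforward: nulls get counted and skipped instead of aborting the run. A sketch; the `extract` parameter stands in for the safePdfExtract helper above so the shape stays clear:

```typescript
// Processes a batch of PDF buffers, skipping files whose extraction
// returned null rather than failing the whole batch.
async function extractBatch(
  buffers: Uint8Array[],
  extract: (buf: Uint8Array) => Promise<string | null>,
): Promise<{ texts: string[]; skipped: number }> {
  const texts: string[] = [];
  let skipped = 0;

  for (const buf of buffers) {
    const text = await extract(buf);
    if (text === null) {
      skipped++; // corrupted or non-PDF input: log upstream, move on
    } else {
      texts.push(text);
    }
  }

  return { texts, skipped };
}
```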
Scanned PDFs (Image-Based)
unpdf extracts embedded text. Scanned documents—where each page is a JPEG embedded in a PDF—return empty strings. Before assuming extraction succeeded, check the output:
```typescript
const { text } = await extractText(pdf, { mergePages: true });

if (text.trim().length < 50) {
  // Likely a scanned document
  return { error: 'Document appears to be scanned. Text extraction not supported.' };
}
```
For scanned documents, you'd need OCR (Tesseract.js or an external API like AWS Textract). That's out of scope for most bank statements—major US banks generate text-based PDFs—but worth detecting gracefully.
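The length check can be factored into a tiny helper. The 50-character threshold is an assumption that happened to work for bank statements, not a universal constant; tune it for your documents:

```typescript
// Heuristic: if extraction produced almost no text, the pages are
// probably images (a scanned document) rather than embedded text.
function looksScanned(extractedText: string, minChars = 50): boolean {
  return extractedText.trim().length < minChars;
}
```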
Testing the Pipeline
Two tests that catch 90% of production issues:
```typescript
// __tests__/pdf-processing.test.ts
import { processPdf } from '../lib/pdf';
import fs from 'fs';

describe('PDF processing', () => {
  it('extracts transactions from Chase statement', async () => {
    const buffer = fs.readFileSync('__tests__/fixtures/chase-sample.pdf');
    const transactions = await processPdf(buffer);

    expect(transactions.length).toBeGreaterThan(0);
    expect(transactions[0]).toMatchObject({
      date: expect.stringMatching(/^\d{2}\/\d{2}$/),
      amount: expect.any(Number),
    });
  });

  it('finds no transactions in a scanned PDF', async () => {
    const buffer = fs.readFileSync('__tests__/fixtures/scanned.pdf');
    const transactions = await processPdf(buffer);

    expect(transactions).toHaveLength(0);
  });
});
```
The fixture files are real PDFs (anonymized). Testing against actual bank statement formats catches the edge cases in date parsing and amount formatting before they hit production.
Verifying Your Setup in Production
Deployment to Vercel can surface issues that local testing misses. Before handling real user data, run three checks.
Memory ceiling: The performance table above shows 20-page PDFs using ~500MB. Vercel's free tier allows 1,024MB per function. Test your worst-case PDF during staging, not production. If you're regularly processing PDFs over 15 pages, bump to Vercel's Pro tier where you can configure function memory up to 3,008MB.
Cold start behavior: Vercel's serverless functions spin down after inactivity. The first PDF request after a cold start takes 2-4x longer than subsequent requests. If your users frequently trigger that first cold request, consider Vercel's Fluid Compute option that keeps functions warm between invocations.
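If cold starts matter for your traffic and Fluid Compute isn't an option, one hedged alternative is a Vercel Cron Job that pings a lightweight route on a schedule to keep the function warm. The /api/health path here is a placeholder, and sub-daily schedules require a paid plan:

```json
{
  "crons": [
    { "path": "/api/health", "schedule": "*/10 * * * *" }
  ]
}
```

Whether the warm-up is worth a cron slot depends entirely on how bursty your traffic is.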
File size limits: Vercel caps request bodies for serverless functions at 4.5MB, and this is a platform limit rather than a Next.js setting, so it can't be raised from next.config.js. A 20-page bank statement PDF typically sits well under 1MB. But if your use case involves scanned PDFs or combined multi-month statements, verify your largest expected file size against this limit before launch; anything bigger needs a different path, such as uploading straight to object storage and handing the function a reference instead of the bytes. Separately, if you use Pages Router API routes, Next's own body parser defaults to 1MB and is configured per route (up to, not past, the platform cap):

```typescript
// pages/api/process.ts (Pages Router only; App Router route handlers
// read the body via req.formData() and don't use this option)
export const config = {
  api: {
    bodyParser: { sizeLimit: '4mb' },
  },
};
```
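A client-side check before upload gives users a clearer error than a failed request. A minimal sketch; the constant mirrors Vercel's default serverless body limit:

```typescript
// Vercel's default serverless request body limit, expressed in bytes.
const MAX_UPLOAD_BYTES = 4.5 * 1024 * 1024;

// Returns an error message for oversized files, or null when the file
// fits under the limit, so the UI can reject before uploading.
function checkUploadSize(bytes: number, limit = MAX_UPLOAD_BYTES): string | null {
  if (bytes <= limit) return null;
  const mb = (bytes / (1024 * 1024)).toFixed(1);
  return `File is ${mb}MB; the upload limit is 4.5MB.`;
}
```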
Running these three checks in staging eliminates the category of production failures that are unrelated to code—infrastructure surprises that happen on first real traffic.
The Lesson
The right library matters more than clever workarounds. I spent three hours trying to make pdf-parse work on serverless; unpdf worked in ten minutes.
If you're building PDF processing for serverless, start with unpdf. Save yourself the 2 AM debugging.
Related: From Pain Point to MVP: StatementSync in One Week | Portfolio: StatementSync